Stochastic computing

One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent each number x with a long stream of random bits, each of which is 1 with probability x. One advantage is that computations which require many logic gates to implement in binary can be implemented much more simply, provided the randomness in the input bit streams is uncorrelated; e.g., x*y can be implemented by ANDing the two bit streams together, and (x+y)/2 can be implemented by evenly sampling both inputs: select about half of the bits from x's stream and the other half from y's stream, and concatenate the selected bits (in any order) to produce the output. Another advantage is that this representation is naturally tolerant of noise.
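To make this concrete, here is a minimal Python sketch (the stream length, RNG, and unipolar encoding here are arbitrary illustrative choices, not a description of any particular hardware):

```python
import random

N = 10_000  # stream length; longer streams give more precise estimates

def encode(x, n=N):
    """Unipolar stochastic encoding: each bit is 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

x, y = 0.3, 0.8
sx, sy = encode(x), encode(y)

# Multiplication: AND two independent streams.
# P(a AND b) = P(a) * P(b) when the streams are uncorrelated.
product = [a & b for a, b in zip(sx, sy)]

# Scaled addition: for each output bit, pick the bit from sx or sy
# with probability 1/2 each (a random 2-to-1 multiplexer).
# P(out = 1) = 0.5 * x + 0.5 * y = (x + y) / 2.
scaled_sum = [a if random.random() < 0.5 else b for a, b in zip(sx, sy)]

print(decode(product), "~", x * y)          # roughly 0.24
print(decode(scaled_sum), "~", (x + y) / 2)  # roughly 0.55
```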

Because the circuit tolerates noise, power can be saved: circuit elements can be designed to consume less power at the cost of producing noisier results.

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g., to represent one of 256 values, you need 8 bits in binary but about 256 bits using stochastic computing).
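A quick back-of-the-envelope check of those bit counts (counting only the distinct values a stream of a given length can represent, and ignoring the extra length needed to average out the randomness):

```python
import math

levels = 256
binary_bits = math.ceil(math.log2(levels))  # 8: each of the 2^8 bit patterns is a distinct value
stochastic_bits = levels - 1                # 255: the count of 1s in the stream (0..255) gives 256 levels
print(binary_bits, stochastic_bits)         # 8 255
```

In practice the streams are random, so even longer streams are needed to estimate the encoded value reliably.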

Obviously, this sort of thing is a candidate neural code.

Memory-oriented computing and "From Micro-processors to Nanostores: Rethinking Data-Centric Systems"

I’ve only skimmed this article by Ranganathan, but I find it notable because of the discussion of memory-oriented computing, in which processors are colocated with storage (he uses the word “nanostores”, which additionally implies that the memory is nonvolatile). One of the most important distinctions between neural architecture and present-day computing architecture is that brains appear to be built out of computing elements that do both processing and memory storage, whereas present-day computers have separate memory and CPU components (this separation is a key feature of what is called the “von Neumann” architecture).

Network design algorithm of a slime mold

[The slime mold Physarum polycephalum] “can find the shortest path through a maze (15–17) or connect different arrays of food sources in an efficient manner with low total length… yet short average minimum distance… between pairs of food sources… with a high degree of fault tolerance… to accidental disconnection (11, 18, 19)”

This paper provides a model of the slime mold’s network-construction algorithm.

Bayesian truth serum

Neville told me about this neat article from ’04. It presents a way to offer rewards to people taking a poll so as to motivate them to be honest, with no prior information about what the distribution of correct answers is. Apparently, previous techniques of this kind were based on the idea of rewarding people for agreeing with other people’s answers. The new thing about this technique for calculating the reward is that it gives people an incentive to report their true opinion even if they know they hold a minority viewpoint.

Drazen Prelec. A Bayesian Truth Serum for Subjective Data. Science, 15 October 2004, Vol. 306, No. 5695, pp. 462–466. DOI: 10.1126/science.1102081
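To give a flavor of how such a reward can be computed, here is a rough Python sketch of a Bayesian-Truth-Serum-style score. This is my paraphrase from memory of Prelec’s scoring rule, not code taken from the article, so treat it as an approximation and check the paper for the exact definitions. Each respondent both endorses one answer and predicts how everyone else will answer; the score rewards answers that turn out to be more common than the crowd predicted (“surprisingly common”), plus a term for making accurate predictions.

```python
import math

def bts_scores(endorsements, predictions, alpha=1.0):
    """
    Sketch of a Bayesian-Truth-Serum-style score (paraphrase of Prelec 2004).

    endorsements: endorsements[r] is the index of the answer respondent r chose.
    predictions:  predictions[r][k] is respondent r's predicted fraction of
                  people choosing answer k (each row sums to 1).
    """
    n = len(endorsements)
    m = len(predictions[0])
    eps = 1e-9  # avoid log(0) in this toy version

    # Empirical answer frequencies.
    x_bar = [sum(1 for e in endorsements if e == k) / n for k in range(m)]

    # Geometric mean of the predicted frequencies for each answer.
    y_bar = [math.exp(sum(math.log(predictions[r][k] + eps) for r in range(n)) / n)
             for k in range(m)]

    scores = []
    for r in range(n):
        k = endorsements[r]
        # Information score: reward answers that are more frequent than predicted.
        info = math.log((x_bar[k] + eps) / (y_bar[k] + eps))
        # Prediction score: reward predictions close to the actual frequencies.
        pred = sum(x_bar[j] * math.log((predictions[r][j] + eps) / (x_bar[j] + eps))
                   for j in range(m) if x_bar[j] > 0)
        scores.append(info + alpha * pred)
    return scores

# Tiny made-up example: 4 respondents, 2 possible answers.
endorsements = [0, 0, 1, 0]
predictions = [[0.6, 0.4], [0.7, 0.3], [0.5, 0.5], [0.8, 0.2]]
print(bts_scores(endorsements, predictions))
```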

IBM Cat Brain Simulation Scuffle: Symbolic?

You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.

A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end”, and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”.  One reporter chose to highlight the rivalry as cat versus rat, using the different animal model choice of the two researchers as a theme.  Since then, additional criticisms from Markram have appeared online.

Find out more after the jump.

Frontiers in Neuroscience Journal

The journal Frontiers in Neuroscience, edited by Idan Segev, has made it to Volume 3, Issue 1. Having launched last year at the Society for Neuroscience conference, it’s probably the newest neuroscience-related journal.

I’m a fan of it because it is an open-access journal featuring a “tiered system” and more.  From their website:

The Frontiers Journal Series is not just another journal. It is a new approach to scientific publishing. As service to scientists, it is driven by researchers for researchers but it also serves the interests of the general public. Frontiers disseminates research in a tiered system that begins with original articles submitted to Specialty Journals. It evaluates research truly democratically and objectively based on the reading activity of the scientific communities and the public. And it drives the most outstanding and relevant research up to the next tier journals, the Field Journals.

IARPA and trust detection

Neurodudes reader Jason M. sent me some information about a funding agency, IARPA (Intelligence Advanced Research Projects Activity), that is funding neuroscience-related research. I had never heard of IARPA before, but it has existed since 2006 as something of an intelligence-focused DARPA. Their upcoming funding deadline (Aug 21) is for projects on detecting trust signals between humans.

Just last night, I watched the tense but amazing film The Hurt Locker (don’t let the name dissuade you; see the phenomenal Metacritic rating), which is about a bomb disposal squad during the recent Iraq War. There is one particularly stirring scene with a suicide bomber who claims that he was forced to wear a vest with explosives and doesn’t want to go through with it. In the limited time before the bomb explodes, the difficulty revolves around whether to actually trust the man, and around the challenge of trusting someone when neither party speaks the other’s language. You can certainly at least understand (putting aside the ethics of war itself) why governments are interested in detecting nonverbal trust cues.

Details about the IARPA call for proposals are after the jump.

VS Ramachandran's TED Talk

Although I’ve been a longtime fan of Ramachandran’s excellent book Phantoms in the Brain, this TED talk is like a compressed summary of the highlights of his research. He’s a great speaker, and he covers in 20 minutes my two favorite examples from the book (Capgras delusion and mirror treatment for phantom limb syndrome). Perhaps the best part of the talk is that, after listening to it, I was convinced more than ever of the statistical nature of sensory perception (i.e., the brain attempts to find the most likely explanation for its sensory observations) and the integrative nature of central processing of multiple modalities.
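As a toy illustration of what “statistical” and “integrative” can mean here (a standard textbook cue-combination example, not something taken from the talk itself): if two modalities give independent noisy Gaussian estimates of the same quantity, the maximum-likelihood combination weights each estimate by its reliability, i.e., its inverse variance.

```python
# Toy cue-combination example: two noisy estimates of the same quantity
# (say, object position from vision and from touch), each modeled as
# Gaussian. The maximum-likelihood combined estimate weights each cue
# by its reliability (inverse variance).

def combine(mu_a, var_a, mu_b, var_b):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    mu = w_a * mu_a + w_b * mu_b
    var = 1 / (1 / var_a + 1 / var_b)  # combined estimate is less noisy than either cue alone
    return mu, var

# Vision says 10.0 (precise), touch says 12.0 (noisy):
print(combine(10.0, 1.0, 12.0, 4.0))  # -> (10.4, 0.8), dominated by the more reliable cue
```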

Atul Gawande also recently wrote a New Yorker article about treating phantom itch with Ramachandran’s mirror box. I found this part of Gawande’s article on statistical inference in perception most interesting:

You can get a sense of this from brain-anatomy studies. If visual sensations were primarily received rather than constructed by the brain, you’d expect that most of the fibres going to the brain’s primary visual cortex would come from the retina. Instead, scientists have found that only twenty per cent do; eighty per cent come downward from regions of the brain governing functions like memory. Richard Gregory, a prominent British neuropsychologist, estimates that visual perception is more than ninety per cent memory and less than ten per cent sensory nerve signals. When Oaklander theorized that M.’s itch was endogenous, rather than generated by peripheral nerve signals, she was onto something important.

I’m not familiar with this field, but I wonder if anyone has tried to quantify what fraction of the conscious experience we normally believe to be 100% due to sensory input is actually recall from memory or inference based on past observations. Also, can this percentage adaptively change? Perhaps there are situations where the brain chooses to rely more heavily on memory and other cases where it relies more on primary sensory input.