Stochastic computing

One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent each number with a long stream of random bits, each of which is 1 with probability x, where x is the number being encoded. An advantage is that computations which require many logic gates to implement conventionally can be implemented more simply (assuming that the input bit streams are uncorrelated); e.g., x*y can be implemented by ANDing the two bit streams together, and (x+y)/2 can be implemented by evenly sampling both inputs: select about half the bits from x and the other half from y, and concatenate the selected bits (in any order) to produce the output. Another advantage is that this method is naturally tolerant of noise.
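
Here is a minimal Python sketch of these two operations (the helper names, the stream length, and the use of a per-bit fair coin as the "even sampling" selector are my own choices for illustration):

```python
import random

def encode(x, n):
    """Encode a value x in [0, 1] as n random bits, each 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(bits):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

def multiply(xs, ys):
    """x * y: AND the two (uncorrelated) streams together bitwise."""
    return [a & b for a, b in zip(xs, ys)]

def scaled_add(xs, ys):
    """(x + y) / 2: at each position, take the bit from one stream or the
    other with equal probability (a multiplexer with a fair coin as select)."""
    return [a if random.random() < 0.5 else b for a, b in zip(xs, ys)]

n = 100_000
xs, ys = encode(0.3, n), encode(0.8, n)
print(decode(multiply(xs, ys)))    # ~0.24
print(decode(scaled_add(xs, ys)))  # ~0.55
```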

If the circuit is tolerant to noise, power can be saved because circuit elements can be designed to consume less power at the cost of producing noisy results.
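
Continuing the sketch above, one way to see the noise tolerance: flipping a small fraction of the bits shifts the decoded value only slightly, because no single bit is more significant than any other (whereas in binary, a single flipped high-order bit can change a value by half its range). The 1% flip rate here is an arbitrary choice.

```python
def flip_noise(bits, p):
    """Flip each bit independently with probability p (simulated circuit noise)."""
    return [b ^ (random.random() < p) for b in bits]

# Reusing encode/decode from the sketch above:
stream = encode(0.3, 100_000)
print(decode(flip_noise(stream, 0.01)))  # ~0.304: 1% bit flips shift the value by only ~0.004
```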

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g., to represent one of 256 values, you need 8 bits in binary but 256 bits using stochastic computing).

Obviously, this sort of thing is a candidate neural code.

Memory-oriented computing and "From Micro-processors to Nanostores: Rethinking Data-Centric Systems"

I’ve only skimmed this article by Ranganathan, but I find it notable because of the discussion of memory-oriented computing, in which processors are colocated with storage (he uses the word “nanostores”, which additionally implies that the memory is nonvolatile). One of the most important distinctions between neural architecture and present-day computing architecture is that brains appear to be built out of computing elements that do both processing and memory storage, whereas present-day computers have separate memory and CPU components (this separation is a key feature of what is called the “von Neumann” architecture).

Network design algorithm of a slime mold

[The slime mold Physarum polycephalum] “can find the shortest path through a maze (15–17) or connect different arrays of food sources in an efficient manner with low total length… yet short average minimum distance… between pairs of food sources… with a high degree of fault tolerance… to accidental disconnection (11, 18, 19)”

This paper provides a model of the slime mold’s network construction algorithm.
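
For intuition, here is a minimal Python sketch of the current-reinforcement ("Physarum solver") style of dynamics this line of work describes: each tube's conductivity is reinforced in proportion to the flux through it and decays otherwise, so short, heavily used routes survive. The toy graph, parameter values, and the linear reinforcement rule dD/dt = |Q| - D are my own illustrative choices, not the paper's exact model.

```python
import numpy as np

# Toy graph: two routes from node 0 to node 3.
# Route 0-1-3 has total length 2; route 0-2-3 has total length 3.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 2.0)]  # (i, j, length)
n_nodes, source, sink = 4, 0, 3
D = np.ones(len(edges))  # tube conductivities

for _ in range(200):
    # Solve Kirchhoff's equations for node pressures, injecting unit flow
    # at the source; the sink is held at pressure 0.
    A = np.zeros((n_nodes, n_nodes))
    b = np.zeros(n_nodes)
    b[source] = 1.0
    for k, (i, j, L) in enumerate(edges):
        g = D[k] / L
        A[i, i] += g; A[j, j] += g
        A[i, j] -= g; A[j, i] -= g
    A[sink, :] = 0.0
    A[sink, sink] = 1.0  # ground the sink
    p = np.linalg.solve(A, b)
    # Flux through each tube, then reinforce conductivity toward |Q|.
    Q = np.array([D[k] / L * (p[i] - p[j]) for k, (i, j, L) in enumerate(edges)])
    D += 0.1 * (np.abs(Q) - D)  # Euler step of dD/dt = |Q| - D

print(np.round(D, 3))  # conductivity survives only on the shorter 0-1-3 path
```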

Bayesian truth serum

Neville told me about this neat article from ’04. It presents a way to offer rewards to people taking a poll so as to motivate them to be honest, with no prior information about the distribution of answers required. Apparently, previous such techniques were based on the idea of rewarding people for agreeing with other people’s answers. The new thing about this technique for calculating the reward is that it gives people an incentive to report their true opinion even if they know that they hold a minority viewpoint.

Drazen Prelec. A Bayesian Truth Serum for Subjective Data. Science 15 October 2004: Vol. 306, no. 5695, pp. 462–466. DOI: 10.1126/science.1102081
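
As a rough illustration of the mechanism, here is a numpy sketch of the scoring rule as I read it from the paper: each respondent gives an answer plus a prediction of how the population will answer, and the score rewards answers that turn out to be more common than the (geometric-mean) collective prediction, plus a term for the accuracy of the respondent's own prediction. The function name, the epsilon smoothing, and the tiny example are mine; consult the paper for the exact formulation.

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian-truth-serum-style scores.

    answers:     length-R array of chosen answer indices (0..K-1)
    predictions: R x K array; row r is respondent r's predicted
                 distribution of answers over the population
    """
    R, K = predictions.shape
    eps = 1e-9  # avoid log(0) for empty categories
    x_bar = np.bincount(answers, minlength=K) / R + eps      # actual answer frequencies
    y_bar = np.exp(np.log(predictions + eps).mean(axis=0))   # geometric mean of predictions
    info = np.log(x_bar[answers] / y_bar[answers])           # "surprisingly common" bonus
    pred = alpha * (x_bar * np.log((predictions + eps) / x_bar)).sum(axis=1)
    return info + pred                                       # higher is better

# Tiny made-up example: 5 respondents, a binary question.
answers = np.array([0, 0, 0, 1, 1])
predictions = np.array([[0.8, 0.2], [0.7, 0.3], [0.9, 0.1],
                        [0.4, 0.6], [0.5, 0.5]])
print(np.round(bts_scores(answers, predictions), 3))
```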

IBM Cat Brain Simulation Scuffle: Symbolic?

You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.

A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end” and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, making a theme of the two researchers’ different choices of animal model. Since then, additional criticisms from Markram have appeared online.

Find out more after the jump.

Frontiers in Neuroscience Journal

The journal Frontiers in Neuroscience, edited by Idan Segev, has made it to Volume 3, issue 1. Having launched last year at the Society for Neuroscience conference, it’s probably the newest neuroscience-related journal.

I’m a fan of it because it is an open-access journal featuring a “tiered system” and more.  From their website:

The Frontiers Journal Series is not just another journal. It is a new approach to scientific publishing. As service to scientists, it is driven by researchers for researchers but it also serves the interests of the general public. Frontiers disseminates research in a tiered system that begins with original articles submitted to Specialty Journals. It evaluates research truly democratically and objectively based on the reading activity of the scientific communities and the public. And it drives the most outstanding and relevant research up to the next tier journals, the Field Journals.
