Stochastic computing

One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding them in binary, represent each number x with a long stream of random bits, each of which is 1 with probability x. One advantage is that computations which require many logic gates to implement in binary can be implemented much more simply, provided the randomness in the input bit streams is uncorrelated: x*y can be implemented by ANDing the two bit streams together, and (x+y)/2 can be implemented by evenly sampling both inputs (select about half the bits from x and the other half from y, and concatenate the selected bits, in any order, to produce the output). Another advantage is that this representation is naturally tolerant of noise.
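As a concrete illustration, here is a minimal Python sketch of the idea as a software simulation (the function names encode, decode, multiply, and scaled_add are my own; real stochastic circuits would use AND gates and multiplexers rather than code):

```python
import random

def encode(x, n):
    """Encode x in [0, 1] as n random bits, each 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(bits):
    """Estimate the encoded value as the fraction of 1 bits."""
    return sum(bits) / len(bits)

def multiply(xs, ys):
    """x * y: bitwise AND of two uncorrelated streams."""
    return [a & b for a, b in zip(xs, ys)]

def scaled_add(xs, ys):
    """(x + y) / 2: a random multiplexer that picks each output bit
    from one input stream or the other with equal probability."""
    return [a if random.random() < 0.5 else b for a, b in zip(xs, ys)]

n = 100_000
xs, ys = encode(0.3, n), encode(0.8, n)
print(decode(multiply(xs, ys)))    # ~0.24  (= 0.3 * 0.8)
print(decode(scaled_add(xs, ys)))  # ~0.55  (= (0.3 + 0.8) / 2)
```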

Because the representation tolerates noise, power can be saved: circuit elements can be designed to consume less power at the cost of producing noisier results.

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g., to represent one of 256 values you need 8 bits in binary but 256 bits in a stochastic stream).
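A rough back-of-the-envelope sketch of that scaling, under the simple assumption that the stream must contain at least as many bits as there are distinguishable levels:

```python
# Bits needed to distinguish 2**k levels:
#   binary:     k bits
#   stochastic: a stream of at least 2**k bits
for k in (4, 8, 16):
    print(f"{2**k:>6} levels: {k:>2} bits in binary vs a stream of >= {2**k} bits")
```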

Obviously, this sort of encoding is a candidate neural code.



IBM Cat Brain Simulation Scuffle: Symbolic?

You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.

A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end” and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, using the two researchers’ different choices of animal model as a theme. Since then, additional criticisms from Markram have appeared online.

Find out more after the jump.


Crowdsourcing the Brain with the Whole Brain Catalog

A very cool article from the Voice of San Diego on a new open-source, online system to crowdsource the assembly of neuroscience data. From the article:

Traditionally, the study of the brain was organized somewhat like an archipelago. Neuroscientists would inhabit their own island or peninsula of the brain, and see little reason to venture elsewhere.

Molecular neuroscientists, who study how DNA and RNA function in the brain, didn’t share their work with cognitive specialists who study how psychological and cognitive functions are produced by the brain, for example.

But there has been an awakening to the idea that brains of humans and mammals should be studied like the complex, and interrelated systems that they are. Neuroscientists realized that they had to start collaborating across disciplines and sharing their data if they wanted to make advances in their own field.

[…]

Ellisman and his UCSD colleagues have devised a solution: crowdsource a brain. And this week they unveiled their years-long project — the Whole Brain Catalog — at the annual convention of the Society for Neuroscience, the largest gathering of brain experts in the world.


Henry Markram on TED – video online

We had read that Dr. Henry Markram of the Blue Brain project had given a talk at TED (Technology, Entertainment, Design), but the video wasn’t released until this month. The talk is geared towards a general audience rather than getting into the specific details of the Blue Brain project, as his previous talks have. It is engaging and includes many suggestions about the future of neuroscience and AI.

Watch it online at the TED website.

Frontiers in Neuroscience Journal

The journal Frontiers in Neuroscience, edited by Idan Segev, has made it to Volume 3, Issue 1. Having launched last year at the Society for Neuroscience conference, it’s probably the newest neuroscience-related journal.

I’m a fan of it because it is an open-access journal featuring a “tiered system” and more.  From their website:

The Frontiers Journal Series is not just another journal. It is a new approach to scientific publishing. As service to scientists, it is driven by researchers for researchers but it also serves the interests of the general public. Frontiers disseminates research in a tiered system that begins with original articles submitted to Specialty Journals. It evaluates research truly democratically and objectively based on the reading activity of the scientific communities and the public. And it drives the most outstanding and relevant research up to the next tier journals, the Field Journals.


Theory rising

Although it’s a few months old, Larry Abbott’s article in Neuron on the recent (last 20 years) contributions of theoretical neuroscience is excellent. (He came by MIT last week to give a talk, and that’s when I found out about the article.) It’s a review that is not too long and provides a good overview with both sufficient (though not overwhelming) detail and original perspective. It’s rare to find a short piece that is so informative. (And for a more experimentally-oriented review with an eye toward the future, see Rafael Yuste’s take on the grand challenges.)

Click on for some of my favorite passages from the Abbott piece.