Stochastic computing

One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent each number x with a long stream of random bits, each of which is 1 with probability x. One advantage is that computations which require many logic gates to implement in binary can be implemented much more simply, provided the input bit streams are uncorrelated: for example, x*y can be computed by ANDing the two streams together, and (x+y)/2 can be computed by evenly sampling both inputs (select about half of the bits from x and the other half from y, and concatenate the selected bits in any order to produce the output stream). Another advantage is that this method is naturally tolerant of noise.
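To make those operations concrete, here is a rough Python sketch (my own toy illustration; the function names and stream length are arbitrary, not taken from any referenced work):

```python
import random

def encode(x, n_bits):
    """Encode a value x in [0, 1] as n_bits random bits, each 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n_bits)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

def multiply(a, b):
    """x*y: AND two independent streams together, bit by bit."""
    return [p & q for p, q in zip(a, b)]

def scaled_add(a, b):
    """(x+y)/2: pick each output bit from a or b with equal probability."""
    return [p if random.random() < 0.5 else q for p, q in zip(a, b)]

n = 10_000
x, y = encode(0.3, n), encode(0.8, n)
print(decode(multiply(x, y)))    # ~0.24
print(decode(scaled_add(x, y)))  # ~0.55
```

Longer streams give tighter estimates, which leads directly to the precision issue below.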

If the circuit is tolerant to noise, power can be saved because circuit elements can be designed to consume less power at the cost of producing noisy results.

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g., to distinguish one of 256 values, you need 8 bits in binary but on the order of 256 bits using stochastic computing).
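For a sense of why the streams have to be so long (again just a toy sketch, with parameters I picked arbitrarily): because the stream is random, the decoding error only shrinks like 1/sqrt(N), so each extra bit of precision roughly quadruples the required stream length.

```python
import random

def estimate_error(x, n_bits, trials=200):
    """Average absolute error when decoding a value x from streams of length n_bits."""
    errs = []
    for _ in range(trials):
        stream = [1 if random.random() < x else 0 for _ in range(n_bits)]
        errs.append(abs(sum(stream) / n_bits - x))
    return sum(errs) / trials

for n in (256, 1024, 4096):                      # quadrupling the stream length...
    print(n, round(estimate_error(0.5, n), 4))   # ...roughly halves the decoding error
```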

Obviously, this sort of thing is a candidate neural code.


IBM Cat Brain Simulation Scuffle: Symbolic?

You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.

A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end” and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, using the two researchers’ different choices of animal model as a theme. Since then, additional criticisms from Markram have appeared online.

Find out more after the jump.


Crowdsourcing the Brain with the Whole Brain Catalog

A very cool article from the Voice of San Diego on a new open-source, online system for crowdsourcing the assembly of neuroscience data. From the article:

Traditionally, the study of the brain was organized somewhat like an archipelago. Neuroscientists would inhabit their own island or peninsula of the brain, and see little reason to venture elsewhere.

Molecular neuroscientists, who study how DNA and RNA function in the brain, didn’t share their work with cognitive specialists who study how psychological and cognitive functions are produced by the brain, for example.

But there has been an awakening to the idea that brains of humans and mammals should be studied like the complex, and interrelated systems that they are. Neuroscientists realized that they had to start collaborating across disciplines and sharing their data if they wanted to make advances in their own field.

[…]

Ellisman and his UCSD colleagues have devised a solution: crowdsource a brain. And this week they unveiled their years-long project — the Whole Brain Catalog — at the annual convention of the Society for Neuroscience, the largest gathering of brain experts in the world.


Henry Markram on TED – video online

We had read that Dr. Henry Markram of the Blue Brain Project had given a talk at TED (Technology, Entertainment, Design), but the video wasn’t released until this month. The talk is geared towards a general audience rather than getting into the specific details of the Blue Brain Project, as he has in previous talks. It is engaging and includes many suggestions about the future of neuroscience and AI.

Watch it online at the TED website.

Frontiers in Neuroscience Journal

The journal Frontiers in Neuroscience, edited by Idan Segev, has made it to Volume 3, Issue 1. Having launched last year at the Society for Neuroscience conference, it’s probably the newest neuroscience-related journal.

I’m a fan of it because it is an open-access journal featuring a “tiered system” and more.  From their website:

The Frontiers Journal Series is not just another journal. It is a new approach to scientific publishing. As service to scientists, it is driven by researchers for researchers but it also serves the interests of the general public. Frontiers disseminates research in a tiered system that begins with original articles submitted to Specialty Journals. It evaluates research truly democratically and objectively based on the reading activity of the scientific communities and the public. And it drives the most outstanding and relevant research up to the next tier journals, the Field Journals.


Theory rising

Although it’s a few months old, Larry Abbott has an excellent article in Neuron on the recent (last 20 years) contributions of theoretical neuroscience. (He came by MIT last week to give a talk and that’s when I found out about the article.) It’s a review that is not too long and provides a good overview with both sufficient (though not overwhelming) detail and original perspective. It’s rare to find a short piece that is so informative. (And for a more experimentally-oriented review with an eye toward the future, see Rafael Yuste’s take on the grand challenges.)

Click through for some of my favorite passages from the Abbott piece.

NSF/EFRI neuro grants


NSF’s Emerging Frontiers in Research and Innovation (EFRI) office funded 4 very futuristic neuroengineering grants.

  1. Deep learning in mammalian cortex
  2. Studying neural networks in vitro with an innovative patch clamp array
  3. Determining how the brain controls the hand for robotics
  4. In vitro power grid simulation using real neurons

Disclaimer: I was involved with the second proposal on this page.

Virtual Neurorobotics


Researchers at the University of Nevada, Reno have an interesting and ambitious set-up for doing research in AI that they describe in a recent paper.

From the paper:

We define virtual neurorobotics as follows: a computer-facilitated behavioral loop wherein a human interacts with a projected robot that meets five criteria: (1) the robot is sufficiently embodied for the human to tentatively accept the robot as a social partner, (2) the loop operates in real time, with no pre-specified parcellation into receptive and responsive time windows, (3) the cognitive control is a neuromorphic brain emulation incorporating realistic neuronal dynamics whose time constants reflect synaptic activation and learning, membrane and circuitry properties, and (4) the neuromorphic architecture is expandable to progressively larger scale and complexity to track brain development, (5) the neuromorphic architecture can potentially provide circuitry underlying intrinsic motivation and intentionality, which physiologically is best described as “emotional” rather than rule-based drive.

What’s interesting to me about this is the combination of an embodied robot in a virtual world with a neurally inspired controller for that robot. While there are pros and cons to embodiment in a virtual world (some of which have been touched on here before), I think that if your priority is closing the loop between embodiment and research on neural systems, the importance of this kind of approach cannot be ignored.
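As a toy sketch of what such a loop might look like in code (my own drastically simplified illustration, not the system described in the paper): a virtual robot senses its distance to a target, a single leaky integrate-and-fire unit stands in for the neuromorphic controller, and the unit's spikes drive the robot's next action.

```python
class LeakyIntegrator:
    """One leaky integrate-and-fire unit standing in for the neural controller."""
    def __init__(self, tau=0.9, threshold=1.0):
        self.v = 0.0
        self.tau = tau
        self.threshold = threshold

    def step(self, input_current):
        self.v = self.tau * self.v + input_current
        if self.v >= self.threshold:
            self.v = 0.0
            return 1  # spike
        return 0

class VirtualRobot:
    """A 1-D 'robot' that senses its distance to a target and can step toward it."""
    def __init__(self, position=0.0, target=5.0):
        self.position = position
        self.target = target

    def sense(self):
        return self.target - self.position  # signed distance to the target

    def act(self, spike):
        if spike:
            self.position += 0.5  # move toward the target whenever the controller spikes

# Closed sensory-motor loop: sense -> neural controller -> act, repeated in (simulated) real time.
robot = VirtualRobot()
neuron = LeakyIntegrator()
for t in range(40):
    sensed = robot.sense()
    spike = neuron.step(0.3 * max(sensed, 0.0))  # sensory drive proportional to distance
    robot.act(spike)
print(round(robot.position, 2))  # the robot should have approached the target
```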

Best Way To Describe Neuron Shape?

Standardizing Neuronal Morphology Models

Neurons come in many shapes and sizes. Frequently, the shape of a neuron is characteristic of its type. Several theoretical papers have demonstrated that the shape of a neuron can crucially determine its pattern of activity, independently of other factors (Mainen & Sejnowski, 1996, for example). Several resources on the web, such as neuromorpho.org and the Cell Centered Database, are dedicated to maintaining repositories of different neuronal shapes (also known as morphologies).

Any computer scientist worth their salt, noticing this trend, is tempted to say: if neuronal shape is so important, maybe we ought to have good data standards to describe it. That’s just what a paper last year did. It surveyed the popular data standards for morphology modeling, primarily in the NEURON and GENESIS simulation packages. The result is a data standard called MorphML, which is part of a larger effort called NeuroML.
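For a sense of what "neuronal shape as data" looks like in practice, here is a toy Python sketch (mine, not the MorphML schema itself) of the tree-of-samples representation used by SWC files, the format that repositories like neuromorpho.org serve: each sample is a 3-D point with a radius, a type code, and a pointer to its parent.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    index: int
    kind: int      # SWC type codes: 1 = soma, 2 = axon, 3 = basal dendrite, 4 = apical dendrite
    x: float
    y: float
    z: float
    radius: float
    parent: int    # -1 marks the root of the tree

def parse_swc(text):
    """Parse SWC-style lines (index type x y z radius parent) into Samples keyed by index."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        i, k, x, y, z, r, p = line.split()
        samples[int(i)] = Sample(int(i), int(k), float(x), float(y), float(z), float(r), int(p))
    return samples

toy_neuron = """
# index type x y z radius parent
1 1 0.0 0.0 0.0 10.0 -1
2 3 0.0 15.0 0.0 1.0 1
3 3 0.0 30.0 0.0 0.8 2
"""
tree = parse_swc(toy_neuron)
root = [s.index for s in tree.values() if s.parent == -1][0]
print(len(tree), "samples; root is sample", root)
```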

Neuronal shape is a weird data type for the computer science world, but I think an incredibly important and fundamental one for deeply coping with the complexity of real brain tissue. It seems to me that many areas of neuroscience research could benefit from the construction of more explicit models of the circuits they study.