Could a Neuroscientist Understand a Microprocessor?

Great article where the authors take a microprocessor chip and subject it to the kinds of analyses that we do on brains, to see when they discover true things and when they ‘discover’ misleading results:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

Stochastic computing

One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent them with a long stream of random bits which are 1 with probability x, where x is the number being encoded. An advantage is that computations which require many logic gates to implement in binary can be implemented more simply (assuming that the randomness in the input bit streams is uncorrelated); e.g. x*y can be implemented by ANDing the bit streams together, and (x+y)/2 can be implemented by evenly sampling both of the inputs (select about half the bits from x and the other half from y, and concatenate the selected bits, in any order, to produce the output). Another advantage is that this method is naturally tolerant to noise.

If the circuit is tolerant to noise, power can be saved because circuit elements can be designed to consume less power at the cost of producing noisy results.

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g. to distinguish one of 256 values, you need 8 bits in binary but 256 bits in stochastic computing).
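The operations above are easy to simulate. Here is a minimal sketch (all function names are my own, not from any stochastic-computing library) showing encoding, AND-based multiplication, and sampling-based scaled addition:

```python
import random

def encode(x, n):
    """Encode a value x in [0, 1] as a stream of n random bits,
    each 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(bits):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

def multiply(xs, ys):
    """x*y: AND together two uncorrelated bit streams."""
    return [a & b for a, b in zip(xs, ys)]

def scaled_add(xs, ys):
    """(x+y)/2: at each position, sample a bit from x or y with equal probability."""
    return [a if random.random() < 0.5 else b for a, b in zip(xs, ys)]

random.seed(0)
n = 100_000
x, y = encode(0.6, n), encode(0.3, n)
print(decode(multiply(x, y)))    # ≈ 0.6 * 0.3 = 0.18
print(decode(scaled_add(x, y)))  # ≈ (0.6 + 0.3) / 2 = 0.45
```

Note how the precision of the decoded estimate improves only as the square root of the stream length, which is another way of seeing the exponential cost of precision.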

Obviously, this sort of thing is a candidate neural code.


Topological analysis of population activity in visual cortex

Singh, G., Memoli, F., Ishkhanov, T., Sapiro, G., Carlsson, G., & Ringach, D. L. (2008). Topological analysis of population activity in visual cortex. Journal of Vision, 8(8):11, 1–18, http://journalofvision.org/8/8/11/, doi:10.1167/8.8.11

From sparsely sampled data, we can attempt to estimate some of the topological structure of the underlying space.

Topological structure is here represented by Betti numbers. The paper explains this best:

Consider a world where objects are made of elastic rubber. Two objects are considered equivalent if they can be deformed into each other without tearing the material. If such a transformation between X and Y exists, we say they are topologically equivalent… it is evident that a possible reason for two objects not to be equivalent is that they differ in the number of holes. Thus, simply counting holes can provide a signature for the object at hand. Holes can exist in different dimensions. A one-dimensional hole is exposed when a one-dimensional loop (a closed curve) on the object cannot be deformed into a single point without tearing the loop. If two such loops can be deformed into one another they define the same hole, which should be counted only once. Analogous definitions can be invoked in higher dimensions. For example, a two-dimensional hole is revealed when a closed two-dimensional oriented surface on the object cannot be deformed into a single point.

This notion of counting holes of different dimensions is formalized by the definition of Betti numbers. The Betti numbers of an object X can be arranged in a sequence, b(X) = (b0, b1, b2, …), where b0 represents the number of connected components, b1 represents the number of one-dimensional holes, b2 the number of two-dimensional holes, and so forth. An important property of Betti sequences is that if two objects are topologically equivalent (they can be deformed into each other) they share the same Betti sequence. One must note, as we will shortly illustrate, that the reverse is not always true: two objects can be different but have the same Betti sequence.

A technique is presented for estimating the Betti numbers of sampled data using “Rips complexes” and “barcodes”. To put this technique to use on neural data, the spiking of 5 cells (mostly “complex cells in the superficial layers”) with high spontaneous rates in macaque V1 was recorded. The spikes were binned and a point cloud in 5D was constructed (so I think each point corresponds to one time bin, with its coordinates giving the spike rates of the 5 cells in that bin).
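The point-cloud construction step can be sketched as follows. This is my reading of the procedure, not the paper's code; the spike times, bin width, and recording length here are all made up for illustration:

```python
import numpy as np

# Hypothetical data: one array of spike times (in seconds) per cell,
# over a 60-second recording.
rng = np.random.default_rng(0)
spike_times = [np.sort(rng.uniform(0, 60, size=rng.integers(100, 300)))
               for _ in range(5)]

bin_width = 0.5  # seconds; an assumption, not the paper's value
bins = np.arange(0, 60 + bin_width, bin_width)

# Each time bin becomes one point in 5-D: coordinate i is cell i's
# spike count in that bin.
point_cloud = np.stack(
    [np.histogram(st, bins=bins)[0] for st in spike_times], axis=1)

print(point_cloud.shape)  # (number of time bins, 5)
```

The topological analysis (Rips complexes and barcodes) would then be run on `point_cloud`; that step needs a persistent-homology library and is not shown here.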

This was done in two experimental conditions: when a stimulus was being presented, and when the eyes were occluded. In both cases, the topological structure varied between a circle and a sphere, although the circle structure was found with higher probability in the stimulus condition. The authors present a model in which a circular structure is generated “if cortical activity is dominated by neuronal responses to stimulus orientation”, and note that “a toroidal representation may arise from a neuronal population responding to two circular variables, such as orientation and color hue”. Note that a torus wasn’t actually observed in the data; a circle and a sphere were. In the conclusions the authors speculate about what could have caused the sphere.

The authors conclude that “both the data for spontaneous and driven conditions have similar topological structures, with the signatures of the circle and the sphere dominating the results”.

ConnectomeViewer – Multi-Modal Multi-Level Network and Neuroimaging Visualization and Analysis

Two neat tools concerned with the “connectome” (i.e. the pattern of connections in the nervous system):

Semantic wiki:
http://www.connectome.ch/wiki/Main_Page

Desktop viewer:
http://connectomeviewer.org/viewer “Multi-Modal Multi-Level Network and Neuroimaging Visualization and Analysis” (screencasts)

Over time, distribution of shot lengths in movies has moved closer to pink noise

The statistics of shot durations in 150 films from 1935 to 2005 were analyzed. From about 1970 to the present, the power spectrum of shot durations in individual films has tended to become more like pink noise (power ~ 1/f). Also, autocorrelation analysis shows that the lengths of nearby shots have become more and more correlated.
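This kind of analysis is straightforward to reproduce in outline. A minimal sketch, using a made-up shot-length sequence rather than real film data: compute the periodogram of the sequence and fit a line in log-log space, where a slope near -1 would indicate pink noise and a slope near 0 white noise.

```python
import numpy as np

# Hypothetical sequence of shot durations (seconds) for one film;
# independent draws, so we expect white noise, not pink.
rng = np.random.default_rng(1)
shot_lengths = rng.lognormal(mean=1.5, sigma=0.6, size=512)

# Power spectrum of the mean-subtracted shot-length sequence.
x = shot_lengths - shot_lengths.mean()
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x))  # cycles per shot

# Fit log-power vs. log-frequency (skipping the zero-frequency bin).
# Pink noise (power ~ 1/f) would give a slope near -1.
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print(slope)
```

For the independent draws above the fitted slope comes out near 0; the paper's claim is that for post-1970 films the analogous slope drifts toward -1.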


IBM Cat Brain Simulation Scuffle: Symbolic?

You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.

A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end”, and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, using the different animal model choices of the two researchers as a theme. Since then, additional criticisms from Markram have appeared online.


Crowdsourcing the Brain with the Whole Brain Catalog

A very cool article from the Voice of San Diego on a new open-source, online system to crowdsource the assembly of neuroscience data. From the article:

Traditionally, the study of the brain was organized somewhat like an archipelago. Neuroscientists would inhabit their own island or peninsula of the brain, and see little reason to venture elsewhere.

Molecular neuroscientists, who study how DNA and RNA function in the brain, didn’t share their work with cognitive specialists who study how psychological and cognitive functions are produced by the brain, for example.

But there has been an awakening to the idea that brains of humans and mammals should be studied like the complex, and interrelated systems that they are. Neuroscientists realized that they had to start collaborating across disciplines and sharing their data if they wanted to make advances in their own field.

[…]

Ellisman and his UCSD colleagues have devised a solution: crowdsource a brain. And this week they unveiled their years-long project — the Whole Brain Catalog — at the annual convention of the Society for Neuroscience, the largest gathering of brain experts in the world.
