# Could a Neuroscientist Understand a Microprocessor?

A great article in which the authors take a microprocessor chip and subject it to the kinds of analyses that we do on brains, to see when those analyses discover true things and when they ‘discover’ misleading results:

http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

# Stochastic computing

One method of computing requires that all numbers be real values between 0 and 1; then, instead of encoding these numbers into bit streams using binary, each number x is represented by a long stream of random bits, each of which is 1 with probability x. One advantage is that computations which require many logic gates in binary can be implemented much more simply (assuming that the randomness in the input bit streams is uncorrelated); e.g. x*y can be implemented by ANDing the two bit streams together, and (x+y)/2 can be implemented by evenly sampling both inputs: select about half the bits from x and the other half from y, and concatenate the selected bits (in any order) to produce the output. Another advantage is that this method is naturally tolerant of noise.
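As a toy sketch of the two operations above (my own illustration; the stream length and input values are arbitrary choices, not from any particular hardware design):

```python
import random

def encode(x, n):
    """Encode a value in [0, 1] as n random bits, each 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(bits):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

def multiply(a, b):
    """x*y: AND the two (independent) bit streams elementwise."""
    return [p & q for p, q in zip(a, b)]

def average(a, b):
    """(x+y)/2: for each position, take the bit from a or b with equal probability."""
    return [p if random.random() < 0.5 else q for p, q in zip(a, b)]

random.seed(0)
n = 100_000
x, y = encode(0.5, n), encode(0.8, n)
print(decode(multiply(x, y)))  # close to 0.5 * 0.8 = 0.4
print(decode(average(x, y)))   # close to (0.5 + 0.8) / 2 = 0.65
```

Note that the AND trick depends on the independence assumption: ANDing a stream with itself would give x, not x².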

Because the circuit is tolerant of noise, power can be saved: circuit elements can be designed to consume less power at the cost of producing noisier results.

A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g. to represent one of 256 values you need 8 bits in binary, but 256 bits in stochastic computing).

Obviously, this sort of thing is a candidate neural code.

# Topological analysis of population activity in visual cortex

Singh, G., Memoli, F., Ishkhanov, T., Sapiro, G., Carlsson, G., & Ringach, D. L. (2008). Topological analysis of population activity in visual cortex. Journal of Vision, 8(8):11, 1–18, http://journalofvision.org/8/8/11/, doi:10.1167/8.8.11

From sparsely sampled data, we can attempt to estimate some of the topological structure of the underlying space.

Topological structure is here represented by Betti numbers. The paper explains this best:

> Consider a world where objects are made of elastic rubber. Two objects are considered equivalent if they can be deformed into each other without tearing the material. If such a transformation between X and Y exists, we say they are topologically equivalent… it is evident that a possible reason for two objects not to be equivalent is that they differ in the number of holes. Thus, simply counting holes can provide a signature for the object at hand. Holes can exist in different dimensions. A one-dimensional hole is exposed when a one-dimensional loop (a closed curve) on the object cannot be deformed into a single point without tearing the loop. If two such loops can be deformed into one another they define the same hole, which should be counted only once. Analogous definitions can be invoked in higher dimensions. For example, a two-dimensional hole is revealed when a closed two-dimensional oriented surface on the object cannot be deformed into a single point.
>
> This notion of counting holes of different dimensions is formalized by the definition of Betti numbers. The Betti numbers of an object X can be arranged in a sequence, b(X) = (b0, b1, b2, …), where b0 represents the number of connected components, b1 represents the number of one-dimensional holes, b2 the number of two-dimensional holes, and so forth. An important property of Betti sequences is that if two objects are topologically equivalent (they can be deformed into each other) they share the same Betti sequence. One must note, as we will shortly illustrate, that the reverse is not always true: two objects can be different but have the same Betti sequence.
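As a minimal illustration of Betti numbers (my own toy example, not the paper’s method, which uses persistent homology), for a graph — the simplest kind of simplicial complex — b0 is the number of connected components and b1 follows from the Euler characteristic, χ = V − E = b0 − b1:

```python
def betti_graph(vertices, edges):
    """Betti numbers (b0, b1) of a graph viewed as a 1-D simplicial complex.
    b0 = number of connected components; b1 = |E| - |V| + b0."""
    # Union-find to count connected components.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    b0 = len({find(v) for v in vertices})
    b1 = len(edges) - len(vertices) + b0
    return b0, b1

# A 4-vertex cycle: one component, one 1-D hole -- the same Betti
# signature (b0=1, b1=1) as a circle.
print(betti_graph([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (1, 1)
```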

A technique is presented for estimating the Betti numbers of sampled data using “Rips complexes” and “barcodes”. To put this technique to use on neural data, the spiking of 5 cells (mostly “complex cells in the superficial layers”) with high spontaneous rates was recorded in macaque V1. The spikes were binned and a point cloud in 5D was constructed (so, I think, each point’s coordinate along each of the 5 dimensions is the corresponding cell’s spike rate in that bin).
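At the level of its 1-skeleton, the Rips complex construction is simple: connect every pair of points closer than some scale ε. A hypothetical sketch (the sample points and ε are arbitrary choices of mine; the actual method builds higher-dimensional simplices too and tracks which topological features persist across many scales):

```python
import numpy as np

def rips_edges(points, eps):
    """1-skeleton of the Vietoris-Rips complex at scale eps:
    an edge joins every pair of points within distance eps."""
    pts = np.asarray(points, dtype=float)
    return [(i, j)
            for i in range(len(pts)) for j in range(i + 1, len(pts))
            if np.linalg.norm(pts[i] - pts[j]) <= eps]

# 8 points evenly spaced on a unit circle; at eps = 0.8 only adjacent
# points connect (adjacent chord length is about 0.77, the next-nearest
# is about 1.41), so the complex recovers a cycle with 8 edges.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
print(len(rips_edges(circle, 0.8)))  # 8
```

Picking ε too small leaves the cloud disconnected, and too large fills the hole in; barcodes summarize which features survive over a range of ε.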

This was done in two experimental conditions: when a stimulus was being presented, and when the eyes were occluded. In both cases, the topological structure varied between a circle and a sphere, although the circle structure was found with higher probability in the stimulus condition. The authors present a model in which circular structure is generated “if cortical activity is dominated by neuronal responses to stimulus orientation”, and note that “a toroidal representation may arise from a neuronal population responding to two circular variables, such as orientation and color hue”. Note that a torus wasn’t actually observed in the data; a circle and a sphere were. In the conclusions the authors speculate about what could have caused the sphere.

The authors conclude that “both the data for spontaneous and driven conditions have similar topological structures, with the signatures of the circle and the sphere dominating the results”.

# Phenotropic computing

(from 2003) Jaron Lanier talks about the “phenotropic” programme, which consists of trying to design software systems that use pattern recognition, rather than protocols, for communication between components of the system.

# ConnectomeViewer – Multi-Modal Multi-Level Network and Neuroimaging Visualization and Analysis

Two neat tools concerned with the “connectome” (i.e. the pattern of connections in the nervous system):

Semantic wiki:
http://www.connectome.ch/wiki/Main_Page

Desktop viewer:
http://connectomeviewer.org/viewer “Multi-Modal Multi-Level Network and Neuroimaging Visualization and Analysis” (screencasts)

# Over time, distribution of shot lengths in movies has moved closer to pink noise

The statistics of shot durations in 150 films released between 1935 and 2005 were analyzed. From about 1970 to the present, the power spectrum of shot durations in individual films has tended to become more like pink noise (power ∝ 1/f). Also, autocorrelation analysis shows that the lengths of nearby shots have become more and more correlated.
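To illustrate what “more like pink noise” means (a generic sketch of my own, not the paper’s analysis): the exponent can be estimated by fitting the slope of log power against log frequency, where pink noise gives a slope near −1 and white noise a slope near 0:

```python
import numpy as np

def spectral_slope(x):
    """Fit the slope of log power vs. log frequency for a time series.
    Pink noise (power ~ 1/f) gives a slope near -1; white noise near 0."""
    x = np.asarray(x, dtype=float)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freq = np.fft.rfftfreq(len(x))
    slope, _ = np.polyfit(np.log(freq[1:]), np.log(power[1:]), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.normal(size=4096)

# Synthesize pink noise by shaping white noise's spectrum by 1/sqrt(f),
# so that power falls off as 1/f.
spectrum = np.fft.rfft(white)
spectrum[1:] /= np.sqrt(np.fft.rfftfreq(4096)[1:])
spectrum[0] = 0
pink = np.fft.irfft(spectrum)

print(spectral_slope(white))  # slope near 0
print(spectral_slope(pink))   # slope near -1
```

A film whose sequence of shot durations has a spectral slope drifting toward −1 over the decades is what the paper’s trend describes.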