Inferring network activity on an MEA from pairwise correlations

Weak pairwise correlations imply strongly correlated network states in a neural population (Nature)

Very few MEA studies make it into Nature, so this definitely got my attention.

Often in neuroscience we are confronted with a small-sample measurement of a few neurons from a large population. Although many have assumed an answer, few have actually asked: What are we missing here? What does recording a few neurons really tell you about the entire network?

Using an elegant prep (retina on an MEA viewing defined scenes/stimuli), Schneidman, Berry, Segev and Bialek show that statistical physics models that assume pairwise correlations (but disregard any higher-order phenomena) perform very well in modeling the data. This indicates that a certain redundancy exists in the neural code. The results are also replicated with cultured cortical neurons on an MEA.

Some key ideas from the paper are presented after the jump.

To describe the network as a whole, we need to write down a probability distribution for the 2^N binary words corresponding to patterns of spiking and silence in the population. The pairwise correlations tell us something about this distribution, but there are an infinite number of models that are consistent with a given set of pairwise correlations. The difficulty, then, is to find a distribution that is consistent only with the measured correlations, and does not implicitly assume the existence of unmeasured higher-order interactions.
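To make that concrete, here is a minimal sketch (mine, not the paper's; the 20 ms bin width, function names, and data layout are illustrative assumptions) of how MEA spike trains become binary words, and of the pairwise statistics the model is constrained to match:

```python
import numpy as np

def binarize(spike_times, t_start, t_stop, bin_ms=20.0):
    """Turn per-neuron spike time arrays (in ms) into a
    (n_bins, n_neurons) binary matrix: 1 if the neuron fired
    at least once in the time bin, 0 if silent."""
    edges = np.arange(t_start, t_stop + bin_ms, bin_ms)
    words = np.zeros((len(edges) - 1, len(spike_times)), dtype=int)
    for i, st in enumerate(spike_times):
        counts, _ = np.histogram(st, bins=edges)
        words[:, i] = counts > 0
    return words

def pairwise_stats(words):
    """The only statistics the pairwise maximum entropy model is
    allowed to match: the means <sigma_i> and the pairwise
    averages <sigma_i sigma_j>."""
    means = words.mean(axis=0)
    corrs = (words.T @ words) / len(words)
    return means, corrs
```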

Therefore, the question of whether pairwise correlations provide an effective description of the system becomes the question of whether the reduction in entropy that comes from these correlations, I^(2) = S_1 − S_2, captures all or most of the multi-information I_N.
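Here S_1 is the entropy of the independent (rate-only) model, S_2 the entropy of the pairwise maximum entropy model, and S_N the true entropy of the word distribution, so I_N = S_1 − S_N. For a group small enough that the full word distribution can be estimated, the bookkeeping looks roughly like this (a sketch under those definitions):

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (bits) of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pairwise_fraction(p_true, p_indep, p_pair):
    """Fraction of the multi-information I_N = S_1 - S_N captured
    by the pairwise model's entropy reduction I^(2) = S_1 - S_2.
    Each argument is a distribution over all 2^N binary words."""
    S1, S2, SN = map(entropy_bits, (p_indep, p_pair, p_true))
    return (S1 - S2) / (S1 - SN)
```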

We conclude that although the pairwise correlations are small and the multi-neuron deviations from independence are large, the maximum entropy model consistent with the pairwise correlations captures almost all of the structure in the distribution of responses from the full population of neurons. Thus, the weak pairwise correlations imply strongly correlated states. To understand how this happens, it is useful to look at the mathematical structure of the maximum entropy distribution.
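For reference, that structure is the Ising model of statistical physics: the maximum entropy distribution consistent with the measured means and pairwise correlations takes the form

P(σ_1, ..., σ_N) = (1/Z) exp( Σ_i h_i σ_i + (1/2) Σ_{i≠j} J_ij σ_i σ_j ),

where the "fields" h_i and "couplings" J_ij are fit so that the model reproduces the measured ⟨σ_i⟩ and ⟨σ_i σ_j⟩, and Z normalizes the distribution.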

In a physical system, the maximum entropy distribution is the Boltzmann distribution, and the behaviour of the system depends on the temperature, T. For the network of neurons, there is no real temperature, but the statistical mechanics of the Ising model predicts that when all pairs of elements interact, increasing the number of elements while fixing the typical strength of interactions is equivalent to lowering the temperature, T, in a physical system of fixed size, N. This mapping predicts that correlations will be even more important in larger groups of neurons.
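A small, self-contained illustration of that mapping (my own, using a {0, 1} spike/silence convention rather than the paper's spin variables): writing the Ising energy as E(σ) = −(Σ_i h_i σ_i + ½ Σ_ij J_ij σ_i σ_j), the Boltzmann distribution exp(−E/T)/Z at temperature T is the same maximum entropy model with h and J rescaled by 1/T, so strengthening the interactions plays the role of lowering the temperature:

```python
import numpy as np
from itertools import product

def ising_boltzmann(h, J, T=1.0):
    """Exact Boltzmann distribution for a small Ising-type network,
    sigma_i in {0, 1}, energy E = -(h . sigma + 0.5 sigma . J . sigma).
    P is proportional to exp(-E/T), which depends only on h/T and J/T,
    so lowering T is equivalent to uniformly scaling up all
    interactions."""
    h = np.asarray(h, dtype=float) / T
    J = np.asarray(J, dtype=float) / T
    words = np.array(list(product([0, 1], repeat=len(h))))
    log_p = words @ h + 0.5 * np.einsum('ki,ij,kj->k', words, J, words)
    p = np.exp(log_p - log_p.max())  # subtract max for numerical safety
    return words, p / p.sum()
```

With the couplings held fixed, adding neurons adds more interaction terms per neuron, which is the sense in which larger groups behave like a colder system of fixed size.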

And of note from the Discussion:

The dominance of pairwise interactions means that learning rules based on pairwise correlations could be sufficient to generate nearly optimal internal models for the distribution of ‘codewords’ in the retinal vocabulary, thus allowing the brain to accurately evaluate new events for their degree of surprise.
