Although it’s a few months old, Larry Abbott has an excellent article in Neuron on the recent (last 20 years) contributions of theoretical neuroscience. (He came by MIT last week to give a talk and that’s when I found out about the article.) It’s a review that is not too long and provides a good overview with both sufficient (though not overwhelming) detail and original perspective. It’s rare to find a short piece that is so informative. (And for a more experimentally oriented review with an eye toward the future, see Rafael Yuste’s take on the grand challenges.)
Click on for some of my favorite passages from the Abbott piece.
Abbott uses the following problem of input decoding
Spike counts and neuronal firing rates are positive quantities. This simple fact has important implications for neural coding and neural circuits that provide a framework for thinking about a number of research directions taken over the past 20 years.
to highlight new work in synchrony, dendritic compartments, and balanced excitation-inhibition. This is probably the best part of the whole article. With some simple arithmetic, he motivates and explains solutions to the problem of correlating neural activity with real events.
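Abbott's own arithmetic isn't reproduced in the excerpt, but the core constraint is easy to illustrate: a single nonnegative rate can't carry a signed quantity, so one standard workaround (a sketch of the push-pull idea behind balanced excitation-inhibition, not Abbott's derivation) splits the signal across an "on" and an "off" population and lets a downstream reader subtract.

```python
import numpy as np

# A signed signal cannot be carried by one nonnegative firing rate alone.
# Illustrative push-pull encoding: one population's rate encodes the
# positive part, another's the negative part; subtraction recovers the sign.

signal = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])  # signed quantity to encode

rate_on = np.maximum(signal, 0.0)    # rates stay nonnegative by construction
rate_off = np.maximum(-signal, 0.0)

decoded = rate_on - rate_off         # "excitation minus inhibition" readout
assert np.allclose(decoded, signal)  # the signed signal is recovered exactly
```

The subtraction is the point: any circuit that needs signed quantities from positive rates must implement something like this opponency somewhere, which is one way to motivate balanced excitation-inhibition.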
The successes of circuit models (and principles of circuit models) in primary visual cortex:
We now have plausible mechanisms for how simple and complex cells obtain their basic response characteristics. Although no single consensus about how the circuits of primary visual cortex operate has arisen from this body of work, this may simply reflect the fact that multiple mechanisms contribute. In other words, many of these ideas are probably correct in one way or another, and the wealth of ideas in this field should be viewed as a success. Circuit-level modeling is now advancing beyond primary sensory areas (for example, Cadieu et al., 2007) and to the consideration of phenomena such as working memory through sustained activity (Amit and Brunel, 1997; Compte et al., 2000; Seung et al., 2000) and decision making (Wang, 2002; Machens et al., 2005).
And the dangers of an unhealthy obsession with connectomics:
What can we learn from the complete connectome or, indeed, a complete mathematical description of a complex artificial network model?
First, what can’t we learn? It is unlikely, for example, that we could deduce the task that the network was constructed to perform even if we were given the complete equations and connections of the model. If, along with this information, we were told what this task was, it is unlikely that we could figure out how the network performs it. If we somehow managed to make any progress along these lines, the people who constructed the network could probably provide us with another one that performs the same task but has a different connectome. In a similar way, biological systems may operate in a more variable manner than we have suspected, as has been stressed by Eve Marder (Marder et al., 2007). These issues are particularly true of a class of network models known as liquid state or echo state networks (Maass et al., 2002; Jaeger, 2003). In these models, the vast majority of interneuronal connections are not directly related to the task being performed (they are typically chosen randomly and left unchanged), the exceptions being synapses onto the output units of the network. Nevertheless, the tuned values of the synapses onto the output units can only be understood through their relationships to the random synapses. Such systems represent enormous challenges for conventional anatomical and physiological approaches.
The fact that the connectome of an artificial neural network does not typically tell us what the network does or how it does it should not be taken as an indication that this information is useless. Far from it. But we must be willing to be more abstract in our thinking. The important issue for an artificial network is not how it works but how it was constructed, which means what training procedures and modification rules were used to get it to perform a task. Although this information is not provided directly by the connectome, much can be inferred. For example, it is important to know whether the network has a feedforward architecture or has strong feedback loops. Other features of the network layout, whether it has hubs or bottlenecks, how many layers it contains, and its degree of heterogeneity, provide important clues as well. Obtaining a high-resolution connectome in neuroscience will be of great value, but artificial neural networks provide a cautionary tale that reminds us that scientific revolutions tend to render uninteresting as many questions as they answer. We will be fortunate if the connectome project does this for neuroscience, but as we launch ourselves into it we should appreciate that, as artificial neural networks appear to suggest, we may be asking the wrong questions.
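The echo state point is easy to see concretely. Here is a minimal sketch of such a network (the sizes, delay task, and ridge penalty are my illustrative choices, not from the article): the recurrent "connectome" is random and never modified, and only the linear readout is fit, so the trained weights are meaningless without the random ones they sit on top of.

```python
import numpy as np

# Minimal echo state network sketch (in the spirit of Jaeger 2003 /
# Maass et al. 2002): a fixed random reservoir, with only the readout trained.

rng = np.random.default_rng(0)
n_res = 200  # reservoir size (illustrative)

# Random, fixed recurrent weights, scaled so the spectral radius is below 1
# (a standard sufficient-in-practice condition for the "echo state" property).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(scale=0.5, size=n_res)  # random, fixed input weights

def run_reservoir(u):
    """Drive the reservoir with scalar input sequence u; return state history."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy task: output a delayed copy of the input (a short-memory task).
T, delay, washout = 1000, 3, 50
u = rng.uniform(-1, 1, size=T)
target = np.roll(u, delay)

X = run_reservoir(u)

# Only the readout weights are learned, by ridge regression on the states.
A, b = X[washout:], target[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)

pred = X @ w_out
err = np.sqrt(np.mean((pred[washout:] - target[washout:]) ** 2))
print(f"readout RMSE: {err:.3f}")
```

Nothing in `W` or `w_in` reflects the delay task, yet `w_out` only makes sense relative to them; a different random seed yields a different "connectome" that solves the same task, which is exactly Abbott's cautionary point.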
Finally, a major challenge for the future:
This is where I think the future lies in theoretical investigations of cognitive function. We must learn how to build models that construct hypotheses through their internally generated activity while remaining sensitive to the constraints provided by externally generated sensory evidence.