For those with theoretical interests in machine-learning-flavored AI, the ML Theory blog run by John Langford is highly recommended. Though recently started, Langford and others have so far done an excellent job of commenting on both the science and the culture of theoretical learning research.
In the September Nature Neuroscience, we have a promising new technique: Millisecond-timescale, genetically targeted optical control of neural activity.
I think several people have suggested doing something like this before, but no one had actually done it. What they’ve done is genetically modify (by lentivirus, for those curious) ordinary hippocampal neurons in culture, adding a photoelectric transducing protein: channelrhodopsin-2, a microbial relative of the rhodopsin found in photoreceptors. Yup. You heard me right. They’ve expressed a cation-channel-gating rhodopsin in ordinary hippocampal neurons. With a standard fluorescence microscope (Xenon lamp + Chroma GFP cube), they can photostimulate single action potentials (and sub-threshold depolarizations) in single neurons.
Now here’s my idea for bioengineers to take this to the next level: Add a second photosensitive protein tied to an inhibitory channel. Ideally, we would want total separation between the stimulating wavelengths for the two different (excitatory, inhibitory) channels. Now, you have a system where all neurons can be directly excited or inhibited with different laser lines. In other words, a network of neurons where all voltages can be fully controlled. Sweet!
This seems like a great tool to add to the existing arsenal of photostimulation techniques (like the photoelectric light-on-silicon stimulation pioneered by the Goda lab). Here’s a question: Is this the end of multi-electrode arrays? In slice, we already have single-spike detection with Ca-sensitive dyes from Yuste’s lab. Now we have optical single-spike stimulation. Perhaps MEAs will be relegated to the domain of implantable devices. Regardless, I’m proud to see that several of the authors are from Stanford! Read on for the full abstract.
This was news to me, but maybe not to everyone. Everyone’s impressed with the regularity of cerebellar wiring. Well, it gets even neater when you hear that that architecture may not be confined to the cerebellum: in various species, there are a handful of structures similar to the cerebellum. And they may have similar functions.
These are separate structures found in species which also have cerebellums (all vertebrates have cerebellums, by the way). At this point, it is debatable how “similar” they really are, but they tend to have:
- analogs to granule cells
- analogs to Purkinje cells, with spiny apical dendrites and either basilar dendrites or smooth proximal regions of the apical dendrites
- sensory input to the Purkinje-like cells organized in a topographic map. The afferents synapse onto the basilar dendrites or the proximal parts of the apical dendrites.
- parallel fibers going from the granule cells to the Purkinje cells, each one contacting many Purkinje cells in many parts of the topographic map.
- embryological origins in the alar or sensory plate
Some of them have also been shown, in in vivo experiments, to have electrophysiological responses that learn much as the cerebellum does during classical conditioning (think rabbit eye-puff experiment).
A notable difference between these structures and the cerebellum is that they don’t have climbing fibers.
Curtis Bell (below) theorizes that all of these structures’ function is to filter expected patterns out of incoming sensory signals.
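One way to make Bell's filtering hypothesis concrete is a toy predictive canceller, sketched below. The circuit gradually learns a prediction of its expected (self-generated) sensory input and passes on only the residual, so a genuinely novel stimulus pops out against a quiet background. All the numbers and the delta-rule update here are illustrative, not from Bell's work.

```python
# Toy "filter out expected patterns" circuit: learn the expected
# sensory component, subtract it, and pass on only the residual.
# All values and the learning rule are illustrative.

expected = 2.0      # true self-generated component each time step
prediction = 0.0    # learned estimate of the expected component
alpha = 0.1         # learning rate (made up)

residuals = []
for step in range(200):
    novel = 5.0 if step == 150 else 0.0   # a genuinely new stimulus
    sensory = expected + novel
    residual = sensory - prediction       # what the circuit passes on
    prediction += alpha * residual        # slowly absorb the expected part
    residuals.append(residual)

# Early on the residual is large; once the prediction converges it
# hovers near zero, and only the novel stimulus at step 150 stands out.
print(round(residuals[0], 2), round(residuals[100], 2), round(residuals[150], 2))
```

The point of the sketch is just the qualitative behavior: a constant, predictable input is cancelled away, while an unexpected input passes through at nearly full strength.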
Quoting/paraphrasing from the paper below, the structures are:
- medial octavolateral nucleus (MON) (in most basal aquatic vertebrates and in some myxinoids)
- dorsal octavolateral nucleus (DON) (in most of the same basal aquatic vertebrates as the MON, except for the bony fish (neopterygii), where it is entirely absent)
- marginal layer of the optic tectum (in all ray-finned fish (actinopterygii))
- electrosensory lobe (ELL) (in a few groups of advanced bony fish (teleostei))
- the rostrolateral nucleus (RLN) of the thalamus (in a few groups of bony fish)
- dorsal cochlear nucleus (DCN) (in almost all mammals)
For more about this, check out this paper:
Anna Devor. Is the cerebellum like cerebellar-like structures? Brain Research Reviews, Volume 34, Issue 3, December 2000, Pages 149–156.
An interesting (but speculative) essay presenting evidence that differences in anatomy owe more to differences in the regulatory regions of the genome than to its coding regions.
Neuroimaging data from different brain areas, fit to a Rescorla-Wagner model, show that different cortical areas integrate stimulus changes over different time intervals. The result itself probably isn’t that shocking, but I liked the nice combination of theory and experiment.
From the July 21 Neuron:
Formal Learning Theory Dissociates Brain Regions with Different Temporal Integration
Jan Gläscher and Christian Büchel
Learning can be characterized as the extraction of reliable predictions about stimulus occurrences from past experience. In two experiments, we investigated the interval of temporal integration of previous learning trials in different brain regions using implicit and explicit Pavlovian fear conditioning with a dynamically changing reinforcement regime in an experimental setting. With formal learning theory (the Rescorla-Wagner model), temporal integration is characterized by the learning rate. Using fMRI and this theoretical framework, we are able to distinguish between learning-related brain regions that show long temporal integration (e.g., amygdala) and higher perceptual regions that integrate only over a short period of time (e.g., fusiform face area, parahippocampal place area). This approach allows for the investigation of learning-related changes in brain activation, as it can dissociate brain areas that differ with respect to their integration of past learning experiences by either computing long-term outcome predictions or instantaneous reinforcement expectancies.
How does this relate to Hawkins’s idea that all cortex implements the same underlying “algorithm”? Is the integration time constant (or, in RW terms, the learning rate) tuned differently by different inputs?
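To make the learning-rate point concrete, here is a minimal sketch of the Rescorla-Wagner update the paper builds on, run with two different learning rates standing in for long- and short-integration regions. The reinforcement schedule and the alpha values are illustrative, not the paper's fitted quantities.

```python
# Minimal Rescorla-Wagner sketch: V <- V + alpha * (r - V).
# The learning rate alpha sets the temporal integration window:
# small alpha ~ long integration (amygdala-like), large alpha ~
# short integration (perceptual-region-like). Values are illustrative.

def rescorla_wagner(rewards, alpha, v0=0.0):
    """Return the outcome prediction V after each trial."""
    v, out = v0, []
    for r in rewards:
        v += alpha * (r - v)   # prediction-error update
        out.append(v)
    return out

# A reinforcement regime that switches midway, standing in for the
# paper's dynamically changing schedule (the schedule itself is made up).
rewards = [1.0] * 20 + [0.0] * 20

slow = rescorla_wagner(rewards, alpha=0.05)  # long integration
fast = rescorla_wagner(rewards, alpha=0.5)   # short integration

# After the contingency switch, the fast learner tracks the new regime
# almost immediately; the slow learner still reflects the old one.
print(round(slow[-1], 3), round(fast[-1], 3))
```

The same stimulus history thus yields very different prediction signals depending on alpha, which is exactly the handle the fMRI analysis uses to dissociate regions.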
I wrote up a little primer on differential equations for neuroscientists. It can be found here: http://science.ethomson.net/Diff_Eq.pdf. Any comments or suggestions appreciated, especially at this early stage!
Here is the first paragraph:
Ordinary first-order differential equations come up frequently in neuroscience. They are used to model many fundamental processes such as passive membrane dynamics and gating kinetics in individual ion channels. When the equations come up, most electrophysiology texts provide the solution, but do not provide any explanation. This manuscript tries to fill the gap, providing an introduction to many of the mathematical facets of the first-order differential equation. Section One provides a brief statement of the problem and its solution. Section Two works through the solution for a special case that often comes up in practice. I also work through a concrete example chosen for its near-ubiquity in neuroscience, the equivalent circuit model of a patch of neuronal membrane. Section Three contains a simple derivation of the general solution given in Section One. The manuscript presupposes a little knowledge of first-year calculus, much of which is reviewed when needed.
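As a small companion to the primer's running example, here is a sketch comparing the closed-form solution of the passive membrane equation, tau * dV/dt = -(V - V_inf), with a forward-Euler integration of the same equation. The parameter values are generic illustrations, not taken from the primer.

```python
import math

# Passive membrane equation: tau * dV/dt = -(V - V_inf), with the
# closed-form solution V(t) = V_inf + (V0 - V_inf) * exp(-t / tau).
# Parameter values below are illustrative.

def v_analytic(t, v0, v_inf, tau):
    return v_inf + (v0 - v_inf) * math.exp(-t / tau)

def v_euler(t_end, dt, v0, v_inf, tau):
    """Forward-Euler integration of the same equation."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-(v - v_inf) / tau)
    return v

tau, v0, v_inf = 10.0, -70.0, -55.0   # ms, mV (illustrative)
t = 10.0                              # one time constant

exact = v_analytic(t, v0, v_inf, tau)
approx = v_euler(t, dt=0.01, v0=v0, v_inf=v_inf, tau=tau)

# After one time constant the membrane has covered ~63% of the gap
# between V0 and V_inf; the Euler estimate should agree closely.
print(round(exact, 3), round(approx, 3))
```

With a small enough step size the numerical and analytic answers coincide to a few microvolts, which is a handy sanity check when reading the primer's derivation.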