This paper hypothesizes that postsynaptic CaMKII (calcium/calmodulin-dependent protein kinase II) receives synaptic input and then interacts with microtubules via phosphorylation, suggesting that memories may be encoded in the microtubules in this way. The authors note that the size and shape of CaMKII appear to be just right for phosphorylating the hexagonal lattices of tubulin proteins in microtubules. They also “demonstrate microtubule-associated protein logic gates, and show how patterns of phosphorylated tubulins in microtubules can control neuronal functions by triggering axonal firings, regulating synapses, and traversing scale.” Via ScienceDaily.
Travis J. A. Craddock, Jack A. Tuszynski, Stuart Hameroff. Cytoskeletal Signaling: Is Memory Encoded in Microtubule Lattices by CaMKII Phosphorylation? PLoS Computational Biology, 2012; 8 (3): e1002421 DOI: 10.1371/journal.pcbi.1002421.
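As a toy illustration of the idea (not taken from the paper; the footprint geometry and bit pattern here are made up for the sketch), one can treat each tubulin dimer's phosphorylation state as a bit on a lattice, with a hexagonal CaMKII holoenzyme "writing" a handful of bits per encounter:

```python
# Toy sketch, assuming phosphorylation states behave as stable bits.
# The 6-site footprint and the pattern below are illustrative only.
import numpy as np

lattice = np.zeros((8, 8), dtype=np.uint8)   # tubulin dimers, 0 = unphosphorylated

def camkii_write(lattice, row, col, pattern):
    """'Phosphorylate' a hypothetical 6-site footprint with a given bit pattern."""
    sites = [(row, col), (row, col + 1),
             (row + 1, col), (row + 1, col + 1),
             (row + 2, col), (row + 2, col + 1)]
    for (r, c), bit in zip(sites, pattern):
        lattice[r, c] = bit

camkii_write(lattice, 2, 3, [1, 0, 1, 1, 0, 1])
print(int(lattice.sum()))  # 4 sites phosphorylated
```

The point of the sketch is only the information-capacity intuition: if each dimer holds one stable bit, a microtubule lattice is a dense addressable memory substrate.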
“activating synapses in a centrifugal sequence (outward from the soma) caused a different [lesser] [cortical pyramidal] neuronal response than activating the synapses in a centripetal (inward) sequence”
Alain Destexhe. Dendrites Do It in Sequences (24 September 2010)
Science 329 (5999), 1611.
Tiago Branco, Beverley A. Clark, and Michael Häusser. Dendritic Discrimination of Temporal Input Sequences in Cortical Neurons (24 September 2010)
Science 329 (5999), 1671.
Jia, H., Rochefort, N., Chen, X., & Konnerth, A. (2010). Dendritic organization of sensory input to cortical neurons in vivo Nature, 464 (7293), 1307-1312 DOI: 10.1038/nature08947
Consider a cortical neuron in V1, layer 2/3, whose output shows sharp orientation tuning. What are the orientation tunings of the most important inputs to that neuron? What is the spatial distribution of these inputs in the neuron’s dendritic tree?
You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.
A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end”, and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, using the different animal model choice of the two researchers as a theme. Since then, additional criticisms from Markram have appeared online.
Find out more after the jump.
We had read that Dr. Henry Markram of the Blue Brain project had given a talk at TED (technology, entertainment, design), but the video wasn’t released until this month. This talk is geared towards a general audience, rather than getting into the specific details of the Blue Brain project, as he has before. It is engaging and includes many suggestions towards the future of neuroscience and AI.
Watch it online at the TED website.
The journal Frontiers in Neuroscience, edited by Idan Segev, has made it to Volume 3, issue 1. Having launched last year at the Society for Neuroscience conference, it’s probably the newest neuroscience-related journal.
I’m a fan of it because it is an open-access journal featuring a “tiered system” and more. From their website:
The Frontiers Journal Series is not just another journal. It is a new approach to scientific publishing. As service to scientists, it is driven by researchers for researchers but it also serves the interests of the general public. Frontiers disseminates research in a tiered system that begins with original articles submitted to Specialty Journals. It evaluates research truly democratically and objectively based on the reading activity of the scientific communities and the public. And it drives the most outstanding and relevant research up to the next tier journals, the Field Journals.
Although it’s a few months old, Larry Abbott has an excellent article in Neuron on the recent (last 20 years) contributions of theoretical neuroscience. (He came by MIT last week to give a talk and that’s when I found out about the article.) It’s a review that is not too long and provides a good overview with both sufficient (though not overwhelming) detail and original perspective. It’s rare to find a short piece that is so informative. (And for a more experimentally-oriented review with an eye toward the future, see Rafael Yuste’s take on the grand challenges.)
Click through for some of my favorite passages from the Abbott piece. Continue reading
The Circadian Clock in the Retina Controls Rod-Cone Coupling (Christophe Ribelayga, Yu Cao, and Stuart C. Mangel)
An amazing paper from Neuron demonstrating adaptive (circadian clock-governed) binning in the retina, based on dopamine modulation of gap junction (electrical) synapses between retinal photoreceptors. During the day, abundant dopamine release weakens the gap junctions coupling rods and cones together, so that visual acuity is high. When light is scarce (at night), there is less dopamine and the electrical coupling between rods and cones increases. This is analogous to on-chip binning in CCD (digital) cameras. Binning increases signal in light-limited settings (e.g., seeing at night) by enlarging the optical input area and by reducing single-element noise (i.e., noise at different photoreceptors should be independent), at the cost of resolution. So the retina activates photoreceptor binning at night to boost low-light signals and deactivates it during the day to increase resolution. The dopamine comes from cells in the interplexiform layer, whose dopamine release is itself governed by melatonin projections.
Also, I never knew that gap junction strengths were directly modifiable. It looks like the D2 receptors are G-protein coupled to PKA, which acts on the gap junctions.
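The binning arithmetic is worth spelling out. With made-up numbers (these are illustrative, not from the paper): pooled signal grows linearly with the number of coupled photoreceptors, while independent noise adds in quadrature and so grows only as the square root, giving a sqrt(n) improvement in SNR at the cost of spatial resolution:

```python
# Back-of-the-envelope sketch of binning SNR; all numbers are illustrative.
from math import sqrt

SIGNAL_PER_RECEPTOR = 0.5   # arbitrary units of response per photoreceptor (dim light)
NOISE_SD = 1.0              # independent noise at each photoreceptor

def snr(n_coupled):
    """Signal-to-noise ratio when n_coupled photoreceptors are pooled."""
    pooled_signal = n_coupled * SIGNAL_PER_RECEPTOR          # signal adds linearly
    pooled_noise = NOISE_SD * sqrt(n_coupled)                # independent noise adds in quadrature
    return pooled_signal / pooled_noise

print(snr(1))   # daytime, uncoupled: 0.5
print(snr(4))   # night, four cells coupled: 1.0 (a sqrt(4)-fold gain)
```

This is exactly the tradeoff CCD binning makes, which is what makes the retinal analogy so apt.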
Neurons come in many shapes and sizes. Frequently, the shape of a neuron is characteristic to its type. Several theoretical papers have demonstrated that the shape of a neuron can crucially determine its pattern of activity, independently of other factors (Mainen & Sejnowski, 1996, for example). Several resources on the web such as neuromorpho.org and the Cell Centered Database are dedicated to maintaining repositories of different neuronal shapes (also known as morphologies).
Any computer scientist worth their salt, noticing this trend, is tempted to say: if neuronal shape is so important, maybe we ought to have good data standards to describe it. That’s just what a paper last year did. It surveyed the popular data standards for modeling, primarily in the NEURON and Genesis simulation packages. The result is a data standard called MorphML, which is part of a larger effort called NeuroML.
Neuronal shape is a weird data type for the computer science world, but I think an incredibly important and fundamental one for deeply coping with the complexity of real brain tissue. It seems to me that many areas of neuroscience research could benefit from the construction of more explicit models of the circuits they study.
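MorphML itself is XML, but the flavor of the data is easy to see in the simpler plain-text SWC format that repositories like neuromorpho.org distribute: each line is one sample point (id, type, x, y, z, radius, parent id, with -1 marking the root), and the parent pointers form the dendritic tree. A minimal sketch (the four-point morphology below is invented for illustration):

```python
# Minimal sketch: parsing an SWC-style morphology into a parent-pointer tree.
# SWC field order: id, type, x, y, z, radius, parent (-1 = root/soma).
import math

swc_text = """\
1 1 0.0 0.0 0.0 5.0 -1
2 3 0.0 10.0 0.0 1.0 1
3 3 0.0 20.0 0.0 0.8 2
4 3 5.0 25.0 0.0 0.5 3
"""

def parse_swc(text):
    nodes = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        i, t, x, y, z, r, parent = line.split()
        nodes[int(i)] = dict(type=int(t),
                             xyz=(float(x), float(y), float(z)),
                             radius=float(r),
                             parent=int(parent))
    return nodes

morphology = parse_swc(swc_text)

# Total cable length: sum of distances from each node to its parent.
total_length = sum(
    math.dist(n["xyz"], morphology[n["parent"]]["xyz"])
    for n in morphology.values() if n["parent"] != -1
)
print(total_length)
```

Once morphology is an explicit tree like this, the circuit-level questions in the posts above (where on the dendritic tree do inputs land? in what sequence?) become concrete, computable queries.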
The field of neuroscience naturally focuses its inquiry on neurons. This approach of understanding the brain by studying its parts has been thought to hold greater potential than psychology for explaining how the brain works, a view expressed by no less than Daniel L. Schacter, chair of Harvard’s Department of Psychology, in his book The Seven Sins of Memory.
However promising the field has been thus far, even the most accomplished neuroscientists will admit that we still do not understand how the brain really works. I would submit that the current reductionist nature of neuroscience has shed much light on the dynamics of how neurons work, but has to a far lesser degree shed light on how neurons process information. The difference between these two lines of inquiry is important for making progress in understanding how the brain works.