I’ve only skimmed this article by Ranganathan, but I find it notable because of the discussion of memory-oriented computing, in which processors are colocated with storage (he uses the word “nanostores”, which additionally implies that the memory is nonvolatile). One of the most important distinctions between neural architecture and present-day computing architecture is that brains appear to be built out of computing elements that do both processing and memory storage, whereas present-day computers have separate memory and CPU components (this separation is a key feature of what is called the “von Neumann” architecture).
You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.
A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end”, and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, using the different animal model choice of the two researchers as a theme. Since then, additional criticisms from Markram have appeared online.
Find out more after the jump.
What is the right level of biological realism to model neuronal systems in order to understand their computational properties? Some recent papers may help shed some light on the subject. Models of the computational properties of local networks of neurons are starting to come into their own. This year has already seen at least two articles published in experimentalist journals based on the same core of theoretical work.
To bring you up to speed, I need to remind you what is going on in the world of experimental neuroscience.
Experimentalists are now able to record the single-cell activities of a whole population of neurons simultaneously. From Briggman, Abarbanel, Kristan (2006):
By using multi-electrode arrays or optical imaging, investigators can now record from many individual neurons in various parts of nervous systems simultaneously while an animal performs sensory, motor or cognitive tasks. Given the large multidimensional datasets that are now routinely generated, it is often not obvious how to find meaningful results within the data.
This paper goes on to provide a nice overview of the mathematical methods researchers are using to grapple with the challenge of understanding the dynamics of the neural systems they record from. The authors make the case that conceptual progress is needed in interpreting the data these experiments yield. How can we understand what computations these neurons are collectively performing?
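As an illustrative sketch of one such method (this example is mine, not from the Briggman et al. review — the simulated data and all parameters are made up), here is principal component analysis applied to a population whose activity is driven by a low-dimensional latent trajectory, the kind of structure these techniques are designed to expose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 50 neurons x 200 time bins of (continuous) activity.
# A shared two-dimensional "population trajectory" drives every neuron,
# mimicking the low-dimensional structure such methods look for.
t = np.linspace(0, 4 * np.pi, 200)
latent = np.vstack([np.sin(t), np.cos(t)])          # 2 hidden dimensions
weights = rng.normal(size=(50, 2))                  # per-neuron loadings
activity = 3.0 * weights @ latent + rng.normal(scale=0.5, size=(50, 200))

# PCA: center each neuron, then eigendecompose the covariance matrix.
X = activity - activity.mean(axis=1, keepdims=True)
cov = X @ X.T / X.shape[1]
evals = np.linalg.eigvalsh(cov)[::-1]               # descending eigenvalues

# If the population really is low-dimensional, a couple of components
# should account for most of the variance.
explained = evals[:2].sum() / evals.sum()
print(f"variance explained by top 2 PCs: {explained:.2f}")
```

Here almost all the variance lands in two components, so each extra recorded neuron mostly re-measures the same trajectory — which is one way small-sample recordings can still be informative about a large network.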
(Incidentally, this topic is being explored in a conference happening this week at the Los Alamos National Laboratory, which, according to one of the conference session chairs, is intended to help shape future directions for the lab. Hopefully there will be webcasts from this conference.)
HOW DOES THE BRAIN CONTROL BEHAVIOR?
HOW CAN TECHNOLOGY EMULATE BIOLOGICAL INTELLIGENCE?
The conference is aimed at researchers and students of computational neuroscience, cognitive science, neural networks, neuromorphic engineering, and artificial intelligence. It includes invited lectures as well as contributed talks and posters by experts on the biology and technology of how the brain and other intelligent systems adapt to a changing world. The conference is particularly interested in exploring how the brain, and biologically inspired algorithms and systems in engineering and technology, can learn. Single-track oral and poster sessions make all presented work highly visible. Three-hour poster sessions with no conflicting events will be held on two of the conference days. Posters will be up all day and can also be viewed during breaks in the talk schedule.
May 16 – 19, 2007
677 Beacon Street
Boston, Massachusetts 02215 USA
Sponsored by Boston University
In the provocative-hypothesis-of-the-week department:
Kevin Lafferty, a parasitologist, has put forth the idea that a fairly ubiquitous parasite (infecting on the order of 10% of Americans, and up to two-thirds of people in places like Brazil) is responsible for some of the diversity of human cultures (1). The parasite uses common housecats to increase its transmission to the next host in its life cycle, and has a subtle effect on human personality; some studies have linked it to neuroticism, and even to schizophrenia. (One clinical report (2) claims that “subjects with latent toxoplasmosis had higher intelligence [and] lower guilt proneness.” Hmm!)
Anyway, Lafferty noted that toxoplasmosis prevalence varies from world region to world region, and tried to draw correlations between these prevalences and local cultures:
“Drivers of the geographical variation in the prevalence of this parasite include the effects of climate on the persistence of infectious stages in soil, the cultural practices of food preparation and cats as pets. Some variation in culture, therefore, may ultimately be related to how climate affects the distribution of T. gondii, though the results only explain a fraction of the variation in two of the four cultural dimensions, suggesting that if T. gondii does influence human culture, it is only one among many factors.”
How could one test this hypothesis? Look for recent immigrants from one culture to another, who have lower toxoplasmosis incidence? (Preferably finding populations that go in opposite directions, as a control.) Track culture change vs. migration vs. climate change?
Unlikely, perhaps. But nice that people are still thinking big 🙂
(1) Lafferty K.
Can the common brain parasite, Toxoplasma gondii, influence human culture?
Proceedings of the Royal Society B: Biological Sciences
Picked up by the popular press here
(2) Flegr J, Havlicek J.
Changes in the personality profile of young women with latent toxoplasmosis.
Folia Parasitol (Praha). 1999;46(1):22-8.
Very few multi-electrode array (MEA) studies make it into Nature, so this one definitely got my attention.
Often in neuroscience we are confronted with a small sample measurement of a few neurons from a large population. Although many have assumed, few have actually asked: What are we missing here? What does recording a few neurons really tell you about the entire network?
Using an elegant preparation (a retina on an MEA viewing defined scenes and stimuli), Segev, Bialek, and students show that statistical-physics models that assume only pairwise correlations (and disregard all higher-order interactions) perform very well in modeling the data. This indicates a certain redundancy in the neural code. The results are also replicated with cultured cortical neurons on an MEA.
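The paper's actual fitting machinery is more sophisticated, but the core idea — a maximum-entropy model constrained only by firing rates and pairwise correlations — can be sketched on toy data for a handful of cells. Everything here (the data generator, cell count, learning rate) is an illustrative assumption of mine, not taken from the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 5  # small enough to enumerate all 2^N binary patterns exactly

# Toy binary "spike words": a shared input correlates the 5 cells,
# standing in for the correlated responses recorded on the MEA.
common = rng.random(2000) < 0.3
data = ((rng.random((2000, N)) < 0.15) | common[:, None]).astype(float)

# Empirical moments the pairwise model must match.
mean_emp = data.mean(axis=0)            # <s_i>
corr_emp = data.T @ data / len(data)    # <s_i s_j>

# Enumerate every binary state of the N cells.
states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)

h = np.zeros(N)          # per-cell biases
J = np.zeros((N, N))     # pairwise couplings
for _ in range(20000):
    # Model distribution P(s) proportional to exp(h.s + s.J.s),
    # normalized exactly over all 2^N states.
    logp = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(logp)
    p /= p.sum()
    mean_mod = p @ states
    corr_mod = states.T @ (states * p[:, None])
    # Gradient ascent on the log-likelihood = moment matching.
    h += 0.1 * (mean_emp - mean_mod)
    J += 0.1 * (corr_emp - corr_mod)
    np.fill_diagonal(J, 0.0)

print("max moment mismatch:", np.abs(corr_emp - corr_mod).max())
```

Because the log-likelihood is concave in (h, J), this simple gradient ascent converges; the fitted model then assigns probabilities to all 2^N patterns, including ones never seen in the data — which is exactly the kind of prediction the paper tests against the recorded pattern frequencies.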
Some key ideas from the paper are presented after the jump. Continue reading
This project was announced several months ago, but I didn’t see a post here so I thought I would add it.
The project, dubbed “Blue Brain”, is a collaboration between Henry Markram (who co-authored the chapter on the neocortex in the acclaimed reference The Synaptic Organization of the Brain) and IBM’s Blue Gene supercomputer.
From the New Scientist article: For over a decade Markram and his colleagues have been building a database of the neural architecture of the neocortex, the largest and most complex part of mammalian brains.
Using pioneering techniques, they have studied precisely how individual neurons behave electrically and built up a set of rules for how different types of neurons connect to one another.
Very thin slices of mouse brain were kept alive under a microscope and probed electrically before being stained to reveal the synaptic, or nerve, connections. “We have the largest database in the world of single neurons that have been recorded and stained,” says Markram.
Plants use distributed computation to decide how to open and close their stomata in order to take in as much CO2 as possible while losing the least amount of water.
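Stomatal patchiness has been likened to cellular-automaton dynamics, in which each stoma acts on local information only. The following toy is purely illustrative — the grid size, update rule, and `dryness` threshold are my own assumptions, not any published model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy grid of stomata: 1 = open, 0 = closed.
grid = (rng.random((30, 30)) < 0.5).astype(int)

def step(grid, dryness=0.4):
    """One local update: each stoma looks only at its four neighbours."""
    # Fraction of open neighbours (toroidal wrap for simplicity).
    nb = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
          np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4.0
    # Open where neighbours are mostly open (CO2 demand wins) unless the
    # local water cost (dryness) outweighs it -- no global controller.
    return (nb > dryness).astype(int)

for _ in range(20):
    grid = step(grid)
print("fraction open:", grid.mean())
```

The point of the sketch is that a leaf-wide open/closed pattern emerges with no central controller — each cell applies the same local rule, which is what "distributed computation" means here.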