(pun intended). I am embarrassed to say that earlier today I remarked to a colleague that dopamine only encodes unexpected reward, not the unexpected absence of reward. This is (afaik) incorrect: dopamine neurons have a baseline firing rate that dips when an expected reward fails to arrive (see Fig. 1 in Wolfram Schultz, Peter Dayan, P. Read Montague, A Neural Substrate of Prediction and Reward).
However, because firing can only go so far below baseline, the negative signal is effectively clipped, which may have computational consequences (see Yael Niv, Michael O. Duff, Peter Dayan, Dopamine, Uncertainty and TD Learning).
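To make that asymmetry concrete, here is a minimal sketch (my own illustration, not taken from either paper) of a standard TD prediction error whose negative half is clipped at an arbitrary floor, standing in for dopamine's limited range below its baseline firing rate:

```python
# Illustrative sketch: a TD(0) prediction error whose negative half is
# clipped, mimicking dopamine's limited range below baseline firing.

def td_error(reward, v_next, v_current, gamma=1.0):
    """Standard TD(0) prediction error: r + gamma * V(s') - V(s)."""
    return reward + gamma * v_next - v_current

def clipped_dopamine_signal(delta, floor=-0.2):
    """Positive errors pass through in full; negative errors are clipped
    at a floor. The floor value is an arbitrary illustrative choice."""
    return max(delta, floor)

# Unexpected reward: error of +1.0, transmitted in full.
print(clipped_dopamine_signal(td_error(1.0, 0.0, 0.0)))   # 1.0
# Unexpected omission of a fully predicted reward: error of -1.0,
# but the transmitted signal bottoms out at the floor of -0.2.
print(clipped_dopamine_signal(td_error(0.0, 0.0, 1.0)))   # -0.2
```

The point is only that positive and negative errors of equal size come out asymmetric after clipping, which is the situation the Niv, Duff & Dayan paper analyzes.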
The previous article mentions that others have proposed that dopamine may track uncertainty as well as reward. This next one discusses a theory that acetylcholine signals expected uncertainty, while norepinephrine signals unexpected uncertainty:
Angela Yu, Peter Dayan. Expected and Unexpected Uncertainty: ACh and NE in the Neocortex. (Huh, all of those papers have Peter Dayan as a co-author.) (Btw, I haven’t read all of the papers I’m posting here.)
Since we’re on the subject of temporal difference learning, I’ll mention that, in my opinion, temporal difference learning may be a model of how futures traders and speculators in financial markets are supposed to propagate expected future price changes back to the present (if you think of the market as a cognitive system). I haven’t formalized this idea yet, though.
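For readers who haven’t seen temporal difference learning before, here is a minimal, generic TD(0) sketch (not the market model, which remains unformalized) showing the core idea: value from a future reward gradually propagates backward to earlier states over repeated episodes. All parameter values here are arbitrary illustrative choices:

```python
# Minimal TD(0) sketch: the value of a reward at the end of a chain of
# states propagates backward to earlier states over repeated episodes.

def run_td0(n_states=5, episodes=50, alpha=0.5, gamma=1.0):
    values = [0.0] * (n_states + 1)  # values[n_states] is the terminal state
    for _ in range(episodes):
        for s in range(n_states):
            # Reward of 1.0 arrives only on the final transition.
            reward = 1.0 if s == n_states - 1 else 0.0
            # TD(0) update: nudge V(s) toward r + gamma * V(s+1).
            values[s] += alpha * (reward + gamma * values[s + 1] - values[s])
    return values[:n_states]

print(run_td0())  # all states' values approach 1.0, earliest states last
```

Early in training only the state adjacent to the reward has learned anything; each sweep pushes the value estimate one step further back, which is the "propagating the future into the present" behavior alluded to above.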
You’ve probably read by now about the announcement by IBM’s Cognitive Computing group that they had created a “computer system that simulates and emulates the brain’s abilities for sensation, perception, action, interaction and cognition” at the “scale of a cat cortex”. For their work, the IBM team led by Dharmendra Modha was awarded the ACM Gordon Bell prize, which recognizes “outstanding achievement in high-performance computing”.
A few days later, Henry Markram, leader of the Blue Brain Project at EPFL, sent off an e-mail to IBM CTO Bernard Meyerson harshly criticizing the IBM press release, and cc’ed several reporters. This brought a spate of shock media into the usually placid arena of computational neuroscience reporting, with headlines such as “IBM’s cat-brain sim a ‘scam,’ says Swiss boffin: Neuroscientist hairs on end”, and “Meow! IBM cat brain simulation dissed as ‘hoax’ by rival scientist”. One reporter chose to highlight the rivalry as cat versus rat, using the different animal model choice of the two researchers as a theme. Since then, additional criticisms from Markram have appeared online.
Find out more after the jump.
Neuroscientists often use mouse models to understand learning and neural disease. Much of our understanding of mammalian biology comes from these amazing animals. It is commonly said that highly inbred lab mice are unintelligent. But is that true of wild mice too? In a talk last week at Harvard, Karel Svoboda referred to this fascinating YouTube video showing a mouse trained to complete an obstacle course:
Other training videos from the same trainer are available along with an official website with interesting tips about mouse training. Perhaps highly inbred lab mice are unable to replicate such feats but it is amazing to see in what detail this trainer understands mouse behavior and development:
An absolute necessity for any pet training is to understand the animal’s needs and to know about its generic behaviour, since appropriate animal training is only based on certain natural habits. For mouse agility, this means e.g. their great spatial orientation abilities and spatial memory which is worth bringing to light by relevant trick training. In nature, mice always prefer the familiar (= safe) route to their feeding site, no matter if it’s a long way round. This is also the reason why mice are unbeatable in maze tests – and a mouse agility course is nothing else than a maze without walls!
But many owners forget that if you expect your pet to show some natural habits and abilities, first and foremost the husbandry has to be species-appropriate. If your mice have to live in a small ground level cage, their three-dimensional consciousness and orientation abilities will surely be stunted or never fully develop.
Although it’s a few months old, Larry Abbott has an excellent article in Neuron on the recent (last 20 years) contributions of theoretical neuroscience. (He came by MIT last week to give a talk and that’s when I found out about the article.) It’s a review that is not too long and provides a good overview with both sufficient (though not overwhelming) detail and original perspective. It’s rare to find a short piece that is so informative. (And for a more experimentally-oriented review with an eye toward the future, see Rafael Yuste’s take on the grand challenges.)
Click through for some of my favorite passages from the Abbott piece.
Postdoctoral/research scientist positions are available in the interdisciplinary group of Dmitri Chklovskii at the new Janelia Farm Research Campus of the Howard Hughes Medical Institute, located in the suburbs of Washington, D.C. Candidates are expected to have a PhD in neuroscience, physics, computer science or electrical engineering. Most of the work is theoretical or computational and is done in collaboration with several experimental laboratories. Successful applicants will work on projects centered on neuronal circuits, such as high-throughput reconstruction of wiring diagrams and combining structural and physiological data to infer circuit function. Salary will be commensurate with qualifications. For more information about research directions in the group please see: http://www.hhmi.org/research/groupleaders/chklovskii.html
Interested applicants should send their CV and a statement of research interests to mitya (at) janelia.hhmi.org, and arrange for three recommendation letters to be emailed to me.
It seems Markram is back to producing interesting results. A recent discovery from the Brain Mind Institute at EPFL shows that the brain adapts to new experience by unleashing a burst of new neuronal connections, of which only the fittest survive. The research further shows that this cycle of creation, testing, and reconfiguration of brain circuits takes place on a scale of just hours, suggesting that the brain rewires itself considerably even during the course of a single day.
The paper can be found here.
A very nice neuroeconomics experiment in the newest issue of Nature:
Daw et al. find that humans choose between multiple slot machines (with different payoff probabilities) in a way best described by weighing the machines’ expected values (rather than just going with the highest-valued one most of the time and choosing randomly every so often). Then, with fMRI, they find brain areas correlated with the different value predictions.
News & Views (Daeyeol Lee)
Cortical substrates for exploratory decisions in humans (Daw et al.)
Decision making in an uncertain environment poses a conflict between the opposing demands of gathering and exploiting information. In a classic illustration of this ‘exploration-exploitation’ dilemma, a gambler choosing between multiple slot machines balances the desire to select what seems, on the basis of accumulated experience, the richest option, against the desire to choose a less familiar option that might turn out more advantageous (and thereby provide information for improving future decisions). Far from representing idle curiosity, such exploration is often critical for organisms to discover how best to harvest resources such as food and water. In appetitive choice, substantial experimental evidence, underpinned by computational reinforcement learning (RL) theory, indicates that a dopaminergic, striatal and medial prefrontal network mediates learning to exploit. In contrast, although exploration has been well studied from both theoretical and ethological perspectives, its neural substrates are much less clear. Here we show, in a gambling task, that human subjects’ choices can be characterized by a computationally well-regarded strategy for addressing the explore/exploit dilemma. Furthermore, using this characterization to classify decisions as exploratory or exploitative, we employ functional magnetic resonance imaging to show that the frontopolar cortex and intraparietal sulcus are preferentially active during exploratory decisions. In contrast, regions of striatum and ventromedial prefrontal cortex exhibit activity characteristic of an involvement in value-based exploitative decision making. The results suggest a model of action selection under uncertainty that involves switching between exploratory and exploitative behavioural modes, and provide a computationally precise characterization of the contribution of key decision-related brain systems to each of these functions.
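For the curious, here is a hedged sketch of a softmax choice rule, the general kind of computationally motivated explore/exploit strategy used to model behavior in such bandit tasks. The inverse-temperature parameter beta and the value estimates below are arbitrary illustrative choices, not the values fitted in the study:

```python
import math
import random

def softmax_choice(values, beta=3.0, rng=random):
    """Choose an arm with probability proportional to exp(beta * value).
    Higher beta means more exploitation; lower beta means more exploration.
    beta=3.0 is an arbitrary illustrative value."""
    weights = [math.exp(beta * v) for v in values]
    r = rng.random() * sum(weights)
    for arm, w in enumerate(weights):
        r -= w
        if r <= 0:
            return arm
    return len(values) - 1  # guard against floating-point round-off

# Estimated payoffs of four hypothetical "slot machines": choices favor
# the higher-valued arms but still sample the others (exploration).
estimates = [0.2, 0.5, 0.3, 0.4]
picks = [softmax_choice(estimates) for _ in range(1000)]
```

Unlike an epsilon-greedy rule, which explores uniformly at random, softmax exploration is graded: nearly-as-good arms are sampled more often than clearly bad ones, which is part of what makes it possible to classify individual choices as exploratory or exploitative.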