Paraphrasing/adding to the article abstract: prevailing theory suggests that long-term memories are encoded via a two-phase process requiring temporary involvement of the hippocampus followed by permanent storage in the neocortex. However, this group found that even weeks later, after the memories are supposed to be independent of the hippocampus, they could disrupt recall by briefly suppressing hippocampal CA1. The suppression must be brief: if they suppress CA1 for a long time, recall works again. This suggests that, long after memory formation, the memory is not primarily stored in the hippocampus, but the hippocampus is still somehow involved in recall. The research also implicates the anterior cingulate cortex in recall. Abstract after the break.
Berger, Hampson, Song, Goonawardena, Marmarelis, and Deadwyler created a system for recording from and stimulating up to 32 neurons at once. The system learned a model to predict the firing of some hippocampal CA1 neurons given inputs from CA3, and the learned model could be "played back" later.
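To make the idea concrete, here is a toy stand-in for that kind of system (my own sketch, not Berger et al.'s actual model): predict a CA1 neuron's spiking from the recent spike history of a few CA3 inputs with a simple logistic model, fit it, and check that the fitted model's predictions track the true firing probabilities. All the numbers (4 inputs, 5 history bins, the logistic link) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration only: a logistic model predicting one "CA1" neuron's
# firing probability from the last few bins of several "CA3" spike trains.
n_inputs, n_lags, n_bins = 4, 5, 2000

ca3 = rng.binomial(1, 0.2, size=(n_bins, n_inputs))  # simulated CA3 spike trains
true_w = rng.normal(0, 1, size=(n_lags, n_inputs))   # hidden coupling weights

def design(spikes):
    """Stack the last n_lags bins of each input into one feature row per time bin."""
    X = np.zeros((len(spikes) - n_lags, n_lags * n_inputs))
    for t in range(n_lags, len(spikes)):
        X[t - n_lags] = spikes[t - n_lags:t].ravel()
    return X

X = design(ca3)
p = 1 / (1 + np.exp(-(X @ true_w.ravel() - 1.0)))    # true firing probability
ca1 = rng.binomial(1, p)                             # "recorded" CA1 spikes

# Fit by gradient ascent on the logistic likelihood (bias held fixed at -1.0)
w = np.zeros(X.shape[1])
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w - 1.0)))
    w += 0.1 * X.T @ (ca1 - pred) / len(ca1)

pred = 1 / (1 + np.exp(-(X @ w - 1.0)))
print("corr(pred, true p):", round(float(np.corrcoef(pred, p)[0, 1]), 2))
```

Once fit, the model's predicted output is exactly the kind of thing that could be "played back" as a stimulation pattern, which is the point of the original system.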
(pun intended). I am embarrassed to say that earlier today I remarked to a colleague that dopamine only encodes unexpected reward, not unexpected lack of reward. This is (afaik) incorrect: dopamine neurons have a baseline level of firing that goes down when there is an unexpected lack of reward (see Fig. 1 in Wolfram Schultz, Peter Dayan, P. Read Montague. A Neural Substrate of Prediction and Reward).
However, because it can only go down so far, the negative signal is clipped, which might have consequences (see Yael Niv, Michael O Duff, Peter Dayan. Dopamine, uncertainty and TD learning).
The previous article mentions that some other people think that maybe dopamine is tracking uncertainty as well as reward. This one talks about a theory that acetylcholine is related to expected uncertainty, and norepinephrine is related to unexpected uncertainty:
Angela Yu, Peter Dayan. Expected and Unexpected Uncertainty: ACh and NE in the Neocortex (huh, all those papers had Peter Dayan as one of the authors) (btw I haven’t read all of the papers I’m posting here)
Since we’re on the subject of temporal difference learning, I’ll mention that in my opinion temporal difference learning may be a model of how futures/speculators in financial markets are supposed to propagate future price changes back in time to the present (if you think of the market as a cognitive system). I haven’t formalized this idea yet, though.
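Here is a minimal TD(0) sketch of that backward-propagation intuition (my illustration of standard temporal difference learning, not a formalized market model): states in a chain, a known payoff at the end, no intermediate reward. Repeated sweeps pull the terminal value back toward the start, the way speculators are supposed to price a known future event into the present.

```python
# TD(0) on a 5-state chain with a payoff of 1.0 at the terminal state.
# Each sweep propagates value one step further back along the chain.
n_states, alpha = 5, 0.5
V = [0.0] * (n_states + 1)
V[n_states] = 1.0                        # the "future event": terminal payoff

for episode in range(50):
    for s in range(n_states):            # walk the chain left to right
        td_error = 0.0 + V[s + 1] - V[s] # reward is 0 until the end
        V[s] += alpha * td_error

print([round(v, 2) for v in V])          # early states approach 1.0
```

After enough sweeps every state's value approaches the terminal payoff, i.e. the future price change has been propagated back to "now."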
Sahay A, Scobie KN, Hill AS, O’Carroll CM, Kheirbek MA, Burghardt NS, Fenton AA, Dranovsky A, Hen R. Increasing adult hippocampal neurogenesis is sufficient to improve pattern separation. Nature. 2011 Apr 3.
Abstract after the break.
When we learn new information we use only a tiny fraction of the neurons in our brain for that particular memory trace. In order to allow the molecular study of those specific neurons we combined elements of the tet system with a promoter that is activated by high level neural activity (the cfos promoter) to generate mice in which a genetic tag can be introduced into neurons that are active at a given point in time. The tag can be maintained for a prolonged period, creating a precise record of the neural activity pattern at a specific point in time. Using fear conditioning we found that the same neurons activated during learning were reactivated when the animal recalled the fearful event. We also found that these neurons were no longer activated following memory extinction, consistent with the idea that extinction modifies a component of the original memory trace.
Replay of behavioral sequences in the hippocampus during sharp wave ripple complexes (SWRs) provides a potential mechanism for memory consolidation and the learning of knowledge structures. Current hypotheses imply that replay should straightforwardly reflect recent experience. However, we find these hypotheses to be incompatible with the content of replay on a task with two distinct behavioral sequences (A and B). We observed forward and backward replay of B even when rats had been performing A for >10 min. Furthermore, replay of nonlocal sequence B occurred more often when B was infrequently experienced. Neither forward nor backward sequences preferentially represented highly experienced trajectories within a session. Additionally, we observed the construction of never-experienced novel-path sequences. These observations challenge the idea that sequence activation during SWRs is a simple replay of recent experience. Instead, replay reflected all physically available trajectories within the environment, suggesting a potential role in active learning and maintenance of the cognitive map.