How do crickets know when they are chirping?
This question appears to be answered by the discovery of a motor interneuron in the cricket that is responsible for “corollary discharge,” the forwarding of neural signals from motor systems to sensory systems. By inhibiting auditory neurons during chirping, the animal can “counter the expected, self-generated sensory feedback”.
Over at the synapse blog, it is pointed out that the cerebellum may have this function in vertebrates.
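The basic idea of corollary discharge, as described above, can be sketched in a toy model: a copy of the motor command (“efference copy”) inhibits the auditory pathway while the animal is chirping, so self-generated sound is suppressed while external sound still gets through. The function name, gain parameter, and numbers below are all illustrative assumptions, not anything from the paper.

```python
# Toy model of corollary discharge: an efference copy of the motor
# command inhibits the auditory pathway during self-generated sound.
# All names and numbers are illustrative assumptions.

def auditory_response(sound_level: float, motor_command: float,
                      inhibition_gain: float = 1.0) -> float:
    """Sensory drive minus inhibition from the efference copy,
    floored at zero (firing rates cannot be negative)."""
    return max(0.0, sound_level - inhibition_gain * motor_command)

# External sound while the cricket is silent: full response.
print(auditory_response(sound_level=0.8, motor_command=0.0))  # 0.8

# The same sound level generated by its own chirp: cancelled out.
print(auditory_response(sound_level=0.8, motor_command=0.8))  # 0.0
```

The point of the subtraction is that the inhibition is predictive, driven by the motor command itself rather than by the sound, which is why it can cancel the expected feedback before it arrives.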
“Scans of brain activity, published online in the journal Nature Neuroscience, indicate that the brain can actually get into the ‘right frame of mind’ to store new information and that we perform at our best if the brain is active not only at the moment we get new information but also in the seconds before.
Tests showed that the brain’s electrical activity differed after the cue question and before the word was presented and this was linked to whether the subject would remember or forget the word in a later unexpected memory test. If the electrical activity maintained a high level over frontal parts of the scalp just before the word was shown, then it was likely that the subject would remember the word up to 50 minutes later – and after doing a series of other word tests. On the other hand, if the voltage was lower, the subjects were less likely to remember the word.”
(from the press release)
Leun J. Otten, Richard N. A. Henson & Michael D. Rugg. State-related and item-related neural correlates of successful memory encoding. Nature Neuroscience 5, 1339–1344 (2002). Published online: 28 October 2002; doi:10.1038/nn967
Looking at static pictures of people running versus pictures of people standing still “evokes a delayed response in an area that overlaps with motion-sensitive cortex (hMT+)”. Past studies have indicated a similar response for images depicting a falling cup versus a cup resting on a table.
The paper discusses the role of top-down influence from the temporal lobe as a possible cause for the response. How could this kind of brain activity be influencing our ability to recognize objects in scenes? Is this evidence of the activation of a distributed cortical representation of a moving object?
Should the field of AI be trying to figure out how to replicate a similar top-down influence in next-generation object recognition algorithms?
Abstract from the Journal of Cognitive Neuroscience is available here.
This New York Times article points out that:
A top federal medical official overruled the unanimous opinion of his scientific staff when he decided last year to approve a pacemaker-like device to treat persistent depression, a Senate committee reported Thursday.
The device, the surgically implanted vagus nerve stimulator, had not proved effective against depression in its only clinical trial for treatment of that illness. As a result, scientists at the Food and Drug Administration repeatedly and unanimously recommended rejecting the application of its maker, Cyberonics Inc., to sell it as such a treatment, said the report, written by the staff of the Senate Finance Committee.
But Dr. Daniel G. Schultz, director of the Center for Devices and Radiological Health at the agency, kept moving the application along and eventually decided to approve it, the report said.
That approval did follow the backing of a divided F.D.A. advisory committee.
When some epilepsy patients reported that their moods had changed after receiving the devices, Cyberonics, based in Houston, implanted them in 235 depressed patients and turned the machines on in half of them. After three months, the two groups were equally depressed. The trial had failed.
Cyberonics then turned the devices on in all 235 patients and determined that 30 percent showed significant improvement after six months or more. Without a control group, however, it was impossible to determine if the device had caused the improvement.
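The trouble with that 30 percent figure is easy to demonstrate: if depression scores fluctuate on their own, a sizable fraction of patients will “improve” over six months even with zero treatment effect. The simulation below is a hypothetical illustration of that point; the assumed spontaneous-improvement rate is made up, not drawn from the trial.

```python
# Why an uncontrolled trial is hard to interpret: with some assumed
# rate of spontaneous (placebo / natural-course) improvement, many
# of 235 patients improve even when the device does nothing.
# The 30% rate here is an assumption for illustration only.
import random

random.seed(0)
n_patients = 235
spontaneous_improvement_rate = 0.30  # assumed, not from the trial

improved = sum(random.random() < spontaneous_improvement_rate
               for _ in range(n_patients))
print(f"{improved}/{n_patients} improved with zero treatment effect")
```

A control group exists precisely to estimate this baseline, so that any treatment effect can be measured as improvement above it.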
A recent issue of J Neurosci has a series of mini-reviews on how oscillations play a role in network computations. Two of the reviews are by Sejnowski and collaborators. I haven’t read them yet but I thought I’d post a link here.
Stanford Neuroscientist Bill Newsome wants to implant an electrode in his own brain to study consciousness in ways that would be difficult with volunteer human subjects.
When considered alongside the story of Kevin Warwick, who had a 100-electrode array implanted in his arm in 2002 in order to study electrical signals from his hand, one must wonder: is this the start of a trend?
From the article:
TR: Do you really want to do this?
BN: Well, I’ve thought about it very carefully. I’ve talked to neurosurgeons, both in the United States and outside the country where the regulatory environment is less strict, about how practical and risky it is. If the risk of serious postsurgical complications was one in one hundred, I wouldn’t do it. If it was one in one thousand, I would seriously consider doing it. To my chagrin, most surgeons estimate the risk to be somewhere in between my benchmarks.
Eugene Izhikevich of The Neurosciences Institute has started a peer-reviewed wiki that looks similar to “The Digital Universe,” featured in Nature earlier this month. Most of the topics deal with computational neuroscience, and judging by his latest book his contributions will be worth reading.