One method of computing is to require that all numbers be real values between 0 and 1, and then, instead of encoding these numbers into bit streams using binary, represent each one with a long stream of random bits which are 1 with probability x, where x is the number being encoded. An advantage is that computations which require many logic gates to implement conventionally can be implemented more simply (assuming that the randomness in the input bit streams is uncorrelated); e.g., x*y can be implemented by ANDing the two bit streams together, and (x+y)/2 can be implemented by evenly sampling both inputs (select about half the bits from x and the other half from y, and concatenate the selected bits, in any order, to produce the output). Another advantage is that this method is naturally tolerant of noise.
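Here's a minimal sketch of these two operations in Python (the function names are mine, just for illustration):

```python
import random

random.seed(0)  # reproducible demo

def encode(x, n):
    """Encode a value x in [0, 1] as a stream of n random bits,
    each of which is 1 with probability x."""
    return [1 if random.random() < x else 0 for _ in range(n)]

def decode(bits):
    """Estimate the encoded value as the fraction of 1s."""
    return sum(bits) / len(bits)

def multiply(xs, ys):
    """x*y: AND the two (uncorrelated) streams together bitwise."""
    return [a & b for a, b in zip(xs, ys)]

def mean(xs, ys):
    """(x+y)/2: pick each output bit from one stream or the other
    with equal probability (an even sampling of both inputs)."""
    return [a if random.random() < 0.5 else b for a, b in zip(xs, ys)]

n = 100_000
xs, ys = encode(0.6, n), encode(0.5, n)
print(round(decode(multiply(xs, ys)), 2))  # close to 0.6 * 0.5 = 0.30
print(round(decode(mean(xs, ys)), 2))      # close to (0.6 + 0.5) / 2 = 0.55
```

Note why the AND trick works: for independent streams, P(both bits are 1) = P(first is 1) * P(second is 1) = x*y, so the output stream is itself a valid stochastic encoding of the product.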
If the circuit is tolerant to noise, power can be saved because circuit elements can be designed to consume less power at the cost of producing noisy results.
A disadvantage is that the number of bits needed to represent each number scales exponentially with the required precision, as opposed to radix encodings such as binary, which scale linearly (e.g., to represent one of 256 values, you need 8 bits in binary but 256 bits using stochastic computing).
Obviously, this sort of thing is a candidate neural code.
Neville told me about this neat article from ’04. It presents a way to offer rewards to people taking a poll so as to motivate them to be honest, with no prior information about what the distribution of correct answers is. Apparently, previous such techniques are based on the idea of rewarding people for agreeing with other people’s answers. The new thing about this technique for calculating the reward is that it gives people an incentive to state their true opinion even if they know they hold a minority viewpoint.
Drazen Prelec. A Bayesian Truth Serum for Subjective Data. Science 15 October 2004: Vol. 306, no. 5695, pp. 462–466. DOI: 10.1126/science.1102081
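From my reading, the scoring combines an “information” term (rewarding answers that turn out to be more common than the group collectively predicted) with a “prediction” term (rewarding accurate forecasts of the poll results). Here’s a toy sketch of that idea — my own variable names and smoothing, not the paper’s exact formulation:

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Toy Bayesian-Truth-Serum-style scoring.
    answers[i] is respondent i's own answer (an index over k options);
    predictions[i][k] is i's predicted fraction of respondents answering k.
    Each score is an information term, log(actual frequency of one's answer /
    geometric mean of the predicted frequencies), plus alpha times a
    prediction-accuracy term (a negative KL divergence)."""
    n = len(answers)
    k_opts = len(predictions[0])
    eps = 1e-9  # smoothing to avoid log(0)
    # Empirical answer frequencies.
    xbar = [max(sum(1 for a in answers if a == k) / n, eps) for k in range(k_opts)]
    # Geometric mean of everyone's predicted frequencies.
    ybar = [math.exp(sum(math.log(max(p[k], eps)) for p in predictions) / n)
            for k in range(k_opts)]
    scores = []
    for i in range(n):
        info = math.log(xbar[answers[i]] / ybar[answers[i]])
        pred = sum(xbar[k] * math.log(max(predictions[i][k], eps) / xbar[k])
                   for k in range(k_opts))
        scores.append(info + alpha * pred)
    return scores
```

The key property is that answers which are “surprisingly common” (more frequent than the crowd predicted) score well, which is what makes truthfully reporting a minority view rational.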
Although I’ve been a longtime fan of Ramachandran’s excellent book Phantoms in the Brain, this TED talk is like a compressed summary of the highlights of his research. He’s a great speaker, and in 20 minutes he covers my two favorite examples from the book (Capgras delusion and mirror treatment for phantom limb syndrome). Perhaps the best part of the talk is that, after listening to it, I was more convinced than ever of the statistical nature of sensory perception (i.e., the brain attempts to find the most likely explanation for sensory observations) and the integrative nature of central processing across multiple modalities.
Atul Gawande also recently wrote a New Yorker article about treating phantom itch with Ramachandran’s mirror box. I found this part of Gawande’s article on statistical inference in perception most interesting:
You can get a sense of this from brain-anatomy studies. If visual sensations were primarily received rather than constructed by the brain, you’d expect that most of the fibres going to the brain’s primary visual cortex would come from the retina. Instead, scientists have found that only twenty per cent do; eighty per cent come downward from regions of the brain governing functions like memory. Richard Gregory, a prominent British neuropsychologist, estimates that visual perception is more than ninety per cent memory and less than ten per cent sensory nerve signals. When Oaklander theorized that M.’s itch was endogenous, rather than generated by peripheral nerve signals, she was onto something important.
I’m not familiar with this field, but I wonder if anyone has tried to quantify what percentage of the conscious experience we normally believe to be 100% due to sensory input is actually recall from memory or inference based on past observation. Also, can this percentage adaptively change? Perhaps there are situations where the brain chooses to rely more heavily on memory and other cases where it relies more on primary sensory input.
I am a prospective graduate student interested in taking up Neural Engineering under EE or Biomedical Engineering for research. But I have a lot of concerns and need help from someone who knows the field well.
1. I have studied VLSI, DSP, Image Processing, Wireless Communication, Control Systems, and Embedded Systems as graduate and undergraduate courses, and I have some research interest in Neural Networks and Machine Learning (that’s how I got interested in Neural Engineering and Prosthetics). Which of these subjects will be of help in Neural Engineering/Prosthetics research? Which will be most relevant? Please list them in order of relevance (high -> low).
2. What are the applications of the research?
3. What is the research and job scope for this field? Are there any companies that recruit people with this specialisation? How is the job scene in academia? How many universities are doing research in this field in the US? Please let me know about career progression in academia, e.g., how long does it take to get a full-time academic position after a PhD?
4. Especially, what are the applications of this research in Robotics?
5. What are the current problems and research themes in universities?
6. What imaging technologies are used in this research?
Though my queries may seem a bit amateurish, it is very important for me to get clarity on these doubts.
Hope my queries will be answered.
Thanking all of you in advance,
Neuron: Uncertainty, Neuromodulation, and Attention
Haven’t read this article from Peter Dayan’s lab yet but some interesting Bayesian modeling implicating acetylcholine as a signal of expected uncertainty and norepinephrine as a signal of unexpected uncertainty.
Uncertainty in various forms plagues our interactions with the environment. In a Bayesian statistical framework, optimal inference and prediction, based on unreliable observations in changing contexts, require the representation and manipulation of different forms of uncertainty. We propose that the neuromodulators acetylcholine and norepinephrine play a major role in the brain’s implementation of these uncertainty computations. Acetylcholine signals expected uncertainty, coming from known unreliability of predictive cues within a context. Norepinephrine signals unexpected uncertainty, as when unsignaled context switches produce strongly unexpected observations. These uncertainty signals interact to enable optimal inference and learning in noisy and changeable environments. This formulation is consistent with a wealth of physiological, pharmacological, and behavioral data implicating acetylcholine and norepinephrine in specific aspects of a range of cognitive processes. Moreover, the model suggests a class of attentional cueing tasks that involve both neuromodulators and shows how their interactions may be part-antagonistic, part-synergistic.
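To make the abstract’s division of labor concrete, here is a toy illustration — emphatically not the paper’s actual model, just the gist as I read it. A learned cue validity sets the expected uncertainty (the “ACh” signal); a run of prediction errors too long to attribute to that known unreliability triggers the unexpected-uncertainty (“NE”) signal, prompting a context switch and relearning:

```python
def simulate(trials, lr=0.1, ne_threshold=3):
    """Toy sketch of expected vs. unexpected uncertainty.
    trials is a list of (cue, outcome) pairs with binary values.
    Returns, per trial, (ach, ne): ach tracks expected uncertainty
    (1 minus the learned cue validity); ne is True when consecutive
    errors exceed what the known unreliability can explain."""
    validity = 0.5   # estimated P(outcome == cue), start agnostic
    error_run = 0    # consecutive prediction errors
    log = []
    for cue, outcome in trials:
        correct = (outcome == cue)
        ach = 1.0 - validity                 # expected uncertainty
        error_run = 0 if correct else error_run + 1
        # NE fires when errors persist beyond what ACh accounts for:
        # the more unreliable the cue is known to be, the more errors
        # are tolerated before inferring a context change.
        ne = error_run > ne_threshold * (1 + ach)
        if ne:
            validity, error_run = 0.5, 0     # context switch: relearn
        else:
            validity += lr * (correct - validity)  # track cue validity
        log.append((ach, ne))
    return log
```

For example, if a cue is perfectly valid for a stretch of trials and then the contingency reverses, the sketch keeps “ACh” low during the stable phase and fires the “NE” signal a few trials after the reversal.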
PLoS Biology: Attenuation of Self-Generated Tactile Sensations Is Predictive, not Postdictive [open access]
I haven’t gotten a chance to fully digest this article (what is the attenuation phenomenon that happens when the taps are delayed?), but it seems like a deep result from a relatively simple haptics experiment. Just thought I’d share it with the crowd.
Also, Happy Birthday to fellow Neurodude Bayle! Congrats, man. 🙂
This week’s Science has a nice introductory article (but with some mathematical detail) on using probabilistic graphical models to model cellular networks. Even for those of you who already know the formalisms (Bayesian networks, HMMs, etc.), you might find the recent biological applications discussed interesting.
Also, there are several other mathematical biology articles in the issue, including a review on evolutionary game theory.