Why Americans resist neuroscience more

Science has a special online feature this week on behavioral science. One of the articles, a review by Paul Bloom and Deena Skolnick Weisberg (a fellow SymSys alum!), presents some interesting evidence that dualistic ideas about the mind and brain are present from an early age. They state:

Another consequence of people’s common-sense psychology is dualism, the belief that the mind is fundamentally different from the brain (5). This belief comes naturally to children. Preschool children will claim that the brain is responsible for some aspects of mental life, typically those involving deliberative mental work, such as solving math problems. But preschoolers will also claim that the brain is not involved in a host of other activities, such as pretending to be a kangaroo, loving one’s brother, or brushing one’s teeth (5, 17). Similarly, when told about a brain transplant from a boy to a pig, they believed that you would get a very smart pig, but one with pig beliefs and pig desires (18). For young children, then, much of mental life is not linked to the brain.

And,

For one thing, debates about the moral status of embryos, fetuses, stem cells, and nonhuman animals are sometimes framed in terms of whether or not these entities possess immaterial souls (20, 21). What’s more, certain proposals about the role of evidence from functional magnetic resonance imaging in criminal trials assume a strong form of dualism (22). It has been argued, for instance, that if one could show that a person’s brain is involved in an act, then the person himself or herself is not responsible, an excuse dubbed “my brain made me do it” (23).

The authors conclude that adult resistance to science is strongest in fields where scientific claims are contested within society, that is, challenged by non-scientific alternatives rather than by genuine scientific uncertainty. They argue that this accounts for why resistance to the central tenets of evolutionary biology and neuroscience is stronger in the United States than in countries with less vociferous advocacy of non-scientific views.

I think this says something important about science education, namely that it should start earlier in life. There’s no reason that neuroscience should be left as a “college-level” subject: modern neuroscience has progressed to the point where we can confidently teach some basics at the high-school level or earlier. And judging from my own experience, the desire to learn about neuroscience is certainly there in younger children.

Enabling Neural Engineering Ought To Be The Measure Of Neuroscience

The field of neuroscience naturally focuses its inquiry on neurons. This approach of understanding the brain by studying its parts has been thought to hold greater potential than psychology for explaining how the brain works, an observation made by no less than Daniel L. Schacter, chair of Harvard’s Department of Psychology, in his book The Seven Sins of Memory.

However promising the field has been thus far, even the most accomplished neuroscientists will admit that we still do not understand how the brain really works. I would submit that the reductionist approach of current neuroscience has shed much light on the dynamics of how neurons work, but far less light on how neurons process information. The difference between these two lines of inquiry matters for making progress in understanding how the brain works.

Hawkins Releases Numenta Code

Entrepreneur-turned-cognitive-neuroscientist Jeff Hawkins is distributing a “research release” of an experimental code base implementing hierarchical temporal memory (HTM), the idea described in his book On Intelligence. Hawkins drew inspiration for the model from his own reading on the structure and function of the human neocortex and believes it represents the foundation for developing intelligent machines.

Jeff explains this surprising move to open-source the code for the Numenta Platform for Intelligent Computing (NuPIC) on the Numenta web site:

Why are we making NuPIC available now?

We have been contacted by dozens of researchers and scientists who are excited about HTM and by our work at Numenta. These people are anxious to work on HTM, are willing to be pioneers, and are willing to accept the uncertainty associated with a new technology. We are making our tools available so that these sophisticated developers can start building a community around HTM technology. NuPIC has been under development for 18 months, is pretty solid, and is well documented – including several examples to make it easy to get started – so we’re ready to open up to more developers, even while knowing that we do not yet have benchmarking data, and we cannot make guarantees about applicability to specific problems.

Here’s why Hawkins thinks that HTMs are new.

We have been covering Hawkins’ work for a while now. See these previous posts for more background info.

Neurodudes is actively soliciting code reviews of the newly released software. Is NuPIC the next big thing, or does it leave you cold? Post your thoughts using the instructions in the right-hand column, or let us know at contactus -AT- neurodudes.com!

More on "Quad Nets" (new brain/mind theory)

In September 2006, I described my “new brain/mind theory” here and received some challenging criticism from Eric Thomson and Mike S. (see below). To meet these challenges, I prepared a reduced model, discussed on a web page that links to a paper in PDF form. Since my approach is based on little-known thermodynamics, I have also written about mechanical metaphors that may help explain my ideas.


So, How Do REAL Neuronal Networks Compute?

What is the right level of biological realism at which to model neuronal systems in order to understand their computational properties? Some recent papers may help shed light on the subject. Models of the computational properties of local networks of neurons are starting to come into their own: this year has already seen at least two articles, based on the same core of theoretical work, published in experimentalist journals.

To bring you up to speed, I need to remind you what is going on in the world of experimental neuroscience.

Experimentalists are now able to record single-cell activity from whole populations of neurons simultaneously. From Briggman, Abarbanel, and Kristan (2006):

By using multi-electrode arrays or optical imaging, investigators can now record from many individual neurons in various parts of nervous systems simultaneously while an animal performs sensory, motor or cognitive tasks. Given the large multidimensional datasets that are now routinely generated, it is often not obvious how to find meaningful results within the data.

This paper goes on to provide a nice overview of the mathematical methods researchers are using to grapple with the challenge of understanding the dynamics of the neural systems they record from. The authors make the case that conceptual progress is needed in interpreting the data these experiments yield. How can we understand what computations these neurons are collectively performing?
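To give a concrete flavor of this kind of analysis, here is a minimal, purely illustrative sketch, not drawn from the paper itself, that applies principal component analysis (one standard dimensionality-reduction technique for population recordings) to simulated data. The array sizes, latent signals, and noise level are invented for the example.

```python
import numpy as np

# Illustrative only: reduce a simulated population recording (neurons x time
# bins) to a few latent dimensions with PCA, the kind of dimensionality-
# reduction step often applied to multi-electrode or imaging data.
rng = np.random.default_rng(1)
n_neurons, n_bins = 100, 2000

# Fake "population activity": two shared latent signals plus private noise.
latents = np.vstack([np.sin(np.linspace(0, 20, n_bins)),
                     np.cos(np.linspace(0, 14, n_bins))])      # (2, n_bins)
mixing = rng.normal(size=(n_neurons, 2))                       # per-cell loadings
activity = mixing @ latents + 0.5 * rng.normal(size=(n_neurons, n_bins))

# PCA via eigendecomposition of the neuron-by-neuron covariance matrix.
centered = activity - activity.mean(axis=1, keepdims=True)
cov = centered @ centered.T / (n_bins - 1)
eigvals, eigvecs = np.linalg.eigh(cov)                         # ascending order
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()

# Project the high-dimensional activity onto the top two components.
trajectory = eigvecs[:, order[:2]].T @ centered                # (2, n_bins)
print("variance explained by top 2 PCs:", explained[:2].sum())
```

Projections like this collapse hundreds of simultaneously recorded traces into a few trajectories that can be inspected and related to behavior, which is the spirit of the methods the review surveys, though the review covers considerably more than PCA.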

(Incidentally, this topic is being explored in a conference happening this week at the Los Alamos National Laboratory, which, according to one of the conference session chairs, is intended to help shape future directions for the lab. Hopefully there will be webcasts from this conference.)


Modeling Time in Computational Neuroscience

Computational neuroscience is a field where many successful researchers have a strong physics background. So far, the physics approach has provided a strong foundation from which to understand the brain. Recently, however, the influence of a computer science perspective has become more prominent. How can we understand the different perspectives that these disciplines bring to the field? Can we observe the influence of physics methodologies on the modern study of the brain? And if so, what are the consequences of understanding the brain through the lens of physics rather than through the lens of computer science?

One consequence may be the way that computational neuroscience models time in the brain. The study of physics generally conceptualizes time as continuous. Time is something to be plotted on the x-axis of a graph where some other quantity of interest is plotted on the y-axis.

In computer science, on the other hand, real time is rarely conceptualized explicitly. Computer scientists do not plot quantities against time unless they are profiling software for performance, and even then time is more often measured as a count of operations. Thinking in terms of operations leads computer scientists to treat time as a sequence of discrete events.

I posit that the distinction between continuous and discrete time creates a foundational difference between the physics approach and the computer science approach to understanding how the brain works. Because of the discrete conception of time, computer scientists are more comfortable explaining the function of brain systems in terms of chains of events with definite beginnings and definite ends. Physicists, on the other hand, are more comfortable explaining the brain in terms of dynamics, which require neither. Computer scientists care more about what the consequences of an event are in the brain, whereas physicists are more concerned with a concise account of the dynamics of what is occurring.

This divide is visible in the distinct neuronal modeling approaches that derive from the two disciplines. The canonical neuronal model contributed by the physics philosophy is the multi-compartment, conductance-based (Hodgkin-Huxley-like) model. It is concerned with matching the waveforms of simulated current and voltage traces to those measured in real neurons, and it helps us understand how changes in the properties of excitable membranes over time produce changes in neuronal behavior over time. The computational complexity of these models is thought to limit concurrent simulation to at most a few hundred neurons modeled at this level of detail.
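For concreteness, here is a minimal sketch of a single-compartment Hodgkin-Huxley-style model (the multi-compartment versions referred to above couple many such compartments together). The conductances and rate functions are the classic squid-axon values; the integration step and injected current are illustrative choices, not taken from any particular study.

```python
import numpy as np

# Single-compartment Hodgkin-Huxley sketch: continuous-time differential
# equations integrated with a simple forward-Euler step. Units: mV, ms,
# uA/cm^2, mS/cm^2, with membrane capacitance C_m = 1 uF/cm^2.

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """Advance the membrane state by one Euler step of size dt (ms)."""
    # Voltage-dependent rate constants (1/ms) for the gating variables.
    alpha_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    beta_m  = 4.0 * np.exp(-(V + 65.0) / 18.0)
    alpha_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    beta_h  = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    alpha_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    beta_n  = 0.125 * np.exp(-(V + 65.0) / 80.0)

    # Ionic currents: sodium, potassium, and leak conductances.
    I_Na = 120.0 * m**3 * h * (V - 50.0)
    I_K  = 36.0 * n**4 * (V + 77.0)
    I_L  = 0.3 * (V + 54.387)

    # Gating variables and voltage all evolve continuously in time.
    m += dt * (alpha_m * (1.0 - m) - beta_m * m)
    h += dt * (alpha_h * (1.0 - h) - beta_h * h)
    n += dt * (alpha_n * (1.0 - n) - beta_n * n)
    V += dt * (I_ext - I_Na - I_K - I_L)
    return V, m, h, n

# Drive the model with a constant current and record the voltage trace.
V, m, h, n = -65.0, 0.05, 0.6, 0.32
trace = []
for _ in range(5000):                     # 50 ms at dt = 0.01 ms
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
    trace.append(V)
```

Note that dt here is only a numerical integration step: the underlying model is a set of differential equations in continuous time, with voltage plotted against time on the x-axis, exactly the physics-style conception described above.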

Alternatively, the canonical neuronal model contributed by the computer science philosophy is the integrate-and-fire neuron. This model does away with modeling conductances explicitly as functions of time: at each time step it accumulates a weighted sum of its inputs and emits a spike when the accumulated value crosses a threshold. Here a time step is a discrete event whose duration is a parameter of the model. The simplicity of this model allows large networks to be constructed, which is useful for modeling systems of many thousands of neurons.
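For contrast with the conductance-based sketch above, here is a minimal sketch of a discrete-time leaky integrate-and-fire network. The network size, weight scale, leak factor, threshold, and noise level are arbitrary illustrative choices rather than values from any particular model.

```python
import numpy as np

# Discrete-time leaky integrate-and-fire network sketch: at each step every
# neuron leaks part of its accumulated value, adds a weighted sum of the
# spikes that arrived on the previous step, and fires when it crosses a
# fixed threshold. One loop iteration = one discrete time step.
rng = np.random.default_rng(0)
n_neurons, n_steps = 1000, 500
weights = rng.normal(0.0, 0.05, size=(n_neurons, n_neurons))   # recurrent weights
v = np.zeros(n_neurons)                                        # membrane variable
spikes = rng.random(n_neurons) < 0.05                          # seed activity
leak, threshold = 0.9, 1.0

spike_history = []
for t in range(n_steps):
    drive = weights @ spikes                 # weighted sum of last step's spikes
    noise = rng.normal(0.0, 0.3, n_neurons)  # stand-in for external input
    v = leak * v + drive + noise             # leaky integration
    spikes = v >= threshold                  # fire on threshold crossing
    v[spikes] = 0.0                          # reset neurons that fired
    spike_history.append(spikes.copy())

rates = np.mean(spike_history, axis=0)       # spikes per time step, per neuron
```

Here time appears only as the loop counter t; the physical duration of a step would enter solely as a conversion factor when reporting rates in spikes per second, which is the discrete-event conception of time described above.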

The physics approach provides insight into the activity of single cells and small networks, whereas the computer science approach provides insight into the activity of large networks. Neither approach is optimal, and neither provides all the tools necessary to truly understand the brain. As the two perspectives become better understood, computational neuroscience can benefit from finding creative ways to merge these conceptions of time into models that capture both small-scale and large-scale neuronal activity.

In conclusion, I have argued that what begins as a division between discrete and continuous time amounts to a divide between a bottom-up and a top-down approach, and that understanding the relative contributions of different disciplines to computational neuroscience is important for understanding the paradigms that pervade the field.

Who Cares About Theory?

Is science just about facts, or are theories and conceptualizations important too? Should we worry about having good theories, or do the facts pretty much give us everything we need to know? This article, entitled “Facts, concepts, and theories: The shape of psychology’s epistemic triangle”, discusses this issue for the field of psychology, though its contents are also applicable to neuroscience and AI.