NSF/EFRI neuro grants

NSF’s Emerging Frontiers in Research and Innovation (EFRI) office funded four very futuristic neuroengineering grants:

  1. Deep learning in mammalian cortex
  2. Studying neural networks in vitro with an innovative patch clamp array
  3. Determining how the brain controls the hand for robotics
  4. In vitro power grid simulation using real neurons

Disclaimer: I was involved with the second proposal on this page.

A Computational Neuroanatomy for Motor Control

An extremely interesting trend in neuroscience has been to use the language of control theory to explain brain function. A recent paper by Shadmehr and Krakauer does a very nice job of summarizing this trend and assembling a comprehensive theory of how the brain controls the body. Using control theory, they put forward a mathematically precise formulation of that theory. Because their theory uses blocks that are direct analogues of specific brain regions like the basal ganglia, motor cortex, and cerebellum, they can use brain lesion studies to undergird their ideas about these components. From the paper:

The theory explains that in order to make a movement, our brain needs to solve three kinds of problems: we need to be able to accurately predict the sensory consequences of our motor commands (this is called system identification), we need to combine these predictions with actual sensory feedback to form a belief about the state of our body and the world (called state estimation), and then given this belief about the state of our body and the world, we have to adjust the gains of the sensorimotor feedback loops so that our movements maximize some measure of performance (called optimal control).

At the heart of the approach is the idea that we make movements to achieve a rewarding state. This crucial description of why we are making a movement, i.e., the rewards we expect to get and the costs we expect to pay, determines how quickly we move, what trajectory we choose to execute, and how we will respond to sensory feedback.
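To make those three pieces concrete, here is a minimal numerical sketch of the standard linear-Gaussian version of such a loop: a forward model predicts the consequences of the motor command (system identification), a Kalman-filter update fuses that prediction with noisy feedback (state estimation), and a fixed feedback gain maps the resulting belief to the next command (a stand-in for the optimal control step). This is my own toy example rather than code from the paper; every matrix and noise level below is invented for illustration.

```python
import numpy as np

# Toy 1-D reaching loop: state x = [position, velocity].
# All dynamics, noise levels, and gains are invented for illustration.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])       # forward model of the arm's dynamics
B = np.array([[0.0], [0.1]])     # effect of the motor command u
C = np.array([[1.0, 0.0]])       # we only sense position
Q = np.eye(2) * 1e-4             # process noise covariance
R = np.array([[1e-2]])           # sensory noise covariance
L = np.array([[2.0, 1.5]])       # feedback gains (stand-in for optimal control)

target = np.array([1.0, 0.0])    # the rewarding state: position 1, at rest

x = np.zeros(2)                  # true state of the "arm"
x_hat = np.zeros(2)              # the brain's belief about that state
P = np.eye(2)                    # uncertainty of the belief
rng = np.random.default_rng(0)

for t in range(50):
    u = L @ (target - x_hat)     # command computed from the belief
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), Q)   # body moves
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]))                 # noisy feedback

    # System identification: predict the consequences of the command.
    x_pred = A @ x_hat + B @ u
    P = A @ P @ A.T + Q

    # State estimation: Kalman update combines prediction with feedback.
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x_hat = x_pred + K @ (y - C @ x_pred)
    P = (np.eye(2) - K @ C) @ P

print("final believed position:", x_hat[0])
```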

This approach of describing brain lesion studies in the context of a well-thought-out theory ought to be further encouraged.

Virtual Neurorobotics

Researchers at the University of Nevada, Reno have an interesting and ambitious setup for doing research in AI that they describe in a recent paper.

From the paper:

We define virtual neurorobotics as follows: a computer-facilitated behavioral loop wherein a human interacts with a projected robot that meets five criteria: (1) the robot is sufficiently embodied for the human to tentatively accept the robot as a social partner, (2) the loop operates in real time, with no pre-specified parcellation into receptive and responsive time windows, (3) the cognitive control is a neuromorphic brain emulation incorporating realistic neuronal dynamics whose time constants reflect synaptic activation and learning, membrane and circuitry properties, and (4) the neuromorphic architecture is expandable to progressively larger scale and complexity to track brain development, (5) the neuromorphic architecture can potentially provide circuitry underlying intrinsic motivation and intentionality, which physiologically is best described as “emotional” rather than rule-based drive.

What’s interesting to me about this is the combination of an embodied robot in a virtual world with a neurally inspired controller for that robot. While there are pros and cons to embodiment in a virtual world (some of which have been touched on here before), I think that if your priority is closing the loop from embodiment to research on neural systems, the importance of this kind of approach cannot be ignored.
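The “realistic neuronal dynamics” criterion is what separates this from ordinary game AI. To give a flavor of what “time constants reflecting membrane properties” means in practice, here is a minimal leaky integrate-and-fire neuron; the parameters are generic textbook values, not those of the Nevada group’s emulation.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron. Parameters are generic textbook
# values for illustration, not those used in the paper's brain emulation.
dt = 0.1          # time step (ms)
tau_m = 20.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -54.0  # spike threshold (mV)
v_reset = -80.0   # post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)

t = np.arange(0.0, 200.0, dt)
i_inj = np.where(t > 50.0, 1.8, 0.0)   # step current injected at 50 ms (nA)

v = np.full_like(t, v_rest)
spikes = []
for k in range(1, len(t)):
    # Membrane equation: tau_m * dV/dt = -(V - V_rest) + R_m * I
    v[k] = v[k-1] + (-(v[k-1] - v_rest) + r_m * i_inj[k-1]) * dt / tau_m
    if v[k] >= v_thresh:
        spikes.append(t[k])
        v[k] = v_reset            # fire and reset

print(f"{len(spikes)} spikes at t = {spikes} ms")
```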

Steve Grand on Strong AI

Interview with Steve Grand on building human level artificial intelligence at Machines Like Us. Really interesting. Via Chris Chatham at (the excellent) Developing Intelligence.

In particular, MLU asks why his current project to create an android was done as a physical robot rather than as a simulation. The answer, that you can cheat too much in a simulation, is familiar to those from the Brooksian school of embodied intelligence. He says that simulations still aren’t good enough to provide the kinds of physical constraints, like gravity and friction, that you get when building real robots.

However, with the availability of free 3D simulation environments that handle physics, like Breve, we are getting a lot closer. Building a robot within a simulation like this, particularly where you don’t modify the code of the simulation environment itself, is a terrific way to balance the competing interests of keeping yourself honest and avoiding the painstaking mechanical engineering required to construct complicated robots. This kind of environment allows you to build a body with primary sensory systems and primary motor outputs, much as one would with a real robot.
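To give a sense of what that workflow looks like today, here is a sketch using the open-source PyBullet engine rather than Breve (my substitution, chosen because it ships with sample robot models). The loop reads joint states as the “primary sensory system” and writes motor commands as the “primary motor output,” without touching the simulator’s internals.

```python
import pybullet as p
import pybullet_data

# Headless physics simulation; use p.GUI instead of p.DIRECT to watch it.
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)            # the constraint Grand worries about

plane = p.loadURDF("plane.urdf")    # ground plane with friction
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

n_joints = p.getNumJoints(robot)
for step in range(240):             # one simulated second at the default 240 Hz
    # "Primary sensors": each joint's angle and angular velocity.
    readings = [p.getJointState(robot, j)[:2] for j in range(n_joints)]

    # "Primary motor outputs": command joint velocities. A neurally inspired
    # controller mapping readings to commands would slot in here.
    for j in range(n_joints):
        p.setJointMotorControl2(robot, j, p.VELOCITY_CONTROL, targetVelocity=5.0)

    p.stepSimulation()

pos, orn = p.getBasePositionAndOrientation(robot)
print("robot base ended up at", pos)
p.disconnect()
```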

I think the reason more people haven’t adopted this kind of “in silico embodiment” philosophy is that they take Brooks’ position a bit too seriously. Brooks’ idea of embodiment is very well founded, but back when he first made those statements, there really was no good way to simulate the physics of an embodied creature faithfully. Today that is not the case. Moreover, building real physical robots is great if you have a lot of time or an engineering team, but it’s a huge investment that distracts from the real problem of understanding the nature of intelligence. The fact that extremely few labs in the world can make that investment is one of the many reasons there aren’t more serious strong AI researchers anymore.

Update: Steve apparently received a few comments along these lines and replies.

Next-Gen Walking Robot Better Than Honda Asimo

Recently, a company named AnyBots appears to have built a humanoid walking robot whose gait is much more humanlike than Honda’s Asimo’s. (If you aren’t familiar with Asimo, check it out.)

The robot is named Dexter, and here’s a link to a blog explaining why it’s better than Asimo. From the blog:

There are of course biped robots that walk. The Honda Asimo is the best known. But the Asimo doesn’t balance dynamically. Its walk is preprogrammed; if you had it walk twice across the same space, it would put its feet down in exactly the same place the second time. And of course the floor has to be hard and flat.

Dynamically balancing—the way we walk—is much harder. It looks fairly smooth when we do it, but it’s really a controlled fall. At any given moment you have to think (or at least, your body does) about which direction you’re falling, and put your foot down in exactly the right place to push you in the direction you want to go. Practice makes it seem easy to us, but it’s a very hard problem to solve. Something as tall as a human becomes irretrievably off balance very rapidly.

This is important because robots that dynamically balance can handle arbitrary terrain and can withstand attempts to knock them over, much like the Big Dog robot we reported on earlier, whereas pre-programmed robots like Asimo cannot. If Honda were smart, they’d buy AnyBots ASAP.
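The “put your foot down in exactly the right place” part has a surprisingly compact mathematical core. Here is a toy capture-point calculation from the linear inverted pendulum model used throughout the dynamic-walking literature; the numbers are invented, and a real controller like Dexter’s is of course far more involved than this.

```python
import math

# Capture point of a linear inverted pendulum: the spot on the ground where
# a falling body must place its foot to come to rest. Toy numbers only.
g = 9.81                       # gravity (m/s^2)
z = 1.0                        # center-of-mass height (m)
omega = math.sqrt(g / z)       # pendulum's natural frequency (1/s)

def capture_point(x, v):
    """Foot placement, given center-of-mass position x (m) and velocity v (m/s)."""
    return x + v / omega

# Walking as a controlled fall: the body is over the stance foot but moving.
x_com, v_com = 0.0, 0.5
print(f"step to x = {capture_point(x_com, v_com):.2f} m to arrest the fall")
```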

Here’s a link to the Slashdot article on this.

Amputee Controls And Feels Bionic Arm as Her Own

(UPDATE 03-05-2007 – Upon closer inspection, it is clear that while the surgery has enabled the woman to have sensation in the nerves of her missing hand when the surface of her chest is touched, the arm she was fitted with at the time of publication did not relay sensory signals from the arm back to her chest. As soon as she is fitted with an arm that has the appropriate sensors, however, she will not have to undergo further surgery to get this kind of direct feedback. Thanks to astute readers for pointing this out.)

The Guardian reports on an article published today in the Lancet about a successful surgical procedure giving an amputee a bionic arm that both responds to motor commands from her remaining motor nerves to control it and provides sensory feedback to sensory nerves when it is touched. If there was any doubt left, the worlds of neural prosthetics and brain-machine interfaces have officially collided.

The Lancet article is accompanied by two movies of the woman using the arm that you should really check out.

Given the recent progress in the decoding of motor signals from the brain and older progress on sensory feedback from neural prosthetics, this was to be expected. Nonetheless, watching this woman use her arm brings the message home in a visceral way. The spooky thesis of MIT CSAIL’s Rodney Brooks that “we will become a merger between flesh and machines” is one step closer today.
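The motor side of such prostheses ultimately rests on decoders that map the firing rates of a neural population onto intended movement. Here is a minimal sketch of one classic approach, a least-squares linear decoder trained on simulated cosine-tuned neurons; all of the data below is synthetic, purely to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 20 neurons with cosine tuning to 2-D movement direction,
# the classic model of motor cortical cells. Everything here is simulated.
n_neurons, n_trials = 20, 500
preferred = rng.uniform(0, 2 * np.pi, n_neurons)     # preferred directions
angles = rng.uniform(0, 2 * np.pi, n_trials)         # intended directions
velocity = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (trials, 2)

# Firing rate = baseline + depth * cos(angle - preferred), plus noise.
rates = 10 + 5 * np.cos(angles[:, None] - preferred[None, :])
rates += rng.normal(0.0, 1.0, rates.shape)           # (trials, neurons)

# Train the decoder: velocity ~ [rates, 1] @ W, solved by least squares.
X = np.hstack([rates, np.ones((n_trials, 1))])
W, *_ = np.linalg.lstsq(X, velocity, rcond=None)

# Decode a fresh trial and compare with the intended direction.
test_angle = np.pi / 3
test_rates = 10 + 5 * np.cos(test_angle - preferred) + rng.normal(0.0, 1.0, n_neurons)
decoded = np.append(test_rates, 1.0) @ W
print(f"intended direction: {test_angle:.2f} rad")
print(f"decoded direction:  {np.arctan2(decoded[1], decoded[0]):.2f} rad")
```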
