Recently, a company named AnyBots appears to have built a humanoid walking robot whose gait is much more humanlike than Honda’s Asimo. (If you aren’t familiar with Asimo, check it out.)
The robot is named Dexter, and here’s a link to a blog post explaining why it’s better than Asimo. From the blog:
There are of course biped robots that walk. The Honda Asimo is the best known. But the Asimo doesn’t balance dynamically. Its walk is preprogrammed; if you had it walk twice across the same space, it would put its feet down in exactly the same place the second time. And of course the floor has to be hard and flat.
Dynamically balancing—the way we walk—is much harder. It looks fairly smooth when we do it, but it’s really a controlled fall. At any given moment you have to think (or at least, your body does) about which direction you’re falling, and put your foot down in exactly the right place to push you in the direction you want to go. Practice makes it seem easy to us, but it’s a very hard problem to solve. Something as tall as a human becomes irretrievably off balance very rapidly.
This is important because robots that dynamically balance can handle arbitrary terrain and can withstand attempts to knock them over, much like the Big Dog robot we reported on earlier, whereas pre-programmed robots like Asimo cannot. If Honda were smart, they’d buy AnyBots ASAP.
Here’s a link to the Slashdot article on this.
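The “controlled fall” idea in the excerpt above has a standard textbook formalization: the linear inverted pendulum model, in which the right place to step (the so-called capture point) follows from the body’s horizontal position, velocity, and center-of-mass height. Here’s a minimal sketch of that idea (this is not AnyBots’ actual controller; all numbers and names are illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def capture_point(x, v, com_height):
    """Where to place the foot so a 'controlled fall' comes to rest.

    Under the linear inverted pendulum model, a body with center-of-mass
    height com_height, at horizontal position x and falling with
    horizontal velocity v, is caught by stepping to
    x + v * sqrt(com_height / G).
    """
    return x + v * math.sqrt(com_height / G)

# Toy example: center of mass at x = 0 m, drifting forward at 0.5 m/s,
# at a (roughly human) height of 0.9 m.
foot_x = capture_point(0.0, 0.5, 0.9)
print(f"step to x = {foot_x:.3f} m")
```

The sixth line of the quoted excerpt ("put your foot down in exactly the right place") is essentially this computation, repeated at every step, with the target updated as the fall evolves.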
In September 2006, I described my “new brain/mind theory” here and received some challenging criticism from Eric Thomson and Mike S. (see below). To meet these challenges, I prepared a reduced model, discussed on a web page that links to a paper in PDF form. Since my approach is based on little-known thermodynamics, I have also written about mechanical metaphors that may help explain my ideas.
Some surprising and important news from Nature Biotechnology about a common technique in cellular imaging, fluorescence resonance energy transfer, or FRET. Specifically, it looks like ATP/Mg can significantly alter the FRET signal, which has commonly been used for looking at Ca2+, voltage, and various other binding interactions in neurons:
Given these findings, we predict that fluctuations in free or Mg2+-bound ATP will affect the signal output of most—if not all—CFP-YFP–based FRET indicators.
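For context on why such an artifact matters: FRET efficiency depends very steeply on the donor-acceptor distance (Förster’s sixth-power law), so even a small artifactual change in the fluorophores’ effective separation, such as one induced by ATP/Mg binding, can masquerade as a large biological signal. A toy sketch, assuming an illustrative Förster radius of ~4.9 nm for a CFP-YFP pair (treat that number as an assumption, not a measurement):

```python
def fret_efficiency(r, r0=4.9):
    """Förster energy-transfer efficiency at donor-acceptor distance r (nm).

    E = 1 / (1 + (r / r0)^6), where r0 is the Förster radius at which
    efficiency is exactly 50%.  The r0 default here is an illustrative
    value for CFP-YFP pairs, not taken from the paper.
    """
    return 1.0 / (1.0 + (r / r0) ** 6)

# Small distance changes produce large efficiency (and hence ratio)
# changes near r0 -- which is why an artifact that perturbs the
# fluorophores can look like a real Ca2+ or voltage signal.
for r in (4.0, 5.0, 6.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r):.2f}")
```

Moving the pair from 4 nm to 6 nm apart drops the efficiency severalfold, which is exactly the sensitivity that makes FRET useful as an indicator and, per this paper, vulnerable to confounds.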
What is the right level of biological realism to model neuronal systems in order to understand their computational properties? Some recent papers may help shed some light on the subject. Models of the computational properties of local networks of neurons are starting to come into their own. This year has already seen at least two articles published in experimentalist journals based on the same core of theoretical work.
To bring you up to speed, I need to remind you what is going on in the world of experimental neuroscience.
Experimentalists are now able to record the single-cell activities of a whole population of neurons simultaneously. From Briggman, Abarbanel, Kristan (2006):
By using multi-electrode arrays or optical imaging, investigators can now record from many individual neurons in various parts of nervous systems simultaneously while an animal performs sensory, motor or cognitive tasks. Given the large multidimensional datasets that are now routinely generated, it is often not obvious how to find meaningful results within the data.
This paper goes on to provide a nice overview of the mathematical methods researchers are using to grapple with the challenge of understanding the dynamics of the neural systems they record from. The authors make the case that conceptual progress is needed in interpreting the data these methods yield. How can we understand what computations these neurons are collectively performing?
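One workhorse from that mathematical toolbox is dimensionality reduction, e.g. principal component analysis, which asks how many dimensions a population’s activity really occupies. A minimal sketch on simulated data (all numbers are illustrative and not drawn from any of the papers discussed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "population recording": 100 neurons whose activity is driven by
# 2 shared latent signals plus small independent noise.
T, N, K = 500, 100, 2
latents = rng.standard_normal((T, K))          # hidden low-dim dynamics
weights = rng.standard_normal((K, N))          # how each neuron reads them
rates = latents @ weights + 0.1 * rng.standard_normal((T, N))

# PCA: eigendecompose the covariance of the mean-centered activity.
centered = rates - rates.mean(axis=0)
cov = centered.T @ centered / (T - 1)
eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
var_explained = eigvals[::-1] / eigvals.sum()  # descending fraction per PC

# Nearly all the variance lives in 2 dimensions, matching the 2 latents.
print(f"top-2 variance explained: {var_explained[:2].sum():.3f}")
```

The same calculation run on real multi-electrode data is one way to attack the question above: if a few components explain most of the variance, the population may be implementing a low-dimensional computation despite its many neurons.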
(Incidentally, this topic is being explored in a conference happening this week at the Los Alamos National Laboratory, which, according to one of the conference session chairs, is intended to help shape future directions for the lab. Hopefully there will be webcasts from this conference.)
Postdoctoral/research scientist positions are available in the inter-disciplinary group of Dmitri Chklovskii at the new Janelia Farm Research Campus of the Howard Hughes Medical Institute located in the suburbs of Washington, D.C. Candidates are expected to have a PhD in neuroscience, physics, computer science or electrical engineering. Most of the work is theoretical or computational and is done in collaboration with several experimental laboratories. Successful applicants will work on projects centered on neuronal circuits such as high-throughput reconstruction of wiring diagrams as well as combining structural and physiological data to infer circuit function. Salary will be commensurate with qualifications. For more information about research directions in the group please see: http://www.hhmi.org/research/groupleaders/chklovskii.html
Interested applicants should send their CV and a statement of research interests to mitya (at) janelia.hhmi.org, and arrange for three recommendation letters to be emailed to me.
Videos have been posted from the Conference on Brain Network Dynamics held at the University of California at Berkeley on January 26-27, 2007.
“The conference will explore the dynamics of distributed brain function from multidisciplinary perspectives. It is being organized at Berkeley in honor of Walter Freeman for his contributions to brain dynamics over the past five decades, on the occasion of his 80th birthday.”
Lots of interesting talks.
See the program
— posted by Charles Cadieu —
(UPDATE 03-05-2007: Upon closer inspection, it is clear that while the surgery has given the woman sensation from the nerves of her missing hand when the surface of her chest is touched, the arm she was fitted with at the time of publication did not relay sensory signals from the arm back to her chest. Once she is fitted with an arm that has the appropriate sensors, however, she will not have to undergo further surgery to get this kind of direct feedback. Thanks to astute readers for pointing this out.)
The Guardian reports on an article published today in the Lancet about a successful surgical procedure giving an amputee a bionic arm that both responds to motor commands from her remaining motor nerves and provides sensory feedback to her sensory nerves when it is touched. If there were any doubt left, the worlds of neural prosthetics and brain-machine interfaces have officially collided.
The Lancet article is accompanied by two movies of the woman using the arm that you should really check out.
Given the recent progress in the decoding of motor signals from the brain and older progress on sensory feedback from neural prosthetics, this was to be expected. Nonetheless, watching this woman use her arm brings the message home in a visceral way. The spooky thesis of MIT CSAIL’s Rodney Brooks that “we will become a merger between flesh and machines” is one step closer today.
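To give a feel for what “decoding of motor signals” means in its simplest form, here is a toy linear (ridge-regression) decoder that recovers hand velocity from simulated firing rates. This is an illustrative sketch only; it is not the method used in the Lancet study, and every number in it is made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated recording: 50 neurons whose rates are noisy linear functions
# of a 2-D hand velocity (a crude stand-in for motor cortical tuning).
T, N = 400, 50
velocity = rng.standard_normal((T, 2))         # true x/y hand velocity
tuning = rng.standard_normal((2, N))           # each neuron's preferred direction
rates = velocity @ tuning + 0.5 * rng.standard_normal((T, N))

# Ridge regression: solve (R^T R + lam*I) W = R^T V for the decoder W.
lam = 1.0  # small penalty keeps the solve well-conditioned
W = np.linalg.solve(rates.T @ rates + lam * np.eye(N), rates.T @ velocity)
predicted = rates @ W

# How much of the velocity variance does the decoder recover?
r2 = 1 - ((velocity - predicted) ** 2).sum() / (velocity ** 2).sum()
print(f"decoder R^2: {r2:.2f}")
```

With enough tuned neurons, even this bare-bones linear readout recovers the movement signal well, which is the core reason prosthetic arms like the one in this story can be driven from residual nerve or brain activity.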