From the department of “we are living in a sci-fi movie”, here’s a video of a bionic arm that uses “fluidic muscles”.
Interview with Steve Grand on building human level artificial intelligence at Machines Like Us. Really interesting. Via Chris Chatham at (the excellent) Developing Intelligence.
In particular, MLU asks why his current project to create an android was built as a physical robot rather than as a simulation. His answer, that you can cheat too much in a simulation, is familiar to those from the Brooksian school of embodied intelligence. He says that simulations still aren’t good enough to provide the kinds of physical constraints, such as gravity and friction, that you get when building real robots.
However, with the availability of free 3D simulation environments that handle physics, like Breve, we are getting a lot closer. Building a robot within a simulation like this, particularly when you don’t modify the code of the simulation environment itself, is a terrific way to balance the competing interests of keeping yourself honest and avoiding the painstaking mechanical engineering required to construct complicated robots. This kind of environment lets you build a body with primary sensory systems and primary motor outputs much as one would with a real robot.
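To make the idea concrete, here is a toy sketch, assuming a hypothetical minimal environment rather than Breve’s actual API: the physics loop enforces gravity and friction, and the “brain” can only push through a motor channel and read a primary sensor, with no way to edit the physics away.

```python
# A toy sketch of "in silico embodiment". All names here are hypothetical;
# this is not Breve's actual API. The environment imposes gravity and
# friction; the controller only sees sense() and step(motor_force).

G = 9.81      # gravity, m/s^2
MU = 0.4      # sliding-friction coefficient
DT = 0.001    # physics timestep, s

class Body:
    """A 1-D block sliding on the ground inside the simulated physics."""
    def __init__(self, v0):
        self.x, self.v, self.mass = 0.0, v0, 1.0

    def step(self, motor_force=0.0):
        """One physics tick. Friction is enforced by the environment,
        not something the controller code can cheat past."""
        if self.v != 0.0:
            sign = 1.0 if self.v > 0 else -1.0
            friction = -sign * MU * self.mass * G
        else:
            friction = 0.0
        a = (motor_force + friction) / self.mass
        v_new = self.v + a * DT
        # Kinetic friction can stop the body, but never reverse it by itself.
        if motor_force == 0.0 and v_new * self.v < 0:
            v_new = 0.0
        self.x += self.v * DT
        self.v = v_new

    def sense(self):
        return self.x  # primary sensory channel: position

# Coast from 2 m/s with the motor off: friction alone stops the body near
# the analytic stopping distance v0^2 / (2 * mu * g) ~= 0.51 m.
body = Body(v0=2.0)
for _ in range(2000):
    body.step(motor_force=0.0)
print(round(body.sense(), 2))  # ~0.51
```

The point of the design is that the controller interacts with the world only through the same narrow channels a real robot would have, which is what keeps the simulation honest.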
Why more people haven’t adopted this kind of “in silico embodiment” philosophy is, I think, a result of taking Brooks a bit too literally. Brooks’s idea of embodiment is very well founded, but back when he first made those statements there really was no good way to simulate the physics of an embodied creature faithfully. Today that is not the case. Moreover, building real physical robots is great if you have a lot of time or an engineering team, but it’s a huge investment that distracts from the real problem of understanding the nature of intelligence. The fact that the world has extremely few labs that can make that investment is one of the many reasons there aren’t more serious strong AI researchers anymore.
Update: Steve apparently received a few comments along these lines and replies.
There are of course biped robots that walk. The Honda Asimo is the best known. But the Asimo doesn’t balance dynamically. Its walk is preprogrammed; if you had it walk twice across the same space, it would put its feet down in exactly the same place the second time. And of course the floor has to be hard and flat.
Dynamically balancing—the way we walk—is much harder. It looks fairly smooth when we do it, but it’s really a controlled fall. At any given moment you have to think (or at least, your body does) about which direction you’re falling, and put your foot down in exactly the right place to push you in the direction you want to go. Practice makes it seem easy to us, but it’s a very hard problem to solve. Something as tall as a human becomes irretrievably off balance very rapidly.
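The “controlled fall” intuition has a standard formalization in the linear inverted pendulum model: given center-of-mass height h, position x, and velocity v, the point where the foot must land to bring the body to rest (the capture point) is x + v·sqrt(h/g). A minimal sketch, with illustrative numbers:

```python
# A minimal sketch of the "controlled fall" idea, using the linear inverted
# pendulum model: the body tips away from its support foot, and stepping
# onto the capture point x + v * sqrt(h / g) arrests the fall.
import math

G = 9.81   # gravity, m/s^2
H = 1.0    # height of the center of mass, m

def capture_point(x, v):
    """Where to place the foot so the falling body comes to rest over it."""
    return x + v * math.sqrt(H / G)

def step_dynamics(x, v, foot, dt=0.001):
    """Linear inverted pendulum: the CoM accelerates away from the foot."""
    a = (G / H) * (x - foot)
    return x + v * dt, v + a * dt

# Start 0.05 m ahead of the current foot, moving forward at 0.3 m/s:
# the body is falling. Step exactly onto the capture point...
x, v = 0.05, 0.3
foot = capture_point(x, v)
for _ in range(5000):
    x, v = step_dynamics(x, v, foot)

print(round(v, 3))  # velocity decays toward 0: the fall is arrested
```

Place the foot short of that point and the body keeps tipping forward; place it beyond and the body falls backward, which is why the placement has to be exactly right at every step.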
This is important because robots that dynamically balance can handle arbitrary terrains and can withstand attempts to knock them over, much like the BigDog robot we reported on earlier, whereas preprogrammed robots like Asimo cannot. If Honda were smart, they’d buy Anybots ASAP.
Here’s a link to the Slashdot article on this.
(UPDATE 03-05-2007 – Upon closer inspection, it is clear that while the surgery has given the woman sensation from the nerves of her missing hand when the surface of her chest is touched, the arm she was fitted with at the time of publication did not relay sensory signals back to her chest. Once she is fitted with an arm that has the appropriate sensors, however, she will not need further surgery to get this kind of direct feedback. Thanks to the astute readers who pointed this out.)
The Guardian reports on an article published today in the Lancet about a successful surgical procedure giving an amputee a bionic arm that both responds to motor commands from her remaining motor nerves to control it and provides sensory feedback to sensory nerves when it is touched. If there was any doubt left, the worlds of neural prosthetics and brain-machine interfaces have officially collided.
The Lancet article is accompanied by two movies of the woman using the arm that you should really check out.
Given the recent progress in the decoding of motor signals from the brain and older progress on sensory feedback from neural prosthetics, this was to be expected. Nonetheless, watching this woman use her arm brings the message home in a visceral way. The spooky thesis of MIT CSAIL’s Rodney Brooks that “we will become a merger between flesh and machines” is one step closer today.
I am a prospective graduate student interested in taking up Neural Engineering under EE or Biomedical Engg for research. But I have a lot of concerns and need help from a person who knows about the field well.
1. I have studied VLSI, DSP, Image Processing, Wireless Communication, Control Systems and Embedded Systems as graduate and undergraduate courses, and I have some research interest in Neural Networks and Machine Learning (that’s how I got interested in Neural Engg and Prosthetics). Which of these subjects will be of help in Neural Engg/Prosthetics research, and which will be most relevant? Please list them in order of relevance (high -> low).
2. What are the applications of the research?
3. What is the research and JOB scope for this field? Are there any companies that recruit people with this specialisation? How is the job scene in academia? How many universities are doing research in this field in the US? Please let me know about career progression in academia, e.g. how much time does it take to get a full-time academic position after a PhD?
4. Especially, what are the applications of this research in Robotics?
5. What are the current problems and research themes in universities?
6. What imaging technologies are used in this research?
Though my queries may seem a bit amateurish, it is very important for me to get clarity on these doubts.
Hope my queries will be answered.
Thanking all of you in advance,
New Scientist has the scoop on a state-of-the-art walking robot that navigates uneven terrain, slopes, and significant kicks.
The movie has to be seen to be believed. The robot is called BigDog and it looks like a CGI special effect. But don’t let the film industry desensitize you to the accomplishment here. The robotics engineering required to do this on real terrain is extraordinary. The engineers over at Boston Dynamics really deserve kudos for this.
For those with theoretical interests in machine-learning-flavored AI, the ML Theory blog run by John Langford is highly recommended. Though it started only recently, Langford and others have so far done an excellent job of commenting on both the science and the culture of theoretical learning research.