Interview with Steve Grand on building human level artificial intelligence at Machines Like Us. Really interesting. Via Chris Chatham at (the excellent) Developing Intelligence.
In particular, MLU asks why his current project to create an android was done as a physical robot rather than as a simulation. The answer, that you can cheat too much in a simulation, is familiar to those from the Brooksian school of embodied intelligence. He says that simulations still aren't good enough to provide the kinds of physical constraints, such as gravity and friction, that you get when building real robots.
However, with the availability of free 3D simulation environments that handle physics, like Breve, we are getting a lot closer. Building a robot within a simulation like this, particularly where you don't modify the code of the simulation environment itself, is a terrific way to balance the competing interests of keeping yourself honest and avoiding the painstaking mechanical engineering required to construct complicated robots. This kind of environment lets you build a body with primary sensory systems and primary motor outputs, much as you would with a real robot.
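To make the idea concrete, here is a minimal sketch of what "in silico embodiment" means in practice: the agent's "brain" interacts with a simulated physical world only through sensor readings and motor commands, never by reaching into the simulation state directly, and the world imposes constraints like gravity and friction. All of the names and numbers here are illustrative, not part of Breve or any real simulator's API.

```python
# Toy 1-D embodied simulation: a block on a floor that an agent pushes
# toward a target. The agent sees only a distance sensor and emits only
# a bounded motor force; gravity and friction constrain what it can do.

GRAVITY = 9.8    # m/s^2, magnitude of gravitational acceleration
FRICTION = 0.4   # kinetic friction coefficient (mass assumed 1 kg)
DT = 0.01        # integration timestep in seconds

class World:
    """The simulated physics: position and velocity of the block."""
    def __init__(self):
        self.x, self.vx = 0.0, 0.0

    def step(self, motor_force):
        # Kinetic friction opposes the direction of motion.
        if self.vx > 0:
            friction = -FRICTION * GRAVITY
        elif self.vx < 0:
            friction = FRICTION * GRAVITY
        else:
            friction = 0.0
        ax = motor_force + friction   # Newton's second law with m = 1
        self.vx += ax * DT
        self.x += self.vx * DT

    def sensor(self, target=5.0):
        """Primary sensory channel: signed distance to the target."""
        return target - self.x

class Agent:
    """The 'brain': a proportional controller over the sensor value.

    Crucially, it never reads world.x or world.vx directly, which is
    what keeps the simulation honest."""
    def act(self, distance):
        force = 2.0 * distance
        return max(-10.0, min(10.0, force))  # motors have limited power

world, agent = World(), Agent()
for _ in range(5000):                        # 50 simulated seconds
    world.step(agent.act(world.sensor()))
print(f"residual distance: {world.sensor():.2f}")
```

Because friction can hold the block in place once the motor force drops below the friction force, the agent settles near the target without ever reaching it exactly, the kind of messy physical regularity the embodiment argument says a disembodied program would never have to confront.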
I think the reason more researchers haven't adopted this kind of "in silico embodiment" philosophy is that they take Brooks' arguments a bit too literally. His case for embodiment is very well founded, but back when he first made those statements, there really were no good ways to simulate the physics of an embodied creature faithfully. Today that is no longer true. Moreover, building real physical robots is great if you have a lot of time or an engineering team, but it's a huge investment that distracts from the real problem of understanding the nature of intelligence. The fact that extremely few labs in the world can make that investment is one of the many reasons there aren't more serious strong AI researchers any more.
Update: Steve apparently received a few comments along these lines and has replied.