Bad-ass squirrels

In the new issue of PNAS, a totally awesome discovery about an infrared inter-species signalling system:

Ground squirrels not only heat up their tails to deter snake attacks, but they also seem to use the strategy selectively against infrared-sensitive snakes. The ultimate conclusion: when the bees are gone, the squirrels will inherit the earth…

You can check out an infrared-eye-view of squirrel/snake battles here, because I don’t know how to post movies on the internet yet.


Company Using "In Silico Embodiment" To Build Artificial Intelligence

If there’s one lesson to be learned from almost 60 years of AI research, it’s almost certainly to be skeptical of anyone who says they have found THE ANSWER to producing human-level intelligence from computers. Even so, I am intrigued by a new company’s approach to developing Artificial General Intelligence (AGI), a term meant to indicate Strong AI rather than Weak AI. That’s probably because its founder, Ben Goertzel, manages to skillfully walk the tightrope between staying conservative about how much they can realistically accomplish and still managing to inspire hope that their methodology has the potential to get close to AGI.

Continue reading

Out-of-body VR

If you see a virtual body in VR getting stroked by a stick at the same time that you are getting stroked by a stick, you might feel a sense of being the VR body. If you see from the perspective of a camera, and your chest is stroked by a stick at the same time that a stick is moving below the camera (where your “chest” would be if you were the camera), you might feel a sense of being where the camera is.

Continue reading

SciVee provides video supplements for academic publications

The supercomputer center in San Diego has created a cool site called SciVee for scientists to upload brief videos introducing/explaining their publications.

There is quite a bit of variety in the style of these short lectures (even though only a few are currently posted). Some simply list the key findings of the publication, while others do a much better job of making their work accessible by providing an introduction and context and avoiding technical jargon.

Steve Grand on Strong AI

Steve Grand

Interview with Steve Grand on building human level artificial intelligence at Machines Like Us. Really interesting. Via Chris Chatham at (the excellent) Developing Intelligence.

In particular, MLU asks why his current project to create an android was done as a physical robot rather than as a simulation. The answer, that you can cheat too much in a simulation, is familiar to those from the Brooksian school of embodied intelligence. He says that simulations still aren’t good enough to provide the kinds of physical constraints, like gravity and friction, that you get when building real robots.

However, with the availability of free 3D simulation environments that handle physics, like Breve, we are getting a lot closer. Building a robot within a simulation like this, particularly where you don’t modify the code of the simulation environment itself, is a terrific way to balance the competing interests of keeping yourself honest and avoiding the painstaking mechanical engineering required to construct complicated robots. This kind of environment allows you to build a body with primary sensory systems and primary motor outputs in much the same fashion as one would with real robots.

Why more researchers haven’t adopted this kind of “in silico embodiment” philosophy is, I think, the result of taking Brooks a bit too literally. Brooks’ idea of embodiment is very well founded, but back in the day when he first made those statements, there really were no good ways to simulate the physics of an embodied creature very faithfully. Today that is not the case. Moreover, building real physical robots is great if you have a lot of time, or an engineering team, but it’s a huge investment that distracts from the real problem of understanding the nature of intelligence. The fact that the world has extremely few labs that can make that investment is one of the many reasons there aren’t more serious strong AI researchers any more.

Update: Steve apparently received a few comments along these lines and replies.

NYTimes article on light-triggered stimulation

“It sounds like a science-fiction version of stupid pet tricks: by toggling a light switch, neuroscientists can set fruit flies a-leaping and mice a-twirling and stop worms in their squiggling tracks. But such feats, unveiled in the past two years, are proof that a new generation of genetic and optical technology can give researchers unprecedented power to turn on and off targeted sets of cells in the brain, and to do so by remote control…”

Reviews the use of photosensitive proteins in neuroscience and even gives a shout-out to Ed Boyden, of Stanford and MIT fame…

— Davie (who had the same advisor as Ed for about a day and is therefore 0.01% more famous by association)