Company Using "In Silico Embodiment" To Build Artificial Intelligence

If there’s one lesson to be learned from almost 60 years of AI research, it’s to be skeptical of anyone who claims to have found THE ANSWER to producing human-level intelligence from computers. Even so, I am intrigued by a new company’s approach to developing Artificial General Intelligence (AGI), a term meant to indicate Strong AI rather than Weak AI. That’s probably because its founder, Ben Goertzel, skillfully walks the tightrope between staying conservative about how much they can realistically accomplish and still inspiring hope that their methodology has the potential to get close to AGI.

This is best demonstrated in a recent talk that Goertzel gave at Google on his approach to AGI. His company, Novamente, has entered the interesting niche market of “creating intelligent agents for virtual worlds and MMOGs.” If you are surprised that a market exists for such a thing, you aren’t alone, but the company boasts a client list including “Northrop-Grumman, the NIH Clinical Center, the CDC, Global Health Exchange, CACI, Object Sciences Corporation, Zero Degrees, Think Passenger and [the] Electric Sheep Company.” The basic idea is to use the embodied intelligence approach in simulated environments to construct virtual agents that progress through stages of cognitive development.

After spending some time in Second Life (only to cancel my premium account after the unfortunate but inevitable gambling ban), I can almost see how a company might finance itself in the short term by selling Eliza-like virtual pets and automated shopkeepers to Second Life citizens (after all, Sony’s now-defunct AIBO project managed to finance a similar endeavor with real-world robotic pets). What’s more interesting is that the advent of massively multiplayer virtual worlds provides a novel opportunity to access nearly limitless training data from humans for a virtual agent, the kind of thing that researchers building real-world robots have to recruit undergraduates to get. Taking advantage of this resource for AI is a good idea, and the incentive structure of the company is such that incremental improvements in the intelligence of its agents ought to translate into greater profits. Anyone who figures out how to drive AI research with a short-term profit motive (rather than vague promises of long-term profit) gets my attention.
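To make that appeal concrete, here is a toy sketch of what harvesting player behavior as training data could look like: every human action observed in the world becomes an (observation, action) example an agent can imitate. The function names and the trivially simple “policy” are my own illustration, not anything Novamente has published.

```python
# Hypothetical sketch: logging human-avatar interactions in a virtual world
# as (observation, action) pairs, then imitating them. Illustrative only.
from collections import Counter

interaction_log = []  # accumulates (observation, human_action) pairs

def record_interaction(observation, human_action):
    """Called whenever a human player acts in view of the learning agent."""
    interaction_log.append((observation, human_action))

def most_common_action(observation):
    """A trivially simple 'policy': imitate the most frequent human action
    seen in this situation. A real system would generalize, not memorize."""
    actions = [a for obs, a in interaction_log if obs == observation]
    if not actions:
        return "idle"
    return Counter(actions).most_common(1)[0][0]

# Thousands of players supply data passively; no undergraduates required.
record_interaction(("sees", "ball"), "fetch")
record_interaction(("sees", "ball"), "fetch")
record_interaction(("sees", "stranger"), "greet")
print(most_common_action(("sees", "ball")))  # -> "fetch"
```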

Goertzel’s presentation is notable for its modesty. He acknowledges the classic problems of the AI enterprise, like the habit of promising algorithms that then fail to scale up due to combinatorial explosion, and of AI presentations, like hiding inconvenient details behind complex PowerPoint slides. Refreshingly, he pokes fun at these bad patterns as clichés to reassure the audience that he intends to avoid falling into the same traps, and he wins my respect for doing so. He lays out some of the details of his architecture, but my attention is drawn to his basic philosophy and approach more than anything else. He makes some interesting points about the combinatorial complexity of AI programs and, in the Q&A period, offers his views on competing architectures like that of Jeff Hawkins at Numenta.
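For readers who haven’t run into the combinatorial explosion problem firsthand, a quick back-of-the-envelope calculation shows why brute-force approaches that look fine on toy problems become hopeless as the search deepens. The numbers below are generic, not anything specific to Novamente’s architecture.

```python
# Back-of-the-envelope illustration of combinatorial explosion: a brute-force
# search over action sequences grows as branching_factor ** depth, which is
# why demos that work on toy problems routinely fail to scale up.
def search_tree_size(branching_factor, depth):
    """Total nodes in a full search tree of the given branching factor and depth."""
    return sum(branching_factor ** d for d in range(depth + 1))

for depth in (5, 10, 20):
    print(depth, search_tree_size(10, depth))
# depth 5  ->            111,111 nodes
# depth 10 ->     ~11,000,000,000 nodes
# depth 20 -> ~1.1e20 nodes: hopeless without heuristics or learned structure
```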

The other thing I like about Novamente is that it sponsors a research institute, the Artificial General Intelligence Research Institute, whose goals are roughly the same as Novamente’s but not for profit. Clearly there’s a conflict of interest here, but again, in the name of progress I’m willing to accept that. The institute is the organizing hub for Novamente’s ‘in silico embodiment’ virtual environment, AGISim, which is built on the Crystal Space 3D game platform and is open source. Unfortunately, real Newtonian physics integration is still on the to-do list, making AGISim a more limited platform than Breve. But then again, AGISim is being built specifically as an environment to integrate with the Novamente Cognition Engine (their ‘brain’), and as such is already more customized for creating environments for cognitive agents.
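At bottom, the ‘in silico embodiment’ idea is a tight perception-action loop between a simulated body and a cognition engine. The sketch below is my own caricature of that loop; the class and method names are invented for illustration and are not the actual AGISim or Novamente Cognition Engine API.

```python
# Minimal sketch of an embodiment loop: a 'brain' is wired to a simulated body
# through percepts and actions. All names here are assumptions for illustration.
class SimWorld:
    """Stand-in for a 3D sim such as AGISim: steps the world, returns percepts."""
    def __init__(self):
        self.t = 0

    def step(self, action):
        self.t += 1
        return {"time": self.t, "last_action": action, "teacher_says": "fetch the ball"}

class CognitionEngine:
    """Stand-in for the agent's 'brain': maps the latest percept to an action."""
    def decide(self, percept):
        if "ball" in percept.get("teacher_says", ""):
            return "move_toward_ball"
        return "explore"

world, brain = SimWorld(), CognitionEngine()
percept = world.step("noop")
for _ in range(3):
    action = brain.decide(percept)
    percept = world.step(action)  # embodiment: every decision is tested in the world
    print(percept["time"], action)
```

The point of the loop is that learning is driven by consequences in a shared environment (including a human teacher’s avatar), rather than by a disembodied corpus.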

My biggest critique of this approach is that Novamente doesn’t seem to be integrating enough brain science into its work. This comes through in Goertzel’s writings for the fringe journal Dynamical Psychology, edited by Goertzel himself. The journal’s goals are quite good: to understand “the patterns by which psychological processes unfold through time [and] the emergent, persistent structures which arise as a consequence of this unfolding”. Not being grounded in neuroscience, however, leads Goertzel and others to describe psychological processes with impenetrable statements like the following:

There are elegant abstract-algebraic symmetries lurking within the social substructures of the self. The internal structure of the self may well be that of a tetrahedral mirrorhouse and related more complex packing structures; and the Fulleresque vision of an iterating dynamical system of adjacent tetrahedral mirrorhouses may well be an accurate model of critical aspects of the emergent cognitive dynamics of societies of social minds.

Nonetheless, I wish Goertzel and Novamente luck, and look forward to seeing what they can accomplish.
