Festo A.G. bionic learning network 2009 video:
You’ve got to see this to believe it…!
The journal Frontiers in Neuroscience, edited by Idan Segev, has made it to Volume 3, Issue 1. Having launched last year at the Society for Neuroscience conference, it's probably the newest neuroscience-related journal.
I’m a fan of it because it is an open-access journal featuring a “tiered system” and more. From their website:
The Frontiers Journal Series is not just another journal. It is a new approach to scientific publishing. As service to scientists, it is driven by researchers for researchers but it also serves the interests of the general public. Frontiers disseminates research in a tiered system that begins with original articles submitted to Specialty Journals. It evaluates research truly democratically and objectively based on the reading activity of the scientific communities and the public. And it drives the most outstanding and relevant research up to the next tier journals, the Field Journals.
iRobot-looking robots talking to you, for real? The video is worth watching to see the exciting things coming out of the Personal Robotics Group recently.
From the page:
We are developing a team of 4 small mobile humanoid robots that possess a novel combination of mobility, moderate dexterity, and human-centric communication and interaction abilities. […] The purpose of this platform is to support research and education goals in human-robot interaction, teaming, and social learning. In particular, the small footprint of the robot (roughly the size of a 3 year old child) allows multiple robots to operate safely within a typical laboratory floor space.
NSF’s Emerging Frontiers in Research and Innovation (EFRI) office funded 4 very futuristic neuroengineering grants.
- Deep learning in mammalian cortex
- Studying neural networks in vitro with an innovative patch clamp array
- Determining how the brain controls the hand for robotics
- In vitro power grid simulation using real neurons
Disclaimer: I was involved with the second proposal on this page.
An extremely interesting trend in neuroscience has been to use the language of Control Theory to explain brain function. A recent paper by Shadmehr and Krakauer does a very nice job of summarizing this trend and assembling a comprehensive theory of how the brain controls the body. Using control theory, they put forward a mathematically precise description of their theory. Because their theory uses blocks that are direct analogues of specific brain regions like the basal ganglia, motor cortex, and cerebellum, they can use brain lesion studies to undergird their ideas about these components. From the paper:
The theory explains that in order to make a movement, our brain needs to solve three kinds of problems: we need to be able to accurately predict the sensory consequences of our motor commands (this is called system identification), we need to combine these predictions with actual sensory feedback to form a belief about the state of our body and the world (called state estimation), and then given this belief about the state of our body and the world, we have to adjust the gains of the sensorimotor feedback loops so that our movements maximize some measure of performance (called optimal control).
At the heart of the approach is the idea that we make movements to achieve a rewarding state. This crucial description of why we are making a movement, i.e., the rewards we expect to get and the costs we expect to pay, determines how quickly we move, what trajectory we choose to execute, and how we will respond to sensory feedback.
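The state-estimation step they describe has a standard mathematical form: a Kalman filter that blends a forward-model prediction of a motor command's consequences with noisy sensory feedback, weighting each by its reliability. A minimal one-dimensional sketch (the gains and noise variances here are illustrative assumptions, not values from the paper):

```python
# Scalar Kalman filter sketch of sensorimotor state estimation:
# predict the sensory consequence of a motor command (forward
# model), then blend that prediction with noisy feedback.

def kalman_step(x_est, p_est, u, y, a=1.0, b=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p_est : prior belief about body state and its variance
    u            : motor command (control input)
    y            : noisy sensory observation
    a, b         : assumed state and control gains
    q, r         : process and sensory noise variances
    """
    # Predict: forward model of the command's consequence
    x_pred = a * x_est + b * u
    p_pred = a * a * p_est + q
    # Update: weigh prediction vs. feedback by their reliabilities
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (y - x_pred)  # updated belief
    p_new = (1 - k) * p_pred
    return x_new, p_new

# One step: predicted position 0.5, observed 0.6 -> belief lands
# between the two, closer to the (relatively reliable) prediction.
x, p = kalman_step(x_est=0.0, p_est=1.0, u=0.5, y=0.6)
```

Noisier senses (large `r`) push the belief toward the forward-model prediction; a poor forward model (large `q`) pushes it toward the raw feedback, which is the intuition behind the "combine predictions with actual sensory feedback" step in the quote above.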
This approach of describing brain lesion studies in the context of a well-thought out theory ought to be further encouraged.
Researchers at the University of Nevada, Reno have an interesting and ambitious setup for doing research in AI that they describe in a recent paper.
From the paper:
We define virtual neurorobotics as follows: a computer-facilitated behavioral loop wherein a human interacts with a projected robot that meets five criteria: (1) the robot is sufficiently embodied for the human to tentatively accept the robot as a social partner, (2) the loop operates in real time, with no pre-specified parcellation into receptive and responsive time windows, (3) the cognitive control is a neuromorphic brain emulation incorporating realistic neuronal dynamics whose time constants reflect synaptic activation and learning, membrane and circuitry properties, and (4) the neuromorphic architecture is expandable to progressively larger scale and complexity to track brain development, (5) the neuromorphic architecture can potentially provide circuitry underlying intrinsic motivation and intentionality, which physiologically is best described as “emotional” rather than rule-based drive.
What’s interesting to me about this is the combination of an embodied robot in a virtual world with a neurally inspired controller for that robot. While there are pros and cons to embodiment in a virtual world (some of which have been touched on here before), I think that if your priority is closing the loop between embodiment and research on neural systems, this kind of approach cannot be ignored.
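The behavioral loop the authors define reduces to a simple cycle: sense, update neural dynamics, emit a motor command, update the world, repeat in real time. A toy sketch of that cycle, using a single leaky integrate-and-fire neuron as a stand-in controller (a deliberate simplification; the paper's architecture is a far larger neuromorphic emulation):

```python
# Minimal closed behavioral loop: a leaky integrate-and-fire
# neuron (hypothetical stand-in for the neuromorphic controller)
# drives a 1-D virtual robot toward a target. Each timestep:
# sensor reading -> membrane dynamics -> spike/motor command ->
# world update.

def run_loop(target=1.0, steps=200, dt=0.01):
    pos = 0.0                # robot position in the virtual world
    v = 0.0                  # neuron membrane potential
    tau, thresh = 0.05, 1.0  # membrane time constant, spike threshold
    for _ in range(steps):
        sense = target - pos                 # sensory input: error signal
        v += (dt / tau) * (-v + 5.0 * sense)  # leaky integration
        if v >= thresh:                      # spike -> motor command
            v = 0.0                          # reset after spike
            pos += 0.02                      # small forward step
    return pos

final_pos = run_loop()
```

As the robot nears the target, the sensory drive shrinks, the neuron stops spiking, and the robot halts short of the goal; closing the gap further is exactly the kind of problem the learning and motivation circuitry in criteria (3) and (5) is meant to address.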