The Moral Life of Babies – NYTimes

Paul Bloom discusses research on the morality of young children and the ways in which their morality is similar to, and different from, that of adults. Concise descriptions of the supporting experiments are given throughout.

Basically, babies prefer nice people over mean people, but they prefer people who punish mean people over people who reward mean people. Babies are not impartial, though; for example, they give favorable treatment to other babies who are wearing the same T-shirt as they are.

The article also covers infant cognition more generally. Experiments show that, at various young ages, “…babies think of objects largely as adults do, as connected masses that move as units, that are solid and subject to gravity and that move in continuous paths through space and time,” that they “…expect people to move rationally in accordance with their beliefs and desires…”, and that they “…know that other people can have false beliefs”.

Evidence of similar linguistic capabilities in Neanderthals

Apparently, in a few years, the complete Neanderthal genome will be available [NYT], which could in principle make it possible to bring Neanderthals back to life. Currently, there is good sequence data available over 63% of the genome. (I’m amazed that, given fragmented DNA from bone, Neanderthal sequence can be distinguished from contaminating human DNA, but perhaps this problem is solved by having high enough coverage, i.e., multiple fragments spanning the same region.)
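As an aside (my own back-of-the-envelope reasoning, not from the article): 63% is almost exactly what the classic Lander–Waterman model predicts for about 1x average coverage, since the expected fraction of a genome covered at least once by randomly placed reads is 1 − e^(−c) at mean coverage c. A minimal sketch:

```python
import math

def fraction_covered(c):
    """Lander-Waterman estimate: expected fraction of a genome
    covered at least once by randomly placed reads at mean coverage c."""
    return 1 - math.exp(-c)

# At ~1x mean coverage, roughly 63% of bases are hit at least once,
# which matches the 63% figure quoted above.
print(round(fraction_covered(1.0), 3))  # ~0.632
```

Higher coverage would also help with the contamination worry: when several independent fragments overlap a position, a lone discordant read is easier to flag as contamination or DNA damage.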

Also, it looks like Neanderthals share the FOXP2 variant that humans have:

Archaeologists have long debated whether Neanderthals could speak, and they have eagerly awaited Dr. Pääbo’s analysis of the Neanderthal FOXP2, a gene essential for language. Modern humans have two changes in FOXP2 that are not found in chimpanzees, and that presumably evolved to make speech possible. Dr. Pääbo said Neanderthals had the same two changes in their version of the FOXP2 gene. But many other genes are involved in language, so it is too early to say whether Neanderthals could speak.

UPDATE: A few days ago, I heard Wolfgang Enard, one of Pääbo’s postdocs, speak on a fascinating project in which the human version of FOXP2 was knocked into mice (replacing the endogenous mouse version). Although the phenotypic effects were subtle, the approach itself is quite revolutionary: putting human versions of genes into model organisms to see how the subsequent evolution of a gene has changed its function. I wonder what other genes might be amenable to this approach.

Time for neuroscientists to speak up?

Recently, I was pointed to this article in the WSJ (“A Pentagon Agency Is Looking at Brains — And Raising Eyebrows”) by Sharon Begley. It touches on noninvasive recording techniques for assessing affective state and on cognitive enhancers like the ampakine CX717 (previously mentioned on Neurodudes here and here).

It was the very last paragraph that caught my eye:

Ever since the atomic bomb, physicists have known that their work has potential military uses, and have spoken up about it. But on the morality of sending orders directly to the brain (of a soldier, employee, child, prisoner …), or of devices that read thoughts and intentions from afar, neuroscientists have been strangely silent. The time to speak up is before the genie is out of the bottle.

Whoa! To me, the physicists who spoke out early on against nuclear proliferation seemed (and still seem) both very courageous and prescient in their ideas. Are we neuroscientists dropping the ball? I would love to start a discussion on this subject and to hear your responses (both from neuro people and others) in the comments below.

I’ll start: I personally don’t think the arena of neural enhancement/intrusion (mind reading, mind control, cognitive enhancement, etc.) is comparable to the sheer destructive power of nuclear weapons. I do see in the near future the unfortunate potential for abuse of neurotechnology and violation of personal freedoms, but the threat does not seem as horrifying or deadly. Still, if neurotechnology allows governments greater control over their citizens, it seems reasonable that the scientists who enable such technologies should intervene. Perhaps it is time for a neural bill of rights, which, similar to the freedoms granted by the US Bill of Rights, would clearly state what aspects of a person’s mental state or capacity cannot be infringed upon without that person’s permission. Thoughts?

Newsome Wants Electrode In Own Brain

Stanford Neuroscientist Bill Newsome wants to implant an electrode in his own brain to study consciousness in ways that would be difficult with volunteer human subjects.

When considered alongside the story of Kevin Warwick, who had a 100-electrode array implanted in his arm in 2002 in order to study electrical signals from his hand, one must wonder: is this an emerging trend?

From the article:

TR: Do you really want to do this?

BN: Well, I’ve thought about it very carefully. I’ve talked to neurosurgeons, both in the United States and outside the country where the regulatory environment is less strict, about how practical and risky it is. If the risk of serious postsurgical complications was one in one hundred, I wouldn’t do it. If it was one in one thousand, I would seriously consider doing it. To my chagrin, most surgeons estimate the risk to be somewhere in between my benchmarks.

–Stephen

The Most Dangerous Idea (Apparently)

So, Edge has a new question for 2006 for its All-Stars of Academia to answer: What is your dangerous idea? (Suggested to Edge by Steven Pinker, who perhaps got the idea from a colloquium series at his old haunts.)

Offhand, one might expect a broad range of perceived dangerous ideas, varying by research interests and such. What’s surprising is that many of the luminaries picked the same “most dangerous idea”: that, as neuroscience progresses, popular realization of the “astonishing hypothesis” (that the mind is the brain) will create a potentially cataclysmic upheaval of society as we know it, with profound negative moral implications as people claim less responsibility for their actions.

Of course, this just isn’t true. But would you believe that Paul Bloom, V. S. Ramachandran, John Horgan, Andy Clark, Marc Hauser, Clay Shirky, Eric Kandel, John Allen Paulos, and, in a more genetic context, Jerry Coyne and Craig Venter are all very worried about this issue? (And I didn’t even read 50% of the Edge dangerous ideas… there might be even more.) Is this really the most dangerous idea out there to all of these talented thinkers?

I feel strongly that science and morality have always been separate domains, and any worry that, by “debunking” the mind, we automatically become immoral machines is just ridiculous. Through this scientific knowledge we might gain some humility, and perhaps better see our close relatedness to nonhuman primates and our place in nature, but we’re not going to flip out and become crazed zombies. This just isn’t going to happen.

Does anybody else think that this just isn’t a truly dangerous idea (although certainly an “astonishing” one, in the Crick sense)? Or am I wrong here?

Samples of academic worrying after the jump.

His Holiness's Message: Better living through chemicals (or electrodes)

His Holiness has spoken. He wants neuro-drugs to take and electrodes stuck in his brain so that he doesn’t have to spend hours meditating each day. (Enlightenment now!) If you want to do hot stuff, study physics or brain science. His interest in neuroscience stems from a long-standing interest in body hair. Yes, body hair. Americans need to figure their own way through this whole intelligent design business. Not all antidepressants are alike; for instance, the Dalai Lama is against tranquilizers. Definitely against tranquilizers. And, perhaps most surprisingly, His Holiness approves of animal research — when it’s done right and with respect.

Minute-by-minute liveblog follows after the jump.