The Most Dangerous Idea (Apparently)

So, Edge has a new question for 2006 for its All-Stars of Academia to answer: What is your dangerous idea? (Suggested to Edge by Steven Pinker, who perhaps got the idea from a colloquium series at his old haunting grounds.)

Offhand, one might expect a broad range of perceived dangerous ideas, varying by research interests and such. What’s surprising is that many of the luminaries think the “most dangerous idea” is the same particular idea: as neuroscience progresses, popular realization of the “astonishing hypothesis” — that mind is brain — will create a potentially cataclysmic upheaval of society as we know it, with profound (negative) moral implications as people claim less responsibility for their actions.

Of course, this just isn’t true. But, would you believe that
Paul Bloom,
VS Ramachandran,
John Horgan,
Andy Clark,
Marc Hauser,
Clay Shirky,
Eric Kandel,
John Allen Paulos,
and, in a more genetic context, Jerry Coyne and Craig Venter
are all very worried about this issue? (And I didn’t even read 50% of the Edge dangerous ideas… there might be even more… ) Is this really the most dangerous idea out there to all of these talented thinkers?

I feel strongly that science and morality have always been separate domains, and that any worry that “debunking” the mind will automatically turn us into immoral machines is just ridiculous. Through this scientific knowledge, we might gain some humility, maybe better see our close relatedness to nonhuman primates and our place in nature, etc., but we’re not going to flip out and become crazed zombies. This just isn’t going to happen.

Does anybody else think that this just isn’t a truly dangerous idea (although certainly an “astonishing” one, in the Crick sense)? Or am I wrong here?

Samples of academic worrying after the jump.

“Free will is going away. Time to redesign society to take that into account.” – Clay Shirky
“In contrast, the widespread rejection of the soul would have profound moral and legal consequences.” – Paul Bloom
“If all this seems dehumanizing, you haven’t seen anything yet.” – VS Ramachandran
“The Depressing, Dangerous Hypothesis: We Have No Souls.” – John Horgan
“Revealing the genetic basis of personality and behavior will create societal conflicts” – J. Craig Venter
“Unfortunately, what appears to be a rather modest proposal on some counts, is dangerous on another. It is dangerous to those who abhor biologically grounded theories on the often misinterpreted perspective that biology determines our fate, derails free will, and erases the soul.” – Marc Hauser

Seems like a lot of worrying to me over very little…

5 thoughts on “The Most Dangerous Idea (Apparently)”

  1. Hmm… haven’t read the whole article yet, just the ones you pointed to. First of all, several of those guys are talking about different ideas.

    In order of what I think is least to most dangerous,

    Some of them think the triumph of materialism, i.e. “you are a machine with no soul,” is dangerous, in the “boy, that’s a scary/depressing thought” sense.

    Others think that the philosophical idea of a lack of free will is dangerous to society (which is what you think is ridiculous).

    Others think the philosophical idea of the inequality of human beings is dangerous.

    Others (Shirky) think that our growing power to predict and manipulate each other is dangerous.

    my opinions:

    Personally I think the soul one is actually the most “dangerous” in the sense that it’s the issue that’s most important to me, and, I believe, ultimately to all humanity. But I’m not actually worried about that one so much; I know I’m conscious and that seems pretty magical to me. Even if my thoughts and actions are merely determined/computed, the conscious experience itself is still sublime.

    If you had asked me a week ago if the demise of the philosophy of free will was dangerous to society, I would have agreed with you. But after reading these I’m getting chilled. Certainly, in the near term, it’s no big problem. But in the long term, if all of society has “viscerally accepted” this proposition, it probably would shake things up a bit. Free will is at the heart of our Enlightenment-based political system. Even if you don’t believe in it yourself, it would be difficult to extrapolate all of the consequences correctly if you were raised in a society that did. So I don’t know what a society that wasn’t based on free will would be like, but my debater’s intuition tells me that it would be significantly less… democratic. Worse, this would drive a final stake through the heart of the romantic concepts of freedom, honor, and nobility that some of us find so spiritually appealing.

    Even a week ago, I was already worried about the possibility of science destroying the principle of the equality of human beings. This is another pillar of our political system, and one that is much easier to extrapolate on, probably because just half a century ago, some educated people DID believe that human beings were of different quality (I’m talking about racism, of course). If human beings are inherently unequal, and if we can even begin to objectively identify who is better than others, then those who seem to be of better quality will quickly declare themselves nobility and oppress everyone else. I think this can happen even without conclusive proof that they are better, or the ability to classify people with a high degree of accuracy; I think this ugly side of human nature will take any excuse it can get. All that is needed is enough shoddy “evidence” to convince the majority of citizens who don’t care about politics or science that the debate is too arcane for them to wade into. I think it’s quite possible that democracy itself will be abolished (one person, one vote? why??). If you’ve ever read the Dune books, there’s a good case to be made that feudalism is the best political system so far to jibe with the ethics of inherent inequality.

    Finally, the most dangerous in my view is the possibility that we (and by we of course I don’t mean us, I mean the symbiotic machine minds that we call corporations and governments) will gain vastly more power to predict, manipulate, and even outright control the actions of individuals against their proper judgement or will.


  2. It’s not an idea I find at all dangerous, and apparently you don’t either…but I guarantee you, there are a billion or more people out there who will explode over it. It’s a proposition in direct opposition to most religions.

    It’s also an idea that will be distorted and exploited to rationalize evil behavior, à la Social Darwinism.

    I think, though, that those arguments that it is devastating because it demonstrates the nonexistence of the soul, that it has moral consequences, or that it is dehumanizing are pretty darned bogus. The idea blows away a fog of illusion. It doesn’t change what we are. We’ve been moral beings for millennia without souls — why should being aware of the reality change anything?


  3. In the same way that global warming doesn’t happen overnight in the way depicted in “The Day After Tomorrow”, the impact of the “popular realization” of the Astonishing Hypothesis is similarly unlikely to create a dramatic flash point of chaos–WITHOUT A TANGIBLE FOCAL POINT.

    First of all, we have to be very careful about what we mean by “popular realization”. We might look to the way the popularization of evolution is playing out for some guidance:

    “Darwin’s Dangerous Idea” (to quote the Dennett book title) has been with modern society for almost 150 years. Americans are still just below a 50% rate of belief that he was right. I’m not sure you could say that in America the idea has been “popularly realized” yet.

    The acceptance of Astonishing Hypothesis is going to be evaluated by society in the context of the acceptance of evolution. I suspect the course of its acceptance is going to feel a lot like “evolution 2.0” in terms of its societal impact. We can probably expect some kind of “Scopes Monkey Trial”/Dover-like legal demonstrations preceded by lots of academic philosophical hand-wringing. But social conflict on this scale is not much of a worry really.

    The real worry is not the idea but what it leads to in the physical world; in other words the idea of matter-energy equivalence isn’t dangerous but nuclear weapons are. The theory of evolution at the end of the day doesn’t produce any manifestations that force society to stand up and take notice, and thus its social impact, though significant, has been slow to spread broadly. The Astonishing Hypothesis is likely to follow this same course unless…

    …we become technically proficient enough to create artificial brains. The difference between understanding the Astonishing Hypothesis to be correct and believing that an artificial brain is a real person would be like night and day, the latter an order of magnitude more revolutionary. The truth is that we are still a species that requires overt demonstrations to draw our attention–take a look at CNN during a breaking story if you are in any doubt of this. When a large percentage of society gets a chance to interact with artificial brains…THAT’S when we’re going to see potentially chaotic impact of the idea of the Astonishing Hypothesis.

    So I would turn the question back around at Neville. Assume my reformulation of the issue was correct, that a pre-requisite to popular realization of the Astonishing Hypothesis is the widespread interaction with artificial brains (left vague on purpose). Given this kind of popular realization, one where people can also then *use* the idea tangibly to do things in the physical world, are you more or less concerned about its societal impact?


  4. Bayle, you’re right. I’m glossing over the fact that there are some differences among the Edge authors’ dangerous ideas. But, despite that simplification, I think that by and large their worries are very, very similar. And, for the most part, unwarranted. Briefly: I think you (and obviously a lot of smart people in the Edge crowd) are worried about something that’s just not going to happen. The “demise of free will”, driving a stake through romantic concepts of nobility, etc. are not practical worries.

    Perhaps reading Stephen’s comment got me thinking about evolution… the famous evolutionary biologist Dawkins tells us in “The Selfish Gene” that we (biological organisms) are simply survival machines. To paraphrase: The exoskeletons built and inhabited by our genes are machines that they use to continue on in their (selfish) competition with other genes. Doesn’t this knowledge seem “dangerous”? It implies that we are automatons controlled by our genes… Setting aside Stephen’s point that the public probably isn’t terribly aware of Dawkins, let’s look at the lives of evolutionary biologists. Are they any less motivated or any less likely to believe in freedom, love, etc. because of the revolutionary ideas from their field? I don’t think so.


  5. To Stephen: Given the realization of human-level intelligence in “synthetic things”, I am (slightly) more concerned. But I’m not sure exactly why. And somehow I’m not *that* concerned. People are very adaptable and changes happen gradually. Cataclysmic change is rare!

    Another good point, perhaps distant from the neuroscience, is that evolution doesn’t really have any tangible products in modern society. (More accurately, no really BIG tangible products.) As gene therapies and more genetically engineered organisms (which, in the sense of actual DNA manipulation, only really started a few years ago) are introduced to the public, we will have tangible products. And I don’t think society is going to go crazy. Don’t get me wrong: It will change, and perhaps drastically (like the movement into cities of the 19th & 20th centuries, like the idea of a global village with the advent of better communications in the last few years).

    And, at the very least, we can say that we haven’t blown up the world with the dangerous idea of energy-matter equivalence (nuclear weapons). Yet. If the knowledge brought by science seems dehumanizing (the destructive power of nuclear weapons, people are just one kind of smart machine), then it only emphasizes the necessity of improving how we treat each other, improving our ability to empathize, and furthering our own internal desires for moral action.

