Time for neuroscientists to speak up?

Recently, I was pointed to this article in the WSJ (“A Pentagon Agency Is Looking at Brains — And Raising Eyebrows”) by Sharon Begley. It touches on some noninvasive recording techniques for assessing affective state and cognitive enhancers like ampakine CX717 (previously mentioned on Neurodudes here and here).

It was the very last paragraph that caught my eye:

Ever since the atomic bomb, physicists have known that their work has potential military uses, and have spoken up about it. But on the morality of sending orders directly to the brain (of a soldier, employee, child, prisoner …), or of devices that read thoughts and intentions from afar, neuroscientists have been strangely silent. The time to speak up is before the genie is out of the bottle.

Whoa! To me, the physicists who spoke out early on against nuclear proliferation seemed (and still seem) both very courageous and prescient in their ideas. Are we neuroscientists dropping the ball? I would love to start a discussion on this subject and to hear your responses (both from neuro people and others) in the comments below.

I’ll start: I personally don’t think the arena of neural enhancement/intrusion (mind reading, mind control, cognitive enhancement, etc.) is comparable to the sheer destructive power of nuclear weapons. I do see in the near future the unfortunate potential for abuse of neurotechnology and violation of personal freedoms, but the threat does not seem as horrifying or deadly. Still, if neurotechnology allows governments greater control over their citizens, it seems reasonable that scientists who enable such technologies should intervene. Perhaps it is time for a neural bill of rights, which, similar to the freedoms granted by the US Bill of Rights, will clearly state what aspects of a person’s mental state or capacity cannot be infringed upon without permission from that person. Thoughts?

15 thoughts on “Time for neuroscientists to speak up?”

  1. While each test and technology deserves its own discussion, one slippery slope that leaps out at me is the use of mind reading as a legal tool for measuring truth or intention. It is the responsibility of neuroscientists to be crystal clear about the margin of error involved in each measure. Polygraph lie detection has a mere 10% margin of error. I don’t know what the rates are for newer techniques like thermal imaging or fMRI, but I bet they are even higher (given the early stage of their development). It is important that the public, and most importantly juries, not take the results of these tests as infallible truth.
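    [Editor’s note: to see why even a 10% error rate matters in a courtroom, here is a minimal Bayes-rule sketch. The base rate and accuracy figures are illustrative assumptions for the example, not measured values for any real test.]

    ```python
    # Illustrative only: why a "90% accurate" lie detector is far from
    # infallible. All numbers below are assumptions for the example.

    def posterior_lying(base_rate, sensitivity, specificity):
        """P(subject is lying | test says 'lying'), by Bayes' rule."""
        true_pos = sensitivity * base_rate                 # liars correctly flagged
        false_pos = (1 - specificity) * (1 - base_rate)    # truth-tellers wrongly flagged
        return true_pos / (true_pos + false_pos)

    # Suppose 5% of examinees are actually lying, and the test catches
    # 90% of liars while wrongly flagging 10% of truth-tellers.
    p = posterior_lying(base_rate=0.05, sensitivity=0.90, specificity=0.90)
    print(f"P(lying | flagged) = {p:.2f}")  # ≈ 0.32
    ```

    Under these assumptions, fewer than a third of the people the test flags are actually lying — exactly the gap between “90% accurate” and “infallible truth” that juries need to understand.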

  2. I believe neuroscientists will speak up when the issues become a real threat. The way I see it, none of our technology (that I know of) is currently advanced enough to pose a plausible threat to our “neural rights.” I believe response from the neuroscientific community has been proportional to the “threat” so far, and that neuroscientists are very clear about the fact that no current technology can truly “read” or “control” minds. Such technology may not be far though, so it is important that neuroscientists stay aware and have a solid ethical grounding.

  3. > I personally don’t think the arena of neural enhancement/intrusion (mind reading, mind control, cognitive enhancement, etc.) is comparable to the sheer destructive power of nuclear weapons.

    I think it is. Compare the worst case scenarios for both nuclear weapons and mind reading. They are both very bad.

    > Still, if neurotechnology allows governments greater control over their citizens, it seems reasonable that scientists who enable such technologies should intervene.

    I don’t think that just by being scientists we have the power to direct science. I think that if mind reading or control is possible, then it will be invented even if 99% of neuroscientists abhor it and refuse to work on it. I think we should assume that whatever technology in this area can be invented will be invented, and then think about what should be done given that.

    I think nuclear weapons would have been invented eventually also. The critical failure was not the failure of individual physicists to refuse to work on the project (this would have delayed but not stopped the project), and not the failure of physicists to educate policymakers or the public (everyone understood that nuclear weapons were a really big bomb, and that’s all the understanding that would have been needed to make the right decisions). The failure was that the political system of a technologically advanced nation (the USA) chose to (a) drop the bomb on civilians, and (b) start an arms race. These bad decisions were made because the political system was not nice enough and not smart enough, not because of a scientific failure.

    I think that what is needed are laws that criminalize the use of neurotechnology for violation of privacy or by coercion (even “voluntary” mind-control within an employer-employee relationship, even by the military in times of war, even to seek out terrorists).

    So I think it is a political question that has a political answer. The way for individuals to help is to argue with each other to find out what the right political answer is, and then to spend time bugging their friends about politics, speaking on talk shows, lobbying, etc. Perhaps the prestige afforded scientists will make the media pay more attention to them, and so for that reason perhaps they have a special role to play; but not for any other reason.

  4. > Perhaps it is time for a neural bill of rights, which, similar to the freedoms granted by the US Bill of Rights, will clearly state what aspects of a person’s mental state or capacity cannot be infringed upon without permission from that person.

    I think that infringing upon a person’s mental state or capacity should be made taboo to the same extent that torture or chemical weapons are. Now, some people argue that torture should be permitted in certain circumstances; that’s a separate argument (personally, I think it should not be permitted even then). But I don’t think there are any circumstances when you should permit any sort of manipulation of someone’s neurobiology, or any sort of readout of signals emanating from the brain which cannot be detected by ordinary human vision and audition, unless the situation is so severe that you would permit torture or mustard gas in the same situation (if you assumed that they would accomplish the same goal that you are trying to accomplish by neurotechnology).

    So, the neuro “bill of rights” that i advocate has just two simple, broad rights: (1) the right for one’s neural state not to be altered without one’s consent, (2) the right for one’s neural state not to be observed without one’s consent. The only exception is the altering and observing that can happen when two humans interact without the aid of technology.

  5. actually, i thought of a third one: (3) the right for one’s employment and advancement not to depend in any way upon consenting to neuromanipulation or neuroreadout.

  6. > I believe neuroscientists will speak up when the issues become a real threat. The way I see it, none of our technology (that I know of) is currently advanced enough to pose a plausible threat to our “neural rights.” I believe response from the neuroscientific community has been proportional to the “threat” so far, and that neuroscientists are very clear about the fact that no current technology can truly “read” or “control” minds. Such technology may not be far though, so it is important that neuroscientists stay aware and have a solid ethical grounding.

    I agree that such technologies don’t exist and won’t exist in the near future. However, I think that political or social change takes so long that if people wait until this becomes a plausible threat, it will be too late. It is important to act while such things are still so far off as to seem silly.

  7. This discussion seems oddly reminiscent of a film by the Wachowski brothers from the late ’90s: http://www.imdb.com/title/tt0133093/. Most of you probably found it as fascinating as I did: the revelation that the world didn’t even exist, that it was all a fabrication in order to harvest energy, in the form of BTUs and electricity, from the humans kept in containment pods.

    Obviously this is not reality; humans’ demands for energy far outweigh the benefits of the yield, and the technology doesn’t exist ‘yet’.

    I would argue further that there exists no market for mind control. Indeed, the sheeple of the US do what they are told without prompting from a neuroprosthetic device.

    Frankly, the need for mind control devices, or a neuro bill of rights, will not arise until we’ve lost our taste for consumption.

  8. A book entitled “Affluenza” begins with the statement that some of the highest-paid psychologists in Australia are paid to make people unhappy: they work in advertising. It is now quite obvious that politicians and other marketing types regularly employ lessons learned from psychology. Now it is going to be neuroscience.

    The problem is this: when people start using intimate knowledge about how humans make decisions these people can manipulate others. Where do we draw the line here? Is it acceptable that in our society there are people using all sorts of psychological tricks to garner our attention and dollars? Can we honestly expect each of us to be sufficiently savvy so as to avoid these little manipulations? If we use our superior insight to get people to do what they want are we any different from the powerful who abuse their power?

    We are already subject to manipulations of our neural net. Every day, everywhere. What we are now facing is a qualitatively different type of manipulation that raises the problem to a whole new level.

  9. Bayle: Yes, by saying scientists should intervene, I did mean by taking a legislative stance and not by refusing to do research on mind control. (Like you, I believe that what can be invented or discovered certainly will.) Good job on the bill of rights! The first two seem fundamental (“do not peek” and “do not poke” without consent) and the third is also important (“opt-out is okay and given equal treatment”). What remains is what is okay WITH CONSENT. Is anything okay? Like the Hippocratic oath in medicine, I think we need something that says do no harm. Maybe radical libertarians will disagree with me here, but I think it’s reasonable to say that people shouldn’t be harmed (mentally, physically, etc.) when they consent to neural intervention.

    Dead Soul: You’re absolutely right. And I don’t think we can do much about it, except educate ourselves. Probably since humans have been around, people have used psychology techniques to influence others. Politicians have always pandered to keep their power (think of Roman emperors and the spectacle of gladiators for the public…)

    Thanks for the wonderful discussion everyone!

  10. Neville, i guess i’m a radical libertarian but here even i might agree to some bounds. Imagine an angry suicidal person who irreversibly programs hirself to become a highly rational serial killer or perhaps a “sleeper agent” designed to infiltrate the military and then hijack weapons of mass destruction. In short, there are emotional mechanisms in most humans that prevent us from effectively carrying out certain actions or certain kinds of deceptions, and for the benefit of others, perhaps those mechanisms shouldn’t be permitted to be overridden.

    Dunno if i agree with “do not harm” in general, though. Wouldn’t that prohibit people from consuming alcohol, from skipping sleep in order to get something done, or from doing serious athletic training? i think it’s okay to let people hurt themselves if (1) they have carefully and seriously considered the issue for a significant amount of time, and (2) they aren’t harming themselves in a way that would create a major burden on society. Otherwise, you end up preventing people from doing things that they themselves consider beneficial, but that the government considers harmful.

    (p.s. of course i realize you meant “do not harm” applied only to neural manipulations, but i’m hypothesizing that neuro situations might arise in the future which are closely analogous to consuming alcohol, skipping sleep, or serious athletic training)

  11. Here’s another situation in which regulation might be acceptable to me. Imagine that neural manipulations became available which made you more effective at your job, but at significant personal expense (such as Adderall, to judge from some of the comments on that article, or such as a drug which makes sleep unnecessary). A competitive economic system will force any ambitious person to “choose” to accept this manipulation. So the government might want to legislate against its overuse to prevent this from happening. This is similar to the minimum wage, or to the French 35-hour workweek.

    But this sort of regulation wouldn’t be a matter for a “bill of rights”, but rather an ordinary statutory regulation which would be debated in its specific context.

    So, I guess the “bill of rights” should include the following negative obligations on medical types:
    * Mere “consent” is not sufficient for harmful manipulations; it must be “informed consent”
    * Don’t conduct a manipulation which would cause the target to become a major burden on society

    (i say “major” because even something like skipping sleep might make a person an annoyance; but such minor burdens on society should be permitted)

  12. We’re comparing weapons of mass destruction with potential weapons of molecular destruction–or personal privacy destruction. Ethically it’s like anything medical–a hotter potato when it’s to do with brains. People really identify with their brains.

  13. > We’re comparing weapons of mass destruction with potential weapons of molecular destruction–or personal privacy destruction.

    I think the two are comparable. For example, the world of “1984” seems worse than a nuclear attack, yet the problems with that world are “only” things like loss of personal privacy and freedom.

  14. “Comparable” isn’t a strong claim and I won’t dispute it, but logically, and in principle, they’re opposites, so besides enjoying the opportunity for wordplay, I would see it as a foolishly risky plan of moral reasoning to start from the assumption that somehow they’re the same. We call Orwellian society “totalitarian,” not “individualistic.” What happens to Winston demonstrates that the social nightmare isn’t only an emergent property of individual choice but also of centrally held tools for harming the whole population at once: the broadcasting of disinformation, for example, rather than the capacity for Julia or any other individual to call Winston personally on the telephone and misinform him.
