Rum, Sodomy, and the Lash: Pick Two (aghrivaine) wrote,

fMRI and the future of truth

A side effect of advances in Magnetic Resonance Imaging (MRI) is increasingly accurate localized scanning. Taken in conjunction with studies that map which areas of the brain activate when a subject is presented with certain stimuli, this has given rise to the discipline of functional MRI, or fMRI.

There are certain problems with fMRI. The biggest is a theoretical one - the studies designed by scientists trying to improve the accuracy and portability of fMRI are not particularly falsifiable. In other words, they say, "Let's see what happens when we show people pictures of puppies. Whatever area of the brain lights up must be the puppy-viewing center of the brain." That is not a falsifiable hypothesis (though my example is an absurd reduction, so don't take defeating my example as a defense of the theory, ok?) ... future studies with contradictory results would only indicate that a given individual has a different "puppy-viewing center" than the original subjects. Some studies are falsifiable or reproducible - but on the whole, fMRI suffers from being quasi-science at best.

But let's assume, for the sake of argument, that it's a legitimate technique for understanding how human thought takes place. That would put us within a few technological generations of a practical mind-reading device approaching 100% accuracy as a truth-or-lie detector. This is significantly different from a polygraph test, in that the polygraph only detects stress in the subject. False positives are common - a question that provokes an unpleasant memory can read as a "lie". Imagine, for instance, someone who witnessed a gruesome murder. Both this hypothetical witness and the actual murderer might react with similar levels of stress when asked questions about the murder - a situation in which no answer is decisively "true", and all are possibly "false". fMRI, on the other hand, could detect that the "truth" center of the murderer's brain was not active when he was asked if he was the murderer, while the witness's was. It adds a level of certainty that was previously nonexistent. With a polygraph, we can tell when someone is alarmed or stressed by a question. With fMRI, we could detect whether they are spinning a tale or relating a memory - and that's a powerful technology. As the technique improves, and as the mathematical models for analyzing answers improve with it, this could have a profound effect on the legal system.
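The contrast between the two instruments can be sketched in code. This is a toy illustration only: the "regions", activation numbers, and thresholds below are invented for the example and aren't drawn from any real study or instrument - the point is just that a polygraph collapses everything to one stress signal, while the hypothetical fMRI compares two distinct signals.

```python
# Toy sketch: polygraph vs. hypothetical fMRI truth detection.
# All region names, thresholds, and numbers are made up for illustration.

def polygraph_verdict(stress_level, threshold=0.7):
    """A polygraph measures only arousal: anything stressful reads as a lie."""
    return "possible lie" if stress_level > threshold else "probably truthful"

def fmri_verdict(recall_activation, confabulation_activation):
    """Hypothetical fMRI test: compare activity in a (made-up) memory-recall
    region against a (made-up) story-construction region."""
    if recall_activation > confabulation_activation:
        return "relating a memory"
    return "spinning a tale"

# A traumatized witness and the actual murderer may be equally stressed,
# so the polygraph flags both - the witness is a false positive:
print(polygraph_verdict(0.9))  # witness  -> "possible lie"
print(polygraph_verdict(0.9))  # murderer -> "possible lie"

# The hypothetical fMRI instead separates recall from fabrication:
print(fmri_verdict(recall_activation=0.8, confabulation_activation=0.2))
print(fmri_verdict(recall_activation=0.1, confabulation_activation=0.9))
```

The design difference is the whole argument: one scalar (stress) cannot distinguish two different causes of the same reading, while two independent signals can.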

At what point does a citizen have a right to the privacy of their own thoughts? The state (and the society that inhabits it) has a vested interest in being able to positively identify a criminal. With an accurate truth detector (which is different from a lie detector) there would no longer be any reasonable doubt in criminal trials. Did you do it? Your honor, he's telling the truth - he didn't do it. Case dismissed. Lawyers would still be necessary to ensure that the law is applied fairly and without bias, but for the most part, the difficulty of establishing guilt or innocence would be over forever. This is something every citizen should have an interest in - I for one would be very interested in seeing the guilty punished (fairly) and the innocent go unpunished. But how do we safeguard against potential abuse?

Imagine a state run amok (like, say, one with a president who claims unlimited executive power, totally unbound by any law and without legal oversight by any other branch - who has suspended civil liberties altogether and established a place where un-tried, un-charged "criminals" can be held indefinitely, tortured, and even executed without any access to counsel or a fair trial. Yeah, something like that.) Can we trust a corrupt state like that to limit the use of fMRI to establishing guilt or innocence in criminal trials? At what point do citizens have the right to refuse to submit to fMRI questioning, and what is the penalty if they refuse? What oversight will there be to ensure that the questioners are accurately reporting results?

The implications are profound - a citizen could truly be tried and convicted of "thought crime", because the fMRI machine could detect whether they were thinking, or EVER HAD thought, about acting against the interests of the State. The law currently doesn't allow people to be tried for thinking about a crime (unless you're a Catholic talking about Canon Law, in which case merely thinking about the sin is as bad as committing it. To which I say - once you've thought about it, you might as well go ahead and do it!) - but would breakthroughs in detection call for a change in legislation?

What about people who currently lack the capacity to testify - someone with an intellectual disability or a mental illness? Would they be subjected to fMRI questioning? It would be possible to determine accurately whether someone knew right from wrong - and even what they considered "right". Does this interest of the State override the individual's right to the privacy of their own thoughts?

What about employers? Sure, the Fed is going to use this to approve or deny Secret or Top Secret security clearances - "Is there any reason we shouldn't trust you with a clearance?" "No." "The test shows he's lying ... application denied!" But what about private employers? Could they demand an fMRI screening before offering you a job, the way many companies currently demand drug screening?

I'm interested in hearing what you, my friends and thoughtful people, think about this. A boon to humanity, ushering in a new age of accurate and fair law enforcement ... or a frightening sign of a Thought Control state?