
fMRI and the future of truth

A side-effect of the growth of Magnetic Resonance Imaging (MRI) is the increasing accuracy of localized scans. Taken in conjunction with studies that map the areas of the brain that activate when presented with certain stimuli, this has led psychologists to develop the discipline of functional MRI, or fMRI.

There are certain problems with fMRI. The biggest has to be a theoretical one - the studies designed and tested by scientists trying to improve the accuracy and portability of fMRI are not particularly falsifiable. In other words - they say, "Let's see what happens when we show people pictures of puppies. Whatever area of the brain lights up must be the puppy-viewing center of the brain." This is not a falsifiable hypothesis (though my example is an absurd reduction, so don't take defeating my example as defending the theory, ok?) ... future studies with contradictory results would only indicate that a given individual has a different "puppy-viewing center" than the original. Some studies are falsifiable or reproducible - but on the whole, fMRI suffers from being quasi-science at best.

But let's assume, for the sake of argument, that it's a legitimate technique for understanding how human thought takes place. This means we're within a few technological generations of a practical mind-reading device that would approach 100% accuracy as a lie-or-truth detector. This is significantly different from a polygraph test, in that the polygraph only detects stress in the subject. False positives are common - a question that provokes an unpleasant memory could read as a "lie" ... imagine for instance someone who witnessed a gruesome murder. Both this hypothetical person and the actual murderer might react with similar levels of stress when asked questions about the murder - a situation in which no answer is decisively "true", and all are possibly "false". fMRI, on the other hand, would be able to detect that the "truth" center of the murderer's brain was not active when he was asked if he was the murderer, while the witness's was. It adds a level of certainty that was previously nonexistent. With a polygraph, we can tell when someone is alarmed or stressed by answering a question. With fMRI, we can detect whether they are spinning a tale or relating a memory - and that's a powerful technology. As it improves, and as mathematical models improve for analyzing answers, this could have a profound effect on the legal system.

At what point does a citizen have a right to the privacy of their own thoughts? The state (and the society which inhabits that state) has a vested interest in being able to positively identify a criminal. With an accurate truth-detector (which is different from a lie detector) there would no longer be any reasonable doubt in criminal trials. Did you do it? Your honor, he's telling the truth, he didn't do it. Case dismissed. Lawyers would still be necessary to ensure that the application of the law is fair and unbiased, but for the most part, the difficulty of establishing guilt or innocence would be gone forever. This is something that every citizen should have an interest in - I for one would be very interested in seeing the guilty punished (fairly) and the innocent unpunished. But how do we safeguard against potential abuse?

Imagine a state run amok (like, say - with a president who claims to have unlimited executive power, totally unbound by any law, and without any legal oversight by any other power - who has suspended civil liberties altogether and established a place where un-tried and un-charged "criminals" can be held indefinitely, tortured, and even executed without any access to counsel or fair trial. Yeah, something like that.) Can we trust a corrupt State like that to limit the use of fMRI to establishing guilt or innocence in criminal trials? At what point do citizens have the right to refuse to submit to fMRI questioning, and what is the penalty if they do? What oversight will there be to establish that the questioners are accurately reporting results?

The implications are profound - truly, a citizen could be tried and convicted of "thought crime" - because the fMRI machine could detect whether they were thinking, or EVER HAD thought, about acting against the interest of the State. The law currently doesn't allow for people to be tried for thinking about a crime (unless you're a Catholic talking about Canon Law, in which case merely thinking about the sin is as bad as committing it. To which I say - once you've thought about it, might as well go ahead and do it!!) - but would breakthroughs in detection call for a change in legislation?

What about people who currently lack the capacity to testify - someone who is mentally disabled or mentally ill? Will they be subjected to fMRI questioning? It would be possible to determine accurately whether someone did know right from wrong - and even what they considered "right" - but does this interest of the State override the individual's right to the privacy of their own thoughts?

What about employers? Sure, the Fed is going to use this to approve or deny Top Secret or Secret security clearances - "Is there any reason we shouldn't trust you with a clearance?" "No." "Test shows he's lying ... application denied!" But what about private employers? Could they demand an fMRI clearance before offering you a job, much as many companies currently require drug screening?

I'm interested in hearing what you, my friends and thoughtful people, think about this. A boon to humanity, ushering in a new age of accurate and fair law enforcement ... or a frightening sign of a Thought Control state?


( 14 comments — Leave a comment )
Feb. 17th, 2006 06:49 pm (UTC)
I'm not sure that an fMRI machine will ever be portable enough to have a legal application like the one you suggest (though I know that's beside the point). Assuming that technology can be developed, I do think we need to proceed carefully when making legislation about it. Lie detector results are often inadmissible in court because of their margin for error. I would think any kind of thought-policing fMRI would need to be approached with caution. As is often the case with new technology, there is great benefit to be had, but that doesn't mean we should leap in with our eyes shut. Ethical uses of technology take time and careful consideration to develop.

I also wanted to point out another, current, and very beneficial use of fMRI. It is used today by doctors to evaluate risks in certain brain surgeries. If you have a tumor growing in your motor cortex, e.g., it may or may not be risky to operate. An fMRI helps the doctor know how close the tumor is to the part of that person's brain that is actually responsible for different functions. Patients can make better informed decisions about surgery, because they can have a better understanding of the risks for complications.

Anyway, even if the portability of fMRI never improved, the current day applications are extremely valuable.
Feb. 17th, 2006 06:51 pm (UTC)
I totally agree that there are practical, beneficial, and completely harmless applications of fMRI tech. My concern is when it reaches the point of being a mind-reading device. That point is not far off - and I don't doubt that it WILL become more portable and more error-free, that's the nature of technology - revolutionary breakthroughs are followed by evolutionary improvements.
Feb. 17th, 2006 07:01 pm (UTC)
I think that if they were able to test this enough, though, what they would show is that human memory is too mutable to necessarily be able to determine how things actually happened *just* from people's memories. It is also a matter of perspective. If someone kills someone but doesn't believe what they did was murder, are they lying when they say no?

It might be able to detect a bald faced lie, but I think there are too many nuances to both perception and memory for it ever to really be effectively used to police people's thoughts.
Feb. 17th, 2006 07:03 pm (UTC)
Ok, there's a difference between asking "did you murder the victim?" and "Did you kill the victim?"

fMRI could be used (can be used, really) to at least evaluate with great accuracy the factuality of statements. Memory may be mutable, but at least we could know if a person was telling the truth as he knows it, or not.
Feb. 17th, 2006 07:10 pm (UTC)
It is still only going to be as reliable as the people who are asking the questions... and the ones interpreting the data... etc. Even if the machine performs perfectly, it is still in the hands of fallible human persons to make sense of what it says - on top of the skewing of perspectives in the target, etc.

"I didn't kill him! The brain trauma caused by the sword to the head, that killed him. Arr!"
Feb. 17th, 2006 07:16 pm (UTC)
But see, the "brain trauma to the head" thing would fire up different parts of the brain than just the "yes, I killed him" part. So the machine would report that it wasn't the truth - it was a story. The questioner would then just ask more specific questions, until the facts were established.

My concern is - does the individual have the right to the privacy of their own thoughts?
Feb. 17th, 2006 08:34 pm (UTC)
Of course they do, but I just don't think that this will ever be as useful for really reading them as you think. Certainly not enough to offset the *costs* involved.
Feb. 17th, 2006 09:39 pm (UTC)
I dunno - my reading on the technology is pretty persuasive... and that's today. Imagine what it will be like in a few years. The question isn't so much *if* it's possible - it's just a matter of when it's practical.
Feb. 17th, 2006 07:23 pm (UTC)
Oh big brother where art thou?
I think we are far off from any real understanding of the human brain, in such a way that we could read a mind. (That doesn't preclude science, industry, the media, or the government from acting too hastily on a theory.)

Let's take your Puppy example (absurdity aside):

The question I have is this: what does the brain activity mean?

For example... I look at a puppy and see a cute furry thing, one that I immediately want to care for and love. A person who doesn't like animals sees the same puppy but has a different emotional and intellectual response. Someone who is deathly afraid of dogs has an even more radically different response. What does the activity mean? It is a response to a stimulus - in the puppy example a visual one, with perhaps some auditory and olfactory input, assuming we don't get to touch him. In processing that stimulus we access memory, and based on that memory we have a response to that stimulus. We could easily "beat" the machine by simple conditioning.

The ethical ramifications of technology, the dangers of its irresponsible use, and overreliance on it have been subjects examined by philosophers and SF writers for a very long time. I find it totally ironic that we now have degrees in philosophy and degrees in science, and they are in different "schools" within a university. Yet the great scientists and philosophers that we study are the same people. It is because the pursuit of truth encompasses both! Not only should we examine the how and when, but we must examine the why.
Feb. 17th, 2006 07:37 pm (UTC)
Re: Oh big brother where art thou?
The question I have is this; what does the the brain activity mean?

Now that we can see it happen, we can figure that out.

Being able to predict whether you like puppies or not is hard because it involves a complex emotional reaction. But being able to tell whether you're telling the truth or not isn't - the part of the brain that invents stories (lies) is totally separate from the one that recalls them (truth). They can *already* do this with near-perfect accuracy; it's just that the machine is unwieldy.

Time will yield smaller, portable machines for use in a court-room or police station ... and also more sophisticated metrics for analyzing things like feelings and emotional state, in addition to the already-possible forensic analysis of the truth.

Scary, or heartening?
Feb. 17th, 2006 07:26 pm (UTC)
(1) I'm no expert on the philosophy of science, but what I've gathered is that Popper's falsifiability test isn't widely respected among those who are, precisely because the objection you raise here applies to more than just brain scans. Anomalous results don't usually serve to falsify a theory, since to an extent they can be rationalized away.

(2) The use of fMRIs in criminal investigations would be unconstitutional under existing Supreme Court decisions. The Fifth Amendment prohibition against self-incrimination was extended in the fifties to cover a case where the police ordered a man's stomach pumped to find (iirc) drugs that he'd swallowed. This process would seem to be just as invasive of a suspect's privacy.
Feb. 17th, 2006 07:39 pm (UTC)
1. Interesting, thank you.

2. What if not just the State, but its constituents have a vested interest in allowing for self-incrimination - and necessarily, self-exculpation? Does that interest override the interest in protecting civil liberties, now that there's a technique which substantially changes the cost associated with that liberty?
Feb. 17th, 2006 08:03 pm (UTC)
First, the probative value of the fMRI confession scan is arguably less spectacular than the probative value of somebody found with drugs in his stomach, because as argued earlier, the capacity exists for self-deception or faulty memories, and not so much for spontaneous generation of plastic baggies of cocaine. (Note that this argument doesn't apply to DWI convictions, which for unknown reasons are a constitutional no-man's-land.)

Second, that same argument applies to ordinary cases of self-incrimination. If the defendant didn't actually kill anybody, then he presumably wouldn't mind being asked on the stand if he had. Refusing to answer the question on constitutional grounds is likely more damaging than the truth in that case. And yet the court doesn't force him, or anybody else, to tell the truth, because the law puts it in his hands to decide where his best interests lie.

Now, obviously the laws could change in the light of new technology. But under the existing principles of the Constitution, I think it's a settled question.
Feb. 19th, 2006 09:01 am (UTC)
Another brain
When my son had his MRI a couple of months ago - not for the Autism, but because his hands were beginning to shake uncontrollably - the neurologist said it could be nothing, a brain deformity, or a brain tumor. Obviously we had to find out what was going on with him, as his doctor prepared us for the worst.
He was taken to CHOC hospital and completely anesthetized, and I kissed him good night as they intubated him. An hour later he was awake, smiling and asking for breakfast. Thank the "Almighty Bob" that there was nothing growing there, and his shaking hands are of "unknown causes"... with no known treatment.
What puzzled me was that even though he has been diagnosed with Autism, there was NOTHING structurally wrong with his brain. NOTHING... How can that be? He acts like a 6 year old, can do the school work of a 1st grader, can't tie his shoes, and doesn't understand when his sister is upset, tears streaming down her face, begging him to play with her. Meanwhile he "plays" at school by walking in a square over and over and over, talking to himself.

How can his brain be normal????

The only answer I have is that there is soooo... much we THINK we know, but in reality we know next to NOTHING of what the brain really does. People believe we use 10% of our brains, which is not true - we use 100% of our brains, just at different times. And I have been informed that Autism is caused by women who take antidepressants before they know they are pregnant, and that it deforms their babies' brains.

BULLSHIT! I don't know where this 'gaming girl' got this info, which she has spread around to everyone who listens to her pseudoscience, but now we find out it's most likely genetic, and if you have one child with Autism you have a 10-15% chance of having another.

So bottom line is the brain is a mystery. Albert Einstein had Autism and is considered one of the most brilliant men of our times. Just everybody be glad you have a brain..... or do you???


monkey pirate
Rum, Sodomy, and the Lash: Pick Two