Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 1 of 144


Case 1:07-cr-10074-JPM-tmp Document 157 Filed 01/20/10 Page 1 of 3

LAW OFFICE OF
J. HOUSTON GORDON
SUITE 300
LINDO HOTEL BUILDING
114 W. LIBERTY AVENUE
P.O. BOX 846
COVINGTON, TENNESSEE 38019

J. HOUSTON GORDON*          PHONE (901) 476-7100
AMBER N. GRIFFIN**          FAX (901) 476-3537

January 20, 2010

Mr. Stuart J. Canale


Assistant United States Attorney
U.S. Attorney's Office
Clifford Davis Federal Building
167 North Main Street, Suite 800
Memphis, TN 38103-1827

Re: United States v. Lorne Allan Semrau
    U.S. District Court No.: 07-10074-JDT

DEFENDANT'S SUPPLEMENTAL RULE 16 DISCLOSURE

Dear Stuart:

To the extent deemed necessary at trial, the Defendant, Lorne Allan Semrau, may
or will call Dr. Steven J. Laken, Ph.D., P.O. Box 45, Tyngsboro, MA 01879, as an expert
to testify in this cause concerning issues involving scientific brain testing of Dr. Semrau to
determine Defendant's truthfulness and lack of intent to defraud the government as
established by scientific certainty by means of fMRI examination of Defendant's brain
conducted by Dr. Laken.

As the government is aware, Defendant has requested on multiple occasions the
opportunity to establish his innocence by means of submitting to a government-run
polygraph examination. The government has declined to allow Defendant to be tested by
the government's own polygraph examiner even though Defendant earlier passed such an
examination by an independent examiner hired by Defendant's counsel. The reason for
the objection to such a government-run examination is unknown, except for the statement,
"I don't trust polygraphs." Generally stated, the objections to polygraphs are that the tests
measure emotions, not brain activity. The present state of neuroscience, however, has
demonstrated that fMRI brain testing/screening can and does measure truthfulness/falsity
with more than 95% accuracy by assessing brain activity that cannot be manipulated by
the subject.

MEMPHIS OFFICE: 2121 ONE COMMERCE SQUARE, MEMPHIS, TENNESSEE 38103
PHONE (901) 526-6464 FAX (901) 526-6467

*Also licensed in District of Columbia    **Also licensed in Mississippi



Dr. Laken is the President and CEO of Cephos Corporation. He graduated in 1993
with a Bachelor of Science in Genetics and Cell Biology from the University of Minnesota
and graduated with a Ph.D. in Cellular and Molecular Medicine from The Johns Hopkins
School of Medicine in 1999. Dr. Laken was named a Technology Review 100 finalist (100
Young Innovators Under 35) by the Massachusetts Institute of Technology in 2002 and
received the David Israel Macht Award on Young Investigators Day in May 1999 from
Johns Hopkins University.

Attached hereto is Dr. Laken's curriculum vitae setting forth his professional
qualifications, his publications, articles, invited presentations, patents, and previous
testimony.

Dr. Laken is expected to testify that fMRI examination is a reliable and sound
scientific methodology used to determine whether and when a person is being truthful
in response to questions posed. He will testify that the brain responds differently when
a person is being truthful as opposed to when a person is being deceitful. The fMRI
test records the brain's activity and is read by a computer, not a subjective analyst.
There is no way for the subject to "game" the system. He will testify that this fMRI
testing is peer-reviewed, that the results are repeatable, that it has a rate of error of
significantly less than 10%, is objectively analyzed by a computer process as opposed
to a subjective observer, and that this science is based on tested technology that is
established methodology in his field. fMRI represents the cutting edge of scientific
observation and analysis of brain activity.

Dr. Laken will further testify that fMRI technology works by scanning the brain
with the MRI scanner while the subject is in the process of responding to random
questions selected by the computer. The scanner maps the blood flow of the brain and
the areas of the brain triggered by the response of the subject. Dr. Laken is expected
to testify that truthfulness triggers the portions of the brain where memory is stored,
while deceit triggers much more of a brain response, as one attempts to suppress the
memory and create a false answer.

Dr. Laken will further testify that Dr. Semrau was presented questions using fMRI
technology and was instructed to respond to questions in either a truthful or a
deceitful manner, depending on the question posed, to establish his baseline brain
activity on the fMRI. He was then tested to determine whether he was telling the truth
about the facts, circumstances and his intent related to the charges in the Second
Superseding Indictment. The fMRI screening demonstrated, to a scientific certainty, that
Defendant was truthful and possessed no intent to defraud or cheat the government.
This is Dr. Laken's opinion to a reasonable degree of scientific certainty based on his
testing of Dr. Semrau by means of fMRI imaging.

Dr. Laken will further testify that fMRI testing for truthfulness differs from
polygraph testing, as polygraph testing measures a person's emotional responses
(heart rate, breathing, and physical responses) and is interpreted by an examiner, while


fMRI testing scans the brain patterns that develop during both truthful and dishonest
answers, and is interpreted by a computer. Thus, the testing is objective in nature and
not subject to the subjective interpretations of the analyst.

After administering fMRI testing, it is clear to a scientific certainty (greater than
95%) that Dr. Semrau is being truthful and candid when he states that he did not intend
to defraud the government or insurance carriers in the billing for AIMS tests under
99301 or psychiatric care under 99312 instead of 90862 and when he states that he
was following instructions he had received as he understood them.

Dr. Laken reserves the right to supplement, modify or add to his opinions in the
event additional evidence, facts, or testing is required.

Kindest regards.

Sincerely,

s/ J. Houston Gordon


J. Houston Gordon
JHG/km

Functional MRI Lie Detection: Too Good to be True? -- Simpson 36 (4): 491 -- Journal of the American Academy of Psychiatry and the Law

J Am Acad Psychiatry Law 36:4:491-498 (2008)

Copyright © 2008 by the American Academy of Psychiatry and the Law.

REGULAR ARTICLE

Functional MRI Lie Detection: Too Good to be True?

Joseph R. Simpson, MD, PhD
Dr. Simpson is Staff Psychiatrist, VA Long Beach Healthcare System, Long
Beach, CA, and Clinical Assistant Professor of Psychiatry and Behavioral
Sciences, University of Southern California Keck School of Medicine, Los
Angeles, CA. The views expressed in this article do not necessarily reflect any
policy or position of the U.S. Department of Veterans Affairs, the University
of Southern California, or the USC Keck School of Medicine. Address
correspondence to: Joseph R. Simpson, MD, PhD, PO Box 15597, Long
Beach, CA 90815.

Abstract

Neuroscientists are now applying a 21st-century tool to an age-old question: how can you tell when someone is
lying? Relying on recently published research, two start-up companies have proposed to use a sophisticated
brain-imaging technique, functional magnetic resonance imaging (fMRI), to detect deception. The new approach
promises significantly greater accuracy than the conventional polygraph, at least under carefully controlled
laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and
reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside
someone's head raises significant questions of ethics. Commentators have already begun to weigh in on many of
these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in
promoting the responsible use of this technology and preventing abuses.

Essential to the working of modern legal systems is an assessment of the veracity of the participants in
the process: litigants and witnesses, victims and defendants. Falsification or lying by any of these parties
can and does occur. Outside the legal system, detection of deception is also of critical importance in the

http://www.jaapl.org/cgi/content/full/36/4/491 1/21/2010

corporate world and in the insurance industry, as illustrated by the practice of hiring private investigators
to follow and videotape disability claimants. Because human beings can be very skilled at lying1,2 and,
in general, are poor at determining when they are being lied to,1-3 scientific, objective methods for
determining truthfulness have been sought for decades.

The most widespread objective method for assessing veracity is multichannel physiological recording,
commonly known as the polygraph or lie detector.1,2 This approach is based on the fact that the act of
lying can cause increased autonomic arousal. Changes in autonomic arousal are detected by measuring
pulse rate, respiration, blood pressure, and sweating (variously known as the galvanic skin response
[GSR], skin conductance response [SCR], or electrodermal activity).

The reliability and validity of the polygraph are controversial.6,7 Estimates of its accuracy range from a
high of 95 percent to a low of 50 percent,6,8 with the best estimate probably around 75 percent
sensitivity and 65 percent specificity.6 This relatively low accuracy is a major reason that polygraph
evidence is generally, though not universally, inadmissible in legal proceedings.3

The past six years have seen the development of a possible new lie-detection technique that is not based
on the measurement of autonomic reactions. This is the application of a widely used tool in
neuroscientific research, functional magnetic resonance imaging (fMRI), to the task of obtaining
measurements of cerebral blood flow (a marker for neuronal activity) in individuals engaged in
deception. Within the past two years, two separate research groups have devised experimental paradigms
and statistical methods that they claim allow identification of brain activity patterns consistent with
lying. The approaches can be used on individual subjects, and their creators claim approximately 90
percent accuracy. Two commercial enterprises, No Lie MRI, Inc., and Cephos Corporation, were
launched in 2006, each with the goal of bringing these techniques to the public for use in legal
proceedings, employment screening, and other arenas (such as national security investigations) where
polygraphs have been used.

The announcement of this first potential commercial application for fMRI has attracted a great deal of
attention, both from the popular media9-15 and from bioethicists.16-26 In June 2006, the American Civil
Liberties Union sponsored a forum on the subject of fMRI lie detection and filed a Freedom of
Information Act request for government records relating to the use of fMRI and other brain-imaging
techniques for this purpose.27 Much of the concern centers on possible uses and abuses of brain-imaging
technologies in interrogation of enemy combatants or other terrorism suspects.

The focus of this article is the potential use of fMRI to detect deception in noninterrogation contexts,
specifically in criminal and civil legal proceedings and in the workplace. At the time of this writing,
there do not appear to have been any instances of the use of fMRI lie detection in a legal or employment
setting. However, there is little doubt that attempts to apply this new technology to real-world situations
will be made, most likely in the near future.

The Science Behind the Scans


Very briefly, functional MRI relies on the fact that cerebral blood flow and neuronal activation are
coupled. When a region of the brain increases its activity, blood flow to that region also increases.
This physiological change can be detected by fMRI due to the blood-oxygen-level-dependent, or BOLD,
effect.28,29 Unlike other functional neuroimaging techniques such as positron emission tomography
(PET) or single-photon emission computed tomography (SPECT), BOLD fMRI detects only relative
changes in blood flow and thus requires a comparison between two conditions or tasks. However, in
contrast to PET or SPECT, fMRI can detect signal changes on a time scale of one to two seconds, rather
than minutes.

In the experimental setting, the BOLD signal over the whole brain is acquired while volunteer subjects
perform various cognitive tasks.30 The imaging data are then transformed to a standard brain template
and averaged across subjects. Statistical techniques are used to identify a significant change in blood
flow to a particular brain region in one condition compared with another. It is also possible to analyze
data from within a single subject.
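The two-condition contrast described above can be sketched in a few lines. The following is an illustration only, on simulated per-subject BOLD amplitudes with an invented effect size and a simple Bonferroni threshold; the published studies used full preprocessing and analysis pipelines, not this toy comparison.

```python
# Illustrative sketch: a voxel-wise paired comparison between two task
# conditions on simulated BOLD amplitudes. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 1000

# Per-subject mean BOLD amplitude in each voxel for the two conditions;
# voxels 0-49 are given a genuine "lie > truth" effect.
truth = rng.normal(0.0, 1.0, (n_subjects, n_voxels))
lie = rng.normal(0.0, 1.0, (n_subjects, n_voxels))
lie[:, :50] += 2.0

# Paired t-test in each voxel (within-subject comparison of conditions),
# then a Bonferroni correction across all voxels tested.
t, p = stats.ttest_rel(lie, truth, axis=0)
significant = p < 0.05 / n_voxels

print("voxels flagged as condition-sensitive:", int(significant.sum()))
```

The same machinery applied to a single subject's trial-by-trial data is what makes individual-level analysis possible, as discussed below for the two research groups.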

The BOLD signal is both valid and reliable in properly constrained experimental paradigms. There is
very good agreement between fMRI and PET for the mapping of regional changes in brain activity.31
Functional MRI is now being used for presurgical mapping for epilepsy and brain tumor surgeries32-35
and is being studied for other diagnostic purposes.36

Applications to Detecting Deception
Since an initial publication in 2001,31 several papers on the BOLD fMRI methodology have reported
differential patterns of blood flow in various brain regions in experimental paradigms in which subjects
were instructed to lie or deceive in one task condition and respond truthfully in another task condition.
The task paradigms included forced-choice lies (i.e., responding yes when the truth is no and vice
versa)37,38; spontaneous lies (i.e., saying Chicago when the true answer is Seattle)39; rehearsed,
memorized lies39; feigning memory impairment40,41; and several variations of the Guilty Knowledge
Test,42,43 including lying about having a playing card,44-47 lying about having fired a pistol (loaded
with blanks) before the scanning session,48 lying about the location of hidden money,49,50 and lying
about having taken a watch or ring.51

Some of the experimenters attempted to enhance the emotional salience of the lying task through
monetary incentives: in one paradigm the subjects were told they would double their payment from $50
to $100 if they were able to deceive the experimenters49-51; in others they were told that they would
forfeit their $20 payment if their deception was detected.44,45 In another study, the experimenters did not
manipulate rewards, but put on a demonstration for the subjects before the scanning session that implied


that the testers could see the volunteers' brain activation results in real time.47 The stated purpose of this
was to approximate the conditions of a polygraph examination.

The most consistent results of these studies are greater activation of certain prefrontal and anterior
cingulate regions in the lie conditions relative to the truth conditions. It has been hypothesized that these
regions are recruited for the purpose of inhibiting a prepotent response (i.e., giving a true answer).52 It
has been proposed that this is one of the major cognitive differences between truth and deception:

The liar is called upon to do at least two things simultaneously. He must construct a new
item of information (the lie) while also withholding a factual item (the truth).... [T]he
truthful response comprises a form of baseline, or prepotent response.... We might,
therefore, propose that responding with a lie demands some form of additional cognitive
processing, that it will engage executive, prefrontal systems (more so than telling the truth)
[Ref. 52, p 1757].

These studies do not simply measure neural correlates of autonomic arousal. Thus, the technique may
have some advantages over conventional polygraph methodology. For example, presumably, mere
nervousness in an innocent subject would not create a false positive for deception.

Most of these studies reported only the results of analyses of pooled data from a group of subjects.
However, for the method to have any practical value, it must be applied to individuals. Two separate
research groups have used different statistical methodologies to do just that.

Kozel et al.51 used a modified Guilty Knowledge Test in which 30 subjects engaged in a mock crime of
stealing a watch or ring. (In the debriefing, 60% of the subjects indicated that they thought the crime was
real, which supports the validity of the paradigm.) Subjects were then presented 80 different questions
visually while being scanned. Yes or no responses were delivered by button press. The subjects were
instructed to lie about having taken the item but to answer all other questions truthfully. They were paid
$50 for participating, but were told that they would receive an additional $50 if an observer could not
tell when they were lying (in actuality, all subjects received $100).

Statistical analysis of the group data identified one anterior cingulate and two prefrontal regions that
were more activated in the lie than in the truth condition. The regions were similar to those activated in
several of the other lie-detection paradigms mentioned. By analyzing the activations in these regions in
each subject (pooled across all of that subject's responses), Kozel et al.51 reported that they were able to
predict accurately which item (watch or ring) was taken in 28 of 30, or 93 percent, of the cases. The
activity in the same regions was then applied to the data from a new set of 31 subjects scanned under
identical conditions. For this group, the method identified the item taken with 90 percent accuracy (28 of
31 subjects).
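The headline figures are simple proportions. Recomputing them, with a normal-approximation 95% confidence interval added here purely to show how wide the uncertainty is at samples of about 30 (the paper reports only the point estimates):

```python
# Recomputing the reported proportions. The confidence interval is our
# illustration (normal approximation); the study reports only the point
# estimates of 93% (28/30) and 90% (28/31).
from math import sqrt

def accuracy_with_ci(correct, total, z=1.96):
    p = correct / total
    half = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

for label, correct, total in [("first group", 28, 30), ("second group", 28, 31)]:
    p, lo, hi = accuracy_with_ci(correct, total)
    print(f"{label}: {p:.0%} (95% CI roughly {lo:.0%} to {hi:.0%})")
```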

In their discussion, Kozel et al.51 suggest that their method could be used in real-life settings by first
testing the subject with the Guilty Knowledge Test mock crime scenario and then, if the subject's brain
activation patterns indicate reliable separation between lies and truth, scanning them again while they


respond to questions about the actual topic of interest. This approach has been licensed by the Cephos
Corporation.

Davatzikos et al.45 scanned 22 volunteers in a Guilty Knowledge Test paradigm involving lying about
having a particular playing card in one's possession. Subjects were told they would be paid $20 only if
they successfully concealed the fact that they possessed the card (in fact, all subjects were paid). The
researchers employed a statistical approach involving the application of machine learning methods to
their entire dataset. Using this approach, they reported high accuracy in distinguishing a lie from the
truth. Whether applied to single events (i.e., a single button-press response) or to all the data from a
single subject, the sensitivity for detection of lying was around 90 percent, and the specificity was
around 86 percent. This methodology is used by No Lie MRI, Inc.
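Sensitivity and specificity here carry their usual confusion-matrix meanings. A minimal sketch with hypothetical counts, chosen only to land near the reported figures and not drawn from the study's data:

```python
# Confusion-matrix arithmetic behind "sensitivity ~90 percent,
# specificity ~86 percent". The counts below are hypothetical and
# serve only to illustrate the definitions.
def sensitivity(true_pos, false_neg):
    # lie responses correctly flagged / all actual lie responses
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # truthful responses correctly cleared / all actual truthful responses
    return true_neg / (true_neg + false_pos)

print(f"sensitivity = {sensitivity(9, 1):.0%}")    # 9 of 10 lies caught
print(f"specificity = {specificity(86, 14):.0%}")  # 86 of 100 truths cleared
```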

Limitations of the Technique

Despite the intriguing results described in the preceding sections, how well fMRI lie detection would
work in real-life situations remains an open question. It is important to bear in mind that, like the
polygraph, fMRI lie detection requires a willing subject. If an individual refuses to enter the scanner,
refuses to respond to the questions presented, or gives nonresponsive answers, the technique cannot be
used. Even simply moving one's head during scanning could prevent the collection of usable data.

Over and above these hindrances are more complex questions about transitioning from the research
laboratory to the real world. Some of the concerns that have yet to be fully addressed are discussed in the
following sections.

Generalizability of the Method

The studies conducted thus far have been carried out on healthy volunteers who were screened for
neurological and psychiatric disorders, including substance use. There has been no testing of fMRI
lie-detection paradigms in juveniles, the elderly, or individuals with Axis I and/or Axis II disorders, such as
substance abuse, antisocial personality disorder, mental retardation, head injury, or dementia. It is
unclear whether and how such diagnoses would affect the reliability of the approach. (As mentioned, a
potential advantage of the method in comparison with the polygraph is that it does not rely on autonomic
reactions, and thus individuals with antisocial personality disorder may lose the advantage of evading
detection due to their hyporesponsivity during polygraph testing.53)

All of the published literature involves scenarios in which the volunteer subjects have been instructed to
lie. No literature addresses the question of how this basic fact affects brain activation patterns, in
comparison with the more realistic situation in which the person being tested makes a completely free
decision about whether to lie and repeats this process for each question asked.


None of the volunteer subjects faced serious negative consequences for unconvincing lying, although in
some cases they believed there was a monetary incentive for lying successfully.

Lack of Specificity of the Measurement

The fMRI approach to lie detection does not rely on detecting signs of autonomic arousal or nervousness
that can be associated with lying. This approach reduces the chance that a person who is truthful will be
classified as deceptive on the basis of his being fearful (for any reason) during testing. However, the
other side of this coin is that fMRI lie detection appears to depend at least in part on the suppression of
competing responses. It does not directly determine what those competing responses are, and they may
not, in fact, be untruths. As pointed out by Grafton et al.:

When defendants testify, they do inhibit their natural tendency to blurt out everything they
know. They are circumspect about what they say. Many of them also suppress expressions
of anger and outrage at accusation. Suppressing natural tendencies is not a reliable indicator
of lying, in the context of a trial [Ref. 54, pp 36-37].

There are presently no data regarding the likelihood of this type of false-positive result.

Effects of Countermeasures

It is hypothesized that much of the frontal lobe activation in imaging studies of deception is related to
the suppression of competing responses. Unknown at present is the potential effect of extensive
rehearsal. Ganis et al.39 have already demonstrated differential activation patterns between spontaneous
and rehearsed lies. The rehearsal in that experiment was brief, on the order of minutes. If a person spent
weeks practicing a fabricated story (akin to the preparation an intelligence officer might undertake in
assuming a false identity), would the activations associated with response suppression remain as strong?
For a person with much at stake and adequate advance warning, it is not unreasonable to assume that
extensive rehearsal might be attempted to try to fool the technique. It is not known what effect, if any,
such a cognitive countermeasure would have.

Other countermeasures-for example, one analogous to an approach commonly used against polygraph
examinations (i.e., attempting to raise the baseline response to nontarget questions, to reduce the
differential between target and nontarget responses)-could also be attempted by the subject while in the
scanner. There are also no data on what impact such an action would have.

Delusions Versus Lying

In some psychiatric conditions, subjective experience is at odds with objective reality. This dichotomy is
most glaring in the case of psychosis. It appears that in the case of a delusion, the technique would not
show any deception. Langleben et al.55 described a medical malpractice lawsuit in which a patient
accused her former psychotherapist of sexual abuse. Both the patient and the physician took polygraph
examinations, and both passed. Other evidence suggested that the patient was most likely suffering from


a delusion. Such a situation probably would not be amenable to the use of fMRI lie detection. Other
examples where fMRI may not add useful information might include dementias or amnestic disorders
with confabulation, somatoform disorders, and the pseudologia fantastica seen in some patients with
factitious disorders.

Legal Considerations

These unresolved questions suggest that the potential uses of fMRI lie detection in real-life situations
will remain relatively restricted for the foreseeable future. A criminal defendant who failed an fMRI
lie detector test could still assert reasonable doubt, unlike the case with DNA identification, for example,
with which the odds of being identified by chance are on the order of billions to one. Thus, there is little
to gain for the state in compelling an unwilling defendant to submit to such a test.

More generally, the present state of the science in this area is unlikely to meet legal standards for
admissibility in court proceedings. The literature on the technique is sparse thus far. As we have seen,
only two groups have published data on single-subject results. The Frye v. U.S.56 standard, which was
applied throughout the nation for seven decades until 1993 and is still the standard in some jurisdictions,
requires that a scientific technique have general acceptance in the relevant scientific community for the
results to be admissible as evidence. The use of fMRI for lie detection would not pass such a test at
present.

The Daubert v. Merrell Dow Pharmaceuticals, Inc.57 standard calls on the trial court to act as a
gatekeeper. The court must determine whether the proposed scientific evidence is relevant to the issue at
hand and assess its reliability. Under this standard, guidelines for the trial court to use in this assessment
include: whether the procedure is generally accepted in the scientific community (as in Frye), whether
the procedure has been tested under field conditions, whether it has been subject to peer review, the
known or potential error rate, and whether standards for the operation of the technique have been
developed. At present, fMRI lie detection would be unlikely to meet all of these criteria. The technique
has not been tested in the field (i.e., in real civil or criminal cases), but only under laboratory conditions.
There is also no standardization of the various techniques and protocols involved in performing and
analyzing the scans.

Even if these obstacles are eventually overcome, the technique would face additional hurdles before any
use in criminal proceedings. The Fifth Amendment right to avoid self-incrimination appears to rule out
compelling a criminal defendant to submit to the technique. Another unresolved question is whether an
fMRI scan constitutes a search, with potential Fourth Amendment implications.

Functional MRI lie detection may see its first application in nonjudicial settings, such as employment
screening. Despite the caveats described herein, the published studies suggest that the technique may be


more accurate than traditional polygraph methods, at least in Guilty Knowledge Test paradigms for
which individual subject data have been published. These findings may make it attractive to employers.
The fMRI technique could be used in conjunction with polygraphs, either on a routine basis, or in cases
in which polygraph results are equivocal. Bioethicist Ronald Green has predicted, "Brain-imaging
lie-detection will most likely be used where absolute reliability is not needed and where a predominately
naïve population is under scrutiny...the technology is likely to supplement or take the place of written
honesty tests and polygraphy" (Ref. 23, p 54).

However, there are still significant legal concerns to be addressed before such applications become
widespread. Outside of government agencies and companies involved in the provision of security, the
routine use of polygraph examinations is generally barred by the Federal Employee Polygraph
Protection Act of 1988 (FEPPA).58 Whether fMRI lie detection is covered under FEPPA is not clear at
present. The key language in FEPPA states:

The term "lie detector" includes a polygraph, deceptograph, voice stress analyzer,
psychological stress evaluator, or any other similar device (whether mechanical or
electrical) that is used, or the results of which are used, for the purpose of rendering a
diagnostic opinion regarding the honesty or dishonesty of an individual.... The term
"polygraph" means an instrument that--(A) records continuously, visually, permanently,
and simultaneously changes in cardiovascular, respiratory, and electrodermal patterns as
minimum instrumentation standards; and (B) is used, or the results of which are used, for
the purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an
individual [Ref. 58, Section 2001].

Does an fMRI scan, which does not measure psychological stress or the physiological parameters
detected by a polygraph, nevertheless qualify as similar to the polygraph for the purposes of the FEPPA?
At present, there has been no legally binding interpretation of this question. No Lie MRI, Inc. has taken
the stance that the FEPPA does not apply:

U.S. law prohibits truth verification/lie-detection testing for employees that is based on
measuring the autonomic nervous system (e.g., polygraph testing). No Lie MRI, Inc.
measures the central nervous system directly and as such is not subject to restriction by
these laws. The company is unaware of any law that would prohibit its use for employment
screening.59

Ultimately, whether fMRI lie detection is prohibited by FEPPA may end up being determined by statute
or court decisions.

Ethics-Related Considerations

The growing body of scientific literature and the advent of commercial enterprises to market brain
imaging-based deception detection has raised several ethics-related concerns.16-24 At the most basic
level is the question of whether a precise definition of lying even exists.20 It has been suggested that
different types of lies are reflected in different patterns of brain activity in different individuals. One
skeptical commentary offers the opinion that "[w]e just do not understand enough about brain circuits
that mediate emotional or cognitive phenomena to interpret our measurements.... We are not ready to
turn away from the skin and the heart to rely on still mysterious central mechanisms that correlate with
a lie" (Ref. 20, p 55).

An argument can be made that this concern reflects an overabundance of caution. The polygraph may be
even less specific to deception than fMRI. It is also not clear how allowing the polygraph but prohibiting
fMRI lie detection addresses the question of the imprecise definition of deception.

A related matter is the possibility of the premature adoption of a scientifically immature technology.
Given the comparatively narrow research base on which fMRI lie detection currently rests, several
commentators have urged caution in allowing it to be used for practical applications. One author has
recommended that any new lie-detection device go through a complete government approval process,
analogous to the Food and Drug Administration's approval process for drugs and medical devices.22
There are concerns that a rush to apply the technique and the competition for limited government
funding could inhibit the conduct of appropriate research in the area. "Premature commercialization will
bias and stifle the extensive basic research that still remains to be done" (Ref. 16, p 47).

Commentators have also pointed out the danger of the so-called CSI effect, meaning that the aura of big
science and high technology surrounding complex and expensive tests may lead to an overestimation of
the reliability and utility of fMRI lie detection among lay people, including law enforcement personnel
and other investigators, judges, and jurors. If fMRI lie detection were misinterpreted as being an
infallible method of distinguishing truth from falsehood, participants in legal proceedings could
experience significant pressure to submit to testing, with refusal being interpreted as evidence of guilt.16
The reasoning would be: the test detects lies; therefore, anyone who refuses to take it must have
something to hide. Although such a conclusion is not at all supported by the actual data, it is not
inconceivable that some may draw it.

Another question of ethics concerns the right to the privacy of one's thoughts. Neuroethicists have
coined the term cognitive liberty61 to refer to the "limits of the state's right to peer into an individual's
thought processes with or without his or her consent, and the proper use of such information in civil,
forensic, and security settings" (Ref. 16, pp 39-40). Under what circumstances should a government
agency--or, for that matter, an employer or insurance company-be allowed to look for deception with
this technique? Our society has not yet grappled with these critical questions, but if enthusiasm for fMRI
lie detection increases, it appears that such a debate will be essential. In the words of one commentator,
"Constitutional and/or legislative limitations must be considered for such techniques" (Ref. 21, p 61).
Another author has proposed that using a "neurotechnology device to essentially peer into a person's
thought processes should be unconstitutional unless it is done with the informed consent of that

http://www.jaapl.org/cgi/content/full/36/4/491 1/21/2010
·Functional MRI1:07-cr-10074-JPM-tmp
Case Lie Detection: Too Good toDocument
be True? --168-1
SimpsonFiled
36 (4): 491 -- Journal...
02/19/10 Page 13Page 10 of 15
of 144

person" (Ref. 17, p 62).

Related to the concept of cognitive liberty is the possible use of fMRI against the will of the subject.
Hypothetical scenarios in which this might occur have been described in the context of national security
investigations or other types of high-stakes interrogations. 60,62 It is not inconceivable that terrorism
suspects could be restrained and placed in an MRI scanner in such a way that they would be unable to
move their heads enough to foil the scan. Even if they refused to answer questions, it might be possible
to determine from the brain's response whether the subject recognizes a sensory stimulus, such as a
sound or image.

It should be clear from the preceding discussion, however, that lie detection using fMRI requires the
subject to answer questions. Furthermore, as in a traditional polygraph examination, a comparison to
known truthful responses by the subject is necessary for the technique to work. In any event, the
coercive use of brain-imaging technology would certainly be fraught with ethics-related, legal, and
constitutional difficulties. Scientific and mental health organizations may soon want to articulate
positions on the ethics of nonmedical uses of brain-imaging technology, coercive or not.

Conclusion

With ongoing research, and likely improvements in accuracy in the laboratory setting, it does not seem
unreasonable to predict that fMRI lie detection will gain wider acceptance and, at a minimum, replace
the polygraph for certain applications. What seems far less likely is the science-fiction scenario in which
a criminal defendant is convicted solely on the basis of a pattern of neuronal activation when under
questioning.

Thus far, under carefully controlled experimental conditions, an accuracy of 90 percent is the best that
has been achieved. Improvements in the technology that would reduce the error rate from 10 percent to
something comparable with the billions-to-one accuracy of DNA testing are difficult to conceive of,
given the mechanics of the science involved.

Perhaps more important, the technique does not directly identify the neural signature of a lie. Functional
MRI lie detection is based on the identification of patterns of cerebral blood flow that statistically
correlate with the act of lying in a controlled experimental situation. The technique does not read minds
and determine whether a person's memory in fact contains something other than what he or she says it
does. The problem of false-positive identification of deception is unlikely to be overcome to a sufficient
degree to allow the results of an fMRI lie detection test to defeat reasonable doubt. Furthermore, it is
difficult to envision compelling an unwilling criminal defendant to submit to a test, because of the Fifth
Amendment right against self-incrimination. If a criminal defendant volunteers to take the test, it is still
not clear that the results would be any more admissible under current conditions than the results of a
standard polygraph examination would be. It appears to be too early to predict whether fMRI lie

http://www.jaapl.org/cgi/content/full/36/4/49l 1/21/2010
Functional
CaseMRI Lie Detection: Too Good to
1:07-cr-10074-JPM-tmp be True? --168-1
Document Simpson 36 (4):
Filed 491 -- Journal...
02/19/10 Page 14Page 11 of 15
of 144

detection will ever reach the level of reliability and standardization needed to meet Frye or Daubert
criteria.

If the Federal Employee Polygraph Protection Act is interpreted as applying to fMRI lie detection, it will
not be used in the general workplace. Nevertheless, the next few years may see the use of the technology
in government and in the other limited circumstances in which nongovernmental employers are allowed
to administer polygraph examinations. Although there are several unresolved questions regarding the
ethics of this type of application, it is not clear that the concerns are qualitatively different, with fMRI lie
detection in this context, from those raised by the polygraph, or from concerns about the use of brain
imaging in other contexts such as research or diagnostics. As previously mentioned, absolute reliability
is not necessarily required in employment applications.

Like polygraph evidence, which is generally inadmissible, fMRI lie detection may still find a role in
civil suits and in criminal investigations. No claims would be made that the results definitively
determine the truth as do those of the more traditional forensic tests, but the findings could be used in
settlement negotiations. Police could employ the technique in criminal investigations as a means to rule
out suspects, as they already do with the polygraph.63

A variety of practical, legal, and ethics-related concerns surround the potential use of functional MRI for
the purpose of lie detection. Given the current state of the field and the unresolved practical matters
mentioned herein, the forensic role of the technique is likely to be limited to the civil arena, with both
sides agreeing to have one or more parties consent to undergo the test. Use in the workplace is also
possible, but if FEPPA applies, then the use of fMRI lie detection in employment will be as limited as
the use of the polygraph. Although the ethics-related dangers are perhaps not as grave in employment
applications or civil suits as they would be in a criminal case, an ongoing scientific, legal, and bioethics
dialogue about the appropriate uses of fMRI lie detection is certainly prudent and timely.

References

1. Ekman P, O'Sullivan M: Who can catch a liar? Am Psychol 46:912-20, 1991
2. Spence SA: The deceptive brain. J R Soc Med 97:6-9, 2004
3. Ford EB: Lie detection: historical, neuropsychiatric and legal dimensions. Int J Law Psychiatry
   29:159-77, 2006
4. Office of Technology Assessment: Scientific validity of polygraph testing: a research review and
   evaluation. A Technical Memorandum. Washington, DC: US Office of Technology Assessment, 1983
5. Office of Technology Assessment: The use of integrity tests for pre-employment screening.
   Washington, DC: US Office of Technology Assessment, 1990

6. Brett AS, Phillips M, Beary JF: Predictive power of the polygraph: can the "lie detector" really
   detect liars? Lancet 1:544-7, 1986
7. Lykken DT: A Tremor in the Blood: Use and Abuse of the Lie Detector. New York: Plenum
   Press, 1998
8. Stern PC: The polygraph and lie detection, in Report of the National Research Council Committee
   to Review the Scientific Evidence on the Polygraph. Washington, DC: The National Academies
   Press, 2003, pp 340-57
9. Hall CT: Fib detector. San Francisco Chronicle. November 26, 2001, p A10
10. Fox M: Lying: it may truly be all in the mind. Courier Mail (Queensland, Australia). December 1,
    2004, p 3
11. Anonymous. Signs of lying are all in the mind. The Australian. December 1, 2004, p 10
12. Talan J: No lie, it's easier to tell the truth. Houston Chronicle. October 9, 2005, p 7
13. Haddock V: Lies wide open. San Francisco Chronicle. August 6, 2006, p E1
14. Hadlington S: Science and technology: the lie machine. The Independent (London). September
    13, 2006, p 10
15. Henig RM: Looking for the lie. New York Times Magazine. February 5, 2006, pp 47-53, 76, 80
16. Wolpe PR, Foster KR, Langleben DD: Emerging neurotechnologies for lie-detection: promises
    and perils. Am J Bioeth 5:39-49, 2005
17. Boire RG: Searching the brain: the Fourth Amendment implications of brain-based deception
    detection devices. Am J Bioeth 5:62-3, 2005
18. Buller T: Can we scan for truth in a society of liars? Am J Bioeth 5:58-60, 2005
19. Fins JJ: The Orwellian threat to emerging neurodiagnostic technologies. Am J Bioeth 5:56-7,
    2005
20. Fischbach RL, Fischbach GD: The brain doesn't lie. Am J Bioeth 5:54-5, 2005
21. Glenn LM: Keeping an open mind: what legal safeguards are needed? Am J Bioeth 5:60-1, 2005
22. Greely HT: Premarket approval regulation for lie detections: an idea whose time may be coming.
    Am J Bioeth 5:50-2, 2005
23. Green RM: Spy versus spy. Am J Bioeth 5:53-4, 2005
24. Moreno JD: Dual use and the "moral taint" problem. Am J Bioeth 5:52-3, 2005
25. No author listed: Neuroethics needed. Nature 441:907, 2006
26. Pearson H: Lure of lie detectors spooks ethicists. Nature 441:918-19, 2006
27. American Civil Liberties Union: ACLU seeks information about government use of brain
    scanners in interrogations (June 28, 2006 press release). Available at
    http://www.aclu.org/privacy/medical/26035prs20060628.html. Accessed February 9, 2007
28. Ogawa S, Lee TM: Magnetic resonance imaging of blood vessels at high fields: in vivo and in
    vitro measurements and image simulation. Magn Reson Med 16:9-18, 1990
29. Ogawa S, Tank DW, Menon R, et al: Intrinsic signal changes accompanying sensory stimulation:
    functional brain mapping with magnetic resonance imaging. Proc Natl Acad Sci USA 89:5951-5,
    1992
30. Huettel SA, Song AW, McCarthy G: Functional Magnetic Resonance Imaging. Sunderland, MA:
    Sinauer Associates, Inc., 2004
31. Raichle ME, Mintun M: Brain work and brain imaging. Annu Rev Neurosci 29:449-76, 2006
32. Benke T, Koylu B, Visani P, et al: Language lateralization in temporal lobe epilepsy: a
    comparison between fMRI and the Wada Test. Epilepsia 47:1308-19, 2006
33. Larsen S, Kikinis R, Talos IF, et al: Quantitative comparison of functional MRI and direct
    electrocortical stimulation for functional mapping. Int J Med Robot 3:262-70, 2007
34. Kesavadas C, Thomas B, Sujesh S, et al: Real-time functional MR imaging (fMRI) for presurgical
    evaluation of paediatric epilepsy. Pediatr Radiol 37:964-74, 2007
35. Pelletier I, Sauerwein HC, Lepore F, et al: Non-invasive alternatives to the Wada test in the
    presurgical evaluation of language and memory functions in epilepsy patients. Epileptic Disord
    9:111-26, 2007
36. Dickerson BC: Advances in functional magnetic resonance imaging: technology and clinical
    applications. Neurotherapeutics 4:360-70, 2007
37. Spence SA, Farrow TFD, Herford AE, et al: Behavioral and functional anatomical correlates of
    deception in humans. Neuroreport 12:2849-53, 2001
38. Nuñez JM, Casey BJ, Egner T, et al: Intentional false responding shares neural substrates with
    response conflict and cognitive control. Neuroimage 25:267-77, 2005
39. Ganis G, Kosslyn SM, Stose S, et al: Neural correlates of different types of deception: an fMRI
    investigation. Cerebral Cortex 13:830-6, 2003
40. Lee TMC, Liu H-L, Tan L-H, et al: Lie detection by functional magnetic resonance imaging. Hum
    Brain Mapp 15:157-64, 2002
41. Lee TMC, Liu H-L, Chan CCH, et al: Neural correlates of feigned memory impairment.
    Neuroimage 28:305-13, 2005
42. Lykken DT: Why (some) Americans believe in the lie detector while others believe in the guilty
    knowledge test. Integr Physiol Behav Sci 26:214-22, 1991
43. Elaad E, Ginton A, Jungman N: Detection measures in real-life criminal guilty knowledge tests. J
    Appl Psychol 77:757-67, 1992
44. Langleben DD, Schroeder L, Maldjian JA, et al: Brain activity during simulated deception: an
    event-related functional magnetic resonance study. Neuroimage 15:727-32, 2002
45. Davatzikos C, Ruparel K, Fan Y, et al: Classifying spatial patterns of brain activity with machine
    learning methods: application to lie detection. Neuroimage 28:663-8, 2005
46. Langleben DD, Loughead JW, Bilker WB, et al: Telling truth from lie in individual subjects with
    fast event-related fMRI. Hum Brain Mapp 26:262-72, 2005
47. Phan KL, Magalhaes A, Ziemlewicz TJ, et al: Neural correlates of telling lies: a functional
    magnetic resonance imaging study at 4 Tesla. Acad Radiol 12:164-72, 2005
48. Mohamed FB, Faro SH, Gordon NJ, et al: Brain mapping of deception and truth telling about an
    ecologically valid situation: functional MR imaging and polygraph investigation: initial
    experience. Radiology 238:679-88, 2006
49. Kozel FA, Revell LJ, Lorberbaum JP, et al: A pilot study of functional magnetic resonance
    imaging brain correlates of deception in healthy young men. J Neuropsychiatry Clin Neurosci
    16:295-305, 2004
50. Kozel FA, Padgett TM, George MS: A replication study of the neural correlates of deception.
    Behav Neurosci 118:852-6, 2004
51. Kozel FA, Johnson KA, Mu Q, et al: Detecting deception using functional magnetic resonance
    imaging. Biol Psychiatry 58:605-13, 2005
52. Spence SA, Hunter MD, Farrow TFD, et al: A cognitive neurobiological account of deception:
    evidence from functional neuroimaging. Phil Trans R Soc Lond B 359:1755-62, 2004
53. Lorber MF: Psychophysiology of aggression, psychopathy, and conduct problems: a meta-
    analysis. Psychol Bull 130:531-52, 2004
54. Grafton ST, Sinnott-Armstrong WP, Gazzaniga SI, et al: Brain scans go legal. Sci Am 17:30-7,
    2006
55. Langleben DD, Dattilio FM, Gutheil TG: True lies: delusions and lie-detection technology. J
    Psychiatry Law 34:351-70, 2006
56. Frye v. U.S., 293 F. 1013 (D.C. Cir. 1923)
57. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993)
58. 29 U.S.C. § 2001-2009 (1988)
59. No Lie MRI, Inc. Corporate customer information webpage. Available at
    http://www.noliemri.com/customers/GroupOrCorporate.htm. Accessed February 7, 2007
60. American Civil Liberties Union: Mining the mind: a panel discussion (video). Windows media
    (.wmv) file available at http://www.aclu.org/privacy/25551res20060512.html. Accessed February
    9, 2007
61. Boire RG: On cognitive liberty. J Cog Liberties 1:7-13, 2000


62. Thompson S: The legality of the use of psychiatric neuroimaging in intelligence interrogation.
    Cornell Law Rev 90:1601-37, 2005
63. Anonymous. The polygraph technique Part II: value during an investigation. Available at
    http://www.policelink.com/training/articles/1947-the-polygraph-technique-part-ii-value-during-
    an-investigation. Accessed December 6, 2007

This article has been cited by other articles:


R. S. Fullam and M. C. Dolan: Authors' reply. The British Journal of Psychiatry, September 1, 2009;
195(3): 270-271.

N. K. Aggarwal: Neuroimaging, Culture, and Forensic Psychiatry. J Am Acad Psychiatry Law, June 1,
2009; 37(2): 239-244.

J. R. Merikangas: Commentary: Functional MRI Lie Detection. J Am Acad Psychiatry Law, December
1, 2008; 36(4): 499-501.

D. D. Langleben and F. M. Dattilio: Commentary: The Future of Forensic Functional Brain Imaging.
J Am Acad Psychiatry Law, December 1, 2008; 36(4): 502-504.


American Journal of Law & Medicine, 33 (2007): 377-431


© 2007 American Society of Law, Medicine & Ethics
Boston University School of Law

Neuroscience-Based Lie Detection:


The Urgent Need for Regulation
Henry T. Greely† & Judy Illes††

I. INTRODUCTION
"Illustration" or "map" are among the most frequently used words for
translating the Chinese character tu, a graphic representation of any
phenomenon that can be pictured in life and society, whether in traditional
China or elsewhere. l Investigations of the early role of tu in Chinese culture
first set out to answer questions about who produced tu, the background of its
originator, and the originator's purpose. How were pictures conceptualized?
Interpreted? In examining tu, Chinese scholars stressed the relational aspect
of tu and shu (writing) in answering both of these questions, and pointed to the
importance of not robbing an image of its overall beauty and life with too
much graphic detail. In the West, specific concepts of technical or scientific
illustrations did not exist before the Renaissance. With the coming of that
age, technical illustration became a specific branch of knowledge and activity,
with its own specific goals and ends. Although these developments did not
proceed in any linear manner in either China or the West, they mirrored the
growing importance of science and technology in both societies. However, the
desire to understand the function of the brain through observation of human
behavior and deficits in patients was marked especially in the West. Ideas
about cerebral localization paved the way to developments for mapping brain
function-a path that has seen at least eight different technological
approaches since the first successful measurements of brain electrical activity
in 1920 by Hans Berger. 2 Each technological approach has different

† Deane F. and Kate Edelman Johnson Professor of Law, Professor (by courtesy) of
Genetics, and Director of the Center for Law and the Biosciences, Stanford University.
Professor Greely would like to thank Sean Rodriguez for his excellent research assistance on
this Article.
†† Associate Professor (Research) of Neurology, and Director of the Program in
Neuroethics, Center for Biomedical Ethics, Stanford University.
1 Hans Ulrich Vogel, Technical Illustrations in Ancient China: Achievements and
Limitations, PROC. SECOND SHANGHAI ROUNDTABLE, ANCIENT CHINESE SCI. & HIGH
TECHNOLOGY: ROOTS, FRUITS AND LESSONS 17-20 (2002).
2 David Millett, Hans Berger: From Psychic Energy to the EEG, 44 PERSP. BIOLOGY &
MED. 522, 523-42 (2001).

EXHIBIT 3


378 AMERICAN JOURNAL OF LAW & MEDICINE VOL. 33 NO.2 & 3 2007

potential, limitations, and degrees of invasiveness, yet both share the notion
that they enable, at least to some extent, mind reading.
As we enter more fully into the era of mapping and understanding the
brain, society will face an increasing number of important ethical, legal, and
social issues raised by these new technologies. One set of issues that is already
upon us involves the use of neuroscience technologies for the purpose of lie
detection. Companies are already selling "lie detection services." If these
become widely used, the legal issues alone are enormous, implicating at least
the First, Fourth, Fifth, Sixth, Seventh, and Fourteenth Amendments to the
U.S. Constitution. 3 At the same time, the potential benefits to society of such
a technology, if used well, could be at least equally large. This article argues
that non-research use of these technologies is premature at this time. It then
focuses on one of many issues and urges that we adopt a regulatory system
that will assure the efficacy and safety of these lie-detection technologies.
This article begins by describing the history and functioning of brain-imaging
technologies, particularly functional magnetic resonance imaging
(fMRI). It next discusses, and then critically analyzes, the peer-reviewed
literature on the use of fMRI for lie detection. It ends by arguing for federal
regulation of neuroscience-based lie detection in general and fMRI-based lie
detection in particular.

II. AN INTRODUCTION TO BRAIN IMAGING WITH AN EMPHASIS ON fMRI4

A. AN EVOLUTION OF "CEREBROSCOPY"5

As Illes and colleagues reviewed in 2006,6 the modern evolution of


regional and whole-brain visualization techniques of brain structure and
function has yielded ever-clearer bridges between molecules and mind. These
advances have been possible in part because of new investigative paradigms

3 For the beginning of discussion of these legal issues, see Henry T. Greely, Prediction,
Litigation, Privacy, and Property: Some Possible Legal and Social Implications of Advances in
Neuroscience, in NEUROSCIENCE AND THE LAW: BRAIN, MIND, AND THE SCALES OF JUSTICE
114-156 (Brent Garland ed., 2004); Henry T. Greely, The Social Consequences of Advances in
Neuroscience: Legal Problems; Legal Perspectives, in NEUROETHICS: DEFINING THE ISSUES IN
THEORY, PRACTICE AND POLICY 245 (Judy Illes ed., 2006); Michael S. Pardo, Neuroscience
Evidence, Legal Culture, and Criminal Procedure, 33 AM. J. CRIM. L. (forthcoming 2007)
(discussing the search and seizure clause, self-incrimination clause, and due process clause);
Sarah S. Stoller & Paul Root Wolpe, Emerging Neurotechnologies for Lie Detection and the
Fifth Amendment, 33 AM. J. L. & MED. 359, 359-375 (2007) (self-incrimination clause).
There are also two early discussions of the ethical issues involved in this technology. Judy
Illes, A Fish Story? Brain Maps, Lie Detection, and Personhood, 6 CEREBRUM 73 (2004); Paul
R. Wolpe, Kenneth R. Foster & David D. Langleben, Emerging Neurotechnologies for Lie-
Detection: Promises and Perils, 5 AM. J. BIOETHICS 38, 42 (2005). Finally, at the very end of
the editing process, we discovered another article that discusses neuroscience-based lie
detection technologies in some detail. Charles N.W. Keckler, Cross-Examining the Brain: A
Legal Analysis of Neural Imaging for Credibility Impeachment, 57 HASTINGS L.J. 509 (2006).
4 Much of this section is based on Judy Illes, Eric Racine & Matthew P. Kirschen, A
Picture Is Worth 1000 Words, but Which 1000?, in NEUROETHICS: DEFINING THE ISSUES IN
THEORY, PRACTICE AND POLICY (Judy Illes ed., 2006).
5 We thank Professor Stephen Rose for inspiring some of the following discussion.
6 Illes, Racine & Kirschen, supra note 4.

and methods. While the earliest reliable noninvasive method, electro-
encephalography (EEG), used electrical signals to localize and quantify brain
activity, measures of metabolic activity using positron emission tomography
(PET) and single photon emission computed tomography (SPECT) followed a
few decades later in the 1960s.7 EEG studies had exceptional benefits for
revealing cognitive processing on the subsecond level, localizing epileptogenic
foci, and monitoring patients with epilepsy. Penfield's corticography work
enabled even more accurate measurements from recordings made directly
from the cortex during neurosurgery.8 PET and SPECT have been used widely
in basic research studies of neurotransmission and protein synthesis, further
advancing our knowledge of neurodegenerative disorders, affective disorders,
and ischemic states.9
In the early 1970s, improved detection of weak magnetic fields produced
by ion currents within the body enabled the recording of brain signals in the
form of extracranial electromagnetic activity for the first time using a
technique termed magnetoencephalography (MEG).10 While not as popular
or available as EEG, PET, or SPECT, MEG has still yielded fundamental
knowledge about human language and cognition, in addition to important
information about epilepsy and various psychiatric diseases.11
In the early 1990s, academic medicine witnessed the discovery of a much
more powerful technique for measuring brain activity using magnetic
resonance imaging (MRI) principles. Using functional MRI (fMRI),
researchers can assess brain function in a rapid, non-invasive manner with a
high degree of both spatial and temporal accuracy. Today even newer imaging
techniques, such as near-infrared spectroscopy (NIRS), are on the horizon,
with a growing body of promising results from visual, auditory and
somatosensory cortex, speech and language, and psychiatry.12
fMRI is a good model for our discussion here, given its excellent spatial
and temporal resolution, adaptability to experimental paradigms, and, most
important for our purposes, the increasing rate and range of studies for which
it is used. Illes and colleagues examined these increases, in fact, in a
comprehensive study of the peer-reviewed literature of fMRI alone or in
combination with other imaging modalities.13 They showed an increase in the
number of papers, from a handful in 1991, one year after initial proof of
concept, to 865 in 2001. An updated database at the end of 2005 showed that

7 John R. Mallard, The Evolution of Medical Imaging: From Geiger Counters to MRI -
A Personal Saga, 46 PERSP. BIOLOGY & MED. 349 (2003).
8 W. Penfield & E. Boldrey, Somatic Motor and Sensory Representation in the Cerebral
Cortex of Man as Studied by Electrical Stimulation, 60 BRAIN 389 (1937).
9 J. C. Mazziotta, Window on the Brain, 57 ARCHIVES NEUROLOGY 1413 (2000).
10 D. Cohen, Magnetoencephalography: Detection of the Brain's Electrical Activity with
a Superconducting Magnetometer, 175 SCI. 664 (1972).
11 Mazziotta, supra note 9.
12 S. Coyle et al., On the Suitability of Near-Infrared (NIR) Systems for
Next-Generation Brain-Computer Interfaces, 25 PHYSIOLOGICAL MEASUREMENT 815 (2004);
S. Horovitz & J. C. Gore, Simultaneous Event-Related Potential and Near-Infrared
Spectroscopic Studies of Semantic Processing, 22 HUM. BRAIN MAPPING 110 (2003).
13 Judy Illes, Matthew P. Kirschen & John D.E. Gabrieli, From Neuroimaging to
Neuroethics, 6 NATURE NEUROSCIENCE 205 (2003).
Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 22 of 144

380 AMERICAN JOURNAL OF LAW & MEDICINE VOL. 33 NO.2 & 3 2007

in the four years since the original study, another 5300 papers had been
published.14 All told, that makes about 8700 papers since 1991.

B. MAPPING, MIND AND MEANING

1. From Signals to Meaning


For functional imaging to be possible, there must be measurable
physiological markers of neural activity. Techniques like fMRI rely on
metabolic correlates of neural activity, not on the activity itself. Images are
then constructed based on blood-oxygenation-level-dependent (BOLD)
contrast.
BOLD contrast is an indirect measure of a series of processes. It begins
with a human subject performing a behavioral task, for example, repetitive
finger tapping. Neuronal networks in several brain regions are activated to
initiate, coordinate, and sustain this behavior. These ensembles of neurons
require large amounts of energy, in the form of adenosine triphosphate (ATP),
to sustain their metabolic activity. Because the brain does not store its own
energy, it must make ATP from the oxidation of glucose. Increased blood flow
is required to deliver the necessary glucose and oxygen (bound to hemoglobin)
to meet this metabolic demand.
Because the magnetic properties of oxyhemoglobin and deoxyhemoglobin
are different, they show different signal intensities on an MRI scan. When a
brain region is more metabolically active, more oxygenated hemoglobin is
recruited to that area, which displaces a certain percentage of
deoxyhemoglobin. This displacement results in a local increase in MR signal
or BOLD contrast. Although direct neuronal response to a stimulus can occur
on the order of milliseconds, increases in blood flow, or the hemodynamic
response to this increased neural activity, have a one- to two-second lag. This
hemodynamic response function is important in determining the temporal
resolution of fMRI.
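The delayed rise and fall of the hemodynamic response can be sketched numerically. The following is a minimal illustration assuming the common double-gamma model of the HRF; the specific shape parameters are illustrative conventions, not values taken from this article:

```python
import math
import numpy as np

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density, used as a building block for the HRF."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1.0) * np.exp(-t[pos] / scale)
                / (math.gamma(shape) * scale ** shape))
    return out

def hrf(t):
    # Double-gamma HRF: a positive lobe peaking a few seconds after the
    # stimulus, minus a smaller, later undershoot (SPM-like shape
    # parameters, assumed here for illustration).
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

t = np.arange(0.0, 30.0, 0.1)       # 30 s sampled every 100 ms
h = hrf(t)
peak_time = t[np.argmax(h)]
print(f"BOLD response peaks ~{peak_time:.0f} s after a stimulus at t=0")
```

Even though the neuron fires within milliseconds, the modeled blood-flow response does not begin immediately and takes several seconds to peak, which is exactly why fMRI's effective temporal resolution is limited by hemodynamics rather than by scanner speed.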
Blood flow, as with many other physiological processes in the human
body, is influenced by many factors, including the properties of the red blood
cells, the integrity of the blood vessels, and the strength of the heart muscle, in
addition to the age, health, and fitness level of the individual. Fluctuations in
any of these variables could affect the signal measured and the interpretation
of that signal. For example, the velocity of cerebral blood flow decreases with
age,15 and similar differences may be induced by pathologic conditions.16 By
comparison, women using hormone replacement therapy may have enhanced
cerebrovascular reactivity, thereby increasing the speed and size of their blood

14 Judy Illes, unpublished data.


15 John E. Desmond & S. H. Annabel Chen, Ethical Issues in the Clinical Application of
fMRI: Factors Affecting the Validity and Interpretation of Activations, 50 BRAIN &
COGNITION 482 (2002); A. C. Rosen et al., Ethical and Practical Issues in Applying
Functional Imaging to the Clinical Management of Alzheimer's Disease, 50 BRAIN &
COGNITION 498 (2002).
16 Mazziotta, supra note 9. See also Allyson C. Rosen & Ruben C. Gur, Ethical
Considerations for Neuropsychologists as Functional Magnetic Imagers, 50 BRAIN &
COGNITION 469 (2002).

NEUROSCIENCE-BASED LIE DETECTION 381

flow responses.17 Some medications, such as anti-hypertensives, can also
modify blood flow.18 Because complete medical examinations are not
routinely given to subjects recruited for fMRI studies as healthy controls, it is
important to take this potential variability into account when interpreting and
comparing fMRI data. Variability in blood flow is especially relevant when
evaluating fMRI data taken from a single subject, as might be the case for
diagnosing or monitoring psychiatric disease, or if and when fMRI is
eventually used by the legal system as a lie detector.

2. Spatial and Temporal Resolution of fMRI


High spatial and temporal resolution is necessary for making accurate
interpretations of fMRI data. Advances in MRI hardware over the past
decade have greatly increased both, but the ultimate limitation is the
correlation of the BOLD signal with underlying neuronal activity. The units
of spatial resolution for an MRI scan of the brain are in voxels, or
three-dimensional pixels. While modern MR scanners can acquire images at a
resolution of less than one cubic millimeter, the task-dependent changes in
the BOLD signal might not be large enough to detect in such a small volume
of brain tissue. The measured signal from a voxel is directly proportional to
its size: the smaller the voxel, the weaker the signal. In regions like the
primary visual or motor cortex, where a visual stimulus or a finger-tap will
produce a robust BOLD response, small voxels will be adequate to detect such
changes. More complex cognitive functions, such as, most importantly for our
purposes, moral reasoning or decision-making, use neural networks in several
brain regions and therefore the changes in BOLD signal in any specific region
(e.g., the frontal lobes) might not be detectible with small voxels. Thus, larger
voxel sizes are needed to capture these small changes in neuronal activity,
which results in decreased spatial resolution. fMRI is typically used to image
brain regions on the order of a few millimeters to centimeters.
As the voxel size increases, the probability of introducing heterogeneity of
brain tissue into the voxel also increases (these are referred to as partial
volume effects). Instead of a voxel containing only neuronal cell bodies, it
might also contain white matter tracts, blood vessels, or cerebrospinal fluid.
These additional elements artificially reduce the signal intensity from a given
voxel.
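The proportionality between voxel volume and signal, and the dilution caused by partial-volume effects, reduces to simple arithmetic. In this toy sketch, the helper function and unit signal density are hypothetical, introduced only for illustration:

```python
def voxel_signal(side_mm, neural_fraction=1.0, signal_per_mm3=1.0):
    """Relative MR signal from a cubic voxel (hypothetical helper).

    Signal scales with voxel volume; `neural_fraction` models
    partial-volume effects: the share of the voxel occupied by tissue
    that actually responds to the task."""
    return side_mm ** 3 * neural_fraction * signal_per_mm3

small = voxel_signal(1.0)                        # 1 mm isotropic voxel
large = voxel_signal(3.0)                        # 3 mm isotropic voxel
print(large / small)                             # 27.0: 27x the signal
mixed = voxel_signal(3.0, neural_fraction=0.5)   # half grey matter
print(mixed / small)                             # 13.5: dilution halves the gain
```

Tripling the voxel's side length multiplies its signal twenty-seven-fold, but if half the larger voxel is white matter, vessels, or cerebrospinal fluid, half of that gain is lost, which is the trade-off the paragraph above describes.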
Several other processing steps in the analysis of fMRI data can also have
implications for spatial resolution. fMRI data are typically smoothed using a
Gaussian filter, which improves the reliability of statistical comparisons but,
in turn, decreases the image's spatial resolution. Spatial resolution is also
sacrificed when fMRI data are transformed into a common stereotactic space
(also referred to as normalization) for the purposes of averaging or comparing
across subjects. Lastly, smaller voxel sizes require increased scanning time,
which is often a limiting factor when dealing with certain behavioral
paradigms or special subject populations (e.g., children, the elderly, or people
in a mentally compromised state).

17 Desmond & Chen, supra note 15.
18 Rosen & Gur, supra note 16.

The temporal resolution of fMRI depends on the hemodynamic response
of the brain and how frequently the response is sampled. The measured
hemodynamic response in the brain rises and falls over a period of about ten
seconds. The more frequently we sample this response, the better the
estimate we can make as to the underlying neural activity. On average, fMRI
has a resolution on the order of seconds; however, latency differences as small
as a few hundred milliseconds can be measured. These differences do not
reflect the absolute timing of the neural activity, but rather a relative
difference between different types of stimuli or different brain regions. The
BOLD response is non-linear for multiple stimuli activating the same brain
region. If the same brain region is activated in rapid succession, the BOLD
response to the later stimuli is reduced compared to the initial stimulus.

3. Correlating Structure and Function


As functional neuroimaging relies on task-dependent activations under
highly constrained conditions,19 correlating structure and function is
somewhat analogous to correlating genes and function. This is fraught with
challenges, especially where variability can affect the BOLD signal changes.20
On the most basic level, there are intrinsic properties of the MR scanner that
increase variability in the recorded signal from the brain. Technical issues
stem from differences between individual MRI sites and scanner drifts,
resulting in artifacts and errors occurring within an individual site.
Manufacturer upgrades, however welcome, may also introduce changes in
image quality and other features of a study, and so require ongoing
consideration.
Variables outside the MRI device also reduce the accuracy of these
readings. Muscles contract with swallowing, with the pumping of blood
through large vessels, and with shifting limbs to improve comfort. All of these
movements result in motion artifacts and physiological noise, thus
introducing intra-subject variability
into the data. Keeping subjects motionless in the scanner presents an
especially great challenge when subjects from vulnerable populations are
needed: patients suffering from executive function problems or severe
memory impairments may find the long period of immobilization taxing.
Inter-subject variability is also a consideration, especially when
understanding of single subject data is the goal. Aguirre and colleagues
showed that the shape of the hemodynamic response across subjects is highly
variable.21 Thus, even if two subjects perform the same task, the levels of
BOLD signal may differ, and consequently the activation maps might be
different.
It is also possible that two independent subjects will show different patterns
of activation while their behavioral performances are comparable. Although
subjects perform the same behavioral task, they might employ different
strategies, thereby recruiting different neural networks, resulting in different
patterns of activation. In interpreting fMRI activation maps, one must
remember that changes in the BOLD signal are indirect inferences of

19 Desmond & Chen, supra note 15; Kenneth S. Kosik, Beyond Phrenology, at Last, 4
NATURE NEUROSCIENCE 234 (2003).
20 Kosik, supra note 19.
21 G. Aguirre, E. Zarahn & M. D'Esposito, The Variability of Human BOLD
Hemodynamic Responses, 8 NEUROIMAGE 360 (1998).

neuronal activity, so all areas of significant activation may not be task-specific.
A false positive activation may lead to erroneous conclusions that brain areas
are associated with a function when in fact they are not. Therefore, fMRI
results do not definitively demonstrate that a brain region is involved in a
task, but only that it is activated during the task.22

4. Designing an fMRI Experiment and Analyzing its Results


Part of the art of fMRI imaging is designing an experimental task that is
simple and specific so that behavioral responses can be attributed to an
isolated mental process and not confounded by other functions (a concept
known as functional decomposition). For this reason, fMRI relies on a
subtraction technique where the differences in BOLD signal between a
control task and an experimental task lead to conclusions about the neuronal
activity underlying the areas of increased activation. Therefore, it is essential
to design the control and experimental tasks such that the variable of interest
is the only variable that differs between the two tasks. The choice of control
condition is crucial, as it can impact the downstream interpretation of the
images.
Among the most challenging issues in neuroimaging is the selection of the
statistical treatment of the data. There are several stages of processing that
are performed on the fMRI data. First, pre-processing prepares the data for
statistical analysis. Second, a general linear model regression determines for
each subject the degree to which changes in signal intensity for each voxel can
be predicted by a reference waveform that represents the timing of the
behavioral task. The third and final stage is population inference. This step
involves using the regression results for each subject in a random effects
model for making inferences to the population, or inferences regarding
differences in the population. With this method, activation differences
between experimental conditions or groups can be assessed over all brain
regions and results can be reported in a standardized coordinate system.
Commercial and freely available software packages for data analysis are
widely used, but differences exist in the specific implementation of the
statistics.
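The second stage described above, a general linear model fit, can be sketched for a single simulated voxel. All numbers here (TR, block lengths, effect size, and the crude Gaussian stand-in for the canonical hemodynamic response) are assumptions for illustration, not values from the article; the coefficient on the task regressor plays the role of the subtraction contrast between experimental and control conditions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Acquisition (assumed): TR = 2 s, 100 volumes; task alternates
# 20 s on / 20 s off.
n_vols, tr = 100, 2.0
boxcar = (np.arange(n_vols) * tr % 40 < 20).astype(float)

# Reference waveform: the boxcar convolved with a crude Gaussian
# stand-in for the canonical HRF (peak ~6 s after stimulus onset).
hrf = np.exp(-(np.arange(0.0, 30.0, tr) - 6.0) ** 2 / 18.0)
regressor = np.convolve(boxcar, hrf)[:n_vols]

# Simulated voxel time series: baseline 100, true task effect 2.0, noise.
y = 100.0 + 2.0 * regressor + rng.normal(0.0, 1.0, n_vols)

# Design matrix [task regressor, constant]; ordinary least squares fit.
# The fitted coefficient on the task regressor estimates the
# experimental-minus-control (subtraction) effect for this voxel.
X = np.column_stack([regressor, np.ones(n_vols)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[0], 2))  # close to the true effect of 2.0
```

In a real pipeline this fit is run independently at every voxel after pre-processing, and the resulting per-subject coefficients then feed the random effects model used for population inference.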
When the results from the regression and the random effects model are
combined, the result is a statistical parameter map of brain activity. This
activation map is typically color-coded according to the probability value for
each voxel. The interpretability of fMRI activation maps then depends on
how the data are displayed. The color-coded statistical maps are usually
overlaid onto high-resolution anatomical MR images to highlight the brain
anatomy. There are several ways to display these composite images. The
most rigorous is to overlay the functional data onto single anatomical slices in
any imaging plane. Alternatively, the activation maps can be presented on a
brain rendered in three dimensions. While this technique gives good
visualization of the prominent external brain structures, internal regions like
the hippocampus or basal ganglia are not well characterized on these models.
Researchers often use both of these techniques to examine data, but

22 Rosen & Gur, supra note 16.

ultimately choose the one for presentation that best highlights the main
results of the study.
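The overlay logic described above can be sketched without any plotting library: threshold the statistical map and paint the suprathreshold voxels over the anatomical image. The images and the cut-off below are simulated assumptions, not data from any study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins: a grey-scale anatomical slice and a statistic map.
anat = rng.uniform(0.2, 0.8, size=(64, 64))
stats = rng.normal(0.0, 1.0, size=(64, 64))
stats[20:28, 30:38] += 5.0          # plant an "activated" cluster

# Overlay: keep the anatomy everywhere, then paint voxels whose
# statistic exceeds an (assumed) significance threshold.
threshold = 3.0
overlay = anat.copy()
overlay[stats > threshold] = 1.0    # suprathreshold voxels rendered "hot"

n_active = int((stats > threshold).sum())
print(n_active, "suprathreshold voxels highlighted")
```

Note that the appearance of the final map depends directly on the chosen threshold: a stricter cut-off hides weak but real activations, while a looser one paints noise, which is one reason display choices affect interpretability.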

5. Ethical Considerations
Ethical considerations for imaging the function of the brain can be
examined in the context of two themes: conditions of the test that enable
acquisition of data and conditions about the use of the data themselves.

a. Test Conditions
For any of the methods described above, requirements for the protection
of human subjects and disclosure of risks, and benefits if any, must be
followed. Of paramount importance is safety. Contraindications to EEG
include, for example, dermatologic allergies to electrolytic glue used for
ensuring good conductance of scalp electrodes. For MRI, subject
claustrophobia, metal implants, and metal objects in the environment that
can rapidly become projectiles in the presence of the strong magnetic field are
foremost considerations. For all modalities, the accidental discovery of an
anomaly that might have clinical significance is an important consideration,
and procedures that will be taken for follow-up, if any, must be handled in a
forthright manner when obtaining consent.23 Long-term negative, even
lifelong, reactions to repeat scanning or stimuli, particularly when unpleasant
or frightening, are possible but extremely rare.

b. External Conditions
Ethical considerations about how data will be used outside the laboratory
setting bring us back to the original questions that scholars asked about it.
First, conceptualization: Given the substantial complexity of designing
any imaging experiment, what acquisition and statistical protocols were
applied? This question applies to what Illes and Racine have called design or
paradigmatic bias. 24 Are the stimuli age-, gender- and culturally-appropriate?
Will they generate results that are generalizable to populations not tested?
Are there regions of the brain with significant activity that go unnoticed
because they were not within the brain structure or statistical range of choice?
Second, biases of interpretation: What biases might interpreters bring to
the maps of data? Investigator bias is inevitable given the nature of the
imaging experiment and the necessity for human interpretation of images.
Unlike results from a topographic map, for example, the meaning of an
activation shown on an image is far from black and white.
Third, how will the map data be used? This third question raises perhaps
the toughest ethical challenge of all: privacy, profiling, and predicting future
behavior.

23 Illes, Racine & Kirschen, supra note 4.
24 Judy Illes & Eric Racine, Imaging or Imagining? A Neuroethics Challenge Informed
by Genetics, 5 AM. J. BIOETHICS 5 (2005).

III. LIE DETECTION AND fMRI


Despite the long and dubious history of polygraphy and other methods for
extracting information from humans, pursuing a method for detecting lying
and deception has been an enduring focus of scientific endeavor in the
neurobehavioral sciences. 25 Whether this interest is driven by human
curiosity, an urge to find the telltale signature of falsehoods, or the perceived
practical usefulness of results of such studies to society, is unclear. 26 It is
certainly not for ease of use given the many layers of difficulty of capturing
data with good real-world or "ecological" validity.27 It is clear, however, that
as new neurotechnologies emerge, their application to this domain is likely to
be rapid.
Such improved methods of lie detection are currently the subject of a
great deal of interest and research. Several different methods with a more or
less scientific basis are being used or developed to detect deception. This
section will scan the landscape of lie detection before focusing on efforts to
use fMRI for this end. It will then review and critique the scientific literature
on fMRI for lie detection.

A. NON-fMRI LIE DETECTION


Human efforts at lie detection must date to near the origin of our species;
and, as some evidence points to the existence of intentional deception by other
animals,28 lie detection may well predate us. As soon as humans began to lie,
other humans would need to assess whether they were being told the truth.
All of us, often without consciously thinking about it, frequently assess the
credibility of information, looking, among other things, for evidence that we
are being lied to. We look and listen for signs of deception or nervousness. In
some cases, we seek truth by coercion, either by legal compulsion ("I swear to
tell the truth, the whole truth, and nothing but the truth") or, in some times
and places, by physical compulsion, including torture.
Empirical efforts to measure things that could be associated with lying
date back at least ninety years. 29 Although disputes exist about who should be
given priority for the modern polygraph machine, many trace the concept to
William Moulton Marston's early research on the link between deception and
blood pressure.30 Marston, who received his bachelor's degree, law degree,
and Ph.D. in psychology from Harvard, began to work on using systolic blood

25 ESSAYS IN SOCIAL NEUROSCIENCE (John T. Cacioppo & Gary G. Berntson eds., 2005).
26 Judy Illes & Stephanie J. Bird, Neuroethics: A Modern Context for Ethics in
Neuroscience, 29 TRENDS NEUROSCIENCES 511 (2006).
27 Judy Illes, supra note 3.
28 Jonathan T. Rowell, Stephen P. Ellner & H. Kern Reeve, Why Animals Lie: How
Dishonesty and Belief Can Coexist in a Signaling System, 168 AM. NATURALIST 180, 187-93
(2006) (discussing various helpful articles),
http://www.journals.uchicago.edu/AN/journal/issues/v168n6/41663/41663.web.pdf.
29 See generally Ken Adler, THE LIE DETECTORS: HISTORY OF AN AMERICAN OBSESSION
(2007) (detailing history of lie detection in the United States, focusing on the period until
about 1960).
30 See Committee To Review the Scientific Evidence on the Polygraph, THE POLYGRAPH
AND LIE DETECTION, Appendix E, Historical Notes on the Modern Polygraph (2003)
[hereinafter NRC].

pressure as a marker of deception in 1915 in his early work as a psychology
graduate student.31 From then until his death in 1947, Marston continued to
improve his lie detection devices and to promote their widespread use. The
Frye case,32 which for many years set the standard for admissibility of
scientific evidence in federal court (and continues to play that role in some
state courts), revolved around whether Marston's expert testimony about a
polygraph examination that he claimed cleared a murder defendant should
have been admitted in evidence. 33 The court held that the testimony was
properly excluded because the technology lacked general acceptance in the
scientific community and affirmed the conviction. 34
Polygraphs measure several physiological features that are associated with
nervousness or stress, such as systolic blood pressure (the first and more
rapidly variable number in the familiar blood pressure measurement of, for
example, 125/75), heart rate, breathing rate, and skin sweatiness (measuring
the electrical conductivity of skin, known as galvanic skin response).35 The
polygraph has been widely used in the United States for various purposes; a
National Research Council (NRC) committee estimates that several hundred
thousand polygraph examinations are conducted each year in the United
States. American courts, however, have never generally considered it
sufficiently reliable for its results to be admitted into evidence. 36 In the wake
of the Wen Ho Lee case at Los Alamos National Laboratory, in which
polygraph examinations played a role, the NRC was asked to report on the
value of polygraph evidence. The NRC committee produced a careful report,
concluding that the polygraph was not sufficiently valid to be used regularly in
national security screening:
Polygraph testing yields an unacceptable choice for DOE
[Department of Energy] employee security screening between
too many loyal employees falsely judged deceptive and too many
major security threats left undetected. Its accuracy in
distinguishing actual or potential security violators from innocent
test takers is insufficient to justify reliance on its use in employee
security screening in federal agencies. 37

31 Id.; Adler, supra note 29, at 48-51, 181-95 (discussing Marston's unusually
interesting life).
32 Frye v. United States, 293 F. 1013, 1013 (D.C. Cir. 1923).
33 NRC, supra note 30, at 293-94. See also Adler, supra note 29, at 39-40, 51-54. Late
in life, Marston, along with his wife, Elizabeth Holloway Marston, invented the comic book
character, Wonder Woman. One of Wonder Woman's attributes was her possession of the
magic lasso, forged from the Magic Girdle of Aphrodite. NRC, supra note 30, at 295. The
lasso would make anyone it encircled tell the truth.
34 Frye, 293 F. at 1014.
35 NRC, supra note 30, at 12-13.
36 See, e.g., United States v. Scheffer, 523 U.S. 303, 333 (1998). Recently, however, the
New Mexico Supreme Court concluded that polygraph evidence would be presumptively
admissible in New Mexico courts. Lee v. Martinez, 96 P.3d 291 (N.M. 2004). A few federal
courts, applying the newer Daubert standard for admissibility of scientific evidence, have also
found polygraph evidence admissible in particular cases, albeit under unusual circumstances.
United States v. Allard, 464 F.3d 529 (5th Cir. 2006); Thornburg v. Mullin, 422 F.3d 1113
(10th Cir. 2005); United States v. Piccinonna, 885 F.2d 1529 (11th Cir. 1989) (en banc).
37 NRC, supra note 30, at 6.

The federal government has ignored this recommendation and continues to
use the polygraph widely in employee security screening.
Polygraphy is the best-established method of lie detection, but other
methods exist or are under development. Some, such as voice stress detectors,
have little or no scientific support,38 although they appear to be widely sold
and used. Other methods under investigation may be more promising,
although it is too early to know how reliable any of them will prove. Five of
them deserve attention: fMRI, EEG, near infra-red spectroscopy (NIRS),
facial microexpressions, and periorbital thermography. This article focuses on
fMRI because it is the most explored and, apparently, the most advanced, but
the four other methods will be briefly described.

a. Electroencephalography
As discussed earlier, EEGs measure electric currents generated by the
brain. One particular kind of EEG measurement claimed to be useful in
detecting lies is the "P300"-a wave of electrical signal, measured at the scalp,
that occurs approximately 300 milliseconds after a subject receives a stimulus.
The analysis of the timing and shape of this waveform has some meaning, but
the credibility of its usefulness is undercut by the hype given it by its leading
proponent, Lawrence Farwell.39
Farwell is an electrophysiologist who has, for over fifteen years, argued
that human P300 waves can be used as a "guilty knowledge" test, to determine
whether, for example, a suspect has ever seen the site of a crime. 40 Farwell
refers to this process as "brain fingerprinting" and has been selling brain
fingerprinting for several years through Brain Fingerprinting Laboratories, a
privately held company.41 The company's website claims that in more than
175 tests, the method has produced inconclusive results six times and has been
accurate every other time. 42 Farwell's work, however, has not been
substantially vetted in the peer-reviewed literature. 43 Apparently, the only
article he has published on his technology in a peer-reviewed journal is a 2001
on-line article in the Journal of Forensic Science where he and a co-author
reported on a successful trial of his method with six subjects. 44 He has not
revealed any further evidence to support his claims of high accuracy,
protecting it as a trade secret. He is an inventor on four patents that are
relevant to this work. 45

38 Id. at 166-68.
39 Wolpe, Foster & Langleben, supra note 3, at 42.
40 Brain Fingerprinting Laboratories Home Page,
http://www.brainwavescience.com/HomePage.php (last visited July 6, 2007).
41 Id.
42 Brain Fingerprinting Laboratories,
http://www.brainwavescience.com/criminal-justice.php (last visited July 6, 2007).
43 Wolpe, Foster & Langleben, supra note 3, at 43.
44 Lawrence A. Farwell & Sharon S. Smith, Using Brain MERMER Testing to Detect
Knowledge Despite Efforts to Conceal, 46 J. FORENSIC SCI. 135, 135 (2001).
45 Method for Electroencephalographic Information Detection, U.S. Patent No.
5,467,777 (filed Sept. 15, 1994); Method and Apparatus for Truth Detection, U.S. Patent No.
5,406,956 (filed Feb. 11, 1993); Method and Apparatus for Multifaceted
Electroencephalographic Response Analysis (MERA), U.S. Patent No. 5,363,858 (filed May 5,
1993); Method and Apparatus for Detection of Deception, U.S. Patent No. 4,941,477 (filed
Dec. 12, 1988).

Farwell's claims are widely discounted in the relevant scientific
community and his credibility is not helped by his inflated claims for the
judicial acceptance of his technique. The company's website states that "Iowa
Supreme Court overturns the 24 year old conviction of Terry Harrington,
Brain Fingerprinting test aids in the appeals."46 In fact, the Iowa district court
(not afederal district court, as the website claims), in an unpublished opinion,
rejected Harrington's petition for post-conviction relief on several grounds in
spite of that testimony. It did admit the brain fingerprinting evidence, but it
may have been the case that the lower court judge admitted the testimony
merely to deprive Harrington of one ground for appeal. Harrington appealed
and the Iowa Supreme Court reversed for reasons unrelated to the brain
fingerprinting test. 47 As to the brain fingerprinting evidence, the Iowa
Supreme Court specifically said "Because the scientific testing evidence is not
necessary to a resolution of this appeal, we give it no further consideration."48
The company's website reports, accurately but with a misleading implication,
that "The Iowa Supreme Court left undisturbed the law of the case
establishing the admissibility of the Brain Fingerprinting evidence."49

b. Near-infrared Spectroscopy ("NIRS")


NIRS provides a way to measure changes in blood flow-the same goal as
fMRI-in some parts of the brain without the complex apparatus of an MRI
machine. The basis of the technology is the measurement of how
near-infrared light is scattered or absorbed by various materials. Its application in
neuroscience stems from its ability to measure blood flow changes in parts of
the brain (when used for brain studies, the technique is sometimes called
Optical Topography). Small devices attached to the subject's skull shine
near-infrared light through the skull and into the brain. The light does
not penetrate very far into the brain, only approximately a half centimeter,
before it scatters. This scattered laser light is picked up by sensors on the
subject's skull. The pattern of scattering reveals the pattern of blood flow
through the outer regions of the brain. 50 This method could, presumably, be
used in ways similar to fMRI to determine deception, at least for regions of
the brain within reach of the NIRS technology.
Britton Chance, an emeritus professor of biophysics at the University of
Pennsylvania, seems to be the most active researcher on the use of NIRS for
lie detection. He has developed an NIRS device he calls a "cognoscope," which
the subject wears on a headband. Chance works with the Department of
Defense Polygraph Institute, and hopes that ultimately his method could be

46 Id.
47 Harrington v. Iowa, 659 N.W.2d 509, 525 (Iowa 2003) (holding that Harrington's
conviction violated the Due Process clause under Brady v. Maryland, 373 U.S. 83 (1963),
because the prosecution had failed to disclose material exculpatory information in its
possession to his counsel before trial).
48 Id. at 516.
49 Brain Fingerprinting Laboratories,
http://www.brainwavescience.com/Ruled%20Admissable.php (last visited July 6, 2007).
50 Britton Chance et al., A Novel Method for Fast Imaging of Brain Function,
Non-Invasively, with Light, 2 OPTICS EXPRESS 411, 413 (1998). For a less technical
description, see Steve Silberman, The Cortex Cop, 14 WIRED 149 (2006).

used without any contact with the subject's head. 51 In effect, he hopes to be
able to perform something like an fMRI blood-flow analysis without the cost
and inconvenience of an MRI machine. Although NIRS for lie detection is
widely discussed in the popular52 and semi-popular53 literature, we found no
peer-reviewed publications on it.

c. Facial Micro-expressions
Berkeley psychologist Paul Ekman has championed the analysis of facial
micro-expressions. Ekman became famous for his work establishing the
universality of some primary human facial expressions, such as those for
anger, disgust, fear, joy, sadness, and surprise. 54 He has been interested in
methods of detecting deception since at least the late 1960s,55 and now claims
that careful analysis of fleeting "micro-expressions" on subjects' faces can
detect lying with substantial accuracy.56
Ekman has done substantial research on using facial micro-expressions to
detect lying, but he has not published much of the research in peer-reviewed
literature; he has said that this is because of his concern about the
information falling into the wrong hands. As a result, Ekman's methods and
results have not been subject to much public analysis, making their value hard
to assess. If effective, they would have the advantage of not requiring any
obvious intervention with the subject: subjects would not have to have
various sensors attached to them, as with polygraphs, EEGs, or NIRS, or be
inserted into a machine, as with fMRI. This technique could quite plausibly
be used surreptitiously, through undetected videotaping of the subject's face
during questioning.

d. Periorbital Thermography
Periorbital thermography measures the temperature of the tissue around
the eyes. Ioannis Pavlidis, a computer scientist at the University of Houston,
and James Levine, an endocrinologist at the Mayo Clinic, invented, and
continue to promote, this technique. Pavlidis and Levine claim that the

51 Silberman, supra note 50.


52 Id.; Richard Willing, Terrorism Lends Urgency to Hunt for Better Lie Detector, USA TODAY, Nov. 4, 2003; Susan Frith, Who's Minding the Brain?, THE PENN. GAZETTE, Jan./Feb. 2004.
53 Britton Chance, Shoko Nioka & Yu Chen, Shining New Light on Brain Function, 3 OE MAG. 16 (2003); Detecting Deception (Interview with Chance), 68 ROYAL CANADIAN MOUNTED POLICE GAZETTE 2 (2006), available at http://www.gazette.rcmp.gc.ca/article-en.html?&lang_id=1&article_id=246.
54 Paul Ekman & Maureen O'Sullivan, Who Can Catch a Liar?, 46 AM. PSYCHOLOGIST
913, 915 (1991).
55 Paul Ekman & Wallace V. Friesen, Nonverbal Leakage and Clues to Deception, 32
PSYCHIATRY 88 (1969).
56 See Siri Schubert, A Look Tells All, SCI. AM. MIND, Oct.-Nov. 2006, http://www.sciam.com/article.cfm?articleID=0007F06E-B7AE-1522-B7AE83414B7F0182.
The PowerPoint slides from a presentation he made to a meeting of the National Research
Council committee on Technical and Privacy Dimensions of Information for Terrorism
Prevention and Other National Goals on April 28, 2006 can be found at http://cstb.org/
terrorismandprivacy/m1/ekman.pdf. See also Paul Ekman & Maureen O'Sullivan, From Flawed Self-Assessment to Blatant Whoppers: The Utility of Voluntary and Involuntary Behavior in Detecting Deception, 24 BEHAV. SCI. & L. 673, 684 (2006).

390 AMERICAN JOURNAL OF LAW & MEDICINE VOL. 33 NO. 2 & 3 2007

temperature of the area around the eyes rises noticeably when subjects lie. 57
Their theory is akin to the approach of the polygraph: the increased stress of
lying triggers an involuntary physiological response in their subjects. In this
case, instead of using blood pressure or the other polygraph markers, they
contend that rapid eye movements associated with stress increase the blood
flow around the eyes and thus increase that area's temperature. 58 In a series
of articles since 2001, including one article in Nature, the researchers have
claimed accuracy rates from 78% to over 91%.59 Like analyzing facial
micro-expressions, this approach does not require attaching anything to the subject
and could quite plausibly be done without the subject's knowledge through a
device that measures small temperature differences at a distance; high-resolution
infrared cameras capable of detecting temperature changes as small as
0.045 degrees Fahrenheit can be used for this purpose. 60
The NRC report discussed periorbital thermography and some of its
limitations. It concluded:
Despite the public attention focused on the published version of
this study in Nature . . . it remains a flawed and incomplete
evaluation based on a small sample, with no cross-validation of
measurements and no blind evaluation. It does not provide
acceptable scientific evidence to support the use of facial
thermography in the detection of deception. 61

B. fMRI-BASED LIE DETECTION-THE BUSINESSES

The most interesting of these new lie-detection technologies is fMRI, for
two reasons. First, it has been the subject of significant peer-reviewed
literature from several laboratories, and second, it is already commercially
available to the general public from one firm, with another company poised to
enter the market.
No Lie MRI apparently started offering its fMRI-based lie detection
services in August 2006. 62 This start-up firm has licensed its technology from
the University of Pennsylvania, based on a patent application filed by the
University based on work by Professor Daniel Langleben. The firm, which is

57 Ioannis Pavlidis, James Levine & Paulette Baukol, Thermal Imaging for Anxiety
Detection, PROC. IEEE WORKSHOP ON COMPUTER VISION BEYOND THE VISIBLE SPECTRUM:
METHODS & APPLICATIONS (2000) [hereinafter Pavlidis et al., Thermal Imaging for Anxiety
Detection].
58 Id.
59 Ioannis Pavlidis & James Levine, Monitoring of Periorbital Blood Flow Rate Through
Thermal Image Analysis and its Application to Polygraph Testing, 3 ENGINEERING MED. &
BIOLOGY SOC'Y 2826, 2826 (2001); Ioannis Pavlidis & James Levine, Thermal Facial Screening
for Deception Detection, 2 ENGINEERING & MED. 1183, 1183 (2002); Pavlidis et al., Thermal
Imaging for Anxiety Detection, supra note 57, at 56; Ioannis Pavlidis, Norman L. Eberhardt &
James A. Levine, Seeing Through the Face of Deception: Thermal Imaging Offers a Promising
Hands-off Approach to Mass Security Screening, 415 NATURE 35, 35 (2002); Dean A. Pollina,
Andrew B. Dollins, Stuart M. Senter, Troy E. Brown, Ioannis Pavlidis, James Levine & Andrew
H. Ryan, Facial Skin Surface Temperature Changes During a 'Concealed Information' Test, 34
ANNALS BIOMEDICAL ENGINEERING 1182, 1182 (2006) (researching in collaboration with the
Department of Defense Polygraph Institute).
60 Jeffrey Kluger & Coco Masters, How To Spot a Liar, TIME, Aug. 20, 2006.
61 NRC, supra note 30, at 157.
62 Vicki Haddock, Lies Wide Open, SAN FRANCISCO CHRON., Aug. 6, 2006.

not publicly traded and hence is not subject to extensive disclosure
requirements, does have a large website. 63 On the website, the "Investors
Overview" describes the company's structure and plans:
Since the discrediting of the polygraph a large market for
accurate truth verification and lie detection has been largely
untapped. The current estimation of this market, conservatively
based on the peak market demand for polygraph testing in the
mid-1980s, is $3.6 billion.

However it is likely the technology used by No Lie MRI, Inc. will
be able to expand the market well beyond this level. For one,
world population has increased considerably since the mid­
1980s. More importantly, the accuracy of No Lie MRI™ software
is near-perfect, a level never attained by any other technology.
With such a high rate of accuracy it is expected more people will
endeavor to use this technology.

Corporate Entitities [sic] and Relationships


"No Lie MRFM " is the patent pending product that objectively
and reliably measures intent, prior knowledge, and deception
with proprietary fMRI human brain mapping techniques
developed from recent advances in neuroscience.
"VeraSource" is a division of No Lie MRI, Inc. that is developing
the current No Lie MRI™ software. This group is also exploring
new technologies for detecting other types of deception, as well as
other applications covered under the Langleben patent.
"VeraCenter" is the brand name for independently owned or
licensed business centers where truth verification interviews
employing an MRI and the No Lie MRFM software can be
conducted.
"Veracity Sciences" is a division of No Lie MRI, Inc. focusing on
developing implementation and application of No Lie MRI™
software for use by the U.S. military, government agencies, law
enforcement agencies, and foreign governments.
Interviews will be conducted at licensed business centers called
VeraCenters that have access to an MRI machine. Each
VeraCenter will have a connection via the Internet to the
VeraSource to access No Lie MRI™ software. The client is
charged in advance for the time on the MRI machine and the
time for use of the No Lie MRI™ software. The revenue is split
between No Lie MRI and the VeraCenter. 64
The firm claims to already be using its fMRI test for lie detection at a
center in Tarzana, California, but is seeking additional test centers among

63 No Lie MRI Home Page, http://www.noliemri.com/ (last visited July 6, 2007).


64 No Lie MRI, Process Overview, http://www.noliemri.com/products/ProcessOverview.htm (last visited July 6, 2007).

MRI facilities with at least a 3-Tesla magnet. 65 Its website does not give any
details on the price of the service or its accuracy, except in its investor
information section. 66 There it claims a "current accuracy" of 93%, with an
expected 99% accuracy "when development of the product is complete."67 The
website does not specifically state a price to the customer, but various
projections assume a price of about $1,800 per use. 68 The website does
mention some limitations of its test, but these are only limitations of the MRI
process: individuals cannot have metal in their bodies, cannot be
claustrophobic, cannot be "brain damaged," and must not move during the
MRI process. 69
No Lie MRI's website has pages for four classes of customers:
corporations, lawyers, government, and individuals. The company envisions
many uses. For corporations, it suggests that security firms may want to use its
process for pre-employment screening, that insurance companies should use
it to verify policy-holders' claims, and that investment banking companies
may use it to determine the truthfulness of corporate earnings statements.70
The site claims that its process is not subject to the federal Employee
Polygraph Protection Act, which bans most employment-related use of lie
detection.71 As discussed in Section III(A) below, this interpretation is
implausible.
For lawyers, the website analogizes No Lie MRI tests to DNA tests,
adding that "it would also be potentially possible for a witness to validate his
or her own statements to the court."72 It does not mention at this point any
barriers to admissibility of such evidence.
The firm suggests a wide range of uses for federal, state, and international
governments. 73 In each case, the firm points to areas where the "now
discredited" polygraph machines are used; for developing countries, it offers
the use of its technology to battle corruption.
Its section for individuals reads:
No Lie MRI has potential applications to a wide variety of
concerns held by individual citizens.
• Risk reduction in dating

65 No Lie MRI, Test Centers, http://www.noliemri.com/centers/Centers.htm (last visited July 6, 2007).
66 No Lie MRI, Market Opportunity, http://www.noliemri.com/investors/MarketOpportunity.htm (last visited July 6, 2007).
67 Id.
68 As to price, in projecting the size of the world market, it assumes a price of $1,800
per test and later says "the cost of testing is $30/minute." Id. Although that might refer to the
cost to the firm, it seems more likely to be intended as the price to the customer, which, for a
60-minute MRI session, would be $1,800.
69 No Lie MRI, Process Overview, http://www.noliemri.com/products/ProcessOverview.htm (last visited July 6, 2007).
70 No Lie MRI, Customers - Corporations, http://www.noliemri.com/customers/GroupOrCorporate.htm (last visited July 6, 2007).
71 Id.
72 No Lie MRI, Customers - Lawyers, http://www.noliemri.com/customers/Lawyers.htm (last visited July 6, 2007).
73 By "international governments," it presumably means non-U.S. national
governments and not international organizations like the United Nations. No Lie MRI,
Customers - Government, http://www.noliemri.com/customers/Government.htm (last visited
July 6, 2007).

• Trust issues in interpersonal relationships


• Issues concerning the underlying topics of sex, power,
and money74
Finally, the website has a section called "Legal and Ethics."75 It states that
"No Lie MRI, Inc.," and its franchisees will "live up to the generally accepted
moral and ethical standards of the communities in which it tests."76 As
examples of its guidelines for testing, it states that it will only test people with
their full consent, and that it will only ask subjects about topics that they
agree in advance to discuss. 77 It states that the firm "is presently working to
have its testing allowed as evidence in U.S. and state courts."78
It is hard to know what to make of No Lie MRI. The firm has received a
great deal of publicity, but there is little evidence that its services are actually
being used. The website, at least currently, seems aimed more at potential
investors than at potential customers. The site does not, for example, talk
about actual users. The only story we found about an actual No Lie MRI
customer is that of Mr. Harvey Nathan of Charleston, South Carolina, who
became the firm's first customer in December 2006, as he sought to use their
technology to rebut claims of arson and insurance fraud. 79
Nathan denied any involvement in the fire, and the test
"indicated that he was indeed telling the truth," Huizenga said.
Nathan was No Lie MRI's first customer, and his experience was
filmed for a documentary expected to air in Britain in the spring
[of 2007].80
A second firm has also announced that it will offer fMRI-based lie
detection services. CEPHOS Corporation, another privately held start-up
company, uses technology developed by Dr. Frank A. (Andy) Kozel, which it
has licensed. 81 Steven Laken, the founder and chief executive officer of the
firm, is a scientist with a Ph.D. in molecular and cellular biology from Johns
Hopkins University.82 CEPHOS claims greater than 90% accuracy for its
technology. 83

74 No Lie MRI, Customers - Individuals, http://www.noliemri.com/customers/Individuals.htm (last visited July 6, 2007).
75 No Lie MRI, Customers - Legal and Ethics, http://www.noliemri.com/customers/LegalAndEthics.htm (last visited July 6, 2007).
76 Id.

77 Id.

78 Id.
79 Glenn Smith, Deli Owner Lays Hope on New MRI Lie Detector, POST AND COURIER
(Charleston, S.C.), Jan. 16, 2007, at A1.
80 Id. Its publicity-attracting skills do have some limits. In late October 2006 one of
the authors (Greely) was taped by the Today Show for a segment on No Lie MRI. The firm
was going to do a truth assessment on camera for Today, supposedly of a woman who wanted
to prove to her husband that she was sexually faithful. Several days after the taping, the
author was told that the segment had been cancelled because the woman had changed her
mind.
81 CEPHOS Corporation Home Page, http://www.cephoscorp.com/index.html (last
visited July 6, 2007).
82 CEPHOS Corporation, http://www.cephoscorp.com/management.htm (last visited
July 6, 2007).
83 CEPHOS Corporation, http://www.cephoscorp.com/ (last visited July 6, 2007).

In September 2005, the company announced that it intended to begin
selling fMRI-based lie detection in 2006. 84 At a conference at Stanford in
March 2006, Laken stated that the firm was waiting for the results of
additional clinical trials. 85 If those were successful, as he anticipated they
would be, CEPHOS would begin operations that summer. 86 The clinical trials
appear to have been less than fully successful, as the firm now says, "CEPHOS
continues to test and validate the technology with the goal of achieving 95%
accuracy. Based on valid clinical results in 2006, the company intends to offer
this service in the first half of this year."87
At this point, the future of CEPHOS, like that of No Lie MRI, is unclear.
It is striking, though, how differently the two companies present themselves.
CEPHOS gives the impression, certainly to the authors of this article, of being
genuinely interested in accurate testing, the responsible use of the technology,
and a long-term business. 88 It is not as clear to us that No Lie MRI has an
appreciation of the difficulties of this kind of business.
In addition to these two companies, other firms are exploring fMRI-based
lie detection with less visibility. A search through publicly accessible
databases of Department of Defense websites for small business funding
reveals research and development projects with the same technology and the
security-related market as the focus. 89 We would welcome more public
discussion by firms of their work and plans for fMRI-based lie detection.

C. fMRI-BASED LIE DETECTION-THE RESEARCH


Through early February 2007, twelve peer-reviewed scientific articles
have been published that describe experiments with fMRI-based lie detection.
Three have come from the laboratory of Daniel Langleben of the University of
Pennsylvania, whose technology has been licensed to No Lie MRI. 90 Another

84 Cephos Corporation to Offer Breakthrough Deception Detection Services Using fMRI Technology with over 90% Accuracy in Individuals, BUSINESS WIRE, Sept. 27, 2005.
85 Steven Laken, CEO, CEPHOS Corporation, Remarks at the Stanford Center for Law
and the Biosciences Conference, Reading Minds: Lie Detection, Neuroscience, Law, and
Society (Mar. 10, 2006). Both authors spoke at this conference, which Professor Greely
organized, and heard Dr. Laken's presentation.
86 Id.
87 CEPHOS Corporation Home Page, http://www.cephoscorp.com/index.html (last
visited July 6, 2007). See also Press Release, Steven Laken, CEO, CEPHOS Corporation,
CEPHOS' CEO Speaks on Commercial Testing (Dec. 2006),
http://www.cephoscorp.com/cephos_comm_testing_20061215%20V2.pdf.
88 Our impressions are based on Dr. Laken's presentation at the Stanford conference
and Dr. Kozel's commentary on a paper Judy Illes delivered on January 9, 2007 at the
University of Texas, Dallas, entitled "The Ethics of Brain Imaging: Authenticity, Bluffing and
the Privacy of Human Thought." Stanford Center for Law and the Biosciences Conference,
Reading Minds: Lie Detection, Neuroscience, Law, and Society (Mar. 10, 2006); Judy Illes,
Associate Professor (Research) of Neurology, Stanford University, The Ethics of Brain
Imaging: Authenticity, Bluffing and the Privacy of Human Thought, Lecture given at the
University of Texas, Dallas (Jan. 9, 2007). Greely also had e-mail correspondence with Kozel
during 2006 in which he was impressed with Kozel's concern about the ethical implications of
this technology.
89 Department of Defense, Small Business Innovation Research, Small Business
Technology Transfer, http://www.acq.osd.mil/osbp/sbir/ (last visited July 6, 2007).
90 Langleben also describes his results in a peer-reviewed article he co-authored in the
American Journal of Bioethics, but it does not discuss any results not published in his other
work. Wolpe, Foster & Langleben, supra note 3, at 39.

three have been published by Andy Kozel, formerly at the Medical University
of South Carolina and now at the University of Texas Southwestern Medical
Center, whose technology is being developed by CEPHOS Corporation. The
remaining six have come from laboratories all around the world. Of the
twelve papers, only three (two by Langleben and one by Kozel) attempt to
assess differences in truthful and deceptive responses within an individual; all
the others look only at averaged group data. We will focus on the Langleben
and Kozel reports, as those are the basis for the methods being used by the
two known commercial firms. This section will summarize briefly each of
these papers.
Of course, one might object that not all research is published and not all
published research appears in peer-reviewed journals; there might be other
evidence about the efficacy of these methods. Nevertheless, we can only assess
the evidence we have. No one can be expected to accept the kind of dramatic
change in our society that accurate fMRI-based lie detection could bring
without solid, public proof. We find very little in the peer-reviewed literature
that even suggests that fMRI-based lie detection might be useful in real-world
situations and absolutely nothing that proves that it is.

1. Studies by Langleben and Colleagues


Langleben's first paper on fMRI-based lie detection appeared in early
2002 in NeuroImage. 91 This article reported a study of eighteen subjects, all
recruited from the University of Pennsylvania "community."92 The experiment
used three playing cards as targets: the two of hearts, the five of clubs, and the
ten of spades. Ten other cards were included in the experiments but were not
targets. While in the scanner, the subjects were shown the pictures of cards,
one at a time, through a mirrored reflection of a screen at their feet. The three
target cards were displayed a total of sixteen times each; the ten non-target
cards were each displayed twice. The cards were presented for three seconds
each with twelve seconds between cards, so the total time of the trial was
twenty-two minutes. At the top of the image for all the cards except the ten of
spades was the question "Do you have this card?" Above the ten of spades was
the question "Is this a Ten of Spades?"
Before the scanning started, each subject was asked to choose one of three
sealed envelopes. Each envelope contained a five of clubs and a twenty-dollar
bill. Subjects were told to lie about having the card they found in the
envelope, but were told that they could keep the money unless they lied about
any other card they saw. Thus, they were to report accurately that they did
not have any of the cards other than the five of clubs, to answer falsely that
they did not have the five of clubs, and to say honestly that the ten of spades
was, in fact, a ten of spades. The goal was to look for differences in brain

91 Daniel D. Langleben et al., Brain Activity During Simulated Deception: An Event-Related Functional Magnetic Resonance Study, 15 NEUROIMAGE 727 (2002).
92 The experiment started with twenty-three subjects, eleven men and twelve women,
all healthy and right-handed. Their average age was thirty-two and their average number of
years of education was sixteen. Five subjects were excluded: four because they moved too
much inside the scanner and one for giving the "wrong" answer every time about the five of
clubs (i.e., telling the truth). The paper does not report demographic information on the
eighteen who actually completed the trial successfully.

activation patterns between when the subjects honestly said they did not have
the two of hearts, and when they dishonestly said they did not have the
five of clubs. Thus, each of the eighteen subjects was being examined on
having told the truth sixteen times and having lied sixteen times. The other
cards-the non-target cards-were used to help keep the subject's attention
and to make sure they were actually reading the question at the top of the
card (the purpose of the ten of spades).
When the researchers averaged the results of all eighteen subjects, they
found two regions with statistically significant increases in activation when
the subjects were lying about the five of clubs. No areas showed greater
activation when telling the truth about the two of hearts. The first region
ran from the left anterior cingulate cortex to the medial aspect of the right
superior frontal gyrus. The second region "is a 91-voxel cluster, U-shaped
along the craniocaudal axis, extending from the border of the prefrontal to the
dorsal premotor cortex ... and also involving the anterior parietal cortex from
the central sulcus to the lower bank of the interparietal sulcus ...."93 The
authors pointed out that part of the first region is known to be activated when
someone stops himself or herself from responding in the easiest way. They
speculated that the second region may be involved in providing additional
help in overcoming the first response to answer truthfully. The paper did not
report on any differences between truth-telling and lying for individuals, but
only looked at the overall averages.
Langleben reported on another experiment in a paper published in
Human Brain Mapping in 2005. 94 According to its acknowledgements, this
paper was based in part on work done with funding support from the Defense
Advanced Research Projects Agency (DARPA). This study was done with
twenty-six right-handed male undergraduates, twenty-two in the initial phase
and four more as part of a validation study at the end. The experiment also
presented the subjects with pictures of playing cards while being scanned with
a promised twenty-dollar reward for "success," but with some differences from
the previous experiment.
The subjects again received a sealed envelope with a twenty-dollar bill,
but this time it contained two cards, a five of clubs and a seven of spades.
Langleben told the subjects to lie consistently about one of the two cards, but
he left the decision about which card up to them. The experiment also used a
"recurrent distracter" (the two of hearts), a "variable distracter" (all of the
other number cards from each suit), and a null card (where only the back of
the card was shown). Inside the scanner, the subjects saw a card for two
seconds followed by somewhere between zero and sixteen seconds of the null
card. During each subject's session, they saw the truth, lie, and recurrent
distractor cards (five of clubs, seven of spades, and two of hearts) twenty-four
times each and saw a variable distractor card 158 times, for a total of 230
responses. Each time they were asked whether they had that card. A subject
responding as instructed would say "no" 206 times (to the lie card, the
recurrent distractor, and the variable distractors) and say "yes" twenty-four

93 Langleben et aI., supra note 91, at 730.


94 Daniel D. Langleben et al., Telling Truth from Lie in Individual Subjects with Fast
Event-Related fMRI, 26 HUM. BRAIN MAPPING 262, 262 (2005).

times (to the truth card). The whole test session lasted just under fifteen
minutes.
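As a check on the arithmetic, the reported session totals (230 responses, of which 206 were "no" answers and "yes" was pressed only for the truth card) imply 158 variable-distractor presentations. A minimal sketch of the tally:

```python
# Trial counts for the 2005 Langleben session, reconstructed from the
# reported totals. Truth, lie, and recurrent-distractor cards each
# appeared twenty-four times; the variable-distractor count follows.
truth = lie = recurrent = 24
total_responses = 230
variable = total_responses - (truth + lie + recurrent)
yes_answers = truth                      # "yes" pressed only for the truth card
no_answers = lie + recurrent + variable  # every other card is denied
print(variable, yes_answers, no_answers)  # 158 24 206
```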
When analyzed as group averages, the results showed several areas of
different activation between truth and lie. These areas included some, but not
most, of the areas reported in Langleben's earlier article. The researchers
noted that the results contradict some of the assumptions of the earlier paper.
The researchers then looked at individual results, using a logistic regression
model. They used regions of interest identified in the group study to create a
model that, applied to those twenty-two subjects, was 78% accurate (with a
specificity of 76% and a sensitivity of 80%) at being able to tell, for an
individual subject, when he was lying and when he was telling the truth. To
validate that model, they then applied it to four new subjects, also healthy,
right-handed male undergraduates, who were scanned under the same tests.
For these four subjects, they were able to distinguish true answers from false
ones 76.5% of the time, with a sensitivity of 69% and a specificity of 84%.
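The figures reported here combine two error rates. A minimal sketch of how sensitivity, specificity, and overall accuracy are computed from a confusion matrix (the counts below are illustrative only, chosen to reproduce the in-sample 80% sensitivity and 76% specificity, and are not data from the study):

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity = lie trials correctly flagged; specificity = truth
    trials correctly cleared; accuracy = all correct calls over all trials."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical balanced sample: 100 lie trials and 100 truth trials.
sens, spec, acc = classification_metrics(tp=80, fn=20, tn=76, fp=24)
print(sens, spec, acc)  # 0.8 0.76 0.78
```

With an unbalanced mix of lie and truth trials, the same sensitivity and specificity would yield a different overall accuracy, which is one reason the three numbers are reported separately.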
In 2005, NEURoIMAGE published another lie detection paper from the
Langleben lab, this one with Davatzikos listed as its primary author. 95 The
paper uses the same brain scan results from the twenty-two right-handed
male undergraduate subjects reported in the second Langleben article
discussed above, but it uses a different method of data analysis. This
approach is called a "high-dimensional non-linear pattern classification
method." Using this statistical method, they report being able to distinguish
when the individual subjects were lying or telling the truth just under 90% of
the time (90% specificity, 85.8% sensitivity). The brain regions that appeared
most important in this analysis only overlapped to a limited extent with those
identified in the two other Langleben articles.
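The idea of classifying whole activation patterns can be shown schematically. The sketch below is a toy nearest-centroid classifier over made-up three-voxel vectors; it is not the high-dimensional non-linear method the paper used, and all numbers are invented for illustration:

```python
import math

def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(xs) / n for xs in zip(*vectors)]

def classify(pattern, lie_centroid, truth_centroid):
    """Label a scan by whichever class centroid it lies nearer to."""
    if math.dist(pattern, lie_centroid) < math.dist(pattern, truth_centroid):
        return "lie"
    return "truth"

# Hypothetical training patterns (made-up activation values).
lies = [[1.2, 0.9, 0.4], [1.0, 1.1, 0.5]]
truths = [[0.3, 0.4, 0.2], [0.2, 0.5, 0.1]]
label = classify([1.1, 1.0, 0.45], centroid(lies), centroid(truths))
print(label)  # lie
```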
In summary, Langleben's peer-reviewed publications report on two
experiments, both of which involve lying about a playing card. The
experiments involved exactly forty-four subjects, the majority of them male
undergraduates. It is also noteworthy that the experimental design used in
the second experiment (the subject of the last two Langleben papers) was such
that subjects only pressed the "yes" button when truthfully reporting the card
they held. In examining the contrast between the truth card (yes) and the lie
card (no), it is quite possible that the researchers were, in part, seeing an
effect of the unusual occurrence (24 out of 230 times) of the subjects' need to
push "yes" instead of "no," which casts doubt on the validity of this work. 96

2. Studies by Kozel and Colleagues


Kozel also has three peer-reviewed publications, labeled a pilot study, a
study, and a replication study. The pilot study was published in 2004 in the

95 C. Davatzikos, K. Ruparel, Y. Fan, D.G. Shen, M. Acharyya, J.W. Loughead, R.C.
Gur & Daniel D. Langleben, Classifying Spatial Patterns of Brain Activity with Machine
Learning Methods: Application to Lie Detection, 28 NEUROIMAGE 663 (2005).
96 We owe this observation to Nancy Kanwisher, who pointed it out at a symposium at
the American Academy of Arts and Sciences on February 2, 2007. She in turn had it pointed
out to her by Dr. Langleben shortly before that meeting.

Journal of Neuropsychiatry and Clinical Neurosciences. 97 For the pilot study,
Kozel scanned eight subjects, all healthy, right-handed young men.
The subjects were shown two different rooms, a "truth room" and a
"deception room." In each room, the subject was told to find a fifty-dollar bill
hidden under one of five specified objects. They were told to remember under
what object the money was hidden, but to leave the money in place. Inside
the MRI, the subjects were shown all five objects, first in one room, then in
the other. When shown objects from the truth room, subjects were told to
raise one finger if that object did not hide the money and two fingers if it did.
For pictures from the deception room, the subjects were told to pick one
object that had not hidden the money and hold up two fingers when it was
shown, while holding up one finger for the object that had hidden the money
(always a hat) as well as for the other three objects. They would see the
objects in each room four different times, which means that they would lie
eight times: four times when they said the money was not under the correct
object in the deception room and four times when they said that it was under
an incorrect object in the deception room.
accurately reporting the truth room object and another fifty dollars for
successfully lying about the deception room object. The study looked at both
the subjects' fMRI responses and "electrodermal activity," that is, the
sweatiness of the skin (a major component of the polygraph).
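The lie count in this design can be tallied explicitly; a small sketch (the object names are illustrative):

```python
rounds = 4  # each room's objects were shown four times
# In the deception room, each pass produces two lies: denying the true
# hiding place (the hat) and affirming the one chosen incorrect object.
lies_per_round = 2
truth_room_lies = 0  # all truth-room answers were honest
deception_room_lies = rounds * lies_per_round
print(truth_room_lies + deception_room_lies)  # 8
```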
The researchers did both group and individual analyses of the data. They
predicted activation in three regions: the orbitofrontal cortex, the anterior
cingulate, and the amygdala. They found statistically increased activation
when lying in the first two, as well as in three other regions (the superior
temporal gyrus, the cerebellum, and the frontal gyrus), but did not see
increased activation in the amygdala. They found variable patterns of
activation when they looked at individual results; one of the eight subjects
showed no regions of differential activation when lying and the other seven
showed "diverse activation patterns." The researchers did find correlation
between the patterns of electrodermal activity and regions of brain activation,
both on average and individually.
Kozel published his second article on this topic in 2004 in Behavioral
Neuroscience. 98 The subjects were healthy right-handed men and women
between eighteen and forty years old, recruited by an advertisement in a
medical school newspaper. Kozel's team recruited fourteen subjects but only
ten provided useable results.
This time, the subjects were only shown one room, which contained six
objects. Fifty-dollar bills were hidden under two of the objects. The subjects
were again told to find the money but leave it in place. They were then
scanned in an MRI and shown pictures of the six objects in the room, in

97 F. Andrew Kozel et al., A Pilot Study of Functional Magnetic Resonance Imaging
Brain Correlates of Deception in Healthy Young Men, 16 J. NEUROPSYCHIATRY & CLINICAL
NEUROSCIENCES 295, 295 (2004). This lie detection work has been under way for some time.
The authors presented an abstract of their fMRI results from this paper in December 2000.
The paper was submitted in June 2002 and accepted in January 2003, but not published until
summer 2004.
98 F. Andrew Kozel, Tamara M. Padgett & Mark S. George, Brief Communications: A
Replication Study of the Neural Correlates of Deception, 118 BEHAV. NEUROSCIENCE 852
(2004).
NEUROSCIENCE-BASED LIE DETECTION 399

random order. They were to press one button to signify that money was under
an object and another button to say that it was not. They were to tell the truth
about one of the objects hiding the money, lie about the other object hiding
the money, and falsely say that a third object was concealing money when it
was not. The subjects chose which objects to lie about. This continued
through twenty iterations, so each object was shown twenty times. Unlike the
pilot study, this study used an MRI system with greater field strength than
before (a 3-Tesla magnet compared to 1.5-Tesla), more cases of deception
(forty, or two in each of the twenty iterations, instead of eight), and a few
more subjects (ten instead of eight).
The researchers did a group analysis of the results and found increased
activation in five areas when the subjects were lying and no areas of increased
activation when subjects were telling the truth. Of the eleven areas with
significantly increased activation during lying, five were areas identified in the
pilot study. The investigators again found quite varied patterns of activation
among individual subjects, although, when they defined regions broadly, they
found that seven of the ten had increased activation in the right prefrontal
cortex.
The third Kozel article on fMRI-based lie detection was published in
Biological Psychiatry in 2005. 99 This used a quite different approach. First,
the researchers recruited thirty healthy unmedicated adults from the local
university community, between the ages of eighteen and fifty, to be part of the
"model building group." Unlike the other studies, this experiment contained a
few people who were left-handed or had mixed-handedness. The subjects
were taken to a room and told to "steal" either a ring or a watch from a
drawer. When scanned, they were asked whether they had taken the ring or
had taken the watch, along with two control questions. Each of the four
questions was asked twenty times. The subjects were instructed to deny
taking either object. They were told they would receive an extra fifty dollars if
their lie was not detected.
The experimenters first tested thirty subjects whose results were used to
build a model that would distinguish, for those individuals, who was lying and
who was telling the truth. They then tested thirty-one different subjects, with
the same test, and applied the initial model. The subjects were more diverse
than in many of the previous experiments. Only about 20% of them were
students and a substantial number (six in the model building group and
twelve in the testing group) were African-American. On average, they had
completed over sixteen years of education and all of them had at least
completed high school.
Group analysis in the model-building group showed significant increased
activation when lying in seven clusters of brain regions, including five the
researchers had seen before. They focused on three clusters for building their
model.100 When the researchers looked at activation in those three clusters

99 F. Andrew Kozel et al., Detecting Deception Using Functional Magnetic Resonance
Imaging, 58 BIOLOGICAL PSYCHIATRY 605 (2005). Steven Laken, the chief executive officer of
CEPHOS Corporation, is an author of this paper.
100 The researchers labeled these three clusters of brain regions the first, second, and
fourth out of the seven of special interest. The first cluster included the right and left anterior
cingulate cortex, the right middle cingulate cortex, the right and left superior medial frontal
combined, they were able to determine accurately which object twenty-eight
of the thirty subjects in the model-building group had taken. When they
applied the same model to the thirty-one subjects in the model-testing group,
they were able to detect lying just over 90% of the time. In addition, when
they questioned the subjects after the test, several of them (seven in the first
group, six in the second) told the researchers that they had tried, on their own,
some methods they thought might fool the lie detector-pretending to
themselves that they had not taken it, thinking about some other location,
changing their breathing pattern, or intentionally slowing their answers. The
subjects who had used these self-created countermeasures were not more
successful in evading detection than the subjects who did not.
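The two-stage design in this third study (fit a model on one group, then apply it unchanged to an independent group) is a standard supervised-classification workflow. The sketch below illustrates only that structure; the activation values, the three-cluster "feature," and the threshold rule are invented stand-ins, not Kozel's actual data or statistical model.

```python
import random

random.seed(0)

def simulate_subject(lying):
    # Hypothetical mean activation in three clusters; lying adds a small shift.
    shift = 0.5 if lying else 0.0
    return [random.gauss(shift, 1.0) for _ in range(3)]

def make_group(n_subjects):
    # Each subject contributes one lying and one truthful trial vector.
    data = []
    for _ in range(n_subjects):
        data.append((simulate_subject(True), 1))
        data.append((simulate_subject(False), 0))
    return data

def fit_threshold(data):
    # Toy "model": sum activation across the three clusters and split the
    # classes at the midpoint between the class means.
    sums = {0: [], 1: []}
    for features, label in data:
        sums[label].append(sum(features))
    avg = lambda v: sum(v) / len(v)
    return (avg(sums[0]) + avg(sums[1])) / 2.0

def accuracy(data, threshold):
    hits = sum((sum(f) > threshold) == bool(label) for f, label in data)
    return hits / len(data)

building = make_group(30)   # analogue of the model-building group
testing = make_group(31)    # analogue of the independent testing group
threshold = fit_threshold(building)
print(f"held-out accuracy: {accuracy(testing, threshold):.2f}")
```

The essential point the code captures is that the testing group never influences the fitted threshold, which is what makes the reported held-out accuracy meaningful.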

3. Other Articles
We found six other papers in the peer-reviewed literature that described
experimental tests of fMRI-based lie detection. Each involved only group
results and none to our knowledge is currently being commercialized. We
summarize these briefly next.
The first published fMRI-based lie detection paper was published in
Brain Imaging: NeuroReport in 2001, with Sean A. Spence from the
University of Sheffield as its first author.101 The experiment included thirty
people. Before being tested, the subjects were asked thirty-six questions about
their activities that day (for example, did they make their beds). These
questions were then re-asked with each subject told to tell the truth if the
"yes" and "no" answers were displayed in one color (either green or red), and
to lie if they were shown in the other. The tests were done twice with the
questions displayed in written form and twice with the questions spoken (in
both cases the "yes" and "no" answers were presented visually). All thirty
subjects were measured for their reaction times. Ten of the subjects, all
healthy, right-handed, males between twenty-three and twenty-five years old,
then did the same experiment while being scanned.
On average, the subjects, in the scanner and out, took about 200
milliseconds (eight to twelve percent) longer to lie than to tell the truth. The
scanned subjects showed statistically significant increases in activation in
various brain regions when lying than when telling the truth.
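As a back-of-envelope check on the quoted figures (the arithmetic is mine, derived from the numbers above): if a roughly 200-millisecond lying delay amounts to eight to twelve percent of the truthful response time, the implied truthful baseline is about 1.7 to 2.5 seconds.

```python
# If lying adds ~200 ms and that delay is 8-12% of the truthful response
# time, solve for the implied truthful baseline. Figures from the text.
delay_ms = 200
for pct in (8, 12):
    baseline_s = delay_ms / (pct / 100) / 1000
    print(f"{pct}% -> implied truthful baseline ~{baseline_s:.2f} s")
```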
Tatia M.C. Lee from the University of Hong Kong is the first author in
two other published studies of lie detection. Her first study, in Human Brain
Mapping, appeared in 2002. 102 The scanning experiments were performed on
six right-handed male subjects, all of whom were native Mandarin speakers in
their thirties from mainland China. The subjects were told to feign memory
impairment in two trials. In one trial, they were asked whether two three-

cortex, and the right supplementary motor area. The second cluster included the right
orbitofrontal cortex, the right inferior frontal cortex, and the right insula. The final cluster
comprised the right middle frontal cortex and the right superior frontal cortex. Id. at 608.
101 Sean A. Spence et al., Behavioural and Functional Anatomical Correlates of
Deception in Humans, 12 BRAIN IMAGING: NEUROREPORT 2849 (2001).
102 Tatia M.C. Lee et al., Lie Detection by Functional Magnetic Resonance Imaging, 15
HUM. BRAIN MAPPING 157 (2002) (manuscript was received by the journal two months before
Spence's earlier-published article had been received and, in that sense, may be the earliest of
these experiments).
digit numbers, shown one after another, were identical; in the second trial,
they were asked questions about their personal history, such as where they
were born. Each trial was done four times with the subjects instructed to
answer truthfully one time, answer falsely as successfully as they could,
answer falsely badly, and answer randomly. After averaging the results of the
last five subjects, the researchers found four broad regions of the brain with
increased activation during lying. 103
Lee's second paper, published in 2005 in NeuroImage,104 studied three
cohorts of subjects, totaling twenty-eight individuals, to see if regions of
greater activation during lying varied by gender or mother language. The first
cohort contained eight Chinese men; the second contained seven men and
eight women, all Chinese; the third trial looked at six Caucasian monolingual
English speakers. All the subjects were right-handed. Subjects were again
asked to answer slightly different questions in one of four ways: truthfully,
falsely but well, falsely but poorly, or randomly. After averaging the results
within the first and third cohorts, and separately in the second cohort between
men and women, the researchers found significant increases in activation
during lying in the same broad regions as in their earlier work, regardless of
the subject's sex or mother tongue.
Giorgio Ganis and colleagues from Harvard published the results of
another lie detection experiment in Cerebral Cortex in 2003. 105 In this study,
the investigators compared memorized lies that fit into a coherent story with
spontaneous lies that did not fit such a story. A total of ten subjects (seven
women and three men) were asked to quickly make up and tell lies about their
most memorable real vacations and work experiences. Then they were told to
give a memorized lie that was part of a coherent story. When the subjects'
results were averaged, both lies led to more activation in several areas than
telling the truth did, but the two different kinds of lies also showed
significantly different activation patterns when compared.
Jennifer Maria Nunez and her colleagues from Cornell published a study
of twenty subjects in NeuroImage in 2005.106 This experiment involved ten
women and ten men, all healthy, right-handed young adults. The subjects
first gave honest answers to seventy-two yes or no questions two days before
the scanning. The questions included both autobiographical ("do you
own a laptop computer?") and non-autobiographical ("are laptop computers
portable?"). Subjects answered each question once truthfully and once falsely,
while being scanned. On average, eight regions were more active when lying
than when telling the truth; seven regions were more active when answering
autobiographical questions. No regions were more active during the averaged
honest responses or the averaged non-autobiographical answers.

103 Lee's group reported "four principle regions of brain activation: prefrontal and frontal,
parietal, temporal, and sub-cortical." Id. at 161.
104 Tatia M.C. Lee et al., Neural Correlates of Feigned Memory Impairment, 28
NEUROIMAGE 305 (2005).
105 G. Ganis et al., Neural Correlates of Different Types of Deception: An fMRI
Investigation, 13 CEREBRAL CORTEX 830, 830 (2003).
106 Jennifer Maria Nunez et al., Intentional False Responding Shares Neural Substrates
with Response Conflict and Cognitive Control, 25 NEUROIMAGE 267 (2005).
The most recent article was published in Radiology in 2006 by Feroze B.
Mohamed and colleagues from Temple and Drexel Universities.107 The eleven
subjects were healthy non-drug users, five women and six men with an
average age of twenty-nine. One subject was left-handed. The subjects were
scanned and polygraphed under two different scenarios. In both scenarios
someone had fired a gun in the hospital and that person, recorded by a
security camera, looked like the subject. The subjects actually did fire a
starter pistol loaded with blanks in a testing room in the functional
neuroimaging center. Five subjects were told they were guilty and should lie;
six were told they were innocent and should cooperate with anyone who came
to investigate. All subjects were questioned both under polygraph
examination and while in an fMRI machine. Each group (guilty and
innocent) was averaged. Fourteen regions showed significantly increased
activation during lying, while seven showed increased activation when
subjects told the truth. The polygraph when used on individual subjects was
fairly accurate at detecting lying but less accurate at detecting truth.

D. fMRI-BASED LIE DETECTION-EVALUATING THE RESEARCH


These twelve peer-reviewed articles establish fMRI-based lie detection as
a promising technology. They do not prove that it is currently effective as a lie
detector in the real world, at any accuracy level, let alone the 80 to 90% levels
being claimed. At least six different issues raise concern about these results:
the small number of studies with individual effects, the lack of replication, the
small and nondiverse groups of subjects, the inconsistency of reported regions
of activity, the artificiality of the deceptive tasks, and the lack of attempted
countermeasures.
Of these twelve studies, only three deal at all with determining whether or
not individuals are lying. Information that, on average, a group of twelve
people showed significant activation in a particular region does not tell us
how many of the individual subjects showed activation in that region. It could
have been all, many, or only a few. This does not mean these group studies
were bad experiments; the researchers did not claim to be testing individual
lie detection but instead were looking for broad similarities that might
indicate some localization in the brain of lying.
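The group-versus-individual point can be made concrete with a toy example (the twelve per-subject effect values below are invented, not taken from any of the studies): a respectable group-average activation difference can be driven entirely by a minority of subjects.

```python
# Toy illustration of the group-vs-individual gap: a clear *average*
# activation difference can coexist with most individuals showing none.
# The per-subject effect sizes below are invented for illustration only.
effects = [2.5, 2.0, 1.8, 0.1, 0.0, -0.1, 0.05, 0.0, -0.05, 0.1, 0.0, 0.1]

group_mean = sum(effects) / len(effects)
showing_effect = sum(1 for e in effects if e > 0.5)

print(f"group mean effect: {group_mean:.2f}")
print(f"subjects with a clear effect: {showing_effect} of {len(effects)}")
```

Here the group mean is comfortably positive even though only three of twelve hypothetical subjects show a clear effect, which is exactly why a significant group average cannot certify individual lie detection.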
The second problem is the lack of replication of the results by any other
laboratories. As is common with fMRI research today, each laboratory tried
its own experiments and used its own analytical methods. The only
experiments that can be said to have been replicated are the two Langleben
experiments (even there with some differences in the tests) and the first two
Kozel experiments (the third Kozel experiment used a very different model).
Only one of those-the second Langleben study-dealt with individual results.
A good rule of thumb is to never believe a result until at least one investigator
from outside the original group confirms it. Lie detection through fMRI does
not pass this test.

107 Feroze B. Mohamed et al., Brain Mapping of Deception and Truth Telling about an
Ecologically Valid Situation: Function MR Imaging and Polygraph Investigation - Initial
Experience, 238 RADIOLOGY 679 (2006).
A third concern is the number and diversity of subjects.108 The
experiments used healthy young adults, almost all right-handed,109 with little
gender or ethnic diversity. No one tested children, the middle-aged or elderly,
those with physical or mental illnesses, or those taking drugs, either as
medication or illicitly.
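The sample-size observation in note 108 can be sketched numerically (the effect size and spread below are assumed, not drawn from the studies): for the equal-n, equal-variance two-sample t statistic, t = diff / (sd * sqrt(2/n)), the same hypothetical effect yields a larger t, and hence an easier path to significance, as n grows.

```python
import math

# Same hypothetical effect (mean difference 1.0, within-group SD 1.5) run
# through the two-sample t statistic with equal group sizes: more subjects
# means a larger t statistic. All numbers are illustrative assumptions.
def t_stat(mean_diff, sd, n_per_group):
    return mean_diff / (sd * math.sqrt(2.0 / n_per_group))

print(f"n=8 per group:  t = {t_stat(1.0, 1.5, 8):.2f}")
print(f"n=30 per group: t = {t_stat(1.0, 1.5, 30):.2f}")
```

This is the sense in which a small study is a "stiffer test": the identical effect that clears conventional thresholds at n=30 may fall short at n=8.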
Fourth, all of the relevant experiments report finding activation of various
regions of the brain (sometimes defined narrowly, sometimes broadly).
Together they find activation in many different areas of the brain without
strong consistency among the experiments, except when brain regions are very
broadly defined.110 The cortical areas activating in these lying and
deception tests include anterior prefrontal area, ventromedial prefrontal area,
dorsolateral prefrontal area, parahippocampal areas, anterior cingulate, left
posterior cingulate, temporal and subcortical caudate, right precuneus, left
cerebellum, insula, putamen, caudate, thalamus, and regions of temporal
cortex. The activation of many of these regions is known to be correlated with
a wide range of cognitive behaviors, including memory, self-monitoring,
conscious self-awareness, planning and executive function, and emotion. This
diversity casts some doubt on the accuracy of any particular method of lie
detection.
A fifth problem, and perhaps the greatest, is the artificiality of the
deceptive tasks. Most of the experiments involved subjects lying about
something unimportant-what card they held or whether they could
remember a three-digit number. Only the Kozel paper, involving an
instructed "theft" of a ring or a watch from a room, and the Mohamed paper,
involving the gun firing, seemed close to more typical real-world lie detection
situations. Of course, in those cases, as in every other experiment, the subject
telling a "lie" was following an instruction to tell a lie (and, in the Kozel
experiment, the subjects "stealing" the objects were following instructions to
"steal" them, knowing it was part of an experiment and not a real theft).
Sometimes the subjects were told which lie to tell, other times they got to
choose which of two conditions to lie about, but always they were acting not
just with permission to lie, but under a command to do so. It is not clear how
this difference from the more usual lie detection settings would affect the

108 It is not entirely clear whether the number of people tested is important apart from
its likely relationship to the diversity of people tested. One might argue that, apart from being
less diverse, studies with smaller numbers of subjects provide a stiffer test for fMRI-based lie
detection. All other things being equal, it is harder to establish any given level of statistical
significance with a smaller number of subjects than with a larger number (we owe this insight
to Nancy Kanwisher). As all other things are not always equal, larger sample sizes are still
preferable.
109 Brain imaging researchers often prefer to use right-handed subjects. Some brain
functions are found to be located in different places in right-handed and non-right-handed
people. Although there may be no reason to suspect that any particular function (not related
to movement) will correlate with different regions in people with different handedness,
limiting test subjects to those with one handedness removes that possible confounding factor.
The vast majority of people are strongly right-handed, so it is simpler to use them as subjects.
Although there is no evidence and no particular reason to think that non-right-handed people
would show different areas of activation while lying, there is almost no evidence that they
would not.
110 Langleben's own experiments showed significant activations in different regions.
Langleben, supra note 94.
results. Although the researchers often led the subjects, falsely, to believe
that their success in lying would earn them more money (in fact, researchers
with that design paid all the subjects the "extra" money), it is also not clear
that this apparent monetary incentive would affect the subjects the same way
as the more common-and more powerful-incentives for lying, such as
avoiding arrest.
The context points to a deeper problem with the artificiality of the
situation-the researchers assume that whatever kind of "lie" they are having
the subjects tell is relevant to the kinds of lies people tell in real life. But those
lies vary tremendously.111 Are lies about participation in a crime the same as
lies about the quality of a meal or the existence of a "prior engagement"? Do
lies about sex activate the same regions of the brain as lies about money, lies
to avoid embarrassment, or lies about the five of clubs? Do lies of omission
look the same under fMRI as lies of commission? We do not know the
answers to these, or many other questions-and neither do the researchers
who published these papers. This is not a criticism of the researchers, as
scientists have to start somewhere and a well-defined situation is essential for
analysis. It is likely to be difficult, and perhaps even impossible, to create
good tests of real-world lies. It is, however, a criticism of any attempt to apply this
research to the real world without a great deal more work.
All of the concerns discussed so far are reasons to doubt that these
experiments did, in fact, prove that one can detect real world lies through
fMRI. The last concern is slightly different. Even if the studies had proven
that proposition, they did not begin to prove that the method could actually be
effective because they did not exclude the very real possibilities that subjects
could use countermeasures against fMRI-based lie detection.112
The use of countermeasures to polygraphy has been discussed
substantially in the past and has even been the subject of some limited
research. The National Academy panel on the polygraph spent ten pages on
countermeasures. The panel concluded:
If these measures are effective, they could seriously undermine
any value of polygraph security screening. Basic physiological
theory suggests that training methods might allow individuals to
succeed in employing effective countermeasures. Moreover, the
empirical research literature suggests that polygraph test results
can be affected by the use of countermeasures.113
Countermeasures to fMRI-based lie detection could use a wide range of
methods. At one extreme, we know a subject can make an fMRI scan useless.
Simple movements of the tongue or jaw will make fMRI scans unreadable.
Movements of other muscles will introduce new areas of brain activation,

111 Illes, supra note 3.


112 Only one of the studies addressed countermeasures. In the third Kozel experiment,
the subjects were asked whether they had tried any countermeasures. Several had and their
efforts seemed to make no difference. These, however, were countermeasures thought up on
the spot by the subjects and not necessarily measures the researchers, or others knowledgeable
about the test methods, would have thought plausible. Kozel et al., supra note 99. This may
be a decent proxy for interrogation of a naive person using this method, but it is surely
plausible that the liars of most interest will also be the best prepared with countermeasures.
113 NRC, supra note 30, at 151.
muddying the underlying picture. Even less visibly, simply thinking about
other things during a task may activate other brain regions in ways that
interfere with the lie-detection paradigm.
If, as some think, lying is detectable because it is harder than telling the
truth and thus requires the activation of more or different areas of the brain, a
subject could try doing mental arithmetic or memory tests while giving true
answers, thus, perhaps, making true answers harder to distinguish from false
ones. Similarly, a well-memorized lie may not activate those additional
regions and may look like a truth. The Ganis paper, discussed above, actually
reported differences between memorized and improvised lies, though it
reported that both were distinguishable on average from the truth.114
This issue of countermeasures is both filled with unknowns and vital. If,
in fact, countermeasures turn out to be effective, the people we may most
want to catch may well be the ones best trained-by criminal gangs, by foreign
intelligence agencies, by terrorists, or others-in countermeasures. Of course,
if the countermeasures are easy enough, "training" may be as simple as a quick
search of the Internet. A quick Google search of "polygraph countermeasures"
already turns up many sites offering information on beating the polygraph,
some free115 and some for payment, including one former polygrapher who
charges $59.95 (plus shipping) for his manual plus DVD.116 If fMRI-based lie
detection becomes common, efforts to beat fMRI-based lie detection will, no
doubt, also become common.

IV. REGULATION OF fMRI-BASED LIE DETECTION


Lie detectors have been around for many years and have attracted
substantial state and some federal regulation. This section reviews the
existing regulation of polygraphy and broader lie detection technologies, and
then will argue that new and stronger regulation is needed as lie detection
moves inside the brain.

A. EXISTING REGULATION OF LIE DETECTION

Existing law in the United States regulates the use of lie detectors in
several ways. First, the federal government and many states limit the use of
lie detectors on employees by their employers (or their agents). Many states
also license and otherwise regulate operators of lie detectors. Finally, all
American courts, state or federal, have positions on the admissibility of at
least one kind of lie detector evidence. Some of those regulations are worded
broadly to apply to lie detection or deception detection, some focus narrowly
on polygraphs, and some are in between. This section will briefly survey all of
those regulations; those that are aimed narrowly at polygraphs, even when not
directly applicable to fMRI-based lie detection, are still useful for showing the
breadth of government interests in this field.

114 Ganis, supra note 105.
115 See, e.g., George W. Maschke & Gino J. Scalabrini, THE LIE BEHIND THE LIE
DETECTOR, ch. 4 (4th digital ed.), available at http://antipolygraph.org/pubs.shtml.
116 See, e.g., Polygraph.com, http://www.polygraph.com/ (last visited July 6, 2007).
1. Statutory Regulation
Lie detectors are not subject to any general regulation requiring that they
be proven effective before they can be used. It is conceivable, but unlikely,
that some methods of lie detection might fall within the jurisdiction of the
Food and Drug Administration (FDA). A novel device might fall under FDA's
control over medical devices; a new molecule, to be used, for example, as a
kind of truth serum, could fall within its power over new drugs. To be subject
to FDA power, the device or drug would have to be "intended for use in the
diagnosis of disease or other conditions, or in the cure, mitigation, treatment,
or prevention of disease in man or other animals" or "intended to affect the
structure or any function of the body of man or other animals . . . ."117 This
would not appear to include a lie detection device or drug, unless it operated
by affecting the "function" of the brain by releasing inhibitions against lying.
Although the MRI, for example, is clearly a medical device, because it is
intended "for use in the diagnosis of disease or other conditions," it has
already been approved. Under the off-label use doctrine, a drug, biologic, or
device approved for one purpose can generally be legally used for any
purpose.118
In the absence of general pre-market regulation of lie detection, the most
important regulatory statute in the field is the federal Employee Polygraph
Protection Act of 1988 (EPPA). 119 Under this Act, almost all employers are
forbidden to "directly or indirectly, [] require, request, suggest, or cause any
employee or prospective employee to take or submit to any lie detector test" or
to "use, accept, refer to, or inquire concerning the results of any lie detector
test of any employee or prospective employee."120 The Act provides a very
broad definition of a lie detector test, including within its scope "a polygraph,
deceptograph, voice stress analyzer, psychological stress evaluator, or any
other similar device (whether mechanical or electrical) that is used, or the
results of which are used, for the purpose of rendering a diagnostic opinion
regarding the honesty or dishonesty of an individual."121 Employers violating
the Act are subject to civil penalties levied by the Secretary of Labor of up to
$10,000 per violation, as well as private suits by those harmed by the
violation. 122
The Act contains a variety of exemptions, notably for employees of federal,
state, and local governments, as well as various contractors, experts, and
others involved in national security or work with the FBI. 123 One exemption
allows employers to make limited use of polygraphs-but not any other forms
of lie detectors-in ongoing investigations. 124 Some of the exemptions,
including the exemption for an employer's ongoing investigations, are

117 Federal Food, Drug, and Cosmetic Act § 201(g)-(h), 21 U.S.C. § 321 (2006).
118 Food and Drug Administration, Institutional Review Board Information Sheets,
http://www.fda.gov/oc/ohrt/irbs/offlabel.html (last visited July 6, 2007).
119 Employee Polygraph Protection Act of 1988, 29 U.S.C. §§ 2001-2009 (2006).
120 29 U.S.C. § 2002(1)-(2) (2006). The section also prohibits employers from taking
action against employees because of their refusal to take a test, because of the results of such a
test, or for asserting their rights under the Act. 29 U.S.C. § 2002(3)-(4) (2006).
121 29 U.S.C. § 2001(3) (2006).
122 29 U.S.C. § 2005 (2006).
123 29 U.S.C. § 2006 (2006).
124 29 U.S.C. § 2006(d) (2006).
conditioned on the provision of specified rights for those being tested.125
These rights include, among others:
(A) the examinee shall be permitted to terminate the test at any
time;
(B) the examinee is not asked questions in a manner designed
to degrade, or needlessly intrude on, such examinee;
(C) the examinee is not asked any question concerning-
(i) religious beliefs or affiliations,
(ii) beliefs or opinions regarding racial matters,
(iii) political beliefs or affiliations,
(iv) any matter relating to sexual behavior; and
(v) beliefs, affiliations, opinions, or lawful activities
regarding unions or labor organizations; and
(D) the examiner does not conduct the test if there is sufficient
written evidence by a physician that the examinee is suffering
from a medical or psychological condition or undergoing
treatment that might cause abnormal responses during the actual
testing phase. 126
The Act also limits disclosure of the test results by both the employer and
the polygraph examiner.127 Finally, the Act expressly provides that it does not
preempt any state or local laws, or collective bargaining agreements that have
added restrictions on lie detector tests. 128
EPPA has seen little activity or discussion since its passage. The Secretary
of Labor adopted extensive regulations for its implementation, many of which
deal with the procedures for imposing civil fines. 129 EPPA has been the
subject of a few law review articles, most of them student notes and comments

125 29 U.S.C. § 2007 (2006).

126 29 U.S.C. § 2007(b)(1) (2006).

127 29 U.S.C. § 2008 (2006).

128 29 U.S.C. § 2009 (2006).

129 29 C.F.R. § 801 (2006).



408 AMERICAN JOURNAL OF LAW & MEDICINE VOL. 33 NO. 2 & 3 2007

published just after its adoption.130 Moreover, it has been discussed in only a
handful of reported court cases.131
If No Lie MRI proceeds on its current path, this may change. It claims, as
noted above, that fMRI-based lie detection is not covered by EPPA.132 Neither
EPPA nor the regulations under it, nor any case law interpreting it, supports
such an interpretation. The statute and the regulation define lie detection
broadly as "a polygraph, deceptograph, voice stress analyzer, psychological
stress evaluator, or any other similar device (whether mechanical or electrical)
that is used, or the results of which are used, for the purpose of rendering a
diagnostic opinion regarding the honesty or dishonesty of an individual."133
No Lie MRI seems to base its argument on the fact that all the methods EPPA
names measure the autonomic nervous system, whereas No Lie MRI
presumes that its method, which goes directly to the central nervous system, is
thus not "any other similar device (whether mechanical or electrical) ... ." 134
In the context both of the statute generally and of that sentence in EPPA,
specifically, this argument borders on frivolous. Congress intended to give a
broad scope to EPPA's definition; it is not surprising that it did not include
fMRI-based lie detection in 1988 as fMRI was not developed until several

130 Ryan K. Brown, Specific Incident Polygraph Testing Under the Employee Polygraph
Protection Act of 1988, 64 WASH. L. REV. 661 (1989); Ching Wah Chin, Protecting Employees
and Neglecting Technology Assessment: The Employee Polygraph Protection Act of 1988, 55
BROOK. L. REV. 1315 (1990); Charles P. Cullen, The Specific Incident Exemption of the
Employee Polygraph Protection Act, 65 NOTRE DAME L. REV. 262 (1990); Brad V. Driscoll, The
Employee Polygraph Protection Act of 1988: A Balance of Interests, 75 IOWA L. REV. 539
(1990); Earl J. Engle, Counseling the Client in the Employee Polygraph Protection Act, 35 PRAC.
LAW. 65 (1989); Peter C. Johnson, Banning the Truth-Finder in Employment: The Employee
Polygraph Protection Act of 1988, 54 MO. L. REV. 155 (1989); Andrew J. Natale, The Employee
Polygraph Protection Act of 1988 - Should the Federal Government Regulate the Use of
Polygraphs in the Private Sector, 58 U. CIN. L. REV. 559 (1989); Kathleen F. Reilly, The
Employee Polygraph Protection Act of 1988: Proper Penalties When Guilty Employees Are
Improperly Caught, 7 HOFSTRA LAB. & EMP. L.J. 369 (1990); Durwood Ruegger, When
Polygraph Testing Is Allowed: Limited Exceptions Under the EPPA, 108 BANKING L.J. 555
(1991); Yvonne K. Sening, Heads or Tails: The Employee Polygraph Protection Act, 39 CATH.
U. L. REV. 235 (1989); Paul D. Seyferth, An Overview of the Employee Polygraph Protection
Act, 57 J. MO. B. 226 (2001).
131 The United States Code Service annotations show only fifteen cases reported in
either the Federal Reporter or the Federal Supplement that discuss this Act. Watson v.
Drummond Co., 436 F.3d 1310 (11th Cir. 2006); Polkey v. Transtecs Corp., 404 F.3d 1264
(11th Cir. 2005); Calbillo v. Cavender Oldsmobile, Inc., 288 F.3d 721 (5th Cir. 2002); Veazey v.
Communications & Cable, Inc., 194 F.3d 850 (7th Cir. 1999); Saari v. Smith Barney, Harris
Upham & Co., 968 F.2d 877 (9th Cir. 1992); Lyles v. Flagship Resort Development Corp., 371
F. Supp. 2d 597 (D.N.J. 2005); Deetjan v. V.I.P., Inc., 287 F. Supp. 2d 80 (D. Me. 2003); Long
v. Mango's Tropical Cafe, 972 F. Supp. 655 (S.D. Fla. 1997); Mennen v. Easter Stores, 951
F. Supp. 838 (N.D. Iowa 1997); James v. Professionals' Detective Agency, 876 F. Supp. 1013 (N.D.
Ill. 1995); Lyle v. Mercy Hosp. Anderson, 876 F. Supp. 157 (S.D. Ohio 1995); Del Canto v. ITT
Sheraton Corp., 865 F. Supp. 927 (D.D.C. 1994); Blackwell v. 53rd-Ellis Currency Exch., 852
F. Supp. 646 (N.D. Ill. 1994); Rubin v. Tourneau, Inc., 797 F. Supp. 247 (S.D.N.Y. 1992).
132 See No Lie MRI, Customers - Corporations, http://www.noliemri.com/customers/

GroupOrCorporate.htm (last visited July 6, 2007).


133 29 U.S.C. § 2001(3) (2006).
134 "U.S. law prohibits truth verification/lie detection testing for employees that is based
on measuring the autonomic nervous system (e.g. polygraph testing). No Lie MRI measures
the central nervous system directly and such is not subject to restriction by these laws. No Lie
MRI is unaware of any law that would prohibit its use for employment screening." No Lie
MRI, Customers - Corporations, http://www.noliemri.com/customers/GroupOrCorporate.
htm (last visited July 6, 2007).

years later and the first experiments with fMRI-based lie detection were not
published until 2000. Furthermore, the company's reading seems to ignore
the structure of the sentence, which covers four named devices "or any other
similar device (whether mechanical or electrical) that is used, or the results of
which are used, for the purpose of rendering a diagnostic opinion regarding
the honesty or dishonesty of an individual."135 This can easily be read to find
the similarity in the use to which the device is put, not in some other similarity
to the four named devices. 136
Several other federal statutes deal specifically with the use of polygraphs
for security purposes within the Defense Department,137 the Energy
Department,138 and more generally in the context of security clearances. 139 In
addition, one federal statute conditions federal grants to help states deal with
domestic violence, sexual assaults, and similar crimes, on states' assurances
that by 2009, their "laws, policies, and practices" will ensure that victims of
such crimes are not asked or required to submit to "a polygraph examination
or other truth telling device ...."140
States have been active in broader ways than the federal government in
regulating lie detection and polygraphy, but the state laws are not particularly
consistent. 141
Twenty-five states and the District of Columbia have their own version of
EPPA. 142 Some of these predated the federal statute; others, passed later,
typically cover some or all of the state and local employees excluded from the
federal act. Interestingly, many of these acts preventing employers from
requiring lie detector tests specifically exclude various law enforcement
officers,143 while others specifically include them144 or are, in fact, limited to
them.145 Some of these state laws restrict polygraphs specifically,146 while
others, like the federal act, cover lie detection more generally.147

135 29 U.S.C. § 2001(3) (2006).


136 No Lie MRI might also try to argue that fMRI is fundamentally a magnetic device
and hence neither mechanical nor electrical, which, given the many mechanical and electrical
aspects of an MRI examination and its analysis, is an even weaker argument - even before
considering the Nineteenth Century's successful unification of electricity and magnetism.
137 10 U.S.C. § 1564(a) (2006).
138 50 U.S.C. §§ 2654-2655 (2006). Section 2654 was passed in the aftermath of the
Wen Ho Lee scandal and requires the Department of Energy to adopt a new polygraph policy.
Thirty days after that new policy is adopted, Section 2655 will be repealed. Interestingly, the
new statute expressly requires the Secretary of Energy to "take into account the results of the
Polygraph Review." NRC, supra note 30; 50 U.S.C. § 2654(b)(2) (2006).
139 50 U.S.C. § 435(b) (2006) (covering security clearances).
140 42 U.S.C. § 3796hh(c)(5) (2006).
141 State laws concerning lie detection or polygraphs are set out in the appendix to this
article.
142 See appendix.
143 See, e.g., ALASKA STAT. § 23.10.037 (2006); CONN. GEN. STAT. § 31-51g (2006).
144 See, e.g., CAL. GOV'T CODE § 3307 (2006); MASS. GEN. LAWS ch. 149, § 19B (2006).
145 See TEX. GOV'T CODE ANN. §§ 411.007, 411.0074, 614.063 (2006) (allowing
polygraph for new police hires but prohibiting polygraphs for existing officers applying for a
new job and banning any adverse actions against police officers for refusing to take a
polygraph).
146 See, e.g., TEX. GOV'T CODE ANN. § 493.022 (2006).
147 See, e.g., CAL. GOV'T CODE § 3307 (2006).

Many state laws deal with other aspects of polygraphy, and, to a lesser
extent, lie detection more broadly.148 Twenty-two states have licensing
schemes for polygraph examiners. 149 Twenty states specifically authorize
polygraph or other lie detection tests for sex offenders as a condition of
probation or parole. 150 Eleven states have already met the federal requirement
for protecting complaining victims of sexual offenses from being required to
take a polygraph test.151 More than thirty other state statutes deal with one
aspect or another of lie detection, from requiring it for some state employees
(typically law enforcement officers), to banning its required use in insurance
claims or welfare applications, to regulating or prohibiting the use of
information from polygraphs by credit reporting agencies.152
The application of any of these statutes to fMRI-based lie detection
requires a careful examination of the language of the law. Some deal only
with polygraphs, but others have broader definitions that would appear to
include fMRI-based lie detection. Several states use very broad language
indeed:
It is the purpose of this chapter to regulate all persons who
purport to be able to detect deception or to verify truth of
statements through the use of instrumentation, such as lie
detectors, polygraphs, deceptographs, psychological stress
evaluators or similar or related devices and instruments without
regard to the nomenclature applied thereto and this chapter shall
be liberally construed to regulate all these persons and
instruments. No person who purports to be able to detect
deception or to verify truth of statements through
instrumentation shall be held exempt from this chapter because
of the terminology which he may use to refer to himself, to his
instrument or to his services. 153
Even the statutes that appear to deal only with polygraphy may have some
surprising consequences. In some states, broad statutes may mean that
anyone seeking to administer an fMRI-based lie detection test will need a
state license, from a licensing board set up to regulate polygraphy. In other
states, the polygraph licensure statutes may effectively exclude fMRI-based lie
detection. For example, several statutes have definitions like this: "'Polygraph'
means an instrument which records permanently and simultaneously a
subject's cardiovascular and respiratory patterns and other physiological
changes pertinent to the detection of deception."154 As fMRI tests do not

148 See the appendix for details on these state laws.


149 Id.
150 Id.
151 Id.
152 Id.

153 ME. REV. STAT. ANN. tit. 32, § 7152 (1964). See identical or substantially similar
language from Nebraska, NEB. REV. STAT. § 81-1902 (1999) (providing that the statute be
"liberally construed to regulate all persons" using lie detectors, stress evaluators,
deceptographs and voice analyzers); Oklahoma, OKLA. STAT. ANN. tit. 59, § 1452 (West 2000)
(providing the statute "regulate[s] all persons who purport to be able to detect deception ...
without regard to the nomenclature applied thereto"); and South Carolina, S.C. CODE ANN. §
40-53-20 (1999) (same).
154 KY. REV. STAT. ANN. § 329.010(6) (LexisNexis 2001).

record cardiovascular and respiratory patterns, statutes limiting lie detection
to polygraphs, so defined, appear to ban fMRI-based lie detection.
The confusing welter of state laws on lie detection generally or polygraphy
specifically illustrates both the complexity of any introduction of fMRI-based
lie detection and the breadth of state interests in regulating these
technologies. Should the new technologies be introduced, many states will
likely want to amend their statutes.

2. Judicial Admissibility
These statutory regulations by and large miss another important way in
which governments regulate lie detection-decisions about whether and
under what circumstances they are admissible in court. Although this is
governed by statute in at least one case,155 for the most part it is the result of
court-adopted rules or judicial decisions based on whether the evidence meets
the tests for admissibility of scientific evidence.
The courtroom situation is surprisingly complicated. It focuses almost
entirely on polygraphy and not on other forms of lie detection, though there
are a few cases on lie detection through voice stress analyzers156 and one
unpublished case on "brain fingerprinting."157
detection would be judged under the regular rules for admissibility of
scientific evidence, which means that the existing law on polygraphs is not
directly applicable. Nonetheless, it should be useful to review how that law
stands.
As a general matter, no American judicial system routinely allows
polygraph evidence to be introduced except New Mexico's.158 The New Mexico
Supreme Court has adopted a rule of evidence generally allowing polygraph
evidence under some conditions. 159 Outside New Mexico, evidence of the
results of a polygraph examination cannot be admitted in evidence, except as
described below.
Most courts view the admissibility of polygraph evidence as an issue of the
admissibility of scientific or technical evidence. All but New Mexico have
generally rejected it as failing the tests for admissibility of scientific evidence.
The test for scientific evidence widely used in American courts through most
of the 20th century, the Frye test, takes its name from a 1923 case involving the
admissibility of testimony of William Marston, the inventor of the

155 See CAL. EVID. CODE § 351.1(a) (1995) (providing that "the results of a polygraph
examination ... shall not be admitted into evidence in any criminal proceeding ....").
156 See generally Thomas R. Malia, Annotation, Admissibility of Voice Stress Evaluation
Test Results or of Statements Made During Test, 47 A.L.R. 4th 1202 (1986, 2001 supplement)
(collecting and analyzing state and federal law to conclude that tests are generally
inadmissible). See also Whittington v. State, 809 A.2d 721, 740 (Md. App. 2002) (holding that
results of a voice stress test are not admissible at trial); State v. Gaudet, 638 So.2d 1216, 1222
(La. App. 1994) (holding the same); State v. Higginbotham, 554 So.2d 1308, 1310 (La. App.
1989) (holding the same); State v. Arnold, 533 So.2d 1311, 1314 (La. App. 1988) (holding the
same); Smith v. State, 355 A.2d 527, 536 (Md. App. 1976) (holding the same).
157 See Harrington v. Iowa, 659 N.W.2d 509, 525 (Iowa 2003).
158 See United States v. Scheffer, 523 U.S. 303, 310-11 (1998).
159 See N.M. R. EVID. 11-707(c) (allowing the opinion of the polygraph examiner to be
admitted into evidence at the discretion of the trial judge). See Lee v. Martinez, 96 P.3d 291
(N.M. 2004) (discussing when to allow polygraph evidence in court).

polygraph. 160 The Frye test was replaced for federal courts, and many state
courts, by the Daubert test after a 1993 decision of the United States Supreme
Court.161 The Frye and Daubert tests have spawned a vast literature on their
individual and comparative merits; for present purposes it is important only
to note that both tests rely, at least in the first instance, on the trial court
judge to make a decision about the admissibility of the evidence based on
expert testimony before her. 162 Neither test requires extended
experimentation or sets any accuracy standards that have to be attained
(though the Daubert test does at least inquire about error rates). Barring any
statutory intervention, presumably any evidence coming from a new method
of lie detection would be admitted or not based on trial court determinations
of whether it complied with Frye or Daubert.
In United States v. Scheffer,163 the United States Supreme Court upheld
the exclusion of polygraph evidence against a claim that a criminal defendant
had a right under the Sixth Amendment to have it admitted. Scheffer involved
a blanket ban on the admissibility of polygraph evidence under the Military
Rule of Evidence 707.164 Scheffer, an enlisted man, worked with military police
as an informant in drug investigations.165 When he was court-martialed for
illegal drug use, he wanted to introduce the results of polygraph examinations,
performed by the military as a routine part of his work as an informant, that
showed that he honestly denied illegal drug use during the same period that a
urine test detected methamphetamine.166 The court-martial refused to admit
Scheffer's evidence because of Rule 707,167 but the Court of Appeals for the
Armed Forces reversed, holding that this per se exclusion of all polygraph
evidence violated the Sixth Amendment.168
The Supreme Court reversed in turn, upholding Rule 707 in a fractured
decision. Justice Thomas wrote the opinion announcing the decision of the
Court, joined by Chief Justice Rehnquist and Justices Scalia and Souter.169 He found the rule
constitutional on three grounds: (1) continued question about the reliability
of polygraph evidence, (2) the need to "preserve the jury's core function of
making credibility determinations in criminal trials," and (3) the avoidance of
collateral litigation. 170 Justice Kennedy, joined by Justices O'Connor,
Ginsburg, and Breyer, concurred in the section of the Thomas opinion based
on reliability of polygraph evidence, but did not agree with the other two
grounds.171 Justice Stevens dissented, finding that polygraph testing was
reliable enough to overcome a complete ban.172
Interestingly, in spite of the almost universal conclusion that polygraph
evidence does not meet the standards for admissibility of scientific evidence, it

160 NRC, supra note 30, at 293-94. See also Adler, supra note 29.

161 Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
162 Frye, 293 F. 1013; Daubert, 509 U.S. 579.


163 523 U.S. 303 (1998).
164 Id. at 305.
165 Id. at 305-06.
166 Id. at 306.
167 Id. at 306-07.
168 Id. at 307-08.
169 Id. at 305.
170 Id. at 308-15.
171 Id. at 318-20.

172 Id. at 320-38.


can still be admitted in many jurisdictions under several circumstances. First,
many jurisdictions allow evidence of the results of a polygraph examination to
be admitted if all the parties agree to its admission. 173 Second, in some
jurisdictions the results of a polygraph test can be used to corroborate or to
impeach a witness's claims on the stand that are consistent or inconsistent
with the polygraph test results. 174 Third, at least one federal appellate court
has held that a defendant has a constitutional right to have polygraph
evidence admitted in the penalty phase of a death penalty case. 175 Finally,
several federal courts have opened the door to the possible broader
admissibility of polygraph evidence as a consequence of Daubert, finding that
Daubert makes untenable any previous per se exclusions of polygraph
evidence. 176

B. OUR PROPOSED REGULATORY SCHEME

The view of living brains, when provided by fMRI and similar technologies
for lie detection purposes, poses challenges that our current haphazard
regulatory system cannot meet. As discussed in Section II(C) above, the
existing evidence does not convincingly demonstrate that fMRI-based lie
detection is accurate-but, as described in Section II(B), that has not stopped
at least one company from offering it. This section sets out a rough map of
how we believe our governments should proceed.
Regulation similar to that imposed by the federal and state governments
on polygraphs would need to be considered for any somewhat-valid method of
lie detection. This would require the federal government plus all state
governments (as well as non-American governments) to consider amending
many of their statutes, as well as a new surge of litigation in the courts about
the admissibility of resulting evidence. At this stage in the technology's
development, however, we believe a simpler solution is preferable. The
federal government-or, barring that, state governments-should ban any
non-research use of new methods of lie detection, including specifically
fMRI-based lie detection, unless or until the method has been proven safe and
effective to the satisfaction of a regulatory agency and has been vetted through
the peer-reviewed scientific literature. The rest of this section outlines our
proposed regulatory scheme in general, and raises many additional questions
that need to be answered. 177

173 See 1-8 MATTHEW BENDER & CO., INC., SCIENTIFIC EVIDENCE 8-4(C) (2005).
174 United States v. Piccinonna, 885 F.2d 1529 (11th Cir. 1989) (en banc). See also
United States v. Henderson, 409 F.3d 1293 (11th Cir. 2005) (explaining the relationship
between Piccinonna and Daubert).
175 Rupe v. Wood, 93 F.3d 1434 (9th Cir. 1996). See also Height v. State, 604 S.E.2d
796 (Ga. 2004) (holding that Georgia's general ban on admitting polygraph evidence except on
the parties' stipulation did not apply to the sentencing phase of a capital case).
176 See, e.g., United States v. Cordoba, 104 F.3d 225 (9th Cir. 1997); United States v.
Posado, 57 F.3d 428 (5th Cir. 1995). But see United States v. Prince-Oyibo, 320 F.3d 424 (4th
Cir. 2003) (maintaining the Fourth Circuit's per se rule against the admissibility of polygraph
evidence in spite of Daubert).
177 Some of the ideas in this section have appeared previously in Henry T. Greely, Pre-
market Approval Regulation for Lie Detection: An Idea Whose Time May Be Coming, 5 AM. J.
BIOETHICS 50, 50-52 (2005).

1. The General Approach


We envision this regulatory scheme as roughly paralleling the Food and
Drug Administration's (FDA) system for controlling the use of new drugs or
biologics; anyone using new drugs or biologics without FDA approval is
subject to both criminal and civil sanctions. 178 Similarly, the sponsors of the
technology should be required to prove to the regulatory body that the
technology was safe and effective, based on large-scale trials, equivalent to the
clinical trials of medicine. As with the FDA, the expectation would be that the
sponsor would discuss details of the trials extensively with the agency in
advance.
The trials would have to be large and varied enough to provide sound
evidence about the technology. The trials would have to examine subjects
who were representative of the population in age, sex, ethnicity, handedness,
and other characteristics. Actors, excellent poker players, and sociopaths
could be compared to average people; religious people could be compared
with non-believers. Some subjects should be given strong incentives to
deceive the examiners; others should be given advice or training on possible
countermeasures to defeat the test.
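The requirement that trials be "large and varied enough" can be made concrete with a standard power calculation. The sketch below is our own illustration, not part of the article's proposal or any statute: under a normal approximation to the binomial, it estimates how many subjects a trial needs to distinguish a detector with a given true per-subject accuracy from chance guessing. The function name and the chosen significance and power levels are assumptions for illustration only.

```python
from math import ceil, sqrt

# Standard-normal z-scores for common tail probabilities.
Z = {0.05: 1.645, 0.025: 1.96, 0.10: 1.282, 0.20: 0.842}

def subjects_needed(true_accuracy, alpha=0.05, power=0.80, chance=0.5):
    """Normal-approximation sample size for a one-sided binomial test of
    H0: accuracy == chance  versus  H1: accuracy == true_accuracy."""
    z_a = Z[alpha]                 # critical value for the significance level
    z_b = Z[round(1 - power, 2)]   # z-score giving the desired power
    s0 = sqrt(chance * (1 - chance))                  # sd under H0
    s1 = sqrt(true_accuracy * (1 - true_accuracy))    # sd under H1
    n = ((z_a * s0 + z_b * s1) / (true_accuracy - chance)) ** 2
    return ceil(n)
```

Under these assumptions, a detector that is truly 90% accurate can be distinguished from coin-flipping with fewer than ten subjects, while one that is only 60% accurate needs roughly 150; and stratifying the trial by the demographic and question-type factors listed above multiplies the required total across the strata. Beating chance is thus a far weaker showing than the accuracy figures a regulator would likely care about.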
The trials should include many types of questions to see whether the
technology was effective against all types of lies. This would include, for
example, testing the technology against memorized, rehearsed, or
spontaneous lies. The trials might also distinguish between the contexts of
the lies: it might be, for example, that people stating "social" lies ("no, of
course, dinner was fantastic")179 might react differently from people lying
about their sexual behavior or their finances.
It may not be necessary to include equivalents for all FDA requirements.
The FDA insists that anyone doing clinical trials of a new drug or biologic first
receive permission from the agency, called an Investigative New Drug
application (IND).180 As novel safety concerns for at least some lie detection
methods are likely be minimal (although well-understood safety precautions,
like keeping metal away from an MRI device, would need to be followed), it
may not be necessary to have an IND-equivalent for all such research. Such a
requirement would, however, provide useful information about the number
and kind of lie detectors under development. Similarly, the Federal Food,
Drug, and Cosmetics Act generally requires at least two randomized
controlled clinical trials;181 it is at least possible that one large and rigorous
trial would be sufficient for this use. And, of course, the requirements might
well change over time in response to the dynamic nature of the technology. If
evidence establishes that some demographic variations or differences in the
types of lies make no difference, the testing process could be simplified.

178 Federal Food Drug and Cosmetic Act §§ 301-308, 21 U.S.C. § 333 (2006). See
PETER B. HUTT, RICHARD A. MERRILL & LEWIS A. GROSSMAN, HUTT, MERRILL, AND
GROSSMAN'S FOOD AND DRUG LAW 1196-1370 (3d ed. 2007).
179 Illes, supra note 3.
180 Federal Food Drug and Cosmetic Act (FFDCA) § 505(i). See generally HUTT,
MERRILL & GROSSMAN, supra note 178, at 624-26.
181 Federal Food Drug and Cosmetic Act § 505(i), 21 U.S.C. § 333 (2006). See HUTT,
MERRILL & GROSSMAN, supra note 178, at 624-26.

At the end of the testing process, the sponsor would submit an
application, based on the evidence from the trial, for approval of the
technique. The sponsor would also have to demonstrate that its approach had
been substantially discussed in the peer-reviewed literature, thus providing
the agency with confidence that the scientific community had had an
opportunity to analyze the technology. The regulatory agency would then
decide whether the sponsor had proven the technology safe and effective. It
could do this through the kind of advisory committee structure used by the
FDA or more directly. If so, it could approve the technology, along with
information-equivalent to the FDA's required labeling-about the
contraindications and risks of the method. In addition, the agency should
publish information about the method's accuracy, presented with appropriate
detail. For example, if the subjects' demographic characteristics or the nature
of the lie significantly affect accuracy, that information should be made
public. 182 After the final agency decision, the sponsor should be able to appeal
a negative agency action to the federal court system under the usual standards
for review of administrative decisions.

2. The Devilish Details


Our proposed solution, of course, comes with its own questions and
problems. We set out many such questions and problems below and explore
some of their ramifications, without, for the most part, providing firm answers.
A first question is what. What technologies should be covered and how
should they be defined? This article has focused on fMRI-based lie detection
and we believe it surely should be covered by a regulatory scheme. At a
minimum, other similarly technical methods for looking inside the skull, such
as EEG and NIRS, would also need to be covered. Should this regulatory
scheme cover voice stress analyzers, which do not look into the brain? Should
they-or, as a practical matter, could they-cover training in reading facial
microexpressions? Should they apply to the already existing and substantially
regulated polygraph industry-or should polygraphs, and possibly voice stress
analyzers and other existing, though unproven methods, be grandfathered
into compliance? Whatever scope is settled upon then needs to be turned into
language that can be relied upon to treat appropriately future technologies
that may be as yet unimagined. That is not an easy assignment, although the
state statutory language broadly defining polygraphs quoted earlier may
provide a useful model. It would also be possible to allow the regulating
agency to define, to some extent, its own jurisdiction, within a broad statutory
grant of authority.
A second question is who. What government agency should regulate lie
detection? There is no clearly right answer. On the one hand, the FDA is the
federal agency with the most expertise about evaluating clinical trials. Much
of that expertise, especially but not only the statistical knowledge, is likely to
be transferable to lie detection. It is also accustomed to high stakes regulation

182 One might argue whether information should be provided about countermeasures.
The information might give test subjects the information they need to nullify the tests; on the
other hand, it may help test operators and others detect, combat, or evaluate the risk that a
subject is using countermeasures.

of companies through pre-market approval requirements. Although the FDA
covers that large part of the economy that involves putting things in, on, or
through our bodies, lie detection is far from its core. The FDA may well resist
Congressional efforts to give it this power.
On the other hand, no other agency has the relevant expertise. The
Department of Justice, a plausible location for such authority given its field of
expertise, has no experience with scientific trials. It is not clear to us whether
the Defense Department, which has its own Department of Defense Polygraph
Institute, has any expertise in this kind of assessment. Both the Justice and
Defense Departments have sufficiently strong interests in lie detection, such
that their impartiality might reasonably be doubted. In addition, their
secretive cultures would make them unlikely to use transparent review
processes, further increasing distrust of their decisions. Another potential
candidate is the National Institute of Standards and Technology (NIST),
part of the Department of Commerce. It is more likely to be, and to be
perceived as, impartial, but it does not seem to have expertise in either lie
detection or in designing or assessing large-scale human trials.
Whoever is given this responsibility will need to quickly establish
guidelines about the required human trials. It is not clear how an effective
and ethical trial could be done. The problems we raised above about the
application of the existing research to individual lie detection would need to
be resolved, if not perfectly, at least reasonably well. We urge that whatever
group is given responsibility for testing lie detection immediately appoint an
expert panel, with expertise in clinical trial design and evaluation, to make
recommendations for trial design. Even if the FDA is not the regulating
agency, its expertise might be used in this manner. It is also possible that
early steps toward recommending good trial designs for lie detection could be
taken outside the government, before any legislation is passed.
In our approach, the regulators (and possibly the legislators) would also
face not only the technical questions of trial design but, more fundamentally,
the question of what the trials are intended to prove. It is easy to intone
"proof of safety and efficacy," but what does that mean in the context of lie
detection? Is a lie detection method with a 0.1% chance of causing serious
harm to a subject "safe"? Is a lie detection method with 90% specificity (90%
of those who are not lying are correctly identified as not lying) and 90%
sensitivity (90% of those who are lying are correctly identified as lying)
"effective"?
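Using the standard definitions (sensitivity: the share of actual liars the test flags; specificity: the share of truthful subjects it clears), a minimal sketch shows why even a "90/90" test can mislead. All numbers here are hypothetical illustrations, not results from any lie-detection trial:

```python
def positive_predictive_value(sensitivity, specificity, base_rate, n=10_000):
    # Among n subjects, how trustworthy is a "lying" verdict?
    liars = n * base_rate
    truthful = n - liars
    true_positives = sensitivity * liars             # liars correctly flagged
    false_positives = (1 - specificity) * truthful   # truthful people wrongly flagged
    return true_positives / (true_positives + false_positives)

# If only 5% of subjects are actually lying, most "lying" verdicts are wrong:
print(round(positive_predictive_value(0.90, 0.90, 0.05), 3))  # → 0.321
```

In other words, at a low base rate of lying, a test that is 90% accurate in both directions still brands mostly honest people as liars, which is one reason the bare accuracy figures do not settle the "effective" question.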
The legislation governing the FDA does not further define these terms. As
a result, the FDA uses its informed discretion in judging drugs safe and
effective. A drug that kills one percent of the people who take it might be
quite safe in the context of lung cancer, but extremely dangerous for the
treatment of teenaged acne. A drug that cures five percent of people with
pancreatic cancer would be quite effective; a drug that lowers the fever of five
percent of influenza patients might not be. That kind of flexibility based on
specific diseases is unlikely to be important in lie detection, as, unlike drugs
intended for specific conditions, a lie detection method would presumably be
used in a wide range of settings.
In the unlikely event of a significant safety risk from the procedure, it
might make sense to allow it to be used in high-stakes circumstances, like a
death penalty trial, but not in minor civil litigation. But the problem with
Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 59 of 144

NEUROSCIENCE-BASED LIE DETECTION 417
efficacy remains and many relevant questions lack clear answers. Just how
much accuracy is enough? Should it be enough that the method shows some
statistically significant improvement over the efficacy of random guesses
whether the subject is lying-and, if so, statistically significant at what
probability (p) value? Should it be measured against the accuracy of humans
experienced at detecting lies, such as a veteran investigator or an experienced
judge, or would efficacy have to be a general figure? Could lie detectors be
approved based on how well they worked with just some kinds of people, with
some types of questions, or in particular kinds of situations? If approved for
particular kinds of situations, should their use be legally restricted to the
situations for which they were approved, or should something akin to the medical
"off-label use" be allowed? If approved for only some kinds of people, how
well could that restriction be enforced?
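To make the "statistically significant at what p value" question concrete, here is a minimal sketch of an exact one-sided binomial test against chance-level guessing. The trial figures are purely hypothetical assumptions:

```python
from math import comb

def p_value_vs_chance(correct, trials, chance=0.5):
    # One-sided exact binomial test: the probability of doing at least this
    # well if the examiner were merely guessing at the chance rate.
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical trial: 60 correct verdicts out of 100, against 50/50 guessing.
print(f"{p_value_vs_chance(60, 100):.4f}")
```

A result like this would clear a conventional p < 0.05 threshold while still leaving the method wrong on forty percent of subjects, which is why statistical significance alone cannot be the efficacy standard.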
It may be that either the legislature or the agency should set quantitative
standards for safety and efficacy, though it is hard to know what reasonable
standards might be in advance of good trials of any lie detection technologies.
One solution might be to allow the agency substantial discretion initially, but
require that it set quantitative standards after it had substantial experience
with the technologies.
Another possibility is to eliminate the requirement for pre-market
approval and just require rigorous and extensive pre-market testing with the
test results becoming public information. In that way, the potential users of
a technology would have sufficient information to make their own choices
about whether it was effective "enough." We prefer a stronger restriction, but
even an informational requirement would be a great advance over the current
situation.
Regardless of the strength of the regulatory system, another tricky
question will be what to do about changes in the lie detection methods that
might occur if the technology moves from initial validity to sustained validity
in the face of other technological changes and innovation. 183 Drugs cannot be
changed in non-trivial ways without new trials-changes to molecular
structure cannot be made easily or presumed unimportant-but the core of at
least some of the lie detection methods may be easily changeable software. If
the software is changed to vary the weight given to activation in particular
brain regions, should new trials, either for approval or for provision of the
required accuracy information to the public, be required before that change
can be legally implemented?
Assuming the regulatory scheme requires approval and not just creation
and provision of information, the question arises as to whom it would
regulate. Specifically, should defense or intelligence agencies be bound by
these constraints? We think the answer is "yes." If the technology is not
sufficiently accurate, it does no good-and may actually do harm-to allow it
to be used in national security settings. Yet the temptation for such use would
be enormous and the political costs of insisting on universal coverage might
sink the entire plan.
183 See generally Judy Illes & Margaret L. Eaton, Commercializing Cognitive
Neurotechnology - The Ethical Terrain, 25 NATURE BIOTECHNOLOGY 393 (2007) (asserting
that a "lack of recognition of the ethical, social and policy issues associated with the
commercialization of neurotechnology could compromise new ventures in the area.").
418 AMERICAN JOURNAL OF LAW & MEDICINE VOL. 33 NO. 2 & 3 2007
The question of binding works in the other direction as well. Once a lie
detection technology is approved, does everyone have to allow its use? Here,
the answer seems to be "no." Different courts, for example, may have different
views about how accurate a technology needs to be for it to be admitted into
evidence. Even if the federal government wanted to promote a common
standard for judicial admissibility of lie detection evidence, it probably does
not have the power to impose its position on state courts. Similarly, if one
state wants to have stricter (but not weaker) rules for allowing the use of lie
detection, although the federal government might arguably have the power to
force a uniform standard under the Commerce Clause, it would be more
appropriate for it to allow states to be more restrictive, as it did in EPPA. It is
worth noting that judges facing attempts by parties to admit this evidence will
have to cope with questions of field strength, statistical significance, and
various brain regions of interest. Educational or other neutral explanatory
resources might be tremendously useful for such judges and should be
pursued.
A final, and important, practical question has to do with cost. Who will
pay for this testing? Although these trials would likely be much less expensive
than the several hundred million dollars required for drug trials, it is likely
that the cost of testing any single lie detection method would be more than $5
million. 184 Whether companies could raise the money for those tests is
unclear; it would no doubt depend on an assessment of their market
possibilities. This in turn may hinge on whether patents or the regulatory
structure provides them any protection from competition. Government
funding is another possibility, particularly as the government would likely be a
major consumer of lie detection services. Government-funded trials could
also be done by government-employed experts, rather than relying, as the
FDA does, on the regulated companies to do the testing. Politically, however,
it is usually easier to force companies to spend money than to appropriate it
from public funds. Depending on how many lie detection methods need to be
evaluated in any one year, the aggregate costs of the trials could become
significant.
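The footnoted cost estimate can be reproduced with simple arithmetic. Every figure below is the article's own rough assumption, not a measured cost:

```python
# Rough cost model from footnote 184; all inputs are the article's assumptions.
subjects = 2000
mri_cost_per_subject = 1000                  # dollars of scanner time per person

mri_total = subjects * mri_cost_per_subject  # $2,000,000 of MRI time
recruitment = mri_total                      # assumed roughly comparable to MRI costs
analysis = mri_total                         # assumed roughly comparable to MRI costs

estimate = mri_total + recruitment + analysis
print(f"${estimate:,}")  # → $6,000,000
```

The footnote's caveat applies equally here: the true figure for a 2,000-person trial could easily be two or three times this amount.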

3. Why Adopt This Regulatory Plan?

The questions raised above are both substantial and difficult.
Additionally, they only scratch the surface of the problems that new methods
of lie detection raise. Society will still have to decide a host of puzzling
questions, through an analysis of constitutional constraints as well as parsing
existing statutes or adopting new ones. The proposal sketched above speaks
to whether a lie detection method is safe and effective enough to be used. It
does not determine what such a method could be used for, by whom, or under
what circumstances. Our rights to and expectations of privacy would have to
184 This figure is a very rough estimate. It assumes that a trial would use 2,000 subjects
and that the MRI time alone for each individual would cost about $1,000, for a total of $2
million. It then assumes that recruitment and management of the subjects, on the one hand,
and analysis of the results, on the other, would each involve costs roughly comparable to the
MRI costs, bringing the estimated total to about $6 million. The actual figure, even for a
2,000 person trial, could easily be two or three times as much; it seems very unlikely that it
could be half or a third of that amount.
be weighed against the benefits of lie detection. This is true not only in our
dealings with governments, whose actions are limited by the federal
Constitution, but in the private spheres of life. Society would need to consider
whether laws like the Employee Polygraph Protection Act need to be amended
or extended beyond employers to others who may want to use lie detectors:
insurers, lenders, contractors, schools, or parents, among others. Completely
new licensing schemes might be needed for those who operate these new lie
detectors.
Even the preliminary regulation we propose would not pass itself.
Congress, preferably, or state legislatures, would have to be convinced to
adopt a new, complex, and possibly expensive statutory plan, one dealing with
technologies that just barely exist and that may be more widely viewed as
blessings than as threats. Why then would-or should-Congress act?
Because it is important. Companies are marketing the age-old dream of
lie detection coupled with the high-tech mystique (and beautiful color
graphics) of brain scanning. The combination may prove irresistible to many,
but with so little evidence that the method is accurate, the result may be
tragic. Honest people may be treated unfairly when tests wrongly brand them
as liars; dishonest people may go free when tests wrongly clear them.
Even in the judicial context, where the Daubert and Frye tests provide
some check on inaccurate evidence, the check is only partial. Each trial judge
is empowered to make her own decision, based on the evidence presented in
her court. A good lawyer, with a good expert, pushing admissibility of the
technology, and a bad lawyer, with a bad (or non-existent) expert opposing it,
could tip the balance in any given court. So could an overly impressionable or
scientifically naïve judge. A favorable decision by any single judge anywhere
in the country will be trumpeted by the companies selling the technology, in
the same way Larry Farwell, the developer of "brain fingerprinting," has
publicized his view of the Harrington case.
As a result, lives may be ruined. We have seen lives shattered before, with
and without these technologies. Wen Ho Lee is one example of a victim of the
polygraph.185 Recent news provides an even clearer example of the costs of
investigative mistakes, although not in a case involving (as far as we know) lie
detection. In September 2002, Maher Arar, a Canadian citizen who was born
in Syria, was returning to Canada with his family from a vacation in Tunisia.186
While changing planes at Kennedy Airport in New York, he was detained by
U.S. officials. After thirteen days of questioning-but no formal charges or
court action-he was flown to the Middle East, where his American captors
delivered him to Syrian security agents.187 After a year of imprisonment and
torture, he was released through Canadian intervention.188 After a two-year
185 See DAN STOBER & IAN HOFFMAN, A CONVENIENT SPY: WEN HO LEE AND THE
POLITICS OF NUCLEAR ESPIONAGE (2002) (discussing, in some detail, conflicting conclusions
investigators drew from Lee's several polygraph tests). See also Transcript of Court Opinion,
United States v. Wen Ho Lee, http://cicentre.com/Documents/DOC_Judge_Parker_
on_Lee_Case.htm (extraordinary apology United States District Judge James Parker, the
judge assigned to Lee's criminal case, made to Lee for the method of his detention by the
federal government).
186 See Jane Mayer, Outsourcing Torture, NEW YORKER, Feb. 14, 2005.
187 Id.
188 Id.
study by a prestigious commission, the Canadian government recently
apologized to Arar and agreed to pay him nearly nine million U.S. dollars
because of the assistance it gave to the United States in connection with his
ordeal.189 The United States government has not apologized, has sought to
have Arar's suit against it dismissed as a result of a "state secrets" privilege,
and, indeed, still keeps Arar on its "no fly" list.190
Mistakes happen. The inappropriate increase in confidence provided by
inaccurate lie detection could make those mistakes worse. Should that
happen, and be discovered, the aftermath of the consequent scandal may be
discrediting of a tool that, properly verified and controlled, could help prevent
such mistakes. Both individuals and the field of lie detection have much to
gain from a careful, prudent approach to these new technologies. Currently,
nothing enforces such an approach. Requiring proof of safety and efficacy
before allowing the use of lie detection technologies is a careful step towards
assuring that these technologies are used wisely.

V. CONCLUSION
We have come a long way, from discussions of the concepts of mapping
and illustration to the use of individualized, rapidly changing maps of blood
flow in the brain to try to detect lies. We need to remember, though, that the
map is never the territory; the fMRI scan is not the same as the brain it scans.
Neuroscience lie detection, if it proves feasible at all, will not be perfect. We
need to prevent the use of unreliable technologies, and to develop fully
detailed information about the limits of accuracy of even reliable lie detection.
Government regulation appears to be the only way to accomplish this goal,
and adopting it would be a first step toward maximizing the benefits of these
new technologies while minimizing their harms.
189 Id.
190 Dahlia Lithwick, Welcome Back to the Rule of Law, SLATE, Jan. 30, 2007,
http://www.slate.com/id/2157667/.
APPENDIX
STATE CITATION DESCRIPTION
Alabama
ALA. CODE §§ 34-25-1 et seq. Licensing scheme.
(LexisNexis 1995).
ALA. CODE § 36-1-8 (1995). No polygraph for state employees.
Alaska
ALASKA STAT. §§ 12.55.100(e), Beginning July 1, 2007, those on
33.16.150(a)(13) (2006). probation or parole for a sex offense
"shall ... submit to regular periodic
polygraph examinations."
ALASKA STAT. § 23.10.037 (1989). No polygraph for employees, except law
enforcement officers.

Arizona
ARIZ. REV. STAT. § 36-3710 Sexually violent predators may be
(LexisNexis 1998). monitored during outpatient treatment
by "polygraph or plethysmograph or
both."
ARIZ. REV. STAT. § 38-1101(A), Law enforcement officers do not have a
(B)(4) (2005). right to a representative during an
interview that "could result in dismissal,
demotion or suspension" if the
interview occurs in the course of a
polygraph examination.

Arkansas
ARK. CODE ANN. §§ 17-39-101 et Licensing scheme.
seq. (1987).
ARK. CODE ANN. § 17-40-202(5) Board of Private Investigators and
(1987). Private Security Agencies shall include
one licensed polygraph examiner.

California
CAL. EVID. CODE § 351.1(a) Polygraph evidence not admissible
(Deering 1983). unless stipulated by all parties.

CAL. GOV'T CODE § 3307 No public safety officer may be required
(Deering 1998). to take a "lie detector test," broadly
defined.
CAL. LAB. CODE § 432.2 (Deering No "lie detector test" for any private
1981). employee or prospective private
employee.
CAL. PENAL CODE § 637.4 (1980). No polygraph to sexual offense victim
"as a prerequisite to filing an accusatory
pleading."
CAL. WELF. & INST. CODE § No polygraph test for any applicant for
11477.1 (1975). or recipient of Social Security Act Title
IV-D without prior written notice that
the test is not required and without
prior written consent.

Colorado
COLO. REV. STAT. § 10-3-1104(1)(k) Defines requiring a polygraph for an
(2006). insurance claim as an unfair business
practice subject to administrative
discipline.

COLO. REV. STAT. § 16-8-106(3) Defendant's competency may be
(2006). determined in part by polygraph
examination, but only for offenses
committed before July 1995.
COLO. REV. STAT. § 16-11.7-103(1)(1) Sex Offender Management Board shall
(2006). include one polygraph examiner.
COLO. REV. STAT. § 18-3-407.5 No sexual assault victim can be required
(2006). to submit to a lie detector test without
informed written consent (where
informed consent requires both written
and oral notice about the right to
refuse).

Connecticut
CONN. GEN. STAT. § 31-51g No polygraph for public employees,

(1967). except for police and corrections

officers.

CONN. GEN. STAT. § 54-86j No polygraph for sexual assault victims.

(2007).
Delaware
DEL. CODE ANN. tit. 19, § 704 No polygraph for public or private

(1953). employees, except for law enforcement.

DC
D.C. CODE § 32-902 (1995). No polygraphs for public or private

employees, except for police, fire, or

corrections departments.

D.C. CODE § 32-903 (1981). Polygraphs related to employment

constitute invasion of privacy; no

contract or arbitration decision can

violate § 32-902.

Florida
FLA. STAT. ANN. § 321.065 Highway patrol officers may be required

(2007). to undergo polygraph testing.

FLA. STAT. ANN. §§ Minimum one annual polygraph exam
947.1405(b)(1), 948.30(2)(a) required for sex offenders in conditional
(2006). release program, but the results cannot
be used as evidence "in a hearing to
prove that a violation of supervision has
occurred."


Georgia
GA. CODE ANN. § 42-1-14 (2006). A sex offender may provide the risk
assessment board with the results of a
sexual history polygraph for
consideration in determining the risk
she or he presents to the community.
GA. CODE ANN. § 51-1-37 (2001). Expressly recognizes negligence cause of
action against a polygraph operator if
any person "suffers damages as a result"
of the polygraph operator's negligence.

Guam
GUAM CODE ANN. tit. 10, §§ Law enforcement officers must "submit

77109-10,77114 (2006). to and pass a polygraph."

Hawaii
HAW. REv. STAT. ANN. §§ 378­ No polygraphs for public or private

26.5, -27 (1985). employees, except for law enforcement.

Idaho
IDAHO CODE ANN. §§ 44-903,­ No polygraphs for public or private

904 (1978). employees, except for law enforcement.

Illinois
20 ILL. COMP. STAT. ANN. Chicago crime laboratory employees
415/12e (LexisNexis 1995). must submit to polygraph.
20 ILL. COMP. STAT. ANN. Sex Offender Management Board must
4026/15(a)(16) (LexisNexis include the President of the Illinois
2003). Polygraph Society or his designee.
50 ILL. COMP. STAT. ANN. 25/3.11, No law enforcement officer or fireman
725/3.11 (LexisNexis 1998). may be required to submit to polygraph
during the course of a disciplinary
"interrogation" without written consent.
725 ILL. COMP. STAT. ANN. 5/115-19 Criminal courts "shall not require,
(LexisNexis 1996). request, or suggest" that a defendant
take a polygraph.
725 ILL. COMP. STAT. ANN. 200/1 No polygraphs required for sex assault
(LexisNexis 1993). victims.
735 ILL. COMP. STAT. ANN. 5/2-1104 Civil courts may not "require" either
(LexisNexis 1993). party to submit to a polygraph exam.

Indiana
IND. CODE ANN. §§ 25-30-2-1 et Licensing scheme.

seq. (LexisNexis 2003).



Iowa
IOWA CODE § 730.4 (2006). No polygraphs for public or private
employees, except for law enforcement.
IOWA CODE § 915.44 (2006). No polygraphs required for sex
assault victims or witness; however,
police may consider refusal in deciding
whether to take the case; and polygraph
refusal may not be the sole reason for
declining to investigate.
Kansas
None.
Kentucky
KY. REV. STAT. ANN. §§ Law enforcement officers must take a
15.382(17), 15.384 (LexisNexis polygraph.
2002).
KY. REV. STAT. ANN. § 15.540 Law enforcement dispatchers must take
(LexisNexis 2006). a polygraph.
KY. REV. STAT. ANN. §§ 329.010 Licensing scheme.
et seq. (LexisNexis 2006).
KY. REV. STAT. ANN. § 439.335 In deciding whether to grant parole, the
(LexisNexis 2006). parole board may use polygraphs (or
voice stress analysis, or "truth serum,"
or other "scientific means for
personality analysis that may hereafter
be developed.")
Louisiana
LA. REv. STAT. ANN. §§ 37:2831 Licensing scheme.
et seq. (1980).
Maine
ME. REV. STAT. ANN. tit. 10, § Consumer credit information obtained
1312 (2005). by polygraph is subject to state Fair
Credit Reporting Act.
ME. REV. STAT. ANN. tit. 32, §§ Licensing scheme.
7151 et seq. (2007).
ME. REV. STAT. ANN. tit. 32, § No polygraphs for public or private
7166 (2007). employees, except for law enforcement.

Maryland
MD. CODE ANN., CRIM. PROC. § Parole conditions for sex offenders may

11-724(c)(7) (2006). include regular polygraphs.

MD. CODE ANN., LAB. & EMPL. § No polygraphs for public or private

3-702 (2004). employees, except for law enforcement

and some law enforcement support

staff.

MD. CODE ANN., PUB. SAFETY § 1­ Sexual Offender Advisory Board shall

401 (2006). include a polygrapher.

MD. CODE ANN. PUB. SAFETY § 3­ Law enforcement officers under

104(m) (2003). investigation can be required to take a

polygraph, but the results may not be

used as evidence in an administrative

hearing unless stipulated and may not

be admitted or discovered in a criminal

proceeding.

Massachusetts
MASS. GEN. LAWS ANN. ch. 149 § No polygraphs for public or private

19B (LexisNexis 1985). employees; explicitly bans polygraphs

for police officers.

Michigan
MICH. COMP. LAWS SERV. §§ No polygraphs for public or private
37.201 et seq. (LexisNexis 1983). employees.
MICH. COMP. LAWS SERV. §§ Licensing scheme.
338.1701 et seq. (LexisNexis 1973).
MICH. COMP. LAWS SERV. § No polygraphs required for sex
776.21 (LexisNexis 1981). assault victims or defendants; however,
if either defendant or victim requests a
polygraph they may receive it; if the
defendant passes a polygraph, the police
may inform the victim that the
defendant passed; except for informing
the victim that the defendant has passed
a polygraph, the police may not inform
the victim of the option of taking a test.

Minnesota
MINN. STAT. § 181.75 (1986). No polygraphs for public or private

employees.

MINN. STAT. § 181.76 (1973). Cannot disclose that an employee has

taken a polygraph except to those the

employee authorizes to receive the

results.

MINN. STAT. § 609.3456 (2005). Courts may order polygraph exams as

parole condition for sex offenders.

Mississippi
MISS. CODE ANN. § 45-3-47 Highway Safety Patrol training

(1981). applicants may be required to take

polygraph.

MISS. CODE ANN. §§ 73-29-1 et Licensing scheme.

seq. (1993).

Missouri
MO. REV. STAT. § 632.505(3)(13) Court may order "polygraph,
(2006). plethysmograph, or other electronic or
behavioral monitoring or assessment"
as a condition of parole or probation for
sex offenders.
Montana
MONT. CODE ANN. § 39-2-304 No polygraphs for public or private
(1997). employees. (1987 Amendment removed
previous exception for law enforcement
agencies.)
Nebraska
NEB. REv. STAT. ANN. §§ 81-1901 Licensing scheme.
et seq. (LexisNexis 1980).
NEB. REv. STAT. ANN. § 83­ Parole Administration may require
174.03(4)(f) (LexisNexis 2006). polygraphs as parole condition for sex
offenders.
Nevada
NEV. REv. STAT. ANN. § Evaluation for parole or probation by
176.139(4)(e) (LexisNexis 2001). Division of Parole and Probation may
include polygraph.
NEV. REv. STAT. ANN. § In ordering release for parole or
176A.410(1)(g) (LexisNexis probation, court shall require sex
2005), NEV. REv. STAT. ANN. § offenders to submit to polygraphs as
213.1245 (LexisNexis 2003). requested by parole or probation
officers.
NEV. REv. STAT. ANN. § 289.050 No polygraphs may be required for law
(LexisNexis 2001), NEV. REv. enforcement officers under
STAT. ANN. § 289.070 investigation; but, if an officer
(LexisNexis 2005). volunteers for a polygraph, the exam
must be recorded and reviewed by a
second polygraph examiner.

NEV. REv. STAT. ANN. §§ 613.440 No polygraphs for private employees;


et seq. (LexisNexis 1989). contains the same exemptions as the
federal Employee Polygraph Protection
Act. Compare N.R.S. § 613.510, with 29
U.S.C. § 2006.
NEV. REv. STAT. ANN. §§ 648.005 Licensing scheme.
et seq. (LexisNexis 1989).
New Hampshire
None.

New Jersey
N.J. REv. STAT. § 2C:40A-l No polygraphs for public or private
(1983). employees, except for some employees
with access to controlled substances.
The exemption is narrower than the
federal Employee Polygraph Protection
Act. Compare N.J. Stat. § 2C:40A-l,
with 29 U.S.C. § 2006(f).

N.J. REv. STAT.§ 30:4-123.88 Parole Board may require all sex
(2005). offenders (and some kidnappers) to
submit to polygraphs at least annually.

New Mexico
N.M. STAT. ANN. § 9-3-13(d)(12) Sex Offender Management Board shall
(LexisNexis 2007). study polygraphs as a method of
evaluating sex offenders.

N.M. STAT. ANN. § 29-14-5 Police officers under investigation may


(LexisNexis 1991). be required to take polygraph.
N.M. STAT. ANN. § 31-20-5.2 Polygraphs may be condition of parole
(LexisNexis 2003), N.M. STAT. or probation for sex offenders.
ANN. § 31-21-10.1 (LexisNexis
2007).
N.M. STAT. ANN. §§ 61-27A-l et Licensing scheme.
seq. (LexisNexis 2007).
N.M. R. EVID. 11-707 (LexisNexis Rules for admissibility of polygraph
2006). evidence; a witness may be compelled to
take a polygraph only if the witness has
previously taken a polygraph voluntarily
and plans to use the polygraph as
evidence.
New York
N.Y. CRIM. PROC. LAW § 160.45 No polygraph for sexual assault victims.
(Consol 2006).
N.Y. GEN. BUS. LAW § 380-j Consumer reporting agencies may not
(Consol 1986). include polygraph information in file.
North Carolina
None.
North Dakota
N.D. CENT. CODE §§ 43-31-01 et Licensing scheme.
seq. (1993).
Ohio
OHIO REv. CODE ANN. § 177.01 Organized crime commission and
(LexisNexis 1999). consultants may be required to take a
polygraph before being granted a
security clearance.

Oklahoma
OKLA. STAT. tit. 22, § Sex offenders may be required to
991a(1)(A)(ee) (2006). undergo polygraph exams.

OKLA. STAT. tit. 59, §§ 1451 et seq. Licensing scheme.


(2007).
Oregon

OR. REV. STAT. §§ 137.540(1)(m), Polygraphs may be a condition of parole
144.102(3)(b), 144.270(3)(b) or probation for sex offenders.
(2005). Plethysmograph exams may be a parole
condition.
OR. REV. STAT. § 163.705 (1981). No polygraph for sexual assault victims.
OR. REV. STAT. § 659.225 (2001), No polygraphs for public or private
OR. REV. STAT. § 659A.300 employees. Section 659A.300 explicitly
(2005). extends to genetic and brain-wave tests.
However, genetic tests may be allowed if
informed consent is given and the test is
"administered solely to determine a
bona fide occupational qualification."

OR. REv. STAT. §§ 703.010 et seq. Licensing scheme.


(2003).
Pennsylvania
18 PA. CONS. STAT. § 7321 (2006). No polygraphs for public or private
employees, except for law enforcement
and those with access to controlled
substances.
Puerto Rico
P.R. LAws ANN. tit. 34, § 3014 Institute of Forensic Sciences must
(2007). make polygraph exams available to
executive and judicial branches.

Rhode Island
R.I. GEN. LAWS §§ 28-6.1-1 et seq. No polygraphs for public or private
(1987). employees.
South Carolina
S.C. CODE ANN. §§ 40-53-10 et Licensing scheme.
seq. (1972).
South Dakota
S.D. CODIFIED LAws §§ 36-30-1 Licensing scheme.
et seq. (1984).

Tennessee
TENN. CODE ANN. § 38-3-123 No polygraph for sexual assault victims.
(2006).

TENN. CODE ANN. § 39-13­ Sex offender treatment board's

704(d)(2) (1998). guidelines may include polygraph

exams.

TENN. CODE ANN. § 41-1-102 Corrections officers may be required to

(1989). submit to polygraph if contraband

found on their person.

TENN. CODE ANN. § 62-27-101 Licensing scheme.

(1982).
Texas
TEX. FAM. CODE ANN. § 51.151 Juveniles in custody may not be given a

(Vernon 2007). polygraph without consent of the

juvenile's attorney or the court.

TEX. FAM. CODE ANN. § 54.0405 Juveniles conditionally released after
(Vernon 2001), TEX. HUM. RES. sex offenses may be required to submit
CODE ANN. § 61.0813 (Vernon to polygraphs.
2001).

TEX. GOV'T CODE ANN. §§ Applicants for police officer or
411.007, 411.0074 (Vernon dispatcher positions must take a
2005), TEX. GOV'T CODE ANN. § polygraph, but current police officers
614.063 (Vernon 1997). applying for other jobs within the
department (and current dispatchers
applying for another dispatcher
position) cannot be required to take a
polygraph; current officers and
firefighters "may not be suspended,
terminated, or subjected to any form of
discrimination" for refusing to take a
polygraph.

TEX. Gov'T CODE ANN. § 493.022 No polygraphs for Department of

(Vernon 1997). Criminal Justice employees.

TEX. HEALTH & SAFETY CODE Civil commitment for sex offenders may
ANN. § 841.083 (Vernon 2005). include regular polygraph and
plethysmographs.
TEX. LOC. GOV'T CODE ANN. §§ Firefighters may be required to submit
143.124,143.314 (Vernon 1997). to polygraph if a complainant files a
complaint against the firefighter and
the complainant takes and passes a
polygraph; firefighters may also be
required to submit to a polygraph if the
municipality's department head
"considers the circumstances to be
extraordinary and the fire department
head believes that the integrity of a fire
fighter or the fire department is in
question."
TEX. OCC. CODE ANN. § 1703.001 Licensing scheme.
(Vernon 1999).
TEX. CODE CRIM. PROC. ANN. No polygraphs required for sex assault
art. 15.051 (Vernon 1997). victims.

Utah

UTAH CODE ANN. §§ 58-64-101 et Licensing scheme.


seq. (1995).
Vermont
VT. STAT. ANN. tit. 21, §§ 494 et No polygraphs for public or private

seq. (1985). employers, except for law enforcement,

drug business, and precious metal and

gem business. Compare Vt. Stat. tit. 21,

§ 494b(3) (2001), with 29 U.S.C. §

2006 (2006).

VT. STAT. ANN. tit. 26, §§ 2901 et Licensing scheme.

seq. (1975).
Virginia
VA. CODE ANN. § 8.01-418.2 Polygraph evidence is not admissible in

(1995). any state employment disciplinary

proceeding, except for actions against

polygraphers.

VA. CODE ANN. § 19.2-9.1 (1994). No required polygraphs for complaining

witnesses, but they may be requested to

submit to a polygraph so long as they

have prior written notice that the exam

is voluntary and that results are

inadmissible as evidence; declining to

take a polygraph "shall not be the sole

condition for initiating or continuing"

an investigation.

VA. CODE ANN. §§ 22.1-307, -315 No required polygraphs for teachers;

(1996). refusing to take a polygraph shall not

the sole reason for dismissal or

probation of teacher.

VA. CODE ANN. § 37.2-908, -910 Conditional release/or sex offenders

(2007). may include polygraph,

plethysmograph, "or se:x:ual interest

testing, " by court order.

VA. CODE ANN. § 40.1-51.4:3 In conducting a polygraph exam,

(1990). private employers may not ask

prospective employees about sexual

activities, "unless such sexual activity ...

has resulted in a conviction" under

Virginia law.

VA. CODE ANN. § 40.1-51.4:4 No polygraphs for law enforcement

(2000). unless by order of the department chief

as part of a particular investigation; no

employee required by the chief to

submit to a polygraph can be

disciplined or discharged "solely on the

basis ofthe results."

VA. CODE ANN. §§ 54.1-1800 et Licensing scheme.

seq. (1993).

NEUROSCIENCE-BASED LIE DETECTION 431


Washington

WASH. REV. CODE ANN. § 43.43.020 (LexisNexis 2005), § 43.101.080(19) (LexisNexis 2005), § 43.101.095(2)(a) (LexisNexis 2005), § 43.103.090(2)(a) (LexisNexis 1999): Law enforcement officer requirements for initial hire may or must include polygraph, depending on the position and date of hire. Most significant, after July 2005, all new peace officers must pass a polygraph. WASH. REV. CODE § 43.101.095(2)(a) (LexisNexis 2005).

WASH. REV. CODE ANN. § 49.44.120 (LexisNexis 2005): No public or private employer may require a polygraph, except for initial employment of law enforcement officers and for initial employment in drug business.

WASH. REV. CODE ANN. § 71.09.096 (LexisNexis 2001): Conditional release for sex offenders may include polygraph or plethysmograph exams by court order.

West Virginia

W. VA. CODE ANN. § 21-5-5b (LexisNexis 2003): No polygraphs for public or private employers, except for law enforcement and drug business.

W. VA. CODE ANN. §§ 21-5-5c et seq. (LexisNexis 2003): Licensing scheme.

W. VA. CODE ANN. §§ 62-11D-1 et seq. (LexisNexis 2006): Polygraphs required for sex offenders on conditional release.

Wisconsin

WIS. STAT. §§ 51.375, 301.132 (1999): Polygraphs may be required for sex offenders on conditional release or in custody.

WIS. STAT. § 111.37 (1997): No polygraphs for public or private employers, subject to ongoing investigation, security, and drug business exceptions that track the federal Employee Polygraph Protection Act. Compare WIS. STAT. § 111.37(5), with 29 U.S.C. § 2006.

WIS. STAT. § 905.065 (1979): Polygraph evidence subject to claim of privilege.

WIS. STAT. § 942.06 (2001): No one may require or disclose results of a polygraph test of another person without prior informed, written consent; does not apply to sex offender provisions of the Code.

WIS. STAT. §§ 950.04(lv)(dL), 968.265 (2005): No mention of polygraph testing to sex offense victims unless victim requests information.

Wyoming

None.

Using Imaging
to Identify Deceit:
Scientific and
Ethical Questions

Emilio Bizzi, Steven E. Hyman, Marcus E. Raichle,
Nancy Kanwisher, Elizabeth A. Phelps,
Stephen J. Morse, Walter Sinnott-Armstrong,
Jed S. Rakoff, and Henry T. Greely


© 2009 by the American Academy of Arts and Sciences


All rights reserved.

ISBN#: 0-87724-077-9

The views expressed in this volume are those held by each contributor and are
not necessarily those of the Officers and Fellows of the American Academy of
Arts and Sciences.

Please direct inquiries to:


American Academy of Arts and Sciences
136 Irving Street
Cambridge, MA 02138-1996
Telephone: (617) 576-5000
Fax: (617) 576-5050
Email: aaas@amacad.org
Web: www.amacad.org

Contents

INTRODUCTION
Imaging Deception
Emilio Bizzi and Steven E. Hyman

3 CHAPTER 1
An Introduction to Functional Brain Imaging
in the Context of Lie Detection
Marcus E. Raichle

7 CHAPTER 2
The Use of fMRI in Lie Detection: What Has Been Shown
and What Has Not
Nancy Kanwisher

14 CHAPTER 3
Lying Outside the Laboratory: The Impact of Imagery
and Emotion on the Neural Circuitry of Lie Detection
Elizabeth A. Phelps

23 CHAPTER 4
Actions Speak Louder than Images
Stephen J. Morse

35 CHAPTER 5
Neural Lie Detection in Courts
Walter Sinnott-Armstrong

40 CHAPTER 6
Lie Detection in the Courts: The Vain Search for the Magic Bullet
Jed S. Rakoff

46 CHAPTER 7
Neuroscience-Based Lie Detection: The Need for Regulation
Henry T. Greely

56 CONTRIBUTORS



INTRODUCTION

Imaging Deception
EMILIO BIZZI AND STEVEN E. HYMAN

Can the relatively new technique of functional magnetic resonance imaging (fMRI) detect deceit? A symposium sponsored by the American Academy of Arts and Sciences, the McGovern Institute at the Massachusetts Institute of Technology (MIT), and Harvard University took on this question by examining the scientific support for using fMRI as well as the legal and ethical questions raised when machine-based means are employed to identify deceit.

Marcus Raichle, a professor at Washington University in St. Louis, opens the discussion with a clear description of fMRI, its physiological basis, the methodology underlying the extraction of images, and, most important, the use of image averaging to establish correlations between the "images" and aspects of behavior. While averaging techniques are highly effective in the characterization of functional properties of different brain areas, images obtained from a single individual are "noisy," a fact that clearly touches upon the reliability of the extracted data and a fortiori makes detecting deceit a questionable affair.
Nancy Kanwisher, a professor at MIT, discusses papers that present supposedly direct evidence of the efficacy of detecting deceit with fMRI, but dismisses their conclusions. Kanwisher notes that there is an insurmountable problem with the experimental design of the studies she analyzes. She points out that by necessity the tested populations in the studies consisted of volunteers, usually cooperative students who were asked to lie. For Kanwisher this experimental paradigm bears no relationship to the real-world situation of somebody brought to court and accused of a serious crime.

Kanwisher's conclusions are shared by Elizabeth Phelps, a professor at New York University. Phelps points out that two cortical regions, the parahippocampal cortex and the fusiform gyrus, display different activity in relation to familiarity. The parahippocampal cortex shows more activity for less familiar faces, whereas the fusiform gyrus is more active for familiar faces. However, these neat distinctions can unravel when imagined memories are generated by subjects involved in emotionally charged situations. Phelps points out that the brain regions important to memory do not differentiate between imagined memories and those based on events in the real world. In addition, the perceptual details of memories are affected by emotional states.

Phelps's compelling description of how imagination, emotions, and misperceptions all play a role in shaping memories can be briefly expressed as "brains do not lie: people do." This point is echoed by Stephen Morse, who begins his presentation by stating "Brains do not commit crimes. Acting people do." Morse, a professor at the University of Pennsylvania, takes a skeptical view of the potential contributions of neuroscience in the courtroom. He believes that behavioral evidence is usually more useful and informative than information based on brain science, and that when neuroimaging data and behavioral evidence conflict, the behavioral evidence trumps imaging. Morse worries that admitting imaging in the courtroom might sway "naive" judges and jurors to think that the brain plays a "causal" role in a crime. He repeatedly warns that if causation excuses behavior then no one can ever be considered responsible.

Walter Sinnott-Armstrong, a professor at Dartmouth College, is also unenthusiastic about the use of fMRI to detect deceit. His concern is that the error rates in fMRI are significant and that determining error rates is not a simple task. For this reason he believes that evidence from neural lie detection efforts should not be allowed in court.

Jed Rakoff, a U.S. district judge, shares Sinnott-Armstrong's concern about error rates and finds that fMRI-based evidence may be excluded from trials under the Federal Rules of Evidence. Rakoff argues that the golden path to discovering truth is the traditional one of exposing witnesses to cross-examination. He doubts that meaningful correlations between lying and brain images can be reliably established. In addition, he notes that the law recognizes many kinds of lies (for example, lies of omission, "white lies," and half-truths) and asks whether brain imaging can come close to distinguishing among these complex behavioral responses. Clearly not, he concludes, but traditional cross-examination might do the job.

Henry Greely, a professor at Stanford Law School, discusses the constitutional and ethical issues raised by fMRI lie detection. He cites as concerns the problems related to the scientific weakness of some fMRI studies, the disagreement among the investigators about which brain regions are associated with deception, the limitations of pooled studies, and the artificiality of experimental design.

The authors of these seven essays express a dim view of lie detection with fMRI. They also consider the widely used polygraph and conclude that both it and fMRI are unreliable.

Often in science when a new technique such as fMRI appears, the scientists who promote its use argue that, yes, problems exist but more research will, in the end, give us the magic bullet. Perhaps. In the case of lie detection through fMRI, however, two sets of problems seem insurmountable: 1) problems of research design, which Kanwisher argues no improvement in imaging technology is likely to address; and 2) problems of disentangling emotions, memory, and perception, which, Phelps notes, are processed in the same region of the brain and thus are commingled.

2 USING IMAGING TO IDENTIFY DECEIT



CHAPTER 1

An Introduction to
Functional Brain Imaging
in the Context of Lie
Detection
MARCUS E. RAICHLE

Human brain imaging, as the term is understood today, began with the intro­
duction of X-ray computed tomography (i.e., CT as it is known today) in
1972. By passing narrowly focused X-ray beams through the body at many
different angles and detecting the degree to which their energy had been at­
tenuated, Godfrey Hounsfield was able to reconstruct a map of the density of
the tissue in three dimensions. For their day, the resultant images of the brain
were truly remarkable. Hounsfield's work was a landmark event that radically
changed the way medicine was practiced in the world; it spawned the idea that
three-dimensional images of organs of the body could be obtained using the
power of computers and various detection strategies to measure the state of
the underlying tissues of the body.
In the laboratory in which I was working at Washington University in
St. Louis, the notion of positron emission tomography (PET) emerged short­
ly after the introduction of X-ray computed tomography. Instead of passing
an X-ray beam through the tissue and looking at its attenuation as was done
with X-ray computed tomography, PET was based on the idea that biologi­
cally important compounds like glucose and oxygen labeled with cyclotron­
produced isotopes (e.g., 15O, 11C, and 18F) emitting positrons (hence the
name positron emission tomography) could be detected in three dimensions by
ringing the body with special radiation detectors. The maps arising from this
strategy provided us with the first quantitative maps of brain blood flow and
metabolism, as well as many other interesting measurements of function.
With PET, modern human brain imaging began measuring function.
In 1979, magnetic resonance imaging (MRI) was introduced. While em­
bracing the concept of three-dimensional imaging, this technique was based
on the magnetic properties of atoms (in the case of human imaging, the pri­
mary atom of interest has been the hydrogen atom or proton). Studies of

these properties had been pursued for several decades in chemistry laborato­
ries using a technique called nuclear magnetic resonance. When this technique
was applied to the human body and images began to emerge, the name was
changed to "magnetic resonance imaging" to assuage concerns about radio­
activity that might mistakenly arise because of the use of the term nuclear.
Functional MRI (fMRI) has become the dominant mode of imaging function
in the human brain.
At the heart of functional brain imaging is a relationship between blood
flow to the brain and the brain's ongoing demand for energy. The brain's
voracious appetite for energy derives almost exclusively from glucose, which
in the brain is broken down to carbon dioxide and water. The brain is depen­
dent on a continuing supply of both oxygen and glucose delivered in flowing
blood regardless of moment-to-moment changes in an individual's activities.
For over one hundred years scientists have known that when the brain
changes its activity as an individual engages in various tasks the blood flow in­
creases to the areas of the brain involved in those tasks. What came as a great
surprise was that this increase in blood flow is accompanied by an increase in
glucose use but not oxygen consumption. As a result, areas of the brain tran­
siently increasing their activity during a task contain blood with increased oxy­
gen content (i.e., the supply of oxygen becomes greater than the demand for
oxygen). This observation, which has received much scrutiny from research­
ers, paved the way for the introduction of MRI as a functional brain tool.
By going back to the early research of Michael Faraday in England and,
later, Linus Pauling in the United States, researchers realized that hemoglo­
bin, the molecules in human red blood cells that carry oxygen from the lungs
to the tissue, had interesting magnetic properties. When hemoglobin is carry­
ing a full load of oxygen, it can pass through a magnetic field without caus­
ing any disturbance. However, when hemoglobin loses oxygen to the tissue,
it disrupts any magnetic field through which it passes. MRI is based on the
use of powerful magnetic fields, thousands of times greater than the earth's
magnetic fields. Under normal circumstances when blood passes through an
organ like the brain and loses oxygen to the tissue, the areas of veins that are
draining oxygen-poor blood show up as little dark lines in MRI images, reflect­
ing the loss of the MRI signal in those areas. Now suppose that a sudden in­
crease in blood flow locally in the brain is not accompanied by an increase in
oxygen consumption. The oxygen content of these very small draining veins
increases. The magnetic field in the area is restored, resulting in a local in­
crease in the imaging signal. This phenomenon was first demonstrated with
MRI by Seiji Ogawa at Bell Laboratories in New Jersey. He called the phe­
nomenon the "blood oxygen level dependent" (BOLD) contrast of MRI and
advocated its use in monitoring brain function. As a result researchers now
have fMRI using BOLD contrast, a technique that is employed thousands of
times daily in laboratories throughout the world.
A standard maneuver in functional brain imaging over the last twenty-five
years has been to isolate changes in the brain associated with particular tasks
by subtracting images taken in a control state from the images taken during
the performance of the task in which the researcher is interested. The control
state is often carefully chosen so as to contain most of the elements of the task
of interest save that which is of particular interest to the researcher. For exam­
ple, to "isolate" areas of the brain concerned with reading words aloud, one
might select as the control task passively viewing words. Having eliminated
areas of the brain concerned with visual word perception, the resulting "dif­
ference image" would contain only those areas concerned with reading aloud.
Another critical element in the strategy of functional brain imaging is the
use of image averaging. A single difference image obtained from one individ­
ual appears "noisy," nothing like the images usually seen in scientific articles
or the popular press. Image averaging is routinely applied to imaging data
and usually involves averaging data from a group of individuals. While this
technique is enormously powerful in detecting common features of brain
function across people, in the process it completely obscures important indi­
vidual differences. Where individual differences are not a concern, this is not
a problem. However, in the context of lie detection researchers and others
are specifically interested in the individual. Thus, where functional brain
imaging is proposed for the detection of deception, it must be clear that the
imaging strategy to be employed will provide satisfactory imaging data for
valid interpretation (i.e., images of high statistical quality).1
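The subtraction-and-averaging strategy described above can be sketched numerically. The following is an illustrative simulation, not anything from the text: the subject count, voxel count, and unit-variance noise level are all assumed values chosen only to show why a single subject's difference image is "noisy" while a group average is not.

```python
import random
import statistics

random.seed(0)

N_SUBJECTS, N_VOXELS, N_ACTIVE = 20, 1000, 50

def noisy_image(signal):
    """One simulated scan: the true per-voxel signal plus unit Gaussian noise."""
    return [s + random.gauss(0.0, 1.0) for s in signal]

# The first N_ACTIVE voxels are genuinely more active during the task.
task_signal = [1.0] * N_ACTIVE + [0.0] * (N_VOXELS - N_ACTIVE)
control_signal = [0.0] * N_VOXELS

# Subtraction: each subject's task image minus control image ("difference image").
differences = [
    [t - c for t, c in zip(noisy_image(task_signal), noisy_image(control_signal))]
    for _ in range(N_SUBJECTS)
]

# Image averaging: the voxel-wise mean across all subjects.
group_mean = [statistics.mean(col) for col in zip(*differences)]

# Noise in the inactive voxels: large in any one subject's difference image,
# roughly sqrt(N_SUBJECTS) times smaller after group averaging.
single_noise = statistics.pstdev(differences[0][N_ACTIVE:])
group_noise = statistics.pstdev(group_mean[N_ACTIVE:])
```

The design choice the chapter highlights falls out of the last two lines: averaging buys statistical quality only by pooling people, which is exactly what a lie-detection application of fMRI cannot do, since its question is about one individual.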

LESSONS FROM THE POLYGRAPH

In 2003, the National Academy of Sciences (NAS) made a series of recom­


mendations in its report on The Polygraph and Lie Detection. Although these
recommendations were primarily spawned by a consideration of the polygraph,
they are relevant to the issues raised by the use of functional brain imaging as
a tool for the detection of deception. 2
Most people think of specific incidents or crimes when they think of lie
detection. For example, an act of espionage has been committed and a sus­
pect has been arrested. Under these circumstances the polygraph seems to
perform above chance. The reason for this, the NAS committee believed, was
something that psychologists have called the "bogus pipeline": If a person sin­
cerely believed a given object (say a chair attached to electrical equipment)
was a lie detector and that person was wired to the object and had commit­
ted a crime, a high probability exists (much greater than chance) that under
interrogation the person would confess to the crime. The confession would
have nothing to do with the basic scientific validity of the technique (i.e., the
chair attached to electrical equipment) and everything to do with the individ­
ual's belief in the capability of the device to detect a lie. However, contrary to
the belief that lie detection techniques such as the polygraph are most com­
monly used to detect the lies of the accused, by far the most important use
of these techniques in the United States is in employee screening, pre-employ­
ment, and retention in high-security environments. The U.S. government
performs tens of thousands of such studies each year in its various security

1. For a more in-depth explanation of functional brain imaging, see Raichle (2000); and
Raichle and Mintun (2006).
2. I was a member of the NAS committee that authored the report.


agencies and secret national laboratories. This is a sobering fact given the con­
cerns raised by the NAS report about the use of the polygraph in screening.
As a screening technique the polygraph performs poorly and would likely
falsely incriminate many innocent employees while missing the small number
of spies in their midst. The NAS committee could find no available and prop­
erly tested substitute, including functional brain imaging, that could replace
the polygraph.
The NAS committee found many problems with the scientific data it re­
viewed. The scientific evidence on means of lie detection was of poor quality
with a lack of realism, and studies were poorly controlled, with few tests of
validity. For example, the changes monitored (e.g., changes in skin conduc­
tance, respiration, and heart rate) were not specific to deception. To compound
the problem, studies often lacked a theory relating the monitored responses
to the detection of truthfulness. Changes in cardiac output, peripheral vascu­
lar resistance, and other measures of autonomic function were conspicuous
by their absence. Claims with regard to functional brain imaging hinged for
the most part on dubious extrapolations from group averages.
Countermeasures (i.e., strategies employed by a subject to "beat the poly­
graph") remain a subject clouded in secrecy within the intelligence commu­
nity. Yet information on such measures is freely available on the Internet! Re­
gardless, countermeasures remain a challenge for many techniques, although
one might hold some hope that imaging could have a unique role here. For
example, any covert voluntary motor or cognitive activity employed by a sub­
ject would undoubtedly be associated with predictable changes in functional
brain imaging signals.
At present we have no good ways of detecting deception despite our very
great need for them. We should proceed in acquiring such techniques and
tools in a manner that will avoid the problems that have plagued the detec­
tion of deception since the beginning of recorded history. Expanded research
should be administered by organizations with no operational responsibility for
detecting deception. This research should operate under normal rules of sci­
entific research with freedom and openness of communication to the extent
possible while protecting national security. Finally, the research should vigor­
ously explore alternatives to the polygraph, including functional brain imaging.

REFERENCES

National Academy of Sciences. 2003. The polygraph and lie detection. Washington, DC: National Research Council.

Raichle, M. E. 2008. A brief history of human brain mapping. Trends in Neurosciences (doi:10.1016/j.tins.2008.11.001).

Raichle, M. 2000. A brief history of human functional brain mapping. In Brain mapping: The systems, ed. A. Toga and J. Mazziotta. San Diego: Academic Press.

Raichle, M. E., and M. A. Mintun. 2006. Brain work and brain imaging. Annual Review of Neuroscience 29:449-476.


CHAPTER 2

The Use of fMRI in Lie
Detection: What Has Been
Shown and What Has Not
NANCY KANWISHER

Can you tell what somebody is thinking just by looking at magnetic resonance
imaging (MRI) data from their brain?1 My colleagues and I have shown that
a part of the brain we call the "fusiform face area" is most active when a per­
son looks at faces (Kanwisher et al. 1997). A separate part of the brain is most
active when a person looks at images of places (Epstein and Kanwisher 1998).
People can selectively activate these regions during mental imagery. If a sub­
ject closes her eyes while in an MRI scanner and vividly imagines a group of
faces, she turns on the fusiform face area. If the same subject vividly imagines
a group of places, she turns on the place area. When my colleagues and I first
got these results, we wondered how far we could push them. Could we tell
just by looking at the fMRI data what someone was thinking? We decided to
run an experiment to determine whether we could tell in a single trial whether
a subject was imagining a face or a place (O'Craven and Kanwisher 2000).
My collaborator Kathy O'Craven scanned the subjects, and once every
twelve seconds said the name of a famous person or a familiar place. The sub­
ject was instructed to form a vivid mental image of that person or place. After
twelve seconds Kathy would say, in random order, the name of another per­
son or place. She then gave me the fMRI data from each subject's face and
place areas.
Figure 1 shows the data from one subject. The x-axis shows time, and the
y-axis shows the magnitude of response in the face area (black) and the place
area (gray). The arrows indicate the times at which instructions were given
to the subject. My job was to look at these data and determine for each trial
whether the subject was imagining a face or a place. Just by eyeballing the
data, I correctly determined in over 80 percent of the trials whether the sub­
ject was imagining faces or places. I worried for a long time before we pub­
lished these data that people might think we could use an MRI to read their
minds. Would they not realize the results obtained in my experiment were for

1. This article is based on remarks made at the American Academy of Arts and Sciences's
conference on February 2, 2007.


a specific, constrained situation? That we used faces and places because we


know which highly specific parts of the brain process those two categories?
That we selected only cooperative subjects who were good mental imagers?
And so on. I thought, "Surely, no one would try to use fMRI to figure out
what somebody else was thinking!"

Figure 1. Time course of fMRI response in the fusiform face area and parahippo­
campal place area of one subject over a segment of a single scan, showing the fMRI
correlates of single unaveraged mental events. Each black arrow indicates a single
trial in which the subject was asked to imagine a specific person (indicated by face
icon) or place (indicated by house icon). Visual inspection of the time courses al­
lowed 83 percent correct determination of whether the subject was imagining a
face or a place. Source: O'Craven and Kanwisher 2000.

My concern proved to be premature. Almost no one cited this work for


several years. In the past couple of years, however, our findings have been
more widely discussed, for example, in Time Magazine, on NPR, and in the
New York Times Magazine (though oddly these venues still fail to cite our
paper when describing our results)-and at least two companies, Cephos
Corp. and No Lie MRI, have begun marketing fMRI "lie-detection" ser­
vices. The Cephos website says, "Lying is shown to activate specific, discrete
parts of the brain. We can use those regions to determine if a person is lying
with a high degree of accuracy. No activation is seen when telling the truth."
The No Lie MRI website includes a product overview that boasts, "Current
accuracy is over 90% and is estimated to be 99% once product development
is complete" (No Lie MRI 2006).
Does any real science lie behind these claims? The literature on using
fMRI for lie detection can be divided into two groups (see Figure 2). The
first uses group analysis, and the second focuses on individual subject data.
In group analysis, subjects' brain scans are aligned as closely as possible. The
data are then averaged and analyzed as a group. Group studies can be useful


for identifying patterns of brain response that are consistent across subjects.
Group studies are not useful for determining whether a particular subject is
lying. Studies that analyze individual subject data are relevant for trying to
determine whether fMRI is useful for lie detection, so we discuss these find­
ings in turn.

Figure 2. List of empirical studies of fMRllie detection published in peer-reviewed


journals as of January 2007.

STUDY 1

Kozel and colleagues (2005) instructed subjects to "steal" either a watch or


a ring and to place the "stolen" item with their belongings in a locker while
being watched by one of the investigators. They were then scanned with
fMRI while answering a set of questions presented in a random order. For
example, they might be asked, "Did you take the watch from the drawer?"
Or, "Is the ring in your locker?" There were twenty roughly synonymous
versions of each question, as well as a variety of "neutral" conditions; for
example, "Do you like to swim?" Subjects were instructed to answer ques­
tions about the ring or watch as if they had stolen neither object and to answer
the other questions truthfully. Thus, for the ring condition subjects were
"lying" and for the watch condition they were "telling the truth," or vice
versa. Participants were also told that they would receive $50 if the other
investigator could not tell from the MRI data when they were lying.
What did they find? Well, first let me explain the basic logic of the analy­
sis. Simply put, the idea is to take half of the subjects to discover some kind
of brain "signature" of lying versus truth, and then to use that signature to
classify the other subjects to figure out whether they were lying or telling the
truth. More specifically, they found seven activation "blobs" in the brain that
produced a stronger response during the lie condition than the truth condition, and they found the three or four of those blobs that were most discriminative between truths and lies. They then used those blobs to classify the
inative between truths and lies. They then used those blobs to classify the
activations in the other subjects. So, tor example, they ran statistical tests on
each 3-D pixel or "voxel" in the brain, asking whether that voxel produced
a stronger response during the lie condition than the neutral condition, and
they tallied how many voxels showed that pattern versus how many produced
a stronger response in the truth condition than neutral. If they found more
lie voxels than truth voxels, they considered their model to have identified
which condition was a lie in that subject. By this measure, they could correct­
ly determine for 90 percent of subjects which was the lie and which was the
truth.
This is not really lie detection. The researchers always know that the sub­
jects are lying in response to one of the sets of non-neutral questions. Rather
than answering the question "Can you tell whether the subject is lying," this
research is answering the question "Can you tell which response is the truth
and which is the lie?"
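The tally rule described above amounts to a forced choice between the two candidate question sets, which is why it answers "which response is the lie" rather than "is this subject lying." A minimal sketch of that rule follows; the function name, the raw-contrast inputs, and the fixed threshold are all hypothetical simplifications, not the authors' actual pipeline (which ran per-voxel statistical tests within pre-selected regions).

```python
def which_is_the_lie(contrast_a, contrast_b, threshold=0.0):
    """Given per-voxel (condition minus neutral) values inside the pre-selected
    'signature' blobs for two question sets, count the voxels each set drives
    above threshold and declare the set with more active voxels the lie."""
    active_a = sum(1 for v in contrast_a if v > threshold)
    active_b = sum(1 for v in contrast_b if v > threshold)
    return "A" if active_a > active_b else "B"

# Toy usage: question set A activates more voxels, so it is called the lie.
verdict = which_is_the_lie([0.8, 0.5, 0.3, -0.1], [0.2, -0.3, -0.2, 0.1])
```

Note that the function always returns one of the two labels: it has no way to say "neither set is a lie," which is precisely the limitation Kanwisher identifies.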

STUDY 2

Langleben and colleagues (2005) scanned twenty-six subjects. Prior to the


scan, a researcher presented the subjects with an envelope containing two
playing cards-the seven of spades and the five of clubs-and a $20 bill. Par­
ticipants were instructed to deny possession of one of the cards and acknowl­
edge possession of the other card during the scans. They were also told they
could keep the $20 if they successfully concealed the identity of the "lie" card.
A different researcher then scanned the subjects, telling them to respond to
each trial as accurately and truthfully as possible. During the scans the subjects
saw, in random order, the five of clubs and seven of spades, and they respond­
ed with a button press, indicating whether they held that card in their pos­
session. Of course, one of these card types would be the lie condition, and
one would be the truth condition. Other various cards were included as con­
trol conditions. Critically, the truth condition was the only response for which
the subjects said "yes," and there were only 24 of these yes-truth trials out of
432 total trials. This feature is important because it means that the subjects
are sitting during the scanning saying, "No, no, no, no, no, no," most of the
time, looking for those rare five of clubs so that they can say yes. The subjects
probably think of the task essentially as one of detecting that five of clubs,
and it means that the neural signature of the supposed "truth" response is
really just the neural signature of a target detection event.
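The imbalance quoted above is worth making explicit with a line of arithmetic; the numbers are the ones given in the text.

```python
# Back-of-envelope look at the trial imbalance described above: only 24 of
# 432 trials required a "yes" (truth) response, so the truth condition is a
# rare target event rather than an ordinary answer.

yes_truth_trials = 24
total_trials = 432

fraction_yes = yes_truth_trials / total_trials
print(f"{fraction_yes:.1%} of trials required a 'yes'")  # -> 5.6% of trials required a 'yes'
```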
When the researchers analyzed their data using group analysis, they found
no significantly higher response for the truth condition than for the lie
condition, thus failing to replicate their own 2002 study. The probable reason
for this failure is that the truth condition was the salient target detection
condition, and so the lie condition was like a default case and hence did not
activate any part of the brain to a greater extent. Next the researchers low­
ered the statistical threshold to p < .05 (uncorrected). However, the brain

10 USING IMAGING TO IDENTIFY DECEIT



contains 30,000–40,000 voxels, so p < .05 creates a situation in which
hundreds or thousands of voxels reach significance, even if what is being analyzed
is random noise. Neuroimagers must correct for this "multiple comparisons"
problem and generally do not accept the .05 threshold as legitimate without
such corrections. Nonetheless, using p < .05 (uncorrected), Langleben and
colleagues reported that they found activations in the left inferior frontal gyrus
for lie versus truth, commenting that "Lie related activation was less robust
and limited to areas associated with context processing, left inferior frontal
gyrus" (Langleben et al. 2005). In fact, this "result" was not even worth this
effort at interpretation, because it is nowhere near significant.
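The multiple-comparisons objection comes down to simple arithmetic, sketched below. The voxel count is the mid-range of the figures quoted above, and the Bonferroni correction is shown as one standard (not the only) remedy.

```python
# Rough arithmetic behind the multiple-comparisons problem. With tens of
# thousands of voxels each tested at p < .05, pure noise is expected to
# yield thousands of "significant" voxels; a Bonferroni-style correction
# shrinks the per-voxel threshold accordingly.

n_voxels = 35_000   # mid-range of the 30,000-40,000 quoted in the text
alpha = 0.05

expected_false_positives = n_voxels * alpha     # false positives under pure noise
bonferroni_threshold = alpha / n_voxels         # corrected per-voxel threshold

print(f"expected false positives from noise alone: {expected_false_positives:.0f}")
print(f"Bonferroni per-voxel threshold: {bonferroni_threshold:.2e}")
# -> 1750 spurious voxels expected at p < .05; corrected threshold 1.43e-06
```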
In addition to the group analysis, Langleben and colleagues also per­
formed an individual subject analysis in which they asked whether they could
discriminate lies from truths. They used a classification method based on the
data from the group analysis to test new subjects. They found that they could
discriminate, on an individual trial basis, which responses were lies and which
were truths 76 percent of the time. The "false alarm" rate, which is the rate
of true responses incorrectly labeled lies, was 16 percent, and the miss rate,
which is the rate of false responses incorrectly labeled truths, was 31 percent.
However, because in this experiment the truths were rare target events, the
ability to discriminate truths from lies probably just reflects the ability to dis­
tinguish rare events; it has nothing to do with lying per se.
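The error rates defined above can be computed from a confusion table. The counts below are invented to reproduce the quoted rates; only the definitions (false alarm = truths labeled lies, miss = lies labeled truths) come from the text.

```python
# Sketch of the error-rate definitions used above, computed from a toy
# confusion table. The counts are illustrative, not the study's data.

def rates(truths_labeled_lie, truths_total, lies_labeled_truth, lies_total):
    """Return (false alarm rate, miss rate, overall accuracy)."""
    false_alarm = truths_labeled_lie / truths_total     # truths called lies
    miss = lies_labeled_truth / lies_total              # lies called truths
    correct = ((truths_total - truths_labeled_lie) +
               (lies_total - lies_labeled_truth)) / (truths_total + lies_total)
    return false_alarm, miss, correct

fa, miss, acc = rates(truths_labeled_lie=4, truths_total=25,
                      lies_labeled_truth=31, lies_total=100)
print(f"false alarm {fa:.0%}, miss {miss:.0%}, accuracy {acc:.0%}")
# -> false alarm 16%, miss 31%, accuracy 72%
```

As the toy numbers show, when truths are rare the overall accuracy is dominated by performance on the plentiful lie trials, which is part of why a target-detection signal can masquerade as lie detection.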
In the final paper of the three that look at individual subjects, Davatzikos
et al. (2005) analyzed the same data as in the Langleben et al. paper (2005).
They just used fancier math. There is a lot of exciting work going on right
now in computer science and math, where people are devising machine learn­
ing algorithms to find patterns in MRI data and other kinds of neuroscience
data. So they used some of these fancier methods to classifY the responses,
and they got an 89 percent correct classification on a subject basis, not a trial
basis. But now we have to consider what this means, and whether these lab
experiments have anything to do with lie detection as it might be attempted
in the real world.

REAL-WORLD IMPLICATIONS

To summarize all three of the individual subject studies, two sets of function­
al MRI data have been analyzed and used to distinguish lies from truth.
Kozel and colleagues (2005) achieved a 90 percent correct response rate in
determining which was the lie and which was the truth, when they knew in
advance there would be one of each. Langleben got a 76 percent correct
response rate with individual trials, and Davatzikos, analyzing the same data,
got an 89 percent correct response rate. The very important caveat is that in
the last two studies it is not really lies they were looking at, but rather target
detection events. Leaving that problem aside, these numbers aren't terrible.
And these classification methods are getting better rapidly. Imaging methods
are also getting better rapidly. So who knows where all this will be in a few
years. It could get much better than that.

THE USE OF fMRI IN LIE DETECTION 11



But there is a much more fundamental question. What does any of this
have to do with real-world lie detection? Let's consider how lie detection in
the lab differs from any situation where you might want to use these meth­
ods in the real world. The first thing I want to point out is that making a
false response when you are instructed to do so isn't a lie, and it's not decep­
tion. It's simply doing what you are told. We could call it an "instructed false­
hood." Second, the kind of situation where you can imagine wanting to use
MRI for lie detection differs in many respects from the lab paradigms that
have been used in the published studies. For one thing, the stakes are incom­
parably higher. We are not talking about $20 or $50; we are talking about
prison, or life, or life in prison. Further, the subject is suspected of a very
serious crime, and they believe while they are being scanned that the scan
may determine the outcome of their trial. All of this should be expected to
produce extreme anxiety. Importantly, it should be expected to produce extreme
anxiety whether the subject is guilty or not guilty of the crime. The anxiety does
not result from guilt per se, but rather simply from being a suspect. Further,
importantly, the subject may not be interested in cooperating, and all of these
methods we have been discussing are completely foilable by straightforward
countermeasures.
Functional MRI data are useless if the subject is moving more than a
few millimeters. Even when we have cooperative subjects trying their best to
help us and give us good data, we still throw out one of every five, maybe
ten, subjects because they move too much. If they're not motivated to hold
still, it will be much worse. This is not just a matter of moving your head­
you can completely mess up the imaging data just by moving your tongue in
your mouth, or by closing your eyes and not being able to read the questions.
Of course, these things will be detectable, so the experimenter would know
that the subject was using countermeasures. But there are also countermeasures
subjects could use that would not be detectable, like performing mental
arithmetic. You can probably activate all of those putative lie regions just by
subtracting seven iteratively in your head.
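The "serial sevens" countermeasure mentioned above is simple enough to write down as a tiny generator; the start value and step count here are arbitrary illustrations.

```python
# The iterative subtract-seven task mentioned above: a subject silently
# producing this sequence engages working-memory and arithmetic circuitry
# regardless of what questions appear on the screen.

def serial_sevens(start=100, steps=5):
    values = [start]
    for _ in range(steps):
        values.append(values[-1] - 7)
    return values

print(serial_sevens())  # -> [100, 93, 86, 79, 72, 65]
```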
Because the published results are based on paradigms that share none of
the properties of real-world lie detection, those data offer no compelling
evidence that fMRI will work for lie detection in the real world. No published
evidence shows lie detection with fMRI under anything even remotely
resembling a real-world situation. Furthermore, it is not obvious how the use
of MRI in lie detection could even be tested under anything resembling a
real-world situation. Researchers would need access to a population of sub­
jects accused of serious crimes, including, crucially, some who actually perpe­
trated the crimes of which they are accused and some who did not. Being
suspected but innocent might look a lot like being suspected and guilty in
the brain. For a serious test of lie detection, the subject would have to be­
lieve the scan data could be used in her case. For the data from individual
scans to be of any use in testing the method, the experimenter would ulti­
mately have to know whether the subject of the scan was lying. Finally, the
subjects would have to be interested in cooperating. Could such a study ever
be ethically conducted?


Before the use of fMRI lie detection can be seriously considered, it must
be demonstrated to work in something more like a real-world situation, and
those data must be published in peer-reviewed journals and replicated by labs
without a financial conflict of interest.

REFERENCES

Davatzikos, C., K. Ruparel, Y. Fan, D. G. Shen, M. Acharyya, J. W.
Loughead, R. C. Gur, and D. D. Langleben. 2005. Classifying spatial
patterns of brain activity with machine learning methods: Application to lie
detection. Neuroimage 28:663-668.
Epstein, R., and N. Kanwisher. 1998. A cortical representation of the local
visual environment. Nature 392:598-601.
Kanwisher, N., J. McDermott, and M. Chun. 1997. The fusiform face area:
A module in human extrastriate cortex specialized for the perception of faces.
Journal of Neuroscience 17:4302-4311.
Kozel, F., K. Johnson, Q. Mu, E. Grenesko, S. Laken, and M. George.
2005. Detecting deception using functional magnetic resonance imaging.
Biological Psychiatry 58:605-613.
Langleben, D. D., J. W. Loughead, W. B. Bilker, K. Ruparel, A. R. Childress,
S. J. Busch, and R. C. Gur. 2005. Telling truth from lie in individual subjects
with fast event-related fMRI. Human Brain Mapping 26:262-272.
No Lie MRI. 2006. Product overview. http://www.noliemri.com/products/
Overview.htm.
O'Craven, K., and N. Kanwisher. 2000. Mental imagery of faces and places
activates corresponding stimulus-specific brain regions. Journal of Cognitive
Neuroscience 12:1013-1023.


CHAPTER 3

Lying Outside the Laboratory:
The Impact of Imagery and Emotion
on the Neural Circuitry of Lie Detection

ELIZABETH A. PHELPS

One of the challenges of research on lie detection is the difference between
instructed lying in a laboratory setting and the types of situations one might
encounter outside the laboratory that would require the use of lie detection
techniques. The development of techniques for lie detection is based on lab­
oratory studies of lying. However, the characteristics of lies outside the labo­
ratory may differ in important ways. For instance, if someone is accused of a
crime, there are two possible scenarios. First, the person may be innocent.
If this is the case, he or she is likely upset and wondering how the accusers
could think the charges are plausible. Given the serious circumstances, this
person might ruminate on this last point and be quite concerned about proving
his or her innocence. On the other hand, if a person is accused of a crime
he or she actually committed, there might be an anxious or guilty feeling
and an effort to formulate a false alibi. This person might think about this
lie in detail and elaborate on the lie in an effort to be especially convincing
when asked about the alibi. In both cases, the person accused of the crime
faces a highly emotional situation and has ample opportunity to mentally
image the circumstances that occurred and ruminate on the situation. It is
these factors, emotion and imagery, that differ between the real lie and the
laboratory lie. Research in cognitive neuroscience has shown that both
imagery and emotion can alter the representation of events. Given that lies
outside the laboratory are likely to be personally relevant and emotional, and
also imagined with elaboration and repetition, any successful lie detection
techniques will need to consider these factors. In this essay, I will explore
some of the ways imagery and emotion might impact the potential neural
signatures of lying.


There are two possible uses of functional magnetic resonance imaging
(fMRI) for lie detection. One is to assess familiarity. Imagine that the police
want to know whether a suspect has been to the scene of a crime. The sus­
pect might deny any familiarity with that location. In using fMRI to assess
familiarity, the police could show the suspect a picture of the scene and look
for a pattern of blood oxygenation level dependent (BOLD) signal in his or
her brain that indicates previous experience with that scene. In this case, the
use of fMRI for lie detection is based on our knowledge about the neural
representation of memory. The second use of fMRI for lie detection is to
detect deception. One would expect that the neural systems involved when
someone is lying or telling the truth are different. The assumption underly­
ing lie detection techniques for detecting deception is that the truth is the
natural response. When individuals lie they have to inhibit the truth to gen­
erate the lie. In this case, lying results in conflict between the truth and the
lie. The use of fMRI to detect deception is based on our knowledge about
the neural representation of conflict. When lying outside the laboratory, the
stakes are high, the individual is highly emotional, and the story is practiced,
rehearsed, or imagined. Because of this, the use of fMRI for lie detection will
need to consider how imagery and emotion might alter the neural represen­
tation of memory and conflict.
To address these questions, I will first review what is known about the
neural signatures of using fMRI to detect familiarity. Have we identified reli­
able neural markers for item or event familiarity? In other words, when pre­
sented with a face or scene seen before or that is reminiscent, does the brain
respond with a pattern of activity that is different in ways that can be mea­
sured with fMRI? This question was addressed in a recent study by Gonsalves,
Wagner, and colleagues (2005). In this study, the participants were presented
with a series of faces. Afterwards, they were given a recognition test in which
some of the faces were presented earlier, some were morphed to look some­
what like the faces presented earlier, and others were novel. For each face,
participants were asked to judge whether they recollected having seen the
face earlier, if it seemed familiar, or if it was new. There were a few regions in
the temporal lobe, the hippocampal cortex and fusiform gyrus, where BOLD
responses differed depending on the mnemonic judgment. The term hip­
pocampal cortex refers to a collection of regions known to be important for
memory, including the hippocampus proper and regions around and under­
neath it, such as the parahippocampal cortex. These regions showed more
activity when the faces were judged to be less familiar than when they were
more familiar. The fusiform gyrus, which plays an important role in process­
ing faces, showed more activity when faces were more familiar. These results
suggest the possibility of a neural signature for familiarity that could be
detected with fMRI (see Figure 1).
However, what do we know about how responses in the hippocampus,
parahippocampus, and fusiform gyrus might be altered with imagery and
emotion? First, let's explore the role of imagery. A classic paradigm demon­
strates the importance of imagery in memory. Imagine you were presented
the following list of words: SOUR, CANDY, SUGAR, BITTER, GOOD,

LYING OUTSIDE THE LABORATORY 15



Figure 1. (A) the hippocampus (light grey) and parahippocampus (dark grey);
(B) the fusiform face area (in circles)

TASTE, TOOTH, NICE, HONEY, SODA, CHOCOLATE, HEART,
CAKE, TART, PIE. After the presentation of this list and a short delay you
are given a recognition test. For instance, you might be asked if tart was on
the list, in which case you would say "yes." If you were asked if chair was on
the list, you would correctly respond "no." However, what if you were asked
if sweet was on the list? If you go back a few sentences, you will see that sweet
was not on the list; however, most participants in experiments like this say
"yes" it was on the original list. This is because sweet is strongly associated
with all the words on the list. Even though sweet was not on the list, your
mind most likely came up with the image of sweet when you were reading
the words. This type of mistake is often called a false memory, but it is not
really a false memory. We have memories both for images that we internally
generate and events that occur in the external world. In this case, the word
sweet is really an imagined memory, something that was generated when you
saw the initial list of words, and therefore, most people misremember actually
having seen it before. This is an example of mental imagery creating a memo­
ry that results in a memory mistake.
Can fMRI images of the brain distinguish imagined or false memories
from memories based on perceptual experience? As the study by Gonsalves
and colleagues (2005) demonstrates, fMRI signals can indicate familiarity,
but what happens when an item is familiar because the individual has imag­
ined it? Using the word list paradigm described above it was shown that some
regions known to be important for memory, such as the hippocampus, do
not differentiate memories for events that are perceptually experienced from
events that are imagined (Cabeza et al., 2001). In other words, our memo­
ries for real and imagined events rely on overlapping neural systems. If we
were to look at responses in the hippocampus, we could not differentiate if
an event is familiar due to perceptual experience or mental imagery. However,
there are other regions of the hippocampal cortex, specifically the
parahippocampus, that are more important in processing the perceptual details of
memory. In this same study, this region showed greater activation to true
relative to false memories. This study, and others like it (see Schacter and


Slotnick, 2004 for a review), indicates that for many brain regions known to
be important in memory, such as the hippocampus, it does not matter whether
an experienced event was the result of our thought or our perception. Given
this, we cannot look at these memory-related brain regions to reveal whether
a person is remembering accurately. Other brain regions, such as the parahip­
pocampus and fusiform gyrus, are more specifically involved in perceptual
processing and memory for perceptual details (Schacter and Slotnick, 2004).
These regions may provide a signal of familiarity for scenes and faces.
However, one difficulty in relying on BOLD signal responses in regions
involved in perceptual aspects of memory, such as the parahippocampus or
fusiform gyrus, to judge whether a suspected criminal is familiar with a scene
or face is that perception is often altered by emotion. For the criminal whose
brain is being imaged to judge involvement in a crime, pictures of the scene
of the crime or partners in crime are likely highly emotional. Changes in per­
ception occur with emotion, and research has demonstrated changes in BOLD
signal in both the parahippocampus and fusiform gyrus for emotional scenes
and faces. For example, a study looking at individuals remembering 9/11
found that people who were closer to the World Trade Center showed less
activity in the parahippocampus when recalling the events of 9/11 than when
recalling less emotional events (Sharot et al., 2007). As indicated earlier, the
parahippocampus also shows less activation when a face is more familiar
(Gonsalves et al., 2005). Even though imaging this region might reveal
perceptual qualities of memory, this region might also be influenced by emotion.
If the event is highly emotional, signals in the parahippocampus might not
be a reliable indicator of familiarity.
What about the fusiform gyrus? This region is known for processing faces
(Kanwisher and Yovel, 2006, for a review). As indicated by Gonsalves and
colleagues (2005) this region also shows stronger activation for more familiar
faces. However, emotion also influences responses in the fusiform gyrus, so
that in a highly emotional situation the signal from this region might be
somewhat altered. For faces that are equally unfamiliar, more activation is observed
in the fusiform gyrus for faces with fear expressions (Vuilleumier et al., 2001).
Furthermore, the face itself does not need to be fearful. If the context in
which the face is presented is fearful, greater activation of the fusiform gyrus
is observed (Kim et al., 2004). Because of this, responses in this region may
not be a reliable indicator of familiarity with a face in an emotional situation.
When researchers look at the brain's memory circuitry to detect familiar­
ity with a scene or person, for some regions it may be difficult to differenti­
ate events that a person imagined, rehearsed, or thought were plausible from
those that actually occurred. Memories are formed for events that happen
only in our minds and events that happen in the outside world. The regions
that are important in the perceptual aspects of memory are influenced by
emotion, so they might not be good detectors of familiarity if the situation
is emotional. In other words, the imagery and emotion that would likely be
present when a lie is personally relevant and important might interfere with
the use of fMRI signals to detect familiarity.


The second potential use of fMRI for lie detection relies on our knowledge
of the neural circuitry of conflict. It is assumed that lying results in a
conflict between the truth and the lie. How might emotion and imagery
influence the neural signatures of conflict? Two regions that have been
highlighted for their role in responding to conflict or interference are the
anterior cingulate cortex and the inferior frontal gyrus. These regions have
also been implicated in studies of lie detection (e.g., Kozel et al., 2004;
Langleben et al., 2005). One classic task used to detect conflict-related responses is the
Stroop test, in which participants are shown a list of words and asked not to
read the words but to name the colors in which the words are printed. For
instance, if the word table is presented in blue ink, participants are asked to
say "blue." Most of the time, participants can fairly easily ignore the words
and name the color of the ink. However, if the words the participants are
asked to ignore are the names of colors, it is much more difficult. For exam­
ple, it typically takes significantly longer for participants to name the ink
color "blue" if the word they are asked to ignore is red as opposed to table.
This longer reaction time is due to the conflict between reading the word
red and saying the word "blue." Naming the color of ink for color words in
comparison to other words results in significant activation of an anterior,
dorsal region of the cingulate cortex (Carter and van Veen, 2007) and this
same region shows activation in many laboratory studies of lie detection
(e.g., Kozel et al., 2004).
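The structure of the Stroop task described above can be sketched as a trial classifier. The word set and function name are illustrative; the congruent/incongruent/neutral distinction is the one the paradigm actually uses.

```python
# Minimal sketch of Stroop trial types as described above. A trial pairs a
# printed word with an ink color; conflict arises only when the word is
# itself a color name that differs from the ink color.

COLOR_WORDS = {"red", "blue", "green"}

def trial_type(word, ink_color):
    if word not in COLOR_WORDS:
        return "neutral"       # e.g. "table" printed in blue ink
    if word == ink_color:
        return "congruent"     # "blue" printed in blue ink
    return "incongruent"       # "red" printed in blue ink: slowed responses

print(trial_type("table", "blue"))  # -> neutral
print(trial_type("red", "blue"))    # -> incongruent
```

It is the incongruent trials, relative to neutral ones, that produce the slowed reaction times and dorsal anterior cingulate activation discussed in the text.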
However, difficulty in naming the ink color of words is not only slower
for color words. In a variation of the Stroop task, called the emotional Stroop
task, subjects are presented with highly emotional words printed in different
colors of ink. When asked to ignore the words and name the ink color, it
takes significantly longer to name the color for emotional words in compari­
son to neutral words. Interestingly, emotional variations of the Stroop task
also result in activation of the anterior cingulate, but a slightly different region
that is more ventral than that observed in the classic Stroop paradigm (Whalen
et al., 1998). In a meta-analysis of a number of conflict tasks, Bush et al.
(2000) confirmed this division within the anterior cingulate (see Figure 2).
Cognitive conflict tasks, such as the classic Stroop task, typically result in
activation of the dorsal anterior cingulate, whereas emotional conflict tasks,
as demonstrated by the emotional Stroop task, result in activation of the ven­
tral anterior cingulate. This suggests that this specific neural indicator of con­
flict is significantly altered depending on the emotional nature of the conflict.
Another region often implicated in conflict or interference in studies of
lie detection is the inferior frontal gyrus. In fact, some studies of lie detection
have suggested that activation of this region is the best predictor of whether
a participant is lying (Langleben et al., 2005). The role this region plays in
conflict or interference monitoring has traditionally been examined with the
Sternberg Proactive Interference paradigm. In a typical version of this para­
digm, a participant is shown a set of stimuli and told to remember it. For
example, the set might include three letters, such as B, D, F. After a short
delay the participant is presented a letter and asked, "Was this letter in the
target set?" If the letter is D, the participant should answer "yes." In the next


Figure 2. Meta-analysis of fMRI studies showing anterior cingulate activation
for cognitive (circles) and emotional (squares) tasks, demonstrating a
cognitive-affective division within the anterior cingulate. Consistent with this
division, cognitive (triangles) and affective (diamond) versions of a Stroop
task result in activation of dorsal and ventral regions of the anterior
cingulate, respectively. Reprinted with permission from Bush et al., 2000.

trial the participant is given another target set, such as K, E, H. At this point,
if the participant is shown the letter P, she or he should say "no." If the par­
ticipant is shown the letter B, the correct answer is also "no." However, for
most participants it will take longer to correctly respond "no" to B than P.
This is because B was a member of the immediately preceding target set (B,
D, F), but it is not a member of the current target set (K, E, H). On the pre­
ceding trial, a minute or so earlier, the participant was ready to respond
"yes" to B. To correctly respond "no" to B on the current trial requires the
participant to inhibit this potential "yes" response and focus only on the
current target set. This requirement for inhibition is not necessary if the probe
letter is P, which was not a member of either the preceding or current target
set. Research using both brain imaging (D'Esposito et al., 1999; Jonides and
Nee, 2006) and lesion (Thompson-Schill et al., 2002) techniques has shown
that the inferior frontal gyrus plays an important role in resolving this type of
interference or conflict. It is believed this region might be linked to lying
because in order to lie one must inhibit the truth, which creates conflict or
interference (e.g., Langleben et al., 2005).
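The Sternberg trial logic described above can be sketched as a small probe classifier. The letter sets follow the example in the text; the function and labels are illustrative.

```python
# Sketch of the Sternberg Proactive Interference trial logic described
# above: a probe absent from the current target set but present in the
# previous one is a "recent negative," the condition that demands
# inhibition of a lingering "yes" and slows responses.

def probe_condition(probe, current_set, previous_set):
    if probe in current_set:
        return "yes"
    if probe in previous_set:
        return "no (recent negative: interference)"
    return "no"

prev, curr = {"B", "D", "F"}, {"K", "E", "H"}
print(probe_condition("D", prev, set()))  # first trial: -> yes
print(probe_condition("P", curr, prev))   # -> no
print(probe_condition("B", curr, prev))   # -> no (recent negative: interference)
```

It is the recent-negative probes, compared with ordinary "no" probes like P, that engage the inferior frontal gyrus in the imaging work cited above.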
In order to examine the impact of emotion on this type of interference,
a recent study used a variation of the typical Sternberg Proactive Interference
paradigm in which the stimuli were emotional words or scenes, instead of
letters or neutral words and scenes. An examination of reaction times found
that the inhibition of emotional stimuli was faster than neutral stimuli,
suggesting that emotion can impact processing in this interference paradigm
(Levens and Phelps, 2008). Using fMRI to examine the neural circuitry
underlying the impact of emotion on the Sternberg Proactive Interference
paradigm revealed that the inhibition of emotional stimuli on this task engages a
slightly different network, including regions of the anterior insula cortex and
orbitofrontal cortex (Levens et al., 2006). Much like the results observed
with the anterior cingulate, this suggests that emotion alters the neural
circuitry of inhibition or conflict observed in the inferior frontal gyrus.
Although further studies are needed to clarify the impact of emotion on
interference resolution mediated by the inferior frontal gyrus, these initial
results suggest that this neural indicator of conflict in lying may also be
different in highly emotional situations.
There is abundant evidence that emotion can influence the neural circuit­
ry of conflict or interference identified in laboratory studies of lie detection,
but can imagery or repetition also alter responses in these regions? It seems
possible that practicing a lie repeatedly, as one might after generating a false
alibi, could reduce the conflict experienced when telling that lie. If this is the
case, we might expect less evidence of conflict with practice or repetition.
This finding has been observed with the classic Stroop paradigm. A number
of behavioral studies have demonstrated that practice can diminish the Stroop
effect (see MacLeod, 1991, for a review). It has also been shown that prac­
ticing the Stroop task significantly reduces conflict-related activation in the
anterior cingulate (Milham et al., 2003). To date, there is little research
examining how imagery might alter the neural circuitry of conflict or interference
as represented in the inferior frontal gyrus, but these findings suggest that at
least some neural indicators of conflict or interference may be unreliable if
the task (or lie) is imagined, practiced, and rehearsed.
Although the use of fMRI to detect lying in legal settings holds some
promise, there are some specific challenges in developing these techniques
that have yet to be addressed. Out of necessity, the development of tech­
niques for lie detection relies on controlled laboratory studies of lying. How­
ever, lying in legally relevant circumstances is rarely so controlled. This differ­
ence should be kept in mind when building a neurocircuitry of lie detection
that is based on unimportant lies told by paid participants in the laboratory,
but is intended to be applied in legally important situations to people outside
the laboratory facing far higher stakes. Because of this, it is important to
examine exactly what might differ between the laboratory lie and the other
lies that could impact the usefulness of these techniques. In this essay, I have
explored two factors-imagery and emotion-and highlighted how research
suggests that the neural signatures identified in current fMRI lie detection
technologies might be quite different in their utility when the lies detected
are not generated in the laboratory. This problem of applying laboratory
findings to other, more everyday and/or personally relevant and important
circumstances is a challenge for all studies of human behavior. However,
addressing this challenge becomes especially critical when we attempt to use
our laboratory findings to generate techniques that can potentially impact
individuals' legal rights. Until this challenge can be addressed, the use of
fMRI for lie detection should remain a research topic, instead of a legal tool.


REFERENCES

Bush, G., P. Luu, and M.I. Posner. 2000. Cognitive and emotional influences
in anterior cingulate cortex. Trends in Cognitive Sciences 4:215-222.
Cabeza, R., S.M. Rao, A.D. Wagner, A.R. Mayer, and D.L. Schacter. 2001.
Can medial temporal lobe regions distinguish true from false? An event-related
functional MRI study of veridical and illusory recognition memory.
Proceedings of the National Academy of Sciences 98:4805-4810.
Carter, C.S., and V. van Veen. 2007. Anterior cingulate cortex and conflict
detection: an update of theory and data. Cognitive, Affective, & Behavioral
Neuroscience 7:367-379.
D'Esposito, M., B.R. Postle, J. Jonides, and E.E. Smith. 1999. The neural
substrate and temporal dynamics of interference effects in working memory
as revealed by event-related functional MRI. Proceedings of the National
Academy of Sciences 96:7514-7519.
Gonsalves, B.D., I. Kahn, T. Curran, K.A. Norman, and A.D. Wagner. 2005.
Memory strength and repetition suppression: Multimodal imaging and medial
temporal cortical contributions to recognition. Neuron 47:751-781.
Jonides, J., and D.E. Nee. 2006. Brain mechanisms of proactive interference
in working memory. Neuroscience 139:181-193.
Kanwisher, N., and G. Yovel. 2006. The fusiform face area: a cortical region
specialized for the perception of faces. Philosophical Transactions of the Royal
Society B: Biological Sciences 361:2109-2128.
Kim, H., L.H. Somerville, T. Johnstone, S. Polis, A.L. Alexander, L.M. Shin,
and P.J. Whalen. 2004. Contextual modulation of amygdala responsivity to
surprised faces. Journal of Cognitive Neuroscience 16:1730-1745.
Kozel, F.A., T.M. Padgett, and M.S. George. 2004. A replication study of
the neural correlates of deception. Behavioral Neuroscience 118:852-856.
Langleben, D.D., J.W. Loughead, W.B. Bilker, K. Ruparel, A.R. Childress,
S.J. Busch, and R.C. Gur. 2005. Telling truth from lie in individual subjects
with fast event-related fMRI. Human Brain Mapping 26:262-272.
Levens, S.M., M. Saintilus, and E.A. Phelps. 2006. Prefrontal cortex mechanisms
underlying the interaction between emotion and inhibitory processes
in working memory. Cognitive Neuroscience Society, San Francisco, CA.
Levens, S.M., and E.A. Phelps. 2008. Emotion processing effects on interference
resolution in working memory. Emotion 8:267-280.
MacLeod, C.M. 1991. Half a century of research on the Stroop effect: An
integrative review. Psychological Bulletin 109:163-203.
Milham, M.P., M.T. Banich, E.D. Claus, and N.J. Cohen. 2003. Practice-related
effects demonstrate complementary roles of anterior cingulate and
prefrontal cortices in attentional control. Neuroimage 18:483-493.

LYING OUTSIDE THE LABORATORY 21


Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 99 of 144

Schacter, D.L., and S.D. Slotnick. 2004. The cognitive neuroscience of


memory distortion. Neuron 44:149-160.
Sharot, T., E.A. Martorella, M.R. Delgado, and E.A. Phelps. 2007. How per­
sonal experience modulates the neural circuitry of memories of September
11. Proceedings of the National Academy of Sciences 104:389-394.
Thompson-Schill, S.L.,]. Jonides, C. Marshuetz, E.E. Smith, M. D'Esposito,
J.P. Kan, R.T. Knight, and D. Swick. 2002. Effects offronta I lobe damage
on interference effects in working memory. Cognitive, Affective, Behavioral
Neuroscience 2:109-120.
Vuilleumier, P., J.L. Armony, J. Driver, and R.J. Dolan. 2001. Effects of at­
tention and emotion on face processing in the human brain: an event-related
fMRl study. Neuron 30:829-841.
Whalen, P.J., G. Bush, R.J. McNally, S. Wilhelm, S.C. McInerney, M.A.
Jenike, and S.L. Rauch. 1998. The emotional counting Stroop paradigm:
a functional magnetic resonance imaging probe of the anterior cingulate
affective division. Biological Psychiatry 44:1219-1228.

22 USING IMAGING TO IDENTIFY DECEIT


Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 100 of 144

CHAPTER 4

Actions Speak Louder than Images1

STEPHEN J. MORSE

INTRODUCTION

Law must answer two types of general questions: 1) What legal rules should
govern human interaction in a particular context? and 2) How should an in­
dividual case be decided? Scientific information, including findings from the
new neurosciences, can be relevant both to policy choices and to individual
adjudication. Most legal criteria are behavioral, however, including, broadly
speaking, actions and mental states, and it could not be otherwise. The goal
of law is to help guide and order the interactions between acting persons. Con­
sider criminal responsibility, the legal issue to which neuroscience is consid­
ered most relevant. Criminal prohibitions all concern culpable actions or omis­
sions and are addressed to potentially rational persons, not to brains. Brains
do not commit crimes. Acting people do. We do not blame and punish brains.
We blame and punish persons if they culpably violate a legal prohibition that
society has enacted. All legally relevant evidence, whether addressed to a pol­
icy choice or to individual adjudication, must therefore concern behavior en­
tirely or in large measure.
Behavioral evidence will almost always be more legally useful and proba­
tive than neuroscientific information. If no conflict exists between the two
types of evidence, the neuroscience will be only cumulative and perhaps super­
fluous. If conflict does exist between behavioral and neuroscientific informa­
tion, the strong presumption must be that the behavioral evidence trumps
the neuroscience. Actions speak louder than images. If the behavioral evidence
is unclear, however, but the neuroscience is valid and has legally relevant im­
plications, then the neuroscience may tip the decision-making balance. The
question is whether neuroscientific (or any other) evidence is legally relevant;
that is, whether it genuinely and precisely helps answer a question the law asks.
Consider the following examples of both types of questions, beginning
with a general legal rule. Should adolescents who culpably commit capital
murder when they are sixteen or seventeen years old qualify for imposition

1. The title of this paper is a precise copy of the title of an article by Apoorva Mandavilli that
appeared in Nature in 2006.


of the death penalty, or should that punishment be categorically barred for
this class of murderers? Recent neuroscience evidence has demonstrated that
the adolescent frontal cortex-roughly, the seat of judgment and behavioral
control-is not fully biologically mature. What is the relevance of this infor­
mation to deciding whether the death penalty should be imposed (see Roper
v. Simmons, 2005)?
Now consider the following case of individual adjudication. A sixty-three­
year-old businessman with no history of violence or other antisocial conduct
has a harsh argument with his wife. During the course of the argument, she
lunges at him and scratches his face. He grabs her, strangles her to death, and
then throws her out the window of their twelfth-story apartment. The husband
is charged with murder. He has the means to pay for a complete psychiatric
and neurological workup that discloses that he has a sizable but benign sub­
arachnoid cyst pressing on his frontal cortex. What is the relevance of this
finding to his culpability for homicide (see People v. Weinstein, 1992)?

FALSE STARTS

Some common misconceptions bedevil clear thinking about the relevance of
neuroscience to law: the belief that scientific discoveries necessitate particular
political or legal rules or institutions; the belief that neuroscientific explana­
tion of behavior or determinism generally poses a threat to the legal concept
of the person; and the belief that discovery of a cause for behavior means that
within our current responsibility practices the agent is not responsible for the
behavior, an error I have previously termed the "fundamental psycholegal
error" (Morse 1994).
Politics, morality, and law all concern how human beings should live to­
gether. They are the domains of practical reason and normativity. As a disci­
pline of theoretical reason, science can help us understand the causal variables
that constrain and shape human behavior. Such information can and should
inform our reasoning about how to live together, but it cannot dictate any
particular answer because how we should live is not a matter of theoretical
reason. Some moral theorists believe that we can deduce moral conclusions
from facts about the world, but this position is entirely controversial and those
who hold it often disagree about specific moral rules. Many people believe
that the new neuroscience suggests that a fully physical explanation of all human
behavior is possible and that human behavior is as determined as all the
rest of the phenomena of the universe. Some conclude in response that we
should adopt a fully consequential morality in which concepts like just deserts
have no justifiable place, but this conclusion does not ineluctably follow from
the truth of universal causation. Even if it did, it would not tell us which goods
to maximize, nor would it provide a source of normativity.
To take a more specific example, a recent, provocative study showed that
a particular type of brain damage was associated with a willingness directly to
take innocent life to achieve a net saving of lives (Koenigs et al. 2007). People
without such brain damage were willing indirectly to cause the death of an
innocent person to save more lives. If they had to kill the victim directly, however,
they stopped calculating and refused to take an innocent life even to save
net lives. If the study is valid in real-world conditions, it suggests that people
with "normal" brains do not consequentially calculate under some conditions
and perhaps it suggests that socializing them to do so might be difficult. But
this finding does not necessarily mean that people cannot be socialized to cal­
culate if we thought that such consequential calculation was desirable.
The new neuroscience joins a long list of contenders for a fully causal,
scientific explanation of human behavior, ranging from sociological to psychological
to biological theories. Such explanations are thought to be threats
to the law's conception of the person and responsibility. Neuroscience con­
cerns the brain-the biological source of our humanity, personhood, and sense
of self-and it seems to render the challenge to the legal concept of the per­
son more credible. The challenge arises in two forms. The first does not deny
that we are the types of creatures we think we are, but it simply assumes that
responsibility and all that it implies are impossible if determinism is true. This
is a familiar claim. The second challenge denies that we are the type of crea­
tures we think we are and that is presupposed by law and morality. This is a
new and potentially radical claim. Neither succeeds at present, however.
The dispute about whether responsibility is possible in a deterministic
world has been ongoing for millennia, and no resolution is in sight (Kane
2005). No uncontroversial definition of determinism has been advanced, and
we will never be able to confirm that it is true or not. As a working definition,
however, let us assume, roughly, that all events have causes that operate
according to the physical laws of the universe and that they were themselves
caused by those same laws operating on prior states of the universe in a con­
tinuous thread of causation going back to the first state. Even if this is too
strong, the universe seems sufficiently regular and lawful that rationality
demands that we must adopt the hypothesis that universal causation is ap­
proximately correct. The English philosopher, Galen Strawson, calls this the
"reality constraint" (Strawson 1989). If determinism is true, the people we
are and the actions we perform have been caused by a chain of causation over
which we mostly had no rational control and for which we could not possi­
bly be responsible. We do not have contra-causal freedom. How can respon­
sibility be possible for action or for anything else in such a universe? How can
it be rational and fair for civil and criminal law to hold anyone accountable
for anything, including blaming and punishing people because they allegedly
deserve to be blamed and punished?
Three common positions are taken in response to this conundrum: meta­
physical libertarianism, hard determinism, and compatibilism. Libertarians
believe that human beings possess a unique kind of freedom of will and action
according to which they are "agent originators" or have "contra-causal free­
dom." In short, they are not determined and effectively able to act uncaused
by anything other than themselves (although they are of course influenced by
their time and place and can only act on opportunities that exist then). The
buck stops with them. Many people believe that libertarianism is a founda­
tional assumption for law. They believe that responsibility is possible only if
we genuinely possess contra-causal freedom. Thus, if we do not have this

extraordinary capacity, they fear that many legal doctrines and practices, es­
pecially those relating to responsibility, may be entirely incoherent. Nonethe­
less, metaphysical libertarianism is not a necessary support for current respon­
sibility doctrines and practices. All doctrines of criminal and civil law are fully
consistent with the truth of determinism (Morse 2007). Moreover, only a
small number of philosophers and scientists believe that human beings pos­
sess libertarian freedom of action and will, which has been termed a "pan­
icky" metaphysics (Strawson 1982) because it is so implausible (Bok 1998).
Hard determinists believe that determinism is true and is incompatible
with responsibility. Compatibilists also believe that determinism is true but
claim that it is compatible with responsibility. For either type of determinist,
biological causes, including those arising from the brain, pose no new or more
powerful general metaphysical challenge to responsibility than nonbiological
or social causes. As a conceptual and empirical matter, we do not necessarily
have more control over psychological or social causal variables than over bio­
logical causal variables. More important, in a world of universal causation or
determinism, biological causation creates no greater threat to our life hopes
than psychological or social causation. For purposes of the metaphysical free­
will debate, a cause is just a cause, whether it is neurological, genetic, psycho­
logical, sociological, or astrological. Neuroscience is simply the newest "bogey"
in a dispute about the general possibility of responsibility that has been ongo­
ing for millennia. It certainly is more scientifically respectable than earlier
bogeys, such as astrology and psychoanalysis, and it certainly produces com­
pelling representations of the brain (although these graphics are almost always
misleading to those who do not understand how they are constructed). But
neuroscience evidence for causation does no more work in the general free­
will/responsibility debate than other kinds of causal evidence.
Hard determinism does not try either to explain or to justify our responsibility
concepts and practices; it simply assumes that genuine responsibility is
metaphysically unjustified. For example, a central hard determinist argument
is that people can be responsible only if they could have acted otherwise than
they did, and if determinism is true, they could not have acted other than they
did (Wallace 1994). Consequently, the hard determinist claims that even if
an internally coherent account of responsibility and related practices can be
given, it will be a superficial basis for responsibility, which is allegedly only
an illusion (Smilansky 2000). There is no "real" or "ultimate" responsibility.
Hard determinists concede that Western systems of law and morality hold
some people accountable and excuse others, but the hard determinist argues
that these systems have no justifiable basis for distinguishing genuinely respon­
sible from nonresponsible people. Hard determinists sometimes accept respon­
sibility ascriptions because doing so may have good consequences, but they
still deny that people are genuinely responsible and robustly deserve praise
and blame and reward and punishment.
Hard determinism thus provides an external critique of responsibility. If
determinism is true and is genuinely inconsistent with responsibility, then no
one can ever be really responsible for anything and desert-based responsibility
attributions cannot properly justify further action. The question, then, is
whether as rational agents we must swallow our pride, accept hard determinism
because it is so self-evidently true, and somehow transform the legal system
and our moral practices accordingly.
Compatibilists, who agree with hard determinists that determinism is true,
have three basic answers to the incompatibilist challenge. First, they claim that
responsibility attributions and related practices are human activities constructed
by us for good reason and that they need not conform to any ultimate
metaphysical facts about genuine or "ultimate" responsibility. Indeed, some
compatibilists deny that conforming to ultimate metaphysical facts is even a
coherent goal in this context. Second, compatibilism holds that our positive
doctrines of responsibility are fully consistent with determinism. Third, com­
patibilists believe that our responsibility doctrines and practices are norma­
tively desirable and consistent with moral, legal, and political theories that
we firmly embrace. The first claim is theoretical; the third is primarily nor­
mative. Powerful arguments have been advanced for the first and third claims
(Lenman 2006; Morse 2004). For the present purpose, however, which is ad­
dressed to whether free will is really foundational for law, the second claim is
the most important.
The capacity for rationality is the primary responsibility criterion, and its
lack is the primary excusing condition. Human beings have different capaci­
ties for rationality in general and in specific contexts. For example, young
children in general have less developed rational capacity than adults. Ration­
ality differences also differentially affect agents' capacity to grasp and to be
guided by good reason. Differences in rational capacity and its effects are real
even if determinism is true. Compulsion is also an excusing condition, but it
is simply true that some people act in response to external or internal hard
choice threats to which persons of reasonable firmness might yield, and most
people most of the time are not in such situations when they act. This is true
even if determinism is true and even if people could not have acted otherwise.
Consider the doctrines of criminal responsibility. Assume that the defen­
dant has caused a prohibited harm. Prima facie responsibility requires that the
defendant's behavior was performed with a requisite mental state. Some bodily
movements are intentional and performed in a state of reasonably integrated
consciousness. Some are not. Some defendants possess the requisite mental
state, the intent to cause a prohibited harm such as death. Some do not. The
truth of determinism does not entail that actions are indistinguishable from
nonactions or that different mental states do not accompany action. These facts
are true and make a perfectly rational legal difference even if determinism is
true. Determinism is fully consistent with prima facie guilt and innocence.
Now consider the affirmative defenses of insanity and duress. Some peo­
ple with a mental disorder do not know right from wrong. Others do. In cases
of potential duress, some people face a hard choice that a person of reason­
able firmness would yield to. These differences make perfect sense according
to dominant retributive and consequential theories of punishment. A causal
account can explain how these variations were caused, but it does not mean
that these variations do not exist. Determinism is fully consistent with both
the presence and absence of affirmative defenses. In sum, the legal criteria
used to identify which defendants are criminally responsible map onto real
behavioral differences that justify differential legal responses.
In their widely noted paper, Joshua Greene and Jonathan Cohen (2004)
take issue with the foregoing account of the positive foundations of legal re­
sponsibility. They suggest that despite the law's official position, most people
hold a dualistic, libertarian view of the necessary conditions for responsibility
because "vivid scientific information about the causes of criminal behavior leads
people to doubt certain individuals' capacity for moral and legal responsibility"
(Greene and Cohen 2004, p. 1776). To prove their point, they use the
hypothetical of "Mr. Puppet," a person who has been genetically and environmentally
engineered to be a specific type of person. Greene and Cohen correctly
point out that Mr. Puppet is really no different from an identical person
I call Mr. Puppet2, who became the same sort of person without intentional
intervention. Yet most people might believe that Mr. Puppet is not responsi­
ble. If so, however, should Mr. Puppet2 also not be responsible? After all,
everyone is a product of a gene/environment interaction. But would it not
then follow, as Greene and Cohen claim, that no one is responsible?
Greene and Cohen are correct about ordinary peoples' intuitions, but
people make the fundamental psycholegal error (Morse 1994) all the time.
That is, they hold the erroneous but persistent belief that causation is per se
an excusing condition. This is a sociological observation and not a justification
for thinking causation or determinism does or should excuse behavior.
Whether the cause for behavior is biological, psychological, sociological, or
astrological, or some frothy brew of all of these does not matter. In a causal
universe, all behavior is presumably caused by its necessary and sufficient
causes. A cause is just a cause. If causation excused behavior, no one could
ever be responsible. Our law and morality do hold some people responsible
and excuse others. Thus causation per se cannot be an excusing condition,
no matter how much explanatory and predictive power a cause or set of
causes for a particular behavior might have. The view that causation excuses
per se is inconsistent with our positive doctrines and practices. Moreover, if
Mr. Puppet and Mr. Puppet2 are both rational agents, the argument I have
provided suggests that they are both justifiably held responsible. The lure of
purely mechanistic thinking about behavior when causes are discovered is
powerful but should be resisted.
At present, the law's "official" position about persons, action, and responsibility
is justified unless and until neuroscience or any other discipline demonstrates
convincingly that we are not the sorts of creatures we and the law think
we are-conscious and intentional creatures who act for reasons that play a
causal role in our behavior-and thus that the foundational facts for respon­
sibility ascriptions are mistaken. If it is true, for example, that we are all auto­
mata, then no one is an agent, no one is acting and, therefore, no one can be
responsible for action. But none of the stunning discoveries in the neurosciences
or their determinist implications have yet begun to justify the belief
that we are radically mistaken about ourselves. Let us therefore return to the
proper understanding of the relation between neuroscience and law, again
using criminal responsibility as the most powerful example.


ACTIONS AND IMAGES

The criteria for legal excuse and mitigation-like all legal criteria-are behav­
ioral, including mental states. For example, lack of rational capacity is a gener­
ic excusing condition, which explains why young children and some people
with mental disorder or dementia may be excused if they commit crimes. For
another example, as Justice Oliver Wendell Holmes wrote long ago, "Even a
dog distinguishes between being stumbled over and being kicked." Mental
states matter to our responsibility for action. Take the insanity defense, for
example, which excuses some people with mental disorder who commit crimes.
The defendant will not be excused simply because he or she is suffering from
mental disorder, no matter how severe it is. The defendant will not be excused
simply because disordered thinking affected the defendant's reasons for action.
Rather, the mental disorder must produce substantial lack of rational capacity
concerning the criminal behavior in question. All insanity defense tests are
primarily rationality tests. Lack of rational capacity is doing the excusing work.
Mental disorder that plays a role in explaining the defendant's behavior
may paradoxically not have any effect on responsibility at all. Imagine a clini­
cally hypomanic businessperson who, as a result of her clinical state, has really
high attention, energy, and the like, and who makes a contract while in that
state. If the deal turns out to be less advantageous than she thought, the law
will not allow her to avoid that contract even though she made it under the
influence of her mood disorder. Why? Because the businessperson was per­
fectly rational when she made the contract. Indeed, her hypomania might
have made her "hyper-rational." Here is another example from criminal law.
Imagine a person with paranoia who is constantly scanning his environment
for signs of impending danger. Because the person is hypervigilant, he identifies
a genuine and deadly threat to his life that ordinary people would not
have perceived. If the person acts in self-defense, he is fully rational and his
behavior would be justified. In this case, again, the mental abnormality made
the agent "hyper-rational" in the circumstances.
Potentially legally relevant neuroimaging studies attempt to correlate
brain activity with behavior, including mental states. In other words, legally
relevant neuroscience must begin with behavior. We seek brain images associ­
ated with behaviors that we have already identified on normative, moral,
political, and social grounds as important to us. For example, we recognize
that adolescents behave differently from adults. They appear to be more im­
pulsive and peer-oriented. They appear, on average, to be less fully rational
than adults. These differences seemingly should make a moral and legal dif­
ference concerning, for example, criminal responsibility or the age at which
people can drink or make independent health-care decisions. These differ­
ences also make us wonder if, in part, neuroanatomical or neurophysiological
causal explanations might exist for the behavioral differences already identi­
fied as important to us.
Indeed, there is a parallel between the use of neuroscience for legal pur­
poses and the development of cognitive neuroscience itself. Psychology does
and must precede neuroscience when human behavior is in question (Hatfield,
2000).2 Brain operations can be divided into various localities and subfunctions.
The investigation of these constitutes the field of neuroscience. Some of
the functions the brain implements are mental functions, such as perception,
attention, memory, emotions, and planning. Psychology is broadly defined as
the experimental science that directly studies mental functions. Therefore,
psychology is the primary discipline investigating a major subset of brain
functioning, including those functions that make us most distinctly human.
These are also the types of functions that are therefore most relevant to law,
because law is a human construction that is meant to help order human inter­
action. On occasion, inferring function from structure or physiology might
be possible. In most cases, however, general knowledge or conjecture about
function guides the investigation of structure and physiology. This will be
especially true as we move "from the outside in." That is, it will be especially
true as we study complex, intentional human behavior as opposed to, say, the
perceptual apparatus. Lastly, therefore, psychology is the royal road to brain
science in those areas that make us most distinctly human and that are most
relevant to law.
When we evaluate what might be legally relevant brain science, we will
be limited by the validity of the psychology upon which the brain science is
based. As most-indeed, as all-honest neuroscientists and psychologists will
admit, we wish that our psychological constructs and theories were better than
they are. Thus, the legal helpfulness of neuroscience is limited.
Despite the limitations just described, neuroscience can sometimes be of
assistance in helping us decide what a general rule should be and in adjudicating
individual cases. Identifying brain correlates of legally relevant criteria is
seldom necessary, or even helpful, when we are trying to define a legal stan­
dard if the behavioral difference is already clear. If the behavioral difference is
not clear, then the neuroscience does not help, because the neuroscience must
always begin with a behavior or behavioral difference that we have already
identified as important.
For example, we have known that the rational capacity of adolescents is
different from adults. Juvenile courts have existed for over a hundred years,
well before anyone thought about neuroimaging the adolescent brain. The
common law treated juveniles differently from adults for hundreds of years
before we had any sense of neuroscience. People had to be of a certain age
to vote, to drink, to join the army, and to be criminally responsible long
before anyone envisioned functional magnetic resonance imaging (fMRI). If
the rational capacity difference between adults and adolescents were less clear,
then neuroscience could not tell us whether to treat adolescents differently,
even if we believed that rationality made a difference. Whether adolescents
are sufficiently different from adults so that they should be treated legally

2. What follows, in which I draw a parallel between the use of neuroscience for legal purposes
and the development of cognitive neuroscience itself, borrows from and liberally paraphrases
an excellent article by Hatfield (2000). What I will suggest is not meant to be critical or dis­
missive of neuroscience or of any other science. Indeed, I firmly believe that most neuro­
science is genuinely excellent science. Nonetheless, much as legally relevant neuroscience
must begin with identification of the behavior that is normatively relevant, so psychology
does and conceptually must precede neuroscience.


differently is a behavioral and normative question in the first instance. Once
the behavioral difference is established, at most the neuroscience concerning
the biological immaturity of the adolescent prefrontal cortex does nothing
more than provide a partial biological causal explanation of a normatively
relevant behavioral difference.
At this point one might object that the law draws bright, categorical
lines when it is responding to behavioral continua and thus the law obscures
important individual behavioral and moral differences. For example, although
adolescents and adults on average demonstrate rationality differences that we
think are morally and legally important, rationality is a continuum capacity
and the adolescent and adult curves overlap, especially at the adolescent/adult
margin. That is, some adults are less rational than some adolescents and some
mid-to-late adolescents appear fully rational. Yet the law may create a bright-line
category difference, as it does in the case of capital punishment. No capital
killer who committed murder when he or she was sixteen or seventeen
years old may be put to death, no matter how rational the adolescent may
have been at the time. The law draws such bright lines because sometimes
the costs of individualized decision making are too high and society is simply
better off with a bright-line rule. This does not mean, however, that the law
does not care about the behavioral differences when creating a general rule.
In what types of individual cases can neuroscience help? First, the data
must be generally scientifically valid. They have to show precise correlations
between brain states or activity and reliable and valid measures of legally rele­
vant behavior, such as rational capacity. Such validity is increased if there is
little overlap between the brain and behavior links of the groups being con­
trasted. In other words, the greater the overlap between the brain activity of
people who do and do not meet a legal criterion, the greater will be the diffi­
culty of using the scan of an individual to decide on which side of the line
the person falls. And in nearly all cases, overlap will occur. Further, the tech­
nique and data must be valid for the individual case in question. For exam­
ple, suppose the legal question is retrospective, such as determining a crimi­
nal defendant's mental state in the past at the time of the crime. Is a present
scan a valid indication of what the defendant's mental state was during the
criminal event?
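The point about overlap can be put numerically. The sketch below is a hypothetical illustration, not anything specified in the text: it assumes, purely for the sake of the example, that some brain measure is normally distributed with equal variance in the two groups (those who do and do not meet a legal criterion), and the function name is mine. As the two distributions draw closer together, the best a single cutoff can do deteriorates rapidly.

```python
# Hypothetical sketch: a brain measure assumed normal with equal variance
# in two groups. With a single cutoff placed midway between the group
# means, the per-group misclassification rate is the standard-normal tail
# area beyond half the separation between the means.
from statistics import NormalDist

def midpoint_error_rate(separation_in_sd: float) -> float:
    """Per-group error at the midpoint threshold, for unit-variance groups
    whose means are `separation_in_sd` standard deviations apart."""
    return NormalDist().cdf(-separation_in_sd / 2)

for sep in (3.0, 2.0, 1.0, 0.5):
    print(f"means {sep:.1f} SD apart -> "
          f"{midpoint_error_rate(sep):.1%} of each group misclassified")
```

With well-separated groups (three standard deviations apart) fewer than one case in ten is misclassified; at half a standard deviation of separation the best cutoff misclassifies roughly four cases in ten, which is the quantitative sense in which overlap defeats the use of a scan to place an individual on one side of a legal line.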
Assume for the purposes of argument that we can solve both types of
validity problems. If the behavioral evidence is clear, the neuroscience will be
at most cumulative; it might have particular rhetorical force with a decision
maker who is unsophisticated about how fMRI images are generated and the
like. These images are not pictures of the brain, despite common belief that
they are. Nonetheless, this evidence is superfluous. The expense of neuro­
imaging techniques and the inability successfully to use them with all poten­
tial subjects is reason not to use neuroscience in cases in which the behavior
is clear. After all, the behavioral evidence is the most direct and probative evi­
dence of behavioral criteria. Consequently, if we do use neuroimaging and
the behavioral evidence and neuroscience evidence conflict, we must believe
the behavioral evidence.

ACTIONS SPEAK LOUDER THAN IMAGES 31



For example, if the neuroscientific evidence suggests that a criminal de­
fendant has a rationality defect but the behavioral evidence indicates no de­
fect whatsoever, we should conclude that the neuroscientific evidence is in­
valid in this case. Actions speak louder than images. Consider the following
analogy. If a person does not complain of lower back pain or reduced mobili­
ty, we can safely conclude that she has no clinically significant lower back
problem, even if an MRI of the spine shows substantial abnormalities. And, if
the person shows clear signs of pain and reduced mobility, a problem exists
even if the spine looks clean. Likewise, if the result of an IQ test does not
accord with the subject's behavior, believe the behavior.
Here is an example from my own experience as a consultant forensic psy­
chologist in a criminal case in which the defendant claimed that she was too
unintelligent to be able to form the mental states required by the definition
of the offense. She had taken apparently valid IQ tests indicating that her IQ
was in the middle 60s-what is usually termed "mild retardation"-and no
one could determine whether she was faking. My intervention was simple. I
asked whether she had ever applied for a job for which she had to take some
kind of screening test? Did she have kids in school, and if so, had she attend­
ed parent/teacher conferences? After all, teachers are really good at evaluat­
ing intelligence. The real-world data were collected and demonstrated with­
out question that the defendant had an intelligence far above average.
To take a final example, suppose you were concerned with racial discrimi­
nation on the job. Research by Mahzarin Banaji, Elizabeth Phelps, and oth­
ers has found, roughly speaking, that peoples' brain activity differentially re­
sponds to faces depending on whether the face presented is the same race or
a different race from the subject. Should we assume a person is a racist or
wrongly discriminates because his or her brain activates differentially? Sup­
pose she has never expressed racist attitudes and her behavioral history shows
that she always behaves equitably in the real world. Is the person a racist? Or,
at least, for legal purposes, should we care about the differential brain activity?
What is the role of neuroevidence in cases in which the behavioral evi­
dence is unclear or ambiguous? In such cases, valid neural markers and prox­
ies may be extremely helpful evidence for determining whether a legal criteri­
on is met. Neural indicators will rarely be dispositive because they will not be
perfect markers, but they might genuinely be probative. On the other hand,
the sensitivity or specificity of such markers might not be that high for specific
legal criteria. If so, caution is warranted.
Now reconsider the case of the man who strangled his wife to death and
threw her body out the window. He had a clear abnormality-a large benign
subarachnoid cyst pressing on his frontal cortex-that was present at the time
of the crime. Such a finding would certainly not be inconsistent with a ration­
ality defect. But there was no hint that he suffered from a major mental dis­
order or had any substantial rationality defect either before or after the time
of the crime. Some evidence existed that perhaps he had some impulse con­
trol problems, but nothing that had ever seriously interfered with his life or
caused troubles with the law. Mild impulse control problems are not an ex­
cusing condition in any case. Sometimes otherwise law-abiding, rational, and
responsible people just "lose" it. If the behavioral history had been more prob­
lematic, however, then the potential effect of the cyst might well have caused
us more readily to believe that he did suffer from a rationality defect. Similarly,
in a case in which intelligence is clearly relevant, if the real-world evidence
about intelligence is ambiguous, scientific tests of intelligence, whether psy­
chological or neuroscientific, would surely be helpful.

THE FUTURE OF IMAGING AND THE LAW

I have so far been arguing for a cautious, somewhat deflationary stance toward
the potential of neuroscience to help us decide general and specific legal is­
sues, but I do not mean to suggest I am a radical skeptic, cynic, or the like.
I am not. I do not know what science is going to discover tomorrow. In the
future we might well find grounds for greater optimism about broader legal
relevance. But we have to be extraordinarily sensitive to the limits of what neu­
roscience can contribute to the law at present.
How should the law proceed? What is the danger of images? The power
of images to persuade might be greater than their legal validity warrants. First,
images might not be legally relevant at all. We must always carefully ask what
precise legal question is under consideration and then ask whether the image
or any other neuroscience (or other type of) evidence actually helps us answer
this precise question. The image might indicate something interesting, and it
might be a vivid, compelling representation, but does it precisely answer our
legal question?
Second, once naive subjects, such as average legislators, judges, and jurors,
see images of the brain that appear correlated to the behavior in question, they
tend to fall into the trap that I call the "lure of mechanism" or to make the
fundamental psycholegal error discussed previously. That is, they tend to be­
lieve that causation is an excuse, especially if the brain seems to play a causal
role. In a wonderful recent study (Knutson et al. 2007), researchers were able
to predict with astonishing accuracy, depending on what part of the brain was
active, whether the subject would or would not make a choice to buy a con­
sumer item. The title of John Tierney's article reporting on the study in the
New York Times-an excellent example of an educated layperson's response­
asked, "The Voices in My Head Say 'Buy It!' Why Argue?" Tierney conclud­
ed, "You might remove the pleasure of shopping by somehow dulling the
brain's dopamine receptors ... but try getting anyone to stay on that med­
ication. Better the occasional jolt of pain. Charge it to the insula." (Tierney
2007). Note the implication of mechanism: When you shop, you are not an
acting agent but are at the mercy of your brain anatomy and physiology. You,
the acting agent, the shopper, did not decide whether to buy. Your brain did.
We are just brains in a mall. But we must resist the lure of mechanism. Brains
do not shop; people do.
As a result, should we exclude imaging evidence from the courtroom, or
should we, as we commonly do in the law, admit the evidence and trust cross­
examination to expose its strengths and weaknesses? In other words, is the
proper question the weight of such evidence or whether it should be admit-
ted at all? The answer should depend on the relevance and strength of the
science. If the legal relevance of the science is established and the science is
quite good, my preference would be to admit the evidence and let the experts
dispute its worth before the neutral adjudicator, the judge or the jury. But
two criteria must be met first: The science must be both legally relevant and
sound.

REFERENCES

Bok, H. 1998. Freedom and Responsibility. Princeton University Press.
Greene, J., and J. Cohen. 2004. For the law, neuroscience changes nothing
and everything. Philosophical Transactions of the Royal Society B 359:1775-1785.
Hatfield, G. 2000. The brain's 'new' science: Psychology, neurophysiology,
and constraint. Philosophy of Science (Proceedings) 67:S388-S403.
Kane, R. 2005. A Contemporary Introduction to Free Will. Oxford University
Press.

Koenigs, M., L. Young, R. Adolphs, et al. 2007. Damage to the prefrontal
cortex increases utilitarian judgments. Nature 446:908-911.
Knutson, B., S. Rick, G. E. Wimmer, D. Prelec, and G. Loewenstein. 2007.
Neural predictors of purchases. Neuron 53:147-156.
Lenman, J. 2006. Compatibilism and contractualism: The possibility of moral
responsibility. Ethics 117:7-31.
Mandavilli, A. 2006. Actions speak louder than images. Nature 444:664-665.
Morse, S. J. 2007. The non-problem of free will in forensic psychiatry and
psychology. Behavioral Sciences & the Law 25:203-220.
Morse, S. J. 2004. Reason, results and criminal responsibility. Illinois Law
Review 2004:363-444.
Morse, S. J. 1994. Culpability and control. University of Pennsylvania Law
Review 142:1587-1660.
People v. Weinstein, 154 Misc. 2d 34 (NY 1992).
Roper v. Simmons, 543 U.S. 551 (2005).
Smilansky, S. 2000. Free Will and Illusion. Oxford University Press.
Strawson, P. F. 1982. Freedom and resentment. In Free Will, ed. G. Watson,
59-80. Oxford University Press.
Strawson, G. 1989. Consciousness, free will, and the unimportance of
determinism. Inquiry 32:3-27.
Tierney, J. 2007, January 16. The voices in my head say 'Buy it!' Why argue?
New York Times at F1.
Wallace, R. J. 1994. Responsibility and the Moral Sentiments. Harvard
University Press.


CHAPTER 5

Neural Lie Detection in Courts
WALTER SINNOTT-ARMSTRONG

Getting scientists and lawyers to communicate with each other is not easy.
Getting them to talk is easy, but communication requires mutual understand­
ing-and that is a challenge.

LEGAL LINES ON SCIENTIFIC CONTINUA

Scientists and lawyers live in different cultures with different goals. Courts
and lawyers aim at decisions, so they thrive on dichotomies. They need to
determine whether defendants are guilty or not, liable or not, competent or
not, adult or not, insane or not, and so on. Many legal standards implicitly
recognize continua, such as when prediction standards for various forms of
civil and criminal commitment speak of what is "highly likely" or "substan­
tially likely," but in the end courts still need to decide whether the probabili­
ty is or is not high enough for a certain kind of commitment. The legal sys­
tem, thus, depends on on-off switches. This generalization holds for courts,
and much of the legal world revolves around court decisions.
Nature does not work that way. Scientists discover continuous probabili­
ties on multiple dimensions.1 An oculist, for example, could find that a patient
is able to discriminate some colors but not others to varying levels of accuracy
in various circumstances. The same patient might be somewhat better than av­
erage at seeing objects far away in bright light but somewhat worse than aver­
age at detecting details nearby or in dim light. Given so many variations in vi­
sion, if a precise scientist were asked, "Is this particular person's vision good?"
he or she could respond only with "It's good to this extent in these ways."
The legal system then needs to determine whether this patient's vision is
good enough for a license to drive. That is a policy question. To answer it,
lawmakers need to determine whether society can live with the number of
accidents that are likely to occur if people with that level of vision get driver's
licenses. The answer can be different for licenses to drive a car or a school bus
or to pilot a plane, but in all such cases the law needs to draw lines on the
continua that scientists discover.

1. This point is generalized from Fingarette (1972, 38-39).


The story remains the same for mental illness. Modern psychiatrists find
large clusters of symptoms that vary continuously along four main dimen­
sions. 2 Individual patients are more or less likely to engage in various kinds
of behaviors within varying times in varying circumstances. For therapeutic
purposes, psychiatrists need to locate each client on the distinct dimensions,
but they do not need to label any client simply as insane or not.
Psychiatrists also need not use the terms "sane" and "insane" when they
testify in trials involving an insanity defense. One example among many is the
Model Penal Code test, which holds that a defendant can be found not guilty
by reason of insanity if he lacks substantial capacity to appreciate the wrong­
fulness of his conduct or to conform his conduct to the requirements of the
law. This test cannot be applied with scientific techniques alone. If a defendant
gives correct answers on a questionnaire about what is morally right and wrong
but shows no skin conductance response or activity in the limbic system while
giving these answers, does that individual really "appreciate" wrongfulness?
And does this defendant have a "capacity" to appreciate wrongfulness if he
does appreciate it in some circumstances but not others? And when is that
capacity "substantial"? Questions like these drive scientists crazy.
These questions mark the spot where science ends and policy decisions
begin. Lawyers and judges can recognize the scientific dimensions and con­
tinua, but they still need to draw lines in order to serve their own purposes
in reaching decisions. How do they draw a line? They pick a vague area and a
terminology that can be located well enough in practice and that captures
enough of the right cases for society to tolerate the consequences. Where law­
makers draw the line depends both on their predictions and on their values.
Courts have long recognized that the resulting legal questions can be
confusing to psychiatrists and other scientists because their training lies else­
where. Scientists have no special expertise on legal or policy issues. That is
why courts in the past usually did not allow psychiatrists to testify on ultimate
legal issues in trials following pleas of not guilty by reason of insanity. This re­
striction recently was removed in federal courts, but there is wisdom in the
old ways, when scientists gave their diagnoses in their own scientific terms
and left legal decisions to legal experts. In that system, scientists determine
which dimensions are predictive and where a particular defendant lies on those
continua. Lawyers then argue about whether that point is above or below
the legal cutoff that was determined by judges or legislators using policy
considerations. That system works fine as long as the players stick to their
assigned roles.
This general picture applies not just to optometry and psychiatry but to
other interactions between science and law, including neural lie detection.
Brain scientists can develop neural methods of lie detection and then test
their error rates. Scientists can also determine how much these error rates
vary with circumstances, because some methods are bound to work much
better in the lab than during a real trial. However, these scientists have no
special expertise on the question of whether those error rates are too high to

2. These dimensions are standardized in American Psychiatric Association (2000).


serve as legal evidence. That is a policy question that depends on values; it is
not a neutral scientific issue. This is one reason why neuroscientists should
not be allowed to testify on the ultimate question of whether a witness is or
is not lying.
Lying might appear different from insanity because insanity is a norma­
tive notion, whereas lying is not normative at all. A lie is an intentional decep­
tion without consent in order to induce reliance. Does the person who lies
really believe that what he or she said is false? Well, he or she ascribes a proba­
bility that varies on a continuum. Does the speaker intend to induce belief
and reliance? Well, that will not be clear if the plans are incomplete, indeter­
minate, or multiple. Does mutual consent exist, as in a game or some busi­
nesses? Well, varying degrees of awareness exist. Some cases are clear-maybe
most cases. Nonetheless, what counts as a lie is partly a normative question
that lies outside the expertise of scientists qua scientists. That is one reason
why scientists should not be allowed to testify on that ultimate issue of lying.
Their testimony should be restricted to their expertise.

FALSE NEGATIVES VERSUS FALSE POSITIVES

Although scientists can determine error rates for methods of lie detection,
the issue is not so simple. For a given method in given circumstances, scien­
tists distinguish two kinds of errors. The first kind of error is a false positive
(or false alarm), which occurs when the test says that a person is lying but he
or she really is not lying. The second kind of error is a false negative (or a
miss), which occurs when the test says that a person is not lying but he or
she really is lying. The rate of false positives determines the test's specificity,
whereas the rate of false negatives determines the test's sensitivity.
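Each of these test characteristics is just the complement of one of the two error rates. A minimal sketch in plain Python (the two example rates below are arbitrary illustrative inputs, not figures from any particular study):

```python
# Sensitivity and specificity as complements of a lie-detection test's
# two error rates (illustrative numbers only).

def sensitivity(false_negative_rate: float) -> float:
    """Fraction of actual liars the test correctly flags as lying."""
    return 1.0 - false_negative_rate

def specificity(false_positive_rate: float) -> float:
    """Fraction of truth-tellers the test correctly clears."""
    return 1.0 - false_positive_rate

# Example: a 31% miss rate and a 16% false-alarm rate.
print(f"sensitivity = {sensitivity(0.31):.2f}")   # 0.69
print(f"specificity = {specificity(0.16):.2f}")   # 0.84
```

The two numbers are logically independent, which is why, as the next paragraph notes, they can differ widely for the same test and must be weighed separately.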
These two error rates can differ widely. For example, elsewhere in this
volume Nancy Kanwisher cites a study of one method of neural lie detection
where one of the error rates was 31 percent and the other was only 16 per­
cent. The error rate was almost twice as high in one direction as in the
other. When error rates differ by so much, lawmakers need to consider each
rate separately. Different kinds of errors create different problems in different
circumstances. Lawmakers need to decide which error rate is the one that
matters for each particular use of neural lie detection.
Compare three legal contexts: In the first a prosecutor asks the judge to
let him use neural lie-detection techniques on a defense witness who has pro­
vided a crucial alibi for the defendant. The prosecutor thinks that this defense
witness is lying. Here the rate of false positives matters much more than the
rate of false negatives, because a false positive might send an innocent person
to prison, and courts are and should be more worried about convicting the
innocent than about failing to convict the guilty.
In contrast, suppose a defendant knows that he is innocent, but the trial
is going against him, largely because one witness claims to have seen the de­
fendant running away from the scene of the crime. The defendant knows that
this witness is lying, so his lawyer asks the judge to let him use neural lie de-

NEURAL LIE DETECTION IN COURTS 37



tection techniques on the accusing witness. Here the rate of false negatives
matters more than the rate of false positives because a false negative is what
might send an innocent defendant to prison.
Third, imagine that the defense asks the judge to allow as evidence the
results of neural lie detection on the accused when he says that he did not com­
mit the crime. Here the rate of false positives is irrelevant because the defen­
dant would not submit this evidence if the results were positive for lying.
Overall, then, should courts allow neural lie detection? If the rates of false
positives and false negatives turn out to differ widely (as I suspect they will),
then the values of the system might best be served by allowing some uses in
some contexts but forbidding other uses in other contexts. The legal system
might not allow prosecutors to force any witness to undergo lie detection, but
it still might allow prosecutors to use lie detection on some willing witnesses.
Or the law might not allow prosecutors to use lie detection at all, but it still
might allow defense attorneys to use lie detection on any witness or only on
willing or friendly witnesses. If not even those uses are allowed, then the rules
of evidence deprive the defense of a tool that, while flawed, could create a
reasonable doubt, which is all the defense needs. If the intent is to ensure
that innocent people are not convicted and if the defense volunteers to take
the chance, then why the law should categorically prohibit this imperfect tool
is unclear.
That judges would endorse such a bifurcated system of evidence is doubt­
ful, although why is not clear. Some such system might turn out to be opti­
mal if great differences exist between the rates of false negatives and false pos­
itives and also between the disvalues of convicting the innocent and failing to
convict the guilty. Doctors often distinguish false positives from false nega­
tives and use tests in some cases but not others, so why should courts not do
the same? At least this question is worth thinking about.

BASE RATES

A more general problem, however, suggests that courts should not allow any
neural lie detection. When scientists know the rates of false positives and false
negatives for a test, they usually apply Bayes's theorem to calculate the test's
positive predictive value, which is the probability that a person is lying, given
a positive test result. This calculation cannot be performed without using a
base rate (or prior probability). The base rate has a tremendous effect on the
result. If the base rate is low, then the predictive value is going to be low as
well, even if the rates of false negatives and of false positives seem reasonable.
This need for a base rate makes such Bayesian calculations especially prob­
lematic in legal uses of lie detection (neural or not). In lab studies the nature
of the task or the instructions to subjects usually determines the base rate. 3
However, determining the base rate of lying in legal contexts is much more
difficult.

3. For more on this, see Nancy Kanwisher's paper elsewhere in this volume.


Imagine that for a certain trial everyone in society were asked, "Did you
commit this crime?" Those who answered "Yes" would be confessing, so al­
most everyone, including the defendant, would answer "No." Only the per­
son who was guilty would be lying. Thus, the base rate of lying in the gener­
al population for this particular question is extremely low. Hence, given Bayes's
theorem, the test of lying might seem to have a low predictive value.
However, this is not the right way to calculate the probability. What real­
ly needs to be known is the probability that someone is lying, given that this
person is a defendant in a trial. How can that base rate be determined? One
way is to gather conviction rates and conclude that most defendants are guil­
ty, so most of them are lying when they deny their guilt. With this assump­
tion, the base rate of lying is high, so Bayes's theorem yields a high predic­
tive value for a method of lie detection with low enough rates of false nega­
tives and false positives. However, this assumption that most defendants are
guilty violates important legal norms. Our laws require us to presume that
each defendant is innocent until proven guilty. Thus, if a defendant is asked
whether he did it and he answers, "No," then our judicial system is legally
required to presume that he is not lying. The system should not, then, depend
on any calculation that assumes guilt or even a high probability of guilt. But
without some such assumption, one cannot justify a high enough base rate to
calculate a high predictive value for any method of neural lie detection of
defendants who deny their guilt.

CONCLUSION

A crystal ball would be needed to conclude that neural lie detection has no
chance of ever working or of being fair in trials. But many details need to be
carefully worked out before such techniques should be allowed in courts.
Whether the crucial issues can be resolved remains to be seen, but the way to
resolve them is for scientists and lawyers to learn to work together and com­
municate with each other.

REFERENCES

American Psychiatric Association. 2000. The diagnostic and statistical manual
of the American Psychiatric Association, fourth edition, revised. Arlington, VA:
American Psychiatric Publishing.
Fingarette, H. 1972. The meaning of criminal insanity. Berkeley: University
of California Press.


CHAPTER 6

Lie Detection in the Courts:
The Vain Search for the Magic Bullet
JED S. RAKOFF

The detection of truth is at the heart of the legal process.1 The purpose of a
trial, in particular, is to resolve the factual disputes on which a case turns, and
the trial culminates with the rendering of a "verdict," which in Latin means
"to state the truth."
Given the common tendency of witnesses to exaggerate, to embroider,
and, frankly, to lie,2 how does a fact finder determine the truth? Particularly
in the adversarial system of justice common to the Anglo-American tradition,
truth is revealed, and lying detected, by exposing witnesses to cross-examina­
tion; that is, to tough questioning designed to test the consistency and cred­
ibility of the witness's story. John Henry Wigmore, known to most lawyers
as the most profound expositor of the rules of evidence in the history of law,
famously described cross-examination as "the greatest legal engine ever invent­
ed for the discovery of truth."3 And while not everyone is specially trained
in the art of cross-examination, common experience-whether it be of parents
questioning children, of customers questioning salespersons, or of reporters
questioning politicians-suggests that nothing exposes fabrication like a good
tough questioning.
But no one supposes that cross-examination is a perfect instrument for
detecting the truth. For that matter, there probably has never been a scien­
tific study of how effective cross-examination is in detecting lies, and I am

1. This article, based on remarks made at the American Academy of Arts and Sciences's con­
ference on February 2, 2007, presents the author's personal views and does not reflect how
he might decide any legal issue in any given case.
2. In my remarks on which this article is based, I estimated that 90 percent of all material trial
witnesses knowingly lie in some respect (though not always materially). This estimate is based
on my experience as a trial judge for the past twelve years and as a trial lawyer for the twenty­
five years previous to that and has no scientific study to back it up. Possibly I am too harsh:
perhaps the percentage of material witnesses who consciously lie in some respect is as low as,
say, 89 percent.
3. John Henry Wigmore, Evidence In Trials At Common Law (Chadbourn Edition, 1974),
Vol. 5, p. 32.


not even sure how such a study could be devised. In any event, people have
always sought for some simpler, talismanic way of separating truth from false­
hood, and, relatedly, of determining guilt or innocence.
Thus, in medieval times, an accused who protested his innocence was put
to such tests as the ordeal by water, in which he was bound and thrown in the
river. The accused who had falsely protested his innocence would be rejected
by the water and would float, whereas the honest accused would be received
by the water and immediately sink. Usually he was fished out before he
drowned; but, if the rescue came too late, he at least died in a state of inno­
cence. If one accepted the basic religious theories on which these tests were
premised, the tests were infallible.
As the middle ages gave way to the Renaissance, the faith-based methods
of the ordeals were replaced by a more up-to-date, empirical method for de­
termining the truth: torture. Although some men and women, in their per­
versity, might deny the evil of which their accusers were certain they were
guilty, infliction of sufficient pain would lead them to admit, mirabile dictu,
exactly what the accusers had suspected.
After a while, however, it became increasingly obvious that torture was
leading to too many "false positives," and more accurate methods were sought.
In his treatise on evidence, Wigmore contends that cross-examination was
originally developed as an alternative to torture. 4 Today, of course, it is incon­
ceivable that anyone would recommend the use of torture.
From the seventeenth century onward, cross-examination, for all its limi­
tations, seemed to be the best the legal system had to offer as a means of de­
termining the truth. In the late nineteenth century, every schoolchild knew
the story of how Abe Lincoln, in defending a man accused of murder, care­
fully questioned the prosecution's eyewitness as to how he was able to see the
accused commit the murder in the dead of night and, when the witness said
it was because there was a full moon, produced an almanac showing that on
that night the moon was but a sliver. As Lincoln elsewhere said, "you can't
fool all of the people all of the time"-at least not when someone is around
to ask the hard questions.
But the late nineteenth century also witnessed the growing belief that all
areas of inquiry would ultimately yield to the power of science. It was just a
matter of time before an allegedly "scientific" instrument for detecting lies was
invented, namely, the polygraph.
The truth is that the polygraph is not remotely scientific. The theory of
the polygraph-itself largely untested-is that someone who is consciously ly­
ing feels anxiety, and that that anxiety, in turn, is manifested by an increase in
respiration rate, pulse rate, blood pressure, and sweating. Common experi­
ence suggests many possible flaws in this theory: more practiced liars might
feel little anxiety about lying; taking a lie detector test might itself generate
anxiety; sweating, pulse rate, blood pressure, and respiration rate are com­
monly affected by all sorts of other conditions, both external and internal;
and so forth. One might hypothesize, therefore, that polygraph tests, while
they might be better than pure chance in separating truth tellers from liars­

4. Ibid., p. 32ff.

LIE DETECTION IN THE COURTS 41

after all, some people might fit the theory-would nevertheless have a high
rate of error. As Marcus E. Raichle discusses elsewhere in this volume, that is
precisely what the National Academies (NAS), which in 2002 reviewed the
evidence on polygraph reliability, concluded. The NAS also concluded that
polygraph testing has "weak scientific underpinnings"5 and that "belief in its
accuracy goes beyond what the evidence suggests."6
Not all experts agree. Reviewing the literature in 1998, the Supreme
Court of the United States concluded that "the scientific community remains
extremely polarized about the reliability of polygraph techniques," with some
studies concluding that polygraphs are no better than chance at detecting lies
and, at the other extreme, one study concluding that polygraph results are
accurate about 87 percent of the time. But even a 13 percent error rate is a
high number when you are dealing with something as important as determin­
ing a witness's credibility, let alone determining whether he or she is guilty
or innocent of a crime.
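The weight of even a "13 percent error rate" can be made concrete with a little arithmetic. The sketch below is purely illustrative: the 10 percent base rate of lying and the assumption that the test errs equally on liars and truth tellers are hypothetical, not drawn from any study cited here.

```python
# Illustrative arithmetic only: the 10 percent base rate and the assumption
# that the test errs equally on liars and truth tellers are hypothetical.
def posterior_lying(accuracy, base_rate):
    """P(subject is lying | test says 'lying'), by Bayes' rule."""
    true_pos = accuracy * base_rate                # liars correctly flagged
    false_pos = (1 - accuracy) * (1 - base_rate)   # truth tellers flagged
    return true_pos / (true_pos + false_pos)

# With an 87-percent-accurate test and 1 examinee in 10 actually lying,
# a "deceptive" reading is wrong more often than it is right.
print(round(posterior_lying(0.87, 0.10), 2))  # 0.43
```

Only at a 50 percent base rate does the posterior match the 87 percent accuracy figure; the rarer actual lying is among those tested, the less a "deceptive" reading means.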
Moreover, all these error-rate statistics are suspect because the scientific
community is nowhere close to agreeing on how one properly establishes the
base measure for determining the reliability of the polygraph. To devise an
experiment in which one set of subjects is told to lie and the other set of sub­
jects is told to tell the truth is one thing; to recreate the real-life conditions
that would allow for a true test of the polygraph is quite something else.
Whether any sound basis exists on which one can assert anything useful about
the reliability or unreliability of the polygraph is uncertain.
Courts, being conservative and skeptical by nature, have largely tended
to exclude polygraph evidence. But that has not stopped the government, the
military, some private industry, and much of the public generally from accept­
ing the polygraph as reliable-so great is the desire for a "magic bullet" that
can instantly distinguish truth from falsehood.
Even the courts, while excluding polygraph evidence from the courtroom,
have sometimes approved its use by the police on the cynical basis that it real­
ly does not matter whether the polygraph actually detects lying, so long as
people believe that it does: if a subject believes that a polygraph actually works,
he or she will be motivated to tell the truth and "confess." The hypocrisy of
this argument is staggering: the argument, in effect, is that even if the truth
is that polygraph tests are, at best, error-prone, the police and other authori­
ties should lie to people and encourage them to believe that the tests are high­
ly accurate because this lie will encourage people to tell the truth.
Even on these terms, moreover, experience in my own courtroom suggests
that the use of polygraphs is much more likely to cause mischief, or worse,
than to be beneficial. Let me give just one example. The Millennium Hotel is
situated next to Ground Zero. A few weeks after the attack on the Twin Tow­
ers, hotel employees were allowed back into the hotel to recover the belong­
ings of the guests who had had to flee the premises on September 11, and one

5. The National Academies, National Research Council, "Polygraph Testing Too Flawed for
Security Screening," October 8, 2002, p. 2.
6. The National Academies, National Research Council, Committee to Review the Scientific
Evidence on the Polygraph, The Polygraph and Lie Detection (2003), p. 7.

42 USING IMAGING TO IDENTIFY DECEIT

of the hotel's security guards reported to the Federal Bureau of Investigation
(FBI) that he had found in the room safe on the fiftieth floor, in a room oc­
cupied by a man named Abdullah Higazy, a copy of the Koran and a pilot's
radio of the kind used to guide planes from the ground. The FBI quickly dis­
covered that Higazy was a former member of the Egyptian Air Force now
resident in Brooklyn, but when they questioned him, he denied ever having a
pilot's radio. Hypothesizing that he was lying to cover up his use of the
radio to guide the terrorist pilots to the Twin Towers, the FBI arrested Higazy
and brought him before me on a material witness warrant. At the hearing,
Higazy repeatedly asked to be given a polygraph test to establish that the
radio was not his. I explained to him that polygraph tests were too unreliable
to be admitted in court. Nevertheless, after the hearing, Higazy, over his
own lawyer's recommendation, asked the FBI to give him a polygraph test.
The FBI brought Higazy, alone, into the polygraph testing room, explain­
ing that his lawyer could not be present because it would upset the balance
of this "delicate" test. Over the next three hours, the FBI agent administer­
ing the test repeatedly told Higazy that he was not being truthful. Finally,
Higazy, by now hysterical, blurted out that maybe the radio really was his.
At that point, the FBI stopped the test and told the lawyer that Higazy had
confessed and would be charged, at a minimum, with making false statements
to the FBI and possibly with aiding and abetting the attack on the Twin
Towers, a capital offense. The next day, based on the prosecutor's flat state­
ment that Higazy had confessed, I ordered Higazy detained without bail and
he was shortly thereafter formally charged with lying to the FBI.
Three days later, an American Airlines pilot contacted the Millennium
Hotel and asked if he could get back the pilot's radio he had left there on
September 11. It quickly developed that the radio was, indeed, his and had
never been in Higazy's room or possession. The Millennium security guard
had made up the whole story about finding the radio in Higazy's room, ap­
parently because he wanted revenge for 9/11 on anyone who had Arab an­
cestry. The government dropped the charges against Higazy and prosecuted
the security guard instead, who pled guilty to lying to the FBI.
For my part, I ordered an investigation by the government into the cir­
cumstances of the FBI's polygraph testing, the result of which was a report
assuring me that the manner and mode of Higazy's polygraph examination
was consistent with standard FBI practice. I am not sure whether this means
that the FBI really believes in its polygraph results, despite their inaccuracy,
or whether the FBI simply uses the façade of polygraph testing to try to elicit
confessions. Either way, but for a near miracle, Mr. Higazy might likely now
either be rotting in prison or facing execution.
Why have I spent so much time describing the evils of polygraphs, when
the primary topic of this volume is the brave new world of brain scanning? I
believe that many of the same evils are likely to result from the use of brain
scanning to detect lies unless we are very, very careful. If anything, the poten­
tial for mischief is even greater, because while polygraphy was largely devel­
oped by technicians, brain scanning as a means of detecting lies is said to be
the product of studies by honest-to-goodness real-life neuroscientists.

But the credentials of the scientists should not obscure the shakiness of
the science. 7 A basic problem with both polygraphy and brain scanning to de­
tect lying is that no established standard exists for defining what willful de­
ception is, let alone how to establish a base measure against which the validi­
ty and reliability of any lie-detection technique can be evaluated. What exists
at this point are imaging technologies that show us patterns of activity or oth­
er events in the brain that are hypothesized to correlate with various mental
states. Not one of these hypotheses has been subjected to the kind of rigor­
ous testing that would establish its validity.
That, however, has not stopped several commercial enterprises from of­
fering brain scanning as a purportedly scientific lie-detection technique that
law enforcement agencies, private businesses, and even courts should utilize.
The mere fact that evidence is proffered by someone with scientific creden­
tials does not begin to satisfy the conditions for its admissibility in court.
In the case of the federal courts, the admissibility of expert testimony is
governed by Rule 702 of the Federal Rules of Evidence. Rule 702 provides that
If scientific, technical, or other specialized knowledge will assist the trier
of fact to understand the evidence or to determine a fact in issue, a wit­
ness qualified as an expert by knowledge, skill, experience, training or
education may testify thereto in the form of an opinion or otherwise, if
(1) the testimony is based upon sufficient facts or data, (2) the testimony
is the product of reliable principles and methods, and (3) the witness has
applied the principles and methods reliably to the facts of the case.
Though every case must be assessed on its individual merits, brain scan­
ning as a means of assessing credibility likely suffers from several defects that
would render such evidence inadmissible under Rule 702 as it has been inter­
preted in the federal courts.
First, and perhaps most fundamentally, there is no commonly accepted
theory of how brain patterns evidence lying: in the absence of such a theory,
all that is being shown, at best, is the presence or absence of certain brain pat­
terns that allegedly correlate with some hypothesized accompaniment of ly­
ing, such as anxiety. But no scientific evidence has shown either that lying is
always accompanied by anxiety or that anxiety cannot be caused by a dozen
other factors that cannot be factored out.
Second, the theories that have been proposed have not been put to the
test of falsifiability, which, if one accepts (as the Supreme Court does) a Pop­
per-like view of science, is the sine qua non of assessing scientific validity and
reliability.
Third, no standard way exists of defining what lying is, let alone how to
test for it. The law recognizes many kinds of lies, ranging from "white lies"
and "puffing" to affirmative misstatements, actionable half-truths, and mate­
rial omissions. Brain scans cannot yet come close to distinguishing between
these different kinds of lying. Yet the differences are crucial in almost any case:
a little white lie is altogether different, in the eyes of the law and of common

7. For a more expert discussion of the limitations of brain scanning as a truth-detection device,
see the articles by Elizabeth Phelps and Nancy Kanwisher elsewhere in this volume.

sense, from an intentional scheme to defraud. Nothing in the brain-scan ap­
proach to lie detection even attempts to make such distinctions. And what
might a brain scan be predicted to show in the case of a lie by omission; that
is, the person whose statements are truthful as far as they go but who con­
ceals a material fact that puts an entirely different perspective on what is being
said? In my experience, these are the most common kinds of lies in court,
and they are revealed only by a good cross-examination.
For these and other reasons, brain-scanning tests of credibility might well
fail to meet the tests for admissibility under Rule 702, both because the brain­
scanning tests lack scientific reliability and because they are unlikely to be use­
ful to the jury. Additionally, even evidence that qualifies for admission under
Rule 702 may be excluded under Rule 403 of the Federal Rules of Evidence,
which provides that "Although relevant, evidence may be excluded if its pro­
bative value is substantially outweighed by the danger of unfair prejudice, con­
fusion of the issues, or misleading the jury, or by considerations of undue delay,
waste of time, or needless presentation of cumulative evidence." Brain-scan­
ning evidence, precisely because it holds itself out as truly scientific, is likely
to have a much greater impact on the trier of fact than its limited theoretical
and experimental bases can fairly support; it therefore might also be excluded
under Rule 403.
A time may come when some sort of brain-scanning technique will be de­
veloped that will actually show the act of willful lying with enough accuracy
and predictability as to meet both the requirements of law and the even more
rigorous standards of accepted science. But long before that occurs, claims
will be made-some even by reputable scientists whose enthusiasm for their
own studies has overwhelmed all caution-that science has developed brain­
scan lie detectors that are perfectly accurate, when in fact they are not. Indeed,
some such claims are already being made. The desire for a magic bullet that
exposes lies is so strong that many people will be persuaded that, indeed, brain
scans can achieve what polygraphers have long claimed but, in my view, whol­
ly failed to achieve. The result will be an abdication of the difficult work of
actually detecting lies through cross-examination in favor of quasi-scientific
tests that substitute the façade of scientific certainty for the actuality of truth.

CHAPTER 7

Neuroscience-Based Lie
Detection: The Need for
Regulation
HENRY T. GREELY

"I swear I didn't do it."


"The check is in the mail."
"The article is almost done."

In our lives and in our legal system we often are vitally interested in whether
someone is telling us the truth. Over the years, humans have used reputation,
body language, oaths, and even torture as lie detectors. In the twentieth cen­
tury, polygraphs and truth serum made bids for widespread use. The twenty­
first century is confronting yet another kind of lie detection, one based on
neuroscience and particularly on functional magnetic resonance imaging
(fMRI).
The possibility of effective lie detection raises a host of legal and ethical
questions. Evidentiary rules on scientific evidence, on probative value com­
pared with prejudicial effect, and, possibly, rules on character evidence would
be brought into play. Constitutional issues would be raised under at least the
Fourth, Fifth, and Sixth Amendments, as well as, perhaps, under a First Amend­
ment claim about a protected freedom of thought. Four U.S. Supreme Court
justices have already stated their view that even a perfectly effective lie detec­
tor should not be admissible in court because it would unduly infringe the
province of the jury. And ethicist Paul Wolpe has argued that this kind of inter­
vention raises an entirely novel, and deeply unsettling, ethical issue about pri­
vacy within one's own skull.1
These issues are fascinating and the temptation is strong to pursue them,
but we must not forget a crucial first question: does neuroscience-based lie
detection work and, if so, how well? This question has taken on particular ur­
gency as two, and possibly three, companies are already marketing fMRI-based
lie detection services in the United States. The deeper implications of effec­

1. Paul R. Wolpe, Kenneth R. Foster, and Daniel D. Langleben, "Emerging Neurotechnologies
for Lie-Detection: Promises and Perils," American Journal of Bioethics 5 (2) (2005): 39-49.

tive lie detection are important but may prove a dangerous distraction from
the preliminary question of effectiveness. Their exploration may even lead
some readers to infer that neuroscience-based lie detection is ready for use.
It is not. And nonresearch use of neuroscience-based lie detection should
not be allowed until it has been proven safe and effective. 2 This essay will re­
view briefly the state of the science concerning fMRI-based lie detection; it
will then describe the limited extent of regulation of this technology and will
end by arguing for a premarket approval system for regulating neuroscience­
based lie detection, similar to that used by the Food and Drug Administration
to regulate new drugs.

NEUROSCIENCE-BASED LIE DETECTION: THE SCIENCE

Arguably all lie detection, like all human cognitive behavior, has its roots in
neuroscience, but the term neuroscience-based lie detection describes newer
methods of lie detection that try to detect deception based on information
about activity in a subject's brain.
The most common and commonly used lie detector, the polygraph, does
not measure directly activity in the subject's brain. From its invention around
1920, the polygraph has measured physiological indications that are associat­
ed with the mental state of anxiety: blood pressure, heart rate, breathing rate,
and galvanic skin response (sweating). When a subject shows higher levels of
these indicators, the polygraph examiner may infer that the subject is anxious
and further that the subject is lying. Typically, the examiner asks a subject a
series of yes or no questions while his physiological responses are being moni­
tored by the device. The questions may include irrelevant questions and emo­
tionally charged, "probable lie" control questions as well as relevant questions.
An irrelevant question might be "Is today Tuesday?" A probable lie question
would be "Have you ever stolen anything?" a question the subject might well
be tempted to answer "no," even though it is thought unlikely anyone could
truthfully deny ever having stolen anything. Another approach, the so-called
guilty knowledge test, asks the subject questions about, for example, a crime
scene the subject denies having seen. A subject who shows a stronger physio­
logical reaction to a correct statement about the crime scene than to an in­
correct statement may be viewed as lying about his or her lack of knowledge.
The result of a polygraph examination combines the physiological results
gathered by the machine with the examiner's assessment of the subject to draw
a conclusion about whether the subject answered particular questions honest­
ly. The problems lie in the strength (or weakness) of the connection between
the physiological responses and anxiety, on the one hand, and both anxiety
and deception, on the other. Only if both connections are powerful can one
argue that the physiological reactions are strong evidence of deception.

2. This argument is made at great length in Henry T. Greely and Judy Illes, "Neuroscience­
Based Lie Detection: The Urgent Need for Regulation," American Journal ofLaw & Medicine
33 (2007): 377-431.

NEUROSCIENCE-BASED LIE DETECTION 47

A 2003 National Academy of Sciences (NAS) report analyzed polygraphy
in detail. The NAS found little rigorous scientific assessment of polygraph ac­
curacy but concluded that in good settings it was substantially better than
chance-and substantially less than perfect. The NAS further noted that sub­
jects could take plausible countermeasures to lower the device's accuracy even
further. The NAS advised against the use of polygraphs for personnel screening.
Researchers are now working on at least five methods to produce a newer
generation of lie detection devices, devices that measure aspects of the brain
itself rather than the physiological responses associated with anxiety. One ap­
proach uses electroencephalography to look for a so-called P300 wave in the
brain's electrical activity. This signal, seen about 300 milliseconds after a stim­
ulus, is said to be a sign that the person (or the person's brain) recognizes the
stimulus. A second method uses near-infrared laser spectroscopy to scatter a
laser beam off the outer layers of the subject's brain and then to correlate the
resulting patterns with deception. Proponents of a third method, periorbital
thermography, claim to be able to detect deception by measuring an increase
in the subject's temperature around the eyes, allegedly as a result of increased
blood flow to the prefrontal regions. A fourth approach analyzes fleeting "facial
micro-expressions" and is more like the polygraph in that it seeks assertedly
involuntary and uncontrollable body reactions correlated with deception rather
than looking at direct measures of aspects of the brain. That these new meth­
ods will work is far from clear. Almost no peer-reviewed literature exists for
any of them.
The most advanced neuroscience-based method for lie detection is fMRI.
As described in more detail by Marcus Raichle elsewhere in this volume, fMRI
uses magnetism to measure the ratio of oxygenated to deoxygenated hemo­
globin in particular areas of the brain, three-dimensional regions referred to
as "voxels." The Blood Oxygenation Level Dependence (BOLD) hypothesis
holds that a higher ratio of oxygenated to deoxygenated blood in a particular
voxel correlates to higher energy consumption in that region a few seconds
earlier. An fMRI scan can document how these ratios change as subjects per­
ceive, act, or think different things while being scanned. Then sophisticated
statistical packages look at the changes in thousands of voxels to find corre­
lations between these blood oxygen ratios and what was happening to the
subject several seconds earlier.
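The analysis just described can be caricatured in a few lines. The sketch below is a deliberately simplified stand-in for the sophisticated statistical packages mentioned above: a single simulated voxel, an alternating block task, and a fixed hemodynamic lag are all assumptions of this toy model, not features of any real study.

```python
import numpy as np

# Toy model: one simulated voxel whose oxygenation signal tracks a block
# task design with a delay, plus noise. All numbers here are invented.
rng = np.random.default_rng(0)
n_scans, lag = 120, 3  # a lag of 3 scans mimics the few-second BOLD delay

# Task regressor: 1 while the subject performs the task, 0 at rest
# (alternating 10-scan blocks).
task = np.tile([0] * 10 + [1] * 10, n_scans // 20)

# The voxel "activates" lag scans after the task, with measurement noise.
voxel = 100 + 2.0 * np.roll(task, lag) + rng.normal(0, 0.5, n_scans)

# Search candidate lags for the strongest correlation, a crude stand-in
# for fitting a hemodynamic response model.
corrs = [np.corrcoef(voxel, np.roll(task, k))[0, 1] for k in range(6)]
best_lag = int(np.argmax(corrs))
print(best_lag)  # the correlation peaks at the true lag of 3 scans
```

Real pipelines repeat this kind of fit across many thousands of voxels at once, which is why correction for multiple comparisons and replication matter so much.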
Since the development of fMRI in the early 1990s, many thousands of
peer-reviewed papers have been published using the technique to associate
particular patterns of blood flow (and, hence, under the BOLD hypothesis,
brain activity) with different mental activities. Some of these associations have
been entirely plausible and have been adopted in neurosurgery; for example,
using fMRI to locate precisely a particular region of a patient's
brain in order to guide the surgeon's scalpel. Other claimed connections are
more surprising, like using fMRI to locate brain regions responsible for pas­
sionate, romantic love or for a nun's feeling of mystical union with God. Still
other published associations are about lie detection.

As of March 2007, at least twelve peer-reviewed articles from eight dif­
ferent laboratories had been published on fMRI-based lie detection.3 The dif­
ferent laboratories used different experimental designs (sometimes the same
laboratory used different designs in different publications), but each claimed
to find statistically significant correlations between deception and certain pat­
terns of brain activity. This apparent scientific support for fMRI-based lie de­
tection becomes much weaker on examination. This body of work has at least
six major flaws.
First, almost none of the work has been replicated. One of the laboratories,
Andrew Kozel's, replicated at least one of its own studies and two of Daniel
Langleben's published studies are quite similar, though not identical. 4 None
of the other laboratories has replicated, at least in the published literature,
their own studies. More important, none of the studies has been replicated
by other labs. Replication is always important in science; it is particularly im­
portant with a new and complex technology like fMRI, where anything from
the details of the experimental design, the method of subject selection, or the
technical aspects of the individual MRI machine on any particular day can make
a great difference.
Second, although the studies all find associations between deception and
activation or deactivation in some brain regions, they often disagree among
themselves in what brain regions are associated with deception. This some­

3. Sean A. Spence et al., "Behavioral and Functional Anatomical Correlates of Deception in
Humans," Brain Imaging Neuroreport (2001): 2849; Tatia M. C. Lee et al., "Lie Detection
by Functional Magnetic Resonance Imaging," Human Brain Mapping 15 (2002): 157 (manu­
script was received by the journal two months before Spence's earlier-published article had
been received and, in that sense, may be the earliest of these experiments); Daniel D. Langleben
et al., "Brain Activity During Simulated Deception: An Event-Related Functional Magnetic
Resonance Study," Neuroimage 15 (2002): 727; G. Ganis et al., "Neural Correlates of Differ­
ent Types of Deception: An fMRI Investigation," Cerebral Cortex 13 (2003): 830; F. Andrew
Kozel et al., "A Pilot Study of Functional Magnetic Resonance Imaging Brain Correlates of
Deception in Healthy Young Men," Journal of Neuropsychiatry & Clinical Neuroscience 16
(2004): 295; F. Andrew Kozel, Tamara M. Padgett, and Mark S. George, "Brief Communica­
tions: A Replication Study of the Neural Correlates of Deception," Behavioral Neuroscience
118 (2004): 852; F. Andrew Kozel et al., "Detecting Deception Using Functional Magnetic
Imaging," Biological Psychiatry 58 (2005): 605; Daniel D. Langleben et al., "Telling Truth
from Lie in Individual Subjects with Fast Event-Related fMRI," Human Brain Mapping 26
(2005): 262; C. Davatzikos et al., "Classifying Spatial Patterns of Brain Activity with Machine
Learning Methods: Application to Lie Detection," Neuroimage 28 (2005): 663; Tatia M. C.
Lee et al., "Neural Correlates of Feigned Memory Impairment," Neuroimage 28 (2005): 305;
Jennifer Maria Nunez et al., "Intentional False Responding Shares Neural Substrates with
Response Conflict and Cognitive Control," Neuroimage 25 (2005): 267; Feroze B. Mohamed
et al., "Brain Mapping of Deception and Truth Telling about an Ecologically Valid Situation:
Functional MR Imaging and Polygraph Investigation-Initial Experience," Radiology 238
(2006): 679.
4. Kozel, Padgett, and George, "Brief Communications: A Replication Study of the Neural
Correlates of Deception," replicates Kozel et al., "A Pilot Study of Functional Magnetic Reso­
nance Imaging Brain Correlates of Deception in Healthy Young Men." Langleben's first study,
Langleben et al., "Brain Activity During Simulated Deception: An Event-Related Functional
Magnetic Resonance Study," is closely mirrored by his second and third, Langleben et al.,
"Telling Truth from Lie in Individual Subjects with Fast Event-Related fMRI," and Davatzikos
et al., "Classifying Spatial Patterns of Brain Activity with Machine Learning Methods: Appli­
cation to Lie Detection."

times happens within one laboratory: Langleben's first two studies differed
substantially in what regions correlated with deception.5
Third, only three of the twelve studies dealt with predicting deceptiveness
by individuals. 6 The other studies concluded that on average particular regions
in the (pooled) brains of the subjects were statistically significantly likely to be
activated (high ratio of oxygenated to deoxygenated hemoglobin) or deacti­
vated (low ratio) when the subjects were lying. These group averages tell you
nothing useful about the individuals being tested. A group of National Foot­
ball League place kickers and defensive linemen could, on average, weigh 200
pounds when no single individual was within 80 pounds of that amount. The
lie-detection results are not likely to be that stark, but before we can assess
whether the method might be useful, we have to know how accurate it is in
detecting deception by individuals-its specificity (lack of false positives) and
sensitivity (lack of false negatives) are crucial. Only one of the Kozel articles
and two of the Langleben articles discuss the accuracy of individual results.
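Both points here, that group averages can mask individual misses and that sensitivity and specificity are the numbers that matter, can be illustrated with invented figures; none of the counts below comes from the studies under discussion.

```python
# Invented counts, for illustration only.

# The group-average trap from the text: a mean of 200 pounds with no
# individual anywhere near it.
weights = [115, 115, 285, 285]  # two place kickers, two defensive linemen
print(sum(weights) / len(weights))  # 200.0

# Per-individual accuracy: sensitivity (fraction of lies caught) and
# specificity (fraction of truthful answers correctly cleared), computed
# from a hypothetical confusion matrix.
def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Say a method catches 12 of 15 lies but falsely flags 5 of 15 truth tellers.
sens, spec = sensitivity_specificity(tp=12, fn=3, tn=10, fp=5)
print(round(sens, 2), round(spec, 2))  # 0.8 0.67
```

A study that reports only pooled activation maps supplies neither of these two numbers, which is precisely the gap the text identifies.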
Next is the question of the individuals tested. The largest of these stud­
ies involved thirty-one subjects;7 most of them looked at ten to fifteen. The
two Langleben studies that looked at individual results were based on four
subjects. For the most part, the subjects were disconcertingly homogenous­
young, healthy, and almost all right-handed. Langleben's studies, in particular,
were limited to young, healthy, right-handed undergraduates at the University
of Pennsylvania who were not using drugs. How well these results project to
the rest of the population is unknown.
A fifth major problem is the artificiality of the experimental designs. People
are recruited and give their informed consent to participate in a study of fMRI
and deception. Typically, they are told to lie about something. In Langleben's
three studies they were told to lie when they saw a particular playing card pro­
jected on the screen inside the scanner. In Kozel's work, perhaps the least arti­
ficial of the experiments, subjects were told to take either a ring or a watch
from a room and then to say, in the scanner, that they had not taken either
object. Note how different this is from a criminal suspect telling the police
that he had not taken part in a drug deal, or, for that matter, from a dinner
guest praising an overcooked dish. The experimental subjects are following
orders to lie where nothing more rides on the outcome than (in some cases)
a promised $50 bonus if they successfully deceive the researchers. We just do
not know how well these methods would work in settings similar to those
where lie detection would, in practice, be used.

5. Compare Langleben et al., "Brain Activity During Simulated Deception: An Event-Related
Functional Magnetic Resonance Study," with Langleben et al., "Telling Truth from Lie in
Individual Subjects with Fast Event-Related fMRI."
6. Langleben et al., "Telling Truth from Lie in Individual Subjects with Fast Event-Related
fMRI"; Davatzikos et al., "Classifying Spatial Patterns of Brain Activity with Machine Learning
Methods: Application to Lie Detection"; and Kozel et al., "Detecting Deception Using Func­
tional Magnetic Imaging."

7. Kozel et al., "Detecting Deception Using Functional Magnetic Imaging."

Finally, and perhaps most worryingly, as with the polygraph, counter­
measures could make fMRI-based lie detection ineffective against trained liars.
And countermeasures are easy with fMRI. One can ruin a scan by movement
of the head or sometimes of just the tongue. Or, more subtly, as the scanner
is detecting patterns of blood flow associated with brain activity, one can add
additional brain activity. What happens to these results if the subject, when
answering, is also reciting to himself the multiplication tables? We have no idea.
The few published papers that have looked at individuals have claimed
accuracy rates of about 70 to around 90 percent in detecting lies. These results
-not substantially different, by the way, from reported results with the poly­
graph-must be taken with a grain of salt. We just do not know how reliably
accurate fMRI-based lie detection will be with diverse subjects in realistic set­
tings, with or without countermeasures. For now, at least, based on the peer­
reviewed literature, the scientific verdict on fMRI-based lie detection seems
clear: interesting but not proven.

NEUROSCIENCE-BASED LIE DETECTION: THE LAW

In spite of this lack of convincing proof of efficacy, at least two companies in
the United States-No Lie MRI and Cephos Corp-are offering fMRI-based
lie detection services. They can do this because of the near absence of regula­
tion of lie detection in the United States.
In general, the use of lie detectors is legal in the United States. The poly­
graph is used thousands of times each week for security screenings, in crimi­
nal investigations, and as part of the conditions of release for sex offenders. The
device can be used even more broadly, subject to almost no regulation, with
one major exception: employers.
The federal Employee Polygraph Protection Act (EPPA) of 1988 forbids most employers from forcing job applicants and most employees to take lie-detector tests and from using the results of such tests. As a result, no American can today legally face, as I did, a polygraph test when applying, at age twenty-one, for a job as a bartender at a pizza parlor. Some employers are granted exceptions, notably governments and some national security and criminal-investigation contractors. The act's definition of lie detection is broad (although No Lie MRI has frivolously argued that EPPA does not apply to fMRI). The act also exempts the use of polygraphs (not other forms of lie detection) on employees in some kinds of employer investigations, subject only to some broad rights for the employees. About half the states have passed their own versions of this act, applying it to most or all of their state and local employees. Some states have extended protections against lie detection to a few other situations, including in connection with insurance claims, welfare applications, or credit reports. (A few states have required the use of lie detection in some settings, such as investigations of police officers.)
Almost half of states have a licensing scheme for polygraph examiners. A few of these statutes may effectively prohibit fMRI-based lie detection because they prohibit lie detection except by licensed examiners and provide only for

NEUROSCIENCE-BASED LIE DETECTION 51



licensing polygraph examiners, not fMRI examiners. No state, however, has yet explicitly regulated neuroscience-based lie detection.
One site for the possible use of lie-detection technology is particularly sensitive-the courtroom. Thus far, fMRI-based lie detection has not been admitted into evidence in court. The courts will apply their own tests in making such decisions. However, the eighty-plus years of litigation over courtroom uses of polygraph evidence might provide some useful lessons.
The polygraph is never admissible in U.S. courtrooms-except when it is. Those exceptions are few but not trivial. In state courts in New Mexico, polygraph evidence is presumptively admissible. In every other American state and in the federal courts, polygraph evidence is generally not admissible. Some jurisdictions will allow it to be introduced to impeach a witness's credibility. Others will allow its use if both parties have agreed, before the test was taken, that it should be admitted. (This willingness to allow polygraph to be admitted by the parties' stipulation has always puzzled me; should judges allow the jury to hear, as scientific evidence, the results of palm reading or the Magic Eight Ball if the parties stipulated to it?) At least one federal court has ruled that a defendant undergoing a sentencing hearing where the death penalty may be imposed is entitled to use polygraph evidence to try to mitigate his sentence.8
U.S. courts have rejected the polygraph on the grounds that it is not acceptable scientific evidence. For many years federal and state courts used as the test of admissibility of scientific evidence a standard taken from the 1923 case Frye v. United States (293 F. 1013 [D.C. Cir. 1923]), which involved one of the precursors to the polygraph. Frye, which required proof that the method was generally accepted in the scientific community, was replaced in the federal courts (and in many state courts) by a similar but more complicated test taken from the 1993 U.S. Supreme Court case, Daubert v. Merrell Dow Pharmaceuticals, Inc. (509 U.S. 579 [1993]). Both cases fundamentally require a finding that the evidence is scientifically sound, usually based on testimony from experts. Under both Frye and Daubert, American courts (except in New Mexico) have uniformly found that the polygraph has not been proven sufficiently reliable to be admitted into evidence. Evidence from fMRI-based lie detection will face the same hurdles. The party seeking to introduce it at a trial will have to convince the judge that it meets the Frye or Daubert standard.
The U.S. Supreme Court confronted an interesting variation on this question in 1998 in a case called United States v. Scheffer (523 U.S. 303 [1998]). Airman Scheffer took part in an undercover drug investigation on an Air Force base. As part of the investigation, the military police gave him regular polygraph and urine tests to make sure he was not misusing the drugs himself. He consistently passed the polygraph tests but eventually failed the urine test, leading to his charge and conviction by court-martial.

8. Rupe v. Wood, 93 F.3d 1434 (9th Cir. 1996). See also Height v. State, 604 S.E.2d 796 (Ga. 2004).




Unlike the general Federal Rules of Evidence, the Military Rules of Evidence expressly forbid the admission of any polygraph evidence. Airman Scheffer, arguing that if the polygraph were good enough for the military police, it should be good enough for the court-martial, claimed that this rule, Rule 707, violated his Sixth Amendment right to present evidence in his own defense. The U.S. Court of Military Appeals agreed, but the U.S. Supreme Court did not and reversed. The Court, in an opinion written by Justice Thomas, held that the unreliability of the polygraph justified Rule 707, as did the potential for confusion, prejudice, and delay when using the polygraph.
Justice Thomas, joined by only three other justices (and so not creating a precedent), also wrote that even if the polygraph were extremely reliable, it could not be introduced in court, at least in jury trials. This, he said, was because it too greatly undercut "the jury's core function of making credibility determinations in criminal trials."
Scheffer is a useful reminder that lie detection, whether by polygraph, fMRI, or any other technical method, will have to face not only limits on scientific evidence but other concerns. Under Federal Rule of Evidence 403 (and equivalent state rules), the admission of any evidence is subject to the court's determination that its probative value outweighs its costs in prejudice, confusion, or time. Given the possible damning effect on the jury of a fancy high-tech conclusion that a witness is a liar, Rule 403 might well hold back all but the most accurate lie detection. Other rules involving character testimony might also come into play, particularly if a witness wants to introduce lie-detection evidence to prove that he or she is telling the truth. In Canada, for example, polygraph evidence is excluded not because it is unreliable but because it violates an old common law evidentiary rule against "oath helping" (R. v. Beland, 2 S.C.R. 398 [1987]). While nonjudicial use of fMRI-based lie detection is almost unregulated, the courtroom use of fMRI-based lie detection will face special difficulties. The judicial system should be the model for the rest of society. We should not allow any uses of fMRI-based (or other neuroscience-based) lie detection until it is proven sufficiently safe and effective.

A TRULY MODEST PROPOSAL: PREMARKET REGULATION OF LIE DETECTION

Effective lie detection could transform society, particularly the legal system. Although fMRI-based lie detection is clearly not ready for nonresearch uses today, I am genuinely agnostic about its value in ten years (or twenty years, or even five years). It seems plausible to me that some patterns of brain activation will prove to be powerfully effective at distinguishing truth from lies, at least in some situations and with some people. (The potential for undetectable countermeasures is responsible for much of my uncertainty about the future power of neuroscience-based lie detection.)
Of course, "transform" does not have a normative direction-society could be transformed in ways good, bad, or (most likely) mixed. Should we develop effective lie detection, we will need to decide how, and under what circumstances, we want it to be usable, in effect rethinking EPPA in hundreds




of nonemployment settings. And we will need to consider how our constitutional rights do and should constrain the use of lie detection. This kind of thinking and regulation will be essential to maximizing the benefits and minimizing the harms of effective lie detection.
But ineffective lie detection has only harms, mitigated by no benefits. As a first step, before consideration of particular uses, we should forbid the nonresearch use of unproven neuroscience-based lie detection. (Similarly, lie detection not based on neuroscience, including specifically the polygraph, should be forbidden. However, polygraphy is likely too well established to make its uprooting politically feasible.)
This step is not radical. We require that new drugs, biologics, and medical devices be proven "safe and effective" to the satisfaction of the federal Food and Drug Administration before they may legally be used outside of (regulated) research. Just as unsafe or ineffective drugs can damage bodies, unsafe or ineffective lie detection can damage lives-the lives of those unjustly treated as a result of inaccurate tests as well as the lives of those harmed because a real villain passed the lie detector. Our society can allow false, exaggerated, or misleading claims and implied claims for many marketed products or services, from brands of beer to used cars to "star naming" companies, because the products, though they may not do much good, are unlikely to do much harm. Lie detection is not benign, but is, instead, potentially quite dangerous and should be regulated as such.
Of course, just calling for regulation through premarket approval leaves a host of questions unsettled. What level of government should regulate these tests? What agency should do the assessments of safety and efficacy? How would one define safety or efficacy in this context? Should we require the equivalent of clinical trials and, if so, with how many of what kinds of people? How effective is effective enough? Should lie-detection companies, or fMRI-based lie-detection examiners, be licensed? And who will pay for all this testing and regulation? As always, the devil is truly in the details.
I have thoughts on the answers to all those questions (see Greely and Illes 2007), based largely on the Food and Drug Administration, but this is not the place to go into them. The important point is the need for some kind of premarket approval process to keep out unproven lie-detection technologies. Thanks to No Lie MRI and Cephos, the time to develop such a regulatory process is yesterday.

CONCLUSION

Lie detection is just one of the many ways in which the revolution in neuroscience seems likely to change our world. Nothing is as important to us, as humans, as our brains. Further and more-detailed knowledge about how those brains work-properly and improperly-is coming and will necessarily change our medicine, our law, our families, and our day-to-day lives. We cannot anticipate all the benefits or all the risks this revolution will bring us, but we can be alert for examples as-or, better, just before-they arise and then do our best to use them in ways that will make our world better, not worse.




Neuroscience-based lie detection could be our first test. If it works, it will force us to rethink a host of questions, mainly revolving around privacy. But if it does not work, or until it is proven to work, it still poses challenges-challenges we must accept. Premarket approval regulation of neuroscience-based lie detection would be a good start.

REFERENCES

Greely, H. T. 2005. Premarket approval regulation for lie detection: An idea whose time may be coming. American Journal of Bioethics 5 (2): 50-52.
Greely, H. T., and J. Illes. 2007. Neuroscience-based lie detection: The urgent need for regulation. American Journal of Law and Medicine 33: 377-431.
National Academy of Sciences. 2003. The polygraph and lie detection. Washington, DC: National Research Council.
Wolpe, P. R., K. R. Foster, and D. D. Langleben. 2005. Emerging neurotechnologies for lie-detection: Promises and perils. American Journal of Bioethics 5 (2): 39-49.




CONTRIBUTORS

Emilio Bizzi is President of the American Academy of Arts and Sciences and Institute Professor at the Massachusetts Institute of Technology, where he has been a member of the faculty since 1969. He is a neuroscientist whose research focuses on movement control and the neural substrate for motor learning. He is also a Fellow of the American Academy of Arts and Sciences, a Fellow of the National Academy of Sciences, a member of the Institute of Medicine, and a member of the Accademia Nazionale dei Lincei. He was awarded the President of Italy's Gold Medal for Scientific Contributions.

Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and Professor, by courtesy, of Genetics at Stanford University. He specializes in legal and social issues arising from advances in the biosciences. He chairs the California Advisory Committee on Human Stem Cell Research and directs the Stanford Center for Law and the Biosciences. He is a member of the executive committee of the Neuroethics Society and is a co-director of the Law and Neuroscience Project. He graduated from Stanford University in 1974 and from Yale Law School in 1977. He served as a law clerk for Judge John Minor Wisdom of the United States Court of Appeals and for Justice Potter Stewart of the United States Supreme Court. He began teaching at Stanford in 1985.

Steven E. Hyman is Provost of Harvard University and Professor of Neurobiology at Harvard Medical School. From 1996 to 2001, he served as Director of the National Institute of Mental Health (NIMH), the component of the U.S. National Institutes of Health charged with generating the knowledge needed to understand and treat mental illness. He is a member of the Institute of Medicine and a Fellow of the American Academy of Arts and Sciences. He is Editor of the Annual Review of Neuroscience.

Nancy Kanwisher is the Ellen Swallow Richards Professor in the Department of Brain & Cognitive Sciences at the Massachusetts Institute of Technology (MIT), and Investigator at MIT's McGovern Institute for Brain Research. She held a MacArthur Fellowship in Peace and International Security after receiving her Ph.D. She then served for several years as a faculty member of the psychology departments at the University of California, Los Angeles and Harvard University. Her research concerns the cognitive and neural mechanisms underlying visual experience, using fMRI and other methods. She received a Troland Research Award from the National Academy of Sciences in 1999, a MacVicar Faculty Fellow Teaching Award from MIT in 2002, and the Golden Brain Award from the Minerva Foundation in 2007. She was elected a member of the National Academy of Sciences in 2005, and a Fellow of the American Academy of Arts and Sciences in 2009.




Stephen J. Morse is Ferdinand Wakeman Hubbell Professor of Law and Professor of Psychology and Law in Psychiatry at the University of Pennsylvania. Trained in both law and psychology at Harvard, he is an expert in criminal and mental health law. His work emphasizes individual responsibility and the relation of the behavioral and neurosciences to responsibility and social control. He is currently Legal Coordinator of the MacArthur Foundation Law and Neuroscience Project and he co-directs the Project's Research Network on Criminal Responsibility and Prediction. He is currently working on a book, Desert and Disease: Responsibility and Social Control. He is a founding director of the Neuroethics Society, and prior to joining the Penn faculty, he was the Orrin B. Evans Professor of Law, Psychiatry and the Behavioral Sciences at the University of Southern California.

Elizabeth A. Phelps received her Ph.D. from Princeton University in 1989, served on the faculty of Yale University until 1999, and is currently the Silver Professor of Psychology and Neural Science at New York University. Her laboratory has earned widespread acclaim for its groundbreaking research on how the human brain processes emotion, particularly as it relates to learning, memory, and decision making. Dr. Phelps is the recipient of the 21st Century Scientist Award from the James S. McDonnell Foundation and a Fellow of the American Association for the Advancement of Science and the Society for Experimental Psychology. She has served on the Board of Directors of the Association for Psychological Science and the Society for Neuroethics, was the President of the Society for Neuroeconomics, and is the current editor of the APA journal Emotion.

Marcus E. Raichle, a neurologist, is Professor of Radiology and Neurology at Washington University in St. Louis. He heads a pioneering team investigating brain function using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) to map the functional organization of the human brain in health and disease. He joined the faculty of Washington University in 1971, ascending to the rank of professor in 1979. He received a bachelor's and a medical degree from the University of Washington in Seattle. His many honors include election to the Institute of Medicine in 1991, to the National Academy of Sciences in 1996, and to the American Academy of Arts and Sciences in 1998.

CONTRIBUTORS 57

Jed S. Rakoff is a United States District Judge for the Southern District of New York. He also serves on the Governance Board of the MacArthur Foundation Project on Law and Neuroscience, and on the National Academies' Committee to Prepare the Third Edition of the Federal Judges' Manual on Scientific Evidence. He has a B.A. from Swarthmore College, an M.Phil. from Oxford University, and a J.D. from Harvard Law School.

Walter Sinnott-Armstrong is Professor of Philosophy and Hardy Professor of Legal Studies at Dartmouth College, where he has taught since 1981 after receiving a B.A. from Amherst College and a Ph.D. from Yale University. He is currently Vice Chair of the Board of Officers of the American Philosophical Association and Co-Director of the MacArthur Law and Neuroscience Program. He has published extensively on ethics (theoretical and applied), philosophy of law, epistemology, philosophy of religion, and informal logic. His current research focuses on empirical moral psychology as well as law and neuroscience.




AMERICAN ACADEMY OF ARTS & SCIENCES

The Academy was founded during the American Revolution by John Adams, James Bowdoin, John Hancock, and other leaders who contributed prominently to the establishment of the new nation, its government, and its Constitution. Its purpose was to provide a forum for a select group of scholars, members of the learned professions, and government and business leaders to work together on behalf of the democratic interests of the republic. In the words of the Academy's Charter, enacted in 1780, the "end and design of the institution is ... to cultivate every art and science which may tend to advance the interest, honour, dignity, and happiness of a free, independent, and virtuous people." Today the Academy is both an honorary learned society and an independent policy research center that conducts multidisciplinary studies of complex and emerging problems. Current Academy research focuses on science and global security; social policy; the humanities and culture; and education. The Academy supports young scholars through its Visiting Scholars Program and Hellman Fellowships in Science and Technology Policy, providing year-long residencies at its Cambridge, Massachusetts, headquarters. The Academy's work is advanced by its 4,600 elected members, who are leaders in the academic disciplines, the arts, business, and public affairs from around the world.

NEUROLAW

THE COURT WILL NOW CALL ITS

BY
en ING.FEI CHEN (lVIA '90)
ILLUSTRATION BY JEFFREY DECOSTER

PORTRAITS BY LENNY GONZALEZ

WILL ADVANCES IN NEUROSCIENCE MAKE THE JUSTICE SYSTEM MORE ACCURATE AND UNBIASED?

Or could brain-based testing wrongly condemn some and trample the civil liberties of others?

The new field of neurolaw is cross-examining for answers.

In August 2008, Hank Greely received an e-mail from an International Herald Tribune correspondent in Mumbai seeking a bioethicist's perspective on an unusual murder case in India: A woman had been convicted of killing her ex-fiance with arsenic, and the circumstantial evidence against her included a brain-scan test that purportedly showed she had a memory--or "experiential knowledge"--of committing the crime.
"I was amazed and somewhat appalled," recalls Greely (BA '74), the Deane F. and Kate Edelman Johnson Professor of Law, who has studied the legal, ethical, and social implications of biomedical advances for nearly 20 years. As a type of lie detector, the supposed memory-parsing powers of the Brain Electrical Oscillations Signature profiling test--which monitors brain waves through electrodes placed on the scalp--looked implausible, he says.



[Photo caption:] Opposite: Mark G. Kelman, James C. Gaither Professor of Law and Vice Dean

No studies of BEOS, as it's called, have been published in peer-reviewed scientific journals to prove it works. Maybe society will someday find a technological solution to lie detection, Greely told the Tribune, "but we need to demand the highest standards of proof before we ruin people's lives based on its application."
While it remains unclear whether the guilty verdict in the Indian case will be upheld, the idea that a murder conviction rested in part on the premature adoption of an unproven novel technology by a judicial system makes Greely uneasy. The case is the sort of potentially disastrous scenario that he and colleagues in the budding field of "neurolaw" seek to head off in the U.S. legal system.
The brain science revolution raises the tantalizing sci-fi-like prospect that secrets hidden inside people's heads--like prejudice, intention to commit a crime, or deception--are within reach of being knowable. And such "mind reading" could have wide-ranging legal ramifications.
Although the scientific know-how is not here yet, someday brain scans might provide stronger proof of an eyewitness's identification of a suspect, confirm a lack of bias in a potential juror, or demonstrate that a worker compensation claimant does, in fact, suffer from debilitating pain. Neuroimaging evaluations of drug offenders might help predict the odds of relapse and guide sentencing. And new treatment options based on a better grasp of the neural processes underlying addictive or violent behavior could improve rehabilitation programs for repeat lawbreakers.
IN THE UNITED STATES, CONCERNS ABOUT A SIMILAR BRAIN-SCANNING lie detection technology, based on functional magnetic resonance imaging (fMRI), have been swirling around since two companies, No Lie MRI and Cephos Corp., began offering commercial testing using the technique in 2006 and 2008, respectively. While reservations abound about the reliability of these brain scans, the decision of whether courts accept them as evidence will initially rest upon the discretion of individual judges, on a case-by-case basis.
In the past two decades, neuroscience research has made rapid gains in deciphering how the human brain works, building toward a fuller comprehension of behavior that could vastly change how society goes about educating children, conducting business, and treating diseases. Powerful neuroimaging techniques are for the first time able to reveal which parts of the living human brain are in action while the mind experiences fear, pain, empathy, and even feelings of religious belief.
"Anything that leads to a better, deeper understanding of people's minds plays right to the heart of human society and culture--and as a result, right to the heart of the law," says Greely. "The law cares about the mind."
As director of the Stanford Center for Law and the Biosciences (CLB) and the Stanford Interdisciplinary Group on Neuroscience and Society, Greely has provided critical analysis of the societal consequences of genetic testing and embryonic stem cell techniques. In recent years, as he has turned his gaze to brain science, Stanford has emerged as a leader in the neurolaw field, with the CLB holding one-day conferences on "Reading Minds: Lie Detection, Neuroscience, Law, and Society" and "Neuroimaging, Pain, and the Law" in 2006 and 2008.
Greely, along with two Stanford neuroscience professors and two research fellows, has also been engaged in the Law and Neuroscience Project, a three-year, $10 million collaboration funded by the John D. and Catherine T. MacArthur Foundation since 2007. Presided over by honorary chair retired Supreme Court Justice Sandra Day O'Connor '52 (BA '50) and headquartered at the University of California, Santa Barbara, the project brings together legal scholars, judges, philosophers, and scientists from two dozen universities.
One network is gauging neuroscience's promises and potential perils in the areas of criminal responsibility and prediction and treatment of criminal behavior. A second network, co-directed by Greely, is exploring

[Pull quote, partly illegible:] "I THINK [...] WILL [...] THE MOST DEAD-ENDED IN ALL OF NEUROSCIENCE." - WILLIAM T. NEWSOME

the impacts of neuroscience on legal decision making.
"A lot of the judges who are participating are just frankly baffled by this flood of neuroscience evidence they are seeing coming into the courts. They want to use it if it's good and solid, but they don't want to if it's flimflam," says Law and Neuroscience Project member William T. Newsome, professor of neurobiology at the Stanford School of Medicine. He and the other scientists are helping to educate their legal counterparts about what brain imaging "can tell you reliably and what it can't." The project will produce a neuroscience primer for legal practitioners.

NEUROSCIENTIFIC EVIDENCE HAS ALREADY INFLUENCED COURT OUTCOMES in a number of instances. Brain scan data is showing some purchase in death penalty cases, after a defendant has been found guilty, says Robert Weisberg '79, the Edwin E. Huddleson, Jr. Professor of Law and faculty co-director of the Stanford Criminal Justice Center. That's because, during the penalty phase, the defendant has "a constitutional right to offer just about anything that could be characterized as mitigating evidence," he says. "What happens here is that a lot of defense evidence that wouldn't be admissible during the guilt phase, then comes back in a secondary way." For instance, Weisberg says, to try to reduce punishment to a life sentence, some defense lawyers are presenting neuroimaging pictures to argue that organic brain damage from an abusive childhood makes their client less culpable.
But the bar for admissibility of such evidence is different in different legal contexts, Weisberg adds, and it is generally set much higher in the guilt phase during which criminal responsibility is determined. In that setting, there's a greater reluctance to consider brain-based information. "Right now the courts are very, very worried about allowing big inferences to be drawn about how neuroscientific evidence explains criminal responsibility," he says.
Still, in two cases in California and New York, defendants accused of first-degree murder successfully argued for a lesser charge of manslaughter after presenting brain scans to establish diminished brain function from neurological disorders. And the first in the next generation of evidence from brain-based technology--fMRI lie detection--is already knocking at courtroom doors, posing "the most imminent risk issue" in neurolaw, says Greely.
On the Stanford campus, Greely has been working with neuroscientist Anthony D. Wagner (PhD '97), a Law and Neuroscience Project member, and Emily R. Murphy '12 and Teneille R. Brown--who have been Law and Neuroscience Project fellows--to further investigate the Indian BEOS profiling technology. The convicted woman and her current husband (also found guilty in the case) were granted bail by an appellate court while it considers the couple's appeal of the ruling.
BEOS's inventor claims that by analyzing brain wave patterns that indicate a remembrance of information about a murder, the test can distinguish the source of that knowledge--from actually experiencing the crime, versus hearing of it in the news. But there's no scientific evidence supporting that such a feat is possible, says Wagner, a Stanford associate professor of psychology. Neuroimaging studies have shown that merely imagining events in your mind triggers patterns of brain activity similar to those that arise from experiencing the events for real.
Motivated by the case, Wagner, along with psychology postdoctoral fellow Jesse Rissman and Greely, is exploring whether basic memory recognition testing is possible with fMRI. Functional MRI looks for metabolic activity in the brain to see how different parts "light up" when an individual performs certain mental tasks while lying inside an MRI machine.
The idea of fMRI memory detection raises intriguing possibilities: Could it be used to verify whether a suspect's brain recognizes the objects in a crime scene shown in a photo, or to confirm an eyewitness's identification of a perpetrator--without the test subject even uttering a word? And, if so, how accurately? The answers aren't known.
In experiments funded by the Law and Neuroscience Project, the Stanford researchers are studying new computer algorithms for analyzing a person's neural activation patterns to see if they can be used to predict whether a face the person has seen before will be recognized. Preliminary accuracy rates look good. But, Wagner cautions, it is uncertain whether the lab findings would translate over to real-world applicability.
And that is the seemingly insurmountable sticking point with fMRI lie detection. Unlike polygraph testing, which monitors for anxiety-induced changes in blood pressure, pulse and breathing rates, and sweating that accompany prevarication, fMRI scans aim to directly capture the brain in the act of deception. About 20 published peer-reviewed studies found that certain brain areas become more active when a person lies versus when telling the truth. These

"E,TERY PL1JS HERE lIAS AN ASS()CIA.TED ~(IN1JS.

Y01J COUI.JD I~IAGINE CHANGING


] ::> ·E.·. 1·;Il.' ·FI
.>. I-\. '\ T I· ()·1 :> 8''1
.. ...il.. V .. . .'\... ~ IN (} () () I)vVi\~TS
OR IN BAD
l,
WAYS."
{I-?-\ '7··1), :) ! '} OF

experiments usually analyzed h('.alt~y volunteers who


are instructed to tell simple fibs (such as about which
playing card they're viewing on a screen).
Based on such research, Cephos claims an accuracy rate of 78 percent to 97 percent in detecting all kinds of lies; No Lie MRI claims 93 percent or greater. Both companies say their tests are better than the polygraph, which has a poor record of reliability that has made it inadmissible in most courts. But they haven't convinced the broader neuroscience community that the fMRI method is good enough yet to use in the real world, with all its variegated deceptions of complicated half-truths and rehearsed false alibis. Experimental test conditions are a far cry from the highly emotional, stressful scenario of being accused of a crime for which you could be sent to prison. So thus far, Wagner says, it is premature to use fMRI lie detection technology for any legal proceeding.
[Photo: Emily Murphy '12 and Hank Greely (BA '74), the Deane F. and Kate Edelman Johnson Professor of Law]

Nonetheless, one of the first known attempts to admit such evidence into court was in a juvenile sex abuse case in San Diego County earlier this year. To try to prove he was innocent, the defendant submitted a scan from No Lie MRI, but later withdrew it. (For details about the CLB's involvement, see its blog at lawandbiosciences.wordpress.com.) No Lie's CEO, Joel Huizenga, says that he is confident the brain scan tests can pass court admissibility rules "with flying colors" if the decision isn't politicized by opponents.

But George Fisher, the Judge John Crown Professor of Law and a former criminal prosecutor, thinks the justice system won't recognize such evidence anytime soon. Trial court criteria for admitting data from a new scientific technique set stiff requirements for demonstrating its reliability, he says. In federal courts and roughly half of state courts, individual judges must apply the Daubert standard on a case-by-case basis, hearing testimony from experts on key questions: Is the evidence sound? Has the scientific technique been tested and published in the peer-reviewed literature? What is its error rate? Other state courts use the Frye test of admissibility, which requires proof that scientific evidence is generally accepted in the relevant scientific community. Fisher's guess is that fMRI lie detection evidence "will not get past the reliability stage in most places." Attempts to reproduce real-world lying in the lab, he says, "are probably unlikely to satisfy a court when it really gets down and looks hard at these studies."

Plus, the justice system has an ideological aversion to lie detection technology: In United States v. Scheffer [www.law.cornell.edu/supct/html/96-1133.ZS.html] four Supreme Court justices said that a lie detection test, regardless of its accuracy, shouldn't
be admitted into federal courts because it would infringe on the jury's role as the human "lie detector" in the courtroom. "The mythology around the system is that the jurors are able to tell a lie when they see one," says Fisher.

Greely is less sanguine that courts will keep unproven fMRI testing out. "Anybody can try to admit neuroscience evidence in any case, in any court in the country," he says, adding that busy judges are typically not well prepared to make good decisions about it.

If there were any inclination, however, for courts to accept new lie detection evidence that isn't very firmly rooted in science, it would most likely happen on the defense side rather than the prosecution's, speculates former U.S. Attorney Carol C. Lam '85, who is deputy general counsel at Qualcomm. That's because the criminal justice system is structured to give the benefit of any doubt to the defendant.

Such instances of admission, were they ever to happen, would most likely also first take place in proceedings where the judge is the only trier of fact, Lam adds; individual judges might be curious about the fMRI test and confident that they can determine the appropriate weight to give it. But Lam also notes that the defense community actually might not wish to present fMRI lie detection results in court, for fear that if this kind of evidence became widely accepted by the judicial system, prosecutors would begin to use it against criminal defendants.

THE POTENTIAL ETHICAL AND LEGAL ISSUES surrounding brain scans for deception or memory detection get thorny quickly. On one hand, everyone agrees that a highly accurate fMRI lie detection test could be a powerful weapon in exonerating the innocent, similar to forensic DNA evidence. But could prosecutors compel someone to undergo testing, or would that violate the Fifth Amendment's protection against self-incrimination? Would it violate the Fourth Amendment's bar against unreasonable searches? A broader question may be whether a right to privacy is violated if someone scans your brain to read your mind, neurolaw experts say, whether for court, the workplace, or school.

Even if fMRI lie detection's reliability remains in doubt, law enforcement and national security agents could still use it to guide criminal investigations, as they do with the polygraph. However, Steven Laken, president and CEO of Cephos, points out that no one can be unwillingly interrogated by brain scan, because it currently requires significant cooperation from the subject in preparing for and undergoing the procedure.

Greely has proposed that fMRI lie detection companies be required to get pre-marketing approval from an agency like the Food and Drug Administration. Not surprisingly, Laken and Huizenga are opposed to the idea. Huizenga says that, given the enormous time and expense this would take, the idea is really a politically motivated move to stop the technology cold. Laken, however, says he is open to a discussion with Law and Neuroscience Project researchers, other scientists, and government agencies about what it would take to validate the accuracy of the technology.

As scientists unlock the mysteries of the human brain, we may learn that some people are neurally wired in ways that compel them to certain types of unlawful behavior: Their brains made them do it. How much this should lessen their culpability or punishment are weighty questions that courts would have to grapple with.

Some philosophers and neurobiologists believe neuroscience will prove that human beings don't have free will; instead, we are creatures whose actions are determined by mechanical workings of the brain that occur even before we make a conscious decision. If that's true, these thinkers argue, it could finally explode the very concept of criminal responsibility and shatter the judicial system.

But most legal scholars don't buy into that. "I think the free will conversation is the most dead-ended in all of neuroscience," says Mark G. Kelman, the James C. Gaither Professor of Law and vice dean. The debate has been going on for 2,000 years, he says, with critics of the idea that free will exists concluding long ago that human behavior is governed by the mind, not by some imagined moral entity within it, and the mind is located in the brain. It's doubtful, Kelman says, that neuroscience will add anything new to the free will criminal responsibility arguments by detailing the precise locations or processes that explain particular actions or traits, like a lack of empathy or impulse control.

Furthermore, others point out that the criminal justice system does not rely on a premise of free will. "It depends on the hypothesis that people's behavior is shapeable by outside forces," says Fisher. "And there's a big difference between saying there is no free will and saying the risk of punishment has no impact on a person's calculations about what to do next."
What is far more probable in the future, many experts say, is that one of neuroscience's biggest influences would be in revamping processes like sentencing or parole, or in forcing us to rethink such ideas as the rehabilitation of criminals, sexual predators, mentally insane convicts, or drug offenders. Although minimum sentences are mandated in many situations, judges still have some discretion in how they handle defendants in certain cases. If research led to better predictions of future behavior that could help distinguish the more dangerous lawbreakers from the safer bets, courts could make better decisions about how long a sentence to give a defendant, and whether he should be given probation or sent to prison. And, once he's in jail, when he should come out.

"If we can better evaluate what the problem is and what the chances are of controlling the defendant's behavior in the future, we're going to be better off," says O'Connor. Answers from neuroscience would be extremely welcome in decision making when defendants are committed to a mental institution. "When should a person be confined or when is it appropriate to have a person released on medication?" she asks. "There's just a need for that kind of information."

Drug addiction is another area where the law is hungry for better solutions and more effective treatments. "Our jails are overloaded, and they are overloaded with people who have committed drug crimes," says O'Connor. "So it just becomes enormously important to figure out how people get addicted to drugs and what we can do to sever that connection if we can."

Predictions of recidivism might be improved through the invention of brain imaging tools that assess whether an addict has truly broken the habit, says Newsome. For example, one possible test could be to scan the person's brain while she views video clips of people injecting heroin. If research established that such tempting scenes reliably triggered greater activity in the emotional centers of drug abusers' brains, scans taken before and after treatment could provide "an objective basis for saying, 'This person's getting on top of their problem,'" Newsome says.

Parole boards have been moving toward taking greater account of evidence-based predictions of behavior, adds Weisberg. "It is possible that neuroscientific evidence could be used to weigh into influencing the conditions of parole or the kind of treatment program the prisoner is sent into," he says.

When it comes to rehabilitation, new treatments that seek to change criminal behavior raise their own potentially Orwellian ethical dilemmas, though. A vaccine against cocaine is in clinical trials, Greely says. If it ever reaches the market, would the legal system force coke addicts to get vaccinated, or otherwise imprison them? "Every plus here has an associated minus," he says. "You could imagine changing behaviors in good ways, or in bad ways." The thought of giving the government strong tools for altering people's behavior through direct action on the brain is, he says, "scary."

Prognosticators of neurolaw must walk a careful line in making conjectures about the future. A few years ago, a British bioethics scholar complained to Greely that the law professor's dissections of the legal implications of fMRI lie detection paid short shrift to whether the technology actually works, possibly leaving people with the impression that it will, or already does.

"That was a really good wake-up call," recalls Greely. Still, while no one knows exactly where the science will take us, he says, the goal of neurolaw is to clarify how coming discoveries might affect the legal world and to point out tensions, gaps, and areas where the law may need rethinking. Society must worry about both long-term implications of the hypothetical future and short-term realities of the present, he says. "You've got to look in both directions."

Ingfei Chen is a science writer whose work has appeared in The New York Times, Smithsonian, Discover, and other publications. To view an interview with Emily Murphy about the Indian case, go to www.stanfordlawyer.com.
