LAW OFFICE OF
J. HOUSTON GORDON
SUITE 300
UNDO HOTEL BUILDING
114 W. LIBERTY AVENUE
P.O. BOX 846
J. HOUSTON GORDON
AMBER N. GRIFFIN
COVINGTON, TENNESSEE 38019
PHONE (901) 476-7100
FAX (901) 476-3537
Dear Stuart:
To the extent deemed necessary at trial, the Defendant, Lorne Allan Semrau, may or will call Dr. Steven J. Laken, Ph.D., P.O. Box 45, Tyngsboro, MA 01879, as an expert to testify in this cause concerning issues involving scientific brain testing of Dr. Semrau to determine Defendant's truthfulness and lack of intent to defraud the government, as established to a scientific certainty by means of fMRI examination of Defendant's brain conducted by Dr. Laken.
MEMPHIS OFFICE: 2121 ONE COMMERCE SQUARE, MEMPHIS, TENNESSEE 38103
PHONE (901) 526-6464 FAX (901) 526-6467
Dr. Laken is the President and CEO of Cephos Corporation. He graduated in 1993
with a Bachelor of Science in Genetics and Cell Biology from the University of Minnesota
and graduated with a Ph.D. in Cellular and Molecular Medicine from The Johns Hopkins
School of Medicine in 1999. Dr. Laken was named one of Technology Review's "100 Young Innovators Under 35" by the Massachusetts Institute of Technology in 2002 and
received the David Israel Macht Award on Young Investigators Day in May 1999 from
Johns Hopkins University.
Attached hereto is Dr. Laken's curriculum vitae setting forth his professional
qualifications, his publications, articles, invited presentations, patents, and previous
testimony.
Dr. Laken is expected to testify that fMRI examination is a reliable and sound
scientific methodology used to determine whether and when a person is being truthful
in response to questions posed. He will testify that the brain responds differently when
a person is being truthful as opposed to when a person is being deceitful. The fMRI test records the brain's activity and is read by a computer, not a subjective analyst. There is no way for the subject to "game" the system. He will testify that this fMRI
testing is peer-reviewed, that the results are repeatable, that it has a rate of error of
significantly less than 10%, is objectively analyzed by a computer process as opposed
to a subjective observer, and that this science is based on tested technology that is
established methodology in his field. The fMRI is the cutting edge of scientific
observation and analysis of brain activity.
Dr. Laken will further testify that fMRI technology works by scanning the brain
with the MRI scanner while the subject is in the process of responding to random
questions selected by the computer. The scanner maps the blood flow of the brain and
the areas of the brain triggered by the response of the subject. Dr. Laken is expected
to testify that truthfulness triggers the portions of the brain where memory is stored,
while deceit triggers much more of a brain response, as one attempts to suppress the
memory and create a false answer.
Dr. Laken will further testify that Dr. Semrau was presented questions using fMRI technology and was instructed to respond to each question in either a truthful or a deceitful manner, depending on the question posed, to establish his baseline brain
activity on the fMRI. He was then tested to determine whether he was telling the truth
about the facts, circumstances and his intent related to the charges in the Second
Superseding Indictment. The fMRI screening demonstrated to a scientific certainty that
Defendant was truthful and possessed no intent to defraud or cheat the government.
This is Dr. Laken's opinion to a reasonable degree of scientific certainty based on his
testing of Dr. Semrau by means of fMRI imaging.
Dr. Laken will further testify that fMRI testing for truthfulness differs from
polygraph testing, as polygraph testing measures a person's emotional responses
(heart rate, breathing, and physical responses) and is interpreted by an examiner, while
fMRI testing scans the brain patterns that develop during both truthful and dishonest
answers, and is interpreted by a computer. Thus, the testing is objective in nature and
not subject to the subjective interpretations of the analyst.
Dr. Laken reserves the right to supplement, modify, or add to his opinions in the event additional evidence, facts, or testing is required.
Kindest regards.
Sincerely,
Functional MRI Lie Detection: Too Good to be True? -- Simpson 36 (4): 491 -- Journal of...
Case 1:07-cr-10074-JPM-tmp   Document 168-1   Filed 02/19/10   Page 4 of 15   PageID 1441
REGULAR ARTICLE

Functional MRI Lie Detection: Too Good to be True?

Joseph R. Simpson, MD, PhD

Dr. Simpson is Staff Psychiatrist, VA Long Beach Healthcare System, Long Beach, CA, and Clinical Assistant Professor of Psychiatry and Behavioral Sciences, University of Southern California Keck School of Medicine, Los Angeles, CA. The views expressed in this article do not necessarily reflect any policy or position of the U.S. Department of Veterans Affairs, the University of Southern California, or the USC Keck School of Medicine. Address correspondence to: Joseph R. Simpson, MD, PhD, PO Box 15597, Long Beach, CA 90815.
Abstract

Neuroscientists are now applying a 21st-century tool to an age-old question: how can you tell when someone is lying? Relying on recently published research, two start-up companies have proposed to use a sophisticated brain-imaging technique, functional magnetic resonance imaging (fMRI), to detect deception. The new approach promises significantly greater accuracy than the conventional polygraph, at least under carefully controlled laboratory conditions. But would it work in the real world? Despite some significant concerns about validity and reliability, fMRI lie detection may in fact be appropriate for certain applications. This new ability to peer inside someone's head raises significant questions of ethics. Commentators have already begun to weigh in on many of these questions. A wider dialogue within the medical, neuroscientific, and legal communities would be optimal in promoting the responsible use of this technology and preventing abuses.
Essential to the working of modem legal systems is an assessment of the veracity of the participants in
the process: litigants and witnesses, victims and defendants. Falsification or lying by any of these parties
can and does occur. Outside the legal system, detection of deception is also of critical importance in the
http://www.jaapl.org/cgi/content/full/36/4/491   1/21/2010
corporate world and in the insurance industry, as illustrated by the practice of hiring private investigators
to follow and videotape disability claimants. Because human beings can be very skilled at lying,1,2 and, in general, are poor at determining when they are being lied to,1-3 scientific, objective methods for determining truthfulness have been sought for decades.
The most widespread objective method for assessing veracity is multichannel physiological recording,
commonly known as the polygraph or lie detector.1,2 This approach is based on the fact that the act of
lying can cause increased autonomic arousal. Changes in autonomic arousal are detected by measuring
pulse rate, respiration, blood pressure, and sweating (variously known as the galvanic skin response
[GSR], skin conductance response [SCR], or electrodermal activity).
The reliability and validity of the polygraph are controversial.6,7 Estimates of its accuracy range from a high of 95 percent to a low of 50 percent,6,8 with the best estimate probably around 75 percent sensitivity and 65 percent specificity.6 This relatively low accuracy is a major reason that polygraph evidence is generally, though not universally, inadmissible in legal proceedings.3
The past six years have seen the development of a possible new lie-detection technique that is not based
on the measurement of autonomic reactions. This is the application of a widely used tool in
neuroscientific research, functional magnetic resonance imaging (fMRI), to the task of obtaining
measurements of cerebral blood flow (a marker for neuronal activity) in individuals engaged in
deception. Within the past two years, two separate research groups have devised experimental paradigms
and statistical methods that they claim allow identification of brain activity patterns consistent with
lying. The approaches can be used on individual subjects, and their creators claim approximately 90
percent accuracy. Two commercial enterprises, No Lie MRI, Inc., and Cephos Corporation, were
launched in 2006, each with the goal of bringing these techniques to the public for use in legal
proceedings, employment screening, and other arenas (such as national security investigations) where
polygraphs have been used.
The announcement of this first potential commercial application for fMRI has attracted a great deal of attention, both from the popular media9-15 and from bioethicists.16-26 In June 2006, the American Civil Liberties Union sponsored a forum on the subject of fMRI lie detection and filed a Freedom of Information Act request for government records relating to the use of fMRI and other brain-imaging techniques for this purpose.27 Much of the concern centers on possible uses and abuses of brain-imaging technologies in interrogation of enemy combatants or other terrorism suspects.
The focus of this article is the potential use of fMRI to detect deception in noninterrogation contexts, specifically in criminal and civil legal proceedings and in the workplace. At the time of this writing, there do not appear to have been any instances of the use of fMRI lie detection in a legal or employment
setting. However, there is little doubt that attempts to apply this new technology to real-world situations
will be made, most likely in the near future.
The Science Behind the Technique

Very briefly, functional MRI relies on the fact that cerebral blood flow increases in regions of heightened neuronal activity, a change that can be detected as the blood-oxygen-level-dependent (BOLD) signal.
In the experimental setting, the BOLD signal over the whole brain is acquired while volunteer subjects
perform various cognitive tasks.30 The imaging data are then transformed to a standard brain template
and averaged across subjects. Statistical techniques are used to identify a significant change in blood
flow to a particular brain region in one condition compared with another. It is also possible to analyze
data from within a single subject.
The BOLD signal is both valid and reliable in properly constrained experimental paradigms. There is very good agreement between fMRI and PET for the mapping of regional changes in brain activity.31 Functional MRI is now being used for presurgical mapping for epilepsy and brain tumor surgeries32-35 and is being studied for other diagnostic purposes.36
Some of the experimenters attempted to enhance the emotional salience of the lying task through monetary incentives: in one paradigm the subjects were told they would double their payment from $50 to $100 if they were able to deceive the experimenters49-51; in others they were told that they would forfeit their $20 payment if their deception was detected.44,45 In another study, the experimenters did not
manipulate rewards, but put on a demonstration for the subjects before the scanning session that implied
that the testers could see the volunteers' brain activation results in real time.47 The stated purpose of this
was to approximate the conditions of a polygraph examination.
The most consistent results of these studies are greater activation of certain prefrontal and anterior
cingulate regions in the lie conditions relative to the truth conditions. It has been hypothesized that these
regions are recruited for the purpose of inhibiting a prepotent response (i.e., giving a true answer).52 It
has been proposed that this is one of the major cognitive differences between truth and deception:
The liar is called upon to do at least two things simultaneously. He must construct a new
item of information (the lie) while also withholding a factual item (the truth).... [T]he
truthful response comprises a form of baseline, or prepotent response.... We might,
therefore, propose that responding with a lie demands some form of additional cognitive
processing, that it will engage executive, prefrontal systems (more so than telling the truth)
[Ref. 52, p 1757].
These studies do not simply measure neural correlates of autonomic arousal. Thus, the technique may
have some advantages over conventional polygraph methodology. For example, presumably, mere
nervousness in an innocent subject would not create a false positive for deception.
Most of these studies reported only the results of analyses of pooled data from a group of subjects.
However, for the method to have any practical value, it must be applied to individuals. Two separate
research groups have used different statistical methodologies to do just that.
Kozel et al.51 used a modified Guilty Knowledge Test in which 30 subjects engaged in a mock crime of
stealing a watch or ring. (In the debriefing, 60% of the subjects indicated that they thought the crime was
real, which supports the validity of the paradigm.) Subjects were then presented 80 different questions
visually while being scanned. Yes or no responses were delivered by button press. The subjects were
instructed to lie about having taken the item but to answer all other questions truthfully. They were paid
$50 for participating, but were told that they would receive an additional $50 if an observer could not
tell when they were lying (in actuality, all subjects received $100).
Statistical analysis of the group data identified one anterior cingulate and two prefrontal regions that
were more activated in the lie than in the truth condition. The regions were similar to those activated in
several of the other lie-detection paradigms mentioned. By analyzing the activations in these regions in
each subject (pooled across all of that subject's responses), Kozel et al.51 reported that they were able to predict accurately which item (watch or ring) was taken in 28 of 30, or 93 percent, of the cases. The
activity in the same regions was then applied to the data from a new set of 31 subjects scanned under
identical conditions. For this group, the method identified the item taken with 90 percent accuracy (28 of
31 subjects).
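The accuracy figures in the Kozel study are simple proportions of correctly classified subjects. A minimal sketch of the arithmetic, using the counts quoted above (the function name is illustrative, not from the study):

```python
def accuracy(correct, total):
    """Fraction of subjects whose deceptive item was correctly identified."""
    return correct / total

# 28 of 30 correct in the original cohort; 28 of 31 in the replication cohort.
first_cohort = accuracy(28, 30)
replication = accuracy(28, 31)

print(round(first_cohort * 100))  # reported as 93 percent
print(round(replication * 100))   # reported as 90 percent
```

This also makes clear why the two cohorts round to different percentages despite the same numerator: the replication added one subject to the denominator.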
In their discussion, Kozel et al.51 suggest that their method could be used in real-life settings by first
testing the subject with the Guilty Knowledge Test mock crime scenario and then, if the subject's brain
activation patterns indicate reliable separation between lies and truth, scanning them again while they
respond to questions about the actual topic of interest. This approach has been licensed by the Cephos
Corporation.
Davatzikos et al.48 scanned 22 volunteers in a Guilty Knowledge Test paradigm involving lying about
having a particular playing card in one's possession. Subjects were told they would be paid $20 only if
they successfully concealed the fact that they possessed the card (in fact all subjects were paid). The
researchers employed a statistical approach involving the application of machine learning methods to
their entire dataset. Using this approach, they reported high accuracy in distinguishing a lie from the
truth. Whether applied to single events (i.e., a single button-press response) or to all the data from a
single subject, the sensitivity for detection of lying was around 90 percent, and the specificity was
around 86 percent. This methodology is used by No Lie MRI, Inc.
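Sensitivity and specificity, the two figures reported for the Davatzikos study, capture different error types: sensitivity is the fraction of actual lies flagged as lies, while specificity is the fraction of truthful responses correctly cleared. A short sketch of the definitions, using hypothetical confusion-matrix counts chosen only to match the reported rates (not data from the study):

```python
# Hypothetical counts for illustration of the definitions only.
true_positives = 90    # lies correctly flagged as deceptive
false_negatives = 10   # lies missed (classified as truthful)
true_negatives = 86    # truthful responses correctly cleared
false_positives = 14   # truthful responses wrongly flagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(sensitivity)  # 0.9
print(specificity)  # 0.86
```

A high sensitivity with a lower specificity, as here, means the method misses few lies but flags some truthful answers as deceptive, which is the false-positive risk the article returns to below.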
Limitations of the Technique

The technique obviously cannot be performed if the individual refuses to enter the scanner or refuses to respond to the questions posed.
Over and above these hindrances are more complex questions about transitioning from the research
laboratory to the real world. Some of the concerns that have yet to be fully addressed are discussed in the
following sections.
The studies conducted thus far have been carried out on healthy volunteers who were screened for
neurological and psychiatric disorders, including substance use. There has been no testing of fMRI lie detection paradigms in juveniles, the elderly, or individuals with Axis I and/or Axis II disorders, such as
substance abuse, antisocial personality disorder, mental retardation, head injury, or dementia. It is
unclear whether and how such diagnoses would affect the reliability of the approach. (As mentioned, a
potential advantage of the method in comparison with the polygraph is that it does not rely on autonomic
reactions, and thus individuals with antisocial personality disorder may lose the advantage of evading
detection due to their hyporesponsivity during polygraph testing.53)
All of the published literature involves scenarios in which the volunteer subjects have been instructed to
lie. No literature addresses the question of how this basic fact affects brain activation patterns, in
comparison with the more realistic situation in which the person being tested makes a completely free
decision about whether to lie and repeats this process for each question asked.
None of the volunteer subjects faced serious negative consequences for unconvincing lying, although in
some cases they believed there was a monetary incentive for lying successfully.
The fMRI approach to lie detection does not rely on detecting signs of autonomic arousal or nervousness
that can be associated with lying. This approach reduces the chance that a person who is truthful will be
classified as deceptive on the basis of his being fearful (for any reason) during testing. However, the
other side of this coin is that fMRI lie detection appears to depend at least in part on the suppression of
competing responses. It does not directly determine what those competing responses are, and they may
not, in fact, be untruths. As pointed out by Grafton et al.:
When defendants testify, they do inhibit their natural tendency to blurt out everything they
know. They are circumspect about what they say. Many of them also suppress expressions
of anger and outrage at accusation. Suppressing natural tendencies is not a reliable indicator
of lying, in the context of a trial [Ref. 54, pp 36-37].
There are presently no data regarding the likelihood of this type of false-positive result.
Effects of Countermeasures
It is hypothesized that much of the frontal lobe activation in imaging studies of deception is related to
the suppression of competing responses. Unknown at present is the potential effect of extensive
rehearsal. Ganis et al.39 have already demonstrated differential activation patterns between spontaneous
and rehearsed lies. The rehearsal in that experiment was brief, on the order of minutes. If a person spent
weeks practicing a fabricated story (akin to the preparation an intelligence officer might undertake in
assuming a false identity), would the activations associated with response suppression remain as strong?
For a person with much at stake and adequate advance warning, it is not unreasonable to assume that
extensive rehearsal might be attempted to try to fool the technique. It is not known what effect, if any,
such a cognitive countermeasure would have.
Other countermeasures-for example, one analogous to an approach commonly used against polygraph
examinations (i.e., attempting to raise the baseline response to nontarget questions, to reduce the
differential between target and nontarget responses)-could also be attempted by the subject while in the
scanner. There are also no data on what impact such an action would have.
In some psychiatric conditions, subjective experience is at odds with objective reality. This dichotomy is
most glaring in the case of psychosis. It appears that in the case of a delusion, the technique would not
show any deception. Langleben et al.55 described a medical malpractice lawsuit in which a patient
accused her former psychotherapist of sexual abuse. Both the patient and the physician took polygraph
examinations, and both passed. Other evidence suggested that the patient was most likely suffering from
a delusion. Such a situation probably would not be amenable to the use of fMRI lie detection. Other
examples where fMRI may not add useful information might include dementias or amnestic disorders
with confabulation, somatoform disorders, and the pseudologia fantastica seen in some patients with
factitious disorders.
Legal Considerations

These unresolved questions suggest that the potential uses of fMRI lie detection in real-life situations will remain relatively restricted for the foreseeable future. A criminal defendant who failed an fMRI lie detector test could still assert reasonable doubt, unlike the case with DNA identification, for example, with which the odds of being identified by chance are on the order of billions to one. Thus, there...
More generally, the present state of the science in this area is unlikely to meet legal standards for
admissibility in court proceedings. The literature on the technique is sparse thus far. As we have seen,
only two groups have published data on single-subject results. The Frye v. U.S.56 standard, which was applied throughout the nation for seven decades until 1993 and is still the standard in some jurisdictions,
requires that a scientific technique have general acceptance in the relevant scientific community for the
results to be admissible as evidence. The use of fMRI for lie detection would not pass such a test at
present.
The Daubert v. Merrell Dow Pharmaceuticals, Inc.57 standard calls on the trial court to act as a
gatekeeper. The court must determine whether the proposed scientific evidence is relevant to the issue at
hand and assess its reliability. Under this standard, guidelines for the trial court to use in this assessment
include: whether the procedure is generally accepted in the scientific community (as in Frye), whether
the procedure has been tested under field conditions, whether it has been subject to peer review, the
known or potential error rate, and whether standards for the operation of the technique have been
developed. At present, fMRI lie detection would be unlikely to meet all of these criteria. The technique
has not been tested in the field (i.e., in real civil or criminal cases), but only under laboratory conditions.
There is also no standardization of the various techniques and protocols involved in performing and
analyzing the scans.
Even if these obstacles are eventually overcome, the technique would face additional hurdles before any
use in criminal proceedings. The Fifth Amendment right to avoid self-incrimination appears to rule out
compelling a criminal defendant to submit to the technique. Another unresolved question is whether an
fMRI scan constitutes a search, with potential Fourth Amendment implications.
Functional MRI lie detection may see its first application in nonjudicial settings, such as employment
screening. Despite the caveats described herein, the published studies suggest that the technique may be
more accurate than traditional polygraph methods, at least in Guilty Knowledge Test paradigms for
which individual subject data have been published. These findings may make it attractive to employers.
The fMRI technique could be used in conjunction with polygraphs, either on a routine basis, or in cases
in which polygraph results are equivocal. Bioethicist Ronald Green has predicted, "Brain-imaging lie
detection will most likely be used where absolute reliability is not needed and where a predominately naïve population is under scrutiny...the technology is likely to supplement or take the place of written honesty tests and polygraphy" (Ref. 23, p 54).
However, there are still significant legal concerns to be addressed before such applications become
widespread. Outside of government agencies and companies involved in the provision of security, the
routine use of polygraph examinations is generally barred by the Federal Employee Polygraph
Protection Act of 1988 (FEPPA).58 Whether fMRI lie detection is covered under FEPPA is not clear at
present. The key language in FEPPA states:
The term "lie detector" includes a polygraph, deceptograph, voice stress analyzer,
psychological stress evaluator, or any other similar device (whether mechanical or
electrical) that is used, or the results of which are used, for the purpose of rendering a
diagnostic opinion regarding the honesty or dishonesty of an individual.... The term
"polygraph" means an instrument that-(A) records continuously, visually, permanently,
and simultaneously changes in cardiovascular, respiratory, and electrodermal patterns as
minimum instrumentation standards; and (B) is used, or the results of which are used, for
the purpose of rendering a diagnostic opinion regarding the honesty or dishonesty of an
individual [Ref. 58, Section 2001].
Does an fMRI scan, which does not measure psychological stress or the physiological parameters detected by a polygraph, nevertheless qualify as similar to the polygraph for the purposes of the FEPPA?
At present, there has been no legally binding interpretation of this question. No Lie MRI, Inc. has taken
the stance that the FEPPA does not apply:
U.S. law prohibits truth verification/lie-detection testing for employees that is based on
measuring the autonomic nervous system (e.g., polygraph testing). No Lie MRI, Inc.
measures the central nervous system directly and such is not subject to restriction by these
laws. The company is unaware of any law that would prohibit its use for employment screening.59
Ultimately, whether fMRI lie detection is prohibited by FEPPA may end up being determined by statute
or court decisions.
Ethics-Related Considerations

The growing body of scientific literature and the advent of commercial enterprises to market brain imaging-based deception...
An argument can be made that this concern reflects an overabundance of caution. The polygraph may be
even less specific to deception than fMRI. It is also not clear how allowing the polygraph but prohibiting
fMRI lie detection addresses the question of the imprecise definition of deception.
A related matter is the possibility of the premature adoption of a scientifically immature technology.
Given the comparatively narrow research base on which fMRI lie detection currently rests, several
commentators have urged caution in allowing it to be used for practical applications. One author has
recommended that any new lie-detection device go through a complete government approval process,
analogous to the Food and Drug Administration's approval process for drugs and medical devices.22
There are concerns that a rush to apply the technique and the competition for limited government
funding could inhibit the conduct of appropriate research in the area. "Premature commercialization will
bias and stifle the extensive basic research that still remains to be done" (Ref. 16, p 47).
Commentators have also pointed out the danger of the so-called CSI effect, meaning that the aura of big
science and high technology surrounding complex and expensive tests may lead to an overestimation of
the reliability and utility of fMRI lie detection among lay people, including law enforcement personnel
and other investigators, judges, and jurors. If fMRI lie detection were misinterpreted as being an
infallible method of distinguishing truth from falsehood, participants in legal proceedings could
experience significant pressure to submit to testing, with refusal being interpreted as evidence of guilt. 16
The reasoning would be: the test detects lies; therefore, anyone who refuses to take it must have
something to hide. Although such a conclusion is not at all supported by the actual data, it is not
inconceivable that some may draw it.
Another question of ethics concerns the right to the privacy of one's thoughts. Neuroethicists have coined the term cognitive liberty61 to refer to the "limits of the state's right to peer into an individual's thought processes with or without his or her consent, and the proper use of such information in civil, forensic, and security settings" (Ref. 16, pp 39-40). Under what circumstances should a government agency (or, for that matter, an employer or insurance company) be allowed to look for deception with
this technique? Our society has not yet grappled with these critical questions, but if enthusiasm for fMRI
lie detection increases, it appears that such a debate will be essential. In the words of one commentator,
"Constitutional and/or legislative limitations must be considered for such techniques" (Ref. 21, p 61).
Another author has proposed that using a "neurotechnology device to essentially peer into a person's thought processes should be unconstitutional unless it is done with the informed consent of that individual."
Related to the concept of cognitive liberty is the possible use of fMRI against the will of the subject.
Hypothetical scenarios in which this might occur have been described in the context of national security
investigations or other types of high-stakes interrogations. 60,62 It is not inconceivable that terrorism
suspects could be restrained and placed in an MRI scanner in such a way that they would be unable to
move their heads enough to foil the scan. Even if they refused to answer questions, it might be possible
to determine from the brain's response whether the subject recognizes a sensory stimulus, such as a
sound or image.
It should be clear from the preceding discussion, however, that lie detection using fMRI requires the
subject to answer questions. Furthermore, as in a traditional polygraph examination, a comparison to
known truthful responses by the subject is necessary for the technique to work. In any event, the
coercive use of brain-imaging technology would certainly be fraught with ethics-related, legal, and
constitutional difficulties. Scientific and mental health organizations may soon want to articulate
positions on the ethics of nonmedical uses of brain-imaging technology, coercive or not.
Conclusion
With ongoing research, and likely improvements in accuracy in the laboratory setting, it does not seem
unreasonable to predict that fMRI lie detection will gain wider acceptance and, at a minimum, replace
the polygraph for certain applications. What seems far less likely is the science-fiction scenario in which
a criminal defendant is convicted solely on the basis of a pattern of neuronal activation when under
questioning.
Thus far, under carefully controlled experimental conditions, an accuracy of 90 percent is the best that
has been achieved. Improvements in the technology that would reduce the error rate from 10 percent to
something comparable with the billions-to-one accuracy of DNA testing are difficult to conceive of,
given the mechanics of the science involved.
Perhaps more important, the technique does not directly identify the neural signature of a lie. Functional
MRI lie detection is based on the identification of patterns of cerebral blood flow that statistically
correlate with the act of lying in a controlled experimental situation. The technique does not read minds
and determine whether a person's memory in fact contains something other than what he or she says it
does. The problem of false-positive identification of deception is unlikely to be overcome to a sufficient
degree to allow the results of an fMRI lie detection test to defeat reasonable doubt. Furthermore, it is
difficult to envision compelling an unwilling criminal defendant to submit to a test, because of the Fifth
Amendment right against self-incrimination. If a criminal defendant volunteers to take the test, it is still
not clear that the results would be any more admissible under current conditions than the results of a
standard polygraph examination would be. It appears to be too early to predict whether fMRI lie
detection will ever reach the level of reliability and standardization needed to meet Frye or Daubert
criteria.
If the Federal Employee Polygraph Protection Act is interpreted as applying to fMRI lie detection, it will
not be used in the general workplace. Nevertheless, the next few years may see the use of the technology
in government and in the other limited circumstances in which nongovernmental employers are allowed
to administer polygraph examinations. Although there are several unresolved questions regarding the
ethics of this type of application, it is not clear that the concerns are qualitatively different, with fMRI lie
detection in this context, from those raised by the polygraph, or from concerns about the use of brain
imaging in other contexts such as research or diagnostics. As previously mentioned, absolute reliability
is not necessarily required in employment applications.
Like polygraph evidence, which is generally inadmissible, fMRI lie detection may still find a role in
civil suits and in criminal investigations. No claims would be made that the results definitively
determine the truth as do those of the more traditional forensic tests, but the findings could be used in
settlement negotiations. Police could employ the technique in criminal investigations as a means to rule
out suspects, as they already do with the polygraph.63
A variety of practical, legal, and ethics-related concerns surround the potential use of functional MRI for
the purpose of lie detection. Given the current state of the field and the unresolved practical matters
mentioned herein, the forensic role of the technique is likely to be limited to the civil arena, with both
sides agreeing to have one or more parties consent to undergo the test. Use in the workplace is also
possible, but if FEPPA applies, then the use of fMRI lie detection in employment will be as limited as
the use of the polygraph. Although the ethics-related dangers are perhaps not as grave in employment
applications or civil suits as they would be in a criminal case, an ongoing scientific, legal, and bioethics
dialogue about the appropriate uses of fMRI lie detection is certainly prudent and timely.
References
1. Ekman P, O'Sullivan M: Who can catch a liar? Am Psychol 46:912-20, 1991
2. Spence SA: The deceptive brain. J R Soc Med 97:6-9, 2004 [Free Full Text]
3. Ford EB: Lie detection: historical, neuropsychiatric and legal dimensions. Int J Law Psychiatry 29:159-77, 2006 [Medline]
4. Office of Technology Assessment: Scientific validity of
polygraph testing: a research review and evaluation. A Technical Memorandum. Washington, DC:
US Office of Technology Assessment, 1983
5. Office of Technology Assessment: The use of integrity tests for pre-employment screening.
6. Brett AS, Phillips M, Beary JF: Predictive power of the polygraph: can the "lie detector" really
detect liars? Lancet 1:544-7, 1986
7. Lykken DT: A Tremor in the Blood: Use and Abuse of the Lie Detector. New York: Plenum Press, 1998
8. Stern PC: The polygraph and lie detection, in Report of the National Research Council Committee to Review the Scientific Evidence on the Polygraph. Washington, DC: The National Academies Press, 2003, pp 340-57
9. Hall CT: Fib detector. San Francisco Chronicle. November 26, 2001, p A10
10. Fox M: Lying: it may truly be all in the mind. Courier Mail (Queensland, Australia). December 1, 2004, p 3
11. Anonymous. Signs of lying are all in the mind. The Australian. December 1, 2004, p 10
12. Talan J: No lie, it's easier to tell the truth. Houston Chronicle. October 9, 2005, p 7
13. Haddock V: Lies wide open. San Francisco Chronicle. August 6, 2006, p E1
14. Hadlington S: Science and technology: the lie machine. The Independent (London). September 13, 2006, p 10
15. Henig RM: Looking for the lie. New York Times Magazine. February 5, 2006, pp 47-53, 76, 80
16. Wolpe PR, Foster KR, Langleben DD: Emerging neurotechnologies for lie-detection: promises and perils. Am J Bioeth 5:39-49, 2005 [Medline]
17. Boire RG: Searching the brain: the Fourth Amendment implications of brain-based deception detection devices. Am J Bioeth 5:62-3, 2005 [Medline]
18. Buller T: Can we scan for truth in a society of liars? Am J Bioeth 5:58-60, 2005 [Medline]
19. Fins JJ: The Orwellian threat to emerging neurodiagnostic technologies. Am J Bioeth 5:56-7, 2005 [Medline]
20. Fischbach RL, Fischbach GD: The brain doesn't lie. Am J Bioeth 5:54-5, 2005 [Medline]
21. Glenn LM: Keeping an open mind: what legal safeguards are needed? Am J Bioeth 5:60-1, 2005 [Medline]
22. Greely HT: Premarket approval regulation for lie detections: an idea whose time may be coming. Am J Bioeth 5:50-2, 2005 [Medline]
23. Green RM: Spy versus spy. Am J Bioeth 5:53-4, 2005 [Medline]
24. Moreno JD: Dual use and the "moral taint" problem. Am J Bioeth 5:52-3, 2005 [Medline]
25. No author listed: Neuroethics needed. Nature 441:907, 2006 [Medline]
26. Pearson H: Lure of lie detectors spooks ethicists. Nature 441:918-19, 2006 [Medline]
27. American Civil Liberties Union: ACLU seeks information about government use of brain scanners in interrogations (June 28, 2006 press release). Available at http://www.aclu.org/privacy/medical/26035prs20060628.html. Accessed February 9, 2007
28. Ogawa S, Lee TM: Magnetic resonance imaging of blood vessels at high fields: in vivo and in vitro measurements and image simulation. Magn Reson Med 16:9-18, 1990 [Medline]
29. Ogawa S, Tank DW, Menon R, et al: Intrinsic signal changes accompanying sensory stimulation: functional brain mapping with magnetic resonance imaging. Proc Natl Acad Sci USA 89:5951-5, 1992 [Abstract/Free Full Text]
30. Huettel SA, Song AW, McCarthy G: Functional Magnetic Resonance Imaging. Sunderland, MA: Sinauer Associates, Inc., 2004
31. Raichle ME, Mintun M: Brain work and brain imaging. Annu Rev Neurosci 29:449-76, 2006 [Medline]
32. Benke T, Koylu B, Visani P, et al: Language lateralization in temporal lobe epilepsy: a comparison between fMRI and the Wada Test. Epilepsia 47:1308-19, 2006 [Medline]
33. Larsen S, Kikinis R, Talos IF, et al: Quantitative comparison of functional MRI and direct electrocortical stimulation for functional mapping. Int J Med Robot 3:262-70, 2007 [Medline]
34. Kesavadas C, Thomas B, Sujesh S, et al: Real-time functional MR imaging (fMRI) for presurgical evaluation of paediatric epilepsy. Pediatr Radiol 37:964-74, 2007 [Medline]
35. Pelletier I, Sauerwein HC, Lepore F, et al: Non-invasive alternatives to the Wada test in the presurgical evaluation of language and memory functions in epilepsy patients. Epileptic Disord 9:111-26, 2007 [Medline]
36. Dickerson BC: Advances in functional magnetic resonance imaging: technology and clinical
62. Thompson S: The legality of the use of psychiatric neuroimaging in intelligence interrogation. Cornell Law Rev 90:1601-37, 2005 [Medline]
63. Anonymous. The polygraph technique Part II: value during an investigation. Available at http://www.policelink.com/training/articles/1947-the-polygraph-technique-part-ii-value-during-an-investigation. Accessed December 6, 2007
I. INTRODUCTION
"Illustration" or "map" are among the most frequently used words for
translating the Chinese character tu, a graphic representation of any
phenomenon that can be pictured in life and society, whether in traditional
China or elsewhere.1 Investigations of the early role of tu in Chinese culture
first set out to answer questions about who produced tu, the background of its
originator, and the originator's purpose. How were pictures conceptualized?
Interpreted? In examining tu, Chinese scholars stressed the relational aspect
of tu and shu (writing) to answer both these questions, as well as the
importance of not robbing an image of its overall beauty and life with too
much graphic detail. In the West, specific concepts of technical or scientific
illustrations did not exist before the Renaissance. With the coming of that
age, technical illustration became a specific branch of knowledge and activity,
with its own specific goals and ends. Although these developments did not
proceed in any linear manner in either China or the West, they mirrored the
growing importance of science and technology in both societies. However, the
desire to understand the function of the brain through observation of human
behavior and deficits in patients was marked especially in the West. Ideas
about cerebral localization paved the way to developments for mapping brain
function, a path that has seen at least eight different technological
approaches since the first successful measurements of brain electrical activity
in 1920 by Hans Berger.2 Each technological approach has different
† Deane F. and Kate Edelman Johnson Professor of Law, Professor (by courtesy) of
Genetics, and Director of the Center for Law and the Biosciences, Stanford University.
Professor Greely would like to thank Sean Rodriguez for his excellent research assistance on
this Article.
†† Associate Professor (Research) of Neurology, and Director of the Program in
Neuroethics, Center for Biomedical Ethics, Stanford University.
1 Hans Ulrich Vogel, Technical Illustrations in Ancient China: Achievements and
Limitations, PROC. SECOND SHANGHAI ROUNDTABLE, ANCIENT CHINESE SCI. & HIGH
TECHNOLOGY: ROOTS, FRUITS AND LESSONS 17-20 (2002).
2 David Millett, Hans Berger: From Psychic Energy to the EEG, 44 PERSP. BIOLOGY &
MED. 522, 523-42 (2001).
EXHIBIT 3
378 AMERICAN JOURNAL OF LAW & MEDICINE VOL. 33 NO. 2 & 3 2007
potential, limitations, and degrees of invasiveness, yet all share the notion
that they enable, at least to some extent, mind reading.
As we enter more fully into the era of mapping and understanding the
brain, society will face an increasing number of important ethical, legal, and
social issues raised by these new technologies. One set of issues that is already
upon us involves the use of neuroscience technologies for the purpose of lie
detection. Companies are already selling "lie detection services." If these
become widely used, the legal issues alone are enormous, implicating at least
the First, Fourth, Fifth, Sixth, Seventh, and Fourteenth Amendments to the
U.S. Constitution. 3 At the same time, the potential benefits to society of such
a technology, if used well, could be at least equally large. This article argues
that non-research use of these technologies is premature at this time. It then
focuses on one of many issues and urges that we adopt a regulatory system
that will assure the efficacy and safety of these lie-detection technologies.
This article begins by describing the history and functioning of brain
imaging technologies, particularly functional magnetic resonance imaging
(fMRI). It next discusses, and then critically analyzes, the peer-reviewed
literature on the use of fMRI for lie detection. It ends by arguing for federal
regulation of neuroscience-based lie detection in general and fMRI-based lie
detection in particular.
A. AN EVOLUTION OF "CEREBROSCOPY"4
3 For the beginning of discussion of these legal issues, see Henry T. Greely, Prediction,
Litigation, Privacy, and Property: Some Possible Legal and Social Implications of Advances in
Neuroscience, in NEUROSCIENCE AND THE LAW: BRAIN, MIND, AND THE SCALES OF JUSTICE
114-156 (Brent Garland ed., 2004); Henry T. Greely, The Social Consequences of Advances in
Neuroscience: Legal Problems; Legal Perspectives, in NEUROETHICS: DEFINING THE ISSUES IN
THEORY, PRACTICE AND POLICY 245 (Judy Illes ed., 2006); Michael S. Pardo, Neuroscience
Evidence, Legal Culture, and Criminal Procedure, 33 AM. J. CRIM. L. (forthcoming 2007)
(discussing the search and seizure clause, self-incrimination clause, and due process clause);
Sarah S. Stoller & Paul Root Wolpe, Emerging Neurotechnologies for Lie Detection and the
Fifth Amendment, 33 AM. J. L. & MED. 359, 359-375 (2007) (self-incrimination clause).
There are also two early discussions of the ethical issues involved in this technology. Judy
Illes, A Fish Story? Brain Maps, Lie Detection, and Personhood, 6 CEREBRUM 73 (2004); Paul
R. Wolpe, Kenneth R. Foster & David D. Langleben, Emerging Neurotechnologies for Lie
Detection: Promises and Perils, 5 AM. J. BIOETHICS 38, 42 (2005). Finally, at the very end of
the editing process, we discovered another article that discusses neuroscience-based lie detection
technologies in some detail. Charles N.W. Keckler, Cross-Examining the Brain: A Legal
Analysis of Neural Imaging for Credibility Impeachment, 57 HASTINGS L.J. 509 (2006).
4 Much of this section is based on Judy Illes, Eric Racine & Matthew P. Kirschen, A
Picture Is Worth 1000 Words, but Which 1000?, in NEUROETHICS: DEFINING THE ISSUES IN
THEORY, PRACTICE AND POLICY (Judy Illes ed., 2006).
5 We thank Professor Stephen Rose for inspiring some of the following discussion.
6 Illes, Racine & Kirschen, supra note 4.
7 John R. Mallard, The Evolution of Medical Imaging: From Geiger Counters to MRI,
A Personal Saga, 46 PERSP. BIOLOGY & MED. 349 (2003).
8 W. Penfield & E. Boldrey, Somatic Motor and Sensory Representation in the Cerebral
Cortex of Man as Studied by Electrical Stimulation, 60 BRAIN 389 (1937).
9 J. C. Mazziotta, Window on the Brain, 57 ARCHIVES NEUROLOGY 1413 (2000).
10 D. Cohen, Magnetoencephalography: Detection of the Brain's Electrical Activity with
a Superconducting Magnetometer, 175 SCI. 664 (1972).
11 Mazziotta, supra note 9.
12 S. Coyle et al., On the Suitability of Near-Infrared (NIR) Systems for Next-Generation
Brain-Computer Interfaces, 25 PHYSIOLOGICAL MEASUREMENT 815 (2004); S.
Horovitz & J. C. Gore, Simultaneous Event-Related Potential and Near-Infrared Spectroscopic
Studies of Semantic Processing, 22 HUM. BRAIN MAPPING 110 (2003).
13 Judy Illes, Matthew P. Kirschen & John D.E. Gabrieli, From Neuroimaging to
Neuroethics, 6 NATURE NEUROSCIENCE 205 (2003).
in the four years since the original study, another 5,300 papers had been
published.18 All told, that makes about 8,700 papers since 1991.
17 Desmond & Chen, supra note 15.
18 Rosen & Gur, supra note 16.
19 Desmond & Chen, supra note 15; Kenneth S. Kosik, Beyond Phrenology, at Last, 4
NATURE NEUROSCIENCE 234 (2003).
20 Kosik, supra note 19.
21 G. Aguirre, E. Zarahn & M. D'Esposito, The Variability of Human, BOLD
Hemodynamic Responses, 8 NEUROIMAGE 360 (1998).
22 Rosen & Gur, supra note 16.
ultimately choose the one for presentation that best highlights the main
results of the study.
5. Ethical Considerations
Ethical considerations for imaging the function of the brain can be
examined in the context of two themes: conditions of the test that enable
acquisition of data and conditions about the use of the data themselves.
a. Test Conditions
For any of the methods described above, requirements for the protection
of human subjects and disclosure of risks, and benefits if any, must be
followed. Of paramount importance is safety. Contraindications to EEG
include, for example, dermatologic allergies to electrolytic glue used for
ensuring good conductance of scalp electrodes. For MRI, subject
claustrophobia, metal implants, and metal objects in the environment that
can rapidly become projectiles in the presence of the strong magnetic field are
foremost considerations. For all modalities, the accidental discovery of an
anomaly that might have clinical significance is an important consideration,
and procedures that will be taken for follow-up, if any, must be handled in a
forthright manner when obtaining consent.23 Long-term negative, even
lifelong, reactions to repeat scanning or stimuli, particularly when unpleasant
or frightening, are possible but extremely rare.
b. External Conditions
Ethics considerations about how data will be used outside the laboratory
setting bring us back to the original questions that scholars asked about tu.
First, conceptualization: Given the substantial complexity of designing
any imaging experiment, what acquisition and statistical protocols were
applied? This question applies to what Illes and Racine have called design or
paradigmatic bias.24 Are the stimuli age-, gender-, and culturally appropriate?
Will they generate results that are generalizable to populations not tested?
Are there regions of the brain with significant activity that go unnoticed
because they were not within the brain structure or statistical range of choice?
Second, biases of interpretation: What biases might interpreters bring to
the maps of data? Investigator bias is inevitable given the nature of the
imaging experiment and the necessity for human interpretation of images.
Unlike results from a topographic map, for example, the meaning of an
activation shown on an image is far from black and white.
Third, how will the map data be used? This third question raises perhaps
the toughest ethical challenge of all: privacy, profiling, and predicting future
behavior.
31 Id.; Adler, supra note 29, at 48-51, 181-195 (discussing Marston's unusually
interesting life).
32 Frye v. United States, 293 F. 1013, 1013 (D.C. Cir. 1923).
33 NRC, supra note 30, at 293-94. See also Adler, supra note 29, at 39-40, 51-54. Late
in life, Marston, along with his wife, Elizabeth Holloway Marston, invented the comic book
character, Wonder Woman. One of Wonder Woman's attributes was her possession of the
magic lasso, forged from the Magic Girdle of Aphrodite. NRC, supra note 30, at 295. The
lasso would make anyone it encircled tell the truth.
34 Frye, 293 F. at 1014.
35 NRC, supra note 30, at 12-13.
36 See, e.g., United States v. Scheffer, 523 U.S. 303, 333 (1998). Recently, however, the
New Mexico Supreme Court concluded that polygraph evidence would be presumptively
admissible in New Mexico courts. Lee v. Martinez, 96 P.3d 291 (N.M. 2004). A few federal
courts, applying the newer Daubert standard for admissibility of scientific evidence, have also
found polygraph evidence admissible in particular cases, albeit under unusual circumstances.
United States v. Allard, 464 F.3d 529 (5th Cir. 2006); Thornburg v. Mullin, 422 F.3d 1113
(10th Cir. 2005); United States v. Piccinonna, 885 F.2d 1529 (11th Cir. 1989) (en banc).
37 NRC, supra note 30, at 6.
a. Electroencephalography
As discussed earlier, EEGs measure electric currents generated by the
brain. One particular kind of EEG measurement claimed to be useful in
detecting lies is the "P300"-a wave of electrical signal, measured at the scalp,
that occurs approximately 300 milliseconds after a subject receives a stimulus.
The analysis of the timing and shape of this waveform has some meaning, but
the credibility of its usefulness is undercut by the hype given it by its leading
proponent, Lawrence Farwell.39
Farwell is an electrophysiologist who has, for over fifteen years, argued
that human P300 waves can be used as a "guilty knowledge" test, to determine
whether, for example, a suspect has ever seen the site of a crime. 40 Farwell
refers to this process as "brain fingerprinting" and has been selling brain
fingerprinting for several years through Brain Fingerprinting Laboratories, a
privately held company.41 The company's website claims that in more than
175 tests, the method has produced inconclusive results six times and has been
accurate every other time. 42 Farwell's work, however, has not been
substantially vetted in the peer-reviewed literature. 43 Apparently, the only
article he has published on his technology in a peer-reviewed journal is a 2001
on-line article in the Journal of Forensic Science where he and a co-author
reported on a successful trial of his method with six subjects.44 He has not
revealed any further evidence to support his claims of high accuracy,
protecting it as a trade secret. He is an inventor on four patents that are
relevant to this work.45
45 Id. at 166-68.
46 Id.
47 Harrington v. Iowa, 659 N.W.2d 509, 525 (Iowa 2003) (holding that Harrington's
conviction violated the Due Process clause under Brady v. Maryland, 373 U.S. 83 (1963),
because the prosecution had failed to disclose material exculpatory information in its
possession to his counsel before trial).
48 Id. at 516.
49 Brain Fingerprinting Laboratories, http://www.brainwavescience.com/Ruled%20Admissable.php
(last visited July 6, 2007).
50 Britton Chance et al., A Novel Method for Fast Imaging of Brain Function, Non-Invasively,
with Light, 2 OPTICS EXPRESS 411, 413 (1998). For a less technical description, see
Steve Silberman, The Cortex Cop, 14 WIRED 149 (2006).
used without any contact with the subject's head. 51 In effect, he hopes to be
able to perform something like an fMRI blood-flow analysis without the cost
and inconvenience of an MRI machine. Although NIRS for lie detection is
widely discussed in the popular52 and semi-popular53 literature, we found no
peer-reviewed publications on it.
c. Facial Micro-expressions
Berkeley psychologist Paul Ekman has championed the analysis of facial
micro-expressions. Ekman became famous for his work establishing the
universality of some primary human facial expressions, such as those for
anger, disgust, fear, joy, sadness, and surprise. 54 He has been interested in
methods of detecting deception since at least the late 1960s,55 and now claims
that careful analysis of fleeting "micro-expressions" on subjects' faces can
detect lying with substantial accuracy.56
Ekman has done substantial research on using facial micro-expressions to
detect lying, but he has not published much of the research in peer-reviewed
literature; he has said that this is because of his concern about the
information falling into the wrong hands. As a result, Ekman's methods and
results have not been subject to much public analysis, making their value hard
to assess. If effective, they would have the advantage of not requiring any
obvious intervention with the subject: subjects would not have to have
various sensors attached to them, as with polygraphs, EEGs, or NIRS, or be
inserted into a machine, as with fMRI. This technique could quite plausibly
be used surreptitiously, through undetected videotaping of the subject's face
during questioning.
d. Periorbital Thermography
Periorbital thermography measures the temperature of the tissue around
the eyes. Ioannis Pavlidis, a computer scientist at the University of Houston,
and James Levine, an endocrinologist at the Mayo Clinic, invented, and
continue to promote, this technique. Pavlidis and Levine claim that the
temperature of the area around the eyes rises noticeably when subjects lie. 57
Their theory is akin to the approach of the polygraph: the increased stress of
lying triggers an involuntary physiological response in their subjects. In this
case, instead of using blood pressure or the other polygraph markers, they
contend that rapid eye movements associated with stress increase the blood
flow around the eyes and thus increase that area's temperature. 58 In a series
of articles since 2001, including one article in Nature, the researchers have
claimed accuracy rates from 78% to over 91%.59 Like analyzing facial
micro-expressions, this approach does not require attaching anything to the subject
and could quite plausibly be done without the subject's knowledge through a
device that measures small temperature differences at a distance;
high-resolution infrared cameras used for this purpose can detect
temperature changes as small as 0.045 degrees Fahrenheit.60
The NRC report discussed periorbital thermography and some of its
limitations. It concluded:
Despite the public attention focused on the published version of
this study in Nature . . . it remains a flawed and incomplete
evaluation based on a small sample, with no cross-validation of
measurements and no blind evaluation. It does not provide
acceptable scientific evidence to support the use of facial
thermography in the detection of deception. 61
57 Ioannis Pavlidis, James Levine & Paulette Baukol, Thermal Imaging for Anxiety
Detection, PROC. IEEE WORKSHOP ON COMPUTER VISION BEYOND THE VISIBLE SPECTRUM:
METHODS & APPLICATIONS (2000) [hereinafter Pavlidis et al., Thermal Imaging for Anxiety
Detection].
58 Id.
59 Ioannis Pavlidis & James Levine, Monitoring of Periorbital Blood Flow Rate Through
Thermal Image Analysis and its Application to Polygraph Testing, 3 ENGINEERING MED. &
BIOLOGY SOC'Y 2826, 2826 (2001); Ioannis Pavlidis & James Levine, Thermal Facial Screening
for Deception Detection, 2 ENGINEERING & MED. 1183, 1183 (2002); Pavlidis et al., Thermal
Imaging for Anxiety Detection, supra note 57, at 56; Ioannis Pavlidis, Norman L. Eberhardt &
James A. Levine, Seeing Through the Face of Deception: Thermal Imaging Offers a Promising
Hands-off Approach to Mass Security Screening, 415 NATURE 35, 35 (2002); Dean A. Pollina,
Andrew B. Dollins, Stuart M. Senter, Troy E. Brown, Ioannis Pavlidis, James Levine & Andrew
H. Ryan, Facial Skin Surface Temperature Changes During a 'Concealed Information' Test, 34
ANNALS BIOMEDICAL ENGINEERING 1182, 1182 (2006) (researching in collaboration with the
Department of Defense Polygraph Institute).
60 Jeffrey Kluger & Coco Masters, How To Spot a Liar, TIME, Aug. 20, 2006.
61 NRC, supra note 30, at 157.
62 Vicki Haddock, Lies Wide Open, SAN FRANCISCO CHRON., Aug. 6, 2006.
MRI facilities with at least a 3-Tesla magnet. 65 Its website does not give any
details on the price of the service or its accuracy, except in its investor
information section. 66 There it claims a "current accuracy" of 93%, with an
expected 99% accuracy "when development of the product is complete."67 The
website does not specifically state a price to the customer, but various
projections assume a price of about $1,800 per use.68 The website does
mention some limitations of its test, but these are only limitations of the MRI
process: individuals cannot have metal in their bodies, cannot be
claustrophobic, cannot be ''brain damaged," and must not move during the
MRI process. 69
No Lie MRI's website has pages for four classes of customers:
corporations, lawyers, government, and individuals. The company envisions
many uses. For corporations, it suggests security firms may want to use its
process for pre-employment screening, that insurance companies should use
it to verify policy-holders' claims, and that investment banking companies
may use it to determine the truthfulness of corporate earnings statements.70
The site claims that its process is not subject to the federal Employee
Polygraph Protection Act, which bans most employment-related use of lie
detection.71 As discussed in Section III(A) below, this interpretation is
implausible.
For lawyers, the website analogizes No Lie MRI tests to DNA tests,
adding that "it would also be potentially possible for a witness to validate his
or her own statements to the court."72 It does not mention at this point any
barriers to admissibility of such evidence.
The firm suggests a wide range of uses for federal, state, and international
governments.73 In each case, the firm points to areas where the "now
discredited" polygraph machines are used; for developing countries, it offers
the use of its technology to battle corruption.
Its section for individuals reads:
No Lie MRI has potential applications to a wide variety of
concerns held by individual citizens.
• Risk reduction in dating
77 Id.
78 Id.
79 Glenn Smith, Deli Owner Lays Hope on New MRI Lie Detector, POST AND COURIER
(Charleston, S.C.), Jan. 16, 2007, at A1.
80 Id. Its publicity-attracting skills do have some limits. In late October 2006 one of
the authors (Greely) was taped by the Today Show for a segment on No Lie MRI. The firm
was going to do a truth assessment on camera for Today, supposedly of a woman who wanted
to prove to her husband that she was sexually faithful. Several days after the taping, the
author was told that the segment had been cancelled because the woman had changed her
mind.
81 CEPHOS Corporation Home Page, http://www.cephoscorp.com/index.html (last
visited July 6, 2007).
82 CEPHOS Corporation, http://www.cephoscorp.com/management.htm (last visited
July 6, 2007).
83 CEPHOS Corporation, http://www.cephoscorp.com/ (last visited July 6, 2007).
three have been published by Andy Kozel, formerly at the Medical University
of South Carolina and now at the University of Texas Southwestern Medical
Center, whose technology is being developed by CEPHOS Corporation. The
remaining six have come from laboratories all around the world. Of the
twelve papers, only three (two by Langleben and one by Kozel) attempt to
assess differences in truthful and deceptive responses within an individual; all
the others look only at averaged group data. We will focus on the Langleben
and Kozel reports, as those are the basis for the methods being used by the
two known commercial firms. This section will summarize briefly each of
these papers.
Of course, one might object that not all research is published and not all
published research appears in peer-reviewed journals; there might be other
evidence about the efficacy of these methods. Nevertheless, we can only assess
the evidence we have. No one can be expected to accept the kind of dramatic
change in our society that accurate fMRI-based lie detection could bring
without solid, public proof. We find very little in the peer-reviewed literature
that even suggests that fMRI-based lie detection might be useful in real-world
situations and absolutely nothing that proves that it is.
activation patterns between when the subjects honestly said they did not have
the two of diamonds, and when they dishonestly said they did not have the
five of clubs. Thus, each of the eighteen subjects was being examined on
having told the truth sixteen times and having lied sixteen times. The other
cards-the non-target cards-were used to help keep the subject's attention
and to make sure they were actually reading the question at the top of the
card-the ten of spades.
When the researchers averaged the results of all eighteen subjects, they
found two regions with statistically significant increases in activation when
the subjects were lying about the five of clubs. No areas showed greater
activation when telling the truth about the two of diamonds. The first region
ran from the left anterior cingulate cortex to the medial aspect of the right
superior frontal gyrus. The second region "is a 91-voxel cluster, U-shaped
along the craniocaudal axis, extending from the border of the prefrontal to the
dorsal premotor cortex ... and also involving the anterior parietal cortex from
the central sulcus to the lower bank of the interparietal sulcus ...."93 The
authors pointed out that part of the first region is known to be activated when
someone stops himself or herself from responding in the easiest way. They
speculated that the second region may be involved in providing additional
help in overcoming the first response to answer truthfully. The paper did not
report on any differences between truth-telling and lying for individuals, but
only looked at the overall averages.
Langleben reported on another experiment in a paper published in
Human Brain Mapping in 2005.94 According to its acknowledgements, this
paper was based in part on work done with funding support from the Defense
Advanced Research Projects Agency (DARPA). This study was done with
twenty-six right-handed male undergraduates, twenty-two in the initial phase
and four more as part of a validation study at the end. The experiment also
presented the subjects with pictures of playing cards while being scanned with
a promised twenty-dollar reward for "success," but with some differences from
the previous experiment.
The subjects again received a sealed envelope with a twenty-dollar bill,
but this time it contained two cards, a five of clubs and a seven of spades.
Langleben told the subjects to lie consistently about one of the two cards, but
he left the decision about which card up to them. The experiment also used a
"recurrent distracter" (the two of hearts), a "variable distracter" (all of the
other number cards from each suit), and a null card (where only the back of
the card was shown). Inside the scanner, the subjects saw a card for two
seconds followed by somewhere between zero and sixteen seconds of the null
card. During each subject's session, they saw the truth, lie, and recurrent
distractor cards (five of clubs, seven of spades, and two of hearts) twenty-four
times each and saw a variable distractor card 168 times, for a total of 230
responses. Each time they were asked whether they had that card. A subject
responding as instructed would say "no" 206 times (to the lie card, the
recurrent distractor, and the variable distractors) and say "yes" twenty-four
times (to the truth card). The whole test session lasted just under fifteen
minutes.
When analyzed as group averages, the results showed several areas of
different activation between truth and lie. These areas included some, but not
most, of the areas reported in Langleben's earlier article. The researchers
noted that the results contradict some of the assumptions of the earlier paper.
The researchers then looked at individual results, using a logistic regression
model. They used regions of interest identified in the group study to create a
model that, applied to those twenty-two subjects, was 78% accurate (with a
specificity of 76% and a sensitivity of 80%) at being able to tell, for an
individual subject, when he was lying and when he was telling the truth. To
validate that model, they then applied it to four new subjects, also healthy,
right-handed male undergraduates, who were scanned under the same tests.
For these four subjects, they were able to distinguish true answers from false
ones 76.5% of the time, with a sensitivity of 69% and a specificity of 84%.
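A quick arithmetic check ties these three figures together. When the analysis uses equal numbers of lie and truth trials, overall accuracy is simply the mean of sensitivity and specificity. The sketch below is our own illustration of that identity, not code or a method from the study:

```python
# Illustration (not from the study): with a balanced test set
# (as many lie trials as truth trials), overall accuracy is the
# mean of sensitivity and specificity.

def balanced_accuracy(sensitivity: float, specificity: float) -> float:
    """Accuracy on a test set with equal numbers of lie and truth trials."""
    return (sensitivity + specificity) / 2

# Validation cohort figures reported above: sensitivity 69%, specificity 84%.
print(round(balanced_accuracy(0.69, 0.84), 3))  # -> 0.765, the 76.5% reported
```

The same identity recovers the 78% figure for the initial twenty-two subjects: (0.76 + 0.80) / 2 = 0.78.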
In 2005, NEUROIMAGE published another lie detection paper from the
Langleben lab, this one with Davatzikos listed as its primary author.95 The
paper uses the same brain scan results from the twenty-two right-handed
male undergraduate subjects reported in the second Langleben article
discussed above, but it uses a different method of data analysis. This
approach is called a "high-dimensional non-linear pattern classification
method." Using this statistical method, they report being able to distinguish
when the individual subjects were lying or telling the truth just under 90% of
the time (90% specificity, 85.8% sensitivity). The brain regions that appeared
most important in this analysis only overlapped to a limited extent with those
identified in the two other Langleben articles.
In summary, Langleben's peer-reviewed publications report on two
experiments, both of which involve lying about a playing card. The
experiments involved exactly forty-four subjects, the majority of them male
undergraduates. It is also noteworthy that the experimental design used in
the second experiment (the subject of the last two Langleben papers) was such
that subjects only pressed the "yes" button when truthfully reporting the card
they held. In examining the contrast between the truth card (yes) and the lie
card (no), it is quite possible that the researchers were, in part, seeing an
effect of the unusual occurrence (24 out of 230 times) of the subjects' need to
push "yes" instead of "no," which casts doubt on the validity of this work.96
random order. They were to press one button to signify that money was under
an object and another button to say that it was not. They were to tell the truth
about one of the objects hiding the money, lie about the other object hiding
the money, and falsely say that a third object was concealing money when it
was not. The subjects chose which objects to lie about. This continued
through twenty iterations, so each object was shown twenty times. Unlike the
pilot study, this study used an MRI system with greater field strength than
before (a 3-Tesla magnet compared to 1.5-Tesla), more cases of deception
(forty, or two in each of the twenty iterations, instead of eight), and a few
more subjects (ten instead of eight).
The researchers did a group analysis of the results and found increased
activation in five areas when the subjects were lying and no areas of increased
activation when subjects were telling the truth. Of the eleven areas with
significantly increased activation during lying, five were areas identified in the
pilot study. The investigators again found quite varied patterns of activation
among individual subjects, although, when they defined regions broadly, they
found that seven of the ten had increased activation in the right prefrontal
cortex.
The third Kozel article on fMRI-based lie detection was published in
Biological Psychiatry in 2005.99 This used a quite different approach. First,
the researchers recruited thirty healthy unmedicated adults from the local
university community, between the ages of eighteen and fifty, to be part of the
"model building group." Unlike the other studies, this experiment contained a
few people who were left-handed or had mixed-handedness. The subjects
were taken to a room and told to "steal" either a ring or a watch from a
drawer. When scanned, they were asked whether they had taken the ring or
had taken the watch, along with two control questions. Each of the four
questions was asked twenty times. The subjects were instructed to deny
taking either object. They were told they would receive an extra fifty dollars if
their lie was not detected.
The experimenters first tested thirty subjects whose results were used to
build a model that would distinguish, for those individuals, who was lying and
who was telling the truth. They then tested thirty-one different subjects, with
the same test, and applied the initial model. The subjects were more diverse
than in many of the previous experiments. Only about 20% of them were
students and a substantial number (six in the model building group and
twelve in the testing group) were African-American. On average, they had
completed over sixteen years of education and all of them had at least
completed high school.
Group analysis in the model-building group showed significant increased
activation when lying in seven clusters of brain regions, including five the
researchers had seen before. They focused on three clusters for building their
model.100 When the researchers looked at activation in those three clusters
3. Other Articles
We found six other papers in the peer-reviewed literature that described
experimental tests of fMRI-based lie detection. Each involved only group
results and none to our knowledge is currently being commercialized. We
summarize these briefly next.
The first published fMRI-based lie detection paper was published in
Brain Imaging: NeuroReport in 2001, with Sean A. Spence from the
University of Sheffield as its first author.101 The experiment included thirty
people. Before being tested, the subjects were asked thirty-six questions about
their activities that day (for example, did they make their beds). These
questions were then re-asked with each subject told to tell the truth if the
"yes" and "no" answers were displayed in one color (either green or red), and
to lie if they were shown in the other. The tests were done twice with the
questions displayed in written form and twice with the questions spoken (in
both cases the "yes" and "no" answers were presented visually). All thirty
subjects were measured for their reaction times. Ten of the subjects, all
healthy, right-handed, males between twenty-three and twenty-five years old,
then did the same experiment while being scanned.
On average, the subjects, in the scanner and out, took about 200
milliseconds (eight to twelve percent) longer to lie than to tell the truth. The
scanned subjects showed statistically significant increases in activation in
various brain regions when lying than when telling the truth.
Tatia M.C. Lee from the University of Hong Kong is the first author of
two other published studies of lie detection. Her first study, in Human Brain
Mapping, appeared in 2002.102 The scanning experiments were performed on
six right-handed male subjects, all of whom were native Mandarin speakers in
their thirties from mainland China. The subjects were told to feign memory
impairment in two trials. In one trial, they were asked whether two three-
cortex, and the right supplementary motor area. The second cluster included the right
orbitofrontal cortex, the right inferior frontal cortex, and the right insula. The final cluster
comprised the right middle frontal cortex and the right superior frontal cortex. Id. at 608.
101 Sean A. Spence et al., Behavioural and Functional Anatomical Correlates of
Deception in Humans, 12 BRAIN IMAGING NEUROREPORT 2849 (2001).
102 Tatia M.C. Lee et al., Lie Detection by Functional Magnetic Resonance Imaging, 15
HUM. BRAIN MAPPING 157 (2002) (manuscript was received by the journal two months before
Spence's earlier-published article had been received and, in that sense, may be the earliest of
these experiments).
digit numbers, shown one after another, were identical; in the second trial,
they were asked questions about their personal history, such as where they
were born. Each trial was done four times with the subjects instructed to
answer truthfully one time, answer falsely as successfully as they could,
answer falsely badly, and answer randomly. After averaging the results of the
last five subjects, the researchers found four broad regions of the brain with
increased activation during lying.103
Lee's second paper, published in 2005 in NeuroImage,104 studied three
cohorts of subjects, totaling twenty-eight individuals, to see if regions of
greater activation during lying varied by gender or mother language. The first
cohort contained eight Chinese men; the second contained seven men and
eight women, all Chinese; the third trial looked at six Caucasian monolingual
English speakers. All the subjects were right-handed. Subjects were again
asked to answer slightly different questions in one of four ways: truthfully,
falsely but well, falsely but poorly, or randomly. After averaging the results
within the first and third cohorts, and separately in the second cohort between
men and women, the researchers found significant increases in activation
during lying in the same broad regions as in their earlier work, regardless of
the subject's sex or mother tongue.
Giorgio Ganis and colleagues from Harvard published the results of
another lie detection experiment in Cerebral Cortex in 2003.105 In this study,
the investigators compared memorized lies that fit into a coherent story with
spontaneous lies that did not fit such a story. A total of ten subjects (seven
women and three men) were asked to quickly make up and tell lies about their
most memorable real vacations and work experiences. Then they were told to
give a memorized lie that was part of a coherent story. When the subjects'
results were averaged, both lies led to more activation in several areas than
telling the truth did, but the two different kinds of lies also showed
significantly different activation patterns when compared.
Jennifer Maria Nunez and her colleagues from Cornell published a study
of twenty subjects in NeuroImage in 2005.106 This experiment involved ten
women and ten men, all healthy, right-handed young adults. The subjects
first gave honest answers to seventy-two yes or no questions two days before
the scanning. The questions included were both autobiographical ("do you
own a laptop computer?") and non-autobiographical ("are laptop computers
portable?"). Subjects answered each question once truthfully and once falsely,
while being scanned. On average, eight regions were more active when lying
than when telling the truth; seven regions were more active when answering
autobiographical questions. No regions were active during the averaged
honest responses or the averaged non-autobiographical answers.
103 Lee's group reported "four principal regions of brain activation: prefrontal and frontal,
parietal, temporal, and sub-cortical." Id. at 161.
104 Tatia M.C. Lee et al., Neural Correlates of Feigned Memory Impairment, 28
NEUROIMAGE 305 (2005).
105 G. Ganis et al., Neural Correlates of Different Types of Deception: An fMRI
Investigation, 13 CEREBRAL CORTEX 830, 830 (2003).
106 Jennifer Maria Nunez et al., Intentional False Responding Shares Neural Substrates
with Response Conflict and Cognitive Control, 25 NEUROIMAGE 267 (2005).
107 Feroze B. Mohamed et al., Brain Mapping of Deception and Truth Telling about an
Ecologically Valid Situation: Functional MR Imaging and Polygraph Investigation - Initial
Experience, 238 RADIOLOGY 679 (2006).
108 It is not entirely clear whether the number of people tested is important apart from
its likely relationship to the diversity of people tested. One might argue that, apart from being
less diverse, studies with smaller numbers of subjects provide a stiffer test for fMRI-based lie
detection. All other things being equal, it is harder to establish any given level of statistical
significance with a smaller number of subjects than with a larger number (we owe this insight
to Nancy Kanwisher). As all other things are not always equal, larger sample sizes are still
preferable.
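The sample-size point in note 108 can be made concrete: for a fixed mean difference and spread, the standard error of a group average shrinks with the square root of the number of subjects, so the same observed effect yields a smaller test statistic in a smaller study. A minimal sketch with purely illustrative numbers (none of these values are drawn from the papers discussed):

```python
import math

# Illustrative only: same mean activation difference and spread,
# different numbers of subjects. The t-like statistic
# (difference divided by standard error) grows with sqrt(n),
# so a small study must show a larger effect to reach significance.

def t_statistic(mean_diff: float, sd: float, n: int) -> float:
    """Mean difference divided by its standard error, sd / sqrt(n)."""
    return mean_diff / (sd / math.sqrt(n))

small = t_statistic(mean_diff=0.5, sd=1.0, n=8)    # pilot-sized group
large = t_statistic(mean_diff=0.5, sd=1.0, n=32)   # four times larger

print(round(small, 2), round(large, 2))  # prints 1.41 2.83
```

Quadrupling the number of subjects doubles the statistic, which is the sense in which a smaller study is a "stiffer test."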
109 Brain imaging researchers often prefer to use right-handed subjects. Some brain
functions are found to be located in different places in right-handed and non-right-handed
people. Although there may be no reason to suspect that any particular function (not related
to movement) will correlate with different regions in people with different handedness,
limiting test subjects to those with one handedness removes that possible confounding factor.
The vast majority of people are strongly right-handed, so it is simpler to use them as subjects.
Although there is no evidence and no particular reason to think that non-right-handed people
would show different areas of activation while lying, there is almost no evidence that they
would not.
110 Langleben's own experiments showed significant activations in different regions.
Langleben, supra note 94.
results. Although the researchers often falsely led the subjects to believe
that their success in lying would earn them more money (in fact, researchers
with that design paid all the subjects the "extra" money), it is also not clear
that this apparent monetary incentive would affect the subjects the same way
as the more common-and more powerful-incentives for lying, such as
avoiding arrest.
The context points to a deeper problem with the artificiality of the
situation-the researchers assume that whatever kind of "lie" they are having
the subjects tell is relevant to the kinds of lies people tell in real life. But those
lies vary tremendously.111 Are lies about participation in a crime the same as
lies about the quality of a meal or the existence of a "prior engagement"? Do
lies about sex activate the same regions of the brain as lies about money, lies
to avoid embarrassment, or lies about the five of clubs? Do lies of omission
look the same under fMRI as lies of commission? We do not know the
answers to these, or many other questions-and neither do the researchers
who published these papers. This is not a criticism of the researchers, as
scientists have to start somewhere and a well-defined situation is essential for
analysis. It is likely to be difficult, and perhaps even impossible, to create
good tests of real-world lies. This is a criticism of any attempt to apply this
research to the real world without a great deal more work.
All of the concerns discussed so far are reasons to doubt that these
experiments did, in fact, prove that one can detect real world lies through
fMRI. The last concern is slightly different. Even if the studies had proven
that proposition, they did not begin to prove that the method could actually be
effective because they did not exclude the very real possibilities that subjects
could use countermeasures against fMRI-based lie detection.112
The use of countermeasures to polygraphy has been discussed
substantially in the past and has even been the subject of some limited
research. The National Academy panel on the polygraph spent ten pages on
countermeasures. The panel concluded:
If these measures are effective, they could seriously undermine
any value of polygraph security screening. Basic physiological
theory suggests that training methods might allow individuals to
succeed in employing effective countermeasures. Moreover, the
empirical research literature suggests that polygraph test results
can be affected by the use of countermeasures.113
Countermeasures to fMRI-based lie detection could use a wide range of
methods. At one extreme, we know a subject can make an fMRI scan useless.
Simple movements of the tongue or jaw will make fMRI scans unreadable.
Movements of other muscles will introduce new areas of brain activation,
muddying the underlying picture. Even less visibly, simply thinking about
other things during a task may activate other brain regions in ways that
interfere with the lie-detection paradigm.
If, as some think, lying is detectable because it is harder than telling the
truth and thus requires the activation of more or different areas of the brain, a
subject could try doing mental arithmetic or memory tests while giving true
answers, thus, perhaps, making true answers harder to distinguish from false
ones. Similarly, a well-memorized lie may not activate those additional
regions and may look like a truth. The Ganis paper, discussed above, actually
reported differences between memorized and improvised lies, though it
reported that both were distinguishable on average from the truth.114
This issue of countermeasures is both filled with unknowns and vital. If,
in fact, countermeasures turn out to be effective, the people we may most
want to catch may well be the ones best trained - by criminal gangs, by foreign
intelligence agencies, by terrorists, or others-in countermeasures. Of course,
if the countermeasures are easy enough, "training" may be as simple as a quick
search of the Internet. A quick Google search of "polygraph countermeasures"
already turns up many sites offering information on beating the polygraph,
some free115 and some for payment, including one former polygrapher who
charges $59.95 (plus shipping) for his manual plus DVD.116 If fMRI-based lie
detection becomes common, efforts to beat fMRI-based lie detection will, no
doubt, also become common.
Existing law in the United States regulates the use of lie detectors in
several ways. First, the federal government and many states limit the use of
lie detectors on employees by their employers (or their agents). Many states
also license and otherwise regulate operators of lie detectors. Finally, all
American courts, state or federal, have positions on the admissibility of at
least one kind of lie detector evidence. Some of those regulations are worded
broadly to apply to lie detection or deception detection, some focus narrowly
on polygraphs, and some are in between. This section will briefly survey all of
those regulations; those that are aimed narrowly at polygraphs, even when not
directly applicable to fMRI-based lie detection, are still useful for showing the
breadth of government interests in this field.
115 See, e.g., George W. Maschke & Gino J. Scalabrini, THE LIE BEHIND THE LIE
1. Statutory Regulation
Lie detectors are not subject to any general regulation requiring that they
be proven effective before they can be used. It is conceivable, but unlikely,
that some methods of lie detection might fall within the jurisdiction of the
Food and Drug Administration (FDA). A novel device might fall under FDA's
control over medical devices; a new molecule, to be used, for example, as a
kind of truth serum, could fall within its power over new drugs. To be subject
to FDA power, the device or drug would have to be "intended for use in the
diagnosis of disease or other conditions, or in the cure, mitigation, treatment,
or prevention of disease in man or other animals" or "intended to affect the
structure or any function of the body of man or other animals . . . ."117 This
would not appear to include a lie detection device or drug, unless it operated
by affecting the "function" of the brain by releasing inhibitions against lying.
Although the MRI, for example, is clearly a medical device, because it is
intended "for use in the diagnosis of disease or other conditions," it has
already been approved. Under the off-label use doctrine, a drug, biologic, or
device approved for one purpose can generally be legally used for any
purpose.118
In the absence of general pre-market regulation of lie detection, the most
important regulatory statute in the field is the federal Employee Polygraph
Protection Act of 1988 (EPPA). 119 Under this Act, almost all employers are
forbidden to "directly or indirectly, [] require, request, suggest, or cause any
employee or prospective employee to take or submit to any lie detector test" or
to "use, accept, refer to, or inquire concerning the results of any lie detector
test of any employee or prospective employee."120 The Act provides a very
broad definition of a lie detector test, including within its scope "a polygraph,
deceptograph, voice stress analyzer, psychological stress evaluator, or any
other similar device (whether mechanical or electrical) that is used, or the
results of which are used, for the purpose of rendering a diagnostic opinion
regarding the honesty or dishonesty of an individual."121 Employers violating
the Act are subject to civil penalties levied by the Secretary of Labor of up to
$10,000 per violation, as well as private suits by those harmed by the
violation. 122
The Act contains a variety of exemptions, notably for employees of federal,
state, and local governments, as well as various contractors, experts, and
others involved in national security or work with the FBI. 123 One exemption
allows employers to make limited use of polygraphs-but not any other forms
of lie detectors-in ongoing investigations. 124 Some of the exemptions,
including the exemption for an employer's ongoing investigations, are
117 Federal Food, Drug, and Cosmetic Act § 201(h), (g)(1), 21 U.S.C. § 321 (2006).
118 Food and Drug Administration, Institutional Review Board Information Sheets,
http://www.fda.gov/oc/ohrt/irbs/offlabel.html (last visited July 6, 2007).
119 Employee Polygraph Protection Act of 1988, 29 U.S.C. §§ 2001-2009 (2006).
120 29 U.S.C. § 2002(1)-(2) (2006). The section also prohibits employers from taking
action against employees because of their refusal to take a test, because of the results of such a
test, or for asserting their rights under the Act. 29 U.S.C. § 2002(3)-(4) (2006).
121 29 U.S.C. § 2001(3) (2006).
122 29 U.S.C. § 2005 (2006).
123 29 U.S.C. § 2006 (2006).
124 29 U.S.C. § 2006(d) (2006).
conditioned on the provision of specified rights for those being tested.125
These rights include, among others:
(A) the examinee shall be permitted to terminate the test at any
time;
(B) the examinee is not asked questions in a manner designed
to degrade, or needlessly intrude on, such examinee;
(C) the examinee is not asked any question concerning-
(i) religious beliefs or affiliations,
(ii) beliefs or opinions regarding racial matters,
(iii) political beliefs or affiliations,
(iv) any matter relating to sexual behavior; and
(v) beliefs, affiliations, opinions, or lawful activities
regarding unions or labor organizations; and
(D) the examiner does not conduct the test if there is sufficient
written evidence by a physician that the examinee is suffering
from a medical or psychological condition or undergoing
treatment that might cause abnormal responses during the actual
testing phase. 126
The Act also limits disclosure of the test results by both the employer and
the polygraph examiner.127 Finally, the Act expressly provides that it does not
preempt any state or local laws, or collective bargaining agreements that have
added restrictions on lie detector tests. 128
EPPA has seen little activity or discussion since its passage. The Secretary
of Labor adopted extensive regulations for its implementation, many of which
deal with the procedures for imposing civil fines.129 EPPA has been the
subject of a few law review articles, most of them student notes and comments
published just after its adoption.130 Moreover, it has been discussed in only a
handful of reported court cases.131
If No Lie MRI proceeds on its current path, this may change. It claims, as
noted above, that fMRI-based lie detection is not covered by EPPA.132 Neither
EPPA nor the regulations under it, nor any case law interpreting it, supports
such an interpretation. The statute and the regulation define lie detection
broadly as "a polygraph, deceptograph, voice stress analyzer, psychological
stress evaluator, or any other similar device (whether mechanical or electrical)
that is used, or the results of which are used, for the purpose of rendering a
diagnostic opinion regarding the honesty or dishonesty of an individual."133
No Lie MRI seems to base its argument on the fact that all the methods EPPA
names measure the autonomic nervous system, whereas No Lie MRI
presumes that its method, which goes directly to the central nervous system, is
thus not "any other similar device (whether mechanical or electrical) ... ." 134
In the context both of the statute generally and of that sentence in EPPA,
specifically, this argument borders on frivolous. Congress intended to give a
broad scope to EPPA's definition; it is not surprising that it did not include
tMRI-based lie detection in 1988 as tMRI was not developed until several
130 Ryan K. Brown, Specific Incident Polygraph Testing Under the Employee Polygraph
Protection Act of 1988, 64 WASH. L. REV. 661 (1989); Ching Wah Chin, Protecting Employees
and Neglecting Technology Assessment: The Employee Polygraph Protection Act of 1988, 55
BROOK. L. REV. 1315 (1990); Charles P. Cullen, The Specific Incident Exemption of the
Employee Polygraph Protection Act, 65 NOTRE DAME L. REV. 262 (1990); Brad V. Driscoll, The
Employee Polygraph Protection Act of 1988: A Balance of Interests, 75 IOWA L. REV. 539
(1990); Earl J. Engle, Counseling the Client in the Employee Polygraph Protection Act, 35 PRAC.
LAW. 65 (1989); Peter C. Johnson, Banning the Truth-Finder in Employment: The Employee
Polygraph Protection Act of 1988, 54 MO. L. REV. 155 (1989); Andrew J. Natale, The Employee
Polygraph Protection Act of 1988 - Should the Federal Government Regulate the Use of
Polygraphs in the Private Sector, 58 U. CIN. L. REV. 559 (1989); Kathleen F. Reilly, The
Employee Polygraph Protection Act of 1988: Proper Penalties When Guilty Employees Are
Improperly Caught, 7 HOFSTRA LAB. & EMP. L.J. 369 (1990); Durwood Ruegger, When
Polygraph Testing Is Allowed: Limited Exceptions Under the EPPA, 108 BANKING L.J. 555
(1991); Yvonne K. Sening, Heads or Tails: The Employee Polygraph Protection Act, 39 CATH.
U. L. REV. 235 (1989); Paul D. Seyferth, An Overview of the Employee Polygraph Protection
Act, 57 J. MO. B. 226 (2001).
131 The United States Code Service annotations show only fifteen cases reported in
either the Federal Reporter or the Federal Supplement that discuss this Act. Watson v.
Drummond Co., 436 F.3d 1310 (11th Cir. 2006); Polkey v. Transtecs Corp., 404 F.3d 1264
(11th Cir. 2005); Calbillo v. Cavender Oldsmobile, Inc., 288 F.3d 721 (5th Cir. 2002); Veazey v.
Communications & Cable, Inc., 194 F.3d 850 (7th Cir. 1999); Saari v. Smith Barney, Harris
Upham & Co., 968 F.2d 877 (9th Cir. 1992); Lyles v. Flagship Resort Development Corp., 371
F. Supp. 2d 597 (D.N.J. 2005); Deetjan v. V.I.P., Inc., 287 F. Supp. 2d 80 (D. Me. 2003); Long
v. Mango's Tropical Cafe, 972 F. Supp. 655 (S.D. Fla. 1997); Mennen v. Easter Stores, 951
F. Supp. 838 (N.D. Iowa 1997); James v. Professionals' Detective Agency, 876 F. Supp. 1013 (N.D.
Ill. 1995); Lyle v. Mercy Hosp. Anderson, 876 F. Supp. 157 (S.D. Ohio 1995); Del Canto v. ITT
Sheraton Corp., 865 F. Supp. 927 (D.D.C. 1994); Blackwell v. 53rd-Ellis Currency Exch., 852
F. Supp. 646 (N.D. Ill. 1994); Rubin v. Tourneau, Inc., 797 F. Supp. 247 (S.D.N.Y. 1992).
132 See No Lie MRI, Customers - Corporations, http://www.noliemri.com/customers/
years later and the first experiments with fMRI-based lie detection were not
published until 2000. Furthermore, the company's reading seems to ignore
the structure of the sentence, which covers four named devices "or any other
similar device (whether mechanical or electrical) that is used, or the results of
which are used, for the purpose of rendering a diagnostic opinion regarding
the honesty or dishonesty of an individual."135 This can easily be read to find
the similarity in the use to which the device is put, not in some other similarity
to the four named devices. 136
Several other federal statutes deal specifically with the use of polygraphs
for security purposes within the Defense Department,137 the Energy
Department,138 and more generally in the context of security clearances. 139 In
addition, one federal statute conditions federal grants to help states deal with
domestic violence, sexual assaults, and similar crimes, on states' assurances
that by 2009, their "laws, policies, and practices" will ensure that victims of
such crimes are not asked or required to submit to "a polygraph examination
or other truth telling device ...."140
States have been active in broader ways than the federal government in
regulating lie detection and polygraphy, but the state laws are not particularly
consistent. 141
Twenty-five states and the District of Columbia have their own version of
EPPA. 142 Some of these predated the federal statute; others, passed later,
typically cover some or all of the state and local employees excluded from the
federal act. Interestingly, many of these acts preventing employers from
requiring lie detector tests specifically exclude various law enforcement
officers,143 while others specifically include them144 or are, in fact, limited to
them.145 Some of these state laws restrict polygraphs specifically,146 while
others, like the federal act, cover lie detection more generally.147
Many state laws deal with other aspects of polygraphy, and, to a lesser
extent, lie detection more broadly.148 Twenty-two states have licensing
schemes for polygraph examiners. 149 Twenty states specifically authorize
polygraph or other lie detection tests for sex offenders as a condition of
probation or parole. 150 Eleven states have already met the federal requirement
for protecting complaining victims of sexual offenses from being required to
take a polygraph test.151 More than thirty other state statutes deal with one
aspect or another of lie detection, from requiring it for some state employees
(typically law enforcement officers), to banning its required use in insurance
claims or welfare applications, to regulating or prohibiting the use of
information from polygraphs by credit reporting agencies.152
The application of any of these statutes to fMRI-based lie detection
requires a careful examination of the language of the law. Some deal only
with polygraphs, but others have broader definitions that would appear to
include fMRI-based lie detection. Several states use very broad language
indeed:
It is the purpose of this chapter to regulate all persons who
purport to be able to detect deception or to verify truth of
statements through the use of instrumentation, such as lie
detectors, polygraphs, deceptographs, psychological stress
evaluators or similar or related devices and instruments without
regard to the nomenclature applied thereto and this chapter shall
be liberally construed to regulate all these persons and
instruments. No person who purports to be able to detect
deception or to verify truth of statements through
instrumentation shall be held exempt from this chapter because
of the terminology which he may use to refer to himself, to his
instrument or to his services. 153
Even the statutes that appear to deal only with polygraphy may have some
surprising consequences. In some states, broad statutes may mean that
anyone seeking to administer an fMRI-based lie detection test will need a
state license, from a licensing board set up to regulate polygraphy. In other
states, the polygraph licensure statutes may effectively exclude fMRI-based lie
detection. For example, several statutes have definitions like this: '''Polygraph'
means an instrument which records permanently and simultaneously a
subject's cardiovascular and respiratory patterns and other physiological
changes pertinent to the detection of deception."154 As fMRI tests do not
152 Id.
153 ME. REV. STAT. ANN. tit. 32, § 7152 (1964). See identical or substantially similar
language from Nebraska, NEB. REV. STAT. § 81-1902 (1999) (providing that the statute be
"liberally construed to regulate all persons" using lie detectors, stress evaluators,
deceptographs, and voice analyzers); Oklahoma, OKLA. STAT. ANN. tit. 59, § 1452 (West 2000)
(providing the statute "regulate[s] all persons who purport to be able to detect deception . . .
without regard to the nomenclature applied thereto"); and South Carolina, S.C. CODE ANN. §
40-53-20 (1999) (same).
154 KY. REV. STAT. ANN. § 329.010(6) (LexisNexis 2001).
Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 53 of 144
2. Judicial Admissibility
These statutory regulations by and large miss another important way in
which governments regulate lie detection-decisions about whether and
under what circumstances they are admissible in court. Although this is
governed by statute in at least one case,155 for the most part it is the result of
court-adopted rules or judicial decisions based on whether the evidence meets
the tests for admissibility of scientific evidence.
The courtroom situation is surprisingly complicated. It focuses almost
entirely on polygraphy and not on other forms of lie detection, though there
are a few cases on lie detection through voice stress analyzers156 and one
unpublished case on "brain fingerprinting."157 Presumably, fMRI-based lie
detection would be judged under the regular rules for admissibility of
scientific evidence, which means that the existing law on polygraphs is not
directly applicable. Nonetheless, it should be useful to review how that law
stands.
As a general matter, no American judicial system routinely allows
polygraph evidence to be introduced except New Mexico's.158 The New Mexico
Supreme Court has adopted a rule of evidence generally allowing polygraph
evidence under some conditions. 159 Outside New Mexico, evidence of the
results of a polygraph examination cannot be admitted in evidence, except as
described below.
Most courts view the admissibility of polygraph evidence as an issue of the
admissibility of scientific or technical evidence. All but New Mexico have
generally rejected it as failing the tests for admissibility of scientific evidence.
The test for scientific evidence widely used in American courts through most
of the 20th century, the Frye test, takes its name from a 1923 case involving the
admissibility of testimony of William Marston, the inventor of the
155 See CAL. EVID. CODE § 351.1(a) (1995) (providing that "the results of a polygraph
examination . . . shall not be admitted into evidence in any criminal proceeding . . . .").
156 See generally Thomas R. Malia, Annotation, Admissibility of Voice Stress Evaluation
Test Results or of Statements Made During Test, 47 A.L.R. 4th 1202 (1986, 2001 supplement)
(collecting and analyzing state and federal law to conclude that tests are generally
inadmissible). See also Whittington v. State, 809 A.2d 721, 740 (Md. App. 2002) (holding that
results of a voice stress test are not admissible at trial); State v. Gaudet, 638 So.2d 1216, 1222
(La. App. 1994) (holding the same); State v. Higginbotham, 554 So.2d 1308, 1310 (La. App.
1989) (holding the same); State v. Arnold, 533 So.2d 1311, 1314 (La. App. 1988) (holding the
same); Smith v. State, 355 A.2d 527, 536 (Md. App. 1976) (holding the same).
157 See Harrington v. Iowa, 659 N.W.2d 509, 525 (Iowa 2003).
158 See United States v. Scheffer, 523 U.S. 303, 310-11 (1998).
159 See N.M. R. EVID. 11-707(c) (allowing the opinion of the polygraph examiner to be
admitted into evidence at the discretion of the trial judge). See Lee v. Martinez, 96 P.3d 291
(N.M. 2004) (discussing when to allow polygraph evidence in court).
polygraph. 160 The Frye test was replaced for federal courts, and many state
courts, by the Daubert test after a 1993 decision of the United States Supreme
Court.161 The Frye and Daubert tests have spawned a vast literature on their
individual and comparative merits; for present purposes it is important only
to note that both tests rely, at least in the first instance, on the trial court
judge to make a decision about the admissibility of the evidence based on
expert testimony before her. 162 Neither test requires extended
experimentation or sets any accuracy standards that have to be attained
(though the Daubert test does at least inquire about error rates). Barring any
statutory intervention, presumably any evidence coming from a new method
of lie detection would be admitted or not based on trial court determinations
of whether it complied with Frye or Daubert.
In United States v. Scheffer,163 the United States Supreme Court upheld
the exclusion of polygraph evidence against a claim that a criminal defendant
had a right under the Sixth Amendment to have it admitted. Scheffer involved
a blanket ban on the admissibility of polygraph evidence under the Military
Rule of Evidence 707.164 Scheffer, an enlisted man, worked with military police
as an informant in drug investigations.165 When he was court-martialed for
illegal drug use, he wanted to introduce the results of polygraph examinations,
performed by the military as a routine part of his work as an informant, that
showed that he honestly denied illegal drug use during the same period that a
urine test detected methamphetamine.166 The court-martial refused to admit
Scheffer's evidence because of Rule 707,167 but the Court of Appeals for the
Armed Forces reversed, holding that this per se exclusion of all polygraph
evidence violated the Sixth Amendment.168
The Supreme Court reversed in turn, upholding Rule 707 in a fractured
decision. Justice Thomas wrote the opinion announcing the decision of the
Court, joined by Chief Justice Rehnquist and Justices Scalia and Souter.169 He found the rule
constitutional on three grounds: (1) continued question about the reliability
of polygraph evidence, (2) the need to "preserve the jury's core function of
making credibility determinations in criminal trials," and (3) the avoidance of
collateral litigation. 170 Justice Kennedy, joined by Justices O'Connor,
Ginsburg, and Breyer, concurred in the section of the Thomas opinion based
on reliability of polygraph evidence, but did not agree with the other two
grounds.171 Justice Stevens dissented, finding that polygraph testing was
reliable enough to overcome a complete ban.172
Interestingly, in spite of the almost universal conclusion that polygraph
evidence does not meet the standards for admissibility of scientific evidence, it
160 NRC, supra note 30, at 293-94. See also Adler, supra note 29.
161 Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).
162 Id. at 318-320.
173 See 1-8 MATTHEW BENDER & CO., INC., SCIENTIFIC EVIDENCE 8-4(C) (2005).
174 United States v. Piccinonna, 885 F.2d 1529 (11th Cir. 1989) (en banc). See also
United States v. Henderson, 409 F.3d. 1293 (11th Cir. 2005) (explaining the relationship
between Piccinonna and Daubert).
175 Rupe v. Wood, 93 F.3d 1434 (9th Cir. 1996). See also Height v. State, 604 S.E.2d
796 (Ga. 2004) (holding that Georgia's general ban on admitting polygraph evidence except on
the parties' stipulation did not apply to the sentencing phase of a capital case).
176 See, e.g., United States v. Cordoba, 104 F.3d 225 (9th Cir. 1997); United States v.
Posado, 57 F.3d 428 (5th Cir. 1995). But see United States v. Prince-Oyibo, 320 F.3d 424 (4th
Cir. 2003) (maintaining the Fourth Circuit's per se rule against the admissibility of polygraph
evidence in spite of Daubert).
177 Some of the ideas in this section have appeared previously in Henry T. Greely, Premarket
Approval Regulation for Lie Detection: An Idea Whose Time May Be Coming, 5 AM. J.
BIOETHICS 50, 50-52 (2005).
178 Federal Food Drug and Cosmetic Act §§ 301-308, 21 U.S.C. § 333 (2006). See
PETER B. HUTT, RICHARD A. MERRILL & LEWIS A. GROSSMAN, HUTT, MERRILL, AND
GROSSMAN'S FOOD AND DRUG LAW 1196-1370 (3d ed. 2007).
179 Illes, supra note 3.
180 Federal Food Drug and Cosmetic Act (FFDCA) § 505(i). See generally HUTT,
MERRILL & GROSSMAN, supra note 178, at 624-26.
181 Federal Food Drug and Cosmetic Act § 505(i), 21 U.S.C. § 333 (2006). See HUTT,
MERRILL & GROSSMAN, supra note 178, at 624-26.
182 One might argue whether information should be provided about countermeasures.
The information might give test subjects the knowledge they need to nullify the tests; on the
other hand, it may help test operators and others detect, combat, or evaluate the risk that a
subject is using countermeasures.
efficacy remains and many relevant questions lack clear answers. Just how
much accuracy is enough? Should it be enough that the method shows some
statistically significant improvement over the efficacy of random guesses
whether the subject is lying-and, if so, statistically significant at what
probability (p) value? Should it be measured against the accuracy of humans
experienced at detecting lies, such as a veteran investigator or an experienced
judge, or would efficacy have to be a general figure? Could lie detectors be
approved based on how well they worked with just some kinds of people, with
some types of questions, or in particular kinds of situations? If approved for
particular kinds of situations, should its use be legally restricted to the
situations for which it was approved or should something akin to the medical
"off-label use" be allowed? If approved for only some kinds of people, how
well could that restriction be enforced?
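To make the first of these questions concrete, the following is a minimal sketch, not drawn from the article, of how a reviewing body might ask whether a detector's observed accuracy beats random guessing, using an exact one-sided binomial test; the sample figures (60 correct calls out of 100 subjects) are hypothetical.

```python
from math import comb

def binomial_p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of scoring at least
    `correct` right out of `trials` if the detector were merely guessing
    with per-trial success probability `chance`."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical validation trial: the detector classifies 60 of 100 subjects correctly.
p = binomial_p_value(60, 100)
print(f"p = {p:.4f}")  # below the conventional 0.05 threshold, yet 60%
                       # accuracy could still be far too low for real use
```

The sketch illustrates the article's point that statistical significance alone is a weak standard: a detector can beat chance at p < .05 while still misclassifying four subjects in ten.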
It may be that either the legislature or the agency should set quantitative
standards for safety and efficacy, though it is hard to know what reasonable
standards might be in advance of good trials of any lie detection technologies.
One solution might be to allow the agency substantial discretion initially, but
require that it set quantitative standards after it had substantial experience
with the technologies.
Another possibility is to eliminate the requirement for pre-market
approval and just require rigorous and extensive pre-market testing with the
test results becoming public information. In that way, the potential users of
the technologies would have sufficient information to make their own choices
about whether it was effective "enough." We prefer a stronger restriction, but
even an informational requirement would be a great advance over the current
situation.
Regardless of the strength of the regulatory system, another tricky
question will be what to do about changes in the lie detection methods that
might occur if the technology moves from initial validity to sustained validity
in the face of other technological changes and innovation. 183 Drugs cannot be
changed in non-trivial ways without new trials-changes to molecular
structure cannot be made easily or presumed unimportant-but the core of at
least some of the lie detection methods may be easily changeable software. If
the software is changed to vary the weight given to activation in particular
brain regions, should new trials, either for approval or for provision of the
required accuracy information to the public, be required before that change
can be legally implemented?
Assuming the regulatory scheme requires approval and not just creation
and provision of information, the question arises as to whom it would
regulate. Specifically, should defense or intelligence agencies be bound by
these constraints? We think the answer is "yes." If the technology is not
sufficiently accurate, it does no good-and may actually do harm-to allow it
to be used in national security settings. Yet the temptation for such use would
be enormous and the political costs of insisting on universal coverage might
sink the entire plan.
183 See generally Judy Illes & Margaret L. Eaton, Commercializing Cognitive
Neurotechnology - The Ethical Terrain, 25 NATURE BIOTECHNOLOGY 393 (2007) (asserting
that a "lack of recognition of the ethical, social and policy issues associated with the
commercialization of neurotechnology could compromise new ventures in the area.").
The question of binding works in the other direction as well. Once a lie
detection technology is approved, does everyone have to allow its use? Here,
the answer seems to be "no." Different courts, for example, may have different
views about how accurate a technology needs to be for it to be admitted into
evidence. Even if the federal government wanted to promote a common
standard for judicial admissibility of lie detection evidence, it probably does
not have the power to impose its position on state courts. Similarly, if one
state wants to have stricter (but not weaker) rules for allowing the use of lie
detection, although the federal government might arguably have the power to
force a uniform standard under the Commerce Clause, it would be more
appropriate for it to allow states to be more restrictive, as it did in EPPA. It is
worth noting that judges facing attempts by parties to admit this evidence will
have to cope with questions of field strength, statistical significance, and
various brain regions of interest. Educational or other neutral explanatory
resources might be tremendously useful for such judges and should be
pursued.
A final, and important, practical question has to do with cost. Who will
pay for this testing? Although these trials would likely be much less expensive
than the several hundred million dollars required for drug trials, it is likely
that the cost of testing any single lie detection method would be more than $5
million. 184 Whether companies could raise the money for those tests is
unclear; it would no doubt depend on an assessment of their market
possibilities. This in turn may hinge on whether patents or the regulatory
structure provides them any protection from competition. Government
funding is another possibility, particularly as the government would likely be a
major consumer of lie detection services. Government-funded trials could
also be done by government-employed experts, rather than relying, as the
FDA does, on the regulated companies to do the testing. Politically, however,
it is usually easier to force companies to spend money than to appropriate it
from public funds. Depending on how many lie detection methods need to be
evaluated in anyone year, the aggregate costs of the trials could become
significant.
184 This figure is a very rough estimate. It assumes that a trial would use 2,000 subjects
and that the MRI time alone for each individual would cost about $1,000, for a total of $2
million. It then assumes that recruitment and management of the subjects, on the one hand,
and analysis of the results, on the other, would each involve costs roughly comparable to the
MRI costs, bringing the estimated total to about $6 million. The actual figure, even for a
2,000 person trial, could easily be two or three times as much; it seems very unlikely that it
could be half or a third of that amount.
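The footnote's arithmetic can be restated as a small calculation; this is a sketch using the footnote's own rough assumptions (2,000 subjects, about $1,000 of scanner time each, and recruitment and analysis each costing roughly as much as the scanning itself).

```python
# Figures taken from the footnote above; all are rough assumptions.
subjects = 2_000
mri_cost_per_subject = 1_000   # assumed scanner-time cost per person

scanning = subjects * mri_cost_per_subject   # $2 million of MRI time
recruitment = scanning   # subject recruitment/management, assumed comparable
analysis = scanning      # analysis of results, assumed comparable

total = scanning + recruitment + analysis
print(f"estimated trial cost: ${total:,}")   # about $6 million
```

As the footnote notes, the real figure could easily be two or three times this total, but is unlikely to be much lower.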
be weighed against the benefits of lie detection. This is true not only in our
dealings with governments, whose actions are limited by the federal
Constitution, but in the private spheres of life. Society would need to consider
whether laws like the Employee Polygraph Protection Act need to be amended
or extended beyond employers to others who may want to use lie detectors:
insurers, lenders, contractors, schools, or parents, among others. Completely
new licensing schemes might be needed for those who operate these new lie
detectors.
Even the preliminary regulation we propose would not pass itself.
Congress, preferably, or state legislatures, would have to be convinced to
adopt a new, complex, and possibly expensive statutory plan, one dealing with
technologies that just barely exist and that may be more widely viewed as
blessings than as threats. Why then would-or should-Congress act?
Because it is important. Companies are marketing the age-old dream of
lie detection coupled with the high-tech mystique (and beautiful color
graphics) of brain scanning. The combination may prove irresistible to many,
but with so little evidence that the method is accurate, the result may be
tragic. Honest people may be treated unfairly based on negative tests;
dishonest people may go free.
Even in the judicial context, where the Daubert and Frye tests provide
some check on inaccurate evidence, the check is only partial. Each trial judge
is empowered to make her own decision, based on the evidence presented in
her court. A good lawyer, with a good expert, pushing admissibility of the
technology, and a bad lawyer, with a bad (or non-existent) expert opposing it,
could tip the balance in any given court. So could an overly impressionable or
scientifically naive judge. A favorable decision by any single judge anywhere
in the country will be trumpeted by the companies selling the technology, in
the same way Larry Farwell, the developer of "brain fingerprinting," has
publicized his view of the Harrington case.
As a result, lives may be ruined. We have seen lives shattered before, with
and without these technologies. Wen Ho Lee is one example of a victim of the
polygraph.185 Recent news provides an even clearer example of the costs of
investigative mistakes, although not in a case involving (as far as we know) lie
detection. In September 2002, Maher Arar, a Canadian citizen who was born
in Syria, was returning to Canada with his family from a vacation in Tunisia.186
While changing planes at Kennedy Airport in New York, he was detained by
U.S. officials. After thirteen days of questioning-but no formal charges or
court action-he was flown to the Middle East, where his American captors
delivered him to Syrian security agents.187 After a year of imprisonment and
torture, he was released through Canadian intervention.188 After a two-year
185 See DAN STOBER & IAN HOFFMAN, A CONVENIENT SPY: WEN HO LEE AND THE
POLITICS OF NUCLEAR ESPIONAGE (2002) (discussing, in some detail, conflicting conclusions
investigators drew from Lee's several polygraph tests). See also Transcript of Court Opinion,
United States v. Wen Ho Lee, http://cicentre.com/Documents/DOC_Judge_Parker_
on_Lee_Case.htm (the extraordinary apology that United States District Judge James Parker, the
judge assigned to Lee's criminal case, made to Lee for the manner of his detention by the
federal government).
186 See Jane Mayer, Outsourcing Torture, NEW YORKER, Feb. 14, 2005.
187 Id.
188 Id.
V. CONCLUSION
We have come a long way, from discussions of the concept of maps, mapping,
and illustration to the use of individualized, rapidly changing maps of blood
flow in the brain to try to detect lies. We need to remember, though, that the
map is never the territory; the fMRI scan is not the same as the brain it scans.
Neuroscience lie detection, if it proves feasible at all, will not be perfect. We
need to prevent the use of unreliable technologies, and to develop fully
detailed information about the limits of accuracy of even reliable lie detection.
Government regulation appears to be the only way to accomplish this goal,
and, by so doing, we take a first step toward maximizing the benefits of these
new technologies while minimizing their harms.
189 Id.
190 Dahlia Lithwick, Welcome Back to the Rule of Law, SLATE, Jan. 30, 2007,
http://www.slate.com/id/2157667/.
APPENDIX
STATE CITATION DESCRIPTION
Alabama
ALA. CODE §§ 34-25-1 et seq. (LexisNexis 1995). Licensing scheme.
ALA. CODE § 36-1-8 (1995). No polygraph for state employees.
Alaska
ALASKA STAT. §§ 12.55.100(e), 33.16.150(a)(13) (2006). Beginning July 1, 2007, those on probation or parole for a sex offense "shall . . . submit to regular periodic polygraph examinations."
ALASKA STAT. § 23.10.037 (1989). No polygraph for employees, except law enforcement officers.
Arizona
ARIZ. REV. STAT. § 36-3710 (LexisNexis 1998). Sexually violent predators may be monitored during outpatient treatment by "polygraph or plethysmograph or both."
ARIZ. REV. STAT. § 38-1101(A), (B)(4) (2005). Law enforcement officers do not have a right to a representative during an interview that "could result in dismissal, demotion or suspension" if the interview occurs in the course of a polygraph examination.
Arkansas
ARK. CODE ANN. §§ 17-39-101 et seq. (1987). Licensing scheme.
ARK. CODE ANN. § 17-40-202(5) (1987). Board of Private Investigators and Private Security Agencies shall include one licensed polygraph examiner.
California
CAL. EVID. CODE § 351.1(a) (Deering 1983). Polygraph evidence not admissible unless stipulated by all parties.
Colorado
COLO. REV. STAT. § 10-3-1104(1)(k) (2006). Defines requiring a polygraph for an insurance claim as an unfair business practice subject to administrative discipline.
refuse).
Connecticut
CONN. GEN. STAT. § 31-51g (2007). No polygraph for public employees, officers.
Delaware
DEL. CODE ANN. tit. 19, § 704 No polygraph for public or private
DC
D.C. CODE § 32-902 (1995). No polygraphs for public or private
corrections departments.
violate § 32-902.
Florida
FLA. STAT. ANN. § 321.065 Highway patrol officers may be required
occurred."
Georgia
GA. CODE ANN. § 42-1-14 (2006). A sex offender may provide the risk
Guam
GUAM CODE ANN. tit. 10, §§ Law enforcement officers must "submit
Hawaii
HAW. REV. STAT. ANN. §§ 378 No polygraphs for public or private
Idaho
IDAHO CODE ANN. §§ 44-903, No polygraphs for public or private
Illinois
20 ILL. COMP. STAT. ANN. Chicago crime laboratory employees
725 ILL. COMP. STAT. ANN. 5/115 Criminal courts "shall not require,
take a polygraph.
725 ILL. COMP. STAT. ANN. 200/1 No polygraphs required for sex assault
735 ILL. COMP. STAT. ANN. 5/2 Civil courts may not "require" either
Indiana
IND. CODE ANN. §§ 25-30-2-1 et Licensing scheme.
Iowa
IOWA CODE § 730.4 (2006). No polygraphs for public or private
employees, except for law enforcement.
IOWA CODE § 915.44 (2006). No polygraphs required for sex
assault victims or witnesses; however,
police may consider refusal in deciding
whether to take the case; and polygraph
refusal may not be the sole reason for
declining to investigate.
Kansas
None.
Kentucky
KY. REV. STAT. ANN. §§ 15.382(17), 15.384 (LexisNexis 2002). Law enforcement officers must take a polygraph.
Maryland
MD. CODE ANN., CRIM. PROC. § Parole conditions for sex offenders may
MD. CODE ANN., LAB. & EMPL. § No polygraphs for public or private
staff.
MD. CODE ANN., PUB. SAFETY § 1 Sexual Offender Advisory Board shall
proceeding.
Massachusetts
MASS. GEN. LAWS ANN. ch. 149, § No polygraphs for public or private
Michigan
MICH. COMP. LAWS SERV. §§ No polygraphs for public or private
1973).
Minnesota
MINN. STAT. § 181.75 (1986). No polygraphs for public or private
employees.
results.
Mississippi
MISS. CODE ANN. § 45-3-47 Highway Safety Patrol training
polygraph.
seq. (1993).
Missouri
MO. REV. STAT. § 632.505(3)(13) (2006). Court may order "polygraph, plethysmograph, or other electronic or behavioral monitoring or assessment" as a condition of parole or probation for sex offenders.
Montana
MONT. CODE ANN. § 39-2-304 (1997). No polygraphs for public or private employees. (1987 Amendment removed previous exception for law enforcement agencies.)
Nebraska
NEB. REV. STAT. ANN. §§ 81-1901 et seq. (LexisNexis 1980). Licensing scheme.
NEB. REV. STAT. ANN. § 83-174.03(4)(f) (LexisNexis 2006). Parole Administration may require polygraphs as parole condition for sex offenders.
Nevada
NEV. REV. STAT. ANN. § 176.139(4)(e) (LexisNexis 2001). Evaluation for parole or probation by Division of Parole and Probation may include polygraph.
NEV. REV. STAT. ANN. § 176A.410(1)(g) (LexisNexis 2005), NEV. REV. STAT. ANN. § 213.1245 (LexisNexis 2003). In ordering release for parole or probation, court shall require sex offenders to submit to polygraphs as requested by parole or probation officers.
NEV. REV. STAT. ANN. § 289.050 (LexisNexis 2001), NEV. REV. STAT. ANN. § 289.070 (LexisNexis 2005). No polygraphs may be required for law enforcement officers under investigation; but, if an officer volunteers for a polygraph, the exam must be recorded and reviewed by a second polygraph examiner.
New Jersey
N.J. REV. STAT. § 2C:40A-1 (1983). No polygraphs for public or private employees, except for some employees with access to controlled substances. The exemption is narrower than the federal Employee Polygraph Protection Act. Compare N.J. STAT. § 2C:40A-1, with 29 U.S.C. § 2006(f).
N.J. REV. STAT. § 30:4-123.88 (2005). Parole Board may require all sex offenders (and some kidnappers) to submit to polygraphs at least annually.
New Mexico
N.M. STAT. ANN. § 9-3-13(d)(12) (LexisNexis 2007). Sex Offender Management Board shall study polygraphs as a method of evaluating sex offenders.
New York
N.Y. GEN. BUS. LAW § 380-j (Consol. 1986). Consumer reporting agencies may not include polygraph information in file.
North Carolina
None.
North Dakota
N.D. CENT. CODE §§ 43-31-01 et seq. (1993). Licensing scheme.
Ohio
OHIO REV. CODE ANN. § 177.01 (LexisNexis 1999). Organized crime commission and consultants may be required to take a polygraph before being granted a security clearance.
Oklahoma
OKLA. STAT. tit. 22, § 991a(1)(A)(ee) (2006). Sex offenders may be required to undergo polygraph exams.
Oregon
OR. REV. STAT. § 163.705 (1981). No polygraph for sexual assault victims.
Rhode Island
R.I. GEN. LAWS §§ 28-6.1-1 et seq. (1987). No polygraphs for public or private employees.
South Carolina
S.C. CODE ANN. §§ 40-53-10 et seq. (1972). Licensing scheme.
South Dakota
S.D. CODIFIED LAWS §§ 36-30-1 et seq. (1984). Licensing scheme.
Tennessee
TENN. CODE ANN. § 38-3-123 (2006). No polygraph for sexual assault victims.
Texas
TEX. FAM. CODE ANN. § 51.151 (Vernon 2001), TEX. HUM. RES. CODE (Vernon 2001). Juveniles in custody may not be given a polygraph; [juveniles who committed] sex offenses may be required to submit [to polygraphs].
TEX. GOV'T CODE ANN. §§ 411.007, 411.0074 (Vernon 2005). [Applicants for] dispatcher positions must take a polygraph, but current police officers [are not required to take a] polygraph.
TEX. HEALTH & SAFETY CODE ANN. § 841.083 (Vernon 2005). Civil commitment for sex offenders may include regular polygraph and plethysmographs.
TEX. LOC. GOV'T CODE ANN. §§ 143.124, 143.314 (Vernon 1997). Firefighters may be required to submit to polygraph if a complainant files a complaint against the firefighter and the complainant takes and passes a polygraph; firefighters may also be required to submit to a polygraph if the municipality's department head "considers the circumstances to be extraordinary and the fire department head believes that the integrity of a fire fighter or the fire department is in question."
TEX. OCC. CODE ANN. § 1703.001 (Vernon 1999). Licensing scheme.
TEX. CODE CRIM. PROC. ANN. art. 15.051 (Vernon 1997). No polygraphs required for sex assault victims.
STATE
CITATION DESCRIPTION
Utah
2006 (2006).
seq. (1975).
Virginia
VA. CODE ANN. § 8.01-418.2 Polygraph evidence is not admissible in
polygraphers.
an investigation.
probation of teacher.
Virginia law.
seq. (1993).
Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 73 of 144
Washington
WASH. REV. CODE ANN. § 43.43.020 (LexisNexis 2005), WASH. REV. CODE ANN. § 43.101.080(19) (LexisNexis 2005), WASH. REV. CODE ANN. § 43.101.095(2)(a) (LexisNexis 2005), WASH. REV. CODE ANN. § 43.103.090(2)(a) (LexisNexis 1999). Law enforcement officer requirements for initial hire may or must include polygraph, depending on the position and date of hire. Most significant, after July 2005, all new peace officers must pass a polygraph. WASH. REV. CODE § 43.101.095(2)(a) (LexisNexis 2005).
West Virginia
W. VA. CODE ANN. § 21-5-5b No polygraphs for public or private employees.
Wisconsin
WIS. STAT. §§ 51.375, 301.132 Polygraphs may be required for sex offenders in custody.
Wyoming
None.
Using Imaging to Identify Deceit:
Scientific and Ethical Questions
ISBN#: 0-87724-077-9
The views expressed in this volume are those held by each contributor and are
not necessarily those of the Officers and Fellows of the American Academy of
Arts and Sciences.
Contents
INTRODUCTION
Imaging Deception
Emilio Bizzi and Steven E. Hyman
3 CHAPTER 1
An Introduction to Functional Brain Imaging
in the Context of Lie Detection
Marcus E. Raichle
7 CHAPTER 2
The Use of fMRI in Lie Detection: What Has Been Shown
Nancy Kanwisher
14 CHAPTER 3
23 CHAPTER 4
35 CHAPTER 5
Neural Lie Detection in Courts
Walter Sinnott-Armstrong
40 CHAPTER 6
Lie Detection in the Courts: The Vain Search for the Magic Bullet
Jed S. Rakoff
46 CHAPTER 7
Neuroscience-Based Lie Detection: The Need for Regulation
Henry T. Greely
INTRODUCTION
Imaging Deception
EMILIO BIZZI AND STEVEN E. HYMAN
CHAPTER 1
An Introduction to
Functional Brain Imaging
in the Context of Lie
Detection
MARCUS E. RAICHLE
Human brain imaging, as the term is understood today, began with the introduction of X-ray computed tomography (i.e., CT as it is known today) in 1972. By passing narrowly focused X-ray beams through the body at many different angles and detecting the degree to which their energy had been attenuated, Godfrey Hounsfield was able to reconstruct a map of the density of the tissue in three dimensions. For their day, the resultant images of the brain were truly remarkable. Hounsfield's work was a landmark event that radically changed the way medicine was practiced in the world; it spawned the idea that three-dimensional images of organs of the body could be obtained using the power of computers and various detection strategies to measure the state of the underlying tissues of the body.
In the laboratory in which I was working at Washington University in St. Louis, the notion of positron emission tomography (PET) emerged shortly after the introduction of X-ray computed tomography. Instead of passing an X-ray beam through the tissue and looking at its attenuation as was done with X-ray computed tomography, PET was based on the idea that biologically important compounds like glucose and oxygen labeled with cyclotron-produced isotopes (e.g., ¹⁵O, ¹¹C, and ¹⁸F) emitting positrons (hence the name positron emission tomography) could be detected in three dimensions by ringing the body with special radiation detectors. The maps arising from this strategy provided us with the first quantitative maps of brain blood flow and metabolism, as well as many other interesting measurements of function. With PET, modern human brain imaging began measuring function.
In 1979, magnetic resonance imaging (MRI) was introduced. While embracing the concept of three-dimensional imaging, this technique was based on the magnetic properties of atoms (in the case of human imaging, the primary atom of interest has been the hydrogen atom or proton). Studies of these properties had been pursued for several decades in chemistry laboratories using a technique called nuclear magnetic resonance. When this technique was applied to the human body and images began to emerge, the name was changed to "magnetic resonance imaging" to assuage concerns about radioactivity that might mistakenly arise because of the use of the term nuclear. Functional MRI (fMRI) has become the dominant mode of imaging function in the human brain.
At the heart of functional brain imaging is a relationship between blood flow to the brain and the brain's ongoing demand for energy. The brain's voracious appetite for energy derives almost exclusively from glucose, which in the brain is broken down to carbon dioxide and water. The brain is dependent on a continuing supply of both oxygen and glucose delivered in flowing blood regardless of moment-to-moment changes in an individual's activities.
For over one hundred years scientists have known that when the brain changes its activity as an individual engages in various tasks the blood flow increases to the areas of the brain involved in those tasks. What came as a great surprise was that this increase in blood flow is accompanied by an increase in glucose use but not oxygen consumption. As a result, areas of the brain transiently increasing their activity during a task contain blood with increased oxygen content (i.e., the supply of oxygen becomes greater than the demand for oxygen). This observation, which has received much scrutiny from researchers, paved the way for the introduction of MRI as a functional brain tool.
By going back to the early research of Michael Faraday in England and, later, Linus Pauling in the United States, researchers realized that hemoglobin, the molecule in human red blood cells that carries oxygen from the lungs to the tissue, had interesting magnetic properties. When hemoglobin is carrying a full load of oxygen, it can pass through a magnetic field without causing any disturbance. However, when hemoglobin loses oxygen to the tissue, it disrupts any magnetic field through which it passes. MRI is based on the use of powerful magnetic fields, thousands of times greater than the earth's magnetic field. Under normal circumstances when blood passes through an organ like the brain and loses oxygen to the tissue, the areas of veins that are draining oxygen-poor blood show up as little dark lines in MRI images, reflecting the loss of the MRI signal in those areas. Now suppose that a sudden increase in blood flow locally in the brain is not accompanied by an increase in oxygen consumption. The oxygen content of these very small draining veins increases. The magnetic field in the area is restored, resulting in a local increase in the imaging signal. This phenomenon was first demonstrated with MRI by Seiji Ogawa at Bell Laboratories in New Jersey. He called the phenomenon the "blood oxygen level dependent" (BOLD) contrast of MRI and advocated its use in monitoring brain function. As a result researchers now have fMRI using BOLD contrast, a technique that is employed thousands of times daily in laboratories throughout the world.
A standard maneuver in functional brain imaging over the last twenty-five years has been to isolate changes in the brain associated with particular tasks by subtracting images taken in a control state from the images taken during the performance of the task in which the researcher is interested. The control state is often carefully chosen so as to contain most of the elements of the task of interest save that which is of particular interest to the researcher. For example, to "isolate" areas of the brain concerned with reading words aloud, one might select as the control task passively viewing words. Having eliminated areas of the brain concerned with visual word perception, the resulting "difference image" would contain only those areas concerned with reading aloud.
Another critical element in the strategy of functional brain imaging is the use of image averaging. A single difference image obtained from one individual appears "noisy," nothing like the images usually seen in scientific articles or the popular press. Image averaging is routinely applied to imaging data and usually involves averaging data from a group of individuals. While this technique is enormously powerful in detecting common features of brain function across people, in the process it completely obscures important individual differences. Where individual differences are not a concern, this is not a problem. However, in the context of lie detection researchers and others are specifically interested in the individual. Thus, where functional brain imaging is proposed for the detection of deception, it must be clear that the imaging strategy to be employed will provide satisfactory imaging data for valid interpretation (i.e., images of high statistical quality).¹
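The subtraction-and-averaging strategy described above can be illustrated with a toy numerical sketch. This is not an actual fMRI pipeline: the array sizes, noise levels, and the "planted" active region are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy volumes: 8x8x8 activation maps for 12 subjects,
# one acquired during the task of interest and one during the control state.
n_subjects, shape = 12, (8, 8, 8)
task = rng.normal(0.0, 1.0, (n_subjects, *shape))
control = rng.normal(0.0, 1.0, (n_subjects, *shape))

# Plant a weak common signal (the "task-related" region) in every subject.
task[:, 2:4, 2:4, 2:4] += 0.8

# Step 1: per-subject difference image (task minus control) -- noisy on its own.
diff = task - control                      # shape (12, 8, 8, 8)

# Step 2: average the difference images across subjects to suppress noise.
group_mean = diff.mean(axis=0)             # shape (8, 8, 8)

# The planted region stands out in the group average even though each
# individual difference image is dominated by noise.
print(f"one subject, in-region mean:   {diff[0, 2:4, 2:4, 2:4].mean():.2f}")
print(f"group average, in-region mean: {group_mean[2:4, 2:4, 2:4].mean():.2f}")
```

Averaging over the subject axis shrinks the noise by roughly the square root of the group size while the shared signal survives, which is why group maps look clean; a single subject's difference image stays noisy, which is exactly the concern the text raises for lie detection, where only the individual matters.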
1. For a more in-depth explanation of functional brain imaging, see Raichle (2000); and Raichle and Mintun (2006).
2. I was a member of the NAS committee that authored the report.
agencies and secret national laboratories. This is a sobering fact given the concerns raised by the NAS report about the use of the polygraph in screening. As a screening technique the polygraph performs poorly and would likely falsely incriminate many innocent employees while missing the small number of spies in their midst. The NAS committee could find no available and properly tested substitute, including functional brain imaging, that could replace the polygraph.
The NAS committee found many problems with the scientific data it reviewed. The scientific evidence on means of lie detection was of poor quality, with a lack of realism, and studies were poorly controlled, with few tests of validity. For example, the changes monitored (e.g., changes in skin conductance, respiration, and heart rate) were not specific to deception. To compound the problem, studies often lacked a theory relating the monitored responses to the detection of truthfulness. Changes in cardiac output, peripheral vascular resistance, and other measures of autonomic function were conspicuous by their absence. Claims with regard to functional brain imaging hinged for the most part on dubious extrapolations from group averages.
Countermeasures (i.e., strategies employed by a subject to "beat the polygraph") remain a subject clouded in secrecy within the intelligence community. Yet information on such measures is freely available on the Internet! Regardless, countermeasures remain a challenge for many techniques, although one might hold some hope that imaging could have a unique role here. For example, any covert voluntary motor or cognitive activity employed by a subject would undoubtedly be associated with predictable changes in functional brain imaging signals.
At present we have no good ways of detecting deception despite our very great need for them. We should proceed in acquiring such techniques and tools in a manner that will avoid the problems that have plagued the detection of deception since the beginning of recorded history. Expanded research should be administered by organizations with no operational responsibility for detecting deception. This research should operate under normal rules of scientific research, with freedom and openness of communication to the extent possible while protecting national security. Finally, the research should vigorously explore alternatives to the polygraph, including functional brain imaging.
REFERENCES
CHAPTER 2
The Use of fMRI in Lie Detection: What Has Been Shown
NANCY KANWISHER
Can you tell what somebody is thinking just by looking at magnetic resonance imaging (MRI) data from their brain?¹ My colleagues and I have shown that a part of the brain we call the "fusiform face area" is most active when a person looks at faces (Kanwisher et al. 1997). A separate part of the brain is most active when a person looks at images of places (Epstein and Kanwisher 1998). People can selectively activate these regions during mental imagery. If a subject closes her eyes while in an MRI scanner and vividly imagines a group of faces, she turns on the fusiform face area. If the same subject vividly imagines a group of places, she turns on the place area. When my colleagues and I first got these results, we wondered how far we could push them. Could we tell just by looking at the fMRI data what someone was thinking? We decided to run an experiment to determine whether we could tell in a single trial whether a subject was imagining a face or a place (O'Craven and Kanwisher 2000).
My collaborator Kathy O'Craven scanned the subjects, and once every twelve seconds said the name of a famous person or a familiar place. The subject was instructed to form a vivid mental image of that person or place. After twelve seconds Kathy would say, in random order, the name of another person or place. She then gave me the fMRI data from each subject's face and place areas.
Figure 1 shows the data from one subject. The x-axis shows time, and the y-axis shows the magnitude of response in the face area (black) and the place area (gray). The arrows indicate the times at which instructions were given to the subject. My job was to look at these data and determine for each trial whether the subject was imagining a face or a place. Just by eyeballing the data, I correctly determined in over 80 percent of the trials whether the subject was imagining faces or places. I worried for a long time before we published these data that people might think we could use an MRI to read their minds. Would they not realize the results obtained in my experiment were for
1. This article is based on remarks made at the American Academy of Arts and Sciences's
conference on February 2, 2007.
Figure 1. Time course of fMRI response in the fusiform face area and parahippocampal place area of one subject over a segment of a single scan, showing the fMRI correlates of single unaveraged mental events. Each black arrow indicates a single trial in which the subject was asked to imagine a specific person (indicated by face icon) or place (indicated by house icon). Visual inspection of the time courses allowed 83 percent correct determination of whether the subject was imagining a face or a place. Source: O'Craven and Kanwisher 2000.
for identifying patterns of brain response that are consistent across subjects. Group studies are not useful for determining whether a particular subject is lying. Studies that analyze individual subject data are relevant for trying to determine whether fMRI is useful for lie detection, so we discuss these findings in turn.
STUDY 1
tion, and they found the three or four of those blobs that were most discriminative between truths and lies. They then used those blobs to classify the activations in the other subjects. So, for example, they ran statistical tests on each 3-D pixel or "voxel" in the brain, asking whether that voxel produced a stronger response during the lie condition than the neutral condition, and they tallied how many voxels showed that pattern versus how many produced a stronger response in the truth condition than neutral. If they found more lie voxels than truth voxels, they considered their model to have identified which condition was a lie in that subject. By this measure, they could correctly determine for 90 percent of subjects which was the lie and which was the truth.
This is not really lie detection. The researchers always know that the subjects are lying in response to one of the sets of non-neutral questions. Rather than answering the question "Can you tell whether the subject is lying?" this research is answering the question "Can you tell which response is the truth and which is the lie?"
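The voxel-tallying decision rule described above reduces to a forced-choice count. The sketch below is an invented miniature, not the actual published pipeline: the voxel count, effect size, and random data are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-voxel activation estimates within pre-selected "blobs".
n_voxels = 200
neutral = rng.normal(0.0, 1.0, n_voxels)
# Condition A plays the (unknown-to-the-rule) lie: slightly elevated response.
cond_a = neutral + rng.normal(0.5, 1.0, n_voxels)
cond_b = neutral + rng.normal(0.0, 1.0, n_voxels)  # the truth condition

def pick_lie(cond_x, cond_y, baseline):
    """Tally voxels responding more strongly than neutral under each
    condition; declare the condition with the larger tally the lie."""
    x_tally = int(np.sum(cond_x > baseline))
    y_tally = int(np.sum(cond_y > baseline))
    return ("X" if x_tally > y_tally else "Y"), x_tally, y_tally

label, a_tally, b_tally = pick_lie(cond_a, cond_b, neutral)
print(label, a_tally, b_tally)
```

Note the forced choice baked into the rule: one of the two conditions is always declared the lie, which is precisely why this answers "which response is the lie?" rather than "is the subject lying?"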
STUDY 2
REAL-WORLD IMPLICATIONS
To summarize all three of the individual subject studies, two sets of functional MRI data have been analyzed and used to distinguish lies from truth. Kozel and colleagues (2005) achieved a 90 percent correct response rate in determining which was the lie and which was the truth, when they knew in advance there would be one of each. Langleben got a 76 percent correct response rate with individual trials, and Davatzikos, analyzing the same data, got an 89 percent correct response rate. The very important caveat is that in the last two studies it is not really lies they were looking at, but rather target detection events. Leaving that problem aside, these numbers aren't terrible. And these classification methods are getting better rapidly. Imaging methods are also getting better rapidly. So who knows where all this will be in a few years. It could get even much better than that.
But there is a much more fundamental question. What does any of this have to do with real-world lie detection? Let's consider how lie detection in the lab differs from any situation where you might want to use these methods in the real world. The first thing I want to point out is that making a false response when you are instructed to do so isn't a lie, and it's not deception. It's simply doing what you are told. We could call it an "instructed falsehood." Second, the kind of situation where you can imagine wanting to use MRI for lie detection differs in many respects from the lab paradigms that have been used in the published studies. For one thing, the stakes are incomparably higher. We are not talking about $20 or $50; we are talking about prison, or life, or life in prison. Further, the subject is suspected of a very serious crime, and they believe while they are being scanned that the scan may determine the outcome of their trial. All of this should be expected to produce extreme anxiety. Importantly, it should be expected to produce extreme anxiety whether the subject is guilty or not guilty of the crime. The anxiety does not result from guilt per se, but rather simply from being a suspect. Further, importantly, the subject may not be interested in cooperating, and all of these methods we have been discussing are completely foilable by straightforward countermeasures.
Functional MRI data are useless if the subject is moving more than a few millimeters. Even when we have cooperative subjects trying their best to help us and give us good data, we still throw out one of every five, maybe ten, subjects because they move too much. If they're not motivated to hold still, it will be much worse. This is not just a matter of moving your head: you can completely mess up the imaging data just by moving your tongue in your mouth, or by closing your eyes and not being able to read the questions. Of course, these things will be detectable, so the experimenter would know that the subject was using countermeasures. But there are also countermeasures subjects could use that would not be detectable, like performing mental arithmetic. You can probably activate all of those putative lie regions just by subtracting seven iteratively in your head.
Because the published results are based on paradigms that share none of the properties of real-world lie detection, those data offer no compelling evidence that fMRI will work for lie detection in the real world. No published evidence shows lie detection with fMRI under anything even remotely resembling a real-world situation. Furthermore, it is not obvious how the use of MRI in lie detection could even be tested under anything resembling a real-world situation. Researchers would need access to a population of subjects accused of serious crimes, including, crucially, some who actually perpetrated the crimes of which they are accused and some who did not. Being suspected but innocent might look a lot like being suspected and guilty in the brain. For a serious test of lie detection, the subject would have to believe the scan data could be used in her case. For the data from individual scans to be of any use in testing the method, the experimenter would ultimately have to know whether the subject of the scan was lying. Finally, the subjects would have to be interested in cooperating. Could such a study ever be ethically conducted?
REFERENCES
CHAPTER 3
Figure 1. (A) the hippocampus (light grey) and parahippocampus (dark grey); (B) the fusiform face area (in circles)
Slotnick, 2004 for a review), indicates that for many brain regions known to be important in memory, such as the hippocampus, it does not matter whether an experienced event was the result of our thought or our perception. Given this, we cannot look at these memory-related brain regions to reveal whether a person is remembering accurately. Other brain regions, such as the parahippocampus and fusiform gyrus, are more specifically involved in perceptual processing and memory for perceptual details (Schacter and Slotnick, 2004). These regions may provide a signal of familiarity for scenes and faces.
However, one difficulty in relying on BOLD signal responses in regions involved in perceptual aspects of memory, such as the parahippocampus or fusiform gyrus, to judge whether a suspected criminal is familiar with a scene or face is that perception is often altered by emotion. For the criminal whose brain is being imaged to judge involvement in a crime, pictures of the scene of the crime or partners in crime are likely highly emotional. Changes in perception occur with emotion, and research has demonstrated changes in BOLD signal in both the parahippocampus and fusiform gyrus for emotional scenes and faces. For example, a study looking at individuals remembering 9/11 found that people who were closer to the World Trade Center showed less activity in the parahippocampus when recalling the events of 9/11 than when recalling less emotional events (Sharot et al., 2007). As indicated earlier, the parahippocampus also shows less activation when a face is more familiar (Gonsalves et al., 2005). Even though imaging this region might reveal perceptual qualities of memory, this region might also be influenced by emotion. If the event is highly emotional, signals in the parahippocampus might not be a reliable indicator of familiarity.
What about the fusiform gyrus? This region is known for processing faces (Kanwisher and Yovel, 2006, for a review). As indicated by Gonsalves and colleagues (2005), this region also shows stronger activation for more familiar faces. However, emotion also influences responses in the fusiform gyrus, so that in a highly emotional situation the signal from this region might be somewhat altered. For faces that are equally unfamiliar, more activation is observed in the fusiform gyrus for faces with fear expressions (Vuilleumier et al., 2001). Furthermore, the face itself does not need to be fearful. If the context in which the face is presented is fearful, greater activation of the fusiform gyrus is observed (Kim et al., 2004). Because of this, responses in this region may not be a reliable indicator of familiarity with a face in an emotional situation.
When researchers look at the brain's memory circuitry to detect familiarity with a scene or person, for some regions it may be difficult to differentiate events that a person imagined, rehearsed, or thought were plausible from those that actually occurred. Memories are formed for events that happen only in our minds and events that happen in the outside world. The regions that are important in the perceptual aspects of memory are influenced by emotion, so they might not be good detectors of familiarity if the situation is emotional. In other words, the imagery and emotion that would likely be present when a lie is personally relevant and important might interfere with the use of fMRI signals to detect familiarity.
The second potential use of fMRI for lie detection relies on our knowledge of the neural circuitry of conflict. It is assumed that lying results in a conflict between the truth and the lie. How might emotion and imagery influence the neural signatures of conflict? Two regions that have been highlighted for their role in responding to conflict or interference are the anterior cingulate cortex and the inferior frontal gyrus. These regions have also been implicated in studies of lie detection (e.g., Kozel et al., 2004; Langleben et al., 2005).
aI., 2005). One classic task used to detect conflict-related responses is the
Stroop test, in which participants are shown a list of words and asked not to
read the words but to name the colors in which the words are printed. For
instance, if the vv'Ord table is presented in blue ink, participants are asked to
say "blue." Most of the time, participants can fairly easily ignore the words
and name the color of the ink. However, if the words the participants are
asked to ignore are the names of colors, it is much more difficult. For exam
ple, it typically takes significantly longer for participants to name the ink
color "blue" if the word they are asked to ignore is red as opposed to table.
This longer reaction time is due to the conflict between reading the word
red and saying the word "blue." Naming the color of ink for color words in
comparison to other words results in significant activation of an anterior,
dorsal region of the cingulate cortex (Carter and van Veen, 2007) and this
same region shows activation in many laboratory studies of lie detection
(e.g., Kozel et aI., 2004).
However, naming the ink color of words is not only slower for color words. In a variation of the Stroop task, called the emotional Stroop task, subjects are presented with highly emotional words printed in different colors of ink. When asked to ignore the words and name the ink color, it takes significantly longer to name the color for emotional words in comparison to neutral words. Interestingly, emotional variations of the Stroop task also result in activation of the anterior cingulate, but in a slightly different region that is more ventral than that observed in the classic Stroop paradigm (Whalen et al., 1998). In a meta-analysis of a number of conflict tasks, Bush et al. (2000) confirmed this division within the anterior cingulate (see Figure 2). Cognitive conflict tasks, such as the classic Stroop task, typically result in activation of the dorsal anterior cingulate, whereas emotional conflict tasks, as demonstrated by the emotional Stroop task, result in activation of the ventral anterior cingulate. This suggests that this specific neural indicator of conflict is significantly altered depending on the emotional nature of the conflict.
Another region often implicated in conflict or interference in studies of lie detection is the inferior frontal gyrus. In fact, some studies of lie detection have suggested that activation of this region is the best predictor of whether a participant is lying (Langleben et al., 2005). The role this region plays in conflict or interference monitoring has traditionally been examined with the Sternberg Proactive Interference paradigm. In a typical version of this paradigm, a participant is shown a set of stimuli and told to remember it. For example, the set might include three letters, such as B, D, F. After a short delay the participant is presented a letter and asked, "Was this letter in the target set?" If the letter is D, the participant should answer "yes." In the next
Figure 2. Meta-analysis of fMRI studies showing anterior cingulate activation for cognitive (circles) and emotional (squares) tasks, demonstrating a cognitive-affective division within the anterior cingulate. Consistent with this division, cognitive (triangles) and affective (diamond) versions of a Stroop task result in activation of dorsal and ventral regions of the anterior cingulate, respectively. Reprinted with permission from Bush et al., 2000.
trial the participant is given another target set, such as K, E, H. At this point, if the participant is shown the letter P, she or he should say "no." If the participant is shown the letter B, the correct answer is also "no." However, for most participants it will take longer to correctly respond "no" to B than P. This is because B was a member of the immediately preceding target set (B, D, F), but it is not a member of the current target set (K, E, H). On the preceding trial, a minute or so earlier, the participant was ready to respond "yes" to B. To correctly respond "no" to B on the current trial requires the participant to inhibit this potential "yes" response and focus only on the current target set. This requirement for inhibition is not necessary if the probe letter is P, which was not a member of either the preceding or current target set. Research using both brain imaging (D'Esposito et al., 1999; Jonides and Nee, 2006) and lesion (Thompson-Schill et al., 2002) techniques has shown that the inferior frontal gyrus plays an important role in resolving this type of interference or conflict. It is believed this region might be linked to lying because in order to lie one must inhibit the truth, which creates conflict or interference (e.g., Langleben et al., 2005).
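The contrast the paradigm turns on (a probe that must be rejected even though it appeared in the immediately preceding target set) can be sketched in a few lines of Python. This is purely illustrative; the function name and structure are my own and are not part of any published protocol:

```python
# Illustrative sketch (not from any published protocol) of trial
# classification in a Sternberg proactive-interference task.

def classify_probe(current_set, previous_set, probe):
    """Return the correct response ("yes"/"no") and whether the trial
    is an interference ("recent negative") trial."""
    correct = "yes" if probe in current_set else "no"
    # Interference arises when the probe must be rejected even though
    # it belonged to the immediately preceding target set.
    interference = probe not in current_set and probe in previous_set
    return correct, interference

# The example from the text: previous set {B, D, F}, current set {K, E, H}.
print(classify_probe({"K", "E", "H"}, {"B", "D", "F"}, "P"))  # ('no', False)
print(classify_probe({"K", "E", "H"}, {"B", "D", "F"}, "B"))  # ('no', True)
print(classify_probe({"K", "E", "H"}, {"B", "D", "F"}, "D"))  # ('no', True)
```

On "recent negative" trials such as probe B, participants answer correctly but more slowly; that slowing is the interference effect the imaging and lesion studies localize to the inferior frontal gyrus.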
In order to examine the impact of emotion on this type of interference, a recent study used a variation of the typical Sternberg Proactive Interference paradigm in which the stimuli were emotional words or scenes, instead of letters or neutral words and scenes. An examination of reaction times found that the inhibition of emotional stimuli was faster than neutral stimuli, suggesting that emotion can impact processing in this interference paradigm (Levens and Phelps, 2008). Using fMRI to examine the neural circuitry underlying the impact of emotion on the Sternberg Proactive Interference paradigm revealed that the inhibition of emotional stimuli on this task engages a slightly different network, including regions of the anterior insula cortex and orbitofrontal cortex (Levens et al., 2006). Much like the results observed with the anterior cingulate, this suggests that emotion alters the neural circuitry of inhibition or conflict observed in the inferior frontal gyrus. Although further studies are needed to clarify the impact of emotion on interference resolution mediated by the inferior frontal gyrus, these initial results suggest that this neural indicator of conflict in lying may also be different in highly emotional situations.
There is abundant evidence that emotion can influence the neural circuitry of conflict or interference identified in laboratory studies of lie detection, but can imagery or repetition also alter responses in these regions? It seems possible that practicing a lie repeatedly, as one might after generating a false alibi, could reduce the conflict experienced when telling that lie. If this is the case, we might expect less evidence of conflict with practice or repetition. This finding has been observed with the classic Stroop paradigm. A number of behavioral studies have demonstrated that practice can diminish the Stroop effect (see MacLeod, 1991, for a review). It has also been shown that practicing the Stroop task significantly reduces conflict-related activation in the anterior cingulate (Milham et al., 2003). To date, there is little research examining how imagery might alter the neural circuitry of conflict or interference as represented in the inferior frontal gyrus, but these findings suggest that at least some neural indicators of conflict or interference may be unreliable if the task (or lie) is imagined, practiced, and rehearsed.
Although the use of fMRI to detect lying in legal settings holds some promise, there are some specific challenges in developing these techniques that have yet to be addressed. Out of necessity, the development of techniques for lie detection relies on controlled laboratory studies of lying. However, lying in legally relevant circumstances is rarely so controlled. This difference should be kept in mind when building a neurocircuitry of lie detection that is based on unimportant lies told by paid participants in the laboratory, but is intended to be applied in legally important situations to people outside the laboratory facing far higher stakes. Because of this, it is important to examine exactly what might differ between the laboratory lie and the other lies that could impact the usefulness of these techniques. In this essay, I have explored two factors, imagery and emotion, and highlighted how research suggests that the neural signatures identified in current fMRI lie detection technologies might be quite different in their utility when the lies detected are not generated in the laboratory. This problem of applying laboratory findings to other, more everyday and/or personally relevant and important circumstances is a challenge for all studies of human behavior. However, addressing this challenge becomes especially critical when we attempt to use our laboratory findings to generate techniques that can potentially impact individuals' legal rights. Until this challenge can be addressed, the use of fMRI for lie detection should remain a research topic, instead of a legal tool.
REFERENCES
Bush, G., P. Luu, and M.I. Posner. 2000. Cognitive and emotional influences in anterior cingulate cortex. Trends in Cognitive Sciences 4:215-222.
Cabeza, R., S.M. Rao, A.D. Wagner, A.R. Mayer, and D.L. Schacter. 2001. Can medial temporal lobe regions distinguish true from false? An event-related functional MRI study of veridical and illusory recognition memory. Proceedings of the National Academy of Sciences 98:4805-4810.
Carter, C.S., and V. van Veen. 2007. Anterior cingulate cortex and conflict detection: an update of theory and data. Cognitive, Affective, & Behavioral Neuroscience 7:367-379.
D'Esposito, M., B.R. Postle, J. Jonides, and E.E. Smith. 1999. The neural substrate and temporal dynamics of interference effects in working memory as revealed by event-related functional MRI. Proceedings of the National Academy of Sciences 96:7514-7519.
Gonsalves, B.D., I. Kahn, T. Curran, K.A. Norman, and A.D. Wagner. 2005. Memory strength and repetition suppression: Multimodal imaging and medial temporal cortical contributions to recognition. Neuron 47:751-781.
Jonides, J., and D.E. Nee. 2006. Brain mechanisms of proactive interference in working memory. Neuroscience 139:181-193.
Kanwisher, N., and G. Yovel. 2006. The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society B: Biological Sciences 361:2109-2128.
Kim, H., L.H. Somerville, T. Johnstone, S. Polis, A.L. Alexander, L.M. Shin, and P.J. Whalen. 2004. Contextual modulation of amygdala responsivity to surprised faces. Journal of Cognitive Neuroscience 16:1730-1745.
Kozel, F.A., T.M. Padgett, and M.S. George. 2004. A replication study of the neural correlates of deception. Behavioral Neuroscience 118:852-856.
Langleben, D.D., J.W. Loughead, W.B. Bilker, K. Ruparel, A.R. Childress, S.J. Busch, and R.C. Gur. 2005. Telling truth from lie in individual subjects with fast event-related fMRI. Human Brain Mapping 26:262-272.
Levens, S.M., M. Saintilus, and E.A. Phelps. 2006. Prefrontal cortex mechanisms underlying the interaction between emotion and inhibitory processes in working memory. Cognitive Neuroscience Society, San Francisco, CA.
Levens, S.M., and E.A. Phelps. 2008. Emotion processing effects on interference resolution in working memory. Emotion 8:267-280.
MacLeod, C.M. 1991. Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin 109:163-203.
Milham, M.P., M.T. Banich, E.D. Claus, and N.J. Cohen. 2003. Practice-related effects demonstrate complementary roles of anterior cingulate and prefrontal cortices in attentional control. NeuroImage 18:483-493.
CHAPTER 4
STEPHEN J. MORSE
INTRODUCTION
Law must answer two types of general questions: 1) What legal rules should govern human interaction in a particular context? and 2) How should an individual case be decided? Scientific information, including findings from the new neurosciences, can be relevant both to policy choices and to individual adjudication. Most legal criteria are behavioral, however, including, broadly speaking, actions and mental states, and it could not be otherwise. The goal of law is to help guide and order the interactions between acting persons. Consider criminal responsibility, the legal issue to which neuroscience is considered most relevant. Criminal prohibitions all concern culpable actions or omissions and are addressed to potentially rational persons, not to brains. Brains do not commit crimes. Acting people do. We do not blame and punish brains. We blame and punish persons if they culpably violate a legal prohibition that society has enacted. All legally relevant evidence, whether addressed to a policy choice or to individual adjudication, must therefore concern behavior entirely or in large measure.
Behavioral evidence will almost always be more legally useful and probative than neuroscientific information. If no conflict exists between the two types of evidence, the neuroscience will be only cumulative and perhaps superfluous. If conflict does exist between behavioral and neuroscientific information, the strong presumption must be that the behavioral evidence trumps the neuroscience. Actions speak louder than images. If the behavioral evidence is unclear, however, but the neuroscience is valid and has legally relevant implications, then the neuroscience may tip the decision-making balance. The question is whether neuroscientific (or any other) evidence is legally relevant; that is, whether it genuinely and precisely helps answer a question the law asks.
Consider the following examples of both types of questions, beginning with a general legal rule. Should adolescents who culpably commit capital murder when they are sixteen or seventeen years old qualify for imposition
1. The title of this paper is a precise copy of the title of an article by Apoorva Mandavilli that appeared in Nature in 2006.
Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 101 of 144
FALSE STARTS
ever, they stopped calculating and refused to take an innocent life even to save
net lives. If the study is valid in real-world conditions, it suggests that people
with "normal" brains do not consequentially calculate under some conditions
and perhaps it suggests that socializing them to do so might be difficult. But
this finding does not necessarily mean that people cannot be socialized to calculate if we thought that such consequential calculation was desirable.
The new neuroscience joins a long list of contenders for a fully causal, scientific explanation of human behavior, ranging from sociological to psychological to biological theories. Such explanations are thought to be threats to the law's conception of the person and responsibility. Neuroscience concerns the brain, the biological source of our humanity, personhood, and sense of self, and it seems to render the challenge to the legal concept of the person more credible. The challenge arises in two forms. The first does not deny that we are the types of creatures we think we are, but it simply assumes that responsibility and all that it implies are impossible if determinism is true. This is a familiar claim. The second challenge denies that we are the type of creatures we think we are and that is presupposed by law and morality. This is a new and potentially radical claim. Neither succeeds at present, however.
The dispute about whether responsibility is possible in a deterministic world has been ongoing for millennia, and no resolution is in sight (Kane 2005). No uncontroversial definition of determinism has been advanced, and we will never be able to confirm that it is true or not. As a working definition, however, let us assume, roughly, that all events have causes that operate according to the physical laws of the universe and that they were themselves caused by those same laws operating on prior states of the universe in a continuous thread of causation going back to the first state. Even if this is too strong, the universe seems sufficiently regular and lawful that rationality demands that we adopt the hypothesis that universal causation is approximately correct. The English philosopher, Galen Strawson, calls this the "reality constraint" (Strawson 1989). If determinism is true, the people we are and the actions we perform have been caused by a chain of causation over which we mostly had no rational control and for which we could not possibly be responsible. We do not have contra-causal freedom. How can responsibility be possible for action or for anything else in such a universe? How can it be rational and fair for civil and criminal law to hold anyone accountable for anything, including blaming and punishing people because they allegedly deserve to be blamed and punished?
Three common positions are taken in response to this conundrum: metaphysical libertarianism, hard determinism, and compatibilism. Libertarians believe that human beings possess a unique kind of freedom of will and action according to which they are "agent originators" or have "contra-causal freedom." In short, they are not determined and effectively able to act uncaused by anything other than themselves (although they are of course influenced by their time and place and can only act on opportunities that exist then). The buck stops with them. Many people believe that libertarianism is a foundational assumption for law. They believe that responsibility is possible only if we genuinely possess contra-causal freedom. Thus, if we do not have this extraordinary capacity, they fear that many legal doctrines and practices, especially those relating to responsibility, may be entirely incoherent. Nonetheless, metaphysical libertarianism is not a necessary support for current responsibility doctrines and practices. All doctrines of criminal and civil law are fully consistent with the truth of determinism (Morse 2007). Moreover, only a small number of philosophers and scientists believe that human beings possess libertarian freedom of action and will, which has been termed a "panicky" metaphysics (Strawson 1982) because it is so implausible (Bok 1998).
Hard determinists believe that determinism is true and is incompatible with responsibility. Compatibilists also believe that determinism is true but claim that it is compatible with responsibility. For either type of determinist, biological causes, including those arising from the brain, pose no new or more powerful general metaphysical challenge to responsibility than nonbiological or social causes. As a conceptual and empirical matter, we do not necessarily have more control over psychological or social causal variables than over biological causal variables. More important, in a world of universal causation or determinism, biological causation creates no greater threat to our life hopes than psychological or social causation. For purposes of the metaphysical free will debate, a cause is just a cause, whether it is neurological, genetic, psychological, sociological, or astrological. Neuroscience is simply the newest "bogey" in a dispute about the general possibility of responsibility that has been ongoing for millennia. It certainly is more scientifically respectable than earlier bogeys, such as astrology and psychoanalysis, and it certainly produces compelling representations of the brain (although these graphics are almost always misleading to those who do not understand how they are constructed). But neuroscience evidence for causation does no more work in the general free will/responsibility debate than other kinds of causal evidence.
Hard determinism does not try either to explain or to justify our responsibility concepts and practices; it simply assumes that genuine responsibility is metaphysically unjustified. For example, a central hard determinist argument is that people can be responsible only if they could have acted otherwise than they did, and if determinism is true, they could not have acted other than they did (Wallace 1994). Consequently, the hard determinist claims that even if an internally coherent account of responsibility and related practices can be given, it will be a superficial basis for responsibility, which is allegedly only an illusion (Smilansky 2000). There is no "real" or "ultimate" responsibility. Hard determinists concede that Western systems of law and morality hold some people accountable and excuse others, but the hard determinist argues that these systems have no justifiable basis for distinguishing genuinely responsible from nonresponsible people. Hard determinists sometimes accept responsibility ascriptions because doing so may have good consequences, but they still deny that people are genuinely responsible and robustly deserve praise and blame and reward and punishment.

Hard determinism thus provides an external critique of responsibility. If determinism is true and is genuinely inconsistent with responsibility, then no one can ever be really responsible for anything and desert-based responsibility attributions cannot properly justify further action. The question, then, is whether as rational agents we must swallow our pride, accept hard determinism because it is so self-evidently true, and somehow transform the legal system and our moral practices accordingly.
Compatibilists, who agree with hard determinists that determinism is true, have three basic answers to the incompatibilist challenge. First, they claim that responsibility attributions and related practices are human activities constructed by us for good reason and that they need not conform to any ultimate metaphysical facts about genuine or "ultimate" responsibility. Indeed, some compatibilists deny that conforming to ultimate metaphysical facts is even a coherent goal in this context. Second, compatibilism holds that our positive doctrines of responsibility are fully consistent with determinism. Third, compatibilists believe that our responsibility doctrines and practices are normatively desirable and consistent with moral, legal, and political theories that we firmly embrace. The first claim is theoretical; the third is primarily normative. Powerful arguments have been advanced for the first and third claims (Lenman 2006; Morse 2004). For the present purpose, however, which is addressed to whether free will is really foundational for law, the second claim is the most important.
The capacity for rationality is the primary responsibility criterion, and its
lack is the primary excusing condition. Human beings have different capacities for rationality in general and in specific contexts. For example, young children in general have less developed rational capacity than adults. Rationality differences also differentially affect agents' capacity to grasp and to be
guided by good reason. Differences in rational capacity and its effects are real
even if determinism is true. Compulsion is also an excusing condition, but it
is simply true that some people act in response to external or internal hard
choice threats to which persons of reasonable firmness might yield, and most
people most of the time are not in such situations when they act. This is true
even if determinism is true and even if people could not have acted otherwise.
Consider the doctrines of criminal responsibility. Assume that the defendant has caused a prohibited harm. Prima facie responsibility requires that the defendant's behavior was performed with a requisite mental state. Some bodily movements are intentional and performed in a state of reasonably integrated consciousness. Some are not. Some defendants possess the requisite mental state, the intent to cause a prohibited harm such as death. Some do not. The truth of determinism does not entail that actions are indistinguishable from nonactions or that different mental states do not accompany action. These facts are true and make a perfectly rational legal difference even if determinism is true. Determinism is fully consistent with prima facie guilt and innocence.

Now consider the affirmative defenses of insanity and duress. Some people with a mental disorder do not know right from wrong. Others do. In cases of potential duress, some people face a hard choice that a person of reasonable firmness would yield to. These differences make perfect sense according to dominant retributive and consequential theories of punishment. A causal account can explain how these variations were caused, but it does not mean that these variations do not exist. Determinism is fully consistent with both the presence and absence of affirmative defenses. In sum, the legal criteria used to identify which defendants are criminally responsible map onto real behavioral differences that justify differential legal responses.
In their widely noted paper, Joshua Greene and Jonathan Cohen (2004) take issue with the foregoing account of the positive foundations of legal responsibility. They suggest that despite the law's official position, most people hold a dualistic, libertarian view of the necessary conditions for responsibility because "vivid scientific information about the causes of criminal behavior leads people to doubt certain individuals' capacity for moral and legal responsibility" (Greene and Cohen 2004, p. 1776). To prove their point, they use the hypothetical of "Mr. Puppet," a person who has been genetically and environmentally engineered to be a specific type of person. Greene and Cohen correctly point out that Mr. Puppet is really no different from an identical person I call Mr. Puppet2, who became the same sort of person without intentional intervention. Yet most people might believe that Mr. Puppet is not responsible. If so, however, should Mr. Puppet2 also not be responsible? After all, everyone is a product of a gene/environment interaction. But would it not then follow, as Greene and Cohen claim, that no one is responsible?
Greene and Cohen are correct about ordinary people's intuitions, but people make the fundamental psycholegal error (Morse 1994) all the time. That is, they hold the erroneous but persistent belief that causation is per se an excusing condition. This is a sociological observation and not a justification for thinking causation or determinism does or should excuse behavior. Whether the cause for behavior is biological, psychological, sociological, or astrological, or some frothy brew of all of these does not matter. In a causal universe, all behavior is presumably caused by its necessary and sufficient causes. A cause is just a cause. If causation excused behavior, no one could ever be responsible. Our law and morality do hold some people responsible and excuse others. Thus causation per se cannot be an excusing condition, no matter how much explanatory and predictive power a cause or set of causes for a particular behavior might have. The view that causation excuses per se is inconsistent with our positive doctrines and practices. Moreover, if Mr. Puppet and Mr. Puppet2 are both rational agents, the argument I have provided suggests that they are both justifiably held responsible. The lure of purely mechanistic thinking about behavior when causes are discovered is powerful but should be resisted.
At present, the law's "official" position about persons, action, and responsibility is justified unless and until neuroscience or any other discipline demonstrates convincingly that we are not the sorts of creatures we and the law think we are (conscious and intentional creatures who act for reasons that play a causal role in our behavior) and thus that the foundational facts for responsibility ascriptions are mistaken. If it is true, for example, that we are all automata, then no one is an agent, no one is acting and, therefore, no one can be responsible for action. But none of the stunning discoveries in the neurosciences or their determinist implications have yet begun to justify the belief that we are radically mistaken about ourselves. Let us therefore return to the proper understanding of the relation between neuroscience and law, again using criminal responsibility as the most powerful example.
The criteria for legal excuse and mitigation, like all legal criteria, are behavioral, including mental states. For example, lack of rational capacity is a generic excusing condition, which explains why young children and some people with mental disorder or dementia may be excused if they commit crimes. For another example, as Justice Oliver Wendell Holmes wrote long ago, "Even a dog distinguishes between being stumbled over and being kicked." Mental states matter to our responsibility for action. Take the insanity defense, for example, which excuses some people with mental disorder who commit crimes. The defendant will not be excused simply because he or she is suffering from mental disorder, no matter how severe it is. The defendant will not be excused simply because disordered thinking affected the defendant's reasons for action. Rather, the mental disorder must produce substantial lack of rational capacity concerning the criminal behavior in question. All insanity defense tests are primarily rationality tests. Lack of rational capacity is doing the excusing work.
Mental disorder that plays a role in explaining the defendant's behavior may paradoxically not have any effect on responsibility at all. Imagine a clinically hypomanic businessperson who, as a result of her clinical state, has really high attention, energy, and the like, and who makes a contract while in that state. If the deal turns out to be less advantageous than she thought, the law will not allow her to avoid that contract even though she made it under the influence of her mood disorder. Why? Because the businessperson was perfectly rational when she made the contract. Indeed, her hypomania might have made her "hyper-rational." Here is another example from criminal law. Imagine a person with paranoia who is constantly scanning his environment for signs of impending danger. Because the person is hypervigilant, he identifies a genuine and deadly threat to his life that ordinary people would not have perceived. If the person acts in self-defense, he is fully rational and his behavior would be justified. In this case, again, the mental abnormality made the agent "hyper-rational" in the circumstances.
Potentially legally relevant neuroimaging studies attempt to correlate brain activity with behavior, including mental states. In other words, legally relevant neuroscience must begin with behavior. We seek brain images associated with behaviors that we have already identified on normative, moral, political, and social grounds as important to us. For example, we recognize that adolescents behave differently from adults. They appear to be more impulsive and peer-oriented. They appear, on average, to be less fully rational than adults. These differences seemingly should make a moral and legal difference concerning, for example, criminal responsibility or the age at which people can drink or make independent health-care decisions. These differences also make us wonder if, in part, neuroanatomical or neurophysiological causal explanations might exist for the behavioral differences already identified as important to us.
Indeed, there is a parallel between the use of neuroscience for legal purposes and the development of cognitive neuroscience itself. Psychology does and must precede neuroscience when human behavior is in question (Hatfield, 2000).2 Brain operations can be divided into various localities and subfunctions. The investigation of these constitutes the field of neuroscience. Some of the functions the brain implements are mental functions, such as perception, attention, memory, emotions, and planning. Psychology is broadly defined as the experimental science that directly studies mental functions. Therefore, psychology is the primary discipline investigating a major subset of brain functioning, including those functions that make us most distinctly human. These are also the types of functions that are therefore most relevant to law, because law is a human construction that is meant to help order human interaction. On occasion, inferring function from structure or physiology might be possible. In most cases, however, general knowledge or conjecture about function guides the investigation of structure and physiology. This will be especially true as we move "from the outside in." That is, it will be especially true as we study complex, intentional human behavior as opposed to, say, the perceptual apparatus. Lastly, therefore, psychology is the royal road to brain science in those areas that make us most distinctly human and that are most relevant to law.
When we evaluate what might be legally relevant brain science, we will be limited by the validity of the psychology upon which the brain science is based. As most (indeed, as all) honest neuroscientists and psychologists will admit, we wish that our psychological constructs and theories were better than they are. Thus, the legal helpfulness of neuroscience is limited.
Despite the limitations just described, neuroscience can sometimes be of assistance in helping us decide what a general rule should be and in adjudicating individual cases. Identifying brain correlates of legally relevant criteria is seldom necessary, or even helpful, when we are trying to define a legal standard if the behavioral difference is already clear. If the behavioral difference is not clear, then the neuroscience does not help, because the neuroscience must always begin with a behavior or behavioral difference that we have already identified as important.
For example, we have known that the rational capacity of adolescents is different from adults. Juvenile courts have existed for over a hundred years, well before anyone thought about neuroimaging the adolescent brain. The common law treated juveniles differently from adults for hundreds of years before we had any sense of neuroscience. People had to be of a certain age to vote, to drink, to join the army, and to be criminally responsible long before anyone envisioned functional magnetic resonance imaging (fMRI). If the rational capacity difference between adults and adolescents was less clear, then neuroscience could not tell us whether to treat adolescents differently, even if we believed that rationality made a difference. Whether adolescents are sufficiently different from adults so that they should be treated legally

2. What follows, in which I draw a parallel between the use of neuroscience for legal purposes and the development of cognitive neuroscience itself, borrows from and liberally paraphrases an excellent article by Hatfield (2000). What I will suggest is not meant to be critical or dismissive of neuroscience or of any other science. Indeed, I firmly believe that most neuroscience is genuinely excellent science. Nonetheless, much as legally relevant neuroscience must begin with identification of the behavior that is normatively relevant, so psychology does and conceptually must precede neuroscience.
responsible people just "lose" it. If the behavioral history had been more problematic, however, then the potential effect of the cyst might well have caused us more readily to believe that he did suffer from a rationality defect. Similarly, in a case in which intelligence is clearly relevant, if the real-world evidence about intelligence is ambiguous, scientific tests of intelligence, whether psychological or neuroscientific, would surely be helpful.
I have so far been arguing for a cautious, somewhat deflationary stance toward
the potential of neuroscience to help us decide general and specific legal
issues, but I do not mean to suggest I am a radical skeptic, cynic, or the like.
I am not. I do not know what science is going to discover tomorrow. In the
future we might well find grounds for greater optimism about broader legal
relevance. But we have to be extraordinarily sensitive to the limits of what
neuroscience can contribute to the law at present.
How should the law proceed? What is the danger of images? The power
of images to persuade might be greater than their legal validity warrants. First,
images might not be legally relevant at all. We must always carefully ask what
precise legal question is under consideration and then ask whether the image
or any other neuroscience (or other type of) evidence actually helps us answer
this precise question. The image might indicate something interesting, and it
might be a vivid, compelling representation, but does it precisely answer our
legal question?
Second, once naive subjects, such as average legislators, judges, and jurors,
see images of the brain that appear correlated to the behavior in question, they
tend to fall into the trap that I call the "lure of mechanism" or to make the
fundamental psycholegal error discussed previously. That is, they tend to
believe that causation is an excuse, especially if the brain seems to play a causal
role. In a wonderful recent study (Knutson et al. 2007), researchers were able
to predict with astonishing accuracy, depending on what part of the brain was
active, whether the subject would or would not make a choice to buy a consumer
item. The title of John Tierney's article reporting on the study in the
New York Times, an excellent example of an educated layperson's response,
asked, "The Voices in My Head Say 'Buy It!' Why Argue?" Tierney concluded,
"You might remove the pleasure of shopping by somehow dulling the
brain's dopamine receptors ... but try getting anyone to stay on that
medication. Better the occasional jolt of pain. Charge it to the insula." (Tierney
2007). Note the implication of mechanism: When you shop, you are not an
acting agent but are at the mercy of your brain anatomy and physiology. You,
the acting agent, the shopper, did not decide whether to buy. Your brain did.
We are just brains in a mall. But we must resist the lure of mechanism. Brains
do not shop; people do.
Given all this, should we exclude imaging evidence from the courtroom, or
should we, as we commonly do in the law, admit the evidence and trust
cross-examination to expose its strengths and weaknesses? In other words, is the
proper question the weight of such evidence or whether it should be admitted
at all? The answer should depend on the relevance and strength of the
science. If the legal relevance of the science is established and the science is
quite good, my preference would be to admit the evidence and let the experts
dispute its worth before the neutral adjudicator, the judge or the jury. But
two criteria must be met first: The science must be both legally relevant and
sound.
CHAPTER 5
Getting scientists and lawyers to communicate with each other is not easy.
Getting them to talk is easy, but communication requires mutual understanding,
and that is a challenge.
Scientists and lawyers live in different cultures with different goals. Courts
and lawyers aim at decisions, so they thrive on dichotomies. They need to
determine whether defendants are guilty or not, liable or not, competent or
not, adult or not, insane or not, and so on. Many legal standards implicitly
recognize continua, such as when prediction standards for various forms of
civil and criminal commitment speak of what is "highly likely" or "substantially
likely," but in the end courts still need to decide whether the probability
is or is not high enough for a certain kind of commitment. The legal system,
thus, depends on on-off switches. This generalization holds for courts,
and much of the legal world revolves around court decisions.
Nature does not work that way. Scientists discover continuous probabilities
on multiple dimensions.1 An oculist, for example, could find that a patient
is able to discriminate some colors but not others to varying levels of accuracy
in various circumstances. The same patient might be somewhat better than
average at seeing objects far away in bright light but somewhat worse than
average at detecting details nearby or in dim light. Given so many variations in
vision, if a precise scientist were asked, "Is this particular person's vision good?"
he or she could respond only with "It's good to this extent in these ways."
The legal system then needs to determine whether this patient's vision is
good enough for a license to drive. That is a policy question. To answer it,
lawmakers need to determine whether society can live with the number of
accidents that are likely to occur if people with that level of vision get driver's
licenses. The answer can be different for licenses to drive a car or a school bus
or to pilot a plane, but in all such cases the law needs to draw lines on the
continua that scientists discover.
Case 1:07-cr-10074-JPM-tmp Document 168-1 Filed 02/19/10 Page 113 of 144
The story remains the same for mental illness. Modern psychiatrists find
large clusters of symptoms that vary continuously along four main dimensions.2
Individual patients are more or less likely to engage in various kinds
of behaviors within varying times in varying circumstances. For therapeutic
purposes, psychiatrists need to locate each client on the distinct dimensions,
but they do not need to label any client simply as insane or not.
Psychiatrists also need not use the terms "sane" and "insane" when they
testify in trials involving an insanity defense. One example among many is the
Model Penal Code test, which holds that a defendant can be found not guilty
by reason of insanity if he lacks substantial capacity to appreciate the
wrongfulness of his conduct or to conform his conduct to the requirements of the
law. This test cannot be applied with scientific techniques alone. If a defendant
gives correct answers on a questionnaire about what is morally right and wrong
but shows no skin conductance response or activity in the limbic system while
giving these answers, does that individual really "appreciate" wrongfulness?
And does this defendant have a "capacity" to appreciate wrongfulness if he
does appreciate it in some circumstances but not others? And when is that
capacity "substantial"? Questions like these drive scientists crazy.
These questions mark the spot where science ends and policy decisions
begin. Lawyers and judges can recognize the scientific dimensions and
continua, but they still need to draw lines in order to serve their own purposes
in reaching decisions. How do they draw a line? They pick a vague area and a
terminology that can be located well enough in practice and that captures
enough of the right cases for society to tolerate the consequences. Where
lawmakers draw the line depends both on their predictions and on their values.
Courts have long recognized that the resulting legal questions can be
confusing to psychiatrists and other scientists because their training lies
elsewhere. Scientists have no special expertise on legal or policy issues. That is
why courts in the past usually did not allow psychiatrists to testify on ultimate
legal issues in trials following pleas of not guilty by reason of insanity. This
restriction recently was removed in federal courts, but there is wisdom in the
old ways, when scientists gave their diagnoses in their own scientific terms
and left legal decisions to legal experts. In that system, scientists determine
which dimensions are predictive and where a particular defendant lies on those
continua. Lawyers then argue about whether that point is above or below
the legal cutoff that was determined by judges or legislators using policy
considerations. That system works fine as long as the players stick to their
assigned roles.
This general picture applies not just to optometry and psychiatry but to
other interactions between science and law, including neural lie detection.
Brain scientists can develop neural methods of lie detection and then test
their error rates. Scientists can also determine how much these error rates
vary with circumstances, because some methods are bound to work much
better in the lab than during a real trial. However, these scientists have no
special expertise on the question of whether those error rates are too high to
Although scientists can determine error rates for methods of lie detection,
the issue is not so simple. For a given method in given circumstances,
scientists distinguish two kinds of errors. The first kind of error is a false positive
(or false alarm), which occurs when the test says that a person is lying but he
or she really is not lying. The second kind of error is a false negative (or a
miss), which occurs when the test says that a person is not lying but he or
she really is lying. The rate of false positives determines the test's specificity,
whereas the rate of false negatives determines the test's sensitivity.
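In the terms just defined, a test's two error rates map directly onto its specificity and sensitivity. A minimal sketch, using hypothetical error rates rather than figures taken from any particular study:

```python
def error_profile(false_positive_rate, false_negative_rate):
    """Translate a lie-detection test's two error rates into the
    specificity/sensitivity terms used in the text.

    Specificity: the chance an honest subject is correctly cleared.
    Sensitivity: the chance a lying subject is correctly flagged.
    """
    return {
        "specificity": 1.0 - false_positive_rate,
        "sensitivity": 1.0 - false_negative_rate,
    }

# Hypothetical test that falsely accuses 31 percent of honest subjects
# and misses 16 percent of lying subjects.
profile = error_profile(false_positive_rate=0.31, false_negative_rate=0.16)
print(round(profile["specificity"], 2), round(profile["sensitivity"], 2))
```

The 31 percent and 16 percent figures echo the study discussed in the next paragraph, but which published figure corresponds to which kind of error is an assumption here, made only for illustration.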
These two error rates can differ widely. For example, elsewhere in this
volume Nancy Kanwisher cites a study of one method of neural lie detection
where one of the error rates was 31 percent and the other was only 16 percent.
The error rate was almost twice as high in one direction as in the
other. When error rates differ by so much, lawmakers need to consider each
rate separately. Different kinds of errors create different problems in different
circumstances. Lawmakers need to decide which error rate is the one that
matters for each particular use of neural lie detection.
Compare three legal contexts: In the first, a prosecutor asks the judge to
let him use neural lie-detection techniques on a defense witness who has
provided a crucial alibi for the defendant. The prosecutor thinks that this defense
witness is lying. Here the rate of false positives matters much more than the
rate of false negatives, because a false positive might send an innocent person
to prison, and courts are and should be more worried about convicting the
innocent than about failing to convict the guilty.
In contrast, suppose a defendant knows that he is innocent, but the trial
is going against him, largely because one witness claims to have seen the
defendant running away from the scene of the crime. The defendant knows that
this witness is lying, so his lawyer asks the judge to let him use neural
lie-detection techniques on the accusing witness. Here the rate of false negatives
matters more than the rate of false positives because a false negative is what
might send an innocent defendant to prison.
Third, imagine that the defense asks the judge to allow as evidence the
results of neural lie detection on the accused when he says that he did not
commit the crime. Here the rate of false positives is irrelevant because the
defendant would not submit this evidence if the results were positive for lying.
Overall, then, should courts allow neural lie detection? If the rates of false
positives and false negatives turn out to differ widely (as I suspect they will),
then the values of the system might best be served by allowing some uses in
some contexts but forbidding other uses in other contexts. The legal system
might not allow prosecutors to force any witness to undergo lie detection, but
it still might allow prosecutors to use lie detection on some willing witnesses.
Or the law might not allow prosecutors to use lie detection at all, but it still
might allow defense attorneys to use lie detection on any witness or only on
willing or friendly witnesses. If not even those uses are allowed, then the rules
of evidence deprive the defense of a tool that, while flawed, could create a
reasonable doubt, which is all the defense needs. If the intent is to ensure
that innocent people are not convicted and if the defense volunteers to take
the chance, then why the law should categorically prohibit this imperfect tool
is unclear.
That judges would endorse such a bifurcated system of evidence is doubtful,
although why is not clear. Some such system might turn out to be optimal
if great differences exist between the rates of false negatives and false positives
and also between the disvalues of convicting the innocent and failing to
convict the guilty. Doctors often distinguish false positives from false negatives
and use tests in some cases but not others, so why should courts not do
the same? At least this question is worth thinking about.
BASE RATES
A more general problem, however, suggests that courts should not allow any
neural lie detection. When scientists know the rates of false positives and false
negatives for a test, they usually apply Bayes's theorem to calculate the test's
positive predictive value, which is the probability that a person is lying, given
a positive test result. This calculation cannot be performed without using a
base rate (or prior probability). The base rate has a tremendous effect on the
result. If the base rate is low, then the predictive value is going to be low as
well, even if the rates of false negatives and of false positives seem reasonable.
This need for a base rate makes such Bayesian calculations especially
problematic in legal uses of lie detection (neural or not). In lab studies the nature
of the task or the instructions to subjects usually determines the base rate.3
However, determining the base rate of lying in legal contexts is much more
difficult.
3. For more on this, see Nancy Kanwisher's paper elsewhere in this volume.
Imagine that for a certain trial everyone in society were asked, "Did you
commit this crime?" Those who answered "Yes" would be confessing, so
almost everyone, including the defendant, would answer "No." Only the person
who was guilty would be lying. Thus, the base rate of lying in the general
population for this particular question is extremely low. Hence, given Bayes's
theorem, the test of lying might seem to have a low predictive value.
However, this is not the right way to calculate the probability. What really
needs to be known is the probability that someone is lying, given that this
person is a defendant in a trial. How can that base rate be determined? One
way is to gather conviction rates and conclude that most defendants are guilty,
so most of them are lying when they deny their guilt. With this assumption,
the base rate of lying is high, so Bayes's theorem yields a high predictive
value for a method of lie detection with low enough rates of false negatives
and false positives. However, this assumption that most defendants are
guilty violates important legal norms. Our laws require us to presume that
each defendant is innocent until proven guilty. Thus, if a defendant is asked
whether he did it and he answers, "No," then our judicial system is legally
required to presume that he is not lying. The system should not, then, depend
on any calculation that assumes guilt or even a high probability of guilt. But
without some such assumption, one cannot justify a high enough base rate to
calculate a high predictive value for any method of neural lie detection of
defendants who deny their guilt.
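The base-rate argument above can be made concrete with a short Bayes's-theorem calculation. The sensitivity, specificity, and base rates below are hypothetical numbers chosen only to show how strongly the prior probability of lying drives the test's positive predictive value:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """P(subject is lying | test says lying), via Bayes's theorem.

    base_rate is the prior probability that a subject is lying.
    """
    true_positives = sensitivity * base_rate                    # lying and flagged
    false_positives = (1.0 - specificity) * (1.0 - base_rate)   # honest but flagged
    return true_positives / (true_positives + false_positives)

# The same hypothetical test (84% sensitive, 69% specific) applied at two
# priors: a lab task where half the subjects are instructed to lie, and a
# courtroom question on which the law presumes almost no one is lying.
for base_rate in (0.5, 0.01):
    ppv = positive_predictive_value(0.84, 0.69, base_rate)
    print(f"base rate {base_rate}: predictive value {ppv:.2f}")
```

With these assumed numbers, the predictive value falls from about 0.73 at the lab base rate to roughly 0.03 at the courtroom one, which is exactly the collapse the text describes.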
CONCLUSION
A crystal ball would be needed to conclude that neural lie detection has no
chance of ever working or of being fair in trials. But many details need to be
carefully worked out before such techniques should be allowed in courts.
Whether the crucial issues can be resolved remains to be seen, but the way to
resolve them is for scientists and lawyers to learn to work together and
communicate with each other.
CHAPTER 6
The detection of truth is at the heart of the legal process.1 The purpose of a
trial, in particular, is to resolve the factual disputes on which a case turns, and
the trial culminates with the rendering of a "verdict," which in Latin means
"to state the truth."
Given the common tendency of witnesses to exaggerate, to embroider,
and, frankly, to lie,2 how does a fact finder determine the truth? Particularly
in the adversarial system of justice common to the Anglo-American tradition,
truth is revealed, and lying detected, by exposing witnesses to cross-examination;
that is, to tough questioning designed to test the consistency and credibility
of the witness's story. John Henry Wigmore, known to most lawyers
as the most profound expositor of the rules of evidence in the history of law,
famously described cross-examination as "the greatest legal engine ever invented
for the discovery of truth."3 And while not everyone is specially trained
in the art of cross-examination, common experience, whether it be of parents
questioning children, of customers questioning salespersons, or of reporters
questioning politicians, suggests that nothing exposes fabrication like good
tough questioning.
But no one supposes that cross-examination is a perfect instrument for
detecting the truth. For that matter, there probably has never been a scientific
study of how effective cross-examination is in detecting lies, and I am
1. This article, based on remarks made at the American Academy of Arts and Sciences's
conference on February 2, 2007, presents the author's personal views and does not reflect how
he might decide any legal issue in any given case.
2. In my remarks on which this article is based, I estimated that 90 percent of all material trial
witnesses knowingly lie in some respect (though not always materially). This estimate is based
on my experience as a trial judge for the past twelve years and as a trial lawyer for the
twenty-five years previous to that and has no scientific study to back it up. Possibly I am too harsh:
perhaps the percentage of material witnesses who consciously lie in some respect is as low as,
say, 89 percent.
3. John Henry Wigmore, Evidence in Trials at Common Law (Chadbourn Edition, 1974),
Vol. 5, p. 32.
not even sure how such a study could be devised. In any event, people have
always sought some simpler, talismanic way of separating truth from falsehood
and, relatedly, of determining guilt or innocence.
Thus, in medieval times, an accused who protested his innocence was put
to such tests as the ordeal by water, in which he was bound and thrown in the
river. The accused who had falsely protested his innocence would be rejected
by the water and would float, whereas the honest accused would be received
by the water and immediately sink. Usually he was fished out before he
drowned; but, if the rescue came too late, he at least died in a state of
innocence. If one accepted the basic religious theories on which these tests were
premised, the tests were infallible.
As the Middle Ages gave way to the Renaissance, the faith-based methods
of the ordeals were replaced by a more up-to-date, empirical method for
determining the truth: torture. Although some men and women, in their
perversity, might deny the evil of which their accusers were certain they were
guilty, infliction of sufficient pain would lead them to admit, mirabile dictu,
exactly what the accusers had suspected.
After a while, however, it became increasingly obvious that torture was
leading to too many "false positives," and more accurate methods were sought.
In his treatise on evidence, Wigmore contends that cross-examination was
originally developed as an alternative to torture.4 Today, of course, it is
inconceivable that anyone would recommend the use of torture.
From the seventeenth century onward, cross-examination, for all its
limitations, seemed to be the best the legal system had to offer as a means of
determining the truth. In the late nineteenth century, every schoolchild knew
the story of how Abe Lincoln, in defending a man accused of murder, carefully
questioned the prosecution's eyewitness as to how he was able to see the
accused commit the murder in the dead of night and, when the witness said
it was because there was a full moon, produced an almanac showing that on
that night the moon was but a sliver. As Lincoln elsewhere said, "you can't
fool all of the people all of the time," at least not when someone is around
to ask the hard questions.
But the late nineteenth century also witnessed the growing belief that all
areas of inquiry would ultimately yield to the power of science. It was just a
matter of time before an allegedly "scientific" instrument for detecting lies was
invented, namely, the polygraph.
The truth is that the polygraph is not remotely scientific. The theory of
the polygraph, itself largely untested, is that someone who is consciously lying
feels anxiety, and that that anxiety, in turn, is manifested by an increase in
respiration rate, pulse rate, blood pressure, and sweating. Common experience
suggests many possible flaws in this theory: more practiced liars might
feel little anxiety about lying; taking a lie detector test might itself generate
anxiety; sweating, pulse rate, blood pressure, and respiration rate are commonly
affected by all sorts of other conditions, both external and internal;
and so forth. One might hypothesize, therefore, that polygraph tests, while
they might be better than pure chance in separating truth tellers from liars
4. Ibid., p. 32ff.
(after all, some people might fit the theory) would nevertheless have a high
rate of error. As Marcus E. Raichle discusses elsewhere in this volume, that is
precisely what the National Academies (NAS), which in 2002 reviewed the
evidence on polygraph reliability, concluded. The NAS also concluded that
polygraph testing has "weak scientific underpinnings"5 and that "belief in its
accuracy goes beyond what the evidence suggests."6
Not all experts agree. Reviewing the literature in 1998, the Supreme
Court of the United States concluded that "the scientific community remains
extremely polarized about the reliability of polygraph techniques," with some
studies concluding that polygraphs are no better than chance at detecting lies
and, at the other extreme, one study concluding that polygraph results are
accurate about 87 percent of the time. But even a 13 percent error rate is a
high number when you are dealing with something as important as determining
a witness's credibility, let alone determining whether he or she is guilty
or innocent of a crime.
Moreover, all these error-rate statistics are suspect because the scientific
community is nowhere close to agreeing on how one properly establishes the
base measure for determining the reliability of the polygraph. To devise an
experiment in which one set of subjects is told to lie and the other set of
subjects is told to tell the truth is one thing; to recreate the real-life conditions
that would allow for a true test of the polygraph is quite something else.
Whether any sound basis exists on which one can assert anything useful about
the reliability or unreliability of the polygraph is uncertain.
Courts, being conservative and skeptical by nature, have largely tended
to exclude polygraph evidence. But that has not stopped the government, the
military, some private industry, and much of the public generally from accepting
the polygraph as reliable, so great is the desire for a "magic bullet" that
can instantly distinguish truth from falsehood.
Even the courts, while excluding polygraph evidence from the courtroom,
have sometimes approved its use by the police on the cynical basis that it really
does not matter whether the polygraph actually detects lying, so long as
people believe that it does: if a subject believes that a polygraph actually works,
he or she will be motivated to tell the truth and "confess." The hypocrisy of
this argument is staggering: the argument, in effect, is that even if the truth
is that polygraph tests are, at best, error-prone, the police and other authorities
should lie to people and encourage them to believe that the tests are highly
accurate because this lie will encourage people to tell the truth.
Even on these terms, moreover, experience in my own courtroom suggests
that the use of polygraphs is much more likely to cause mischief, or worse,
than to be beneficial. Let me give just one example. The Millenium Hotel is
situated next to Ground Zero. A few weeks after the attack on the Twin Towers,
hotel employees were allowed back into the hotel to recover the belongings
of the guests who had had to flee the premises on September 11, and one
5. The National Academies, National Research Council, "Polygraph Testing Too Flawed for
Security Screening," October 8, 2002, p. 2.
6. The National Academies, National Research Council, Committee to Review the Scientific
Evidence on the Polygraph, The Polygraph and Lie Detection (2003), p. 7.
But the credentials of the scientists should not obscure the shakiness of
the science.7 A basic problem with both polygraphy and brain scanning to
detect lying is that no established standard exists for defining what willful
deception is, let alone how to establish a base measure against which the validity
and reliability of any lie-detection technique can be evaluated. What exists
at this point are imaging technologies that show us patterns of activity or
other events in the brain that are hypothesized to correlate with various mental
states. Not one of these hypotheses has been subjected to the kind of rigorous
testing that would establish its validity.
That, however, has not stopped several commercial enterprises from
offering brain scanning as a purportedly scientific lie-detection technique that
law enforcement agencies, private businesses, and even courts should utilize.
The mere fact that evidence is proffered by someone with scientific credentials
does not begin to satisfy the conditions for its admissibility in court.
In the case of the federal courts, the admissibility of expert testimony is
governed by Rule 702 of the Federal Rules of Evidence. Rule 702 provides that
If scientific, technical, or other specialized knowledge will assist the trier
of fact to understand the evidence or to determine a fact in issue, a witness
qualified as an expert by knowledge, skill, experience, training, or
education may testify thereto in the form of an opinion or otherwise, if
(1) the testimony is based upon sufficient facts or data, (2) the testimony
is the product of reliable principles and methods, and (3) the witness has
applied the principles and methods reliably to the facts of the case.
Though every case must be assessed on its individual merits, brain scanning
as a means of assessing credibility likely suffers from several defects that
would render such evidence inadmissible under Rule 702 as it has been
interpreted in the federal courts.
First, and perhaps most fundamentally, there is no commonly accepted
theory of how brain patterns evidence lying: in the absence of such a theory,
all that is being shown, at best, is the presence or absence of certain brain
patterns that allegedly correlate with some hypothesized accompaniment of
lying, such as anxiety. But no scientific evidence has shown either that lying is
always accompanied by anxiety or that anxiety cannot be caused by a dozen
other factors that cannot be factored out.
Second, the theories that have been proposed have not been put to the
test of falsifiability, which, if one accepts (as the Supreme Court does) a
Popper-like view of science, is the sine qua non of assessing scientific validity and
reliability.
Third, no standard way exists of defining what lying is, let alone how to
test for it. The law recognizes many kinds of lies, ranging from "white lies"
and "puffing" to affirmative misstatements, actionable half-truths, and material
omissions. Brain scans cannot yet come close to distinguishing between
these different kinds of lying. Yet the differences are crucial in almost any case:
a little white lie is altogether different, in the eyes of the law and of common
7. For a more expert discussion of the limitations of brain scanning as a truth-detection device,
see the articles by Elizabeth Phelps and Nancy Kanwisher elsewhere in this volume.
CHAPTER 7
Neuroscience-Based Lie
Detection: The Need for
Regulation
HENRY T. GREELY
In our lives and in our legal system we often are vitally interested in whether
someone is telling us the truth. Over the years, humans have used reputation,
body language, oaths, and even torture as lie detectors. In the twentieth
century, polygraphs and truth serum made bids for widespread use. The
twenty-first century is confronting yet another kind of lie detection, one based on
neuroscience and particularly on functional magnetic resonance imaging
(fMRI).
The possibility of effective lie detection raises a host of legal and ethical
questions. Evidentiary rules on scientific evidence, on probative value compared
with prejudicial effect, and, possibly, rules on character evidence would
be brought into play. Constitutional issues would be raised under at least the
Fourth, Fifth, and Sixth Amendments, as well as, perhaps, under a First
Amendment claim about a protected freedom of thought. Four U.S. Supreme Court
justices have already stated their view that even a perfectly effective lie detector
should not be admissible in court because it would unduly infringe the
province of the jury. And ethicist Paul Wolpe has argued that this kind of
intervention raises an entirely novel, and deeply unsettling, ethical issue about
privacy within one's own skull.1
These issues are fascinating and the temptation is strong to pursue them,
but we must not forget a crucial first question: does neuroscience-based lie
detection work and, if so, how well? This question has taken on particular
urgency as two, and possibly three, companies are already marketing fMRI-based
lie detection services in the United States.
1. Paul R. Wolpe, Kenneth R. Foster, and David D. Langleben, "Emerging Neurotechnologies
for Lie Detection: Promises and Perils," American Journal of Bioethics 5 (2) (2005): 39-49.
The deeper implications of effective lie detection are important but may prove a dangerous distraction from
the preliminary question of effectiveness. Their exploration may even lead
some readers to infer that neuroscience-based lie detection is ready for use.
It is not. And nonresearch use of neuroscience-based lie detection should
not be allowed until it has been proven safe and effective.2 This essay will
review briefly the state of the science concerning fMRI-based lie detection; it
will then describe the limited extent of regulation of this technology and will
end by arguing for a premarket approval system for regulating
neuroscience-based lie detection, similar to that used by the Food and Drug Administration
to regulate new drugs.
Arguably all lie detection, like all human cognitive behavior, has its roots in
neuroscience, but the term neuroscience-based lie detection describes newer
methods of lie detection that try to detect deception based on information
about activity in a subject's brain.
The most common and commonly used lie detector, the polygraph, does
not directly measure activity in the subject's brain. From its invention around
1920, the polygraph has measured physiological indications that are associated
with the mental state of anxiety: blood pressure, heart rate, breathing rate,
and galvanic skin response (sweating). When a subject shows higher levels of
these indicators, the polygraph examiner may infer that the subject is anxious
and further that the subject is lying. Typically, the examiner asks a subject a
series of yes or no questions while his physiological responses are being
monitored by the device. The questions may include irrelevant questions and
emotionally charged, "probable lie" control questions as well as relevant questions.
An irrelevant question might be "Is today Tuesday?" A probable lie question
would be "Have you ever stolen anything?" a question the subject might well
be tempted to answer "no," even though it is thought unlikely anyone could
truthfully deny ever having stolen anything. Another approach, the so-called
guilty knowledge test, asks the subject questions about, for example, a crime
scene the subject denies having seen. A subject who shows a stronger
physiological reaction to a correct statement about the crime scene than to an
incorrect statement may be viewed as lying about his or her lack of knowledge.
The result of a polygraph examination combines the physiological results
gathered by the machine with the examiner's assessment of the subject to draw
a conclusion about whether the subject answered particular questions honest
ly. The problems lie in the strength (or weakness) of the connection between
the physiological responses and anxiety, on the one hand, and both anxiety
and deception, on the other. Only if both connections are powerful can one
argue tllat the physiological reactions are strong evidence of deception.
2. This argument is made at great length in Henry T. Greely and Judy Illes, "Neuroscience-Based Lie Detection: The Urgent Need for Regulation," American Journal of Law & Medicine 33 (2007): 377-431.
times happens within one laboratory: Langleben's first two studies differed substantially in what regions correlated with deception.5
Third, only three of the twelve studies dealt with predicting deceptiveness by individuals.6 The other studies concluded that on average particular regions in the (pooled) brains of the subjects were statistically significantly likely to be activated (high ratio of oxygenated to deoxygenated hemoglobin) or deactivated (low ratio) when the subjects were lying. These group averages tell you nothing useful about the individuals being tested. A group of National Football League place kickers and defensive linemen could, on average, weigh 200 pounds when no single individual was within 80 pounds of that amount. The lie-detection results are not likely to be that stark, but before we can assess whether the method might be useful, we have to know how accurate it is in detecting deception by individuals: its specificity (lack of false positives) and sensitivity (lack of false negatives) are crucial. Only one of the Kozel articles and two of the Langleben articles discuss the accuracy of individual results.
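The specificity and sensitivity the essay calls crucial can be made concrete with a small worked example. The counts below are hypothetical, chosen only to illustrate the definitions; they are not drawn from the Kozel or Langleben studies:

```python
# Hypothetical confusion-matrix counts for a lie-detection test
# (illustrative numbers only; not from any published fMRI study).
true_positives = 18   # liars correctly flagged as lying
false_negatives = 2   # liars missed (called truthful)
true_negatives = 15   # truth-tellers correctly cleared
false_positives = 5   # truth-tellers wrongly flagged as lying

# Sensitivity: of all actual liars, what fraction does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all truth-tellers, what fraction does the test clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"specificity = {specificity:.2f}")  # 0.75
```

Note that a test can score well on one measure and poorly on the other: this hypothetical test catches 90 percent of liars yet wrongly flags a quarter of truth-tellers, a trade-off that matters greatly if such results were ever offered in a courtroom.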
Next is the question of the individuals tested. The largest of these studies involved thirty-one subjects;7 more of them looked at ten to fifteen. The two Langleben studies that looked at individual results were based on four subjects. For the most part, the subjects were disconcertingly homogeneous: young, healthy, and almost all right-handed. Langleben's studies, in particular, were limited to young, healthy, right-handed undergraduates at the University of Pennsylvania who were not using drugs. How well these results project to the rest of the population is unknown.
A fifth major problem is the artificiality of the experimental designs. People are recruited and give their informed consent to participate in a study of fMRI and deception. Typically, they are told to lie about something. In Langleben's three studies they were told to lie when they saw a particular playing card projected on the screen inside the scanner. In Kozel's work, perhaps the least artificial of the experiments, subjects were told to take either a ring or a watch from a room and then to say, in the scanner, that they had not taken either object. Note how different this is from a criminal suspect telling the police that he had not taken part in a drug deal, or, for that matter, from a dinner guest praising an overcooked dish. The experimental subjects are following orders to lie where nothing more rides on the outcome than (in some cases) a promised $50 bonus if they successfully deceive the researchers. We just do not know how well these methods would work in settings similar to those where lie detection would, in practice, be used.
8. Rupe v. Wood, 93 F.3d 1434 (9th Cir. 1996). See also Height v. State, 604 S.E.2d 796 (Ga. 2004).
Unlike the general Federal Rules of Evidence, the Military Rules of Evidence expressly forbid the admission of any polygraph evidence. Airman Scheffer, arguing that if the polygraph were good enough for the military police, it should be good enough for the court-martial, claimed that this rule, Rule 707, violated his Sixth Amendment right to present evidence in his own defense. The U.S. Court of Military Appeals agreed, but the U.S. Supreme Court did not and reversed. The Court, in an opinion written by Justice Thomas, held that the unreliability of the polygraph justified Rule 707, as did the potential for confusion, prejudice, and delay when using the polygraph.

Justice Thomas, joined by only three other justices (and so not creating a precedent), also wrote that even if the polygraph were extremely reliable, it could not be introduced in court, at least in jury trials. This, he said, was because it too greatly undercut "the jury's core function of making credibility determinations in criminal trials."
Scheffer is a useful reminder that lie detection, whether by polygraph, fMRI, or any other technical method, will have to face not only limits on scientific evidence but other concerns. Under Federal Rule of Evidence 403 (and equivalent state rules), the admission of any evidence is subject to the court's determination that its probative value outweighs its costs in prejudice, confusion, or time. Given the possible damning effect on the jury of a fancy high-tech conclusion that a witness is a liar, Rule 403 might well hold back all but the most accurate lie detection. Other rules involving character testimony might also come into play, particularly if a witness wants to introduce lie-detection evidence to prove that he or she is telling the truth. In Canada, for example, polygraph evidence is excluded not because it is unreliable but because it violates an old common law evidentiary rule against "oath helping" (R. v. Beland, 2 S.C.R. 398 [1987]). While nonjudicial use of fMRI-based lie detection is almost unregulated, the courtroom use of fMRI-based lie detection will face special difficulties. The judicial system should be the model for the rest of society. We should not allow any uses of fMRI-based (or other neuroscience-based) lie detection until it is proven sufficiently safe and effective.
Effective lie detection could transform society, particularly the legal system. Although fMRI-based lie detection is clearly not ready for nonresearch uses today, I am genuinely agnostic about its value in ten years (or twenty years, or even five years). It seems plausible to me that some patterns of brain activation will prove to be powerfully effective at distinguishing truth from lies, at least in some situations and with some people. (The potential for undetectable countermeasures is responsible for much of my uncertainty about the future power of neuroscience-based lie detection.)

Of course, "transform" does not have a normative direction; society could be transformed in ways good, bad, or (most likely) mixed. Should we develop effective lie detection, we will need to decide how, and under what circumstances, we want it to be usable, in effect rethinking EPPA in hundreds
CONCLUSION
Lie detection is just one of the many ways in which the revolution in neuroscience seems likely to change our world. Nothing is as important to us, as humans, as our brains. Further and more-detailed knowledge about how those brains work, properly and improperly, is coming and will necessarily change our medicine, our law, our families, and our day-to-day lives. We cannot anticipate all the benefits or all the risks this revolution will bring us, but we can be alert for examples as (or, better, just before) they arise and then do our best to use them in ways that will make our world better, not worse.
CONTRIBUTORS
Emilio Bizzi is President of the American Academy of Arts and Sciences and Institute Professor at the Massachusetts Institute of Technology, where he has been a member of the faculty since 1969. He is a neuroscientist whose research focuses on movement control and the neural substrate for motor learning. He is also a Fellow of the American Academy of Arts and Sciences, a Fellow of the National Academy of Sciences, a member of the Institute of Medicine, and a member of the Accademia Nazionale dei Lincei. He was awarded the President of Italy's Gold Medal for Scientific Contributions.
Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and Professor, by courtesy, of Genetics at Stanford University. He specializes in legal and social issues arising from advances in the biosciences. He chairs the California Advisory Committee on Human Stem Cell Research and directs the Stanford Center for Law and the Biosciences. He is a member of the executive committee of the Neuroethics Society and is a co-director of the Law and Neuroscience Project. He graduated from Stanford University in 1974 and from Yale Law School in 1977. He served as a law clerk for Judge John Minor Wisdom of the United States Court of Appeals and for Justice Potter Stewart of the United States Supreme Court. He began teaching at Stanford in 1985.
Jed S. Rakoff is a United States District Judge for the Southern District of New York. He also serves on the Governance Board of the MacArthur Foundation Project on Law and Neuroscience, and on the National Academies' Committee to Prepare the Third Edition of the Federal Judges' Manual on Scientific Evidence. He has a B.A. from Swarthmore College, an M.Phil. from Oxford University, and a J.D. from Harvard Law School.
The Academy was founded during the American Revolution by John Adams, James Bowdoin, John Hancock, and other leaders who contributed prominently to the establishment of the new nation, its government, and its Constitution. Its purpose was to provide a forum for a select group of scholars, members of the learned professions, and government and business leaders to work together on behalf of the democratic interests of the republic. In the words of the Academy's Charter, enacted in 1780, the "end and design of the institution is ... to cultivate every art and science which may tend to advance the interest, honour, dignity, and happiness of a free, independent, and virtuous people." Today the Academy is both an honorary learned society and an independent policy research center that conducts multidisciplinary studies of complex and emerging problems. Current Academy research focuses on science and global security; social policy; the humanities and culture; and education. The Academy supports young scholars through its Visiting Scholars Program and Hellman Fellowships in Science and Technology Policy, providing year-long residencies at its Cambridge, Massachusetts, headquarters. The Academy's work is advanced by its 4,600 elected members, who are leaders in the academic disciplines, the arts, business, and public affairs from around the world.
NEUROLAW

BY INGFEI CHEN (MA '90)
ILLUSTRATION BY JEFFREY DECOSTER
Or could brain-based testing wrongly condemn some and trample the civil liberties of others?
In August 2008, Hank Greely received an e-mail from an International Herald Tribune reporter about an unusual murder case in India: A woman had been convicted of killing her ex-fiance with arsenic, and the circumstantial evidence against her included a brain-scan test that purportedly showed she had
"I was amazed and somewhat appalled," recalls Greely (BA '74), the Deane F. and Kate Edelman Johnson Professor of Law, who has studied the legal, ethical, and social implications of biomedical advances for nearly 20 years.
Opposite: Mark G. Kelman, James C. Gaither Professor of Law and Vice Dean

No studies of BEOS, as it's called, have been published in peer-reviewed scientific journals to prove it works. Maybe society will someday find a technological solution to lie detection, Greely told the Tribune, "but we need to demand the highest standards of proof before we ruin people's lives based on its application."

While it remains unclear whether the guilty verdict in the Indian case will be upheld, the idea that a murder conviction rested in part on the premature adoption of an unproven novel technology by a judicial system makes Greely uneasy. The case is the sort of potentially disastrous scenario that he and colleagues in the budding field of "neurolaw" seek to head off in the U.S. legal system.

The brain science revolution raises the tantalizing sci-fi-like prospect that secrets hidden inside people's heads (like prejudice, intention to commit a crime, or deception) are within reach of being knowable. And such "mind reading" could have wide-ranging legal ramifications.

Although the scientific know-how is not here yet, someday brain scans might provide stronger proof of an eyewitness's identification of a suspect, confirm a lack of bias in a potential juror, or demonstrate that a worker compensation claimant does, in fact, suffer from debilitating pain. Neuroimaging evaluations of drug offenders might help predict the odds of relapse and guide sentencing. And new treatment options based on a better grasp of the neural processes underlying addictive or violent behavior could improve rehabilitation programs for repeat lawbreakers.
IN THE UNITED STATES, CONCERNS ABOUT A SIMILAR BRAIN-SCANNING lie detection technology, based on functional magnetic resonance imaging (fMRI), have been swirling around since two companies, No Lie MRI and Cephos Corp., began offering commercial testing using the technique in 2006 and 2008, respectively. While reservations abound about the reliability of these brain scans, the decision of whether courts accept them as evidence will initially rest upon the discretion of individual judges, on a case-by-case basis.

As director of the Stanford Center for Law and the Biosciences (CLB) and the Stanford Interdisciplinary Group on Neuroscience and Society, Greely has provided critical analysis of the societal consequences of genetic testing and embryonic stem cell techniques. In recent years, as he has turned his gaze to brain science, Stanford has emerged as a leader in the neurolaw field, with the CLB holding one-day conferences on "Reading Minds: Lie Detection, Neuroscience, Law, and Society" and "Neuroimaging, Pain, and the Law" in 2006 and 2008.
In the past two decades, neuroscience research has made rapid gains in deciphering how the human brain works, building toward a fuller comprehension of behavior that could vastly change how society goes about educating children, conducting business, and treating diseases. Powerful neuroimaging techniques are for the first time able to reveal which parts of the living human brain are in action while the mind experiences fear, pain, empathy, and even feelings of religious belief.

"Anything that leads to a better, deeper understanding of people's minds plays right to the heart of human society and culture, and as a result, right to the heart of the law," says Greely. "The law cares about the mind."

Greely, along with two Stanford neuroscience professors and two research fellows, has also been engaged in the Law and Neuroscience Project, a three-year, $10 million collaboration funded by the John D. and Catherine T. MacArthur Foundation since 2007. Presided over by honorary chair retired Supreme Court Justice Sandra Day O'Connor '52 (BA '50) and headquartered at the University of California, Santa Barbara, the project brings together legal scholars, judges, philosophers, and scientists from two dozen universities.

One network is gauging neuroscience's promises and potential perils in the areas of criminal responsibility and prediction and treatment of criminal behavior. A second network, co-directed by Greely, is exploring
the impacts of neuroscience on legal decision making.

"A lot of the judges who are participating are just frankly baffled by this flood of neuroscience evidence they are seeing coming into the courts. They want to use it if it's good and solid, but they don't want to if it's flimflam," says Law and Neuroscience Project member William T. Newsome, professor of neurobiology at the Stanford School of Medicine. He and the other scientists are helping to educate their legal counterparts about what brain imaging "can tell you reliably and what it can't." The project will produce a neuroscience primer for legal practitioners.

NEUROSCIENTIFIC EVIDENCE HAS ALREADY INFLUENCED COURT OUTCOMES in a number of instances. Brain scan data is showing some purchase in death penalty cases, after a defendant has been found guilty, says Robert Weisberg '79, the Edwin E. Huddleson, Jr. Professor of Law and faculty co-director of the Stanford Criminal Justice Center. That's because, during the penalty phase, the defendant has "a constitutional right to offer just about anything that could be characterized as mitigating evidence," he says. "What happens here is that a lot of defense evidence that wouldn't be admissible during the guilt phase, then comes back in a secondary way." For instance, Weisberg says, to try to reduce punishment to a life sentence, some defense lawyers are presenting neuroimaging pictures to argue that organic brain damage from an abusive childhood makes their client less culpable.

But the bar for admissibility of such evidence is different in different legal contexts, Weisberg adds, and it is generally set much higher in the guilt phase during which criminal responsibility is determined. In that setting, there's a greater reluctance to consider brain-based information. "Right now the courts are very, very worried about allowing big inferences to be drawn about how neuroscientific evidence explains criminal responsibility," he says.

Still, in two cases in California and New York, defendants accused of first-degree murder successfully argued for a lesser charge of manslaughter after presenting brain scans to establish diminished brain function from neurological disorders. And the first in the next generation of evidence from brain-based technology, fMRI lie detection, is already knocking at courtroom doors, posing "the most imminent risk issue" in neurolaw, says Greely.

On the Stanford campus, Greely has been working with neuroscientist Anthony D. Wagner (PhD '97), a Law and Neuroscience Project member, and Emily R. Murphy '12 and Teneille R. Brown, who have been Law and Neuroscience Project fellows, to further investigate the Indian BEOS profiling technology. The convicted woman and her current husband (also found guilty in the case) were granted bail by an appellate court while it considers the couple's appeal of the ruling.

BEOS's inventor claims that by analyzing brain wave patterns that indicate a remembrance of information about a murder, the test can distinguish the source of that knowledge: from actually experiencing the crime, versus hearing of it in the news. But there's no scientific evidence supporting that such a feat is possible, says Wagner, a Stanford associate professor of psychology. Neuroimaging studies have shown that merely imagining events in your mind triggers patterns of brain activity similar to those that arise from experiencing the events for real.

Motivated by the case, Wagner, along with psychology postdoctoral fellow Jesse Rissman and Greely, is exploring whether basic memory recognition testing is possible with fMRI. Functional MRI looks for metabolic activity in the brain to see how different parts "light up" when an individual performs certain mental tasks while lying inside an MRI machine.

The idea of fMRI memory detection raises intriguing possibilities: Could it be used to verify whether a suspect's brain recognizes the objects in a crime scene shown in a photo, or to confirm an eyewitness's identification of a perpetrator, without the test subject even uttering a word? And, if so, how accurately? The answers aren't known.

In experiments funded by the Law and Neuroscience Project, the Stanford researchers are studying new computer algorithms for analyzing a person's neural activation patterns to see if they can be used to predict whether a face the person has seen before will be recognized. Preliminary accuracy rates look good. But, Wagner cautions, it is uncertain whether the lab findings would translate over to real-world applicability.

And that is the seemingly insurmountable sticking point with fMRI lie detection. Unlike polygraph testing, which monitors for anxiety-induced changes in blood pressure, pulse and breathing rates, and sweating that accompany prevarication, fMRI scans aim to directly capture the brain in the act of deception. About 20 published peer-reviewed studies found that certain brain areas become more active when a person lies versus when telling the truth. These
Emily Murphy '12 and Hank Greely (BA '74), the Deane F. and Kate Edelman Johnson Professor of Law

yet to use in the real world, with all its variegated deceptions of complicated half-truths and rehearsed false alibis. Experimental test conditions are a far cry from the highly emotional, stressful scenario of being accused of a crime for which you could be sent
proceeding.

Nonetheless, one of the first known attempts to admit such evidence into court was in a juvenile sex abuse case in San Diego County earlier this year. To try to prove he was innocent, the defendant submitted a scan from No Lie MRI, but later withdrew it. (For details about the CLB's involvement, see its blog at lawandbiosciences.wordpress.com). No Lie's CEO, Joel Huizenga, says that he is confident the brain scan tests can pass court admissibility rules "with flying colors" if the decision isn't politicized by opponents.

But George Fisher, the Judge John Crown Professor of Law and a former criminal prosecutor, thinks the justice system won't recognize such evidence anytime soon. Trial court criteria for admitting data from a new scientific technique set stiff requirements for demonstrating its reliability, he says. In federal courts and roughly half of state courts, individual judges must apply the Daubert standard on a case-by-case basis, hearing testimony from experts on key questions: Is the evidence sound? Has the scientific technique been tested and published in the peer-reviewed literature? What is its error rate? Other state courts use the Frye test of admissibility, which requires proof that scientific evidence is generally accepted in the relevant scientific community. Fisher's guess is that fMRI lie detection evidence "will not get past the reliability stage in most places." Attempts to reproduce real-world lying in the lab, he says, "are probably unlikely to satisfy a court when it really gets down and looks hard at these studies."

Plus, the justice system has an ideological aversion to lie detection technology: In United States v. Scheffer [www.law.cornell.edu/supct/html/96-1133.ZS.html] four Supreme Court justices said that a lie detection test, regardless of its accuracy, shouldn't
be admitted into federal courts because it would infringe on the jury's role as the human "lie detector" in the courtroom. "The mythology around the system is that the jurors are able to tell a lie when they see one," says Fisher.

Greely is less sanguine that courts will keep unproven fMRI testing out. "Anybody can try to admit neuroscience evidence in any case, in any court in the country," he says, adding that busy judges are typically not well prepared to make good decisions about it.

If there were any inclination, however, for courts to accept new lie detection evidence that isn't very firmly rooted in science, it would most likely happen on the defense side rather than the prosecution's, speculates former U.S. Attorney Carol C. Lam '85, who is deputy general counsel at Qualcomm. That's because the criminal justice system is structured to give the benefit of any doubt to the defendant.

Such instances of admission, were they ever to happen, would most likely also first take place in proceedings where the judge is the only trier of fact, Lam adds; individual judges might be curious about the fMRI test and confident that they can determine the appropriate weight to give it. But Lam also notes that the defense community actually might not wish to present fMRI lie detection results in court, for fear that if this kind of evidence became widely accepted by the judicial system, prosecutors would begin to use it against criminal defendants.

THE POTENTIAL ETHICAL AND LEGAL ISSUES surrounding brain scans for deception or memory detection get thorny quickly. On one hand, everyone agrees that a highly accurate fMRI lie detection test could be a powerful weapon in exonerating the innocent, similar to forensic DNA evidence. But could prosecutors compel someone to undergo testing, or would that violate the Fifth Amendment's protection against self-incrimination? Would it violate the Fourth Amendment's bar against unreasonable searches? A broader question may be whether a right to privacy is violated if someone scans your brain to read your mind, neurolaw experts say, whether for court, the workplace, or school.

Even if fMRI lie detection's reliability remains in doubt, law enforcement and national security agents could still use it to guide criminal investigations, as they do with the polygraph. However, Steven Laken, president and CEO of Cephos, points out that no one can be unwillingly interrogated by brain scan, because it currently requires significant cooperation from the subject in preparing for and undergoing the procedure.

Greely has proposed that fMRI lie detection companies be required to get pre-marketing approval from an agency like the Food and Drug Administration. Not surprisingly, Laken and Huizenga are opposed to the idea. Huizenga says that, given the enormous time and expense this would take, the idea is really a politically motivated move to stop the technology cold. Laken, however, says he is open to a discussion with Law and Neuroscience Project researchers, other scientists, and government agencies about what it would take to validate the accuracy of the technology.

As scientists unlock the mysteries of the human brain, we may learn that some people are neurally wired in ways that compel them to certain types of unlawful behavior: Their brains made them do it. How much this should lessen their culpability or punishment are weighty questions that courts would have to grapple with.

Some philosophers and neurobiologists believe neuroscience will prove that human beings don't have free will; instead, we are creatures whose actions are determined by mechanical workings of the brain that occur even before we make a conscious decision. If that's true, these thinkers argue, it could finally explode the very concept of criminal responsibility and shatter the judicial system.

But most legal scholars don't buy into that. "I think the free will conversation is the most dead-ended in all of neuroscience," says Mark G. Kelman, the James C. Gaither Professor of Law and vice dean. The debate has been going on for 2,000 years, he says, with critics of the idea that free will exists concluding long ago that human behavior is governed by the mind (not by some imagined moral entity within it) and the mind is located in the brain. It's doubtful, Kelman says, that neuroscience will add anything new to the free will criminal responsibility arguments by detailing the precise locations or processes that explain particular actions or traits, like a lack of empathy or impulse control.

Furthermore, others point out that the criminal justice system does not rely on a premise of free will. "It depends on the hypothesis that people's behavior is shapeable by outside forces," says Fisher. "And there's a big difference between saying there is no free will and saying the risk of punishment has no impact on a person's calculations about what to do next."
What is far more probable in the future, many experts say, is that one of neuroscience's biggest influences would be in revamping processes like sentencing or parole, or in forcing us to rethink such ideas as the rehabilitation of criminals, sexual predators, mentally insane convicts, or drug offenders. Although minimum sentences are mandated in many situations, judges still have some discretion in how they handle defendants in certain cases. If research led to better predictions of future behavior that could help distinguish the more dangerous lawbreakers from the safer bets, courts could make better decisions about how long a sentence to give a defendant, and whether he should be given probation or sent to prison. And, once he's in jail, when he should come out.
"what the chances are of controlling the defendant's behavior in the future, we're going to be better off," says O'Connor. Answers from neuroscience would be extremely welcome in decision making when defendants are committed to a mental institution. "When should a person be confined or when is it appropriate to have a person released on medication?" she asks. "There's just a need for that kind of information."

Drug addiction is another area where the law is hungry for better solutions and more effective treatments. "Our jails are overloaded, and they are overloaded with people who have committed drug crimes," says O'Connor. "So it just becomes enormously important to figure out how people get addicted to drugs and what we can do to sever that connection if we can."

Predictions of recidivism might be improved through the invention of brain imaging tools that assess whether an addict has truly broken the habit, says Newsome. For example, one possible test could be to scan the person's brain while she views video clips of people injecting heroin. If research established that such tempting scenes reliably triggered greater activity in the emotional centers of drug abusers' brains, scans taken before and after treatment could provide "an objective basis for saying, 'This person's getting on top of their problem,'" Newsome says.

Parole boards have been moving toward taking greater account of evidence-based predictions of behavior, adds Weisberg. "It is possible that neuroscientific evidence could be used to weigh into influencing the conditions of parole or the kind of treatment program the prisoner is sent into," he says.

When it comes to rehabilitation, new treatments that seek to change criminal behavior raise their own potentially Orwellian ethical dilemmas, though. A vaccine against cocaine is in clinical trials, Greely says. If it ever reaches the market, would the legal system force coke addicts to get vaccinated or otherwise imprison them? "Every plus here has
changing behaviors in good ways, or in bad ways." The thought of giving the government strong tools for altering people's behavior through direct action on the brain is, he says, "scary."

Prognosticators of neurolaw must walk a careful line in making conjectures about the future. A few years ago, a British bioethics scholar complained to Greely that the law professor's dissections of the legal implications of fMRI lie detection paid short shrift to whether the technology actually works, possibly leaving people with the impression that it will, or already does.

"That was a really good wake-up call," recalls Greely. Still, while no one knows exactly where the science will take us, he says, the goal of neurolaw is to clarify how coming discoveries might affect the legal world and to point out tensions, gaps, and areas where the law may need rethinking. Society must worry about both long-term implications of the hypothetical future and short-term realities of the present, he says. "You've got to look in both directions."

Ingfei Chen is a science writer whose work has appeared in The New York Times, Smithsonian, Discover, and other publications. To view an interview with Emily Murphy about the Indian case, go to www.stanfordlawyer.com.