Elizabeth N. Riley, Hannah L. Combs, Heather A. Davis, and Gregory T. Smith
<i>Neuropsychological Assessment in the Age of Evidence-Based Practice : Diagnostic and Treatment Evaluations</i>, edited by
Stephen C. Bowden, Oxford University Press, Incorporated, 2017. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/bibliouocsp-ebooks/detail.ac
Created from bibliouocsp-ebooks on 2019-11-08 03:27:55.
criteria led to enormous advances in the success and utility of psychological assess-
ment. Classic examples include the development of the Minnesota Multiphasic
Personality Inventory (MMPI; Butcher, 1995) and the California Personality
Inventory (CPI; Megargee, 2008). Those tests were designed to predict specific
criteria and generally do so quite well. The MMPI-2 distinguishes between
psychiatric inpatients and outpatients and also facilitates treatment planning
(Butcher, 1990; Greene, 2006; Nichols & Crowhurst, 2006; Perry et al., 2006).
The MMPI-2 has been applied usefully to normal populations (such as in person-
nel assessment: Butcher, 2001; Derksen et al., 2003), to head-injured populations
(Alkemade et al., 2015; Gass, 2002), seizure disorders (King et al., 2002; Whitman
et al., 1994) and in correctional facilities (Megargee, 2006). The CPI validly pre-
dicts a wide range of psychosocial functioning criteria as well (Gough, 1996).
Over time, defining “validity” as “criterion validity” led to improved predic-
tion and advances in knowledge. As these changes occurred, two limitations in
relying solely on criterion validity became apparent. The first is that the criterion
validity of a test can only be as good as the criterion that it was designed to pre-
dict. Often, when researchers investigate the criterion validity of a new measure,
they presume the validity of the criterion, rather than study the validity of the
criterion independently. It has become clear that there are many sources of error
Theory as Evidence
in criteria. For example, criteria are often based on some form of judgement,
such as teacher or parent rating, or classification status using a highly imper-
fect diagnostic system like the American Psychiatric Association Diagnostic and
Statistical Manual of Mental Disorders (currently DSM-5: APA, 2013). There are
certainly limits to the criteria predicted by neuropsychological tests. This prob-
lem will not be the focus of this chapter, although it is of course important to
bear in mind.
The second problem is that the criterion validity approach does not facili-
tate the development of basic theory (Cronbach & Meehl, 1955; Smith, 2005;
Strauss & Smith, 2009). When tests are developed for predicting a very specific
criterion, and when they are validated only with respect to that predictive task,
the validation process is likely to contribute little to theory development. As a
result, criterion validity findings tend not to provide a strong foundation for
deducing likely relationships among psychological constructs, and hence for the
development of generative theory. This limitation led the field of psychology to
develop the concept of construct validity and to focus on construct validity in test
and theory validation (for review, see Strauss & Smith, 2009).
In order to develop and test theories of the relationships among psychologi-
cal variables, it is necessary to invoke psychological constructs that do not have
a single criterion reference point (Cronbach & Meehl, 1955). Constructs such
as fluid reasoning, crystallized intelligence, and working memory cannot be
directly observed or tied to a single criterion behavior or action; rather, they
are inferred entities (Jewsbury et al., 2016; McGrew, 2009). We infer the exis-
tence of constructs from data because doing so proves helpful for understanding
psychological processes that lead to important individual differences in real-life
task performance and in real-life outcomes. We consider it important to study
Copyright © 2017. Oxford University Press, Incorporated. All rights reserved.
both that our measure is a valid measure of construct A, and that our theory
relating A, B, C, and D has validity. In a very real sense, each test of the validity of
our measure of A is simultaneously a test of the theoretical proposition describ-
ing relationships among the four constructs. After extensive successful empirical
analysis, our evidence supporting the validity of our measure of A comes to rest
on a larger, cumulative body of empirical support for the theory of which A is
a part.
Then, when it comes to tests of the criterion validity of A with respect to some
important neuropsychological function, the criterion validity experiments represent
tests of an extension of an existing body of empirical evidence organized as
a psychological theory. Because of the matrix of underlying empirical support,
positive tests of criterion validity are less likely to reflect false-positive findings.
In the contrasting case, in which tests of criterion validity are conducted without
such an underlying network of empirical support, there is a greater chance that
positive results may not be replicable.
In general, the content of the obvious items is consistent with emerging theories of
psychopathology, but the content of the subtle items tends not to be. Most of the
many studies testing the replicability of validity findings using the subtle items
have found that those items often do not predict the intended criteria validly and
sometimes even predict in the opposite direction (Hollrah, Schlottmann, Scott,
& Brunetti, 1995; Jackson, 1971; Weed et al., 1990). Although those items were
originally chosen based on their criterion validity (using the criterion-keying
approach: Hathaway & McKinley, 1942), researchers can no longer be confident
that they predict intended criteria accurately. The case of MMPI subtle items is
a case in which criterion validity was initially demonstrated in the absence of
supporting theory, and the criterion validity findings have generally not proven
replicable. The contrasting examples of the Wechsler scales and the MMPI subtle
items reflect the body of knowledge in clinical psychology, which indicates the
greater success derived from predicting criteria based on measures well sup-
ported in theory.
The distinction between the automatic and the voluntary has been highly influential
in basic psychological science for more than a century (e.g., James, 1890; Posner &
Snyder, 1975; Quantz, 1897).
Development and validation of the modern Stroop Test (Stroop, 1935) as a
clinical measure thus rests on an impressive basis of theoretical and empirical
support. The test has been found to be a relatively robust measure of attention
and interference in numerous experimental studies, and psychologists have
adopted it for use in the clinical setting. The test has become so popular in
clinical settings that several versions and normative datasets have been created.
Ironically, the existence of multiple versions creates a separate set of administra-
tion and interpretation problems (Mitrushina, 2005). The Stroop Test has proven
to be an effective measure of executive disturbance in numerous neurological
and psychiatric populations (e.g., schizophrenia, Parkinson’s disease, chronic
alcoholism, Huntington’s disease, attention-deficit hyperactivity disorder
[ADHD]: Strauss, Sherman, & Spreen, 2006). Impaired attentional abilities asso-
ciated with traumatic brain injury, depression, and bipolar disorder all appear to
influence Stroop performance (Lezak, Howieson, Bigler, & Tranel, 2012; Strauss
et al., 2006). Consistent with these findings, the Stroop has been shown to be
sensitive to both focal and diffuse lesions (Demakis, 2004). Overall, the Stroop
Test is a well-validated, reliable measure that provides significant information
on the cognitive processes underlying various neurological and psychological
disorders. As is characteristic of the iterative nature of scientific development,
findings from use of the test have led to modifications in the underlying theory
(MacLeod, 1991).
Sometimes, researchers attempt to create assessment measures based on
theory but the tests do not, after repeated examination, demonstrate good cri-
terion validity. Generally, one of two things happens in the case of these theory-
driven tests that ultimately tend not to be useful. First, it is possible that the test
being explored was simply not a good measure of the construct of interest. This
can lead to important modifications of the measure so that it more accurately
assesses the construct of interest and can be potentially useful in the future. The
second possibility is that the theory underlying development of the measure is
not fully accurate. When the latter is the case, attempts to use a test to predict
criteria will provide inconsistent results, characterized by failures to replicate.
An example of this problem is illustrated in the long history of the Rorschach
test in clinical psychology. The Rorschach was developed from hypothetical
contentions that one’s perceptions of stimuli reveal aspects of one’s personality
(Rorschach, 1964) and that people project aspects of themselves onto ambiguous
stimuli (Frank, 1939). This test provides an example of a hypothesis-based test
that has produced inconsistent criterion validity results, a test based on a theoret-
ical contention that has since been challenged and deemed largely unsuccessful.
One school of thought is that the hypotheses are sound but that the test scoring
procedures are flawed. As a consequence, there have been repeated productions
of new scoring systems over the years (see Wood, Nezworski, Lilienfeld, & Garb,
2003). To date, each scoring system, including Exner’s (1974, 1978) systems, has
had limited predictive validity (Hunsley & Bailey, 1999; Wood et al., 2003). The
modest degree of support provided by the literature has led to a second inter-
pretation, which is that the hypothesis underlying use of the test lacks validity
(Wood et al., 2003).
We believe there are two important lessons neuropsychologists can draw from
histories of tests such as the Rorschach. The first is to be aware of the validation
history of theories that are the basis for tests one is considering using. Knowing
whether underlying theories have sound empirical support or not can provide
valuable guidance for the clinician. The second is that it may not be wise to invest
considerable effort in trying to alter or modify tests that either lack adequate cri-
terion validity or are not based on a valid, empirically supported theory.
The second manner in which theories develop is through what has been
called a “bootstrapping” process. Cronbach and Meehl (1955) described
bootstrapping in this context as the iterative process of using the results of
tests of partially developed theories to refine and extend the primary theory.
Bootstrapping can lead to further refinement and elaboration, allowing for
stronger and more precise validation tests. When using the bootstrapping
method of theory development in neuropsychology, it is common for research-
ers to discover, perhaps even accidentally, that some sort of neuropsychological
test can accurately identify who has a cognitive deficit based on their inability
to perform the test at hand (e.g., Fuster, 2015; Halstead, 1951). Once researchers
are able to determine that the test is a reliable indicator of neuropsychological
deficit, they can explore what the task actually measures and can begin to for-
mulate a theory about this cognitive deficit based on what they think the test
is measuring. In this manner, the discovery and repeated use of a measure to
identify neuropsychological deficit can lead to better theories about the cogni-
(TBI: Nybo et al., 2004). In addition, TMT parts A and B can predict psychoso-
cial outcome following head injury (Devitt et al., 2006) and are useful for pre-
dicting instrumental activities of daily living in older adults (Boyle et al., 2004;
Tierney et al., 2001). Tierney et al. (2001) reported that the TMT was significantly
related to self-care deficits, use of emergency services, experiencing harm, and
loss of property in a study of cognitively impaired people who live alone. At this
point in the history of the test, one can say that the TMT has a strong theoretical
foundation and an extensive matrix of successful criterion predictions in line
with that theory. Inferences made based on the test today are unlikely to be char-
acterized by repeated false positive conclusions.
cient (r ~ .60; Lichtenberger, Kaufman, & Lai, 2001) and an inadequate floor and
ceiling (Flanagan, McGrew, & Ortiz, 2000). Furthermore, Family Pictures does
not correlate well with other visual memory measures and it does not effec-
tively discriminate lesion lateralization (Chapin, Busch, Naugle, & Najm, 2009;
Dulay, Schefft, Marc Testa, Fargo, Privitera, & Yeh, 2002). There is good reason
to question its accuracy as a measure of visual memory, and its use for that purpose
could lead to suboptimal prediction of criteria.
Theory-Based Assessment
Our first recommendation is the one we have emphasized throughout this chap-
ter: whenever possible, use assessment tools grounded in empirically supported
theory. Doing so reduces the chances that one will draw erroneous conclusions
from test results. However, we also recognize that neuropsychological tests dem-
onstrating good, theory-driven criterion validity do not exist for every construct
that is of clinical interest and importance. There are sometimes serious issues
needing assessment and care for which there are no assessment instruments that
fit the bill of showing criterion validity based on strong theory. We thus review
here some points we believe will be helpful to practitioners who have to make
difficult decisions about neuropsychological assessment when no clear guide-
lines or evidence exist for that specific clinical context.
More recently, the RBANS has been updated to include special group stud-
ies for Alzheimer’s disease, vascular dementia, HIV dementia, Huntington’s
disease, Parkinson’s disease, depression, schizophrenia, and closed head injury
populations (Pearson Clinical, 2016). This change represents an important step
in enhancing the criterion-related validity of this measure; the measure can now
be used with more confidence in many non-demented populations. Still, clini-
cians who make decisions to use this (or any other) measure on a population
for whom it was not originally created and for which there is not sound validity
evidence should closely examine the body of available empirical literature sup-
porting their decision.
Sometimes the “off-label” use of an assessment tool is necessary because there
are no assessment instruments available for the problem of interest that have
been validated in the population of interest. This is an unfortunately common
gap between the state of the research literature and the state of clinical needs. In
such a case, off-label assessments must be used with extreme caution and with
the knowledge that the use of such instruments is getting ahead of the theory
and research on which these assessments were created.
der. The use of a single score is a problem for two primary reasons: (1) one cannot
know how specific components of the construct might differentially contribute
to the overall score, and (2) the same overall score could represent different com-
binations of component scores for different individuals (Strauss & Smith, 2009).
A classic example of construct heterogeneity in assessment is the Beck
Depression Inventory (BDI; Beck et al., 1996), which is used to provide a single
score of depressive symptomatology. This is problematic because depression is
a multifaceted collection of very different symptoms, including feelings of high
negative affect (such as sadness/worthlessness), low positive affect (anhedo-
nia), sleeping or eating too much or too little, thoughts of suicide, and others.
Individuals can obtain the same score on the BDI but actually have very different
patterns of symptoms (McGrath, 2005).
In the case of the BDI, the patient answers questions on a scale of 0–3 points
based on the severity of that symptom, and the total depressive score is a sum of
all the points for all of the questions. A score of 9 points can mean different things
for different patients. One patient might get that score due to high negative affect
without disturbances in positive affect, sleep, or appetite. Another patient might
get the same score due to loss of positive affect in the absence of any significant
negative affect or any sleep or appetite problems. Because treatments for high
<i>Neuropsychological Assessment in the Age of Evidence-Based Practice : Diagnostic and Treatment Evaluations</i>, edited by
Stephen C. Bowden, Oxford University Press, Incorporated, 2017. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/bibliouocsp-ebooks/detail.ac
Created from bibliouocsp-ebooks on 2019-11-08 03:27:55.
25
Theory as Evidence 25
negative affect and low positive affect differ (Chambless & Ollendick, 2001), it is
crucial to know the degree to which each of those two constructs contributed to
the patient’s score. It is far more efficient and accurate to measure negative and
positive affect separately, and doing so can provide a clearer picture of the actual
clinical experience of the patient. It is important to note, however, that the actual
diagnosis of major depressive disorder in the DSM-5 (APA, 2013) is similarly
heterogeneous and is based on these varied and conflated constructs. Thus, the
construct heterogeneity of the BDI is not merely an assessment problem, and
construct heterogeneity in assessment is not merely a neuropsychological prob-
lem. The problem of construct heterogeneity extends well into the entire field of
clinical psychology and our understanding of health problems as a whole (see
also Chapter 4 of this volume).
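The arithmetic behind this problem can be sketched concretely. In the hypothetical illustration below (the item labels and score values are invented for the example, not actual BDI items or patient data), two patients obtain an identical total of 9 from very different symptom profiles:

```python
# Hypothetical symptom-item scores (0-3 per item), invented for illustration.
# Patient A's score is driven by high negative affect; Patient B's by
# anhedonia and sleep disturbance, with almost no negative affect.
patient_a = {"sadness": 3, "worthlessness": 3, "anhedonia": 0, "sleep": 1, "appetite": 2}
patient_b = {"sadness": 0, "worthlessness": 1, "anhedonia": 3, "sleep": 3, "appetite": 2}

# A single-score instrument reports only the sum of item points.
total_a = sum(patient_a.values())  # 9
total_b = sum(patient_b.values())  # 9
assert total_a == total_b  # identical totals...

# ...yet the component scores tell different clinical stories:
negative_affect_a = patient_a["sadness"] + patient_a["worthlessness"]  # 6
negative_affect_b = patient_b["sadness"] + patient_b["worthlessness"]  # 1
```

The single total conflates the two components, which is exactly why measuring negative and positive affect separately yields a clearer clinical picture.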
For the clinician, it is critical to understand exactly what the assessment
instrument being used is purported to measure and, related to this, whether the
target construct is heterogeneous or homogeneous. A well-validated test that
measures a homogeneous construct is the Boston Naming Test (BNT: Pedraza,
Sachs, Ferman, Rush, & Lucas, 2011). The BNT is a neuropsychological mea-
sure that was designed to assess visual naming ability using line drawings of
everyday objects (Strauss et al., 2006). Kaplan first introduced the BNT as a test
of confrontation naming in 1983 (Kaplan, Goodglass, & Weintraub, 1983). The
BNT is sensitive to naming deficits in patients with left-hemisphere cerebrovas-
cular accidents (Kohn & Goodglass, 1985), anoxia (Tweedy & Schulman, 1982),
and subcortical disease (Henry & Crawford, 2004; Locascio et al., 2003). Patients
with Alzheimer’s disease typically exhibit signs of anomia (difficulty with word
recall) and show impairment on the BNT (Strauss et al., 2006).
In contrast, there are neuropsychological measures that measure multiple cog-
use the tools they do have at their disposal to ensure or preserve whatever level
of validity is available to them.
Ecological Validity
Finally, there is the issue of ecological validity in neuropsychological assess-
ment. An important question for clinicians concerns whether variation in test
scores maps onto variation in performance of relevant cognitive functions in
everyday life. This issue perhaps becomes central when the results of an assess-
ment instrument do not match clinical observation or patient report. When
this occurs, the clinician’s judgement concerning how much faith to place in the
test result should be influenced by the presence or absence of ecological valid-
ity evidence for the test. In the absence of good evidence for ecological validity,
the clinician might be wise to weigh observation and patient report more heav-
ily. When there is good evidence for ecological validity of a test, the clinician
might instead give the test result more weight. There are certainly times when
a neuropsychological test will detect some issue that was previously unknown
or that the patient did not cite as a concern (for a full review of the importance
of ecological validity issues to neuropsychological assessment, see Chaytor &
Schmitter-Edgecombe, 2003).
SUMMARY
In the context of good theory, criterion validity evidence is hugely important
to the practitioner. It provides a clear foundation for sound, scientifically based
clinical decisions. Criterion validity evidence in the absence of theory may well
REFERENCES
Alkemade, N., Bowden, S. C., & Salzman, L. (2015). Scoring correction for MMPI-2
Hs scale in patients experiencing a traumatic brain injury: A test of measurement
invariance. Archives of Clinical Neuropsychology, 30, 39–48.
American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental
Disorders (DSM-5®). Washington, DC: American Psychiatric Association.
American Psychological Association. (2010). Ethical Principles of Psychologists and
Code of Conduct. Retrieved from http://apa.org/ethics/code/index.aspx. Date
retrieved: January 19, 2015.
Anastasi, A. (1950). The concept of validity in the interpretation of test scores.
Educational and Psychological Measurement, 10, 67–78.
Beck, A. T., Steer, R. A., & Brown, G. K. (1996). Beck Depression Inventory–II. San
Antonio, TX: Psychological Corporation.
Boyle, P. A., Paul, R. H., Moser, D. J., & Cohen, R. A. (2004). Executive impairments
predict functional declines in vascular dementia. The Clinical Neuropsychologist,
18(1), 75–82.
Butcher, J. N. (1990). The MMPI-2 in Psychological Treatment. New York: Oxford
University Press.
Butcher, J. N. (1995). User’s Guide for The Minnesota Report: Revised Personnel Report.
Minneapolis, MN: National Computer Systems.
Butcher, J. N. (2001). Minnesota Multiphasic Personality Inventory–2 (MMPI-2) User’s
Guide for The Minnesota Report: Revised Personnel System (3rd ed.). Bloomington,
MN: Pearson Assessments.
Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological inter-
ventions: Controversies and evidence. Annual Review of Psychology, 52(1), 685–716.
Chapin, J. S., Busch, R. M., Naugle, R. I., & Najm, I. M. (2009). The Family Pictures
subtest of the WMS-III: Relationship to verbal and visual memory following tem-
poral lobectomy for intractable epilepsy. Journal of Clinical and Experimental
Neuropsychology, 31(4), 498–504.
Chaytor, N., & Schmitter-Edgecombe, M. (2003). The ecological validity of neu-
ropsychological tests: A review of the literature on everyday cognitive skills.
Neuropsychology Review, 13(4), 181–197.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests.
Psychological Bulletin, 52(4), 281–302.
Demakis, G. J. (2004). Frontal lobe damage and tests of executive processing: A meta-
analysis of the category test, Stroop test, and trail-making test. Journal of Clinical
and Experimental Neuropsychology, 26(3), 441–450.
Devitt, R., Colantonio, A., Dawson, D., Teare, G., Ratcliff, G., & Chase, S. (2006).
Prediction of long-term occupational performance outcomes for adults after mod-
erate to severe traumatic brain injury. Disability & Rehabilitation, 28(9), 547–559.
Duff, K., Patton, D., Schoenberg, M. R., Mold, J., Scott, J. G., & Adams, R. L. (2003).
Age-and education-corrected independent normative data for the RBANS in a
community dwelling elderly sample. The Clinical Neuropsychologist, 17(3), 351–366.
Dulay, M. F., Schefft, B. K., Marc Testa, S., Fargo, J. D., Privitera, M., & Yeh, H. S.
(2002). What does the Family Pictures subtest of the Wechsler Memory Scale–III
measure? Insight gained from patients evaluated for epilepsy surgery. The Clinical
Neuropsychologist, 16(4), 452–462.
Exner, J. E. (1974). The Rorschach: A Comprehensive System. New York: John Wiley
& Sons.
Exner Jr., J. E., & Clark, B. (1978). The Rorschach (pp. 147–178). New York: Plenum
Press.
Flanagan, D. P., McGrew, K. S., & Ortiz, S. O. (2000). The Wechsler Intelligence Scales
and Gf-Gc Theory: A Contemporary Approach to Interpretation. Needham Heights,
MA: Allyn & Bacon.
Frank, L. K. (1939). Projective methods for the study of personality. The Journal of
Psychology, 8(2), 389–413.
Franzen, M. D. (2000). Reliability and Validity in Neuropsychological Assessment.
New York: Springer Science & Business Media.
Fuster, J. M. (2015). The Prefrontal Cortex. San Diego, CA: Academic Press.
Gass, C. S. (2002). Personality assessment of neurologically impaired patients. In
J. Butcher (Ed.), Clinical Personality Assessment: Practical Approaches (2nd ed.,
pp. 208–244). New York: Oxford University Press.
Gold, J. M., Queern, C., Iannone, V. N., & Buchanan, R. W. (1999). Repeatable Battery
for the Assessment of Neuropsychological Status as a screening test in schizophre-
nia, I: Sensitivity, reliability, and validity. American Journal of Psychiatry, 156(12),
1944–1950.
Gough, H. G. (1996). CPI Manual: Third Edition. Palo Alto, CA: Consulting
Psychologists Press.
Greene, R. L. (2006). Use of the MMPI-2 in outpatient mental health settings. In
J. Butcher (Ed.), MMPI-2: A Practitioner’s Guide. Washington, DC: American
Psychological Association.
Halstead, W. G. (1951). Biological intelligence. Journal of Personality, 20(1), 118–130.
Hathaway, S. R., & McKinley, J. C. (1942). The Minnesota Multiphasic Personality
Schedule. Minneapolis, MN, US: University of Minnesota Press.
Henry, J. D., & Crawford, J. R. (2004). Verbal fluency deficits in Parkinson’s dis-
ease: A meta-analysis. Journal of the International Neuropsychological Society, 10(4),
608–622.
Hollrah, J. L., Schlottmann, S., Scott, A. B., & Brunetti, D. G. (1995). Validity of the
MMPI subtle items. Journal of Personality Assessment, 65, 278–299.
Hunsley, J., & Bailey, J. M. (1999). The clinical utility of the Rorschach: Unfulfilled
promises and an uncertain future. Psychological Assessment, 11(3), 266–277.
Jackson, D. N. (1971). The dynamics of structured personality tests: 1971. Psychological
Review, 78(3), 229–248.
James, W. (1890). Principles of Psychology. New York: Holt.
Jewsbury, P. A., Bowden, S. C., & Strauss, M. E. (2016). Integrating the switching, inhi-
bition, and updating model of executive function with the Cattell-Horn-Carroll
model. Journal of Experimental Psychology: General, 145(2), 220–245.
Kaplan, E., Goodglass, H., & Weintraub, S. (1983). Boston Naming Test (BNT). Manual
(2nd ed.). Philadelphia: Lea & Febiger.
Karzmark, P. (2009). The effect of cognitive, personality, and background factors on
the WAIS-III arithmetic subtest. Applied Neuropsychology, 16(1), 49–53.
Kaufman, A. S. (1994). Intelligent Testing with the WISC-III. New York: John Wiley
& Sons.
King, T. Z., Fennell, E. B., Bauer, R., Crosson, B., Dede, D., Riley, J. L., … & Roper, S. N.
(2002). MMPI-2 profiles of patients with intractable epilepsy. Archives of Clinical
Neuropsychology, 17(6), 583–593.
Kohn, S. E., & Goodglass, H. (1985). Picture-naming in aphasia. Brain and Language,
24(2), 266–283.
Larson, E. B., Kirschner, K., Bode, R., Heinemann, A., & Goodman, R. (2005).
Construct and predictive validity of the Repeatable Battery for the Assessment of
Neuropsychological Status in the evaluation of stroke patients. Journal of Clinical
and Experimental Neuropsychology, 27(1), 16–32.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological
Assessment. New York: Oxford University Press.
Lichtenberger, E. O., Kaufman, A. S., & Lai, Z. C. (2001). Essentials of WMS-III
Assessment (Vol. 31). New York: John Wiley & Sons.
Lichtenberger, E. O., & Kaufman, A. S. (2009). Essentials of WAIS-IV Assessment (Vol.
50). New York: John Wiley & Sons.
Lichtenberger, E. O., & Kaufman, A. S. (2013). Essentials of WAIS-IV Assessment (2nd
ed.). New York: John Wiley and Sons.
Locascio, J. J., Corkin, S., & Growdon, J. H. (2003). Relation between clinical char-
acteristics of Parkinson’s disease and cognitive decline. Journal of Clinical and
Experimental Neuropsychology, 25(1), 94–109.
Long, C. J., & Kibby, M. Y. (1995). Ecological validity of neuropsychological tests: A look
at neuropsychology’s past and the impact that ecological issues may have on its
future. Advances in Medical Psychotherapy, 8, 59–78.
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative
review. Psychological Bulletin, 109(2), 163–203.
Matarazzo, J. D. (1990). Psychological testing versus psychological assessment.
American Psychologist, 45, 999–1017.
Mayes, S. D., & Calhoun, S. L. (2007). Wechsler Intelligence Scale for Children third
and fourth edition predictors of academic achievement in children with attention-deficit/hyperactivity disorder.
Nelson, J. M., Canivez, G. L., & Watkins, M. W. (2013). Structural and incremental
validity of the Wechsler Adult Intelligence Scale–Fourth Edition with a clinical
sample. Psychological Assessment, 25(2), 618–630.
Nichols, D. S., & Crowhurst, B. (2006). Use of the MMPI-2 in inpatient mental
health settings. In MMPI-2: A Practitioner’s Guide (pp. 195–252). Washington,
DC: American Psychological Association.
Nybo, T., Sainio, M., & Muller, K. (2004). Stability of vocational outcome in adult-
hood after moderate to severe preschool brain injury. Journal of the International
Neuropsychological Society, 10(5), 719–723.
Pedraza, O., Sachs, B. C., Ferman, T. J., Rush, B. K., & Lucas, J. A. (2011). Difficulty and
discrimination parameters of Boston Naming Test items in a consecutive clinical
series. Archives of Clinical Neuropsychology, 26(5), 434–444.
Perry, J. N., Miller, K. B., & Klump, K. (2006). Treatment planning with the MMPI-2.
In J. Butcher (Ed.), MMPI-2: A Practitioner’s Guide (pp. 143–164). Washington,
DC: American Psychological Association.
Posner, M. I., & Snyder, C. R. R. (1975). Facilitation and inhibition in the processing of
signals. Attention and Performance V, 669–682.
Quantz, J. O. (1897). Problems in the psychology of reading. Psychological
Monographs: General and Applied, 2(1), 1–51.
Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neu-
ropsychologists in the United States and Canada: A survey of INS, NAN, and APA
Division 40 members. Archives of Clinical Neuropsychology, 20(1), 33–65.
Randolph, C. (2016). The Repeatable Battery for the Assessment of Neuropsychological
Status Update (RBANS Update). Retrieved May, 2016, from http://www.pearsonclinical.com/psychology/products/100000726/repeatable-battery-for-the-assessment-of-neuropsychological-status-update-rbans-update.html#tab-details.
Randolph, C., Tierney, M. C., Mohr, E., & Chase, T. N. (1998). The Repeatable Battery
for the Assessment of Neuropsychological Status (RBANS): Preliminary clinical
validity. Journal of Clinical and Experimental Neuropsychology, 20(3), 310–319.
Tweedy, J. R., & Schulman, P. D. (1982). Toward a functional classification of naming
impairments. Brain and Language, 15(2), 193–206.
Wechsler, D. (1997). Wechsler Memory Scale–Third Edition (WMS-III). San Antonio,
TX: Pearson.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale–Fourth Edition (WAIS-IV). San
Antonio, TX: Pearson.
Wechsler, D. (2014). Wechsler Intelligence Scale for Children–Fifth Edition (WISC-V).
San Antonio, TX: Pearson.
Weed, N. C., Ben-Porath, Y. S., & Butcher, J. N. (1990). Failure of Wiener and Harmon
Minnesota Multiphasic Personality Inventory (MMPI) subtle scales as personal-
ity descriptors and as validity indicators. Psychological Assessment: A Journal of
Consulting and Clinical Psychology, 2(3), 281–285.
Weimer, W. B. (1979). Notes on the Methodology of Scientific Research. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Whitman, S., Hermann, B. P., & Gordon, A. C. (1984). Psychopathology in epi-
lepsy: How great is the risk? Biological Psychiatry, 19(2), 213–236.
Wilk, C. M., Gold, J. M., Humber, K., Dickerson, F., Fenton, W. S., & Buchanan,
R. W. (2004). Brief cognitive assessment in schizophrenia: Normative data for the
Repeatable Battery for the Assessment of Neuropsychological Status. Schizophrenia
Research, 70(2), 175–186.
Wood, J. M., Nezworski, M. T., Lilienfeld, S. O., & Garb, H. N. (2003). What’s Wrong
with the Rorschach? Science Confronts the Controversial Inkblot Test. San Francisco,
CA: Jossey-Bass.