Printed in the USA. All rights reserved. Copyright © 1988 Pergamon Press plc
MANAGING ATTRITION IN
CLINICAL RESEARCH
Susan N. Flick
Vanderbilt University
ABSTRACT. Attrition can cause serious problems for clinical researchers by introducing biases
into an experiment. Preinclusion and postinclusion attrition can threaten both internal and external
validity. This article provides a comprehensive approach to attrition, examining the possible
impact of this occurrence at each phase of the study. Certain methods of recruitment and types of
research protocols may decrease the occurrence of attrition. When attrition does occur, experimental
methods are useful for preserving validity, including completers-only analysis, optimization,
endpoint analysis, and time-controlled analysis. Statistical approaches also can be implemented,
such as standardized change scores, data replacement by regression, endpoint analysis with
regression, and worst case scenarios. The application of a single approach to the data will most
likely introduce bias; however, a statistical bracketing procedure that compares two methods with
opposing biases will yield greater confidence in the validity of the results.
Reprint requests should be addressed to Susan Flick, Department of Psychology, Vanderbilt
University, Nashville, TN 37240.
about certain data points. Their solution did not address the issues of bias or
validity; instead they reframed the basic research question to be one of seeking
points where the independent variables produce the optimal outcome
variables.
All of these researchers suggested useful methods for managing attrition. Taken
together, they provide a basic overview of the problem and some solutions, but
they do not offer a comprehensive approach. The purpose of this article is to
present a comprehensive approach to managing attrition in clinical research. This
paper primarily focuses on clinical trials, but the principles suggested are equally
applicable to other types of clinical research.
Because attrition can affect the validity of a study at many points in the re-
search, it can be problematic from the recruitment through the data analysis.
This article follows the temporal organization of research and gives explanations
and examples of the problems encountered due to attrition at each phase of the
study, along with suggested solutions.
DEFINITION OF VALIDITY
Validity can be divided into two types: internal and external. Internal validity
allows the researcher to attribute between-group differences to the experimentally
defined distinctions between groups. Internal validity is essential for accurate
interpretation of results; if the study is internally valid, then the results of the
experiment can be attributed legitimately to the experimental manipulation
(Campbell & Stanley, 1963). A well-designed research project should allow the
researcher to test the experimental hypothesis that the targeted intervention pro-
duced the outcome, and it should effectively rule out all possible alternative
explanations for the findings. For an experimenter to assume that changes in the
experimental group are attributable to the experimental manipulation, the follow-
ing criteria must be met: (a) the sample is randomly assigned to treatment
groups, (b) the sample is controlled with respect to all other threats to internal
validity (e.g., history, maturation, testing, instrumentation, regression toward the
mean), and (c) the sample does not have any dropouts (Campbell & Stanley,
1963).
External validity is defined as the generalizability of results to other settings,
populations, treatments, etc. (Campbell & Stanley, 1963). In presenting the
study, the researcher will probably want to extrapolate the findings to the popula-
tion of interest from which the sample was taken. If the sample was truly random
and representative, then it can be assumed that this extrapolation is valid. Howev-
er, if the researcher’s sample was not truly random, then the researcher will not be
justified in extrapolating the findings to the general sampling population. The
researcher can only suggest that the findings extend to the population that has
similar characteristics to the sample in the study.
While threats to both internal and external validity can seriously endanger a
study, it can be argued that internal validity is the sine qua non of valid research.
It is essential for a study to be interpretable (internally valid) in order for its
generalizability (external validity) to be of any importance. For this reason, some
researchers choose to minimize threats to internal validity as much as possible,
even if it necessitates increasing threats to external validity.
Preinclusion attrition (Howard et al., 1986) occurs when potential subjects from
the population of interest are not included in the sample due to a selection bias on
the part of either the researcher or the subjects. Selection bias effects are particu-
larly problematic if subjects who are similar along some dimension are excluded
disproportionately from the research sample in a systematic manner.
Preinclusion attrition can occur due to sampling bias introduced by the re-
searchers’ method of contacting prospective subjects. For instance, if a researcher
who is studying parents tries to contact them at their home phones only during
business hours, a select portion of the targeted population will be contacted
(nonworking parents) and available for inclusion in the subject pool. Another way
the researcher might incur a preinclusion attrition effect is by recruiting only
some fraction of those approached for the study. For example, a researcher might
study exposure treatments for snake phobias. In keeping with true informed consent,
the researcher mentions that the subject runs the risk of being bitten by a venom-
ous snake (in which case the researcher disclaims legal responsibility). Of those
approached for the study, only a small percentage agree to participate. While a
random and representative sample of individuals may be approached, those who
agree to participate are self-selected on some dimension that is likely to be related
to treatment outcome. Those agreeing to participate may be less phobic, more
motivated, and/or more daring than those who decline participation.
Both researcher selection effects and subject self-selection effects can threaten
the external validity of the study. If the researcher’s sample was not truly random
due to sampling bias brought about by preinclusion attrition, then the researcher
cannot suggest that the findings extrapolate to the general sampling population.
From the examples above, the researcher could only make statements about
working parents who were home during the day, or snake phobics who would be
willing to sign the consent form. This may severely limit the generalizability of
study findings, thus restricting external validity.
The author identified 37 clinical studies in Volume 53 (calendar year 1985) of
the Journal of Consulting and Clinical Psychology for the purpose of examining both
preinclusion and postinclusion attrition. The Journal of Consulting and Clinical Psy-
chology was selected because it is the journal of the American Psychological Associ-
ation that is devoted to publishing empirical studies in clinical psychology. Of
these 37 studies, 13 included information about the number of people who were
eligible and/or contacted to participate in the study, and the remaining 24 studies
failed to include this information. Thus, nearly two-thirds of the studies reviewed
did not provide any information about how the research sample was selected from
the population of possible participants. In these cases, the reader received no
information about the possibility that the sample might have had problems due to
preinclusion attrition.
One solution to the problem of preinclusion attrition addresses the possible de-
crease in external validity by better defining the sampling population. The re-
searcher can address this problem by providing suitable information in publica-
tions so that the reader will be better able to evaluate the selection of the sample.
after informed consent and pre-experimental measures have been obtained (Cook
& Campbell, 1979).
Zelen (1979, 1982) also suggested that the researcher does not need to seek
consent from patients in comparison groups that receive the same treatment that
they would have received in the absence of the research study. Zelen (1979)
reasoned that for these patients there is no departure from their expectations of
treatment, so they do not need to be informed of the research. This procedure
may significantly reduce preinclusion attrition in these comparison groups by not
giving subjects a choice about study participation. Zelen (1979) suggested this
procedure for medical trials, though it may be less appropriate for psychological
research as many psychological studies require subjects to complete question-
naires specifically for research purposes. Zelen’s method (1979, 1982) also raises
important ethical issues (Curran, 1979; Fost, 1979; Relman, 1979) about in-
formed consent that must be carefully considered. While decreasing attrition is a
worthwhile goal, the researcher must still adhere to the standards in the Ethical
Principles of Psychologists (Principle 9d) (American Psychological Association,
1981), which usually requires informed consent for all subjects participating in
research.
Preinclusion attrition can result in a case in which the researcher has some data
for a large set of potential subjects but only has complete data for a smaller subset
of subjects. This situation can result when researchers use a screening interview
prior to actual study participation. In this circumstance, preinclusion attrition
will yield a sample that may be biased, and the sample statistics may misrepresent
the actual population statistics. The researcher can use statistical procedures to
estimate the actual population statistics based on the biased sample. For example,
in a study to estimate the prevalence of depression in a mental health center
sample, researchers (Frank, Schulberg, Welch, Sherick, & Costello, 1985) used a
screening interview to obtain information about demographics and depression
status. The researchers followed the screening interview with a more in-depth
diagnostic interview, but they found that many people either declined participa-
tion or failed to keep an appointment for the longer interview. This type of
preinclusion attrition (if inclusion is defined as participation in the second inter-
view) may have biased their sample and their estimate of prevalence.
Frank et al. (1985) implemented a two-step statistical procedure suggested by
Welch, Frank, and Costello (1983) to correct for sampling bias. This procedure
involves an a priori choice of variables that might predict the rule to describe self-
selection; for example, subjects who are older, more depressed, etc. may be less
likely to return for the diagnostic interview. Data for these variables are entered
into a regression equation to derive a constant to describe the self-selection rule
for inclusion in the diagnostic interview. This selection effect correction factor is
entered into a subsequent regression equation to predict the actual dependent
variable, in this case, the prevalence of depression. The actual statistics for this
procedure are lengthy and complex, and the reader is advised to consult the
original sources for the mathematics (Frank et al., 1985; Welch et al., 1983).
The assumptions underlying this analysis are the same as for any regression
analysis (e.g., normally distributed sample, linear relations between variables,
504 S. N. Flick
etc. (Tabachnick & Fidell, 1983)), and to the extent that these assumptions are
violated, the analysis will not yield a valid result. Also, the result is more valid
when the data entering the analysis are closer to complete, and the results may be
biased to the extent that the data are incomplete. However, in appropriate cases,
this analytic procedure provides a useful way to estimate sample statistics taking
into account the effects of preinclusion attrition.
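The two-step logic can be illustrated with a small simulation. The sketch below does not reproduce the Welch et al. (1983) mathematics; it substitutes a simpler stand-in (a linear probability model for the self-selection rule, followed by inverse-probability reweighting of the completers) to show how an estimated selection rule can correct a prevalence estimate. All variable names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Hypothetical screening data for a mental health center sample.
age = rng.normal(40, 10, n)
screen = rng.normal(20, 5, n)                  # screening depression score
depressed = screen + rng.normal(0, 3, n) > 24  # "true" diagnostic status

# Self-selection: older and more depressed people are less likely to
# return for the in-depth diagnostic interview.
p_return = 1 / (1 + np.exp(-(3.0 - 0.03 * age - 0.06 * screen)))
returned = rng.random(n) < p_return

true_prev = depressed.mean()
naive = depressed[returned].mean()             # biased: completers only

# Step 1: estimate the self-selection rule from variables chosen a
# priori (a linear probability model is a crude stand-in here).
X = np.column_stack([np.ones(n), age, screen])
gamma, *_ = np.linalg.lstsq(X, returned.astype(float), rcond=None)
p_hat = np.clip(X @ gamma, 0.05, 0.95)

# Step 2: reweight completers by 1 / P(return), so that the kinds of
# people who were less likely to return count for more.
w = 1 / p_hat[returned]
corrected = np.average(depressed[returned], weights=w)

print(f"true prevalence      {true_prev:.3f}")
print(f"completers-only      {naive:.3f}")
print(f"selection-corrected  {corrected:.3f}")
```

With self-selection of this kind the completers-only estimate understates prevalence, and the correction should move the estimate back toward the population value.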
If subjects drop out of a study after they have been included, the study’s validity,
both internal and external, can be severely jeopardized. When postinclusion
attrition is systematic, it threatens the representativeness of the sample as a whole,
thereby drawing into question the external validity of the study. Subjects may
drop out of the research protocol (evenly among groups) because they are similar
along some dimension, such as an aversion to completing lengthy psychological
tests. Even if the experimental group shows vast improvement compared to the
control, the researcher cannot assert that therapy X works for subjects randomly
selected from population Y. Rather, the researcher can only say that therapy X
works for people who are willing to complete lengthy forms. These subjects may
have self-selected along some dimension that mediates willingness to participate,
such as patience, psychological mindedness, etc.
Postinclusion attrition poses a particular threat to internal validity. If the sub-
jects in one or more groups self-select along some dimension relevant to the
research hypotheses, then the researcher will not be able to say with assurance
whether the experimental group changed because of the research intervention or
because of self-selection. For example, if subjects self-select based on motivation,
and subjects who are unmotivated drop out of the treatment group more often
than out of the control group, then the researcher will be unable to say if the
experimental group improved because of the intervention or because motivated
individuals were likely to improve regardless of intervention. Thus, attrition can
severely undermine the assurance with which the researcher can attribute the
causality of the findings.
While postinclusion attrition may be a current problem in the literature, it is us-
ually either handled perfunctorily or completely ignored. In the review of volume
53 of the Journal of Consulting and Clinical Psychology described earlier, 37 studies were
identified that assessed the research subjects at more than one point in time,
allowing for the possibility of postinclusion attrition. Of these studies, 26 were
studies of clinical interventions. Out of these 26 studies, 7 did not report any
attrition. In these cases, it is impossible to say whether attrition did not occur, or
whether the researchers simply chose to include only complete cases in their
reports. In one instance (Patterson & Forgatch, 1985), the report specifically said
that only complete cases were included, but gave no indication of the frequency of
completers in the sample. Six of the studies reported attrition rates for the com-
plete sample, but failed to report attrition by group. The difficulties associated
with attrition are compounded when attrition occurs differentially in different
groups, as described previously. Reporting attrition only for the sample as a whole
does not provide important information about biases that attrition may produce
by unbalancing the groups. Of the 26 intervention studies, 13 reported attrition for
each group separately. Six of these studies had an attrition rate of at least 20% in
one group, and 4 of these reported attrition of at least 30% in one group.
In most cases, the rate of dropouts was simply reported, and no effort was made
to explain how it might affect the results. However, several researchers made an
effort to examine the rate of attrition (e.g., Kirkley et al., 1985; Michelson &
Mavissakalian, 1985). These authors presented data that showed that the distribu-
tion of dropouts did not differ significantly across groups. The researchers also
presented a comparison of treatment completers versus dropouts in an attempt to
demonstrate that these groups did not differ. Completers and dropouts were
compared on basic demographic variables and were shown not to differ signifi-
cantly. Once the dropouts were shown to be “similar” to the completers based on
nonsignificant comparisons of demographic variables, the sample of completers
was assumed to be unbiased by attrition and the data were handled as if com-
pleters were representative of the whole initial sample, including the dropouts.
There are several problems with the aforementioned method for handling drop-
outs. Showing that subjects did not drop out of one group significantly (at p < .05)
more often than they dropped out of the other groups does not show that dropouts
are randomly distributed. Demonstrating that dropouts and completers do not
significantly differ on demographic variables does not show that they are in fact
the same, nor that they are drawn from the same population. In both cases, the
researcher is in a position of trying to prove the null hypothesis, which is not
possible given current statistical conventions.
Even if the numbers of dropouts are equally distributed among groups, the
researcher could not say with certainty that the samples are unbiased. Equal
numbers of subjects could drop out of each group in the study for very different
reasons (Howard et al., 1986). For example, subjects might drop out of the
treatment group due to a lack of motivation for treatment, while subjects who are
highly motivated for treatment might drop out of a waiting list control group in
order to seek treatment elsewhere. In this case, equal numbers of dropouts from
each group would result in a biased sample because subjects dropped out of
groups differentially along a dimension that may be important to treatment out-
come. The researcher could not assert with assurance that any treatment result is
attributable to the treatment specifically and not to differences in the level of
motivation for treatment among the groups.
& Stuart, 1984) dropouts from a weight loss program were shown to have less
confidence that they would reach their weight loss goals than those completing the
program.
Every treatment study could be viewed as a prospective study of who drops out
of the type of treatment under investigation, given the population. The researcher
could build into a study multiple measurement points for variables related to
outcome and for those hypothetically related to likelihood of dropping out of
treatment. For example, at the end of a therapy session, the therapist and/or
client might rate how likely the client is to return for the next session, and if he/she
is not likely to return, what the reasons would be. Measuring these types of
variables may help to make attrition a phenomenon for study which may yield
interesting research.
There are two major benefits to routinely including assessment devices de-
signed to discriminate between dropouts and completers. First, instead of trying
to show that dropouts and completers are alike, one could show how they are
different. The purpose of research could be not only to show that treatment X
works, but to show that treatment X works for subjects who are high on factor Y,
while subjects who are high on factor Z are more likely to drop out (Howard et al.,
1986). A second benefit of including variables that are likely to differentiate
dropouts from completers would result if the researcher tries to replace missing
data statistically. Statistical methods are described in greater detail below.
PREVENTING ATTRITION
While it is unlikely that all attrition is preventable, there are some suggestions in
the literature for ways to minimize dropouts. One way to encourage continued
participation in research studies is to require subjects to put down a refundable
deposit (Kazdin, 1980) with large deposits being more effective than small ones
(Hagen, Foreyt, & Durham, 1976). Requiring a deposit prior to participation in a
study may be an effective way of reducing postinclusion attrition (Follick, Fowler,
& Brown, 1984). The disadvantage of requiring a deposit is that it may simultane-
ously increase preinclusion attrition. Subjects may self-select prior to inclusion
based on variables such as economic solvency and/or motivation. However, the
type of selection caused by a deposit might not threaten the validity of the study. If
the researcher wants to extrapolate findings to clients who pay for services, then
asking for a deposit in order to decrease postinclusion attrition will make the
study conditions more similar to real world contingencies.
Reducing the interval between the time the client calls for an appointment and
the date of the appointment may reduce the likelihood of dropouts (Benjamin-
Bauman, Reiss, & Bailey, 1984; Folkins, Hersch, & Dahlen, 1980). A friendly
receptionist and pleasant waiting area will not guarantee continued subject partic-
ipation, but subjects might be less likely to drop out given these amenities. Follow-
up after a missed appointment may be helpful in keeping clients in treatment. In a
study by Lowe (1982), sending a letter that automatically rescheduled the client
for another appointment increased the likelihood of clients returning for further
treatment. However, it also increased the no-show rate.
In addition to manipulating the logistics of offering treatment in the context of
a study, it has been suggested that clients who are better informed about what to
expect in therapy are more likely to continue. Several researchers have designed
There is no simple rule of thumb to determine how much attrition is too much for
a given sample. The researcher cannot say that if X subjects or X% of the sample
drops out, or if the ratio of dropouts between groups is X to Y then there is a
problem. Instead of a simple rule, the researcher must base these judgments on
empirical evidence. One reason that data-based tests are necessary is because of
the nature of statistical tests. If the F statistic for a given analysis borders on
significance in either direction (e.g., p = .050 or p = .0494), then it is possible that
a low rate of attrition might have introduced sufficient bias to determine statistical
significance or nonsignificance. One method of data-based analysis for determining
whether attrition has biased the sample is provided below (Cook & Campbell,
1979; Jurs & Glass, 1971). The researcher can also be particularly cautious to
safeguard validity and use methods for handling the data which are detailed in the
next two sections.
To determine whether attrition has biased the sample, Jurs and Glass (1971)
suggested performing a series of multivariate analyses of variance (MANOVAs)
on the data. They suggested that the experimenter should assign a dummy varia-
ble to designate attrition status, and should assign a value to each subject based on
whether they completed or dropped out. The independent variables in the
MANOVA would be attrition status and experimental treatment group. Depen-
dent variables would be chosen from all variables measured in the study which
might discriminate between dropouts and completers. A series of two-way
MANOVAs would be performed (attrition × treatment). If any of the attrition
MANOVAs is significant, then the study has been compromised by attrition, and
validity is threatened. Jurs and Glass (1971) make no mention of an appropriate
level of significance, so one can assume that they intend to use p < .05. However,
to say that groups are not different at p < .05 does not imply that they are the
same, as one cannot prove the null hypothesis (that dropouts and completers are
both from the same population). However, current statistical conventions do not
provide a method for showing that two groups are the same (i.e., accepting the
null hypothesis). The researcher might consider increasing the level of chance
probability, for example p > .20 or p > .30, up to p = .50. This method would give
the researcher greater assurance that attrition is random with respect to the
variables measured in the study and that the sample is not biased by attrition.
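The Jurs and Glass (1971) logic can be sketched as follows, using a single univariate two-way ANOVA (attrition × treatment) in place of the full MANOVA and one hypothetical pretest variable; the F statistic is computed directly from the balanced-design sums of squares.

```python
import numpy as np

rng = np.random.default_rng(1)
r = 30                       # subjects per cell (balanced 2 x 2 for simplicity)

# Hypothetical pretest scores: in both groups, dropouts scored higher at
# pretest than completers, i.e., attrition is not random.
cells = {
    ("treatment", "completer"): rng.normal(50, 8, r),
    ("treatment", "dropout"):   rng.normal(57, 8, r),
    ("control",   "completer"): rng.normal(50, 8, r),
    ("control",   "dropout"):   rng.normal(56, 8, r),
}
data = np.concatenate(list(cells.values()))
grand = data.mean()
cell_means = {k: v.mean() for k, v in cells.items()}
attr_means = {a: np.mean([cells[(g, a)] for g in ("treatment", "control")])
              for a in ("completer", "dropout")}

# Main-effect F for the attrition dummy variable in a balanced two-way
# ANOVA (the treatment and interaction terms follow the same pattern).
ss_attr = 2 * r * sum((m - grand) ** 2 for m in attr_means.values())
ss_within = sum(((cells[k] - cell_means[k]) ** 2).sum() for k in cells)
df_within = 4 * r - 4
f_attr = (ss_attr / 1) / (ss_within / df_within)
print(f"F(1, {df_within}) for attrition status = {f_attr:.1f}")
```

A significant attrition effect of this kind would indicate that the completers are not representative of the initial sample on this variable.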
If the sample is shown to be unbiased, then completers-only analysis may be
appropriate. If the sample is biased, the researcher cannot be assured that the
results of the study are attributable to the experimental manipulation as opposed
to subject self-selection through attrition. In this case, the researcher should avoid
using completers-only analysis. Instead, he/she should implement other methods
for handling the data.
Several statistical methods have been proposed to manage research data that have
been affected by attrition. One approach is the use of covariance techniques to
adjust for inequalities between groups on variables that influence attrition. For
example, if one analyzes only completers’ data, one may find that the groups are
no longer equivalent on pretest measures for variables that may have affected
attrition (e.g., motivation, expectancies about treatment, etc.). When subjects
self-select out of groups, the true experimental nature of the study is undermined
and the researcher is left with a quasi-experiment. Kenny (1975) suggests that
standardized change score analysis is the covariance technique of choice for
experimental studies which have become quasi-experiments due to attrition. The first
step in a standardized change score analysis is to separately standardize the
pretest and posttest scores for the outcome measures. Next, the independent
variables that are shown to relate to attrition (e.g., using the Jurs & Glass (1971)
procedure, as previously described) are partialed out (using partial correlation) of
these standardized scores. The resulting standardized scores are then used to
calculate (pre minus post) difference scores. Difference scores can be entered into
t-tests or analysis of variance equations to determine whether the groups were
different with respect to their level of change (pre minus post difference) over the
course of the study. (For more complete descriptions of statistical adjustments, see
Kenny, 1975.) While these statistical procedures are the preferred method when
subjects have dropped out, the results should be reported with greater caution
than would be necessary if the design had remained a “true” experiment.
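The steps above can be sketched for a single hypothetical outcome measure, with "motivation" standing in for a variable shown to relate to attrition; the partialing is done by simple residualization and the t statistic is computed directly rather than through a statistics package.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
group = np.repeat([0, 1], n // 2)     # 0 = control, 1 = treatment

# Hypothetical variable related to attrition; it also affects the outcome.
motivation = rng.normal(0, 1, n)
pre = rng.normal(60, 10, n)           # higher = more symptomatic
post = pre - 8 * group - 3 * motivation + rng.normal(0, 5, n)

def zscore(x):
    return (x - x.mean()) / x.std(ddof=1)

def residualize(y, x):
    # Partial a covariate out of y by simple linear regression.
    return y - np.polyval(np.polyfit(x, y, 1), x)

# 1. Standardize the pretest and posttest scores separately.
z_pre, z_post = zscore(pre), zscore(post)
# 2. Partial the attrition-related variable out of each standardized score.
z_pre_r = residualize(z_pre, motivation)
z_post_r = residualize(z_post, motivation)
# 3. Pre-minus-post difference scores, compared across groups by t-test.
change = z_pre_r - z_post_r
a, b = change[group == 1], change[group == 0]
t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / a.size
                                    + b.var(ddof=1) / b.size)
print(f"group difference in standardized change: t = {t:.2f}")
```
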
Several methods have been devised to statistically replace missing data using
regression equations (Bloom, 1984; Frane, 1976). Computer packages such as
BMDP (Dixon et al., 1983) have programs (PAM) designed to replace missing
data using a regression model. Regression analysis develops a linear model relat-
ing the independent variables to the dependent variables. If some of the depen-
dent variables (e.g., outcome) are missing due to attrition, then they can be
predicted by entering that subject’s independent variables into the regression
equation. For this purpose, regression programs give best results when the predic-
tor variables are highly correlated with both the outcome measures to be predicted
and with attrition. Whereas basic demographic variables are unlikely to predict
either attrition or outcome well, variables such as expectancy might significantly
discriminate between dropouts and completers and might also be highly correlat-
ed with outcome measures in the group of completers. Including these types of
variables will maximize the reliability of prediction of missing data. However,
these regression equations are based on the premise that completers and dropouts
are from the same population, and that dropouts are randomly distributed. Re-
gression also assumes that the variables are related linearly and that the sample is
normally distributed (Tabachnick & Fidell, 1983). To the extent that any of these
assumptions are not tenable, the results will be biased.
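The BMDP PAM program itself is not shown here; the sketch below illustrates only the underlying regression-replacement idea, using ordinary least squares and two hypothetical predictors (pretest score and expectancy, the latter chosen because it relates to both outcome and attrition).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 120
pretest = rng.normal(50, 10, n)
expectancy = rng.normal(0, 1, n)      # predicts both outcome and attrition
outcome = 0.7 * pretest + 5 * expectancy + rng.normal(0, 6, n)

# Low-expectancy subjects are more likely to be missing at posttest.
# (In a real study the dropout outcomes are unknown; this simulation
# keeps them only so the replacement can be checked.)
dropped = rng.random(n) < 1 / (1 + np.exp(1 + expectancy))
observed = ~dropped

# Fit the regression on completers, then replace each dropout's missing
# outcome with the value predicted from that subject's known variables.
X = np.column_stack([np.ones(n), pretest, expectancy])
beta, *_ = np.linalg.lstsq(X[observed], outcome[observed], rcond=None)
filled = outcome.copy()
filled[dropped] = X[dropped] @ beta

print(f"{dropped.sum()} of {n} missing outcomes replaced by regression")
```

Because expectancy predicts both outcome and attrition, the predicted values track the dropouts' true scores far better than substituting the completers' mean would.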
The regression techniques to replace missing data which were described above
can be implemented optimally when used in conjunction with endpoint analysis
(Kriss, Hollon, DeRubeis, & Evans, 1981). Kriss et al. (1981) suggested a meth-
od whereby the researcher develops regression equations to predict outcome at
each data collection point. Using this method, outcome data is collected periodi-
cally, and a regression equation is developed for each measurement point using all
data available at that point from both completers and dropouts. Endpoint data for
each dropout is entered into the regression equation for the individual’s point of
dropout (week X) to predict his/her score for the following measurement point
(X + 1). This prediction is subsequently entered into the equation to predict
outcome for the next point (X + 2). The procedure is repeated iteratively until it
results in an outcome score for the final measurement point in the study. Using
regression replacement in conjunction with endpoint analysis, as described above,
provides a very good estimate of the outcome measures, and when possible, it is
the preferred method.
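The iterative procedure can be sketched as follows, assuming hypothetical weekly symptom scores; each measurement point gets a simple bivariate regression on the preceding week's value, fit on everyone observed at that point.

```python
import numpy as np

rng = np.random.default_rng(5)
n, weeks = 60, 6

# Hypothetical weekly symptom scores that improve (decline) over time.
scores = np.zeros((n, weeks))
scores[:, 0] = rng.normal(30, 5, n)
for t in range(1, weeks):
    scores[:, t] = 0.9 * scores[:, t - 1] - 1.0 + rng.normal(0, 2, n)

last_seen = rng.integers(2, weeks, n)        # index of last observed week
last_seen[rng.random(n) < 0.7] = weeks - 1   # most subjects complete

filled = scores.copy()
for t in range(1, weeks):
    # One regression per measurement point, fit on everyone still
    # observed at week t, predicting week t from week t - 1.
    obs = last_seen >= t
    b = np.polyfit(filled[obs, t - 1], scores[obs, t], 1)
    # Carry each dropout forward: predict week t from the observed (or
    # previously predicted) week t - 1 value, iterating to the last week.
    todo = last_seen < t
    filled[todo, t] = np.polyval(b, filled[todo, t - 1])

print(f"final-week mean over all {n} subjects: {filled[:, -1].mean():.1f}")
```
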
Regression techniques can also be combined with time-controlled analysis.
Because the main drawback of time-controlled analysis is an underestimation of
the actual treatment effect, regression can be useful in estimating the treatment
effect for subjects who did not complete treatment (Bloom, 1984). The estimated
treatment effect is added to the dropout’s actual outcome measures to provide an
estimate of the outcome measure if that person had completed treatment. While
this may be a desirable technique, it has several drawbacks. To the extent that
data are missing for dropouts and completers at the last data collection point, the
estimate of the treatment effect will be biased. Additionally, this method assumes
that dropouts and completers are from the same population, and that dropouts
would do as well as completers if they had only finished treatment. Each of these
assumptions may or may not be true. Thus, the method may slightly overestimate
treatment effects for individuals and for the population as a whole.
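Bloom's (1984) estimator is not reproduced here; the sketch below shows only the general idea of adding a regression-estimated treatment effect, prorated over the missed weeks, to a dropout's last observed score. All variables and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
n, total_weeks = 80, 8

weeks_attended = rng.integers(2, total_weeks + 1, n)
completer = weeks_attended == total_weeks

# Symptom score at each subject's last measurement; in this hypothetical
# data set every week of treatment lowers it by about two points.
baseline = rng.normal(40, 6, n)
last_score = baseline - 2.0 * weeks_attended + rng.normal(0, 4, n)

# Regress the last observed score on weeks of treatment to estimate the
# per-week effect, then credit each dropout with that effect for the
# weeks he or she missed. Completers are left unchanged.
slope, intercept = np.polyfit(weeks_attended, last_score, 1)
missed = total_weeks - weeks_attended
estimated_final = last_score + slope * missed    # slope is negative here

print(f"estimated per-week treatment effect: {slope:.2f}")
```

As noted above, this assumes that dropouts would have responded to the remaining treatment as completers did, which may overestimate their final outcomes.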
The most conservative of the statistical techniques for dealing with data that are
missing due to attrition is a worst case scenario analysis. In this case, the research-
er assumes that all of the dropouts would have performed on outcome measures in
a direction that was least favorable to the researcher’s hypotheses (Stephan &
McCarthy, 1958, pp. 252-255). With this method, the dropouts from the treat-
ment group are assigned the worst possible scores on outcome measures, and the
dropouts from the control group are assigned the best possible scores. If the
researcher still finds a significant difference, then he/she is justified in assuming
that the difference is not attributable to sample bias induced by attrition. If the
researcher has a very large treatment effect, then this test is a useful way to rule
out attrition bias as the cause of the results. However, this test is extremely
conservative, and should be used with caution. If the test results show significant
differences, one can assume that it is a “true positive” (i.e., that the results are
not spurious due to attrition), but a negative test is ambiguous, and
should not be taken to mean that no difference exists. In this case, the researcher
can only assume that any existing difference is not sufficiently large to pass a very
conservative test.
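A minimal sketch of the worst case substitution, assuming a hypothetical 0-to-100 outcome scale (0 best, 100 worst) and six dropouts per group; the Welch t statistic is computed directly.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 40                       # subjects per group

# Hypothetical outcomes: 0 is the best possible score, 100 the worst.
treat = rng.normal(35, 10, n)
control = rng.normal(55, 10, n)
dropped = np.zeros(n, dtype=bool)
dropped[:6] = True           # suppose six dropouts in each group

# Worst case: treatment-group dropouts get the worst possible score,
# control-group dropouts the best possible score.
treat_wc = np.where(dropped, 100.0, treat)
control_wc = np.where(dropped, 0.0, control)

def welch_t(a, b):
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return (a.mean() - b.mean()) / se

print(f"observed t   = {welch_t(treat, control):.2f}")
print(f"worst-case t = {welch_t(treat_wc, control_wc):.2f}")
```

Here even a large observed difference shrinks drastically under the worst case substitution, illustrating why only a very large treatment effect survives this test.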
A more appropriate approach to the attrition problem is a less conservative
version of the worst case scenario. This “bracketing procedure” involves testing for
the significance of the treatment effect using any two methods for treating the data
that are determined to be biased in opposite directions (Chassan, 1979; Cook &
Campbell, 1979). If both tests yield the same result, then one can conclude with
some assurance that the result is valid. For example, in a depression study, one
might consider a situation in which attrition occurs only in the treatment group
(none from the control group) and it is due to low motivation for treatment. First,
one might do an endpoint analysis, though the researcher knows that this is likely
to be biased against finding a treatment effect. The scores immediately prior to
dropout were probably weighted in the direction of pathology, given the natural
time course of depression. An endpoint analysis would make the treatment group
look worse than the “real” case. However, if the dropouts had stayed in treatment,
one would expect that the dropouts would not have improved as much as the
completers due to low motivation. A completers-only analysis would make the
treatment group look better than the “real” case, thus biasing in favor of finding a
treatment effect. The use of both analyses sets up parameters similar to the worst
case analysis, but the parameters are not unreasonably conservative. If the two
analyses agree, then one can draw a conclusion with somewhat greater assurance.
If the two analyses result in opposite conclusions, then the case is still ambiguous.
The experimenter probably has an effect that is only marginally significant, and
he/she must interpret it carefully.
SUMMARY
Attrition, both preinclusion and postinclusion, can cause serious problems for the
researcher by inducing biases into clinical trials. However, several procedures can
be of practical use to the researcher in preserving validity. Careful reporting of
selection effects for experimental samples will help to make the biases induced by
preinclusion attrition more explicit. The researcher can also include variables in
the research that are likely to discriminate between dropouts and completers.
These variables can help to answer the questions of who is likely to drop out of
treatment and why they are likely to do so.
Endpoint analysis and time-controlled analysis can be useful methods for main-
taining validity when attrition occurs. While it may not be realistic to continue
collecting data on all subjects who have dropped out, to the extent that the
researcher is able to do so, the added data will be useful in deriving valid
conclusions.
Statistical methods of data replacement can be pragmatic ways of dealing with
attrition. Many of the statistical approaches suggested involve
regression techniques. As long as the assumptions of regression (e.g., normally
distributed sample, linear relationships, etc.) are not violated, and the researcher
has at least five subjects for each independent variable entered in the regression
(Tabachnick & Fidell, 1983, p. 92), then these techniques can be practical means
for preserving validity.
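For instance, data replacement by regression with a single pretest predictor might look like the following sketch. All scores are invented, and a real analysis would first check the regression assumptions and the subjects-per-predictor rule noted above:

```python
"""Sketch of data replacement by regression: predict a dropout's
missing post-test score from the completers' pretest/post-test
relationship.  All numbers are hypothetical."""

pre_completers = [30.0, 32.0, 29.0, 34.0, 31.0, 33.0]
post_completers = [14.0, 18.0, 12.0, 20.0, 15.0, 19.0]

# Ordinary least squares for post = slope * pre + intercept,
# fit on completers only.
n = len(pre_completers)
mean_pre = sum(pre_completers) / n
mean_post = sum(post_completers) / n
slope = sum((x - mean_pre) * (y - mean_post)
            for x, y in zip(pre_completers, post_completers)) \
        / sum((x - mean_pre) ** 2 for x in pre_completers)
intercept = mean_post - slope * mean_pre

# Replacement values for two hypothetical dropouts' missing post-tests.
imputed = [slope * pre + intercept for pre in (31.0, 35.0)]
print([round(v, 1) for v in imputed])
```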
Whatever technique is implemented, the researcher must be sensitive to the
possible biases that can be induced. Most single approaches to the data will
introduce some bias; comparing two alternative methods that produce biases in
opposite directions may yield a more valid result.
REFERENCES
Curran, W. J. (1979). Reasonableness and randomization in clinical trials: Fundamental law and
governmental regulation. The New England Journal of Medicine, 300, 1273-1275.
Dixon, W. J., Brown, M. B., Engelman, L., Frane, J. W., Hill, M. A., Jennrich, R. I., & Toporek,
J. D. (Eds.). (1983). BMDP Statistical Software (1983 Revised Printing). Berkeley: University of
California Press.
Folkins, C., Hersch, P., & Dahlen, D. (1980). Waiting time and no-show rate in a community mental
health center. American Journal of Community Psychology, 8, 121-123.
Follick, M. J., Fowler, J. L., & Brown, R. A. (1984). Attrition in worksite weight-loss interventions:
The effects of an incentive procedure. Journal of Consulting and Clinical Psychology, 52, 139-140.
Fost, N. (1979). Consent as a barrier to research. The New England Journal of Medicine, 300, 1272-1273.
Frane, J. W. (1976). Some simple procedures for handling missing data in multivariate analysis.
Psychometrika, 41, 409-415.
Frank, R. G., Schulberg, H. C., Welch, W. P., Sherick, H., & Costello, A. J. (1985). Research
selection bias and the prevalence of depressive disorders in psychiatric facilities. Journal of Consult-
ing and Clinical Psychology, 53, 370-376.
Fraps, C. L., McReynolds, W. T., Beck, N. C., & Heisler, G. H. (1982). Predicting client attrition
from psychotherapy through behavioral assessment procedures and a critical response approach.
Journal of Clinical Psychology, 38, 759-764.
Friedman, A. S. (1975). Interaction of drug therapy with marital therapy in depressive patients.
Archives of General Psychiatry, 32, 619-637.
Garfield, S. L. (1963). A note on patients’ reasons for terminating therapy. Psychological Reports, 13,
38.
Garfield, S. L. (1978). Research on client variables in psychotherapy. In Garfield, S. L. & Bergin,
A. E. (Eds.), Handbook of psychotherapy and behavior change (pp. 191-232). New York: John Wiley &
Sons.
Greenspan, M., & Kulish, N. M. (1985). Factors in premature termination in long-term psycho-
therapy. Psychotherapy, 22, 75-82.
Hagen, R. L., Foreyt, J. P., & Durham, T. W. (1976). The dropout problem: Reducing attrition in
obesity research. Behavior Therapy, 7, 463-471.
Heitler, J. B. (1973). Preparation of lower-class patients for expressive group psychotherapy. Journal
of Consulting and Clinical Psychology, 41, 251-260.
Howard, K. I., Krause, M. S., & Orlinsky, D. E. (1986). The attrition dilemma: Toward a new
strategy for psychotherapy research. Journal of Consulting and Clinical Psychology, 54, 106-110.
Jurs, S. G. & Glass, G. V. (1971). The effect of experimental mortality on the internal and external
validity of the randomized comparative experiment. The Journal of Experimental Education, 40, 62-66.
Kazdin, A. E. (1980). Research design in clinical psychology. New York: Harper & Row, Publishers.
Kenny, D. A. (1975). A quasi-experimental approach to assessing treatment effects in the non-
equivalent control group design. Psychological Bulletin, 82, 345-362.
Kirkley, B. G., Schneider, J. A., Agras, W. S., & Bachman, J. A. (1985). Comparison of two group
treatments for bulimia. Journal of Consulting and Clinical Psychology, 53, 43-48.
Kolb, D. L., Beutler, L. E., Davis, C. S., Crago, M., & Shanfield, S. B. (1985). Patient and
therapy process variables relating to dropout and change in psychotherapy. Psychotherapy, 22, 702-
710.
Kriss, M. R., Hollon, S. D., DeRubeis, R. J., & Evans, M. D. (1981, November). Methodological
advances in the CPT project: Pooling multiple observations and estimating end of treatment scores. Paper
presented at the meeting of the Association for the Advancement of Behavior Therapy, Toronto,
Canada.
Lasky, J. J. (1962). The problem of sample attrition in controlled treatment trials. Journal of Nervous
and Mental Disorders, 135, 332-338.
Lowe, R. H. (1982). Responding to “no-shows”: Some effects of follow-up method on community
mental health center attendance patterns. Journal of Consulting and Clinical Psychology, 50, 602-603.
Michelson, L., & Mavissakalian, M. (1985). Psychophysiological outcome of behavioral and phar-
macological treatments of agoraphobia. Journal of Consulting and Clinical Psychology, 53, 229-236.
Mitchell, C., & Stuart, R. B. (1984). Effect of self-efficacy on dropout from obesity treatment.
Journal of Consulting and Clinical Psychology, 52, 1100-1101.
Morrow, G. R. (1980). How readable are subject consent forms? Journal of the American Medical
Association, 244, 56-58.
Patterson, G. R., & Forgatch, M. S. (1985). Therapist behavior as a determinant for client noncompliance:
A paradox for the behavior modifier. Journal of Consulting and Clinical Psychology, 53, 846-851.
Pekarik, G. (1983). Improvement in clients who have given different reasons for dropping out of
treatment. Journal of Clinical Psychology, 39, 909-913.
Pekarik, G. (1985). The effects of employing different termination classification criteria in dropout
research. Psychotherapy, 22, 86-91.
Relman, A. S. (1979). The ethics of randomized clinical trials: Two perspectives. The New England
Journal of Medicine, 300, 1272.
Rush, A. J., Beck, A. T., Kovacs, M., & Hollon, S. (1977). Comparative efficacy of cognitive
therapy and pharmacotherapy in the treatment of depressed outpatients. Cognitive Therapy and
Research, 1, 17-37.
Rush, A. J., & Watkins, J. T. (1981). Cognitive therapy with psychologically naive depressed
outpatients. In Emery, G., Hollon, S. D., & Bedrosian, R. C. (Eds.), New directions in cognitive
therapy: A casebook (pp. 5-28). New York: The Guilford Press.
Stephan, F. F., & McCarthy, P. J. (1958). Sampling opinions: An analysis of survey procedure. New York:
John Wiley & Sons, Inc.
Strupp, H. H., & Bloxom, A. L. (1973). Preparing lower-class patients for group psychotherapy:
Development and evaluation of a role-induction film. Journal of Consulting and Clinical Psychology, 41,
373-384.
Tabachnick, B. G., & Fidell, L. S. (1983). Using multivariate statistics. New York: Harper & Row.
Welch, W. P., Frank, R. G., & Costello, A. J. (1983). Missing data in psychiatric research: A
solution. Psychological Bulletin, 94, 177-180.
Wilson, D. O. (1985). The effects of systematic client preparation, severity, and treatment setting
on dropout rate in short-term psychotherapy. Journal of Social and Clinical Psychology, 3, 62-70.
Zelen, M. (1979). A new design for randomized clinical trials. The New England Journal of Medicine,
300, 1242-1245.
Zelen, M. (1982). Strategy and alternate randomized designs in cancer clinical trials. Cancer Treatment
Reports, 66, 1095-1100.