David B. Resnik
The Brody School of Medicine at East Carolina University, Greenville, NC, USA.
Online Publication Date: 01 June 2004
To cite this Article: Resnik, David B. (2004) 'The Precautionary Principle and
Medical Decision Making', Journal of Medicine and Philosophy, 29:3, 281–299
To link to this article: DOI: 10.1080/03605310490500509
URL: http://dx.doi.org/10.1080/03605310490500509
Downloaded By: [EBSCOHost EJS Content Distribution] At: 02:52 20 July 2007
ABSTRACT
The precautionary principle is a useful strategy for decision-making when physicians and
patients lack evidence relating to the potential outcomes associated with various choices.
According to a version of the principle defended here, one should take reasonable measures to
avoid threats that are serious and plausible. The reasonableness of a response to a threat depends
on several factors, including benefit vs. harm, realism, proportionality, and consistency. Since a
concept of reasonableness plays an essential role in applying the precautionary principle, this
principle gives physicians and patients a decision-making strategy that encourages the careful
weighing and balancing of different values that one finds in humanistic approaches to clinical
reasoning. Properly understood, the principle presents a worthwhile alternative to approaches to
clinical reasoning that apply expected utility theory to decision problems.
Keywords: cancer screening tests, evidence-based medicine, expected utility theory, medical
decision-making, precautionary principle, probability
I. INTRODUCTION
One of the oldest rules in medical decision-making is the adage that "an ounce of
prevention is worth a pound of cure." For most diseases, the harms and costs
entailed by preventive measures are much less than the harms and costs
associated with the disease. The rule appears to be a straightforward
application of expected utility theory (EUT) to clinical reasoning. For
example, to decide whether one should immunize children against a disease,
one should calculate the expected utilities of different choices and pick the
Address correspondence to: David B. Resnik, J.D., Ph.D., Department of Medical Humanities,
The Brody School of Medicine at East Carolina University, Greenville, NC 27858, USA.
E-mail: resnikd@mail.ecu.edu
© Taylor &amp; Francis Ltd.
option with the highest expected utility. EUT provides a scale for comparing
the expected costs and benefits of immunizing children against the expected
costs and benefits of not immunizing children.
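The arithmetic that EUT prescribes here can be sketched as follows. All probabilities and utilities below are hypothetical numbers chosen only to illustrate the calculation, not figures from the immunization literature.

```python
# Illustrative expected-utility comparison for an immunization decision.
# Every probability and utility is a made-up figure for illustration.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over a choice's outcomes."""
    return sum(p * u for p, u in outcomes)

# Option 1: immunize. Each outcome is (probability, utility).
immunize = [
    (0.999, 100),   # protected, no side effects
    (0.001, -50),   # rare adverse reaction
]

# Option 2: do not immunize.
no_immunize = [
    (0.95, 100),    # never contracts the disease
    (0.05, -500),   # contracts the disease
]

eu_immunize = expected_utility(immunize)        # 99.9 - 0.05 = 99.85
eu_no_immunize = expected_utility(no_immunize)  # 95 - 25 = 70.0

# EUT recommends the option with the highest expected utility.
best = "immunize" if eu_immunize > eu_no_immunize else "do not immunize"
print(best)  # immunize
```

With these assumed numbers, immunizing dominates; the point of the example is only that EUT reduces the choice to comparing two probability-weighted sums.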
What should one do when one does not have enough evidence to apply EUT
to a medical decision? Suppose that a 48-year-old male with no family history
of prostate cancer wants to know whether he should have a prostate specific
antigen (PSA) test to detect this disease. Prostate cancer is a serious illness that
mostly affects older males. About 3% of all men in the United States die from
prostate cancer, but more than two-thirds of prostate cancer patients are 75 or
older, with a median age of death of 77 (Centers for Disease Control, 2003). The PSA test is
a relatively inexpensive way of detecting early-stage prostate cancer. Although
many medical experts recommend that men who are 50 or older receive the
PSA test, there is widespread disagreement in the medical community about
the merits of administering the test because of insufficient evidence that finding
and treating early-stage prostate cancers saves quality-adjusted life years
(QALYs) (Centers for Disease Control, 2003). Many prostate cancers remain
localized and have no significant effect on male health. Moreover, treatments
for prostate cancer, such as surgery or radiation, can have harmful side effects,
such as impotence and incontinence. The test often yields misleading results
because it has a sensitivity of 86% (a false negative rate of 14%) and a
specificity of 33% (a false positive rate of 67%), which means that it misses
14% of prostate cancers and issues false alarms for 67% of the men who do
not have the disease (Hoffman, Gilliland, Adams-Cameron,
Hunt, & Key, 2002).1 The decision to perform (or not perform) the PSA test has
been and continues to be fraught with clinical, ethical, and legal controversy
(Gerard & Frank-Stromborg, 1998).2
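To see why a specificity of 33% is so troubling, one can work through Bayes' rule with the sensitivity and specificity cited above (Hoffman et al., 2002). The 10% prevalence among tested men is an assumed figure for illustration only.

```python
# Sketch of the PSA test's predictive value. Sensitivity and specificity
# are the figures cited in the text; the prevalence is a hypothetical
# assumption chosen only to illustrate the calculation.

sensitivity = 0.86   # P(positive test | cancer)
specificity = 0.33   # P(negative test | no cancer)
prevalence = 0.10    # assumed fraction of tested men who have cancer

false_negative_rate = 1 - sensitivity  # 0.14: cancers the test misses
false_positive_rate = 1 - specificity  # 0.67: healthy men flagged anyway

# Positive predictive value via Bayes' rule:
# P(cancer | positive) = P(pos | cancer) * P(cancer) / P(pos)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(round(false_positive_rate, 2))  # 0.67
print(round(ppv, 3))                  # ~0.125
```

Under this assumed prevalence, only about one positive result in eight reflects an actual cancer; the rest are false alarms that trigger further testing.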
How should one make a medical decision when faced with such
uncertainty? One possible approach to take under these conditions would be
to appeal to the precautionary principle (PP), a rule for decision-making that
many political groups and activists have urged that society adopt in response
to environmental and public health threats. According to a popular version of
the PP, lack of scientific proof should not be used as an excuse for failing to
take reasonable measures to avert a serious threat (European Commission,
2000). While the PP makes intuitive sense, there are many difficulties with
interpreting and applying this principle (Cranor, 2001; Soule, 2000). Under
some interpretations, the PP is an extremely risk-averse, anti-science rule
(Resnik, 2003). In order to use the PP to make rational decisions, one must
articulate a version of this principle that is reasonable and not excessively
risk-averse.
In this paper, I will argue that, properly understood, the PP can provide
physicians and patients with a useful approach to medical decisions when
EUT does not apply or does not yield clear recommendations. Since a concept
of reasonableness plays an essential role in applying the PP, this principle
gives physicians and patients a decision-making strategy that encourages the
careful weighing and balancing of different values that one finds in humanistic
approaches to clinical reasoning. I will defend this thesis as follows. In Section
II, I will explain how the PP differs from EUT. In Section III, I will explicate
and defend a version of the PP that could play a role in many types of practical
decisions. In Section IV, I will present an overview of different approaches to
medical decision-making and argue that the practical difficulties with
implementing EUT in medical decision-making create a niche for the PP. In
Section V, I will apply the version of the PP defined in Section III to a case
study.
of the EUT, then it would not challenge the status quo of environmental and
public health risk assessment, since risk assessment approaches already attempt
to take cost-effective measures to prevent harm (Cranor, 1993; Goklany, 2001).
Where the PP differs from EUT is in the ambiguous phrase "lack of full
scientific certainty." The PP authorizes decision-makers to make choices even
when they lack scientific certainty. If we are to understand the PP as challenging
the status quo (and both proponents and opponents of the PP agree that it
challenges the status quo), then, by implication, the status quo (i.e., risk
assessment) must not authorize choices when one lacks full scientific
certainty. To properly distinguish the PP from EUT, it is important to define
this ambiguous phrase as well as other terms contained in the PP.
& Urbach, 1989). In the long run, our probability assignments will reflect the
evidence we have gathered rather than our initial biases. Thus, I can assign a
probability, albeit a small one, to the statement "I will be attacked by a pack of
poodles on my way to work in Greenville, NC."
The problems with the subjective approach to probability are well known,
and I will not review all of these arguments and counterarguments here.
Briefly, the main drawback with the subjective approach is that Bayesian
updating may not overcome our initial biases. New evidence may not convert
subjective probabilities into objective ones, especially when we have
conducted only a few tests (Earman, 1992). Although there are some formal
and rational constraints on the assignment of subjective probabilities, such as
the axioms of probability theory and prohibitions against the possibility of a
Dutch Book, initial subjective probabilities can vary greatly: one person could
regard the probability of a statement as 99% while someone else could regard
its probability as only 1% (Resnik, 1987). After a few tests, these initial
assignments might change to 80% and 20%, but this would still not be enough
to overcome initial biases.
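This point can be sketched numerically. The likelihoods below are hypothetical: each positive test result is assumed to be twice as likely if the hypothesis is true. Two agents who start from priors of 99% and 1% and update on the same handful of results still end up far apart.

```python
# Sketch of Bayesian updating from two very different priors, showing
# that a few tests need not overcome initial bias. The likelihoods are
# hypothetical figures chosen for illustration.

def update(prior, p_e_given_h=0.8, p_e_given_not_h=0.4, n_tests=3):
    """Posterior P(H) after n_tests positive results, via Bayes' theorem."""
    p = prior
    for _ in range(n_tests):
        p = (p * p_e_given_h) / (p * p_e_given_h + (1 - p) * p_e_given_not_h)
    return p

optimist = update(0.99)  # starts nearly certain the hypothesis is true
skeptic = update(0.01)   # starts nearly certain the hypothesis is false

# After three tests the two agents still disagree sharply.
print(round(optimist, 3), round(skeptic, 3))  # 0.999 0.075
```

Each positive result doubles the odds in favor of the hypothesis, but starting odds of 99:1 versus 1:99 leave the agents on opposite sides of 50% even after several observations.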
One might also object that the statements I am calling "plausible" are really
only nomologically possible, where nomological possibility is understood as a
type of possibility short of logical possibility. A statement is
nomologically possible if it is consistent with scientific laws (Pargetter, 1984).
For example, it is logically possible for a rock to fly up in the sky when I
release it from my hand but nomologically impossible because this event
would be inconsistent with the law of gravity. My reply to this objection is that
nomological possibility does not capture the full sense of what we mean by
plausibility. It is nomologically possible that my car will fall apart into a
thousand pieces on the way to work, but I do not think that this is plausible.
The second ambiguity related to defining and applying the PP has to do with
the concept of reasonableness, which is another important part of the principle.
Many versions of the PP require that responses to threats be reasonable (Cranor,
2001). For example, the European Commission (2000) holds that the measures
taken in response to a threat should be proportional to the level of the threat,
consistent with other measures already taken, and based on an examination of
potential costs and benefits of responding to the threat, including economic costs
and benefits. Elsewhere, I argue that responses to a threat should take a realistic
attitude toward the threat (Resnik, 2003). For example, it would be
unreasonable to respond to a threat that is impossible to prevent. It would
also be unreasonable to take ineffective measures against a preventable threat.
other threats can be reversed. In the short run, a reversible threat may seem to
be very serious, but in the long run, an irreversible threat may be more serious.
For example, consider two potential threats to a house, damage to the roof
and damage to the foundation. In the short run, hail or wind damage to the
roof may cause a great deal of harm to the house, but in the long run, damage
to the foundation may be more serious, because this harm may not be
reversible. It is usually not too difficult to replace damaged shingles or other
roofing materials, but it may be difficult or impossible to repair a cracked
foundation.
Having defined some of the key terms in the PP, we can now offer a
definition of this principle:
PP (definition): One should take reasonable measures to prevent or mitigate
threats that are plausible and serious.
This is a very short definition of the PP. The concepts of reasonableness,
plausibility, and seriousness distinguish the PP from EUT, which uses
probability instead of plausibility, utility maximization instead of
reasonableness, and harm instead of seriousness.
interventions, but non-experts (i.e., politicians and the public) must approve of
any policy before it is adopted. This paper will focus on medical decisions, not
public health decisions.
The basic issue underlying this debate about the nature of medical
decision-making (or clinical reasoning) can be stated as a general question: is
medicine a science or an art? This question is as old as medicine itself
(Little, 1995; Porter, 1997). Most people who have put some serious thought
into the question recognize that medicine is both a science and an art. The
science of medicine involves the application of scientific knowledge, theories,
principles, and methods to clinical problems; the art of medicine resides in the
relationship between the physician and patient, the practical skills of the
physician, and the decision about whether to perform a medical intervention
(Clouser & Zucker, 1974). Both scientific facts and human values play roles in
medical decisions. Scientific facts play a role in understanding the clinical
problem as well as potential solutions. Human values play a role in choosing
among different options (Beauchamp & Childress, 2001).
Even though most people agree that medicine is both a science and an art,
some authors view it as more like a science than an art, while others view it as
more like an art than a science. Those who emphasize the scientific aspects of
medicine argue that it is possible to apply formal methods, such as logic,
statistics and decision theory, to clinical reasoning. Although this approach
recognizes that values still play a role in medical decisions, it holds that one
can use quantitative methods to understand how values enter into clinical
decisions (Albert, Munson, & Resnik, 1999; Gorovitz & MacIntyre, 1976;
Schaffner, 1986). EUT is a quantitative method that has had considerable
influence on medical decision-making. In the PSA testing case mentioned at
the beginning of the paper, one could assign values and probabilities to the
outcomes associated with the different choices (test vs. don't test) to
determine what one should do. EUT is not a value-free approach to
decision-making, because values play an important role in the calculation of
expected utility. However, since EUT is a quantitative approach to
decision-making, many people mistakenly view it as value-free (or "objective").
During the 1990s, evidence-based medicine (or EBM) was the new
buzzword for the scientific approach to clinical reasoning. According to EBM
proponents, many medical decisions and accepted practices are based on
tradition, prejudice, intuition and other factors that have little to do with the
evaluation of scientific evidence relating to the outcomes of controlled
experiments. The EBM approach recommends that doctors make decisions
based on a precise weighing of risks and benefits in light of the best available
scientific evidence (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000).
Medical decisions should maximize benefits and minimize risks for the
patient. The medical community can assess safety, efficacy, and
cost-effectiveness by studying the outcomes of various treatments and procedures.
Ideally, all medical decisions should be based on evidence from randomized
clinical trials, the so-called "gold standard" for medical research (Sackett
et al., 2000). EBM proponents also endorse the development of practice
guidelines and electronic databases to provide physicians with EBM
recommendations and up-to-date information (Boyle, 2000).
Those who do not accept the scientific approach to medical decision-making
are skeptical about attempts to use quantitative methods to understand
the role of values in decision-making. Although these methods can help our
understanding of clinical decisions, qualitative informal methods, such as
intuition and experience, as well as moral, political, religious, and aesthetic
values should play an important role in decision-making (Downie &
Macnaughton, 2000; Little, 1995; Murphy, 1976). Clinical reasoning is not
simply an application of quantitative methods to clinical problems; it involves
judgments and discretion as well as a careful weighing and balancing of
different values. It is humanistic, not scientific.
There are several reasons why clinical reasoning is not simply an
application of quantitative formal methods to clinical problems. First, doctors
must frequently make medical decisions with incomplete evidence (Miller,
1990). Very often there is not enough time to gather all the evidence one might
need to make a definitive diagnosis, and one must accept a working diagnosis
in order to begin treatment (Gorovitz & MacIntyre, 1976). For example, if a
78-year-old female has a 3-day history of a productive cough, fever, difficulty
breathing, and a chest x-ray consistent with left lower lobe pneumonia, a
doctor may begin antibiotic therapy before he or she knows the laboratory
results from a sputum culture, even though those results are essential for
determining the type of pathogen that is infecting the patient (Tierney,
McPhee, & Papadakis, 2002).
Second, the sheer complexity of medical problems can make it difficult to
develop definite solutions to clinical problems in real time (Gorovitz &
MacIntyre, 1976; Miller, 1990). Even the diagnosis and treatment of a
relatively simple human medical problem, such as pneumonia, may involve
many different organ systems as well as many different tests and treatments.
Moreover, it is frequently the case that a patient has more than one medical
problem or illness. Human beings are complex, living systems with many
different interacting parts, not controlled experiments.
Third, human values play a key role in determining whether to initiate a
particular test or treatment (Gorovitz & MacIntyre, 1976; Resnik, 1995). All
tests and treatments have risks and benefits. Patients and doctors must choose
among different therapeutic options by weighing risks and benefits.
Furthermore, doctors and patients do not always agree on the best way to
balance benefits and risks, since risk/benefit calculations often depend on
quality of life judgments (Little, 1995). Although science can help us to
determine the most effective means of promoting benefits or avoiding risks, it
cannot help us weigh and balance benefits and risks. To compare benefits and
risks, we must appeal to moral, aesthetic, political or religious values
(Beauchamp & Childress, 2001).
Fourth, human values play an important role in defining many of the
foundational concepts in healthcare, such as "health," "disease," "normal
functioning," "dysfunction," "disability," and "quality of life." We often
appeal to our conception of the good life to classify specific traits as diseases.
A person with a disease is someone who deviates from these normative ideals.
For example, at one time physicians and psychiatrists regarded homosexuality
and promiscuity as diseases. Since we often regard diseases as "bad" traits,
we feel morally obligated to treat all diseases in order to restore the person to
the good life. Although some scholars, such as Boorse (1975), have made
brilliant attempts to define these concepts in objective, scientific terms, many
other scholars are skeptical of such attempts.10
I do not intend to settle this ongoing debate about the nature of clinical
reasoning in this paper. However, I would like to show how the PP can shed
some light on this debate. First, as we saw earlier, the PP applies to decisions
under ignorance. One of the strongest arguments for regarding medicine as an
art is that physicians and patients must frequently make important choices
when they do not have enough evidence to assign objective probabilities to the
outcomes associated with different choices. The PP can offer physicians
and patients some useful guidance when they must make decisions under
ignorance. Instead of attempting to maximize utility, physicians and patients can
make choices that are reasonable responses to plausible and serious threats.
Second, the PP is essentially a qualitative method for making decisions.
Instead of using quantitative concepts like probability and utility
maximization, it employs qualitative concepts, such as plausibility and
reasonableness. To apply the PP to any particular problem, one must make
Would this be a consistent approach to cancer screening? Having the test may
also not reflect a careful weighing of risks and benefits: although the test has
some potential benefits, there are also some risks. For example, the test could
yield a false positive or a false negative result. If the test yields a false positive
result, the patient would need to undergo other tests to determine that the
elevated PSA level is not due to cancer. If the test yields a false negative
result, the patient will be lulled into a false sense of security and may not take
proper steps to mitigate the effects of his cancer. Finally, if the test yields a
true positive result, the patient will need to choose among different options,
which may have undesirable complications, such as impotence and
incontinence. It may be the case that watchful waiting would be the best
option for the patient. If this is the case, then why even bother with the test at
all?
As one can see, most of the key issues in the decision whether to have the
PSA test relate to the reasonableness of different responses. In the EBM
literature, the discussion of PSA testing focuses on the issues relating to
maximizing expected utility. To date, there is widespread disagreement among
physicians and patients about the expected utility of the PSA test. Although
the PP does not provide a quick and easy solution to the problem of PSA
testing, it does offer a different and useful perspective on the problem. The PP
advises decision-makers to focus their discussion on what would constitute a
reasonable response to the threat of prostate cancer, rather than on which option
has the highest expected utility. The PP does not necessarily provide us with a
clear and unambiguous answer to our original question, but it helps us to
frame the problem in a useful way. The PP instructs us to approach the
problem from the point of view of reasonableness rather than from the
perspective of cost-effectiveness. By thinking about the reasonableness of
various responses, we can consider not only questions about costs and benefits
but also questions about proportionality, consistency, and realism.11
VI. CONCLUSION
In this paper, I have argued that the precautionary principle can offer
physicians and patients a useful method for making choices when they do not
have enough evidence to apply expected utility theory to medical decisions.
When physicians and patients lack adequate scientific proof relating to the
potential outcomes associated with various choices, they should take
reasonable measures to avoid health threats that are serious and plausible.
Although the PP may not yield definite solutions to all medical dilemmas, it
can help focus our attention on questions that physicians and patients do not
always consider when making medical decisions, such as questions about the
reasonableness of different options. The reasonableness of a response to a
health threat depends on several factors, including benefit vs. harm, realism,
proportionality, and consistency. Since the PP requires that decisions be
reasonable, it encourages the careful weighing and balancing of different
values that one finds in humanistic approaches to clinical reasoning, and it
constitutes a worthwhile alternative to simplistic applications of expected
utility theory that one finds in some approaches to clinical reasoning.12
NOTES
1. A sensitivity of 86% is not very impressive, and a specificity of 33% is very unimpressive.
For comparison, self-administered HIV tests have a sensitivity and specificity of 99% or
greater (The One Minute HIV Test, 2003).
2. Many other cancer screening exams, such as mammography to detect early breast cancer
and computerized tomography to detect early lung cancer, have also been very controversial. See Gates (2001) and Grann and Neugut (2003).
3. I followed the traditional analysis of rationality used in philosophy of science, decision
theory and economics: a rational agent is someone who takes effective means to his ends.
Some writers, such as Gert (1998) view irrationality as the more basic notion: a rational
agent is someone who does not make irrational choices. Others, such as Simon (1982),
argue that the concept of rationality must take human limitations into account: a rational
agent is someone who takes satisfactory means to his ends, given his ignorance and lack of
time to decide, where "satisfactory" does not mean "the best possible," but only "good
enough." For further discussion, see Audi (2001).
4. The three main degrees of proof recognized in the U.S. legal system are "preponderance
of evidence," "clear and convincing," and "beyond reasonable doubt" (Black's Law
Dictionary, 1999). If one translates these concepts into probabilistic terms, "preponderance
of evidence" ≈ probability > 50%; "beyond reasonable doubt" ≈ probability ≥ 95%; and
"clear and convincing" ≈ probability between 60% and 95%.
5. One must be careful to distinguish between de re and de dicto senses of the terms
"plausibility," "probability," and "possibility." De re (or "of the thing") uses of these
terms occur when one applies them to a particular thing, process, or event. For example, the
statement "It will probably rain tomorrow" uses probability in a de re sense, since it is
applying this term to events in the world. On the other hand, the statement "It is probable
that it will rain tomorrow" uses probability in a de dicto (or "of the speech") sense
because it applies probability to another statement, i.e., "it will rain tomorrow." In this
paper, I will be using the de dicto sense of the words "probable," "possible," and
"plausible," rather than the de re sense.
6. Bayes' theorem provides a method for calculating conditional probabilities based on prior
probabilities and the evidence. The theorem implies the following equation:

P(H/E) = P(H) × P(E/H) / P(E)

where P(H/E) = the probability of the hypothesis, given the evidence from a test;
P(H) = the probability of the hypothesis prior to testing; P(E/H) = the probability of
the evidence, given the hypothesis, prior to testing; and P(E) = the probability of the
evidence, prior to testing. See Howson and Urbach (1989).
7. The plausibility of a hypothesis is a function of the evidence pertaining to the hypothesis as
well as our background knowledge and epistemic values, such as simplicity, explanatory
power, consistency, fruitfulness, and the like. For example, one might argue that a
hypothesis is plausible because it has some evidence in its favor, it is consistent with
our background knowledge, and it provides a simple and powerful explanation of some
phenomena. See Resnik (2003).
8. I recognize that people may disagree about the definition of "reasonableness" as well as
the various criteria for determining reasonableness, but I will not address this problem in
this essay. For further discussion, see Audi (2001) and Rawls (2001). I also note that the law
has its own concept of reasonableness, which in some ways is similar to the concept of
reasonableness discussed here (Black's Law Dictionary, 1999).
9. See the discussion of rationality in note 3.
10. For a review of these debates, see Brown (1985); Kovács (1998).
11. In both of these examples, the key questions focused on the reasonableness of the responses
to health threats, but other examples might focus on questions relating to the plausibility or
seriousness of the threats. For instance, if there is very little evidence for a health threat, then
plausibility becomes a key issue. The threat posed by the use of smallpox as a biological
weapon raises important questions relating to plausibility. Is it plausible that terrorists
would be able to use smallpox as a weapon? For further discussion, see Fauci (2002).
12. I would like to thank Loretta Kopelman and Douglas Weed for helpful comments.
REFERENCES
Albert, D., Munson, R., & Resnik, M. (1999). Reasoning in medicine: An introduction to clinical inference (2nd ed.). Baltimore: Johns Hopkins University Press.
Audi, R. (2001). The architecture of reason. New York: Oxford University Press.
Beauchamp, T., & Childress, J. (2001). Principles of biomedical ethics (5th ed.). New York: Oxford University Press.
Black's Law Dictionary. (1999). Minneapolis: West Publishing.
Boorse, C. (1975). On the distinction between disease and illness. Philosophy and Public Affairs, 5, 49–68.
Boyle, P. (Ed.). (2000). Getting doctors to listen: Ethics and outcomes data in context. Washington, DC: Georgetown University Press.
Brown, W. (1985). On defining disease. The Journal of Medicine and Philosophy, 10, 311–328.
Centers for Disease Control. (2003). Prostate cancer screening: A decision guide [On-line]. Available: http://www.cdc.gov/cancer/prostate/decisionguide/index.htm.
Clouser, D., & Zucker, A. (1974). Medicine as art: An initial exploration. Texas Reports on Biology and Medicine, 32, 267–274.
Cranor, C. (1993). Regulating toxic substances. New York: Oxford University Press.
Cranor, C. (2001). Learning from the law to address uncertainty in the precautionary principle. Science and Engineering Ethics, 7, 313–326.
Downie, R., & Macnaughton, J. (2000). Clinical judgment. New York: Oxford University Press.
Earman, J. (1992). Bayes or bust? Cambridge, MA: MIT Press.
European Commission. (2000). Communication from the commission on the precautionary principle. Brussels: European Commission.
Fauci, A. (2002). Smallpox vaccination policy: The need for dialogue. New England Journal of Medicine, 346, 1319–1320.
Foster, K., Vecchia, P., & Repacholi, M. (2000). Risk management, science, and the precautionary principle. Science, 288, 979–981.
Gates, T. (2001). Screening for cancer: Evaluating the evidence. American Family Physician, 63, 513–522.
Gerard, M., & Frank-Stromborg, M. (1998). Screening for prostate cancer in asymptomatic men: Clinical, legal, and ethical implications. Oncology Nursing Forum, 25, 1561–1569.
Gert, B. (1998). Morality: Its nature and justification. New York: Oxford University Press.
Goklany, I. (2001). The precautionary principle: A critical appraisal of environmental risk assessment. Washington, DC: The Cato Institute.
Goldman, A. (1986). Epistemology and cognition. Cambridge: Harvard University Press.
Gorovitz, S., & MacIntyre, A. (1976). Toward a theory of medical fallibility. The Journal of Medicine and Philosophy, 1, 51–71.
Grann, V., & Neugut, A. (2003). Lung cancer screening at any price? Journal of the American Medical Association, 289, 355–356.
Hoffman, R., Gilliland, F.D., Adams-Cameron, M., Hunt, W.C., & Key, C.R. (2002). Prostate-specific antigen testing accuracy in community practice. BioMed Central Family Practice, 3, 19–23.
Howson, C., & Urbach, P. (1989). Scientific reasoning. New York: Open Court.
Johnson, R., & Bhattacharya, J. (1985). Statistics: Principles and methods. New York: John Wiley and Sons.
Kass, N. (2001). An ethics framework for public health. American Journal of Public Health, 91, 1776–1782.
Kitcher, P. (1993). The advancement of science. New York: Oxford University Press.
Kovács, J. (1998). The concept of health and disease. Medicine, Health Care, and Philosophy, 1, 31–39.
Little, M. (1995). Humane medicine. Cambridge: Cambridge University Press.
Miller, R. (1990). Why the standard view is standard: People, not machines, understand patients' problems. The Journal of Medicine and Philosophy, 15, 581–591.
Moser, P. (Ed.). (1990). Rationality in action. Cambridge: Cambridge University Press.
Murphy, E. (1976). The logic of medicine. Baltimore: Johns Hopkins University Press.
North Sea Conference. (1990). Final declaration of the Third International Conference on Protection of the North Sea. International Environmental Law, 1, 662–673.
Pargetter, R. (1984). Laws and modal realism. Philosophical Studies, 46, 335–347.
Pearl, J. (2000). Causality. Cambridge: Cambridge University Press.
Porter, R. (1997). The greatest benefit to mankind: A medical history of humanity. New York: W.W. Norton.
Rawls, J. (2001). Justice as fairness: A restatement. Cambridge, MA: Harvard University Press.
Resnik, D. (1995). To test or not to test: A clinical dilemma. Theoretical Medicine, 16, 141–152.
Resnik, D. (2003). Is the precautionary principle unscientific? Studies in the History and Philosophy of Biological and Biomedical Sciences, 34, 329–344.
Resnik, M. (1987). Choices: An introduction to decision theory. Minneapolis: University of Minnesota Press.
Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science, 20, 1–6.
Sackett, D., Straus, S.E., Richardson, W.S., Rosenberg, W., & Haynes, R.B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). London: Wolfe Publishing.
Schaffner, K. (1986). Exemplar reasoning about biological models and diseases: A relation between the philosophy of science and the philosophy of medicine. The Journal of Medicine and Philosophy, 11, 63–80.
Simon, H. (1982). Models of bounded rationality. Cambridge, MA: MIT Press.
Soule, E. (2000). Assessing the precautionary principle. Public Affairs Quarterly, 14, 309–329.
The One-Minute HIV Test. (2003). [On-line]. Available: http://www.1-minute-aids-test.com/
Tierney, L., McPhee, S., & Papadakis, M. (2002). Current medical diagnosis and treatment. New York: McGraw-Hill.
United Nations. (1992). Agenda 21: The UN program of action from Rio. New York: United Nations.
Vanderzwaag, D. (1999). The precautionary principle in environmental law and policy: Elusive rhetoric and first embraces. Journal of Environmental Law and Practice, 8, 355–385.