The Theory of Reasoned Action

Article in Theory & Psychology, Vol. 19 (4): 501–518, August 2009

DOI: 10.1177/0959354309336319



David Trafimow
New Mexico State University



The Theory of Reasoned Action
A Case Study of Falsification in Psychology

David Trafimow

ABSTRACT. Although Fishbein and Ajzen’s theory of reasoned action has
been a leading theory in social psychology for the last few decades, it
also has been an object of criticism for much of that period and subject
to definitional issues about what an attitude is. One of the main recent
criticisms is that the theory is not falsifiable. In contrast, I argue not only
that the theory makes risky predictions, and hence is falsifiable under
reasonable standards of falsification, but also that at least one of its
assumptions has actually been falsified. This specific argument is used
to set up a more general argument that psychologists tend to subscribe to
a naïve falsificationist viewpoint, invalidly use this viewpoint to evalu-
ate theories, and thereby prevent important empirical research from
being performed.
KEY WORDS: affect, attitudes, auxiliary assumptions, case study, cognition,
falsification, risky predictions, subjective norms, theory of reasoned action

One of the most extensive and influential research programs in the history of
social psychology was spawned by Martin Fishbein and Icek Ajzen’s theory
of reasoned action (Ajzen & Fishbein, 1980; Fishbein, 1963, 1967, 1980;
Fishbein & Ajzen, 1975) and its descendants such as Ajzen’s (1988, 1991)
theory of planned behavior. A cursory search of the citation index reveals
thousands of citations for each of these theories, thereby demonstrating that
this program of research has been extremely successful in terms of its influ-
ence on the field of psychology.
However, the fact that the theory of reasoned action has been influential
does not necessarily mean that it is a good theory. It has been subjected to
criticisms by several authors (e.g., Greve, 2001; Liska, 1984; Miniard &
Cohen, 1981; Ogden, 2003; Smedslund, 2000). The most important criticism,
and the one that inspired the present article, is that the theory of reasoned
action is not falsifiable (see Greve, 2001; Ogden, 2003; Smedslund, 2000, for
recent examples). Because a theory must be falsifiable to be a good theory, if
the theory of reasoned action is not falsifiable, then it is not a good theory
regardless of how many researchers believe it to be useful.
With this point in mind, I have two goals. The first goal is specific to the the-
ory of reasoned action: I wish to examine whether or not it is falsifiable under
reasonable criteria for falsification. In presenting my analysis, I will argue that
(a) it has survived some potentially rather destructive tests, (b) in some cases
it actually has been falsified, and (c) research emanating from the theory of
reasoned action can be used to falsify other theories that, in turn, also have
been criticized on falsificationist grounds. The second goal is more general and
stems from an observation that researchers who are not psychologists often
accuse psychologists of proposing theories that are not falsifiable. Even among
psychologists themselves, theories that have survived for a long period of time
are often accused of being unfalsifiable (see Betsch & Pohl, 2002;
Gannon, 2002; Hellberg, 2006; McGuire, 2006; Roth, Wilhelm, Pettit, &
Meuroet, 2005, for a few recent examples not pertaining to the theory of rea-
soned action). It is my belief that psychologists have not thought through the
issue of falsification in a sufficiently critical manner and that accusations of
unfalsifiability would be much less common if they did so. Worse yet, I believe
that much research of potential value is not performed because researchers
blindly assume that the relevant theories are not falsifiable and consequently
conclude that the research is impossible. So my second and more general goal
is to stimulate researchers to think more critically about falsification.

Clearing Out the Philosophical Underbrush

Although the notion of falsification as an important criterion for evaluating
theories has had a long history in the philosophy of science, the concept became
especially widespread because of the writings of Sir Karl Popper (e.g., 1959,
1963, 1972, 1983). At the risk of oversimplifying Popper’s argument, I will
reduce it to its bare essentials. Suppose one has a theory (T) and makes a
prediction (O) that follows from it. Does confirming O prove T to be true?
Obviously, this is not so, as can be seen from the following invalid syllogism.
It is invalid because it commits the fallacy of affirming the consequent; the
observation could be true for some reason other than the truth of the theory.

{1} If T then O
{2} O
{3} Therefore T (invalid conclusion)

In contrast, if the observation does not come out, one can validly draw a con-
clusion about the falsity of the theory by the logic of Modus Tollens as follows.

{1} If T then O
{2} Not O
{3} Therefore not T

Thus, if one attempts to prove a theory to be true by experiment, then one nec-
essarily commits the logical fallacy of affirming the consequent, whereas if
one wishes to disprove a theory by experiment, one can use the valid logic of
Modus Tollens. Therefore, Popper urged researchers to try to disprove theo-
ries rather than to try to prove them. But if one wishes to disprove theories,
then a prerequisite for doing so is that the theories make predictions that
might not be true. In short, the theories must be falsifiable.
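The two argument forms above can be checked mechanically by enumerating truth assignments. The short Python sketch below is not part of the original article; it simply confirms that affirming the consequent admits a counterexample whereas Modus Tollens does not.

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q'."""
    return (not p) or q

worlds = list(product([True, False], repeat=2))  # all (T, O) assignments

# Affirming the consequent: premises (T -> O) and O; conclusion T.
# A counterexample is a world where both premises hold but T is false.
ac_counterexamples = [(t, o) for t, o in worlds
                      if implies(t, o) and o and not t]

# Modus Tollens: premises (T -> O) and not O; conclusion not T.
# A counterexample would be a world where the premises hold but T is true.
mt_counterexamples = [(t, o) for t, o in worlds
                      if implies(t, o) and not o and t]

print(ac_counterexamples)  # [(False, True)] -> the inference is invalid
print(mt_counterexamples)  # [] -> no counterexample; the inference is valid
```

The single world (T false, O true) is exactly the case Popper’s argument turns on: the observation can be true for some reason other than the truth of the theory.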
Of course, as Popper was well aware, matters are not this simple. The com-
plexity was described particularly effectively by his student, Imre Lakatos
(1978). According to Lakatos, experimental predictions never come only from
a theory; they come from a combination of a theory and assumptions that
are outside the theory, which he termed auxiliary assumptions. Consider,
for example, predictions about the velocities and positions of the planets that
have been made from Newton’s Laws of Motion. To make such predictions,
it is not only necessary to have a theory such as that proposed by Newton, but
it is also necessary to make auxiliary assumptions about the current positions
of the planets and their speeds, and to make various other assumptions about
the presence or absence of additional astronomical bodies (and their posi-
tions, speeds, masses, etc.). Consequently, if obtained data contradict a the-
ory, it is not necessary to conclude that the theory is wrong because the
problem may reside in one or more of the auxiliary assumptions (Duhem,
1906/1954; Lakatos, 1978; Meehl, 1990, 1997; Quine, 1953/1980). The pos-
sibility of blaming an observational failure on one or more auxiliary assump-
tions rather than the theory means that the observational failure does not have
the power to unequivocally doom the theory.
The Lakatosian argument can be illustrated easily with the following syl-
logism, where A1, A2, and so on, refer to auxiliary assumptions.

{1} If (T & A1 & A2 & … An) then O
{2} Not O
{3} Therefore, not (T & A1 & A2 & … An) = not T or not A1 or not A2 or … not An

Granting that auxiliary assumptions complicate theory falsification, this
complication does not, by itself, imply that scientists should
not be concerned with falsification. It is possible to argue that Lakatosian
difficulties are just that, difficulties, and the fact that something is difficult does
not mean that it should not be attempted. Moreover, even though the necessity
of having auxiliary assumptions causes absolute falsification to be impossible
to attain, it might be possible to have less stringent criteria and argue for some
kind of “reasonable” falsification that would be possible to attain. Even if the
goal is reasonable falsification instead of absolute falsification, this goal is
nevertheless out of the question if one’s theory is not, in principle, capable of
being falsified. Thus, Popper’s falsification argument retains much of its force
despite the fact that auxiliary assumptions preclude the possibility of absolute
falsification (see Meehl, 1990, 1997; Trafimow, 2003).
But auxiliary assumptions complicate falsification in another way that, to
my knowledge, has not been addressed in the psychological literature. To
understand the complication, let us ask a simple question: How can one deter-
mine whether a theory is falsifiable? The way to do this is to attempt to make
predictions from the theory and see if these predictions have the possibility,
at least in principle, of being shown to be wrong. But we have already seen
that it is necessary to include auxiliary assumptions in the process of deter-
mining what the predictions might be. Therefore, to determine whether a the-
ory is falsifiable, it is necessary to specify not only the theory but also a set
of auxiliary assumptions to combine with the theory. If the result of this com-
bination is one or more predictions that might be wrong, then the theory is
clearly falsifiable (unless one wishes to disallow one or more of the auxiliary
assumptions). More importantly, however, if the result of this combination is
a failure to derive a testable prediction, we do not have a strong case that the
theory is unfalsifiable. It is possible that the theory would make testable pre-
dictions if it were combined with different auxiliary assumptions. Thus, to
unambiguously deem a theory to be unfalsifiable, one must try out all possi-
ble combinations of auxiliary assumptions in conjunction with the theory and
show that, in every case, the resulting prediction cannot be tested, even in
principle. To my knowledge, this has never been done in the history of science.
On the contrary, there are many theories that were thought to be impossible
to test at one time and found to be easy to test at later times. For example, prior
to the advent of spectroscopy it was considered to be impossible to test theories
about the chemical composition of the stars because there was no way to go to
them and obtain samples for analysis. However, with the discovery of spectral
analysis, tests of the chemical composition of the stars became commonplace.
The history of science is replete with similar cases, which is why philosophers
(including Popper) speak of whether theories are falsifiable in principle rather
than whether they can be falsified at the present time.
Even religious theories, which some scientists currently consider to be
untestable, are not necessarily so in principle. Consider the religious theory
that God created everything. Although this theory has undesirable character-
istics, the oft-cited charge that it is, in principle, not falsifiable is not one of
them. To see why, let us introduce an auxiliary assumption that there is a
prayer that, upon discovery, would induce God to appear and truthfully answer
questions. Well, then, suppose someone discovered this crucial prayer, used
it to summon God, and asked God directly about creation. God’s denial of
responsibility for creation, if it happened, would provide strong evidence
against the religious theory. It would not be proof because God might lie about

his responsibility for creating an imperfect universe, or perhaps the being that
answered the prayer is not really God, and so on. But as I pointed out earlier,
this is a problem with any scientific theory—contradictory data can always be
attributed to wrong auxiliary assumptions rather than to wrong theories. The
point is that if one is sufficiently creative about auxiliary assumptions, it is
always possible to have tests of seemingly unfalsifiable theories.1 Ultimately,
when one decides how much to believe or disbelieve a theory, the issue is the
weight of the evidence, the plausibility of alternative explanations, presumptions
about the validity of auxiliary assumptions, and so on, rather than conclusive
proof or disproof.
Although falsification—or at least the naïve version presented here thus
far—is clearly an inadequate philosophy of science, a more sophisticated ver-
sion has some desirable characteristics. For example, Popper argued that
although theories cannot be proven, they can be corroborated (supported) to
a greater or lesser degree depending on their ability to make risky predictions,
which are predictions that are likely to be wrong if the theory is wrong. Of
course, as I explained earlier, all predictions, whether risky or not, come from
the theory and auxiliary assumptions. Consequently, if one theory makes
more risky predictions than another, it might be that the other theory would
have made more risky predictions in the context of more creative auxiliary
assumptions. Nevertheless, the fact that a particular theory has been shown to
make risky predictions is a point in its favour: the theory allows researchers
to predict something that they would be unlikely to predict without the
theory. Therefore, I will retain Popper’s assertion that, ceteris paribus, risky
predictions are a point in favour of the theories that have been demonstrated to
make them.
Because of the difficulty in evaluating theories without a supporting context
of auxiliary assumptions, Lakatos (1978) argued that whole research programs
rather than single theories should be evaluated. From a philosophical point of
view this is not very satisfying because it suggests that philosophers can deem
research programs to be successful only after they have demonstrated them-
selves to be so, and it would be desirable to be able to make these evaluations on
an a priori basis. But from a psychological perspective, particularly from the per-
spective of evaluating theory of reasoned action research, there has been over 40
years of research, and so it seems reasonable to inquire as to whether this
research program has led to risky predictions. If the answer is in the affirmative,
that would constitute a strong point in its favour, whereas if the answer is in the
negative, that would constitute strong support for its falsificationist critics.

The Theory of Reasoned Action

This section contains a brief description of the theory of reasoned action
followed by two examples of risky predictions that have been made from the theory.
In one case the risky prediction has been supported, but in the other case it has
not. These cases will be split into subsections dealing with the distinction between
attitude and subjective norm and whether attitudes have both a cognitive and
affective component.

Brief Description of the Theory of Reasoned Action

The theory of reasoned action can be described briefly as follows (see Ajzen
& Fishbein, 1980; Fishbein & Ajzen, 1975, for fuller accounts).
The most proximal cause of behavior is behavioral intention (what one
intends to do or not to do). Behavioral intention, in turn, is determined by atti-
tude (one’s evaluation of the behavior) and subjective norm (one’s evaluation
of what important others think one should do), either of which might be the
most important determinant of any particular behavior. Usually, this is revealed
empirically by the beta weights obtained from multiple regression analyses,
where behavioral intention is regressed onto attitude and subjective norm. If
the beta weight for attitude is larger than the beta weight for subjective norm,
the behavior is deemed to be more under attitudinal than normative control,
but if the reverse is true, the behavior is deemed to be more under normative
than attitudinal control. In either case, it is desirable to know what determines
attitude or subjective norm, respectively, if a researcher wishes to influence the
behavior. Attitude is determined by behavioral beliefs (beliefs about the like-
lihood of various consequences) and evaluations of how good or bad it would
be if those consequences happened. Subjective norm is determined by beliefs
about what specific important others think one should do and how much one is
motivated to comply with those important others. Both attitude and subjective
norm are assumed to be determined by summative processes. Thus, to form
an attitude, people are assumed to sum behavioral belief–evaluation products
(attitude = Σb_i e_i), whereas to form a subjective norm, people are assumed to
sum normative belief–motivation-to-comply products (subjective norm = Σn_i m_i).
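As a concrete illustration of the summative formulas and the regression step described above, here is a minimal Python sketch. All belief values and the simulated respondents are invented for illustration; they do not come from any study discussed in this article.

```python
import random
from statistics import mean, pstdev

# Expectancy-value computation for one hypothetical respondent:
# behavioral beliefs b_i with evaluations e_i, and normative beliefs n_i
# with motivations to comply m_i (all numbers invented).
b_e = [(0.9, 3), (0.4, -2), (0.7, 1)]
n_m = [(0.8, 2), (0.3, 1)]
attitude = sum(b * e for b, e in b_e)          # attitude = sum of b_i * e_i
subjective_norm = sum(n * m for n, m in n_m)   # subjective norm = sum of n_i * m_i

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    return sum((a - c for a, c in [(0, 0)]), 0) if False else (
        sum((a - mx) * (c - my) for a, c in zip(x, y))
        / (len(x) * pstdev(x) * pstdev(y)))

# Simulated sample in which intention is driven mostly by attitude, so the
# standardized beta for attitude should exceed that for subjective norm.
random.seed(1)
att = [random.gauss(0, 1) for _ in range(500)]
sn = [random.gauss(0, 1) for _ in range(500)]
bi = [0.7 * a + 0.2 * s + random.gauss(0, 0.5) for a, s in zip(att, sn)]

# Standardized beta weights for a two-predictor regression (closed form).
r_ya, r_ys, r_as = corr(bi, att), corr(bi, sn), corr(att, sn)
beta_att = (r_ya - r_ys * r_as) / (1 - r_as ** 2)
beta_sn = (r_ys - r_ya * r_as) / (1 - r_as ** 2)
print(beta_att > beta_sn)  # attitude dominates in this simulated sample
```

In the theory’s terms, the larger beta weight for attitude would mark this simulated behavior as being under attitudinal rather than normative control.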

The Distinction between Attitude and Subjective Norm

The theory of reasoned action has been criticized in many ways. Possibly the
most extensive commentary has pertained to the distinction between attitude
and subjective norm (e.g., Liska, 1984; Miniard & Cohen, 1981; see Trafimow,
1998, for a review). The basic problem is largely a conceptual one, though
there have also been empirical arguments that will not be covered here (but
see Trafimow, 1998). To understand the conceptual problem, remember that
attitudes are presumably determined by beliefs about consequences (and eval-
uations of those consequences) whereas subjective norms are determined by
normative beliefs (and motivations to comply). But what if behavioral beliefs
and normative beliefs are really different names for the same construct? This is
precisely the question posed by Miniard and Cohen (1981), whose argument
can be illustrated by example, as follows. Suppose that the behavior of
concern is “eating a chocolate bar” and so a behavioral belief might be “my
father will disagree with me if I eat a chocolate bar.” A normative belief might
be “my father thinks I should not eat a chocolate bar.” These two beliefs seem
to be different ways of saying the same thing, thereby contradicting the idea
that there is a strong distinction between them. Clearly, if this distinction is
incorrect, the attitude–subjective norm distinction, which is based on it, is also
cast into doubt.
There is a falsification argument that can be made here. Fishbein and his
advocates could come up with counter-examples where the distinction seems
to be much stronger than in the foregoing examples, but Miniard and Cohen
and their advocates could reply with yet more examples where the distinction
seems to fail. Even if researchers agreed on the examples, they could disagree
about whether those examples favor the distinction or go against it. Thus,
there seems to be a conceptual problem with the theory of reasoned action
that renders it unfalsifiable.
Nevertheless, in work with colleagues I have performed several sets of exper-
iments that provided reasonably risky tests of the distinction. Fishbein and I
(Trafimow & Fishbein, 1994a) made use of an auxiliary assumption that was
supported by Stasson and Fishbein (1990) concerning seat belt use under safe
or risky driving conditions. Stasson and Fishbein had used a multiple regression
paradigm to show that intentions to wear a seat belt under safe driving condi-
tions were attitudinally controlled whereas intentions to wear a seat belt under
risky driving conditions were normatively controlled. So Fishbein and I per-
formed three experiments in which we manipulated attitudes towards wearing
a seat belt under these conditions with behavioral intentions to wear a seat belt
as the dependent measure. If the attitude–subjective norm distinction is a false
one, the attitude manipulation should be equally effective in influencing behav-
ioral intentions regardless of whether behavioral intentions are measured in a
safe or risky driving context. But if it is really the case that seat belt use under
safe driving conditions is attitudinally controlled whereas seat belt use under
risky driving conditions is normatively controlled, the attitude manipulation
should affect only behavioral intentions in the safe driving context and not in
the risky driving context. Three experiments supported this latter prediction.
Furthermore, Fishbein and I (Trafimow & Fishbein, 1994b) performed a set of
analogous experiments to show that subjective norm manipulations affected
intentions to perform normatively controlled behaviors but not intentions to
perform attitudinally controlled behaviors.
Fishbein and I (Trafimow & Fishbein, 1995) also tested the distinction at
the belief level—precisely the level at which the accusation of not being
falsifiable seems most appropriate. We invoked an auxiliary assumption about
the formation of associations between beliefs to provide a way of testing
whether people distinguish between behavioral and normative beliefs. We
reasoned that if people really use behavioral beliefs to form an attitude, then
this would involve comparing the behavioral beliefs to other behavioral
beliefs, thereby resulting in the formation of associations among the
behavioral beliefs. Similar reasoning applied to normative beliefs suggests that
associations between them also should be formed. However, there is no rea-
son to assume that associations are formed between behavioral beliefs and
normative beliefs because there is no reason for people to compare them to
each other. Well then, suppose that people are asked to write down their beliefs
about a behavior. If the first belief retrieved is a behavioral belief, then it
should be easy to traverse an associative pathway to another behavioral belief.
Similarly, if the first belief retrieved is a normative belief, then it should be easy
to traverse an associative pathway to another normative belief. The upshot is
that there should be clustering in the retrieved beliefs: the behavioral beliefs
should tend to be recalled adjacently to each other and the normative beliefs
should tend to be recalled adjacently to each other. Three experiments supported
this prediction while controlling for a variety of variables, such as possible
priming effects, the semantic similarity of the beliefs, and others.
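The clustering prediction can be quantified with a standard recall-clustering measure, the adjusted ratio of clustering (ARC). The sketch below uses invented recall sequences and is not the analysis actually reported by Trafimow and Fishbein (1995); it only shows how adjacency-based clustering is scored.

```python
from collections import Counter

def arc(sequence):
    """Adjusted ratio of clustering for a recall sequence of category labels.

    R      = observed adjacent same-category repetitions
    E(R)   = chance-expected repetitions = sum(n_i^2)/N - 1
    max(R) = N - k, where k is the number of categories
    ARC    = (R - E(R)) / (max(R) - E(R)); 1 = perfect clustering, 0 = chance.
    """
    counts = Counter(sequence)
    N, k = len(sequence), len(counts)
    R = sum(a == b for a, b in zip(sequence, sequence[1:]))
    ER = sum(n * n for n in counts.values()) / N - 1
    maxR = N - k
    return (R - ER) / (maxR - ER)

# 'B' = behavioral belief, 'N' = normative belief (hypothetical recall orders).
clustered = ['B', 'B', 'B', 'N', 'N', 'N']     # beliefs recalled by category
alternating = ['B', 'N', 'B', 'N', 'B', 'N']   # no category clustering

print(arc(clustered))    # 1.0, perfect clustering
print(arc(alternating))  # -1.0, below-chance clustering
```

Under the associative account described above, recall orders closer to the first pattern than the second would indicate that behavioral and normative beliefs form separate associative networks.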
There have been additional tests. For example, Finlay and I (Trafimow &
Finlay, 1996) demonstrated that people, as well as behaviors, can be under
attitudinal or normative control. Furthermore, based on this research, Ybarra
and I (Ybarra & Trafimow, 1998) showed that priming the private self increases
the attitude–intention association whereas priming the collective self increases
the subjective norm–intention association. However, at the risk of belaboring the
point: with respect to the attitude–subjective norm distinction, the combination
of the theory of reasoned action with carefully selected auxiliary assumptions
resulted in successful tests of predictions that easily could have failed. And
had these predictions failed, I, at least, would have been more likely to con-
clude that the theory of reasoned action is wrong than to assume that the
repeated failures were caused by invalid auxiliary assumptions. Thus, we have
at least one case of reasonable corroboration for a theory that is supposedly
not falsifiable.

Affective and Cognitive Attitude Components

According to the theory of reasoned action, attitude is a cognitive variable: it
is an evaluation based on expected likelihoods of consequences and their val-
ues (Fishbein, 1980). In contrast, other researchers believe that attitudes con-
tain both an affective and a cognitive component (e.g., Triandis, 1980). To test
these possibilities, many researchers have used factor analytic research para-
digms—participants are asked to endorse the items that make up an attitude
measure, and factor analysis is used to determine if the items load onto one
factor or two. Contrary to the theory of reasoned action, researchers typically
obtain two factors rather than one, with affective items loading onto one of
the factors and cognitive items loading onto the other (Abelson, Kinder, Peters,
& Fiske, 1982; Breckler, 1984; Breckler & Wiggins, 1989; Crites, Fabrigar,
& Petty, 1994; Triandis, 1980).

Although the factor analytic research seems to provide a compelling case
against the theory of reasoned action, Fishbein (1980) met the challenge with
an elegant argument based on three issues. In the first place, factors obtained
from factor analyses have to be named, and although there might be agreement
on the cognitive factor, it is less clear whether the second factor measures
affect or whether it measures something else (e.g., health). Secondly, if
it could be shown that one of the factors is strongly correlated with Σb_i e_i and
intention, whereas the other factor is not (or less so), then there is no reason
to prefer an “affective/cognitive” interpretation over an “attitude/something
else” interpretation; in fact, the reverse would be true. Thirdly, Fishbein
presented an actual case where this was so. Furthermore, what Fishbein
considered a health factor, given its lack of correlation with Σb_i e_i and
intention, correlated well with health variables.
From a falsification perspective, Fishbein’s (1980) clever argument could be
perceived as harming rather than helping the theory of reasoned action. If the
factor analytic evidence can be interpreted to support or disconfirm Fishbein’s
attitude conception, depending on the argumentative abilities of the researcher,
then there would seem to be no way to falsify the theory. But this problem in
falsification is unsolvable only if researchers insist on limiting themselves to
factor analytic paradigms. In addition to using factor analysis, Sheeran and I
(Trafimow & Sheeran, 1998) made use of an auxiliary assumption that we
termed the associative hypothesis. According to this hypothesis, beliefs that are
more cognitive are likely to become associated with each other, as are beliefs
that are more affective. During retrieval, participants traverse associative path-
ways from cognitive beliefs to other cognitive beliefs and from affective beliefs
to other affective beliefs. Using a variety of experimental paradigms, Sheeran
and I obtained evidence that corroborates this hypothesis. Our participants
tended to retrieve cognitive beliefs adjacently to each other and they tended to
retrieve affective beliefs adjacently to each other, even when controlling for
potential confounding effects such as linguistic similarity, belief valence, and
others. Thus, these data provide reasonable (but not absolute) falsification of an
important assumption of the theory of reasoned action that attitudes do not con-
tain distinct affective and cognitive components. Furthermore, this falsification
has led to theory change in that more recent versions of the theory include the
distinction (e.g., Ajzen & Fishbein, 2005).

Extending the Program

This section expands on theory of reasoned action research in two ways. The
first subsection presents a risky prediction made by Ajzen’s (1988) theory of
planned behavior, which is an extension of the theory of reasoned action. The
second subsection shows how Fishbein’s distinction between attitudes and
subjective norms can be used as an auxiliary assumption to derive a risky
prediction from Freud’s psychoanalytic theory.

Perceived Behavioral Control

Ajzen (1988) extended the theory of reasoned action by adding perceived
behavioral control—how much control people think they have over their behav-
ior—as an additional determinant of behavioral intention. Perceived behavioral
control is usually measured by items such as “X is under my control” and “X
is easy for me to perform.” Note the implicit assumption that perceptions
of control and perceptions of difficulty are the same thing. Contrary to this
assumption, my colleagues and I (Trafimow, Sheeran, Conner, & Finlay, 2002)
have performed the following three demonstrations. Most importantly, we
showed that there are behavioral intentions that are affected by manipulating
control beliefs but not by manipulating difficulty beliefs; whereas there are
other behavioral intentions that are affected by manipulating difficulty beliefs
but not by manipulating control beliefs. If the two types of beliefs are equiva-
lent, these differential effects should be impossible to obtain. Secondly, we
performed several experiments showing that different control beliefs are more
associated with each other in memory than with difficulty beliefs, and that dif-
ficulty beliefs are more associated with each other than with control beliefs.
Thirdly, we performed a meta-analysis showing that difficulty is a better pre-
dictor of more behavioral intentions than is control. Thus, the perceived behav-
ioral control issue provides a nice case where a widely cited extension of the
theory of reasoned action has been demonstrated not only to be reasonably
testable but to have failed some of the tests.2

Fishbein and Freud

Although Fishbein’s theory of reasoned action has been considered by many to
be unfalsifiable, the most famous example of a supposedly unfalsifiable theory
in the history of psychology would have to be Freud’s psychoanalytic theory
(Freud, 1908/1959; 1922/1955; see Sarnoff, 1976, for an accessible review). In
this subsection, I intend to demonstrate not only that Freud’s theory is testable,
but that it can be tested by using the distinction between attitude and subjective
norm, from the theory of reasoned action, as the crucial auxiliary assumption.
A central concept in Freud’s theory concerns the Oedipal Complex, where
young children are faced with the perceived choice of conforming to the same-
sex parent’s proscriptions or being punished severely. According to Freud, young
children usually pick the former choice, which results in their internalizing the
same-sex parent’s values. But although the young children’s behaviors are
strongly influenced by their parents’ proscriptions, this influence decreases as
the children grow older. In the first place, they develop defense mechanisms
that allow them to do more of what they want to do while still pretending (even
to themselves) that they are behaving according to their internalized values.
Secondly, with increasing maturity they gain physical and mental abilities that
make them increasingly independent of parental constraints.

There is a clear parallel between Freud and Fishbein. Stated in terms of
attitudes and subjective norms, Freud’s theory clearly implies that younger
children, who are more dependent and have less well-developed defense
mechanisms than older children, should be particularly influenced by
what they think their parents think they should do; their behaviors
should be under normative control. In contrast, older children should be more
likely to intend to perform behaviors according to their own evaluations; their
behaviors should be under attitudinal control. My colleagues and I (Trafimow,
Brown, Grace, Thompson, & Sheeran, 2002) actually tested this prediction in
a large sample of children ranging from ages 8 to 16. We measured these chil-
dren’s attitudes, subjective norms, and intentions to perform 30 behaviors and
we performed both between-participants and within-participants correlational
analyses and multiple regression analyses. Contrary to Freud’s theory, although
children of all age groups tended to be much more under attitudinal than
normative control, there were no differences in attitudinal or normative control
across the age groups for either between-participants analyses or within-
participants analyses. Furthermore, the lack of a difference is not due to a lack
of discrimination between the behaviors, because there were large mean
differences across age groups. For example, younger children had much more
positive attitudes and intentions to “pretend to be asleep” than did older children.
Thus, it was not the responses on the variables that were the same across age
groups but rather the relationships between the variables.
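
The within-participants logic of these analyses can be sketched as follows. This is a minimal illustration with simulated ratings; all numbers and variable names are hypothetical and are not the Trafimow et al. (2002) data or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_children, n_behaviors = 40, 30

# Simulated ratings for one age group: attitudes, subjective norms, and
# intentions toward 30 behaviors for each child (hypothetical data).
attitudes = rng.normal(size=(n_children, n_behaviors))
norms = rng.normal(size=(n_children, n_behaviors))
# Intentions here are driven mostly by attitudes, i.e., "attitudinal control".
intentions = (0.7 * attitudes + 0.2 * norms
              + rng.normal(scale=0.5, size=(n_children, n_behaviors)))

def within_participant_r(x, y):
    """Correlate x and y across the 30 behaviors, separately for each child."""
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    return (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))

att_r = within_participant_r(attitudes, intentions)   # one r per child
norm_r = within_participant_r(norms, intentions)

print(f"mean attitude-intention r: {att_r.mean():.2f}")
print(f"mean norm-intention r:     {norm_r.mean():.2f}")
```

Running the same computation separately for each age group and comparing the mean correlations is, in essence, the within-participants test of attitudinal versus normative control described above; the between-participants version instead correlates across children for each behavior.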
In summary, Freud’s theory is often cited, even in undergraduate textbooks,
as the prototypical example of an unfalsifiable theory in psychology (e.g.,
Bernstein, Clarke-Stewart, Roy, Srull, & Wickens, 1994; Myers, 1992; Ryckman,
1993) despite the numerous empirical tests and applications of the theory that
have been reviewed by Fisher and Greenberg (1996). One reason it has been
considered to be unfalsifiable may be that it is difficult to figure out how
one would actually test it. In the present language, it is difficult to see what
auxiliary assumptions one could make that would allow for a reasonable test
of the theory. But the fact that the crucial auxiliary assumptions are difficult
to find does not mean that they cannot be found. In the Trafimow et al. (2002)
study, the attitude–subjective norm distinction from Fishbein’s theory of rea-
soned action was used as the crucial auxiliary assumption and it resulted in a
reasonable (but not absolute) disconfirmation of Freud’s theory. It is ironic
that the theory of reasoned action, which has been accused of being unfalsifi-
able, turned out to be capable of being used to falsify a central aspect of Freud’s
theory, which has also been considered to be unfalsifiable.


Readers of a preliminary version of the present article suggested relevant issues
that will be addressed in this section.3 One issue that has been addressed in
detail by Putnam (1975), Smedslund (1991), and Michael and Lise Wallach
(1998; also see L. Wallach & M.A. Wallach, 2001) is the issue of tautologies
and near-tautologies. Triandis (1980) pointed out an example of a tautological
proposition in the attitude area—specifically, that attitudes have often been
defined as predispositions for behavior. Given this definition, Triandis asserted
that the numerous tests that have been conducted of whether attitudes predict
behaviors seem misplaced because attitudes must predict behaviors by defini-
tion, regardless of any empirical findings. Less extremely, Michael and Lise
Wallach (1998) suggested that much research in social psychology involves
the use of near-tautologies—theories that “are so entrenched in the system
of assumptions implicit in social psychological thinking that they cannot be
disconfirmed” (p. 184). Because the system of assumptions in which social
psychology theories are entrenched is so strongly accepted, any deviations of
data from predictions would be deemed to be due to faulty auxiliary assump-
tions rather than due to the wrongness of the theory (or the wrongness of the
implicit assumptions to which the theory is tied). It might seem that tauto-
logical or near-tautological theories are not capable of being falsified.
It is true that falsification is an irrelevant consideration for tautological
propositions—there can be little doubt that definitions are not susceptible
to falsification (Putnam, 1975). But they are susceptible to considerations of
utility, as can be seen by examining the history of attitude research. It was
largely because of numerous demonstrations of low or nonexistent attitude–
behavior correlations (reviewed by Wicker, 1969) that Fishbein and Ajzen were
led to redefine attitudes as evaluations of behaviors rather than as predisposi-
tions to perform them, thereby rendering the predictive and causal effects of
attitudes on behaviors as empirical questions. By making this new definition
the centerpiece of their theory and by adding useful auxiliary assumptions in
the form of a measurement model, they were able to dramatically improve the
prediction of behaviors. Thus, the problem is not whether one definition or
another definition is true (“true” has no meaning in this context); rather, it is
their relative utilities that are in question. The relative utility of definitions, in
turn, depends largely on the relative success of the theories in which they play
a role. Thus, although definitions themselves are not falsifiable, empirical
research can lead them to be changed for the better (Quine, 1953/1980). And
whatever the definitions are, the theories that contain them can be exposed to
risky predictions with the adroit use of auxiliary assumptions.
Let us now consider near-tautologies. I tend to agree with Wallach and
Wallach (L. Wallach & M.A. Wallach, 1998; M.A. Wallach & L. Wallach,
1998) that the particular examples of research they cite as near-tautological are
not major contributions to psychology. However, I disagree with them about
why this is so. According to Wallach and Wallach, the theories are not capable
of being falsified because they are too closely tied to the implicit assumptions
social psychologists have (O’Donohue, 1989, termed these “metaphysics”). But
I see nothing, in principle, that makes it impossible for researchers to make their
metaphysical assumptions explicit, make more creative auxiliary assumptions,
or propose theories that are less closely tied to metaphysical assumptions, any
or all of which could result in the derivation of interesting hypotheses that
could be falsified under the criterion of reasonable falsification. It is true that
researchers tend not to follow these prescriptions and that there would be resist-
ance from other psychologists who buy into the metaphysics that are currently
fashionable, but this is a normal hurdle in most sciences. I believe that the
problem with the research that Wallach and Wallach cite is that the theories
are simply uninteresting and uninformative, possibly because they are so closely
entrenched in what psychologists believe anyhow.4 Indeed, I used Bayes’s
theorem to demonstrate how the proposal of theories and hypotheses that are
too plausible, given one’s metaphysical assumptions, leads psychologists to
conduct research that fails to affect our levels of confidence in them (Trafimow,
2003). In summary, although I agree with Wallach and Wallach that much social
psychology theorizing is too closely tied to the metaphysics that are currently
fashionable, I disagree that this creates an insurmountable falsification problem;
one can always provide reasonably risky tests of theories given sufficiently
creative auxiliary assumptions. One reason that many social psychology
theories seem unfalsifiable may be that they are obvious given the metaphysics
to which they are so closely tied, and it is this seeming obviousness that causes
the illusion that they cannot be subjected to reasonable falsification efforts.
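
The Bayesian point can be made concrete with a toy calculation, offered here in the spirit of (but not taken from) Trafimow (2003): when a hypothesis is already nearly certain given the prevailing metaphysics, even a successful and diagnostic test barely raises one's confidence in it.

```python
# Toy illustration: confirming a hypothesis that is already highly
# plausible barely changes one's confidence in it.
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes's theorem, P(H | D), for a binary hypothesis H."""
    numerator = p_data_given_h * prior
    return numerator / (numerator + p_data_given_not_h * (1 - prior))

# The same evidential strength (a likelihood ratio of 4) in both cases.
likelihoods = dict(p_data_given_h=0.8, p_data_given_not_h=0.2)

# A hypothesis that is "obvious" given the field's metaphysics:
print(posterior(0.95, **likelihoods))  # ~0.987, a gain of only ~0.04
# A genuinely risky hypothesis:
print(posterior(0.50, **likelihoods))  # ~0.800, a gain of 0.30
```

The same confirming evidence moves the risky hypothesis much further than the obvious one, which is why research on hypotheses that are too plausible a priori fails to affect our levels of confidence in them.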

Boundary Conditions
Greenwald, Pratkanis, and Leippe (1986) stated that social psychology theories
are overgeneralized: the boundary conditions are narrower than is apparent
from the writings of proponents of these theories. In addition, Greenwald et al.
pointed out that there is a confirmation bias that retards progress because
researchers are so busy looking for evidence that confirms their theories that
they may not find out what the boundary conditions are for decades. To address
this problem, Greenwald et al. suggested two research strategies that move
away from theory testing. The idea of the condition-seeking strategy is to search
for the conditions under which the predicted effect occurs or does not occur.
This strategy can be used iteratively to obtain increasingly precise results.
For example, one might find that result R occurs under condition C1 but not
under condition C2. In turn, further research might indicate that condition C1
produces result R only under condition C3, and so on. In contrast, the idea of
the design strategy is to try to find conditions under which one can obtain a
presently unobtainable result. Both strategies move the researcher away from
theory testing and towards obtaining a more detailed empirical picture.
As Greenwald et al. (1986) recognized, these strategies can be criticized.
They cited Cronbach (1975), for example, as arguing that strategies such as
these, rather than leading to more scientific progress, instead lead to an infinitely
lengthy process of accumulating increasingly trivial findings. Greenwald et al.’s
counter to this criticism is that interactions between conditions can be stated
in either a more empirical or a more theoretical way, and if one chooses the
latter, then Cronbach’s criticism no longer applies. But there is a problem
with this counter-argument. If one describes complicated interactions between
conditions with theoretical terms rather than with empirical ones, then over-
generalization is forced because the obtained interactions between conditions
doubtless interact with some other condition yet to be specified. Given that
the main point of the Greenwald et al. article was to avoid overgeneralization,
and given that their counter-argument to Cronbach assumes a strategy that, if
used, would result in precisely that effect, they can hardly be said to have suc-
cessfully addressed Cronbach’s criticism.
Nevertheless, the condition-seeking and design strategies may be useful for
a completely different reason that relates to the present discussion of falsifi-
cation. Specifically, these strategies seem likely to make one aware of some
of the auxiliary assumptions that were implicit in the original research or to
make one aware of other possible auxiliary assumptions that could be used.
Because the process of falsification is so dependent on auxiliary assumptions,
any strategy that leads researchers to become more aware of the ones they are
using or to discover others that could be used is likely to increase the efficacy
of falsification efforts. Ironically, although Greenwald et al. wanted to move
away from a theory-testing paradigm, the methods they espoused, if followed,
would be likely to result in better tests of theories.


Conclusion

Although the necessity of auxiliary assumptions to derive empirical hypotheses
renders absolute falsification impossible, it is nevertheless possible to make
progress in that direction depending on how creatively one selects and uses
auxiliary assumptions. By explicitly considering the conditions under which
predicted and unpredicted results may or may not occur, the creative selection
of auxiliary assumptions is likely to be augmented. A broad knowledge of
research outside one’s own area is also likely to help in this regard because it
increases the number of areas from which auxiliary assumptions can be drawn.
As was illustrated by the Freud example, sometimes theories in one area can act
as useful auxiliary assumptions to test theories in other areas. Finally, the mere
consciousness of the importance of auxiliary assumptions may serve as a prime
to increase the creativity with which researchers select them.
Whether a researcher uses one of the methods listed in the foregoing
paragraph or a different method to increase the creativity with which he or she
selects and uses auxiliary assumptions is not a matter of overwhelming concern.
What is of overwhelming concern is that the use of any method to creatively
select and use auxiliary assumptions to test a theory is unlikely if one begins by
assuming that the theory is not falsifiable. Consequently, I have argued that all
theories are amenable, in principle, to some degree of falsification, depending
on how adept one is at choosing auxiliary assumptions. In addition, several
examples were provided of tests that could have resulted in a reasonable degree
of falsification or that actually did result in a reasonable degree of falsification
of ideas that had been said by at least some people to not be falsifiable.
My fervent hope is that, rather than devoting research efforts to explaining why
particular theories are not falsifiable, researchers will devote those efforts either
to criticizing theories on other grounds or to working harder to actually test them.

Notes

1. The reader may consider the discovery of a crucial prayer of this sort to be unlikely.
But how unlikely would it have seemed to an 18th-century chemist if someone had
suggested the possibility that one day scientists would determine the chemical
composition of the stars from the light that they radiate?
2. It may be possible to save the concept of perceived behavioral control by treating
it as a superordinate concept that has distinct “control” and “difficulty” components.
However, this treatment of perceived behavioral control clearly differs from Ajzen’s
original (e.g., 1988) one.
3. I thank two anonymous reviewers for their helpful comments on an earlier version
of this article.
4. Another problem with psychology theories is that they tend to be mere summariza-
tions of empirical findings rather than real theories. Although this issue is too tan-
gential to the main points of present concern for further elaboration, Stam (1992)
has discussed it in some detail.

References

Abelson, R.P., Kinder, D.R., Peters, M.D., & Fiske, S.T. (1982). Affective and semantic
components in political person perception. Journal of Personality and Social
Psychology, 42, 619–630.
Ajzen, I. (1988). Attitudes, personality, and behavior. Chicago: Dorsey.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human
Decision Processes, 50, 179–211.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social
behavior. Englewood Cliffs, NJ: Prentice-Hall.
Ajzen, I., & Fishbein, M. (2005). The influence of attitudes on behavior. In D. Albarracin,
B.T. Johnson, & M.P. Zanna (Eds.), The handbook of attitudes (pp. 173–221).
Mahwah, NJ: Erlbaum.
Bernstein, D.A., Clarke-Stewart, A., Roy, E.J., Srull, T.K., & Wickens, C.D. (1994).
Psychology (3rd ed.). Boston: Houghton Mifflin.
Betsch, T., & Pohl, D. (2002). Tversky and Kahneman’s availability approach to frequency
judgment: A critical analysis. In P. Sedelmeier & T. Betsch (Eds.), Frequency processing
and cognition (pp. 109–119). New York: Oxford University Press.
Breckler, S.J. (1984). Empirical validation of affect, behavior, and cognition as
distinct components of attitude. Journal of Personality and Social Psychology,
47, 1191–1205.

Breckler, S.J., & Wiggins, E.C. (1989). Affect versus evaluation in the structure of
attitudes. Journal of Experimental Social Psychology, 25, 253–271.
Crites, S.L., Fabrigar, L.R., & Petty, R.E. (1994). Measuring the affective and
cognitive properties of attitudes: Conceptual and methodological issues. Personality
and Social Psychology Bulletin, 20, 619–634.
Cronbach, L.J. (1975). Beyond the two disciplines of scientific psychology. American
Psychologist, 30, 116–127.
Duhem, P. (1954). The aim and structure of physical theory (P.P. Wiener, Trans.).
Princeton, NJ: Princeton University Press. (Original work published 1906)
Fishbein, M. (1963). An investigation of the relationships between beliefs about an
object and the attitude toward that object. Human Relations, 16, 233–239.
Fishbein, M. (1967). Attitude and the prediction of behavior. In M. Fishbein (Ed.),
Readings in attitude theory and measurement (pp. 477–492). New York: Wiley.
Fishbein, M. (1980). Theory of reasoned action: Some applications and implications.
In H. Howe & M. Page (Eds.), Nebraska Symposium on Motivation, 1979 (pp. 65–116).
Lincoln: University of Nebraska Press.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An intro-
duction to theory and research. Reading, MA: Addison-Wesley.
Fisher, S., & Greenberg, R.P. (1996). Freud scientifically reappraised: Testing the theories
and therapy. Oxford, UK: Wiley.
Freud, S. (1955). Group psychology and the analysis of the ego. In J. Strachey (Ed.
and Trans.), The standard edition of the complete psychological works of Sigmund
Freud (Vol. 18, pp. 169–175). London: Hogarth. (Original work published 1922)
Freud, S. (1959). Character and anal eroticism. In J. Strachey (Ed. and Trans.), The
standard edition of the complete psychological works of Sigmund Freud (Vol. 9,
pp. 67–143). London: Hogarth. (Original work published 1908)
Gannon, L. (2002). A critique of evolutionary psychology. Psychology, Evolution, &
Gender, 4, 173–218.
Greenwald, A.G., Pratkanis, A.R., & Leippe, M.R. (1986). Under what conditions
does theory obstruct research progress? Psychological Review, 93, 216–229.
Greve, W. (2001). Traps and gaps in action explanation: Theoretical problems of a
psychology of human action. Psychological Review, 108, 435–451.
Hellberg, T. (2006). The puzzle: Exploring the evolutionary puzzle of male homo-
sexuality. Archives of Sexual Behavior, 35, 243–244.
Lakatos, I. (1978). The methodology of scientific research programmes. Cambridge,
UK: Cambridge University Press.
Liska, A.E. (1984). A critical examination of the causal structure of the Fishbein/Ajzen
model. Social Psychology Quarterly, 47, 61–74.
McGuire, M. (2006). Steps towards an evolutionary-based theory of psychotherapy—I.
Clinical Neuropsychiatry: Journal of Treatment Evaluation, 3, 162–169.
Meehl, P.E. (1990). Appraising and amending theories: The strategy of Lakatosian
defense and two principles that warrant using it. Psychological Inquiry, 1, 108–141.
Meehl, P.E. (1997). The problem is epistemology, not statistics: Replace significance
tests by confidence intervals and quantify accuracy of risky numerical predictions.
In L. Harlow, S.A. Mulaik, & J.H. Steiger (Eds.), What if there were no significance
tests? (pp. 393–425). Mahwah, NJ: Erlbaum.
Miniard, L.E., & Cohen, J.B. (1981). An examination of the Fishbein behavioral intentions
model’s concept and measures. Journal of Experimental Social Psychology, 17, 309–329.

Myers, D.G. (1992). Psychology (3rd ed.). New York: Worth.

O’Donohue, W.T. (1989). The (even) bolder model: The clinical psychologist as
metaphysician-scientist-practitioner. American Psychologist, 44, 1460–1468.
Ogden, J. (2003). Some problems with social cognition models: A pragmatic and
conceptual analysis. Health Psychology, 22, 424–428.
Popper, K.R. (1959). The logic of scientific discovery. New York: Basic Books.
Popper, K.R. (1963). Conjectures and refutations. London: Routledge.
Popper, K.R. (1972). Objective knowledge. Oxford, UK: Oxford University Press.
Popper, K.R. (1983). Realism and the aim of science. London: Routledge.
Putnam, H. (1975). Mind, language, and reality: Philosophical papers (Vol. 2).
Cambridge, UK: Cambridge University Press.
Quine, W.V.O. (1980). Two dogmas of empiricism. In W.V.O. Quine (Ed.), From a
logical point of view (2nd ed., pp. 20–46). Cambridge, MA: Harvard University Press.
(Original work published 1953)
Roth, W.T., Wilhelm, F.H., Pettit, D., & Meuret, A.E. (2005). Rescuing the hyperven-
tilation theory of panic: Reply to Ley (2005). Psychological Bulletin, 131, 199–201.
Ryckman, R.M. (1993). Theories of personality (5th ed.). Pacific Grove, CA: Brooks/Cole.
Sarnoff, C. (1976). Latency. Lanham, MD: Jason Aronson, Inc.
Smedslund, G. (2000). A pragmatic basis for judging models and theories in health
psychology: The axiomatic method. Journal of Health Psychology, 5, 133–149.
Smedslund, J. (1991). The pseudoempirical in psychology and the case for psycho-
logic. Psychological Inquiry, 2, 325–338.
Stam, H.J. (1992). The demise of logical positivism: Implications of the Duhem–Quine
thesis for psychology. In C. Tolman (Ed.), Positivism in psychology: Historical and
contemporary problems (pp. 17–24). New York: Springer-Verlag.
Stasson, M., & Fishbein, M. (1990). The relation between perceived risk and preven-
tive action: Within-subjects analysis of perceived driving risk and intentions to wear
seat belts. Journal of Applied Social Psychology, 20, 1541–1557.
Trafimow, D. (1998). Attitudinal and normative processes in health behavior. Psychology
and Health, 13, 307–317.
Trafimow, D. (2003). Hypothesis testing and theory evaluation at the boundaries:
Surprising insights from Bayes’s theorem. Psychological Review, 110, 526–535.
Trafimow, D., Brown, J., Grace, K., Thompson, L., & Sheeran, P. (2002). The relative
influence of attitudes and subjective norms from childhood to adolescence: Between-
participants and within-participants analyses. The American Journal of Psychology,
115, 395–414.
Trafimow, D., & Finlay, K. (1996). The importance of subjective norms for a minority
of people. Personality and Social Psychology Bulletin, 22, 820–828.
Trafimow, D., & Fishbein, M. (1994a). The importance of risk in determining the
extent to which attitudes affect intentions to wear seat belts. Journal of Applied Social
Psychology, 24, 1–11.
Trafimow, D., & Fishbein, M. (1994b). The moderating effect of behavior type on the
subjective norm–behavior relationship. Journal of Social Psychology, 134, 755–763.
Trafimow, D., & Fishbein, M. (1995). Do people really distinguish between behav-
ioural and normative beliefs? British Journal of Social Psychology, 34, 257–266.
Trafimow, D., & Sheeran, P. (1998). Some tests of the distinction between cognitive
and affective beliefs. Journal of Experimental Social Psychology, 34, 378–397.

Trafimow, D., Sheeran, P., Conner, M., & Finlay, K.A. (2002). Evidence that perceived
behavioural control is a multidimensional construct: Perceived control and perceived
difficulty. British Journal of Social Psychology, 41, 101–121.
Triandis, H.C. (1980). Values, attitudes, and interpersonal behavior. In H. Howe &
M. Page (Eds.), Nebraska Symposium on Motivation, 1979 (pp. 195–259). Lincoln:
University of Nebraska Press.
Wallach, L., & Wallach, M.A. (2001). A response on concepts, laws and measurement
in social psychology. Theory & Psychology, 11, 489–494.
Wallach, M.A., & Wallach, L. (1998). When experiments serve little purpose: Misguided
research in mainstream psychology. Theory & Psychology, 8, 183–194.
Wicker, A.W. (1969). Attitudes versus actions: The relationship of verbal and overt
behavioral responses to attitude objects. Journal of Social Issues, 25, 41–78.
Ybarra, O., & Trafimow, D. (1998). How priming the private self or collective self
affects the relative weights of attitudes or subjective norms. Personality and Social
Psychology Bulletin, 24, 362–370.

DAVID TRAFIMOW is Professor of Psychology at New Mexico State University.

He has published articles on a variety of topics, including philosophical and sta-
tistical issues in psychology, attitudes, attribution, and self-concepts. ADDRESS:
Department of Psychology, MSC 3452, New Mexico State University, PO Box
30001, Las Cruces, NM 88003–8001, USA. [email:]
