
Synthese

DOI 10.1007/s11229-015-0836-8

Tool-box or toy-box? Hard obscurantism in economic modeling

Jon Elster1

Received: 6 January 2015 / Accepted: 19 July 2015


© Springer Science+Business Media Dordrecht 2015

Abstract “Hard obscurantism” is a species of the genus scholarly obscurantism.
A rough intensional definition of hard obscurantism is that models and procedures
become ends in themselves, dissociated from their explanatory functions. In the present
article, I exemplify and criticize hard obscurantism by examining the writings of emi-
nent economists and political scientists.

Keywords Rational choice theory · Mathematical and statistical modeling ·
Irrationality · Bad science

1 Introduction

Pascal Engel has written incisively on “soft obscurantism” in the humanities, notably
in the wonderful Instructions aux académiques, written by him but published under
the penname Federico Tagliatesta, with a preface signed by Pascal. When he sent me
the book and I read it, I was taken in by the hoax—I believed the story about the young
Italian philosopher who died tragically in an accident. More recently, at a conference
for which I wrote this paper, he presented a hilarious, erudite and conceptually sharp
analysis of the distinction between stupidity and folly. It’s good to have him as a
partner in the struggle against obscurantism, although “mit der Dummheit kämpfen
Götter selbst vergebens”.
In the present article I consider the less frequently discussed phenomenon of “hard
obscurantism”, a species of the genus scholarly obscurantism. In academic debates, a more

I am grateful for the comments by two anonymous referees.

Jon Elster
je70@columbia.edu

1 Columbia University, New York, USA


common term for obscurantism is “bullshit”, first identified as an intellectual
pathology by Frankfurt (1988); later discussions include Cohen (2002) and Gjelsvik (2006).
I shall not enter into the conceptual debate concerning the fine grain of bullshit. The
project is perhaps hopeless, since, as any reader of undergraduate essays will know,
confusion resists precise capture. One may perhaps distinguish between obscure
writers and obscurantist writers. The former aim at truth, but do not respect the norms for
arriving at truth, such as focusing on causality, acting as the Devil’s Advocate, and
generating falsifiable hypotheses. The latter do not aim at truth, and often scorn the
very idea that there is such a thing as the truth.
The preceding remarks apply in obvious ways to soft obscurantisms. The best way
of characterizing this cluster is by extension: it includes post-modernism, subaltern
theory, queer theory, deconstruction, psychoanalysis, functionalist explanations, mul-
ticulturalism, structuralism, and several others (see Elster 2012 for examples). Beyond
the weak or strong disregard for truth, these schools or movements have so little in
common that a definition by intension seems impossible.
So far, bullshittologists have not focused their attention on hard obscurantism.
The term “hard” is vague, but points in the direction of quantitative, mathematical,
computer-based, or formal analyses. In the broadest sense, hard obscurantism can be
found in a number of academic disciplines. I believe that analytical philosophy and
linguistics sometimes exhibit pointless or excessive formalism. Much work is model-
driven, not world-driven. These disciplines are not, however, my focus here. I shall
consider hard obscurantism in the social sciences, notably in economics and in the
increasing body of political science that relies on economic models.
Before I proceed, a comment on the relation between the two forms of obscurantism
may be in order. Historically, the introduction of quantitative or formal modes of
analysis may have been a reaction to prevalent forms of soft obscurantism. Analytical
philosophy arose to some extent as a reaction to the perceived soft obscurantism of
Continental philosophy. Mathematical economic models arose to some extent as a
reaction to purely verbal economics, which prevented economists from estimating the
net effect of the many causal mechanisms at work in the economy. Many of these
reactions were salutary. Yet by virtue of psycho-sociological mechanisms that I shall
not consider here (but see Elster 2009a, 2012), the models often took on a life of
their own and became increasingly dissociated from their original aims. Once hard
obscurantism took off, its existence seemed to justify soft obscurantism. In political
science, for instance, the “perestroika” movement that briefly flourished around 2000
was a soft-obscurantist reaction to what was correctly perceived as a pernicious turn
of the discipline towards hard obscurantism. Each extreme might seem to justify the
other. Scholars who fought a two-front war against both ran the risk of being attacked
by both or enlisted as allies by both.
To characterize hard obscurantism in the social sciences, let me once again begin
with an extensional definition. The case I consider in this article is, as in the title of a
book by Green and Shapiro (1994) that usefully supplements my analysis, pathologies
of rational-choice theory. I want to emphasize, however, that I believe rational-choice
theory in general, and game theory more specifically, have immense conceptual value.
The simple budget-line-cum-indifference-curve analysis found in any microeconomics
textbook created light where previously there was semi-obscurity. The observation


that it can be strategically rational to burn one’s bridges or one’s ships made sense
of behavior that had seemed unintelligible or irrational. The distinction among the
Prisoner’s Dilemma, the Assurance Game (or Stag Hunt), the Battle of the Sexes and
the Game of Chicken has illuminated complex structures of social interaction that
were previously only dimly understood. We understand today why a bad state may
persist indefinitely if it is an equilibrium, in which no single agent has an incentive to
change behavior unilaterally. In fact, I have suggested elsewhere (Elster 2009a) that
hard obscurantism may itself be a bad equilibrium.
Rational-choice theory can also have great practical value, as a routine and indis-
pensable tool for ministries of finance, central banks and similar institutions. Assuming
that consumers and producers respond rationally to incentives, institutions can set inter-
est rates or tax rates to achieve socially desirable goals. Typically, though, the theory
only allows predictions of short-term effects of small changes. Since reforms are usu-
ally small, incremental and reversible, this is sufficient for most practical purposes.
The following contrast may illustrate the limits of the approach. It is possible, or so I
assume, to estimate pretty accurately the effect of a one per cent increase in the price
of alcoholic beverages on the legal purchase of such liquids. I do not think, however,
anyone can estimate the effect of tripling the price, which might induce smuggling
and home brewing to an incalculable extent. For an even more dramatic example, I
do not think anyone can estimate the effects of legalizing hard drugs. To do so, one
would have to estimate the effect of legalization on preference formation, and not sim-
ply the effect on consumption for given preferences of rational individuals. Nobody
understands how preferences are formed and transformed.
The U. S. Constitution (Art. I.8) presupposes rational, incentive-responsive agents
when it gives Congress the power to “promote the progress of science and useful arts,
by securing for limited times to authors and inventors the exclusive right to their
respective writings and discoveries”. More recently, Nobel prizes in economics have
been awarded for the design of auctions and for matching medical interns to hospitals
(among other pairs). The designers of incentives systems should always keep in mind,
however, the possibility that the schemes may be gamed by rational agents (Watts 2011,
pp. 47–53). The perverse effects of using citation counts to allocate funds to academic
institutions and individual scholars offer a well-known illustration. Using a salami-
slicing system, many authors divide up their research results for separate publication in
a series of “least publishable units” (Furner 2014, p. 103). This example, which could
be multiplied, does not of course amount to an objection to rational-choice theory,
only to its practical usefulness.
When used in a commonsensical way, rational-choice theory can also have sub-
stantial empirical value, in helping us to explain behavior. Let me illustrate by some
examples from Hume’s History of England. He affirmed that the reason why some
early popes created very strict regulations of divorce and marriage among relatives, up
to the seventh degree of affinity, was to profit from the dispensations they could grant
(Hume 1983, vol. I p. 267). He also argued that the tendency of barons to stay on their
estates rather than at court was individually rational behavior, although undermining
the interest of their body (ibid. p. 461–462). Finally, Hume observed that Elizabeth
I, knowing that every heir would be a dangerous rival, deliberately did not name her
successor (ibid., vol. IV, p. 60, 82). Closer to home, we constantly, if usually implicitly,


explain the behavior of other people by assuming that they act rationally, for instance
when not crossing the street on a red light. In such cases, the explanations do not rely on
the strong assumptions that I discuss later, only on robust cost-benefit considerations.
Rational-choice models are not the only sources of hard obscurantism in the social
sciences. I shall briefly and without much nuance mention three other theoretical
approaches that can—but do not necessarily—favor hard obscurantism.
Statistical models can illustrate hard obscurantism in economics and political sci-
ence. In his criticism of the use of such models in the social sciences, Freedman (2005,
2010) identifies gross errors in six articles published in the leading journals of these
disciplines. It goes without saying that any model or theory can be used in sloppy
and irresponsible ways. The use of statistics in the courtroom is a notorious example.
When elite journals publish bad science, however, the situation is serious. In a famous
article on “Statistical models and shoe leather”, Freedman (2010, p. 46) comments on
regression analysis in the following terms:
A crude four-point scale may be useful: (1) Regression usually works, although
it is (like anything else) imperfect and may sometimes go wrong. (2) Regression
sometimes works in the hands of skillful practitioners, but it isn’t suitable for
routine use. (3) Regression might work, but it hasn’t yet. (4) Regression can’t
work. Textbooks, courtroom testimony, and newspaper interviews seem to put
regression into category 1. Category 4 seems too pessimistic. My own view is
bracketed by categories 2 and 3, although good examples are quite hard to find.
In my discussion of rational-choice pathologies, I shall follow Freedman’s example
by criticizing work by prestigious economists and political scientists published in elite
journals or by elite publishers. It would be an easy and pointless victory to criticize
work that is bad by the standards of the discipline.
Agent-based models (a form of simulation) can also illustrate hard obscurantism
in the social sciences. As originally introduced by Schelling (1971) in a study of
race-based residential segregation, these models had great appeal, because of their
transparency. One could follow, step by step, the mechanism by which individuals
who prefer to live in a mixed neighborhood, but wish that a majority belong to their
own race, make residential choices that lead to complete segregation. More recent
models are more complex and highly opaque. As they usually have many parameters,
which can take on many values, one can fiddle with the settings to get virtually any
outcome. In this respect agent-based models resemble the data-mining and curve-
fitting of many statistical models. The results rarely have the ex-post obviousness
that characterizes good science and that Schelling’s model possessed. Agent-based
models, from being a tool-box, have largely become a toy-box.
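A minimal one-dimensional variant conveys the original model's transparency. The sketch below is not Schelling's own specification: the ring topology, neighborhood radius, tolerance threshold and movement rule are all simplifying assumptions made for illustration.

```python
import random

def happy(grid, i, radius=2, threshold=0.5):
    """True if at least `threshold` of the agent's neighbors share its type."""
    n = len(grid)
    neighbors = [grid[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
    same = sum(1 for t in neighbors if t == grid[i])
    return same / len(neighbors) >= threshold

def step(grid, rng):
    """Move one randomly chosen unhappy agent to a spot where it would be happy."""
    unhappy = [i for i in range(len(grid)) if not happy(grid, i)]
    if not unhappy:
        return False  # everyone is content: an equilibrium has been reached
    i = rng.choice(unhappy)
    agent = grid.pop(i)
    for _ in range(100):  # sample candidate positions at random
        j = rng.randrange(len(grid) + 1)
        grid.insert(j, agent)
        if happy(grid, j):
            return True
        grid.pop(j)
    grid.insert(i, agent)  # no acceptable spot found; stay put
    return True

def clustering(grid):
    """Average fraction of immediate neighbors sharing an agent's type."""
    n = len(grid)
    return sum((grid[i] == grid[(i - 1) % n]) + (grid[i] == grid[(i + 1) % n])
               for i in range(n)) / (2 * n)

rng = random.Random(0)
grid = [rng.choice("AB") for _ in range(40)]
before = clustering(grid)
for _ in range(500):
    if not step(grid, rng):
        break
after = clustering(grid)
# agents content with a merely equal mix typically end up far more segregated
print(before, after)
```

Every step of the mechanism can be followed by eye, which is exactly the transparency that the opaque, many-parameter successors have lost.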
Behavioral economics, too, is running the risk of becoming a toy-box. There is
probably not a week without some highly sophisticated experiment demonstrating a
new “mechanism” or a new “effect” appearing on the Internet. Many of the findings
seem to have little relevance outside the laboratory; at least, many authors do not
try to demonstrate relevance “in the wild”. When authors cite real-world cases to
demonstrate, say, the sunk-cost effect, they rarely take the time to explore (and refute)
alternative explanations of the alleged examples. The sunk-cost effect certainly seems
to “fit” the Vietnam war or the construction of the Concorde airplane, but so do many


other hypotheses. It is ironical that behavioral economics, which was largely inspired
by the explanatory failures of rational-choice theory, is running into similar problems.
A rough intensional definition of hard obscurantism is that models and procedures
become ends in themselves, dissociated from their explanatory functions. The term
“toy-box” that I used to characterize (some instances of) agent-based modeling and
behavioral economics can be extended to (some instances of) rational-choice modeling
and statistical modeling. Concerning the latter, Ragnar Frisch deplored the tendency of
econometrics to become “playometrics”. (As he was the founding editor of Economet-
rica, this was a rare instance of insider criticism of hard obscurantism.) Concerning
the former, the common defense of rationality as an “as-if” assumption also confirms
this characterization. In the Summary, I propose a more fine-grained analysis.
I shall now proceed as follows. In Sect. 2 I set out the basic structure of rational-
choice theory, and consider the two ways in which it is liable to fail: by the
indeterminacy of the models and by the irrationality of the agents whose behavior
the models try to explain. In Sect. 3 I exemplify and criticize hard obscurantism by
examining the writings of eminent economists and political scientists, including three
winners of the Nobel Prize in economics and two winners of the John Bates Clark
Medal who have not (yet) received the Nobel prize. Section 4 offers a brief Summary.

2 Rational-choice theory and its failures

Rational-choice theory is first a normative theory, advising agents about what to do to


realize their aims as well as possible, and then an explanatory theory trying to account
for the behavior of the agents by assuming that they follow this advice. The idea of
realizing your aims as well as possible can be disaggregated into three maximization
operations: among the available options, choose the one you believe will best realize
your aims; use the information available to you to form the beliefs most likely to be true;
if necessary collect an optimal amount of new information. Diagrammatically:
    Desires (preferences) ────→ Action ←──── Beliefs
              │                                 ↕
              └──────────→ Information ←────────┘


A few comments may be useful for clarification. (i) A direct influence of desires
on beliefs (wishful thinking) is incompatible with rationality. (ii) Because desires
can (rationally) influence the amount of information one should collect, with high-stakes
decisions requiring more information, and because information shapes beliefs, an
indirect influence of desires on beliefs is compatible with rationality. (iii) In addition,
the optimal amount of information to collect is also shaped by the agent’s beliefs about
the value of the information and the cost of acquiring it. (iv) The loop reflects the fact
that the expected benefits may be modified by what is found in the search itself.
Philosophers tend to use this belief-desire model of rational action. Economists tend
to use a discounted-expected-utility model. For my purposes here, they are equivalent.
In the economist model, agents attach cardinal utilities to each possible outcome of
each possible action, and subjective probabilities to the occurrence of each possible
outcome of each possible action. The expected utility of an action is defined as the
weighted sum of the utilities of its possible outcomes, with the probabilities serving as
weights. If outcomes of an action will occur in the future and the agent has a positive
rate of time preference, the expected utility must be discounted to its present value. A
rational agent chooses the action that has the greatest discounted-expected utility.
This thumbnail sketch of the theory of rational choice ignores a host of subtleties
that are irrelevant for my purposes. In preparation for the next section, I should, however,
mention one complication. The optimal “action” predicted by a theory need not be an
ordinary physical action from which the agent derives immediate utility, such as eating
an apple rather than an orange. Rather, it can be the decision to use a lottery device that
will direct her to eat an apple with probability p and to eat an orange with probability
(1−p). The use of such mixed strategies can be important in game-theoretic contexts.
Rational-choice theory as thus defined can fail in two ways. First, it can fail to yield
a sharp determinate prediction. Second, it may yield a sharp prediction that fails the
confrontation with the observed facts. The first problem is one of the indeterminacy
of the theory, the second that of the irrationality of the agents.
Theory indeterminacy can arise for several reasons. I shall distinguish three sources:
an infinite regress in the determination of the optimal amount of information, uncer-
tainty (brute or strategic), and cognitive limitations. (The failure to recognize these
limitations is a form of irrationality).
In some cases, rational-choice theory cannot offer agents determinate advice about
how much information to gather. I suspect but cannot prove that this failure is the
rule rather than the exception. Winter (1964, p. 252) observed that the idea of reduc-
ing satisficing to a form of maximizing creates an infinite regress, since “the choice
of a profit-maximizing information structure itself requires information, and it is not
apparent how the aspiring profit maximizer acquires this information or what guaran-
tees that he does not pay an excessive price for it”. Along the same lines, Johansen
(1977, p. 144) characterized the search process as “like going into a big forest to pick
mushrooms. One may explore the possibilities in a certain limited region, but at some
point one must stop the explorations and start picking because further explorations as
to the possibility of finding more and better mushrooms by walking a little bit further
would defeat the purpose of the hike. One must decide to stop the explorations on an
intuitive basis, i.e. without actually investigating whether further exploration would
have yielded better results”. When rational belief formation is indeterminate, one does


indeed have to rely on intuition. Even assuming that an observer can predict the out-
come of intuition as based on heuristics and biases, that prediction will not reflect a
normative prescription for the agent.
Uncertainty in the technical sense means that agents (i) know all possible actions
and (ii) all possible outcomes of each action, but (iii) cannot assign precise numerical
probabilities to the outcomes. Rational-choice theory does offer some weak advice
about what to do in such situations: follow some decision rule based on the worst
and the best outcomes of each action (Arrow and Hurwicz 1971). The maximin rule
(choosing the option with the best worst-consequence) is commonly recommended, but
“maximax” (choosing the option with the best best-consequence) is equally compatible
with the (weak) demands of rationality. For all practical purposes, therefore,
the theory does not offer any advice.
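The two decision rules, and the fact that they can point in opposite directions, fit in a toy sketch (the options and payoffs are invented for illustration):

```python
# Decision under uncertainty: each option maps to the list of its possible
# payoffs, with no probabilities attached to them.

def maximin(options):
    """Choose the option whose worst outcome is best."""
    return max(options, key=lambda o: min(options[o]))

def maximax(options):
    """Choose the option whose best outcome is best."""
    return max(options, key=lambda o: max(options[o]))

options = {"safe": [2, 3], "risky": [0, 10]}
print(maximin(options))  # "safe": its worst outcome (2) beats "risky"'s worst (0)
print(maximax(options))  # "risky": its best outcome (10) beats "safe"'s best (3)
```

Both answers satisfy the weak demands of rationality under uncertainty, which is precisely why the theory offers no practical guidance here.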
Brute uncertainty can arise in “games against nature”, as in earthquake predictions
or predictions of global warming (Weitzman 2009). It can also arise in complex social
systems, such as financial markets. (The latter are also of course subject to strategic
uncertainty.) Taleb (2007, pp. 198–200) offers a general analysis of the issue:
This problem [estimating the shape and the parameters of a distribution] has been
seemingly dealt away with the use of “off-the-shelf” probability distributions.
But distributions are self-referential. Do we have enough data? If the distribution
is, say, the traditional bell curve, then yes, we may be able to say that we have sufficient
data—for instance the Gaussian itself tells us how much data we need. But if the
distribution is not from such a well-bred family, then we may not have enough
data. But how do we know which distribution we have on our hands? Well, from
the data itself. […]

So we can state the problem of self-reference of statistical distributions in the


following way. If (1) one needs data to obtain a probability distribution to gauge
knowledge about the future behavior of the distribution from its past results,
and if, at the same time, (2) one needs a probability distribution to gauge data
sufficiency and whether or not it is predictive outside its sample, then we are
facing a severe regress loop. We do not know what weight to put on additional
data.

Strategic uncertainty arises when agents have to form beliefs about one another,
including beliefs about beliefs etc. In theory, one can short-circuit the looming infinite
regress by the notion of an equilibrium set of strategies, defined as strategies that
are optimal against each other. In non-zero-sum games, these often involve mixed
strategies that are only weakly optimal against each other, in the sense that an agent
can do just as well by adopting one of the strategies in the mix as a pure strategy. In that
case, however, why should other agents believe she is playing her mixed equilibrium
strategy? Would it not be rational to assume that she plays it safe and adopts a maximin
pure strategy? Strategic uncertainty can also arise when a game has several equilibria
in pure strategies and none of them is weakly Pareto-superior to the others. In that
case, even agents who are perfectly informed about the nature of the game and about
each other (common knowledge) have no rational grounds for tacitly coordinating on
one equilibrium rather than on another.
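The weak optimality of mixed equilibrium strategies is easy to verify numerically. The sketch below uses a Battle of the Sexes with illustrative payoff numbers: in equilibrium the column player mixes exactly so as to make the row player indifferent, so either of Row's pure strategies earns the equilibrium payoff.

```python
from fractions import Fraction

# Battle of the Sexes (illustrative payoffs):
# payoffs[row_move][col_move] = (row_payoff, col_payoff)
payoffs = {
    "A": {"A": (2, 1), "B": (0, 0)},
    "B": {"A": (0, 0), "B": (1, 2)},
}

def row_payoff(row_move, q):
    """Row's expected payoff from a pure move when Column plays A with probability q."""
    return q * payoffs[row_move]["A"][0] + (1 - q) * payoffs[row_move]["B"][0]

# In the mixed equilibrium Column plays A with probability 1/3, chosen
# precisely so that Row is indifferent between her two pure strategies.
q = Fraction(1, 3)
print(row_payoff("A", q), row_payoff("B", q))  # both equal 2/3
```

Since Row loses nothing by abandoning her own mix, other players have no particular reason to expect her to play it, which is the difficulty raised in the text.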


While real and important, the issues of infinite regress and uncertainty are minor
compared to a much more elementary, indeed almost trivial question: How can one
impute to real-life agents the capacity to make in real time the calculations that occupy
many pages of mathematical appendixes in the leading journals and that can be
acquired only through years of professional training? Actually, the question is even
more elementary. The calculations assume that the utilities and subjective probabilities
of the agents are well-defined, stable and fine-grained. A large body of psychological
literature completely undermines this assumption. Let me just focus on the fineness
of grain. To be sure, agents may prefer immediate rewards to delayed rewards, but
the assumption of exponential time discounting has no psychological reality: it is due
only to mathematical convenience (see for instance McClure et al. 2004). To be sure,
they may also have an idea that some outcomes are more likely, perhaps much more
likely than others, but the assumption of a continuous probability distribution has no
psychological reality (see for instance Tversky and Kahneman 1974). Agents do not
have access to such data, and could not draw the proper inferences from them even
if they had. Mental states that do not exist cannot have causal efficacy or enter into
explanations.
Some rational-choice theorists are aware of this problem and try to address it. I shall
discuss four possible answers to the italicized question, three of which have actually
been proposed.
The first answer—which is mainly hypothetical—is to invoke the precedents of
Newton’s law of gravitation and of quantum mechanics. Early critics of Newton
objected to the law of gravitation that it presupposed the metaphysically absurd notion
of action at a distance. Eventually, however, everybody accepted the theory because it
worked, with an amazing degree of precision. The even more incomprehensible the-
ory of quantum mechanics, which involves not only action at a distance but objective
indeterminacy, is also accepted because its predictions are verified with nine-decimal
accuracy. Similarly, in spite of the general objections to rational-choice theory that
I have proposed, one might be willing to accept it if its predictions were verified
with comparable precision. However, anyone with the slightest acquaintance with
economics or political science will dismiss the idea as laughable. Often, scholars are
happy if they “get the sign right”.
The second and most frequent defense of the explanatory relevance of rational-
choice theory would appeal to a causal mechanism capable of simulating rationality.
Just as economists are fond of arguing that self-interest can simulate altruism,
they often claim that non-intentional mechanisms can simulate intentional optimiz-
ing. These mechanisms will generate behavior with utility-maximizing or profit-
maximizing consequences even though the agents are incapable of deriving it from
an intention to maximize. Generally speaking, there are two mechanisms that might
be capable of this feat: reinforcement and selection. The former works by causing
given behavioral units to optimize, the latter by eliminating non-optimizing units. As
defenders of rational-choice theory rarely if ever appeal to reinforcement, and since
the mechanism in any case doesn’t simulate optimality very well, I shall ignore it.
Natural selection has of course produced the kind of rough-and-ready and cogni-
tively undemanding rationality that serves us well in everyday life. As an example,
consider the Norwegian proverb: “Don’t cross the river to the other bank when you go


to fetch water”. An organism that engaged in such wasteful behavior would quickly
be eliminated. There is no reason to believe, however, that natural selection could pro-
duce the highly sophisticated strategic behaviors that the models predict. Evolutionary
game theory may have some uses, but that of sustaining the models is not one of them.
Models of “economic natural selection” do have some empirical relevance. The
work of Nelson and Winter (1984), in particular, shed some qualitative light on eco-
nomic development. Yet they do not provide the sought-for simulation of rationality,
for several reasons. As with agent-based modeling in general, it is often hard to know
the extent to which the results are artifacts of the assumptions. Moreover, these results
do not show optimizing behavior. In a population of firms that evolve by innovation
and imitation there is always a substantial proportion of non-optimizing firms. Since
firms are adapting to a rapidly changing environment, they are aiming at a moving
target. In any case, there is no hope whatsoever that the simulations could mimic
the models all the way down to the mathematical appendices. Finally, and even more
important, bankruptcy-driven or takeover-driven elimination of inefficient agents could
never generate optimizing behavior in non-market societies or in non-market sectors
in market societies. I conclude that appeal to selection is pure hand-waving.
A third defense is that although cognitively limited agents are liable to make mis-
takes, these will cancel each other out in the aggregate. If we required each person in a
group to carry out calculations of the order of difficulty, say, of multiplying 49 and 73
in at most 30 seconds, we would expect there to be some mistakes, but also that these
would be symmetrically distributed around the correct answer. For some purposes,
this fact might justify the rationality assumption. When, however, the answer requires
solving differential equations or carrying out other complicated operations, there is no
reason to expect answers or guesses to be symmetrically distributed around the correct
answer. The burden of proof is on those who claim they will.
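The first half of the argument, that symmetric errors wash out in the aggregate, is easy to verify; the Gaussian noise model below is an illustrative assumption, standing in for whatever symmetric error distribution an easy task induces.

```python
import random

rng = random.Random(1)
true_value = 49 * 73  # the correct answer, 3577

# Each respondent reports the truth plus symmetric, zero-mean noise.
answers = [true_value + rng.gauss(0, 50) for _ in range(10_000)]
mean = sum(answers) / len(answers)
# the group average lands very close to the correct product; nothing in this
# construction, however, licenses the same conclusion when errors are skewed
print(mean)
```

The point in the text is that for genuinely hard problems there is no reason to expect the noise to be zero-mean in the first place.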
Friedman (1953, pp. 11–12), finally, offered two seductive analogies to persuade
his readers of the reality of maximizing behavior that does not rely on maximizing
calculations. First, “leaves [on a tree] are positioned as if each leaf deliberately sought
to maximize the amount of sunlight it receives, given the position of its neighbors, as
if it knew the physical laws determining the amount of sunlight that would be received
in various positions and could move rapidly or instantaneously from any one position
to any other desired and unoccupied position”. Second, “excellent predictions would
be yielded by the hypothesis that the [expert] billiard player made his shots as if he
knew the complicated mathematical formulas that would give the optimum directions
of travel, could estimate accurately by eye the angles, etc., describing the location of
the balls, could make lightning calculations from the formulas, and could then make
the balls travel in the direction indicated by the formulas”.
While seductive, the analogies are unpersuasive. The leaves simulate maximiza-
tion because natural selection eliminated trees that didn’t. To assume that a similar
mechanism exists for economic behavior is to beg the question. Expert billiard players
are experts because ten thousand hours of practicing enable them somehow (we don’t
know how) to make the right shots on an intuitive basis. This is, of course, a tightly
constrained situation. To extrapolate the argument to business decisions in a fluid and
opaque environment is unwarranted. Nor does the metaphor work for consumer deci-
sions. The only attempt known to me to transform the billiard metaphor into a theory


found that “individual learning methods can reliably identify reasonable search rules
only if the consumer is able to spend absurdly large amounts of time searching for a
good rule” (Allen and Carroll 2001, p. 255). This would be a case of hyperrationality
(and therefore of irrationality)—searching for the decision that would be optimal if
one were to ignore the costs of decision-making itself.
This concludes my discussion of model indeterminacy. I shall spend less space on
questions of agent irrationality, which are empirical rather than conceptual. For the
sake of brevity, I simply offer a list of irrationality-generating mechanisms (the ones
I believe to be the most important are in boldface):

• loss aversion
• hyperbolic discounting
• the sunk-cost fallacy and the planning fallacy (especially deadly in conjunction)
• the tendency of unusual events to trigger stronger emotional reactions
• the cold-hot and hot-cold empathy gaps
• trade-off aversion and ambiguity aversion
• anchoring in the elicitation of beliefs and preferences
• the representativeness and availability heuristics
• the conjunction and disjunction fallacies
• the certainty effect and the pseudo-certainty effect
• choice bracketing, framing, and mental accounting
• sensitiveness to changes from a reference point rather than to absolute levels
• status quo bias and the importance of default options
• meliorizing rather than maximizing
• motivated reasoning and self-serving biases in judgment
• flaws of expert judgments and of expert predictions
• self-signaling and magical thinking
• non-consequentialist and reason-based choice
• overconfidence and the illusion of control
• spurious pattern-finding

The list draws heavily on findings from behavioral economics (sources cited in
Elster 2009a). It should be supplemented by the tendency of emotion to induce
distorted and low-quality belief formation. These mechanisms all seem
to be robust, in the sense of affecting behavior “in the wild” and not only in the artificial
context of the laboratory.
Confronted with apparently irrational behavior, economists often try, sometimes
successfully, to make sense of it within the rational-choice framework. Some cases
of revenge are probably rational reputation-building; some cases of addiction may
be rational self-medication; and some suicides may be rational self-euthanasia. At
the same time, some revenge-seekers have an urge to act immediately, rather than
bide their time to increase the likelihood of success; some addicted smokers persist
because of their wishful thinking about the dangers of smoking; and some people
commit suicide when they are overwhelmed by shame and the shame causes them to
believe, irrationally, that it will endure forever.


3 Hard obscurantism in practice

I shall present five examples of hard obscurantism in economics and political science:
Cognitive dissonance theory as used by economists (Akerlof and Dickens, Rabin)
Endogenous patience and altruism (Becker, Mulligan)
Warm-glow theories (Kahneman and Knetsch, Caplan, Andreoni)
Mixed strategies (Dixit and Skeath)
Political transitions (Acemoglu and Robinson)

3.1 Cognitive dissonance theory as used by economists

It is not surprising that the theory of cognitive dissonance reduction appeals to econo-
mists. It is a quantitative phenomenon: the reduction, perhaps even the minimization,
of a disutility (dissonance) caused by the coexistence of several mental states or atti-
tudes. The word “cognitive” is somewhat misleading, as the states can be beliefs,
desires or, as in the case that originally inspired the theory (Festinger 1957, p. vi),
emotions.
Akerlof and Dickens (1982, p. 38) argue that workers form motivated beliefs about
job safety “according to whether the benefit [of holding the belief] exceed the cost,
or vice versa. If the psychological benefit of suppressing one’s fear in a particular
activity exceeds the cost due to increased chances of accident, the worker will believe
the activity to be safe”. Similarly, Rabin (1994, p. 178) models “a person’s difficulty of
maintaining ‘false’ beliefs with a cost function such that a utility-maximizing person
will trade off his preference for feeling good about himself with the cost of maintaining
false beliefs”. In these two models, costs of holding false beliefs arise from different
sources. Akerlof and Dickens refer to the fact that unjustified sanguine beliefs about
workplace safety increase the risk of workplace accidents. In Rabin’s model, the cost is
psychic rather than material, and involves the dissonance that arises when “engaging
in immoral activities [that conflict] with our notion of ourselves as moral people”
(ibid.). He offers the example of wearing furs at the expense of animal suffering.
Akerlof and Dickens (1982, p. 308) explicitly assume that there are no constraints
on belief formation: the worker “can believe whatever he chooses irrespective of the
information available to him”. This flies in the face both of common sense and of the
evidence: “people are not at liberty to espouse any attitude they want to: they can do
so only within the limits imposed by their prior beliefs” (Kunda 1990, p. 484). The
authors recognize that their assumption represents a “polar case”, but offer no reasons
for thinking that the conclusions generalize to more realistic cases in which belief
formation is constrained by prior beliefs.
Rabin recognizes that people cannot just adopt any belief they would like to have,
but he models the obstacles in terms of costs rather than constraints (see Elster 2004
for this distinction). “I shall assume that the person believes that there is some morally
legitimate level of the activity [e.g. wearing furs], Y, such that the person suffers
from cognitive dissonance if he chooses level X greater than Y. […] To capture the
difficulty [of believing that Y is high], I let the function C(Y) represent the psychic
cost of holding beliefs Y, where C(0) = 0 and C’(Y) > 0 for all Y” (Rabin 1994,
pp. 180–181). In his example, suppose Y = 0 is the state in which no animals are
killed for the purpose of making furs. For a person wearing furs, the cost of believing
that painful killing of animals is morally acceptable is higher than the cost of believing
that painless killing of animals is acceptable but that painful killing is not. At the same
time, the benefits from holding the former belief are greater than those of holding
the latter, since the person who believes that animal suffering is morally acceptable
can wear his fur coat with a clear conscience, whereas the person who can only bring
himself to believe that painless killing is acceptable will experience some painful (pun
intended) cognitive dissonance. For a given functional form and given parameters of
the cost and benefit functions, it might be the case, for instance, that the belief that
painless killing is acceptable but that painful killing is not is the one that maximizes
the agent’s utility.
I shall not dwell on the surreal character of these arguments, but only make some
conceptual and empirical objections. In the Akerlof–Dickens argument, the benefits
of motivated beliefs come now and the possible costs later. Both enter into the agent’s
decision to adopt the belief. For that argument to go through, we have to assume that the
unconscious is capable of making such intertemporal tradeoffs. There is no evidence
that it is. Usually the unconscious is seen as guided by the Pleasure-Principle of
seeking immediate satisfaction, whereas the capacity to make intertemporal tradeoffs
is characteristic of the conscious mind. It is a conceptual truth that for future benefits
or costs to shape present behavior, they must be mentally made present (represented)
on some mental screen. One can argue, moreover, that consciousness can be defined
behaviorally by the capacity to represent what is temporally or physically absent. This
definition is routinely adopted, for instance, in debates about animal consciousness.
For my purposes here, I need not enter the debate over these complex issues, since
I can rest my case on what I believe to be an empirical fact: there is no evidence for
unconscious intertemporal tradeoffs. The unconscious does not seem to weigh the
present benefits of false beliefs against the future costs of holding them. Nor does it
seem to weigh the present costs of false beliefs against the future benefits of holding
them. If the former trade-off is possible, the latter should be, but there is no evidence
for either. Although Winston (1980) made a case for the latter idea, he offered no
empirical evidence. Briefly summarized, he argued that if I want to quit using drugs
but find that my beliefs about their dangerous effects are insufficiently dissuasive, I
should adopt the belief that they are more dangerous than I currently believe they are,
since this belief would motivate me to suffer the withdrawal pains. Everything we
know about addiction suggests, however, the opposite: addicts persuade themselves
that the drug is less dangerous than they have reason to think it is.
Both models assume that the agent first identifies the correct belief and then modifies
it in a utility-maximizing way. In other words, they assume that the mechanism at work
is self-deception rather than wishful thinking. In the latter case, the agent just adopts
the belief she would like to be true (assuming no constraining prior beliefs) without
confronting it with the evidence that would induce a rational belief. It might indeed
be the case that she adopts by wishful thinking the very same belief that she would
have formed by considering the evidence, although this could of course only happen
by accident. In self-deception, however, it could never happen, since it is the very
discrepancy between the rational belief and the belief that the agent would like to be
true that causes her to adopt the latter. Neither Akerlof and Dickens nor Rabin pay
any attention to the huge psychological and philosophical literature on self-deception,
part of which denies the very existence of the phenomenon (see the symposium on
Mele 1997).
Without stating or defending it, the authors seem to adopt a model that has been
discarded by psychologists and philosophers alike. Their arguments make sense only
on the assumption of a homunculus—the unconscious as a small inner person capable
of behaving like a strategic conscious agent.

3.2 Endogenous patience and altruism

In a rational-choice model, preferences are usually seen as given, and certainly not as
chosen. I have just argued that an alcoholic cannot choose to believe that drinking is
more dangerous than it actually is; nor can she choose to dislike alcohol by a mere
act of the conscious or unconscious motivational machinery. She can, of course, use
indirect strategies, such as taking a drug (Disulfiram) that will make her sick if she
drinks, or announcing publicly—to make her incur social disapproval in the case of
backsliding—that she has quit drinking.
Such strategies make no sense, however, for ordinary consumption goods. The
quip, “I’m glad I don’t like spinach, because then I would eat it and I hate the
stuff” is laughable precisely because there is no reason not to eat spinach if you
like it, and therefore no reason to desire not to desire it. If tomorrow I learn that
spinach is strongly cancer-inducing, I shall simply cease to have a first-order desire
for spinach, but not as the result of a second-order desire not to have the first-order
desire. Alcohol, nicotine and other drugs are different. The desire not to desire con-
suming drugs may be causally inefficacious if it is swamped by cravings or withdrawal
symptoms.
I shall call preferences for spinach, alcohol and other consumption goods, in
the broadest sense of the term, material preferences, and oppose them to formal
preferences. The latter include risk attitudes (risk-preference or risk-aversion) and
impatience (a preference for earlier reward over later reward). I also include altru-
ism and selfishness in this category. We may now ask the following question: can
second-order preferences over first-order formal preferences be causally efficacious?
If I am a risk-lover but desire to be risk-averse, can that desire change my first-order
preference? If I have a short time horizon but desire to be able to defer gratifica-
tion, will the desire help me to resist temptations? If I am selfish (altruistic) but
desire to be altruistic (selfish), will the desire have causal efficacy? A positive answer
to these questions would justify the idea of endogenous and rational preference
formation.
It is probably true that our lives as a whole would go better if we were risk-neutral—
adopting an attitude of “You win some, you lose some”. This fact has inspired the
idea of endogenizing risk-attitudes (Palacios-Huerta and Santos 2004). Here, I focus
on efforts to endogenize patience (Becker and Mulligan 1997) and, more briefly, on
efforts to endogenize altruism (Mulligan 1997). Before I engage with their arguments,
I shall explain why the answer is indeed positive in the special case of hyperbolic
discounting.


Rational-choice models usually assume that agents discount future rewards expo-
nentially, meaning that there is a constant period-by-period discount rate. For instance,
if the agent is indifferent between one unit of utility in the next period and 0.9 unit
today, then he is indifferent between one unit of utility in the period after next and 0.81
unit today. This assumption ensures time-consistency: if an agent at time t faces the
choice between getting a small reward at time t+n and a larger reward at time t + n + m
and prefers (at time t) getting the larger delayed reward, he will still prefer the larger
reward at time t + n. By contrast, if the agent discounts the future hyperbolically, he
will—to simplify—be indifferent between one unit of utility t periods into the future
and 1/(1 + t) units today. This agent is not time-consistent: well ahead of time he
may prefer the larger delayed reward over the earlier smaller reward and then suffer a
preference reversal when the time of delivery of the early reward approaches. Mulli-
gan (1996) shows that this behavior is irrational according to a standard money-pump
criterion: a person with hyperbolic time discounting could be made to ruin himself by
a sequence of stepwise “improvements”. This argument does not, of course, invali-
date the assumption of hyperbolic time discounting, which is strongly supported by the
empirical evidence (Fredericks et al. 2004). Moreover, the money-pump argument pre-
supposes, implausibly, the existence of a “money-pumper” who has full information
about the shape of the agent’s discounting function.
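The preference reversal can be made concrete with a small worked example (the rewards and delays are mine, chosen purely for illustration). Suppose the agent values a reward of x units delivered t periods hence at x/(1 + t), and must choose between one unit of utility and two units delivered two periods later:

```latex
% Hyperbolic valuation v(x,t) = x/(1+t); small reward = 1 unit,
% large reward = 2 units, delivered two periods after the small one.
\underbrace{\frac{1}{1+5}}_{\approx\,0.17} \;<\; \underbrace{\frac{2}{1+7}}_{=\,0.25}
\qquad\text{but}\qquad
\underbrace{\frac{1}{1+0}}_{=\,1} \;>\; \underbrace{\frac{2}{1+2}}_{\approx\,0.67}
```

Seen from five periods away, the agent plans to wait for the two units; when the smaller reward becomes available, he prefers it. Under exponential discounting with factor 0.9, by contrast, 0.9^5 ≈ 0.59 < 2(0.9)^7 ≈ 0.96 and 1 < 2(0.9)^2 = 1.62, so the ranking never reverses.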
An agent who discounts the future hyperbolically and knows it, can have an incentive
to change his discounting function if there is a technology for doing so—a discounting
pill, or perhaps psychotherapy (see Elster 2015, p. 252, note 15 for a numerical exam-
ple). In a calm and reflective moment he wants, let us assume, to choose the larger
delayed reward in a certain category of choices, but knows that in each choice he will
succumb to temptation and choose the earlier, smaller reward unless he manages to
change his discounting function. This idea is, to be sure, a mere conceptual possibility,
which arises only within the artificial framework of economic theory. In reality, self-
control problems stem from other sources. I discuss the idea only as a background for
my claim, discussed below, that if discounting is exponential a deliberately induced
change of preferences is not even a conceptual possibility.
Becker and Mulligan (1997) claim that people can and do choose their rate of
(exponential) time discounting in order to improve their lifetime utility. Their basic
assumption is that “people have the option to put forth effort to increase their appre-
ciation of the future” and that “[m]ore resources spent on imagination increase the
propinquity of future pleasures and therefore their [present] value” (p. 734). For
instance, a “person may spend additional time with his aging parents in order to
appreciate the need for providing for his own old age” (p. 735; my italics). Similarly,
because “schooling can communicate images of the situations and difficulties of adult
life […], educated people should be more productive at reducing the remoteness of
future pleasures” (pp. 735–736). In fact, this effect of schooling may also provide the
motivation to seek higher education: “more patience may be the reason why some
people choose to continue their schooling” (p. 751).
In what seems like an independent reinvention of an argument offered by Toc-
queville (2004, p. 615), Becker and Mulligan also claim, if I understand them correctly,
that if individuals invest in information about the afterlife there might be a spillover
effect to life on earth: “To the extent that future-oriented capital [due to investment
in imagining life in heaven] is ‘general’—it facilitates the imagination of events at a
variety of distances into the future—a higher utility after death [sic] will even encour-
age consumption growth before death” (p. 741). They add that this effect obtains only
for those who believe they will go to heaven, which seems inconsistent with the ear-
lier assertion that their “afterlife [will be] affected by what they do while alive” (p.
740). Surely, in equilibrium the optimal investment in imagination and the optimal
amounts of good deeds should be determined simultaneously. Their discussion of this
(non-trivial) issue is so brief, however, that it is hard to tell exactly what the argument
is.
Once again, I shall not comment on the surreal aspects of some of these assertions,
but offer two general arguments against the theory. The first is a purely conceptual
objection, whereas the second criticizes the as-if character of their argument. In devel-
oping the first objection, I rely on Skog (1997, 2001). I should notify the reader that
in the late 1990s I debated these issues orally and by e-mail with several economists,
notably Gary Becker and Peter Diamond. While I did not succeed in persuading them,
the reciprocal was also true. (My best answer to their objections is in Elster 2015,
p. 252.) My failure to be persuaded may well have been due to my lack of technical
competence. The much-regretted Ole-Jørgen Skog did possess that competence, but
so far no one of the Becker-Mulligan persuasion has tried to rebut his arguments.
The following argument (Skog 2001, p. 211) offers what seems to me to be a
knock-down argument against the Becker-Mulligan theory:
For instance, consider a person with exponential discounting, valuing tomorrow’s
rewards at 40 per cent of their instantaneous value. He would always prefer one
chocolate bar at T = t + s to two chocolate bars at T = t + s + 1, whatever
the delay s. Suppose that he was offered a pill that would increase his discount
factor to 60 per cent. This obviously would induce him to wait for the two bars.
But why should the impatient self want to do that? For him one bar with a small
delay is better than two bars with a bigger delay. In this example, the myopic
actor has no real motive for reducing his discount rate (increasing his discount
factor). According to his utility function, one chocolate now is the best option.
In my own discussion of the same problem, I suppose that “scientists came up
with a discounting pill, which would increase the weight of future rewards in present
decisions. If I take the pill, my life will go better. My parents will be happy I took the
pill. In retrospect, I will be grateful that I did. But if I have a choice to take the pill or
not, I will refuse if I am rational. Any behavior that the pill would induce is already
within my reach. I could stop smoking, start exercising or start saving right now, but
I don’t. Since I do not want to do it, I would not want to take a pill that made me do
it” (Elster 2015, p. 252). The investment in more vivid impressions of aging or of the
afterlife seems to me exactly analogous to the discounting pill.
Even supposing that Skog and I are mistaken on this point, there is a more elementary
objection to the Becker-Mulligan account, viz. their neglect of the distinction between
intentions and consequences. It may be true that people who choose higher education
learn to value the future more highly, and that as a result their life goes better. These
two causal claims provide, however, no evidence that they intentionally choose higher
education in order to learn to value the future more highly. Becker and Mulligan are


simply telling a just-so story. To use a phrase that can be applied to many other cases,
they are engaged in rational-choice functionalism. Most functionalist explanations
apply to collective behavior. To cite a case that may seem extreme but is actually
quite representative, it has been alleged that feuding and vendettas can be explained
by their effect of keeping population size at a sustainable level (Boehm 1984). Becker
and Mulligan (and many others) apply similar arguments to individual behavior. The
fallacy can also be stated as that of neglecting non-explanatory benefits.
Of course, the objective consequences of behavior are easier to identify than subjec-
tive motivations. That fact does not, however, justify an exclusive focus on the former,
any more than the proverbial drunk who had lost his key was justified in looking under
the lamppost simply because the light was better there. Explanations of behavior must
appeal to antecedent mental states. The latter cannot be imputed to the agent solely
on the basis of the consequences of the behavior. In a Chicago-style reply (Friedman
1953), Becker and Mulligan might counter that they are only testing an implication
of their theory, and that the realism of the motivational assumptions is irrelevant. I
agree, however, with Gibbard and Varian (1978, p. 671) when they say that “On [our]
reading of Friedman, when a model is applied to a situation, all that is hypothesized is
that the conclusions of the applied model are close enough to the truth for the purpose
at hand. According to us, something further is hypothesized: that the conclusions are
sufficiently close to the truth because the assumptions are sufficiently close to the
truth”. In the case at hand, the assumptions, such as visiting aging parents in order to
form a more vivid impression of what aging is like, seem very far-fetched.
Finally, we may extend the analysis to the idea of “endogenous altruism” pro-
posed in Mulligan (1997). This kind of extension from an intrapersonal case to the
corresponding interpersonal case—from “future selves” to “other selves”—is often
tempting and sometimes useful. In the present case, the extension fails for the same
reasons that explain failure in the intrapersonal case.
Mulligan (1997, p. 73) argues that “parental actions affect their [sic] willingness
to sacrifice their own consumption for consumption by their child. Parents are aware
of the effect of those actions on their ‘preferences’ and take those effect into account
when determining what actions to take”. More specifically, a “parent’s concern for a
child’s consumption is assumed to depend on the quantity of resources—mainly time
and effort—directed to the accumulation of concern. […] People may naturally tend
to be selfish, but parents may also spend time and effort in self-reflection to overcome
such a natural bias” (p. 77).
Given what I said about endogenizing time preference, my response to the statement
I have italicized is obvious: why would selfish parents want to become less selfish? If
they want to, aren’t they already unselfish? Hence I agree with the following common-
sense objection in a review of Mulligan’s book:
If parents in some sense want to behave more altruistically than they would
with zero expenditures on child-oriented resources, why do they not simply
shift more resources to their children, perhaps by investing more in their human
capital? If they start facing a marginal tradeoff of 1.50 for their own versus their
children’s consumption but want to be more altruistic, why don’t they simply
shift consumption from themselves to their children to change this tradeoff rather


than divert resources away from both generations’ consumption by using them
for child-oriented resources? (Behrman 1998, p. 1508)
In addition, again analogously to the time preference case, I would ask: where
is the evidence that parents intentionally behave in this way? Mulligan’s response
(p. 123) can only be characterized as lame: “The effect of child-oriented resources
is mechanical and well understood by parents in my model, but a precise under-
standing by parents is not necessary for parents to willingly purchase child-oriented
resources and for my results to obtain. Advertising is an example where the effect of
something on preferences is modeled as well-known, but those models clearly pro-
vide insights into the ‘real world’ where advertising has effects that are not always
predictable.”
At the very least, I agree with Bowles (1998, p. 80) when he writes that “We know
that [in preference formation] intentional motivations are sometimes involved; one
learns to appreciate classical music because one notices that aficionados appear to
enjoy it […]. But instrumental motivations may be of limited importance compared to
other influences”, such as conformism or, for that matter, anti-conformism. I believe
the case should be put more strongly. For instrumental motivations to matter, they
must be empirically demonstrable on a scale large enough to have socially important
consequences. The occasional anecdote about visiting aging parents or wishing to
learn to appreciate classical music is too much like the made-up examples that have
brought parts of analytical philosophy into discredit.

3.3 Warm-glow theories

Rational-choice models often assume that agents are selfish, in the sense of being
motivated by material, usually monetary gains. Many critics and some defenders of
rational-choice theory mistakenly assume that rationality implies selfishness.
Although there is no logical connection, there may be a sociological one. In the words
of Frank (1988, p. 21), “[t]he flint-eyed researcher fears no greater humiliation than
to have called some action altruistic, only to have a more sophisticated colleague later
demonstrate that it was self-serving”.
Some actions, however, are recalcitrant to such demonstrations. When people vote
in national elections with secret ballot, do volunteer work to preserve the environment
or give money to Oxfam, material self-interest will rarely if ever be their motivation.
Faced with this challenge, the flint-eyed economist can appeal to egocentricity rather
than to material egoism (Elster 2009b). Egocentric motivations include egoistic ones,
but also the desire for approval by an audience. Vanity causes us to seek the approval
of an external audience; amour-propre to seek the approval of an internal audience.
According to Kant (1996, pp. 61–62), it can in fact “never be inferred with certainty
that no covert impulse of self-love, under the pretense of [duty], was not actually the
real determining cause of the will”.
Kant did not make a positive claim about the motivation for any specific actions.
Some flint-eyed economists do, when they explain good-doing by the “warm-glow”
from doing good—the applause of the internal audience. I shall examine three argu-
ments along these lines, two of them briefly and the third at greater length.


Voting in national elections with secret ballot poses two puzzles. First, why would
a selfish and rational individual bother to vote at all? Second, how can we explain
the well-supported empirical fact that voters by and large vote “sociotropically”, to
promote the public interest rather than their own? Caplan (2007, p. 151) addresses the
second issue, and argues that even when people vote for the public good rather than
their private interest, they do so to “enhance their self-image”. Because a single vote
has essentially zero effect on the outcome, affluent voters can afford to buy altruism at
low or no cost, by voting for redistributive policies that would harm them if they were
to be implemented. Although Caplan does not address the first puzzle, I conjecture he
would offer the same explanation.
Kahneman and Knetsch (1992) address the question of people’s willingness to pay
for public goods. In the first part of an experimental study, they asked subjects how
much they would pay for environmental services. Three groups of subjects responded
to questions formulated at different levels of inclusiveness, ranging from “improving
environmental services” to “improve the availability of equipment and trained person-
nel for rescue operations”. As the last improvement is a small subset of the first, one
might expect subjects to state a willingness to pay much larger amounts for the more
inclusive measures. Contrary to expectations, the amounts were roughly similar. To
explain this finding, they carried out a second experiment to test the hypothesis that
“the moral satisfaction associated with an inclusive cause extends with little loss to
any significant subset of that cause” (p. 64). The hypothesis was confirmed, suggest-
ing that subjects were more concerned with feeling good about themselves than with
doing good.
Andreoni (1990, 2006) argues that philanthropy, too, must be explained by the
desire of donors to obtain a warm glow rather than by their desire to help recipients.
He assumes that donations must be in a Nash equilibrium, in which each citizen donates
an amount that is optimal given the donations of everybody else. Since the welfare of
the recipients is a public good for the donors, however, they have a collective action
problem. In the words of Andreoni (1990, p. 465), one can deduce from the assumption
of rational equilibrium behavior that “in large economies virtually no one gives to the
public good, hence making the Red Cross, the Salvation Army and American Public
Broadcasting logical impossibilities”. However, if donations are motivated by the fact
that they are good for the donor rather than for the recipients, they produce a private
good rather than a public good. Since the donor internalizes all the benefits from his
gift, there is no collective-action problem.
A succinct statement of the warm-glow effect in the context of a public good
experiment, in which each member of a group had the opportunity to benefit other
members, is that “the act of contributing, independent of how much it increases group
payoffs, increases a subject’s utility by a fixed amount” (Palfrey and Prisbrey 1997,
p. 830). Psychologically, this seems implausible. Since the warm glow is supposed to
come from doing good, it is presumably enhanced by doing more good rather than
less. A donation that is known to be pointless cannot produce a warm glow any more
than a costless donation can. The greater the benefit to others and the greater the cost
to oneself, the warmer the glow. In practice, and perhaps in principle, it would be
hard to distinguish between the enhanced welfare of others as the altruistic goal of the
donor and its role as a condition for achieving his egocentric goal. In the regression


equation of Palfrey and Prisbrey (1997), which uses as variables the egoistic cost to
the agent, his egocentric benefits and his altruistic benefits, the last is found to be “not
significantly different from zero”. My reaction to their claim is one of frank disbelief,
not because I can identify any error in their analysis but because this is not how human
beings are and behave. Even if they ultimately care only about the warm-glow, they
cannot get it unless they believe that they do good for others, and the more good they
believe they do the warmer is the glow.
As further evidence of the lack of intellectual sophistication by economists in this
field, one may cite the following argument that Andreoni (2006, p. 1220 n. 16) makes
for his theory: “the fact that people do get a joy from giving is such a natural observation
as to be nearly beyond question”. Yes, but as the neuroeconomists know (de Quervain
et al. 2004), getting pleasure from doing X and doing X for the sake of the pleasure
are two entirely different things.
As the point is fundamental, it may be worthwhile mentioning that it has a venerable
ancestry. Thus in On the Happy Life (9.1-2), Seneca affirms that
In the first place, even though virtue is sure to bestow pleasure, it is not for
this reason that virtue is sought; for it is not this, but something more than
this that she bestows, nor does she labor for this, but her labor, while directed
toward something else, achieves this also. As in a ploughed field, which has been
broken up for corn, some flowers will spring up here and there, yet it was not
for these poor little plants, although they may please the eye, that so much toil
was expended—the sower had a different purpose, these were superadded—just
so pleasure is neither the cause nor the reward of virtue, but its by-product, and
we do not accept virtue because she delights us, but if we accept her, she also
delights us.
I am unable to propose a positive theory of why people donate to good causes.
They probably have all sorts of reasons, in proportions and combinations that may
defy precise analysis. The operation of social norms—giving because one is observed
by others—is relatively uncontroversial. That of “quasi-moral norms” (Elster 2015,
Ch. 4)—giving because one observes others giving—also seems plausible, especially
perhaps in the wake of natural disasters. I would not exclude irrational phenomena
such as magical thinking, but evidence might be hard to find. Targeted altruism (“adopt
a child”) is observed in many contexts. Donations to the Salvation Army or to the Red
Cross may be expressions of untargeted altruism, because many donors do not go
through the Nash equilibrium reasoning. In fact, the assumption that they do is highly
implausible, since they would need information about the donation behavior of others
that most people almost certainly do not have. Warm-glow motivations probably also
have a role to play, although for the reasons indicated above they may be hard to
separate from altruism.
The assumption that agents are rational and self-interested is both parsimonious
and capable of yielding sharp predictions. Yet when the predictions of the model
fail, something has to give. Generally speaking, the natural reaction to an explana-
tory failure is to try to explain the observed facts by departing as little as possible
from the original model. In the case that concerns me here, the smallest devia-
tion from rational egoism might seem to be that of rational egocentricity. In this
model, the utility function of the typical agent would include both her own mate-
rial benefit and the degree to which she can think of herself as a moral agent. As
in other cases, there would be a trade-off between these aims: she might be will-
ing to sacrifice some material welfare to get the warm glow from the enhanced self-
image.
Warm-glow theorists of philanthropy and similar non-selfish actions (voting, pre-
serving the environment, etc.) do not seem to realize that this small adjustment to
the model, substituting egocentricity for egoism, requires another and more radical
one: substituting irrationality for rationality. Specifically, agents have to engage in
motivated belief formation about their own motives. This substitution is required by
what I take to be a conceptual truth: one cannot derive a warm glow from an action
unless the agent believes that the action was performed at least in part to benefit oth-
ers. An egocentric agent who performs for the inner audience has to believe that she
is altruistic. An agent who performed “good actions” only for the conscious end of
enhancing her self-image could not achieve that aim, any more than one can enhance
one’s self-image by paying another person to praise oneself.
Thus a common denominator of the arguments offered by Caplan, Kahneman
and Knetsch, and Andreoni is the assumption of motivated belief formation about
one's own motives. I have no empirical objection to the assumption. Envy may be
transmuted into righteous indignation or guilt into anger (“who has offended cannot
forgive”). These cases differ, though, from warm-glow cases. Transmutation is a form
of dissonance-reduction, which is triggered by the urge to escape from a negatively
valued emotion. The warm-glow effect is triggered by the search for a positively valued
emotion: there is no dissonance to be reduced. It is not clear, however, how much one
has to do to produce the warm glow. Surely, the act of giving a penny to the panhandler
in the street isn’t enough. Similarly, I find it hard to believe that the costless response
to a question about how much one is willing to pay for the provision of a public good
could create a warm glow. The assumption that people act to create moral satisfac-
tion may predict what we observe, but the assumption must also have independent
credibility.

3.4 Mixed strategies

Sometimes, individuals or groups make decisions by using a randomizing device,
when it is impossible or prohibitively expensive to gather enough information to make
a determinate rational choice. We may flip a coin, for instance, to determine who is
going to clean up after dinner. To select the foreperson of a jury, one can put the names
of all jurors in a hat and select one at random. Ordinary decisions rarely if ever use
devices that deviate from equiprobability, meaning that each option or candidate has
an equal chance of being chosen.
In other cases, an agent may randomize, possibly deviating from equiprobability,
to prevent others from anticipating or guessing her decision. In poker, rational-choice
theory may, for instance, tell the agent to bluff one-third of the time she is dealt a Queen.
In military affairs, a rational military commander may use a randomizing device to
select strategies for fighter pilots in dog fights (Luce and Raiffa 1957, p. 76). These are
zero-sum games. For the reasons noted above, there is no similarly plausible rationale
for randomizing in non-zero sum games (see also ibid. p. 93).
Yet even in non-zero-sum cases, sometimes agents act as if they randomize. Con-
sider the case of Kitty Genovese. In the standard presentation, this young woman was
stabbed to death in 1964 in the presence of 38 neighbors who heard her cries, none of
whom called the police. Although this account is probably apocryphal (see Manning
et al. 2007), it has been claimed that there is also experimental evidence for bystander
passivity (Darley and Latane 1968). Some experiments also suggest that the more
bystanders there are, the smaller the chance that any of them will intervene. There is
no safety in numbers. I need not consider the empirical veracity of these claims, since
my only purpose here is to consider the explanation of bystander passivity offered in
a highly regarded textbook on game theory (Dixit and Skeath 2004, p. 416).
Citing the Kitty Genovese case, these authors argue that one may try to justify the
idea of mixed strategies by appealing to a causal mechanism: “[M]ixed strategies are
quite appealing in this context. The people are isolated, and each is trying to guess
what others will do. Each is thinking, Perhaps I should call the police ... but maybe
someone else does ... but what if they don’t? Each breaks off this process at some
point and does the last thing that he thought of in this chain, but we have no good
way of predicting what that last thing is. A mixed strategy carries the flavor of this
idea of a chain of guesswork being broken by a random point.” So far, so good. The
authors then go on, however, to commit a simple quantifier fallacy: from the correct
premise that for every person there is a probability p that he will not act, they reach
the false conclusion that there is a p such that each person will abstain from acting
with probability p. Moreover—a second unjustified step—they assume that when all
abstain from acting with probability p, their choices form an equilibrium.
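One way to make the fallacy explicit: write $A(x, p)$ for "person $x$ abstains with probability $p$". The inference runs

$$\forall x\, \exists p\; A(x, p) \;\not\Rightarrow\; \exists p\, \forall x\; A(x, p),$$

since the premise lets each person's probability differ, while the conclusion requires a single probability common to all, let alone one that happens to sustain an equilibrium.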
In this example, the probabilities completely lack microfoundations. The authors
give no reason why all subjects should come up with the particular probability that
has the property of generating an equilibrium. A blind causal process cannot mimic a
conscious strategic choice. When Harsanyi (1977, p. 114) offered a similar argument,
he limited his claim to equiprobabilistic mixed strategies, “generated […] by what
amounts to an unconscious chance mechanism inside [the player’s] central nervous
system”. Dixit and Skeath (2004, p. 417 note), by contrast, are willing to entertain the
idea that when the benefits and costs from calling the police are 10 and 8 respectively,
the equilibrium probability that a given person in a group of 100 will not call is 0.998.
In their model, “increasing [the size of the group] from 2 to infinity leads to an increase
in the probability that not even one person helps from 0.64 to 0.8”. Yet although the
model has the seemingly attractive feature of predicting the counterintuitive fact that
there is no safety in numbers, the argument is undermined by the absurdity of the
assumptions. Their stylized explanation is nothing more than a “just-so” story.
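Their reported numbers can in fact be reproduced from the standard indifference condition for a symmetric mixed equilibrium. The following sketch assumes (an assumption of mine, though consistent with their figures) that the game has the usual volunteer's-dilemma structure: everyone receives the benefit if at least one person calls, and only callers pay the cost, with benefit 10 and cost 8 as in their example.

```python
# Reconstructing Dixit and Skeath's numbers under the assumed volunteer's-
# dilemma structure: each of n players gets benefit B if at least one calls
# the police; a caller also pays cost C. B = 10, C = 8 as in their example.
B, C = 10.0, 8.0

def p_not_call(n):
    """Equilibrium probability that any one player fails to call.

    Indifference between calling (payoff B - C) and not calling
    (payoff B times the chance that someone else calls) requires
    B - C = B * (1 - p**(n - 1)), hence p = (C / B)**(1 / (n - 1)).
    """
    return (C / B) ** (1.0 / (n - 1))

def p_nobody_calls(n):
    """Probability that not a single one of the n players calls."""
    return p_not_call(n) ** n

print(round(p_not_call(100), 3))        # 0.998, their equilibrium probability
print(round(p_nobody_calls(2), 2))      # 0.64 for a group of two
print(round(p_nobody_calls(10**6), 2))  # approaches 0.8 as the group grows
```

That the arithmetic works out is precisely the problem: 0.998 is simply whatever value makes every agent exactly indifferent, and nothing in the causal story explains why a hundred isolated bystanders would all hit on it.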

3.5 Political transitions

In a number of acclaimed publications, Daron Acemoglu and James Robinson have
tried to provide rational-choice foundations for the transition to democracy. I shall
focus on one of their articles (Acemoglu and Robinson 2001), published in the flagship
journal of the American Economic Association. Since I have not read all their other
writings on the subject, it is possible that they have already addressed and countered
the objections I shall make. Even if this were so, the relevant fact for my purposes
is that an obscurantist article was published in the most prestigious journal of the
profession, not that two individual scholars made mistakes that they may or may not
have corrected later.
As a preliminary comment, let me say that I find the article a monumental expression
of explanatory hubris. The idea that one could explain transitions to democracy in West
European and Latin American countries by a game-theoretic model involving only two
actors, “the rich” and “the poor”, is too far-fetched to be taken seriously—except for
the present polemical purposes.
I shall not try to address all the issues discussed in the article, but only comment on
the basic conceptual framework and its empirical support or lack thereof. As noted,
they reduce the question of class struggle to the conflict between rich and poor, thus
neglecting, for instance, possible conflicts of interest between poor peasants and poor
urban workers. The former have an interest in high prices on food products, the latter
an interest in low prices, a phenomenon that mirrors the conflict between landowners
and industrial capitalists in nineteenth-century England. I shall not pursue this
issue further, but take the two-class model for given.
Acemoglu and Robinson assume that all agents have identical preferences, differ-
ing only in their capital endowments. All poor agents are assumed to have the same
endowments, as do all the rich. Having already swallowed the two-class assumption,
I shall swallow these simplifications as well. I am not equally willing, however, to
accept the assumption that agents discount the future exponentially. Although mathe-
matically convenient and seemingly justified by the axiom that agents are rational, the
assumption has “little empirical support” (Fredericks et al. 2004, p. 210). To adopt it
without trying to justify it or defend it against criticism, which was surely well-known
to the authors, is a cavalier procedure.
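To see what is at stake in the choice of discounting assumption, consider a minimal sketch contrasting the two functional forms. The rewards, delays, and parameter values below are illustrative assumptions, not empirical estimates: an exponential discounter ranks a given pair of rewards the same way at every temporal distance, whereas a hyperbolic discounter can prefer the larger-later reward from afar and switch to the smaller-sooner reward as it approaches.

```python
# Exponential vs hyperbolic discounting. All numbers are illustrative
# assumptions chosen to exhibit the qualitative contrast, not data.

def exponential(delay, delta=0.95):
    """Exponential discount factor delta**t: time-consistent by construction."""
    return delta ** delay

def hyperbolic(delay, k=1.0):
    """Simple hyperbolic discount factor 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * delay)

small, large = 100.0, 110.0  # smaller-sooner vs larger-later reward

for disc in (exponential, hyperbolic):
    prefers_large_far = large * disc(11) > small * disc(10)  # seen from afar
    prefers_large_near = large * disc(1) > small * disc(0)   # at the moment of choice
    print(disc.__name__, prefers_large_far, prefers_large_near)
# The exponential discounter ranks the pair identically at both distances;
# the hyperbolic discounter prefers the larger reward from afar but reverses
# up close: the pattern that exponential discounting rules out by construction.
```

It is this kind of preference reversal, documented in the survey the article cites, that makes the exponential assumption empirically suspect.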
Compared to other issues, the assumption of exponential discounting is neverthe-
less a minor problem. A more troublesome issue is the idea that in any given period
aggregative productivity A can be modeled by assuming that A is either high with
probability (1−s) or low with probability s. I shall ignore the starkly dichotomous
and starkly implausible character of the assumption and focus instead on its interpre-
tation. When Acemoglu and Robinson assert (p. 940) that the level of income is
“stochastic”, they are presumably using the term in the dictionary sense of a process
involving the operation of chance, such as the onset of cancer. Although scientists
may today be able to quantify the probability that a given person will develop a given
kind of cancer in a given period, the person herself may not—and a hundred years
ago certainly could not—have any idea about the magnitude of the risk. By contrast,
Acemoglu and Robinson (p. 944, Eq. 4) impute knowledge of the value of s to the
agents, in order to calculate the “discounted expected net present value […] of a poor
agent after the revolution but before the state A is revealed”. They would presumably
defend this imputation by some version of the theory of rational expectations. What-
ever the defense they might offer, the imputation is indefensible. The idea that, say,
the rural poor in France in 1789 or the urban poor in 1848 attached a sharp probability
to aggregate productivity being high or low is a piece of science-fiction.
For the game-theoretic model of revolution to get off the ground, each class—the
rich and the poor—must be viewed as a unitary actor. Acemoglu and Robinson are
of course aware of the problem of free riding in revolutionary situations, but claim
that “[b]ecause a revolution generates private benefits for a poor agent, there is no
collective action problem” (p. 941). In a footnote they add that
Although a revolution that changes the political system might seem to have
public-good like features, the existing empirical literature substantiates the
assumption that revolutionary leaders concentrate on providing private goods to
potential revolutionaries (see Tullock 1971). There could also be a coordination
problem in which all poor agents expect others not to take part in a revolution, so
do not take part themselves. However, since taking part in a revolution imposes
no additional costs irrespective of whether it succeeds or not, it is a weakly
dominant strategy.
The reference to Tullock (1971) is strange. Tullock does not offer or cite any empir-
ical evidence concerning actual revolutions. In fact, he does not cite a single empirical
study of any revolution. Tullock merely asserts that his “impression is that [revolu-
tionaries] generally expect to have a good position in the new state which is to be
established by the revolution. Further, my impression is that the leaders of revolu-
tions continuously encourage their followers in such views” (p. 98; my italics). To
cite this armchair speculation as a decisive piece of “empirical literature” is to offer
very weak support; in fact, no support at all. Instead, Acemoglu and Robinson should
have cited primary empirical sources. There is a vast literature on the motivations of
revolutionary leaders and followers, some of it supporting the idea of private ben-
efits and some of it, perhaps the greater part, contradicting it. The French peasants
who triggered the abolition of feudalism on August 4, 1789 by burning the castles of
nobles in the second half of July were not motivated by the desire to achieve leading
positions in a post-revolutionary society, nor were the East Germans who mobilized
in the streets of Leipzig in October 1989. My aim, however, is not to offer a counter-
generalization to the reckless generalization of Acemoglu and Robinson, only to point
out its recklessness.
The final sentence in the quoted passage is so opaque that I shall not comment on
it, except to remark that it seems to ignore the vast costs and risks often incurred by
revolutionaries. Instead I shall conclude with a brief comment on the formal analysis of
counterrevolution proposed by Acemoglu and Robinson. As they state, their “formal-
ization implies that, as with a revolution, there is no free-rider problem with a coup” (p.
942). Again, the empirical evidence is relegated to a footnote: “This seems plausible.
For example, in Venezuela in 1948, Guatemala in 1954, and Chile in 1973, landowners
were rewarded for supporting the coup by having the land returned to them”. They
provide no references to support this claim. I know little about Latin America, but a
brief literature search on Chile suggests that for this country at least their claim may
be incorrect.
The claim seems to presuppose a selective political motivation for land restitu-
tion: landowners who supported the coup, but only those who supported it, got their
land back. It also presupposes that in lending their support to the coup, landowners
anticipated and were motivated by these selective rewards. As I observed in the dis-
cussion of Becker and Mulligan, ex post consequences cannot substitute for ex ante
intentions. Since I do not know of any evidence about intentions, however, I can only
refer to consequences. The extensive discussion of land restitution after 1973 in Bel-
lisario (2007) does not cite any selective political criteria. By contrast, the “individual
allotment of parcels to a portion of the asentados [beneficiaries] of Agrarian Reform”
did discriminate against those “who had participated in land takeovers and in other
political ‘crimes’ before the military coup” (Silva 1991, pp. 22, 26). In other words,
supporters of the coup were not selectively rewarded, but some beneficiaries of the
Allende regime were selectively punished. Needless to say, I cannot vouch for the
accuracy of these analyses, which are the fruit of half an hour’s search on the Internet.
I cite them only to indicate the kind of fine-grained analysis that would be needed to
make good on the claim.

4 Summary

Although I believe that the cases I have selected for analysis are somewhat represen-
tative of mainstream economic theorizing, I cannot make strong claims about how
typical they are. What I can assert with great confidence is that the authors I have
singled out are far from marginal, and in fact are at the core of the profession. Their
numerous awards testify to this fact.
These writings have in common a somewhat uncanny combination of mathematical
sophistication on the one hand and conceptual naiveté and empirical sloppiness on
the other. The mathematics, which could have been a tool, is little more than a toy. The
steam engine was invented by Hero of Alexandria in the first century A.D., but he
considered it mainly as a toy, not as a tool that could be put to productive use. He did
apparently use it, though, for opening temple doors, so his engine wasn’t completely
idling. Hard obscurantist models, too, may have some value as tools, but mostly they
are toys.
I have pointed to the following objectionable practices:

1. Citing empirical evidence in a cavalier way, in the form of anecdotes, “impressions”,
and unsubstantiated historical claims (Becker and Mulligan, Acemoglu
and Robinson).
2. Adopting huge simplifications that make the empirical relevance of the results
essentially nil (Acemoglu and Robinson).
3. Assuming that the probabilities in a stochastic process are known to the agents
(Acemoglu and Robinson) or even in some sense optimal (Dixit and Skeath).
4. Assuming that intentions can be inferred from outcomes (Becker and Mulligan,
Kahneman and Knetsch).
5. Assuming that the unconscious has the capacity to carry out intertemporal tradeoffs
(Akerlof and Dickens).
6. Assuming that agents can choose optimal beliefs on the basis of the consequences
of having them rather than on the basis of the evidence supporting them (Rabin,
Akerlof and Dickens).
7. Assuming that agents can choose optimal preferences (Becker, Mulligan).
8. Ignoring well-established facts such as hyperbolic discounting or limited cognitive
capacities (Acemoglu and Robinson).
9. Assuming self-deceiving agents (Rabin, Andreoni, Caplan, Kahneman and
Knetsch), without engaging in the literature on this controversial subject.
10. Assuming that agents can enhance their self-image by taking trivial, even costless
altruistic actions (Andreoni, Caplan, Kahneman and Knetsch).
11. Adhering to the instrumental Chicago-style philosophy of explanation, which
emphasizes as-if rationality and denies that the realism of assumptions is a relevant
issue.
These features do not, of course, amount to sufficient and/or necessary conditions. If
I were to single out one cluster of issues that seems to be the most important, I would
mention (1), (2), (4) and (11). Whereas the other issues are problem-specific, these
four questions seem to be more recurrent. Many of the other issues have a common
root, which is the neglect of elementary conceptual analysis. The mixed-strategy case
is perhaps the best example. Also, the obsession with optimization operates across the
board.

References
Acemoglu, D., & Robinson, J. (2001). A theory of political transitions. American Economic Review, 91,
938–963.
Akerlof, G., & Dickens, W. (1982). The economic consequences of cognitive dissonance. American Eco-
nomic Review, 72, 307–319.
Allen, T., & Carroll, C. (2001). Individual learning about consumption. Macroeconomic Dynamics, 5,
255–271.
Andreoni, J. (1990). Impure altruism and donations to public goods: A theory of warm-glow giving. Eco-
nomic Journal, 100, 464–477.
Andreoni, J. (2006). Philanthropy. In S.-C. Kolm & J. M. Ythier (Eds.), Handbook of the Economics of
Giving, Altruism and Reciprocity (Vol. II, pp. 1202–1269). Amsterdam: North-Holland.
Arrow, K., & Hurwicz, L. (1971). An optimality criterion for decision-making under uncertainty. In C. F.
Carter & J. L. Ford (Eds.), Uncertainty and Expectation in Economics (pp. 1–11). Clifton, NJ: Kelley.
Becker, G., & Mulligan, C. (1997). The endogenous determination of time preference. Quarterly Journal
of Economics, 112, 729–758.
Behrman, J. (1998). Review of Mulligan (1997). Journal of Economic Literature, 36, 1508–1509.
Bellisario, A. (2007). The Chilean agrarian transformation: Part 2. Journal of Agrarian Change, 7, 145–182.
Bowles, S. (1998). Endogenous preferences. Journal of Economic Literature, 36, 75–111.
Caplan, B. (2007). The Myth of the Rational Voter. Princeton: Princeton University Press.
Cohen, G. A. (2002). Deeper into bullshit. In S. Ross & L. Overton (Eds.), Contours of Agency, Essays on
Themes from Harry Frankfurt (pp. 321–339). Cambridge, MA: MIT Press.
Darley, J., & Latané, B. (1968). Group inhibition of bystander intervention in emergencies. Journal of
Personality and Social Psychology, 10, 215–221.
de Quervain, J. F., et al. (2004). The neural basis of altruistic punishment. Science, 305, 1254–1258.
de Tocqueville, A. (2004). Democracy in America. New York: Library of America.
Dixit, A., & Skeath, S. (2004). Games of Strategy. New York: Norton.
Elster, J. (2004). Costs and constraints in the economy of the mind. In I. Brocas & J. Carillo (Eds.), The
psychology of economic decisions (Vol. 2, pp. 3–14). Oxford: Oxford University Press.
Elster, J. (2009a). Excessive ambitions. Capitalism and Society, 4, 1–30.
Elster, J. (2009b). Le désintéressement. Paris: Seuil.
Elster, J. (2012). Hard and soft obscurantism in the humanities and social sciences. Diogenes, 58, 159–170.
Elster, J. (2015). Explaining social behavior (revised ed.). Cambridge: Cambridge University Press.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford: Stanford University Press.
Frank, R. (1988). Passions Within Reason. New York: Norton.
Frankfurt, H. (1988). On bullshit. In H. Frankfurt (Ed.), The Importance of What We are About (pp. 117–133).
Cambridge: Cambridge University Press.
Fredericks, S., Loewenstein, G., & O’Donoghue, T. (2004). Time discounting and time preference. In C.
Camerer, G. Loewenstein, & M. Rabin (Eds.), Advances in Behavioral Economics (pp. 162–222).
New York: Russell Sage.
Freedman, D. (2005). Statistical Models. Cambridge: Cambridge University Press.
Freedman, D. (2010). Statistical Models and Causal Inference: A Dialogue with the Social Sciences. Cam-
bridge: Cambridge University Press.
Friedman, M. (1953). Essays in Positive Economics. Chicago: University of Chicago Press.
Furner, J. (2014). The ethics of evaluative bibliometrics. In B. Cronin & C. Sugimoto (Eds.), Beyond
bibliometrics. Cambridge, MA: MIT Press.
Gibbard, A., & Varian, H. (1978). Economic models. Journal of Philosophy, 25, 664–677.
Gjelsvik, O. (2006). Bullshit illuminated. In J. Elster, et al. (Eds.), Understanding Choice, Explaining
Behaviour (pp. 101–111). Oslo: Oslo Academic Press.
Green, D., & Shapiro, I. (1994). Pathologies of Rational Choice Theory. Cambridge: Cambridge University
Press.
Harsanyi, J. (1977). Rational Behavior and Bargaining Equilibrium in Games and Social Situations. Cam-
bridge: Cambridge University Press.
Hume, D. (1983). History of England. Indianapolis, IN: Liberty Fund Press.
Johansen, L. (1977). Lectures on Macroeconomic Planning. Amsterdam: North-Holland.
Kahneman, D., & Knetsch, J. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of
Environmental Economics and Management, 22, 57–70.
Kant, I. (1996). Groundwork of the metaphysics of morals. In I. Kant (Ed.), Practical Philosophy (pp.
37–108). Cambridge: Cambridge University Press.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498.
Luce, R., & Raiffa, H. (1957). Games and Decisions. New York: Wiley.
Manning, R., Levine, M., & Collins, A. (2007). The Kitty Genovese murder and the social psychology of
helping. American Psychologist, 62, 555–562.
McClure, S., et al. (2004). Separate neural systems evaluate immediate and delayed monetary rewards.
Science, 306, 503–507.
Mele, A. (1997). Real self-deception. Behavioral and Brain Sciences, 20, 91–136.
Mulligan, C. (1996). A logical economist’s argument against hyperbolic discounting. http://home.
uchicago.edu/~cbm4/hyplogic.pdf.
Mulligan, C. (1997). Parental Priorities and Economic Inequality. Chicago: University of Chicago Press.
Nelson, R., & Winter, S. (1984). An Evolutionary Theory of Economic Change. Cambridge, MA: Harvard
University Press.
Palacios-Huerta, I., & Santos, T. (2004). A theory of markets, institutions, and endogenous preferences.
Journal of Public Economics, 88, 601–627.
Palfrey, T., & Prisbrey, T. (1997). Anomalous behavior in public goods experiments: How much and why?
American Economic Review, 87, 829–846.
Rabin, M. (1994). Cognitive dissonance and social change. Journal of Economic Behavior and Organization,
23, 177–194.
Schelling, T. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1, 143–186.
Silva, P. (1991). The military regime and restructuring of land tenure. Latin American Perspectives, 18,
15–32.
Skog, O. (1997). The strength of weak will. Rationality and Society, 9, 245–271.
Skog, O. (2001). Theorizing about patience formation. Economics and Philosophy, 17, 207–219.
Taleb, N. (2007). Black swans and the domain of statistics. The American Statistician, 61, 198–200.
Tullock, G. (1971). The paradox of revolution. Public Choice, 11, 89–99.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty. Science, 185, 1124–1131.
Watts, D. (2011). Everything is obvious. New York: Crown Business.
Weitzman, M. (2009). On modeling and interpreting the economics of catastrophic climate change. Review
of Economics and Statistics, 91, 1–19.
Winston, G. (1980). Addiction and backsliding. Journal of Economic Behavior and Organization, 1, 295–
324.
Winter, S. (1964). Economic ‘natural selection’ and the theory of the firm. Yale Economic Papers, 4, 225–
272.