
SPECIAL ARTICLE

The Contributions of John Nash to Game Theory and Economics
Arunava Sen

The place of the economist John Nash in the pantheon of greats in game theory, and economics in general, is
not based on the development of powerful new
mathematical methods but rather on some
fundamental insights. These insights have been pivotal
in the development of economic theory, especially since
the 1980s when faith in models with price-taking
agents began to wane. An appreciation of the work of
the Nobel laureate who died earlier this year.

John Nash (1928–2015) was one of the most remarkable scholars of our era. He made profound contributions to game theory as well as to several areas in pure mathematics.1 He won both the Nobel Prize for Economics (in 1994) and the Abel Prize for Mathematics (in 2015), the highest academic distinction in these areas.2 His personal life was tragic.3 He was struck by mental illness when he was barely 30. Although he recovered later in his life, his intellectually productive life was extremely short. His reputation is based on a
handful of papers written when he was a young man.
This article will outline Nash's main contributions to game
theory. In particular, it will describe his notion of equilibrium,
now called Nash equilibrium, and his papers on bargaining.4
These ideas form the bedrock of game theory, and lie at the
heart of modern economic theory.
The author would like to thank Debasis Mishra for helpful suggestions. He would also like to thank Pradeep Dubey for numerous conversations regarding John Nash and his contemporaries.
Arunava Sen (asen@isid.ac.in) teaches at the Indian Statistical Institute, New Delhi.

1 What Is a (Non-Cooperative) Game?
Game theory is the formal analysis of interaction between decision-making agents. It applies to any situation where
agents undertake actions that mutually affect each other. For
example, an airline that decides to change the price of its tickets
not only affects its own profitability but also that of its competitors. Similarly, by varying her bids in an auction, a bidder
affects the allocations of all bidders. Political parties choosing
coalition partners in an election, jury members deciding who
to give an award to, or countries choosing their level of nuclear
deterrence are all instances of mutual interaction that can be
modelled as a game. So too can games in the traditional usage of the word, such as chess and tic-tac-toe. In fact, the ambit of the theory is so wide-ranging that it is difficult to conceive of situations of interest to social scientists where it does
not apply. In the traditional theory of market structure there
are only two polar cases that lie outside its purview. The first is
the idealised world of perfect competition where individual
firms are too small to have any impact on the environment of
their competitors. The other is monopoly where there is, by
definition, only a single player in the market.
In their path-breaking book Theory of Games and Economic
Behavior, John von Neumann and Oskar Morgenstern [17] developed a general way to represent any game. A game G consists of the following: a set of players N = {1, 2, …, n} with n ≥ 2, a strategy set Si for each player i and a pay-off function πi that assigns a pay-off to each player i that depends on the strategy choices of all the players.5 The game unfolds as follows: each player chooses a strategy from her strategy set in ignorance of the choices of others. Depending on the choices made by all players, a pay-off accrues to each player.6
The framework described in the previous paragraph may
appear inadequate for representing certain strategic situations. For instance, consider the following bargaining game
known as the ultimatum game. Two players attempt to split a
sum of money, say Rs 100. Player 1 proposes a split to which
Player 2 responds by accepting or rejecting. If he accepts,
the split is implemented and the players' pay-offs are given by the split. If he rejects, both players get zero. It is clear that Player 1's strategy is a split (or a number between 0 and 100) but what is Player 2's strategy? Since Player 2's strategy must be chosen in ignorance of Player 1's strategy, it may seem that there is no way for Player 2 to respond to an offer by Player 1. However, this difficulty is easily dealt with by a suitable definition of Player 2's strategy. A strategy for Player 2 is a contingent plan where she responds (by either accepting or rejecting) to every conceivable offer that Player 1 may make, that is, a strategy is a map from the set of all possible splits to the set {Accept, Reject}. For instance, possible strategies for Player 2 are: "I will accept only if I get at least Rs 75", "I will always accept", "I will only accept Rs 50", and so on. Each player can then choose
strategies without knowing the choice made by the other. In
fact, the idea of interpreting strategies as contingent plans is
perfectly general and all strategic situations can be represented
in this way.
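To make the idea of a contingent plan concrete, here is a minimal sketch in Python (not from the original article; the Rs 75 threshold and all function names are illustrative assumptions) that writes Player 2's strategy as a map from offers to {Accept, Reject} and resolves the resulting pay-offs.

```python
# A minimal sketch (not from the original article) of strategies as contingent
# plans in the ultimatum game. The threshold of Rs 75 and the names are illustrative.

TOTAL = 100  # the Rs 100 being split

def accept_if_at_least_75(offer_to_player2):
    """One possible contingent plan for Player 2: accept only if her share is at least Rs 75."""
    return "Accept" if offer_to_player2 >= 75 else "Reject"

def play_ultimatum(offer_to_player2, player2_plan):
    """Player 1 proposes a split; Player 2's plan is applied to the proposed offer.
    If she accepts, the split is implemented; if she rejects, both players get zero."""
    if player2_plan(offer_to_player2) == "Accept":
        return TOTAL - offer_to_player2, offer_to_player2  # (pay-off of Player 1, pay-off of Player 2)
    return 0, 0

# Player 1 offers Rs 60 to Player 2; under the plan above the offer is rejected.
print(play_ultimatum(60, accept_if_at_least_75))  # -> (0, 0)
# Player 1 offers Rs 80; the offer is accepted and the split is implemented.
print(play_ultimatum(80, accept_if_at_least_75))  # -> (20, 80)
```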
A few clarifying remarks ought to be made at this stage. We
are considering situations where the players make choices
independently. It is not difficult within this framework to accommodate pre-play communication between players where they
discuss what they intend playing. However, they cannot make
binding commitments about their play. Or equivalently, they
can costlessly break any agreements they make before play. For
this reason, these games are called non-cooperative games.
Another important assumption is that the game (that is, the set
of players, the strategy sets and the pay-off functions) is common knowledge to all the players.7 In several games of interest,
players may not know the pay-off functions of other players.
For instance, it seems reasonable to assume that a bidder at an
auction knows her own valuation of the objects being sold but is
not sure about the valuations of the others. These are games of
incomplete information, first formalised by John Harsanyi [5].
We shall restrict our discussion to games of complete information.
2 Nash Equilibrium

What should we expect players to play in a game? In order to fix ideas, let us consider a simple game known in the literature as matching pennies. Consider the following model of a penalty shoot-out in a soccer game. There are two players: the shooter (the row player) and the goalkeeper (the column player). Each player has two strategies: L and R. If both players choose L or both choose R, the goalkeeper saves. The shooter then gets −1 and the goalkeeper, 1. If, on the other hand, the two players choose different strategies, the shooter scores and gets 1 while the goalkeeper gets −1. This is shown in the matrix below. In every cell, the first and second numbers in the pair of numbers denote the pay-offs to the shooter and goalkeeper, respectively.

                    Goalkeeper
                    L           R
Shooter   L      (−1, 1)    (1, −1)
          R      (1, −1)    (−1, 1)
In this setting, it is meaningless to require a player to maximise his pay-off. A player's pay-off maximising strategy depends on the strategy chosen by the other player. If the goalkeeper chooses L, the shooter's optimal strategy is R. On the other hand, if the goalkeeper chooses R, his optimal strategy is L. Similarly, the goalkeeper's optimal strategy is L if the shooter
chooses L and R if the shooter chooses R.
Since a player does not know what his opponent will play
(since each is required to play simultaneously), it is reasonable
to assume that each player has beliefs about the choice of the
other player. A belief of a player is a probability distribution
over the strategies of the other player. For instance, the shooter may have seen videos of earlier penalty shoot-outs involving this particular goalkeeper and observed that he goes L 60% of the time and R 40% of the time. Similarly, the goalkeeper may believe that the shooter goes L with probability q and R with probability 1 − q for some q between zero and one.
Every player has a set of strategies that are optimal given a
belief. These are the strategies that maximise the player's expected pay-off given the beliefs.8 They are known as best-responses to the belief.9
Let us calculate the best-responses in the matching pennies
example. Suppose the row player (the shooter) believes that
the goalkeeper will play L and R with probabilities p and 1 − p, respectively. Then his expected pay-off from playing L is (−1)·p + 1·(1 − p) = 1 − 2p. By a similar argument, the expected pay-off from playing R is 1·p + (−1)·(1 − p) = 2p − 1. Clearly the best-response for the row player is to play R if p > 1/2 and L if p < 1/2. If p = 1/2, playing L and R both yield a pay-off of zero. Therefore, any randomisation by the row player is optimal for the belief p = 1/2. By an analogous argument the goalkeeper's best-responses are as follows: play L if he believes the shooter will play L with probability q > 1/2; play R if q < 1/2 and randomise arbitrarily if q = 1/2.
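This best-response calculation can be checked mechanically. The following sketch (illustrative code, not part of the article; the function names are assumptions) computes the shooter's expected pay-offs and best responses for a few values of the belief p.

```python
# A minimal sketch (not from the article) of the shooter's best-response
# calculation in matching pennies. p is the shooter's belief that the goalkeeper plays L.

def shooter_expected_payoff(strategy, p):
    """Expected pay-off against a goalkeeper who plays L with probability p."""
    if strategy == "L":
        return -1 * p + 1 * (1 - p)   # = 1 - 2p: the shooter scores only when the goalkeeper goes R
    return 1 * p + (-1) * (1 - p)     # = 2p - 1: the shooter scores only when the goalkeeper goes L

def shooter_best_responses(p):
    """R if p > 1/2, L if p < 1/2, and both (hence any mixture) if p = 1/2."""
    u_L = shooter_expected_payoff("L", p)
    u_R = shooter_expected_payoff("R", p)
    if u_L > u_R:
        return {"L"}
    if u_R > u_L:
        return {"R"}
    return {"L", "R"}

for p in (0.3, 0.5, 0.7):
    print(p, shooter_best_responses(p))
# 0.3 -> {'L'}, 0.5 -> {'L', 'R'} (indifferent), 0.7 -> {'R'}
```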
It seems natural to require players to play best-responses in
any equilibrium. But best-responses to what beliefs? Nash
equilibrium specifies these beliefs for a player: they are precisely those beliefs that are generated from the strategies actually played by the other players in equilibrium. Suppose that such an equilibrium involves the goalkeeper playing L with probability 0.7, that is, p = 0.7. Then the best-response of the shooter is to play R with probability 1. In a Nash equilibrium, these must, in fact, be the beliefs of the goalkeeper. His best-response to these beliefs is to play R with probability one. However, this is inconsistent with our assumption that p = 0.7. Consequently, p = 0.7 cannot be an equilibrium.
A little reflection should convince the reader that the only equilibrium in this game is p* = q* = 1/2. If the shooter believes that the goalkeeper will play L and R with equal probability, then playing L and R both give him zero, and playing L and R with equal probability is a best-response. The shooter playing L and R with equal probability will generate beliefs q = 1/2 for the goalkeeper. Playing L and R with equal probability for the goalkeeper is a best response to q = 1/2, in turn justifying the beliefs p = 1/2 held by the shooter.
Let us summarise the discussion above. A strategy for a
player in the game above is a probability distribution over the
set {L, R}. A strategy for player j can be interpreted as a belief
for player i. A strategy profile (p*, q*) is a Nash equilibrium if
p* is a best response to q* and q* is a best-response to p*.
Equivalently, (p*, q*) is a best-response to itself.
This idea can be generalised. Consider an arbitrary game where N = {1, 2, …, n} is the set of players, and Si and πi : S1 × … × Sn → R are the strategy set and pay-off function, respectively, for each player i ∈ N. We shall refer to the elements of Si as the set of pure strategies for player i.10 A player is allowed to choose randomly from his set of pure strategies.11 A randomised strategy over Si is called a mixed strategy. The set of mixed strategies for player i will be denoted by Mi; this is just the set of probability distributions over Si. The pay-off function πi for player i can be extended to a pay-off function Πi, where Πi : M1 × … × Mn → R, in a straightforward way, by taking expectations. The set of players N, the sets of mixed strategies Mi and the new pay-off functions Πi for each player i ∈ N are referred to as the mixed extension of the original game.
A Nash equilibrium of the mixed extension game is a collection of strategies m* = (m1*, m2*, …, mn*) such that
Πi(m*) ≥ Πi(m1*, …, m*i−1, mi, m*i+1, …, m*n) for all mi ∈ Mi and for all i ∈ N.
Let m* be a Nash equilibrium and let i be a player. The player's beliefs are the strategies of the other players specified by the equilibrium, that is, (m1*, …, m*i−1, m*i+1, …, m*n). The Nash equilibrium inequality ensures that mi* is a best-response to these beliefs.
There are several equivalent ways to express the fundamental
idea behind Nash equilibrium. As we noted earlier, a Nash equilibrium is a best response to itself; in Nash's words, it is a counter to itself. The equilibrium is also self-enforcing in the following
sense: if players made an informal agreement to play it, then no
player has a unilateral incentive to break the agreement.
A Nash equilibrium can also be defined for the game where
players can only play pure strategies, that is, the game where
the strategy set for player i is Si and the pay-off function for i is πi. A Nash equilibrium in this game (or a pure strategy Nash equilibrium) is a collection of pure strategies s* = (s1*, s2*, …, s*n) such that
πi(s*) ≥ πi(s*1, …, s*i−1, si, s*i+1, …, s*n) for all si ∈ Si and for all i ∈ N.
It is easy to see that a pure strategy Nash equilibrium is a Nash equilibrium of the mixed extension game. However, a pure strategy Nash equilibrium may not exist. Consider the matching pennies game. Observe that at every pure strategy pair (L, L), (L, R), (R, L)
and (R, R) one of the players is getting minus one. Moreover, the
player can unilaterally deviate and obtain a pay-off of one.
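Both definitions can be verified directly for matching pennies. The sketch below (illustrative code, not from the article) enumerates the four pure strategy pairs to confirm that none of them is a Nash equilibrium, and checks that the 50-50 mixture for each player satisfies the Nash inequality.

```python
# A minimal sketch (not from the article): verify the equilibrium claims for
# matching pennies. The row player is the shooter, the column player the goalkeeper.
import itertools

strategies = ["L", "R"]
# payoffs[(row, col)] = (shooter pay-off, goalkeeper pay-off), as in the matrix above
payoffs = {("L", "L"): (-1, 1), ("L", "R"): (1, -1),
           ("R", "L"): (1, -1), ("R", "R"): (-1, 1)}

def is_pure_nash(row, col):
    """True if neither player gains from a unilateral deviation at (row, col)."""
    u_row, u_col = payoffs[(row, col)]
    row_ok = all(payoffs[(r, col)][0] <= u_row for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= u_col for c in strategies)
    return row_ok and col_ok

print([s for s in itertools.product(strategies, repeat=2) if is_pure_nash(*s)])  # -> []: no pure equilibrium

def expected(mix_row, mix_col, player):
    """Expected pay-off of `player` (0 = shooter, 1 = goalkeeper) at a mixed profile."""
    return sum(mix_row[r] * mix_col[c] * payoffs[(r, c)][player]
               for r in strategies for c in strategies)

def pure(s):
    return {t: (1.0 if t == s else 0.0) for t in strategies}

half = {"L": 0.5, "R": 0.5}
# Against the 50-50 mixture, every pure strategy of either player yields exactly 0,
# so no mixed deviation can do better and (half, half) satisfies the Nash inequality.
print([expected(pure(t), half, 0) for t in strategies])  # -> [0.0, 0.0]
print([expected(half, pure(t), 1) for t in strategies])  # -> [0.0, 0.0]
```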
Nash's main contribution to equilibrium theory is not merely the formulation of a plausible equilibrium concept. In his 1950 paper [9], he showed that the mixed extension of every game with a finite set of pure strategies has at least one (Nash) equilibrium. The existence property can be extended to all well-behaved games.12 It is the existence property of Nash
equilibrium that mainly accounts for its widespread use in game theory. It is not difficult to propose other equilibrium concepts. For instance, one could get rid of the difficulty of arbitrary beliefs by requiring the following: every equilibrium strategy for a player is a best-response to every possible belief
(not just to the beliefs generated by the proposed equilibrium).
This is the requirement that every player has a weakly dominant strategy, a strategy that does at least as well as any other,
no matter what strategies others play. Although players are
very likely to play such strategies when they exist, they will not
exist in most games.13 For example, there is no weakly dominant strategy for either player in the matching pennies game.
The Nash equilibrium notion unifies several solution concepts
that were proposed for particular classes of games long before
Nash's paper. The best known of these are the models of firm competition developed in the 19th century by Cournot and Bertrand ([3], [2]; see [13], Chapter 3, for a modern textbook
treatment). Cournot considered a model where a finite (at least
two) number of firms produce a homogeneous product at zero
marginal cost.14 Each firm strategically decides a quantity of
output to produce. The aggregate output of firms determines a
market price via an inverse demand function which, in turn, determines the profits of all firms. The situation is a game because the profit of each firm depends not only on its own strategy (its output level) but also on those of all other firms. Cournot proposed a solution concept, now known as Cournot equilibrium, where each firm's output is optimal given the equilibrium output levels of all other firms. Firms produce outputs strictly less than the competitive level and enjoy strictly positive profits. Cournot's solution was criticised by Bertrand, who proposed a model where
firms compete in prices. In the same homogeneous product
model, the Bertrand equilibrium outcome is completely different
from the Cournot equilibrium. Even with two firms, the Bertrand
equilibrium yields the competitive outcome. Both Cournot and
Bertrand equilibria are instances of Nash equilibria, the former
in the quantity-setting game and the latter, in the price-setting
game.15 The different predictions of the two models are due to
the differences in the underlying models (the different strategy
sets and the pay-off functions) rather than differences in the
solution concept.
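As an illustration of the Cournot game, the following sketch assumes a linear inverse demand P(Q) = a − Q with a = 100 and zero marginal cost (the functional form, the value of a and the function names are assumptions, not taken from the article). It confirms that the symmetric output a/(n + 1) per firm is a best response to itself, that aggregate output stays strictly below the competitive level a, and that each firm earns a positive profit.

```python
# A minimal sketch under an assumed linear inverse demand P(Q) = a - Q (a = 100)
# and zero marginal cost; these specifics are illustrative, not from the article.
# Firm i's profit is q_i * (a - Q), where Q is the aggregate output.

A = 100.0  # assumed demand intercept

def profit(q_i, rivals_output, a=A):
    """Firm i's profit at zero marginal cost (price floored at zero)."""
    return q_i * max(a - q_i - rivals_output, 0.0)

def best_response(rivals_output, a=A):
    """Closed-form maximiser of q * (a - q - rivals_output) over q >= 0."""
    return max(0.0, (a - rivals_output) / 2.0)

def symmetric_cournot_equilibrium(n, a=A):
    """Candidate q* = a/(n+1); confirm each firm's best response to the others playing q* is q*."""
    q_star = a / (n + 1)
    assert abs(best_response((n - 1) * q_star, a) - q_star) < 1e-9
    return q_star

for n in (2, 3, 10):
    q = symmetric_cournot_equilibrium(n)
    print(n, round(q, 2), "aggregate:", round(n * q, 2), "profit per firm:", round(profit(q, (n - 1) * q), 2))
# Aggregate output n*a/(n+1) is strictly below the competitive level a = 100
# (where price would equal the zero marginal cost), and each firm's profit is strictly positive.
```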
Nash equilibrium is the benchmark solution concept in game
theory. It has strong conceptual foundations and possesses the
practical attribute of existence in a wide class of games. The
status of Nash equilibrium is well summarised in [6]:
...the notion of Nash equilibrium has become a required part of the
toolkit for economists and other social and behavioural scientists, so
well-known that it does not need explicit citation, any more than one
needs to cite Adam Smith when discussing competitive equilibrium.
There have been modifications, generalisations, and refinements, but
the basic equilibrium analysis is the place to begin (and sometimes
end) the analysis of strategic interactions, not only in economics but
also in law, politics, etc. ... Students in economics classes today probably hear Nash's name as much as or more than that of any economist.

3 Nash Bargaining

In a couple of papers, written in 1950 and 1953 ([10], [11]), Nash laid the foundations of a theory of bargaining. Both are
papers of breathtaking originality and have inspired a massive body of research.
Nash considered situations (called bargaining problems)
where two players had some surplus to share. In his own
words, "The economic situations of monopoly versus monopsony, of state trading between two nations and of employer and labour union may be regarded as bargaining problems" [10].
A bargaining problem consists of a set of possible surplus
vectors to be shared between the players and a disagreement
vector. Formally, a bargaining problem consists of a set S ⊂ R² and a vector d ∈ R². The set S is the set of feasible pay-offs for the players. For instance, imagine that the two owners of a company
have profits of Rs 100 crore to share between themselves. If the
pay-offs to Players 1 and 2 are denoted by x1 and x2 crore,
respectively, then S = {(x1, x2) ∈ R² | x1 + x2 ≤ 100}. Nash made
an important assumption regarding the structure of the set S:
he assumed that it was convex.16 If two pay-off vectors belong to S, then so does any weighted average of the two vectors. He justified
this on the grounds that players could always choose to resolve
differences by flipping a coin. In the example considered
above, the players may agree to toss a fair coin between the two alternatives "Player 1 gets Rs 80 and Player 2 gets Rs 20" and "Player 2 gets Rs 60 and Player 1 gets Rs 40". In terms of expected utility, this deal is equivalent to the deal where Player 1 gets Rs 60 and Player 2 gets Rs 40.
The disagreement pay-off vector specifies pay-offs each
player will get if negotiations break down. Returning to the
earlier example, Players 1 and 2 may have legal claims to Rs 40
and Rs 20 respectively; they can ensure these pay-offs under
all circumstances. The disagreement vector in the problem is
then the vector (40, 20). To summarise, a bargaining problem
B is a pair (S, d) where S is the set of feasible pay-offs and d is
the disagreement vector. In order to avoid discussion of uninteresting problems, it is assumed that all bargaining problems under consideration have a feasible pay-off where each
player gets strictly more than her disagreement pay-off.
A bargaining solution describes the outcome of bargaining in
every bargaining problem. In the example, (60, 40) could be the
outcome.17 So could any vector in the set S, such as the disagreement vector itself, that is, (40, 20)18 or any other vector in S
such as (45, 35). Formally, a bargaining solution assigns a vector
x = (x1, x2) in the set S for every bargaining problem (S, d).
What is the outcome in any bargaining problem? Equivalently,
what is the correct bargaining solution? In his two papers [10]
and [11], Nash provided two approaches to this question. In [10],
he proposed four axioms that every reasonable bargaining solution should satisfy. He showed, quite remarkably, that these
simple axioms uniquely identify an outcome in every bargaining
game. This bargaining solution is known as the Nash Bargaining
Solution or NBS. The NBS does not consider any particular bargaining process or mechanism but predicts an outcome for every
bargaining game on the basis of some abstract considerations of
what these outcomes ought to be. In [11], Nash explicitly constructed a bargaining mechanism, now called the Nash demand
game. He showed that the equilibrium outcome (in a specific
sense) of the Nash demand game coincides precisely with the
Nash bargaining solution for every bargaining game. The
Nash demand game is a non-cooperative game in the sense
described in the previous section. It therefore provides a non-cooperative foundation for an axiomatic solution concept. The
general project of providing such foundations for axiomatic solution concepts is known as the Nash program.
Nash proposed the following axioms for bargaining solutions.
(1) Efficiency (EFF): It should not be possible to improve the
pay-offs obtained by both players relative to the outcome of
bargaining. In the example, the disagreement outcome (40,
20) and the outcome (45, 35) are inefficient because in each
case, both players can be made better-off (for instance, by
agreeing to (60, 40)). The axiom expresses the idea that money
should not be left on the table when bargaining is concluded.
(2) Symmetry (SYM): In symmetric problems, players ought
to be treated identically. A bargaining problem is symmetric if
it satisfies two conditions: (i) the set S is symmetric with respect to the 45° line, and (ii) the disagreement pay-offs of the two players are equal. In the example, the set S satisfies (i) since (x1, x2) ∈ S implies (x2, x1) ∈ S. However, (ii) is not satisfied
since the disagreement pay-offs are unequal. Consequently
the symmetry axiom has no implications for this bargaining
problem. However, if disagreement pay-offs were equal (for
instance, if it were (20, 20)), then SYM would require that both players receive the same pay-off. In conjunction with
the efficiency axiom, it would have to be the case that the outcome is (50, 50). In a symmetric game both players are identical. If players are treated equally in this situation, the underlying assumption is that both have equal bargaining power.
(3) Independence of Equivalent Utility Representations
(IEUR): If the surplus is measured in different units for each of
the players in a given situation, the new outcome must simply be
the original outcome measured in the new units. Imagine a bargaining problem (S, d). Suppose the units of measurement for
Player 1 and Player 2's pay-offs are now changed as follows: x′1 = a1x1 + b1 and x′2 = a2x2 + b2, where a1, a2 > 0. In the example, the surplus for the two players was measured in crores. Suppose Player 1's pay-off is now measured in lakhs while Player 2's pay-off is measured in thousands. We now have a transformed bargaining problem (S′, d′) where every vector (x1, x2) in S is transformed into a vector in S′ by multiplying the first component by 10² and the second by 10⁴. The disagreement pay-offs are similarly transformed to the vector d′. Suppose the outcome for (S, d) is z. According to IEUR, the outcome z′ for (S′, d′) must be the appropriate transformation of z, that is, z′1 = a1z1 + b1 and z′2 = a2z2 + b2 in the general case, and z′1 = 10²z1 and z′2 = 10⁴z2 in the example. The IEUR axiom expresses a natural idea: the outcome of bargaining should not depend on the units in which a player's pay-off is measured.
(4) Independence of Irrelevant Alternatives (IIA): Consider
the bargaining problems (S, d) and (T, d) where S ⊂ T and let
the outcome for (T, d) be z. If z belongs to S, then the outcome
for (S, d) must be z as well. The case for the axiom can be
argued along the following lines. Suppose the outcome for
(S, d) is z′, which is distinct from z. Since S ⊂ T, z′ was under consideration in the problem (T, d) but was rejected in favour of z. Therefore, the choice of z′ over z in (S, d) is due to the absence of the alternatives in T\S. But these alternatives ought to be
irrelevant for the problem (S, d). Axioms such as IIA appear
frequently in economic theory.19 The axiom is a natural simplifying assumption. The analysis would be virtually intractable
if bargaining outcomes depended on alternatives that are not
on the table.
The Nash bargaining solution for the bargaining problem
(S, d) is the vector (x1*, x2*) ∈ S with the following property:
(x1* − d1)(x2* − d2) ≥ (x1 − d1)(x2 − d2) for all (x1, x2) ∈ S with x1 ≥ d1 and x2 ≥ d2.
The function (x1 − d1)(x2 − d2) is known as the Nash product. The NBS is obtained by maximising the Nash product (a continuous function) over the compact set {x ∈ S | x ≥ d}. A standard
result in mathematics guarantees the existence of a solution.
Moreover, the solution is unique since the objective function is
strictly quasi-concave and the constraint set is convex. The
maximisation is illustrated in Figure 1.
Figure 1: Nash Bargaining Solution [figure omitted: axes x1 and x2, the disagreement point (d1, d2), a level curve of the Nash product (x1 − d1)·(x2 − d2), and the NBS on the boundary of S]
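For the running example S = {(x1, x2) : x1 + x2 ≤ 100} with d = (40, 20), the maximisation can be carried out numerically. The sketch below (illustrative, not from the article; the grid step is an assumption) searches the efficient frontier and recovers the split (60, 40).

```python
# A minimal sketch (not from the article): maximise the Nash product
# (x1 - d1)(x2 - d2) for the running example S = {x1 + x2 <= 100}, d = (40, 20).
# The Nash product is increasing in both arguments, so the maximiser lies on the
# efficient frontier x1 + x2 = 100 and a one-dimensional grid search suffices.

d1, d2 = 40.0, 20.0
TOTAL = 100.0

def nash_product(x1, x2):
    return (x1 - d1) * (x2 - d2)

# Grid over the frontier with x1 in [40, 80], so that both players get at least
# their disagreement pay-off; the step of 0.01 is an illustrative choice.
candidates = [(d1 + i * 0.01, TOTAL - (d1 + i * 0.01)) for i in range(4001)]
best = max(candidates, key=lambda x: nash_product(*x))
print(best)  # -> approximately (60.0, 40.0), the NBS of this problem
```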
Nash [10] proved the following result.


Theorem 1 (Nash 1950) A bargaining solution satisfies the
EFF, SYM, IEUR and IIA axioms if and only if it is the NBS.
This is a beautiful and unexpected result. The axioms are natural and weak. There is no reason to believe that they have
strong implications but they imply the maximisation of a very
specific objective function, the Nash product. The efficiency and
symmetry axioms can also be weakened without fundamentally
changing Nashs result. The axiom of individual rationality
requires the solution to give each player at least her disagreement pay-off. If efficiency is replaced by individual rationality,
the only additional bargaining solution admitted is the disagreement solution that assigns the disagreement pay-off to every
bargaining problem. Similarly, the relaxation of the symmetry
axiom to one that admits differential bargaining powers among players yields a solution that involves the maximisation of a
weighted Nash product. Details may be found in [7], [8] and [15].
In [11], Nash considered a specific bargaining game, the Nash demand game. Suppose the two players have to bargain over a surplus S ⊂ R². The game proceeds as follows. In the first stage, each player proposes a threat which will be carried out if negotiation fails. Carrying out the threat assures each player a certain pay-off, which we normalise to zero. Each player now makes a demand, that is, Player 1 asks for x1 and Player 2 for x2. These demands are compatible if (x1, x2) ∈ S, in which case each player receives her demand; otherwise both players receive zero (their threat pay-offs).
The Nash demand game is a non-cooperative game. It is a
natural bargaining procedure and it would be very convenient
if the Nash equilibria of this game coincided with the NBS.
This is not true: the demand game admits a very large number of Nash equilibria. For instance, let x be a point on the efficient boundary of S, that is, there does not exist any other
feasible pay-off vector where both players do strictly better
relative to x. Observe that x can be sustained as a Nash equilibrium in the Nash demand game, since any attempt by player i
to increase his pay-off given that the other player demands xj,
will lead to incompatibility and a pay-off of zero. A drawback
of the demand game is that players are punished heavily if
small mistakes are made: their pay-offs are reduced to zero if demands are even slightly incompatible. Nash therefore considered replacing these discontinuous pay-off functions by smooth approximations. In the approximated demand game, pay-offs are unchanged if demands are compatible. If demands are incompatible, pay-offs are close to zero, tapering off rapidly as the demands move further away from S, without actually ever being
zero. Nash justified the perturbation of the demand game on
the grounds of player uncertainty regarding pay-offs and the
information structure. The main result in [11] is the following:
as the pay-off approximations get better (that is, as they approach the pay-offs of the demand game), the NBS becomes the unique
Nash equilibrium of the modified demand game.20
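The multiplicity of equilibria in the unperturbed demand game is easy to confirm. The following sketch (an illustration, not Nash's construction) uses the running example S = {x1 + x2 ≤ 100} with threat pay-offs normalised to zero and checks, over a grid of unilateral deviations, that several different efficient demand pairs are all Nash equilibria.

```python
# A minimal sketch (not from the article): in the demand game over
# S = {x1 + x2 <= 100} with threat pay-offs normalised to zero, every efficient
# demand pair (a, 100 - a) is a Nash equilibrium: demanding more makes the demands
# incompatible (pay-off zero), while demanding less only lowers one's own pay-off.

TOTAL = 100.0
GRID = [float(i) for i in range(101)]  # illustrative grid of alternative demands

def demand_game_payoffs(x1, x2):
    """Each player receives her demand if the demands are compatible, zero otherwise."""
    return (x1, x2) if x1 + x2 <= TOTAL else (0.0, 0.0)

def is_equilibrium(x1, x2):
    """No unilateral deviation over the grid raises the deviator's pay-off."""
    u1, u2 = demand_game_payoffs(x1, x2)
    ok1 = all(demand_game_payoffs(d, x2)[0] <= u1 for d in GRID)
    ok2 = all(demand_game_payoffs(x1, d)[1] <= u2 for d in GRID)
    return ok1 and ok2

print([is_equilibrium(a, TOTAL - a) for a in (30.0, 60.0, 90.0)])  # -> [True, True, True]
```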
Nash's result establishes the centrality of the NBS in bargaining theory. His papers have inspired a massive body of theoretical and experimental research on bargaining. The literature investigates issues arising from Nash's work. What are
equilibrium outcomes in non-cooperative bargaining games
where offers are made sequentially and players have different
rates of time preference? How are the results affected if
agents are uncertain about the disagreement pay-off and time
preferences of other players? Can uncertainty explain inefficiencies arising from delay in bargaining? And finally, what is
the source of power in bargaining? Is it related to disagreement pay-offs? Or to low rates of time preference (patience)
when bargaining takes time? Or perhaps to reputations that
allow players to credibly make unreasonable offers? A discussion of many of these issues can be found in [14].
4 Concluding Remarks

John Nash was a mathematician who, in addition to his work on game theory, made profound contributions to pure mathematics (see, for instance, his Abel Prize citation and informal
descriptions of his work in [1]). These contributions can be
properly appreciated only by researchers with advanced training in specific branches of mathematics. In contrast, his papers
in game theory are accessible to a wide audience. They are
written in an informal, almost conversational style. There is
no obsession with technical details. The 1950 paper about the
existence of equilibrium [9] is less than a page long and barely has any mathematical symbols.21 The 1950 axiomatic bargaining paper [10] has examples about the purchase of
Cadillacs and Buicks and individuals Bill and Jack who have to
share objects such as a book, a toy, a pen and so on. The paper
rests on an elegant mathematical argument but it can be fully
explained to someone with no more than a high school background in mathematics. The other two papers use a more
sophisticated mathematical result, the Kakutani Fixed-Point
Theorem. However, Nash provides the following assurance:
...those readers who are unacquainted with the mathematical technicalities will find they can manage quite well without them ([11], p 129).
Nash's place in the pantheon of greats in game theory, and economics in general, is not based on the development of powerful new mathematical methods but rather on some fundamental insights. These insights have been pivotal in the development of economic theory, especially since the 1980s when faith in models with price-taking agents began to wane. The book and film brought him to the notice of the public at large. However, they also had the unfortunate aspect of focusing attention on his mental health problems rather than on his work. I believe John Nash deserves to be remembered, above all, for his magnificent contributions to the world of ideas.

Notes
1 In a handwritten letter to the National Security Agency (NSA) of the US in 1955, Nash outlined the construction of an enciphering-deciphering machine. Although his idea was not
pursued by the NSA, the letter anticipates developments in computational complexity and
modern cryptography by several decades; see [12].
2 Sadly, he and his wife, Alicia, were killed in a
road accident while they were on their way
back from Norway where he had received the
Abel Prize.
3 It was the subject of the Oscar-winning 2001 Hollywood film A Beautiful Mind, based on Sylvia Nasar's book of the same name.
4 For an informal and wide-ranging assessment
of Nash's work and legacy, see [4].
5 Formally, πi is a map from the Cartesian product of the strategy sets to the real line.
6 This description is known as the normal form
representation of the game. An alternative and
equivalent representation is the extensive form.
The latter makes use of game trees.
7 In other words, all players know it, all players
know that all players know it, all players know
that all players know that all players know it,
and so on ad infinitum.
8 An important question is why a player should
choose to maximise his expected pay-off. For
instance, why should it not be reasonable to assume that a player chooses strategies in order
to maximise his worst-case pay-off? The foundations of choice under uncertainty were laid
down by von Neumann and Morgenstern in
their book. Their fundamental contribution is
the so-called Expected Utility Theorem. Loosely
speaking, it states that if preferences over lotteries over outcomes satisfy certain reasonable
axioms, utility numbers can be assigned to the
outcomes in a way such that choosing the optimal lottery according to these preferences is
equivalent to choosing the lottery that yields
the highest expected utility. The pay-offs to the
players in a game can be thought of as the utilities from the appropriate outcomes. Choosing
an optimal strategy (given a belief) is then
equivalent to choosing the strategy that maximises the expected pay-off from the strategy
where the lottery is specified by the belief.
9 Nash referred to them as counters.
10 In the matching pennies example the set of
pure strategies for each player is the set {L, R}.
11 The randomisation is independent, in the statistical sense, of the randomisations of other players.
12 For instance, any game where the strategy sets are compact, convex subsets of Euclidean space and the pay-off functions are continuous and quasi-concave has at least one Nash equilibrium.
13 Any collection of weakly dominant strategies, one for each player, is a Nash equilibrium. The converse is not true.
14 He had in mind the market for spring water.
15 The well-known Stackelberg equilibrium is also a particular Nash equilibrium (called the subgame perfect Nash equilibrium) of the leader-follower duopoly game. See [13], Chapter 6, for details.
16 He also made the technical assumption that S is compact.
17 Here (60, 40) denotes the outcome where Player 1 gets 60 and Player 2 gets 40.
18 This could be interpreted as the complete failure of negotiations.
19 An IIA axiom also appears in the famous Arrow Impossibility Theorem in social choice theory. The Arrow and Nash axioms differ in their formulations because they apply to different models, but express the same idea.
20 Nash's technique of selecting equilibria in games where multiple equilibria exist is also remarkably prescient. The idea is to select equilibria that are limit points of equilibria in suitably perturbed games. Several decades after Nash's paper, a large and influential literature emerged, developing this idea (see [16]).
21 Nash T-shirts with the entire paper printed on them were available on some US university campuses.

References
[1] Abel Prize (2015): http://www.abelprize.no/c63466/seksjon/vis.html?tid=63467.
[2] Bertrand, Joseph (1883): Review of Théorie mathématique de la richesse sociale by Léon Walras and Recherches sur les principes mathématiques de la théorie des richesses by Augustin Cournot, Journal des Savants, 499–508. Translated by Margaret Chevaillier as Review by Joseph Bertrand of two books, History of Political Economy, 24 (1992), 646–53.
[3] Cournot, Antoine A (1838): Recherches sur les principes mathématiques de la théorie des richesses, Paris, Hachette. Translated by Nathaniel T Bacon as Researches into the mathematical principles of the theory of wealth, New York, Macmillan, 1897.
[4] Ghosh, Parikshit (2015): John Nash and
Modern Economic Theory, http://www.
ideasforindia.in/article.aspx?article id=1462.
[5] Harsanyi, John C (1967, 1968): Games with Incomplete Information played by Bayesian Players, I, II and III, Management Science, 14, 159–82, 320–24 and 486–502.
[6] Holt, Charles A, Alvin E Roth and Vernon L
Smith (2004): The Nash Equilibrium: A Perspective, Proceedings of the National Academy
of Sciences of the United States of America,
101:12, 3999–4002.
[7] Moulin, Hervé (1991): Axioms of Cooperative
Decision Making, Econometric Society Monograph.
[8] Moulin, Hervé (2003): Fair Division and Collective Welfare, Cambridge, MIT Press.
[9] Nash, John F (1950a): Equilibrium Points in
n-person Games, Proceedings of the National
Academy of Sciences of the United States of
America, 36:1, 48–49.
[10] Nash, John F (1950b): The Bargaining Problem, Econometrica, 18:2, 155–162.
[11] Nash, John F (1953): Two-Person Cooperative
Games, Econometrica, 21:1, 128–40.
[12] Nisan, Noam (2012): John Nash's Letter to the NSA, Turing's Invisible Hand: Computation, Economics and Game Theory blog, https://agtb.wordpress.com/2012/02/17/john-nashs-letter-to-the-nsa/.
[13] Osborne, Martin J (2004): An Introduction to
Game Theory, Oxford University Press.
[14] Osborne, Martin J and Ariel Rubinstein
(1990): Bargaining and Markets, Academic
Press, full text available with permission of
the publisher at https://www.economics.utoronto.ca/osborne/bm/.
[15] Roth, Alvin E (1979): Axiomatic Models of Bargaining, Lecture Notes in Economics and
Mathematical Systems (Managing Editors: M
Beckman and H P Kunzi), Springer-Verlag,
Berlin Heidelberg New York.
[16] van Damme, Eric (1983): Refinements of the
Nash Equilibrium Concept, Lecture Notes in Economics and Mathematical Systems (Managing Editors: M Beckman and H P Kunzi), Springer-Verlag, Berlin Heidelberg New York.
[17] von Neumann, John and Oskar Morgenstern
(1944): Theory of Games and Economic Behavior, Princeton University Press.
