
11
Pinning Down Beliefs: Nash Equilibrium

The path we have taken so far has introduced us to three solution concepts for predicting the behavior of rational players. The first, strict dominance, relied only on rationality and was very appealing. It also predicted a unique outcome for the Prisoner's Dilemma (as it would in any game for which it exists). However, it often fails to exist. The two sister concepts of IESDS and rationalizability relied on more than rationality and asked for common knowledge of rationality. In return, we get existence for every game, and in some games we get uniqueness. In particular, whenever there is a strict dominant strategy equilibrium, it uniquely survives IESDS and rationalizability. For other games in which strict dominance does not apply, like the Cournot duopoly, we still obtained uniqueness from IESDS and rationalizability.
However, when we consider a game like the Battle of the Sexes, none of the concepts introduced above has any bite: dominant strategy equilibrium does not apply, and neither IESDS nor rationalizability can restrict the set of reasonable behavior.


                  Chris
                  O        F
   Pat     O     2, 1     0, 0
           F     0, 0     1, 2

For example, we cannot rule out the possibility that Pat goes to the opera while Chris goes to the football game: Pat may be playing a best response to his belief that Chris is going to the opera, and Chris may be playing a best response to his belief that Pat is going to the football game. But if we think of this pair of actions not only as actions, but as a system of actions and beliefs, then there is something of a dissonance: the players are indeed playing best responses to their beliefs, but their beliefs are wrong!

11.1 Nash Equilibrium


In what follows we are about to make a huge leap in our requirements of a solution concept. For dominant strategy equilibrium, all we required was that players be rational, but it applied very seldom. For IESDS and rationalizability, we demanded rationality and common knowledge of rationality. Now we will introduce a much more demanding concept, Nash equilibrium, introduced by John Nash (1950), a Nobel Laureate and the subject of a very successful Hollywood movie, A Beautiful Mind (based on the book by Sylvia Nasar (1998)).
To cut to the chase, a Nash equilibrium is a system of beliefs and a profile of actions such that each player is playing a best response to his beliefs and, moreover, players have correct beliefs. Another, and very common, way of defining a Nash equilibrium is as a profile of strategies for which each player is choosing a best response to the strategies of all other players. Formally:
Definition 22 The pure strategy profile $s^* = (s_1^*, s_2^*, \dots, s_n^*) \in S$ is a Nash equilibrium if $s_i^*$ is a best response to $s_{-i}^*$ for all $i \in N$, that is,
$$u_i(s_i^*, s_{-i}^*) \geq u_i(s_i', s_{-i}^*) \quad \text{for all } s_i' \in S_i \text{ and all } i \in N.$$


At the risk of being repetitive, let's emphasize what the requirements of a Nash equilibrium are:

1. Players are playing a best response to their beliefs.

2. Players' beliefs about their opponents are correct.

The first requirement is a direct consequence of rationality. It is the second requirement that is very demanding, and it is a tremendous leap beyond the structures we have considered so far. It is one thing to ask people to behave rationally given their beliefs (play a best response), but a totally different thing to ask players to predict the behavior of their opponents correctly.
Then again, it may be possible to accept such a strong requirement if we allow for some reasoning that is beyond the physical structure of the game. For example, imagine that Pat is an influential person: people just seem to follow Pat, and this is something that Pat knows well. In this case Chris should believe, knowing that Pat is so influential, that Pat would expect Chris to go to the opera; and Pat's beliefs, knowing this, should be that Chris will indeed believe that Pat is going to the opera, and so Chris will go to the opera as well. Indeed, (O, O) is a Nash equilibrium. However, notice that we can make the symmetric argument about Chris being an influential person: (F, F) is also a Nash equilibrium. As external game theorists, however, we cannot say more than that one of these two outcomes is what we predict. (You should be able to convince yourself that no other pair of pure strategies is a Nash equilibrium.)
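The claim in parentheses is easy to check directly. The small sketch below (my own, not part of the text) tests every pure strategy profile of the Battle of the Sexes against the Nash equilibrium condition of Definition 22; only (O, O) and (F, F) survive.

```python
# Battle of the Sexes: (row player's payoff, column player's payoff); here Pat is the
# row player and Chris the column player, as in the matrix above.
payoffs = {
    ("O", "O"): (2, 1), ("O", "F"): (0, 0),
    ("F", "O"): (0, 0), ("F", "F"): (1, 2),
}
strategies = ["O", "F"]

def is_nash(s1, s2):
    best_row = all(payoffs[(s1, s2)][0] >= payoffs[(d, s2)][0] for d in strategies)
    best_col = all(payoffs[(s1, s2)][1] >= payoffs[(s1, d)][1] for d in strategies)
    return best_row and best_col

print([(s1, s2) for s1 in strategies for s2 in strategies if is_nash(s1, s2)])
# [('O', 'O'), ('F', 'F')]
```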
Remark 3 The argument is not that Chris likes to please Pat; such an argument would change the payoffs of the game. It is only about beliefs that are "self-fulfilling."
What about the other games we saw? In the Prisoner's Dilemma, the unique Nash equilibrium is (F, F). In the Cournot duopoly game, the unique Nash equilibrium is $q_1 = q_2 = 33\frac{1}{3}$, as we will see formally soon. Recall the following two-player discrete game that we used to demonstrate IESDS:


             L        C        R
   U        4, 3     5, 1     6, 2
   M        2, 1     8, 4     3, 6
   D        3, 0     9, 6     2, 8
In it, the only pair of pure strategies that constitutes a Nash equilibrium is (U, L), the same pair that survived IESDS.
The relationship between the outcomes we obtained earlier and the Nash equilibrium outcomes is no coincidence. There is a simple relationship between the concepts we previously developed and that of Nash equilibrium, as the following proposition states:
Proposition 7 Consider a strategy profile $s^* = (s_1^*, s_2^*, \dots, s_n^*)$. If $s^*$ is either
(1) a strict dominant strategy equilibrium;
(2) the unique survivor of IESDS; or
(3) the unique rationalizable strategy profile;
then $s^*$ is the unique Nash equilibrium.
This proposition is simple to prove and is left as an exercise. The intuition is of course quite straightforward: we know that if there is a strict dominant strategy equilibrium then it uniquely survives IESDS and rationalizability, and this in turn must mean that each player is playing a best response to the other players' strategies.

11.2 Applications and Examples


11.2.1 The Tragedy of the Commons
The tragedy of the commons refers to problems of conflict over scarce resources that result from the tension between individual selfish interests and the common good; it was popularized by Hardin (1968). The main idea has proven to be a useful concept for understanding how we have come to be at the brink of several environmental catastrophes.
Hardin introduces the hypothetical example of a pasture shared by local herders. Each herder wants to maximize his yield, and therefore increases his herd size whenever possible. Each additional animal has a positive effect for its herder, but the


cost of that extra animal, namely the degrading of the quality of the pasture, is shared by all the other herders. As a consequence, the individual incentive of each herder is to grow his herd, and in the end this causes tremendous losses to everyone. To those trained in economics, it is yet another example of the distortion caused by the free-rider problem. It should also remind you of the Prisoner's Dilemma, in which individuals driven by selfish incentives cause pain to the group.
In the course of his essay, Hardin develops the theme, drawing on examples of latter-day "commons," such as the atmosphere, oceans, rivers, fish stocks, national parks, advertising, and even parking meters. A major theme running throughout the essay is the growth of human populations, with the Earth's resources being a general commons (given that it concerns the addition of extra "animals," it is the closest to his original analogy).
Let's put some game-theoretic analysis behind this story. Imagine that there are $n$ players, each choosing how much to consume of a common resource $K$. Each player $i$ chooses his own consumption $k_i \geq 0$. Consuming an amount $k_i \geq 0$ gives player $i$ a benefit equal to $k_i$, and no other player benefits from $i$'s choice. The cost of depleting the resource is a function of total consumption and is equal to
$$c\left(\sum_{i=1}^{n} k_i\right) = \left(\sum_{i=1}^{n} k_i\right)^{2}.$$
This cost is borne equally by all the players, so combining a player's benefit and cost from consumption yields the following utility function for each player:
$$u_i(k_i, k_{-i}) = k_i - \frac{1}{n}\left(\sum_{j=1}^{n} k_j\right)^{2}.$$
To solve for a Nash equilibrium we can compute the best response correspondence for each player, and then find a strategy profile for which all the best response functions are satisfied together. This is an important point that warrants further emphasis. We know that given $k_{-i}$, player $i$ will want to choose an element of $BR_i(k_{-i})$. Hence, if we find some profile of choices $(k_1^*, k_2^*, \dots, k_n^*)$ for which $k_i^* = BR_i(k_{-i}^*)$ for all $i \in N$, then this must be a Nash equilibrium.
This means that if we derive all $n$ best response correspondences, and it turns out that they are functions (unique best responses), then we have a system of


$n$ equations, one for each player's best response function, with $n$ unknowns, the choices of each player. Solving this system will yield a Nash equilibrium. To get player $i$'s best response function (and we will verify that it is indeed a function) we write down the first-order condition of his utility function:
$$\frac{\partial u_i(k_i, k_{-i})}{\partial k_i} = 1 - \frac{2}{n}\sum_{j=1}^{n} k_j = 0,$$
and this gives us player $i$'s best response function,
$$BR_i(k_{-i}) = \frac{n}{2} - \sum_{j \neq i} k_j.$$
We have $n$ such equations, one for each player, and if we substitute the choice $k_i$ in place of $BR_i(k_{-i})$, we get $n$ equations with $n$ unknowns that need to be solved simultaneously. Doing this yields¹
$$k_i^* = \frac{1}{2} \quad \text{for all } i \in N.$$
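Before turning to whether this outcome is desirable, here is a quick numerical sanity check (my own sketch, not part of the text) that the symmetric profile $k_i = \frac{1}{2}$ is a Nash equilibrium: for $n = 5$, no player can gain by a unilateral deviation over a fine grid of alternative consumption levels.

```python
import numpy as np

# Commons game: u_i(k) = k_i - (1/n) * (sum_j k_j)**2.
def u(i, k, n):
    return k[i] - (1.0 / n) * k.sum() ** 2

def is_best_response(i, k, n, grid):
    """No unilateral deviation on the grid should improve player i's payoff."""
    base = u(i, k, n)
    for dev in grid:
        trial = k.copy()
        trial[i] = dev
        if u(i, trial, n) > base + 1e-9:
            return False
    return True

n = 5
k_star = np.full(n, 0.5)               # the candidate equilibrium from the text
grid = np.linspace(0.0, 5.0, 5001)     # deviations to check
print(all(is_best_response(i, k_star, n, grid) for i in range(n)))  # True
```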
Now we need to ask: is consuming $\frac{1}{2}$ too much or too little? The right way to answer this is with the Pareto criterion: can we find another consumption profile that makes everyone better off? If we can, we can compare it with the Nash equilibrium to answer this question. To find such a profile we'll use a little trick: we will maximize the sum of all the utility functions, which we can think of as society's utility function. I won't go into the moral justification for this, but it will turn out to be a useful tool.² The function we are maximizing is, therefore,
$$\max_{k_1, k_2, \dots, k_n} \; \sum_{i=1}^{n} u_i(k_i, k_{-i}) = \sum_{i=1}^{n} k_i - \left(\sum_{j=1}^{n} k_j\right)^{2}.$$

1 We know that these equations are all symmetric, and hence we look for a symmetric solution. Solving the equation $k = \frac{n}{2} - (n-1)k$ yields the symmetric solution $k = \frac{1}{2}$. To show that there are no asymmetric solutions requires a bit more work.
2 In general, maximizing the sum of utility functions will result in a Pareto optimal outcome, but it need not be the only one. In this example, the maximization gives the condition for all the Pareto optimal consumption profiles because it turns out that all $n$ first-order conditions are the same, due to the structure of our problem. This is not something we will dwell on much at all. As mentioned above, this is just a useful tool to check whether something is Pareto dominated.


The first-order conditions for this problem are
$$1 - 2\sum_{i=1}^{n} k_i = 0 \quad \text{for all } i = 1, \dots, n,$$
which means that from a social perspective the solution must satisfy
$$\sum_{i=1}^{n} k_i = \frac{1}{2}.$$

Interestingly, society doesn't care who gets how much as long as total consumption is equal to $\frac{1}{2}$. Hence, we can look at the symmetric solution in which each player consumes $\hat{k}_i = \frac{1}{2n}$, and compare this with the Nash equilibrium solution. To do this we will subtract the utility of player $i$ at the Nash solution from his utility at the social optimum solution; if this difference is positive then we know that the social optimum is better for everyone. We have
$$u_i(\hat{k}_i, \hat{k}_{-i}) - u_i(k_i^*, k_{-i}^*) = \left[\frac{1}{2n} - \frac{1}{n}\left(\sum_{j=1}^{n}\frac{1}{2n}\right)^{2}\right] - \left[\frac{1}{2} - \frac{1}{n}\left(\sum_{j=1}^{n}\frac{1}{2}\right)^{2}\right] = \frac{(n-1)^{2}}{4n} > 0.$$
As we suspected, if all the players could commit to consume the amount $\hat{k}_i = \frac{1}{2n}$, then they would each have a higher utility than they have in the Nash equilibrium, in which they consume more than is socially desirable. Thus, as Hardin puts it, giving people the freedom to make choices may make them all worse off than if that freedom were somehow regulated. Of course, the counterargument is whether we can trust a regulator to keep things under control, and if not, the question remains which is the better of the two evils, an answer that I will not offer here.
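For a concrete number, the following small check (my own sketch, not from the text) evaluates a player's payoff at the Nash profile and at the symmetric social optimum for $n = 5$; the gap equals $(n-1)^2/4n = 0.8$, consistent with the expression above.

```python
n = 5

def payoff(k_own, total):
    # u_i = k_i - (1/n) * (total consumption)^2
    return k_own - (1.0 / n) * total ** 2

u_nash = payoff(0.5, n * 0.5)              # everyone consumes 1/2
u_social = payoff(1 / (2 * n), 0.5)        # everyone consumes 1/(2n), total 1/2
print(u_social - u_nash, (n - 1) ** 2 / (4 * n))   # both are 0.8 (up to rounding)
```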
11.2.2 Cournot Duopoly
Let's revisit the Cournot game with demand $P = 100 - q$ and cost functions $c_i(q_i) = c_i q_i$ for firms $i \in \{1, 2\}$. The maximization problem that firm $i$ faces when it believes that its opponent chooses quantity $q_j$ is
$$\max_{q_i} \; u_i(q_i, q_j) = (100 - q_i - q_j)\,q_i - c_i q_i.$$


Recall that the best response of each firm is given by the first-order condition,
$$BR_i(q_j) = \frac{100 - q_j - c_i}{2}.$$
This means that each firm chooses quantities as follows:
$$q_1 = \frac{100 - q_2 - c_1}{2} \quad \text{and} \quad q_2 = \frac{100 - q_1 - c_2}{2}. \qquad (11.1)$$
When do we have a Nash equilibrium? Precisely when we find a pair of quantities $(q_1, q_2)$ that are mutual best responses. This occurs exactly when we solve both best response functions (11.1) simultaneously. The following diagram shows the solution for $c_1 = c_2 = 0$, in which case the unique Nash equilibrium is $q_1 = q_2 = 33\frac{1}{3}$.
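Since (11.1) is just a pair of linear equations, the equilibrium can also be computed directly. The sketch below (my own, not from the text) solves the system for given costs and reproduces $q_1 = q_2 = 33\frac{1}{3}$ when $c_1 = c_2 = 0$.

```python
import numpy as np

# Solve q1 = (100 - q2 - c1)/2 and q2 = (100 - q1 - c2)/2 as a linear system.
def cournot_equilibrium(c1=0.0, c2=0.0):
    A = np.array([[1.0, 0.5],
                  [0.5, 1.0]])                     # coefficients after rearranging
    b = np.array([(100 - c1) / 2, (100 - c2) / 2])
    return np.linalg.solve(A, b)

print(cournot_equilibrium())   # [33.3333... 33.3333...]
```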

Notice that the Nash equilibrium coincides with the unique strategies that survive IESDS and that are rationalizable, which is the conclusion of Proposition 7. An exercise left for you is to explore the Cournot model with many firms.
11.2.3 Bertrand Duopoly
The Cournot model assumed that the firms choose quantities, and the market price adjusts to clear the demand. However, one can argue that firms often set prices, and let consumers choose where to purchase from, rather than setting quantities and waiting for the market price to equilibrate demand. We now consider the


game where each firm posts a price for its otherwise identical good. This was the situation modelled and analyzed by Joseph Bertrand in 1883.
As before, assume that demand is given by $p = 100 - q$ and cost functions are $c_i(q_i) = 0$ for firms $i \in \{1, 2\}$ (zero costs). Clearly, we would expect buyers to all buy from the firm whose price is lowest. What happens if there is a tie? Let's assume that the market splits equally between the two firms. This gives us the following normal form of the game:
Players: $N = \{1, 2\}$.
Strategy sets: $S_i = [0, \infty)$ for $i \in \{1, 2\}$, and firms choose prices $p_i \in S_i$.
Payoffs: To calculate payoffs, we need to know what the quantities will be for each firm. Given our assumption on ties, the quantities are given by
$$q_i(p_i, p_j) = \begin{cases} 100 - p_i & \text{if } p_i < p_j \\ 0 & \text{if } p_i > p_j \\ \frac{100 - p_i}{2} & \text{if } p_i = p_j, \end{cases}$$
which in turn means that the payoff function is given by
$$u_i(p_i, p_j) = \begin{cases} (100 - p_i)\,p_i & \text{if } p_i < p_j \\ 0 & \text{if } p_i > p_j \\ \frac{100 - p_i}{2}\,p_i & \text{if } p_i = p_j. \end{cases}$$

Now that the description of the game is complete, we can calculate the best response functions of both firms. To do this, we will first start with a slight modification that is motivated by reality: assume that prices cannot be any real number, but are limited to increments of some small number, say $\varepsilon > 0$. That is, prices are assumed to be in the set $\{0, \varepsilon, 2\varepsilon, 3\varepsilon, \dots\}$. For example, $\varepsilon = 0.01$ if we are considering cents as the price increment,³ and the strategy set will be $\{0, 0.01, 0.02, \dots\}$. We will later consider smaller increments and look at what happens when the increment becomes infinitely small and approaches zero.
3 Notice, for example, that at gas stations gallons are often quoted at prices that include one-tenth of a cent.


We derive the best response of a firm by exhausting the relevant situations it can face. Assume first that $p_j$ is very high, above 50. Then firm $i$ can set the monopoly (profit-maximizing) price of 50 and not face any competition, which is clearly what $i$ would choose to do.⁴ Now assume that $50 > p_j > 0.01$. Firm $i$ can choose one of three options: set $p_i > p_j$ and get nothing, set $p_i = p_j$ and split the market, or set $p_i < p_j$ and get the whole market. It is not too hard to check that of these three, firm $i$ wants to just undercut firm $j$ and capture the whole market, thus setting a price of $p_i = p_j - 0.01$.⁵ When $p_j = 0.01$ these three options are still there, but undercutting means setting $p_i = 0$, which is the same as setting $p_i > p_j$ and getting nothing. Thus, the best reply is setting $p_i = p_j = 0.01$ and splitting the market. Finally, if $p_j = 0$ then any choice of price gives firm $i$ zero profits, and therefore anything is a best response. In summary:
$$BR_i(p_j) = \begin{cases} 50 & \text{if } p_j > 50 \\ p_j - 0.01 & \text{if } 50 \geq p_j > 0.01 \\ 0.01 & \text{if } p_j = 0.01 \\ p_i \in \{0, 0.01, 0.02, 0.03, \dots\} & \text{if } p_j = 0. \end{cases}$$
Now, given that firm $j$'s best response is exactly symmetric, it should not be hard to see that there are two Nash equilibria that follow immediately from the form of the best response functions: the best response to 0.01 is 0.01, and a best response to 0 is 0. Thus, the two Nash equilibria are
$$(p_1, p_2) \in \{(0, 0), (0.01, 0.01)\}.$$
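Because the strategy sets here are finite grids, the two equilibria can also be confirmed by brute force. The sketch below (my own, not from the text) enumerates all mutual best responses, using a coarser increment $\varepsilon = 1$ to keep the search small; the logic is the same as with $\varepsilon = 0.01$, and the equilibria found are $p = 0$ and $p = \varepsilon$, mirroring $\{(0, 0), (0.01, 0.01)\}$.

```python
eps = 1.0                                   # coarse price increment (the text uses 0.01)
prices = [eps * k for k in range(int(100 / eps) + 1)]

def payoff(p_own, p_other):
    if p_own < p_other:
        return (100 - p_own) * p_own        # undercut: capture the whole market
    if p_own > p_other:
        return 0.0                          # overpriced: no customers
    return (100 - p_own) * p_own / 2        # tie: split the market

# Best achievable payoff against each possible opponent price.
best = {q: max(payoff(p, q) for p in prices) for q in prices}

equilibria = [(p1, p2) for p1 in prices for p2 in prices
              if payoff(p1, p2) >= best[p2] - 1e-9 and payoff(p2, p1) >= best[p1] - 1e-9]
print(equilibria)                           # [(0.0, 0.0), (1.0, 1.0)]
```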
It is worth pausing here for a moment to prevent a rather common point of confusion, which often arises when a player has more than one best response to a certain action of his opponents. In this example, when $p_2 = 0$, player 1 is indifferent
4 The monopoly price is the price that would maximize a single firm's profits if there were no competitors. It is obtained by maximizing $u_i(p) = pq = (100 - p)p$; the first-order condition is $100 - 2p = 0$, resulting in an optimal price of 50. Hence, if the competitor sets a price above 50, the firm can act as if there were no competition.
5 To see this, note that if we have some $p_j > 0.01$, then by setting $p_i = p_j$ firm $i$ gets $\pi_i = \frac{(100 - p_j)p_j}{2}$, while if it sets $p_i = p_j - 0.01$ it gets $\pi_i' = (100 - (p_j - 0.01))(p_j - 0.01)$. The difference between the two is $\pi_i' - \pi_i = 50.02\,p_j - \frac{1}{2}p_j^2 - 1.0001$, which is positive at $p_j = 0.02$ and has a positive derivative for any $p_j \in [0.02, 50]$.


between "3: 8+(5/ $/ chooses: if he splits the market with 91 = 0 he gets half the
market with no pro!ts, and if he sets 91 : 0 he gets no customers and has no
pro!ts. One may be tempted to jump to the following conclusion: if player 2 is
choosing 92 = 0, then any choice of 91 together with player 2s zero price will be
a Nash equilibrium. This is incorrect. It is true that player 1 is playing a best
response with any one of his choices, but if 91 : 0 then 92 = 0 is 360 a best
response as we can observe from the analysis above. Thus, having one !rm choose
a price of zero while the other is not cannot be a Nash equilibrium.
Comparing the Bertrand game outcome to the Cournot game outcome is an interesting exercise. Notice that when firms choose quantities (Cournot), the unique Nash equilibrium when costs are zero has $q_1 = q_2 = 33\frac{1}{3}$. A quick calculation shows that for the aggregate quantity $q = q_1 + q_2 = 66\frac{2}{3}$ we get a demand price of $p = 33\frac{1}{3}$, and each firm makes a profit of about $1,111.11. When instead these firms compete on prices, the two possible equilibria have either zero profits, when both choose zero prices, or negligible profits (about 50 cents), when they each choose a price of $0.01. Interestingly, for both the Cournot and Bertrand games, if we had only one firm, it would maximize profits $\pi = pq$, choose the monopoly price (or quantity) of $50 (or 50 units), and earn a profit of $2,500.
The message of this analysis is quite striking: one firm may have monopoly power, but when we let just one more firm compete, and they compete with prices, then the market behaves competitively: when both choose a price of zero, price equals marginal cost! Notice that if we add a third and fourth firm, this will not change the outcome; prices will have to be zero (or practically zero, at 0.01) for all firms in the Nash (Bertrand) equilibrium. This is not the case for Cournot competition.
A quick observation should lead you to realize that if we let $\varepsilon$ be smaller than one cent, the conclusions above will be sustained, and we will have two Nash equilibria: one with $p_1 = p_2 = 0$, and one with $p_1 = p_2 = \varepsilon$. It turns out that these two equilibria not only become closer in profits as $\varepsilon$ gets smaller, but if we take $\varepsilon$ to zero and assume that prices can be any real number, we get a very clean result: the unique Nash equilibrium has prices equal to marginal cost, implying a competitive outcome.


Proposition 8 For $\varepsilon = 0$ (prices can be any real number) there is a unique Nash equilibrium: $p_1 = p_2 = 0$.
Proof: First note that we can't have a negative price in equilibrium: a firm offering one would lose money (it would be paying consumers to take its goods!). We need to show that we can't have a positive price in equilibrium. We can see this in two steps:
(i) If $p_1 = p_2 = p > 0$, each firm would benefit from changing to some price $p - \varepsilon$ ($\varepsilon$ very small) and getting the whole market for almost the same price.
(ii) If $p_1 > p_2 \geq 0$, firm 2 would want to deviate to $p_1 - \varepsilon$ ($\varepsilon$ very small) and earn higher profits.
It is easy to see that $p_1 = p_2 = 0$ is an equilibrium: both firms are playing a best response.
Exercise 1 If the cost function were $c \cdot q_i$ for each firm, then the unique Nash equilibrium would be $p_1 = p_2 = c$. Convince yourself of this as an easy exercise.
We will now see an interesting variation of the Bertrand game. Assume that $c_i(q_i) = c_i \cdot q_i$ represents the cost of firm $i$ as before. Now, however, let $c_1 = 1$ and $c_2 = 2$, so that the two firms are not identical: firm 1 has a cost advantage. Let demand still be $p = 100 - q$.
Now consider the case with discrete price jumps with $\varepsilon = 0.01$. As before, there are two Nash equilibria:
$$(p_1, p_2) \in \{(1.99, 2.00), (2.00, 2.01)\},$$
or, more generally, $(p_1, p_2) \in \{(2 - \varepsilon, 2), (2, 2 + \varepsilon)\}$.
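Exercise 2 below asks you to show that these are the only equilibria; a quick numerical check (my own sketch, on a one-cent grid) is consistent with that claim and also shows that, for example, (2.00, 2.00) is not an equilibrium.

```python
c = {1: 1.0, 2: 2.0}                                  # marginal costs: firm 1 has the advantage
prices = [round(0.01 * k, 2) for k in range(10001)]   # 0.00, 0.01, ..., 100.00

def profit(i, p_own, p_other):
    if p_own < p_other:
        return (100 - p_own) * (p_own - c[i])
    if p_own > p_other:
        return 0.0
    return (100 - p_own) * (p_own - c[i]) / 2

def is_nash(p1, p2):
    best1 = max(profit(1, p, p2) for p in prices)
    best2 = max(profit(2, p, p1) for p in prices)
    return profit(1, p1, p2) >= best1 - 1e-9 and profit(2, p2, p1) >= best2 - 1e-9

print(is_nash(1.99, 2.00), is_nash(2.00, 2.01), is_nash(2.00, 2.00))   # True True False
```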
Exercise 2 Show that these are the only Nash equilibria of this game.
Now we can ask ourselves: what happens if $\varepsilon = 0$? If we think of using a limit approach to answer this, then we may expect a result similar to the one we saw before, namely that we get one equilibrium which is the limit of both, which in this case would be $p_1 = p_2 = 2$.
But is this an equilibrium? Interestingly, the answer is no! To see this, consider the best response of firm 1. Its payoff function is not continuous when firm 2 offers


[Figure: $\pi_1(p_1, p_2)$ plotted against $p_1$, showing the curves labeled "Monopoly profits" and "Duopoly profits," with the points $p_2$ and $p_1^m$ marked on the price axis.]

FIGURE 11.1. The profit function in the Bertrand duopoly game.

a price of 2. The profit function of firm 1, as a function of $p_1$ when $p_2 = 2$, is depicted in Figure 11.1. The figure first draws the profits of firm 1 as if it were a monopolist with no competition (the hump-shaped curve); if this were the case it would charge its monopoly price $p_1^m = 50.5$.⁶ If firm 2 charged more than the monopoly price, this would have no impact on the choice of firm 1: it would still charge the monopoly price. If, however, firm 2 charges a price $p_2$ that is less than the monopoly price, then there is a discontinuity in the profit function of firm 1: as its price $p_1$ approaches $p_2$ from below, its profits rise; but when it hits $p_2$ exactly, it splits the market and its profits drop by half.
This discontinuity causes firm 1 not to have a well-defined best response correspondence when $p_2 < 50.5$. Firm 1 wants to set a price as close to $p_2$ as it can, but does not want to reach $p_2$, because then it splits the market and its profits jump down. Once its price goes above $p_2$, firm 1's profits drop further to zero.
Indeed, this is an example in which a Nash equilibrium does not exist. The reason is precisely the fact that firm 1 does not have a well-behaved payoff function, which
6 The maximization here is of the profit function $\pi_1 = (100 - p)(p - c_1)$, where $c_1 = 1$.


in turn causes it not to have a well-defined best response function, a consequence of the discontinuity in the profit function.

11.3 Nash Equilibrium in a Matrix: A Simple Method


In this short section we go over a trivial yet foolproof way to find all the pure strategy Nash equilibria in a matrix game, if at least one exists. Consider the following two-person finite game in matrix form:

             L        C        R
   U        7, 7     4, 2     1, 8
   M        2, 4     5, 5     2, 3
   D        8, 1     3, 2     0, 0

It is easy to see that no strategy is dominated, and thus strict dominance cannot be applied to this game, while IESDS and rationalizability conclude that anything can happen. However, a pure strategy Nash equilibrium exists. To find it, we use a simple method that captures the fact that any Nash equilibrium must be a pair of strategies at which each of the two players is playing a best response to his opponent's strategy. The procedure is best explained in three steps.
Step 1: For every column, find the highest payoff entry for player 1. By definition, this entry must be in the row that is a best response to the particular column being considered. Underline the pair of payoffs in this entry.
What step 1 does is identify the best response of player 1 for each of the pure strategies (columns) of player 2. For instance, if player 2 is playing L, then player 1's best response is D, and we underline the payoffs associated with this row in the L column. After performing this step we see that there are three pairs of pure strategies at which player 1 is playing a best response: (D, L), (M, C), and (M, R).
Step 2: For every row, find the highest payoff entry for player 2. By definition, this entry must be in the column that is a best response to the particular row being considered. Overline the pair of payoffs in this entry.


Step 2 similarly identifies the pairs of strategies at which player 2 is playing a best response. For instance, if player 1 is playing D, then player 2's best response is C, and we overline the payoffs associated with this column in the D row. We can continue to conclude that player 2 is playing a best response at three strategy pairs: (D, C), (M, C), and (U, R).
Step 3: If an entry has both an underline and an overline, it is the outcome of a Nash equilibrium in pure strategies.
This follows immediately from the fact that both players are playing a best response at any such pair of strategies. In this example we find that (M, C) is the unique pure strategy Nash equilibrium: it is the only pair of pure strategies at which both players are playing a best response. If you apply this method to the Battle of the Sexes, for example, you will find both pure strategy Nash equilibria, (O, O) and (F, F). For the Prisoner's Dilemma only (F, F) will be identified.

11.4 Evaluating Nash Equilibria


As for our criteria for evaluating solution concepts, we can see from the Battle of the Sexes example that we may not have a unique Nash equilibrium. However, as the argument above alludes to, there is no reason to expect that we should. Indeed, we may need to entertain other aspects of the environment in which players interact, such as social norms and historical beliefs, to make precise predictions about which of the possible Nash equilibria is the more likely outcome.
As for existence, the analysis of the Bertrand price competition game with asymmetric marginal costs presented in section 11.2.3 demonstrated that sometimes a Nash equilibrium may not exist. For the interested reader, section 12.5 discusses the rather general conditions that guarantee the existence of a Nash equilibrium, which was a central part of Nash's Ph.D. dissertation. It turns out that the kind of pathologies that arise with discontinuous utility functions are a main source of problems, but if we believe that these are rare (e.g., in reality prices come in discrete increments), then we should not worry too much about these anomalies. It turns out that for a rich set of games described in section 12.5, a Nash equilibrium always exists, which


gives this solution concept its power: like IESDS and rationalizability, the Nash solution concept is widely applicable. It will, however, usually lead to more refined predictions than those of IESDS and rationalizability, as implied by Proposition 7.
From the Prisoner's Dilemma we can easily see that Nash equilibrium does not guarantee Pareto optimality. Indeed, if people are left to their own devices, then in some situations we need not expect them to do what is best for the whole group. This point was made quite convincingly and intuitively in Hardin's (1968) tragedy of the commons argument. This is where we can revisit the important restriction to self-enforcing outcomes: our solution concepts took the game as given and imposed rationality and common knowledge to try to see what players will choose to do. If they each seek to maximize their individual well-being, then they may hinder their ability to achieve socially optimal outcomes.

11.5 Summary
...

11.6 Exercises
1. Prove Proposition 7.
2. The n-firm Cournot Model: Suppose there are $n$ firms in the Cournot oligopoly model. Let $q_i$ denote the quantity produced by firm $i$, and let $Q = q_1 + \cdots + q_n$ denote the aggregate production. Let $P(Q)$ denote the market-clearing price (when demand equals $Q$) and assume that the inverse demand function is given by $P(Q) = a - Q$ (where $Q < a$). Assume that firms have no fixed cost and that the cost of producing quantity $q_i$ is $c \cdot q_i$ (all firms have the same marginal cost, and assume that $c < a$).
(a) Model this as a normal form game.
(b) What is the Nash (Cournot) equilibrium of the game where firms choose their quantities simultaneously?


(c) What happens to the equilibrium price as $n$ approaches infinity? Is this familiar?
3. Splitting Pizza: You and a friend are in an Italian restaurant, and the owner offers both of you an 8-slice pizza under the following condition. Each of you must simultaneously announce how many slices you would like; that is, each player $i \in \{1, 2\}$ names his desired amount of pizza, $0 \leq s_i \leq 8$. If $s_1 + s_2 \leq 8$ then the players get their demands (and the owner eats any leftover slices). If $s_1 + s_2 > 8$, then the players get nothing. Assume you each care only about how much pizza you individually consume.
(a) Write out or graph each player's best-response correspondence.
(b) What outcomes can be supported as pure-strategy Nash equilibria?
4. Tragedy of the Roommates: You and your $n - 1$ roommates each have 5 hours of free time you could spend cleaning your apartment. You all dislike cleaning, but you all like having a clean apartment: each person's payoff is the total hours spent (by everyone) cleaning, minus a number $c$ times the hours he spent (individually) cleaning. That is,
$$u_i(s_1, s_2, \dots, s_n) = -c \cdot s_i + \sum_{j=1}^{n} s_j.$$
Assume everyone chooses simultaneously how much time to spend cleaning.


(a) Find the Nash equilibrium if $c < 1$.
(b) Find the Nash equilibrium if $c > 1$.
(c) Set $n = 5$ and $c = 2$. Is the Nash equilibrium Pareto efficient? If not, can you find an outcome in which everyone is better off than at the Nash equilibrium outcome?
5. Wasteful Shipping Costs: Consider two countries, A and B, each with a monopolist that owns the only coal mine in its country and produces coal. Let firm 1 be the one located in country A, and firm 2 the one in country

B. Let $q_{ij}$, $i \in \{1, 2\}$ and $j \in \{A, B\}$, denote the quantity that firm $i$ sells in country $j$. Consequently, let $q_i = q_{iA} + q_{iB}$ be the total quantity produced by firm $i \in \{1, 2\}$, and let $q^j = q_{1j} + q_{2j}$ be the total quantity sold in country $j \in \{A, B\}$. The demand for coal in countries A and B is given respectively by
$$p^j = 90 - q^j, \quad j \in \{A, B\},$$
and the cost of production for each firm is given by
$$c_i(q_i) = 10\,q_i, \quad i \in \{1, 2\}.$$
(a) Assume that the countries do not have a trade agreement and, in fact, imports in both countries are prohibited. This implies that $q_{2A} = q_{1B} = 0$ is set as a political constraint. What quantities $q_{1A}$ and $q_{2B}$ will the two firms produce?

Now assume that the two countries sign a free-trade agreement that allows foreign firms to sell in their countries without any tariffs. There are, however, shipping costs. If firm $i$ sells quantity $q_{ij}$ in the foreign country (i.e., firm 1 selling in B or firm 2 selling in A), then shipping costs are equal to $10\,q_{ij}$. Assume further that each firm chooses a pair of quantities $q_{iA}, q_{iB}$ simultaneously, $i \in \{1, 2\}$, so that a profile of actions consists of four quantity choices.
(b) Model this as a normal form game and find a Nash equilibrium of the game you described. Is it unique?

Now assume that before the game you described in (b) is played, the research department of firm 1 discovers that shipping coal with the current ships causes the release of pollutants. If the firm were to disclose this report to the World Trade Organization (WTO), then the WTO would prohibit the use of the current ships. Instead, a new shipping technology would be offered that would increase shipping costs to $40\,q_{ij}$ (instead of $10\,q_{ij}$ as above).


(c) Would firm 1 be willing to release the information to the WTO? Justify your answer with an equilibrium analysis.
6. Comparative Economics: Two high-tech firms (1 and 2) are considering a joint venture. Each firm $i$ can invest in a novel technology, and can choose a level of investment $x_i \in [0, 5]$ at a cost of $c_i(x_i) = x_i^2/4$ (think of $x_i$ as how many hours to train employees, or how much capital to buy for R&D labs). The revenue of each firm depends both on its own investment and on the other firm's investment. In particular, if firms $i$ and $j$ choose $x_i$ and $x_j$ respectively, then the gross revenue to firm $i$ is
$$R(x_i, x_j) = \begin{cases} 0 & \text{if } x_i < 1 \\ 2 & \text{if } x_i \geq 1 \text{ and } x_j < 2 \\ x_i \cdot x_j & \text{if } x_i \geq 1 \text{ and } x_j \geq 2. \end{cases}$$
(a) Write down mathematically, and draw, the profit function (gross revenue minus costs) of firm $i$ as a function of $x_i$ for three cases: (i) $x_j < 2$, (ii) $x_j = 2$, and (iii) $x_j = 4$.

(b) What is the best response function of firm $i$?

(c) It turns out that there are two identical pairs of such firms (that is, the technology above describes the situation for both pairs). One pair is in Russia, where coordination is hard to achieve and business people are very cautious, and the other pair is in Germany, where coordination is common and business people expect their partners to go the extra mile. You learn that the Russian firms are earning significantly lower profits than the German firms, despite the fact that their technologies are identical. Can you use Nash equilibrium analysis to shed light on this dilemma? If so, be precise and use your previous analysis to do so.

11.7 References
*** TO BE COMPLETED ***
Hardin, Garrett (1968). "The Tragedy of the Commons," Science.


Nasar, Sylvia (1998). A Beautiful Mind. New York: Simon and Schuster.
Nash, John (1950).


!"
Mixed Strategies

In the previous chapters we postponed discussing the option that a player has to choose a random strategy. This turns out to be an important type of behavior to consider, with interesting implications and interpretations. In fact, there are many games for which there is no equilibrium prediction if we do not consider the players' ability to choose random strategies.
Consider the following classic zero-sum game, called Matching Pennies.¹ Players 1 and 2 each put a penny on a table simultaneously. If the two pennies come up on the same side (heads or tails) then player 1 gets both; otherwise player 2 does. We can represent this in the following matrix:
                  Player 2
                  H         T
 Player 1   H    1, -1    -1, 1
            T   -1, 1      1, -1

P/+6 #', 9",/ is one in which the gains of one player are the losses of another, hence their payo!s sum to
zero. The class of zero sum games was the main subject of analysis before Nash introduced his solution concept
in the 1950s. These games have some very nice mathematical properties and were a central object of analysis in
von Neumann and Morgensterns (1944) seminal book.


Upon observation we can see that the method we introduced in section 11.3 to find pure strategy Nash equilibria does not work. Namely, given a belief that player 1 has about player 2's choice, he always wants to match it. In contrast, given a belief that player 2 has about player 1's choice, he would like to choose the opposite orientation for his penny. Does this mean that a Nash equilibrium fails to exist? We will soon see that a Nash equilibrium does indeed exist if we allow players to choose random strategies, and there will be an intuitive appeal to the proposed equilibrium.
Matching Pennies is not the only simple game that fails to have a pure strategy Nash equilibrium. Recall the children's game Rock-Paper-Scissors: rock beats scissors, scissors beats paper, and paper beats rock. If winning gives the player a payoff of 1 and the loser gets a payoff of -1, and if we assume a tie is worth 0, then we can describe this game by the following matrix:

             R           P           S
   R        0, 0       -1, 1       1, -1
   P        1, -1       0, 0      -1, 1
   S       -1, 1        1, -1      0, 0

It is rather straightforward to write down the best response correspondence for player 1 when he believes that player 2 will play one of his pure strategies:
$$s_1(s_2) = \begin{cases} P & \text{if } s_2 = R \\ S & \text{if } s_2 = P \\ R & \text{if } s_2 = S, \end{cases}$$
and a similar (symmetric) list would be the best response correspondence of player 2. Examining the two best response correspondences immediately implies that there is no pure strategy equilibrium, just as in the Matching Pennies game. The reason is that starting from any pair of pure strategies, at least one player is not playing a best response and will want to change his strategy in response.


12.1 Strategies, Beliefs and Expected Payoffs


In what follows we introduce the ability of players to choose random strategies. This will turn out to offer us several important advances over what we have done so far. First, it admits the ability of people to make choices like "I'll flip a coin and choose accordingly," which is not unheard of. Second, and more importantly, it endows our players with a richer set of possible beliefs that capture an uncertain world: if player $i$ can believe that his opponents are choosing random strategies, then this puts player $i$ in the same kind of situation as a decision maker who faces a decision problem with probabilistic uncertainty. Hence, you are encouraged to review chapter 3, which lays out the simple decision problem with random events.
12.1.1 Finite Strategy Sets
We start with the basic definition of random play when players have finite strategy sets $S_i$:
Definition 23 Let $S_i$ be player $i$'s pure strategy set and assume that $S_i$ is finite. Define $\Delta S_i$ as the simplex of $S_i$, which is the set of probability distributions over $S_i$. A mixed strategy for player $i$ is an element $\sigma_i \in \Delta S_i$, so that $\sigma_i$ is a probability distribution over $S_i$. We denote by $\sigma_i(s_i)$ the probability that player $i$ plays $s_i$.
That is, a mixed strategy for player $i$ is just a probability distribution over his pure strategies. Recall that any probability distribution $\sigma_i(\cdot)$ over a finite state space, in our case $S_i$, must satisfy two conditions:
1. $\sigma_i(s_i) \geq 0$ for all $s_i \in S_i$, and
2. $\sum_{s_i \in S_i} \sigma_i(s_i) = 1$.
That is, the probability of any event happening must be nonnegative, and the sum of the probabilities of all the possible events must add up to one.² Notice that every pure strategy is a mixed strategy with a degenerate distribution that picks a single pure strategy with probability one.
2 The notation $\sum_{s_i \in S_i} \sigma(s_i)$ means the sum of $\sigma(s_i)$ over all the $s_i \in S_i$.


Example 8 Matching Pennies: Consider the Matching Pennies game described earlier with the matrix

                  Player 2
                  H         T
 Player 1   H    1, -1    -1, 1
            T   -1, 1      1, -1

For each player $i$, $S_i = \{H, T\}$, and the simplex, which is the set of mixed strategies, can be written as
$$\Delta S_i = \{(\sigma_i(H), \sigma_i(T)) : \sigma_i(H) \geq 0, \; \sigma_i(T) \geq 0, \; \sigma_i(H) + \sigma_i(T) = 1\}.$$
We read this as follows: the set of mixed strategies is the set of all pairs $(\sigma_i(H), \sigma_i(T))$ such that both are nonnegative numbers and they sum up to one.³ We use the notation $\sigma_i(H)$ to represent the probability that player $i$ plays H, and $\sigma_i(T)$ as the probability that player $i$ plays T.
Example 9 Rock-Paper-Scissors: In the Rock-Paper-Scissors game, $S_i = \{R, P, S\}$ (for rock, paper, and scissors respectively), and we can define the simplex as
$$\Delta S_i = \{(\sigma_i(R), \sigma_i(P), \sigma_i(S)) : \sigma_i(R), \sigma_i(P), \sigma_i(S) \geq 0, \; \sigma_i(R) + \sigma_i(P) + \sigma_i(S) = 1\},$$
which is now three numbers, each defining the probability that the player plays one of his pure strategies. As mentioned earlier, a pure strategy is just a special case of a mixed strategy. For example, in this game we can represent the pure strategy of playing R with the degenerate mixed strategy $\sigma(R) = 1$, $\sigma(P) = \sigma(S) = 0$.
Given a player's mixed strategy $\sigma_i(\cdot)$, it will be useful to distinguish between the pure strategies that are chosen with positive probability and those that are not. We define:
3 The simplex of this two-element strategy set can be represented by a single number $\sigma \in [0, 1]$, where $\sigma$ is the probability that player $i$ plays H and $1 - \sigma$ is the probability that player $i$ plays T. This follows from the definition of a probability distribution over a two-element set. In general, the simplex of a strategy set with $m$ pure strategies lies in an $(m-1)$-dimensional space, where each of the $m - 1$ numbers is in $[0, 1]$ and represents the probability of one of the first $m - 1$ pure strategies; they sum to a number less than or equal to one, so that the remainder is the probability of the $m$th pure strategy.


Definition 24 Given a mixed strategy $\sigma_i(\cdot)$ for player $i$, we will say that a pure strategy $s_i \in S_i$ is in the support of $\sigma_i(\cdot)$ if and only if it occurs with positive probability, i.e., if $\sigma_i(s_i) > 0$.
For example, in the game of Rock-Paper-Scissors a player can choose rock or paper, each with equal probability, and not choose scissors. In this case $\sigma_i(R) = \sigma_i(P) = 0.5$ and $\sigma_i(S) = 0$. We will then say that R and P are in the support of $\sigma_i(\cdot)$, but S is not.
12.1.2 Infinite Strategy Sets
As we have seen with the Cournot and Bertrand duopoly examples, strategy sets need not be finite. In the case where the strategy sets are well-defined intervals, a mixed strategy is given by a cumulative distribution function:
Definition 25 Let $S_i$ be player $i$'s pure strategy set and assume that $S_i$ is an interval. A mixed strategy for player $i$ is a cumulative distribution function $F_i : S_i \to [0, 1]$, where $F_i(x) = \Pr\{s_i \leq x\}$. If $F_i(\cdot)$ is differentiable with density $f_i(\cdot)$, then we say that $s_i \in S_i$ is in the support of $F_i(\cdot)$ if $f_i(s_i) > 0$.
Example 10 Cournot Duopoly: Consider the Cournot duopoly game with a capacity constraint of 100 units of production, so that $S_i = [0, 100]$ for $i \in \{1, 2\}$. Consider the mixed strategy in which player $i$ chooses a quantity between 30 and 50 with a uniform distribution. That is,
$$F_i(s_i) = \begin{cases} 0 & \text{for } s_i < 30 \\ \frac{s_i - 30}{20} & \text{for } s_i \in [30, 50] \\ 1 & \text{for } s_i > 50 \end{cases} \qquad \text{and} \qquad f_i(s_i) = \begin{cases} 0 & \text{for } s_i < 30 \\ \frac{1}{20} & \text{for } s_i \in [30, 50] \\ 0 & \text{for } s_i > 50. \end{cases}$$

We will mostly focus on games with finite strategy spaces to illustrate most of the examples with mixed strategies, but some interesting examples will have infinite strategy sets and will require the use of cumulative distributions and densities to explore behavior in mixed strategies.
12.1.3 Beliefs and Mixed Strategies
As we discussed earlier, introducing probability distributions not only enriches the
set of actions that a player can choose from, but also allows us to enrich the beliefs


that players have. Consider, for example, player $i$ who plays against opponents $-i$. It may be that player $i$ is uncertain about the behavior of his opponents for many reasons. For example, he may believe that his opponents are indeed choosing mixed strategies, which immediately implies that their behavior is not fixed but rather random. An alternative interpretation is the situation in which player $i$ is playing a game against an opponent whom he does not know, and whose background will determine how she will play. This interpretation will be revisited later in chapter 25, and it is a very appealing justification for beliefs that are random and behavior that is consistent with these beliefs.
To introduce beliefs about mixed strategies formally, we define:
Definition 26 A belief for player $i$ is given by a probability distribution $\pi_i \in \Delta S_{-i}$ over the strategies of his opponents. We denote by $\pi_i(s_{-i})$ the probability player $i$ assigns to his opponents playing $s_{-i} \in S_{-i}$.
Thus, a belief for player $i$ is a probability distribution over the strategies of his opponents. Notice that the belief of player $i$ lies in the same set that represents the profiles of mixed strategies of player $i$'s opponents. For example, in the Rock-Paper-Scissors game we can represent the beliefs of player 1 as a triplet $(\pi_1(R), \pi_1(P), \pi_1(S))$, where by definition $\pi_1(R), \pi_1(P), \pi_1(S) \geq 0$ and $\pi_1(R) + \pi_1(P) + \pi_1(S) = 1$. The interpretation of $\pi_1(s_2)$ is the probability that player 1 assigns to player 2 playing some particular $s_2 \in S_2$. Recall that the strategy of player 2 is a triplet $\sigma_2(R), \sigma_2(P), \sigma_2(S) \geq 0$ with $\sigma_2(R) + \sigma_2(P) + \sigma_2(S) = 1$, so we can clearly see the analogy between $\pi$ and $\sigma$.

12.1.4 Expected Utility


Consider the matching pennies game described above, and assume for the moment
that player 2 chooses the mixed strategy C 2 (A) = 13 and C 2 (B ) = 23 . If player 1
plays A then he will win and get 1 with probability 13 while he will lose and get
#1 with probability 23 . If, however, he plays B then he will win and get 1 with
probability 23 while he will lose and get #1 with probability 13 . Thus, by choosing
di!erent actions, player 1 will face di!erent lotteries as described in chapter 3.


To evaluate these lotteries we will resort to the notion of expected utility over lotteries, as presented in section 3.2. Thus, we define the expected utility of a player from playing mixed strategies as follows:
Definition 27 The expected payoff of player $i$ when he chooses $s_i \in S_i$ and his opponents play the mixed strategy $\sigma_{-i} \in \Delta S_{-i}$ is
$$u_i(s_i, \sigma_{-i}) = \sum_{s_{-i} \in S_{-i}} \sigma_{-i}(s_{-i})\, u_i(s_i, s_{-i}).$$
Similarly, the expected payoff of player $i$ when he chooses $\sigma_i \in \Delta S_i$ and his opponents play the mixed strategy $\sigma_{-i} \in \Delta S_{-i}$ is
$$u_i(\sigma_i, \sigma_{-i}) = \sum_{s_i \in S_i} \sigma_i(s_i)\, u_i(s_i, \sigma_{-i}) = \sum_{s_i \in S_i} \sum_{s_{-i} \in S_{-i}} \sigma_i(s_i)\, \sigma_{-i}(s_{-i})\, u_i(s_i, s_{-i}).$$
The idea is a straightforward adaptation of definition 5 in section 3.2. The randomness that player $i$ faces if he chooses some $s_i \in S_i$ is created by the random selection of $s_{-i} \in S_{-i}$ that is described by the probability distribution $\sigma_{-i}(\cdot)$. Clearly, the definition we just presented is well defined only for finite strategy sets $S_i$. The analog for interval strategy sets is a straightforward adaptation of the second part of definition 5.⁴
Example 11 Rock-Paper-Scissors: Recall the Rock-Paper-Scissors example above,

             R           P           S
   R        0, 0       -1, 1       1, -1
   P        1, -1       0, 0      -1, 1
   S       -1, 1        1, -1      0, 0

4 Consider a game where each player has a strategy set given by the interval $S_i = [\underline{s}_i, \bar{s}_i]$. If player 1 is playing $s_1$ and his opponents, players $j = 2, 3, \dots, n$, are using the mixed strategies given by the density functions $f_j(\cdot)$, then the expected utility of player 1 is given by
$$\int_{\underline{s}_2}^{\bar{s}_2}\int_{\underline{s}_3}^{\bar{s}_3}\cdots\int_{\underline{s}_n}^{\bar{s}_n} u_1(s_1, s_{-1})\, f_2(s_2)\, f_3(s_3)\cdots f_n(s_n)\, ds_2\, ds_3 \cdots ds_n.$$

"34 "##',/ 0$"0 8)":/+ K 8)":#= C 2 (,) = C2 (7 ) = 12 ; C 2 (&) = 0% U/ 5"3 36B


5")5')"0/ 0$/ /R8/50/4 8":6! 16+ 8)":/+ H 1+6, "3: 61 $(# 8'+/ #0+"0/9(/#;
1
1
1
% 0 + % (#1) + 0 % 1 = #
2
2
2
1
1
1
)1 (7$ C 2 ) =
% 1 + % 0 + 0 % (#1) =
2
2
2
1
1
)1 (&$ C 2 ) =
% (#1) + % 1 + 0 % 0 = 0
2
2

)1 (,$ C 2 ) =

D0 (# /"#: 06 #// 0$"0 8)":/+ H $"# " '3(&'/ */#0 +/#863#/ 06 0$(# ,(R/4 #0+"0/9: 61
8)":/+ K= (1 $/ 8)":# 7 ; $/ B(3# 6+ 0(/# B(0$ /&'") 8+6*"*()(0:; B$()/ $(# 60$/+ 0B6
8'+/ #0+"0/9(/# "+/ B6+#/= B(0$ , $/ /(0$/+ )6#/# 6+ 0(/# "34 B(0$ & $/ /(0$/+ )6#/#
6+ B(3#C >)/"+):; (1 $(# */)(/1# "*6'0 0$/ #0+"0/9: 61 $(# 68863/30 "+/ 4(!/+/30 0$/3
8)":/+ H (# )(?/): 06 $"L/ " 4(!/+/30 */#0 +/#863#/C
It is useful to consider an example where the players have strategy sets that are
intervals.
Example 12 Bidding for a Dollar: Imagine the following game in which two players can bid for a dollar. Each can submit a bid that is any real number (so we are not restricted to penny increments), so that $S_i = [0, \infty)$, $i \in \{1, 2\}$. The person with the highest bid gets the dollar, but the twist is that both bidders have to pay their bids. (This is called an all-pay auction.) If there is a tie then both pay and the dollar is awarded to each player with an equal probability of 0.5. Thus, if player $i$ bids $s_i$ and player $j \neq i$ bids $s_j$, then player $i$'s payoff is
$$u_i(s_i, s_{-i}) = \begin{cases} -s_i & \text{if } s_i < s_j \\ \frac{1}{2} - s_i & \text{if } s_i = s_j \\ 1 - s_i & \text{if } s_i > s_j. \end{cases}$$
Now imagine that player 2 is playing a mixed strategy in which he is uniformly choosing a bid between 0 and 1. That is, player 2's mixed strategy $\sigma_2$ is a uniform distribution over the interval $[0, 1]$, which is represented by the cumulative distribution function and density
$$F_2(s_2) = \begin{cases} s_2 & \text{for } s_2 \in [0, 1] \\ 1 & \text{for } s_2 > 1 \end{cases} \qquad \text{and} \qquad f_2(s_2) = \begin{cases} 1 & \text{for } s_2 \in [0, 1] \\ 0 & \text{for } s_2 > 1. \end{cases}$$
The expected payoff of player 1 from offering a bid $s_1 > 1$ is $1 - s_1 < 0$, since he will win for sure, but this would not be wise. The expected payoff from bidding $s_1 < 1$ is⁵
$$Eu_1(s_1, \sigma_2) = \Pr\{s_1 < s_2\}(-s_1) + \Pr\{s_1 = s_2\}\left(\tfrac{1}{2} - s_1\right) + \Pr\{s_1 > s_2\}(1 - s_1)$$
$$= (1 - F_2(s_1))(-s_1) + 0 \cdot \left(\tfrac{1}{2} - s_1\right) + F_2(s_1)(1 - s_1) = 0.$$
Thus, when player 2 is using a uniform distribution between 0 and 1 for his bid, player 1 cannot get any positive expected payoff from any bid he offers: any bid less than one offers an expected payoff of 0, and any bid above 1 guarantees getting the dollar at an inflated price. This game is one to which we will return later, as it has several interesting features and twists.

12.2 Mixed Strategy Nash Equilibrium


Now that we are equipped with a richer space for both strategies and beliefs, we are ready to restate the definition of a Nash equilibrium for this more general setup:
Definition 28 The mixed strategy profile $\sigma^* = (\sigma_1^*, \sigma_2^*, \dots, \sigma_n^*)$ is a Nash equilibrium if for each player $i$, $\sigma_i^*$ is a best response to $\sigma_{-i}^*$. That is, for all $i \in N$,
$$u_i(\sigma_i^*, \sigma_{-i}^*) \geq u_i(\sigma_i, \sigma_{-i}^*) \quad \text{for all } \sigma_i \in \Delta S_i.$$
This definition is the natural generalization of Definition 22. We require that each player choose a strategy $\sigma_i^* \in \Delta S_i$ that is the best he can do when his opponents choose the profile $\sigma_{-i}^* \in \Delta S_{-i}$.
As we discussed previously, there is another interesting interpretation of the definition of Nash equilibrium. We can think of $\sigma_{-i}^*$ as the belief of player $i$ about his opponents, $\pi_i$, which captures the idea that player $i$ is uncertain about his opponents' behavior. The profile of mixed strategies $\sigma_{-i}^*$ thus captures this uncertain belief
5 If player 2 is using a uniform distribution over $[0, 1]$ then $\Pr\{s_1 = s_2\} = 0$ for any $s_1 \in [0, 1]$.


over all of the pure strategies that player $i$'s opponents can play. Clearly, rationality requires that a player play a best response given his beliefs (which now extends the notion of rationalizability to allow for uncertain beliefs). A Nash equilibrium requires that these beliefs be correct.
Recall that we defined a pure strategy $s_i \in S_i$ to be in the support of $\sigma_i$ if $\sigma_i(s_i) > 0$, that is, if $s_i$ is played with positive probability (see Definition 24). Now imagine that in the Nash equilibrium profile $\sigma^*$, the support of $i$'s mixed strategy $\sigma_i^*$ contains more than one pure strategy, say $s_i$ and $s_i'$.
What must we conclude about a rational player $i$ if $\sigma_i^*$ is indeed part of a Nash equilibrium $(\sigma_i^*, \sigma_{-i}^*)$? By definition, $\sigma_i^*$ is a best response against $\sigma_{-i}^*$, which means that given $\sigma_{-i}^*$, player $i$ cannot do better than to randomize between more than one of his pure strategies, in this case $s_i$ and $s_i'$. But when would a player be willing to randomize between two alternative pure strategies? The answer is predictable:

Proposition 9 D1 C ! (# " !"#$ /&'()(*+(',; "34 *60$ #" "34 #" "+/ (3 0$/ #'886+0
61 C !" $ 0$/3
)" (#" $ C !"" ) = )" (#0" $ C !"" ) = )" (C !" $ C !"" ) %
The proof is quite straightforward and follows from the observation that if a player is randomizing between two alternatives, then he must be indifferent between them. If this were not the case, say $u_i(s_i, \sigma_{-i}^*) > u_i(s_i', \sigma_{-i}^*)$ with both $s_i$ and $s_i'$ in the support of $\sigma_i^*$, then by reducing the probability of playing $s_i'$ from $\sigma_i^*(s_i')$ to zero, and increasing the probability of playing $s_i$ from $\sigma_i^*(s_i)$ to $\sigma_i^*(s_i) + \sigma_i^*(s_i')$, player $i$'s expected utility must go up, implying that $\sigma_i^*$ could not have been a best response to $\sigma_{-i}^*$.
This simple observation will play an important role in computing mixed strategy Nash equilibria. In particular, we know that if a player is playing a mixed strategy, he must be indifferent between the actions he chooses with positive probability, that is, the actions that are in the support of his mixed strategy. One player's indifference will impose restrictions on the behavior of other players, and these restrictions will help us find the mixed strategy Nash equilibrium.
For games with many players, or with two players who have many strategies, finding the set of mixed strategy Nash equilibria is a tedious task. It is often done


with the help of computer algorithms, since it generally takes the form of a linear programming problem. Still, it will be useful to see how one computes mixed strategy Nash equilibria for simpler games.
12.2.1 Example: Matching Pennies
Consider the Matching Pennies game,

              H         T
    H        1, -1    -1, 1
    T       -1, 1      1, -1

and recall that we showed that this game does not have a pure strategy Nash equilibrium. We now ask: does it have a mixed strategy Nash equilibrium? To answer this, we have to find mixed strategies for both players that are mutual best responses.
To simplify the notation, define mixed strategies for players 1 and 2 as follows: let $p$ be the probability that player 1 plays H and $1 - p$ the probability that he plays T. Similarly, let $q$ be the probability that player 2 plays H and $1 - q$ the probability that he plays T.
Using the formula for expected utility in this game, we can write player 1's expected utility from each of his two pure actions as follows:
$$u_1(H, q) = q \cdot 1 + (1 - q) \cdot (-1) = 2q - 1 \qquad (12.1)$$
$$u_1(T, q) = q \cdot (-1) + (1 - q) \cdot 1 = 1 - 2q$$
With these equalities in hand, we can calculate the best response of player 1 for any choice $q$ of player 2. In particular, playing H will be strictly better than playing T for player 1 if and only if $u_1(H, q) > u_1(T, q)$, and using (12.1) above this will be true if and only if
$$2q - 1 > 1 - 2q,$$
which is equivalent to $q > \frac{1}{2}$. Similarly, playing T will be strictly better than playing H for player 1 if and only if $q < \frac{1}{2}$. Finally, when $q = \frac{1}{2}$ player 1 is indifferent between playing H and T. This simple analysis gives us the best response


correspondence of player 1, which is:
$$BR_1(q) = \begin{cases} p = 0 & \text{if } q < \frac{1}{2} \\ p \in [0, 1] & \text{if } q = \frac{1}{2} \\ p = 1 & \text{if } q > \frac{1}{2}. \end{cases}$$
It may be insightful to graph the expected utility of player 1 from choosing either H or T as a function of $q$, the choice of player 2, as shown in the figure below.

Expected utility in the Matching Pennies game


The expected utility of player 1 from playing H is given by the function $u_1(H, q) = 2q - 1$, as described in (12.1) above. This is the rising linear function in the figure. Similarly, $u_1(T, q) = 1 - 2q$ is the declining function. Now it is easy to see where the best response of player 1 comes from. The upper envelope of the graph shows the highest utility that player 1 can achieve when player 2 plays $q$. When $q < \frac{1}{2}$ this is achieved by playing T, when $q > \frac{1}{2}$ it is achieved by playing H, and when $q = \frac{1}{2}$ both H and T are equally good for player 1.
In a similar way we can calculate the utilities of player 2 given a mixed strategy $p$ of player 1:
$$u_2(p, H) = p \cdot (-1) + (1 - p) \cdot 1 = 1 - 2p$$
$$u_2(p, T) = p \cdot 1 + (1 - p) \cdot (-1) = 2p - 1,$$


and this implies that 2s best response is,

!
"
if 9 6
# 8=1
4,2 (9) =
8 ! [0$ 1] if 9 =
"
$
8=0
if 9 :

1
2
1
2
1
2

We know from proposition 9 above that when player 1 is mixing between A and
B, both with positive probability, it must be the case that his payoffs from A
and from B are identical. This, it turns out, imposes a restriction on the behavior
of player 2, given by his choice of q. Namely, player 1 is willing to mix between A
and B if and only if u1(A, q) = u1(B, q), which holds if and only if q = 1/2. This
is the way in which the indifference of player 1 imposes a restriction on player 2:
only when player 2 plays q = 1/2 will player 1 be willing to mix between his
actions A and B. Similarly, player 2 is willing to mix between A and B only when
u2(p, A) = u2(p, B), which is true only when p = 1/2. At this stage we have reached
the end of our quest for a Nash equilibrium in this game: there is indeed a pair of
mixed strategies that form a Nash equilibrium, and these
are precisely when (p, q) = (1/2, 1/2).
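As a quick check of this conclusion, the indifference conditions can be verified numerically. The following sketch assumes numpy (a convenience of mine, not something the text relies on): when each player mixes 50-50, every pure strategy of the opponent earns the same expected payoff, so no deviation, pure or mixed, is profitable.

    # Verify that (p, q) = (1/2, 1/2) is a mixed strategy Nash equilibrium
    # of Matching Pennies.
    import numpy as np

    U1 = np.array([[1, -1],
                   [-1, 1]])     # player 1's payoffs, rows/columns ordered (A, B)
    U2 = -U1                     # zero-sum game: player 2's payoffs

    p = np.array([0.5, 0.5])     # player 1's mix over (A, B)
    q = np.array([0.5, 0.5])     # player 2's mix over (A, B)

    print(U1 @ q)   # [0. 0.]: player 1 is indifferent between A and B
    print(p @ U2)   # [0. 0.]: player 2 is indifferent between A and B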
We return now to observe that the best response correspondence of each player
depends on the other player's strategy, which in this case is a probability between
0 and 1, namely the opponent's mixed strategy (his probability of playing A). Thus
we can graph the best response correspondence of each player in a similar way to
what we did for the Cournot duopoly game, since each strategy belongs to a
well-defined interval, [0, 1]. For the Matching Pennies example, player 2's best response
q(p) can be graphed as in the figure below (q(p) is on the y-axis, as a function of p
on the x-axis). Similarly, we can graph player 1's best response, p(q), and these two
correspondences intersect at exactly one point: (p, q) = (1/2, 1/2). By definition, when
these two correspondences meet, the point of intersection is a Nash equilibrium.


Best Response correspondences in the Matching Pennies game


There is a simple logic behind the general way of finding mixed strategy equilibria
in games, and we can derive it from the Matching Pennies example. The logic relies
on a fact we have already discussed: if a player is mixing between several strategies,
then he must be indifferent between them. What a particular player i is willing to
do depends on the strategies of his opponents. Therefore, to find out when player i
is willing to mix between some of his pure strategies, we must find strategies of his
opponents, s−i, that make him indifferent between some of his pure actions.
For the Matching Pennies game this can be easily illustrated as follows. First, we
ask which strategy of player 2 will make player 1 indifferent between playing A and
B. The answer to this question (assuming it is unique) must be player 2's strategy
in equilibrium. The reason is simple: if player 1 is to mix in equilibrium, then
player 2 must be playing a strategy for which player 1's best response is mixing,
and this is the strategy that makes player 1 indifferent between playing A
and B. Similarly, we ask which strategy of player 1 will make player 2 indifferent
between playing A and B, and this must be player 1's equilibrium strategy.
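This procedure is mechanical enough to hand off to a computer algebra system. The sketch below uses sympy (my choice of tool, not the text's) to solve the two indifference conditions of Matching Pennies and recover q = 1/2 and p = 1/2.

    # Solve the Matching Pennies indifference conditions symbolically.
    import sympy as sp

    p, q = sp.symbols("p q")

    u1_A = q * 1 + (1 - q) * (-1)     # player 1's payoff from A against q
    u1_B = q * (-1) + (1 - q) * 1     # player 1's payoff from B against q
    u2_A = p * (-1) + (1 - p) * 1     # player 2's payoff from A against p
    u2_B = p * 1 + (1 - p) * (-1)     # player 2's payoff from B against p

    q_star = sp.solve(sp.Eq(u1_A, u1_B), q)   # [1/2]: makes player 1 indifferent
    p_star = sp.solve(sp.Eq(u2_A, u2_B), p)   # [1/2]: makes player 2 indifferent
    print(p_star, q_star)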
12.2.2 Example: Rock-Paper-Scissors
When we have games with more than two strategies for each player, coming up
with quick ways to solve for mixed strategy equilibria is not as straightforward as with
2 × 2 games, and it will usually involve more tedious algebra that solves several


equations with several unknowns. If we take the game of Rock-Paper-Scissors, for
example, with the payoff matrix

              R        P        S
        R    0,0     -1,1      1,-1
        P    1,-1     0,0     -1,1
        S   -1,1      1,-1     0,0

then there are many mixing combinations for each player, and we cannot
simply check things the way we did for the Matching Pennies game.
To find the Nash equilibrium of the Rock-Paper-Scissors game we proceed in
three steps. First we show that there is no Nash equilibrium in which at least one
player plays a pure strategy. Then we show that there is no Nash equilibrium in
which at least one player mixes between only two pure strategies. These steps
imply that in any Nash equilibrium both players must be mixing over all three
pure strategies, and this will lead us to the solution.
We start by showing that there can be no Nash equilibrium in which one player
plays a pure strategy and the other mixes. Suppose player i plays a pure strategy.
It is easy to see from looking at the payoff matrix that player j always receives
different payoffs from each of her pure strategies whenever i plays a pure strategy.
Therefore player j cannot be indifferent between any two or three of her pure
strategies, so she can play only one pure strategy with positive probability in
equilibrium; that is, j cannot be playing a mixed strategy if i plays a pure strategy.
Because the game has no pure strategy Nash equilibrium (in any pure strategy
profile at least one player can gain by switching to the action that beats his
opponent's), we conclude that there are no Nash equilibria in which either player
plays a pure strategy.
Now suppose that i mixes between R and P only. Then j always gets a strictly
higher payoff from playing P than from playing R, so no strategy that requires j to
play R with positive probability can be a best response for j, and j cannot play R
in any such equilibrium. But if j does not play R, then i gets a strictly higher payoff
from S than from P, so no strategy that requires i to play P with positive probability
can be a best response for i. We assumed, however, that i was mixing between R
and P, so we have reached a contradiction. We conclude that in equilibrium i cannot
mix between R and P only. Similar reasoning applies to i's other pairs of pure
strategies. We conclude that in this game no player can play a mixed strategy that
puts positive probability on only two pure strategies.
If by now you have guessed that the mixed strategies σ1* = σ2* = (1/3, 1/3, 1/3) form a
Nash equilibrium, then you are right. If player i plays σi* = (1/3, 1/3, 1/3), then j will
receive an expected payoff of 0 from every pure strategy, so j will be indifferent
between all of her pure strategies. Therefore BRj(σi*) includes all of j's mixed strategies and


σj* ∈ BRj(σi*). Similarly, σi* ∈ BRi(σj*). We conclude that σ1* and σ2* form a Nash
equilibrium. We will now prove that (σ1*, σ2*) is the unique Nash equilibrium.
Suppose player i plays R with probability σi(R) ∈ (0, 1), P with probability
σi(P) ∈ (0, 1), and S with probability 1 − σi(R) − σi(P). Since we proved that both
players have to mix over all three pure strategies, it follows that σi(R) + σi(P) < 1,
so that 1 − σi(R) − σi(P) ∈ (0, 1). It follows that player j receives the following
payoffs from her three pure strategies:

    uj(R) = −σi(P) + [1 − σi(R) − σi(P)] = 1 − σi(R) − 2σi(P)
    uj(P) = σi(R) − [1 − σi(R) − σi(P)] = 2σi(R) + σi(P) − 1
    uj(S) = −σi(R) + σi(P)

In any Nash equilibrium in which j plays all three of her pure strategies with
positive probability, she must receive the same expected payoff from each of them.
Therefore, in any such equilibrium we must have uj(R) = uj(P) = uj(S). Setting
these payoffs equal to each other and solving for σi(R) and σi(P) gives σi(R) =
σi(P) = 1 − σi(R) − σi(P) = 1/3. We conclude that j is willing to include all
three of her pure strategies in her mixed strategy only if i plays σi* = (1/3, 1/3, 1/3). Similarly,
i will be willing to play all his pure strategies with positive probability only if j
plays σj* = (1/3, 1/3, 1/3). Therefore, there is no other Nash equilibrium in which both
players play all their pure strategies with positive probability.
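The same indifference logic can be packaged as a small linear system. The sketch below (numpy again, an assumed tool rather than anything from the text) imposes uj(R) = uj(P) = uj(S) together with the requirement that the probabilities sum to one, and recovers σi* = (1/3, 1/3, 1/3).

    # Find the Rock-Paper-Scissors equilibrium mix by making the opponent
    # indifferent between R, P, and S.
    import numpy as np

    # Player j's payoffs: rows are j's pure strategies (R, P, S),
    # columns are i's pure strategies (R, P, S).
    Uj = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

    # Linear system A @ sigma_i = b: two indifference conditions plus sum-to-one.
    A = np.vstack([Uj[0] - Uj[1],    # u_j(R) - u_j(P) = 0
                   Uj[0] - Uj[2],    # u_j(R) - u_j(S) = 0
                   np.ones(3)])      # probabilities sum to 1
    b = np.array([0, 0, 1])

    print(np.linalg.solve(A, b))     # [0.333... 0.333... 0.333...]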
12.2.3 Multiple Equilibria: Pure and Mixed
In the Matching Pennies and Rock-Paper-Scissors games, the unique Nash equilibrium
was a mixed strategy Nash equilibrium. It turns out that mixed strategy
equilibria need not be unique when they exist. In fact, when a game has multiple
pure strategy Nash equilibria, it will often have other Nash equilibria in mixed
strategies. Consider the following game,

              C        R
        M    0,0      3,5
        D    4,4      0,3


It is easy to check that (M, R) and (D, C) are both pure strategy Nash equilibria.
It turns out that in cases like this, when there are two distinct pure strategy Nash
equilibria, there will generally be a third one in mixed strategies. For this game,
let player 1's mixed strategy be given by σ1 = (σ1(M), σ1(D)), and let player 2's
strategy be given by σ2 = (σ2(C), σ2(R)).
Player 1 will mix when u1(M, σ2) = u1(D, σ2), or

    σ2(C) · 0 + (1 − σ2(C)) · 3 = σ2(C) · 4 + (1 − σ2(C)) · 0,

which gives σ2(C) = 3/7, and player 2 will mix when u2(σ1, C) = u2(σ1, R), or

    σ1(M) · 0 + (1 − σ1(M)) · 4 = σ1(M) · 5 + (1 − σ1(M)) · 3,

which gives σ1(M) = 1/6.
This yields our third Nash equilibrium: (σ1*, σ2*) = ((1/6, 5/6), (3/7, 4/7)).
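A numerical check of this equilibrium, in the spirit of the earlier examples, is below (numpy assumed). Each player's two pure strategies earn the same expected payoff against the opponent's proposed mix, which is exactly the indifference we solved for; the two pure equilibria can be confirmed just as easily by checking that neither player gains from a unilateral deviation.

    # Verify the mixed equilibrium of the 2x2 game with rows (M, D), columns (C, R).
    import numpy as np

    U1 = np.array([[0, 3],        # player 1's payoffs
                   [4, 0]])
    U2 = np.array([[0, 5],        # player 2's payoffs
                   [4, 3]])

    sigma1 = np.array([1/6, 5/6])   # probabilities of M and D
    sigma2 = np.array([3/7, 4/7])   # probabilities of C and R

    print(U1 @ sigma2)   # both entries equal 12/7: player 1 indifferent between M and D
    print(sigma1 @ U2)   # both entries equal 10/3: player 2 indifferent between C and R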


It is interesting to see that all three equilibria would show up in a careful drawing
of the best response correspondences. Using the utility functions u1(M, σ2) and
u1(D, σ2) we have

    BR1(σ2) =   σ1(M) = 1          if σ2(C) < 3/7
                σ1(M) ∈ [0, 1]     if σ2(C) = 3/7
                σ1(M) = 0          if σ2(C) > 3/7,

and similarly, using the utility functions u2(σ1, C) and u2(σ1, R), we have

    BR2(σ1) =   σ2(C) = 1          if σ1(M) < 1/6
                σ2(C) ∈ [0, 1]     if σ1(M) = 1/6
                σ2(C) = 0          if σ1(M) > 1/6.

Letting q = σ2(C) and p = σ1(M), we can draw the two best response correspondences as they appear in figure 12.1.
Notice that all three Nash equilibria are revealed: (p, q) ∈ {(1, 0), (1/6, 3/7), (0, 1)}
are Nash equilibria, where (p, q) = (1, 0) corresponds to the pure strategy (M, R),
and (p, q) = (0, 1) corresponds to the pure strategy (D, C).
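A rough version of figure 12.1 can be drawn with a few line segments. The sketch below assumes matplotlib and is only illustrative; the three marked points are the three Nash equilibria.

    # Sketch of figure 12.1: best response correspondences with p on the x-axis
    # and q on the y-axis.
    import matplotlib.pyplot as plt

    # BR1: p = 1 if q < 3/7, p in [0, 1] if q = 3/7, p = 0 if q > 3/7
    plt.plot([1, 1], [0, 3/7], "b")
    plt.plot([0, 1], [3/7, 3/7], "b")
    plt.plot([0, 0], [3/7, 1], "b", label="BR of player 1")

    # BR2: q = 1 if p < 1/6, q in [0, 1] if p = 1/6, q = 0 if p > 1/6
    plt.plot([0, 1/6], [1, 1], "r--")
    plt.plot([1/6, 1/6], [0, 1], "r--")
    plt.plot([1/6, 1], [0, 0], "r--", label="BR of player 2")

    plt.plot([1, 1/6, 0], [0, 3/7, 1], "ko")   # the three Nash equilibria
    plt.xlabel("p"); plt.ylabel("q"); plt.legend(); plt.show()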
Remark 4 It may be interesting to know that generically (a form of "almost always")
there is an odd number of equilibria in games. Proving this is far from trivial,
and the interested reader can consult ...


FIGURE 12.1. Three Nash Equilibria.

12.3 IESDS in Mixed Strategies


As we have seen above, by introducing mixed strategies we offered two advancements:
first, players can have richer beliefs, and second, players can choose from a
richer set of actions. This second advancement is useful when we reconsider
the idea of IESDS, and in fact allows us to present it in its precise form using mixed
strategies. In particular, we can now offer the following definition:
Definition 29 Let σi ∈ ΔSi and s′i ∈ Si be possible strategies for player i. We say
that s′i is strictly dominated by σi if

    ui(σi, s−i) > ui(s′i, s−i) for all s−i ∈ S−i.
That is, to consider a strategy as strictly dominated, we no longer require that
some other pure strategy dominate it, but allow for mixtures to dominate it as well.
It turns out that this gives IESDS more bite. For example, consider the
following game,
              L        C        R
        U    5,1      1,4      1,0
        M    3,2      0,0      3,5
        D    4,3      4,4      0,3


and denote mixed strategies for players 1 and 2 as triplets, (σ1(U), σ1(M), σ1(D))
and (σ2(L), σ2(C), σ2(R)) respectively. It is easy to see that no pure strategy is
strictly dominated by another pure strategy for either player. However, if we allow
for mixed strategies, we find that for player 2 the strategy L is dominated
by a strategy that mixes between the pure strategies C and R. It is as if we are
expanding player 2's set of strategies to include infinitely many mixtures, one of
which in particular is the strategy in which player 2 mixes between C and R with
equal probability, as the following diagram suggests,
              L        C        R       player 2's payoff from the mix (0, 1/2, 1/2)
        U    5,1      1,4      1,0                        2
        M    3,2      0,0      3,5                        2.5
        D    4,3      4,4      0,3                        3.5

Hence, we can perform the following sequence of IESDS with mixed strategies:
1. For player 2, the mixture (0, 1/2, 1/2) over (L, C, R) strictly dominates L, so L is eliminated.
2. In the resulting game, for player 1 the mixture (0, 1/2, 1/2) over (U, M, D) strictly dominates U, so U is eliminated.
Thus, after two stages of IESDS we have reduced the game above to,
              C        R
        M    0,0      3,5
        D    4,4      0,3

A question you must be asking is how we found these dominated strategies.
Well, short of a computer program or brute force, it takes a good eye for the numbers.
Also notice that other mixed strategies would work as well: because the dominance
is strict, if we add a small ε > 0 to one of the probabilities and subtract it from
another, the expected utility from the new mixed strategy can be made arbitrarily
close to that of the original one, and hence it too would dominate the dominated
strategy.
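The two dominance claims used above are easy to confirm numerically. The sketch below (numpy assumed) computes the payoff of each candidate mixture against every opposing pure strategy and compares it with the strategy it is claimed to dominate.

    # Check the two IESDS steps in the 3x3 game with rows (U, M, D), columns (L, C, R).
    import numpy as np

    U1 = np.array([[5, 1, 1],       # player 1's payoffs
                   [3, 0, 3],
                   [4, 4, 0]])
    U2 = np.array([[1, 4, 0],       # player 2's payoffs
                   [2, 0, 5],
                   [3, 4, 3]])

    # Step 1: player 2's mix (0, 1/2, 1/2) over (L, C, R) versus the pure strategy L.
    mix2 = np.array([0, 0.5, 0.5])
    print(U2 @ mix2, "vs", U2[:, 0])         # [2.  2.5 3.5] > [1 2 3] entry by entry

    # Step 2: with L gone, player 1's mix (0, 1/2, 1/2) over (U, M, D) versus U.
    mix1 = np.array([0, 0.5, 0.5])
    print(mix1 @ U1[:, 1:], "vs", U1[0, 1:])  # [2.  1.5] > [1 1] entry by entry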

12.4 IESDS and Rationalizability Revisited


*** TO BE COMPLETED ***
...{never a best reply} = {strictly dominated action}

142

12. Mixed Strategies

FIGURE 12.2.

12.5 Existence: Nash's Theorem


*** TO BE COMPLETED ***
What can we say about the existence of Nash equilibrium? Unlike IESDS, for example,
it is not guaranteed: the Bertrand game with different marginal costs did not give rise
to a well-defined best response, which in turn meant that no Nash equilibrium existed.
In his seminal dissertation, Nash offered conditions under which this pathology does
not arise, and as the Bertrand example suggests, they have to do with the continuity
of the payoff functions. We now state Nash's theorem:
Theorem 10 (Nash's Theorem) If an n-player game has finite strategy spaces Si
and continuous utility functions ui : S → R for every player i ∈ N, then there exists a
Nash equilibrium in mixed strategies.
Proving this theorem is well beyond the scope of this text, but it is illuminating to
provide some intuition. The idea of Nash's proof builds on a fixed-point theorem.
Consider a function f : [0, 1] → [0, 1]. Brouwer's fixed-point theorem states that
if f is continuous, then there exists some x* ∈ [0, 1] that satisfies f(x*) = x*. The
intuition for this theorem can be captured by the graph in figure X.X.
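In one dimension the statement can be made concrete: for a continuous f mapping [0, 1] into itself, the function g(x) = f(x) − x satisfies g(0) ≥ 0 and g(1) ≤ 0, so a fixed point exists by the intermediate value theorem and can be located by bisection. The sketch below is my own illustration of this fact (the choice of cos as the example f is arbitrary) and is not part of Nash's argument.

    # Locate a fixed point of a continuous f : [0, 1] -> [0, 1] by bisection
    # on g(x) = f(x) - x.
    import math

    def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
        g = lambda x: f(x) - x
        # Invariant: g(lo) >= 0 and g(hi) <= 0, which holds initially because
        # f maps [0, 1] into [0, 1].
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if g(mid) >= 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(fixed_point(math.cos))   # about 0.7391, the solution of cos(x) = x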
How does this relate to game theory? Nash showed that if the utility functions
are continuous, then something like continuity is satisfied by the best response
correspondence of each player. That is, if a sequence of strategy profiles of
player i's opponents converges to some limit profile, then the corresponding sequence
of player i's best responses has a subsequence that converges to an element of
player i's best response to that limit profile.


We can then proceed by considering the collection of best response correspondences,
BR ≡ BR1 × BR2 × ··· × BRn, which operates from ΔS to itself. That
is, BR : ΔS ⇒ ΔS takes every element σ′ ∈ ΔS and converts it into a subset
BR(σ′) ⊆ ΔS. Since each ΔSi is a simplex, it is compact (closed and bounded,
just like [0, 1]), and so is the product ΔS.
Nash then applied a more powerful extension of Brouwer's theorem, called Kakutani's
fixed-point theorem, which says that under these conditions there exists some
σ* such that σ* ∈ BR(σ*); that is, BR : ΔS ⇒ ΔS has a fixed point. This means
precisely that there is some σ* that is contained in the set obtained when the collection of
best response correspondences operates on σ* itself. Such a fixed point
means that for every i, σi* is a best response to σ−i*, which is the definition of a
Nash equilibrium.

12.6 Summary
...

12.7 Exercises
1. Use the best response correspondences in the Battle of the Sexes game to
find all the Nash equilibria. (Follow the approach used for the example in
section 12.2.3.)
2. An all-pay auction: In example 12 we introduced a version of an all-pay
auction that worked as follows: each bidder first submits a bid. The highest
bidder gets the good, but all bidders pay their bids. As in the example,
consider such an auction for a good worth $1 to each of the two bidders. Each
bidder can choose to offer a bid from the unit interval, so that Si = [0, 1].
Players care only about the expected value they will end up with at the end
of the game (i.e., if a player bids $0.4 and expects to win with probability
0.7, then his payoff is 0.7 · 1 − 0.4).
(a) Model this auction as a normal-form game.


(b) Show that this game has no pure strategy Nash equilibrium.
(c) Show that this game cannot have a Nash equilibrium in which each
player is randomizing over a finite number of bids.
(d) Consider mixed strategies of the following form: each player i chooses
an interval [ai, bi] with 0 ≤ ai < bi ≤ 1, together with a cumulative
distribution Fi(x) over the interval [ai, bi]. (Alternatively, you can think
of each player as choosing Fi(x) over the interval [0, 1], with two values ai
and bi such that Fi(ai) = 0 and Fi(bi) = 1.)
i. Show that if two such strategies are a mixed strategy Nash equilibrium, then it must be that a1 = a2 and b1 = b2.
ii. Show that if two such strategies are a mixed strategy Nash equilibrium, then it must be that a1 = a2 = 0.
iii. Using the above, argue that if two such strategies are a mixed strategy Nash equilibrium, then both players must be getting an expected
utility of zero.
iv. Show that if two such strategies are a mixed strategy Nash equilibrium, then it must be that b1 = b2 = 1.
v. Show that Fi(x) being uniform over [0, 1] is a symmetric Nash equilibrium of this game.

12.8 References
*** TO BE COMPLETED ***
Nash, John (1950). "Equilibrium Points in n-Person Games." Proceedings of the National Academy of Sciences 36(1): 48-49.
von Neumann, John, and Oskar Morgenstern (1944). Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
