11
Pinning Down Beliefs: Nash Equilibrium
The path we have taken so far has introduced us to three solution concepts for predicting the behavior of rational players. The first, strict dominance, relied only on rationality and was very appealing. It also predicted a unique outcome for the Prisoner's Dilemma (as it would in any game for which it existed). However, it often fails to exist. The two sister concepts of IESDS and rationalizability relied on more than rationality: they asked for common knowledge of rationality. In return, we got existence for every game, and in some games we got uniqueness. In particular, whenever there was a strict dominant equilibrium, it uniquely survived IESDS and rationalizability. Then, for other games in which strict dominance did not apply, like the Cournot duopoly, we still got uniqueness from IESDS and rationalizability.
However, when we consider a game like the Battle of the Sexes, none of the concepts introduced above has any bite: dominant strategy equilibrium does not apply, and neither IESDS nor rationalizability can restrict the set of reasonable behavior.
                   Pat
                O        F
    Chris   O  2, 1     0, 0
            F  0, 0     1, 2

(O stands for going to the opera and F for going to the football game; the first payoff in each cell is Chris's, the second Pat's.)
For example, we cannot rule out the possibility that Pat goes to the opera while Chris goes to the football game, since Pat may behave optimally given his belief that Chris is going to the opera, and Chris may behave optimally given his belief that Pat is going to the football game. But if we think of this pair of actions not only as actions, but as a system of actions and beliefs, then there is something of a dissonance: indeed, the players are playing best responses to their beliefs, but their beliefs are wrong!
At the risk of being repetitive, let's emphasize what the requirements of a Nash equilibrium are:

1. Each player is playing a best response to his beliefs.

2. The beliefs of the players about their opponents are correct.
The first requirement is a direct consequence of rationality. It is the second requirement that is very demanding, and it is a tremendous leap beyond the structures we have considered so far. It is one thing to ask players to behave rationally given their beliefs (play a best response), but a totally different thing to ask players to predict the behavior of their opponents correctly.
Then again, it may be possible to accept such a strong requirement if we allow for some reasoning that is beyond the physical structure of the game. For example, imagine that Pat is an influential person: people just seem to follow Pat, and this is something that Pat knows well. In this case Chris should believe, knowing that Pat is so influential, that Pat would expect Chris to go to the opera, and Pat's beliefs, knowing this, should be that Chris will indeed believe that Pat is going to the opera, and so Chris will go to the opera as well. Indeed, (O, O) is a Nash equilibrium. However, notice that we can make the symmetric argument about Chris being an influential person: (F, F) is also a Nash equilibrium. As the external game theorist, however, we cannot say more than that one of these two outcomes is what we predict. (You should be able to convince yourself that no other pair of pure strategies is a Nash equilibrium.)
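This claim can be checked mechanically: a profile of pure strategies is a Nash equilibrium exactly when neither player can gain from a unilateral deviation. A minimal sketch in Python (the payoff numbers follow the matrix above; the helper names are our own):

```python
# Battle of the Sexes: payoffs[(row, col)] = (row player's payoff, column player's payoff).
payoffs = {
    ("O", "O"): (2, 1), ("O", "F"): (0, 0),
    ("F", "O"): (0, 0), ("F", "F"): (1, 2),
}
strategies = ["O", "F"]

def is_nash(r, c):
    """True if neither player gains from a unilateral deviation at (r, c)."""
    u_r, u_c = payoffs[(r, c)]
    no_row_dev = all(payoffs[(r2, c)][0] <= u_r for r2 in strategies)
    no_col_dev = all(payoffs[(r, c2)][1] <= u_c for c2 in strategies)
    return no_row_dev and no_col_dev

pure_nash = [(r, c) for r in strategies for c in strategies if is_nash(r, c)]
print(pure_nash)  # [('O', 'O'), ('F', 'F')]
```

The mismatched profiles (O, F) and (F, O) fail because at least one player would rather switch and coordinate.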
Remark 3 The argument is not that Chris likes to please Pat (such an argument would change the payoffs of the game). It is only about beliefs that are "self-fulfilling."
What about the other games we saw? In the Prisoner's Dilemma, the unique Nash equilibrium is (F, F), in which both players fink. In the Cournot duopoly game, the unique Nash equilibrium is (33⅓, 33⅓), as we will see formally soon. Recall the following two-player discrete game we used to demonstrate IESDS:
             L        C        R
       U    4, 3     5, 1     6, 2
       M    2, 1     8, 4     3, 6
       D    3, 0     9, 6     2, 8

(Player 1 chooses rows U, M, D; player 2 chooses columns L, C, R.)
In it, the only pair of pure strategies that constitutes a Nash equilibrium is (U, L), the same pair that survived IESDS.
The relationship between the outcomes we obtained earlier and the Nash equilibrium outcomes is no coincidence. There is a simple relationship between the
concepts we previously developed and that of Nash equilibrium as the following
proposition states clearly:
Proposition 7 Consider a strategy profile s* = (s₁*, s₂*, ..., sₙ*). If s* is either

(1) a strict dominant strategy equilibrium,

(2) the unique survivor of IESDS, or

(3) the unique rationalizable strategy profile,

then s* is the unique Nash equilibrium.
This proposition is simple to prove and is left as an exercise. The intuition is of course quite straightforward: we know that if there is a strict dominant strategy equilibrium, then it uniquely survives IESDS and rationalizability, and this in turn must mean that each player is playing a best response to the other players' strategies.
cost of that extra animal, namely the degrading of the quality of the pasture, is shared by all the other herders. As a consequence, the individual incentive of each herder is to grow his herd, and in the end this causes tremendous losses to everyone. To those trained in economics, it is yet another example of the distortion that comes from the free-rider problem. It should also remind you of the Prisoner's Dilemma, where individuals driven by selfish incentives cause pain to the group.
In the course of his essay, Hardin develops the theme, drawing in examples of latter-day "commons," such as the atmosphere, oceans, rivers, fish stocks, national parks, advertising and even parking meters. A major theme running throughout the essay is the growth of human populations, with the Earth's resources being a general commons (given that it concerns the addition of extra "animals," it is the closest to his original analogy).
Let's put some game-theoretic analysis behind this story. Imagine that there are n players, each choosing how much to consume from a common resource. Each player i chooses his own consumption x_i ≥ 0. Consuming an amount x_i ≥ 0 gives player i a benefit equal to x_i, and no other player benefits from i's choice. The cost of depleting the resource is a function of total consumption, and is equal to

    c(Σ_{i=1}^n x_i) = (Σ_{i=1}^n x_i)².

This cost is borne equally by all the players, so combining a player's benefit and cost from consumption yields the following utility function for each player:

    u_i(x_i, x_{−i}) = x_i − (1/n)·(Σ_{j=1}^n x_j)².
0 #=1
To solve for a Nash equilibrium we can compute the best response correspondences for each player, and then !nd a strategy pro!le for which all the best
response functions are satis!ed together. This is an important point that warrants
further emphasis. We know that given 2"" , player ' will want to choose an element in 4," (2"" )% Hence, if we !nd some pro!le of choices (21! $ 22! $ %%%$ 2!! ) for which
!
2"! = 4," (2""
) for all ' ! (, then this must be a Nash equilibrium.
This means that if we derive all 0 best response correspondences, and it turns
out that they are functions (unique best responses), then we have a system of
n equations, one for each player's best-response function, with n unknowns, the choices of each player. Solving this system yields a Nash equilibrium. To get player i's best-response function (and we will verify that it is indeed a function) we write down the first-order condition of his utility function:
    ∂u_i(x_i, x_{−i})/∂x_i = 1 − (2/n)·Σ_{j=1}^n x_j = 0,
and this gives us player i's best-response function,

    BR_i(x_{−i}) = n/2 − Σ_{j≠i} x_j.
We have n such equations, one for each player, and if we substitute x_i in place of BR_i(x_{−i}), we get the n equations with n unknowns that need to be solved. Doing this yields¹

    x_i* = 1/2 for all i ∈ N.
Now we need to ask: is consuming 1/2 too much or too little? The right way to answer this is using the Pareto criterion: can we find another consumption profile that will make everyone better off? If we can, we can compare it with the Nash equilibrium to answer the question. To find such a profile we'll use a little trick: we will maximize the sum of all the utility functions, which we can think of as society's utility function. I won't go into the moral justification of this, but it will turn out to be a useful tool.² The function we are maximizing is, therefore,
    max_{x_1, x_2, ..., x_n}  Σ_{i=1}^n x_i − (Σ_{j=1}^n x_j)²,
¹ We know that these equations are all symmetric, and hence we look for a symmetric solution. Solving the equation x = n/2 − (n − 1)·x will yield the symmetric solution x = 1/2. To show that there are no asymmetric solutions requires a bit more work.
² In general, maximizing the sum of utility functions will result in a Pareto-optimal outcome, but it need not be the only one. In this example, this maximization gives the condition for all the Pareto-optimal consumption profiles because it turns out that all n first-order conditions are the same, owing to the structure of our problem. This is not something we will dwell on much at all. As mentioned above, this is just a useful tool to check whether something may be Pareto dominated.
which means that from a social perspective, the solution must satisfy

    Σ_{i=1}^n x_i = 1/2.
Interestingly, society doesn't care who gets how much as long as total consumption equals 1/2. Hence, we can look at the symmetric solution in which each player consumes x̂_i = 1/(2n), and compare this with the Nash equilibrium solution. To do this we will subtract the utility of player i at the Nash solution from his utility at the social-optimum solution; if this difference is positive, then we know that the social optimum is better for everyone. We have,

    u_i(x̂_i, x̂_{−i}) − u_i(x_i*, x_{−i}*) = [1/(2n) − (1/n)·(1/2)²] − [1/2 − (1/n)·(n/2)²]
                                           = (n − 1)²/(4n) > 0 for all n > 1.
As we suspected, if all the players could commit to consume the amount x̂_i = 1/(2n), then each would have a higher utility than in the Nash equilibrium, in which they consume more than is socially desirable. Thus, as Hardin puts it, giving people the freedom to make choices may make them all worse off than if that freedom were somehow regulated. Of course, the counterargument is whether we can trust a regulator to keep things under control, and if not, the question remains which is the better of the two evils, an answer that I will not offer here.
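The gain from commitment has the closed form (n − 1)²/(4n): it is zero for a single player, strictly positive for n > 1, and grows with n, so the tragedy worsens as the commons gets more crowded. A quick check of the closed form against the raw utility function (the function name is ours):

```python
def utility(x_i, total, n):
    """u_i = x_i - (1/n) * total^2, where total is aggregate consumption."""
    return x_i - (total ** 2) / n

for n in range(2, 50):
    u_nash = utility(0.5, n * 0.5, n)         # everyone consumes 1/2
    u_social = utility(1 / (2 * n), 0.5, n)   # everyone consumes 1/(2n)
    gain = u_social - u_nash
    # The gain matches (n - 1)^2 / (4n) and is strictly positive.
    assert abs(gain - (n - 1) ** 2 / (4 * n)) < 1e-9
    assert gain > 0
print("commitment gain = (n - 1)^2 / (4n) for n = 2, ..., 49")
```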
11.2.2 Cournot Duopoly
Let's revisit the Cournot game with demand p = 100 − q and cost functions c_i(q_i) = c_i·q_i for firms i ∈ {1, 2}. The maximization problem that firm i faces when it believes that its opponent chooses quantity q_j is

    max_{q_i} u_i(q_i, q_j) = (100 − q_i − q_j)·q_i − c_i·q_i.
Recall that the best response of each firm is given by the first-order condition,

    BR_i(q_j) = (100 − q_j − c_i)/2.

A Nash equilibrium is a pair of quantities that are mutual best responses,

    q_1 = (100 − q_2 − c_1)/2   and   q_2 = (100 − q_1 − c_2)/2.        (11.1)
Notice that the Nash equilibrium coincides with the unique strategies that survive IESDS and that are rationalizable, which is the conclusion of Proposition 7. An exercise that is left for you is to explore the Cournot model with many firms.
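For concreteness, the two equations in (11.1) can be solved explicitly: substituting one into the other gives q_i = (100 − 2c_i + c_j)/3, and with zero costs this yields q_1 = q_2 = 100/3 = 33⅓, the value quoted earlier. A small sketch (the function name is ours):

```python
def cournot_equilibrium(c1, c2):
    """Solve the system (11.1): q1 = (100 - q2 - c1)/2, q2 = (100 - q1 - c2)/2."""
    q1 = (100 - 2 * c1 + c2) / 3
    q2 = (100 - 2 * c2 + c1) / 3
    return q1, q2

q1, q2 = cournot_equilibrium(0, 0)
assert abs(q1 - 100 / 3) < 1e-12 and abs(q2 - 100 / 3) < 1e-12
# The closed form indeed satisfies both best-response equations.
assert abs(q1 - (100 - q2 - 0) / 2) < 1e-12
assert abs(q2 - (100 - q1 - 0) / 2) < 1e-12
print(q1, q2)  # 33.33... for each firm
```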
11.2.3 Bertrand Duopoly
The Cournot model assumed that the firms choose quantities, and the market price adjusts to clear the demand. However, one can argue that firms often set prices, and let consumers choose where to purchase from, rather than setting quantities and waiting for the market price to equilibrate demand. We now consider the
game where each firm posts a price for its otherwise identical good. This was the situation modelled and analyzed by Joseph Bertrand in 1883.
As before, assume that demand is given by p = 100 − q and that the cost function of each firm is c_i(q_i) = 0 for i ∈ {1, 2} (zero costs). Clearly, we would expect all buyers to buy from the firm whose price is the lowest. What happens if there is a tie? Let's assume that the market splits equally between the two firms. This gives us the following normal form of the game:

Players: N = {1, 2}

Strategy sets: S_i = [0, ∞) for i ∈ {1, 2}, and firms choose prices p_i ∈ S_i
Payoffs: To calculate payoffs, we need to know what the quantities will be for each firm. Given our assumption on ties, the quantities are given by

    q_i(p_i, p_j) = { 100 − p_i        if p_i < p_j
                    { 0                if p_i > p_j
                    { (100 − p_i)/2    if p_i = p_j
which in turn means that the payoff function is given by:

    u_i(p_i, p_j) = { (100 − p_i)·p_i        if p_i < p_j
                    { 0                      if p_i > p_j
                    { ((100 − p_i)/2)·p_i    if p_i = p_j
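The piecewise payoff translates directly into code; a minimal sketch (the function name is ours):

```python
def bertrand_payoff(p_i, p_j):
    """Firm i's profit with demand 100 - p, zero costs, and an equal split on ties."""
    if p_i < p_j:
        return (100 - p_i) * p_i      # undercut: serve the whole market
    if p_i > p_j:
        return 0.0                    # overpriced: no customers
    return (100 - p_i) / 2 * p_i      # tie: split the market

assert bertrand_payoff(10, 20) == 900.0   # (100 - 10) * 10
assert bertrand_payoff(20, 10) == 0.0
assert bertrand_payoff(10, 10) == 450.0   # half of 900
```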
Now that the description of the game is complete, we can calculate the best-response functions of both firms. To do this, we will first start with a slight modification that is motivated by reality: assume that prices cannot be any real number, but are limited to increments of some small number, say ε > 0. That is, prices are assumed to be in the set {0, ε, 2ε, 3ε, ...}. For example, ε = 0.01 if we are considering cents as the price increment,³ in which case the strategy set is {0, 0.01, 0.02, ...}. We will soon introduce smaller denominations, and will look at what happens when this increment becomes infinitely small and approaches zero.
³ Notice, for example, that gas stations often quote prices per gallon that include one-tenth of a cent.
We derive the best response of a firm by exhausting the relevant situations that it can face. Assume first that p_j is very high, above 50. Then firm i can set the monopoly (profit-maximizing) price of 50 and not face any competition, which is clearly what i would choose to do.⁴ Now assume that 50 > p_j > 0.01. Firm i can choose one of three options: set p_i > p_j and get nothing; set p_i = p_j and split the market; or set p_i < p_j and get the whole market. It is not too hard to check that of these three, firm i wants to just undercut firm j and capture the whole market, thus setting a price of p_i = p_j − 0.01.⁵ When p_j = 0.01 these three options are still there, but undercutting means setting p_i = 0, which is the same as setting p_i > p_j and getting nothing. Thus, the best reply is setting p_i = p_j = 0.01 and splitting the market. Finally, if p_j = 0 then any choice of price gives firm i zero profits, and therefore anything is a best response. In summary:
    BR_i(p_j) = { 50                                    if p_j > 50
                { p_j − 0.01                            if 50 ≥ p_j > 0.01
                { 0.01                                  if p_j = 0.01
                { any p_i ∈ {0, 0.01, 0.02, 0.03, ...}  if p_j = 0
Now, given that firm j's best response is exactly symmetric, it should not be hard to see that two Nash equilibria follow immediately from the form of the best-response functions: the best response to 0.01 is 0.01, and a best response to 0 is 0. Thus, the two Nash equilibria are

    (p_1, p_2) ∈ {(0, 0), (0.01, 0.01)}.
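With prices restricted to whole cents, these two equilibria can also be found by brute force. To avoid floating-point ties the sketch below works in integer cents and compares profits scaled by a common positive factor (scaling does not change which deviations are profitable); the grid is truncated at $1.00, which is enough to exhibit both equilibria. All helper names are ours:

```python
# Prices in integer cents; profits scaled by 2*10^4 so that ties stay integers:
# undercutting earns 2*c*(10000 - c), a tie earns c*(10000 - c), overpricing earns 0.
def profit2(c_i, c_j):
    if c_i < c_j:
        return 2 * c_i * (10000 - c_i)
    if c_i > c_j:
        return 0
    return c_i * (10000 - c_i)

grid = range(0, 101)  # 0 to 100 cents

def is_nash(c1, c2):
    u1, u2 = profit2(c1, c2), profit2(c2, c1)
    return (all(profit2(d, c2) <= u1 for d in grid)
            and all(profit2(d, c1) <= u2 for d in grid))

equilibria = [(c1, c2) for c1 in grid for c2 in grid if is_nash(c1, c2)]
print(equilibria)  # [(0, 0), (1, 1)] -- i.e., ($0.00, $0.00) and ($0.01, $0.01)
```

Any tie at two cents or more fails because undercutting by one cent captures the whole market at a nearly identical price, exactly as footnote 5 computes.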
It is worth pausing here for a moment to prevent a rather common point of confusion, which often arises when a player has more than one best response to a certain action of his opponents. In this example, when p_2 = 0, player 1 is indifferent
⁴ The monopoly price is the price that would maximize a single firm's profits if there were no competitors. It is obtained by maximizing u_i(p) = p·q = (100 − p)·p; the first-order condition is 100 − 2p = 0, resulting in an optimal price of 50. Hence, if a competitor sets a price above 50, the firm can act as if there were no competition.
⁵ To see this, note that for some p_j ≥ 0.01, by setting p_i = p_j firm i gets u_i = ((100 − p_j)/2)·p_j, while by setting p_i = p_j − 0.01 it gets u_i′ = (100 − (p_j − 0.01))·(p_j − 0.01). Calculating the difference between the two gives u_i′ − u_i = 50.02·p_j − p_j²/2 − 1.0001, which is positive at p_j = 0.02, and this difference has a positive derivative for any p_j ∈ [0.02, 50].
between any price he chooses: if he splits the market with p_1 = 0 he gets half the market with no profits, and if he sets p_1 > 0 he gets no customers and hence no profits. One may be tempted to jump to the following conclusion: if player 2 is choosing p_2 = 0, then any choice of p_1 together with player 2's zero price will be a Nash equilibrium. This is incorrect. It is true that player 1 is playing a best response with any one of his choices, but if p_1 > 0 then p_2 = 0 is not a best response, as we can observe from the analysis above. Thus, having one firm choose a price of zero while the other does not cannot be a Nash equilibrium.
Comparing the Bertrand game outcome to the Cournot game outcome is an interesting exercise. Notice that when firms choose quantities (Cournot), the unique Nash equilibrium when costs were zero had q_1 = q_2 = 33⅓. A quick calculation shows that for the aggregate quantity q = q_1 + q_2 = 66⅔ we get a demand price of p = 33⅓, and each firm makes a profit of $1,111.11. When instead these firms compete on prices, the two possible equilibria have either zero profits, when both choose zero prices, or negligible profits (about 50 cents each), when they each choose a price of $0.01. Interestingly, for both the Cournot and Bertrand games, if we had only one firm, it would maximize profits u = p·q, choose the monopoly price (or quantity) of $50 (or 50 units), and earn a profit of $2,500.
The message of this analysis is quite striking: one firm may have monopoly power, but when we let just one more firm compete, and they compete with prices, then the market behaves competitively: when both choose a price of zero, price equals marginal cost! Notice that if we add a third and fourth firm, this will not change the outcome; prices will have to be zero (or practically zero at 0.01) for all firms in the Nash (Bertrand) equilibrium. This is not the case for Cournot competition.
A quick observation should lead you to realize that if we let ε be smaller than one cent, the conclusions above are sustained, and we still have two Nash equilibria: one with p_1 = p_2 = 0, and one with p_1 = p_2 = ε. It turns out that not only do these two equilibria become closer in profits as ε gets smaller, but when we take ε to zero and assume that prices can be chosen as any real number, we get a very clean result: the unique Nash equilibrium has prices equal to marginal costs, implying a competitive outcome.
Proposition 8 For ε = 0 (prices can be any real number) there is a unique Nash equilibrium: p_1 = p_2 = 0.
Proof: First note that we can't have a negative price in equilibrium: a firm offering one would lose money (it would be paying consumers to take its goods!). We need to show that we can't have a positive price in equilibrium. We can see this in two steps:

(i) If p_1 = p_2 = p > 0, each firm would benefit from changing to some price p − ε (ε very small) and getting the whole market for almost the same price.

(ii) If p_1 > p_2 ≥ 0, firm 2 would want to deviate to p_1 − ε (ε very small) and earn higher profits.

It is easy to see that p_1 = p_2 = 0 is an equilibrium: both firms are playing a best response.
Exercise 1 If the cost function were c_i(q_i) = c·q_i for each firm, then the unique Nash equilibrium would be p_1 = p_2 = c. Convince yourself of this as an easy exercise.
We will now see an interesting variation of the Bertrand game. Assume that c_i(q_i) = c_i·q_i represents the cost of each firm as before. Now, however, let c_1 = 1 and c_2 = 2, so that the two firms are not identical: firm 1 has a cost advantage. Let the demand still be p = 100 − q.

Now consider the case with discrete price jumps of ε = 0.01. As before, there are still two Nash equilibria:

    (p_1, p_2) ∈ {(1.99, 2.00), (2.00, 2.01)},

or, more generally, (p_1, p_2) ∈ {(2 − ε, 2), (2, 2 + ε)}.
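Both profiles can be checked by testing all unilateral deviations in integer cents (costs of 100 and 200 cents, profits scaled to stay in integers); the same check shows that the naive "limit" profile (2.00, 2.00) is not an equilibrium, because firm 1 would rather undercut than split the market. The helper names and the $3.00 deviation cap are ours (prices above the rival's sell nothing, so higher deviations cannot help):

```python
# Prices and costs in cents; profit scaled by 2*10^4: undercutting earns
# 2*(10000 - p)*(p - c), a tie earns (10000 - p)*(p - c), overpricing earns 0.
def profit2(p_i, p_j, c_i):
    if p_i < p_j:
        return 2 * (10000 - p_i) * (p_i - c_i)
    if p_i > p_j:
        return 0
    return (10000 - p_i) * (p_i - c_i)

C1, C2 = 100, 200      # firm 1 has the cost advantage
grid = range(0, 301)   # candidate deviations: $0.00 to $3.00

def is_nash(p1, p2):
    u1 = profit2(p1, p2, C1)
    u2 = profit2(p2, p1, C2)
    return (all(profit2(d, p2, C1) <= u1 for d in grid)
            and all(profit2(d, p1, C2) <= u2 for d in grid))

assert is_nash(199, 200)      # ($1.99, $2.00) is a Nash equilibrium
assert is_nash(200, 201)      # ($2.00, $2.01) is a Nash equilibrium
assert not is_nash(200, 200)  # the limit profile ($2.00, $2.00) is not
```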
Exercise 2 Show that these are the only Nash equilibria of this game.
Now we can ask ourselves: what happens if ε = 0? If we think of using a limit approach to answer this, then we may expect a result similar to the one we saw before, namely that we get one equilibrium which is the limit of both, which in this case would be p_1 = p_2 = 2.

But is this an equilibrium? Interestingly, the answer is no! To see this, consider the best response of firm 1. Its payoff function is not continuous when firm 2 offers
[Figure: firm 1's payoff u_1(p_1, p_2) as a function of p_1, showing duopoly profits when p_1 is below p_2 and a discontinuous jump at p_1 = p_2; the monopoly profit is maximized at the monopoly price p_1^M. The maximization here is for the profit function u_1 = (100 − p)(p − c_1), where c_1 = 1.]
in turn causes it not to have a well-defined best-response function, a consequence of the discontinuity in the profit function.
             L        C        R
       U    7, 7     4, 2     1, 8
       M    2, 4     5, 5     2, 3
       D    8, 1     3, 2     0, 0

(Player 1 chooses rows U, M, D; player 2 chooses columns L, C, R.)
It is easy to see that no strategy is dominated, and thus strict dominance cannot be applied to this game, while IESDS and rationalizability conclude that anything can happen. However, a pure-strategy Nash equilibrium exists. To find it, we use a simple method that captures the fact that any Nash equilibrium must be a pair of strategies at which each of the two players is playing a best response to his opponent's strategy. The procedure is best explained in three steps:

Step 1: For every column, find the highest payoff entry for player 1. By definition, this entry must be in the row that is a best response to the particular column being considered. Underline the pair of payoffs in this entry.

What step 1 does is identify the best response of player 1 for each of the pure strategies (columns) of player 2. For instance, if player 2 is playing L, then player 1's best response is D, and we underline the payoffs associated with this row in the first column. After performing this step we see that there are three pairs of pure strategies at which player 1 is playing a best response: (D, L), (M, C) and (M, R).

Step 2: For every row, find the highest payoff entry for player 2. By definition, this entry must be in the column that is a best response to the particular row being considered. Overline the pair of payoffs in this entry.
gives this solution concept its power: like IESDS and rationalizability, the solution concept of Nash is widely applicable. It will, however, usually lead to more refined predictions than those of IESDS and rationalizability, as implied by Proposition 7.
From the Prisoner's Dilemma, we can easily see that Nash equilibrium does not guarantee Pareto optimality. Indeed, if people are left to their own devices, then in some situations we need not expect them to do what is best for the whole group. This point was made quite convincingly and intuitively in Hardin's (1968) tragedy of the commons argument. This is where we can revisit the important restriction to self-enforcing outcomes: our solution concepts took the game as given, and imposed rationality and common knowledge to try to see what players will choose to do. If they each seek to maximize their individual well-being, then they may hinder their ability to achieve socially optimal outcomes.
11.5 Summary
...
11.6 Exercises
1. Prove Proposition 7.
2. The n-firm Cournot model: Suppose there are n firms in the Cournot oligopoly model. Let q_i denote the quantity produced by firm i, and let Q = q_1 + ··· + q_n denote the aggregate production. Let P(Q) denote the market-clearing price (when demand equals Q) and assume that the inverse demand function is given by P(Q) = a − Q (where Q < a). Assume that firms have no fixed cost, and that the cost of producing quantity q_i is c·q_i (all firms have the same marginal cost, and assume that c < a).

(a) Model this as a normal-form game.

(b) What is the Nash (Cournot) equilibrium of the game where firms choose their quantities simultaneously?
4. Let q_ij, i ∈ {1, 2} and j ∈ {A, B}, denote the quantity that firm i sells in country j. Consequently, let q_i = q_iA + q_iB be the total quantity produced by firm i ∈ {1, 2}, and let q^j = q_1j + q_2j be the total quantity sold in country j ∈ {A, B}. The demand for coal in countries A and B is given respectively by

    p_j = 90 − q^j,  j ∈ {A, B},

and the cost of production of each firm is given by

    c_i(q_i) = 10·q_i,  i ∈ {1, 2}.

(a) Assume that the countries do not have a trade agreement and, in fact, imports in both countries are prohibited. This implies that q_2A = q_1B = 0 is set as a political constraint. What quantities q_1A and q_2B will the two firms produce?

Now assume that the two countries sign a free-trade agreement that allows foreign firms to sell in their countries without any tariffs. There are, however, shipping costs. If firm i sells quantity q_ij in the foreign country (i.e., firm 1 selling in B or firm 2 selling in A), then shipping costs are equal to 10·q_ij. Assume further that each firm chooses a pair of quantities q_iA, q_iB simultaneously, i ∈ {1, 2}, so that a profile of actions consists of four quantity choices.
(b) Model this as a normal-form game and find a Nash equilibrium of the game you described. Is it unique?

Now assume that before the game described in (b) is played, the research department of firm 1 discovers that shipping coal with the current ships causes the release of pollutants. If the firm were to disclose this report to the World Trade Organization (WTO), then the WTO would prohibit the use of the current ships. Instead, a new shipping technology would be offered that would increase shipping costs to 40·q_ij (instead of 10·q_ij as above).
(c) Would firm 1 be willing to release the information to the WTO? Justify your answer with an equilibrium analysis.
6. Comparative economics: Two high-tech firms (1 and 2) are considering a joint venture. Each firm i can invest in a novel technology, and can choose a level of investment x_i ∈ [0, 5] at a cost of c_i(x_i) = x_i²/4 (think of x_i as how many hours to train employees, or how much capital to buy for R&D labs). The revenue of each firm depends both on its own investment and on the other firm's investment. In particular, if firms i and j choose x_i and x_j respectively, then the gross revenue to firm i is

    R(x_i, x_j) = { 0          if x_i < 1
                  { 2          if x_i ≥ 1 and x_j < 2
                  { x_i·x_j    if x_i ≥ 1 and x_j ≥ 2

(a) Write down mathematically, and draw, the profit function (gross revenue minus cost) of firm i as a function of x_i for three cases: (i) x_j < 2, (ii) x_j = 2, and (iii) x_j = 4.
11.7 References

*** TO BE COMPLETED ***

Hardin, Garrett (1968), "The Tragedy of the Commons," Science.
Nasar, Sylvia. A Beautiful Mind. New York: Simon and Schuster, 1998.
Nash, John (1950)
12
Mixed Strategies
In the previous chapters we postponed discussing the option that a player has to choose a random strategy. This turns out to be an important type of behavior to consider, with interesting implications and interpretations that follow from it. In fact, there are many games for which there will be no equilibrium prediction if we do not consider the players' ability to choose random strategies.
Consider the following classic zero sum game called Matching Pennies.1
Players 1 and 2 both put a penny on a table simultaneously. If the two pennies
come up the same side (heads or tails) then player 1 gets both, otherwise player 2
does. We can represent this in the following matrix:
                  Player 2
                 H         T
  Player 1   H  1, −1    −1, 1
             T  −1, 1    1, −1
¹ A zero-sum game is one in which the gains of one player are the losses of another; hence their payoffs sum to zero. The class of zero-sum games was the main subject of analysis before Nash introduced his solution concept in the 1950s. These games have some very nice mathematical properties and were a central object of analysis in von Neumann and Morgenstern's (1944) seminal book.
Upon observation we can see that the method we introduced in Section 11.3 to find pure-strategy Nash equilibria does not work. Namely, given any belief that player 1 has about player 2's choice, he always wants to match it. In contrast, given any belief that player 2 has about player 1's choice, he would like to choose the opposite orientation for his penny. Does this mean that a Nash equilibrium fails to exist? We will soon see that a Nash equilibrium will indeed exist if we allow players to choose random strategies, and there will be an intuitive appeal to the proposed equilibrium.
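The non-existence of a pure-strategy equilibrium in Matching Pennies is easy to confirm by an exhaustive check of all four profiles (helper names are ours):

```python
payoffs = {  # Matching Pennies: payoffs[(s1, s2)] = (player 1, player 2)
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}
S = ["H", "T"]

def is_nash(s1, s2):
    """True if neither player can gain by a unilateral deviation."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(d, s2)][0] <= u1 for d in S)
            and all(payoffs[(s1, d)][1] <= u2 for d in S))

assert not any(is_nash(s1, s2) for s1 in S for s2 in S)
print("Matching Pennies has no pure-strategy Nash equilibrium")
```

At every profile, exactly one of the two players is losing and would rather flip his penny, which is precisely the cycling described above.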
Matching Pennies is not the only simple game that fails to have a pure-strategy Nash equilibrium. Recall the child's game Rock-Paper-Scissors: rock beats scissors, scissors beats paper, and paper beats rock. If winning gives the player a payoff of 1, losing gives a payoff of −1, and a tie is worth 0, then we can describe this game by the following matrix:
             R          P          S
     R     0, 0      −1, 1      1, −1
     P     1, −1      0, 0     −1, 1
     S    −1, 1      1, −1      0, 0
and a similar (symmetric) list would give the best-response correspondence of player 2. Examining the two best-response correspondences implies rather immediately that there is no pure-strategy equilibrium, just as in the Matching Pennies game. The reason is that starting from any pair of pure strategies, at least one player is not playing a best response and will want to change his strategy in response.
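The cycling can be exhibited directly: against each pure strategy the unique best response is the strategy that beats it, and following best responses produces the cycle R → P → S → R, which has no fixed pair. A short sketch (helper names are ours; by the symmetry of the zero-sum game, player 2's best response to a pure strategy is the same "beating" strategy):

```python
S = ["R", "P", "S"]
beats = {"R": "P", "P": "S", "S": "R"}  # P beats R, S beats P, R beats S

def u1(s1, s2):
    """Player 1's payoff: +1 for a win, -1 for a loss, 0 for a tie."""
    if s1 == s2:
        return 0
    return 1 if beats[s2] == s1 else -1

def best_response(s2):
    """The unique payoff-maximizing reply to a pure strategy s2."""
    return max(S, key=lambda s1: u1(s1, s2))

# Best responses cycle: R -> P -> S -> R.
assert [best_response(s) for s in S] == ["P", "S", "R"]
# A pure equilibrium would require a pair of mutual best responses; none exists.
assert not any(best_response(s2) == s1 and best_response(s1) == s2
               for s1 in S for s2 in S)
```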
    Σ_{s_i ∈ S_i} σ_i(s_i) = 1.

That is, the probability of any event happening must be non-negative, and the sum of the probabilities of all the possible events must add up to one.² Notice that every pure strategy is a mixed strategy with a degenerate distribution that picks a single strategy with probability one.
² The notation Σ_{s_i ∈ S_i} σ_i(s_i) means the sum of σ_i(s_i) over all s_i ∈ S_i.
              H         T
      H     1, −1     −1, 1
      T     −1, 1     1, −1

For each player i, S_i = {H, T}, and the simplex, which is the set of mixed strategies, can be written as

    ΔS_i = {(σ_i(H), σ_i(T)) : σ_i(H) ≥ 0, σ_i(T) ≥ 0, σ_i(H) + σ_i(T) = 1}.

We read this as follows: the set of mixed strategies is the set of all pairs (σ_i(H), σ_i(T)) such that both are non-negative numbers and they sum up to one.³ We use the notation σ_i(H) to represent the probability that player i plays H, and σ_i(T) as the probability that player i plays T.
Example 9 Rock-Paper-Scissors. In the Rock-Paper-Scissors game, S_i = {R, P, S} (for rock, paper and scissors respectively), and we can define the simplex as

    ΔS_i = {(σ_i(R), σ_i(P), σ_i(S)) : σ_i(R), σ_i(P), σ_i(S) ≥ 0, σ_i(R) + σ_i(P) + σ_i(S) = 1},

which is now three numbers, each defining the probability that the player plays one of his pure strategies. As mentioned earlier, a pure strategy is just a special case of a mixed strategy. For example, in this game we can represent the pure strategy of playing R with the degenerate mixed strategy σ(R) = 1, σ(P) = σ(S) = 0.
Given a player's mixed strategy σ_i(·), it will be useful to distinguish between pure strategies that are chosen with positive probability and those that are not. We define:
³ The simplex of this two-element strategy set can be represented by a single number σ ∈ [0, 1], where σ is the probability that player i plays H, and 1 − σ is the probability that player i plays T. This follows from the definition of a probability distribution over a two-element set. In general, the simplex of a strategy set with m pure strategies will be in an (m − 1)-dimensional space, where each of the m − 1 numbers is in [0, 1] and represents the probability of one of the first m − 1 pure strategies. These sum up to a number less than or equal to one, so that the remainder is the probability of the m-th pure strategy.
Definition 24 Given a mixed strategy σ_i(·) for player i, we will say that a pure strategy s_i ∈ S_i is in the support of σ_i(·) if and only if it occurs with positive probability, i.e., if σ_i(s_i) > 0.

For example, in the game of Rock-Paper-Scissors, a player can choose rock or paper, each with equal probability, and never choose scissors. In this case σ_i(R) = σ_i(P) = 0.5 and σ_i(S) = 0. We then say that R and P are in the support of σ_i(·), but S is not.
12.1.2 Infinite Strategy Sets
As we have seen with the Cournot and Bertrand duopoly examples, strategy sets need not be finite. In the cases where the strategy set is a well-defined interval, a mixed strategy is given by a cumulative distribution function:

Definition 25 Let S_i be player i's pure-strategy set and assume that S_i is an interval. A mixed strategy for player i is a cumulative distribution function F_i : S_i → [0, 1], where F_i(x) = Pr{s_i ≤ x}. If F_i(·) is differentiable with density f_i(·), then we say that s_i ∈ S_i is in the support of F_i(·) if f_i(s_i) > 0.
Example 10 Cournot Duopoly. Consider the Cournot duopoly game with a capacity constraint of 100 units of production, so that S_i = [0, 100] for i ∈ {1, 2}. Consider the mixed strategy in which player i chooses a quantity between 30 and 50 according to a uniform distribution. That is,

    F_i(s_i) = { 0                for s_i < 30
               { (s_i − 30)/20    for s_i ∈ [30, 50]
               { 1                for s_i > 50

and

    f_i(s_i) = { 0       for s_i < 30
               { 1/20    for s_i ∈ [30, 50]
               { 0       for s_i > 50
We will mostly focus on games with finite strategy spaces to illustrate the
examples with mixed strategies, but some interesting examples will have infinite
strategy sets and will require the use of cumulative distributions and densities to
explore behavior in mixed strategies.
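As a quick sanity check on the interval case, the CDF and density from Example 10 can be sketched in a few lines (the function names are my own illustration, not notation from the text):

```python
def F(x):
    """CDF of the mixed strategy from Example 10: uniform over [30, 50]."""
    if x < 30:
        return 0.0
    if x <= 50:
        return (x - 30) / 20
    return 1.0

def f(x):
    """Density: 1/20 on [30, 50], zero elsewhere (the support is [30, 50])."""
    return 1 / 20 if 30 <= x <= 50 else 0.0

# F accumulates the density: F(40) = 0.5 means half the mass lies below 40.
print(F(25), F(40), F(50))   # 0.0 0.5 1.0
print(f(40))                 # 0.05
```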
12.1.3 Beliefs and Mixed Strategies
As we discussed earlier, introducing probability distributions not only enriches the
set of actions that a player can choose from, but also allows us to enrich the beliefs
that players have. Consider, for example, player i who plays against his opponents,
−i. It may be that player i is uncertain about the behavior of his opponents for
many reasons. For example, he may believe that his opponents are indeed choosing
mixed strategies, which immediately implies that their behavior is not fixed but
rather random. An alternative interpretation is the situation in which player i is
playing a game against an opponent whom he does not know, and whose background
will determine how she will play. This interpretation will be revisited
later in chapter 25, and is a very appealing justification for beliefs that are random,
and behavior that is consistent with these beliefs.
To introduce beliefs about mixed strategies formally, we define:

Definition 26 A belief for player i is given by a probability distribution π_i ∈ ΔS_{−i} over the strategies of his opponents. We denote by π_i(s_{−i}) the probability player i assigns to his opponents playing s_{−i} ∈ S_{−i}.

Thus, a belief for player i is a probability distribution over the strategies of
his opponents. Notice that the belief of player i lies in the same set that represents
the profiles of mixed strategies of player i's opponents. For example, in
the rock-paper-scissors game, we can represent the beliefs of player 1 as a triplet,
(π_1(R), π_1(P), π_1(S)), where by definition π_1(R), π_1(P), π_1(S) ≥ 0, and π_1(R) +
π_1(P) + π_1(S) = 1. The interpretation of π_1(s_2) is the probability that player 1
assigns to player 2 playing some particular s_2 ∈ S_2. Recall that the strategy of
player 2 is a triplet σ_2(R), σ_2(P), σ_2(S) ≥ 0, with σ_2(R) + σ_2(P) + σ_2(S) = 1, so
we can clearly see the analogy between π and σ.
To evaluate these lotteries we will resort to the notion of expected utility over
lotteries as presented in section 3.2. Thus, we define the expected utility of a player
from playing mixed strategies as follows:

Definition 27 The expected payoff of player i when he chooses s_i ∈ S_i and his
opponents play the mixed strategy σ_{−i} ∈ ΔS_{−i} is

$$v_i(s_i, \sigma_{-i}) = \sum_{s_{-i} \in S_{-i}} \sigma_{-i}(s_{-i})\, v_i(s_i, s_{-i}).$$

Similarly, the expected payoff of player i when he chooses σ_i ∈ ΔS_i and his opponents
play the mixed strategy σ_{−i} ∈ ΔS_{−i} is

$$v_i(\sigma_i, \sigma_{-i}) = \sum_{s_i \in S_i} \sigma_i(s_i)\, v_i(s_i, \sigma_{-i}) = \sum_{s_i \in S_i} \sum_{s_{-i} \in S_{-i}} \sigma_i(s_i)\, \sigma_{-i}(s_{-i})\, v_i(s_i, s_{-i}).$$
The idea is a straightforward adaptation of definition 5 in section 3.2. The randomness
that player i faces if he chooses some s_i ∈ S_i is created by the random
selection of s_{−i} ∈ S_{−i} that is described by the probability distribution σ_{−i}(·).
Clearly, the definition we just presented is well defined only for finite strategy
sets S_i. The analog for interval strategy sets is a straightforward adaptation of the
second part of definition 5.⁴
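For finite games, the double sum in Definition 27 is easy to compute directly. The sketch below uses a two-player game with the payoffs of matching pennies; the dictionary layout and function name are my own illustration, not notation from the text:

```python
def expected_payoff(u, sigma_own, sigma_opp):
    """v(sigma_own, sigma_opp) = sum_s sum_t sigma_own[s] * sigma_opp[t] * u[s][t]."""
    return sum(sigma_own[s] * sigma_opp[t] * u[s][t]
               for s in sigma_own for t in sigma_opp)

# Player 1's payoffs in matching pennies: u1[own action][opponent action].
u1 = {"A": {"A": 1, "B": -1}, "B": {"A": -1, "B": 1}}

# A pure strategy is the degenerate mixed strategy putting weight 1 on one action.
print(expected_payoff(u1, {"A": 1, "B": 0}, {"A": 0.5, "B": 0.5}))      # 0.0
print(expected_payoff(u1, {"A": 0.5, "B": 0.5}, {"A": 0.5, "B": 0.5}))  # 0.0
```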
Example 11 Rock-Paper-Scissors: Recall the rock-paper-scissors example above,

           R        P        S
   R     0, 0    −1, 1    1, −1
   P     1, −1   0, 0     −1, 1
   S     −1, 1   1, −1    0, 0
⁴ Consider a game where each player has a strategy set given by the interval S_i = [s̲_i, s̄_i]. If player 1 is playing
s_1 and his opponents, players j = 2, 3, ..., n, are using the mixed strategies given by the density functions f_j(·), then
the expected utility of player 1 is given by

$$\int_{\underline{s}_2}^{\bar{s}_2} \int_{\underline{s}_3}^{\bar{s}_3} \cdots \int_{\underline{s}_n}^{\bar{s}_n} v_1(s_1, s_{-1})\, f_2(s_2) f_3(s_3) \cdots f_n(s_n)\, ds_2\, ds_3 \cdots ds_n.$$
$$v_1(R, \sigma_2) = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot (-1) + 0 \cdot 1 = -\tfrac{1}{2}$$

It is easy to see that player 1 has a unique best response to this mixed strategy of
player 2: if he plays P, he wins or ties with equal probability, while his other two
pure strategies are worse: with R he either loses or ties and with S he either loses
or wins. Clearly, if his beliefs about the strategy of his opponent are different then
player 1 is likely to have a different best response.
It is useful to consider an example where the players have strategy sets that are
intervals.
Example 12 Bidding for a Dollar: Imagine the following game in which two
players can bid for a dollar. Each can submit a bid that is a real number (so we
are not restricted to penny increments), so that S_i = [0, ∞), i ∈ {1, 2}. The person
with the highest bid gets the dollar, but the twist is that both bidders have to pay
their bids. (This is called an all-pay auction.) If there is a tie then both pay and
the dollar is awarded to each player with an equal probability of 0.5. Thus, if player
i bids s_i and player j ≠ i bids s_j, then player i's payoff is

$$v_i(s_i, s_{-i}) = \begin{cases} -s_i & \text{if } s_i < s_j \\ \tfrac{1}{2} - s_i & \text{if } s_i = s_j \\ 1 - s_i & \text{if } s_i > s_j \end{cases}$$
!6B (,"9(3/ 0$"0 8)":/+ K (# 8)":(39 " ,(R/4 #0+"0/9: (3 B$(5$ $/ (# uniformly
5$66#(39 " *(4 */0B//3 Y "34 HC 7$"0 (#; 8)":/+ KT# ,(R/4 #0+"0/9: C 2 (# " '3(V
16+, 4(#0+(*'0(63 6L/+ 0$/ (30/+L") Y "34 H; B$(5$ (# +/8+/#/30/4 *: 0$/ 5',')"0(L/
4(#0+(*'0(63 1'350(63 "34 4/3#(0:;
(
(
#2 16+ #2 ! [0$ 1]
1 16+ #2 ! [0$ 1]
"2 (#2 ) =
"34 D2 (#2 ) =
1
16+ #2 : 1
0
16+ #2 : 1
The expected payoff of player 1 from offering a bid s_1 > 1 is 1 − s_1 < 0 since he will
win for sure, but this would not be wise. The expected payoff from bidding s_1 ≤ 1
is,⁵

$$\begin{aligned} Ev_1(s_1, \sigma_2) &= \Pr\{s_1 < s_2\}(-s_1) + \Pr\{s_1 = s_2\}(\tfrac{1}{2} - s_1) + \Pr\{s_1 > s_2\}(1 - s_1) \\ &= (1 - F(s_1))(-s_1) + 0 \cdot (\tfrac{1}{2} - s_1) + F(s_1)(1 - s_1) \\ &= 0 \end{aligned}$$

Thus, when player 2 is using a uniform distribution between 0 and 1 for his bid,
player 1 cannot get any positive expected payoff from any bid he offers: any
bid less than one offers an expected payoff of 0, and any bid above 1 guarantees
getting the dollar at an inflated price. This game is one to which we will return
later, as it has several interesting features and twists.
⁵ Note that if player 2 is using a uniform distribution over [0, 1] then Pr{s_1 = s_2} = 0 for any s_1 ∈ [0, 1].
over all of the pure strategies that player i's opponents can play. Clearly, rationality
requires that a player play a best response given his beliefs (which now extends
the notion of rationalizability to allow for uncertain beliefs). A Nash equilibrium
requires that these beliefs be correct.
Recall that we defined a pure strategy s_i ∈ S_i to be in the support of σ_i if
σ_i(s_i) > 0, that is, if s_i is played with positive probability (see Definition 24). Now
imagine that in the Nash equilibrium profile σ*, the support of i's mixed strategy
σ*_i contains more than one pure strategy; say s_i and s′_i are both in the support of
σ*_i.

What must we conclude about a rational player i if σ*_i is indeed part of a Nash
equilibrium (σ*_i, σ*_{−i})? By definition, σ*_i is a best response against σ*_{−i}, which means
that given σ*_{−i}, player i cannot do better than to randomize between more than one
of his pure strategies, in this case s_i and s′_i. But when would a player be willing
to randomize between two alternative pure strategies? The answer is predictable:
Proposition 9 If σ* is a Nash equilibrium, and both s_i and s′_i are in the support
of σ*_i, then

$$v_i(s_i, \sigma^*_{-i}) = v_i(s'_i, \sigma^*_{-i}) = v_i(\sigma^*_i, \sigma^*_{-i}).$$
The proof is quite straightforward and follows from the observation that if a
player is randomizing between two alternatives, then he must be indifferent between
them. If this were not the case, say v_i(s_i, σ*_{−i}) > v_i(s′_i, σ*_{−i}) with both s_i and s′_i
in the support of σ*_i, then by reducing the probability of playing s′_i from σ*_i(s′_i)
to zero, and increasing the probability of playing s_i from σ*_i(s_i) to σ*_i(s_i) + σ*_i(s′_i),
player i's expected utility must go up, implying that σ*_i could not have been a best
response to σ*_{−i}.
This simple observation will play an important role in computing mixed strategy
Nash equilibria. In particular, we know that if a player is playing a mixed strategy,
he must be indifferent between the actions he is choosing with positive probability,
that is, the actions that are in the support of his mixed strategy. One player's
indifference will impose restrictions on the behavior of other players, and these
restrictions will help us find the mixed strategy Nash equilibrium.
For games with many players, or with two players who have many strategies,
finding the set of mixed strategy Nash equilibria is a tedious task. It is often done
with the help of computer algorithms, since it generally takes the form of a
linear programming problem. Still, it will be useful to see how one computes mixed
strategy Nash equilibria for simpler games.
12.2.1 Example: Matching Pennies

Consider the matching pennies game,

           A        B
   A     1, −1   −1, 1
   B     −1, 1   1, −1
and recall that we showed that this game does not have a pure strategy Nash
equilibrium. We now ask, does it have a mixed strategy Nash equilibrium? To
answer this, we have to !nd mixed strategies for both players that are mutual best
responses.
To simplify the notation, define mixed strategies for players 1 and 2 as follows:
Let p be the probability that player 1 plays A and 1 − p the probability that he
plays B. Similarly, let q be the probability that player 2 plays A and 1 − q the
probability that he plays B.
Using the formulae for expected utility in this game, we can write player 1's
expected utility from each of his two pure actions as follows:

$$v_1(A, q) = q \cdot 1 + (1 - q) \cdot (-1) = 2q - 1 \qquad\qquad (12.1)$$
$$v_1(B, q) = q \cdot (-1) + (1 - q) \cdot 1 = 1 - 2q$$

With these equalities in hand, we can calculate the best response of player 1 for any
choice q of player 2. In particular, playing A will be strictly better than playing B
for player 1 if and only if v_1(A, q) > v_1(B, q), and using (12.1) above this will be
true if and only if

$$2q - 1 > 1 - 2q,$$

which is equivalent to q > 1/2. Similarly, playing B will be strictly better than
playing A for player 1 if and only if q < 1/2. Finally, when q = 1/2 player 1 will be
indifferent between playing A or B. This simple analysis gives us the best response
correspondence of player 1:
$$BR_1(q) = \begin{cases} p = 0 & \text{if } q < \tfrac{1}{2} \\ p \in [0, 1] & \text{if } q = \tfrac{1}{2} \\ p = 1 & \text{if } q > \tfrac{1}{2} \end{cases}$$
It may be insightful to graph the expected utility of player 1 from choosing either
A or B as a function of q, the choice of player 2, as shown in figure ??.

By the same reasoning, player 2's expected utilities from his two pure actions are

$$v_2(p, A) = p \cdot (-1) + (1 - p) \cdot 1 = 1 - 2p$$
$$v_2(p, B) = p \cdot 1 + (1 - p) \cdot (-1) = 2p - 1$$
and his best response correspondence is:

$$BR_2(p) = \begin{cases} q = 1 & \text{if } p < \tfrac{1}{2} \\ q \in [0, 1] & \text{if } p = \tfrac{1}{2} \\ q = 0 & \text{if } p > \tfrac{1}{2} \end{cases}$$
We know from Proposition 9 above that when player 1 is mixing between A and
B, both with positive probability, it must be the case that his payoffs from A
and from B are identical. This, it turns out, imposes a restriction on the behavior
of player 2, given by the choice of q. Namely, player 1 is willing to mix between A
and B if and only if v_1(A, q) = v_1(B, q), which will hold if and only if q = 1/2. This
is the way in which the indifference of player 1 imposes a restriction on player 2:
only when player 2 is playing q = 1/2 will player 1 be willing to mix between his
actions A and B. Similarly, player 2 is willing to mix between A and B only when
v_2(p, A) = v_2(p, B), which is true only when p = 1/2. At this stage we have come to
the conclusion of our quest for a Nash equilibrium in this game. We can see that
there is indeed a pair of mixed strategies that form a Nash equilibrium, and these
are precisely (p, q) = (1/2, 1/2).
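The indifference logic above can be verified in a few lines; this sketch hard-codes the expected utilities derived for matching pennies and checks that (p, q) = (1/2, 1/2) makes each player exactly indifferent, so that mixing is a best response:

```python
# Expected utilities in matching pennies, as derived in the text:
v1A = lambda q: 2 * q - 1        # player 1 plays A; q = Pr{player 2 plays A}
v1B = lambda q: 1 - 2 * q
v2A = lambda p: 1 - 2 * p        # player 2 plays A; p = Pr{player 1 plays A}
v2B = lambda p: 2 * p - 1

p, q = 0.5, 0.5
assert v1A(q) == v1B(q) == 0.0   # player 1 indifferent exactly at q = 1/2
assert v2A(p) == v2B(p) == 0.0   # player 2 indifferent exactly at p = 1/2

# Away from 1/2 a strict pure best response emerges, breaking the mix:
assert v1A(0.6) > v1B(0.6)       # if q > 1/2, player 1 strictly prefers A
```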
We return now to observe that the best response correspondence for each player
is a function of the other player's strategy, which in this case is a probability between
0 and 1, namely, the opponent's mixed strategy (probability of playing A). Thus,
we can graph the best response correspondence of each player in a similar way to
what we did for the Cournot duopoly game, since each strategy belongs to a well-defined
interval, [0, 1]. For the matching pennies example, player 2's best response q(p) can
be graphed in figure ?? (q(p) is on the y-axis, as a function of p on the x-axis.)
Similarly, we can graph player 1's best response, p(q), and these two correspondences
will indeed intersect only at one point: (p, q) = (1/2, 1/2). By definition, when
these two correspondences meet, the point of intersection is a Nash equilibrium.
C # ! 4,# (C " ). Similarly, C " ! 4," (C # ). We conclude that C 1 and C 2 form a Nash
equilibrium. We will prove that (C !1 $ C !2 ) is 0$/ '3(&'/ !"#$ /&'()(*+(',.
Suppose player ' plays , with probability C" (,) ! (0$ 1), 7 with probability
C " (7 ) ! (0$ 1), and & with probability 1#C " (,)#C " (7 ). Since we proved that both
players have to mix with all three pure strategies, it follows that C " (,) + C " (7 ) 6 1
so that 1 # C " (,) # C " (7 ) ! (0$ 1). It follows that player < receives the following
payo!s from his three pure strategies:
           C        R
   M     0, 0    3, 5
   D     4, 4    0, 3
It is easy to check that (M, R) and (D, C) are both pure strategy Nash equilibria.
It turns out that in cases like this, when there are two distinct pure strategy Nash
equilibria, there will generally be a third one in mixed strategies. For this game,
let player 1's mixed strategy be given by σ_1 = (σ_1(M), σ_1(D)), and let player 2's
strategy be given by σ_2 = (σ_2(C), σ_2(R)).

Player 1 will mix when v_1(M, σ_2) = v_1(D, σ_2), or,

$$\sigma_2(C) \cdot 0 + (1 - \sigma_2(C)) \cdot 3 = \sigma_2(C) \cdot 4 + (1 - \sigma_2(C)) \cdot 0 \;\Longrightarrow\; \sigma_2(C) = \frac{3}{7},$$

and player 2 will mix when v_2(σ_1, C) = v_2(σ_1, R), or,

$$\sigma_1(M) \cdot 0 + (1 - \sigma_1(M)) \cdot 4 = \sigma_1(M) \cdot 5 + (1 - \sigma_1(M)) \cdot 3 \;\Longrightarrow\; \sigma_1(M) = \frac{1}{6}.$$

Letting q = σ_2(C) and p = σ_1(M), we can draw the two best response correspondences
as they appear in figure 12.1.

Notice that all three Nash equilibria are revealed: (p, q) ∈ {(1, 0), (1/6, 3/7), (0, 1)}
are Nash equilibria, where (p, q) = (1, 0) corresponds to the pure strategy (M, R),
and (p, q) = (0, 1) corresponds to the pure strategy (D, C).
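Each of the three claimed equilibria can be checked against the logic of Proposition 9: at a Nash equilibrium, no player can gain by deviating to a pure strategy. A sketch with exact arithmetic, using the payoffs of the 2x2 game above (the top row is played with probability p and the first column with probability q; the function names are mine):

```python
from fractions import Fraction as Fr

def v1(p, q):
    # Player 1's payoffs: 0 and 3 in the top row, 4 and 0 in the bottom row.
    return p * (1 - q) * 3 + (1 - p) * q * 4

def v2(p, q):
    # Player 2's payoffs: 0 and 5 in the top row, 4 and 3 in the bottom row.
    return p * (1 - q) * 5 + (1 - p) * q * 4 + (1 - p) * (1 - q) * 3

def is_nash(p, q):
    """Neither player can do better by moving to one of his pure strategies."""
    no_dev_1 = v1(p, q) >= max(v1(Fr(1), q), v1(Fr(0), q))
    no_dev_2 = v2(p, q) >= max(v2(p, Fr(1)), v2(p, Fr(0)))
    return no_dev_1 and no_dev_2

for p, q in [(Fr(1), Fr(0)), (Fr(0), Fr(1)), (Fr(1, 6), Fr(3, 7))]:
    assert is_nash(p, q)
assert not is_nash(Fr(1, 2), Fr(1, 2))   # an arbitrary mix is not an equilibrium
```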
Remark 4 It may be interesting to know that generically (a form of "almost always")
there are an odd number of equilibria in games. To prove this is far from
trivial, and the interested reader can consult ___.
           L        C        R
   U     5, 1    1, 4    1, 0
   M     3, 2    0, 0    3, 5
   D     4, 3    4, 4    0, 3
and denote mixed strategies for players 1 and 2 as triplets, (σ_1(U), σ_1(M), σ_1(D))
and (σ_2(L), σ_2(C), σ_2(R)) respectively. It is easy to see that no pure strategy is
strictly dominated by another pure strategy for either player. However, if we allow
for mixed strategies, we can find that for player 2 the strategy L is dominated
by a strategy that mixes between the pure strategies C and R. It is as if we are
increasing the number of strategies player 2 has to infinity, and one of these in particular
is the strategy in which player 2 mixes between C and R with equal probability,
as the following diagram suggests,
           L        C        R       (0, ½, ½)
   U     5, 1    1, 4    1, 0       2
   M     3, 2    0, 0    3, 5       2.5
   D     4, 3    4, 4    0, 3       3.5

where the last column lists player 2's expected payoff from the mixed strategy
(0, ½, ½) against each of player 1's pure strategies.
Hence, we can perform the following sequence of IESDS with mixed strategies:

1. (0, ½, ½) ≻_2 L

2. in the resulting game, (0, ½, ½) ≻_1 U

Thus, after two stages of IESDS we have reduced the game above to,

           C        R
   M     0, 0    3, 5
   D     4, 4    0, 3
A question you must be asking is: how did we find these dominated strategies?
Well, a good eye for the numbers is what it takes, short of a computer program
or brute force. Also, notice that there are other mixed strategies that would work,
because strict dominance implies that if we add a small ε > 0 to one of the
probabilities and subtract it from another, then the resulting expected utility
from the new mixed strategy can be made arbitrarily close to that of the original
one; thus it too would dominate the dominated strategy.
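The first elimination step can be sketched as a brute-force dominance check. The payoff numbers below are player 2's payoffs from the 3x3 game above, with generic row/column labels of my own; the mix (0, 1/2, 1/2) puts equal weight on player 2's second and third strategies:

```python
# Player 2's payoffs u2[row][column] in the 3x3 game above
# (the columns are player 2's three pure strategies).
u2 = {
    "row1": {"col1": 1, "col2": 4, "col3": 0},
    "row2": {"col1": 2, "col2": 0, "col3": 5},
    "row3": {"col1": 3, "col2": 4, "col3": 3},
}
mix = {"col1": 0.0, "col2": 0.5, "col3": 0.5}   # the mix (0, 1/2, 1/2)

def strictly_dominates(mix, pure):
    """True iff the mix beats `pure` against every strategy of the opponent."""
    return all(sum(mix[c] * u2[r][c] for c in mix) > u2[r][pure] for r in u2)

print(strictly_dominates(mix, "col1"))   # True: mixed payoffs 2, 2.5, 3.5 beat 1, 2, 3
```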
FIGURE 12.2.
We can then proceed by considering the collection of best response correspondences,
BR ≡ BR_1 × BR_2 × ⋯ × BR_n, which operates from ΔS to itself. That
is, BR : ΔS → ΔS takes every element σ′ ∈ ΔS and converts it into a subset
BR(σ′) ⊆ ΔS. Since ΔS_i is a simplex, it must be compact (closed and bounded,
just like [0,1]).

Nash then applied a more powerful extension of Brouwer's theorem, called Kakutani's
fixed point theorem, which says that under these conditions there exists some
σ* such that σ* ∈ BR(σ*); that is, BR : ΔS → ΔS has a fixed point. This
precisely means that there is some σ* for which the collection of
best response correspondences, when operating on σ*, includes σ*. This fixed point
means that for every i, σ*_i is a best response to σ*_{−i}, which is the definition of a
Nash equilibrium.
12.6 Summary
...
12.7 Exercises
1. Use the best response correspondences in the Battle of the Sexes game to
!nd all the Nash equilibria. (Follow the approach used for the example in
section 12.2.3.)
2. An all pay auction: In example 12 we introduced a version of an all pay
auction that worked as follows: each bidder !rst submits a bid. The highest
bidder gets the good, but all bidders pay there bids. As in the example,
consider such an auction for a good worth $1 to each of the two bidders. Each
bidder can choose to o!er a bid from the unit interval so that &" = [0$ 1].
Players only care about the expected value they will end up with at the end
of the game (i.e., if a player bids $0.4 and expects to win with probability
0.7 then his payo! is 0%7 % 1 # 0%4).
(a) Model this auction as a normal-form game.
(b) Show that this game has no pure strategy Nash equilibrium.

(c) Show that this game cannot have a Nash equilibrium in which each
player is randomizing over a finite number of bids.

(d) Consider mixed strategies of the following form: each player i chooses
an interval [a_i, b_i] with 0 ≤ a_i < b_i ≤ 1, together with a cumulative
distribution F_i(·) over the interval [a_i, b_i]. (Alternatively, you can think
of each player as choosing F_i(·) over the interval [0, 1], with two values a_i
and b_i such that F_i(a_i) = 0 and F_i(b_i) = 1.)

i. Show that if two such strategies are a mixed strategy Nash equilibrium
then it must be that a_1 = a_2 and b_1 = b_2.

ii. Show that if two such strategies are a mixed strategy Nash equilibrium
then it must be that a_1 = a_2 = 0.

iii. Using the above, argue that if two such strategies are a mixed strategy
Nash equilibrium then both players must be getting an expected
utility of zero.

iv. Show that if two such strategies are a mixed strategy Nash equilibrium
then it must be that b_1 = b_2 = 1.

v. Show that F_i(·) being uniform over [0, 1] is a symmetric Nash equilibrium
of this game.
12.8 References

*** TO BE COMPLETED ***

Nash, John (1950), "Equilibrium Points in n-Person Games," Proceedings of the National Academy of Sciences 36, 48-49.

von Neumann, John and Oskar Morgenstern (1944), Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.