DREW FUDENBERG AND JEAN TIROLE

Contents

Introduction
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 6
Chapter 7
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
Chapter 13
Chapter 14

Introduction

Drew Fudenberg and Jean Tirole have asked me to prepare this set of solutions to accompany their text Game Theory. While constrained by the number of pages I was willing to write, I have tried to make the solutions easy to read so that a casual reader might quickly identify the main ideas. I have also tried to provide more detail than is standard in solution manuals. I would like to thank Sugato Dasgupta and various students at MIT for their comments and corrections. I owe special thanks to Eva Hakala for typing the manuscript and revisions and preparing the figures. While many errors surely remain, I hope that readers will find the solutions useful and that they will make an excellent text even more accessible.

Glenn D. Ellison

Chapter 1

Exercise 1.1

(i) Let y be the probability that player 2 plays L. Player 1's expected payoff is then ya + (1-y)c if he plays U and ye + (1-y)g if he plays D. His best response is

R1(y) = (U) if ya + (1-y)c > ye + (1-y)g
      = (U,D) if ya + (1-y)c = ye + (1-y)g
      = (D) if ya + (1-y)c < ye + (1-y)g.

Let z be the probability that player 1 plays U. Player 2's best response is

R2(z) = (L) if zb + (1-z)f > zd + (1-z)h
      = (L,R) if zb + (1-z)f = zd + (1-z)h
      = (R) if zb + (1-z)f < zd + (1-z)h.

(ii) Player 1 is indifferent between his two strategies regardless of his opponent's play if and only if a = e and c = g.

(iii) Player 1 has U as a strictly dominant strategy if a > e and c > g. He has D as a strictly dominant strategy if a < e and c < g. Player 2 has L as a strictly dominant strategy if b > d and f > h. He has R as a strictly dominant strategy if b < d and f < h.

(iv) Suppose that (U,L) is a Nash equilibrium but that neither player has a strictly dominant strategy; we show that (U,L) cannot be the unique Nash equilibrium. Since (U,L) is an equilibrium, a >= e and b >= d.

Case 1: a > e and b > d. As U is not strictly dominant, c <= g. As L is not strictly dominant, f <= h. These are precisely the conditions for (D,R) to be a Nash equilibrium, which contradicts that (U,L) was unique.

Case 2: a = e and b > d. Let epsilon be a small positive number with (1-epsilon)b + epsilon f > (1-epsilon)d + epsilon h. Then it is a Nash equilibrium for player 2 to play L and for player 1 to play U with probability 1-epsilon and D with probability epsilon. This gives a continuum of equilibria.

Case 3: a > e and b = d. Works just like Case 2.

Case 4: a = e and b = d. If (D,L) or (U,R) is an equilibrium, uniqueness already fails. If (D,L) is not an equilibrium, then f < h. If (U,R) is not an equilibrium, then c < g. These two conditions tell us that (D,R) is a Nash equilibrium, again contradicting that (U,L) was the unique equilibrium.

(v) Suppose player 2 plays L with probability p. Player 1's payoff to U is p + 3(1-p) = 3 - 2p. Player 1's payoff to D is 4p + 0(1-p) = 4p. Graphing the two,

R1(p) = (U) (q = 1) for p < 1/2
      = (U,D) (q in [0,1]) for p = 1/2
      = (D) (q = 0) for p > 1/2.

[Figure: the payoff lines 3 - 2p and 4p plotted against p; they cross at p = 1/2.]

When player 1 plays U with probability q, player 2's payoffs are u2(L) = 2 - 3q and u2(R) = -1 + q. The best response is

R2(q) = (L) (p = 1) for q < 3/4
      = (L,R) (p in [0,1]) for q = 3/4
      = (R) (p = 0) for q > 3/4.

We have a Nash equilibrium when both players are playing a best response. The Nash equilibria are thus intersections of the best response curves in (p,q) space. This gives us two pure strategy equilibria:

(D,L) corresponds to p = 1, q = 0.
(U,R) corresponds to p = 0, q = 1.

There is also one mixed equilibrium in which player 1 plays U with probability 3/4 and D with probability 1/4, and player 2 plays L with probability 1/2 and R with probability 1/2.

[Figure: the best response correspondences R1(p) and R2(q) drawn in (p,q) space; they intersect at the three equilibria.]
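The equilibria in (v) are easy to confirm numerically. The sketch below uses the payoff matrix implied by the expected-payoff expressions above (player 1: 1, 3, 4, 0 and player 2: -1, 0, 2, -1 at (U,L), (U,R), (D,L), (D,R) respectively, which is an inference from those expressions rather than a matrix printed in this manuscript) and checks that neither player can gain by deviating at each of the three equilibria.

```python
import numpy as np

# Payoffs implied by the expressions in (v): rows are U, D; columns are L, R.
U1 = np.array([[1, 3], [4, 0]])     # player 1
U2 = np.array([[-1, 0], [2, -1]])   # player 2

def is_equilibrium(q, p, tol=1e-9):
    """q = probability player 1 plays U, p = probability player 2 plays L."""
    x, y = np.array([q, 1 - q]), np.array([p, 1 - p])
    v1 = x @ U1 @ y            # player 1's expected payoff
    v2 = x @ U2 @ y            # player 2's expected payoff
    best1 = (U1 @ y).max()     # best payoff player 1 could get by a pure deviation
    best2 = (x @ U2).max()     # best payoff player 2 could get by a pure deviation
    return v1 >= best1 - tol and v2 >= best2 - tol

print(is_equilibrium(0, 1), is_equilibrium(1, 0), is_equilibrium(3/4, 1/2))
# True True True: (D,L), (U,R) and the mixed equilibrium all check out.
```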
Exercise 1.2

The game has five pure strategy Nash equilibria: (A,A,A), (B,B,B), (C,C,C), (A,C,C) and (A,B,A). We can see that this is the complete list by looking for equilibria of different types.

1. Equilibria where the "winner" gets three votes: Each of (A,A,A), (B,B,B) and (C,C,C) is clearly an equilibrium, as no matter what single-player deviation is played, the outcome is unchanged.

2. Equilibria where the "winner" gets two votes: Note that either player in the majority can make A the result if he wants; e.g., if (B,B,C) is the profile, player 1 can vote for A and with the tie vote (A,B,C), A is chosen. Hence, player 1 can be part of the majority only if his most preferred outcome A is the equilibrium outcome. This gives the equilibrium (A,B,A). Any (A,A,x) is not an equilibrium, as player 2 would play x. (A,C,A) is not an equilibrium, as player 3 would play C. If player 1 is not part of the majority, players 2 and 3 must form a majority and vote for something both prefer to A. This gives the equilibrium (A,C,C).

3. Equilibria where the "winner" gets one vote: There are none. The votes would have to be A, B and C in some order. As the player who votes for A could have chosen B or C, he must be player 1. Player 2 prefers either B or C to A. He could get one of these outcomes by voting for whatever player 3 is voting for, so (A,B,C) and (A,C,B) do not work.

Exercise 1.3

(a) Let (x,y) be a division with g(x,y) = 1, x >= x0, y >= y0. Suppose player 1 deviates and plays x' != x. If x' < x, then g(x',y) < 1, so player 1 gets x' < x. If x' > x, then g(x',y) > 1, so player 1 gets x0 <= x. In either case, player 1 does not benefit from playing x'. Similarly, player 2 does worse if he plays any y' != y. Thus, (x,y) is a Nash equilibrium. This gives a continuum of Nash equilibria if we assume that g(x0,y0) < 1 and some additional regularity condition like

lim_{x -> infinity} g(x, y0) > 1 and lim_{y -> infinity} g(x0, y) > 1.

(b) Player 1's expected payoff when (x,y) is played is

Prob(g(x,y) <= z)(x - x0) = [1 - F(g(x,y))](x - x0).

Player 1's best response to y is

argmax_x [1 - F(g(x,y))](x - x0).

The first-order condition for the maximization is

x - x0 = [1 - F(g(x,y))] / [f(g(x,y)) (dg/dx)(x,y)].

Similarly, the first-order condition for player 2 implies

y - y0 = [1 - F(g(x,y))] / [f(g(x,y)) (dg/dy)(x,y)].

Let (x_t, y_t) be the Nash equilibrium corresponding to the distribution F_t. We assume that as t -> infinity, F_t converges to a point mass on z = 1, so

F_t(z) -> 0 for z < 1, F_t(z) -> 1 for z > 1, and f_t(z) -> 0 for z != 1.

We first show that if (x_t, y_t) -> (x*, y*) then g(x*, y*) = 1. By the first-order condition for x_t,

x_t - x0 = [1 - F_t(g(x_t,y_t))] / [f_t(g(x_t,y_t)) (dg/dx)(x_t,y_t)].

As t -> infinity, the left side remains bounded. If g(x*,y*) < 1, using the fact that F_t and f_t are continuous and g(x_t,y_t) -> g(x*,y*), we see that 1 - F_t(g(x_t,y_t)) -> 1 and f_t(g(x_t,y_t)) -> 0, so the right side is unbounded. Hence, no solution with g(x*,y*) < 1 is possible. g(x*,y*) > 1 is also impossible: in that case we could find T and delta > 0 such that for t >= T, g(x_t,y_t) > 1 + delta, so Prob(g(x_t,y_t) <= z_t) -> 0. In this case we can show that a player cannot be playing a best response and would do better playing near x0 or y0.

The first-order conditions also tell us that

(x_t - x0)(dg/dx)(x_t,y_t) = [1 - F_t(g(x_t,y_t))] / f_t(g(x_t,y_t)) = (y_t - y0)(dg/dy)(x_t,y_t),

so in the limit

(x* - x0)(dg/dx)(x*,y*) = (y* - y0)(dg/dy)(x*,y*).

The axiomatic solution maximizes (x - x0)(y - y0) subject to g(x,y) = 1. This has the first-order conditions

(x - x0)(dg/dx)(x,y) = (y - y0)(dg/dy)(x,y), g(x,y) = 1.

The calculations above show that the limit point (x*,y*) satisfies the first-order conditions of the axiomatic solution.
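To see where the first-order conditions of the axiomatic solution come from, one can write out the constrained problem with a Lagrange multiplier (a standard computation; the multiplier lambda is introduced here only for this derivation):

maximize (x - x0)(y - y0) subject to g(x,y) = 1,

L = (x - x0)(y - y0) - lambda (g(x,y) - 1),
dL/dx = (y - y0) - lambda (dg/dx) = 0,
dL/dy = (x - x0) - lambda (dg/dy) = 0.

Multiplying the first equation by dg/dy and using the second to substitute for lambda (dg/dy) eliminates lambda and gives

(y - y0)(dg/dy) = (x - x0)(dg/dx),

which is exactly the condition satisfied by the limit point (x*, y*) above.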
Exercise 1.4

(a) Firm 1's profit is given by its per-unit profit (p1 - 10) times the quantity it sells.

(b) (i) Suppose (p1, p2) with p2 > p1 is a pure strategy Nash equilibrium. p1 <= 10 is impossible, as firm 1 could do better by raising its price to 10 or 10 + epsilon. If p1 > 10 and p2 > p1, note that firm 2 earns positive profits, as it could have chosen p2 = p1 and earned a positive profit. Hence, firm 2 has a positive quantity sold. Thus

min(100 - k - p2, k) > 0, so 100 - p2 > k, so firm 1's sales = min(100 - p1, k) = k.

Suppose firm 1 raises its price from p1 to p1 + epsilon, where epsilon is small enough so that 100 - (p1 + epsilon) > k and p1 + epsilon < p2. Firm 1 still sells k units but at a higher price, a profitable deviation. Hence there is no equilibrium with unequal prices, and p1 = p2 > 10 in any Nash equilibrium.

(ii) Suppose now that p1 = p2 = p* is a Nash equilibrium with p* > 100 - 2k. If firm 1 sets p1 = p*, then firm 2 sells quantity min(50 - p*/2, k) if it also charges price p*. As 50 - p*/2 < 50 - (100 - 2k)/2 = k, firm 2's capacity constraint is not binding. If firm 2 were to cut its price to p* - epsilon it would lose epsilon (50 - p*/2) on the units it currently sells. This loss goes to 0 as epsilon goes to 0. Firm 2 also gains approximately (p* - 10)[min(100 - p*, k) - (50 - p*/2)] from the additional units it sells. As epsilon goes to 0, this gain does not go to zero (because the capacity constraint was not binding), so for small enough epsilon it is profitable to deviate and p1 = p2 = p* was not a Nash equilibrium.

If p* < 100 - 2k, then 50 - p*/2 > k, so each firm's capacity constraint is binding. For epsilon small enough, it is easy to verify that with firm 1's price fixed, firm 2 can raise its price from p* to p* + epsilon and still sell k units. This is a profitable deviation, so p1 = p2 = p* < 100 - 2k is not a Nash equilibrium.

(iii) Finally, we now show that p1 = p2 = p* = 100 - 2k is not a Nash equilibrium, so no pure strategy Nash equilibrium exists. Suppose p1 is fixed at p* = 100 - 2k. If firm 2 sets p2 = p*, we have seen that it has profit k(p* - 10). If firm 2 deviates and sets its price at p* + epsilon, its profit is

(p* + epsilon - 10) min(100 - k - (p* + epsilon), k) = (p* - 10 + epsilon)(k - epsilon) = (p* - 10)k + epsilon k - epsilon (p* - 10) - epsilon^2.

This deviation is profitable for small epsilon if k > p* - 10 = 100 - 2k - 10, which is true for k > 30. Under the assumptions of the problem, we have then shown that there can be no pure strategy equilibrium.

Exercise 1.5

Suppose management offers s_m. If the union responds with s_u >= s_m, its offer is accepted if and only if |s_u - x| < |s_m - x|, i.e., x > (s_m + s_u)/2, and the union's expected utility is

[1 - F((s_m + s_u)/2)] s_u + F((s_m + s_u)/2) s_m.

Suppose (s_u*, s_m*) is a Nash equilibrium. Then s_u* must maximize the expression above, which gives the first-order condition

1 - F((s_m* + s_u*)/2) = (1/2) f((s_m* + s_u*)/2)(s_u* - s_m*).

Similarly, if s_u is fixed and management offers s_m <= s_u, it has expected utility

-{F((s_m + s_u)/2) s_m + [1 - F((s_m + s_u)/2)] s_u},

which gives the first-order condition

F((s_m* + s_u*)/2) = (1/2) f((s_m* + s_u*)/2)(s_u* - s_m*),

i.e., management lowers its offer below that of the union until the gain from lowering s_m and getting a better contract when it wins, F((s_m + s_u)/2), just equals the loss from having its offer chosen less often, (1/2) f((s_m + s_u)/2)(s_u - s_m).

Each first-order condition gives us an expression for (1/2) f((s_m* + s_u*)/2)(s_u* - s_m*). Equating these two expressions we see

1 - F((s_m* + s_u*)/2) = F((s_m* + s_u*)/2).

The left and right sides of this equation are precisely the probability of the union's and of management's offer being accepted.

If x is uniform on [-1,1]:

F(s) = (s + 1)/2, f(s) = 1/2.

The previous result F((s_m* + s_u*)/2) = 1/2 gives us (s_m* + s_u*)/2 = 0, i.e., s_m* = -s_u*. Now, substituting for F and f and using (s_m* + s_u*)/2 = 0 in the first-order condition for the union gives

1/2 = (1/2)(1/2)(s_u* - s_m*), i.e., s_u* - s_m* = 2,

so s_u* = 1 and s_m* = -1.
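A quick numerical check of this answer for the uniform case: fix one side's offer at its candidate equilibrium value and verify by grid search that the other side cannot gain by deviating. The grid-search approach and the helper names below are my own; only the uniform distribution on [-1,1] comes from the exercise.

```python
import numpy as np

F = lambda x: np.clip((x + 1) / 2, 0, 1)   # CDF of the arbitrator's preferred settlement, uniform on [-1, 1]

def union_payoff(s_u, s_m):
    # The union's offer is imposed when x > (s_u + s_m)/2; otherwise management's offer is imposed.
    m = (s_u + s_m) / 2
    return (1 - F(m)) * s_u + F(m) * s_m

def management_payoff(s_m, s_u):
    # Management wants the settlement to be small, so its payoff is minus the expected settlement.
    return -union_payoff(s_u, s_m)

grid = np.linspace(-3, 3, 1201)
best_su = grid[np.argmax([union_payoff(s, -1.0) for s in grid])]
best_sm = grid[np.argmax([management_payoff(s, 1.0) for s in grid])]
print(best_su, best_sm)   # approximately 1 and -1, as derived above
```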
Exercise 1.6

(D,R) is clearly a Nash equilibrium, so we know the game has at least one Nash equilibrium. To show uniqueness, we follow the steps in the hint:

(1) There are no other pure strategy equilibria. This follows simply from examining the other eight pure strategy profiles; e.g., (U,L) is not a Nash equilibrium, as player 2 would prefer to play M or R when his opponent plays U.

(2) Player 1 can't put positive weight on both U and M, and player 2 can't put positive weight on both L and M. Suppose that in a Nash equilibrium player 2 plays L, M and R with probabilities p2, q2, and r2. Player 1's payoffs are then p2 - 2q2 if he plays U, -2p2 + q2 if he plays M, and r2 if he plays D. Player 1 can only put weight on both U and M if each is a best response. Thus, we must have

p2 - 2q2 = -2p2 + q2, i.e., p2 = q2.

In this case, though, the payoff to each of U and M equals -q2, which is negative (or zero, in which case r2 = 1), while the payoff to D is r2 >= 0 (and strictly positive when the payoff to U and M is zero). Thus, neither U nor M was a best response. The claim for player 2 is similar.

(3) No mixed equilibrium is possible. We can only have a mixed equilibrium if player 1 puts positive weight on two or more strategies, and by step (2) any such mix must involve D. Suppose he plays U with probability p1 > 0 and D with probability 1 - p1 > 0. Then player 2 has payoffs (-2p1, p1, 1 - p1) to L, M, and R. The only possible mixed strategy for player 2 is then to mix on M and R. Suppose he plays M with probability q2 and R with probability 1 - q2. Then player 1 has payoffs (-2q2, q2, 1 - q2) to U, M, and D. In this case, player 1 is not playing a best response when he plays U with positive probability. A similar calculation shows that player 2 cannot put positive weight on two or more strategies, so no mixed equilibrium exists.

Exercise 1.7

Let theta_i denote the true type of consumer i. Suppose all individuals other than i announce some theta-hat_{-i}. When individual i chooses an announcement theta-hat_i, the realized decision is x = x*(theta-hat_1, ..., theta-hat_I). Consumer i's utility is

V(x, theta_i) + sum_{j != i} V(x, theta-hat_j) - c(x) - k_i(theta-hat_{-i}),

where the last term does not depend on i's announcement. The largest possible value for this expression is clearly obtained for x = x*(theta_i, theta-hat_{-i}), since that decision by definition maximizes V(x, theta_i) + sum_{j != i} V(x, theta-hat_j) - c(x). Player i can achieve this best possible payoff by announcing theta-hat_i = theta_i. Assuming strict concavity, truthtelling is a strict best response and thus dominates any other strategy.

Exercise 1.8

(a) As there is a continuum of consumers, each has no effect on the fraction f who choose to withdraw when everyone else's action is held fixed. Suppose only consumers dying at date 1 withdraw their money at that date. Then f = x, and since x r1 < 1 the bank can pay each date-1 withdrawer r1 and still make a larger payment c2 > r1 to those who wait until date 2. A customer who dies at date 1 gets r1 if he withdraws and 0 if he does not, so he clearly wants to conform to his strategy. A customer who does not die gets r1 if he withdraws at date 1 and c2 > r1 if he does not, so he also prefers to conform to his strategy of not withdrawing.

(b) Another Nash equilibrium is f = 1. Regardless of an individual's actions, if all others withdraw at date 1, then f r1 = r1 > 1, so the bank cannot pay r1 to everyone. Hence c1 = 1/f = 1 and c2 = 0 if he withdraws at date 1, while c1 + c2 = 0 if he does not withdraw. Thus each type of player has a higher utility if he withdraws at date 1, so all players prefer to follow the prescribed strategy and we have a Nash equilibrium.

(c) The payoffs in this game are similar to the stag hunt. If all play the "good" strategy (deer hunting, or not withdrawing unless necessary), the social optimum is achieved. There is also, however, a "bad" equilibrium in each game. Once enough players are playing the "bad" strategy, the other players are forced to do so as well.
Exercise 1.9

(a) Assume that p(q1,q2) = a - b(q1 + q2) for q1 >= 0, q2 >= 0, q1 + q2 <= a/b. When is player 1's reaction function given by the solution to the first-order condition? Fix q2 in (0, a/b). Note first that

d^2 u1 / d q1^2 = -2b - c1''(q1).

If marginal cost is increasing or constant, this expression will be negative for all q1 in (0, a/b - q2). Hence, any interior solution to the first-order condition is the best response.

We can only be sure that the first-order condition will have a solution if the boundary conditions

(du1/dq1)(0, q2) >= 0 and (du1/dq1)(a/b - q2, q2) <= 0

are satisfied. The second boundary condition is

a - b(q2 + a/b - q2) - b(a/b - q2) - c1'(a/b - q2) = -b(a/b - q2) - c1'(a/b - q2) <= 0,

which is satisfied for c1'(a/b - q2) >= 0. However, the first boundary condition

a - b q2 - c1'(0) >= 0

need not hold, for example, if we have constant marginal costs and q2 is close to a/b. With constant marginal costs the best response will be r1(q2) = 0 for q2 sufficiently close to a/b. If, for example, c1(q1) = c q1^2, so that c1'(0) = 0, the boundary condition is satisfied and the first-order condition has a solution.

(b) Now suppose all the firms are identical and have cost functions c_i(q_i) = c q_i. The Nash equilibrium is the solution to the first-order conditions

a - b sum_j q_j - b q_i - c = 0 for each i.

This equation can only hold for all i if all of the firms' outputs are identical. The common output q* must satisfy

a - (I+1) b q* - c = 0, i.e., q* = (a - c) / (b(I+1)).

In this Nash equilibrium the price is

p* = a - b I q* = (a + I c)/(I + 1).

As I goes to infinity the price tends to the competitive level c.

Exercise 1.10

(a) The strategy space for player i is S_i = [0, infinity): each farmer can choose to raise any non-negative number of cows. The payoff functions are given by

u_i(s_1, ..., s_I) = s_i v(s_1 + s_2 + ... + s_I) - s_i c,

the difference between the revenue farmer i's cows produce and their cost.

(b) Given production n_{-i} by the other farmers, we determine player i's reaction by maximizing his payoff. To maximize u_i(n_i, n_{-i}) we must find the solution to the first-order condition at an interior point n_i in (0, N - sum_{j != i} n_j) (if a solution exists) and check the possible boundary solutions n_i = 0 and n_i = N - sum_{j != i} n_j. Note that n_i > N - sum_{j != i} n_j cannot be the best response if that value is positive, as this produces negative profits.

The first-order condition is the familiar marginal revenue = marginal cost condition

v(n_i + sum_{j != i} n_j) + n_i v'(n_i + sum_{j != i} n_j) = c.

Let r_i(n_{-i}) be the solution to this equation. As the payoff function is concave on (0, N - sum_{j != i} n_j), if a solution r_i(n_{-i}) exists it is the best response. If no solution exists, the best response is to set n_i = 0, as n_i >= N - sum_{j != i} n_j always produces negative profits.

We have a symmetric Nash equilibrium if all players are playing a best response, i.e., if

n* = r_i(n*, ..., n*) for all i.

Looking for an equilibrium with n* > 0, n* must satisfy player i's first-order condition

v(I n*) + n* v'(I n*) = c.

If v(0) > c, this equation must have a solution n* in (0, N/I) because

v(0) + 0 v'(0) > c and v(N) + (N/I) v'(N) < c,

and since the left-hand side is decreasing in n*, the solution is unique.

We can show that the social optimum has a lower production level than the Nash equilibrium. To do this, we show that at the aggregate Nash equilibrium production level I n*, the social welfare function W(N) = N v(N) - N c is decreasing in output:

dW/dN at I n* = v(I n*) + I n* v'(I n*) - c = [v(I n*) + n* v'(I n*) - c] + (I - 1) n* v'(I n*) = (I - 1) n* v'(I n*) < 0,

using the first-order condition for n*.

(c) This is a version of the Cournot game where the farmers are like firms with constant marginal cost c, and the yield per cow v(N) is the analogue of the inverse demand function p(q). The result of (b) is the familiar one that industry profit is lower in a Cournot oligopoly than under monopoly.
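To illustrate the comparison in (b), here is a small numerical sketch. The linear yield function v(N) = 1 - N and the cost c = 0.2 are illustrative choices of my own, not part of the exercise; the point is only that the aggregate Nash herd exceeds the socially optimal herd whenever I >= 2.

```python
import numpy as np
from scipy.optimize import brentq

v  = lambda N: 1.0 - N          # illustrative yield per cow
vp = lambda N: -1.0             # its derivative
c  = 0.2

def symmetric_nash(I):
    # Each farmer's first-order condition: v(I*n) + n*v'(I*n) = c
    return brentq(lambda n: v(I * n) + n * vp(I * n) - c, 1e-9, 1.0)

def social_optimum():
    # Total herd maximizing N*(v(N) - c)
    grid = np.linspace(0, 1, 100001)
    return grid[np.argmax(grid * (v(grid) - c))]

for I in (1, 2, 5, 20):
    print(I, I * symmetric_nash(I), social_optimum())
# The aggregate Nash herd I*n* exceeds the socially optimal herd for every I >= 2.
```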
Exercise 1.11

Let G be a game with compact strategy spaces S_i contained in R^m and continuous payoffs u_i(s_1, ..., s_I). We define a sequence of games G^j as follows. Let S_i^j be a finite set of points x_i^{jk}, k = 1, 2, ..., K(j), such that x_i^{jk} is in S_i, S_i^j is contained in S_i^{j+1}, and for every x in S_i, min_k |x - x_i^{jk}| < delta_j, with the sequence delta_j satisfying delta_j -> 0 as j -> infinity. The S_i^j can be thought of as finite collections of grid points. The payoffs in G^j are the same as the payoffs in G.

Each game G^j has a finite number of pure strategies and thus has a mixed strategy equilibrium in which player i assigns weight mu_i^j(x_i^{jk}) to strategy x_i^{jk}. As mentioned in the text (see footnote 23), the set of measures on a compact set is compact in the topology of weak convergence, so we can take a subsequence of the j's such that mu_i^j -> mu_i for each i and mu^j = mu_1^j x ... x mu_I^j -> mu = mu_1 x ... x mu_I, where mu is a measure on S_1 x ... x S_I.

Suppose (mu_1, ..., mu_I) were not a Nash equilibrium of G. Then there exist a player i, an alternate strategy mu-hat_i, and an epsilon > 0 such that

u_i(mu-hat_i, mu_{-i}) > u_i(mu_i, mu_{-i}) + epsilon.

We will use mu-hat_i to construct a profitable deviation from mu^j in the game G^j. Note that

u_i(mu-hat_i, mu_{-i}^j) = integral over S_i of integral over S_{-i} of g_i(x_i, x_{-i}) d mu_{-i}^j(x_{-i}) d mu-hat_i(x_i)

defines the payoff to a mixed strategy, where g_i is the payoff to pure strategies. As mu_{-i}^j -> mu_{-i} and g_i is continuous, there is a J_1 such that j >= J_1 implies

|u_i(mu-hat_i, mu_{-i}^j) - u_i(mu-hat_i, mu_{-i})| < epsilon/3.

Similarly, as mu^j -> mu, there is a J_2 such that j >= J_2 implies

|u_i(mu_i^j, mu_{-i}^j) - u_i(mu_i, mu_{-i})| < epsilon/3.

Since g_i is continuous on the compact set S_1 x ... x S_I, it is uniformly continuous, so there is a delta > 0 such that |x_i - x_i'| < delta implies |g_i(x_i, x_{-i}) - g_i(x_i', x_{-i})| < epsilon/3 for every x_{-i}. Choose J_3 such that j >= J_3 implies delta_j < delta.

For j >= max(J_1, J_2, J_3) we construct a measure mu-tilde_i^j on the grid S_i^j as follows. Partition S_i into disjoint measurable sets S^{jk} with x_i^{jk} in S^{jk}; one way to do this would be to assign x in S_i to the set S^{jk} if x_i^{jk} is the grid point closest to x (with some procedure for breaking ties). Define the new measure by

mu-tilde_i^j(x_i^{jk}) = mu-hat_i(S^{jk}).

With this construction, for each k and every x_{-i},

| integral over S^{jk} of g_i(x_i, x_{-i}) d mu-hat_i(x_i) - g_i(x_i^{jk}, x_{-i}) mu-tilde_i^j(x_i^{jk}) | <= (epsilon/3) mu-hat_i(S^{jk}),

so summing over all k and integrating over the opponents' strategies we find

|u_i(mu-tilde_i^j, mu_{-i}^j) - u_i(mu-hat_i, mu_{-i}^j)| <= epsilon/3.

This gives us

u_i(mu-tilde_i^j, mu_{-i}^j) >= u_i(mu-hat_i, mu_{-i}^j) - epsilon/3 > u_i(mu-hat_i, mu_{-i}) - 2 epsilon/3 > u_i(mu_i, mu_{-i}) + epsilon/3 > u_i(mu_i^j, mu_{-i}^j),

where the third inequality uses the definition of epsilon and the last uses the choice of J_2. This tells us that mu-tilde_i^j is a profitable deviation, which contradicts the fact that mu^j was a Nash equilibrium of G^j.

Exercise 1.12

We look for a symmetric mixed strategy equilibrium in which each player bids x cents with probability p_x, and where p_x > 0 for x in {0, 1, 2, ..., 99}. Recall that when a player uses a mixed strategy he must be indifferent between all of the pure strategies he plays with positive probability. When a player bids zero, he cannot win the dollar and makes no payment, so his payoff is 0. Hence, each player must have an expected payoff of 0 regardless of his action. Using this we can solve for the necessary probabilities p_x.

If a player bids 1, he wins the dollar if and only if his opponent bids 0, i.e., he wins with probability p_0. Regardless of his opponent's strategy he pays 1. The expected payoff (in cents) is p_0 * 100 - 1. This is equal to zero for p_0 = 1/100.

Similarly, if a player bids 2, he wins the dollar with probability p_0 + p_1. The zero expected profit condition is (p_0 + p_1) * 100 - 2 = 0, which implies p_1 = 1/100. This leads us to conjecture that the mixed equilibrium is given by p_0 = p_1 = ... = p_99 = 1/100.

It is easy to verify that this is an equilibrium. Suppose player j uses this mixed strategy. If player i bids x, he wins the dollar with probability x/100. His expected payoff is then

(x/100) * 100 - x = 0.
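A short numerical check of this equilibrium (treating a tie as a loss for both bidders, which is how the winning probability x/100 above is computed; the helper function below is my own):

```python
import numpy as np

p = np.full(100, 1 / 100)          # candidate equilibrium: bid 0..99 cents uniformly

def expected_payoff(bid, opp_dist):
    # Win the dollar (100 cents) only if the opponent bids strictly less; always pay your own bid.
    win_prob = opp_dist[:bid].sum()
    return 100 * win_prob - bid

payoffs = [expected_payoff(b, p) for b in range(100)]
print(np.allclose(payoffs, 0))     # True: every bid earns zero against the uniform strategy
```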
This payoff is zero for all x, so it is a best response for player i to play any mixed strategy. As each player is playing a best response to the other's strategy, we have a Nash equilibrium.

Chapter 2

Exercise 2.1

(a) Let S_i^j be the set of player i's strategies which remain after the j-th round of the usual process of iterated strict dominance, and let T_i^j be the analogous set for the process in which player i's strategies are deleted at step N only if i is in I(N). Let

S^j = S_1^j x ... x S_I^j, T^j = T_1^j x ... x T_I^j.

We first prove the relatively straightforward claim that S^j is contained in T^j, by induction on j. For j = 0, S^0 = T^0, so the result is trivially true. Suppose we know S^j is contained in T^j; we want to show S^{j+1} is contained in T^{j+1}. If s_i is not in T_i^{j+1}, then either s_i is not in T_i^j (in which case s_i is not in S_i^j, so s_i is not in S_i^{j+1}, as desired) or s_i was removed at the (j+1)-st iteration. In this latter case, there is a sigma_i such that

u_i(sigma_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} in T_{-i}^j,

and since S_{-i}^j is contained in T_{-i}^j,

u_i(sigma_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} in S_{-i}^j.

This does not immediately show that s_i is removed from S_i^{j+1}, because sigma_i might put positive weight on strategies not in S_i^j. If this is the case, though, we could take any such strategy a_i not in S_i^j and replace its weight by the weighted average of the strategies that dominated a_i when it was removed at an earlier iteration k <= j. The resulting strategy still dominates s_i against every s_{-i} in S_{-i}^j, so s_i is not in S_i^{j+1}. By induction, S^j is contained in T^j for every j.

For the reverse inclusion we show that for every j there is a round i(j) of the I(N) process with T^{i(j)} contained in S^j; such rounds exist because every player appears in I(N) infinitely often. For j = 0 this is trivial. Now, by the inductive hypothesis, pick a round i~ such that T^{i~} is contained in S^j, and for each player l let i' be the next index larger than i~ with l in I(i'), i.e., such that player l's strategies are removed at round i'. Suppose s_l is not in S_l^{j+1}. If s_l is not in S_l^j, then s_l is not in T_l^{i~} and hence not in T_l^{i'}. Otherwise there is a sigma_l such that

u_l(sigma_l, s_{-l}) > u_l(s_l, s_{-l}) for all s_{-l} in S_{-l}^j, and hence for all s_{-l} in T_{-l}^{i'-1}.

Again, this does not immediately tell us that s_l is removed at round i' of the I(N) process, because sigma_l may put positive weight on actions not in T_l^{i'-1}. As before, though, any weight on such actions can be moved to the actions which dominated them, yielding a strategy which still dominates s_l. Hence we can conclude that s_l is not in T_l^{i'}. Taking the largest of the i' corresponding to l = 1, 2, ..., I gives a round i(j+1) with T^{i(j+1)} contained in S^{j+1}. By induction the I(N) process eventually deletes every strategy deleted by the usual process, so the two processes delete exactly the same strategies in the limit.

(b) Define an alternative deletion process by

Sigma~_i^{N+1} = {sigma_i in Sigma~_i^N : there is no sigma_i' in Sigma~_i^N such that u_i(sigma_i', sigma_{-i}) > u_i(sigma_i, sigma_{-i}) for all sigma_{-i} in Sigma~_{-i}^N},

S~_i^N = {s_i in S_i : s_i in Sigma~_i^N};

S~_i^N is simply the set of pure strategies in Sigma~_i^N.

First, note that all strategies in Sigma~_i^N put weight only on pure strategies in S~_i^N: if s_i was deleted at round k, then every mixed strategy which puts positive weight on s_i was deleted at round k as well (replace the weight on s_i by the strategy which dominated it to obtain a dominating mixture).

Now we show by induction that S~^N = S^N. For N = 0, the result is true by definition. Suppose S~^k = S^k.

Showing S^{k+1} contained in S~^{k+1} is easy. If s_i is deleted at this round from S~_i^{k+1}, then there is a sigma_i in Sigma~_i^k such that u_i(sigma_i, sigma_{-i}) > u_i(s_i, sigma_{-i}) for all sigma_{-i} in Sigma~_{-i}^k. We know sigma_i puts weight only on S~_i^k = S_i^k, so sigma_i is a valid mixture over S_i^k. Also, every s_{-i} in S_{-i}^k = S~_{-i}^k belongs to Sigma~_{-i}^k, so u_i(sigma_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} in S_{-i}^k, and s_i is deleted from S_i^{k+1} as well.

To see that S~^{k+1} is contained in S^{k+1}, suppose that s_i is deleted at this round from S_i^{k+1}. Then there is a sigma_i, mixing over S_i^k, such that

u_i(sigma_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} in S_{-i}^k.

This immediately gives us

u_i(sigma_i, sigma_{-i}) > u_i(s_i, sigma_{-i}) for all sigma_{-i} in Sigma~_{-i}^k,

as all such sigma_{-i} only put weight on pure strategies in S~_{-i}^k = S_{-i}^k. While sigma_i need not be an element of Sigma~_i^k, we can easily fix this. If sigma_i has been deleted at some round j <= k, there is a sigma_i' in Sigma~_i^j with u_i(sigma_i', sigma_{-i}) > u_i(sigma_i, sigma_{-i}) for all sigma_{-i} in Sigma~_{-i}^j, and hence for all sigma_{-i} in Sigma~_{-i}^k. Replacing sigma_i by sigma_i' (and repeating the argument if necessary) yields a strategy in Sigma~_i^k which still dominates s_i, so s_i is deleted from S~_i^{k+1} as well. By induction, S~^N = S^N for all N.

Exercise 2.2

Suppose s* is a Nash equilibrium and suppose, to obtain a contradiction, that some pure strategy a_i to which s_i* assigns positive probability is removed by iterated strict dominance. Consider the first round at which such a strategy is removed for any player, and let sigma_i' be a strategy that strictly dominates a_i against all opponents' strategies remaining at that round. Since no strategy in the support of any s_l*, l different from i, has been removed before this round, in particular

u_i(sigma_i', s_{-i}*) > u_i(a_i, s_{-i}*).

If we define a new strategy sigma-hat_i to be equal to s_i* except that any probability of playing a_i is replaced by playing sigma_i', then

u_i(sigma-hat_i, s_{-i}*) = u_i(s_i*, s_{-i}*) + s_i*(a_i)[u_i(sigma_i', s_{-i}*) - u_i(a_i, s_{-i}*)] > u_i(s_i*, s_{-i}*).

This contradicts the assumption that s* was a Nash equilibrium and establishes the result. (Existence is shown in Section 1.5.1.)

Exercise 2.3

See Gabay and Moulin (1980).
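The deletion arguments in this chapter are easy to experiment with numerically. Below is a minimal sketch of iterated strict dominance for a two-player finite game, restricted for brevity to domination by pure strategies (the exercises above allow domination by mixed strategies, which this sketch does not check); the function name and the prisoner's dilemma example are my own.

```python
import numpy as np

def iterated_strict_dominance(U1, U2):
    """U1[i, j], U2[i, j]: payoffs when the row player plays i and the column player plays j.
    Repeatedly delete any pure strategy strictly dominated by another remaining pure strategy."""
    rows = list(range(U1.shape[0]))
    cols = list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            if any(all(U1[r2, c] > U1[r, c] for c in cols) for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:
            if any(all(U2[r, c2] > U2[r, c] for r in rows) for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# Prisoner's dilemma (rows/columns 0 = cooperate, 1 = defect): only (defect, defect) survives.
U1 = np.array([[3, 0], [5, 1]])
U2 = np.array([[3, 5], [0, 1]])
print(iterated_strict_dominance(U1, U2))   # ([1], [1])
```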
Suppose also chat ve know that player 1's best response R,(q,) satisfies R(q)) = 4) ¥42- x(q) % y Ry (ay) Then, any qi a) or for firm 2 to produce 4) > a. In the next round of iterated dominance we note that if firm 2 never produces q, > 49, firm 1's best response is alvays at least g? for soue go> 0. g) cam be defined similarly. In the next round of iterated dominance use 2 > q) co show that player L's bese response is sf and so again any a) > ah ts strictly dominated. Repeating this process, @\ approaches firm 1's output at D snd a approaches firm 1’s output at B. Exercise 2.5 First note that if all other farmers choose outputs which result in a 25 price p, farmer { will assume that his output has no effect on price, so he maximizes re Pa - and thus has q(i) = ip as his best response to price p. Round 1: As all outputs must be non- jegative, p < a/b must result regardless of each farmer's strategy. Hence, farmer 1's best response 1s always s ip. As in problem 2.4, the farmer's payoff is concave so producing q(1) > dominated by producing q(i) = Round 2: Assume that any q({) > ix is dominated for all i. Then, when everyone plays an undominated strategy: i + bp = D(p) = fF qcsre(et t i s J if(i)xdi : = kx. Thus, p= oie 8 - Kx so player i's best response is always at least if _ By). and we can conclude that any q(i) < (2 5 5) is dominated for player 1. Round 3: Assume q(i) < iy is dominated. Then when others play undominated strategies a - bp & ky gives us that any waft - 5] is dominated. I£ we apply round 2 then 3 we see q > ix dominated implies that is dominated. Repeating this process we s 2 aN eee «i a> ag -E+S-. +k Blt bt az ou that any 26 is dominated. Applying Round 3 to this value, any 2 2 2 a kk k k’ aeit eB EE ee gO -E+5 on St fa asecmeeast pepettieginccnna feria eee eee oe eee 2 wo gg. whenever b > k and the series converg Now, suppose b =k. If we follow the same process as before the Intttal step that q(t) > @ is dominated gives us next that aw 4 dominance Tf we assune that a tatonnenent process starts with q(1) = ix, the sane calculations show chat q(t) = i(f - fx) results in the next pertod. For E< 1 the process is stable. From any starting point the sequence of ourpues converges to a(i) ~ i(gt;]. starting from q(1) = 0, se yields the successive upper and lover bounds from iterated strict dominance. For k/> = the process is not stable Exercise 2.6 (a) Player 2's strategies consist of playing H with probability p, and T with probability 1-p). We can show that it {s dominated for player 1 to play both H and T with Positive probability. In particular, playing H with probability p, and T with probability q, (and a with probability 1-(py4qj)) with p, = q, > 0 ts 27 dominated by the strategy of playing H with probability p)-q,. 7 vith Probabilicy 0 and a with probability 1-(p;+q,)+2a)- No further strategies can be removed. H and T are each undominated for player 2 (consider 1 playing T and H respectively), All strategies for player 1 not yet ruled out achieve the maximum possible payoff when player 2 mixes 50-50. (®) Suppose player 2 plays H with probability p,. Player 1's payoffs to H, T, anda are then 2p)-1, 1-2p, and a respectively. We graph each of these as a function of p,. 
Exercise 2.6

(a) Player 2's strategies consist of playing H with probability p2 and T with probability 1 - p2.

We can show that it is dominated for player 1 to play both H and T with positive probability. In particular, playing H with probability p1 and T with probability q1 (and a with probability 1 - (p1 + q1)), with p1 >= q1 > 0, is dominated by the strategy of playing H with probability p1 - q1, T with probability 0, and a with probability 1 - (p1 + q1) + 2q1: the offsetting weight q1 on each of H and T contributes an expected payoff of zero against any p2, while moving that weight to a contributes 2 q1 a > 0.

No further strategies can be removed. H and T are each undominated for player 2 (consider player 1 playing T and H, respectively), and every strategy for player 1 not yet ruled out achieves the maximum possible payoff against some strategy of player 2 (see the best response regions in part (b)), so it cannot be strictly dominated.

(b) Suppose player 2 plays H with probability p2. Player 1's payoffs to H, T, and a are then 2 p2 - 1, 1 - 2 p2, and a, respectively. We graph each of these as a function of p2.

[Figure: the three payoff lines 2 p2 - 1, 1 - 2 p2, and a plotted against p2.]

It is clear that the best responses for player 1 are:

T for p2 < (1 - a)/2,
a mixed strategy on (T, a) at p2 = (1 - a)/2,
a for (1 - a)/2 < p2 < (1 + a)/2,
a mixed strategy on (a, H) at p2 = (1 + a)/2,
H for p2 > (1 + a)/2.

Starting with these strategies and all possible strategies for player 2, we can see that all of them are best responses, so further iterations of the process defining rationalizability remove no other strategies.

Exercise 2.7

First, we show that when player 1 plays U with probability p and player 2 plays L with probability q, d is not a best response for player 3. This will require a fair amount of algebra.

a is better than d if 9pq >= 6pq + 6(1-p)(1-q).
b is better than d if 9p(1-q) + 9q(1-p) >= 6pq + 6(1-p)(1-q).
c is better than d if 9(1-p)(1-q) >= 6pq + 6(1-p)(1-q).

The first is true if pq >= 2(1-p)(1-q). The third is true if (1-p)(1-q) >= 2pq. If neither of these two holds, we must show that b is better than d.

Case 1: p >= 1/2 and q <= 1/2 (or, by symmetry, p <= 1/2 and q >= 1/2). Then 1 - 2q >= 0, so

p(1-2q) >= (1-p)(1-2q)
if and only if p(1-q) - pq >= (1-p)(1-q) - (1-p)q
if and only if p(1-q) + q(1-p) >= (1-p)(1-q) + pq,

which implies 9[p(1-q) + q(1-p)] >= 6[(1-p)(1-q) + pq], i.e., u3(b) >= u3(d).

Case 2: p >= 1/2 and q >= 1/2 (or, by symmetry, p <= 1/2 and q <= 1/2). Let

f(p,q) = [u3(b) - u3(d)]/3 = -2 + 5(p+q) - 10pq

(after some algebra). We want to show that f(p,q) >= 0 whenever pq <= 2(1-p)(1-q), p >= 1/2, and q >= 1/2. Fix p >= 1/2. Since df/dq = 5 - 10p <= 0, f(p, .) is decreasing in q, so it suffices to check f at the largest q satisfying pq <= 2(1-p)(1-q), namely q-bar = 2(1-p)/(2-p). (If q-bar < 1/2 there is nothing to prove, since then a is better than d throughout the region.) There

f(p, q-bar) = 3(5p^2 - 6p + 2)/(2 - p) > 0,

since the quadratic 5p^2 - 6p + 2 has negative discriminant. Hence f(p,q) >= 0 on the whole region, u3(b) >= u3(d), and again d is not a best response.

To see that d is not dominated, let sigma be a mixed strategy for player 3 with sigma(a) = p, sigma(b) = q, sigma(c) = 1 - (p+q). If sigma dominates d, then u3(sigma) >= u3(d) against any possible strategy of players 1 and 2. If 1 and 2 play (U,L), u3(sigma) = 9p and u3(d) = 6, so we need p >= 2/3. If 1 and 2 play (D,R), u3(sigma) = 9(1 - (p+q)) and u3(d) = 6, so we need p + q <= 1/3. This is clearly impossible, so the strategy sigma cannot dominate d.

Exercise 2.8

(i) A correlated equilibrium is a probability distribution over the outcomes such that each player maximizes his expected payoff conditional on the strategy he is told to play. Let p1, p2, p3, p4 be the probabilities of (U,L), (U,R), (D,L) and (D,R), respectively, in the game of figure 2.4. The conditions that player 1 plays a best response when told U and when told D are respectively

(5 p1 + 0 p2)/(p1 + p2) >= (4 p1 + 1 p2)/(p1 + p2) and (4 p3 + 1 p4)/(p3 + p4) >= (5 p3 + 0 p4)/(p3 + p4),

which imply p1 - p2 >= 0 and p4 - p3 >= 0. The conditions that player 2 plays a conditional best response are

(1 p1 + 4 p3)/(p1 + p3) >= (0 p1 + 5 p3)/(p1 + p3) and (0 p2 + 5 p4)/(p2 + p4) >= (1 p2 + 4 p4)/(p2 + p4),

which imply p1 - p3 >= 0 and p4 - p2 >= 0. Combining all these, we see that p1, p2, p3, p4 must satisfy

(1) p1 + p2 + p3 + p4 = 1 with each p_i >= 0;
(2) min(p1, p4) >= max(p2, p3).

This gives a continuum of equilibria. Among them are the pure strategy equilibria (1,0,0,0) and (0,0,0,1) and the mixed strategy equilibrium (1/4, 1/4, 1/4, 1/4).

(ii) Let p1, p2, p3, p4 be the probabilities of (U,L,A), (U,R,A), (D,L,A) and (D,R,A), respectively, in the game of figure 2.5. Let q1, q2, q3, q4 be the same with B in place of A, and let r1, r2, r3, r4 be the same with C in place of A. The conditions for player 1 to be satisfied with U and with D, and the corresponding conditions for player 2 with L and R, are obtained exactly as in part (i) by comparing conditional expected payoffs. For player 3 we need to compare his payoff when told to play A with each of the payoffs he could get by playing B or C instead, and likewise when he is told to play B and when he is told to play C.
