
MAT378 HW 1

Russell Kim

Problem 1
Qualitatively, we can think of a Nash Equilibrium as a state where no player has any incentive to change
their bid. In mathematical language, we defined a Nash Equilibrium to be a profile a∗ such that, for every player i and every alternative bid bi,

ui(a∗−i, bi) ≤ ui(a∗−i, a∗i)

For the sake of contradiction, let us assume that a∗1 ≠ a∗2. We can assume that a∗1 ≤ v1, because otherwise
player 1 would have at most 0 utility. Then, we can take two cases.

Case 1: a∗1 > a∗2


Looking at the utility functions, the equilibrium condition is violated in this case. We have

u1(a∗−1, b1) = u1(a∗−1, a∗1)

when b1 is any value down to and including a∗2, because player 1 can change his bid from a∗1 to a∗2 without
any loss of utility; player 1's Nash equilibrium bid must be the lowest bid that he can make without loss of
maximum utility. Since this condition is violated, we do not have a Nash equilibrium.
Case 2: a∗1 < a∗2
Looking again at player 1's utility function, we can see that

u1(a∗−1, b1) > u1(a∗−1, a∗1)

for some b1, which is a violation. This inequality holds because, with a∗1 < a∗2, player 1's utility is 0. This bid is not
maximizing player 1's utility, as he can raise it to at least a∗2. So this case does not lead to a Nash equilibrium
either. Since both cases lead to contradictions, the only possible Nash equilibrium is when a∗1 = a∗2, where a∗1 ∈ [v2, v1].
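As a sanity check, this claim can be tested by brute force on a discretized version of the auction. The sketch below is illustrative only: it assumes a first-price sealed-bid auction in which the higher bid wins and the winner pays her own bid, ties are awarded to player 1, and example values v1 = 10 and v2 = 6; none of these specifics are stated above.

```python
# Brute-force search for pure Nash equilibria of a discretized two-bidder auction.
# Assumptions (not given in the write-up): first-price rules, ties go to player 1,
# hypothetical valuations v1 = 10, v2 = 6, and an integer bid grid.
v1, v2 = 10, 6
bids = range(0, 13)

def utilities(b1, b2):
    """Payoffs (u1, u2) when player 1 bids b1 and player 2 bids b2."""
    if b1 >= b2:                 # tie broken in favor of player 1
        return v1 - b1, 0
    return 0, v2 - b2

def is_nash(b1, b2):
    u1, u2 = utilities(b1, b2)
    return (all(utilities(d, b2)[0] <= u1 for d in bids) and
            all(utilities(b1, d)[1] <= u2 for d in bids))

print([(b1, b2) for b1 in bids for b2 in bids if is_nash(b1, b2)])
# -> [(5, 5), (6, 6), ..., (10, 10)]: tied bids spanning roughly [v2, v1]
#    ((5, 5) survives only because player 2 cannot bid "just above" 5 on this grid).
```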

Problem 2
a) We take three cases: x < 1, x = 1, and x > 1.

Case 1: x < 1
In order to maximize utility for both players (Players A and B), we note the following best responses:

When B plays 1, A should play 2
When B plays 2, A should play 2
When A plays 1, B should play 1
When A plays 2, B should play 1
Now we see a loop: B playing 1 leads to A playing 2, which leads back to B playing 1. Therefore, the Nash
Equilibrium is the tuple (2, 1).

Case 2: x = 1
In this case, utility is the same no matter which action is chosen; therefore, every action profile is an equilibrium: a∗1 can be any element of A1 and a∗2 any element of A2.

Case 3: x > 1
In this case, the best responses of players A and B differ from Case 1:
When B plays 1, A should play 1
When B plays 2, A should play 1
When A plays 1, B should play 2
When A plays 2, B should play 2
Here the Nash Equilibrium is the tuple (1,2).

b) In this game, it does not matter what the values of y and z are; the only Nash equilibria are (1,1) and
(2,2).
When A plays 1, B should play 1
When A plays 2, B can play 1 or 2 (but has no incentive to change)
When B plays 1, A can play 1 or 2 (but has no incentive to change)
When B plays 2, A should play 2
The only "loops" present are when Players A and B either both play 1 or both play 2.
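The best-response bookkeeping in parts (a) and (b) can also be automated. The sketch below enumerates the pure Nash equilibria of any 2x2 bimatrix game; the payoff numbers in the example are placeholders (the actual matrices from the problem are not reproduced in this write-up), so only the enumeration logic, not the printed output, corresponds to the game above.

```python
# Enumerate pure-strategy Nash equilibria of a finite two-player game.
# payoff_A[i][j] and payoff_B[i][j] are the payoffs when A plays i and B plays j.
def pure_nash(payoff_A, payoff_B):
    n_A, n_B = len(payoff_A), len(payoff_A[0])
    equilibria = []
    for i in range(n_A):
        for j in range(n_B):
            a_best = all(payoff_A[k][j] <= payoff_A[i][j] for k in range(n_A))
            b_best = all(payoff_B[i][k] <= payoff_B[i][j] for k in range(n_B))
            if a_best and b_best:
                equilibria.append((i + 1, j + 1))   # 1-indexed, as in the write-up
    return equilibria

# Hypothetical coordination-style payoffs, only to exercise the function.
payoff_A = [[2, 0], [0, 1]]
payoff_B = [[2, 0], [0, 1]]
print(pure_nash(payoff_A, payoff_B))   # [(1, 1), (2, 2)] for this example matrix
```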

Problem 3
a) Since this is a zero-sum game, we can use infima and suprema to calculate the value and the Nash equilibria.

Λ(a1) = inf_{a2 ∈ A2} min(a1, a2) = 1

λ̲ = sup_{a1 ∈ A1} Λ(a1) = 1

Λ(a2) = sup_{a1 ∈ A1} min(a1, a2) = a2

λ̄ = inf_{a2 ∈ A2} Λ(a2) = 1

This is a strictly determined game, since we have the condition that λ̲ = λ̄ = 1. The optimal set of
strategies for player one is Θ1(G) = A1 = N (every a1 attains Λ(a1) = 1), and the optimal set of strategies for player two is Θ2(G) = {1}.
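A quick numeric check of these values is possible by truncating the action sets to {1, ..., N}; the truncation (and the choice A1 = A2 = N itself) is an assumption made only for this sketch.

```python
# Lower and upper values of the zero-sum game with payoff min(a1, a2),
# computed on the truncated action sets A1 = A2 = {1, ..., N}.
N = 100
A = range(1, N + 1)
payoff = lambda a1, a2: min(a1, a2)

lower = max(min(payoff(a1, a2) for a2 in A) for a1 in A)   # sup_a1 inf_a2
upper = min(max(payoff(a1, a2) for a1 in A) for a2 in A)   # inf_a2 sup_a1
print(lower, upper)   # 1 1, matching the values computed above
```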

b) Since this is a zero-sum game, we can again use infima and suprema to calculate the value and the Nash equilibria.

Λ(a1) = inf_{a2 ∈ A2} max(a1, a2) = a1

λ̲ = sup_{a1 ∈ A1} Λ(a1) = ∞

Λ(a2) = sup_{a1 ∈ A1} max(a1, a2) = ∞

λ̄ = inf_{a2 ∈ A2} Λ(a2) = ∞

Since we see that λ̲ = λ̄ = ∞, this is a strictly determined game. The optimal set of strategies for player
one is Θ1(G) = {∞}, and the optimal set of strategies for player two is Θ2(G) = {∞}.
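The same truncation idea as in part (a) (again only an assumption made for the sketch) shows both values growing without bound as the truncation grows, consistent with λ̲ = λ̄ = ∞.

```python
# Lower and upper values of the game with payoff max(a1, a2) on truncations {1, ..., N}.
payoff = lambda a1, a2: max(a1, a2)

for N in (10, 100, 1000):
    A = range(1, N + 1)
    lower = max(min(payoff(a1, a2) for a2 in A) for a1 in A)   # sup_a1 inf_a2
    upper = min(max(payoff(a1, a2) for a1 in A) for a2 in A)   # inf_a2 sup_a1
    print(N, lower, upper)   # both values equal N, so they diverge as N grows
```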

Problem 4
a) To show continuity, we need to show that there exists t ∈ (0, 1) such that y ∼ tx + (1 − t)z. Now evaluate this expression:

u(tx + (1 − t)z) = tu(x) + (1 − t)u(z)

and we need this to equal u(y). Set

t = (u(y) − u(z)) / (u(x) − u(z))
Substituting t into the expression above, we obtain

((u(y) − u(z)) / (u(x) − u(z))) u(x) + ((u(x) − u(y)) / (u(x) − u(z))) u(z)

which simplifies to

(u(x)u(y) − u(y)u(z)) / (u(x) − u(z)) = u(y)

as desired.
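This simplification is easy to double-check symbolically; a minimal sketch using sympy, treating u(x), u(y), u(z) as free symbols:

```python
# Symbolic check that t*u(x) + (1 - t)*u(z) reduces to u(y)
# when t = (u(y) - u(z)) / (u(x) - u(z)).
import sympy as sp

ux, uy, uz = sp.symbols('ux uy uz')
t = (uy - uz) / (ux - uz)
mixture = t * ux + (1 - t) * uz
print(sp.simplify(mixture - uy))   # prints 0, confirming the algebra above
```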

b) First, we assume that x > y > z. Then, we evaluate

u(ty + (1 − t)z) = tu(y) + (1 − t)u(z)

We also know that


u(tx + (1 − t)z) = tu(x) + (1 − t)u(z)

Clearly, tu(x) + (1 − t)u(z) > tu(y) + (1 − t)u(z), as tu(x) > tu(y) (since x > y and t > 0), so u(tx + (1 − t)z) > u(ty + (1 − t)z) and we have proved independence.
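For what it is worth, the gap between the two mixtures can also be checked symbolically: it reduces to t(u(x) − u(y)), which is positive whenever u(x) > u(y) and t > 0.

```python
# The difference between the two mixtures factors as t*(u(x) - u(y)).
import sympy as sp

ux, uy, uz, t = sp.symbols('ux uy uz t')
gap = (t * ux + (1 - t) * uz) - (t * uy + (1 - t) * uz)
print(sp.factor(gap))   # t*(ux - uy)
```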

c) For this problem, to prove that Y is a linear subspace of X, we show that for every v1, v2 ∈ Y and every scalar c,

v1 + v2 ∈ Y and cv1 ∈ Y

To show the first condition, we just have to show that u(v1 + v2) = u(0). First, let a = 2v1. From linearity,
we know that

u((1/2)a + (1/2)·0) = u(v1) = (1/2)u(a) + (1/2)u(0)

which, since u(v1) = u(0), leads to

u(0) = (1/2)u(a) + (1/2)u(0)

so we have

u(a) = u(2v1) = u(0)

By similar reasoning, letting a = 2v2, we can find that u(2v2) = u(0) as well. As a result, we can evaluate

u((1/2)(2v1) + (1/2)(2v2)) = u(v1 + v2) = (1/2)u(2v1) + (1/2)u(2v2) = u(0)

So we have proved the first condition.


For the second condition, we need to show that u(cv1) = u(0). Since we know that u is a linear function, we
know that u(cv1) = cu(v1) = cu(0), and cu(0) = u(0) because linearity gives u(0) = 0. The same argument gives u(cv2) = u(0), so Y is closed under scalar multiplication.
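A concrete sanity check of both closure properties, using a hypothetical linear utility on R^2 (the particular u, v1, v2, and c below are invented for the example):

```python
# Check numerically that Y = {v : u(v) = u(0)} is closed under addition and scaling
# for a hypothetical linear utility u(v) = 3*v[0] - 2*v[1] on R^2.
def u(v):
    return 3 * v[0] - 2 * v[1]

def in_Y(v, tol=1e-9):
    return abs(u(v) - u((0, 0))) < tol

v1 = (2.0, 3.0)       # u(v1) = 0 = u(0), so v1 lies in Y
v2 = (-4.0, -6.0)     # u(v2) = 0 = u(0), so v2 lies in Y
c = 7.5

v_sum = (v1[0] + v2[0], v1[1] + v2[1])
v_scaled = (c * v1[0], c * v1[1])
print(in_Y(v_sum), in_Y(v_scaled))   # True True
```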

d) We use the hint and need to show that for any fixed ck ∈ Xk,

Y + ck ⊆ Xk and Y + ck ⊇ Xk,

so for every vy ∈ Y we need to show that


u(vy + ck) = u(ck).

We evaluate

u((1/2)(2vy + 2ck)) = u(vy + ck) = (1/2)u(2vy) + (1/2)u(2ck)
We also know that by linearity,

u(ck) = (1/2)u(2ck) + (1/2)u(0)

and

(1/2)u(2ck) = u(ck) − (1/2)u(0)

so

u(vy + ck) = (1/2)u(2vy) + u(ck) − (1/2)u(0)

Since we know that 0 ∈ Y and vy ∈ Y, and we have proved that Y is a linear subspace of X, we have 2vy ∈ Y and hence u(2vy) = u(0). Substituting this in, we now know that

u(vy + ck) = (1/2)u(0) + u(ck) − (1/2)u(0) = u(ck)

and we have shown what we needed to prove.
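As in part (c), the conclusion can be illustrated numerically with a hypothetical linear u on R^2: translating a point of Y by ck does not change the utility level, so Y + ck stays inside the indifference class of ck (the particular u, ck, and vy below are invented for the example).

```python
# For a hypothetical linear u on R^2, adding an element of Y = {v : u(v) = u(0)}
# to a fixed point ck leaves the utility level of ck unchanged.
def u(v):
    return 3 * v[0] - 2 * v[1]

ck = (1.0, 5.0)       # an arbitrary fixed point; u(ck) = -7
vy = (2.0, 3.0)       # u(vy) = 0 = u(0), so vy lies in Y

shifted = (vy[0] + ck[0], vy[1] + ck[1])
print(u(shifted), u(ck))   # -7.0 -7.0: vy + ck sits in the same level set as ck
```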
