
1 Moral Hazard: Multiple Agents

1.1 Moral Hazard in a Team

• Multiple agents (firm?). Holmström ('82).

— Partnership: $Q$ jointly affected.

— Individual $q_i$'s (tournaments).

• Common shocks, cooperation, collusion, monitoring.

Agents: $i = 1, \dots, n$, with $U_i(w_i, a_i) = u_i(w_i) - \psi_i(a_i)$.

Efforts: $a = (a_1, \dots, a_n)$, with $a_i \in [0, \infty)$.

Output: $q = (q_1, \dots, q_n) \sim F(q \mid a)$.

Principal: R-N.

Deterministic $Q$: output $Q(a)$ with $\frac{\partial Q}{\partial a_i} > 0$, $\frac{\partial^2 Q}{\partial a_i^2} < 0$, $\frac{\partial^2 Q}{\partial a_i \partial a_j} \ge 0$ for $i \ne j$, and the Hessian $(d^2 Q)_{ij}$ negative definite. Agents: $u_i(w) = w$.

Partnership: $w(Q) = \{w_1(Q), \dots, w_n(Q)\}$ such that for all $Q$, $\sum_{i=1}^{n} w_i(Q) = Q$.

Problem: free-riding (someone else works hard, I gain).

First-best: $\frac{\partial Q(a^*)}{\partial a_i} = \psi_i'(a_i^*)$.

? Nash $(a_i^*)$ = FB $(a_i^*)$?

Agents' choices, FOC:
\[
\frac{d w_i[Q(a_i, a_{-i}^*)]}{dQ} \cdot \frac{\partial Q(a_i, a_{-i}^*)}{\partial a_i} = \psi_i'(a_i).
\]

Locally: $\frac{d w_i[Q(a_i, a_{-i}^*)]}{dQ} = 1$, thus $w_i(Q) = Q + C_i$.

Budget: $\sum_{i=1}^{n} w_i(Q) = Q$ for all (!) $Q$.
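Spelling out why this budget condition cannot hold under the local scheme above (own arithmetic):
\[
\sum_{i=1}^{n} w_i(Q) = \sum_{i=1}^{n} (Q + C_i) = nQ + \sum_{i=1}^{n} C_i,
\]
which equals $Q$ for every $Q$ only if $n = 1$: with $n \ge 2$ the wage bill grows $n$ times faster in $Q$ than output, so full marginal incentives and budget balance are incompatible.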
This requires a third party: a budget breaker (b-b).

Let $z_i = -C_i$ — the payment from agent $i$.

Thus $\sum_{i=1}^{n} z_i + Q(a^*) \ge n Q(a^*)$, and $z_i \le Q(a^*) - \psi_i(a_i^*)$.

At F-B: $Q(a^*) - \sum_{i=1}^{n} \psi_i(a_i^*) > 0$. Thus $\exists\, z = (z_1, \dots, z_n)$.

Note: the b-b loses from higher $Q$'s.

Comments: the b-b is a residual claimant (in fact, each agent is a residual claimant in a certain interpretation (!)). Not the same as Alchian & Demsetz (equity for the manager's incentives to monitor agents properly).
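A quick existence check for the payments $z_i$ (own algebra; reading the first displayed condition as feasibility for the b-b and the second as each agent's participation at $Q(a^*)$): summing $z_i \le Q(a^*) - \psi_i(a_i^*)$ over $i$ and combining with $\sum_i z_i \ge (n-1)\,Q(a^*)$, both can hold for some $z$ iff
\[
(n - 1)\, Q(a^*) \;\le\; n\, Q(a^*) - \sum_{i=1}^{n} \psi_i(a_i^*)
\quad\Longleftrightarrow\quad
Q(a^*) \;\ge\; \sum_{i=1}^{n} \psi_i(a_i^*),
\]
which is exactly the positive-surplus condition at the first best stated above.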
? Other ways to support the first best?

Mirrlees contract: reward (bonus) $b_i$ if $Q = Q(a^*)$, penalty $k$ otherwise (bonuses for certain targets). As long as $b_i - \psi_i(a_i^*) \ge -k$, the F-B can be supported; moreover, if $b$'s and $k$ exist such that $Q(a^*) \ge \sum_{i=1}^{n} b_i$, no b-b is needed.

Interpretation: debt financing by the firm. The firm commits to repay debt $D = Q(a^*) - \sum_i b_i$, and $b_i$ to each $i$. If it cannot, creditors collect $Q$ and each employee pays $k$. (Hm...)

Issues:

(1) Multiple equilibria (as in all coordination-type games, and in the mechanism-design literature). No easy solution unless

(2) the actions of others are observed by the agents, and the principal can base compensation on everyone's reports. Not a problem with Holmström though (positive effort of one agent increases the effort of others).

(3) Deterministic $Q$.

1.2 Special Examples of F-B (approx) via different schemes

Legros & Matthews ('93), Legros & Matsushima ('91).

• Deterministic $Q$, finite $A$'s, detectable deviations.

Say $a_i \in \{0, 1\}$ and $Q^{fb} = Q(1, 1, 1)$. Let $Q_i = Q(a_i = 0,\ a_{-i} = (1, 1))$.

Suppose $Q_1 \ne Q_2 \ne Q_3$: the shirker is identified and punished (to the benefit of the others). Similarly, even if $Q_1 = Q_2 \ne Q_3$.

• Approx. efficiency, $n = 2$.

$a_i \in [0, \infty)$, $Q = a_1 + a_2$, $\psi_i(a_i) = a_i^2/2$. F-B: $a_i^* = 1$.

Idea: use one agent to monitor the other (check with prob. $\varepsilon$).

L & M propose: agent 1 chooses $a_1 = 1$ with prob. $1 - \varepsilon$ (and $a_1 = 0$ — the "check" — with prob. $\varepsilon$). When $Q \ge 1$,
\[
w_1(Q) = (Q - 1)^2/2, \qquad w_2(Q) = Q - w_1(Q);
\]
when $Q < 1$,
\[
w_1(Q) = Q + k, \qquad w_2(Q) = -k.
\]

Check: Agent 1. Set $a_2 = 1$; then $\max_a \bigl[ ((a + 1) - 1)^2/2 - a^2/2 \bigr] = 0$, so agent 1 is indifferent over $a_1$ and randomizing is a best response.

Agent 2. $a_2 \ge 1 \mapsto Q \ge 1$; this implies $a_2^* = 1$, with $U_2 = 1 - \varepsilon/2$. Any $a_2 < 1$ yields $Q < 1$ with prob. $\varepsilon$; the best such deviation is $a_2^* \approx \tfrac{1}{2}$, with $U_2 \approx \tfrac{5}{4} - \varepsilon k$. For $k \ge \tfrac{1}{2} + \tfrac{1}{4\varepsilon}$, $a_2^* = 1$ is optimal.
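A worked check of agent 2's deviation payoff under this scheme (own algebra, keeping only leading terms in $\varepsilon$): if agent 2 plays $a_2 < 1$, then with prob. $1 - \varepsilon$ agent 1 plays $a_1 = 1$, so $Q = 1 + a_2 \ge 1$ and $w_2 = Q - (Q - 1)^2/2 = 1 + a_2 - a_2^2/2$; with prob. $\varepsilon$ agent 1 plays $a_1 = 0$, so $Q = a_2 < 1$ and $w_2 = -k$. Hence
\[
U_2(a_2) = (1 - \varepsilon)\Bigl(1 + a_2 - \tfrac{a_2^2}{2}\Bigr) - \varepsilon k - \tfrac{a_2^2}{2}
\;\approx\; 1 + a_2 - a_2^2 - \varepsilon k,
\]
which is maximized at $a_2 \approx \tfrac{1}{2}$ with value $\approx \tfrac{5}{4} - \varepsilon k$. Comparing with $U_2(1) = 1 - \varepsilon/2$, the deviation is unprofitable iff $\varepsilon k \ge \tfrac{1}{4} + \tfrac{\varepsilon}{2}$, i.e. $k \ge \tfrac{1}{2} + \tfrac{1}{4\varepsilon}$, as stated above.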

• Random output: Cremer & McLean works. (Conditions?)

1.3 Observable individual outputs

\[
q_1 = a_1 + \varepsilon_1 + \alpha \varepsilon_2, \qquad
q_2 = a_2 + \varepsilon_2 + \alpha \varepsilon_1, \qquad
\varepsilon_1, \varepsilon_2 \sim \text{iid } N(0, \sigma^2).
\]

CARA agents: $u(w, a) = -e^{-\mu(w - \psi(a))}$, with $\psi(a) = \tfrac{1}{2} c a^2$.

Linear incentive schemes:
\[
w_1 = z_1 + v_1 q_1 + u_1 q_2, \qquad
w_2 = z_2 + v_2 q_2 + u_2 q_1.
\]

No relative-performance weights: $u_i = 0$.

Principal: $\max_{a, z, v, u} E(q - w)$, subject to $E\bigl[-e^{-\mu(w - \psi(a))}\bigr] \ge u(\bar w)$.

Define $\hat w(a)$ by $-e^{-\mu \hat w(a)} = E\bigl[-e^{-\mu(w - \psi(a))}\bigr]$. Agent's choice: $a \in \arg\max \hat w(a)$.

Useful fact: $E(e^{a \varepsilon}) = e^{a^2 \sigma^2 / 2}$ for $\varepsilon \sim N(0, \sigma^2)$.

(Back to the general case.) For agent 1,
\[
V(w_1) = \operatorname{Var}\bigl(v_1(\varepsilon_1 + \alpha \varepsilon_2) + u_1(\varepsilon_2 + \alpha \varepsilon_1)\bigr)
= \sigma^2 \bigl[(v_1 + \alpha u_1)^2 + (u_1 + \alpha v_1)^2\bigr].
\]

Then the agent's problem is
\[
\max_{a_1} \Bigl\{ z_1 + v_1 a_1 + u_1 a_2 - \tfrac{1}{2} c a_1^2
- \tfrac{\mu \sigma^2}{2} \bigl[(v_1 + \alpha u_1)^2 + (u_1 + \alpha v_1)^2\bigr] \Bigr\}.
\]

Solution: $a_1^* = \frac{v_1}{c}$ (as in the one-agent case), and
\[
\hat w_1 = z_1 + \tfrac{1}{2} \frac{v_1^2}{c} + \frac{u_1 v_2}{c}
- \tfrac{\mu \sigma^2}{2} \bigl[(v_1 + \alpha u_1)^2 + (u_1 + \alpha v_1)^2\bigr].
\]

Principal: $\max_{z_1, v_1, u_1} \bigl\{ \frac{v_1}{c} - \bigl( z_1 + \frac{v_1^2}{c} + \frac{u_1 v_2}{c} \bigr) \bigr\}$ s.t. $\hat w_1 \ge \bar w$; substituting the binding constraint,
\[
\max_{v_1, u_1} \Bigl\{ \frac{v_1}{c} - \tfrac{1}{2} \frac{v_1^2}{c}
- \tfrac{\mu \sigma^2}{2} \bigl[(v_1 + \alpha u_1)^2 + (u_1 + \alpha v_1)^2\bigr] \Bigr\}.
\]

To solve: (1) find $u_1$ to minimize the sum of squares (risk); (2) find $v_1$ (trade-off: risk sharing vs. incentives).

Obtain $u_1 = -\frac{2\alpha}{1 + \alpha^2} v_1$: the optimal incentive scheme reduces the agents' exposure to the common shock. Then
\[
v_1 = \frac{1 + \alpha^2}{1 + \alpha^2 + \mu c \sigma^2 (1 - \alpha^2)^2}.
\]
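For completeness, the algebra behind $u_1$ and $v_1$ (own derivation from the reduced problem above): minimizing the risk term over $u_1$ gives
\[
2\alpha (v_1 + \alpha u_1) + 2 (u_1 + \alpha v_1) = 0
\;\Longrightarrow\;
u_1 = -\frac{2\alpha}{1 + \alpha^2} v_1,
\]
and substituting back, $(v_1 + \alpha u_1)^2 + (u_1 + \alpha v_1)^2 = \frac{(1 - \alpha^2)^2}{1 + \alpha^2} v_1^2$. The FOC in $v_1$,
\[
\frac{1}{c} - \frac{v_1}{c} - \mu \sigma^2 \frac{(1 - \alpha^2)^2}{1 + \alpha^2} v_1 = 0,
\]
then yields $v_1 = \frac{1 + \alpha^2}{1 + \alpha^2 + \mu c \sigma^2 (1 - \alpha^2)^2}$. Note that $\alpha = 0$ gives back the one-agent answer $v_1 = \frac{1}{1 + \mu c \sigma^2}$, $u_1 = 0$.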

1.4 Tournaments

Lazear & Rosen ('81).

Agents: R-N, no common shock.

$q_i = a_i + \varepsilon_i$, with $\varepsilon \sim F(\cdot)$, $E\varepsilon = 0$, $\operatorname{Var} \varepsilon = \sigma^2$. Cost $\psi(a_i)$.

F-B: $1 = \psi'(a^*)$.

$w_i = z + q_i$, with
\[
z + E(q_i) - \psi(a^*) = z + a^* - \psi(a^*) = \bar u.
\]

Tournament: $q_i > q_j \to$ prize $W$; both agents are paid $z$.

Agent: $z + p W - \psi(a_i) \to \max_{a_i}$, where
\[
p = \Pr(q_i > q_j) = \Pr(a_i - a_j > \varepsilon_j - \varepsilon_i) = H(a_i - a_j),
\qquad E_H = 0, \quad \operatorname{Var}_H = 2\sigma^2.
\]

FOC: $W \frac{\partial p}{\partial a_i} = \psi'(a_i)$, i.e.
\[
W\, h(a_i - a_j) = \psi'(a_i).
\]

Symmetric Nash (+ FB): $W = \frac{1}{h(0)}$, and
\[
z + \frac{H(0)}{h(0)} - \psi(a^*) = \bar u.
\]

Result: same as F-B with wages.
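A small numerical illustration (own example; the notes only assume a generic $F$): for iid shocks, $\varepsilon_j - \varepsilon_i$ is symmetric around zero, so $H(0) = \tfrac{1}{2}$ and the base payment is $z = \bar u + \psi(a^*) - \tfrac{1}{2 h(0)}$. If, say, $\varepsilon_i \sim N(0, \sigma^2)$, then $\varepsilon_j - \varepsilon_i \sim N(0, 2\sigma^2)$, $h(0) = \frac{1}{2\sigma\sqrt{\pi}}$, and the first-best prize is
\[
W = \frac{1}{h(0)} = 2\sigma\sqrt{\pi},
\]
growing with the noise in individual output.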
Extension: multiple rounds, with prizes progressively increasing.

Agents risk-averse + common shock: trade-off between $(z, q)$ contracts and tournaments.

1.5 Cooperation and Competition

• Inducing help vs. specialization

• Collusion among agents

• Principal–auditor–agent

Itoh ('91)

2 agents: $q_i \in \{0, 1\}$, $(a_i, b_i) \in [0, \infty) \times [0, \infty)$.

$U_i = u_i(w) - \psi_i(a_i, b_i)$, with $u_i(w) = \sqrt{w}$ and
\[
\psi_i(a_i, b_i) = a_i^2 + b_i^2 + 2 k a_i b_i, \qquad k \in [0, 1].
\]

$\Pr(q_i = 1) = a_i (1 + b_j)$: own effort $a_i$, boosted by the help $b_j$ received from the other agent.

Contract: $w^i = (w^i_{jk})$, where $w^i_{jk}$ is the payment to $i$ when $q_i = j$ and $q_{-i} = k$.

No help: $b_i = 0$, $w_0 = 0$; the principal solves $\max_{w_1} a_i (1 - w_1)$, s.t. $a_i = \tfrac{1}{2}\sqrt{w_1}$ (IC) and IR is met.

Getting help: agent $i$ solves, given $a_j$, $b_j$, $w$ (with $w_{11} > w_{10}$, $w_{01} > w_{00} = 0$), writing $a = a_i$, $b = b_i$:
\[
\max_{a, b}\;
a(1 + b_j)\, a_j (1 + b) \sqrt{w_{11}}
+ \bigl(1 - a(1 + b_j)\bigr) a_j (1 + b) \sqrt{w_{01}}
+ a(1 + b_j)\bigl(1 - a_j (1 + b)\bigr) \sqrt{w_{10}}
- a^2 - b^2 - 2kab.
\]

FOC + symmetry (consider $\partial/\partial b$):
\[
a^2 (1 + b)\bigl(\sqrt{w_{11}} - \sqrt{w_{10}}\bigr) + a\bigl(1 - a(1 + b)\bigr)\sqrt{w_{01}} = 2(b + ak).
\]
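A quick check of this condition (own algebra): differentiating the objective with respect to $b$, only the terms containing $b$ matter, giving
\[
a(1 + b_j)\, a_j \bigl(\sqrt{w_{11}} - \sqrt{w_{10}}\bigr)
+ \bigl(1 - a(1 + b_j)\bigr) a_j \sqrt{w_{01}}
- 2b - 2ka = 0;
\]
imposing symmetry ($a_j = a$, $b_j = b$) and rearranging yields the displayed equation.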
If (as in the no-help contract) $w_{11} = w_{10}$, $w_{01} = 0$, and $k > 0$, then the LHS $= 0$ while the RHS $> 0$ for any $b \ge 0$.

Therefore the principal needs to change $w$ by a discrete amount to obtain any $b$ close to $0$ (even to obtain $b = 0$ with $FOC_b = 0$).

By itself (ignoring the change in $a$), and if $b^*$ is small: since the change in $w$ increases risk, it is costly for the principal to provide these incentives.

Even if $a$ adjusts, since it is then different from the first best for the principal with $b = 0$, the principal loses for sure.

Thus, if $k$ is positive, there is a discontinuity at $b = 0$, and "a little" help will not help: for all $b < b^*$ the principal is worse off.

For $k = 0$, help is always better. Two-step argument: 1. If $a_{\text{help}} \ge a_{b=0}$, the marginal cost of help is of second order, so help is always good. 2. Show that $a_{\text{help}} \ge a_{b=0}$.

1.6 Cooperation and collusion

CARA agents: $u_i(w, a) = -e^{-\mu_i (w_i - \psi_i(a_i))}$.

$q_i = a_i + \varepsilon_i$, with
\[
(\varepsilon_1, \varepsilon_2) \sim N(0, V), \qquad
V = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}, \qquad
\rho = \sigma_{12} / (\sigma_1 \sigma_2).
\]

Linear incentive schemes:
\[
w_1 = z_1 + v_1 q_1 + u_1 q_2, \qquad
w_2 = z_2 + v_2 q_2 + u_2 q_1.
\]

• No side contracts ($CE_2(a_1, a_2)$ analogously):
\[
CE_1(a_1, a_2) = z_1 + v_1 a_1 + u_1 a_2 - \psi_1(a_1)
- \tfrac{\mu_1}{2}\bigl(v_1^2 \sigma_1^2 + u_1^2 \sigma_2^2 + 2 v_1 u_1 \sigma_{12}\bigr).
\]

Principal (R-N):
\[
(1 - v_1 - u_2) a_1 + (1 - u_1 - v_2) a_2 - z_1 - z_2 \to \max,
\]
s.t. $(a_1^*, a_2^*)$ is a NE in efforts, and $CE_i \ge 0$.

Individual choices: $v_i = \psi_i'(a_i)$.

The $u$'s are set to minimize risk exposure: $u_i = -v_i \frac{\sigma_i}{\sigma_j} \rho$.

Total risk exposure: $\sum_{i=1}^{2} \frac{\mu_i}{2} \bigl[ v_i^2 \sigma_i^2 (1 - \rho^2) \bigr]$.
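These two expressions follow from minimizing the variance term in $CE_i$ over $u_i$ (own algebra):
\[
\frac{\partial}{\partial u_i}\bigl(v_i^2 \sigma_i^2 + u_i^2 \sigma_j^2 + 2 v_i u_i \sigma_{12}\bigr) = 0
\;\Longrightarrow\;
u_i = -v_i \frac{\sigma_{12}}{\sigma_j^2} = -v_i\, \rho\, \frac{\sigma_i}{\sigma_j},
\]
and substituting back, the minimized variance is $v_i^2 \sigma_i^2 (1 + \rho^2 - 2\rho^2) = v_i^2 \sigma_i^2 (1 - \rho^2)$, which gives the total exposure above.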


• Full side-contracting:

• (?) Enough to consider contracts on $(a_1, a_2)$.

• The problem reduces to a single-agent problem with $\frac{1}{\mu} = \frac{1}{\mu_1} + \frac{1}{\mu_2}$, with cost $\psi(a_1, a_2) = \psi_1(a_1) + \psi_2(a_2)$.

• Full side-contracting dominates no s-c iff $\rho \le \rho^*$ (cooperation vs. relative-performance evaluation).

• MD schemes.

1.7 Supervision and Collusion

Principal: $V > 1$.

Agent: cost $c \in \{0, 1\}$, $\Pr(c = 0) = \tfrac{1}{2}$.

Monitor: cost $z$; produces proof $y^*$, with $\Pr(y^* \mid c = 0) = p$.

Assume $V > 2$ (so that paying $w = 1$ and implementing with probability 1 is optimal without a monitor).

With a monitor (no collusion):
\[
\tfrac{1}{2} p V + \bigl(1 - \tfrac{1}{2} p\bigr)(V - 1) - z.
\]
Compare to $V - 1$.

Collusion (agent–monitor): a transfer $T$ from the agent is worth $kT$ to the monitor, $k \le 1$; $\max T = 1$.

Principal: reward the monitor for $y^*$ with $w \ge k$. (Punish when there is no $y^*$?)

Suppose not; that is, suppose $\tfrac{1}{2} p k > z$ (and thus $w_{\mathrm{mon}} = 0$).

Principal:
\[
\tfrac{1}{2} p (V - k) + \bigl(1 - \tfrac{1}{2} p\bigr)(V - 1).
\]

• No gain from allowing collusion.

• If $k$ is random, then a gain is possible.
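Rearranging the two payoff expressions above (own bookkeeping, for the stated case $\tfrac{1}{2} p k > z$ and $w_{\mathrm{mon}} = 0$):
\[
\underbrace{V - 1}_{\text{no monitor}}
\;\le\;
\underbrace{V - 1 + \tfrac{1}{2} p (1 - k)}_{\text{monitor, collusion-proof reward } k}
\;<\;
\underbrace{V - 1 + \tfrac{1}{2} p - z}_{\text{monitor, no collusion}},
\]
so hiring the monitor remains worthwhile whenever $k < 1$, but collusion-proofness costs the principal $\tfrac{1}{2} p k - z > 0$ relative to the no-collusion benchmark.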
