
STUDY GUIDE: BEHAVIORAL ECONOMICS

CHAPTER (2): FUNDAMENTALS


Relations:
(i) Complete: Any two elements can be compared, i.e. either x ≽ y or y ≽ x; if both hold, then x ~ y.
(ii) Transitive: If x ≽ y and y ≽ z, then x ≽ z.
(iii) Reflexive: The DM is indifferent between an element and the element itself, i.e. x ≽ x for every x, so x ~ x.
Preference ordering: A relation that satisfies completeness, transitivity, and
reflexivity
Rational DM: A DM is said to be rational if and only if there exists some preference ordering ≽ such that she always picks ≽-maximal elements in every choice problem S ∈ P.
Rational Choice Correspondence: C is a rational choice correspondence if there is a preference ordering ≽ such that C = C_≽, i.e. the choices the DM makes are ≽-maximal: C_≽(S) = {x ∈ S : x ≽ y for all y ∈ S}.
IIA: Independence of Irrelevant Alternatives means that if a DM picks x from S, then she also picks x from any smaller problem R ⊆ S that contains x, i.e. if R ⊆ S ∈ P then C(S) ∩ R ⊆ C(R).
EXP: Expansion Consistency says that if the DM picks x ∈ C(R), R ⊂ S, and C(R) ∩ C(S) ≠ ∅, then x ∈ C(S). In other words: if C(R) ∩ C(S) ≠ ∅, then C(R) ⊆ C(S).
Satisficing Procedure: Given some order O on a finite set X, which describes the sequence in which options are viewed, let A ⊆ X be the set of acceptable options. In a decision problem S, the DM goes through the options in the order O until she finds one that is in A. If no option in S is acceptable, the next best alternative is picked from the set.
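A minimal sketch of this procedure in Python (the function names are illustrative, and the fallback of picking the last option viewed is just one possible reading of "the next best alternative"):

```python
def satisfice(order, acceptable, S):
    """Satisficing: scan the options in the given order and return the first one in S
    that is acceptable; if none is acceptable, fall back to the last option viewed."""
    viewed = [x for x in order if x in S]   # restrict the search order to the choice problem S
    for x in viewed:
        if x in acceptable:
            return x
    return viewed[-1]                       # fallback when nothing in S is acceptable

# Example: inspection order a, b, c, d; only c and d are acceptable.
print(satisfice(["a", "b", "c", "d"], {"c", "d"}, {"a", "b", "c"}))  # -> "c"
print(satisfice(["a", "b", "c", "d"], {"c", "d"}, {"a", "b"}))       # -> "b" (fallback)
```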
Observed Choice Correspondence: Let the set of decision problems be given by D. The observed choice correspondence C_obs associates to each decision problem S ∈ D some subset C_obs(S) ⊆ S, which represents the DM's observed choices.

Revealed Preference: If the DM is rational, then the elements she chooses reveal her preferences, i.e. if she chooses x in {x, y}, then she must prefer x over y (proposed by Samuelson).
Revealed Strict Preferences: An option x is revealed strictly preferred to y (x ≻* y) if there is some S ∈ P that contains y and such that C_obs(S) = {x}.
Cyclic Relation: Given a set of n elements, say S ∈ P with S = {x_1, …, x_n}, an ordering x_1 ≻* x_2 ≻* … ≻* x_n ≻* x_1 is a cyclic relation, i.e. there is a cycle in the ordering. Any cyclic preference ordering is not rational.
Unambiguously strictly preferred: An option x is unambiguously strictly preferred to another option y if x ≻* y and the revealed relation is acyclic; in that case x ≻* y is equivalent to x being strictly preferred to y under every preference ordering consistent with the observed choices. This is what makes the DM rational: x is also strictly preferred to y.
Valid Forecast: Option x is a valid forecast for a choice problem S ∉ D if there exists a strict preference ordering ≻ such that C_obs(T) ⊆ C_≻(T) for all T ∈ D and x ∈ C_≻(S). (Review)
Utility function: A utility function associates to each option in X a real number. More precisely, u: X → ℝ represents some preference ordering ≽ if, for all x, y ∈ X, x ≽ y if and only if u(x) ≥ u(y).
What does it mean for a utility function to represent an ordering? It means that it assigns a real number to every option in a way that captures the preferences over X. For example, if one option x is strictly preferred to y, then the utility function must reflect this preference, i.e. u(x) > u(y).
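On a finite set the definition can be checked pair by pair; the sketch below is purely illustrative (the example relation and numbers are made up):

```python
from itertools import product

def represents(u, weakly_prefers, X):
    """Return True iff u represents the relation: x ≽ y  <=>  u(x) >= u(y) for all pairs."""
    return all(weakly_prefers(x, y) == (u[x] >= u[y]) for x, y in product(X, repeat=2))

# Example: preferences over X = {mug, pen, cup} given by the ranking mug ≻ pen ~ cup.
rank = {"mug": 2, "pen": 1, "cup": 1}
weakly_prefers = lambda x, y: rank[x] >= rank[y]
u = {"mug": 10.0, "pen": 3.0, "cup": 3.0}
print(represents(u, weakly_prefers, ["mug", "pen", "cup"]))  # True
```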
Endowment Effect: One example of a violation of rationality, in which DMs prefer the good they are endowed with over the alternative. For example, if the choice problem is defined over X = {mug, pen}, even if the DM has the preference mug ≻ pen, once she is endowed with the pen she is unlikely to want to trade.
Status-quo bias: Another example of a violation of rationality: when people are presented with a "default" option, e.g. the standard subscription, they are unlikely to deviate from the status quo, even if a different alternative might benefit them more.

WTP & WTA: This concerns another violation of rationality, in which people's willingness to pay (WTP) differs from their willingness to accept (WTA). A good typically has some value for a person; e.g. a pen might be valued at $10. Rationality suggests that someone endowed with the pen should accept any offer ≥ $10 to give it up, and that the amount they would be willing to pay for the pen should likewise be $10. However, experimental evidence suggests that people generally have a WTA greater than their WTP, which doesn't make sense for rational DMs.
Attraction Effect: (also known as the asymmetric dominance effect) Shows how the mere presence of a dominated option makes another, comparable alternative far more appealing. This sort of behavior violates IIA and therefore rationality.
For example, suppose you have two packages of (days in PAR, days in LDN): (4, 2) and (2, 4). These two aren't easily comparable, so people pick based on their preference over the cities (i.e. whether they prefer London or Paris). Now add a third package, giving (4, 2), (2, 4), (3, 1). The third package is dominated by (4, 2) and so makes the first alternative look better in comparison to the second and (obviously) the third. Someone who picks (2, 4) from the two-package problem but switches to (4, 2) once (3, 1) is added violates IIA: since (4, 2) is chosen from the larger set and also belongs to the smaller set, it should be chosen from the smaller set too. Such a DM would be deemed irrational.
Choice overload: This occurs when people face too many options. The problem is that people overlook options that would maximize their preference ordering; because there are so many options, they don't pay attention to them and end up with a worse option.
Framing effects: How choices are affected by the way the choice problem is phrased/framed. Tversky & Kahneman's disease experiment is a good way to think about this: people overwhelmingly pick differently depending on whether the problem is given a positive or a negative frame.
Attention filter and choice with limited attention: The DM maximizes a preference ordering, but does not necessarily pay attention to all of her options. Suppose the DM picks from a finite set X with a strict preference ordering ≻ and an attention filter A(S).

Attention Filter: Associates to each choice problem S a subset A(S) ⊆ S that the DM considers. Under limited attention, the DM then maximizes her preference ordering over the considered set, picking the ≻-maximal element of A(S).
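A minimal sketch of choice under limited attention, assuming the ranking and the attention filter are given as Python objects (the "first two options in alphabetical order" filter is purely illustrative):

```python
def choose_with_limited_attention(S, attention_filter, better):
    """Pick the ≻-maximal element of A(S): the considered option beaten by no other
    considered option, where better(x, y) means x ≻ y."""
    considered = attention_filter(S)
    for x in considered:
        if all(better(x, y) for y in considered if y != x):
            return x

# Example: strict ranking a ≻ b ≻ c, but the DM only notices the first two options listed.
rank = {"a": 3, "b": 2, "c": 1}
better = lambda x, y: rank[x] > rank[y]
first_two = lambda S: set(sorted(S)[:2])   # illustrative attention filter
print(choose_with_limited_attention({"a", "b", "c"}, first_two, better))  # considers {a, b} -> "a"
print(choose_with_limited_attention({"b", "c"}, first_two, better))       # considers {b, c} -> "b"
```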
Overload attention filter (OAF): This is analogous to IIA. If the DM pays attention to some element x in S, i.e. x ∈ A(S), then she will also pay attention to x in any smaller set R ⊂ S with x ∈ R, i.e. x ∈ A(R).
Consideration attention filter (CAF): The subset of options that the DM pays attention to in a choice problem stays the same if you drop an element that was not paid attention to in the first place. That is: if y ∈ S \ A(S), then A(S \ {y}) = A(S).
Unambiguously paid attention to: Assume choice under limited attention in which OAF has not been refuted. An option x is unambiguously preferred to another option y if x ≻ y for all (≻, A) ∈ PA(C_obs). An option x is unambiguously paid attention to in a choice problem S if x ∈ A(S) for all (≻, A) ∈ PA(C_obs).
NB: PA(C_obs) is the set of all pairs (≻, A) that are consistent with the observed choices C_obs.
Example of choice procedures:
(i) Limited mistakes: Instead of attention filters, think of limited attention as the DM looking exclusively at the first k choices, or the top α% of choices, in a preference ordering. It is equivalent to thinking of the DM as maximizing her preferences systematically within the top α-percentile.
(ii) Compromising conflicting preferences: The DM operates under more than one preference ordering (e.g. ≻ and ≻′). Sometimes these orderings come into conflict, and the resulting choice depends on how the DM combines them. For example, a DM might count, for each option x, the number of alternatives ranked below x under each of the two rankings, and then maximize the sum of the two counts to arrive at a choice (see the sketch below).
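The counting procedure in (ii) could be sketched as follows (illustrative code, not the course's own; higher rank values mean better):

```python
def compromise_choice(S, rank1, rank2):
    """For each option x, count the alternatives in S ranked below x under each of the
    two strict orderings, then pick the option with the largest sum of the two counts."""
    def score(x):
        below_1 = sum(1 for y in S if rank1[x] > rank1[y])
        below_2 = sum(1 for y in S if rank2[x] > rank2[y])
        return below_1 + below_2
    return max(S, key=score)

# Example: two conflicting rankings over {a, b, c}.
rank1 = {"a": 3, "b": 2, "c": 1}   # a ≻ b ≻ c
rank2 = {"b": 3, "c": 2, "a": 1}   # b ≻' c ≻' a
print(compromise_choice({"a", "b", "c"}, rank1, rank2))  # "b" wins with score 3
```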
Context-dependent preference:
Lists and choice function from lists:
Lists are defined as a nonempty sequence of distinct elements of X.

Functions from lists associate to every list one of its elements.
Primacy, recency and contrast effects
Choice procedures over lists:

CHAPTER (3): RISK & UNCERTAINTY
Lottery: Given that O is the set of possible outcomes, a lottery is a probability distribution over O, i.e. it specifies the likelihood with which each element of O occurs.
Expected value of a Lottery: Simply the expected outcome given the probability distribution over O, i.e. E(V) = Σ_i p_i·v(o_i). This is the average of the monetary prizes associated with each outcome, weighted by the lottery's probability distribution over O.
Bernoulli Lottery: Defined by a lottery built from repeated coin flips with two outcomes, say H and T, each occurring with probability 1/2. If the first flip is H the prize is $2; otherwise the game continues. On the second round the prize for H is $2·2 = $4; otherwise the game continues, and so on. The expected value of the Bernoulli lottery is therefore
EV = Σ_{n=1}^{∞} (1/2)^n · $2^n = Σ_{n=1}^{∞} $1 → ∞.
Clearly this doesn't make sense as a valuation, so perhaps expected value isn't the best way to think about these things.
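A quick numerical illustration of why this lottery breaks expected value (the log utility below is just one example of a concave Bernoulli function):

```python
import math

# Expected value: each round n contributes (1/2)**n * 2**n = 1, so the partial
# sums grow without bound as more rounds are included.
ev_partial = sum((0.5 ** n) * (2 ** n) for n in range(1, 51))
print(ev_partial)  # 50.0 after 50 rounds, and it keeps growing linearly

# Expected utility with a concave Bernoulli function u(x) = log(x) converges.
eu_partial = sum((0.5 ** n) * math.log(2 ** n) for n in range(1, 51))
print(round(eu_partial, 4))  # ≈ 2 * log(2) ≈ 1.3863
```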
Expected utility function: A utility function, denoted U, defined over lotteries for which there is a function u: O → ℝ such that U(l) = Σ_{o∈O} l(o)·u(o). In other words, instead of monetary values, each outcome carries some utility, and these utilities are averaged using the lottery's probability distribution.
Bernoulli utility function: This is the function u associated with the expected utility function U. Its purpose is to recalibrate the monetary outcomes, converting each outcome into a real number representing its utility.
Expected utility preference: The preference orderings over lotteries that can be
expressed by an expected utility function.
Certainty Equivalent: The certainty equivalent of a lottery is the sure amount such that the DM is indifferent between receiving that amount and playing the lottery.
Process: Given some Bernoulli utility function u and a lottery l, suppose the lottery has two outcomes, yielding u(o_1) or u(o_2) with probabilities p_1 and p_2 respectively. Then the DM's expected utility from taking the bet is U = p_1·u(o_1) + p_2·u(o_2). To find the CE, set the Bernoulli utility of the sure amount equal to this expected utility, i.e. u(CE) = p_1·u(o_1) + p_2·u(o_2), and solve for CE given the utility function u.
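A worked sketch of this process with illustrative numbers, assuming a square-root Bernoulli function and a 50/50 lottery over $0 and $100:

```python
import math

def certainty_equivalent(outcomes, probs, u, u_inv):
    """CE solves u(CE) = sum_i p_i * u(o_i); invert u on the expected utility."""
    eu = sum(p * u(o) for o, p in zip(outcomes, probs))
    return u_inv(eu)

u = math.sqrt              # concave Bernoulli function -> risk aversion
u_inv = lambda v: v ** 2   # inverse of the square root

ce = certainty_equivalent([0, 100], [0.5, 0.5], u, u_inv)
print(ce)   # 25.0, below the expected value of 50 -> the DM is risk-averse
```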
Risk-averse: A DM is risk-averse if CE ≤ expected value of the lottery. So the DM prefers taking the sure amount to taking the bet. Another way to see this is that the second derivative of the Bernoulli utility function is negative (i.e. the function is concave).
Risk loving: The opposite of a risk-averse DM. A DM is risk loving if CE ≥ expected value of the lottery. So the DM prefers taking the bet to the sure amount. Here the second derivative of the Bernoulli utility function is positive (i.e. the function is convex).
Risk-neutral: The DM is indifferent between the lottery and the sure amount and
you’d have a linear Bernoulli utility function (second derivative = 0).
Allais Paradox: First violation of Expected Utility. The tendency to pick alternatives that contradict expected utility theory is explained by the tendency to overestimate the probability of losing everything (i.e. an outcome of $0) when the higher payoffs are similar (e.g. when the other payoffs are $2500 and $2400), and, in the other choice problem, to overestimate the probability of getting an incrementally higher outcome when the probability of $0 is similar in both options. (See attached diagrams.)
The reasoning in the Allais Paradox goes like this:
There are two choice problems, 1 & 2, each with two options, a and b, giving four possible choice patterns: (a1, a2), (a1, b2), (b1, b2), and (b1, a2). It turns out that only (a1, a2) and (b1, b2) are consistent with expected utility theory. The argument works as follows. Assume you pick a1 ≽ b1:
⟺ 0.33·u(2500) + 0.66·u(2400) + 0.01·u(0) ≥ 1·u(2400)
⟺ 0.33·u(2500) + 0.01·u(0) ≥ 0.34·u(2400)      (subtracting 0.66·u(2400) from both sides)
⟺ 0.33·u(2500) + 0.67·u(0) ≥ 0.34·u(2400) + 0.66·u(0)      (adding 0.66·u(0) to both sides)
This last comparison is exactly choice problem 2! It implies that a2 ≽ b2. Therefore, under expected utility theory, the only pairs of choices that make sense are (a1, a2) and (b1, b2). Any other pattern violates expected utility theory and therefore rationality.
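The algebra can be double-checked numerically: for any Bernoulli function u, U(a1) − U(b1) = U(a2) − U(b2), so no expected utility maximizer can choose a1 in problem 1 and b2 in problem 2 (the particular u below is arbitrary):

```python
import math

def EU(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u(o) for o, p in lottery.items())

a1 = {2500: 0.33, 2400: 0.66, 0: 0.01}
b1 = {2400: 1.0}
a2 = {2500: 0.33, 0: 0.67}
b2 = {2400: 0.34, 0: 0.66}

u = lambda x: math.log(1 + x)   # any increasing Bernoulli function works here
print(round(EU(a1, u) - EU(b1, u), 10) == round(EU(a2, u) - EU(b2, u), 10))  # True
```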
Machina Triangle: A lottery l over three outcomes with probabilities p, q and r, where p + q + r = 1, can be represented graphically using the Machina Triangle. In the triangle, q ∈ [0, 1] is on the y-axis and p ∈ [0, 1] is on the x-axis.
A line connecting the ends of the two axes, i.e. the points where p = 1 and where q = 1, denotes a frontier.
At any point on this line r=0, meaning the third outcome occurs with probability
equal to zero. However, at the bottom left corner of the triangle, r=1 and p=q=0. At
any point inside the triangle, all three outcomes occur with probability greater than
zero.
The triangle can therefore be used to graph any lottery over three outcomes under these parameters.
The Machina Triangle can help show preferences over lotteries by introducing
indifference curves.
Constructing indifference curves on the triangle from Allais Paradox—
Assume a Bernoulli utility function u with u(0) = 0, and let c be a constant. An indifference curve is given by
p·u(2400) + q·u(2500) = c  ⟹  q = c/u(2500) − [u(2400)/u(2500)]·p = A − B·p,
where A = c/u(2500) and B = u(2400)/u(2500).

Since B does not depend on c, then the indifference curves are straight lines and are
parallel to one another.
COMPLETE
Mixture Lottery: A DM can "mix" between lotteries: given two lotteries l and l′ and some p ∈ [0, 1], the mixture lottery p·l ⊕ (1 − p)·l′ is the lottery that gives each outcome o ∈ O with probability p·l(o) + (1 − p)·l′(o).
Independence & Continuity Axioms for choice under risk:
(i) Independence: For some p ∈ [0, 1] and lotteries l, l′, l″: if l ≽ l′ then p·l ⊕ (1 − p)·l″ ≽ p·l′ ⊕ (1 − p)·l″. Since the chance of getting the third lottery l″ is the same in either mixture, what dictates the preference between the mixtures is the preference between l and l′.

(ii) Continuity: Requires some "sandwiching", because we are trying to make preferences over lotteries continuous. Suppose l ≽ l′ ≽ l″; then there exists some p ∈ [0, 1] such that l′ ~ p·l ⊕ (1 − p)·l″.

Non-expected utility: Classes of preferences over lotteries that can all be represented by utility functions, but not all of which can be represented by expected utility functions.
Disappointment aversion: One example of non-expected utility that tries to capture one of the lessons of the Allais Paradox, i.e. that people overestimate the probability of getting a bad outcome when the alternative is much better. For example, in the first decision problem of the Allais Paradox the DM chooses between A and B: in A there is a 1% chance of getting nothing, while B gives $2400 with certainty. Some people reason: by picking A, I might be in the 1% that gets nothing and miss out on the $2400. This creates disappointment, which people want to avoid; they are disappointment averse.
Therefore, assume a Bernoulli function that incorporates disappointment, u_d(o, l), such that the expected utility formula becomes U_d(l) = Σ_{o∈O} l(o)·u_d(o, l).
More specifically, we introduce a function d: ℝ → ℝ, which gives a level of disappointment as a function of how good or bad the outcome of the lottery is relative to what was expected, such that:

u_d(o, l) = u(o) − d(u(o) − Σ_{o′∈O} l(o′)·u(o′)) = u(o) − d(u(o) − U(l))

In other words, the term inside d(·) represents the difference between what you get and what you expected to get. When the expectation was greater than the outcome's utility, people are disappointed, so d(x) ≥ 0. When the outcome is better than the expectation, people aren't disappointed, so d(x) ≤ 0, and overall utility is greater under disappointment aversion.
Summary:
d(x) ≥ 0 if x ≤ 0 → people expected too much (disappointment)
d(x) ≤ 0 if x ≥ 0 → people expected less than what they got
In general:

U_d(l) = U(l) − Σ_{o∈O} l(o)·d(u(o) − U(l))
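A minimal sketch of this evaluation; the piecewise-linear d below is an illustrative assumption satisfying the sign conditions above, not a functional form from the course:

```python
def EU(lottery, u):
    return sum(p * u(o) for o, p in lottery.items())

def disappointment_averse_U(lottery, u, d):
    """U_d(l) = U(l) - sum_o l(o) * d(u(o) - U(l))."""
    base = EU(lottery, u)
    return base - sum(p * d(u(o) - base) for o, p in lottery.items())

u = lambda x: x                                  # risk-neutral baseline, for illustration
d = lambda x: -0.5 * x if x < 0 else -0.1 * x    # falling short hurts more than exceeding helps

risky = {0: 0.5, 100: 0.5}
print(EU(risky, u))                          # 50.0
print(disappointment_averse_U(risky, u, d))  # 40.0: the chance of falling short drags utility down
```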

Prospect Theory: A different take on non-expected utility. Suppose you vary initial wealth and adjust the prizes in each choice problem so that final wealth is unchanged. It turns out that in many cases people don't care about their final wealth; instead they care about the individual lottery and whether they are losing or making money.
The problems below are equivalent in terms of final wealth, but people tend to interpret them as on the right-hand side. In other words, people care about losses and gains, not final wealth.
W = $500:  final wealth $490 or $510 (each with prob. 1/2)  =  gain/loss −$10 or +$10
W = $480:  final wealth $490 or $510 (each with prob. 1/2)  =  gain/loss +$10 or +$30
W = $520:  final wealth $490 or $510 (each with prob. 1/2)  =  gain/loss −$10 or −$30
Another important nuance of this theory is that it posits that people treat losses differently from gains. That is, people are risk-averse when it comes to gains but risk loving when it comes to losses.

Departure from Expected Utility: The theory differs from expected utility insofar as people treat lotteries that are equivalent in terms of final wealth differently, even though this violates expected utility theory. They treat each of the lotteries above differently, even though their expected utility (over final wealth) should be equivalent.
Main takeaway:
(i) People overestimate low positive probabilities and underestimate high probabilities that are below 1.
(ii) Loss Aversion: People are more sensitive to losses than to gains.

Point (ii) is captured by a reimagined Bernoulli function in which the DM's utility function is kinked at zero. In other words, when the outcome is below zero, the DM loses a lot more utility for every unit lost, while when the outcome is positive the DM gains comparatively less for every additional unit earned. The Bernoulli function is such that u(−x) < −u(x) for x > 0.


Point (i) is captured by the DM assigning different weights to different probabilities. When the probability of an outcome is close to zero, the DM overestimates it (gives it a higher weight); when the probability is close to one, the DM underestimates it (gives it a lower weight). To do this, a probability weighting function w is applied to the lottery's probabilities.
Expected Utility under Prospect Theory:

U(l) = Σ_{o∈O} w(l(o))·u(o)
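A sketch of this valuation; the particular value and weighting functions below are common textbook forms, used here only for illustration:

```python
def value(x, loss_aversion=2.25):
    """Gain-loss value function: concave over gains, steeper over losses."""
    return x ** 0.88 if x >= 0 else -loss_aversion * ((-x) ** 0.88)

def weight(p, gamma=0.61):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

def prospect_value(lottery):
    """U(l) = sum_o w(l(o)) * v(o), with outcomes coded as gains/losses."""
    return sum(weight(p) * value(o) for o, p in lottery.items())

print(round(weight(0.01), 3))                         # well above 0.01: small probabilities get extra weight
print(round(prospect_value({-10: 0.5, 10: 0.5}), 3))  # negative: the loss looms larger than the equal gain
```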

What are the problems with Prospect Theory? It can violate first-order stochastic dominance: the model can rank one lottery above another even though the second first-order stochastically dominates the first.
First-order stochastic dominance (FOSD): Applies to comparing lotteries over monetary amounts. By definition, a lottery l FOSD l′ if, for every amount m, the probability of getting at least $m is larger than or equal under l than under l′.
Act: A map that associates a consequence in C to each state of the world in the finite set S. That is, every state of the world is associated with a particular outcome, a consequence, given some action taken in that state.
Regret Theory: An act can be thought of as a choice between lotteries, e.g. whether or not to buy a lottery ticket. Under regret theory, there is a rejoicing function r(m, m′) which captures how much happier the DM is for getting m rather than m′. A DM picks act a over a′ provided that, summing over all states s:

Σ_{s∈S} p(s)·r(a(s), a′(s)) > 0

The rejoicing function has four properties:


(i) r(x, x) = 0: you're equally happy either way because the outcomes are the same.
(ii) r(x, y) ≥ 0 for all x ≥ y: you must be (weakly) happier if you got the better end of the deal given the state of the world.
(iii) r(x, y) = −r(y, x): getting the alternative yields the inverse amount of happiness, i.e. rejoicing is the opposite of regret.
(iv) |r(z, x)| > |r(z, y)| + |r(x, y)| for all x > y > z: the regret between the most preferred and least preferred outcomes is larger than the sum of the intermediate regrets.
NB: Regret Theory may not be consistent with preference maximization because it may produce a cyclic relation between acts.
The point is that regret adds extra pain to getting nothing as opposed to something.
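A sketch of the comparison rule, using a cubic rejoicing function as an illustrative choice that satisfies properties (i)-(iv):

```python
def prefers(act_a, act_b, probs, r):
    """Act a is chosen over act b iff sum_s p(s) * r(a(s), b(s)) > 0."""
    return sum(p * r(act_a[s], act_b[s]) for s, p in probs.items()) > 0

def r(m, m_prime):
    # Cubic rejoicing: large gaps count for more than the sum of small ones.
    return (m - m_prime) ** 3

# Two acts over states {win, lose}: buy the lottery ticket or keep the $1.
probs = {"win": 0.01, "lose": 0.99}
buy   = {"win": 100, "lose": 0}
keep  = {"win": 1,   "lose": 1}
print(prefers(buy, keep, probs, r))  # True here: the large rejoicing in "win" dominates
```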
Uncertainty vs. Risk: Under uncertainty, the definition of acts still applies; however, there is no known (objective) probability distribution over the states of the world in S. Instead, there is some subjective probability, i.e. a distribution that varies across DMs (proposed by Savage). So the probability distribution is now endogenous to the decision-making.
Expected utility under uncertainty: Given by combining the subjective probability distribution with the usual expected utility function:

Σ_{s∈S} p(s)·u(a(s))

One can infer a DM's underlying subjective probability distribution by proposing choice problems such as: the DM gets $100 if it rains tomorrow night, or the DM gets $100 if it doesn't rain tomorrow night. Depending on which option she picks, you can infer which event she thinks is more likely.
Moral Hazard: A situation whereby the DM acts more recklessly (or underestimates the probability of disaster) under some regulation, e.g. making everyone wear safety belts makes people think that the probability of an accident is smaller than before, and they drive more recklessly in response to this change.
State-dependent utility: when the DM’s Bernoulli utility function changes depending
on the state of the world, e.g. if the DM thinks it will rain, then they will value an
umbrella more than they would have otherwise.
Ellsberg Paradox and ambiguity aversion: Assume a bag of 90 balls, 30 of which are Red and 60 of which are Black and Yellow in unknown proportion.
Suppose you have two choice problems:
(i) a1—get $100 if the ball is R. b1—get $100 if the ball is B.
(ii) a2—get $100 if the ball is R or Y. b2—get $100 if the ball is B or Y.
There's a tendency to pick (a1, b2), and this is explained by ambiguity aversion, i.e. people prefer objective probabilities to uncertainty.

CHAPTER (4): MULTIPLE TIME PERIODS
Exponential discounting: A simple model of utility across multiple time periods. There is a discount factor δ that is raised to the power of the time period t and multiplies that period's utility. Consumption profiles over the periods t = 0, 1, 2, …, T are given by (c_t)_{t=0}^{T}, and

U(c_0, c_1, …, c_T) = Σ_{t=0}^{T} δ^t·u(c_t)
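A minimal sketch of exponentially discounted utility (the linear u and the parameter values are illustrative):

```python
def discounted_utility(consumption, delta, u=lambda c: c):
    """U(c_0, ..., c_T) = sum_t delta**t * u(c_t)."""
    return sum((delta ** t) * u(c) for t, c in enumerate(consumption))

print(round(discounted_utility([10, 10, 10], delta=0.9), 2))  # 10 + 9 + 8.1 = 27.1
```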

Per-period discount factor and per-period discount rate:
Formula for the discount factor: δ = 1/(1 + r), where r is the per-period discount rate.

Stationarity and independence for choice over time:


(i) Stationarity: Consider consumption profiles that give x at time t and 0 in every other period (written x@t). Stationarity means the DM only cares about the time gap in consumption when picking between consuming x later and y earlier (x > y, t > s): she prefers x@t to y@s if and only if she prefers x@(t+k) to y@(s+k) for any shift k.
(ii) Independence: If two consumption profiles give the DM the same consumption in some time period, then that period can be excluded from the comparison and the preference ordering remains valid.
Present Bias: Experimental evidence shows that people have a tendency to overweight the present against the future. This is related to procrastination: you overweight the cost of doing a task today against the future unpleasantness of doing it later. So you discount differently across time.
Hyperbolic discounting:
Addresses the problem of present bias. The discount factor is instead defined by δ(t) = 1/(1 + kt). The parameter k calibrates the speed with which utilities are discounted.
Hyperbolic discounting dampens the speed with which future utilities are discounted. With k = 1 versus exponential discounting with δ = 1/2, the hyperbolic discount factor lies above the exponential one (see the attached figure; the top curve is the hyperbolic one).
NB: The ratio of consecutive discount factors under hyperbolic discounting is not constant. Because the model accounts for present bias, stationarity no longer holds: it matters when you pick, not just the gap between the two dates.
Quasi-hyperbolic discounting:
A similar approach to hyperbolic discounting, but it simply leaves the term at time t = 0 undiscounted and multiplies every later period's discounted utility by some β (with β < 1 capturing present bias, since the first period is thereby overweighted). The model is:

U(c_0, c_1, …, c_T) = u(c_0) + β·Σ_{t=1}^{T} δ^t·u(c_t)
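A sketch of the beta-delta formula (illustrative parameters), showing how β < 1 downweights the entire future relative to the present:

```python
def beta_delta_utility(consumption, beta, delta, u=lambda c: c):
    """U(c_0, ..., c_T) = u(c_0) + beta * sum_{t>=1} delta**t * u(c_t)."""
    present = u(consumption[0])
    future = sum((delta ** t) * u(c) for t, c in enumerate(consumption[1:], start=1))
    return present + beta * future

# With beta = 1 this collapses to exponential discounting; beta < 1 adds present bias.
print(round(beta_delta_utility([10, 10, 10], beta=1.0, delta=0.9), 2))  # 27.1
print(round(beta_delta_utility([10, 10, 10], beta=0.5, delta=0.9), 2))  # 18.55: the future is downweighted
```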


Naïveté vs. sophistication in the beta-delta model:
The beta-delta model can be used to distinguish between two kinds of DMs, one naïve and the other sophisticated. Being sophisticated means that the DM is aware of her time-inconsistency and plans accordingly for how she will actually decide once she reaches each time period.
In other words, the sophisticated DM decides by backward induction, while the naïve DM waits and sees at every time period.
Example: assume β = 1/2, δ = 1

Week Instantaneous Utility


1 3
2 5
3 8
4 13
(See attached).
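One way the example could be worked out, assuming (as in the classic version of this exercise) that the DM must pick exactly one week in which to enjoy the listed utility, with β = 1/2 and δ = 1 as above:

```python
beta, delta = 0.5, 1.0
utils = {1: 3, 2: 5, 3: 8, 4: 13}   # week -> instantaneous utility
weeks = sorted(utils)

def naive_choice():
    """Wait-and-see: each week, act now unless some future week looks better from today,
    where future utility is discounted by beta * delta**(t - week)."""
    for week in weeks:
        now = utils[week]
        best_later = max((beta * delta ** (t - week) * utils[t] for t in weeks if t > week), default=0)
        if now >= best_later:
            return week

def sophisticated_choice():
    """Backward induction: anticipate which week the future self would actually act."""
    act_week = weeks[-1]                  # the last self acts for sure
    for week in reversed(weeks[:-1]):
        now = utils[week]
        later = beta * delta ** (act_week - week) * utils[act_week]
        if now >= later:
            act_week = week
    return act_week

print(naive_choice(), sophisticated_choice())  # under these assumptions: week 3 and week 1
```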
DellaVigna & Malmendier:
The paper distinguishes between naïve and sophisticated DMs by producing a new kind of beta-delta model. It concerns how the DM perceives she will account for present bias, i.e. people estimate how large their own β will be; this perceived parameter is β̂. The closer β̂ is to the true β, the more sophisticated the DM.

CHAPTER (5): OTHER-REGARDING PREFERENCES
Other-regarding preference: Models in economics that capture preferences that depend on others' welfare as well as the DM's own. Known as ORPs.
Dictator game:
An individual decides how to split some amount m between himself and another participant. (It turns out that people do not necessarily keep all of m for themselves.)
Ultimatum game:
The first player proposes a split as in the dictator game; the second player can then either accept or reject. If the second player rejects the proposal, both players get (0, 0). In theory, the second player should accept anything at least as good as zero; knowing this, the first player would always propose a split like (10, 0). However, experimental data suggest that second players reject many low offers, even splits such as (9, 1). Knowing this, some first players propose a much more equitable split in order to ensure that the second player accepts.
Trust game:
The first player chooses an amount $m to send to the second player. This amount is multiplied by k > 1, so the second player receives $km. The second player then decides how much of the $km to send back, say $x ≤ km, keeping $(km − x) for himself, while the first player ends up with what she kept plus $x. In theory, the second player would keep all the money and send nothing back; anticipating this, the first player should send nothing in the first place. However, experimental data contradict this: first players do send a positive amount, presumably expecting the second player to return some of it, and in response second players usually do send back a good amount.
Consequentialism: Means that the only things that matter are the final outcomes, as
opposed to the intermediary steps to get there.
Fehr-Schmidt preferences: A modified utility function for the DM that captures envy and guilt with respect to the final outcomes of each player (i.e. m_i and m_j for two players). The utility function looks like this:

u_i(m_i, m_j) = m_i − α_i·max{m_j − m_i, 0} − β_i·max{m_i − m_j, 0}

where α_i is the degree to which the DM is envious and β_i captures the degree to which the DM cares about fairness (how guilty she feels about getting more than the other DM).
Two possible cases:
(i) If the DM gets less than the other DM, so m_i < m_j:
u_i(m_i, m_j) = m_i − α_i·(m_j − m_i). Utility is lower because someone else got more than her.
(ii) If the DM gets more than the other DM, so m_i > m_j:
u_i(m_i, m_j) = m_i − β_i·(m_i − m_j). The DM feels bad about getting more than the other person, so her utility is hurt by this.
NB: If β_i > 1 then the DM values fairness so much that she is better off not getting the money and burning it.
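A direct transcription of the Fehr-Schmidt utility (the parameter values in the example calls are illustrative):

```python
def fehr_schmidt(m_i, m_j, alpha_i, beta_i):
    """u_i(m_i, m_j) = m_i - alpha_i * max(m_j - m_i, 0) - beta_i * max(m_i - m_j, 0)."""
    envy = alpha_i * max(m_j - m_i, 0)
    guilt = beta_i * max(m_i - m_j, 0)
    return m_i - envy - guilt

print(fehr_schmidt(2, 8, alpha_i=1.0, beta_i=0.5))   # -4.0: being behind is painful
print(fehr_schmidt(8, 2, alpha_i=1.0, beta_i=0.5))   # 5.0: being ahead also costs something
print(fehr_schmidt(10, 0, alpha_i=1.0, beta_i=1.5))  # -5.0: with beta_i > 1, an even split (worth 5.0) beats keeping it all
```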

