
Economics 765 Assignment 1

1.3 In the one-period binomial model (as considered in the first class), suppose we want
to determine the price at time zero of the derivative security with payoff V_1 = S_1. This
means that the derivative security pays the stock price, in either state of the world. It can
also be interpreted as a European call option with strike price K = 0. (Why? Be sure you
understand this.) Use the risk-neutral pricing formula

$$V_0 = \frac{1}{1+r}\,\bigl[\tilde p\,V_1(H) + \tilde q\,V_1(T)\bigr]$$

to compute the time-zero price V_0 of this option.

The value of a European call with strike price K is (S_1 − K)^+. We can safely assume that
S_1 ≥ 0, and so, if K = 0, (S_1 − K)^+ = S_1.
Recall that the risk-neutral probabilities are given by

$$\tilde p = \frac{1+r-d}{u-d} \qquad\text{and}\qquad \tilde q = \frac{u-1-r}{u-d}.$$

We set V_1(H) = S_1(H) = uS_0 and V_1(T) = S_1(T) = dS_0, and then we get from the pricing
formula that

$$V_0 = \frac{S_0}{1+r}\left[\frac{(1+r-d)\,u}{u-d} + \frac{(u-1-r)\,d}{u-d}\right] = S_0.$$

Thus the unsurprising result is that you pay the market price of the underlying security in
order to obtain a derivative security whose value is identical to that of the security itself.
The hedging portfolio is trivial: it consists of one unit of the underlying asset. It is
unnecessary to hold any of the risk-free asset.
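
As a quick numerical sanity check (not part of the assignment), here is a short Python sketch that evaluates the pricing formula for made-up values of S_0, u, d and r and confirms that V_0 = S_0.

```python
# Sketch: check that V_0 = S_0 for the payoff V_1 = S_1 in the one-period binomial model.
# The numerical values of S0, u, d and r below are made up for illustration.

def time_zero_price(S0, u, d, r):
    """Risk-neutral price of the claim that pays V_1 = S_1."""
    p_tilde = (1 + r - d) / (u - d)   # risk-neutral probability of a head
    q_tilde = (u - 1 - r) / (u - d)   # risk-neutral probability of a tail
    V1_H, V1_T = u * S0, d * S0       # the payoff equals the stock price in each state
    return (p_tilde * V1_H + q_tilde * V1_T) / (1 + r)

S0, u, d, r = 100.0, 1.25, 0.8, 0.05  # any values with 0 < d < 1 + r < u will do
print(time_zero_price(S0, u, d, r))   # prints 100.0 (up to floating-point rounding)
```
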
1.1 The axioms for a probability measure P defined on a measure space (Ω, F) can be
stated as follows:
(i) P(Ω) = 1, and
(ii) whenever A_1, A_2, . . . is a sequence of disjoint sets in F, then

$$P\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr) = \sum_{n=1}^{\infty} P(A_n).$$

Use these axioms to show the following.

(i) If A ∈ F, B ∈ F, and A ⊂ B, then P(A) ≤ P(B).
(ii) If A ∈ F and {A_n}_{n=1}^∞ is a sequence of sets in F with lim_{n→∞} P(A_n) = 0 and A ⊂ A_n
for every n, then P(A) = 0.

(i) The subset B \ A is defined as the set of points in B that are not in A:

$$B \setminus A = \{\omega \mid \omega \in B,\ \omega \notin A\}.$$

It follows that A and B \ A are disjoint, and that their union is B. Then using axiom (ii)
with A_1 = A, A_2 = B \ A, and A_n = ∅ for n > 2, we see that

$$P(B) = P(A) + P(B \setminus A).$$

Since P(B \ A) ≥ 0, this equality implies P(A) ≤ P(B), as required.
(ii) From part (i), we see that, since A ⊂ A_n for all n, it follows that P(A) ≤ P(A_n).
Letting n → ∞, we get

$$P(A) \le \lim_{n\to\infty} P(A_n) = 0.$$

Since also P(A) ≥ 0, we get the desired result that P(A) = 0.


1.5 When dealing with double Lebesgue integrals, just as with double Riemann integrals,
the order of integration can be reversed. The only assumption required is that the function
being integrated should be either nonnegative or integrable. Here is an application of this
fact.
Let X be a nonnegative random variable with cumulative distribution function F(x) =
P(X ≤ x). Show that

$$EX = \int_0^{\infty} (1 - F(x))\,dx$$

by showing that

$$\int_{\Omega} \int_0^{\infty} I_{[0,X(\omega)[}(x)\,dx\,dP(\omega)$$

is equal to both EX and $\int_0^{\infty} (1 - F(x))\,dx$.

If we look at the inner integral in the double integral above, we see that it is

$$\int_0^{\infty} I_{[0,X(\omega)[}(x)\,dx = \int_0^{X(\omega)} dx = X(\omega).$$

Thus the double integral itself is

$$\int_{\Omega} X(\omega)\,dP(\omega) = EX,$$

by definition of the expectation.

When we reverse the order of integration, the double integral becomes

$$\int_0^{\infty} \int_{\Omega} I_{[0,X(\omega)[}(x)\,dP(\omega)\,dx.$$

The inner integral is now to be thought of as a function of x. The indicator function that
is the integrand is equal to 1 iff x < X(ω), which is equivalent to X(ω) > x. Thus we see
that

$$\int_{\Omega} I_{[0,X(\omega)[}(x)\,dP(\omega) = \int_{\{X > x\}} dP(\omega) = P(X > x) = 1 - P(X \le x) = 1 - F(x).$$

Thus the double integral is

$$\int_0^{\infty} (1 - F(x))\,dx.$$

This gives the desired conclusion.
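
For illustration only, the following Python sketch compares a Monte Carlo estimate of EX with a numerical approximation of $\int_0^{\infty}(1 - F(x))\,dx$ for one particular nonnegative distribution; the exponential law and all numerical settings are arbitrary choices.

```python
# Sketch: compare a Monte Carlo estimate of E[X] with the integral of the survival
# function 1 - F(x), for an exponential law with rate 2 (an arbitrary illustrative choice).
import numpy as np

rng = np.random.default_rng(0)
rate = 2.0
samples = rng.exponential(scale=1.0 / rate, size=1_000_000)
mean_estimate = samples.mean()                # Monte Carlo estimate of E[X]

dx = 1e-4
xs = np.arange(0.0, 20.0, dx)                 # truncate the integral at x = 20
survival = np.exp(-rate * xs)                 # 1 - F(x) for this exponential law
integral_estimate = survival.sum() * dx       # Riemann-sum approximation of the integral

print(mean_estimate, integral_estimate)       # both should be close to 1 / rate = 0.5
```
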


1.9 Suppose that X is a random variable defined on a probability space (Ω, F, P), that
A ∈ F, and that, for every Borel subset B of ℝ, we have

$$\int_A I_B(X(\omega))\,dP(\omega) = P(A)\,P\{X \in B\}. \qquad\text{(S.01)}$$

Then we say that X is independent of the event A.

Show that, if X is independent of an event A, then

$$\int_A g(X(\omega))\,dP(\omega) = P(A)\,Eg(X) \qquad\text{(S.02)}$$

for every nonnegative, Borel-measurable function g.

This is just the standard machine. If g = I_B, then the left-hand side of (S.02) is the same
as that of (S.01). Then the result follows on noting that

$$Eg(X) = EI_B(X) = \int_{\Omega} I_B(X(\omega))\,dP(\omega) = \int_{\{X \in B\}} dP(\omega) = P(X \in B),$$

so that the right-hand side of (S.02) is also equal to that of (S.01). The next step uses
linearity. We consider simple functions of the form

$$g(x) = \sum_{k=1}^{n} a_k I_{B_k}(x).$$

We now see that

$$\begin{aligned}
\int_A \sum_{k=1}^{n} a_k I_{B_k}(X(\omega))\,dP(\omega)
  &= \sum_{k=1}^{n} a_k \int_A I_{B_k}(X(\omega))\,dP(\omega) \\
  &= \sum_{k=1}^{n} a_k\, P(A)\, P(X \in B_k) \\
  &= P(A) \sum_{k=1}^{n} a_k\, P(X \in B_k) \\
  &= P(A) \sum_{k=1}^{n} a_k\, EI_{B_k}(X) \\
  &= P(A)\, E\sum_{k=1}^{n} a_k I_{B_k}(X).
\end{aligned}$$

The first equality follows from the linearity of the integral; the second from the first step of
the machine, just proved; the third is trivial; the fourth follows from the definition of the
expectation of a random variable, here I_{B_k}(X), that can take on only finitely many values;
and the fifth is again linearity, this time of the expectation. This shows that the result is
true for simple functions. The third step of the machine uses monotone convergence as usual,
and the fourth step, which separates the positive and negative parts of a function, is not even
asked for here.
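
As an optional illustration of (S.02), the Python sketch below simulates an event A independently of X and checks that the average of g(X) over A matches P(A)·Eg(X); the distribution of X, the event A, and the function g are all made-up choices.

```python
# Sketch: Monte Carlo check of (S.02): E[1_A g(X)] = P(A) E[g(X)] when X is independent of A.
# The law of X, the event A and the function g are all illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

X = rng.exponential(scale=1.0, size=n)     # a nonnegative random variable
A = rng.uniform(size=n) < 0.3              # an event generated independently of X

def g(x):                                  # a nonnegative Borel-measurable function
    return np.sqrt(x) + x**2

lhs = np.mean(np.where(A, g(X), 0.0))      # estimates the integral of g(X) over A
rhs = A.mean() * g(X).mean()               # estimates P(A) * E[g(X)]
print(lhs, rhs)                            # the two estimates should agree closely
```
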
1.10 Let P be the uniform (Lebesgue) measure on Ω = [0, 1]. Define

$$Z(\omega) = \begin{cases} 0 & \text{if } 0 \le \omega < \tfrac12, \\ 2 & \text{if } \tfrac12 \le \omega \le 1. \end{cases}$$

For A ∈ B[0, 1], define

$$\tilde P(A) = \int_A Z(\omega)\,dP(\omega).$$

(i) Show that P̃ is a probability measure.

(ii) Show that, if P(A) = 0, then P̃(A) = 0. We say that P̃ is absolutely continuous with
respect to P.

(iii) Show that there is a set A for which P̃(A) = 0 but P(A) > 0. In other words, P and P̃
are not equivalent.

(i) We know that all we need for P̃ to be a probability measure is that Z should be
measurable, nonnegative, and of expectation 1. It is obviously measurable, since its only
nontrivial inverse images are {0 ≤ ω < 1/2} and {1/2 ≤ ω ≤ 1}, which, being intervals, are
Borel sets. Nonnegativity is trivial. The expectation is that of a random variable that can
take on only finitely many values, and so is calculated as follows:

$$EZ = \int_{[0,1]} Z(\omega)\,dP(\omega) = 0 \cdot P\bigl(0 \le \omega < \tfrac12\bigr) + 2 \cdot P\bigl(\tfrac12 \le \omega \le 1\bigr) = 2 \cdot \tfrac12 = 1.$$

(ii) We calculate as follows, noting that Z = 2 I_{[1/2,1]}. If P(A) = 0, then

$$\tilde P(A) = \int_A Z(\omega)\,dP(\omega) = 2\int_{[0,1]} I_A\, I_{[1/2,1]}\,dP = 2\int_{[0,1]} I_{A \cap [1/2,1]}\,dP = 2P\bigl(A \cap [1/2,1]\bigr) \le 2P(A) = 0.$$

The inequality at the second-last step is a consequence of the result of exercise 1.1.
(iii) Any set A contained in the interval [0, 1/2[ is such that P̃(A) = 0, because, as we
just showed, this quantity is twice the P-probability of A ∩ [1/2, 1], which is the
empty set. Of course any interval of positive length in [0, 1/2[ has positive P-measure.
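
A small simulation can make the contrast between P and P̃ concrete. In the Python sketch below (illustrative only), P̃(A) is estimated as the P-average of I_A·Z, and the two measures are compared on a set inside [0, 1/2[ and a set inside [1/2, 1].

```python
# Sketch: compare P (uniform on [0,1]) with P~ defined by the density Z = 2 * 1_{[1/2,1]}.
# The two test sets A1 and A2 are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
omega = rng.uniform(size=1_000_000)          # draws from P
Z = np.where(omega >= 0.5, 2.0, 0.0)         # the density of P~ with respect to P

A1 = omega < 0.25                            # A1 inside [0, 1/2[: P(A1) > 0 but P~(A1) = 0
A2 = (omega >= 0.5) & (omega < 0.75)         # A2 inside [1/2, 1]: P~(A2) = 2 P(A2)

for A in (A1, A2):
    P_A = A.mean()                           # Monte Carlo estimate of P(A)
    P_tilde_A = (A * Z).mean()               # Monte Carlo estimate of P~(A) = E[1_A Z]
    print(P_A, P_tilde_A)                    # roughly (0.25, 0.0) and (0.25, 0.5)
```
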

1.14 Let X be a nonnegative random variable defined on a probability space (Ω, F, P)
with the exponential distribution, which is

$$P\{X \le a\} = 1 - e^{-\lambda a}, \qquad a \ge 0,$$

where λ is a positive constant. Let λ̃ be another positive constant, and define

$$Z = \frac{\tilde\lambda}{\lambda}\, e^{-(\tilde\lambda - \lambda)X}.$$

Define P̃ by

$$\tilde P(A) = \int_A Z\,dP$$

for all A ∈ F.

(i) Show that P̃(Ω) = 1.

(ii) Compute the distribution function

$$\tilde P\{X \le a\} \quad\text{for } a \ge 0$$

for the random variable X under the probability measure P̃.

(i) By definition,

$$\tilde P(\Omega) = \int_{\Omega} Z\,dP = \frac{\tilde\lambda}{\lambda}\int_{\Omega} \exp\bigl(-(\tilde\lambda-\lambda)X(\omega)\bigr)\,dP(\omega) = \frac{\tilde\lambda}{\lambda}\,E\exp\bigl(-(\tilde\lambda-\lambda)X\bigr), \qquad\text{(S.03)}$$

where the last equality is just the definition of the expectation. Recall the measure μ_X
defined on (ℝ, B) by the requirement that, for B ∈ B, μ_X(B) = P(X ∈ B). Thus

$$\mu_X([0, x]) = P(X \in [0, x]) = P(0 \le X \le x) = 1 - e^{-\lambda x},$$

since it is clear from the CDF that X takes on only nonnegative values. Recall also that,
for a nonnegative Borel-measurable function g,

$$Eg(X) = \int_{\mathbb{R}} g(x)\,d\mu_X(x).$$

Thus

$$\begin{aligned}
E\exp\bigl(-(\tilde\lambda-\lambda)X\bigr)
  &= \int_0^{\infty} \exp\bigl(-(\tilde\lambda-\lambda)x\bigr)\,d\bigl(1 - e^{-\lambda x}\bigr)
   = \int_0^{\infty} \exp\bigl(-(\tilde\lambda-\lambda)x\bigr)\,\lambda e^{-\lambda x}\,dx \\
  &= \lambda \int_0^{\infty} e^{-\tilde\lambda x}\,dx
   = \lambda \Bigl[-\tfrac{1}{\tilde\lambda}\,e^{-\tilde\lambda x}\Bigr]_0^{\infty}
   = \frac{\lambda}{\tilde\lambda}.
\end{aligned}$$

Finally, from (S.03) we see that P̃(Ω) = (λ̃/λ)·(λ/λ̃) = 1.

(ii) This is a straightforward computation:

$$\begin{aligned}
\tilde P\{X \le a\}
  &= \int_{\{X \le a\}} Z\,dP
   = \int_{\Omega} I_{[0,a]}(X)\,Z\,dP
   = E\bigl(I_{[0,a]}(X)\,Z\bigr) \\
  &= \int_{\mathbb{R}} I_{[0,a]}(x)\,\frac{\tilde\lambda}{\lambda}\,e^{-(\tilde\lambda-\lambda)x}\,d\mu_X(x)
   = \int_0^a \frac{\tilde\lambda}{\lambda}\,e^{-(\tilde\lambda-\lambda)x}\,\lambda e^{-\lambda x}\,dx \\
  &= \int_0^a \tilde\lambda\, e^{-\tilde\lambda x}\,dx
   = \Bigl[-e^{-\tilde\lambda x}\Bigr]_0^a
   = 1 - e^{-\tilde\lambda a}.
\end{aligned}$$

Thus the change of measure leaves us with an exponentially distributed random variable,
but with parameter λ̃ instead of λ.
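
To close, here is an illustrative Python sketch (with arbitrary values of λ and λ̃) that checks by simulation both that EZ = 1 and that reweighting by Z turns the exponential(λ) distribution function into the exponential(λ̃) one.

```python
# Sketch: check by simulation that E[Z] = 1 and that P~{X <= a} = 1 - exp(-lam_t * a),
# where Z = (lam_t / lam) * exp(-(lam_t - lam) * X).  The rates lam and lam_t are made up.
import numpy as np

rng = np.random.default_rng(3)
lam, lam_t = 1.0, 2.5                            # original and new exponential rates

X = rng.exponential(scale=1.0 / lam, size=2_000_000)
Z = (lam_t / lam) * np.exp(-(lam_t - lam) * X)   # the density of P~ with respect to P

print(Z.mean())                                  # should be close to 1, i.e. P~(Omega) = 1
for a in (0.5, 1.0, 2.0):
    estimate = np.mean(np.where(X <= a, Z, 0.0)) # Monte Carlo estimate of P~{X <= a}
    exact = 1.0 - np.exp(-lam_t * a)             # exponential(lam_t) distribution function
    print(a, estimate, exact)
```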
