
Homework 7

MATH 8651 Fall 2015

Q1. Let $\mu_1, \mu_2, \mu_3$ be three probability measures on $(\Omega, \mathcal{F})$ such that
$$\mu_1 \ll \mu_2 \ll \mu_3.$$
Show that
$$\frac{d\mu_1}{d\mu_3} = \frac{d\mu_1}{d\mu_2} \cdot \frac{d\mu_2}{d\mu_3} \quad \text{a.s.}$$
Solution: Note that $\frac{d\mu_1}{d\mu_2} \cdot \frac{d\mu_2}{d\mu_3}$ is $\mathcal{F}$-measurable. Since $\mu_2 \ll \mu_3$,
$$\mu_2(A) = \int_A \frac{d\mu_2}{d\mu_3} \, d\mu_3 \quad \text{for any } A \in \mathcal{F}.$$

Consequently, by Durrett exercise 1.6.8 (see Homework 2, Q9), for any measurable function $g \geq 0$,
$$\int g \, d\mu_2 = \int g \, \frac{d\mu_2}{d\mu_3} \, d\mu_3. \tag{1}$$
Now since $\mu_1 \ll \mu_2$, for $A \in \mathcal{F}$,
$$\mu_1(A) = \int_A \frac{d\mu_1}{d\mu_2} \, d\mu_2 = \int_A \frac{d\mu_1}{d\mu_2} \cdot \frac{d\mu_2}{d\mu_3} \, d\mu_3,$$
where we take $g = \mathbf{1}_A \frac{d\mu_1}{d\mu_2}$ in (1) to obtain the last equality above. The claim now follows from the (almost sure) uniqueness of Radon-Nikodym derivatives.
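On a finite sample space the Radon-Nikodym derivative is just a pointwise ratio of probability mass functions, so the chain rule can be checked directly. A minimal sketch in Python (the three measures below are arbitrary illustrative choices, not from the problem):

```python
import numpy as np

# Three probability measures on a 4-point space with all masses positive,
# so mu1 << mu2 << mu3 holds automatically.
mu1 = np.array([0.1, 0.2, 0.3, 0.4])
mu2 = np.array([0.25, 0.25, 0.25, 0.25])
mu3 = np.array([0.4, 0.3, 0.2, 0.1])

# On a finite space, d(mu)/d(nu) is the pointwise ratio of masses.
d12 = mu1 / mu2   # d(mu1)/d(mu2)
d23 = mu2 / mu3   # d(mu2)/d(mu3)
d13 = mu1 / mu3   # d(mu1)/d(mu3)

# The chain rule holds pointwise, hence a.s. with respect to mu3.
assert np.allclose(d13, d12 * d23)
```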
Q2. Let $X$ and $Y$ be independent, identically distributed integrable random variables. Show that
$$E[X \mid X+Y] = \frac{X+Y}{2}.$$

Solution: We claim that $E[X \mid X+Y] = E[Y \mid X+Y]$ almost surely. To prove the claim, we need to show that, for any $B \in \mathcal{B}_{\mathbb{R}}$,
$$\int_{\{X+Y \in B\}} E[X \mid X+Y] \, dP = \int_{\{X+Y \in B\}} X \, dP = \int_{\{X+Y \in B\}} Y \, dP = \int_{\{X+Y \in B\}} E[Y \mid X+Y] \, dP.$$

Note that since $X$ and $Y$ are i.i.d., $(X, Y) \stackrel{d}{=} (Y, X)$, implying that $Ef(X, Y) = Ef(Y, X)$ for any measurable function $f : \mathbb{R}^2 \to \mathbb{R}$ such that $E|f(X, Y)| < \infty$.
Applying this identity to the function $f(u, v) = u \mathbf{1}_{u+v \in B}$ and noting that $f(X, Y)$ is integrable since $|f(X, Y)| \leq |X|$, we obtain
$$\int_{\{X+Y \in B\}} X \, dP = Ef(X, Y) = Ef(Y, X) = \int_{\{X+Y \in B\}} Y \, dP.$$

Hence, the claim follows. Now, almost surely,
$$E[X \mid X+Y] = E[Y \mid X+Y] = \tfrac{1}{2}\big(E[X \mid X+Y] + E[Y \mid X+Y]\big) = \tfrac{1}{2} E[X+Y \mid X+Y] = \tfrac{1}{2}(X+Y).$$

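A quick Monte Carlo sanity check (a sketch; the exponential distribution, bin width, and thresholds are arbitrary choices): conditioning on $X+Y$ is approximated by binning the observed values of $X+Y$ and averaging $X$ within each bin, which should track $(X+Y)/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = rng.exponential(1.0, size=n)   # any integrable i.i.d. pair works
Y = rng.exponential(1.0, size=n)
S = X + Y

# Approximate E[X | S] by averaging X over narrow bins of S.
edges = np.linspace(0.0, 8.0, 41)
idx = np.digitize(S, edges)
for k in range(1, len(edges)):
    mask = idx == k
    if mask.sum() > 5000:  # only well-populated bins
        # Empirical E[X | S in bin] vs. empirical mean of S/2 on the bin.
        assert abs(X[mask].mean() - (S[mask] / 2).mean()) < 0.1
```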
Q3. Let $X$ and $Y$ be integrable random variables and suppose that
$$E[X \mid Y] = Y \quad \text{and} \quad E[Y \mid X] = X \quad \text{a.s.}$$
Show that $X = Y$ a.s.

[Comment: When $X, Y \in L^2$, the above result is an easy consequence of Durrett exercise 5.1.11 by taking $\mathcal{G} = \sigma(X)$. But in the above problem, we only assume $X, Y \in L^1$. Hint: Work with quantities like $E[(X - Y)\mathbf{1}_{(X > a,\, Y \leq a)}] + E[(Y - X)\mathbf{1}_{(X \leq a,\, Y > a)}]$ for $a \in \mathbb{R}$.]
Solution: Since $\{Y \leq a\} \in \sigma(Y)$ and $E[X \mid Y] = Y$ a.s.,
$$E[Y \mathbf{1}_{Y \leq a}] = E[E[X \mid Y] \mathbf{1}_{Y \leq a}] = E[X \mathbf{1}_{Y \leq a}],$$
implying that $E[(X - Y)\mathbf{1}_{Y \leq a}] = 0$. Similarly, $E[(X - Y)\mathbf{1}_{X \leq a}] = 0$. Subtracting one from the other, we obtain
$$0 = E[(X - Y)(\mathbf{1}_{X \leq a} - \mathbf{1}_{Y \leq a})] = E[(X - Y)(\mathbf{1}_{X \leq a,\, Y > a} - \mathbf{1}_{X > a,\, Y \leq a})],$$
which we can write as
$$E[(X - Y)\mathbf{1}_{X > a,\, Y \leq a}] + E[(Y - X)\mathbf{1}_{X \leq a,\, Y > a}] = 0.$$
Note that both $(X - Y)\mathbf{1}_{X > a,\, Y \leq a} \geq 0$ and $(Y - X)\mathbf{1}_{X \leq a,\, Y > a} \geq 0$. So,
$$(X - Y)\mathbf{1}_{X > a,\, Y \leq a} = 0 \quad \text{and} \quad (Y - X)\mathbf{1}_{X \leq a,\, Y > a} = 0 \quad \text{a.s.}$$

This implies that
$$P(X > a,\, Y \leq a) = 0 \quad \text{and} \quad P(X \leq a,\, Y > a) = 0.$$
Finally,
$$P(X > Y) = P(X > q,\, Y \leq q \text{ for some } q \in \mathbb{Q}) \leq \sum_{q \in \mathbb{Q}} P(X > q,\, Y \leq q) = 0.$$
Similarly, $P(Y > X) = 0$. Combining, we get $P(X \neq Y) = 0$.
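The last step, writing $\{X > Y\}$ as a countable union over the rationals, can be illustrated numerically. A sketch (the normal samples and the grid of spacing $h$ standing in for the rationals are illustrative choices): whenever $X > Y$, some grid point $q$ satisfies $X > q \geq Y$, except for pairs closer together than $h$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**5
X = rng.normal(size=n)
Y = rng.normal(size=n)

# {X > Y} = union over rationals q of {X > q, Y <= q}; a fine grid of
# spacing h plays the role of the rationals and catches almost every pair.
h = 1e-4
q = np.ceil(Y / h) * h            # smallest grid point >= Y
covered = (X > Y) & (X > q)       # then X > q >= Y for this sample
missed = (X > Y) & ~covered       # only pairs with X - Y < h are missed
assert missed.mean() < 1e-3
```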


Q4. Let $X$ be integrable and let $\mathcal{G}, \mathcal{H}$ be two sub-$\sigma$-algebras of $\mathcal{F}$. If $\sigma(X, \mathcal{G})$ is independent of $\mathcal{H}$, prove that
$$E[X \mid \sigma(\mathcal{G}, \mathcal{H})] = E[X \mid \mathcal{G}] \quad \text{a.s.}$$

[The notation $\sigma(X, \mathcal{G})$ denotes the $\sigma$-field generated by $\sigma(X) \cup \mathcal{G}$.]


Solution: We need to show that
$$\int_C E[X \mid \mathcal{G}] \, dP = \int_C X \, dP$$
for any $C \in \sigma(\mathcal{G}, \mathcal{H})$. For any $A \in \mathcal{G}$ and $B \in \mathcal{H}$,
$$\int_{A \cap B} E[X \mid \mathcal{G}] \, dP = \int E[X \mid \mathcal{G}] \mathbf{1}_A \mathbf{1}_B \, dP = \int E[X \mid \mathcal{G}] \mathbf{1}_A \, dP \int \mathbf{1}_B \, dP = \int X \mathbf{1}_A \, dP \int \mathbf{1}_B \, dP = \int X \mathbf{1}_A \mathbf{1}_B \, dP = \int_{A \cap B} X \, dP,$$
where the second equality above follows from the fact that $E[X \mid \mathcal{G}] \mathbf{1}_A$ is $\mathcal{G}$-measurable and $\mathbf{1}_B$ is $\mathcal{H}$-measurable, hence they are independent; the third equality is implied by the definition of $E[X \mid \mathcal{G}]$; and the fourth equality is a consequence of the fact that $X \mathbf{1}_A$ and $\mathbf{1}_B$ are independent, since $X \mathbf{1}_A$ is $\sigma(X, \mathcal{G})$-measurable while $\mathbf{1}_B$ is $\mathcal{H}$-measurable.
Define
$$\mathcal{C} = \left\{ C \in \sigma(\mathcal{G}, \mathcal{H}) : \int_C E[X \mid \mathcal{G}] \, dP = \int_C X \, dP \right\}.$$
We have shown that $\mathcal{C}$ contains the $\pi$-system $\mathcal{P} = \{A \cap B : A \in \mathcal{G}, B \in \mathcal{H}\}$. It is straightforward to see that $\mathcal{C}$ is also a $\lambda$-system (one can, for example, apply the DCT to show that if $C_i \in \mathcal{C}$ and $C_i \uparrow C$, then $C = \cup_i C_i \in \mathcal{C}$). So, by Dynkin's $\pi$-$\lambda$ theorem, $\mathcal{C}$ contains $\sigma(\mathcal{P})$, which contains $\mathcal{G} \cup \mathcal{H}$ and hence equals $\sigma(\mathcal{G}, \mathcal{H})$. Therefore $\mathcal{C} = \sigma(\mathcal{G}, \mathcal{H})$.
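A Monte Carlo sketch of the statement (the specific variables are illustrative assumptions: $\mathcal{G} = \sigma(Z)$, $\mathcal{H} = \sigma(W)$ with $W$ independent of $(Z, U)$, and $X = Z + U$, so that $E[X \mid \mathcal{G}] = Z$): conditioning on $W$ in addition to $Z$ should not change the conditional mean, which we check by binning on the pair $(Z, W)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10**6
Z = rng.normal(size=n)             # generates G = sigma(Z)
U = rng.normal(size=n)             # extra randomness inside X
X = Z + U                          # E[X | G] = Z here
W = rng.normal(size=n)             # generates H, independent of (X, Z)

# Approximate E[X | Z, W] by cell averages over a (Z, W) grid; it should
# agree with E[X | Z] = Z on every well-populated cell.
zb = np.digitize(Z, np.linspace(-2, 2, 9))
wb = np.digitize(W, np.linspace(-2, 2, 9))
for i in range(10):
    for j in range(10):
        cell = (zb == i) & (wb == j)
        if cell.sum() > 5000:
            assert abs(X[cell].mean() - Z[cell].mean()) < 0.1
```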

Q5. Durrett 5.1.6
Solution: We consider the probability space $(\Omega := \{a, b, c\}, 2^\Omega, P)$, where $2^\Omega$ is the set of all subsets of $\Omega$ and $P$ is the uniform measure on $\{a, b, c\}$. Take $\mathcal{F}_1 = \sigma(\{a\})$, $\mathcal{F}_2 = \sigma(\{b\})$ and the random variable $X = \mathbf{1}_{\{a\}}$. Note that $E[\,\cdot \mid \mathcal{F}_1]$ averages over $\{b, c\}$ and $E[\,\cdot \mid \mathcal{F}_2]$ averages over $\{a, c\}$. We compute
$$E[E[X \mid \mathcal{F}_1] \mid \mathcal{F}_2] = E[\mathbf{1}_{\{a\}} \mid \mathcal{F}_2] = \tfrac{1}{2} \mathbf{1}_{\{a\}} + \tfrac{1}{2} \mathbf{1}_{\{c\}}$$
and
$$E[E[X \mid \mathcal{F}_2] \mid \mathcal{F}_1] = E[\tfrac{1}{2} \mathbf{1}_{\{a\}} + \tfrac{1}{2} \mathbf{1}_{\{c\}} \mid \mathcal{F}_1] = \tfrac{1}{2} \mathbf{1}_{\{a\}} + \tfrac{1}{4} \mathbf{1}_{\{b\}} + \tfrac{1}{4} \mathbf{1}_{\{c\}}.$$
So, clearly $E[E[X \mid \mathcal{F}_1] \mid \mathcal{F}_2] \neq E[E[X \mid \mathcal{F}_2] \mid \mathcal{F}_1]$.
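Since the counterexample lives on a three-point space, the computation can be reproduced exactly. A sketch (the helper `cond_exp` is ad hoc, written for finite partitions only):

```python
from fractions import Fraction

# Uniform measure on Omega = {a, b, c}.
P = {w: Fraction(1, 3) for w in "abc"}

def cond_exp(f, partition):
    """E[f | sigma(partition)]: P-weighted average of f over each block."""
    g = {}
    for block in partition:
        avg = sum(f[w] * P[w] for w in block) / sum(P[w] for w in block)
        for w in block:
            g[w] = avg
    return g

F1 = [{"a"}, {"b", "c"}]   # F1 = sigma({a})
F2 = [{"b"}, {"a", "c"}]   # F2 = sigma({b})
X = {"a": Fraction(1), "b": Fraction(0), "c": Fraction(0)}  # X = 1_{a}

lhs = cond_exp(cond_exp(X, F1), F2)  # a: 1/2, b: 0,   c: 1/2
rhs = cond_exp(cond_exp(X, F2), F1)  # a: 1/2, b: 1/4, c: 1/4
assert lhs != rhs                    # iterated conditioning does not commute
```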

Q6. Durrett 5.1.9
Solution: Note that
$$E(X - E[X \mid \mathcal{F}])^2 = E[X^2] - 2E[X E[X \mid \mathcal{F}]] + E(E[X \mid \mathcal{F}]^2) = E[X^2] - E(E[X \mid \mathcal{F}]^2) = E \, \mathrm{Var}(X \mid \mathcal{F}),$$
where in the second equality we have used the fact that $E[X E[X \mid \mathcal{F}]] = E \, E(X E[X \mid \mathcal{F}] \mid \mathcal{F}) = E(E[X \mid \mathcal{F}]^2)$.
Now
$$\mathrm{Var}(X) = E(X - EX)^2 = E(X - E(X \mid \mathcal{F}) + E(X \mid \mathcal{F}) - EX)^2$$
$$= E(X - E(X \mid \mathcal{F}))^2 + E(E(X \mid \mathcal{F}) - EX)^2$$
$$= E(X - E(X \mid \mathcal{F}))^2 + E(E(X \mid \mathcal{F}) - E \, E(X \mid \mathcal{F}))^2$$
$$= E \, \mathrm{Var}(X \mid \mathcal{F}) + \mathrm{Var}(E(X \mid \mathcal{F})),$$
where the cross term disappears because

$$E[(X - E(X \mid \mathcal{F}))(E(X \mid \mathcal{F}) - EX)] = E \, E[(X - E(X \mid \mathcal{F}))(E(X \mid \mathcal{F}) - EX) \mid \mathcal{F}]$$
$$= E\big[(E(X \mid \mathcal{F}) - EX) \, E[X - E(X \mid \mathcal{F}) \mid \mathcal{F}]\big]$$
$$= E\big[(E(X \mid \mathcal{F}) - EX)(E(X \mid \mathcal{F}) - E(X \mid \mathcal{F}))\big] = 0.$$
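The decomposition can be checked by simulation. A minimal sketch, assuming the hierarchical model $X = Z + \sigma\varepsilon$ with $\mathcal{F} = \sigma(Z)$ and $\varepsilon$ independent noise, so that $\mathrm{Var}(X \mid \mathcal{F}) = \sigma^2$ and $E(X \mid \mathcal{F}) = Z$ are known exactly:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10**6
sigma = 0.5
Z = rng.normal(size=n)                 # F = sigma(Z)
X = Z + sigma * rng.normal(size=n)     # Var(X | F) = sigma**2, E[X | F] = Z

# Var(X) = E Var(X|F) + Var(E(X|F)): here both sides are 1 + sigma**2.
assert abs(X.var() - (sigma**2 + Z.var())) < 0.01
```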

Q7. Durrett 5.1.11
Solution: From Q6, we get
$$E[(Y - X)^2] = E(Y - E[Y \mid \mathcal{G}])^2 = E[Y^2] - E(E[Y \mid \mathcal{G}]^2) = E[Y^2] - E[X^2] = 0,$$
which implies that $(Y - X)^2 = 0$ a.s. or, equivalently, $X = Y$ a.s.
