
Problem 1

Let A and B be events of some sample space. Show that:


(i) Pr(A) = 1 ⟹ A ⊥⊥ B.
(ii) Pr(A) = 0 ⟹ A ⊥⊥ B.
(iii) A ⊥⊥ B ⟺ A ⊥⊥ Bᶜ ⟺ Aᶜ ⊥⊥ B ⟺ Aᶜ ⊥⊥ Bᶜ.
Solution (i, ii). It suffices to note that Pr(A) = 1 ⟹ Pr(A ∩ B) = Pr(B) = Pr(A) Pr(B) and
Pr(A) = 0 ⟹ Pr(A ∩ B) = 0 = Pr(A) Pr(B). For the first implication, Pr(B) = Pr(A ∩ B) + Pr(Aᶜ ∩ B)
and Pr(Aᶜ ∩ B) ≤ Pr(Aᶜ) = 0; for the second, Pr(A ∩ B) ≤ Pr(A) = 0.
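
The two implications can also be checked numerically. Below is a minimal Python sketch (the sample space, point masses, and events are arbitrary illustrative choices, not part of the problem) verifying Pr(A ∩ B) = Pr(A) Pr(B) for one event with Pr(A) = 1 and one with Pr(A) = 0.

```python
# A minimal numeric sketch of (i) and (ii) on a small finite sample space.
# The sample space, point masses, and events below are arbitrary choices.
def pr(event, masses):
    """Probability of an event (a set of outcomes) under given point masses."""
    return sum(masses[w] for w in event)

masses = {1: 0.4, 2: 0.3, 3: 0.3, 4: 0.0}   # point masses summing to 1
B = {1, 3}

for A in [{1, 2, 3, 4}, {4}]:               # Pr(A) = 1 and Pr(A) = 0, respectively
    lhs = pr(A & B, masses)
    rhs = pr(A, masses) * pr(B, masses)
    print(f"Pr(A)={pr(A, masses):.1f}: Pr(A∩B)={lhs:.2f}, Pr(A)Pr(B)={rhs:.2f}")
```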
Solution (iii). We show that

A ⊥⊥ B ⟹ Aᶜ ⊥⊥ B ⟹ A ⊥⊥ Bᶜ ⟹ Aᶜ ⊥⊥ Bᶜ ⟹ A ⊥⊥ B.

Since these implications form a cycle through all four statements, they establish the equivalences in (iii).

1. A ⊥⊥ B ⟹ Aᶜ ⊥⊥ B. Since B = (B \ A) ∪ (B ∩ A), we have Pr(B) = Pr(B \ A) + Pr(B ∩ A). But
Pr(B \ A) = Pr(B ∩ Aᶜ). Hence,

Pr(B ∩ Aᶜ) = Pr(B) − Pr(B ∩ A)
           = Pr(B) − Pr(B) Pr(A)        (A ⊥⊥ B)
           = Pr(B)[1 − Pr(A)]
           = Pr(B) Pr(Aᶜ).

Thus, we have shown that A ⊥⊥ B ⟹ Aᶜ ⊥⊥ B.
2. Aᶜ ⊥⊥ B ⟹ A ⊥⊥ Bᶜ. Since A = (A \ B) ∪ (A ∩ B), we have Pr(A) = Pr(A \ B) + Pr(A ∩ B).
Similarly, B = (B \ A) ∪ (B ∩ A) implies that Pr(B) = Pr(B \ A) + Pr(B ∩ A). Hence, Pr(A) − Pr(A \ B) =
Pr(B) − Pr(B \ A). It follows that

Pr(A ∩ Bᶜ) = Pr(A) − Pr(B) + Pr(B ∩ Aᶜ)
           = Pr(A) − Pr(B) + Pr(B) Pr(Aᶜ)        (Aᶜ ⊥⊥ B)
           = Pr(A) − Pr(B)[1 − Pr(Aᶜ)]
           = Pr(A) − Pr(B) Pr(A)
           = Pr(A)[1 − Pr(B)]
           = Pr(A) Pr(Bᶜ).

Thus, we have shown that Aᶜ ⊥⊥ B ⟹ A ⊥⊥ Bᶜ.


3. A ⊥⊥ Bᶜ ⟹ Aᶜ ⊥⊥ Bᶜ. Since Bᶜ = (Bᶜ \ A) ∪ (Bᶜ ∩ A), we have Pr(Bᶜ) = Pr(Bᶜ \ A) + Pr(Bᶜ ∩ A).
But Pr(Bᶜ \ A) = Pr(Bᶜ ∩ Aᶜ). Hence,

Pr(Bᶜ ∩ Aᶜ) = Pr(Bᶜ) − Pr(Bᶜ ∩ A)
            = Pr(Bᶜ) − Pr(Bᶜ) Pr(A)        (A ⊥⊥ Bᶜ)
            = Pr(Bᶜ)[1 − Pr(A)]
            = Pr(Bᶜ) Pr(Aᶜ).

Thus, we have shown that A ⊥⊥ Bᶜ ⟹ Aᶜ ⊥⊥ Bᶜ.
4. Aᶜ ⊥⊥ Bᶜ ⟹ A ⊥⊥ B. Since A = (A \ B) ∪ (A ∩ B), we have Pr(A ∩ Bᶜ) = Pr(A) − Pr(A ∩ B).
Similarly, Bᶜ = (Bᶜ \ A) ∪ (Bᶜ ∩ A) implies that Pr(Bᶜ ∩ A) = Pr(Bᶜ) − Pr(Bᶜ ∩ Aᶜ). Hence,
Pr(A) − Pr(A ∩ B) = Pr(Bᶜ) − Pr(Bᶜ ∩ Aᶜ). It follows that

Pr(A ∩ B) = Pr(A) − Pr(Bᶜ) + Pr(Bᶜ ∩ Aᶜ)
          = Pr(A) − Pr(Bᶜ) + Pr(Bᶜ) Pr(Aᶜ)        (Aᶜ ⊥⊥ Bᶜ)
          = Pr(A) − Pr(Bᶜ)[1 − Pr(Aᶜ)]
          = Pr(A) − Pr(Bᶜ) Pr(A)
          = Pr(A)[1 − Pr(Bᶜ)]
          = Pr(A) Pr(B).

Thus, we have shown that Aᶜ ⊥⊥ Bᶜ ⟹ A ⊥⊥ B.
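
As a quick numerical illustration of (iii) (not part of the proof), the following Python sketch starts from an arbitrary pair of marginal probabilities, imposes Pr(A ∩ B) = Pr(A) Pr(B), and checks that the product rule then holds for all four combinations of A, B and their complements.

```python
# Numeric sketch of (iii): imposing Pr(A ∩ B) = Pr(A) Pr(B) forces the product
# rule for every combination of A, B and their complements.
# The marginal probabilities below are arbitrary illustrative values.
pA, pB = 0.3, 0.6
pAB = pA * pB                                   # assume A ⊥⊥ B

# joint probabilities of the four cells of the 2x2 table for (A, Ac) x (B, Bc)
cells = {
    ("A", "B"): pAB,
    ("A", "Bc"): pA - pAB,                      # Pr(A ∩ Bᶜ) = Pr(A) − Pr(A ∩ B)
    ("Ac", "B"): pB - pAB,                      # Pr(Aᶜ ∩ B) = Pr(B) − Pr(A ∩ B)
    ("Ac", "Bc"): 1 - pA - pB + pAB,            # complement of A ∪ B
}
marginals = {"A": pA, "Ac": 1 - pA, "B": pB, "Bc": 1 - pB}

for (a, b), joint in cells.items():
    assert abs(joint - marginals[a] * marginals[b]) < 1e-12, (a, b)
print("all four product identities hold")
```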

Problem 4
For each n ∈ ℕ, let {X1, X2, . . . , Xn} be a collection of iid random variables, which is independent
of the random variable N =ᵈ Poisson(λ). Let Xi =ᵈ Bernoulli(p) for each i, and

Sn := ∑_{i=1}^n Xi  if n ≥ 1,   and   Sn := 0  if n = 0.

Find ESN and var SN.
Solution to the first part. Since ESN = E(E(SN |N)) by LIME, we begin by finding the CEF
n ↦ E(SN |N = n). So let n ∈ supp(N) = {0, 1, 2, . . .}. Then,

E(SN |N = n) = E(Sn |N = n)                                     (useful rule)
             = E(∑_{i=1}^n Xi |N = n) if n ≥ 1, 0 if n = 0      (defn. of Sn)
             = E(∑_{i=1}^n Xi) if n ≥ 1, 0 if n = 0             ({X1, X2, . . . , Xn} ⊥⊥ N)
             = n EX1 if n ≥ 1, 0 if n = 0                       (X1 =ᵈ X2 =ᵈ · · · =ᵈ Xn)
             = np if n ≥ 1, 0 if n = 0                          (X1 =ᵈ Bernoulli(p))
             = np.

Therefore,

E(SN |N) := E(SN |N = n)|_{n=N} = np|_{n=N} = Np.

Consequently, ESN = E(E(SN |N)) = E(Np) = pEN = pλ because N =ᵈ Poisson(λ).
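
The value ESN = pλ can be sanity-checked by simulation. Here is a minimal Python sketch (parameter values, seed, and sample size are arbitrary illustrative choices), which uses the fact that, given N = n, Sn is Binomial(n, p).

```python
# Monte Carlo sketch of ESN = pλ; parameters, seed, and sample size are
# arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
lam, p, reps = 3.0, 0.4, 200_000

N = rng.poisson(lam, size=reps)     # N =d Poisson(lam)
S = rng.binomial(N, p)              # given N = n, Sn is a sum of n iid Bernoulli(p), i.e. Binomial(n, p)

print(S.mean(), p * lam)            # the two values should be close
```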
Solution to the second part. By the variance decomposition formula,

var SN = E var(SN |N) + var E(SN |N).

Now, from the first part we have that

var E(SN |N) = var(Np) = p² var N = p²λ.        (EN = var N = λ)

Next,
var(SN |N = n) = var(Sn |N = n)                                     (useful rule)
               = var(∑_{i=1}^n Xi |N = n) if n ≥ 1, 0 if n = 0      (defn. of Sn)
               = var(∑_{i=1}^n Xi) if n ≥ 1, 0 if n = 0             ({X1, X2, . . . , Xn} ⊥⊥ N)
               = n var X1 if n ≥ 1, 0 if n = 0                      (X1, X2, . . . , Xn iid)
               = np(1 − p) if n ≥ 1, 0 if n = 0                     (X1 =ᵈ Bernoulli(p))
               = np(1 − p).

Therefore,

var(SN |N) := var(SN |N = n)|_{n=N} = np(1 − p)|_{n=N} = Np(1 − p).

Hence,

E var(SN |N) = E[Np(1 − p)] = p(1 − p)EN = p(1 − p)λ.

It follows that var SN = p(1 − p)λ + p²λ = pλ.
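
The same kind of simulation (again with arbitrary parameter choices) can be used to check the two terms of the variance decomposition as well as the final value var SN = pλ.

```python
# Monte Carlo sketch of var SN = pλ and of the two terms in the variance
# decomposition; parameters, seed, and sample size are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
lam, p, reps = 3.0, 0.4, 200_000

N = rng.poisson(lam, size=reps)
S = rng.binomial(N, p)                              # given N = n, Sn is Binomial(n, p)

print(S.var(), p * lam)                             # var SN vs. pλ
print((N * p * (1 - p)).mean(), p * (1 - p) * lam)  # E var(SN|N) vs. p(1−p)λ
print((N * p).var(), p * p * lam)                   # var E(SN|N) vs. p²λ
```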
