
STAT 218 Homework 1 Solution

Jiarui Han April 15, 2004


1. (Problem 1.23) (a) Proof: Condition on the first step and use the fact that the probability of ever going from $-1$ to 1 equals (the probability of ever going from $-1$ to 0) multiplied by (the probability of ever going from 0 to 1). This gives
$$\alpha = p + (1-p)\alpha^2.$$
(b) Proof: Solving the quadratic equation in part (a) gives
$$\alpha = \frac{1 \pm (2p-1)}{2(1-p)}.$$

Since $\alpha \in [0,1]$, this yields that for $p \ge \frac{1}{2}$,
$$\alpha = \frac{1-(2p-1)}{2(1-p)} = 1.$$

For $p < \frac{1}{2}$ it follows from the strong law of large numbers that the position of the particle converges to $-\infty$, and from this it follows that the probability of ever reaching 1 must be less than unity. (For if it were 1, then state 1 would be reached infinitely often, which would contradict the fact that the particle's position converges to $-\infty$.) Hence, when $p < \frac{1}{2}$ we obtain
$$\alpha = \frac{1+(2p-1)}{2(1-p)} = \frac{p}{1-p}.$$
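A quick Monte Carlo check of the $p < \frac{1}{2}$ case (a minimal sketch in Python; the value $p = 0.3$, the trial count, and the step cap used to truncate "never reaches 1" are all arbitrary choices, not part of the solution):

```python
import random

def ever_reaches_one(p, max_steps=1_000):
    """Run the walk from 0; report whether it hits +1 within max_steps."""
    pos = 0
    for _ in range(max_steps):
        pos += 1 if random.random() < p else -1
        if pos == 1:
            return True
    return False  # truncation: treat "not within max_steps" as "never"

p, trials = 0.3, 50_000
est = sum(ever_reaches_one(p) for _ in range(trials)) / trials
print(f"estimated alpha = {est:.3f}, p/(1-p) = {p/(1-p):.3f}")
```

For $p = 0.3$ both printed numbers should be close to $3/7 \approx 0.429$.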

(c) Solution: The probability of ever going to 1 from 0 is $\alpha$, so the probability of ever going to $k+1$ from $k$ is also $\alpha$, for all $k$. Hence the probability that the particle ever reaches $n$ is $\alpha^n$.

(d) Proof: Suppose the particle is presently at location $i$. Then
$$P\{\text{next at } i+1 \mid \text{will reach } n\} = \frac{P\{\text{next at } i+1,\ \text{will reach } n\}}{P\{\text{will reach } n\}} = \frac{p\,P\{\text{will reach } n \mid \text{at } i+1\}}{P\{\text{will reach } n\}} = \frac{p\,\alpha^{n-i-1}}{\alpha^{n-i}} = \frac{p}{\alpha} = 1-p,$$
where the last equality uses $\alpha = p/(1-p)$ in the case $p < \frac{1}{2}$.

2. (Problem 1.24) (a) Proof: Conditioning on the first move gives
$$E[T] = p \cdot 1 + (1-p)(1 + 2E[T]) = 1 + 2(1-p)E[T].$$
Hence, $E[T] = \infty$ if $p \le \frac{1}{2}$. As it can be shown that $E[T] < \infty$ when $p > \frac{1}{2}$, we obtain in this case that
$$E[T] = \frac{1}{2p-1}.$$
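For $p > \frac{1}{2}$ the hitting time is almost surely finite, so its mean can be checked directly by simulation (a sketch; $p = 0.7$ and the trial count are arbitrary choices):

```python
import random

def hitting_time_of_one(p):
    """Number of steps until the walk started at 0 first reaches +1."""
    pos, steps = 0, 0
    while pos < 1:
        pos += 1 if random.random() < p else -1
        steps += 1
    return steps

p, trials = 0.7, 100_000
mean_t = sum(hitting_time_of_one(p) for _ in range(trials)) / trials
print(f"sample mean {mean_t:.3f} vs 1/(2p-1) = {1/(2*p - 1):.3f}")
```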

(b) Proof: Let $X$ denote the particle's location after the first move. Given that $X = 1$, $T = 1$; and given that $X = -1$, $T$ is distributed as 1 plus the sum of two independent random variables, each having the distribution of $T$. Therefore,
$$E[T \mid X = 1] = 1, \qquad E[T \mid X = -1] = 1 + 2E[T],$$
$$\mathrm{Var}(T \mid X = 1) = 0, \qquad \mathrm{Var}(T \mid X = -1) = 2\,\mathrm{Var}(T),$$
and thus
$$\mathrm{Var}(E[T \mid X]) = \mathrm{Var}(E[T \mid X] - 1) = 4(E[T])^2\,p(1-p) = \frac{4p(1-p)}{(2p-1)^2}, \qquad E[\mathrm{Var}(T \mid X)] = 2(1-p)\,\mathrm{Var}(T).$$
By the conditional variance formula,
$$\mathrm{Var}(T) = E[\mathrm{Var}(T \mid X)] + \mathrm{Var}(E[T \mid X]) = 2(1-p)\,\mathrm{Var}(T) + \frac{4p(1-p)}{(2p-1)^2},$$

which gives the result, $\mathrm{Var}(T) = \dfrac{4p(1-p)}{(2p-1)^3}$.

(c) Solution: The time to reach $n$ is the sum of $n$ independent random variables, each having the distribution of $T$. (That is, the time to reach $n$ is $\sum_{i=1}^{n} T_i$, where $T_i$ is the additional time it takes, once the particle reaches $i-1$, until it reaches $i$.) Hence, $E[\text{time to reach } n] = nE[T]$.

(d) Solution: By the same reasoning as in (c), and by the independence of the $T_i$, $\mathrm{Var}(\text{time to reach } n) = n\,\mathrm{Var}(T)$.
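Both moments of the time to reach $n$ can be checked at once (a sketch; $p$, $n$, and the trial count are arbitrary choices, and $\mathrm{Var}(T) = 4p(1-p)/(2p-1)^3$ is the result of part (b)):

```python
import random
import statistics

def time_to_reach(n, p):
    """Steps until the walk started at 0 first reaches n."""
    pos, steps = 0, 0
    while pos < n:
        pos += 1 if random.random() < p else -1
        steps += 1
    return steps

p, n, trials = 0.7, 5, 50_000
samples = [time_to_reach(n, p) for _ in range(trials)]
e_t = 1 / (2*p - 1)
var_t = 4*p*(1 - p) / (2*p - 1)**3
print(f"mean {statistics.fmean(samples):.2f} vs n*E[T] = {n*e_t:.2f}")
print(f"var  {statistics.pvariance(samples):.2f} vs n*Var(T) = {n*var_t:.2f}")
```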

3. (Problem 1.31) Solution:
$$P\{\min(X,Y) > a \mid \min(X,Y) = X\} = P\{X > a \mid X < Y\} = \frac{P\{X > a,\ X < Y\}}{P\{X < Y\}}.$$
Now,
$$P\{X > a,\ X < Y\} = \int_a^{\infty} P\{Y > x \mid X = x\}\,\lambda_1 e^{-\lambda_1 x}\,dx = \int_a^{\infty} e^{-\lambda_2 x}\,\lambda_1 e^{-\lambda_1 x}\,dx = \lambda_1 \int_a^{\infty} e^{-(\lambda_1+\lambda_2)x}\,dx = \frac{\lambda_1}{\lambda_1+\lambda_2}\,e^{-(\lambda_1+\lambda_2)a},$$
and
$$P\{X < Y\} = \int_0^{\infty} P\{X < Y \mid Y = y\}\,\lambda_2 e^{-\lambda_2 y}\,dy = \int_0^{\infty} \left(1 - e^{-\lambda_1 y}\right)\lambda_2 e^{-\lambda_2 y}\,dy = \frac{\lambda_1}{\lambda_1+\lambda_2}.$$
Hence,
$$P\{\min(X,Y) > a \mid \min(X,Y) = X\} = e^{-(\lambda_1+\lambda_2)a}.$$
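The answer says the minimum is $\mathrm{Exp}(\lambda_1 + \lambda_2)$ even after conditioning on which variable achieves it, and this is easy to test empirically (a sketch; the rates, the threshold $a$, and the trial count are arbitrary choices):

```python
import math
import random

lam1, lam2, a, trials = 1.0, 2.0, 0.5, 200_000
hits = total = 0
for _ in range(trials):
    x = random.expovariate(lam1)
    y = random.expovariate(lam2)
    if x < y:            # condition on the event {min(X, Y) = X}
        total += 1
        if x > a:        # the event {min(X, Y) > a}
            hits += 1
print(f"P(min > a | min = X) ~ {hits / total:.4f}")
print(f"exp(-(lam1 + lam2) * a) = {math.exp(-(lam1 + lam2) * a):.4f}")
```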

4. (Problem 4.33) (a) Proof: Since $\{X_n, n \ge 0\}$ is a branching process, there are two classes: $\{0\}$, which is recurrent, and $\{1, 2, \ldots\}$, which is transient.

Since $1, 2, 3, \ldots$ are transient states, for each state $i \ne 0$ the process makes only finitely many visits to $i$. Therefore $X_n$ converges to 0 or to infinity.

(b) Proof: By the conditional variance formula,
$$\mathrm{Var}(X_n) = E[\mathrm{Var}(X_n \mid X_{n-1})] + \mathrm{Var}(E[X_n \mid X_{n-1}]) = \sigma^2 \mu^{n-1} + \mu^2\,\mathrm{Var}(X_{n-1}).$$

The result now follows by mathematical induction. It is obvious when $n = 1$, so assume, when $\mu \ne 1$, that
$$\mathrm{Var}(X_{k-1}) = \sigma^2 \mu^{k-2}\,\frac{\mu^{k-1}-1}{\mu-1}.$$
Then
$$\mathrm{Var}(X_k) = \sigma^2 \mu^{k-1} + \mu^2 \sigma^2 \mu^{k-2}\,\frac{\mu^{k-1}-1}{\mu-1} = \sigma^2 \mu^{k-1}\left(1 + \mu\,\frac{\mu^{k-1}-1}{\mu-1}\right) = \sigma^2 \mu^{k-1}\,\frac{\mu^{k}-1}{\mu-1}.$$

The proof when $\mu = 1$ is similar (or one could let $\mu \to 1$ in the above, which gives $\mathrm{Var}(X_n) = n\sigma^2$).
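The variance formula can be checked by simulating a small branching process (a sketch; the Binomial$(2, p)$ offspring law, which makes $\mu = 2p$ and $\sigma^2 = 2p(1-p)$ and happens to be the offspring law of the next problem, and the parameter values below are arbitrary choices):

```python
import random
import statistics

def generation_n(n, p):
    """Size of generation n, starting from one ancestor, with
    Binomial(2, p) offspring per individual."""
    size = 1
    for _ in range(n):
        size = sum(1 for _ in range(2 * size) if random.random() < p)
    return size

p, n, trials = 0.6, 4, 100_000
mu, sigma2 = 2*p, 2*p*(1 - p)
samples = [generation_n(n, p) for _ in range(trials)]
theory = sigma2 * mu**(n - 1) * (mu**n - 1) / (mu - 1)
print(f"sample var {statistics.pvariance(samples):.3f} vs formula {theory:.3f}")
```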
5. (Problem 4.34) (a) Solution: If $p \le \frac{1}{2}$ then $\mu \le 1$, so $\pi_0 = 1$. If $p > \frac{1}{2}$ then $\mu > 1$, so we know $\pi_0 < 1$. Conditioning on the first generation we have
$$\pi_0 = (1-p)^2 + 2p(1-p)\pi_0 + p^2 \pi_0^2,$$

which implies
$$\pi_0 = \frac{(1-p)^2}{p^2}.$$

(b) Solution:
$$P\{X_2 = 1\} = [2p(1-p)]^2 + p^2 \cdot 2(1-p)^2 \cdot 2p(1-p),$$
$$P\{X_2 = 2\} = 2p(1-p)\,p^2 + p^2\left\{2p^2(1-p)^2 + [2p(1-p)]^2\right\},$$
$$P\{X_2 = 3\} = 4p^5(1-p),$$
$$P\{X_2 = 4\} = p^6.$$
So
$$P\{X_3 = 0,\ X_2 \ne 0\} = \sum_{i=1}^{4} P\{X_2 = i\}\,(1-p)^{2i}.$$
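The four closed forms above can be double-checked by exact enumeration rather than simulation (a sketch; $p = 0.4$ is an arbitrary choice):

```python
from collections import defaultdict

p = 0.4
off = {0: (1 - p)**2, 1: 2*p*(1 - p), 2: p**2}  # offspring law

def convolve(da, db):
    """Distribution of the sum of two independent integer variables."""
    out = defaultdict(float)
    for a, pa in da.items():
        for b, pb in db.items():
            out[a + b] += pa * pb
    return out

def next_gen(dist):
    """Push a distribution over generation sizes through one branching step."""
    out = defaultdict(float)
    for size, prob in dist.items():
        total = {0: 1.0}
        for _ in range(size):
            total = convolve(total, off)
        for s, q in total.items():
            out[s] += prob * q
    return out

x2 = next_gen(next_gen({1: 1.0}))   # distribution of X_2 given X_0 = 1
closed = {
    1: (2*p*(1 - p))**2 + p**2 * 2*(1 - p)**2 * 2*p*(1 - p),
    2: 2*p*(1 - p)*p**2 + p**2 * (2*p**2*(1 - p)**2 + (2*p*(1 - p))**2),
    3: 4*p**5 * (1 - p),
    4: p**6,
}
for i in range(1, 5):
    print(i, round(x2[i], 6), round(closed[i], 6))
```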

(c) Proof: By (a), a line started by a single individual eventually dies out with probability $\pi_0 = (1-p)^2/p^2$, so with a Poisson (mean $\lambda$) initial population size the probability of eventual extinction is
$$\sum_{k=0}^{\infty} e^{-\lambda}\,\frac{\lambda^k}{k!}\left(\frac{(1-p)^2}{p^2}\right)^{k} = \exp\left\{\lambda\left(\frac{(1-p)^2}{p^2} - 1\right)\right\} = \exp\left\{\lambda\,\frac{1-2p}{p^2}\right\}.$$
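A numeric check of this identity (a sketch; $\lambda = 2$, $p = 0.6$, and truncating the series at 60 terms are arbitrary choices):

```python
import math

lam, p = 2.0, 0.6
pi0 = ((1 - p) / p)**2
series = sum(math.exp(-lam) * lam**k / math.factorial(k) * pi0**k
             for k in range(60))   # terms beyond k = 60 are negligible here
print(f"series {series:.6f} vs exp {math.exp(lam * (1 - 2*p) / p**2):.6f}")
```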

6. (Problem 4.35)

(a) Solution: The extinction probability satisfies
$$\pi_0 = \sum_{j=0}^{\infty} e^{-\lambda}\,\frac{\lambda^j}{j!}\,\pi_0^{j} = e^{-\lambda} e^{\lambda \pi_0}.$$
Therefore
$$\pi_0\,e^{-\lambda \pi_0} = e^{-\lambda},$$
and multiplying both sides by $\lambda$ gives $(\lambda\pi_0)\,e^{-\lambda\pi_0} = \lambda e^{-\lambda}$, implying that $a = \lambda\pi_0$.
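Numerically, $\pi_0$ can be found by iterating the fixed-point equation, after which $a = \lambda\pi_0$ can be checked against $a e^{-a} = \lambda e^{-\lambda}$ (a sketch; $\lambda = 2$ and the iteration count are arbitrary choices):

```python
import math

lam = 2.0
pi0 = 0.5
for _ in range(200):              # iterate pi_0 = e^{lam(pi_0 - 1)}
    pi0 = math.exp(lam * (pi0 - 1))
a = lam * pi0
print(f"pi_0 ~ {pi0:.6f}, a = lam * pi_0 = {a:.6f}")
print(f"a*e^-a = {a * math.exp(-a):.6f} vs lam*e^-lam = {lam * math.exp(-lam):.6f}")
```

The iteration converges to the smaller root because the map's slope there is $\lambda\pi_0 = a < 1$.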

(b) Let $X_n$ denote the population size of the $n$th generation. Then
$$P\{X_{n+1} = j \mid X_n = i,\ \text{eventual extinction}\} = \frac{P\{X_{n+1} = j,\ \text{extinction} \mid X_n = i\}}{P\{\text{extinction} \mid X_n = i\}} = \frac{P\{X_{n+1} = j \mid X_n = i\}\,P\{\text{extinction} \mid X_{n+1} = j\}}{P\{\text{extinction} \mid X_n = i\}}$$
$$= e^{-\lambda i}\,\frac{(\lambda i)^{j}}{j!}\cdot\frac{\pi_0^{j}}{\pi_0^{i}} = e^{-\lambda i}\,\frac{(\lambda i)^{j}}{j!}\,\pi_0^{\,j-i}.$$
On the other hand, if the number of offspring per individual were Poisson with mean $a$, then by part (a)
$$P\{X_{n+1} = j \mid X_n = i\} = e^{-ia}\,\frac{(ia)^{j}}{j!} = e^{-i\lambda\pi_0}\,\frac{(i\lambda\pi_0)^{j}}{j!} = e^{-\lambda i}\,\pi_0^{-i}\,\frac{(\lambda i)^{j}\,\pi_0^{j}}{j!} = e^{-\lambda i}\,\frac{(\lambda i)^{j}}{j!}\,\pi_0^{\,j-i},$$
where the third equality uses $e^{-\lambda\pi_0} = e^{-\lambda}/\pi_0$ from part (a). The two transition laws therefore agree.
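The algebra can be spot-checked numerically: the conditioned transition probabilities should coincide with Poisson$(ia)$ probabilities (a sketch reusing the fixed-point iteration from part (a); $\lambda$, $i$, and the range of $j$ are arbitrary choices):

```python
import math

lam = 2.0
pi0 = 0.5
for _ in range(200):              # pi_0 = e^{lam(pi_0 - 1)} as in part (a)
    pi0 = math.exp(lam * (pi0 - 1))
a = lam * pi0

def pois(mean, j):
    """Poisson(mean) probability mass at j."""
    return math.exp(-mean) * mean**j / math.factorial(j)

i = 3
for j in range(6):
    conditioned = pois(lam * i, j) * pi0**(j - i)  # e^{-lam i}(lam i)^j/j! * pi0^{j-i}
    print(j, round(conditioned, 6), round(pois(a * i, j), 6))
```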
