
STA 6326 Homework 3 Solutions

Maksim Levental

September 30, 2014

2.6 (a) If $-\infty < x < 0$ then $Y = g(X) = |X|^3 = (-X)^3$ and hence $g^{-1}(y) = -\sqrt[3]{y}$ and
\[
\left|\frac{d}{dy} g^{-1}(y)\right| = \left|-\frac{1}{3}\,y^{-2/3}\right| = \frac{1}{3\sqrt[3]{y^2}},
\]
and if $0 \le x < \infty$ then $Y = g(X) = |X|^3 = X^3$ and hence $g^{-1}(y) = \sqrt[3]{y}$ and
\[
\left|\frac{d}{dy} g^{-1}(y)\right| = \frac{1}{3\sqrt[3]{y^2}}.
\]
Therefore, with $f_X(x) = \frac{1}{2}e^{-|x|}$, for $0 \le y < \infty$
\[
f_Y(y) = \frac{e^{-\sqrt[3]{y}}}{6\sqrt[3]{y^2}} + \frac{e^{-\sqrt[3]{y}}}{6\sqrt[3]{y^2}} = \frac{e^{-\sqrt[3]{y}}}{3\sqrt[3]{y^2}}.
\]
Let $u = \sqrt[3]{y}$; then
\[
\int_0^\infty f_Y(y)\,dy = \int_0^\infty \frac{e^{-\sqrt[3]{y}}}{3\sqrt[3]{y^2}}\,dy = \int_0^\infty e^{-\sqrt[3]{y}}\,d\!\left(\sqrt[3]{y}\right) = \int_0^\infty e^{-u}\,du = 1.
\]
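As a sanity check on this density (my own addition, not part of the original solution; it assumes $f_X(x) = \frac{1}{2}e^{-|x|}$ as above, so that $|X| \sim$ Exponential$(1)$), one can sample $Y = |X|^3$ and compare the empirical CDF against the implied $F_Y(y) = 1 - e^{-\sqrt[3]{y}}$:

```python
import math
import random

# Monte Carlo check of f_Y for Y = |X|^3 with X standard double exponential.
# |X| ~ Exponential(1), so F_Y(y) = 1 - exp(-y^(1/3)).
random.seed(0)
n = 200_000
ys = [random.expovariate(1.0) ** 3 for _ in range(n)]

for y0 in (0.5, 1.0, 8.0):
    empirical = sum(y <= y0 for y in ys) / n
    theoretical = 1 - math.exp(-y0 ** (1 / 3))
    assert abs(empirical - theoretical) < 0.01
```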

(b) $Y = g(X) = 1 - X^2$ with $f_X(x) = \frac{3}{8}(x+1)^2$ on $-1 < x < 1$. For $-1 < x < 0$, $g^{-1}(y) = -\sqrt{1-y}$ and
\[
\left|\frac{d}{dy}\left(-\sqrt{1-y}\right)\right| = \frac{1}{2\sqrt{1-y}},
\]
and for $0 \le x < 1$, $g^{-1}(y) = \sqrt{1-y}$ and
\[
\left|\frac{d}{dy}\sqrt{1-y}\right| = \frac{1}{2\sqrt{1-y}},
\]
and therefore for $0 \le y < 1$
\[
f_Y(y) = \frac{3\left(-\sqrt{1-y}+1\right)^2}{16\sqrt{1-y}} + \frac{3\left(\sqrt{1-y}+1\right)^2}{16\sqrt{1-y}} = \frac{3}{8}\cdot\frac{2-y}{\sqrt{1-y}}.
\]

Let $u = \sqrt{1-y}$; then $u^2 = 1-y$, $y = 1-u^2$, and $dy = -2u\,du$ (which is of course just going back to $X$ space). Hence
\[
\int_0^1 f_Y(y)\,dy = \int_0^1 \frac{3}{8}\cdot\frac{2-y}{\sqrt{1-y}}\,dy = \frac{3}{8}\cdot 2\int_0^1 \left(1+u^2\right)du = \frac{3}{4}\left(1 + \frac{1}{3}\right) = \frac{3}{4}\cdot\frac{4}{3} = 1.
\]
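As a numerical check (my addition; it assumes the density $f_X(x) = \frac{3}{8}(x+1)^2$ used above, whose CDF $F_X(x) = (x+1)^3/8$ is easy to invert), the same $u$-substitution gives the closed-form CDF $F_Y(y) = \frac{3}{4}\left(\frac{4}{3} - \sqrt{1-y} - \frac{(1-y)^{3/2}}{3}\right)$, which can be compared with simulation:

```python
import math
import random

# Monte Carlo check; F_X(x) = (x+1)^3 / 8 inverts to x = 2 u^(1/3) - 1.
random.seed(0)
n = 200_000
ys = [1 - (2 * random.random() ** (1 / 3) - 1) ** 2 for _ in range(n)]

def F_Y(y):
    u = math.sqrt(1 - y)
    return 0.75 * (4 / 3 - u - u ** 3 / 3)

for y0 in (0.25, 0.5, 0.9):
    empirical = sum(y <= y0 for y in ys) / n
    assert abs(empirical - F_Y(y0)) < 0.01
```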

2.11 $X$ has pdf $f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}$.

(a) Integrating by parts with $u = x$ and $dv = x e^{-x^2/2}\,dx$,
\[
\begin{aligned}
E\left(X^2\right) &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty x^2 e^{-x^2/2}\,dx\\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty x\cdot x e^{-x^2/2}\,dx\\
&= \frac{1}{\sqrt{2\pi}}\left(\left[-x e^{-x^2/2}\right]_{-\infty}^{\infty} + \int_{-\infty}^\infty e^{-x^2/2}\,dx\right)\\
&= \frac{1}{\sqrt{2\pi}}\left(0 + \int_{-\infty}^\infty e^{-x^2/2}\,dx\right)\\
&= 1.
\end{aligned}
\]
Alternatively, let $Y = X^2$. By Example 2.1.7,
\[
f_Y(y) = \frac{1}{2\sqrt{2\pi y}}\left(e^{-y/2} + e^{-y/2}\right) = \frac{1}{\sqrt{2\pi y}}\,e^{-y/2},
\]
and since $0 < y < \infty$,
\[
E(Y) = \int_0^\infty y f_Y(y)\,dy = \frac{1}{\sqrt{2\pi}}\int_0^\infty \sqrt{y}\,e^{-y/2}\,dy = \frac{2}{\sqrt{2\pi}}\int_0^\infty \left(\sqrt{y}\right)^2 e^{-\left(\sqrt{y}\right)^2/2}\,d\!\left(\sqrt{y}\right) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty t^2 e^{-t^2/2}\,dt = 1,
\]
using $\sqrt{y}\,dy = 2\left(\sqrt{y}\right)^2 d\!\left(\sqrt{y}\right)$ and the symmetry of the integrand.

(b) Let $Y = |X|$. The support of $Y$ is $0 < y < \infty$. If $-\infty < x < 0$ then $Y = -X$ and $g^{-1}(y) = -y$, else if $0 \le x < \infty$ then $Y = X$ and $g^{-1}(y) = y$, and therefore
\[
f_Y(y) = \frac{1}{\sqrt{2\pi}}\left(e^{-(-y)^2/2}\,|-1| + e^{-y^2/2}\,|1|\right) = \frac{2e^{-y^2/2}}{\sqrt{2\pi}}.
\]
Then
\[
E(Y) = \frac{2}{\sqrt{2\pi}}\int_0^\infty y e^{-y^2/2}\,dy = \frac{2}{\sqrt{2\pi}}\left[-e^{-y^2/2}\right]_0^\infty = \frac{2}{\sqrt{2\pi}} = \sqrt{\frac{2}{\pi}}.
\]

(c) $\operatorname{Var}(Y) = E\left(Y^2\right) - \left(E(Y)\right)^2$, and
\[
E\left(Y^2\right) = \frac{2}{\sqrt{2\pi}}\int_0^\infty y^2 e^{-y^2/2}\,dy = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty y^2 e^{-y^2/2}\,dy = 1
\]
by part (a). Hence $\operatorname{Var}(Y) = 1 - \frac{2}{\pi}$.
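These three facts ($f_Y$, $E(Y) = \sqrt{2/\pi}$, $\operatorname{Var}(Y) = 1 - 2/\pi$) are easy to confirm by simulation (a sketch of my own, not part of the solution):

```python
import math
import random

# Monte Carlo check of E|Z| and Var|Z| for Z ~ N(0, 1).
random.seed(0)
n = 200_000
ys = [abs(random.gauss(0, 1)) for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n

assert abs(mean - math.sqrt(2 / math.pi)) < 0.01
assert abs(var - (1 - 2 / math.pi)) < 0.01
```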

2.12 $Y = g(X) = d\tan(X)$ with $X$ uniform on $(0, \pi/2)$. $g(X)$ is increasing for $0 < x < \pi/2$ and $g^{-1}(y) = \arctan(y/d)$. Hence
\[
\frac{d}{dy}\arctan(y/d) = \frac{1}{1 + (y/d)^2}\cdot\frac{1}{d}
\]
and therefore
\[
f_Y(y) = \frac{2}{\pi}\cdot\frac{1}{1 + (y/d)^2}\cdot\frac{1}{d}
\]
with support $y \in (0, \infty)$. This is (half of) a Cauchy distribution, hence $E(Y) = \infty$.
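Since $F_Y(y) = \frac{2}{\pi}\arctan(y/d)$, a quick simulation (my own addition; the value $d = 3$ is arbitrary) confirms the distribution even though the mean does not exist:

```python
import math
import random

# Monte Carlo check of F_Y(y) = (2/pi) arctan(y/d) for Y = d tan(X),
# X uniform on (0, pi/2); d = 3 is an arbitrary choice.
random.seed(0)
d = 3.0
n = 200_000
ys = [d * math.tan(random.uniform(0, math.pi / 2)) for _ in range(n)]

for y0 in (d, 5 * d):
    empirical = sum(y <= y0 for y in ys) / n
    theoretical = (2 / math.pi) * math.atan(y0 / d)
    assert abs(empirical - theoretical) < 0.01
```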

2.13 Given that the first flip lands heads, the number of heads until the first tail is geometrically distributed: $P_H(X = k) = p^{k-1}(1-p)$, restricted to $k = 1, 2, 3, \ldots$. Given that the first flip lands tails, the number of tails until the first head is also geometrically distributed: $P_T(X = k) = (1-p)^{k-1}p$, again restricted to $k = 1, 2, 3, \ldots$. Therefore, conditioning on the first flip, the probability that there's either a run of $k$ heads or a run of $k$ tails is
\[
P(X = k) = p\,P_H(X = k) + (1-p)\,P_T(X = k) = p^k(1-p) + (1-p)^k p.
\]
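A simulation of the run length (my own sketch; $p = 0.3$ is arbitrary) agrees with this pmf:

```python
import random

# Simulate the run begun by the first flip; p = 0.3 is arbitrary.
random.seed(0)
p = 0.3
n = 100_000

def run_length():
    first = random.random() < p          # True = heads
    k = 1
    while (random.random() < p) == first:
        k += 1
    return k

counts = {}
for _ in range(n):
    k = run_length()
    counts[k] = counts.get(k, 0) + 1

for k in (1, 2, 3):
    empirical = counts.get(k, 0) / n
    theoretical = p ** k * (1 - p) + (1 - p) ** k * p
    assert abs(empirical - theoretical) < 0.01
```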

and
\[
\begin{aligned}
E(X) &= \sum_{k=1}^\infty k\left(p^k(1-p) + (1-p)^k p\right)\\
&= p\sum_{k=1}^\infty k\,p^{k-1}(1-p) + (1-p)\sum_{k=1}^\infty k\,(1-p)^{k-1}p\\
&= p\,E(H) + (1-p)\,E(T)\\
&= \frac{p}{1-p} + \frac{1-p}{p},
\end{aligned}
\]
where $E(H) = \frac{1}{1-p}$ and $E(T) = \frac{1}{p}$ are the means of the two geometric distributions.

2.14

(a)
\[
E(X) = \int_0^\infty x f_X(x)\,dx;
\]
let $u = F_X(x)$, and since $F_X$ is strictly monotonic,
\[
E(X) = \int_0^1 F_X^{-1}(u)\,du = \int_0^\infty \left(1 - F_X(x)\right)dx,
\]
the last equality holding because both integrals give the area of the region between the graph of $F_X$ and the horizontal line $u = 1$.
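The identity is easy to test numerically on a distribution where both sides are known; the sketch below (my own addition) uses an Exponential($\lambda$) example, where $1 - F_X(x) = e^{-\lambda x}$ and $E(X) = 1/\lambda$:

```python
import math

# Check E(X) = integral of (1 - F_X(x)) dx for X ~ Exponential(lam),
# where 1 - F_X(x) = exp(-lam * x) and E(X) = 1 / lam.
lam = 2.0
dx = 1e-4
steps = int(20 / dx)
# midpoint rule on [0, 20]; the tail beyond 20 is negligible for lam = 2
integral = sum(math.exp(-lam * (i + 0.5) * dx) for i in range(steps)) * dx

assert abs(integral - 1 / lam) < 1e-6
```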

(b) First note that $x = \sum_{k=1}^x 1$. Then
\[
\begin{aligned}
E(X) &= \sum_{x=0}^\infty x f_X(x) = \sum_{x=1}^\infty x f_X(x)\\
&= \sum_{x=1}^\infty \sum_{k=1}^x f_X(x)\\
&= \sum_{k=1}^\infty \sum_{x=k}^\infty f_X(x) \quad\text{(the interchange is justified since } 1 \le k \le x < \infty \text{ and all terms are nonnegative)}\\
&= \sum_{k=1}^\infty P(X \ge k)\\
&= \sum_{k=0}^\infty P(X > k)\\
&= \sum_{k=0}^\infty \left(1 - F_X(k)\right).
\end{aligned}
\]
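The discrete identity can be checked exactly with rational arithmetic on a small, arbitrary pmf (my own example):

```python
from fractions import Fraction as Fr

# An arbitrary pmf on {0, 1, 2, 3}; both sides computed exactly.
pmf = {0: Fr(1, 10), 1: Fr(2, 10), 2: Fr(3, 10), 3: Fr(4, 10)}

mean = sum(x * p for x, p in pmf.items())

F, acc = {}, Fr(0)
for x in sorted(pmf):
    acc += pmf[x]
    F[x] = acc

tail_sum = sum(1 - F[k] for k in range(max(pmf) + 1))
assert mean == tail_sum == Fr(2)
```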

2.17 $m$ is such that $m = F_X^{-1}(1/2)$.

(a)
\[
\int_0^m 3x^2\,dx = m^3 = \frac{1}{2},
\]
and therefore $m = \left(\frac{1}{2}\right)^{1/3}$.

(b)
\[
\frac{1}{2} = \frac{1}{\pi}\int_{-\infty}^m \frac{1}{1+x^2}\,dx = \frac{1}{\pi}\left(\arctan(m) - \arctan(-\infty)\right) = \frac{\arctan(m)}{\pi} + \frac{1}{2}.
\]
Therefore $m = \tan(0) = 0$.
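Both medians can be recovered numerically (a sketch of my own, not part of the solution):

```python
import math

# (a) F(m) = m^3 on (0, 1): the median solves m^3 = 1/2.
m_a = 0.5 ** (1 / 3)
assert abs(m_a ** 3 - 0.5) < 1e-12

# (b) Cauchy: F(m) = arctan(m)/pi + 1/2; find the median by bisection.
def F(m):
    return math.atan(m) / math.pi + 0.5

lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if F(mid) < 0.5:
        lo = mid
    else:
        hi = mid
m_b = (lo + hi) / 2
assert abs(m_b) < 1e-9
```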

2.20 The number of children is geometrically distributed, $P(X = k) = (1-p)^{k-1}p$ (the number of trials until the first success, including the first success), with $p = 1/2$. The mean is $\mu = 1/p = 2$. Therefore the couple, on average, should have two children.
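A quick simulation of the geometric mean (my own addition) confirms this:

```python
import random

# Number of children = geometric trials until the first success, p = 1/2.
random.seed(0)
n = 200_000
total = 0
for _ in range(n):
    trials = 1
    while random.random() >= 0.5:    # failure with probability 1 - p = 1/2
        trials += 1
    total += trials
mean = total / n

assert abs(mean - 2) < 0.02
```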

2.22 (a) Note that $\int_0^\infty e^{-\alpha x^2}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}$. Let $\alpha = \frac{1}{\beta^2}$ under the integral; then, differentiating under the integral sign,
\[
\begin{aligned}
\int_0^\infty f_X(x)\,dx &= \frac{4}{\beta^3\sqrt{\pi}}\int_0^\infty x^2 e^{-x^2/\beta^2}\,dx\\
&= \frac{4}{\beta^3\sqrt{\pi}}\int_0^\infty -\frac{\partial}{\partial\alpha}\,e^{-\alpha x^2}\,dx\\
&= -\frac{4}{\beta^3\sqrt{\pi}}\,\frac{d}{d\alpha}\int_0^\infty e^{-\alpha x^2}\,dx\\
&= -\frac{4}{\beta^3\sqrt{\pi}}\,\frac{d}{d\alpha}\,\frac{\sqrt{\pi}}{2}\,\alpha^{-1/2}\\
&= \frac{4}{\beta^3\sqrt{\pi}}\,\frac{\sqrt{\pi}}{4}\,\alpha^{-3/2}\\
&= \frac{1}{\beta^3}\left(\beta^2\right)^{3/2} = 1.
\end{aligned}
\]

(b) Note that $\int_0^\infty x e^{-\alpha x^2}\,dx = \frac{1}{2\alpha}$ ($u$-substitution). Let $\alpha = \frac{1}{\beta^2}$ under the integral; then
\[
\begin{aligned}
E(X) &= \frac{4}{\beta^3\sqrt{\pi}}\int_0^\infty x^3 e^{-x^2/\beta^2}\,dx\\
&= \frac{4}{\beta^3\sqrt{\pi}}\int_0^\infty -\frac{\partial}{\partial\alpha}\,x e^{-\alpha x^2}\,dx\\
&= -\frac{4}{\beta^3\sqrt{\pi}}\,\frac{d}{d\alpha}\,\frac{1}{2\alpha}\\
&= \frac{4}{\beta^3\sqrt{\pi}}\,\frac{1}{2\alpha^2}\\
&= \frac{4}{\beta^3\sqrt{\pi}}\,\frac{\beta^4}{2} = \frac{2\beta}{\sqrt{\pi}}.
\end{aligned}
\]

The second moment is
\[
\begin{aligned}
E\left(X^2\right) &= \frac{4}{\beta^3\sqrt{\pi}}\int_0^\infty x^4 e^{-x^2/\beta^2}\,dx\\
&= \frac{4}{\beta^3\sqrt{\pi}}\int_0^\infty \frac{\partial^2}{\partial\alpha^2}\,e^{-\alpha x^2}\,dx\\
&= \frac{4}{\beta^3\sqrt{\pi}}\,\frac{d^2}{d\alpha^2}\int_0^\infty e^{-\alpha x^2}\,dx\\
&= \frac{4}{\beta^3\sqrt{\pi}}\,\frac{d^2}{d\alpha^2}\,\frac{\sqrt{\pi}}{2}\,\alpha^{-1/2}\\
&= \frac{4}{\beta^3\sqrt{\pi}}\,\frac{\sqrt{\pi}}{2}\cdot\frac{3}{4}\,\alpha^{-5/2}\\
&= \frac{3}{2\beta^3}\,\beta^5 = \frac{3}{2}\beta^2.
\end{aligned}
\]
Hence $\operatorname{Var}(X) = \frac{3}{2}\beta^2 - \left(\frac{2\beta}{\sqrt{\pi}}\right)^2 = \beta^2\left(\frac{3}{2} - \frac{4}{\pi}\right)$.
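The mass, mean, and second moment can all be verified by numerical integration (my own sketch; $\beta = 1.7$ is arbitrary):

```python
import math

# Numerical check of the moments of
# f_X(x) = 4/(beta^3 sqrt(pi)) * x^2 * exp(-x^2/beta^2), x > 0.
beta = 1.7
dx = 1e-3
steps = int(12 * beta / dx)     # integrate out to 12*beta; the tail is negligible

mass = mean = second = 0.0
for i in range(steps):
    x = (i + 0.5) * dx          # midpoint rule
    fx = 4 / (beta ** 3 * math.sqrt(math.pi)) * x * x * math.exp(-(x / beta) ** 2)
    mass += fx * dx
    mean += x * fx * dx
    second += x * x * fx * dx

assert abs(mass - 1) < 1e-4
assert abs(mean - 2 * beta / math.sqrt(math.pi)) < 1e-4
assert abs(second - 1.5 * beta ** 2) < 1e-4
```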

2.23 (a) With $f_X(x) = \frac{1+x}{2}$ on $-1 < x < 1$ and $Y = X^2$,
\[
f_Y(y) = \frac{1}{2\sqrt{y}}\left(f_X\!\left(\sqrt{y}\right) + f_X\!\left(-\sqrt{y}\right)\right) = \frac{1}{4\sqrt{y}}\left(1 + \sqrt{y} + 1 - \sqrt{y}\right) = \frac{1}{2\sqrt{y}}, \qquad 0 < y < 1.
\]

(b)
\[
E(Y) = \frac{1}{2}\int_0^1 \frac{y}{\sqrt{y}}\,dy = \frac{1}{2}\int_0^1 \sqrt{y}\,dy = \frac{1}{3}
\quad\text{and}\quad
E\left(Y^2\right) = \frac{1}{2}\int_0^1 y^{3/2}\,dy = \frac{1}{5},
\]
therefore $\operatorname{Var}(Y) = \frac{1}{5} - \frac{1}{9} = \frac{4}{45}$.
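A simulation (my own addition; it inverts $F_X(x) = (x+1)^2/4$ to sample $X$) confirms both moments:

```python
import random

# F_X(x) = (x+1)^2 / 4 on (-1, 1) inverts to x = 2 sqrt(u) - 1; Y = X^2.
random.seed(0)
n = 200_000
ys = []
for _ in range(n):
    x = 2 * random.random() ** 0.5 - 1
    ys.append(x * x)
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n

assert abs(mean - 1 / 3) < 0.01
assert abs(var - 4 / 45) < 0.01
```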

2.24 (a) With $f_X(x) = a x^{a-1}$ on $0 < x < 1$,
\[
E(X) = a\int_0^1 x^a\,dx = \frac{a}{a+1}
\quad\text{and}\quad
E\left(X^2\right) = a\int_0^1 x^{a+1}\,dx = \frac{a}{a+2},
\]
therefore
\[
\operatorname{Var}(X) = E\left(X^2\right) - \left(E(X)\right)^2 = \frac{a}{a+2} - \frac{a^2}{(a+1)^2} = \frac{a}{(a+1)^2(a+2)}.
\]

(b) With $f_X(x) = \frac{1}{n}$ for $x = 1, \ldots, n$,
\[
E(X) = \frac{1}{n}\sum_{k=1}^n k = \frac{1}{n}\cdot\frac{n(n+1)}{2} = \frac{1}{2}(n+1)
\quad\text{and}\quad
E\left(X^2\right) = \frac{1}{n}\sum_{k=1}^n k^2.
\]
But
\[
k^3 - (k-1)^3 = k^3 - \left(k^3 - 3k^2 + 3k - 1\right) = 3k^2 - 3k + 1,
\]
and therefore, telescoping,
\[
\begin{aligned}
\frac{1}{n}\sum_{k=1}^n k^2 &= \frac{1}{3n}\sum_{k=1}^n \left(k^3 - (k-1)^3 + 3k - 1\right)\\
&= \frac{1}{3n}\left(\sum_{k=1}^n \left(k^3 - (k-1)^3\right) + \sum_{k=1}^n (3k - 1)\right)\\
&= \frac{1}{3n}\left(n^3 + 3\,\frac{n(n+1)}{2} - n\right)\\
&= \frac{1}{6}\left(2n^2 + 3n + 1\right) = \frac{1}{6}(2n+1)(n+1).
\end{aligned}
\]
Hence
\[
\operatorname{Var}(X) = \frac{1}{6}(2n+1)(n+1) - \frac{1}{4}(n+1)^2 = \frac{n+1}{12}\left(2(2n+1) - 3(n+1)\right) = \frac{1}{12}(n+1)(n-1) = \frac{n^2-1}{12}.
\]

(c) TODO
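The discrete-uniform formulas in (b) can be verified exactly with rational arithmetic (my own addition):

```python
from fractions import Fraction as Fr

# Exact check of E(X), E(X^2), and Var(X) for the discrete uniform on 1..n.
for n in (1, 2, 5, 10, 100):
    ks = range(1, n + 1)
    mean = Fr(sum(ks), n)
    second = Fr(sum(k * k for k in ks), n)
    assert mean == Fr(n + 1, 2)
    assert second == Fr((2 * n + 1) * (n + 1), 6)
    assert second - mean ** 2 == Fr(n * n - 1, 12)
```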

2.25 (a) Let $Y = g(X) = -X$; then $g^{-1}(y) = -y$ and $\left|\frac{d}{dy}(-y)\right| = 1$. Hence $f_Y(y) = f_X(-y)\cdot 1 = f_X(y)$, since $f_X$ is symmetric about $0$; so $X$ and $-X$ are identically distributed.

(b)
\[
E\left(e^{tX}\right) = \int e^{tx} f_X(x)\,dx = \int e^{-t(-x)} f_X(x)\,dx.
\]
Let $Y = -X$. Then by part (a),
\[
\int e^{-t(-x)} f_X(x)\,dx = \int e^{-ty} f_Y(y)\,dy = \int e^{-ty} f_X(y)\,dy = E\left(e^{-tX}\right).
\]
Hence $E\left(e^{tX}\right) = E\left(e^{-tX}\right)$; the mgf is symmetric in $t$.

2.26 (a) $N(0,1)$, the standard Cauchy, Student's $t$.

(b) Splitting the integral at $a$ and substituting $x = a - \varepsilon$ in the lower piece and $x = a + \varepsilon$ in the upper piece,
\[
1 = \int_{-\infty}^\infty f_X(x)\,dx = \int_{-\infty}^a f_X(x)\,dx + \int_a^\infty f_X(x)\,dx = \int_0^\infty f_X(a-\varepsilon)\,d\varepsilon + \int_0^\infty f_X(a+\varepsilon)\,d\varepsilon,
\]
and since $f_X(a+\varepsilon) = f_X(a-\varepsilon)$ by symmetry, the two pieces are equal and each equals $\frac{1}{2}$. Hence $P(X \le a) = \frac{1}{2}$, i.e., $a$ is a median.

3.2 (a) The probability that $0$ items in $k$ draws are defective, if $6$ are defective in $100$, is
\[
P(X = 0) = \frac{\binom{6}{0}\binom{94}{k}}{\binom{100}{k}} = \frac{\frac{94!}{k!(94-k)!}}{\frac{100!}{k!(100-k)!}} = \frac{(100-k)(99-k)(98-k)(97-k)(96-k)(95-k)}{100\cdot 99\cdot 98\cdot 97\cdot 96\cdot 95} \le .10.
\]
Solving $P(X = 0) \le .10$ numerically yields $k \ge 32$. So to detect $6$ defectives in a batch of $100$ you need at least $32$ draws; as the number of defectives goes up this number decreases, so $32$ draws suffice in every case.
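The numerical solution is a one-liner (my own addition):

```python
from math import comb

# Smallest k with P(X = 0) = C(94, k) / C(100, k) <= 0.10
def p_zero(k, defectives=6, total=100):
    return comb(total - defectives, k) / comb(total, k)

k = next(k for k in range(1, 95) if p_zero(k) <= 0.10)
assert k == 32
assert p_zero(31) > 0.10 >= p_zero(32)
```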

(b) With $1$ defective in $100$,
\[
P(X = 0) = \frac{\binom{1}{0}\binom{99}{k}}{\binom{100}{k}} = \frac{\frac{99!}{k!(99-k)!}}{\frac{100!}{k!(100-k)!}} = \frac{100-k}{100} \le .10.
\]
Therefore $k \ge 90$.

3.4

(a) The number of “flips” until success (finding the right key) is geometrically distributed with success probability $1/n$ and failure probability $(n-1)/n$. Therefore the mean number of trials is $\frac{1}{1/n} = n$.

(b) There are $n!$ permutations of the keys (assuming they're all distinct) and $n$ different positions in any permutation that the correct key could be in. There are $\binom{n-1}{k-1}$ ways to choose the $k-1$ keys that precede the correct key, $(k-1)!$ different permutations of keys that could precede the correct key, and $(n-k)!$ permutations of keys that could succeed the correct key. Therefore the probability that the correct key is in the $k$th position is
\[
P(X = k) = \frac{\binom{n-1}{k-1}(k-1)!\,(n-k)!}{n!} = \frac{(n-1)!}{n!} = \frac{1}{n},
\]
and then $E(X) = \frac{n+1}{2}$, i.e. in the middle, as you'd expect.
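For a small $n$ this can be checked exhaustively (my own addition):

```python
from fractions import Fraction as Fr
from itertools import permutations

# Exhaustive check for n = 5: the correct key (labeled 0) is equally
# likely to sit in any of the n positions, and E(X) = (n + 1) / 2.
n = 5
counts = [0] * (n + 1)                 # counts[k]: key 0 in position k
for perm in permutations(range(n)):
    counts[perm.index(0) + 1] += 1

total = sum(counts)
assert all(Fr(counts[k], total) == Fr(1, n) for k in range(1, n + 1))
mean = Fr(sum(k * counts[k] for k in range(1, n + 1)), total)
assert mean == Fr(n + 1, 2)
```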

3.7 $P(X = k) = e^{-\lambda}\lambda^k/k!$ implies
\[
P(X \ge 2) = e^{-\lambda}\sum_{k=2}^\infty \frac{\lambda^k}{k!} = e^{-\lambda}\left(\sum_{k=0}^\infty \frac{\lambda^k}{k!} - \lambda - 1\right) = e^{-\lambda}\left(e^\lambda - \lambda - 1\right) = 1 - e^{-\lambda} - \lambda e^{-\lambda}.
\]
Therefore
\[
P(X \ge 2) \ge .99 \iff 1 - e^{-\lambda} - \lambda e^{-\lambda} \ge .99 \iff \lambda \ge 6.63835.
\]
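The threshold can be recovered by bisection, since $P(X \ge 2)$ is increasing in $\lambda$ (my own sketch):

```python
import math

# P(X >= 2) = 1 - e^{-lam} - lam e^{-lam} is increasing in lam,
# so the 0.99 threshold can be found by bisection.
def p_at_least_two(lam):
    return 1 - math.exp(-lam) - lam * math.exp(-lam)

lo, hi = 0.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if p_at_least_two(mid) < 0.99:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

assert abs(lam - 6.63835) < 1e-3
```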

3.10 (a) The probability of choosing $4$ packets of cocaine out of all $496$ packets is
\[
\frac{\binom{N}{4}}{\binom{N+M}{4}}.
\]
The probability of then choosing $2$ noncocaine packets out of all the rest is
\[
\frac{\binom{M}{2}}{\binom{N+M-4}{2}}.
\]
Therefore, by the multiplication rule, the probability of choosing $4$ packets of cocaine and then $2$ packets of noncocaine is
\[
\frac{\binom{N}{4}\binom{M}{2}}{\binom{N+M}{4}\binom{N+M-4}{2}}.
\]

(b) Expanding the binomial coefficients, with $N = n$ cocaine and $M = m$ noncocaine packets,
\[
\frac{\binom{n}{4}\binom{m}{2}}{\binom{n+m}{4}\binom{n+m-4}{2}} = \frac{m(m-1)\,n(n-1)(n-2)(n-3)}{(m+n)(m+n-1)(m+n-2)(m+n-3)(m+n-4)(m+n-5)}.
\]
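The expansion can be verified exactly for a few $(n, m)$ pairs (my own addition; the pairs are arbitrary):

```python
from fractions import Fraction as Fr
from math import comb

# Verify the expanded form against the binomial-coefficient form exactly.
for n, m in ((4, 492), (10, 7), (6, 6)):
    lhs = Fr(comb(n, 4) * comb(m, 2), comb(n + m, 4) * comb(n + m - 4, 2))
    num = m * (m - 1) * n * (n - 1) * (n - 2) * (n - 3)
    den = 1
    for i in range(6):
        den *= m + n - i
    assert lhs == Fr(num, den)
```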

3.13 (a) $P(X > 0) = \sum_{k=1}^\infty e^{-\lambda}\lambda^k/k! = \sum_{k=0}^\infty e^{-\lambda}\lambda^k/k! - e^{-\lambda} = 1 - e^{-\lambda}$, hence
\[
P(X_T = k) = \frac{e^{-\lambda}\lambda^k}{\left(1 - e^{-\lambda}\right)k!}\,I_{\{1,2,\ldots\}}(k).
\]
Then
\[
E(X_T) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\sum_{k=1}^\infty k\,\frac{\lambda^k}{k!} = \frac{e^{-\lambda}\lambda}{1 - e^{-\lambda}}\sum_{k=1}^\infty \frac{\lambda^{k-1}}{(k-1)!} = \frac{e^{-\lambda}\lambda e^{\lambda}}{1 - e^{-\lambda}} = \frac{\lambda}{1 - e^{-\lambda}}
\]
and
\[
E\left(X_T^2\right) = \frac{e^{-\lambda}}{1 - e^{-\lambda}}\sum_{k=1}^\infty k^2\,\frac{\lambda^k}{k!} = \frac{\lambda(1+\lambda)}{1 - e^{-\lambda}},
\]
since for the untruncated Poisson, differentiating the mgf twice,
\[
E\left(X^2\right) = \left.\frac{\partial^2}{\partial t^2}\,e^{\lambda\left(e^t-1\right)}\right|_{t=0} = \left.\frac{\partial}{\partial t}\,\lambda e^t e^{\lambda\left(e^t-1\right)}\right|_{t=0} = \left.\left(\lambda e^t e^{\lambda\left(e^t-1\right)} + \lambda^2 e^{2t} e^{\lambda\left(e^t-1\right)}\right)\right|_{t=0} = \lambda(1+\lambda).
\]
Finally
\[
\operatorname{Var}(X_T) = E\left(X_T^2\right) - \left(E(X_T)\right)^2 = \frac{\lambda(1+\lambda)}{1 - e^{-\lambda}} - \frac{\lambda^2}{\left(1 - e^{-\lambda}\right)^2}.
\]
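A direct-sum check of the truncated moments (my own addition; $\lambda = 2$ is arbitrary):

```python
import math

# Direct-sum check of the zero-truncated Poisson moments; lam = 2 is arbitrary.
lam = 2.0
norm = 1 - math.exp(-lam)

def pmf(k):
    return math.exp(-lam) * lam ** k / math.factorial(k) / norm

mass = sum(pmf(k) for k in range(1, 100))
mean = sum(k * pmf(k) for k in range(1, 100))
second = sum(k * k * pmf(k) for k in range(1, 100))

assert abs(mass - 1) < 1e-9
assert abs(mean - lam / norm) < 1e-9
assert abs(second - lam * (1 + lam) / norm) < 1e-9
```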

(b)

P (X = k) = k+r1 (1p) k (p) r . Note this deﬁnition is obverse from the book - p = 1p . Firstly

k

P(X > k) =

=

i=1

i=0

k + r 1

k

k + r 1

k

= 1 p r

(1 p) k (p) r

(1 p) k (p) r 0 + r 1 (1 p) 0 (p) r

0

Then P (X = k) =

and

1 r k+r1

p

k

(1 p) k (p) r I {1,2, } and

E(X T ) =

=

=

1

p

r

k k + r 1

k

k=1

k=0 k k + r 1

1

p

r(1 p)

r

k

p r+1

p k (1 p) r

p k (1 p) r

2

E(X T ) = E(X T (X T 1)) + E(X T )

=

=

=

=

=

=

1

p r

1

p r

1

p r

k=1

k=0

k=0

k(k 1) k + r 1 p k (1 p) r + E(X T )

k

(k + r 1)!

1)! p k (1 p) r + E(X T )

(k 2)!(r

((k 2) + (r + 2) 1)!

1)! p k (1 p) r + E(X T )

(k 2)!((r + 2) 2

p 2 p 2

p r ((r + 2) 3)((r + 2) 2)

k=0

((k 2) + (r + 2) 1)!

(k 2)!((r + 2) 1)!

p k2 (1 p) r+2 + E(X T )

1

p r (r 1)r

k=0 (k 2) + (r + 2) 1

k 2

p k2 (1 p) r+2 + E(X T )

1)r + r(1 p)

1

p r (r

p

r+1

= 1 + p r1 r(1 p)(r 1)

p r (r 1)r

Finally

Var(X T ) = 1 + p r1 r(1 p)(r 1)

p r (r 1)r

r(1 p)

p

r+1

2 = 1 (p 1)
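These moments can likewise be checked by direct summation (my own addition; $r = 3$, $p = 0.4$ are arbitrary):

```python
from math import comb

# Direct-sum check of the zero-truncated negative binomial moments,
# in this parametrization: P(X = k) = C(k+r-1, k) (1-p)^k p^r.
r, p = 3, 0.4
q = 1 - p
norm = 1 - p ** r

def pmf(k):
    return comb(k + r - 1, k) * q ** k * p ** r / norm

mass = sum(pmf(k) for k in range(1, 400))
mean = sum(k * pmf(k) for k in range(1, 400))
second = sum(k * k * pmf(k) for k in range(1, 400))

assert abs(mass - 1) < 1e-9
assert abs(mean - r * q / (p * norm)) < 1e-9
assert abs(second - (r * (r + 1) * q ** 2 / p ** 2 + r * q / p) / norm) < 1e-9
```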