
Indian Institute of Technology Kanpur

HSO201A Applied Probability and Statistics (2017-18-II)

ANSWERS TO PRACTICE PROBLEM SET (PPS) # 7

1. Suppose that a fair coin is flipped independently 40,000 times. Making appropriate
assumptions, show that there is at least a 99% chance that the proportion of Heads (H)
will be more than 0.475, but less than 0.525.

Ans: Since the coin is fair, the probability of Head (H) and Tail (T) at each toss
equals 0.5, i.e. P(H) = P(T) = p = 0.5. The coin is flipped independently N = 40000
times. Let X = Number of times Head (H) appears in 40000 independent flips of a fair
coin. Clearly, X is a random variable, which follows the Binomial Distribution with
parameters (N = 40000, p = 0.5).

It follows, therefore, that: μ = E(X) = Np = 40000(0.5) = 20000 and σ² = Var(X) =
Np(1 − p) = 40000(0.5)(0.5) = 10000, so that σ = √10000 = 100.

Using Chebyshev's Inequality, we can write:

P(|X − μ| ≥ kσ) ≤ 1/k²

From the above inequality, it follows that:

P(|X − μ| < kσ) = 1 − P(|X − μ| ≥ kσ) ≥ 1 − 1/k²

Or, P(μ − kσ < X < μ + kσ) ≥ 1 − 1/k²

Or, P[(μ − kσ)/N < (X/N) < (μ + kσ)/N] ≥ 1 − 1/k²    (1)

We are given that: (μ − kσ)/N = 0.475 and (μ + kσ)/N = 0.525

i.e. (20000 − k√10000)/40000 = 0.475, which implies that:

k = [20000 − (0.475)(40000)]/√10000 = 1000/100 = 10

It is easy to check that (μ + kσ)/N = (20000 + 10√10000)/40000 = 21000/40000 = 0.525

Therefore, using inequality (1) above, we obtain:

P[0.475 < (X/N) < 0.525] ≥ 1 − 1/10² = 1 − 0.01 = 0.99

Hence, there is at least a 99% chance that the proportion of Heads (H) (= (X/N)) will be
more than 0.475, but less than 0.525.
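The calculation above can be cross-checked numerically in a few lines of Python (a sketch; the variable names are ours). It also illustrates that Chebyshev's bound is quite loose here: the normal approximation to the Binomial puts the same probability at essentially 1.

```python
import math

N, p = 40000, 0.5
mu = N * p                           # E(X) = 20000
sigma = math.sqrt(N * p * (1 - p))   # sd(X) = 100

# k chosen so that (mu + k*sigma)/N = 0.525
k = (0.525 * N - mu) / sigma         # = 10
chebyshev_bound = 1 - 1 / k**2       # P(0.475 < X/N < 0.525) >= this

# Normal approximation: P(|Z| < k) for standard normal Z
normal_approx = math.erf(k / math.sqrt(2))
```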

2. Four mobile phones are selected at random from a batch consisting of five new
mobile phones, seven used mobile phones in a working condition and eight
non-working (or, defective) mobile phones. Let X be the number of new mobile phones
that are selected. Further, let Y be the number of used mobile phones in a working
condition that are selected.

(a) Find the joint probability mass function (p.m.f.) of X and Y.


Ans: Using the information given above, the joint p.m.f. of X and Y can be
expressed as:

pXY(x, y) = C(5, x) C(7, y) C(8, 4 − x − y)/C(20, 4), for integers x ≥ 0, y ≥ 0 with x + y ≤ 4,

where C(n, r) denotes the number of ways of choosing r items from n. Consequently,
the joint p.m.f. values can be calculated for all possible values of X and Y. The
table in part (b) below provides the joint p.m.f. values for all possible values
of X and Y.

(b) Find the marginal p.m.f.s of X and Y.


Ans: The joint p.m.f. table below helps us obtain the marginal p.m.f.s of X and Y:

Joint p.m.f.   Y = 0      Y = 1       Y = 2      Y = 3     Y = 4     Row Sum
of X & Y                                                             P(X = x)
X = 0          14/969     392/4845    196/1615   56/969    7/969     1365/4845
X = 1          56/969     196/969     56/323     35/969    0         455/969
X = 2          56/969     112/969     14/323     0         0         210/969
X = 3          16/969     14/969      0          0         0         10/323
X = 4          1/969      0           0          0         0         1/969
Column Sum
P(Y = y)       143/969    2002/4845   546/1615   91/969    7/969     1

In the table above, the “Row Sums” give the marginal p.m.f. values for different
values of X, while the “Column Sums” provide the marginal p.m.f. values for
different values of Y.

(c) Find the conditional p.m.f. of X, given that Y = 1.


Ans: P(X = 0 | Y = 1) = pXY(0, 1)/P(Y = 1) = (392/4845)/(2002/4845) = 392/2002 = 28/143. Similarly,

P(X = 1 | Y = 1) = (196/969)/(2002/4845) = 980/2002 = 70/143 (final answer)

P(X = 2 | Y = 1) = (112/969)/(2002/4845) = 560/2002 = 40/143

P(X = 3 | Y = 1) = (70/4845)/(2002/4845) = 70/2002 = 5/143

and P(X = 4 | Y = 1) = 0. Note that these conditional probabilities sum to 1.
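The table entries and the conditional p.m.f. above can be verified exactly using Python's fractions module (a sketch; the helper name p_xy is ours):

```python
from math import comb
from fractions import Fraction

def p_xy(x, y):
    """Joint p.m.f. of (X, Y): choosing 4 phones at random from
    5 new, 7 used-working and 8 defective (20 in total)."""
    if x < 0 or y < 0 or x + y > 4:
        return Fraction(0)
    return Fraction(comb(5, x) * comb(7, y) * comb(8, 4 - x - y),
                    comb(20, 4))

total = sum(p_xy(x, y) for x in range(5) for y in range(5))  # must be 1
p_y1 = sum(p_xy(x, 1) for x in range(5))                     # P(Y = 1)
cond_x_given_y1 = [p_xy(x, 1) / p_y1 for x in range(4)]      # P(X = x | Y = 1)
```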
3. Let X = (X1, X2) be a continuous bivariate random vector with the joint p.d.f.:

f(x1, x2) = 2e^(−x1) e^(−2x2)    if x1 > 0 and x2 > 0

         = 0                     otherwise

Verify that the above function is a valid joint p.d.f.

Ans: It is easy to verify that: (i) f(x1, x2) ≥ 0, ∀ (x1, x2) ∈ R²

(ii) ∫∫ f(x1, x2) dx1 dx2 = (∫₀^∞ e^(−x1) dx1)(∫₀^∞ 2e^(−2x2) dx2) = (1)(1) = 1

(a) P(X1 > 1, X2 < 1) = P(X1 > 1) P(X2 < 1) = e^(−1)(1 − e^(−2)) (final answer)

(b) P(X1 < X2) = ∫₀^∞ ∫₀^(x2) 2e^(−x1) e^(−2x2) dx1 dx2 = ∫₀^∞ 2e^(−2x2)(1 − e^(−x2)) dx2 = 1 − 2/3 = 1/3

(c) P(X1 < a), where a > 0 is a constant:

P(X1 < a) = ∫₀^a e^(−x1) dx1 = (1 − e^(−a)) (final answer)
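The answers can be sanity-checked by simulation: under this density, X1 and X2 are independent Exponential variables with rates 1 and 2 respectively (a sketch using only the standard library; sample size and seed are ours):

```python
import random, math

random.seed(42)
n = 200_000
hits_a = 0   # event {X1 > 1 and X2 < 1}
hits_b = 0   # event {X1 < X2}
for _ in range(n):
    x1 = random.expovariate(1.0)   # Exponential, rate 1
    x2 = random.expovariate(2.0)   # Exponential, rate 2
    hits_a += (x1 > 1) and (x2 < 1)
    hits_b += (x1 < x2)

est_a = hits_a / n
est_b = hits_b / n
target_a = math.exp(-1) * (1 - math.exp(-2))   # answer to (a)
target_b = 1 / 3                               # answer to (b)
```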

4. The joint p.d.f. of random variables X and Y is given by:

f(x, y) = (12/5) x(2 − x − y)    if 0 < x < 1 and 0 < y < 1

        = 0                      otherwise

Verify that the above function is a valid joint p.d.f.

Ans: It is easy to verify that: (i) f(x, y) ≥ 0, ∀ (x, y) ∈ R²

(ii) ∫∫ f(x, y) dx dy = (12/5) ∫₀¹ ∫₀¹ x(2 − x − y) dx dy = (12/5)(5/12) = 1

Further, compute the conditional p.d.f. of X, given that Y = y, where 0 < y < 1.

Ans: The conditional p.d.f. of X, given that Y = y, where 0 < y < 1, is given as:

fX|Y(x|y) = f(x, y)/fY(y)    (1)

where fY(y) is the "marginal" p.d.f. of Y. Further, note that if 0 < y < 1, then:

fY(y) = ∫₀¹ (12/5) x(2 − x − y) dx = (12/5)(2/3 − y/2) = (2/5)(4 − 3y) (final answer)

Substituting for fY(y) in equation (1) above, we obtain:

fX|Y(x|y) = [(12/5) x(2 − x − y)]/[(2/5)(4 − 3y)] = 6x(2 − x − y)/(4 − 3y), if 0 < x < 1; zero otherwise. (final answer)
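A simple midpoint-rule quadrature (a sketch; names and grid size are ours) confirms both the normalization and the marginal value, e.g. fY(0.5) = (2/5)(4 − 1.5) = 1:

```python
def f(x, y):
    # joint p.d.f. on the unit square, zero elsewhere
    return (12 / 5) * x * (2 - x - y) if 0 < x < 1 and 0 < y < 1 else 0.0

n = 400
h = 1.0 / n
# midpoint-rule approximation of the double integral over (0,1) x (0,1)
total = sum(f((i + 0.5) * h, (j + 0.5) * h)
            for i in range(n) for j in range(n)) * h * h

# marginal f_Y(y) at y = 0.5, integrating out x; closed form gives 1.0
y0 = 0.5
fy = sum(f((i + 0.5) * h, y0) for i in range(n)) * h
```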

5. Let 𝑋 = (X1, X2) be a given bivariate random vector (discrete, or continuous). Let g and
h be real-valued functions of x1 and x2 respectively, where x1 and x2 are the realized
values of random variables X1 and X2 respectively. Show that if X1 and X2 are
independent, then:

(a) E(g(X1)h(X2)) = E(g(X1))E(h(X2))

Proof: Without loss of generality, let us assume that 𝑋 = (X1, X2) is a continuous
bivariate random vector (The proof is similar for the case where 𝑋 = (X1, X2) is
discrete). If X1 and X2 are independent, then, by definition, their joint p.d.f. is the
product of their marginal p.d.fs, i.e.

fX(x1, x2) = fX1(x1) fX2(x2), ∀ (x1, x2) ∈ R²    (1)

By definition of expectation, we can write:

E(g(X1)h(X2)) = ∫∫ g(x1) h(x2) fX(x1, x2) dx1 dx2

= ∫∫ g(x1) h(x2) fX1(x1) fX2(x2) dx1 dx2    (from equation (1) above)

= (∫ g(x1) fX1(x1) dx1)(∫ h(x2) fX2(x2) dx2)


= E(g(X1))E(h(X2))
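Although the proof is written for the continuous case, the discrete analogue can be checked exactly for a small example (a sketch; the particular p.m.f.s and functions g, h are ours):

```python
from fractions import Fraction

# Independent discrete X1, X2 with made-up p.m.f.s
px1 = {0: Fraction(1, 4), 1: Fraction(3, 4)}
px2 = {1: Fraction(1, 3), 2: Fraction(1, 3), 5: Fraction(1, 3)}

g = lambda x: x * x + 1     # arbitrary real-valued functions
h = lambda x: 2 * x - 3

# E[g(X1)h(X2)] computed from the product (joint) p.m.f.
lhs = sum(g(a) * h(b) * pa * pb
          for a, pa in px1.items() for b, pb in px2.items())
# E[g(X1)] * E[h(X2)]
rhs = (sum(g(a) * pa for a, pa in px1.items())
       * sum(h(b) * pb for b, pb in px2.items()))
```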

(b) 𝑀 (𝑡) = 𝑀 (𝑡)𝑀 (𝑡) , where 𝑀 (𝑡) , 𝑀 (𝑡) and 𝑀 (𝑡) are the m.g.f.s of the
random variables X1, X2 and Z = X1 + X2 respectively.

Proof: By definition, 𝑀 (𝑡) = E(𝑒 ), 𝑀 (𝑡) = E(𝑒 ) and


( )
𝑀 (𝑡) = E(𝑒 ) = E(𝑒 ) = E(𝑒 𝑒 ) (2)

Let g(X1) = e^(tX1) and h(X2) = e^(tX2). If X1 and X2 are independent, then using part
(a) above, we obtain:

E(e^(tX1) e^(tX2)) = E(e^(tX1)) E(e^(tX2)) = MX1(t) MX2(t).

Combining this with equation (2) above, we get:

MZ(t) = MX1(t) MX2(t)

(c) Z = X1 + X2 ~ N(μ1 + μ2, σ1² + σ2²), provided X1 ~ N(μ1, σ1²) and X2 ~ N(μ2, σ2²).

Proof: Recall that if X1 ~ N(μ1, σ1²), then its m.g.f. is given by:

MX1(t) = E(e^(tX1)) = e^(μ1t + σ1²t²/2)    (3)

Similarly, if X2 ~ N(μ2, σ2²), then its m.g.f. is given by:

MX2(t) = E(e^(tX2)) = e^(μ2t + σ2²t²/2)    (4)

Furthermore, if X1 and X2 are independent, then by part (b) above, we have:


MZ(t) = MX1(t) MX2(t), where Z = X1 + X2.

Substituting from equations (3) and (4) above, we obtain:

MZ(t) = (e^(μ1t + σ1²t²/2))(e^(μ2t + σ2²t²/2)) = e^((μ1 + μ2)t + (σ1² + σ2²)t²/2)    (5)

Note that MZ(t) in equation (5) above is the m.g.f. of a random variable which follows
the N(μ1 + μ2, σ1² + σ2²) distribution. Hence, Z = X1 + X2 ~ N(μ1 + μ2, σ1² + σ2²).
