We give examples of discrete and continuous joint distributions. From the joint distribution of x1, ..., xp we obtain the individual distribution (which we call the marginal distribution) of each of x1, x2, ..., xp. We define the concept of conditional distribution. We briefly study the concept of independence of random variables and its relation to uncorrelatedness. First, let us consider a few examples.

Example 1: A college has 2 specialists in long distance running, 4 specialists in tennis and 6 top level cricketers among its students. The college plans to send 3 sportsmen from the above for participating in the university sports and games. The three sportsmen are selected randomly from among the above 12. Let x1 and x2 denote respectively the number of long distance specialists and the number of tennis specialists chosen. The joint probability mass function of x1 and x2 is defined as p(i, j) = P{x1 = i, x2 = j} for i = 0, 1, 2 and j = 0, 1, 2, 3. Obtain the joint probability mass function of x1, x2.

Solution: Choosing i long distance specialists, j tennis specialists and 3 - i - j cricketers out of the C(12,3) = 220 equally likely selections gives

p(0, 0) = C(6,3)/C(12,3) = 20/220
p(0, 1) = C(4,1)C(6,2)/C(12,3) = 60/220
Similarly,

p(0, 2) = C(4,2)C(6,1)/C(12,3) = 36/220
p(0, 3) = C(4,3)/C(12,3) = 4/220
p(1, 0) = C(2,1)C(6,2)/C(12,3) = 30/220
p(1, 1) = C(2,1)C(4,1)C(6,1)/C(12,3) = 48/220
p(1, 2) = C(2,1)C(4,2)/C(12,3) = 12/220
p(1, 3) = 0, since only 3 persons are chosen in all
p(2, 0) = C(2,2)C(6,1)/C(12,3) = 6/220
p(2, 1) = C(2,2)C(4,1)/C(12,3) = 4/220
p(2, 2) = 0
p(2, 3) = 0
The values taken by x1 and x2 and the corresponding probabilities constitute the joint distribution of x1 and x2 and can be expressed in tabular form as follows:

Table 1: Joint distribution of x1 and x2

  Value of x1 \ x2 |    0        1        2        3    | Row sum
  -----------------+------------------------------------+---------
         0         |  20/220   60/220   36/220    4/220 | 120/220
         1         |  30/220   48/220   12/220      0   |  90/220
         2         |   6/220    4/220     0         0   |  10/220
  -----------------+------------------------------------+---------
     Column sum    |  56/220  112/220   48/220    4/220 |    1
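The counting above can be checked numerically. A minimal sketch in Python (the function name `p` mirrors the notation above; `math.comb` computes the binomial coefficients):

```python
from math import comb

def p(i, j):
    """P{x1 = i, x2 = j} for Example 1: choose i of the 2 runners,
    j of the 4 tennis specialists, and the remaining 3 - i - j of
    the 6 cricketers, out of all C(12, 3) equally likely teams."""
    k = 3 - i - j
    if k < 0:
        return 0.0
    return comb(2, i) * comb(4, j) * comb(6, k) / comb(12, 3)

# Rebuild Table 1 and confirm that the probabilities sum to 1.
table = {(i, j): p(i, j) for i in range(3) for j in range(4)}
```

Summing the whole table gives 1, as it must for a probability mass function.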
From the joint distribution table it is easy to write down the distributions of x1 and x2, which we call the marginal distributions of x1 and x2 respectively.

P{x1 = 0} = P{x1 = 0, x2 = 0} + P{x1 = 0, x2 = 1} + P{x1 = 0, x2 = 2} + P{x1 = 0, x2 = 3}
          = (20 + 60 + 36 + 4)/220 = 120/220.
Notice that P{x1 = 0} is the row sum corresponding to x1 = 0 in the above table, and is recorded as such. Similarly the second and third row sums are the probabilities of x1 = 1 and x1 = 2 respectively. Thus the marginal distribution of x1 is

Table 2: Marginal distribution of x1

  Value        0         1        2
  Probability  120/220   90/220   10/220
Similarly, the marginal distribution of x2 is obtained using the column sums in Table 1. Thus the marginal distribution of x2 is

Table 3: Marginal distribution of x2

  Value        0        1         2        3
  Probability  56/220   112/220   48/220   4/220
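The row-sum and column-sum reading of the marginals can be verified with a short script (a sketch; the joint pmf `p` repeats the counting argument of Example 1):

```python
from math import comb

def p(i, j):
    # Joint pmf of (x1, x2) from Example 1.
    k = 3 - i - j
    if k < 0:
        return 0.0
    return comb(2, i) * comb(4, j) * comb(6, k) / comb(12, 3)

# Marginal of x1 = row sums; marginal of x2 = column sums of Table 1.
marg_x1 = {i: sum(p(i, j) for j in range(4)) for i in range(3)}
marg_x2 = {j: sum(p(i, j) for i in range(3)) for j in range(4)}
```

Each marginal again sums to 1, since it is itself a probability distribution.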
Suppose we are given additional information that no long distance running specialist is selected, or in other words, we know that x1 = 0. Then what are the probabilities for x2 = 0, 1, 2, 3 given this additional information? Notice that we are looking for the conditional probabilities: P{x2 = j | x1 = 0} for j = 0, 1, 2, 3. We can compute them using the row corresponding to x1 = 0.
P{x2 = 0 | x1 = 0} = P(x1 = 0, x2 = 0)/P(x1 = 0) = (20/220)/(120/220) = 20/120 = 1/6
P{x2 = 1 | x1 = 0} = (60/220)/(120/220) = 60/120 = 1/2
P{x2 = 2 | x1 = 0} = (36/220)/(120/220) = 36/120 = 3/10
P{x2 = 3 | x1 = 0} = (4/220)/(120/220) = 4/120 = 1/30
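Dividing the x1 = 0 row of the joint table by its row sum can likewise be scripted (a sketch reusing the Example 1 counting):

```python
from math import comb

def p(i, j):
    # Joint pmf of (x1, x2) from Example 1.
    k = 3 - i - j
    if k < 0:
        return 0.0
    return comb(2, i) * comb(4, j) * comb(6, k) / comb(12, 3)

p_row0 = sum(p(0, j) for j in range(4))           # P{x1 = 0} = 120/220
cond = {j: p(0, j) / p_row0 for j in range(4)}    # P{x2 = j | x1 = 0}
```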
The distribution of x2 given x1 = 0 is called the conditional distribution of x2 given x1 = 0 and can be expressed neatly in the following table.

Table 4: Conditional distribution of x2 | x1 = 0

  Value        0     1     2      3
  Probability  1/6   1/2   3/10   1/30

E1. In example 1, let x3 = number of cricketers chosen. Write down the joint distribution of x2 and x3. Obtain the marginal distributions of x2 and x3. Obtain the conditional distribution of x3 given x2 = 1.

E2. In example 1, let pijk denote P{x1 = i, x2 = j, x3 = k}. Obtain pijk for i = 0, 1, 2; j = 0, 1, 2, 3 and k = 0, 1, ..., 6. The values of x1, x2 and x3 and the corresponding pijk constitute the joint distribution of x1, x2 and x3.
The random variables x1, x2 and x3 in example 1 and E1 are discrete. x = (x1, x2, x3)t is called a discrete random vector, and the distribution of x (the joint distribution of x1, x2 and x3) in such a case is called a discrete multivariate distribution.
On the other hand, we say that x1, ..., xp are jointly continuous (x = (x1, ..., xp)t is continuous) if there exists a function f(u1, ..., up), defined for all u1, ..., up, having the property that for every set A of p-tuples,

P((x1, ..., xp) ∈ A) = ∫...∫ f(u1, ..., up) du1 ... dup,

where the integration is over all (u1, ..., up) ∈ A. The function f(u1, ..., up) is called the probability density function of x (or the joint probability density function of x1, ..., xp). If A1, ..., Ap are sets of real numbers such that A = {(u1, ..., up) : ui ∈ Ai, i = 1, ..., p}, we can write

P{(x1, ..., xp) ∈ A} = P{xi ∈ Ai, i = 1, ..., p} = ∫_A1 ... ∫_Ap f(u1, ..., up) du1 ... dup.
The joint distribution function of x = (x1, ..., xp)t is defined as

F(a1, a2, ..., ap) = P{x1 ≤ a1, ..., xp ≤ ap} = ∫_{-∞}^{a1} ... ∫_{-∞}^{ap} f(u1, ..., up) du1 ... dup.

Conversely, f(a1, ..., ap) = ∂^p F(a1, ..., ap) / (∂a1 ... ∂ap), provided the partial derivatives exist.
Another interpretation of the density function of x can be given using the following:

P(a1 ≤ x1 ≤ a1 + Δa1, ..., ap ≤ xp ≤ ap + Δap) = ∫_{a1}^{a1+Δa1} ... ∫_{ap}^{ap+Δap} f(u1, ..., up) du1 ... dup
                                                ≈ f(a1, ..., ap) Δa1 ... Δap,

when the Δai, i = 1, ..., p are small and f is continuous. Thus f(a1, ..., ap) is a measure of the chance that the random vector x is in a small neighborhood of (a1, ..., ap).

Let fx(u1, u2, ..., up) be the density function of x. Then the marginal density of xi, denoted by f_xi(ui), is defined as
f_xi(ui) = ∫_{-∞}^{∞} ... ∫_{-∞}^{∞} fx(u1, ..., up) du1 ... du_{i-1} du_{i+1} ... dup,

the integration being over all the variables except ui. More generally, partition x as x = (x1t, x2t)t, where x1 = (x1, ..., xr)t and x2 = (x_{r+1}, ..., xp)t, and partition u = (u1t, u2t)t accordingly. The marginal density of x1 is defined as

f_x1(u1) = ∫_{-∞}^{∞} ... ∫_{-∞}^{∞} fx(u1, ..., up) du_{r+1} ... dup,

and the conditional density of x1 given x2 = u2 is defined as

f_{x1|x2=u2}(u1 | u2) = fx(u1, ..., up) / f_x2(u2).
Why is the conditional density defined thus? To see this, multiply both sides of the above equality by du1 ... dur, writing the right hand side as

fx(u1, ..., up) du1 ... dup / (f_x2(u2) du_{r+1} ... dup)
≈ P(u1 ≤ x1 ≤ u1 + du1, ..., up ≤ xp ≤ up + dup) / P(u_{r+1} ≤ x_{r+1} ≤ u_{r+1} + du_{r+1}, ..., up ≤ xp ≤ up + dup)
= P(u1 ≤ x1 ≤ u1 + du1, ..., ur ≤ xr ≤ ur + dur | u_{r+1} ≤ x_{r+1} ≤ u_{r+1} + du_{r+1}, ..., up ≤ xp ≤ up + dup).

Thus, for small values of du1, ..., dup, f_{x1|x2=u2}(u1 | u2) du1 ... dur represents the conditional probability that xi lies between ui and ui + dui, i = 1, ..., r, given that xj lies between uj and uj + duj, j = r+1, ..., p.

Let us now consider a few examples.

Example 2: Consider x = (x1, x2)t with density

fx(u1, u2) = c(2 - u1 - u2) for 0 < u1 < 1, 0 < u2 < 1, and 0 otherwise,
where c is a constant.
(a) Obtain the value of c.
(b) Find the marginal density of x2.
(c) Find the conditional density of x1 given x2 = u2, where 0 < u2 < 1.
(d) Find the probability that x1 > 1/4 given x2 = u2.
Solution: (a) Since fx is a density,

1 = ∫_0^1 ∫_0^1 fx(u1, u2) du1 du2 = ∫_0^1 ∫_0^1 c(2 - u1 - u2) du1 du2 = c(2 - 1/2 - 1/2) = c.

Hence c = 1.
(b) First, the marginal density of x1: for 0 < u1 < 1,

f_x1(u1) = ∫_0^1 fx(u1, u2) du2
         = ∫_0^1 (2 - u1 - u2) du2
         = (2 - u1) - ∫_0^1 u2 du2
         = 2 - u1 - 1/2
         = 3/2 - u1.

Thus f_x1(u1) = 3/2 - u1 for 0 < u1 < 1, and 0 otherwise.
By symmetry, the marginal density of x2 is f_x2(u2) = 3/2 - u2 for 0 < u2 < 1, and 0 elsewhere.

(c) The conditional density of x1 given x2 = u2 is

f_{x1|x2=u2}(u1 | u2) = fx(u1, u2)/f_x2(u2) = (2 - u1 - u2)/(3/2 - u2), where 0 < u1 < 1, 0 < u2 < 1.

Since fx(u1, u2) = 0 whenever u1 ∉ (0, 1) or u2 ∉ (0, 1), we have

f_{x1|x2=u2}(u1 | u2) = (2 - u1 - u2)/(3/2 - u2) whenever 0 < u1 < 1, 0 < u2 < 1, and 0 otherwise.
(d) P(x1 > 1/4 | x2 = u2) = ∫_{1/4}^1 f_{x1|x2=u2}(u1 | u2) du1
                          = ∫_{1/4}^1 (2 - u1 - u2)/(3/2 - u2) du1
                          = [1/(3/2 - u2)] ∫_{1/4}^1 (2 - u1 - u2) du1
                          = [(3/4)(2 - u2) - 15/32] / (3/2 - u2)
                          = (33 - 24 u2)/(48 - 32 u2).
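The answers for Example 2 can be sanity-checked numerically. The sketch below uses midpoint Riemann sums (essentially exact here, since the density is linear in each variable); the evaluation point u2 = 0.3 and the grid size N are arbitrary choices, not part of the example:

```python
# Midpoint Riemann-sum check of Example 2 (c = 1, so f(u1, u2) = 2 - u1 - u2).
N = 400
h = 1.0 / N
grid = [(k + 0.5) * h for k in range(N)]   # midpoints of N equal cells in (0, 1)

f = lambda u1, u2: 2.0 - u1 - u2

# (a) the density integrates to 1 over the unit square, so c = 1.
total = sum(f(u1, u2) for u1 in grid for u2 in grid) * h * h

# (b) marginal of x1 at u1 = 0.5 should be 3/2 - 0.5 = 1.
marg_at_half = sum(f(0.5, u2) for u2 in grid) * h

# (d) P(x1 > 1/4 | x2 = u2) at u2 = 0.3; closed form (33 - 24 u2)/(48 - 32 u2).
u2 = 0.3
tail = sum(f(u1, u2) for u1 in grid if u1 > 0.25) * h / (1.5 - u2)
```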
Example 3: Consider x = (x1, x2)t with density

fx(u1, u2) = 2 e^(-u1 - 2u2) if u1 ∈ (0, ∞), u2 ∈ (0, ∞), and 0 otherwise.

(a) Obtain the marginal density of x2.
(b) Obtain the conditional density of x1 given x2 = u2.

Solution: (a) Clearly f_x2(u2) = 0 whenever -∞ < u2 ≤ 0. Let 0 < u2 < ∞. Then

f_x2(u2) = ∫_0^∞ 2 e^(-u1) e^(-2u2) du1 = 2 e^(-2u2) ∫_0^∞ e^(-u1) du1 = 2 e^(-2u2),

whenever 0 < u2 < ∞.

(b) For 0 < u1 < ∞ and 0 < u2 < ∞, the conditional density of x1 given x2 = u2 is

f_{x1|x2=u2}(u1 | u2) = fx(u1, u2)/f_x2(u2) = 2 e^(-u1 - 2u2) / (2 e^(-2u2)) = e^(-u1).
It can easily be checked that this is the same as the marginal density of x1. Thus, in this example, the joint density of x1 and x2 is the product of the marginal densities of x1 and x2.
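That the joint density factorizes into the two marginals can be spot-checked at random points (a sketch; the range and number of test points are arbitrary):

```python
import math
import random

# Example 3: f(u1, u2) = 2 exp(-u1 - 2 u2) on the positive quadrant.
f_joint = lambda u1, u2: 2.0 * math.exp(-u1 - 2.0 * u2)
f_x1 = lambda u1: math.exp(-u1)               # marginal of x1
f_x2 = lambda u2: 2.0 * math.exp(-2.0 * u2)   # marginal of x2, part (a)

random.seed(0)
pts = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(200)]
factorizes = all(abs(f_joint(a, b) - f_x1(a) * f_x2(b)) < 1e-12
                 for a, b in pts)
```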
Let x = (x1t, x2t)t, where x1 is of order r x 1 and x2 is of order (p - r) x 1, have density fx(u), where u = (u1t, u2t)t is partitioned according to the partition of x. We say that x1 and x2 are independent if

P{x1 ∈ A, x2 ∈ B} = P{x1 ∈ A} . P{x2 ∈ B}

for all subsets A and B of R^r and R^(p-r) respectively.
It can be shown (the proof is beyond the scope of the present notes) that x1 and x2 are independent if and only if the joint density of x1 and x2 (i.e., the density of x) is equal to the product of the marginal densities of x1 and x2, or in other words fx(u) = f_x1(u1) . f_x2(u2). In fact, if we can factorize fx(u) = g1(u1) . g2(u2), where gi(ui) involves only ui, i = 1, 2, then x1 and x2 are independent. Further, c1 . g1(u1) and c2 . g2(u2) are the marginal densities of x1 and x2 respectively, where c1 and c2 are suitable constants. Thus x1 and x2 of example 3 are independent. However, x1 and x2 of example 2 are not independent.
We give below a relationship between uncorrelatedness and independence.

Result 1: Let x1 and x2 be independent. Then Cov(x1, x2) = 0.

Proof: Let f_x1(u1) and f_x2(u2) be the densities of x1 and x2. Then the joint density of x1 and x2 is f_x1(u1) . f_x2(u2), and hence

E(x1 x2t) = ∫∫ u1 u2t f_x1(u1) f_x2(u2) du1 du2 = (∫ u1 f_x1(u1) du1)(∫ u2t f_x2(u2) du2) = E(x1) E(x2t),

since the first integral splits into the product of the two latter integrals. Thus Cov(x1, x2) = E(x1 x2t) - E(x1) E(x2t) = 0.

However, the converse is not true, as shown through the following exercise.

E3: Let x have the following distribution:

  Value        -3    -1    1     3
  Probability  1/4   1/4   1/4   1/4

(a) Show that the distribution of x^2 is

  Value        1     9
  Probability  1/2   1/2
(b) Write down the joint distribution of x and x^2.
(c) Show that x and x^2 are uncorrelated.
(d) Show that x and x^2 are not independent.

E4: Consider a random vector x = (x1, x2)t with the density

fx(u1, u2) = c u1 (2 - u1 - u2) for 0 < u1 < 1, 0 < u2 < 1, and 0 otherwise,

where c is a constant.
(a) Obtain the value of c.
(b) Find the marginal densities of x1 and x2.
(c) Find the conditional density of x2 given x1 = u1.
(d) Are x1 and x2 independent?
SOLUTIONS TO EXERCISES

E1: x2 = number of tennis specialists chosen, x3 = number of cricketers chosen. Let pij denote the probability that x2 = i and x3 = j, for i = 0, ..., 4 and j = 0, ..., 6. Clearly pij = 0 whenever i + j ≥ 4, as only 3 sportsmen are chosen. Also p00 = 0, since there are only 2 specialists in long distance running and 3 sportsmen are selected. For the remaining cases,

p01 = C(2,2)C(6,1)/C(12,3) = 6/220
p02 = C(2,1)C(6,2)/C(12,3) = 30/220
p03 = C(6,3)/C(12,3) = 20/220
p04 = p05 = p06 = 0
p10 = C(4,1)C(2,2)/C(12,3) = 4/220
p11 = C(4,1)C(6,1)C(2,1)/C(12,3) = 48/220
p12 = C(4,1)C(6,2)/C(12,3) = 60/220
p13 = p14 = p15 = p16 = 0
p20 = C(4,2)C(2,1)/C(12,3) = 12/220
p21 = C(4,2)C(6,1)/C(12,3) = 36/220
p22 = p23 = p24 = p25 = p26 = 0
p30 = C(4,3)/C(12,3) = 4/220
p31 = p32 = p33 = p34 = p35 = p36 = 0

Also pij = 0 for i = 4 or j ≥ 4. Thus the joint distribution of x2 and x3 is given by
Joint distribution of x2 and x3

  Value of x2 \ x3 |   0       1       2       3      4   5   6 | Row sum
  -----------------+---------------------------------------------+---------
         0         |   0      6/220   30/220  20/220  0   0   0 |  56/220
         1         |  4/220  48/220   60/220    0     0   0   0 | 112/220
         2         | 12/220  36/220     0       0     0   0   0 |  48/220
         3         |  4/220     0       0       0     0   0   0 |   4/220
         4         |   0       0       0       0     0   0   0 |    0
  -----------------+---------------------------------------------+---------
     Column sum    | 20/220  90/220   90/220  20/220  0   0   0 |    1
The marginal distribution of x2 is as follows:

Marginal distribution of x2

  Value        0        1         2        3       4
  Probability  56/220   112/220   48/220   4/220   0
The marginal distribution of x3 is as follows:

Marginal distribution of x3

  Value        0        1        2        3        4   5   6
  Probability  20/220   90/220   90/220   20/220   0   0   0
The conditional distribution of x3 | x2 = 1 is obtained using the row corresponding to x2 = 1 in the joint probability table as follows:
P(x3 = 0 | x2 = 1) = (4/220)/(112/220) = 4/112
P(x3 = 1 | x2 = 1) = (48/220)/(112/220) = 48/112
P(x3 = 2 | x2 = 1) = (60/220)/(112/220) = 60/112
P(x3 = j | x2 = 1) = 0 for j ≥ 3.

Thus the conditional distribution of x3 | x2 = 1 is as follows:

  Value        0       1        2        3   4   5   6
  Probability  4/112   48/112   60/112   0   0   0   0
E2: Clearly pijk = 0 whenever i + j + k ≠ 3. Hence we shall consider only those combinations of i, j and k such that i + j + k = 3.
p003 = C(6,3)/C(12,3) = 20/220
p012 = C(4,1)C(6,2)/C(12,3) = 60/220
p021 = C(4,2)C(6,1)/C(12,3) = 36/220
p030 = C(4,3)/C(12,3) = 4/220
p102 = C(2,1)C(6,2)/C(12,3) = 30/220
p111 = C(2,1)C(4,1)C(6,1)/C(12,3) = 48/220
p120 = C(2,1)C(4,2)/C(12,3) = 12/220
p201 = C(2,2)C(6,1)/C(12,3) = 6/220
p210 = C(2,2)C(4,1)/C(12,3) = 4/220
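As a consistency check, the nine nonzero pijk above should add to 1; a short sketch:

```python
from math import comb

def p(i, j, k):
    """pijk for E2: C(2,i) C(4,j) C(6,k) / C(12,3), zero unless i + j + k = 3."""
    if i + j + k != 3:
        return 0.0
    return comb(2, i) * comb(4, j) * comb(6, k) / comb(12, 3)

vals = {(i, j, k): p(i, j, k)
        for i in range(3) for j in range(4) for k in range(7)}
```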
E3: (a) Notice that x^2 can take only the two values 1 and 9.

P(x^2 = 1) = P(x = 1 or x = -1) = P(x = 1) + P(x = -1) = 1/4 + 1/4 = 1/2.

Similarly, P(x^2 = 9) = P(x = 3) + P(x = -3) = 1/4 + 1/4 = 1/2.
(b) The joint distribution of x and x^2 is given as under:

Joint distribution of x and x^2

  Value of x^2 \ x |  -3    -1    1     3  | Row sum
  -----------------+-----------------------+---------
         1         |   0   1/4   1/4    0  |   1/2
         9         |  1/4    0    0    1/4 |   1/2
  -----------------+-----------------------+---------
     Column sum    |  1/4  1/4   1/4   1/4 |    1

(c) Clearly, E(x) = (-3)(1/4) + (-1)(1/4) + (1)(1/4) + (3)(1/4) = 0. So

Cov(x, x^2) = E(x . x^2) - E(x) E(x^2) = E(x^3) = (-27)(1/4) + (-1)(1/4) + (1)(1/4) + (27)(1/4) = 0.

Thus x and x^2 are uncorrelated.
(d) P(x = 3 and x^2 = 1) = 0, whereas P(x = 3) . P(x^2 = 1) = (1/4)(1/2) = 1/8. Thus P(x = 3 and x^2 = 1) ≠ P(x = 3) . P(x^2 = 1). Hence x and x^2 are not independent.
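The whole of E3 can be verified with exact rational arithmetic (a sketch using the standard library's Fraction; the name `y` stands in for x^2):

```python
from fractions import Fraction as F

# x is uniform on {-3, -1, 1, 3}; y stands for x^2.
xs = [-3, -1, 1, 3]
px = F(1, 4)

E_x = sum(px * x for x in xs)         # 0
E_y = sum(px * x * x for x in xs)     # (9 + 1 + 1 + 9)/4 = 5
E_xy = sum(px * x ** 3 for x in xs)   # E(x^3) = 0 by symmetry
cov = E_xy - E_x * E_y                # 0: x and x^2 are uncorrelated

# Not independent: P(x = 3, x^2 = 1) = 0, but the product of marginals is 1/8.
p_joint = F(0)
p_product = F(1, 4) * F(1, 2)
```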
E4: (a) fx(u1, u2) = c u1 (2 - u1 - u2) when 0 < u1 < 1 and 0 < u2 < 1, and 0 otherwise. So

1 = ∫_0^1 ∫_0^1 fx(u1, u2) du1 du2
  = ∫_0^1 ∫_0^1 c (2 u1 - u1^2 - u1 u2) du1 du2
  = c (1 - 1/3 - 1/4)
  = (5/12) c.

Hence c = 12/5.

(b) For 0 < u1 < 1,

f_x1(u1) = ∫_0^1 (12/5) u1 (2 - u1 - u2) du2 = (12/5) u1 (2 - u1 - 1/2) = (12/5) u1 (3/2 - u1),

and for 0 < u2 < 1,

f_x2(u2) = ∫_0^1 (12/5) u1 (2 - u1 - u2) du1 = (12/5)(2/3 - u2/2) = (8 - 6 u2)/5.

Both marginal densities are 0 outside (0, 1).
(c) For 0 < u1 < 1 and 0 < u2 < 1, the conditional density of x2 given x1 = u1 is given by

f_{x2|x1=u1}(u2 | u1) = fx(u1, u2)/f_x1(u1) = (12/5) u1 (2 - u1 - u2) / [(12/5) u1 (3/2 - u1)] = (2 - u1 - u2)/(3/2 - u1).
(d) Since the conditional density of x2 given x1 = u1 is different from the marginal density of x2, we conclude that x1 and x2 are not independent.
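The value c = 12/5 in (a) can be checked with a midpoint Riemann sum over the unit square (a sketch; the grid size N is an arbitrary choice):

```python
# Numerically integrate u1 * (2 - u1 - u2) over (0,1) x (0,1); the exact
# value is 5/12, so the normalizing constant c should come out near 12/5.
N = 400
h = 1.0 / N
grid = [(k + 0.5) * h for k in range(N)]   # cell midpoints

integral = sum(u1 * (2.0 - u1 - u2)
               for u1 in grid for u2 in grid) * h * h
c = 1.0 / integral
```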