
Chapter 7

7.1
a) Normal distribution: Assuming the thickness of the wafers follows a normal distribution,

f(x) = (1/(√(2π) σ)) exp(−(x − µ)²/(2σ²))

Substituting the mean (µ = 334 µm) and variance (σ² = 10.2 µm²),

f(x) = 0.1249 exp(−0.049(x − 334)²)

Pr(X < 320) = ∫_{0}^{320} f(x) dx
            = ∫_{0}^{320} 0.1249 exp(−0.049(x − 334)²) dx
            = 0.1249 × 4.693 × 10⁻⁵
            = 5.8616 × 10⁻⁶

Out of 90 wafers, the number of wafers having thickness less than 320 µm is

90 × 5.8616 × 10⁻⁶ ≈ 0
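As a quick numerical cross-check, a minimal sketch in base MATLAB (µ = 334 and σ² = 10.2 are the problem data used above; erfc is the built-in complementary error function):

% Check Pr(X < 320) for X ~ N(334, 10.2)
mu = 334; sigma2 = 10.2;
z = (320 - mu)/sqrt(sigma2);     % standardized value, approx -4.38
p = 0.5*erfc(-z/sqrt(2))         % Phi(z), approx 5.9e-6
nwafers = 90*p                   % approx 5.3e-4, i.e. effectively 0 wafers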

b) Uniform distribution: Assuming the thickness follows a uniform distribution,

f(x) = 1/(b − a) for a < x < b, and 0 elsewhere

with mean µ = (a + b)/2 and variance σ² = (b − a)²/12.

We know µ = 334 µm and σ² = 10.2 µm². Solving, we get a = 328.47 and b = 339.53.

Since 320 µm lies below a, Pr(X < 320) = 0.
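A minimal check of the parameter solving, using a = µ − √(3σ²) and b = µ + √(3σ²) obtained by inverting the two moment relations:

% Solve for the uniform bounds a and b from the mean and variance
mu = 334; sigma2 = 10.2;
h = sqrt(3*sigma2);     % half-width (b - a)/2
a = mu - h              % approx 328.47
b = mu + h              % approx 339.53
% Since 320 < a, Pr(X < 320) = 0 under the uniform model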

7.2
a) Gaussian distribution:
i) Marks obtained by the students in a class.
ii) Weights of apples in a lot.
b) Poisson distribution:
i) Number of accidents occurring on a road.
ii) Number of persons arriving in a queue.
c) Chi-square distribution:
i) Sample variance of a normal population.
ii) Power spectral density estimates of Gaussian-distributed variables.

7.3
Both continuous and discrete probability distribution: The waiting time in a queue modelled as an M/M/1 system follows a mixed distribution

f(x) = (1 − ρ) for x = 0, and (λ/µ)(µ − λ) exp(−(µ − λ)x) for x > 0

where ρ = λ/µ. In this case f(x) has a discrete component at x = 0, whereas it follows a continuous distribution elsewhere.
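A sketch of drawing samples from this mixed distribution (λ = 1 and µ = 2 are assumed example rates, giving ρ = 0.5): with probability 1 − ρ the wait is exactly zero; otherwise it is exponential with rate µ − λ.

% Sample the mixed waiting-time distribution of an M/M/1 queue
lambda = 1; mu = 2;              % assumed example arrival and service rates
rho = lambda/mu;
Ns = 1e5;
W = zeros(Ns, 1);
busy = rand(Ns, 1) < rho;        % with probability rho the wait is nonzero
W(busy) = -log(rand(nnz(busy), 1))/(mu - lambda);   % exponential tail
fprintf('Zero-wait fraction: %.3f (theory %.3f)\n', mean(W == 0), 1 - rho)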

7.4

f(x, y) = k(1 − x)(1 − y) for 0 < x < 1, 0 < y < 1, and 0 elsewhere

a) Compute the value of k:

The property of a density function is that

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1
⟹ k ∫_{0}^{1} ∫_{0}^{1} (1 − x)(1 − y) dx dy = 1
⟹ k (1/2)(1/2) = 1
⟹ k = 4

b) Compute marginal densities:

We know

f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_{0}^{1} 4(1 − x)(1 − y) dx = 2(1 − y)

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_{0}^{1} 4(1 − x)(1 − y) dy = 2(1 − x)

c) Compute Pr(0.4 < X < 0.8, 0.2 < Y < 0.4):

We know

Pr(0.4 < X < 0.8, 0.2 < Y < 0.4) = ∫_{0.4}^{0.8} ∫_{0.2}^{0.4} f(x, y) dy dx
= ∫_{0.4}^{0.8} ∫_{0.2}^{0.4} 4(1 − x)(1 − y) dy dx
= 4 [x − x²/2]_{0.4}^{0.8} [y − y²/2]_{0.2}^{0.4}
= 4 × 0.16 × 0.14
= 0.0896

d) Compute conditional densities: We know

f_{Y|X}(y|x) = f(x, y)/f_X(x) = 4(1 − x)(1 − y)/(2(1 − x)) = 2(1 − y)

f_{X|Y}(x|y) = f(x, y)/f_Y(y) = 4(1 − x)(1 − y)/(2(1 − y)) = 2(1 − x)

It is observed that the conditional densities equal the corresponding marginal densities for this problem, which means that the variables X and Y are independent of each other.
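These results can be cross-checked numerically; a minimal sketch using MATLAB's built-in integral2:

% Numerical checks for Problem 7.4
f = @(x, y) 4*(1 - x).*(1 - y);
total = integral2(f, 0, 1, 0, 1)         % 1, confirming k = 4
p = integral2(f, 0.4, 0.8, 0.2, 0.4)     % 0.0896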

7.5
Given the variance-covariance matrix

Σ = [ 4   1    2
      1   9   −3
      2  −3   25 ]

The variance-covariance matrix of three random variables X1, X2 and X3 has the form

Σ = [ σ²_{X1}    σ_{X1X2}   σ_{X1X3}
      σ_{X2X1}   σ²_{X2}    σ_{X2X3}
      σ_{X3X1}   σ_{X3X2}   σ²_{X3} ]

a) Correlation matrix: The correlation matrix ρ is obtained by scaling each entry of Σ by the corresponding standard deviations:

ρ = [ σ²_{X1}/σ²_{X1}           σ_{X1X2}/(σ_{X1}σ_{X2})   σ_{X1X3}/(σ_{X1}σ_{X3})
      σ_{X2X1}/(σ_{X2}σ_{X1})   σ²_{X2}/σ²_{X2}           σ_{X2X3}/(σ_{X2}σ_{X3})
      σ_{X3X1}/(σ_{X3}σ_{X1})   σ_{X3X2}/(σ_{X3}σ_{X2})   σ²_{X3}/σ²_{X3} ]

For the given variance-covariance matrix, the correlation matrix is obtained as

ρ = [ 1     1/6    1/5
      1/6   1     −1/5
      1/5  −1/5    1   ]
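Equivalently, ρ = D^{−1/2} Σ D^{−1/2} with D = diag(Σ); a minimal sketch in base MATLAB:

% Correlation matrix from the variance-covariance matrix
Sigma = [4 1 2; 1 9 -3; 2 -3 25];
d = sqrt(diag(Sigma));       % standard deviations [2; 3; 5]
rho = Sigma./(d*d')          % [1 1/6 1/5; 1/6 1 -1/5; 1/5 -1/5 1]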

b) Cross-correlation:
Given two random variables

Y1 = X1,   Y2 = 0.5X2 + 0.5X3

Assume the random variables have zero mean. The cross-covariance σ_{Y1Y2} is calculated as

σ_{Y1Y2} = E(Y1 Y2) = E(0.5X1X2 + 0.5X1X3) = 0.5σ_{X1X2} + 0.5σ_{X1X3} = 0.5(1) + 0.5(2) = 1.5

The variances of the random variables, σ²_{Y1} and σ²_{Y2}, are calculated as

σ²_{Y1} = σ²_{X1} = 4
σ²_{Y2} = 0.25σ²_{X2} + 0.25σ²_{X3} + 2(0.5)(0.5)σ_{X2X3} = 2.25 + 6.25 − 1.5 = 7

The correlation coefficient is computed as

ρ_{Y1Y2} = 1.5/√(4 × 7) = 0.2835
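The same numbers fall out of the identity Σ_Y = C Σ Cᵀ, where the rows of C hold the coefficients of Y1 and Y2 (a sketch):

% Covariance matrix of Y1 = X1 and Y2 = 0.5*X2 + 0.5*X3
Sigma = [4 1 2; 1 9 -3; 2 -3 25];
C = [1 0 0; 0 0.5 0.5];
SigmaY = C*Sigma*C'          % [4 1.5; 1.5 7]
rhoY = SigmaY(1,2)/sqrt(SigmaY(1,1)*SigmaY(2,2))   % approx 0.2835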

7.6
Let

Z1 = c11 X1 + c12 X2 + · · · + c1n Xn
Z2 = c21 X1 + c22 X2 + · · · + c2n Xn

For simplicity, assume E(Z1) = E(Z2) = 0. Then

cov(Z1, Z2) = E(Z1 Z2) = E((c11 X1 + c12 X2 + · · · + c1n Xn)(c21 X1 + c22 X2 + · · · + c2n Xn))
            = c11 c21 σ²_{X1} + c11 c22 σ_{X1X2} + c12 c21 σ_{X2X1} + · · ·
            = c1ᵀ Σ_X c2

where c1 = [c11 c12 · · · c1n]ᵀ and c2 = [c21 c22 · · · c2n]ᵀ.

Given Z1 = X1 + X2 + X3 and Z2 = X1 + 2X2 − X3. Hence in this example

c1 = [1 1 1]ᵀ
c2 = [1 2 −1]ᵀ

and, with Σ as given in Problem 7.5,

cov(Z1, Z2) = E(Z1 Z2) = c1ᵀ Σ c2 = [1 1 1] Σ [1 2 −1]ᵀ = −3
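A one-line numerical check of this result (sketch):

% cov(Z1, Z2) = c1'*Sigma*c2
Sigma = [4 1 2; 1 9 -3; 2 -3 25];
c1 = [1; 1; 1];
c2 = [1; 2; -1];
covZ = c1'*Sigma*c2          % returns -3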

7.7
Given the joint cumulative distribution

F(x, y) = (1/16) xy(x + y),  0 ≤ x, y ≤ 2

(the domain follows from F(2, 2) = 1).

a) Joint pdf:
The joint pdf of x and y is given by

f(x, y) = ∂²F(x, y)/∂x∂y
        = ∂²/∂x∂y [(1/16)(x²y + xy²)]
        = ∂/∂x [(1/16)(x² + 2xy)]
        = (1/16)(2x + 2y)
        = (1/8)(x + y)

b) Marginal densities:
The marginal density in x is given by

f_X(x) = ∫_{0}^{2} f(x, y) dy = ∫_{0}^{2} (1/8)(x + y) dy = (1/4)(x + 1)

c) Cumulative distribution:
The cumulative distribution function in x is given by

F_X(x) = ∫_{0}^{x} f_X(u) du = ∫_{0}^{x} (1/4)(u + 1) du = (1/8)(x² + 2x)
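If the Symbolic Math Toolbox is available (an assumed dependency), the differentiation and integration steps can be reproduced symbolically; a sketch:

% Symbolic check for Problem 7.7 (requires the Symbolic Math Toolbox)
syms x y u
F = x*y*(x + y)/16;                  % joint CDF
f = diff(F, x, y)                    % joint pdf: (x + y)/8
fX = int(f, y, 0, 2)                 % marginal density: x/4 + 1/4
FX = int(subs(fX, x, u), u, 0, x)    % CDF: x^2/8 + x/4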

7.8
Given the joint density function

f(x, y) = K(x + y²) for 0 < x < 2, 0 < y < 2, and 0 elsewhere

i) Compute the value of K: From the property of the joint density function,

∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1
⟹ K ∫_{0}^{2} ∫_{0}^{2} (x + y²) dx dy = 1
⟹ K (4 + 16/3) = 1
⟹ K = 3/28

ii) Marginal densities in x and y:

f_X(x) = ∫_{0}^{2} (3/28)(x + y²) dy = 3x/14 + 2/7
f_Y(y) = ∫_{0}^{2} (3/28)(x + y²) dx = 3/14 + 3y²/14

iii) Pr(0.6 < X < 1.2, 0.4 < Y < 0.8):

Pr(0.6 < X < 1.2, 0.4 < Y < 0.8) = ∫_{0.6}^{1.2} ∫_{0.4}^{0.8} (3/28)(x + y²) dy dx = 0.0327

iv) Conditional expectation E(Y | X = 1):
The conditional density of Y given X = 1 is

f(y | X = 1) = f(1, y)/f_X(1) = (3/28)(1 + y²)/(1/2) = (3/14)(1 + y²)

so the conditional expectation is

E(Y | X = 1) = ∫_{0}^{2} y (3/14)(1 + y²) dy = (3/14)[y²/2 + y⁴/4]_{0}^{2} = 9/7 ≈ 1.286
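A numerical cross-check of these results (a sketch in base MATLAB):

% Numerical checks for Problem 7.8
f = @(x, y) (3/28)*(x + y.^2);
total = integral2(f, 0, 2, 0, 2)           % 1, confirming K = 3/28
p = integral2(f, 0.6, 1.2, 0.4, 0.8)       % approx 0.0327
fcond = @(y) (3/14)*(1 + y.^2);            % f(y | x = 1)
EY = integral(@(y) y.*fcond(y), 0, 2)      % 9/7, approx 1.286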

7.9
The program to compute the covariance between two random variables is shown below.

function covariance = covar(x, y)
% Function to compute the cross-covariance of two variables x and y.
% Note: this returns the unnormalized sum of products of the mean-removed
% series (matching xcov's default normalization); divide by length(x)-1
% for the sample covariance.

if (length(x) ~= length(y))
    error('x and y must be of equal length')
end

N = length(x);
meanx = mean(x);
meany = mean(y);

xdetrend = x - meanx;
ydetrend = y - meany;

covariance = 0;
for k = 1:N
    covariance = covariance + (xdetrend(k)*ydetrend(k));
end

end

Given two random variables, X ∼ N(1, 3) and Y = X² + 5X.

The program to compute the cross-covariance between the two random variables is shown below.

X = randn(1000, 1);   % N.B. randn draws from N(0,1); to match X ~ N(1,3)
                      % exactly, one would use X = 1 + sqrt(3)*randn(1000,1)

Y = (X.^2) + (5*X);

covuserdefined = covar(X, Y)     % compute cross-covariance using the
                                 % user-defined function

covcalculated = xcov(X, Y);      % compute cross-covariance using the
                                 % xcov command
covcalculated = covcalculated(length(X))   % pick out the zero-lag value

The cross-covariance value obtained using the user-defined function is 4.7976e+03 and using the xcov function is 4.7976e+03. The values obtained in both cases are identical.

7.10
Central limit theorem:
The program to verify the central limit theorem is shown below.

% Program to verify the central limit theorem for
% question 7.10

N1 = 5;    % first case: sum of 5 random variables
N2 = 100;  % second case: sum of 100 random variables
N = 500;   % number of samples to generate

Y1 = 0;
Y2 = 0;

for k = 1:N1
    Y1 = Y1 + rand(N, 1);
end
for k = 1:N2
    Y2 = Y2 + rand(N, 1);
end

hist(Y1)
figure
hist(Y2)

The histograms of the variables obtained in the two cases are shown in the figure below.

[Figure: (a) histogram obtained from a linear combination of 5 variables; (b) histogram obtained from a linear combination of 100 variables]

From the plots, it is evident that the sum of N2 = 100 random variables follows a Gaussian distribution more closely than the sum of N1 = 5 variables.

7.11
7.12
Given E(X) = E(Y) = 0, σ²_X = 1, σ²_Y = 4 and σ_XY = 3.
a) E(XY): We know

σ_XY = E(XY) − E(X)E(Y) ⟹ E(XY) = 3

b) σ_{Z1Z2}:
Given Z1 = 2X + 3Y and Z2 = X − 3Y. Since the means are zero, the cross-covariance of Z1 and Z2 is calculated as follows:

σ_{Z1Z2} = E(Z1 Z2) = E((2X + 3Y)(X − 3Y)) = 2σ²_X − 3σ_XY − 9σ²_Y = 2 − 9 − 36 = −43
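The same result follows from the quadratic form c1ᵀ Σ c2 with Σ = [σ²_X σ_XY; σ_XY σ²_Y] (a one-line sketch):

% cov(Z1, Z2) for Z1 = 2X + 3Y, Z2 = X - 3Y
Sigma = [1 3; 3 4];
covZ = [2 3]*Sigma*[1; -3]       % returns -43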

7.13
The joint Gaussian distribution of two variables X and Y with means µ_X and µ_Y is given by

f(x, y) = (1/(2π det(Σ_xy)^{1/2})) exp(−0.5 [x − µ_X  y − µ_Y] Σ_xy^{−1} [x − µ_X  y − µ_Y]ᵀ)

where

Σ_xy = [ σ_x²   σ_xy
         σ_yx   σ_y² ]

If the two variables are uncorrelated, the matrix Σ_xy becomes diagonal:

Σ_xy = [ σ_x²   0
         0      σ_y² ]

The quadratic form in the exponent then separates into (x − µ_X)²/σ_x² + (y − µ_Y)²/σ_y², so the exponential factors and

f(x, y) = f(x) f(y)

which means the two variables are independent of each other.
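The factorization can be verified numerically for a diagonal Σ_xy (a sketch; mvnpdf and normpdf are from the Statistics and Machine Learning Toolbox, and the parameter values are assumed examples):

% Joint Gaussian pdf with diagonal covariance factors into marginals
muX = 1; muY = -2; sx = 2; sy = 3;    % assumed example parameters
x = 0.7; y = 0.3;                     % arbitrary evaluation point
joint = mvnpdf([x y], [muX muY], diag([sx^2 sy^2]));
product = normpdf(x, muX, sx)*normpdf(y, muY, sy);
fprintf('joint = %.6g, product = %.6g\n', joint, product)   % equal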

7.14
Given w[k] = Σ_{n=0}^{k} x[n], where x[k] is an i.i.d. process with zero mean. In addition, w[0] = 0.
The mean of the process is

E(w[k]) = Σ_{n=0}^{k} E(x[n]) = 0

The variance of the process is

E(w²[k]) = E( Σ_{n=0}^{k} x[n] Σ_{n1=0}^{k} x[n1] ) = Σ_{n=0}^{k} E(x²[n]) = (k + 1) σ_x²

since the cross terms vanish for an i.i.d. zero-mean process. The variance of the process is not constant and grows linearly with time. Hence the process is not stationary.
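A quick ensemble simulation illustrating the growing variance (a sketch; unit-variance noise is assumed):

% Ensemble variance of the random walk w[k]
Nreal = 5000; K = 100;
x = randn(Nreal, K);         % i.i.d. zero-mean samples, sigma_x^2 = 1
w = cumsum(x, 2);            % each row is one realization of w[k]
v = var(w);                  % ensemble variance at each k, approx k
plot(1:K, v), xlabel('k'), ylabel('var(w[k])')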

7.17
Given the signal x[k] = e[k] sin(c1 k) + cos(c1 k) e[k − 1].
The mean of the signal is µ_x = 0 (assuming the driving white noise e[k] has zero mean).
The variance of the signal is

σ_x²[k] = E(x²[k]) = E(sin²(c1 k) e²[k] + cos²(c1 k) e²[k − 1])
        = sin²(c1 k) σ_e² + cos²(c1 k) σ_e²
        = σ_e²

(the cross term 2 sin(c1 k) cos(c1 k) E(e[k] e[k − 1]) vanishes for white noise).
The autocovariance of the signal, σ_xx[l], is computed as

σ_xx[l] = E(x[k] x[k − l])
        = E((e[k] sin(c1 k) + cos(c1 k) e[k − 1])(e[k − l] sin(c1 (k − l)) + cos(c1 (k − l)) e[k − 1 − l]))
        = sin(c1 (k − l)) sin(c1 k) σ_ee[l] + cos(c1 (k − l)) cos(c1 k) σ_ee[l]
          + sin(c1 (k − l)) cos(c1 k) σ_ee[l − 1] + cos(c1 (k − l)) sin(c1 k) σ_ee[l + 1]

The autocovariance function is non-zero only at three lags. The values are also independent of time k. Hence the process is stationary.
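An ensemble check of the time-invariant variance (a sketch; c1 = 0.7 and unit-variance white noise are assumed example values, and the elementwise expansion requires MATLAB R2016b or later):

% Ensemble variance of x[k] = e[k]*sin(c1*k) + cos(c1*k)*e[k-1]
c1 = 0.7; Nreal = 10000; K = 50;
e = randn(Nreal, K + 1);     % white noise, sigma_e^2 = 1; column j holds e[j-1]
k = 1:K;
x = e(:, 2:end).*sin(c1*k) + cos(c1*k).*e(:, 1:end-1);
v = var(x);                  % approx 1 at every k, i.e. sigma_e^2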
