
Probability of Random Vectors

Multiple Random Variables


Each outcome of a random experiment may need to be described by a set of $N \ge 1$ random variables $\{x_1, \dots, x_N\}$, or in vector form: $X = [x_1, \dots, x_N]^T$, which is called a random vector. In signal processing, $X$ is often used to represent a set of $N$ samples of a random signal $x(t)$, a random process.

Joint Distribution Function and Density Function


The joint distribution function of a random vector $X$ is defined as

$$F_X(u_1, \dots, u_N) = P(x_1 \le u_1, \dots, x_N \le u_N) = \int_{-\infty}^{u_1} \cdots \int_{-\infty}^{u_N} p(\nu_1, \dots, \nu_N)\, d\nu_1 \cdots d\nu_N$$

where $p(\nu_1, \dots, \nu_N)$ is the joint density function of the random vector $X$.
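As a quick illustration (not part of the original notes), $F_X(u_1, \dots, u_N)$ can be estimated from repeated outcomes as the fraction of sampled vectors whose components all fall at or below the thresholds. A minimal NumPy sketch, assuming independent standard normal components purely for concreteness:

```python
# Monte Carlo estimate of the joint distribution function: F_X(u) is the
# probability that every component of X is at or below its threshold, so it
# can be approximated by a fraction of sampled vectors.
import numpy as np

rng = np.random.default_rng(0)
N, K = 3, 100_000
X = rng.standard_normal((K, N))          # K outcomes of the random vector X
u = np.array([0.5, 0.0, 1.0])            # thresholds u_1, ..., u_N

F_hat = np.mean(np.all(X <= u, axis=1))  # P(x_1 <= u_1, ..., x_N <= u_N)
print(F_hat)  # ~ Phi(0.5)*Phi(0.0)*Phi(1.0) ≈ 0.29 for independent components
```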

Independent Variables

For convenience, let us first consider two of the $N$ variables and rename them as $x$ and $y$. These two variables are independent iff

$$P(A \cap B) = P(x \le u,\, y \le v) = P(x \le u)\, P(y \le v) = P(A)\, P(B)$$

where events $A$ and $B$ are defined as "$x \le u$" and "$y \le v$", respectively. This definition is equivalent to $p(x, y) = p(x)\, p(y)$, as this will lead to

$$P(x \le u,\, y \le v) = \int_{-\infty}^{u} \int_{-\infty}^{v} p(\xi, \eta)\, d\xi\, d\eta = \int_{-\infty}^{u} \int_{-\infty}^{v} p(\xi)\, p(\eta)\, d\xi\, d\eta = \left[\int_{-\infty}^{u} p(\xi)\, d\xi\right] \left[\int_{-\infty}^{v} p(\eta)\, d\eta\right] = P(x \le u)\, P(y \le v)$$

Similarly, a set of $N$ variables are independent iff

$$p(x_1, \dots, x_N) = p(x_1)\, p(x_2) \cdots p(x_N)$$
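A short numerical sanity check of this factorization is easy to run. The sketch below draws two independent uniform variables (an arbitrary choice for illustration) and compares the joint probability with the product of the marginals:

```python
# Numerical check of P(x <= u, y <= v) = P(x <= u) P(y <= v) for two
# independently drawn variables; the uniform marginals are an assumption
# chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)
K = 200_000
x = rng.uniform(size=K)                  # x and y drawn independently
y = rng.uniform(size=K)
u, v = 0.3, 0.7

joint = np.mean((x <= u) & (y <= v))     # P(x <= u, y <= v)
product = np.mean(x <= u) * np.mean(y <= v)
print(joint, product)                    # both ≈ 0.21
```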

Mean Vector

The expectation or mean of random variable $x_i$ is defined as

$$\mu_i \triangleq E(x_i) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \nu_i\, p(\nu_1, \dots, \nu_N)\, d\nu_1 \cdots d\nu_N$$

The mean vector of random vector $X$ is defined as

$$M \triangleq E(X) = [E(x_1), \dots, E(x_N)]^T = [\mu_1, \dots, \mu_N]^T$$

which can be interpreted as the center of gravity of an $N$-dimensional object with $p(x_1, \dots, x_N)$ being the density function.
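The center-of-gravity reading is easiest to see in the discrete case, where $E(X)$ is the probability-weighted average of the possible outcomes. The outcomes and probabilities below are made-up values for illustration:

```python
# "Center of gravity" view of the mean vector: for a discrete random vector,
# M = E(X) is the probability-weighted average of the outcomes.
import numpy as np

outcomes = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.0, 2.0]])        # three possible values of X (N = 2)
probs = np.array([0.5, 0.25, 0.25])      # their probabilities (sum to 1)

M = probs @ outcomes                     # E(X) = sum_k P_k * X_k
print(M)                                 # [0.25, 0.5]
```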

Covariance Matrix
The variance of random variable $x_i$ is defined as

$$\sigma_i^2 \triangleq E(x_i - \mu_i)^2 = E(x_i^2) - \mu_i^2 = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} (\nu_i - \mu_i)^2\, p(\nu_1, \dots, \nu_N)\, d\nu_1 \cdots d\nu_N$$

The covariance of $x_i$ and $x_j$ is defined as

$$\sigma_{ij}^2 \triangleq \mathrm{Cov}(x_i, x_j) = E[(x_i - \mu_i)(x_j - \mu_j)] = E(x_i x_j) - \mu_i \mu_j = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \nu_i \nu_j\, p(\nu_1, \dots, \nu_N)\, d\nu_1 \cdots d\nu_N - \mu_i \mu_j$$

The covariance matrix of a random vector $X$ is defined as

$$\Sigma \triangleq E[(X - M)(X - M)^T] = E(XX^T) - MM^T = \left[\, \sigma_{ij}^2 \,\right]_{N \times N}$$

where $\sigma_{ij}^2 = E(x_i x_j) - \mu_i \mu_j$ is the covariance of $x_i$ and $x_j$. When $i = j$, $\sigma_i^2 = E(x_i^2) - \mu_i^2$ is the variance of $x_i$, which can be interpreted as the amount of information, or energy, contained in the $i$th component of the signal $X$. And the total information or energy contained in $X$ is represented by

$$\mathrm{tr}\, \Sigma = \sum_{i=1}^{N} \sigma_i^2$$

$\Sigma$ is symmetric as $\sigma_{ij}^2 = \sigma_{ji}^2$. Moreover, it can be shown that $\Sigma$ is also positive definite, i.e., all its eigenvalues $\{\lambda_1, \dots, \lambda_N\}$ are greater than zero, and we have

$$\mathrm{tr}\, \Sigma = \sum_{i=1}^{N} \lambda_i > 0$$

and

$$\det \Sigma = \prod_{i=1}^{N} \lambda_i > 0$$
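Both forms of $\Sigma$ and the two eigenvalue identities can be verified numerically, with expectations replaced by sample averages. Everything below (the mixing matrix, the sample size $K$) is an arbitrary choice for the sketch:

```python
# Verify E[(X-M)(X-M)^T] = E[XX^T] - MM^T on sample data, and check
# tr(Sigma) = sum(lambda_i) and det(Sigma) = prod(lambda_i).
import numpy as np

rng = np.random.default_rng(2)
K, N = 100_000, 3
X = rng.standard_normal((K, N)) @ np.array([[2.0, 0.5, 0.0],
                                            [0.0, 1.0, 0.3],
                                            [0.0, 0.0, 0.7]]) + 1.0

M = X.mean(axis=0)
Sigma1 = (X - M).T @ (X - M) / K          # E[(X-M)(X-M)^T]
Sigma2 = X.T @ X / K - np.outer(M, M)     # E[XX^T] - MM^T
print(np.allclose(Sigma1, Sigma2))        # True

lam = np.linalg.eigvalsh(Sigma1)          # eigenvalues, all positive here
print(np.trace(Sigma1), lam.sum())        # equal: total "energy"
print(np.linalg.det(Sigma1), lam.prod())  # equal
```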

Two variables $x_i$ and $x_j$ are uncorrelated iff $\sigma_{ij}^2 = 0$, i.e.,

$$E(x_i x_j) = E(x_i)\, E(x_j) = \mu_i \mu_j$$

If this is true for all $i \ne j$, then $X$ is called uncorrelated or decorrelated, and its covariance matrix $\Sigma$ becomes a diagonal matrix with only nonzero $\sigma_i^2$ $(i = 1, \dots, N)$ on its diagonal. If the $x_i$ $(i = 1, \dots, N)$ are independent, $p(x_1, \dots, x_N) = p(x_1) \cdots p(x_N)$, then it is easy to show that they are also uncorrelated. However, uncorrelated variables are not necessarily independent. Uncorrelated variables that are jointly normally distributed, though, are also independent.
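A standard counterexample makes the last two points concrete: with $x$ symmetric about zero and $y = x^2$, the pair is uncorrelated yet completely dependent. A brief NumPy check:

```python
# Uncorrelated but dependent: x symmetric about 0 and y = x^2 give
# E(xy) = E(x^3) = 0 = E(x)E(y), yet y is a deterministic function of x.
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(500_000)
y = x**2

print(np.mean(x * y) - np.mean(x) * np.mean(y))  # ~ 0: uncorrelated
# Independence fails: P(y <= 1) ≈ 0.68 overall, but 0 given |x| > 1.
print(np.mean(y <= 1), np.mean(y[np.abs(x) > 1] <= 1))
```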

Autocorrelation Matrix

The autocorrelation matrix of $X$ is defined as

$$R \triangleq E(XX^T) = \left[\, r_{ij} \,\right]_{N \times N}$$

where

$$r_{ij} \triangleq E(x_i x_j) = \sigma_{ij}^2 + \mu_i \mu_j$$

Obviously $R$ is symmetric, and we have $\Sigma = R - MM^T$. When $M = 0$, we have $\Sigma = R$. Two variables $x_i$ and $x_j$ are orthogonal iff $r_{ij} = 0$. Zero-mean random variables which are uncorrelated are also orthogonal.
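The relation $R = \Sigma + MM^T$ can be checked the same way, with sample averages standing in for expectations and arbitrary illustrative parameters:

```python
# Check r_ij = sigma^2_ij + mu_i mu_j, i.e. R = Sigma + MM^T, on sample data.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(loc=[1.0, -2.0], scale=[1.0, 0.5], size=(100_000, 2))

M = X.mean(axis=0)
R = X.T @ X / len(X)                      # autocorrelation E[XX^T]
Sigma = (X - M).T @ (X - M) / len(X)      # covariance E[(X-M)(X-M)^T]
print(np.allclose(R, Sigma + np.outer(M, M)))  # True
print(Sigma)                              # ~ diag(1, 0.25) here
```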

Mean and Covariance under Unitary Transforms


A unitary (orthogonal) transform of $X$ is defined as

$$Y = A^T X, \qquad X = AY$$

where $A$ is a unitary (orthogonal) matrix,

$$A^T = A^{-1}$$

and $Y$ is another random vector. The mean vector $M_Y$ and the covariance matrix $\Sigma_Y$ of $Y$ are related to the $M_X$ and $\Sigma_X$ of $X$ as shown below:

$$M_Y = E(Y) = E(A^T X) = A^T E(X) = A^T M_X$$

$$\Sigma_Y = E(YY^T) - M_Y M_Y^T = E(A^T XX^T A) - A^T M_X M_X^T A = A^T [E(XX^T) - M_X M_X^T] A = A^T \Sigma_X A$$

A unitary transform does not change the trace of $\Sigma$:

$$\mathrm{tr}\, \Sigma_Y = \mathrm{tr}[E(YY^T) - M_Y M_Y^T] = E[\mathrm{tr}(YY^T)] - \mathrm{tr}(M_Y M_Y^T) = E(Y^T Y) - M_Y^T M_Y = E(X^T A A^T X) - M_X^T A A^T M_X = E(X^T X) - M_X^T M_X = \mathrm{tr}\, \Sigma_X$$

which means the total amount of energy or information contained in $X$ is not changed by a unitary transform $Y = A^T X$, although its distribution among the $N$ components is changed.
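This invariance is easy to demonstrate: draw a random orthogonal $A$ (here via a QR decomposition, one convenient construction) and compare traces before and after the transform. The covariance below is an arbitrary positive definite example:

```python
# Trace invariance under a unitary (orthogonal) transform: tr(A^T Sigma A)
# equals tr(Sigma), though the diagonal entries redistribute.
import numpy as np

rng = np.random.default_rng(5)
N = 4
B = rng.standard_normal((N, N))
Sigma_X = B @ B.T + N * np.eye(N)         # an arbitrary positive definite covariance
A, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthogonal: A.T = inv(A)

Sigma_Y = A.T @ Sigma_X @ A               # covariance after Y = A^T X
print(np.trace(Sigma_X), np.trace(Sigma_Y))   # equal
print(np.diag(Sigma_X), np.diag(Sigma_Y))     # different per-component split
```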

Normal Distribution
The density function of a normally distributed random vector $X$ is:

$$p(x_1, \dots, x_N) = N(X; M, \Sigma) = \frac{1}{(2\pi)^{N/2} |\Sigma|^{1/2}} \exp\left[-\frac{1}{2}(X - M)^T \Sigma^{-1} (X - M)\right]$$

where $M$ and $\Sigma$ are the mean vector and covariance matrix of $X$, respectively. When $N = 1$, $\Sigma$ and $M$ become $\sigma^2$ and $\mu$, respectively, and the density function becomes the single-variable normal distribution. To find the shape of a normal distribution, consider the iso-value hypersurface in the $N$-dimensional space determined by the equation

$$N(X; M, \Sigma) = c_0$$

where $c_0$ is a constant. This equation can be written as

$$(X - M)^T \Sigma^{-1} (X - M) = c_1$$

where $c_1$ is another constant related to $c_0$, $M$ and $\Sigma$. For $N = 2$ variables $x$ and $y$, we have

$$(X - M)^T \Sigma^{-1} (X - M) = [x - \mu_x,\; y - \mu_y] \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix} \begin{bmatrix} x - \mu_x \\ y - \mu_y \end{bmatrix} = a(x - \mu_x)^2 + b(x - \mu_x)(y - \mu_y) + c(y - \mu_y)^2 = c_1$$

Here we have assumed

$$\Sigma^{-1} = \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix}$$

The above quadratic equation represents an ellipse (rather than any other quadratic curve) centered at $M = [\mu_x, \mu_y]^T$, because $\Sigma^{-1}$, as well as $\Sigma$, is positive definite:

$$|\Sigma^{-1}| = ac - b^2/4 > 0$$

When $N > 2$, the equation $N(X; M, \Sigma) = c_0$ represents a hyper-ellipsoid in the $N$-dimensional space. The center and spatial distribution of this ellipsoid are determined by $M$ and $\Sigma$, respectively.

In particular, when $X = [x_1, \dots, x_N]^T$ is decorrelated, i.e., $\sigma_{ij}^2 = 0$ for all $i \ne j$, $\Sigma$ becomes a diagonal matrix

$$\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_N^2) = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N^2 \end{bmatrix}$$

and the equation $N(X; M, \Sigma) = c_0$ can be written as

$$(X - M)^T \Sigma^{-1} (X - M) = \sum_{i=1}^{N} \frac{(x_i - \mu_i)^2}{\sigma_i^2} = c_1$$
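The density formula can also be written out directly and, if SciPy is available, compared against scipy.stats.multivariate_normal. The mean and covariance below are arbitrary illustrative values:

```python
# The multivariate normal density, evaluated from the formula above and
# cross-checked against SciPy's implementation.
import numpy as np
from scipy.stats import multivariate_normal

M = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
X = np.array([0.5, 0.0])

d = X - M
p = np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) / (
    (2 * np.pi) ** (len(X) / 2) * np.sqrt(np.linalg.det(Sigma)))
print(p, multivariate_normal.pdf(X, mean=M, cov=Sigma))  # same value
```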

which represents a standard ellipsoid with all its axes parallel to those of the coordinate system.

Estimation of M and Σ

When $p(x_1, \dots, x_N)$ is not known, $M$ and $\Sigma$ cannot be found from their definitions. However, they can be estimated if a large number of outcomes $X_j$ $(j = 1, \dots, K)$ of the random experiment in question can be observed. The mean vector $M$ can be estimated as

$$\hat{M} = \frac{1}{K} \sum_{j=1}^{K} X_j$$

i.e., the $i$th element of $M$ is estimated as

$$\hat{\mu}_i = \frac{1}{K} \sum_{j=1}^{K} x_{ij}$$

where $x_{ij}$ is the $i$th element of $X_j$. The autocorrelation $R$ can be estimated as

$$\hat{R} = \frac{1}{K} \sum_{j=1}^{K} X_j X_j^T$$

And the covariance matrix $\Sigma$ can be estimated as

$$\hat{\Sigma} = \frac{1}{K} \sum_{j=1}^{K} X_j X_j^T - \hat{M}\hat{M}^T = \hat{R} - \hat{M}\hat{M}^T$$
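These estimators are one-liners over an array holding the $K$ outcomes row-wise. The data below is synthetic, generated only so the estimates can be compared with known true values:

```python
# Estimating M, R, and Sigma from K observed outcomes stored as rows of Xs.
import numpy as np

rng = np.random.default_rng(6)
K = 50_000
true_M = np.array([2.0, -1.0])
L = np.array([[1.0, 0.0], [0.8, 0.6]])
Xs = true_M + rng.standard_normal((K, 2)) @ L.T   # K outcomes X_j

M_hat = Xs.mean(axis=0)                     # (1/K) sum_j X_j
R_hat = Xs.T @ Xs / K                       # (1/K) sum_j X_j X_j^T
Sigma_hat = R_hat - np.outer(M_hat, M_hat)  # R_hat - M_hat M_hat^T
print(M_hat)                                # ~ [2, -1]
print(Sigma_hat)                            # ~ L L^T = [[1, 0.8], [0.8, 1]]
```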
