The joint distribution function of a random vector $X$ is defined as
$$F_X(u_1,\dots,u_N) = P(x_1 \le u_1,\dots,x_N \le u_N) = \int_{-\infty}^{u_1}\cdots\int_{-\infty}^{u_N} p(\xi_1,\dots,\xi_N)\, d\xi_1\cdots d\xi_N$$
where $p(x_1,\dots,x_N)$ is the joint density function of the random vector $X$.
Independent Variables
For convenience, let us first consider two of the $N$ variables and rename them as $x$ and $y$. These two variables are independent iff
$$P(A \cap B) = P(x \le u,\, y \le v) = P(x \le u)\,P(y \le v) = P(A)\,P(B)$$
where events $A$ and $B$ are defined as "$x \le u$" and "$y \le v$", respectively. This definition is equivalent to $p(x,y) = p(x)\,p(y)$, as this will lead to
$$P(x \le u,\, y \le v) = \int_{-\infty}^{u}\int_{-\infty}^{v} p(\xi,\eta)\, d\eta\, d\xi = \int_{-\infty}^{u} p(\xi)\, d\xi \int_{-\infty}^{v} p(\eta)\, d\eta = P(x \le u)\,P(y \le v)$$
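As a quick numerical sketch of this factorization, the joint probability $P(x \le u,\, y \le v)$ for two independently drawn samples can be compared against the product of the marginal probabilities (the distributions, sample size, and thresholds $u$, $v$ below are arbitrary choices for illustration, not from the text):

```python
import numpy as np

# Empirical check of P(x<=u, y<=v) = P(x<=u) P(y<=v) for independent x, y.
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=n)           # x and y are drawn independently
y = rng.uniform(size=n)
u, v = 0.5, 0.3

joint = np.mean((x <= u) & (y <= v))      # estimate of P(x<=u, y<=v)
product = np.mean(x <= u) * np.mean(y <= v)
print(abs(joint - product) < 0.01)        # agree up to sampling error
```

The two estimates coincide up to sampling error of order $1/\sqrt{n}$, as the factorization predicts.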
Mean Vector
The mean vector of a random vector $X$ is defined as
$$M = E[X] = [\mu_1,\dots,\mu_N]^T, \qquad \mu_i = E[x_i]$$
which can be interpreted as the center of gravity of an $N$-dimensional object with $p(x_1,\dots,x_N)$ being the density function.
Covariance Matrix
The covariance matrix of a random vector $X$ is defined as
$$\Sigma = E[(X - M)(X - M)^T] = \begin{bmatrix} \vdots & & \vdots \\ \cdots & \sigma_{ij}^2 & \cdots \\ \vdots & & \vdots \end{bmatrix}_{N \times N}$$
where
$$\sigma_{ij}^2 = E[(x_i - \mu_i)(x_j - \mu_j)] = E[x_i x_j] - \mu_i \mu_j$$
is the covariance of $x_i$ and $x_j$. When $i = j$, $\sigma_i^2 = E[x_i^2] - \mu_i^2$ is the variance of $x_i$, which can be interpreted as the amount of information, or energy, contained in the $i$th component of the signal $X$. And the total information or energy contained in $X$ is represented by
$$\operatorname{tr} \Sigma = \sum_{i=1}^{N} \sigma_i^2$$
$\Sigma$ is symmetric as $\sigma_{ij}^2 = \sigma_{ji}^2$. Moreover, it can be shown that $\Sigma$ is also positive definite, i.e., all its eigenvalues $\{\lambda_1,\dots,\lambda_N\}$ are greater than zero, and we have
$$\operatorname{tr} \Sigma = \sum_{i=1}^{N} \lambda_i > 0$$
and
$$\det \Sigma = \prod_{i=1}^{N} \lambda_i > 0$$
Two variables $x_i$ and $x_j$ are uncorrelated iff $\sigma_{ij}^2 = 0$, i.e., $E[x_i x_j] = \mu_i \mu_j$.
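These properties of the covariance matrix can be verified numerically; the sketch below builds a sample covariance matrix for an arbitrary 3-dimensional example (the data and correlation structure are illustrative assumptions) and checks symmetry, positive definiteness, and the trace and determinant identities:

```python
import numpy as np

# Verify the stated properties of a covariance matrix on an example.
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 100_000))   # each column is one outcome of X
X[1] += 0.5 * X[0]                  # introduce some correlation
Sigma = np.cov(X)                   # sample covariance matrix

lam = np.linalg.eigvalsh(Sigma)     # eigenvalues of the symmetric Sigma
print(np.allclose(Sigma, Sigma.T))                    # symmetric
print(np.all(lam > 0))                                # positive definite
print(np.isclose(np.trace(Sigma), lam.sum()))         # tr = sum of eigenvalues
print(np.isclose(np.linalg.det(Sigma), lam.prod()))   # det = product of eigenvalues
```

All four checks print `True`: the eigenvalues are positive, and trace and determinant equal their sum and product, respectively.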
Autocorrelation Matrix
The autocorrelation matrix of a random vector $X$ is defined as
$$R = E[X X^T] = \begin{bmatrix} \vdots & & \vdots \\ \cdots & r_{ij} & \cdots \\ \vdots & & \vdots \end{bmatrix}_{N \times N}$$
where
$$r_{ij} = E[x_i x_j] = \sigma_{ij}^2 + \mu_i \mu_j$$
Obviously $R$ is symmetric, and we have
$$\Sigma = R - M M^T$$
When $M = 0$, we have $\Sigma = R$. Two variables $x_i$ and $x_j$ are orthogonal iff $r_{ij} = 0$. Zero-mean random variables which are uncorrelated are also orthogonal.
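The relation between the covariance and autocorrelation estimates is in fact an algebraic identity that holds exactly for sample-based estimates as well; a minimal sketch (the 2-dimensional example data are an arbitrary assumption):

```python
import numpy as np

# Check Sigma = R - M M^T on sample-based estimates of a 2-D random vector.
rng = np.random.default_rng(2)
K = 50_000
X = rng.normal(loc=[1.0, -2.0], scale=[1.0, 0.5], size=(K, 2)).T  # 2 x K

M = X.mean(axis=1, keepdims=True)        # mean vector estimate (2 x 1)
R = (X @ X.T) / K                        # autocorrelation estimate
Sigma = (X - M) @ (X - M).T / K          # covariance estimate
print(np.allclose(Sigma, R - M @ M.T))   # Sigma = R - M M^T
```

Expanding $(X_j - \hat M)(X_j - \hat M)^T$ and averaging shows the identity holds exactly (up to floating-point error), not just in expectation.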
Unitary Transform

Consider a linear transform of $X$ by a unitary matrix $A$:
$$Y = A^T X, \qquad X = A Y$$
where $A^T = A^{-1}$ and $Y$ is another random vector. The mean vector $M_Y$ and the covariance matrix $\Sigma_Y$ of $Y$ are related to the $M_X$ and $\Sigma_X$ of $X$ as shown below:
$$M_Y = E[Y] = E[A^T X] = A^T E[X] = A^T M_X$$
$$\Sigma_Y = E[Y Y^T] - M_Y M_Y^T = E[A^T X X^T A] - A^T M_X M_X^T A = A^T \left( E[X X^T] - M_X M_X^T \right) A = A^T \Sigma_X A$$
Since $\operatorname{tr}(A^T \Sigma_X A) = \operatorname{tr}(\Sigma_X A A^T) = \operatorname{tr} \Sigma_X$, the total amount of energy or information contained in $X$ is not changed after a unitary transform $Y = A^T X$, although its distribution among the $N$ components is changed.
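A short numerical sketch of this energy conservation, using an arbitrary positive definite $\Sigma_X$ and a random orthogonal matrix $A$ (both illustrative assumptions):

```python
import numpy as np

# Under an orthogonal (real unitary) A, Sigma_Y = A^T Sigma_X A
# has the same trace as Sigma_X: total energy is preserved.
rng = np.random.default_rng(3)
B = rng.normal(size=(4, 4))
Sigma_X = B @ B.T + 4 * np.eye(4)     # an arbitrary positive definite Sigma

A, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # random orthogonal matrix
Sigma_Y = A.T @ Sigma_X @ A

print(np.allclose(A.T @ A, np.eye(4)))                    # A^T = A^{-1}
print(np.isclose(np.trace(Sigma_Y), np.trace(Sigma_X)))   # energy unchanged
```

The diagonal entries (component variances) of $\Sigma_Y$ differ from those of $\Sigma_X$, but their sum is identical.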
Normal Distribution
The density function of a normally distributed random vector $X$ is
$$p(x_1,\dots,x_N) = N(X; M, \Sigma) = \frac{1}{(2\pi)^{N/2}\, |\Sigma|^{1/2}} \exp\!\left[ -\frac{1}{2} (X - M)^T \Sigma^{-1} (X - M) \right]$$
where $M$ and $\Sigma$ are the mean vector and covariance matrix of $X$, respectively. When $N = 1$, $\Sigma$ and $M$ become $\sigma^2$ and $\mu$, respectively, and the density function becomes the single-variable normal distribution. To find the shape of a normal distribution, consider the iso-value hypersurface in the $N$-dimensional space determined by the equation
$$N(X; M, \Sigma) = c_0$$
which is equivalent to
$$(X - M)^T \Sigma^{-1} (X - M) = c_1$$
where $c_1$ is another constant related to $c_0$, $M$ and $\Sigma$. For $N = 2$ variables $x$ and $y$, we have
$$(X - M)^T \Sigma^{-1} (X - M) = [\, x - \mu_x,\; y - \mu_y \,] \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix} \begin{bmatrix} x - \mu_x \\ y - \mu_y \end{bmatrix} = a(x - \mu_x)^2 + b(x - \mu_x)(y - \mu_y) + c(y - \mu_y)^2 = c_1$$
Here we have assumed
$$\Sigma^{-1} = \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix}$$
The above quadratic equation represents an ellipse (instead of any other quadratic curve) centered at $M = (\mu_x, \mu_y)^T$, because $\Sigma^{-1}$, as well as $\Sigma$, is positive definite:
$$\det \Sigma^{-1} = ac - b^2/4 > 0$$
When $N > 2$, the equation $N(X; M, \Sigma) = c_0$ represents a hyper-ellipsoid in the $N$-dimensional space. The center and spatial distribution of this ellipsoid are determined by $M$ and $\Sigma$, respectively.
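To illustrate the iso-value ellipses, the sketch below implements the density formula above for $N = 2$ and checks that two points at the same quadratic-form value $c_1 = 1$ have the same density $c_0$ (the particular $M$ and $\Sigma$ are arbitrary example values):

```python
import numpy as np

# Points with equal (X-M)^T Sigma^{-1} (X-M) lie on one iso-density ellipse.
M = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def density(x):
    """Bivariate normal density, from the formula in the text (N = 2)."""
    d = x - M
    q = d @ Sigma_inv @ d
    return np.exp(-0.5 * q) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

# If Sigma = L L^T (Cholesky), then X = M + L e with unit vector e
# satisfies (X-M)^T Sigma^{-1} (X-M) = e^T e = 1.
L = np.linalg.cholesky(Sigma)
p1 = M + L @ np.array([1.0, 0.0])
p2 = M + L @ np.array([0.0, 1.0])
print(np.isclose(density(p1), density(p2)))   # same c0 on the ellipse c1 = 1
```

Both points give the same density value, confirming that the level sets of $N(X; M, \Sigma)$ are the ellipses $(X-M)^T \Sigma^{-1} (X-M) = c_1$.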
In particular, when $X = (x_1,\dots,x_N)^T$ is decorrelated, i.e., $\sigma_{ij}^2 = 0$ for all $i \ne j$, $\Sigma$ becomes a diagonal matrix
$$\Sigma = \operatorname{diag}(\sigma_1^2,\dots,\sigma_N^2) = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N^2 \end{bmatrix}$$
and the equation $N(X; M, \Sigma) = c_0$ can be written as
$$(X - M)^T \Sigma^{-1} (X - M) = \sum_{i=1}^{N} \frac{(x_i - \mu_i)^2}{\sigma_i^2} = c_1$$
which represents a standard ellipsoid with all its axes parallel to those of the coordinate system.

Estimation of M and Σ

When $p(x_1,\dots,x_N)$ is not known, $M$ and $\Sigma$ cannot be found by their definitions. However, they can be estimated if a large number of outcomes $X_j,\ j = 1,\dots,K$ of the random experiment in question can be observed. The mean vector $M$ can be estimated as
$$\hat{M} = \frac{1}{K} \sum_{j=1}^{K} X_j, \qquad \hat{\mu}_i = \frac{1}{K} \sum_{j=1}^{K} x_{ij}$$
where $x_{ij}$ is the $i$th element of $X_j$. The autocorrelation $R$ can be estimated as
$$\hat{R} = \frac{1}{K} \sum_{j=1}^{K} X_j X_j^T$$
and the covariance matrix can then be estimated as $\hat{\Sigma} = \hat{R} - \hat{M}\hat{M}^T$.
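The estimators above can be sketched directly in code; the true $M$ and $\Sigma$ below are arbitrary example values, and the samples stand in for the $K$ observed outcomes $X_j$:

```python
import numpy as np

# Estimate M, R, and Sigma from K observed outcomes of a 2-D random vector:
#   M_hat = (1/K) sum X_j,  R_hat = (1/K) sum X_j X_j^T,
#   Sigma_hat = R_hat - M_hat M_hat^T.
rng = np.random.default_rng(4)
K = 100_000
true_M = np.array([2.0, -1.0])
true_Sigma = np.array([[1.0, 0.4],
                       [0.4, 0.5]])
X = rng.multivariate_normal(true_M, true_Sigma, size=K).T   # 2 x K

M_hat = X.mean(axis=1)
R_hat = (X @ X.T) / K
Sigma_hat = R_hat - np.outer(M_hat, M_hat)

print(np.allclose(M_hat, true_M, atol=0.02))          # close to true mean
print(np.allclose(Sigma_hat, true_Sigma, atol=0.02))  # close to true covariance
```

Both estimates converge to the true parameters as $K$ grows, with error on the order of $1/\sqrt{K}$.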