
SUBJECT: TRANSFORMS AND RANDOM PROCESS FOR ELECTRONICS

ENGINEERING
SUB CODE: SEC5101

UNIT – IV RANDOM PROCESS

NOTION OF STOCHASTIC PROCESS:


A stochastic (random) process is a system that evolves over time in an uncertain manner.
• A stochastic process is a family of time-indexed random variables {Xt, t ∈ I}, where I is an
index set that is a subset of R.
• Examples of index sets:
1) I = (−∞, ∞) or I = [0, ∞). In this case Xt is a continuous time stochastic
process.
2) I = {0, ±1, ±2, ….} or I = {0, 1, 2, …}. In this case Xt is a discrete time
stochastic process.
• We use uppercase letter {Xt } to describe the process. A time series, {xt } is a
realization or sample function from a certain process.
• We use information from a time series to estimate parameters and properties of
process {Xt }.
• A process is said to be strictly stationary if (Xt1, Xt2, …, Xtk) has the same joint
distribution as (Xt1+h, Xt2+h, …, Xtk+h) for every k and every time shift h. In particular, if
{Xt} is a strictly stationary process, then the mean function is a constant and the variance
function is also a constant.

• Moreover, for a strictly stationary process with the first two moments finite, the
covariance function and the correlation function depend only on the time difference
τ = t − s.
• A trivial example of a strictly stationary process is a sequence of i.i.d random
variables.

Examples
• Automated teller machine (ATM)
• Printed circuit board assembly operation
• Runway activity at airport

WIDE SENSE STATIONARY


• Stationarity refers to time invariance of some, or all, of the statistics of a random
process, e.g., mean, autocorrelation, nth-order distribution, etc.
• We define two types of stationarity, strict sense (SSS) and wide sense
(WSS)
• A random process X(t) (or Xn) is said to be SSS if its finite-order
distributions are time invariant, i.e., the joint cdf (pdf, or pmf) of X(t1),
X(t2), ..., X(tk) is the same as that of X(t1 + α), X(t2 + α), . . . , X(tk + α), for all k,
all t1, t2, . . . , tk, and all time shifts α
• So for an SSS process, the first-order distribution is independent of t, and the
second-order distribution, i.e., the distribution of any two samples X(t1) and
X(t2), depends only on τ = t2 − t1. To see this, note that from the definition of
stationarity, for any t, the joint distribution of X(t1) and X(t2) is the same as
the joint distribution of X(t1 + (t − t1)) = X(t) and X(t2 + (t − t1)) = X(t + (t2 − t1))
• A random process X(t) is said to be WSS if its mean and
autocorrelation functions are time invariant, i.e., E(X(t))=µ,
independent of t and RX(t1,t2) is only a function of (t2−t1)
• Since RX(t1,t2)=RX(t2,t1),if X(t) is WSS,RX(t1,t2) is only a function of |t2
−t1|
• Clearly SSS⇒WSS, the converse, however, is not necessarily true

• For a Gaussian random process (GRP), WSS ⇒ SSS, since the process is completely
specified by its mean and autocorrelation functions

• The random walk is not WSS, since RX(n1, n2) = min{n1, n2} is
not time invariant; in fact, no independent-increment process can be WSS
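As a quick numerical illustration (an added sketch, not part of the original notes; the process and all parameter values are chosen purely for demonstration), the following Python snippet estimates the ensemble mean and the autocorrelation at a fixed lag for a random-phase sinusoid and confirms that both are approximately independent of the absolute time index, as WSS requires.

import numpy as np

rng = np.random.default_rng(0)
n_real, n_samp, w0, A = 5000, 200, 0.3, 1.0      # illustrative values
t = np.arange(n_samp)
phi = rng.uniform(0, 2 * np.pi, size=(n_real, 1))
X = A * np.cos(w0 * t + phi)                     # each row is one realization

lag = 5
mean_t = X.mean(axis=0)                          # ensemble mean at every time index
R_t = (X[:, :-lag] * X[:, lag:]).mean(axis=0)    # estimate of E[X(t)X(t+lag)] versus t

print(np.allclose(mean_t, 0.0, atol=0.05))                            # mean ~ 0 for all t
print(np.allclose(R_t, 0.5 * A**2 * np.cos(w0 * lag), atol=0.05))     # depends only on the lag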

Ergodicity

Ergodicity Principle

If the time averages converge to the corresponding ensemble averages in the
probabilistic sense, then a time average computed from a single long realization can be used
as the value of the corresponding ensemble average. This is the ergodicity
principle, discussed below.

A WSS process {X(t)} is said to be ergodic in mean if the time-averaged mean

⟨µX⟩T = (1/2T) ∫_{−T}^{T} X(t) dt

converges to µX in the mean-square (M.S.) sense as T → ∞. Thus, for a mean-ergodic process {X(t)},

lim_{T→∞} E⟨µX⟩T = µX

and

lim_{T→∞} var⟨µX⟩T = 0

We have earlier shown that

E⟨µX⟩T = µX

and

var⟨µX⟩T = (1/2T) ∫_{−2T}^{2T} CX(τ) (1 − |τ|/2T) dτ

Therefore, the condition for ergodicity in mean is

lim_{T→∞} (1/2T) ∫_{−2T}^{2T} CX(τ) (1 − |τ|/2T) dτ = 0

If CX(τ) decreases to 0 for |τ| > τ0, then the above condition is satisfied.

Further,

(1/2T) ∫_{−2T}^{2T} CX(τ) (1 − |τ|/2T) dτ ≤ (1/2T) ∫_{−2T}^{2T} |CX(τ)| dτ

Therefore, a sufficient condition for mean ergodicity is

∫_{−∞}^{∞} |CX(τ)| dτ < ∞

Example
Consider the random binary waveform {X(t)} discussed in an earlier example. The process has
the auto-covariance function

CX(τ) = 1 − |τ|/Tp    for |τ| ≤ Tp
      = 0             otherwise

Here

∫_{−∞}^{∞} |CX(τ)| dτ = 2 ∫_0^{Tp} (1 − τ/Tp) dτ = 2 (Tp − Tp/2) = Tp

∴ ∫_{−∞}^{∞} |CX(τ)| dτ < ∞

Hence {X(t)} is mean ergodic.
Autocorrelation ergodicity
The time-averaged autocorrelation is

⟨RX(τ)⟩T = (1/2T) ∫_{−T}^{T} X(t) X(t + τ) dt

If we consider Z(t) = X(t) X(t + τ), so that µZ = E Z(t) = RX(τ), then {X(t)} will be
autocorrelation ergodic if {Z(t)} is mean ergodic.

Thus {X(t)} will be autocorrelation ergodic if

lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |τ1|/2T) CZ(τ1) dτ1 = 0

where

CZ(τ1) = E Z(t) Z(t − τ1) − E Z(t) E Z(t − τ1)
       = E X(t) X(t + τ) X(t − τ1) X(t + τ − τ1) − RX²(τ)

which involves a fourth-order moment of the process. For a jointly Gaussian process this
fourth-order moment can be expressed in terms of second-order moments, and hence the
condition for autocorrelation ergodicity of a Gaussian process can be evaluated explicitly.

Writing CZ(α) = E Z(t) Z(t + α) − RX²(τ), X(t) will be autocorrelation ergodic if

lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |α|/2T) ( E Z(t) Z(t + α) − RX²(τ) ) dα = 0

Example
Consider the random-phase sinusoid X(t) = A cos(ω0 t + φ), where A and ω0 are
constants and φ ~ U[0, 2π] is a random variable. We have earlier proved that this
process is WSS with µX = 0 and RX(τ) = (A²/2) cos ω0τ.

For any particular realization x(t) = A cos(ω0 t + φ1),

⟨µx⟩T = (1/2T) ∫_{−T}^{T} A cos(ω0 t + φ1) dt = (A cos φ1 / ω0 T) sin(ω0 T)

and

⟨Rx(τ)⟩T = (1/2T) ∫_{−T}^{T} A cos(ω0 t + φ1) A cos(ω0 (t + τ) + φ1) dt
         = (A²/4T) ∫_{−T}^{T} [cos ω0τ + cos(ω0 (2t + τ) + 2φ1)] dt
         = (A²/2) cos ω0τ + (A²/4ω0T) cos(ω0τ + 2φ1) sin(2ω0T)

We see that as T → ∞, both ⟨µx⟩T → 0 and ⟨Rx(τ)⟩T → (A²/2) cos ω0τ.
For each realization, both the time-averaged mean and the time-averaged
autocorrelation function converge to the corresponding ensemble averages. Thus the
random-phased sinusoid is ergodic in both mean and autocorrelation.
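The following Python sketch (an added illustration; the amplitude, frequency, observation length and sampling step are arbitrary) computes the time-averaged mean and autocorrelation from a single long realization of the random-phase sinusoid and compares them with the ensemble values µX = 0 and RX(τ) = (A²/2) cos ω0τ.

import numpy as np

rng = np.random.default_rng(1)
A, w0, dt, T = 2.0, 2 * np.pi * 5.0, 1e-3, 50.0   # illustrative parameter values
t = np.arange(-T, T, dt)
phi1 = rng.uniform(0, 2 * np.pi)                  # a single realization of the random phase
x = A * np.cos(w0 * t + phi1)

mu_T = x.mean()                                   # time-averaged mean over [-T, T]
tau = 0.02
k = int(round(tau / dt))
R_T = np.mean(x[:-k] * x[k:])                     # time-averaged autocorrelation at lag tau

print(mu_T)                                       # close to the ensemble mean 0
print(R_T, 0.5 * A**2 * np.cos(w0 * tau))         # close to the ensemble autocorrelation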

A random process {X(t)} is ergodic if its time averages converge in the M.S.
sense to the corresponding ensemble averages. This is a stronger requirement than
stationarity: the ensemble averages of all orders of such a process are independent of
time, so an ergodic process is necessarily stationary in the strict sense.
The converse is not true; there are stationary random processes which are not ergodic.

Expectation

In general, the expected value of a random variable, written as E(X), is equal to the
weighted average of the outcomes of the random variable, where the weights are based
on the probabilities of those outcomes. If a is a constant, we can write E(X+a), E(X-a),
E(aX), and so forth. If Y is another random variable, we can also consider E(X+Y), E(XY),
etc.
Variance

Now that we have an idea about the average value or values that a random process
takes, we are often interested in seeing just how spread out the different random values
might be. To do this, we look at the variance which is a measure of this spread. The
variance, often denoted by σ², is written as follows:
σ² = Var(X) = E[(X − E[X])²]

Covariance
The covariance function is a number that measures the common variation of X and Y. It
is defined as cov(X, Y ) = E[(X − E[X])(Y − E[Y ])]
=E[XY ] − E[X]E[Y ]
The covariance is determined by the difference in E[XY ] and E[X]E[Y ]. If X and
Y were statistically independent then E[XY ] would equal E[X]E[Y ] and the covariance
would be zero. Hence, the covariance, as its name implies, measures the common
variation. The covariance can be normalized to produce what is known as the
correlation coefficient,
ρ = cov(X, Y) / √(var(X) var(Y))
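As a small added illustration (the joint distribution below is made up purely for demonstration), the covariance and the correlation coefficient can be estimated directly from sample data with NumPy:

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)            # Y is correlated with X by construction

cov_xy = np.mean(x * y) - x.mean() * y.mean()     # cov(X,Y) = E[XY] - E[X]E[Y]
rho = cov_xy / np.sqrt(x.var() * y.var())         # correlation coefficient

print(cov_xy)   # close to 0.8
print(rho)      # close to 0.8, since var(X) = var(Y) = 1 here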

Correlation:

Correlation determines the degree of similarity between two signals. If the signals are
identical, then the correlation coefficient is 1; if they are totally different, the correlation
coefficient is 0; and if they are identical except that the phase is shifted by
exactly 180° (i.e. mirrored), then the correlation coefficient is -1.

When two independent signals are compared, the procedure is known as cross-
correlation, and when the same signal is compared to phase shifted copies of itself, the
procedure is known as autocorrelation.

A function which is related to the correlation function, but arithmetically less complex, is
the average magnitude difference function.
Autocorrelation is a method which is frequently used for the extraction of fundamental
frequency: if a copy of the signal is shifted in phase, the distance between
correlation peaks is taken to be the fundamental period of the signal (directly related to
the fundamental frequency). The method may be combined with the simple smoothing
operations of peak and centre clipping, or with other low-pass filter operations.

o Autocorrelation function: The autocorrelation function Rxx(τ) of a random
signal X(t) is a measure of how well the future values of X(t) can be predicted
based on past measurements. It contains no phase information of the signal.

Rxx(τ) = E[X(t) X(t + τ)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 p(x1, x2) dx1 dx2

where x1 = X(t), x2 = X(t + τ), and p(x1, x2) is the joint PDF of x1 and x2.


Properties of the Autocorrelation Function of a Real WSS Random Process
Autocorrelation of a deterministic signal
Consider a deterministic signal x(t) such that

0 < lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt < ∞

Such signals are called power signals. For a power signal x(t), the autocorrelation
function is defined as

Rx(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t + τ) x(t) dt

Rx(τ) measures the similarity between a signal and its time-shifted version.

Particularly, Rx(0) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt is the mean-square value. If x(t) is a voltage
waveform across a 1 ohm resistance, then Rx(0) is the average power delivered to the
resistance. In this sense, Rx(0) represents the average power of the signal.
Example
Suppose x(t) = A cos ωt. The autocorrelation function of x(t) at lag τ is given by

Rx(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} A cos ω(t + τ) A cos ωt dt
      = lim_{T→∞} (A²/4T) ∫_{−T}^{T} [cos(ω(2t + τ)) + cos ωτ] dt
      = (A²/2) cos ωτ

We see that Rx(τ) of the above periodic signal is also periodic and its maximum occurs
when τ = 0, ±2π/ω, ±4π/ω, etc. The power of the signal is Rx(0) = A²/2.
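A short numerical check of this result (an added sketch; the amplitude, frequency and averaging interval are arbitrary): the time-averaged autocorrelation of A cos ωt approaches (A²/2) cos ωτ as the averaging interval grows.

import numpy as np

A, w, dt, T = 3.0, 2 * np.pi * 10.0, 1e-4, 20.0   # illustrative values
t = np.arange(-T, T, dt)
x = A * np.cos(w * t)

def Rx(tau):
    # time-averaged autocorrelation (1/2T) * integral of x(t + tau) x(t) dt
    k = int(round(tau / dt))
    return np.mean(x[k:] * x[:len(x) - k])

for tau in (0.0, 0.025, 0.05):
    print(Rx(tau), 0.5 * A**2 * np.cos(w * tau))  # numerical estimate vs analytic value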
The autocorrelation of the deterministic signal gives us insight into the properties of the
autocorrelation function of a WSS process. We shall discuss these properties next.

Properties of the autocorrelation function of a WSS process


Consider a real WSS process { X (t )} . Since the autocorrelation function RX (t1 , t2 ) of
such a process is a function of the lag τ = t1 − t2 , we can redefine a one-parameter

autocorrelation function as
R X (τ ) = EX (t + τ ) X (t )
If { X (t )} is a complex WSS process, then
RX (τ ) = EX (t + τ ) X *(t )
where X *(t ) is the complex conjugate of X (t ). For a discrete random sequence, we
can define the autocorrelation sequence similarly.

The autocorrelation function is an important function characterising a WSS random
process. It possesses some general properties. We briefly describe them below.
1. RX (0) = EX 2 (t ) is the mean-square value of the process. If X (t) is a voltage signal

applied across a 1 ohm resistance, then RX (0) is the ensemble average power

delivered to the resistance. Thus,


R X (0) = EX 2 (t ) ≥ 0.

2. For a real WSS process X(t), RX(τ) is an even function of τ, i.e. RX(−τ) = RX(τ). This is because

RX(−τ) = E X(t − τ) X(t)
       = E X(t) X(t − τ)
       = E X(t1 + τ) X(t1)    (substituting t1 = t − τ)
       = RX(τ)

3. |RX(τ)| ≤ RX(0). This follows from the Schwarz inequality

⟨X(t), X(t + τ)⟩² ≤ ‖X(t)‖² ‖X(t + τ)‖²

We have

RX²(τ) = {E X(t) X(t + τ)}²
       ≤ E X²(t) E X²(t + τ)
       = RX(0) RX(0)
∴ |RX(τ)| ≤ RX(0)

4. RX(τ) is a positive semi-definite function in the sense that for any positive integer
n, any real a1, a2, …, an and any time instants t1, t2, …, tn,

Σ_{i=1}^{n} Σ_{j=1}^{n} ai aj RX(ti − tj) ≥ 0

Proof
Define the random variable

Y = Σ_{i=1}^{n} ai X(ti)

Then we have

0 ≤ E Y² = Σ_{i=1}^{n} Σ_{j=1}^{n} ai aj E X(ti) X(tj)
         = Σ_{i=1}^{n} Σ_{j=1}^{n} ai aj RX(ti − tj)

(A short numerical check of this positive semi-definiteness is sketched after property 6 below.)

It can be shown that the sufficient condition for a function RX (τ ) to be the


autocorrelation function of a real WSS process { X (t )} is that RX (τ ) be real, even and
positive semi definite.
5. If X (t ) is MS periodic, then R X (τ ) is also periodic with the same period.
Proof:
Note that a real WSS random process {X(t)} is called mean-square periodic (MS
periodic) with a period Tp if for every t

E(X(t + Tp) − X(t))² = 0
⇒ E X²(t + Tp) + E X²(t) − 2 E X(t + Tp) X(t) = 0
⇒ RX(0) + RX(0) − 2 RX(Tp) = 0
⇒ RX(Tp) = RX(0)

Again, by the Schwarz inequality,

E(( X(t + τ + Tp) − X(t + τ) ) X(t))² ≤ E( X(t + τ + Tp) − X(t + τ) )² E X²(t)
⇒ ( RX(τ + Tp) − RX(τ) )² ≤ 2( RX(0) − RX(Tp) ) RX(0)
⇒ ( RX(τ + Tp) − RX(τ) )² ≤ 0    (since RX(0) = RX(Tp))
∴ RX(τ + Tp) = RX(τ)

For example, X(t) = A cos(ω0 t + φ), where A and ω0 are constants and
φ ~ U[0, 2π], is an MS periodic random process with period 2π/ω0. Its autocorrelation
function RX(τ) = (A²/2) cos ω0τ is periodic with the same period 2π/ω0.
The converse of this result is also true: if RX(τ) is periodic with period Tp, then X(t)
is MS periodic with period Tp. This property helps us determine the time period of
an MS periodic random process.

6. Suppose X(t) = µX + V(t), where V(t) is a zero-mean WSS process with lim_{τ→∞} RV(τ) = 0. Then

lim_{τ→∞} RX(τ) = µX²
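As the added illustration promised under property 4 (a sketch only; the autocorrelation and time instants are chosen arbitrarily), the matrix [RX(ti − tj)] built from the autocorrelation RX(τ) = (A²/2) cos ω0τ of the random-phase sinusoid is numerically confirmed to be positive semi-definite:

import numpy as np

A, w0 = 1.0, 2.0
t = np.linspace(0.0, 5.0, 50)                                # arbitrary time instants t_i
R = 0.5 * A**2 * np.cos(w0 * (t[:, None] - t[None, :]))      # matrix of R_X(t_i - t_j)

eigvals = np.linalg.eigvalsh(R)                              # R is symmetric
print(eigvals.min() >= -1e-9)                                # True: all eigenvalues >= 0 up to rounding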

Interpretation of the autocorrelation function of a WSS process

The autocorrelation function RX(τ) measures the correlation between the two random
variables X(t) and X(t + τ). If RX(τ) drops quickly with respect to τ, then X(t)
and X(t + τ) will be only weakly correlated for large τ. This in turn means that the signal has
a lot of changes with respect to time; such a signal has strong high-frequency components. If
RX(τ) drops slowly, the signal samples are highly correlated and such a signal has fewer
high-frequency components. Later we will see that RX(τ) is directly related to the
frequency-domain representation of a WSS process.
Cross correlation function of jointly WSS processes

Cross-correlation is the method which basically underlies implementations of the Fourier
transform: signals of varying frequency and phase are correlated with the input
signal, and the degree of correlation as a function of frequency and phase represents the
frequency and phase spectra of the input signal.

If {X(t)} and {Y(t)} are two real jointly WSS random processes, their cross-correlation
function is independent of t and depends only on the time lag. We can write the cross-
correlation function
RXY (τ ) = EX (t + τ )Y (t )
o Cross-correlation function: The cross-correlation function Rxy(τ) is a measure
of how well the future values of one signal can be predicted based on past
measurements of another signal.
Rxy(τ) = E[X(t) Y(t + τ)]

The cross correlation function satisfies the following properties:

(i) RXY(τ) = RYX(−τ)

This is because

RXY(τ) = E X(t + τ) Y(t)
       = E Y(t) X(t + τ)
       = E Y(t1 − τ) X(t1)    (substituting t1 = t + τ)
       = RYX(−τ)

Fig.: RXY(τ) and RYX(τ) are mirror images of each other about τ = 0, illustrating RXY(τ) = RYX(−τ).

(ii) |RXY(τ)| ≤ √(RX(0) RY(0))

We have

RXY²(τ) = ( E X(t + τ) Y(t) )²
        ≤ E X²(t + τ) E Y²(t)    (using the Cauchy-Schwarz inequality)
        = RX(0) RY(0)
∴ |RXY(τ)| ≤ √(RX(0) RY(0))

Further, since the geometric mean is less than or equal to the arithmetic mean,

√(RX(0) RY(0)) ≤ (1/2)( RX(0) + RY(0) )
∴ |RXY(τ)| ≤ √(RX(0) RY(0)) ≤ (1/2)( RX(0) + RY(0) )

(iii) If X (t) and Y (t) are uncorrelated, then RXY (τ ) = EX (t + τ ) EY (t ) = µ X µY

(iv) If X(t) and Y(t) are orthogonal processes, RXY(τ) = E X(t + τ) Y(t) = 0

Example
Consider a random process Z(t) which is the sum of two real jointly WSS random
processes X(t) and Y(t). We have
Z (t ) = X (t ) + Y (t )
RZ (τ ) = EZ (t + τ ) Z (t )
= E[ X (t + τ ) + Y (t + τ )][ X (t ) + Y (t )]
= RX (τ ) + RY (τ ) + RXY (τ ) + RYX (τ )
If X (t ) and Y (t ) are orthogonal processes, then RXY (τ ) = RYX (τ ) = 0
∴ RZ (τ ) = RX (τ ) + RY (τ )
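A brief added numerical check (the two processes below are invented for illustration): for two independent zero-mean sequences the estimated cross-correlation is approximately zero, so the autocorrelation of the sum is approximately the sum of the individual autocorrelations, as derived above.

import numpy as np

rng = np.random.default_rng(3)
n, lag = 200_000, 3
x = rng.normal(size=n)                                            # zero-mean white sequence
y = np.convolve(rng.normal(size=n), np.ones(5) / 5, mode="same")  # independent of x
z = x + y

def corr(a, b, k):
    # estimate E[a(t + k) b(t)] by time averaging
    return np.mean(a[k:] * b[:len(b) - k])

print(abs(corr(x, y, lag)) < 0.01)                                # R_XY ~ 0 (orthogonal)
print(np.isclose(corr(z, z, lag), corr(x, x, lag) + corr(y, y, lag), atol=0.02))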

Linear systems with random inputs:


Basics of Linear Time Invariant Systems:

A system is modeled by a transformation T that maps an input signal x(t ) to an output


signal y(t). We can thus write, y (t ) = T [ x(t )]

Linear system
The system is called linear if superposition applies: the weighted sum of inputs results in
the weighted sum of the corresponding outputs. Thus for a linear system

T  a1 x1 ( t ) + a2 x2 ( t )  = a1T  x1 ( t )  + a2T  x2 ( t ) 

Example: Consider the output of a differentiator, given by

y(t) = d x(t) / dt

Then,

d/dt ( a1 x1(t) + a2 x2(t) ) = a1 d x1(t)/dt + a2 d x2(t)/dt

Hence the differentiator is a linear system.

• Introduction: LTI systems are analyzed using correlation/spectral techniques. The
inputs are assumed to be stationary/ergodic random processes with zero mean.
• Ideal system: the input x(t) is applied to a filter with impulse response h(t) and frequency
response H(f), producing the output y(t):

y(t) = ∫_0^t x(τ) h(t − τ) dτ

Y(f) = X(f) H(f)

• Ideal model:
o Correlation and spectral relationships:

Ryy(τ) = ∫_0^∞ ∫_0^∞ h(α) h(β) Rxx(τ + α − β) dα dβ
Rxy(τ) = ∫_0^∞ h(α) Rxx(τ − α) dα

Syy(f) = |H(f)|² Sxx(f)
Sxy(f) = H(f) Sxx(f)

Total output noise power (mean-square value): ϕy² = ∫_{−∞}^{∞} Syy(f) df
−∞
o Example: LPF driven by white noise

H(f) = 1/(1 + j2πfK) = |H(f)| e^{−jφ(f)},    K = RC (LPF)

⇒ h(t) = (1/K) e^{−t/K} u(t),    |H(f)| = 1/√(1 + (2πfK)²),    φ(f) = tan⁻¹(2πfK)

For white noise: Sxx(f) = A

⇒ Syy(f) = |H(f)|² Sxx(f) = A / (1 + (2πfK)²)

and Ryy(τ) = ∫_{−∞}^{∞} Syy(f) e^{j2πfτ} df = (A/2K) e^{−|τ|/K}

ϕy² = ∫_{−∞}^{∞} Syy(f) df = 2 ∫_0^∞ A / (1 + (2πfK)²) df = A/2K,    or

ϕy² = A ∫_0^∞ h²(t) dt = A ∫_0^∞ (1/K²) e^{−2t/K} dt = A/2K
One-sided spectrum: G(f) = 2 S(f) for f ≥ 0
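A discrete-time simulation sketch (added for illustration; the sampling step, filter constant and noise level are assumed values, and the filter is discretized with a simple forward-Euler step) that passes approximately white noise through an RC low-pass filter and compares the output power with the analytic value A/(2K):

import numpy as np

rng = np.random.default_rng(4)
dt, K, A = 1e-4, 1e-2, 2.0                        # sample step, K = RC, two-sided PSD level A
n = 500_000
w = rng.normal(scale=np.sqrt(A / dt), size=n)     # discrete noise with approximately flat PSD A

a = dt / K                                        # forward-Euler step of dy/dt = (x - y)/K
y = np.zeros(n)
for k in range(1, n):
    y[k] = y[k - 1] + a * (w[k] - y[k - 1])

print(np.var(y[50_000:]))                         # approximately A/(2K) = 100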

o Example: LPF driven by a sine process

H(f) = 1/(1 + j2πfK) = |H(f)| e^{−jφ(f)},    K = RC (LPF)

⇒ h(t) = (1/K) e^{−t/K} for t ≥ 0,  h(t) = 0 for t < 0

⇒ |H(f)| = 1/√(1 + (2πfK)²),    φ(f) = tan⁻¹(2πfK)

For a sine wave: Gxx(f) = (A²/2) δ(f − f0)

⇒ Gyy(f) = |H(f)|² Gxx(f) = A² δ(f − f0) / (2[1 + (2πfK)²])

and Ryy(τ) = ∫_0^∞ Gyy(f) cos(2πfτ) df

⇒ Ryy(τ) = A² cos(2πf0τ) / (2[1 + (2πf0K)²])

ϕy² = ∫_0^∞ Gyy(f) df = ∫_0^∞ A² δ(f − f0) / (2[1 + (2πfK)²]) df

⇒ ϕy² = A² / (2[1 + (2πf0K)²])

• Model with uncorrelated input and output noise: Gmn(f) = Gum(f) = Gvn(f) = 0

The measured input x(t) is the true input u(t) plus input noise m(t); the measured output
y(t) is the true output v(t) of the filter H(f) plus output noise n(t):

x(t) = u(t) + m(t)    y(t) = v(t) + n(t)

⇒ Gxx(f) = Guu(f) + Gmm(f)    Gyy(f) = Gvv(f) + Gnn(f)

Gxy(f) = Guv(f) = H(f) Guu(f)    Gvv(f) = |H(f)|² Guu(f)

⇒ H(f) = Gxy(f)/Guu(f) = Gxy(f)/( Gxx(f) − Gmm(f) )

|H(f)|² = Gvv(f)/Guu(f) = ( Gyy(f) − Gnn(f) )/( Gxx(f) − Gmm(f) )
White noise process
One of the very important random processes is the white noise process. Noises in
many practical situations are approximated by the white noise process. Most
importantly, the white noise plays an important role in modeling of WSS signals.

A white noise process {W(t)} is defined by

SW(ω) = N0/2,    −∞ < ω < ∞

where N0 is a real constant called the intensity of the white noise. The
corresponding autocorrelation function is given by

RW(τ) = (N0/2) δ(τ)

where δ(τ) is the Dirac delta function.

The average power of white noise is

Pavg = E W²(t) = (1/2π) ∫_{−∞}^{∞} (N0/2) dω → ∞

The autocorrelation function and the PSD of a white noise process are shown in the figure
below.

Fig.: The PSD SW(ω) is a constant N0/2 for all ω; the corresponding autocorrelation RW(τ) is an impulse (N0/2) δ(τ) located at τ = 0.

Remarks

• The term white noise is analogous to white light which contains all visible light
frequencies.
• A white noise is generally assumed to be zero-mean.
• A white noise process is unpredictable as the noise samples at different instants of
time are uncorrelated:
CW (ti , t j ) = 0 for ti ≠ t j .

Thus the samples of a white noise process are uncorrelated no matter how closely the
samples are placed. Assuming zero mean, σ W2 → ∞. Thus a white noise has an infinite

variance.

• A white noise is a mathematical abstraction; it cannot be physically realized since it


has infinite average power.
• If the system bandwidth (BW) is sufficiently narrower than the noise BW and the noise
PSD is flat over that band, we can model the noise as a white noise process. Thermal noise, which is the
noise generated in resistors due to the random motion of electrons, is well modelled as
white Gaussian noise, since it has a very flat PSD over a very wide band (several GHz).
• A white noise process can have any probability density function. Particularly, if the
white noise process {W (t )} is a Gaussian random process, then {W (t )} is called a

white Gaussian random process.


• A white noise process is called strict-sense white noise process if the noise samples
at distinct instants of time are independent. A white Gaussian noise process is a
strict-sense white noise process. Such a process represents a ‘purely’ random
process, because its samples at arbitrarily close intervals also will be independent.
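A small added sketch of the last two remarks: samples of a discrete-time white Gaussian noise sequence are uncorrelated, so the sample autocovariance is approximately an impulse at lag zero (the variance chosen below is arbitrary).

import numpy as np

rng = np.random.default_rng(5)
n, sigma = 100_000, 1.5
w = rng.normal(scale=sigma, size=n)           # zero-mean white Gaussian sequence

for lag in range(4):
    c = np.mean(w[lag:] * w[:n - lag])        # sample autocovariance at this lag
    print(lag, round(c, 3))                   # ~ sigma**2 = 2.25 at lag 0, ~ 0 otherwise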

Example: A random-phase sinusoid corrupted by white noise

Suppose X(t) = B sin(ωc t + Φ) + W(t), where B and ωc are constants, Φ ~ U[0, 2π],
and {W(t)} is a zero-mean WGN process with PSD N0/2, independent of Φ.
Find RX(τ) and SX(ω).

RX(τ) = E X(t + τ) X(t)
      = E ( B sin(ωc(t + τ) + Φ) + W(t + τ) )( B sin(ωc t + Φ) + W(t) )
      = (B²/2) cos ωcτ + RW(τ)

∴ SX(ω) = (πB²/2) ( δ(ω + ωc) + δ(ω − ωc) ) + N0/2

where δ(ω) is the Dirac delta function.
Wiener-Khinchin theorem
The Wiener-Khinchin-Einstein theorem is also valid for discrete-time random
processes. The power spectral density SX(ω) of a WSS process {X[n]} is the
discrete-time Fourier transform of the autocorrelation sequence:

SX(ω) = Σ_{m=−∞}^{∞} RX[m] e^{−jωm},    −π ≤ ω ≤ π

RX[m] is related to SX(ω) by the inverse discrete-time Fourier transform:

RX[m] = (1/2π) ∫_{−π}^{π} SX(ω) e^{jωm} dω

Thus RX[m] and SX(ω) form a discrete-time Fourier transform pair. A generalized
PSD can be defined in terms of the z-transform as follows:

SX(z) = Σ_{m=−∞}^{∞} RX[m] z^{−m}

Clearly, SX(ω) = SX(z) evaluated at z = e^{jω}.
Example: Suppose RX[m] = 2^{−|m|}, m = 0, ±1, ±2, ±3, .... Then

SX(ω) = Σ_{m=−∞}^{∞} 2^{−|m|} e^{−jωm}
      = 1 + Σ_{m≠0} (1/2)^{|m|} e^{−jωm}
      = 3 / (5 − 4 cos ω)
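A quick added numerical confirmation (a sketch; the truncation length is arbitrary but ample, since 2^{−60} is negligible) that the DTFT sum of RX[m] = 2^{−|m|} matches the closed form 3/(5 − 4 cos ω):

import numpy as np

omega = np.linspace(-np.pi, np.pi, 201)
m = np.arange(-60, 61)
R = 0.5 ** np.abs(m)                          # autocorrelation sequence R_X[m] = 2**(-|m|)

S = (R[None, :] * np.exp(-1j * omega[:, None] * m[None, :])).sum(axis=1)
S_closed = 3.0 / (5.0 - 4.0 * np.cos(omega))

print(np.allclose(S.real, S_closed, atol=1e-9), np.allclose(S.imag, 0.0, atol=1e-9))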
Definition of the Power Spectral Density of a WSS Process
Let us define the truncated process

XT(t) = X(t)    for −T < t < T
      = 0       otherwise
      = X(t) rect(t/2T)

where rect(t/2T) is the unity-amplitude rectangular pulse of width 2T centred at the origin.
As T → ∞, XT(t) will represent the random process X(t).

Define the mean-square integral

FTXT(ω) = ∫_{−T}^{T} XT(t) e^{−jωt} dt

Applying Parseval's theorem we find the energy of the signal

∫_{−T}^{T} XT²(t) dt = (1/2π) ∫_{−∞}^{∞} |FTXT(ω)|² dω

Therefore, the power associated with XT(t) is

(1/2T) ∫_{−T}^{T} XT²(t) dt = (1/2π) ∫_{−∞}^{∞} ( |FTXT(ω)|² / 2T ) dω

and the average power is given by

E (1/2T) ∫_{−T}^{T} XT²(t) dt = (1/2π) ∫_{−∞}^{∞} ( E|FTXT(ω)|² / 2T ) dω

where E|FTXT(ω)|² / 2T is the contribution to the average power at frequency ω and
represents the power spectral density of XT(t). As T → ∞, the left-hand side of the
above expression represents the average power of X(t). Therefore, the PSD SX(ω) of
the process X(t) is defined in the limiting sense by

SX(ω) = lim_{T→∞} E|FTXT(ω)|² / 2T

Properties of the PSD


S X (ω ) being the Fourier transform of RX (τ ), it shares the properties of the Fourier

transform. Here we discuss important properties of S X (ω ).

The average power of a random process X(t) is

E X²(t) = RX(0) = (1/2π) ∫_{−∞}^{∞} SX(ω) dω

• If {X(t)} is real, RX(τ) is a real and even function of τ. Therefore,

SX(ω) = ∫_{−∞}^{∞} RX(τ) e^{−jωτ} dτ
      = ∫_{−∞}^{∞} RX(τ) (cos ωτ − j sin ωτ) dτ
      = ∫_{−∞}^{∞} RX(τ) cos ωτ dτ
      = 2 ∫_0^∞ RX(τ) cos ωτ dτ

Thus SX(ω) is a real and even function of ω.

• From the definition, SX(ω) = lim_{T→∞} E|FTXT(ω)|² / 2T is always non-negative. Thus
SX(ω) ≥ 0.

• If {X(t)} has a periodic component, RX(τ) is periodic and so SX(ω) will have
impulses.

Power spectral density of a discrete-time WSS random process


Suppose g[n] is a discrete-time real signal. Assume g[n] to be obtained by sampling a
continuous-time signal g(t) at a uniform interval T such that

g[n] = g(nT),    n = 0, ±1, ±2, ...

The discrete-time Fourier transform (DTFT) of the signal g[n] is defined by

G(ω) = Σ_{n=−∞}^{∞} g[n] e^{−jωn}

G(ω) exists if {g[n]} is absolutely summable, that is, Σ_{n=−∞}^{∞} |g[n]| < ∞. The signal g[n] is
obtained from G(ω) by the inverse discrete-time Fourier transform:

g[n] = (1/2π) ∫_{−π}^{π} G(ω) e^{jωn} dω

Following observations about the DTFT are important:


• ω is a frequency variable representing the frequency of a discrete sinusoid.
Thus the signal g[n] = A cos(ω0 n) has a frequency of ω0 radians/sample.

• G(ω) is always periodic in ω with a period of 2π. Thus G(ω) is uniquely defined
in the interval −π ≤ ω ≤ π.

• Suppose {g[n]} is obtained by sampling a continuous-time signal ga(t) at a
uniform interval T such that

g[n] = ga(nT),    n = 0, ±1, ±2, ...

The frequency ω of the discrete-time signal is related to the frequency Ω of the
continuous-time signal by the relation Ω = ω/T, where T is the uniform sampling
interval. The symbol Ω for the frequency of a continuous-time signal is used in the
signal-processing literature just to distinguish it from the corresponding frequency ω of
the discrete-time signal.

• We can define the z-transform of the discrete-time signal by the relation

G(z) = Σ_{n=−∞}^{∞} g[n] z^{−n}

where z is a complex variable. G(ω) is related to G(z) by

G(ω) = G(z) evaluated at z = e^{jω}

Power spectrum of a discrete-time real WSS process { X [n]}


Consider a discrete-time real WSS process {X[n]}. The very notion of stationarity
poses a problem for the frequency-domain representation of {X[n]} through the discrete-time
Fourier transform, since a stationary realization is not absolutely summable. The difficulty is
avoided, as in the case of the continuous-time WSS process, by defining the truncated process

XN[n] = X[n]    for |n| ≤ N
      = 0       otherwise

The power spectral density SX(ω) of the process {X[n]} is defined as

SX(ω) = lim_{N→∞} (1/(2N+1)) E|DTFTXN(ω)|²

where

DTFTXN(ω) = Σ_{n=−∞}^{∞} XN[n] e^{−jωn} = Σ_{n=−N}^{N} X[n] e^{−jωn}

Note that the average power of { X [n]} is RX [0] = E X 2 [n ] and the power spectral

density S X (ω ) indicates the contribution to the average power of the sinusoidal

component of frequency ω.
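An added sketch of this definition in practice: the expectation is approximated by averaging |DTFTXN(ω)|²/(2N+1) over many independent realizations. For illustration (these process parameters are assumptions made for this example), the process is an AR(1) sequence X[n] = 0.5 X[n−1] + W[n] with white Gaussian W[n] of variance 3/4, for which RX[m] = 2^{−|m|} and hence SX(ω) = 3/(5 − 4 cos ω), the example computed earlier.

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)
N, n_real, a, var_w = 256, 2000, 0.5, 0.75
omega = np.linspace(-np.pi, np.pi, 129)
n_idx = np.arange(-N, N + 1)
kernel = np.exp(-1j * np.outer(omega, n_idx))     # e^{-j omega n} for n = -N..N

acc = np.zeros_like(omega)
for _ in range(n_real):
    w = rng.normal(scale=np.sqrt(var_w), size=2 * N + 1 + 500)
    x = lfilter([1.0], [1.0, -a], w)[500:]        # AR(1) realization, start-up transient dropped
    acc += np.abs(kernel @ x) ** 2 / (2 * N + 1)  # |DTFT of truncated process|^2 / (2N+1)

S_est = acc / n_real
S_true = 3.0 / (5.0 - 4.0 * np.cos(omega))
print(np.max(np.abs(S_est - S_true) / S_true))    # small relative error (a few percent)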
Cross power spectral density
Consider a random process Z(t) which is the sum of two real jointly WSS random
processes X(t) and Y(t). As we have seen earlier,

RZ (τ ) = RX (τ ) + RY (τ ) + RXY (τ ) + RYX (τ )
If we take the Fourier transform of both sides,
SZ (ω ) = S X (ω ) + SY (ω ) + FT ( RXY (τ )) + FT ( RYX (τ ))
where FT (.) stands for the Fourier transform.

Thus we see that S Z (ω ) includes contribution from the Fourier transform of the cross-

correlation functions RXY (τ ) and RYX (τ ). These Fourier transforms represent cross power

spectral densities.
Definition of Cross Power Spectral Density
Given two real jointly WSS random processes X(t) and Y(t), the cross power spectral
density (CPSD) SXY(ω) is defined as

SXY(ω) = lim_{T→∞} E [ FTXT*(ω) FTYT(ω) ] / 2T

where FTXT(ω) and FTYT(ω) are the Fourier transforms of the truncated processes
XT(t) = X(t) rect(t/2T) and YT(t) = Y(t) rect(t/2T) respectively, and * denotes the complex
conjugate operation.

We can similarly define SYX(ω) by

SYX(ω) = lim_{T→∞} E [ FTYT*(ω) FTXT(ω) ] / 2T
Proceeding in the same way as in the derivation of the Wiener-Khinchin-Einstein theorem
for the WSS process, it can be shown that

SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ

and

SYX(ω) = ∫_{−∞}^{∞} RYX(τ) e^{−jωτ} dτ

The cross-correlation function and the cross power spectral density thus form a Fourier
transform pair and we can write

RXY(τ) = (1/2π) ∫_{−∞}^{∞} SXY(ω) e^{jωτ} dω

and

RYX(τ) = (1/2π) ∫_{−∞}^{∞} SYX(ω) e^{jωτ} dω

Properties of the CPSD


The CPSD is a complex function of the frequency ω. Some properties of the CPSD of
two jointly WSS processes X(t) and Y(t) are listed below:
(1) SXY(ω) = SYX*(ω)

Note that RXY(τ) = RYX(−τ). Therefore,

SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ
       = ∫_{−∞}^{∞} RYX(−τ) e^{−jωτ} dτ
       = ∫_{−∞}^{∞} RYX(τ) e^{jωτ} dτ
       = SYX*(ω)

(2) Re(SXY(ω)) is an even function of ω and Im(SXY(ω)) is an odd function of ω.

We have

SXY(ω) = ∫_{−∞}^{∞} RXY(τ) (cos ωτ − j sin ωτ) dτ
       = ∫_{−∞}^{∞} RXY(τ) cos ωτ dτ − j ∫_{−∞}^{∞} RXY(τ) sin ωτ dτ
       = Re(SXY(ω)) + j Im(SXY(ω))

where

Re(SXY(ω)) = ∫_{−∞}^{∞} RXY(τ) cos ωτ dτ is an even function of ω, and

Im(SXY(ω)) = −∫_{−∞}^{∞} RXY(τ) sin ωτ dτ is an odd function of ω.

(3) If X(t) and Y(t) are uncorrelated and have constant means, then
SXY(ω) = SYX(ω) = 2π µX µY δ(ω)

Observe that

RXY(τ) = E X(t + τ) Y(t)
       = E X(t + τ) E Y(t)
       = µX µY
       = RYX(τ)
∴ SXY(ω) = SYX(ω) = 2π µX µY δ(ω)

(4) If X(t) and Y(t) are orthogonal, then

SXY(ω) = SYX(ω) = 0

If X(t) and Y(t) are orthogonal,

RXY(τ) = E X(t + τ) Y(t)
       = 0
       = RYX(τ)
∴ SXY(ω) = SYX(ω) = 0

(5) The cross power PXY between X(t) and Y(t) is defined by

PXY = lim_{T→∞} (1/2T) E ∫_{−T}^{T} X(t) Y(t) dt

Applying Parseval's theorem, we get

PXY = lim_{T→∞} (1/2T) E ∫_{−T}^{T} X(t) Y(t) dt
    = lim_{T→∞} (1/2T) E ∫_{−∞}^{∞} XT(t) YT(t) dt
    = lim_{T→∞} (1/2T) (1/2π) E ∫_{−∞}^{∞} FTXT*(ω) FTYT(ω) dω
    = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} ( E FTXT*(ω) FTYT(ω) / 2T ) dω
    = (1/2π) ∫_{−∞}^{∞} SXY(ω) dω

∴ PXY = (1/2π) ∫_{−∞}^{∞} SXY(ω) dω

Similarly,

PYX = (1/2π) ∫_{−∞}^{∞} SYX(ω) dω
    = (1/2π) ∫_{−∞}^{∞} SXY*(ω) dω
    = PXY*

Example: Consider the random process Z(t) = X(t) + Y(t) discussed at the beginning of
the lecture. Here Z(t) is the sum of two real jointly WSS random processes
X(t) and Y(t).
We have,
RZ (τ ) = RX (τ ) + RY (τ ) + RXY (τ ) + RYX (τ )
Taking the Fourier transform of both sides,
S Z (ω ) = S X (ω ) + SY (ω ) + S XY (ω ) + SYX (ω )
∴ (1/2π) ∫_{−∞}^{∞} SZ(ω) dω = (1/2π) ∫_{−∞}^{∞} SX(ω) dω + (1/2π) ∫_{−∞}^{∞} SY(ω) dω + (1/2π) ∫_{−∞}^{∞} SXY(ω) dω + (1/2π) ∫_{−∞}^{∞} SYX(ω) dω

Therefore,

PZ = PX + PY + PXY + PYX

Remark
• PXY + PYX is the additional power contributed by the interaction of X(t) and Y(t) to the
resulting power of X(t) + Y(t)
• If X(t) and Y(t) are orthogonal, then

SZ(ω) = SX(ω) + SY(ω) + 0 + 0
      = SX(ω) + SY(ω)

Consequently,

PZ = PX + PY

Thus, in the case of two jointly WSS orthogonal processes, the power of the sum
of the processes is equal to the sum of the respective powers.
Noise Bandwidth:
The equivalent noise bandwidth (ENB) of a filter is the bandwidth of an ideal rectangular
filter which, with the same peak gain, would pass the same white-noise power. In terms of
the amplitude transfer function, it is

BN = ( 1 / |H(f0)|² ) ∫_0^∞ |H(f)|² df

where:
|H(f0)| is the amplitude transfer function peak value.
|H(f)| is the amplitude transfer function over frequency.
Equivalently, in terms of the power transfer function G(f) = |H(f)|²:

BN = ( 1 / G(f0) ) ∫_0^∞ G(f) df

where:
G(f0) is the power transfer function peak value.
G(f) is the power transfer function over frequency.
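As an added worked example (assuming the RC low-pass filter H(f) = 1/(1 + j2πfK) used earlier, with an arbitrary K), the ENB can be evaluated numerically and compared with the analytic value 1/(4RC):

import numpy as np

K = 1e-3                                       # K = RC, an illustrative value
f = np.linspace(0.0, 1.0e6, 2_000_001)         # frequency grid wide enough to capture the tail
H2 = 1.0 / (1.0 + (2.0 * np.pi * f * K) ** 2)  # power transfer function |H(f)|^2 of the RC LPF

B_N = np.sum(H2) * (f[1] - f[0]) / H2.max()    # ENB = (1/peak of |H|^2) * integral of |H(f)|^2 df
print(B_N, 1.0 / (4.0 * K))                    # both approximately 250 Hz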

PART – A
1. Explain WSS
2. What is ergodicity?
3. Explain parseval’s relation
4. Write in detail about white noise
5. Write the condition for LTI systems

PART – B
1. Write in detail about autocorrelation and cross-correlation
2. State and prove the Wiener-Khinchin relation
3. What is power spectral density? Write its properties
4. Explain expectations, variance and covariance
