ENGINEERING
SUB CODE: SEC5101
• Moreover, for a strictly stationary process whose first two moments are finite, the covariance function and the correlation function depend only on the time difference τ.
• A trivial example of a strictly stationary process is a sequence of i.i.d. random variables.
Examples
• Automated teller machine (ATM)
• Printed circuit board assembly operation
• Runway activity at airport
• For a Gaussian random process (GRP), WSS ⇒ SSS, since the process is completely specified by its mean and autocorrelation functions.
• A random walk is not WSS, since $R_X(n_1, n_2) = \min\{n_1, n_2\}$ is not time-invariant; in fact, no independent-increment process can be WSS. A quick numerical check appears below.
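The sketch below (our illustration, not from the notes; the unit-variance step distribution and sizes are assumptions) estimates the variance of a random walk at several times and shows it growing linearly with n, consistent with $R_X(n_1, n_2) = \min\{n_1, n_2\}$:

```python
import numpy as np

# Sketch: the random walk X[n] = W[1] + ... + W[n] (i.i.d. unit-variance
# steps) is not WSS: Var(X[n]) = R_X(n, n) = n grows with time instead of
# being constant.
rng = np.random.default_rng(8)
steps = rng.standard_normal((5000, 1000))   # 5000 realizations, 1000 steps
walks = np.cumsum(steps, axis=1)            # X[n] for each realization

for n in (10, 100, 1000):
    print(n, walks[:, n - 1].var())         # ~ n, i.e. min{n1, n2} at n1 = n2 = n
```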
Ergodicity
Ergodicity Principle
A WSS process $\{X(t)\}$ is said to be ergodic in mean if the time average
$$\langle \mu_X \rangle_T = \frac{1}{2T}\int_{-T}^{T} X(t)\,dt$$
converges to $\mu_X$ in the mean-square sense as $T \to \infty$. Thus for a mean-ergodic process $\{X(t)\}$,
$$\lim_{T \to \infty} E\left[\langle \mu_X \rangle_T\right] = \mu_X \qquad \text{and} \qquad \lim_{T \to \infty} \operatorname{var}\left[\langle \mu_X \rangle_T\right] = 0.$$
We have earlier shown that
$$E\left[\langle \mu_X \rangle_T\right] = \mu_X$$
and
$$\operatorname{var}\left[\langle \mu_X \rangle_T\right] = \frac{1}{2T}\int_{-2T}^{2T} C_X(\tau)\left(1 - \frac{|\tau|}{2T}\right)d\tau.$$
Therefore, the condition for ergodicity in mean is
$$\lim_{T \to \infty} \frac{1}{2T}\int_{-2T}^{2T} C_X(\tau)\left(1 - \frac{|\tau|}{2T}\right)d\tau = 0.$$
Further,
$$\frac{1}{2T}\left|\int_{-2T}^{2T} C_X(\tau)\left(1 - \frac{|\tau|}{2T}\right)d\tau\right| \le \frac{1}{2T}\int_{-2T}^{2T} \left|C_X(\tau)\right| d\tau.$$
Hence a sufficient condition for mean ergodicity is
$$\int_{-\infty}^{\infty} \left|C_X(\tau)\right| d\tau < \infty,$$
since the right-hand side of the inequality then tends to zero as $T \to \infty$.
Example
Consider the random binary waveform $\{X(t)\}$ discussed in an earlier example. The process has the autocovariance function
$$C_X(\tau) = \begin{cases} 1 - \dfrac{|\tau|}{T_p} & |\tau| \le T_p \\[4pt] 0 & \text{otherwise.} \end{cases}$$
Here
$$\int_{-2T}^{2T} C_X(\tau)\,d\tau = 2\int_0^{T_p}\left(1 - \frac{\tau}{T_p}\right)d\tau = 2\left(T_p - \frac{T_p^2}{2T_p}\right) = T_p.$$
$$\therefore \int_{-\infty}^{\infty} \left|C_X(\tau)\right| d\tau < \infty.$$
Hence $\{X(t)\}$ is mean ergodic.
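Mean ergodicity can also be checked numerically. The following sketch (ours, not part of the example; the AR(1) model and all parameters are assumptions) simulates a process whose autocovariance decays geometrically, hence is absolutely integrable, and shows the time average of a single realization converging to the ensemble mean:

```python
import numpy as np

# Sketch: X[n] = mu_X + V[n], where V[n] = a V[n-1] + W[n] is a zero-mean
# AR(1) sequence. Its autocovariance decays geometrically, so
# sum |C_X| < infinity and the process should be mean ergodic.
rng = np.random.default_rng(0)
mu_X, a, N = 2.0, 0.9, 200_000

w = rng.standard_normal(N)
v = np.zeros(N)
for n in range(1, N):
    v[n] = a * v[n - 1] + w[n]
x = mu_X + v

for T in (100, 10_000, 200_000):
    print(T, x[:T].mean())      # time average -> mu_X = 2.0 as T grows
```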
Autocorrelation ergodicity
The time-averaged autocorrelation is
$$\langle R_X(\tau) \rangle_T = \frac{1}{2T}\int_{-T}^{T} X(t)\,X(t+\tau)\,dt.$$
If we consider $Z(t) = X(t)X(t+\tau)$, so that $\mu_Z = E[Z(t)] = R_X(\tau)$, then $\{X(t)\}$ is autocorrelation ergodic if $\{Z(t)\}$ is mean ergodic, i.e., if
$$\lim_{T\to\infty} \frac{1}{2T}\int_{-2T}^{2T}\left(1 - \frac{|\tau_1|}{2T}\right) C_Z(\tau_1)\,d\tau_1 = 0,$$
where
$$C_Z(\tau_1) = E[Z(t)Z(t-\tau_1)] - E[Z(t)]\,E[Z(t-\tau_1)] = E[X(t)\,X(t+\tau)\,X(t-\tau_1)\,X(t+\tau-\tau_1)] - R_X^2(\tau).$$
Evaluating $C_Z(\tau_1)$ involves a fourth-order moment of $\{X(t)\}$; for a Gaussian process this moment can be expressed in terms of the autocorrelation function, and the above condition for autocorrelation ergodicity can then be verified directly.
Example
Consider the random-phase sinusoid $X(t) = A\cos(\omega_0 t + \phi)$, where $A$ and $\omega_0$ are constants and $\phi \sim U[0, 2\pi]$ is a random variable. We have earlier proved that this process is WSS with $\mu_X = 0$ and $R_X(\tau) = \frac{A^2}{2}\cos\omega_0\tau$.

For any particular realization $x(t) = A\cos(\omega_0 t + \phi_1)$,
$$\langle \mu_x \rangle_T = \frac{1}{2T}\int_{-T}^{T} A\cos(\omega_0 t + \phi_1)\,dt = \frac{A}{T\omega_0}\sin(\omega_0 T)\cos\phi_1 \;\longrightarrow\; 0 = \mu_X \quad \text{as } T \to \infty,$$
and
$$\langle R_x(\tau) \rangle_T = \frac{1}{2T}\int_{-T}^{T} A\cos(\omega_0 t + \phi_1)\,A\cos(\omega_0(t+\tau) + \phi_1)\,dt = \frac{A^2}{4T}\int_{-T}^{T}\left[\cos\omega_0\tau + \cos(\omega_0(2t+\tau) + 2\phi_1)\right]dt.$$
As $T \to \infty$, the second term averages to zero, so $\langle R_x(\tau) \rangle_T \to \frac{A^2}{2}\cos\omega_0\tau = R_X(\tau)$. Hence the random-phase sinusoid is ergodic in both mean and autocorrelation.
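The ergodicity of the random-phase sinusoid can be illustrated numerically. In the sketch below (all parameters are our assumptions), the time averages computed from a single realization are compared with the ensemble values:

```python
import numpy as np

# Sketch: time-averaged mean and autocorrelation of one realization of
# X(t) = A cos(w0 t + phi) versus the ensemble values mu_X = 0 and
# R_X(tau) = (A**2 / 2) cos(w0 tau).
rng = np.random.default_rng(1)
A, w0 = 2.0, 2 * np.pi * 5.0
dt, T = 1e-3, 50.0
t = np.arange(-T, T, dt)
phi1 = rng.uniform(0, 2 * np.pi)            # one fixed realization of phi
x = A * np.cos(w0 * t + phi1)

tau = 0.02
lag = int(round(tau / dt))
print(x.mean())                                  # <mu_x>_T, close to 0
print(np.mean(x[: x.size - lag] * x[lag:]))      # <R_x(tau)>_T
print((A**2 / 2) * np.cos(w0 * tau))             # R_X(tau) ~ 1.618
```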
Expectation
In general, the expected value of a random variable, written as E(X), is the weighted average of the outcomes of the random variable, where the weights are the probabilities of those outcomes. Expectation is linear: if a is a constant, then E(X + a) = E(X) + a, E(X − a) = E(X) − a, and E(aX) = aE(X); if Y is another random variable, then E(X + Y) = E(X) + E(Y), and E(XY) = E(X)E(Y) when X and Y are independent.
Variance
Now that we have an idea about the average value or values that a random process
takes, we are often interested in seeing just how spread out the different random values
might be. To do this, we look at the variance, which is a measure of this spread. The variance, often denoted by σ², is written as follows:
$$\sigma^2 = \operatorname{Var}(X) = E\left[(X - E[X])^2\right]$$
Covariance
The covariance function is a number that measures the common variation of X and Y. It
is defined as cov(X, Y ) = E[(X − E[X])(Y − E[Y ])]
=E[XY ] − E[X]E[Y ]
The covariance is determined by the difference in E[XY ] and E[X]E[Y ]. If X and
Y were statistically independent then E[XY ] would equal E[X]E[Y ] and the covariance
would be zero. Hence, the covariance, as its name implies, measures the common
variation. The covariance can be normalized to produce what is known as the
correlation coefficient, ρ
= cov(X, Y)/√var(X)var(Y)
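These moment definitions translate directly into sample estimates. The sketch below (our example; the simulated data model is an assumption) computes E[X], Var(X), cov(X, Y) and ρ from samples:

```python
import numpy as np

# Sketch: sample estimates of expectation, variance, covariance and the
# correlation coefficient, following the definitions above.
rng = np.random.default_rng(2)
x = rng.standard_normal(10_000)
y = 0.6 * x + 0.8 * rng.standard_normal(10_000)    # correlated with x

E_x = x.mean()                                     # E[X]
var_x = np.mean((x - E_x) ** 2)                    # E[(X - E[X])^2]
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))  # E[(X-E[X])(Y-E[Y])]
rho = cov_xy / np.sqrt(var_x * np.var(y))          # normalized covariance
print(E_x, var_x, cov_xy, rho)                     # rho ~ 0.6 here
```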
Correlation:
Correlation determines the degree of similarity between two signals. If the signals are identical, the correlation coefficient is 1; if they are totally different, the correlation coefficient is 0; and if they are identical except that the phase is shifted by exactly 180° (i.e. mirrored), the correlation coefficient is −1.
When two independent signals are compared, the procedure is known as cross-correlation, and when the same signal is compared to phase-shifted copies of itself, the procedure is known as autocorrelation.
A function which is related to the correlation function, but arithmetically less complex, is
the average magnitude difference function.
Autocorrelation is a method frequently used for the extraction of fundamental frequency: if a copy of the signal is shifted in phase, the distance between correlation peaks is taken to be the fundamental period of the signal (directly related to the fundamental frequency). The method may be combined with the simple smoothing operations of peak and centre clipping, or with other low-pass filter operations.
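A minimal sketch of this pitch-extraction idea follows (our illustration; the test signal, sample rate and pitch search range are assumptions):

```python
import numpy as np

# Sketch: estimate the fundamental frequency as the lag of the first major
# off-zero peak of the autocorrelation sequence.
fs = 8000.0                                # sample rate, Hz
f0 = 200.0                                 # true fundamental, Hz
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)

r = np.correlate(x, x, mode="full")[x.size - 1:]   # R_x at lags >= 0
lag_min = int(fs / 500)                    # ignore pitches above 500 Hz
peak = lag_min + np.argmax(r[lag_min:])    # first dominant peak
print(fs / peak)                           # estimated f0, close to 200 Hz
```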
Signals with finite average power (but infinite energy) are called power signals. For a power signal $x(t)$, the autocorrelation function is defined as
$$R_x(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t+\tau)\,x(t)\,dt.$$
In particular,
$$R_x(0) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x^2(t)\,dt$$
is the mean-square value. If $x(t)$ is a voltage waveform across a 1 ohm resistance, then $R_x(0)$ is the average power delivered to the resistance. In this sense, $R_x(0)$ represents the average power of the signal.
Example
Suppose $x(t) = A\cos\omega t$. The autocorrelation function of $x(t)$ at lag $\tau$ is given by
$$R_x(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} A\cos\omega(t+\tau)\,A\cos\omega t\,dt = \lim_{T\to\infty} \frac{A^2}{4T}\int_{-T}^{T} \left[\cos(2\omega t + \omega\tau) + \cos\omega\tau\right]dt = \frac{A^2\cos\omega\tau}{2}.$$
We see that $R_x(\tau)$ of the above periodic signal is also periodic, and its maximum occurs at $\tau = 0, \pm\frac{2\pi}{\omega}, \pm\frac{4\pi}{\omega}$, etc. The power of the signal is $R_x(0) = \frac{A^2}{2}$.
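The same result can be reproduced numerically by truncating the limit at a large T (a check of ours; the amplitude, frequency and averaging window are assumptions):

```python
import numpy as np

# Sketch: numerical check of R_x(tau) = (A**2 / 2) cos(w tau) for the
# power signal x(t) = A cos(w t), with T -> infinity approximated by a
# large finite averaging window.
A, w = 2.0, 2 * np.pi * 3.0
dt, T = 1e-4, 100.0
t = np.arange(-T, T, dt)
x = A * np.cos(w * t)

for tau in (0.0, 0.1, 0.25):
    lag = int(round(tau / dt))
    R = np.mean(x[: x.size - lag] * x[lag:])
    print(tau, R, (A**2 / 2) * np.cos(w * tau))   # pairs agree closely
```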
The autocorrelation of the deterministic signal gives us insight into the properties of the
autocorrelation function of a WSS process. We shall discuss these properties next.
For a real WSS process $\{X(t)\}$, we define the autocorrelation function as
$$R_X(\tau) = E[X(t+\tau)X(t)].$$
If $\{X(t)\}$ is a complex WSS process, then
$$R_X(\tau) = E[X(t+\tau)X^*(t)],$$
where $X^*(t)$ is the complex conjugate of $X(t)$. For a discrete random sequence, we can define the autocorrelation sequence similarly. The autocorrelation function has the following properties.
1. Mean-square value: $R_X(0) = E[X^2(t)]$. If $X(t)$ is a voltage waveform applied across a 1 ohm resistance, then $R_X(0)$ is the ensemble average power delivered to the resistance.
2. Even symmetry: $R_X(-\tau) = R_X(\tau)$. Thus,
$$R_X(-\tau) = E[X(t-\tau)X(t)] = E[X(t)X(t-\tau)] = E[X(t_1+\tau)X(t_1)] = R_X(\tau) \quad (\text{substituting } t_1 = t - \tau).$$
3. $|R_X(\tau)| \le R_X(0)$. We have, by the Cauchy–Schwarz inequality,
$$R_X^2(\tau) = \{E[X(t)X(t+\tau)]\}^2 \le E[X^2(t)]\,E[X^2(t+\tau)] = R_X(0)\,R_X(0).$$
$$\therefore |R_X(\tau)| \le R_X(0).$$
4. $R_X(\tau)$ is a positive semi-definite function, in the sense that for any positive integer $n$ and any real $a_1, a_2, \ldots, a_n$,
$$\sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j R_X(t_i - t_j) \ge 0.$$
Proof
Define the random variable
$$Y = \sum_{i=1}^{n} a_i X(t_i).$$
Then we have
$$0 \le E[Y^2] = \sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j E[X(t_i)X(t_j)] = \sum_{i=1}^{n}\sum_{j=1}^{n} a_i a_j R_X(t_i - t_j).$$
5. If $R_X(T_p) = R_X(0)$ for some $T_p \neq 0$, then $R_X(\tau)$ is periodic with period $T_p$. By the Cauchy–Schwarz inequality,
$$\left(E\left[(X(t+\tau+T_p) - X(t+\tau))\,X(t)\right]\right)^2 \le E\left[(X(t+\tau+T_p) - X(t+\tau))^2\right] E[X^2(t)]$$
$$\Rightarrow \left(R_X(\tau+T_p) - R_X(\tau)\right)^2 \le 2\left(R_X(0) - R_X(T_p)\right) R_X(0)$$
$$\Rightarrow \left(R_X(\tau+T_p) - R_X(\tau)\right)^2 \le 0 \qquad (\because R_X(0) = R_X(T_p))$$
$$\therefore R_X(\tau+T_p) = R_X(\tau) \quad \text{for all } \tau.$$
6. Suppose $X(t) = \mu_X + V(t)$, where $V(t)$ is a zero-mean WSS process with $\lim_{\tau\to\infty} R_V(\tau) = 0$. Then
$$\lim_{\tau\to\infty} R_X(\tau) = \mu_X^2.$$
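Property 3 is easy to observe on an estimated autocorrelation. The sketch below (our illustration; the moving-average process is an assumption) checks that the estimate attains its maximum at lag zero:

```python
import numpy as np

# Sketch: |R_X(tau)| <= R_X(0) on the estimated autocorrelation of a
# moving-average (hence WSS) noise realization.
rng = np.random.default_rng(3)
w = rng.standard_normal(100_000)
x = np.convolve(w, np.ones(10) / 10, mode="valid")   # 10-tap moving average

R = np.array([np.mean(x[: x.size - k] * x[k:]) for k in range(51)])
print(R[0])                                # R_X(0) = mean-square value ~ 0.1
print(np.all(np.abs(R) <= R[0] + 1e-12))   # property 3 holds
```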
If $\{X(t)\}$ and $\{Y(t)\}$ are two real jointly WSS random processes, their cross-correlation function is independent of $t$ and depends only on the time lag $\tau$. We can write the cross-correlation function as
$$R_{XY}(\tau) = E[X(t+\tau)\,Y(t)].$$
The cross-correlation function is a measure of how well the future values of one signal can be predicted from past measurements of another signal. It satisfies the symmetry property $R_{XY}(\tau) = R_{YX}(-\tau)$.
Fig.: $R_{XY}(\tau)$ and $R_{YX}(\tau)$ plotted against $\tau$, illustrating $R_{XY}(\tau) = R_{YX}(-\tau)$.
Example
Consider a random process $Z(t)$ which is the sum of two real jointly WSS random processes $X(t)$ and $Y(t)$:
$$Z(t) = X(t) + Y(t).$$
Then
$$R_Z(\tau) = E[Z(t+\tau)Z(t)] = E\left[(X(t+\tau)+Y(t+\tau))(X(t)+Y(t))\right] = R_X(\tau) + R_Y(\tau) + R_{XY}(\tau) + R_{YX}(\tau).$$
If $X(t)$ and $Y(t)$ are orthogonal processes, then $R_{XY}(\tau) = R_{YX}(\tau) = 0$, and
$$\therefore R_Z(\tau) = R_X(\tau) + R_Y(\tau).$$
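This additivity is simple to confirm by simulation (a sketch of ours; the two processes below are independent zero-mean noise, hence orthogonal):

```python
import numpy as np

# Sketch: for orthogonal (here independent, zero-mean) processes,
# R_Z(tau) = R_X(tau) + R_Y(tau).
rng = np.random.default_rng(4)
N = 200_000
x = np.convolve(rng.standard_normal(N), np.ones(5) / 5, "same")
y = np.convolve(rng.standard_normal(N), np.ones(20) / 20, "same")
z = x + y

def R(s, k):                       # crude autocorrelation estimate at lag k
    return np.mean(s[: s.size - k] * s[k:])

for k in (0, 2, 6):
    print(R(z, k), R(x, k) + R(y, k))   # agree up to estimation error
```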
Linear system
The system is called linear if superposition applies: a weighted sum of inputs produces the same weighted sum of the corresponding outputs. Thus for a linear system $T[\cdot]$,
$$T\left[a_1 x_1(t) + a_2 x_2(t)\right] = a_1 T\left[x_1(t)\right] + a_2 T\left[x_2(t)\right].$$
• Introduction: LTI systems are analyzed using the correlation/spectral technique. The inputs are assumed to be stationary/ergodic random processes with zero mean.
• Ideal system:
Fig.: input $x(t)$ applied to an LTI system with impulse response $h(t)$ and transfer function $H(f)$, producing output $y(t)$.
$$y(t) = \int_0^{t} x(\tau)\,h(t-\tau)\,d\tau, \qquad Y(f) = X(f)\,H(f).$$
• Ideal model:
o Correlation and spectral relationships:
$$R_{yy}(\tau) = \int_0^{\infty}\!\!\int_0^{\infty} h(\alpha)\,h(\beta)\,R_{xx}(\tau+\alpha-\beta)\,d\alpha\,d\beta, \qquad R_{xy}(\tau) = \int_0^{\infty} h(\alpha)\,R_{xx}(\tau-\alpha)\,d\alpha,$$
$$S_{yy}(f) = \left|H(f)\right|^2 S_{xx}(f), \qquad S_{xy}(f) = H(f)\,S_{xx}(f).$$
Total output noise power:
$$\varphi_y^2 = \int_{-\infty}^{\infty} S_{yy}(f)\,df.$$
o Example: RC low-pass filter ($K = RC$) driven by a sinusoid:
$$H(f) = \frac{1}{1 + j2\pi fK} = \left|H(f)\right| e^{-j\phi(f)}$$
$$\Rightarrow h(t) = \frac{1}{K}\,e^{-t/K}\,u(t), \qquad \left|H(f)\right| = \frac{1}{\sqrt{1+(2\pi fK)^2}}, \qquad \phi(f) = \tan^{-1}(2\pi fK).$$
For an input sinusoid of amplitude $A$ at frequency $f_0$ (one-sided input PSD $G_{xx}(f) = \frac{A^2}{2}\,\delta(f-f_0)$),
$$R_{yy}(\tau) = \frac{A^2 \cos(2\pi f_0\tau)}{2\left[1+(2\pi f_0 K)^2\right]}$$
and
$$\varphi_y^2 = \int_0^{\infty} G_{yy}(f)\,df = \int_0^{\infty} \frac{A^2\,\delta(f-f_0)}{2\left[1+(2\pi fK)^2\right]}\,df = \frac{A^2}{2\left[1+(2\pi f_0K)^2\right]}.$$
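The predicted output power can be reproduced by direct simulation (a sketch with assumed parameters; the filter is applied by discretizing $h(t)$):

```python
import numpy as np

# Sketch: pass x(t) = A cos(2 pi f0 t) through the RC low-pass filter
# h(t) = (1/K) exp(-t/K) u(t) and compare the steady-state output power
# with the predicted A**2 / (2 [1 + (2 pi f0 K)**2]).
A, f0, K = 1.0, 50.0, 0.01
dt = 1e-4
t = np.arange(0.0, 0.5, dt)              # 0.5 s = 50 time constants
x = A * np.cos(2 * np.pi * f0 * t)
h = (1.0 / K) * np.exp(-t / K)           # sampled causal impulse response

y = np.convolve(x, h)[: t.size] * dt     # y = (h * x)(t), Riemann-sum scaling
steady = y[t.size // 2:]                 # discard the initial transient
print(np.mean(steady**2))                               # measured power
print(A**2 / (2 * (1 + (2 * np.pi * f0 * K) ** 2)))     # predicted power
```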
Fig.: Block diagram of an LTI filter $h(t)$ / $H(f)$ whose input $x(t)$ is a signal $m(t)$ plus additive noise $n(t)$, and whose output $y(t)$ contains the filtered signal and noise components.
A white noise process $\{W(t)\}$ has the power spectral density
$$S_W(\omega) = \frac{N_0}{2}, \qquad -\infty < \omega < \infty,$$
where $N_0$ is a real constant called the intensity of the white noise. The corresponding autocorrelation function is given by
$$R_W(\tau) = \frac{N_0}{2}\,\delta(\tau),$$
where $\delta(\tau)$ is the Dirac delta. The average power of white noise is
$$P_{avg} = E[W^2(t)] = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{N_0}{2}\,d\omega \to \infty.$$
The autocorrelation function and the PSD of a white noise process are shown in the figure below.
Fig.: $S_W(\omega) = N_0/2$ is flat over all $\omega$; $R_W(\tau) = (N_0/2)\,\delta(\tau)$ is an impulse at $\tau = 0$.
Remarks
• The term white noise is analogous to white light which contains all visible light
frequencies.
• A white noise is generally assumed to be zero-mean.
• A white noise process is unpredictable as the noise samples at different instants of
time are uncorrelated:
CW (ti , t j ) = 0 for ti ≠ t j .
Thus the samples of a white noise process are uncorrelated no matter how closely the samples are placed. Assuming zero mean, $\sigma_W^2 = R_W(0) \to \infty$; thus a white noise has infinite variance.
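Discrete-time white noise can be inspected directly. The sketch below (our illustration, with the sample variance σ² playing the role of $N_0/2$) shows the near-zero sample autocorrelation at nonzero lags and the flat averaged periodogram:

```python
import numpy as np

# Sketch: discrete white noise has an autocorrelation that vanishes at all
# nonzero lags and a flat power spectrum at the level sigma2.
rng = np.random.default_rng(5)
sigma2 = 2.0
w = np.sqrt(sigma2) * rng.standard_normal(1 << 16)

for k in range(4):                       # ~sigma2 at lag 0, ~0 elsewhere
    print(k, np.mean(w[: w.size - k] * w[k:]))

segs = w.reshape(64, -1)                 # 64 segments of 1024 samples
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / segs.shape[1]
print(psd.mean(), psd.std())             # mean ~ sigma2, modest spread
```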
Power Spectral Density of a WSS Process
Let us define the truncated process
$$X_T(t) = \begin{cases} X(t) & -T < t < T \\ 0 & \text{otherwise} \end{cases} \;=\; X(t)\,\mathrm{rect}\!\left(\frac{t}{2T}\right),$$
where $\mathrm{rect}\left(\frac{t}{2T}\right)$ is the unity-amplitude rectangular pulse of width $2T$ centred at the origin. As $T \to \infty$, $X_T(t)$ will represent the random process $X(t)$.
Define the mean-square integral
$$FTX_T(\omega) = \int_{-T}^{T} X_T(t)\,e^{-j\omega t}\,dt.$$
By Parseval's theorem,
$$\int_{-T}^{T} X_T^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left|FTX_T(\omega)\right|^2 d\omega,$$
and the power spectral density of the WSS process $\{X(t)\}$ is defined as
$$S_X(\omega) = \lim_{T\to\infty} \frac{E\left|FTX_T(\omega)\right|^2}{2T}.$$
With this definition, the average power of the process is
$$E[X^2(t)] = R_X(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\,d\omega.$$
By the Wiener–Khinchin–Einstein theorem, the PSD is the Fourier transform of the autocorrelation function:
$$S_X(\omega) = \int_{-\infty}^{\infty} R_X(\tau)\,e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{\infty} R_X(\tau)\left(\cos\omega\tau - j\sin\omega\tau\right)d\tau = \int_{-\infty}^{\infty} R_X(\tau)\cos\omega\tau\,d\tau = 2\int_0^{\infty} R_X(\tau)\cos\omega\tau\,d\tau,$$
where the sine term integrates to zero because $R_X(\tau)$ is even; consequently $S_X(\omega)$ is a real and even function of $\omega$.
For a discrete-time sequence $\{g[n]\}$, the discrete-time Fourier transform (DTFT) is $G(\omega) = \sum_{n=-\infty}^{\infty} g[n]\,e^{-j\omega n}$. $G(\omega)$ exists if $\{g[n]\}$ is absolutely summable, that is, $\sum_{n=-\infty}^{\infty} |g[n]| < \infty$. Since $G(\omega)$ is periodic in $\omega$ with period $2\pi$, the signal $g[n]$ is completely specified by $G(\omega)$ in the interval $-\pi \le \omega \le \pi$.
• Suppose $\{g[n]\}$ is obtained by sampling a continuous-time signal $g_a(t)$ at a uniform interval $T$ such that
$$g[n] = g_a(nT), \qquad n = 0, \pm 1, \pm 2, \ldots$$
For a WSS random sequence $\{X[n]\}$, define the truncated sequence
$$X_N[n] = \begin{cases} X[n] & |n| \le N \\ 0 & \text{otherwise.} \end{cases}$$
The power spectral density is defined as
$$S_X(\omega) = \lim_{N\to\infty} \frac{1}{2N+1}\, E\left|DTFTX_N(\omega)\right|^2,$$
where
$$DTFTX_N(\omega) = \sum_{n=-\infty}^{\infty} X_N[n]\,e^{-j\omega n} = \sum_{n=-N}^{N} X[n]\,e^{-j\omega n}.$$
Note that the average power of $\{X[n]\}$ is $R_X[0] = E[X^2[n]] = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_X(\omega)\,d\omega$, and the power spectral density $S_X(\omega)$ describes how this power is distributed over the spectral components of frequency $\omega$.
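The definition above can be implemented directly as an averaged periodogram. The sketch below (the test sequence is our choice) estimates $S_X(\omega)$ for the moving-average sequence $X[n] = (W[n] + W[n-1])/2$, whose theoretical PSD is $(1 + \cos\omega)/2$:

```python
import numpy as np

# Sketch: estimate S_X(omega) as the average of |DTFT X_N|^2 / (2N + 1)
# over independent realizations, per the definition above.
rng = np.random.default_rng(6)
N, trials = 256, 200
omega = np.linspace(-np.pi, np.pi, 512)
E = np.exp(-1j * np.outer(np.arange(-N, N + 1), omega))  # DTFT kernel
acc = np.zeros_like(omega)

for _ in range(trials):
    w = rng.standard_normal(2 * N + 2)
    x = 0.5 * (w[1:] + w[:-1])           # 2N + 1 samples of X[n]
    acc += np.abs(x @ E) ** 2 / (2 * N + 1)

S_est = acc / trials
k = 128                                  # omega ~ -pi/2
print(S_est[k], (1 + np.cos(omega[k])) / 2)   # estimate vs theory
```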
Cross power spectral density
Consider a random process $Z(t)$ which is the sum of two real jointly WSS random processes $X(t)$ and $Y(t)$. As we have seen earlier,
RZ (τ ) = RX (τ ) + RY (τ ) + RXY (τ ) + RYX (τ )
If we take the Fourier transform of both sides,
SZ (ω ) = S X (ω ) + SY (ω ) + FT ( RXY (τ )) + FT ( RYX (τ ))
where FT (.) stands for the Fourier transform.
Thus we see that S Z (ω ) includes contribution from the Fourier transform of the cross-
correlation functions RXY (τ ) and RYX (τ ). These Fourier transforms represent cross power
spectral densities.
Definition of Cross Power Spectral Density
Given two real jointly WSS random processes $X(t)$ and $Y(t)$, the cross power spectral density (CPSD) $S_{XY}(\omega)$ is defined as
$$S_{XY}(\omega) = \lim_{T\to\infty} \frac{E\left[FTX_T^{*}(\omega)\,FTY_T(\omega)\right]}{2T},$$
where $FTX_T(\omega)$ and $FTY_T(\omega)$ are the Fourier transforms of the truncated processes $X_T(t) = X(t)\,\mathrm{rect}\left(\frac{t}{2T}\right)$ and $Y_T(t) = Y(t)\,\mathrm{rect}\left(\frac{t}{2T}\right)$ respectively, and $^{*}$ denotes the complex conjugate operation.
We can similarly define $S_{YX}(\omega)$ by
$$S_{YX}(\omega) = \lim_{T\to\infty} \frac{E\left[FTY_T^{*}(\omega)\,FTX_T(\omega)\right]}{2T}.$$
Proceeding in the same way as in the derivation of the Wiener–Khinchin–Einstein theorem for the WSS process, it can be shown that
$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\,e^{-j\omega\tau}\,d\tau \qquad \text{and} \qquad S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau)\,e^{-j\omega\tau}\,d\tau.$$
The cross-correlation function and the cross power spectral density thus form a Fourier transform pair, and we can write
$$R_{XY}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\,e^{j\omega\tau}\,d\omega \qquad \text{and} \qquad R_{YX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{YX}(\omega)\,e^{j\omega\tau}\,d\omega.$$
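Cross power spectral densities can be estimated with standard Welch-type tools. The sketch below (our example; the FIR taps and segment length are assumptions) checks the filtering relation $S_{xy}(f) = H(f)\,S_{xx}(f)$ for a white input:

```python
import numpy as np
from scipy import signal

# Sketch: for y = h * x with white input x, the estimated CPSD should
# satisfy S_xy(f) ~ H(f) S_xx(f).
rng = np.random.default_rng(7)
x = rng.standard_normal(1 << 16)
h = np.array([0.5, 0.3, 0.2])            # arbitrary FIR filter
y = signal.lfilter(h, 1.0, x)

f, Pxy = signal.csd(x, y, fs=1.0, nperseg=1024)   # Welch CPSD estimate
f, Pxx = signal.welch(x, fs=1.0, nperseg=1024)
H = np.fft.rfft(h, 1024)                 # H(f) on the same frequency grid

for k in (10, 100, 300):
    print(Pxy[k], H[k] * Pxx[k])         # agree up to estimation error
```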
(3) If $X(t)$ and $Y(t)$ are uncorrelated and have constant means, then
$$S_{XY}(\omega) = S_{YX}(\omega) = 2\pi\,\mu_X\,\mu_Y\,\delta(\omega).$$
Observe that
$$R_{XY}(\tau) = E[X(t+\tau)Y(t)] = E[X(t+\tau)]\,E[Y(t)] = \mu_X\,\mu_Y,$$
and similarly $R_{YX}(\tau) = \mu_Y\,\mu_X = \mu_X\,\mu_Y$. Taking the Fourier transform of this constant gives
$$\therefore S_{XY}(\omega) = S_{YX}(\omega) = 2\pi\,\mu_X\,\mu_Y\,\delta(\omega).$$
(5) The cross power $P_{XY}$ between $X(t)$ and $Y(t)$ is defined by
$$P_{XY} = \lim_{T\to\infty} \frac{1}{2T}\,E\int_{-T}^{T} X(t)\,Y(t)\,dt = \lim_{T\to\infty} \frac{1}{2T}\,E\int_{-\infty}^{\infty} X_T(t)\,Y_T(t)\,dt$$
$$= \lim_{T\to\infty} \frac{1}{2T}\cdot\frac{1}{2\pi}\,E\int_{-\infty}^{\infty} FTX_T^{*}(\omega)\,FTY_T(\omega)\,d\omega \qquad \text{(by Parseval's theorem)}$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty} \lim_{T\to\infty} \frac{E\left[FTX_T^{*}(\omega)\,FTY_T(\omega)\right]}{2T}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\,d\omega.$$
$$\therefore P_{XY} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\,d\omega.$$
Similarly,
$$P_{YX} = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{YX}(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}^{*}(\omega)\,d\omega = P_{XY}^{*}.$$
Remark
• $P_{XY} + P_{YX}$ is the additional power contributed by $X(t)$ and $Y(t)$ to the resulting power of $X(t) + Y(t)$.
• If $X(t)$ and $Y(t)$ are orthogonal, then
$$S_Z(\omega) = S_X(\omega) + S_Y(\omega) + 0 + 0 = S_X(\omega) + S_Y(\omega),$$
and consequently
$$P_Z = P_X + P_Y.$$
Thus in the case of two jointly WSS orthogonal processes, the power of the sum of the processes is equal to the sum of the respective powers.
Noise Bandwidth:
The equivalent noise bandwidth (ENB) is the bandwidth of an ideal rectangular filter that would pass the same white-noise power as the filter under consideration. The mathematical definition of the ENB (from [1]) is
$$B_N = \frac{1}{\left|H\right|_{pk}^{2}}\int_0^{\infty} \left|H(f)\right|^{2} df,$$
where:
• $\left|H\right|_{pk}$ is the amplitude transfer function peak value.
• $\left|H(f)\right|$ is the amplitude transfer function over frequency.
Expressed equivalently in terms of power:
$$B_N = \frac{1}{G_{pk}}\int_0^{\infty} G(f)\,df,$$
where:
• $G_{pk}$ is the power transfer function peak value.
• $G(f) = \left|H(f)\right|^{2}$ is the power transfer function over frequency.
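For the RC low-pass filter of the earlier example, the ENB can be computed numerically and compared with the closed form $B_N = 1/(4K)$ (a sketch of ours; $K$ and the integration grid are assumptions):

```python
import numpy as np

# Sketch: equivalent noise bandwidth of |H(f)|^2 = 1 / (1 + (2 pi f K)^2).
# Analytically, int_0^inf |H|^2 df = 1/(4K) and the peak gain is 1, so
# B_N = 1 / (4K).
K = 1e-3                                  # K = RC, seconds
f = np.linspace(0.0, 1e6, 2_000_001)      # fine grid well past the rolloff
H2 = 1.0 / (1.0 + (2 * np.pi * f * K) ** 2)

B_N = np.sum(H2) * (f[1] - f[0]) / H2.max()   # ENB = int |H|^2 df / peak
print(B_N, 1 / (4 * K))                   # both ~ 250 Hz
```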
PART – A
1. Explain WSS.
2. What is ergodicity?
3. Explain Parseval's relation.
4. Write in detail about white noise.
5. Write the condition for LTI systems.
PART – B
1. Write in detail about autocorrelation and cross-correlation.
2. State and prove the Wiener–Khinchine relation.
3. What is power spectral density? Write its properties.
4. Explain expectation, variance and covariance.