Sie sind auf Seite 1von 33

7.

Statistical Description

239

Random Processes

Then this probability system is called a random process. This definition is simply a more precise statement o f the earlier comments that the actual realization o f the random process is determined by a random selection o f an element from S. The collection o f all possible realizations {X(t, s): se S} = {X(t, sj, X(t, s ),...} is called the ensemble o f functions in the random process, where the elements in this set are the sample functions. (As given, S is countable, but 5 may be uncountable.) Sometimes it is useful to denote the possible values o f t by indicating that they are elements o f another set, T. The ensemble o f functions in the random process would then be given as {X(t s): re T seS}.
2 t f

7.1

Statistical Description

A random process or stochastic process is a function that maps all elements o f a sample space into a collection or ensemble o f time functions called sample functions. The term sample function is used for the time function corresponding to a particular realization of the random process, which is similar to the designation o f outcome for a particular realization of a random variable. A random process is governed by probabilistic laws, so that different observations of the time function can differ because a single point in the sample space maps to a single sample function. The value o f a random process at any given time t cannot be predicted in advance. I f a process is not random it is called nonrandom or deterministic. As in the treatment o f random phenomena earlier, it is convenient to consider the particular realization, o f an observed random process, to be determined by random selection o f an element from a sample space S. This implies that a particular element 5 E S is selected according to some random choice, with the realization o f the random process completely determined by this random choice. To represent the time dependence and the random dependence, a random process is written as a function o f two variables as X(t, s) with trepresenting the time dependence and s the randomly chosen element o f S. As was the case for random variables, for which the notation indicating the dependence on s was often suppressed, i.e., X was used instead of X(s), a random process will be written as X(t) instead o f X ( f , s) when it is not necessary to use the latter notation for clarity.
t

To help clarify these ideas, several examples o f random processes will now be considered. With the Bernoulli random variable X(t ) = X which takes on the values 0 and 1, and with t a time index, {X(t ,s):k = . . . , - 2 , - 1 , 0 , 1 , 2 , . , . , 5 = 0,1} or simply {X : k = . . . , - 2 , - 1 , 0 , 1 , 2 , . . . } (where the 5 variation is suppressed) is the ensemble o f functions i n a Bernoulli random process. There is an infinite number o f sample functions in this ensemble. Also, i f
k kt k k k

y * =
N

Y is a binomial random variable, and for N a time index the binomial counting random process is expressed as Y(t)= Y , NT<t<(N + l)T, N = 1, 2 , . . . , where T is the observation period. There is an infinite number of sample functions in the ensemble o f the binomial random process. A typical sample function for the binomial random process [(x x ,.. . ) ( 1 , 0 , 0 , 1 , 1 , 1 , 0 , 1 , . . . ) and ( y , , y , . . . ) = ( 1 , 1 , 1 , 2 , 3 , 4 , 4 , 5 , . . . ) ] is shown in Fig. 7.1.1. As shown, the process may increment by 1 only at the discrete times t = kT fc= 1,2,
N lt 2 2 k t

Y(t) 5 4 3 2 1 o I

. , , . |

A precise definition o f a random process is as follows: (a) Let 5 be a nonempty set, (b) Let P( ) be a probability measure defined over subsets of S, and (c) To each se S let there correspond a time function X(t, s).
238

L
T

i 5T

i 6T

i 7T

*8T

> 9T

2T

3T

4T

Fig. 7.1.1. Sample function for a binomial random process.

7.1 240 7 Random Prtcenes X(t) 5 4 . 3


I

Statistical Description

241

r .

Another example o f a random process is the sine wave random process given as X(t)= Vsin(fl/ + 9 )

2 1 0 I 0 |
LJ
1

2T

i 3T

4T

' ST

6T

i
7T

8T

1 t 97

Fig, 7,1 J , Sample /unction f o r i Poisson random process.

Another counting process is the Poisson random process, which counts the number o f events o f some type (e.g., photons in a photomultiplier tube) that are obtained from some initial time (often ( - 0) until time t. The number o f events obtained i n a fixed interval o f time is described by a Poisson random variable. I f N, denotes the number o f arrivals before time t, then this process is given as X(t) = N . A typical sample function for the Poisson random process is shown i n Fig. 7.1.2. As can be seen i n this figure, the Poisson random process differs from the binomial random process i n that it can be incremented by 1 at any time and is not limited to the discrete
t

where the amplitude V may be random (which is the case for amplitude modulation), the frequency Cl may be random (which is the case for frequency modulation), the phase 0 may be random (which is the case for phase modulation), or any combination o f these three parameters may be random. For the random process X(t) - cos(cu ' + 0 ) , where a> is a constant and 8 is a uniform phase with f (8) = 1/(2TT), 0 ^ 8 < 2ir, there is an infinite number o f sample functions (all of the same frequency and maximum value, with different phase angles), since there is an infinite number o f values in the interval from 0 to 2ir. The random process for a simple binary communication scheme is X( t) = cos(w / + 0 ) , where 0 has the value 6 = 0 or Q = TT, and consists o f the two sample functions x(t, S|) = cos(ft> 0 and x(t, s ) = cos(o* ( + 7r). This binary phase modulation or, as it is commonly called, phase shift keying is the most efficient (yields the smallest probability o f error) binary communication scheme. The last example o f a random process is a noise waveform which might distort the received waveform in a communication system. A typical sample function is shown in Fig. 7.1.4. The noise waveforms are greatly affected
0 0 B 0 0 2 0

X(t)

times t = kT, k = 1,2, A random process called a random telegraph signal can be obtained from the Poisson random process by taking the value at t ~ 0 to be either + 1 or - 1 with equal probability, and changing to the opposite value at the points where the Poisson process is incremented. A typical sample function of this random process is shown in Fig. 7.1.3. This telegraph signal can change values at any time.
k

Y(t) Fig. 7.1.4. Sample function for a noise random process.

2T

3T

4T

5T

6T

7T

'

8T

9T

-l Fig. 7.1.3. Sample function for a random telegraph signal.

by the filtering and other operations performed on the transmitted waveforms, so it is difficult to draw a general waveform. The most common assumption concerning the type o f noise random process is that it is Gaussian. Since a random process is a function o f two variables, t and s, either or both of these may be chosen to be fixed. With the fixed values denoted by

242

7 Random Processes

7.2

Statistical Averages

243

a subscript, these descriptions are (a) (b) (c) (d) X{t, X(t X(t, X(t,,
h

s)-X{t) is a random process. s)~X(tt) = X is a random variable. Sj) = x(t, Sj) is a deterministic time function or sample function. Sj) - x(t>, Sj) is a real number.
tl

(c) Operations on known random processes, e.g., filtering (d) Specification of the probability of a finite number of sample functions The most common simplifying assumption on random processes is that they satisfy some type o f definition o f stationarity. The concept o f stationarity for a random process is similar to the idea o f steady state in the analysis of the response o f electrical circuits. I t implies that the statistics o f the random process are, in some sense, independent o f the absolute value o f time. This does not imply that the joint statistics o f X^) and X(t ) are independent of the relative times t and t , since for most random processes, as *! approaches t X(t ) becomes more predictable from the value o f X(t,). This type o f behavior is readily satisfied, however, by allowing the dependence on time of these joint statistics to depend only on the difference between the two times, not on the precise value o f either o f the times. The first type o f stationarity considered is the strongest type. The statement that a random process is stationary i f its statistical properties are invariant with respect to time translation is strict sense stationarity. Precisely stated, the random process X(t, s) is strict sense stationary i f and only if, for every value o f N and for every set of time instants {t, T, i -1,2,..., N}
2 t 2 2t 2

For case (b) the time is fixed, so the possible values are the values that X(t) can take on at this one instant in time, which is completely equivalent to the description o f a one-dimensional random variable. Thus, X{t ) is a random variable and can be described by a probability density function. Similar interpretations hold for the other cases. Both parameters, t and s, o f a random process may be either discrete or continuous. I f the possible values X(t) can take on are a discrete set of values, X{t) is said to be a discrete random process, whereas i f the possible values are a continuum o f values, X{t) is said to be a continuous random process. The Bernoulli, binomial, Poisson, and telegraph random processes are discrete random processes, and the sine wave and noise random processes are continuous random processes. I f only values at the time instants < i , * j , . . . , t . . . are of interest, the random process has a discrete parameter or is designated as a discrete time random process (unless a parameter such as time is explicitly stated, the terms continuous and discrete describe the range o f the real numbers the random process can take on). Such random processes are common in sampled data systems. I f time takes on a continuum of values, the random process has a continuous parameter or is a continuous time random process. The Bernoulli and binomial random processes are discrete time random processes, while the Poisson, telegraph, sine wave, and noise random processes are continuous time random processes. The complete statistical description o f a general random process can be infinitely complex. I n general, the density function o f X(t ) depends on the value o f I f X{t) is sampled at N times, X = X ( r ) , . . . . X(t )) is a random vector with joint density function that depends o n t t , . . . , t . A suitable description can i n theory be given by describing the joint density function o f X (or joint probability mass function) for all N and for all possible choices o f t t ..., t . Such extremely general random process characterizations normally cannot be easily analyzed, so various simplifying assumptions are usually made. Fortunately, these simplifying assumptions are reasonable i n many situations of practical interest. Random processes can be specified in several ways:
t k t t T 2 N l t 2 N lt 2t N

FxOO.XUj)

X(l,v)(*!, X , . . . , X )
2 N

= F

x (

1 + T ) i X (

l + T )

( f + * ) ( * l , * 2 , *N)
N

(7.1.1) holds true for all values o f x , , x , . . . , x and all T such that (f, + T ) G T for all i. This definition is simply a mathematical statement o f the property, which has already been stated, that the statistics depend only on time differences. Such differences are preserved i f all time values are translated by the same amount r. The statistics o f the Bernoulli random process do not change with time, so the Bernoulli random process is strict sense stationary, whereas the statistics o f the binomial and the Poisson process do change with time and they are nonstationary. Strict sense stationarity is sometimes unnecessarily restrictive, since most of the more important results for real-world applications o f random processes are based on second-order terms, or terms involving only two time instants. A weaker definition o f stationarity involving the first and second moments w i l l be given shortly.
2 N

(a) Processes for which the rule for determining the density function is stated directly, e.g., the Gaussian random process (b) Processes consisting o f a deterministic time function with parameters that are random variables, e.g., the sine wave random process

7.2

Statistical Averages

It is often difficult to prove that a process is strict sense stationary, but proof o f nonstationarity can at times be easy. A random process is

244

7 Random Processes

7.2 Statistical Averages

245

nonstationary i f any o f its density functions (or probability functions) or any o f its moments depend on the precise value o f time. Example 7.2.1. given as X ( f ) - Y cos(a> r)
0 0

Example 7.2.2 i n itself does not prove or disprove that X{t) ~ cos(w r + 0 ) is stationary. That this random process is stationary is shown in Example 7.2.3.
o

Consider the sine wave process with random amplitude - o o < t < cc

Example 7.2.3. The sine wave random process o f Example 7.2.2 is shown to be strict sense stationary by first observing that
0 = cos" [X(r,)]- , r
a o 1 1

where w is a constant frequency and V is a random variable uniformly distributed from 0 to 1, i.e., / (y) = l
v

or

0 = 27r-cos" [X(r )]-fti i


1 o

From Example 3.3.2, the density function o f X ( r , ) is obtained as Osy^l


A(i,)(*i) = X - ^ -1 ^ x, 1

= 0

otherwise

v 1 - x\ = 0 otherwise
t

The mean o f the random process, where the expectation is over the random variation ( i n this case Y), is obtained as E[X(t)] = E[Yco8(oO]f
J -oo

y<Ms(w t)f (y)dy^


9 Y

f y cos(o) t)(\)
0

Since neither this density function nor the density function for X{t depends on f,, fx(t )(x )=f
l 1 x(tl+T}

+ r)

dy

JO
0

(x )
l

= COS(Q)OO f y d y = i c o s ( a > r )
Jo

where t is a constant with respect to the integration. Since the first moment is a function o f time, the process is nonstationary, O Now consider the sine wave process with random phase.

for all T and f,. Knowledge that X ( i , ) = x , specifies the value o f the random variable 0 and correspondingly the sample function o f the random process X{t). Given that X ( / , ) = x , , the value a, that will be observed at time f, is therefore not random, but depends only on x and the time difference ( i , - f i ) , and is given as
t

a, = cos[ft> (i, - (,) +


0

cos"'(*i)]

or

a, = c o s [ w ( ' f
0

'i) +

2TT - c o s " ' ( x i ) ]

Example 7.2.2.

Consider the sine wave process given as The conditional density function of X(t )
t

given X(t )
%

is then

X(t)
0

= cos(w '+0)
0

-co < r < oo

fxiu)\Xi ){x,Ix )
U t

= S(

Xi

-a,)

i = 2, 3 , . . . , N

where again w is a constant and the density function o f 8 is

and does not depend on the time origin. Also, the conditional density function o f X ( ( , + T) given X ( ( I + T ) , fxu,+T)\xit +T)(Xt I * , ) = 8(x, - a,)
t

all T and (,

i = 2,3,..., N

= 0 H i e mean o f X(t) E[X(t)] is obtained as


o

otherwise does not depend on the time origin. Thus for X (t) = ( X ( r , ) , X ( r )
2 T

X(t ))
N

= [cos(w r+0)]=
Joo

cos(o> t+B)M$)dd
0

and X ( t + T ) = ( X ( r + T),JV(r + T ) , . . . , A ' ( r + T ) )


I 2 w T

= I
j

[cos(<M) co&(6)-sin(,0
I"
0

sin(0)]{j^j dB
| S-2ff1
0

it follows that
N
J

-2ir

= cos(o r) sin(fl) +sin(w () 2ir L 9-o


0 0

cos(0)\

I e-o

/x(o(x)=/x(,,)(x ) n
1

5(x -a )=/
i i

X ( t + T )

(x)

(-2

= ~ [cos(a) ()(0 - 0) + s i n ( o i ( ) ( l - D ] - 0 2 IT

which shows that X(t)

is strict sense stationary.

246

7 Random Process**

7.2 Statistical Averages

247

The complete set of joint and marginal density (or probability) functions give a complete description o f a random process, but they are not always available. Further, they contain more information than is often needed. Much o f the study o f random processes is based on a second-order theory, for which only the joint statistics applicable for two different instants o f time are needed. These statistics are adequate for computing mean values of power (a second-order quantity, since it is based on the mean square value). The frequency spectra are also adequately described by these secondorder statistics. A n important random process that is completely described by second-order statistics is the Gaussian random process. A situation where the second-order statistics are useful can be observed by evaluating the mean square value o f the random output o f the linear system shown i n Fig. 7.2.1. The random output o f the linear system is
X(t) ^ Y(t) .

As can be seen, the autocorrelation function o f the input to a linear system is needed to compute the mean square value o f the output. Closely related to the autocorrelation function is the covariance function, which is a direct generalization o f the definition o f covariance. The covariance function is defined as ^ ( ' , , ( ) = Cov[A-(/ ),A'(( )] = { [ X ( / ) - H ( X ( r ) ) ] [ X ( t ) - ( A ' ( r ) ) ] }
2 1 2 I 1 2 2

=
lt 2 x

R (tut )-E[X(t )]E[X(t )]


x 2 l 2 x

= RM t )-m (ti)m (t2)


x

(7.2.2)

where m (t) = E[X(t)] is the mean function. The covariance function for a zero mean random process is identical to the autocorrelation function. Example 7.2.4. Consider the sine wave process of Example 7.2.2 where X(t) = cos(io t + &) w i t h / ( 0 ) = l / ( 2 i r ) , 0=s 8<2n. The autocorrelation is given as
0 e

h(t)

Fig. 7.2.1. Linear system with random input.

RxOi. ' 2 ) = E[X(t )X(t )]


y 2

= [ c o s ( w * , + 0 ) cos(& / + 0 ) ]
0 0 2

obtained i n terms o f the convolution integral o f the random input and the impulse response o f the system as YXO- [
J -00

which is expressed i n terms o f only one integral, since there is only the one random variable 0 , as (
Jo

X(u)h(t-u)du

R Ui,
x

t )=
2

cos(oj t

0 l

+ e)cos{(o t +e)(~\
0 2

de \2ir/

The mean square value is then obtained as


E[y (t)] =
2

B[[J
C

1 X(u)h(t-u)du J" X(u)X(v)h(t-u)h{t-v)dudv} =

2 w

lcos((0 t +a> t
o I

o 1

+ 28) +

cos(a> t -a t )]de
o 2 /Q l

= E{\

= cos[ft>o('2-ri)]

Talcing the expectation inside the double integration and associating it with the random quantity yields E[Y {t)]
2

= f
J -co
J

I
oo

E[X(u)X(v)]h(t-u)h(t~v)

dudv

The term E[X(u)X(v)] i n this mean square value is a fundamental quantity in the consideration o f random processes. I t is called the autocorrelation function o f the random process X{t) and is defined, for the time instants r, and * , as
2

R {tx,t )=E[X(t )X(t )]^


x 2 1 2

xo )(x ,x )dx dx
2 l 2 l

(7.2.1)

where the trigonometric identity cos(A) cos(B) = [cos(A + B ) + c o s ( A - B ) ] / 2 was used to help evaluate the integral. (The first term i n the integral becomes the integral o f a cosine over two periods, which is zero, while the second term does not involve 8 and yields 2TT times the constant value.) I n Example 7.2.4 R (tt, ' 2 ) depends only on the difference between the two time instants, t - t . Strict sense stationarity, which states that density or probability functions are invariant under translations o f the time axis, implies that for a strict sense stationary random process the autocorrelation function depends only on the difference between t and (,. The converse of this is not necessarily true; i.e., the autocorrelation function depending only on time differences does, not necessarily imply strict sense stationarity (for an important class o f random processes, Gaussian random processes, the converse is true, however). A n example i n which the autocorrelation
x 2 x 2

248

7 Random Processes

7.2 Statistical Averages

249

function depends only on the time difference and the process is not strict sense stationary is now given. Example 7.2.5. functions Consider the random process X(t) with the sample x(r, s,) - cos(i) x( t, s ) ~ sin( r)
3

The random process o f Example 7.2.5, which has four sample functions, is wide sense stationary even though i t is not strict sense stationary. It was shown i n Section 4.4 that a linear combination of Gaussian random variables is a Gaussian random variable. I n a similar manner, the random variables X , , X ,..., X are jointly Gaussian i f and only i f
2 N

x(r, s ) = -cos(f)
2

x( t, 5 ) = - s i n ( t) is a Gaussian random variable for any set o f g,'s. Now i f X{t) is a random process and gU)X(t)dt (7.2.3)

which are equally likely. It can readily be determined that

m U) = E[X(t)]=\ix(t,s )
x

4 (-i

=0

and R {ti>t )
x 2

where g ( f ) is any function such that Y has finite second moment, then X(t) is a Gaussian random process i f and.only i f Y is a Gaussian random variable for every g(t). = E[X(t )X(t )]=\
l 2

X(i i,)X(i ,5 )^cos(i -i )


l f A l 3 1

which illustrates that the autocorrelation function depends only on t - t . That X(t) is not strict sense stationary can be shown by observing that at U=0
2 x

A direct consequence o f this is that the output o f a linear system is a Gaussian random process i f the input is a Gaussian random process. To show that this is true, consider the output o f a linear system with impulse response A(f) and input X(t) which is a Gaussian random process. The output is given as Y(t) = [ J -co which is a Gaussian random process i f h(t~u)X(u)du

fx M
W

*Ja(x, + n+J*(x,)+l*(x, -1)

and at t ~ t + r = 7r/4
2 x

Z=\
JOS

g{t)Y{t)dt

or
/x(l >(*l)?*/x(l +r>(Xl)
I l

is a Gaussian random variable for a l l g(t). Substituting for Y(t) and interchanging the order o f integration yields

Z =

As stated previously, a large part o f the study o f random processes is built on the study o f the autocorrelation function and related functions [the mean function m (t) is also important i n such studies]. For such analyses, only a form o f stationarity which guarantees that the functions actually used depend on time differences is really needed. This form is considerably weaker than strict sense stationarity and leads to the definition o f wide sense stationarity. A random process X(t) is wide sense stationary i f the mean function m (t) = E[X(t)] does not depend on t [m (t) is a constant] and R (t , t) = R (T) is a function only o f r = t - t . I f X{t) is wide sense stationary, R (r) = E[X(t)X(t+T)] for all (.
x x x x x 2 x 2 x x

gU)

[\ ~
H{t

u ) X ( u ) du d t

=j where

[|

gU)h(t-u)

dt^X(u)du

=j

g'(u)X{u)du

'()= I

g{t)h{t-u)dt

Since X(t) is a Gaussian random process Z is a Gaussian random variable

250

Random Processes

7.2

Statistical Averages

251
3

for all g'(t), and thus, since Z is a Gaussian random variable, Y(t) is a Gaussian random process. Another consequence o f the definition o f a Gaussian random process, given in Eq. (7.2.3), is that i f X(t) is a Gaussian random process the N random variables X{t ) X(t ),..-,X(t ) are jointly Gaussian random variables [have an N-dimensional Gaussian density function as given i n Eq. (4.4.1)]. This is shown by using
x y 2 N

random vector X = (X(t ),X(t ),X(t )), given as


x 2 3

where r, = 0, t =i
2

and t =i

is

1 2,= 0.413 -0.212

0.413 1 0.191

-0.212' 0.191 1

*(0 !*8(-'i) i n Eq. (7.2.3), Frequently this property is used as the definition o f a Gaussian random process; i.e., a Gaussian random process is defined as a random process for which the N random variables X ( * i ) , X ( f ) , . . . , X(t ) are jointly Gaussian random variables for any N and any f,, ( , . . . , t . Even though this statement is straightforward, it is easier to prove that the output of a linear system is a Gaussian random process i f the input is a Gaussian random process by using the definition given here rather than using this last property. The Gaussian random process is important for essentially the same reasons as Gaussian random variables; it is appropriate for many engineering models or, using the central limit theorem, i f it is obtained from the contributions o f many quantities, e.g., electron movement. I n addition, analytical results are more feasible for the Gaussian random process than most other random processes. Just as a Gaussian random variable is specified by the mean and the variance, a Gaussian random process is specified by knowledge of the mean function m (t) and the covariance function K {t t) [or equivalently by the mean function and the autocorrelation function R (ti,t )]. I f X ( f ) is wide sense stationary, m {t) = m a constant, and R (T) is a function only o f the time difference r=t -t . From Eq. (7.2.2), K (T) also depends only on the time difference. Since the two functions that specify a Gaussian random process do not depend on the time origin (only on time difference), strict sense stationarity is also satisfied. Thus, wide sense stationarity implies strict sense stationarity for a Gaussian random process. I n general, strict sense stationarity implies wide sense stationarity, since wide sense stationarity involves only the first two moments. The converse is not necessarily true; i.e., wide sense stationarity does not necessarily imply strict sense stationarity for a general random process, although the converse is true for the special case o f a Gaussian random process.
2 N 2 N x x u 2 x 2 x xt X 2 y x

The covariance matrix for a Gaussian random vector can be generated for any set o f sample values using the autocorrelation function R ( T ) . Another important concept in the study o f random processes is that o f ergodicity. A random process is said to be ergodic i f ensemble (statistical) averages can be replaced by time averages in evaluating the mean function, autocorrelation function, or any function o f interest. I f the ensemble averages can be replaced by time averages, these averages cannot be a function o f time, and the random process must be stationary. Thus, i f a random process is ergodic it must be stationary. The converse o f this is not necessarily true; i.e., a stationary random process does not necessarily have to be ergodic. For an ergodic random process an alternative way to obtain the mean function o f the random process X{t) is by averaging X ( r ) over an infinitely long time interval as
X

m,= lim-!- |

X(t)dt

(7.2.4)

and the autocorrelation function can be obtained by averaging the value of X(t)X(t + T) over an infinitely long time interval as R (r)=lim-~j
x

^X(t)X(t

+ r)dt

(7.2.5)

Ergodicity is very useful mainly because it allows various quantities such as the autocorrelation function to be measured from actual sample functions. In fact, all real correlators are based on time averages, and as such most of the random processes in engineering are assumed to be ergodic. A n example o f a random process that is stationary but not ergodic is now given. Example 7.2.7. functions Consider the random process X(t) X(r,j,) = + i X(r,* ) = - l
2

with the sample

Example 7.2.6. I f X(t) is a stationary Gaussian random process with m (t) = 0 and R (T) = sin(7TT)/(7rT), the covariance matrix o f the Gaussian
x x

A l l the statistical properties o f X{t) are invariant with respect to time and X(t) is strict sense stationary. With both sample functions equally likely m ( ( ) = 0, while the time average o f X(t) is + 1 for the first sample function and - 1 for the second sample function, and X{t) is not ergodic.
x

252

7 Random Processes

7.2 Statistical Averages

253

Now consider the calculation o f the mean function and autocorrelation function o f a more involved random process. Example 7.2.8. Determine the mean function and the autocorrelation function for the random process (random telegraph signal) o f Fig. 7.1.3, with the assumption that P [ X ( 0 ) - 1] = P [ X ( 0 ) = - 1 ] - i With the parameter of the Poisson random variable of Eq. (2.3.10) a = At, the probability function o f the number o f points in the interval of length T, n ( T ) , is given as *-*'f AFV
P [ n ( T ) = l ] =

H i e autocorrelation function is then obtained as **(J)-(1)(1)<1) e- cosh(A5) + ( l ) ( - l ) ( i ) e- sinh(As) + ( - l ) ( l ) ( i ) e- 'sinh(As) + ( - l ) ( - l ) ( i ) - c o s h ( A 5 )


e A Al Ai A,

e~ '[cosh(As) - s i n h ( A s ) ] = e~ ** or expressed i n terms o f T K ( ) = x T e 2 A

-O0<T<00

i _ p .

Thus the random telegraph signal is a wide sense stationary random process. Some properties o f autocorrelation functions o f wide sense stationary random processes w i l l now be given. Property 1. 11,(0) = E[X (t)] which is the mean-square value. Property 2.
2

which describes the number o f times the random telegraph signal X(t) changes sign in the interval o f length T. Using this probability function, P [ X ( t ) = 1 | X ( 0 ) 1] = PMT) = 0] + P [ ( T ) = 2 ] + -

[l + ^ + - - - ] - r " c o . h ( A ) and P [ X ( 0 = l|X(0) = - l ] = P[n(r) = l ] + P[n(T) = 3 ] + - - ' = e~ [ k t + ^ + ] = e' Then P[X(t) = 1] = P[X(t) = 1 | X ( 0 ) - 1]P[X<0) = 1]
! kt M

= ^ x f (x) J -co
xw

dx

(7.2.6)

sinh(Ar)

R (T)
X

= R (-T)
X

(7.2.7)

or the autocorrelation function is an even function o f T. This can be shown as RAr) RA*i, + r) = E{XUJXOt
x

+ P[X(f)= l|X(0)--l]P[X(0) =- l ] = e~ and P[X{t) obtained as m (t) = E [ X ( ( ) ] - ( O P [ X ( r ) = l ] + ( - l ) i [ X ( 0 = - l ] = 0


3e > kt

+ r ) ] = E[X{t

r)X(t )}
t

cosh(A()(i) + e~ = l] = i

kt

sinh(At)(i) - J

= Rx((l + T,(,) = J? (-T)

= ~\) = l-P[X{t)

Finally, the mean function is

Property 3. \R (T)\XR (0)


x x

(7.2,8)

or the largest value o f the autocorrelation function occurs at T = 0. This can be shown by considering 0 s E { [ X ( / ) X ( / + r)] } = [ X ( / ) 2 X ( ( ) X ( / + r) + X ( / , + r)]
J I I I 1 l 2 2 2

With T = f - f , and S = \T\


2

P[X{t )
2

=* l | X ( r . ) = 1] = e~ = 1, X ( t , ) = 1] = \e"

Ks

cosh(As) cosh(As)

= [ X ( ( ) ] 2 [ X ( ( ) X ( r + r)] + [X (/ -r-r)]
t I ] 1

and P[X{t )
2 As

RA0)2R (T)+R (Q)


x X

Similarly, P [ X ( r ) 1, X ( t , ) = - 1 ] = \ e '
2 K s

which yields sinh(As) sinh(As) or the desired result \R (T)\*R (0)


x x

P[X(t )
2 2

-1, X(t.) = 1]=\e~


k

x t

P [ X ( f ) = - 1 , X ( ( , ) = - 1 ] = | e~ ' cosh(As)

254

7 Random Processes

7.3

Spectral Deasttv

255

Property 4. I f Y(t) = X(t) + y , with y a constant (dc component) and E[X{t)] = 0, then R (r) = R (r) + yl (7.2-9)
0 0 y x

This can be shown as K ( T ) = [ y ( r ) V ( ' i + '-)] = E { [ X ( ( ) + >'o3[X(t + T) + yo]}


y I l 1

= E[X(t )X(t
l l

+ T) + y X(t )
0 l

+ y X(t
(t l

+ T) + y o\
0 i

It can readily be seen that the autocorrelation function of the random telegraph signal satisfies these properties, i.e., mean square value of 1, even function o f T, maximum at T = 0, and E [ X ( * ) ] = 0. Consider the functions of Fig. 7.2.2 as possible autocorrelation functions. Figure 7.2.2a cannot be an autocorrelation function since g(0) is not the maximum value, while Fig. 7.2.2b cannot be an autocorrelation function since g(r) is not an even function. Figure 7.2.2c cannot be an autocorrelation function since neither is g(0) the maximum nor g(r) an even function.

= E[X(t )X(t
=

+ T)] + y E[X(t ))
Q

+ y E[X(t

+ T)] + yl '

R (T)+0+0+yl=*R (T)+yl
X X

which is the desired result. Property 5. I f K(r) = X t O + yoCOS^f+ 0 ) , with y and w constants, 0 a random variable uniformly distributed from 0 to 2TT, and 0 and X{t) statistically independent for all t, then
0

Fig. 7.2.2. Possible autocorrelation functions: (a) rectified sine; (b) exponential; (c) sine.

R ( T ) = ^ ( r ) + ( ^ cos(o>r)
Y y

(7.2.10) Example 7.2.9. Consider the autocorrelation function given as K * ( T ) = 100e"


,0|T,

or i f Y(t) has a periodic component, then R (r) will also have a periodic component with the same period. This can be shown as R ( r ) = E { [ X ( t ) + >'oCOS(a>t + 0 ) ] [ X ( r + T) + yoCOs(I + wT + 0 ) ] }
y I 1 ) I

+50 cos(20r) + 25

= E [ X ( ( ) X ( ( + T ) ] + y E [ X ( t + r)]E[cos(o*f + 0 ) ]
l 1 o t I

The mean, mean square value, and variance of this random process will be determined by using the properties of the autocorrelation function. The mean square value is obtained as E [ X ( 0 J = RM and using {E[X(t)]} =
2 2

+ y [ X ( r ) ] E [ c o s ( w r , + 6>T + 0 ) ]
o 1

= 100+50+25 = 175

+ylE[cos(a>t
= R (T)
X

+ 0 ) cos(ojfi + <ar + 0 ) ]
E[COS(2W(, + WT + 2 0 ) + COS(WT)]
2

+ Q+0+~
2

lim ^ * ( r ) |
|rl-ac

w i l h

period^

t e r m r e m o v e d

= 25

* , ( T ) + ^ [ 0 + CO(T)] = R,(T) + ^ C M ( T )

the mean is E [ X ( f ) ] - 5 . From this the variance is obtained as V a r [ X ( 0 ] = E [ X ( 0 3 * { [ X ( ( ) ] } = 175-25 = 150 (7.2.11) 7.3 Spectral Density
2 2

which is the desired result. Property 6. I f X ( ( ) is ergodic and has no periodic component, lim R {r) = {E[X(t)}}
x 2

|T|-TO

or the mean ( m e a n ) can be obtained from the autocorrelation function as r goes to infinity. Conceptually X ( r , ) and X ( / , + T) tend to become statistically independent as r goes to infinity. Thus, lim K ( T ) = l i m E t X t O X O . + r ) ]
X

|f|-*O0

|T|-*OO

= lim E [ X ( ( , ) ] E [ X ( r , + T ) ] = { [ X ( ( ) ] }
|T|H.CO

since the mean is a constant.

As in the deterministic (nonrandom) case, the spectral content of a random process, i.e., the strength o f the random process in different frequency bands, is an important consideration. I f X(t) is a deterministic signal (voltage or current), the Fourier transform o f X(t) transforms this signal from the time domain to the frequency domain with the resultant being an amplitude (voltage or current) distribution of the signal in different frequency bands. Now i f X ( r ) is a random process the Fourier transform

256

7 Random Processes

7.3 Spectral Density

257

of X(t) transforms this into the frequency domain, but since the random process is a function o f both time and the underlying random phenomena, the Fourier transform is a random process i n terms o f frequency (instead of time). This transform may not exist and even i f it does a random process does not yield the desired spectral analysis. A quantity that does yield the desired spectral analysis is the Fourier transform (taken with respect to the variable T, time difference, o f the function) o f the autocorrelation function o f stationary (at least wide sense stationary) random processes. This Fourier transform is commonly called the power spectral density o f X{t) and denoted S (f) [ i t may also be written as S (a) where w = 2nf]. Thus, S (f) is given as
K x x

Example 7.3.1. Determine the power spectral density for the random telegraph signal o f Fig. 7.1.3, where the autocorrelation function was determined in Example 7.2.8. The power spectral density is then determined as S (f)=
x

["
J OO

e ^ e ^ d r
Coo
e

i2A-J2vf)T
Jo

-(2A+/2ir/)T

1 2\-j2wf

1 2A+j2ir/

4A (2TT/) +4A'
2

Several properties o f power spectral densities will now be given. S (f)=


x

R (r)
x

e~ '

J2 fr

dr

(7.3.1a)

Property 1. S (f)^0
x

(7.3.6)
x

with the inverse Fourier transform given by K*(r) = j " S (f)


x

J2irfr

df

(7.3.1b)

Evaluating Eq. (7.3.1b) at T = 0 gives *x(0)= [ S (f)df=E[X {t)]


x 2

That this is true can be observed by assuming that S (f)<0 for some frequency band. Then integrating only over this frequency band will yield a negative power, which is impossible, and thus S (f) must be nonnegative. This can also be observed from the alternative form o f the spectral density, since it is an average o f a positive quantity.
x

(7.3.2)

Property 2. S (-f)
x x

J -a

= S (f)
x

(7.3.7)

which justifies the name o f power spectral density, since the integral over frequency yields a power (mean square value). The power spectral density then describes the amount o f power in the random process i n different frequency bands. [More precisely, it describes the amount o f power which would be dissipated i n a 1-0 resistor by either a voltage or current random process equal to X(t).] I f X(t) is an ergodic random process Eq. (7.3.2) can be written as

or S (f) is an even function o f frequency. This can be shown by expressing the exponent o f Eq. (7.3.1a) as e~ to yield S (f)=
x x J2irfr

= cos(27r/T) - j

sm{2nfr)

j
J-

Rx(T)[cos(2ir/T)-7sin(27r/T)]dT= f
J_co

R (T) COS(2IT/T) dr
X

J>a>

(7.3.3)

which is equivalent to the definition o f total average power (which would be dissipated i n a 1-H resistor by a voltage or current waveform). For X(t) an ergodic random process an alternative form o f the spectral density is
T-*oo 2T

since R (T) is an even function o f T, sin(2ir/r) is an odd function o f T, the product R (T) sin(2ir/r) is an odd function, and the integral o f an odd function is 0. Finally, cos(2irfr) is an even function off, which makes S (f) an even function of /
X x

Example 7.3.2. Determine E[X (t)] and R (T) for the random process X(t) with power spectral density S (f)-l/[(2ir/) +0.04], -oo</<oo. Using Eq. (7.3.2) and J [ l / ( t > + c ) ] dv = (1/c) taxT^v/c)
X 2 x 2 2

where F (f)
x

^j;^*-G^) (S)'-i
,
(7.3.5) = 2.5

/=co

=j"

X(t) e~ 'dt

i2irf

258

7 Random Processes

7.4

Linear Systems with Random Inputs

259

Since the power spectral density S {f) is o f the same form as the spectral density of Example 7.3.1, the autocorrelation function is of the form
x

R (T)
x

= a e~

fcW

-oo < T < oo

The power spectral density, from Eq. (7.3.1a), is expressed as

function, i n reality, has meaning only after it has been passed through a system with a finite bandwidth (integrated), white noise has meaning only after it has been passed through a system with a finite bandwidth. As long as the bandwidth of the noise process is significantly larger than that of the linear system, the noise can be considered to have an infinite bandwidth. Example 7.33. I f the thermal noise voltage in a 1-11 resistor were measured with a digital voltmeter of bandwidth 100 kHz, this voltage, from Eqs. (7.3.10) and (7.3.2), would be 7(7.946 x 10" )(2 x 10 ) = 0.040 / t V (rms), while with a 30-MHz oscilloscope a noise voltage of 0.690 would be measured. These are the values that were stated in Section 1.1.
21 s

Setting this equal to the given S {f)


x

yields a = 2.5 and b = 0.2. Thus,


2W

^ ( r ) = 2.5e-a n d ^ ( 0 ) = 2.5 = E [ X ( 0 ] .
2

-oo<r<oo

A noise random process is said to be white noise i f Us power spectral density is constant for all frequencies, i.e., i f S(/) = y -oo</<co (7.3.8)

The autocorrelation function of a white noise process theoretically can be obtained by taking the inverse Fourier transform of S(f) given in Eq. (7.3.8), using Eq. (7.3.1b), as ^ ^ d f = ^ e^df -co 2. 2 J -cc. This integral cannot be evaluated directly, but is equal to an impulse function at T = 0. Thus,
e

*(r)= f
J

where the division by 2 is used when both negative and positive frequencies are considered. Using Eq. (7.3.2), the power i n a white noise process can be seen to be infinite, which is impossible in the physical world. But the concept o f white noise is important. From quantum mechanics, the power spectral density of thermal noise (noise voltage due to the random motion of electrons in conducting material with resistance R ) is given as

K(r)=^S{r)

(7.3.11)

where k (the Boltzmann constant) = 1.37 x 10" , h (the Planck constant) = 6.62 x 10" , and T is temperature in kelvins (standard or room temperature T = 63F = 290 K ) . The maximum value o f S ( / ) is 2RkT at f=*0, which can be obtained by a limiting process. For \f\ = Q.\(kT /h) = 6x\Q Hz, S (f) has only dropped to 0.95 of its maximum value, and thus S (f) is essentially constant for | / | < 6 x l 0 H z [for | / | = 0.01(k7o//i) = 6 x 10 Hz, S {f) equals 0.995 of its maximum value]. Even though thermal noise is not strictly a white noise process, it appears white (constant power spectral density) over most frequencies of interest i n engineering ( | / | < 6 x 10" Hz). I n engineering applications, then, the power spectral density of thermal noise is given as
34 0 il 0 H n M 10 n

23

That this is correct can readily be seen by taking the Fourier transform of R(T) to obtain S(f) [putting Eq. (7.3.11) into Eq. (7.3.1a)]. I t is not uncommon in Fourier transforms to be easy to obtain the transform (or inverse transform) and be difficult to obtain the inverse transform (or transform). Thus, in many cases the inverse transform is obtained by recognizing the form of the transform (or vice versa). The form of the autocorrelation function for a white noise process indicates that white noise is uncorrelated for T / 0. Also, from Eqs. (7.3.11) and (7.2.11), the mean of a white noise process

m(0 = 0.

7.4

Linear Systems with Random Inputs

S(f) = 2RkT = 7.946x l O " *

2 1

- 6 x 10" < / < 6 x 10"

(7.3.10)

The use of a white noise random process is similar to the use of an impulse function in the analysis of linear systems. Just as an impulse

As shown i n Section 7.2, the output of a linear system is a Gaussian random process i f the input is a Gaussian random process. Thus, the output of a linear system, when the input is a Gaussian random process, can be specified by determining the mean function and the autocorrelation (or covariance) function of the output. For the linear system of Fig. 7.2.1, with impulse response h(t), where the mean function and autocorrelation

260

Random Processes

7.4

Linear Systems with Random Inputs

261

function o f the input random process, are m (t) and respectively, the mean function of the output is obtained as
x

R (t t )
x tt 2

reduces to R (tx*h)=
y

K(h-v-(t -u))h(u)h(v)dudv
x

m, ,(r) = [ y ( t ) ] = - E [ j * - ( ) / ! ( ( - ) d u j = j or

E[X(u)]h(t-u)du
o r

J -OO
poo

J -OO

(7.4.4)
r
00

K,(T)=

R (r~v
x

u)h(u)h(v)dudv

f
E =

oo

Too

J OO J 00

m (u)h(t-u)
x

du = J
-00

m (t-u)h(u)
x

du

(7.4.1)

-00

Likewise, the autocorrelation function o f the output is obtained as

Thus, Y(t) is wide sense stationary i f X(t) is wide sense stationary. The power spectral density of the output of the linear system can be obtained by rewriting Eq. (7.4.4) as
j* c r<x> r c o o

W = [j [I X((,-u)/i(u)du| 1 X(i -w)A()rfwj


2

=
J-nO J - 0 0 J OO

S {f)
x

e " - h(u)h(v)

J7

f(r

D+u)

dfdudv

and interchanging the order of integration R (r)


y

X(f,-u)X(t -iOM)*(t>)<*u<*i>]
2

=j

S (f)
x

h(u) ei "*

<*][

Hv) e~

J2lTfv

dvj e

}2wfr

df

[ X ( ( , - u ) X ( t - v)]h(u)h(v)
2

du dv

-00

Recognizing the integral with respect to v as H{f) and the integral with respect to u as H*(f) [ i f h(t) is a real function], this reduces to R (r)
y

-0

or R Ui,t )
y 2

= I
J -OO

S (f)\H(f)\
x

e"

J2

fT

df= [ S (f)
y

e"

J2

fr

df

= \
J
-00

I
J -co

R (t -u t -v)h{u)h(v)dudv
x i t 2

(7.4.2)

J -oo

Setting the integrands equal, since Fourier transforms are unique, yields S {f) = S (f)\H(f)\ (7.4.5) Example 7.4.1. Consider a white noise process, X(t), as an input to an ideal low-pass filter, whose transfer function is shown in Fig. 7.4.1. With
y x 2

The relationships of Eqs. (7.4.1) and (7.4.2) involving the mean function and the autocorrelation function are valid for any random process that is the output of a linear system. They have special meaning, however, i n the case of a Gaussian random process, since a knowledge of these functions is sufficient to specify the random process. For the special case of X( t) being wide sense stationary, m ( t) = m and R-xOi, h) = K * ( T ) , where r - r - r,. I n this case, the mean function of Y( (), Eq. (7.4.1), reduces to
x x 2

/Mr)=^5(r) and S*(/) = y -oo</<oo

m {t) = j
y

r
J -a
x

m h(u)du
x

H(f) 1

or
y

m (t) = m

h(u)du~m

(7.4.3)
w o w

which is a constant. Also, the autocorrelation function of V ( r ) , Eq. (7.4.2),

Fig. 7.4.1. Ideal tow-pass filter.

262

Random Processes

7.4

Linear Systems with Random Inputs

263

the output power spectral density, from Eq. (7.4.5), is obtained as No -W<f< W

h(t). The output power spectral density is obtained as S (f)


y

= SAf)\H(f)\

N /2
0

-co < f

<

CO

and the output autocorrelation function, from Eq. (7.3.1b), as


- O O < T<O0

and the autocorrelation function expressed as

which can be recognized as (or using a Fourier transform table, not evaluating the integral)
Ry{T)=

From the output autocorrelation for the ideal low-pass filter it can be seen that this filter has correlated the noise process. The mean square value or power in the output process is given as R (0) - t V N , which is finite as expected.
y 0

Mik -2^\
e

<

<

Example 7.4.2. Consider the RC low-pass filter shown in Fig. 7.4.2. Again, the input is a white noise process with power spectral density N / 2 .
0

The RC low-pass filter correlates the noise process, and the mean square value o f the output process is given as R (0) = 2irf N /4.
y c 0

As a final item consider a random process as the input to two linear systems as shown in Fig. 7.4.3. I f the input random process X{t) is Gaussian,
Y(i)

X(i)

V(t) X(t)

h (t)
y

Fig. 7.4.2. RC low-pass filter.

z<0

The transfer function o f this filter can be obtained, using a voltage divider (ratio o f the impedance o f the capacitor to the sum o f the impedances o f the resistor and the capacitor), as \/{)2irfC) R + l/ijlirfC) or letting f =
e

Fig. 7.4.3. Linear systems with common input.

1 \+j2irfRC

then Y(t) and Z(t) are jointly Gaussian random processes. The crosscorrelation o f two random processes is defined in a manner similar to the autocorrelation. The cross-correlation of Y(t) and Z(t) is given as RyAty,t2) = E[Y{t )Z{t )}
x 2

(7.4.6)

l/{2wRC) H(/) = 1 l+Jf/fc


-OO < / <

For the Y(t) and Z ( f ) given in Fig. 7.4.3 the cross-correlation function is given as
00

RyAh ,t ) = E
2

X(U - u)hAu) du |

X(t ~
2

v)hAv)

dvj

The impulse response o f this filter can be shown to be h{t) = 2nf e- <'
c 2 f

(i=0 otherwise

which can be expressed in terms o f the autocorrelation function o f the input as


|*oo
Too

= o H(f)

could also have been obtained by taking the Fourier transform o f

Ry*(U,h)=
J
- 0 0 J 00

RAti-u,

1 -v)hAu)hAv)dudv
2

(7.4.7a)

264

7 Random Processes

Problems

265

I f X(t) is wide sense stationary the cross-correlation function is a function only o f t - * i = T as


2

this zero response to a constant implies |


J 00

I* CO

1 oo *

h {u)du=0
y y

R (r)
yz

=
J CO

R (r-v
x

+ u)h {u)h (v)


y t

dudv

(7.4.7b)

J oo
y i

When the cross-correlation function R ( r ) is a function only o f T and E[ Y{t)] and E[Z(t)] are constants, V ( r ) and Z{t) are defined to be jointly wide sense stationary. The cross-spectral density ( i f it exists) is defined as the Fourier transform of the cross-correlation function and written as S (f)=r
yz

and using this i n Eq. ( 7 . 4 . 3 ) yields m (t) = 0. Thus, the covariance between Y(t) and Z{i) is zero, and i f X{t) is a Gaussian random process, Y(t) and Z(t) are statistically independent. The covariance (or cross-covariance) function o f the random processes Y(t) and Z{t) is defined, similar to Eq. (7.2.2), as K {t t )
yz u 2

= Cov\_Y{t )Z{t )^E{[Y(


x 2

t )-E(
x z 2

Y(t ))]\_Z(t )
x 2

E{Z(t ))]}
2

R Me- " dT
y

i2

fT

(7.4.8a)

= Ry*('i,

'2) -

m ((,)m (t )
y

(7.4.10)

J -00

Likewise, the cross-correlation function is the inverse Fourier transform of the cross-spectral density and given as RyM=\ S {f)e ^df
yz }2

Two properties o f the cross-correlation function o f the jointly wide sense stationary random processes Y(t) and Z(t) are R (T)
yz

= R (-r)
zy

(7.4.11a) (7.4.11b)

(7.4.8b)

and \R (T)\^y/R (0)R (0)


yz y z

The cross-spectral density o f Y(t) and Z ( r ) can be obtained by putting Eq. (7.3.1b) into Eq. (7.4.7b), which yields R Ar)=
y

The first property follows from RyM = E[Y(t )Z(t


l l

+ T)-\ = E[Z(t

+ r)Y(t )]
l

R (-T)
2y

IT
L
J

S (f)e ^~ Uf\h (u)h {v)dudv


x y z

j2

v+u

J OO J OS

o o

and interchanging the order o f integration


RM y =

11
J

s f A )

[II

rf
e

ky(u)

**

fu

"][fL ~
z

hM 2 / d v e iw v

]
J2

d f

and the second property follows from Eq. (4.6.1) with X and Y replaced by Y(t) and Z(t), respectively. In a manner similar to the definition o f the time autocorrelation function given in Eq. (7.2.5), i f the random processes Y(t) and Z(t) are ergodic, then the time cross-correlation function o f Y(t) and Z{t) is defined as
R (T)=
YZ

Recognizing the integral with respect to v as H (f) and the integral with respect to u as H*(f) [ i f h (t) is a real function], this reduces to
y

lim ~

Y(t)Z(t

+ r)dr

(7.4.12)

R M=
y

P
-oo

S (f)H*(f)H (f)e ^df=


x z

J2

r
J

S ,(f)e *df
y

-oo

Thus, when Y(t) and Z{t) are jointly ergodic random processes, the cross-correlation function o f Y(t) and Z{t) can be obtained as a time average. PROBLEMS

Setting the integrands equal yields S (f)


yz z

= S (f)Hf(f)H (f)
x 2

(7.4.9)

I f Hy(f) and H {f) are nonoverlapping, / ^ ( T ) = 0 for all T. I n addition, either H (f) or H ( / ) must have zero response to a constant (dc) input; i.e., i f H (f) has zero response to a constant, Y(t) = 0 i f X"(f) = c. Since
y r y

' ' [ 7 J . l . F o r the random process X(t) = Acos(a) t)+B$in(a) t), where w is a constant and A and B are uncorrected zero mean random variables having different density functions but the same variance o- , determine whether X(t) is wide sense stationary.
0 0 0 2

Y(t) =
J
-00

X ( ! - u ) W "

\J.2.2.>or the random process X(t) = Ycos(2irl) and f (y)={, -l^y^l, evaluate E[X(t)] and E[X (t)] and determine whether X(t) is strict sense stationary, wide sense stationary, or neither.
Y 2

266

Random Processes 7.2.9. For R (T) Var[X(/)].


X

Problems as shown in Fig. P7.2.9, determine E[X(t)], E[X (t')],


2

267 and

7.2.3. For the random process Y{t) = X(t) cos(27rt + 0 ) , where X(t) is a wide sense stationary random process statistically independent o f 0 with /(0) = 1 / ( 2 I T ) , 0 < 0<2TT, determine whether Y{i) is wide sense stationary. 7.2.4. Determine the covariance matrix o f the Gaussian random vector X = (XU ) X(t ), X(h)) when /, = 1, t = i.S, and fj = 2.25, where X(t) is a stationary Gaussian random process with m (t) = 0 and R ( T ) = s i n ( w T ) / ( i r T ) .
t t 2 2 x X T

7.2.5. For a random process X(t) with autocorrelation function R (r) as shown in Fig. P7,2.5a, determine the autocorrelation function R (T) for the system shown
x V

>\\
i
x

R (t)
x

in Fig. F7.2.Sb[Y(t)~X(t)

X(t-3)].

Fig. P7.2.9. 7.2.10. Show that the mean function m (i) = 0 and the autocorrelation function R (T) = (1~\T\/T), - T = = T < T, for the binary random process X(t) = A i T + u < t < ( i + 1) T + v, -co < i < co, where V i s a uniform random variable with f ( v) = 1/ T, Os u < T, J>(A, = 1) = P(At = - 1 ) = 0.5, and {A,} and V are statistically independent.
x v

Fig. P7.2.5..

7.2.11. For the periodic function X(t) = . . . - I , + 1 , + 1 , - 1 , + 1 , - l , - 1 , . . . where each 1 is on for 1 sec and T = 7sec, determine the time autocorrelation function R (r) (obtain for T = 0, 1,2.3,4,5,6).
t x

X(t) T=3 second delay Ftg. P7,2.5b. 7.2.6. For a random process X(t) with autocorrelation function R (T) as shown in Fig. P7.2.5a, determine the autocorrelation function R (r) for the system with
x y

7.2.12. For the periodic function X(t) = . . . , - 1 , + 1 , + 1 , + 1 , - 1 , + 1 , . . . where each 1 is on for 1 sec and 7" = 6 sec, determine the time autocorrelation function R (T) (obtain for T = 0 , 1 , 2, 3, 4, 5).
X

7.3.1. Determine the power spectral density S (f)


x

for R (T) = 1, - 2 s r s 2 .
X

7.3.2. Determine the power spectral density for the binary random process o f Problem 7.2.10 with T= 1. 7.3.3. Determine E [ X ( r ) 3 , E[X (t)], and R ( T ) for the random process with power spectral density S , ( J ) = I O / [ ( 2 7 r / ) + 0.16], - c o < / < c o .
T 2 2 x 2

X(t)

Y(t) = X(t-2)
COS(5T) + 10.

X(t-5).
E[X (0l
2

7.3.4. F o r S , ( / ) a s s h o w n i n F i g . P7.3.4,determine [ * ( ( ) ] , E [ X " ( f ) ] , a n d K ( T ) .

7.2.7. Determine E[X(t)], 7.2.8. For R (T)


x

and V a r [ X ( f ) ] for R (T) =


x 2

+ Sx(0 and

as shown in Fig. P7.2.8, determine [ * ( ( ) ] E[X (t)],

Var[X(t)].

Fig. P7.3.4.

7.3.5. For S (f) as shown in Fig. P7.3.5, determine E[X( / )], E[X (t)l
x

and R

(r).

Ftg. P7.2.8.

7.3.6. Determine the thermal noise voltage in a resistor when it is measured with a 1-MHz oscilloscope and when it is measured with a 5-MHz oscilloscope.

268

Random Processes

Sx(0 3

Appendix A Evaluation of Gaussian Probabilities


f

-4

0 Fig. P7J.5.

73.7. Determine the thermal noise voltage in a 1-ft resistor when it is measured with (a) a 10-MHz oscilloscope and (b) a 50-MHz oscilloscope. 7.3.8. Can g ( / H 5 cos(2ir/) be a power spectral density for the wide sense stationary random process X(t)l 7.4.1. Determine the output power spectral density and the output autocorrelation function for a system with impulse response h(t) - e~', t aO, whose input is a white noise process X(t) with spectral density S (f) = N /2, - a o < / < o o .
x 0

A.1

Q Function Evaluation

7.4.2. Determine the output power spectral density and the output autocorrelation function for a system with impulse response h(t) = t.Oss ts I , whose input is a white noise process X(t) with spectra density S (f)= NJ2, - o o < / < o o .
x

The Q function, which yields probability statements on the normalized Gaussian random variable, is given from Eq. (2.2.4) as <?(*) = 1-F (x)=
x

7.4J. Determine the cross-correlation function of Z,(() = X{t) + Y(t) and Z {t) = X(t)-Y(t) where X(t) and V(t) are statistically independent random processes with zero means and autocorrelation functions R (T) = e~ , -oo< r<oo,
2 t M x

[" J
x

V2ir

and i?,,(T) = 2e" , - O O < T < O O .

|T|

7.4.4. For the random processes X ( i ) = A cos {t t) + B sin(w 0 and Y(t) = B c o s ( w 0 - A s i m > < ) where w is a constant and A and B are uncorrected zero-mean random variables having different density functions but the same variance <r , show that X(t) and Y(t) are jointly wide sense stationary.
0 0 0 0 0 2

7 A S . For the two periodic functions X(t) = . . . , - 1 , +1, +1, - 1 , +1, - 1 , - 1 , . . . and Y(t) = ..., - 1 , + 1 , + 1 , - 1 , - 1 , - 1 , + 1 , . . . where each 1 is on for 1 sec and T = 7sec, determine the time cross-correlation function R {T) (obtain for T = 0,1,2,3,4,5,6).
XY

and the region o f integration from Fig. 2.2.3 is shown in Fig. A.1. The Q function is tabulated in Table A.1 for values o f the argument from 0.00 to 4,99 in increments o f 0.01, where the 10 columns give the second decimal place o f the argument. For example, the value o f <?(2.13) is found i n the row with the value 2.1 under the column 0.03, or <?(2.13) = 0.01659. This table lists the values only for positive arguments, but the Q function for negative arguments can be obtained from Eq. (2.28) as C?(-c) = l - Q ( c ) For example,

Q(-0$7) = 1 -

<?(0.97) = I - 0.16602 = 0.83398.

0
269

Fig. A.1. Q function for Gaussian probabilities.

270

Appendix A: Evaluation of Gaussian Probabilities

A.2

Inverse Q Function

271

An approximation for Q(x), which is easily implemented on a programmable calculator, is given as

Q(x) = (b_1 t + b_2 t^2 + b_3 t^3 + b_4 t^4 + b_5 t^5) e^{-x^2/2} + e(x),   x >= 0

where t = 1/(1 + rx), the error is bounded as |e(x)| < 7.5 x 10^{-8}, and

r = 0.2316419
b_1 = 0.127414796     b_2 = -0.142248368
b_3 = 0.710706871     b_4 = -0.726576013
b_5 = 0.530702714

Table A.1 Gaussian Probabilities
[Table A.1 tabulates Q(x) for arguments from 0.00 to 4.99 in increments of 0.01: the rows give the argument to the first decimal place (0.0 through 4.9) and the ten columns (.00 through .09) give the second decimal place. For example, the entry in row 2.1 under column .03 is Q(2.13) = 0.01659, and the entry in row 0.9 under column .07 is Q(0.97) = 0.16602.]
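Since the approximation above is intended for direct numerical use, a short sketch of it in Python may be helpful (Python is an illustrative choice here, not part of the text, which assumes a programmable calculator). The constants are the r and b_1, ..., b_5 listed above, and the spot-check values are taken from Table A.1.

    import math

    # Constants r and b1..b5 as listed above.
    R = 0.2316419
    B = [0.127414796, -0.142248368, 0.710706871, -0.726576013, 0.530702714]

    def q_approx(x):
        # Valid for x >= 0; Q(-x) = 1 - Q(x) handles negative arguments.
        if x < 0.0:
            return 1.0 - q_approx(-x)
        t = 1.0 / (1.0 + R * x)
        poly = sum(b * t ** (k + 1) for k, b in enumerate(B))
        return poly * math.exp(-x * x / 2.0)

    # Spot checks against Table A.1: q_approx(2.13) is about 0.01659,
    # and q_approx(0.97) is about 0.16602.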

A.2

Inverse Q Function

If the value of Q(a) is given, the value of a can be obtained by linear interpolation in Table A.1. For example, if Q(a) = 0.02 the value of a lies between 2.05 and 2.06 [Q(2.05) = 0.02018 and Q(2.06) = 0.01970], and by linear interpolation between these two points, a is

a = 2.05 + [(0.02018 - 0.02)/(0.02018 - 0.01970)](2.06 - 2.05) = 2.054

For values of Q > 0.5, Eq. (2.2.8) is used; for example, for Q(b) = 0.75, Q(-b) = 1 - Q(b) = 0.25 and

-b = 0.67 + [(0.25143 - 0.25)/(0.25143 - 0.24825)](0.68 - 0.67) = 0.674
b = -0.674

A rational approximation for the inverse of Q(x), which is easily implemented on a programmable calculator, is given as

x = s - (c_0 + c_1 s + c_2 s^2)/(1 + d_1 s + d_2 s^2 + d_3 s^3) + e(Q),   Q <= 0.5

where s = (-2 ln Q)^{1/2}, the error is bounded as |e(Q)| < 4.5 x 10^{-4}, and

c_0 = 2.515517     d_1 = 1.432788
c_1 = 0.802853     d_2 = 0.189269
c_2 = 0.010328     d_3 = 0.001308
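A corresponding sketch for the inverse approximation follows (again Python, an illustrative choice, not part of the text). The constants are the c_0, c_1, c_2 and d_1, d_2, d_3 listed above, and the spot checks are the two interpolation examples worked out earlier in this section.

    import math

    # Constants c0..c2 and d1..d3 as listed above.
    C = [2.515517, 0.802853, 0.010328]
    D = [1.432788, 0.189269, 0.001308]

    def q_inverse(q):
        # Valid for 0 < q <= 0.5; for q > 0.5 the symmetry Q(-x) = 1 - Q(x) is used.
        if q > 0.5:
            return -q_inverse(1.0 - q)
        s = math.sqrt(-2.0 * math.log(q))
        num = C[0] + C[1] * s + C[2] * s * s
        den = 1.0 + D[0] * s + D[1] * s * s + D[2] * s ** 3
        return s - num / den

    # Spot checks against the interpolation examples above:
    # q_inverse(0.02) is about 2.054 and q_inverse(0.75) is about -0.674.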


Appendix B Sum of N Uniform Random Variables

The density function and the distribution function for the sum of N statistically independent random variables uniformly distributed from 0 to 1 are derived. The results are piecewise continuous functions with different expressions over each unit interval. For Z_N the sum of these uniform random variables, a normalized version of the sum, W_N, is defined where the mean of W_N is 0 and the variance of W_N is 1. The density function and distribution function for W_N are also developed. The density function of this normalized sum then facilitates a direct comparison with the normalized Gaussian density function. Letting Z_N be the sum of N statistically independent uniform random variables as

Z_1 = U_1,    Z_N = Z_{N-1} + U_N,    N = 2, 3, ...

where

f_{U_i}(u_i) = 1,  0 < u_i <= 1,   and  = 0 otherwise,   i = 1, 2, ...

the density function of the sum of N statistically independent uniform random variables is given as

f_{Z_N}(z) = [1/(N-1)!] Σ_{i=0}^{k} (-1)^i C_i^N (z - i)^{N-1},   k < z <= k+1,  k = 0, 1, ..., N-1
           = 0   otherwise                                                          (B.1)

This will be proved by using mathematical induction. Before starting, note from symmetry that

f_{Z_N}(z) = f_{Z_N}(N - z),   N = 1, 2, ...

and the density function of Z_{N+1} is obtained from the density function of Z_N and the density function of U_{N+1} by convolution as

f_{Z_{N+1}}(z) = ∫_{-∞}^{∞} f_{U_{N+1}}(z - x) f_{Z_N}(x) dx = ∫_{z-1}^{z} f_{Z_N}(x) dx

For a starting value, N = 1,

f_{Z_1}(z) = 1,   0 < z <= 1

and Eq. (B.1) holds. Now assume Eq. (B.1) is true for N and show that it is true for N+1. Three ranges will be considered separately. First, for 0 < z <= 1,

f_{Z_{N+1}}(z) = ∫_{0}^{z} [x^{N-1}/(N-1)!] dx = z^N/N!                             (B.2a)

Now, for k < z <= k+1, k = 1, 2, ..., N-1,

f_{Z_{N+1}}(z) = ∫_{z-1}^{k} [1/(N-1)!] Σ_{i=0}^{k-1} (-1)^i C_i^N (x - i)^{N-1} dx + ∫_{k}^{z} [1/(N-1)!] Σ_{i=0}^{k} (-1)^i C_i^N (x - i)^{N-1} dx
              = (1/N!) [ Σ_{i=0}^{k} (-1)^i C_i^N (z - i)^N - Σ_{i=0}^{k-1} (-1)^i C_i^N (z - 1 - i)^N ]
              = (1/N!) [ Σ_{i=0}^{k} (-1)^i C_i^N (z - i)^N + Σ_{i=1}^{k} (-1)^i C_{i-1}^N (z - i)^N ]

Using the relationship C_{i-1}^N + C_i^N = C_i^{N+1} yields

f_{Z_{N+1}}(z) = (1/N!) Σ_{i=0}^{k} (-1)^i C_i^{N+1} (z - i)^N,   k < z <= k+1,  k = 1, ..., N-1      (B.2b)

Finally, for N < z <= N+1,

f_{Z_{N+1}}(z) = (1/N!)(N + 1 - z)^N = (1/N!) Σ_{i=0}^{N} (-1)^i C_i^{N+1} (z - i)^N                  (B.2c)

where the symmetry of f_{Z_{N+1}}(z), i.e., f_{Z_{N+1}}(z) = f_{Z_{N+1}}(N + 1 - z), has been used. Combining Eqs. (B.2a), (B.2b), and (B.2c) yields the desired result of Eq. (B.1).

Using Eq. (B.1), the distribution function for the sum of N statistically independent uniform random variables for k < z <= k+1, k = 0, 1, ..., N-1, is obtained as

F_{Z_N}(z) = Σ_{j=0}^{k-1} ∫_{j}^{j+1} f_{Z_N}(x) dx + ∫_{k}^{z} f_{Z_N}(x) dx

which reduces to

F_{Z_N}(z) = (1/N!) Σ_{i=0}^{k} (-1)^i C_i^N (z - i)^N,   k < z <= k+1,  k = 0, 1, ..., N-1
           = 1,   z > N
           = 0,   z <= 0                                                            (B.3)

The mean of Z_N is obtained as

E(Z_N) = Σ_{i=1}^{N} E(U_i) = N/2

and the variance as

Var(Z_N) = Σ_{i=1}^{N} Var(U_i) = N/12

For the transformation

W_N = (Z_N - N/2)/√(N/12)

E(W_N) = 0 and Var(W_N) = 1. The density function of W_N is obtained from the density function of Z_N, Eq. (B.1), as

f_{W_N}(w) = [√(N/12)/(N-1)!] Σ_{i=0}^{k} (-1)^i C_i^N (√(N/12) w + N/2 - i)^{N-1},
             (k - N/2)/√(N/12) < w <= (k - N/2 + 1)/√(N/12),   k = 0, 1, ..., N-1
           = 0   otherwise                                                          (B.5)

Likewise, the distribution function of W_N is obtained from the distribution function of Z_N, Eq. (B.3), as

F_{W_N}(w) = (1/N!) Σ_{i=0}^{k} (-1)^i C_i^N (√(N/12) w + N/2 - i)^N,
             (k - N/2)/√(N/12) < w <= (k - N/2 + 1)/√(N/12),   k = 0, 1, ..., N-1
           = 1,   w > √(3N)
           = 0,   w <= -√(3N)                                                       (B.6)

The density and distribution functions of W_N are listed below for several values of N [only positive values of w are given since f_{W_N}(-w) = f_{W_N}(w) and F_{W_N}(-w) = 1 - F_{W_N}(w)]. For each N and each range the density function is given first and one minus the distribution function is given next.

N = 1:   A = B = 0.2887,   C = 1
  0 < w <= 1.732:      B;    C(1/2 - Aw)

N = 2:   A = B = 0.4082,   C = 0.5
  0 < w <= 2.449:      B(1 - Aw);    C(1 - Aw)^2

N = 4:   A = 0.5774,   B = 0.09623,   C = 0.04167
  1.732 < w <= 3.464:  B[(2 - Aw)^3];    C[(2 - Aw)^4]
  0 < w <= 1.732:      B[(2 - Aw)^3 - 4(1 - Aw)^3];    C[(2 - Aw)^4 - 4(1 - Aw)^4]

N = 8:   A = 0.8165,   B = 0.0001620,   C = 2.480 x 10^{-5}
  3.674 < w <= 4.899:  B[(4 - Aw)^7];    C[(4 - Aw)^8]
  2.449 < w <= 3.674:  B[(4 - Aw)^7 - 8(3 - Aw)^7];    C[(4 - Aw)^8 - 8(3 - Aw)^8]
  1.225 < w <= 2.449:  B[(4 - Aw)^7 - 8(3 - Aw)^7 + 28(2 - Aw)^7];    C[(4 - Aw)^8 - 8(3 - Aw)^8 + 28(2 - Aw)^8]
  0 < w <= 1.225:      B[(4 - Aw)^7 - 8(3 - Aw)^7 + 28(2 - Aw)^7 - 56(1 - Aw)^7];    C[(4 - Aw)^8 - 8(3 - Aw)^8 + 28(2 - Aw)^8 - 56(1 - Aw)^8]

N = 12:  A = 1,   B = 2.505 x 10^{-8},   C = 2.088 x 10^{-9}
  5 < w <= 6:  B[(6 - w)^11];    C[(6 - w)^12]
  4 < w <= 5:  B[(6 - w)^11 - 12(5 - w)^11];    C[(6 - w)^12 - 12(5 - w)^12]
  3 < w <= 4:  B[(6 - w)^11 - 12(5 - w)^11 + 66(4 - w)^11];    C[(6 - w)^12 - 12(5 - w)^12 + 66(4 - w)^12]
  2 < w <= 3:  B[(6 - w)^11 - 12(5 - w)^11 + 66(4 - w)^11 - 220(3 - w)^11];    C[(6 - w)^12 - 12(5 - w)^12 + 66(4 - w)^12 - 220(3 - w)^12]
  1 < w <= 2:  B[(6 - w)^11 - 12(5 - w)^11 + 66(4 - w)^11 - 220(3 - w)^11 + 495(2 - w)^11];    C[(6 - w)^12 - 12(5 - w)^12 + 66(4 - w)^12 - 220(3 - w)^12 + 495(2 - w)^12]
  0 < w <= 1:  B[(6 - w)^11 - 12(5 - w)^11 + 66(4 - w)^11 - 220(3 - w)^11 + 495(2 - w)^11 - 792(1 - w)^11];    C[(6 - w)^12 - 12(5 - w)^12 + 66(4 - w)^12 - 220(3 - w)^12 + 495(2 - w)^12 - 792(1 - w)^12]

Fig. B.1. Density function for the sum of N uniform random variables.

Figure B.1 gives a comparison of the density function of the normalized Gaussian random variable with the density function of W_N for N = 1, 2, 4, 8. From this figure it can be seen that the density functions are very close for N as small as 8. Likewise, for one minus the distribution function (the Q function) with w = 1 (1 standard deviation) the error relative to the Gaussian probability for N = 4, 8, and 12 is 4.2, 2.0, and 1.3%, respectively, which is quite close. Even for w = 2 (2 standard deviations) the error is small; i.e., for N = 4, 8, and 12 the error is 6.5, 3.2, and 2.1%, respectively. But for w = 3 (3 standard deviations) the error relative to the Gaussian probability for N = 4, 8, and 12 is 84.1, 38.8, and 25.4%, respectively, which is quite large. Also, for w = 4 (4 standard deviations) the error for N = 4, 8, and 12 is 100, 93.4, and 73.1%. It can be concluded that the sum of uniform random variables for small N is a good approximation to a Gaussian random variable if the argument of the variable is within 2 standard deviations, but is a poor approximation for values on the tails (large arguments) of the random variable.
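The comparison just described is easy to reproduce numerically. The following Python sketch (an illustrative choice, not part of the text) evaluates Eqs. (B.1), (B.5), and (B.6) directly; the function names are chosen here only for illustration, and the handling of interval endpoints is approximate.

    import math

    def f_z(z, n):
        # Density of Z_N, Eq. (B.1); zero outside 0 < z < n.
        if z <= 0.0 or z >= n:
            return 0.0
        k = int(math.floor(z))   # k < z <= k + 1
        s = sum((-1) ** i * math.comb(n, i) * (z - i) ** (n - 1) for i in range(k + 1))
        return s / math.factorial(n - 1)

    def f_w(w, n):
        # Density of the normalized sum W_N, Eq. (B.5).
        scale = math.sqrt(n / 12.0)
        return scale * f_z(scale * w + n / 2.0, n)

    def tail_w(w, n):
        # 1 - F_{W_N}(w) from Eq. (B.6), for w >= 0.
        z = math.sqrt(n / 12.0) * w + n / 2.0
        if z >= n:
            return 0.0
        k = int(math.floor(z))
        F = sum((-1) ** i * math.comb(n, i) * (z - i) ** n for i in range(k + 1)) / math.factorial(n)
        return 1.0 - F

    # Example: tail_w(2.0, 8) compared with Q(2) = 0.02275 differs by a few
    # percent, consistent with the figures quoted above.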

Appendix C Moments of Random Variables

The mean (first moment), the variance (second central moment), and the characteristic function (the Fourier transform of the density function or probability function) are given for some frequently encountered random variables. Along with these moments, the range over which the probability density function or probability function is nonzero is given, as is the range of the parameters of the probability density function or probability function. Table C.1 gives these moments for the discrete random variables along with their probability functions. Table C.2 gives these moments for the continuous random variables along with their probability density functions.

Table C.1 Moments for Discrete Random Variables

Bernoulli:
  P(X = k) = p^k (1 - p)^{1-k},   k = 0, 1;   0 < p < 1
  E(X) = p,   Var(X) = p(1 - p),   φ_X(ω) = p e^{jω} + 1 - p

Binomial:
  P(X = k) = C_k^N p^k (1 - p)^{N-k},   k = 0, 1, ..., N;   N = 1, 2, 3, ...;   0 < p < 1
  E(X) = Np,   Var(X) = Np(1 - p),   φ_X(ω) = [p e^{jω} + 1 - p]^N

Poisson:
  P(X = k) = a^k e^{-a}/k!,   k = 0, 1, 2, ...;   0 < a < ∞
  E(X) = a,   Var(X) = a,   φ_X(ω) = exp[a(e^{jω} - 1)]

Geometric:
  P(X = k) = (1 - p)^{k-1} p,   k = 1, 2, 3, ...;   0 < p < 1
  E(X) = 1/p,   Var(X) = (1 - p)/p^2,   φ_X(ω) = p e^{jω}/[1 - (1 - p) e^{jω}]

Table C.2 Moments for Continuous Random Variables

Uniform:
  f_X(x) = 1/(b - a),   a < x < b
  E(X) = (b + a)/2,   Var(X) = (b - a)^2/12,   φ_X(ω) = (e^{jωb} - e^{jωa})/[jω(b - a)]

Gaussian:
  f_X(x) = [1/(√(2π) σ)] exp[-(x - μ)^2/(2σ^2)],   -∞ < x < ∞;   -∞ < μ < ∞,   0 < σ < ∞
  E(X) = μ,   Var(X) = σ^2,   φ_X(ω) = exp(jμω - σ^2 ω^2/2)

Exponential:
  f_X(x) = a e^{-ax},   0 < x < ∞;   0 < a < ∞
  E(X) = 1/a,   Var(X) = 1/a^2,   φ_X(ω) = [1 - jω/a]^{-1}

Gamma:
  f_X(x) = a^{b+1} x^b e^{-ax}/Γ(b + 1),   0 <= x < ∞;   0 < a < ∞,   0 < b < ∞
  E(X) = (b + 1)/a,   Var(X) = (b + 1)/a^2,   φ_X(ω) = [1 - jω/a]^{-(b+1)}

Cauchy:
  f_X(x) = (a/π)/(x^2 + a^2),   -∞ < x < ∞;   0 < a < ∞
  E(X) undefined,   Var(X) = ∞,   φ_X(ω) = e^{-a|ω|}

Rayleigh:
  f_X(x) = (x/b) exp(-x^2/2b),   0 <= x < ∞;   0 < b < ∞
  E(X) = √(πb/2),   Var(X) = (2 - π/2) b

Beta:
  f_X(x) = [Γ(a + b + 2)/(Γ(a + 1) Γ(b + 1))] x^a (1 - x)^b,   0 <= x <= 1;   0 <= a < ∞,   0 <= b < ∞
  E(X) = (a + 1)/(a + b + 2),   Var(X) = (a + 1)(b + 1)/[(a + b + 3)(a + b + 2)^2]
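The entries in Tables C.1 and C.2 can be spot-checked numerically. The sketch below (Python, an illustrative choice, not part of the text; the value a = 2 is an arbitrary example) integrates the exponential density on a grid and compares the results with the tabulated mean, variance, and characteristic function.

    import numpy as np

    a = 2.0                                   # example parameter value
    x = np.linspace(0.0, 40.0, 400001)        # grid covering essentially all of the support
    f = a * np.exp(-a * x)                    # exponential density of Table C.2

    mean = np.trapz(x * f, x)                 # close to 1/a = 0.5
    second = np.trapz(x ** 2 * f, x)          # close to 2/a^2 = 0.5
    var = second - mean ** 2                  # close to 1/a^2 = 0.25
    phi_1 = np.trapz(np.exp(1j * x) * f, x)   # phi_X(1) = 1/(1 - j/a) = 0.8 + 0.4j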

Appendix D Matrix Notation

An introduction to matrix notation and examples of matrix manipulation are given in this appendix. A matrix is an array of numbers or symbols, and its utility is the ease of representing many scalar quantities in shorthand notation. Matrices will be represented by boldface letters. The general matrix A is given as

A = {a_ij} = [ a_11  a_12  ...  a_1m ]
             [ a_21  a_22  ...  a_2m ]
             [  ...                  ]
             [ a_n1  a_n2  ...  a_nm ]

where there are n rows and m columns, and A is referred to as an n x m matrix. The element in the ith row and jth column is a_ij, and the set representation {a_ij} means the set of all elements a_ij. If n = m, A is said to be a square matrix. The transpose of the matrix A, written A^T = {a_ji}, is the matrix obtained by interchanging the rows and columns of A; the element in the ith row and jth column of A^T is a_ji. If A is n x m, A^T is m x n.

Example D.1. For

A = [ 1  4 ]
    [ 2  5 ]
    [ 3  6 ]

a 3 x 2 matrix, the transpose is

A^T = [ 1  2  3 ]
      [ 4  5  6 ]

a 2 x 3 matrix.

A row vector is a matrix with a single row and a column vector is a matrix with a single column. The vectors considered here will normally be column vectors, which makes the transpose a row vector. For x an n x 1 column vector, the transpose

x^T = {x_i} = [x_1, x_2, ..., x_n]

is a 1 x n row vector. Two matrices are equal,

A = B = {a_ij} = {b_ij}

if a_ij = b_ij for all i and j, which implies that A and B must be the same size. The matrix A is symmetric if

A = A^T = {a_ij} = {a_ji}

or a_ij = a_ji for all i and j. To be symmetric A must be a square or n x n matrix. For matrix addition to be defined, the matrices of the sum must be of the same size. If A and B are both n x m matrices the matrix addition is defined as

C = A + B = {a_ij + b_ij} = {c_ij}

where the sum matrix is also n x m. Thus, an element in C is the sum of the corresponding elements in A and B. It can easily be shown that matrix addition is both associative and commutative.

Example D.2. If A and B are given as

A = [ 1  4 ]        B = [ 2  -1 ]
    [ 2  5 ]            [ 7  -3 ]
    [ 3  6 ]            [ 8   1 ]

then the sum of A and B is

C = [  3  3 ]
    [  9  2 ]
    [ 11  7 ]

The product of a scalar d and an n x m matrix A is defined as

B = dA = {d a_ij} = {b_ij}


where B is n x m and each element in the product is the scalar times the corresponding element in A.

Example D.3. For A given in Example D.2, the product of d = 3 and A is

B = 3A = [ 3  12 ]
         [ 6  15 ]
         [ 9  18 ]

The multiplication of the two matrices A (n x m) and B (m x p) is defined as

C = AB = { Σ_{k=1}^{m} a_ik b_kj } = {c_ij}

where C is an n x p matrix. The ijth element of the product matrix is obtained as a summation of the elements in the ith row of A (the matrix on the left in the product) times the elements in the jth column of B (the matrix on the right). For these terms to match up, the number of elements in the ith row of A must equal the number of elements in the jth column of B; i.e., the number of columns of A must equal the number of rows of B for the multiplication to be defined. Matrix multiplication may not be commutative, AB ≠ BA, since for A n x m and B m x p the product BA (p ≠ n) is not defined. Even if the product BA were defined (A is n x m and B is m x n), matrix multiplication is in general not commutative.

Example D.4. For A 2 x 3 and B 3 x 2 given as

A = [ 1  2  3 ]        B = [ -2  -3 ]
    [ 4  5  6 ]            [ -1   1 ]
                           [  2   3 ]

the product AB is evaluated as

AB = [ 1(-2) + 2(-1) + 3(2)    1(-3) + 2(1) + 3(3) ]  =  [  2   8 ]
     [ 4(-2) + 5(-1) + 6(2)    4(-3) + 5(1) + 6(3) ]     [ -1  11 ]

where each element is obtained as the sum of the products of elements in the ith row of A times the corresponding elements in the jth column of B.

For A n x m, B m x p, and C p x q,

(AB)C = { Σ_j ( Σ_k a_ik b_kj ) c_jl } = { Σ_k a_ik ( Σ_j b_kj c_jl ) } = A(BC)

and matrix multiplication is associative. The result of the product is an n x q matrix. The transpose of the product of matrices is the product of the transposes of the matrices in reverse order, i.e.,

(AB)^T = { Σ_k a_ik b_kj }^T = { Σ_k a_jk b_ki } = { Σ_k b_ki a_jk } = B^T A^T

If A is n x m and B is m x p, AB is n x p and (AB)^T is p x n. Likewise, B^T is p x m, A^T is m x n, and B^T A^T is p x n.

A bilinear form is given as

x^T A y = Σ_k Σ_r x_k a_kr y_r

where x is n x 1, A is n x n, and y is n x 1, which makes the bilinear form a scalar. When y = x, this form is called a quadratic form, or x^T A x is a quadratic form (a scalar). A special case of the quadratic form is

x^T x = Σ_{i=1}^{n} x_i^2

which is the magnitude of the vector squared.

If A is an n x n (square) matrix and if for a matrix B (n x n), BA = AB = I, where I is the identity matrix, B is said to be the inverse of A (B = A^{-1}). B and I are n x n matrices and I has 1's on the main diagonal and 0's everywhere else. In order to evaluate the inverse of a matrix a few terms will first be defined. Letting M_ij be the minor of the element a_ij, M_ij is equal to the determinant formed by deleting the ith row and the jth column of A. Also, letting A_ij be the cofactor of the element a_ij,

A_ij = (-1)^{i+j} M_ij

The determinant of A is given in terms of the cofactors as

|A| = Σ_{j=1}^{n} a_ij A_ij = Σ_{i=1}^{n} a_ij A_ij

which states that the determinant can be obtained by summing the products of the elements of any row times their corresponding cofactors, or by summing the products of the elements of any column times their corresponding cofactors.
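A short sketch may help make the minor and cofactor definitions concrete. The following Python code (an illustrative choice, not part of the text) forms the cofactors numerically and expands the determinant along a row; the matrix used is the one of Example D.5 below.

    import numpy as np

    def minor(A, i, j):
        # Determinant of A with row i and column j deleted (0-based indices).
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
        return np.linalg.det(sub)

    def cofactor(A, i, j):
        return (-1) ** (i + j) * minor(A, i, j)

    def det_by_cofactors(A, row=0):
        # Expansion of |A| along one row: sum of a_{row,j} times the cofactor A_{row,j}.
        return sum(A[row, j] * cofactor(A, row, j) for j in range(A.shape[0]))

    A = np.array([[1.0, 2.0, 3.0], [-1.0, 4.0, -2.0], [-3.0, 6.0, 5.0]])
    print(det_by_cofactors(A))   # 72, the matrix of Example D.5 below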


If B equals A with the jth column replaced by the ith column (columns i and j are the same in B), the determinant of B is 0 and

|B| = 0 = Σ_{k=1}^{n} a_ki A_kj,   i ≠ j

The determinant is also 0 if the jth row is replaced by the ith row (rows i and j are the same). Combining these statements with |A| yields

Σ_{k=1}^{n} a_ki A_kj = δ_ij |A|

where δ_ij = 1 for i = j and δ_ij = 0 for i ≠ j. Now Cof A = {A_ij}, Cof A^T = {A_ji}, and

A Cof A^T = { Σ_k a_ik A_jk } = { |A| δ_ij } = |A| I

so that for |A| ≠ 0

A^{-1} = Cof A^T / |A|

Also, (AB)^{-1} = B^{-1} A^{-1} and (A^{-1})^T = (A^T)^{-1}.

Example D.5. Consider the matrix

A = [  1  2  3 ]
    [ -1  4 -2 ]
    [ -3  6  5 ]

The cofactors are calculated as

A_11 = det[ 4 -2; 6 5 ] = 32      A_12 = -det[ -1 -2; -3 5 ] = 11     A_13 = det[ -1 4; -3 6 ] = 6
A_21 = -det[ 2 3; 6 5 ] = 8       A_22 = det[ 1 3; -3 5 ] = 14        A_23 = -det[ 1 2; -3 6 ] = -12
A_31 = det[ 2 3; 4 -2 ] = -16     A_32 = -det[ 1 3; -1 -2 ] = -1      A_33 = det[ 1 2; -1 4 ] = 6

which yields

Cof A = [  32  11   6 ]
        [   8  14 -12 ]
        [ -16  -1   6 ]

Now

|A| = a_11 A_11 + a_12 A_12 + a_13 A_13 = 1(32) + 2(11) + 3(6) = 72

and finally the inverse is

A^{-1} = Cof A^T / |A| = (1/72) [ 32   8 -16 ]
                                [ 11  14  -1 ]
                                [  6 -12   6 ]

An alternative procedure for finding a matrix inverse is the Gauss-Jordan method. This method consists of solving the matrix equation AB = I for B, which is A^{-1}. By performing row operations the augmented matrix [A : I] is manipulated to the form [I : B], which yields the inverse directly.

Example D.6. For A of Example D.5 the augmented matrix is given as

[  1  2  3 : 1 0 0 ]        [ 1   2   3 : 1 0 0 ]
[ -1  4 -2 : 0 1 0 ]   ->   [ 0   6   1 : 1 1 0 ]
[ -3  6  5 : 0 0 1 ]        [ 0  12  14 : 3 0 1 ]

where in the first step the element in row 1 column 1 is normalized to 1 to obtain a new row 1. Next a new row 2 is obtained, such that the element in column 1 is 0, as old row 2 minus (-1) times new row 1, and a new row 3 is obtained, such that the element in column 1 is 0, as old row 3 minus (-3) times new row 1. In the second step the element in row 2 column 2 is normalized to obtain a new row 2. Then a new row 1 is obtained, such that the element in column 2 is 0, as old row 1 minus (2) times new row 2, and a new row 3 is obtained, such that the element in column 2 is 0, as old row 3 minus (12) times new row 2. The second and third steps are given as

[ 1  0  8/3 : 2/3  -1/3  0 ]        [ 1 0 0 :  4/9    1/9  -2/9  ]
[ 0  1  1/6 : 1/6   1/6  0 ]   ->   [ 0 1 0 : 11/72   7/36 -1/72 ]
[ 0  0  12  : 1    -2    1 ]        [ 0 0 1 :  1/12  -1/6   1/12 ]

As shown, for the third step the element in row 3 column 3 is normalized to obtain a new row 3. Then a new row 1 is obtained, such that the element in column 3 is 0, as old row 1 minus (8/3) times new row 3, and a new row 2 is obtained, such that the element in column 3 is 0, as old row 2 minus (1/6) times new row 3. Thus A^{-1} is obtained as

A^{-1} = [  4/9    1/9  -2/9  ]
         [ 11/72   7/36 -1/72 ]
         [  1/12  -1/6   1/12 ]

which is the same result as in Example D.5.
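The row operations of Example D.6 can be carried out mechanically. The Python sketch below (an illustrative choice, not part of the text) applies the same Gauss-Jordan reduction to the augmented matrix [A : I]; it assumes the pivots encountered are nonzero, as in the example, and performs no row interchanges.

    import numpy as np

    def gauss_jordan_inverse(A):
        # Reduce [A : I] to [I : A^{-1}] with the row operations of Example D.6.
        # Assumes nonzero pivots; no row interchanges are done.
        n = A.shape[0]
        aug = np.hstack([A.astype(float), np.eye(n)])
        for col in range(n):
            aug[col] = aug[col] / aug[col, col]                     # normalize the pivot row
            for row in range(n):
                if row != col:
                    aug[row] = aug[row] - aug[row, col] * aug[col]  # zero out the rest of the column
        return aug[:, n:]

    A = np.array([[1, 2, 3], [-1, 4, -2], [-3, 6, 5]])
    print(np.round(gauss_jordan_inverse(A) * 72))   # Cof(A)^T of Example D.5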


The rank r of an n x m matrix A is the size of the largest square submatrix of A whose determinant is nonzero. A symmetric n x n matrix A is positive definite if the quadratic form x^T A x > 0 for each nonzero x, and is positive semidefinite if the quadratic form x^T A x >= 0 for each nonzero x with x^T A x = 0 for at least one nonzero x. The rank of A is equal to n if A is positive definite, and the rank of A is less than n if A is positive semidefinite. Also, a_ii > 0 for all i = 1, 2, ..., n if A is positive definite, and a_ii >= 0 for all i = 1, 2, ..., n if A is positive semidefinite. A matrix A is positive definite if and only if the leading principal minors are greater than zero, i.e.,

a_11 > 0,    det[ a_11 a_12; a_21 a_22 ] > 0,    ...,    |A| > 0

Likewise, a matrix A is positive semidefinite if and only if the leading principal minors are greater than or equal to zero.

In Section 4.4 the quadratic form in the exponent of the N-dimensional Gaussian density function must be positive semidefinite, and the covariance matrix Λ_X (an n x n matrix) is a positive semidefinite matrix. That this is so can be seen by observing that for any subset of the random variables the determinant of the covariance matrix of the subset must be greater than or equal to zero. The subsets can be picked such that all of the leading principal minors are greater than or equal to zero, which makes Λ_X positive semidefinite. If Λ_X is positive definite (rank n), then there is no impulse function in the density function (if the rank is less than n there would be an impulse function in the density function). For the transformation Y = AX + B, where Λ_Y = A Λ_X A^T, Λ_Y is at least a positive semidefinite matrix since it is a covariance matrix. If A is m x n, Λ_Y is m x m, and if m > n the rank of A is at most n and the rank of Λ_Y is at most n, so Λ_Y cannot be positive definite (there must be an impulse in the density function of Y).

An n x n matrix A is positive definite if and only if there exists an n x n matrix B of rank n such that B B^T = A. Likewise, an n x n matrix A is positive semidefinite if and only if there exists an n x n matrix B of rank less than n such that B B^T = A. This property is used in Section 5.6 to obtain a transformation from statistically independent random variables to random variables with a desired covariance matrix.

Example D.7. Consider the Gaussian random vector Y^T = (Y_1, Y_2, Y_3) with μ_Y^T = (0, 0, 0) and

Λ_Y = [ 16  12  20 ]
      [ 12  13  11 ]
      [ 20  11  29 ]

Since 16 > 0,

det[ 16 12; 12 13 ] = 64 > 0,    and    |Λ_Y| = 0

the rank of Λ_Y is 2. Thus f_Y(y) contains an impulse function. The impulse function can be determined by using Eq. (5.6.2) to obtain T (Y = TX with Λ_X = I) as

T = [ 4   0 ]
    [ 3   2 ]
    [ 5  -2 ]

where t_33 = 0 (and the third column is omitted). Then using Eq. (5.6.4), A (X = AY) is obtained as

A = [  0.25   0    0 ]
    [ -0.375  0.5  0 ]

where the last row of A is omitted since t_33 = 0. Finally,

Y_3 = [ 5  -2 ] X = [ 5  -2 ] A Y = [ 2  -1  0 ] Y

which yields the dependency Y_3 = 2Y_1 - Y_2. Letting Y'^T = (Y_1, Y_2), the conditional density of Y_3 given Y' is the impulse (obtained from Y_3 = 2Y_1 - Y_2) given as

f_{Y_3|Y'}(y_3|y') = δ(y_3 - 2y_1 + y_2)

Now

Λ_{Y'} = [ 16  12 ]
         [ 12  13 ]

and

Λ_{Y'}^{-1} = (1/64) [  13  -12 ]
                     [ -12   16 ]

which gives

f_{Y'}(y') = (1/16π) exp[-(13 y_1^2 - 24 y_1 y_2 + 16 y_2^2)/128],    -∞ < y_1 < ∞,  -∞ < y_2 < ∞

The density function of Y is then obtained as

f_Y(y) = f_{Y'}(y') f_{Y_3|Y'}(y_3|y')

or

f_Y(y) = (1/16π) exp[-(13 y_1^2 - 24 y_1 y_2 + 16 y_2^2)/128] δ(y_3 - 2y_1 + y_2)

Appendix E Mathematical Quantities

E.1 Trigonometric Identities

e^{jx} = cos x + j sin x
cos x = (e^{jx} + e^{-jx})/2
sin x = (e^{jx} - e^{-jx})/(2j)
cos(x ± y) = cos x cos y ∓ sin x sin y
sin(x ± y) = sin x cos y ± cos x sin y
cos x cos y = (1/2)[cos(x + y) + cos(x - y)]
sin x sin y = (1/2)[cos(x - y) - cos(x + y)]
sin x cos y = (1/2)[sin(x + y) + sin(x - y)]
cos x + cos y = 2 cos[(x + y)/2] cos[(x - y)/2]
cos x - cos y = -2 sin[(x + y)/2] sin[(x - y)/2]
sin x ± sin y = 2 sin[(x ± y)/2] cos[(x ∓ y)/2]
cos 2x = cos^2 x - sin^2 x
sin 2x = 2 sin x cos x
cos^2 x = (1/2)(1 + cos 2x)
sin^2 x = (1/2)(1 - cos 2x)
A cos x - B sin x = R cos(x + θ),   where R = √(A^2 + B^2) and θ = tan^{-1}(B/A)

E.2 Indefinite Integrals

∫ u dv = uv - ∫ v du
∫ e^{ax} dx = e^{ax}/a
∫ (a + bx)^n dx = (a + bx)^{n+1}/[b(n + 1)],   n ≠ -1
∫ dx/(a + bx) = (1/b) ln|a + bx|
∫ cos ax dx = (1/a) sin ax
∫ x cos ax dx = (1/a^2)(ax sin ax + cos ax)
∫ x^2 cos ax dx = (1/a^3)(a^2 x^2 sin ax + 2ax cos ax - 2 sin ax)
∫ e^{ax} cos bx dx = e^{ax}(a cos bx + b sin bx)/(a^2 + b^2)
∫ sin ax dx = -(1/a) cos ax
∫ x sin ax dx = (1/a^2)(sin ax - ax cos ax)
∫ x^2 sin ax dx = (1/a^3)(-a^2 x^2 cos ax + 2ax sin ax + 2 cos ax)
∫ e^{ax} sin bx dx = e^{ax}(a sin bx - b cos bx)/(a^2 + b^2)
∫ dx/(a^2 + b^2 x^2) = (1/ab) tan^{-1}(bx/a)
∫ dx/√(a^2 - b^2 x^2) = (1/b) sin^{-1}(bx/a)

E.3 Definite Integrals

∫_0^∞ e^{-a^2 x^2} dx = √π/(2a),   a > 0
∫_0^∞ x^b e^{-ax} dx = Γ(b + 1)/a^{b+1},   a > 0, b > 0
∫_0^1 x^{a-1}(1 - x)^{b-1} dx = Γ(a)Γ(b)/Γ(a + b)

where Γ(b) = ∫_0^∞ x^{b-1} e^{-x} dx.

E.4 Series

Σ_{i=1}^{N} i = N(N + 1)/2
Σ_{i=1}^{N} i^2 = N(N + 1)(2N + 1)/6
Σ_{i=0}^{N} C_i^N a^i b^{N-i} = (a + b)^N,   where C_i^N = N!/[i!(N - i)!]
Σ_{i=0}^{∞} a^i = 1/(1 - a),   0 <= a < 1

E.5 Fourier Transforms

Properties (description, time function, transform):

Definition:                 f(t) = (1/2π) ∫_{-∞}^{∞} F(ω) e^{jωt} dω,    F(ω) = ∫_{-∞}^{∞} f(t) e^{-jωt} dt
Shifting (time):            f(t - t_0)  <->  e^{-jωt_0} F(ω)
Shifting (frequency):       e^{jω_0 t} f(t)  <->  F(ω - ω_0)
Duality:                    F(t)  <->  2π f(-ω)
Scaling:                    f(at)  <->  (1/|a|) F(ω/a)
Differentiation (time):     d^n f(t)/dt^n  <->  (jω)^n F(ω)
Differentiation (frequency): (-jt)^n f(t)  <->  d^n F(ω)/dω^n
Integration (time):         ∫_{-∞}^{t} f(u) du  <->  F(ω)/(jω) + π F(0) δ(ω)
Integration (frequency):    -f(t)/(jt) + π f(0) δ(t)  <->  ∫_{-∞}^{ω} F(u) du
Conjugation (time):         f*(t)  <->  F*(-ω)
Conjugation (frequency):    f*(-t)  <->  F*(ω)
Convolution (time):         ∫_{-∞}^{∞} f_1(τ) f_2(t - τ) dτ  <->  F_1(ω) F_2(ω)
Convolution (frequency):    f_1(t) f_2(t)  <->  (1/2π) ∫_{-∞}^{∞} F_1(u) F_2(ω - u) du
Parseval's theorem:         ∫_{-∞}^{∞} f_1(t) f_2*(t) dt = (1/2π) ∫_{-∞}^{∞} F_1(ω) F_2*(ω) dω

Transform pairs, f(t)  <->  F(ω):

Impulse:                    δ(t)  <->  1
Step:                       u(t)  <->  1/(jω) + π δ(ω)
Constant:                   K  <->  2πK δ(ω)
Exponential:                e^{-at} u(t)  <->  1/(a + jω)
Exponential (times t):      t e^{-at} u(t)  <->  1/(a + jω)^2
Exponential (two-sided):    e^{-a|t|}  <->  2a/(a^2 + ω^2)
Cosine:                     cos(ω_0 t)  <->  π[δ(ω - ω_0) + δ(ω + ω_0)]
Sine:                       sin(ω_0 t)  <->  -jπ[δ(ω - ω_0) - δ(ω + ω_0)]
Cosine (positive time):     cos(ω_0 t) u(t)  <->  (π/2)[δ(ω - ω_0) + δ(ω + ω_0)] + jω/(ω_0^2 - ω^2)
Sine (positive time):       sin(ω_0 t) u(t)  <->  -j(π/2)[δ(ω - ω_0) - δ(ω + ω_0)] + ω_0/(ω_0^2 - ω^2)
Rectangular pulse:          f(t) = 1, -T/2 <= t <= T/2  <->  T sin(ωT/2)/(ωT/2)
Triangular pulse:           f(t) = 1 - 2|t|/T, -T/2 <= t <= T/2  <->  (T/2)[sin(ωT/4)/(ωT/4)]^2
Gaussian pulse:             e^{-t^2/(2σ^2)}  <->  σ√(2π) e^{-σ^2 ω^2/2}
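One of the transform pairs above can be checked by evaluating the defining integral numerically. The Python sketch below (an illustrative choice, not part of the text) does this for the two-sided exponential pair; the value a = 1.5 and the grid limits are arbitrary illustration choices.

    import numpy as np

    a = 1.5                                    # example parameter value
    t = np.linspace(-60.0, 60.0, 200001)       # grid wide enough for e^{-a|t|} to vanish
    f = np.exp(-a * np.abs(t))

    for w in (0.0, 1.0, 2.5):
        F_num = np.trapz(f * np.exp(-1j * w * t), t)   # F(w) from the defining integral
        F_pair = 2.0 * a / (a ** 2 + w ** 2)           # the tabulated transform
        print(w, F_num.real, F_pair)                   # the two agree closely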

Bibliography

Abramowitz, M., and I. A. Stegun. Handbook of Mathematical Functions. New York: Dover, 1964.
Bratley, P., B. L. Fox, and L. E. Schrage. A Guide to Simulation. New York: Springer-Verlag, 1983.
Brown, R. G. Introduction to Random Signal Analysis and Kalman Filtering. New York: Wiley, 1983.
Carlson, A. B. Communication Systems: An Introduction to Signals and Noise in Electrical Communication, 3rd ed. New York: McGraw-Hill, 1986.
Cooper, G. R., and C. D. McGillem. Probabilistic Methods of Signal and System Analysis, 2nd ed. New York: Holt, Rinehart, and Winston, 1986.
Davenport, W. B., Jr. Probability and Random Processes: An Introduction for Applied Scientists and Engineers. New York: McGraw-Hill, 1970.
Gray, R. M., and L. D. Davisson. Random Processes: A Mathematical Approach for Engineers. Englewood Cliffs, New Jersey: Prentice-Hall, 1986.
Graybill, F. A. Theory and Application of the Linear Model. North Scituate, Massachusetts: Duxbury, 1976.
Helstrom, C. W. Probability and Stochastic Processes for Engineers. New York: Macmillan, 1984.
Hogg, R. V., and A. T. Craig. Introduction to Mathematical Statistics, 2nd ed. New York: Macmillan, 1965.
Larsen, R. J., and M. L. Marx. An Introduction to Probability and Its Applications. Englewood Cliffs, New Jersey: Prentice-Hall, 1985.
Maisel, H., and G. Gnugnoli. Simulation of Discrete Stochastic Systems. Chicago, Illinois: Science Research Associates, 1972.
Melsa, J. L., and A. P. Sage. An Introduction to Probability and Stochastic Processes. Englewood Cliffs, New Jersey: Prentice-Hall, 1973.
O'Flynn, M. Probabilities, Random Variables, and Random Processes. New York: Harper and Row, 1982.
Papoulis, A. Probability, Random Variables, and Stochastic Processes, 2nd ed. New York: McGraw-Hill, 1984.
Parzen, E. Stochastic Processes. San Francisco, California: Holden-Day, 1962.
Peebles, P. Z., Jr. Probability, Random Variables, and Random Signal Principles. New York: McGraw-Hill, 1980.
Peebles, P. Z. Communication System Principles. Reading, Massachusetts: Addison-Wesley, 1976.
Schaeffer, R. L., and J. T. McClave. Probability and Statistics for Engineers, 2nd ed. Boston: Duxbury, 1986.
Stark, H., and J. W. Woods. Probability, Random Processes, and Estimation Theory for Engineers. Englewood Cliffs, New Jersey: Prentice-Hall, 1986.
Thomas, J. B. Introduction to Probability. New York: Springer-Verlag, 1986.
Thomas, J. B. An Introduction to Statistical Communication Theory. New York: Wiley, 1969.
Trivedi, K. S. Probability and Statistics with Reliability, Queuing, and Computer Science Applications. Englewood Cliffs, New Jersey: Prentice-Hall, 1982.
Van Trees, H. L. Detection, Estimation, and Modulation Theory, Part I. New York: Wiley, 1968.
Walpole, R. E., and R. H. Myers. Probability and Statistics for Engineers and Scientists, 3rd ed. New York: Macmillan, 1985.
Wong, E. Introduction to Random Processes. New York: Springer-Verlag, 1983.
Wozencraft, J. M., and I. M. Jacobs. Principles of Communication Engineering. New York: Wiley, 1965.
Ziemer, R. E., and W. H. Tranter. Principles of Communications. Boston, Massachusetts: Houghton Mifflin, 1976.

Answers to Selected Problems

Chapter 1
{2,3,4,5,6,7,8,9,10,11,12} {WA, WB, WC, RA, RB, RC) An(BuC) = AnC (AuB)nC= flu(/lnC) CcAufl (A n C) u B = {3,4, 5,6,7,8}, ( A u C ) n B = {6, 8} 1.3.1. st - {S, 0 , {1}, {1,3,4}, {2,3,4}, {2}, {1,2}, {3,4}} 133. J* = {S,0,{1,3,5},{2,3,4,5,6}, {2,4,6}, {1}, {1,2,4,6}, {3,5}} L2.1. 1.2.3. 1.2.5. 1.2.7. 1.2.9. L2.ll. 1.4.5. P(w) =0.493 1.4.7. P(C| W) - 0.25, C and W are statistically independent, P(G) = 0.4 1.4,9. Given B decide A , Given B, decide A , Given B decide A P(e)=0.31 1.4.11. m - A a n d m,-* A,P(e) = 0.22, m - A, and m , - * ^ , P(e) = 0.18 (the minimum P ( e 1-4.13. P(comm) = 0.54 1.4.15. P(comm) = 0.618 M . I 9 . Series P, = 0.965, parallel P, = 0.536 1.4.21. P, = 0.652 1.4.23. Best assignment C - 3 , A-*l, B - 2 (or A-*2, B-*\), P(comm) = 0.846 1.4.25. P(A|B) = 0.571 1.4.27. P(B) = 0.75 1.4.29. P{A \B)^P(A )
0 0 0 2 lt 0 o 0 E C

L3.5. {P} = { 1 , 0 , U U M >


P(Aufl)sP(fluC) P[An(fluC)] <P(AnB) + P(AnC) 1.3.11. P(At least one H in 3 tosses) = .875 1.4.1. P ( l , u l ) = 0 . 4 3 8 , P ( l , u l u l ) = 0.578, P(no l's in 3 tosses) = 0.422 1.4J. P(at least two match) = 0.444
2 2 3

L3.7. L3.9.

Chapter 2
2.1.1. 2.1J. 2.1.5. 2.1.7. 2.1.9. 2.1.11. 2.1.13. 2.1.15. 2.1.17. 2.2.1. 2.2.3. 2.2.5. P(X < 3) = 0.875, P(X > 4) = 0.0625, F (j) = 1-(0.5)', j = 1,2,... P ( K X s 2 ) = 0.233, P(X> 3) =0.0498 a-1, 6= - l f {x)-e-*u(x) p = 0.5 f (x) = e~ *u(x)H6(x) + $8(x-l) 0 < x < l , / ( x ) = x/2, x = l , / ( x ) = i8(x-\); Kxs3,/ (x)-i o=0.133, P ( - 0 . 5 < . Y s 0.5) =0.533 P[(X-3) >4]= F (l) + l-F (S) P ( K A " <U0)=1-Q(0.25) -Q(2) =0.5759 fi=7, a = 9 o=5
x x 2 x x x x z x X 2

2.2.7. 2J.L 2.3.3.

2.3.5. 2.4.1. 2.4.3.

A = fj. +1.282<r, B = fj, + 0.524tr, C = n-0.524<r, D = p- 1.282tr F (x) = {e ,-co<x<0; F {x) = l-e- ,0sxx> P(0 bit errors in block) = 0.478, P( 1 bit error in block) = 0.372, P{biock error) =0.150 a = 1.609, P(0,1, or 2 photons) = 0.781 f (x,y) = 2e-*e- >; x ^ O . y a O , P ( l < X^3, l=sy==2) = 0.0372 / (x)= 2x,0<x<l;/ (y) = 0.75(y +l), O s y s l ; F (x) = x , O s j t s l ; f ( y ) = 0.25(y + 3y), 0 ^ ysl; X and Y are statistically independent
c c c c x x x x 2 xv I y 2 2 x 3 y


296 2.4.5. 2.4.7. 2.4.9.

Answers to Selected Problems 1


2.53. / (JC) =
X

Answers to Selected Problems 4J.11. E(X) = Np, F(X ) = N(N- l) + Var(A-) = 7V>(l-/))
2 2 P


P(X + V>1) = 0.9583 P(X < 2 Y +1) = 0.9028 P(19Osi? s210) = 0.5, P(190: Ri + R ^ 210) = 0.75 2.4.11. P ( 2 8 5 ^ s 3 1 5 ) = 0.5, P(285= R, + J2jS315) = 0.719 2.4.13. P(failure< 30) = 0.268 2.4.15. T= 14.82 months
o 2 o

4A3. Np,

fx,x (x ,x )
1 l 2

= 2xf-6x x + 5x \
i 2 z

V/2TT(10)

2(10)/

1
a b + a

fy\xiy\*)-~]=== V2ff{6.4)
X

4.2.13. E(X) = ^

, E(X ) =
b

2+

4.4.5.

-P^

" l276^TJ
P y

y . r ( x j
43.1. 433. Var(V) = 53

12

^
(b + a)< b -a>
,+2

4^.7.
4.4.9.

2A1.

/x(x)'

1 V2ir(5) 72(0.2) 2(5) / (>-0-6x) \ exp


2

2.5.5. F ( l | X s i ) = 0.3161

E[<X+V)'] =

4.5.1. 433.
4.6.1.

P(X> y ) = p ( V 2 ) = 0.07868 P(X> Y) = Q(2) =0.02275 TV = 10 P(e) = 0.000266, P (e) = 0.001374


H p

2(0.2) /

Chapter 3
3.1.1. fy(y) = 2y 0<y&\ 3.1.3. f (y) = 2e- ,Q*yx> 3.1.5. x = ln(2u), 0 s u < 0 . 5 ; x = -lnT2(l-u)j, 0 . 5 ^ u ^ l 3.1.7. x = V 3 u 0 s u < $ ; x = (3u + l)/2, i=sl 3.1.9. x = / ( l - u ) , 0 s ; u s l 3.1.11. / ( Z ) = 5 X 10" , 900s 2 -s 1100 3.2.1. f ,Y (y ,yz)=y2e'\osy.-si, 0 s y < oo 3.2.3. / W > W 2 ) = 0y,<oo, 0==.y <oo
t 2), Y 3 Z l Y i l 2 2

ab(i+ l)(i + 2) 433. Cov(X, 10 = 0 , 0 = 0 , X and Y arc uncorrelated 43.7. Cov(X, Y) = a Var(X), p = l, a>0; P = - 1 , o < 0 ; p=*0, a = 0 4.3.9. o = 0.5, 6 = 0.866 4J.I1. a = - 6 , 6 = 3; o = 6. 6 = -3 3.4.3. z < l , / ( 2 ) = 0 . 5 ( e - ' - - ) ; Irs z < 2 , / ( z ) = l-0.5(e*- +*-<*- >) za2,/ (z) = 0.5( - -"-e- -) 3.4.5. r < 0 , / ( r ) - 0 , zs=0;/ (z) = 0. 5z -*
z 2 l z ( I ( I z e z z 2 I 1 2

[|j*7,?i*'~ |
p

a e

s 0 0 5

Chebyshev inequality, ==8.48 x 10~ Chernofl bound

4.6.3.

[|"^ ?i
(

'~ |

'

43.13.

P(Y=k)=kl

Chebyshev inequality, sO.0024863 Chernofl bound


2 3

ft = 0,1,2,..., a =fl,+ a + a 4.3.15. P(V = 7) = 0.138


= [17] ^ = [-3 3 ]
7 ;

4.7.1. 4.7.3,
P

[ 17* A ~ I ]"
X L

x>

p AE

7 - 7 5X 10-6

4.4.1.
3.4.7. / ( z ) = [
z

f (x)f (x-z)dx
x Y

-co
,: 6 fc

3A9.

3.3.1. f (y)=-~e-^-^\l 2-J2iry Os=y x>


Y

+ e-^l

3A1.

z<0,/ (z) = 0;0sz<l,/ (2) = l-e- ;ral,/ (z) = - --ez z I ( , z

P ( Z - f t ) = Ct(0.3) (0.7) - ,fc = 0 1, 2, 3, 4, 5, 6 3.4.11. P(Z = 3) = 0.195 34.13. P(285^Z*315) = 0.859 3.5.1. f (y) = 3y 0*y*l,P(Y*0.75) 0.578 333. f (z) = 0.06e~ \ z a 0 , P ( Z s 30) = 0.835
2 Y t 006 z

-CM a

[[KF

~ \" '

6 | X I 0

"

Chapter 5
5.1.1. x, = 0,1,2,3,4, 5, 1 = 0 , 1 , . . . , 5, /,(x) = 0.632, 0.233; 0.085, 0.032, 0.011, i = l , 2 , . . . , 5 5.1.3. x, = 0, 0.6, 1.2, 1.8, 2.4, 3.0, 3.6, 4.2, i = 0,l,....7 /,(x) = 0.752, 0.413, 0.226, 0.124, 0.068, 0.037, 0.021, i - 1, 2,.... 7 5.1.1. IX = I X * 277, IF(IX.LT.0)1X = (IX + 32767) +1, U = IX*.30518xlO-' $.23. IM = IX/118, IX = 277*(IX - IM* 118) - IM * 81, IF(IX.LT.0)IX = IX + 32767, U = I X * 0.30519 X l 0 53.1. x, = -2.4, -1.221, -0.764, -0.430, -0.139, 0.139, 0.430, 0.764, 1.221, 2.4, i = 0 , l , . . . , 9 /,(x) = 0.085, 0.201, 0.349, 0.401, 0.420, 0.430, 0.299, 0.328, 0.071, i = 1,2,... ,9
- 4

Chapter 4
4.1.1. H(X}
(

= , E(X )

4.1.15. E ( C ) = 0.1 \iF, (Z )=200.67n


c

"

+ 2 ) (

r ,Var(X)=^
2

1 )

4.2.1. 423.
4.23.

d*t.

jv(b~a)

4.1.3. E(X) = a,E{X ) Var(X) = a

= a +a,

t x M - Y ^ i
<M<) = ( ; ' "
I

4.1.5. No
4.1.7. o = 4, & = 9 4.1.9. c = E ( X ) 4.1.11. E(K)=100, ( G ) = 0.01003 4.1.13. (L) = l m H , E ( Z J = 10n

+ I-P>"
2

4.2.7.

4.2.9.

E(X) = (it + t)b, ( X ) - ( f i + l)(fl + 2)& , VaKX) = ( + l ) 6 E(X) = 6 , E ( X ) = 2b . Var(X).


2 1 2

5.3.3. x, = -2.4, -0.842, -0.253, 0.253, 0.842, 2.4, ( = 0,1,. . . , 5 /,(x) = 0.112, 0.325, 0.511, 0.297, 0.128, i = l , 2 , . . . , 5 5.3.5. x, = -2.4, -1.44, -0.48, 0.48, 1.44, 2.4, i - 0 . 1 , . . . . 5 . / , ( x ) - 0 . 0 7 8 , 0.208, 0.417, 0.286, 0.052, i = 1,2,.... 5 5.4.1. 7 = 3.450, P t C ( 9 ) a T ] = 0.903 5.4.3. 7 = 2.833, P[C (5)a: 7 ] = 0.586 5A3. 7 = 2.721, P [ C ( 5 ) = > 7 ] = 0.605 SAL Confidence = 0.731 5A3. N 149600 5.5A p = 0.0333, p = 0.0333 5.5.7. p = 2 . 1 4 x l 0 5.6.1. Y = 3X,, Y m -2X, + X , Y = -X + 2X + X 5.6.3. r , = 4 X + 8, y = 2 X , + 3 X + 7, " Y = 5X + 2X + X + 6
1 I O 120 llq -3 l 2 2 3 l 2 3 1 2 2 3 l 2 3

5.6A

X , = Yi, Xj = -2y,+Jr ,
2

X = 3 V , - Y + lY
3 2

298

Answers to Selected Problems

Chapter 6
6.2.1. 9 = T. X 0 is unbiased, N i-i Cramer-Rao bound = , N $ is efficient 6.2.13. M= I Y) / N<T , M is unbiased,
X 2 2

Index
e
2

Cramer-Rao bound = 2 M / N , M is efficient 6.2.15. Cramer-Rao bounds 2(N-1) <T /N2 4

6.3.1.

b* biased ^ , .
X (

'

) ,(a)-^1 ,
a

6.3.3. *w-

+ N-l

a biased 6.2.7. 0 = T X?, 0 is unbiased,


2N

i-i

6.2.9.

* Cramer-Rao bound = , 0 is N efficient 20 Var(0)= , Carmer-Rao bound =


2

6.3.5.

n = 2 , 3 , . . . , #o = * i - 0 = 0.236, -0.550, -0.608, -0.370, -0.435, -0.351, -0.279, -0.344, -0.401, -0.511 V = 0, 0.619, 0.419, 0.485, 0.406, 0.374, 0.352, 0.337, 0.326, 0.403 6.3.7. JV = 799
N N

A Algebra, 12 A posteriori density function, 209, 229 A posteriori probability, 25 A priori density function, 209 A priori probability, 25 Associative property, 9 Asymptotically efficient estimator, 219 Autocorrelation function definition, 246 properties, 253 time, 251 Average value, 126 Axioms of probability, 14 B Bayes estimate, 208 Bayes' rule, 25 Bernoulli random process, 239 Bernoulli random variable, 63 Beta random variable, 210 Binomial random process, 239 Binomial random variable, 63 BoEtzmann constant, 258 Borel field, 12 Box-Muller method, 103, 187, 201

2fl; N

0 is efficient is biased

6.4.1.
64.3.

fl = ( N + l ) / ( l + I X . ) 0 = ( a + I X^jf(mN (

6.2.11. p=N / X()#p,

+ a + b)

Chapter 7
7.2.1. E[X(/)] = 0,[X(t,)X(( )]=
2

7.3.5.

E[X(t)}
s

=0, E [ X ( f ) ] = 24, R (r) =


x

R (T)
X

= O- COS(W T)-OO<T<OO;
0

in(87TT)
g

7.23.

7.2.5. 7.2.7. 7.2.9. 7.2.11.

7.3.1. 7.3.3.

X(f) is wide sense stationary [ y ( ( ) ] = 0, E t V t f i j y t ^ ) ] ' * M r H 0 . 5 i , ( l ) ^ ) % < T<OO; V(() is wide sense stationary R ( T ) = 2R (T)+R (T +D + K (T-3) E [ X ( f ) ] = 3.162, E[X (f)3 = 85, Var[X(t)] = 75 E [ X ( t ) ] = 1.732, E[X (()3 = 5, Var[X(()] = 2 R (r)-7,-1,-1,-1,-1,-1,-1, T = 0, 1,2,3,4, 5,6 sin(4ir/) S(f)=4 ' / , -oo</<oo [ X ( r ) ] = 0, [ X ( 0 ] - 12.5, R ( T ) = 12.5e" , -OO<T<OO
y x x x 2 2 x 2 04|Ti
I

24
7 3 7

<

<

I4

for expectation, 166 for integration, 217 Central limit theorem, 55, 173 Central moments, 131 Characteristic function, 135 Chebyshev's inequality, 167 Chernoff bound, 169 Chi-square percentage points, 194 Chi-square random variable, 193 Chi-square test, 193 Cholesky's method, 202 Cofactor, 283 Combinations, 63 Combined experiment, 35 Commutative property, 9 Complement, 9 Conditional probability, 20 Conditional probability density function, 80 Conditional probability distribution function, 78 Confidence, 196 Consistent estimator, 213 Continuous random process, 242 Continuous random variable, 44 Convolution of continuous functions, 109 of discrete functions, 114 Correlated random variables, 200 Correlation, 144 Correlation coefficient, 146, 167 Counting random variable, see Poisson random variable 299

' * - y-0.399M,v(10MH.),V-0.891 v (50MHz) 74.1. SJf)=^ ' . -co</<co; l+<W) R m=3-l ' - O O < T < C O 4 ' ,, . _ - I H _co<
a 2 a T A 1 A X ( ( = R ( t ) e T < c c

7.4.5.

K(T)-3,-1,-1,-5,3,3,-I.T-0, 1,2,3,4,5,6

C Cauchy random variable, 62 Cauchy-Schwarz inequality


Index Fortran, 186, 188 Fourier transform, 135, 255 Functions of random variables, see Transfromation K Kalman estimator, 234 N

Index


Covariance, 143 Covariance function, 247 Covariance matrix, 151 Cramer-Rao bound, 215 Cross-correlation function definition, 263 properties, 265 time, 265 Cross-covariance function, 265 Cross-spectral density, 264 Cumulative distribution funtion, see Probability distribution function

G
Gamma random variable, 62 Gauss-Jordan method, 285 Gaussian random process, 249 Gaussian random variable density function, 54 distribution function, 55 fourth moment, 225 joint, 75 mean, 54,127 third moment, 225 variance, 54, 133 Gaussian random vector, 151 General generator, 187 Geometric random variable, 65

D Delta function, see Impulse function De Morgan's properties, 9 Density function, see Probability density function Disjoint sets, see Mutually exclusive sets Discrete random process, 242 Discrete random variable, 44 Discrete uniform random variable, 65 Distribution function, see Probability distribution function Distributive property, 9

Law of large numbers 169 Leibnitz's rule, 91 Likelihood function, 208 Linear multiplicative congruential generator, 185, 189 Linear system output autocorrelation, 260 output mean function, 260 output mean square value, 246 output spectra) density, 261

Noise Gaussian, 157 thermal, 258 white, 258 Noise random process, 241 Normal random variable, see Gaussian random variable Null set, 5, 10

O Orthogonal, 146 Outcome, 2

P
M Marginal density function, 71 Marginal distribution function, 70 Marginal probability function, 72 Matrix covariance, 151 inverse, 284 product, 282 sum, 281 transpose, 280 Maximum a posteriori estimation, 231 Maximum likelihood estimation, 210 Mean estimator of, 212 sample, 212 Mean function definition, 247 time, 251 Mean square error, 231 Mean square value, 131 Mean vector, 151 Minimum variance estimator, see Efficient estimator Minor, 283 Mixed random variable, 66 Modulus, 185 Moment, see Expected value Moment generating function, 136 Monotonic function, 46, 90 Monte Carlo integration, 198 Mutually exclusive sets, 14 Multinomial probability function, 165 Photomultiplier, 66 Planck constant, 258 Point conditioning, 79 Poisson random process, 240 Poisson random variable, 66 Polar coordinates, 55, 102 Positive definite, 286 Positive semidefinite, 286 Power spectral density, 256 Probability a posteriori, 25 a priori, 25 axioms, 14 of complement, 13 conditional, 20 of error, 27 intersection, 17 total, 23 transition, 25 union, 17 Probability density function, 49 Probability distribution function, 45 Probability function, 52 Probability generating function, 140 Probability mass function, see Probability function Probability space, 13 Pseudorandom numbers, 184

H
Hard decision, 163 Histrogram equally spaced intervals, 181 equal probability intervals, 182 empirical, 188 theoretical, 180

E Efficient estimator, 215 Element, 5 Empty set, see Null set Ensemble, 238, 239 Ergodic random process, 251 Estimation, 207 Exponential bound, see Chernoff bound Exponential random variable, 61 Expected value definition, 126 of the product, 142 of the sum, 142

I Importance sampling, 199 Impulse function, 50 Impulse response, 109, 148, 246 Independence, see Statistical independence Information word, 64 Integral transform method, 94 Intersection, 9

F Factorial, 63 Factorial moment, 134 Filter Ideal lowpass, 261 RC lowpass, 262

J
Jacobian, 100 Joint density function, 71 Joint distribution function, 68 Joint probability function, 72

Q
Q function, 56


Index Step function, 47 Strict sense stationary, 243 Subset, 7 T Thermal noise, 258 Time average, 251, 265 Total probability, 23 Transfer function, 148, 261 Transformation division, 118 linear, 92 maximum, 119, 122 minimum, 120, 122 non-one-to-one, 105 one-to-one, 92 product, 116 sum, 108 vector, 99 Transition probability, 25 Triangular matrix, 201 U Unbiased estimator, 214 Uncorrelated, 144 Uniform random variable: definition, 52 sum of, 113 Union, 7 Union bound, 19 Unit impulse function, see Impulse function Unit step function, see Step function Universal set, see Sample space V Variance definition, 131 of the sum, 143 Venn diagram, 7

Quantized value, 163 Quantum mechanics, 258

R Random phase angle, 45 Random process continuous, 242 discrete, 242 Random telegraph signal, 240 Random variable continuous, 44 discrete, 44 Random vector, 68 Rendu, 186 Rank, 286 Rayleigh random variable, 61 Rectangular coordinates, 55, 102 Relative frequenty, 5, 14 S Sample function, 238 Sample space, 5 Sequential estimation, 226 Set, 5 Set theory, 5 Set properties associative, 9 commutative, 9 De Morgan's, 9 distributive, 9 intersection, 9 union, 8 Set relationships complement, 9 intersection, 9 union, 7 Sigma (a) algebra, 12 Signal-to-noise ratio, 159 Simulation, 180, 196 Sine wave random process, 241 Soft decision, 163 Square root method, see Cholesky's method Spectral density, see Power spectral density Standard deviation, 54 Stationary strict sense, 243 wide sense, 248 Statistical independence, 20, 28 Statistical inference, 207

W
White noise, 258 Wide sense stationary, 248
Z

Z transform, 140
