UNIT-1: PROBABILITY AND RANDOM VARIABLES

Introduction: Basic to the study of probability is the idea of a physical experiment. A single performance of the experiment is called a trial, for which there is an outcome. Probability can be defined in three ways: the classical definition, the definition from set theory and axioms, and the definition based on relative frequency.

Experiment: Any physical action can be considered an experiment. Tossing a coin, throwing or rolling a die, and drawing a card from a deck of 52 cards are examples of experiments.

Sample Space: The set of all possible outcomes of an experiment is called the sample space, and it is represented by the letter S. The sample space is a universal set for the experiment. A sample space can be of four types:
1. Discrete and finite sample space.
2. Discrete and infinite sample space.
3. Continuous and finite sample space.
4. Continuous and infinite sample space.
Tossing a coin or throwing a die gives a discrete finite sample space. Choosing a positive integer at random gives a discrete infinite sample space. The reading obtained from a spinning pointer is an example of a continuous finite sample space. Prediction or analysis of a random signal is an example of a continuous infinite sample space.

Events: An event is defined as a subset of the sample space. Events are represented with capital letters such as A, B, C, etc. All the definitions and operations applicable to sets also apply to events. As with sample spaces, events may be either discrete or continuous, and in either case finite or infinite. If there are N elements in the sample space of an experiment, then there exist 2^N possible events. An event describes a specific characteristic of the experiment, whereas the sample space describes all of its characteristics.

Classical Definition: In the classical definition, probability is the ratio of the number of favorable outcomes to the total number of possible outcomes of an experiment. Mathematically, P(A) = F/T, where P(A) is the probability of event A, F is the number of favorable outcomes and T is the total number of possible outcomes. The classical definition fails when the total number of outcomes becomes infinite.

Definition from Sets and Axioms: In the axiomatic definition, the probability P(A) of an event is a non-negative real number that satisfies the following three axioms.
Axiom 1: P(A) ≥ 0, which means that the probability of an event is always a non-negative number.
Axiom 2: P(S) = 1, which means that the probability of the sample space, consisting of all possible outcomes of the experiment, is always unity.
Axiom 3: For mutually exclusive events, P(⋃_{n=1}^{N} A_n) = Σ_{n=1}^{N} P(A_n), i.e. P(A_1 ∪ A_2 ∪ … ∪ A_N) = P(A_1) + P(A_2) + … + P(A_N). This means that the probability of the union of N mutually exclusive events is the sum of the individual probabilities of those N events.

Probability as a relative frequency: Common sense together with engineering and scientific observation leads to a definition of probability as the relative frequency of occurrence of an event. Suppose a random experiment is repeated n times and the event A occurs n(A) times; then the probability of event A is defined as the limiting relative frequency of event A as the number of trials n tends to infinity. Mathematically, P(A) = lim_{n→∞} n(A)/n, where n(A)/n is called the relative frequency of event A.
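To make the relative-frequency definition concrete, here is a minimal Python sketch; the die-throwing experiment and the event "an even face shows" are illustrative assumptions, not part of the notes. It estimates P(A) as n(A)/n and shows the estimate approaching the classical value F/T = 3/6 = 0.5 as n grows.

```python
import random

def relative_frequency(n_trials, seed=1):
    """Estimate P(A) as n(A)/n for the assumed event A = 'the die shows an even number'."""
    rng = random.Random(seed)
    count_a = 0
    for _ in range(n_trials):
        outcome = rng.randint(1, 6)   # one trial of the die-throwing experiment
        if outcome % 2 == 0:          # favorable outcome for event A
            count_a += 1
    return count_a / n_trials         # relative frequency n(A)/n

# As n grows, the relative frequency approaches the classical value F/T = 3/6 = 0.5.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```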
Mathematical Model of Experiments: A mathematical model of an experiment can be derived from the axioms of probability introduced above. For a given real experiment with a set of possible outcomes, the mathematical model is built using the following steps:
1. Define a sample space to represent the physical outcomes.
2. Define events to mathematically represent characteristics of favorable outcomes.
3. Assign probabilities to the defined events such that the axioms are satisfied.

Joint Probability: If a sample space contains two events A and B that are not mutually exclusive, then the probability of these events occurring jointly (simultaneously) is called the joint probability. In other words, the joint probability of events A and B equals the relative frequency of their joint occurrence. If the experiment is repeated n times and the joint occurrence of A and B happens n(A∩B) times, then the joint probability of A and B is
P(A ∩ B) = lim_{n→∞} n(A∩B)/n.
Since P(A ∩ B) = P(A) + P(B) − P(A ∪ B), it follows that P(A ∪ B) = P(A) + P(B) − P(A ∩ B). Also, since P(A ∩ B) ≥ 0, we have P(A ∪ B) ≤ P(A) + P(B), i.e. the probability of the union of two events is always less than or equal to the sum of the individual event probabilities.

Conditional Probability: If an experiment is repeated n times, the sample space contains two events A and B, event A occurs n(A) times, event B occurs n(B) times, and the joint event of A and B occurs n(A∩B) times, then the conditional probability of event A given event B is the relative frequency of the joint occurrence n(A∩B) with respect to n(B) as n tends to infinity. Mathematically,
P(A|B) = P(A ∩ B)/P(B), provided P(B) ≠ 0.
That is, the conditional probability P(A|B) is the probability of event A occurring given that event B has occurred. Similarly, the conditional probability of occurrence of B given A is
P(B|A) = P(A ∩ B)/P(A), provided P(A) ≠ 0.
From the conditional probabilities, the joint probability of events A and B can be expressed as
P(A ∩ B) = P(A|B) P(B) = P(B|A) P(A).

Total Probability Theorem: Consider a sample space S that has N mutually exclusive events B_n, n = 1, 2, …, N, such that B_m ∩ B_n = ∅ for m ≠ n. The probability of any event A defined on this sample space can be expressed in terms of the conditional probabilities given the events B_n. This is known as the total probability of event A. Mathematically,
P(A) = Σ_{n=1}^{N} P(A|B_n) P(B_n).
Proof: The sample space S of N mutually exclusive events B_n, n = 1, 2, …, N, is shown in the figure. The events have the properties B_m ∩ B_n = ∅ for m ≠ n and ⋃_{n=1}^{N} B_n = S, i.e. B_1 ∪ B_2 ∪ … ∪ B_N = S. Let an event A be defined on the sample space S. Since A is a subset of S, A ∩ S = A, so
A = A ∩ S = A ∩ [⋃_{n=1}^{N} B_n] = ⋃_{n=1}^{N} (A ∩ B_n).
Applying probability, P(A) = P[⋃_{n=1}^{N} (A ∩ B_n)]. Since the events A ∩ B_n are mutually exclusive, applying Axiom 3 of probability gives
P(A) = Σ_{n=1}^{N} P(A ∩ B_n).
From the definition of joint probability, P(A ∩ B_n) = P(A|B_n) P(B_n). Therefore
P(A) = Σ_{n=1}^{N} P(A|B_n) P(B_n).

Bayes' Theorem: It states that if a sample space S has N mutually exclusive events B_n, n = 1, 2, …, N, such that B_m ∩ B_n = ∅ for m ≠ n, and any event A is defined on this sample space, then the conditional probability of B_n given A can be expressed as
P(B_n|A) = P(A|B_n) P(B_n) / Σ_{k=1}^{N} P(A|B_k) P(B_k).
Proof: This can be proved from the total probability theorem and the definition of conditional probability.
We know that the conditional probability is P(B_n|A) = P(B_n ∩ A)/P(A), P(A) ≠ 0, and also P(B_n ∩ A) = P(A|B_n) P(B_n). From the total probability theorem, P(A) = Σ_{k=1}^{N} P(A|B_k) P(B_k). Therefore
P(B_n|A) = P(A|B_n) P(B_n) / Σ_{k=1}^{N} P(A|B_k) P(B_k). Hence proved.

Independent Events: Consider two events A and B in a sample space S having non-zero probabilities. If the probability of occurrence of one of the events is not affected by the occurrence of the other event, then the events are said to be independent events. Mathematically, P(A ∩ B) = P(A) P(B), for P(A) ≠ 0 and P(B) ≠ 0. If A and B are two independent events, then the conditional probabilities become P(A|B) = P(A) and P(B|A) = P(B); that is, the occurrence of one event does not depend on the occurrence of the other. Similarly, the necessary and sufficient conditions for three events A, B and C to be independent are:
P(A ∩ B) = P(A) P(B), P(A ∩ C) = P(A) P(C), P(B ∩ C) = P(B) P(C), and P(A ∩ B ∩ C) = P(A) P(B) P(C).

Multiplication Theorem of Probability: The multiplication theorem can be used to find the probability of outcomes when an experiment is performed on more than one event. It states that if there are N events A_n, n = 1, 2, …, N, in a given sample space, then the joint probability of all the events can be expressed as
P(A_1 ∩ A_2 ∩ … ∩ A_N) = P(A_1) P(A_2|A_1) P(A_3|A_1 ∩ A_2) … P(A_N|A_1 ∩ A_2 ∩ … ∩ A_{N−1}),
and if all the events are independent, then
P(A_1 ∩ A_2 ∩ … ∩ A_N) = P(A_1) P(A_2) P(A_3) … P(A_N).

Permutations & Combinations: An ordered arrangement of events is called a permutation. If there are n events in an experiment, we can choose and list them in order under two conditions: with replacement and without replacement. In the first condition, the first event is chosen in any of n ways; its outcome is then replaced in the set, and another event is chosen from all n events, so the second event can again be chosen in n ways. For choosing r events in succession, the number of ways is n^r. In the second condition, after choosing the first event in any of n ways, the outcome is not replaced in the set, so the second event can be chosen in only (n−1) ways, the third in (n−2) ways, and the r-th event in (n−r+1) ways. Thus the total number of orderings is n(n−1)(n−2)…(n−r+1). Mathematically, nPr = n!/(n−r)!, where r is the number of events being ordered.

Introduction (Random Variables): A random variable is a function of the events of a given sample space S. Thus, for a given experiment defined by a sample space S with elements s, the random variable is a function of s and is represented as X(s) or simply X. A random variable X can be considered a function that maps all events of the sample space onto points on the real axis. Typical random variables are the number of hits in a shooting game, the number of heads when tossing coins, and temperature or pressure variations of a physical system. For example, let an experiment consist of tossing two coins, and let the random variable X be the number of heads shown. Then X maps the event "no head" to zero, the event "exactly one head" to one, and the event "both heads" to two. Therefore the random variable takes the values X = {0, 1, 2}, with elements x1 = 0, x2 = 1 and x3 = 2.

Conditions for a Function to be a Random Variable: The following conditions are required for a function to be a random variable.
1. Every point in the sample space must correspond to only one value of the random variable, i.e.
it must be single-valued.
2. The set {X ≤ x} shall be an event for any real number x. The probability of this event is equal to the sum of the probabilities of all the elementary events corresponding to {X ≤ x}.
3. The probabilities of the events {X = ∞} and {X = −∞} shall be zero.

1. Gaussian Function: The Gaussian probability density function of a continuous random variable X is defined as
fX(x) = (1/√(2πσX²)) exp[−(x − aX)²/(2σX²)], −∞ < x < ∞,
where σX > 0 and −∞ < aX < ∞ are constants called the standard deviation and mean value of X respectively. The Gaussian density function is also called the normal density function.
[Figure: Gaussian density and distribution plots.]
The plot of the Gaussian density function is bell shaped and symmetrical about its mean value aX. The total area under the density function is one. The maximum value of fX(x) is 1/√(2πσX²) and it occurs at x = aX. The function decreases to 0.607 times its maximum value at x = aX ± σX. σX signifies the spread of the function, and σX² is the variance.
Applications: The Gaussian probability density function is the most important density function in science and engineering. It gives accurate descriptions of many practical random quantities. In particular, in electronics and communication systems the distribution of a noise signal closely matches the Gaussian probability function, and knowing its behavior through the Gaussian probability density function makes it possible to mitigate the noise.

2. Uniform Function: The uniform probability density function is defined as
fX(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise,
where a and b are real constants, −∞ < a < ∞ and b > a. The uniform distribution function is Fx(x) = ∫_{−∞}^{x} fX(u) du, giving
Fx(x) = 0 for x < a, (x − a)/(b − a) for a ≤ x ≤ b, and 1 for x > b.
[Figure: Uniform density and distribution plots.]
The uniform probability density function has constant amplitude over the given range, and the total area under the function is always one.
Applications: 1. The random errors introduced in the round-off process are uniformly distributed. 2. Rounding off samples in digital communications.

3. Exponential Function: The exponential probability density function of a continuous random variable X is defined as
fX(x) = (1/b) e^{−(x−a)/b} for x ≥ a, and 0 for x < a,
where a and b are real constants with b > 0. The distribution function is Fx(x) = ∫_{−∞}^{x} fX(u) du, giving
Fx(x) = 1 − e^{−(x−a)/b} for x ≥ a, and 0 for x < a.

4. Rayleigh Function: The Rayleigh probability density function of a continuous random variable X is defined as
fX(x) = (2/b)(x − a) e^{−(x−a)²/b} for x ≥ a, and 0 for x < a,
where a and b > 0 are real constants. The distribution function is Fx(x) = ∫_{−∞}^{x} fX(u) du; with the substitution y = (u − a)²/b this becomes Fx(x) = ∫ e^{−y} dy = −e^{−y}, giving
Fx(x) = 1 − e^{−(x−a)²/b} for x ≥ a, and 0 for x < a.

Poisson Function: The Poisson probability density function of a discrete random variable X is defined as
fX(x) = e^{−b} Σ_{k=0}^{∞} (b^k/k!) δ(x − k),
where b > 0 is a real constant, and the distribution function is
Fx(x) = e^{−b} Σ_{k=0}^{∞} (b^k/k!) u(x − k).
The Poisson distribution is the limiting form of the binomial distribution when N → ∞ and p → 0, with the constant b = Np. Poisson density and distribution plots are similar to the binomial density and distribution plots.
Applications: It is mostly applied to counting-type problems. It describes: 1. The number of telephone calls made during a period of time. 2. The number of defective elements in a given sample. 3. The number of electrons emitted from a cathode in a time interval. 4. The number of items waiting in a queue, etc.

Conditional Distribution Function: Let A and B be two events. If A is the event {X ≤ x} for a random variable X, then the conditional distribution function of X given that the event B has occurred is denoted Fx(x|B) and defined as
Fx(x|B) = P{X ≤ x | B} = P{(X ≤ x) ∩ B}/P(B), P(B) ≠ 0.
The conditional density function of X is the derivative of the conditional distribution function, fX(x|B) = dFx(x|B)/dx. For a discrete random variable X the conditional distribution and density functions can be written as
Fx(x|B) = Σ_n P(x_n|B) u(x − x_n) and fX(x|B) = Σ_n P(x_n|B) δ(x − x_n).
Similarly, for a random variable Y the conditional distribution function is Fy(y|B) = Σ_n P(y_n|B) u(y − y_n) and the conditional density function is fY(y|B) = Σ_n P(y_n|B) δ(y − y_n).

Interval Conditioning: Consider the event B defined on the interval y1 < Y ≤ y2 for the random variable Y, i.e. B = {y1 < Y ≤ y2}. Assume that P(B) = P(y1 < Y ≤ y2) ≠ 0; then the conditional distribution function of X is given by
Fx(x | y1 < Y ≤ y2) = [∫_{y1}^{y2} ∫_{−∞}^{x} fX,Y(u, v) du dv] / [∫_{y1}^{y2} fY(v) dv].
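The claims made above about the Gaussian density — unit area, peak value 1/√(2πσX²) at x = aX, and the drop to about 0.607 of the peak at x = aX ± σX — can be checked numerically. The following Python sketch uses the illustrative values aX = 2 and σX = 1.5 (assumptions for this example only).

```python
import math

def gaussian_pdf(x, a_x, sigma_x):
    """Gaussian density f_X(x) with mean a_x and standard deviation sigma_x."""
    return math.exp(-(x - a_x) ** 2 / (2 * sigma_x ** 2)) / math.sqrt(2 * math.pi * sigma_x ** 2)

a_x, sigma_x = 2.0, 1.5                 # illustrative mean and standard deviation

# Total area under the density (rectangle sum over a wide interval) should be ~1.
dx = 0.01
xs = [a_x - 10 * sigma_x + k * dx for k in range(int(20 * sigma_x / dx) + 1)]
area = sum(dx * gaussian_pdf(x, a_x, sigma_x) for x in xs)

peak = gaussian_pdf(a_x, a_x, sigma_x)                     # maximum value, at x = a_x
ratio = gaussian_pdf(a_x + sigma_x, a_x, sigma_x) / peak   # should be exp(-1/2) ~ 0.607

print("area under f_X(x)            :", round(area, 4))
print("peak value 1/sqrt(2*pi*s^2)  :", round(peak, 4))
print("f_X(a_x + sigma_x) / peak    :", round(ratio, 4))
```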

… σXn² > B1 > 0 and E[|Xn − X̄n|³] < B2, where B1 and B2 are positive numbers.

Introduction: In this part of the unit we consider the concepts of expectation, such as mean, variance, moments, the characteristic function and the moment generating function, for multiple random variables. We are already familiar with the same operations on a single random variable, and that material serves as the basis for the topics on multiple random variables.

Function of Joint Random Variables: If g(x,y) is a function of two random variables X and Y with joint density function fX,Y(x,y), then the expected value of the function g(x,y) is given as
ḡ = E[g(X,Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x,y) fX,Y(x,y) dx dy.
Similarly, for N random variables X1, X2, …, XN with joint density function fX1,X2,…,XN(x1, x2, …, xN), the expected value of the function g(x1, x2, …, xN) is given as
ḡ = E[g(X1, X2, …, XN)] = ∫…∫ g(x1, x2, …, xN) fX1,…,XN(x1, …, xN) dx1 … dxN.

Joint Moments about the Origin: The joint moment about the origin of two random variables X and Y is the expected value of the function g(X,Y) = X^n Y^k and is denoted m_nk. Mathematically,
m_nk = E[X^n Y^k] = ∫∫ x^n y^k fX,Y(x,y) dx dy,
where n and k are positive integers. The sum n + k is called the order of the moment. If k = 0, then m_n0 = E[X^n] are the moments of X; if n = 0, then m_0k = E[Y^k] are the moments of Y. The first-order moments are m_10 = E[X] = X̄ = ∫∫ x fX,Y(x,y) dx dy and m_01 = E[Y] = Ȳ = ∫∫ y fX,Y(x,y) dx dy. The second-order moments are m_20 = E[X²], m_02 = E[Y²] and m_11 = E[XY]. For N random variables X1, X2, …, XN, the joint moments about the origin are defined as
m_{n1 n2 … nN} = E[X1^{n1} X2^{n2} … XN^{nN}] = ∫…∫ x1^{n1} … xN^{nN} fX1,…,XN(x1, …, xN) dx1 … dxN,
where n1, n2, …, nN are all positive integers.

Correlation: Consider the two random variables X and Y. The second-order joint moment m_11 is called the correlation of X and Y and is denoted RXY:
RXY = m_11 = E[XY] = ∫∫ xy fX,Y(x,y) dx dy.
For discrete random variables, RXY = Σ_n Σ_m x_n y_m P(x_n, y_m).

Properties of Correlation:
1. If two random variables X and Y are statistically independent, then X and Y are uncorrelated, i.e. RXY = E[XY] = E[X] E[Y].
Proof: Consider two random variables X and Y with joint density function fX,Y(x,y) and marginal density functions fX(x) and fY(y). If X and Y are statistically independent, then fX,Y(x,y) = fX(x) fY(y). The correlation is
RXY = ∫∫ xy fX,Y(x,y) dx dy = ∫∫ xy fX(x) fY(y) dx dy = [∫ x fX(x) dx][∫ y fY(y) dy],
so RXY = E[XY] = E[X] E[Y].
2. If the random variables X and Y are orthogonal, then their correlation is zero, i.e. RXY = 0.
Proof: Two random variables X and Y are said to be orthogonal when E[XY] = ∫∫ xy fX,Y(x,y) dx dy = 0. Therefore the correlation is RXY = E[XY] = 0.

Joint Central Moments: Consider two random variables X and Y. The expected values of the function g(x,y) = (x − X̄)^n (y − Ȳ)^k are called the joint central moments. Mathematically,
μ_nk = E[(X − X̄)^n (Y − Ȳ)^k] = ∫∫ (x − X̄)^n (y − Ȳ)^k fX,Y(x,y) dx dy,
where n, k = 0, 1, 2, … are positive integers. The order of the central moment is n + k. The first-order central moments are μ_10 = E[X − X̄] = 0 and μ_01 = E[Y − Ȳ] = 0. The second-order central moments are μ_20 = E[(X − X̄)²] = σX², μ_02 = E[(Y − Ȳ)²] = σY² and μ_11 = E[(X − X̄)(Y − Ȳ)]. For N random variables X1, X2, …, XN, the joint central moments are defined as
μ_{n1 n2 … nN} = E[(X1 − X̄1)^{n1} (X2 − X̄2)^{n2} … (XN − X̄N)^{nN}] = ∫…∫ (x1 − X̄1)^{n1} … (xN − X̄N)^{nN} fX1,…,XN(x1, …, xN) dx1 … dxN.
The order of the joint central moment is n1 + n2 + … + nN.

Covariance: Consider the random variables X and Y. The second-order joint central moment μ_11 is called the covariance of X and Y.
It is expressed as Cov(X,Y) = CXY = μ_11 = E[(X − X̄)(Y − Ȳ)], i.e.
CXY = ∫∫ (x − X̄)(y − Ȳ) fX,Y(x,y) dx dy.
For discrete random variables X and Y,
CXY = Σ_n Σ_m (x_n − X̄)(y_m − Ȳ) P(x_n, y_m).

Correlation Coefficient: For the random variables X and Y, the normalized second-order central moment is called the correlation coefficient. It is denoted ρ and is given by
ρ = μ_11/√(μ_20 μ_02) = CXY/(σX σY) = E[(X − X̄)(Y − Ȳ)]/√(E[(X − X̄)²] E[(Y − Ȳ)²]).

Properties of ρ:
1. The range of the correlation coefficient is −1 ≤ ρ ≤ 1.
2. If X and Y are independent, then ρ = 0.
3. If the correlation between X and Y is perfect, then ρ = ±1.
4. If X = Y, then ρ = 1.

Properties of Covariance:
1. If X and Y are two random variables, then the covariance is CXY = RXY − X̄ Ȳ.
Proof: If X and Y are two random variables, we know that
CXY = E[(X − X̄)(Y − Ȳ)] = E[XY − XȲ − X̄Y + X̄Ȳ] = E[XY] − E[X]Ȳ − X̄E[Y] + X̄Ȳ = E[XY] − X̄Ȳ − X̄Ȳ + X̄Ȳ = E[XY] − X̄Ȳ.
Therefore CXY = RXY − X̄ Ȳ, hence proved.
2. If two random variables X and Y are independent, then the covariance is zero, i.e. CXY = 0; but the converse is not true.
Proof: Consider two random variables X and Y. If X and Y are independent, we know that E[XY] = E[X]E[Y], and the covariance of X and Y is CXY = RXY − X̄ Ȳ = E[XY] − X̄ Ȳ = E[X]E[Y] − X̄ Ȳ = 0, hence proved. Conversely, if E[XY] = 0, or E[X] = 0 or E[Y] = 0, the covariance becomes CXY = 0; in this case the random variables may not be independent. Therefore the converse is not true.
3. If X and Y are two random variables, then Var(X + Y) = Var(X) + Var(Y) + 2 CXY.
Proof: If X and Y are two random variables, we know that Var(X) = σX² = E[X²] − E[X]². Then
Var(X + Y) = E[(X + Y)²] − (E[X + Y])² = E[X² + Y² + 2XY] − (E[X] + E[Y])² = E[X²] + E[Y²] + 2E[XY] − E[X]² − E[Y]² − 2E[X]E[Y] = (E[X²] − E[X]²) + (E[Y²] − E[Y]²) + 2(E[XY] − E[X]E[Y]) = σX² + σY² + 2 CXY.
Therefore Var(X + Y) = Var(X) + Var(Y) + 2 CXY, hence proved. Similarly, Var(X − Y) = Var(X) + Var(Y) − 2 CXY.
4. If X and Y are two random variables, then the covariance of X + a and Y + b, where a and b are constants, is Cov(X + a, Y + b) = Cov(X, Y) = CXY.
Proof: If X and Y are two random variables, then
Cov(X + a, Y + b) = E[((X + a) − (X̄ + a))((Y + b) − (Ȳ + b))] = E[(X + a − X̄ − a)(Y + b − Ȳ − b)] = E[(X − X̄)(Y − Ȳ)].
Therefore Cov(X + a, Y + b) = Cov(X, Y) = CXY, hence proved.
5. If X and Y are two random variables, then the covariance of aX and bY, where a and b are constants, is Cov(aX, bY) = ab Cov(X, Y) = ab CXY.
Proof: If X and Y are two random variables, then
Cov(aX, bY) = E[(aX − aX̄)(bY − bȲ)] = E[ab(X − X̄)(Y − Ȳ)] = ab E[(X − X̄)(Y − Ȳ)].
Therefore Cov(aX, bY) = ab Cov(X, Y) = ab CXY, hence proved.
6. If X, Y and Z are three random variables, then Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z).
Proof: We know that
Cov(X + Y, Z) = E[((X + Y) − (X̄ + Ȳ))(Z − Z̄)] = E[((X − X̄) + (Y − Ȳ))(Z − Z̄)] = E[(X − X̄)(Z − Z̄)] + E[(Y − Ȳ)(Z − Z̄)].
Therefore Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z), hence proved.

Joint Characteristic Function: The joint characteristic function of two random variables X and Y is defined as the expected value of the joint function g(x,y) = e^{jω1X} e^{jω2Y}. It can be expressed as
ΦX,Y(ω1, ω2) = E[e^{jω1X} e^{jω2Y}], where ω1 and ω2 are real variables.
Therefore ΦX,Y(ω1, ω2) = ∫∫ fX,Y(x,y) e^{j(ω1x + ω2y)} dx dy.
This is the two-dimensional Fourier transform, with the signs of ω1 and ω2 reversed, of the joint density function. So the inverse Fourier transform of the joint characteristic function gives the joint density function, again with the signs of ω1 and ω2 reversed, i.e. the joint density function is
fX,Y(x,y) = (1/(2π)²) ∫∫ ΦX,Y(ω1, ω2) e^{−j(ω1x + ω2y)} dω1 dω2.
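As a numerical illustration of the covariance and correlation-coefficient definitions above, and of covariance property 1 (CXY = RXY − X̄Ȳ), the following Python sketch draws samples from an assumed pair of correlated random variables and estimates RXY, CXY and ρ from the samples. The particular model Y = 2X + noise + 3 is only an illustrative assumption.

```python
import random

rng = random.Random(7)
N = 200_000

# Assumed illustrative pair: Y = 2X + noise + 3, so X and Y are correlated and Y has nonzero mean.
xs, ys = [], []
for _ in range(N):
    x = rng.gauss(0.0, 1.0)
    y = 2.0 * x + rng.gauss(0.0, 1.0) + 3.0
    xs.append(x)
    ys.append(y)

mx = sum(xs) / N                                        # sample estimate of X-bar
my = sum(ys) / N                                        # sample estimate of Y-bar
rxy = sum(x * y for x, y in zip(xs, ys)) / N            # correlation R_XY = E[XY]
cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N   # covariance C_XY
sx = (sum((x - mx) ** 2 for x in xs) / N) ** 0.5
sy = (sum((y - my) ** 2 for y in ys) / N) ** 0.5
rho = cxy / (sx * sy)                                   # correlation coefficient, must lie in [-1, 1]

print("C_XY            :", round(cxy, 3))
print("R_XY - Xbar*Ybar:", round(rxy - mx * my, 3))     # should match C_XY (property 1)
print("rho             :", round(rho, 3))               # theoretical value here is 2/sqrt(5) ~ 0.894
```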
Joint Moment Generating Function: The joint moment generating function of two random variables X and Y is defined as the expected value of the joint function g(x,y) = e^{v1X} e^{v2Y}. It can be expressed as
MX,Y(v1, v2) = E[e^{v1X + v2Y}], where v1 and v2 are real variables.
Therefore MX,Y(v1, v2) = ∫∫ fX,Y(x,y) e^{v1x + v2y} dx dy,
and the joint density function can be recovered from MX,Y(v1, v2) by an inverse transformation analogous to that of the joint characteristic function.

Gaussian Random Variables:
Two random variables: If two random variables X and Y are jointly Gaussian, then the joint density function is given by
fX,Y(x,y) = 1/(2πσXσY√(1 − ρ²)) exp{ −1/(2(1 − ρ²)) [ (x − X̄)²/σX² − 2ρ(x − X̄)(y − Ȳ)/(σXσY) + (y − Ȳ)²/σY² ] }.
This is also called the bivariate Gaussian density function.
N random variables: Consider N random variables Xn, n = 1, 2, …, N. They are said to be jointly Gaussian if their joint density function (the N-variate density function) is given by
fX1,…,XN(x1, …, xN) = 1/√((2π)^N |[CX]|) exp{ −½ [x − X̄]^T [CX]⁻¹ [x − X̄] },
where the covariance matrix of the N random variables is
[CX] = [ C11 C12 … C1N ; C21 C22 … C2N ; … ; CN1 CN2 … CNN ],
[x − X̄]^T is the transpose of [x − X̄], |[CX]| is the determinant of [CX], and [CX]⁻¹ is the inverse of [CX]. The joint density function for two Gaussian random variables X1 and X2 can be derived by substituting N = 2 in the formula for the N-random-variable case.

Properties of Gaussian Random Variables:
1. Gaussian random variables are completely defined by their means, variances and covariances.
2. If Gaussian random variables are uncorrelated, then they are statistically independent.
3. All marginal density functions derived from an N-variate Gaussian density function are Gaussian.
4. All conditional density functions are also Gaussian.
5. All linear transformations of Gaussian random variables are also Gaussian.

Linear Transformations of Gaussian Random Variables: Consider N Gaussian random variables Yn, n = 1, 2, …, N, obtained by a linear transformation of a set of N Gaussian random variables Xn, n = 1, 2, …, N. The linear transformation can be written as
Y1 = a11 X1 + a12 X2 + … + a1N XN
Y2 = a21 X1 + a22 X2 + … + a2N XN
…
YN = aN1 X1 + aN2 X2 + … + aNN XN,
where the elements aij, i, j = 1, 2, …, N, are real numbers. In matrix form this is [Y] = [T][X], where the transformation matrix is
[T] = [ a11 a12 … a1N ; a21 a22 … a2N ; … ; aN1 aN2 … aNN ].
Also, in terms of the mean values of X and Y, [Y − Ȳ] = [T][X − X̄], so [X − X̄] = [T]⁻¹[Y − Ȳ].

Random Processes: Random processes, also called stochastic processes, deal with randomly varying time waveforms such as message signals and noise. They are described statistically, since complete knowledge of their origin is not available. Probability distribution and probability density functions give the complete statistical characterization of random signals. A random process is a function of both the sample space and a time variable, and can be represented as {X(s,t)}.

Deterministic and Non-deterministic Processes: In general, a random process may be deterministic or non-deterministic. A process is called a deterministic random process if future values of any sample function can be predicted from its past values. For example, X(t) = A sin(ω0t + θ), where the parameters A, ω0 and θ may be random variables, is a deterministic random process because the future values of a sample function can be determined from its known form. If future values of a sample function cannot be determined from observed past values, the process is called a non-deterministic random process.

Classification of Random Processes: Random processes are mainly classified into four types based on the nature of the time variable t and the random variable X, as follows.
1. Continuous Random Process: A random process is said to be continuous if both the random variable X and time t are continuous. The figure below shows a continuous random process.
The fluctuation of noise voltage in any network is a continuous random process.
[Figure (a): Sample function of a continuous random process.]
2. Discrete Random Process: In a discrete random process, the random variable X takes only discrete values while time t is continuous. The figure below shows a discrete random process. A digitally encoded signal has only two discrete values, a positive level and a negative level, but time is continuous; so it is a discrete random process.
[Figure (b): Sample function of a discrete random process.]
3. Continuous Random Sequence: A random process for which the random variable X is continuous but time t takes discrete values is called a continuous random sequence. A continuous random signal defined only at discrete (sample) time instants is an example. It is also called a discrete-time random process and can be represented as a set of random variables {X(tk)} for sample times tk, k = 0, 1, 2, ….
[Figure (c): Sample function of a continuous random sequence.]
4. Discrete Random Sequence: In a discrete random sequence, both the random variable X and time t are discrete. It can be obtained by sampling and quantizing a random signal. This is the form of random process mostly used in digital signal processing applications. The amplitude of the sequence can be quantized into two levels or into multiple levels, as shown in the figures below.
[Figure (d): Sample function of a discrete random sequence (two levels).]
[Figure (e): Sample function of a discrete random sequence (multiple levels).]

Joint Distribution Functions of a Random Process: Consider a random process X(t). For a single random variable at time t1, X1 = X(t1), the cumulative distribution function is defined as FX(x1; t1) = P{X(t1) ≤ x1}, where x1 is any real number. The function FX(x1; t1) is known as the first-order distribution function of X(t). For two random variables at time instants t1 and t2, X1 = X(t1) and X2 = X(t2), the joint distribution is called the second-order joint distribution function of the random process X(t) and is given by FX(x1, x2; t1, t2) = P{X(t1) ≤ x1, X(t2) ≤ x2}. In general, for N random variables at N time instants X(ti), i = 1, 2, …, N, the Nth-order joint distribution function of X(t) is defined as
FX(x1, x2, …, xN; t1, t2, …, tN) = P{X(t1) ≤ x1, X(t2) ≤ x2, …, X(tN) ≤ xN}.

Joint Density Functions of a Random Process: The joint density functions of a random process are obtained from the derivatives of the distribution functions:
1. First-order density function: fX(x1; t1) = dFX(x1; t1)/dx1.
2. Second-order density function: fX(x1, x2; t1, t2) = ∂²FX(x1, x2; t1, t2)/(∂x1 ∂x2).

Independent Random Processes: Consider a random process X(t). Let X(ti) = xi, i = 1, 2, …, N, be N random variables defined at time instants t1, t2, …, tN with density functions fX(x1; t1), fX(x2; t2), …, fX(xN; tN). If the random process X(t) is statistically independent, then the Nth-order joint density function is equal to the product of the individual density functions, i.e.
fX(x1, x2, …, xN; t1, t2, …, tN) = fX(x1; t1) fX(x2; t2) … fX(xN; tN).
Similarly, the joint distribution function is the product of the individual distribution functions.

Statistical Properties of Random Processes: The following are the statistical properties of random processes.
1. Mean: The mean value of a random process X(t) is equal to the expected value of the random process, i.e. X̄(t) = E[X(t)] = ∫_{−∞}^{∞} x fX(x; t) dx.
2. Autocorrelation: Consider a random process X(t). Let X1 and X2 be two random variables defined at times t1 and t2 respectively, with joint density function fX(x1, x2; t1, t2).
The correlation of X1 and X2, E[X1 X2] = E[X(t1) X(t2)], is called the autocorrelation function of the random process X(t), defined as
RXX(t1, t2) = E[X1 X2] = E[X(t1) X(t2)], or
RXX(t1, t2) = ∫∫ x1 x2 fX(x1, x2; t1, t2) dx1 dx2.
3. Cross-correlation: Consider two random processes X(t) and Y(t) defined with random variables X and Y at time instants t1 and t2 respectively. The joint density function is fXY(x, y; t1, t2). Then the correlation of X and Y, E[XY] = E[X(t1) Y(t2)], is called the cross-correlation function of the random processes X(t) and Y(t), defined as
RXY(t1, t2) = E[XY] = E[X(t1) Y(t2)], or
RXY(t1, t2) = ∫∫ xy fXY(x, y; t1, t2) dx dy.

Stationary Processes: A random process is said to be stationary if all its statistical properties, such as mean, moments, variances, etc., do not change with time. Stationarity, which depends on the density functions, has different levels or orders.
1. First-order stationary process: A random process is said to be stationary to order one, or first-order stationary, if its first-order density function does not change with time or with a shift in the time origin. If X(t) is a first-order stationary process, then fX(x1; t1) = fX(x1; t1 + Δt) for any time t1, where Δt is a shift in time. Therefore the condition for a process to be first-order stationary is that its mean value must be constant at all time instants, i.e. E[X(t)] = X̄ = constant.
2. Second-order stationary process: A random process is said to be stationary to order two, or second-order stationary, if its second-order joint density function does not change with time or with a shift in the time origin, i.e. fX(x1, x2; t1, t2) = fX(x1, x2; t1 + Δt, t2 + Δt) for all t1, t2 and Δt. It is then a function of the time difference (t2 − t1) and not of absolute time t. Note that a second-order stationary process is also a first-order stationary process. The condition for a process to be second-order stationary is that its autocorrelation should depend only on time differences and not on absolute time, i.e. if RXX(t1, t2) = E[X(t1) X(t2)] is the autocorrelation function and τ = t2 − t1, then RXX(t1, t1 + τ) = E[X(t1) X(t1 + τ)] = RXX(τ), and RXX(τ) should be independent of absolute time t.
3. Wide-sense stationary (WSS) process: If a random process X(t) is a second-order stationary process, then it is called a wide-sense stationary (WSS) or weak-sense stationary process; however, the converse is not true. The conditions for a wide-sense stationary process are:
1. E[X(t)] = X̄ = constant. 2. E[X(t) X(t + τ)] = RXX(τ) is independent of absolute time t.
Joint wide-sense stationary processes: Consider two random processes X(t) and Y(t). If they are jointly WSS, then the cross-correlation function of X(t) and Y(t) is a function of the time difference τ = t2 − t1 only and not of absolute time, i.e. RXY(t1, t2) = E[X(t1) Y(t2)]; if τ = t2 − t1, then RXY(t1, t1 + τ) = E[X(t1) Y(t1 + τ)] = RXY(τ). Therefore the conditions for two processes to be jointly wide-sense stationary are:
1. E[X(t)] = X̄ = constant. 2. E[Y(t)] = Ȳ = constant. 3. E[X(t) Y(t + τ)] = RXY(τ) is independent of absolute time t.
4. Strict-sense stationary (SSS) process: A random process X(t) is said to be strict-sense stationary if its Nth-order joint density function does not change with time or with a shift in the time origin, i.e.
fX(x1, x2, …, xN; t1, t2, …, tN) = fX(x1, x2, …, xN; t1 + Δt, t2 + Δt, …, tN + Δt)
for all t1, …, tN and Δt. A process that is stationary to all orders N = 1, 2, … is called a strict-sense stationary process. Note that an SSS process is also a WSS process, but the reverse is not true.

Time Average Function: Consider a random process X(t). Let x(t) be a sample function that exists for all time, corresponding to a fixed point in the given sample space S.
The average value of x(t) taken over all time is called the time average of x(t); it is also called the mean value of x(t). It can be expressed as
x̄ = A[x(t)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt.

Time autocorrelation function: Consider a random process X(t). The time average of the product x(t) x(t + τ) is called the time-average autocorrelation function of x(t) and is denoted
Rxx(τ) = A[x(t) x(t + τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t + τ) dt.

Time mean-square function: If τ = 0, the time average of x²(t) is called the time mean-square value of x(t), defined as
x̄² = A[x²(t)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x²(t) dt.

Time cross-correlation function: Let X(t) and Y(t) be two random processes with sample functions x(t) and y(t) respectively. The time average of the product x(t) y(t + τ) is called the time cross-correlation function of x(t) and y(t), denoted
Rxy(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) y(t + τ) dt.

Ergodic Theorem and Ergodic Processes: The ergodic theorem states that for an ergodic random process X(t), all time averages of sample functions of X(t) are equal to the corresponding statistical or ensemble averages of X(t), i.e. x̄ = X̄ and Rxx(τ) = RXX(τ). Random processes that satisfy the ergodic theorem are called ergodic processes.

Joint Ergodic Processes: Let X(t) and Y(t) be two random processes with sample functions x(t) and y(t) respectively. The two random processes are said to be jointly ergodic if they are individually ergodic and their time cross-correlation functions are equal to their respective statistical cross-correlation functions, i.e.
1. x̄ = X̄ and ȳ = Ȳ. 2. Rxx(τ) = RXX(τ), Rxy(τ) = RXY(τ) and Ryy(τ) = RYY(τ).

Mean Ergodic Random Process: A random process X(t) is said to be mean ergodic if the time average of any sample function x(t) is equal to its statistical average X̄, which is constant, with probability one for all sample functions, i.e. x̄ = A[x(t)] = E[X(t)] = X̄ with probability one for all x(t).

Autocorrelation Ergodic Process: A stationary random process X(t) is said to be autocorrelation ergodic if and only if the time autocorrelation function of any sample function x(t) is equal to the statistical autocorrelation function of X(t), i.e. A[x(t) x(t + τ)] = E[X(t) X(t + τ)], or Rxx(τ) = RXX(τ).

Cross-correlation Ergodic Processes: Two stationary random processes X(t) and Y(t) are said to be cross-correlation ergodic if and only if the time cross-correlation function of the sample functions x(t) and y(t) is equal to the statistical cross-correlation function of X(t) and Y(t), i.e. A[x(t) y(t + τ)] = E[X(t) Y(t + τ)], or Rxy(τ) = RXY(τ).

Properties of the Autocorrelation Function: Consider a random process X(t) that is at least WSS, so that its autocorrelation function is a function of the time difference τ. The following are the properties of the autocorrelation function of X(t).
1. The mean-square value of X(t) is E[X²(t)] = RXX(0). It is equal to the (average) power of the process X(t).
Proof: We know that for X(t), RXX(τ) = E[X(t) X(t + τ)]. If τ = 0, then RXX(0) = E[X(t) X(t)] = E[X²(t)], hence proved.
2. The autocorrelation function is maximum at the origin, i.e. |RXX(τ)| ≤ RXX(0).
Proof: Consider two random variables X(t1) and X(t2) of X(t), defined at time instants t1 and t2 respectively. Consider the non-negative quantity [X(t1) ± X(t2)]² ≥ 0. Taking the expectation on both sides, we get E[(X(t1) ± X(t2))²] ≥ 0, i.e.
E[X²(t1)] + E[X²(t2)] ± 2E[X(t1) X(t2)] ≥ 0,
RXX(0) + RXX(0) ± 2 RXX(t1, t2) ≥ 0   [since E[X²(t)] = RXX(0)].
Given that X(t) is WSS and τ = t2 − t1, this gives 2 RXX(0) ± 2 RXX(τ) ≥ 0, so RXX(0) ≥ ∓RXX(τ), i.e. |RXX(τ)| ≤ RXX(0), hence proved.
3. RXX(τ) is an even function of τ, i.e. RXX(−τ) = RXX(τ).
Proof: We know that RXX(τ) = E[X(t) X(t + τ)]. Replacing τ by −τ, RXX(−τ) = E[X(t) X(t − τ)]. Let u = t − τ, so t = u + τ. Therefore RXX(−τ) = E[X(u + τ) X(u)] = E[X(u) X(u + τ)] = RXX(τ), hence proved.
4. If a random process X(t) has a non-zero mean value, E[X(t)] ≠ 0, and is ergodic with no periodic components, then lim_{|τ|→∞} RXX(τ) = X̄².
Proof: Consider a random process X(t) with random variables X(t1) and X(t2). Given that the mean value is E[X(t)] = X̄ ≠ 0, we know that RXX(τ) = E[X(t1) X(t2)]. Since the process has no periodic components, as |τ| → ∞ the random variables become independent, i.e.
lim_{|τ|→∞} RXX(τ) = E[X(t1) X(t2)] = E[X(t1)] E[X(t2)].
Since X(t) is ergodic, E[X(t1)] = E[X(t2)] = X̄. Therefore lim_{|τ|→∞} RXX(τ) = X̄², hence proved.
5. If X(t) is periodic, then its autocorrelation function is also periodic.
Proof: Consider a random process X(t) that is periodic with period T0. Then X(t) = X(t ± T0), and X(t + τ) = X(t + τ ± T0). Now RXX(τ) = E[X(t) X(t + τ)], so RXX(τ ± T0) = E[X(t) X(t + τ ± T0)] = E[X(t) X(t + τ)]. Given that X(t) is WSS, RXX(τ ± T0) = RXX(τ). Therefore RXX(τ) is periodic, hence proved.
6. If X(t) is ergodic, has zero mean, and has no periodic components, then lim_{|τ|→∞} RXX(τ) = 0.
Proof: It is already proved that lim_{|τ|→∞} RXX(τ) = X̄², where X̄ is the mean value of X(t), which is given as zero. Therefore lim_{|τ|→∞} RXX(τ) = 0, hence proved.
7. The autocorrelation function of a random process, RXX(τ), cannot have an arbitrary shape.
Proof: The autocorrelation function RXX(τ) is an even function of τ and has its maximum value at the origin. Hence the autocorrelation function cannot have an arbitrary shape, hence proved.
8. If a random process X(t) with zero mean has a DC component A added to it, so that Y(t) = A + X(t), then RYY(τ) = A² + RXX(τ).
Proof: Given the random process Y(t) = A + X(t), we know that
RYY(τ) = E[Y(t) Y(t + τ)] = E[(A + X(t))(A + X(t + τ))] = E[A² + A X(t) + A X(t + τ) + X(t) X(t + τ)] = A² + A E[X(t)] + A E[X(t + τ)] + E[X(t) X(t + τ)] = A² + 0 + 0 + RXX(τ).
Therefore RYY(τ) = A² + RXX(τ), hence proved.
9. If a random process Z(t) is the sum of two random processes X(t) and Y(t), i.e. Z(t) = X(t) + Y(t), then RZZ(τ) = RXX(τ) + RXY(τ) + RYX(τ) + RYY(τ).
Proof: Given Z(t) = X(t) + Y(t), we know that
RZZ(τ) = E[Z(t) Z(t + τ)] = E[(X(t) + Y(t))(X(t + τ) + Y(t + τ))] = E[X(t) X(t + τ)] + E[X(t) Y(t + τ)] + E[Y(t) X(t + τ)] + E[Y(t) Y(t + τ)].
Therefore RZZ(τ) = RXX(τ) + RXY(τ) + RYX(τ) + RYY(τ), hence proved.

Properties of the Cross-Correlation Function: Consider two random processes X(t) and Y(t) that are at least jointly WSS, so that the cross-correlation function is a function of the time difference τ. The following are the properties of the cross-correlation function.
1. RXY(τ) = RYX(−τ) (symmetry property).
Proof: We know that RXY(τ) = E[X(t) Y(t + τ)] and also RYX(τ) = E[Y(t) X(t + τ)]. Replacing τ by −τ, RYX(−τ) = E[Y(t) X(t − τ)]. Let u = t − τ, so t = u + τ; then RYX(−τ) = E[Y(u + τ) X(u)] = E[X(u) Y(u + τ)]. Therefore RYX(−τ) = RXY(τ), hence proved.
2. If RXX(τ) and RYY(τ) are the autocorrelation functions of X(t) and Y(t) respectively, then the cross-correlation satisfies the inequality |RXY(τ)| ≤ √(RXX(0) RYY(0)).
Proof: Consider two random processes X(t) and Y(t) with autocorrelation functions RXX(τ) and RYY(τ). Consider the inequality
E[ ( X(t)/√RXX(0) ± Y(t + τ)/√RYY(0) )² ] ≥ 0, i.e.
E[X²(t)]/RXX(0) + E[Y²(t + τ)]/RYY(0) ± 2E[X(t) Y(t + τ)]/√(RXX(0) RYY(0)) ≥ 0.
We know that E[X²(t)] = RXX(0), E[Y²(t)] = RYY(0) and E[X(t) Y(t + τ)] = RXY(τ). Therefore
1 + 1 ± 2 RXY(τ)/√(RXX(0) RYY(0)) ≥ 0, so ∓RXY(τ) ≤ √(RXX(0) RYY(0)),
or |RXY(τ)| ≤ √(RXX(0) RYY(0)), hence proved. Hence the absolute value of the cross-correlation function is always less than or equal to the geometric mean of the autocorrelation functions.
3. If RXX(τ) and RYY(τ) are the autocorrelation functions of X(t) and Y(t) respectively, then the cross-correlation satisfies the inequality |RXY(τ)| ≤ ½[RXX(0) + RYY(0)].
Proof: We know that the geometric mean of any two positive numbers cannot exceed their arithmetic mean, i.e. if RXX(0) and RYY(0) are two positive quantities, then √(RXX(0) RYY(0)) ≤ ½[RXX(0) + RYY(0)]. We also know that |RXY(τ)| ≤ √(RXX(0) RYY(0)). Therefore |RXY(τ)| ≤ ½[RXX(0) + RYY(0)], hence proved.
4. If two random processes X(t) and Y(t) are statistically independent and are at least WSS, then RXY(τ) = X̄ Ȳ.
Proof: Let the two random processes X(t) and Y(t) be jointly WSS. Then we know that RXY(τ) = E[X(t) Y(t + τ)]. Since X(t) and Y(t) are independent, RXY(τ) = E[X(t)] E[Y(t + τ)]. Therefore RXY(τ) = X̄ Ȳ, hence proved.
5. If two random processes X(t) and Y(t) have zero mean and are jointly WSS, then lim_{|τ|→∞} RXY(τ) = 0.
Proof: We know that RXY(τ) = E[X(t) Y(t + τ)]. Taking limits on both sides, lim_{|τ|→∞} RXY(τ) = lim_{|τ|→∞} E[X(t) Y(t + τ)]. As |τ| → ∞, the random processes X(t) and Y(t) can be considered independent, so lim_{|τ|→∞} RXY(τ) = E[X(t)] E[Y(t + τ)] = X̄ Ȳ. Given X̄ = Ȳ = 0, lim_{|τ|→∞} RXY(τ) = 0. Similarly, lim_{|τ|→∞} RYX(τ) = 0. Hence proved.

Covariance Functions for Random Processes:
Auto-covariance function: Consider two random variables X(t) and X(t + τ) of a process at the two time instants t and t + τ. The auto-covariance function can be expressed as
CXX(t, t + τ) = E[(X(t) − E[X(t)]) (X(t + τ) − E[X(t + τ)])], or
CXX(t, t + τ) = RXX(t, t + τ) − E[X(t)] E[X(t + τ)].
If X(t) is WSS, then CXX(τ) = RXX(τ) − X̄². At τ = 0, CXX(0) = RXX(0) − X̄² = E[X²] − X̄² = σX². Therefore at τ = 0 the auto-covariance function becomes the variance of the random process. The autocorrelation coefficient of the random process X(t) is defined as
ρXX(t, t + τ) = CXX(t, t + τ)/√(CXX(t, t) CXX(t + τ, t + τ)), provided CXX(t, t) ≠ 0.
At τ = 0, ρXX(0) = 1, and |ρXX(t, t + τ)| ≤ 1.
Cross-covariance function: If two random processes X(t) and Y(t) have the random variables X(t) and Y(t + τ), then the cross-covariance function can be defined as
CXY(t, t + τ) = E[(X(t) − E[X(t)]) (Y(t + τ) − E[Y(t + τ)])], or
CXY(t, t + τ) = RXY(t, t + τ) − E[X(t)] E[Y(t + τ)].
If X(t) and Y(t) are jointly WSS, then CXY(τ) = RXY(τ) − X̄ Ȳ. If X(t) and Y(t) are uncorrelated, then CXY(t, t + τ) = 0. The cross-correlation coefficient of the random processes X(t) and Y(t) is defined as
ρXY(t, t + τ) = CXY(t, t + τ)/√(CXX(t, t) CYY(t + τ, t + τ)); for jointly WSS processes, ρXY(τ) = CXY(τ)/(σX σY).

Gaussian Random Process: Consider a continuous random process X(t). Let N random variables X1 = X(t1), X2 = X(t2), …, XN = X(tN) be defined at time instants t1, t2, …, tN respectively. If these random variables are jointly Gaussian for every N = 1, 2, … and for any choice of time instants t1, t2, …, tN, then the random process X(t) is called a Gaussian random process. The Gaussian density function is given by
fX(x1, x2, …, xN; t1, t2, …, tN) = 1/√((2π)^N |[CXX]|) exp{ −½ [x − X̄]^T [CXX]⁻¹ [x − X̄] },
where [CXX] is the covariance matrix.

Poisson Random Process: The Poisson process X(t) is a discrete random process that represents the number of times some event has occurred as a function of time. If the number of occurrences of the event in any finite time interval is described by a Poisson distribution with average rate of occurrence λ, then the probability of exactly k occurrences over the time interval (0, t) is
P[X(t) = k] = (λt)^k e^{−λt}/k!, k = 0, 1, 2, …,
and the probability density function is
fX(x) = Σ_{k=0}^{∞} ((λt)^k e^{−λt}/k!) δ(x − k).
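Before moving to the spectral characteristics, the time-average and ergodic ideas developed above can be illustrated with a simple example: the random-phase sinusoid X(t) = A cos(ω0t + Θ), Θ uniform on [0, 2π), whose ensemble autocorrelation is RXX(τ) = (A²/2) cos ω0τ. The Python sketch below (the values of A, ω0 and the averaging interval T are illustrative assumptions) estimates the time-average autocorrelation from a single sample function and compares it with the ensemble value; note that the τ = 0 value also matches property 1, i.e. the average power A²/2.

```python
import math
import random

# One sample function of the process X(t) = A*cos(w0*t + theta), theta uniform on [0, 2*pi).
A, w0 = 2.0, 2 * math.pi * 5.0            # illustrative amplitude and angular frequency
rng = random.Random(3)
theta = rng.uniform(0.0, 2 * math.pi)

dt, T = 1e-3, 200.0                        # sampling step and (long) averaging interval
n = int(T / dt)
x = [A * math.cos(w0 * k * dt + theta) for k in range(n)]

def time_autocorrelation(x, lag):
    """Time-average autocorrelation A[x(t) x(t+tau)] for tau = lag*dt."""
    m = len(x) - lag
    return sum(x[k] * x[k + lag] for k in range(m)) / m

for tau in (0.0, 0.05, 0.1):
    lag = int(round(tau / dt))
    est = time_autocorrelation(x, lag)
    theory = (A ** 2 / 2) * math.cos(w0 * tau)   # ensemble autocorrelation of the process
    print(f"tau={tau:4.2f}  time average={est:6.3f}  ensemble value={theory:6.3f}")
```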
UNIT-4: RANDOM PROCESSES: SPECTRAL CHARACTERISTICS
In the previous unit we studied the characteristics of random processes through correlation and covariance functions, which are defined in the time domain. This unit explores the important concept of characterizing random processes in the frequency domain. These characteristics are called spectral characteristics. All the concepts in this unit follow readily from the theory of Fourier transforms.

Consider a random process X(t). The amplitude of the random process, which varies randomly with time, does not satisfy Dirichlet's conditions. Therefore it is not possible to apply the Fourier transform directly to the random process for a frequency-domain analysis. Instead, the autocorrelation function of a WSS random process is used to study spectral characteristics such as the power density spectrum or power spectral density (psd).

Power Density Spectrum: The power spectrum of a WSS random process X(t) is defined as the Fourier transform of the autocorrelation function RXX(τ) of X(t). It can be expressed as
SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ.
We can obtain the autocorrelation function from the power spectral density by taking the inverse Fourier transform, i.e.
RXX(τ) = (1/2π) ∫_{−∞}^{∞} SXX(ω) e^{jωτ} dω.
Therefore the power density spectrum SXX(ω) and the autocorrelation function RXX(τ) are a Fourier transform pair. The power spectral density can also be defined as
SXX(ω) = lim_{T→∞} E[|XT(ω)|²]/(2T),
where XT(ω) is the Fourier transform of X(t) over the interval [−T, T].

Average power of the random process: The average power PXX of a WSS random process X(t) is defined as the time average of its second-order moment, or equivalently as its autocorrelation function at τ = 0. Mathematically,
PXX = A{E[X²(t)]} = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt, or PXX = RXX(τ)|_{τ=0}.
From the power density spectrum, RXX(τ) = (1/2π) ∫ SXX(ω) e^{jωτ} dω; at τ = 0,
PXX = RXX(0) = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω.
Therefore the average power of X(t) is PXX = (1/2π) ∫_{−∞}^{∞} SXX(ω) dω.

Properties of the power density spectrum: The properties of the power density spectrum SXX(ω) of a WSS random process X(t) are as follows.
(1) SXX(ω) ≥ 0.
Proof: From the definition, the expected value of the non-negative quantity |XT(ω)|² is always non-negative. Therefore SXX(ω) ≥ 0, hence proved.
(2) The power spectral density at zero frequency is equal to the area under the curve of the autocorrelation RXX(τ), i.e. SXX(0) = ∫_{−∞}^{∞} RXX(τ) dτ.
Proof: From the definition we know that SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^{−jωτ} dτ. At ω = 0, SXX(0) = ∫_{−∞}^{∞} RXX(τ) dτ, hence proved.
(3) The power density spectrum of a real process X(t) is an even function, i.e. SXX(−ω) = SXX(ω).
Proof: Consider a WSS real process X(t). Then SXX(ω) = ∫ RXX(τ) e^{−jωτ} dτ and SXX(−ω) = ∫ RXX(τ) e^{jωτ} dτ. Substituting τ = −τ, SXX(−ω) = ∫ RXX(−τ) e^{−jωτ} dτ. Since X(t) is real, from the properties of the autocorrelation function RXX(−τ) = RXX(τ). Therefore SXX(−ω) = ∫ RXX(τ) e^{−jωτ} dτ = SXX(ω), hence proved.
(4) SXX(ω) is always a real function.
Proof: We know that SXX(ω) = lim_{T→∞} E[|XT(ω)|²]/(2T). Since |XT(ω)|² is a real quantity, SXX(ω) is always a real function, hence proved.
(5) If SXX(ω) is the psd of the WSS random process X(t), then
(1/2π) ∫_{−∞}^{∞} SXX(ω) dω = A{E[X²(t)]} = RXX(0),
i.e. the time average of the mean-square value of a WSS random process equals (1/2π times) the area under the curve of the power spectral density.
Proof: We know that RXX(τ) = A{E[X(t + τ) X(t)]} = (1/2π) ∫ SXX(ω) e^{jωτ} dω. At τ = 0, RXX(0) = A{E[X²(t)]} = (1/2π) ∫ SXX(ω) dω, which is proportional to the area under the curve of the power spectral density. Hence proved.
(6) If X(t) is a WSS random process with psd SXX(ω), then the psd of the derivative of X(t) is equal to ω² times the psd SXX(ω), i.e. SX′X′(ω) = ω² SXX(ω).
Proof: We know that SXX(ω) = lim_{T→∞} E[|XT(ω)|²]/(2T) and XT(ω) = ∫_{−T}^{T} X(t) e^{−jωt} dt. Integrating by parts, the Fourier transform of the derivative X′(t) over [−T, T] is
X′T(ω) = ∫_{−T}^{T} X′(t) e^{−jωt} dt = [X(t) e^{−jωt}]_{−T}^{T} − ∫_{−T}^{T} X(t)(−jω) e^{−jωt} dt = [X(t) e^{−jωt}]_{−T}^{T} + jω XT(ω),
and the boundary term does not contribute in the limit, so |X′T(ω)|² → |jω|² |XT(ω)|² = ω² |XT(ω)|². Therefore
SX′X′(ω) = lim_{T→∞} E[|X′T(ω)|²]/(2T) = lim_{T→∞} ω² E[|XT(ω)|²]/(2T) = ω² SXX(ω), hence proved.

Cross Power Density Spectrum: Consider two real random processes X(t) and Y(t) that are jointly WSS. The cross power density spectrum is defined as the Fourier transform of the cross-correlation function of X(t) and Y(t), and is expressed as
SXY(ω) = ∫_{−∞}^{∞} RXY(τ) e^{−jωτ} dτ and SYX(ω) = ∫_{−∞}^{∞} RYX(τ) e^{−jωτ} dτ.
By inverse Fourier transformation, we can obtain the cross-correlation functions as
RXY(τ) = (1/2π) ∫ SXY(ω) e^{jωτ} dω and RYX(τ) = (1/2π) ∫ SYX(ω) e^{jωτ} dω.
Therefore the cross psd and the cross-correlation function form a Fourier transform pair. If XT(ω) and YT(ω) are the Fourier transforms of X(t) and Y(t) respectively over the interval [−T, T], then the cross power density spectrum can also be defined as
SXY(ω) = lim_{T→∞} E[XT*(ω) YT(ω)]/(2T).

Average cross power: The average cross power PXY of the jointly WSS random processes X(t) and Y(t) is defined as the cross-correlation function at τ = 0, that is
PXY = lim_{T→∞} (1/2T) ∫_{−T}^{T} RXY(t, t) dt, or PXY = RXY(τ)|_{τ=0} = RXY(0).
Also, PXY = (1/2π) ∫_{−∞}^{∞} SXY(ω) dω and PYX = (1/2π) ∫_{−∞}^{∞} SYX(ω) dω.

Properties of the cross power density spectrum: The properties of the cross power density spectrum for real random processes X(t) and Y(t) are as follows.
(1) SXY(−ω) = SYX(ω) and SYX(−ω) = SXY(ω).
Proof: Consider the cross-correlation function RXY(τ). The cross power density spectrum is SXY(ω) = ∫ RXY(τ) e^{−jωτ} dτ, so SXY(−ω) = ∫ RXY(τ) e^{jωτ} dτ. Substituting τ = −τ, SXY(−ω) = ∫ RXY(−τ) e^{−jωτ} dτ. Since RXY(−τ) = RYX(τ), SXY(−ω) = ∫ RYX(τ) e^{−jωτ} dτ = SYX(ω). Therefore SXY(−ω) = SYX(ω), and similarly SYX(−ω) = SXY(ω), hence proved.
(2) The real parts of SXY(ω) and SYX(ω) are even functions of ω, i.e. Re[SXY(ω)] and Re[SYX(ω)] are even functions.
Proof: We know that SXY(ω) = ∫ RXY(τ) e^{−jωτ} dτ and that e^{−jωτ} = cos ωτ − j sin ωτ, so Re[SXY(ω)] = ∫ RXY(τ) cos ωτ dτ. Since cos ωτ is an even function of ω, i.e. cos ωτ = cos(−ωτ), Re[SXY(ω)] = Re[SXY(−ω)]. Similarly Re[SYX(ω)] = Re[SYX(−ω)], hence proved.
(3) The imaginary parts of SXY(ω) and SYX(ω) are odd functions of ω, i.e. Im[SXY(ω)] and Im[SYX(ω)] are odd functions.
Proof: We know that SXY(ω) = ∫ RXY(τ) e^{−jωτ} dτ and that e^{−jωτ} = cos ωτ − j sin ωτ, so Im[SXY(ω)] = −∫ RXY(τ) sin ωτ dτ. Since sin ωτ is an odd function of ω, Im[SXY(−ω)] = −Im[SXY(ω)]. Similarly Im[SYX(−ω)] = −Im[SYX(ω)], hence proved.
(4) SXY(ω) = 0 and SYX(ω) = 0 if X(t) and Y(t) are orthogonal.
Proof: From the properties of the cross-correlation function, we know that the random processes X(t) and Y(t) are said to be orthogonal if their cross-correlation function is zero, i.e. RXY(τ) = RYX(τ) = 0. Since SXY(ω) = ∫ RXY(τ) e^{−jωτ} dτ, it follows that SXY(ω) = 0, and similarly SYX(ω) = 0, hence proved.
(5) If X(t) and Y(t) are uncorrelated and have mean values X̄ and Ȳ, then SXY(ω) = 2π X̄ Ȳ δ(ω).
Proof: We know that SXY(ω) = ∫ RXY(τ) e^{−jωτ} dτ = ∫ E[X(t) Y(t + τ)] e^{−jωτ} dτ. Since X(t) and Y(t) are uncorrelated, E[X(t) Y(t + τ)] = E[X(t)] E[Y(t + τ)] = X̄ Ȳ. Therefore SXY(ω) = ∫ X̄ Ȳ e^{−jωτ} dτ = X̄ Ȳ ∫ e^{−jωτ} dτ = 2π X̄ Ȳ δ(ω), hence proved.
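As a quick check of the Fourier-transform pair relating RXX(τ) and SXX(ω), the sketch below evaluates the defining integral numerically and compares it with a known closed form; the exponential autocorrelation RXX(τ) = e^{−a|τ|} and the constant a = 2 are illustrative assumptions, for which SXX(ω) = 2a/(a² + ω²). Note also that SXX(0) equals the area under RXX(τ), as stated in property (2).

```python
import math

# Assumed illustrative WSS autocorrelation R_XX(tau) = exp(-a*|tau|);
# its Fourier transform (the power spectral density) is S_XX(w) = 2a / (a^2 + w^2).
a = 2.0

def r_xx(tau):
    return math.exp(-a * abs(tau))

def s_xx_numeric(w, tau_max=40.0, d_tau=1e-3):
    """S_XX(w) = integral of R_XX(tau) e^{-j w tau} d tau, computed as
    2 * integral_0^tau_max R_XX(tau) cos(w tau) d tau, since R_XX is even and real."""
    n = int(tau_max / d_tau)
    total = sum(r_xx(k * d_tau) * math.cos(w * k * d_tau) for k in range(1, n))
    total += 0.5 * (r_xx(0.0) + r_xx(tau_max) * math.cos(w * tau_max))  # trapezoid end points
    return 2.0 * total * d_tau

for w in (0.0, 1.0, 2.0, 5.0):
    print(f"w={w}: numeric {s_xx_numeric(w):.4f}   closed form {2*a/(a*a + w*w):.4f}")
```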
UNIT-5: LINEAR SYSTEMS RESPONSE TO RANDOM INPUTS
Consider a continuous LTI system with impulse response h(t). Assume that the system is always causal and stable. When a continuous-time random process X(t) is applied to this system, the output response is also a continuous-time random process Y(t). If the random processes X and Y are discrete-time signals, then the linear system is called a discrete-time system. In this unit we concentrate on the statistical and spectral characteristics of the output random process Y(t).

System Response: Let a random process X(t) be applied to a continuous linear time-invariant system whose impulse response is h(t), as shown in the figure below. Then the output response Y(t) is also a random process. It can be expressed by the convolution integral Y(t) = h(t) * X(t).
[Figure: LTI system with input X(t), impulse response h(t) and output Y(t).]
That is, the output response is Y(t) = ∫_{−∞}^{∞} h(τ) X(t − τ) dτ.

Mean Value of the Output Response: Consider that the random process X(t) is a wide-sense stationary process. The mean value of the output response is E[Y(t)]. Then
E[Y(t)] = E[h(t) * X(t)] = E[∫ h(τ) X(t − τ) dτ] = ∫ h(τ) E[X(t − τ)] dτ.
But E[X(t − τ)] = X̄ = constant, since X(t) is WSS. Then E[Y(t)] = Ȳ = X̄ ∫ h(τ) dτ. Also, if H(ω) is the Fourier transform of h(t), then H(ω) = ∫ h(t) e^{−jωt} dt; at ω = 0, H(0) = ∫ h(t) dt is called the zero-frequency response of the system. Substituting this, we get E[Y(t)] = Ȳ = X̄ H(0), which is constant. Thus the mean value of the output response Y(t) of a WSS random process is equal to the product of the mean value of the input process and the zero-frequency response of the system.

Mean Square Value of the Output Response: The mean-square value of the output response is
E[Y²(t)] = E[(h(t) * X(t))²] = E[(h(t) * X(t))(h(t) * X(t))]
= E[∫ h(τ1) X(t − τ1) dτ1 ∫ h(τ2) X(t − τ2) dτ2]
= ∫∫ E[X(t − τ1) X(t − τ2)] h(τ1) h(τ2) dτ1 dτ2,
where τ1 and τ2 are shifts in the time variable. If the input X(t) is a WSS random process, then E[X(t − τ1) X(t − τ2)] = RXX(τ1 − τ2). Therefore
E[Y²(t)] = ∫∫ RXX(τ1 − τ2) h(τ1) h(τ2) dτ1 dτ2.
This expression is independent of time t, and it represents the output power.

Autocorrelation Function of the Output Response: The autocorrelation of Y(t) is
RYY(t1, t2) = E[Y(t1) Y(t2)] = E[(h(t1) * X(t1))(h(t2) * X(t2))]
= E[∫ h(τ1) X(t1 − τ1) dτ1 ∫ h(τ2) X(t2 − τ2) dτ2]
= ∫∫ E[X(t1 − τ1) X(t2 − τ2)] h(τ1) h(τ2) dτ1 dτ2.
If the input X(t) is a WSS random process, E[X(t1 − τ1) X(t2 − τ2)] = RXX(t2 − t1 + τ1 − τ2). Let the time difference be τ = t2 − t1, so t2 = t1 + τ. Then
RYY(t1, t1 + τ) = RYY(τ) = ∫∫ RXX(τ + τ1 − τ2) h(τ1) h(τ2) dτ1 dτ2.
If RXX(τ) is the autocorrelation function of X(t), this can be written as RYY(τ) = RXX(τ) * h(τ) * h(−τ). It is observed that the output autocorrelation function is a function of τ only; hence the output random process Y(t) is also a WSS random process.

Cross-Correlation Function of the Response: If the input X(t) is a WSS random process, then the cross-correlation function of the input X(t) and the output Y(t) is
RXY(t, t + τ) = E[X(t) Y(t + τ)] = E[X(t) ∫ h(τ1) X(t + τ − τ1) dτ1] = ∫ E[X(t) X(t + τ − τ1)] h(τ1) dτ1 = ∫ RXX(τ − τ1) h(τ1) dτ1,
which is the convolution of RXX(τ) and h(τ). Therefore RXY(τ) = RXX(τ) * h(τ); similarly we can show that RYX(τ) = RXX(τ) * h(−τ). This shows that X(t) and Y(t) are jointly WSS. We can also relate the autocorrelation function of the output to the cross-correlation functions as
RYY(τ) = RXY(τ) * h(−τ) and RYY(τ) = RYX(τ) * h(τ).

Spectral Characteristics of a System Response: Consider that the random process X(t) is a WSS random process with autocorrelation function RXX(τ), applied to an LTI system. It has been noted that the output response Y(t) is also WSS and that the processes X(t) and Y(t) are jointly WSS. We can obtain the power spectral characteristics of the output process Y(t) by taking the Fourier transforms of the correlation functions.

Power Density Spectrum of the Response: Consider that a random process X(t) is applied to an LTI system having a transfer function H(ω). The output response is Y(t).
If the power spectrum of the input process is SXX(ω), then the power spectrum of the output response is given by SYY(ω) = |H(ω)|² SXX(ω).
Proof: Let RYY(τ) be the autocorrelation of the output response Y(t). Then the power spectrum of the response is the Fourier transform of RYY(τ):
SYY(ω) = ∫ RYY(τ) e^{−jωτ} dτ.
We know that RYY(τ) = ∫∫ RXX(τ + τ1 − τ2) h(τ1) h(τ2) dτ1 dτ2. Then
SYY(ω) = ∫ [∫∫ RXX(τ + τ1 − τ2) h(τ1) h(τ2) dτ1 dτ2] e^{−jωτ} dτ
= ∫ h(τ1) e^{jωτ1} dτ1 ∫ h(τ2) e^{−jωτ2} dτ2 ∫ RXX(τ + τ1 − τ2) e^{−jω(τ + τ1 − τ2)} dτ.
Let t = τ + τ1 − τ2, so dt = dτ. Therefore
SYY(ω) = [∫ h(τ1) e^{jωτ1} dτ1][∫ h(τ2) e^{−jωτ2} dτ2][∫ RXX(t) e^{−jωt} dt].
We know that H(ω) = ∫ h(τ) e^{−jωτ} dτ. Therefore SYY(ω) = H*(ω) H(ω) SXX(ω) = H(−ω) H(ω) SXX(ω), i.e. SYY(ω) = |H(ω)|² SXX(ω). Hence proved.
Similarly, we can prove that the cross power spectral density functions are
SXY(ω) = SXX(ω) H(ω) and SYX(ω) = SXX(ω) H(−ω).

Spectrum Bandwidth: The spectral density is mostly concentrated around a certain frequency value and decreases at other frequencies. The bandwidth of the spectrum is the range of frequencies having significant values. It is defined as a measure of the spread of the spectral density and is also called the rms bandwidth or normalized bandwidth. It is given by
W_rms² = ∫ ω² SXX(ω) dω / ∫ SXX(ω) dω.

Types of Random Processes: In practical situations, random processes can be categorized into different types depending on their frequency content. For example, information-bearing signals such as audio, video and modulated waveforms carry their information within a specified frequency band. The important types of random processes are: 1. Low-pass random processes, 2. Band-pass random processes, 3. Band-limited random processes, 4. Narrow-band random processes.
(1) Low-pass random processes: A random process X(t) is defined as a low-pass random process if its power spectral density SXX(ω) has significant components within the frequency band around ω = 0, as shown in the figure below. For example, baseband signals such as speech, image and video are low-pass random processes.
[Figure: Power spectrum of a low-pass random process.]
(2) Band-pass random processes: A random process X(t) is called a band-pass process if its power spectral density SXX(ω) has significant components within a bandwidth W that does not include ω = 0. In practice, the spectrum may have a small amount of power near ω = 0, as shown in the figure below; the spectral components outside the band W are very small and can be neglected. For example, modulated signals with carrier frequency ω0 and bandwidth W are band-pass random processes. The noise transmitted over a communication channel can be modelled as a band-pass process.
[Figure: Power spectrum of a band-pass random process.]
(3) Band-limited random processes: A random process is said to be band-limited if its power spectrum components are zero outside a frequency band of width W that does not include ω = 0. The power density spectrum of a band-limited band-pass process is shown in the figure below.
[Figure: Power spectrum of a band-limited band-pass random process.]
(4) Narrow-band random processes: A band-limited random process is said to be a narrow-band process if the bandwidth W is very small compared to the band centre frequency, i.e. W << ω0, where W is the bandwidth and ω0 is the frequency at which the power spectrum is maximum. The power density spectrum of a narrow-band process N(t) is shown in the figure below.
[Figure: Power spectrum of a narrow-band random process.]

Representation of a narrow-band process: For an arbitrary WSS random process N(t), the quadrature form of the narrow-band process can be represented as
N(t) = X(t) cos ω0t − Y(t) sin ω0t,
where X(t) and Y(t) are respectively called the in-phase and quadrature-phase components of N(t). They can be expressed as
X(t) = A(t) cos[θ(t)] and Y(t) = A(t) sin[θ(t)],
and the relationships between the processes A(t) and θ(t) and these components are given by
A(t) = √(X²(t) + Y²(t)) and θ(t) = tan⁻¹[Y(t)/X(t)].

Properties of Band-Limited Random Processes: Let N(t) be any band-limited WSS random process with zero mean value and power spectral density SNN(ω). If the random process is represented by N(t) = X(t) cos ω0t − Y(t) sin ω0t, then some important properties of X(t) and Y(t) are given below.
1. If N(t) is WSS, then X(t) and Y(t) are jointly WSS.
2. If N(t) has zero mean, i.e. E[N(t)] = 0, then E[X(t)] = E[Y(t)] = 0.
3. The mean-square values of the processes are equal, i.e. E[N²(t)] = E[X²(t)] = E[Y²(t)].
4. Both processes X(t) and Y(t) have the same autocorrelation function, i.e. RXX(τ) = RYY(τ).
5. The cross-correlation functions of X(t) and Y(t) satisfy RXY(τ) = −RYX(τ). If the processes are orthogonal, then RXY(τ) = RYX(τ) = 0.
6. Both X(t) and Y(t) have the same power spectral density, i.e. SXX(ω) = SYY(ω).
7. The cross power spectra satisfy SXY(ω) = −SYX(ω).
8. If N(t) is a Gaussian random process, then X(t) and Y(t) are jointly Gaussian.
9. The relationship between the autocorrelation functions and the power spectrum SNN(ω) is
RXX(τ) = RYY(τ) = (1/π) ∫_0^∞ SNN(ω) cos[(ω − ω0)τ] dω and RYX(τ) = (1/π) ∫_0^∞ SNN(ω) sin[(ω − ω0)τ] dω.
10. If N(t) is a zero-mean Gaussian process and its psd SNN(ω) is symmetric about ±ω0, then X(t) and Y(t) are statistically independent.
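To close, the main Unit-5 result SYY(ω) = |H(ω)|² SXX(ω) can be checked numerically in discrete time. For a white-noise input the psd is flat, SXX(ω) = σ², so the output power RYY(0) = (1/2π)∫ |H(ω)|² σ² dω reduces, by Parseval's relation, to σ² Σ_k h[k]². The Python sketch below filters white noise through an FIR system and compares the measured output power with this prediction; the impulse response h and the noise power σ² are illustrative assumptions.

```python
import random

# Discrete-time sketch: white noise X[n] with power sigma^2 (so S_XX(w) = sigma^2)
# drives an FIR system with impulse response h.  The predicted output power is
# R_YY(0) = (1/2*pi) * integral of |H(w)|^2 * sigma^2 dw = sigma^2 * sum(h[k]^2).
rng = random.Random(11)
sigma = 1.5
h = [0.5, 1.0, -0.25, 0.1]                    # illustrative impulse response

N = 400_000
x = [rng.gauss(0.0, sigma) for _ in range(N)]

# Convolution y[n] = sum_k h[k] * x[n-k], skipping the initial transient
y = [sum(h[k] * x[n - k] for k in range(len(h))) for n in range(len(h), N)]

measured_power = sum(v * v for v in y) / len(y)          # estimate of E[Y^2] = R_YY(0)
predicted_power = sigma ** 2 * sum(c * c for c in h)     # sigma^2 * sum of h[k]^2

print("measured E[Y^2] :", round(measured_power, 3))
print("predicted power :", round(predicted_power, 3))
```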
