
Solutions to Selected Problems and Exercises

Chapter 1

Problem 1.5.1. Find the real and imaginary parts of (a) $(j+3)/(j-3)$ and (b) $(1+j\sqrt{2})^3$.

Solution. (a)
$$\frac{j+3}{j-3} = \frac{(j+3)(-j-3)}{(j-3)(-j-3)} = \frac{1-3j-3j-9}{1+3^2} = -\frac{4}{5} - \frac{3}{5}\,j.$$

(b)
$$(1+j\sqrt{2})^3 = 1^3 + 3\cdot 1^2(j\sqrt{2}) + 3\cdot 1\,(j\sqrt{2})^2 + (j\sqrt{2})^3 = -5 + \sqrt{2}\,j.$$

Problem 1.5.2. Find the moduli $|z|$ and arguments $\theta$ of the complex numbers (a) $z = -2j$; (b) $z = 3 + 4j$.

Solution. (a) $|z| = \sqrt{(-2)^2 + 0} = 2$; since $\cos\theta = 0$ and $\sin\theta = -1$, we get $\theta = -\pi/2$. (You have to be careful with the coordinate angle; here $\cos\theta = 0$, $\sin\theta < 0$.)
(b) $|z| = \sqrt{9+16} = 5$, $\tan\theta = 4/3 \Rightarrow \theta = \arctan(4/3)$.

Problem 1.5.3. Find the real and imaginary components of the complex numbers (a) $z = 5e^{j\pi/4}$; (b) $z = -2e^{j(8\pi + 1.27)}$.

Solution. (a) $z = 5e^{j\pi/4} = 5\cos(\pi/4) + 5j\sin(\pi/4) = \frac{5\sqrt{2}}{2} + j\,\frac{5\sqrt{2}}{2}$, so $\mathrm{Re}\,z = \frac{5\sqrt{2}}{2}$ and $\mathrm{Im}\,z = \frac{5\sqrt{2}}{2}$.
(b) $z = -2e^{j(8\pi+1.27)} = -2e^{j\cdot 1.27} = -2\cos(1.27) - 2j\sin(1.27)$, so $\mathrm{Re}\,z = -2\cos(1.27)$ and $\mathrm{Im}\,z = -2\sin(1.27)$.

Problem 1.5.4. Show that
$$\frac{5}{(1-j)(2-j)(3-j)} = \frac{j}{2}.$$

W.A. Woyczyński, A First Course in Statistics for Signal Analysis,
DOI 10.1007/978-0-8176-8101-2, © Springer Science+Business Media, LLC 2011

Solution.
$$\frac{5}{(1-j)(2-j)(3-j)} = \frac{5}{(1-3j)(3-j)} = \frac{5}{-10j} = \frac{j}{2}.$$

Problem 1.5.5. Sketch the sets of points in the complex plane $(x,y)$, $z = x + jy$, such that (a) $|z - 1 + j| = 1$; (b) $z^2 + (z^*)^2 = 2$.

Solution. (a)
$$\{(x,y): |z-1+j| = 1\} = \{(x,y): |x + jy - 1 + j| = 1\} = \{(x,y): |(x-1) + j(y+1)| = 1\}$$
$$= \{(x,y): (x-1)^2 + (y+1)^2 = 1^2\}.$$
So the set is a circle of radius 1 centered at $(1,-1)$.
(b)
$$\{(x,y): z^2 + (z^*)^2 = 2\} = \{(x,y): (x+jy)^2 + (x-jy)^2 = 2\}$$
$$= \{(x,y): x^2 + 2jxy - y^2 + x^2 - 2jxy - y^2 = 2\} = \{(x,y): x^2 - y^2 = 1\}.$$
So the set is a hyperbola (sketch it, please).

Problem 1.5.6. Using de Moivre's formula, find $(-2j)^{1/2}$. Is this complex number uniquely defined?

Solution.
$$(-2j)^{1/2} = \left(2\,e^{j(\frac{3\pi}{2} + 2k\pi)}\right)^{1/2} = \sqrt{2}\,e^{j(\frac{3\pi}{4} + k\pi)},\qquad k = 0,1,2,\dots,$$
$$= \begin{cases} \sqrt{2}\,e^{j\frac{3\pi}{4}}, & \text{for } k = 0,2,4,\dots;\\[2pt] \sqrt{2}\,e^{j(\frac{3\pi}{4}+\pi)}, & \text{for } k = 1,3,5,\dots; \end{cases}$$
$$= \begin{cases} \sqrt{2}\left(\cos\frac{3\pi}{4} + j\sin\frac{3\pi}{4}\right), & \text{for } k = 0,2,4,\dots;\\[2pt] \sqrt{2}\left(\cos\frac{7\pi}{4} + j\sin\frac{7\pi}{4}\right), & \text{for } k = 1,3,5,\dots. \end{cases}$$
So the square root takes two distinct values; it is not uniquely defined.

Problem 1.5.10. Using de Moivre's formula, derive the complex exponential representation (1.4.5) of the signal $x(t)$ given by the cosine series representation $x(t) = \sum_{m=0}^{M} c_m \cos(2\pi m f_0 t + \theta_m)$.

Solution.
$$x(t) = c_0 + \sum_{m=1}^{M} c_m \cos(2\pi m f_0 t + \theta_m)$$
$$= c_0\,e^{j2\pi 0 f_0 t} + \sum_{m=1}^{M} c_m\cdot\frac{1}{2}\left(e^{j(2\pi m f_0 t + \theta_m)} + e^{-j(2\pi m f_0 t + \theta_m)}\right)$$
$$= c_0\,e^{j2\pi 0 f_0 t} + \sum_{m=1}^{M}\frac{c_m}{2}\,e^{j(2\pi m f_0 t + \theta_m)} + \sum_{m=1}^{M}\frac{c_m}{2}\,e^{-j(2\pi m f_0 t + \theta_m)}$$
$$= \sum_{m=1}^{M}\frac{c_m}{2}\,e^{-j\theta_m}e^{-j2\pi m f_0 t} + c_0\,e^{j2\pi 0 f_0 t} + \sum_{m=1}^{M}\frac{c_m}{2}\,e^{j\theta_m}e^{j2\pi m f_0 t}.$$

Problem 1.5.12. Using a computing platform such as Mathematica, Maple, or Matlab, produce plots of the signals
$$x_M(t) = \frac{\pi}{4} + \sum_{m=1}^{M}\left(\frac{(-1)^m - 1}{\pi m^2}\cos mt - \frac{(-1)^m}{m}\sin mt\right),$$
for $M = 0,1,2,3,\dots,9$ and $-2\pi < t < 2\pi$. Then produce their plots in the frequency-domain representation. Calculate their power (again, using Mathematica, Maple, or Matlab, if you wish). Produce plots showing how power is distributed over different frequencies for each of them. Write down your observations. What is likely to happen with the plots of these signals as we take more and more terms of the above series, that is, as $M \to \infty$? Is there a limit signal $x_\infty(t) = \lim_{M\to\infty} x_M(t)$? What could it be?
Partial solution. Sample Mathematica code for the plot (note that the constant term $\pi/4$ must be included):

M = 9;
Plot[
  Pi/4 + Sum[
    (((-1)^m - 1)/(Pi*m^2))*Cos[m*t] - (((-1)^m)/m)*Sin[m*t],
    {m, M}],
  {t, -2*Pi, 2*Pi}]

Sample power calculation:

M2 = 2;
N[Integrate[(1/(2*Pi))*
  Abs[Pi/4 +
    Sum[(((-1)^m - 1)/(Pi*m^2))*Cos[m*u] - (((-1)^m)/m)*
      Sin[m*u], {m, M2}]]^2, {u, 0, 2*Pi}], 5]

1.4445
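The same power can be cross-checked independently (this is not part of the book's solution) via Parseval's relation: for $x(t) = a_0 + \sum_m (a_m\cos mt + b_m\sin mt)$, the average power over one period is $a_0^2 + \sum_m (a_m^2 + b_m^2)/2$. A Python sketch with the coefficients of $x_M$:

```python
import math

def power(M):
    # Parseval: DC term squared plus half the squared amplitude of each harmonic
    total = (math.pi / 4) ** 2
    for m in range(1, M + 1):
        a = ((-1) ** m - 1) / (math.pi * m ** 2)  # cosine coefficient
        b = -((-1) ** m) / m                      # sine coefficient
        total += (a ** 2 + b ** 2) / 2
    return total

print(round(power(2), 4))  # → 1.4445, matching the Mathematica integral
```

The agreement confirms the numeric value above without doing any integration.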

Problem 1.5.13. Use the analog-to-digital conversion formula (1.1.1) to digitize signals from Problem 1.5.12 for a variety of sampling periods and resolutions. Plot the results.

Solution. We provide sample Mathematica code:

M = 9;
x[t_] := Pi/4 + Sum[(((-1)^m - 1)/(Pi*m^2))*Cos[m*t] -
  (((-1)^m)/m)*Sin[m*t], {m, M}]
T = 0.1;
R = 0.05;
xDigital = Table[R*Floor[x[m T]/R], {m, 1, 50}];
ListPlot[xDigital]
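The same sampling-and-quantization step can be sketched in Python (an independent cross-check, with the same sampling period T and resolution R; the quantizer rounds each sample down to a multiple of R):

```python
import math

def x(t, M=9):
    # Fourier partial sum of the signal from Problem 1.5.12
    return math.pi / 4 + sum(
        ((-1) ** m - 1) / (math.pi * m ** 2) * math.cos(m * t)
        - ((-1) ** m) / m * math.sin(m * t)
        for m in range(1, M + 1))

T, R = 0.1, 0.05  # sampling period and resolution
x_digital = [R * math.floor(x(m * T) / R) for m in range(1, 51)]
print(x_digital[:5])
```

Each quantized sample is an integer multiple of R and never exceeds the true sample value, which is exactly what the floor-based A/D formula guarantees.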

Problem 1.5.14. Use your computing platform to produce a discrete-time signal


consisting of a string of random numbers uniformly distributed on the interval [0,1].
For example, in Mathematica, the command
Table[Random[], {20}]
will produce the following string of 20 random numbers between 0 and 1:
{0.175245, 0.552172, 0.471142, 0.910891, 0.219577,
0.198173, 0.667358, 0.226071, 0.151935, 0.42048,
0.264864, 0.330096, 0.346093, 0.673217, 0.409135,
0.265374, 0.732021, 0.887106, 0.697428, 0.7723}

Use the “random numbers” string as additive noise to produce random versions
of the digitized signals from Problem 1.5.12. Follow the example described in
Fig. 1.1.3. Experiment with different string lengths and various noise amplitudes.
Then center the noise around zero and repeat your experiments.

Solution. We provide sample Mathematica code:

M = 9;
x[t_] := Pi/4 + Sum[(((-1)^m - 1)/(Pi*m^2))*Cos[m*t] -
  (((-1)^m)/m)*Sin[m*t], {m, M}]
T = 0.1;
R = 0.05;
xDigital = Table[R*Floor[x[m T]/R], {m, 1, 50}];
ListPlot[xDigital]
Noise = Table[Random[], {50}];
noisysig = Table[Noise[[t]] + xDigital[[t]], {t, 1, 50}];
ListPlot[noisysig]
Centernoise = Table[Random[] - 0.5, {50}];
noisysig1 = Table[Centernoise[[t]] + xDigital[[t]], {t, 1, 50}];
ListPlot[noisysig1]

Chapter 2

Problem 2.7.1. Prove that the system of real harmonic oscillations
$$\sin(2\pi m f_0 t),\quad \cos(2\pi m f_0 t),\qquad m = 1,2,\dots,$$
forms an orthogonal system. Is the system normalized? Is the system complete? Use the above information to derive formulas for the coefficients in the Fourier expansions in terms of sines and cosines. Model this derivation on the calculations in Sect. 2.1.

Solution. First of all, we have to compute the scalar products
$$\frac{1}{P}\int_0^P \sin(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt,\quad \frac{1}{P}\int_0^P \sin(2\pi m f_0 t)\sin(2\pi n f_0 t)\,dt,\quad \frac{1}{P}\int_0^P \cos(2\pi m f_0 t)\cos(2\pi n f_0 t)\,dt,$$
where $P = 1/f_0$. Using the trigonometric formulas listed in Sect. 1.2, we obtain
$$\frac{1}{P}\int_0^P \sin(2\pi mt/P)\cos(2\pi nt/P)\,dt = \frac{1}{2P}\int_0^P \big(\sin(2\pi(m-n)t/P) + \sin(2\pi(m+n)t/P)\big)\,dt = 0\quad\text{for all } m,n;$$
$$\frac{1}{P}\int_0^P \cos(2\pi mt/P)\cos(2\pi nt/P)\,dt = \frac{1}{2P}\int_0^P \big(\cos(2\pi(m-n)t/P) + \cos(2\pi(m+n)t/P)\big)\,dt = \begin{cases}\frac{1}{2}, & \text{if } m = n;\\ 0, & \text{if } m \ne n.\end{cases}$$
Similarly,
$$\frac{1}{P}\int_0^P \sin(2\pi mt/P)\sin(2\pi nt/P)\,dt = \begin{cases}\frac{1}{2}, & m = n;\\ 0, & m \ne n.\end{cases}$$
Therefore, we conclude that the given system is orthogonal but not normalized. It can be normalized by multiplying each sine and cosine by $\sqrt{2}$. It is not complete, but it becomes complete if we add to it the function identically equal to 1, which is obviously orthogonal to all the sines and cosines.
Using the orthogonality property of the above real trigonometric system, we arrive at the following Fourier expansion for a periodic signal $x(t)$:
$$x(t) = a_0 + \sum_{m=1}^{\infty}\big(a_m\cos(2\pi m f_0 t) + b_m\sin(2\pi m f_0 t)\big),$$
with coefficients
$$a_0 = \frac{1}{P}\int_0^P x(t)\,dt,\qquad a_m = \frac{2}{P}\int_0^P x(t)\cos(2\pi mt/P)\,dt,\qquad b_m = \frac{2}{P}\int_0^P x(t)\sin(2\pi mt/P)\,dt,$$
for $m = 1,2,\dots$.
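These scalar products can be verified numerically as well. A Python sketch (an independent check, not part of the book's solution), using plain Riemann sums over one period with $P = 1$:

```python
import math

P = 1.0
N = 20_000  # Riemann-sum resolution

def inner(f, g):
    # (1/P) * integral over one period of f(t) g(t) dt
    dt = P / N
    return sum(f(i * dt) * g(i * dt) for i in range(N)) * dt / P

s = lambda m: (lambda t: math.sin(2 * math.pi * m * t / P))
c = lambda m: (lambda t: math.cos(2 * math.pi * m * t / P))

print(inner(s(2), c(3)))  # ~ 0   (sine and cosine are orthogonal)
print(inner(c(2), c(2)))  # ~ 1/2 (so the system is not normalized)
print(inner(c(2), c(5)))  # ~ 0   (distinct cosines are orthogonal)
```

The value 1/2, rather than 1, for the self-products is exactly why the $2/P$ factors appear in the coefficient formulas above.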

Problem 2.7.2. Using the results from Problem 2.7.1, find formulas for the amplitudes $c_m$ and phases $\theta_m$ in the expansion of a periodic signal $x(t)$ in terms of only cosines, $x(t) = \sum_{m=0}^{\infty} c_m\cos(2\pi m f_0 t + \theta_m)$.

Solution. Obviously, $c_0 = a_0$. To find the connection among $a_m, b_m$ and $c_m, \theta_m$, we expand $\cos(2\pi m f_0 t + \theta_m)$ and match coefficients, which gives the system
$$a_m = c_m\cos\theta_m,\qquad b_m = -c_m\sin\theta_m.$$
This gives us
$$\theta_m = \arctan\left(-\frac{b_m}{a_m}\right),\qquad c_m = \sqrt{a_m^2 + b_m^2}.$$
Problem 2.7.9. Find the complex and real Fourier series for the periodic signal $x(t) = |\sin t|$. Produce graphs comparing the signal $x(t)$ and its finite Fourier sums of order 1, 3, and 6.

Solution. The first observation is that $x(t)$ has period $\pi$. So
$$z_m = \frac{1}{\pi}\int_0^\pi |\sin t|\,e^{-2jmt}\,dt = \frac{1}{\pi}\int_0^\pi \sin t\cdot e^{-2jmt}\,dt$$
$$= \frac{1}{\pi}\int_0^\pi \frac{e^{jt} - e^{-jt}}{2j}\,e^{-2jmt}\,dt = \frac{1}{2\pi j}\int_0^\pi \big(e^{jt(1-2m)} - e^{-jt(1+2m)}\big)\,dt$$
$$= \frac{1}{2\pi j}\left(\frac{e^{j\pi(1-2m)} - 1}{j(1-2m)} + \frac{e^{-j\pi(1+2m)} - 1}{j(1+2m)}\right) = \frac{2}{\pi(1-4m^2)},$$
because $e^{j\pi} = e^{-j\pi} = -1$ and $e^{2\pi jm} = 1$ for all $m$. Therefore, the sought-after complex Fourier expansion is
$$x(t) = \frac{2}{\pi}\sum_{m=-\infty}^{\infty}\frac{1}{1-4m^2}\,e^{j2mt}.$$
We observe that, for any $m = \dots,-1,0,1,\dots$, we have $z_m = z_{-m}$. Pairing up complex exponentials with exponents of opposite signs, and using de Moivre's formula, we arrive at the real Fourier expansion that contains only cosine functions:
$$x(t) = \frac{2}{\pi}\left(1 + 2\sum_{m=1}^{\infty}\frac{\cos(2mt)}{1-4m^2}\right).$$
In particular, the partial sums of orders 1 and 3 are, respectively,
$$s_1(t) = \frac{2}{\pi}\left(1 - \frac{2\cos 2t}{3}\right),$$
$$s_3(t) = \frac{2}{\pi}\left(1 - \frac{2\cos 2t}{3} - \frac{2\cos 4t}{15} - \frac{2\cos 6t}{35}\right).$$

The Mathematica code and the output showing $x(t)$, $s_1(t)$, and $s_6(t)$:

x[t_] := Abs[Sin[t]]
pl = Plot[x[t], {t, -2*Pi, 2*Pi}, Frame -> True,
  GridLines -> Automatic, PlotStyle -> {Thickness[0.01]}]

[Plot of $x(t) = |\sin t|$ over $-2\pi < t < 2\pi$.]

sum[t_, M_] := (2/Pi)*(1 + Sum[(2/(1 - 4*m^2))*Cos[2*m*t], {m, 1, M}])

s1 = Plot[sum[t, 1], {t, -2*Pi, 2*Pi}, Frame -> True,
  GridLines -> Automatic, PlotStyle -> {Thickness[0.01]}]

[Plot of the partial sum $s_1(t)$.]

s6 = Plot[sum[t, 6], {t, -2*Pi, 2*Pi}, Frame -> True,
  GridLines -> Automatic, PlotStyle -> {Thickness[0.01]}]

[Plot of the partial sum $s_6(t)$.]

Problem 2.7.13. (a) The nonperiodic signal $x(t)$ is defined as equal to 1/2 on the interval $[-1,+1]$ and 0 elsewhere. Plot it and calculate its Fourier transform $X(f)$. Plot the latter.
(b) The nonperiodic signal $y(t)$ is defined as equal to $(t+2)/4$ on the interval $[-2,0]$, $(-t+2)/4$ on the interval $[0,2]$, and 0 elsewhere. Plot it and calculate its Fourier transform $Y(f)$. Plot the latter.
(c) Compare the Fourier transforms $X(f)$ and $Y(f)$. What conclusion do you draw about the relationship of the original signals $x(t)$ and $y(t)$?
Solution. (a) The Fourier transform of $x(t)$ is
$$X(f) = \int_{-1}^{+1}\frac{1}{2}\,e^{-j2\pi ft}\,dt = \frac{e^{j2\pi f} - e^{-j2\pi f}}{4j\pi f} = \frac{\sin 2\pi f}{2\pi f}.$$
(b) Integrating by parts, the Fourier transform of $y(t)$ is
$$Y(f) = \int_{-2}^{0}\frac{t+2}{4}\,e^{-j2\pi ft}\,dt + \int_{0}^{+2}\frac{-t+2}{4}\,e^{-j2\pi ft}\,dt$$
$$= \frac{1}{4}\left(-\frac{2}{j2\pi f} - \frac{1}{(j2\pi f)^2}\big(1 - e^{j2\pi f\cdot 2}\big)\right) + \frac{1}{4}\left(\frac{2}{j2\pi f} + \frac{1}{(j2\pi f)^2}\big(e^{-j2\pi f\cdot 2} - 1\big)\right)$$
$$= -\frac{1}{4}\,\frac{1}{(j2\pi f)^2}\left(2 - e^{j2\pi f\cdot 2} - e^{-j2\pi f\cdot 2}\right) = \frac{1 - \cos 4\pi f}{8\pi^2 f^2} = \left(\frac{\sin 2\pi f}{2\pi f}\right)^2.$$
(c) So we have that $Y(f) = X^2(f)$. This means that the signal $y(t)$ is the convolution of the signal $x(t)$ with itself: $y(t) = (x * x)(t)$.
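The conclusion $Y(f) = X^2(f)$ can also be checked numerically. A Python sketch (an independent check; both signals are real and even, so plain cosine integrals via a midpoint rule suffice):

```python
import math

def X(f, N=4000):
    # FT of the box: integral_{-1}^{1} (1/2) cos(2 pi f t) dt
    h = 2.0 / N
    return sum(0.5 * math.cos(2 * math.pi * f * (-1 + (i + 0.5) * h))
               for i in range(N)) * h

def Y(f, N=4000):
    # FT of the triangle y(t) = (2 - |t|)/4 on [-2, 2]
    h = 4.0 / N
    def y(t):
        return (2 - abs(t)) / 4 if abs(t) <= 2 else 0.0
    return sum(y(-2 + (i + 0.5) * h) * math.cos(2 * math.pi * f * (-2 + (i + 0.5) * h))
               for i in range(N)) * h

for f in (0.2, 0.7, 1.3):
    print(round(Y(f), 6), round(X(f) ** 2, 6))  # pairs should agree
```

Numerically squaring the sinc transform of the box reproduces the transform of the triangle, which is the convolution theorem at work.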
Problem 2.7.18. Utilize the Fourier transform (in the space variable $z$) to find a solution of the diffusion (heat) partial differential equation
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial z^2},$$
for a function $u(t,z)$ satisfying the initial condition $u(0,z) = \delta(z)$. The solution of the above equation is often used to describe the temporal evolution of the density of a diffusing substance.
Solution. Let us denote the Fourier transform (in $z$) of $u(t,z)$ by
$$U(t,f) = \int_{-\infty}^{\infty} u(t,z)\,e^{-j2\pi fz}\,dz.$$
Then, for the second derivative,
$$\frac{\partial^2 u(t,z)}{\partial z^2} \mapsto (j2\pi f)^2\,U(t,f) = -4\pi^2 f^2\,U(t,f).$$
So taking the Fourier transform of both sides of the diffusion equation gives the equation
$$\frac{\partial}{\partial t}U(t,f) = -4\pi^2 f^2\,U(t,f),$$
which is now just an ordinary linear differential equation in the variable $t$, with the obvious exponential (in $t$) solution
$$U(t,f) = C\,e^{-4\pi^2 f^2 t},$$
where $C$ is a constant to be matched later to the initial condition $u(0,z) = \delta(z)$.
Taking the inverse Fourier transform gives
$$u(t,z) = \frac{1}{\sqrt{4\pi t}}\,e^{-z^2/(4t)}.$$
Indeed, by completing the square,
$$\int_{-\infty}^{\infty} U(t,f)\,e^{j2\pi fx}\,df = C\int_{-\infty}^{\infty} e^{-4\pi^2 f^2 t}\,e^{j2\pi fx}\,df = C\,e^{-x^2/(4t)}\int_{-\infty}^{\infty} e^{-4\pi^2 t\,(f - jx/(4\pi t))^2}\,df,$$
with the last (Gaussian) integral being equal to $1/\sqrt{4\pi t}$. A verification of the initial condition gives $C = 1$.
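A quick numerical sanity check (not part of the book's derivation): the resulting heat kernel should integrate to 1 for every $t > 0$, consistent with the delta initial condition spreading as a probability density. A Python sketch:

```python
import math

def u(t, z):
    # Heat kernel: inverse Fourier transform of exp(-4 pi^2 f^2 t)
    return math.exp(-z * z / (4 * t)) / math.sqrt(4 * math.pi * t)

def total_mass(t, L=50.0, N=20_000):
    # integral of u(t, z) dz over [-L, L], midpoint rule
    h = 2 * L / N
    return sum(u(t, -L + (i + 0.5) * h) for i in range(N)) * h

for t in (0.1, 1.0, 10.0):
    print(t, round(total_mass(t), 6))  # each value should be ~ 1
```

Conservation of total mass over time is exactly what one expects of a diffusing substance.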

Chapter 3

Problem 3.7.2. Calculate the probability that a random quantity uniformly distributed over the interval $[0,3]$ takes values between 1 and 3. Do the same calculation for the exponentially distributed random quantity with parameter $\mu = 1.5$, and the Gaussian random quantity with parameters $\mu = 1.5$, $\sigma^2 = 1$.

Solution. (a) Since $X$ has a uniform distribution on the interval $[0,3]$, the value of the p.d.f. is 1/3 between 0 and 3 and 0 elsewhere. So
$$\mathbf{P}\{1 \le X \le 3\} = (3-1)\cdot\frac{1}{3} = \frac{2}{3}.$$
(b)
$$\int_1^3 \frac{1}{\mu}\,e^{-x/\mu}\,dx = \int_1^3 \frac{2}{3}\,e^{-2x/3}\,dx = -\big(e^{-2\cdot 3/3} - e^{-2/3}\big) = e^{-2/3} - e^{-2} \approx 0.378.$$
(c) We can solve this problem using the table for the c.d.f. $\Phi$ of the standard normal random quantity:
$$\mathbf{P}(1 \le X \le 3) = \mathbf{P}(1 - 1.5 \le X - \mu \le 3 - 1.5) = \mathbf{P}(-0.5 \le Z \le 1.5)$$
$$= \Phi(1.5) - \Phi(-0.5) = 0.9332 - (1 - \Phi(0.5)) = 0.9332 - 1 + 0.6915 = 0.6247.$$

Problem 3.7.4. The p.d.f. of a random variable $X$ is given by the quadratic function $f_X(x) = ax(1-x)$ for $0 < x < 1$, and is zero outside the unit interval. Find $a$ from the normalization condition and then calculate $F_X(x)$, $\mathbf{E}X$, $\mathrm{Var}(X)$, $\mathrm{Std}(X)$, the $n$th central moment, and $\mathbf{P}(0.4 < X < 0.9)$. Graph $f_X(x)$ and $F_X(x)$.

Solution. (a) We know that for the p.d.f. of any random quantity, we have
$$\int_{-\infty}^{\infty} f_X(x)\,dx = 1.$$
So
$$1 = \int_0^1 ax(1-x)\,dx = \frac{a}{6}.$$
Thus, the constant $a = 6$.
(b) To find the c.d.f., we will use the definition
$$F_X(x) = \int_{-\infty}^{x} f_X(y)\,dy.$$
In our case, when $0 < x < 1$,
$$F_X(x) = \int_0^x 6y(1-y)\,dy = x^2(3 - 2x).$$
Finally,
$$F_X(x) = \begin{cases} 0, & \text{for } x < 0;\\ x^2(3-2x), & \text{for } 0 \le x < 1;\\ 1, & \text{for } x \ge 1. \end{cases}$$
(c)
$$\mathbf{E}X = \int_0^1 6x^2(1-x)\,dx = \frac{1}{2},$$
$$\mathrm{Var}(X) = \mathbf{E}(X^2) - (\mathbf{E}X)^2 = \int_0^1 6x^3(1-x)\,dx - \frac{1}{4} = \frac{3}{10} - \frac{1}{4} = 0.05,$$
$$\mathrm{Std}(X) = \sqrt{\mathrm{Var}(X)} = \sqrt{0.05} \approx 0.224.$$
(d) The $n$th central moment is
$$\int_0^1 (x - 0.5)^n\,6x(1-x)\,dx = 6\int_0^1 \sum_{k=0}^{n}\binom{n}{k}x^k\left(-\frac{1}{2}\right)^{n-k} x(1-x)\,dx$$
$$= 6\sum_{k=0}^{n}\binom{n}{k}\left(-\frac{1}{2}\right)^{n-k}\int_0^1 x^{k+1}(1-x)\,dx = 6\sum_{k=0}^{n}\binom{n}{k}\left(-\frac{1}{2}\right)^{n-k}\frac{1}{6 + 5k + k^2}.$$
(e)
$$\mathbf{P}(0.4 < X < 0.9) = \int_{0.4}^{0.9} 6x(1-x)\,dx = 0.62.$$
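All of these numbers can be confirmed with a small script. A Python sketch (an independent cross-check, using a midpoint rule for the integrals):

```python
import math

def f(x):
    # p.d.f. f_X(x) = 6 x (1 - x) on (0, 1)
    return 6 * x * (1 - x) if 0 < x < 1 else 0.0

def integrate(g, a, b, N=20_000):
    h = (b - a) / N
    return sum(g(a + (i + 0.5) * h) for i in range(N)) * h

print(integrate(f, 0, 1))                                # ~ 1    (normalization)
print(integrate(lambda x: x * f(x), 0, 1))               # ~ 0.5  (mean)
print(integrate(lambda x: (x - 0.5) ** 2 * f(x), 0, 1))  # ~ 0.05 (variance)
print(integrate(f, 0.4, 0.9))                            # ~ 0.62
```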

Problem 3.7.6. Find the c.d.f. and p.d.f. of the random quantity $Y = \tan X$, where $X$ is uniformly distributed over the interval $(-\pi/2, \pi/2)$. Find a physical (geometric) interpretation of this result.

Solution. The p.d.f. $f_X(x)$ is equal to $1/\pi$ for $x \in (-\pi/2, \pi/2)$ and 0 elsewhere. So the c.d.f. is
$$F_X(x) = \begin{cases} 0, & \text{for } x \le -\pi/2;\\ (1/\pi)(x + \pi/2), & \text{for } x \in (-\pi/2, \pi/2);\\ 1, & \text{for } x \ge \pi/2. \end{cases}$$
Hence, since $\tan$ is increasing on $(-\pi/2,\pi/2)$,
$$F_Y(y) = \mathbf{P}(Y \le y) = \mathbf{P}(\tan X \le y) = \mathbf{P}(X \le \arctan y) = F_X(\arctan y) = \frac{1}{\pi}\left(\arctan y + \frac{\pi}{2}\right).$$
The p.d.f. is
$$f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{d}{dy}\,\frac{1}{\pi}\left(\arctan y + \frac{\pi}{2}\right) = \frac{1}{\pi(1+y^2)}.$$
This p.d.f. is often called the Cauchy probability density function.
A physical interpretation: a particle is emitted from the origin of the $(x,y)$-plane with a uniform distribution of directions in the half-plane $x > 0$. The p.d.f. of the random quantity $Y$ describes the probability distribution of the locations at which the particles hit the vertical screen located at $x = 1$.

Problem 3.7.13. A random quantity $X$ has an even p.d.f. $f_X(x)$ of the triangular shape shown in Sect. 3.7.
(a) How many parameters do you need to describe this p.d.f.? Find an explicit analytic formula for the p.d.f. $f_X(x)$ and the c.d.f. $F_X(x)$. Graph both.
(b) Find the expectation and variance of $X$.
(c) Let $Y = X^3$. Find the p.d.f. $f_Y(y)$ and graph it.

Solution. (a) Notice that the triangle is symmetric about the line $x = 0$. Let us assume that the vertices of the triangle have the coordinates $A(-a, 0)$, $B(a, 0)$, $C(0, c)$. Then the p.d.f. is represented by the line $y = (c/a)x + c$ on the interval $[-a, 0]$ and by $y = -(c/a)x + c$ on the interval $[0, a]$. So we need at most two parameters.
Next, the normalization condition says that the area under the p.d.f. is 1, so necessarily $ac = 1 \Rightarrow c = 1/a$. Therefore, one parameter actually suffices, and our one-parameter family of p.d.f.s has the following analytic description:
$$f_X(x) = \begin{cases} 0, & \text{for } x < -a;\\[2pt] \dfrac{x}{a^2} + \dfrac{1}{a}, & \text{for } -a \le x < 0;\\[2pt] -\dfrac{x}{a^2} + \dfrac{1}{a}, & \text{for } 0 \le x < a;\\[2pt] 0, & \text{for } x \ge a. \end{cases}$$
The corresponding c.d.f. is as follows: if $x < -a$, then $F_X(x) = 0$; if $-a \le x < 0$, then
$$F_X(x) = \int_{-a}^{x}\left(\frac{t}{a^2} + \frac{1}{a}\right)dt = \frac{x^2}{2a^2} + \frac{x}{a} + \frac{1}{2};$$
if $0 \le x < a$, then
$$F_X(x) = \frac{1}{2} + \int_0^x\left(-\frac{t}{a^2} + \frac{1}{a}\right)dt = \frac{1}{2} - \frac{x^2}{2a^2} + \frac{x}{a};$$
if $x \ge a$, then $F_X(x) = 1$.
(b)
$$\mathbf{E}X = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_{-a}^{0} x\left(\frac{x}{a^2} + \frac{1}{a}\right)dx + \int_0^a x\left(-\frac{x}{a^2} + \frac{1}{a}\right)dx = 0.$$
Of course, the above result can be obtained without any integration by observing that the p.d.f. is an even function, symmetric about the origin.
$$\mathrm{Var}\,X = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_{-a}^{0} x^2\left(\frac{x}{a^2} + \frac{1}{a}\right)dx + \int_0^a x^2\left(-\frac{x}{a^2} + \frac{1}{a}\right)dx = \frac{a^2}{6}.$$
(c) The function $y = g(x) = x^3$ is monotone; therefore there exists an inverse function, which in this case is $x = g^{-1}(y) = y^{1/3}$. The derivative is $g'(x) = 3x^2$, so $g'(g^{-1}(y)) = 3y^{2/3}$. Then [see (3.1.12)]
$$f_Y(y) = \frac{f_X(g^{-1}(y))}{g'(g^{-1}(y))} = \begin{cases} 0, & \text{for } y < -a^3;\\[2pt] \left(\dfrac{y^{1/3}}{a^2} + \dfrac{1}{a}\right)\dfrac{1}{3y^{2/3}}, & \text{for } -a^3 \le y < 0;\\[2pt] \left(-\dfrac{y^{1/3}}{a^2} + \dfrac{1}{a}\right)\dfrac{1}{3y^{2/3}}, & \text{for } 0 < y < a^3;\\[2pt] 0, & \text{for } y \ge a^3. \end{cases}$$
Here is the needed Mathematica code producing the desired plots:

(* pdf, a = 2 *)
H[x_] := If[x < 0, 0, 1]
f[a_, b_, x_] := H[x - a] - H[x - b];
ff[x_, a_] := (x/a^2 + 1/a)*f[-a, 0, x] +
  (-x/a^2 + 1/a)*f[0, a, x]
Plot[ff[x, 2], {x, -3, 3}]

F[x_, a_] := (x^2/(2*a^2) + x/a + 1/2)*f[-a, 0, x] +
  (1/2 - x^2/(2*a^2) + x/a)*f[0, a, x]
Plot[F[x, 2], {x, -4, 4}]

Problem 3.7.15. Verify the Cauchy–Schwartz inequality (3.3.18). Hint: Take $Z = (X - \mathbf{E}X)/\sigma(X)$ and $W = (Y - \mathbf{E}Y)/\sigma(Y)$, and consider the discriminant of the expression $\mathbf{E}(Z + xW)^2$. The latter is quadratic in the variable $x$ and necessarily always nonnegative, so it can have at most one root.

Solution. The quadratic form in $x$,
$$0 \le \mathbf{E}(Z + xW)^2 = \mathbf{E}Z^2 + 2x\,\mathbf{E}(ZW) + x^2\,\mathbf{E}W^2 = p(x),$$
is nonnegative for any $x$. Thus, the quadratic equation $p(x) = 0$ has at most one solution (root). Therefore, the discriminant of this equation must be nonpositive, that is,
$$\big(2\,\mathbf{E}(ZW)\big)^2 - 4\,\mathbf{E}W^2\,\mathbf{E}Z^2 \le 0,$$
which gives the basic form of the Cauchy–Schwarz inequality,
$$|\mathbf{E}(ZW)| \le \sqrt{\mathbf{E}W^2}\cdot\sqrt{\mathbf{E}Z^2}.$$
Finally, substitute for $Z$ and $W$ as indicated in the above hint to obtain the desired result.

Problem 3.7.24. Complete the following sketch of the proof of the central limit theorem from Sect. 3.5. Start with the simplifying observation (based on Problem 3.7.23) that it is sufficient to consider random quantities $X_n$, $n = 1,2,\dots$, with expectations equal to 0 and variances equal to 1.
(a) Define $\varphi_X(u)$ as the inverse Fourier transform of the distribution of $X$:
$$\varphi_X(u) = \mathbf{E}e^{juX} = \int_{-\infty}^{\infty} e^{jux}\,dF_X(x).$$
Find $\varphi_X'(0)$ and $\varphi_X''(0)$. In the statistical literature, $\varphi_X(u)$ is called the characteristic function of the random quantity $X$. Essentially, it completely determines the probability distribution of $X$ via the Fourier transform (inverse of the inverse Fourier transform).
(b) Calculate $\varphi_X(u)$ for the Gaussian $N(0,1)$ random quantity. Note that its functional shape is the same as that of the $N(0,1)$ p.d.f. This fact is the crucial reason for the validity of the CLT.
(c) Prove that, for independent random quantities $X$ and $Y$,
$$\varphi_{X+Y}(u) = \varphi_X(u)\cdot\varphi_Y(u).$$
(d) Utilizing (c), calculate
$$\varphi_{\sqrt{n}(\bar{X} - \mu_X)/\mathrm{Std}(X)}(u).$$
Then find its limit as $n \to \infty$. Compare it with the characteristic function of the Gaussian $N(0,1)$ random quantity. (Hint: It is easier to work here with the logarithm of the above transform.)

Solution. Indeed, $(X_k - \mathbf{E}X_k)/\mathrm{Std}(X_k)$ has expectation 0 and variance 1, so it is enough to consider the problem for such random quantities. Then:
(a)
$$\varphi_X'(0) = \frac{d}{du}\,\mathbf{E}e^{juX}\Big|_{u=0} = j\,\mathbf{E}Xe^{juX}\Big|_{u=0} = j\,\mathbf{E}X = 0,$$
$$\varphi_X''(0) = \frac{d}{du}\,j\,\mathbf{E}Xe^{juX}\Big|_{u=0} = j^2\,\mathbf{E}X^2e^{juX}\Big|_{u=0} = -\mathbf{E}X^2 = -1.$$
(b) If $Z$ is an $N(0,1)$ random quantity, then
$$\varphi_Z(u) = \int_{-\infty}^{\infty} e^{jux}\,\frac{e^{-x^2/2}}{\sqrt{2\pi}}\,dx = e^{-u^2/2}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(x^2 - 2jux + (ju)^2)}\,\frac{dx}{\sqrt{2\pi}}$$
$$= e^{-u^2/2}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(x - ju)^2}\,\frac{dx}{\sqrt{2\pi}} = e^{-u^2/2}\int_{-\infty}^{\infty} e^{-\frac{1}{2}z^2}\,\frac{dz}{\sqrt{2\pi}} = e^{-u^2/2},$$
by changing the variable $x - ju \mapsto z$ in the penultimate integral, and because the Gaussian density in the last integral integrates to 1.
(c) Indeed, if $X$ and $Y$ are independent, then
$$\varphi_{X+Y}(u) = \mathbf{E}e^{ju(X+Y)} = \mathbf{E}\big(e^{juX}\cdot e^{juY}\big) = \mathbf{E}e^{juX}\cdot\mathbf{E}e^{juY} = \varphi_X(u)\cdot\varphi_Y(u),$$
because the expectation of a product of independent random quantities is the product of their expectations.
(d) Observe first that
$$\frac{\sqrt{n}(\bar{X} - \mu_X)}{\mathrm{Std}(X)} = \frac{1}{\sqrt{n}}\,(Y_1 + \dots + Y_n),$$
where
$$Y_1 = \frac{X_1 - \mu_X}{\mathrm{Std}(X)},\ \dots,\ Y_n = \frac{X_n - \mu_X}{\mathrm{Std}(X)},$$
so that, in particular, $Y_1,\dots,Y_n$ are independent and identically distributed with $\mathbf{E}Y_1 = 0$ and $\mathbf{E}Y_1^2 = 1$. Hence, using (a)–(c),
$$\varphi_{\sqrt{n}(\bar{X}-\mu_X)/\mathrm{Std}(X)}(u) = \varphi_{Y_1/\sqrt{n}+\dots+Y_n/\sqrt{n}}(u) = \varphi_{Y_1/\sqrt{n}}(u)\cdots\varphi_{Y_n/\sqrt{n}}(u) = \big[\varphi_{Y_1}(u/\sqrt{n})\big]^n.$$
Now, for each fixed but arbitrary $u$, instead of calculating the limit as $n \to \infty$ of the above characteristic functions, it will be easier to calculate the limit of their logarithm. Indeed, in view of de l'Hôpital's rule applied twice (differentiating with respect to $n$; explain why this is okay),
$$\lim_{n\to\infty}\log\varphi_{\sqrt{n}(\bar{X}-\mu_X)/\mathrm{Std}(X)}(u) = \lim_{n\to\infty} n\log\varphi_{Y_1}(u/\sqrt{n}) = \lim_{n\to\infty}\frac{\log\varphi_{Y_1}(u/\sqrt{n})}{1/n}$$
$$= \lim_{n\to\infty}\frac{\big(1/\varphi_{Y_1}(u/\sqrt{n})\big)\cdot\varphi_{Y_1}'(u/\sqrt{n})\cdot\big(-\tfrac{1}{2}u\,n^{-3/2}\big)}{-1/n^2} = \frac{u}{2}\lim_{n\to\infty}\frac{\varphi_{Y_1}'(u/\sqrt{n})}{1/\sqrt{n}}$$
(here we used $\varphi_{Y_1}(u/\sqrt{n}) \to \varphi_{Y_1}(0) = 1$)
$$= \frac{u}{2}\lim_{n\to\infty}\frac{\varphi_{Y_1}''(u/\sqrt{n})\cdot\big(-\tfrac{1}{2}u\,n^{-3/2}\big)}{-\tfrac{1}{2}n^{-3/2}} = \frac{u^2}{2}\,\varphi_{Y_1}''(0) = -\frac{u^2}{2},$$
because $\varphi_{Y_1}'(0) = 0$ and $\varphi_{Y_1}''(0) = -1$; see part (a). So, for the characteristic functions themselves,
$$\lim_{n\to\infty}\varphi_{\sqrt{n}(\bar{X}-\mu_X)/\mathrm{Std}(X)}(u) = e^{-u^2/2},$$
and we recognize the above limit as the characteristic function of the $N(0,1)$ random quantity; see part (b).
The above proof glosses over the issue of whether the convergence of characteristic functions indeed implies the convergence of the c.d.f.s of the corresponding random quantities. The relevant continuity theorem can be found in any of the mathematical probability theory textbooks listed in the Bibliographical Comments at the end of this volume.
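The convergence $[\varphi_{Y_1}(u/\sqrt{n})]^n \to e^{-u^2/2}$ can be watched numerically. A Python sketch (an illustration, not part of the book's proof) for $Y_1$ uniform on $[-\sqrt{3}, \sqrt{3}]$, which has mean 0, variance 1, and characteristic function $\sin(\sqrt{3}u)/(\sqrt{3}u)$:

```python
import math

def phi_uniform(u):
    # Characteristic function of the uniform distribution on [-sqrt(3), sqrt(3)]
    a = math.sqrt(3.0) * u
    return math.sin(a) / a if a != 0 else 1.0

def phi_sum(u, n):
    # Characteristic function of (Y_1 + ... + Y_n)/sqrt(n), by part (c)
    return phi_uniform(u / math.sqrt(n)) ** n

u = 1.0
for n in (1, 10, 100, 10000):
    print(n, round(phi_sum(u, n), 6))
print(round(math.exp(-u * u / 2), 6))  # Gaussian limit e^{-1/2}
```

Already for moderate $n$ the values settle near the Gaussian limit, illustrating the CLT mechanism at the level of characteristic functions.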

Chapter 4

Problem 4.3.1. Consider a random signal
$$X(t) = \sum_{k=1}^{n} A_k\cos\big(2\pi k f_0(t + \Theta_k)\big),$$
where $A_1, \Theta_1, \dots, A_n, \Theta_n$ are independent random variables of finite variance, and $\Theta_1,\dots,\Theta_n$ are uniformly distributed on the time interval $[0, P = 1/f_0]$. Is this signal stationary? Find its mean and autocovariance functions.

Solution. The mean value of the signal (we use the independence assumptions) is
$$\mathbf{E}X(t) = \mathbf{E}\big[A_1\cos 2\pi f_0(t+\Theta_1)\big] + \dots + \mathbf{E}\big[A_n\cos 2\pi n f_0(t+\Theta_n)\big]$$
$$= \mathbf{E}A_1\cdot\int_0^P\cos 2\pi f_0(t+\theta_1)\,\frac{d\theta_1}{P} + \dots + \mathbf{E}A_n\cdot\int_0^P\cos 2\pi n f_0(t+\theta_n)\,\frac{d\theta_n}{P} = 0.$$
The mean value doesn't depend on time $t$; thus the first requirement of stationarity is satisfied.
The autocorrelation function is
$$\gamma_X(t, t+\tau) = \mathbf{E}[X(t)X(t+\tau)]$$
$$= \mathbf{E}\left(\sum_{i=1}^{n} A_i\cos\big(2\pi i f_0(t+\Theta_i)\big)\cdot\sum_{k=1}^{n} A_k\cos\big(2\pi k f_0(t+\tau+\Theta_k)\big)\right)$$
$$= \sum_{i=1}^{n}\sum_{k=1}^{n}\mathbf{E}(A_iA_k)\cdot\mathbf{E}\big[\cos\big(2\pi i f_0(t+\Theta_i)\big)\cos\big(2\pi k f_0(t+\tau+\Theta_k)\big)\big]$$
$$= \sum_{i=1}^{n}\frac{\mathbf{E}A_i^2}{2}\cos(2\pi i f_0\tau),$$
because all the cross-terms are zero. The autocorrelation function thus depends only on $\tau$ (and not on $t$), so the second condition of stationarity is also satisfied.

Problem 4.3.2. Consider a random signal
$$X(t) = A\cos 2\pi f_0(t + \Theta),$$
where $A, \Theta$ are independent random variables, and $\Theta$ is uniformly distributed on the time interval $[0, P/3 = 1/(3f_0)]$. Is this signal stationary? Is the signal $Y(t) = X(t) - \mathbf{E}X(t)$ stationary? Find its mean and autocovariance functions.

Solution. The mean value of the signal is
$$\mathbf{E}X(t) = \mathbf{E}\big[A\cos 2\pi f_0(t+\Theta)\big] = \mathbf{E}A\cdot\int_0^{P/3}\cos 2\pi f_0(t+\theta)\,\frac{d\theta}{P/3}$$
$$= \frac{3\,\mathbf{E}A}{2\pi}\,\sin 2\pi f_0(t+\theta)\Big|_{\theta=0}^{P/3} = \frac{3\,\mathbf{E}A}{2\pi}\big(\sin 2\pi f_0(t+P/3) - \sin 2\pi f_0 t\big).$$
Since
$$\sin p - \sin q = 2\cos\frac{p+q}{2}\,\sin\frac{p-q}{2},$$
we finally get
$$\mathbf{E}X(t) = \frac{3\sqrt{3}}{2\pi}\,\mathbf{E}A\,\cos\left(2\pi f_0 t + \frac{\pi}{3}\right),$$
which clearly depends on $t$ in an essential way. Thus, the signal is not stationary.
The signal $Y(t) = X(t) - \mathbf{E}X(t)$ obviously has mean zero. Its autocovariance function is
$$\gamma_Y(t,s) = \mathbf{E}[X(t)X(s)] - \mathbf{E}X(t)\,\mathbf{E}X(s) = \mathbf{E}A^2\int_0^{P/3}\cos 2\pi f_0(t+\theta)\cos 2\pi f_0(s+\theta)\,\frac{3\,d\theta}{P} - \mathbf{E}X(t)\,\mathbf{E}X(s),$$
with $\mathbf{E}X(t)$ already calculated above. Since $\cos\alpha\cos\beta = [\cos(\alpha+\beta) + \cos(\alpha-\beta)]/2$, the integral in the first term equals
$$\frac{1}{2}\cos 2\pi f_0(t-s) + \frac{3}{8\pi}\left(\sin 2\pi f_0\Big(t+s+\frac{2}{3f_0}\Big) - \sin 2\pi f_0(t+s)\right).$$
Now, $\gamma_Y(t,s)$ can be easily calculated. Simplify the expression (and plot the ACvF) before you decide the stationarity issue for $Y(t)$.
Problem 4.3.8. Show that if $X_1, X_2, \dots, X_n$ are independent, exponentially distributed random quantities with identical p.d.f.s $e^{-x}$, $x \ge 0$, then their sum $Y_n = X_1 + X_2 + \dots + X_n$ has the p.d.f. $e^{-y}y^{n-1}/(n-1)!$, $y \ge 0$. Use the technique of characteristic functions (Fourier transforms) from Chap. 3. The random quantity $Y_n$ is said to have the gamma probability distribution with parameter $n$. Thus, the gamma distribution with parameter 1 is just the standard exponential distribution; see Example 4.1.4. Produce plots of gamma p.d.f.s with parameters $n = 2, 5, 20$, and 50. Comment on what you observe as $n$ increases.
Solution. The characteristic function (see Chap. 3) for each of the $X_i$'s is
$$\varphi_X(u) = \mathbf{E}e^{juX} = \int_0^\infty e^{jux}e^{-x}\,dx = \frac{1}{1-ju}.$$
In view of the independence of the $X_i$'s, the characteristic function of $Y_n$ is necessarily the $n$th power of the common characteristic function of the $X_i$'s:
$$\varphi_{Y_n}(u) = \mathbf{E}e^{ju(X_1+\dots+X_n)} = \mathbf{E}e^{juX_1}\cdots\mathbf{E}e^{juX_n} = \frac{1}{(1-ju)^n}.$$
So it suffices to verify that the characteristic function of the p.d.f. $f_n(y) = e^{-y}y^{n-1}/(n-1)!$, $y \ge 0$, is also of the form $(1-ju)^{-n}$. Indeed, integrating by parts, we obtain
$$\int_0^\infty e^{juy}e^{-y}\,\frac{y^{n-1}}{(n-1)!}\,dy = \frac{e^{(ju-1)y}}{ju-1}\,\frac{y^{n-1}}{(n-1)!}\bigg|_{y=0}^{\infty} + \frac{1}{1-ju}\int_0^\infty e^{(ju-1)y}\,\frac{y^{n-2}}{(n-2)!}\,dy.$$
The first term on the right-hand side is zero, so we get the recursive formula
$$\varphi_{f_n}(u) = \frac{1}{1-ju}\,\varphi_{f_{n-1}}(u),$$
which gives the desired result, since $\varphi_{f_1}(u) = \varphi_X(u) = (1-ju)^{-1}$.
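The identity $\varphi_{Y_n}(u) = (1-ju)^{-n}$ can be cross-checked numerically against the gamma p.d.f. A Python sketch (an independent check; the integral is truncated at $y = 60$, far into the exponential tail, and computed with a midpoint rule):

```python
import math

def phi_gamma(n, u, L=60.0, N=50_000):
    # integral_0^L e^{juy} e^{-y} y^{n-1}/(n-1)! dy
    h = L / N
    total = 0j
    for i in range(N):
        y = (i + 0.5) * h
        dens = math.exp(-y) * y ** (n - 1) / math.factorial(n - 1)
        total += complex(math.cos(u * y), math.sin(u * y)) * dens
    return total * h

for n in (1, 2, 5):
    exact = (1 - 1j * 0.7) ** (-n)
    print(n, abs(phi_gamma(n, 0.7) - exact))  # differences should be tiny
```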

Chapter 5

Problem 5.4.5. A stationary signal $X(t)$ has the autocovariance function
$$\gamma_X(\tau) = 16e^{-5|\tau|}\cos 20\pi\tau + 8\cos 10\pi\tau.$$
(a) Find the variance of the signal.
(b) Find the power spectrum density of this signal.
(c) Find the value of the spectral density at zero frequency.

Solution. (a)
$$\sigma^2 = \gamma_X(0) = 16 + 8 = 24.$$
(b) Let us denote the operation of the Fourier transform by $\mathcal{F}$. Then, writing perhaps a little informally, we have
$$S_X(f) = \int_{-\infty}^{\infty}\gamma_X(\tau)e^{-j2\pi f\tau}\,d\tau = (\mathcal{F}\gamma_X)(f)$$
$$= \mathcal{F}\big(16e^{-5|\tau|}\cos(20\pi\tau) + 8\cos(10\pi\tau)\big)(f)$$
$$= 16\cdot\big(\mathcal{F}(e^{-5|\tau|}) * \mathcal{F}(\cos 20\pi\tau)\big)(f) + 8\cdot\mathcal{F}(\cos 10\pi\tau)(f).$$
But
$$\mathcal{F}(e^{-5|\tau|})(f) = \frac{2\cdot 5}{5^2 + (2\pi f)^2} = \frac{10}{25 + (2\pi f)^2}$$
and
$$\mathcal{F}(\cos 20\pi\tau)(f) = \frac{\delta(f+10) + \delta(f-10)}{2},$$
so that
$$\big(\mathcal{F}(e^{-5|\tau|}) * \mathcal{F}(\cos 20\pi\tau)\big)(f) = \int_{-\infty}^{\infty}\frac{10}{25+(2\pi s)^2}\cdot\frac{\delta(f-s+10) + \delta(f-s-10)}{2}\,ds$$
$$= 5\left(\int_{-\infty}^{\infty}\frac{\delta(s-(f+10))}{25+(2\pi s)^2}\,ds + \int_{-\infty}^{\infty}\frac{\delta(s-(f-10))}{25+(2\pi s)^2}\,ds\right)$$
$$= 5\left(\frac{1}{25 + 4\pi^2(f+10)^2} + \frac{1}{25 + 4\pi^2(f-10)^2}\right),$$
because we know that $\int\delta(f-f_0)X(f)\,df = X(f_0)$. Since $\mathcal{F}(\cos 10\pi\tau)(f) = \delta(f+5)/2 + \delta(f-5)/2$,
$$S_X(f) = \frac{80}{25 + 4\pi^2(f+10)^2} + \frac{80}{25 + 4\pi^2(f-10)^2} + 4\delta(f+5) + 4\delta(f-5).$$
Another way to proceed would be to write $e^{-5|\tau|}\cos(20\pi\tau)$ as $e^{-5\tau}(e^{j20\pi\tau} + e^{-j20\pi\tau})/2$ for $\tau > 0$ (and similarly for negative $\tau$), and do the integration directly in terms of just exponential functions (but it was more fun to do convolutions with the Dirac delta impulses, wasn't it?).
(c) Since $\delta(\pm 5)$ vanishes at $f = 0$,
$$S_X(0) = \frac{80}{25 + 4\pi^2\cdot 100} + \frac{80}{25 + 4\pi^2\cdot 100} = \frac{160}{25 + 400\pi^2}.$$
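The continuous part of this spectrum can be sanity-checked numerically (a sketch, independent of the book's solution; only the integrable term $16e^{-5|\tau|}\cos 20\pi\tau$ is checked, since the cosine term produces the spectral lines at $f = \pm 5$):

```python
import math

def S_cont(f, L=10.0, N=100_000):
    # FT of 16 e^{-5|tau|} cos(20 pi tau); even integrand, so integrate over tau > 0
    h = L / N
    total = 0.0
    for i in range(N):
        tau = (i + 0.5) * h
        total += 16 * math.exp(-5 * tau) * math.cos(20 * math.pi * tau) \
                 * math.cos(2 * math.pi * f * tau)
    return 2 * total * h

def S_formula(f):
    return 80 / (25 + 4 * math.pi ** 2 * (f + 10) ** 2) \
         + 80 / (25 + 4 * math.pi ** 2 * (f - 10) ** 2)

for f in (0.0, 5.0, 10.0):
    print(f, round(S_cont(f), 6), round(S_formula(f), 6))  # pairs should agree
```

In particular, the $f = 0$ value reproduces $160/(25 + 400\pi^2)$ from part (c).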

Problem 5.4.9. Verify the positive-definiteness (see Remark 5.2.1) of the autocovariance functions of stationary signals directly from their definition.

Solution. Let $N$ be an arbitrary positive integer, $t_1,\dots,t_N \in \mathbf{R}$, and $z_1,\dots,z_N \in \mathbf{C}$. Then, in view of the stationarity of $X(t)$,
$$\sum_{n=1}^{N}\sum_{k=1}^{N}\gamma(t_n - t_k)z_n z_k^* = \sum_{n=1}^{N}\sum_{k=1}^{N}\mathbf{E}\big[X^*(t)X\big(t + (t_n - t_k)\big)\big]z_n z_k^*$$
$$= \sum_{n=1}^{N}\sum_{k=1}^{N}\mathbf{E}\big[X^*(t+t_k)X(t+t_n)\big]z_n z_k^* = \mathbf{E}\left(\sum_{k=1}^{N}\big(z_k X(t+t_k)\big)^*\cdot\sum_{n=1}^{N}z_n X(t+t_n)\right)$$
$$= \mathbf{E}\left|\sum_{n=1}^{N}z_n X(t+t_n)\right|^2 \ge 0.$$

Chapter 6

Problem 6.4.1. The impulse response function of a linear system is $h(t) = 1 - t$ for $0 \le t \le 1$ and 0 elsewhere.
(a) Produce a graph of $h(t)$.
(b) Assume that the input is the standard white noise. Find the autocovariance function of the output.
(c) Find the power transfer function of the system, its equivalent-noise bandwidth, and its half-power bandwidth.
(d) Assume that the input has the autocovariance function $\gamma_X(t) = 3/(1+4t^2)$. Find the power spectrum of the output signal.
(e) Assume that the input has the autocovariance function $\gamma_X(t) = \exp(-4|t|)$. Find the power spectrum of the output signal.
(f) Assume that the input has the autocovariance function $\gamma_X(t) = 1 - |t|$ for $|t| < 1$ and 0 elsewhere. Find the power spectrum of the output signal.

Solution. (a) [Plot of $h(t)$: the graph decreases linearly from 1 at $t = 0$ to 0 at $t = 1$, and is 0 elsewhere.]
(b) With $\gamma_X(\tau) = \delta(\tau)$, the autocovariance function of the output is
$$\gamma_Y(\tau) = \int_0^1\int_0^1\gamma_X(\tau - u + s)h(s)h(u)\,ds\,du = \int_0^1\int_0^1\delta\big(s - (u-\tau)\big)(1-s)(1-u)\,ds\,du.$$

As long as $0 < u - \tau < 1$, which implies that $-1 < \tau < 1$, the inner integral is
$$\int_0^1\delta\big(s - (u-\tau)\big)(1-s)\,ds = 1 - (u - \tau),$$
and otherwise it is zero. So, for $0 < \tau < 1$,
$$\gamma_Y(\tau) = \int_0^1\big(1 - (u-\tau)\big)(1-u)\,du = \frac{1}{6}(\tau-1)^2(\tau+2),$$
and, in view of the evenness of the ACvF,
$$\gamma_Y(\tau) = \frac{1}{6}\big(|\tau|-1\big)^2\big(|\tau|+2\big)\qquad\text{for } -1 < \tau < 1,$$
and it is zero outside the interval $[-1,1]$. [Plot of $\gamma_Y(\tau)$: an even bump rising from 0 at $\tau = \pm 1$ to $1/3$ at $\tau = 0$.]
(c) The transfer function of the system is
$$H(f) = \int_0^1(1-t)e^{-2\pi jft}\,dt = \frac{\sin^2(\pi f)}{2\pi^2f^2} - j\,\frac{2\pi f - \sin(2\pi f)}{4\pi^2f^2}.$$
Therefore, the power transfer function is
$$|H(f)|^2 = H(f)H^*(f) = \left(\frac{\sin^2(\pi f)}{2\pi^2f^2}\right)^2 + \left(\frac{2\pi f - \sin(2\pi f)}{4\pi^2f^2}\right)^2 = \frac{1 - \cos 2\pi f - 2\pi f\sin 2\pi f + 2\pi^2f^2}{8\pi^4f^4},$$
as shown in the following figure.

[Plot of the power transfer function $|H(f)|^2$ over $-1.5 < f < 1.5$, peaking at $1/4$ at $f = 0$.]

To find the value of the power transfer function at $f = 0$, one can apply l'Hôpital's rule, differentiating the numerator and the denominator of $|H(f)|^2$ four times, which yields $|H(0)|^2 = 1/4$. Thus, since by Parseval's formula $\int_{-\infty}^{\infty}|H(f)|^2\,df = \int_0^1 h^2(t)\,dt$, the equivalent-noise bandwidth is
$$BW_n = \frac{1}{2|H(0)|^2}\int_0^1(1-t)^2\,dt = \frac{1/3}{1/2} = \frac{2}{3}.$$
Checking the above plot of the power transfer function, one finds that the half-power bandwidth is approximately $BW_{1/2} = 0.553$.
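Both numbers can be double-checked numerically (a sketch, not part of the book's solution): $|H(f)|^2$ should approach $1/4$ as $f \to 0$, and integrating it over a long frequency range should reproduce $\int_0^1(1-t)^2\,dt = 1/3$, giving $BW_n \approx 2/3$.

```python
import math

def H2(f):
    # Power transfer function |H(f)|^2 of h(t) = 1 - t on [0, 1]
    if f == 0:
        return 0.25  # the l'Hopital limit
    w = 2 * math.pi * f
    re = (1 - math.cos(w)) / w ** 2    # Re H(f)
    im = -(w - math.sin(w)) / w ** 2   # Im H(f)
    return re * re + im * im

print(round(H2(1e-4), 4))  # ~ 0.25

# Parseval check: integral of |H|^2 over f approximates 1/3 (tail ~ 1/f^2)
L, N = 100.0, 200_000
h = 2 * L / N
integral = sum(H2(-L + (i + 0.5) * h) for i in range(N)) * h
print(round(integral, 3))               # ~ 0.333
print(round(integral / (2 * 0.25), 3))  # BW_n ~ 0.667
```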
(d) The power spectrum of the output signal is given by
$$S_Y(f) = S_X(f)\,|H(f)|^2,$$
where $S_X(f)$ is the power spectrum of the input signal. In our case,
$$S_X(f) = \int_{-\infty}^{\infty}\frac{3}{1+4t^2}\cos(2\pi ft)\,dt = \frac{3\pi}{2}\,e^{-\pi|f|}.$$
Therefore,
$$S_Y(f) = \frac{3\pi}{2}\,e^{-\pi|f|}\cdot\frac{1 - \cos 2\pi f - 2\pi f\sin 2\pi f + 2\pi^2f^2}{8\pi^4f^4}.$$
(e) In this case, similarly,
$$S_X(f) = \int_{-\infty}^{\infty}e^{-4|t|}\cos(2\pi ft)\,dt = \frac{2}{4 + \pi^2f^2}$$
and
$$S_Y(f) = \frac{2}{4+\pi^2f^2}\cdot\frac{1 - \cos 2\pi f - 2\pi f\sin 2\pi f + 2\pi^2f^2}{8\pi^4f^4}.$$

(f) Finally, here
$$S_X(f) = \frac{(\sin\pi f)^2}{\pi^2f^2}$$
and
$$S_Y(f) = \frac{(\sin\pi f)^2}{\pi^2f^2}\cdot\frac{1 - \cos 2\pi f - 2\pi f\sin 2\pi f + 2\pi^2f^2}{8\pi^4f^4}.$$

Problem 6.4.5. Consider the circuit shown in Fig. 6.4.2. Assume that the input, $X(t)$, is the standard white noise.
(a) Find the power spectra $S_Y(f)$ and $S_Z(f)$ of the outputs $Y(t)$ and $Z(t)$.
(b) Find the cross-covariance,
$$\gamma_{YZ}(\tau) = \mathbf{E}\big[Z(t)Y(t+\tau)\big],$$
between those two outputs.

Solution. (a) Note that $X(t) = Y(t) + Z(t)$. The impulse response function for the "Z" circuit is
$$h_Z(t) = \frac{1}{RC}\,e^{-t/RC},$$
and
$$Y(t) = X(t) - \int_0^\infty h_Z(s)X(t-s)\,ds.$$
So the impulse response function for the "Y" circuit is
$$h_Y(t) = \delta(t) - \int_0^\infty\frac{1}{RC}\,e^{-s/RC}\,\delta(t-s)\,ds = \delta(t) - \frac{1}{RC}\,e^{-t/RC},\qquad t \ge 0.$$
The Fourier transform of $h_Y(t)$ gives us the transfer function
$$H_Y(f) = \int_0^\infty\left(\delta(t) - \frac{1}{RC}\,e^{-t/RC}\right)e^{-2\pi jft}\,dt = \frac{2\pi jRCf}{1 + 2\pi jRCf}.$$
For the standard white-noise input $X(t)$, the power spectrum of the output is equal to the power transfer function of the system. Indeed,
$$S_Y(f) = 1\cdot|H_Y(f)|^2 = \frac{4\pi^2R^2C^2f^2}{1 + 4\pi^2R^2C^2f^2}.$$
The calculation of $S_Z(f)$ has been done before, as the "Z" circuit represents the standard RC filter.
(b)
$$\gamma_{YZ}(\tau) = \mathbf{E}\big(Y(t)Z(t+\tau)\big) = \mathbf{E}\left(\int_{-\infty}^{\infty}X(t-s)h_Y(s)\,ds\cdot\int_{-\infty}^{\infty}X(t+\tau-u)h_Z(u)\,du\right)$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\mathbf{E}\big[X(t-s)X(t+\tau-u)\big]h_Y(s)h_Z(u)\,ds\,du$$
$$= \int_0^\infty\int_0^\infty\delta(\tau - u + s)\left(\delta(s) - \frac{1}{RC}\,e^{-s/RC}\right)\frac{1}{RC}\,e^{-u/RC}\,du\,ds$$
$$= \int_0^\infty\left(\delta(s) - \frac{1}{RC}\,e^{-s/RC}\right)\frac{1}{RC}\,e^{-(\tau+s)/RC}\,ds$$
$$= \frac{1}{RC}\,e^{-\tau/RC} - \int_0^\infty\frac{1}{RC}\,e^{-s/RC}\cdot\frac{1}{RC}\,e^{-(\tau+s)/RC}\,ds = \frac{1}{RC}\,e^{-\tau/RC} - \frac{1}{2RC}\,e^{-\tau/RC} = \frac{1}{2RC}\,e^{-\tau/RC}.$$

Chapter 7

Problem 7.4.2. A signal of the form $x(t) = 5e^{-(t+2)}u(t)$ is to be detected in the
presence of white noise with a flat power spectrum of 0.25 V²/Hz using a matched
filter.
(a) For $t_0 = 2$, find the value of the impulse response of the matched filter at
$t = 0, 2, 4$.
(b) Find the maximum output signal-to-noise ratio that can be achieved if $t_0 = \infty$.
(c) Find the detection time $t_0$ that should be used to achieve an output signal-to-
noise ratio that is equal to 95% of the maximum signal-to-noise ratio found
in part (b).
(d) The signal $x(t) = 5e^{-(t+2)}u(t)$ is combined with white noise having a power
spectrum of 2 V²/Hz. Find the value of $RC$ such that the signal-to-noise ratio at
the output of the RC filter is maximal at $t = 0.01$ s.

Solution. (a) The impulse response function for the matched filter is of the form
$$h(s) = 5\exp\bigl[-(t_0 - s + 2)\bigr]\,u(t_0 - s) = 5e^{-(4-s)}u(2-s),$$
where $t_0$ is the detection time and $u(t)$ is the usual unit step function. Therefore,
$$h(0) = 5e^{-4}, \qquad h(2) = 5e^{-2}, \qquad h(4) = 0.$$
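These three values follow from a one-line function; the Python sketch below (the function name is illustrative) evaluates them directly.

```python
import math

def h(s, t0=2.0):
    # matched-filter impulse response h(s) = x(t0 - s) for x(t) = 5 e^{-(t+2)} u(t)
    return 5.0 * math.exp(-(t0 - s + 2.0)) if t0 - s >= 0 else 0.0

assert abs(h(0.0) - 5 * math.exp(-4)) < 1e-12
assert abs(h(2.0) - 5 * math.exp(-2)) < 1e-12
assert h(4.0) == 0.0
```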



(b) The maximum signal-to-noise ratio at detection time $t_0$ is
$$\Bigl(\frac{S}{N}\Bigr)_{\max}(t_0)
  = \frac{\int_0^{t_0} x^2(t_0 - s)\,ds}{N_0}
  = \frac{\int_0^{t_0} 25e^{-2(t_0 - s + 2)}\,ds}{0.25}
  = 50e^{-4}\bigl(1 - e^{-2t_0}\bigr).$$
So
$$\Bigl(\frac{S}{N}\Bigr)_{\max}(t_0 = \infty) = 50e^{-4}.$$
(c) The sought detection time $t_0$ can thus be found by solving the equation
$$50e^{-4}\bigl(1 - e^{-2t_0}\bigr) = 0.95\cdot 50e^{-4},$$
which yields $t_0 = -\ln 0.05/2 \approx 1.5$.
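The same numbers fall out of a two-line Python computation:

```python
import math

# 1 - e^{-2 t0} = 0.95  =>  t0 = -ln(0.05) / 2
t0 = -math.log(0.05) / 2
assert abs(t0 - 1.498) < 1e-3

# sanity check: at this t0 the ratio 50 e^{-4} (1 - e^{-2 t0})
# is exactly 95% of its t0 -> infinity limit 50 e^{-4}
ratio = 50 * math.exp(-4) * (1 - math.exp(-2 * t0))
assert abs(ratio / (50 * math.exp(-4)) - 0.95) < 1e-12
```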

Chapter 8

Problem 8.5.1. A zero-mean Gaussian random signal has the autocovariance function of the form
$$\gamma_X(\tau) = e^{-0.1|\tau|}\cos 2\pi\tau.$$
Plot it. Find the power spectrum $S_X(f)$. Write the covariance matrix for the signal
sampled at four time instants separated by 0.5 s. Find its inverse (numerically; use
any of the familiar computing platforms, such as Mathematica, Matlab, etc.).

Solution. We will use Mathematica to produce plots and do symbolic calculations,
although it is fairly easy to calculate $S_X(f)$ by direct integration. The plot of $\gamma_X(\tau)$
follows.

[Plot of the autocovariance function $\gamma_X(\tau)$, oscillating between −1 and 1.]

The power spectrum $S_X(f)$ is the Fourier transform of the ACvF, so

In[1]:= GX[t_] := Exp[-Abs[t]]*Cos[2*Pi*t];

In[2]:= FourierTransform[GX[t], t, 2*Pi*f]

Out[2]= 1/(Sqrt[2 Pi](1 + 4(-1 + f)^2 Pi^2)) + 1/(Sqrt[2 Pi](1 + 4(1 + f)^2 Pi^2))

Note that the Fourier transform in Mathematica is defined as a function of the angular
frequency variable ω = 2πf (hence the above substitution) and, under the default
conventions, carries a normalizing factor $1/\sqrt{2\pi}$. The plot of the power
spectrum is next.

[Plot of the power spectrum $S_X(f)$ for −3 ≤ f ≤ 3, with peaks near f = ±1.]

Problem 8.5.3. Find the joint p.d.f. of the signal from Problem 8.5.1 at $t_1 = 1$, $t_2 =
1.5$, $t_3 = 2$, and $t_4 = 2.5$. Write the integral formula for
$$\mathbf{P}\bigl(-2 \le X(1) \le 2,\; -1 \le X(1.5) \le 4,\; -1 \le X(2) \le 1,\; 0 \le X(2.5) \le 3\bigr).$$
Evaluate the above probability numerically.

Solution. Again, we use Mathematica to carry out all the numerical calculations.
First, we calculate the relevant covariance matrix.

In[3]:= (CovGX = N[{{GX[0], GX[0.5], GX[1], GX[1.5]},
                    {GX[0.5], GX[0], GX[0.5], GX[1]},
                    {GX[1], GX[0.5], GX[0], GX[0.5]},
                    {GX[1.5], GX[1], GX[0.5], GX[0]}}]) // MatrixForm
Out[3]=
     1.         -0.606531    0.367879   -0.22313
    -0.606531    1.         -0.606531    0.367879
     0.367879   -0.606531    1.         -0.606531
    -0.22313     0.367879   -0.606531    1.

Its determinant and its inverse are

In[4]:= Det[CovGX]
Out[4]= 0.25258

In[5]:= (ICovGX = Inverse[CovGX]) // MatrixForm
Out[5]=
     1.58198           0.959517         -6.73384*10^-17   -1.11022*10^-16
     0.959517          2.16395           0.959517         -2.63452*10^-16
    -1.11022*10^-16    0.959517          2.16395           0.959517
    -5.55112*10^-17   -2.22045*10^-16    0.959517          1.58198
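These numbers can be reproduced outside Mathematica as an independent cross-check (the NumPy sketch below is editor-added, not part of the original session). Since the sampled covariances here satisfy $\gamma(k/2) = (-e^{-1/2})^k$, the covariance matrix is of Kac–Murdock–Szegő type with $\rho = -e^{-1/2}$, so its determinant is $(1-\rho^2)^3 \approx 0.25258$ and its inverse is tridiagonal up to rounding.

```python
import math
import numpy as np

def gamma(tau):
    # ACvF used in the Mathematica session: exp(-|tau|) cos(2 pi tau)
    return math.exp(-abs(tau)) * math.cos(2 * math.pi * tau)

# covariance matrix of the signal sampled at t, t + 0.5, t + 1, t + 1.5
C = np.array([[gamma(0.5 * abs(i - j)) for j in range(4)] for i in range(4)])

rho = -math.exp(-0.5)
assert abs(np.linalg.det(C) - (1 - rho**2) ** 3) < 1e-12
assert abs(np.linalg.det(C) - 0.25258) < 1e-4      # matches Out[4]

Cinv = np.linalg.inv(C)
assert abs(Cinv[0, 0] - 1.58198) < 1e-4            # corner diagonal entries
assert abs(Cinv[1, 1] - 2.16395) < 1e-4            # interior diagonal entries
assert abs(Cinv[0, 1] - 0.959517) < 1e-4           # off-diagonal entries
```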

Thus, the corresponding 4D Gaussian p.d.f. is

In[6]:= f[x1_, x2_, x3_, x4_] = (1/((2*Pi)^2*Sqrt[Det[CovGX]]))*
          Exp[-(1/2)*{x1, x2, x3, x4}.ICovGX.{x1, x2, x3, x4}]
Out[6]= 0.05*E^(-0.79 x1^2 - 1.08 x2^2 - 0.96 x2 x3 - 1.08 x3^2
          + x1 (-0.96 x2 + 8.92*10^-17 x3 + 8.33*10^-17 x4)
          + 2.43*10^-16 x2 x4 - 0.96 x3 x4 - 0.79 x4^2)

Note the quadratic form in the four variables x1, x2, x3, x4 in the exponent.
The calculation of the sought probability requires evaluation of the 4D integral
$$\mathbf{P}\bigl(-2 \le X(1) \le 2,\; -1 \le X(1.5) \le 4,\; -1 \le X(2) \le 1,\; 0 \le X(2.5) \le 3\bigr)$$
$$= \int_{-2}^{2}\int_{-1}^{4}\int_{-1}^{1}\int_{0}^{3} f(x_1, x_2, x_3, x_4)\,dx_4\,dx_3\,dx_2\,dx_1,$$
which can be done only numerically:

In[7]:= NIntegrate[f[x1, x2, x3, x4],
          {x1, -2, 2}, {x2, -1, 4}, {x3, -1, 1}, {x4, 0, 3}]
Out[7]= 0.298126
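As a further sanity check (editor-added, not part of the original session), a Monte Carlo estimate of the same probability in NumPy lands close to the NIntegrate value; the sample size and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# covariance matrix gamma(k/2) = (-e^{-1/2})^k at lags 0, 0.5, 1, 1.5
a = np.exp(-0.5)
C = np.array([[(-a) ** abs(i - j) for j in range(4)] for i in range(4)])

# draw samples of (X(1), X(1.5), X(2), X(2.5)) and count hits in the box
X = rng.multivariate_normal(np.zeros(4), C, size=400_000)
lo = np.array([-2.0, -1.0, -1.0, 0.0])
hi = np.array([2.0, 4.0, 1.0, 3.0])
p_hat = np.mean(np.all((X >= lo) & (X <= hi), axis=1))

assert abs(p_hat - 0.298126) < 0.005   # NIntegrate gave 0.298126
```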

Problem 8.5.4. Show that if a 2D Gaussian random vector $\vec{Y} = (Y_1, Y_2)$ has uncorrelated components $Y_1, Y_2$, then those components are statistically independent
random quantities.

Solution. Recall the p.d.f. of a general zero-mean 2D Gaussian random vector
$(Y_1, Y_2)$ [see (8.2.9)]:
$$f_{\vec{Y}}(y_1, y_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}
  \exp\Bigl[-\frac{1}{2(1-\rho^2)}\Bigl(\frac{y_1^2}{\sigma_1^2}
  - \frac{2\rho\,y_1y_2}{\sigma_1\sigma_2} + \frac{y_2^2}{\sigma_2^2}\Bigr)\Bigr].$$
If the two components are uncorrelated, then $\rho = 0$, and the formula takes the
following simplified shape:
$$f_{\vec{Y}}(y_1, y_2) = \frac{1}{2\pi\sigma_1\sigma_2}
  \exp\Bigl[-\frac12\Bigl(\frac{y_1^2}{\sigma_1^2} + \frac{y_2^2}{\sigma_2^2}\Bigr)\Bigr];$$
it factors into the product of the marginal densities of the two components of the
random vector $\vec{Y}$:
$$f_{\vec{Y}}(y_1, y_2) = \frac{1}{\sqrt{2\pi}\,\sigma_1}\exp\Bigl(-\frac{y_1^2}{2\sigma_1^2}\Bigr)
  \cdot \frac{1}{\sqrt{2\pi}\,\sigma_2}\exp\Bigl(-\frac{y_2^2}{2\sigma_2^2}\Bigr)
  = f_{Y_1}(y_1)\,f_{Y_2}(y_2),$$
which proves the statistical independence of $Y_1$ and $Y_2$.
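The factorization is easy to verify numerically; in the short Python sketch below, the standard deviations 1.5 and 0.7 are arbitrary illustrative values.

```python
import math

S1, S2 = 1.5, 0.7   # illustrative standard deviations

def f_joint(y1, y2):
    # zero-mean 2D Gaussian density with rho = 0 (uncorrelated components)
    return (1.0 / (2 * math.pi * S1 * S2)) * math.exp(
        -0.5 * (y1**2 / S1**2 + y2**2 / S2**2))

def f_marginal(y, s):
    # one-dimensional zero-mean Gaussian density
    return math.exp(-y**2 / (2 * s**2)) / (math.sqrt(2 * math.pi) * s)

# the joint density equals the product of the marginals at every point
for y1, y2 in [(0.0, 0.0), (1.0, -0.5), (-2.0, 1.3)]:
    assert abs(f_joint(y1, y2) - f_marginal(y1, S1) * f_marginal(y2, S2)) < 1e-15
```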

Chapter 9

Problem 9.7.8. Verify that the additivity property (9.3.7) of any continuous function forces its linear form (9.3.8).

Solution. Our assumption is that a function $C(v)$ satisfies the functional equation
$$C(v + w) = C(v) + C(w) \qquad\text{(S.9.1)}$$
for any real numbers $v, w$. We will also assume that it is continuous, although the
proof is also possible (but harder) under the weaker assumption of measurability.
Taking $v = 0, w = 0$ gives
$$C(0) = C(0) + C(0) = 2C(0),$$
which implies that $C(0) = 0$. Furthermore, taking $w = -v$, we get
$$C(0) = C(v) + C(-v) = 0,$$
so that $C(v)$ is necessarily an odd function.



Now, iterating (S.9.1) $n$ times, we get that for any real number $v$,
$$C(nv) = n\,C(v);$$
choosing $v = 1/n$, we see that $C(1) = nC(1/n)$ for any positive integer $n$. Replacing
$n$ by $m$ in the last equality and combining it with the preceding equality with
$v = 1/m$, we get that for any positive integers $n, m$,
$$C\Bigl(\frac{n}{m}\Bigr) = \frac{n}{m}\,C(1).$$
Finally, since any real number can be approximated by rational numbers of the
form $n/m$, and since $C$ was assumed to be continuous, we get that for any real
number $v$,
$$C(v) = v\,C(1);$$
that is, $C(v)$ is necessarily a linear function.
Bibliographical Comments

The classic modern treatise on the theory of Fourier series and integrals which influenced much of
the harmonic analysis research in the second half of the twentieth century is

[1] A. Zygmund, Trigonometric Series, Cambridge University Press, Cambridge, UK, 1959.

More modest in scope, but perhaps also more usable for the intended reader of this text, are

[2] H. Dym and H. McKean, Fourier Series and Integrals, Academic Press, New York, 1972,
[3] T. W. Körner, Fourier Analysis, Cambridge University Press, Cambridge, UK, 1988,
[4] E. M. Stein and R. Shakarchi, Fourier Analysis: An Introduction, Princeton University Press,
Princeton, NJ, 2003,
[5] P. P. G. Dyke, An Introduction to Laplace Transforms and Fourier Series, Springer-Verlag,
New York, 1991.

The above four books are now available in paperback.


The Schwartz distributions (generalized functions), such as the Dirac delta impulse and its
derivatives, with special emphasis on their applications in engineering and the physical sciences,
are explained in

[6] F. Constantinescu, Distributions and Their Applications in Physics, Pergamon Press, Oxford,
UK, 1980,
[7] T. Schucker, Distributions, Fourier Transforms and Some of Their Applications to Physics,
World Scientific, Singapore, 1991,
[8] A. I. Saichev and W. A. Woyczyński, Distributions in the Physical and Engineering Sciences,
Vol. 1: Distributional and Fractal Calculus, Integral Transforms and Wavelets, Birkhäuser
Boston, Cambridge, MA, 1997,
[9] A. I. Saichev and W. A. Woyczyński, Distributions in the Physical and Engineering Sciences,
Vol. 2: Linear, Nonlinear, Fractal and Random Dynamics in Continuous Media, Birkhäuser
Boston, Cambridge, MA, 2005.

Good elementary introductions to probability theory, and accessible reads for the engineering
and physical sciences audience, are

[10] J. Pitman, Probability, Springer-Verlag, New York, 1993,


[11] S. M. Ross, Introduction to Probability Models, Academic Press, Burlington, MA, 2003.

On the other hand,

[12] M. Denker and W. A. Woyczyński, Introductory Statistics and Random Phenomena: Uncer-
tainty, Complexity, and Chaotic Behavior in Engineering and Science, Birkhäuser Boston,
Cambridge, MA, 1998,


deals with a broader issue of how randomness appears in diverse models of natural phenomena and
with the fundamental question of the meaning of randomness itself.
More ambitious, mathematically rigorous treatments of probability theory, based on measure
theory, can be found in

[13] P. Billingsley, Probability and Measure, Wiley, New York, 1983,


[14] O. Kallenberg, Foundations of Modern Probability, Springer-Verlag, New York, 1997,
[15] M. Loève, Probability Theory, Van Nostrand, Princeton, NJ, 1961.

All three also contain a substantial account of the theory of stochastic processes.
Readers more interested in the general issues of statistical inference and, in particular, paramet-
ric estimation, should consult

[16] G. Casella and R. L. Berger, Statistical Inference, Duxbury, Pacific Grove, CA, 2002,

or

[17] D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, Wiley,
New York, 1994.

The classic texts on the general theory of stationary processes (signals) are

[18] H. Cramer and M. R. Leadbetter, Stationary and Related Stochastic Processes: Sample Func-
tion Properties and Their Applications, Dover Books, New York, 2004,
[19] A. M. Yaglom, Correlation Theory of Stationary and Related Random Functions, Vols. I and
II, Springer-Verlag, New York, 1987.

However, the original,

[20] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, MIT Press
and Wiley, New York, 1950,

still reads very well.


Statistical tools in the spectral analysis of stationary discrete-time random signals (also known
as time series) are explored in

[21] P. Bloomfield, Fourier Analysis of Time Series: An Introduction, Wiley, New York, 1976,
[22] P. J. Brockwell and R. A. Davis, Time Series: Theory and Methods, Springer-Verlag, New
York, 1991,

and difficult issues in the analysis of nonlinear and nonstationary random signals are tackled in

[23] M. B. Priestley, Non-linear and Non-stationary Time Series Analysis, Academic Press, Lon-
don, 1988,
[24] W. J. Fitzgerald, R. L. Smith, A. T. Walden, and P. C. Young, eds., Nonlinear and Nonsta-
tionary Signal Processing, Cambridge University Press, Cambridge, UK, 2000.

The latter is a collection of articles, by different authors, on the current research issues in the area.
A more engineering approach to random signal analysis can be found in a large number of
sources, including

[25] A. Papoulis, Signal Analysis, McGraw-Hill, New York, 1977,


[26] R. G. Brown and P. Y. Hwang, Introduction to Random Signal Analysis and Kalman Filtering,
Wiley, New York, 1992.

A general discussion of transmission of signals through linear systems can be found in



[27] M. J. Roberts, Signals and Systems: Analysis of Signals Through Linear Systems, McGraw-
Hill, New York, 2003,
[28] B. D. O. Anderson, and J. B. Moore, Optimal Filtering, Dover Books, New York, 2005.

Gaussian stochastic processes are thoroughly investigated in

[29] I. A. Ibragimov and Y. A. Rozanov, Gaussian Random Processes, Springer-Verlag, New
York, 1978,
[30] M. A. Lifshits, Gaussian Random Functions, Kluwer Academic Publishers, Dordrecht,
the Netherlands, 1995,

and for a review of the modern mathematical theory of not necessarily second-order and not nec-
essarily Gaussian stochastic integrals, we refer to

[31] S. Kwapien and W. A. Woyczyński, Random Series and Stochastic Integrals: Single and
Multiple, Birkhäuser Boston, Cambridge, MA, 1992.
Index

A Burgers’
additive noise, 3 equation, 8
additivity property of probabilities, 53 turbulence, 6
adhesion model, 8
analog-to-digital conversion, 2
Anderson, B. D. O., 255 C
angular velocity, 21 Casella, G., 254
approximation of periodic signals, 34ff. Cauchy criterion of convergence, 185
at each time separately, 31 Cauchy–Schwartz inequality, 82, 102
at jump points, 32 causal system, 145
by Césaro averages, 32 central limit theorem, 60, 91, 175
Gibbs phenomenon, 33 error of approximation in —, 92
in power, 31 sketch of proof of —, 103
mean-square error, 31 Césaro average, 32
uniform, 31 chaotic behavior, 2
ARMA system, 157 Chebyshev’s inequality, 188, 191
Arzelà–Ascoli theorem, 198 circuit
autocorrelation function (ACF), 108ff. integrating —, 145
as a positive-definite function, 195 RC —, 149
autocovariance function, 106 complex
normalized, 107 exponentials, 14, 18
orthogonality of —, 18
numbers, 15
computational complexity, 2
B of fast Fourier transform, 45
band-limited noise, 133 computer algorithms, 211ff.
bandwidth conditional probability, 79
equivalent noise —, 142, 152 reverse —, 80
half-power —, 142, 152 confidence intervals, 96ff., 122
of finite-time integrating circuit, 154 for means, 96ff.
Bayes’ formula, 80 for variance, 98
Berger, R. L., 254 Constantinescu, F., 253
Berry–Esseen theorem, 95 control function
Billingsley, P., 199, 254 cumulative —, 201
binomial formula, 54 convergence in mean-square, 184
Bloomfield, P., 254 Cauchy criterion for —, 185
Branicky, Mike, x Cooley, J. W., 45
Brockwell, P. J., 121, 254 correlation
Brown, R. G., 254 coefficient, 82
Brownian motion, 4 covariance matrix, 182


covariance, 82 RC —, 149, 150, 162


matrix, 180 Wiener —, 170ff.
Cramer, H., 254 filtering noise out of signal, 117
cross-correlation, 171 Fitzgerald, W. J., 254
cross-covariance, 159 “floor” function, 2
Crutchfield, James P., 7 Folland, G. B., 198, 203
cumulative Fourier
control function, 200 analysis, 23, 253
distribution function (c.d.f.), 52 coefficient, 22
power spectrum, 196 expansion, 23, 25
Czekajewski, Jan, x pure cosine, 26
pure sine, 27
Jean-Baptiste, 49
D series, 21ff.
Davis, R. A., 121, 254 complex —, 21ff.
de Moivre’s formula, 14, 15 Fourier transform (FT), 35ff.
Denker, M., 3, 56, 63, 95, 104, 175, 197, 253 basic properties, 37ff.
“devil’s staircase,” 62 discrete, 44ff.
diffusion, 4 fast (FFT), 44ff.
equation, 49 computational complexity of —, 45
Dirac delta impulse, 41ff. inverse (IFT), 36
calculus of —, 43 linearity of —, 37
“probing” property of —, 44 of convolution, 39
Fourier transform of —, 44 of nonintegrable signals, 41
discrete Fourier transform, 45 table of —s, 43
discrete-time sampling, 155 table of properties of —, 40
distributions fractional dimension, 61
in the sense of Schwartz, 43 frequency spectrum, 13
Dyke, P. P. G., 253 frequency-domain description, 12
Dym, H., 253 fundamental
frequency, 13, 18
theorem of calculus, 57
E
Edwards, Robert, x
G
EEG signals, 107, 108, 122, 132
gamma function, 95
ergodic
Gauss, Carl Friedrich, 45
behavior, 7, 122
Gibbs’ phenomenon, 33ff.
signal, 119
estimation
of parameters, 92ff. H
of power spectrum, 128 harmonic analysis, 21ff.
of the autocorrelation, 122ff. heat equation, 49
of the mean, 119ff. Herglotz’s theorem, 196
consistency of —, 120 histogram, 51
expected value (expectation) of r.q., 71 Hwang, P. Y., 254
linear scaling of —, 73

I
F Ibragimov, I. A., 255
fast Fourier transform (FFT), 44ff. impulse response function, 144
computational complexity of —, 45 causal, 144
filter realizable, 144
causal —, 172 integrating circuit, 145
matched —, 167ff. inverse Fourier transform, 36

K P
Kallenberg, O., 114, 254 parameter estimation, 96ff.
kinetic energy, 65 Papoulis, A., 172, 254
Kolmogorov’s theorem Parseval’s formula, 24, 25, 49
on infinite sequences of r.q.s., 199 extended, 24, 25, 49
on sample path continuity, 170 passive tracer, 6
Körner, T. W., 31, 253 period of the signal, 1
Kronecker delta, 22 infinite —, 36
Kwapień, S., 255 periodogram, 131
Petrov, V. V., 95
L Piryatinska, A., x, 108, 122, 132
Landau’s asymptotic notation, 120 Pitman, J., 253
Laplace transform, 172 Poisson distribution, 55
law of large numbers (LLN), 89 polarization identity, 49
Leadbetter, M. R., 254 power
least-squares fit, 89 ff. spectral density, 135, 208
Lèvy process, 5 spectrum, 134ff.
Lifshits, M.A., 255 cumulative, 196
Loève, M., 187, 254 of interpolated digital signal, 137
Loparo, Ken, x transfer function, 152
Priestley, M.B., 254
probability
M
density function (p.d.f.), 57ff.
marginal probability distribution, 78
joint — of random vector, 75
matching filter, 168
normalization condition for —, 59
McKean, H., 253
mean power, 127 distribution, 58ff.
moments of r.q.s., 71 absolutely continuous, 56
Montgomery, D. C., 254 Bernoulli, 54
Moore, J. B., 255 binomial, 54
moving average conditional, 75
autoregressive (ARMA) —, 157 continuous, 56
general, 124 chi-square, 66, 98, 100
interpolated —, 139 table of —, 102
of white noise, 111 cumulative, 52
to filter noise, 117 exponential, 58
Gaussian (normal), 59, 91
calculations with —, 60
N mean and variance of —, 74
nabla operator, 5 table of —, 100
noise
joint —, 75
additive, 3
marginal, 78
white, 110, 134
mixed, 61
normal equations, 87
normalization condition, 55 n-point, 175
of function of r.q., 63
of kinetic energy, 65
O of square of Gaussian r.q., 66
optimal filter, 169ff. Poisson, 55
orthonormal basis, 22 quantiles of —, 96
in 3D space, 24 singular, 61
of complex exponentials, 22 Student’s-t , 95, 100
orthonormality, 18 table of —, 102
of complex exponentials, 18 uniform, 57

theory, 51, 237 Rozanov, Y. A., 255


measure-theoretic, 254 Rudin, W., 186
paradoxes in —, 51 Runger, G. C., 254
Pythagorean theorem, 24, 25

S
Q Saichev, A. I., 43, 253
quantiles of probability distribution, 96 sample paths, 5
table of chi-square —, 100 continuity with probability 1 of —, 187ff.
differentiability of —, 184ff.
mean-square continuity of —, 184ff.
R mean-square differentiability of —, 184ff.
random sampling period, 2
errors, 80 scalar product, 22
harmonic oscillations, 109, 133 scatterplot, 86
superposition of —, 109, 133 Schucker, T., 253
random interval, 94 Schwartz distributions, 43–44, 253
numbers, 19
Shakarchi, R., 253
phase, 108
signals, 1
quantities (r.q.), 54ff.
analog, 1
absolute moments of —, 69
aperiodic, 1, 18, 35ff.
continuous —, 62ff.
characteristics of, 175
correlation of —, 81
delta-correlated, 135
discrete —, 61ff.
description of —, 1ff.
expectation of —, 71ff.
deterministic, 2
function of —, 64ff.
spectral representation of —, 21ff.
linear transformation of Gaussian —,
64 digital, 1, 140
moments of —, 71 discrete sampling of —, 157
singular —, 63 interpolated, 137ff.
standard deviation of —, 73 Dirac delta impulse, 41ff.
standardized —, 74 energy of —, 8
statistical independence of —, 73ff. filtering of —, 171
variance of —, 64 Gaussian, 175ff.
switching signal, 100 stationary, 171
variable, 48 jointly stationary —, 161
vectors, 67ff. nonintegrable, 36
covariance matrix of —, 164 periodic, 8, 23
Gaussian —, 68, 162ff. power of —, 31
2-D —, 68, 164 random, 4, 105ff., 200ff.
joint probability distribution of —, 67 switching, 113, 135
linear transformation of —, 160 types of —, 1ff.
moments of —, 73 mean power of —, 127ff.
walk, 6 stationary, 106, 196ff.
randomness, 1 discrete —, 196ff.
of signals, 4 Gaussian —, 181ff.
RC filter, 149ff., 162 power spectra of —, 127ff.
rectangular waveform, 26ff. strictly, 105
regression line, 89ff. second-order, weakly, 106
REM sleep, 108 simulation of —, 124, 210ff.
resolution, 2 spectral representation of —, 204ff.
reverse conditional probability, 80 stochastic, 2, 199
Roberts, M. I., 255 transmission of binary —, 80
Ross, S. M., 253 time average of —, 19

signal-to-noise ratio, 163ff. series, 216


in matching filter, 168ff. Tukey, O. W., 45
in RC filter, 149
optimization of —, 161ff.
simulation U
of stationary signals, 124, 193ff. uncertainty principle, 4
of white noise, 112 Usoltsev, Alexey, x
Smith, R. L., 254
spectral representation theorem, 195
stability of fluctuations law, 89 V
standard deviation of r.q., 73 variance of r.q., 72
stationary conditions, 3 invariance under translation of —, 73
statistical independence, 75 quadratic scaling of —, 73
statistical inference, 254
Stein, E. M., 253
stochastic W
difference equation, 116, 158 Walden, A. T., 254
consistency of mean estimation in —, waveform
119 rectangular, 26
integration, 199 white noise, 109, 193
Gaussian —, 187 band-limited —, 133
isometric property of —, 203 continuous-time, 135, 193
for signals with uncorrelated discrete–time, 109
increments, 199ff. filtered, 207
processes, 105, 254ff. integrals, 195
stationary, 254 moving average of —, 111
Gaussian, 254 interpolated, 137
simulation of —, 124
Lèvy, 5
Wiener
Wiener, 4, 202
filter, 170ff.
system bandwidth, 151
acausal, 170
causal, 172
N., 172, 202
T process, 5, 204, 207
time-domain description, 9 Wiener–Hopf equation, 172
time series, 105, 254 Woyczynski, W. A., 8, 43, 63, 95, 104, 122,
nonstationary, 254 163, 197, 253, 255
nonlinear, 254
total probability formula, 79
trajectory, 4 Y
transfer function, 151 Yaglom, A. M., 254
transmission Young, P. C., 254
of binary signals, 80
in presence of random errors, 80
through linear systems, 143ff. Z
trigonometric formulas, 14 Zygmund, A., 23, 253
