
Bauhaus-University Weimar

Institute for Mathematics/Physics

Supporting Material for the Lecture

Signal Analysis
(Mathematical Fundamentals of Signal Processing)
Supplemented by
the repetition of some basics of Linear Algebra and Analysis
and by some basics of Functional Analysis

for

Natural Hazards and Risks in Structural Engineering (NHRE)

by

Klaus Markwardt

Professur Angewandte Mathematik
Institut für Mathematik/Physik
Bauhaus-Universität Weimar
Coudraystr. 13 B, Room 202
99421 Weimar
Homepage:
http://webuser.uni-weimar.de/~markward/

Contents

1  Notations
2  Repetition - Some Basics of Linear Algebra and Analysis
   2.1  Complex numbers
   2.2  Differential Calculus for real Functions
   2.3  Integral Calculus for real Functions
        2.3.1  Rules of Integration
        2.3.2  Basics in numerical Integration
        2.3.3  Improper integrals
        2.3.4  Gaussian probability density functions
3  Repetition - Real and complex Fourier series
   3.1  Trigonometric polynomials and discrete frequency spectra
   3.2  Real amplitude-phase representations of trigonometric polynomials
   3.3  Complex representation of real trigonometric polynomials
   3.4  Complex amplitude-phase representations of trigonometric polynomials
   3.5  Fourier series
   3.6  Fourier series of even and odd periodic functions
   3.7  Complex representation of Fourier series
   3.8  Approximation of a rectangular oscillation by trigonometric polynomials
   3.9  Piecewise continuous functions
   3.10 Pointwise and uniform convergence of Fourier series
   3.11 Smoothness of signals and asymptotic behavior of their Fourier coefficients
   3.12 Exercises
4  Basics of Functional Analysis
   4.1  Linear Spaces
   4.2  Linear operators and linear functionals
   4.3  Normed spaces
   4.4  Real inner product spaces
   4.5  Application to real Fourier series
   4.6  Complex inner product spaces
   4.7  Application to complex Fourier series
   4.8  Metric spaces
5  Fourier transform
   5.1  Introduction
   5.2  Relations between Fourier transform and complex Fourier coefficients
   5.3  Amplitude and Phase Spectra
   5.4  Basic properties and examples of the Fourier transform
   5.5  Scalar products and signal energy considerations
   5.6  The convolution of functions
   5.7  Translation, dilation and differentiation of signals
   5.8  Cross-correlation and autocorrelation of signals
6  Important Fourier Transformation Pairs
7  Discrete Fourier Transform (DFT)
   7.1  Introduction
   7.2  Properties of the Discrete Fourier Transform
   7.3  Some basic hints
   7.4  Application of the DFT for general sampling periods
8  Lecture Graphs - Part 1
9  Lecture Graphs - Part 2
10 Lecture Graphs - Part 3

Literature Hints

Fourier and wavelet analysis
Books in English: [1], [17], [15], [8]
Books in German: [14], [6], [7], [19]

Wavelet analysis
Books in English: [18], [9], [3], [4]
Later studies: [5], [10], [11], [16]
Books in German: [2], [12]

1  Notations

C              set of complex numbers
N              set of natural numbers, N = {1, 2, ...}
N_o            N_o = {0, 1, 2, ...}
R              set of real numbers
Z              set of whole numbers, Z = {..., −2, −1, 0, 1, 2, ...}
C^k(R)         space of continuous functions on R that have continuous derivatives up to order k
C^k(I)         space of continuous functions on an interval I that have continuous derivatives up to order k
L^p(R)         f ∈ L^p(R) if ∫_{−∞}^{+∞} |f(t)|^p dt exists, p ∈ N
M_n(f)         M_n(f) = ∫_{−∞}^{+∞} t^n f(t) dt, moment of order n (continuous moment of a real function)
supp(f)        support of a function f
δ(t)           Dirac delta distribution, Dirac impulse
δ_lk           Kronecker delta
F(f), f̂        F(f)(ω) = f̂(ω) = ∫_{−∞}^{+∞} f(t) e^{−iωt} dt, Fourier transform
ω              angular frequency: ω = 2πν
ν              frequency
f^(n)          n-th derivative of a real or complex valued function
L(f)           L(f)(z) = ∫_0^{∞} f(t) e^{−zt} dt,  z ∈ C, Laplace transform
χ_A or 1_A     characteristic function (indicator function) of a set A
Re(f), Im(f)   real and imaginary part of f
⟨f, g⟩         ⟨f, g⟩ = ∫_{−∞}^{+∞} f(t) ḡ(t) dt, inner product or scalar product in L²(R),
               also scalar products in the vector spaces Rⁿ and Cⁿ
‖f‖ = ‖f‖₂     ‖f‖ = √⟨f, f⟩, norm in L²(R), in Rⁿ and Cⁿ
⌊c⌋            floor of a real number c
⌈c⌉            ceiling of a real number c
⊂              relation mark for proper inclusion (proper containment) in set theory
⊆              relation mark for inclusion (containment) in set theory
A ⊄ B          set A is not contained in set B

Vector Sets
Rⁿ             linear space of n-dimensional vectors with real components
Cⁿ             linear space of n-dimensional vectors with complex components
Zⁿ             set of n-dimensional vectors with whole-numbered components
u, v, w        vectors

Set of Matrices
(n, m)-matrix  a matrix with n rows and m columns
R^(n,m)        set of (n, m)-matrices with real entries
C^(n,m)        set of (n, m)-matrices with complex entries
Z^(n,m)        set of (n, m)-matrices with whole-numbered entries
A, B, C        matrices
a_ij           entry from row i and column j of matrix A
det(A)         determinant of the matrix A

Other Notions
u ∈ Cⁿ         u is an n-dimensional vector with complex entries
A ∈ R^(n,m)    A is a real (n, m)-matrix

Lower case Greek alphabet

name      character     name      character     name      character
alpha     α             iota      ι             rho       ρ
beta      β             kappa     κ             sigma     σ
gamma     γ             lambda    λ             tau       τ
delta     δ             mu        μ             upsilon   υ
epsilon   ε             nu        ν             phi       φ
zeta      ζ             xi        ξ             chi       χ
eta       η             omicron   ο             psi       ψ
theta     θ             pi        π             omega     ω

2  Repetition - Some Basics of Linear Algebra and Analysis

2.1  Complex numbers

http://en.wikipedia.org/wiki/Complex_number
http://www.mathematics-online.org/kurse/kurs9/
Imaginary unit i:
    i² = −1
Cartesian or algebraic representation of complex numbers:
    z = a + b i,    a, b ∈ R

Properties 2.1. (Arithmetic operations)

Addition:        (a + b i) + (c + d i) = (a + c) + (b + d) i
Multiplication:  (a + b i) · (c + d i) = (ac − bd) + (ad + bc) i
Subtraction:     (a + b i) − (c + d i) = (a − c) + (b − d) i
Division:
    (a + b i)/(c + d i) = (a + b i)(c − d i) / ((c + d i)(c − d i)) = (ac + bd)/(c² + d²) + ((bc − ad)/(c² + d²)) i

Examples 2.2.
    (3 + 2i) + (5 + 5i) = (3 + 5) + (2 + 5)i = 8 + 7i
    (5 + 5i) − (3 + 2i) = (5 − 3) + (5 − 2)i = 2 + 3i
    (2 + 5i) · (3 + 7i) = (2·3 − 5·7) + (2·7 + 5·3)i = −29 + 29i
    (2 + 5i)/(3 + 7i) = (2 + 5i)(3 − 7i) / ((3 + 7i)(3 − 7i)) = ((6 + 35) + (15 − 14)i) / ((9 + 49) + (21 − 21)i) = (41 + i)/58 = 41/58 + (1/58) i
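A minimal check of these arithmetic rules with Python's built-in complex type (an illustrative sketch, not part of the original script):

    # Reproduce Examples 2.2 with Python's complex type.
    z1, z2 = 3 + 2j, 5 + 5j
    print(z1 + z2)              # (8+7j)
    print(z2 - z1)              # (2+3j)
    print((2 + 5j) * (3 + 7j))  # (-29+29j)
    print((2 + 5j) / (3 + 7j))  # approx. 0.7069 + 0.0172j, i.e. 41/58 + (1/58) i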
[Figure: The complex plane]

Remark 2.3. (Polar form)

Polar coordinates r and φ:
    x = r cos φ,    y = r sin φ
⟹ Representation in polar form:
    z = r (cos φ + i sin φ)                                                      (2.1)
with absolute value (modulus)
    r = |z| = √(x² + y²)
and argument

    φ = arg(z) =  arctan(y/x)          for  x > 0
                  arctan(y/x) + π      for  x < 0, y ≥ 0
                  arctan(y/x) − π      for  x < 0, y < 0
                  +π/2                 for  x = 0, y > 0
                  −π/2                 for  x = 0, y < 0                         (2.2)

The argument can also be calculated by

    φ = arg(z) =  arccos(x/r)          for  y ≥ 0
                  −arccos(x/r)         for  y < 0                                (2.3)

which is undefined for r = 0. Also the following formula can be used:

    φ = arg(z) =  2 arctan( y/(r + x) )    for  r + x > 0
                  π                        for  r + x = 0                        (2.4)

For the above defined argument the standard interval is given by
    −π < φ ≤ π.
Other variants modulo 2π are possible for arg(z) (ambiguity of representation), but our standard is mostly used in signal analysis.
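A small Python sketch of formula (2.4), compared against the standard library (the function name is ours, not part of the script):

    import cmath, math

    def arg_standard(z):
        """Argument of z in the standard interval (-pi, pi], following (2.4)."""
        x, y, r = z.real, z.imag, abs(z)
        if r == 0:
            raise ValueError("arg(z) is undefined for z = 0")
        if r + x > 0:
            return 2 * math.atan(y / (r + x))
        return math.pi              # case r + x = 0, i.e. z on the negative real axis

    for z in (1 + 1j, -2 + 0.5j, -3 + 0j, -1j):
        print(z, arg_standard(z), cmath.phase(z))   # both values agree on (-pi, pi]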
With Euler's formula
    e^{iφ} = cos(φ) + i sin(φ)                                                   (2.5)
one gets (cp. http://en.wikipedia.org/wiki/Euler%27s_formula)
    z = r e^{iφ}.                                                                (2.6)

[Figure: Euler's formula]
[Figure: Conjugate complex number]

Compare http://en.wikipedia.org/wiki/Exponential_function
Exponential function with complex argument:
    e^z = e^{x + i y} = e^x e^{iy} = e^x [ cos(y) + i sin(y) ]
Absolute value of it:
    |e^z| = e^x
One can prove that
    |z₁ z₂| = |z₁| |z₂|,      |z₁/z₂| = |z₁|/|z₂|,    z₂ ≠ 0                     (2.7)

Remark 2.4. Verification of Euler's formula with Taylor series for e^x, cos(x) and sin(x):
http://en.wikipedia.org/wiki/Taylor_series

Remark 2.5. From Euler's formula follows de Moivre's formula
(cp. http://en.wikipedia.org/wiki/De_Moivre%27s_formula)
    cos(nφ) + i sin(nφ) = (cos φ + i sin φ)ⁿ,    n ∈ N                           (2.8)

De Moivre's formula ⟹ formulas for cosine and sine:

Angle sum and difference identities, double-, triple-, and half-angle formulae
(cp. http://en.wikipedia.org/wiki/List_of_trigonometric_identities)
    sin(x + y) = sin(x) cos(y) + sin(y) cos(x)
    cos(x + y) = cos(x) cos(y) − sin(x) sin(y)
    sin(2x) = 2 sin(x) cos(x)
    cos(2x) = cos²(x) − sin²(x)

With the binomial coefficients C(n, k):

    sin(nx) = n sin(x) cos^{n−1}(x) − C(n,3) sin³(x) cos^{n−3}(x) + C(n,5) sin⁵(x) cos^{n−5}(x) − ...
            = Σ_{i=1}^{⌈n/2⌉} (−1)^{i+1} C(n, 2i−1) sin^{2i−1}(x) cos^{n−2i+1}(x)

    cos(nx) = cosⁿ(x) − C(n,2) sin²(x) cos^{n−2}(x) + C(n,4) sin⁴(x) cos^{n−4}(x) − ...
            = Σ_{i=0}^{⌊n/2⌋} (−1)^{i} C(n, 2i) sin^{2i}(x) cos^{n−2i}(x)

De Moivre's formula can be used to find the n-th roots of a complex number.
(cp. http://en.wikipedia.org/wiki/Nth_root), Parts: Roots of unity and Complex roots
With a = |a| e^{iφ} and 0 ≤ φ < 2π the solutions of
    zⁿ − a = 0
can be represented by
    z_k = |a|^{1/n} exp( i φ/n + k · 2πi/n )    with  k = 0, 1, ..., n − 1.      (2.9)
Special case a = 1 ⟹ n roots of unity.

Remark 2.6. Derivatives and integrals of complex valued functions f : R → C
Real domain of definition (short: domain) and complex codomain
⟹ differentiate or integrate real and imaginary part of the function f.

[Figure: All third roots of unity (find the two errors!)]
[Figure: All fifth roots of the complex number a = 1 + i]
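A short Python sketch of formula (2.9) for the two pictured cases (illustrative only; cmath.phase returns the argument in (−π, π], which yields the same root set):

    import cmath

    def nth_roots(a, n):
        """All solutions of z^n = a according to (2.9)."""
        r, phi = abs(a) ** (1.0 / n), cmath.phase(a)
        return [r * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n) for k in range(n)]

    print(nth_roots(1, 3))        # the three third roots of unity
    print(nth_roots(1 + 1j, 5))   # the five fifth roots of a = 1 + i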

Collection of formulas (DVP Mathematics)

Trigonometric functions and hyperbolic functions

Definition
    cosh x = (1/2)(e^x + e^{−x}),        sinh x = (1/2)(e^x − e^{−x})

Symmetry
    sin(−x) = −sin x,      cos(−x) = cos x,      tan(−x) = −tan x
    sinh(−x) = −sinh x,    cosh(−x) = cosh x,    tanh(−x) = −tanh x

Properties
    cosh²x − sinh²x = 1,      cos²x + sin²x = 1,
    cos(x + 2π) = cos x,      sin(x + 2π) = sin x,      tan(x + π) = tan x

In analytical modelling often the following trigonometric function values are used.

Function values
    x      | π/6      π/4      π/3      π/2    2π/3      3π/4      5π/6
    sin x  | 1/2      √2/2     √3/2     1      √3/2      √2/2      1/2
    cos x  | √3/2     √2/2     1/2      0      −1/2      −√2/2     −√3/2
    tan x  | √3/3     1        √3       —      −√3       −1        −√3/3

Addition theorems
    sin(x ± y) = sin x cos y ± cos x sin y,        sin 2x = 2 sin x cos x

2.2  Differential Calculus for real Functions

Compare
http://en.wikipedia.org/wiki/Differential_calculus
http://en.wikipedia.org/wiki/Numerical_differentiation
http://en.wikipedia.org/wiki/List_of_Differentiation_Identities

2.3  Integral Calculus for real Functions

2.3.1  Rules of Integration

Compare
http://en.wikipedia.org/wiki/Antiderivative
http://en.wikipedia.org/wiki/Riemann_integral
http://en.wikipedia.org/wiki/Integral_calculus
http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
http://en.wikipedia.org/wiki/Integration_by_parts
http://en.wikipedia.org/wiki/Integration_by_substitution
http://en.wikipedia.org/wiki/Lists_of_integrals

Indefinite integrals

Integration by parts:
    ∫ f(x) g′(x) dx = f(x) g(x) − ∫ f′(x) g(x) dx                                (2.10)
is obtained from the product rule of differentiation.

Integration by substitution:
With continuously differentiable u = g(x) follows
    ∫ f(g(x)) g′(x) dx = ∫ f(u) du

If the left side is structurally given:
    Calculate the right side and substitute then u = g(x).
If the right side is structurally given:
    Find a substitution u = g(x) with existing inverse function x = g⁻¹(u), so that you can calculate
        F(x) + C = ∫ f(g(x)) g′(x) dx
    Substituting x = g⁻¹(u) gives the indefinite integral
        F̃(u) = F(g⁻¹(u))

Riemann integrals (definite integrals)

If f is continuous on a compact interval I = [a, b] and F is an antiderivative of f, then
    ∫_a^b f(x) dx = F(b) − F(a)                                                  (2.11)
⟹ Generalisation of (2.11) for piecewise continuous functions f

Integration by parts:
    ∫_a^b f(x) g′(x) dx = [ f(x) g(x) ]_a^b − ∫_a^b f′(x) g(x) dx                (2.12)

Integration by substitution:
With u = g(x), du = g′(x) dx and x ∈ [a, b]:

First rule:
    ∫_a^b f(g(x)) g′(x) dx = ∫_{g(a)}^{g(b)} f(u) du

Second rule:
    ∫_α^β f(u) du = ∫_{g⁻¹(α)}^{g⁻¹(β)} f(g(x)) g′(x) dx,    if g⁻¹ exists.

With α = g(a) and β = g(b), i.e. a = g⁻¹(α) and b = g⁻¹(β), we get the simpler form
    ∫_a^b f(g(x)) g′(x) dx = ∫_α^β f(u) du.

2.3.2  Basics in numerical Integration

Compare
http://archives.math.utk.edu/visual.calculus/4/riemann_sums.4/index.html
http://en.wikipedia.org/wiki/Rectangle_method
http://en.wikipedia.org/wiki/Numerical_integration
Rectangle method
Specifically, the interval [a, b] over which the function f is to be integrated is divided into n subintervals I_k = [x_{k−1}, x_k]. If all subintervals I_k are sufficiently small, then we have
    ∫_a^b f(x) dx ≈ Σ_{k=1}^{n} f(ξ_k) (x_k − x_{k−1})    with    x_{k−1} ≤ ξ_k ≤ x_k,    a = x₀,    b = x_n.

In the case of an equidistant decomposition
    Δx = x_k − x_{k−1} = const    for all k
follows with sufficiently small Δx
    ∫_a^b f(x) dx ≈ Δx Σ_{k=1}^{n} f(ξ_k)                                        (2.13)

Substituting the left points of the subintervals I_k
    ξ_k = x_{k−1}
one gets the left corner approximation
    ∫_a^b f(x) dx = Δx Σ_{k=0}^{n−1} f(a + k Δx) + E_n^(ℓ)(f)    with some error E_n^(ℓ)(f)      (2.14)

Substituting the right points of the subintervals
    ξ_k = x_k
one gets the right corner approximation
    ∫_a^b f(x) dx = Δx Σ_{k=1}^{n} f(a + k Δx) + E_n^(r)(f)      with some error E_n^(r)(f)      (2.15)

For the errors in (2.14) and (2.15) we get with
    h = Δx = (b − a)/n
the estimates
    |E_n^(ℓ)(f)| ≤ M_{f′} (b − a)/2 · h,        |E_n^(r)(f)| ≤ M_{f′} (b − a)/2 · h,             (2.16)
with
    M_{f′} = max_{a ≤ x ≤ b} |f′(x)|.
M_{f′} can be substituted by an upper bound of |f′(x)| in the given interval.

The composite trapezoidal rule is more exact. We get it by substituting
    f(ξ_k)    by    ( f(x_{k−1}) + f(x_k) ) / 2
in (2.13):
    ∫_a^b f(x) dx = Δx [ (1/2) f(a) + Σ_{k=1}^{n−1} f(a + k Δx) + (1/2) f(b) ] + E^(n)(f)        (2.17)

This composite trapezoidal rule can be interpreted as the arithmetic mean of the left corner approximation and the right corner approximation. For the error in (2.17) one gets the estimate
    |E(f)| ≤ (b − a)/12 · h² · max_{a ≤ x ≤ b} |f″(x)|
or
    |E(f)| ≤ (b − a)/12 · M_{f″} h²    with    h = Δx = (b − a)/n    and    M_{f″} = max_{a ≤ x ≤ b} |f″(x)|      (2.18)

M_{f″} can be substituted here by an upper bound of |f″(x)| in [a, b].
Compare http://en.wikipedia.org/wiki/Trapezoidal_rule
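A small numerical sketch of these rules in Python/NumPy (function names are ours, not part of the script):

    import numpy as np

    def left_corner(f, a, b, n):
        """Left corner approximation (2.14)."""
        dx = (b - a) / n
        return dx * np.sum(f(a + dx * np.arange(0, n)))

    def right_corner(f, a, b, n):
        """Right corner approximation (2.15)."""
        dx = (b - a) / n
        return dx * np.sum(f(a + dx * np.arange(1, n + 1)))

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule (2.17), the mean of the two corner sums."""
        return 0.5 * (left_corner(f, a, b, n) + right_corner(f, a, b, n))

    # Example: the integral of exp over [0, 1] is exactly e - 1.
    exact = np.e - 1
    for n in (10, 100, 1000):
        print(n, left_corner(np.exp, 0, 1, n) - exact, trapezoid(np.exp, 0, 1, n) - exact)
    # The corner errors shrink like h, the trapezoidal error like h^2,
    # in agreement with the estimates (2.16) and (2.18).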

2.3.3  Improper integrals

Compare
http://en.wikipedia.org/wiki/Improper_integral

1. Unbounded integrand on an interval of finite length
   (a) Singularity in the right boundary point b:
       ∫_a^b f(x) dx := lim_{ε→0⁺} ∫_a^{b−ε} f(x) dx
   (b) Singularity in the left boundary point a:
       ∫_a^b f(x) dx := lim_{ε→0⁺} ∫_{a+ε}^{b} f(x) dx
   (c) Singularity in an inner point c of the integration interval, a < c < b:
       ∫_a^b f(x) dx := lim_{ε→0⁺} ∫_a^{c−ε} f(x) dx + lim_{δ→0⁺} ∫_{c+δ}^{b} f(x) dx

2. Domain of integration is unbounded
   (a) Integration interval of form [a, +∞):
       ∫_a^{+∞} f(x) dx := lim_{b→+∞} ∫_a^b f(x) dx
   (b) Integration interval of form (−∞, b]:
       ∫_{−∞}^{b} f(x) dx := lim_{a→−∞} ∫_a^b f(x) dx
   (c) Integration interval of form (−∞, +∞):
       ∫_{−∞}^{+∞} f(x) dx := lim_{a→−∞} lim_{b→+∞} ∫_a^b f(x) dx

2.3.4  Gaussian probability density functions

Compare
http://en.wikipedia.org/wiki/Normal_distribution
http://en.wikipedia.org/wiki/Probability_density_function

One of the most used improper integrals is given by
    ∫_{−∞}^{+∞} e^{−x²} dx = √π                                                  (2.19)

It is used in the theory of normal distributions, cp. [13]. The corresponding family of Gaussian probability density functions (short: Gaussian functions) can be written as
    f_{μ,σ}(x) := 1/(σ√(2π)) · e^{−(x−μ)²/(2σ²)},    σ > 0,                      (2.20)

Mean value: μ,    Variance: σ²    and    Standard deviation: σ = √Var(X)

Standard deviation is a widely used measure of dispersion.

The graph of every probability density function (2.20) is bell-shaped, with a peak at the mean value μ.
The special probability density function in (2.20) with μ = 0 and σ = 1 is connected with the standard normal distribution:
    f_{0,1}(x) := 1/√(2π) · e^{−x²/2}

From (2.19) one gets
    ∫_{−∞}^{+∞} f_{μ,σ}(x) dx = 1                                                (2.21)

Furthermore by
    E(X) = 1/(σ√(2π)) ∫_{−∞}^{+∞} x exp( −(x−μ)²/(2σ²) ) dx = μ
    Var(X) = 1/(σ√(2π)) ∫_{−∞}^{+∞} (x−μ)² exp( −(x−μ)²/(2σ²) ) dx = σ²          (2.22)
fundamental equations of probability theory are given.
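A rough numerical check of (2.21) and (2.22) in Python/NumPy (an illustrative sketch; parameter values are chosen arbitrarily):

    import numpy as np

    def gauss_pdf(x, mu, sigma):
        """Gaussian probability density (2.20)."""
        return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

    mu, sigma = 1.5, 0.7
    x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
    p = gauss_pdf(x, mu, sigma)
    print(np.trapz(p, x))                     # approx. 1            (total probability)
    print(np.trapz(x * p, x))                 # approx. mu = 1.5     (mean value)
    print(np.trapz((x - mu) ** 2 * p, x))     # approx. sigma^2 = 0.49 (variance)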

[Figure: Standard normal distribution]
[Figure: Gaussian densities for different parameters μ and σ]

3  Repetition - Real and complex Fourier series

3.1  Trigonometric polynomials and discrete frequency spectra

Compare
http://en.wikipedia.org/wiki/Periodic_function
http://en.wikipedia.org/wiki/Simple_harmonic_motion
http://de.wikipedia.org/wiki/Trigonometrisches_Polynom
(use the German page above, not the corresponding English variant)
http://en.wikipedia.org/wiki/Fourier_series
http://en.wikipedia.org/wiki/Frequency_spectrum
Real trigonometric polynomial of order n (standard form):

    τ_n(t) = a₀/2 + Σ_{k=1}^{n} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ],    a_k, b_k ∈ R,    ω₁ = 2π/T      (3.1)

Here ω₁ is used instead of ω, because ω later takes the role of the angular frequency variable. But ω₁ is in (3.1) a given, fixed angular frequency. Likewise ν later takes the role of the frequency variable:
    ω = 2πν
In this script ν is used for the frequency variable, not f and not ω!

Interpretation of (3.1) in the sense of oscillation theory:
T is a period of τ_n, the basic period (period of all oscillations contained in τ_n).
ω₁ is the angular frequency corresponding to T (basic angular frequency).
The basic frequency ν₁ of (3.1) is given by ω₁ = 2π ν₁.
(3.1) includes oscillations with integer multiples of the basic frequency ν₁.
The corresponding angular frequencies are integer multiples of the basic angular frequency ω₁:
    ω_k = k ω₁,    ν_k = k ν₁,    k = 1, 2, ..., n

τ_n has a special frequency spectrum: discrete, bounded and finite
    a(ν₀)/2 = a(0)/2 = a₀/2,    a(ν_k) = a_k,    b(ν_k) = b_k,    k = 1, 2, ..., n                        (3.2)
⟹ Fourier coefficients are plotted versus frequency ν.

τ_n has a special angular frequency spectrum: discrete, bounded and finite
    a(ω₀)/2 = a(0)/2 = a₀/2,    a(ω_k) = a_k,    b(ω_k) = b_k,    k = 1, 2, ..., n                        (3.3)
What is the difference to the equation (3.2)?
⟹ Fourier coefficients are plotted versus angular frequency ω.

The coefficient a₀/2 represents the mean value of τ_n(t) over every basic period interval [t_o, t_o + T].
a₀/2 is a special spectral value, belonging to the frequency 0 (ω_o = 0, ν_o = 0).

Example 3.1. In particular for 2π-periodic trigonometric polynomials, i.e. for T = 2π, one gets
    τ_n(t) = a₀/2 + Σ_{k=1}^{n} [ a_k cos(k t) + b_k sin(k t) ]                  (3.4)
with
    ν_k = k/(2π)    and    ω_k = k    for    k = 0, 1, 2, ..., n

Example 3.2. For 1-periodic trigonometric polynomials, i.e. for T = 1, one gets
    τ_n(t) = a₀/2 + Σ_{k=1}^{n} [ a_k cos(2π k t) + b_k sin(2π k t) ]            (3.5)
with
    ν_k = k    and    ω_k = 2πk    for    k = 0, 1, 2, ..., n

3.2  Real amplitude-phase representations of trigonometric polynomials

In the engineering literature the real trigonometric polynomial (3.1) is often given in the following real amplitude-phase representation.

Cosine representation:
    τ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k cos(k ω₁ t − φ_k)    with    −π < φ_k ≤ π                    (3.6)

From
    cos(α − β) = cos α cos β + sin α sin β
you get for every k ∈ N
    A_k cos(k ω₁ t − φ_k) = A_k cos(φ_k) cos(k ω₁ t) + A_k sin(φ_k) sin(k ω₁ t).
Compare this with (3.1) and get
    a_k = A_k cos(φ_k),    b_k = A_k sin(φ_k)    for k = 1, 2, ...                               (3.7)

If these types of amplitudes A_k and phases φ_k are given, then the coefficients a_k and b_k in (3.1) are well-defined. If the coefficients a_k and b_k are given, then compare with polar coordinates and look at the formulas (2.3) and (2.4).
Set
    x = a_k,    y = b_k,    r = A_k,    φ = φ_k
and get between (3.1) and (3.6) the following formulas
    A_k = √(a_k² + b_k²),    k = 1, 2, ...                                                        (3.8)

    φ_k =  arccos(a_k/A_k)      if b_k ≥ 0
           −arccos(a_k/A_k)     if b_k < 0          for k = 1, 2, ..., n                          (3.9)
or
    φ_k =  2 arctan( b_k/(A_k + a_k) )    if A_k + a_k > 0
           π                              if A_k + a_k = 0        for k = 1, 2, ..., n            (3.10)

If A_k = 0, then φ_k is not defined uniquely.

Setting
    A₀ = a₀/2
you get the spectral value A₀, which can also become negative. (Another definition of A₀ connected with φ₀ is possible, but not used here.)
The above formulas can be interpreted as relations between different spectra of special time signals:
between the spectrum {a_k, b_k} and the spectrum {A_k, φ_k}.

Characterisation of the corresponding time signals:
They are periodic, real valued and include only finitely many (angular) frequencies.
These angular frequencies are whole integer multiples of some basic angular frequency ω₁:
    ω_k = k ω₁
Spectral plot:
⟹ Amplitudes A_k and phases φ_k are plotted versus angular frequency ω,
or
⟹ Amplitudes A_k and phases φ_k are plotted versus frequency ν.
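A minimal Python/NumPy sketch of the conversion between the two spectra via (3.7)-(3.9) (function names and test values are ours):

    import numpy as np

    def to_amplitude_phase(a, b):
        """(a_k, b_k) -> (A_k, phi_k) according to (3.8)/(3.9); phi_k lies in (-pi, pi].
        np.arctan2(b, a) equals arccos(a/A) taken with the sign of b.
        For A_k = 0 the phase is not unique (arctan2 then returns 0)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.hypot(a, b), np.arctan2(b, a)

    def to_cos_sin(A, phi):
        """(A_k, phi_k) -> (a_k, b_k) according to (3.7)."""
        return A * np.cos(phi), A * np.sin(phi)

    a = np.array([1.0, 0.0, -2.0])
    b = np.array([1.0, 3.0,  0.5])
    A, phi = to_amplitude_phase(a, b)
    print(np.allclose(to_cos_sin(A, phi), (a, b)))   # True: the round trip recovers a_k, b_k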
Sine representation:
Now let's consider a similar sine representation of real trigonometric polynomials.
    τ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k sin(k ω₁ t − φ_k)    with    −π < φ_k ≤ π                    (3.11)

From
    sin(α − β) = cos(β) sin(α) − sin(β) cos(α)
you get for every k
    A_k sin(k ω₁ t − φ_k) = A_k cos(φ_k) sin(k ω₁ t) − A_k sin(φ_k) cos(k ω₁ t)
Compare this with (3.1) and get for the real Fourier coefficients
    a_k = −A_k sin(φ_k),    b_k = A_k cos(φ_k),    k = 1, 2, ...                                  (3.12)

If the coefficients a_k and b_k are given, then do the following:
Substitute in (3.7) a_k ↦ b_k and b_k ↦ −a_k and get between (3.1) and (3.11) the formulas
    A_k = √(a_k² + b_k²),    k = 1, 2, ...                                                        (3.13)
A₀ = a₀/2 is also the same mean value as in the cosine representation.
    φ_k =  arccos(b_k/A_k)      if a_k ≤ 0
           −arccos(b_k/A_k)     if a_k > 0          for k = 1, 2, ..., n                          (3.14)
or
    φ_k =  2 arctan( −a_k/(A_k + b_k) )    if A_k + b_k > 0
           π                               if A_k + b_k = 0        for k = 1, 2, ..., n           (3.15)

If A_k = 0, then also φ_k is not defined uniquely.

The real amplitude spectrum is here the same as in the cosine case, but the phase spectrum is changed.
Spectral plot:
⟹ Amplitudes A_k and phases φ_k are plotted versus angular frequency ω (ω_k = k ω₁),
or
⟹ Amplitudes A_k and phases φ_k are plotted versus frequency ν (ν_k = k ν₁).
Exercise 3.3. How are the above formulas between Fourier coefficients, amplitudes and phases changed, if you use instead of (3.6) and (3.11)
    τ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k cos(k ω₁ t + φ_k)    with    −π < φ_k ≤ π
and
    τ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k sin(k ω₁ t + φ_k)    with    −π < φ_k ≤ π
as bases for real amplitude-phase representations?

3.3  Complex representation of real trigonometric polynomials

As an alternative to (3.1), (3.6) and (3.11) respectively, a complex representation of the real trigonometric polynomial is often used:
    τ_n(t) = Σ_{k=−n}^{n} C_k e^{i k ω₁ t}.                                                       (3.16)

This representation is connected with the later introduced discrete Fourier transform (DFT).
Between the real spectrum of (3.1) and the complex spectrum of (3.16) the relations
    2 C₀ = a₀,    a_k = C_k + C_{−k},    b_k = i (C_k − C_{−k}),    k ∈ N
and
    C₀ = a₀/2,
    C_k = (a_k − i b_k)/2,       k ∈ N,
    C_{−k} = (a_k + i b_k)/2,    k ∈ N                                                            (3.17)
are valid. Show it with Euler's formula and with
    cos(kω₁ t) = (1/2) ( e^{i kω₁ t} + e^{−i kω₁ t} )                                             (3.18)
    sin(kω₁ t) = (1/(2i)) ( e^{i kω₁ t} − e^{−i kω₁ t} )                                          (3.19)

In general these spectral values C_k become complex, even if the trigonometric polynomial τ_n is real-valued. They own a real part Re(C_k) and an imaginary part Im(C_k).
So τ_n(t) is connected with a complex angular frequency spectrum:
This is discrete, bounded, finite and contains also negative angular frequencies.
All contained frequencies are whole multiples of a basic angular frequency ω₁.
    ω_k = k ω₁,    ω_{−k} = −ω_k,    C̃(ω_k) = C_k,    k = −n, ..., −1, 0, 1, ..., n               (3.20)

Spectral plot:
⟹ Real and imaginary part of the spectrum are plotted versus angular frequency ω.
If τ_n is real-valued, then
the real part Re(C̃(ω_k)) of the spectrum is an even function of ω with discrete domain
and
the imaginary part Im(C̃(ω_k)) of the spectrum is an odd function of ω with discrete domain.

Of course τ_n(t) is connected also with a complex frequency spectrum:
This is discrete, bounded, finite and contains also negative frequencies, but only whole multiples of a basic frequency ν₁.
    ν_k = k ν₁,    ν_{−k} = −ν_k,    C̃(ν_k) = C_k,    k = −n, ..., −1, 0, 1, ..., n               (3.21)

Spectral plot:
⟹ Real and imaginary part of the spectrum are plotted versus frequency ν.
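A small Python/NumPy sketch of the relations (3.17) (the function name and the test polynomial are ours):

    import numpy as np

    def real_to_complex_coeffs(a0, a, b):
        """Relations (3.17): C_0 = a_0/2, C_k = (a_k - i b_k)/2, C_{-k} = (a_k + i b_k)/2.
        Returns the coefficients ordered as C_{-n}, ..., C_0, ..., C_n."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        Cpos = (a - 1j * b) / 2
        Cneg = (a + 1j * b) / 2
        return np.concatenate([Cneg[::-1], [a0 / 2], Cpos])

    # tau_2(t) = 1 + 2 cos(w1 t) + 3 sin(2 w1 t):  a0 = 2, a = [2, 0], b = [0, 3]
    print(real_to_complex_coeffs(2.0, [2.0, 0.0], [0.0, 3.0]))
    # [0+1.5j, 1+0j, 1+0j, 1-0j, 0-1.5j]  for k = -2, -1, 0, 1, 2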

3.4  Complex amplitude-phase representations of trigonometric polynomials

The complex representation (3.16) of the real trigonometric polynomial can be transferred into
    τ_n(t) = Σ_{k=−n}^{n} |C_k| e^{i (kω₁ t + θ_k)} = Σ_{k=−n}^{n} |C_k| e^{iθ_k} e^{i kω₁ t}      (3.22)
with
    C_k = |C_k| e^{iθ_k}

From this representation we get the complex amplitude-phase spectrum by
    |C_k| = (1/2) √(a_k² + b_k²)    for k = 0, ±1, ±2, ±3, ...,    if you set b₀ = 0              (3.23)
    θ_k = arg(C_k)                  for k = 0, ±1, ±2, ±3, ...                                     (3.24)

The phases θ_k are the arguments of the C_k in the complex plane (Argand plane).
In our case of real valued trigonometric polynomials we get from (3.17)
    θ_k = θ(ω_k) = θ(kω₁) = arg(a_k − i b_k) = −arg(a_k + i b_k)       for k = 1, 2, ..., n
    θ_{−k} = θ(ω_{−k}) = θ(−kω₁) = arg(a_k + i b_k)                    for k = 1, 2, ..., n
C₀ is real, and so for θ₀ there are 3 possibilities:
1. θ₀ = 0 if C₀ > 0
2. θ₀ = π if C₀ < 0
3. θ₀ is not uniquely defined for C₀ = 0.

Since τ_n(t) is real valued, we get
    |C_{−k}| = |C_k|    and    θ_{−k} = −θ_k                           for k = 1, 2, ..., n
respectively
    |C̃(ω_{−k})| = |C̃(ω_k)|    and    θ(ω_{−k}) = −θ(ω_k)              for k = 1, 2, ..., n
So
this discrete amplitude spectrum |C̃(ω_k)| becomes an even function of ω
and
this discrete phase spectrum θ(ω_k) becomes an odd function of ω.

Similar to (3.6), (3.9) and (3.10) we now get formulas for calculating the arguments θ_k defined in (3.24).
For the negative angular frequencies we get:
    θ_{−k} = θ(ω_{−k}) =  arccos( a_k/(2|C_k|) )      if b_k ≥ 0
                          −arccos( a_k/(2|C_k|) )     if b_k < 0          for k = 1, 2, ..., n
or
    θ_{−k} = θ(ω_{−k}) =  2 arctan( b_k/(2|C_k| + a_k) )    if 2|C_k| + a_k > 0
                          π                                 if 2|C_k| + a_k = 0      for k = 1, 2, ..., n
This results in
    θ_{−k} = φ_k    for k = 1, 2, ..., n,
with the phases φ_k of the cosine representation (3.6). For the positive angular frequencies we get then
    θ_k = −φ_k    or    θ(ω_k) = −θ(ω_{−k})    for k = 1, 2, ..., n.

3.5  Fourier series

Very general periodic signals can be represented as Fourier series. These are limits of trigonometric polynomials.
This means: periodic signals can be decomposed into a sum of simple oscillating functions, namely sines and cosines or complex exponentials (possibly infinitely many summands, infinitely many harmonic oscillations). The study of Fourier series is a branch of Fourier analysis and provides essential basics for discrete spectral analysis.

Compare http://en.wikipedia.org/wiki/Square_wave
[Figures: Examples for periodic signals]

A Fourier series arises as a limit of trigonometric polynomials (3.1) for n → ∞. A sufficiently regular function f(t) with period T can be decomposed into a Fourier series
    f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ],    a_k, b_k ∈ R,    ω₁ = 2π/T      (3.25)
if the convergence concept for the right side is determined properly, compare for instance section 3.10. Then the real representations (3.6) and (3.11), the complex representation (3.16) and the complex amplitude-phase representation (3.22) also converge for n → ∞. The relations between the different discrete spectra remain valid, but now the corresponding spectra are countably infinite discrete spectra.

The angular frequency based spectrum of a periodic signal f is connected with its Fourier series (3.25):
    a(ω_o)/2 = a₀/2,    a(ω_k) = a_k,    b(ω_k) = b_k,    k ∈ N                                    (3.26)
with whole multiples of one basic angular frequency ω₁:
    ω_k = k ω₁,    k ∈ N
a₀/2 is the integral mean value of the signal f.

Similarly the frequency based spectrum:
    a(ν_o)/2 = a₀/2,    a(ν_k) = a_k,    b(ν_k) = b_k,    k ∈ N

These discrete spectral values (3.26) are called Fourier coefficients.

For real time signals f(t) (real valued functions) the spectrum (3.26) is real; that means such signals have only real Fourier coefficients in (3.25).
The Fourier coefficients connected with (3.25) can be calculated by
    a_k = (2/T) ∫_{t₀}^{t₀+T} f(t) cos(kω₁ t) dt                                                   (3.27)
    b_k = (2/T) ∫_{t₀}^{t₀+T} f(t) sin(kω₁ t) dt                                                   (3.28)
if the integrals exist (more accurately later). With t₀ any time shifting of the periodicity interval can be used (simplification of the calculation!). Especially we get from (3.27)
    a₀/2 = (1/T) ∫_{t₀}^{t₀+T} f(t) dt                                                             (3.29)

Compare Example 1 in http://en.wikipedia.org/wiki/Fourier_series

Often used variants of (3.27) and (3.28) are
    a_k = (2/T) ∫_0^T f(t) cos(kω₁ t) dt                                                           (3.30)
    b_k = (2/T) ∫_0^T f(t) sin(kω₁ t) dt                                                           (3.31)
or
    a_k = (2/T) ∫_{−T/2}^{T/2} f(t) cos(kω₁ t) dt                                                  (3.32)
    b_k = (2/T) ∫_{−T/2}^{T/2} f(t) sin(kω₁ t) dt                                                  (3.33)

Particularly for 2π-periodic signals f(t), that implies T = 2π, the Fourier series (3.25) becomes
    f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k t) + b_k sin(k t) ],    a_k, b_k ∈ R,    ω₁ = 1,    T = 2π      (3.34)
In the formulas (3.27), (3.28), (3.29), (3.30), (3.31), (3.32) and (3.33), T = 2π and ω₁ = 1 is to be inserted.

For practical examples the Fourier coefficients in (3.25) generally cannot be calculated in closed form. Then these coefficients must be approximated up to a chosen order n. So you get an approximation of a given periodic oscillation by a trigonometric polynomial of order n. Here n must be chosen so large that all essential oscillation components are contained in the approximation. In practice composed oscillations are measured by piezoelectric accelerometers. To get the essential spectral values of the measured signals, you can use the discrete Fourier transform (DFT). In many programs this is implemented as a fast algorithm, the so called fast Fourier transform (FFT), see Matlab, Maple, etc. With these things we deal later.
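A rough numerical sketch of (3.30)/(3.31) in Python/NumPy, applied to a square wave whose odd sine coefficients are known to be 4/(πk) (function names are ours, and the trapezoidal rule is only one possible quadrature):

    import numpy as np

    def fourier_coefficients(f, T, kmax, n=4096):
        """Approximate the real Fourier coefficients (3.30)/(3.31) of a T-periodic
        signal f by the trapezoidal rule on [0, T]."""
        w1 = 2 * np.pi / T
        t = np.linspace(0.0, T, n + 1)
        y = f(t)
        a = [2 / T * np.trapz(y * np.cos(k * w1 * t), t) for k in range(kmax + 1)]
        b = [2 / T * np.trapz(y * np.sin(k * w1 * t), t) for k in range(1, kmax + 1)]
        return np.array(a), np.array(b)

    # Square wave of period 2*pi: f = 1 on (0, pi), f = -1 on (pi, 2*pi).
    square = lambda t: np.sign(np.sin(t))
    a, b = fourier_coefficients(square, 2 * np.pi, 7)
    print(np.round(b, 4))   # approx. 4/pi, 0, 4/(3 pi), 0, 4/(5 pi), 0, 4/(7 pi)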

3.6  Fourier series of even and odd periodic functions

If the periodic time signal f(t) is an odd function, then the a_k are zero. The Fourier series (3.25) becomes
    f(t) = Σ_{k=1}^{∞} b_k sin(kω₁ t)
This is called a sine series. The sine series becomes zero at t = 0. The derivative of a sine series is a formal cosine series, but take account of its convergence behavior.
If the periodic time signal f(t) is an even function, then the b_k are zero. The Fourier series (3.25) becomes
    f(t) = a₀/2 + Σ_{k=1}^{∞} a_k cos(kω₁ t)
It is also called a cosine series. The derivative of a cosine series is a formal sine series, but take account of its convergence behavior.
Compare http://en.wikipedia.org/wiki/Even_and_odd_functions

3.7  Complex representation of Fourier series

We get such Fourier series from the complex trigonometric polynomial (3.16) if we calculate all complex Fourier coefficients C_k and if there exists some well defined limit for n against infinity:
    f(t) = Σ_{k=−∞}^{∞} C_k e^{i k ω₁ t}                                                           (3.35)

The coefficients of (3.25) and (3.35) are linked by (3.17).

These discrete spectral values can be calculated also directly by
    C_k = (1/T) ∫_{t_o}^{t_o+T} f(t) e^{−i kω₁ t} dt    for arbitrary real t_o,    ω₁ = 2π/T

Compare (3.27), (3.28), (3.29) and the following representation formulas there.
Mostly the versions for t_o = 0 and for t_o = −T/2 are used:
    C_k = C̃(ω_k) = C̃(kω₁) = (1/T) ∫_0^T f(t) e^{−i kω₁ t} dt                                      (3.36)
    C_k = C̃(ω_k) = C̃(kω₁) = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−i kω₁ t} dt                             (3.37)

3.8  Approximation of a rectangular oscillation by trigonometric polynomials

[Figure: Integral mean value (of every period)]
[Figure: Trigonometric polynomial of order 1]
[Figure: Trigonometric polynomial of order 3]
[Figure: Trigonometric polynomial of order 5]
[Figure: Trigonometric polynomial of order 9]
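The pictured partial sums can be reproduced with a small Python/NumPy sketch (our own illustration of the 2π-periodic square wave with the sine coefficients 4/(πk), k odd):

    import numpy as np

    def square_wave_partial_sum(t, n):
        """Trigonometric polynomial of order n for a 2*pi-periodic square wave of
        amplitude 1: only the odd sine terms with b_k = 4/(pi*k) appear."""
        s = np.zeros_like(t, dtype=float)
        for k in range(1, n + 1, 2):
            s += 4 / (np.pi * k) * np.sin(k * t)
        return s

    t = np.linspace(0, 2 * np.pi, 1000)
    for n in (1, 3, 5, 9, 99):
        print(n, round(float(np.max(square_wave_partial_sum(t, n))), 3))
    # The maximum stays above 1: the overshoot near the jumps does not vanish
    # for growing n (Gibbs phenomenon, cf. section 3.10).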

3.9  Piecewise continuous functions

The following definitions of piecewise continuity and piecewise continuous differentiability are specially connected with the field of Fourier series. In other fields you sometimes find modifications of these definitions.

Definition 3.4. A function f : [α, β] → C is called piecewise continuous on the closed interval [α, β], if it is continuous on all but a finite number of exception points t₁, t₂, ..., t_m in which the one-sided limits exist.

Remarks:
1. If f : [α, β] → C is continuous, then we have no exception points and the set {t₁, t₂, ..., t_m} is empty. A continuous f will be considered as a special piecewise continuous function.
2. Not every piecewise continuous f is continuous.
3. The function value f(t_k) in an exception point t_k plays no role, because f is not continuous there. So we include in Definition 3.4 the case in which the f(t_k) are not defined. Although the domain D of f in that case is D = [α, β] \ {t₁, t₂, ..., t_m}, we use the notation in Definition 3.4.
4. Definition 3.4 is often used for the special case of real valued functions f : [α, β] → R.

The functions (signals) defined in Definition 3.4 are special cases of absolutely integrable functions f on [α, β]
(short: f ∈ L¹([α, β]), look at the equation (4.16) with p = 1).
If
    ∫_α^β |f(t)| dt < ∞,
then f is called absolutely integrable on [α, β].

The functions in Definition 3.4 are also special cases of quadratically integrable functions on [α, β] (short: f ∈ L²([α, β])), look at the examples 4.19 and 4.29. In signal theory such signals f are called signals with finite energy on [α, β].
If
    ∫_α^β |f(t)|² dt < ∞,
then f is called quadratically integrable on [α, β].

Theorem 3.5. On every bounded interval [α, β] a quadratically integrable function f is also absolutely integrable:
    L²([α, β]) ⊂ L¹([α, β])
In general for functions f : I → C with unbounded domains I this is not true. For
    I = (−∞, β],    I = [α, ∞)    and    I = (−∞, ∞)                                              (3.38)
we get
    L²(I) ⊄ L¹(I)    and    L¹(I) ⊄ L²(I).

Theorem 3.6. All Fourier coefficients, that means all spectral values, exist if the T-periodic signal f is absolutely integrable on one period, for instance absolutely integrable on one of the intervals
    [α, β] = [0, T],    [α, β] = [−T, 0]    or    [α, β] = [−T/2, T/2].

Definition 3.7. A piecewise continuous function f : [α, β] → C is called piecewise continuously differentiable on [α, β], if the following two conditions are fulfilled:
1. f is continuously differentiable on all but a finite number of exception points t₁, t₂, ..., t_p.
2. In these exception points the derivatives f′(t_k) do not exist, but all one-sided limits of f′(t) exist.

Remarks:
1. Every piecewise continuously differentiable f is also piecewise continuous.
2. If f : [α, β] → C is continuously differentiable, then we have no exception points and then f is also piecewise continuously differentiable.
3. Not every piecewise continuously differentiable f is continuous.
4. Definition 3.7 is often used for the special case of real valued functions f : [α, β] → R.

Definition 3.8. A function f : I → C with unbounded domain I of the type (3.38)
is called piecewise continuous, if it is piecewise continuous on every bounded subinterval [α, β] of I,
is called piecewise continuously differentiable, if it is piecewise continuously differentiable on every bounded subinterval [α, β] of I.

Theorem 3.9. If a T-periodic function f : R → C is piecewise continuous (piecewise continuously differentiable) on one period, then it is piecewise continuous (piecewise continuously differentiable) on the whole domain R.

3.10  Pointwise and uniform convergence of Fourier series

Compare
http://en.wikipedia.org/wiki/Convergence_of_Fourier_series

Theorem 3.10. If f(t) is T-periodic and piecewise continuously differentiable on one period of length T, then the Fourier series is pointwise convergent for all t ∈ R. With ω₁ = 2π/T the limit of the Fourier series is then given by
    f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(kω₁ t) + b_k sin(kω₁ t) ]                     in every point t of continuity
    ( f(t−) + f(t+) )/2 = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(kω₁ t) + b_k sin(kω₁ t) ]      in every point t of discontinuity
In every bounded closed interval [α, β] on which in addition f is continuous, the Fourier series converges uniformly to f(t).
If f(t) is additionally continuous on R, then the Fourier series converges uniformly to f(t) on the whole domain R.

Remark 3.11. Under the same assumptions you can formulate a similar convergence theorem by using (3.35) and (3.37) or (3.36):
    f(t) = Σ_{k=−∞}^{∞} C_k e^{i kω₁ t}                     in every point t of continuity
    ( f(t−) + f(t+) )/2 = Σ_{k=−∞}^{∞} C_k e^{i kω₁ t}      in every point t of discontinuity

Remark 3.12. (Gibbs phenomenon)

Compare http://en.wikipedia.org/wiki/Gibbs_phenomenon
The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity, and that this overshoot does not die out as the order of the approximating Fourier polynomials (trigonometric polynomials) increases.

[Figure: Gibbs phenomenon - Overshoot]

3.11  Smoothness of signals and asymptotic behavior of their Fourier coefficients

Theorem 3.13. If the periodic signal f(t) has continuous derivatives up to order m and if f^(m)(t) is piecewise continuously differentiable, then there exists a constant L > 0 with
    |C_k| ≤ L / |k|^{m+1}    for all discrete spectral values C_k.
A similar statement holds for the spectral values a_k, b_k.
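A numerical illustration of this decay behavior in Python/NumPy (our own sketch; the test signals are a discontinuous square wave and a continuous triangle wave):

    import numpy as np

    def complex_coeff(f, T, k, n=8192):
        """C_k = (1/T) * integral over one period of f(t) e^{-i k w1 t} dt, computed numerically."""
        t = np.linspace(0.0, T, n + 1)
        w1 = 2 * np.pi / T
        return np.trapz(f(t) * np.exp(-1j * k * w1 * t), t) / T

    square   = lambda t: np.sign(np.sin(t))                           # jump discontinuities
    triangle = lambda t: np.abs((t % (2 * np.pi)) - np.pi) - np.pi/2  # continuous, kinks only

    for k in (1, 5, 25):
        print(k, abs(complex_coeff(square, 2*np.pi, k)), abs(complex_coeff(triangle, 2*np.pi, k)))
    # The smoother (continuous) triangle wave has much faster decaying coefficients
    # than the discontinuous square wave, illustrating Theorem 3.13.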

3.12  Exercises

Exercise 3.14. The time signal f has the following properties: f(t) = f(t + 2π) for all t ∈ R and
    f(t) =  0    for  −π ≤ t ≤ −π/2
            4    for  −π/2 < t < π/2
            0    for   π/2 ≤ t < π
1. What is the primitive period of f(t)? Sketch the signal in the interval [−3π, 3π]. Has f(t) a symmetry property?
2. Determine the real and complex Fourier coefficients a_k, b_k and C_k. Write the Fourier series down.
3. Sketch the corresponding spectra for |ω| ≤ 4. Change the angular frequency scaling by frequency scaling and adapt the above spectral representations.
4. Calculate and sketch for the above complex spectrum the corresponding parts of the amplitude and of the phase spectrum.

Exercise 3.15. Let a T-periodic function f(t) with the convergent Fourier series (3.25) be given.
1. How are the basic angular frequency, the basic frequency, the basic period and the real and complex Fourier coefficients changed, if we replace f(t) by f̃(t) = f(p t) with some p > 0 (time dilation of the signal f)? Use for the new parameters ω̃₁, ã_k, ...
2. Take for f(t) the time signal of Exercise 3.14 and set p = 2. Sketch 3 periods of f̃(t). Sketch the corresponding spectra for |ω| ≤ 8 and compare these results with the last exercise.

Exercise 3.16. Let a T-periodic function f(t) with the convergent Fourier series (3.25) be given.
1. How are the basic angular frequency, the basic frequency, the basic period and the real and complex Fourier coefficients changed, if we replace f(t) by f̃(t) = f(t − t_o) with some t_o > 0 (time shifting of the signal f)? Use for the new parameters ω̃₁, ã_k, ...
2. Take for f(t) the time signal of Exercise 3.14 and set t_o = π/4. Sketch 3 periods of f̃(t). Sketch the corresponding spectra for |ω| ≤ 8 and compare these results with the last exercise.

4  Basics of Functional Analysis

4.1  Linear Spaces

The notion vector space will be generalised in this section. So function spaces and signal spaces (spaces of discrete signals and spaces of signals with continuous domain) can be interpreted as special cases of such a generalized structure. In the following definition consider only the cases
    F = R : field of real numbers
    F = C : field of complex numbers

Definition 4.1. A set V together with an operation "+" (addition) is called a linear space over a field F if the following axioms are valid:
1. If u, v, w ∈ V then v + w ∈ V and
       v + w = w + v
       (v + w) + u = v + (w + u)
       there is a zero vector 0 ∈ V so that v + 0 = v for all v ∈ V
       each v ∈ V has an additive inverse x ∈ V such that x + v = 0.
2. If λ ∈ F and v ∈ V then there is defined an operation "·" (outer multiplication) with λ · v ∈ V, so that the properties
       (λ + μ) · v = λ · v + μ · v
       λ · (v + w) = λ · v + λ · w
       (λ μ) · v = λ · (μ · v)
       1 · v = v
   hold for all λ, μ ∈ F and for all v, w ∈ V.

Remark 4.2. Also the notion vector space is conventional for what we have called linear space. But we will use the term linear space, because we reserve the notion vector space for the n-dimensional vector spaces Rⁿ and Cⁿ.

Examples 4.3. The following examples are essential and the symbols are often used.
Rⁿ : n-dimensional real vector space. Real vectors are written as column vectors, so the transposition T is used in (4.1); all components are real numbers, with the usual addition between vectors and the usual multiplication of vectors by real numbers (real scalars). Vectors in Rⁿ can be written in the form
    u = (u₁, u₂, ..., u_n)ᵀ                                                      (4.1)
Cⁿ : n-dimensional complex vector space, complex vectors, complex components are possible (the adjectives real or complex are omitted if the context is clear)
the space of polynomials with real coefficients and degree at most n
the space of polynomials with complex coefficients and degree at most n
spaces of infinite sequences
spaces of continuous functions
spaces of piecewise continuous functions
spaces of absolutely integrable functions
spaces of quadratically integrable functions (signals with finite energy)

4.2  Linear operators and linear functionals

Definition 4.4. If two linear spaces V and W over the same field F are given, then a map
    A : V → W
is called linear or a linear operator if for all u, v ∈ V and λ ∈ F the following conditions are fulfilled:
    A is homogeneous:    A(λ v) = λ A(v)
    A is additive:       A(u + v) = A(u) + A(v)
In the cases
    A : V → R    or    A : V → C
such an operator is called a linear functional (real or complex).

Examples of linear functionals:
scalar products with a fixed vector, definite integrals, function values at a fixed point
Examples of linear operators:
linear transformations in vector spaces by matrices, differentiation operators, the gradient, integrals with variable upper limit, integration operators like the Fourier and Laplace transform

4.3  Normed spaces

Definition 4.5. Given a linear space V over R or C, then a norm on V is a mapping
    ‖·‖ : V → R
with the following properties for all x, y ∈ V and all scalars λ:
    ‖x‖ ≥ 0                                                                      (4.2)
    ‖x‖ = 0  ⟺  x = 0                        (definiteness)                      (4.3)
    ‖λ x‖ = |λ| ‖x‖                          (positive homogeneity)              (4.4)
    ‖x + y‖ ≤ ‖x‖ + ‖y‖                      (triangle inequality, subadditivity) (4.5)
0 is here the zero element (zero vector) and 0 is the number zero. The norm is positive definite.
A linear space with a norm is called a normed space (normed vector space).
A linear space can be provided with several different norms.

Example 4.6. V = Rⁿ with the Euclidean norm
    ‖x‖₂ := √(x₁² + ... + x_n²) = √( Σ_{i=1}^{n} x_i² ),    x ∈ Rⁿ
[Figure: Unit sphere in R² concerning the Euclidean norm]

Example 4.7. Generalisation of the Euclidean norm to Cⁿ:
    ‖z‖₂ := √(|z₁|² + ... + |z_n|²) = √( Σ_{i=1}^{n} |z_i|² ),    z ∈ Cⁿ

Example 4.8. Maximum norm in Rⁿ or Cⁿ:
    ‖x‖_∞ := max( |x₁|, ..., |x_n| )
[Figure: Unit sphere in R² for the maximum norm]

Example 4.9. (Absolute) sum norm in Rⁿ or Cⁿ:
    ‖x‖₁ := Σ_{i=1}^{n} |x_i|
[Figure: Unit sphere in R² for the sum norm]

Examples 4.10. p-norms in Rⁿ and in Cⁿ

Only for p ≥ 1 one gets by
    ‖x‖_p := ( Σ_{k=1}^{n} |x_k|^p )^{1/p},    x = (x₁, x₂, ..., x_n)                              (4.6)
a norm in Rⁿ or in Cⁿ.
The triangle inequality now takes the form
    ( Σ_{k=1}^{n} |x_k + y_k|^p )^{1/p}  ≤  ( Σ_{k=1}^{n} |x_k|^p )^{1/p} + ( Σ_{k=1}^{n} |y_k|^p )^{1/p}      (4.7)
This special case of the triangle inequality is known as the Minkowski inequality for vectors and finite sequences respectively.
With q as the corresponding conjugate Hölder exponent of p, that means
    1/p + 1/q = 1,    1 ≤ p, q ≤ ∞    (set here for once 1/∞ = 0),                                 (4.8)
the so called Hölder inequality
    | Σ_{k=1}^{n} x_k y_k |  ≤  Σ_{k=1}^{n} |x_k y_k|  ≤  ( Σ_{k=1}^{n} |x_k|^p )^{1/p} ( Σ_{k=1}^{n} |y_k|^q )^{1/q}      (4.9)
is valid.
With the dot product (scalar product) in Rⁿ
    ⟨x, y⟩ = Σ_{k=1}^{n} x_k y_k,    ⟨x, y⟩ ∈ R                                                    (4.10)
and the component-by-component multiplication in Rⁿ
    x ∘ y = (x₁ y₁, x₂ y₂, ..., x_n y_n),    x ∘ y ∈ Rⁿ                                            (4.11)
(distinguish these two definitions carefully)
one can write the inequalities (4.9) in a shorter form
    |⟨x, y⟩| ≤ ‖x ∘ y‖₁ ≤ ‖x‖_p ‖y‖_q.                                                             (4.12)
It is possible to carry (4.12) and other inequalities in Rⁿ or Cⁿ over to special function spaces (spaces of signals with continuous time or continuous frequency domain). Especially for the case p = q = 2 the inequalities (4.9) or (4.12) become a variant of the Cauchy-Schwarz inequality (Schwarz inequality). It estimates scalar products (4.10) with Euclidean norms (lengths of vectors).
With the complex valued scalar product in Cⁿ
    ⟨x, y⟩ = Σ_{k=1}^{n} x_k ȳ_k,                                                                  (4.13)
which is a generalisation of (4.10), the notions and statements between (4.8) and (4.12) remain valid, if we substitute in (4.9)
    Σ_{k=1}^{n} x_k y_k    by    Σ_{k=1}^{n} x_k ȳ_k.
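A quick numerical spot check of the Minkowski and Hölder inequalities in Python/NumPy (an illustrative sketch with arbitrary test vectors):

    import numpy as np

    def p_norm(x, p):
        """p-norm (4.6); p = np.inf gives the maximum norm."""
        return np.linalg.norm(x, ord=p)

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=8), rng.normal(size=8)
    p = 3.0
    q = p / (p - 1)          # conjugate Hoelder exponent, so that 1/p + 1/q = 1

    # Hoelder inequality (4.9)/(4.12):
    print(abs(np.dot(x, y)) <= np.sum(np.abs(x * y)) <= p_norm(x, p) * p_norm(y, q))
    # Minkowski inequality (4.7):
    print(p_norm(x + y, p) <= p_norm(x, p) + p_norm(y, p))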

Remark 4.11. Inequalities for p-norms in Rⁿ and Cⁿ

As norms on finite dimensional vector spaces, all p-norms including the maximum norm (case p = ∞) are equivalent to each other. These results follow from the following inequalities:
    ‖x‖_p ≤ ‖x‖₁ ≤ n^{(p−1)/p} ‖x‖_p,        p ≥ 1,    n = 1, 2, 3, ...
    ‖x‖_∞ ≤ ‖x‖_p ≤ n^{1/p} ‖x‖_∞,           p ≥ 1,    n is the dimension of the space
There are infinite dimensional spaces with similar p-norms, where these inequalities are not valid.
From the last row of inequalities we get especially
    lim_{p→∞} ‖x‖_p = ‖x‖_∞    in Rⁿ and Cⁿ

Example 4.12. For a given closed interval I = [a, b] we define C(I) as the set of continuous functions on I. This set C(I) becomes in a simple way a linear space. With
    ‖f‖_∞ = max{ |f(t)| : t ∈ I }                                                (4.14)
a norm on C(I) is defined, the so called maximum norm.

If I is an open or a half open interval, whereby intervals of infinite length are included, then let C_b(I) be the linear space of continuous and bounded functions on I. The norm ‖f‖_∞ is now defined as
    ‖f‖_∞ = sup{ |f(t)| : t ∈ I },                                               (4.15)
where sup{ } is the supremum of the set defined in the curly brackets. This ‖f‖_∞ is called the supremum norm of f. In the case of C(I) it is the same as (4.14).

Example 4.13 (Sequence spaces). For every fixed p with
    1 ≤ p < ∞
the set of all real or complex sequences a = (a_n) over the index set N with
    Σ_{n=1}^{∞} |a_n|^p < ∞
forms a linear space ℓ^p(N) (usual term by term addition and usual outer number multiplication). This linear space becomes with
    ‖a‖_p = ( Σ_{n=1}^{∞} |a_n|^p )^{1/p}
a normed space.
In the case p = ∞, by
    ℓ^∞(N) = { (a_n) : sup_n |a_n| < ∞ }
the linear space of all bounded sequences is defined (real or complex valued). This linear space becomes with
    ‖a‖_∞ = sup_n |a_n|
a normed space.
All statements between (4.6) and (4.13) can be carried over to this example, if one substitutes n by ∞. But the used series must exist.
The linear spaces ℓ^∞(N_o), ℓ^p(N_o), ℓ^∞(Z) and ℓ^p(Z) and the corresponding norms for p ≥ 1 are defined similarly.

Example 4.14. With I as an interval in R, the set of all functions
    f : I → R    resp.    f : I → C
for which
    ∫_I |f(t)|^p dt < ∞    is valid with fixed p,    1 ≤ p < ∞,                  (4.16)
represents the linear space L^p(I). By identification of functions which differ only on a set of measure 0, one gets in this space with
    ‖f‖_p := ( ∫_I |f(t)|^p dt )^{1/p}                                           (4.17)
a norm. The triangle inequality for this special norm is known again as the Minkowski inequality.
If one substitutes in Examples 4.10 the sums by appropriate integrals, then the statements of those examples can be translated to the space L^p(I).

Most of the linear spaces V mentioned in this subsection are complete.
Completeness of V means:
Every Cauchy sequence in the space V has a limit that is also in V.
In particular, in a complete space V = V(‖·‖) from
    Σ_{n=1}^{∞} ‖f_n‖ < ∞    with    f_n ∈ V    for all n
always follows
    Σ_{n=1}^{∞} f_n = f    with some uniquely defined f ∈ V.

4.4  Real inner product spaces

Compare
http://en.wikipedia.org/wiki/Inner_product_space
http://en.wikipedia.org/wiki/Dot_product

An inner product space is sometimes also called a pre-Hilbert space, since its completion with respect to the metric induced by its inner product is a Hilbert space.
In this subsection let V be a real linear space.

Definition 4.15. A scalar product (or an inner product) on V is a map
    ⟨·, ·⟩ : V × V → R
that satisfies the following 5 axioms for all x, y, z ∈ V and all λ ∈ R:
1.  ⟨x, x⟩ ≥ 0                               (non-negative)
2.  ⟨x, x⟩ = 0  ⟺  x = 0                     (definite)
3.  ⟨x, y⟩ = ⟨y, x⟩                           (symmetric)
4.  ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩              (additive in the first argument)
5.  ⟨λ x, y⟩ = λ ⟨x, y⟩                       (homogeneous in the first argument)

From the symmetry property follows
    ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩              (additive in the second argument)
and
    ⟨x, λ y⟩ = λ ⟨x, y⟩                       (homogeneous in the second argument)

- Axioms 1 and 2 together: positive-definiteness of ⟨·, ·⟩
- Axioms 4 and 5 together: linearity in the first argument
- Such a scalar product is a symmetric positive-definite bilinear form.

A real linear space V with such a scalar product is called an inner product space. Inner product spaces have a naturally defined norm based upon the given inner product.

Theorem 4.16. Every inner product produces by
    ‖x‖ := √⟨x, x⟩                                                               (4.18)
a norm, the so called induced norm.

Theorem 4.17. Cauchy-Schwarz inequality (also Schwarz inequality or Bunyakovsky inequality)
The induced norm of an inner product space V satisfies
    |⟨x, y⟩| ≤ ‖x‖ ‖y‖    for all x, y ∈ V.                                       (4.19)

Proof:
(4.19) is trivial in the case y = 0. So only the case ⟨y, y⟩ ≠ 0 is to be examined. For every real number λ ∈ R one gets
    0 ≤ ⟨x − λy, x − λy⟩ = ⟨x − λy, x⟩ − λ⟨x − λy, y⟩ = ⟨x, x⟩ − 2λ⟨x, y⟩ + λ²⟨y, y⟩
With the special choice
    λ = ⟨x, y⟩/⟨y, y⟩ = ⟨x, y⟩/‖y‖²
results
    0 ≤ ‖x‖² − 2 ⟨x, y⟩²/‖y‖² + ⟨x, y⟩²/‖y‖²
and then
    0 ≤ ‖x‖² − ⟨x, y⟩²/‖y‖².
So we get
    ⟨x, y⟩² ≤ ‖x‖² ‖y‖².
The square root of both sides of this inequality provides the Schwarz inequality (4.19).

Example 4.18. In the Euclidean spaces Rⁿ the inequality (4.19) takes the form
    | Σ_{i=1}^{n} x_i y_i |  ≤  √( Σ_{i=1}^{n} x_i² ) · √( Σ_{i=1}^{n} y_i² )     (4.20)
Compare http://en.wikipedia.org/wiki/Euclidean_space

Example 4.19. The linear space of all real valued functions over I = [a, b] which are quadratically integrable there is termed
    L²(I)    or    L²([a, b]).
An inner product is here given with
    ⟨f, g⟩ := ∫_a^b f(t) g(t) dt                                                 (4.21)
The induced norm is
    ‖f‖ := √⟨f, f⟩ = √( ∫_a^b f²(t) dt ).                                         (4.22)
For this special case the squared Schwarz inequality (4.19) gets the shape
    ( ∫_a^b f(t) g(t) dt )²  ≤  ∫_a^b f²(t) dt · ∫_a^b g²(t) dt                   (4.23)
This space L²(I) is complete. For I also I = [a, ∞) or I = (−∞, b] can be chosen.

Scalar product and angle in R² and R³
In Euclidean geometry scalar product, length and angle are related. The (smaller) angle between two non-zero vectors a⃗ and b⃗ is given by
    a⃗ · b⃗ = |a⃗| |b⃗| cos(φ)    with    φ = ∠(a⃗, b⃗),    0 ≤ φ ≤ π
This scalar product becomes 0 if and only if the angle between the corresponding vectors is π/2. Then the vectors are orthogonal to each other.
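A small Python/NumPy sketch of the L² inner product (4.21), the Schwarz inequality (4.23) and the abstract angle of the following Definition 4.20 (function names and test functions are ours):

    import numpy as np

    def l2_inner(f, g, a, b, n=10001):
        """Numerical approximation of the L^2 inner product (4.21) on [a, b]."""
        t = np.linspace(a, b, n)
        return np.trapz(f(t) * g(t), t)

    f = np.sin
    g = lambda t: t
    a, b = 0.0, np.pi
    ip = l2_inner(f, g, a, b)
    nf = np.sqrt(l2_inner(f, f, a, b))
    ng = np.sqrt(l2_inner(g, g, a, b))
    print(abs(ip) <= nf * ng)              # Schwarz inequality (4.23) holds
    print(np.arccos(ip / (nf * ng)))       # "angle" between f and g, cf. Definition 4.20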

Definition 4.20. Abstract definition of angle

A scalar product in a real linear space V (abstract vector space with inner product) provides by
    cos φ = ⟨x, y⟩ / ( √⟨x, x⟩ · √⟨y, y⟩ ),    0 ≤ φ ≤ π,                          (4.24)
the definition of the angle between two non-zero elements (abstract vectors).
By
    |⟨x, y⟩|² ≤ ⟨x, x⟩ ⟨y, y⟩    ⟺    | ⟨x, y⟩ / ( √⟨x, x⟩ · √⟨y, y⟩ ) | ≤ 1
this is a correct definition.

Definition 4.21. Orthogonality

In a real inner product space V elements x ≠ 0 and y ≠ 0 are called orthogonal if
    ⟨x, y⟩ = 0.                                                                   (4.25)

The following notions can be generalised:
System of pairwise orthogonal elements
Orthogonal basis
System of pairwise orthonormal elements
Orthonormal basis
Orthogonal projection on a subspace

4.5  Application to real Fourier series

Example 4.22. Basic functions regarding real Fourier series for the basic interval [0, 2π] are
    g_n(t) = cos(n t),    n = 0, 1, 2, ...,        h_m(t) = sin(m t),    m = 1, 2, ...,        t ∈ [0, 2π]
This is the most known example of an orthogonal function system. Using angle sum and difference identities, compare
http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Trigonometric_functions
one gets
    sin(n t) sin(m t) = (1/2) ( cos(n t − m t) − cos(n t + m t) )                                  (4.26)
    cos(n t) cos(m t) = (1/2) ( cos(n t − m t) + cos(n t + m t) )                                  (4.27)
    sin(n t) cos(m t) = (1/2) ( sin(n t − m t) + sin(n t + m t) )                                  (4.28)
Because of sin(α + π/2) = cos(α) the formula (4.27) follows directly from (4.26). Using Euler's formula it becomes possible to prove them all.
From (4.26), (4.27) and (4.28) the following orthogonality relations result
    ∫_0^{2π} sin(n t) sin(m t) dt =  π    for n = m,    m = 1, 2, ...
                                     0    for n ≠ m,    n, m = 1, 2, ...                           (4.29)
    ∫_0^{2π} cos(n t) cos(m t) dt =  π    for n = m,    m = 1, 2, ...
                                     2π   for n = m = 0
                                     0    for n ≠ m,    n, m = 1, 2, ...                           (4.30)
    ∫_0^{2π} cos(n t) sin(m t) dt = 0     for n ∈ N₀,  m ∈ N                                       (4.31)
Formula (4.31) is valid also for the case m = n.

Instead of (4.29), (4.30) and (4.31), with the Kronecker delta symbol you can write shorter
    ⟨h_n, h_m⟩ = π δ_nm    for n, m = 1, 2, ...,        ⟨g_n, g_m⟩ = π δ_nm    for n, m = 1, 2, ...
    ‖g₀‖² = ⟨g₀, g₀⟩ = 2π,
    ⟨g_n, h_m⟩ = 0    for n = 0, 1, 2, ...,    m = 1, 2, ...

The function set
    {g_n} ∪ {h_m},    n ∈ N₀,    m ∈ N
is an orthogonal basis in L²([0, 2π]); this means:
Pairwise different functions are orthogonal and the function set is complete. Completeness results from the fact that every f ∈ L²([0, 2π]) can be represented by its Fourier series, which converges in the energy norm of every period.
If we change the orthogonal basis {g_n} ∪ {h_m}, n ∈ N₀, m ∈ N by
    g̃₀ = (1/√(2π)) g₀,        g̃_n = (1/√π) g_n,    h̃_n = (1/√π) h_n    for n = 1, 2, ...,
then the function set
    {g̃_n} ∪ {h̃_m},    n ∈ N₀,    m ∈ N
provides an orthonormal basis in L²([0, 2π]).

⟹ Signal theory:
A periodic signal f with finite energy in every period is a special signal with finite average power. Such periodic signals are uniquely determined by their discrete spectrum. Here we had the special period T = 2π.
Theorem 4.23. If f ∈ L²([0, 2π]) then you get for the Fourier series
    f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k t) + b_k sin(k t) ]
with convergence in the energy norm of L²([0, 2π]).
This means that with the corresponding trigonometric polynomials
    f_N(t) = a₀/2 + Σ_{k=1}^{N} [ a_k cos(k t) + b_k sin(k t) ]
results
    ‖f − f_N‖₂ → 0    for N → ∞.
So the energy of the approximation error can be made arbitrarily small, if you choose N sufficiently large.
For the Fourier coefficients the following Parseval identity holds
    |a₀|²/4 + (1/2) Σ_{k=1}^{∞} ( |a_k|² + |b_k|² ) = (1/(2π)) ∫_0^{2π} |f(t)|² dt = ‖f‖²/(2π)
Similarly we have for the trigonometric approximations f_N(t)
    |a₀|²/4 + (1/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² ) = (1/(2π)) ∫_0^{2π} |f_N(t)|² dt = ‖f_N‖²/(2π)
So the signal energy of f or f_N over one period can be calculated from the spectrum of the signal. On the right hand sides of the last two equations the energy of one signal period is divided by the length of one time period. This is the average signal power of f or f_N in this period. Because the signals are 2π-periodic, this is here also the finite average power of the complete signals.
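A numerical check of this Parseval identity in Python/NumPy (our own sketch; the test signal is a trigonometric polynomial with a₀ = 2, a₁ = 1 and b₃ = 0.5, so both sides are exactly 1.625):

    import numpy as np

    def parseval_check(f, kmax=50, n=20000):
        """Compare both sides of Parseval's identity (Theorem 4.23) for a 2*pi-periodic f."""
        t = np.linspace(0.0, 2 * np.pi, n + 1)
        y = f(t)
        a0 = np.trapz(y, t) / np.pi
        a = [np.trapz(y * np.cos(k * t), t) / np.pi for k in range(1, kmax + 1)]
        b = [np.trapz(y * np.sin(k * t), t) / np.pi for k in range(1, kmax + 1)]
        lhs = a0 ** 2 / 4 + 0.5 * (np.sum(np.square(a)) + np.sum(np.square(b)))
        rhs = np.trapz(y ** 2, t) / (2 * np.pi)
        return lhs, rhs

    print(parseval_check(lambda t: 1 + np.cos(t) + 0.5 * np.sin(3 * t)))
    # both values are approx. 1.625 = 1 + 1/2 + 1/8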
Now the general case of real Fourier series:
Basic functions regarding real Fourier series for the basic interval [0, T] are
    g_n(t) = cos(n ω₁ t),    n = 0, 1, 2, ...,        h_m(t) = sin(m ω₁ t),    m = 1, 2, ...,        t ∈ [0, T]
with
    ω₁ = 2π/T
as basic angular frequency. They will be used to develop T-periodic functions in Fourier series.
One gets from (4.29), (4.30), (4.31) by the substitution
    t̃ = ω₁ t,    dt̃ = ω₁ dt
the orthogonality relations
    ∫_0^T sin(n ω₁ t) sin(m ω₁ t) dt =  T/2    for n = m,    m = 1, 2, ...
                                        0      for n ≠ m,    n, m = 1, 2, ...                      (4.32)
    ∫_0^T cos(n ω₁ t) cos(m ω₁ t) dt =  T/2    for n = m,    m = 1, 2, ...
                                        T      for n = m = 0
                                        0      for n ≠ m,    n, m = 1, 2, ...                      (4.33)
and
    ∫_0^T cos(n ω₁ t) sin(m ω₁ t) dt = 0       for n ∈ N₀,  m ∈ N                                  (4.34)
for the function system
    {cos(n ω₁ t), sin(m ω₁ t)},    n = 0, 1, 2, ...,    m = 1, 2, ... .

Exercise 4.24. Define the orthogonality relations for the functions g_n, h_m in the last example similar to Example 4.22. Then construct also an orthonormal basis in L²([0, T]) for this generalized case.
Theorem 4.25. If f ∈ L²([0, T]) then you get for the Fourier series
    f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ]
with convergence in the energy norm of L²([0, T]).
This means that with the corresponding trigonometric polynomials
    f_N(t) = a₀/2 + Σ_{k=1}^{N} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ]
results
    ‖f − f_N‖₂ → 0    for N → ∞.
So the energy of the approximation error can be made arbitrarily small, if you choose N sufficiently large.
For the Fourier coefficients the following Parseval identity holds
    |a₀|²/4 + (1/2) Σ_{k=1}^{∞} ( |a_k|² + |b_k|² ) = (1/T) ∫_0^T |f(t)|² dt = ‖f‖²/T        : average signal power of f(t)
Similarly we have for the trigonometric approximations f_N(t)
    |a₀|²/4 + (1/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² ) = (1/T) ∫_0^T |f_N(t)|² dt = ‖f_N‖²/T    : average signal power of f_N(t)
Interpret Theorem 4.25 like Theorem 4.23.

The energy error of the signal approximation in one period can be calculated by
    ‖f − f_N‖² = ∫_0^T |f(t) − f_N(t)|² dt = ∫_0^T |f(t)|² dt − T |a₀|²/4 − (T/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² )
From this we quickly get the approximation error measured in the average signal power.
4.6 Complex inner product spaces

(also called complex pre-Hilbert space or complex vector space with scalar product)
Compare http://en.wikipedia.org/wiki/Inner_product
Definition 4.26. A scalar product (also called inner product) on a complex vector space V is a map

\langle\cdot,\cdot\rangle : V \times V \to \mathbb{C}

that satisfies the following five axioms for all elements x, y, z \in V (abstract vectors) and all scalars \lambda \in \mathbb{C} (complex numbers):

1. \langle x, x\rangle \ge 0   (non-negative)
2. \langle x, x\rangle = 0 \iff x = 0   (definite)
3. \langle x, y\rangle = \overline{\langle y, x\rangle}   (Hermitian symmetry)
4. \langle x + y, z\rangle = \langle x, z\rangle + \langle y, z\rangle   (additive in the first argument)
5. \langle \lambda x, y\rangle = \lambda\langle x, y\rangle   (homogeneous in the first argument)

Application of the 3rd axiom to the 4th and 5th gives

\langle x, y + z\rangle = \langle x, y\rangle + \langle x, z\rangle   (additivity in the second argument)

and

\langle x, \lambda y\rangle = \overline{\lambda}\,\langle x, y\rangle   (conjugate homogeneity in the second argument)   (4.35)

Such a complex inner product is also called a positive-definite Hermitian form. In real spaces we worked with analogous positive-definite symmetric bilinear forms.
Analogously to the real case a norm can now be produced.
Definition 4.27. Every complex inner product induces by

\|x\| := \sqrt{\langle x, x\rangle}   (4.36)

a norm, the so-called induced norm. So every complex inner product space becomes a special normed space.


Example 4.28. In \mathbb{C}^n let the scalar product be defined via matrix calculus as

\langle x, y\rangle := \sum_{i=1}^{n} x_i\,\overline{y_i} = y^{*} x, if x and y are column vectors,

\langle x, y\rangle := \sum_{i=1}^{n} x_i\,\overline{y_i} = x\,y^{*}, if x and y are row vectors.

Remark: The adjoint matrix is defined by A^{*} = \overline{A}^{T}. Vectors are special matrices. The induced norm is now given by

\|x\| := \sqrt{\sum_{i=1}^{n}|x_i|^2} = \sqrt{\sum_{i=1}^{n} x_i\,\overline{x_i}} = \sqrt{x\,x^{*}}, if x is a row vector.

The Schwarz inequality (4.19) now becomes

\Bigl|\sum_{i=1}^{n} x_i\,\overline{y_i}\Bigr| \le \sqrt{\sum_{i=1}^{n}|x_i|^2}\;\sqrt{\sum_{i=1}^{n}|y_i|^2}   (4.37)

In the function space L2 of the next example the Schwarz inequality (4.19), taken squared, gets the form

\Bigl|\int_a^b f(t)\,\overline{g(t)}\,dt\Bigr|^2 \le \int_a^b |f(t)|^2\,dt \;\int_a^b |g(t)|^2\,dt   (4.38)

Example 4.29. Consider the linear space of complex valued, quadratically integrable functions over a bounded interval I = [a, b], short: L2(I) or L2([a, b]). If the energies of the time signals f and g are finite, then

\langle f, g\rangle := \int_a^b f(t)\,\overline{g(t)}\,dt   (4.39)

\|f\| := \sqrt{\langle f, f\rangle} = \sqrt{\int_a^b |f(t)|^2\,dt}   (4.40)

A complex inner product space is called complete if, with respect to the induced norm, every Cauchy sequence converges. Such a space is called a Hilbert space.
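A small MATLAB sketch (not from the original notes; the signal choices are only illustrative) approximates the inner product (4.39) and the norm (4.40) by the trapezoidal rule and checks the Schwarz inequality (4.38) numerically.

% Discrete approximation of the L2([a,b]) inner product (illustrative sketch)
a = 0;  b = 1;
t = linspace(a, b, 5001);
f = exp(1i*2*pi*3*t);            % example complex-valued signal
g = t .* exp(-t);                % example real-valued signal

ip    = trapz(t, f .* conj(g));  % <f,g> according to (4.39)
normf = sqrt(trapz(t, abs(f).^2));
normg = sqrt(trapz(t, abs(g).^2));

% Schwarz inequality (4.38): |<f,g>|^2 <= ||f||^2 * ||g||^2
fprintf('|<f,g>|^2 = %.6f <= %.6f\n', abs(ip)^2, normf^2 * normg^2);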

4.7 Application to complex Fourier series

In applications complex Fourier expansions are often used to analyse oscillating structures.
Here, for instance, the complex function system

f_n(t) = \exp(i n t), \qquad n \in \mathbb{Z}, \qquad t \in [0, 2\pi]   (4.41)

is given, which is related to that of Example 4.22 and connected with the complex representation of Fourier series for \omega_1 = 1 and T = 2\pi.
The orthogonality relations corresponding to the interval [0, 2\pi] are now simpler to calculate:

\int_0^{2\pi} e^{int}\,\overline{e^{imt}}\,dt = 2\pi for n = m, and 0 for n \ne m, \qquad n, m \in \mathbb{Z}   (4.42)

or

\langle f_n, f_m\rangle = 2\pi\,\delta_{nm}.

Pay attention to \overline{e^{imt}} = e^{-imt}.
The function system (4.41) characterises a system of complex harmonic oscillations which contains

the frequencies \frac{1}{2\pi}, \frac{2}{2\pi}, \dots, \frac{n}{2\pi}, \dots   (4.43)

the angular frequencies 1, 2, \dots, n, \dots   (4.44)

and the corresponding periods 2\pi, \frac{2\pi}{2}, \dots, \frac{2\pi}{n}, \dots

The primitive or basic period is here 2\pi.
The function system (4.41) provides an orthogonal basis in the complex function space L2([0, 2\pi]). Change it so that you get an orthonormal basis in L2([0, 2\pi]).
With the above function system one can analyse periodic oscillations with a basic period T = 2\pi resp. a basic frequency \nu_1 = \frac{1}{2\pi}. The frequency spectrum of such an oscillation corresponds to (4.43) resp. (4.44), completed by \nu_0 = 0 resp. \omega_0 = 0. This last discrete frequency is connected with the mean value of the oscillation.
Theorem 4.30. If f \in L2([0, 2\pi]) then the complex Fourier series (3.35) with \omega_1 = 1,

f(t) = \sum_{k=-\infty}^{\infty} C_k\,e^{ikt},

converges in the energy norm of L2([0, 2\pi]). This means that with the corresponding trigonometric polynomials

f_N(t) = \sum_{k=-N}^{N} C_k\,e^{ikt}

one gets

\|f - f_N\|_2 \to 0 \quad for \quad N \to \infty.

So the energy of the approximation error can be made arbitrarily small if N is chosen sufficiently large.
For the Fourier coefficients the following Parseval identity holds:

\sum_{k=-\infty}^{\infty}|C_k|^2 = \frac{1}{2\pi}\int_0^{2\pi}|f(t)|^2\,dt = \frac{\|f\|^2}{2\pi}   (average signal power of f(t))

Similarly, for the trigonometric approximations f_N(t),

\sum_{k=-N}^{N}|C_k|^2 = \frac{1}{2\pi}\int_0^{2\pi}|f_N(t)|^2\,dt = \frac{\|f_N\|^2}{2\pi}   (average signal power of f_N(t))

Between the spectra of Theorem 4.30 and Theorem 4.23 the equations

\frac{|a_0|^2}{4} + \frac{1}{2}\sum_{k=1}^{\infty}(|a_k|^2+|b_k|^2) = \sum_{k=-\infty}^{\infty}|C_k|^2

\frac{|a_0|^2}{4} + \frac{1}{2}\sum_{k=1}^{N}(|a_k|^2+|b_k|^2) = \sum_{k=-N}^{N}|C_k|^2

hold.
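The following MATLAB sketch (illustrative, not from the original notes) computes both the real coefficients a_k, b_k and the complex coefficients C_k of a sampled 2\pi-periodic signal by the trapezoidal rule and verifies the last two identities numerically; the test signal is arbitrary.

% Relation between real and complex Fourier spectra (illustrative sketch)
T = 2*pi;  t = linspace(0, T, 4001);
f = 1 + 0.5*cos(2*t) - 0.25*sin(5*t);     % example real 2*pi-periodic signal

N  = 8;
a0 = 2/T * trapz(t, f);
ak = zeros(1,N);  bk = zeros(1,N);  Ck = zeros(1, 2*N+1);   % Ck(k+N+1) holds C_k
for k = -N:N
    Ck(k+N+1) = 1/T * trapz(t, f .* exp(-1i*k*t));
end
for k = 1:N
    ak(k) = 2/T * trapz(t, f .* cos(k*t));
    bk(k) = 2/T * trapz(t, f .* sin(k*t));
end

lhs = a0^2/4 + 0.5*sum(ak.^2 + bk.^2);    % real-spectrum side
rhs = sum(abs(Ck).^2);                    % complex-spectrum side
fprintf('real form: %.6f   complex form: %.6f\n', lhs, rhs);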
As a rule, periodic oscillations with an arbitrary basic angular frequency \omega_1 > 0 are to be analysed. So you must replace (4.43) and (4.44) by

frequencies: \frac{n\omega_1}{2\pi}, \qquad n = 1, 2, \dots   (4.45)

angular frequencies: n\omega_1, \qquad n = 1, 2, \dots   (4.46)

corresponding periods: \frac{2\pi}{n\omega_1}, \qquad n = 1, 2, \dots

The function system used in this case of complex Fourier series, compare (3.35),

f_n(t) = e^{in\omega_1 t}, \qquad n \in \mathbb{Z}, \qquad t \in [0, T]   (4.47)

satisfies the orthogonality relations

\int_0^{T} e^{in\omega_1 t}\,\overline{e^{im\omega_1 t}}\,dt = T for n = m, and 0 for n \ne m   (4.48)

or

\langle f_n, f_m\rangle = T\,\delta_{nm}.

The function system (4.47) provides an orthogonal basis in the complex function space L2([0, T]).
Exercise: Change it so that you get an orthonormal basis in L2([0, T]).
Theorem 4.31. If f \in L2([0, T]) then the complex Fourier series (3.35)

f(t) = \sum_{k=-\infty}^{\infty} C_k\,e^{ik\omega_1 t}

converges in the energy norm of L2([0, T]). This means that with the corresponding trigonometric polynomials

f_N(t) = \sum_{k=-N}^{N} C_k\,e^{ik\omega_1 t}

one gets

\|f - f_N\|_2 \to 0 \quad for \quad N \to \infty.

So the energy of the approximation error can be made arbitrarily small if N is chosen sufficiently large.
For the Fourier coefficients the following Parseval identity holds:

\sum_{k=-\infty}^{\infty}|C_k|^2 = \frac{1}{T}\int_0^{T}|f(t)|^2\,dt = \frac{\|f\|^2}{T}

Similarly, for the trigonometric approximations f_N(t),

\sum_{k=-N}^{N}|C_k|^2 = \frac{1}{T}\int_0^{T}|f_N(t)|^2\,dt = \frac{\|f_N\|^2}{T}

On the right-hand sides of the last two equations stands the average signal power of f(t) and f_N(t).
The energy error of the signal approximation in one period can be calculated by

\|f - f_N\|^2 = \int_0^{T}|f(t) - f_N(t)|^2\,dt = \int_0^{T}|f(t)|^2\,dt - T\sum_{k=-N}^{N}|C_k|^2

From this we quickly get the approximation error measured in terms of the average signal power.
Between the spectra of Theorem 4.31 and Theorem 4.25 again the equations

\frac{|a_0|^2}{4} + \frac{1}{2}\sum_{k=1}^{\infty}(|a_k|^2+|b_k|^2) = \sum_{k=-\infty}^{\infty}|C_k|^2

\frac{|a_0|^2}{4} + \frac{1}{2}\sum_{k=1}^{N}(|a_k|^2+|b_k|^2) = \sum_{k=-N}^{N}|C_k|^2

hold.

4.8 Metric spaces

A metric space is a set where a notion of distance between elements of the set is defined. Such a distance d is also called a distance function or a metric.
Definition 4.32. A metric d on a set M is a map

d : M \times M \to \mathbb{R}

satisfying the following axioms for all x, y, z \in M:

d(x, y) \ge 0   (non-negative)
d(x, y) = 0 \iff x = y   (definite)
d(x, y) = d(y, x)   (symmetry)
d(x, y) \le d(x, z) + d(z, y)   (triangle inequality)

A set M with such a metric d is called a metric space M = (M, d).
In every normed space V = (V, \|\cdot\|) a metric is defined by

d(x, y) := \|x - y\|, \qquad x, y \in V.

So every normed space becomes a special metric space with the metric induced by the given norm.
Special metrics induced by norms:

The normed space \mathbb{R}^n with the Euclidean norm

\|x\|_2 = \sqrt{x_1^2 + \dots + x_n^2} = \sqrt{\sum_{i=1}^{n} x_i^2}

becomes with the induced Euclidean distance

d(x, y) = \|x - y\|_2 = \sqrt{(x_1-y_1)^2 + (x_2-y_2)^2 + \dots + (x_n-y_n)^2} = \sqrt{\sum_{i=1}^{n}(x_i-y_i)^2}

a metric space.

Similarly, the normed space \mathbb{C}^n with

\|z\|_2 = \sqrt{|z_1|^2 + \dots + |z_n|^2} = \sqrt{\sum_{i=1}^{n}|z_i|^2}, \qquad z \in \mathbb{C}^n,

becomes with the induced distance

d(z, w) = \|z - w\|_2 = \sqrt{|z_1-w_1|^2 + |z_2-w_2|^2 + \dots + |z_n-w_n|^2} = \sqrt{\sum_{i=1}^{n}|z_i-w_i|^2}

a metric space.

From the maximum norm in \mathbb{R}^n (p = \infty)

\|x\|_\infty = \max(|x_1|, \dots, |x_n|)

results the maximum metric

d(x, y) = \|x - y\|_\infty = \max(|x_1-y_1|, \dots, |x_n-y_n|).

From the absolute sum norm in \mathbb{R}^n (p = 1)

\|x\|_1 = \sum_{i=1}^{n}|x_i|

results the distance (Manhattan metric)

d(x, y) = \|x - y\|_1 = \sum_{i=1}^{n}|x_i - y_i|.

From the p-norm in \mathbb{R}^n

\|x\|_p = \Bigl(\sum_{i=1}^{n}|x_i|^p\Bigr)^{1/p}, \qquad 1 \le p < \infty,

results the distance (Minkowski metric)

d(x, y) = \|x - y\|_p = \Bigl(\sum_{i=1}^{n}|x_i - y_i|^p\Bigr)^{1/p}.

Every distance induced by a norm is translation invariant.

Example 4.33. With the interval I = [a, b], in the linear space C(I) the maximum norm is given by

\|f\|_\infty = \max\{|f(t)| : t \in I\}.

This norm induces by

d(f, g) = \|f - g\|_\infty = \max\{|f(t) - g(t)| : t \in I\}

a distance on V = C(I). With this distance one can construct a uniform neighbourhood of every given f \in C(I).
Example 4.34. For the normed space Lp(I) defined in (4.16) and (4.17) the distance is defined by

d(f, g) = \|f - g\|_p := \Bigl(\int_I |f(t) - g(t)|^p\,dt\Bigr)^{1/p}.

Mostly used are the cases

p = 1 : absolutely integrable functions
p = 2 : quadratically integrable functions, i.e. signals with finite energy
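As a small illustration (not from the original notes), the following MATLAB lines evaluate several of these induced metrics for two concrete vectors; the built-in norm function covers the p = 1, 2, \infty and general p cases.

% Distances induced by different norms in R^n (illustrative sketch)
x = [1  4 -2  0.5];
y = [0  1  3 -1.0];

dEuclid    = norm(x - y, 2);     % Euclidean metric
dManhattan = norm(x - y, 1);     % Manhattan metric
dMaximum   = norm(x - y, Inf);   % maximum metric
dMinkowski = norm(x - y, 3);     % Minkowski metric with p = 3

fprintf('p=2: %.4f  p=1: %.4f  p=Inf: %.4f  p=3: %.4f\n', ...
        dEuclid, dManhattan, dMaximum, dMinkowski);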

5 Fourier transform

5.1 Introduction

Consider a periodic signal f_T : \mathbb{R} \to \mathbb{R} with period T, its complex Fourier series and its discrete spectral values C_k, compare (3.35). By using the discrete spectral values

C_k = \hat C(\omega_k) = \frac{1}{T}\int_{-T/2}^{T/2} f_T(t)\,e^{-i\omega_k t}\,dt \qquad with \qquad \omega_1 = \frac{2\pi}{T} \quad and \quad \omega_k = k\omega_1 = \frac{2\pi k}{T}   (5.1)

of f_T(t) we get the representation formulas (reconstruction formulas)

f_T(t) = \sum_{k=-\infty}^{\infty} C_k\,e^{ik\omega_1 t} \qquad respectively \qquad f_T(t) = \sum_{k=-\infty}^{\infty} \hat C(\omega_k)\,e^{i\omega_k t} \qquad with \quad \omega_k = k\omega_1.   (5.2)

Certainly some convergence conditions must be satisfied, compare for instance subsection 3.10.
Now let f(t) and \hat f(\omega) be two functions with

f : \mathbb{R} \to \mathbb{C} \qquad and \qquad \hat f : \mathbb{R} \to \mathbb{C},

which are not periodic and which are connected by

\hat f(\omega) = \mathcal{F}_t\{f(t)\}(\omega) := \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt,   (5.3)

f(t) = \mathcal{F}^{-1}_\omega\{\hat f(\omega)\}(t) := \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega.   (5.4)

If both integrals exist and the equations hold, then we call \{f(t), \hat f(\omega)\} a Fourier transform pair. In applications we often use only the case of a real valued time signal f(t).
To the time signal f(t) of such a pair corresponds an angular frequency spectrum \hat f(\omega). Similarly, in (5.1) we had considered the discrete spectrum \hat C(\omega_k) of the periodic time signal f_T(t). This spectrum, also termed \hat C(\omega), has a discrete support of isolated frequencies

\{\dots, -2\omega_1, -\omega_1, 0, \omega_1, 2\omega_1, \dots\}.

But the spectrum \hat f(\omega) in (5.3) has a continuous support.
Often this is phrased as: \hat f(\omega) is a continuous spectrum. Pay attention to the fact that \hat f(\omega) is not always a continuous function of \omega. This is another property. So you have to distinguish between the notion of a spectrum with continuous support and the notion of a continuous spectral function!
Physical interpretation:

\hat C(\omega) contains complex amplitudes at isolated angular frequencies \omega_k.
\hat f(\omega) is an angular frequency density; this means it is a complex amplitude density (per angular frequency unit). The frequency spectrum is now blurred or smeared.

For understanding this, compare it in principle with some types of loading in structural mechanics:
A sum of concentrated loads on a beam is a set of forces which act at some discrete (isolated) points.
A load density on a beam (force per length) is a load blurred over some interval.
Symbolically one can write instead of (5.3) and (5.4) also

f(t)  ∘—•  \hat f(\omega) \qquad and \qquad \hat f(\omega)  •—∘  f(t)

The Fourier transform \mathcal{F} in (5.3) and its inverse transform \mathcal{F}^{-1} in (5.4) can be written in terms of the frequency variable \nu instead of the angular frequency variable \omega, with \omega = 2\pi\nu:

\hat f(\nu) = \int_{-\infty}^{\infty} f(t)\,e^{-i2\pi\nu t}\,dt \qquad and \qquad f(t) = \int_{-\infty}^{\infty}\hat f(\nu)\,e^{i2\pi\nu t}\,d\nu \qquad with \quad \omega = 2\pi\nu.   (5.5)

Now the frequency axis is scaled differently.
Look at the change of the factor in the representation formula for calculating f(t) and at the different integration variables in (5.5) and (5.4).
The equations above show that the transform and its inverse have the same general form. The integral kernels are exponential terms with different signs in the exponent.
Example 5.1. Calculate the F-transform of a zero-centered rectangular impulse of height A and time duration 1:

f(t) = A\,\mathrm{rect}(t).

The result is

\hat f(\omega) = A\,\mathrm{sinc}\Bigl(\frac{\omega}{2}\Bigr).

In the \nu-domain we get

\hat f(\nu) = A\,\mathrm{sinc}(\pi\nu).

Now calculate the F-transform of a special dilated and right-shifted version of f(t).
Hint: The first operation is dilation, the second is shifting. The time center of the resulting rectangular impulse is \tau/2, the duration of the impulse is \tau:

g(t) = A\,\mathrm{rect}\Bigl(\frac{t - \tau/2}{\tau}\Bigr) \qquad with \ \tau > 0

\hat g(\omega) = \int_{-\infty}^{\infty} g(t)\,e^{-i\omega t}\,dt = \int_0^{\tau} A\,e^{-i\omega t}\,dt = A\,\frac{1 - e^{-i\omega\tau}}{i\omega} = A\,\frac{e^{i\omega\tau/2} - e^{-i\omega\tau/2}}{i\omega}\,e^{-i\omega\tau/2}

With

\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr) = \frac{\sin(\omega\tau/2)}{\omega\tau/2}

one gets

\hat g(\omega) = A\,\tau\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)\,e^{-i\omega\tau/2}.

Apply the table of Fourier transform pairs to shorten the calculations:

f(t) \mapsto f\Bigl(\frac{t}{\tau}\Bigr) \mapsto f\Bigl(\frac{t - \tau/2}{\tau}\Bigr) \qquad corresponds to \qquad \hat f(\omega) \mapsto \tau\hat f(\tau\omega) \mapsto \tau\hat f(\tau\omega)\,e^{-i\omega\tau/2}
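A quick numerical cross-check of this example can be done in MATLAB (illustrative sketch, not part of the original notes). The Fourier integral (5.3) is approximated by a trapezoidal sum over a fine time grid; mysinc denotes the unnormalized sinc used in these notes, sinc(x) = sin(x)/x.

% Numerical check of Example 5.1: rect(t) <-> sinc(omega/2)  (sketch)
A  = 1;
dt = 1e-3;
t  = -0.5:dt:0.5;                       % support of the centered unit rect
f  = A * ones(size(t));

w  = linspace(-40, 40, 801);            % angular frequency grid
Fnum = zeros(size(w));
for m = 1:numel(w)
    Fnum(m) = trapz(t, f .* exp(-1i*w(m)*t));   % approximation of (5.3)
end

mysinc = @(x) sin(x)./x;                % unnormalized sinc
Fana   = A * mysinc(w/2);
Fana(w == 0) = A;                       % removable singularity at omega = 0

fprintf('max deviation: %.2e\n', max(abs(Fnum - Fana)));
plot(w, real(Fnum), w, Fana, '--');  xlabel('\omega');  legend('numerical','A sinc(\omega/2)');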

5.2 Relations between the Fourier transform and complex Fourier coefficients

Plausibility considerations:
Let f(t) be an absolutely integrable signal, short: f \in L1(\mathbb{R}). Such a signal is non-periodic. Furthermore let f(t) be piecewise continuously differentiable.
Choose now a sufficiently large T so that f(t) is essentially localized in the interval [-T/2, T/2]. Let f_T(t) be a periodic signal f_T : \mathbb{R} \to \mathbb{R} with

f_T(t) = f(t) \quad for all \ t \in [-\tfrac{T}{2}, \tfrac{T}{2}] \qquad and \qquad f_T(t + T) = f_T(t) \quad for all \ t.

Remark: Looking at f_T(t) in one special period means looking at the essential part of f(t).
Remark: From T \to \infty follows particularly f_T(t) \to f(t) with L1-convergence.
By setting \Delta\omega = \omega_1 one gets from (5.1), with a small angular frequency difference

\Delta\omega = \frac{2\pi}{T} \qquad and \qquad \omega_k = k\,\Delta\omega = \frac{2\pi k}{T},

the relations for the equally spaced \omega_k.
The analyzing formula (5.1) can now be compared with (5.3) for sufficiently large T:

\hat C(\omega_k) = \frac{1}{T}\int_{-T/2}^{T/2} f_T(t)\,e^{-i\omega_k t}\,dt \approx \frac{1}{T}\int_{-\infty}^{\infty} f(t)\,e^{-i\omega_k t}\,dt = \frac{1}{T}\,\hat f(\omega_k)

So we get

\hat C(\omega_k) \approx \frac{1}{T}\,\hat f(\omega_k) = \frac{\Delta\omega}{2\pi}\,\hat f(\omega_k) \qquad with \qquad \Delta\omega = \frac{2\pi}{T}.   (5.6)

This means that the discrete spectral values of periodic time signals can be approximated by evaluating the continuous Fourier transform at the corresponding angular frequency values \omega_k.
Enlarging T results in downsizing of \Delta\omega. This is connected with more densely lying discrete spectral lines of (5.1). In addition the magnitudes of the complex Fourier coefficients \hat C(\omega_k) then become smaller. The energy of the continuous spectrum is distributed over more densely lying discrete spectral values (better approximation).
The reconstruction formula (5.2) can be approximated by using the approximation formulas (5.6):

f_T(t) = \sum_{k=-\infty}^{\infty}\hat C(\omega_k)\,e^{i\omega_k t} \approx \frac{1}{2\pi}\sum_{k=-\infty}^{\infty}\hat f(\omega_k)\,e^{i\omega_k t}\,\Delta\omega \approx \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega.

For T \to \infty we get \Delta\omega \to 0. Then the difference of the two series goes to zero. The right expression represents the inverse Fourier transform.
With T \to \infty it follows for the angular frequency increment \Delta\omega \to 0, and you get in every point t of continuity

f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega.

This is the reconstruction formula (5.4) for non-periodic time signals. You get it as a limit case of the periodic-time-signal considerations.
If t is not a point of continuity then you get, similarly to subsection 3.10,

\frac{f(t-) + f(t+)}{2} = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega.

Generally we identify functions which are distinct only on a set of measure 0. Then the last representation for some special points t becomes irrelevant.
Example 5.2. Begin like in Example 5.1 with

f(t) = A\,\mathrm{rect}(t).

With T > 1 you get by

f_T(t) = \sum_{n=-\infty}^{\infty} f(t - nT)

a T-periodic function with the property f_T(t) = f(t) for -\tfrac{T}{2} < t \le \tfrac{T}{2}. The periodic function f_T(t) is here constructed from a function f(t) with bounded support. f(t) itself is not periodic! Now calculate the complex Fourier coefficients (5.1) for

\omega_1 = \frac{2\pi}{T} \qquad and \qquad \omega_k = k\omega_1 = \frac{2\pi k}{T} \qquad with \ T > 1.

From

\int e^{-i\omega_k t}\,dt = \frac{e^{-i\omega_k t}}{-i\omega_k} + C = i\,\frac{e^{-i\omega_k t}}{\omega_k} + C \qquad for \ \omega_k \ne 0

(this is an indefinite integral of a complex valued time function with a parameter \omega_k) results

C_k = \hat C(\omega_k) = \frac{1}{T}\int_{-T/2}^{T/2} f_T(t)\,e^{-i\omega_k t}\,dt = \frac{1}{T}\int_{-1/2}^{1/2} A\,e^{-i\omega_k t}\,dt = \frac{A}{T}\int_{-1/2}^{1/2} e^{-i\omega_k t}\,dt

C_k = \hat C(\omega_k) = \frac{A\,i}{T}\,\frac{\exp(-i\tfrac{\omega_k}{2}) - \exp(i\tfrac{\omega_k}{2})}{\omega_k} = -\frac{A\,i}{T}\,\frac{2i\sin(\tfrac{\omega_k}{2})}{\omega_k} = \frac{A}{T}\,\frac{\sin(\tfrac{\omega_k}{2})}{\tfrac{\omega_k}{2}}

\hat C(\omega_k) = \frac{A}{T}\,\mathrm{sinc}\Bigl(\frac{\omega_k}{2}\Bigr), \qquad also valid for \ \omega_0 = 0.

Here you get with the F-transform of Example 5.1

\hat C(\omega_k) = \frac{1}{T}\,\hat f(\omega_k) \qquad or \qquad \hat C(\omega_k) = \frac{\Delta\omega}{2\pi}\,\hat f(\omega_k).

In this example, instead of the approximations (5.6), even the corresponding equations hold for T > 1.
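The following MATLAB sketch (illustrative only) reproduces this example numerically: the coefficients C_k of the periodized rectangle are computed by the trapezoidal rule over one period and compared with (1/T)\hat f(\omega_k) from Example 5.1.

% Example 5.2 numerically: C_k of the periodized rect vs. (1/T)*fhat(w_k)
A = 1;  T = 4;  w1 = 2*pi/T;
t = linspace(-T/2, T/2, 8001);
fT = A * (abs(t) <= 0.5);               % one period of the periodization

mysinc = @(x) sin(x)./x;
kmax = 10;  err = 0;
for k = -kmax:kmax
    wk = k*w1;
    Ck = 1/T * trapz(t, fT .* exp(-1i*wk*t));        % formula (5.1)
    if k == 0, ref = A/T; else, ref = A/T * mysinc(wk/2); end
    err = max(err, abs(Ck - ref));
end
fprintf('largest deviation over k = -%d..%d : %.2e\n', kmax, kmax, err);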
Exercise 5.3. Carry out the same operations and calculations for the signal g(t) of Example 5.1.
Theorem 5.4. For a time signal f with f \in L2(\mathbb{R}) and \mathrm{supp}(f) \subset [t_o, t_o + T], and for a corresponding periodization given by

f_T(t) = \sum_{n=-\infty}^{\infty} f(t - nT),

the following properties hold:

1. f \in L1(\mathbb{R}).
2. f_T \in L2([t_o, t_o + T]), f_T \in L2([-\tfrac{T}{2}, \tfrac{T}{2}]), f_T \in L2([0, T]), with the same finite energy in every period.
3. f_T \in L1([t_o, t_o + T]), f_T \in L1([-\tfrac{T}{2}, \tfrac{T}{2}]), f_T \in L1([0, T]), with the same L1-norm in every period.
4. The Fourier transform \hat f(\omega) exists for all \omega and is a continuous function with the special properties \hat f(\omega) \to 0 for \omega \to \infty and \hat f(\omega) \to 0 for \omega \to -\infty (Riemann-Lebesgue lemma).
5. \hat f \in L2(\mathbb{R}). This means: also the spectrum has finite energy.
6. Between the complex Fourier coefficients C_k of f_T(t) and the Fourier transform of f(t) the relations

C_k = \hat C(\omega_k) = \frac{1}{T}\,\hat f(\omega_k) \qquad or \qquad C_k = \hat C(\omega_k) = \Delta\nu\,\hat f(\nu_k) = \frac{1}{T}\,\hat f(\nu_k) \qquad with \quad \Delta\nu = \frac{1}{T}, \quad \nu_k = \frac{k}{T}

are valid. (\hat f(\nu) is the frequency based Fourier transform.)
7. The Fourier transform \hat f(\nu) is here continuous and can be approximated by

\hat f(\nu) \approx \sum_{k=-\infty}^{\infty}\hat f(\nu_k)\,\mathrm{rect}\Bigl(\frac{\nu-\nu_k}{\Delta\nu}\Bigr) = \frac{2\pi}{\Delta\omega}\sum_{k=-\infty}^{\infty}\hat C(\omega_k)\,\mathrm{rect}\Bigl(\frac{\nu-\nu_k}{\Delta\nu}\Bigr) = T\sum_{k=-\infty}^{\infty}\hat C(\omega_k)\,\mathrm{rect}\Bigl(\frac{\nu-\nu_k}{\Delta\nu}\Bigr)

If the complex Fourier coefficients are known, then the continuous Fourier transform can be approximated. The Fourier coefficients depend on the choice of the above period length T. By enlarging T the approximation of the Fourier transform by the Fourier coefficients becomes better.
Remark 5.5. For a given time signal g \in L2(\mathbb{R}) \cap L1(\mathbb{R}) we get by

f(t) = g(t)\,\mathrm{rect}\Bigl(\frac{t}{T}\Bigr)

signals f(t) and

f_T(t) = \sum_{n=-\infty}^{\infty} f(t - nT)

which satisfy the assumptions of Theorem 5.4 with t_o = -\tfrac{T}{2}. Especially proposition 6 of Theorem 5.4 is valid. If the Fourier transform \hat g is known (and of simple structure) but the Fourier transform \hat f is not, then we can try to approximate the Fourier coefficients

C_k = \hat C(\omega_k) = \frac{1}{T}\,\hat f(\omega_k) \qquad by \qquad \tilde C_k = \tilde C(\omega_k) = \frac{1}{T}\,\hat g(\omega_k).

Estimation of the absolute approximation error:

|C_k - \tilde C_k| = \frac{1}{T}\Bigl|\int_{-T/2}^{T/2} g(t)\,e^{-i\omega_k t}\,dt - \int_{-\infty}^{\infty} g(t)\,e^{-i\omega_k t}\,dt\Bigr| \le \frac{1}{T}\Bigl|\int_{-\infty}^{-T/2} g(t)\,e^{-i\omega_k t}\,dt\Bigr| + \frac{1}{T}\Bigl|\int_{T/2}^{\infty} g(t)\,e^{-i\omega_k t}\,dt\Bigr|

or, weakened but simplified,

|C_k - \tilde C_k| \le \frac{1}{T}\int_{-\infty}^{-T/2}|g(t)|\,dt + \frac{1}{T}\int_{T/2}^{\infty}|g(t)|\,dt

Estimation of the relative approximation error:

\frac{|C_k - \tilde C_k|}{|C_k|} \le \frac{\int_{-\infty}^{-T/2}|g(t)|\,dt + \int_{T/2}^{\infty}|g(t)|\,dt}{\Bigl|\int_{-T/2}^{T/2} g(t)\,e^{-i\omega_k t}\,dt\Bigr|} = \frac{\int_{-\infty}^{-T/2}|g(t)|\,dt + \int_{T/2}^{\infty}|g(t)|\,dt}{|\hat f(\omega_k)|}, \qquad \frac{|C_k - \tilde C_k|}{|\tilde C_k|} \le \frac{\int_{-\infty}^{-T/2}|g(t)|\,dt + \int_{T/2}^{\infty}|g(t)|\,dt}{|\hat g(\omega_k)|},

valid if \hat f(\omega_k) \ne 0 and \hat g(\omega_k) \ne 0.
These are only plausibility considerations, but try the application for some very rapidly decaying time signals with unbounded support.
For another period starting value t_o all calculations can be adapted simply.

5.3 Amplitude and Phase Spectra

Let the Fourier transform of a real signal f(t) be represented by \hat f(\omega). This spectral density or spectrum \hat f(\omega) is usually complex and can be written in normal form as

\hat f(\omega) = \mathrm{Re}\{\hat f\}(\omega) + i\,\mathrm{Im}\{\hat f\}(\omega), \qquad or short \qquad \hat f(\omega) = R(\omega) + i\,I(\omega),

if it does not cause confusion. The functions R(\omega) and I(\omega) are the real and the imaginary part of the spectrum. In the polar form

\hat f(\omega) = |\hat f(\omega)|\,e^{i\varphi(\omega)} \qquad or \qquad \hat f(\omega) = |\hat f(\omega)|\,\exp(i\varphi(\omega))

the amplitude spectrum and the phase spectrum of the signal f are given by

|\hat f(\omega)| = \sqrt{R^2(\omega) + I^2(\omega)}, \qquad \varphi(\omega) = \arg\bigl(\hat f(\omega)\bigr).

For calculating the phase spectrum we can use the formulas

\varphi(\omega) = \arccos\frac{R(\omega)}{|\hat f(\omega)|} for I(\omega) \ge 0, \quad = -\arccos\frac{R(\omega)}{|\hat f(\omega)|} for I(\omega) < 0, \quad undefined for all \omega with \hat f(\omega) = 0,

or

\varphi(\omega) = 2\arctan\frac{I(\omega)}{|\hat f(\omega)| + R(\omega)} for |\hat f(\omega)| + R(\omega) \ne 0, \quad = \pi for |\hat f(\omega)| + R(\omega) = 0.

By these formulas we get -\pi < \varphi(\omega) \le \pi for the phase codomain.
For better understanding, sometimes \tilde\varphi(\omega) = \varphi(\omega) + 2k\pi with some k \in \mathbb{Z} is used.
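A short MATLAB check (illustrative only, not from the original notes) confirms that both phase formulas reproduce the built-in argument function angle for a generic complex spectral value.

% Phase formulas vs. MATLAB's angle() for one sample complex value (sketch)
z = -3 + 2i;                     % stands for one spectral value fhat(omega)
R = real(z);  I = imag(z);  m = abs(z);

if I >= 0, phi1 =  acos(R/m); else, phi1 = -acos(R/m); end
if m + R ~= 0, phi2 = 2*atan(I/(m + R)); else, phi2 = pi; end

fprintf('angle: %.6f  arccos form: %.6f  arctan form: %.6f\n', angle(z), phi1, phi2);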

If f(t) is real valued then the spectrum \hat f(\omega) satisfies the properties that R(\omega) is even and I(\omega) is odd. That means

f(t) real valued \quad \Longrightarrow \quad R(-\omega) = R(\omega) \quad and \quad I(-\omega) = -I(\omega).

This comes from

R(\omega) = \int_{-\infty}^{\infty} f(t)\cos(\omega t)\,dt \qquad and \qquad I(\omega) = -\int_{-\infty}^{\infty} f(t)\sin(\omega t)\,dt.

So we get for a real time signal f(t)

\hat f(-\omega) = \overline{\hat f(\omega)}.

If f(t) is a real time signal then its amplitude spectrum |\hat f(\omega)| is even and its phase spectrum \varphi(\omega) is odd. That means

|\hat f(-\omega)| = |\hat f(\omega)| \qquad and \qquad \varphi(-\omega) = -\varphi(\omega).

If f(t) is real valued, then with f_e as the even part of f and f_o as the odd part of f we get

f(t) = f_e(t) + f_o(t)  ∘—•  R(\omega) + i\,I(\omega),

f_e(t) = \tfrac{1}{2}\{f(t) + f(-t)\}  ∘—•  R(\omega) \qquad and \qquad f_o(t) = \tfrac{1}{2}\{f(t) - f(-t)\}  ∘—•  i\,I(\omega).

The F-transform of a real and even function is real and even. The F-transform of a real and odd function is purely imaginary and odd.
A real signal f(t) which can be expressed by

f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega \qquad (inverse Fourier transform)

has also the representations

f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}|\hat f(\omega)|\,e^{i\varphi(\omega)}\,e^{i\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}|\hat f(\omega)|\,e^{i(\varphi(\omega)+\omega t)}\,d\omega.

For real f(t) the amplitude spectrum |\hat f(\omega)| is even and the phase spectrum \varphi(\omega) is odd. Then also \sin(\varphi(\omega) + \omega t) is an odd function of \omega for every fixed parameter t. So we get the representation

f(t) = \frac{1}{\pi}\int_0^{\infty}|\hat f(\omega)|\cos(\varphi(\omega) + \omega t)\,d\omega \qquad for real f(t).

Example 5.6. The Fourier transform of

g(t) = \mathrm{rect}\Bigl(\frac{t - \tau/2}{\tau}\Bigr) \quad with \ \tau > 0 \quad is \quad \hat g(\omega) = \tau\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)\,e^{-i\omega\tau/2},

compare Example 5.1 for the case A = 1.
Here the amplitude spectrum results in

|\hat g(\omega)| = \tau\,\Bigl|\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)\Bigr|.

The phase spectrum is calculated with the help of

\varphi(\omega) = -\frac{\omega\tau}{2} + 2k\pi \quad for \ \mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr) > 0, \qquad \varphi(\omega) = -\frac{\omega\tau}{2} + \pi + 2k\pi \quad for \ \mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr) < 0,

where the whole number k is chosen so that -\pi < \varphi(\omega) \le \pi.

Exercise 5.7. Sketch the time signal g(t) of the last example.
Sketch an essential section of the corresponding amplitude spectrum |\hat g(\omega)|.
Sketch the envelope of this amplitude spectrum and characterise its asymptotic behavior.
Sketch the phase spectrum.
Think about the essential support of |\hat g(\omega)| which contains all significant angular frequencies. In general the essential support is not the support; it must be defined in some sense. From this we get the essential (angular frequency) bandwidth. In general it is not the exact (angular frequency) bandwidth.
Exercise 5.8. Replace g(t) in Example 5.6 by the time signal

g(t) = \mathrm{rect}\Bigl(\frac{t - t_o}{\tau}\Bigr) \qquad with some arbitrary fixed t_o and some arbitrary fixed \tau > 0.

Adapt the considerations and calculations of Example 5.6 and those of the last exercise.

5.4 Basic properties and examples of the Fourier transform

For the Fourier transformation of a linear combination of time signals the property

\mathcal{F}_t\Bigl\{\sum_{k=1}^{n}\alpha_k f_k(t)\Bigr\}(\omega) = \sum_{k=1}^{n}\alpha_k\,\mathcal{F}_t\{f_k(t)\}(\omega) = \sum_{k=1}^{n}\alpha_k\,\hat f_k(\omega)

holds; that means the F-transform is a linear operator (a linear map). A shorter formulation of this superposition principle is given by

\sum_{k=1}^{n}\alpha_k f_k(t)  ∘—•  \sum_{k=1}^{n}\alpha_k\,\hat f_k(\omega).   (5.7)

The proof follows from the fact that the integral is a linear functional.
In principle the F-transform can be applied to complex valued time functions f(t) as well. For the complex conjugate \overline{f(t)} of f(t) you get

\mathcal{F}_t\{\overline{f(t)}\}(\omega) = \int_{-\infty}^{\infty}\overline{f(t)}\,e^{-i\omega t}\,dt = \overline{\int_{-\infty}^{\infty} f(t)\,e^{i\omega t}\,dt} = \overline{\mathcal{F}_t\{f(t)\}(-\omega)},

or shorter

\overline{f(t)}  ∘—•  \overline{\hat f(-\omega)}.   (5.8)

The F-transform of a shifted signal (time delay of a signal) can be calculated by

\mathcal{F}_t\{f(t - t_o)\}(\omega) = e^{-i\omega t_o}\,\mathcal{F}_t\{f(t)\}(\omega), \qquad shortly \qquad f(t - t_o)  ∘—•  e^{-i\omega t_o}\,\hat f(\omega).   (5.9)

So time shifting leads to a complex modulation of the spectrum.
The linearity and this last rule can be used to find the following Fourier transform pairs. For every real parameter \tau \ne 0 you get

\frac{1}{2}\{f(t - \tau) + f(t + \tau)\}  ∘—•  \hat f(\omega)\cos(\omega\tau),

\frac{1}{2}\{f(t + \tau) - f(t - \tau)\}  ∘—•  i\,\hat f(\omega)\sin(\omega\tau),

\frac{f(t + \tau) - f(t - \tau)}{2\tau}  ∘—•  i\,\hat f(\omega)\,\frac{\sin(\omega\tau)}{\tau}.   (5.10)

What happens in the last equation for \tau \to 0?

Example 5.9. Look at Examples 5.2 and 5.6 and calculate the F-transform of the following time signal:

f(t) = \mathrm{rect}\Bigl(\frac{t - \tau/2}{\tau}\Bigr) - \mathrm{rect}\Bigl(\frac{t + \tau/2}{\tau}\Bigr) \qquad with parameter \ \tau \ne 0.

\hat f(\omega) = \exp\Bigl(-i\frac{\omega\tau}{2}\Bigr)\,\mathcal{F}_t\Bigl\{\mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr)\Bigr\}(\omega) - \exp\Bigl(i\frac{\omega\tau}{2}\Bigr)\,\mathcal{F}_t\Bigl\{\mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr)\Bigr\}(\omega)

\hat f(\omega) = -2i\,\tau\,\sin\Bigl(\frac{\omega\tau}{2}\Bigr)\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)

This (angular) frequency spectrum is purely imaginary and odd (the time signal f(t) itself is real and odd). Here a time-limited signal was given (bounded time support), but its F-transform is not frequency-limited (no bounded frequency support).
Time scaling of a signal f(t) is connected with the following equation in the frequency domain:

\mathcal{F}_t\{f(at)\}(\omega) = \frac{1}{|a|}\,\mathcal{F}_t\{f(t)\}\Bigl(\frac{\omega}{a}\Bigr), \qquad simpler formulated by \qquad f(at)  ∘—•  \frac{1}{|a|}\,\hat f\Bigl(\frac{\omega}{a}\Bigr) \qquad for all \ a \ne 0.   (5.11)

In the case a < 0 the operator f(t) \mapsto f(at) realises time dilation and time reflection.
Example 5.10. Find the F-transforms of the signal dilations

f_1(t) = \mathrm{rect}\Bigl(\frac{2t}{\tau}\Bigr) \qquad and \qquad f_2(t) = \mathrm{rect}\Bigl(\frac{t}{2\tau}\Bigr) \qquad for all \ \tau > 0.

From the transform pair

f(t) = \mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr)  ∘—•  \tau\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)

results

f_1(t) = f(2t)  ∘—•  \frac{\tau}{2}\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{4}\Bigr)

f_2(t) = f\Bigl(\frac{t}{2}\Bigr)  ∘—•  2\tau\,\mathrm{sinc}(\omega\tau)

Sketch the two time signals and their amplitude spectra for some parameter \tau > 0. Compare the time signals, the amplitude spectra and their main lobes.
The main lobe width of |\hat f_1(\omega)| is twice the main lobe width of |\hat f(\omega)|, whereas the main lobe width of |\hat f_2(\omega)| is half of the main lobe width of |\hat f(\omega)|. Here the amplitudes of all time pulses are 1.

Exercise 5.11. Change all rectangular pulses of the last examples by appropriate multiplications with constants, so that the signal energy becomes 1. Adapt the considerations and tasks of the last example for them.
Then choose the coefficients so that you get rectangular impulses of L1-norm 1 and adapt the considerations again.
A special case of the above time scalings is the time reversal operation (time reflection R)

R : f(t) \mapsto f(-t).

The F-transform of g(t) = f(-t) can be calculated with the help of

\mathcal{F}_t\{f(-t)\}(\omega) = \mathcal{F}_t\{f(t)\}(-\omega), \qquad shorter expressed by \qquad f(-t)  ∘—•  \hat f(-\omega).   (5.12)

Prove it also directly: with \tilde t = -t you get

\int_{-\infty}^{\infty} f(-t)\,e^{-i\omega t}\,dt = \int_{-\infty}^{\infty} f(\tilde t)\,e^{i\omega\tilde t}\,d\tilde t = \int_{-\infty}^{\infty} f(\tilde t)\,e^{-i(-\omega)\tilde t}\,d\tilde t = \hat f(-\omega).

If f(t) is real valued then we get

\hat f(-\omega) = \overline{\hat f(\omega)} \qquad and \qquad f(-t)  ∘—•  \overline{\hat f(\omega)},   (5.13)

which is used more often in the applications.


Example 5.12. Spectral density calculation of the following time signals:

1. f_1(t) = e^{-\alpha t}\,H(t), \qquad \alpha > 0

\hat f_1(\omega) = \int_{-\infty}^{\infty} e^{-\alpha t}H(t)\,e^{-i\omega t}\,dt = \int_0^{\infty} e^{-(\alpha+i\omega)t}\,dt = \Bigl[\frac{e^{-(\alpha+i\omega)t}}{-(\alpha+i\omega)}\Bigr]_{t=0}^{t=+\infty} = \frac{1}{\alpha + i\omega}

\hat f_1(\omega) = \frac{1}{\alpha + i\omega} = \frac{\alpha}{\alpha^2 + \omega^2} - i\,\frac{\omega}{\alpha^2 + \omega^2}

2. Use now the formula for time reflection to calculate the next signal spectrum:

f_2(t) = f_1(-t) = e^{\alpha t}\,H(-t) \qquad \Longrightarrow \qquad \hat f_2(\omega) = \hat f_1(-\omega) = \frac{1}{\alpha - i\omega}

3. Use the superposition property to calculate the spectrum of

f_3(t) = e^{-\alpha|t|} \qquad for \ \alpha > 0.

With f_3(t) = f_1(t) + f_1(-t) = f_1(t) + f_2(t) we get the result

\hat f_3(\omega) = \frac{1}{\alpha + i\omega} + \frac{1}{\alpha - i\omega} = \frac{2\alpha}{\alpha^2 + \omega^2} \qquad for \ \alpha > 0.

Neither the time representation nor the frequency representation of this signal has a bounded support.

Duality properties:

f(t)  ∘—•  \hat f(\omega) \quad \Longrightarrow \quad \hat f(t)  ∘—•  2\pi\,f(-\omega)   (5.14)

\hat f(\omega)  •—∘  f(t) \quad \Longrightarrow \quad f(\omega)  •—∘  \frac{1}{2\pi}\,\hat f(-t)

Proof of the first property: From the formula for \mathcal{F}^{-1} you get

2\pi f(t) = \int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega;

subsequent changing of t into -t results in

2\pi f(-t) = \int_{-\infty}^{\infty}\hat f(\omega)\,e^{-i\omega t}\,d\omega.

The exchange of the variable notations t and \omega yields

2\pi f(-\omega) = \int_{-\infty}^{\infty}\hat f(t)\,e^{-i\omega t}\,dt,

which gives the first statement.
An equivalent \nu-based duality property holds:

f(t)  ∘—•  \hat f(\nu) \quad \Longrightarrow \quad \hat f(t)  ∘—•  f(-\nu)

\hat f(\nu)  •—∘  f(t) \quad \Longrightarrow \quad f(\nu)  •—∘  \hat f(-t)

Example 5.13. With the duality theorem show that

g(t) = \frac{1}{a^2 + t^2}  ∘—•  \hat g(\omega) = \frac{\pi}{a}\,e^{-a|\omega|} \qquad for \ a > 0.

Solution: By

\frac{1}{2a}\,e^{-a|t|}  ∘—•  \frac{1}{2a}\cdot\frac{2a}{a^2 + \omega^2} = \frac{1}{a^2 + \omega^2}

and the duality property of the F-transform, we get from

e^{-a|t|}  ∘—•  \frac{2a}{a^2 + \omega^2}

the pair

\frac{1}{a^2 + t^2}  ∘—•  \frac{2\pi}{2a}\,e^{-a|-\omega|} = \frac{\pi}{a}\,e^{-a|\omega|}   (5.15)

and

\frac{\pi}{a}\,e^{-a|\omega|}  •—∘  \frac{1}{a^2 + t^2}, \qquad but only for \ a > 0.

Conclusion:

\int_{-\infty}^{\infty}\frac{\cos(\omega t)}{a^2 + t^2}\,dt = \frac{\pi}{a}\,e^{-a|\omega|} \qquad for \ a > 0.

Example 5.14. Calculate

\mathcal{F}_t\Bigl\{\frac{\sin(at)}{t}\Bigr\}(\omega) \qquad for \ a > 0.

From

\mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr)  ∘—•  \tau\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)

one gets with \tau = 2a

\mathrm{rect}\Bigl(\frac{t}{2a}\Bigr)  ∘—•  2a\,\mathrm{sinc}(a\omega) \qquad and \qquad \frac{1}{2a}\,\mathrm{rect}\Bigl(\frac{t}{2a}\Bigr)  ∘—•  \mathrm{sinc}(a\omega).

Apply the duality property and get, with the even function \mathrm{rect}(x),

\mathrm{sinc}(at)  ∘—•  2\pi\,\frac{1}{2a}\,\mathrm{rect}\Bigl(\frac{\omega}{2a}\Bigr) = \frac{\pi}{a}\,\mathrm{rect}\Bigl(\frac{\omega}{2a}\Bigr).

With

\frac{\sin(at)}{t} = a\,\frac{\sin(at)}{at} = a\,\mathrm{sinc}(at)

follows the required Fourier transform

\frac{\sin(at)}{t}  ∘—•  \pi\,\mathrm{rect}\Bigl(\frac{\omega}{2a}\Bigr) \qquad for \ a > 0.

Use the special parameter representation a = 2\pi B and get with

\frac{1}{2\pi B}\,\frac{\sin(2\pi B t)}{t} = \frac{\sin(2\pi B t)}{2\pi B t} = \mathrm{sinc}(2\pi B t)

the transformation pair

\mathrm{sinc}(2\pi B t)  ∘—•  \frac{1}{2B}\,\mathrm{rect}\Bigl(\frac{\omega}{4\pi B}\Bigr) \qquad for \ B > 0.

These two time signals (time filters) are signals (filters) with finite bandwidth. They have a bounded \omega-support.
Theorem 5.15. If the function f(t) has a Fourier transform \hat f(\omega) which is continuous at \omega = 0, then

\hat f(0) = \int_{-\infty}^{\infty} f(t)\,dt

is valid. If f(t) is continuous at t = 0 and \hat f \in L1(\mathbb{R}), then we get

f(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,d\omega.

Examples 5.16. Dirac impulses and periodic signals

Dirac impulses are used in connection with the transforms of periodic functions. Both types of signals are neither elements of L1(\mathbb{R}) nor elements of L2(\mathbb{R}). The considerations have to be generalised by using elements of distribution theory. Dirac impulses are special distributions (generalised functions), but they are very useful in signal analysis considerations.
Fourier transform of Dirac impulses in the time domain:

\delta(t)  ∘—•  1 \qquad and \qquad \delta(t - t_o)  ∘—•  e^{-i\omega t_o}   (5.16)

Every time-shifted Dirac impulse contains all frequencies with the same amplitude |e^{-i\omega t_o}| = 1; this means that this impulse has a constant amplitude spectrum.
This results from

\int_{-\infty}^{\infty}\delta(t - t_o)\,e^{-i\omega t}\,dt = \exp(-i\omega t_o), \qquad because \ e^{-i\omega t} \ is a continuous function of t.

Fourier transform of complex periodic time signals:

e^{i\omega_c t}  ∘—•  2\pi\,\delta(\omega - \omega_c) \qquad and in particular \qquad 1  ∘—•  2\pi\,\delta(\omega)   (5.17)

This results from

\mathcal{F}^{-1}_\omega\{\delta(\omega - \omega_c)\}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(\omega - \omega_c)\,e^{i\omega t}\,d\omega = \frac{1}{2\pi}\,e^{i\omega_c t} \qquad \Longrightarrow \qquad \frac{1}{2\pi}\,e^{i\omega_c t}  ∘—•  \delta(\omega - \omega_c).

Complex modulation and frequency translation:

f(t)\,e^{i\omega_c t}  ∘—•  \hat f(\omega - \omega_c) \qquad or \qquad \hat f(\omega - \omega_c)  •—∘  f(t)\,e^{i\omega_c t}   (5.18)

This complex modulation modifies every time signal f(t) so that the spectral density is shifted (shifting of the corresponding angular frequency spectrum).
Sequential application of time dilation, complex modulation and F-transform yields

f(at)\,e^{i\omega_c t}  ∘—•  \frac{1}{|a|}\,\hat f\Bigl(\frac{\omega - \omega_c}{a}\Bigr)   (5.19)

by using at first

f(at)  ∘—•  \frac{1}{|a|}\,\hat f\Bigl(\frac{\omega}{a}\Bigr)

and then the left one of the formulas (5.18).
F-transform for real modulated time signals:
For

f_{mod}(t) = f(t)\cos(\omega_c t + \varphi)

we get

\mathcal{F}_t\{f(t)\cos(\omega_c t + \varphi)\}(\omega) = \frac{1}{2}\,e^{-i\varphi}\hat f(\omega + \omega_c) + \frac{1}{2}\,e^{i\varphi}\hat f(\omega - \omega_c),

or shorter

\hat f_{mod}(\omega) = \frac{1}{2}\,e^{-i\varphi}\hat f(\omega + \omega_c) + \frac{1}{2}\,e^{i\varphi}\hat f(\omega - \omega_c).   (5.20)

This results directly from formula (5.18).

Definition 5.17. A signal f is called frequency limited or band limited to some \omega_o > 0 if |\hat f(\omega)| = 0 for all \omega with |\omega| > \omega_o > 0.
If f \in L2(\mathbb{R}) is band limited to some sufficiently small \omega_o > 0, then it is also called a low pass filter. For 0 < \omega_o < \omega_c we get then with f_{mod} a band pass filter with

|\hat f_{mod}(\omega)| = 0 \qquad for \ |\omega| > |\omega_c + \omega_o| \ and \ |\omega| < |\omega_c - \omega_o|.

If, above all, \hat f(\omega) is centered around 0 then \hat f_{mod} is centered around \omega_c. This means here that \hat f_{mod}(\omega) is centered around \omega_c in (0, +\infty) and centered around -\omega_c in (-\infty, 0). Illustrate this by a sketch.
As special examples of (5.20) we get

for \varphi = 0: \qquad f(t)\cos(\omega_c t)  ∘—•  \frac{1}{2}\,\hat f(\omega - \omega_c) + \frac{1}{2}\,\hat f(\omega + \omega_c)   (5.21)

and for \varphi = -\frac{\pi}{2}: \qquad f(t)\sin(\omega_c t)  ∘—•  \frac{1}{2i}\,\hat f(\omega - \omega_c) - \frac{1}{2i}\,\hat f(\omega + \omega_c)   (5.22)
5.5 Scalar products and signal energy considerations

The energy contained in a real or complex time signal with f \in L2(\mathbb{R}) is defined by

E(f) = \int_{-\infty}^{\infty}|f(t)|^2\,dt.   (5.23)

The integrand |f(t)|^2 is called the energy density per time. Since for f \in L2(\mathbb{R}) the equations

\int_{-\infty}^{\infty}|f(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|\hat f(\omega)|^2\,d\omega = \int_{-\infty}^{\infty}|\hat f(\nu)|^2\,d\nu \qquad (Parseval's theorem)   (5.24)

are valid, you can also write

E(f) = \frac{1}{2\pi}\int_{-\infty}^{\infty}|\hat f(\omega)|^2\,d\omega \qquad or \qquad E(f) = \int_{-\infty}^{\infty}|\hat f(\nu)|^2\,d\nu \qquad (Rayleigh's energy theorem)   (5.25)

Here \frac{1}{2\pi}|\hat f(\omega)|^2 is the energy density per angular frequency \omega and |\hat f(\nu)|^2 is the energy density per frequency \nu.
In particular: if f is quadratically integrable then so is \hat f, and conversely.
More generally we get for f, g \in L2(\mathbb{R}) relations between the scalar products in the time domain and in the frequency domains:

\langle f, g\rangle_t = \int_{-\infty}^{\infty} f(t)\,\overline{g(t)}\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,\overline{\hat g(\omega)}\,d\omega = \frac{1}{2\pi}\,\langle\hat f, \hat g\rangle_\omega   (5.26)

or

\langle f, g\rangle_t = \int_{-\infty}^{\infty}\hat f(\nu)\,\overline{\hat g(\nu)}\,d\nu = \langle\hat f, \hat g\rangle_\nu   (5.27)

Example 5.18. For f(t) = \mathrm{rect}(t) you get E(f) = 1, and for g(t) = f(t/\tau) = \mathrm{rect}(t/\tau) you get E(g) = \tau if \tau > 0.
Using the energy theorem and \mathrm{rect}(t/\tau)  ∘—•  \tau\,\mathrm{sinc}(\omega\tau/2) you can realise the equations

E(\hat g) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\Bigl[\tau\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)\Bigr]^2 d\omega = \int_{-\infty}^{\infty}\Bigl[\mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr)\Bigr]^2 dt = \tau \qquad for \ \tau > 0.

With \omega = 2\pi\nu you can transform it into a \nu-based representation

E(\hat g) = \int_{-\infty}^{\infty}\bigl[\tau\,\mathrm{sinc}(\pi\nu\tau)\bigr]^2 d\nu = \int_{-\infty}^{\infty}\Bigl[\mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr)\Bigr]^2 dt = \tau \qquad for \ \tau > 0.

Use this to calculate the integrals

\int_{-\infty}^{\infty}\mathrm{sinc}^2\Bigl(\frac{\omega\tau}{2}\Bigr)\,d\omega = \frac{2\pi}{\tau}, \qquad \int_{-\infty}^{\infty}\mathrm{sinc}^2(\pi\nu\tau)\,d\nu = \frac{1}{\tau} \qquad for every parameter \ \tau > 0.

Prove that

\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)  •—∘  \frac{1}{\tau}\,\mathrm{rect}\Bigl(\frac{t}{\tau}\Bigr) =: h_\tau(t) \qquad for \ \tau > 0.

The energy of the parameter dependent signal h_\tau goes to infinity if \tau \to 0.
Let us go back now to the signal g with time representation g(t) and angular frequency representation \hat g(\omega).
The zero crossing points \omega_\ell = -\frac{2\pi}{\tau} and \omega_r = \frac{2\pi}{\tau} of \hat g(\omega) have the least distance to the 0-point of the \omega-axis.
Calculate now for g the energy portion E_p in the \omega-band [-\frac{2\pi}{\tau}, \frac{2\pi}{\tau}], respectively in the \nu-band [-\frac{1}{\tau}, \frac{1}{\tau}].
The substitution \nu = \frac{\omega}{2\pi}, d\nu = \frac{d\omega}{2\pi}, followed by u = \nu\tau and subsequent numerical integration results in

E_p = \frac{1}{2\pi}\int_{-2\pi/\tau}^{2\pi/\tau}\Bigl[\tau\,\mathrm{sinc}\Bigl(\frac{\omega\tau}{2}\Bigr)\Bigr]^2 d\omega = \tau\int_{-1}^{1}\mathrm{sinc}^2(\pi u)\,du \approx 0.903\,\tau.

Exercise: Realise this with the rectangular method or the trapezoid rule.
Result:
About 90% of the signal energy is contained in the \omega-frequency band [-\frac{2\pi}{\tau}, \frac{2\pi}{\tau}], resp. in the \nu-frequency band [-\frac{1}{\tau}, \frac{1}{\tau}]. This means that about 90% of the signal energy is contained in the main lobe of the frequency representations.
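The suggested numerical integration can be done, for instance, with the trapezoid rule in MATLAB (illustrative sketch, not part of the original notes); it verifies the main-lobe energy fraction stated above.

% Energy fraction of the rect spectrum inside the main lobe (trapezoid rule)
nu = linspace(-1, 1, 200001);
s  = ones(size(nu));
idx = nu ~= 0;
s(idx) = (sin(pi*nu(idx)) ./ (pi*nu(idx))).^2;   % sinc^2(pi*nu), value 1 at nu = 0

Ep_rel = trapz(nu, s);     % relative energy portion, total energy is 1
fprintf('energy fraction in the main lobe: %.4f\n', Ep_rel);   % about 0.90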

5.6 The convolution of functions

Compare http://en.wikipedia.org/wiki/Convolution
The convolution

(f * g)(t) = \int_{-\infty}^{+\infty} f(t - \tau)\,g(\tau)\,d\tau \qquad or \qquad (g * f)(t) = \int_{-\infty}^{+\infty} f(\tau)\,g(t - \tau)\,d\tau   (5.28)

is defined if one of the two integrals exists almost everywhere. If one of them exists almost everywhere, then so does the other, with

(f * g)(t) = (g * f)(t).

Theorem 5.19. Convolution in L1(\mathbb{R})
If f and g are elements of L1(\mathbb{R}) then the following properties hold:

1. (f * g)(t) exists almost everywhere.
2. f * g \in L1(\mathbb{R}) with \|f * g\|_1 \le \|f\|_1\,\|g\|_1.

So the convolution is a continuous bilinear operation in L1(\mathbb{R}):

* : L1(\mathbb{R}) \times L1(\mathbb{R}) \to L1(\mathbb{R}).

Shortly:

f, g \in L1(\mathbb{R}) \quad \Longrightarrow \quad f * g \in L1(\mathbb{R}) \quad with \quad \|f * g\|_1 \le \|f\|_1\,\|g\|_1   (5.29)

Theorem 5.20. For f, g, h \in L1(\mathbb{R}) and arbitrary \alpha \in \mathbb{C} the following operation rules hold:

f * (\alpha g) = (\alpha f) * g = \alpha\,(f * g)
f * (g + h) = (f * g) + (f * h)
f * g = g * f
f * (g * h) = (f * g) * h

L1(\mathbb{R}) becomes with the operations + and * a commutative algebra. In L1(\mathbb{R}) there exists no identity element with respect to the convolution. This means there is no function g \in L1(\mathbb{R}) with (f * g)(t) = f(t).
If f(t) is continuous at the time point t, then its convolution with the Dirac impulse \delta(t) results in

(f * \delta)(t) = \int_{-\infty}^{+\infty} f(t - \tau)\,\delta(\tau)\,d\tau = \int_{-\infty}^{+\infty} f(\tau)\,\delta(t - \tau)\,d\tau = f(t),   (5.30)

whereby the integral is to be interpreted in a generalized way. \delta(t) is a distribution (a generalized function), not a function, but the use of the Dirac impulse simplifies many considerations. With

\delta_\alpha(t) = \delta(t - \alpha)

you get a shifted Dirac impulse localized at t = \alpha. In the case \alpha > 0 this is a delayed version of the Dirac impulse.
The following properties hold:

(\delta * f)(t) = f(t), \qquad (\delta * \delta)(t) = \delta(t), \qquad (\delta_\alpha * f)(t) = f(t - \alpha), \qquad (\delta_\alpha * \delta_\beta)(t) = \delta_{\alpha+\beta}(t) = \delta(t - (\alpha + \beta))   (5.31)

So \delta_\alpha(t) realizes a shifting of signals. In engineering notation you can formulate the above properties also in the following way:

\delta(t) * f(t) = f(t), \qquad \delta(t) * \delta(t) = \delta(t), \qquad \delta(t - \alpha) * f(t) = f(t - \alpha), \qquad \delta(t - \alpha) * \delta(t - \beta) = \delta(t - (\alpha + \beta))   (5.32)

Definition 5.21. The support of a piecewise continuous function f : \mathbb{R} \to \mathbb{C} is defined by

\mathrm{supp}(f) = \overline{\{t : f(t) \ne 0\}}   (5.33)

Lemma 5.22. If f(t) and g(t) are measurable functions and (f * g)(t) exists almost everywhere, then

\mathrm{supp}(f * g) \subset \mathrm{supp}(f) + \mathrm{supp}(g).   (5.34)

Conclusions of this lemma:
Convolution in the case of left-side limited supports:

\mathrm{supp}(f) \subset [a, +\infty), \quad \mathrm{supp}(g) \subset [b, +\infty) \quad \Longrightarrow \quad \mathrm{supp}(f * g) \subset [a + b, +\infty)   (5.35)

Convolution in the case of right-side limited supports:

\mathrm{supp}(f) \subset (-\infty, c], \quad \mathrm{supp}(g) \subset (-\infty, d] \quad \Longrightarrow \quad \mathrm{supp}(f * g) \subset (-\infty, c + d]   (5.36)

Convolution in the case of bounded supports:

\mathrm{supp}(f) \subset [a, c], \quad \mathrm{supp}(g) \subset [b, d] \quad \Longrightarrow \quad \mathrm{supp}(f * g) \subset [a + b, c + d]   (5.37)

In the case of such limited supports, modifications of the formula (5.28) can be used to calculate the convolution.
Theorem 5.23. If (f * g)(t) exists almost everywhere, then:

\mathrm{supp}(f) \subset [0, +\infty) \quad \Longrightarrow \quad (f * g)(t) = \int_0^{+\infty} f(\tau)\,g(t - \tau)\,d\tau   (5.38)

\mathrm{supp}(f) \subset [0, +\infty) \quad \Longrightarrow \quad (f * g)(t) = \int_{-\infty}^{t} f(t - \tau)\,g(\tau)\,d\tau   (5.39)

\mathrm{supp}(f) \subset [0, +\infty) \ and \ \mathrm{supp}(g) \subset [0, +\infty) \quad \Longrightarrow \quad (f * g)(t) = \int_0^{t} f(t - \tau)\,g(\tau)\,d\tau   (5.40)

\mathrm{supp}(f) \subset [0, +\infty) \ and \ \mathrm{supp}(g) \subset [0, +\infty) \quad \Longrightarrow \quad (f * g)(t) = \int_0^{t} f(\tau)\,g(t - \tau)\,d\tau   (5.41)

Theorem 5.24. If f(t) and g(t) are both piecewise continuous functions with \mathrm{supp}(f) \subset [a, +\infty) and \mathrm{supp}(g) \subset [b, +\infty), then their convolution (f * g)(t) is continuous for all t and has the property \mathrm{supp}(f * g) \subset [a + b, +\infty).
Corollary 5.25. If f(t) and g(t) are both piecewise continuous functions with compact support, then their convolution (f * g)(t) exists, also has a compact support and is continuous.
Now the convolution of f(t) and g(t) in different function spaces:
Theorem 5.26. If f \in L1_{loc}(\mathbb{R}), g \in L1(\mathbb{R}) and \mathrm{supp}(g) is bounded, then the convolution (f * g)(t) is defined almost everywhere and belongs to L1_{loc}(\mathbb{R}).

Compare the last theorem with Theorem 5.19 above.
Theorem 5.27. If f \in L1_{loc}(\mathbb{R}), g \in L1(\mathbb{R}) and f(t) is a bounded function, then the convolution (f * g)(t) exists for all t \in \mathbb{R} and belongs to L^\infty(\mathbb{R}) with

\|f * g\|_\infty \le \|f\|_\infty\,\|g\|_1.

f * g \in L^\infty(\mathbb{R}) means that (f * g)(t) is an essentially bounded function.
Theorem 5.28. If f \in Lp(\mathbb{R}) and g \in Lq(\mathbb{R}) with

p \ge 1, \quad q \ge 1 \quad and \quad \frac{1}{p} + \frac{1}{q} = 1 \qquad (conjugate Hölder exponents),

then (f * g)(t) is defined everywhere. Furthermore (f * g)(t) is then continuous and bounded on \mathbb{R} with

\|f * g\|_\infty \le \|f\|_p\,\|g\|_q.

Mostly used special cases:
p = 2, q = 2 results in

f, g \in L2(\mathbb{R}) \quad \Longrightarrow \quad (f * g)(t) is continuous and bounded with \|f * g\|_\infty \le \|f\|_2\,\|g\|_2   (5.42)

The convolution of two signals with finite energy is a continuous and bounded function on \mathbb{R}.
p = 1, q = \infty results in

f \in L1(\mathbb{R}), \; g \in L^\infty(\mathbb{R}) \quad \Longrightarrow \quad (f * g)(t) is continuous and bounded with \|f * g\|_\infty \le \|f\|_1\,\|g\|_\infty   (5.43)

If g(t) is a bounded piecewise continuous function then g \in L^\infty(\mathbb{R}). So an absolutely integrable function can be convolved with a bounded piecewise continuous function. The result is a continuous and bounded function. Filtering with f \in L1(\mathbb{R}) means

g(t) \mapsto (f * g)(t).

Such filtering improves here the regularity of the input signal g(t).
Let us consider a further possibility of convolution.
Theorem 5.29. If f \in L1(\mathbb{R}) and g \in L2(\mathbb{R}) then (f * g)(t) exists almost everywhere with f * g \in L2(\mathbb{R}) and

\|f * g\|_2 \le \|f\|_1\,\|g\|_2.

Such a filter f(t) maps a time signal with finite energy to a time signal with finite energy.
Theorem 5.30. (Derivatives of a convolution)
Let us assume that f \in L1(\mathbb{R}) and g \in C^p(\mathbb{R}). If in addition for k = 0, 1, \dots, p the functions g^{(k)}(t) are bounded, then

f * g \in C^p(\mathbb{R}) \qquad and \qquad (f * g)^{(k)} = f * g^{(k)} \qquad for \ k = 1, 2, \dots, p

are valid.
Under suitable conditions the Fourier transform of a convolution (f * g)(t) in the time domain is the pointwise product of the two corresponding Fourier transforms:

\mathcal{F}_t\{(f * g)(t)\}(\omega) = \mathcal{F}_t\{f(t)\}(\omega)\cdot\mathcal{F}_t\{g(t)\}(\omega)   (5.44)

Similarly the Fourier transform of a usual product of time signals can be calculated by

\mathcal{F}_t\{f(t)\,g(t)\}(\omega) = \frac{1}{2\pi}\,\bigl(\mathcal{F}_t\{f(t)\} * \mathcal{F}_t\{g(t)\}\bigr)(\omega)

under suitable conditions (more in the lectures).
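As a rough numerical illustration (not from the original notes), sampled versions of two decaying signals can be convolved with conv, scaled by the sampling step to approximate the continuous convolution, and the result can be checked against the convolution theorem (5.44). The signal choices and grid parameters are arbitrary.

% Discrete approximation of a continuous convolution (illustrative sketch)
dt = 1e-3;  t = 0:dt:10;
f = exp(-t);            % f(t) = e^{-t} H(t)
g = exp(-2*t);          % g(t) = e^{-2t} H(t)

h  = dt * conv(f, g);   % Riemann-sum approximation of (f*g)(t)
h  = h(1:numel(t));     % keep the part belonging to the time grid t
ha = exp(-t) - exp(-2*t);                   % exact convolution for these signals
fprintf('max error of the convolution: %.2e\n', max(abs(h - ha)));

% Convolution theorem (5.44) for this pair: 1/(1+iw) * 1/(2+iw)
w  = 3;                                      % one sample angular frequency
Ffg_num = trapz(t, h .* exp(-1i*w*t));       % F{f*g}(w), numerically
Ffg_ana = 1/(1+1i*w) * 1/(2+1i*w);           % product of the two spectra
fprintf('F{f*g}(3): numeric %.4f%+.4fi   product %.4f%+.4fi\n', ...
        real(Ffg_num), imag(Ffg_num), real(Ffg_ana), imag(Ffg_ana));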

5.7 Translation, dilation and differentiation of signals

More in the lectures.

5.8 Cross-correlation and autocorrelation of signals

Compare
http://en.wikipedia.org/wiki/Cross-correlation and
http://en.wikipedia.org/wiki/Autocorrelation
More in the lectures.

6 Important Fourier Transformation Pairs

Some general properties of the Fourier transformation:

Definition of the Fourier transform:          \hat f(\omega) = \mathcal{F}_t\{f(t)\}(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-i\omega t}\,dt
Definition of the inverse Fourier transform:  f(t) = \mathcal{F}^{-1}_\omega\{\hat f(\omega)\}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat f(\omega)\,e^{i\omega t}\,d\omega

f(t - t_o)                               ∘—•   e^{-i\omega t_o}\,\hat f(\omega)
f(at), \ a \ne 0                         ∘—•   \frac{1}{|a|}\,\hat f(\omega/a)
e^{i\omega_o t}\,f(t)                    ∘—•   \hat f(\omega - \omega_o)
\hat f(t)                                ∘—•   2\pi\,f(-\omega)
f^{(n)}(t)                               ∘—•   (i\omega)^n\,\hat f(\omega)
(-it)^n\,f(t)                            ∘—•   \hat f^{(n)}(\omega)
\int_{-\infty}^{t} f(\tau)\,d\tau        ∘—•   \frac{\hat f(\omega)}{i\omega} + \pi\,\hat f(0)\,\delta(\omega)
(f * g)(t)                               ∘—•   \hat f(\omega)\,\hat g(\omega)
f(t)\,g(t)                               ∘—•   \frac{1}{2\pi}\,(\hat f * \hat g)(\omega)

Important examples:

\sum_{n=-\infty}^{\infty}\delta(t - n\Delta T)   ∘—•   \sum_{k=-\infty}^{\infty} e^{-ik\Delta T\omega} = \frac{2\pi}{\Delta T}\sum_{k=-\infty}^{\infty}\delta\Bigl(\omega - \frac{2\pi k}{\Delta T}\Bigr)   (Poisson formula)
H(t)   (Heaviside)                       ∘—•   \pi\,\delta(\omega) + \frac{1}{i\omega}
\mathrm{tri}(t) = (\mathrm{rect} * \mathrm{rect})(t)   (triangular pulse)   ∘—•   \mathrm{sinc}^2(\omega/2)
\delta(t)                                ∘—•   1
1                                        ∘—•   2\pi\,\delta(\omega)
\delta(t - t_0)                          ∘—•   e^{-i\omega t_0}
e^{i\omega_0 t}                          ∘—•   2\pi\,\delta(\omega - \omega_0)
\cos(\omega_0 t)                         ∘—•   \pi\,[\delta(\omega + \omega_0) + \delta(\omega - \omega_0)]
\sin(\omega_0 t)                         ∘—•   i\pi\,[\delta(\omega + \omega_0) - \delta(\omega - \omega_0)]
\mathrm{rect}(t)                         ∘—•   \mathrm{sinc}(\omega/2)
\mathrm{rect}(t/a), \ a > 0              ∘—•   a\,\mathrm{sinc}(a\omega/2)
\mathrm{sgn}(t)                          ∘—•   \frac{2}{i\omega}
\mathrm{sinc}(t)                         ∘—•   \pi\,\mathrm{rect}(\omega/2)
e^{-t^2/(2\sigma^2)}                     ∘—•   \sigma\sqrt{2\pi}\;e^{-\sigma^2\omega^2/2}
H(t)\,e^{-\lambda t}, \ \lambda > 0      ∘—•   \frac{1}{\lambda + i\omega}
e^{-\lambda|t|}, \ \lambda > 0           ∘—•   \frac{2\lambda}{\lambda^2 + \omega^2}
H(t)\,t\,e^{-\lambda t}, \ \lambda > 0   ∘—•   \frac{1}{(\lambda + i\omega)^2}
H(t)\,e^{-\lambda t}\cos(\omega_o t), \ \lambda > 0   ∘—•   \frac{\lambda + i\omega}{\omega_o^2 + (\lambda + i\omega)^2}
H(t)\,e^{-\lambda t}\sin(\omega_o t), \ \lambda > 0   ∘—•   \frac{\omega_o}{\omega_o^2 + (\lambda + i\omega)^2}

7 Discrete Fourier Transform (DFT)

7.1 Introduction

Compare:
http://en.wikipedia.org/wiki/Discrete_Fourier_transform
The discrete Fourier transform (DFT) is the most important discrete transform, used to realize a practical Fourier analysis for sampled signals in engineering applications.
The DFT maps every discrete time signal with N samples onto a special frequency domain representation. The output of the DFT is a discrete signal of the same length N.
The DFT requires a discrete input of finite length N (row or column vector). Such inputs are often created by sampling a continuous signal in a chosen finite time interval of length T. The standard form of this time segment is here [0, T].
Generally an analog signal f(t) is converted to a discrete signal by equally spaced sampling of the continuous signal.
See the definitions of sampling period and sampling frequency in
http://en.wikipedia.org/wiki/Sampling_%28signal_processing%29
You have to choose a sufficiently large time duration T, in which the given continuous signal f(t) is sampled to f_k = f(t_k), k = 0, 1, 2, \dots, and a sufficiently small sampling period \Delta t, to get a realistic frequency analysis of f(t).
But do not choose T too large and \Delta t too small (computing time).
Two principal cases for an appropriate choice of T:
a) f(t) is essentially localized in an interval of length T. This means that a sufficiently large part of the signal energy is contained in the chosen interval.
b) If a) is not possible, then reduce T in a practical way. But look carefully at your physical and mathematical model and decide whether you can get your expected calculation results with sufficient correctness.
Theoretically the application of the DFT is in both cases connected with a T-periodization.
Example 7.1. As an academic example let the following signal with continuous domain be given:

f(t) = e^{-3t}\cos(5t - \tfrac{\pi}{6}) \quad for \ t \ge 0, \qquad f(t) = 0 \quad for \ t < 0.

You can consider it in some chosen [0, T] so that case a) is valid. This restriction of f(t) is a non-periodic signal. But its T-periodization, which starts in [0, T], becomes of course T-periodic. For our engineering applications only one basic period is of interest. In this period we sample f(t) and get our discrete input signal for the DFT. The discrete output is a frequency spectrum. This is only an academic example, because in practice we do not have a closed-form analytical expression (also no approximation) before sampling the time signal.
Remark 7.2. If a continuous T-periodic signal f(t) is given by a closed-form analytical expression, for instance by

f(t) = \Bigl(t - \frac{T}{2}\Bigr)^2 \quad for \ t \in [0, T] \qquad and \qquad f(t + T) = f(t) \quad for all \ t \in \mathbb{R},

then we have a special case of b). Now we sample f(t) in the given basic period [0, T] with a sufficiently small \Delta t by evaluating f_k = f(k\,\Delta t), k = 0, 1, 2, \dots. Then we can calculate the discrete spectrum with the help of Matlab-fft, vide infra. This gives the possibility of quick signal approximation by trigonometric polynomials connected with the corresponding spectral decomposition (compare the chapter on real and complex Fourier series).
The DFT can be computed efficiently in practice by using a fast Fourier transform algorithm (FFT). The term FFT is often used to mean DFT, but DFT refers to a mathematical transformation of a (time) signal and FFT refers to a specific family of algorithms for computing the DFT.
FFT algorithms are implemented in many program systems (Matlab, Maple, Mathematica and others).
The FFT provides opportunities for fast calculations of many practical tasks, for instance
- calculation of essential frequencies contained in a sampled signal,
- calculation of discrete convolutions with long filters,
- approximation of continuous convolutions, and
- calculation of signal correlations.
Remark 7.3. The input of the DFT could be a sampling of some continuous time signal \tilde f(t) with the special sampling period \Delta t = 1:

f_0 = \tilde f(0), \quad f_1 = \tilde f(1), \quad f_2 = \tilde f(2), \quad \dots, \quad f_{N-1} = \tilde f(N-1)

Put these uniformly spaced samples in a vector f:

f = (f_0, f_1, \dots, f_{N-1})   (7.1)

The chosen time duration of the continuous signal \tilde f(t) is in this special case T = N.
In principle the DFT implies an N-periodization of the discrete input signal f (theory):

f_k = f_{k+N} \qquad for all \ k \in \mathbb{Z}

So we would get

f = (\dots, f_{-2}, f_{-1}, f_0, f_1, \dots, f_{N-1}, f_N, f_{N+1}, \dots) = (\dots, f_{N-2}, f_{N-1}, f_0, f_1, \dots, f_{N-1}, f_0, f_1, \dots)

or especially in this example

f = (\dots, \tilde f(N-2), \tilde f(N-1), \tilde f(0), \tilde f(1), \dots, \tilde f(N-1), \tilde f(0), \tilde f(1), \dots)

With this N-periodization a T-periodization \tilde f_p(t) of \tilde f(t) is connected, which starts on the interval [0, T]. This results in \tilde f_p(t) = \tilde f_p(t + T) for all t.
If you know the values in one period, then all values are defined. So the next definition is reasonable.
With the imaginary unit i and \omega_N = e^{-\frac{2\pi i}{N}} as a primitive Nth root of 1, the following definition is given.
Definition 7.4 (DFT). The sequence of N complex numbers (vector with N components)

f = (f_0, f_1, \dots, f_{N-1})

is transformed into the sequence of N complex numbers

F = (F_0, F_1, \dots, F_{N-1})

by the DFT according to the formula

F_k = \sum_{n=0}^{N-1} f_n\,\omega_N^{kn} \qquad with \qquad \omega_N = e^{-\frac{2\pi i}{N}}, \qquad k = 0, \dots, N-1   (7.2)

This is sometimes shortened to

F_k = DFT\{f_n\}, \qquad k = 0, \dots, N-1, \quad with \quad n = 0, \dots, N-1.

The DFT (7.2) can also be written in the form

F_k = \sum_{n=0}^{N-1} f_n\,e^{-\frac{2\pi i}{N}kn}, \qquad k = 0, \dots, N-1.

Remark 7.5. In principle there is one DFT for every positive integer N, so this N must be carefully chosen at the beginning.
The DFT implies an N-periodization of its discrete output signal F as well (theory). From

\omega_N^{(k+N)n} = \omega_N^{kn}

you can verify that the formula (7.2) for the DFT is defined for all k \in \mathbb{Z} with F_{k+N} = F_k. So the frequency domain representation becomes also N-periodic.
Theorem 7.6. The inverse transform of (7.2) exists and is called the inverse discrete Fourier transform (IDFT). It is given by

f_n = \frac{1}{N}\sum_{k=0}^{N-1} F_k\,\omega_N^{-kn} \qquad with \qquad \omega_N = e^{-\frac{2\pi i}{N}}, \qquad n = 0, \dots, N-1   (7.3)

(7.3) is sometimes shortened to

f_n = IDFT\{F_k\}, \qquad k = 0, \dots, N-1, \quad with \quad n = 0, \dots, N-1.

This IDFT formula (7.3) can also be written in the form

f_n = \frac{1}{N}\sum_{k=0}^{N-1} F_k\,e^{\frac{2\pi i}{N}kn}, \qquad n = 0, \dots, N-1.
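A direct MATLAB implementation of (7.2) and (7.3) (illustrative sketch, not from the original notes) can be compared with the built-in fft and ifft, which use exactly this standard convention.

% Direct implementation of the DFT (7.2) and IDFT (7.3) vs. fft/ifft (sketch)
N  = 8;
f  = randn(1, N) + 1i*randn(1, N);     % arbitrary complex test vector
wN = exp(-2i*pi/N);

F  = zeros(1, N);
for k = 0:N-1
    F(k+1) = sum(f .* wN.^(k*(0:N-1)));               % formula (7.2)
end
fback = zeros(1, N);
for n = 0:N-1
    fback(n+1) = 1/N * sum(F .* wN.^(-(0:N-1)*n));    % formula (7.3)
end

fprintf('|F - fft(f)|      : %.2e\n', max(abs(F - fft(f))));
fprintf('|f - IDFT(DFT(f))|: %.2e\n', max(abs(f - fback)));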

Remark 7.7. Mostly the discrete time signal f is real valued. But the DFT output F can be a complex valued vector even if f is real valued.
From

\omega_N^{k(n+N)} = \omega_N^{kn}

it results that the formula (7.3) for the IDFT is defined for all n \in \mathbb{Z} with f_{n+N} = f_n. So the N-periodicity of the discrete time signals is confirmed by the IDFT.
Remark 7.8. With the Nth standard unit root \omega_N = e^{-\frac{2\pi i}{N}} you get the special Vandermonde matrix \mathbf{F} of size (N, N) and its general nth column \mathbf{F}(:, n) by

\mathbf{F} = \begin{pmatrix}
\omega_N^{0\cdot 0} & \omega_N^{0\cdot 1} & \omega_N^{0\cdot 2} & \dots & \omega_N^{0(N-1)} \\
\omega_N^{1\cdot 0} & \omega_N^{1\cdot 1} & \omega_N^{1\cdot 2} & \dots & \omega_N^{1(N-1)} \\
\omega_N^{2\cdot 0} & \omega_N^{2\cdot 1} & \omega_N^{2\cdot 2} & \dots & \omega_N^{2(N-1)} \\
\vdots & \vdots & \vdots & & \vdots \\
\omega_N^{(N-1)0} & \omega_N^{(N-1)1} & \omega_N^{(N-1)2} & \dots & \omega_N^{(N-1)(N-1)}
\end{pmatrix},
\qquad
\mathbf{F}(:, n) = \begin{pmatrix}
\omega_N^{0(n-1)} \\ \omega_N^{1(n-1)} \\ \omega_N^{2(n-1)} \\ \vdots \\ \omega_N^{(N-1)(n-1)}
\end{pmatrix}   (7.4)

The column vectors of the matrix \mathbf{F} form an orthogonal basis in the space \mathbb{C}^N of N-dimensional complex vectors. But this basis is not orthonormal. With the scalar product (4.13) the column vectors of \mathbf{F} satisfy

\langle \mathbf{F}(:, n), \mathbf{F}(:, m)\rangle = N\,\delta(n, m),

where \delta(n, m) is the Kronecker delta. A similar property holds for the rows of \mathbf{F}.
Hint: \mathbf{F}(:, m) is Matlab notation. In a similar way \mathbf{F}(k, :) denotes the row with the index k.
By using column vectors f and F for the time sampling and the corresponding frequency spectrum, the DFT (7.2) can be realized in matrix form:

F = \mathbf{F}\,f.

The matrix \mathbf{F} is symmetric. Its inverse can be calculated by the formula

\mathbf{F}^{-1} = \frac{1}{N}\,\mathbf{F}^{*},

in which \mathbf{F}^{*} is the adjoint matrix of \mathbf{F}. The adjoint matrix is also called the conjugate transpose matrix or Hermitian transpose matrix.
So you get for \mathbf{F}^{-1} the descriptive representation (note \overline{\omega_N^{kn}} = \omega_N^{-kn})

\mathbf{F}^{-1} = \frac{1}{N}\begin{pmatrix}
\omega_N^{-0\cdot 0} & \omega_N^{-0\cdot 1} & \omega_N^{-0\cdot 2} & \dots & \omega_N^{-0(N-1)} \\
\omega_N^{-1\cdot 0} & \omega_N^{-1\cdot 1} & \omega_N^{-1\cdot 2} & \dots & \omega_N^{-1(N-1)} \\
\vdots & \vdots & \vdots & & \vdots \\
\omega_N^{-(N-1)0} & \omega_N^{-(N-1)1} & \omega_N^{-(N-1)2} & \dots & \omega_N^{-(N-1)(N-1)}
\end{pmatrix}   (7.5)

The inverse discrete Fourier transform (7.3) can now be realized in matrix form:

f = \mathbf{F}^{-1}\,F.

Hint: If you have to calculate (7.4) or (7.5) for some N, for instance N = 4 or N = 5, then you can use

\omega_N^{p + kN} = \omega_N^{p} \qquad for all \ k \in \mathbb{Z}

to simplify the work with considerations modulo N.
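The matrix form can be checked directly in MATLAB (illustrative sketch, not part of the original notes). The function dftmtx from the Signal Processing Toolbox would produce the same matrix, but the explicit construction below needs no toolbox.

% DFT matrix (7.4), its inverse (7.5) and the matrix form of the DFT (sketch)
N  = 5;
wN = exp(-2i*pi/N);
[k, n] = ndgrid(0:N-1, 0:N-1);
F  = wN.^(k.*n);                 % Vandermonde / DFT matrix of size N x N

f    = (1:N)';                   % arbitrary test column vector
Fvec = F * f;                    % matrix form of (7.2)
Finv = 1/N * F';                 % F' is the conjugate (Hermitian) transpose

fprintf('|F*f - fft(f)| : %.2e\n', max(abs(Fvec - fft(f))));
fprintf('|Finv*Fvec - f|: %.2e\n', max(abs(Finv*Fvec - f)));
fprintf('|F''*F - N*I|   : %.2e\n', max(max(abs(F'*F - N*eye(N)))));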

By using the floor function (see Matlab help) we now define a special remainder function which results in non-negative integer values of one indexing period. Here N > 0 is the length of this indexing period:

(x)_N := x - \Bigl\lfloor \frac{x}{N} \Bigr\rfloor N, \qquad x \in \mathbb{Z}, \; N \in \mathbb{N}   (7.6)

(x)_N computes the unique non-negative remainder on division of the integer x by the positive integer N. It returns an integer r such that x = qN + r holds for some integer q. In addition, we have

r = (x)_N \qquad with \qquad 0 \le r < N.

The interpretation of (x - y)_N is clear: calculate first the difference x - y and then apply the remainder function.
With the function (7.6) we can define operations on infinite discrete periodic sequences by restricting them to one basic period of length N (vector with N components):

f_n = (f_0, f_1, \dots, f_{N-1}), \qquad n = 0, 1, \dots, N-1

Simple circular right shift of f_n:

f_{(n-1)_N} = (f_{N-1}, f_0, f_1, \dots, f_{N-2})

Simple circular left shift of f_n:

f_{(n+1)_N} = (f_1, f_2, \dots, f_{N-1}, f_0)

k-times circular right shift of f_n:

f_{(n-k)_N} = (f_{N-k}, f_{N-k+1}, \dots, f_0, \dots, f_{N-k-1})

k-times circular left shift of f_n:

f_{(n+k)_N} = (f_k, f_{k+1}, \dots, f_{k-2}, f_{k-1})

In the last two representations k > 1 is chosen.
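In MATLAB the remainder function (7.6) is the built-in mod, and circular shifts can be realized either with index arithmetic or with circshift (illustrative sketch, not from the original notes).

% Circular shifts of a length-N signal via mod / circshift (sketch)
f = [10 20 30 40 50];            % f_0 ... f_{N-1}
N = numel(f);
k = 2;                           % shift by k positions

n      = 0:N-1;
rightk = f(mod(n - k, N) + 1);   % k-times circular right shift, f_{(n-k)_N}
leftk  = f(mod(n + k, N) + 1);   % k-times circular left  shift, f_{(n+k)_N}

disp(rightk);                    % 40 50 10 20 30
disp(circshift(f,  k));          % same as rightk
disp(leftk);                     % 30 40 50 10 20
disp(circshift(f, -k));          % same as leftk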

7.2 Properties of the Discrete Fourier Transform

Let f_n, g_n, h_n be discrete time signals with the same length N and the index variable n = 0, 1, \dots, N-1. The corresponding DFT spectra F_k, G_k, H_k with the index variable k = 0, 1, \dots, N-1 are then also of length N. As above we use

\omega_N = e^{-\frac{2\pi i}{N}}.

Under these assumptions the following definitions and properties hold.

Definitions:
a) Circular convolution
We call h_n the circular convolution of f_n and g_n and write h_n = f_n \circledast g_n if

h_n = \sum_{m=0}^{N-1} f_m\,g_{(n-m)_N}   (7.7)

The circular convolution is a commutative operation.
If f_n and g_n do not have the same length, then define N = \max(N_f, N_g) with N_f = \mathrm{length}(f_n) and N_g = \mathrm{length}(g_n). Subsequently apply zero-padding to the shorter signal.
b) Reversal
The signal g_n = f_{(-n)_N} is called the reversal of f_n. The reversal is defined as the circular reversal. By this special notion it is the time-reversal. The frequency-reversal is defined similarly.
c) Circular symmetry
The signal f_n is called circular symmetric if

f_n = f_{(-n)_N}   (7.8)

Properties of the DFT:

Linearity of the DFT transform:

DFT\{\alpha f_n + \beta g_n\} = \alpha\,DFT\{f_n\} + \beta\,DFT\{g_n\}   (7.9)

Circular shift in the time domain corresponds to modulation in the frequency domain:

DFT\{f_{(n-m)_N}\} = \omega_N^{mk}\,F_k \qquad for every fixed \ m \in \mathbb{Z}   (7.10)

Modulation in the time domain corresponds to a circular shift in the frequency domain:

DFT\{\omega_N^{-mn}\,f_n\} = F_{(k-m)_N} \qquad for every fixed \ m \in \mathbb{Z}   (7.11)

Reversal in the time domain corresponds to reversal in the frequency domain:

DFT\{f_{(-n)_N}\} = F_{(-k)_N}   (7.12)

Complex conjugation in the time domain corresponds to complex conjugation of the reversal in the frequency domain:

DFT\{\overline{f_n}\} = \overline{F_{(-k)_N}}   (7.13)

If f_n is real valued then the conjugate spectrum is equal to the frequency-reversal:

\overline{F_k} = F_{(-k)_N}   (7.14)

If f_n is real valued and circular symmetric then F_k is too:

f_n \in \mathbb{R}, \quad f_n = f_{(-n)_N} \quad \Longrightarrow \quad F_k \in \mathbb{R}, \quad F_k = F_{(-k)_N}   (7.15)

Circular convolution in the time domain corresponds to multiplication in the frequency domain:

DFT\{f_n \circledast g_n\} = F_k\,G_k   (7.16)

Multiplication in the time domain corresponds to circular convolution in the frequency domain:

DFT\{f_n\,g_n\} = \frac{1}{N}\,F_k \circledast G_k   (7.17)

Parseval's theorem for the DFT:

\sum_{n=0}^{N-1} f_n\,\overline{g_n} = \frac{1}{N}\sum_{k=0}^{N-1} F_k\,\overline{G_k}   (7.18)
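The circular convolution (7.7) and property (7.16) can be verified with a few MATLAB lines (illustrative sketch, not from the original notes); cconv from the Signal Processing Toolbox would do the same job, but the explicit mod-index loop shows the definition.

% Circular convolution (7.7) and the DFT property (7.16)  (sketch)
f = [1 2 3 4];
g = [0 1 0 -1];
N = numel(f);

h = zeros(1, N);
for n = 0:N-1
    for m = 0:N-1
        h(n+1) = h(n+1) + f(m+1) * g(mod(n-m, N) + 1);   % formula (7.7)
    end
end

lhs = fft(h);              % DFT of the circular convolution
rhs = fft(f) .* fft(g);    % product of the spectra, property (7.16)
fprintf('max |DFT(f circ g) - F.*G| = %.2e\n', max(abs(lhs - rhs)));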

7.3 Some basic hints

Now we put together some basic essentials.

T is the time duration of the signal with continuous domain (practical choice).
[0, T] is the continuous interval in which the signal will be sampled. This sampling results in a discrete signal.
\Delta t is the time sampling period. It is the time between neighboring samples. This can also be considered as the time resolution of the measuring.
\nu_{Sa} = \frac{1}{\Delta t} is the sampling frequency.
The sampling frequency (also sampling rate or sample rate) defines the number of samples per unit of time (usually second, abbr.: sec) taken from a continuous signal to make a discrete signal. For time-domain signals the unit of the sampling rate is hertz (Hz = sec^{-1}). Sometimes the notion Sa/sec (samples per second) is used.
N is the number of sampling points:

t_0 = 0, \quad t_1 = \Delta t, \quad \dots, \quad t_{N-1} = (N-1)\,\Delta t

t_N = T = N\,\Delta t is not a member of the sampling domain. With t_N a new period starts (theory).
In Matlab the indexing of vectors starts with 1, so we get there

t[1] = 0, \quad t[2] = \Delta t, \quad \dots, \quad t[N] = (N-1)\,\Delta t

The measured values (of elongation, force, acceleration, velocity, et cetera) at the discrete points

f_0 = f(0), \quad f_1 = f(\Delta t), \quad \dots, \quad f_{N-1} = f((N-1)\,\Delta t)

become in Matlab indexing

f[1] = f(0), \quad f[2] = f(\Delta t), \quad \dots, \quad f[N] = f((N-1)\,\Delta t)

So you have to add 1 in the indexing if you implement the corresponding formulas.

Nyquist-Shannon sampling theorem
The perfect reconstruction of a signal f(t) from its sampled version f(k\,\Delta t) is possible when the sampling frequency is greater than twice the maximum frequency of the signal f(t).

\nu_{max} : maximum frequency contained in f(t)
T_m = \frac{1}{\nu_{max}} : the corresponding period duration
\nu_{Sa} = \frac{1}{\Delta t} : sampling frequency

The perfect reconstruction property is fulfilled if \nu_{Sa} > 2\,\nu_{max}, or if 2\,\Delta t < T_m.
For the aliasing problem and the definition of the Nyquist frequency, compare
http://en.wikipedia.org/wiki/Sampling_frequency
http://en.wikipedia.org/wiki/Nyquist_frequency
fft is a Matlab function for an FFT, a special algorithm for calculating the DFT. Application of fft to the vector

f = \{f[1], f[2], \dots, f[N]\}

results in a vector with complex components

fft(f) = \{F[1], F[2], \dots, F[N]\}.

The Matlab-implemented fft is connected with the standard definition of the DFT (7.2) and the properties formulated between (7.9) and (7.18).
For practical applications it is connected with the property that the discrete time signal f is sampled with the sampling period \Delta t = 1. The frequency resolution becomes in this standard case

\Delta\nu = \frac{1}{T} = \frac{1}{N\,\Delta t} = \frac{1}{N}.

But for general \Delta t we get for the frequency resolution

\Delta\nu = \frac{1}{T} = \frac{1}{N\,\Delta t}   (7.19)

7.4 Application of the DFT for a general sampling period

In most books and implemented programs the DFT is given without the sampling period \Delta t as scaling factor. In this case all relations are considered for the special case \Delta t = 1 only. For instance the FFT algorithm fft in Matlab is connected with \Delta t = 1. If you use Matlab, note

DFT\{f_n\} = fft(f_n) \qquad with formula (7.2).

We have also introduced this standard to avoid confusion.
But in the analysis of oscillating structures the scaling factor \Delta t is to be kept in mind. Otherwise you would get no realistic frequency amplitudes. This is especially important if you want to compare measurements with different sampling periods. Therefore we introduce now a \Delta t-scaled variant of the DFT.
For every fixed time sampling period \Delta t > 0 a DFT is given by

F_k = DFT\{f_n, \Delta t\} = \Delta t\sum_{n=0}^{N-1} f_n\,\omega_N^{kn} \qquad with \qquad \omega_N = e^{-\frac{2\pi i}{N}}, \qquad k = 0, \dots, N-1   (7.20)

The comparison with (7.2) provides

DFT\{f_n, 1\} = DFT\{f_n\}.

Generally the spectrum F_k is now scaled differently. For \Delta t < 1 the magnitudes |F_k| become smaller. This is more realistic if N is very large.
With the fft function in Matlab we can realize (7.20) by the scaling factor \Delta t:

DFT\{f_n, \Delta t\} = \Delta t\,DFT\{f_n\} = \Delta t\,fft(f_n).

The inverse transform of (7.20) is given by

f_n = IDFT\{F_k, \Delta\nu\} = \Delta\nu\sum_{k=0}^{N-1} F_k\,\omega_N^{-kn} \qquad with \qquad \Delta\nu = \frac{1}{T} = \frac{1}{N\,\Delta t}, \qquad n = 0, \dots, N-1   (7.21)

where \Delta\nu is the frequency resolution.


How do the DFT formulas (7.9) to (7.18) change if we replace the transformation pair (7.2)-(7.3) by (7.20)-(7.21)?
Properties of the \Delta t-scaled DFT (7.20):

Linearity of the DFT transform:

DFT\{\alpha f_n + \beta g_n, \Delta t\} = \alpha\,DFT\{f_n, \Delta t\} + \beta\,DFT\{g_n, \Delta t\}   (7.22)

Circular shift in the time domain corresponds to modulation in the frequency domain:

DFT\{f_{(n-m)_N}, \Delta t\} = \omega_N^{mk}\,F_k \qquad for every fixed \ m \in \mathbb{Z}   (7.23)

Modulation in the time domain corresponds to a circular shift in the frequency domain:

DFT\{\omega_N^{-mn}\,f_n, \Delta t\} = F_{(k-m)_N} \qquad for every fixed \ m \in \mathbb{Z}   (7.24)

Reversal in the time domain corresponds to reversal in the frequency domain:

DFT\{f_{(-n)_N}, \Delta t\} = F_{(-k)_N}   (7.25)

Complex conjugation in the time domain corresponds to complex conjugation of the reversal in the frequency domain:

DFT\{\overline{f_n}, \Delta t\} = \overline{F_{(-k)_N}}   (7.26)

If f_n is real valued then the conjugate spectrum is equal to the frequency-reversal:

\overline{F_k} = F_{(-k)_N}   (7.27)

If f_n is real valued and circular symmetric then F_k is too:

f_n \in \mathbb{R}, \quad f_n = f_{(-n)_N} \quad \Longrightarrow \quad F_k \in \mathbb{R}, \quad F_k = F_{(-k)_N}   (7.28)

Circular convolution in the time domain corresponds to multiplication in the frequency domain:

\Delta t\;DFT\{f_n \circledast g_n, \Delta t\} = F_k\,G_k   (7.29)

Multiplication in the time domain corresponds to circular convolution in the frequency domain:

DFT\{f_n\,g_n, \Delta t\} = \Delta\nu\;F_k \circledast G_k \qquad with \qquad \Delta\nu = \frac{1}{T} = \frac{1}{N\,\Delta t}   (7.30)

Parseval's theorem for the DFT:

\Delta t\sum_{n=0}^{N-1} f_n\,\overline{g_n} = \Delta\nu\sum_{k=0}^{N-1} F_k\,\overline{G_k} \qquad with \qquad \Delta\nu = \frac{1}{T} = \frac{1}{N\,\Delta t}   (7.31)

If we change the scaling in the standard DFT (7.2) by the factor \Delta t, then, compared with the set of properties (7.9) to (7.18), only the last three formulas (7.29), (7.30) and (7.31) change.
Interpretation of the DFT spectrum for even N
With N_y = \frac{N}{2} the Nyquist frequency becomes

\nu_{ny} = N_y\,\Delta\nu = \frac{N}{2}\cdot\frac{1}{N\,\Delta t} = \frac{1}{2\,\Delta t} = \frac{\nu_{Sa}}{2} \qquad with \qquad \nu_{Sa} = \frac{1}{\Delta t}.

The Nyquist frequency is half of the sampling frequency. You can now interpret the spectrum F_k of a \Delta t-sampled time signal f_n by

F_0 = F(0), \quad F_1 = F(\Delta\nu), \quad \dots, \quad F_{N_y-1} = F((N_y-1)\,\Delta\nu), \quad F_{N_y} = F(N_y\,\Delta\nu) = F(\nu_{ny}),

F_{N_y+1} = F((1-N_y)\,\Delta\nu), \quad F_{N_y+2} = F((2-N_y)\,\Delta\nu), \quad \dots, \quad F_{N-2} = F(-2\,\Delta\nu), \quad F_{N-1} = F(-\Delta\nu).

So for k > N_y = \frac{N}{2} the spectral values F_k are connected with the negative frequencies

(1-N_y)\,\Delta\nu, \quad (2-N_y)\,\Delta\nu, \quad \dots, \quad -2\,\Delta\nu, \quad -\Delta\nu.

This is connected with the N-periodicity of F_k.
In Matlab indexing you get

F[1] = F(0), \quad F[2] = F(\Delta\nu), \quad \dots, \quad F[N_y+1] = F(\nu_{ny}), \quad F[N_y+2] = F((1-N_y)\,\Delta\nu), \quad \dots, \quad F[N] = F(-\Delta\nu).

Remark 7.9. Generally the sampled time signals are real in our applications. By (7.27) you then get

\overline{F_k} = F_{(-k)_N}.

In the N-periodization this is connected with an even real part and an odd imaginary part. Now all frequency information is contained in the left half of the \Delta t-scaled output spectrum of Matlab:

\Delta t\,F[1] = F_0, \quad \Delta t\,F[2] = F_1, \quad \dots, \quad \Delta t\,F[N_y+1] = F_{N_y} \qquad with \qquad N_y = \frac{N}{2},

F_0 = F(0), \quad F_1 = F(\Delta\nu), \quad \dots, \quad F_{N_y} = F(N_y\,\Delta\nu) = F(\nu_{ny}).

Often only sections of this half-part spectrum are plotted to get a better visual frequency resolution.
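Putting the pieces together, a typical MATLAB workflow for the \Delta t-scaled one-sided amplitude spectrum of a real measurement could look as follows (illustrative sketch, not from the original notes; the damped-cosine test signal and all parameter values are chosen freely, following the academic Example 7.1).

% Delta-t-scaled DFT of a sampled real signal and its one-sided spectrum (sketch)
dt = 0.01;  T = 10;  N = T/dt;          % sampling period, duration, N = 1000
t  = (0:N-1) * dt;                      % t_0 ... t_{N-1}
f  = exp(-3*t) .* cos(5*t - pi/6);      % test signal, compare Example 7.1

F   = dt * fft(f);                      % Delta-t-scaled DFT, formula (7.20)
dnu = 1/(N*dt);                         % frequency resolution (7.19)
Ny  = N/2;
nu  = (0:Ny) * dnu;                     % frequencies 0 ... Nyquist frequency

amp = abs(F(1:Ny+1));                   % one-sided amplitude spectrum
phs = atan2(imag(F(1:Ny+1)), real(F(1:Ny+1)));   % phase spectrum

plot(nu, amp);  xlabel('\nu in Hz');  ylabel('|F(\nu)|');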

81

Remark 7.10. In applications it is often necessary to filter Δt-sampled time signals g_n of finite length N. Such a filtering can be realized by circular convolution:

g_n ↦ f_n ⊛ g_n

In the spectral domain of the Δt-scaled DFT the filter corresponds to a pointwise multiplication:

G_k ↦ (1/Δt) F_k G_k

Compare (7.29). If f_n is real valued and circular symmetric, then so is the spectrum F_k; compare (7.28). By using the sampling function sinc and circular shifting in the time domain you can construct circular symmetric filters f_n with the property

F_k = 1/Δt   for 0 ≤ k ≤ Nf and for N−Nf ≤ k ≤ N−1,
F_k = 0      for Nf < k < N−Nf,

with Nf < N/2. With the corresponding time representation

h_n = (1/Δt) f_n

you can now realize an ideal distortion-free low-pass filtering in a computer program. Distortion-free means that the phase spectrum of the signal G_k is not changed by the filtering. The corresponding frequency band (pass band) is given by

[0, Nf Δf].

Verify that the spectrum H_k = (1/Δt) F_k is circular symmetric.
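As an illustration, the following sketch applies such an ideal low pass directly in the frequency domain. This is a simplified variant of the construction above: the circular symmetric indicator of the pass band is multiplied pointwise onto G_k; the cut-off index Nf, the test signal and all names are assumptions.

% Ideal (distortion-free) low-pass filtering of a Delta t-sampled signal (sketch)
N  = 256;  dt = 0.01;  df = 1/(N*dt);  Nf = 20;    % pass band [0, Nf*df], about [0, 7.8] Hz
t  = (0:N-1)*dt;
gn = sin(2*pi*5*t) + 0.8*sin(2*pi*30*t);           % 5 Hz is kept, 30 Hz is removed

k   = 0:N-1;
ind = (k <= Nf) | (k >= N-Nf);                     % circular symmetric pass-band indicator
Gk  = dt * fft(gn);                                % Delta t-scaled spectrum of the signal
Gk_filt = ind .* Gk;                               % values outside the band set to zero,
                                                   % phases inside the band left unchanged
gn_filt = real(ifft(Gk_filt) / dt);                % filtered signal in the time domain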

The real part (Re or ℜ) and the imaginary part (Im or ℑ) of this discrete spectrum, the amplitude spectrum (magnitude) and the phase spectrum (argument) are defined similarly to the case of the continuous Fourier transform.

Amplitude spectrum:   |F| = ( |F_0|, |F_1|, …, |F_{N−1}| )   with   |F_k| = sqrt( [ℜ(F_k)]² + [ℑ(F_k)]² )

Phase spectrum:   φ = arg(F) = ( φ_0, φ_1, …, φ_{N−1} )   with   φ_k = arg(F_k) = atan2( ℑ(F_k), ℜ(F_k) )

Here atan2 is the two-argument form of the arctan function, see
http://en.wikipedia.org/wiki/Atan2.
For considering energy relations between the time domain and the frequency domain, use Parseval's equation for the scaled DFT (7.31).
Remember: Δt = 1 corresponds to the standard formulation.
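A sketch computing both spectra with atan2; the test signal and all names are assumptions, and Matlab's angle returns the same phase values.

% Amplitude and phase spectrum of a Delta t-scaled DFT (sketch)
N  = 8;  dt = 0.25;
fn = randn(1, N);
Fk = dt * fft(fn);

ampl  = abs(Fk);                           % sqrt(Re^2 + Im^2)
phase = atan2(imag(Fk), real(Fk));         % two-argument arctan
max(abs(phase - angle(Fk)))                % identical to Matlab's angle, up to rounding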


Lecture Graphs - Part 1

Example 1.1
Example 1.2
Example 1.3
Example 1.4
Example 1.5
Example 1.6
Example 1.7
Example 1.8
Example 1.9
Example 1.10

Lecture Graphs - Part 2

Example 2.1
Example 2.2
Example 2.3
Example 2.4
Example 2.5
Example 2.6
Example 2.7
Example 2.8
Example 2.9
Example 2.10
Example 2.11
Example 2.12
Example 2.13

Lecture Graphs - Part 3

Example 3.1
Example 3.2
Example 3.3

