Applied Mathematics
This book is based on lecture notes by Professor Lars-Erik Persson, from a course in applied mathematics given at Luleå Technical University and Uppsala University. The course has been given for graduate students in areas outside mathematics, but the Internet version has been aimed at "gymnasielärare" (approximately college teachers) in mathematics.
The only prerequisites are the basic standard mathematics courses at university level, i.e. basic algebra, linear algebra and analysis.
The blue links point to external sources outside this document, and we are not responsible for the availability and content of those pages. At present there is only a very limited selection of exercises, but that will improve soon.
Fredrik Strömberg – Responsible for this document (and lectures appearing here).
Johan Byström – Responsible for the rest of the lectures available on the web.
Lars-Erik Persson – Inventor of the course and source of inspiration for these notes.
CHAPTER 4
Introduction to Partial Differential Equations
♦
Example 4.2. (The inhomogeneous one-dimensional heat conduction equation)
Suppose that we have the same system as in the previous example, but that we also add the heat
v(x,t) at the point x and time t (see Fig. 4.1.2). In this case u(x,t) is described by the inhomogeneous
heat conduction equation:
$$u'_t - k\,u''_{xx} = v(x,t), \quad t > 0,\ 0 < x < l,$$
u(x, 0) = f (x), 0 < x < l,
u(0,t) = h(t), t > 0,
u(l,t) = g(t), t > 0.
♦
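As a complement, a problem of this form can also be approximated numerically. Below is a minimal explicit finite-difference sketch (not from the text; the functions f, g, h, v and all parameters are illustrative choices), tested on the homogeneous case with the known solution u = sin(πx)e^{−π²kt}:

```python
import numpy as np

def solve_heat_1d(f, g, h, v, k=1.0, l=1.0, T=0.1, nx=50, nt=2000):
    """Explicit finite differences for u'_t - k u''_xx = v(x,t),
    u(x,0) = f(x), u(0,t) = h(t), u(l,t) = g(t)."""
    x = np.linspace(0.0, l, nx)
    dx = x[1] - x[0]
    dt = T / nt
    assert k * dt / dx**2 <= 0.5, "stability condition for the explicit scheme"
    u = f(x).astype(float)
    for n in range(nt):
        t = n * dt
        # interior update: forward Euler in time, central differences in space
        u[1:-1] = u[1:-1] + dt * (k * (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
                                  + v(x[1:-1], t))
        u[0], u[-1] = h(t + dt), g(t + dt)   # boundary values
    return x, u

# Homogeneous test case: u(x,t) = sin(pi x) exp(-pi^2 k t)
x, u = solve_heat_1d(f=lambda x: np.sin(np.pi * x),
                     g=lambda t: 0.0, h=lambda t: 0.0,
                     v=lambda x, t: 0.0, k=1.0, l=1.0, T=0.1)
exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * 0.1)
```

The inhomogeneous term v and time-dependent boundary data h, g plug into the same loop unchanged.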
4. INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS
[Figure: a two-dimensional region D with boundary ∂D and an interior point (x, y).]
♦
Example 4.4. (The three-dimensional heat conduction equation)
We now consider heat conduction in a three-dimensional region V . We use the same notation
as above, with the addition of a z-coordinate. Then u(x, y, z,t) is described by the three-dimensional
heat conduction equation:
(4.1.1) $u'_t - \operatorname{div}(k\,\operatorname{grad} u) = v(x, y, z, t)$, (x, y, z) ∈ V, t > 0,
u(x, y, z, 0) = f (x, y, z), (x, y, z) ∈ V,
u(x, y, z,t) = g(x, y, z), (x, y, z) ∈ ∂V, t > 0.
♦
REMARK 1. Note that the gradient "grad" of the function u(x, y, z) is given by the vector
$$\operatorname{grad} u = \nabla u = \left(u'_x, u'_y, u'_z\right) = \left(\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial u}{\partial z}\right) = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right)u.$$
If $\nabla$ is written as
$$\nabla = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right),$$
the divergence "div" of a vector field $\vec{F} = (F_x, F_y, F_z)$ is given by
$$\operatorname{div}\vec{F} = \nabla\cdot\vec{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}.$$
Thus, the divergence of the gradient of u(x, y, z) is given by
$$\operatorname{div}(\operatorname{grad} u) = \nabla\cdot\nabla u = \nabla^2 u = \Delta u = u''_{xx} + u''_{yy} + u''_{zz}.$$
Hence, if $k = k(x, y, z) = k_0$ is constant, (4.1.1) can be written as
$$u'_t - k_0\left(u''_{xx} + u''_{yy} + u''_{zz}\right) = v \quad\Leftrightarrow\quad u'_t - k_0\Delta u = v.$$
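The identities above can be checked symbolically. A small sketch using SymPy (the generic function u is just a placeholder):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.Function('u')(x, y, z)

# grad u = (u'_x, u'_y, u'_z)
grad_u = [sp.diff(u, s) for s in (x, y, z)]
# div(grad u): differentiate each component once more and sum
div_grad_u = sum(sp.diff(g, s) for g, s in zip(grad_u, (x, y, z)))
# Laplacian computed directly
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)

check = sp.simplify(div_grad_u - laplacian)   # should be 0
```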
Since the equations are the same, all methods we consider here for solving the heat equation in various
cases can also be applied to these alternative diffusion problems. Another PDE which is as important as
the diffusion equation is the wave equation, which we will now consider in some examples.
[Figures: the temperature profile u(x, t) on the rod 0 ≤ x ≤ l, and a region D in the (x, y)-plane with boundary ∂D.]
∆ is usually called the Laplace operator, or simply the Laplacian, and is of great importance in both pure and applied mathematics. The solution u(x, y) of (4.1.2) describes the temperature at the point (x, y) after thermal equilibrium has been reached. This is usually called a stationary solution to the heat conduction problem.
$$u''_{xx} + u''_{yy} = f \;\Leftrightarrow\; \nabla^2 u = f \;\Leftrightarrow\; \Delta u = f.$$
4.2. A GENERAL PARTIAL DIFFERENTIAL EQUATION OF THE SECOND ORDER
Here $u'_t = 0$ and $v(x, y, t) = -\frac{1}{k}\,f(x, y)$ in Example 4.3, so the Poisson equation can be interpreted as the inhomogeneous heat conduction equation at thermal equilibrium ($u'_t = 0$), where we at all times t add the heat f(x, y) to the point (x, y).
REMARK 3. If the added heat in the examples above has a negative sign, the obvious physical interpretation is that we cool down the system.
Example 4.10. The problems in Examples 4.1-4.6 have unique solutions, but the problems in Example
4.7-4.9 do not have unique solutions.
REMARK 4. A PDE of the type (4.2.1) usually has an infinite number of solutions, and the general solution depends on a number of arbitrary functions (to be compared with the fact that solutions to ODEs usually depend on arbitrary constants).
♦
REMARK 5. A solution u(x, y) to the Laplace equation is called a harmonic function. To find harmonic functions one can use the fact that if f(z) = f(x + iy) is an analytic function (synonymously: holomorphic; if f is analytic in the whole plane it is called entire), i.e. if $\frac{d}{dz}f(z)$ exists, then the real part u(x, y) = ℜ f(x + iy) and the imaginary part v(x, y) = ℑ f(x + iy) of f are both harmonic functions.
In the examples above we used f(z) = z², e^z and log z², respectively.
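This fact is easy to test symbolically. A sketch with SymPy, using the first two example functions z² and e^z (log z² is skipped here to avoid branch-cut issues):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y

def is_harmonic(w):
    """Check that w_xx + w_yy = 0."""
    return sp.simplify(sp.diff(w, x, 2) + sp.diff(w, y, 2)) == 0

results = []
for f in (z**2, sp.exp(z)):
    w = sp.expand(f, complex=True)         # split into real + I*imaginary
    results.append(is_harmonic(sp.re(w)) and is_harmonic(sp.im(w)))
```

For instance, ℜ(z²) = x² − y² and ℑ(z²) = 2xy are both harmonic.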
D EFINITION 4.1. We say that the PDE (*) is linear if the operator L has the properties
(1) L(u + v) = Lu + Lv,
(2) L(cu) = cLu.
If these conditions are not both satisfied we say that (*) is non-linear.
Example 4.15. The heat conduction equation in Example 4.13 is linear.
Proof: We must verify that $L = \frac{\partial}{\partial t} - k\frac{\partial^2}{\partial x^2}$ satisfies (1) and (2) above.
(1) $L(u+v) = \frac{\partial(u+v)}{\partial t} - k\frac{\partial^2(u+v)}{\partial x^2} = \left(\frac{\partial u}{\partial t} - k\frac{\partial^2 u}{\partial x^2}\right) + \left(\frac{\partial v}{\partial t} - k\frac{\partial^2 v}{\partial x^2}\right) = Lu + Lv.$
4.4. CLASSIFICATION OF PDES
(2) $L(cu) = \frac{\partial(cu)}{\partial t} - k\frac{\partial^2(cu)}{\partial x^2} = c\frac{\partial u}{\partial t} - kc\frac{\partial^2 u}{\partial x^2} = c\left(\frac{\partial u}{\partial t} - k\frac{\partial^2 u}{\partial x^2}\right) = cLu.$
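The two properties can also be verified symbolically; a brief SymPy sketch of the same computation:

```python
import sympy as sp

x, t, c, k = sp.symbols('x t c k')
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)

# The heat conduction operator L = d/dt - k d^2/dx^2
L = lambda w: sp.diff(w, t) - k * sp.diff(w, x, 2)

additivity = sp.simplify(L(u + v) - (L(u) + L(v)))   # property (1)
homogeneity = sp.simplify(L(c * u) - c * L(u))       # property (2)
```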
Consider a linear and homogeneous (i.e. the right hand side is 0) PDE:
(*) Lu = 0.
Suppose that u1 , u2 , . . . are solutions of (*) and that u is a finite linear combination of these:
u = c1 u1 + c2 u2 + · · · + cn un .
Then u is also a solution to (*) since
Lu = L (c1 u1 + · · · + cn un ) = c1 Lu1 + · · · + cn Lun = 0 + · · · + 0 = 0.
This is called the superposition principle and is true also for infinite sums:
u = c1 u1 + c2 u2 + · · · + cn un + · · · ,
provided that certain convergence properties hold1.
The continuous superposition principle:
Assume that uα (x,t) satisfies Luα = 0 for all α, a ≤ α ≤ b, and define
$$u(x,t) = \int_a^b c(\alpha)\,u_\alpha(x,t)\,d\alpha,$$
where c(α) is an arbitrary (integrable) function. Then
Lu = 0.
Proof: Assuming that we may interchange L and the integral,
$$Lu = L\left(\int_a^b c(\alpha)\,u_\alpha(x,t)\,d\alpha\right) = \int_a^b c(\alpha)\,Lu_\alpha(x,t)\,d\alpha = \int_a^b c(\alpha)\cdot 0\,d\alpha = 0.$$
Example 4.20. It is easy to verify that for each −∞ < α < ∞, the function
$$u_\alpha(x,t) = \frac{1}{\sqrt{4\pi kt}}\exp\left(-\frac{(x-\alpha)^2}{4kt}\right)$$
satisfies the heat conduction equation
$$u'_t - ku''_{xx} = 0.$$
Hence this equation is also satisfied by the function
$$u(x,t) = \frac{1}{\sqrt{4\pi kt}}\int_{-\infty}^{\infty} c(\alpha)\exp\left(-\frac{(x-\alpha)^2}{4kt}\right)d\alpha,$$
for any integrable function c(α).
n n
1E.g. if we have uniform convergence in: s (x) = u (x) → u, s0 (x) = u0 (x) → u0 , etc. for all occurring derivatives.
n ∑ j n ∑ j
1 1
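One can also check numerically that u_α satisfies the heat equation; a sketch using central finite differences at an arbitrarily chosen sample point (all the numbers below are illustrative):

```python
import numpy as np

def heat_kernel(x, t, alpha, k=1.0):
    """u_alpha(x,t) = exp(-(x - alpha)^2 / (4kt)) / sqrt(4 pi k t)."""
    return np.exp(-(x - alpha)**2 / (4.0*k*t)) / np.sqrt(4.0*np.pi*k*t)

# Central-difference check of u'_t - k u''_xx = 0 at one point.
k, alpha, x0, t0, h = 1.0, 0.3, 0.7, 0.5, 1e-3
u_t = (heat_kernel(x0, t0 + h, alpha, k)
       - heat_kernel(x0, t0 - h, alpha, k)) / (2.0*h)
u_xx = (heat_kernel(x0 + h, t0, alpha, k) - 2.0*heat_kernel(x0, t0, alpha, k)
        + heat_kernel(x0 - h, t0, alpha, k)) / h**2
residual = u_t - k * u_xx   # should be near 0 up to discretization error
```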
4.6. WELL-POSED PROBLEMS
Example 4.21. Consider the initial-value problem which consists of the equation
$$u''_{tt} + u''_{xx} = 0, \quad t > 0,\ -\infty < x < \infty,$$
together with the initial values
(4.6.1) $u(x, 0) = 0$, $u'_t(x, 0) = 0$, $-\infty < x < \infty$.
The unique solution is given by the function which is constant 0:
$$u(x,t) \equiv 0, \quad t \ge 0,\ -\infty < x < \infty.$$
Let us now make a little perturbation of the initial values (4.6.1):
(4.6.2) $u(x, 0) = 0$, $u'_t(x, 0) = 10^{-4}\sin\left(10^4 x\right)$.
The solution to this new problem is given by
$$u(x,t) = 10^{-8}\sin\left(10^4 x\right)\sinh\left(10^4 t\right).$$
For large t we know that $\sinh\left(10^4 t\right)$ is approximately $\frac{1}{2}\exp\left(10^4 t\right)$. The tiny change in the initial values gave rise to a change in the solution from the constant 0 to a function which grows exponentially (from the sinh factor) and oscillates rapidly (from the sine factor). A really dramatic change! This implies that the solution is not stable, and hence the problem is ill-posed.
♦
where f ∈ C [0, l] and g, h ∈ C [0, T ], has a unique solution, u(x,t), in the rectangle
R : 0 ≤ x ≤ l, 0 ≤ t ≤ T.
Solution: Later on we will construct a solution to this problem (in Example 5.9)!
But for now, assume that we have two different solutions to the problem: u1 (x,t) and u2 (x,t). It
is then clear that the function
w(x,t) = u1 (x,t) − u2 (x,t)
must satisfy the boundary-value problem:
$$w'_t - kw''_{xx} = 0, \quad 0 < x < l,\ 0 < t < T,$$
$$w(x, 0) = 0, \quad 0 < x < l,$$
$$w(0,t) = w(l,t) = 0, \quad 0 < t < T.$$
Consider a function f(x), −l < x < l. The Fourier coefficients of f are defined as
$$a_0 = \frac{1}{2l}\int_{-l}^{l} f(x)\,dx,$$
$$a_n = \frac{1}{l}\int_{-l}^{l} f(x)\cos\frac{n\pi x}{l}\,dx, \quad n = 1, 2, \ldots,$$
$$b_n = \frac{1}{l}\int_{-l}^{l} f(x)\sin\frac{n\pi x}{l}\,dx, \quad n = 1, 2, \ldots,$$
and the Fourier series of f is defined by
$$S(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{l} + b_n\sin\frac{n\pi x}{l}\right).$$
For a more detailed discussion of Fourier series see Section 6.1. See also Fig. 4.7.
Assume that f (x) is infinitely many times differentiable in the interval −l < x < l, except for a number
of discontinuity points. Then we have:
(a) S (x) = S (x + 2l), for all x.
(b) S (x) = f (x) at the points where f is continuous,
(c) $S(x) = \frac{1}{2}\left[f(x+) + f(x-)\right]$ at points of discontinuity².
²Here $f(x+) = \lim_{y\to x} f(y)$, where we keep y > x as we take the limit, and we define f(x−) similarly.
4.7. SOME REMARKS ON FOURIER SERIES
[Fig. 4.7: a function f(x) on (−l, l), and its Fourier series S(x), which extends f periodically with period 2l.]
When making a graph of a discontinuous function it is customary to indicate the value which is attained
by the function with a filled circle and the value which is not attained by an unfilled circle.
$$f(x) = \begin{cases} k, & 0 < x < l,\\ -k, & -l < x \le 0.\end{cases}$$
[Figure: the square wave f(x), jumping between −k and k, shown on (−π, π).]
area cancels the "positive" area), hence $a_0 = a_n = 0$ for all n. And we have
$$b_n = \frac{1}{l}\int_{-l}^{l} f(x)\sin\frac{n\pi x}{l}\,dx = \frac{1}{l}\left(\int_{-l}^{0}(-k)\sin\frac{n\pi x}{l}\,dx + \int_0^l k\sin\frac{n\pi x}{l}\,dx\right) = \frac{2k}{l}\int_0^l\sin\frac{n\pi x}{l}\,dx$$
$$= \frac{2k}{l}\left[-\frac{l}{n\pi}\cos\frac{n\pi x}{l}\right]_0^l = \frac{2k}{n\pi}\left(1 - \cos n\pi\right) = \frac{2k}{n\pi}\left(1 - (-1)^n\right).$$
I.e.
$$b_1 = \frac{4k}{\pi},\quad b_2 = 0,\quad b_3 = \frac{4k}{3\pi},\quad b_4 = 0,\quad b_5 = \frac{4k}{5\pi},\ \ldots,$$
and
$$S(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi x}{l} = \frac{4k}{\pi}\sum_{m=0}^{\infty}\frac{1}{2m+1}\sin\frac{(2m+1)\pi x}{l} = \frac{4k}{\pi}\left(\sin\frac{\pi x}{l} + \frac{1}{3}\sin\frac{3\pi x}{l} + \frac{1}{5}\sin\frac{5\pi x}{l} + \cdots\right).$$
See Fig. 4.7.2 for an illustration of some of the partial sums (containing only a finite number of terms) of S(x).
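The convergence of these partial sums can be observed numerically; a small sketch with l = π and k = 1 (the truncation length N is an arbitrary choice):

```python
import numpy as np

def partial_sum(x, N, k=1.0, l=np.pi):
    """S_N(x) = (4k/pi) * sum_{m=0}^{N-1} sin((2m+1) pi x / l) / (2m+1)."""
    s = np.zeros_like(np.asarray(x, dtype=float))
    for m in range(N):
        n = 2*m + 1
        s = s + (4.0*k/np.pi) * np.sin(n*np.pi*x/l) / n
    return s

# At x = pi/2, inside (0, l) with l = pi, the series should approach f(x) = k = 1.
approx = float(partial_sum(np.pi/2, N=2000))
```

At the midpoint the truncated series is an alternating sum, so the error is bounded by the first omitted term.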
4.8. SEPARATION OF VARIABLES
[Fig. 4.7.2: the square wave together with the partial sums S₁(x), S₂(x) and S₃(x), and the individual terms (4k/3π) sin 3x and (4k/5π) sin 5x.]
Separation of variables is a common method to solve certain types of PDEs. Since it originated from an
idea of Fourier it is also sometimes called Fourier’s method.
What we mean by separating the variables in (1) is to seek a solution u(x,t) which can be factored as
u(x,t) = X(x)T (t),
where X(x) and T (t) are functions depending only on x and t respectively. Assume now that we can write
u in this way. If we differentiate u = XT we get ut0 (x,t) = X(x)T 0 (t) and u00xx (x,t) = X 00 (x)T (t), and if we
substitute these expressions into (1) we get the equation:
X(x)T 0 (t) − kX 00 (x)T (t) = 0,
which can be rewritten as
$$\frac{T'(t)}{T(t)} = \frac{1}{k}\,\frac{X''(x)}{X(x)}.$$
We see that the left hand side is a function of t only and the right hand side is a function of x only. Hence, the only possibility is that both sides equal a constant:
$$\frac{T'(t)}{T(t)} = \frac{1}{k}\,\frac{X''(x)}{X(x)} = -\lambda,$$
for some constant λ (which we have to determine later). Instead of the PDE (1) we now have two ODEs:
$$\begin{cases} T'(t) = -\lambda k\,T(t),\\ X''(x) = -\lambda X(x),\end{cases}$$
with the general solutions
$$T(t) = Ce^{-\lambda kt}, \quad\text{and}\quad X(x) = A\sin\left(\sqrt{\lambda}\,x\right) + B\cos\left(\sqrt{\lambda}\,x\right).$$
The boundary values (3) imply that either T ≡ 0 or X(0) = X(l) = 0. Since the first alternative only gives us the solution which is constant 0, we see that X must satisfy the boundary conditions X(0) = X(l) = 0. The first condition gives
$$X(0) = B = 0,$$
and hence
$$X(l) = A\sin\left(\sqrt{\lambda}\,l\right) = 0.$$
To once again avoid the trivial solution X ≡ 0 (i.e. with A = 0) we must have $\sin\left(\sqrt{\lambda}\,l\right) = 0$, which implies that
$$\sqrt{\lambda}\,l = n\pi, \quad n \in \mathbb{Z}_+,$$
or equivalently
$$\lambda = \frac{n^2\pi^2}{l^2},$$
for some positive integer n.
We have shown that if a solution to (1) can be factored as X(x)T(t) then it can be written as
$$K\sin\left(\frac{n\pi}{l}x\right)\exp\left(-\frac{n^2\pi^2 kt}{l^2}\right),$$
where n is a positive integer and K a constant. By the superposition principle (Section 4.5) the general solution to (1) satisfying the boundary values (3) can be written as
$$u(x,t) = \sum_{n=1}^{\infty} b_n\sin\left(\frac{n\pi}{l}x\right)\exp\left(-\frac{n^2\pi^2 kt}{l^2}\right),$$
and
$$(*)\quad u(x, 0) = f(x) = \sum_{n=1}^{\infty} b_n\sin\left(\frac{n\pi}{l}x\right).$$
Let us for simplicity assume that l = π and consider some examples of initial values f (x) in the above
model example.
Example 4.24. Let f(x) = 2 sin x + 4 sin 3x. Then (*) is satisfied if b₁ = 2, b₂ = 0, b₃ = 4, b₄ = b₅ = · · · = 0. Hence, the solution to the model example is
$$u(x,t) = 2\sin(x)e^{-kt} + 4\sin(3x)e^{-9kt}.$$
♦
Example 4.25. Let $f(x) = 1 = \frac{4}{\pi}\left(\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \cdots\right)$. Then (*) is satisfied if $b_1 = \frac{4}{\pi}$, $b_2 = 0$, $b_3 = \frac{4}{\pi}\cdot\frac{1}{3}$, $b_4 = 0$, $b_5 = \frac{4}{\pi}\cdot\frac{1}{5}$, $b_6 = 0$, etc. In this case, the solution to the model example is given by
$$u(x,t) = \frac{4}{\pi}\left(\sin(x)e^{-kt} + \frac{1}{3}\sin(3x)e^{-9kt} + \frac{1}{5}\sin(5x)e^{-25kt} + \cdots\right) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{1}{2n-1}\sin\left((2n-1)x\right)e^{-(2n-1)^2 kt}.$$
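The series in Example 4.25 is easy to evaluate numerically; a sketch (the truncation length and sample points are arbitrary choices):

```python
import numpy as np

def u_series(x, t, k=1.0, n_terms=500):
    """u(x,t) = (4/pi) sum_{n>=1} sin((2n-1)x) exp(-(2n-1)^2 k t) / (2n-1)."""
    total = 0.0
    for n in range(1, n_terms + 1):
        m = 2*n - 1
        total += np.sin(m*x) * np.exp(-m**2 * k * t) / m
    return 4.0/np.pi * total

u0 = u_series(np.pi/2, 0.0)        # at t = 0 this should be close to f = 1
u1 = u_series(np.pi/2, 1.0)        # for large t the first term dominates
lead = 4.0/np.pi * np.exp(-1.0)    # first term of the series at x = pi/2, t = 1
```

Note how the exponential factors make the higher modes die out almost immediately, so the long-time behavior is governed by the n = 1 term.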
Example 4.26. If we have an arbitrary initial-value function f(x), 0 ≤ x ≤ π, the solution to the model example is given by
$$u(x,t) = \sum_{n=1}^{\infty} b_n\sin(nx)\exp\left(-n^2 kt\right),$$
where
$$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f_u(x)\sin nx\,dx = \frac{2}{\pi}\int_0^{\pi} f(x)\sin nx\,dx.$$
Here $f_u(x)$ is the extension of f(x) to an odd function on the interval −π < x < π, i.e. $f_u(x) = f(x)$ if x > 0 and $f_u(x) = -f(-x)$ if x ≤ 0 (cf. Fig. 4.8.1).
[Fig. 4.8.1: f(x) on (0, π) and its odd extension f_u(x) to (−π, π).]
4.9. Exercises
4.1. [S] Determine, for each of the following differential equations, if it is linear or non-linear:
a) $u'_t(x,t) + x^2 u''_{xx}(x,t) = 0$.
b) $\frac{\partial^2 u}{\partial t^2} + u\frac{\partial u}{\partial x} = f(x,t)$.
c) $u\Delta u - u'_t = 0$.
d) $\frac{\partial^3 u}{\partial t^3} + \frac{\partial^2 u}{\partial t^2} + \frac{\partial u}{\partial t} = u'_x$.
4.2.* Determine, for each of the following partial differential equations, the regions where it is hyperbolic, elliptic or parabolic:
a) $u''_{tt} + xu''_{xx} + 2u'_x = f(x,t)$, (x,t) ∈ R².
b) $y^2 u''_{xx} + u''_{yy} = 0$, x ∈ R, y > 0.
c) $\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}\right)$, t > 0, r > 0, and c ∈ R a constant.
d) $\sin x\,u''_{tt} + 2u''_{xt} + \cos x\,u''_{xx} = \tan x$, t ∈ R, |x| ≤ π.
4.3. [S] Let u(x,t), t > 0, x > 0 denote the temperature in an infinitely long rod with heat conductance coefficient k, which we heat up by increasing the temperature at the end point such that u(0,t) = t. Use the fact that $u_\alpha(x,t) = (4\pi kt)^{-\frac{1}{2}}e^{-\frac{(x-\alpha)^2}{4kt}}$ is a solution of $u'_t - ku''_{xx} = 0$ for each α ∈ R, together with the superposition principle, to determine u(x,t). I.e. solve the problem
$$u'_t - ku''_{xx} = 0, \quad x > 0,\ t > 0,$$
$$u(0,t) = t, \quad t > 0.$$
4.5. [S] a) Determine the Fourier series for the function f (x) which in the interval −π < x < π is given
by f (x) = x2 .
b) Use a) to show that
$$\frac{\pi^2}{12} = -\sum_{k=1}^{\infty}\frac{(-1)^k}{k^2}.$$
4.7. [S] Consider a rod of length L = 1 with heat conduction coefficient k = 1. At the beginning the rod has the constant temperature 1. We then (instantaneously) cool down the ends of the rod to the temperature 0, where we then keep them during the continuation of the experiment.
a) Formulate this problem mathematically.
b) Find an expression for the temperature of the rod at the point x at the time t.
(Hint: for the Fourier series expansion of the constant 1, use an odd periodic extension in the interval.)
Solution: Set $y(x) = x^r$; then $y'(x) = rx^{r-1}$ and $y''(x) = r(r-1)x^{r-2}$. If we insert this into (*) we get
$$r(r-1)x^r + arx^r + bx^r = 0,$$
which gives us the equation
$$(**)\quad r(r-1) + ar + b = 0.$$
This is the so-called characteristic equation corresponding to (*). Assume that the solutions of (**) are r₁ and r₂. We have three different cases:
♦
REMARK 6. Observe that
$$x^{\alpha + i\beta} = x^\alpha e^{i\beta\ln x} = x^\alpha\left(\cos(\beta\ln x) + i\sin(\beta\ln x)\right)$$
and
$$x^{\alpha - i\beta} = x^\alpha\left(\cos(\beta\ln x) - i\sin(\beta\ln x)\right),$$
hence we can write the solution of case 3 in the example above as
$$y(x) = x^\alpha\left((A+B)\cos(\beta\ln x) + i(A-B)\sin(\beta\ln x)\right).$$
5. INTRODUCTION TO STURM-LIOUVILLE THEORY AND THE THEORY OF GENERALIZED FOURIER SERIES
If we only consider constants A and B such that C = A + B and D = i(A − B) are real numbers, then we see that
$$y(x) = x^\alpha\left(C\cos(\beta\ln x) + D\sin(\beta\ln x)\right)$$
is a real-valued solution to (*).
In the next section we will describe in more detail what is meant by a Sturm-Liouville problem (after Charles-François Sturm and Joseph Liouville), but first we will look at some examples.
Solution: Previously (cf. Section 4.8, p. 22) we saw that this problem can be solved if and only if
$$\lambda = \lambda_n = \left(\frac{n\pi}{l}\right)^2, \quad n = 1, 2, 3, \ldots \quad\text{(eigenvalues)},$$
with the corresponding solutions
$$y_n(x) = a_n\sin\left(\frac{n\pi}{l}x\right) \quad\text{(eigenfunctions)}.$$
♦
Example 5.5. Solve
$$X''(x) - \lambda X(x) = 0, \quad 0 \le x \le 1,$$
$$X(0) = 0, \qquad X'(1) = -3X(1).$$
Solution: We have three different cases:
♦
Example 5.6. Solve
$$\begin{cases} x^2 X''(x) + 2xX'(x) + \lambda X = 0,\\ X(1) = 0, \quad X(e) = 0.\end{cases}$$
Solution: The characteristic equation is
$$r(r-1) + 2r + \lambda = 0 \;\Leftrightarrow\; r^2 + r + \lambda = 0.$$
[Fig. 5.2.1: the solutions p₁ < p₂ < p₃ < · · · of tan p = −p/3, given by the intersections of the curves y = tan p and y = −p/3.]
[Figure: the Bessel function J₀(x) with its zeros α₁, α₂, α₃, α₄, α₅.]
To construct an orthonormal basis in a vector space we must be able to measure lengths and angles.
Hence we must introduce an inner product (a scalar product). With the help of an inner product we can
easily determine which elements are orthogonal to each other. We will consider two examples of vector spaces with inner products here: the plane R² together with the usual scalar product, and a vector space consisting of functions on an interval together with an inner product defined by an integral.
Vectors in R2
If we have two vectors $\vec{x} = (x_1, x_2)$ and $\vec{y} = (y_1, y_2)$, the inner product of $\vec{x}$ and $\vec{y}$ is defined by
$$\vec{x}\cdot\vec{y} = x_1 y_1 + x_2 y_2.$$
The norm of $\vec{x}$, $|\vec{x}|$, is defined by
$$|\vec{x}|^2 = \vec{x}\cdot\vec{x} = x_1^2 + x_2^2,$$
and the distance between $\vec{x}$ and $\vec{y}$, $|\vec{x} - \vec{y}|$, is given by
$$|\vec{x} - \vec{y}|^2 = (x_1 - y_1)^2 + (x_2 - y_2)^2.$$
The angle θ between $\vec{x}$ and $\vec{y}$ can now be computed using the relation
$$\vec{x}\cdot\vec{y} = |\vec{x}|\,|\vec{y}|\cos\theta,$$
and we say that two vectors are orthogonal (perpendicular to each other), $\vec{x}\perp\vec{y}$, if $\theta = \frac{\pi}{2}$, i.e. if $\vec{x}\cdot\vec{y} = 0$.
A Function Space
We now consider the vector space consisting of functions f(x) defined on the interval [0, l] (for some l > 0) together with a positive weight function r(x). The generalizations of the concepts above are
$$\langle f, g\rangle = \int_0^l f(x)g(x)r(x)\,dx \quad\text{(inner product)},$$
$$\|f\|^2 = \int_0^l |f(x)|^2 r(x)\,dx \quad\text{(norm)},$$
$$\|f - g\|^2 = \int_0^l |f(x) - g(x)|^2 r(x)\,dx \quad\text{(distance)},$$
$$\langle f, g\rangle = \|f\|\,\|g\|\cos\theta \quad\text{(angle)},$$
$$f\perp g \;\Leftrightarrow\; \langle f, g\rangle = 0 \;\Leftrightarrow\; \int_0^l f(x)g(x)r(x)\,dx = 0 \quad\text{(orthogonality)}.$$
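These definitions translate directly into code; a sketch approximating the inner product by the trapezoid rule (the grid size and the test functions sin x, sin 2x are arbitrary choices):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def inner(f, g, r=lambda s: np.ones_like(s), l=np.pi, n=20001):
    """<f, g> = integral_0^l f(x) g(x) r(x) dx."""
    x = np.linspace(0.0, l, n)
    return trapezoid(f(x) * g(x) * r(x), x)

# sin x and sin 2x are orthogonal on [0, pi] with weight r = 1:
ip = inner(np.sin, lambda x: np.sin(2*x))
norm_sq = inner(np.sin, np.sin)   # ||sin||^2 = pi/2
```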
If P(x) > 0 and c₁, . . . , c₄ ≠ 0 we say that the problem is regular, and if P or r is 0 at some endpoint we say that it is singular (note that there are other examples of both regular and singular SL problems; e.g. the following problem is regular).
(i) The eigenvalues are real and to every eigenvalue the corresponding eigenfunction is unique
up to a constant multiple.
(ii) The eigenvalues form an infinite sequence λ₁, λ₂, . . . and they can be ordered as
$$\lambda_1 < \lambda_2 < \lambda_3 < \cdots,$$
with
$$\lim_{n\to\infty}\lambda_n = \infty.$$
(iii) If y₁ and y₂ are two eigenfunctions corresponding to two different eigenvalues, λ₁ ≠ λ₂, they are orthogonal with respect to the inner product defined by r(x), i.e.
$$\langle y_1, y_2\rangle = \int_0^l y_1(x)y_2(x)r(x)\,dx = 0.$$
We will now see how we can generalize the concept of Fourier series from the usual trigonometric basis
functions to an orthonormal basis consisting of eigenfunctions to a Sturm-Liouville problem.
Assume that we have an infinite linear combination
∞
f (x) = ∑ cn yn (x),
n=1
where yn ⊥ ym for n 6= m. Then
$$\langle f, y_m\rangle = \left\langle\sum_{n=1}^{\infty} c_n y_n,\ y_m\right\rangle = \sum_{n=1}^{\infty} c_n\langle y_n, y_m\rangle = c_m\langle y_m, y_m\rangle = c_m\|y_m\|^2.$$
Let f be an arbitrary function on [0, l]. Then we define the generalized Fourier series for f as
$$S(x) = \sum_{n=1}^{\infty} c_n y_n(x),$$
where
$$c_n = \frac{1}{\|y_n\|^2}\,\langle f, y_n\rangle$$
are the generalized Fourier coefficients.
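A numerical sketch of the coefficient formula, using the eigenfunctions y_n(x) = sin nx on [0, π] (weight r = 1) and f(x) = x, for which the exact coefficients are c₁ = 2 and c₂ = −1:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def fourier_coeff(f, y_n, x):
    """c_n = <f, y_n> / ||y_n||^2, here with weight r(x) = 1."""
    return trapezoid(f(x) * y_n(x), x) / trapezoid(y_n(x)**2, x)

x = np.linspace(0.0, np.pi, 20001)
c1 = fourier_coeff(lambda s: s, lambda s: np.sin(s), x)     # exact value: 2
c2 = fourier_coeff(lambda s: s, lambda s: np.sin(2*s), x)   # exact value: -1
```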
Let y1 , y2 , . . . be a set of orthogonal eigenfunctions of a regular Sturm-Liouville problem, and let f be a
piece-wise smooth function in [0, l]. Then, for each x in [0, l] we have that
We begin by performing the following natural scaling of the problem (cf. Chapter 1):
(5.6.1) $\tilde{t} = \frac{k}{l^2}\,t, \qquad \tilde{x} = \frac{x}{l}.$
5.6. SOME APPLICATIONS
also satisfy (1) and (2). We will now see that we can also make this function satisfy the initial condition (3) by choosing appropriate constants $\tilde{b}_n$. It is clear that
$$\tilde{u}(x, 0) = \sum_{n=1}^{\infty}\tilde{b}_n\sin(n\pi x),$$
♦
Example 5.10. Consider a rod between x = 1 and x = e. Let u(x,t) denote the temperature of the rod
at the point x and time t. Assume that the end points are kept at the constant temperature 0, that at
the initial time t = 0 the rod has a heat distribution given by
u(x, 0) = f (x), 1 < x < e,
that no heat is added and that the rod has constant density ρ and specific heat cv . Assume also
that the rod has heat conductance K which varies as K(x) = x2 . The equation which determines the
temperature u(x,t) is in this case
(1) $c_v\rho\,u'_t = \frac{\partial}{\partial x}\left(x^2 u'_x\right)$, 1 < x < e, t > 0.
Determine u(x,t) for 1 ≤ x ≤ e, and t > 0.
Solution: We apply Fourier’s method of separating the variables and assume that we can find a
solution of the form u(x,t) = X(x)T (t). Inserting this expression in (1) above we get
$$c_v\rho\,\frac{T'}{T} = \frac{1}{X}\,\frac{d}{dx}\left(x^2 X'\right) = -\lambda,$$
where λ is a constant and X satisfies the boundary condition
(2) X(1) = X(e) =0.
Thus T satisfies the equation
(3) $T' = -\frac{\lambda}{c_v\rho}\,T$,
and X satisfies
$$\frac{d}{dx}\left(x^2 X'\right) + \lambda X = 0, \quad 1 < x < e,$$
or equivalently
(4) $x^2 X'' + 2xX' + \lambda X = 0$, 1 < x < e.
The equation (4) together with the boundary condition (2) gives us a regular Sturm-Liouville problem
on [1, e]. The characteristic equation is
r(r − 1) + 2r + λ = 0,
with the roots
$$r_{1,2} = -\frac{1}{2}\pm\sqrt{\frac{1}{4} - \lambda}.$$
As in Example 5.6 we get three different cases depending on the value of λ:
λ = 1/4: We have a double root r = −1/2, and the solutions are given by $X(x) = Ax^{-1/2} + Bx^{-1/2}\ln x$. The boundary condition (2) gives X(1) = A = 0 and $X(e) = Be^{-1/2} = 0$, i.e. we get only the trivial solution X ≡ 0.

λ < 1/4: The roots are now real and different, r₁ ≠ r₂, and the solutions are $X(x) = Ax^{r_1} + Bx^{r_2}$. The boundary conditions give us
$$\begin{cases} X(1) = A + B = 0,\\ X(e) = Ae^{r_1} + Be^{r_2} = 0,\end{cases}\;\Rightarrow\;\begin{cases} A = -B,\\ A\left(e^{r_1} - e^{r_2}\right) = 0,\end{cases}$$
and since r₁ ≠ r₂ we must have A = 0, and we only get the trivial solution X ≡ 0.

λ > 1/4: We have two complex roots $r = -\frac{1}{2}\pm i\sqrt{\lambda - \frac{1}{4}}$, and the general solution is
$$X(x) = \frac{A}{\sqrt{x}}\sin\left(\sqrt{\lambda - \tfrac{1}{4}}\,\ln x\right) + \frac{B}{\sqrt{x}}\cos\left(\sqrt{\lambda - \tfrac{1}{4}}\,\ln x\right).$$
The boundary conditions imply that X(1) = B = 0 and $X(e) = Ae^{-1/2}\sin\left(\sqrt{\lambda - \tfrac{1}{4}}\right) = 0$, which gives us that
$$\sqrt{\lambda - \tfrac{1}{4}} = n\pi, \quad n\in\mathbb{Z}_+.$$

Observe that the case n = 0 is the same as λ = 1/4. Hence the eigenvalues of the Sturm-Liouville problem (4) and (2) are
$$\lambda_n = \frac{1}{4} + n^2\pi^2, \quad n\in\mathbb{Z}_+,$$
and the corresponding eigenfunctions are
$$X_n(x) = \frac{1}{\sqrt{x}}\sin(n\pi\ln x), \quad n\in\mathbb{Z}_+.$$
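Since (4) in self-adjoint form is (x²X′)′ + λX = 0, the weight function is r(x) = 1, and Sturm-Liouville theory predicts that the eigenfunctions are orthogonal on [1, e]. A quick numerical check (the grid size is an arbitrary choice):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2.0)

def X(n, x):
    """Eigenfunction X_n(x) = sin(n pi ln x) / sqrt(x)."""
    return np.sin(n * np.pi * np.log(x)) / np.sqrt(x)

x = np.linspace(1.0, np.e, 20001)
orth = trapezoid(X(1, x) * X(2, x), x)   # should vanish for different eigenvalues
norm_sq = trapezoid(X(1, x)**2, x)       # substituting u = ln x gives the value 1/2
```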
For every fixed n, the equation (3) is
$$T_n' = -\frac{\lambda_n}{c_v\rho}\,T_n,$$
λ = 0: The solutions to (5) and (6) are then T = constant and X = Ax + B, i.e. u(x,t) = Ax + B for some constants A and B. The boundary value (2) gives u(0,t) = B = 0, and (3) gives $u'_x(1,t) = A = -3u(1,t) = -3A$, i.e. A = 0 and we only get the trivial solution u(x,t) ≡ 0.

λ > 0: The solutions to (5) are $T(t) = Ae^{\lambda t}$ and the solutions to (6) are $X(x) = Be^{\sqrt{\lambda}x} + Ce^{-\sqrt{\lambda}x}$. The boundary value (2) gives $u(0,t) = T(t)X(0) = Ae^{\lambda t}(B + C) = 0$, hence either A = 0 (which implies u ≡ 0) or B = −C. The condition (3) is now equivalent to
$$Ae^{\lambda t}\sqrt{\lambda}\,B\left(e^{\sqrt{\lambda}} + e^{-\sqrt{\lambda}}\right) = -3AB\left(e^{\sqrt{\lambda}} - e^{-\sqrt{\lambda}}\right) \;\Leftrightarrow\; ABe^{2\sqrt{\lambda}}\left(3 + \sqrt{\lambda}\right) = AB\left(3 - \sqrt{\lambda}\right),$$
which is only satisfied if AB = 0 (show this!), and in this case we only get the trivial solution u ≡ 0.

λ < 0: If we set λ = −p² we get (in the same manner as in Example 5.5) the solutions
$$(*)\quad u_n(x,t) = B_n e^{-p_n^2 t}\sin p_n x, \quad n = 1, 2, 3, \ldots,$$
and by the superposition principle the sum of these solutions also satisfies (1), (2) and (3). Furthermore, (6) with the corresponding boundary conditions is a regular Sturm-Liouville problem, and the theory of generalized Fourier series implies that u(x,t) will satisfy (4):
$$u(x, 0) = \sum_{n=1}^{\infty} B_n\sin p_n x = f(x),$$
where $p_n$ are the positive solutions of $\tan p = -\frac{p}{3}$, p₁ < p₂ < · · · (see Fig. 5.2.1), and $B_n$ is defined by (**).
♦
Example 5.12. (The wave equation) A vibrating circular membrane with radius R is described by the following equation together with boundary and initial values:
(1) $u''_{tt} = c^2\left(u''_{xx} + u''_{yy}\right)$, t > 0, $r = \sqrt{x^2 + y^2}\le R$,
(2) u(R, t) = 0, t > 0, (fixed boundary)
(3) u(r, 0) = f(r), r ≤ R, (initial position)
(4) $\frac{\partial u}{\partial t}(r, 0) = g(r)$, r ≤ R. (initial velocity)
Observe that the initial conditions only depend on $r = \sqrt{x^2 + y^2}$, the distance from the center of the membrane to the point (x, y), and if we introduce polar coordinates
$$x = r\cos\theta, \qquad y = r\sin\theta,$$
we see that (1) can be written as
$$\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial\theta^2}\right).$$
If we also make the assumption that u(r, θ,t) is radially symmetric (i.e. that u(r, θ,t) is independent
of the angle θ) we can write (1) as
$$(1')\quad\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}\right).$$
To solve the problem we continue as before and use Fourier’s method to separate the variables. With
the function u(r,t) = W (r)G(t) inserted into (1’) we get the equations
(5) $W'' + \frac{1}{r}W' + k^2 W = 0$, 0 ≤ r ≤ R,
(6) $G'' + (ck)^2 G = 0$, t > 0.
Furthermore, we get the following boundary values from (2):
(7) W (R) = 0,
and (5) together with (7) is a Sturm-Liouville problem which gives us the eigenfunctions
$$W_n(r) = J_0\left(\frac{\alpha_n}{R}\,r\right),$$
where $\alpha_n = k_n R$ are the solutions of $J_0(kR) = 0$ (see Example 5.7). Observe that if we write (5) in the general form we see that the weight function is r, i.e. the inner product is given by
$$\langle f, g\rangle = \int_0^R f(r)g(r)\,r\,dr.$$
By solving (6) for k = kₙ and using the superposition principle we see that
$$(*)\quad u(r,t) = \sum_{n=1}^{\infty}\left(A_n\cos\frac{c\alpha_n}{R}t + B_n\sin\frac{c\alpha_n}{R}t\right)J_0\left(\frac{\alpha_n}{R}r\right)$$
is a solution to (1) and (2). And we can also choose the constants $A_n$ so that (3) is satisfied, i.e.
$$u(r, 0) = \sum_{n=1}^{\infty} A_n J_0\left(\frac{\alpha_n}{R}r\right) = f(r),$$
if
$$(**)\quad A_n = \frac{1}{\int_0^R J_0\left(\frac{\alpha_n}{R}r\right)^2 r\,dr}\int_0^R f(r)J_0\left(\frac{\alpha_n}{R}r\right)r\,dr.$$
Hence, the answer to the problem is given by (*) where An and Bn are chosen as in (**) and (***).
5.7. Exercises
5.1. [S] Solve the following S-L problems by determining the eigenvalues and eigenfunctions:
(a) $\left(x^2 u'(x)\right)' + \lambda u(x) = 0$, 1 < x < e^L, $u(1) = u\left(e^L\right) = 0$,
(b) $\left(x^2 u'(x)\right)' + \lambda u(x) = 0$, 1 < x < e^L, $u(1) = u'(e) = 0$.
5.2.* Solve the following S-L problems by determining the eigenvalues and eigenfunctions:
(a) $u''(x) + \lambda u(x) = 0$, 0 < x < l, $u'(0) = u'(l) = 0$,
(b) $u''(x) + \lambda u(x) = 0$, 0 < x < l, $u'(0) = u(l) = 0$.
5.4.* A rod between x = 1 and x = e has constant temperature 0 at the endpoints, and at the time t = 0 the heat distribution is given by $\sqrt{x}$, 1 < x < e. The rod has a constant density ρ and constant specific heat C, but its thermal conductance varies like K = x², 1 < x < e. Formulate an initial and boundary value problem for the temperature of the rod, u(x,t). Then use Fourier's method to solve the problem.
5.5.*
(a) Solve the problem
$$u'_t = 4u''_{xx}, \quad 0 \le x \le 1,\ t > 0,$$
$$u(0,t) = 0, \quad t > 0,$$
$$u'_x(1,t) = -cu(1,t), \quad t > 0,$$
$$u(x, 0) = \begin{cases} x, & 0 \le x < \frac{1}{2},\\ 1 - x, & x \ge \frac{1}{2}.\end{cases}$$
(b) Give a physical interpretation of the problem in (a).
5.6. [S] Consider an ideal liquid, flowing orthogonally towards an infinitely long cylinder of radius a. Since the problem is uniform in the axial coordinate we can treat it in plane polar coordinates.
[Figure: cross-section of the cylinder of radius a, with the flow along the x-axis.]
6. INTRODUCTION TO TRANSFORM THEORY WITH APPLICATIONS
REMARK 9. The complex form in Example 6.2 can be deduced from the formulas in Example 6.1 and Euler's formulas:
(6.1.1) $$\sin t = \frac{e^{it} - e^{-it}}{2i}, \qquad \cos t = \frac{e^{it} + e^{-it}}{2},$$
or equivalently
$$e^{it} = \cos t + i\sin t, \qquad e^{-it} = \cos t - i\sin t.$$
We have
$$f(t) = a_0 + \sum_{n=1}^{\infty}\left(a_n\cos n\Omega t + b_n\sin n\Omega t\right) = a_0 + \sum_{n=1}^{\infty}\left(a_n\frac{e^{in\Omega t} + e^{-in\Omega t}}{2} + b_n\frac{e^{in\Omega t} - e^{-in\Omega t}}{2i}\right)$$
$$= a_0 + \sum_{n=1}^{\infty}\left(\left(\frac{a_n}{2} + \frac{b_n}{2i}\right)e^{in\Omega t} + \left(\frac{a_n}{2} - \frac{b_n}{2i}\right)e^{-in\Omega t}\right) = c_0 + \sum_{n=1}^{\infty}\left(c_n e^{in\Omega t} + \overline{c_n}\,e^{-in\Omega t}\right),$$
where we let $c_0 = a_0$ and $c_n = \frac{a_n}{2} + \frac{b_n}{2i}$. (Observe that $a_n$ and $b_n$ are real numbers.) If we additionally define
$$c_{-n} = \overline{c_n},$$
we get
$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{in\Omega t}.$$
6.2. THE LAPLACE TRANSFORM
Moreover:
$$n > 0:\quad c_n = \frac{a_n}{2} - i\,\frac{b_n}{2} = \frac{1}{2l}\int_{-l}^{l} f(t)e^{-in\Omega t}\,dt,$$
$$n = 0:\quad c_0 = a_0,$$
$$n < 0:\quad c_n = \overline{c_{-n}} = \frac{a_{-n}}{2} + i\,\frac{b_{-n}}{2} = \frac{1}{2l}\int_{-l}^{l} f(t)\cos(-n\Omega t)\,dt + \frac{i}{2l}\int_{-l}^{l} f(t)\sin(-n\Omega t)\,dt = \frac{1}{2l}\int_{-l}^{l} f(t)e^{-in\Omega t}\,dt.$$
If f(t) is defined for t ≥ 0, the (unilateral) Laplace transform (Pierre-Simon Laplace) $\mathcal{L}$ and its inverse $\mathcal{L}^{-1}$ are defined by:
$$\mathcal{L}: f(t) \mapsto F(s) = \mathcal{L}\{f(t)\}(s) = \int_0^\infty e^{-st}f(t)\,dt,$$
$$\mathcal{L}^{-1}: F(s) \mapsto f(t) = \mathcal{L}^{-1}\{F(s)\}(t) = \frac{1}{2\pi i}\int_{a - i\infty}^{a + i\infty} F(s)e^{st}\,ds.$$
Note that if $f(t)e^{-\sigma_0 t} \to 0$ as t → ∞ then the first integral converges for all complex numbers s with real part greater than σ₀, and in the second integral we then demand that a > σ₀.
R EMARK 10. In applications the inverse transforms are usually computed by using a table (see e.g.
Appendix A-1, p. 90). When computing the inverse transform it is sometimes also useful to remember
how to compute partial fraction decompositions (see e.g. Appendix A-6, p. 99)
⋮
$$\mathcal{L}\left\{f^{(n)}(t)\right\}(s) = s^n\mathcal{L}\{f\}(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0).$$
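A quick symbolic check of this rule, with the arbitrarily chosen test function f(t) = sin t and n = 2:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(t)

# L{f''}(s) = s^2 L{f}(s) - s f(0) - f'(0)
lhs = sp.laplace_transform(sp.diff(f, t, 2), t, s, noconds=True)
F = sp.laplace_transform(f, t, s, noconds=True)
rhs = s**2 * F - s * f.subs(t, 0) - sp.diff(f, t).subs(t, 0)
check = sp.simplify(lhs - rhs)   # should be 0
```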
Convolution
The convolution product of two functions f and g, f ⋆ g, over a finite interval [0, t] is defined as
$$(f \star g)(t) = \int_0^t f(u)g(t-u)\,du.$$
In fact,
$$\mathcal{L}\{f \star g\} = \int_0^\infty\left(\int_0^t f(u)g(t-u)\,du\right)e^{-st}\,dt = \int_0^\infty\int_u^\infty f(u)g(t-u)e^{-st}\,dt\,du$$
$$= \int_0^\infty f(u)e^{-su}\left(\int_u^\infty g(t-u)e^{-s(t-u)}\,dt\right)du \overset{x = t-u}{=} \int_0^\infty f(u)e^{-su}\left(\int_0^\infty g(x)e^{-sx}\,dx\right)du = \mathcal{L}\{f\}\,\mathcal{L}\{g\}.$$
Observe that in the second equality we used the identity $\int_0^\infty\int_0^t du\,dt = \int_0^\infty\int_u^\infty dt\,du$, which follows from the fact that both sides represent an area integral in the (u,t)-plane over the octant between the positive t-axis and the line t = u.
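The convolution theorem can be verified symbolically for a concrete pair of functions; a sketch with the arbitrarily chosen f(t) = e^{−t} and g(t) = t:

```python
import sympy as sp

t, s, u = sp.symbols('t s u', positive=True)
f = sp.exp(-t)
g = t

# (f * g)(t) = integral_0^t f(u) g(t - u) du
conv = sp.integrate(f.subs(t, u) * g.subs(t, t - u), (u, 0, t))
lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(f, t, s, noconds=True)
       * sp.laplace_transform(g, t, s, noconds=True))
check = sp.simplify(lhs - rhs)   # should be 0
```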
Damping
By damping a "signal" f(t) exponentially, i.e. multiplying f(t) by $e^{-at}$, one obtains a translation of the Laplace transform of f:
$$\mathcal{L}\left\{e^{-at}f(t)\right\}(s) = \int_0^\infty e^{-at}f(t)e^{-st}\,dt = \int_0^\infty f(t)e^{-(s+a)t}\,dt = \mathcal{L}\{f\}(s+a).$$
I.e. we have the following formula:
$$(6.2.1)\quad \mathcal{L}\left\{e^{-at}f(t)\right\}(s) = \mathcal{L}\{f\}(s+a).$$
Time delay
Heaviside's function is defined by
$$\theta(t) = \begin{cases} 0, & t < 0,\\ 1, & t \ge 0,\end{cases}$$
and for a ∈ R the function t ↦ θ(t − a) takes the value 0 when t < a and 1 when t ≥ a (see Fig. 6.2.1). The purpose of the function θ(t − a) is to switch on a signal at time t = a, and one can also form the function θ(t − a) − θ(t − b), which switches on a signal at the time t = a and switches it off at the time t = b:
$$f(t)\left(\theta(t-a) - \theta(t-b)\right) = \begin{cases} f(t), & a \le t < b,\\ 0, & \text{else.}\end{cases}$$
Another use of Heaviside's function is time delay. To translate a function f(t) which is defined for t ≥ 0 (i.e. delay the signal) one can form the function t ↦ f(t − a)θ(t − a), which is 0 when t < a and f(t − a) when t ≥ a. The Laplace transform of this function is given by a damping on the transform side:
$$\mathcal{L}\{f(t-a)\theta(t-a)\}(s) = \int_0^\infty f(t-a)\theta(t-a)e^{-st}\,dt = \int_a^\infty f(t-a)e^{-st}\,dt \overset{u = t-a}{=} \int_0^\infty f(u)e^{-s(u+a)}\,du = e^{-as}\mathcal{L}\{f\}(s).$$
[Fig. 6.2.1: the step function θ(t − a), equal to 0 for t < a and 1 for t ≥ a.]
♦
Example 6.4. Let $f(t) = e^{iat}$, where a is a constant and t ≥ 0. The Laplace transform of f is then given by:
$$\mathcal{L}\left\{e^{iat}\right\}(s) = \int_0^\infty e^{iat}e^{-st}\,dt = \int_0^\infty e^{(ia-s)t}\,dt = \left[\frac{e^{(ia-s)t}}{ia-s}\right]_0^\infty = \frac{1}{s-ia} = \frac{s+ia}{s^2+a^2} = \frac{s}{s^2+a^2} + i\,\frac{a}{s^2+a^2},$$
and since $\mathcal{L}$ is linear, Euler's formulas (6.1.1) imply that
$$\mathcal{L}\{\cos at\}(s) = \frac{s}{s^2+a^2}, \qquad \mathcal{L}\{\sin at\}(s) = \frac{a}{s^2+a^2}.$$
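These two transforms can be confirmed symbolically; a short SymPy sketch:

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

Lcos = sp.laplace_transform(sp.cos(a*t), t, s, noconds=True)
Lsin = sp.laplace_transform(sp.sin(a*t), t, s, noconds=True)

check_cos = sp.simplify(Lcos - s/(s**2 + a**2))   # should be 0
check_sin = sp.simplify(Lsin - a/(s**2 + a**2))   # should be 0
```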
Solution: Let $\mathcal{L}\{y(t)\} = Y(s)$. Then $\mathcal{L}\{y''(t)\} = s^2 Y(s)$, and if we (Laplace-) transform the equa-
where u(x,t) is a bounded function (u(x,t) gives the heat in the point x at the time t). Now transform
the entire equation in the time variable, and let U(x, s) denote the Laplace transform of u(x,t). The
equation $u'_t - ku''_{xx} = 0$ can now be written as
$$sU(x,s) - kU''_{xx}(x,s) = 0,$$
and if we solve this ordinary differential equation (in the x-variable), we get
$$U(x,s) = Ae^{-\sqrt{s/k}\,x} + Be^{\sqrt{s/k}\,x},$$
where A and B are functions of s. Since we assumed u to be bounded (in both variables) the term containing $e^{\sqrt{s/k}\,x}$ must vanish, i.e. B = 0, and we get
$$U(x,s) = A(s)e^{-\sqrt{s/k}\,x}.$$
The boundary condition implies $U(0,s) = \mathcal{L}\{u(0,t)\}(s) = \mathcal{L}\{1\}(s) = \frac{1}{s}$, but we see that U(0,s) = A(s), so $A(s) = \frac{1}{s}$ and
$$U(x,s) = \frac{1}{s}\,e^{-\sqrt{s/k}\,x}.$$
6.3. THE FOURIER TRANSFORM 49
To find u we must now apply the inverse transform to U. For this purpose it is convenient to use a
table, and using Appendix 1, p. 90 we see that
u(x,t) = erfc(x/(2√(kt))),
where erfc is the complementary error function,
erfc(t) = 1 − erf(t),  erf(t) = (2/√π) ∫_0^t e^{−z²} dz.
The counterpart of Fourier series for functions f(t) defined on R is the Fourier transform, F{f}, which
we define as
F : f(t) ↦ f̂(ω) = F{f(t)} = ∫_{−∞}^{∞} f(t)e^{−iωt} dt,
for functions f(t) such that the integral converges. We also have an inverse transform
F^{−1} : f(t) = (1/2π) ∫_{−∞}^{∞} f̂(ω)e^{iωt} dω.
R EMARK 11. We can still interpret the formula as if we reconstruct the signal f (t) as a sum of waves
(basis functions) eiωt , with amplitudes fˆ(ω).
R EMARK 12. In applications it is customary to find the inverse transform using appropriate tables (see
Appendix 2, p. 91).
In the same manner as for the Laplace transform we can derive a number of useful general properties for
the Fourier transform.
• Linearity
F {a f (t) + bg(t)} = aF { f (t)} + bF {g(t)} .
• Differentiation
F{f′(t)} = iωF{f(t)},
⋮
F{f^{(n)}(t)} = (iω)ⁿ F{f(t)}.
• Convolution
F{f ⋆ g} = F{f(t)} F{g(t)},
where the convolution over R is defined by
(f ⋆ g)(t) = ∫_{−∞}^{∞} f(t − u)g(u) du.
• Frequency modulation
F{e^{iat} f(t)}(ω) = F{f(t)}(ω − a) = f̂(ω − a).
• Time delay
F { f (t − a)} = e−iωa fˆ(ω).
Example 6.7. Let f(t) = θ(t)e^{−t} (θ(t) is defined as on p. 46). Then
F{θ(t)e^{−t}}(ω) = ∫_{−∞}^{∞} θ(t)e^{−t}e^{−iωt} dt
= ∫_0^∞ e^{−(1+iω)t} dt
= [−e^{−(1+iω)t}/(1 + iω)]_0^∞ = 1/(1 + iω).
I.e. f̂(ω) = 1/(1 + iω).
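The transform pair f̂(ω) = 1/(1 + iω) can be verified with a direct numerical integration over the half-line where f does not vanish. The routine and the test frequency ω = 2 below are our own choices:

```python
import cmath
import math

def fourier(f, w, T=40.0, n=200_000):
    # trapezoid rule for the integral of f(t)e^{-iωt} over [0, T];
    # here f is zero for t < 0 and decays fast, so the truncation is harmless
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * cmath.exp(-1j * w * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * cmath.exp(-1j * w * t)
    return h * total

w = 2.0
fhat = fourier(lambda t: math.exp(-t), w)
assert abs(fhat - 1 / (1 + 1j * w)) < 1e-4
```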
♦
Example 6.8. (Heat conduction equation with an initial temperature distribution)
Assume that we have an infinitely long rod with temperature distribution in the point x at the
time t given by u(x,t), x ∈ R, t ≥ 0. Assume also that at the initial time t = 0 the temperature is
distributed according to the function f (x), i.e. u(x, 0) = f (x). To determine u we must solve the
following initial value problem.
(6.3.1)  u′_t − ku″_xx = 0, −∞ < x < ∞, t > 0,
         u(x, 0) = f(x), −∞ < x < ∞.
Solution: By using the Fourier transform in the same way as the Laplace transform in Example
6.6 we get (after some calculations) that
u(x,t) = (1/√(4πkt)) ∫_{−∞}^{∞} f(z)e^{−(x−z)²/(4kt)} dz.
♦
R EMARK 13. The function G(x,t) = (1/√(4πkt)) e^{−(x−y)²/(4kt)} is the so called Green's function or the unit
impulse solution to the following problem:
G′_t − kG″_xx = 0,
G(x, 0) = δ_y(x).
Here δ_y(x) is the Dirac delta function (Paul Dirac), which is usually characterized by the property that
∫_{−∞}^{∞} g(x)δ_y(x) dx = g(y),
or alternatively formulated
(g ⋆ δ_y)(u) = g(u − y).
Green's method: The solution to (6.3.1) is given by
u = f ? G.
Observe that δ_y(x) is not a function strictly speaking, but a distribution. If y = 0 we simply write δ₀(x) =
δ(x). In connection with applications δ_y(x) is usually called a unit impulse (at the point x = y). When
considering a physical system, the occurrence of δ(t) should be interpreted as the system being subjected to
a short (momentary) force. (For example if you hit a pendulum with a hammer at the time 0 the system
will be described by an equation of the type mÿ + aẏ + by = cδ(t).)
Sampling
Sampling here means that we reconstruct a continuous function from a set of discrete (measured/sampled)
function values.
S : f (t) → { f (nδ)} , δ is the length of the sampling interval.
D EFINITION 6.1. A function f (t) is said to be band limited if the Fourier transform of f , F ( f ) only
contains frequencies in a bounded interval, i.e. if fˆ(ω) = 0 for |ω| ≥ c for some constant c. (The
counterpart for periodic functions is of course that the Fourier series transform is a finite sum.)
T HEOREM . (The sampling theorem)
A continuous band limited signal f(t) can be uniquely reconstructed from its values at uniformly
distributed points (sampling points) if the distance between two neighboring points is at most π/c.
In this case we have:
S^{−1} : f(t) = Σ_{k=−∞}^{∞} f(kπ/c) · sin(ct − kπ)/(ct − kπ).
(Here the sampling is performed over the points x_k = kπ/c.)
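The reconstruction formula can be tried out numerically. In the sketch below (entirely our own illustration) we take c = 1 and the band-limited signal f(t) = sin(0.9t)/t, whose transform vanishes for |ω| ≥ 0.9 < c, and truncate the infinite sum:

```python
import math

c = 1.0                      # band limit; f below has bandwidth 0.9 < c

def f(t):
    return 0.9 if t == 0 else math.sin(0.9 * t) / t

def reconstruct(t, K=4000):
    # truncated sampling series: sum of f(kπ/c)·sin(ct-kπ)/(ct-kπ) for |k| ≤ K
    total = 0.0
    for k in range(-K, K + 1):
        x = c * t - k * math.pi
        sinc = 1.0 if x == 0 else math.sin(x) / x
        total += f(k * math.pi / c) * sinc
    return total

t = 0.5
assert abs(reconstruct(t) - f(t)) < 1e-3
```

The terms decay like 1/k², so the truncation error with K = 4000 is well below the tolerance used here.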
c
R EMARK 14. In connection with the sampling theorem we should also mention two other discrete Fourier
transforms:
• The Discrete Fourier Transform (DFT).
• The Fast Fourier Transform (FFT).
These transforms are very useful in many practical applications, but we do not have the time to go into
more details concerning these in this short introduction (in short one can say that practically the entire
information society of today relies on the FFT). Some references:
• Mathematics of the DFT. A good and extensive online-book on DFT and applications,
http://ccrma-www.stanford.edu/~jos/r320/.
• Fourier Transforms, DFTs, and FFTs. Another extensive text on mainly DFT and FFT with
examples and applications,
http://www.me.psu.edu/me82/Learning/FFT/FFT.html.
• Linearity
Z[a{x_n} + b{y_n}] = aZ[{x_n}] + bZ[{y_n}].
• Damping
Z[{aⁿx_n}] = X(z/a).
• Convolution
Z[{x_n} ⋆ {y_n}] = Z[{x_n}] · Z[{y_n}],
where the (discrete) convolution of two sequences is defined by
{x_n} ⋆ {y_n} = {z_n}, with z_n = Σ_{k=0}^{n} x_{n−k}y_k, n = 0, 1, 2, ....
• Differentiation
X′(z) = Z[{0, 0, −x_1, −2x_2, −3x_3, ...}].
• Forward shift
Z[{0, x_0, x_1, x_2, x_3, ...}] = z^{−1}X(z),
Z[{0, 0, x_0, x_1, x_2, x_3, ...}] = z^{−2}X(z), etc.
• Backward shift
Z[{x_1, x_2, x_3, ...}] = zX(z) − x_0z,
Z[{x_2, x_3, x_4, ...}] = z²X(z) − x_0z² − x_1z, etc.
When comparing with the formulas for the Laplace transform we see that the forward shift corresponds
to time delay and the backward shift corresponds to differentiation in the continuous case. Since the shift
6.4. THE Z-TRANSFORM 53
operations might feel a little different as compared to their continuous counterparts we prove the second
last equality:
Z[{x_1, x_2, x_3, ...}] = x_1 + x_2z^{−1} + x_3z^{−2} + ···
= x_0z + x_1 + x_2z^{−1} + x_3z^{−2} + ··· − x_0z
= zX(z) − x_0z.
We have
1/(1 − z) = 1 + z + z² + z³ + ···, |z| < 1,
and (differentiating both sides)
1/(1 − z)² = 1 + 2z + 3z² + ···,
which gives
z/(1 − z)² = z + 2z² + 3z³ + ···,
and if we substitute 1/z instead of z here we see that
Z[{r_n}] = z/(z − 1)², |z| > 1.
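A partial sum of the defining series makes the ramp formula easy to check; the test point z = 2 below is our own choice:

```python
# Partial sum of Z[{r_n}](z) = sum of n·z^{-n} versus the closed form z/(z-1)^2.
z = 2.0
series = sum(n * z**(-n) for n in range(0, 200))   # tail beyond n = 200 is negligible
assert abs(series - z / (z - 1.0)**2) < 1e-12
```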
R EMARK 15. The Z-transform is very useful for solving difference equations and for treating discrete
linear systems.
R EMARK 16. The discrete Fourier transform (DFT) that was mentioned earlier is a special case of the
Z-transform, with z = e^{−2kπi/N}.
R EMARK 17. More examples of useful transform pairs and general properties can be found in Appendix
3.
The idea of wavelets is relatively new, but it has already shown itself to be much more effective than many
other transforms, e.g. for applications in
• Signal processing, and
• Image processing.
In these cases the story begins with what is now called the mother wavelet, ψ. Typically the function ψ
has the following properties:
(*) ∫_{−∞}^{∞} ψ(t) dt = 0,
(**) ψ is well localized in both time and frequency, and in addition satisfies some further (technical) conditions.
It can then be shown that the following system
{ψ_{j,k}(t)}_{j,k=−∞}^{∞},
where
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k)
are translations, dilatations and normalizations of the original mother wavelet, is a (complete) orthogonal
basis. A signal f(t) can be reconstructed by using the usual (generalized) Fourier idea:
W^{−1} : f(t) = Σ_{j,k=−∞}^{∞} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t).
R EMARK 18. A problem with the Fourier series transform is that a signal f (t) which is well localized in
time results in an outgoing signal fˆ(ω) which is dispersed in the frequency range (e.g. the Fourier series
for the delta function δ(t) contains all frequencies) and vice versa. The advantage with the wavelet trans-
form is that you can “compromise” and obtain localization in both time and frequency simultaneously (at
least in certain cases).
R EMARK 19. In Appendix 4 we have included a motivation and illustration which makes it easier to
understand the terminology and formulas above. The motivation is obtained by a natural approximation
procedure, with the classical Haar wavelet as mother wavelet.
R EMARK 20. The transform W above corresponds to the Fourier series transform, but there also exists
a similar integral transform corresponding to the Fourier transform.
R EMARK 21. The wavelet transforms are not so useful if you have to do all calculations by hand, but
nowadays there are easily available computer programs which make them very powerful for certain
applications. The following web addresses provide information about a few such programs:
• http://www.wavelet.org (Wavelet Digest+search engine+links+...)
• http://www.finah.com/ (Many practical applications)
• http://www.tyche.math.univie.ac.at/Gabor/index.html (Gabor analysis)
• http://www.sm.luth.se/~grip/ (Licentiate and PhD thesis of Niklas Grip)
Some research groups in Sweden which are working with wavelets and applications (also industrially):
6.7. CONTINUOUS LINEAR SYSTEMS 55
(Schematic: a Problem is mapped by the L-transform to a Transformed Problem, which is easy to solve.)
When we want to solve a given problem the key to success is to choose a suitable transform for the problem
in question. In this chapter we have presented some useful transforms, but there are other examples in the
literature. In Appendix 5 we present some further transforms (mainly taken from (3)). In most cases
we have also included a formula for the inverse transform, and the corresponding useful tables are also
included.
Many linear systems, e.g. in technical applications, can be described by a linear differential equation:
(6.7.1) a_n y^{(n)}(t) + a_{n−1} y^{(n−1)}(t) + ··· + a_0 y(t) = b_k x^{(k)}(t) + b_{k−1} x^{(k−1)}(t) + ··· + b_0 x(t),
together with the initial values
y(0) = y′(0) = ··· = y^{(n−1)}(0) = 0.
Set Y (s) = L {y(t)}(s) and X(s) = L {x(t)}(s) and transform (6.7.1). Using the initial values we get:
(a_n sⁿ + a_{n−1}s^{n−1} + ··· + a_0)Y(s) = (b_k s^k + b_{k−1}s^{k−1} + ··· + b_0)X(s),
which gives
Y(s)/X(s) = (b_k s^k + b_{k−1}s^{k−1} + ··· + b_0)/(a_n sⁿ + a_{n−1}s^{n−1} + ··· + a_0).
We define the transfer function, H(s), by Y(s) = H(s)X(s), i.e.
H(s) = (b_k s^k + b_{k−1}s^{k−1} + ··· + b_0)/(a_n sⁿ + a_{n−1}s^{n−1} + ··· + a_0).
For every incoming signal (with transform X(s)) we get the corresponding solution (outgoing signal)
Y (s) = H(s)X(s), and if we invert the transform we see that
y(t) = h(t) ⋆ x(t).
How do we find H(s)?
For a unit impulse δ(t) we have
L{δ(t)} = ∫_0^∞ δ(t)e^{−st} dt = e⁰ = 1.
This implies that if we send in a unit impulse the system will respond in the following way:
y(t) = h(t) ⋆ δ(t) = h(t),
Y(s) = H(s).
In technical applications h(t) is usually called the unit impulse solution.
(Figure 6.7.2: a weight m attached to a spring; x(t) and y(t) are defined below.)
We consider the system illustrated in Figure 6.7.2, i.e. a weight m which is attached to the end
of vertically suspended spring. The weight has an equilibrium point relative to a moving reference
system (e.g. the point of attachment for the spring), and the distance from this equilibrium point is
denoted by y(t). The movement of the reference system (relative to some absolute reference system)
is denoted by x(t).
(A concrete example of such a system with a moving reference system is obtained by attaching the
spring to a wooden board and then moving that board up and down.)
It can be shown that the system can be described by the following linear differential equation:
mÿ(t) + cẏ(t) + ky(t) = cẋ(t) + ax(t).
If we apply the Laplace transform to both sides of this equation we get
(ms² + cs + k)Y(s) = (cs + a)X(s),
F IGURE 6.7.3. Graph of y(t) = (1 − (3/2)e^{−t} + (1/2)e^{−3t})θ(t).
H(z) = (b_0 + b_1(1/z) + ··· + b_k(1/z^k)) / (a_0 + a_1(1/z) + ··· + a_m(1/z^m)).
For every incoming signal (with Z-transform X(z)) we get the solution (outgoing signal)
Y (z) = H(z)X(z),
which gives
{y_n} = {h_n} ⋆ {x_n}.
How do we find H(z)?
6.9. FURTHER EXAMPLES 59
For the unit impulse sequence {δ_n} we have Z[{δ_n}] = 1 + 0·(1/z) + 0·(1/z²) + ··· = 1, which implies
that the system will respond in the following way:
{y_n} = {h_n} ⋆ {δ_n} = {h_n},
i.e.
Y (z) = H(z).
In technical applications {hn } is called the unit impulse response.
Example 6.12. A linear discrete system has the transfer function H(z) = 1/(z + 0.8). Compute the unit
step response!
Solution: The unit step sequence is {σ_n} = {1, 1, 1, ...}, and we have
X(z) = Z[{σ_n}] = z/(z − 1),
and thus we get
Y(z) = H(z)X(z) = z/((z − 1)(z + 0.8)) = (5z/9)[1/(z − 1) − 1/(z + 0.8)].
The inverse transform gives the answer, Z^{−1}[Y(z)] = {y_n}, where
y_n = (5/9)(1 − (−0.8)ⁿ).
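The answer can be checked by direct convolution in the time domain. Expanding H(z) = z^{−1}/(1 + 0.8z^{−1}) in powers of z^{−1} gives the unit impulse response h₀ = 0, h_n = (−0.8)^{n−1} for n ≥ 1; convolving with the unit step then reproduces the closed form:

```python
# Unit impulse response of H(z) = 1/(z+0.8): h_0 = 0, h_n = (-0.8)^(n-1) for n >= 1.
N = 30
h = [0.0] + [(-0.8) ** (n - 1) for n in range(1, N)]

# Step response {y_n} = {h_n} * {sigma_n}: since sigma_k = 1, y_n is a cumulative sum.
y = [sum(h[: n + 1]) for n in range(N)]

for n in range(N):
    assert abs(y[n] - (5.0 / 9.0) * (1 - (-0.8) ** n)) < 1e-12
```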
i.e.
∫_0^∞ sin(ax)/(x(1 + x²)) dx = (π/2)(1 − e^{−a}), a > 0.
Solution: We start by applying the Fourier transform (with respect to x) to u. We denote this
operation by F_x{u} = F{x ↦ u(x, y)} and we get
U = U(ω, y) = F_x{u}(ω) = ∫_{−∞}^{∞} u(x, y)e^{−iωx} dx,
with
U(ω, 0) = f̂(ω),
U(ω, y) → 0, when y → ∞.
The bounded solution of the transformed problem is
U(ω, y) = f̂(ω)e^{−|ω|y},
and the inverse Fourier transform of ω ↦ e^{−|ω|y} is
g_y(x) = (1/π) · y/(x² + y²).
By the convolution formula, u = f ⋆ g_y, so the wanted solution is
u(x, y) = (y/π) ∫_{−∞}^{∞} f(z)/((x − z)² + y²) dz, y > 0.
♦
6.10. Exercises
6.1. [S] a) Compute the inverse Laplace transform of
F(s) = e^{−2s} · 1/(s² + 8s + 15).
b) Find the unit step response of a system with the transfer function
H(s) = 3/((s + 1)(s + 3)).
6.6.*
a) Prove the convolution formula, F{f ⋆ g} = F{f}F{g}, for the Fourier transform.
b) Define f(t) = θ(t)e^{−t}, let f₁(t) = f(t) and for n ≥ 1 let f_n(t) = (f_{n−1} ⋆ f)(t). Compute
f_n(t).
6.10. Determine the sequence y(n), n ≥ 0, which has the Z-transform Y(z) = 1/(z² + 1).
6.11. [S] Let f(x) = e^{−|x|} and compute the convolution product (f ⋆ f)(x).
6.13. [S] Use the Laplace transform to solve the following system of differential equations:
x′ − 2x + 3y = 0, x(0) = 8,
y′ − y + 2x = 0, y(0) = 3.
6.14.
a) Define the Haar-scaling function ϕ and the Haar-wavelet function ψ.
b) Illustrate ψ(t − 2),ψ(4t),ψ(4t − 1),ψ(4t − 3) and 2ψ(4t − 2) in the ty-plane.
c) Explain how a signal f (t) can be represented by a system of basis functions constructed
by translating, dilating and normalizing the Haar wavelet.
6.17. [S] A discrete linear system has the unit impulse response {0.7ⁿ}. Compute the system's response
to the signal {aⁿ}, a ≠ 0.7.
6.18. Let f : R → R be a continuous function such that Σ_{n=−∞}^{∞} f̂(n) is absolutely convergent, and
such that
g(x) = Σ_{n=−∞}^{∞} f(2πn + x), x ∈ [−π, π],
defines a continuous function.
a) Show that g(x) has the period 2π.
b) Compute the Fourier series for g(x) and use this to show the following formula (the
Poisson summation formula):
Σ_{n=−∞}^{∞} f̂(n) = 2π Σ_{n=−∞}^{∞} f(2πn).
c) Use the Fourier series for g from b) to show that if f(x) = 0 for |x| ≥ π then we have
the following formula:
f(x) = (1/2π) Σ_{m=−∞}^{∞} f̂(m)e^{imx} for |x| ≤ π, and f(x) = 0 for |x| ≥ π.
6.19. Use the previous exercise to show a version of the sampling theorem. Suppose that f : R → C
has a Fourier transform and is band-limited, i.e. f̂(ω) = 0 for |ω| ≥ c. Show that f is uniquely
determined by its values at (for example) the points kπ/c, k ∈ Z, according to the following formula:
f(t) = Σ_{k=−∞}^{∞} f(kπ/c) · sin(ct − kπ)/(ct − kπ).
6.20. [S] The dispersion of smoke from a smoke pipe of height h, when the wind direction and
wind speed are constant, can be modelled by the following equation
v ∂c/∂x = d(∂²c/∂x² + ∂²c/∂z²),
where c(x, z) is the concentration of smoke at the height z, counted from the base of the pipe, and at
the distance x from the pipe in the direction of the wind; d is a diffusion coefficient and v is the wind
speed (in m/s). If we also assume that the rate of change of c in the x-direction is much smaller than
the rate of change in the z-direction we get the simplified equation
∂c/∂x = k ∂²c/∂z².
The rate of change of the concentration at ground level and infinitely high up can be viewed as
negligible, which gives us the boundary values
∂c/∂z (x, 0) = lim_{z→∞} ∂c/∂z (x, z) = 0.
The concentration of smoke can also be neglected infinitely far away in the x-direction. At the
location of the pipe the concentration is 0 except at the height h, where the smoke drifts out of it
with a flow rate q kg m⁻² s⁻¹. Thus we also get the boundary values:
c(0, z) = (q/v)δ(z − h), lim_{x→∞} c(x, z) = 0.
a) Rewrite the problem using dimensionless quantities.
a) Rewrite the problem using dimensionless quantities.
b) Use the Laplace transform to find the concentration at ground level, c(x, 0). (Hint: split into
two cases, z ≷ 1, and observe that the derivative of the Laplace transform of c is not continuous
everywhere.)
c) At which distance from the pipe is the concentration at ground level highest?
CHAPTER 7
CHAPTER 8
Integral Equations
8.1. Introduction
Integral equations appear in most applied areas and are as important as differential equations. In fact,
as we will see, many problems can be formulated (equivalently) as either a differential or an integral
equation.
Example 8.1. Examples of integral equations are:
(a) y(x) = x − ∫_0^x (x − t)y(t) dt,
(b) y(x) = f(x) + λ ∫_0^x k(x − t)y(t) dt, where f(x) and k(x) are specified functions.
♦
A general integral equation for an unknown function y(x) can be written as
f(x) = a(x)y(x) + ∫_a^b k(x,t)y(t) dt,
where f (x), a(x) and k(x,t) are given functions (the function f (x) corresponds to an external force). The
function k(x,t) is called the kernel. There are different types of integral equations. We can classify a
given equation in the following three ways.
• The equation is said to be of the First kind if the unknown function only appears under the
integral sign, i.e. if a(x) ≡ 0, and otherwise of the Second kind.
• The equation is said to be a Fredholm equation if the integration limits a and b are constants,
and a Volterra equation if a and b are functions of x.
• The equation is said to be homogeneous if f(x) ≡ 0, otherwise inhomogeneous.
Example 8.2. A Fredholm equation (Ivar Fredholm):
∫_a^b k(x,t)y(t) dt + a(x)y(x) = f(x).
♦
Example 8.3. A Volterra equation (Vito Volterra):
∫_a^x k(x,t)y(t) dt + a(x)y(x) = f(x).
♦
Example 8.4. The storekeeper's control problem.
To use the storage space optimally a storekeeper wants to keep the store's stock of goods constant. It
can be shown that to manage this there is actually an integral equation that has to be solved. Assume
that we have the following definitions:
a = number of products in stock at time t = 0,
k(t) = remainder of products in stock (in percent) at the time t,
u(t) = the rate (products/time unit) at which new products are purchased,
u(τ)∆τ = the amount of products purchased during the time interval ∆τ.
The total amount of products in stock at the time t is then given by
a k(t) + ∫_0^t k(t − τ)u(τ) dτ,
and the amount of products in stock is constant if, for some constant c₀, we have
a k(t) + ∫_0^t k(t − τ)u(τ) dτ = c₀.
To find out how fast we need to purchase new products (i.e. u(t)) to keep the stock constant we thus need
to solve the above Volterra equation of the first kind.
♦
Example 8.5. (Potential)
Let V(x, y, z) be the potential at the point (x, y, z) coming from a mass distribution ρ(ξ, η, ζ) in
Ω (see Fig. 8.1.1). Then
V(x, y, z) = −G ∫∫∫_Ω ρ(ξ, η, ζ)/r dξ dη dζ.
The inverse problem, to determine ρ from a given potential V, gives rise to an integral equation.
Furthermore ρ and V are related via Poisson's equation
∇²V = 4πGρ.
(Fig. 8.1.1: the region Ω; r denotes the distance from the point (ξ, η, ζ) ∈ Ω to the point (x, y, z).)
8.2. INTEGRAL EQUATIONS OF CONVOLUTION TYPE 69
where (k ⋆ y)(x) is the convolution product of k and y (see p. 45). The most important technique when
working with convolutions is the Laplace transform (see Sec. 6.2).
Solution: The equation is of convolution type with f(x) = x and k(x) = x. We observe that
L[x] = 1/s², and Laplace transforming the equation gives us
L[y] = 1/s² − L[x ⋆ y] = 1/s² − L[x]L[y] = 1/s² − (1/s²)L[y],
i.e.
L[y] = 1/(1 + s²),
and thus y(x) = L^{−1}[1/(1 + s²)] = sin x.
Answer: y(x) = sin x.
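The answer can also be verified directly in the original equation: inserting y(t) = sin t into y(x) = x − ∫_0^x (x − t)y(t) dt and evaluating the integral numerically should return sin x. This check (quadrature routine and test point x = 1.3 are our own choices) is sketched below:

```python
import math

def rhs(x, n=20_000):
    # x - integral of (x - t)·sin(t) over [0, x], by the trapezoid rule
    h = x / n
    total = 0.0  # the integrand (x - t)·sin(t) vanishes at both endpoints t = 0 and t = x
    for i in range(1, n):
        t = i * h
        total += (x - t) * math.sin(t)
    return x - h * total

x = 1.3
assert abs(rhs(x) - math.sin(x)) < 1e-6
```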
Answer: y(x) = L^{−1}[ L[f] / (1 − λL[k]) ].
♦
Initial value problem / Dynamical system ⇒ the Volterra equation,
Boundary value problem ⇒ the Fredholm equation.
By a famous theorem (Picard’s theorem) we know that under certain conditions on f (x, y) we have
y(x) = lim yn (x).
n→∞
R EMARK 22. Observe that y(x) = e^{x²} − 1 is the exact solution to the equation. (Show this!)
R EMARK 23. In case one can guess a general formula for y_n(x), that formula can often be verified by,
for example, induction.
L EMMA 8.1. If f(x) is continuous for x ≥ a then:
∫_a^x ∫_a^s f(y) dy ds = ∫_a^x f(y)(x − y) dy.
P ROOF. Let F(s) = ∫_a^s f(y) dy. Then we see that:
∫_a^x ∫_a^s f(y) dy ds = ∫_a^x F(s) ds = ∫_a^x 1 · F(s) ds
{integration by parts} = [sF(s)]_a^x − ∫_a^x sF′(s) ds
= xF(x) − aF(a) − ∫_a^x s f(s) ds
= x ∫_a^x f(y) dy − 0 − ∫_a^x y f(y) dy
= ∫_a^x f(y)(x − y) dy.
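As a numerical illustration of the lemma (our own sketch, with the assumed test data f(y) = e^y, a = 0, x = 2, where both sides have the closed form e^x − 1 − x):

```python
import math

a, x = 0.0, 2.0
f = math.exp
n = 4000
h = (x - a) / n

# Left side: cumulative trapezoid for F(s) = integral of f over [a, s], then trapezoid over s.
F = [0.0]
for i in range(1, n + 1):
    F.append(F[-1] + 0.5 * h * (f(a + (i - 1) * h) + f(a + i * h)))
lhs = h * (0.5 * (F[0] + F[n]) + sum(F[1:n]))

# Right side: single trapezoid integral of f(y)·(x - y).
g = [f(a + i * h) * (x - (a + i * h)) for i in range(n + 1)]
rhs = h * (0.5 * (g[0] + g[n]) + sum(g[1:n]))

assert abs(lhs - (math.exp(x) - 1 - x)) < 1e-4   # closed form for f = exp, a = 0
assert abs(lhs - rhs) < 1e-4
```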
R EMARK 24. Example 8.10 shows how an initial value problem can be transformed to an integral equa-
tion. In example 8.12 below we will show that an integral equation can be transformed to a differential
equation, but first we need a lemma.
P ROOF. Let
G(t, a, b) = ∫_a^b u(x,t) dx,
where a = a(t) and b = b(t).
8.4. THE CONNECTION BETWEEN DIFFERENTIAL AND INTEGRAL EQUATIONS (SECOND-ORDER) 73
Then
F′(t) = ∫_{√t}^{t²} x cos(xt) dx + sin(t³) · 2t − sin(t^{3/2}) · 1/(2√t).
where
k(x,t) = x(1 − t) for x ≤ t ≤ 1, and t(1 − x) for 0 ≤ t ≤ x.
I.e. we have
y(x) = λ ∫_0^x t(1 − x)y(t) dt + λ ∫_x^1 x(1 − t)y(t) dt.
Furthermore we see that y(0) = y(1) = 0. Thus the integral equation (*) is equivalent to the boundary
value problem
y″(x) + λy(x) = 0,
y(0) = y(1) = 0.
8.5. A General Technique to Solve a Fredholm Integral Equation of the Second Kind
Assume that the kernel k(x, ξ) is separable, which means that it can be written as
k(x, ξ) = Σ_{j=1}^{n} α_j(x)β_j(ξ).
If we insert this into (8.5.1) we get
y(x) = f(x) + λ Σ_{j=1}^{n} α_j(x) ∫_a^b β_j(ξ)y(ξ) dξ
(8.5.2) = f(x) + λ Σ_{j=1}^{n} c_j α_j(x), where c_j = ∫_a^b β_j(ξ)y(ξ) dξ.
Observe that y(x) as in (8.5.2) gives us a solution to (8.5.1) as soon as we know the coefficients c_j. How
can we find c_j?
Multiplying (8.5.2) with β_i(x) and integrating gives us
∫_a^b y(x)β_i(x) dx = ∫_a^b f(x)β_i(x) dx + λ Σ_{j=1}^{n} c_j ∫_a^b α_j(x)β_i(x) dx,
or equivalently (with f_i = ∫_a^b f(x)β_i(x) dx and a_{ij} = ∫_a^b α_j(x)β_i(x) dx)
c_i = f_i + λ Σ_{j=1}^{n} c_j a_{ij}.
Thus we have a linear system with n unknown variables c_1, ..., c_n, and n equations c_i = f_i + λ Σ_{j=1}^{n} c_j a_{ij},
1 ≤ i ≤ n. In matrix form we can write this as
(I − λA)c⃗ = f⃗,
where A is the n × n matrix with entries a_{ij}, f⃗ = (f_1, ..., f_n)ᵀ and c⃗ = (c_1, ..., c_n)ᵀ.
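Here is a tiny worked instance of the method (the kernel, f and λ below are our own toy choices, not an example from the notes): take k(x, ξ) = xξ on [0, 1], so n = 1, α₁(x) = x, β₁(ξ) = ξ, with f(x) = x and λ = 1.

```python
import math

# a11 = integral of x·x over [0,1] = 1/3; f1 = integral of x·x over [0,1] = 1/3.
lam = 1.0
a11 = 1.0 / 3.0
f1 = 1.0 / 3.0

# Solve the (here 1x1) system (I - lam*A)c = f.
c1 = f1 / (1.0 - lam * a11)          # c1 = 1/2
y = lambda x: x + lam * c1 * x       # y(x) = f(x) + lam*c1*alpha1(x) = 3x/2

# Check the original equation: y(x) = x + lam * x * integral of xi*y(xi) over [0,1].
integral = 0.5                       # integral of xi*(3*xi/2) d(xi) = (3/2)*(1/3) = 1/2
for x in (0.0, 0.3, 0.7, 1.0):
    assert math.isclose(y(x), x + lam * x * integral)
```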
Some well-known facts from linear algebra
Suppose that we have a linear system of equations
(*) Bx⃗ = b⃗.
Depending on whether the right hand side b⃗ is the zero vector or not we get the following alternatives.
1. If b⃗ = 0⃗ then:
a) det B ≠ 0 ⇒ x⃗ = 0⃗,
b) det B = 0 ⇒ (*) has an infinite number of solutions x⃗.
2. If b⃗ ≠ 0⃗ then:
c) det B ≠ 0 ⇒ (*) has a unique solution x⃗,
d) det B = 0 ⇒ (*) has no solution or an infinite number of solutions.
The famous Fredholm Alternative Theorem is simply a reformulation of the fact stated above to the setting
of a Fredholm equation.
As always when solving a differential or integral equation one should test the solutions by inserting them
into the equation in question. If we insert y(x) = 1 − x and y(x) = 1 − 3x in (*) we can confirm that they
are indeed solutions corresponding to λ = 2 and −2 respectively.
♦
The left hand sides are identical, so there are no solutions if ∫_0^1 x f(x) dx ≠ ∫_0^1 f(x) dx; otherwise there
are infinitely many solutions.
(iii) If the kernel k is not separable then there are infinitely many eigenvalues:
λ1 , λ2 , . . . , λn , . . . ,
with 0 < |λ1 | ≤ |λ2 | ≤ · · · and lim |λn | = ∞.
n→∞
(iv) To every eigenvalue there corresponds at most a finite number of linearly independent
eigenfunctions.
We will now describe a method for solving a Fredholm Equation of the type:
(*) y(x) = f(x) + λ ∫_a^b k(x,t)y(t) dt.
L EMMA 8.4. (Hilbert-Schmidt's Lemma) Assume that there is a continuous function g(x) such that
F(x) = ∫_a^b k(x,t)g(t) dt,
where k is symmetrical (i.e. k(x,t) = k(t,x)). Then F(x) can be expanded in a Fourier series as
F(x) = Σ_{n=1}^{∞} c_n y_n(x),
T HEOREM 8.5. (The Hilbert-Schmidt Theorem) Assume that λ is not an eigenvalue of (*) and that y(x)
is a solution to (*). Then
y(x) = f(x) + λ Σ_{n=1}^{∞} (f_n/(λ_n − λ)) y_n(x),
where λ_n and y_n(x) are eigenvalues and eigenfunctions of the corresponding homogeneous equation (i.e.
(*) with f ≡ 0) and f_n = ∫_a^b f(x)y_n(x) dx.
a
and according to the Hilbert-Schmidt Lemma (8.4) we can expand y(x) − f(x) in a Fourier series:
y(x) − f(x) = Σ_{n=1}^{∞} c_n y_n(x),
where
c_n = ∫_a^b (y(x) − f(x))y_n(x) dx = ∫_a^b y(x)y_n(x) dx − f_n.
Hence
∫_a^b y(x)y_n(x) dx = f_n + ∫_a^b (y(x) − f(x))y_n(x) dx
= f_n + λ ∫_a^b (∫_a^b k(x, ξ)y(ξ) dξ) y_n(x) dx
{k(x, ξ) = k(ξ, x)} = f_n + λ ∫_a^b (∫_a^b k(ξ, x)y_n(x) dx) y(ξ) dξ
= f_n + (λ/λ_n) ∫_a^b y_n(ξ)y(ξ) dξ.
Thus
∫_a^b y(x)y_n(x) dx = f_n/(1 − λ/λ_n) = λ_n f_n/(λ_n − λ),
and we conclude that
c_n = λ_n f_n/(λ_n − λ) − f_n = λ f_n/(λ_n − λ),
i.e. we can write y(x) as
y(x) = f(x) + λ Σ_{n=1}^{∞} (f_n/(λ_n − λ)) y_n(x).
where λ ≠ n²π², n = 1, 2, ..., and
k(x, ξ) = x(1 − ξ) for x ≤ ξ ≤ 1, and ξ(1 − x) for 0 ≤ ξ ≤ x.
Solution: From Example 8.15 we know that the normalized eigenfunctions of the homogeneous
equation
y(x) = λ ∫_0^1 k(x, ξ)y(ξ) dξ
are
y_n(x) = √2 sin(nπx),
corresponding to the eigenvalues λ_n = n²π², n = 1, 2, .... In addition we see that
f_n = ∫_0^1 f(x)y_n(x) dx = ∫_0^1 x√2 sin(nπx) dx = (−1)^{n+1}√2/(nπ),
hence
y(x) = x + (2λ/π) Σ_{n=1}^{∞} [(−1)^{n+1}/(n(n²π² − λ))] sin(nπx), λ ≠ n²π².
♦
Finally we observe that by using practically the same ideas as before we can also prove the following
theorem (cf. (5, pp. 246-247)).
8.7. HILBERT-SCHMIDT THEORY TO SOLVE A FREDHOLM EQUATION 81
T HEOREM 8.6. Let f and k be continuous functions and define the operator K acting on the function
y(x) by
Ky(x) = ∫_a^x k(x, ξ)y(ξ) dξ,
and then define positive powers of K by
K^m y(x) = K(K^{m−1}y)(x), m = 2, 3, ....
Observe that we obtain the same solution independently of the method. This is easiest seen by looking
at the Taylor expansion of the second solution. More precisely we have
e^{−√λ x} = 1 − √λ x + (√λ x)²/2 − (√λ x)³/3! + ···,
e^{√λ x} = 1 + √λ x + (√λ x)²/2 + (√λ x)³/3! + ···,
i.e.
y(x) = (1/(2√λ))(e^{√λ x} − e^{−√λ x})
= (1/(2√λ))(2√λ x + (2/3!)(√λ x)³ + (2/5!)(√λ x)⁵ + ···)
= x + λx³/3! + λ²x⁵/5! + ···.
♦
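The agreement of the two forms is easy to confirm numerically: the power series x + λx³/3! + λ²x⁵/5! + ··· sums to sinh(√λ x)/√λ. The test values λ = 0.5, x = 1 below are our own choice:

```python
import math

lam, x = 0.5, 1.0
# partial sum of the series: x + lam*x^3/3! + lam^2*x^5/5! + ...
series = sum(lam**n * x**(2 * n + 1) / math.factorial(2 * n + 1) for n in range(20))
closed = math.sinh(math.sqrt(lam) * x) / math.sqrt(lam)
assert abs(series - closed) < 1e-12
```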
8.8. Exercises
8.1. [S] Rewrite the following second order initial value problem as an integral equation:
u″(x) + p(x)u′(x) + q(x)u(x) = f(x), x > a,
u(a) = u₀, u′(a) = u₁.
8.5. [S] Let α ≥ 0 and consider the probability that a randomly chosen integer between 1 and x has
its largest prime factor ≤ x^α. As x → ∞ this probability distribution tends to a limit distribution with
the distribution function F(α), the so called Dickman function (note that F(α) = 1 for α ≥ 1). The
function F(α) is a solution of the following integral equation:
F(α) = ∫_0^α F(t/(1 − t)) (1/t) dt, 0 ≤ α ≤ 1.
Compute F(α) for 1/2 ≤ α ≤ 1.
8.9. [S] Write a Neumann series for the solution of the integral equation
u(x) = f(x) + λ ∫_0^1 u(t) dt,
and give the solution of the equation for f(x) = e^x − e/2 + 1/2 and λ = 1/2.
8.10. Solve the following integral equation:
a) y(x) = x² + ∫_0^1 (1 − 3xξ)y(ξ) dξ.
a) Show that for f(x) ≡ 0 the equation has only the trivial solution in C²[0, 1].
b) Give a function f (x) such that the equation has a non-trivial solution for all values of λ
and compute this solution.
8.14.* The current in an LRC-circuit with L = 3, R = 2, C = 0.2 (SI units), where we apply a
voltage at the time t = 3, satisfies the following integral equation:
I(t) = 6θ(t − 1)(t − 1) + 2t + 3 − ∫_0^t (2 + 5(t − y))I(y) dy.
Determine I(t) using the Laplace transform.
8.15. [S] Consider (again) the storekeeper's control problem (Example 8.4). Assume that the number of
products in stock at the time t = 0 is a and that the products are sold at a constant rate such that all
products are sold out in T (time units). Now let u(t) be the rate (products/time unit) at which we
have to purchase new products in order to keep a constant number a of products in stock.
a) Write the integral equation which is satisfied by u(t).
b) Solve the equation from a) and find u(t). (Answer: u(t) = (a/T)e^{t/T}.)
T
8.16.*
a) Write the integral equation
(*) y(x) = λ ∫_0^1 k(x, ξ)y(ξ) dξ,
where
k(x, ξ) = x(1 − ξ) for x ≤ ξ ≤ 1, and ξ(1 − x) for 0 ≤ ξ ≤ x,
as a boundary value problem.
b) Find the eigenvalues and the normalized eigenfunctions of the problem in a).
Solve the equation
y(x) = f(x) + λ ∫_0^1 k(x, ξ)y(ξ) dξ,
where k(x, ξ) is as in a) and λ ≠ n²π², for
c) f(x) = sin(πkx), k ∈ Z, and
d) f(x) = x².
APPENDIX A
Appendices
Transform pairs
Unit step: σ_n ↔ z/(z − 1)
Unit pulse: δ_n ↔ 1
Delayed unit pulse: δ_{n−k} ↔ z^{−k}
Exponential: aⁿ ↔ z/(z − a)
Ramp function: r_n = nσ_n ↔ z/(z − 1)²
Sine: sin nθ ↔ z sin θ/(z² − 2z cos θ + 1)
Damped sine: aⁿ sin nθ ↔ za sin θ/(z² − 2za cos θ + a²)
Cosine: cos nθ ↔ z(z − cos θ)/(z² − 2z cos θ + 1)
Damped cosine: aⁿ cos nθ ↔ z(z − a cos θ)/(z² − 2za cos θ + a²)
A-4. THE HAAR WAVELET 93
1. The Haar Wavelet. The mother wavelet ψ and scaling function ϕ are in this case very simple
functions that take the values 0, 1 and −1, and 0 and 1, respectively (see Fig. 1.4.1):
ψ(t) = 1 for 0 ≤ t ≤ 1/2, −1 for 1/2 < t ≤ 1, and 0 otherwise;
ϕ(t) = 1 for 0 ≤ t < 1, and 0 otherwise.
The different operations performed on the mother wavelet to construct a basis are illustrated in Figs. 1.4.2,
1.4.3, 1.4.4 and 1.4.5.
(Figs. 1.4.2–1.4.3: graphs of the translations y = ϕ(t − 1) and y = ϕ(t − k).)
(Figure 1.4.4: dilatations and translations — graphs of y = ϕ(2²t), y = ϕ(2^k t), y = ϕ(2²t − 1),
y = ϕ(2²t − 3) and y = 2ϕ(2²t − 3).)
2. An Approximation Example. We will now see how to approximate a function by step functions.
Observe that in the figures that illustrate the different cases we have used the function f (t) = t 2 .
a) Approximation by the mean value (see Fig. 1.4.6):
f(t) ≈ A₀(t) = (∫_0^1 f(s) ds) ϕ(t).
(Fig. 1.4.6: y = f(t) and the mean-value approximation y = A₀(t).)
f(t) ≈ A₁(t) = 2 ∫_0^{1/2} f(s) ds ϕ(2t) + 2 ∫_{1/2}^{1} f(s) ds ϕ(2t − 1)
= (∫_0^1 f(s)√2 ϕ(2s) ds) √2 ϕ(2t) + (∫_0^1 f(s)√2 ϕ(2s − 1) ds) √2 ϕ(2t − 1)
= a₀ϕ₀(t) + a₁ϕ₁(t).
(Figure: y = f(t) and y = A₁(t).)
f(t) ≈ A₂(t)
= 4 ∫_0^{1/4} f(s) ds ϕ(4t) + 4 ∫_{1/4}^{1/2} f(s) ds ϕ(4t − 1) + 4 ∫_{1/2}^{3/4} f(s) ds ϕ(4t − 2) +
4 ∫_{3/4}^{1} f(s) ds ϕ(4t − 3)
= (∫_0^1 f(s)2ϕ(4s) ds) 2ϕ(4t) + (∫_0^1 f(s)2ϕ(4s − 1) ds) 2ϕ(4t − 1) +
(∫_0^1 f(s)2ϕ(4s − 2) ds) 2ϕ(4t − 2) + (∫_0^1 f(s)2ϕ(4s − 3) ds) 2ϕ(4t − 3)
= a₀ϕ₀(t) + a₁ϕ₁(t) + a₂ϕ₂(t) + a₃ϕ₃(t).
(Figure: y = f(t) and y = A₂(t).)
f(t) ≈ A_n(t) = Σ_{k=0}^{2ⁿ−1} a_k ϕ_k(t),
where the “Fourier coefficients” are
a_k = ∫_0^1 f(s) 2^{n/2} ϕ(2ⁿs − k) ds,
and the “basis functions” are
ϕ_k(t) = 2^{n/2} ϕ(2ⁿt − k).
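Since ϕ is the Haar scaling function, A_n(t) is simply the mean of f over the dyadic interval of length 2^{−n} containing t. A small sketch (our own, using the document's example f(t) = t², for which the means have a closed form):

```python
def A(n, t):
    # Haar approximation of f(s) = s^2 at level n: mean of f over the dyadic
    # interval [k/2^n, (k+1)/2^n) containing t
    if t >= 1.0:
        t = 1.0 - 1e-12
    k = int(t * 2**n)
    lo, hi = k / 2**n, (k + 1) / 2**n
    return (hi**3 - lo**3) / 3.0 / (hi - lo)   # exact mean of s^2 over [lo, hi]

t = 0.3
errs = [abs(A(n, t) - t * t) for n in (2, 4, 6, 8)]
assert errs == sorted(errs, reverse=True)      # the error shrinks as n grows
assert errs[-1] < 1e-3
```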
3. Approximation by wavelets. The basic idea is that we can write f (t) as:
f (t) ≈ An (t) = (An (t) − An−1 (t)) + (An−1 (t) − An−2 (t)) + . . .
+ (A2 (t) − A1 (t)) + (A1 (t) − A0 (t)) + A0 (t).
E.g. for n = 2 we have
f (t) ≈ A2 (t) = (A2 (t) − A1 (t)) + (A1 (t) − A0 (t)) + A0 (t),
where
A₁(t) − A₀(t) = 2 ∫_0^1 f(s)ϕ(2s) ds ϕ(2t) + 2 ∫_0^1 f(s)ϕ(2s − 1) ds ϕ(2t − 1) −
∫_0^1 f(s)ϕ(s) ds ϕ(t) = [ϕ(t) = ϕ(2t) + ϕ(2t − 1)]
= ∫_0^1 f(s)(ϕ(2s) − ϕ(2s − 1)) ds ϕ(2t) − ∫_0^1 f(s)(ϕ(2s) − ϕ(2s − 1)) ds ϕ(2t − 1)
= ∫_0^1 f(s)ψ(s) ds ψ(t),
where ψ(t) is the Haar wavelet as defined on p. 93. Similarly one can also show that
A₂(t) − A₁(t) = (∫_0^1 f(s)√2 ψ(2s) ds) √2 ψ(2t) + (∫_0^1 f(s)√2 ψ(2s − 1) ds) √2 ψ(2t − 1).
By continuing in this manner we find that f (t) can be approximated by An (t), which can be expressed as
A_n(t) = A₀(t) + Σ_{j,k=0}^{n} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t),
where
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k),
and
⟨f, ψ_{j,k}⟩ = ∫_0^1 f(s)ψ_{j,k}(s) ds.
We present here some additional examples of transforms. For more information and applications cf.,
e.g., L. Debnath, Integral Transforms and Their Applications, (3).
Here P_n^{α,β}(x) is the Jacobi polynomial of degree n and order (α, β), which can be written
explicitly as
P_n^{α,β}(x) = 2^{−n} Σ_{k=0}^{n} (n+α choose k)(n+β choose n−k) (x − 1)^{n−k}(x + 1)^k, n = 0, 1, ...,
and the “Fourier coefficients” are $a_n = (\delta_n)^{-1}\tilde f_{\alpha,\beta}(n)$, where
$$\delta_n = \frac{2^{\alpha+\beta+1}\,\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{n!\,(\alpha+\beta+2n+1)\,\Gamma(n+\alpha+\beta+1)}.$$
10 The Laguerre Transform
$$L: f(x) \to \{\tilde f_\alpha(n)\}, \qquad \tilde f_\alpha(n) = \int_0^\infty e^{-x} x^\alpha L_n^\alpha(x) f(x)\,dx,$$
$$L^{-1}: f(x) = \sum_{n=0}^\infty (\delta_n)^{-1}\,\tilde f_\alpha(n)\,L_n^\alpha(x).$$
Here $L_n^\alpha(x)$ is the Laguerre polynomial of degree $n \ge 0$ and order $\alpha > -1$, and the “Fourier coefficients” are $a_n = (\delta_n)^{-1}\tilde f_\alpha(n)$, where
$$\delta_n = \frac{\Gamma(n+\alpha+1)}{n!}.$$
11 The Hermite Transform
$$H: f(x) \to \{f_H(n)\}, \qquad f_H(n) = \int_{-\infty}^{\infty} e^{-x^2} H_n(x) f(x)\,dx,$$
$$H^{-1}: f(x) = \sum_{n=0}^{\infty} \delta_n^{-1}\,f_H(n)\,H_n(x).$$
Here $H_n(x)$ is the Hermite polynomial of degree $n$, and the “Fourier coefficients” are $a_n = \delta_n^{-1} f_H(n)$, where
$$\delta_n = n!\,2^n\sqrt{\pi}.$$
Remark 26. Observe that the transforms 8-11 are special cases of the earlier theory for generalized Fourier series (cf. Def. 6.1).
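As a numerical sanity check of transform 11, the coefficients $a_n = \delta_n^{-1} f_H(n)$ can be computed with Gauss-Hermite quadrature, whose weight function is exactly $e^{-x^2}$. A minimal sketch in Python (NumPy's physicists' Hermite polynomials assumed; the test function and quadrature order are illustrative):

```python
import math
import numpy as np
from numpy.polynomial import hermite as H

def hermite_transform(f, n_max, quad_order=40):
    """Coefficients a_n = delta_n^{-1} f_H(n), where
    f_H(n) = int e^{-x^2} H_n(x) f(x) dx is computed by Gauss-Hermite
    quadrature (the weight e^{-x^2} is built into the rule)."""
    x, w = H.hermgauss(quad_order)
    coeffs = []
    for n in range(n_max + 1):
        Hn = H.hermval(x, [0] * n + [1])   # physicists' Hermite H_n
        f_H = np.sum(w * Hn * f(x))
        delta_n = math.factorial(n) * 2 ** n * math.sqrt(math.pi)
        coeffs.append(f_H / delta_n)
    return coeffs

# f(x) = 1 + 2x + x^2 = (3/2) H_0 + H_1 + (1/4) H_2
a = hermite_transform(lambda x: 1 + 2 * x + x ** 2, 4)
# a is approximately [1.5, 1.0, 0.25, 0.0, 0.0]
```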
A-6. PARTIAL FRACTION DECOMPOSITIONS

It is quite common, especially when dealing with the Laplace or Z transform, that one wants to apply the inverse transform to a rational function
$$\frac{P(s)}{Q(s)}.$$
If none of the standard rules applies directly, the standard approach is first to perform polynomial division if the degree of $P$ is greater than or equal to the degree of $Q$. After this step it is usually best to make a partial fraction decomposition.
Suppose now that $\deg P < \deg Q$. We know that the polynomial $Q$ can be factored (over $\mathbb{R}$) into linear factors $(s-a)$ and quadratic factors $(s-a)^2 + b^2$. Recall that a partial fraction decomposition is of the form
$$\frac{P(s)}{Q(s)} = \frac{p_1}{q_1} + \cdots + \frac{p_M}{q_M},$$
where $p_1, \ldots, p_M$ are constants or linear polynomials, and $q_1, \ldots, q_M$ consist of the linear and quadratic factors of $Q$ (with all multiplicities). The following two general rules apply:
• A linear factor $(s-a)$ of multiplicity $n$ contributes with
$$\frac{A_1}{s-a} + \frac{A_2}{(s-a)^2} + \cdots + \frac{A_n}{(s-a)^n}.$$
• A quadratic factor $(s-a)^2 + b^2$ of multiplicity $n$ contributes with
$$\frac{A_1 s + B_1}{(s-a)^2+b^2} + \frac{A_2 s + B_2}{\left((s-a)^2+b^2\right)^2} + \cdots + \frac{A_n s + B_n}{\left((s-a)^2+b^2\right)^n}.$$
The coefficients of the polynomials p j are usually computed by putting the right hand side on a common
denominator and comparing the resulting coefficients with P(s).
Example 1.1. We consider the rational function
$$\frac{P(s)}{Q(s)} = \frac{3s^2+1}{s(s^2+1)(s-1)^2}.$$
The factors of $Q$ are the linear factors $s$ and $(s-1)$ (the latter of multiplicity 2) and the quadratic factor $(s^2+1)$.
Hence the partial fraction decomposition is
$$\frac{P(s)}{Q(s)} = \frac{3s^2+1}{s(s^2+1)(s-1)^2} = \frac{A}{s} + \frac{B}{s-1} + \frac{C}{(s-1)^2} + \frac{Ds+E}{s^2+1},$$
and if we put the right hand side on a common denominator we get
$$\frac{3s^2+1}{s(s^2+1)(s-1)^2} = \frac{A(s-1)^2(s^2+1) + Bs(s-1)(s^2+1) + Cs(s^2+1) + (Ds+E)s(s-1)^2}{s(s^2+1)(s-1)^2},$$
and hence
$$3s^2+1 = A(s-1)^2(s^2+1) + Bs(s-1)(s^2+1) + Cs(s^2+1) + (Ds+E)s(s-1)^2.$$
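The comparison of coefficients can also be delegated to a computer algebra system. A minimal sketch with SymPy (assumed available):

```python
import sympy as sp

s = sp.symbols('s')
expr = (3 * s ** 2 + 1) / (s * (s ** 2 + 1) * (s - 1) ** 2)

# Partial fraction decomposition over the reals
decomp = sp.apart(expr, s)
```

Here `sp.apart` reproduces the decomposition with $A = 1$, $B = -1$, $C = 2$, $D = 0$, $E = -1$.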
4.5. a) $S(t) = \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos(nt)$. b) Use $f(0) = 0 = S(0)$.
4.7. a) We get the equation $u'_t = u''_{xx}$, $0 < x < 1$, $t > 0$, the initial value $u(x,0) = 1$, and the boundary values $u(0,t) = u(1,t) = 0$.
b) $u(x,t) = \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{1}{2k+1}\,e^{-\pi^2(2k+1)^2 t}\sin(\pi(2k+1)x)$.
4.9. $u(x,t) = \frac{1}{2}\left(1 - e^{-4t}\cos 2x\right)$.
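The answer to 4.9 can be verified symbolically, assuming the underlying equation is the heat equation $u'_t = u''_{xx}$ (as in exercise 4.7); a sketch with SymPy:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Rational(1, 2) * (1 - sp.exp(-4 * t) * sp.cos(2 * x))

# u should satisfy u_t = u_xx, with u(x, 0) = sin^2 x
residual = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))
initial = sp.simplify(u.subs(t, 0) - sp.sin(x) ** 2)
```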
5.1. (a) The eigenvalues are $\lambda_n = \frac{1}{4} + \frac{n^2\pi^2}{L^2}$ and the eigenfunctions are $u_n(x) = \frac{A_n}{\sqrt{x}}\cos\left(\frac{n\pi}{L}\ln x\right)$.
(b) The eigenvalues are $\lambda_n = p_n^2 + \frac{1}{4}$, where $p_n$ are solutions of $\tan p_n = 2p_n$, and the eigenfunctions are $u_n(x) = A_n\frac{1}{\sqrt{x}}\sin(p_n\ln x)$.
5.3. $u(x,t) = \sum_{k=0}^{\infty} a_k\cos\left(\frac{\pi k}{l}x\right)e^{-\pi^2 k^2 t/l^2}$ with $a_k = \frac{2}{l}\int_0^l f(x)\cos\left(\frac{\pi k}{l}x\right)dx$.
5.6. a) The reason that $m$ must be an integer is the periodicity: $\Theta(\theta + 2\pi) = \Theta(\theta)$.
b) $\psi(r,\theta) = -v_0\left(r + \frac{a^2}{r}\right)\cos\theta$, and $\vec{v} = (v_r, v_\theta)$ with $v_r = v_0\left(1 - \frac{a^2}{r^2}\right)\cos\theta$ and $v_\theta = v_0\left(1 + \frac{a^2}{r^2}\right)\sin\theta$.
6.1. a) $f(t) = \left(\frac{1}{2}e^{-3(t-2)} - \frac{1}{2}e^{-5(t-2)}\right)\theta(t-2)$.
b) $y(t) = 1 - \frac{3}{2}e^{-t} + \frac{1}{2}e^{-3t}$.
6.3. a) $y(t) = \frac{1}{2}\left(1 - e^{-t}(\cos t + \sin t)\right)$. b) $y(t) = \frac{1}{2}\left(e^{-t}\cos t - 1 + t\right)$.
6.5. $\hat{f}(\omega) = \frac{1}{1+i\omega}\,e^{-3i\omega}$.
6.7. $F(\omega) = i\left(\frac{\sin((\omega+\omega_0)a)}{\omega+\omega_0} - \frac{\sin((\omega-\omega_0)a)}{\omega-\omega_0}\right)$.
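The answer to 6.7 can be checked numerically. The sketch below assumes the transformed function is $f(t) = \sin(\omega_0 t)$ for $|t| \le a$ (zero otherwise) and the convention $F(\omega) = \int f(t)e^{-i\omega t}\,dt$ (both are assumptions, since the problem statement is not reproduced here):

```python
import cmath
import math

def F_formula(w, w0, a):
    """Closed form from the answer to 6.7."""
    return 1j * (math.sin((w + w0) * a) / (w + w0)
                 - math.sin((w - w0) * a) / (w - w0))

def F_numeric(w, w0, a, N=20001):
    """Simpson's rule for int_{-a}^{a} sin(w0 t) e^{-i w t} dt."""
    h = 2 * a / (N - 1)
    total = 0j
    for i in range(N):
        t = -a + i * h
        weight = 1 if i in (0, N - 1) else (4 if i % 2 == 1 else 2)
        total += weight * math.sin(w0 * t) * cmath.exp(-1j * w * t)
    return total * h / 3
```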
6.9. $y(n) = 2^n + (-1)^n$, $n \ge 1$.
6.17. $y_n = \frac{10}{10a-7}\left(a^{n+1} - 0.7^{n+1}\right)\sigma_n$.
6.20. a) Introduce the dimensionless variables $\bar{z} = \frac{z}{h}$, $\bar{x} = \frac{dx}{vh^2}$ and $\bar{c} = \frac{cv}{q}$; the equation now becomes
$$\frac{\partial \bar{c}}{\partial \bar{x}} = \frac{\partial^2 \bar{c}}{\partial \bar{z}^2},$$
with the boundary conditions
$$\left.\frac{\partial \bar{c}}{\partial \bar{z}}\right|_{\bar{z}=0} = \left.\frac{\partial \bar{c}}{\partial \bar{z}}\right|_{\bar{z}\to\infty} = 0,$$
and
$$\bar{c}\,|_{\bar{x}=0} = \delta(\bar{z}-1), \qquad \bar{c}\,|_{\bar{x}\to\infty} = 0.$$
b) $\bar{c}(\bar{x},0) = \frac{1}{\sqrt{\pi\bar{x}}}\,e^{-1/(4\bar{x})}$, which gives $c(x,0) = \frac{qh}{\sqrt{\pi dvx}}\,e^{-vh^2/(4dx)}$ kg m$^{-3}$.
c) The maximum is attained at $\bar{x} = \frac{1}{2}$, i.e. $x = \frac{vh^2}{2d}$ m.
8.1. We get a Volterra equation of the form $u(x) = F(x) + \int_a^x k(x,y)\,u(y)\,dy$ with $F(x) = \int_a^x (x-y)f(y)\,dy + (p(a)u_0 + u_1)(x-a) + u_0$ and $k(x,y) = p(y) + (x-y)\left(p'(y) - q(y)\right)$.
8.3. The integral equation is $y(x) = -\omega^2\int_0^x (x-t)\,y(t)\,dt + 1$, and the solutions are $y(x) = \cos\omega x$; if $y(1) = 0$ we must have $\omega = 2\pi n$, and we thus get the eigenfunctions $y_n(x) = \cos(2\pi n x)$, $n \in \mathbb{Z}$.
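That $y(x) = \cos\omega x$ solves the integral equation in 8.3 can be confirmed with SymPy:

```python
import sympy as sp

x, t, w = sp.symbols('x t omega', positive=True)

# y(x) = cos(omega x) should satisfy y(x) = 1 - omega^2 int_0^x (x - t) y(t) dt
y = sp.cos(w * x)
rhs = 1 - w ** 2 * sp.integrate((x - t) * sp.cos(w * t), (t, 0, x))
residual = sp.simplify(rhs - y)
```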
8.5. For $\frac{1}{2} \le \alpha \le 1$ we have: $F(\alpha) = \int_0^1 F\!\left(\frac{t}{1-t}\right)dt = 1 - \int_\alpha^1 \frac{1}{t}\,dt = 1 + \ln\alpha$.
8.7. y = 1 − x
8.9. $u(x) = f(x) + \sum_{n=1}^{\infty}\lambda^n f_1$, where $f_1 = \int_0^1 f(t)\,dt$. For the given function $f(x)$ and $\lambda = \frac{1}{2}$ we get
$$u(x) = e^x - \frac{e}{2} + \frac{1}{2} + \sum_{n=1}^{\infty}\frac{1}{2^n}\cdot\frac{e-1}{2} = e^x.$$
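The series evaluation in 8.9 can be checked numerically. Here $f(x) = e^x - \frac{e}{2} + \frac{1}{2}$ is read off from the displayed answer, and the kernel $K \equiv 1$ (so that the equation is $u(x) = f(x) + \lambda\int_0^1 u(t)\,dt$) is an assumption consistent with the form of the solution:

```python
import math

e = math.e
lam = 0.5
f = lambda x: math.exp(x) - e / 2 + 0.5   # assumed f(x), read off from the answer
f1 = (e - 1) - e / 2 + 0.5                # int_0^1 f(t) dt = (e - 1)/2

# u(x) = f(x) + sum_{n>=1} lam^n f1; for lam = 1/2 the geometric series sums to f1
u = lambda x: f(x) + sum(lam ** n * f1 for n in range(1, 60))
```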
8.11. a) $u(x) \equiv 0$. b) $u(x) = \sin x$. c) $u(x) = \sin 2x + \frac{\lambda\pi}{2}\sin x$.
8.13. The eigenvalues are $\lambda_n = -\left(\frac{2\pi n}{a}\right)^2$ and the eigenfunctions are $u_n(x) = \cos\left(\frac{2\pi n}{a}x\right)$, for $n \in \mathbb{Z}$.
8.15. a) With $k(t) = 1 - \frac{t}{T}$ for $0 \le t \le T$ we get the Volterra equation
$$a\,k(t) + \int_0^t k(t-y)\,u(y)\,dy, \qquad 0 \le t \le T,$$
which can be solved either by rewriting it as a differential equation or by using the Laplace transform.
8.17. The eigenvalues are $\lambda = \pm\frac{1}{\pi}$, and if we define $f_1 = \int_0^{2\pi} f(t)\cos t\,dt$ and $f_2 = \int_0^{2\pi} f(t)\sin t\,dt$ we get the different cases:
For $\lambda = \frac{1}{\pi}$ there are solutions if $f(x)$ is odd (or $\equiv 0$), and these are then given by
$$u(x) = f(x) - \frac{f_2}{2\pi}\sin x + c\cos x$$
for some constant $c$.
For $\lambda = -\frac{1}{\pi}$ there are solutions if $f(x)$ is even (or $\equiv 0$), and these are then given by
$$u(x) = f(x) + \frac{f_1}{2\pi}\cos x + d\sin x,$$
where $d$ is a constant.
For $\lambda \ne \pm\frac{1}{\pi}$ we have the solutions
$$u(x) = f(x) + \frac{\lambda f_1}{1-\lambda\pi}\cos x - \frac{\lambda f_2}{1+\lambda\pi}\sin x.$$
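The general formula in 8.17 can be verified for a concrete case. The kernel is not reproduced here; $K(x,t) = \cos(x+t)$ on $[0, 2\pi]$ is an assumption consistent with the eigenvalues $\pm\frac{1}{\pi}$. With $f(x) = \cos x$ and $\lambda = \frac{1}{2}$:

```python
import sympy as sp

x, t = sp.symbols('x t')
lam = sp.Rational(1, 2)                    # any lambda != +-1/pi
f = sp.cos(x)

f1 = sp.integrate(f.subs(x, t) * sp.cos(t), (t, 0, 2 * sp.pi))   # = pi
f2 = sp.integrate(f.subs(x, t) * sp.sin(t), (t, 0, 2 * sp.pi))   # = 0

# Candidate solution from the general formula
u = (f + lam * f1 / (1 - lam * sp.pi) * sp.cos(x)
       - lam * f2 / (1 + lam * sp.pi) * sp.sin(x))

# Substitute back into u(x) = f(x) + lam * int_0^{2pi} cos(x + t) u(t) dt
rhs = f + lam * sp.integrate(sp.cos(x + t) * u.subs(x, t), (t, 0, 2 * sp.pi))
residual = sp.simplify(rhs - u)
```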
8.19. $u(t) = \cos t$.