
# Math 20

## 1 Eigenvalues and Eigenvectors

1. Definition: A scalar λ is called an eigenvalue of the n × n matrix A if there is a nontrivial solution
x of Ax = λx. Such an x is called an eigenvector corresponding to the eigenvalue λ.
2. What does this mean geometrically? Suppose that A is the standard matrix for a linear transformation
T : Rn → Rn . Then if Ax = λx, it follows that T (x) = λx. This means that if x is an eigenvector of
A, then the image of x under the transformation T is a scalar multiple of x – and the scalar involved
is the corresponding eigenvalue λ. In other words, the image of x is parallel to x.

3. Note that an eigenvector cannot be 0, but an eigenvalue can be 0.

4. Suppose that 0 is an eigenvalue of A. What does that say about A? There must be some nontrivial
vector x for which
Ax = 0x = 0
which implies that A is not invertible which implies a whole lot of things given our Invertible Matrix
Theorem.
5. Invertible Matrix Theorem Again: The n × n matrix A is invertible if and only if 0 is not an
eigenvalue of A.
6. Definition: The eigenspace of the n × n matrix A corresponding to the eigenvalue λ of A is the set of
all eigenvectors of A corresponding to λ, together with the zero vector.
7. We’re not used to analyzing equations like Ax = λx where the unknown vector x appears on both
sides of the equation. Let’s find an equivalent equation in standard form.

Ax = λx
Ax − λx = 0
Ax − λIx = 0
(A − λI)x = 0

8. Thus x is an eigenvector of A corresponding to the eigenvalue λ if and only if x and λ satisfy (A−λI)x =
0.

9. It follows that the eigenspace of λ is the null space of the matrix A − λI and hence is a subspace of
Rn .
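Item 9 is easy to sanity-check numerically. Here is a minimal sketch in Python (the matrix A, the eigenvalue 3, and the vector (1, 1) are made up for illustration): it verifies that x lies in the null space of A − λI.

```python
# Made-up 2x2 example: x = (1, 1) should be an eigenvector of A for lambda = 3,
# i.e. x should lie in the null space of A - 3I.
A = [[2.0, 1.0],
     [0.0, 3.0]]
lam = 3.0
x = [1.0, 1.0]

# Form (A - lambda*I) x and confirm it is the zero vector.
residual = [A[0][0]*x[0] + A[0][1]*x[1] - lam*x[0],
            A[1][0]*x[0] + A[1][1]*x[1] - lam*x[1]]
print(residual)  # [0.0, 0.0]
```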
10. Later in Chapter 5, we will find out that it is useful to find a set of linearly independent eigenvectors
for a given matrix. The following theorem provides one way of doing so. See page 307 for a proof of
this theorem.
11. Theorem 2: If v1 , . . . , vr are eigenvectors that correspond to distinct eigenvalues λ1 , . . . , λr of an
n × n matrix A, then the set {v1 , . . . , vr } is linearly independent.
## 2 Determinants
1. Recall that if λ is an eigenvalue of the n × n matrix A, then there is a nontrivial solution x to the
equation
Ax = λx
or, equivalently, to the equation
(A − λI)x = 0.
(We call this nontrivial solution x an eigenvector corresponding to λ.)
2. Note that this second equation has a nontrivial solution if and only if the matrix A−λI is not invertible.
Why? If the matrix is not invertible, then it does not have a pivot position in each column (by the
Invertible Matrix Theorem) which implies that the homogeneous system has at least one free variable
which implies that the homogeneous system has a nontrivial solution. Conversely, if the matrix is
invertible, then the only solution is the trivial solution.
3. To find the eigenvalues of A we need a condition on λ that is equivalent to the equation (A − λI)x = 0
having a nontrivial solution. This is where determinants come in.
4. We skipped Chapter 3, which is all about determinants, so here's a recap of just what we need to know.
5. Formula: The determinant of the 2 × 2 matrix

       A = [ a  b ]
           [ c  d ]

   is

       det A = ad − bc.
6. Formula: The determinant of the 3 × 3 matrix

       A = [ a11  a12  a13 ]
           [ a21  a22  a23 ]
           [ a31  a32  a33 ]

   is

       det A = a11 a22 a33 + a12 a23 a31 + a13 a21 a32
             − a31 a22 a13 − a32 a23 a11 − a33 a21 a12.

   See page 191 for a useful way of remembering this formula.
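Both formulas translate directly into code. A small sketch (the example matrices are chosen arbitrarily):

```python
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]]
    return a*d - b*c

def det3(m):
    # Determinant of a 3x3 matrix m, using the six-term formula above
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1] - m[2][0]*m[1][1]*m[0][2]
            - m[2][1]*m[1][2]*m[0][0] - m[2][2]*m[1][0]*m[0][1])

print(det2(1, 2, 3, 4))                          # 1*4 - 2*3 = -2
print(det3([[2, 0, 0], [0, 3, 0], [0, 0, 5]]))   # 2*3*5 = 30
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))   # rows are dependent, so 0
```

The last line illustrates the theorem in item 7: a matrix with linearly dependent rows is not invertible, and its determinant is 0.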

7. Theorem: The determinant of an n × n matrix A is 0 if and only if the matrix A is not invertible.
8. That’s useful! We’re looking for values of λ for which the equation (A − λI)x = 0 has a nontrivial
solution. This happens if and only if the matrix A − λI is not invertible. This happens if and only if
the determinant of A − λI is 0. This leads us to the characteristic equation of A.

## 3 The Characteristic Equation

1. Theorem: A scalar λ is an eigenvalue of an n × n matrix A if and only if λ satisfies the characteristic
equation
det(A − λI) = 0.

2. It can be shown that if A is an n × n matrix, then det(A − λI) is a polynomial in the variable λ of
degree n. We call this polynomial the characteristic polynomial of A.
3. Example: Consider the matrix

       A = [ 3  6  −8 ]
           [ 0  0   6 ]
           [ 0  0   2 ]

   To find the eigenvalues of A, we must compute det(A − λI), set this expression equal to 0, and solve for λ. Note that

       A − λI = [ 3  6  −8 ]   [ λ  0  0 ]   [ 3−λ   6    −8  ]
                [ 0  0   6 ] − [ 0  λ  0 ] = [  0   −λ     6  ]
                [ 0  0   2 ]   [ 0  0  λ ]   [  0    0   2−λ  ]

Since this is a 3 × 3 matrix, we can use the formula given above to find its determinant.

       det(A − λI) = (3 − λ)(−λ)(2 − λ) + (6)(6)(0) + (−8)(0)(0)
                   − (0)(−λ)(−8) − (0)(6)(3 − λ) − (2 − λ)(0)(6)
                   = −λ(3 − λ)(2 − λ)

Setting this equal to 0 and solving for λ, we get that λ = 0, 2, or 3. These are the three eigenvalues of
A.
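We can confirm this computation by evaluating det(A − λI) at a few values of λ, reusing the 3 × 3 determinant formula from Section 2 (a quick numerical check, not part of the original notes):

```python
def det3(m):
    # Six-term formula for a 3x3 determinant
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0]
            + m[0][2]*m[1][0]*m[2][1] - m[2][0]*m[1][1]*m[0][2]
            - m[2][1]*m[1][2]*m[0][0] - m[2][2]*m[1][0]*m[0][1])

A = [[3, 6, -8], [0, 0, 6], [0, 0, 2]]

def char_poly(lam):
    # det(A - lam*I)
    m = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(m)

print([char_poly(lam) for lam in (0, 2, 3)])  # [0, 0, 0]: these are the eigenvalues
print(char_poly(1))                           # -(1)(3-1)(2-1) = -2, so 1 is not one
```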
4. Note that A is a triangular matrix. (A triangular matrix has the property that either all of its entries
below the main diagonal are 0 or all of its entries above the main diagonal are 0.) It turned out that
the eigenvalues of A were the entries on the main diagonal of A. This is true for any triangular matrix,
but is generally not true for matrices that are not triangular.
5. Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal.
6. In the above example, the characteristic polynomial turned out to be −λ(λ − 3)(λ − 2). Each of the
factors λ, λ − 3, and λ − 2 appeared precisely once in this factorization. Suppose the characteristic
polynomial had turned out to be −λ(λ − 3)². In this case, the factor λ − 3 would appear twice, and so we
would say that the corresponding eigenvalue, 3, has multiplicity 2.
7. Definition: In general, the multiplicity of an eigenvalue μ is the number of times the factor λ − μ
appears in the characteristic polynomial.

## 4 Finding Eigenvectors

1. Example (Continued): Let us now find the eigenvectors of the matrix

       A = [ 3  6  −8 ]
           [ 0  0   6 ]
           [ 0  0   2 ]

   We have to take each of its three eigenvalues 0, 2, and 3 in turn.
2. To find the eigenvectors corresponding to the eigenvalue 0, we need to solve the equation (A − λI)x = 0 where λ = 0. That is, we need to solve

       (A − λI)x = 0
       (A − 0I)x = 0
              Ax = 0

       [ 3  6  −8 ]
       [ 0  0   6 ] x = 0
       [ 0  0   2 ]

   Row reducing the augmented matrix, we find that

       x = [ x1 ]      [ −2 ]
           [ x2 ] = x2 [  1 ]
           [ x3 ]      [  0 ]

   This tells us that the eigenvectors corresponding to the eigenvalue 0 are precisely the nonzero scalar multiples of the vector (−2, 1, 0). In other words, the eigenspace corresponding to the eigenvalue 0 is

       Span{ (−2, 1, 0) }.
3. To find the eigenvectors corresponding to the eigenvalue 2, we need to solve the equation (A − λI)x = 0 where λ = 2. That is, we need to solve

       (A − λI)x = 0
       (A − 2I)x = 0

       [ 3  6  −8 ]   [ 2  0  0 ]
       [ 0  0   6 ] − [ 0  2  0 ]  x = 0
       [ 0  0   2 ]   [ 0  0  2 ]

       [ 1  6  −8 ]
       [ 0 −2   6 ] x = 0
       [ 0  0   0 ]

   Row reducing the augmented matrix, we find that

       x = [ x1 ]      [ −10 ]
           [ x2 ] = x3 [   3 ]
           [ x3 ]      [   1 ]

   This tells us that the eigenvectors corresponding to the eigenvalue 2 are precisely the nonzero scalar multiples of the vector (−10, 3, 1). In other words, the eigenspace corresponding to the eigenvalue 2 is

       Span{ (−10, 3, 1) }.

4. I’ll let you find the eigenvectors corresponding to the eigenvalue 3.
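All of these computations can be verified directly from the definition Ax = λx. A quick check in Python (the vector listed for λ = 3 gives away the answer to item 4, so check it by hand first if you like):

```python
A = [[3, 6, -8], [0, 0, 6], [0, 0, 2]]

def matvec(m, v):
    # Matrix-vector product for a 3x3 matrix
    return [sum(m[i][j]*v[j] for j in range(3)) for i in range(3)]

# (eigenvalue, eigenvector) pairs; A v should equal lambda v in each case
for lam, v in [(0, [-2, 1, 0]), (2, [-10, 3, 1]), (3, [1, 0, 0])]:
    print(matvec(A, v) == [lam*vi for vi in v])  # True for each pair
```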

## 5 Similar Matrices

1. Definition: The n × n matrices A and B are said to be similar if there is an invertible n × n matrix
P such that A = P BP⁻¹.
2. Similar matrices have at least one useful property, as seen in the following theorem. See page 315 for
a proof of this theorem.
3. Theorem 4: If n × n matrices are similar, then they have the same characteristic polynomial and
hence the same eigenvalues (with the same multiplicities).
4. Note that if the n × n matrices A and B are row equivalent, then they are not necessarily similar. For a simple counterexample, consider the row equivalent matrices

       A = [ 2  0 ]    and    B = [ 1  0 ]
           [ 0  1 ]               [ 0  1 ]

   If these two matrices were similar, then there would exist an invertible matrix P such that A = P BP⁻¹. Since B is the identity matrix, this means that A = P IP⁻¹ = P P⁻¹ = I. Since A is not the identity matrix, we have a contradiction, and so A and B cannot be similar.
5. We can also use Theorem 4 to show that row equivalent matrices are not necessarily similar: Similar
matrices have the same eigenvalues but row equivalent matrices often do not have the same eigenvalues.
(Imagine scaling a row of a triangular matrix. This would change one of the matrix’s diagonal entries
which changes its eigenvalues. Thus we would get a row equivalent matrix with different eigenvalues,
so the two matrices could not be similar by Theorem 4.)
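For 2 × 2 matrices the characteristic polynomial is λ² − (tr A)λ + det A, so Theorem 4 says that similar 2 × 2 matrices must share trace and determinant. A small sketch (the matrices B and P below are made up):

```python
def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[2, 1], [0, 5]]          # triangular, so its eigenvalues are 2 and 5
P = [[1, 1], [0, 1]]          # invertible, det P = 1
P_inv = [[1, -1], [0, 1]]

A = mul(mul(P, B), P_inv)     # A = P B P^{-1}, so A and B are similar

trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0]*M[1][1] - M[0][1]*M[1][0]
print(trace(A) == trace(B), det(A) == det(B))  # True True
```

Same trace and determinant means the same characteristic polynomial, hence the same eigenvalues.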

## 6 Diagonalization

1. Definition: A square matrix A is said to be diagonalizable if it is similar to a diagonal matrix. In
other words, a diagonalizable matrix A has the property that there exists an invertible matrix P and a
diagonal matrix D such that A = P DP⁻¹.

2. Why is this useful? Suppose you wanted to find A³. If A is diagonalizable, then

       A³ = (P DP⁻¹)³ = (P DP⁻¹)(P DP⁻¹)(P DP⁻¹)
          = P DP⁻¹P DP⁻¹P DP⁻¹
          = P D(P P⁻¹)D(P P⁻¹)DP⁻¹
          = P DDDP⁻¹
          = P D³P⁻¹.

   In general, if A = P DP⁻¹, then Aᵏ = P DᵏP⁻¹.
3. Why isthis useful?
 Because powers of diagonal matrices are relatively easy to compute. For example,
7 0 0
if D =0 −2 0, then
0 0 3
 3 
7 0 0
D3 =  0 (−2)3 0  .
0 0 33
This means that finding Ak involves only two matrix multiplications instead of the k matrix multipli-
cations that would be necessary to multiply A by itself k times.
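A quick check of the identity Aᵏ = P DᵏP⁻¹ with a made-up 2 × 2 example (P was chosen with determinant 1 so that P⁻¹ has integer entries):

```python
def mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]
D = [[2, 0], [0, 3]]
A = mul(mul(P, D), P_inv)        # A = P D P^{-1} = [[2, 1], [0, 3]]

A3_direct = mul(mul(A, A), A)    # A^3 the slow way
D3 = [[2**3, 0], [0, 3**3]]      # cubing D just cubes its diagonal entries
A3_fast = mul(mul(P, D3), P_inv) # A^3 = P D^3 P^{-1}
print(A3_direct == A3_fast == [[8, 19], [0, 27]])  # True
```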
4. It turns out that an n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors.
That’s what the following theorem says. See page 321 for a proof of this theorem.
5. Theorem 5 (The Diagonalization Theorem):
(a) An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
(b) If v1, v2, . . . , vn are linearly independent eigenvectors of A and λ1, λ2, . . . , λn are their corresponding eigenvalues, then A = P DP⁻¹, where

       P = [ v1  · · ·  vn ]

   and

       D = [ λ1   0  · · ·   0 ]
           [  0  λ2  · · ·   0 ]
           [  ⋮    ⋮    ⋱    ⋮ ]
           [  0   0  · · ·  λn ]

(c) If A = P DP⁻¹ and D is a diagonal matrix, then the columns of P must be linearly independent
eigenvectors of A and the diagonal entries of D must be their corresponding eigenvalues.
6. What can we make of this theorem? If we can find n linearly independent eigenvectors for an n × n
matrix A, then we know the matrix is diagonalizable. Furthermore, we can use those eigenvectors and
their corresponding eigenvalues to find the invertible matrix P and diagonal matrix D necessary to
show that A is diagonalizable.
7. Theorem 4 told us that similar matrices have the same eigenvalues (with the same multiplicities). So
if A is similar to a diagonal matrix D (that is, if A is diagonalizable), then the eigenvalues of D must
be the eigenvalues of A. Since D is a diagonal matrix (and hence triangular), the eigenvalues of D
must lie on its main diagonal. Since these are the eigenvalues of A as well, the eigenvalues of A must
be the entries on the main diagonal of D. This confirms that the choice of D given in the theorem
makes sense.

8. See your class notes or Example 3 on page 321 for examples of the Diagonalization Theorem in action.
Notes by Richard Taylor

## 10 Function Spaces

Many of the ideas of linear algebra, which we have studied in the context of Rⁿ or Cⁿ, are applicable much more widely in the mathematical sciences. To try to capture the domain of validity of these methods, mathematicians introduce the concept of "vector space" or "linear space". (These two terms are synonyms.) Rather than studying linear spaces in the abstract, we shall look at some examples which are important in the theory of differential equations.

### 10.1 Ordinary Linear Differential Equations

(Compare this section with sections 4.1, 4.2 and 9.3 in the book.)

By a smooth function from the real numbers to themselves we shall mean a function f : R → R which can be differentiated as many times as you like. We will denote the set of all such functions by C∞. For instance

    f(t) = 1
    g(t) = t
    h(t) = e^t

are all smooth functions. Indeed

    d^n f/dt^n = 0 for all n > 0
    dg/dt = 1;  d^n g/dt^n = 0 for all n > 1
    d^n h/dt^n = e^t for all n > 0.

On the other hand f(t) = |t| is not a smooth function, as it is not even once differentiable at t = 0.

[figure: the graph of f(t) = |t|, with its corner at t = 0]

If c ∈ R and if f and g are smooth functions, so is (cf + g)(t) = cf(t) + g(t). (Recall that if f and g are differentiable, so is cf + g, and (cf + g)′(t) = cf′(t) + g′(t).) Thus for example 1 + t and t + 2e^t are in our collection C∞.

Thus on our collection of functions C∞ we have defined two operations:

(a) "addition", e.g. if f(t) = 1 and h(t) = e^t then (f + h)(t) = 1 + e^t;
(b) and "scalar multiplication", e.g. if h(t) = e^t then (2h)(t) = 2e^t.

These are the same basic operations that we have studied on Rⁿ. Just as most of our study of Rⁿ was immediately applicable to Cⁿ, so many of the same ideas also apply to our new "space" C∞. More specifically, C∞ is a "linear space" in the sense of section 9.1.

Examples

(1) Any polynomial function a_n t^n + a_{n−1} t^{n−1} + · · · + a_1 t + a_0 is smooth, and so the collection of all polynomial functions forms a subset P of C∞. This subset P has the following two important properties:

    (a) If f(t), g(t) are polynomials, so is f(t) + g(t).
    (b) If f(t) is a polynomial and c is a real number, then cf(t) is a polynomial.

    Because P has properties (a) and (b) we call P a subspace of C∞.

(2) Suppose c1 and c2 are real numbers and c1 t + c2 e^t is the zero function. Then we must have that c1 = c2 = 0. (Why? If c2 ≠ 0 then for t very large and positive, c2 e^t will be much larger than c1 t in magnitude, and so c1 t + c2 e^t ≠ 0. Thus one must have c2 = 0 and hence also c1 = 0.) Because of this property we say that t and e^t are linearly independent.

(3) On the other hand

        1·1 + 1·t + (−1)·(1 + t) = 0

    and so we say that 1, t and (1 + t) are linearly dependent.

(4) If f(t) ∈ C∞ we define a new function (Df)(t) by

        (Df)(t) = f′(t) = df/dt (t).

    (For example D(sin t) = cos t.) In general D(f) is again a smooth function, so D gives a function from C∞ to C∞. (D is a "function of functions".) Moreover D has the following two important properties:

    (a) D(f(t) + g(t)) = D(f(t)) + D(g(t))
    (b) D(cf(t)) = cD(f(t)),

    whenever c ∈ R and f(t), g(t) ∈ C∞. Because D has these two properties, we call D a linear transformation, or we simply say D is linear.
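On the subspace P of polynomials, D can be modelled concretely as an operation on coefficient lists, and properties (a) and (b) checked mechanically. A sketch (the encoding of polynomials as lists is our own choice, not from the notes):

```python
# A polynomial a0 + a1 t + a2 t^2 + ... is stored as the list [a0, a1, a2, ...].
def D(p):
    # Derivative: the coefficient of t^(i-1) in Df is i * a_i
    return [i * p[i] for i in range(1, len(p))] or [0]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0]*(n - len(p)); q = q + [0]*(n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c*a for a in p]

f = [1, 0, 3]      # 1 + 3t^2
g = [0, 2]         # 2t
print(D(add(f, g)) == add(D(f), D(g)))   # True: property (a)
print(D(scale(5, f)) == scale(5, D(f)))  # True: property (b)
```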
(5) What is the kernel of D? It is simply the collection of functions f(t) ∈ C∞ such that D(f(t)) = 0. But the only functions with zero derivative are the constant functions. Thus ker(D) is the collection of all constant functions.

(6) What is the image of D? It is the whole of C∞. Why? If f(t) ∈ C∞ then we can define a new function

        g(t) = ∫_0^t f(s) ds.

    Then g(t) is also a smooth function, and by the fundamental theorem of calculus D(g(t)) = f(t).

(7) If T : Rⁿ → Rⁿ is a linear transformation and Im T = Rⁿ then ker T = (0). However D : C∞ → C∞ is a linear transformation and Im D = C∞, but ker D ≠ (0). This can happen because C∞ is "infinite dimensional", by which we mean that C∞ cannot be spanned by any finite number of elements f_i(t) ∈ C∞.

(8) Now consider D² = D ∘ D : C∞ → C∞. Its kernel is the collection of smooth functions f(t) such that f″(t) = 0, i.e. such that f′(t) = a, a constant, i.e. such that

        f(t) = at + b

    for some real numbers a, b. Thus any element f(t) ∈ ker D² is a linear combination of t and 1, i.e.

        f(t) = a·t + b·1.

    We say that t and 1 span ker D².

(9) In fact the functions t and 1 are also linearly independent, and so we say that they form a basis of ker D². As this basis has two elements we say that ker D² is two dimensional.

(10) Find all solutions f(t) of the equation D²f(t) = e^t.

    It is not too hard to spot that f0(t) = e^t is one solution of this equation. If f(t) is any other solution then D²(f(t) − f0(t)) = D²(f(t)) − D²(f0(t)) = e^t − e^t = 0. On the other hand, if D²(f(t) − f0(t)) = 0 then D²(f(t)) = D²(f0(t)) = e^t. Thus f(t) is a solution of D²f(t) = e^t if and only if f(t) − f0(t) ∈ ker D². Thus the general solution is

        f(t) = f0(t) + at + b = e^t + at + b.

    Just as for linear equations, to find the general solution of an inhomogeneous equation (e.g. D²f(t) = e^t) you find any solution and add to it a general solution of the corresponding homogeneous equation (e.g. D²f(t) = 0).

Let us take this opportunity to explain what we mean by saying a sequence f1(t), f2(t), . . . of elements of a subspace V ⊂ C∞ spans V. We will mean that if g(t) is any element of V then we can find a finite number of real numbers c1, . . . , cn such that

    g(t) = c1 f1(t) + · · · + cn fn(t).

The sequence f1(t), f2(t), . . . may be infinite, but we require that any g(t) is a linear combination of only finitely many f1(t), . . . , fn(t). The number n we need may depend on g(t). For example 1, t, t², t³, . . . spans P but does not span C∞. (See the exercises for 9.1.)

Suppose that a_n(t), . . . , a_0(t), g(t) ∈ C∞. Then we will refer to an equation of the form

    a_n(t) d^n f(t)/dt^n + a_{n−1}(t) d^{n−1} f(t)/dt^{n−1} + · · · + a_0(t) f(t) = g(t)        (∗)

as a linear ordinary differential equation. To the equation (∗) we may associate a linear transformation

    T : C∞ → C∞

defined by

    T(f(t)) = a_n(t) d^n f(t)/dt^n + · · · + a_1(t) df/dt + a_0(t) f(t).

It is easy to check that T is indeed a linear transformation. The equation (∗) can be rewritten

    T(f(t)) = g(t).

If g(t) ≠ 0 we will call this equation inhomogeneous. If g(t) = 0 we will call it homogeneous. We will refer to the associated equation

    T(f(t)) = 0

as the associated homogeneous equation.

If f0(t) is any given solution of T(f(t)) = g(t), then the general solution is

    f(t) = f0(t) + h(t)

where h(t) ∈ ker T, i.e. h(t) is a solution of the associated homogeneous equation.

We have the following two important facts, which guarantee the existence of solutions of certain linear ODE's (Ordinary Differential Equations).

Fact 10.1.1. Suppose

    T(f(t)) = d^n f(t)/dt^n + a_{n−1}(t) d^{n−1} f(t)/dt^{n−1} + · · · + a_1(t) df(t)/dt + a_0(t) f(t)

is a linear transformation from C∞ to C∞. Then ker T has dimension n.

Fact 10.1.2. Suppose

    T(f(t)) = d^n f(t)/dt^n + a_{n−1}(t) d^{n−1} f(t)/dt^{n−1} + · · · + a_1(t) df(t)/dt + a_0(t) f(t)

and suppose g(t) ∈ C∞. Then there exists f(t) ∈ C∞ with

    T(f(t)) = g(t).
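The claim in example (10) — that every f(t) = e^t + at + b solves D²f = e^t — can be spot-checked numerically with a central second difference (the values of a and b below are arbitrary, and the tolerance is ours):

```python
import math

def second_diff(f, t, h=1e-4):
    # Central-difference approximation to f''(t)
    return (f(t + h) - 2*f(t) + f(t - h)) / h**2

a, b = 2.0, 5.0
f = lambda t: math.exp(t) + a*t + b   # a member of the general solution family
for t in (0.0, 1.0, -0.5):
    print(abs(second_diff(f, t) - math.exp(t)) < 1e-3)  # True at each point
```

The linear part at + b contributes nothing to the second derivative, which is exactly why a and b are free.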
Note that in both these facts we are assuming that the coefficient of d^n f(t)/dt^n is 1. Both facts become false if we do not assume this. For instance if T(f(t)) = tf′(t) + f(t) then dim(ker T) = 0. Also if S(f(t)) = tf′(t) then there is no function f(t) ∈ C∞ with S(f(t)) = 1. (See the exercises.)

For example consider the case of "constant coefficients", i.e.

    T = D^n + a_{n−1} D^{n−1} + · · · + a_1 D + a_0

where a_{n−1}, . . . , a_0 ∈ R. It is convenient to look at the polynomial

    X^n + a_{n−1} X^{n−1} + · · · + a_1 X + a_0,

sometimes, if slightly confusingly, called the characteristic polynomial of T. Over the complex numbers we may factorise this polynomial

    (X − α1)(X − α2) . . . (X − αn)

where α1, α2, . . . , αn ∈ C; and we may also factorise

    T = (D − α1) . . . (D − αn).

If αj ∈ R then (d/dt) e^{αj t} = αj e^{αj t}, so (D − αj) e^{αj t} = 0, so that T(e^{αj t}) = 0, i.e. e^{αj t} ∈ ker T.

For instance if T = D² − D − 2, then the "characteristic polynomial" is X² − X − 2 = (X − 2)(X + 1). Thus T = (D − 2)(D + 1), and so e^{2t} and e^{−t} are in ker T. As ker T has dimension 2 by Fact 10.1.1, we see that e^{2t} and e^{−t} form a basis of ker T, i.e. ker T is the collection of all functions c1 e^{2t} + c2 e^{−t}.

If on the other hand αj ∈ C but is not real, then the complex conjugate ᾱj of αj is also a root of X^n + a_{n−1} X^{n−1} + · · · + a_1 X + a_0. Write

    αj = a + ib
    ᾱj = a − ib.

Then T(e^{αj t}) = 0, but now e^{αj t} is not in C∞ as it is not real valued:

    e^{αj t} = e^{at}(cos bt + i sin bt).

Also T(e^{ᾱj t}) = 0 and

    e^{ᾱj t} = e^{at}(cos bt − i sin bt).

Thus

    T( (1/2)(e^{αj t} + e^{ᾱj t}) ) = 0    i.e.    T(e^{at} cos bt) = 0

and

    T( (1/2i)(e^{αj t} − e^{ᾱj t}) ) = 0    i.e.    T(e^{at} sin bt) = 0;

i.e. e^{at} cos bt and e^{at} sin bt ∈ ker T.

For instance if T = D² − 6D + 13 then the "characteristic polynomial" is X² − 6X + 13, which has roots

    3 ± √(−4) = 3 ± 2i.

Thus e^{3t} cos 2t and e^{3t} sin 2t ∈ ker T. As dim ker T = 2, we see that e^{3t} cos 2t and e^{3t} sin 2t form a basis of ker T, i.e. the general solution of

    T(f(t)) = 0

is

    c1 e^{3t} cos 2t + c2 e^{3t} sin 2t.

Suppose now we are asked to find the general solution of

    T(f(t)) = 30 cos t.

We must first look for some particular solution to this equation. Experience can teach us that a good bet is to look for a solution

    f(t) = A cos t + B sin t.

Then

    T(f(t)) = −A cos t − B sin t + 6A sin t − 6B cos t + 13A cos t + 13B sin t
            = (12A − 6B) cos t + (6A + 12B) sin t.

This will give a solution to T(f(t)) = 30 cos t if and only if

    12A − 6B = 30
    6A + 12B = 0

i.e.

    A = 2,  B = −1.

Thus we have found a particular solution

    f(t) = 2 cos t − sin t.

We deduce that the general solution is

    f(t) = 2 cos t − sin t + c1 e^{3t} cos 2t + c2 e^{3t} sin 2t.

WE RECOMMEND YOU ALSO READ SECTION 9.3
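The computation with T = D² − 6D + 13 can be reproduced in a few lines. On the span of cos t and sin t, represent a cos t + b sin t by the pair (a, b); then D sends (a, b) to (b, −a), and T becomes a linear map on pairs (this encoding is ours, not the notes'):

```python
def D(p):
    # d/dt (a cos t + b sin t) = b cos t - a sin t
    a, b = p
    return (b, -a)

def T(p):
    # T = D^2 - 6D + 13 acting on a cos t + b sin t
    a, b = p
    a1, b1 = D(p)
    a2, b2 = D((a1, b1))
    return (a2 - 6*a1 + 13*a, b2 - 6*b1 + 13*b)

print(T((2, -1)))  # (30, 0): so 2 cos t - sin t solves T f = 30 cos t
# The roots 3 +/- 2i of the characteristic polynomial really are roots:
print(complex(3, 2)**2 - 6*complex(3, 2) + 13)  # 0j
```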
EXERCISES

(1) Which of the following sets are subspaces of C∞? Justify your answers.

    (a) All continuous functions from R to R.
    (b) All f ∈ C∞ such that f(0) + f′(0) = 0.
    (c) All f ∈ C∞ such that f + f′ = 0.
    (d) All f ∈ C∞ such that f(0) = 1.

(2) Which of the following subsets of C∞ are linearly independent? Justify your answers.

    (a) 1, t, t², t³.
    (b) 1 + t, 1 − t, t², 1 + t + t².
    (c) sin t, e^t, e^{−t}.
    (d) sin t, cos t, sin(t + π/3).

(3) Which of the following functions are linear? Justify your answers.

    (a) T : C∞ → R; T(f) = f(0).
    (b) T : C∞ → C∞; T(f) = f² + f′.
    (c) T : C∞ → R²; T(f) = (f(0), f(1)).
    (d) T : C∞ → R; T(f) = ∫_0^1 f(t) dt.

(6) Find the eigenvalues and eigenspaces for T : C∞ → C∞ given by T(f) = f′ + f.

(7) Let T(f(t)) = f″(t) + f′(t) − 12f(t). Find a basis for ker T. Find a smooth function f(t) such that

        T(f(t)) = 0

    and f(0) = f′(0) = 0.

(8) Let T(f(t)) = f″(t) + 2f′(t) + 2f(t). Find a basis for ker T. Find a smooth function f(t) such that

        T(f(t)) = 0

    and f(0) = f′(0) = 1.

(9) Problem 34 of section 9.3.

(10) Let T(f(t)) = f″(t) + 9f(t). Find a basis for ker T. Also find the general solution of

        T(f(t)) = cos(αt)

    where α is a positive real number. Distinguish the cases α = 3 and α ≠ 3. [HINT: In the case α = 3 consider At cos 3t + Bt sin 3t.]

(11) Solve the equation t df(t)/dt = 1 and explain why it has no solution in C∞.

(12) Let T(f(t)) = tf′(t) + f(t). Suppose T(f(t)) = 0. If g(t) = tf(t), show that g′(t) = 0. Conclude that dim(ker T) = 0.
### 10.2 Fourier Series

(Compare this with section 5.5.) In the last section we looked at spaces of functions which behaved like Rⁿ, but we did not look at any analogues of the concepts of length, angle or dot product. In this section we will discuss an example in which the analogues of these concepts play an important role.

Recall that if a and b are real numbers with a < b then [a, b] denotes the interval

    {x ∈ R : a ≤ x ≤ b}.

[figure: the interval [a, b] on the real line — includes end points]

We will let C[−π, π] denote the collection of all continuous functions from the interval [−π, π] to R. For example

    t,  sin t,  |t|,  1/(t² − 16)

are all functions in C[−π, π].

[figure: graphs of these four functions on [−π, π]]

On the other hand the function

    f(t) = 1 if t > 0;  0 if t = 0;  −1 if t < 0

does not lie in C[−π, π].

[figure: the graph of this step function, with its jump at t = 0]

Again C[−π, π] is a linear space:

(a) If f(t) and g(t) ∈ C[−π, π] then f(t) + g(t) ∈ C[−π, π] (recall that the sum of continuous functions is continuous), e.g. |t| + sin t ∈ C[−π, π].
(b) If f(t) ∈ C[−π, π] and c ∈ R then cf(t) ∈ C[−π, π], e.g. 2 sin t ∈ C[−π, π].

We will define the inner product of two functions f(t), g(t) ∈ C[−π, π] to be

    ⟨f(t), g(t)⟩ = (1/π) ∫_{−π}^{π} f(t)g(t) dt.

You should think of it as an analogue of the dot product of two vectors in Rⁿ. It shares with the dot product the following three key properties:

(1) ⟨f(t), g(t)⟩ = ⟨g(t), f(t)⟩.

(2) If f(t), g(t) and h(t) ∈ C[−π, π] and if c ∈ R then

        ⟨cf(t) + g(t), h(t)⟩ = c⟨f(t), h(t)⟩ + ⟨g(t), h(t)⟩.

(3) If f(t) is not the zero function then

        ⟨f(t), f(t)⟩ > 0.

We will let you check properties (1) and (2) for yourself. Let us explain property (3). Firstly

    ⟨f(t), f(t)⟩ = (1/π) ∫_{−π}^{π} f(t)² dt.

As f(t)² ≥ 0 for all t, we see that ⟨f(t), f(t)⟩ ≥ 0. Suppose f(t) ≠ 0; why is ⟨f(t), f(t)⟩ ≠ 0? Well, suppose f(t0) ≠ 0. Because f is continuous we can find δ > 0 such that

    |f(t)| > (1/2)|f(t0)|  for all t ∈ [t0 − δ, t0 + δ].

As long as t0 ≠ ±π we may also suppose t0 − δ > −π and t0 + δ < π. (We leave the cases t0 = ±π to you; they are only slightly different.)
[figure: the graph of f(t)², which exceeds (1/4)|f(t0)|² on [t0 − δ, t0 + δ]]

Then

    ∫_{−π}^{π} f(t)² dt ≥ ∫_{t0−δ}^{t0+δ} f(t)² dt ≥ ∫_{t0−δ}^{t0+δ} (1/4)|f(t0)|² dt ≥ (δ/2)|f(t0)|² > 0.

We define the length of a function f ∈ C[−π, π] to be √⟨f(t), f(t)⟩, and we will denote it ‖f‖. We define the distance between two functions f(t), g(t) ∈ C[−π, π] to be ‖f − g‖. Roughly speaking, two functions f(t) and g(t) are close if the area between their graphs is small.

[figure: two pairs of graphs on [−π, π] — close functions, and far apart functions]

Examples

(1) To calculate ‖t‖:

        ⟨t, t⟩ = (1/π) ∫_{−π}^{π} t² dt = (1/π) [t³/3]_{−π}^{π} = 2π³/3π = (2/3)π²

        ‖t‖ = π√(2/3).

(2) To calculate the distance between 1 and |t|:

        ⟨|t| − 1, |t| − 1⟩ = (1/π) ∫_{−π}^{π} (|t| − 1)² dt = (2/π) ∫_0^π (t − 1)² dt
                          = (2/π) ∫_0^π (t² − 2t + 1) dt
                          = (2/π) [t³/3 − t² + t]_0^π = (2/3)π² − 2π + 2

        ‖|t| − 1‖ = √((2/3)π² − 2π + 2).

(3) If n is a positive integer, find ‖sin nt‖.

        ⟨sin nt, sin nt⟩ = (1/π) ∫_{−π}^{π} (sin nt)² dt

    To evaluate this integral recall the useful trigonometric formulae:

        sin(A + B) = sin A cos B + cos A sin B
        cos(A + B) = cos A cos B − sin A sin B
        1 = (cos A)² + (sin A)²

    Putting B = A in the second of these we get

        cos 2A = (cos A)² − (sin A)² = 1 − 2(sin A)².

    Thus

        ⟨sin nt, sin nt⟩ = (1/π) ∫_{−π}^{π} (1/2)(1 − cos 2nt) dt = [t/2π − (sin 2nt)/4nπ]_{−π}^{π} = 1.

    Thus ‖sin nt‖ = 1.

    Similarly, if n is a positive integer one can check that ‖cos nt‖ = 1. Moreover ‖1/√2‖ = 1.

We will call two functions f(t), g(t) ∈ C[−π, π] orthogonal if

    ⟨f(t), g(t)⟩ = 0.

We will call a collection of functions f1(t), . . . , fn(t), . . . (finite or infinite) orthonormal if

(a) ‖fj(t)‖ = 1 for each j
(b) ⟨fj(t), fk(t)⟩ = 0 if j ≠ k.
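These orthonormality claims can be spot-checked by numerical integration. A sketch using the midpoint rule (the choice of rule, N, and tolerance are ours):

```python
import math

def inner(f, g, N=20000):
    # (1/pi) * integral_{-pi}^{pi} f(t) g(t) dt, midpoint rule with N slices
    h = 2*math.pi / N
    total = 0.0
    for k in range(N):
        t = -math.pi + (k + 0.5)*h
        total += f(t) * g(t)
    return total * h / math.pi

s1 = lambda t: math.sin(t)
s2 = lambda t: math.sin(2*t)
c1 = lambda t: math.cos(t)
const = lambda t: 1/math.sqrt(2)

print(abs(inner(s1, s1) - 1) < 1e-6)        # True:  ||sin t|| = 1
print(abs(inner(const, const) - 1) < 1e-6)  # True:  ||1/sqrt(2)|| = 1
print(abs(inner(s1, s2)) < 1e-6)            # True:  sin t, sin 2t orthogonal
print(abs(inner(s1, c1)) < 1e-6)            # True:  sin t, cos t orthogonal
```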
Examples

(1) If n ≠ m are positive integers then sin nt and sin mt are orthogonal:

        ⟨sin nt, sin mt⟩ = (1/π) ∫_{−π}^{π} sin nt sin mt dt
                         = (1/π) ∫_{−π}^{π} (1/2)( cos(n − m)t − cos(n + m)t ) dt
                         = (1/2π) [ sin(n − m)t/(n − m) − sin(n + m)t/(n + m) ]_{−π}^{π} = 0.

    Again we use the formula for cos(A + B) (and for cos(A − B)).

(2) In fact the sequence of functions

        1/√2, sin t, cos t, sin 2t, cos 2t, sin 3t, cos 3t, . . .

    is orthonormal. We leave it to you to evaluate the necessary integrals.

The following facts can be proved exactly as they were for Rⁿ.

(1) If f1(t), . . . , fn(t) are orthonormal then they form a basis of an n-dimensional subspace of C[−π, π].

    The main point here is to check that f1(t), . . . , fn(t) are linearly independent. Suppose

        c1 f1(t) + · · · + cn fn(t) = 0.

    Taking the inner product with fj(t),

        ⟨fj(t), c1 f1(t) + · · · + cn fn(t)⟩ = 0,

    we see that

        0 = c1⟨fj(t), f1(t)⟩ + · · · + cn⟨fj(t), fn(t)⟩ = cj

    for each j.

(2) If f(t) and g(t) are orthogonal then

        ‖f(t) + g(t)‖² = ‖f(t)‖² + ‖g(t)‖².

    Indeed

        ‖f + g‖² = ⟨f + g, f + g⟩
                 = ⟨f, f⟩ + ⟨f, g⟩ + ⟨g, f⟩ + ⟨g, g⟩
                 = ⟨f, f⟩ + ⟨g, g⟩
                 = ‖f‖² + ‖g‖².

(3) Suppose V is a subspace of C[−π, π] and that f(t) ∈ C[−π, π]. If we can find g(t) ∈ V such that f(t) − g(t) is orthogonal to each element of V, then

        ‖f(t) − g(t)‖ ≤ ‖f(t) − h(t)‖

    for all h(t) ∈ V, with equality if and only if g(t) = h(t).

    [figure: f(t) above the subspace V, with g(t) the foot of the perpendicular and h(t) another element of V]

    We have

        ‖f(t) − h(t)‖² = ‖f(t) − g(t) + g(t) − h(t)‖²
                       = ‖f(t) − g(t)‖² + ‖g(t) − h(t)‖²
                       ≥ ‖f(t) − g(t)‖²,

    with equality only if ‖g(t) − h(t)‖ = 0, i.e. g(t) = h(t). The main point is that g(t) − h(t) is in V and so orthogonal to f(t) − g(t).

(4) If f1(t), . . . , fn(t) are an orthonormal basis of a subspace V ⊂ C[−π, π] then

        projV(f(t)) = ⟨f(t), f1(t)⟩f1(t) + · · · + ⟨f(t), fn(t)⟩fn(t)

    is in V; f(t) − projV(f(t)) is orthogonal to every element of V; and projV(f(t)) is closer to f(t) than any other element of V.

    It suffices to check that for each j = 1, . . . , n: ⟨fj(t), projV(f(t)) − f(t)⟩ = 0. But

        ⟨fj(t), projV(f(t)) − f(t)⟩ = ⟨f(t), f1(t)⟩⟨fj(t), f1(t)⟩ + · · · + ⟨f(t), fn(t)⟩⟨fj(t), fn(t)⟩ − ⟨fj(t), f(t)⟩
                                    = ⟨f(t), fj(t)⟩ − ⟨fj(t), f(t)⟩ = 0.

We will let Tn denote the subspace of C[−π, π] with orthonormal basis

    1/√2, sin t, cos t, . . . , sin nt, cos nt.

Then projTn(f(t)) is an approximation to f(t) constructed from these trigonometric functions. As n increases one might expect these approximations to become better and better.
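The projection coefficients ⟨f(t), fj(t)⟩ can be computed numerically. For f(t) = t, the coefficients against sin nt come out as 2(−1)^{n+1}/n, which is the Fourier coefficient computed in the example that follows (a midpoint-rule sketch; the tolerance is ours):

```python
import math

def inner(f, g, N=40000):
    # (1/pi) * integral_{-pi}^{pi} f(t) g(t) dt, midpoint rule
    h = 2*math.pi / N
    return sum(f(-math.pi + (k + 0.5)*h) * g(-math.pi + (k + 0.5)*h)
               for k in range(N)) * h / math.pi

f = lambda t: t
coeffs = {n: inner(f, lambda t: math.sin(n*t)) for n in (1, 2, 3)}
for n in (1, 2, 3):
    print(abs(coeffs[n] - 2*(-1)**(n + 1)/n) < 1e-5)  # True for each n
```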
In fact we have:

Fact 10.2.1. (1) If f ∈ C[−π, π] then

    ‖projTn(f(t)) − f(t)‖ → 0

as n → ∞.

(2)

    ‖f(t)‖² = ⟨f(t), 1/√2⟩² + Σ_{n=1}^{∞} ( ⟨f(t), sin nt⟩² + ⟨f(t), cos nt⟩² ).

Although we will not prove this, let us at least explain how (2) follows from (1).

    ‖projTn(f(t))‖² = ‖⟨f(t), 1/√2⟩(1/√2) + ⟨f(t), sin t⟩ sin t + · · · + ⟨f(t), cos nt⟩ cos nt‖²
                    = ⟨f(t), 1/√2⟩² + ⟨f(t), sin t⟩² + · · · + ⟨f(t), cos nt⟩².

On the other hand

    ‖f(t)‖² = ‖projTn(f(t))‖² + ‖f(t) − projTn(f(t))‖²
            = ( ⟨f(t), 1/√2⟩² + · · · + ⟨f(t), cos nt⟩² ) + ‖f(t) − projTn(f(t))‖².

Letting n → ∞ gives part (2).

Although this tells us that "on average" projTn(f(t)) is close to f(t), it does not tell us what happens for any given t ∈ [−π, π]. However, if we place some smoothness hypothesis on f(t) then we can say what happens.

Fact 10.2.2. Suppose f(t) ∈ C[−π, π] is differentiable at a point x ∈ [−π, π], and if x = ±π also assume that f(−π) = f(π). Then the series

    ⟨1/√2, f(t)⟩(1/√2) + Σ_{n=1}^{∞} ( ⟨sin nt, f(t)⟩ sin nx + ⟨cos nt, f(t)⟩ cos nx )

converges to f(x).

The series

    ⟨1/√2, f(t)⟩(1/√2) + Σ_{n=1}^{∞} ( ⟨sin nt, f(t)⟩ sin nt + ⟨cos nt, f(t)⟩ cos nt )

is called the Fourier series for f, after the French mathematician Jean-Baptiste-Joseph Fourier (1768-1830). Fact 10.2.2 was known to Fourier, and is often referred to as Fourier's theorem, although the first rigorous proof was only found later by Dirichlet.

We recommend that you read section 5.5.

Example. Find the Fourier series for t.

    ⟨1/√2, t⟩ = (1/π) ∫_{−π}^{π} t/√2 dt = 0

    ⟨cos nt, t⟩ = (1/π) ∫_{−π}^{π} t cos nt dt = 0,  because t cos nt is an odd function

    ⟨sin nt, t⟩ = (1/π) ∫_{−π}^{π} t sin nt dt
                = (1/π) ( [−t cos nt/n]_{−π}^{π} + ∫_{−π}^{π} (cos nt)/n dt )
                = (1/π) ( π(−(−1)^n/n) − (−π)(−(−1)^n/n) )
                = 2(−1)^{n+1}/n.

Thus

    t = 2 Σ_{n=1}^{∞} ((−1)^{n+1}/n) sin nt.

By part (2) of Fact 10.2.1 we see that

    ‖t‖² = Σ_{n=1}^{∞} 4/n²

i.e.

    (2/3)π² = 4 Σ_{n=1}^{∞} 1/n²    i.e.    π²/6 = Σ_{n=1}^{∞} 1/n²

i.e.

    π²/6 = 1 + 1/4 + 1/9 + 1/16 + 1/25 + 1/36 + · · ·

an amazing expression of π as an infinite sum.

On the other hand, by Fact 10.2.2, if we put t = π/2 we get

    π/2 = 2 Σ_{n odd} ((−1)^{n+1}/n) (−1)^{(n−1)/2}

i.e.

    π/4 = Σ_{m=0}^{∞} (−1)^m/(2m + 1) = 1 − 1/3 + 1/5 − 1/7 + 1/9 − 1/11 + · · · ,

another amazing expression for π as an infinite sum.
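Both of these numerical series can be checked by partial sums (the cutoffs and tolerances below are chosen arbitrarily):

```python
import math

# 1 + 1/4 + 1/9 + ... should approach pi^2/6
basel = sum(1/n**2 for n in range(1, 200001))
print(abs(basel - math.pi**2/6) < 1e-4)   # True

# 1 - 1/3 + 1/5 - ... should approach pi/4
leibniz = sum((-1)**m/(2*m + 1) for m in range(200000))
print(abs(leibniz - math.pi/4) < 1e-5)    # True
```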
EXERCISES

(1) Find the length of

1 + sin t + 3 cos 5t + 2 sin 10t

(2) Show that 1/√2 and √(3/2) · t/π are orthonormal. Let V be the subspace of C[−π, π] consisting of functions of the form at + b. Find proj_V (t²).

(3) Find the Fourier series for |t|.

(4) Calculate ∫_{−π}^{π} e^{at} cos nt dt.

[HINT: Integrate by parts twice to get an expression

∫_{−π}^{π} e^{at} cos nt dt = [ (a/n²) e^{at} cos nt ]_{−π}^{π} − (a²/n²) ∫_{−π}^{π} e^{at} cos nt dt

and then solve for ∫_{−π}^{π} e^{at} cos nt dt.]

(5) If a is a real constant, find the Fourier series for

cosh at = (e^{at} + e^{−at})/2

[HINT: cosh(−at) = cosh(at)]

(6) Find a closed formula for Σ_{n=1}^∞ 1/(n² + a²) as a function of a. [HINT: use (5) and Fact 10.2.2]

## 10.3 Partial Differential Equations I: The Heat Equation

Consider a uniform metal bar stretching from x = 0 to x = π. Suppose that the ends of the bar are held at a constant temperature of 0 (e.g. are immersed in a mixture of water and ice) but that otherwise the bar is thermally insulated from its surroundings, except that at time t = 0 the bar is quickly heated so that it has temperature distribution

T(x, 0) = { x if x ≤ π/2; π − x if x ≥ π/2 }

Describe the temperature of the bar at all subsequent times.

[Figure: the graph of T(x, 0), a tent-shaped profile rising from 0 at x = 0 to π/2 at x = π/2 and falling back to 0 at x = π.]

The temperature T(x, t) obeys the equation

∂T/∂t = µ ∂²T/∂x²

for some positive constant µ depending on the structure of the bar. (This sort of equation is called a partial differential equation, or PDE.)

Where does this particular equation come from?

The rate of heat flow past x is −K ∂T/∂x, where K is the thermal conductivity (heat flows from hot to cold at a rate proportional to the temperature gradient). The rate of temperature increase is C times the rate of arrival of heat, where C is the heat capacity.

We examine what happens to a small length of bar from x to x + δx in the small time from t to t + δt.

[Figure: heat −K ∂T/∂x (x, t) δt flows past the point x and K ∂T/∂x (x + δx, t) δt past the point x + δx.]

Total heat arriving in time δt:

K ( ∂T/∂x (x + δx, t) − ∂T/∂x (x, t) ) δt

Rise in temperature in time δt:

T(x, t + δt) − T(x, t) ≈ (C/δx) K ( ∂T/∂x (x + δx, t) − ∂T/∂x (x, t) ) δt
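The bookkeeping above translates directly into a numerical scheme: replace ∂T/∂x by differences over δx and step forward in time by δt. The sketch below does this for the tent-shaped initial temperature; the grid sizes and µ = 1 are illustrative choices, not from the notes.

```python
import math

def heat_step(T, mu, dx, dt):
    # one time step of T(x, t+dt) ≈ T(x, t) + mu * dt/dx^2 * (T(x+dx) - 2 T(x) + T(x-dx));
    # the end values stay fixed at 0, as in the problem
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + mu * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new

n = 50
dx = math.pi / n
xs = [i * dx for i in range(n + 1)]
T = [x if x <= math.pi / 2 else math.pi - x for x in xs]  # T(x, 0)
dt = 0.4 * dx**2  # below the stability limit dx^2 / (2 mu) of this explicit scheme
for _ in range(400):
    T = heat_step(T, mu=1.0, dx=dx, dt=dt)
print(max(T))  # the peak has dropped well below its initial value pi/2
```

Running longer shows the whole profile decaying towards 0, as the series solution below predicts.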
i.e.

(1/δt) ( T(x, t + δt) − T(x, t) ) ≈ CK · (1/δx) ( ∂T/∂x (x + δx, t) − ∂T/∂x (x, t) )

i.e.

∂T/∂t = CK ∂²T/∂x²

The equation

∂T/∂t = µ ∂²T/∂x²

is called the heat or diffusion equation. It arises in many physical situations where some diffusion process occurs, e.g. diffusion of pollutants in an aquifer, or of ions through a cell wall.

Here we are asked to find a solution to this equation subject to the restrictions that

T(0, t) = T(π, t) = 0 (the ends of the bar stay at temperature 0)

T(x, 0) = { x if x ≤ π/2; π − x if x > π/2 }

Such restrictions are called initial conditions or boundary conditions. Many different initial conditions are possible; they will depend on the problem one is trying to solve. (Another possibility would be that the bar was initially at temperature 0, that the left end is always kept at temperature 0, but that the right end is made to take on a specified temperature T(π, t).)

We are only looking for T in the region

0 ≤ x ≤ π (length of bar)
t ≥ 0 (positive time)

[Figure: the region 0 ≤ x ≤ π, t ≥ 0, with T(0, t) = 0 on the left edge, T(π, t) = 0 on the right edge, T(x, 0) specified on the bottom edge, and T(x, t) to be solved for inside.]

There are several methods available to tackle this sort of problem; we will present one based on Fourier series.

We first look for some simple solutions to the equation

∂T/∂t = µ ∂²T/∂x²,  T(0, t) = T(π, t) = 0. (∗)

In fact let us look for a solution

T(x, t) = u(x)v(t).

Then we require u(0) = u(π) = 0 and

v′(t)/v(t) = µ u″(x)/u(x).

We see that the quantity

v′(t)/v(t) = µ u″(x)/u(x)

is independent of both position x and time t, so that it must be a constant. We are led to try to solve the equations

u″(x) = a u(x)
v′(t) = aµ v(t)
u(0) = u(π) = 0.

But we know how to solve these equations.

(a) If a > 0, say a = λ², then

u(x) = A e^{λx} + B e^{−λx}.

The equations u(0) = u(π) = 0 imply that A = B = 0, i.e. u(x) ≡ 0. This is not much help.

(b) If a = 0 then

u(x) = Ax + B.

Again the equations u(0) = u(π) = 0 imply that A = B = 0, i.e. u(x) ≡ 0. Again not much help.

(c) Now suppose a < 0, say a = −λ². Then

u(x) = A sin λx + B cos λx.

The equation u(0) = 0 implies B = 0. The equation u(π) = 0 implies A = 0 or λ is a whole number n. In this case v′(t) = −n²µ v(t), so that v(t) = C e^{−µn²t}.
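Each separated solution can be spot-checked numerically. This sketch (with arbitrarily chosen µ, n and sample point, not values from the notes) verifies by finite differences that e^{−µn²t} sin nx satisfies ∂T/∂t = µ ∂²T/∂x² and vanishes at both ends:

```python
import math

mu, n = 0.7, 3  # arbitrary choices for the check

def T(x, t):
    return math.exp(-mu * n**2 * t) * math.sin(n * x)

def residual(x, t, h=1e-4):
    # dT/dt - mu * d2T/dx2, both approximated by central differences
    dT_dt = (T(x, t + h) - T(x, t - h)) / (2 * h)
    d2T_dx2 = (T(x + h, t) - 2 * T(x, t) + T(x - h, t)) / h**2
    return dT_dt - mu * d2T_dx2

print(abs(residual(1.1, 0.3)))       # approximately 0
print(T(0.0, 0.3), T(math.pi, 0.3))  # both (numerically) 0
```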
Thus we have found a series of solutions to (∗). Namely, for each positive integer n we have a solution

c_n e^{−µn²t} sin nx

for any constant c_n. If we put t = 0 we get c_n sin nx, so none of these solutions is the one we are looking for.

However, note that both the equation

∂T/∂t = µ ∂²T/∂x²

and the boundary conditions

T(0, t) = T(π, t) = 0

are linear: i.e. if T₁ and T₂ are two solutions, so is cT₁ + T₂. Thus we get a lot more solutions of these two equations: namely any finite sum

Σ_{n=1}^N c_n e^{−µn²t} sin nx.

At t = 0 this becomes

Σ_{n=1}^N c_n sin nx.

In fact more is true. If the constants c_n become smaller sufficiently rapidly as n → ∞ then the sum

Σ_{n=1}^∞ c_n e^{−µn²t} sin nx

will converge and give a solution to (∗) which specialises at t = 0 to

Σ_{n=1}^∞ c_n sin nx.

If we can find c_n such that

Σ_{n=1}^∞ c_n sin nx = { x if x ≤ π/2; π − x if x ≥ π/2 }

then we would have found a solution to our original problem. But this is the sort of problem we studied in the last section.

To put it more precisely in the form we considered in the last section, consider

θ(x) = { π − x if x ≥ π/2; x if −π/2 ≤ x ≤ π/2; −x − π if x ≤ −π/2 }

[Figure: the graph of θ on [−π, π], an odd zig-zag through −π, 0 and π.]

Note that we extended θ to [−π, π] by arranging that θ(−x) = −θ(x).

We now compute the Fourier series of θ.

(1/π) ∫_{−π}^{π} θ(x) (1/√2) dx = 0, as θ(x) = −θ(−x).

(1/π) ∫_{−π}^{π} θ(x) cos nx dx = 0 for the same reason.

(1/π) ∫_{−π}^{π} θ(x) sin nx dx
  = (2/π) ∫_0^{π} θ(x) sin nx dx
  = (2/π) ∫_0^{π/2} x sin nx dx + (2/π) ∫_{π/2}^{π} (π − x) sin nx dx
  = (2/π) ∫_0^{π/2} x sin nx dx − (2/π) ∫_0^{π/2} y sin(ny − nπ) dy   (substituting y = π − x)
  = (2/π) (1 − (−1)^n) ∫_0^{π/2} x sin nx dx   (since sin(ny − nπ) = (−1)^n sin ny)
  = 0 for n even

and for n odd

  = (4/π) ∫_0^{π/2} x sin nx dx
  = (4/π) [ −x cos nx / n ]_0^{π/2} + (4/π) ∫_0^{π/2} (cos nx / n) dx
  = (4/(πn²)) [ sin nx ]_0^{π/2}   (since cos(nπ/2) = 0 for n odd)
  = (4/(n²π)) (−1)^{(n−1)/2}.

i.e.

θ(x) = Σ_{m=0}^∞ ( 4(−1)^m / ((2m + 1)²π) ) sin (2m + 1)x

Thus we see that

T(x, t) = Σ_{m=0}^∞ ( 4(−1)^m / ((2m + 1)²π) ) e^{−µ(2m+1)²t} sin (2m + 1)x

satisfies

∂T/∂t = µ ∂²T/∂x²
T(0, t) = T(π, t) = 0
T(x, 0) = θ(x)

as desired.
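The coefficients just computed can be confirmed by numerical integration. This sketch approximates (2/π) ∫₀^π θ(x) sin nx dx with a midpoint rule and compares it with 4(−1)^{(n−1)/2}/(n²π) for odd n and 0 for even n:

```python
import math

def theta(x):
    # the tent profile on [0, pi]; only this part matters for the sine coefficients
    return x if x <= math.pi / 2 else math.pi - x

def sine_coeff(n, N=20000):
    # midpoint-rule approximation of (2/pi) * integral_0^pi theta(x) sin(nx) dx
    dx = math.pi / N
    s = sum(theta((i + 0.5) * dx) * math.sin(n * (i + 0.5) * dx) for i in range(N))
    return (2 / math.pi) * s * dx

for n in range(1, 6):
    exact = 0.0 if n % 2 == 0 else 4 * (-1) ** ((n - 1) // 2) / (n**2 * math.pi)
    print(n, round(sine_coeff(n), 6), round(exact, 6))
```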
Note that as t → ∞, e^{−µ(2m+1)²t} → 0. Thus as t → ∞, T(x, t) → 0. As one might have expected, the bar cools towards having a uniform temperature of 0.

The same method (developed by Fourier at the start of the 19th century) allows one to solve the heat equation with any boundary conditions of this form. In fact we have:

FACT. Let f(x) be any (reasonable) function on [0, π] which vanishes at both end points. Then there is a unique function T(x, t) for 0 ≤ x ≤ π, t ≥ 0 such that

∂T/∂t = µ ∂²T/∂x²
T(0, t) = T(π, t) = 0
T(x, 0) = f(x).

EXERCISES

(1) Solve the equation ∂T/∂t = µ ∂²T/∂x² in 0 ≤ x ≤ π, t ≥ 0 subject to T(0, t) = T(π, t) = 0 and T(x, 0) = 4 sin x.

(2) Solve the equation ∂T/∂t = µ ∂²T/∂x² in 0 ≤ x ≤ π, t ≥ 0 subject to T(0, t) = T(π, t) = 0 and

T(x, 0) = { 0 if x ≤ π/4; 1 if π/4 < x < 3π/4; 0 if x ≥ 3π/4 }

[You may assume that T(x, 0) has a Fourier sine series, which can be computed in the same way as when T is continuous.]

(3) Show that T(x, t) = (100/π) x is a solution of ∂T/∂t = ∂²T/∂x² subject to T(0, t) = 0, T(π, t) = 100.

(4) Solve the equation ∂T/∂t = ∂²T/∂x² in 0 ≤ x ≤ π, t ≥ 0 subject to T(0, t) = 0, T(π, t) = 100, T(x, 0) = 0 for 0 ≤ x ≤ π. Describe T(x, t) for very large t. [HINT: look for a solution T(x, t) = (100/π) x + S(x, t).]

(5) Show that if n = 0, 1, 2, 3, . . . then

T(x, t) = e^{−n²µt} cos nx

is a solution of ∂T/∂t = µ ∂²T/∂x² such that ∂T/∂x (0, t) = ∂T/∂x (π, t) = 0. (These boundary conditions correspond to a bar which is completely thermally insulated, even at its ends.)

(6) Solve the equation ∂T/∂t = ∂²T/∂x² in 0 ≤ x ≤ π, t ≥ 0 subject to the boundary conditions ∂T/∂x (0, t) = ∂T/∂x (π, t) = 0 and T(x, 0) = x. Describe T(x, t) for very large t.
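Exercise (4) can be explored numerically along the lines of its hint: write T(x, t) = (100/π) x + S(x, t), so that S solves the heat equation with S(0, t) = S(π, t) = 0 and S(x, 0) = −(100/π) x. The sketch below assumes the standard sine series of x on [0, π] (coefficients 2(−1)^{n+1}/n, not derived in this section) and shows the large-t approach to the steady state:

```python
import math

def T(x, t, N=200):
    # steady state (100/pi) x plus a decaying transient S, as in the hint;
    # S(x, 0) = -(100/pi) x has sine coefficients -(100/pi) * 2 (-1)^{n+1} / n
    steady = 100 / math.pi * x
    S = sum(-(200 / math.pi) * (-1) ** (n + 1) / n
            * math.exp(-n**2 * t) * math.sin(n * x)
            for n in range(1, N + 1))
    return steady + S

x0 = 1.5
print(T(x0, 5.0))          # already close to the steady-state value below
print(100 / math.pi * x0)  # the steady state (100/pi) x at x = 1.5
```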
## 10.4 Partial Differential Equations II

We will discuss two other very standard examples of PDEs.

1) Laplace's Equation

Consider a square copper plate: 0 ≤ x ≤ π, 0 ≤ y ≤ π. The sides y = 0, y = π and x = 0 are maintained at a constant temperature of 0. The point (π, y) is maintained at a temperature

{ y if 0 ≤ y ≤ π/2; π − y if π/2 ≤ y ≤ π }

If the plate is in equilibrium, find the temperature distribution on the plate.

The temperature T(x, y, t) satisfies

∂T/∂t = µ ( ∂²T/∂x² + ∂²T/∂y² ).

If the temperature is constant in time then we must have

∂²T/∂x² + ∂²T/∂y² = 0.

This is called Laplace's equation.

We must solve Laplace's equation in 0 ≤ x ≤ π, 0 ≤ y ≤ π subject to T(x, 0) = T(x, π) = 0, T(0, y) = 0, and

T(π, y) = { y if y ≤ π/2; π − y if y ≥ π/2 }

Again we look for simple solutions to Laplace's equation of the form

T(x, y) = u(x)v(y)

We must then solve

u″(x) = a u(x),  u(0) = 0
v″(y) = −a v(y),  v(0) = v(π) = 0

As for the heat equation, the only non-trivial solutions are for a = n²; n = 1, 2, 3, . . . Then

v(y) = A sin ny
u(x) = B (e^{nx} − e^{−nx}) = 2B sinh(nx)

Thus we get the solutions

c_n sinh(nx) sin(ny)

By linearity,

Σ_{n=1}^∞ c_n sinh(nx) sin(ny)

will also be a solution if the c_n tend to zero sufficiently fast.

We would like to choose c_n such that

Σ_{n=1}^∞ c_n sinh(nπ) sin(ny) = { y if y ≤ π/2; π − y if y ≥ π/2 }

As in the last section we see that

c_n sinh(nπ) = { 0 if n even; 4(−1)^{(n−1)/2}/(n²π) if n odd }

Thus

T(x, y) = Σ_{m=0}^∞ ( 4(−1)^m / ((2m + 1)²π) ) · ( sinh((2m + 1)x) / sinh((2m + 1)π) ) sin((2m + 1)y).

FACT. If C is any smooth simple closed curve in the plane and f is a smooth function on C, then we can find a function T on the interior of C such that

∂²T/∂x² + ∂²T/∂y² = 0

and for (x, y) ∈ C: T(x, y) = f(x, y).

[Figure: a closed curve C with f(x, y) prescribed on the boundary and T(x, y) defined inside.]

2) The Wave Equation

Suppose a violin string of length π is fixed between the points x = 0 and x = π, and suppose the string is plucked with the end points fixed. Describe the movement of the string.

[Figure: the displaced string u(x, t) over the interval from 0 to π.]

Let u(x, t) denote the displacement of the string from the x-axis at time t and at distance x along the x-axis.
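A quick numerical aside on the Laplace problem: the series obtained for T(x, y) can be spot-checked by comparing a partial sum on the edge x = π with the prescribed tent-shaped boundary temperature (a sketch; the truncation level M is an arbitrary choice, kept small enough that sinh does not overflow):

```python
import math

def T(x, y, M=100):
    # partial sum of the series solution of the Laplace boundary problem above
    total = 0.0
    for m in range(M + 1):
        k = 2 * m + 1
        total += (4 * (-1) ** m / (k**2 * math.pi)) \
                 * (math.sinh(k * x) / math.sinh(k * math.pi)) * math.sin(k * y)
    return total

y0 = 1.0  # sample point with y0 < pi/2, so the boundary value at (pi, y0) is y0
print(T(math.pi, y0))  # close to 1.0
print(T(0.0, y0))      # the side x = 0 is held at temperature 0
```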
Then u satisfies the wave equation:

∂²u/∂t² = ∂²u/∂x²

(as long as time is measured in suitable units).

We are looking for solutions satisfying the boundary conditions u(0, t) = u(π, t) = 0. We look again for simple solutions

u(x, t) = v(x)w(t)

and obtain the equations

v″(x) = a v(x),  v(0) = v(π) = 0
w″(t) = a w(t)

As in the previous section, we see we only obtain a non-trivial solution if a = −n² for n an integer. Thus we obtain solutions

a_n sin nt sin nx and b_n cos nt sin nx.

Again linearity gives solutions

Σ ( a_n sin nt + b_n cos nt ) sin nx

where we may in fact allow the sums to become infinite if a_n and b_n tend to zero sufficiently fast as n → ∞.

To get a specific solution we must specify what happens at t = 0. Suppose that at t = 0 the string is stationary with

u(x, 0) = { x/100 if x ≤ π/2; (π − x)/100 if x ≥ π/2 }

Then we require that

Σ_{n=1}^∞ b_n sin nx = { x/100 if x ≤ π/2; (π − x)/100 if x ≥ π/2 }

As in the last section we see that

b_n = { 0 if n even; 4(−1)^{(n−1)/2}/(100 n²π) if n odd }

What about the a_n? They seem to be arbitrary. The point is that the motion of the string depends not only on its initial position, but also on its initial velocity. Using the fact that the string is stationary at t = 0, we see that

Σ ( n a_n cos nt − n b_n sin nt ) sin nx |_{t=0} = 0.

Thus Σ n a_n sin nx = 0, and so a_n = 0 for all n. Thus

u(x, t) = Σ_{m=0}^∞ ( 4(−1)^m / (100(2m + 1)²π) ) cos((2m + 1)t) sin((2m + 1)x).

Notice the difference. The equation ∂T/∂t = ∂²T/∂x² has a unique solution in 0 ≤ x ≤ π, t ≥ 0 if we specify T(0, t) = T(π, t) = 0 and we specify T(x, 0).

On the other hand, the equation ∂²u/∂t² = ∂²u/∂x² has infinitely many solutions in 0 ≤ x ≤ π, t ≥ 0 if we specify u(0, t) = u(π, t) = 0 and we specify u(x, 0). In this case we may also specify ∂u/∂t (x, 0).

In general, it is a subtle question what boundary conditions we can impose for a PDE and still expect a solution or a unique solution.
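The string solution can be checked the same way. This sketch evaluates a partial sum of the series for u(x, t), confirming that at t = 0 it reproduces the plucked shape x/100 (for x ≤ π/2) and that the motion is periodic in t with period 2π, since every mode cos((2m+1)t) is:

```python
import math

def u(x, t, M=2000):
    # partial sum of the series solution for the plucked string above
    total = 0.0
    for m in range(M + 1):
        k = 2 * m + 1
        total += (4 * (-1) ** m / (100 * k**2 * math.pi)) \
                 * math.cos(k * t) * math.sin(k * x)
    return total

x0 = 0.7  # sample point with x0 < pi/2
print(u(x0, 0.0))  # close to x0 / 100 = 0.007
print(abs(u(x0, 1.3) - u(x0, 1.3 + 2 * math.pi)))  # period 2 pi: approximately 0
```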
EXERCISES

(1) Suppose that the boundary of a uniform copper disc is maintained at a temperature T(x, y) = xy. Find the temperature at the center of the disc when the temperature over the disc is constant in time.

(2) A uniform metal square 0 ≤ x ≤ π, 0 ≤ y ≤ π has a temperature distribution which is constant in time. If

T(0, y) = 0,  T(π, y) = { y if y ≤ π/2; π − y if y ≥ π/2 }
T(x, 0) = 0,  T(x, π) = { x if x ≤ π/2; π − x if x ≥ π/2 }

find T(x, y) over the whole square.

(3) A violin string fixed at x = 0 and x = π is initially undisturbed. It is then given a velocity

∂u/∂t (x, 0) = { x if x ≤ π/2; π − x if x ≥ π/2 }

Describe the displacement u(x, t) of the string as a function of position and time.

(4) Show that if f(y) and g(y) are any twice differentiable functions then u(x, t) = f(x + t) + g(x − t) satisfies

∂²u/∂x² = ∂²u/∂t².

If u(0, t) = u(π, t) = 0, show that we must have

f(y) = −g(−y)
f(y + 2π) = f(y)
u(x, t) = f(x + t) − f(t − x).

If further ∂u/∂t (x, 0) = 0, show that f(y) + f(−y) is constant and hence that f(y) = −f(−y).

If u(x, 0) = { x if x ≤ π/2; π − x if x ≥ π/2 }, find f and hence find u(x, t).

(5) Solve the equation ∂²T/∂x² + ∂²T/∂y² = 0 in 0 ≤ x ≤ π, 0 ≤ y ≤ π subject to

∂T/∂x (0, y) = 0,  ∂T/∂x (π, y) = sin 2y
∂T/∂y (x, 0) = 0,  ∂T/∂y (x, π) = 0

(6) Can you solve the equation ∂²T/∂x² + ∂²T/∂y² = 0 in 0 ≤ x ≤ π, 0 ≤ y ≤ π subject to

∂T/∂x (0, y) = 0,  ∂T/∂x (π, y) = sin y
∂T/∂y (x, 0) = 0,  ∂T/∂y (x, π) = 0 ?

[HINT: Apply Green's theorem to ( ∂T/∂x, ∂T/∂y ).]
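For exercise (4), the first identity is easy to test numerically before proving it: with arbitrary smooth sample choices of f and g (the ones below are made up for the check), u(x, t) = f(x + t) + g(x − t) should satisfy ∂²u/∂x² = ∂²u/∂t²:

```python
import math

# arbitrary smooth sample functions for the check
f = lambda y: math.sin(2 * y) + y**3
g = lambda y: math.exp(-y**2)

def u(x, t):
    return f(x + t) + g(x - t)

def second_diff(F, h=1e-4):
    # central second difference (F(h) - 2 F(0) + F(-h)) / h^2
    return (F(h) - 2 * F(0.0) + F(-h)) / h**2

x0, t0 = 0.4, 0.9
u_xx = second_diff(lambda s: u(x0 + s, t0))
u_tt = second_diff(lambda s: u(x0, t0 + s))
print(abs(u_xx - u_tt))  # approximately 0
```

The exercise asks you to explain why this holds exactly, for every f and g.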