
MATH 8430

Fundamental Theory
of
Ordinary Differential Equations
Lecture Notes
Julien Arino
Department of Mathematics
University of Manitoba
Fall 2006
Contents
1 General theory of ODEs 3
1.1 ODEs, IVPs, solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Ordinary differential equation, initial value problem . . . . . . . . . . 3
1.1.2 Solutions to an ODE . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Geometric interpretation . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2 Existence and uniqueness theorems . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.1 Successive approximations . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Local existence and uniqueness: Proof by fixed point . . . . . . . . . 10
1.2.3 Local existence and uniqueness: Proof by successive approximations . 13
1.2.4 Local existence (non-Lipschitz case) . . . . . . . . . . . . . . . . . . . 16
1.2.5 Some examples of existence and uniqueness . . . . . . . . . . . . . . 21
1.3 Continuation of solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.1 Maximal interval of existence . . . . . . . . . . . . . . . . . . . . . . 27
1.3.2 Maximal and global solutions . . . . . . . . . . . . . . . . . . . . . . 28
1.4 Continuous dependence on initial data, on parameters . . . . . . . . . . . . . 29
1.5 Generality of first order systems . . . . . . . . . . . . . . . . . . . . . . . . 32
1.6 Generality of autonomous systems . . . . . . . . . . . . . . . . . . . . . . . . 34
1.7 Suggested reading, Further problems . . . . . . . . . . . . . . . . . . . . . . 34
2 Linear systems 35
2.1 Existence and uniqueness of solutions . . . . . . . . . . . . . . . . . . . . . . 35
2.2 Linear systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.2.1 The vector space of solutions . . . . . . . . . . . . . . . . . . . . . . 37
2.2.2 Fundamental matrix solution . . . . . . . . . . . . . . . . . . . . . . 38
2.2.3 Resolvent matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.2.4 Wronskian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.2.5 Autonomous linear systems . . . . . . . . . . . . . . . . . . . . . . . 43
2.3 Affine systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3.1 The space of solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3.2 Construction of solutions . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3.3 Affine systems with constant coefficients . . . . . . . . . . . . . . . . 47
2.4 Systems with periodic coefficients . . . . . . . . . . . . . . . . . . . . . . . . 48
2.4.1 Linear systems: Floquet theory . . . . . . . . . . . . . . . . . . . . . 48
Fund. Theory ODE Lecture Notes J. Arino
2.4.2 Affine systems: the Fredholm alternative . . . . . . . . . . . . . . . . 51
2.5 Further developments, bibliographical notes . . . . . . . . . . . . . . . . . . 54
2.5.1 A variation of constants formula for a nonlinear system with a linear
component . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3 Stability of linear systems 57
3.1 Stability at fixed points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2 Affine systems with small coefficients . . . . . . . . . . . . . . . . . . . . . . 57
4 Linearization 63
4.1 Some linear stability theory . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.2 The stable manifold theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3 The Hartman-Grobman theorem . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4 Example of application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.1 A chemostat model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.4.2 A second example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5 Exponential dichotomy 79
5.1 Exponential dichotomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Existence of exponential dichotomy . . . . . . . . . . . . . . . . . . . . . . . 81
5.3 First approximate theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.4 Stability of exponential dichotomy . . . . . . . . . . . . . . . . . . . . . . . . 84
5.5 Generality of exponential dichotomy . . . . . . . . . . . . . . . . . . . . . . 84
References 85
A Definitions and results 87
A.1 Vector spaces, norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.1.1 Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.1.2 Matrix norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.1.3 Supremum (or operator) norm . . . . . . . . . . . . . . . . . . . . . . 87
A.2 An inequality involving norms and integrals . . . . . . . . . . . . . . . . . . 88
A.3 Types of convergences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
A.4 Asymptotic Notations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.5 Types of continuities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A.6 Lipschitz function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
A.7 Gronwall's lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
A.8 Fixed point theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
A.9 Jordan normal form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.10 Matrix exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
A.11 Matrix logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
A.12 Spectral theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
B Problem sheets 101
Homework sheet 1 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Homework sheet 2 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Homework sheet 3 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Homework sheet 4 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Final examination 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Homework sheet 1 2006 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Introduction
This course deals with the elementary theory of ordinary differential equations. The word elementary should not be understood as simple. The underlying assumption here is that, to understand the more advanced topics in the analysis of nonlinear systems, it is important to have a good understanding of how solutions to differential equations are constructed.

If you are taking this course, you most likely know how to analyze systems of nonlinear ordinary differential equations. You know, for example, that in order for solutions to a system to exist and be unique, the system must have a $C^1$ vector field. What you do not necessarily know is why that is. This is the object of Chapter 1, where we consider the general theory of existence and uniqueness of solutions. We also consider the continuation of solutions as well as continuous dependence on initial data and on parameters.
In Chapter 2, we explore linear systems. We first consider homogeneous linear systems, then linear systems in full generality. Homogeneous linear systems are linked to the theory for nonlinear systems by means of linearization, which we study in Chapter 4, in which we show that the behavior of nonlinear systems can be approximated, in the vicinity of a hyperbolic equilibrium point, by a homogeneous linear system. As for autonomous systems, nonautonomous nonlinear systems are linked to a linearized form, this time through exponential dichotomy, which is explained in Chapter 5.
Chapter 1
General theory of ODEs
We begin with the general theory of ordinary differential equations (ODEs). First, we define ODEs, initial value problems (IVPs) and solutions to ODEs and IVPs in Section 1.1. In Section 1.2, we discuss existence and uniqueness of solutions to IVPs.
1.1 ODEs, IVPs, solutions
1.1.1 Ordinary differential equation, initial value problem
Definition 1.1.1 (ODE). An $n$th order ordinary differential equation (ODE) is a functional relationship taking the form

$$F\left(t, x(t), \frac{d}{dt}x(t), \frac{d^2}{dt^2}x(t), \ldots, \frac{d^n}{dt^n}x(t)\right) = 0,$$

that involves an independent variable $t \in I \subset \mathbb{R}$, an unknown function $x(t) \in D \subset \mathbb{R}^n$ of the independent variable, its derivative and derivatives of order up to $n$. For simplicity, the time dependence of $x$ is often omitted, and we in general write equations as

$$F\left(t, x, x', x'', \ldots, x^{(n)}\right) = 0, \qquad (1.1)$$

where $x^{(n)}$ denotes the $n$th order derivative of $x$. An equation such as (1.1) is said to be in general (or implicit) form.
An equation is said to be in normal (or explicit) form when it is written as

$$x^{(n)} = f\left(t, x, x', x'', \ldots, x^{(n-1)}\right).$$

Note that it is not always possible to write a differential equation in normal form, as it can be impossible to solve $F(t, x, \ldots, x^{(n)}) = 0$ in terms of $x^{(n)}$.
Definition 1.1.2 (First-order ODE). In the following, we consider for simplicity the more restrictive case of a first-order ordinary differential equation in normal form

$$x' = f(t, x). \qquad (1.2)$$

Note that the theory developed here holds usually for $n$th order equations; see Section 1.5. The function $f$ is assumed continuous and real valued on a set $U \subset \mathbb{R} \times \mathbb{R}^n$.
Definition 1.1.3 (Initial value problem). An initial value problem (IVP) for equation (1.2) is given by

$$x' = f(t, x), \qquad x(t_0) = x_0, \qquad (1.3)$$

where $f$ is continuous and real valued on a set $U \subset \mathbb{R} \times \mathbb{R}^n$, with $(t_0, x_0) \in U$.

Remark. The assumption that $f$ be continuous can be relaxed; piecewise continuity only is needed. However, this leads in general to much more complicated problems and is beyond the scope of this course. Hence, unless otherwise stated, we assume that $f$ is at least continuous. The function $f$ could also be complex valued, but this too is beyond the scope of this course.
Remark. An IVP for an $n$th order differential equation takes the form

$$x^{(n)} = f(t, x, x', \ldots, x^{(n-1)}), \qquad x(t_0) = x_0,\ x'(t_0) = x'_0,\ \ldots,\ x^{(n-1)}(t_0) = x^{(n-1)}_0,$$

i.e., initial conditions have to be given for derivatives up to order $n-1$.
We have already seen that the order of an ODE is the order of the highest derivative involved in the equation. An equation is then classified as a function of its linearity. A linear equation is one in which the vector field $f$ takes the form

$$f(t, x) = a(t)x(t) + b(t).$$

If $b(t) = 0$ for all $t$, the equation is linear homogeneous; otherwise it is linear nonhomogeneous. If the vector field $f$ depends only on $x$, i.e., $f(t, x) = f(x)$ for all $t$, then the equation is autonomous; otherwise, it is nonautonomous. Thus, a linear equation is autonomous if $a(t) \equiv a$ and $b(t) \equiv b$ for all $t$. Nonlinear equations are those that are not linear. They too can be autonomous or nonautonomous.

Other types of classifications exist for ODEs, which we shall not deal with here, the previous ones being the only ones we will need.
1.1.2 Solutions to an ODE
Definition 1.1.4 (Solution). A function $\phi(t)$ (or $\phi$, for short) is a solution to the ODE (1.2) if it satisfies this equation, that is, if

$$\phi'(t) = f(t, \phi(t))$$

for all $t \in I \subset \mathbb{R}$, an open interval such that $(t, \phi(t)) \in U$ for all $t \in I$.

The notations $\phi$ and $x$ are used indifferently for the solution. However, in this chapter, to emphasize the difference between the equation and its solution, we will try as much as possible to use the notation $x$ for the unknown and $\phi$ for the solution.
Definition 1.1.5 (Integral form of the solution). The function

$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \qquad (1.4)$$

is called the integral form of the solution to the IVP (1.3).

Let $R = R((t_0, x_0), a, b)$ be the domain defined, for $a > 0$ and $b > 0$, by

$$R = \{(t, x) : |t - t_0| \le a,\ \|x - x_0\| \le b\},$$

where $\|\cdot\|$ is any appropriate norm of $\mathbb{R}^n$. This domain is illustrated in Figures 1.1 and 1.2; it is sometimes called a security system, i.e., the union of a security interval (for the independent variable) and a security domain (for the dependent variables) [19].

[Figure 1.1: The domain $R$ for $D = \mathbb{R}$.]

[Figure 1.2: The domain $R$ for $D = \mathbb{R}^2$: security tube.]

Suppose that $f$ is continuous on $R$, and let $M = \max_{R} \|f(t, x)\|$, which exists since $f$ is continuous on the compact set $R$.
In the following, existence of solutions will be obtained generally in relation to the domain $R$ by considering a subset of the time interval $|t - t_0| \le a$ defined by $|t - t_0| \le \alpha$, with

$$\alpha = \begin{cases} a & \text{if } M = 0, \\ \min\left(a, \dfrac{b}{M}\right) & \text{if } M > 0. \end{cases}$$
This choice of $\alpha = \min(a, b/M)$ is natural. We endow $f$ with specific properties (continuity, Lipschitz, etc.) on the domain $R$. Thus, in order to be able to use the definition of $\phi(t)$ as the solution of $x' = f(t, x)$, we must be working in $R$. So we require that $|t - t_0| \le a$ and $\|x - x_0\| \le b$. In order to satisfy the first of these conditions, choosing $\alpha \le a$ and working on $|t - t_0| \le \alpha$ implies of course that $|t - t_0| \le a$. The requirement that $\alpha \le b/M$ comes from the following argument. If we assume that $\phi(t)$ is a solution of (1.3) defined on $[t_0, t_0 + \alpha]$, then we have, for $t \in [t_0, t_0 + \alpha]$,

$$\|\phi(t) - x_0\| = \left\| \int_{t_0}^{t} f(s, \phi(s))\,ds \right\| \le \int_{t_0}^{t} \|f(s, \phi(s))\|\,ds \le M \int_{t_0}^{t} ds = M(t - t_0),$$

where the first inequality is a consequence of the definition of the integrals by Riemann sums (Lemma A.2.1 in Appendix A.2). Similarly, we have $\|\phi(t) - x_0\| \le M(t_0 - t)$ for all $t \in [t_0 - \alpha, t_0]$. Thus, for $|t - t_0| \le \alpha$, $\|\phi(t) - x_0\| \le M|t - t_0|$. Suppose now that $\alpha \le b/M$. It follows that $\|\phi - x_0\| \le M|t - t_0| \le M\alpha \le M\,b/M = b$. Taking $\alpha = \min(a, b/M)$ then ensures that both $|t - t_0| \le a$ and $\|\phi - x_0\| \le b$ hold simultaneously.
The following two theorems deal with the localization of the solutions to an IVP. They make more precise the previous discussion. Note that for the moment, the existence of a solution is only assumed. First, we establish that the security system described above performs properly, in the sense that a solution on a smaller time interval stays within the security domain.

Theorem 1.1.6. If $\phi(t)$ is a solution of the IVP (1.3) in an interval $|t - t_0| < \alpha$, then $\|\phi(t) - x_0\| < b$ in $|t - t_0| < \alpha$, i.e., $(t, \phi(t)) \in R((t_0, x_0), \alpha, b)$ for $|t - t_0| < \alpha$.
Proof. Assume that $\phi$ is a solution with $(t, \phi(t)) \notin R((t_0, x_0), \alpha, b)$. Since $\phi$ is continuous, it follows that there exists $0 < \beta < \alpha$ such that

$$\Big( \|\phi(t) - x_0\| < b \text{ for } |t - t_0| < \beta \Big) \text{ and } \Big( \|\phi(t_0 + \beta) - x_0\| = b \text{ or } \|\phi(t_0 - \beta) - x_0\| = b \Big), \qquad (1.5)$$

i.e., the solution escapes the security domain at $t = t_0 \pm \beta$. Since $\alpha \le a$, $\beta < a$. Thus $(t, \phi(t)) \in R$ for $|t - t_0| \le \beta$.

Thus $\|f(t, \phi(t))\| \le M$ for $|t - t_0| \le \beta$. Since $\phi$ is a solution, we have that $\phi'(t) = f(t, \phi(t))$ and $\phi(t_0) = x_0$. Thus

$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \quad \text{for } |t - t_0| \le \beta.$$

Hence

$$\|\phi(t) - x_0\| = \left\| \int_{t_0}^{t} f(s, \phi(s))\,ds \right\| \le M|t - t_0| \quad \text{for } |t - t_0| \le \beta.$$

As a consequence,

$$\|\phi(t) - x_0\| \le M\beta < M\alpha \le M\,\frac{b}{M} = b \quad \text{for } |t - t_0| \le \beta.$$

In particular, $\|\phi(t_0 \pm \beta) - x_0\| < b$. Hence a contradiction with (1.5).
The following theorem is proved using the same sort of technique as in the proof of Theorem 1.1.6. It links the variation of the solution to the nature of the vector field.

Theorem 1.1.7. If $\phi(t)$ is a solution of the IVP (1.3) in an interval $|t - t_0| < \alpha$, then $\|\phi(t_1) - \phi(t_2)\| \le M|t_1 - t_2|$ whenever $t_1, t_2$ are in the interval $|t - t_0| < \alpha$.
Proof. Let us begin by considering $t \ge t_0$. On $t_0 \le t \le t_0 + \alpha$,

$$\phi(t_1) - \phi(t_2) = x_0 + \int_{t_0}^{t_1} f(s, \phi(s))\,ds - x_0 - \int_{t_0}^{t_2} f(s, \phi(s))\,ds = \begin{cases} -\displaystyle\int_{t_1}^{t_2} f(s, \phi(s))\,ds & \text{if } t_2 > t_1, \\ \displaystyle\int_{t_2}^{t_1} f(s, \phi(s))\,ds & \text{if } t_1 > t_2. \end{cases}$$

In either case, taking norms and using $\|f(s, \phi(s))\| \le M$ on $R$ gives $\|\phi(t_1) - \phi(t_2)\| \le M|t_1 - t_2|$. The case $t \le t_0$ is treated similarly.
Now we can see formally what is needed for a solution.

Theorem 1.1.8. Suppose $f$ is continuous on an open set $U \subset \mathbb{R} \times \mathbb{R}^n$. Let $(t_0, x_0) \in U$, and $\phi$ be a function defined on an open set $I$ of $\mathbb{R}$ such that $t_0 \in I$. Then $\phi$ is a solution of the IVP (1.3) if, and only if,

i) $\forall t \in I$, $(t, \phi(t)) \in U$.

ii) $\phi$ is continuous on $I$.

iii) $\forall t \in I$, $\phi(t) = x_0 + \displaystyle\int_{t_0}^{t} f(s, \phi(s))\,ds$.
Proof. ($\Rightarrow$) Let us suppose that $\phi' = f(t, \phi)$ for all $t \in I$ and that $\phi(t_0) = x_0$. Then for all $t \in I$, $(t, \phi(t)) \in U$ (i). Also, $\phi$ is differentiable and thus continuous on $I$ (ii). Finally, $\phi'(s) = f(s, \phi(s))$, so integrating both sides from $t_0$ to $t$,

$$\phi(t) - \phi(t_0) = \int_{t_0}^{t} f(s, \phi(s))\,ds,$$

and thus

$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$

hence (iii).

($\Leftarrow$) Assume i), ii) and iii). Then $\phi$ is differentiable on $I$ and $\phi'(t) = f(t, \phi(t))$ for all $t \in I$. From iii), $\phi(t_0) = x_0 + \int_{t_0}^{t_0} f(s, \phi(s))\,ds = x_0$.
Note that Theorem 1.1.8 states that $\phi$ should be continuous, whereas the solution should of course be $C^1$, for its derivative needs to be continuous. However, this is implied by point iii). In fact, more generally, the following result holds about the regularity of solutions.

Theorem 1.1.9 (Regularity). Let $f : U \to \mathbb{R}^n$, with $U$ an open set of $\mathbb{R} \times \mathbb{R}^n$. Suppose that $f \in C^k$. Then all solutions of (1.2) are of class $C^{k+1}$.

Proof. The proof is obvious, since a solution $\phi$ is such that $\phi' \in C^k$.
1.1.3 Geometric interpretation
The function $f$ is the vector field of the equation. At every point in $(t, x)$ space, a solution $\phi$ is tangent to the value of the vector field at that point. A particular consequence of this fact is the following theorem.

Theorem 1.1.10. Let $x' = f(x)$ be a scalar autonomous differential equation. Then the solutions of this equation are monotone.

Proof. The direction field of an autonomous scalar differential equation consists of vectors that are parallel for all $t$ (since $f(t, x) = f(x)$ for all $t$). Suppose that a solution $\phi$ of $x' = f(x)$ is non monotone. Then this means that, given an initial point $(t_0, x_0)$, one of the following two occurs, as illustrated in Figure 1.3.

i) $f(x_0) \ne 0$ and there exists $t_1$ such that $\phi(t_1) = x_0$.

ii) $f(x_0) = 0$ and there exists $t_1$ such that $\phi(t_1) \ne x_0$.
[Figure 1.3: Situations that would lead to a scalar autonomous differential equation having nonmonotone solutions.]
Suppose we are in case i), and assume we are in the case $f(x_0) > 0$. Thus, the solution curve is increasing at $(t_0, x_0)$, i.e., $\phi'(t_0) > 0$. As $\phi$ is continuous, i) implies that there exists $t_2 \in (t_0, t_1)$ such that $\phi(t_2)$ is a maximum, with $\phi$ increasing for $t \in [t_0, t_2)$ and decreasing for $t \in (t_2, t_1]$. It follows that $\phi'(t_1) \le 0$; but since $\phi(t_1) = x_0$, we have $\phi'(t_1) = f(\phi(t_1)) = f(x_0) = \phi'(t_0) > 0$, which is a contradiction.

Now assume that we are in case ii). Then there exists $t_2 \in (t_0, t_1)$ with $\phi(t_2) = x_0$ but such that $\phi'(t_2) \ne 0$. This is a contradiction, since $\phi'(t_2) = f(\phi(t_2)) = f(x_0) = 0$.
Remark. If we have uniqueness of solutions, it follows from this theorem that if $\phi_1$ and $\phi_2$ are two solutions of the scalar autonomous differential equation $x' = f(x)$, then $\phi_1(t_0) < \phi_2(t_0)$ implies that $\phi_1(t) < \phi_2(t)$ for all $t$.

Remark. Be careful: Theorem 1.1.10 is only true for scalar equations.
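The monotonicity asserted by Theorem 1.1.10 is easy to observe numerically. The sketch below is an illustration only (the logistic field $f(x) = x(1-x)$, the initial value $0.1$, and the Euler scheme are assumptions chosen for this example): it integrates a scalar autonomous equation and checks that the computed path is nondecreasing.

```python
# Integrate x' = f(x), x(0) = x0, with Euler's method and check monotonicity.
def euler_path(f, x0, t_end, n):
    """Return the Euler approximation of x' = f(x), x(0) = x0, on [0, t_end]."""
    h = t_end / n
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + h * f(xs[-1]))
    return xs

path = euler_path(lambda x: x * (1.0 - x), 0.1, 10.0, 1000)
increasing = all(a <= b for a, b in zip(path, path[1:]))
# For 0 < x0 < 1 the field is positive, so the path is nondecreasing
# and approaches the equilibrium x = 1.
```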
1.2 Existence and uniqueness theorems
Several approaches can be used to show existence and/or uniqueness of solutions. In Sections 1.2.2 and 1.2.3, we take a direct path: using either a fixed point method (Section 1.2.2) or an iterative approach (Section 1.2.3), we obtain existence and uniqueness of solutions under the assumption that the vector field is Lipschitz. In Section 1.2.4, the Lipschitz assumption is dropped and therefore a different approach must be used, namely that of approximate solutions, with which only existence can be established.

1.2.1 Successive approximations

Picard's successive approximation method consists in using the integral form (1.4) of the solution to the IVP (1.3) to construct a sequence of approximations of the solution that converges to the solution. The steps followed in constructing this approximating sequence are the following.
Step 1. Start with an initial estimate of the solution; say, the constant function $\phi_0(t) \equiv x_0$, for $|t - t_0| \le h$. Evidently, this function satisfies the initial condition.

Step 2. Use $\phi_0$ in (1.4) to define the second element in the sequence:

$$\phi_1(t) = x_0 + \int_{t_0}^{t} f(s, \phi_0(s))\,ds.$$

Step 3. Use $\phi_1$ in (1.4) to define the third element in the sequence:

$$\phi_2(t) = x_0 + \int_{t_0}^{t} f(s, \phi_1(s))\,ds.$$

. . .

Step n. Use $\phi_{n-1}$ in (1.4) to define the $n$th element in the sequence:

$$\phi_n(t) = x_0 + \int_{t_0}^{t} f(s, \phi_{n-1}(s))\,ds.$$
At this stage, there are two major ways to tackle the problem, both using the same idea: if we can prove that the sequence $(\phi_n)$ converges, and that the limit happens to satisfy the differential equation, then we have the solution to the IVP (1.3). The first method (Section 1.2.2) uses a fixed point approach. The second method (Section 1.2.3) studies explicitly the limit.
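The steps above can be sketched numerically by carrying out the Picard iterates on a fixed time grid. This is an illustration only; the trapezoid quadrature and the test field $f(t,x) = -x$ (whose solution is $e^{-t}$) are assumptions made here, not part of the notes.

```python
# Discretized Picard iteration:
#   phi_{k+1}(t) = x0 + integral_{t0}^{t} f(s, phi_k(s)) ds,
# with the integral approximated by the cumulative trapezoid rule.
import math

def picard(f, t0, x0, t_end, n_grid=400, n_iter=25):
    """Iterate the Picard operator on a grid; return the grid and last iterate."""
    h = (t_end - t0) / n_grid
    ts = [t0 + i * h for i in range(n_grid + 1)]
    phi = [x0] * (n_grid + 1)                  # phi_0 is the constant x0
    for _ in range(n_iter):
        vals = [f(t, x) for t, x in zip(ts, phi)]
        new = [x0]
        for i in range(n_grid):                # cumulative trapezoid rule
            new.append(new[-1] + h * (vals[i] + vals[i + 1]) / 2)
        phi = new
    return ts, phi

ts, phi = picard(lambda t, x: -x, 0.0, 1.0, 2.0)
err = max(abs(p - math.exp(-t)) for t, p in zip(ts, phi))
# err is small: the iterates converge uniformly to the solution e^{-t}.
```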
1.2.2 Local existence and uniqueness: Proof by fixed point

Here are two slightly different formulations of the same theorem, which establishes that if the vector field is continuous and Lipschitz, then the solutions exist and are unique. We prove the result in the second case. For the definition of a Lipschitz function, see Section A.6 in the Appendix.

Theorem 1.2.1 (Picard local existence and uniqueness). Assume $f : U \subset \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$ is continuous, and that $f(t, x)$ satisfies a Lipschitz condition in $U$ with respect to $x$. Then, given any point $(t_0, x_0) \in U$, there exists a unique solution of (1.3) on some interval containing $t_0$ in its interior.
Theorem 1.2.2 (Picard local existence and uniqueness). Consider the IVP (1.3), and assume $f$ is (piecewise) continuous in $t$ and satisfies the Lipschitz condition

$$\|f(t, x_1) - f(t, x_2)\| \le L \|x_1 - x_2\|$$

for all $x_1, x_2 \in B = \{x : \|x - x_0\| \le b\}$ and all $t$ such that $|t - t_0| \le a$. Then there exists some

$$0 < \delta \le \min\left(a, \frac{b}{M}\right)$$

such that (1.3) has a unique solution in $|t - t_0| \le \delta$.
To set up the proof, we proceed as follows. Define the operator $F$ by

$$F : x \mapsto x_0 + \int_{t_0}^{t} f(s, x(s))\,ds.$$

Note that the function $(Fx)(t)$ is a continuous function of $t$. Then Picard's successive approximations take the form $\phi_1 = F\phi_0$, $\phi_2 = F\phi_1 = F^2\phi_0$, where $F^2$ represents $F \circ F$. Iterating, the general term is given for $k = 0, \ldots$ by

$$\phi_k = F^k \phi_0.$$

Therefore, finding the limit $\lim_{k\to\infty} \phi_k$ is equivalent to finding the function $\phi$, solution of the fixed point problem

$$x = Fx,$$

with $x$ a continuously differentiable function. Thus, a solution of (1.3) is a fixed point of $F$, and we aim to use the contraction mapping principle to verify the existence (and uniqueness) of such a fixed point. We follow the proof of [14, p. 56-58].
Proof. We show the result on the interval $t \ge t_0$. The proof for the interval $t \le t_0$ is similar. Let $X$ be the space of continuous functions defined on the interval $[t_0, t_0 + \delta]$, $X = C([t_0, t_0 + \delta])$, that we endow with the sup norm, i.e., for $x \in X$,

$$\|x\|_c = \max_{t \in [t_0, t_0 + \delta]} \|x(t)\|.$$

Recall that this norm is the norm of uniform convergence. Let then

$$S = \{x \in X : \|x - x_0\|_c \le b\}.$$

Of course, $S \subset X$. Furthermore, $S$ is closed, and $X$ with the sup norm is a complete metric space. Note that we have transformed the problem into a problem involving the space of continuous functions; hence we are now in an infinite dimensional case. The proof proceeds in 3 steps.
Step 1. We begin by showing that $F : S \to S$. From (1.4),

$$(F\phi)(t) - x_0 = \int_{t_0}^{t} f(s, \phi(s))\,ds = \int_{t_0}^{t} \big[ f(s, \phi(s)) - f(s, x_0) + f(s, x_0) \big]\,ds.$$

Therefore, by the triangle inequality,

$$\|F\phi - x_0\| \le \int_{t_0}^{t} \big[ \|f(s, \phi(s)) - f(s, x_0)\| + \|f(s, x_0)\| \big]\,ds.$$

As $f$ is (piecewise) continuous, it is bounded on $[t_0, t_1]$ and there exists $M = \max_{t \in [t_0, t_1]} \|f(t, x_0)\|$. Thus

$$\|F\phi - x_0\| \le \int_{t_0}^{t} \big[ \|f(s, \phi(s)) - f(s, x_0)\| + M \big]\,ds \le \int_{t_0}^{t} \big[ L\|\phi(s) - x_0\| + M \big]\,ds,$$

since $f$ is Lipschitz. Since $\phi \in S$ implies $\|\phi - x_0\| \le b$, we have that for all $\phi \in S$,

$$\|F\phi - x_0\| \le \int_{t_0}^{t} (Lb + M)\,ds \le (t - t_0)(Lb + M).$$

As $t \in [t_0, t_0 + \delta]$, $(t - t_0) \le \delta$, and thus

$$\|F\phi - x_0\|_c = \max_{[t_0, t_0+\delta]} \|F\phi - x_0\| \le \delta(Lb + M).$$

Choose then $\delta$ such that $\delta \le b/(Lb + M)$, i.e., $t$ sufficiently close to $t_0$. Then we have

$$\|F\phi - x_0\|_c \le b.$$

This implies that for $\phi \in S$, $F\phi \in S$, i.e., $F : S \to S$.
Step 2. We now show that $F$ is a contraction. Let $\phi_1, \phi_2 \in S$,

$$\|(F\phi_1)(t) - (F\phi_2)(t)\| = \left\| \int_{t_0}^{t} \big[ f(s, \phi_1(s)) - f(s, \phi_2(s)) \big]\,ds \right\| \le \int_{t_0}^{t} \|f(s, \phi_1(s)) - f(s, \phi_2(s))\|\,ds \le \int_{t_0}^{t} L\|\phi_1(s) - \phi_2(s)\|\,ds \le L\|\phi_1 - \phi_2\|_c \int_{t_0}^{t} ds,$$

and thus

$$\|F\phi_1 - F\phi_2\|_c \le L\delta\,\|\phi_1 - \phi_2\|_c \le \rho\,\|\phi_1 - \phi_2\|_c \quad \text{for } \delta \le \frac{\rho}{L}.$$

Thus, choosing $\rho < 1$ and $\delta \le \rho/L$, $F$ is a contraction. Since, by Step 1, $F : S \to S$, the contraction mapping principle (Theorem A.11) implies that $F$ has a unique fixed point in $S$, and (1.3) has a unique solution in $S$.
Step 3. It remains to be shown that any solution in $X$ is in fact in $S$ (since it is on $X$ that we want to show the result). Considering a solution starting at $x_0$ at time $t_0$, the solution leaves $S$ if there exists a $t > t_0$ such that $\|\phi(t) - x_0\| = b$, i.e., the solution crosses the border of $B$. Let $\tau > t_0$ be the first such $t$. For all $t_0 \le t \le \tau$,

$$\|\phi(t) - x_0\| \le \int_{t_0}^{t} \big[ \|f(s, \phi(s)) - f(s, x_0)\| + \|f(s, x_0)\| \big]\,ds \le \int_{t_0}^{t} \big[ L\|\phi(s) - x_0\| + M \big]\,ds \le \int_{t_0}^{t} (Lb + M)\,ds.$$

As a consequence,

$$b = \|\phi(\tau) - x_0\| \le (Lb + M)(\tau - t_0).$$

Writing $\tau = t_0 + \mu$ for some $\mu > 0$, this bound gives $\mu \ge b/(Lb + M)$; it follows that if

$$\delta \le \frac{b}{Lb + M},$$

then the solution is confined to $B$.
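The contraction estimate of Step 2 can be observed numerically: applying the Picard operator to two functions shrinks their sup distance by roughly the factor $L\delta$. The sketch below is an illustration only; the field $f(t,x) = -x$ (so $L = 1$), the interval length $\delta = 0.5$, and the two test functions are assumptions chosen for the example.

```python
# Apply the Picard operator F(phi)(t) = x0 + integral f(s, phi(s)) ds
# (trapezoid rule) to two functions and compare sup distances.
def picard_op(f, x0, ts, phi):
    """One application of the Picard operator on the grid ts."""
    vals = [f(t, x) for t, x in zip(ts, phi)]
    out = [x0]
    for i in range(len(ts) - 1):
        out.append(out[-1] + (ts[i + 1] - ts[i]) * (vals[i] + vals[i + 1]) / 2)
    return out

delta, n = 0.5, 500
ts = [delta * i / n for i in range(n + 1)]
phi1 = [1.0 for _ in ts]                 # two test functions in S
phi2 = [1.0 + 0.3 * t for t in ts]
d_before = max(abs(a - b) for a, b in zip(phi1, phi2))
F1 = picard_op(lambda t, x: -x, 1.0, ts, phi1)
F2 = picard_op(lambda t, x: -x, 1.0, ts, phi2)
d_after = max(abs(a - b) for a, b in zip(F1, F2))
ratio = d_after / d_before               # at most L * delta = 0.5 here
```

For these particular test functions the distance actually shrinks by a factor of 4, comfortably inside the guaranteed bound $L\delta = 0.5$.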
Note that the condition $x_1, x_2 \in B = \{x : \|x - x_0\| \le b\}$ in the statement of the theorem refers to a local Lipschitz condition. If the function $f$ is globally Lipschitz, then the following theorem holds.

Theorem 1.2.3 (Global existence). Suppose that $f$ is piecewise continuous in $t$ and is Lipschitz on $U = I \times D$. Then (1.3) admits a unique solution on $I$.
1.2.3 Local existence and uniqueness: Proof by successive approximations

Using the method of successive approximations, we can prove the following theorem.

Theorem 1.2.4. Suppose that $f$ is continuous on a domain $R$ of the $(t, x)$-plane defined, for $a, b > 0$, by $R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\}$, and that $f$ is locally Lipschitz in $x$ on $R$. Let then, as previously defined,

$$M = \sup_{(t,x) \in R} \|f(t, x)\| < \infty$$

and

$$\alpha = \min\left(a, \frac{b}{M}\right).$$

Then the sequence defined by

$$\phi_0(t) = x_0, \qquad |t - t_0| \le \alpha,$$
$$\phi_i(t) = x_0 + \int_{t_0}^{t} f(s, \phi_{i-1}(s))\,ds, \qquad i \ge 1,\ |t - t_0| \le \alpha,$$

converges uniformly on the interval $|t - t_0| \le \alpha$ to $\phi$, the unique solution of (1.3).
Proof. We follow [20, p. 3-6].

Existence. Suppose that $|t - t_0| \le \alpha$. Then

$$\|\phi_1 - \phi_0\| = \left\| \int_{t_0}^{t} f(s, \phi_0(s))\,ds \right\| \le M|t - t_0| \le M\alpha \le b,$$

from the definitions of $M$ and $\alpha$, and thus $\|\phi_1 - \phi_0\| \le b$. So $\int_{t_0}^{t} f(s, \phi_1(s))\,ds$ is defined for $|t - t_0| \le \alpha$, and, for $|t - t_0| \le \alpha$,

$$\|\phi_2(t) - \phi_0\| = \left\| \int_{t_0}^{t} f(s, \phi_1(s))\,ds \right\| \le \int_{t_0}^{t} \|f(s, \phi_1(s))\|\,ds \le M\alpha \le b.$$
All subsequent terms in the sequence can be similarly defined, and, by induction, for $|t - t_0| \le \alpha$,

$$\|\phi_k(t) - \phi_0\| \le M\alpha \le b, \qquad k = 1, \ldots, n.$$

Now, for $|t - t_0| \le \alpha$,

$$\|\phi_{k+1}(t) - \phi_k(t)\| = \left\| x_0 + \int_{t_0}^{t} f(s, \phi_k(s))\,ds - x_0 - \int_{t_0}^{t} f(s, \phi_{k-1}(s))\,ds \right\| = \left\| \int_{t_0}^{t} \big[ f(s, \phi_k(s)) - f(s, \phi_{k-1}(s)) \big]\,ds \right\| \le L \int_{t_0}^{t} \|\phi_k(s) - \phi_{k-1}(s)\|\,ds,$$

where the inequality results from the fact that $f$ is locally Lipschitz in $x$ on $R$.

We now prove that, for all $k$,

$$\|\phi_{k+1} - \phi_k\| \le b\,\frac{(L|t - t_0|)^k}{k!} \quad \text{for } |t - t_0| \le \alpha. \qquad (1.6)$$

Indeed, (1.6) holds for $k = 1$, as previously established. Assume that (1.6) holds for $k = n$. Then

$$\|\phi_{n+2} - \phi_{n+1}\| = \left\| \int_{t_0}^{t} \big[ f(s, \phi_{n+1}(s)) - f(s, \phi_n(s)) \big]\,ds \right\| \le \int_{t_0}^{t} L\|\phi_{n+1}(s) - \phi_n(s)\|\,ds \le \int_{t_0}^{t} L\,b\,\frac{(L|s - t_0|)^n}{n!}\,ds \quad \text{for } |t - t_0| \le \alpha$$
$$\le b\,\frac{L^{n+1}}{n!} \left[ \frac{|s - t_0|^{n+1}}{n+1} \right]_{s=t_0}^{s=t} = b\,\frac{(L|t - t_0|)^{n+1}}{(n+1)!},$$

and thus (1.6) holds for all $k$.

Thus, for $N > n$ we have

$$\|\phi_N(t) - \phi_n(t)\| \le \sum_{k=n}^{N-1} \|\phi_{k+1}(t) - \phi_k(t)\| \le \sum_{k=n}^{N-1} b\,\frac{(L|t - t_0|)^k}{k!} \le b \sum_{k=n}^{N-1} \frac{(L\alpha)^k}{k!}.$$
The rightmost term in this expression tends to zero as $n \to \infty$. Therefore, $\phi_k(t)$ converges uniformly to a function $\phi(t)$ on the interval $|t - t_0| \le \alpha$. As the convergence is uniform, the limit function $\phi$ is continuous. Moreover, $\phi(t_0) = x_0$. Indeed, $\phi_N(t) = \phi_0(t) + \sum_{k=1}^{N} (\phi_k(t) - \phi_{k-1}(t))$, so $\phi(t) = \phi_0(t) + \sum_{k=1}^{\infty} (\phi_k(t) - \phi_{k-1}(t))$.

The fact that $\phi$ is a solution of (1.3) follows from the following result: if a sequence of functions $\phi_k(t)$ converges uniformly and the $\phi_k(t)$ are continuous on the interval $|t - t_0| \le \alpha$, then

$$\lim_{n\to\infty} \int_{t_0}^{t} \phi_n(s)\,ds = \int_{t_0}^{t} \lim_{n\to\infty} \phi_n(s)\,ds.$$

Hence,

$$\phi(t) = \lim_{n\to\infty} \phi_n(t) = x_0 + \lim_{n\to\infty} \int_{t_0}^{t} f(s, \phi_{n-1}(s))\,ds = x_0 + \int_{t_0}^{t} \lim_{n\to\infty} f(s, \phi_{n-1}(s))\,ds = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$

which is to say that

$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \quad \text{for } |t - t_0| \le \alpha.$$

As the integrand $f(t, \phi)$ is a continuous function, $\phi$ is differentiable (with respect to $t$), and $\phi'(t) = f(t, \phi(t))$, so $\phi$ is a solution to the IVP (1.3).
Uniqueness. Let $\phi$ and $\psi$ be two solutions of (1.3), i.e., for $|t - t_0| \le \alpha$,

$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds, \qquad \psi(t) = x_0 + \int_{t_0}^{t} f(s, \psi(s))\,ds.$$

Then, for $|t - t_0| \le \alpha$,

$$\|\phi(t) - \psi(t)\| = \left\| \int_{t_0}^{t} \big[ f(s, \phi(s)) - f(s, \psi(s)) \big]\,ds \right\| \le L \int_{t_0}^{t} \|\phi(s) - \psi(s)\|\,ds. \qquad (1.7)$$

We now apply Gronwall's lemma (Lemma A.7) to this inequality, using $K = 0$ and $g(t) = \|\phi(t) - \psi(t)\|$. First, applying the lemma for $t_0 \le t \le t_0 + \alpha$, we get $0 \le \|\phi(t) - \psi(t)\| \le 0$, that is,

$$\|\phi(t) - \psi(t)\| = 0,$$

and thus $\phi(t) = \psi(t)$ for $t_0 \le t \le t_0 + \alpha$. Similarly, for $t_0 - \alpha \le t \le t_0$, $\|\phi(t) - \psi(t)\| = 0$. Therefore, $\phi(t) = \psi(t)$ on $|t - t_0| \le \alpha$.
Example. Let us consider the IVP $x' = -x$, $x(0) = x_0 = c$, $c \in \mathbb{R}$. For initial solution, we choose $\phi_0(t) \equiv c$. Then

$$\phi_1(t) = x_0 + \int_0^t f(s, \phi_0(s))\,ds = c - \int_0^t \phi_0(s)\,ds = c - c\int_0^t ds = c - ct.$$

To find $\phi_2$, we use $\phi_1$ in (1.4):

$$\phi_2(t) = x_0 + \int_0^t f(s, \phi_1(s))\,ds = c - \int_0^t (c - cs)\,ds = c - ct + c\,\frac{t^2}{2}.$$

Continuing this method, we find a general term of the form

$$\phi_n(t) = \sum_{i=0}^{n} c\,\frac{(-1)^i t^i}{i!}.$$

This is the power series expansion of $ce^{-t}$, so $\phi_n \to ce^{-t}$ (and the approximation is valid on $\mathbb{R}$), which is the solution of the initial value problem.
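The convergence in this example can be checked directly; the sketch below is an illustration, with $c = 2$ and $t = 1.5$ chosen arbitrarily, evaluating the partial sums $\phi_n(t)$ against $ce^{-t}$.

```python
# The nth Picard iterate for x' = -x, x(0) = c is the degree-n Taylor
# polynomial of c * exp(-t); its error shrinks rapidly with n.
import math

def phi_n(c, t, n):
    """Partial sum sum_{i=0}^{n} c * (-1)^i * t^i / i!."""
    return sum(c * (-1) ** i * t ** i / math.factorial(i) for i in range(n + 1))

c, t = 2.0, 1.5
errors = [abs(phi_n(c, t, n) - c * math.exp(-t)) for n in (2, 5, 10, 20)]
# errors decrease toward 0 as n grows.
```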
Note that the method of successive approximations is a very general method that can be
used in a much more general context; see [8, p. 264-269].
1.2.4 Local existence (non-Lipschitz case)

The following theorem is often called Peano's existence theorem. Because the vector field is not assumed to be Lipschitz, something is lost, namely, uniqueness.

Theorem 1.2.5 (Peano). Suppose that $f$ is continuous on some region

$$R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\},$$

with $a, b > 0$, and let $M = \max_{R} \|f(t, x)\|$. Then there exists a continuous function $\phi(t)$, differentiable on $|t - t_0| \le \alpha$, such that

i) $\phi(t_0) = x_0$,

ii) $\phi'(t) = f(t, \phi)$ on $|t - t_0| \le \alpha$, where

$$\alpha = \begin{cases} a & \text{if } M = 0, \\ \min\left(a, \dfrac{b}{M}\right) & \text{if } M > 0. \end{cases}$$
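To see concretely what is lost without the Lipschitz condition, consider a classical non-uniqueness example (an illustration chosen here, not an example taken from the notes): for $x' = 3x^{2/3}$, $x(0) = 0$, both $\phi(t) \equiv 0$ and $\phi(t) = t^3$ solve the IVP, as the residual check below confirms.

```python
# Check that two different functions both satisfy x' = 3 * x^(2/3), x(0) = 0,
# by measuring the ODE residual |phi'(t) - f(t, phi(t))| on a grid.
def residual(phi, dphi, f, ts):
    """Max |phi'(t) - f(t, phi(t))| over the grid ts."""
    return max(abs(dphi(t) - f(t, phi(t))) for t in ts)

f = lambda t, x: 3.0 * abs(x) ** (2.0 / 3.0)
ts = [i / 100 for i in range(101)]
r_zero = residual(lambda t: 0.0, lambda t: 0.0, f, ts)       # phi(t) = 0
r_cube = residual(lambda t: t ** 3, lambda t: 3 * t ** 2, f, ts)  # phi(t) = t^3
# Both residuals vanish (up to rounding), yet the solutions differ for t > 0:
# the field is continuous but not Lipschitz at x = 0.
```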
Before we can prove this result, we need a certain number of preliminary notations and results. The definition of equicontinuity and a statement of the Ascoli lemma are given in Section A.5. To construct a solution without the Lipschitz condition, we approximate the differential equation by another one that does satisfy the Lipschitz condition. The unique solution of such an approximate problem is an $\varepsilon$-approximate solution. It is formally defined as follows [8, p. 285].

Definition 1.2.6 ($\varepsilon$-approximate solution). A differentiable mapping $u$ of an open ball $J \subset I$ into $U$ is an approximate solution of $x' = f(t, x)$ with approximation $\varepsilon$ (or an $\varepsilon$-approximate solution) if we have

$$\|u'(t) - f(t, u(t))\| \le \varepsilon$$

for any $t \in J$.
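A familiar source of $\varepsilon$-approximate solutions, offered here as an illustrative aside rather than the construction the notes use, is the Euler polygon: the piecewise-linear interpolant of Euler's method has constant derivative $f(t_i, x_i)$ on each step, so its defect against $f$ shrinks with the step size. The field $f(t,x) = \cos t - x$ and the midpoint sampling of the defect are assumptions of this sketch.

```python
# Measure the defect |u'(t) - f(t, u(t))| of the Euler polygon, sampled at
# the midpoint of each linear segment; it decreases as the step h decreases.
import math

def euler_polygon_defect(f, t0, x0, T, n):
    """Max midpoint defect of the Euler polygon for x' = f(t, x) on [t0, t0+T]."""
    h = T / n
    t, x, defect = t0, x0, 0.0
    for _ in range(n):
        slope = f(t, x)                        # u' on this segment
        tm, xm = t + h / 2, x + slope * h / 2  # midpoint of the segment
        defect = max(defect, abs(slope - f(tm, xm)))
        t, x = t + h, x + h * slope
    return defect

f = lambda t, x: math.cos(t) - x
d_coarse = euler_polygon_defect(f, 0.0, 1.0, 2.0, 50)
d_fine = euler_polygon_defect(f, 0.0, 1.0, 2.0, 5000)
# d_fine is much smaller than d_coarse: refining h reduces epsilon.
```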
Lemma 1.2.7. Suppose that $f(t, x)$ is continuous on a region

$$R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\}.$$

Then, for every positive number $\varepsilon$, there exists a function $F_\varepsilon(t, x)$ such that

i) $F_\varepsilon$ is continuous for $|t - t_0| \le a$ and all $x$,

ii) $F_\varepsilon$ has continuous partial derivatives of all orders with respect to $x_1, \ldots, x_n$ for $|t - t_0| \le a$ and all $x$,

iii) $\|F_\varepsilon(t, x)\| \le \max_{R} \|f(t, x)\| = M$ for $|t - t_0| \le a$ and all $x$,

iv) $\|F_\varepsilon(t, x) - f(t, x)\| \le \varepsilon$ on $R$.

See a proof in [12, p. 10-12]; note that in this proof, the property that $f$ defines a differential equation is not used. Hence Lemma 1.2.7 can be used in a more general context than that of differential equations. We now prove Theorem 1.2.5.
Proof of Theorem 1.2.5. The proof takes four steps.
1. We construct, for every positive number , a function F

(t, x) that satises the re-


quirements given in Lemma 1.2.7. Using an existence-uniqueness result in the Lipschitz case
(such as Theorem 1.2.2), we construct a function

(t) such that


(P1)

(t
0
) = x
0
,
(P2)

(t) = F

(t,

(t)) on [t t
0
[ < .
(P3) (t,

(t)) 1 on [t t
0
[ .
18
Fund. Theory ODE Lecture Notes J. Arino
1. General theory of ODEs
2. The set $\mathcal{F} = \{\varphi_\varepsilon : \varepsilon > 0\}$ is bounded and equicontinuous on $|t - t_0| \leq \alpha$. Indeed, property (P3) of $\varphi_\varepsilon$ implies that $\mathcal{F}$ is bounded on $|t - t_0| \leq \alpha$ and, by iii) of Lemma 1.2.7, that $\|F_\varepsilon(t, \varphi_\varepsilon(t))\| \leq M$ on $|t - t_0| \leq \alpha$. Hence property (P2) of $\varphi_\varepsilon$ implies that
$$\|\varphi_\varepsilon(t_1) - \varphi_\varepsilon(t_2)\| \leq M|t_1 - t_2|,$$
if $|t_1 - t_0| \leq \alpha$ and $|t_2 - t_0| \leq \alpha$ (this is a consequence of Theorem 1.1.7). Therefore, for a given positive number $\eta$, we have $\|\varphi_\varepsilon(t_1) - \varphi_\varepsilon(t_2)\| \leq \eta$ whenever $|t_1 - t_0| \leq \alpha$, $|t_2 - t_0| \leq \alpha$, and $|t_1 - t_2| \leq \eta/M$.
3. Using Lemma A.5, choose a sequence $\{\varepsilon_k : k = 1, 2, \ldots\}$ of positive numbers such that $\lim_{k \to \infty} \varepsilon_k = 0$ and such that the sequence $\{\varphi_{\varepsilon_k} : k = 1, 2, \ldots\}$ converges uniformly on $|t - t_0| \leq \alpha$ as $k \to \infty$. Then set
$$\varphi(t) = \lim_{k \to \infty} \varphi_{\varepsilon_k}(t) \quad \text{on } |t - t_0| \leq \alpha.$$
4. Observe that
$$\varphi_\varepsilon(t) = x_0 + \int_{t_0}^t F_\varepsilon(s, \varphi_\varepsilon(s))\,ds = x_0 + \int_{t_0}^t f(s, \varphi_\varepsilon(s))\,ds + \int_{t_0}^t \left[F_\varepsilon(s, \varphi_\varepsilon(s)) - f(s, \varphi_\varepsilon(s))\right] ds,$$
and that it follows from iv) in Lemma 1.2.7 that
$$\left\| \int_{t_0}^t \left[F_\varepsilon(s, \varphi_\varepsilon(s)) - f(s, \varphi_\varepsilon(s))\right] ds \right\| \leq \varepsilon |t - t_0| \quad \text{on } |t - t_0| \leq \alpha.$$
This is true for all $\varepsilon > 0$, so letting $\varepsilon \to 0$, we obtain
$$\varphi(t) = x_0 + \int_{t_0}^t f(s, \varphi(s))\,ds,$$
which completes the proof.
See [12, p. 13-14] for the outline of two other proofs of this result. A proof, by Hartman
[11, p. 10-11], now follows.
Proof. Let $\delta > 0$ and $\varphi_0(t)$ be a $C^1$ $n$-dimensional vector-valued function on $[t_0 - \delta, t_0]$ satisfying $\varphi_0(t_0) = x_0$, $\varphi_0'(t_0) = f(t_0, x_0)$ and $\|\varphi_0(t) - x_0\| \leq b$, $\|\varphi_0'(t)\| \leq M$. For $0 < \varepsilon \leq \delta$, define a function $\varphi_\varepsilon(t)$ on $[t_0 - \delta, t_0 + \alpha]$ by putting $\varphi_\varepsilon(t) = \varphi_0(t)$ on $[t_0 - \delta, t_0]$ and
$$\varphi_\varepsilon(t) = x_0 + \int_{t_0}^t f(s, \varphi_\varepsilon(s - \varepsilon))\,ds \quad \text{on } [t_0, t_0 + \alpha]. \tag{1.8}$$
The function $\varphi_\varepsilon$ can indeed be thus defined on $[t_0 - \delta, t_0 + \alpha]$. To see this, remark first that this formula is meaningful and defines $\varphi_\varepsilon(t)$ for $t_0 \leq t \leq t_0 + \alpha_1$, $\alpha_1 = \min(\alpha, \varepsilon)$, so that $\varphi_\varepsilon(t)$ is $C^1$ on $[t_0 - \delta, t_0 + \alpha_1]$ and, on this interval,
$$\|\varphi_\varepsilon(t) - x_0\| \leq b, \qquad \|\varphi_\varepsilon(t) - \varphi_\varepsilon(s)\| \leq M|t - s|. \tag{1.9}$$
It then follows that (1.8) can be used to extend $\varphi_\varepsilon(t)$ as a $C^1$ function over $[t_0 - \delta, t_0 + \alpha_2]$, where $\alpha_2 = \min(\alpha, 2\varepsilon)$, satisfying relation (1.9). Continuing in this fashion, (1.8) serves to define $\varphi_\varepsilon(t)$ over $[t_0 - \delta, t_0 + \alpha]$ so that $\varphi_\varepsilon(t)$ is a $C^0$ function on $[t_0 - \delta, t_0 + \alpha]$, satisfying relation (1.9).

Since $\|\varphi_\varepsilon'(t)\| \leq M$, $M$ can be used as a Lipschitz constant for $\varphi_\varepsilon$, giving uniform continuity of $\varphi_\varepsilon$. It follows that the family of functions $\varphi_\varepsilon(t)$, $0 < \varepsilon \leq \delta$, is equicontinuous. Thus, using Ascoli's Lemma (Lemma A.5), there exists a sequence $\varepsilon(1) > \varepsilon(2) > \cdots$, such that $\varepsilon(n) \to 0$ as $n \to \infty$ and
$$\varphi(t) = \lim_{n \to \infty} \varphi_{\varepsilon(n)}(t) \quad \text{exists uniformly}$$
on $[t_0 - \delta, t_0 + \alpha]$. The continuity of $f$ implies that $f(t, \varphi_{\varepsilon(n)}(t - \varepsilon(n)))$ tends uniformly to $f(t, \varphi(t))$ as $n \to \infty$; thus term-by-term integration of (1.8) where $\varepsilon = \varepsilon(n)$ gives
$$\varphi(t) = x_0 + \int_{t_0}^t f(s, \varphi(s))\,ds$$
and thus $\varphi(t)$ is a solution of (1.3).
An important corollary follows.

Corollary 1.2.8. Let $f(t,x)$ be continuous on an open set $E$ and satisfy $\|f(t,x)\| \leq M$. Let $E_0$ be a compact subset of $E$. Then there exists an $\alpha = \alpha(E, E_0, M) > 0$ with the property that if $(t_0, x_0) \in E_0$, then the IVP (1.3) has a solution and every solution exists on $|t - t_0| \leq \alpha$.
In fact, hypotheses can be relaxed a little. Coddington and Levinson [5] define an $\varepsilon$-approximate solution as follows.

Definition 1.2.9. An $\varepsilon$-approximate solution of the differential equation (1.2), where $f$ is continuous, on a $t$ interval $I$ is a function $\varphi \in C$ on $I$ such that

i) $(t, \varphi(t)) \in U$ for $t \in I$;

ii) $\varphi \in C^1$ on $I$, except possibly for a finite set of points $S$ on $I$, where $\varphi'$ may have simple discontinuities ($g$ has a simple discontinuity at $c$ if the left and right limits of $g$ at $c$ exist but are not equal);

iii) $\|\varphi'(t) - f(t, \varphi(t))\| \leq \varepsilon$ for $t \in I \setminus S$.

Hence it is assumed that $\varphi$ has a piecewise continuous derivative on $I$, which is denoted by $\varphi \in C_p^1(I)$.
Theorem 1.2.10. Let $f \in C$ on the rectangle
$$\mathcal{R} = \{(t,x) : |t - t_0| \leq a,\ \|x - x_0\| \leq b\}.$$
Given any $\varepsilon > 0$, there exists an $\varepsilon$-approximate solution $\varphi$ of (1.3) on $|t - t_0| \leq \alpha$ such that $\varphi(t_0) = x_0$.
Proof. Let $\varepsilon > 0$ be given. We construct an $\varepsilon$-approximate solution on the interval $[t_0, t_0 + \alpha]$; the construction works in a similar way for $[t_0 - \alpha, t_0]$. The $\varepsilon$-approximate solution that we construct is a polygonal path starting at $(t_0, x_0)$.

Since $f \in C$ on $\mathcal{R}$, it is uniformly continuous on $\mathcal{R}$, and therefore for the given value of $\varepsilon$, there exists $\delta_\varepsilon > 0$ such that
$$\|f(t, x) - f(\tilde t, \tilde x)\| \leq \varepsilon \tag{1.10}$$
if $(t,x) \in \mathcal{R}$, $(\tilde t, \tilde x) \in \mathcal{R}$, $|t - \tilde t| \leq \delta_\varepsilon$ and $\|x - \tilde x\| \leq \delta_\varepsilon$.

Now divide the interval $[t_0, t_0 + \alpha]$ into $n$ parts $t_0 < t_1 < \cdots < t_n = t_0 + \alpha$, in such a way that
$$\max_k |t_k - t_{k-1}| \leq \min\left(\delta_\varepsilon, \frac{\delta_\varepsilon}{M}\right). \tag{1.11}$$

From $(t_0, x_0)$, construct a line segment with slope $f(t_0, x_0)$ intercepting the line $t = t_1$ at $(t_1, x_1)$. From the definition of $\alpha$ and $M$, it is clear that this line segment lies inside the triangular region $T$ bounded by the line segments with slopes $\pm M$ from $(t_0, x_0)$ to their intercepts at $t = t_0 + \alpha$, and the line $t = t_0 + \alpha$. In particular, $(t_1, x_1) \in T$.

At the point $(t_1, x_1)$, construct a line segment with slope $f(t_1, x_1)$ until the line $t = t_2$, obtaining the point $(t_2, x_2)$. Continuing similarly, a polygonal path $\varphi$ is constructed that meets the line $t = t_0 + \alpha$ in a finite number of steps, and lies entirely in $T$.

The function $\varphi$, which can be expressed as
$$\varphi(t_0) = x_0, \qquad \varphi(t) = \varphi(t_{k-1}) + f(t_{k-1}, \varphi(t_{k-1}))(t - t_{k-1}), \quad t \in [t_{k-1}, t_k],\ k = 1, \ldots, n, \tag{1.12}$$
is the $\varepsilon$-approximate solution that we seek. Clearly, $\varphi \in C_p^1([t_0, t_0 + \alpha])$ and
$$\|\varphi(t) - \varphi(\tilde t)\| \leq M|t - \tilde t| \quad \text{for } t, \tilde t \in [t_0, t_0 + \alpha]. \tag{1.13}$$

If $t \in [t_{k-1}, t_k]$, then (1.13) together with (1.11) imply that $\|\varphi(t) - \varphi(t_{k-1})\| \leq \delta_\varepsilon$. But from (1.12) and (1.10),
$$\|\varphi'(t) - f(t, \varphi(t))\| = \|f(t_{k-1}, \varphi(t_{k-1})) - f(t, \varphi(t))\| \leq \varepsilon.$$
Therefore, $\varphi$ is an $\varepsilon$-approximate solution.
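The polygonal path constructed above is exactly the classical Euler scheme, with the slope frozen at the left endpoint of each subinterval. The following is a minimal sketch in pure Python; the right-hand side and step count used in the illustration are arbitrary choices, not part of the theorem.

```python
import math

def euler_polygon(f, t0, x0, alpha, n):
    """Polygonal (Euler) path on [t0, t0 + alpha], as in the proof of
    Theorem 1.2.10: on [t_{k-1}, t_k] the slope is f(t_{k-1}, x_{k-1})."""
    h = alpha / n
    ts, xs = [t0], [x0]
    for _ in range(n):
        xs.append(xs[-1] + h * f(ts[-1], xs[-1]))
        ts.append(ts[-1] + h)
    return ts, xs

# Illustrative use on x' = x, x(0) = 1, whose exact solution at t = 1 is e.
ts, xs = euler_polygon(lambda t, x: x, 0.0, 1.0, 1.0, 1000)
print(abs(xs[-1] - math.e))  # small; shrinks as n grows
```

As the mesh in (1.11) is refined, the polygon becomes an $\varepsilon$-approximate solution for smaller and smaller $\varepsilon$, which is precisely the sequence used in the compactness argument that follows.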
We can now turn to their proof of Theorem 1.2.5.

Proof. Let $\{\varepsilon_n\}$ be a monotone decreasing sequence of positive real numbers with $\varepsilon_n \to 0$ as $n \to \infty$. By Theorem 1.2.10, for each $\varepsilon_n$, there exists an $\varepsilon_n$-approximate solution $\varphi_n$ of (1.3) on $|t - t_0| \leq \alpha$ such that $\varphi_n(t_0) = x_0$. Choose one such solution $\varphi_n$ for each $\varepsilon_n$. From (1.13), it follows that
$$\|\varphi_n(t) - \varphi_n(\tilde t)\| \leq M|t - \tilde t|. \tag{1.14}$$
Applying (1.14) to $\tilde t = t_0$, it is clear that the sequence $\{\varphi_n\}$ is uniformly bounded by $\|x_0\| + b$, since $|t - t_0| \leq \alpha \leq b/M$. Moreover, (1.14) implies that $\{\varphi_n\}$ is an equicontinuous set. By Ascoli's lemma (Lemma A.5), there exists a subsequence $\{\varphi_{n_k}\}$, $k = 1, \ldots$, of $\{\varphi_n\}$, converging uniformly on $[t_0 - \alpha, t_0 + \alpha]$ to a limit function $\varphi$, which must be continuous since each $\varphi_n$ is continuous.

This limit function $\varphi$ is a solution to (1.3) which meets the required specifications. To see this, write
$$\varphi_n(t) = x_0 + \int_{t_0}^t \left[f(s, \varphi_n(s)) + \Delta_n(s)\right] ds, \tag{1.15}$$
where $\Delta_n(t) = \varphi_n'(t) - f(t, \varphi_n(t))$ at those points where $\varphi_n'$ exists, and $\Delta_n(t) = 0$ otherwise. Because $\varphi_n$ is an $\varepsilon_n$-approximate solution, $\|\Delta_n(t)\| \leq \varepsilon_n$. Since $f$ is uniformly continuous on $\mathcal{R}$, and $\varphi_{n_k} \to \varphi$ uniformly on $[t_0 - \alpha, t_0 + \alpha]$ as $k \to \infty$, it follows that $f(t, \varphi_{n_k}(t)) \to f(t, \varphi(t))$ uniformly on $[t_0 - \alpha, t_0 + \alpha]$ as $k \to \infty$.

Replacing $n$ by $n_k$ in (1.15) and letting $k \to \infty$ gives
$$\varphi(t) = x_0 + \int_{t_0}^t f(s, \varphi(s))\,ds. \tag{1.16}$$
Clearly, $\varphi(t_0) = x_0$, when evaluated using (1.16), and also $\varphi'(t) = f(t, \varphi(t))$ since $f$ is continuous. Thus $\varphi$ as defined by (1.16) is a solution to (1.3) on $|t - t_0| \leq \alpha$.
1.2.5 Some examples of existence and uniqueness

Example Consider the IVP
$$x' = 3|x|^{2/3}, \qquad x(t_0) = x_0. \tag{1.17}$$
Here, Theorem 1.2.5 applies, since $f(t,x) = 3|x|^{2/3}$ is continuous. However, Theorem 1.2.2 does not apply, since $f(t,x)$ is not locally Lipschitz in $x$ at $x = 0$ (that is, $f$ is not Lipschitz on any interval containing 0). This means that we have existence of solutions to this IVP, but not necessarily uniqueness of the solution.

The fact that $f$ is not Lipschitz on any interval containing 0 is established using the following argument. Suppose that $f$ is Lipschitz on an interval $\mathcal{I} = (-\gamma, \gamma)$, with $\gamma > 0$. Then there exists $L > 0$ such that for all $x_1, x_2 \in \mathcal{I}$,
$$|f(t, x_1) - f(t, x_2)| \leq L|x_1 - x_2|,$$
that is,
$$3\left| |x_1|^{2/3} - |x_2|^{2/3} \right| \leq L|x_1 - x_2|.$$
Since this has to hold true for all $x_1, x_2 \in \mathcal{I}$, it must hold true in particular for $x_2 = 0$. Thus
$$3|x_1|^{2/3} \leq L|x_1|.$$
Given an $\eta > 0$, it is possible to find $N_\eta > 0$ such that $\frac{1}{n} < \eta$ for all $n \geq N_\eta$. Let $x_1 = \frac{1}{n}$. Then for $n \geq N_\eta$, if $f$ is Lipschitz there must hold
$$3\left(\frac{1}{n}\right)^{2/3} \leq \frac{L}{n}.$$
So, for all $n \geq N_\eta$,
$$n^{1/3} \leq \frac{L}{3}.$$
This is a contradiction, since $\lim_{n \to \infty} n^{1/3} = \infty$, and so $f$ is not Lipschitz on $\mathcal{I}$.
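The contradiction above can be observed numerically: the difference quotient against $x_2 = 0$ is $3|x_1|^{2/3}/|x_1| = 3|x_1|^{-1/3}$, which is unbounded as $x_1 \to 0$. A quick check (the sample points are arbitrary illustrative choices):

```python
# f(x) = 3|x|^{2/3}, the right-hand side of (1.17)
f = lambda x: 3 * abs(x) ** (2 / 3)

# Difference quotients |f(x1) - f(0)| / |x1 - 0|; any Lipschitz constant L
# on an interval containing 0 would have to dominate all of these.
quotients = [f(1 / n) / (1 / n) for n in (10, 1000, 100000)]
print(quotients)  # equals 3 * n**(1/3): grows without bound
```

No finite $L$ dominates the whole list as $n$ grows, which is the numerical shadow of the contradiction $n^{1/3} \leq L/3$.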
Let us consider the set
$$E = \{t \in \mathbb{R} : x(t) = 0\}.$$
The set $E$ can have several forms, depending on the situation:

1) $E = \emptyset$,

2) $E = [a, b]$ (closed, since $x$ is continuous and thus reaches its bounds),

3) $E = (-\infty, b]$,

4) $E = [a, +\infty)$,

5) $E = \mathbb{R}$.

Note that case 2) includes the case of a single intersection point, when $a = b$, giving $E = \{a\}$. Let us now consider the nature of $x$ in these different situations. Recall that from Theorem 1.1.10, since (1.17) is defined by a scalar autonomous equation, its solutions are monotone. For simplicity, we consider here the case of monotone increasing solutions. The case of monotone decreasing solutions can be treated in a similar fashion.
1) Here, there is no intersection with the $x = 0$ axis. Thus it follows that
$$x(t) \text{ is } \begin{cases} > 0 & \text{if } x_0 > 0 \\ < 0 & \text{if } x_0 < 0 \end{cases}$$
2) In this case,
$$x(t) \text{ is } \begin{cases} < 0 & \text{if } t < a \\ = 0 & \text{if } t \in [a, b] \\ > 0 & \text{if } t > b \end{cases}$$
3) Here,
$$x(t) \text{ is } \begin{cases} = 0 & \text{if } t \leq b \\ > 0 & \text{if } t > b \end{cases}$$
4) In this case,
$$x(t) \text{ is } \begin{cases} < 0 & \text{if } t < a \\ = 0 & \text{if } t \geq a \end{cases}$$
5) In this last case, $x(t) = 0$ for all $t \in \mathbb{R}$.
Now, depending on the sign of $x$, we can integrate the equation. First, if $x > 0$, then $|x| = x$ and so
$$x' = 3x^{2/3} \iff \tfrac{1}{3}x^{-2/3}\,x' = 1 \iff x^{1/3} = t + k_1 \iff x(t) = (t + k_1)^3$$
for $k_1 \in \mathbb{R}$. Then, if $x < 0$, then $|x| = -x$, and
$$x' = 3(-x)^{2/3} \iff \tfrac{1}{3}(-x)^{-2/3}\,(-x') = -1 \iff (-x)^{1/3} = -(t + k_2) \iff x(t) = (t + k_2)^3$$
for $k_2 \in \mathbb{R}$. We can now use these computations with the different cases that were discussed earlier, depending on the value of $t_0$ and $x_0$. We begin with the case of $t_0 > 0$ and $x_0 > 0$.

1) The case $E = \emptyset$ is impossible, for all initial conditions $(t_0, x_0)$. Indeed, as $x_0 > 0$, we have $x(t) = (t + k_1)^3$. Using the initial condition, we find that $x(t_0) = x_0 = (t_0 + k_1)^3$, i.e., $k_1 = x_0^{1/3} - t_0$, and $x(t) = (t + x_0^{1/3} - t_0)^3$; this function vanishes at $t = t_0 - x_0^{1/3}$, so $E \neq \emptyset$.
2) If $E = [a, b]$, then
$$x(t) = \begin{cases} (t + k_2)^3 & \text{if } t < a \\ 0 & \text{if } t \in [a, b] \\ (t + k_1)^3 & \text{if } t > b \end{cases}$$
Since $x_0 > 0$, we have to be in the $t > b$ region, so $t_0 > b$, and $(t_0 + k_1)^3 = x_0$, which implies that $k_1 = x_0^{1/3} - t_0$. Thus
$$x(t) = \begin{cases} (t + k_2)^3 & \text{if } t < a \\ 0 & \text{if } t \in [a, b] \\ (t + x_0^{1/3} - t_0)^3 & \text{if } t > b \end{cases}$$
Since $x$ is continuous,
$$\lim_{t \to b,\, t > b} (t + x_0^{1/3} - t_0)^3 = 0$$
and
$$\lim_{t \to a,\, t < a} (t + k_2)^3 = 0.$$
This implies that $b = t_0 - x_0^{1/3}$ and $k_2 = -a$. So finally,
$$x(t) = \begin{cases} (t - a)^3 & \text{if } t < a \\ 0 & \text{if } t \in [a,\, t_0 - x_0^{1/3}] \\ (t + x_0^{1/3} - t_0)^3 & \text{if } t > t_0 - x_0^{1/3} \end{cases} \qquad (a \leq t_0 - x_0^{1/3})$$
Thus, choosing $a \leq t_0 - x_0^{1/3}$, we have solutions of the form shown in Figure 1.4. Indeed, any $a_i$ satisfying this property yields a solution.
3) The case $E = [a, +\infty)$ is impossible. Indeed, there does not exist a solution through $(t_0, x_0)$ such that $x(t) = 0$ for all $t \in [a, +\infty)$: since we are in the case of monotone increasing functions, if $x_0 > 0$ then $x(t) \geq x_0$ for all $t \geq t_0$.

4) $E = \mathbb{R}$ is also impossible, for the same reason.
Figure 1.4: Case $t_0, x_0 > 0$, subcase 2, in the resolution of (1.17).
5) For the case $E = (-\infty, b]$, we have
$$x(t) = \begin{cases} 0 & \text{if } t \in (-\infty, b] \\ (t + k_1)^3 & \text{if } t > b \end{cases}$$
Since $x(t_0) = x_0$, $k_1 = x_0^{1/3} - t_0$, and since $x$ is continuous, $b = -k_1 = t_0 - x_0^{1/3}$. So,
$$x(t) = \begin{cases} 0 & \text{if } t \in (-\infty,\, t_0 - x_0^{1/3}] \\ (t + x_0^{1/3} - t_0)^3 & \text{if } t > t_0 - x_0^{1/3} \end{cases}$$
The other cases are left as an exercise.
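The non-uniqueness found above can be checked directly. For the concrete (illustrative) data $t_0 = 1$, $x_0 = 1$, both the pure cubic $x(t) = t^3$ and the spliced function that vanishes for $t \leq 0$ and equals $t^3$ afterwards pass through $(1, 1)$ and satisfy (1.17); the sketch below verifies the ODE numerically by a central difference on an arbitrary grid.

```python
def f(x):
    # Right-hand side of (1.17)
    return 3 * abs(x) ** (2 / 3)

def residual(x, h=1e-6):
    """Max of |x'(t) - f(x(t))| over a grid on [-2, 2], with x' computed
    by a central difference; ~0 means x is (numerically) a solution."""
    ts = [-2 + 4 * k / 400 for k in range(401)]
    return max(abs((x(t + h) - x(t - h)) / (2 * h) - f(x(t))) for t in ts)

cubic = lambda t: t ** 3                # x(t) = t^3
spliced = lambda t: max(t, 0.0) ** 3    # x = 0 for t <= 0, then t^3

print(residual(cubic), residual(spliced))  # both ~ 0: two distinct solutions
print(cubic(1.0), spliced(1.0))            # both equal 1: same IVP data
```

Two genuinely different functions through the same initial point, exactly as the case analysis predicts for non-Lipschitz $f$.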
Example Consider the IVP
$$x' = 2tx^2, \qquad x(0) = 0. \tag{1.18}$$
Here, we have existence and uniqueness of the solutions to (1.18). Indeed, $f(t,x) = 2tx^2$ is continuous and locally Lipschitz in $x$ on $\mathbb{R} \times \mathbb{R}$.
1.3 Continuation of solutions

The results we have seen so far deal with the local existence (and uniqueness) of solutions to an IVP, in the sense that solutions are shown to exist in a neighborhood of the initial data. The continuation of solutions consists in studying criteria which allow solutions to be defined on possibly larger intervals.

Consider the IVP
$$x' = f(t, x), \qquad x(t_0) = x_0, \tag{1.19}$$
with $f$ continuous on a domain $U$ of the $(t,x)$ space, and the initial point $(t_0, x_0) \in U$.

Lemma 1.3.1. Let the function $f(t,x)$ be continuous in an open set $U$ in $(t,x)$-space, and assume that a function $\varphi(t)$ satisfies the condition $\varphi'(t) = f(t, \varphi(t))$ and $(t, \varphi(t)) \in U$, in an open interval $\mathcal{I} = \{t_1 < t < t_2\}$. Under this assumption, if $\lim_{j \to \infty} (\tau_j, \varphi(\tau_j)) = (t_1, \xi) \in U$ for some sequence $\{\tau_j : j = 1, 2, \ldots\}$ of points in the interval $\mathcal{I}$, then $\lim_{\tau \to t_1} (\tau, \varphi(\tau)) = (t_1, \xi)$. Similarly, if $\lim_{j \to \infty} (\tau_j, \varphi(\tau_j)) = (t_2, \xi) \in U$ for some sequence $\{\tau_j : j = 1, 2, \ldots\}$ of points in the interval $\mathcal{I}$, then $\lim_{\tau \to t_2} (\tau, \varphi(\tau)) = (t_2, \xi)$.
Proof. Let $\mathcal{J}$ be an open neighborhood of $(t_1, \xi)$. We show that $(t, \varphi(t)) \in \mathcal{J}$ in an interval $t_1 < t < \delta(\mathcal{J})$ for some $\delta(\mathcal{J})$ determined by $\mathcal{J}$. Indeed, assume that the closure of $\mathcal{J}$ satisfies $\bar{\mathcal{J}} \subset U$, and that $\|f(t,x)\| \leq M$ in $\bar{\mathcal{J}}$ for some positive number $M$. For every positive integer $j$ and every positive number $\rho$, consider a rectangular region
$$\mathcal{R}_j(\rho) = \{(t, x) : |t - \tau_j| \leq \rho,\ \|x - \varphi(\tau_j)\| \leq \rho M\}.$$
Then there exist a $\rho > 0$ and a $j$ such that $(t_1, \xi) \in \mathcal{R}_j(\rho) \subset \mathcal{J}$ and $\tau_j - \rho \leq t_1$.

From Theorem 1.1.6 applied to the solution of the IVP $x' = f(t,x)$, $x(\tau_j) = \varphi(\tau_j)$, we obtain that $(\tau, \varphi(\tau)) \in \mathcal{R}_j(\rho) \subset \mathcal{J}$ on the interval $t_1 < \tau \leq \tau_j$. Since $\mathcal{J}$ is an arbitrary open neighborhood of $(t_1, \xi)$, we conclude that $\lim_{\tau \to t_1} (\tau, \varphi(\tau)) = (t_1, \xi)$.
From the previous result, we can derive a result concerning the maximal interval over which a solution can be extended. To emphasize the fact that the solution $\varphi$ of an ODE exists on some interval $\mathcal{I}$, we denote it $(\varphi, \mathcal{I})$. We need the notion of extension of a solution. It is defined in the classical manner (see Figure 1.5).

Definition 1.3.2 (Extension). Let $(\varphi, \mathcal{I})$ and $(\tilde\varphi, \tilde{\mathcal{I}})$ be two solutions of the same ODE. We say that $(\tilde\varphi, \tilde{\mathcal{I}})$ is an extension of $(\varphi, \mathcal{I})$ if, and only if,
$$\mathcal{I} \subset \tilde{\mathcal{I}}, \qquad \tilde\varphi|_{\mathcal{I}} = \varphi,$$
where $\tilde\varphi|_{\mathcal{I}}$ denotes the restriction of $\tilde\varphi$ to $\mathcal{I}$.
Theorem 1.3.3. Let $f(t,x)$ be continuous in an open set $U$ in $(t,x)$-space, and let $\varphi(t)$ be a function satisfying the condition $\varphi'(t) = f(t, \varphi(t))$ and $(t, \varphi(t)) \in U$, in an open interval $\mathcal{I} = \{t_1 < t < t_2\}$. If the following two conditions are satisfied:

i) $\varphi(t)$ cannot be extended to the left of $t_1$ (or, respectively, to the right of $t_2$),

ii) $\lim_{j \to \infty} (\tau_j, \varphi(\tau_j)) = (t_1, \xi)$ (or, respectively, $(t_2, \xi)$) exists for some sequence $\{\tau_j : j = 1, 2, \ldots\}$ of points in the interval $\mathcal{I}$,

then the limit point $(t_1, \xi)$ (or, respectively, $(t_2, \xi)$) must be on the boundary of $U$.
Figure 1.5: The extension $\tilde\varphi$ on the interval $\tilde{\mathcal{I}}$ of the solution $\varphi$ (defined on the interval $\mathcal{I}$).
Proof. Suppose that the hypotheses of the theorem are satisfied, and that $(t_1, \xi) \in U$ (respectively, $(t_2, \xi) \in U$). Then, from Lemma 1.3.1, it follows that
$$\lim_{\tau \to t_1} (\tau, \varphi(\tau)) = (t_1, \xi)$$
(or, respectively, $\lim_{\tau \to t_2} (\tau, \varphi(\tau)) = (t_2, \xi)$). Thus we can apply Theorem 1.2.5 (Peano's Theorem) to the IVP
$$x' = f(t, x), \qquad x(t_1) = \xi$$
(or, respectively, $x' = f(t,x)$, $x(t_2) = \xi$). This implies that the solution can be extended to the left of $t_1$ (respectively, to the right of $t_2$), since Theorem 1.2.5 implies existence in a neighborhood of $t_1$. This is a contradiction.
A particularly important consequence of the previous theorem is the following corollary.

Corollary 1.3.4. Assume that $f(t,x)$ is continuous for $t_1 < t < t_2$ and all $x \in \mathbb{R}^n$. Also, assume that there exists a function $\varphi(t)$ satisfying the following conditions:

a) $\varphi$ and $\varphi'$ are continuous in a subinterval $\mathcal{I}$ of the interval $t_1 < t < t_2$,

b) $\varphi'(t) = f(t, \varphi(t))$ in $\mathcal{I}$.

Then, either

i) $\varphi(t)$ can be extended to the entire interval $t_1 < t < t_2$ as a solution of the differential equation $x' = f(t,x)$, or

ii) $\lim_{t \to \tau} \|\varphi(t)\| = \infty$ for some $\tau$ in the interval $t_1 < t < t_2$.
1.3.1 Maximal interval of existence

Another way of formulating these results is with the notion of maximal intervals of existence. Consider the differential equation
$$x' = f(t, x). \tag{1.20}$$
Let $x = x(t)$ be a solution of (1.20) on an interval $\mathcal{I}$.

Definition 1.3.5 (Right maximal interval of existence). The interval $\mathcal{I}$ is a right maximal interval of existence for $x$ if there does not exist an extension of $x(t)$ over an interval $\mathcal{I}_1$ so that $x$ remains a solution of (1.20), with $\mathcal{I} \subsetneq \mathcal{I}_1$ (and $\mathcal{I}$ and $\mathcal{I}_1$ having different right endpoints). A left maximal interval of existence is defined in a similar way.

Definition 1.3.6 (Maximal interval of existence). An interval which is both a left and a right maximal interval of existence is called a maximal interval of existence.

Theorem 1.3.7. Let $f(t,x)$ be continuous on an open set $U$ and $\varphi(t)$ be a solution of (1.20) on some interval. Then $\varphi(t)$ can be extended (as a solution) over a maximal interval of existence $(\omega_-, \omega_+)$. Also, if $(\omega_-, \omega_+)$ is a maximal interval of existence, then $(t, \varphi(t))$ tends to the boundary $\partial U$ of $U$ as $t \to \omega_-$ and $t \to \omega_+$.

Remark The extension need not be unique, and $\omega_\pm$ depends on the extension. Also, to say, for example, that $(t, \varphi(t)) \to \partial U$ as $t \to \omega_+$ is interpreted to mean that either $\omega_+ = \infty$ or that $\omega_+ < \infty$ and if $U_0$ is any compact subset of $U$, then $(t, \varphi(t)) \notin U_0$ when $t$ is near $\omega_+$.
Two interesting corollaries follow, from [11].

Corollary 1.3.8. Let $f(t,x)$ be continuous on a strip $t_0 \leq t \leq t_0 + a$ ($a < \infty$), $x \in \mathbb{R}^n$ arbitrary. Let $\varphi$ be a solution of (1.3) on a right maximal interval $J$. Then either

i) $J = [t_0, t_0 + a]$,

ii) or $J = [t_0, \delta)$, $\delta \leq t_0 + a$, and $\|\varphi(t)\| \to \infty$ as $t \to \delta$.

Corollary 1.3.9. Let $f(t,x)$ be continuous on the closure $\bar U$ of an open $(t,x)$-set $U$, and let (1.3) possess a solution $\varphi$ on a maximal right interval $J$. Then either

i) $J = [t_0, \infty)$,

ii) or $J = [t_0, \delta)$, with $\delta < \infty$ and $(\delta, \varphi(\delta^-)) \in \partial U$,

iii) or $J = [t_0, \delta)$ with $\delta < \infty$ and $\|\varphi(t)\| \to \infty$ as $t \to \delta$.
1.3.2 Maximal and global solutions

Linked to the notion of maximal intervals of existence of solutions is the notion of maximal and global solutions.

Definition 1.3.10 (Maximal solution). Let $\mathcal{I}_1 \subset \mathbb{R}$ and $\mathcal{I}_2 \subset \mathbb{R}$ be two intervals such that $\mathcal{I}_1 \subset \mathcal{I}_2$. A solution $(\varphi, \mathcal{I}_1)$ is maximal in $\mathcal{I}_2$ if $\varphi$ has no extension $(\tilde\varphi, \tilde{\mathcal{I}})$ solution of the ODE such that $\mathcal{I}_1 \subsetneq \tilde{\mathcal{I}} \subset \mathcal{I}_2$.

Definition 1.3.11 (Global solution). A solution $(\varphi, \mathcal{I}_1)$ is global on $\mathcal{I}_2$ if $\varphi$ admits an extension $\tilde\varphi$ defined on the whole interval $\mathcal{I}_2$.
Figure 1.6: $\varphi_1$ is a global and maximal solution on $\mathcal{I}$; $\varphi_2$ is a maximal solution on $\mathcal{I}$, but it is not global on $\mathcal{I}$.
Every global solution on a given interval $\mathcal{I}$ is maximal on that same interval. The converse is false.

Example Consider the equation $x' = 2tx^2$ on $\mathbb{R}$. If $x \neq 0$, $x'/x^2 = 2t$, which implies that $x(t) = -1/(t^2 - c)$, with $c \in \mathbb{R}$. Depending on $c$, there are several cases:

- if $c < 0$, then $x(t) = -1/(t^2 - c)$ is a global solution on $\mathbb{R}$;

- if $c > 0$, the solutions are defined on $(-\infty, -\sqrt{c})$, $(-\sqrt{c}, \sqrt{c})$ and $(\sqrt{c}, \infty)$. These solutions are maximal solutions on $\mathbb{R}$, but are not global solutions;

- if $c = 0$, then the maximal non global solutions on $\mathbb{R}$ are defined on $(-\infty, 0)$ and $(0, \infty)$.

Another solution is $x \equiv 0$, which is a global solution on $\mathbb{R}$.
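The blow-up in the case $c > 0$ can be seen numerically: the branch on $(-\sqrt c, \sqrt c)$ satisfies the equation at every interior point, but its norm explodes at the endpoints, so it cannot be extended further. A sketch with the illustrative choice $c = 1$ (grid and step size arbitrary):

```python
c = 1.0
x = lambda t: -1.0 / (t * t - c)   # solution branch on (-1, 1)

# Check the ODE x' = 2 t x^2 by central differences on interior points.
h = 1e-6
res = max(abs((x(t + h) - x(t - h)) / (2 * h) - 2 * t * x(t) ** 2)
          for t in [-0.9 + 1.8 * k / 100 for k in range(101)])
print(res)  # ~ 0: the ODE holds on the open interval

# The norm blows up approaching the endpoint sqrt(c) = 1 of the
# maximal interval: maximal, but not global.
print([abs(x(t)) for t in (0.9, 0.99, 0.999)])
```

This is exactly case ii) of Corollary 1.3.4: the solution leaves every bounded set as $t$ approaches the end of its maximal interval.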
Lemma 1.3.12. Every solution of the differential equation $x' = f(t,x)$ is contained in a maximal solution $\tilde\varphi$.

The following theorem extends the uniqueness property to an interval of existence of the solution.

Theorem 1.3.13. Let $\varphi_1, \varphi_2 : \mathcal{I} \to \mathbb{R}^n$ be two solutions of the equation $x' = f(t,x)$, with $f$ locally Lipschitz in $x$ on $U$. If $\varphi_1$ and $\varphi_2$ coincide at a point $t_0 \in \mathcal{I}$, then $\varphi_1 = \varphi_2$ on $\mathcal{I}$.
Proof. Under the assumptions of the theorem, $\varphi_1(t_0) = \varphi_2(t_0)$. Suppose that there exists a $t_1 \neq t_0$ such that $\varphi_1(t_1) \neq \varphi_2(t_1)$. For simplicity, let us assume that $t_1 > t_0$.

By the local uniqueness of the solution, it follows from $\varphi_1(t_0) = \varphi_2(t_0)$ that there exists a neighborhood $\mathcal{N}$ of $t_0$ such that $\varphi_1(t) = \varphi_2(t)$ for all $t \in \mathcal{N}$. Let
$$E = \{t \in [t_0, t_1] : \varphi_1(t) \neq \varphi_2(t)\}.$$
Since $t_1 \in E$, $E \neq \emptyset$. Let $\tau = \inf(E)$; we have $\tau \in (t_0, t_1]$, and for all $t \in [t_0, \tau)$, $\varphi_1(t) = \varphi_2(t)$. By continuity of $\varphi_1$ and $\varphi_2$, we thus have $\varphi_1(\tau) = \varphi_2(\tau)$. This implies that there exists a neighborhood $\mathcal{J}$ of $\tau$ on which $\varphi_1 = \varphi_2$. This is a contradiction, since by definition of $\tau$ there are points $t > \tau$ arbitrarily close to $\tau$ with $\varphi_1(t) \neq \varphi_2(t)$. Hence there exists no such $t_1$, and $\varphi_1 = \varphi_2$ on $\mathcal{I}$.
Corollary 1.3.14 (Global uniqueness). Let $f(t,x)$ be locally Lipschitz in $x$ on $U$. Then through any point $(t_0, x_0) \in U$, there passes a unique maximal solution $\varphi : \mathcal{I} \to \mathbb{R}^n$. If there exists a global solution on $\mathcal{I}$, then it is unique.
1.4 Continuous dependence on initial data, on parameters

Let $\varphi$ be a solution of (1.3). To emphasize the fact that this solution depends on the initial condition $(t_0, x_0)$, we denote it $\varphi_{t_0, x_0}$. Let $\lambda$ be a parameter of (1.3). When we study the dependence of $\varphi_{t_0, x_0}$ on $\lambda$, we denote the solution as $\varphi_{t_0, x_0, \lambda}$.

We suppose that $\|f(t,x)\| \leq M$ and $|\partial f(t,x)/\partial x_i| \leq K$ for $i = 1, \ldots, n$, for $(t,x) \in U$, with $U \subset \mathbb{R} \times \mathbb{R}^n$. Note that these conditions are automatically satisfied on a closed bounded region of the form $\mathcal{R} = \{(t,x) : |t - t_0| \leq a, \|x - x_0\| \leq b\}$, where $a, b > 0$.

Our objective here is to characterize the nature of the dependence of the solution on the initial time $t_0$ and the initial data $x_0$.
Theorem 1.4.1. Suppose that $f$ and $\partial f/\partial x$ are continuous and bounded in a given region $U$. Let $\varphi_{t_0, x_0}$ be a solution of (1.3) passing through $(t_0, x_0)$ and $\tilde\varphi_{\tilde t_0, \tilde x_0}$ be a solution of (1.3) passing through $(\tilde t_0, \tilde x_0)$. Suppose that $\varphi$ and $\tilde\varphi$ exist on some common interval $\mathcal{I}$.

Then, for each $\varepsilon > 0$, there exists $\delta > 0$ such that if $|t - \tilde t| < \delta$, $|t_0 - \tilde t_0| < \delta$ and $\|x_0 - \tilde x_0\| < \delta$, then $\|\varphi(t) - \tilde\varphi(\tilde t)\| < \varepsilon$, for $t, \tilde t \in \mathcal{I}$.
Proof. The proof is from [2, p. 135-136]. Since $\varphi$ is the solution of (1.3) through the point $(t_0, x_0)$, we have, for all $t \in \mathcal{I}$,
$$\varphi(t) = x_0 + \int_{t_0}^t f(s, \varphi(s))\,ds. \tag{1.21}$$
As $\tilde\varphi$ is the solution of (1.3) through the point $(\tilde t_0, \tilde x_0)$, we have, for all $t \in \mathcal{I}$,
$$\tilde\varphi(t) = \tilde x_0 + \int_{\tilde t_0}^t f(s, \tilde\varphi(s))\,ds. \tag{1.22}$$
Since
$$\int_{t_0}^t f(s, \varphi(s))\,ds = \int_{t_0}^{\tilde t_0} f(s, \varphi(s))\,ds + \int_{\tilde t_0}^t f(s, \varphi(s))\,ds,$$
subtracting (1.22) from (1.21) gives
$$\varphi(t) - \tilde\varphi(t) = x_0 - \tilde x_0 + \int_{t_0}^{\tilde t_0} f(s, \varphi(s))\,ds + \int_{\tilde t_0}^t \left[f(s, \varphi(s)) - f(s, \tilde\varphi(s))\right] ds,$$
and therefore
$$\|\varphi(t) - \tilde\varphi(t)\| \leq \|x_0 - \tilde x_0\| + \left\| \int_{t_0}^{\tilde t_0} f(s, \varphi(s))\,ds \right\| + \left\| \int_{\tilde t_0}^t \left[f(s, \varphi(s)) - f(s, \tilde\varphi(s))\right] ds \right\|.$$
Using the boundedness assumptions on $f$ and $\partial f/\partial x$ to evaluate the right hand side of the latter inequality, we obtain
$$\|\varphi(t) - \tilde\varphi(t)\| \leq \|x_0 - \tilde x_0\| + M|\tilde t_0 - t_0| + K \left| \int_{\tilde t_0}^t \|\varphi(s) - \tilde\varphi(s)\|\,ds \right|.$$
If $|t_0 - \tilde t_0| < \delta$ and $\|x_0 - \tilde x_0\| < \delta$, then we have
$$\|\varphi(t) - \tilde\varphi(t)\| \leq \delta + M\delta + K \left| \int_{\tilde t_0}^t \|\varphi(s) - \tilde\varphi(s)\|\,ds \right|. \tag{1.23}$$
Applying Gronwall's inequality (Appendix A.7) to (1.23) gives
$$\|\varphi(t) - \tilde\varphi(t)\| \leq \delta(1 + M)e^{K|t - \tilde t_0|} \leq \delta(1 + M)e^{K(\tau_2 - \tau_1)},$$
using the fact that $|t - \tilde t_0| < \tau_2 - \tau_1$, if we denote $\mathcal{I} = (\tau_1, \tau_2)$. Since
$$\|\tilde\varphi(t) - \tilde\varphi(\tilde t)\| \leq \left\| \int_{\tilde t}^t f(s, \tilde\varphi(s))\,ds \right\| \leq M|t - \tilde t| \leq M\delta$$
if $|t - \tilde t| < \delta$, we have
$$\|\varphi(t) - \tilde\varphi(\tilde t)\| \leq \|\varphi(t) - \tilde\varphi(t)\| + \|\tilde\varphi(t) - \tilde\varphi(\tilde t)\| \leq \delta(1 + M)e^{K(\tau_2 - \tau_1)} + M\delta.$$
Now, given $\varepsilon > 0$, we need only choose $\delta < \varepsilon/[M + (1 + M)e^{K(\tau_2 - \tau_1)}]$ to obtain the desired inequality, completing the proof.
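The estimate just proved says that trajectories from nearby initial data stay close, with the gap controlled by a Gronwall factor $e^{K(t - t_0)}$. A numerical sketch on the illustrative scalar equation $x' = \sin(x)$, which is globally Lipschitz with $K = 1$ (the Euler step count and the perturbation size are arbitrary choices):

```python
import math

def solve(x0, t_end=2.0, n=20000):
    """Forward-Euler solution of x' = sin(x), x(0) = x0, at time t_end."""
    h, x = t_end / n, x0
    for _ in range(n):
        x += h * math.sin(x)
    return x

delta = 1e-4
gap = abs(solve(1.0 + delta) - solve(1.0))
bound = delta * math.exp(2.0)   # Gronwall: delta * e^{K (t - t0)} with K = 1
print(gap, bound)               # the observed gap stays below the bound
```

For the Euler iterates the bound even holds step by step, since each step magnifies the difference by at most a factor $1 + hK$.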
What we have shown is that the solution passing through the point $(t_0, x_0)$ is a continuous function of the triple $(t, t_0, x_0)$. We now consider the case where the parameters also vary, comparing solutions to two different but close equations.

Theorem 1.4.2. Let $f, g$ be defined in a domain $U$ and satisfy the hypotheses of Theorem 1.4.1. Let $\varphi$ and $\psi$ be solutions of $x' = f(t,x)$ and $x' = g(t,x)$, respectively, such that $\varphi(t_0) = x_0$, $\psi(t_0) = \tilde x_0$, existing on a common interval $\alpha < t < \beta$. Suppose that $\|f(t,x) - g(t,x)\| \leq \varepsilon$ for $(t,x) \in U$. Then the solutions $\varphi$ and $\psi$ satisfy
$$\|\varphi(t) - \psi(t)\| \leq \|x_0 - \tilde x_0\|\, e^{K|t - t_0|} + \frac{\varepsilon}{K}\left(e^{K|t - t_0|} - 1\right)$$
for all $t$, $\alpha < t < \beta$.
The following theorem [6, p. 58] is less restrictive in its hypotheses than the previous one, requiring only uniqueness of the solution of the IVP.

Theorem 1.4.3. Let $U$ be a domain of $(t,x)$ space, $\mathcal{I}_\lambda$ the domain $|\lambda - \lambda_0| < c$, with $c > 0$, and $U_\lambda$ the set of all $(t, x, \lambda)$ satisfying $(t,x) \in U$, $\lambda \in \mathcal{I}_\lambda$. Suppose $f$ is a continuous function on $U_\lambda$, bounded by a constant $M$ there. For $\lambda = \lambda_0$, let
$$x' = f(t, x, \lambda), \qquad x(t_0) = x_0 \tag{1.24}$$
have a unique solution $\varphi_0$ on the interval $[a, b]$, where $t_0 \in [a, b]$. Then there exists a $\delta > 0$ such that, for any fixed $\lambda$ such that $|\lambda - \lambda_0| < \delta$, every solution $\varphi_\lambda$ of (1.24) exists over $[a, b]$, and $\varphi_\lambda \to \varphi_0$ as $\lambda \to \lambda_0$, uniformly over $[a, b]$.
Proof. We begin by considering $t_0 \in (a, b)$. First, choose an $\alpha > 0$ small enough that the region $\mathcal{R} = \{|t - t_0| \leq \alpha, \|x - x_0\| \leq M\alpha\}$ is in $U$; note that $\mathcal{R}$ is a slight modification of the usual security domain. All solutions of (1.24) with $\lambda \in \mathcal{I}_\lambda$ exist over $[t_0 - \alpha, t_0 + \alpha]$ and remain in $\mathcal{R}$. Let $\varphi_\lambda$ denote such a solution. Then the set of functions $\{\varphi_\lambda,\ \lambda \in \mathcal{I}_\lambda\}$ is a uniformly bounded and equicontinuous set on $|t - t_0| \leq \alpha$. This follows from the integral equation
$$\varphi_\lambda(t) = x_0 + \int_{t_0}^t f(s, \varphi_\lambda(s), \lambda)\,ds \quad (|t - t_0| \leq \alpha) \tag{1.25}$$
and the inequality $\|f\| \leq M$.

Suppose that for some $\tilde t \in [t_0 - \alpha, t_0 + \alpha]$, $\varphi_\lambda(\tilde t)$ does not tend to $\varphi_0(\tilde t)$ as $\lambda \to \lambda_0$. Then there exists a sequence $\{\lambda_k\}$, $k = 1, 2, \ldots$, for which $\lambda_k \to \lambda_0$, and corresponding solutions $\varphi_{\lambda_k}$ such that $\varphi_{\lambda_k}$ converges uniformly over $[t_0 - \alpha, t_0 + \alpha]$ as $k \to \infty$ to a limit function $\psi$, with $\psi(\tilde t) \neq \varphi_0(\tilde t)$. From the fact that $f \in C$ on $U_\lambda$, that $\psi \in C$ on $[t_0 - \alpha, t_0 + \alpha]$, and that $\varphi_{\lambda_k}$ converges uniformly to $\psi$, (1.25) for the solutions $\varphi_{\lambda_k}$ yields
$$\psi(t) = x_0 + \int_{t_0}^t f(s, \psi(s), \lambda_0)\,ds \quad (|t - t_0| \leq \alpha).$$
Thus $\psi$ is a solution of (1.24) with $\lambda = \lambda_0$. By the uniqueness hypothesis, it follows that $\psi(t) = \varphi_0(t)$ on $|t - t_0| \leq \alpha$. Thus $\psi(\tilde t) = \varphi_0(\tilde t)$, a contradiction. Thus all solutions $\varphi_\lambda$ on $|t - t_0| \leq \alpha$ tend to $\varphi_0$ as $\lambda \to \lambda_0$. Because of the equicontinuity, the convergence is uniform.

Let us now prove that the result holds over $[a, b]$. For this, let us consider the interval $[t_0, b]$. Let $\beta \in [t_0, b)$, and suppose that the result is valid for every small $h > 0$ over $[t_0, \beta - h]$ but not over $[t_0, \beta + h]$. It is clear that $\beta \geq t_0 + \alpha$. By the above assumption, for any small $\eta > 0$, there exists a $\delta_\eta > 0$ such that
$$\|\varphi_\lambda(\beta - \eta) - \varphi_0(\beta - \eta)\| < \eta \tag{1.26}$$
for $|\lambda - \lambda_0| < \delta_\eta$. Let $H \subset U$ be defined as the region
$$H = \{(t,x) : |t - \beta| \leq \eta,\ \|x - \varphi_0(\beta - \eta)\| \leq \eta + M|t - \beta + \eta|\},$$
with $\eta$ small enough that $H \subset U$. Any solution of $x' = f(t, x, \lambda)$ starting on $t = \beta - \eta$ with initial value $\xi_0$, $\|\xi_0 - \varphi_0(\beta - \eta)\| \leq \eta$, will remain in $H$ as $t$ increases. Thus all such solutions can be continued to $\beta + \eta$.

By (1.26), it follows that for $|\lambda - \lambda_0| < \delta_\eta$, the solutions $\varphi_\lambda$ can all be continued to $\beta + \eta$. Thus over $[t_0, \beta + \eta]$ these solutions are in $U$, so that the argument that $\varphi_\lambda \to \varphi_0$ which has been given for $|t - t_0| \leq \alpha$ also applies over $[t_0, \beta + \eta]$. Thus the assumption about the existence of $\beta < b$ is false. The case $\beta = b$ is treated in similar fashion on $t \leq \beta$.

A similar argument applies to the left of $t_0$, and therefore the result is valid over $[a, b]$.
Definition 1.4.4 (Well-posedness). A problem is said to be well-posed if solutions exist, are unique, and depend continuously on the initial conditions.
1.5 Generality of first order systems

Consider an $n$th order differential equation in normal form
$$x^{(n)} = f\left(t, x, x', \ldots, x^{(n-1)}\right). \tag{1.27}$$
This equation can be reduced to a system of $n$ first order ordinary differential equations, by proceeding as follows. Let $y_0 = x$, $y_1 = x'$, $y_2 = x''$, \ldots, $y_{n-1} = x^{(n-1)}$. Then (1.27) is equivalent to
$$y' = F(t, y) \tag{1.28}$$
with $y = (y_0, y_1, \ldots, y_{n-1})^T$ and
$$F(t, y) = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \\ f(t, y_0, \ldots, y_{n-1}) \end{pmatrix}.$$
Similarly, the IVP associated to (1.27), given by
$$x^{(n)} = f\left(t, x, x', \ldots, x^{(n-1)}\right), \qquad x(t_0) = x_0,\ x'(t_0) = x_1,\ \ldots,\ x^{(n-1)}(t_0) = x_{n-1}, \tag{1.29}$$
is equivalent to the IVP
$$y' = F(t, y), \qquad y(t_0) = y_0 = (x_0, \ldots, x_{n-1})^T. \tag{1.30}$$
As a consequence, all results in this chapter are true for equations of order higher than 1.
Example Consider the second order IVP
$$x'' = -2x' + 4x - 3, \qquad x(0) = 2,\ x'(0) = 1.$$
To transform it into a system of first-order differential equations, we let $y = x'$. Substituting (where possible) $y$ for $x'$ in the equation gives
$$y' = -2y + 4x - 3.$$
The initial condition becomes $x(0) = 2$, $y(0) = 1$. So finally, the following IVP is equivalent to the original one:
$$x' = y, \qquad y' = 4x - 2y - 3, \qquad x(0) = 2,\ y(0) = 1.$$
Note that the linearity of the initial problem is preserved.
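A quick numerical sanity check of the reduction: integrating the first-order system with a simple forward Euler loop (the step size is an arbitrary illustrative choice) reproduces the value given by the explicit solution of the second order IVP, which is $x(t) = A e^{r_1 t} + B e^{r_2 t} + 3/4$ with $r_{1,2} = -1 \pm \sqrt 5$.

```python
import math

def step(x, y, h):
    """One Euler step of the reduced system x' = y, y' = 4x - 2y - 3."""
    return x + h * y, y + h * (4 * x - 2 * y - 3)

h, x, y = 1e-4, 2.0, 1.0              # initial condition x(0) = 2, x'(0) = 1
for _ in range(10000):                # integrate up to t = 1
    x, y = step(x, y, h)

# Exact solution: roots of r^2 + 2r - 4 = 0 plus particular solution 3/4,
# constants fixed by x(0) = 2 (A + B = 5/4) and x'(0) = 1 (A r1 + B r2 = 1).
r1, r2 = -1 + math.sqrt(5), -1 - math.sqrt(5)
A = (1 - 1.25 * r2) / (r1 - r2)
B = 1.25 - A
exact = A * math.exp(r1) + B * math.exp(r2) + 0.75

print(abs(x - exact))  # small: the first order system reproduces the 2nd order IVP
```

The agreement illustrates the equivalence claimed above: a trajectory of the reduced system carries exactly the solution of the original higher order equation in its first component.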
Example The differential equation
$$x^{(n)}(t) = a_{n-1}(t)x^{(n-1)}(t) + \cdots + a_1(t)x'(t) + a_0(t)x(t) + b(t)$$
is an $n$th order nonhomogeneous linear differential equation. Together with the initial conditions
$$x^{(n-1)}(t_0) = x_0^{(n-1)},\ \ldots,\ x'(t_0) = x_0',\ x(t_0) = x_0,$$
where $x_0, x_0', \ldots, x_0^{(n-1)} \in \mathbb{R}$, it forms an IVP. We can transform it to a system of linear first order equations by setting
$$y_0 = x,\quad y_1 = x',\quad \ldots,\quad y_{n-1} = x^{(n-1)}.$$
The $n$th order linear equation is then equivalent to the following system of $n$ first order linear equations:
$$\begin{aligned} y_0' &= y_1 \\ y_1' &= y_2 \\ &\ \ \vdots \\ y_{n-2}' &= y_{n-1} \\ y_{n-1}' &= a_{n-1}(t)y_{n-1}(t) + a_{n-2}(t)y_{n-2}(t) + \cdots + a_1(t)y_1(t) + a_0(t)y_0(t) + b(t) \end{aligned}$$
under the initial conditions
$$y_{n-1}(t_0) = x_0^{(n-1)},\ \ldots,\ y_1(t_0) = x_0',\ y_0(t_0) = x_0.$$
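In matrix form, the system above reads $y' = C(t)y + b(t)e_n$, where $C(t)$ is a companion matrix. As a sketch, the following builds that matrix for constant coefficients; the coefficient list used at the end is an arbitrary example, not one from the notes.

```python
def companion(a):
    """Companion matrix for x^(n) = a[n-1] x^(n-1) + ... + a[0] x,
    acting on y = (x, x', ..., x^(n-1))."""
    n = len(a)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = 1.0          # encodes y_i' = y_{i+1}
    C[n - 1] = list(a)             # last row: y_{n-1}' = sum a_i y_i
    return C

# Example: x''' = 2x'' - x' + 5x  ->  a = (a_0, a_1, a_2) = (5, -1, 2)
print(companion([5.0, -1.0, 2.0]))
```

The zero pattern with a superdiagonal of ones is exactly the chain $y_0' = y_1, \ldots, y_{n-2}' = y_{n-1}$ written above, with the scalar equation stored in the last row.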
1.6 Generality of autonomous systems

A nonautonomous system
$$x'(t) = f(t, x(t))$$
can be transformed into an autonomous system of equations by setting an auxiliary variable, say $y$, equal to $t$, giving
$$x' = f(y, x), \qquad y' = 1.$$
However, this transformation does not always make the system any easier to study.
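As a minimal sketch of the augmentation, for a scalar nonautonomous right-hand side (the choice $f(t,x) = tx$ is purely illustrative):

```python
def autonomize(f):
    """Turn x' = f(t, x) into the autonomous system (x, y)' = (f(y, x), 1),
    where the extra component y plays the role of time."""
    return lambda state: (f(state[1], state[0]), 1.0)

F = autonomize(lambda t, x: t * x)   # x' = t x as an autonomous 2-d system
print(F((3.0, 2.0)))                 # (f(2, 3), 1) = (6.0, 1.0)
```

The price of autonomy is one extra dimension, and the new system is never bounded in $y$, which is one reason the trick does not always simplify the analysis.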
1.7 Suggested reading, Further problems

Most of these results are treated one way or another in Coddington and Levinson [6] (first edition published in 1955), and the current text, as many others, does little but paraphrase them.

We have not seen here any results specific to complex valued differential equations. As complex numbers are two-dimensional real vectors, the results carry through to the complex case by simply assuming that if, in (1.2), we consider an $n$-dimensional complex vector, then this is equivalent to a $2n$-dimensional problem. Furthermore, if $f(t,x)$ is analytic in $t$ and $x$, then analytic solutions can be constructed. See Section I-4 in [12], ..., for example.
Chapter 2

Linear systems

Let $\mathcal{I}$ be an interval of $\mathbb{R}$, $E$ a normed vector space over a field $\mathbb{K}$ ($E = \mathbb{K}^n$, with $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$), and $\mathcal{L}(E)$ the space of continuous linear maps from $E$ to $E$. Let $\|\cdot\|$ be a norm on $E$, and $|||\cdot|||$ be the induced supremum norm on $\mathcal{L}(E)$ (see Appendix A.1). Consider a map $A : \mathcal{I} \to \mathcal{L}(E)$ and a map $B : \mathcal{I} \to E$. A linear system of first order equations is defined by
$$x'(t) = A(t)x(t) + B(t) \tag{2.1}$$
where the unknown $x$ is a map on $\mathcal{I}$, taking values in $E$, defined and differentiable on a subinterval of $\mathcal{I}$. We restrict ourselves to the finite dimensional case ($E = \mathbb{K}^n$). Hence we consider $A \in \mathcal{M}_n(\mathbb{K})$, the $n \times n$ matrices over the field $\mathbb{K}$, and $B \in \mathbb{K}^n$. We suppose that $A$ and $B$ have continuous entries. In most of what follows, we assume $\mathbb{K} = \mathbb{R}$.

The name linear for system (2.1) is an abuse of language. System (2.1) should be called an affine system, with associated linear system
$$x'(t) = A(t)x(t). \tag{2.2}$$
Another way to distinguish systems (2.1) and (2.2) is to refer to the former as a nonhomogeneous linear system and the latter as a homogeneous linear system. In order to lighten the language, since there will be other qualifiers added to both (2.1) and (2.2), we use in this chapter the names affine system for (2.1) and linear system for (2.2).

The exception to this naming convention is that we refer to (2.1) as a linear system if we consider the generic properties of (2.1), with (2.2) as a particular case, as in this chapter's title or in the next section, for example.
2.1 Existence and uniqueness of solutions

Theorem 2.1.1. Let $A$ and $B$ be defined and continuous on $\mathcal{I} \ni t_0$. Then, for all $x_0 \in E$, there exists a unique solution $\varphi_t(x_0)$ of (2.1) through $(t_0, x_0)$, defined on the whole interval $\mathcal{I}$.
Proof. Let $k(t) = |||A(t)||| = \sup_{\|x\| \leq 1} \|A(t)x\|$. Then for all $t \in \mathcal{I}$ and all $x_1, x_2 \in \mathbb{K}^n$,
$$\|f(t, x_1) - f(t, x_2)\| = \|A(t)(x_1 - x_2)\| \leq |||A(t)|||\,\|x_1 - x_2\| \leq k(t)\|x_1 - x_2\|,$$
where the inequality
$$\|A(t)(x_1 - x_2)\| \leq |||A(t)|||\,\|x_1 - x_2\|$$
results from the nature of the norm $|||\cdot|||$ (see Appendix A.1). Furthermore, $k$ is continuous on $\mathcal{I}$. Therefore the conditions of Theorem 1.2.2 hold, leading to existence and uniqueness on the interval $\mathcal{I}$.
With linear systems, it is possible to extend solutions easily, as shown by the next theorem.

Theorem 2.1.2. Suppose that the entries of $A(t)$ and of $B(t)$ are continuous on an open interval $I$. Then every solution of (2.1) which is defined on a subinterval of $I$ can be extended uniquely to the entire interval $I$ as a solution of (2.1).
Proof. Suppose that $I = (t_1, t_2)$, and that a solution $\phi$ of (2.1) is defined on $J = (\tau_1, \tau_2)$, with $J \subset I$. Then
\[
\|\phi(t)\| \le \|\phi(t_0)\| + \left\| \int_{t_0}^t A(s)\phi(s) + B(s)\, ds \right\|
\]
for all $t \in J$, where $t_0 \in J$. Let
\[
K = \|\phi(t_0)\| + (\tau_2 - \tau_1) \max_{\tau_1 \le t \le \tau_2} \|B(t)\|, \qquad
L = \max_{\tau_1 \le t \le \tau_2} |||A(t)|||.
\]
Then, for $t_0, t \in J$,
\[
\|\phi(t)\| \le K + L \left\| \int_{t_0}^t \phi(s)\, ds \right\| \le K + L \left| \int_{t_0}^t \|\phi(s)\|\, ds \right|.
\]
Thus, using Gronwall's Lemma (Lemma A.7), the following estimate holds in $J$:
\[
\|\phi(t)\| \le K e^{L|t - t_0|} \le K e^{L(\tau_2 - \tau_1)} < \infty.
\]
This implies that case ii) in Corollary 1.3.4 is ruled out, leaving only the possibility for $\phi$ to be extendable over $I$, since the vector field in (2.1) is Lipschitz.
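The a priori bound $\|\phi(t)\| \le K e^{L|t - t_0|}$ from this proof can be checked numerically. The sketch below integrates an illustrative two-dimensional affine system (the choice of $A$, $B$, and $x_0$ is ours, not from the notes) and verifies that the solution norm stays below the Gronwall estimate.

```python
import numpy as np

# Numerical check of the bound ||phi(t)|| <= K e^{L|t - t0|} from the proof of
# Theorem 2.1.2, on x' = A x + B(t) with A = [[0,1],[-1,0]], B(t) = (cos t, 0).
# These matrices are illustrative choices, not taken from the notes.

def rk4(f, t0, x0, t1, n=2000):
    """Classical fourth-order Runge-Kutta integration of x' = f(t, x)."""
    t, x, h = t0, np.asarray(x0, float), (t1 - t0) / n
    out = [(t, x.copy())]
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
        out.append((t, x.copy()))
    return out

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda t: np.array([np.cos(t), 0.0])
f = lambda t, x: A @ x + B(t)

t0, t1, x0 = 0.0, 2.0, np.array([1.0, 0.0])
K = np.linalg.norm(x0) + (t1 - t0) * 1.0   # max ||B(t)|| = 1 on [0, 2]
L = np.linalg.norm(A, 2)                   # operator (spectral) norm of A

traj = rk4(f, t0, x0, t1)
ok = all(np.linalg.norm(x) <= K * np.exp(L * abs(t - t0)) + 1e-9 for t, x in traj)
print(ok)
```

The bound is conservative: the actual solution here remains of moderate size while the estimate grows like $3e^t$.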
2.2 Linear systems

We begin our study of linear systems of ordinary differential equations by considering homogeneous systems of the form (2.2) (linear systems), with $x \in \mathbb{R}^n$ and $A \in \mathcal{M}_n(\mathbb{R})$, the set of square matrices over the field $\mathbb{R}$, $A$ having continuous entries on an interval $I$.
2.2.1 The vector space of solutions

Theorem 2.2.1 (Superposition principle). Let $S_0$ be the set of solutions of (2.2) that are defined on some interval $I \subset \mathbb{R}$. Let $\phi_1, \phi_2 \in S_0$, and $\lambda_1, \lambda_2 \in \mathbb{R}$. Then $\lambda_1 \phi_1 + \lambda_2 \phi_2 \in S_0$.

Proof. Let $\phi_1, \phi_2 \in S_0$ be two solutions of (2.2), $\lambda_1, \lambda_2 \in \mathbb{R}$. Then for all $t \in I$,
\[
\phi_1' = A(t)\phi_1, \qquad \phi_2' = A(t)\phi_2,
\]
from which it follows that
\[
\frac{d}{dt}(\lambda_1 \phi_1 + \lambda_2 \phi_2) = A(t)[\lambda_1 \phi_1 + \lambda_2 \phi_2],
\]
implying that $\lambda_1 \phi_1 + \lambda_2 \phi_2 \in S_0$.

Thus the linear combination of any two solutions of (2.2) is in $S_0$. This is a hint that $S_0$ must be a vector space of dimension $n$ over $\mathbb{K}$. To show this, we need to find a basis of $S_0$. We proceed in the classical manner, with the notable difference from classical linear algebra that the basis here is composed of time-dependent functions.

Definition 2.2.2 (Fundamental set of solutions). A set of $n$ solutions of the linear differential equation (2.2), all defined on the same open interval $I$, is called a fundamental set of solutions on $I$ if the solutions are linearly independent functions on $I$.
Proposition 2.2.3. If $A(t)$ is defined and continuous on the interval $I$, then the system (2.2) has a fundamental set of solutions defined on $I$.

Proof. Let $t_0 \in I$, and let $e_1, \ldots, e_n$ denote the canonical basis of $\mathbb{K}^n$. From Theorem 2.1.1, for each $i = 1, \ldots, n$ there exists a unique solution $\phi_i$ such that $\phi_i(t_0) = e_i$. Furthermore, from Theorem 2.1.1, each function $\phi_i$ is defined on the whole interval $I$.

Assume that the family $\phi_i$, $i = 1, \ldots, n$, is linearly dependent. Then there exist $\lambda_i \in \mathbb{R}$, $i = 1, \ldots, n$, not all zero, such that $\sum_{i=1}^n \lambda_i \phi_i(t) = 0$ for all $t$. In particular, this is true for $t = t_0$, and thus $\sum_{i=1}^n \lambda_i \phi_i(t_0) = \sum_{i=1}^n \lambda_i e_i = 0$, which implies that the canonical basis of $\mathbb{K}^n$ is linearly dependent. This is a contradiction, and so the $\phi_i$ are linearly independent.
Proposition 2.2.4. If $F$ is a fundamental set of solutions of the linear system (2.2) on the open interval $I$, then every solution defined on $I$ can be expressed as a linear combination of the elements of $F$.

Let $t_0 \in I$; we consider the map
\[
\Phi_{t_0} : S_0 \to \mathbb{K}^n, \qquad \Phi_{t_0}(x) = x(t_0).
\]

Lemma 2.2.5. $\Phi_{t_0}$ is a linear isomorphism.
Proof. $\Phi_{t_0}$ is bijective. Indeed, let $v \in \mathbb{K}^n$; from Theorem 2.1.1, there exists a unique solution passing through $(t_0, v)$, i.e.,
\[
\forall v \in \mathbb{K}^n, \ \exists!\, x \in S_0, \ x(t_0) = v \iff \Phi_{t_0}(x) = v,
\]
so $\Phi_{t_0}$ is surjective. That $\Phi_{t_0}$ is injective follows from uniqueness of solutions to an ODE. Furthermore, $\Phi_{t_0}(\lambda_1 x_1 + \lambda_2 x_2) = \lambda_1 \Phi_{t_0}(x_1) + \lambda_2 \Phi_{t_0}(x_2)$. Therefore $\dim S_0 = \dim \mathbb{K}^n = n$.
2.2.2 Fundamental matrix solution

Definition 2.2.6. An $n \times n$ matrix function $t \mapsto \Phi(t)$, defined on an open interval $I$, is called a matrix solution of the homogeneous linear system (2.2) if each of its columns is a (vector) solution. A matrix solution is called a fundamental matrix solution if its columns form a fundamental set of solutions. If in addition $\Phi(t_0) = I$, a fundamental matrix solution is called the principal fundamental matrix solution.

An important property of fundamental matrix solutions is the following, known as Abel's formula.

Theorem 2.2.7 (Abel's formula). Let $A(t)$ be continuous on $I$ and $\Phi \in \mathcal{M}_n(\mathbb{K})$ be such that $\Phi'(t) = A(t)\Phi(t)$ on $I$. Then $\det \Phi$ satisfies on $I$ the differential equation
\[
(\det \Phi)' = (\operatorname{tr} A)(\det \Phi),
\]
or, in integral form, for $t, \tau \in I$,
\[
\det \Phi(t) = \det \Phi(\tau) \exp\left( \int_\tau^t \operatorname{tr} A(s)\, ds \right). \tag{2.3}
\]
Proof. Writing the differential equation $\Phi'(t) = A(t)\Phi(t)$ in terms of the entries $\phi_{ij}$ and $a_{ij}$ of, respectively, $\Phi$ and $A$,
\[
\phi_{ij}'(t) = \sum_{k=1}^n a_{ik}(t)\phi_{kj}(t), \tag{2.4}
\]
for $i, j = 1, \ldots, n$. Write $\det \Phi(t) = \Delta(r_1, r_2, \ldots, r_n)$, where $r_i$ is the $i$th row of $\Phi(t)$. Then $\Delta$ is a linear function of each of its arguments when all other rows are held constant, which implies that
\[
\frac{d}{dt} \det \Phi(t) = \Delta\!\left(\frac{dr_1}{dt}, r_2, \ldots, r_n\right) + \Delta\!\left(r_1, \frac{dr_2}{dt}, \ldots, r_n\right) + \cdots + \Delta\!\left(r_1, r_2, \ldots, \frac{dr_n}{dt}\right).
\]
(To show this, use the definition of the derivative as a limit.) Hence $(\det \Phi)'$ is a sum of $n$ determinants, the $i$th obtained from $\det \Phi$ by replacing the $i$th row by its derivative. Using (2.4) on the first of these determinants gives
\[
\begin{vmatrix}
\sum_k a_{1k}\phi_{k1} & \sum_k a_{1k}\phi_{k2} & \cdots & \sum_k a_{1k}\phi_{kn} \\
\phi_{21} & \phi_{22} & \cdots & \phi_{2n} \\
\vdots & & & \vdots \\
\phi_{n1} & \phi_{n2} & \cdots & \phi_{nn}
\end{vmatrix}.
\]
Subtracting $a_{12}$ times the second row, $a_{13}$ times the third row, \ldots, $a_{1n}$ times the $n$th row from the first row does not change the determinant, and thus
\[
\begin{vmatrix}
\sum_k a_{1k}\phi_{k1} & \cdots & \sum_k a_{1k}\phi_{kn} \\
\phi_{21} & \cdots & \phi_{2n} \\
\vdots & & \vdots \\
\phi_{n1} & \cdots & \phi_{nn}
\end{vmatrix}
=
\begin{vmatrix}
a_{11}\phi_{11} & \cdots & a_{11}\phi_{1n} \\
\phi_{21} & \cdots & \phi_{2n} \\
\vdots & & \vdots \\
\phi_{n1} & \cdots & \phi_{nn}
\end{vmatrix}
= a_{11} \det \Phi.
\]
Repeating this for each of the $n$ terms in $(\det \Phi)'$, we obtain $(\det \Phi)' = (a_{11} + a_{22} + \cdots + a_{nn}) \det \Phi$, giving finally $(\det \Phi)' = (\operatorname{tr} A)(\det \Phi)$. Note that this equation takes the form $u' - \alpha(t)u = 0$, which implies that
\[
u \exp\left( -\int_\tau^t \alpha(s)\, ds \right) = \text{constant},
\]
which in turn implies the integral form of the formula.
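Abel's formula (2.3) is easy to check numerically: integrating $\Phi' = A(t)\Phi$ with $\Phi(0) = I$, the determinant of the computed fundamental matrix should match the exponential of the integrated trace. The matrix $A(t)$ below is an illustrative choice of ours.

```python
import numpy as np

# Numerical check of Abel's formula (2.3): for Phi' = A(t) Phi, Phi(0) = I,
# det Phi(t) = exp(int_0^t tr A(s) ds). Here tr A(t) = -t, so the right-hand
# side is exp(-t^2/2). A(t) is an illustrative choice, not from the notes.

def A(t):
    return np.array([[0.0, 1.0], [-1.0, -t]])   # tr A(t) = -t

def integrate_matrix_ode(A, t0, t1, n=4000):
    """RK4 integration of Phi' = A(t) Phi with Phi(t0) = I."""
    h = (t1 - t0) / n
    Phi, t = np.eye(2), t0
    for _ in range(n):
        k1 = A(t) @ Phi
        k2 = A(t + h/2) @ (Phi + h/2 * k1)
        k3 = A(t + h/2) @ (Phi + h/2 * k2)
        k4 = A(t + h) @ (Phi + h * k3)
        Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return Phi

t1 = 1.5
Phi = integrate_matrix_ode(A, 0.0, t1)
lhs = np.linalg.det(Phi)
rhs = np.exp(-t1**2 / 2)     # exp(int_0^{t1} tr A(s) ds)
print(abs(lhs - rhs) < 1e-6)
```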
Remark. Consider (2.3). Suppose that $\tau \in I$ is such that $\det \Phi(\tau) \ne 0$. Then, since $e^a \ne 0$ for any $a$, it follows that $\det \Phi(t) \ne 0$ for all $t \in I$. In short, linear independence of the solutions at one $t \in I$ is equivalent to linear independence of the solutions at every $t \in I$. As a consequence, the column vectors of a fundamental matrix are linearly independent at every $t \in I$.

Theorem 2.2.8. A solution matrix $\Phi$ of (2.2) is a fundamental matrix solution on $I$ if, and only if, $\det \Phi(t) \ne 0$ for all $t \in I$.
Proof. Let $\Phi$ be a fundamental matrix with column vectors $\phi_i$, and suppose that $\phi$ is any nontrivial solution of (2.2). Then there exist $c_1, \ldots, c_n$, not all zero, such that
\[
\phi = \sum_{j=1}^n c_j \phi_j,
\]
or, writing this equation in terms of $\Phi$, $\phi = \Phi c$, with $c = (c_1, \ldots, c_n)^T$. At any point $t_0 \in I$, this is a system of $n$ linear equations in the $n$ unknowns $c_1, \ldots, c_n$. This system has a unique solution for any choice of $\phi(t_0)$. Thus $\det \Phi(t_0) \ne 0$, and by the remark above, $\det \Phi(t) \ne 0$ for all $t \in I$.

Conversely, let $\Phi$ be a solution matrix of (2.2), and suppose that $\det \Phi(t) \ne 0$ for all $t \in I$. Then the column vectors are linearly independent at every $t \in I$.
From the remark above, the condition "$\det \Phi(t) \ne 0$ for all $t \in I$" in Theorem 2.2.8 is equivalent to the condition "there exists $t \in I$ such that $\det \Phi(t) \ne 0$". A frequent candidate for this role is $t_0$.

To conclude on fundamental matrix solutions, remark that, for a given linear system, there are infinitely many of them. However, since each fundamental matrix solution provides a basis for the vector space of solutions, it is clear that the fundamental matrices associated to a given problem must be linked. Indeed, we have the following result.
Theorem 2.2.9. Let $\Phi$ be a fundamental matrix solution to (2.2). Let $C \in \mathcal{M}_n(\mathbb{K})$ be a constant nonsingular matrix. Then $\Phi C$ is a fundamental matrix solution to (2.2). Conversely, if $\Psi$ is another fundamental matrix solution to (2.2), then there exists a constant nonsingular $C \in \mathcal{M}_n(\mathbb{K})$ such that $\Psi(t) = \Phi(t)C$ for all $t \in I$.

Proof. Since $\Phi$ is a fundamental matrix solution to (2.2), we have
\[
(\Phi C)' = \Phi' C = (A(t)\Phi)C = A(t)(\Phi C),
\]
and thus $\Phi C$ is a matrix solution to (2.2). Since $\Phi$ is a fundamental matrix solution to (2.2), Theorem 2.2.8 implies that $\det \Phi \ne 0$. Also, since $C$ is nonsingular, $\det C \ne 0$. Thus $\det(\Phi C) = \det \Phi \det C \ne 0$, and by Theorem 2.2.8, $\Phi C$ is a fundamental matrix solution to (2.2).

Conversely, assume that $\Phi$ and $\Psi$ are two fundamental matrix solutions. Since $\Phi \Phi^{-1} = I$, taking the derivative of this expression gives $\Phi' \Phi^{-1} + \Phi (\Phi^{-1})' = 0$, and therefore $(\Phi^{-1})' = -\Phi^{-1}\Phi'\Phi^{-1}$. We now consider the product $\Phi^{-1}\Psi$. There holds
\[
\begin{aligned}
\left( \Phi^{-1}\Psi \right)' &= \left( \Phi^{-1} \right)' \Psi + \Phi^{-1}\Psi' \\
&= -\Phi^{-1}\Phi'\Phi^{-1}\Psi + \Phi^{-1}A(t)\Psi \\
&= -\Phi^{-1}A(t)\Phi\Phi^{-1}\Psi + \Phi^{-1}A(t)\Psi \\
&= \left( -\Phi^{-1}A(t) + \Phi^{-1}A(t) \right)\Psi \\
&= 0.
\end{aligned}
\]
Therefore, integrating $(\Phi^{-1}\Psi)' = 0$ gives $\Phi^{-1}\Psi = C$, with $C \in \mathcal{M}_n(\mathbb{K})$ a constant matrix. Thus $\Psi = \Phi C$. Furthermore, as $\Phi$ and $\Psi$ are fundamental matrix solutions, $\det \Phi \ne 0$ and $\det \Psi \ne 0$, and therefore $\det C \ne 0$.

Remark. Note that if $\Phi$ is a fundamental matrix solution to (2.2) and $C \in \mathcal{M}_n(\mathbb{K})$ is a constant nonsingular matrix, then it is not necessarily true that $C\Phi$ is a fundamental matrix solution to (2.2). See Exercise 2.3.
2.2.3 Resolvent matrix

If $t \mapsto \Phi(t)$ is a matrix solution of (2.2) on the interval $I$, then $\Phi'(t) = A(t)\Phi(t)$ on $I$. By Proposition 2.2.3, there exists a fundamental matrix solution.

Definition 2.2.10 (Resolvent matrix). Let $t_0 \in I$ and $\Phi(t)$ be a fundamental matrix solution of (2.2) on $I$. Since the columns of $\Phi$ are linearly independent, $\Phi(t_0)$ is invertible. The resolvent (or state transition matrix) of (2.2) is then defined as
\[
R(t, t_0) = \Phi(t)\Phi(t_0)^{-1}.
\]
It is evident that $R(t, t_0)$ is the principal fundamental matrix solution at $t_0$ (since $R(t_0, t_0) = \Phi(t_0)\Phi(t_0)^{-1} = I$). Thus system (2.2) has a principal fundamental matrix solution at each point in $I$.

Proposition 2.2.11. The resolvent matrix satisfies the Chapman-Kolmogorov identities
1) $R(t, t) = I$,
2) $R(t, s)R(s, u) = R(t, u)$,
as well as the identities
3) $R(t, s)^{-1} = R(s, t)$,
4) $\dfrac{\partial}{\partial s} R(t, s) = -R(t, s)A(s)$,
5) $\dfrac{\partial}{\partial t} R(t, s) = A(t)R(t, s)$.
Proof. First, the Chapman-Kolmogorov identities. 1) is $R(t, t) = \Phi(t)\Phi^{-1}(t) = I$. Also, for 2),
\[
R(t, s)R(s, u) = \Phi(t)\Phi^{-1}(s)\Phi(s)\Phi^{-1}(u) = \Phi(t)\Phi^{-1}(u) = R(t, u).
\]
The other identities are equally easy to establish. Indeed,
\[
R(t, s)^{-1} = \left( \Phi(t)\Phi^{-1}(s) \right)^{-1} = \left( \Phi^{-1}(s) \right)^{-1} \Phi(t)^{-1} = \Phi(s)\Phi^{-1}(t) = R(s, t),
\]
whence 3). Also,
\[
\frac{\partial}{\partial s} R(t, s) = \frac{\partial}{\partial s} \left( \Phi(t)\Phi^{-1}(s) \right) = \Phi(t)\, \frac{\partial}{\partial s} \Phi^{-1}(s).
\]
As $\Phi$ is a fundamental matrix solution, $\Phi^{-1}$ exists and is nonsingular, and differentiating $\Phi\Phi^{-1} = I$ gives
\[
\frac{\partial}{\partial s}\left( \Phi(s)\Phi^{-1}(s) \right) = 0
\iff \left( \frac{\partial}{\partial s}\Phi(s) \right)\Phi^{-1}(s) + \Phi(s)\,\frac{\partial}{\partial s}\Phi^{-1}(s) = 0
\iff \frac{\partial}{\partial s}\Phi^{-1}(s) = -\Phi^{-1}(s)\left( \frac{\partial}{\partial s}\Phi(s) \right)\Phi^{-1}(s).
\]
Therefore,
\[
\frac{\partial}{\partial s} R(t, s) = -\Phi(t)\Phi^{-1}(s)\left( \frac{\partial}{\partial s}\Phi(s) \right)\Phi^{-1}(s) = -R(t, s)\left( \frac{\partial}{\partial s}\Phi(s) \right)\Phi^{-1}(s).
\]
Now, since $\Phi(s)$ is a fundamental matrix solution, $\partial \Phi(s)/\partial s = A(s)\Phi(s)$, and thus
\[
\frac{\partial}{\partial s} R(t, s) = -R(t, s)A(s)\Phi(s)\Phi^{-1}(s) = -R(t, s)A(s),
\]
giving 4). Finally,
\[
\frac{\partial}{\partial t} R(t, s) = \frac{\partial}{\partial t}\Phi(t)\Phi^{-1}(s) = A(t)\Phi(t)\Phi^{-1}(s) = A(t)R(t, s),
\]
since $\Phi$ is a fundamental matrix solution, giving 5).
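The identities of Proposition 2.2.11 lend themselves to a numerical sanity check: computing $R(t,s)$ by integrating $\Phi' = A(\tau)\Phi$ from $s$ to $t$ with $\Phi(s) = I$, identity 2) and identity 3) should hold up to integration error. The time-dependent $A(\tau)$ below is our illustrative choice.

```python
import numpy as np

# Numerical check of R(t,s)R(s,u) = R(t,u) (identity 2) and R(t,s)^{-1} = R(s,t)
# (identity 3) from Proposition 2.2.11, for an illustrative A(tau).

def A(tau):
    return np.array([[0.0, 1.0 + np.sin(tau)], [-1.0, 0.0]])

def R(t, s, n=3000):
    """Resolvent R(t, s): RK4 integration of Phi' = A(tau) Phi, Phi(s) = I."""
    h = (t - s) / n          # h may be negative (backward integration)
    Phi, tau = np.eye(2), s
    for _ in range(n):
        k1 = A(tau) @ Phi
        k2 = A(tau + h/2) @ (Phi + h/2 * k1)
        k3 = A(tau + h/2) @ (Phi + h/2 * k2)
        k4 = A(tau + h) @ (Phi + h * k3)
        Phi = Phi + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        tau += h
    return Phi

t, s, u = 2.0, 1.0, 0.0
err_ck = np.max(np.abs(R(t, s) @ R(s, u) - R(t, u)))          # identity 2)
err_inv = np.max(np.abs(np.linalg.inv(R(t, s)) - R(s, t)))    # identity 3)
print(err_ck < 1e-6 and err_inv < 1e-6)
```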
The role of the resolvent matrix is the following. Recall that, from Lemma 2.2.5, the map $\Phi_{t_0}$ defined by
\[
\Phi_{t_0} : S_0 \to \mathbb{K}^n, \qquad x \mapsto x(t_0),
\]
is a $\mathbb{K}$-linear isomorphism from the space $S_0$ to the space $\mathbb{K}^n$. Then $R$ is a map from $\mathbb{K}^n$ to $\mathbb{K}^n$,
\[
R(t, t_0) : \mathbb{K}^n \to \mathbb{K}^n, \qquad v \mapsto R(t, t_0)v = w,
\]
such that $R(t, t_0) = \Phi_t \circ \Phi_{t_0}^{-1}$, i.e.,
\[
\left( R(t, t_0)v = w \right) \iff \left( \exists x \in S_0,\ w = x(t),\ v = x(t_0) \right).
\]
Since $\Phi_t$ and $\Phi_{t_0}$ are $\mathbb{K}$-linear isomorphisms, $R$ is a $\mathbb{K}$-linear isomorphism on $\mathbb{K}^n$. Thus $R(t, t_0) \in \mathcal{M}_n(\mathbb{K})$ and is invertible.

Proposition 2.2.12. $R(t, t_0)$ is the only solution in $\mathcal{M}_n(\mathbb{K})$ of the initial value problem
\[
\frac{d}{dt} M(t) = A(t)M(t), \qquad M(t_0) = I,
\]
with $M(t) \in \mathcal{M}_n(\mathbb{K})$.
Proof. Since $d(R(t, t_0)v)/dt = A(t)R(t, t_0)v$,
\[
\left( \frac{d}{dt} R(t, t_0) \right) v = \left( A(t)R(t, t_0) \right) v,
\]
for all $v \in \mathbb{R}^n$. Therefore, $R(t, t_0)$ is a solution to $M' = A(t)M$. But, by Theorem 2.1.1, we know the solution to the associated IVP to be unique, hence the result.

From this, the following theorem follows immediately.

Theorem 2.2.13. The solution to the IVP consisting of the linear homogeneous nonautonomous system (2.2) with initial condition $x(t_0) = x_0$ is given by
\[
\phi(t) = R(t, t_0)x_0.
\]
2.2.4 Wronskian

Definition 2.2.14. The Wronskian of a system $x_1, \ldots, x_n$ of solutions to (2.2) is given by
\[
W(t) = \det(x_1(t), \ldots, x_n(t)).
\]

Let $v_i = x_i(t_0)$. Then we have $x_i(t) = R(t, t_0)v_i$, and it follows that
\[
W(t) = \det(R(t, t_0)v_1, \ldots, R(t, t_0)v_n) = \det R(t, t_0) \det(v_1, \ldots, v_n).
\]
The following formulae hold:
\[
\rho(t, t_0) := \det R(t, t_0) = \exp\left( \int_{t_0}^t \operatorname{tr} A(s)\, ds \right), \tag{2.5a}
\]
\[
W(t) = \exp\left( \int_{t_0}^t \operatorname{tr} A(s)\, ds \right) \det(v_1, \ldots, v_n). \tag{2.5b}
\]
2.2.5 Autonomous linear systems

At this point, we know that solutions to (2.2) take the form $\phi(t) = R(t, t_0)x_0$, but this was obtained formally. We have no indication whatsoever as to the precise form of $R(t, t_0)$. Typically, finding $R(t, t_0)$ can be difficult, if not impossible. There are however cases where the resolvent can be explicitly computed. One such case is that of autonomous linear systems, which take the form
\[
x'(t) = Ax(t), \tag{2.6}
\]
that is, where $A(t) \equiv A$. Our objective here is to establish the following result.
Lemma 2.2.15. If $A(t) \equiv A$, then $R(t, t_0) = e^{(t - t_0)A}$ for all $t, t_0 \in I$.

This result is deduced easily as a corollary to another result developed below, namely Theorem 2.2.16. Note that in Lemma 2.2.15, the notation $e^{(t - t_0)A}$ involves the notion of the exponential of a matrix, which is detailed in Appendix A.10.

Because the reasoning used in constructing solutions to (2.6) is fairly straightforward, we now detail this derivation. Using the intuition from one-dimensional linear equations, we seek a $\lambda \in \mathbb{K}$ such that $\phi(t) = e^{\lambda t}v$ is a solution to (2.6), with $v \in \mathbb{K}^n \setminus \{0\}$. We have $\phi' = \lambda e^{\lambda t}v$, and thus $\phi$ is a solution if, and only if,
\[
\lambda e^{\lambda t} v = A e^{\lambda t} v = e^{\lambda t} A v
\iff \lambda v = A v
\iff (A - \lambda I)v = 0 \quad \text{(with $I$ the identity matrix)}.
\]
As $v = 0$ must not be the only solution, $A - \lambda I$ must not be invertible, and so
\[
\phi \text{ is a solution} \iff \det(A - \lambda I) = 0,
\]
i.e., $\lambda$ is an eigenvalue of $A$.

In the simple case where $A$ is diagonalizable, there exists a basis $(v_1, \ldots, v_n)$ of $\mathbb{K}^n$, with $v_1, \ldots, v_n$ the eigenvectors of $A$ corresponding to the eigenvalues $\lambda_1, \ldots, \lambda_n$. We then obtain $n$ linearly independent solutions $\phi_i(t) = e^{\lambda_i (t - t_0)}v_i$, $i = 1, \ldots, n$. The general solution is given by
\[
\phi(t) = \left( e^{\lambda_1 (t - t_0)} x_{01}, \ldots, e^{\lambda_n (t - t_0)} x_{0n} \right),
\]
where $x_{0i}$ is the $i$th component of $x_0$, $i = 1, \ldots, n$. In the general case, we need the notion of matrix exponentials. Defining the exponential of the matrix $A$ as
\[
e^A = \sum_{k=0}^\infty \frac{A^k}{k!}
\]
(see Appendix A.10), we have the following result.
Theorem 2.2.16. The global solution on $\mathbb{K}$ of (2.6) such that $\phi(t_0) = x_0$ is given by
\[
\phi(t) = e^{(t - t_0)A} x_0.
\]

Proof. Let $\phi = e^{(t - t_0)A} x_0$. Then $\phi(t_0) = e^{0 A} x_0 = I x_0 = x_0$. Also,
\[
\phi(t) = \left( \sum_{n=0}^\infty \frac{1}{n!} (t - t_0)^n A^n \right) x_0 = \sum_{n=0}^\infty \frac{1}{n!} (t - t_0)^n A^n x_0,
\]
so $\phi$ is a power series with radius of convergence $R = \infty$. Therefore, $\phi$ is differentiable on $\mathbb{R}$ and
\[
\begin{aligned}
\phi'(t) &= \sum_{n=1}^\infty \frac{1}{n!} n (t - t_0)^{n-1} A^n x_0
= \sum_{n=0}^\infty \frac{n+1}{(n+1)!} (t - t_0)^n A^{n+1} x_0 \\
&= \sum_{n=0}^\infty \frac{1}{n!} (t - t_0)^n A^{n+1} x_0
= A \left( \sum_{n=0}^\infty \frac{1}{n!} (t - t_0)^n A^n x_0 \right)
= A \phi(t),
\end{aligned}
\]
so $\phi$ is a solution of (2.6). Since (2.6) is linear, solutions are unique and global.
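The power-series definition of $e^{(t-t_0)A}$ used in this proof can be implemented directly and compared against a numerical integration of $x' = Ax$. The matrix $A$ and data below are our illustrative choices.

```python
import numpy as np

# Illustration of Theorem 2.2.16: phi(t) = e^{(t-t0)A} x0 solves x' = Ax.
# The exponential is computed from its defining (truncated) power series and
# compared against direct RK4 integration. A and x0 are illustrative choices.

def expm_series(M, terms=40):
    """Truncated power series e^M = sum_k M^k / k!."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, -1.0])
t0, t = 0.0, 1.0

phi_series = expm_series((t - t0) * A) @ x0

# RK4 integration of x' = Ax for comparison
n = 4000
h, x = (t - t0) / n, x0.copy()
for _ in range(n):
    k1 = A @ x
    k2 = A @ (x + h/2 * k1)
    k3 = A @ (x + h/2 * k2)
    k4 = A @ (x + h * k3)
    x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

print(np.max(np.abs(phi_series - x)) < 1e-8)
```

In practice one would not truncate the series by hand (scaling-and-squaring methods are preferred); the direct series suffices here because $\|(t-t_0)A\|$ is small.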
The problem is now to evaluate the matrix $e^{tA}$. We have seen that in the case where $A$ is diagonalizable, solutions take the form
\[
\phi(t) = \left( e^{\lambda_1 (t - t_0)} x_{01}, \ldots, e^{\lambda_n (t - t_0)} x_{0n} \right),
\]
which implies that, in this case, the matrix $R(t, t_0)$ takes the form
\[
R(t, t_0) =
\begin{pmatrix}
e^{\lambda_1 (t - t_0)} & 0 & \cdots & 0 \\
0 & e^{\lambda_2 (t - t_0)} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & e^{\lambda_n (t - t_0)}
\end{pmatrix}.
\]
In the general case, we need the notion of generalized eigenvectors.
Definition 2.2.17 (Generalized eigenvectors). Let $\lambda$ be an eigenvalue of the $n \times n$ matrix $A$, with multiplicity $m \le n$. Then, for $k = 1, \ldots, m$, any nonzero solution $v$ of
\[
(A - \lambda I)^k v = 0
\]
is called a generalized eigenvector of $A$.

Theorem 2.2.18. Let $A$ be a real $n \times n$ matrix with real eigenvalues $\lambda_1, \ldots, \lambda_n$ repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors for $\mathbb{R}^n$. And if $v_1, \ldots, v_n$ is any basis of generalized eigenvectors for $\mathbb{R}^n$, the matrix $P = [v_1 \cdots v_n]$ is invertible,
\[
A = D + N,
\]
where
\[
P^{-1} D P = \operatorname{diag}(\lambda_j),
\]
the matrix $N = A - D$ is nilpotent of order $k \le n$, and $D$ and $N$ commute.
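The point of this decomposition is that, since $D$ and $N$ commute, the exponential series for $e^{tN}$ terminates, so $e^{tA}$ can be computed by a finite sum. The worked $2 \times 2$ example below is ours, not from the notes.

```latex
% Since D and N commute, e^{tA} = e^{tD} e^{tN}, and the series for e^{tN}
% terminates because N is nilpotent of order k:
e^{tA} = e^{tD} e^{tN}
       = e^{tD}\left( I + tN + \frac{t^2}{2!}N^2 + \cdots
                        + \frac{t^{k-1}}{(k-1)!}N^{k-1} \right).

% Worked example: a single Jordan block. Here D = \lambda I,
% N = \begin{psmallmatrix} 0 & 1 \\ 0 & 0 \end{psmallmatrix}, N^2 = 0, so
A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
\quad\Longrightarrow\quad
e^{tA} = e^{\lambda t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
```

The factor $t e^{\lambda t}$ in the off-diagonal entry is the origin of the polynomial-times-exponential terms in solutions of systems with repeated eigenvalues.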
2.3 Affine systems

We consider the general (affine) problem (2.1), which we restate here for convenience. Let $x \in \mathbb{R}^n$, $A : I \to L(E)$ and $B : I \to E$, where $I \subset \mathbb{R}$ and $E$ is a normed vector space; we consider the system
\[
x'(t) = A(t)x(t) + B(t). \tag{2.1}
\]
2.3.1 The space of solutions

The first problem that we are faced with when considering system (2.1) is that the set of solutions does not constitute a vector space; in particular, the superposition principle does not hold. However, we have the following result.

Proposition 2.3.1. Let $x_1, x_2$ be two solutions of (2.1). Then $x_1 - x_2$ is a solution of the associated homogeneous equation (2.2).

Proof. Since $x_1$ and $x_2$ are solutions of (2.1),
\[
x_1' = A(t)x_1 + B(t), \qquad x_2' = A(t)x_2 + B(t).
\]
Therefore
\[
\frac{d}{dt}(x_1 - x_2) = A(t)(x_1 - x_2).
\]

Theorem 2.3.2. The global solutions of (2.1) that are defined on $I$ form an $n$-dimensional affine subspace of the vector space of maps from $I$ to $\mathbb{K}^n$.

Theorem 2.3.3. Let $V$ be the vector space over $\mathbb{R}$ of solutions to the linear system $x' = A(t)x$. If $\psi$ is a particular solution of the affine system (2.1), then the set of all solutions of (2.1) is precisely
\[
\{ \psi + \phi : \phi \in V \}.
\]

Practical rules:
1. To obtain all solutions of (2.1), all solutions of (2.2) must be added to a particular solution of (2.1).
2. To obtain all solutions of (2.2), it is sufficient to know a basis of $S_0$. Such a basis is called a fundamental system of solutions of (2.2).
2.3.2 Construction of solutions

We have the following variation of constants formula.

Theorem 2.3.4. Let $R(t, t_0)$ be the resolvent of the homogeneous equation $x' = A(t)x$ associated to (2.1). Then the solution $x$ of (2.1) through $(t_0, x_0)$ is given by
\[
x(t) = R(t, t_0)x_0 + \int_{t_0}^t R(t, s)B(s)\, ds. \tag{2.7}
\]

Proof. Let $R(t, t_0)$ be the resolvent of $x' = A(t)x$. Any solution of the latter equation is given by
\[
x(t) = R(t, t_0)v, \qquad v \in \mathbb{R}^n.
\]
Let us now seek a particular solution to (2.1) of the form $x(t) = R(t, t_0)v(t)$, i.e., using a variation of constants approach. Taking the derivative of this expression of $x$, we have
\[
x'(t) = \frac{d}{dt}\left[ R(t, t_0) \right] v(t) + R(t, t_0)v'(t) = A(t)R(t, t_0)v(t) + R(t, t_0)v'(t).
\]
Thus $x$ is a solution to (2.1) if
\[
\begin{aligned}
A(t)R(t, t_0)v(t) + R(t, t_0)v'(t) &= A(t)R(t, t_0)v(t) + B(t) \\
\iff R(t, t_0)v'(t) &= B(t) \\
\iff v'(t) &= R(t_0, t)B(t),
\end{aligned}
\]
since $R(t, s)^{-1} = R(s, t)$. Therefore, $v(t) = \int_{t_0}^t R(t_0, s)B(s)\, ds$. A particular solution is given by
\[
x(t) = R(t, t_0) \int_{t_0}^t R(t_0, s)B(s)\, ds
= \int_{t_0}^t R(t, t_0)R(t_0, s)B(s)\, ds
= \int_{t_0}^t R(t, s)B(s)\, ds.
\]
2.3.3 Affine systems with constant coefficients

We consider the affine equation (2.1), but with the matrix $A(t) \equiv A$.

Theorem 2.3.5. The general solution to the IVP
\[
x'(t) = Ax(t) + B(t), \qquad x(t_0) = x_0, \tag{2.8}
\]
is given by
\[
x(t) = e^{(t - t_0)A} x_0 + \int_{t_0}^t e^{(t - s)A} B(s)\, ds. \tag{2.9}
\]

Proof. Use Lemma 2.2.15 and the variation of constants formula (2.7).
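Formula (2.9) can be tested numerically: evaluate the right-hand side with a series matrix exponential and trapezoid quadrature, and compare against direct integration of the ODE. The matrices, forcing term, and data below are illustrative choices of ours.

```python
import numpy as np

# Numerical check of formula (2.9) on x' = Ax + B(t): the formula, evaluated
# by quadrature, is compared against direct RK4 integration of the ODE.

def expm_series(M, terms=40):
    """Truncated power series e^M = sum_k M^k / k!."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = lambda t: np.array([np.sin(t), 1.0])
x0, t0, t1 = np.array([1.0, 0.0]), 0.0, 2.0

# Right-hand side of (2.9): e^{(t1-t0)A} x0 + int_{t0}^{t1} e^{(t1-s)A} B(s) ds
m = 4000
s = np.linspace(t0, t1, m + 1)
vals = np.array([expm_series((t1 - si) * A) @ B(si) for si in s])
hs = (t1 - t0) / m
integral = hs * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)  # trapezoid
x_formula = expm_series((t1 - t0) * A) @ x0 + integral

# Direct RK4 integration of the ODE
f = lambda t, x: A @ x + B(t)
n = 4000
h, x, t = (t1 - t0) / n, x0.copy(), t0
for _ in range(n):
    k1 = f(t, x); k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2); k4 = f(t + h, x + h * k3)
    x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

print(np.max(np.abs(x_formula - x)) < 1e-4)
```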
2.4 Systems with periodic coefficients

2.4.1 Linear systems: Floquet theory

We consider the linear system (2.2) in the following case,
\[
x' = A(t)x, \qquad A(t + \omega) = A(t) \ \ \forall t, \tag{2.10}
\]
with the entries of $A(t)$ continuous on $\mathbb{R}$.

Definition 2.4.1 (Monodromy operator). Associated to system (2.10) is the resolvent $R(t, s)$. For all $s \in \mathbb{R}$, the operator
\[
C(s) := R(s + \omega, s)
\]
is called the monodromy operator.

Theorem 2.4.2. If $X(t)$ is a fundamental matrix for (2.10), then there exists a nonsingular constant matrix $V$ such that, for all $t$,
\[
X(t + \omega) = X(t)V.
\]
This matrix takes the form
\[
V = X^{-1}(0)X(\omega),
\]
and is called the monodromy matrix.

Proof. Since $X$ is a fundamental matrix solution, there holds that $X'(t) = A(t)X(t)$ for all $t$. Therefore $X'(t + \omega) = A(t + \omega)X(t + \omega)$, and by periodicity of $A(t)$, $X'(t + \omega) = A(t)X(t + \omega)$, which implies that $X(t + \omega)$ is a fundamental matrix of (2.10). As a consequence, by Theorem 2.2.9, there exists a matrix $V$ such that $X(t + \omega) = X(t)V$. Since at $t = 0$, $X(\omega) = X(0)V$, it follows that $V = X^{-1}(0)X(\omega)$.
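In practice the monodromy matrix is obtained by integrating the principal fundamental matrix over one period; its eigenvalues are the characteristic multipliers studied below. The sketch uses an illustrative diagonal $2\pi$-periodic system of our choosing, where each multiplier is known in closed form: for $x_i' = (a_i + \text{trig}(t))x_i$ the multiplier is $e^{2\pi a_i}$, since the trigonometric term integrates to zero over a period.

```python
import numpy as np

# Monodromy matrix of Theorem 2.4.2 for an illustrative diagonal 2*pi-periodic
# system. With X(0) = I (principal fundamental matrix), V = X(omega).

def A(t):
    return np.array([[-0.5 + np.cos(t), 0.0], [0.0, 0.2 + np.sin(t)]])

omega = 2 * np.pi
n = 20000
h, X, t = omega / n, np.eye(2), 0.0      # X(0) = I
for _ in range(n):
    k1 = A(t) @ X
    k2 = A(t + h/2) @ (X + h/2 * k1)
    k3 = A(t + h/2) @ (X + h/2 * k2)
    k4 = A(t + h) @ (X + h * k3)
    X = X + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h

V = X                                     # V = X(0)^{-1} X(omega) = X(omega)
multipliers = np.sort(np.linalg.eigvals(V).real)
expected = np.sort([np.exp(2 * np.pi * -0.5), np.exp(2 * np.pi * 0.2)])
print(np.max(np.abs(multipliers - expected)) < 1e-6)
```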
Theorem 2.4.3 (Floquet's theorem, complex case). Any fundamental matrix solution $\Phi$ of (2.10) takes the form
\[
\Phi(t) = P(t)e^{tB}, \tag{2.11}
\]
where $P(t)$ and $B$ are $n \times n$ complex matrices such that
i) $P(t)$ is invertible, continuous, and periodic of period $\omega$ in $t$,
ii) $B$ is a constant matrix such that $e^{\omega B} = \Phi^{-1}(0)\Phi(\omega)$.

Proof. Let $\Phi$ be a fundamental matrix solution. From Theorem 2.4.2, the monodromy matrix $V = \Phi^{-1}(0)\Phi(\omega)$ is such that $\Phi(t + \omega) = \Phi(t)V$. By Theorem A.11.1, there exists $B \in \mathcal{M}_n(\mathbb{C})$ such that $e^{\omega B} = V$. Let $P(t) = \Phi(t)e^{-tB}$, so $\Phi(t) = P(t)e^{tB}$. It is clear that $P$ is continuous and nonsingular. Also,
\[
P(t + \omega) = \Phi(t + \omega)e^{-(t + \omega)B} = \Phi(t)V e^{-\omega B} e^{-tB} = \Phi(t)e^{-tB} = P(t),
\]
proving that $P$ is $\omega$-periodic.
Theorem 2.4.4 (Floquet's theorem, real case). Any fundamental matrix solution $\Phi$ of (2.10) takes the form
\[
\Phi(t) = P(t)e^{tB}, \tag{2.12}
\]
where $P(t)$ and $B$ are $n \times n$ real matrices such that
i) $P(t)$ is invertible, continuous, and periodic of period $2\omega$ in $t$,
ii) $B$ is a constant matrix such that $e^{2\omega B} = \left( \Phi^{-1}(0)\Phi(\omega) \right)^2$.

Proof. The proof works similarly to the complex case, except that here, Theorem A.11.1 implies that there exists $B \in \mathcal{M}_n(\mathbb{R})$ such that $e^{2\omega B} = V^2$. Let $P(t) = \Phi(t)e^{-tB}$, so $\Phi(t) = P(t)e^{tB}$. It is clear that $P$ is continuous and nonsingular. Also,
\[
P(t + 2\omega) = \Phi(t + 2\omega)e^{-(t + 2\omega)B} = \Phi(t + \omega)V e^{-(2\omega + t)B} = \Phi(t)V^2 e^{-(2\omega + t)B} = \Phi(t)e^{2\omega B}e^{-2\omega B}e^{-tB} = \Phi(t)e^{-tB} = P(t),
\]
proving that $P$ is $2\omega$-periodic.
Theorem 2.4.5 (Floquets theorem, [4]). If (t) is a fundamental matrix solution of the
-periodic system (2.10), then, for all t R,
(t +) = (t)
1
(0)().
In addition, for each possibly complex matrix B such that
e
B
=
1
(0)(),
there is a possibly complex -periodic matrix function t P(t) such that (t) = P(t)e
tB
for
all t R. Also, there is a real matrix R and a real 2-periodic matrix function t Q(t)
such that (t) = Q(t)e
tR
for all t R.
50
Fund. Theory ODE Lecture Notes J. Arino
2. Linear systems
Denition 2.4.6 (Floquet normal form). The representation (t) = P(t)e
tR
is called a
Floquet normal form.
In the case where $\Phi(t) = P(t)e^{tB}$, differentiating $\Phi' = A(t)\Phi$ gives $dP(t)/dt = A(t)P(t) - P(t)B$. Therefore, letting $x = P(t)z$, i.e., $z = P^{-1}(t)x$, we obtain
\[
x' = \frac{dP}{dt} z + P(t)z' = \left( A(t)P(t) - P(t)B \right) z + P(t)z'.
\]
Since $x' = A(t)x = A(t)P(t)z$, this forces $P(t)z' = P(t)Bz$, i.e.,
\[
z' = Bz.
\]
Thus the periodic change of variables $x = P(t)z$ reduces the periodic system (2.10) to a constant coefficient linear system.
Definition 2.4.7 (Characteristic multipliers). The eigenvalues $\rho_1, \ldots, \rho_n$ of a monodromy matrix of (2.10) are called the characteristic multipliers of equation (2.10).

Definition 2.4.8 (Characteristic exponents). Numbers $\lambda$ such that $e^{\lambda \omega}$ is a characteristic multiplier of (2.10) are called the Floquet exponents of (2.10).

Theorem 2.4.9 (Spectral mapping theorem). Let $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$. If $C \in GL_n(\mathbb{K})$ is written $C = e^B$, then the eigenvalues of $C$ coincide with the exponentials of the eigenvalues of $B$, with the same multiplicities.

Definition 2.4.10 (Characteristic exponents). The eigenvalues $\lambda_1, \ldots, \lambda_n$ of a matrix $B$ as in Theorem 2.4.4 are called the characteristic exponents of equation (2.10). The numbers $\rho_1 = \exp(2\omega\lambda_1), \ldots, \rho_n = \exp(2\omega\lambda_n)$, eigenvalues of the matrix $\left( \Phi^{-1}(0)\Phi(\omega) \right)^2$, are called the (Floquet) multipliers of (2.10).
Proposition 2.4.11. Suppose that $X, Y$ are fundamental matrices for (2.10) and that $X(t + \omega) = X(t)V$, $Y(t + \omega) = Y(t)U$. Then the monodromy matrices $U$ and $V$ are similar.

Proof. Suppose that $X(t + \omega) = X(t)V$ and $Y(t + \omega) = Y(t)U$. By Theorem 2.2.9, since $X$ and $Y$ are fundamental matrices for (2.10), there exists an invertible matrix $C$ such that $X(t) = Y(t)C$ for all $t$. Thus, in particular,
\[
X(t + \omega) = Y(t + \omega)C = Y(t)UC = X(t)C^{-1}UC,
\]
since $Y(t) = X(t)C^{-1}$. It follows that $V = C^{-1}UC$, so $U$ and $V$ are similar.

From this Proposition, it follows that all monodromy matrices share the same spectrum.

Corollary 2.4.12. All solutions of (2.10) tend to $0$ as $t \to \infty$ if and only if $|\rho_j| < 1$ for all $j$ (or, equivalently, $\Re(\lambda_j) < 0$ for all $j$).

Let $p$ be an eigenvector of $\left( \Phi^{-1}(0)\Phi(\omega) \right)^2$ associated with a multiplier $\rho$. Then the solution $\phi(t) = \Phi(t)p$ of (2.10) satisfies the condition $\phi(t + 2\omega) = \rho\, \phi(t)$. This is the origin of the term "multiplier".
2.4.2 Affine systems: the Fredholm alternative

We discuss here an extension of a theorem that was proved implicitly in Exercise 4, Assignment 2. Let us start by stating the result in question. We consider here the system
\[
x' = A(t)x + b(t), \tag{2.13}
\]
where $x \in \mathbb{R}^n$, $A \in \mathcal{M}_n(\mathbb{R})$ and $b \in \mathbb{R}^n$, with $A$ and $b$ continuous and $\omega$-periodic.

Theorem 2.4.13. If the homogeneous equation
\[
x' = A(t)x \tag{2.14}
\]
associated to (2.13) has no nonzero solution of period $\omega$, then (2.13) has, for each such forcing term $b$, a unique $\omega$-periodic solution.

The Fredholm alternative concerns the case where there exists a nonzero periodic solution of (2.14). We give some needed results before going into the details. Consider (2.14). Associated to this system is the so-called adjoint system, defined by the differential equation
\[
y' = -A^T(t)y. \tag{2.15}
\]
Proposition 2.4.14. The adjoint equation has the following properties.
i) Let $R(t, t_0)$ be the resolvent matrix of (2.14). Then the resolvent matrix of (2.15) is $R^T(t_0, t)$.
ii) There are as many independent $\omega$-periodic solutions of (2.14) as there are of (2.15).
iii) If $x$ is a solution of (2.14) and $y$ is a solution of (2.15), then the scalar product $\langle x(t), y(t) \rangle$ is constant.

Proof. i) We know that $\frac{\partial}{\partial s} R(t, s) = -R(t, s)A(s)$. Therefore,
\[
\frac{\partial}{\partial s} R^T(t, s) = -A^T(s) R^T(t, s).
\]
As $R(s, s) = I$, the first point is proved.

ii) The solution of (2.15) with initial value $y_0$ is $R^T(0, t)y_0$. The initial value of an $\omega$-periodic solution of (2.15) is a $y_0$ such that
\[
R^T(0, \omega)y_0 = y_0.
\]
This can also be written as
\[
\left[ R^T(0, \omega) - I \right] y_0 = 0,
\]
or, taking the transpose,
\[
y_0^T \left[ R(0, \omega) - I \right] = 0.
\]
Now, since $R(0, \omega) = R^{-1}(\omega, 0)$, multiplying on the right by the invertible matrix $R(\omega, 0)$ shows that
\[
y_0^T \left[ R(0, \omega) - I \right] = 0 \iff y_0^T \left[ I - R(\omega, 0) \right] = 0 \iff y_0^T \left[ R(\omega, 0) - I \right] = 0.
\]
The latter equation has as many independent solutions as $\left[ R(\omega, 0) - I \right] x_0 = 0$, since a matrix and its transpose have the same rank; the number of these depends on the rank of $R(\omega, 0) - I$.

iii) Recall that for differentiable functions $a, b$,
\[
\frac{d}{dt}\langle a(t), b(t) \rangle = \left\langle \frac{d}{dt} a(t), b(t) \right\rangle + \left\langle a(t), \frac{d}{dt} b(t) \right\rangle.
\]
Thus
\[
\frac{d}{dt}\langle x(t), y(t) \rangle = \langle A(t)x(t), y(t) \rangle + \langle x(t), -A^T(t)y(t) \rangle = 0.
\]
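Property iii) is a convenient numerical invariant: integrating a solution of (2.14) and a solution of its adjoint side by side, the inner product should not drift. The matrix $A(t)$ and initial data below are illustrative choices of ours.

```python
import numpy as np

# Numerical check of Proposition 2.4.14 iii): for x' = A(t)x and the adjoint
# y' = -A(t)^T y, the scalar product <x(t), y(t)> stays constant in time.

def A(t):
    return np.array([[np.cos(t), 1.0], [-2.0, np.sin(t)]])

def rk4_step(f, t, v, h):
    k1 = f(t, v); k2 = f(t + h/2, v + h/2 * k1)
    k3 = f(t + h/2, v + h/2 * k2); k4 = f(t + h, v + h * k3)
    return v + h/6 * (k1 + 2*k2 + 2*k3 + k4)

fx = lambda t, x: A(t) @ x          # equation (2.14)
fy = lambda t, y: -A(t).T @ y       # adjoint equation (2.15)

x, y = np.array([1.0, 2.0]), np.array([-1.0, 0.5])
c0 = x @ y                          # initial inner product
t, h = 0.0, 0.001
for _ in range(3000):
    x = rk4_step(fx, t, x, h)
    y = rk4_step(fy, t, y, h)
    t += h

print(abs(x @ y - c0) < 1e-8)
```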
Before we carry on to the actual Fredholm alternative in the context of ordinary differential equations, let us consider the problem in a more general setting. Let $H$ be a Hilbert space. If $A \in L(H, H)$, the adjoint operator $A^*$ of $A$ is the element of $L(H, H)$ such that
\[
\forall u, v \in H, \quad \langle Au, v \rangle = \langle u, A^* v \rangle.
\]
Let $\operatorname{Img}(A)$ be the image of $A$ and $\operatorname{Ker}(A^*)$ be the kernel of $A^*$. Then we have $H = \operatorname{Img}(A) \oplus \operatorname{Ker}(A^*)$.

Theorem 2.4.15 (Fredholm alternative). For the equation $Af = g$ to have a solution, it is necessary and sufficient that $g$ be orthogonal to every element of $\operatorname{Ker}(A^*)$.

We now use this very general setting to prove the following theorem, in the context of ODEs.

Theorem 2.4.16 (Fredholm alternative for ODEs). Consider (2.13) with $A$ and $b$ continuous and $\omega$-periodic. Suppose that the homogeneous equation (2.14) has $p$ independent solutions of period $\omega$. Then the adjoint equation (2.15) also has $p$ independent solutions of period $\omega$, which we denote $y_1, \ldots, y_p$. Then
i) if
\[
\int_0^\omega \langle y_k(t), b(t) \rangle\, dt = 0, \qquad k = 1, \ldots, p, \tag{2.16}
\]
then there exist $p$ independent $\omega$-periodic solutions of (2.13), and,
ii) if this condition is not fulfilled, (2.13) has no nontrivial solution of period $\omega$.
Proof. First, remark that $x_0$ is the initial condition of an $\omega$-periodic solution of (2.13) if, and only if,
\[
\left[ R(0, \omega) - I \right] x_0 = \int_0^\omega R(0, s)b(s)\, ds. \tag{2.17}
\]
Indeed, by Theorem 2.3.4, the solution of (2.13) through $(0, x_0)$ is given by
\[
x(t) = R(t, 0)x_0 + \int_0^t R(t, s)b(s)\, ds.
\]
Hence, at time $\omega$,
\[
x(\omega) = R(\omega, 0)x_0 + \int_0^\omega R(\omega, s)b(s)\, ds.
\]
If $x_0$ is the initial condition of an $\omega$-periodic solution, then $x(\omega) = x_0$; multiplying this relation on the left by $R(0, \omega)$ gives (2.17).

On the other hand, $y_k(0)$ is the initial condition of an $\omega$-periodic solution $y_k$ of (2.15) if, and only if,
\[
\left[ R^T(0, \omega) - I \right] y_k(0) = 0.
\]
Let $C = R(0, \omega) - I$. We have that $\mathbb{R}^n = \operatorname{Img}(C) \oplus \operatorname{Ker}(C^T)$. We now use the Fredholm alternative in this context. There exists $x_0$ such that
\[
C x_0 = \int_0^\omega R(0, s)b(s)\, ds
\]
if, and only if,
\[
\int_0^\omega R(0, s)b(s)\, ds \in \operatorname{Img}(C).
\]
Indeed, from the Fredholm alternative, setting $f = x_0$ and $g = \int_0^\omega R(0, s)b(s)\, ds$, we have that $Cf = g$ has a solution if, and only if, $g$ is orthogonal to every element of $\operatorname{Ker}(C^T)$, i.e., since $\mathbb{R}^n = \operatorname{Img}(C) \oplus \operatorname{Ker}(C^T)$, if, and only if, $g \in \operatorname{Img}(C)$.

Now, $\{y_1(0), \ldots, y_p(0)\}$ is a basis of $\operatorname{Ker}(C^T)$. It follows that there exists an $\omega$-periodic solution of (2.13) if, and only if, for all $k = 1, \ldots, p$,
\[
\begin{aligned}
&\left\langle \int_0^\omega R(0, s)b(s)\, ds,\ y_k(0) \right\rangle = 0 \\
\iff\ & \int_0^\omega \langle R(0, s)b(s), y_k(0) \rangle\, ds = 0 \\
\iff\ & \int_0^\omega \langle b(s), R^T(0, s)y_k(0) \rangle\, ds = 0 \\
\iff\ & \int_0^\omega \langle b(s), y_k(s) \rangle\, ds = 0.
\end{aligned}
\]
If these relations are satisfied, the set of vectors $v$ such that
\[
C v = \int_0^\omega R(0, s)b(s)\, ds
\]
is of the form $v_0 + \operatorname{Ker}(C)$, where $v_0$ is one of these vectors; since $\dim \operatorname{Ker}(C) = \dim \operatorname{Ker}(C^T) = p$, there exist $p$ independent such vectors, which are the initial conditions of $p$ independent $\omega$-periodic solutions of (2.13).
Example. The equation
\[
x'' = f(t), \tag{2.18}
\]
where $f$ is $\omega$-periodic, has solutions of period $\omega$ if, and only if,
\[
\int_0^\omega f(s)\, ds = 0.
\]
Indeed, let $y = x'$. Then, differentiating $y$ and substituting into (2.18), we have $y' = f(t)$. Hence the system is
\[
\begin{pmatrix} x \\ y \end{pmatrix}' = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 0 \\ f(t) \end{pmatrix}.
\]
Hence,
\[
A^T = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
\]
and the adjoint equation $\psi' = -A^T \psi$ has the periodic solutions $(0, a)^T$, $a \in \mathbb{R}$. Condition (2.16) then reads $\int_0^\omega \langle (0, a)^T, (0, f(t))^T \rangle\, dt = a \int_0^\omega f(t)\, dt = 0$.
2.5 Further developments, bibliographical notes

2.5.1 A variation of constants formula for a nonlinear system with a linear component

The variation of constants formula given in Theorem 2.3.4 can be extended.

Theorem 2.5.1 (Variation of constants formula). Consider the IVP
\[
x' = A(t)x + g(t, x), \qquad x(t_0) = x_0, \tag{2.19}
\]
where $g : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$ is a smooth function, and let $R(t, t_0)$ be the resolvent associated to the homogeneous system $x' = A(t)x$, with $R$ defined on some interval $I \ni t_0$. Then the solution $\phi$ of (2.19) is given by
\[
\phi(t) = R(t, t_0)x_0 + \int_{t_0}^t R(t, s)g(s, \phi(s))\, ds, \tag{2.20}
\]
on some subinterval of $I$.
Proof. We proceed using a variation of constants approach. It is known that the general solution to the homogeneous equation $x' = A(t)x$ associated to (2.19) is given by $\phi(t) = R(t, t_0)x_0$. We seek a solution to (2.19) of the form $\phi(t) = R(t, t_0)v(t)$. We have
\[
\phi'(t) = \left( \frac{d}{dt} R(t, t_0) \right) v(t) + R(t, t_0)v'(t) = A(t)R(t, t_0)v(t) + R(t, t_0)v'(t),
\]
from Proposition 2.2.11. For $\phi$ to be a solution, it must satisfy the differential equation (2.19), and thus
\[
\begin{aligned}
\phi'(t) = A(t)\phi(t) + g(t, \phi(t))
&\iff A(t)R(t, t_0)v(t) + R(t, t_0)v'(t) = A(t)R(t, t_0)v(t) + g(t, \phi(t)) \\
&\iff R(t, t_0)v'(t) = g(t, \phi(t)) \\
&\iff v'(t) = R(t, t_0)^{-1}g(t, \phi(t)) \\
&\iff v'(t) = R(t_0, t)g(t, \phi(t)) \\
&\iff v(t) = \int_{t_0}^t R(t_0, s)g(s, \phi(s))\, ds + C,
\end{aligned}
\]
using Proposition 2.2.11 again. Therefore,
\[
\phi(t) = R(t, t_0) \left( \int_{t_0}^t R(t_0, s)g(s, \phi(s))\, ds + C \right).
\]
Evaluating this expression at $t = t_0$ gives $\phi(t_0) = C$, so $C = x_0$. Therefore,
\[
\phi(t) = R(t, t_0)x_0 + R(t, t_0) \int_{t_0}^t R(t_0, s)g(s, \phi(s))\, ds
= R(t, t_0)x_0 + \int_{t_0}^t R(t, t_0)R(t_0, s)g(s, \phi(s))\, ds
= R(t, t_0)x_0 + \int_{t_0}^t R(t, s)g(s, \phi(s))\, ds,
\]
from Proposition 2.2.11. Note that, since $\phi$ appears on both sides, (2.20) is an integral equation for $\phi$ rather than an explicit formula.
Chapter 3

Stability of linear systems

3.1 Stability at fixed points

We consider here the autonomous equation (not necessarily linear)
\[
x' = f(x). \tag{3.1}
\]
To emphasize the fact that we are dealing with flows, we write $x(t, x_0)$ for the solution to (3.1) at time $t$ satisfying the initial condition $x(0) = x_0$.

Definition 3.1.1 (Fixed point). A fixed point of (3.1) is a point $x^*$ such that $f(x^*) = 0$.

The relevance of this definition is evident: at a point such that $f(x^*) = 0$, we have $(x^*)' = f(x^*) = 0$, so the solution starting at $x = x^*$ is constant. Note also that this implies that $x(t) \equiv x^*$ is a solution defined on $\mathbb{R}$.
Definition 3.1.2 (Stable equilibrium point). The fixed point x* is (positively) stable if the following two conditions hold:
i) There exists r > 0 such that if |x_0 - x*| < r, then the solution x(t, x_0) is defined for all t ≥ 0. (This is automatically satisfied for flows.)
ii) For any ε > 0, there exists δ > 0 such that |x_0 - x*| < δ implies |x(t, x_0) - x*| < ε for all t ≥ 0.
Definition 3.1.3 (Asymptotically stable equilibrium point). If the equilibrium x* is (positively) stable and, additionally, there exists δ > 0 such that |x_0 - x*| < δ implies
lim_{t→∞} x(t, x_0) = x*,
then x* is (positively) asymptotically stable.
3.2 Affine systems with small coefficients
We consider here a linear system of the form
x' = Ax + b(t)x (3.2)
where A ∈ M_n(R) and the matrix-valued function b is continuous and small, i.e., lim_{t→∞} b(t) = 0.
Theorem 3.2.1. Suppose that all eigenvalues of A have negative real parts, and that b is continuous and such that lim_{t→∞} b(t) = 0. Then 0 is a globally asymptotically stable (g.a.s.) equilibrium of (3.2).
The proof comes from [2, p. 156-157].
Proof. For any given (t_0, x_0), t_0 > 0, we have, from [2, Th 2.1, p. 37] about the existence and uniqueness of the solutions to the linear equation x' = A(t)x + g(t), that the (unique) solution φ_t(x_0) satisfying the initial condition φ_{t_0}(x_0) = x_0 exists for all t ≥ t_0.
By the variation of constants formula, using b(t)x as the inhomogeneous term, we can express the solution by means of the equivalent integral equation, for t_0 ≤ t < ∞,
φ_t(x_0) = e^{(t-t_0)A} x_0 + ∫_{t_0}^t e^{(t-s)A} b(s) φ_s(x_0) ds. (3.3)
By the hypothesis on A, there exist constants K > 0 and α > 0 such that, for t_0 ≤ t < ∞,
‖R(t, t_0)‖ ≤ Ke^{-α(t-t_0)},
where R(t, t_0) = e^{(t-t_0)A} is the fundamental matrix of the homogeneous part of (3.2).
Since lim_{t→∞} b(t) = 0, given any ε > 0, there exists a number T ≥ t_0 such that ‖b(t)‖ < ε for t ≥ T. We now use the variation of constants formula (3.3) with the point (T, φ_T(x_0)) as initial condition. We have, for T ≤ t < ∞,
φ_t(x_0) = e^{(t-T)A} φ_T(x_0) + ∫_T^t e^{(t-s)A} b(s) φ_s(x_0) ds.
Thus, using ‖e^{(t-T)A}‖ ≤ Ke^{-α(t-T)} and ‖b(t)‖ < ε for t ≥ T, we obtain, for T ≤ t < ∞,
‖φ_t(x_0)‖ ≤ Ke^{-α(t-T)} ‖φ_T(x_0)‖ + Kε ∫_T^t e^{-α(t-s)} ‖φ_s(x_0)‖ ds.
Multiplying both sides of this inequality by e^{αt} and using Gronwall's inequality (Appendix A.7) with the function ‖φ_t(x_0)‖e^{αt}, we obtain, for T ≤ t < ∞,
‖φ_t(x_0)‖ ≤ K ‖φ_T(x_0)‖ e^{(Kε-α)(t-T)}. (3.4)
From this we conclude that if 0 < ε < α/K, the solution φ_t(x_0) will approach zero exponentially. This does not yet prove that the zero solution of (3.2) is stable. To do this, we compute a bound on ‖φ_T(x_0)‖. Returning to (3.3) and restricting t to the interval t_0 ≤ t ≤ T, we have
‖φ_t(x_0)‖ ≤ Ke^{-α(t-t_0)} ‖x_0‖ + KK_1 ∫_{t_0}^t e^{-α(t-s)} ‖φ_s(x_0)‖ ds,
where K_1 = max_{t_0 ≤ t ≤ T} ‖b(t)‖. Multiplying by e^{αt} and applying the Gronwall inequality, we obtain
‖φ_t(x_0)‖ ≤ K ‖x_0‖ e^{-α(t-t_0) + K_1K(t-t_0)} ≤ K ‖x_0‖ e^{K_1K(t-t_0)}, t_0 ≤ t ≤ T. (3.5)
Therefore,
‖φ_T(x_0)‖ ≤ K ‖x_0‖ e^{K_1K(T-t_0)}, t_0 ≤ T. (3.6)
Thus we can make ‖φ_T(x_0)‖ small by choosing ‖x_0‖ sufficiently small. This, together with (3.4), gives the stability.
Indeed, substituting (3.6) into (3.4) gives, for T ≤ t < ∞,
‖φ_t(x_0)‖ ≤ K^2 ‖x_0‖ e^{K_1K(T-t_0)} e^{(Kε-α)(t-T)}. (3.7)
Let then K_2 = max( Ke^{K_1K(T-t_0)}, K^2 e^{K_1K(T-t_0)} ). From (3.5) and (3.7) we have
‖φ_t(x_0)‖ ≤ K_2 ‖x_0‖ if t_0 ≤ t ≤ T,
‖φ_t(x_0)‖ ≤ K_2 ‖x_0‖ e^{(Kε-α)(t-T)} if T ≤ t < ∞. (3.8)
For a given matrix A, we can compute K and α; we next pick any 0 < ε < α/K and then T ≥ t_0 so that ‖b(t)‖ < ε for t ≥ T. We then compute K_1 and K_2. Now, given any η > 0, choose δ < η/K_2. Then from (3.8), if ‖x_0‖ < δ, then ‖φ_t(x_0)‖ < η for all t ≥ t_0, so that the zero solution is stable. From (3.8), it is also clear that the zero solution is globally asymptotically stable.
Corollary 3.2.2. Let all eigenvalues of A have negative real part, so that ‖e^{At}‖ ≤ Ke^{-αt} for some constants K > 0, α > 0 and all t ≥ 0. Let b(t) be continuous for 0 ≤ t < ∞ and suppose that there exists T > 0 such that ‖b(t)‖ < α/K for t ≥ T. Then the zero solution of (3.2) is globally asymptotically stable.
Theorem 3.2.3. Let all eigenvalues of A have negative real part, and let b(t) be continuous for 0 ≤ t < ∞ and such that ∫_0^∞ ‖b(s)‖ ds < ∞. Then the zero solution of (3.2) is globally asymptotically stable.
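Theorem 3.2.1 is easy to illustrate numerically. The sketch below (an illustration, not part of the original notes; it assumes SciPy is available) integrates a planar system x' = Ax + b(t)x where A has eigenvalues with negative real part and the hypothetical perturbation b(t) = e^{-t}/(1+t) · I tends to zero, and checks that the solution decays to the origin.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, 2.0],
              [0.0, -2.0]])        # eigenvalues -1, -2: negative real parts

def b(t):
    # perturbation matrix with b(t) -> 0 as t -> infinity (chosen for illustration)
    return np.exp(-t) / (1.0 + t) * np.eye(2)

def rhs(t, x):
    return (A + b(t)) @ x          # x' = Ax + b(t)x

x0 = np.array([5.0, -3.0])
sol = solve_ivp(rhs, (0.0, 30.0), x0, rtol=1e-9, atol=1e-12)

# the zero solution attracts: the norm at t = 30 is tiny compared to |x0|
final_norm = np.linalg.norm(sol.y[:, -1])
assert final_norm < 1e-8 * np.linalg.norm(x0)
print("final norm:", final_norm)
```

Since ∫_0^∞ ‖b(s)‖ ds < ∞ for this choice of b, the decay is also consistent with Theorem 3.2.3.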
Chapter 4
Linearization
We consider here the autonomous nonlinear system in R^n
x' = f(x). (4.1)
The object of this chapter is to show two results which link the behavior of (4.1) near a hyperbolic equilibrium point x* to the behavior of the linearized system
x' = Df(x*)(x - x*) (4.2)
about that same equilibrium.
4.1 Some linear stability theory
We now give some notions of linear stability theory, in the case of the autonomous linear system (2.6), repeated here for convenience:
x'(t) = Ax(t). (2.6)
We let w_j = u_j + iv_j be a generalized eigenvector of A corresponding to an eigenvalue λ_j = a_j + ib_j, with v_j = 0 if b_j = 0, and let
B = {u_1, . . . , u_k, u_{k+1}, v_{k+1}, . . . , u_m, v_m}
be a basis of R^n, with n = 2m - k.
Definition 4.1.1 (Stable, unstable and center subspaces). The stable, unstable and center subspaces of the linear system (2.6) are given, respectively, by
E^s = Span{u_j, v_j : a_j < 0},
E^u = Span{u_j, v_j : a_j > 0}
and
E^c = Span{u_j, v_j : a_j = 0}.
Definition 4.1.2. The mapping e^{At} : R^n → R^n is called the flow of the linear system (2.6).
The term flow is used since e^{At} describes the motion of points x_0 ∈ R^n along trajectories of (2.6).
Definition 4.1.3. If all eigenvalues of A have nonzero real part, that is, if E^c = {0}, then the flow e^{At} of system (2.6) is called a hyperbolic flow, and the system (2.6) is a hyperbolic linear system.
Definition 4.1.4. A subspace E ⊂ R^n is invariant with respect to the flow e^{At}, or invariant under the flow of (2.6), if e^{At}E ⊂ E for all t ∈ R.
Theorem 4.1.5. Let E be the generalized eigenspace of A associated to the eigenvalue λ. Then AE ⊂ E.
Theorem 4.1.6. Let A ∈ M_n(R). Then
R^n = E^s ⊕ E^u ⊕ E^c.
Furthermore, if the matrix A is the matrix of the linear autonomous system (2.6), then E^s, E^u and E^c are invariant under the flow of (2.6).
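As an illustration of Theorem 4.1.6 (a sketch, not part of the original notes; it assumes NumPy), the block below takes a hypothetical 3 × 3 matrix with one negative, one positive and one zero eigenvalue, builds E^s, E^u and E^c from the real and imaginary parts of the corresponding eigenvectors, and checks that the spanning vectors together form a basis of R^3, i.e., that R^3 = E^s ⊕ E^u ⊕ E^c.

```python
import numpy as np

A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, 2.0, 0.0],
              [ 0.0, 0.0, 0.0]])   # eigenvalues -1, 2 and 0

vals, vecs = np.linalg.eig(A)

def span_basis(mask):
    # real and imaginary parts of the selected eigenvectors span the subspace;
    # an SVD extracts a maximal linearly independent set
    cols = vecs[:, mask]
    parts = np.hstack([cols.real, cols.imag])
    u, s, _ = np.linalg.svd(parts)
    r = int(np.sum(s > 1e-10))
    return u[:, :r]

Es = span_basis(vals.real < -1e-10)          # stable subspace
Eu = span_basis(vals.real > 1e-10)           # unstable subspace
Ec = span_basis(np.abs(vals.real) <= 1e-10)  # center subspace

# direct sum decomposition: the concatenated bases span R^3
full = np.hstack([Es, Eu, Ec])
assert full.shape == (3, 3)
assert np.linalg.matrix_rank(full) == 3
print("R^3 = E^s (+) E^u (+) E^c verified")
```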
Definition 4.1.7. If all the eigenvalues of A have negative (resp. positive) real parts, then the origin is a sink (resp. source) for the linear system (2.6).
Theorem 4.1.8. The stable, center and unstable subspaces E^S, E^C and E^U, respectively, are invariant with respect to e^{At}; i.e., if x_0 ∈ E^S, y_0 ∈ E^C and z_0 ∈ E^U, then e^{At}x_0 ∈ E^S, e^{At}y_0 ∈ E^C and e^{At}z_0 ∈ E^U.
Definition 4.1.9 (Homeomorphism). Let X be a metric space and let A and B be subsets of X. A homeomorphism h : A → B of A onto B is a continuous one-to-one map of A onto B such that h^{-1} : B → A is continuous. The sets A and B are called homeomorphic or topologically equivalent if there is a homeomorphism of A onto B.
Definition 4.1.10 (Differentiable manifold). An n-dimensional differentiable manifold M (or a manifold of class C^k) is a connected metric space with an open covering {U_α} (i.e., M = ∪_α U_α) such that
i) for all α, U_α is homeomorphic to the open unit ball in R^n, B = {x ∈ R^n : |x| < 1}, i.e., for all α there exists a homeomorphism of U_α onto B, h_α : U_α → B,
ii) if U_α ∩ U_β ≠ ∅ and h_α : U_α → B, h_β : U_β → B are homeomorphisms, then h_α(U_α ∩ U_β) and h_β(U_α ∩ U_β) are subsets of R^n and the map
h = h_α ∘ h_β^{-1} : h_β(U_α ∩ U_β) → h_α(U_α ∩ U_β)
is differentiable (or of class C^k) and for all x ∈ h_β(U_α ∩ U_β), the determinant of the Jacobian, det Dh(x) ≠ 0.
Remark. The manifold M is analytic if the maps h = h_α ∘ h_β^{-1} are analytic.
Next, recall that x* is an equilibrium point of (4.1) if f(x*) = 0. An equilibrium point x* is hyperbolic if the Jacobian matrix Df of (4.1) evaluated at x*, denoted Df(x*), has no eigenvalues with zero real part. Also, recall that the solutions of (4.1) form a one-parameter group that defines the flow of the nonlinear differential equation (4.1). To be more precise, consider the IVP consisting of (4.1) and an initial condition x(t_0) = x_0. Let I(x_0) be the maximal interval of existence of the solution to the IVP. Let then φ : R × R^n → R^n be defined as follows: for x_0 ∈ R^n and t ∈ I(x_0), φ(t, x_0) = φ_t(x_0) is the solution of the IVP defined on its maximal interval of existence I(x_0).
Example. Consider the (linear) ordinary differential equation x' = ax, with a, x ∈ R. The solution is φ(t, x_0) = e^{at}x_0, and it satisfies the group property
φ(t + s, x_0) = e^{a(t+s)}x_0 = e^{at}(e^{as}x_0) = φ(t, e^{as}x_0) = φ(t, φ(s, x_0)).
For simplicity, and without loss of generality since both results are local, we assume henceforth that x* = 0, i.e., that a change of coordinates has been performed translating x* to the origin. We also assume that t_0 = 0.
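The group property in the example above is straightforward to confirm numerically; the short sketch below (an illustration, not from the original notes) does so for x' = ax with an arbitrary choice of a, t, s and x_0.

```python
import math

a = -0.7

def phi(t, x0):
    # flow of x' = a x: phi(t, x0) = e^{a t} x0
    return math.exp(a * t) * x0

t, s, x0 = 1.3, 2.1, 4.0
# group property: phi(t + s, x0) = phi(t, phi(s, x0))
assert abs(phi(t + s, x0) - phi(t, phi(s, x0))) < 1e-12
print("group property verified")
```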
4.2 The stable manifold theorem
Theorem 4.2.1 (Stable manifold theorem). Let E be an open subset of R^n containing the origin, let f ∈ C^1(E), and let φ_t be the flow of the nonlinear system (4.1). Suppose that f(0) = 0 and that Df(0) has k eigenvalues with negative real part and n - k eigenvalues with positive real part. Then there exists a k-dimensional differentiable manifold S tangent to the stable subspace E^s of the linear system (4.2) at 0 such that for all t ≥ 0, φ_t(S) ⊂ S, and for all x_0 ∈ S,
lim_{t→∞} φ_t(x_0) = 0;
and there exists an (n - k)-dimensional differentiable manifold U tangent to the unstable subspace E^u of (4.2) at 0 such that for all t ≤ 0, φ_t(U) ⊂ U, and for all x_0 ∈ U,
lim_{t→-∞} φ_t(x_0) = 0.
There are several approaches to the proof of this result. Hale [10] gives a proof which uses functional analysis. The proof we give here comes from [18, p. 108-111], who derives it from [6, p. 330-335]. It consists in showing that there exists a real nonsingular constant matrix C such that if y = C^{-1}x, then there are n - k real continuous functions y_j = ψ_j(y_1, . . . , y_k), defined for small |y_i|, i ≤ k, such that
y_j = ψ_j(y_1, . . . , y_k) (j = k + 1, . . . , n)
define a k-dimensional differentiable manifold S̃ in y space. The stable manifold S in x space is then obtained by transforming back, x = Cy, so that S is described in terms of the k curvilinear coordinates y_1, . . . , y_k.
Proof. System (4.1) can be written as
x' = Df(0)x + F(x)
with F(x) = f(x) - Df(0)x. Since f ∈ C^1(E), we have F ∈ C^1(E), and F(0) = f(0) = 0 since f(0) = 0. Also, DF(x) = Df(x) - Df(0) and so DF(0) = 0. To continue, we use the following lemma (which we will not prove).
Lemma 4.2.2. Let E be an open subset of R^n containing the origin. If F ∈ C^1(E), then for all x, y ∈ N_δ(0) ⊂ E, there exists a ξ ∈ N_δ(0) such that
|F(x) - F(y)| ≤ ‖DF(ξ)‖ |x - y|.
From Lemma 4.2.2, it follows that for all ε > 0 there exists a δ > 0 such that |x| ≤ δ and |y| ≤ δ imply that
|F(x) - F(y)| ≤ ε|x - y|.
Let C be an invertible n × n matrix such that
B = C^{-1}Df(0)C = [ P 0 ; 0 Q ],
where the eigenvalues λ_1, . . . , λ_k of the k × k matrix P have negative real part and the eigenvalues λ_{k+1}, . . . , λ_n of the (n-k) × (n-k) matrix Q have positive real part. Let α > 0 be chosen small enough that for j = 1, . . . , k, Re(λ_j) < -α < 0. Letting y = C^{-1}x, we have
y' = C^{-1}x' = C^{-1}Df(0)x + C^{-1}F(x)
   = C^{-1}Df(0)Cy + C^{-1}F(Cy)
   = By + G(y),
where G(y) = C^{-1}F(Cy). Since F ∈ C^1(E), we have G ∈ C^1(Ẽ), where Ẽ = C^{-1}(E). Also, Lemma 4.2.2 applies to G.
Now consider the system
y' = By + G(y) (4.3)
and let
U(t) = [ e^{Pt} 0 ; 0 0 ] and V(t) = [ 0 0 ; 0 e^{Qt} ].
Then U' = BU, V' = BV and e^{Bt} = U(t) + V(t). Choosing α as previously noted sufficiently small, we can then choose K > 0 large enough and σ > 0 small enough that
‖U(t)‖ ≤ Ke^{-(α+σ)t} for all t ≥ 0
and
‖V(t)‖ ≤ Ke^{σt} for all t ≤ 0.
Consider now the integral equation
u(t, a) = U(t)a + ∫_0^t U(t-s)G(u(s, a))ds - ∫_t^∞ V(t-s)G(u(s, a))ds, (4.4)
where a ∈ R^n is a constant vector. We can solve this equation using the method of successive approximations. Indeed, let
u^{(0)}(t, a) = 0
and
u^{(j+1)}(t, a) = U(t)a + ∫_0^t U(t-s)G(u^{(j)}(s, a))ds - ∫_t^∞ V(t-s)G(u^{(j)}(s, a))ds. (4.5)
We show by induction that for all j = 1, 2, . . . and t ≥ 0,
|u^{(j)}(t, a) - u^{(j-1)}(t, a)| ≤ K|a|e^{-αt} / 2^{j-1}. (4.6)
Clearly, (4.6) holds for j = 1 since u^{(0)} = 0 and G(0) = 0, so that
|u^{(1)}(t, a) - u^{(0)}(t, a)| = |U(t)a| ≤ Ke^{-(α+σ)t}|a| ≤ K|a|e^{-αt}.
Now suppose that (4.6) holds for j = k. We have
|u^{(k+1)}(t, a) - u^{(k)}(t, a)| = | ∫_0^t U(t-s)[G(u^{(k)}(s, a)) - G(u^{(k-1)}(s, a))]ds - ∫_t^∞ V(t-s)[G(u^{(k)}(s, a)) - G(u^{(k-1)}(s, a))]ds |
  ≤ ∫_0^t ‖U(t-s)‖ |G(u^{(k)}(s, a)) - G(u^{(k-1)}(s, a))| ds + ∫_t^∞ ‖V(t-s)‖ |G(u^{(k)}(s, a)) - G(u^{(k-1)}(s, a))| ds,
which, since G verifies a Lipschitz-type condition as given by Lemma 4.2.2, with Lipschitz constant ε > 0, implies that
|u^{(k+1)}(t, a) - u^{(k)}(t, a)| ≤ ε ∫_0^t ‖U(t-s)‖ |u^{(k)}(s, a) - u^{(k-1)}(s, a)| ds + ε ∫_t^∞ ‖V(t-s)‖ |u^{(k)}(s, a) - u^{(k-1)}(s, a)| ds.
Using the bounds on ‖U‖ and ‖V‖ as well as the induction hypothesis (4.6), it follows that
|u^{(k+1)}(t, a) - u^{(k)}(t, a)| ≤ ∫_0^t Ke^{-(α+σ)(t-s)} εK|a| e^{-αs}/2^{k-1} ds + ∫_t^∞ Ke^{σ(t-s)} εK|a| e^{-αs}/2^{k-1} ds
  ≤ (εK/σ) K|a|e^{-αt}/2^{k-1} + (εK/σ) K|a|e^{-αt}/2^{k-1},
which, if we choose ε < σ/(4K), i.e., εK/σ < 1/4, implies that
|u^{(k+1)}(t, a) - u^{(k)}(t, a)| < (1/4 + 1/4) K|a|e^{-αt}/2^{k-1} = K|a|e^{-αt}/2^k. (4.7)
Note that for G to satisfy the Lipschitz-type condition, we must choose K|a| < δ/2, i.e., |a| < δ/(2K). Then, by induction, (4.6) holds for all t ≥ 0 and j = 1, 2, . . ..
As a consequence, for t ≥ 0 and n > m > N,
|u^{(n)}(t, a) - u^{(m)}(t, a)| ≤ Σ_{j=N}^∞ |u^{(j+1)}(t, a) - u^{(j)}(t, a)| ≤ K|a| Σ_{j=N}^∞ 1/2^j = K|a| / 2^{N-1}.
As this last quantity approaches 0 as N → ∞, it follows that (u^{(j)}(t, a)) is a Cauchy sequence (of continuous functions). It follows that
lim_{j→∞} u^{(j)}(t, a) = u(t, a)
uniformly for all t ≥ 0 and |a| < δ/(2K). From the uniform convergence, we deduce that u(t, a) is continuous. Now taking the limit as j → ∞ in both sides of (4.5), it follows that u(t, a) satisfies the integral equation (4.4) and, as a consequence, the differential equation (4.3).
Since G ∈ C^1(Ẽ), it follows by induction on (4.5) that u^{(j)}(t, a) is a differentiable function of a for |a| < δ/(2K) and t ≥ 0. Since u^{(j)}(t, a) → u(t, a) uniformly, it then follows that u(t, a) is differentiable for t ≥ 0 and |a| < δ/(2K). The estimate (4.7) implies that
|u(t, a)| ≤ 2K|a|e^{-αt} (4.8)
for t ≥ 0 and |a| < δ/(2K).
It is clear from (4.4) that the last n - k components of a do not enter the computation of u(t, a) and may thus be taken as zero. So the components u_j(t, a) of the solution u(t, a) satisfy the initial conditions
u_j(0, a) = a_j for j = 1, . . . , k
and
u_j(0, a) = -( ∫_0^∞ V(-s)G(u(s, a_1, . . . , a_k, 0, . . . , 0))ds )_j for j = k + 1, . . . , n,
where ( · )_j denotes the jth component. For j = k + 1, . . . , n, define the functions
ψ_j(a_1, . . . , a_k) = u_j(0, a_1, . . . , a_k, 0, . . . , 0). (4.9)
The initial values y_j = u_j(0, a_1, . . . , a_k, 0, . . . , 0) then satisfy
y_j = ψ_j(y_1, . . . , y_k) for j = k + 1, . . . , n,
which defines a differentiable manifold S̃ in y space for √(y_1² + · · · + y_k²) < δ/(2K). Furthermore, if y(t) is a solution of the differential equation (4.3) with y(0) ∈ S̃, i.e., with y(0) = u(0, a), then
y(t) = u(t, a).
It follows from the estimate (4.8) that if y(t) is a solution of (4.3) with y(0) ∈ S̃, then y(t) → 0 as t → ∞. It can also be shown that if y(t) is a solution of (4.3) with y(0) ∉ S̃, then y(t) ↛ 0 as t → ∞; see [6, p. 332].
This implies, since φ_t satisfies the group property φ_{s+t}(x_0) = φ_s(φ_t(x_0)), that if y(0) ∈ S̃, then y(t) ∈ S̃ for all t ≥ 0. And it can be shown as in [6, Th 4.2, p. 333] that
∂ψ_j/∂y_i (0) = 0
for i = 1, . . . , k and j = k + 1, . . . , n, i.e., that the differentiable manifold S̃ is tangent to the stable subspace E^s = {y ∈ R^n : y_{k+1} = · · · = y_n = 0} of the linear system y' = By at 0.
The existence of the unstable manifold Ũ of (4.3) is established the same way, but considering a reversal of time, t → -t, i.e., considering the system
y' = -By - G(y).
The stable manifold for this system is the unstable manifold Ũ of (4.3). In order to determine the (n-k)-dimensional manifold Ũ using the above process, the vector y has to be replaced by the vector (y_{k+1}, . . . , y_n, y_1, . . . , y_k).
4.3 The Hartman-Grobman theorem
Adapted from [4, p. 311].
Theorem 4.3.1 (Hartman-Grobman). Suppose that 0 is an equilibrium point of the nonlinear system (4.1). Let φ_t be the flow of (4.1), and ψ_t be the flow of the linearized system x' = Df(0)x. If 0 is a hyperbolic equilibrium, then there exists an open subset T of R^n containing 0, and a homeomorphism G with domain in T, such that G(φ_t(x)) = ψ_t(G(x)) whenever x ∈ T and both sides of the equation are defined.
We follow here [18].
Theorem 4.3.2 (Hartman-Grobman). Let E be an open subset of R^n containing the origin, let f ∈ C^1(E), and let φ_t be the flow of the nonlinear system (4.1). Suppose that f(0) = 0 and that the matrix A = Df(0) has no eigenvalue with zero real part.
Then there exists a homeomorphism H of an open set U containing the origin onto an open set V containing the origin such that for each x_0 ∈ U, there is an open interval I_0 ⊂ R containing 0 such that for all x_0 ∈ U and t ∈ I_0,
H ∘ φ_t(x_0) = e^{At}H(x_0);
i.e., H maps trajectories of (4.1) near the origin onto trajectories of x' = Df(0)x near the origin and preserves the parametrization by time.
Proof. Suppose that f ∈ C^1(E), f(0) = 0 (i.e., 0 is an equilibrium), and let A = Df(0) be the Jacobian matrix of f at 0.
1. As we have assumed that the matrix A has no eigenvalues with zero real part (i.e., 0 is a hyperbolic equilibrium point), we can write A in the form
A = [ P 0 ; 0 Q ],
where P has only eigenvalues with negative real parts and Q has only eigenvalues with positive real parts.
2. Let φ_t be the flow of the nonlinear system (4.1). Let us write the solution as
x(t, x_0) = φ_t(x_0) = ( y(t, y_0, z_0), z(t, y_0, z_0) )^T,
where
x_0 = (y_0, z_0)^T ∈ R^n
has been decomposed with y_0 ∈ E^s, the stable subspace of A, and z_0 ∈ E^u, the unstable subspace of A.
3. Let
Ỹ(y_0, z_0) = y(1, y_0, z_0) - e^P y_0
and
Z̃(y_0, z_0) = z(1, y_0, z_0) - e^Q z_0.
Then Ỹ(0) = Ỹ(0, 0) = y(1, 0, 0) = 0. The same is true of Z̃: Z̃(0) = 0. Also, DỸ(0) = DZ̃(0) = 0. Since f ∈ C^1(E), Ỹ(y_0, z_0) and Z̃(y_0, z_0) are continuously differentiable. Thus
‖DỸ(y_0, z_0)‖ ≤ a
and
‖DZ̃(y_0, z_0)‖ ≤ a
on the compact set |y_0|² + |z_0|² ≤ s_0². By choosing s_0 sufficiently small, we can make a as small as we like. We let Y(y_0, z_0) and Z(y_0, z_0) be smooth functions, defined by
Y(y_0, z_0) = Ỹ(y_0, z_0) for |y_0|² + |z_0|² ≤ (s_0/2)², Y(y_0, z_0) = 0 for |y_0|² + |z_0|² ≥ s_0²,
and
Z(y_0, z_0) = Z̃(y_0, z_0) for |y_0|² + |z_0|² ≤ (s_0/2)², Z(y_0, z_0) = 0 for |y_0|² + |z_0|² ≥ s_0².
By the mean value theorem,
|Y(y_0, z_0)| ≤ a √(|y_0|² + |z_0|²) ≤ a(|y_0| + |z_0|)
and
|Z(y_0, z_0)| ≤ a √(|y_0|² + |z_0|²) ≤ a(|y_0| + |z_0|)
for all (y_0, z_0) ∈ R^n. Let B = e^P and C = e^Q. Assuming that P and Q have been normalized in a proper way, we have
b = ‖B‖ < 1 and c = ‖C^{-1}‖ < 1.
4. For
x = (y, z)^T ∈ R^n,
define the transformations
L(y, z) = (By, Cz)^T
and
T(y, z) = (By + Y(y, z), Cz + Z(y, z))^T,
i.e., L(x) = e^A x and, locally, T(x) = φ_1(x). Then the following lemma holds, which we prove later.
Lemma 4.3.3. There exists a homeomorphism H of an open set U containing the origin onto an open set V containing the origin such that
H ∘ T = L ∘ H.
5. We let H_0 be the homeomorphism defined above and let L^t and T^t be the one-parameter families of transformations defined by
L^t(x_0) = e^{At}x_0 and T^t(x_0) = φ_t(x_0).
Define
H = ∫_0^1 L^{-s} H_0 T^s ds.
It follows from the above lemma that there exists a neighborhood of the origin on which
L^t H = ∫_0^1 L^{t-s} H_0 T^{s-t} ds T^t
      = ∫_{-t}^{1-t} L^{-s} H_0 T^s ds T^t
      = ( ∫_{-t}^0 L^{-s} H_0 T^s ds + ∫_0^{1-t} L^{-s} H_0 T^s ds ) T^t
      = ∫_0^1 L^{-s} H_0 T^s ds T^t
      = H T^t,
since by the above lemma H_0 = L^{-1} H_0 T, which implies that
∫_{-t}^0 L^{-s} H_0 T^s ds = ∫_{-t}^0 L^{-s-1} H_0 T^{s+1} ds = ∫_{1-t}^1 L^{-s} H_0 T^s ds.
Thus H ∘ T^t = L^t ∘ H, or equivalently
H ∘ φ_t(x_0) = e^{At} H(x_0),
and it can be shown that H is a homeomorphism on R^n. The outline of the proof is complete.
We now prove Lemma 4.3.3.
Proof. We use the method of successive approximations. For x ∈ R^n, let
H(x) = (Φ(y, z), Ψ(y, z))^T.
Then H ∘ T = L ∘ H is equivalent to the pair of equations
BΦ(y, z) = Φ(By + Y(y, z), Cz + Z(y, z)) (4.10a)
CΨ(y, z) = Ψ(By + Y(y, z), Cz + Z(y, z)). (4.10b)
Successive approximations for (4.10b) are defined by
Ψ_0(y, z) = z
Ψ_{k+1}(y, z) = C^{-1} Ψ_k(By + Y(y, z), Cz + Z(y, z)). (4.11)
It can be shown by induction that, for k = 0, 1, . . ., the functions Ψ_k are continuous and such that Ψ_k(y, z) = z for |y| + |z| ≥ 2s_0.
Let us now prove that (Ψ_k) is a Cauchy sequence. For this, we show by induction that for all j ≥ 1,
|Ψ_j(y, z) - Ψ_{j-1}(y, z)| ≤ M r^j (|y| + |z|)^δ, (4.12)
where r = c[2 max(a, b, c)]^δ with δ ∈ (0, 1) chosen sufficiently small that r < 1 (which is possible since c < 1), and M = ac(2s_0)^{1-δ}/r. Inequality (4.12) is satisfied for j = 1 since
|Ψ_1(y, z) - Ψ_0(y, z)| = |C^{-1}Ψ_0(By + Y(y, z), Cz + Z(y, z)) - z|
  = |C^{-1}(Cz + Z(y, z)) - z|
  = |C^{-1}Z(y, z)|
  ≤ ‖C^{-1}‖ |Z(y, z)|
  ≤ ca(|y| + |z|)
  ≤ Mr(|y| + |z|)^δ,
since Z(y, z) = 0 for |y| + |z| ≥ 2s_0. Now assuming that (4.12) holds for j = k gives
|Ψ_{k+1}(y, z) - Ψ_k(y, z)| = |C^{-1}Ψ_k(By + Y(y, z), Cz + Z(y, z)) - C^{-1}Ψ_{k-1}(By + Y(y, z), Cz + Z(y, z))|
  = |C^{-1}(Ψ_k - Ψ_{k-1})|
  ≤ ‖C^{-1}‖ |Ψ_k - Ψ_{k-1}|,
which, using the induction hypothesis (4.12) and c = ‖C^{-1}‖, gives
|Ψ_{k+1}(y, z) - Ψ_k(y, z)| ≤ cM r^k ( |By + Y(y, z)| + |Cz + Z(y, z)| )^δ
  ≤ cM r^k ( b|y| + 2a(|y| + |z|) + c|z| )^δ
  ≤ cM r^k ( 2 max(a, b, c) )^δ ( |y| + |z| )^δ
  = M r^{k+1} ( |y| + |z| )^δ.
Using the same type of argument as in the proof of the stable manifold theorem, (Ψ_k) is thus a Cauchy sequence of continuous functions that converges uniformly as k → ∞ to a continuous function Ψ(y, z). Also, Ψ(y, z) = z for |y| + |z| ≥ 2s_0. Taking limits in (4.11) and left-multiplying by C shows that Ψ(y, z) is a solution of (4.10b).
Now for (4.10a). This equation can be written as
B^{-1}Φ(y, z) = Φ(B^{-1}y + Y_1(y, z), C^{-1}z + Z_1(y, z)), (4.13)
where Y_1 and Z_1 occur in the inverse of T, which exists provided that a is small enough (i.e., s_0 is sufficiently small):
T^{-1}(y, z) = ( B^{-1}y + Y_1(y, z), C^{-1}z + Z_1(y, z) )^T.
Successive approximations with Φ_0(y, z) = y can then be used as above (since b = ‖B‖ < 1) to solve (4.13).
Therefore, we obtain the continuous map
H(y, z) = ( Φ(y, z), Ψ(y, z) )^T,
and it follows as in [11, p. 248-249] that H is a homeomorphism of R^n onto R^n.
4.4 Example of application
4.4.1 A chemostat model
To illustrate the use of the theorems in this chapter, we take an example of a nonlinear system: a system of two nonlinear differential equations modeling a biological device called a chemostat. Without going into details, the system is the following:
dS/dt = D(S⁰ - S) - μ(S)x (4.14a)
dx/dt = (μ(S) - D)x. (4.14b)
The parameters S⁰ and D, respectively the input concentration and the dilution rate, are real and positive. The function μ is the growth function. It is generally assumed to satisfy μ(0) = 0, μ' > 0 and μ'' < 0.
To be complete, one should verify that the positive quadrant is positively invariant under the flow of (4.14), i.e., that for S(0) ≥ 0 and x(0) ≥ 0, solutions remain nonnegative for all positive times, and similar properties. But since we are here only interested in applications of the stable manifold theorem, we proceed to a very crude analysis, and will not deal with this point.
Note that in vector form, the system is written
ξ' = f(ξ)
with ξ = (S, x)^T and
f(ξ) = ( D(S⁰ - S) - μ(S)x, (μ(S) - D)x )^T.
Equilibria of the system are found by solving f(ξ) = 0. We find two: the first one situated on one of the boundaries of the positive quadrant,
ξ*_T = (S*_T, x*_T) = (S⁰, 0),
the second one in the interior,
ξ*_I = (S*, x*) = (λ, S⁰ - λ),
where λ is such that μ(λ) = D. Note that this implies that if λ ≥ S⁰, then ξ*_T is the only equilibrium of the system.
At an arbitrary point ξ = (S, x), the Jacobian matrix is given by
Df(ξ) = [ -D - μ'(S)x  -μ(S) ; μ'(S)x  μ(S) - D ].
Thus, at the trivial equilibrium ξ*_T,
Df(ξ*_T) = [ -D  -μ(S⁰) ; 0  μ(S⁰) - D ].
We have two eigenvalues, -D and μ(S⁰) - D. Let us suppose that μ(S⁰) - D < 0. Note that this implies that ξ*_T is the only equilibrium, since, as we have seen before, ξ*_I is not feasible if λ > S⁰.
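This eigenvalue computation is easy to confirm numerically. The sketch below (an illustration, not part of the original notes; it assumes NumPy) uses a Monod growth function μ(S) = mS/(k + S) with hypothetical parameter values chosen so that μ(S⁰) - D < 0, evaluates the Jacobian of (4.14) at the trivial equilibrium (S⁰, 0), and checks that its eigenvalues are -D and μ(S⁰) - D, both negative. The specific form of μ is an assumption for illustration; the notes only require μ(0) = 0 and μ' > 0.

```python
import numpy as np

# Hypothetical parameters and a Monod growth function mu(S) = m S / (k + S)
m, k, D, S0 = 3.0, 1.0, 1.0, 0.4   # chosen so that mu(S0) - D < 0

mu = lambda S: m * S / (k + S)
dmu = lambda S: m * k / (k + S) ** 2   # mu'(S)

def jacobian(S, x):
    # Jacobian of (4.14) at an arbitrary point (S, x)
    return np.array([[-D - dmu(S) * x, -mu(S)],
                     [dmu(S) * x, mu(S) - D]])

J = jacobian(S0, 0.0)                  # at the trivial equilibrium (S0, 0)
eigs = np.sort(np.linalg.eigvals(J).real)

# eigenvalues are -D and mu(S0) - D, both negative here
assert np.allclose(eigs, sorted([-D, mu(S0) - D]))
assert all(e < 0 for e in eigs)
print("eigenvalues at (S0, 0):", eigs)
```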
As the system has dimension 2, and the matrix Df(ξ*_T) has two negative eigenvalues, the stable manifold theorem (Theorem 4.2.1) states that there exists a 2-dimensional differentiable manifold M such that
φ_t(M) ⊂ M
and, for all ξ_0 ∈ M, lim_{t→∞} φ_t(ξ_0) = ξ*_T. At ξ*_T, M is tangent to the stable subspace E^S of the linearized system ξ' = Df(ξ*_T)(ξ - ξ*_T).
Since there are no eigenvalues with positive real part, there does not exist an unstable manifold in this case. Let us now characterize the nature of the stable subspace E^S. It is obtained by studying the linear system
ξ' = Df(ξ*_T)(ξ - ξ*_T)
   = [ -D  -μ(S⁰) ; 0  μ(S⁰) - D ] (S - S⁰, x)^T
   = ( -D(S - S⁰) - μ(S⁰)x, (μ(S⁰) - D)x )^T. (4.15)
Of course, the Jacobian matrix associated to this system is the same as that of the nonlinear system (at ξ*_T). Associated to the eigenvalue -D is the eigenvector v_1 = (1, 0)^T; to μ(S⁰) - D is v_2 = (-1, 1)^T.
The stable subspace is thus given by Span(v_1, v_2), i.e., the whole of R².
In fact, the stable manifold of ξ*_T is the whole positive quadrant, since all solutions limit to this equilibrium. But let us pretend that we do not have this information, and let us try to find an approximation of the stable manifold.
4.4.2 A second example
This example is adapted from [18, p. 111]. Consider the nonlinear system
x' = -x - y²
y' = x² + y. (4.16)
From the nullcline equations, it is clear that (x, y) = (0, 0) is an equilibrium point. At (0, 0), the Jacobian matrix of (4.16) is given by
J = [ -1 0 ; 0 1 ].
The linearized system at 0 is
x' = -x
y' = y. (4.17)
So the eigenvalues are -1 and 1, with associated eigenvectors (1, 0)^T and (0, 1)^T, respectively. Therefore, the stable manifold theorem (Theorem 4.2.1) implies that there exists a 1-dimensional stable (differentiable) manifold S such that φ_t(S) ⊂ S and lim_{t→∞} φ_t(x_0) = 0 for all x_0 ∈ S, and a 1-dimensional unstable (differentiable) manifold U such that φ_t(U) ⊂ U and lim_{t→-∞} φ_t(x_0) = 0 for all x_0 ∈ U. Furthermore, at 0, S is tangent to the stable subspace E^S of (4.17), and U is tangent to the unstable subspace E^U of (4.17).
The stable subspace E^S is given by Span(v_1), with v_1 = (1, 0)^T, i.e., the x-axis. The unstable subspace E^U is Span(v_2), with v_2 = (0, 1)^T, i.e., the y-axis. The behavior of this system is illustrated in Figure 4.1.
Figure 4.1: Vector field of system (4.16) in the neighborhood of 0.
To be more precise about the nature of the stable manifold S, we proceed as follows. First of all, as A is in diagonal form, we have
A = B = [ -1 0 ; 0 1 ]
and C = I. Also, F(·) = G(·) = (-y², x²)^T. Here, the matrices P and Q are in fact scalars, P = -1 and Q = 1. Thus
U(t) = [ e^{-t} 0 ; 0 0 ] and V(t) = [ 0 0 ; 0 e^{t} ].
Finally, a = (a_1, 0)^T. So now we can use successive approximations to find an approximate solution to the integral equation (4.4), which here takes the form
u(t, a) = (e^{-t}a_1, 0)^T + ∫_0^t ( -e^{-(t-s)} u_2²(s), 0 )^T ds - ∫_t^∞ ( 0, e^{t-s} u_1²(s) )^T ds.
To construct the sequence of successive approximations, we start with u^{(0)}(t, a) = (0, 0)^T, then compute the successive terms using equation (4.5), which takes the form
u^{(j+1)}(t, a) = ( e^{-t} a_1 ; 0 ) + ∫_0^t [ e^{-(t-s)}  0 ; 0  0 ] G(u^{(j)}(s)) ds - ∫_t^∞ [ 0  0 ; 0  e^{t-s} ] G(u^{(j)}(s)) ds
             = ( e^{-t} a_1 ; 0 ) + ∫_0^t [ e^{-(t-s)}  0 ; 0  0 ] ( -(u_2^{(j)}(s))^2 ; (u_1^{(j)}(s))^2 ) ds - ∫_t^∞ [ 0  0 ; 0  e^{t-s} ] ( -(u_2^{(j)}(s))^2 ; (u_1^{(j)}(s))^2 ) ds
             = ( e^{-t} a_1 ; 0 ) - ∫_0^t ( e^{-(t-s)} (u_2^{(j)}(s))^2 ; 0 ) ds - ∫_t^∞ ( 0 ; e^{t-s} (u_1^{(j)}(s))^2 ) ds
Therefore,
u^{(1)}(t, a) = U(t)a = ( e^{-t} a_1 ; 0 )
since u^{(0)}(t, a) = ( u_1^{(0)}(t, a) ; u_2^{(0)}(t, a) ) = ( 0 ; 0 ).
Then,
u^{(2)}(t, a) = ( e^{-t} a_1 ; 0 ) - ( 0 ; ∫_t^∞ e^{t-s} (e^{-s} a_1)^2 ds )
            = ( e^{-t} a_1 ; 0 ) - ( 0 ; (1/3) a_1^2 e^{-2t} )
            = ( e^{-t} a_1 ; -(1/3) a_1^2 e^{-2t} )
and continuing this process, we find
u^{(3)}(t, a) = ( e^{-t} a_1 + (1/27)(e^{-4t} - e^{-t}) a_1^4 ; -(1/3) a_1^2 e^{-2t} )
Also, it is possible to show that u^{(4)}(t, a) - u^{(3)}(t, a) = O(a_1^5).
The stable manifold S is 1-dimensional, so here it has the form ψ_2(a_1) = u_2(0, a_1, 0), and is here approximated by
ψ_2(a_1) = -(1/3) a_1^2 + O(a_1^5)
as a_1 → 0. Thus S is approximated by
y = -x^2/3 + O(x^5)
as x → 0.
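The approximation above can be sanity-checked numerically. The following sketch (my own illustration, not part of the original computation; the step size, horizon and test points are arbitrary choices) integrates (4.16) with a classical RK4 scheme: an orbit started on the curve y = -x^2/3 decays to the origin, while one started off it escapes along the unstable direction.

```python
import math

# Numerical check (RK4) that y = -x^2/3 approximates the stable manifold
# of system (4.16): x' = -x - y^2, y' = x^2 + y.
def f(x, y):
    return (-x - y * y, x * x + y)

def flow(x, y, t_end, h=1e-3):
    # Integrate (4.16) forward in time with the classical Runge-Kutta scheme
    t = 0.0
    while t < t_end - h / 2:
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

x0 = 0.1
# Starting on the approximate stable manifold, the orbit decays toward 0
xs, ys = flow(x0, -x0 ** 2 / 3, 3.0)
on_manifold = math.hypot(xs, ys)
# Starting off the manifold, the unstable component grows like e^t
xu, yu = flow(x0, 0.01, 3.0)
off_manifold = math.hypot(xu, yu)
```

The residual O(a_1^5) error on the manifold also grows like e^t, which is why the check uses a moderate horizon t = 3.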
Chapter 5
Exponential dichotomy
Our aim here is to show the equivalent of the Hartman-Grobman theorem for linear systems with variable coefficients. Compared to other results we have seen so far, this is a much more recent field. The first results were shown in the 60s by Lin. We give here only the most elementary results. For more details, see, e.g., [13].
We consider the linear system of differential equations
dx/dt = A(t)x    (5.1)
where the n × n matrix A(t) is continuous on the real axis.
5.1 Exponential dichotomy
Definition 5.1.1 (Exponential dichotomy). Let X(t) be the fundamental matrix solution of (5.1). If X(t) and X^{-1}(s) can be decomposed into the following forms
X(t) = X_1(t) + X_2(t)
X^{-1}(s) = Z_1(s) + Z_2(s)
X(t)X^{-1}(s) = X_1(t)Z_1(s) + X_2(t)Z_2(s)
and satisfy the conditions that there exist positive constants K, α, β such that
|X_1(t)Z_1(s)| ≤ K e^{-α(t-s)},  t ≥ s
|X_2(t)Z_2(s)| ≤ K e^{-β(s-t)},  s ≥ t
    (5.2)
where
X_1(t) = (X_{11}(t), 0),  X_2(t) = (0, X_{12}(t)),
Z_1(s) = ( Z_{11}(s) ; 0 ),  Z_2(s) = ( 0 ; Z_{21}(s) ),
or there is a projection P on the stable manifold such that
|X(t)PX^{-1}(s)| ≤ K e^{-α(t-s)},  t ≥ s
|X(t)(I - P)X^{-1}(s)| ≤ K e^{-β(s-t)},  s ≥ t
    (5.3)
then we say that the system (5.1) admits exponential dichotomy.
Remark A matrix P defines a projection if it is such that P^2 = P.
Another definition is given in [1].
Definition 5.1.2. Let Φ(t, s), Φ(t, t) = I, be the principal matrix solution of (5.1). We say that (5.1) has an exponential dichotomy on the interval I if there are projections P(t) : R^n → R^n, t ∈ I, continuous in t, such that if Q(t) = I - P(t), then
i) Φ(t, s)P(s) = P(t)Φ(t, s), for t, s ∈ I.
ii) |Φ(t, s)P(s)| ≤ K e^{-α(t-s)}, for t ≥ s, t, s ∈ I.
iii) |Φ(t, s)Q(s)| ≤ K e^{-β(s-t)}, for s ≥ t, t, s ∈ I.
A more general definition is the following (see, e.g., [16]).
Definition 5.1.3 ((λ_1, λ_2)-dichotomy). If λ_1, λ_2 are continuous real-valued functions on the real interval I = (ω_-, ω_+), system (5.1) is said to have a (λ_1, λ_2)-dichotomy if there exist supplementary projections P_1, P_2 on R^n such that
|X(t)P_i X^{-1}(s)| ≤ K_i exp( ∫_s^t λ_i ),  if (-1)^i (s - t) ≥ 0, i = 1, 2,
where K_1, K_2 are positive constants.
In the case that λ_1, λ_2 are constants, the system (5.1) is said to have an exponential dichotomy if λ_1 < 0 < λ_2 and an ordinary dichotomy if λ_1 = λ_2 = 0.
The following remark is from [7].
Remark The autonomous system
x' = Ax
has an exponential dichotomy on R^+ if and only if no eigenvalue of A has zero real part. It has an ordinary dichotomy if and only if all eigenvalues of A with zero real part are semisimple (their algebraic multiplicities are equal to their geometric multiplicities, i.e., the dimension of their eigenspace). In each case, X(t) = e^{tA} and we can take P to be the spectral projection defined by
P = (1/2πi) ∫_γ (zI - A)^{-1} dz
with γ any rectifiable simple closed curve in the open left half-plane which contains in its interior all eigenvalues of A with negative real part.
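As a concrete (hypothetical) instance of this remark, take A similar to diag(-1, 2) via T = [[1, 1], [0, 1]]; the sketch below checks the dichotomy estimates on a grid of times, with the constant K = 2 chosen generously for the entrywise max norm (an assumption for the experiment, not a sharp value).

```python
import math

# Sketch: A = T D T^{-1} with D = diag(-1, 2) has no eigenvalue with zero
# real part, so x' = Ax has an exponential dichotomy with the spectral
# projection P = T diag(1, 0) T^{-1}.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[1.0, 1.0], [0.0, 1.0]]
T_inv = [[1.0, -1.0], [0.0, 1.0]]

def phi(t, s):
    # Principal matrix solution Phi(t, s) = T e^{(t-s)D} T^{-1}
    D = [[math.exp(-(t - s)), 0.0], [0.0, math.exp(2.0 * (t - s))]]
    return mat_mul(mat_mul(T, D), T_inv)

P = mat_mul(mat_mul(T, [[1.0, 0.0], [0.0, 0.0]]), T_inv)
Q = [[1.0 - P[0][0], -P[0][1]], [-P[1][0], 1.0 - P[1][1]]]  # Q = I - P

def norm(M):
    # Entrywise max norm, good enough for an illustration
    return max(abs(M[i][j]) for i in range(2) for j in range(2))

# Dichotomy estimates with K = 2, alpha = 1, beta = 2, checked on a time grid
stable_ok = all(norm(mat_mul(phi(t, s), P)) <= 2.0 * math.exp(-(t - s)) + 1e-9
                for s in range(6) for t in range(s, 12))
unstable_ok = all(norm(mat_mul(phi(t, s), Q)) <= 2.0 * math.exp(-2.0 * (s - t)) + 1e-9
                  for t in range(6) for s in range(t, 12))
```

Here Φ(t, s)P contains only the decaying mode e^{-(t-s)} and Φ(t, s)Q only the mode e^{2(t-s)}, which is exactly the splitting the definitions describe.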
5.2 Existence of exponential dichotomy
To check that the previous definitions hold can be a very tedious task. Some authors have thus worked on deriving simpler conditions that imply exponential dichotomy.
Theorem 5.2.1. If the matrix A(t) in (5.1) is continuous and bounded on R, and there exists a quadratic form V(t, x) = x^T G(t)x, where the matrix G(t) is symmetric, regular, bounded and C^1, such that the derivative of V(t, x) with respect to (5.1) is positive definite, then (5.1) admits exponential dichotomy.
The converse is true, without the requirement that A(t) be bounded.
The following is a result of [7].
Theorem 5.2.2. Let A(t) be a continuous n × n matrix function defined on an interval I such that
i) A(t) has k eigenvalues with real part ≤ -α < 0 and n - k eigenvalues with real part ≥ β > 0 for all t ∈ I,
ii) |A(t)| ≤ M for all t ∈ I.
For any positive constant δ < min(α, β), there exists a positive constant ε = ε(M, α + β, δ) such that, if
|A(t_2) - A(t_1)| ≤ ε for |t_2 - t_1| ≤ h
where h > 0 is a fixed number not greater than the length of I, then the equation
x' = A(t)x
has a fundamental matrix X(t) satisfying the inequalities
|X(t) P̃ X^{-1}(s)| ≤ K e^{-(α-δ)(t-s)}  for t ≥ s
|X(t)(I - P̃) X^{-1}(s)| ≤ L e^{-(β-δ)(s-t)}  for s ≥ t
where K, L are positive constants depending only on M, α + β, δ and
P̃ = [ I_k  0 ; 0  0 ]
The following result, due to Muldowney [16], gives a criterion for the existence of a (λ_1, λ_2)-dichotomy.
Proposition 5.2.3. Suppose there is a continuous real-valued function λ on I and constants l_i, 0 ≤ l_i < 1, i = 1, 2, such that for some m, 0 ≤ m ≤ n,
max{ l_1 ℜ(a_jj) + l_1 Σ_{i=1, i≠j}^{m} |a_ij| + Σ_{i=m+1}^{n} |a_ij| : j = 1, ..., m } ≤ λ l_1,
min{ l_2 ℜ(a_jj) - Σ_{i=1}^{m} |a_ij| - l_2 Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, ..., n } ≥ λ l_2.
Then the system (5.1) has a (λ_1, λ_2)-dichotomy, where
λ_1 = max{ ℜ(a_jj) + Σ_{i=1, i≠j}^{m} |a_ij| + l_2 Σ_{i=m+1}^{n} |a_ij| : j = 1, ..., m },
λ_2 = min{ ℜ(a_jj) - l_1 Σ_{i=1}^{m} |a_ij| - Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, ..., n }.
The same sort of theorem can be proved with sums of the columns replaced by sums of
the rows.
Example Consider
A(t) = [ 1  0  1/2 ; t/2  t  t^2 ; t/2  t^2  t ],  t > 0
5.3 First approximate theory
We consider the nonlinear system
dx/dt = A(t)x + f(t, x)    (5.4)
where f : R × R^n → R^n is continuous, f(t, x) = O(|x|^2) as |x| = o(1), and |f(t, x_1) - f(t, x_2)| ≤ L|x_1 - x_2| with L small enough.
Let x(t) be a nontrivial solution of (5.1); define
λ_u(x(t)) = limsup_{t-s→∞} (1/(t-s)) log( |x(t)| / |x(s)| )
and
μ_u(x(t)) = liminf_{t-s→∞} (1/(t-s)) log( |x(t)| / |x(s)| )
The numbers λ_u(x(t)) and μ_u(x(t)) are called the uniform upper characteristic exponent and uniform lower characteristic exponent of x(t), respectively.
Remark If λ_u(x) < 0, then lim_{t→∞} |x(t)| = 0. If μ_u(x) > 0, then lim_{t→∞} |x(t)| = ∞.
Then we have the following theorem.
Theorem 5.3.1. If (5.1) admits an exponential dichotomy, then the linear system (5.1) and the nonlinear system (5.4) are topologically equivalent, i.e.,
i) if the solution x(t) of (5.4) remains in a neighborhood of the origin for t ≥ 0, or t ≤ 0, then lim_{t→∞} x(t) = 0, or lim_{t→-∞} x(t) = 0, respectively;
ii) there exist positive constants K_0 and α_0 such that if a solution x(t) of (5.4) is such that lim_{t→∞} x(t) = 0, or lim_{t→-∞} x(t) = 0, then
|x(t)| ≤ K_0 |x(s)| e^{-α_0(t-s)},  t ≥ s
or
|x(t)| ≤ K_0 |x(s)| e^{α_0(t-s)},  s ≥ t
respectively. At this time, λ_u(x(t)) ≤ -α_0 < 0, or μ_u(x(t)) ≥ α_0 > 0;
iii) for a k-dimensional solution x(t) of (5.1) with λ_u(x(t)) < 0, or an (n - k)-dimensional solution x(t) of (5.1) with μ_u(x(t)) > 0, there is a unique k-dimensional or (n - k)-dimensional solution y(t) of (5.4), such that λ_u(y(t)) < 0, or μ_u(y(t)) > 0 respectively.
A different statement of the same sort of result is given by Palmer [17].
Theorem 5.3.2. Suppose that A(t) is a continuous matrix function such that the linear equation x' = A(t)x has an exponential dichotomy. Suppose that f(t, x) is a continuous function of R × R^n into R^n such that
|f(t, x)| ≤ μ,  |f(t, x_1) - f(t, x_2)| ≤ L|x_1 - x_2|
for all t, x, x_1, x_2. Then if 4LK ≤ α, there is a unique function H(t, x) of R × R^n into R^n satisfying
i) H(t, x) - x is bounded in R × R^n,
ii) if x(t) is any solution of the differential equation x' = A(t)x + f(t, x), then H(t, x(t)) is a solution of x' = A(t)x.
Moreover, H is continuous in R × R^n and
|H(t, x) - x| ≤ 4Kμα^{-1}
for all t, x. For each fixed t, H_t(x) = H(t, x) is a homeomorphism of R^n. L(t, x) = H_t^{-1}(x) is continuous in R × R^n and if y(t) is any solution of x' = A(t)x, then L(t, y(t)) is a solution of x' = A(t)x + f(t, x).
5.4 Stability of exponential dichotomy
Theorem 5.4.1. Suppose that the linear system (5.1) admits exponential dichotomy. Then there exists a constant δ > 0 such that the linear system
dx/dt = (A(t) + B(t))x    (5.5)
also admits exponential dichotomy, when B(t) is continuous on R and |B(t)| ≤ δ.
Another version of this theorem is the following.
Theorem 5.4.2. Let B : R^+ → M_n(R) be a bounded, continuous matrix function. Suppose that (5.1) has an exponential dichotomy on R^+. If δ = sup |B(t)| < α/(4K^2), then the perturbed equation
x' = (A(t) + B(t))x
also has an exponential dichotomy on R^+ with constants K̃ and α̃ determined by K, α and δ. Moreover, if P̃(t) is the corresponding projection, then |P(t) - P̃(t)| = O(δ) uniformly in t ∈ R^+. Also, |α - α̃| = O(δ).
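A small experiment in the spirit of these robustness theorems (the matrices, δ = 0.01 and the thresholds are illustrative assumptions, not the sharp constants of Theorem 5.4.2): perturb A = diag(-1, 1) by a bounded oscillating B(t) and check that the fundamental matrix of the perturbed system still has one exponentially contracting and one exponentially expanding direction.

```python
import math

# Matrix ODE X' = (A + B(t))X with A = diag(-1, 1) and a small bounded
# perturbation B(t) = delta * [[0, sin t], [cos t, 0]].
def rhs(t, X, delta=0.01):
    a11, a12 = -1.0, delta * math.sin(t)
    a21, a22 = delta * math.cos(t), 1.0
    return [[a11 * X[0][0] + a12 * X[1][0], a11 * X[0][1] + a12 * X[1][1]],
            [a21 * X[0][0] + a22 * X[1][0], a21 * X[0][1] + a22 * X[1][1]]]

def add_scaled(X, Y, c):
    return [[X[i][j] + c * Y[i][j] for j in range(2)] for i in range(2)]

# Integrate the fundamental matrix with RK4, X(0) = I, up to t = 5
X = [[1.0, 0.0], [0.0, 1.0]]
t, h, t_end = 0.0, 1e-3, 5.0
while t < t_end - h / 2:
    k1 = rhs(t, X)
    k2 = rhs(t + h / 2, add_scaled(X, k1, h / 2))
    k3 = rhs(t + h / 2, add_scaled(X, k2, h / 2))
    k4 = rhs(t + h, add_scaled(X, k3, h))
    for i in range(2):
        for j in range(2):
            X[i][j] += h / 6 * (k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j])
    t += h

# Singular values of X(5), from the eigenvalues of X^T X (2x2 case)
g11 = X[0][0] ** 2 + X[1][0] ** 2
g22 = X[0][1] ** 2 + X[1][1] ** 2
g12 = X[0][0] * X[0][1] + X[1][0] * X[1][1]
tr, det = g11 + g22, g11 * g22 - g12 ** 2
disc = math.sqrt(max(tr ** 2 - 4 * det, 0.0))
s_max = math.sqrt((tr + disc) / 2)           # ~ e^{5}, expanding direction
s_min = math.sqrt(max((tr - disc) / 2, 0.0))  # ~ e^{-5}, contracting direction
```

Since tr(A + B(t)) = 0 here, det X(t) = 1 and the two singular values are reciprocal, which makes the persistence of the splitting easy to read off.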
5.5 Generality of exponential dichotomy
The exposition has been done here in the case of a system of ODEs. But it is important to
realize that exponential dichotomies exist in a much more general setting.
Bibliography
[1] A. Acosta and P. García. Synchronization of non-identical chaotic systems: an exponential dichotomies approach. J. Phys. A: Math. Gen., 34:9143-9151, 2001.
[2] F. Brauer and J.A. Nohel. The Qualitative Theory of Ordinary Differential Equations. Dover, 1989.
[3] H. Cartan. Cours de calcul différentiel. Hermann, Paris, 1997. Reprint of the 1977 edition.
[4] C. Chicone. Ordinary Differential Equations with Applications. Springer, 1999.
[5] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill, 1955.
[6] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. Krieger, 1984.
[7] W.A. Coppel. Dichotomies in Stability Theory, volume 629 of Lecture Notes in Mathematics. Springer-Verlag, 1978.
[8] J. Dieudonné. Foundations of Modern Analysis. Academic Press, 1969.
[9] N.B. Haaser and J.A. Sullivan. Real Analysis. Dover, 1991. Reprint of the 1971 edition.
[10] J.K. Hale. Ordinary Differential Equations. Krieger, 1980.
[11] P. Hartman. Ordinary Differential Equations. John Wiley & Sons, 1964.
[12] P.-F. Hsieh and Y. Sibuya. Basic Theory of Ordinary Differential Equations. Springer, 1999.
[13] Z. Lin and Y-X. Lin. Linear Systems. Exponential Dichotomy and Structure of Sets of Hyperbolic Points. World Scientific, 2000.
[14] H.J. Marquez. Nonlinear Control Systems. Wiley, 2003.
[15] R.K. Miller and A.N. Michel. Ordinary Differential Equations. Academic Press, 1982.
[16] J.S. Muldowney. Dichotomies and asymptotic behaviour for linear differential systems. Transactions of the AMS, 283(2):465-484, 1984.
[17] K.J. Palmer. A generalization of Hartman's linearization theorem. J. Math. Anal. Appl., 41:753-758, 1973.
[18] L. Perko. Differential Equations and Dynamical Systems. Springer, 2001.
[19] L. Schwartz. Cours d'analyse, volume I. Hermann, Paris, 1967.
[20] K. Yosida. Lectures on Differential and Integral Equations. Dover, 1991.
Appendix A
A few useful definitions and results
Here, some results that are important for the course are given with a somewhat random
ordering.
A.1 Vector spaces, norms
A.1.1 Norm
Consider a vector space E over a field K. A norm is an application, denoted ‖·‖, from E to R^+ that satisfies the following:
1) ∀x ∈ E, ‖x‖ ≥ 0, with ‖x‖ = 0 if and only if x = 0.
2) ∀x ∈ E, ∀a ∈ K, ‖ax‖ = |a| ‖x‖.
3) ∀x, y ∈ E, ‖x + y‖ ≤ ‖x‖ + ‖y‖.
A vector space E equipped with a norm ‖·‖ is a normed vector space.
A.1.2 Matrix norms
A.1.3 Supremum (or operator) norm
The supremum norm is defined by
∀L ∈ L(E),  |||L||| = sup_{x ∈ E \ {0}} ‖L(x)‖ / ‖x‖ = sup_{‖x‖ ≤ 1} ‖L(x)‖.
The inequality
‖A(t)(x_1 - x_2)‖ ≤ |||A(t)||| ‖x_1 - x_2‖
results from the nature of the norm |||·|||, and is best understood by indicating the spaces in which the various norms are defined. We have
‖Ax‖_a = ‖x‖_b ‖A(x/‖x‖_b)‖_a ≤ ‖x‖_b |||A||| = |||A||| ‖x‖_b,
since ‖x/‖x‖_b‖_b = 1.
A.2 An inequality involving norms and integrals
Lemma A.2.1. Suppose f : R^n → R^n is a continuous function. Then
‖ ∫_a^b f(x) dx ‖ ≤ ∫_a^b ‖f(x)‖ dx.
Proof. First, note that we have
∫_a^b f(x) dx = ( ∫_a^b f_1(x) dx, ..., ∫_a^b f_n(x) dx ).
For a given component function f_i, i = 1, ..., n, using the definition of the Riemann integral,
∫_a^b f_i(x) dx = lim_{k→∞} Σ_{j=1}^{k} f_i(x*_j) Δx_j,
where x*_j is the sample point in the interval [x_{j-1}, x_j] with width Δx_j. Therefore,
| ∫_a^b f_i(x) dx | = | lim_{k→∞} Σ_{j=1}^{k} f_i(x*_j) Δx_j | = lim_{k→∞} | Σ_{j=1}^{k} f_i(x*_j) Δx_j |,
since the norm is a continuous function. The result then follows from the triangle inequality.
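A quick numerical illustration of the lemma (the integrand and interval are arbitrary choices): for f(x) = (cos x, sin x) on [0, π], the left side is ‖(0, 2)‖ = 2 while the right side is π.

```python
import math

# Riemann-sum check of the norm-integral inequality for f(x) = (cos x, sin x)
a, b, n = 0.0, math.pi, 100000
h = (b - a) / n
int_f1 = int_f2 = int_norm = 0.0
for j in range(n):
    x = a + (j + 0.5) * h  # midpoint sample point
    int_f1 += math.cos(x) * h
    int_f2 += math.sin(x) * h
    int_norm += math.hypot(math.cos(x), math.sin(x)) * h

norm_of_integral = math.hypot(int_f1, int_f2)  # ~ |(0, 2)| = 2
integral_of_norm = int_norm                    # ~ pi, since |f(x)| = 1
```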
A.3 Types of convergences
Definition A.1 (Pointwise convergence). Let X be any set, and let Y be a topological space. A sequence f_1, f_2, ... of mappings from X to Y is said to be pointwise convergent (or simply convergent) to a mapping f : X → Y, if the sequence f_n(x) converges to f(x) for each x in X. This is usually denoted by f_n → f. In other words,
(f_n → f) ⇔ (∀x ∈ X, ∀ε > 0, ∃N ≥ 0, ∀n ≥ N, d(f_n(x), f(x)) < ε).
Definition A.2 (Uniform convergence). Let X be any set, and let Y be a topological space. A sequence f_1, f_2, ... of mappings from X to Y is said to be uniformly convergent to a mapping f : X → Y, if given ε > 0, there exists N such that for all n ≥ N and all x ∈ X,
d(f_n(x), f(x)) < ε.
This is usually denoted by f_n →_u f. In other words,
(f_n →_u f) ⇔ (∀ε > 0, ∃N ≥ 0, ∀n ≥ N, ∀x ∈ X, d(f_n(x), f(x)) < ε).
An important theorem follows.
Theorem A.3.1. If the sequence of continuous maps f_n is uniformly convergent to the map f, then f is continuous.
A.4 Asymptotic Notations
Let n be an integer variable which tends to ∞ and let x be a continuous variable tending to some limit. Also, let φ(n) or φ(x) be a positive function and f(n) or f(x) any function. Then
i) f = O(φ) means that |f| < Aφ for some constant A and all values of n and x,
ii) f = o(φ) means that f/φ → 0,
iii) f ∼ φ means that f/φ → 1.
Note that f = o(φ) implies f = O(φ).
A.5 Types of continuities
Definition A.3 (Uniform continuity). Let (X, d_X) and (Y, d_Y) be two metric spaces, E ⊂ X and F ⊂ Y. A function f : E → F is uniformly continuous on the set E ⊂ X if for every ε > 0, there exists δ > 0 such that
d_Y(f(x), f(y)) < ε whenever x, y ∈ E and d_X(x, y) < δ.
In other words,
(f : E ⊂ (X, d_X) → F ⊂ (Y, d_Y) uniformly continuous on E)
⇔ (∀ε > 0, ∃δ > 0, ∀x, y ∈ E, d_X(x, y) < δ ⇒ d_Y(f(x), f(y)) < ε).
Definition A.4 (Equicontinuous set). A set of functions F = {f} defined on a real interval I is said to be equicontinuous on I if, given any ε > 0, there exists a δ_ε > 0, independent of f ∈ F and also of t, t̃ ∈ I, such that
‖f(t) - f(t̃)‖ < ε whenever |t - t̃| < δ_ε.
An interpretation of equicontinuity is that a sequence of functions is equicontinuous if
all the functions are continuous and they have equal variation over a given neighbourhood.
Equicontinuity of a sequence of functions has important consequences.
Theorem A.5.1. Let {f_n} be an equicontinuous sequence of functions. If f_n(x) → f(x) for every x ∈ X, then the function f is continuous.
Lemma A.5 (Ascoli). On a bounded interval I, let F = {f} be an infinite, uniformly bounded, equicontinuous set of functions. Then F contains a sequence {f_n}, n = 1, 2, ..., which is uniformly convergent on I.
Theorem A.6. Let C(X) be the space of continuous functions on the complete metric space X with values in R^n. If a sequence {f_n} in C(X) is bounded and equicontinuous, then it has a uniformly convergent subsequence.
A.6 Lipschitz function
Definition A.6.1 (Lipschitz function). A map f : U ⊂ R × R^n → R^n is Lipschitz in x if there exists a real number L such that for all (t, x_1) ∈ U and (t, x_2) ∈ U,
‖f(t, x_1) - f(t, x_2)‖ ≤ L ‖x_1 - x_2‖.
In the case of a Lipschitz function as defined above, the constant L is independent of x_1 and x_2: it is given for all of U. A weaker version is that of locally Lipschitz functions.
Definition A.6.2 (Locally Lipschitz function). A map f : U ⊂ R × R^n → R^n is locally Lipschitz in x if, for all (t_0, x_0) ∈ U, there exists a neighborhood V ⊂ U of (t_0, x_0) and a real number L, such that for all (t, x_1), (t, x_2) ∈ V,
‖f(t, x_1) - f(t, x_2)‖ ≤ L ‖x_1 - x_2‖.
In other words, f is locally Lipschitz if the restriction of f to V is Lipschitz.
Thus, a locally Lipschitz function is Lipschitz if it is locally Lipschitz on U with everywhere the same Lipschitz constant L. Another definition of a locally Lipschitz function is as follows.
Definition A.6.3. A function f : U ⊂ R^{n+1} → R^n is locally Lipschitz continuous if for every compact set V ⊂ U, the number
L = sup_{(t,x) ≠ (t,y) ∈ V} ‖f(t, x) - f(t, y)‖ / ‖x - y‖
is finite, with L depending on V.
Property A.6.4. Let f(t, x) be a function. The following properties hold.
i) f Lipschitz ⇒ f uniformly continuous in x.
ii) f uniformly continuous ⇏ f Lipschitz.
iii) f(t, x) has continuous partial derivative ∂f/∂x on a bounded closed domain D ⇒ f is locally Lipschitz on D.
Proof. i) Suppose that f is Lipschitz, i.e., there exists L > 0 such that ‖f(t, x_1) - f(t, x_2)‖ ≤ L‖x_1 - x_2‖. Recall that f is uniformly continuous if for every ε > 0, there exists δ > 0 such that for all x_1, x_2, ‖x_1 - x_2‖ < δ implies that ‖f(t, x_1) - f(t, x_2)‖ < ε. So, given an ε > 0, choose δ < ε/L. Then ‖x_1 - x_2‖ < δ implies that
‖f(t, x_1) - f(t, x_2)‖ ≤ L‖x_1 - x_2‖ < Lδ < L ε/L = ε.
Thus f is uniformly continuous (see Definition A.3).
ii) This is left as an exercise. Consider for example the function f defined by f(x) = 1/ln x on (0, 1/2], f(0) = 0.
iii) Notice that ∂f/∂x continuous on D implies that ‖∂f/∂x‖ is continuous on (the bounded closed domain) D, and thus ‖∂f/∂x‖ is bounded on D. Let
L = sup_{(t,x) ∈ D} ‖ ∂f(t, x)/∂x ‖.
If (t, x_1), (t, x_2) ∈ D, by the mean-value theorem, there exists ξ ∈ [x_1, x_2] such that f(t, x_2) - f(t, x_1) = (x_2 - x_1) ∂f/∂x (t, ξ). As ξ ∈ D, it follows that ‖∂f/∂x(t, ξ)‖ ≤ L, and thus ‖f(t, x_2) - f(t, x_1)‖ ≤ L‖x_2 - x_1‖.
A.7 Gronwall's lemma
The name of Gronwall is associated with a certain number of inequalities. We give a few of them here. We prove the most simple one (as it is an easy proof to remember), as well as the most general one (Lemma A.9). In [12, p. 3], the lemma is stated as follows.
Lemma A.7 (Gronwall). If
i) g(t) is continuous on t_0 ≤ t ≤ t_1,
ii) for t_0 ≤ t ≤ t_1, g(t) satisfies the inequality
0 ≤ g(t) ≤ K + L ∫_{t_0}^{t} g(s) ds,
then
0 ≤ g(t) ≤ K e^{L(t - t_0)},
for t_0 ≤ t ≤ t_1.
Proof. Let v(t) = ∫_{t_0}^{t} g(s) ds. Then v'(t) = g(t), which implies that the assumption on g can be written
0 ≤ v'(t) ≤ K + Lv(t).
The right inequality is a linear differential inequality, with integrating factor exp(-∫_{t_0}^{t} L ds). Also, v(t_0) = 0. Hence,
d/dt ( e^{-L(t - t_0)} v(t) ) ≤ K e^{-L(t - t_0)}
and therefore,
e^{-L(t - t_0)} v(t) ≤ (K/L) ( 1 - e^{-L(t - t_0)} ).
Thus Lv(t) ≤ K ( e^{L(t - t_0)} - 1 ), and g(t) ≤ K + Lv(t) ≤ K e^{L(t - t_0)}.
In [4, p. 128-130], Gronwall's inequality is stated as
Lemma A.8. Suppose that a < b and let g, K and L be nonnegative continuous functions defined on the interval [a, b]. Moreover, suppose that either K is a constant function, or K is differentiable on [a, b] with positive derivative K'. If, for all t ∈ [a, b],
g(t) ≤ K(t) + ∫_a^t L(s) g(s) ds,
then
g(t) ≤ K(t) exp( ∫_a^t L(s) ds ),
for all t ∈ [a, b].
Finally, the most general formulation is in [8, p. 286].
Lemma A.9. If φ and ψ are two nonnegative regulated functions on the interval [0, c], then for every nonnegative regulated function w in [0, c] satisfying the inequality
w(t) ≤ φ(t) + ∫_0^t ψ(s) w(s) ds,
we have, in [0, c],
w(t) ≤ φ(t) + ∫_0^t φ(s) ψ(s) exp( ∫_s^t ψ(ξ) dξ ) ds.    (A.1)
Before proving the result, let us recall that a function f from an interval I ⊂ R to a Banach space F is regulated if it admits in each point of I a left limit and a right limit. In particular, every continuous mapping from I ⊂ R to a Banach space is regulated, as are monotone maps from I to R; see, e.g., [8, Section 7.6].
Proof. Let y(t) = ∫_0^t ψ(s) w(s) ds; y is continuous, and since w(t) ≤ φ(t) + ∫_0^t ψ(s) w(s) ds, it follows that, except maybe at a denumerable number of points of [0, c], we have
y'(t) - ψ(t) y(t) ≤ ψ(t) φ(t)    (A.2)
from [8, Section 8.7]. Let z(t) = y(t) exp(-∫_0^t ψ(s) ds). Then (A.2) is equivalent to
z'(t) ≤ ψ(t) φ(t) exp( -∫_0^t ψ(s) ds ).
Using a mean-value type theorem (see, e.g., [8, Th. 8.5.3]) and using the fact that z(0) = 0, we get, for t ∈ [0, c],
z(t) ≤ ∫_0^t ψ(s) φ(s) exp( -∫_0^s ψ(ξ) dξ ) ds,
whence by definition
y(t) ≤ ∫_0^t ψ(s) φ(s) exp( ∫_s^t ψ(ξ) dξ ) ds,
and inequality (A.1) now follows from the relation w(t) ≤ φ(t) + y(t).
A.8 Fixed point theorems
Definition A.10 (Contraction mapping). Let (X, d) be a metric space, and let S ⊂ X. A mapping f : S → S is a contraction on S if there exists K < 1 such that, for all x, y ∈ S,
d(f(x), f(y)) ≤ K d(x, y).
Every contraction is uniformly continuous on X (from Property A.6.4, since a contraction is Lipschitz).
Theorem A.11 (Contraction mapping principle). Consider the complete metric space (X, d). Every contraction mapping f : X → X has one and only one x ∈ X such that f(x) = x.
Proof. Existence. We use successive approximations. Let x_0 ∈ X. Define x_1 = f(x_0), x_2 = f(x_1), ..., x_n = f(x_{n-1}), ... This defines an infinite sequence of elements of X. As f is a contraction,
d(x_2, x_1) = d(f(x_1), f(x_0)) ≤ K d(x_1, x_0).
Similarly,
d(x_3, x_2) = d(f(x_2), f(x_1)) ≤ K d(x_2, x_1) ≤ K^2 d(x_1, x_0).
Iterating,
d(x_{n+1}, x_n) ≤ K^n d(x_1, x_0).
Therefore,
d(x_{n+p}, x_n) ≤ d(x_{n+p}, x_{n+p-1}) + d(x_{n+p-1}, x_{n+p-2}) + ... + d(x_{n+1}, x_n)
             ≤ (K^{p-1} + K^{p-2} + ... + K + 1) K^n d(x_1, x_0)
             ≤ ( K^n / (1 - K) ) d(x_1, x_0).
Therefore d(x_{n+p}, x_n) tends to 0 as n → ∞, so (x_n) is a Cauchy sequence. Since X is a complete space, it follows that (x_n) admits a limit ℓ. As lim_{n→∞} x_n = ℓ, it follows from continuity of f that x_{n+1} = f(x_n) tends to f(ℓ). But x_{n+1} also converges to ℓ, so f(ℓ) = ℓ, that is, ℓ is a fixed point of f.
Uniqueness. Suppose ℓ_1 and ℓ_2 are two fixed points. Then there must hold that
d(ℓ_1, ℓ_2) ≤ K d(ℓ_1, ℓ_2) < d(ℓ_1, ℓ_2),
if d(ℓ_1, ℓ_2) ≠ 0. Therefore d(ℓ_1, ℓ_2) = 0, and ℓ_1 = ℓ_2.
In the case that f : S ⊂ X → S, the theorem takes the form of Theorem A.12. Closedness of S is an implicit requirement, since the set S in the complete metric space X is closed if, and only if, S is complete.
Theorem A.12. Let S be a closed subset of the complete metric space (X, d). Every contraction mapping f : S → S has one and only one x ∈ S such that f(x) = x.
Theorem A.8.1. Consider a mapping f : X → X, where X is a complete metric space. Suppose that f is not necessarily a contraction, but that one of the iterates f^k of f is a contraction. Then f has a unique fixed point.
A.9 Jordan normal form
Theorem A.9.1. Every complex n × n matrix A is similar to a matrix of the form
J = [ J_0  0  ...  0 ; 0  J_1  ...  0 ; ... ; 0  0  ...  J_s ]
where J_0 is a diagonal matrix with diagonal λ_1, ..., λ_q, and, for i = 1, ..., s,
J_i = [ λ_{q+i}  1  0  ...  0 ; 0  λ_{q+i}  1  ...  0 ; ... ; 0  0  ...  λ_{q+i}  1 ; 0  0  ...  0  λ_{q+i} ]
The λ_j, j = 1, ..., q + s, are the characteristic roots of A, which need not all be distinct. If λ_j is a simple root, then it occurs in J_0, and therefore, if all the roots are distinct, A is similar to the diagonal matrix
J = [ λ_1  0  ...  0 ; 0  λ_2  ...  0 ; ... ; 0  0  ...  λ_n ]
An algorithm to compute the Jordan canonical form of an n × n matrix A [15].
i) Compute the eigenvalues of A. Let λ_1, ..., λ_m be the distinct eigenvalues of A with multiplicities n_1, ..., n_m, respectively.
ii) Compute n_1 linearly independent generalized eigenvectors of A associated with λ_1 as follows. Compute
(A - λ_1 E_n)^i
for i = 1, 2, ... until the rank of (A - λ_1 E_n)^k is equal to the rank of (A - λ_1 E_n)^{k+1}. Find a generalized eigenvector of rank k, say u. Define u_i = (A - λ_1 E_n)^{k-i} u, for i = 1, ..., k. If k = n_1, proceed to step iii). If k < n_1, find another linearly independent generalized eigenvector with rank k. If this is not possible, try k - 1, and so forth, until n_1 linearly independent generalized eigenvectors are determined. Note that if rank(A - λ_1 E_n) = r, then there are in total (n - r) chains of generalized eigenvectors associated with λ_1.
iii) Repeat step ii) for λ_2, ..., λ_m.
iv) Let u_1, ..., u_k, ... be the new basis. Observe that
Au_1 = λ_1 u_1,
Au_2 = u_1 + λ_1 u_2,
...
Au_k = u_{k-1} + λ_1 u_k.
Thus in the new basis, A has the block diagonal representation
J = diag( J(λ_1), J(λ_2), ... ),  with  J(λ_i) = [ λ_i  1  ...  0 ; ... ; 0  ...  λ_i  1 ; 0  ...  0  λ_i ],
where each chain of generalized eigenvectors generates a Jordan block whose order
equals the length of the chain.
v) The similarity transformation which yields J = Q^{-1} A Q is given by Q = [u_1, ..., u_k, ...].
A.10 Matrix exponentials
Let A ∈ M_n(K). The exponential of A is defined by
e^A = Σ_{n=0}^{∞} A^n / n!    (A.3)
We have e^A ∈ M_n(K). Also, ||| (1/n!) A^n ||| ≤ (1/n!) |||A|||^n, so the series Σ (1/n!) A^n is absolutely convergent in M_n(K). Thus e^A is well defined.
Property A.10.1. Let A, B ∈ M_n(K). Then
i) |||e^A||| ≤ e^{|||A|||}.
ii) If A and B commute (i.e., AB = BA), then e^{A+B} = e^A e^B.
iii) e^A = I + Σ_{n=1}^{∞} (1/n!) A^n.
iv) e^0 = I.
v) e^A is invertible with inverse e^{-A}.
vi) e^{At} is differentiable, and (d/dt) e^{At} = A e^{At}.
vii) If P and T are linear transformations on K^n, and S = PTP^{-1}, then e^S = P e^T P^{-1}.
Proof. v) For any matrix A, we have A(-A) = -AA = (-A)A, so A and -A commute. Therefore, e^A e^{-A} = e^{A-A} = e^0 = I. Therefore, for any A, e^A is invertible with inverse e^{-A}.
There are several shortcuts to computing the exponential of a matrix A:
1) If A is nilpotent, that is, if there exists q ∈ N such that A^q = 0, then e^A = Σ_{k=0}^{q-1} A^k / k!. A nilpotent matrix has several interesting properties. A is nilpotent if and only if all its eigenvalues are zero. A nilpotent matrix has zero determinant and trace.
2) If A is such that there exists q ∈ N such that A^q = A, then this can sometimes be exploited to simplify the computation of e^A.
3) Any matrix A can be written in the form A = D + N, where D is diagonalizable, N is nilpotent, and D and N commute. Therefore, e^A = e^{D+N} = e^D e^N.
4) Other cases require the use of the Jordan normal form (explained below).
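Shortcut 3) in action on a hand-picked matrix (an assumption for illustration): A = [[2, 1], [0, 2]] splits as D + N with D = 2I and N = [[0, 1], [0, 0]] nilpotent, and D, N commute, so e^A = e^2 (I + N). The truncated defining series agrees:

```python
import math

# e^A for A = [[2, 1], [0, 2]] = D + N, D = 2I, N nilpotent (N^2 = 0):
# closed form e^A = e^2 (I + N), checked against the power series (A.3).
def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [0.0, 2.0]]
closed_form = [[math.exp(2.0), math.exp(2.0)], [0.0, math.exp(2.0)]]

# Truncated power series sum_{n=0}^{30} A^n / n!
series = [[1.0, 0.0], [0.0, 1.0]]
term = [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 31):
    term = [[mul(term, A)[i][j] / n for j in range(2)] for i in range(2)]
    series = [[series[i][j] + term[i][j] for j in range(2)] for i in range(2)]

max_err = max(abs(series[i][j] - closed_form[i][j])
              for i in range(2) for j in range(2))
```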
Use of the Jordan form to compute the exponential of a matrix. Suppose that J = P^{-1} A P is the Jordan form of the matrix A. For a block diagonal matrix
B = [ B_1  0 ; ... ; 0  B_s ],
we have, for k = 0, 1, ...,
B^k = [ B_1^k  0 ; ... ; 0  B_s^k ].
Therefore, for t ∈ R,
e^{Jt} = [ e^{J_0 t}  0 ; ... ; 0  e^{J_s t} ],
with
e^{J_0 t} = [ e^{λ_1 t}  0 ; ... ; 0  e^{λ_q t} ].
Now, since J_i = λ_{q+i} I_i + N_i, with N_i a nilpotent matrix, and since I_i and N_i commute, there holds
e^{J_i t} = e^{λ_{q+i} t} e^{N_i t}.
For k ≥ n_i, N_i^k = 0, so
e^{t J_i} = e^{λ_{q+i} t} [ 1  t  ...  t^{n_i - 1}/(n_i - 1)! ; 0  1  ...  t^{n_i - 2}/(n_i - 2)! ; ... ; 0  0  ...  1 ].
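The closed form for e^{tJ_i} can be checked against the defining series on a single 3 × 3 Jordan block (λ = -1 and t = 0.5 are arbitrary test values for this sketch):

```python
import math

# e^{tJ} for a 3x3 Jordan block J = lambda*I + N, compared with the
# truncated series sum_{n=0}^{40} (tJ)^n / n!.
n_dim, lam, t = 3, -1.0, 0.5
J = [[lam if i == j else (1.0 if j == i + 1 else 0.0) for j in range(n_dim)]
     for i in range(n_dim)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n_dim)) for j in range(n_dim)]
            for i in range(n_dim)]

series = [[float(i == j) for j in range(n_dim)] for i in range(n_dim)]
term = [[float(i == j) for j in range(n_dim)] for i in range(n_dim)]
tJ = [[t * J[i][j] for j in range(n_dim)] for i in range(n_dim)]
for n in range(1, 41):
    term = [[mul(term, tJ)[i][j] / n for j in range(n_dim)] for i in range(n_dim)]
    series = [[series[i][j] + term[i][j] for j in range(n_dim)] for i in range(n_dim)]

# Closed form e^{tJ} = e^{lambda t} [ 1 t t^2/2 ; 0 1 t ; 0 0 1 ]
e = math.exp(lam * t)
closed_form = [[e, e * t, e * t * t / 2.0],
               [0.0, e, e * t],
               [0.0, 0.0, e]]
max_err = max(abs(series[i][j] - closed_form[i][j])
              for i in range(n_dim) for j in range(n_dim))
```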
A.11 Matrix logarithms
Theorem A.11.1. Suppose that M is a nonsingular n × n matrix. Then there is an n × n matrix B (possibly complex), such that e^B = M. If, additionally, M ∈ M_n(R), then there is B ∈ M_n(R) such that e^B = M^2.
Let u ∈ C; we have, for |u| < 1,
1/(1 + u) = 1 - u + u^2 - u^3 + ...
and, for |u| < 1,
ln(1 + u) = u - u^2/2 + u^3/3 - u^4/4 + ... = Σ_{k=1}^{∞} (-1)^{k+1} u^k / k.
For any z ∈ C,
exp(z) = 1 + z + z^2/2! + z^3/3! + ... = Σ_{n=0}^{∞} z^n / n!.
Therefore, for |u| < 1, u ∈ C,
1 + u = exp( ln(1 + u) ) = Σ_{n=0}^{∞} (1/n!) ( Σ_{k=1}^{∞} (-1)^{k+1} u^k / k )^n.
Suppose that
B = (ln λ) I + Σ_{k=1}^{∞} ( (-1)^{k+1} / k ) ( 1/λ^k ) Z^k,
where
Z = [ 0  1  0  ...  0 ; ... ; 0  ...  0  1 ; 0  ...  0  0 ]
is an m × m matrix. Since Z is nilpotent (Z^N = 0 for all N ≥ m), the above sum is finite. Observe that
exp(B) = exp( (ln λ) I ) exp( Σ_{k=1}^{∞} ( (-1)^{k+1} / k ) ( Z/λ )^k )
       = λ exp( ln( I + Z/λ ) )
       = λ ( I + Z/λ )
       = λ I + Z
       = J.
We say ln J = B.
A.12 Spectral theorems
When studying systems of differential equations, it is very important to be able to compute the eigenvalues of a matrix, for instance in order to study the local asymptotic stability of an equilibrium point. This can be a very difficult problem, which often becomes intractable even for systems of low dimensionality. However, even if it is not possible to compute an explicit solution, it is often possible to use spectrum localization theorems. We here give two of the most famous ones: the Routh-Hurwitz criterion and the Gershgorin theorem.
Let A be an n × n matrix, and denote its elements by (a_ij). The set of all eigenvalues of A is called its spectrum, and is denoted Sp(A).
Theorem A.12.1 (Routh-Hurwitz). If n = 2, suppose that det A > 0 and tr A < 0. Then A has only eigenvalues with negative real part.
Theorem A.12.2 (Gershgorin). Let
R_j = Σ_{i=1, i≠j}^{n} |a_ij|.
Let λ ∈ Sp(A). Then
λ ∈ ∪_j { z : |z - a_jj| ≤ R_j }.
Gershgorin's theorem is extremely helpful in certain cases. Suppose that A is strictly diagonally dominant, i.e., |a_jj| > Σ_{i=1, i≠j}^{n} |a_ij|. Then A has no eigenvalues with zero real part. Also, the number of eigenvalues with negative real part is equal to the number of negative entries on the diagonal of A, and conversely for eigenvalues with positive real part and the number of positive entries on the diagonal of A.
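A minimal sketch of this use of the theorem (the matrix is an arbitrary strictly diagonally dominant example, not one from the text): since every Gershgorin disc lies in the open left half-plane, every eigenvalue has negative real part.

```python
# Gershgorin discs (row version: center a_ii, radius = off-diagonal row sum)
# for a strictly diagonally dominant matrix with negative diagonal entries.
A = [[-5.0, 1.0, 0.0],
     [1.0, -4.0, 1.0],
     [0.0, 2.0, -6.0]]
n = len(A)

discs = [(A[i][i], sum(abs(A[i][j]) for j in range(n) if j != i))
         for i in range(n)]
# Every disc in the open left half-plane => all eigenvalues have Re < 0
all_left_half_plane = all(center + radius < 0 for center, radius in discs)
```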
Appendix B
Problem sheets
Contents
Homework sheet 1 (2003)
Homework sheet 2 (2003)
Homework sheet 3 (2003)
Homework sheet 4 (2003)
Final examination (2003)
Homework sheet 1 (2006)
McMaster University Math4G03/6G03
Fall 2003
Homework Sheet 1
Exercise 1.1 We consider here the equation
x'(t) = -λ x(t) + f(x(t))    (B.1)
where λ ∈ R is constant and f is continuous on R.
i) Show that x is a solution of (B.1) on R^+ if, and only if,
x is continuous on R^+ and
x(t) = e^{-λt} x(0) + e^{-λt} ∫_0^t e^{λs} f(x(s)) ds,  ∀t ∈ R^+    (B.2)
Suppose now that λ > 0 and that f is such that
∃a, k ∈ R, a > 0, 0 < k < λ : ∀u ∈ R, |u| ≤ a ⇒ |f(u)| ≤ k|u|    (B.3)
Part I. Suppose that there exists a solution x of (B.1), defined on R^+ and satisfying the inequality
|x(t)| ≤ a,  ∀t ∈ R^+    (B.4)
i) Prove the inequality
|x(t)| ≤ |x(0)| e^{(k-λ)t},  ∀t ∈ R^+
[Hint: Use Gronwall's lemma with the function g(t) = e^{λt} |x(t)|.]
ii) Deduce that x admits the limit 0 as t → ∞.
Part II.
i) Show that any solution of (B.1) on R^+ that satisfies |x(0)| < a satisfies (B.4).
ii) Deduce from the preceding questions the two following properties.
a) Any solution x of (B.1) on R^+ satisfying the condition |x(0)| < a admits the limit 0 when t → ∞.
b) The function x ≡ 0 is the unique solution of (B.1) on R^+ such that x(0) = 0.
Part III. Application. Show that, for λ > 1, all solutions of the equation
x' = -λ x + ln( 1 + x^2 )
tend to zero when t → ∞.
Exercise 1.2 Let f : [0, +) R, f C
1
, and a R. We consider the initial value
problems
x

(t) +ax(t) = f(t), t 0


x(0) = 0
(B.5)
and
x

(t) +ax(t) = f

(t), t 0
x(0) = 0
(B.6)
As these equations are linear, the initial value problems (B.5) and (B.6) admit unique solu-
tions. We denote the solution to (B.5) and the solution to (B.6). Find a necessary and
sucient condition on f such that

= .
[Hint: Use a variation of constants formula].
Exercise 1.3 Let f : R^n → R^n be continuous. Consider the differential equation

    x'(t) = f(x(t))    (B.7)

i) Let x be a solution of (B.7) defined on a bounded interval [0, β), with β > 0. Suppose that t ↦ f(x(t)) is bounded on [0, β).

a) Consider the sequence

    z_{β,n} = x(β - 1/n),  n ∈ N*

Show that (z_{β,n})_{n∈N*} is a Cauchy sequence.

b) Deduce that there exists x_β ∈ R^n such that

    ‖x(t) - x_β‖ ≤ M |t - β|

with M = sup_{t∈[0,β)} ‖f(x(t))‖.

c) Show that x admits a finite limit when t → β, t < β.

ii) Show that there exists an extension of x that is a solution of (B.7) on the closed interval [0, β].
McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 1 Solutions

Solution Exercise 1  i) Let us first show that x solution of (B.1) implies (B.2).
If x is a solution of (B.1) on R+, then x is differentiable on R+, and so x is continuous on R+. Furthermore,

    x'(s) = -λx(s) + f(x(s)),  for all s ∈ R+
⇒   e^{λs} x'(s) = -λ e^{λs} x(s) + e^{λs} f(x(s))
⇒   ∫_0^t e^{λs} x'(s) ds = -λ ∫_0^t e^{λs} x(s) ds + ∫_0^t e^{λs} f(x(s)) ds
⇒   [e^{λs} x(s)]_0^t - λ ∫_0^t e^{λs} x(s) ds = -λ ∫_0^t e^{λs} x(s) ds + ∫_0^t e^{λs} f(x(s)) ds
⇒   e^{λt} x(t) - x(0) = ∫_0^t e^{λs} f(x(s)) ds,  for all t ∈ R+
⇒   x(t) = e^{-λt} x(0) + e^{-λt} ∫_0^t e^{λs} f(x(s)) ds,  for all t ∈ R+

Let us now prove the converse, i.e., that if a function x satisfies (B.2), then it is a solution to the IVP (B.1). Since x and f are continuous on R+, t ↦ e^{λt} f(x(t)) is continuous on R+. This implies that

    t ↦ ∫_0^t e^{λs} f(x(s)) ds

is differentiable on R+, and, differentiating the expression for x(t) as given in (B.2),

    x'(t) = -λ e^{-λt} x(0) - λ e^{-λt} ∫_0^t e^{λs} f(x(s)) ds + e^{-λt} e^{λt} f(x(t))
          = -λ [e^{-λt} x(0) + e^{-λt} ∫_0^t e^{λs} f(x(s)) ds] + f(x(t))

where the bracketed quantity is x(t) by (B.2). And thus

    x'(t) = -λx(t) + f(x(t))

which implies that x is a solution to (B.1).
Part I. We now assume that (B.3) is also satisfied, and that there exists a solution x on R+ satisfying (B.4).

i) If x is a solution of (B.1), then

    x(t) = e^{-λt} x(0) + e^{-λt} ∫_0^t e^{λs} f(x(s)) ds

This implies that

    |x(t)| ≤ e^{-λt} |x(0)| + e^{-λt} ∫_0^t e^{λs} |f(x(s))| ds

From (B.3) and multiplying both sides by e^{λt},

    e^{λt} |x(t)| ≤ |x(0)| + ∫_0^t k e^{λs} |x(s)| ds

We use Gronwall's Lemma (Lemma A.2) with g(t) = e^{λt}|x(t)|, K(t) = |x(0)| and L(t) = k. Thus,

    e^{λt} |x(t)| ≤ |x(0)| exp(∫_0^t k ds) = |x(0)| e^{kt}

and so finally, for all t ∈ R+, we have

    |x(t)| ≤ |x(0)| e^{t(k-λ)} = |x(0)| e^{-(λ-k)t}

ii) From (B.3), 0 < k < λ, hence λ - k > 0, which implies, together with the result of the previous question, that lim_{t→∞} x(t) = 0.
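The decay estimate of Part I can also be observed numerically for a concrete f. The sketch below is illustrative only: it uses f(u) = k sin(u), which satisfies |f(u)| ≤ k|u| for every u (so (B.3) holds for any a > 0), and a small safety factor absorbs the Euler discretization error:

```python
import math

lam, k = 2.0, 1.0       # 0 < k < lam, as in (B.3)
x0, dt = 1.0, 1e-3

def x_euler(target):
    """Explicit Euler for x' = -lam*x + k*sin(x), from x(0) = x0, up to t = target."""
    x = x0
    for _ in range(int(round(target / dt))):
        x += dt * (-lam * x + k * math.sin(x))
    return x

for t in (1.0, 2.0, 5.0):
    # bound from Part I: |x(t)| <= |x(0)| e^{-(lam - k) t}
    assert abs(x_euler(t)) <= abs(x0) * math.exp(-(lam - k) * t) * 1.05
```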
Part II.

i) Let us suppose that x is a solution of (B.1) such that |x(0)| < a. Let A = {t ≥ 0 : |x(t)| ≤ a}. Let us show that A = [0, +∞).

First of all, notice that |x(0)| < a and x continuous on R+ imply that there exists t₀ ∈ R+\{0} such that, for all t ∈ [0, t₀], |x(t)| ≤ a. Indeed, suppose this were not the case. Then, for all ε > 0, there exists t_ε ∈ [0, ε] such that |x(t_ε)| > a. This means that for all n ∈ N\{0}, there exists u_n ∈ [0, 1/n] such that |x(u_n)| > a. As u_n → 0 as n → ∞ and x is continuous, |x(0)| ≥ a, since taking the limit turns strict inequalities into loose ones. This is a contradiction with |x(0)| < a. Thus [0, t₀] ⊂ A.

Let us now show that for all t₁ ∈ A, [0, t₁] ⊂ A, i.e., A is an interval. First, if t₁ ≤ t₀ then [0, t₁] ⊂ [0, t₀] ⊂ A. Secondly, in the case t₁ > t₀, suppose that [0, t₁] ⊄ A. This means that there exists τ ∈ [0, t₁] such that τ ∉ A; more precisely, τ ∈ (t₀, t₁), since [0, t₀] ⊂ A and t₁ ∈ A. Let τ̄ be the smallest such τ, that is, τ̄ = inf{t ∈ (t₀, t₁); t ∉ A}. Note that τ̄ can also be defined as τ̄ = sup{t ∈ (t₀, t₁); t ∈ A}. Thus

    τ̄ = inf{t ∈ (t₀, t₁); |x(t)| > a} = sup{t ∈ (t₀, t₁); |x(t)| ≤ a}

Since x is continuous, this implies that |x(τ̄)| ≥ a and |x(τ̄)| ≤ a, hence |x(τ̄)| = a. But with its sup definition this implies that τ̄ = t₁, whereas with its inf definition this implies that τ̄ < t₁, a contradiction.

Hence A is an interval. We now want to show that it is an unbounded interval. Assume it is bounded, and let c = sup(A) < ∞. Since x is continuous, A = {t ≥ 0 : |x(t)| ≤ a} implies that c ∈ A. Thus A = [0, c]. Therefore, for all t ∈ [0, c], |x(t)| ≤ a, so, by Part I, i),

    |x(t)| ≤ |x(0)| e^{-(λ-k)t} ⇒ |x(t)| ≤ |x(0)| < a ⇒ |x(c)| < a

Since x is continuous on R+, there exists t > c such that |x(t)| ≤ a, and thus there exists t > c such that t ∈ A, which is a contradiction. Therefore A = [0, ∞), and we can conclude that ∀t ∈ R+, |x(t)| ≤ a.

Another proof, contributed by Guihong Fan, proceeds by contradiction, using the fact that (B.1) is an autonomous scalar equation. Notice that equation (B.1) can be written in the form x' = g(x), with g(u) = -λu + f(u). Thus, since this mapping is continuous, we can apply Theorem 1.1.8 on the monotonicity of the solutions to an autonomous scalar differential equation. Assume that x(t) is a solution of (B.1) on R+ that satisfies |x(0)| < a, but that (B.4) is violated. Then, since the solution x(t) is monotone, there exists t₀ ∈ R+ such that one of the following holds:

i) x(t₀) = a and x'(t₀) > 0;
ii) x(t₀) = -a and x'(t₀) < 0.

Let us treat case i). From (B.3), it follows that |f(x(t₀))| = |f(a)| ≤ ka. Therefore, using (B.1),

    x'(t₀) = -λx(t₀) + f(x(t₀)) = -λa + f(a) ≤ -λa + ka = -(λ-k)a < 0

since λ > k. This is a contradiction with x'(t₀) > 0. Case ii) is treated similarly, and thus it follows that (B.4) holds for all t ∈ R+.
ii) a) If |x(0)| < a, then we have just proved that for all t ∈ R+, |x(t)| ≤ a. From Part I, i), this implies that |x(t)| ≤ |x(0)| e^{-(λ-k)t}. Therefore, since λ > k, lim_{t→∞} x(t) = 0.

ii) b) To show that x ≡ 0 is the only solution of (B.1) such that x(0) = 0, we first show that x ≡ 0 is a solution of (B.1). Condition (B.3) applied to 0 states that |0| < a implies |f(0)| ≤ k|0| = 0, so f(0) = 0 and x ≡ 0 is indeed a solution.

Uniqueness: let φ be a solution of (B.1) such that φ(0) = 0. This implies that |φ(0)| < a, and as a consequence, it follows from Part I, i) that for all t ∈ R+, |φ(t)| ≤ |φ(0)| e^{-(λ-k)t} = 0, hence for all t ∈ R+, φ(t) = 0.

Part III. All solutions of the nonlinear equation x' = -λx + ln(1 + x²) tend to zero as t → ∞, when λ > 1. Indeed, let f(u) = ln(1 + u²). We have

    |f'(u)| = 2|u| / (1 + u²) ≤ 1

since (|u| - 1)² = u² + 1 - 2|u| ≥ 0 (and hence 2|u|/(1 + u²) ≤ 1). Then, |f(u) - f(0)| ≤ |u - 0| implies that |f(u)| ≤ |u|, for all u ∈ R. We thus have k = 1 < λ; the hypotheses of the exercise are satisfied and all solutions of the equation tend to zero.
Solution Exercise 2 Using a variation of constants formula, we obtain

    φ(t) = e^{-at} c₁ + ∫_0^t e^{-a(t-s)} f(s) ds,  t ≥ 0

and

    ψ(t) = e^{-at} c₂ + ∫_0^t e^{-a(t-s)} f'(s) ds,  t ≥ 0

Let us show this for the solution φ of (B.5); the solution ψ of (B.6) is obtained using exactly the same method. The solution to a linear differential equation consists of the general solution to the homogeneous equation together with a particular solution to the nonhomogeneous equation. Here, the homogeneous equation is

    x' = -ax

and basic integration yields the general solution x(t) = c₁ e^{-at}. To obtain a particular solution to the nonhomogeneous equation (B.5), we use a variation of constants formula: assume that the constant in the solution x(t) = c e^{-at} is a function of time, hence

    x(t) = c(t) e^{-at}

Taking the derivative of this expression, we obtain

    x' = c' e^{-at} - ac e^{-at}

Substituting both this expression and x = c e^{-at} into (B.5), we get

    c' e^{-at} - ac e^{-at} + ac e^{-at} = f(t),  t ≥ 0

and hence c' = e^{at} f(t). Integrating both sides of this expression gives

    c(t) = ∫_0^t e^{as} f(s) ds

so that a particular solution to (B.5) is given by

    x(t) = c(t) e^{-at} = (∫_0^t e^{as} f(s) ds) e^{-at} = ∫_0^t e^{-a(t-s)} f(s) ds

Summing the general solution to the homogeneous equation with this last expression gives the desired result.

Using the initial conditions yields φ(0) = c₁ = 0 and ψ(0) = c₂ = 0. Hence the system under consideration is

    φ(t) = ∫_0^t e^{-a(t-s)} f(s) ds,  t ≥ 0
    ψ(t) = ∫_0^t e^{-a(t-s)} f'(s) ds,  t ≥ 0

Recall that if g(t) = ∫_{t₀}^t h(s, t) ds, then for all t₀,

    g'(t) = h(t, t) + ∫_{t₀}^t (∂h/∂t)(s, t) ds

This implies that

    φ'(t) = f(t) - a ∫_0^t e^{-a(t-s)} f(s) ds

It follows, integrating by parts in the last step, that

    φ'(t) = ψ(t)
⟺   f(t) - a ∫_0^t e^{-a(t-s)} f(s) ds = ∫_0^t e^{-a(t-s)} f'(s) ds
⟺   f(t) - a ∫_0^t e^{-a(t-s)} f(s) ds = [e^{-a(t-s)} f(s)]_{s=0}^{s=t} - a ∫_0^t e^{-a(t-s)} f(s) ds
⟺   f(t) = [e^{-a(t-s)} f(s)]_{s=0}^{s=t}
⟺   f(t) = f(t) - e^{-at} f(0)
⟺   f(0) = 0
Solution Exercise 3 i) a) For a vector-valued function, there is no mean value theorem with an equality sign. But the following holds (see, e.g., [3, p. 44], [9, p. 209] or Rudin, p. 113).

Theorem B.1.3. Let f : [a, b] → F be a continuous mapping, with F a Banach space. Suppose that f admits a right derivative f'_r(x) for all x ∈ (a, b), and that ‖f'_r(x)‖ ≤ k, where k ≥ 0 is a constant. Then

    ‖f(b) - f(a)‖ ≤ k(b - a)

and more generally, for all x₁, x₂ ∈ [a, b],

    ‖f(x₂) - f(x₁)‖ ≤ k|x₂ - x₁|

Let us denote M = sup_{t∈[0,β)} ‖f(x(t))‖, and let n, p > N, where N is sufficiently large that β - 1/(n+p) ∈ [0, β) and β - 1/n ∈ [0, β). Using Theorem B.1.3, we obtain

    ‖x(β - 1/(n+p)) - x(β - 1/n)‖ ≤ M |1/n - 1/(n+p)|
    ‖z_{β,n+p} - z_{β,n}‖ ≤ M |1/n - 1/(n+p)|

So the sequence (z_{β,n})_{n∈N*} is a Cauchy sequence.

i) b) For all t ∈ [0, β),

    ‖x(t) - x(β - 1/n)‖ ≤ |∫_{β-1/n}^t ‖f(x(s))‖ ds| ⇒ ‖x(t) - z_{β,n}‖ ≤ M |t - β + 1/n|    (B.8)

Since the sequence (z_{β,n})_{n∈N*} is a Cauchy sequence, there exists x_β = lim_{n→∞} z_{β,n}. Thus taking n → ∞ in (B.8) gives

    ‖x(t) - x_β‖ ≤ M |t - β|

i) c) According to i) b), we have

    lim_{t→β, t<β} ‖x(t) - x_β‖ = 0

Hence lim_{t→β, t<β} x(t) = x_β.

ii) Let

    z(t) = x(t) if t ∈ [0, β);  z(t) = x_β if t = β

Let us show that z is a solution of (B.7) on [0, β]: z is a solution if z is continuous on [0, β] and satisfies the integral equation

    z(t) = z(t₀) + ∫_{t₀}^t f(z(s)) ds

for all t ∈ [0, β], and with an arbitrary t₀ ∈ [0, β]. We know by construction that z is continuous (since lim_{t→β} x(t) = x_β).

Let t ∈ [0, β); then

    z(t) = x(t) = x(t₀) + ∫_{t₀}^t f(x(s)) ds

because x is a solution. For t = β,

    z(β) = x_β = lim_{t→β, t<β} x(t) = lim_{t→β, t<β} [x(t₀) + ∫_{t₀}^t f(x(s)) ds] = lim_{t→β, t<β} [z(t₀) + ∫_{t₀}^t f(z(s)) ds]

since for t < β, we have z(t) = x(t). Furthermore, t ↦ f(z(t)) is bounded on [0, β), which implies that

    ∫_{t₀}^β f(z(s)) ds = lim_{t→β} ∫_{t₀}^t f(z(s)) ds

So

    z(β) = z(t₀) + ∫_{t₀}^β f(z(s)) ds

So z is a solution to (B.7) on [0, β].
McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 2

Exercise 2.1 Consider the system

    ( x₁ )'   ( a  b ) ( x₁ )
    ( x₂ )  = ( c  d ) ( x₂ )

where a, b, c, d ∈ R are constants such that ad - bc = 0. Discuss all possible behaviours of the solutions and sketch the corresponding phase plane trajectories.

Exercise 2.2 Let A be a constant n × n matrix.

i) Show that ‖e^A‖ ≤ e^{‖A‖}.
ii) Show that det e^A = e^{tr A}.
iii) How should λ be chosen so that lim_{t→∞} e^{-λt} e^{At} = 0?

Exercise 2.3 Let X(t) be a fundamental matrix for the system x' = A(t)x, where A(t) is an n × n matrix with continuous entries on R. What conditions on A(t) and C guarantee that CX(t) is a fundamental matrix, where C is a constant matrix?

Exercise 2.4 Consider the system

    x' = A(t)x    (B.9)

where A(t) is a continuous n × n matrix on R, and A(t + ω) = A(t), ω > 0.

i) Show that T(ω), the set of ω-periodic solutions of (B.9), is a vector space.

ii) Let f be a continuous n × 1 matrix function on R with f(t + ω) = f(t). Show that, for the system

    x' = A(t)x + f(t)    (B.10)

the following conditions are equivalent.

a) System (B.10) has a unique ω-periodic solution;
b) [X^{-1}(ω) - X^{-1}(0)] is nonsingular;
c) dim T(ω) = 0.
McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 2 Solutions

Solution Exercise 1 The characteristic polynomial of the matrix

    A = ( a  b
          c  d )

is (a - λ)(d - λ) - bc = λ² - (a + d)λ + ad - bc = λ² - (a + d)λ, since ad - bc = 0. Thus A has the eigenvalues 0 and a + d. Hence, in coordinates adapted to the eigenvectors, solutions are of the form

    x₁ = c₁,  x₂ = c₂ e^{(a+d)t}

with c₁, c₂ ∈ R, and there are three possibilities.

If a + d < 0, then all points on the x₁ axis are equilibria, and all trajectories in the (x₁, x₂)-plane tend to them parallel to the x₂ axis.

If a + d = 0, then all points are equilibria.

If a + d > 0, then the x₁ axis is invariant, and any solution that does not start on the x₁ axis diverges to infinity parallel to the x₂ axis.
Solution Exercise 2 i) We have e^A = Σ_{k=0}^∞ A^k/k!. Taking the norm,

    ‖e^A‖ = ‖Σ_{k=0}^∞ A^k/k!‖ ≤ Σ_{k=0}^∞ ‖A‖^k/k! = e^{‖A‖}

by the triangle inequality and the submultiplicativity of the norm (‖A^k‖ ≤ ‖A‖^k).

ii) Let J be the Jordan form of A, i.e., there exists P nonsingular such that P^{-1}AP = J, where J has the form diag(B_j)_{j=1,...,p}, with B_j the Jordan block corresponding to the eigenvalue λ_j of multiplicity m_j. Then, since A and J are similar,

    det e^A = det e^J = det(e^{B_1}) ··· det(e^{B_p}) = e^{λ₁ m₁} ··· e^{λ_p m_p} = exp(Σ_{k=1}^p λ_k m_k) = e^{tr A}

iii) We can write

    lim_{t→∞} e^{-λt} e^{At} = lim_{t→∞} e^{(A-λI)t}

Let Sp(A) be the spectrum of A, i.e., the set of eigenvalues of A. Then μ ∈ Sp(A) if, and only if, μ - λ ∈ Sp(A - λI). We have lim_{t→∞} e^{(A-λI)t} = 0 if, and only if, ℜ(ν) < 0 for all ν ∈ Sp(A - λI), i.e., ℜ(μ) - λ < 0 for all μ ∈ Sp(A), i.e., ℜ(μ) < λ for all μ ∈ Sp(A). Hence, choosing λ greater than the maximal real part of the eigenvalues of A ensures that lim_{t→∞} e^{-λt} e^{At} = 0.
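The identity det e^A = e^{tr A} is easy to test numerically. The following sketch (the helper names and the sample matrix are arbitrary) computes e^A for a 2 × 2 matrix by its truncated power series:

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """Truncated power series for exp(A), adequate for small 2x2 matrices."""
    E = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]   # running term A^k / k!
    for k in range(1, terms):
        P = mat_mul(P, A)
        P = [[P[i][j] / k for j in range(2)] for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

A = [[0.3, -1.2], [0.7, 0.4]]
E = mat_exp(A)
det_E = E[0][0] * E[1][1] - E[0][1] * E[1][0]
assert abs(det_E - math.exp(0.3 + 0.4)) < 1e-9   # det e^A = e^{tr A}
```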
Solution Exercise 3 We have

    (CX(t))' = CX'(t) = CA(t)X(t)

For CX(t) to be a fundamental matrix for x' = A(t)x requires that (CX(t))' = A(t)(CX(t)), so C and A(t) must commute. Also, a fundamental matrix must be nonsingular. As X(t) is a fundamental matrix, it is nonsingular, so C must be nonsingular for CX(t) to be nonsingular. To conclude: if X(t) is a fundamental matrix for the system x' = A(t)x, then CX(t) is a fundamental matrix for x' = A(t)x if C is nonsingular and commutes with A(t) for all t.
Solution Exercise 4 i) Let x₁, x₂ ∈ T(ω) and α₁, α₂ ∈ R. Then

    (α₁x₁ + α₂x₂)' = α₁x₁' + α₂x₂' = α₁A(t)x₁ + α₂A(t)x₂ = A(t)(α₁x₁ + α₂x₂)

and α₁x₁ + α₂x₂ is ω-periodic as a linear combination of ω-periodic functions. Therefore α₁x₁ + α₂x₂ ∈ T(ω), and T(ω) is a vector space (a subspace of the space of solutions of (B.9)).

ii) There were of course several approaches to this problem. The simplest one requires Theorem B.2.4, stated and proved below.

c) ⇒ b) Let V be the nonsingular matrix such that X(t + ω) = X(t)V, which we know to exist from Theorem 2.4.2. Then X^{-1}(t + ω) = V^{-1} X^{-1}(t), and

    X^{-1}(t + ω) - X^{-1}(t) = (V^{-1} - I) X^{-1}(t)

Suppose that dim T(ω) = 0. Then 1 is not an eigenvalue of V: if it were, any associated eigenvector c would yield the nontrivial ω-periodic solution x(t) = X(t)c, by the computation in the proof of Theorem B.2.4. This implies that 1 is not an eigenvalue of V^{-1} either; in turn, 0 is not an eigenvalue of V^{-1} - I. This means that V^{-1} - I is nonsingular, and since X(t) is nonsingular, (V^{-1} - I)X^{-1}(t) is nonsingular. Thus we conclude that if dim T(ω) = 0, then (X^{-1}(t + ω) - X^{-1}(t)) is nonsingular; taking t = 0 gives b).

b) ⇒ c) The previous argument works the other way as well.

c) ⇒ a) Suppose dim T(ω) = 0 and that x₁, x₂ are two ω-periodic solutions of (B.10). Then x₁' = A(t)x₁ + f(t) and x₂' = A(t)x₂ + f(t). Therefore,

    (x₁ - x₂)' = A(t)x₁ + f(t) - A(t)x₂ - f(t) = A(t)(x₁ - x₂)

which implies that x₁ - x₂ is an ω-periodic solution of (B.9); if x₁ ≠ x₂, this gives dim T(ω) ≠ 0, a contradiction. Thus the ω-periodic solution of (B.10) is unique.

a) ⇒ c) Let x be the unique ω-periodic solution of (B.10), and assume that dim T(ω) ≠ 0, i.e., there exists y, a nontrivial ω-periodic solution of (B.9). Then

    (x + y)' = A(t)x + f(t) + A(t)y = A(t)(x + y) + f(t)

and so x + y is an ω-periodic solution of (B.10) distinct from x, which is a contradiction since x is the unique such solution.

Theorem B.2.4. Consider the system

    x' = A(t)x

where A(t) has continuous entries on R and is such that A(t + ω) = A(t) for some ω ∈ R. If the system has a nontrivial ω-periodic solution, then 1 is an eigenvalue of the matrix V defined by X(t + ω) = X(t)V.

Proof. Let x be a nontrivial ω-periodic solution. For some constant vector c ≠ 0, we have x(t) = X(t)c. Also, x(t + ω) = X(t + ω)c. As x is periodic of period ω, x(t) = x(t + ω), so that, using the previously obtained forms,

    X(t)c = X(t + ω)c = X(t)Vc ⇒ c = Vc

Hence c is an eigenvector of V with corresponding eigenvalue (Floquet multiplier) equal to 1.
McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 3

Exercise 3.1 Compute the solution of the differential equation

    x'(t) = x(t) - y(t) - t²
    y'(t) = x(t) + 3y(t) + 2t    (B.11)

Exercise 3.2 Consider the initial value problem

    x'(t) = A(t)x(t),  x(t₀) = x₀    (B.12)

We have seen that the solution of this initial value problem is given by

    x(t) = R(t, t₀)x₀

where R(t, t₀) is the resolvent matrix of the system. Suppose that we are in the case where the following condition holds:

    ∀t, s ∈ I, A(t)A(s) = A(s)A(t)    (B.13)

with I ⊂ R an interval.

i) Show that M(t) = exp(∫_{t₀}^t A(s) ds) is a solution of the matrix initial value problem

    M'(t) = A(t)M(t),  M(t₀) = I_n

where I_n is the n × n identity matrix. [Hint: Use the formal definition of a derivative, i.e., M'(t) = lim_{h→0}(M(t + h) - M(t))/h.]

ii) Deduce that, when (B.13) holds,

    R(t, t₀) = exp(∫_{t₀}^t A(s) ds)

iii) Deduce the following theorem.

Theorem B.3.5. Let U, V be constant matrices that commute, and suppose that A(t) = f(t)U + g(t)V for scalar functions f, g. Then

    R(t, t₀) = exp(∫_{t₀}^t f(s) ds U) exp(∫_{t₀}^t g(s) ds V)    (B.14)

Exercise 3.3 Find the resolvent matrix associated to the matrix

    A(t) = ( a(t)  -b(t)
             b(t)   a(t) )    (B.15)

where a, b are continuous functions on R.

Exercise 3.4 Consider the linear system

    x' = (1/t)x + ty
    y' = y    (B.16)

with initial condition x(t₀) = x₀, y(t₀) = y₀.

i) Solve the initial value problem (B.16).

ii) Deduce the formula for the principal solution matrix R(t, t₀).

iii) Show that in this case,

    R(t, t₀) ≠ exp(∫_{t₀}^t A(s) ds),  with  A(t) = ( 1/t  t
                                                      0    1 )
McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 3 Solutions

Solution Exercise 3.1 Let φ(t) = (x(t), y(t))ᵀ,

    A = ( 1  -1
          1   3 )

and B(t) = (-t², 2t)ᵀ. The system (B.11) can then be written as

    φ' = Aφ + B(t)

This is a nonhomogeneous linear system, so we use the variation of constants formula,

    φ(t) = e^{(t-t₀)A} φ₀ + ∫_{t₀}^t e^{(t-s)A} B(s) ds

where φ₀ = φ(t₀). Let us assume for simplicity that t₀ = 0.

The characteristic polynomial of A is (λ - 2)², so 2 is the only eigenvalue, with associated eigenspace

    ker(A - 2I) = {(x, y)ᵀ : ( -1 -1 ; 1 1 )(x, y)ᵀ = (0, 0)ᵀ} = {(x, y)ᵀ : x + y = 0}

Thus dim ker(A - 2I) = 1 ≠ 2, and A is not diagonalisable. Let us compute the Jordan canonical form of A: there exists P nonsingular such that

    P^{-1}AP = ( 2  1
                 0  2 ),  with  P = ( -1  1
                                       1  0 ),  P^{-1} = ( 0  1
                                                           1  1 )

Then e^{At} = P e^{(P^{-1}AP)t} P^{-1}. We have P^{-1}AP = 2I + N, where

    N = ( 0  1
          0  0 )

Therefore, e^{(P^{-1}AP)t} = e^{2It + Nt} = e^{2t} e^{Nt}. Now, N is nilpotent (N² = 0), so e^{Nt} = Σ_{n≥0} tⁿNⁿ/n! = I + Nt. As a consequence,

    e^{(P^{-1}AP)t} = e^{2t}(I + Nt) = ( e^{2t}  te^{2t}
                                         0       e^{2t}  )

Thus

    e^{At} = P ( e^{2t}  te^{2t} ; 0  e^{2t} ) P^{-1} = e^{2t} ( 1-t  -t
                                                                 t    1+t )

We still have to compute ∫_0^t e^{-As} B(s) ds. We have

    e^{-As} B(s) = e^{-2s} ( 1+s  s ; -s  1-s ) (-s², 2s)ᵀ = e^{-2s} ( s² - s³ , s³ - 2s² + 2s )ᵀ

Let I₁(t) = ∫_0^t e^{-2s} s ds, I₂(t) = ∫_0^t e^{-2s} s² ds and I₃(t) = ∫_0^t e^{-2s} s³ ds. Then

    ∫_0^t e^{-As} B(s) ds = ( I₂(t) - I₃(t) , I₃(t) - 2I₂(t) + 2I₁(t) )ᵀ

Evaluating the integrals, we obtain

    ∫_0^t e^{-As} B(s) ds = ( e^{-2t}((1/2)t³ + (1/4)t² + (1/4)t + 1/8) - 1/8 ,
                              e^{-2t}(-(1/2)t³ + (1/4)t² - (3/4)t - 3/8) + 3/8 )ᵀ

As a conclusion,

    φ(t) = e^{2t} ( 1-t  -t ; t  1+t ) φ₀ + e^{2t} ( 1-t  -t ; t  1+t ) ∫_0^t e^{-As} B(s) ds
         = ( (1-t)e^{2t} x₀ - te^{2t} y₀ + (3/4)t² + (1/2)t + 1/8 - e^{2t}((1/4)t + 1/8) ,
             te^{2t} x₀ + (1+t)e^{2t} y₀ - (1/4)t² - t - 3/8 + e^{2t}((1/4)t + 3/8) )ᵀ

is the solution of (B.11) going through (x₀, y₀)ᵀ at time 0.
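The closed form obtained above can be validated against a direct numerical integration of (B.11). In the sketch below the integration scheme (classical RK4), the step count and the test point are arbitrary choices; `phi` encodes the final formula with t₀ = 0:

```python
import math

def phi(t, x0, y0):
    """Closed-form solution of (B.11) through (x0, y0) at time 0."""
    e = math.exp(2.0 * t)
    x = (1 - t) * e * x0 - t * e * y0 + 0.75 * t**2 + 0.5 * t + 0.125 - e * (0.25 * t + 0.125)
    y = t * e * x0 + (1 + t) * e * y0 - 0.25 * t**2 - t - 0.375 + e * (0.25 * t + 0.375)
    return x, y

def rk4(x0, y0, t_end=1.0, n=2000):
    """RK4 integration of x' = x - y - t^2, y' = x + 3y + 2t."""
    def f(t, x, y):
        return (x - y - t * t, x + 3.0 * y + 2.0 * t)
    h = t_end / n
    t, x, y = 0.0, x0, y0
    for _ in range(n):
        k1 = f(t, x, y)
        k2 = f(t + h / 2, x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(t + h / 2, x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(t + h, x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

xe, ye = phi(1.0, 1.0, -2.0)
xn, yn = rk4(1.0, -2.0)
assert abs(xe - xn) < 1e-6 and abs(ye - yn) < 1e-6
```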
Solution Exercise 3.2 i) We have

    (M(t + h) - M(t))/h = (1/h)[exp(∫_{t₀}^{t+h} A(s) ds) - exp(∫_{t₀}^t A(s) ds)]

Since, for all s₁, s₂ ∈ I, A(s₁)A(s₂) = A(s₂)A(s₁), we have

    (∫_{t₀}^{s₁} A(u) du)(∫_{t₀}^{s₂} A(u) du) = (∫_{t₀}^{s₂} A(u) du)(∫_{t₀}^{s₁} A(u) du)

It follows that

    (M(t + h) - M(t))/h = (1/h)[exp(∫_t^{t+h} A(s) ds) exp(∫_{t₀}^t A(s) ds) - exp(∫_{t₀}^t A(s) ds)]
                        = (1/h)[exp(∫_t^{t+h} A(s) ds) - I] M(t)

Expanding the exponential, exp(∫_t^{t+h} A(s) ds) = I + ∫_t^{t+h} A(s) ds + O(h²), so

    (1/h)[exp(∫_t^{t+h} A(s) ds) - I] = (1/h)∫_t^{t+h} A(s) ds + O(h)

The second term tends to zero as h → 0, while the first tends to A(t) by continuity of A. Thus

    lim_{h→0}(M(t + h) - M(t))/h = A(t)M(t)

therefore M'(t) = A(t)M(t). Moreover M(t₀) = exp(0) = I_n, hence the result.

ii) The implication is then immediate: M solves the same matrix initial value problem as R(·, t₀), and by uniqueness of solutions, R(t, t₀) = exp(∫_{t₀}^t A(s) ds).

iii) If U and V commute and A(t) = f(t)U + g(t)V, then

    A(s)A(t) = (f(s)U + g(s)V)(f(t)U + g(t)V)
             = f(s)f(t)U² + f(s)g(t)UV + g(s)f(t)VU + g(s)g(t)V²
             = f(t)f(s)U² + g(t)f(s)VU + f(t)g(s)UV + g(t)g(s)V²
             = (f(t)U + g(t)V)(f(s)U + g(s)V) = A(t)A(s)

that is, A(s) and A(t) commute for all t, s. Then

    R(t, t₀) = exp(∫_{t₀}^t A(s) ds) = exp(∫_{t₀}^t f(s) ds U + ∫_{t₀}^t g(s) ds V)
             = exp(∫_{t₀}^t f(s) ds U) exp(∫_{t₀}^t g(s) ds V)

since ∫_{t₀}^t f(s) ds U and ∫_{t₀}^t g(s) ds V commute.
Solution Exercise 3.3 Writing

    A(t) = a(t) ( 1  0 ; 0  1 ) + b(t) ( 0  -1 ; 1  0 ) = a(t)I + b(t)V

it is obvious that Theorem B.3.5 can be used here, since I commutes with all matrices. Thus,

    R(t, t₀) = exp(∫_{t₀}^t a(s) ds I) exp(∫_{t₀}^t b(s) ds V)

Let α(t) = ∫_{t₀}^t a(s) ds and β(t) = ∫_{t₀}^t b(s) ds. Then R(t, t₀) = e^{α(t)I} e^{β(t)V} = e^{α(t)} e^{β(t)V}. Now notice that V² = -I, V³ = -V, etc., so that we can write

    Vⁿ = (-1)^p I if n = 2p,  (-1)^p V if n = 2p + 1

This implies that

    e^{β(t)V} = Σ_{n≥0} β(t)ⁿVⁿ/n!
              = (Σ_{p≥0} (-1)^p β(t)^{2p}/(2p)!) I + (Σ_{p≥0} (-1)^p β(t)^{2p+1}/(2p+1)!) V
              = cos(β(t)) I + sin(β(t)) V

Thus

    R(t, t₀) = e^{α(t)} (cos(β(t)) I + sin(β(t)) V) = ( e^{α(t)} cos β(t)  -e^{α(t)} sin β(t)
                                                        e^{α(t)} sin β(t)   e^{α(t)} cos β(t) )
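The identity e^{βV} = cos(β) I + sin(β) V used above is easy to confirm numerically. In this sketch the truncated-series routine and the value β = 0.9 are arbitrary:

```python
import math

def mat_exp2(A, terms=40):
    """Truncated power series exp(A) for a 2x2 matrix (adequate for small A)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[sum(P[i][m] * A[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

beta = 0.9
V = [[0.0, -1.0], [1.0, 0.0]]
E = mat_exp2([[beta * V[i][j] for j in range(2)] for i in range(2)])
expected = [[math.cos(beta), -math.sin(beta)],
            [math.sin(beta), math.cos(beta)]]
for i in range(2):
    for j in range(2):
        assert abs(E[i][j] - expected[i][j]) < 1e-12
```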

Solution Exercise 3.4 i) Notice that the y' equation in (B.16) does not involve x. Therefore, we can solve it directly, giving y(t) = Ceᵗ, with C ∈ R. Substituting this into the equation for x', we have

    x' = (1/t)x(t) + tCeᵗ

To solve this nonhomogeneous first-order scalar equation, we start by solving the homogeneous part, x' = x/t. This equation is separable, giving the solution x(t) = Kt, for K ∈ R. Now we use a variation of constants approach to find a particular solution to the nonhomogeneous problem. We use the ansatz x(t) = K(t)t, which, when differentiated and substituted into the nonhomogeneous equation, gives K'(t) = Ceᵗ, and hence K(t) = Ceᵗ; so x_p(t) = Cteᵗ is a particular solution, giving the general solution x(t) = Kt + Cteᵗ.

Let t₀ ≠ 0 (to avoid problems with 1/t), and suppose x(t₀) = x₀, y(t₀) = y₀. Then x₀ = Kt₀ + Ct₀e^{t₀} and y₀ = Ce^{t₀}. It follows that K = x₀/t₀ - y₀ and C = e^{-t₀}y₀, and the solution to the equation going through the point (x₀, y₀) at time t₀ is given by

    φ(t) = ( x(t) )   ( (x₀/t₀ - y₀)t + e^{-t₀}y₀ teᵗ )   ( (x₀/t₀)t + y₀ t(e^{t-t₀} - 1) )
           ( y(t) ) = ( e^{-t₀}y₀ eᵗ                   ) = ( y₀ e^{t-t₀}                   )
ii) The solution to the IVP

    φ' = A(t)φ,  φ(t₀) = φ₀

is given by φ(t) = R(t, t₀)φ₀. Thus, the resolvent matrix (or principal solution matrix) for (B.16) is given by

    R(t, t₀) = ( t/t₀  t(e^{t-t₀} - 1)
                 0     e^{t-t₀}        )

iii) First of all, notice that A(t) and A(s) do not commute. Let us compute B(t) = ∫_{t₀}^t A(s) ds:

    B(t) = ( ∫_{t₀}^t ds/s  ∫_{t₀}^t s ds )   ( ln(t/t₀)  (t² - t₀²)/2 )
           ( 0              ∫_{t₀}^t ds   ) = ( 0          t - t₀       )

which, for convenience, we denote

    B(t) = ( α  β
             0  γ )

The eigenvalues of B(t) are α and γ. As e^{B(t₀)} = e^0 = I = R(t₀, t₀), we are concerned with finding a t ≠ t₀ at which e^{B(t)} and R(t, t₀) differ. If t ≠ t₀ is such that α ≠ γ, then B(t) is diagonalizable, i.e., there exists P nonsingular such that

    P^{-1}B(t)P = D = ( α  0
                        0  γ )

Then

    e^{B(t)} = P ( e^α  0 ; 0  e^γ ) P^{-1}

We find

    P = ( 1  β
          0  γ-α ),  P^{-1} = 1/(γ-α) ( γ-α  -β
                                        0     1  )

Thus, after a few computations,

    e^{B(t)} = ( e^α  β(e^γ - e^α)/(γ-α) )   ( t/t₀  μ
                ( 0    e^γ               ) = ( 0     e^{t-t₀} )

The element μ in this matrix is the only one different from the corresponding element of R(t, t₀). We have

    μ = [(t² - t₀²) / (2(t - t₀ - ln(t/t₀)))] (e^{t-t₀} - t/t₀) ≠ t(e^{t-t₀} - 1)

so R(t, t₀) ≠ e^{B(t)}.
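Part iii) can also be illustrated numerically: with the arbitrary choices t₀ = 1 and t = 3, exp(∫ A) agrees with R(t, t₀) in every entry except the (1,2) one. The truncated-series exponential below is an illustrative helper:

```python
import math

def mat_exp2(A, terms=60):
    """Truncated power series exp(A) for a 2x2 matrix (adequate for this B)."""
    E = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        P = [[sum(P[i][m] * A[m][j] for m in range(2)) / k for j in range(2)]
             for i in range(2)]
        E = [[E[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return E

t0, t = 1.0, 3.0
# true resolvent of (B.16), from the explicit solution
R = [[t / t0, t * (math.exp(t - t0) - 1.0)],
     [0.0, math.exp(t - t0)]]
# exponential of B(t) = integral of A(s) over [t0, t]
B = [[math.log(t / t0), (t * t - t0 * t0) / 2.0],
     [0.0, t - t0]]
E = mat_exp2(B)
assert abs(E[0][0] - R[0][0]) < 1e-9      # diagonal entries agree
assert abs(E[1][1] - R[1][1]) < 1e-9
assert abs(E[0][1] - R[0][1]) > 0.1       # the (1,2) entries do not
```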

McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 4

Exercise 4.1 A differential equation of the form

    x' = f(t, x(t), x(t - τ))    (B.17)

for τ > 0, is called a delay differential equation (or also a differential difference equation, or an equation with deviating argument), and τ is called the delay. The basic initial value problem for (B.17) takes the form

    x' = f(t, x(t), x(t - τ))
    x(t) = φ₀(t),  t₀ - τ ≤ t ≤ t₀    (B.18)

i) Use the method of steps to construct the solution to (B.18) on the interval [t₀, t₀ + τ]; that is, find how to construct the solution to the non-delayed problem

    x' = f(t, x(t), φ₀(t - τ))
    x(t₀) = φ₀(t₀),  t₀ ≤ t ≤ t₀ + τ    (B.19)

ii) Discuss existence and uniqueness of the solution on the interval [t₀, t₀ + τ], depending on the nature of φ₀ and f.

iii) Suppose that φ₀ ∈ C⁰([t₀ - τ, t₀]). Discuss the regularity of the solution to (B.18) on the interval [t₀ + kτ, t₀ + (k + 1)τ], k ∈ N.

Exercise 4.2 Consider the delay initial value problem

    x'(t) = ax(t - τ)
    x(t) = C,  t ∈ [t₀ - τ, t₀]    (B.20)

with a, C ∈ R, τ ∈ R*₊. Using the ideas of the previous exercise, find the solution to (B.20) on the interval [t₀ + kτ, t₀ + (k + 1)τ], k ∈ N.

Exercise 4.3 Compute Aᵢⁿ and e^{tAᵢ} for the following matrices.

    A₁ = ( 0  1 ),  A₂ = ( 1  1 ),  A₃ = (  0      -1      sin θ
           1  0          0  1            1       0      cos θ
                                          sin θ   cos θ  0     )
Exercise 4.4 Compute e^{tA} for the matrix

    A = (  1  -1  0
          -1   0  1
           0  -1  1 )
Exercise 4.5 Let A ∈ M_n(R) be a matrix (independent of t), ‖·‖ be a norm on R^n and |||·||| the associated operator norm on M_n(R).

i) a) Show that for all t ∈ R and all k ∈ N*, there exists C_k(t) ≥ 0 such that

    |||e^{(t/k)A} - (I + (t/k)A)||| ≤ (1/k²) C_k(t)

with lim_{k→∞} C_k(t) = (t²/2) |||A²|||.

b) Show that for all t ∈ R and all k ∈ N*,

    |||I + (t/k)A||| ≤ e^{(|t|/k)|||A|||}

c) Deduce that

    e^{tA} = lim_{k→∞} (I + (t/k)A)^k

ii) Suppose now that A is symmetric and that its eigenvalues λ satisfy λ > -ω, with 0 < ω < 1.

a) Show by induction that, for k ≥ 0,

    (I + A)^{-(k+1)} = ∫_0^∞ e^{-t(I+A)} t^k/k! dt

b) Deduce that, for M ≥ 1, (|||(I + uA)^{-k}||| ≤ M for all u > 0 and all k ∈ N) if, and only if, (|||e^{-tA}||| ≤ M for all t > 0).

c) Show that

    (∀t > 0, e^{-tA} ≥ 0)  ⟺  (∃μ₀, ∀μ > μ₀, (μI + A)^{-1} ≥ 0)

where by convention, for B = (b_ij) ∈ M_n(R), writing that B ≥ 0 means that b_ij ≥ 0 for all i, j = 1, ..., n.

iii) Do the results of part ii) hold true if A is a nonsymmetric matrix?
McMaster University Math 4G03/6G03
Fall 2003

Homework Sheet 4 Solutions

Solution Exercise 1 i) The proposed method consists in considering (B.18) as a nondelayed IVP on the interval [t₀, t₀ + τ]. Indeed, on this interval, we can consider (B.19). That the latter is a nondelayed problem is obvious if we rewrite the differential equation as

    x'(t) = g(t, x(t))    (B.21)

with g(t, x(t)) = f(t, x(t), φ₀(t - τ)), which is well defined on the interval [t₀, t₀ + τ] since for t ∈ [t₀, t₀ + τ], t - τ ∈ [t₀ - τ, t₀], on which the function φ₀ is defined.

We can then use the integral form to construct the solution on the interval [t₀, t₀ + τ]:

    x(t) = x(t₀) + ∫_{t₀}^t g(s, x(s)) ds = φ₀(t₀) + ∫_{t₀}^t f(s, x(s), φ₀(s - τ)) ds

ii) Obviously, the discussion to make is on the nature of the function f. As problem (B.19) is an ordinary differential equations initial value problem, existence and uniqueness of solutions on the interval [t₀, t₀ + τ] follow the usual scheme. To discuss the required properties on f and φ₀, the best is to use (B.21). Recall that a vector field has to be continuous both in t and in x for solutions to exist. Thus, to have existence of solutions to equation (B.21), g must be continuous in t and x. This implies that f(t, x, φ₀(t - τ)) must be continuous in t, x. Thus φ₀ has to be continuous on [t₀ - τ, t₀].

Now, for uniqueness of solutions to (B.21), we need g to be Lipschitz in x, i.e., we require the same property from f. Note that this does not involve either φ₀ or the way f depends on φ₀.

iii) Every successive integration raises the regularity of the solution: x is C¹ on [t₀, t₀ + τ], C² on [t₀ + τ, t₀ + 2τ], etc. Hence, x is Cⁿ on [t₀ + (n - 1)τ, t₀ + nτ].
Solution Exercise 2 We proceed as previously explained. We assume for simplicity that t₀ = 0. To find the solution on the interval [0, τ], we consider the nondelayed IVP

    x₁' = ax₀(t - τ),  x₁(0) = C

where x₀(t) = C for t ∈ [-τ, 0]. The solution to this IVP is straightforward: x₁(t) = C + aCt = C(1 + at), defined on the interval [0, τ]. To integrate on the second interval, we consider the IVP

    x₂' = ax₁(t - τ) = aC(1 + a(t - τ)),  x₂(τ) = x₁(τ) = C + aCτ

Hence we find the solution to the differential equation to be, on the interval [τ, 2τ],

    x₂(t) = C(1 + at + (1/2)a²(t - τ)²)

Iterating this process one more time with the IVP

    x₃' = ax₂(t - τ) = aC(1 + a(t - τ) + (1/2)a²(t - 2τ)²),  x₃(2τ) = x₂(2τ) = C(1 + 2aτ + (1/2)a²τ²)

we find, on the interval [2τ, 3τ], the solution

    x₃(t) = C(1 + at + (1/2)a²(t - τ)² + (1/6)a³(t - 2τ)³)

We develop the intuition that the solution at step n (i.e., on the interval [(n - 1)τ, nτ]) must take the form

    x_n(t) = C Σ_{k=0}^n a^k (t - (k - 1)τ)^k / k!    (B.22)

This can be proved by induction (we will not do it here).
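As a cross-check of formula (B.22), the sketch below compares it with a direct Euler integration of the delay equation (B.20) with t₀ = 0. The helper names (`steps_solution`, `euler_dde`), the parameter values and the tolerance are illustrative choices:

```python
import math

def steps_solution(t, a, tau, C):
    """Formula (B.22): value on [(n-1)*tau, n*tau], where n = floor(t/tau) + 1."""
    n = int(math.floor(t / tau)) + 1
    return C * sum(a**k * (t - (k - 1) * tau)**k / math.factorial(k)
                   for k in range(n + 1))

def euler_dde(t_end, a, tau, C, dt=1e-4):
    """Explicit Euler for x'(t) = a*x(t - tau), with x = C on [-tau, 0]."""
    n_delay = int(round(tau / dt))
    hist = [C] * (n_delay + 1)      # x on [-tau, 0], most recent value last
    for _ in range(int(round(t_end / dt))):
        hist.append(hist[-1] + dt * a * hist[-1 - n_delay])
    return hist[-1]

a, tau, C = -1.0, 0.5, 1.0
for t in (0.4, 1.2, 2.3):
    assert abs(steps_solution(t, a, tau, C) - euler_dde(t, a, tau, C)) < 1e-2
```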
Solution Exercise 3 For matrix A₁, we have A₁² = I, A₁³ = A₁, etc. Hence A₁^{2n} = I and A₁^{2n+1} = A₁, which implies

    e^{A₁t} = Σ_{n≥0} t^{2n}/(2n)! A₁^{2n} + Σ_{n≥0} t^{2n+1}/(2n+1)! A₁^{2n+1}
            = (Σ_{n≥0} t^{2n}/(2n)!) I + (Σ_{n≥0} t^{2n+1}/(2n+1)!) A₁
            = cosh t I + sinh t A₁ = ( cosh t  sinh t
                                       sinh t  cosh t )

For matrix A₂, remark that it can be written as A₂ = I + N, where

    N = ( 0  1
          0  0 )

is a nilpotent matrix. Thus A₂² = (I + N)² = I + 2N + N² = I + 2N, A₂³ = (I + 2N)(I + N) = I + 2N + N + 2N² = I + 3N, and it is easily shown by induction that A₂ⁿ = (I + N)ⁿ = I + nN. It follows that

    e^{A₂t} = (Σ_{n≥0} tⁿ/n!) I + (Σ_{n≥0} n tⁿ/n!) N
            = eᵗ I + t (Σ_{n≥1} t^{n-1}/(n-1)!) N
            = eᵗ I + t eᵗ N = eᵗ ( 1  t
                                   0  1 )

Finally, for matrix A₃, we have that

    A₃² = ( -cos²θ       (1/2)sin 2θ  -cos θ
            (1/2)sin 2θ  -sin²θ        sin θ
            cos θ        -sin θ        1     )

and A₃³ = 0, i.e., A₃ is nilpotent of index 3. Therefore,

    e^{A₃t} = I + tA₃ + (t²/2)A₃².
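The two claims about A₃ can be verified numerically for the sign convention A₃ = ((0, -1, sin θ), (1, 0, cos θ), (sin θ, cos θ, 0)) used here; θ, t and the series routine below are arbitrary choices:

```python
import math

def mul(X, Y, n=3):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30, n=3):
    """Truncated power series exp(A) for an n x n matrix (small norms only)."""
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        P = [[v / k for v in row] for row in mul(P, A)]
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return E

th, t = 0.7, 1.3
s, c = math.sin(th), math.cos(th)
A3 = [[0.0, -1.0, s], [1.0, 0.0, c], [s, c, 0.0]]
A3sq = mul(A3, A3)
A3cube = mul(A3sq, A3)
assert all(abs(v) < 1e-12 for row in A3cube for v in row)   # A3^3 = 0

tA = [[t * v for v in row] for row in A3]
E = mat_exp(tA)
closed = [[float(i == j) + t * A3[i][j] + t * t / 2.0 * A3sq[i][j]
           for j in range(3)] for i in range(3)]
assert all(abs(E[i][j] - closed[i][j]) < 1e-9 for i in range(3) for j in range(3))
```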
Solution Exercise 4 The Jordan canonical form of A is

    J = ( 0  0  0
          0  1  1
          0  0  1 )

Now, to compute e^{Jt}, remark that the lower-right 2 × 2 block of J is the matrix A₂ whose exponential was computed in Exercise 3. Hence,

    e^{Jt} = ( 1  0    0
               0  eᵗ   teᵗ
               0  0    eᵗ  )

We have J = P^{-1}AP, where

    P = ( 1  1   2         P^{-1} = (  1   1  -1
          1  0  -1                   -2  -1   3
          1  1   1 ),                  1   0  -1 )

are the matrices of change of basis that transform A to its Jordan canonical form. Then A = PJP^{-1}, and e^{At} = P e^{Jt} P^{-1}, i.e.,

    e^{At} = ( 1 + teᵗ        1 - eᵗ   (1-t)eᵗ - 1
               1 - eᵗ         1        eᵗ - 1
               1 + (t-1)eᵗ    1 - eᵗ   (2-t)eᵗ - 1 )
Solution Exercise 5 This exercise was far from trivial.


1a) Consider the map
t e
At
R /
n
(R)
We have (e
At
)

= Ae
At
and (e
At
)
(k)
= A
k
e
At
, where u
(k)
denotes the kth derivative of u.
e
t
k
A
= I +
t
k
A +
t
2
2k
2
A
2
+
t
2
k
2
(
t
k
)
Thus
e
t
k
A
I
t
k
A =
t
2
2k
2
A
2
+
t
2
k
2
(
t
k
)
Therefore, taking the norm [[[[[[ of this expression,

e
t
k
A
(I +
t
k
A)


1
k
2
_
t
2
2

A
2

+t
2

(
t
k
)

_
=
1
k
2
C
k
(t)
We have
lim
k
C
k
(t) =
t
2
2

A
2

Let S
k
=

j=3
t
j
k
j2
j!
[[[A[[[
j
. This series is uniformly convergent, which implies that we can
change the order in the following limit,
lim
k
S
k
=

j=3
lim
k
_
t
j
k
j2
j!
[[[A[[[
j
_
= 0
1b) We have already seen (Exercise 2, Assignment 2) that

e
A

e
|||A|||
. Therefore,

e
t
k
A

e
t
k
|||A|||
But
e
|t|
k
|||A|||
=

k=0
1
k!
[t[
k
[
k
k
[[[A[[[
k
= 1 +
[t[
k
[[[A[[[ +
[t[
2
[
2k
2
[[[A[[[
2
+
which, since
|t|
2
2k
2
[[[A[[[
2
+ 0, implies that
e
|t|
k
|||A|||
1 +
[t[
k
[[[A[[[ = [[[I[[[ +

t
k
A

I +
t
k
A

Fund. Theory ODE Lecture Notes J. Arino


131
(since [[[I[[[ = sup
v1
|IV | = sup
v1
|v| = 1).
1c) We skip the scalar case, and consider the case n \ge 2. We have
\[
e^{At} - \Big(I + \frac{t}{k}A\Big)^k = \big(e^{\frac{t}{k}A}\big)^k - \Big(I + \frac{t}{k}A\Big)^k
= \Big[e^{\frac{t}{k}A} - \Big(I + \frac{t}{k}A\Big)\Big]\sum_{j=0}^{k-1}\big(e^{\frac{t}{k}A}\big)^{k-1-j}\Big(I + \frac{t}{k}A\Big)^j,
\]
since for two matrices E, F \in \mathcal{M}_n(\mathbb{R}) (or \mathcal{M}_n(\mathbb{C})) that commute, E^k - F^k = (E - F)\sum_{j=0}^{k-1} E^{k-1-j}F^j. Therefore,
\[
\Big|\Big|\Big| e^{At} - \Big(I+\frac{t}{k}A\Big)^k \Big|\Big|\Big|
\le \Big|\Big|\Big| e^{\frac{t}{k}A} - \Big(I+\frac{t}{k}A\Big) \Big|\Big|\Big|
\sum_{j=0}^{k-1}\left( \Big|\Big|\Big| e^{\frac{t(k-1-j)}{k}A} \Big|\Big|\Big|\; \Big|\Big|\Big| \Big(I+\frac{t}{k}A\Big)^j \Big|\Big|\Big| \right). \tag{B.23}
\]
Now, we have |||e^{\frac{t(k-1-j)}{k}A}||| \le e^{\frac{|t|(k-1-j)}{k}|||A|||}. Also,
\[
\Big|\Big|\Big| \Big(I + \frac{t}{k}A\Big)^j \Big|\Big|\Big| \le \Big|\Big|\Big| I + \frac{t}{k}A \Big|\Big|\Big|^j \le \Big(e^{\frac{|t|}{k}|||A|||}\Big)^j = e^{\frac{j|t|}{k}|||A|||},
\]
where the last inequality results from question 1b). Therefore,
\[
\Big|\Big|\Big| e^{At} - \Big(I+\frac{t}{k}A\Big)^k \Big|\Big|\Big|
\le \frac{1}{k^2}C_k(t)\sum_{j=0}^{k-1} e^{\frac{|t|(k-1-j)}{k}|||A|||}\, e^{\frac{j|t|}{k}|||A|||}
= \frac{1}{k^2}C_k(t)\sum_{j=0}^{k-1} e^{\frac{|t|(k-1)}{k}|||A|||}
= \frac{1}{k^2}C_k(t)\, k\, e^{\frac{|t|(k-1)}{k}|||A|||}
= \frac{1}{k}C_k(t)\, e^{\frac{|t|(k-1)}{k}|||A|||}.
\]
We thus have
\[
\Big|\Big|\Big| e^{At} - \Big(I+\frac{t}{k}A\Big)^k \Big|\Big|\Big| \le \frac{1}{k}C_k(t)\, e^{\frac{|t|(k-1)}{k}|||A|||} = \frac{1}{k}D_k(t).
\]
As k \to \infty, C_k(t) \to \frac{t^2}{2}|||A^2||| and e^{\frac{|t|(k-1)}{k}|||A|||} \to e^{|t|\,|||A|||}. Therefore, \lim_{k\to\infty} D_k(t) = \frac{t^2}{2}|||A^2|||\, e^{|t|\,|||A|||}, which in turn implies that \lim_{k\to\infty}\big(I + \frac{t}{k}A\big)^k = e^{At}.
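The limit just established can also be observed numerically. The sketch below is an illustration, not part of the original solution; the rotation generator is chosen because e^{At} is then known in closed form, and the bound (1/k)D_k(t) predicts roughly first-order decay of the error in 1/k:

```python
import math

# Illustrative test case: A generates rotations, so e^{At} is known exactly.
A = [[0.0, 1.0], [-1.0, 0.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def euler_power(t, k):
    """(I + (t/k)A)^k, the quantity shown above to converge to e^{At}."""
    M = [[1.0 + (t / k) * A[0][0], (t / k) * A[0][1]],
         [(t / k) * A[1][0], 1.0 + (t / k) * A[1][1]]]
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        R = matmul(R, M)
    return R

def expm_exact(t):
    return [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

def err(t, k):
    E, X = expm_exact(t), euler_power(t, k)
    return max(abs(E[i][j] - X[i][j]) for i in range(2) for j in range(2))

errors = [err(1.0, k) for k in (10, 100, 1000)]
```

Each tenfold increase in k reduces the error by roughly a factor of ten, as the 1/k bound suggests.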
2a) We now suppose that A is a symmetric matrix. Recall that any symmetric matrix is diagonalizable, with real eigenvalues. Furthermore, there exists a matrix P such that P^{-1} = P^T and such that P^T A P = diag(\lambda_i), with \lambda_i \in Sp(A).
We assume that the eigenvalues \lambda_i, i = 1, \dots, n, are such that \lambda_i > -\alpha, for \alpha > 0. We want to show by induction that the following holds.
\[
\forall k \ge 0, \quad (\alpha I + A)^{-(k+1)} = \int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^k}{k!}\,dt. \tag{B.24}
\]
Suppose that k = 0. Equation (B.24) reads
\[
(\alpha I + A)^{-1} = \int_0^{\infty} e^{-(\alpha I + A)t}\,dt.
\]
The matrix \alpha I + A is nonsingular. Indeed, suppose that \det(\alpha I + A) = 0. This is equivalent to \det(A - (-\alpha)I) = 0, which implies that -\alpha is an eigenvalue of A, a contradiction with the hypothesis on the localization of the spectrum.
Now remark that if B is a nonsingular matrix, the equality \frac{d}{dt}e^{Bt} = Be^{Bt} implies that B^{-1}\frac{d}{dt}e^{Bt} = e^{Bt}. Since \alpha I + A is nonsingular, we thus have
\[
\frac{d}{dt}e^{-(\alpha I + A)t} = -(\alpha I + A)e^{-(\alpha I + A)t},
\]
and therefore
\[
e^{-(\alpha I + A)t} = -(\alpha I + A)^{-1}\frac{d}{dt}\big(e^{-(\alpha I + A)t}\big),
\]
and so, integrating,
\[
\int_0^{\infty} e^{-(\alpha I + A)t}\,dt
= -(\alpha I + A)^{-1}\int_0^{\infty}\frac{d}{dt}\big(e^{-(\alpha I + A)t}\big)\,dt
= -(\alpha I + A)^{-1}\Big[e^{-(\alpha I + A)t}\Big]_0^{\infty}
= -(\alpha I + A)^{-1}\Big(\lim_{t\to\infty} e^{-(\alpha I + A)t} - I\Big). \tag{B.25}
\]
Now,
\[
e^{-(\alpha I + A)t} = P\begin{pmatrix} e^{-(\alpha + \lambda_1)t} & & 0\\ & \ddots & \\ 0 & & e^{-(\alpha + \lambda_n)t} \end{pmatrix}P^{-1}.
\]
Since for all i = 1, \dots, n, \lambda_i > -\alpha, it follows that \lim_{t\to\infty} e^{-(\alpha + \lambda_i)t} = 0, which in turn implies that \lim_{t\to\infty} e^{-(\alpha I + A)t} = 0. Using this in (B.25) gives (B.24) for k = 0.
Now assume (B.24) holds for k = j, i.e.,
\[
(\alpha I + A)^{-j} = \int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt.
\]
Then
\[
\int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^j}{j!}\,dt
= \int_0^{\infty} -(\alpha I + A)^{-1}\frac{d}{dt}\big(e^{-(\alpha I + A)t}\big)\frac{t^j}{j!}\,dt
= -(\alpha I + A)^{-1}\left(\Big[e^{-(\alpha I + A)t}\,\frac{t^j}{j!}\Big]_0^{\infty} - \int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt\right).
\]
As we did in the k = 0 case, we now use the bound on the eigenvalues to get rid of the boundary term:
\[
\frac{t^j}{j!}e^{-(\alpha I + A)t} = P\begin{pmatrix} \frac{t^j}{j!}e^{-(\alpha + \lambda_1)t} & & 0\\ & \ddots & \\ 0 & & \frac{t^j}{j!}e^{-(\alpha + \lambda_n)t} \end{pmatrix}P^{-1} \to 0 \quad\text{as } t \to \infty.
\]
Therefore,
\[
\int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^j}{j!}\,dt
= (\alpha I + A)^{-1}\int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt
= (\alpha I + A)^{-1}(\alpha I + A)^{-j} = (\alpha I + A)^{-(j+1)},
\]
from which we deduce that (B.24) holds for all k \ge 0.
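Since a symmetric matrix diagonalizes, (B.24) reduces eigenvalue by eigenvalue to the scalar identity (\alpha+\lambda)^{-(k+1)} = \int_0^\infty e^{-(\alpha+\lambda)t} t^k/k!\,dt for \lambda > -\alpha. A small numerical sketch of that scalar identity (an illustration, not part of the original solution):

```python
import math

def integral(alpha, lam, k, T=60.0, n=100000):
    """Trapezoid approximation of the integral of e^{-(alpha+lam)t} t^k / k! on [0, T]."""
    h, fact, total = T / n, math.factorial(k), 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-(alpha + lam) * t) * t ** k / fact
    return total * h

alpha, lam, k = 2.0, -1.5, 3          # lam > -alpha, as the hypothesis requires
lhs = (alpha + lam) ** -(k + 1)       # (alpha + lam)^{-(k+1)}
rhs = integral(alpha, lam, k)
```

Here lhs = 0.5^{-4} = 16, and the quadrature reproduces it to within discretization error.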
2b) Let us begin with the implication (\forall u > 0, |||(I + uA)^{-k}||| \le M) \Rightarrow (\forall t > 0, |||e^{-At}||| \le M). We know from 1c) that e^{At} = \lim_{k\to\infty}(I + \frac{t}{k}A)^k. Thus
\[
e^{-At} = \lim_{k\to\infty}\Big(\big(I + \frac{t}{k}A\big)^k\Big)^{-1} = \lim_{k\to\infty}\Big(I + \frac{t}{k}A\Big)^{-k}. \tag{B.26}
\]
Let u = t/k with k \in \mathbb{N}^* and t > 0. Then
\[
\forall t > 0,\ \forall k \in \mathbb{N}^*,\ \Big|\Big|\Big|\Big(I + \frac{t}{k}A\Big)^{-k}\Big|\Big|\Big| \le M
\ \Rightarrow\ \forall t > 0,\ \lim_{k\to\infty}\Big|\Big|\Big|\Big(I + \frac{t}{k}A\Big)^{-k}\Big|\Big|\Big| \le M
\ \Rightarrow\ \forall t > 0,\ \Big|\Big|\Big|\lim_{k\to\infty}\Big(I + \frac{t}{k}A\Big)^{-k}\Big|\Big|\Big| \le M,
\]
which, using (B.26), implies that
\[
\forall t > 0, \quad |||e^{-At}||| \le M.
\]
We now treat the reverse implication, (\forall t > 0, |||e^{-At}||| \le M) \Rightarrow (\forall u > 0, |||(I + uA)^{-k}||| \le M). We have
\[
(I + uA)^{-k} = \Big[u\Big(\frac{1}{u}I + A\Big)\Big]^{-k} = u^{-k}\Big[\frac{1}{u}I + A\Big]^{-k}.
\]
Suppose that 0 < u < 1/\alpha, so that the eigenvalues of A satisfy \lambda_i > -\alpha \ge -1/u. Then, from 2a), it follows that
\[
(I + uA)^{-k} = u^{-k}\int_0^{\infty} e^{-(\frac{1}{u}I + A)t}\,\frac{t^{k-1}}{(k-1)!}\,dt,
\]
which, when taking the norm, gives
\[
|||(I + uA)^{-k}||| \le u^{-k}\int_0^{\infty} |||e^{-At}|||\, e^{-\frac{t}{u}}\,\frac{t^{k-1}}{(k-1)!}\,dt
\le \frac{M u^{-k}}{(k-1)!}\int_0^{\infty} e^{-\frac{t}{u}}\,t^{k-1}\,dt.
\]
Let \sigma = t/u; then dt = u\,d\sigma, and
\[
|||(I + uA)^{-k}||| \le \frac{M u^{-k}}{(k-1)!}\int_0^{\infty} e^{-\sigma}\,u^{k-1}\sigma^{k-1}\,u\,d\sigma
\le \frac{M}{(k-1)!}\int_0^{\infty} e^{-\sigma}\sigma^{k-1}\,d\sigma.
\]
The latter integral is \Gamma(k), the Gamma function. It is well known¹ that for k \in \mathbb{N}, \Gamma(k) = (k-1)!. Thus,
\[
|||(I + uA)^{-k}||| \le M.
\]
2c) Let us begin with the forward implication (\Rightarrow). To apply 2a) with k = 0, it suffices that the eigenvalues of A be greater than -\alpha. Take \alpha_0 = \alpha. Then, for \alpha > \alpha_0, the spectrum of \alpha I + A is positive, and so
\[
(\alpha I + A)^{-1} = \int_0^{\infty} e^{-(\alpha I + A)t}\,dt = \int_0^{\infty} e^{-\alpha t}\,e^{-At}\,dt \ge 0,
\]
since e^{-\alpha t} > 0 (the eigenvalues are real) and, by hypothesis, e^{-At} \ge 0.
Now for the reverse implication (\Leftarrow). That there exists \alpha_0 \in \mathbb{R} such that (\alpha I + A)^{-1} \ge 0 for all \alpha > \alpha_0 implies that
\[
\forall \alpha > \alpha_0,\ \forall k \in \mathbb{N}^*,\quad (\alpha I + A)^{-k} \ge 0
\]
for \alpha sufficiently large. Take \alpha = k/t; the previous expression can be written
\[
\forall t > 0,\ \forall k \ge k_0,\quad \Big(\frac{k}{t}I + A\Big)^{-k} \ge 0,
\]
where k_0 is sufficiently large. This implies that
\[
\forall t > 0,\ \forall k \ge k_0,\quad \Big(\frac{k}{t}\Big)^{-k}\Big(I + \frac{t}{k}A\Big)^{-k} \ge 0.
\]
As (k/t)^{-k} > 0,
\[
\forall t > 0,\ \forall k \ge k_0,\quad \Big(I + \frac{t}{k}A\Big)^{-k} \ge 0,
\]
so
\[
\forall t > 0,\quad \lim_{k\to\infty}\Big(I + \frac{t}{k}A\Big)^{-k} \ge 0,
\]
which finally implies that
\[
\forall t > 0,\quad e^{-At} \ge 0.
\]
¹See, e.g., M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions. Dover, 1965.
3) The results of the previous part hold. However, in the case of a nonsymmetric matrix, we need to ask for the real part of the eigenvalues to be greater than -\alpha.
MATH 4G03/6G03
Julien Arino
Duration of Examination: 72 Hours
McMaster University Final Examination, December 2003
This examination paper includes 4 pages and 3 questions. You are responsible for ensuring that your copy of the paper is complete.
Detailed Instructions
You have 72 hours, from the time you pick up and sign for this examination sheet, to complete this examination. You are to work on this examination by yourself. Any hint of collaborative work will be considered as evidence of academic dishonesty. You are not to have any outside contacts concerning this subject, except myself.
You can use any document that you find useful. If using theorems from outside sources, give the bibliographic reference, and show clearly how your problem fits in with the conditions of application of the theorem. When citing theorems from the lecture notes, refer to them by the number they have in the last version of the notes, as posted on the website on the first day of the examination period.
Pay attention to the form of your answers: as this is a take-home examination, you are expected to hand back a very legible document, in terms of the presentation of your answers. Show your calculations, but try to be concise.
This examination consists of 1 independent question and 2 problems. In questions or problems that have multiple parts, you are always allowed to consider as proved the results of a previous part, even if you have not actually done that part.
Foreword to the correction. This examination was long, but established the results in a very guided way. Exercise 1 was almost trivial. Both of the problems dealt with Sturm theory. This comes as an illustration of the richness of behaviors that can be observed in differential equations: simple equations such as (B.29) can have very complex behaviors.
Concerning the difficulty of the problems, it was not excessive. Problem 2 is a shortened and simplified version of a review problem for the CAPES, a French competition to hire high school teachers. The original problem comprised 23 questions, and was written by candidates in 5 hours. Problem 3 introduced the Wronskian, which we did not have time to cover during class. It also established further results of Sturm type.
Exercise 5.1. Consider the mapping A : t \mapsto A(t), continuous from \mathbb{R} to \mathcal{M}_n(\mathbb{R}), periodic of period \omega and such that A(t)A(s) = A(s)A(t) for all t, s \in \mathbb{R}. Consider the equation
\[
x'(t) = A(t)x. \tag{B.27}
\]
Let R(t, t_0) be the resolvent of (B.27).
i) Show that R(t + \omega, t_0 + \omega) = R(t, t_0) for all t, t_0 \in \mathbb{R}.
ii) Let u be an eigenvector of R(\omega, 0), associated to the eigenvalue \lambda. Show that the solution of (B.27) taking the value u for t_0 = 0 is such that
\[
x(t + \omega) = \lambda x(t), \quad \forall t \in \mathbb{R}. \tag{B.28}
\]
iii) Conversely, show that if x is a nontrivial solution of (B.27) such that (B.28) holds, then \lambda is an eigenvalue of R(\omega, 0).
Problem 5.2. The aim of this problem is to study some properties of the solutions of the differential equation
\[
x'' + q(t)x = 0, \tag{B.29}
\]
where q is a continuous function from \mathbb{R} to \mathbb{R}.
i) Show that for t_0, x_0, y_0 \in \mathbb{R}, there exists a unique solution of (B.29) such that x(t_0) = x_0, x'(t_0) = y_0.
Preliminary results: convex functions. A function f : I \subset \mathbb{R} \to \mathbb{R} is convex if, for all x, y \in I and all \lambda \in [0, 1],
\[
f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y).
\]
Before proceeding with the study of the solutions of (B.29), we establish a few useful results on convex functions.
ii) Let f be a function defined on \mathbb{R}, convex and nonnegative. Suppose that f has two zeros t_1, t_2 and that t_1 < t_2. Show that f is zero on the interval [t_1, t_2].
Let c \in \mathbb{R} and f be a convex function that is bounded from above on the interval [c, +\infty). It can then be shown that f is decreasing on [c, +\infty). Using this fact, show the following.
iii) Every convex function that is bounded from above on \mathbb{R} is constant.
Part I.
iv) Let a, b \in \mathbb{R}, a < b. We assume that (B.29) has a solution x, zero at a and at b and positive on (a, b). Show that
\[
\int_a^b |q(t)|\,dt > \frac{4}{b-a}.
\]
v) We suppose that \int_0^{\infty} |q(t)|\,dt converges. Let x be a bounded solution of (B.29). Determine the behaviour of x' as t \to \infty.
vi) We suppose that q \in C^1 and is positive and increasing on \mathbb{R}_+. Show that all solutions of (B.29) are bounded on \mathbb{R}_+.
Part II. We suppose in this part that q is nonpositive and is not the zero function.
vii) Let x be a solution of (B.29). Show that x^2 is a convex function.
viii) Show that if x is a solution of (B.29) that has two distinct zeros, then x \equiv 0.
ix) Show that if x is a bounded solution of (B.29), then x \equiv 0.
Part III.
x) Let x, y be two solutions of (B.29). Show that the function xy' - x'y is constant.
xi) Let x_1 and x_2 be the solutions of (B.29) that satisfy
\[
x_1(0) = 1,\ x_1'(0) = 0, \qquad x_2(0) = 0,\ x_2'(0) = 1.
\]
Show that (x_1, x_2) is a basis of the vector space S on \mathbb{R} of the solutions of (B.29). What is the value of x_1 x_2' - x_1' x_2? Can the functions x_1 and x_2 have a common zero? Justify your answer.
xii) Discuss the results of question 11) in the context of linear systems, i.e., transform (B.29) into a system of first-order differential equations and express question 11) and its answer in this context.
xiii) Show that if q is an even function, then the function x_1 is even and the function x_2 is odd.
Problem 5.3. The aim of this problem is to show some elementary properties of the Wronskian of a system of solutions, and to use them to study a second-order differential equation.
Consider the nth order ordinary differential equation
\[
x^{(n)} = a_0(t)x + a_1(t)x' + \cdots + a_{n-1}(t)x^{(n-1)}, \tag{B.30}
\]
where x^{(k)} denotes the kth derivative of x, d^k x / dt^k.
i) Find the matrix A(t) such that this system can be written as the first-order linear system y' = A(t)y.
Part I: Wronskian. Let f_1, \dots, f_n be n functions from \mathbb{R} into \mathbb{R} that are n - 1 times differentiable. We define W(f_1, \dots, f_n), the Wronskian of f_1, \dots, f_n, by
\[
W(f_1, \dots, f_n)(t) = \det\begin{pmatrix}
f_1(t) & \cdots & f_n(t)\\
f_1'(t) & \cdots & f_n'(t)\\
\vdots & & \vdots\\
f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t)
\end{pmatrix}.
\]
If f_1, \dots, f_n are linearly dependent, then W(f_1, \dots, f_n) = 0. The converse is false. Remember that the set of solutions of (B.30) forms a vector space S of dimension n.
ii) Using the equivalent linear system y' = A(t)y, show that every system of n solutions of (B.30) whose Wronskian is nonzero at some time constitutes a basis of S.
iii) Using the linear system x' = A(t)x, show that for every set of n solutions,
\[
W(t) = W(s)\exp\left(\int_s^t a_{n-1}(u)\,du\right).
\]
Part II: a theorem of Sturm. Let us now consider the second-order differential equation
\[
a_2(t)x'' + a_1(t)x' + a_0(t)x = 0. \tag{B.31}
\]
The objective here is to show the following theorem of Sturm.
Theorem B.5.6 (Sturm). Let f_1, f_2 be two independent solutions of (B.31). Between two consecutive zeros of f_1, there is exactly one zero of f_2.
We suppose f_1, f_2 are two independent solutions of (B.31).
iv) Let u and v be two consecutive zeros of f_1. Using the Wronskian W(f_1, f_2), show that u and v cannot be zeros of f_2.
v) Deduce the theorem. [Hint: consider the function f_1/f_2.]
Part III: another theorem of Sturm. Let us assume that we confine ourselves to segments of \mathbb{R} where a_2(t) \ne 0.
vi) Let
\[
x(t) = u(t)\exp\left(-\int_0^t \frac{a_1(s)}{2a_2(s)}\,ds\right).
\]
Show that (B.31) becomes u'' + q(t)u = 0.
We want to show the following theorem of Sturm.
Theorem B.5.7 (Sturm). Let
\[
x'' + q(t)x = 0, \quad q(t) \le 0 \tag{B.32}
\]
in an interval (t_1, t_2). Every solution of (B.32) that is not identically zero has at most one zero in the interval [t_1, t_2].
vii) Let \varphi be a solution of (B.32) on (t_1, t_2), and v be a zero of \varphi in this interval. Discuss the properties of \varphi. [Hint: Use of Problem 2, Part II is possible, but not strictly necessary.]
viii) Let u < v be another zero of \varphi in the interval (t_1, t_2). Discuss properties of \varphi. Conclude.
Solution Exercise 1. 1) Since the matrices A(t) commute pairwise, the resolvent is given by the exponential, so we have
\[
R(t+\omega, t_0+\omega) = \exp\left(\int_{t_0+\omega}^{t+\omega} A(s)\,ds\right)
= \exp\left(\int_{t_0}^{t} A(s+\omega)\,ds\right)
= \exp\left(\int_{t_0}^{t} A(s)\,ds\right)
= R(t, t_0).
\]
2) Let u be an eigenvector associated to the eigenvalue \lambda, i.e.,
\[
R(\omega, 0)u = \lambda u.
\]
Let x be the solution of (B.27) such that x(t_0) = x(0) = u. As x is a solution of (B.27), we have that, for all t,
\[
x(t) = R(t, t_0)u = R(t, 0)u.
\]
Therefore,
\[
x(t+\omega) = R(t+\omega, 0)u = R(t+\omega, \omega)R(\omega, 0)u = \lambda R(t+\omega, \omega)u = \lambda R(t, 0)u = \lambda x(t),
\]
and hence (B.28).
3) Let x be a nonzero solution of (B.27) such that, for all t \in \mathbb{R},
\[
x(t+\omega) = \lambda x(t).
\]
This is true in particular for t = 0, and hence x(\omega) = \lambda x(0). As x \not\equiv 0, there exists v \in \mathbb{R}^n \setminus \{0\} such that x(0) = v. Therefore,
\[
x(\omega) = \lambda v,
\]
and as a consequence,
\[
R(\omega, 0)v = \lambda v,
\]
and \lambda is an eigenvalue of R(\omega, 0).
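In the scalar case n = 1 the commutation hypothesis holds trivially and the resolvent is an explicit exponential, so parts i) and ii) can be checked numerically. A minimal sketch (not part of the original solution; a(t) = 0.1 + sin t is an illustrative choice of period 2π):

```python
import math

omega = 2 * math.pi               # the period of a

def a(t):
    return 0.1 + math.sin(t)

def integral_a(t0, t, n=20000):
    """Trapezoid rule for the integral of a over [t0, t]."""
    h = (t - t0) / n
    s = 0.5 * (a(t0) + a(t))
    for i in range(1, n):
        s += a(t0 + i * h)
    return s * h

def R(t, t0):
    """Resolvent of x' = a(t) x: here simply exp of the integral of a."""
    return math.exp(integral_a(t0, t))

lam = R(omega, 0.0)               # eigenvalue of the (here 1x1) matrix R(omega, 0)

def x(t):                         # solution with x(0) = u = 1
    return R(t, 0.0)
```

One can then verify R(t+ω, t_0+ω) = R(t, t_0), λ = e^{0.1ω}, and x(t+ω) = λx(t) at sample points.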
Solution Problem 2. This problem concerns what are called Sturm theory type results, that is, results dealing with the behavior of second order differential equations.
1) This is a standard application of the existence-uniqueness result of Cauchy-Lipschitz. To see that, transform the second order equation into a system of first order equations, by setting y = x'. Then, differentiating y, we obtain y' + qx = 0. Therefore, (B.29) is equivalent to the system of first order equations
\[
x' = y, \qquad y' = -q(t)x.
\]
This is a linear system, hence satisfies a Lipschitz condition, and we can apply the Cauchy-Lipschitz theorem.
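The reduction to a first-order system also makes numerical solution straightforward. As an illustration (a sketch, not part of the original solution), applying the classical fourth-order Runge-Kutta scheme to the system with q = 1 recovers x(t) = cos t for the data x(0) = 1, x'(0) = 0:

```python
import math

def rhs(t, x, y, q):
    # first-order system x' = y, y' = -q(t) x, equivalent to x'' + q(t) x = 0
    return y, -q(t) * x

def rk4(q, x0, y0, t_end, n=4000):
    """Classical RK4 integration of the equivalent first-order system."""
    t, x, y = 0.0, x0, y0
    h = t_end / n
    for _ in range(n):
        k1x, k1y = rhs(t, x, y, q)
        k2x, k2y = rhs(t + h / 2, x + h / 2 * k1x, y + h / 2 * k1y, q)
        k3x, k3y = rhs(t + h / 2, x + h / 2 * k2x, y + h / 2 * k2y, q)
        k4x, k4y = rhs(t + h, x + h * k3x, y + h * k3y, q)
        x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        t += h
    return x, y

# For q = 1, the unique solution with x(0) = 1, x'(0) = 0 is cos t.
x, y = rk4(lambda t: 1.0, 1.0, 0.0, 2.0)
```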
2) Since f is nonnegative and convex, we have that, for all \lambda \in [0, 1],
\[
0 \le f\big((1-\lambda)t_1 + \lambda t_2\big) \le (1-\lambda)f(t_1) + \lambda f(t_2).
\]
But we have supposed that f(t_1) = f(t_2) = 0. Hence we have that for all \lambda \in [0, 1],
\[
f\big((1-\lambda)t_1 + \lambda t_2\big) = 0,
\]
and so f is zero on [t_1, t_2].
3)
4)
5)
6)
7)
8)
9)
10)
11)
12)
13)
Solution Problem 3. This problem was also about Sturm results. But it also introduced the notion of the Wronskian, which is a very general tool intimately linked to the notion of resolvent matrix.
1) We let y_1 = x, y_2 = x', \dots, y_n = x^{(n-1)}. As a consequence, y_1' = y_2, y_2' = y_3, \dots, y_{n-1}' = y_n, and y_n' = a_0(t)y_1 + a_1(t)y_2 + \cdots + a_{n-1}(t)y_n. Written in matrix form, this is equivalent to y' = A(t)y, with y = (y_1, \dots, y_n)^T and
\[
A(t) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0\\
0 & 0 & 1 & \cdots & 0\\
\vdots & & \ddots & \ddots & \vdots\\
0 & \cdots & & 0 & 1\\
a_0(t) & a_1(t) & \cdots & & a_{n-1}(t)
\end{pmatrix}. \tag{B.33}
\]
2) We know that the system is equivalent to y' = A(t)y, with A(t) given by (B.33). To every basis (\varphi_1, \dots, \varphi_n) of the vector space of solutions of
\[
y' = A(t)y \tag{B.34}
\]
there corresponds a basis (f_1, \dots, f_n) of (B.30), where f_i is the first coordinate of the vector \varphi_i for every i. The converse is also true.
We know that a system (\varphi_1, \dots, \varphi_n) of solutions of (B.34) is a basis if \det(\varphi_1, \dots, \varphi_n) \ne 0, and it suffices for this that \det(\varphi_1, \dots, \varphi_n) be nonzero at one point.
Since we have \det(\varphi_1, \dots, \varphi_n) = W(f_1, \dots, f_n), the result follows.
3) This is a direct application of Liouville's theorem, which states that if R(t, s) is the resolvent of A(t), then
\[
\det R(t, s) = \exp\left(\int_s^t \operatorname{tr}A(u)\,du\right).
\]
And if a system of coordinates is fixed, for every fundamental matrix \Phi,
\[
\det\Phi(t) = \det\Phi(s)\exp\left(\int_s^t \operatorname{tr}A(u)\,du\right).
\]
Indeed, from Liouville's theorem,
\[
\det\big(\Phi(t)\Phi^{-1}(s)\big) = \exp\left(\int_s^t \operatorname{tr}A(u)\,du\right),
\]
which implies the previous display. Now, note that \det\Phi(t) = W(t) and \det\Phi(s) = W(s), while, for the matrix (B.33), \operatorname{tr}A(t) = a_{n-1}(t). This implies the result.
Note that for a system of solutions of (B.30), W \ne 0 if and only if the solutions are linearly independent (i.e., we have the converse implication).
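The Abel/Liouville formula of question 3 can be checked against a case where two independent solutions are known in closed form. The sketch below uses the damped oscillator x'' = -x - 2ζx' (an illustrative choice, not from the problem), for which a_1 = -2ζ and the solutions are explicit:

```python
import math

zeta = 0.25
w = math.sqrt(1 - zeta ** 2)

# Two independent solutions of x'' = -x - 2*zeta*x'  (so a_0 = -1, a_1 = -2*zeta).
def x1(t):  return math.exp(-zeta * t) * math.cos(w * t)
def x2(t):  return math.exp(-zeta * t) * math.sin(w * t)
def dx1(t): return math.exp(-zeta * t) * (-zeta * math.cos(w * t) - w * math.sin(w * t))
def dx2(t): return math.exp(-zeta * t) * (-zeta * math.sin(w * t) + w * math.cos(w * t))

def W(t):
    """Wronskian of (x1, x2)."""
    return x1(t) * dx2(t) - dx1(t) * x2(t)

# Abel/Liouville: W(t) = W(s) * exp(integral of a_1 from s to t), a_1 constant here.
s, t = 0.5, 3.0
predicted = W(s) * math.exp(-2 * zeta * (t - s))
```

Here W(t) = ω e^{-2ζt} exactly, so the predicted and computed values coincide to rounding error.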
4)
5)
6)
7)
University of Manitoba Math 8430
Fall 2006
Homework Sheet 1
Periodic solutions of differential equations
In this problem, we will study the solutions of some differential equations, and in particular, their periodic solutions.
Let T > 0 be a real number, P be the vector space of real valued, continuous and T-periodic functions defined on \mathbb{R}, and let a \in P. Define
\[
A = \int_0^T a(t)\,dt, \qquad g(t) = \exp\left(\int_0^t a(u)\,du\right),
\]
and endow P with the norm
\[
\|x\| = \sup_{t\in\mathbb{R}}|x(t)|.
\]
First part
1. For what value(s) of A does the differential equation
\[
x'(t) = a(t)x(t) \tag{E1}
\]
admit non trivial T-periodic solutions?
We now let b \in P, and consider the differential equation
\[
x'(t) = a(t)x(t) + b(t). \tag{E2}
\]
2.a. Describe the set of maximal solutions to (E2) and the intervals of definition of these solutions.
2.b. Describe the set of maximal solutions to (E2) that are T-periodic, first assuming A \ne 0, then A = 0.
Second part
In this part, we let H be a real valued C^1 function defined on \mathbb{R}^2, and consider the differential equation
\[
x'(t) = a(t)x(t) + H(x(t), t). \tag{E3}
\]
3. Check that a function x is a solution to (E3) if and only if it satisfies the condition
\[
x(t) = g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right).
\]
4. Suppose that H is T-periodic with respect to its second argument, and that A \ne 0. Show that, for all functions x \in P, the formula
\[
(U_H x)(t) = \frac{e^A}{1 - e^A}\,g(t)\int_t^{t+T} g(s)^{-1}H(x(s), s)\,ds
\]
defines a function U_H x \in P, and that x is a solution to (E3) if and only if U_H x = x.
In the rest of the problem, we let F be a real-valued C^1 function defined on \mathbb{R}^2, T-periodic with respect to its second argument; for all \varepsilon > 0, define H_\varepsilon = \varepsilon F and U_\varepsilon = U_{H_\varepsilon}, so that the differential equation (E3) is written
\[
x'(t) = a(t)x(t) + \varepsilon F(x(t), t). \tag{E4}
\]
Assume that A \ne 0. For all r > 0, we denote B_r the closed ball with centre 0 and radius r in the normed space P. We want to show the following assertion: for all r > 0, there exists \varepsilon_1 > 0 such that, for all \varepsilon \le \varepsilon_1, the differential equation (E4) has a unique solution x \in B_r, that we will denote x_\varepsilon.
We denote \mu_r (resp. \nu_r) the upper bound of the set |F(v, s)| (resp. |\frac{\partial F}{\partial v}(v, s)|), where v \in [-r, r] and s \in [0, T].
5.a. Find a real \varepsilon_0 > 0 such that, for all \varepsilon \le \varepsilon_0, U_\varepsilon(B_r) \subset B_r.
5.b. Find a real \varepsilon_1 \le \varepsilon_0 such that, for all \varepsilon \le \varepsilon_1, the restriction of U_\varepsilon to B_r is a contraction of B_r.
5.c. Conclude.
6. Study the behavior of the function x_\varepsilon when \varepsilon \to 0, the number r being fixed.
7. We now suppose that the function a is a constant k \ne 0 and that the function F takes the form F(v, s) = f(v). Determine the solution x_\varepsilon of (E4).
8. We now consider T = 1, k = -1 and f(v) = v^2, and thus (E4) takes the form
\[
x'(t) = -x(t) + \varepsilon x(t)^2. \tag{E5}
\]
8.a. Give possible values of \varepsilon_0 and \varepsilon_1.
8.b. Determine the solution x_\varepsilon of (E5).
8.c. Let \alpha \in \mathbb{R}. Show that there exists a unique maximal solution \varphi_\alpha of (E5) such that \varphi_\alpha(0) = \alpha. Determine precisely this solution, and graph several of these solutions.
Third part
Here, we consider the differential equation
\[
x'(t) = kx(t) + f(x(t)), \tag{E6}
\]
where k < 0, f is C^1 and zero at zero. We let
\[
\gamma = \sup_{u\in[-1,1]}|f'(u)|,
\]
and assume that \gamma < -k.
We propose to show the following result: if x is a maximal solution of (E6) such that |x(0)| < 1, then it is defined on [0, \infty) and, for all t \ge 0,
\[
|x(t)| \le |x(0)|\,e^{(k+\gamma)t}.
\]
9. In this question, we suppose that the set of t such that |x(t)| > 1 is non-empty, and we denote its lower bound by \tau. Show that, for all t \in [0, \tau],
\[
|x(t)| \le |x(0)|\,e^{(k+\gamma)t}.
\]
10. Conclude.
N.B. This result expresses the stability and the asymptotic stability of the trivial solution of (E6).
University of Manitoba Math 8430
Fall 2006
Homework Sheet 1 Solutions
1. The equation (E1) is a separable equation, so we write
\[
x'(t) = a(t)x(t)
\iff \frac{x'(t)}{x(t)} = a(t)
\iff \ln|x(t)| = \int_0^t a(s)\,ds + C
\iff |x(t)| = \exp\left(\int_0^t a(s)\,ds + C\right)
\iff x(t) = K\exp\left(\int_0^t a(s)\,ds\right),
\]
where it was assumed that integration starts at t_0 = 0, and where the sign of x(t) is absorbed into K \in \mathbb{R}. Since x(0) = K, the general solution to (E1) is thus
\[
x(t) = x(0)\exp\left(\int_0^t a(s)\,ds\right). \tag{B.35}
\]
A nontrivial solution (B.35) is T-periodic if it satisfies x(t + T) = x(t) for all t \ge 0. In particular (for simplicity), there must hold that x(T) = x(0). This leads to
\[
x(T) = x(0)
\iff x(0)\exp\left(\int_0^T a(s)\,ds\right) = x(0)
\iff \exp\left(\int_0^T a(s)\,ds\right) = 1
\iff \int_0^T a(s)\,ds = 0
\iff A = 0.
\]
2.a. We know that the general solution to the homogeneous equation (E1) associated to (E2) is given by (B.35). To find the general solution to (E2), we need a particular solution to (E2), or to use integrating factors or a variation of constants approach. We do the latter, since we already have the solution (B.35) to (E1). Returning to the solution with undetermined value for K, we consider the ansatz
\[
\phi(t) = K(t)\exp\left(\int_0^t a(s)\,ds\right) = K(t)g(t),
\]
where the second equality uses the definition of g(t). We have
\[
\phi'(t) = K'(t)g(t) + K(t)g'(t) = K'(t)g(t) + K(t)a(t)g(t).
\]
The function \phi is a solution to (E2) if and only if it satisfies (E2); therefore, \phi is a solution if and only if
\[
\phi'(t) = a(t)\phi(t) + b(t)
\iff K'(t)g(t) + K(t)a(t)g(t) = a(t)K(t)g(t) + b(t)
\iff K'(t)g(t) = b(t)
\iff K'(t) = \frac{b(t)}{g(t)}, \text{ for } g(t) \ne 0
\iff K(t) = \int_0^t \frac{b(s)}{g(s)}\,ds + C.
\]
Note that the remark that g(t) \ne 0 is made for form: as it is defined, g(t) > 0 for all t \ge 0. We conclude that the general solution to (E2) is given by
\[
x(t) = \left(\int_0^t \frac{b(s)}{g(s)}\,ds + C\right)\exp\left(\int_0^t a(s)\,ds\right).
\]
Since it will be useful to have information in terms of x(0) (as in question 1.), we note that C = x(0). Thus, the solution to (E2) through x(0) at t = 0 is given by
\[
x(t) = \left(\int_0^t \frac{b(s)}{g(s)}\,ds + x(0)\right)\exp\left(\int_0^t a(s)\,ds\right). \tag{B.36}
\]
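Formula (B.36) lends itself to a direct numerical check. The sketch below is an illustration, not part of the original solution; a = 1 and b = 1 are chosen so that a closed form, x(t) = (x(0) + 1)e^t - 1, is available for comparison:

```python
import math

def trap(f, t0, t1, n=1000):
    """Composite trapezoid rule for the integral of f over [t0, t1]."""
    h = (t1 - t0) / n
    if h == 0.0:
        return 0.0
    s = 0.5 * (f(t0) + f(t1))
    for i in range(1, n):
        s += f(t0 + i * h)
    return s * h

def solve_E2(a, b, x0, t):
    """Evaluate formula (B.36) for x' = a(t)x + b(t), x(0) = x0, by quadrature."""
    g = lambda s: math.exp(trap(a, 0.0, s))
    return (trap(lambda s: b(s) / g(s), 0.0, t) + x0) * g(t)

# Check against the closed form for a = 1, b = 1: x(t) = (x0 + 1) e^t - 1.
x0, t = 0.3, 1.2
numeric = solve_E2(lambda s: 1.0, lambda s: 1.0, x0, t)
exact = (x0 + 1.0) * math.exp(t) - 1.0
```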
With integrating factors, we would have done as follows: write the equation (E2) as
\[
x'(t) - a(t)x(t) = b(t).
\]
The integrating factor is then
\[
\mu(t) = \exp\left(-\int a(t)\,dt\right),
\]
and the general solution to (E2) is given by
\[
x(t) = \frac{1}{\mu(t)}\left(\int_0^t \mu(s)b(s)\,ds + C\right).
\]
Maximal solutions are solutions that are the restriction of no other solution.
2.b. Solutions to (E2) are T-periodic if x(T) = x(0); therefore, a T-periodic solution satisfies
\[
x(T) = x(0)
\iff \left(\int_0^T \frac{b(s)}{g(s)}\,ds + x(0)\right)\exp\left(\int_0^T a(s)\,ds\right) = x(0)
\iff \int_0^T \frac{b(s)}{g(s)}\,ds + x(0) = x(0)e^{-A}
\iff \int_0^T \frac{b(s)}{g(s)}\,ds = \left(e^{-A} - 1\right)x(0).
\]
Second part
3. Before proceeding, note that
\[
g'(t) = \frac{d}{dt}\exp\left(\int_0^t a(s)\,ds\right) = a(t)\exp\left(\int_0^t a(s)\,ds\right) = a(t)g(t).
\]
We differentiate x(t) = g(t)\big(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\big). This gives
\[
x'(t) = g'(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right) + g(t)\,\frac{H(x(t), t)}{g(t)}
= a(t)g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right) + H(x(t), t)
= a(t)x(t) + H(x(t), t),
\]
and thus x(t) = g(t)\big(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\big) is a solution to (E3).
4. Let x \in P. Then U_H x \in P if and only if U_H x is T-periodic. We have
\[
(U_H x)(t+T) = \frac{e^A}{1-e^A}\,g(t+T)\int_{t+T}^{t+2T} g(s)^{-1}H(x(s), s)\,ds.
\]
Remark that
\[
g(t+T) = \exp\left(\int_0^{t+T} a(s)\,ds\right)
= \exp\left(\int_0^{t} a(s)\,ds + \int_t^{t+T} a(s)\,ds\right)
= g(t)\exp\left(\int_t^{t+T} a(s)\,ds\right) = e^A g(t),
\]
since a(t) is T-periodic. Changing the variable of integration from s to s + T,
\[
(U_H x)(t+T) = \frac{e^A}{1-e^A}\,e^A g(t)\int_{t}^{t+T} g(s+T)^{-1}H(x(s+T), s+T)\,ds.
\]
By the computation above, g(s+T) = e^A g(s), while H(x(s+T), s+T) = H(x(s), s) since H is T-periodic in its second argument and x \in P. So, finally,
\[
(U_H x)(t+T) = \frac{e^A}{1-e^A}\,e^A g(t)\,e^{-A}\int_{t}^{t+T} g(s)^{-1}H(x(s), s)\,ds = (U_H x)(t).
\]
Therefore, U_H x \in P for x \in P.
Suppose that x(t) = (U_H x)(t). Then,
\[
x'(t) = \frac{e^A}{1-e^A}\left(g'(t)\int_t^{t+T}\frac{H(x(s), s)}{g(s)}\,ds + g(t)\left(\frac{H(x(t+T), t+T)}{g(t+T)} - \frac{H(x(t), t)}{g(t)}\right)\right)
= \frac{e^A}{1-e^A}\left(a(t)g(t)\int_t^{t+T}\frac{H(x(s), s)}{g(s)}\,ds + g(t)\left(\frac{H(x(t), t)}{e^A g(t)} - \frac{H(x(t), t)}{g(t)}\right)\right)
= a(t)\,\frac{e^A}{1-e^A}\,g(t)\int_t^{t+T}\frac{H(x(s), s)}{g(s)}\,ds + \frac{e^A}{1-e^A}\,g(t)\,\frac{(1-e^A)H(x(t), t)}{e^A g(t)}
= a(t)x(t) + H(x(t), t).
\]
5.a. We seek \varepsilon_0 > 0 such that, for all \varepsilon \le \varepsilon_0, \|x\| \le r \Rightarrow \|U_\varepsilon x\| \le r. Therefore, we compute \|U_\varepsilon x\|. We have, letting H(x, s) = \varepsilon F(x, s),
\[
\|U_\varepsilon x\| = \sup_{t\in\mathbb{R}}|(U_\varepsilon x)(t)|
= \sup_{t\in\mathbb{R}}\left|\varepsilon\,\frac{e^A}{1-e^A}\,g(t)\int_t^{t+T} g(s)^{-1}F(x(s), s)\,ds\right|
= \varepsilon\,\frac{e^A}{|1-e^A|}\sup_{t\in\mathbb{R}}|g(t)|\left|\int_t^{t+T}\frac{F(x(s), s)}{g(s)}\,ds\right|
\le \varepsilon\,\frac{e^A}{|1-e^A|}\sup_{t\in\mathbb{R}}|g(t)|\int_t^{t+T}\left|\frac{F(x(s), s)}{g(s)}\right|\,ds.
\]
Note that we keep the absolute value in |1 - e^A|, since A could be negative, leading to a negative value for 1 - e^A. Let \|g^{-1}\| = \sup_{t\in\mathbb{R}}|g^{-1}(t)|. We then have
\[
\|U_\varepsilon x\| \le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\,\|g^{-1}\|\sup_{t\in\mathbb{R}}\int_t^{t+T}|F(x(s), s)|\,ds
\le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\,\|g^{-1}\|\int_t^{t+T}\mu_r\,ds,
\]
since x(s) \in [-r, r], and so
\[
\|U_\varepsilon x\| \le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\,\|g^{-1}\|\,\mu_r T.
\]
Letting
\[
\varepsilon_0 = \left(\frac{e^A}{|1-e^A|}\,\|g\|\,\|g^{-1}\|\,\mu_r T\right)^{-1} r,
\]
we see that if \varepsilon \le \varepsilon_0, then \|U_\varepsilon x\| \le r.
5.b. For the restriction of U_\varepsilon to be a contraction, we must have the inequality obtained above, as well as, for x, y \in B_r, d(U_\varepsilon x, U_\varepsilon y) < d(x, y). In terms of the induced norm, this means that \|U_\varepsilon x - U_\varepsilon y\| < \|x - y\|. Therefore, letting x, y \in P be such that \|x\| \le r and \|y\| \le r, we compute
\[
\|U_\varepsilon x - U_\varepsilon y\| = \sup_{t\in\mathbb{R}}|(U_\varepsilon x)(t) - (U_\varepsilon y)(t)|
= \sup_{t\in\mathbb{R}}\left|\varepsilon\,\frac{e^A}{1-e^A}\,g(t)\int_t^{t+T}\frac{F(x(s), s) - F(y(s), s)}{g(s)}\,ds\right|
\le \varepsilon\,\frac{e^A}{|1-e^A|}\sup_{t\in\mathbb{R}}|g(t)|\int_t^{t+T}\left|\frac{F(x(s), s) - F(y(s), s)}{g(s)}\right|\,ds.
\]
For s \in [0, T] and x(s) \in [-r, r], we have, picking a y(s) \in [-r, r],
\[
|F(x(s), s)| = |F(x(s), s) - F(y(s), s) + F(y(s), s)|
\le |F(x(s), s) - F(y(s), s)| + |F(y(s), s)|
\le \nu_r|x(s) - y(s)| + \mu_r,
\]
from the mean value theorem, and thus |F(x(s), s)| \le 2\nu_r r + \mu_r.
5.c. We used the contraction mapping principle (Theorem ??):
- for \varepsilon \le \varepsilon_1, U_\varepsilon is a contraction of B_r;
- P is complete (it is closed in C(\mathbb{R}, \mathbb{R}) \cap B(\mathbb{R}) endowed with the supremum norm) and B_r is closed in P.
Therefore, we conclude that for a given r > 0, for all \varepsilon \le \varepsilon_1, there exists a unique solution x_\varepsilon of (E4) in B_r.
6. This is the contraction mapping theorem with a parameter:
\[
\|x_\varepsilon\| = \|U_\varepsilon x_\varepsilon\| \le \underbrace{\|U_\varepsilon x_\varepsilon - U_\varepsilon x_0\|}_{\le K\|x_\varepsilon - x_0\|} + \|U_\varepsilon x_0\|.
\]
But we have
\[
\|U_\varepsilon x_0\| \le |\varepsilon|\,\frac{e^A}{|1-e^A|}\int_t^{t+T}|g(t)g(s)^{-1}F(x_0(s), s)|\,ds
\le \underbrace{|\varepsilon|\,\frac{e^A}{|1-e^A|}\,e^{TA}\,T\mu_r}_{=|\varepsilon|K_\varepsilon}.
\]
Thus, we have
\[
\|x_\varepsilon\| \le \frac{|\varepsilon|K_\varepsilon}{1-K},
\]
and therefore \varepsilon \mapsto x_\varepsilon \in P is continuous; it follows that \lim_{\varepsilon\to 0} x_\varepsilon = x_0. But the only periodic solution of (E1) when A \ne 0 is the zero solution. Therefore, x_\varepsilon \to 0 when \varepsilon \to 0.
7. Let x_0(t) = c_0; then g(t) = e^{kt} and A = kT. We compute
\[
(U_\varepsilon x_0)(t) = \varepsilon\,\frac{e^{kT}}{1-e^{kT}}\,e^{kt}f(c_0)\int_t^{t+T} e^{-ks}\,ds
= \varepsilon\,\frac{e^{kT}}{1-e^{kT}}\,f(c_0)\,e^{kt}\left[-\frac{1}{k}e^{-ks}\right]_t^{t+T}
= \varepsilon\,\frac{e^{kT}}{1-e^{kT}}\,f(c_0)\,\frac{e^{kt}}{k}\left(e^{-kt} - e^{-kt}e^{-kT}\right)
= \varepsilon\,\frac{e^{kT}}{1-e^{kT}}\,\frac{f(c_0)}{k}\left(1 - e^{-kT}\right)
= -\frac{\varepsilon}{k}\,f(c_0).
\]
The constant function x_\varepsilon(t) = c_0 is the solution (where c_0 is the unique solution of the equation \varepsilon f(x) + kx = 0 for \varepsilon sufficiently small).
Note: letting \tilde g(x) = -\frac{\varepsilon}{k}f(x), it follows that \tilde g'(x) = -\frac{\varepsilon}{k}f'(x). Thus, for r > 0 given, there exists \varepsilon_0 > 0 such that \varepsilon \le \varepsilon_0 implies \tilde g([-r, r]) \subset [-r, r], and there exists \varepsilon_1 \le \varepsilon_0 such that \sup_{x\in[-r,r]}|\tilde g'(x)| < 1, so the fixed point theorem can be applied easily.
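The fixed-point characterization of question 7, kc + εf(c) = 0, is easy to illustrate numerically by iterating the contraction c ↦ -(ε/k)f(c). The sketch below uses f = cos, an illustrative choice that is not from the original sheet:

```python
import math

# Fixed-point iteration for the constant periodic solution of x' = k x + eps f(x):
# c_0 solves k c + eps f(c) = 0, i.e. c = -(eps/k) f(c).
k, eps = -1.0, 0.2
f = math.cos          # illustrative choice of f (not from the original sheet)

c = 0.0
for _ in range(100):
    c = -(eps / k) * f(c)     # the contraction whose fixed point is c_0

residual = k * c + eps * f(c)
```

For ε small the map has derivative of modulus well below 1 near the fixed point, so the iteration converges to machine precision.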
8.a. Using the formula obtained in 5.a. with \mu_r = r^2, \nu_r = 2r, g(t) = e^{-t}, A = -T, we get
\[
\varepsilon_0 = \frac{r(1-e^{-T})e^{T}}{e^{T}\,r^2\,T} = \frac{1-e^{-T}}{rT},
\qquad
\varepsilon_1 = \frac{1}{2}\cdot\frac{1-e^{-T}}{2rT} = \frac{\varepsilon_0}{4}.
\]
8.b. The zero function is clearly a 1-periodic solution of (E5). By uniqueness of the fixed point in B_r, x_\varepsilon = 0 is the only such solution of (E5).
8.c. The vector field -x + \varepsilon x^2 is C^1, and therefore existence and uniqueness of a maximal solution \varphi_\alpha is a direct consequence of the theorem of Cauchy-Lipschitz.
We solve the equation x' = -x + \varepsilon x^2 without the constraint of periodicity. There are two constant solutions, x(t) = 0 and x(t) = 1/\varepsilon. By uniqueness, any other solution never takes the values 0 and 1/\varepsilon. We have:
\[
\frac{x'}{x(1-\varepsilon x)} = \frac{d}{dt}\left(\ln\left|\frac{x}{1-\varepsilon x}\right|\right) = -1
\ \Rightarrow\ \frac{x}{1-\varepsilon x} = \lambda e^{-t}
\ \Rightarrow\ x(t) = \frac{\lambda e^{-t}}{1+\varepsilon\lambda e^{-t}}, \quad \lambda \in \mathbb{R}^*.
\]
The condition x(0) = \alpha gives \lambda = \frac{\alpha}{1-\varepsilon\alpha} if \alpha \ne 0 and \alpha \ne 1/\varepsilon, and as a consequence, letting \beta = \frac{1-\varepsilon\alpha}{\alpha}, we obtain
\[
\varphi_\alpha(t) = \frac{1}{\beta e^{t} + \varepsilon}.
\]
If \beta \ge 0 then \varphi_\alpha is defined on \mathbb{R}.
If \beta < 0, we let t_0 = \ln(-\varepsilon/\beta).
If \alpha > 1/\varepsilon then \beta < 0 and \beta + \varepsilon = 1/\alpha > 0, that is, t_0 > 0, and \varphi_\alpha is defined on ]-\infty, t_0[.
If \alpha < 0 then \beta < 0 and \beta + \varepsilon = 1/\alpha < 0, that is, t_0 < 0, and \varphi_\alpha is defined on ]t_0, +\infty[.
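The closed form φ_α can be verified directly against (E5). A small numerical check (an illustration, not part of the original solution), using a centered finite difference for the derivative:

```python
import math

eps, alpha = 1.0, 0.4
beta = (1 - eps * alpha) / alpha        # chosen so that phi(0) = alpha

def phi(t):
    return 1.0 / (beta * math.exp(t) + eps)

def dphi(t, h=1e-6):
    """Centered finite-difference approximation of phi'."""
    return (phi(t + h) - phi(t - h)) / (2 * h)

# phi should satisfy x' = -x + eps x^2 (equation (E5)) and x(0) = alpha.
t = 0.7
lhs = dphi(t)
rhs = -phi(t) + eps * phi(t) ** 2
```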
Here are a few representative solutions obtained when setting \varepsilon = 1.
[Figure: solution curves of (E5) for \varepsilon = 1, illustrating the three cases \alpha < 0, 0 < \alpha < 1/\varepsilon, and \alpha > 1/\varepsilon; axes t \in [-3, 3], x \in [-3, 3].]
Third part
9. \tau > 0 since |x(0)| < 1, and we have x(s) \in [-1, 1] for all s \in [0, \tau], by definition of \tau. Since f(0) = 0 and |f'| is bounded by \gamma on [-1, 1], we have, from the inequality of finite increments, |f(u) - f(0)| \le \gamma|u| for all u \in [-1, 1], that is, |f(x(s))| \le \gamma|x(s)| (recall f(0) = 0).
Let t \in [0, \tau]. From the integral representation of question 3.,
\[
|e^{-kt}x(t)| = \left|x(0) + \int_0^t e^{-ks}f(x(s))\,ds\right| \le |x(0)| + \int_0^t e^{-ks}\gamma|x(s)|\,ds,
\]
from which |e^{-kt}x(t)| \le |x(0)|e^{\gamma t} (using Gronwall's lemma with \phi(t) = e^{-kt}|x(t)|, C = |x(0)|, \lambda = \gamma), giving the inequality.
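The estimate |x(t)| ≤ |x(0)|e^{(k+γ)t} can be observed numerically. The sketch below uses k = -2 and f = sin, an illustrative choice satisfying f(0) = 0 and γ = sup_{|u|≤1}|cos u| = 1 < -k:

```python
import math

k = -2.0
f = math.sin      # f(0) = 0 and gamma = 1 < -k

def step(x, h):
    """One RK4 step for x' = k x + f(x)."""
    g = lambda x: k * x + f(x)
    k1 = g(x)
    k2 = g(x + h / 2 * k1)
    k3 = g(x + h / 2 * k2)
    k4 = g(x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x0, h, steps = 0.9, 0.001, 5000      # integrate on [0, 5]
x, ok = x0, True
for i in range(1, steps + 1):
    x = step(x, h)
    # check the Gronwall bound |x(t)| <= |x(0)| e^{(k+gamma) t} with gamma = 1
    ok = ok and abs(x) <= abs(x0) * math.exp((k + 1.0) * i * h) + 1e-9
```

The flag `ok` remains true along the whole trajectory, in agreement with the bound.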
10. Since k + \gamma < 0, it follows that |x(0)|e^{(k+\gamma)t} \le |x(0)| < 1. Letting E = \{t > 0 : |x(t)| > 1\}, which is assumed non-empty, we have \tau = \inf E > 0 (by continuity, since |x(0)| < 1 there exists \eta > 0 such that |x(t)| < 1 on [0, \eta], and thus \tau > 0). Since \lim_{t\to\tau^-}|x(t)| \le 1 and \lim_{t\to\tau^+}|x(t)| \ge 1, it follows that |x(\tau)| = 1.
On [0, \tau], |x(t)| \le |x(0)|e^{(k+\gamma)t} and, taking the limit as t \to \tau, |x(\tau)| < 1, which is impossible.
First conclusion: E = \emptyset and, if J is the interval of definition of x, then for all t \in J \cap [0, +\infty), |x(t)| \le 1.
If J admits a finite upper bound b, then x is bounded in a neighborhood of b, so x' is bounded there as well, and x admits a limit at b. The same is therefore true for x'. We then know that x can be extended beyond b, contradicting the maximality of J.
Final conclusion: J \cap [0, +\infty) = [0, +\infty), x is defined on [0, +\infty), and the proof of question 9. holds true for all t \ge 0, i.e.,
\[
\forall t \in [0, +\infty), \quad |x(t)| \le |x(0)|\,e^{(k+\gamma)t}.
\]
N.B. This result expresses the stability and the asymptotic stability of the trivial solution of (E6).
This subject was the Première composition de mathématiques for the contest determining admission to École Polytechnique in France, for MP (Math-Physics) track students, in 2004. Students, in their second year of university, have 4 hours to write this première composition.
The original subject comprised another question, originally question 3, which was suppressed in this homework sheet. To be complete, this question is included here:
2. We suppose here that T = 2\pi and that the function a is a constant k.
2.a. Assuming k \ne 0, express the Fourier coefficients \hat x(n), n \in \mathbb{Z}, of a solution x of (E2) belonging to P, as a function of k and the Fourier coefficients of b. What is the mode of convergence of the Fourier series of x?
2.b. What happens when k = 0?
2.a. If k \ne 0 then A = 2\pi k \ne 0, and from 2., there exists a unique 2\pi-periodic solution. Since the mapping x \mapsto \hat x(n) is linear, and from the relation \widehat{x'}(n) = in\,\hat x(n), we have
\[
in\,\hat x(n) = k\,\hat x(n) + \hat b(n) \iff \hat x(n) = \frac{\hat b(n)}{in - k}.
\]
Since x is C^1, we know that the Fourier series of x is normally convergent. Since \hat b(n) \to 0, we can also say that \hat x(n) = o\big(\frac{1}{n}\big).
2.b. Applying here again the result of 2.b.:
If \hat b(0) = 0 then all solutions are 2\pi-periodic. In this case, solutions satisfy \hat x(n) = \hat b(n)/(in) for n \in \mathbb{Z} nonzero, and \hat x(0) varies with the solution under consideration.
If \hat b(0) \ne 0 then no solution is periodic.
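The formula \hat x(n) = \hat b(n)/(in - k) can be checked on a single harmonic. A minimal sketch (b(t) = cos t, with coefficients \hat b(\pm 1) = 1/2, is an illustrative choice):

```python
import math, cmath

k = 1.0

def b(t):
    return math.cos(t)    # Fourier coefficients: b^(1) = b^(-1) = 1/2, others 0

def x(t):
    """Periodic solution built from x^(n) = b^(n) / (i n - k)."""
    return sum((0.5 * cmath.exp(1j * n * t) / (1j * n - k)).real for n in (-1, 1))

# Check x' = k x + b at a sample point via centered differences.
t, h = 0.9, 1e-6
dx = (x(t + h) - x(t - h)) / (2 * h)
```

The two complex terms are conjugates, so x is real, 2π-periodic, and satisfies (E2) with a = k.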

Das könnte Ihnen auch gefallen