
Chapter 10

Introduction to differential equations. Dynamical systems

10.1 Differential equations

A differential equation is any equation which contains derivatives, either ordinary or partial, of an unknown function. In the first case, which involves a function of only one variable, the differential equation is called an ordinary differential equation (abbreviated ODE); in the second case, which corresponds to functions of more than one variable, the equation is called a partial differential equation (PDE). In this last chapter of the course we will be dealing only with ODEs.
Solving a differential equation means finding all functions which, when substituted into the equation along with their corresponding derivatives, turn the equation into an identity.
Example: It is immediate to check that

x1(t) = e^t and x2(t) = e^{-t}

are both solutions to the ODE

x''(t) = x(t).
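A computer algebra system can confirm this; below is a minimal sketch (assuming sympy is available; the check itself is not part of the notes):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
ode = sp.Eq(x(t).diff(t, 2), x(t))  # the ODE x''(t) = x(t)

# checkodesol returns (True, 0) when a candidate solves the ODE exactly
for candidate in (sp.exp(t), sp.exp(-t)):
    print(candidate, sp.checkodesol(ode, sp.Eq(x(t), candidate)))
```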

More precisely, a solution to a differential equation on an interval t ∈ [a, b] is any function x(t) which satisfies the differential equation in question on the interval a < t < b. It is important to note that solutions are often accompanied by intervals, and these intervals can give some important information about the solution.

One of the most widely known ordinary differential equations is that given by Newton's Second Law of Motion. If an object of mass m is moving with acceleration a and being acted on by a force F, then Newton's Second Law tells us that F = ma, where the acceleration can be written as a = dv/dt or a = d^2x/dt^2, with v the velocity of the object and x the position function of the object at any time t. Notice that the force can be a function of t, x and/or v. Then Newton's law can be written as

m d^2x/dt^2 = F(t, x, dx/dt).
The order of a differential equation is the order of the highest derivative present in the equation. Newton's second law, for instance, is a second-order differential equation. In this chapter, we will be looking almost exclusively at first- and second-order ODEs.
The collection of all solutions of a differential equation can often be expressed by a single expression which contains parameters; this is called the general solution of the equation. If a function that is a solution of a given ODE has no parameters, it is called a particular solution.
Example: It can be proven that the general solution of the equation

x''(t) = x(t)

is given by

xg(t) = c1 e^t + c2 e^{-t}.

The constants c1 and c2 appearing in this solution are completely determined if some additional information about the function x(t) is specified. For example, if we know that x(0) = 0 and x'(0) = 1, the unique solution satisfying these conditions is

x(t) = (1/2) e^t - (1/2) e^{-t}.

Conditions of this type, where the values of the function and of its derivatives at a single point t0 are given, are called initial conditions.
If the initial conditions are instead x(0) = 0 and x'(0) = 0, then the unique solution satisfying them is

x(t) = 0.

This last solution is independent of the variable t; solutions of this type are called equilibrium solutions or steady-state solutions, and they are often among the most important solutions of differential equations.
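Both initial value problems can be reproduced symbolically; a minimal sketch, again assuming sympy:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
ode = sp.Eq(x(t).diff(t, 2), x(t))

# x(0) = 0, x'(0) = 1 picks out x(t) = (e^t - e^{-t})/2
print(sp.dsolve(ode, ics={x(0): 0, x(t).diff(t).subs(t, 0): 1}))

# Zero initial conditions give the equilibrium solution x(t) = 0
print(sp.dsolve(ode, ics={x(0): 0, x(t).diff(t).subs(t, 0): 0}))
```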

10.1.1 Linear differential equations with constant coefficients

Let us now focus on a particular type of ODE: linear differential equations with constant coefficients.
A linear differential equation is any differential equation that can be written in the following form:

a_n x^(n)(t) + a_{n-1} x^(n-1)(t) + ... + a_1 x'(t) + a_0 x(t) = g(t).
The important thing to note about linear differential equations is that there are no products of the function x(t) and its derivatives, and neither the function nor its derivatives occur to any power other than the first, though the coefficients a_n, ..., a_0 and g(t) may be zero or nonzero, constant or non-constant, linear or non-linear functions. If the right-hand side of this equation, g(t), is identically zero, then the equation is called homogeneous; otherwise, it is non-homogeneous.
Linear, homogeneous equations are particularly important, both because they commonly appear in nature and because they are generally easy to solve; they satisfy the following theorems:
Theorem (The Superposition Principle): If x1(t) and x2(t) are two solutions of a linear, homogeneous equation, then c1 x1(t) + c2 x2(t) is also a solution of the equation.
Theorem (The solution space): Given a homogeneous linear ODE of order n, we can find n linearly independent solutions. Furthermore, these n solutions, together with the combinations given by the superposition principle, are all of the solutions of the ODE. In other words, the set of solutions of the ODE is a vector space of dimension n.
Linear ODEs are especially simple when the coefficients are constant, and general methods to solve them can be provided (these will be studied in later courses). However, in this course we will explicitly solve them by transforming them into a system of the form X' = AX, and by using previously studied concepts, in particular the eigenvalues and a basis of eigenvectors (and generalized eigenvectors, if necessary) of A.

The following example shows how a linear differential equation with constant coefficients can be transformed into a linear system (of differential equations):
Example: Consider the second-order differential equation with constant coefficients

x'' + 3x' + 2x = 0.

Note that the independent variable t is dropped for clarity. If we introduce a new function of t by setting y = x', then y' = x'', which substituted into the equation yields y' = -3x' - 2x = -2x - 3y. Thus, we have two unknown functions of t, x and y, that are related by the linear system

x' = y
y' = -2x - 3y

which can be rewritten using matrix notation as

[ x' ]   [  0   1 ] [ x ]
[ y' ] = [ -2  -3 ] [ y ].

To simplify this notation we will usually write

X = [ x ]    and    X' = [ x' ]
    [ y ]                [ y' ]

and thus the system becomes

X' = AX,    where    A = [  0   1 ]
                         [ -2  -3 ].
However, despite this simplified notation, we must keep in mind that both X and X' are vectors whose components are functions of t.
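This reduction is exactly the form numerical integrators expect. A minimal sketch, assuming scipy and numpy are available (the initial state chosen is illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' + 3x' + 2x = 0 rewritten as X' = AX with X = (x, y) and y = x'
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def rhs(t, X):
    return A @ X

# Integrate from the illustrative initial state x(0) = 1, x'(0) = 0
sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0])
print(sol.y[:, -1])  # both components have decayed toward the equilibrium (0, 0)
```

The same trick turns any nth-order linear ODE into a system of n first-order equations.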

Thus a homogeneous, linear differential equation (with constant coefficients) is given by a matrix A ∈ R^{n×n} via X'(t) = AX(t), where X' denotes differentiation with respect to t. Any function X : R → R^n such that X'(t) = AX(t) for all t ∈ R provides a solution of the differential equation. The initial value problem for a linear differential equation X' = AX can be rephrased as finding, for a given initial value X0 ∈ R^n at t = t0, a solution X(t) that satisfies X(t0) = X0.
When the independent variable t represents time, we normally say that the solutions of a linear differential equation X' = AX define a (continuous) dynamical system (or linear flow) in R^n.
In the following section we study how to solve linear systems, and thus linear differential equations.

10.2 Planar systems of differential equations

For the remainder of this chapter we will deal with systems in R^2, which assume the simple form

x' = ax + by
y' = cx + dy

where a, b, c, d are constants, though the results included here are easily extended to R^n. As we have seen before, we may abbreviate this system by using the coefficient matrix A, where

A = [ a  b ]
    [ c  d ].

Then the linear system may be written as

X' = AX.
Note that the origin is always a solution to this system; therefore it is an equilibrium
point for the system. To find other equilibria, we must solve the linear system of
algebraic equations

ax + by = 0
cx + dy = 0.

This system has a nonzero solution if and only if det(A) = 0. In such a case, there is a
straight line through the origin on which each point is an equilibrium. Thus we have
the following result:

Theorem: The planar linear system X' = AX has:

1. a unique equilibrium point (0, 0) if det(A) ≠ 0;
2. a straight line of equilibrium points if det(A) = 0 (and A ≠ 0).

Now we turn to the question of finding non-equilibrium solutions of the linear system X' = AX. The key observation here is this:

If V0 is an eigenvector of A associated to the eigenvalue λ ∈ R, then the function

X(t) = e^{λt} V0

is a solution of the system.
Proof: The derivative of X(t) (with respect to t) is X'(t) = λ e^{λt} V0. If we plug this into the system, we get

λ e^{λt} V0 = X'(t) = AX = A e^{λt} V0,

or

λ e^{λt} V0 = e^{λt} AV0,

which holds for all t, since V0 is an eigenvector associated to λ, that is, AV0 = λV0.
In particular, note the vectorial nature of the solution.
We also have the following theorem, which is a direct consequence of the superposition principle and the previous result:
Theorem: Suppose A has a pair of real eigenvalues λ1 ≠ λ2, and associated eigenvectors V1 and V2. Then the general solution of the linear system X' = AX is given by

X(t) = c1 e^{λ1 t} V1 + c2 e^{λ2 t} V2.

Example: Consider the linear system

X' = [  0   1 ] X = AX.
     [ -2  -3 ]

The characteristic equation of A is λ^2 + 3λ + 2 = (λ + 1)(λ + 2) = 0, so the system has eigenvalues -1 and -2. An eigenvector corresponding to the eigenvalue λ1 = -1 is obtained by solving the equation

(A + I) [ x ] = [ 0 ]
        [ y ]   [ 0 ]

which yields V1 = [  1 ]. In a similar way, we get that an eigenvector associated to
                  [ -1 ]

λ2 = -2 is V2 = [  1 ]. Note that these two eigenvectors are linearly independent.
                [ -2 ]

Then, the general solution of the system is

X(t) = c1 e^{-t} [  1 ] + c2 e^{-2t} [  1 ]
                 [ -1 ]              [ -2 ]

which translates into x(t) = c1 e^{-t} + c2 e^{-2t}, and y(t) = -c1 e^{-t} - 2 c2 e^{-2t}.
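Both the eigenvalue computation and the general solution can be cross-checked numerically; a sketch with numpy/scipy, where the comparison against the matrix exponential e^{At} (a tool not used in these notes) serves as an extra consistency check:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(np.sort(eigvals))  # [-2. -1.]

# Particular solution with c1 = c2 = 1: X(t) = e^{-t}(1, -1) + e^{-2t}(1, -2),
# compared against the matrix exponential applied to X(0) = (2, -3)
t = 1.0
X_exact = np.exp(-t) * np.array([1.0, -1.0]) + np.exp(-2 * t) * np.array([1.0, -2.0])
print(np.allclose(X_exact, expm(A * t) @ np.array([2.0, -3.0])))  # True
```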

The previous result also applies to the case of having only one eigenvalue λ with geometric multiplicity equal to 2. In such a case, we can find two linearly independent eigenvectors V1 and V2, and the general solution is given by

X(t) = c1 e^{λt} V1 + c2 e^{λt} V2.

More complicated situations appear when the eigenvalues are complex and when there is only one eigenvalue with geometric multiplicity equal to 1.

10.2.1 Complex eigenvalues

Consider a 2 × 2 matrix A with complex eigenvalues. Since the characteristic equation has real coefficients, its complex roots must occur in conjugate pairs: λ1 = a + bi, λ2 = a - bi. If we consider λ1, we can easily check that any associated eigenvector V1 has complex components. Also, it is straightforward to show that

X(t) = e^{λ1 t} V1 = e^{(a+bi)t} V1

is a solution to the ODE. However, such a solution is not real but complex, and we are interested in finding only real solutions. The following theorem provides a technique to obtain two real, linearly independent solutions:
Theorem: Given a system X' = AX, where A is a real matrix, if X = X1 + iX2 is a complex solution, then its real and imaginary parts X1, X2 are also (real) solutions to the system.
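A sketch of how the theorem is used in practice, with an illustrative rotation-type matrix whose eigenvalues are ±i (numpy assumed):

```python
import numpy as np

# Illustrative matrix with eigenvalues +/- i; its eigenvectors are complex
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])
eigvals, eigvecs = np.linalg.eig(A)
lam, V = eigvals[0], eigvecs[:, 0]

def X_complex(s):
    return np.exp(lam * s) * V  # complex solution e^{lam t} V

# Real and imaginary parts are two real solutions: check X' = AX
# with a centered finite difference at t = 0.7
t, h = 0.7, 1e-6
for part in (np.real, np.imag):
    deriv = (part(X_complex(t + h)) - part(X_complex(t - h))) / (2 * h)
    print(np.allclose(deriv, A @ part(X_complex(t))))  # True, True
```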

10.2.2 Repeated eigenvalues of a defective matrix

In the case that A is a defective matrix, we can find only one linearly independent eigenvector V1 associated to the eigenvalue λ of algebraic multiplicity 2. We know that e^{λt} V1 is a solution to the system. To come up with a second, linearly independent solution, we can use a generalized eigenvector Ṽ2. Recall that such a generalized eigenvector can be computed by means of the Jordan canonical form

J = [ λ  1 ]
    [ 0  λ ]

with the system

AP = PJ.

It can be checked that the general expression for a generalized eigenvector Ṽ2 can be written as

Ṽ2 = αV1 + U,

where α is a scalar and U is a (particular) generalized eigenvector. If we rename the scalar α as t (the independent variable in the equation) in the expression for Ṽ2, then

X(t) = e^{λt} Ṽ2 = e^{λt} (tV1 + U)

is a solution linearly independent of e^{λt} V1.

Alternatively, we can select any particular generalized eigenvector P2 and write the solution directly as

X(t) = e^{λt} (tV1 + P2).
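A particular generalized eigenvector P2 can be obtained by solving (A - λI)P2 = V1. The following numpy sketch uses an illustrative defective matrix; solving the singular system by least squares is just one convenient way to pick a particular solution:

```python
import numpy as np

# Illustrative defective matrix: repeated eigenvalue 2, single eigenvector (1, 0)
lam = 2.0
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
V1 = np.array([1.0, 0.0])

# A particular generalized eigenvector solves (A - lam*I) P2 = V1; the matrix
# is singular, so least squares picks one particular solution.
P2, *_ = np.linalg.lstsq(A - lam * np.eye(2), V1, rcond=None)

def X(t):
    return np.exp(lam * t) * (t * V1 + P2)

# Verify X' = AX with a centered finite difference
t, h = 0.3, 1e-6
deriv = (X(t + h) - X(t - h)) / (2 * h)
print(np.allclose(deriv, A @ X(t)))  # True
```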

10.3 Study of planar linear systems

One remarkable fact about planar linear systems is that we can sketch the solutions

X(t) = [ x(t) ]
       [ y(t) ]

without actually knowing their analytic expression. To do that, we simply have to notice that if, for a given value of t, say t0, we evaluate the right-hand sides of the system at the point (x(t0), y(t0)), we obtain (x'(t0), y'(t0)), which gives the slope of the tangent to the curve representing the solution that passes through (x(t0), y(t0)).
Thus, we regard the right-hand side of the system as defining a vector field on R^2. To that end, we think of X' as representing a vector in the Euclidean plane, whose x and y components are given by the first and second coordinates of AX, respectively, and next we visualize this vector as being based at the point (x, y).

Obviously, doing this by hand is intensive work, but with a computer we can easily obtain, for example, the vector field associated to the system

x' = y
y' = -x

shown in the accompanying figure.

To avoid vector overlapping, we might draw a direction field instead, which consists of scaled versions of the vectors; following the directions displayed by the field, we can sketch the solutions of the system (in this case, circles centred at the origin) before solving it.
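With matplotlib, such a direction field takes only a few lines; a sketch for the system x' = y, y' = -x above (grid size and plot styling are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Base points on a grid, and the field (x', y') = (y, -x) evaluated there
xs, ys = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))
us, vs = ys, -xs

# Rescale to unit length: a direction field instead of a vector field
norm = np.hypot(us, vs)
norm[norm == 0] = 1.0  # guard against division by zero at the origin
plt.quiver(xs, ys, us / norm, vs / norm, angles='xy')
plt.gca().set_aspect('equal')
plt.title("Direction field of x' = y, y' = -x")
plt.show()
```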

A solution of a planar system should now be thought of as a parameterized curve in the plane of the form (x(t), y(t)) such that, for each t, the tangent vector at the point (x(t), y(t)) is (x'(t), y'(t)). That is, as t increases, the solution curve (x(t), y(t)) winds its way through the plane, always tangent to the given vector AX(t) based at (x(t), y(t)). If t is regarded as time, the plot of the trajectory represents how the solution evolves as time passes, thus showing its dynamical behaviour.

To illustrate this qualitative description of the solutions, we will consider the following cases, involving real eigenvalues.

A. Real distinct eigenvalues

Consider X' = AX and suppose that A has two real eigenvalues λ1 < λ2. Assuming for the moment that λi ≠ 0, there are three cases to consider:

1. λ1 < 0 < λ2,
2. λ1 < λ2 < 0,
3. 0 < λ1 < λ2.

Example. Saddle: First consider the simple system X' = AX where

A = [ λ1   0 ]
    [ 0   λ2 ]

with λ1 < 0 < λ2. This can be solved immediately, since the system decouples into two unrelated first-order equations:

x' = λ1 x
y' = λ2 y.

The characteristic equation is

(λ - λ1)(λ - λ2) = 0,

so λ1 and λ2 are the eigenvalues. An eigenvector corresponding to λ1 is V1 = (1, 0)^T and to λ2 is V2 = (0, 1)^T. Hence, the general solution is

X(t) = c1 e^{λ1 t} [ 1 ] + c2 e^{λ2 t} [ 0 ]
                   [ 0 ]               [ 1 ].
Since λ1 < 0, the straight-line solutions of the form c1 e^{λ1 t} (1, 0)^T lie on the x axis and tend to (0, 0)^T as t → ∞. This axis is called the stable line. Since λ2 > 0, the solutions c2 e^{λ2 t} (0, 1)^T lie on the y axis and tend away from (0, 0)^T as t → ∞; this axis is the unstable line.

All other solutions (with c1, c2 ≠ 0) tend to infinity in the direction of the unstable line as t → ∞, since X(t) comes closer and closer to (0, c2 e^{λ2 t})^T as t increases.
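The saddle portrait is easy to reproduce from the general solution; a matplotlib sketch with the illustrative values λ1 = -1, λ2 = 1:

```python
import numpy as np
import matplotlib.pyplot as plt

l1, l2 = -1.0, 1.0  # illustrative choice with l1 < 0 < l2
t = np.linspace(-2.0, 2.0, 400)

# x(t) = c1*e^{l1 t}, y(t) = c2*e^{l2 t} for several choices of c1, c2
for c1 in (-1.0, 1.0):
    for c2 in (-1.0, -0.25, 0.25, 1.0):
        plt.plot(c1 * np.exp(l1 * t), c2 * np.exp(l2 * t))

plt.axhline(0, color='k')  # the x axis: stable line
plt.axvline(0, color='k')  # the y axis: unstable line
plt.xlim(-3, 3); plt.ylim(-3, 3)
plt.gca().set_aspect('equal')
plt.title('Saddle: solutions escape along the unstable line')
plt.show()
```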

Example. Sink: Now consider the case X' = AX where

A = [ λ1   0 ]
    [ 0   λ2 ]

with λ1 < λ2 < 0. As above, we find two straight-line solutions and then the general solution:

X(t) = c1 e^{λ1 t} [ 1 ] + c2 e^{λ2 t} [ 0 ]
                   [ 0 ]               [ 1 ].

Unlike the saddle case, now all solutions tend to (0, 0)^T as t → ∞. The question is: how do they approach the origin? To answer this, we compute the slope dy/dx of a solution with c2 ≠ 0. We write

x(t) = c1 e^{λ1 t}
y(t) = c2 e^{λ2 t}

and compute

dy/dx = (dy/dt) / (dx/dt) = (λ2 c2 e^{λ2 t}) / (λ1 c1 e^{λ1 t}) = (λ2 c2) / (λ1 c1) e^{(λ2 - λ1) t}.

Since λ1 < λ2 < 0, it follows that these slopes approach ±∞ (provided c2 ≠ 0). Thus these solutions tend to the origin tangentially to the y axis.
Since λ1 < λ2 < 0, we call λ1 the stronger eigenvalue and λ2 the weaker eigenvalue. The reason for this, in this particular case, is that the x coordinates of solutions tend to 0 much more quickly than the y coordinates. This accounts for why solutions (except those on the line corresponding to the λ1 eigenvector) tend to hug the straight-line solution corresponding to the weaker eigenvalue as they approach the origin. In this case the equilibrium point is called a sink.
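This "hugging" can be observed numerically: along any trajectory with c2 ≠ 0, the ratio x/y collapses to zero, so the curve flattens onto the weak eigendirection (the y axis). A small sketch with the illustrative values λ1 = -3, λ2 = -1:

```python
import numpy as np

l1, l2 = -3.0, -1.0  # illustrative: l1 is the stronger eigenvalue
c1, c2 = 1.0, 1.0

for t in (0.0, 2.0, 4.0, 6.0):
    x = c1 * np.exp(l1 * t)
    y = c2 * np.exp(l2 * t)
    # x/y = (c1/c2) e^{(l1 - l2) t} -> 0: the curve flattens onto the y axis
    print(f"t = {t:.0f}:  x/y = {x / y:.2e}")
```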
Example. Source: When the matrix

A = [ λ1   0 ]
    [ 0   λ2 ]

satisfies 0 < λ2 < λ1, our vector field may be regarded as the negative of the previous example. The general solution and phase portrait remain the same, except that all solutions now tend away from (0, 0) along the same paths.

B. Repeated real eigenvalues

The only remaining cases occur when A has repeated real eigenvalues. One simple case occurs when A is a diagonal matrix of the form

A = [ λ  0 ]
    [ 0  λ ]

where the eigenvalues of A are both equal to λ. In this case, every nonzero vector is an eigenvector, since

AV = λV

for any V ∈ R^2. Hence solutions are of the form

X(t) = c1 e^{λt} V.

Each such solution lies on a straight line through (0, 0)^T, and either tends to (0, 0)^T (if λ < 0) or away from it (if λ > 0).

A more interesting case occurs when

A = [ λ  1 ]
    [ 0  λ ].

Again both eigenvalues are equal to λ, but now there is only one linearly independent eigenvector, given by (1, 0)^T. Hence we have one straight-line solution

X1(t) = c1 e^{λt} [ 1 ]
                  [ 0 ].
To find other solutions, note that the system can be written as

x' = λx + y
y' = λy.

Thus, if y ≠ 0, we must have

y(t) = c2 e^{λt}.

Therefore the differential equation for x(t) reads

x' = λx + c2 e^{λt}.

This is a first-order differential equation for x(t). The best option is to guess a solution of the form

x(t) = c1 e^{λt} + αt e^{λt},

for some constants c1 and α. This technique is often called the method of undetermined coefficients. Inserting this guess into the differential equation shows that α = c2, while c1 is arbitrary. Hence the solution of the system may be written as

X(t) = c1 e^{λt} [ 1 ] + c2 e^{λt} [ t ]
                 [ 0 ]             [ 1 ],

which is its general solution.



Note that, if λ < 0, each term in this solution tends to 0 as t goes to infinity. This is clear for the c1 e^{λt} and c2 e^{λt} terms; for the term c2 t e^{λt} it is an immediate consequence of l'Hôpital's rule. Hence all solutions tend to (0, 0) as t goes to infinity. When λ > 0, all solutions tend away from (0, 0). In fact, solutions tend toward or away from the origin in a direction tangent to the eigenvector (1, 0)^T.
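sympy's system solver reproduces this general solution; a sketch with the illustrative value λ = -1:

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.Function('x'), sp.Function('y')
lam = -1  # illustrative repeated eigenvalue

# The defective system x' = lam*x + y, y' = lam*y
eqs = [sp.Eq(x(t).diff(t), lam * x(t) + y(t)),
       sp.Eq(y(t).diff(t), lam * y(t))]
print(sp.dsolve(eqs))
# Expected, up to constant naming:
# [Eq(x(t), (C1 + C2*t)*exp(-t)), Eq(y(t), C2*exp(-t))]
# i.e. X(t) = c1 e^{lam t}(1, 0) + c2 e^{lam t}(t, 1), as derived above
```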

