Differential equations
$x_1(t) = e^t$ and $x_2(t) = e^{-t}$,
One of the most widely known ordinary differential equations is that given by Newton's Second Law of Motion. If an object of mass $m$ is moving with acceleration $a$ and is acted on by a force $F$, then Newton's Second Law tells us that $F = ma$, where the acceleration can be written as $a = \frac{dv}{dt}$ or $a = \frac{d^2x}{dt^2}$, with $v$ the velocity of the object and $x$ the position function of the object at any time $t$. Notice that the force can be a function of $t$, $x$ and/or $v$. Then Newton's law can be written as
$$m\,\frac{d^2x}{dt^2} = F\left(t, x, \frac{dx}{dt}\right).$$
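As a quick illustration of how such an equation is used (the constant-gravity force and all the numbers below are my own, not from the text), $m\,x'' = F$ can be integrated step by step once $F$ is specified. A minimal forward-Euler sketch:

```python
# Minimal forward-Euler integration of m x'' = F(t, x, v).
# Hypothetical example force: constant gravity, F = -m*g.
m, g = 2.0, 9.8
F = lambda t, x, v: -m * g

dt, steps = 1e-4, 10_000           # integrate from t = 0 to t = 1
x, v = 0.0, 0.0                    # initial position and velocity
for i in range(steps):
    t = i * dt
    a = F(t, x, v) / m             # Newton's second law: a = F/m
    x += v * dt                    # x' = v
    v += a * dt                    # v' = a
print(x)   # close to -g/2 = -4.9, the exact value of -g t^2 / 2 at t = 1
```

With a force depending on $x$ or $v$ the same loop applies unchanged, which is why writing $F(t, x, dx/dt)$ in full generality is worthwhile.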
The order of a differential equation is the order of the highest derivative present in the differential equation. Newton's second law, for instance, is a second-order differential equation. In this chapter we will be looking almost exclusively at first- and second-order ODEs.
The collection of all solutions of a differential equation can often be expressed by a single expression containing parameters; this expression is called the general solution of the equation. If a function that is a solution of a given ODE has no parameters, it is called a particular solution.
Example: It can be proven that the general solution of the equation
$$x''(t) = x(t)$$
is given by
$$x_g(t) = c_1 e^t + c_2 e^{-t}.$$
The constants $c_1$ and $c_2$ appearing in this solution are completely determined if some additional information about the function $x(t)$ is specified. For example, if we know that $x(0) = 0$ and $x'(0) = 1$, the unique solution satisfying these conditions is
$$x(t) = \frac{1}{2}e^t - \frac{1}{2}e^{-t}.$$
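This claimed solution is easy to sanity-check numerically. The pure-Python sketch below approximates $x'$ and $x''$ by central differences and confirms both the equation $x'' = x$ and the initial conditions:

```python
import math

# The particular solution x(t) = (1/2)e^t - (1/2)e^{-t}  (this is sinh t).
x = lambda t: 0.5 * math.exp(t) - 0.5 * math.exp(-t)

h = 1e-4
xp  = lambda t: (x(t + h) - x(t - h)) / (2 * h)            # x'(t), central difference
xpp = lambda t: (x(t + h) - 2 * x(t) + x(t - h)) / h**2    # x''(t)

assert abs(x(0.0)) < 1e-9          # x(0) = 0
assert abs(xp(0.0) - 1.0) < 1e-6   # x'(0) = 1
for t in (0.0, 0.5, 1.3):
    assert abs(xpp(t) - x(t)) < 1e-4   # x'' = x along the curve
print("ok")
```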
Conditions of this type, where the values of the function and of its derivatives at a single point $t_0$ are given, are called initial conditions.
If the initial conditions are instead $x(0) = 0$ and $x'(0) = 0$, then the unique solution satisfying the initial conditions is
$$x(t) = 0.$$
This last solution is independent of the variable $t$; solutions of this type are called equilibrium solutions or steady-state solutions, and they are often among the most important solutions of a differential equation.
10.1.1 Linear differential equations with constant coefficients
Let us now focus on a particular type of ODE called linear differential equations with constant coefficients.
A linear differential equation is any differential equation that can be written in the following form:
$$a_n x^{(n)}(t) + a_{n-1} x^{(n-1)}(t) + \cdots + a_1 x'(t) + a_0 x(t) = g(t).$$
The important thing to note about linear differential equations is that there are no products of the function $x(t)$ and its derivatives, and neither the function nor its derivatives occur to any power other than the first. The coefficients $a_n, \ldots, a_0$ and $g(t)$ can be zero or non-zero, constant or non-constant, linear or non-linear functions. If the right-hand side of the equation, $g(t)$, is identically zero, then the equation is called homogeneous; otherwise, it is non-homogeneous.
Linear, homogeneous equations are particularly important, as they commonly appear
in nature, and also because they are generally easy to solve; they satisfy the following
theorems:
Theorem (The Superposition Principle): If $x_1(t)$ and $x_2(t)$ are two solutions to a linear, homogeneous equation, then $c_1 x_1(t) + c_2 x_2(t)$ is also a solution to the equation.
Theorem (The solution space): Given a homogeneous linear ODE of order $n$, we can find $n$ linearly independent solutions. Furthermore, these $n$ solutions, along with the combinations given by the superposition principle, are all of the solutions of the ODE. In other words, the set of solutions of the ODE is a vector space of dimension $n$.
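The superposition principle can be illustrated with the earlier example $x'' = x$, whose solutions include $x_1(t) = e^t$ and $x_2(t) = e^{-t}$: any combination $c_1 x_1 + c_2 x_2$ (arbitrary test constants below) again satisfies the equation. A finite-difference sketch:

```python
import math

x1 = lambda t: math.exp(t)      # a solution of x'' - x = 0
x2 = lambda t: math.exp(-t)     # another solution
c1, c2 = 3.0, -2.0              # arbitrary constants
x  = lambda t: c1 * x1(t) + c2 * x2(t)   # superposition

h = 1e-4
xpp = lambda f, t: (f(t + h) - 2 * f(t) + f(t - h)) / h**2
for t in (-1.0, 0.0, 2.0):
    assert abs(xpp(x, t) - x(t)) < 1e-4   # x'' = x still holds
print("superposition holds")
```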
Linear ODEs are especially simple when the coefficients are constant, and general methods to solve them can be provided (these will be studied in later courses). However, in this course we will solve them explicitly by transforming them into a system of the form $X' = AX$, and by using previously studied concepts, in particular the eigenvalues and a basis of eigenvectors (and generalized eigenvectors, if necessary) of $A$.
The following example shows how a linear differential equation with constant coefficients can be transformed into a linear system (of differential equations):
Example: Consider the second-order differential equation with constant coefficients
$$x'' + 3x' + 2x = 0.$$
Note that the independent variable $t$ is dropped for clarity. If we introduce a new variable (a function of $t$) by setting $y = x'$, then $y' = x''$, which substituted into the equation yields $y' = -3x' - 2x = -2x - 3y$. Thus, we have two unknown functions of $t$, $x$ and $y$, that satisfy the system of differential equations
$$x' = y$$
$$y' = -2x - 3y,$$
or, in matrix form,
$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.$$
To simplify this notation we will usually write $X = \begin{pmatrix} x \\ y \end{pmatrix}$ and $X' = \begin{pmatrix} x' \\ y' \end{pmatrix}$, and thus the system becomes
$$X' = AX, \quad \text{where } A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}.$$
However, despite this simplified notation, we must keep in mind that both X and X 0
are vectors whose components are functions of t.
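The reduction can be checked numerically. The sketch below (my own illustration, plain Python with a forward-Euler integrator) integrates $X' = AX$ for $A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}$ and compares against $x(t) = e^{-t}$, which one can check directly solves $x'' + 3x' + 2x = 0$ with $x(0) = 1$, $x'(0) = -1$:

```python
import math

# X' = A X for x'' + 3x' + 2x = 0, with X = (x, y) and y = x'.
A = [[0.0, 1.0],
     [-2.0, -3.0]]

def step(X, dt):
    # one forward-Euler step of X' = AX
    x, y = X
    return (x + dt * (A[0][0] * x + A[0][1] * y),
            y + dt * (A[1][0] * x + A[1][1] * y))

X = (1.0, -1.0)           # x(0) = 1, x'(0) = -1: this picks out x(t) = e^{-t}
dt, steps = 1e-4, 10_000  # integrate to t = 1
for _ in range(steps):
    X = step(X, dt)

assert abs(X[0] - math.exp(-1)) < 1e-3   # x(1) close to e^{-1}
assert abs(X[1] + math.exp(-1)) < 1e-3   # y(1) = x'(1) close to -e^{-1}
print("reduction to X' = AX verified")
```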
10.2 Planar linear systems
For the remainder of this chapter we will deal with systems in $\mathbb{R}^2$, which assume the simple form
$$x' = ax + by$$
$$y' = cx + dy,$$
where $a, b, c, d$ are constants, though the results included here are easily extended to $\mathbb{R}^n$. As we have seen before, we may abbreviate this system by using the coefficient matrix $A$, where
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
The equilibrium solutions are the constant solutions, that is, the points where $AX = 0$:
$$ax + by = 0$$
$$cx + dy = 0.$$
This system has a nonzero solution if and only if $\det(A) = 0$. In such a case, there is a straight line through the origin on which each point is an equilibrium; otherwise, the origin is the only equilibrium point of the system.
Now we turn to the question of finding non-equilibrium solutions of the linear system $X' = AX$. The key observation here is this: if $\lambda$ is an eigenvalue of $A$ with associated eigenvector $V$, then $X(t) = e^{\lambda t}V$ is a solution of the system, since $X'(t) = \lambda e^{\lambda t}V = e^{\lambda t}(\lambda V) = e^{\lambda t}AV = AX(t)$.

Example: Consider again the system
$$X' = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix} X = AX.$$
The eigenvalues of $A$ are $\lambda_1 = -1$ and $\lambda_2 = -2$. Solving $(A - \lambda_1 I)V = 0$ yields $V_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. In a similar way, we get that an eigenvector associated to $\lambda_2 = -2$ is $V_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$. Note that these two eigenvectors are linearly independent. Then, the general solution of the system is
$$X(t) = c_1 e^{-t}\begin{pmatrix} 1 \\ -1 \end{pmatrix} + c_2 e^{-2t}\begin{pmatrix} 1 \\ -2 \end{pmatrix},$$
which translates into $x(t) = c_1 e^{-t} + c_2 e^{-2t}$, and $y(t) = -c_1 e^{-t} - 2c_2 e^{-2t}$.
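Both eigenpairs, and the fact that the resulting $x(t)$ solves the original second-order equation, can be verified directly (a pure-Python sketch; the constants $c_1$, $c_2$ are arbitrary test values):

```python
import math

# Eigenpair check for A = [[0, 1], [-2, -3]] (from x'' + 3x' + 2x = 0).
A = [[0.0, 1.0], [-2.0, -3.0]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

lam1, V1 = -1.0, [1.0, -1.0]
lam2, V2 = -2.0, [1.0, -2.0]
assert matvec(A, V1) == [lam1 * c for c in V1]   # A V1 = lam1 V1
assert matvec(A, V2) == [lam2 * c for c in V2]   # A V2 = lam2 V2

# x(t) = c1 e^{-t} + c2 e^{-2t} satisfies x'' + 3x' + 2x = 0 (derivatives by hand)
c1, c2 = 2.0, 5.0
x   = lambda t: c1 * math.exp(-t) + c2 * math.exp(-2 * t)
xp  = lambda t: -c1 * math.exp(-t) - 2 * c2 * math.exp(-2 * t)
xpp = lambda t: c1 * math.exp(-t) + 4 * c2 * math.exp(-2 * t)
for t in (0.0, 1.0):
    assert abs(xpp(t) + 3 * xp(t) + 2 * x(t)) < 1e-12
print("eigenvalue method verified")
```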
The previous result also applies to the case of having only one eigenvalue $\lambda$ with geometric multiplicity equal to 2. In such a case, we can find two linearly independent eigenvectors $V_1$ and $V_2$, and the general solution is given by
$$X(t) = c_1 e^{\lambda t}V_1 + c_2 e^{\lambda t}V_2.$$
More complicated situations appear when the eigenvalues are complex and when
there is only one eigenvalue with geometric multiplicity equal to 1.
10.2.1 Complex eigenvalues
Consider a $2 \times 2$ matrix $A$ with complex eigenvalues. Since the characteristic equation has real coefficients, its complex roots must occur in conjugate pairs: $\lambda_1 = a + bi$, $\lambda_2 = a - bi$. If we consider $\lambda_1$, we can easily check that any associated eigenvector $V_1$ has complex components. Also, it is straightforward to show that
$$X(t) = e^{\lambda_1 t}V_1 = e^{(a+bi)t}V_1$$
is a solution to the ODE. However, this solution is not real but complex, and we are interested in finding only real solutions. The following theorem provides a technique to obtain two real, linearly independent solutions:

Theorem: Given a system $X' = AX$, where $A$ is a real matrix, if $X = X_1 + iX_2$ is a complex solution, then its real and imaginary parts $X_1$, $X_2$ are also (real) solutions to the system.
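A small sketch of this theorem (my own example matrix, not from the text): take $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, with eigenvalues $\pm i$ and eigenvector $V_1 = (1, -i)$ for $\lambda = i$. The real and imaginary parts of $e^{it}V_1$ should each solve $X' = AX$, i.e. $x' = -y$, $y' = x$:

```python
import cmath

# Complex solution X(t) = e^{it} (1, -i) of x' = -y, y' = x.
def X(t):
    e = cmath.exp(1j * t)
    return (e, e * (-1j))

h = 1e-6
for t in (0.0, 0.7, 2.0):
    x, y = X(t)
    # real part (Re x, Re y) should satisfy x' = -y, y' = x
    dxr = (X(t + h)[0].real - X(t - h)[0].real) / (2 * h)
    dyr = (X(t + h)[1].real - X(t - h)[1].real) / (2 * h)
    assert abs(dxr + y.real) < 1e-6 and abs(dyr - x.real) < 1e-6
    # imaginary part is a second, independent real solution
    dxi = (X(t + h)[0].imag - X(t - h)[0].imag) / (2 * h)
    dyi = (X(t + h)[1].imag - X(t - h)[1].imag) / (2 * h)
    assert abs(dxi + y.imag) < 1e-6 and abs(dyi - x.imag) < 1e-6
print("real and imaginary parts are solutions")
```

Here the two real solutions turn out to be $(\cos t, \sin t)$ and $(\sin t, -\cos t)$.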
10.2.2 Repeated eigenvalues

In the case that $A$ is a defective matrix, we can find only one linearly independent eigenvector $V_1$ associated to the eigenvalue $\lambda$ with algebraic multiplicity 2. We know that $e^{\lambda t}V_1$ is a solution to the system. To come up with a second, linearly independent solution, we can use a generalized eigenvector $\tilde{V}_2$. Recall that such a generalized eigenvector can be computed by means of the Jordan canonical form
$$J = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix},$$
with the system
$$AP = PJ.$$
It can be checked that the general expression for generalized eigenvectors $\tilde{V}_2$ can be written as
$$\tilde{V}_2 = \alpha V_1 + U,$$
where $\alpha$ is a scalar and $U$ is a (particular) generalized eigenvector. If we rename the scalar $\alpha$ as $t$ (the independent variable in the equation) in the expression for $\tilde{V}_2$, then
$$X(t) = e^{\lambda t}(tV_1 + U)$$
is a solution linearly independent of $e^{\lambda t}V_1$.

Alternatively, we can select any particular generalized eigenvector $P_2$ and write the solution directly as
$$X(t) = e^{\lambda t}(tV_1 + P_2).$$
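As a concrete check (hypothetical values: $\lambda = -1$, so $A = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}$, $V_1 = (1, 0)$, $P_2 = (0, 1)$), we can verify by finite differences that $X(t) = e^{\lambda t}(tV_1 + P_2)$ solves $X' = AX$:

```python
import math

lam = -1.0
A = [[lam, 1.0], [0.0, lam]]       # defective: single eigenvector V1 = (1, 0)
V1, P2 = (1.0, 0.0), (0.0, 1.0)    # P2 satisfies (A - lam I) P2 = V1

def X(t):
    # the generalized-eigenvector solution e^{lam t} (t V1 + P2)
    e = math.exp(lam * t)
    return (e * (t * V1[0] + P2[0]), e * (t * V1[1] + P2[1]))

h = 1e-6
for t in (0.0, 1.0):
    x, y = X(t)
    dx = (X(t + h)[0] - X(t - h)[0]) / (2 * h)
    dy = (X(t + h)[1] - X(t - h)[1]) / (2 * h)
    assert abs(dx - (A[0][0] * x + A[0][1] * y)) < 1e-6
    assert abs(dy - (A[1][0] * x + A[1][1] * y)) < 1e-6
print("generalized-eigenvector solution checked")
```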
10.3 Phase portraits
One remarkable fact about planar linear systems is that we can sketch the solutions
$$X(t) = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}$$
without actually knowing their analytic expression. To do that, we simply have to notice that if, for a given value of $t$, say $t_0$, we evaluate the right-hand sides of the system at the point $\begin{pmatrix} x(t_0) \\ y(t_0) \end{pmatrix}$, we obtain $\begin{pmatrix} x'(t_0) \\ y'(t_0) \end{pmatrix}$, which represents the slope of the tangent to the curve representing the solution that passes through $\begin{pmatrix} x(t_0) \\ y(t_0) \end{pmatrix}$.

Thus, we regard the right-hand side of the system as defining a vector field on $\mathbb{R}^2$. To that end, we think of $X'$ as representing a vector in the Euclidean plane, whose $x$ and $y$ components are given by the first and second coordinates of $AX$, respectively, and next we visualize this vector as being based at the point $(x, y)$. For example, the vector field of the system
$$x' = y$$
$$y' = -x$$
is the one displayed in the figure beside.
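For the system $x' = y$, $y' = -x$, the field vector at $(x, y)$ is perpendicular to the position vector, so the sketched solutions circle the origin. A small Euler integration (my own sketch, assuming this system) confirms it: $x^2 + y^2$ is nearly conserved along a computed trajectory, which returns close to its starting point after one revolution.

```python
import math

# x' = y, y' = -x : the field (y, -x) is perpendicular to the radius vector,
# so trajectories are circles.  Integrate one revolution with forward Euler.
x, y = 1.0, 0.0
dt = 1e-3
for _ in range(int(2 * math.pi / dt)):     # roughly t = 0 .. 2*pi
    x, y = x + dt * y, y - dt * x

assert abs(x * x + y * y - 1.0) < 0.02     # radius preserved up to Euler error
assert abs(x - 1.0) < 0.05 and abs(y) < 0.05   # back near the start (1, 0)
print("circular trajectory confirmed")
```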
To illustrate this qualitative description of the solutions, we will consider the following cases, involving real eigenvalues.

Example. Saddle:
$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},$$
with $\lambda_1 < 0 < \lambda_2$. This can be solved immediately, since the system decouples into two unrelated first-order equations:
$$x' = \lambda_1 x$$
$$y' = \lambda_2 y.$$
The characteristic equation is
$$(\lambda - \lambda_1)(\lambda - \lambda_2) = 0,$$
so $\lambda_1$ and $\lambda_2$ are the eigenvalues. An eigenvector corresponding to $\lambda_1$ is $V_1 = (1, 0)^T$ and one corresponding to $\lambda_2$ is $V_2 = (0, 1)^T$. Hence, the general solution is
$$X(t) = c_1 e^{\lambda_1 t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda_2 t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$

Since $\lambda_1 < 0$, the straight-line solutions of the form $c_1 e^{\lambda_1 t}\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ lie on the $x$ axis and tend to $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as $t \to \infty$. This axis is called the stable line. Since $\lambda_2 > 0$, the solutions $c_2 e^{\lambda_2 t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ lie on the $y$ axis and tend away from $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as $t \to \infty$; this axis is the unstable line.

All other solutions (with $c_1, c_2 \neq 0$) tend to infinity in the direction of the unstable line as $t \to \infty$, since $X(t)$ comes closer and closer to $\begin{pmatrix} 0 \\ c_2 e^{\lambda_2 t} \end{pmatrix}$ as $t$ increases.
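The saddle behaviour is easy to observe numerically. With the hypothetical choice $\lambda_1 = -1$, $\lambda_2 = 1$ and nonzero $c_1, c_2$ (test values below), the $x$ component dies out while the $y$ component grows, so the solution lines up with the unstable line:

```python
import math

# Saddle with lam1 = -1 < 0 < lam2 = 1 (hypothetical values).
lam1, lam2 = -1.0, 1.0
c1, c2 = 4.0, 0.5                      # a generic solution, c1, c2 != 0
x = lambda t: c1 * math.exp(lam1 * t)  # component along the stable line
y = lambda t: c2 * math.exp(lam2 * t)  # component along the unstable line

t = 10.0
assert abs(x(t)) < 1e-3                # x-component dies out
assert abs(y(t)) > 1e3                 # y-component blows up
assert abs(x(t)) < abs(y(t)) * 1e-5    # X(t) hugs the unstable line
print("saddle behaviour confirmed")
```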
Example. Sink:
$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},$$
with $\lambda_1 < \lambda_2 < 0$. As above, we find two straight-line solutions and then the general solution:
$$X(t) = c_1 e^{\lambda_1 t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda_2 t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
Unlike the saddle case, now all solutions tend to $(0, 0)$ as $t \to \infty$. The question is: how do they approach the origin? To answer this, we compute the slope $dy/dx$ of a solution with $c_2 \neq 0$. We write
$$x(t) = c_1 e^{\lambda_1 t}, \quad y(t) = c_2 e^{\lambda_2 t},$$
and compute
$$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{\lambda_2 c_2 e^{\lambda_2 t}}{\lambda_1 c_1 e^{\lambda_1 t}} = \frac{\lambda_2 c_2}{\lambda_1 c_1}\, e^{(\lambda_2 - \lambda_1)t}.$$
Since $\lambda_1 < \lambda_2 < 0$, it follows that these slopes approach $\pm\infty$ as $t \to \infty$ (provided $c_2 \neq 0$). Thus these solutions tend to the origin tangentially to the $y$ axis.

Since $\lambda_1 < \lambda_2 < 0$, we call $\lambda_1$ the stronger eigenvalue and $\lambda_2$ the weaker eigenvalue. The reason for this, in this particular case, is that the $x$ coordinates of solutions tend to $0$ much more quickly than the $y$ coordinates. This accounts for why solutions (except those on the line corresponding to the $\lambda_1$ eigenvector) tend to hug the straight-line solution corresponding to the weaker eigenvalue as they approach the origin. An equilibrium of this type, where all solutions tend to the origin as $t \to \infty$, is called a sink.
Example. Source:
$$A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix},$$
with both eigenvalues positive. The situation is analogous to that of the sink, except that now all solutions tend away from the origin as $t$ increases.
A=
where the eigenvalues of A are both equal to . In this case, every nonzero vector is
an eigenvector since
AV = V
for any V R2 . Hence solutions are of the form
X(t) = c1 et V.
0
0
Each such solution lies on a straight line through
and either tends to
(if
0
0
0
< 0) or away from
(if > 0).
0
14
A=
Again both eigenvalues are equal to , but now there is only one linearly independent
1
eigenvector given by
. Hence we have one straight-line solution
0
t 1
.
X1 (t) = c1 e
0
To find other solutions, note that the system can be written
x 0 = x + y
y 0 = y.
Thus, if y 6= 0, we must have
y(t) = c2 et .
Therefore the differential equation for x(t) reads
x 0 = x + c2 et .
This is a first-order differential equation for x(t). The best option is to guess a solution
of the form
x(t) = c1 et + tet ,
for some constants c1 and . This technique is often called the method of undetermined coefficients. Inserting this guess into the differential equation shows that
= c2 while c1 is arbitrary. Hence the solution of the system may be written
1
t t
c1 e
+ c2 e
,
0
1
t
Note that, if < 0, each term in this solution tends to 0 as t goes to infinity. This is clear
for the c1 et and c2 et terms. For the term c2 tet this is an immediate consequence
of lHopital
s rule. Hence all solutions tend to (0, 0) as t goes to infinity. When > 0,
all solutions tend away from (0, 0). In fact, solutions tend toward or away from the
origin in a direction tangent to the eigenvector (1, 0).
16
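The undetermined-coefficients step and the limiting behaviour can both be checked numerically (hypothetical test values $\lambda = -1/2$, $c_1 = 2$, $c_2 = 3$):

```python
import math

lam, c1, c2 = -0.5, 2.0, 3.0   # hypothetical values with lam < 0
x  = lambda t: c1 * math.exp(lam * t) + c2 * t * math.exp(lam * t)
xp = lambda t: (lam * c1 * math.exp(lam * t)
                + c2 * math.exp(lam * t)
                + lam * c2 * t * math.exp(lam * t))

# x' = lam*x + c2 e^{lam t}: the equation for x once y = c2 e^{lam t} is known
for t in (0.0, 1.0, 5.0):
    assert abs(xp(t) - (lam * x(t) + c2 * math.exp(lam * t))) < 1e-12

# every term, including c2 t e^{lam t} (l'Hopital), vanishes as t grows
assert abs(x(100.0)) < 1e-12
print("undetermined coefficients verified")
```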