
Chapter 6

Systems of Linear Differential Equations

6.1 Systems of Linear Differential Equations
Introduction

Up to this point the entries in a vector or matrix have been real numbers. In this section, and in the following sections, we will be dealing with vectors and matrices whose entries are functions. A vector whose components are functions is called a vector-valued function or vector function. Similarly, a matrix whose entries are functions is called a matrix function.

The operations of vector and matrix addition, multiplication by a number, and matrix multiplication for vector and matrix functions are exactly as defined in Chapter 5, so there is nothing new in terms of arithmetic. However, for the purposes of this and the following sections, there are operations on functions other than arithmetic operations that we need to define for vector and matrix functions, namely the operations from calculus (limits, differentiation, integration). The operations from calculus are defined in a natural way.
Calculus of vector functions: Let $\mathbf{v}(t) = (f_1(t), f_2(t), \ldots, f_n(t))$ be a vector function whose components are defined on an interval $I$.

Limit: Let $c \in I$. If $\lim_{t \to c} f_i(t) = \ell_i$ exists for $i = 1, 2, \ldots, n$, then
\[
\lim_{t \to c} \mathbf{v}(t) = \left( \lim_{t \to c} f_1(t),\ \lim_{t \to c} f_2(t),\ \ldots,\ \lim_{t \to c} f_n(t) \right) = (\ell_1, \ell_2, \ldots, \ell_n).
\]
Limits of vector functions are calculated component-wise.
Derivative: If $f_1, f_2, \ldots, f_n$ are differentiable on $I$, then $\mathbf{v}$ is differentiable on $I$, and
\[
\mathbf{v}'(t) = \left( f_1'(t),\ f_2'(t),\ \ldots,\ f_n'(t) \right).
\]
Thus $\mathbf{v}'$ is the vector function whose components are the derivatives of the components of $\mathbf{v}$.
Integral: Since differentiation of vector functions is done component-wise, integration must also be component-wise. That is,
\[
\int \mathbf{v}(t)\,dt = \left( \int f_1(t)\,dt,\ \int f_2(t)\,dt,\ \ldots,\ \int f_n(t)\,dt \right).
\]

Calculus of matrix functions: Limits, differentiation and integration of matrix functions are done in exactly the same way, component-wise.
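Since all of these operations act component-wise, they are easy to sanity-check numerically. A minimal sketch (the function and helper names here are ours, not the text's), comparing a central-difference derivative of a sample vector function against the vector of its component derivatives:

```python
import math

# A vector function is just a tuple of scalar component functions.
def v(t):
    return (t**2, math.sin(t), math.exp(t))

def numerical_derivative(f, t, h=1e-6):
    # central-difference approximation, applied to each component
    return tuple((a - b) / (2 * h) for a, b in zip(f(t + h), f(t - h)))

t0 = 1.0
approx = numerical_derivative(v, t0)
exact = (2 * t0, math.cos(t0), math.exp(t0))   # derivatives of the components
assert all(abs(a - e) < 1e-4 for a, e in zip(approx, exact))
```

The assertion confirms that differentiating the vector function agrees, component by component, with differentiating each component separately.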
Systems of linear differential equations

Consider the third-order linear differential equation
\[
y''' + p(t)y'' + q(t)y' + r(t)y = f(t)
\]
where $p, q, r, f$ are continuous functions on some interval $I$. Solving the equation for $y'''$, we get
\[
y''' = -r(t)y - q(t)y' - p(t)y'' + f(t).
\]
Introduce new dependent variables $x_1, x_2, x_3$, as follows:
\[
x_1 = y, \qquad x_2 = x_1'\ (= y'), \qquad x_3 = x_2'\ (= y'').
\]
Then
\[
y''' = x_3' = -r(t)x_1 - q(t)x_2 - p(t)x_3 + f(t)
\]
and the third-order equation can be written equivalently as the system of three first-order equations:
\[
\begin{array}{l}
x_1' = x_2 \\
x_2' = x_3 \\
x_3' = -r(t)x_1 - q(t)x_2 - p(t)x_3 + f(t).
\end{array}
\]
Note that this system is just a very special case of the general system of three first-order differential equations:
\[
\begin{array}{l}
x_1' = a_{11}(t)x_1 + a_{12}(t)x_2 + a_{13}(t)x_3 + b_1(t) \\
x_2' = a_{21}(t)x_1 + a_{22}(t)x_2 + a_{23}(t)x_3 + b_2(t) \\
x_3' = a_{31}(t)x_1 + a_{32}(t)x_2 + a_{33}(t)x_3 + b_3(t).
\end{array}
\]
Example 1. (a) Consider the third-order nonhomogeneous equation
\[
y''' - y'' - 8y' + 12y = 2e^{t}.
\]
Solving the equation for $y'''$, we have
\[
y''' = -12y + 8y' + y'' + 2e^{t}.
\]
Let $x_1 = y$, $x_1' = x_2\ (= y')$, $x_2' = x_3\ (= y'')$. Then
\[
y''' = x_3' = -12x_1 + 8x_2 + x_3 + 2e^{t}
\]
and the equation converts to the equivalent system:
\[
\begin{array}{l}
x_1' = x_2 \\
x_2' = x_3 \\
x_3' = -12x_1 + 8x_2 + x_3 + 2e^{t}.
\end{array}
\]
(b) Consider the second-order homogeneous equation
\[
t^2 y'' - t y' - 3y = 0.
\]
Solving this equation for $y''$, we get
\[
y'' = \frac{3}{t^2}\,y + \frac{1}{t}\,y'.
\]
To convert this equation to an equivalent system, we let $x_1 = y$, $x_1' = x_2\ (= y')$. Then we have
\[
\begin{array}{l}
x_1' = x_2 \\[4pt]
x_2' = \dfrac{3}{t^2}\,x_1 + \dfrac{1}{t}\,x_2
\end{array}
\]
which is just a special case of the general system of two first-order differential equations:
\[
\begin{array}{l}
x_1' = a_{11}(t)x_1 + a_{12}(t)x_2 + b_1(t) \\
x_2' = a_{21}(t)x_1 + a_{22}(t)x_2 + b_2(t).
\end{array}
\]
General Theory

Let $a_{11}(t), a_{12}(t), \ldots, a_{1n}(t), a_{21}(t), \ldots, a_{nn}(t), b_1(t), b_2(t), \ldots, b_n(t)$ be continuous functions on some interval $I$. The system of $n$ first-order differential equations
\[
\begin{array}{l}
x_1' = a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + b_1(t) \\
x_2' = a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + b_2(t) \\
\qquad \vdots \\
x_n' = a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + b_n(t)
\end{array}
\tag{S}
\]
is called a first-order linear differential system.
The system (S) is homogeneous if
\[
b_1(t) \equiv b_2(t) \equiv \cdots \equiv b_n(t) \equiv 0 \quad \text{on } I.
\]
System (S) is nonhomogeneous if the functions $b_i(t)$ are not all identically zero on $I$. That is, (S) is nonhomogeneous if there is at least one point $a \in I$ and at least one function $b_i(t)$ such that $b_i(a) \neq 0$.
Let $A(t)$ be the $n \times n$ matrix
\[
A(t) = \begin{pmatrix}
a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\
a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\
\vdots & \vdots & & \vdots \\
a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t)
\end{pmatrix}
\]
and let $\mathbf{x}$ and $\mathbf{b}(t)$ be the vectors
\[
\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad
\mathbf{b}(t) = \begin{pmatrix} b_1(t) \\ b_2(t) \\ \vdots \\ b_n(t) \end{pmatrix}.
\]
Then (S) can be written in the vector-matrix form
\[
\mathbf{x}' = A(t)\,\mathbf{x} + \mathbf{b}(t). \tag{S}
\]
The matrix $A(t)$ is called the matrix of coefficients or the coefficient matrix of the system.
Example 2. The vector-matrix form of the system in Example 1(a) is:
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \mathbf{x} + \begin{pmatrix} 0 \\ 0 \\ 2e^{t} \end{pmatrix},
\quad \text{where } \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix},
\]
a nonhomogeneous system.

The vector-matrix form of the system in Example 1(b) is:
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \mathbf{x} + \begin{pmatrix} 0 \\ 0 \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \mathbf{x},
\quad \text{where } \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
\]
a homogeneous system.
The vector-matrix form of $y''' + p(t)y'' + q(t)y' + r(t)y = 0$ is:
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -r(t) & -q(t) & -p(t) \end{pmatrix} \mathbf{x},
\quad \text{where } \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.
\]
A solution of the linear differential system (S) is a differentiable vector function
\[
\mathbf{x}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}
\]
that satisfies (S) on the interval $I$.
Example 3. Verify that $\mathbf{x}(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix}$ is a solution of the homogeneous system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
\]
of Example 1(b).

SOLUTION
\[
\mathbf{x}' = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix}' = \begin{pmatrix} 3t^2 \\ 6t \end{pmatrix}
\stackrel{?}{=} \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix}
= \begin{pmatrix} 3t^2 \\ 6t \end{pmatrix};
\]
$\mathbf{x}(t)$ is a solution.
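Verification of this kind is mechanical, so it is also easy to spot-check numerically. A small sketch (the helper names are ours), comparing the exact derivative of $\mathbf{x}(t) = (t^3, 3t^2)$ against $A(t)\mathbf{x}(t)$ at a few points:

```python
# Spot-check of Example 3: x(t) = (t^3, 3t^2) should satisfy x' = A(t)x
# with A(t) = [[0, 1], [3/t^2, 1/t]], for t != 0.

def x(t):
    return (t**3, 3 * t**2)

def system_rhs(t):
    # right-hand side A(t) x(t), written out component by component
    x1, x2 = x(t)
    return (x2, (3 / t**2) * x1 + (1 / t) * x2)

def x_prime(t):
    # exact derivative of (t^3, 3t^2)
    return (3 * t**2, 6 * t)

checks = []
for t in (0.5, 1.0, 2.0):
    lhs, rhs = x_prime(t), system_rhs(t)
    checks.append(abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9)
assert all(checks)
```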
Example 4. Verify that
\[
\mathbf{x}(t) = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix}
\]
is a solution of the nonhomogeneous system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \mathbf{x} + \begin{pmatrix} 0 \\ 0 \\ 2e^{t} \end{pmatrix}
\]
of Example 1(a).
SOLUTION
\[
\mathbf{x}' = \left[ \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix} \right]'
= \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix}
\]
\[
\stackrel{?}{=} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \left[ \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix} \right] + \begin{pmatrix} 0 \\ 0 \\ 2e^{t} \end{pmatrix}
\]
\[
\stackrel{?}{=} \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix}
+ \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -12 & 8 & 1 \end{pmatrix} \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix}
+ \begin{pmatrix} 0 \\ 0 \\ 2e^{t} \end{pmatrix}
\]
\[
\stackrel{?}{=} \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ -\tfrac{3}{2}e^{t} \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 2e^{t} \end{pmatrix}
= \begin{pmatrix} 2e^{2t} \\ 4e^{2t} \\ 8e^{2t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix};
\]
$\mathbf{x}(t)$ is a solution.
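The same check can be scripted. A numerical sketch (names ours) of the verification above, testing $\mathbf{x}' = A\mathbf{x} + \mathbf{b}(t)$ at several points:

```python
import math

# Example 4: A = [[0,1,0],[0,0,1],[-12,8,1]], b(t) = (0, 0, 2e^t), and the
# claimed solution is x(t) = e^{2t}(1,2,4) + (1/2)e^t(1,1,1).

A = [[0, 1, 0], [0, 0, 1], [-12, 8, 1]]

def x(t):
    return [math.exp(2*t) + 0.5*math.exp(t),
            2*math.exp(2*t) + 0.5*math.exp(t),
            4*math.exp(2*t) + 0.5*math.exp(t)]

def x_prime(t):  # exact derivative of the formula above
    return [2*math.exp(2*t) + 0.5*math.exp(t),
            4*math.exp(2*t) + 0.5*math.exp(t),
            8*math.exp(2*t) + 0.5*math.exp(t)]

errors = []
for t in (-1.0, 0.0, 0.7):
    b = [0.0, 0.0, 2*math.exp(t)]
    xt = x(t)
    rhs = [sum(A[i][j]*xt[j] for j in range(3)) + b[i] for i in range(3)]
    errors.extend(abs(l - r) for l, r in zip(x_prime(t), rhs))
assert max(errors) < 1e-9
```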
THEOREM 1. (Existence and Uniqueness Theorem) Let $a$ be any point on the interval $I$, and let $\alpha_1, \alpha_2, \ldots, \alpha_n$ be any $n$ real numbers. Then the initial-value problem
\[
\mathbf{x}' = A(t)\,\mathbf{x} + \mathbf{b}(t), \qquad \mathbf{x}(a) = \begin{pmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{pmatrix}
\]
has a unique solution.
Exercises 6.1

Convert the differential equation into a system of first-order equations.

1. $y'' - ty' + 3y = \sin 2t$.

2. $y'' + y = 2e^{-2t}$.

3. $y''' + y = e^{t}$.

4. $my'' + cy' + ky = \cos \lambda t$; $m, c, k, \lambda$ are constants.
In Exercises 5 - 8 a matrix function $A$ and a vector function $\mathbf{b}$ are given. Write the system of equations corresponding to $\mathbf{x}' = A(t)\mathbf{x} + \mathbf{b}(t)$.

5. $A(t) = \begin{pmatrix} 2 & 1 \\ 3 & 0 \end{pmatrix}, \quad \mathbf{b}(t) = \begin{pmatrix} e^{2t} \\ 2e^{t} \end{pmatrix}$.

6. $A(t) = \begin{pmatrix} t^3 & t \\ \cos t & 2 \end{pmatrix}, \quad \mathbf{b}(t) = \begin{pmatrix} t - 1 \\ 2 \end{pmatrix}$.

7. $A(t) = \begin{pmatrix} 2 & 3 & 1 \\ 2 & 0 & 1 \\ 2 & 3 & 0 \end{pmatrix}, \quad \mathbf{b}(t) = \begin{pmatrix} e^{t} \\ 2e^{t} \\ e^{2t} \end{pmatrix}$.

8. $A(t) = \begin{pmatrix} t^2 & 3t & t - 1 \\ 2 & t - 2 & t \\ 2t & 3 & t \end{pmatrix}, \quad \mathbf{b}(t) = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$.
Write the system in vector-matrix form.

9.
\[
\begin{array}{l}
x_1' = 2x_1 + x_2 + \sin t \\
x_2' = x_1 - 3x_2 - 2\cos t
\end{array}
\]

10.
\[
\begin{array}{l}
x_1' = e^{t}x_1 - e^{2t}x_2 \\
x_2' = e^{t}x_1 - 3e^{t}x_2
\end{array}
\]

11.
\[
\begin{array}{l}
x_1' = 2x_1 + x_2 + 3x_3 + 3e^{2t} \\
x_2' = x_1 - 3x_2 - 2\cos t \\
x_3' = 2x_1 - x_2 + 4x_3 + t
\end{array}
\]

12.
\[
\begin{array}{l}
x_1' = t^2 x_1 + x_2 - t x_3 + 3 \\
x_2' = 3e^{t}x_2 + 2x_3 - 2e^{2t} \\
x_3' = 2x_1 + t^2 x_2 + 4x_3
\end{array}
\]
13. Verify that $\mathbf{u}(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}$ is a solution of the system in Example 1(b).
14. Verify that $\mathbf{u}(t) = \begin{pmatrix} e^{-3t} \\ -3e^{-3t} \\ 9e^{-3t} \end{pmatrix} + \begin{pmatrix} \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \\ \tfrac{1}{2}e^{t} \end{pmatrix}$ is a solution of the system in Example 1(a).
15. Verify that $\mathbf{w}(t) = \begin{pmatrix} te^{2t} \\ e^{2t} + 2te^{2t} \\ 4e^{2t} + 4te^{2t} \end{pmatrix}$ is a solution of the homogeneous system associated with the system in Example 1(a).
16. Verify that $\mathbf{v}(t) = \begin{pmatrix} \sin t \\ \cos t - 2\sin t \end{pmatrix}$ is a solution of the system
\[
\mathbf{x}' = \begin{pmatrix} 2 & 1 \\ -3 & -2 \end{pmatrix} \mathbf{x} + \begin{pmatrix} 0 \\ -2\sin t \end{pmatrix}.
\]
17. Verify that $\mathbf{v}(t) = \begin{pmatrix} 2e^{2t} \\ 0 \\ 3e^{2t} \end{pmatrix}$ is a solution of the system
\[
\mathbf{x}' = \begin{pmatrix} -1 & 3 & 2 \\ 0 & 1 & 0 \\ 0 & 1 & 2 \end{pmatrix} \mathbf{x}.
\]
6.2 Homogeneous Systems

In this section we give the basic theory for linear homogeneous systems. This theory is simply a repetition of the results given in Sections 3.2 and 3.7, phrased this time in terms of the system
\[
\begin{array}{l}
x_1' = a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n \\
x_2' = a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n \\
\qquad \vdots \\
x_n' = a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n
\end{array}
\tag{H}
\]
or
\[
\mathbf{x}' = A(t)\mathbf{x}. \tag{H}
\]
Note first that the zero vector $\mathbf{z}(t) \equiv \mathbf{0} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$ is a solution of (H). As before, this solution is called the trivial solution. Of course, we are interested in finding nontrivial solutions.

THEOREM 1. If $\mathbf{x} = \mathbf{x}(t)$ is a solution of (H) and $\alpha$ is any real number, then $\mathbf{u}(t) = \alpha\mathbf{x}(t)$ is also a solution of (H); any constant multiple of a solution of (H) is a solution of (H).
THEOREM 2. If $\mathbf{x}_1 = \mathbf{x}_1(t)$ and $\mathbf{x}_2 = \mathbf{x}_2(t)$ are solutions of (H), then
\[
\mathbf{u}(t) = \mathbf{x}_1(t) + \mathbf{x}_2(t)
\]
is also a solution of (H); the sum of any two solutions of (H) is a solution of (H).
These two theorems can be combined and extended to:

THEOREM 3. If $\mathbf{x}_1 = \mathbf{x}_1(t),\ \mathbf{x}_2 = \mathbf{x}_2(t),\ \ldots,\ \mathbf{x}_k = \mathbf{x}_k(t)$ are solutions of (H), and if $c_1, c_2, \ldots, c_k$ are real numbers, then
\[
\mathbf{v}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t) + \cdots + c_k\mathbf{x}_k(t)
\]
is a solution of (H); any linear combination of solutions of (H) is also a solution of (H).
Linear Dependence and Linear Independence of Vector Functions

The notions of linear dependence and linear independence of vectors are of fundamental importance in linear algebra. See Chapter 5, Section 5.7.

In this subsection we will look at linear dependence/independence of vector functions in general. This is an extension of the material in Section 5.7. We will return to linear differential systems after we treat the general case.
DEFINITION 1. Let
\[
\mathbf{v}_1(t) = \begin{pmatrix} v_{11}(t) \\ v_{21}(t) \\ \vdots \\ v_{n1}(t) \end{pmatrix}, \quad
\mathbf{v}_2(t) = \begin{pmatrix} v_{12}(t) \\ v_{22}(t) \\ \vdots \\ v_{n2}(t) \end{pmatrix}, \quad \ldots, \quad
\mathbf{v}_k(t) = \begin{pmatrix} v_{1k}(t) \\ v_{2k}(t) \\ \vdots \\ v_{nk}(t) \end{pmatrix}
\]
be $n$-component vector functions defined on some interval $I$. The vectors are linearly dependent on $I$ if there exist $k$ real numbers $c_1, c_2, \ldots, c_k$, not all zero, such that
\[
c_1\mathbf{v}_1(t) + c_2\mathbf{v}_2(t) + \cdots + c_k\mathbf{v}_k(t) \equiv \mathbf{0} \quad \text{on } I.
\]
Otherwise the vectors are linearly independent on $I$.
THEOREM 4. Let $\mathbf{v}_1(t), \mathbf{v}_2(t), \ldots, \mathbf{v}_n(t)$ be $n$, $n$-component vector functions defined on an interval $I$. If the vectors are linearly dependent, then
\[
\begin{vmatrix}
v_{11}(t) & v_{12}(t) & \cdots & v_{1n}(t) \\
v_{21}(t) & v_{22}(t) & \cdots & v_{2n}(t) \\
\vdots & \vdots & & \vdots \\
v_{n1}(t) & v_{n2}(t) & \cdots & v_{nn}(t)
\end{vmatrix}
\equiv 0 \quad \text{on } I.
\]
Proof: See the proof of Theorem 1 in Section 5.7.

The determinant in Theorem 4 is called the Wronskian of the vector functions $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$. We will let $W(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n)(t)$, or simply $W(t)$, denote the Wronskian.

COROLLARY. Let $\mathbf{v}_1(t), \mathbf{v}_2(t), \ldots, \mathbf{v}_n(t)$ be $n$, $n$-component vector functions defined on an interval $I$, and let $W(t)$ be their Wronskian. If $W(t) \neq 0$ for at least one $t \in I$, then the vector functions are linearly independent on $I$.

It is important to understand that in this general case, $W(t) \equiv 0$ does not imply that the vector functions are linearly dependent. An example is given in Section 5.7.
Example 1. (a) The Wronskian of the vector functions
\[
\mathbf{u}(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} \quad \text{and} \quad \mathbf{v}(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}
\]
is:
\[
W(t) = \begin{vmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{vmatrix} = -4t.
\]
(Note: $\mathbf{u}$ and $\mathbf{v}$ are solutions of the homogeneous system in Example 3, Section 6.1.)
(b) The Wronskian of the vector functions
\[
\mathbf{v}_1(t) = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix}, \quad
\mathbf{v}_2(t) = \begin{pmatrix} e^{-3t} \\ -3e^{-3t} \\ 9e^{-3t} \end{pmatrix}, \quad
\mathbf{v}_3(t) = \begin{pmatrix} te^{2t} \\ e^{2t} + 2te^{2t} \\ 4e^{2t} + 4te^{2t} \end{pmatrix}
\]
is:
\[
W(t) = \begin{vmatrix}
e^{2t} & e^{-3t} & te^{2t} \\
2e^{2t} & -3e^{-3t} & e^{2t} + 2te^{2t} \\
4e^{2t} & 9e^{-3t} & 4e^{2t} + 4te^{2t}
\end{vmatrix}
= -25e^{t}.
\]
Back to Linear Differential Systems

When the vector functions $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are $n$ solutions of the homogeneous system (H) we get a much stronger version of Theorem 4.

THEOREM 5. Let $\mathbf{x}_1(t), \mathbf{x}_2(t), \ldots, \mathbf{x}_n(t)$ be $n$ solutions of (H). Exactly one of the following holds:

1. $W(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)(t) \equiv 0$ on $I$ and the solutions are linearly dependent.

2. $W(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)(t) \neq 0$ for all $t \in I$ and the solutions are linearly independent.

Compare this result with Theorem 4, Section 3.2, and Theorem 5, Section 3.7.
It is easy to construct sets of $n$ linearly independent solutions of (H). Simply pick any point $a \in I$ and any nonsingular $n \times n$ matrix $A$. Let $\mathbf{a}_1$ be the first column of $A$, $\mathbf{a}_2$ the second column of $A$, and so on. Then let $\mathbf{x}_1(t)$ be the solution of (H) such that $\mathbf{x}_1(a) = \mathbf{a}_1$, let $\mathbf{x}_2(t)$ be the solution of (H) such that $\mathbf{x}_2(a) = \mathbf{a}_2$, \ldots, and let $\mathbf{x}_n(t)$ be the solution of (H) such that $\mathbf{x}_n(a) = \mathbf{a}_n$. The existence and uniqueness theorem guarantees the existence of these solutions. Now
\[
W(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n)(a) = \det A \neq 0.
\]
Therefore, $W(t) \neq 0$ for all $t \in I$ and the solutions are linearly independent. A particularly nice set of $n$ linearly independent solutions is obtained by choosing $A = I_n$, the identity matrix.
THEOREM 6. Let $\mathbf{x}_1(t), \mathbf{x}_2(t), \ldots, \mathbf{x}_n(t)$ be $n$ linearly independent solutions of (H). Let $\mathbf{u}(t)$ be any solution of (H). Then there exists a unique set of constants $C_1, C_2, \ldots, C_n$ such that
\[
\mathbf{u}(t) = C_1\mathbf{x}_1(t) + C_2\mathbf{x}_2(t) + \cdots + C_n\mathbf{x}_n(t).
\]
That is, every solution of (H) can be written as a unique linear combination of $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$.
DEFINITION 2. A set $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ of $n$ linearly independent solutions of (H) is called a fundamental set of solutions. A fundamental set of solutions is also called a solution basis for (H).

If $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ is a fundamental set of solutions of (H), then the $n \times n$ matrix
\[
X(t) = \begin{pmatrix}
x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\
x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\
\vdots & \vdots & & \vdots \\
x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t)
\end{pmatrix}
\]
(the vectors $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n$ are the columns of $X$) is called a fundamental matrix for (H).
DEFINITION 3. Let $\{\mathbf{x}_1(t), \mathbf{x}_2(t), \ldots, \mathbf{x}_n(t)\}$ be a fundamental set of solutions of (H). Then
\[
\mathbf{x}(t) = C_1\mathbf{x}_1(t) + C_2\mathbf{x}_2(t) + \cdots + C_n\mathbf{x}_n(t),
\]
where $C_1, C_2, \ldots, C_n$ are arbitrary constants, is the general solution of (H).

Note that the general solution can also be written in terms of the fundamental matrix:
\[
C_1\mathbf{x}_1(t) + C_2\mathbf{x}_2(t) + \cdots + C_n\mathbf{x}_n(t)
= \begin{pmatrix}
x_{11}(t) & x_{12}(t) & \cdots & x_{1n}(t) \\
x_{21}(t) & x_{22}(t) & \cdots & x_{2n}(t) \\
\vdots & \vdots & & \vdots \\
x_{n1}(t) & x_{n2}(t) & \cdots & x_{nn}(t)
\end{pmatrix}
\begin{pmatrix} C_1 \\ C_2 \\ \vdots \\ C_n \end{pmatrix}
= X(t)\mathbf{C}.
\]
Example 2. The vectors
\[
\mathbf{u}(t) = \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} \quad \text{and} \quad \mathbf{v}(t) = \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}
\]
form a fundamental set of solutions of
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ 3/t^2 & 1/t \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.
\]
The matrix
\[
X(t) = \begin{pmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{pmatrix}
\]
is a fundamental matrix for the system and
\[
\mathbf{x}(t) = C_1 \begin{pmatrix} t^3 \\ 3t^2 \end{pmatrix} + C_2 \begin{pmatrix} t^{-1} \\ -t^{-2} \end{pmatrix}
= \begin{pmatrix} t^3 & t^{-1} \\ 3t^2 & -t^{-2} \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}
\]
is the general solution of the system.
Exercises 6.2

Determine whether or not the vector functions are linearly dependent.
1. $\mathbf{u} = \begin{pmatrix} 2t - 1 \\ t \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} t + 1 \\ 2t \end{pmatrix}$

2. $\mathbf{u} = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} \sin t \\ \cos t \end{pmatrix}$

3. $\mathbf{u} = \begin{pmatrix} t - t^2 \\ t \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} 2t + 4t^2 \\ 2t \end{pmatrix}$

4. $\mathbf{u} = \begin{pmatrix} te^{t} \\ t \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} e^{t} \\ 1 \end{pmatrix}$

5. $\mathbf{u} = \begin{pmatrix} 2 - t \\ t \\ 2 \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} t \\ 1 \\ 2 \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} 2 + t \\ t - 2 \\ 2 \end{pmatrix}$.

6. $\mathbf{u} = \begin{pmatrix} \cos t \\ \sin t \\ 0 \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} \cos t \\ 0 \\ \sin t \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} 0 \\ \cos t \\ \sin t \end{pmatrix}$.

7. $\mathbf{u} = \begin{pmatrix} e^{t} \\ e^{t} \\ e^{t} \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} e^{t} \\ 2e^{t} \\ e^{t} \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} 0 \\ e^{t} \\ 0 \end{pmatrix}$.

8. $\mathbf{u} = \begin{pmatrix} 2 - t \\ t \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} t + 1 \\ 2 \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} t \\ t + 2 \end{pmatrix}$

9. $\mathbf{u} = \begin{pmatrix} e^{t} \\ 0 \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} 0 \\ e^{t} \end{pmatrix}$

10. $\mathbf{u} = \begin{pmatrix} \cos(t + \pi/4) \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{v} = \begin{pmatrix} \cos t \\ 0 \\ 0 \\ e^{t} \end{pmatrix}, \quad \mathbf{w} = \begin{pmatrix} \sin t \\ 0 \\ 0 \\ e^{t} \end{pmatrix}$
11. Given the linear differential system
\[
\mathbf{x}' = \begin{pmatrix} 5 & -3 \\ 2 & 0 \end{pmatrix} \mathbf{x}.
\]
Let
\[
\mathbf{u} = \begin{pmatrix} e^{2t} \\ e^{2t} \end{pmatrix} \quad \text{and} \quad \mathbf{v} = \begin{pmatrix} 3e^{3t} \\ 2e^{3t} \end{pmatrix}.
\]
(a) Show that $\mathbf{u}, \mathbf{v}$ are a fundamental set of solutions of the system.

(b) Let $X$ be the corresponding fundamental matrix. Show that $X' = AX$.

(c) Give the general solution of the system.

(d) Find the solution of the system that satisfies $\mathbf{x}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.
12. Let $X$ be the matrix function
\[
X(t) = \begin{pmatrix} \cos 2t & \sin 2t \\ -\sin 2t & \cos 2t \end{pmatrix}
\]
(a) Verify that $X$ is a fundamental matrix for the system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 2 \\ -2 & 0 \end{pmatrix} \mathbf{x}.
\]
(b) Find the solution of the system that satisfies $\mathbf{x}(0) = \begin{pmatrix} 2 \\ 3 \end{pmatrix}$.
13. Let $X$ be the matrix function
\[
X(t) = \begin{pmatrix} 0 & 4te^{t} & e^{t} \\ 1 & e^{t} & 0 \\ 1 & 0 & 0 \end{pmatrix}
\]
(a) Verify that $X$ is a fundamental matrix for the system
\[
\mathbf{x}' = \begin{pmatrix} 1 & 4 & -4 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix} \mathbf{x}.
\]
(b) Find the solution of the system that satisfies $\mathbf{x}(0) = \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix}$.
14. The linear differential system equivalent to the equation
\[
y''' + p(t)y'' + q(t)y' + r(t)y = 0
\]
is:
\[
\begin{pmatrix} x_1' \\ x_2' \\ x_3' \end{pmatrix}
= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -r(t) & -q(t) & -p(t) \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}.
\]
(See Example 2, Section 6.1.) Show that if $y = y(t)$ is a solution of the equation, then
\[
\mathbf{x}(t) = \begin{pmatrix} y(t) \\ y'(t) \\ y''(t) \end{pmatrix}
\]
is a solution of the system.

NOTE: This result holds for linear equations of all orders. However, it is important to understand that solutions of systems which are not converted from equations do not have this special form.
15. Find three linearly independent solutions of
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -4 & 4 & 1 \end{pmatrix} \mathbf{x}.
\]
16. Find three linearly independent solutions of
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -18 & 3 & 4 \end{pmatrix} \mathbf{x}.
\]
17. Find two linearly independent solutions of
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -6/t^2 & 6/t \end{pmatrix} \mathbf{x}.
\]
18. Find two linearly independent solutions of
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 \\ -4/t^2 & 3/t \end{pmatrix} \mathbf{x}.
\]
19. Let $\{\mathbf{x}_1(t), \mathbf{x}_2(t), \ldots, \mathbf{x}_n(t)\}$ be a fundamental set of solutions of (H), and let $X(t)$ be the corresponding fundamental matrix. Show that $X$ satisfies the matrix differential equation
\[
X' = A(t)X.
\]
6.3 Homogeneous Systems with Constant Coefficients, Part I

A homogeneous system with constant coefficients is a linear differential system having the form
\[
\begin{array}{l}
x_1' = a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
x_2' = a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
\qquad \vdots \\
x_n' = a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n
\end{array}
\]
where $a_{11}, a_{12}, \ldots, a_{nn}$ are constants. The system in vector-matrix form is
\[
\begin{pmatrix} x_1' \\ \vdots \\ x_n' \end{pmatrix}
= \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\quad \text{or} \quad \mathbf{x}' = A\mathbf{x}. \tag{1}
\]
Example 1. Consider the 3rd-order linear homogeneous differential equation
\[
y''' + 2y'' - 5y' - 6y = 0.
\]
The characteristic equation is:
\[
r^3 + 2r^2 - 5r - 6 = (r - 2)(r + 1)(r + 3) = 0
\]
and $\{e^{2t}, e^{-t}, e^{-3t}\}$ is a solution basis for the equation.

The corresponding linear homogeneous system is
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix} \mathbf{x}
\]
and
\[
\mathbf{x}_1(t) = \begin{pmatrix} e^{2t} \\ 2e^{2t} \\ 4e^{2t} \end{pmatrix} = e^{2t}\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}
\]
is a solution vector (see Problem 14, Exercises 6.2). Similarly,
\[
\mathbf{x}_2(t) = \begin{pmatrix} e^{-t} \\ -e^{-t} \\ e^{-t} \end{pmatrix} = e^{-t}\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}
\quad \text{and} \quad
\mathbf{x}_3(t) = \begin{pmatrix} e^{-3t} \\ -3e^{-3t} \\ 9e^{-3t} \end{pmatrix} = e^{-3t}\begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}
\]
are solution vectors.
Solutions: Eigenvalues and Eigenvectors

Example 1 suggests that homogeneous systems with constant coefficients might have solution vectors of the form $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$, for some number $\lambda$ and some constant vector $\mathbf{v}$.

Set $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$. Then $\mathbf{x}'(t) = \lambda e^{\lambda t}\mathbf{v}$. Substituting into equation (1), we get:
\[
\lambda e^{\lambda t}\mathbf{v} = A e^{\lambda t}\mathbf{v} \quad \text{which implies} \quad A\mathbf{v} = \lambda\mathbf{v}.
\]
The latter equation is an eigenvalue-eigenvector equation for $A$. Thus, we look for solutions of the form $\mathbf{x}(t) = e^{\lambda t}\mathbf{v}$ where $\lambda$ is an eigenvalue of $A$ and $\mathbf{v}$ is a corresponding eigenvector.
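Both halves of this reduction are easy to test on the matrix of Example 1: $A\mathbf{v} = \lambda\mathbf{v}$ should hold exactly, and $e^{\lambda t}\mathbf{v}$ should then satisfy the system. A numerical sketch (names ours), using the pair $\lambda = 2$, $\mathbf{v} = (1, 2, 4)$:

```python
import math

# A is the companion matrix of y''' + 2y'' - 5y' - 6y = 0 from Example 1.
A = [[0, 1, 0], [0, 0, 1], [6, 5, -2]]
lam, v = 2, [1, 2, 4]

# Eigenvalue equation Av = lambda v (exact integer arithmetic here).
Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
assert Av == [lam * vi for vi in v]

def x(t):
    # candidate solution e^{lambda t} v
    return [math.exp(lam * t) * vi for vi in v]

# Compare a central-difference derivative of x against Ax at one point.
t, h = 0.3, 1e-6
deriv = [(a - b) / (2 * h) for a, b in zip(x(t + h), x(t - h))]
rhs = [sum(A[i][j] * x(t)[j] for j in range(3)) for i in range(3)]
assert all(abs(d - r) < 1e-4 for d, r in zip(deriv, rhs))
```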
Example 2. Returning to Example 1, note that
\[
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix}
\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix},
\qquad
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix}
\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} = -1\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix},
\]
and
\[
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix}
\begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix} = -3\begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}.
\]
$2$ is an eigenvalue of $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 6 & 5 & -2 \end{pmatrix}$ with corresponding eigenvector $\begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}$, $-1$ is an eigenvalue of $A$ with corresponding eigenvector $\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$, and $-3$ is an eigenvalue of $A$ with corresponding eigenvector $\begin{pmatrix} 1 \\ -3 \\ 9 \end{pmatrix}$.
Example 3. Find a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 1 & 5 \\ 3 & 3 \end{pmatrix} \mathbf{x}
\]
and give the general solution of the system.

SOLUTION First we find the eigenvalues:
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & 5 \\ 3 & 3 - \lambda \end{vmatrix} = (\lambda - 6)(\lambda + 2).
\]
The eigenvalues are $\lambda_1 = 6$ and $\lambda_2 = -2$.

Next, we find corresponding eigenvectors. For $\lambda_1 = 6$ we have:
\[
(A - 6I)\mathbf{x} = \begin{pmatrix} -5 & 5 \\ 3 & -3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad \text{which implies} \quad x_1 = x_2, \ x_2 \text{ arbitrary}.
\]
Setting $x_2 = 1$, we get the eigenvector $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

Repeating the process for $\lambda_2 = -2$, we get the eigenvector $\begin{pmatrix} 5 \\ -3 \end{pmatrix}$.

Thus $\mathbf{x}_1(t) = e^{6t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ and $\mathbf{x}_2(t) = e^{-2t}\begin{pmatrix} 5 \\ -3 \end{pmatrix}$ are solution vectors of the system.

The Wronskian of $\mathbf{x}_1$ and $\mathbf{x}_2$ is:
\[
W(t) = \begin{vmatrix} e^{6t} & 5e^{-2t} \\ e^{6t} & -3e^{-2t} \end{vmatrix} = -8e^{4t} \neq 0.
\]
Thus $\mathbf{x}_1$ and $\mathbf{x}_2$ are linearly independent; they form a fundamental set of solutions. The general solution of the system is
\[
\mathbf{x}(t) = C_1 e^{6t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} + C_2 e^{-2t}\begin{pmatrix} 5 \\ -3 \end{pmatrix}.
\]
Example 4. Find a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 3 & 1 & -1 \\ 12 & 0 & -5 \\ 4 & 2 & -1 \end{pmatrix} \mathbf{x}
\]
and find the solution that satisfies the initial condition $\mathbf{x}(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 3 - \lambda & 1 & -1 \\ 12 & -\lambda & -5 \\ 4 & 2 & -1 - \lambda \end{vmatrix} = -\lambda^3 + 2\lambda^2 + \lambda - 2.
\]
Now
\[
\det(A - \lambda I) = 0 \quad \text{implies} \quad \lambda^3 - 2\lambda^2 - \lambda + 2 = (\lambda - 2)(\lambda - 1)(\lambda + 1) = 0.
\]
The eigenvalues are $\lambda_1 = 2$, $\lambda_2 = 1$, $\lambda_3 = -1$.
As you can check, corresponding eigenvectors are:
\[
\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \quad
\mathbf{v}_2 = \begin{pmatrix} 3 \\ 1 \\ 7 \end{pmatrix}, \quad
\mathbf{v}_3 = \begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}.
\]
Since distinct exponential vector-functions are linearly independent (calculate the Wronskian to verify), a fundamental set of solution vectors is:
\[
\mathbf{x}_1(t) = e^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \quad
\mathbf{x}_2(t) = e^{t}\begin{pmatrix} 3 \\ 1 \\ 7 \end{pmatrix}, \quad
\mathbf{x}_3(t) = e^{-t}\begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}.
\]
Thus, the general solution of the system is
\[
\mathbf{x}(t) = C_1 e^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} + C_2 e^{t}\begin{pmatrix} 3 \\ 1 \\ 7 \end{pmatrix} + C_3 e^{-t}\begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}.
\]
To find the solution vector satisfying the initial condition, solve
\[
C_1\mathbf{x}_1(0) + C_2\mathbf{x}_2(0) + C_3\mathbf{x}_3(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
\]
which is:
\[
C_1\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} + C_2\begin{pmatrix} 3 \\ 1 \\ 7 \end{pmatrix} + C_3\begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}
\quad \text{or} \quad
\begin{pmatrix} 1 & 3 & 1 \\ 1 & 1 & -2 \\ 2 & 7 & 2 \end{pmatrix}
\begin{pmatrix} C_1 \\ C_2 \\ C_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.
\]
NOTE: The matrix of coefficients here is the fundamental matrix evaluated at $t = 0$.

Using the solution method of your choice (row reduction, inverse, Cramer's rule), the solution is: $C_1 = 3$, $C_2 = -1$, $C_3 = 1$. The solution of the initial-value problem is
\[
\mathbf{x}(t) = 3e^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} - e^{t}\begin{pmatrix} 3 \\ 1 \\ 7 \end{pmatrix} + e^{-t}\begin{pmatrix} 1 \\ -2 \\ 2 \end{pmatrix}.
\]
Exercises 6.3

Find the general solution of the system $\mathbf{x}' = A\mathbf{x}$ where $A$ is the given matrix. If an initial condition is given, also find the solution that satisfies the condition.

1. $A = \begin{pmatrix} 2 & 4 \\ 1 & 1 \end{pmatrix}$.

2. $A = \begin{pmatrix} 3 & 2 \\ 1 & 2 \end{pmatrix}$.

3. $A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}, \quad \mathbf{x}(0) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$.

4. $A = \begin{pmatrix} 1 & 2 \\ 3 & 2 \end{pmatrix}$.

5. $A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}$.

6. $A = \begin{pmatrix} 1 & 1 \\ 4 & 2 \end{pmatrix}, \quad \mathbf{x}(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

7. $A = \begin{pmatrix} 3 & 2 & 2 \\ 3 & 1 & 3 \\ 1 & 2 & 0 \end{pmatrix}$. Hint: $1$ is an eigenvalue.

8. $A = \begin{pmatrix} 15 & 7 & 7 \\ 1 & 1 & 1 \\ 13 & 7 & 5 \end{pmatrix}$. Hint: $2$ is an eigenvalue.

9. $A = \begin{pmatrix} 3 & 0 & 1 \\ 2 & 2 & 1 \\ 8 & 0 & 3 \end{pmatrix}, \quad \mathbf{x}(0) = \begin{pmatrix} 1 \\ 2 \\ 8 \end{pmatrix}$. Hint: $2$ is an eigenvalue.

10. $A = \begin{pmatrix} 2 & 2 & 1 \\ 0 & 1 & 0 \\ 2 & 2 & 1 \end{pmatrix}$. Hint: $0$ is an eigenvalue.

11. $A = \begin{pmatrix} 8 & 2 & 1 \\ 1 & 7 & 3 \\ 1 & 1 & 6 \end{pmatrix}$. Hint: $5$ is an eigenvalue.

12. $A = \begin{pmatrix} 1 & 3 & 1 \\ 1 & 1 & 1 \\ 3 & 3 & 1 \end{pmatrix}, \quad \mathbf{x}(0) = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$. Hint: $2$ is an eigenvalue.
13. Given the second order, homogeneous equation with constant coefficients
\[
y'' + ay' + by = 0.
\]
(a) Transform the equation into a system of first order equations by setting $x_1 = y$, $x_2 = y'$. Then write your system in the vector-matrix form $\mathbf{x}' = A\mathbf{x}$.

(b) Find the characteristic equation for the coefficient matrix you found in part (a).

(c) Compare your result in (b) with the characteristic equation for $y'' + ay' + by = 0$.
14. Given the third order, homogeneous equation with constant coefficients
\[
y''' + ay'' + by' + cy = 0.
\]
(a) Transform the equation into a system of first order equations by setting $x_1 = y$, $x_2 = y'$, $x_3 = y''$. Then write your system in the vector-matrix form $\mathbf{x}' = A\mathbf{x}$.

(b) Find the characteristic equation for the coefficient matrix you found in part (a).

(c) Compare your result in (b) with the characteristic equation for
\[
y''' + ay'' + by' + cy = 0.
\]
6.4 Homogeneous Systems with Constant Coefficients: Part II

There are two difficulties that can arise in solving a homogeneous system with constant coefficients. We will treat these in this section.

1. A has complex eigenvalues.

If $\lambda = a + bi$ is a complex eigenvalue of $A$ with corresponding (complex) eigenvector $\mathbf{u} + i\,\mathbf{v}$, then $\bar{\lambda} = a - bi$ (the complex conjugate of $\lambda$) is also an eigenvalue of $A$ and $\mathbf{u} - i\,\mathbf{v}$ is a corresponding eigenvector. The corresponding linearly independent complex solutions of $\mathbf{x}' = A\mathbf{x}$ are:
\[
\mathbf{w}_1(t) = e^{(a+bi)t}(\mathbf{u} + i\,\mathbf{v}) = e^{at}(\cos bt + i \sin bt)(\mathbf{u} + i\,\mathbf{v})
= e^{at}\left[(\cos bt\ \mathbf{u} - \sin bt\ \mathbf{v}) + i(\cos bt\ \mathbf{v} + \sin bt\ \mathbf{u})\right]
\]
\[
\mathbf{w}_2(t) = e^{(a-bi)t}(\mathbf{u} - i\,\mathbf{v}) = e^{at}(\cos bt - i \sin bt)(\mathbf{u} - i\,\mathbf{v})
= e^{at}\left[(\cos bt\ \mathbf{u} - \sin bt\ \mathbf{v}) - i(\cos bt\ \mathbf{v} + \sin bt\ \mathbf{u})\right]
\]
Now
\[
\mathbf{x}_1(t) = \tfrac{1}{2}\left[\mathbf{w}_1(t) + \mathbf{w}_2(t)\right] = e^{at}(\cos bt\ \mathbf{u} - \sin bt\ \mathbf{v})
\]
and
\[
\mathbf{x}_2(t) = \tfrac{1}{2i}\left[\mathbf{w}_1(t) - \mathbf{w}_2(t)\right] = e^{at}(\cos bt\ \mathbf{v} + \sin bt\ \mathbf{u})
\]
are linearly independent solutions of the system, and they are real-valued vector functions. Note that $\mathbf{x}_1$ and $\mathbf{x}_2$ are simply the real and imaginary parts of $\mathbf{w}_1$ (or of $\mathbf{w}_2$). (Review Section 3.3, where you were shown how to convert complex exponential solutions into real-valued solutions involving sine and cosine.)
Example 1. Determine the general solution of
\[
\mathbf{x}' = \begin{pmatrix} 2 & -5 \\ 1 & 0 \end{pmatrix} \mathbf{x}.
\]

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & -5 \\ 1 & -\lambda \end{vmatrix} = \lambda^2 - 2\lambda + 5.
\]
The eigenvalues are: $\lambda_1 = 1 + 2i$, $\lambda_2 = 1 - 2i$. The corresponding eigenvectors are:
\[
\mathbf{v}_1 = \begin{pmatrix} 1 + 2i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} + i\begin{pmatrix} 2 \\ 0 \end{pmatrix},
\qquad
\mathbf{v}_2 = \begin{pmatrix} 1 - 2i \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} - i\begin{pmatrix} 2 \\ 0 \end{pmatrix}.
\]
Now
\[
e^{(1+2i)t}\left[\begin{pmatrix} 1 \\ 1 \end{pmatrix} + i\begin{pmatrix} 2 \\ 0 \end{pmatrix}\right]
= e^{t}(\cos 2t + i \sin 2t)\left[\begin{pmatrix} 1 \\ 1 \end{pmatrix} + i\begin{pmatrix} 2 \\ 0 \end{pmatrix}\right]
\]
\[
= e^{t}\left[\cos 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \sin 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix}\right]
+ i\, e^{t}\left[\cos 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \sin 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix}\right].
\]
A fundamental set of solution vectors for the system is:
\[
\mathbf{x}_1(t) = e^{t}\left[\cos 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \sin 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix}\right],
\qquad
\mathbf{x}_2(t) = e^{t}\left[\cos 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \sin 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix}\right].
\]
The general solution of the system is
\[
\mathbf{x}(t) = C_1 e^{t}\left[\cos 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix} - \sin 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix}\right]
+ C_2 e^{t}\left[\cos 2t \begin{pmatrix} 2 \\ 0 \end{pmatrix} + \sin 2t \begin{pmatrix} 1 \\ 1 \end{pmatrix}\right].
\]
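The real and imaginary parts extracted this way really are solutions, which is easy to confirm numerically. A sketch (names ours) for this example:

```python
import math

# Example 1 of this section: A = [[2, -5], [1, 0]], eigenvalue 1 + 2i with
# eigenvector u + iv, u = (1, 1), v = (2, 0).  Check that the two real
# solutions x1, x2 built from cos/sin satisfy x' = Ax.

A = [[2, -5], [1, 0]]
u, v = (1, 1), (2, 0)

def x1(t):
    c, s, e = math.cos(2*t), math.sin(2*t), math.exp(t)
    return [e * (c * u[i] - s * v[i]) for i in range(2)]

def x2(t):
    c, s, e = math.cos(2*t), math.sin(2*t), math.exp(t)
    return [e * (c * v[i] + s * u[i]) for i in range(2)]

h = 1e-6
for f in (x1, x2):
    for t in (0.0, 0.8):
        deriv = [(a - b) / (2 * h) for a, b in zip(f(t + h), f(t - h))]
        rhs = [sum(A[i][j] * f(t)[j] for j in range(2)) for i in range(2)]
        assert all(abs(d - r) < 1e-3 for d, r in zip(deriv, rhs))
```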
Example 2. Determine a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 1 & -4 & -1 \\ 3 & 2 & 3 \\ 1 & 1 & 3 \end{pmatrix} \mathbf{x}.
\]

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -4 & -1 \\ 3 & 2 - \lambda & 3 \\ 1 & 1 & 3 - \lambda \end{vmatrix}
= -\lambda^3 + 6\lambda^2 - 21\lambda + 26 = -(\lambda - 2)(\lambda^2 - 4\lambda + 13).
\]
The eigenvalues are: $\lambda_1 = 2$, $\lambda_2 = 2 + 3i$, $\lambda_3 = 2 - 3i$. The corresponding eigenvectors are:
\[
\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \qquad
\mathbf{v}_2 = \begin{pmatrix} -5 + 3i \\ 3 + 3i \\ 2 \end{pmatrix} = \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix},
\qquad
\mathbf{v}_3 = \begin{pmatrix} -5 - 3i \\ 3 - 3i \\ 2 \end{pmatrix} = \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} - i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}.
\]
Now
\[
e^{(2+3i)t}\left[\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right]
= e^{2t}(\cos 3t + i \sin 3t)\left[\begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} + i\begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right]
\]
\[
= e^{2t}\left[\cos 3t \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} - \sin 3t \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right]
+ i\, e^{2t}\left[\cos 3t \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} + \sin 3t \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix}\right].
\]
A fundamental set of solution vectors for the system is:
\[
\mathbf{x}_1(t) = e^{2t}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \qquad
\mathbf{x}_2(t) = e^{2t}\left[\cos 3t \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix} - \sin 3t \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix}\right],
\]
\[
\mathbf{x}_3(t) = e^{2t}\left[\cos 3t \begin{pmatrix} 3 \\ 3 \\ 0 \end{pmatrix} + \sin 3t \begin{pmatrix} -5 \\ 3 \\ 2 \end{pmatrix}\right].
\]
2. A has an eigenvalue of multiplicity greater than 1.

We'll treat the case where $A$ has an eigenvalue of multiplicity 2. At the end of the section we indicate the possibilities when $A$ has an eigenvalue of multiplicity 3. You will see that the difficulties increase with the multiplicity.
Example 3. Determine a fundamental set of solution vectors of
\[
\mathbf{x}' = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix} \mathbf{x}.
\]

SOLUTION
\[
\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -3 & 3 \\ 3 & -5 - \lambda & 3 \\ 6 & -6 & 4 - \lambda \end{vmatrix}
= -\lambda^3 + 12\lambda + 16 = -(\lambda - 4)(\lambda + 2)^2.
\]
The eigenvalues are: $\lambda_1 = 4$, $\lambda_2 = \lambda_3 = -2$.

As you can check, an eigenvector corresponding to $\lambda_1 = 4$ is $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$.
We'll carry out the details involved in finding an eigenvector corresponding to the double eigenvalue $-2$.
\[
[A - (-2)I]\mathbf{v} = \begin{pmatrix} 3 & -3 & 3 \\ 3 & -3 & 3 \\ 6 & -6 & 6 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
The augmented matrix for this system of equations is
\[
\begin{pmatrix} 3 & -3 & 3 & 0 \\ 3 & -3 & 3 & 0 \\ 6 & -6 & 6 & 0 \end{pmatrix}
\quad \text{which row reduces to} \quad
\begin{pmatrix} 1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
The solutions of this system are: $v_1 = v_2 - v_3$, $v_2, v_3$ arbitrary. We can assign values to $v_2$ and $v_3$ independently and obtain two linearly independent eigenvectors. For example, setting $v_2 = 1$, $v_3 = 0$, we get the eigenvector $\mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}$. Reversing these values, we set $v_2 = 0$, $v_3 = 1$ to get the eigenvector $\mathbf{v}_3 = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}$. Clearly $\mathbf{v}_2$ and $\mathbf{v}_3$ are linearly independent. You should understand that there is nothing magic about our two choices for $v_2, v_3$; any choice which produces two independent vectors will do.

The important thing to note here is that this eigenvalue of multiplicity 2 produced two independent eigenvectors.

Based on our work above, a fundamental set of solutions for the differential system
\[
\mathbf{x}' = \begin{pmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{pmatrix} \mathbf{x}
\]
is
\[
\mathbf{x}_1(t) = e^{4t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \qquad
\mathbf{x}_2(t) = e^{-2t}\begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad
\mathbf{x}_3(t) = e^{-2t}\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.
\]
Example 4. Let $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 12 & 8 & -1 \end{pmatrix}$.
\[
\det(A - \lambda I) = \begin{vmatrix} -\lambda & 1 & 0 \\ 0 & -\lambda & 1 \\ 12 & 8 & -1 - \lambda \end{vmatrix}
= -\lambda^3 - \lambda^2 + 8\lambda + 12 = -(\lambda - 3)(\lambda + 2)^2.
\]
The eigenvalues are: $\lambda_1 = 3$, $\lambda_2 = \lambda_3 = -2$.

As you can check, an eigenvector corresponding to $\lambda_1 = 3$ is $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix}$.
We'll carry out the details involved in finding an eigenvector corresponding to the double eigenvalue $-2$.
\[
[A - (-2)I]\mathbf{v} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 12 & 8 & 1 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
The augmented matrix for this system of equations is
\[
\begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 12 & 8 & 1 & 0 \end{pmatrix}
\quad \text{which row reduces to} \quad
\begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\]
The solutions of this system are $v_1 = \tfrac{1}{4}v_3$, $v_2 = -\tfrac{1}{2}v_3$, $v_3$ arbitrary. Here there is only one parameter and so we'll get only one eigenvector. Setting $v_3 = 4$ we get the eigenvector
\[
\mathbf{v}_2 = \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
In contrast to the preceding example, the double eigenvalue here produced only one (independent) eigenvector.

Suppose that we were asked to find a fundamental set of solutions of the linear differential system
\[
\mathbf{x}' = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 12 & 8 & -1 \end{pmatrix} \mathbf{x}.
\]
By our work above, we have two independent solutions
\[
\mathbf{x}_1 = e^{3t}\begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix}
\quad \text{and} \quad
\mathbf{x}_2 = e^{-2t}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
We need a third solution which is independent of these two.

Our system has a special form; it is equivalent to the third order equation
\[
y''' + y'' - 8y' - 12y = 0.
\]
The characteristic equation is
\[
r^3 + r^2 - 8r - 12 = (r - 3)(r + 2)^2 = 0
\]
(compare with $\det(A - \lambda I)$). The roots are: $r_1 = 3$, $r_2 = r_3 = -2$, and a fundamental set of solutions is $\{y_1 = e^{3t},\ y_2 = e^{-2t},\ y_3 = te^{-2t}\}$. The correspondence between these solutions and the solution vectors we found above should be clear:
\[
e^{3t} \longleftrightarrow e^{3t}\begin{pmatrix} 1 \\ 3 \\ 9 \end{pmatrix},
\qquad
e^{-2t} \longleftrightarrow e^{-2t}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}.
\]
As we saw in Section 6.3, the solution $y_3(t) = te^{-2t}$ of the equation produces the solution vector
\[
\mathbf{x}_3(t) = \begin{pmatrix} y_3(t) \\ y_3'(t) \\ y_3''(t) \end{pmatrix}
= \begin{pmatrix} te^{-2t} \\ e^{-2t} - 2te^{-2t} \\ -4e^{-2t} + 4te^{-2t} \end{pmatrix}
= e^{-2t}\begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix} + te^{-2t}\begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix}
\]
of the corresponding system.

The appearance of the $te^{-2t}\mathbf{v}_2$ term should not be unexpected since we know that a characteristic root $r$ of multiplicity 2 produces a solution of the form $te^{rt}$.

You can check that $\mathbf{x}_3$ is independent of $\mathbf{x}_1$ and $\mathbf{x}_2$. Therefore, the solution vectors $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ are a fundamental set of solutions of the system.
The question is: What is the significance of the vector $\mathbf{w} = \begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix}$? How is it related to the eigenvalue $-2$ which generated it, and to the corresponding eigenvector?

Let's look at $[A - (-2)I]\mathbf{w} = [A + 2I]\mathbf{w}$:
\[
[A + 2I]\mathbf{w} = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 12 & 8 & 1 \end{pmatrix}
\begin{pmatrix} 0 \\ 1 \\ -4 \end{pmatrix}
= \begin{pmatrix} 1 \\ -2 \\ 4 \end{pmatrix} = \mathbf{v}_2;
\]
$A - (-2)I$ maps $\mathbf{w}$ onto the eigenvector $\mathbf{v}_2$. The corresponding solution of the system has the form
\[
\mathbf{x}_3(t) = e^{-2t}\mathbf{w} + te^{-2t}\mathbf{v}_2
\]
where $\mathbf{v}_2$ is the eigenvector corresponding to $-2$ and $\mathbf{w}$ satisfies
\[
[A - (-2)I]\mathbf{w} = \mathbf{v}_2.
\]
An Eigenvalue of Multiplicity 2: The General Result

Given the linear differential system $\mathbf{x}' = A\mathbf{x}$. Suppose that $A$ has an eigenvalue $\lambda$ of multiplicity 2. Then exactly one of the following holds:

1. $\lambda$ has two linearly independent eigenvectors, $\mathbf{v}_1$ and $\mathbf{v}_2$. Corresponding linearly independent solution vectors of the differential system are $\mathbf{x}_1(t) = e^{\lambda t}\mathbf{v}_1$ and $\mathbf{x}_2(t) = e^{\lambda t}\mathbf{v}_2$.

2. $\lambda$ has only one (independent) eigenvector $\mathbf{v}$. Then a linearly independent pair of solution vectors corresponding to $\lambda$ are:
\[
\mathbf{x}_1(t) = e^{\lambda t}\mathbf{v} \quad \text{and} \quad \mathbf{x}_2(t) = e^{\lambda t}\mathbf{w} + te^{\lambda t}\mathbf{v}
\]
where $\mathbf{w}$ is a vector that satisfies $(A - \lambda I)\mathbf{w} = \mathbf{v}$. The vector $\mathbf{w}$ is called a generalized eigenvector corresponding to the eigenvalue $\lambda$.
Example 5. Find a fundamental set of solution vectors for x' = \begin{pmatrix} 1 & 1 \\ -1 & 3 \end{pmatrix} x.

SOLUTION

\[ \det(A - \lambda I) = \begin{vmatrix} 1-\lambda & 1 \\ -1 & 3-\lambda \end{vmatrix} = \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2. \]

Characteristic values: λ_1 = λ_2 = 2.
Characteristic vectors:

\[ (A - 2I)v = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}; \qquad \begin{pmatrix} -1 & 1 & 0 \\ -1 & 1 & 0 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \]

The solutions are: v_1 = v_2, v_2 arbitrary; there is only one eigenvector. Setting v_2 = 1, we get v = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
The vector x_1 = e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} is a solution of the system.
A second solution, independent of x_1, is x_2 = e^{2t}w + te^{2t}v where w satisfies (A - 2I)w = v:

\[ (A - 2I)w = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}; \qquad \begin{pmatrix} -1 & 1 & 1 \\ -1 & 1 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & -1 & -1 \\ 0 & 0 & 0 \end{pmatrix}. \]
The solutions of this system are w_1 = -1 + w_2, w_2 arbitrary. If we choose w_2 = 0 (since this is a nonhomogeneous system, any choice for w_2 will do), we get w_1 = -1 and w = \begin{pmatrix} -1 \\ 0 \end{pmatrix}. Thus
\[ x_2(t) = e^{2t}\begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \]

is a solution of the system independent of x_1. The solutions

\[ x_1(t) = e^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad x_2(t) = e^{2t}\begin{pmatrix} -1 \\ 0 \end{pmatrix} + te^{2t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \]

are a fundamental set of solutions of the system.
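As a quick numerical sanity check of Example 5 (ours, not in the text), we can compare a centered-difference derivative of x_2(t) with A x_2(t) at a sample point:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 3.0]])
v = np.array([1.0, 1.0])        # eigenvector
w = np.array([-1.0, 0.0])       # generalized eigenvector

def x2(t):
    return np.exp(2*t)*w + t*np.exp(2*t)*v

# centered-difference check that x2'(t) = A x2(t)
t, h = 0.7, 1e-6
deriv = (x2(t + h) - x2(t - h)) / (2*h)
print(np.allclose(deriv, A @ x2(t), atol=1e-4))   # -> True
```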
Example 6. Let A = \begin{pmatrix} 3 & 1 & -1 \\ 2 & 2 & -1 \\ 2 & 2 & 0 \end{pmatrix}. Find a fundamental set of solutions of x' = Ax.

SOLUTION

\[ \det(A - \lambda I) = \begin{vmatrix} 3-\lambda & 1 & -1 \\ 2 & 2-\lambda & -1 \\ 2 & 2 & -\lambda \end{vmatrix} = -\lambda^3 + 5\lambda^2 - 8\lambda + 4 = (1 - \lambda)(\lambda - 2)^2. \]

The eigenvalues are: λ_1 = 1, λ_2 = λ_3 = 2.

An eigenvector corresponding to λ_1 = 1 is v_1 = \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix} (check this).
We'll show the details involved in finding an eigenvector (or eigenvectors) corresponding to the double eigenvalue 2.

\[ [A - 2I]v = \begin{pmatrix} 1 & 1 & -1 \\ 2 & 0 & -1 \\ 2 & 2 & -2 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. \]
The augmented matrix for this system of equations is

\[ \begin{pmatrix} 1 & 1 & -1 & 0 \\ 2 & 0 & -1 & 0 \\ 2 & 2 & -2 & 0 \end{pmatrix} \quad\text{which row reduces to}\quad \begin{pmatrix} 1 & 1 & -1 & 0 \\ 0 & 2 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]

The solutions of this system are: v_3 = 2v_2, v_1 = -v_2 + v_3 = v_2, v_2 arbitrary. There is only one eigenvector corresponding to the eigenvalue 2. Setting v_2 = 1, we get v_2 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}.
Thus, two independent solutions of the given linear differential system are

\[ x_1 = e^{t}\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad x_2 = e^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}. \]
We need another solution corresponding to the eigenvalue 2, one which is independent of x_2. We know that this solution has the form

\[ x_3(t) = e^{2t}w + te^{2t}v_2 \]

where w satisfies (A - 2I)w = v_2. That is:

\[ \begin{pmatrix} 1 & 1 & -1 \\ 2 & 0 & -1 \\ 2 & 2 & -2 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}. \]
The augmented matrix is

\[ \begin{pmatrix} 1 & 1 & -1 & 1 \\ 2 & 0 & -1 & 1 \\ 2 & 2 & -2 & 2 \end{pmatrix} \quad\text{which row reduces to}\quad \begin{pmatrix} 1 & 1 & -1 & 1 \\ 0 & 2 & -1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}. \]

The solutions of this system are

\[ w_3 = -1 + 2w_2, \qquad w_1 = 1 - w_2 + w_3 = 1 - w_2 + (-1 + 2w_2) = w_2, \qquad w_2 \text{ arbitrary}. \]

If we choose w_2 = 0 (any choice for w_2 will do), we get w_1 = 0, w_2 = 0, w_3 = -1 and w = \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix}. Thus
\[ x_3 = e^{2t}\begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} + te^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} \]

is a solution of the system independent of x_2 (and of x_1). The solutions

\[ x_1 = e^{t}\begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}, \qquad x_2 = e^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \qquad x_3 = e^{2t}\begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix} + te^{2t}\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} \]

are a fundamental set of solutions of the system.
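The eigen-structure used in Example 6 can be confirmed in a few lines (the signs of A are as read here, so treat this as a check of our reading rather than part of the text):

```python
import numpy as np

A = np.array([[3.0, 1.0, -1.0],
              [2.0, 2.0, -1.0],
              [2.0, 2.0, 0.0]])
v1 = np.array([1.0, 0.0, 2.0])    # eigenvector for lambda = 1
v2 = np.array([1.0, 1.0, 2.0])    # eigenvector for lambda = 2
w = np.array([0.0, 0.0, -1.0])    # generalized eigenvector

print(np.allclose(A @ v1, v1),
      np.allclose(A @ v2, 2*v2),
      np.allclose((A - 2*np.eye(3)) @ w, v2))   # -> True True True
```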
Exercises 6.4

Find the general solution of the system x' = Ax where A is the given matrix. If an initial condition is given, also find the solution that satisfies the condition.

1. \begin{pmatrix} 2 & 4 \\ 2 & 2 \end{pmatrix}.

2. \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}, x(0) = \begin{pmatrix} 1 \\ 3 \end{pmatrix}.

3. \begin{pmatrix} 1 & 1 \\ 4 & 3 \end{pmatrix}.

4. \begin{pmatrix} 5 & 2 \\ 2 & 1 \end{pmatrix}.

5. \begin{pmatrix} 3 & 2 \\ 8 & 5 \end{pmatrix}, x(0) = \begin{pmatrix} 3 \\ 2 \end{pmatrix}.

6. \begin{pmatrix} 1 & 1 \\ 4 & 5 \end{pmatrix}.

7. \begin{pmatrix} 3 & 4 & 4 \\ 4 & 5 & 4 \\ 4 & 4 & 3 \end{pmatrix}, x(0) = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}. Hint: 3 is an eigenvalue.

8. \begin{pmatrix} 3 & 0 & 3 \\ 1 & 2 & 3 \\ 1 & 0 & 1 \end{pmatrix}. Hint: 2 is an eigenvalue.

9. \begin{pmatrix} 0 & 4 & 0 \\ 1 & 0 & 0 \\ 1 & 4 & 1 \end{pmatrix}. Hint: 1 is an eigenvalue.

10. \begin{pmatrix} 5 & 5 & 5 \\ 1 & 4 & 2 \\ 3 & 5 & 3 \end{pmatrix}. Hint: 2 is an eigenvalue.

11. \begin{pmatrix} 1 & 1 & 2 \\ 0 & 1 & 0 \\ 0 & 1 & 3 \end{pmatrix}, x(0) = \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}. Hint: 3 is an eigenvalue.

12. \begin{pmatrix} 3 & 1 & 1 \\ 7 & 5 & 1 \\ 6 & 6 & 2 \end{pmatrix}, x(0) = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}. Hint: 4 is an eigenvalue.

13. \begin{pmatrix} 0 & 1 & 1 \\ 1 & 1 & 1 \\ 2 & 1 & 3 \end{pmatrix}. Hint: 2 is an eigenvalue.

14. \begin{pmatrix} 0 & 0 & 2 \\ 1 & 2 & 1 \\ 1 & 0 & 3 \end{pmatrix}. Hint: 2 is an eigenvalue.

15. \begin{pmatrix} 2 & 1 & 1 \\ 2 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}, x(0) = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}. Hint: 2 is an eigenvalue.

16. \begin{pmatrix} 2 & 1 & 1 \\ 3 & 3 & 4 \\ 3 & 1 & 2 \end{pmatrix}. Hint: 1 is an eigenvalue.

17. \begin{pmatrix} 2 & 2 & 6 \\ 2 & 1 & 3 \\ 2 & 1 & 1 \end{pmatrix}. Hint: 6 is an eigenvalue.

18. \begin{pmatrix} 8 & 6 & 1 \\ 10 & 9 & 2 \\ 10 & 7 & 0 \end{pmatrix}. Hint: 3 is an eigenvalue.
Eigenvalues of Multiplicity 3.

Given the differential system x' = Ax, suppose that λ is an eigenvalue of A of multiplicity 3. Then exactly one of the following holds:

1. λ has three linearly independent eigenvectors c_1, c_2, c_3. Then three linearly independent solution vectors of the system corresponding to λ are:

\[ v_1(t) = e^{\lambda t}c_1, \qquad v_2(t) = e^{\lambda t}c_2, \qquad v_3(t) = e^{\lambda t}c_3. \]

2. λ has two linearly independent eigenvectors c_1, c_2. Then two linearly independent solutions of the system corresponding to λ are:

\[ v_1(t) = e^{\lambda t}c_1, \qquad v_2(t) = e^{\lambda t}c_2. \]

A third solution, independent of v_1 and v_2, has the form

\[ v_3(t) = e^{\lambda t}w + te^{\lambda t}v \]

where v is an eigenvector corresponding to λ and (A - λI)w = v.

3. λ has only one (independent) eigenvector c. Then three linearly independent solutions of the system have the form:

\[ v_1 = e^{\lambda t}c, \qquad v_2 = e^{\lambda t}w + te^{\lambda t}c, \qquad v_3(t) = e^{\lambda t}z + te^{\lambda t}w + \tfrac{1}{2}t^2 e^{\lambda t}c \]

where (A - λI)w = c and (A - λI)z = w.
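The coefficient ½t² in case 3 can be verified on a small example. Below we use a 3×3 Jordan block of our own choosing (not from the text), for which the chain vectors c, w, z are just the standard basis vectors, and check v_3'(t) = A v_3(t) by a centered difference:

```python
import numpy as np

lam = 2.0
A = np.array([[lam, 1.0, 0.0],    # a single 3x3 Jordan block:
              [0.0, lam, 1.0],    # one eigenvalue, one eigenvector
              [0.0, 0.0, lam]])
c = np.array([1.0, 0.0, 0.0])     # eigenvector
w = np.array([0.0, 1.0, 0.0])     # (A - lam*I) w = c
z = np.array([0.0, 0.0, 1.0])     # (A - lam*I) z = w

def v3(t):
    return np.exp(lam*t) * (z + t*w + 0.5*t*t*c)

t, h = 0.3, 1e-6
deriv = (v3(t + h) - v3(t - h)) / (2*h)
print(np.allclose(deriv, A @ v3(t), atol=1e-4))   # -> True
```

Dropping the ½ makes the check fail, which is how one can see the coefficient is forced.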
6.5 Nonhomogeneous Systems

The treatment in this section parallels exactly the treatment of linear nonhomogeneous equations in Sections 3.4 and 3.7.

Recall from Section 6.1 that a linear nonhomogeneous differential system is a system of the form

\[ \begin{aligned} x_1' &= a_{11}(t)x_1 + a_{12}(t)x_2 + \cdots + a_{1n}(t)x_n + b_1(t) \\ x_2' &= a_{21}(t)x_1 + a_{22}(t)x_2 + \cdots + a_{2n}(t)x_n + b_2(t) \\ &\;\;\vdots \\ x_n' &= a_{n1}(t)x_1 + a_{n2}(t)x_2 + \cdots + a_{nn}(t)x_n + b_n(t) \end{aligned} \tag{N} \]

where a_{11}(t), a_{12}(t), ..., a_{1n}(t), a_{21}(t), ..., a_{nn}(t), b_1(t), b_2(t), ..., b_n(t) are continuous functions on some interval I and the functions b_i(t) are not all identically zero on I; that is, there is at least one point a ∈ I and at least one function b_i(t) such that b_i(a) ≠ 0.
Let A(t) be the n × n matrix

\[ A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ \vdots & \vdots & & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{pmatrix} \]

and let x and b(t) be the vectors

\[ x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad b(t) = \begin{pmatrix} b_1(t) \\ b_2(t) \\ \vdots \\ b_n(t) \end{pmatrix}. \]
Then (N) can be written in the vector-matrix form

\[ x' = A(t)x + b(t). \tag{N} \]

The corresponding linear homogeneous system

\[ x' = A(t)x \tag{H} \]

is called the reduced system of (N).
THEOREM 1. If z_1(t) and z_2(t) are solutions of (N), then

\[ x(t) = z_1(t) - z_2(t) \]

is a solution of (H). (Cf. Theorem 1, Section 3.4.)

Proof: Since z_1 and z_2 are solutions of (N),

\[ z_1'(t) = A(t)z_1(t) + b(t) \quad\text{and}\quad z_2'(t) = A(t)z_2(t) + b(t). \]

Let x(t) = z_1(t) - z_2(t). Then

\[ x'(t) = z_1'(t) - z_2'(t) = [A(t)z_1(t) + b(t)] - [A(t)z_2(t) + b(t)] = A(t)[z_1(t) - z_2(t)] = A(t)x(t). \]

Thus, x(t) = z_1(t) - z_2(t) is a solution of (H).
Our next theorem gives the structure of the set of solutions of (N).

THEOREM 2. Let x_1(t), x_2(t), ..., x_n(t) be a fundamental set of solutions of the reduced system (H) and let z = z(t) be a particular solution of (N). If u = u(t) is any solution of (N), then there exist constants c_1, c_2, ..., c_n such that

\[ u(t) = c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t) + z(t). \]

(Cf. Theorem 2, Section 3.4.)

Proof: Let u = u(t) be any solution of (N). By Theorem 1, u(t) - z(t) is a solution of the reduced system (H). Since x_1(t), x_2(t), ..., x_n(t) are n linearly independent solutions of (H), there exist constants c_1, c_2, ..., c_n such that

\[ u(t) - z(t) = c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t). \]

Therefore

\[ u(t) = c_1 x_1(t) + c_2 x_2(t) + \cdots + c_n x_n(t) + z(t). \]
According to Theorem 2, if x_1(t), x_2(t), ..., x_n(t) are linearly independent solutions of the reduced system (H) and z = z(t) is a particular solution of (N), then

\[ x(t) = C_1 x_1(t) + C_2 x_2(t) + \cdots + C_n x_n(t) + z(t) \tag{1} \]

represents the set of all solutions of (N). That is, (1) is the general solution of (N). Another way to look at (1) is: the general solution of (N) consists of the general solution of the reduced system (H) plus a particular solution of (N):

\[ \underbrace{x}_{\text{general solution of (N)}} = \underbrace{C_1 x_1(t) + C_2 x_2(t) + \cdots + C_n x_n(t)}_{\text{general solution of (H)}} + \underbrace{z(t)}_{\text{particular solution of (N)}}. \]
Variation of Parameters

Let x_1(t), x_2(t), ..., x_n(t) be a fundamental set of solutions of (H) and let X(t) be the corresponding fundamental matrix (X is the n × n matrix whose columns are x_1, x_2, ..., x_n). Then, as we saw in Section 6.3, the general solution of (H) can be written

\[ X(t)C \quad\text{where}\quad C = \begin{pmatrix} C_1 \\ C_2 \\ \vdots \\ C_n \end{pmatrix}. \]

In Exercises 6.3, Problem 19, you were asked to show that X satisfies the matrix differential system

\[ X' = A(t)X. \]

That is, X'(t) = A(t)X(t).
We replace the constant vector C by a vector function u(t) which is to be determined so that

\[ z(t) = X(t)u(t) \]

is a solution of (N). Differentiating z, we get

\[ z'(t) = [X(t)u(t)]' = X(t)u'(t) + X'(t)u(t) = X(t)u'(t) + A(t)X(t)u(t). \]

Since z is to satisfy (N), we have

\[ z'(t) = A(t)z(t) + b(t) = A(t)X(t)u(t) + b(t). \]

Therefore

\[ X(t)u'(t) + A(t)X(t)u(t) = A(t)X(t)u(t) + b(t), \]

from which it follows that

\[ X(t)u'(t) = b(t). \]

Since X is a fundamental matrix, it is nonsingular, and so we can solve for u':

\[ u'(t) = X^{-1}(t)b(t) \quad\text{which implies}\quad u(t) = \int X^{-1}(t)b(t)\,dt. \]

Finally,

\[ z(t) = X(t)\int X^{-1}(t)b(t)\,dt \]

is a solution of (N).
By Theorem 2, the general solution of (N) is given by

\[ x(t) = X(t)C + X(t)\int X^{-1}(t)b(t)\,dt. \tag{2} \]

Compare this result with the general solution of the first order linear differential equation given by equation (2) in Section 2.1.
Example 1. Find the general solution of the nonhomogeneous linear differential system

\[ x' = \begin{pmatrix} 0 & 1 \\ -t^{-2} & t^{-1} \end{pmatrix} x + \begin{pmatrix} 0 \\ 2t^{-1} \end{pmatrix}. \]

SOLUTION You can verify that x_1(t) = \begin{pmatrix} t \\ 1 \end{pmatrix} and x_2(t) = \begin{pmatrix} t\ln t \\ 1 + \ln t \end{pmatrix} are a fundamental set of solutions of the reduced system

\[ x' = \begin{pmatrix} 0 & 1 \\ -t^{-2} & t^{-1} \end{pmatrix} x. \]

The corresponding fundamental matrix is

\[ X(t) = \begin{pmatrix} t & t\ln t \\ 1 & 1 + \ln t \end{pmatrix}. \]

The inverse of X is given by

\[ X^{-1}(t) = \begin{pmatrix} t^{-1} + t^{-1}\ln t & -\ln t \\ -t^{-1} & 1 \end{pmatrix}. \]

We are now ready to calculate z using the result given above:

\[ z = \begin{pmatrix} t & t\ln t \\ 1 & 1+\ln t \end{pmatrix} \int \begin{pmatrix} t^{-1} + t^{-1}\ln t & -\ln t \\ -t^{-1} & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 2t^{-1} \end{pmatrix} dt = \begin{pmatrix} t & t\ln t \\ 1 & 1+\ln t \end{pmatrix} \int \begin{pmatrix} -2t^{-1}\ln t \\ 2t^{-1} \end{pmatrix} dt \]

\[ = \begin{pmatrix} t & t\ln t \\ 1 & 1+\ln t \end{pmatrix}\begin{pmatrix} -(\ln t)^2 \\ 2\ln t \end{pmatrix} = \begin{pmatrix} t(\ln t)^2 \\ 2\ln t + (\ln t)^2 \end{pmatrix}. \]

The general solution of the given nonhomogeneous system is

\[ x(t) = \begin{pmatrix} t & t\ln t \\ 1 & 1+\ln t \end{pmatrix}\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} + \begin{pmatrix} t(\ln t)^2 \\ 2\ln t + (\ln t)^2 \end{pmatrix}. \]
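As a numerical cross-check of Example 1 (ours, not part of the text), the particular solution z(t) = X(t) ∫₁ᵗ X⁻¹(s)b(s) ds can be approximated by the trapezoid rule and compared with the closed form found above. Taking the lower limit a = 1 matches the antiderivatives chosen above, since both vanish at t = 1:

```python
import numpy as np

def X(t):                       # fundamental matrix from Example 1
    return np.array([[t,   t*np.log(t)],
                     [1.0, 1.0 + np.log(t)]])

def b(t):
    return np.array([0.0, 2.0/t])

def z_numeric(t, n=4000):       # X(t) * integral_1^t X(s)^{-1} b(s) ds
    s = np.linspace(1.0, t, n)
    vals = np.array([np.linalg.solve(X(si), b(si)) for si in s])
    # composite trapezoid rule, applied componentwise
    integral = (0.5*(vals[1:] + vals[:-1]) * np.diff(s)[:, None]).sum(axis=0)
    return X(t) @ integral

def z_exact(t):                 # closed form found above
    return np.array([t*np.log(t)**2, 2*np.log(t) + np.log(t)**2])

print(np.allclose(z_numeric(2.5), z_exact(2.5), atol=1e-5))   # -> True
```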
By fixing a point a on the interval I, the general solution of (N) given by (2) can be written as

\[ x(t) = X(t)C + X(t)\int_a^t X^{-1}(s)b(s)\,ds, \qquad t \in I. \tag{3} \]

This form is useful in solving system (N) subject to an initial condition x(a) = x_0. Substituting t = a in (3) gives

\[ x_0 = X(a)C \quad\text{which implies}\quad C = X^{-1}(a)x_0. \]

Therefore the solution of the initial-value problem

\[ x' = A(t)x + b(t), \qquad x(a) = x_0 \]

is given by

\[ x(t) = X(t)X^{-1}(a)x_0 + X(t)\int_a^t X^{-1}(s)b(s)\,ds. \tag{4} \]
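Formula (4) can be tested against direct numerical integration. The sketch below uses a constant-coefficient system of our own choosing (eigenvalues -1 and -2, so a fundamental matrix is known explicitly) and compares (4), with the integral evaluated by the trapezoid rule, to a hand-rolled RK4 integration of x' = Ax + b(t):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                  # eigenvalues -1 and -2
def X(t):                                     # a fundamental matrix of x' = Ax
    return np.array([[np.exp(-t),   np.exp(-2*t)],
                     [-np.exp(-t), -2*np.exp(-2*t)]])
def b(t):
    return np.array([0.0, np.exp(-t)])

a, x0 = 0.0, np.array([1.0, 0.0])

def x_formula(t, n=4000):                     # equation (4), trapezoid rule
    s = np.linspace(a, t, n)
    vals = np.array([np.linalg.solve(X(si), b(si)) for si in s])
    integral = (0.5*(vals[1:] + vals[:-1]) * np.diff(s)[:, None]).sum(axis=0)
    return X(t) @ (np.linalg.solve(X(a), x0) + integral)

def x_rk4(t, n=4000):                         # direct integration, for comparison
    h = (t - a)/n
    x, s = x0.copy(), a
    rhs = lambda s, x: A @ x + b(s)
    for _ in range(n):
        k1 = rhs(s, x); k2 = rhs(s + h/2, x + h*k1/2)
        k3 = rhs(s + h/2, x + h*k2/2); k4 = rhs(s + h, x + h*k3)
        x, s = x + h*(k1 + 2*k2 + 2*k3 + k4)/6, s + h
    return x

print(np.allclose(x_formula(1.0), x_rk4(1.0), atol=1e-5))   # -> True
```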
Exercises 6.5

Find the general solution of the system x' = A(t)x + b(t) where A and b are given.

1. A(t) = \begin{pmatrix} 4 & 1 \\ 5 & 2 \end{pmatrix}, b(t) = \begin{pmatrix} e^t \\ 2e^t \end{pmatrix}

2. A(t) = \begin{pmatrix} 2 & 1 \\ 3 & 2 \end{pmatrix}, b(t) = \begin{pmatrix} 0 \\ 4t \end{pmatrix}

3. A(t) = \begin{pmatrix} 2 & 2 \\ 3 & 3 \end{pmatrix}, b(t) = \begin{pmatrix} 1 \\ 2t \end{pmatrix}

4. A(t) = \begin{pmatrix} 3 & 2 \\ 4 & 3 \end{pmatrix}, b(t) = \begin{pmatrix} 2\cos t \\ 2\sin t \end{pmatrix}

5. A(t) = \begin{pmatrix} 3 & 1 \\ 2 & 4 \end{pmatrix}, b(t) = \begin{pmatrix} 3t \\ e^t \end{pmatrix}

6. A(t) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, b(t) = \begin{pmatrix} \sec t \\ 0 \end{pmatrix}

7. A(t) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, b(t) = \begin{pmatrix} e^t\cos t \\ e^t\sin t \end{pmatrix}

8. A(t) = \begin{pmatrix} 3t^{-2} & t \\ 0 & t^{-1} \end{pmatrix}, b(t) = \begin{pmatrix} 4t^2 \\ 1 \end{pmatrix}

9. A(t) = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix}, b(t) = \begin{pmatrix} e^t \\ e^{2t} \\ te^{3t} \end{pmatrix}

10. A(t) = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}, b(t) = \begin{pmatrix} 0 \\ e^t \\ e^t \end{pmatrix}

Solve the initial-value problem.

11. x' = \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix} x + \begin{pmatrix} 4e^{2t} \\ 4e^{4t} \end{pmatrix}, x(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}

12. x' = \begin{pmatrix} 3 & 2 \\ 1 & 0 \end{pmatrix} x + \begin{pmatrix} 2e^t \\ 2e^t \end{pmatrix}, x(0) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}
6.6 Direction Fields and Phase Planes

There are many types of differential equations which do not have solutions that can be easily written in terms of elementary functions such as exponentials, sines and cosines, or even as integrals of such functions. Fortunately, when these equations are of first or second order one can still gain a good understanding of the behavior of their solutions using geometric methods. In this section we discuss the basics of phase plane analysis, an extension of the method of slope fields discussed in Section 2.4.

Let us consider the differential equation

\[ y' = f(y), \]

and think about it geometrically. The equality implies that the graph of a solution of this equation in the x-y plane must have slope equal to f(y) at the point (x, y). For instance, for the differential equation

\[ y' = 2y(1 - y) \]

the slope of the solution equals 0 at all points for which y = 1. Indeed, since the solution satisfying the initial condition y(0) = 1 is the constant solution y(x) ≡ 1, this is what we expect. The following figure shows this solution (in red), along with the solution satisfying y(0) = 0.1.
Let us now turn to autonomous differential equations in two variables:

\[ x_1' = f(x_1, x_2) \tag{1} \]
\[ x_2' = g(x_1, x_2). \tag{2} \]
Drawing slope fields for x_1 and x_2 separately will not work, since each slope field depends on both variables. However, note that any solution of this system, (x_1(t), x_2(t)), parametrically defines a curve in the x_1-x_2 plane. Indeed, the vector (x_1'(t_0), x_2'(t_0)) is the tangent vector to this parametric curve at the point (x_1(t_0), x_2(t_0)).

Therefore, we can sketch the solutions of the differential system (1)-(2) by selecting a number of points in the plane. At each point in the collection we draw a vector emanating from the point, so that the vector (f(x_1, x_2), g(x_1, x_2)) emanates from the point (x_1, x_2). The collection of these vectors is called a vector field. In practice we may have to scale the length of the vectors by a constant factor.
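The construction of a vector field can be sketched in a few lines of code (ours, not part of the text): tabulate (f, g) on a grid of base points, which a plotting routine such as matplotlib's quiver could then draw. We use the system of Example 1 below:

```python
import numpy as np

# Vector field for the system of Example 1 below:
#   x1' = x1 + 5*x2,  x2' = 3*x1 + 3*x2
f = lambda x1, x2: x1 + 5*x2
g = lambda x1, x2: 3*x1 + 3*x2

xs = np.linspace(-1.0, 1.0, 21)      # base points spaced 0.1 apart
X1, X2 = np.meshgrid(xs, xs)
U, V = f(X1, X2), g(X1, X2)          # components of the field on the grid

# e.g. the vector attached to the point (0.3, 0.5):
print(f(0.3, 0.5), g(0.3, 0.5))      # approximately (2.8, 2.4)
```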
Example 1: Let us start with the constant coefficient system

\[ x_1' = x_1 + 5x_2 \]
\[ x_2' = 3x_1 + 3x_2 \]

considered in Example 3 of Section 5.3. The figure below shows vectors attached to points spaced 0.1 units apart in the horizontal and vertical directions. For instance, to the point with coordinates x_1 = 0.3 and x_2 = 0.5 we attach the vector with components x_1 + 5x_2 = 0.3 + 5 · 0.5 = 2.8 in the horizontal and 3x_1 + 3x_2 = 3 · 0.3 + 3 · 0.5 = 2.4 in the vertical direction. Similarly, the vector (3.2, 3.6) emanates from the point (0.7, 0.5). The length of all vectors is scaled by an equal factor so that they all fit in the figure.
[Figure: vector field on the square -1 ≤ x_1, x_2 ≤ 1, with two solution curves.]
Also shown are two solutions of the differential system, one with initial condition (x_1(0), x_2(0)) = (0.5, 0.25), and the other with (x_1(0), x_2(0)) = (0.2, 0.1). The arrows point in the direction in which the solutions are traversed. You can also see that the solutions diverge from the origin in the direction of the eigenvector (1, 1) corresponding to the positive eigenvalue.
Let us next consider the following equation

\[ y'' + \gamma y' + y = 0. \]
This equation is known as the linear damped pendulum. If we think of y as the angular displacement from the resting position and y' as the angular velocity of the pendulum, then the solutions of the equation will describe its oscillation around the equilibrium position at y = 0. As we will see shortly, γ measures the amount of damping.

If we let x_1 = y and x_2 = y', then we obtain the following pair of equations:

\[ x_1' = x_2 \tag{3} \]
\[ x_2' = -\gamma x_2 - x_1. \tag{4} \]
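That γ controls the rate of decay can be read off from the eigenvalues of the coefficient matrix of (3)-(4): they are -γ/2 ± i√(1 - γ²/4) for small γ, so the real part, and hence the decay rate, is proportional to γ. A quick check (ours, not from the text):

```python
import numpy as np

# Coefficient matrix of system (3)-(4): x1' = x2, x2' = -x1 - g*x2
results = []
for g in (0.1, 0.2):
    A = np.array([[0.0, 1.0],
                  [-1.0, -g]])
    lam = np.linalg.eigvals(A)     # -g/2 +/- i*sqrt(1 - g^2/4)
    results.append((g, lam))
    print(g, lam)
```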
This is a linear system, and we could solve it using the methods of Sections 6.3 and 6.4. Instead, let us look at the phase plane, and see what happens as we vary γ.

In the figure below you will see the vector field and solutions with initial condition x_1 = x_2 = 0.8 in both cases. In the left figure γ = 0.1, while on the right γ = 0.2. As you could guess, a pendulum that is subject to more damping will oscillate fewer times before reaching the equilibrium at the origin.
[Figure: two phase-plane plots on -1 ≤ x_1, x_2 ≤ 1; left, γ = 0.1; right, γ = 0.2.]
The linear system (3)-(4) only describes the behavior of the pendulum accurately when the displacement from the rest position is small. For larger displacements it is necessary to use the nonlinear equations

\[ x_1' = x_2 \tag{5} \]
\[ x_2' = -\gamma x_2 - \sin(x_1). \tag{6} \]

Although this system may not look much more complicated than the previous one, it is much more difficult to solve. However, a phase plane analysis can easily be performed in this case as well. In the figure below γ = 0.05, and the initial conditions are x_1 = 0, x_2 = 2/3.
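Since (5)-(6) cannot be solved in closed form, the trajectory shown in the figure is most easily produced numerically. The sketch below (ours) integrates the system with a hand-rolled classical Runge-Kutta step from the stated initial condition and checks that, by t = 50, the state has decayed well inside its initial amplitude:

```python
import numpy as np

def pendulum(x, gamma=0.05):
    """Right-hand side of the nonlinear system (5)-(6)."""
    return np.array([x[1], -gamma * x[1] - np.sin(x[0])])

def rk4(x0, t_end, n=4000, gamma=0.05):
    """Classical fourth-order Runge-Kutta integration."""
    h = t_end / n
    x = np.array(x0, dtype=float)
    for _ in range(n):
        k1 = pendulum(x, gamma)
        k2 = pendulum(x + 0.5*h*k1, gamma)
        k3 = pendulum(x + 0.5*h*k2, gamma)
        k4 = pendulum(x + h*k3, gamma)
        x = x + h*(k1 + 2*k2 + 2*k3 + k4)/6
    return x

x_end = rk4([0.0, 2/3], 50.0)
print(np.hypot(*x_end))     # well below the initial amplitude 2/3
```

Intermediate states saved along the way would trace out the spiral seen in the phase-plane figure.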
[Figure: phase-plane plot of the nonlinear pendulum on -3 ≤ x_1, x_2 ≤ 3.]
Exercises 6.6

1. Sketch the phase plane for the following systems:

(a) x' = 1, y' = y

(b) x' = x, y' = y

(c) x' = x^2 - 1, y' = x - y

2. Solve the system for the damped pendulum (3)-(4) when γ = 0. Sketch the vector field and the solutions. What happens to the amplitude of the solutions as the pendulum oscillates in this case?

Solution: In this case the solutions have the form

\[ x_1(t) = C_1 \cos t + C_2 \sin t, \qquad x_2(t) = C_2 \cos t - C_1 \sin t. \]

They oscillate forever with constant amplitude.
[Figure: closed orbits of the undamped pendulum in the phase plane, on -1 ≤ x_1, x_2 ≤ 1.]
3. Solve the equation for the damped linear pendulum (3)-(4). Show that when γ > 0 the solutions oscillate with diminishing amplitude.