1 Solving x' = Ax
We know we could write a first order linear system as a matrix equation. For example,
suppose we have a first order linear homogeneous system with constant coefficients
such as the following:

    x1' = a11 x1 + a12 x2 + ... + a1n xn
    x2' = a21 x1 + a22 x2 + ... + a2n xn
    ...
    xn' = an1 x1 + an2 x2 + ... + ann xn

We can then let A = (aij) and x be the column vector (xi) and rewrite the system as

    x' = Ax
We follow the example of a single first order equation and look for solutions of the
form

    x = v e^{rt}

where v is a constant vector and r is a scalar. So if this x is to be a solution, we must
have x' = Ax, or

    r v e^{rt} = A v e^{rt}
or equivalently (since e^{rt} ≠ 0)

    (A - rI)v = 0
This is exactly the problem of determining the eigenvalues r and associated eigenvec-
tors v of the matrix A.
We know of course that in order to solve a system of n first order linear equations
with constant coefficients in the form x' = Ax we must be able to find n linearly
independent vector solutions to (A - rI)v = 0. So if we manage to find n linearly
independent solutions to our eigenvalue/eigenvector problem above, we will be able
to solve the system.
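As a quick numerical sanity check, the following Python sketch (using a small hypothetical matrix, not one from these notes) compares a centered finite-difference approximation of x'(t) against Ax(t) for a candidate solution x(t) = v e^{rt} built from an eigenpair:

```python
import math

# Hypothetical 2x2 example (an assumption for illustration): A has
# eigenvalue r = 3 with eigenvector v = (1, 1), since A (1,1)^T = (3,3)^T.
A = [[2.0, 1.0], [1.0, 2.0]]
r = 3.0
v = [1.0, 1.0]

def matvec(M, y):
    return [sum(M[i][j] * y[j] for j in range(len(y))) for i in range(len(M))]

def x_of_t(t):
    # candidate solution x(t) = v e^{rt}
    return [vi * math.exp(r * t) for vi in v]

t, h = 0.7, 1e-6
# centered finite-difference approximation of x'(t)
xprime = [(p - q) / (2 * h) for p, q in zip(x_of_t(t + h), x_of_t(t - h))]
Ax = matvec(A, x_of_t(t))
err = max(abs(p - q) for p, q in zip(xprime, Ax))
print(err)  # essentially zero: x' = Ax holds along this solution
```

The same check fails for a vector v that is not an eigenvector, which is one way to see why the eigenvalue condition is forced.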
2 Eigenvalues and Eigenvectors

Example: Find the eigenvalues and eigenvectors of

        [ -1   2   0 ]
    A = [ -8   7   0 ]
        [ -7   4   1 ]

We begin by finding the determinant |A - λI|:

               | -1-λ   2     0   |
    |A - λI| = | -8     7-λ   0   |
               | -7     4     1-λ |
We expand the determinant along the third column to get

    |A - λI| = (1 - λ) | -1-λ   2   |
                       | -8     7-λ |

             = (1 - λ)((-1 - λ)(7 - λ) + 16)
             = (1 - λ)(λ^2 - 6λ + 9)
             = (1 - λ)(λ - 3)^2

Setting |A - λI| = 0, we get the eigenvalues λ = 1 and λ = 3 (a repeated root).
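To double-check the characteristic polynomial, here is a short Python sketch that evaluates det(A - λI) directly by cofactor expansion and compares it with the factored form (1 - λ)(λ - 3)^2 at a few sample values of λ:

```python
# The example matrix from the text.
A = [[-1, 2, 0],
     [-8, 7, 0],
     [-7, 4, 1]]

def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly(lam):
    # evaluate det(A - lam*I)
    M = [[A[r][c] - (lam if r == c else 0.0) for c in range(3)]
         for r in range(3)]
    return det3(M)

for lam in [0.0, 1.0, 2.0, 3.0, 5.0]:
    print(lam, char_poly(lam), (1 - lam) * (lam - 3) ** 2)  # columns agree
```

In particular char_poly vanishes exactly at λ = 1 and λ = 3, and nowhere else.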
For each eigenvalue, we find the associated eigenvectors by solving

    (A - λI)v = 0.

Here, for λ = 1,

               [ -1-1   2     0   ]   [ -2   2   0 ]
    A - (1)I = [ -8     7-1   0   ] = [ -8   6   0 ]
               [ -7     4     1-1 ]   [ -7   4   0 ]
We do this by using row operations on the augmented matrix

    [ -2   2   0 | 0 ]
    [ -8   6   0 | 0 ]
    [ -7   4   0 | 0 ]
We can divide the first row by -2 to get a 1 in the first column. If we then add
8 times row 1 to row 2, and add 7 times row 1 to row 3, we can also eliminate
all other entries in column 1:

    [ 1  -1   0 | 0 ]
    [ 0  -2   0 | 0 ]
    [ 0  -3   0 | 0 ]
We then see that we can multiply row two by -1/2 and use it to eliminate the
remaining entries in column 2, leaving us with

    [ 1   0   0 | 0 ]
    [ 0   1   0 | 0 ]
    [ 0   0   0 | 0 ]
Thus, the solution has x1 = x2 = 0, and x3 is a free variable. So the column vector
v1 = (0, 0, 1)^T is an eigenvector, as is any constant multiple.
Note that because eigenvectors are not unique (any constant multiple is also an eigen-
vector), when we solve for our eigenvectors, we will always end with at least one row
eliminated if we reduce correctly. Each eliminated row will correspond to a free
variable. (This is a good check that we have done the right things.)
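The row reduction above can also be scripted. This minimal Gauss-Jordan sketch in Python (no pivoting refinements) reduces the augmented matrix for λ = 1 and counts the eliminated rows:

```python
# Augmented matrix for (A - 1*I)v = 0 from the text.
M = [[-2.0, 2.0, 0.0, 0.0],
     [-8.0, 6.0, 0.0, 0.0],
     [-7.0, 4.0, 0.0, 0.0]]

rows, cols = 3, 3  # 3 unknowns; the last column is the zero right-hand side
pivot_row = 0
for col in range(cols):
    # find a row at or below pivot_row with a nonzero entry in this column
    pr = next((r for r in range(pivot_row, rows)
               if abs(M[r][col]) > 1e-12), None)
    if pr is None:
        continue  # no pivot here: this column gives a free variable
    M[pivot_row], M[pr] = M[pr], M[pivot_row]
    piv = M[pivot_row][col]
    M[pivot_row] = [x / piv for x in M[pivot_row]]  # scale pivot to 1
    for r in range(rows):
        if r != pivot_row and abs(M[r][col]) > 1e-12:
            factor = M[r][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
    pivot_row += 1

zero_rows = sum(all(abs(x) < 1e-9 for x in row) for row in M)
print(zero_rows)  # 1 eliminated row, so one free variable, as expected
```

The one zero row confirms the single free variable x3 found by hand.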
For the eigenvalue λ = 3, we get the augmented matrix

    [ -1-3   2     0   | 0 ]   [ -4   2   0 | 0 ]
    [ -8     7-3   0   | 0 ] = [ -8   4   0 | 0 ]
    [ -7     4     1-3 | 0 ]   [ -7   4  -2 | 0 ]
which after row operations reduces to the standard form

    [ 1   0  -2 | 0 ]
    [ 0   1  -4 | 0 ]
    [ 0   0   0 | 0 ]
Thus we see that x1 - 2x3 = 0 and x2 - 4x3 = 0, so if we let x3 = 1, then x1 = 2
and x2 = 4. Thus we get an eigenvector v2 = (2, 4, 1)^T. There are no other
eigenvectors for λ = 3 which are not just constant multiples of this one.
It is easy to check that our eigenvalues and eigenvectors are right:

    [ -1   2   0 ] [ 0 ]   [ 0 ]
    [ -8   7   0 ] [ 0 ] = [ 0 ]
    [ -7   4   1 ] [ 1 ]   [ 1 ]

So A(0, 0, 1)^T = 1 · (0, 0, 1)^T, as desired. For λ = 3, we get

    [ -1   2   0 ] [ 2 ]   [  6 ]
    [ -8   7   0 ] [ 4 ] = [ 12 ]
    [ -7   4   1 ] [ 1 ]   [  3 ]

so A(2, 4, 1)^T = 3 · (2, 4, 1)^T, as we claimed.
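These verifications are easy to script as well. The Python sketch below multiplies A by each eigenvector and compares the result with the eigenvalue times that vector:

```python
# The example matrix and its two eigenpairs from the text.
A = [[-1, 2, 0],
     [-8, 7, 0],
     [-7, 4, 1]]

def matvec(M, y):
    return [sum(row[j] * y[j] for j in range(len(y))) for row in M]

for lam, v in [(1, [0, 0, 1]), (3, [2, 4, 1])]:
    Av = matvec(A, v)
    print(lam, Av, [lam * vi for vi in v])  # A v equals lam * v in both cases
```

Running this prints matching pairs (0, 0, 1) and (6, 12, 3), exactly the hand computations above.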
Example: Find the eigenvalues and eigenvectors of

    A = [ -5    6 ]
        [ -9   10 ]

Solution: The eigenvalues are λ = 1, which has eigenvectors that are multiples of
(1, 1)^T, and λ = 4, which has eigenvectors that are multiples of (2, 3)^T.
3 Back to Solving Systems
We started trying to find solutions of the form v e^{rt} to a linear first order system
x' = Ax with n equations and constant coefficients. We determined that such solu-
tions must satisfy (A - rI)v = 0; i.e., r must be an eigenvalue and v an associated
eigenvector.
Example: Find the general solution of the system x' = Ax, where

    A = [ 3  -2 ]
        [ 2  -2 ]
The eigenvalues are r = 2, with eigenvector (2, 1)^T, and r = -1, with eigenvector
(1, 2)^T, giving the solutions (2, 1)^T e^{2t} and (1, 2)^T e^{-t}. We compute the
Wronskian:

    | 2e^{2t}    e^{-t}  |
    | e^{2t}     2e^{-t} | = 4e^t - e^t = 3e^t ≠ 0

So we do have a fundamental set of solutions, and all solutions are of the form

    x = c1 (2, 1)^T e^{2t} + c2 (1, 2)^T e^{-t}
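The Wronskian computation can be checked numerically too. This sketch evaluates the determinant of the matrix whose columns are the two solutions at several values of t and compares it with 3e^t:

```python
import math

def wronskian(t):
    # columns are the two solutions (2,1)^T e^{2t} and (1,2)^T e^{-t}
    x1 = [2 * math.exp(2 * t), math.exp(2 * t)]
    x2 = [math.exp(-t), 2 * math.exp(-t)]
    return x1[0] * x2[1] - x1[1] * x2[0]

for t in [-1.0, 0.0, 0.5, 2.0]:
    print(t, wronskian(t), 3 * math.exp(t))  # agrees with 3 e^t, never zero
```

Since 3e^t is never zero, the two solutions are linearly independent at every t.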
4 Notes on Fundamental Sets
We have an approach (using eigenvalues and eigenvectors) for solving x' = Ax, but
under what conditions will we get a fundamental set of solutions?
It is useful to note that if we have distinct eigenvalues, the corresponding eigenvectors
will be linearly independent. Thus, if we have a system of n equations, and our matrix
gives us n distinct eigenvalues λ1, . . . , λn with eigenvectors v1, . . . , vn, then the general
solution will be

    c1 v1 e^{λ1 t} + c2 v2 e^{λ2 t} + · · · + cn vn e^{λn t}
We saw this case in the example above.
It is also possible of course for us to have a repeated eigenvalue, but still have a full
set of linearly independent eigenvectors.
Example:
Find the general solution to the system

    x1' = x1
    x2' = x2

Here the coefficient matrix is the identity, A = I, so |A - λI| = (1 - λ)^2 and
λ = 1 is a repeated eigenvalue. Since A - (1)I is the zero matrix, every row is
eliminated when we reduce, and we see that we have two free variables. So we can
have both (0, 1)^T and (1, 0)^T as
eigenvectors. Since we have two linearly independent eigenvectors, we have a general
solution

    x = c1 (0, 1)^T e^t + c2 (1, 0)^T e^t
Note that in this example, we could have solved the original system quite easily by
solving each equation independently of the other. For example, the general solution
to x1' = x1 is

    x1 = c1 e^t
and the general solution to x2' = x2 is

    x2 = c2 e^t
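Since the system decouples, a quick Python sketch can confirm that x(t) = (c1 e^t, c2 e^t) satisfies x' = x componentwise, for some sample constants (the values of c1 and c2 below are arbitrary choices for illustration):

```python
import math

# arbitrary sample constants for the decoupled solution x = (c1 e^t, c2 e^t)
c1, c2 = 2.5, -1.0

def x(t):
    return [c1 * math.exp(t), c2 * math.exp(t)]

t, h = 0.3, 1e-6
# centered finite difference for x'(t); since A = I here, x' should equal x
xprime = [(p - q) / (2 * h) for p, q in zip(x(t + h), x(t - h))]
err = max(abs(p - q) for p, q in zip(xprime, x(t)))
print(err)  # essentially zero: each component satisfies xi' = xi
```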
It may be that we have a repeated eigenvalue and do not get a full set of linearly
independent eigenvectors, and thus cannot form a fundamental set of solutions this
way. It is also possible to have complex eigenvalues (and even complex eigenvectors).
We will deal with these cases over the next few days.