
Linear Algebra (part 3): Eigenvalues and Eigenvectors

(by Evan Dummit, 2012, v. 1.00)

Contents

1 Eigenvalues and Eigenvectors
  1.1 The Basic Setup
  1.2 Some Slightly More Advanced Results About Eigenvalues
  1.3 Theory of Similarity and Diagonalization
  1.4 How To Diagonalize A Matrix (if possible)

1 Eigenvalues and Eigenvectors
• We have discussed (perhaps excessively) the correspondence between solving a system of homogeneous linear equations and solving the matrix equation $Ax = 0$, for $A$ an $n \times n$ matrix and $x$ an $n \times 1$ column vector.

• For reasons that will become more apparent soon, a more general version of this question which is also of interest is to solve the matrix equation $Ax = \lambda x$, where $\lambda$ is a scalar. (The original homogeneous system corresponds to $\lambda = 0$.)

• In the language of linear transformations, this says the following: given a linear transformation $T : V \to V$ from a vector space $V$ to itself, on what vectors $x$ does $T$ act as multiplication by a constant $\lambda$?

1.1 The Basic Setup
• Definition: For $A$ an $n \times n$ matrix, a nonzero vector $x$ with $Ax = \lambda x$ is called an eigenvector of $A$, and the corresponding scalar $\lambda$ is called an eigenvalue of $A$.
  ◦ Important note: We do not consider the zero vector $0$ an eigenvector.

• For a fixed value of $\lambda$, the set $S$ whose elements are the eigenvectors $x$ with $Ax = \lambda x$, together with the zero vector, is a subspace of $V$. (This set $S$ is called the eigenspace associated to the eigenvalue $\lambda$.)
  ◦ [S1]: $S$ contains the zero vector.
  ◦ [S2]: $S$ is closed under addition, because if $Ax_1 = \lambda x_1$ and $Ax_2 = \lambda x_2$, then $A(x_1 + x_2) = \lambda(x_1 + x_2)$.
  ◦ [S3]: $S$ is closed under scalar multiplication, because for any scalar $\beta$, $A(\beta x) = \beta(Ax) = \beta(\lambda x) = \lambda(\beta x)$.

• It turns out that it is fairly straightforward to find all of the eigenvalues: because $\lambda x = (\lambda I)x$, where $I$ is the $n \times n$ identity matrix, we can rewrite the eigenvalue equation $Ax = \lambda x = (\lambda I)x$ as $(\lambda I - A)x = 0$. But we know precisely when there will be a nonzero vector $x$ with $(\lambda I - A)x = 0$: it is when the matrix $\lambda I - A$ is not invertible, or, in other words, when $\det(\lambda I - A) = 0$.

• Definition: When we expand the determinant $\det(tI - A)$ in the variable $t$, we will obtain a polynomial of degree $n$. This polynomial is called the characteristic polynomial $p(t)$ of the matrix $A$, and its roots are precisely the eigenvalues of $A$.

  ◦ Notation 1: Some authors instead define the characteristic polynomial as the determinant of the matrix $A - tI$ rather than $tI - A$. I define it this way because then the coefficient of $t^n$ will always be 1, rather than $(-1)^n$.

  ◦ Notation 2: It is often customary, when referring to the eigenvalues of a matrix, to include an eigenvalue the appropriate number of extra times if it is a multiple root of the characteristic polynomial. Thus, for the characteristic polynomial $t^2(t-1)^3$, we could say the eigenvalues are $\lambda = 0, 0, 1, 1, 1$ if we wanted to emphasize that the eigenvalues occur more than once.

  ◦ Remark: The characteristic polynomial may have non-real numbers as roots. Non-real eigenvalues are absolutely acceptable; the only wrinkle is that the eigenvectors for these eigenvalues will also necessarily contain non-real entries. (If $A$ has real number entries, then any non-real roots of the characteristic polynomial will come in complex conjugate pairs. The eigenvectors for one root will be the complex conjugates of the eigenvectors for the other root.)

• Proposition: The eigenvalues of an upper-triangular matrix are the diagonal entries.
  ◦ This statement follows from the observation that the determinant of an upper-triangular matrix is the product of the diagonal entries, combined with the observation that if $A$ is upper-triangular, then $tI - A$ is also upper-triangular. (If diagonal entries are repeated, the eigenvalues are repeated the same number of times.)
  ◦ Example: The eigenvalues of $\begin{pmatrix} 1 & i & 3 \\ 0 & 3 & 8 \\ 0 & 0 & \pi \end{pmatrix}$ are 1, 3, and $\pi$, and the eigenvalues of $\begin{pmatrix} 2 & 0 & 1 \\ 0 & 3 & 2 \\ 0 & 0 & 2 \end{pmatrix}$ are 2, 2, and 3.

• To find all the eigenvalues (and eigenvectors) of a matrix $A$, follow these steps (a short computational sketch follows this list):
  ◦ Step 1: Write down the matrix $tI - A$ and compute its determinant (using any method) to obtain the characteristic polynomial $p(t)$.
  ◦ Step 2: Set $p(t)$ equal to zero and solve. The roots of $p(t)$ are precisely the eigenvalues $\lambda$ of $A$.
  ◦ Step 3: For each eigenvalue $\lambda$, solve for all vectors $x$ satisfying $Ax = \lambda x$. (Either do this directly, or by solving the homogeneous system $(\lambda I - A)x = 0$ via row-reduction.) The resulting solution vectors $x$ form the eigenspace associated to $\lambda$, and the nonzero vectors in this space are the eigenvectors corresponding to $\lambda$.
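  ◦ Aside: the three steps above can also be carried out numerically. The sketch below is my own illustration, not part of the original handout; the helper name eigen_data is made up. It uses NumPy's np.poly (characteristic polynomial of a matrix), np.roots, and an SVD-based null-space computation.

    import numpy as np

    def eigen_data(A, tol=1e-10):
        """Steps 1-3 above, numerically: characteristic polynomial, eigenvalues,
        and a basis (as columns) for each eigenspace."""
        A = np.asarray(A)
        n = A.shape[0]
        coeffs = np.poly(A)              # Step 1: coefficients of det(tI - A)
        eigenvalues = np.roots(coeffs)   # Step 2: roots of p(t)
        eigenspaces = {}
        for lam in eigenvalues:          # Step 3: null space of (lam*I - A)
            _, s, Vh = np.linalg.svd(lam * np.eye(n) - A)
            eigenspaces[lam] = Vh[s < tol].conj().T
        return coeffs, eigenvalues, eigenspaces

    # Example: the 2x2 matrix [[2, 2], [3, 1]] worked out later in this section.
    coeffs, vals, spaces = eigen_data([[2, 2], [3, 1]])
    print(coeffs)   # coefficients of t^2 - 3t - 4
    print(vals)     # the roots 4 and -1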

• Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t-1 & 0 \\ 0 & t-1 \end{pmatrix}$, so $p(t) = \det(tI - A) = (t-1)^2$.
  ◦ Step 2: The characteristic equation $(t-1)^2 = 0$ has a double root $t = 1$, so the eigenvalues are $\lambda = 1, 1$. (Alternatively, we could have used the fact that the matrix is upper-triangular.)
  ◦ Step 3: We want to find the vectors with $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 1 \cdot \begin{pmatrix} a \\ b \end{pmatrix}$. Clearly, all vectors $\begin{pmatrix} a \\ b \end{pmatrix}$ have this property. Therefore, a basis for the eigenspace with $\lambda = 1$ is given by $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

• Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t-1 & -1 \\ 0 & t-1 \end{pmatrix}$, so $p(t) = \det(tI - A) = (t-1)^2$.
  ◦ Step 2: The characteristic equation $(t-1)^2 = 0$ has a double root $t = 1$, so the eigenvalues are $\lambda = 1, 1$. (Alternatively, we could have used the fact that the matrix is upper-triangular.)
  ◦ Step 3: We want to find the vectors with $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 1 \cdot \begin{pmatrix} a \\ b \end{pmatrix}$, which means $\begin{pmatrix} a+b \\ b \end{pmatrix} = \begin{pmatrix} a \\ b \end{pmatrix}$. This requires that $a$ can be arbitrary and $b = 0$. So the vectors we want are those of the form $\begin{pmatrix} a \\ 0 \end{pmatrix}$, and so a basis for the eigenspace with $\lambda = 1$ is given by $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

  ◦ Remark: Note that this matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ and the identity matrix $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ have the same characteristic polynomial and eigenvalues, but do not have the same eigenvectors. In fact, for $\lambda = 1$, the eigenspace for $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is 1-dimensional, while the eigenspace for $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is 2-dimensional.
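  ◦ Aside: this dimension count is easy to check numerically. The sketch below is my own illustration (not from the handout); eigenspace_dim is a made-up helper name. It uses the fact that the eigenspace for $\lambda$ is the null space of $\lambda I - A$, whose dimension is $n$ minus the rank.

    import numpy as np

    def eigenspace_dim(A, lam):
        """Dimension of the eigenspace of A for the eigenvalue lam."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        return n - np.linalg.matrix_rank(lam * np.eye(n) - A)

    print(eigenspace_dim([[1, 1], [0, 1]], 1))   # 1
    print(eigenspace_dim([[1, 0], [0, 1]], 1))   # 2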

• Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}$.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t-2 & -2 \\ -3 & t-1 \end{pmatrix}$, so $p(t) = \det(tI - A) = (t-2)(t-1) - (-2)(-3) = t^2 - 3t - 4$.
  ◦ Step 2: Since $p(t) = t^2 - 3t - 4 = (t-4)(t+1)$, the eigenvalues are $\lambda = -1, 4$.
  ◦ Step 3: For $\lambda = -1$ we want $\begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = -1 \cdot \begin{pmatrix} a \\ b \end{pmatrix}$, so we need $\begin{pmatrix} 2a+2b \\ 3a+b \end{pmatrix} = \begin{pmatrix} -a \\ -b \end{pmatrix}$, which reduces to $a = -\frac{2}{3}b$. So the vectors we want are those of the form $\begin{pmatrix} -\frac{2}{3}b \\ b \end{pmatrix}$, so a basis is given by $\begin{pmatrix} -2 \\ 3 \end{pmatrix}$.
  ◦ For $\lambda = 4$ we want $\begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 4\begin{pmatrix} a \\ b \end{pmatrix}$, so we need $\begin{pmatrix} 2a+2b \\ 3a+b \end{pmatrix} = \begin{pmatrix} 4a \\ 4b \end{pmatrix}$, which reduces to $a = b$. So the vectors we want are those of the form $\begin{pmatrix} b \\ b \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

• Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}$.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t & 0 & 0 \\ -1 & t & 1 \\ 0 & -1 & t \end{pmatrix}$, so $p(t) = \det(tI - A) = t(t^2 + 1)$.
  ◦ Step 2: Since $p(t) = t(t^2 + 1)$, the eigenvalues are $\lambda = 0, i, -i$.
  ◦ Step 3: For $\lambda = 0$ we want $\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 0 \cdot \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $\begin{pmatrix} 0 \\ a-c \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$, so $a = c$ and $b = 0$. So the vectors we want are those of the form $\begin{pmatrix} a \\ 0 \\ a \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}$.
  ◦ For $\lambda = i$ we want $\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = i\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $\begin{pmatrix} 0 \\ a-c \\ b \end{pmatrix} = \begin{pmatrix} ia \\ ib \\ ic \end{pmatrix}$, so $a = 0$ and $b = ic$. So the vectors we want are those of the form $\begin{pmatrix} 0 \\ ic \\ c \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix}$.
  ◦ For $\lambda = -i$ we want $\begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = -i\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $\begin{pmatrix} 0 \\ a-c \\ b \end{pmatrix} = \begin{pmatrix} -ia \\ -ib \\ -ic \end{pmatrix}$, so $a = 0$ and $b = -ic$. So the vectors we want are those of the form $\begin{pmatrix} 0 \\ -ic \\ c \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix}$.
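  ◦ Aside: non-real eigenvalues of a real matrix are handled routinely by numerical libraries. The check below is my own sketch (not the handout's code) for the example just computed.

    import numpy as np

    A = np.array([[0, 0, 0],
                  [1, 0, -1],
                  [0, 1, 0]], dtype=float)

    vals, vecs = np.linalg.eig(A)
    print(vals)   # 0, +1j, -1j (in some order)

    # Each column of vecs is an eigenvector (normalized, so it may be a scalar
    # multiple of the basis vectors found by hand above).
    for lam, v in zip(vals, vecs.T):
        print(np.allclose(A @ v, lam * v))   # True for every pair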

• Example: Find all eigenvalues, and a basis for each eigenspace, for the matrix $A = \begin{pmatrix} 1 & 0 & -1 \\ 1 & 1 & 3 \\ 1 & 0 & 3 \end{pmatrix}$.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t-1 & 0 & 1 \\ -1 & t-1 & -3 \\ -1 & 0 & t-3 \end{pmatrix}$, so expanding the determinant gives $p(t) = \det(tI - A) = (t-1)^2(t-3) + (t-1)$.
  ◦ Step 2: Since $p(t) = (t-1)\left[(t-1)(t-3) + 1\right] = (t-1)(t-2)^2$, the eigenvalues are $\lambda = 1, 2, 2$.
  ◦ Step 3: For $\lambda = 1$ we want $\begin{pmatrix} 1 & 0 & -1 \\ 1 & 1 & 3 \\ 1 & 0 & 3 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 1 \cdot \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $\begin{pmatrix} a-c \\ a+b+3c \\ a+3c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $c = 0$ and $a = 0$. So the vectors we want are those of the form $\begin{pmatrix} 0 \\ b \\ 0 \end{pmatrix}$, so a basis is given by $\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$.
  ◦ For $\lambda = 2$ we want $\begin{pmatrix} 1 & 0 & -1 \\ 1 & 1 & 3 \\ 1 & 0 & 3 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 2\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so we need $\begin{pmatrix} a-c \\ a+b+3c \\ a+3c \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \\ 2c \end{pmatrix}$, so $a = -c$ and $b = 2c$. So the vectors we want are those of the form $\begin{pmatrix} -c \\ 2c \\ c \end{pmatrix}$, so a basis is given by $\begin{pmatrix} -1 \\ 2 \\ 1 \end{pmatrix}$.

1.2 Some Slightly More Advanced Results About Eigenvalues
• Theorem: If $\lambda$ is an eigenvalue of the matrix $A$ which appears exactly $k$ times as a root of the characteristic polynomial, then the dimension of the eigenspace corresponding to $\lambda$ is at least 1 and at most $k$.
  ◦ Remark: The number of times that $\lambda$ appears as a root of the characteristic polynomial is called the algebraic multiplicity of $\lambda$, and the dimension of the eigenspace corresponding to $\lambda$ is called the geometric multiplicity of $\lambda$. So what the theorem says is that the geometric multiplicity is at most the algebraic multiplicity.

  ◦ Example: If the characteristic polynomial is $(t-1)^3(t-3)^2$, then the eigenspace for $\lambda = 1$ is at most 3-dimensional, and the eigenspace for $\lambda = 3$ is at most 2-dimensional.

  ◦ Proof: The statement that the eigenspace has dimension at least 1 is immediate, because (by assumption) $\lambda$ is a root of the characteristic polynomial and therefore has at least one nonzero eigenvector associated to it.
  ◦ For the statement that the dimension is at most $k$, the idea is to look at the homogeneous system $(\lambda I - A)x = 0$. If $\lambda$ appears $k$ times as a root of the characteristic polynomial, then when we put the matrix $\lambda I - A$ into its reduced row-echelon form $B$, then $B$ must have at most $k$ rows of all zeroes. Otherwise, the matrix $B$ (and hence $\lambda I - A$ too, although this requires a check) would have 0 as an eigenvalue more than $k$ times, because $B$ is in echelon form and therefore upper-triangular. But the number of rows of all zeroes in a square matrix is the same as the number of nonpivotal columns, which is the number of free variables, which is the dimension of the solution space.
  ◦ So, putting all the statements together, we see that the dimension of the eigenspace is at most $k$.

• Theorem: If $v_1, v_2, \ldots, v_n$ are eigenvectors of $A$ associated to distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then $v_1, v_2, \ldots, v_n$ are linearly independent.
  ◦ Proof: Suppose we had a nontrivial dependence relation between $v_1, \ldots, v_n$, say $a_1 v_1 + \cdots + a_n v_n = 0$. (Note that at least two coefficients have to be nonzero, because none of $v_1, \ldots, v_n$ is the zero vector.)
  ◦ Multiply both sides by the matrix $A$: this gives $A(a_1 v_1 + \cdots + a_n v_n) = A \cdot 0 = 0$. Now since $v_1, \ldots, v_n$ are eigenvectors, this says $a_1(\lambda_1 v_1) + \cdots + a_n(\lambda_n v_n) = 0$.
  ◦ But now if we scale the original equation by $\lambda_1$ and subtract (to eliminate $v_1$), we obtain $a_2(\lambda_2 - \lambda_1)v_2 + a_3(\lambda_3 - \lambda_1)v_3 + \cdots + a_n(\lambda_n - \lambda_1)v_n = 0$. Since by assumption all of the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ were different, this dependence is still nontrivial, since each $\lambda_j - \lambda_1$ is nonzero and at least one of $a_2, \ldots, a_n$ is nonzero.
  ◦ But now we can repeat the process to eliminate each of $v_2, v_3, \ldots, v_{n-1}$ in turn. Eventually we are left with the equation $b \cdot v_n = 0$ for some nonzero $b$. But this is impossible, because it would say that $v_n = 0$, contradicting our definition saying that the zero vector is not an eigenvector. So there cannot be a nontrivial dependence relation, meaning that $v_1, \ldots, v_n$ are linearly independent.

• Corollary: If $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, and $v_1, v_2, \ldots, v_n$ are (any) eigenvectors associated to those eigenvalues, then $v_1, v_2, \ldots, v_n$ are a basis for $\mathbb{R}^n$.
  ◦ This result follows from the previous theorem: it guarantees that $v_1, v_2, \ldots, v_n$ are linearly independent, so since they are $n$ vectors in the $n$-dimensional vector space $\mathbb{R}^n$, they are a basis.

• Theorem: The product of the eigenvalues of $A$ is the determinant of $A$.
  ◦ Proof: If we expand out the product $p(t) = (t - \lambda_1)(t - \lambda_2)\cdots(t - \lambda_n)$, we see that the constant term is equal to $(-1)^n \lambda_1 \lambda_2 \cdots \lambda_n$. But the constant term is also just $p(0)$, and since $p(t) = \det(tI - A)$ we have $p(0) = \det(-A) = (-1)^n \det(A)$. Thus, setting the two expressions equal shows that the product of the eigenvalues equals the determinant of $A$.

• Theorem: The sum of the eigenvalues of $A$ equals the trace of $A$.
  ◦ Note: The trace of a matrix is defined to be the sum of its diagonal entries.
  ◦ Proof: If we expand out the product $p(t) = (t - \lambda_1)(t - \lambda_2)\cdots(t - \lambda_n)$, we see that the coefficient of $t^{n-1}$ is equal to $-(\lambda_1 + \cdots + \lambda_n)$. If we expand out the determinant $\det(tI - A)$ to find the coefficient of $t^{n-1}$, we can show (with a little bit of effort) that the coefficient is the negative of the sum of the diagonal entries of $A$. Therefore, setting the two expressions equal shows that the sum of the eigenvalues equals the trace of $A$.
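  ◦ Aside: both identities are easy to spot-check numerically; this sketch (my own illustration, not from the handout) verifies them for a random matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    vals = np.linalg.eigvals(A)

    # Product of eigenvalues = determinant, sum of eigenvalues = trace.
    print(np.isclose(np.prod(vals), np.linalg.det(A)))   # True
    print(np.isclose(np.sum(vals), np.trace(A)))         # True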

1.3 Theory of Similarity and Diagonalization
• Definition: We say two $n \times n$ matrices $A$ and $B$ are similar (or conjugate) if there exists an invertible $n \times n$ matrix $P$ such that $B = P^{-1}AP$. (We refer to $P^{-1}AP$ as the conjugation of $A$ by $P$.)

  ◦ Example: The matrices $A = \begin{pmatrix} 3 & 1 \\ 2 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix}$ are similar: with $P = \begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix}$, so that $P^{-1} = \begin{pmatrix} -1 & 2 \\ 1 & -1 \end{pmatrix}$, we can verify that $P^{-1}AP = \begin{pmatrix} -1 & 2 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 3 & 1 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix} = B$.

  ◦ Remark: The matrix $Q = \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix}$ also has $B = Q^{-1}AQ$. In general, if two matrices $A$ and $B$ are similar, then there can be many different matrices $Q$ with $Q^{-1}AQ = B$.
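  ◦ Aside: the claims in this example are quick to confirm numerically. The sketch below is my own check, using the matrices as reconstructed above (not code from the handout).

    import numpy as np

    A = np.array([[3.0, 1.0], [2.0, 1.0]])
    B = np.array([[2.0, 3.0], [1.0, 2.0]])
    P = np.array([[1.0, 2.0], [1.0, 1.0]])
    Q = np.array([[1.0, 1.0], [0.0, 2.0]])

    print(np.allclose(np.linalg.inv(P) @ A @ P, B))   # True: B = P^{-1} A P
    print(np.allclose(np.linalg.inv(Q) @ A @ Q, B))   # True: same B via a different Q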

• Similar matrices have quite a few useful algebraic properties (which justify the name "similar"). If $B = P^{-1}AP$ and $D = P^{-1}CP$, then we have the following:
  ◦ The sum of the conjugates is the conjugate of the sum: $B + D = P^{-1}AP + P^{-1}CP = P^{-1}(A + C)P$.
  ◦ The product of the conjugates is the conjugate of the product: $BD = P^{-1}AP \cdot P^{-1}CP = P^{-1}(AC)P$.
  ◦ The inverse of the conjugate is the conjugate of the inverse: $B^{-1}$ exists if and only if $A^{-1}$ exists, and $B^{-1} = P^{-1}A^{-1}P$.
  ◦ The determinant of the conjugate is equal to the original determinant: $\det(B) = \det(P^{-1}AP) = \det(P^{-1})\det(A)\det(P) = \det(A)\det(P^{-1}P) = \det(A)$.
  ◦ The conjugate has the same characteristic polynomial as the original matrix: $\det(tI - B) = \det(P^{-1}(tI)P - P^{-1}AP) = \det(P^{-1}(tI - A)P) = \det(tI - A)$.
    ∗ In particular, a matrix and its conjugate have the same eigenvalues (with the same multiplicities). Also, by using the fact that the trace is equal both to the sum of the diagonal elements and to a coefficient in the characteristic polynomial, we see that a matrix and its conjugate have the same trace.
  ◦ If $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $P^{-1}x$ is an eigenvector of $B$ with eigenvalue $\lambda$: if $Ax = \lambda x$ then $B(P^{-1}x) = P^{-1}A(PP^{-1})x = P^{-1}Ax = P^{-1}(\lambda x) = \lambda(P^{-1}x)$.
    ∗ This is also true in reverse: if $y$ is an eigenvector of $B$, then $Py$ is an eigenvector of $A$ (with the same eigenvalue). In particular, the eigenspaces for $B$ have the same dimensions as the eigenspaces for $A$.

• One question we might have about similarity is: given a matrix $A$, what is the simplest matrix that $A$ is similar to?
  ◦ As observed above, any matrix similar to $A$ has the same eigenvalues as $A$. So, if the eigenvalues are $\lambda_1, \ldots, \lambda_n$, the simplest form we could plausibly hope for would be a diagonal matrix $\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}$ whose diagonal elements are the eigenvalues of $A$.

• Definition: We say that a matrix $A$ is diagonalizable if it is similar to a diagonal matrix $D$; that is, if there exists an invertible matrix $P$ with $D = P^{-1}AP$.

  ◦ Example: The matrix $A = \begin{pmatrix} -2 & 3 \\ -6 & 7 \end{pmatrix}$ is diagonalizable. We can check that for $P = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}$ and $P^{-1} = \begin{pmatrix} -1 & 1 \\ 2 & -1 \end{pmatrix}$, we have $P^{-1}AP = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix} = D$.
• If we know that $A$ is diagonalizable and have $D = P^{-1}AP$, then it is very easy to compute any power of $A$:
  ◦ Since $D$ is diagonal, $D^k$ is the diagonal matrix whose diagonal entries are the $k$th powers of the diagonal entries of $D$.
  ◦ Then $D^k = (P^{-1}AP)^k = P^{-1}(A^k)P$, so $A^k = PD^kP^{-1}$.
  ◦ Example: With $A = \begin{pmatrix} -2 & 3 \\ -6 & 7 \end{pmatrix}$ as above, we have $D^k = \begin{pmatrix} 4^k & 0 \\ 0 & 1 \end{pmatrix}$, so that $A^k = \begin{pmatrix} 1 & 1 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 4^k & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} -1 & 1 \\ 2 & -1 \end{pmatrix} = \begin{pmatrix} 2 - 4^k & -1 + 4^k \\ 2 - 2 \cdot 4^k & -1 + 2 \cdot 4^k \end{pmatrix}$.
  ◦ Observation: This formula also makes sense for values of $k$ which are not positive integers. For example, if $k = -1$ we get the matrix $\begin{pmatrix} 7/4 & -3/4 \\ 3/2 & -1/2 \end{pmatrix}$, which is actually the inverse matrix $A^{-1}$. And if we set $k = \frac{1}{2}$ we get the matrix $B = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix}$, whose square satisfies $B^2 = \begin{pmatrix} -2 & 3 \\ -6 & 7 \end{pmatrix} = A$.
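  ◦ Aside: the formula $A^k = PD^kP^{-1}$ translates directly into code. The sketch below is my own illustration, using the example's matrices as reconstructed above; matrix_power is a made-up helper name.

    import numpy as np

    P = np.array([[1.0, 1.0], [2.0, 1.0]])
    D = np.diag([4.0, 1.0])
    A = P @ D @ np.linalg.inv(P)        # the matrix [[-2, 3], [-6, 7]] above

    def matrix_power(k):
        """A^k for any real exponent k, via D^k = diag(4^k, 1^k)."""
        return P @ np.diag([4.0 ** k, 1.0 ** k]) @ np.linalg.inv(P)

    print(np.allclose(matrix_power(3), A @ A @ A))           # True
    print(np.allclose(matrix_power(-1), np.linalg.inv(A)))   # True: A^{-1}
    B = matrix_power(0.5)
    print(np.allclose(B @ B, A))                             # True: a square root of A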

• Theorem: An $n \times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. In particular, every matrix whose eigenvalues are distinct is diagonalizable.
  ◦ Proof: If $A$ has $n$ linearly independent eigenvectors $v_1, \ldots, v_n$ with respective eigenvalues $\lambda_1, \ldots, \lambda_n$, then consider the matrix $P = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}$ whose columns are the eigenvectors of $A$.
  ◦ Because $v_1, \ldots, v_n$ are eigenvectors, we have $AP = \begin{pmatrix} | & & | \\ Av_1 & \cdots & Av_n \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ \lambda_1 v_1 & \cdots & \lambda_n v_n \\ | & & | \end{pmatrix}$. But we also have $\begin{pmatrix} | & & | \\ \lambda_1 v_1 & \cdots & \lambda_n v_n \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} = PD$.
  ◦ Therefore, we can write $AP = PD$. Now since the eigenvectors are linearly independent, $P$ is invertible, and $D = P^{-1}AP$, as desired.
  ◦ For the other direction, if $D = P^{-1}AP$, then (as above) we can rewrite this to say $AP = PD$. If $P = \begin{pmatrix} | & & | \\ v_1 & \cdots & v_n \\ | & & | \end{pmatrix}$, then $AP = PD$ says $\begin{pmatrix} | & & | \\ Av_1 & \cdots & Av_n \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ \lambda_1 v_1 & \cdots & \lambda_n v_n \\ | & & | \end{pmatrix}$, which (by comparing columns) says that $Av_1 = \lambda_1 v_1, \ldots, Av_n = \lambda_n v_n$. Thus the columns $v_1, \ldots, v_n$ of $P$ are eigenvectors, and (because $P$ is invertible) they are linearly independent.
  ◦ Finally, the last statement in the theorem follows because (as shown earlier) a matrix with $n$ distinct eigenvalues has $n$ linearly independent eigenvectors.

• Advanced Remark: As the theorem demonstrates, if we are trying to diagonalize a matrix, we can run into trouble if the matrix has repeated eigenvalues. However, we might still like to know what the simplest form a non-diagonalizable matrix is similar to.
  ◦ The answer is given by what is called the Jordan Canonical Form (of a matrix): every matrix is similar to a matrix of the form $\begin{pmatrix} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_n \end{pmatrix}$, where each of $J_1, \ldots, J_n$ is a square Jordan block matrix of the form $J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}$, with $\lambda$ on the diagonal and 1s directly above the diagonal (where blank entries are zeroes).
    ∗ Example: The non-diagonalizable matrix $\begin{pmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}$ is in Jordan Canonical Form, with $J_1 = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$ and $J_2 = \begin{pmatrix} 3 \end{pmatrix}$.
  ◦ The existence and uniqueness of the Jordan Canonical Form can be proven using a careful analysis of generalized eigenvectors: vectors $x$ satisfying $(\lambda I - A)^k x = 0$ for some positive integer $k$. (Regular eigenvectors would correspond to $k = 1$.)
  ◦ Roughly speaking, the idea is to use certain carefully-chosen generalized eigenvectors to fill in for the missing eigenvectors; doing this causes the appearance of the extra 1s above the diagonal in the Jordan blocks.

• Theorem (Cayley-Hamilton): If $p(x)$ is the characteristic polynomial of a matrix $A$, then $p(A)$ is the zero matrix (where in applying a polynomial to a matrix, we replace the constant term with that constant times the identity matrix).

  ◦ Example: For the matrix $A = \begin{pmatrix} 2 & 2 \\ 3 & 1 \end{pmatrix}$, we have $\det(tI - A) = \begin{vmatrix} t-2 & -2 \\ -3 & t-1 \end{vmatrix} = (t-1)(t-2) - 6 = t^2 - 3t - 4$. We can compute $A^2 = \begin{pmatrix} 10 & 6 \\ 9 & 7 \end{pmatrix}$, and then indeed we have $A^2 - 3A - 4I = \begin{pmatrix} 10 & 6 \\ 9 & 7 \end{pmatrix} - \begin{pmatrix} 6 & 6 \\ 9 & 3 \end{pmatrix} - \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$.
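  ◦ Aside: the Cayley-Hamilton identity is simple to verify numerically; the sketch below is my own check of the example above (not the handout's code).

    import numpy as np

    A = np.array([[2, 2],
                  [3, 1]])
    I = np.eye(2)

    # p(A) = A^2 - 3A - 4I should be the zero matrix.
    print(np.allclose(A @ A - 3 * A - 4 * I, 0))   # True

    # The same check using the characteristic polynomial coefficients from np.poly.
    coeffs = np.poly(A)                             # coefficients of t^2 - 3t - 4
    n = A.shape[0]
    pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
    print(np.allclose(pA, 0))                       # True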

  ◦ Proof (if $A$ is diagonalizable): If $A$ is diagonalizable, then let $D = P^{-1}AP$ with $D$ diagonal, and let $p(x)$ be the characteristic polynomial of $A$.
  ◦ The diagonal entries of $D$ are the eigenvalues $\lambda_1, \ldots, \lambda_n$ of $A$, hence are roots of the characteristic polynomial of $A$. So $p(\lambda_1) = \cdots = p(\lambda_n) = 0$.
  ◦ Then, because raising $D$ to a power just raises all of its diagonal entries to that power, we can see that $p(D) = p\begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} = \begin{pmatrix} p(\lambda_1) & & \\ & \ddots & \\ & & p(\lambda_n) \end{pmatrix} = \begin{pmatrix} 0 & & \\ & \ddots & \\ & & 0 \end{pmatrix} = 0$.
  ◦ Now by conjugating each term and adding the results, we see that $0 = p(D) = p(P^{-1}AP) = P^{-1}\left[p(A)\right]P$. So by conjugating back, we see that $p(A) = P \cdot 0 \cdot P^{-1} = 0$.

  ◦ In the case where $A$ is not diagonalizable, the proof is more difficult. One way is to use the Jordan Canonical Form $J$ of $A$ in place of the diagonal matrix $D$; then (one can verify) $p(J) = 0$, and then the remainder of the argument is the same.

1.4 How To Diagonalize A Matrix (if possible)
• In order to determine whether a matrix $A$ is diagonalizable (and if it is, how to find a diagonalization $A = PDP^{-1}$), follow these steps (a short computational sketch follows this list):
  ◦ Step 1: Find the characteristic polynomial and eigenvalues of $A$.
  ◦ Step 2: Find a basis for each eigenspace of $A$.
  ◦ Step 3a: Determine whether $A$ is diagonalizable: if each eigenspace has the proper dimension (namely, the number of times the corresponding eigenvalue appears as a root of the characteristic polynomial), then the matrix is diagonalizable. Otherwise, the matrix is not diagonalizable.
  ◦ Step 3b: If the matrix is diagonalizable, then $D$ is the diagonal matrix whose diagonal entries are the eigenvalues of $A$ (with appropriate multiplicities), and $P$ can be taken to be the matrix whose columns are linearly independent eigenvectors of $A$, in the same order as the eigenvalues appear in $D$.
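  ◦ Aside: the sketch below is my own NumPy illustration of this procedure, not the handout's code; diagonalize is a made-up helper name. It finds eigenvalues and eigenvectors, tests whether the eigenvector matrix is invertible (Step 3a), and returns $D = P^{-1}AP$.

    import numpy as np

    def diagonalize(A, tol=1e-10):
        """Attempt to diagonalize A following Steps 1-3b; returns (D, P)."""
        A = np.asarray(A, dtype=float)
        eigenvalues, P = np.linalg.eig(A)   # Steps 1-2
        # Step 3a: diagonalizable exactly when the eigenvectors are independent.
        if np.linalg.matrix_rank(P, tol=tol) < A.shape[0]:
            raise ValueError("matrix is not diagonalizable")
        return np.linalg.inv(P) @ A @ P, P  # Step 3b: D = P^{-1} A P

    # The first example below: A = [[0, 2], [-3, 5]] with eigenvalues 2 and 3.
    D, P = diagonalize([[0, 2], [-3, 5]])
    print(np.round(D, 6))   # diagonal matrix with entries 2 and 3 (order may vary)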

• Example: For $A = \begin{pmatrix} 0 & 2 \\ -3 & 5 \end{pmatrix}$, determine whether there exists a diagonal matrix $D$ and an invertible matrix $P$ with $D = P^{-1}AP$, and if so, find them.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t & -2 \\ 3 & t-5 \end{pmatrix}$, so $\det(tI - A) = t(t-5) + 6 = t^2 - 5t + 6 = (t-2)(t-3)$. The eigenvalues are therefore $\lambda = 2, 3$.
  ◦ Step 2: For $\lambda = 2$ we need to solve $\begin{pmatrix} 0 & 2 \\ -3 & 5 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 2\begin{pmatrix} a \\ b \end{pmatrix}$, so $\begin{pmatrix} 2b \\ -3a+5b \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \end{pmatrix}$ and thus $a = b$. The eigenvectors are of the form $\begin{pmatrix} b \\ b \end{pmatrix}$, so a basis for the $\lambda = 2$ eigenspace is $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
  ◦ For $\lambda = 3$ we need to solve $\begin{pmatrix} 0 & 2 \\ -3 & 5 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = 3\begin{pmatrix} a \\ b \end{pmatrix}$, so $\begin{pmatrix} 2b \\ -3a+5b \end{pmatrix} = \begin{pmatrix} 3a \\ 3b \end{pmatrix}$ and thus $a = \frac{2}{3}b$. The eigenvectors are of the form $\begin{pmatrix} \frac{2}{3}b \\ b \end{pmatrix}$, so a basis for the $\lambda = 3$ eigenspace is $\begin{pmatrix} 2 \\ 3 \end{pmatrix}$.
  ◦ Step 3: Since the eigenvalues are distinct, we know that $A$ is diagonalizable, and $D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. We have two linearly independent eigenvectors, and so we can take $P = \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}$.
  ◦ To check: we have $P^{-1} = \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix}$, so $P^{-1}AP = \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 0 & 2 \\ -3 & 5 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix} = D$.
  ◦ Note: We could also take $D = \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}$ if we wanted. There is no particular reason to care much about which diagonal matrix we want, as long as we make sure to arrange the eigenvectors in the correct order.

• Example: For $A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}$, determine whether there exists a diagonal matrix $D$ and an invertible matrix $P$ with $D = P^{-1}AP$, and if so, find them.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t-1 & -1 & 0 \\ 0 & t-2 & 0 \\ 0 & -2 & t-1 \end{pmatrix}$, so $\det(tI - A) = (t-1)^2(t-2)$. The eigenvalues are therefore $\lambda = 1, 1, 2$.
  ◦ Step 2: For $\lambda = 1$ we need to solve $\begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $\begin{pmatrix} a+b \\ 2b \\ 2b+c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$ and thus $b = 0$. The eigenvectors are of the form $\begin{pmatrix} a \\ 0 \\ c \end{pmatrix}$, so a basis for the $\lambda = 1$ eigenspace is $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$.
  ◦ For $\lambda = 2$ we need to solve $\begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 2\begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $\begin{pmatrix} a+b \\ 2b \\ 2b+c \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \\ 2c \end{pmatrix}$ and thus $a = b$ and $c = 2b$. The eigenvectors are of the form $\begin{pmatrix} b \\ b \\ 2b \end{pmatrix}$, so a basis for the $\lambda = 2$ eigenspace is $\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}$.
  ◦ Step 3: Since the eigenspace for $\lambda = 1$ is 2-dimensional, the matrix $A$ is diagonalizable, and $D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix}$. We have three linearly independent eigenvectors, so we can take $P = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}$.
  ◦ To check: we have $P^{-1} = \begin{pmatrix} 1 & -1 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}$, so $P^{-1}AP = \begin{pmatrix} 1 & -1 & 0 \\ 0 & -2 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} = D$.

• Example: For $A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$, determine whether there exists a diagonal matrix $D$ and an invertible matrix $P$ with $D = P^{-1}AP$, and if so, find them.
  ◦ Step 1: We have $tI - A = \begin{pmatrix} t-1 & -1 & -1 \\ 0 & t-1 & -1 \\ 0 & 0 & t-1 \end{pmatrix}$, so $\det(tI - A) = (t-1)^3$ since $tI - A$ is upper-triangular. The eigenvalues are therefore $\lambda = 1, 1, 1$.
  ◦ Step 2: For $\lambda = 1$ we need to solve $\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$, so $\begin{pmatrix} a+b+c \\ b+c \\ c \end{pmatrix} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$ and thus $b = c = 0$. The eigenvectors are of the form $\begin{pmatrix} a \\ 0 \\ 0 \end{pmatrix}$, so a basis for the $\lambda = 1$ eigenspace is $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$.
  ◦ Step 3: Since the eigenspace for $\lambda = 1$ is 1-dimensional but the eigenvalue appears 3 times as a root of the characteristic polynomial, the matrix $A$ is not diagonalizable.

Well, you're at the end of my handout. Hope it was helpful. Copyright notice: This material is copyright Evan Dummit, 2012. You may not reproduce or distribute this material without my express permission.
