
Characteristic Polynomial

Massoud Malek
Preliminary Results. In all that follows, we denote the n x n identity matrix by I_n and the
n x n zero matrix by Z_n.
Let A = (a_{ij}) be an n x n matrix. If A u = λ u for some nonzero vector u, then λ and u are called
an eigenvalue and an eigenvector of A, respectively. The eigenvalues of A are the roots of the
characteristic polynomial
  K_A(λ) = det(λ I_n - A).
The eigenvectors are the nonzero solutions of the homogeneous system (λ I_n - A) X = 0.
Note that K_A(λ) is a monic polynomial (i.e., its leading coefficient is one).
Cayley-Hamilton Theorem. If K_A(λ) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n is the characteristic
polynomial of the n x n matrix A, then
  K_A(A) = A^n + p_1 A^{n-1} + ... + p_{n-1} A + p_n I_n = Z_n.
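The theorem is easy to verify numerically; the following MATLAB fragment is only an illustrative
sketch (the 2 x 2 matrix is arbitrary, and poly returns the coefficients of det(λ I - A)).

% Numerical check of the Cayley-Hamilton theorem (illustrative sketch).
A = [1 2; 3 4];
p = poly(A);                          % p = [1 p1 p2]
disp(A^2 + p(2)*A + p(3)*eye(2))      % should be the 2 x 2 zero matrix Z_2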
Corollary. Let K_A(λ) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n be the characteristic polynomial of the
n x n invertible matrix A. Then
  A^{-1} = -(1/p_n) [ A^{n-1} + p_1 A^{n-2} + ... + p_{n-2} A + p_{n-1} I_n ].
Proof. According to the Cayley-Hamilton theorem we have
  A [ A^{n-1} + p_1 A^{n-2} + ... + p_{n-1} I_n ] = -p_n I_n.
Since A is nonsingular, p_n = (-1)^n det(A) ≠ 0; thus the result follows.
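The corollary gives a direct way of computing A^{-1} from the coefficients of K_A(λ). The following
MATLAB sketch illustrates it on an arbitrary nonsingular matrix.

% Inverse from the Cayley-Hamilton corollary (illustrative sketch).
A = [4 1 2; 1 3 0; 2 0 5];             % any nonsingular matrix
n = size(A,1);
p = poly(A);                           % p = [1 p1 ... pn]
B = eye(n);
for k = 2:n
    B = A*B + p(k)*eye(n);             % Horner: B = A^(n-1) + p1*A^(n-2) + ... + p(n-1)*I
end
disp(-B/p(end) - inv(A))               % should be (approximately) the zero matrix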
Newton's Identity. Let λ_1, λ_2, . . . , λ_n be the roots of the polynomial
  K(λ) = λ^n + p_1 λ^{n-1} + p_2 λ^{n-2} + ... + p_{n-1} λ + p_n.
If s_k = λ_1^k + λ_2^k + ... + λ_n^k, then
  p_k = -(1/k) ( s_k + s_{k-1} p_1 + s_{k-2} p_2 + ... + s_2 p_{k-2} + s_1 p_{k-1} ).
Proof. From K(λ) = (λ - λ_1)(λ - λ_2) ... (λ - λ_{n-1})(λ - λ_n) and the use of logarithmic
differentiation, we obtain

  K'(λ)/K(λ) = [ n λ^{n-1} + (n-1) p_1 λ^{n-2} + ... + 2 p_{n-2} λ + p_{n-1} ]
             / [ λ^n + p_1 λ^{n-1} + p_2 λ^{n-2} + ... + p_{n-1} λ + p_n ]
             = Σ_{i=1}^{n} 1/(λ - λ_i).

By using the geometric series for 1/(λ - λ_i) and choosing |λ| > max_{1≤i≤n} |λ_i|, we obtain

  Σ_{i=1}^{n} 1/(λ - λ_i) = n/λ + s_1/λ^2 + s_2/λ^3 + ... .

Hence

  n λ^{n-1} + (n-1) p_1 λ^{n-2} + ... + p_{n-1}
    = ( λ^n + p_1 λ^{n-1} + p_2 λ^{n-2} + ... + p_n ) ( n/λ + s_1/λ^2 + s_2/λ^3 + ... ).

By equating the coefficients of like powers of λ on both sides of the above equality we obtain
Newton's identities.
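Newton's identities are easy to check numerically; the sketch below uses the quartic that appears in
the examples of the following sections.

% Numerical check of Newton's identities (illustrative sketch).
p   = [1 -4 2 28 -87];                 % K(x) = x^4 - 4x^3 + 2x^2 + 28x - 87
lam = roots(p);                        % the roots lambda_1, ..., lambda_4
n   = numel(lam);
s   = zeros(1,n); q = zeros(1,n);
for k = 1:n
    s(k) = sum(lam.^k);                                  % power sums s_k
    q(k) = -(s(k) + sum(q(1:k-1).*s(k-1:-1:1)))/k;       % Newton's identity
end
disp([p(2:end); q])                    % the two rows should agree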

The Method of Direct Expansion. The characteristic polynomial of an n x n matrix A = (a_{ij}) is
defined as:

  K_A(λ) = det(λ I_n - A) =
    | λ - a_11    -a_12     ...    -a_1n    |
    |  -a_21     λ - a_22   ...    -a_2n    |
    |    .           .       .        .     |
    |  -a_n1      -a_n2     ...   λ - a_nn  |
  = λ^n - Δ_1 λ^{n-1} + Δ_2 λ^{n-2} - ... + (-1)^n Δ_n,

where
  Δ_1 = Σ_{i=1}^{n} a_ii = trace(A)
is the sum of all first-order diagonal minors of A,
  Δ_2 = Σ_{i<j} det [ a_ii  a_ij ; a_ji  a_jj ]
is the sum of all second-order diagonal minors of A,
  Δ_3 = Σ_{i<j<k} det [ a_ii  a_ij  a_ik ; a_ji  a_jj  a_jk ; a_ki  a_kj  a_kk ]
is the sum of all third-order diagonal minors of A, and so forth. Finally,
  Δ_n = det(A).

There are C(n, k) = n!/(k! (n-k)!) diagonal minors of order k in A. From this we find that the direct
computation of the coefficients of the characteristic polynomial of an n x n matrix is equivalent to
computing
  C(n, 1) + C(n, 2) + ... + C(n, n) = 2^n - 1
determinants of various orders, which, generally speaking, is a major task. This has given rise to
special methods for expanding the characteristic polynomial. We shall explain some of these methods.

Example. For a 3 x 3 matrix A one finds
  Δ_1 = trace(A) = 1 + 1 + 2 = 4,
  Δ_2 = (-3) + 2 + (-1) = -2   (the sum of the three second-order diagonal minors),
  Δ_3 = det(A) = -17.
Thus
  K_A(λ) = det(λ I_3 - A) = λ^3 - Δ_1 λ^2 + Δ_2 λ - Δ_3 = λ^3 - 4λ^2 - 2λ + 17.
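The direct expansion is easy to program for small n; the following MATLAB sketch (with an arbitrary
3 x 3 matrix) sums the diagonal minors of every order.

% Characteristic polynomial by direct expansion (illustrative sketch).
A = [2 1 0; 1 3 1; 0 1 4];
n = size(A,1);
p = zeros(1,n+1); p(1) = 1;
for k = 1:n
    idx = nchoosek(1:n, k);            % all C(n,k) index sets of size k
    Dk  = 0;
    for r = 1:size(idx,1)
        S  = idx(r,:);
        Dk = Dk + det(A(S,S));         % a k-th order diagonal minor
    end
    p(k+1) = (-1)^k * Dk;              % coefficient of lambda^(n-k)
end
disp(p), disp(poly(A))                 % the two rows should agree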


Leverrier's Algorithm. This method allows us to find the characteristic polynomial of any
n x n matrix A using the traces of the matrices A^k, where k = 1, 2, . . . , n. Let
  σ(A) = { λ_1, λ_2, . . . , λ_n }
be the set of all eigenvalues of A, which is also called the spectrum of A. Note that
  s_k = trace(A^k) = Σ_{i=1}^{n} λ_i^k,   for all k = 1, 2, . . . , n.
Let
  K_A(λ) = det(λ I_n - A) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n
be the characteristic polynomial of the matrix A; then for k ≤ n, Newton's identities hold:
  p_k = -(1/k) [ s_k + p_1 s_{k-1} + ... + p_{k-1} s_1 ]   (k = 1, 2, . . . , n).

Example. Consider a 4 x 4 matrix A whose powers A^2, A^3, A^4 give the traces
  s_1 = trace(A) = 4,  s_2 = trace(A^2) = 12,  s_3 = trace(A^3) = -44,  s_4 = trace(A^4) = 36.
Hence
  p_1 = -s_1 = -4,
  p_2 = -(1/2)(s_2 + p_1 s_1) = -(1/2)(12 + (-4)(4)) = 2,
  p_3 = -(1/3)(s_3 + p_1 s_2 + p_2 s_1) = -(1/3)(-44 + (-4)(12) + 2(4)) = 28,
  p_4 = -(1/4)(s_4 + p_1 s_3 + p_2 s_2 + p_3 s_1) = -(1/4)(36 + (-4)(-44) + 2(12) + 28(4)) = -87.
Therefore
  K_A(λ) = λ^4 - 4λ^3 + 2λ^2 + 28λ - 87,
and, by the corollary to the Cayley-Hamilton theorem,
  A^{-1} = (1/87) ( A^3 - 4 A^2 + 2 A + 28 I_4 ).
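Leverrier's algorithm is equally easy to program; the following MATLAB sketch (the matrix is
arbitrary) builds the coefficients p_1, . . . , p_n from the traces s_1, . . . , s_n.

% Leverrier's algorithm (illustrative sketch).
A  = [1 2 0; 3 -1 2; 0 1 4];
n  = size(A,1);
s  = zeros(1,n); p = zeros(1,n);
Ak = eye(n);
for k = 1:n
    Ak   = Ak*A;                                        % A^k
    s(k) = trace(Ak);                                   % s_k = trace(A^k)
    p(k) = -(s(k) + sum(p(1:k-1).*s(k-1:-1:1)))/k;      % Newton's identity
end
disp([1 p])                            % compare with poly(A)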


The Method of Souriau (or Fadeev and Frame). This is an elegant modification of
Leverrier's method.

Let A be an n x n matrix; then define

  A_1 = A,              q_1 = -trace(A_1),                  B_1 = A_1 + q_1 I_n;
  A_2 = A B_1,          q_2 = -(1/2) trace(A_2),            B_2 = A_2 + q_2 I_n;
   .      .                    .                                  .
  A_{n-1} = A B_{n-2},  q_{n-1} = -(1/(n-1)) trace(A_{n-1}),  B_{n-1} = A_{n-1} + q_{n-1} I_n;
  A_n = A B_{n-1},      q_n = -(1/n) trace(A_n),             B_n = A_n + q_n I_n.

Theorem. B_n = Z_n, and
  K_A(λ) = det(λ I_n - A) = λ^n + q_1 λ^{n-1} + ... + q_{n-1} λ + q_n,
where Adj(A) = (-1)^{n+1} B_{n-1}, and if A is nonsingular, then
  A^{-1} = -(1/q_n) B_{n-1}.

Proof. Suppose the characteristic polynomial of A is
  K_A(λ) = det(λ I_n - A) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n,
where the p_k's are defined as in Leverrier's method.
Clearly p_1 = -trace(A) = -trace(A_1) = q_1. Now suppose that we have proved
  q_1 = p_1, q_2 = p_2, . . . , q_{k-1} = p_{k-1}.
Then by the hypothesis we have
  A_k = A B_{k-1} = A (A_{k-1} + q_{k-1} I_n) = A A_{k-1} + q_{k-1} A
      = A [ A (A_{k-2} + q_{k-2} I_n) ] + q_{k-1} A
      = A^2 A_{k-2} + q_{k-2} A^2 + q_{k-1} A
      = ...
      = A^k + q_1 A^{k-1} + ... + q_{k-1} A.
Let s_i = trace(A^i) (i = 1, 2, . . . , k); then by Newton's identities
  -k q_k = trace(A_k) = trace(A^k) + q_1 trace(A^{k-1}) + ... + q_{k-1} trace(A)
         = s_k + q_1 s_{k-1} + ... + q_{k-1} s_1
         = s_k + p_1 s_{k-1} + ... + p_{k-1} s_1
         = -k p_k,
showing that p_k = q_k. Hence this relation holds for all k.
By the Cayley-Hamilton theorem,
  B_n = A^n + q_1 A^{n-1} + ... + q_{n-1} A + q_n I_n = Z_n,
and so
  B_n = A_n + q_n I_n = Z_n;   hence   A_n = A B_{n-1} = -q_n I_n.
If A is nonsingular, then det(A) = (-1)^n K_A(0) = (-1)^n q_n ≠ 0, and thus
  A^{-1} = -(1/q_n) B_{n-1}.
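A compact MATLAB sketch of the recurrence follows (an interactive program is given at the end of
this section; the test matrix here is arbitrary).

% Souriau (Faddeev-Frame) recurrence (illustrative sketch).
A = [2 1 0; 1 3 1; 0 1 4];
n = size(A,1); q = zeros(1,n);
B = eye(n);                            % so that A*B equals A_1 = A on the first pass
for k = 1:n
    Ak   = A*B;                        % A_k = A*B_{k-1}
    q(k) = -trace(Ak)/k;               % q_k = -(1/k)*trace(A_k)
    Bnm1 = B;                          % after the last pass this holds B_{n-1}
    B    = Ak + q(k)*eye(n);           % B_k = A_k + q_k*I_n
end
disp([1 q])                            % coefficients of K_A(lambda)
if q(n) ~= 0, disp(-Bnm1/q(n) - inv(A)), end   % should be (approximately) zero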


Example. Find the characteristic polynomial and, if possible, the inverse of the 4 x 4 matrix A of
the previous example.
For k = 1, 2, 3, 4, compute
  A_k = A B_{k-1},   q_k = -(1/k) trace(A_k),   B_k = A_k + q_k I_4.
One obtains
  q_1 = -4,   q_2 = 2,   q_3 = 28,   q_4 = -87,
with A_4 = A B_3 = 87 I_4 and B_4 = A_4 + q_4 I_4 = Z_4.
Therefore the characteristic polynomial of A is
  K_A(λ) = λ^4 - 4λ^3 + 2λ^2 + 28λ - 87.
Note that A_4 is a diagonal matrix, so we only need to multiply the first row of A by the first column
of B_3 to obtain 87. Since q_4 = -87 ≠ 0, the matrix A has an inverse:
  A^{-1} = -(1/q_4) B_3 = (1/87) B_3.
9
Matlab Program
A = input('Enter a square matrix : ')
m = size(A); n = m(1); q = zeros(1, n); B = A; AB = A; In = eye(n);
for k = 1 : n-1, q(k) = -(1/k)*trace(AB); B = AB + q(k)*In; AB = A*B; end
C = B; q(n) = -(1/n)*trace(AB); Q = [1 q];
disp('The Characteristic polynomial looks like :')
disp(' KA(x) = x^n + q(1)x^(n-1) + ... + q(n-1)x + q(n)'), disp(' '),
disp('The coefficients list q(k) is :'), disp(' '),
disp(Q), disp(' ')
if q(n) == 0, disp('The matrix is singular');
else, disp('The matrix has an inverse.'), disp(' ')
C = -(1/q(n))*B;
disp('The inverse of A is :'), disp(' '),
disp(C)
end


The Method of Undetermined Coefficients. If one has to expand a large number of characteristic
polynomials of matrices of the same order, then the method of undetermined coefficients may be used
to produce the characteristic polynomials of those matrices.
Let A be an n x n matrix and
  K_A(λ) = det(λ I_n - A) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n
be its characteristic polynomial. In order to find the coefficients p_i of K_A(λ) we evaluate
  D_j = K_A(j) = det(j I_n - A),   j = 0, 1, 2, . . . , n-1,
and obtain the following system of linear equations:
  p_n = D_0,
  1^n + p_1 · 1^{n-1} + ... + p_n = D_1,
  2^n + p_1 · 2^{n-1} + ... + p_n = D_2,
  ...........................................
  (n-1)^n + p_1 · (n-1)^{n-1} + ... + p_n = D_{n-1},
which can be changed into
  S_{n-1} P = D,
where
  S_{n-1} = [ 1^{n-1}       1^{n-2}       ...  1    ;
              2^{n-1}       2^{n-2}       ...  2    ;
                 .              .          .   .    ;
              (n-1)^{n-1}   (n-1)^{n-2}   ...  n-1  ],
  P = ( p_1, p_2, . . . , p_{n-1} )^T,
  D = ( D_1 - D_0 - 1^n, D_2 - D_0 - 2^n, . . . , D_{n-1} - D_0 - (n-1)^n )^T.
The system may be solved as follows:
  P = S_{n-1}^{-1} D.
Since the (n-1) x (n-1) matrix S_{n-1} depends only on the order of A, we may store its inverse
R = S_{n-1}^{-1} beforehand and use it to find the coefficients of the characteristic polynomials of
various n x n matrices.
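A small MATLAB sketch of the procedure (the test matrix is arbitrary, and R is recomputed here rather
than retrieved from storage):

% Method of undetermined coefficients (illustrative sketch).
A = [1 2 0 1; 3 -1 2 0; 0 1 4 1; 2 0 1 -2];
n = size(A,1);
S = zeros(n-1);
for i = 1:n-1, for j = 1:n-1, S(i,j) = i^(n-j); end, end
R  = inv(S);                           % depends only on n, so it can be stored once
D0 = det(-A);                          % D_0 = K_A(0)
D  = zeros(n-1,1);
for j = 1:n-1, D(j) = det(j*eye(n) - A) - D0 - j^n; end
P  = R*D;                              % [p_1; ... ; p_{n-1}]
disp([1 P' D0])                        % compare with poly(A)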
Examples. Compute the characteristic polynomials of two 4 x 4 matrices A and B, where B is the
matrix whose characteristic polynomial was found by Leverrier's and Souriau's methods above.
First we find
  S = [ 1 1 1 ; 8 4 2 ; 27 9 3 ]   and   R = S^{-1} = (1/12) [ 6 -6 2 ; -30 24 -6 ; 36 -18 4 ].
Then for the matrix A we obtain
  D_0 = det(A) = -48,  D_1 = det(I_4 - A) = -72,  D_2 = det(2 I_4 - A) = -128,  and  D_3 = det(3 I_4 - A) = -180.
Hence
  D = ( D_1 - D_0 - 1^4, D_2 - D_0 - 2^4, D_3 - D_0 - 3^4 )^T = ( -25, -96, -213 )^T
and
  P = ( p_1, p_2, p_3 )^T = (1/12) [ 6 -6 2 ; -30 24 -6 ; 36 -18 4 ] ( -25, -96, -213 )^T = ( 0, -23, -2 )^T.


Thus
  K_A(λ) = λ^4 - 23λ^2 - 2λ - 48.
For the matrix B we have
  D_0 = det(B) = -87,  D_1 = det(I_4 - B) = -60,  D_2 = det(2 I_4 - B) = -39,  and  D_3 = det(3 I_4 - B) = -12.
Hence
  D = ( D_1 - D_0 - 1^4, D_2 - D_0 - 2^4, D_3 - D_0 - 3^4 )^T = ( 26, 32, -6 )^T
and
  P = ( p_1, p_2, p_3 )^T = (1/12) [ 6 -6 2 ; -30 24 -6 ; 36 -18 4 ] ( 26, 32, -6 )^T = ( -4, 2, 28 )^T.
Thus
  K_B(λ) = λ^4 - 4λ^3 + 2λ^2 + 28λ - 87.
Matlab Program
N = input('Enter the size of your square matrix : ');
n = N - 1; In = eye(N); S = zeros(n); R = zeros(n); D = zeros(1, n);
DSP1 = ['For any ', int2str(N), ' x ', int2str(N), ' matrix, you need S ='];
DSP2 = ['Do you want to try with another ', int2str(N), ' x ', int2str(N), ' matrix? (Yes = 1 / No = 0) '];
% DEFINING S
for i = 1 : n, for j = 1 : n, S(i, j) = i^(N - j); end; end;
disp(' '), disp(DSP1), disp(' '), disp(S),
R = inv(S);
ok = 1;
while ok == 1;
A = input(['Enter an ', int2str(N), ' x ', int2str(N), ' matrix A : ']);
disp(' ')
D0 = det(-A);                              % D_0 = K_A(0) = det(-A)
for k = 1 : n; D(k) = det(k*In - A); end;
for i = 1 : n; DD(i) = D(i) - D0 - i^N; end;
P = R*DD';
disp('The Characteristic polynomial looks like :')
disp(' KA(x) = x^n + p(1)x^(n-1) + ... + p(n-1)x + p(n)'), disp(' '),
disp('The coefficients list p(k) is :'), disp(' '),
disp([1 P' D0]), disp(' '),
ok = input(DSP2);
end


The Method of Danilevsky. Consider an n x n matrix A and let
  K_A(λ) = det(λ I_n - A) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n
be its characteristic polynomial. Then the companion matrix of K_A(λ),

  F[A] = [ -p_1  -p_2  -p_3  ...  -p_{n-1}  -p_n ]
         [  1     0     0    ...   0         0   ]
         [  0     1     0    ...   0         0   ]
         [  .     .     .     .    .         .   ]
         [  0     0     0    ...   1         0   ],

is similar to A and is called the Frobenius form of A.
The method of Danilevsky (1937) applies the Gauss-Jordan method to obtain the Frobenius form
of an n x n matrix. According to this method the transition from the matrix A to F[A] is done
by means of n - 1 similarity transformations which successively transform the rows of A, beginning
with the last, into the corresponding rows of F[A].
Let us illustrate the beginning of the process. Our purpose is to carry the nth row of

  A = [ a_11  a_12  a_13  ...  a_{1,n-1}  a_1n ]
      [ a_21  a_22  a_23  ...  a_{2,n-1}  a_2n ]
      [ a_31  a_32  a_33  ...  a_{3,n-1}  a_3n ]
      [  .     .     .          .          .   ]
      [ a_n1  a_n2  a_n3  ...  a_{n,n-1}  a_nn ]

into the row ( 0  0  ...  1  0 ). Assuming that a_{n,n-1} ≠ 0, we replace the (n-1)th row of the n x n
identity matrix with the nth row of A and obtain the matrix

  U_{n-1} = [ 1     0     0     ...  0          0    ]
            [ 0     1     0     ...  0          0    ]
            [ .     .     .          .          .    ]
            [ a_n1  a_n2  a_n3  ...  a_{n,n-1}  a_nn ]
            [ 0     0     0     ...  0          1    ].

The inverse of U_{n-1} is

  V_{n-1} = [ 1          0          0          ...  0              0          ]
            [ 0          1          0          ...  0              0          ]
            [ .          .          .               .              .          ]
            [ v_{n-1,1}  v_{n-1,2}  v_{n-1,3}  ...  v_{n-1,n-1}    v_{n-1,n}  ]
            [ 0          0          0          ...  0              1          ],

where
  v_{n-1,i} = -a_{ni}/a_{n,n-1}   for i ≠ n-1,   and   v_{n-1,n-1} = 1/a_{n,n-1}.
Multiplying A on the right by V_{n-1}, we obtain

  A V_{n-1} = B = [ b_11       b_12       b_13       ...  b_{1,n-1}    b_1n     ]
                  [ b_21       b_22       b_23       ...  b_{2,n-1}    b_2n     ]
                  [  .          .          .              .            .        ]
                  [ b_{n-1,1}  b_{n-1,2}  b_{n-1,3}  ...  b_{n-1,n-1}  b_{n-1,n}]
                  [ 0          0          0          ...  1            0        ].

However, the matrix B = A V_{n-1} is not similar to A. To have a similarity transformation, it is
necessary to multiply B on the left by U_{n-1} = V_{n-1}^{-1}. Let C = U_{n-1} A V_{n-1}; then C is similar to
A and is of the form

  C = [ c_11       c_12       c_13       ...  c_{1,n-1}    c_1n      ]
      [ c_21       c_22       c_23       ...  c_{2,n-1}    c_2n      ]
      [  .          .          .              .            .         ]
      [ c_{n-1,1}  c_{n-1,2}  c_{n-1,3}  ...  c_{n-1,n-1}  c_{n-1,n} ]
      [ 0          0          0          ...  1            0         ].

Now, if c_{n-1,n-2} ≠ 0, then similar operations are performed on the matrix C by taking its (n-2)th row
as the principal one. We then obtain the matrix
  D = U_{n-2} C V_{n-2} = U_{n-2} U_{n-1} A V_{n-1} V_{n-2}
with two reduced rows. We continue in the same way until we finally obtain the Frobenius form
  F[A] = U_1 U_2 ... U_{n-2} U_{n-1} A V_{n-1} V_{n-2} ... V_2 V_1,
if, of course, all the n - 1 intermediate transformations are possible.
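The unexceptional case (every pivot nonzero) is short to program; the sketch below reduces the matrix
that is used in the MATLAB session later in this section.

% Danilevsky reduction (illustrative sketch; assumes every pivot is nonzero).
A = [1 2 4 3; 2 4 5 1; 3 2 1 4; 5 1 2 3];
n = size(A,1); F = A;
for k = n-1:-1:1                       % reduce rows n, n-1, ..., 2
    U = eye(n); U(k,:) = F(k+1,:);     % row k of the identity replaced by row k+1 of F
    V = eye(n);
    V(k,:) = -F(k+1,:)/F(k+1,k);       % V = inv(U)
    V(k,k) = 1/F(k+1,k);
    F = U*F*V;                         % similarity transformation
end
disp(F)                                % Frobenius form; first row is [-p1 ... -pn]
disp([1 -F(1,:)])                      % coefficients of K_A(lambda)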
Exceptional case in the Danilevsky method. Suppose that in the transformation of the matrix
A into its Frobenius form F[A] we arrived, after a few steps, at a matrix of the form

  R = [ r_11  r_12  ...  r_{1,k-1}  r_1k  ...  r_1n ]
      [  .     .          .          .          .   ]
      [ r_k1  r_k2  ...  r_{k,k-1}  r_kk  ...  r_kn ]
      [ 0     0     ...  0          1     ...  0    ]
      [  .     .          .          .     .    .   ]
      [ 0     0     ...  0          0     ...  1  0 ]

(its rows k+1, . . . , n are already in reduced form, with the 1 of row j standing in column j-1),
and it was found that r_{k,k-1} = 0 or |r_{k,k-1}| is very small. It is then still possible to continue the
transformation by the Danilevsky method, as follows.
Two cases are possible here.
Case 1. Suppose that for some j = 1, 2, . . . , k-2 we have r_{kj} ≠ 0. Then by permuting the jth row
and the (k-1)th row and the jth column and the (k-1)th column of R we obtain a matrix R' = (r'_{ij})
similar to R with r'_{k,k-1} ≠ 0.
Case 2. Suppose now that r_{kj} = 0 for all j = 1, 2, . . . , k-2. Then R is of the form

  R = [ R_1   R_2 ]
      [ 0     R_3 ],

where R_1 is the (k-1) x (k-1) block formed by the first k-1 rows and columns of R, R_2 consists of
the remaining columns of the first k-1 rows, and

  R_3 = [ r_kk  r_{k,k+1}  ...  r_kn ]
        [ 1     0          ...  0    ]
        [  .     .          .    .   ]
        [ 0     0      ...  1    0   ]

is of order n-k+1. In this case the characteristic polynomial of R breaks up into the product of two
determinants:

  det(λ I_n - R) = det(λ I_{k-1} - R_1) · det(λ I_{n-k+1} - R_3).


Here, the matrix R_3 is already reduced to the Frobenius form. It remains to apply Danilevsky's
method to the matrix R_1.
Note. Since U_k A_{k-1} only changes the kth row of A_{k-1}, it is more efficient to first multiply the
(k+1)th row of A_{k-1} by A_{k-1} (this gives the new kth row) and then multiply the resulting matrix
on the right by V_k.
The next result shows that once we transform A into its Frobenius form, we may obtain the
eigenvectors with the help of the matrices V_i.
Theorem. Let A be an n x n matrix and let F[A] be its Frobenius form. If λ is an eigenvalue of
A, then
  v = ( λ^{n-1}, λ^{n-2}, . . . , λ, 1 )^T   and   w = V_{n-1} V_{n-2} ... V_2 V_1 v
are eigenvectors of F[A] and A, respectively.
Proof. Since
  det(λ I_n - A) = det(λ I_n - F[A]) = λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n,
we have

  (λ I_n - F[A]) v =
  [ λ+p_1   p_2   p_3  ...  p_{n-1}   p_n ] [ λ^{n-1} ]   [ 0 ]
  [  -1      λ     0   ...   0         0  ] [ λ^{n-2} ]   [ 0 ]
  [   0     -1     λ   ...   0         0  ] [    .    ] = [ . ]
  [   .      .     .    .    .         .  ] [    λ    ]   [ . ]
  [   0      0     0   ...  -1         λ  ] [    1    ]   [ 0 ],

since the first entry of the product is λ^n + p_1 λ^{n-1} + ... + p_{n-1} λ + p_n = K_A(λ) = 0 and each
remaining entry is -λ^{n-i+1} + λ · λ^{n-i} = 0. Hence F[A] v = λ v.
Since F[A] = V_1^{-1} V_2^{-1} ... V_{n-2}^{-1} V_{n-1}^{-1} A V_{n-1} V_{n-2} ... V_2 V_1 and F[A] v = λ v, we conclude that
  λ w = V_{n-1} V_{n-2} ... V_2 V_1 (λ v) = (V_{n-1} V_{n-2} ... V_2 V_1) F[A] v = A (V_{n-1} V_{n-2} ... V_2 V_1 v) = A w.
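In MATLAB the theorem can be checked with the same reduction loop as in the earlier sketch, by
accumulating the product of the matrices V_k (only an illustration; the matrix is the one used in the
MATLAB session below).

% Eigenvectors from the Frobenius form (illustrative sketch; nonzero pivots assumed).
A = [1 2 4 3; 2 4 5 1; 3 2 1 4; 5 1 2 3];
n = size(A,1); F = A; W = eye(n);      % W accumulates V_{n-1}*...*V_1
for k = n-1:-1:1
    U = eye(n); U(k,:) = F(k+1,:);
    V = eye(n); V(k,:) = -F(k+1,:)/F(k+1,k); V(k,k) = 1/F(k+1,k);
    F = U*F*V;
    W = W*V;
end
lam = roots([1 -F(1,:)]);              % the eigenvalues of A
for r = 1:n
    v = lam(r).^(n-1:-1:0).';          % (lambda^(n-1), ..., lambda, 1)^T
    w = W*v;                           % eigenvector of A
    disp(norm(A*w - lam(r)*w))         % each value should be ~0
end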


Note. For expanding characteristic polynomials of matrices of order higher than fifth, the method
of Danilevsky requires fewer multiplications and additions than the other methods.
Example. Reduce a 4 x 4 matrix A to its Frobenius form.
Since a_43 ≠ 0, the first step can be carried out: B = A_3 = U_3 A V_3 has ( 0 0 1 0 ) as its fourth row.
In this example, however, the entry b_32 of B turns out to be zero, so the pivot for the next step is
missing and we are in the exceptional case. Since b_31 ≠ 0, Case 1 applies: permuting the first and
second rows and columns of B with the permutation matrix
  J = [ 0 1 0 0 ; 1 0 0 0 ; 0 0 1 0 ; 0 0 0 1 ]
gives C = J B J, which is similar to B and has c_32 ≠ 0.
Next we obtain the matrix D = A_2 = U_2 C V_2, and finally the Frobenius form F[A] = A_1 = U_1 D V_1,

  F[A] = [ 1  4  2  3 ]
         [ 1  0  0  0 ]
         [ 0  1  0  0 ]
         [ 0  0  1  0 ].

Thus the characteristic polynomial of A is K_A(λ) = λ^4 - λ^3 - 4λ^2 - 2λ - 3.


Using MATLAB for the Danilevsky Method
Consider the matrix A:
>> A = [ 1 2 4 3 ; 2 4 5 1 ; 3 2 1 4 ; 5 1 2 3 ] , M = A ; I = eye(4);
A =
     1     2     4     3
     2     4     5     1
     3     2     1     4
     5     1     2     3
Use the fourth row of A to define U and its inverse V:
>> U = I ; U(3,:) = A(4,:) , V = I ; V(3,:) = -A(4,:)/A(4,3) ; V(3,3) = 1/A(4,3)
U =
     1     0     0     0
     0     1     0     0
     5     1     2     3
     0     0     0     1
V =
    1.0000         0         0         0
         0    1.0000         0         0
   -2.5000   -0.5000    0.5000   -1.5000
         0         0         0    1.0000
Define the matrix B similar to A, which has the same characteristic polynomial:
>> B = U*A*V
B =
   -9.0000         0    2.0000   -3.0000
  -10.5000    1.5000    2.5000   -6.5000
  -54.5000    4.5000   16.5000  -16.5000
         0         0    1.0000         0
Change B into A and find a new U and its inverse V, by using the third row of the new A:
>> A = B ; U = I ; U(2,:) = A(3,:) , V = I ; V(2,:) = -A(3,:)/A(3,2) ; V(2,2) = 1/A(3,2)
U =
    1.0000         0         0         0
  -54.5000    4.5000   16.5000  -16.5000
         0         0    1.0000         0
         0         0         0    1.0000
V =
    1.0000         0         0         0
   12.1111    0.2222   -3.6667    3.6667
         0         0    1.0000         0
         0         0         0    1.0000
>> B = U*A*V
B =
    -9     0     2    -3
   525    18  -139   159
     0     1     0     0
     0     0     1     0
Change B into A and find a new U and its inverse V, using the second row of the new A:
>> A = B ; U = I ; U(1,:) = A(2,:) , V = I ; V(1,:) = -A(2,:)/A(2,1) ; V(1,1) = 1/A(2,1)
U =
   525    18  -139   159
     0     1     0     0
     0     0     1     0
     0     0     0     1
V =
    0.0019   -0.0343    0.2648   -0.3029
         0    1.0000         0         0
         0         0    1.0000         0
         0         0         0    1.0000
>> B = U*A*V
B =
     9    23   -42  -144
     1     0     0     0
     0     1     0     0
     0     0     1     0
B is the companion matrix of our original matrix A. Here is the characteristic polynomial of A:
  K_A(λ) = λ^4 - 9λ^3 - 23λ^2 + 42λ + 144   (coefficients 1, -9, -23, 42, 144).

Exceptional Case.
>> A = [ 1 2 4 3 ; 2 4 5 1 ; 3 2 1 4 ; 5 1 0 3 ]
A =
     1     2     4     3
     2     4     5     1
     3     2     1     4
     5     1     0     3
Since A(4,3) = 0, we need a permutation matrix P which moves a nonzero entry into that position:
>> P = I ; Q = P ; P(4,:) = Q(3,:) ; P(3,:) = Q(4,:)
P =
     1     0     0     0
     0     1     0     0
     0     0     0     1
     0     0     1     0
Define the matrix B similar to A which has the same characteristic polynomial, but with B(4,3) ≠ 0.
Note that every time there is a zero in the A(k+1, k) position, a permutation matrix is used to obtain
a new matrix similar to A with a nonzero value at that position.
>> B = P*A*P
B =
     1     2     3     4
     2     4     1     5
     5     1     3     0
     3     2     4     1
We set A to B and continue in the same way as in the previous example.


>> U = I ; U(3,:) = A(4,:) , V = I ; V(3,:) = -A(4,:)/A(4,3) ; V(3,3) = 1/A(4,3)
U =
     1     0     0     0
     0     1     0     0
     3     2     4     1
     0     0     0     1
V =
    1.0000         0         0         0
         0    1.0000         0         0
   -0.7500   -0.5000    0.2500   -0.2500
         0         0         0    1.0000
Define the matrix B similar to A which has the same characteristic polynomial:
>> B = U*A*V
B =
   -1.2500    0.5000    0.7500    3.2500
    1.2500    3.5000    0.2500    4.7500
    9.7500    6.5000    6.7500   16.2500
         0         0    1.0000         0
Since B(3,2) ≠ 0, no further permutation matrix is needed. Setting A = B and repeating the procedure
twice more (first with the third row, then with the second row of the current matrix) reduces A to its
companion matrix, whose first row gives the coefficients of the characteristic polynomial of the
original matrix:
  K_A(λ) = λ^4 - 9λ^3 - 15λ^2 + 24λ + 104.


Matlab Program
A = input('Enter the square matrix A : ');
m = size(A); N = m(1); b = [1]; i = 1;
while i < N,
    h = A(N-i+1, N-i);                             % pivot for the current step
    while h == 0;
        c = A(N-i+1, 1:N-i-1);                     % entries to the left of the pivot
        if norm(c, inf) ~= 0;                      % Case 1: permute a nonzero entry into place
            k = find(c ~= 0, 1, 'last');
            J = eye(N); J([k N-i], :) = J([N-i k], :);
            A = J*A*J;
        else                                       % Case 2: the problem splits into two blocks
            b = conv(b, [1 -A(N-i+1, N-i+1:N)]);   % factor from the reduced lower block
            A = A(1:N-i, 1:N-i); N = N-i; i = 1;
        end
        h = A(N-i+1, N-i);
    end
    U = eye(N); V = eye(N);
    U(N-i, :) = A(N-i+1, :);
    V(N-i, :) = -A(N-i+1, :)/A(N-i+1, N-i); V(N-i, N-i) = 1/A(N-i+1, N-i);
    A = U*A*V;
    i = i + 1;
end;
b = conv(b, [1 -A(1, 1:N)]);                       % factor from the final Frobenius block
disp(' '),
disp('The Characteristic polynomial looks like :'), disp(' '),
disp('KA(x) = x^n + c(1)x^(n-1) + ... + c(n-1)x + c(n)'), disp(' '),
disp('The coefficients list c(k) is :'), disp(' '),
disp(b), disp(' ')



The Method of Krylov. Let A be an n x n matrix. For any n-dimensional nonzero column
vector v we associate its successive transforms
  v_k = A^k v   (k = 0, 1, 2, . . .);
this sequence of vectors is called the Krylov sequence associated with the matrix A and the vector v.
At most n vectors of the sequence v_0, v_1, v_2, . . . will be linearly independent.
Suppose that for some r = r(v) ≤ n, the vectors v_0, v_1, v_2, . . . , v_{r-1} are linearly independent and
the vector v_r is a linear combination of the preceding ones. Hence there exists a monic polynomial
  ψ(λ) = c_0 + c_1 λ + c_2 λ^2 + ... + c_{r-1} λ^{r-1} + λ^r
such that
  ψ(A) v = ( c_0 I_n + c_1 A + ... + c_{r-1} A^{r-1} + A^r ) v = c_0 v_0 + c_1 v_1 + ... + c_{r-1} v_{r-1} + v_r = 0.
The polynomial ψ(λ) is said to annihilate v and to be minimal for v. If φ(λ) is another monic
polynomial which annihilates v,
  φ(A) v = 0,
then ψ(λ) divides φ(λ).
To show that, suppose
  φ(λ) = ψ(λ) χ(λ) + ρ(λ),
where ρ(λ) is the remainder after dividing φ by ψ, hence of degree strictly less than the degree of ψ;
it follows that
  ρ(A) v = 0.
But ψ(λ) is minimal for v, hence ρ(λ) = 0.
Among all vectors v there is at least one for which the degree of its minimal polynomial is maximal,
since for any vector v, r(v) ≤ n. We call such a vector a maximal vector.
Remark. A monic polynomial ψ_A(λ) of an n x n matrix A is said to be its minimal polynomial if
ψ_A(λ) is of minimum degree satisfying
  ψ_A(A) = Z_n.
If the minimal polynomial of a matrix is equal to its characteristic polynomial, then we may use the
Krylov method to find the characteristic polynomial. For example, if the matrix has n distinct
eigenvalues, then its minimal polynomial is equal to its characteristic polynomial; hence the Krylov
method will be successful.

Algorithm. To produce the characteristic polynomial of the matrix A by the Krylov method, we
follow these steps:
Step 1. Choose an arbitrary n-dimensional nonzero column vector v, such as e_1, then use the Krylov
sequence to define the matrix
  V = [ v , A v , A^2 v , . . . , A^{n-2} v , A^{n-1} v ] = [ v_0 , v_1 , v_2 , . . . , v_{n-2} , v_{n-1} ].
Step 2. If the matrix V has rank n, then the system V c = -v_n has a unique solution
  c^T = ( c_0 , c_1 , c_2 , . . . , c_{n-1} ).
The monic polynomial
  ψ(λ) = c_0 + c_1 λ + c_2 λ^2 + ... + c_{n-1} λ^{n-1} + λ^n,
which annihilates the vector v, is the characteristic polynomial of A.
Step 3. If the system V c = -v_n does not have a unique solution, then change the initial vector
and try, for example, with e_2.
Step 4. If again the system V c = -v_n does not have a unique solution, then either choose
another initial vector v, or suspect that the minimal polynomial and the characteristic polynomial of A
are different; in this case abandon this method and use another method.
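In MATLAB the whole procedure takes only a few lines; the sketch below uses e_1 as the initial vector
and an arbitrary matrix.

% Krylov's method (illustrative sketch).
A = [1 2 0; 3 -1 2; 0 1 4];
n = size(A,1);
v = eye(n,1);                          % initial vector e_1
V = zeros(n); w = v;
for k = 1:n
    V(:,k) = w;                        % columns v_0, v_1 = A*v, ..., v_{n-1}
    w = A*w;                           % after the loop, w = A^n * v
end
if rank(V) == n
    c = -V\w;                          % solve V*c = -A^n*v
    disp([c.' 1])                      % coefficients c_0, ..., c_{n-1}, 1 of K_A in ascending order
else
    disp('V is singular; try another initial vector.')
end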
Examples. Compute the characteristic polynomials of three 4 x 4 matrices A, B, and C, where A is
the matrix of the Leverrier and Souriau examples, B is another 4 x 4 matrix, and
  C = [ 1 2 3 4 ; 1 2 3 4 ; 1 0 0 0 ; 1 0 0 0 ].
Choosing the initial vector v = e_1 = ( 1, 0, 0, 0 )^T for the matrices A and B, we form
  V_A = [ v , A v , A^2 v , A^3 v ]   and   V_B = [ v , B v , B^2 v , B^3 v ].
The matrix V_A is nonsingular, hence
  c_A = -V_A^{-1} A^4 v = ( -87 , 28 , 2 , -4 )^T.
From the vector c_A we obtain the characteristic polynomial of A, which is
  K_A(λ) = -87 + 28 λ + 2 λ^2 - 4 λ^3 + λ^4.
The matrix V_B is singular, so we need another initial vector, such as v = e_2 = ( 0, 1, 0, 0 )^T. The
new matrix V_B = [ v , B v , B^2 v , B^3 v ] is invertible, so
  c_B = -V_B^{-1} B^4 v = ( 9 , -2 , -10 , 2 )^T.
From the vector c_B we obtain the characteristic polynomial of B, which is
  K_B(λ) = 9 - 2 λ - 10 λ^2 + 2 λ^3 + λ^4.
The minimal and characteristic polynomials of the matrix C are
  m_C(λ) = λ^3 - 3 λ^2 - 7 λ   and   K_C(λ) = λ^4 - 3 λ^3 - 7 λ^2,
respectively. Therefore, for every choice of the initial vector v, the matrix V_C = [ v , C v , C^2 v , C^3 v ]
will be singular. This means that the Krylov sequence will never produce the characteristic
polynomial K_C(λ).


Matlab Program
A = input('Enter a square matrix A : ');
m = size(A); n = m(1); V = zeros(n, n);
DL1 = ['Enter an initial ', int2str(n), ' dimensional column vector v0 = '];
v0 = input(DL1);
z = 0; k = 1;
while z == 0 & k < 5
w = v0; V(:, 1) = w;
for i = 2 : n, w = A*w; V(:, i) = w; end,
if det(V) ~= 0; k = 8; c = -inv(V)*A*w;
else
v0 = input('The matrix V is singular, please enter another initial column vector v0 : ');
k = k + 1;
end;
z = det(V);
end;
if k == 5;
disp('Sorry, the Krylov method is not suited for this matrix.'), disp(' '),
else;
disp('The Characteristic polynomial looks like :'), disp(' '),
disp('KA(x) = c(0) + c(1)x + c(2)x^2 + ... + c(n-1)x^(n-1) + x^n'), disp(' '),
disp('The coefficients list c(k) is :'), disp(' '),
disp([c' 1])
end;
