
Power Method for Finding Eigenvectors and Eigenvalues

Consider an n×n matrix A with n independent eigenvectors x_1, x_2, x_3, ..., x_n. Thus

A x_i = λ_i x_i   for i = 1, 2, ..., n.


Successive multiplication by A, say N times, yields A^N x_i = λ_i^N x_i for i = 1, 2, ..., n. This idea is used below to show how to obtain the eigenvectors and the eigenvalues of A.
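This identity is easy to check numerically. The sketch below uses a hypothetical 2×2 matrix (not from the text): A = [[2, 1], [1, 2]] has eigenvector x = (1, 1) with eigenvalue 3, so applying A four times should scale x by 3^4 = 81.

```python
# Hypothetical example: A = [[2, 1], [1, 2]] has eigenvector x = (1, 1)
# with eigenvalue 3, so A^N x should equal 3^N x.

def matvec(A, v):
    # Multiply matrix A (list of rows) by column vector v.
    return [sum(A[r][c] * v[c] for c in range(len(v))) for r in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]
v = [1.0, 1.0]              # eigenvector for eigenvalue 3

for _ in range(4):          # apply A four times: v = A^4 x
    v = matvec(A, v)

print(v)                    # expect [81.0, 81.0] since 3^4 = 81
```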

First, label the eigenvalues of A according to their magnitude as follows:

|λ_0| ≥ |λ_1| ≥ ... ≥ |λ_{n-1}|.

Since the eigenvectors are independent, an arbitrary vector v^0 can be expressed as:

v^0 = a_0 x_0 + a_1 x_1 + ... + a_{n-1} x_{n-1}.

Choosing v^0 to be a linear combination of the rows of A, for instance equal to the first row of A (but arranged as a column vector), will guarantee v^0 to lie in the rowspace of A. Multiplying v^0 by matrix A, N times, yields:

A^N v^0 = a_0 A^N x_0 + a_1 A^N x_1 + ... + a_{n-1} A^N x_{n-1}

or

A^N v^0 = a_0 λ_0^N x_0 + a_1 λ_1^N x_1 + ... + a_{n-1} λ_{n-1}^N x_{n-1}.
Assuming for now that all the eigenvalues have different magnitudes results in

|λ_0|^N >> |λ_1|^N >> ... >> |λ_{n-1}|^N

for large enough N. This implies that the term with λ_0 is dominant, resulting in the following approximation:

A^N v^0 ≈ a_0 λ_0^N x_0.
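The dominance of the λ_0 term can be seen directly: starting from an arbitrary vector (below, the first row of an illustrative matrix, as the text suggests), repeated multiplication aligns the direction with the dominant eigenvector. For A = [[2, 1], [1, 2]] that direction is (1, 1)/√2 ≈ (0.707107, 0.707107); the matrix is a hypothetical example, not from the text.

```python
# Repeated multiplication by A rotates an arbitrary start vector
# toward the dominant eigenvector (1, 1)/sqrt(2) of this example matrix.

def matvec(A, v):
    return [sum(A[r][c] * v[c] for c in range(len(v))) for r in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]
v = [2.0, 1.0]                  # first row of A, arranged as a column vector

for _ in range(30):             # v = A^30 v^0 (no normalization yet)
    v = matvec(A, v)

nrm = sum(e * e for e in v) ** 0.5
print([round(e / nrm, 6) for e in v])   # expect [0.707107, 0.707107]
```

Note how the components grow to about 3×10^14 after 30 steps even for this small matrix, which motivates the per-iteration normalization introduced next.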

Thus, it is seen that successive multiplication by the matrix A rotates vector v^0 towards the direction of x_0. This provides a way to find the direction of the eigenvector.

In practice, if |λ_0| > 1, the term A^N v^0 will grow exponentially with increasing N. This can quickly become a problem in a practical implementation. To avoid this problem, the estimated eigenvector should be normalized after each iteration until convergence is achieved.

For instance, let

p^i = v^i / ||v^i||

with

v^{i+1} = A p^i

and

λ^{(i+1)} = (p^i)^T v^{i+1}

for i = 0, 1, ..., N. Then it follows that

p^N ≈ x_0 / ||x_0||   and   λ^{(N)} ≈ λ_0

for large enough N.

In summary, the main steps for the Power Method applied to a matrix M are:

1) Set x equal to the normalized transpose of the first nonzero row of M.
2) Set y = M x.
3) Set λ = x^T y.
4) Set x = y / ||y||.
5) Repeat from step 2 until λ converges within a preset tolerance level.
6) The last value of λ is an eigenvalue of M and the last value of x is the corresponding eigenvector.
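The six steps above can be sketched in plain Python. The function name, tolerance, and the 2×2 test matrix are illustrative choices, not from the text:

```python
def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def power_method(M, tol=1e-10, max_iter=1000):
    """Power method following steps 1-6 in the text."""
    n = len(M)
    # Step 1: x = normalized transpose of the first nonzero row of M.
    row = next(r for r in M if any(e != 0 for e in r))
    nrm = sum(e * e for e in row) ** 0.5
    x = [e / nrm for e in row]
    lam_old = None
    for _ in range(max_iter):
        y = matvec(M, x)                              # Step 2: y = M x
        lam = sum(x[i] * y[i] for i in range(n))      # Step 3: lam = x^T y
        nrm = sum(e * e for e in y) ** 0.5
        x = [e / nrm for e in y]                      # Step 4: x = y / ||y||
        if lam_old is not None and abs(lam - lam_old) < tol:
            break                                     # Step 5: lam has converged
        lam_old = lam
    return lam, x                                     # Step 6: eigenpair of M

# Illustrative use: the dominant eigenvalue of [[2, 1], [1, 2]] is 3.
lam, x = power_method([[2.0, 1.0], [1.0, 2.0]])
print(round(lam, 6))    # expect 3.0
```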

The Power Method described above allows us to find one eigenvector and one eigenvalue. The remaining eigenvalues and eigenvectors of a matrix A can be found as follows. Let λ_i be the i-th eigenvalue of A and x_j be the j-th eigenvector of A, with i ≠ j.

Then

(A − λ_i I) x_j = A x_j − λ_i x_j = (λ_j − λ_i) x_j.

This means that x_j is also an eigenvector of the matrix A − λ_i I and the corresponding eigenvalue is λ_j − λ_i.

It is important to note that if i = j then

(A − λ_i I) x_i = (λ_i − λ_i) x_i = 0.

In other words, the power method applied to the matrix A − λ_i I does not favor the direction of x_i, because x_i is in the nullspace of A − λ_i I.
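Both facts can be checked on a small example. The 2×2 matrix below is hypothetical: A = [[2, 1], [1, 2]] has eigenpairs (3, (1, 1)) and (1, (1, −1)); shifting by λ_i = 3 sends x_i to zero and rescales the other eigenvalue to λ_j − λ_i = 1 − 3 = −2.

```python
# Shifting A by an eigenvalue: x_i lands in the nullspace of A - lam_i I,
# while x_j stays an eigenvector with eigenvalue (lam_j - lam_i).

def matvec(A, v):
    return [sum(A[r][c] * v[c] for c in range(len(v))) for r in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]
lam_i = 3.0
shifted = [[A[r][c] - (lam_i if r == c else 0.0) for c in range(2)]
           for r in range(2)]                       # A - lam_i I

print(matvec(shifted, [1.0, 1.0]))    # x_i: expect [0.0, 0.0]
print(matvec(shifted, [1.0, -1.0]))   # x_j: expect (1 - 3) x_j = [-2.0, 2.0]
```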

In general, define a matrix B_j as follows:

B_j = Π_{i=0}^{j} (A − λ_i I).

Then

B_j x_{j+1} = B_{j-1} (A − λ_j I) x_{j+1} = (λ_{j+1} − λ_j) B_{j-1} x_{j+1}.

Continuing with this process yields:

B_j x_{j+1} = x_{j+1} Π_{i=0}^{j} (λ_{j+1} − λ_i),

which means that x_{j+1} is an eigenvector of the matrix Π_{i=0}^{j} (A − λ_i I).

It is important to check the rank of B_j. The rank of B_j should be one less than that of B_{j-1} (or of A if j = 0). Otherwise there are repeated eigenvalues. The eigenvectors associated with the repeated eigenvalues correspond to the nullspace of A − λ_j I. Recall that the nullspace can be obtained through the Gram-Schmidt procedure.

Note that the nullspace of B_j is larger than that of B_{j-1}, because each newly found eigenvector is transformed into a vector in the nullspace of the next matrix B. The net result is that of "deflating" the eigenvalues of A; that is, as the eigenvalues of A are found, their magnitudes are "deflated" to zero eigenvalues in the matrices B. For this reason the method described above is called a "deflation" method. Also note that all the eigenvectors must be in the nullspace of B_{n-1}. The only way this can happen for a complete set of independent eigenvectors, i.e., a basis for the domain of A (nullspace + rowspace), is that B_{n-1} = 0. Recall that the characteristic polynomial of A can be expressed in factorized form as

Π_{i=0}^{n-1} (λ − λ_i) = 0

and that

B_{n-1} = Π_{i=0}^{n-1} (A − λ_i I) = 0.

Thus, replacing λ by A in the characteristic polynomial yields B_{n-1}. This last result is known as Cayley-Hamilton's Theorem:
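The factorized identity B_{n-1} = Π (A − λ_i I) = 0 can be verified on a small case. Using the same illustrative 2×2 matrix as before (eigenvalues 3 and 1, not from the text), the product (A − 3I)(A − 1I) should come out as the zero matrix:

```python
# Cayley-Hamilton check: for A = [[2, 1], [1, 2]] with eigenvalues 3 and 1,
# B = (A - 3I)(A - 1I) should be the zero matrix.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def shift(A, lam):
    # Form A - lam * I.
    return [[A[r][c] - (lam if r == c else 0.0) for c in range(len(A))]
            for r in range(len(A))]

A = [[2.0, 1.0], [1.0, 2.0]]
B = matmul(shift(A, 3.0), shift(A, 1.0))
print(B)                     # expect [[0.0, 0.0], [0.0, 0.0]]
```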

"Every matrix satisfies its own characteristic polynomial."

A deflation method used to find the eigenvectors and eigenvalues of a matrix A is described below.

Step 1: Set k = 0, B = I, and apply the power method to A to find the eigenvalue λ_0 and the corresponding normalized eigenvector x_0.

Step 2: Set B = B (A − λ_k I).

Step 3: Check the rank of B. The rank of B should be one less than that of the previous matrix B. Otherwise there are repeated eigenvalues. The eigenvectors associated with the repeated eigenvalues correspond to the nullspace of A − λ_k I. Assuming there are r repeated eigenvalues equal to λ_k, the equation B = B (A − λ_k I)^r should be used instead of the equation in Step 2.

Step 4: Apply the power method to the matrix B to find the next eigenvector, p_{k+1}, of A. The corresponding eigenvalue can be found from the equation λ_{k+1} = p_{k+1}^T A p_{k+1}.

Step 5: Increment k by one (i.e., k = number of eigenvectors already found minus one).

Step 6: Repeat Steps 2 through 5 until all eigenvectors and eigenvalues have been found.
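The whole deflation procedure can be sketched end to end in plain Python. This sketch assumes all eigenvalues are distinct (so the repeated-eigenvalue branch of Step 3 and the rank check are omitted), and the symmetric 2×2 test matrix is an illustrative choice, not from the text:

```python
def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(M))]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def power_method(M, tol=1e-10, max_iter=1000):
    # Power method as in the summary steps earlier in the text.
    n = len(M)
    row = next(r for r in M if any(e != 0 for e in r))
    nrm = sum(e * e for e in row) ** 0.5
    x = [e / nrm for e in row]
    lam_old = None
    for _ in range(max_iter):
        y = matvec(M, x)
        lam = sum(x[i] * y[i] for i in range(n))
        nrm = sum(e * e for e in y) ** 0.5
        x = [e / nrm for e in y]
        if lam_old is not None and abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

def deflate(A):
    # Steps 1-6, assuming distinct eigenvalues (no repeated-eigenvalue branch).
    n = len(A)
    lam, x = power_method(A)                       # Step 1 (B = I initially)
    eigs = [(lam, x)]
    B = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    for k in range(n - 1):
        lam_k = eigs[-1][0]
        shift = [[A[r][c] - (lam_k if r == c else 0.0) for c in range(n)]
                 for r in range(n)]
        B = matmul(B, shift)                       # Step 2: B = B (A - lam_k I)
        _, p = power_method(B)                     # Step 4: next eigenvector of A
        Ap = matvec(A, p)
        eigs.append((sum(p[i] * Ap[i] for i in range(n)), p))  # lam = p^T A p
    return eigs

eigs = deflate([[2.0, 1.0], [1.0, 2.0]])           # illustrative symmetric 2x2
print(sorted(round(l, 6) for l, _ in eigs))        # expect [1.0, 3.0]
```

The Rayleigh-quotient update λ_{k+1} = p^T A p in Step 4 recovers the eigenvalue of A itself, since the power method on B only yields an eigenvalue of the deflated matrix.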
