Yu-Kai Hong
Department of Applied Mathematics, National University of Kaohsiung,
Kaohsiung 811, Taiwan. E-mail: a0934147@gmail.com
23 May 2007
Abstract
Most algorithms for finding the dominant eigenvalue or eigenvector use the power
method, which is the simplest numerical method for this problem. The inverse power
method is used to find the smallest eigenvalue and its eigenvector; it is derived from
the power method by applying the iteration to the inverse of the matrix. Given any
constant, we can find the eigenvalue of the matrix nearest to it by using the shifted
power method, which likewise derives from the power method by shifting the
eigenvalues of the matrix.
1 Power Method
Theorem 1.1 (Power Method). Assume the matrix A ∈ Mn×n(R) has n distinct
eigenvalues λ1, λ2, λ3, ..., λn and that they are ordered in decreasing magnitude;
that is,
\[
|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_n|.
\]
If X0 is chosen appropriately, then the sequences {Xk} and {Ck} generated
recursively by
\[
Y_k = AX_k,
\]
\[
X_{k+1} = \frac{Y_k}{C_k}, \qquad C_k = \|Y_k\|,
\]
will converge to the dominant eigenvector V1 and the dominant eigenvalue λ1,
respectively.
Write X0 = b1V1 + b2V2 + ... + bnVn in terms of the normalized eigenvectors,
and assume that X0 was chosen in such a manner that b1 is not zero (since we
want to evaluate the dominant eigenvalue and the dominant eigenvector).
Since AVi = λiVi, we have
\[
\begin{aligned}
Y_{k-1} &= AX_{k-1}\\
&= A\left[\frac{\lambda_1^{k-1}}{C_1 C_2 \cdots C_{k-1}}\left(b_1 V_1 + b_2\left(\frac{\lambda_2}{\lambda_1}\right)^{k-1} V_2 + b_3\left(\frac{\lambda_3}{\lambda_1}\right)^{k-1} V_3 + \cdots + b_n\left(\frac{\lambda_n}{\lambda_1}\right)^{k-1} V_n\right)\right]\\
&= \frac{\lambda_1^{k-1}}{C_1 C_2 \cdots C_{k-1}}\left(b_1 AV_1 + b_2\left(\frac{\lambda_2}{\lambda_1}\right)^{k-1} AV_2 + b_3\left(\frac{\lambda_3}{\lambda_1}\right)^{k-1} AV_3 + \cdots + b_n\left(\frac{\lambda_n}{\lambda_1}\right)^{k-1} AV_n\right)\\
&= \frac{\lambda_1^{k-1}}{C_1 C_2 \cdots C_{k-1}}\left(b_1 \lambda_1 V_1 + b_2 \lambda_2\left(\frac{\lambda_2}{\lambda_1}\right)^{k-1} V_2 + b_3 \lambda_3\left(\frac{\lambda_3}{\lambda_1}\right)^{k-1} V_3 + \cdots + b_n \lambda_n\left(\frac{\lambda_n}{\lambda_1}\right)^{k-1} V_n\right)\\
&= \frac{\lambda_1^{k}}{C_1 C_2 \cdots C_{k-1}}\left(b_1 V_1 + b_2\left(\frac{\lambda_2}{\lambda_1}\right)^{k} V_2 + b_3\left(\frac{\lambda_3}{\lambda_1}\right)^{k} V_3 + \cdots + b_n\left(\frac{\lambda_n}{\lambda_1}\right)^{k} V_n\right)
\end{aligned}
\]
and
\[
X_k = \frac{\lambda_1^{k}}{C_1 C_2 \cdots C_{k-1} C_k}\left(b_1 V_1 + b_2\left(\frac{\lambda_2}{\lambda_1}\right)^{k} V_2 + b_3\left(\frac{\lambda_3}{\lambda_1}\right)^{k} V_3 + \cdots + b_n\left(\frac{\lambda_n}{\lambda_1}\right)^{k} V_n\right).
\]
Since both Xk and V1 are normalized unit vectors,
\[
\|X_k\| = \|V_1\| = 1,
\]
so that we have
¯ ¯
¯ b1 λk1 ¯ b1 λk1
¯ ¯
lim ¯¯ ¯ = lim =1
k→∞ C1 C2 ..Ck−1 Ck ¯ k→∞ C1 C2 ..Ck−1 Ck
V1 = lim Xk
k→∞
Replacing k with k − 1 in
\[
\lim_{k\to\infty}\frac{b_1\lambda_1^{k}}{C_1 C_2 \cdots C_{k-1} C_k} = 1
\]
gives \(\lim_{k\to\infty} b_1\lambda_1^{k-1}/(C_1 C_2 \cdots C_{k-1}) = 1\); dividing
the first limit by this one yields \(\lim_{k\to\infty} \lambda_1/C_k = 1\), that is,
\[
\lambda_1 = \lim_{k\to\infty} C_k.
\]
In the equation
\[
\lim_{k\to\infty} b_i\left(\frac{\lambda_i}{\lambda_1}\right)^{k} V_i = 0 \quad\text{for each } i = 2, \ldots, n,
\]
we see that the coefficient of Vi in Xk goes to zero in proportion to (λi/λ1)^k, and
that the speed of convergence of {Xk} to V1 is governed by the term (λ2/λ1)^k.
Consequently, the rate of convergence is linear. Similarly, the convergence of
the sequence {Ck} to λ1 is also linear.
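The recursion above is simple to implement. The following is a minimal sketch in plain Python (the matrix, starting vector, and helper names are illustrative, not from the text). It follows the recursion Yk = AXk, Ck = ‖Yk‖, Xk+1 = Yk/Ck; note that Ck converges to |λ1|, which equals λ1 here since the example's dominant eigenvalue is positive.

```python
import math

def mat_vec(A, x):
    # Compute y = A x for a dense matrix stored as a list of rows.
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def norm(x):
    # Euclidean norm ||x||.
    return math.sqrt(sum(v * v for v in x))

def power_method(A, x0, iters=100):
    # The text's recursion: Y_k = A X_k, C_k = ||Y_k||, X_{k+1} = Y_k / C_k.
    x = x0
    for _ in range(iters):
        y = mat_vec(A, x)
        c = norm(y)
        x = [v / c for v in y]
    return c, x

# Illustrative 2x2 matrix with eigenvalues 5 and 2; x0 = (1, 0) has b1 != 0.
A = [[4.0, 1.0], [2.0, 3.0]]
c, v = power_method(A, [1.0, 0.0])
# c approaches the dominant eigenvalue 5; v approaches V1 = (1, 1)/sqrt(2)
```

Each iteration shrinks the unwanted components by the factor (λ2/λ1) = 2/5, which is the linear convergence rate discussed above.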
λ(A−1 V ) = A−1 AV = IV = V
we have that
A−1 V = λ−1 V
So that λ−1 is the eigenvalue of A−1 corresponding to the eigenvector V
If X0 is chosen appropriately, then the sequences {Xk} and {Ck} generated
recursively by
\[
Y_k = A^{-1}X_k,
\]
where
\[
X_{k+1} = \frac{Y_k}{C_k}
\]
and
\[
C_k = \|Y_k\|,
\]
will converge to Vn, the eigenvector of the smallest eigenvalue λn, and to the
reciprocal λn^{-1}, respectively. That is,
\[
\lim_{k\to\infty} X_k = V_n \quad\text{and}\quad \lim_{k\to\infty} C_k = \lambda_n^{-1}.
\]
Clearly, if we want to evaluate the smallest eigenvalue or eigenvector of a matrix A
using the power method, we must transform A into a matrix B such that the smallest
eigenvalue and eigenvector of A become the dominant eigenvalue and eigenvector
of B, and then apply the power method. So we take B = A^{-1} (supposing that A
is invertible).
Since
\[
|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_{n-1}| > |\lambda_n| > 0,
\]
we have
\[
\left|\lambda_n^{-1}\right| > \left|\lambda_{n-1}^{-1}\right| \ge \left|\lambda_{n-2}^{-1}\right| \ge \cdots \ge \left|\lambda_2^{-1}\right| > \left|\lambda_1^{-1}\right| > 0.
\]
The speed of convergence of the inverse power method depends on the ratio
\[
\left(\frac{\lambda_{n-1}^{-1}}{\lambda_{n}^{-1}}\right)^{k} = \left(\frac{\lambda_n}{\lambda_{n-1}}\right)^{k}.
\]
In the step
\[
Y_k = A^{-1}X_k
\]
we do not have to evaluate the matrix A^{-1}; instead, we solve the linear system
\[
AY_k = X_k.
\]
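A sketch of this in plain Python (the matrix and helper names are illustrative): each iteration solves AYk = Xk by Gaussian elimination rather than forming A^{-1}, and 1/Ck then approximates the smallest eigenvalue λn.

```python
import math

def norm(x):
    return math.sqrt(sum(v * v for v in x))

def solve(A, b):
    # Solve A y = b by Gaussian elimination with partial pivoting
    # (adequate for the small dense systems used here).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][j] * y[j] for j in range(r + 1, n))
        y[r] = (M[r][n] - s) / M[r][r]
    return y

def inverse_power_method(A, x0, iters=100):
    # Y_k = A^{-1} X_k is obtained by solving A Y_k = X_k; C_k = ||Y_k||.
    x = x0
    for _ in range(iters):
        y = solve(A, x)
        c = norm(y)
        x = [v / c for v in y]
    return c, x

# Illustrative matrix with eigenvalues 5 and 2: C_k -> 1/2, so 1/C_k -> 2.
A = [[4.0, 1.0], [2.0, 3.0]]
c, v = inverse_power_method(A, [1.0, 0.0])
smallest = 1.0 / c
```

Solving the system each step costs the same as one triangular factorization per iteration; in practice one would factor A once and reuse the factors, but the sketch keeps the correspondence with the text's recursion explicit.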
3 Shifted Power Method
Let V be an eigenvector of A corresponding to the eigenvalue λ, and let α be any
constant. Since
\[
(A - \alpha I)V = AV - \alpha V = \lambda V - \alpha V = (\lambda - \alpha)V,
\]
λ − α is an eigenvalue of A − αI corresponding to the eigenvector V.
From the above theorem we know that the matrix A ∈ Mn×n(R) has n distinct
eigenvalues λ1, λ2, λ3, ..., λn, ordered in decreasing magnitude; that is,
\[
|\lambda_1| > |\lambda_2| \ge |\lambda_3| \ge \cdots \ge |\lambda_{n-1}| \ge |\lambda_n| > 0.
\]
Let \(\bar{A} = A - \lambda_1 I\); the eigenvalues of \(\bar{A}\) are
\(\lambda_i - \lambda_1\) for i = 1, ..., n, with \(\lambda_1 - \lambda_1 = 0\). So
we can apply the power method to \(\bar{A}\) to find its dominant eigenvalue
\(\bar{\lambda} = \lambda_2 - \lambda_1\) and the corresponding eigenvector
\(\bar{V}\); then \(\lambda_2 = \lambda_1 + \bar{\lambda}\) and \(\bar{V}\) are the
second dominant eigenvalue and eigenvector of the matrix A. This method is known
as the shifted power method.
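A sketch of one such step in plain Python (the matrix, with eigenvalues 5 and 2, and all names are illustrative). One caveat the text glosses over: Ck = ‖Yk‖ only recovers the magnitude of the shifted eigenvalue, and here λ2 − λ1 is negative, so the sketch estimates the signed value with the Rayleigh quotient Xkᵀ(A − λ1I)Xk, a standard variant.

```python
import math

def mat_vec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def norm(x):
    return math.sqrt(sum(v * v for v in x))

def second_eigenvalue(A, lam1, x0, iters=200):
    # Power iteration on A_bar = A - lam1*I. Since C_k = ||Y_k|| loses the
    # sign of lam_bar, the signed value is recovered from the Rayleigh
    # quotient x^T A_bar x (x is a unit vector after normalization).
    n = len(A)
    A_bar = [[A[i][j] - (lam1 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    x = x0
    for _ in range(iters):
        y = mat_vec(A_bar, x)
        c = norm(y)
        x = [v / c for v in y]
    lam_bar = sum(xi * yi for xi, yi in zip(x, mat_vec(A_bar, x)))
    return lam1 + lam_bar   # lam2 = lam1 + lam_bar

# Illustrative matrix with eigenvalues 5 (dominant, found first) and 2.
A = [[4.0, 1.0], [2.0, 3.0]]
lam2 = second_eigenvalue(A, 5.0, [1.0, 0.0])
```

For this matrix the shifted matrix has eigenvalues 0 and −3, so the iteration isolates the eigenvector of −3 and returns λ2 = 5 + (−3) = 2.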
Using the same method we can find the eigenvalues λ1, λ2, λ3, ..., λn and the
corresponding eigenvectors V1, V2, V3, ..., Vn of the matrix A step by step, but
each further eigenvalue or eigenvector requires the progressively stronger condition
\[
|\lambda_1| > |\lambda_2| > |\lambda_3| > \cdots > |\lambda_{n-1}| > |\lambda_n|,
\]
and the errors accumulate, so the power method is best at finding the dominant
eigenvalue and eigenvector.
Theorem 3.2. Suppose A ∈ Mn×n(R) and let λ be an eigenvalue of A corresponding
to the eigenvector V. Let α be any constant value; then (λ − α)^{-1} is an
eigenvalue of the matrix (A − αI)^{-1} corresponding to the eigenvector V.
Consider the eigenvalue λj. A constant α can be chosen so that μ1 = 1/(λj − α)
is the dominant eigenvalue of the matrix (A − αI)^{-1}. Furthermore, if X0 is
chosen appropriately, then the sequences {Xk} and {Ck} generated recursively by
\[
Y_k = (A - \alpha I)^{-1}X_k
\]
and
\[
X_{k+1} = \frac{Y_k}{C_k},
\]
where
\[
C_k = \|Y_k\|,
\]
will converge to the corresponding eigenvector and to the dominant eigenvalue
μ1 = 1/(λj − α) of the matrix (A − αI)^{-1}, respectively. Finally, the
corresponding eigenvalue of the matrix A is given by the calculation
\[
\lambda_j = \frac{1}{\mu_1} + \alpha.
\]
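Under these assumptions, a minimal sketch in plain Python (the matrix, the shift α = 1.8, and all helper names are illustrative): each step solves (A − αI)Yk = Xk rather than forming the inverse, and λj is recovered as 1/Ck + α.

```python
import math

def norm(x):
    return math.sqrt(sum(v * v for v in x))

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    y = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][j] * y[j] for j in range(r + 1, n))
        y[r] = (M[r][n] - s) / M[r][r]
    return y

def shifted_inverse_power(A, alpha, x0, iters=100):
    # Y_k = (A - alpha*I)^{-1} X_k via the solve (A - alpha*I) Y_k = X_k.
    n = len(A)
    A_shift = [[A[i][j] - (alpha if i == j else 0.0) for j in range(n)]
               for i in range(n)]
    x = x0
    for _ in range(iters):
        y = solve(A_shift, x)
        c = norm(y)
        x = [v / c for v in y]
    return c, x

# Illustrative matrix with eigenvalues 5 and 2; alpha = 1.8 is nearest to 2,
# so C_k -> mu_1 = 1/(2 - 1.8) = 5 and lam_j -> 2.
A = [[4.0, 1.0], [2.0, 3.0]]
alpha = 1.8
c, v = shifted_inverse_power(A, alpha, [1.0, 0.0])
lam_j = 1.0 / c + alpha
```

The closer α is to λj, the larger μ1 is relative to the other eigenvalues of (A − αI)^{-1}, and the faster the iteration converges.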
Proof. Without loss of generality, we may assume that λ1 > λ2 > λ3 > ... > λn.
Select a constant α (α ≠ λj) that is closer to λj than to any of the other
eigenvalues of the matrix A; that is,
\[
|\lambda_j - \alpha| < |\lambda_i - \alpha| \quad\text{for all } i \neq j,
\]
so that