
Section 4.5 Eigenvalues of Symmetric Tridiagonal Matrices

Key Terms

Symmetric matrix

Tridiagonal matrix

Orthogonal matrix

QR-factorization

Rotation matrices (plane rotations)

Eigenvalues
We will now complete our study of the matrix eigenvalue problem for symmetric matrices
by developing an algorithm to compute all of the eigenvalues of a symmetric tridiagonal
matrix. The technique we develop in this section is called the QR algorithm.
Unlike the technique developed for reduction to tridiagonal form, the QR algorithm is
iterative in nature. The sequence of matrices generated by the algorithm converges to a
diagonal matrix, so the eigenvalues of the "final" matrix in the sequence are just the
elements along the main diagonal. Since we continue to use similarity transformations,
these diagonal elements are also the eigenvalues of the original matrix. But we will not
use Householder matrices.

Overview of eigenvalue approximation using the QR-method:

Let A be an n × n symmetric matrix which has been reduced to symmetric tridiagonal
form, denoted AH, using Householder matrices. This preserves the eigenvalues and
provides a matrix which is closer to diagonal form, so fewer arithmetic operations
will be required at later steps. Next we apply a piece of theory to AH.
Theorem: Any m × n complex matrix A can be factored into a product of an m × n
matrix Q with orthonormal columns and an n × n upper triangular matrix R.
(This is called a QR-factorization; we discuss it later.)

In the case that matrix A is real and square, the matrix Q will be an orthogonal
matrix.
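
As a quick numerical check of the theorem (a hypothetical 3 × 3 matrix; qr is MATLAB's built-in factorization routine, discussed further below):

A = [4 1 0; 1 3 1; 0 1 2];   % a hypothetical symmetric tridiagonal matrix
[Q, R] = qr(A);              % factor A = Q*R
norm(Q'*Q - eye(3))          % ~ 0, so Q is orthogonal
norm(R - triu(R))            % = 0, so R is upper triangular
norm(Q*R - A)                % ~ 0, so the factorization reproduces A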
We use the QR-factorization to generate a sequence of similarity transformations whose
limit is a diagonal matrix. The eigenvalues of A and AH are preserved and appear on the
diagonal of that diagonal matrix.

The sequence is constructed as follows:

Let A0 = AH; find the QR-factorization of A0:  A0 = Q0R0.

(Matrix Q0 is orthogonal, so its inverse is Q0^T.)

Define A1 = R0Q0 = Q0^T A0 Q0, since R0 = Q0^T A0. Find the QR-factorization of A1:  A1 = Q1R1.

Define A2 = R1Q1 = Q1^T A1 Q1, since R1 = Q1^T A1. Thus A2 = Q1^T Q0^T A0 Q0 Q1.

Apply the steps again to A2 and successively repeat the process.
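
The first similarity can be verified numerically; here is a minimal sketch in MATLAB (the matrix A0 is a hypothetical stand-in for AH):

A0 = [4 1 0; 1 3 1; 0 1 2];     % hypothetical symmetric tridiagonal A0 = AH
[Q0, R0] = qr(A0);              % A0 = Q0*R0
A1 = R0*Q0;                     % the next iterate
norm(A1 - Q0'*A0*Q0)            % ~ 0, so A1 = Q0^T A0 Q0 (a similarity transform)
sort(eig(A1)) - sort(eig(A0))   % ~ 0, so the eigenvalues are preserved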

We call this iteration process the BASIC QR-method. It generates a sequence

A0, A1, A2, …, Ak, … → D, a diagonal matrix.

The diagonal entries of Ak approximate the eigenvalues of matrix A.

This process preserves the tridiagonal form, and the numerical values of the first
subdiagonal and first superdiagonal decrease as k gets large. We stop the iterative
process when the entries of these off-diagonals are "sufficiently small"; at that point the
sequence appears to have converged to a diagonal matrix.
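
A sketch of how such an iteration with a stopping test might be coded (an illustration only, not the actual basicqr.m; the tolerance tol and iteration cap maxiter are assumed inputs):

function A = basicqr_sketch(A, tol, maxiter)
% Basic QR iteration on a symmetric tridiagonal matrix A.
% Stop once the subdiagonal is "sufficiently small" (by symmetry,
% the superdiagonal is then small as well).
for k = 1:maxiter
    [Q, R] = qr(A);                   % factor the current iterate
    A = R*Q;                          % similarity transform: A <- Q'*A*Q
    if norm(diag(A, -1), inf) < tol   % off-diagonal entries small enough?
        break
    end
end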
Example: Apply the basicqr.m iteration to the symmetric tridiagonal matrix A shown in
the original notes (a 3 × 3 display, omitted here, together with the computed iterates Ak).
Observe that the off-diagonal elements converge toward zero, while the diagonal
elements converge toward the eigenvalues of A, which, to three decimal places, are
5.951, 3.084 and -1.035. Furthermore, the eigenvalues appear along the diagonal of Ak
in decreasing order of magnitude.
This example demonstrates the general performance of the QR algorithm: the off-
diagonal entries converge toward zero, and the entry in row j, column j-1 shrinks at a
rate of roughly O(|λj /λj-1 |) per iteration. (No proof of the rate is given here.)
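
The rate is easy to observe numerically, however. A hypothetical 2 × 2 example (eigenvalues (5 ± √5)/2 ≈ 3.618 and 1.382, so each step should shrink the subdiagonal entry by a factor ≈ 0.382):

A = [3 1; 1 2];                          % hypothetical symmetric matrix
for k = 1:8
    [Q, R] = qr(A);  A = R*Q;            % one basic QR step
    fprintf('%d  %10.3e\n', k, A(2,1))   % subdiagonal shrinks by ~|lambda2/lambda1| per step
end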
In the example above we used the m-file basicqr.m.

>> help basicqr


basicqr Apply the basicqr method to matrix A iter times.

Use in the form ==> basicqr(A,iter) <==

There are choices for various display options.

MATLAB has a built-in routine more robust than basicqr.m, namely qr.m. Its various
options are described in the help file.

A simple MATLAB code using qr.m that carries out the basic QR-method is given below.

for k=1:10, [Q,R]=qr(A); A=R*Q, pause, end

Change the number of iterations as needed.

We will not prove the QR theorem; a discussion is given in the text. The orthogonal
matrices constructed in the QR-algorithm are often called rotation matrices.
Notes:

The eig command in MATLAB uses a more robust implementation than the basic
QR-(eigen)method discussed here. This command is quite dependable.

If a matrix has eigenvalues very close in magnitude, then the convergence of the eigen
method based on QR-factorizations can be quite slow.

If a matrix is defective or nearly defective, the convergence of the QR-(eigen)method
can be slow.

The basic QR-(eigen)method can fail.

When asked to compute QR-factorizations involving symmetric matrices, it is
recommended that the matrix first be transformed to symmetric tridiagonal form.

QR-factorizations are often implemented using another technique called (Givens)
plane rotations; a sketch of one such rotation follows these notes.
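
For instance, a single Givens rotation that zeroes the (2,1) entry of a matrix A might be built as follows (a minimal sketch; for a tridiagonal matrix, n-1 such rotations produce the full QR-factorization):

a = A(1,1);  b = A(2,1);         % the entry to keep and the entry to zero
r = hypot(a, b);                 % r = sqrt(a^2 + b^2), computed safely
c = a/r;  s = b/r;               % cosine and sine of the rotation angle
G = eye(size(A));                % identity, with a rotation in the (1,2) plane
G([1 2], [1 2]) = [c s; -s c];
A = G*A;                         % now A(2,1) = -s*a + c*b = 0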

If A is not symmetric, how can you approximate its eigenvalues?


Eigenvalues of non-symmetric matrices

In contrast to the symmetric problem, the eigenvectors of a non-symmetric matrix do not
form an orthogonal set. Moreover, the eigenvectors may not even form a linearly
independent system (this is possible, although not automatic, in the case of multiple
eigenvalues: an eigenvalue of multiplicity k may have an eigenspace of dimension less
than k).

The second distinction from the symmetric problem is that a non-symmetric matrix cannot
easily be reduced to a tridiagonal matrix or to another equally compact form. Because of
the asymmetry, zeroing all the elements below the first subdiagonal (using an orthogonal
transformation) does not zero the elements in the upper triangle, so we obtain a matrix in
upper Hessenberg form. This slows an algorithm down, because the entire upper triangle
of the matrix must be updated after each iteration of the QR algorithm.

Finally, the third distinction is that the eigenvalues of a non-symmetric matrix can be
complex (as can their corresponding eigenvectors).
Algorithm description
Solving the non-symmetric eigenvalue problem proceeds in several steps. In
the first step, the matrix is reduced to upper Hessenberg form using an orthogonal
transformation. In the second step, which takes the most time, the Hessenberg matrix is
reduced to upper Schur form using orthogonal transformations. If we only have to
find the eigenvalues, this step is the last, because the eigenvalues are located in the
diagonal blocks of the quasi-triangular matrix of the canonical Schur form.
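
In MATLAB the Schur form can be inspected directly with the built-in schur command (the 2 × 2 matrix below is a hypothetical example with complex eigenvalues ±i):

A = [0 -1; 1 0];               % hypothetical non-symmetric matrix
[U, T] = schur(A)              % real Schur form: T quasi-triangular (here one 2x2 block)
[U, T] = schur(A, 'complex');  % complex Schur form: T is upper triangular
diag(T)                        % the eigenvalues of A, namely +/- i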

If we have to find the eigenvectors as well, it is necessary to perform a backward
substitution involving the Schur vectors and the quasi-triangular factor (in effect,
solving a system of linear equations; the backward substitution itself takes little time,
but the need to save all of the transformations makes the algorithm roughly twice as slow).
The Schur decomposition, which takes the most time, is performed using the QR
algorithm with "multiple shifts"; the algorithm is taken from the LAPACK library.
It is a block-matrix analog of the ordinary QR algorithm with double shift. Like all
other block-matrix algorithms, it requires tuning to achieve optimal performance.
(Thankfully all of this is incorporated in MATLAB's eig command.)
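
In practice one simply calls eig (illustrative usage, with the same hypothetical matrix as above):

A = [0 -1; 1 0];    % hypothetical non-symmetric matrix
[V, D] = eig(A)     % columns of V are eigenvectors; diag(D) holds the eigenvalues +/- i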

More details at
http://people.math.gatech.edu/~klounici6/2605/Lectures%20notes%20Carlen/chap4.pdf
https://www.math.washington.edu/~morrow/498_13/eigenvalues.pdf
http://web.stanford.edu/class/cme335/lecture4sup.pdf
