University of Maryland
College Park, MD
© 1996, G. W. Stewart
All Rights Reserved
Contents

Preface

Approximation
  Lecture 1
    General observations
    Decline and fall
    The linear sine
    Approximation in normed linear spaces
    Significant differences
  Lecture 2
    The space C[0,1]
    Existence of best approximations
    Uniqueness of best approximations
    Convergence in C[0,1]
    The Weierstrass approximation theorem
    Bernstein polynomials
    Comments
  Lecture 3
    Chebyshev approximation
    Uniqueness
    Convergence of Chebyshev approximations
    Rates of convergence
  Lecture 4
    A theorem of de la Vallée-Poussin
    A general approximation strategy
    Chebyshev polynomials
    Economization of power series
    Farewell to C∞[a,b]
  Lecture 5
    Discrete, continuous, and weighted least squares
    Inner product spaces
    Quasi-matrices
    Positive definite matrices
    The Cauchy and triangle inequalities
    Orthogonality
    The QR factorization
  Lecture 6
    Existence and uniqueness of the QR factorization
    The Gram–Schmidt algorithm
    Projections
    Best approximation
  Lecture 7
    Expansions in orthogonal functions
    Orthogonal polynomials
    Discrete least squares and the QR decomposition
  Lecture 8
    Householder transformations
    Orthogonal triangularization
    Implementation
    Comments on the algorithm
    Solving least squares problems
  Lecture 9
    Operation counts
    The Frobenius and spectral norms
    Stability of orthogonal triangularization
    Error analysis of the normal equations
    Perturbation of inverses and linear systems
    Perturbation of pseudoinverses and least squares solutions
    Summary

Linear and Cubic Splines
  Lecture 10
    Piecewise linear interpolation
    The error in L(f)
    Chebyshev approximation by linear splines
    Hat functions
    Integration
    Least squares approximation
    Implementation issues
  Lecture 11
    Cubic splines
    Derivation of the cubic spline
    End conditions
    Convergence
    Locality

Eigensystems
  Lecture 12
    A system of differential equations
    Complex vectors and matrices
    Eigenvalues and eigenvectors
    Existence and uniqueness
    Left eigenvectors
    Real matrices
Approximation
Lecture 1
Approximation
General Observations
Decline and Fall
The Linear Sine
Approximation in Normed Linear Spaces
Significant Differences
General observations
1. Approximation problems occur whenever one needs to express a group of
numbers in terms of other groups of numbers. They occur in all quantitative
disciplines | engineering and the physical sciences, to name two| and they
are a mainstay of statistics. There are many criteria for judging the quality of
an approximation and they profoundly aect the algorithms used to solve the
problem.
In spite of this diversity, most approximation problems can be stated in a common language. The best way to see this is to look at a number of different problems and extract the common features. We will consider the following here.

    1. Given the radiation history of a mixture of isotopes, determine the proportions of the isotopes.

    2. Determine a linear approximation to the sine function.
Decline and fall
2. Suppose we have a mixture of two radioactive isotopes. If enough of the first isotope is present, its radiation will vary with time according to the law ae^{-λt}. Here a is a constant specifying the initial level of radiation due to the first isotope, and λ is a decay constant that tells how fast the radiation decays. If the second isotope follows the law be^{-μt}, the mixture of the two isotopes radiates according to the law ae^{-λt} + be^{-μt}.
The above law implies that the radiation will become arbitrarily small as t increases. In practice, the radiation will decline until it reaches the level of the background radiation (the ambient radiation that is all around us), after which no decrease will be observed. Thus a practical model of the observed radiation y from the mixture is
    y = ae^{-λt} + be^{-μt} + c,
where c represents the background radiation.
[Figure: the observed radiation measurements plotted against time.]
To make the procedure rigorous, we must associate a precise meaning with the symbol "≅". We will use the principle of least squares.
Specifically, let
    r_i = y_i − ae^{-λt_i} − be^{-μt_i} − c,   i = 1, 2, ..., n,
be the residuals of the approximations to the y_i, and let
    ρ²(a, b, c) = r_1² + r_2² + ··· + r_n².
As our notation indicates, ρ² varies with a, b, and c. Ideally we would like to choose a, b, and c so that ρ(a, b, c) is zero, in which case the approximation will be exact. Since that is impossible in general, the principle of least squares requires that we choose a, b, and c so that the residual sum of squares ρ²(a, b, c) is minimized.
6. The principle of least squares is to some extent arbitrary, and later we will look at other ways of fitting observations. However, it has the effect of balancing the sizes of the residuals. If a single residual is very large, it will generally pay to reduce it. The other residuals may increase, but there will be a net decrease in the residual sum of squares.
7. Another reason for adopting the principle of least squares is computational simplicity. If we differentiate ρ²(a, b, c) with respect to a, b, and c and set the results to zero, the result is a system of linear equations, commonly called the normal equations, that can be solved by standard techniques. But that is another story (see §6.13).
8. With an eye toward generality, let's recast the problem in new notation. Define the vectors y, u, and v by
    y = (y_1, y_2, ..., y_n)^T,   u = (e^{-λt_1}, e^{-λt_2}, ..., e^{-λt_n})^T,   v = (e^{-μt_1}, e^{-μt_2}, ..., e^{-μt_n})^T,
and let e be the vector consisting of all ones. Then the vector of residuals is given by
    r = y − au − bv − ce.
Moreover, if for any vector x we define the 2-norm of x by
    ‖x‖_2 = √(Σ_i x_i²),
then the principle of least squares can be cast in the form:
    Choose a, b, and c so that ‖y − au − bv − ce‖_2 is minimized.
In this notation we see that the natural setting for this kind of least squares approximation is the space R^n of real n-dimensional vectors equipped with the 2-norm.
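To make the vector formulation concrete, here is a minimal Python sketch (mine, not the text's): it builds the columns u, v, and e for synthetic data and solves the least squares problem with NumPy. The decay constants and coefficients are hypothetical values chosen only for illustration.

import numpy as np

# Assumed (hypothetical) decay constants and coefficients for a synthetic data set.
lam, mu = 1.0, 0.3
a_true, b_true, c_true = 2.0, 1.0, 0.5

t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
y = a_true*np.exp(-lam*t) + b_true*np.exp(-mu*t) + c_true \
    + 0.01*rng.standard_normal(t.size)

# Columns u, v, e of the least squares problem: minimize ||y - a*u - b*v - c*e||_2.
X = np.column_stack([np.exp(-lam*t), np.exp(-mu*t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b, c = coef
print(a, b, c)                          # estimates of a, b, c
print(np.linalg.norm(y - X @ coef))     # residual norm ||r||_2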
[Figure 1.2: the sine function and its linear approximations on the interval [0, π/2].]
by
    ℓ_2(t) = 2t/π.
The maximum error in this approximation is about 0.211 and is attained at the point t = 0.881, a considerable improvement over the Taylor series expansion.
12. However, we can do better. Imagine that we move the dotted line in Figure 1.2 upward while preserving its slope. The maximum error at t = 0.881 will decrease. The error at the endpoints will increase, but until the line moves half the original maximum error, the error at the endpoints will be strictly less than the error at t = 0.881. Thus, if we set
    ℓ_3(t) = 0.105 + 2t/π
(note that 0.105 ≈ 0.211/2), we obtain an approximation whose maximum error occurs at three points, as shown in the following table.

    t:               0.000    0.881    π/2            (1.1)
    sin t − ℓ_3(t): −0.105    0.105   −0.105
This approximation is illustrated by the dashed line in Figure 1.2.
13. This is as good as it gets. We will see later that any attempt to change the coefficients of ℓ_3 must result in an error larger than 0.105 somewhere on the interval [0, π/2]. Thus we have found the solution of the following problem.
    Choose a and b so that max_{t∈[0,π/2]} |sin t − a − bt| is minimized.¹
14. Once again it will be helpful to introduce some new notation. Let ‖·‖_∞ be defined for all continuous functions on [0, π/2] by
    ‖f‖_∞ = max_{t∈[0,π/2]} |f(t)|.
Then our approximation problem can be stated in the following form.
    Choose a and b so that ‖sin t − a − bt‖_∞ is minimized.
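A quick numerical check of the alternation table above (a Python sketch, not part of the text): evaluate sin t − ℓ_3(t) on a fine grid over [0, π/2] and look at the extreme errors.

import numpy as np

t = np.linspace(0.0, np.pi/2, 100001)
l3 = 0.105 + 2*t/np.pi                 # the approximation l_3 derived above
err = np.sin(t) - l3
print(np.max(np.abs(err)))             # about 0.105
print(t[np.argmax(err)])               # the interior extreme point, near t = 0.881
print(err[0], err[-1])                 # errors at the endpoints, about -0.105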
Lecture 2
Approximation

The Space C[0,1]
Existence of Best Approximations
Uniqueness of Best Approximations
Convergence in C[0,1]
The Weierstrass Approximation Theorem
Bernstein Polynomials
Comments
The space C[0,1]
1. Since normed linear spaces differ in their properties, we will treat each on its own terms, always taking care to point out results that hold generally. We will begin with C[0,1] equipped with the ∞-norm. (Unless another norm is specified, it is understood that C[0,1] comes bundled with the ∞-norm.)
2. In investigating C[0,1], the first order of business is to verify that the ∞-norm is well defined. Here the fact that the interval [0,1] is closed is critical. For example, the function x^{-1} is continuous on the half-open interval (0,1]. But it is unbounded there, and hence has no ∞-norm.
Fortunately, a theorem from elementary analysis says that a function that is continuous on a finite closed interval attains a maximum on that set. In other words, if f ∈ C[0,1], not only is ‖f‖_∞ well defined, but there is a point x such that
    ‖f‖_∞ = |f(x)|.
3. The fact that [0,1] is closed and bounded has another important consequence for the elements of C[0,1]: they are uniformly continuous. The formal definition of uniform continuity is a typical piece of epsilonics. A function f is uniformly continuous on [0,1] if for every ε > 0 there is a δ > 0 such that
    x, y ∈ [0,1] and |x − y| < δ  ⟹  |f(x) − f(y)| < ε.
The common sense of this definition is that we can keep the function values near each other by keeping their arguments near each other, and the process does not depend on where we are in [0,1].
Like many definitions of this kind, uniform continuity is best understood by considering counterexamples. You should convince yourself that x^{-1} is not uniformly continuous on (0,1].
which to evaluate f remains unchanged. But because our coin is biased, the probability that s = t_{ik} changes:
    the probability that s = t_{ik} is a_{ik}(t) = (k choose i) t^i (1−t)^{k−i}.
The construction of the sequence of polynomials depends on the properties of the a_{ik}(t). Because we must evaluate f at some t_{ik}, we must have
    Σ_{i=0}^{k} a_{ik}(t) = 1.                                    (2.2)
The mean value of s is
    Σ_{i=0}^{k} a_{ik}(t) t_{ik} = t,                             (2.3)
and its variance is
    Σ_{i=0}^{k} a_{ik}(t) (t_{ik} − t)² = t(1−t)/k.               (2.4)
Since the variance goes to zero as k increases, the distribution of s bunches up around t, and we can expect to evaluate f near t. In particular, we would expect the average value of f(s) to be near f(t) and to get nearer as k increases. Now this average value is given by
    B_k(t) = Σ_{i=0}^{k} a_{ik}(t) f(t_{ik}) = Σ_{i=0}^{k} f(t_{ik}) (k choose i) t^i (1−t)^{k−i},   (2.5)
which is a polynomial of degree not greater than k in t. The polynomials B_k are called the Bernstein polynomials associated with f.
17. It can be shown that the polynomials Bk converge uniformly to f . The
proof is essentially an analytic recasting of the informal reasoning we used to
derive the polynomials, and there is no point in repeating it here. Instead we
will make some comments on the result itself.
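A small Python sketch of the construction (mine, not the book's; it assumes the evaluation points are t_{ik} = i/k, which is the standard choice): it forms B_k(t) directly from (2.5) and shows the slow uniform convergence for a sample function.

import numpy as np
from math import comb

def bernstein(f, k, t):
    """Evaluate the Bernstein polynomial B_k(t) of f from (2.5), taking t_ik = i/k."""
    t = np.asarray(t, dtype=float)
    B = np.zeros_like(t)
    for i in range(k + 1):
        a_ik = comb(k, i) * t**i * (1 - t)**(k - i)   # the coefficients a_ik(t)
        B += a_ik * f(i / k)
    return B

ts = np.linspace(0.0, 1.0, 1001)
for k in (5, 20, 80):
    err = np.max(np.abs(bernstein(np.exp, k, ts) - np.exp(ts)))
    print(k, err)                       # the error decreases, but only slowly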
Comments
18. Although we have stated the approximation theorem for the interval [0,1], it holds for any closed interval [a,b]. For the change of variable
    t = a(1−s) + bs
transforms [a,b] (in t) to [0,1] (in s).
19. Bernstein's approach to the Weierstrass approximation theorem is constructive, but it should not be thought that his polynomials are the way to go in approximating functions. For one thing, the Bernstein polynomial of degree k that approximates a polynomial of degree k is not necessarily the polynomial itself. Moreover, the convergence can be very slow. For example, if we expand the right-hand side of (2.4), we get
Lecture 3
Approximation

Chebyshev Approximation
Uniqueness
Convergence of Chebyshev Approximations
Rates of Convergence
Chebyshev approximation
1. We now turn to the problem of finding a polynomial of degree k that best approximates a continuous function f in C∞[0,1]. This problem is known as Chebyshev approximation, after the Russian mathematician who first considered it.¹ It is also known as best uniform approximation. Our first order of business will be to establish a characterization of the solution.
2. The linear approximation to the sine function that we derived in §§1.9–1.14 is prototypical. The table (1.1) shows that the maximum error is attained at three points and the errors at those points alternate in sign. It turns out that the general Chebyshev approximation of degree k will assume its maximum error at k+2 points and the errors will alternate in sign.
3. We now wish to establish the following characterization of the Chebyshev approximation.
    Let f ∈ C[0,1] and let p_k be of degree k. Let
        e(t) = f(t) − p_k(t).
    Then a necessary and sufficient condition that p_k be a Chebyshev approximation of f is that there exist k+2 points
        0 ≤ t_0 < t_1 < ··· < t_k < t_{k+1} ≤ 1
    such that
        1.  |e(t_i)| = ‖f − p_k‖_∞,   i = 0, ..., k+1,            (3.1)
        2.  e(t_i) e(t_{i+1}) ≤ 0,    i = 0, ..., k.
The second condition in (3.1) says that successive errors e(t_i), if they are not zero, alternate in sign. For brevity we will say that the maximum error alternates at k+2 points.
¹ The special case of a straight line was considered by Boscovich in 1757. Aspiring researchers take heed and search the literature! There is no new thing under the sun.
[Figure 3.1: an approximation p(t) to sin t (solid line) whose maximum error alternates in sign only at the endpoints of [0, π/2], a small correcting line q(t) (dotted), and the improved approximation (dashed).]
4. The proof of this result consists of denying the characterization and showing that we can then get a better approximation. Before launching into the proof, which is complicated, we consider a simple example. Figure 3.1 shows an approximation p(t) to sin t (solid line) that does not satisfy the characterization: its maximum error alternates in sign only at the endpoints of the interval [0, π/2]. Then there must be a point t_0 (= 0.3336) between the endpoints at which the error is zero. We can pass a line q(t) through t_0 that is small and has the same sign as the error at the endpoints (dotted line). Then p(t) − q(t) (dashed line) is a better approximation to the sine.
5. Turning now to the general proof of the necessity of the condition, suppose that the maximum error, call it ε, does not alternate at k+2 points. We will construct points t_i as follows.
    t_1 is the first point in [0,1] for which |e(t_1)| = ε.
    t_2 is the first point greater than t_1 with |e(t_2)| = ε and e(t_1)e(t_2) < 0.
    t_3 is the first point greater than t_2 with |e(t_3)| = ε and e(t_2)e(t_3) < 0.
    etc.
[Figure 3.2: the error curve e(t) with the points t_1, t_2, t_3, the zeros z_1, z_2, and the polynomial q(t).]
This process must terminate with ℓ points t_i, where ℓ < k+2. This situation is illustrated for ℓ = 3 in Figure 3.2.
Now since e(t_i) and e(t_{i+1}) (i = 1, ..., ℓ−1) have opposite signs, there is a zero of e between them. Let z_i be the largest such zero. In addition let z_0 = 0 and z_ℓ = 1. Let
    q(t) = (t − z_1)(t − z_2) ··· (t − z_{ℓ−1}).
Since ℓ < k+2, the degree of q is no greater than k. Consequently, for any λ > 0 the polynomial p_k − λq is a candidate for a best approximation. We are going to show that for small enough λ the error
    (p_k − λq) − f = e − λq
has norm strictly smaller than ε. This implies that p_k was not a best approximation in the first place. We will assume without loss of generality that
    sign[q(t)] = sign[e(t_i)],   t ∈ (z_{i−1}, z_i).
We will first consider the determination of λ for the interval [z_0, z_1]. For definiteness suppose that sign[e(t_i)] > 0 (which corresponds to Figure 3.2). This implies that for any λ > 0
    ε > e(t) − λq(t),   t ∈ (z_{i−1}, z_i).
What we must show now is that for any sufficiently small λ > 0
    e(t) − λq(t) > −ε,   t ∈ (z_{i−1}, z_i).
Let
    ε_0 = min_{t∈[z_0,z_1]} e(t).
We must have ε + ε_0 > 0. For otherwise the interval [z_0, z_1] must contain a point with e(t) = −ε, contradicting the construction of t_2. Now let
    λ_0 = (ε + ε_0) / (2‖q‖_∞).
Then for 0 ≤ λ ≤ λ_0 and t ∈ [z_0, z_1],
    e(t) − λq(t) ≥ e(t) − λ_0 q(t)
                ≥ ε_0 − (ε + ε_0) q(t) / (2‖q‖_∞)
                ≥ ε_0 − (ε + ε_0)/2
                = −(ε − ε_0)/2
                > −ε          (since ε_0 > −ε),
which is the inequality we wanted.
Parameters λ_i corresponding to the interval [z_{i−1}, z_i] are determined similarly. If 0 < λ < min{λ_i}, then ‖f − (p_k − λq)‖_∞ < ε, which establishes the contradiction.
6. Turning now to sufficiency, suppose that the maximum error in p_k alternates at the points t_0, ..., t_{k+1} but that p_k is not a Chebyshev approximation. Let q be a Chebyshev approximation. Denote the errors in p_k and q by e_p and e_q. Since |e_q(t_i)| < |e_p(t_i)|, the values of e_p − e_q at the t_i alternate in sign. But e_p − e_q = q − p_k, whose values also alternate at the t_i. It follows that q − p_k has k+1 zeros between the t_i. But q − p_k has degree not greater than k. Hence q − p_k = 0, or p_k = q, a contradiction.
Uniqueness
7. The characterization of best uniform approximations does not insure their uniqueness. It is conceivable that there could be two best approximations that alternate at a different set of points. Fortunately, it is easy to show that this cannot happen.
8. Let p and q be best approximations of degree k of f, and set
    ε = ‖p − f‖_∞ = ‖q − f‖_∞.
We claim that ½(p + q) is also a best approximation. In fact,
    ‖½(p + q) − f‖_∞ ≤ ½‖p − f‖_∞ + ½‖q − f‖_∞ = ε.
Since we cannot have ‖½(p + q) − f‖_∞ < ε, we must have ‖½(p + q) − f‖_∞ = ε.²

² More generally any convex combination, that is, a combination of the form λp + (1−λ)q (λ ∈ [0,1]), of best approximations is a best approximation, and this result holds in any normed linear space.
Since |p(t_i) − f(t_i)| ≤ ε and |q(t_i) − f(t_i)| ≤ ε, the only way for (3.2) to hold is for
    p(t_i) − f(t_i) = q(t_i) − f(t_i) = (−1)^i ε.
It follows that p(t_i) = q(t_i). Thus p and q are polynomials of degree k that are equal at k+2 distinct points. Hence p = q.
Convergence of Chebyshev approximations
9. We have seen that for each function f ∈ C∞[0,1] there is a sequence of best approximations p_0, p_1, ..., where each p_k is of degree not greater than k. It is natural to conjecture that this sequence actually converges to f in norm. The proof is trivial.
10. Let B_k(f) be the kth Bernstein polynomial corresponding to f (see §2.16). Our proof of the Weierstrass approximation theorem shows that ‖B_k(f) − f‖_∞ → 0. Since p_k is a best approximation,
    ‖p_k − f‖_∞ ≤ ‖B_k(f) − f‖_∞ → 0.
Hence ‖p_k − f‖_∞ → 0.
Rates of convergence
11. This proof of the convergence of best approximations shows only that the convergence is no slower than the convergence of the Bernstein polynomials. That is not saying much, since the Bernstein polynomials are known to converge slowly (see §2.19). A stronger result goes under the name of Jackson's theorem.³
    Let f have n continuous derivatives on [0,1], and let p_k be the best approximation to f of degree k. Then there is a constant α_n, independent of f and k, such that
        ‖p_k − f‖_∞ ≤ α_n ‖f^(n)‖_∞ / k^n.
³ Jackson established the basic results in 1911, results that have since been elaborated. The theorem stated here is one of the elaborations. For more see E. W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York, 1966.
Lecture 4
Approximation

Convergence of Chebyshev Approximations
Rates of Convergence
A Theorem of de la Vallée-Poussin
A General Approximation Strategy
Chebyshev Polynomials
Economization of Power Series
Farewell to C∞[a,b]
A theorem of de la Vallée-Poussin
1. We now turn to methods for calculating the best approximation to a function f. There is a method, called the Remes algorithm, that will compute a best approximation to high accuracy.¹ However, high accuracy is seldom needed in best polynomial approximation. This is a consequence of the following theorem of de la Vallée-Poussin.
    Let p be a polynomial of degree not greater than k such that p − f alternates at the points t_0 < t_1 < ··· < t_{k+1}; i.e., sign[p(t_{i+1}) − f(t_{i+1})] = −sign[p(t_i) − f(t_i)] (i = 0, ..., k). Then
        min_i |p(t_i) − f(t_i)| ≤ E_k(f) ≤ ‖p − f‖_∞.             (4.1)
The right-hand inequality in (4.1) is simply a statement that the error in any polynomial approximation is not less than the error in the best polynomial approximation. The left-hand inequality is established exactly as we established the sufficiency of the characterization of best approximations in §3.6. Assume that the best approximation p_k has error less than min_i |p(t_i) − f(t_i)| and show that p_k = p.
2. The significance of this result can be more easily seen by writing
    γ⁻¹ = min_i |p(t_i) − f(t_i)| / ‖p − f‖_∞ ≤ E_k(f) / ‖p − f‖_∞,
so that
    ‖p − f‖_∞ ≤ γ E_k(f).
We will call the number γ the V-P ratio for the polynomial p at the points t_i. If it is near one, the polynomial p has a maximum error that is almost as good as the best approximation. Even when γ is equal to, say, 2, the maximum error
¹ A Fortran program is available on Netlib (http://www.netlib.org/index.html).
Chebyshev polynomials
5. The equi-alternating Chebyshev polynomials² are defined in terms of trigonometric and hyperbolic functions:
    c_k(t) = cos(k cos⁻¹ t),    |t| ≤ 1,
    c_k(t) = cosh(k cosh⁻¹ t),  |t| ≥ 1.                          (4.3)
(Actually, both formulas are valid for all values of t but can give complex intermediate quantities if they are used outside their natural ranges.)
6. To see that these functions are actually polynomials, we will show that they satisfy a simple recurrence. To get us started, note that c_0(t) = 1 and c_1(t) = t.
Now let t = cos θ, so that c_k(t) = cos kθ. By adding the trigonometric identities
    cos(k+1)θ = cos kθ cos θ − sin kθ sin θ                       (4.4)
    cos(k−1)θ = cos kθ cos θ + sin kθ sin θ
we get
    cos(k+1)θ + cos(k−1)θ = 2 cos kθ cos θ,
or in terms of Chebyshev polynomials
    c_{k+1}(t) = 2t c_k(t) − c_{k−1}(t).                          (4.5)
From this it is obvious that c_k is a polynomial of degree k and that its leading term is 2^{k−1} t^k.
7. Since the hyperbolic functions satisfy the identities (4.4), the same argument shows the identity of the two expressions in (4.3).
8. Since c_k(t) = cos(k cos⁻¹ t) on [−1,1], the absolute value of c_k(t) cannot be greater than one on that interval. If we set
    t_{ik} = cos[(k−i)π/k],   i = 0, 1, ..., k,
then
    c_k(t_{ik}) = cos(k−i)π = (−1)^{k−i}.
Thus the values of c_k(t) alternate between plus and minus one as t varies from −1 to 1. The points t_{ik} are called the Chebyshev points of c_k.
9. If it is desired to evaluate c_k at the point t, we can start with c_0(t) = 1 and c_1(t) = t, and apply the recurrence (4.5). This procedure has the advantage that it produces values for all the polynomials of degree less than or equal to k. Consequently, we can also evaluate sums of the form
    b_0 c_0(t) + b_1 c_1(t) + ··· + b_k c_k(t)
in O(k) operations. We leave the details as an exercise.

² Chebyshev polynomials are frequently written T_k(t), which reflects an alternate transcription, Tchebycheff, from the Cyrillic alphabet.
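One way to carry out the exercise mentioned above (a Python sketch, not the book's code): the recurrence (4.5) is run once and the partial sum is accumulated along the way, so the whole series costs O(k) operations. The comparison against NumPy's Chebyshev evaluator is just a sanity check.

import numpy as np

def cheb_series(b, t):
    """Evaluate b[0]*c_0(t) + ... + b[k]*c_k(t) with the recurrence (4.5), in O(k)."""
    t = np.asarray(t, dtype=float)
    c_prev, c_cur = np.ones_like(t), t          # c_0(t) = 1, c_1(t) = t
    s = b[0] * c_prev
    if len(b) > 1:
        s = s + b[1] * c_cur
    for bj in b[2:]:
        c_prev, c_cur = c_cur, 2*t*c_cur - c_prev
        s += bj * c_cur
    return s

b = [1.0, -0.5, 0.25, 0.125]
t = np.linspace(-1.0, 1.0, 5)
print(cheb_series(b, t))
print(np.polynomial.chebyshev.chebval(t, b))    # same values, as a check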
10. Although the recurrence makes it unnecessary to have an explicit repre-
sentation of the Chebyshev polynomials to evaluate Chebyshev series, we will
need their coecients later when we turn to the economization of power series.
They can be computed by a simple algorithm. To derive it, write
ck (t) = c0k + c1k t + c2kt2 + + ckk tk :
The recurrence (4.5) can be written schematically in the form
;ck;2(t) = ; c0k;2 ; c1k;2 t ; c2k;2t2 ;
+2tck;1 (t) = 2c0k;1 t + 2c1k;1t2 +
ck (t) = ; c0k;2 + (2c0k;1 ; c1k;2)t + (2c1k;1 ; c2k;2)t2 +
From this we see that the general equation for the coecients of ck are given
by
c0k = ;c0k;2
and
cik = 2ci;1k;1 ; cik;2 i = 1 : : : k:
The following program implements this recursion, accumulating the coefficients in an upper triangular array C.
    1. C[0,0] = 1
    2. C[0,1] = 0; C[1,1] = 1
    3. for k = 2 to m
    4.    C[0,k] = −C[0,k−2]                                      (4.6)
    5.    for i = 1 to k
    6.       C[i,k] = 2C[i−1,k−1] − C[i,k−2]
    7.    end for i
    8. end for k
The algorithm requires O(m²) operations. It is not as efficient as it could be, since it does not take advantage of the fact that every other coefficient of a Chebyshev polynomial is zero.
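A direct Python transcription of (4.6), useful for checking (a sketch, not the book's code; the array is indexed from zero exactly as above):

import numpy as np

def cheb_coefficients(m):
    """Power-basis coefficients of c_0, ..., c_m, column by column, as in (4.6)."""
    C = np.zeros((m + 1, m + 1))
    C[0, 0] = 1.0                        # c_0(t) = 1
    if m >= 1:
        C[1, 1] = 1.0                    # c_1(t) = t
    for k in range(2, m + 1):
        C[0, k] = -C[0, k - 2]
        for i in range(1, k + 1):
            C[i, k] = 2.0 * C[i - 1, k - 1] - C[i, k - 2]
    return C

print(cheb_coefficients(4))
# Column 4 reads [1, 0, -8, 0, 8]: c_4(t) = 1 - 8t^2 + 8t^4.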
Economization of power series
11. We now turn to obtaining a Chebyshev expansion of the function f we wish to approximate. The following technique assumes that f has a power series expansion:
    f(t) = a_0 + a_1 t + a_2 t² + ···.
Lecture 5
Approximation

Discrete, Continuous, and Weighted Least Squares
Inner Product Spaces
Quasi-Matrices
Positive Definite Matrices
The Cauchy and Triangle Inequalities
Orthogonality
The QR Factorization
Symmetry and linearity imply that the inner product is linear in its first argument as well as its second.¹
6. Given an inner product space, we can define a norm on it by ‖x‖² = (x, x). To qualify as a norm, the function ‖·‖ must be definite and homogeneous, and must satisfy the triangle inequality (see §1.17). The fact that ‖·‖ is definite and homogeneous follows immediately from the definition of the inner product. The triangle inequality requires more work. However, before we derive it, we must dispose of some technical problems.
Quasi-matrices
7. The elementary theory of approximation in inner-product spaces is essentially the same for discrete and continuous spaces. Yet if you compare a book on numerical linear algebra with a book on approximation theory, you will find two seemingly dissimilar approaches. The numerical linear algebra book will treat the material in terms of matrices, whereas the approximation theory book will be cluttered with scalar formulas involving inner products, formulas that on inspection will be found to be saying the same thing as their more succinct matrix counterparts. Since the approaches are mathematically equivalent, it should be possible to find a common ground. We will do so first by introducing a notational change and second by extending the notion of matrix.
8. Part of the problem is that the conventional notation for the inner product is not compatible with matrix conventions. We will therefore begin by writing
    x^T y
for the inner product in all vector spaces, discrete or continuous. Note that this is an operational definition. It does not imply that there is an object x^T that multiplies y, although it will be convenient at times to pretend there is such an object.
9. Another problem is that matrices, as they are usually understood, cannot be fashioned from vectors in a continuous space. In R^n we can build a matrix by lining up a sequence of vectors. For example, if x_1, ..., x_k ∈ R^n, then
    X = (x_1  x_2  ···  x_k)                                      (5.1)
is a matrix with n rows and k columns. If we attempt to do the same thing with members of a continuous vector space we end up with an object that has no rows. For the moment let's call these objects quasi-matrices and denote the set of quasi-matrices with k columns by V^k. We have already seen quasi-matrices in §4.14, where we treated them as row vectors of functions. This
¹ When the vector spaces are over the field of complex numbers, symmetry is replaced by conjugate symmetry: (x, y) is the complex conjugate of (y, x). More on this when we consider eigenvalue problems.
works in a general normed linear space, but our range of operations with such objects is limited. In an inner-product space, on the other hand, quasi-matrices can be made to behave more like ordinary matrices.
10. The key to performing operations with quasi-matrices is to ask if the operation in question makes sense. For example if X ∈ V^k is partitioned as in (5.1) and b ∈ R^k, then the product
    Xb = β_1 x_1 + ··· + β_k x_k
is well defined and is a member of V.
Similarly if A = (a_1 a_2 ··· a_ℓ) ∈ R^{k×ℓ}, then the product
    XA = (Xa_1  Xa_2  ···  Xa_ℓ)
is a well-defined quasi-matrix. On the other hand, the product AX does not exist, since V does not have a matrix-vector product.
Two operations involving the inner product will be particularly important. If X and Y are quasi-matrices with k and ℓ columns, then the product X^T Y is a k × ℓ matrix defined by

    X^T Y = [ x_1^T y_1   x_1^T y_2   ···   x_1^T y_ℓ ]
            [ x_2^T y_1   x_2^T y_2   ···   x_2^T y_ℓ ]
            [    ...          ...                ...   ]
            [ x_k^T y_1   x_k^T y_2   ···   x_k^T y_ℓ ]

If X, Y ∈ V^k then the product XY^T is an operator mapping V into V defined by
    (XY^T) z = X (Y^T z).
We will encounter this operator when we introduce projections.
11. The term "quasi-matrix" has served its purpose, and we will now give it an honorable burial. Its memorial will be the notation V^k to denote a matrix formed as in (5.1) from k elements of the vector space V.²
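To make the quasi-matrix idea concrete, here is a small Python sketch (mine, not the book's): a quasi-matrix is represented as a list of functions on an interval, with the inner product x^T y taken to be the integral of the product over that interval (computed here by SciPy quadrature). Only the operations the text says make sense, Xb and X^T Y, are provided.

import numpy as np
from scipy.integrate import quad

class QuasiMatrix:
    """Columns are functions on [a, b]; inner products are integrals over [a, b]."""
    def __init__(self, cols, a=0.0, b=1.0):
        self.cols, self.a, self.b = list(cols), a, b

    def matvec(self, beta):
        """Xb: a linear combination of the columns, itself a function."""
        return lambda t: sum(bi * x(t) for bi, x in zip(beta, self.cols))

    def T_mul(self, other):
        """X^T Y: the k-by-l matrix of inner products x_i^T y_j."""
        return np.array([[quad(lambda t: xi(t) * yj(t), self.a, self.b)[0]
                          for yj in other.cols] for xi in self.cols])

X = QuasiMatrix([lambda t: 1.0, lambda t: t, lambda t: t**2])
print(X.T_mul(X))      # the 3x3 matrix of monomial inner products on [0, 1]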
Positive definite matrices
12. Positive definite matrices play an important role in approximation in inner-product spaces. Here we collect some facts we will need later.
² In the original lecture I gave a more formal development and even wrote quasi-matrices in small caps to distinguish them from ordinary matrices. This is bad business for two reasons. First, it represents a triumph of notation over common sense. Second, if you introduce a formal mathematical object, people are likely to start writing papers about it. Like "On the generalized inverse of quasi-matrices." Ugh!
Orthogonality
16. Orthogonality between vectors is a generalization of perpendicularity in Euclidean two- and three-dimensional space. The concept is central to approximation in inner-product spaces. To see why, consider the problem of approximating a vector y ∈ R³ by a linear combination β_1 x_1 + β_2 x_2 of two linearly independent vectors x_1 and x_2. Our criterion for best approximation requires that we minimize the norm of the residual r = y − β_1 x_1 − β_2 x_2.
The geometric situation is illustrated in Figure 5.1. As β_1 and β_2 vary, the vector x = β_1 x_1 + β_2 x_2 varies in a two-dimensional subspace, represented by the plane in the figure. The dashed lines show a typical approximation x and the corresponding residual r. Geometric intuition tells us that the size of r should be minimized when r is at right angles to the plane, as illustrated by the solid lines.
17. To generalize this insight, we must decide what we mean for a vector to be perpendicular to a subspace. We will begin by using the Cauchy inequality to define the notion of angle between two vectors.
Since cos θ ranges from 1 to −1 as θ ranges from 0 to π, the Cauchy inequality can be rewritten in the form
    x^T y = cos θ ‖x‖ ‖y‖
for some unique θ ∈ [0, π]. In ordinary Euclidean two- or three-dimensional space, it is easy to verify that θ is actually the angle between x and y. In a general inner-product space we define θ to be the angle between x and y.
18. If θ = π/2, i.e., if x and y are at right angles, then cos θ and hence x^T y are zero. Thus we will say that the vectors x and y are orthogonal if x^T y = 0. We write x ⊥ y.
Note that we do not require x and y to be nonzero in our definition of orthogonality. It is useful to have the zero vector orthogonal to all vectors, and even more useful to have the only self-orthogonal vector be the zero vector.
If we set Q = (q_1 q_2 ··· q_k), then (5.4) may be written succinctly in the form
    Q^T Q = I_k,
where I_k is the identity matrix of order k.
If X ∈ V^k then q is orthogonal to the columns of X if and only if X^T q = 0.
20. A fact that goes deeper than these essentially notational results is the Pythagorean equality. It is the analytic equivalent of the Pythagorean theorem, which says that the square of the hypotenuse of a right triangle is equal to the sum of the squares of its two sides. Analytically expressed:
    If x and y are orthogonal then
        ‖x + y‖² = ‖x‖² + ‖y‖².                                   (5.5)
The proof is easy:
    ‖x + y‖² = (x + y)^T (x + y)
             = x^T x + 2x^T y + y^T y
             = x^T x + y^T y             (since x ⊥ y)
             = ‖x‖² + ‖y‖².
The QR factorization
21. By definition every finite-dimensional subspace X ⊆ V has a basis, say, x_1, ..., x_k. The QR factorization shows that this basis can be converted into an orthonormal basis for X. Specifically:
    Let X ∈ V^k have linearly independent columns. Then there is a quasi-matrix Q ∈ V^k with orthonormal columns and an upper triangular matrix R of order k having positive diagonal elements such that
        X = QR.
    Both Q and R are unique.
22. Before establishing the existence of the QR factorization, let's see what it means. Since R is nonsingular, we can write XS = Q, where S = R⁻¹. In other words,

    (q_1 q_2 ··· q_k) = (x_1 x_2 ··· x_k) [ s_11  s_12  ···  s_1k ]
                                          [  0    s_22  ···  s_2k ]
                                          [            ...        ]
                                          [  0     0    ···  s_kk ]

or equivalently
    q_1 = s_11 x_1
    q_2 = s_12 x_1 + s_22 x_2
    ···
    q_k = s_1k x_1 + s_2k x_2 + ··· + s_kk x_k.
Thus q_1 is just x_1 normalized. The vector q_2 is a linear combination of x_1 and x_2 that is orthogonal to q_1, or equivalently to x_1. Generally, q_j is a linear combination of x_1, ..., x_j that is orthogonal to q_1, ..., q_{j−1}, or equivalently to x_1, ..., x_{j−1}. Thus the QR factorization amounts to a sequential orthogonalization of the vectors x_1, ..., x_k. The term sequential is important here. Having orthogonalized x_1, ..., x_k we have also orthogonalized x_1, ..., x_l for any l < k.
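The sequential property is easy to see numerically. The Python sketch below (not from the text) uses NumPy's reduced QR; note that numpy.linalg.qr does not enforce a positive diagonal for R, so its Q may differ from the Gram–Schmidt Q by column signs, but the spans are the same.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))

Q, R = np.linalg.qr(X)                        # reduced QR: Q is 6x3, R is 3x3
print(np.allclose(Q.T @ Q, np.eye(3)))        # orthonormal columns: Q^T Q = I
print(np.allclose(Q @ R, X))                  # X = QR

# Sequential property: the first j columns of Q span the same space as x_1, ..., x_j.
for j in range(1, 4):
    Qj, _ = np.linalg.qr(X[:, :j])
    print(np.allclose(Qj @ (Qj.T @ Q[:, :j]), Q[:, :j]))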
Lecture 6
Approximation
Existence and Uniqueness of the QR Factorization
The Gram–Schmidt Algorithm
Projections
Best Approximation
Expansions in Orthogonal Functions
Existence and uniqueness of the QR factorization
1. The proof of the existence of the QR factorization proceeds by induction
on k. For k = 1, Q is just the vector x1 normalized and R = (kx1k).
Now assume that the theorem is true for k ; 1 1. We seek a QR factor-
ization of X = (x1 xk ) in the partitioned form1
!
(X1 xk ) = (Q1 qk ) R011 r1k :
kk
On computing the first column of this partition, we find that
    X_1 = Q_1 R_11.
This is the QR factorization of X_1, which exists by the induction hypothesis.
The second column of the partition is
    x_k = Q_1 r_1k + ρ_kk q_k.
Multiplying this equation by Q_1^T, we find that
    Q_1^T x_k = r_1k + ρ_kk Q_1^T q_k.
From this we see that ρ_kk q_k will be orthogonal to the columns of Q_1 if and only if
    r_1k = Q_1^T x_k,                                             (6.1)
in which case
    ρ_kk q_k = x_k − Q_1 r_1k.                                    (6.2)
If x_k − Q_1 r_1k ≠ 0, we can take
    ρ_kk = ‖x_k − Q_1 r_1k‖.                                      (6.3)
¹ The indices of a submatrix in the partition are the same as those of the element in the upper left of the submatrix. This scheme is called northwest indexing.
    1. for j = 1 to k
    2.    Q[:, j] = X[:, j]
    3.    for i = 1 to j−1
    4.       R[i, j] = Q[:, i]^T Q[:, j]
    5.    end for i
    6.    for i = 1 to j−1
    7.       Q[:, j] = Q[:, j] − Q[:, i] R[i, j]
    8.    end for i
    9.    R[j, j] = ‖Q[:, j]‖
    10.   Q[:, j] = Q[:, j] / R[j, j]
    11. end for j
It is a remarkable fact that if we combine the two inner loops, we compute the
same QR factorization.
    1. for j = 1 to k
    2.    Q[:, j] = X[:, j]
    3.    for i = 1 to j−1
    4.       R[i, j] = Q[:, i]^T Q[:, j]
    5.       Q[:, j] = Q[:, j] − Q[:, i] R[i, j]
    6.    end for i
    7.    R[j, j] = ‖Q[:, j]‖
    8.    Q[:, j] = Q[:, j] / R[j, j]
    9. end for j
This algorithm is called the modified Gram–Schmidt algorithm. Note that it is really a different algorithm, since Q[:, j] changes as we compute the successive values of R[i, j]. We leave it as a (mandatory) exercise to verify that it works.
4. The Gram–Schmidt algorithm can be used in both continuous and discrete spaces. Unfortunately, in the discrete case it is numerically unstable and can give vectors that are far from orthogonal. The modified version is better, but it too can produce nonorthogonal vectors. We will later give better algorithms for computing the QR factorization in the discrete case.
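The loss of orthogonality is easy to demonstrate in floating point. The Python sketch below (mine, not the book's) implements both loops above for ordinary matrices and compares ‖Q^T Q − I‖ on an ill-conditioned test matrix; the classical version is markedly worse, and the modified version, while better, is not perfect either.

import numpy as np

def gram_schmidt(X, modified=True):
    """QR by classical or modified Gram-Schmidt, following the two loops above."""
    X = np.array(X, dtype=float)
    n, k = X.shape
    Q, R = np.zeros((n, k)), np.zeros((k, k))
    for j in range(k):
        q = X[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ (q if modified else X[:, j])
            q -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(q)
        Q[:, j] = q / R[j, j]
    return Q, R

# An ill-conditioned test matrix makes the loss of orthogonality visible.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((50, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
X = U @ np.diag(10.0 ** -np.arange(8)) @ V.T
for mod in (False, True):
    Q, _ = gram_schmidt(X, modified=mod)
    print(mod, np.linalg.norm(Q.T @ Q - np.eye(8)))   # departure from orthogonality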
Projections
5. Figure 5.1 suggests that the best approximation to a vector y will be the shadow cast by y at high noon on the approximating subspace. Such shadows are called projections. We are going to show how to use the QR factorization to compute projections.
6. Let the columns of X ∈ V^k be linearly independent, and let X = QR be the QR factorization of X. Then the columns of Q form a basis for the space X spanned by the columns of X. Let
    P_X = QQ^T   and   P_⊥ = I − P_X.                             (6.4)
The following result shows that these two matrices are related to the geometry of the space X.
    Let P_X and P_⊥ be defined by (6.4). Then
        1.  x ∈ X  ⟹  P_X x = x and P_⊥ x = 0.
        2.  z ⊥ X  ⟹  P_X z = 0 and P_⊥ z = z.
To prove the first of the above results, note that if x ∈ X then x = Qb for some b (because the columns of Q span X). Hence
    P_X x = (QQ^T)(Qb) = Q(Q^T Q)b = Qb = x.
Again, if x ⊥ X, then Q^T x = 0, and P_X x = QQ^T x = 0. The other result is established similarly.
7. Let y be an arbitrary vector in V and let
    y_X = P_X y   and   y_⊥ = P_⊥ y.
Then since P_X + P_⊥ = I,
    y = y_X + y_⊥.                                                (6.5)
Now y_X = Q(Q^T y) is a linear combination of the columns of Q and hence lies in X. Moreover,
    Q^T y_⊥ = Q^T (I − QQ^T) y = Q^T y − (Q^T Q) Q^T y = Q^T y − Q^T y = 0.
Hence the vector y_⊥ is orthogonal to X. Thus (6.5) decomposes the vector y into the sum of two orthogonal vectors, one lying in X and the other orthogonal to X. (This, in fact, is just the decomposition illustrated in Figure 5.1.)
Such a decomposition is unique. For if y = ŷ_X + ŷ_⊥ is another such decomposition, we have from the result of §6.6
    y_X = P_X y = P_X (ŷ_X + ŷ_⊥) = ŷ_X,
since ŷ_X ∈ X and ŷ_⊥ ⊥ X.
To summarize:
    For any vector y the sum
        y = P_X y + P_⊥ y
    decomposes y uniquely into the sum of a vector lying in X and a vector orthogonal to X.
8. The vector y_X = P_X y is called the orthogonal projection of y onto the column space of X, and the vector y_⊥ = P_⊥ y is called the complementary projection. The matrices that produce these vectors are called projection operators or projectors. They are important in many numerical applications. Here we shall use them to solve the best approximation problem.
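For a discrete space the projectors are ordinary matrices, and the decomposition can be checked directly. A small Python sketch (not from the text):

import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3))
y = rng.standard_normal(6)

Q, R = np.linalg.qr(X)
P_X = Q @ Q.T                       # projector onto the column space of X
P_perp = np.eye(6) - P_X            # complementary projector

y_X, y_perp = P_X @ y, P_perp @ y
print(np.allclose(y, y_X + y_perp))          # y = y_X + y_perp
print(np.abs(Q.T @ y_perp).max())            # y_perp is orthogonal to the space
print(np.allclose(P_X @ P_X, P_X))           # applying a projector twice changes nothing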
Best approximation
9. We are now in a position to give a complete solution of the approximation problem in an inner product space. Here are the ingredients.
    1. A vector space V and an inner product (x, y). As usual, we shall write x^T y for (x, y).
    2. A vector y to be approximated.
    3. Linearly independent vectors x_1, ..., x_k to do the approximating.
We look for an approximation in the form
    y ≅ β_1 x_1 + β_2 x_2 + ··· + β_k x_k.
The best approximation will minimize ‖y − β_1 x_1 − β_2 x_2 − ··· − β_k x_k‖, where ‖x‖² = x^T x.
10. To bring matrix techniques into play, let
    X = (x_1 x_2 ··· x_k)   and   b = (β_1, β_2, ..., β_k)^T.
Then we wish to determine b to
    minimize ‖y − Xb‖.
11. The solution of the problem involves the use of projections and the Pythagorean equality (5.5):
    ‖y − Xb‖² = ‖P_X(y − Xb) + P_⊥(y − Xb)‖²          (P_X + P_⊥ = I)
              = ‖P_X(y − Xb)‖² + ‖P_⊥(y − Xb)‖²        (Pythagorean equality)
              = ‖P_X y − Xb‖² + ‖P_⊥ y‖²               (P_X X = X, P_⊥ X = 0).
Now the second term in the last expression is independent of b. Consequently, we can minimize ‖y − Xb‖ by minimizing ‖P_X y − Xb‖. But P_X y is in the space spanned by the columns of X. Since the columns of X are linearly independent, there is a unique vector b such that
    Xb = P_X y.                                                   (6.6)
For this b, we have ‖P_X y − Xb‖ = 0, which is as small as it gets. Consequently, the vector b satisfying (6.6) is the unique solution of our approximation problem.
12. Since our object is to derive numerical algorithms, we must show how to solve (6.6). There are two ways: one classical (due to Gauss and Legendre) and one modern.
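In the discrete case the two routes are, respectively, the normal equations X^T X b = X^T y and the QR factorization, where (6.6) becomes the triangular system R b = Q^T y. A Python sketch comparing them on random data (mine, not the book's):

import numpy as np
from scipy.linalg import solve_triangular, cho_factor, cho_solve

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4))
y = rng.standard_normal(100)

# Classical route: the normal equations X^T X b = X^T y, solved by Cholesky.
b_normal = cho_solve(cho_factor(X.T @ X), X.T @ y)

# Modern route: X = QR, so Xb = P_X y reduces to the triangular system R b = Q^T y.
Q, R = np.linalg.qr(X)
b_qr = solve_triangular(R, Q.T @ y)

print(np.allclose(b_normal, b_qr))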
Lecture 7
Approximation

Expansions in Orthogonal Functions
Orthogonal Polynomials
Discrete Least Squares and the QR Decomposition
The first thing is to verify that the cosines and sines are indeed orthogonal. Consider the equations
    cos(i+j)t = cos it cos jt − sin it sin jt,                    (7.2)
    cos(i−j)t = cos it cos jt + sin it sin jt.
Adding these two equations, we get
    cos(i+j)t + cos(i−j)t = 2 cos it cos jt.
From this it is easily seen that
    ∫_{−π}^{π} cos it cos jt dt = 0 if i ≠ j,   and = π if i = j > 0.
This establishes the orthogonality of cosines with themselves and shows that ‖cos it‖² = π for i > 0. (The case i = 0 is an exception: ‖cos 0t‖² = 2π.)
The orthogonality of sines may be obtained by subtracting the equations (7.2) and integrating. We also find that ‖sin it‖² = π. The orthogonality between sines and cosines may be derived similarly from the identity
    sin(i+j)t = sin it cos jt + cos it sin jt.
determined by the starting vectors and the requirement that the results be
orthonormal.
12. We will need the following useful fact about basic sequences of polynomials.
    Let p_0, p_1, ... be a basic sequence of polynomials, and let q be a polynomial of degree k. Then q can be written uniquely as a linear combination of p_0, p_1, ..., p_k.
We actually proved this result in §4.14, where we showed how to expand a truncated power series in Chebyshev polynomials. The p_i correspond to the Chebyshev polynomials, and q corresponds to the truncated power series. You should satisfy yourself that the result established in §4.14 depends only on the fact that the sequence of Chebyshev polynomials is basic.
13. A consequence of the above result is the following.
    If p_0, p_1, ... is a sequence of orthogonal polynomials, then p_i is orthogonal to any polynomial of degree less than i.
To see this, let q be of degree j < i. Then q can be written in the form q = a_0 p_0 + ··· + a_j p_j. Hence
    ∫ p_i q = ∫ p_i (a_0 p_0 + ··· + a_j p_j) = a_0 ∫ p_i p_0 + ··· + a_j ∫ p_i p_j = 0.
14. We are now in a position to establish the existence of the recurrence that generates the sequence of orthogonal polynomials. We will work with monic polynomials, i.e., polynomials whose leading coefficient is one.
    Let p_0, p_1, p_2, ... be a sequence of monic orthogonal polynomials. Then there are constants a_k and b_k such that
        p_{k+1} = t p_k − a_k p_k − b_k p_{k−1},   k = 0, 1, ....
    (For k = 0, we take p_{−1} = 0.)
The derivation of this three-term recurrence is based on the following two facts.
    1. The orthogonal polynomials can be obtained by applying the Gram–Schmidt algorithm to any basic sequence of polynomials.¹
    2. We can defer the choice of the kth member of this basic sequence until it is time to generate p_k.

¹ Actually, we will use a variant of the algorithm in which the normalization makes the polynomials monic.
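A Python sketch of the recurrence in action (mine, not the book's): it assumes the standard Stieltjes formulas a_k = (t p_k, p_k)/(p_k, p_k) and b_k = (p_k, p_k)/(p_{k−1}, p_{k−1}), which is one way to choose the constants, applies them for a discrete inner product on sample points, and then checks the orthogonality of the resulting polynomials.

import numpy as np

def monic_orthogonal_polys(t, w, kmax):
    """Values of monic orthogonal polynomials p_0, ..., p_kmax at the nodes t for the
    discrete inner product (f, g) = sum_i w_i f(t_i) g(t_i), generated by
    p_{k+1} = (t - a_k) p_k - b_k p_{k-1} with the standard (assumed) formulas
    a_k = (t p_k, p_k)/(p_k, p_k) and b_k = (p_k, p_k)/(p_{k-1}, p_{k-1})."""
    P = [np.ones_like(t)]
    for k in range(kmax):
        pk = P[-1]
        ak = np.sum(w * t * pk * pk) / np.sum(w * pk * pk)
        bk = 0.0 if k == 0 else np.sum(w * pk * pk) / np.sum(w * P[-2] * P[-2])
        pk1 = (t - ak) * pk - (bk * P[-2] if k > 0 else 0.0)
        P.append(pk1)
    return P

t = np.linspace(-1.0, 1.0, 2001)
w = np.full_like(t, 2.0 / t.size)
P = monic_orthogonal_polys(t, w, 4)
print([float(np.sum(w * P[3] * P[j])) for j in range(3)])   # all close to zero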
It should not be thought, however, that one approach is better than the
other. The decomposition (7.9) is more general than (7.8) in that it applies
with equal facility to continuous and discrete problems. The decomposition
(7.8) represents a matrix approach that restricts us to discrete problems. How-
ever, the loss of generality is compensated by more computationally oriented
formulas. You would be well advised to become equally familiar with both
approaches.
Lecture 8
Approximation
Householder Transformations
Orthogonal Triangularization
Implementation
Comments on the Algorithm
Solving Least Squares Problems
Householder transformations
1. In many least squares problems n is very large and much greater than k. In such cases, the approach through the QR decomposition seems to have little to recommend it, since the storage of Q requires n² storage locations. For example, if n is one million, Q requires one trillion floating-point words to store. Only members of congress, crazed by the scent of pork, can contemplate such figures with equanimity.
2. The cure for this problem is to express Q in a factored form that is easy to store and manipulate. The factors are called Householder transformations. Specifically, a Householder transformation is a matrix of the form
    H = I − uu^T,   ‖u‖_2 = √2.                                   (8.1)
It is obvious that H is symmetric. Moreover,
    H^T H = (I − uu^T)(I − uu^T)
          = I − 2uu^T + (uu^T)(uu^T)
          = I − 2uu^T + u(u^T u)u^T
          = I − 2uu^T + 2uu^T
          = I,
so that H is orthogonal, a good start, since we want to factor Q into a sequence of Householder transformations.
3. The storage of Householder transformations presents no problems, since all we need is to store the vector u. However, this ploy will work only if we can manipulate Householder transformations in the form I − uu^T. We will now show how to premultiply a matrix X by a Householder transformation H. From (8.1),
    HX = (I − uu^T)X = X − u(u^T X) = X − uv^T,
where v^T = u^T X. Thus we have the following algorithm.
    1. v^T = u^T X
    2. X = X − uv^T                                               (8.2)
It is easy to verify that if X ∈ R^{n×k}, then (8.2) requires about 2kn multiplications and additions. This should be compared with a count of kn² for the explicit multiplication by a matrix of order n. What a difference the position of the constant two makes!
4. Although Householder transformations are convenient to store and manip-
ulate, we have yet to show they are good for anything. The following result
remedies this oversight by showing how Householder transformations can be
used to introduce zeros into a vector.
    Suppose ‖x‖_2 = 1, and let
        u = (x ± e_1) / √(1 ± ξ_1),
    where ξ_1 is the first component of x. If u is well defined, then
        Hx ≡ (I − uu^T) x = ∓ e_1.
To see this, note that
    u^T u = (x^T x ± 2e_1^T x + e_1^T e_1) / (1 ± ξ_1)
          = (1 ± 2ξ_1 + 1) / (1 ± ξ_1)
          = 2.
Now Hx = x − (u^T x)u. But
    u^T x = (x ± e_1)^T x / √(1 ± ξ_1)
          = (x^T x ± e_1^T x) / √(1 ± ξ_1)
          = (1 ± ξ_1) / √(1 ± ξ_1)
          = √(1 ± ξ_1).
Hence
    Hx = x − (u^T x)u
       = x − √(1 ± ξ_1) (x ± e_1) / √(1 ± ξ_1)
       = x − (x ± e_1)
       = ∓ e_1.
5. The proof shows that there are actually two Householder transformations that transform x into ∓e_1. They correspond to the two choices of the sign ± in the construction of u. Now the only case in which this construction can fail is when |ξ_1| = 1 and we choose the sign so that 1 ± ξ_1 = 0. We can avoid this case by always choosing the sign to be the same as the sign of ξ_1.
This choice of sign doesn't just prevent breakdown; it is essential for numerical stability. If |ξ_1| is near one and we choose the wrong sign, then cancellation will occur in the computation of 1 ± ξ_1.
6. A variant of this construction may be used to reduce any nonzero vector x to a multiple of e_1. Simply compute u from x/‖x‖_2. In this case
    Hx = ∓ ‖x‖_2 e_1.
The following algorithm implements this procedure. In addition to returning u it returns ν = ∓ ‖x‖_2.
    1. housegen(x, u, ν)
    2.    ν = ‖x‖_2
    3.    if (ν = 0) u = √2 e_1; return
    4.    u = x/ν
    5.    if (u_1 ≥ 0)
    6.       σ = 1
    7.    else
    8.       σ = −1
    9.    end if
    10.   u_1 = u_1 + σ
    11.   u = u / √|u_1|
    12.   ν = −σν
    13. end housegen
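A direct Python transcription of housegen (a sketch for experimenting, not the book's code):

import numpy as np

def housegen(x):
    """Return (u, nu) with ||u||_2 = sqrt(2) and (I - u u^T) x = nu e_1."""
    x = np.asarray(x, dtype=float)
    nu = np.linalg.norm(x)
    u = np.zeros_like(x)
    if nu == 0.0:
        u[0] = np.sqrt(2.0)
        return u, nu
    u = x / nu
    sigma = 1.0 if u[0] >= 0 else -1.0      # sign chosen to avoid cancellation
    u[0] += sigma
    u /= np.sqrt(abs(u[0]))
    return u, -sigma * nu

x = np.array([3.0, 4.0, 0.0])
u, nu = housegen(x)
print(nu)                                   # -5.0
print(x - u * (u @ x))                      # Hx = nu*e1, i.e., [-5, 0, 0]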
Orthogonal triangularization
7. The equation Hx = ∓‖x‖_2 e_1 is essentially a QR decomposition of x, with H playing the role of Q and ∓‖x‖_2 playing the role of R. (We add the qualification "essentially" because the diagonal of R = (∓‖x‖_2) may be negative.) We are now going to show how to parlay this mini-decomposition into the QR decomposition of a general n × k matrix X. We will look at
    x  x x x         r  r r r         r  r r r
    x^ x x x   H1    0  x x x   H2    0  r r r   H3
    x^ x x x   ==>   0  x^ x x  ==>   0  0 x x   ==>
    x^ x x x         0  x^ x x        0  0 x^ x
    x^ x x x         0  x^ x x        0  0 x^ x
    x^ x x x         0  x^ x x        0  0 x^ x

    r r r r          r r r r
    0 r r r    H4    0 r r r
    0 0 r r    ==>   0 0 r r
    0 0 0 x          0 0 0 r
    0 0 0 x^         0 0 0 0
    0 0 0 x^         0 0 0 0

            Figure 8.1. Unitary triangularization.
the process in two ways, schematically and recursively, and then write a program.
8. The schematic view is illustrated in Figure 8.1 for n = 6 and k = 4. The arrays in the figure are called Wilkinson diagrams (after the founding father of modern matrix computations). A letter in a diagram represents an element which may or may not be zero (but is usually nonzero). A zero in the diagram represents an element that is definitely zero.
The scheme describes a reduction to triangular form by a sequence of Householder transformations. At the first step, the first column is singled out and a Householder transformation H1 generated that reduces the column to a multiple of e_1; i.e., it replaces the elements marked with hats by zeros. The transformation is then multiplied into the entire matrix to get the second array in the figure.
At the second step, we focus on the trailing 5 × 3 submatrix, enclosed by a box in the figure. We use the first column of the submatrix to generate a Householder transformation H2 that annihilates the elements with hats and multiply H2 into the submatrix to get the next array in the sequence.
The process continues until we run out of columns. The result is an upper triangular matrix R (hence the r's in the Wilkinson diagrams). More precisely, we have
    [ I_3  0  ] [ I_2  0  ] [ I_1  0  ]            [ R ]
    [ 0   H_4 ] [ 0   H_3 ] [ 0   H_2 ]  H_1 X  =  [ 0 ].
Since the product of orthogonal transformations is orthogonal, the Q^T of our
Implementation
10. You can code an actual algorithm either from the Wilkinson diagrams of Figure 8.1 or from the recursive derivation of the decomposition. Which approach you choose is a matter of personal preference. I tend to favor the recursive approach, which we will present here.
11. Even when they are based on a recursive derivation, matrix algorithms typically do not use recursion. The reason is not that the overhead for recursion is unduly expensive; it is not. Rather, many matrix algorithms, when they fail, do so at points deep inside the algorithm. It is easier to jump out of a nest of loops than to swim up from the depths of a recursion.
12. We begin by deciding where we will store the objects we are computing. Initially we will be profligate in our use of storage and will economize later on. Specifically, we will store the matrix X, the Householder vectors u, and the matrix R in separate arrays called X, U, and R.
13. A key to the implementation is to note that the first step is typical of all steps (end conditions excluded). Consequently, to describe the jth stage, we replace 1 by j and 2 by j+1. The northwest indexing tells us where the subarrays begin.
We will now write down the steps of the derivation and in parallel the corresponding code. We have the following correspondence between the matrix X and the array that contains it.
    x_jj, X_{j,j+1}   →   X[j:n, j], X[j:n, j+1:k]
The first step at the jth stage is to generate the Householder vector u_j:
    housegen(X[j:n, j], U[j:n, j], R[j, j])
Note that we have stored the Householder vector in the same position in the array U as the vector x_jj occupies in the array X. The norm of x_jj, with an appropriate sign, is the element ρ_jj of R and is moved there forthwith.
The next step is to compute H_j X_{j,j+1} = (I − u_j u_j^T) X_{j,j+1}. This is done in two stages as shown in (8.2):
    v^T = u_j^T X_{j,j+1}   →   v^T = U[j:n, j]^T X[j:n, j+1:k]
and
    X_{j,j+1} − u_j v^T   →   X[j:n, j+1:k] = X[j:n, j+1:k] − U[j:n, j] v^T
Finally, we update R:
    R[j, j+1:k] = X[j, j+1:k]
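Collecting those stages gives the following Python sketch (mine, not the book's; it stores U and R in separate arrays as described and reuses the housegen function from the earlier sketch):

import numpy as np

def householder_qr(X):
    """Orthogonal triangularization as described above: U holds the Householder
    vectors u_j and R is the upper triangular factor. Uses housegen from the
    sketch following the housegen algorithm."""
    X = np.array(X, dtype=float)
    n, k = X.shape
    U = np.zeros((n, k))
    R = np.zeros((k, k))
    for j in range(k):
        u, rho = housegen(X[j:, j])
        U[j:, j] = u
        R[j, j] = rho
        v = u @ X[j:, j+1:]                  # v^T = u_j^T X_{j,j+1}
        X[j:, j+1:] -= np.outer(u, v)        # X_{j,j+1} = X_{j,j+1} - u_j v^T
        R[j, j+1:] = X[j, j+1:]
    return U, R

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))
U, R = householder_qr(A)

# Check: applying H_1, ..., H_4 to A reproduces the triangular factor.
B = A.copy()
for j in range(4):
    B[j:, :] -= np.outer(U[j:, j], U[j:, j] @ B[j:, :])
print(np.allclose(B[:4, :], np.triu(R)), np.allclose(B[4:, :], 0))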
Lecture 9
Approximation

Operation Counts
The Frobenius and Spectral Norms
Stability of Orthogonal Triangularization
Error Analysis of the Normal Equations
Perturbation of Inverses and Linear Systems
Perturbation of Pseudoinverses and Least Squares Solutions
Summary
Operation counts
1. We now have two methods for solving least squares problems: the method of normal equations and the method of orthogonal triangularization. But which should we use? Unfortunately, there is no simple answer. In this lecture, we will analyze three aspects of these algorithms: speed, stability, and accuracy.
2. We have already derived an operation count (8.3) for the method of orthogonal triangularization. If n is reasonably greater than k, then the computation of the QR decomposition by Householder transformations requires about
    nk² additions and multiplications.                            (9.1)
3. Under the same assumptions, the solution by the normal equations requires about
    (1/2) nk² additions and multiplications.                      (9.2)
To see this, let x_j denote the jth column of X. Then the (i,j)-element of A = X^T X is
    a_ij = x_i^T x_j.
This inner product requires about n additions and multiplications for its formation. By symmetry we only have to form the upper or lower half of the matrix A. Thus we must calculate (1/2)k² elements at a cost of n additions and multiplications per element.
Once the normal equations have been formed, their solution by the Cholesky algorithm requires (1/6)k³ additions and multiplications. Since in least squares applications n is greater than k, often much greater, the bulk of the work is in the formation of the normal equations, not their solution.
4. Comparing (9.1) and (9.2), we see that the normal equations are superior. Moreover, in forming the cross-product matrix X^T X, we can often take advantage of special structure in the matrix X, which is harder to do in calculating the QR decomposition. Thus, if our only concern is speed, the normal equations win hands down.
In what follows we will use the symbol ‖·‖ to stand for any norm with the above properties.
8. If X is a matrix and X̃ is an approximation to X we can define the error in X̃ by
    ‖X̃ − X‖
and the relative error in X̃ by
    ‖X̃ − X‖ / ‖X‖.
These definitions are formed in analogy with the usual definitions of error and relative error for real numbers. However, because they summarize the status of an array of numbers in a single quantity, they do not tell the whole story. In particular, they do not provide much information about the smaller components of X̃. For example, let
    x = (1, 10⁻⁵)^T   and   x̃ = (1.00001, 2·10⁻⁵)^T.
Then the vector x̃ has a relative error of about 10⁻⁵ in the 2-norm. The first component of x̃ has the same relative error. But the second component, which is small, has a relative error of one! You should keep this example in mind in interpreting the bounds to follow.
Stability of orthogonal triangularization
9. We now turn to the effects of rounding error on the method of orthogonal triangularization. We will assume that the computations are carried out in floating-point arithmetic with rounding unit ε_M. If, for example, we are performing double precision computations in IEEE standard arithmetic we will have ε_M ≈ 10⁻¹⁶, sixteen being roughly the number of significant digits carried in a double-precision word.
10. Since the rounding-error analysis of orthogonal triangularization is quite involved, we will only state the final results.
22. Returning now to the system (A + G)b̃ = c, we have from (9.7) that
    b̃ − b ≈ −A⁻¹ G A⁻¹ c = −A⁻¹ G b.
Hence
    ‖b̃ − b‖ / ‖b‖ ≲ ‖A⁻¹ G‖.
By weakening this bound, we can write it in a form that is easy to interpret. Specifically, let κ(A) = ‖A‖ ‖A⁻¹‖. Then
    ‖b̃ − b‖ / ‖b‖ ≲ ‖A⁻¹‖ ‖G‖ = κ(A) ‖G‖ / ‖A‖.
More precisely:
      ‖E‖        b      bound       error
    3.2e-05      1     7.2e-02     4.8e-02
                 2     3.0e+03     1.8e+03
    5.0e-06      1     1.1e-02     2.3e-03
                 2     4.8e+02     3.2e+02
    5.3e-07      1     1.2e-03     6.1e-04
                 2     5.0e+01     2.0e+01
    4.2e-08      1     9.6e-05     5.5e-05
                 2     4.0e+00     5.3e-01
    5.0e-09      1     1.1e-05     6.0e-06
                 2     4.7e-01     7.8e-02
    4.7e-10      1     1.0e-06     6.5e-08
                 2     4.5e-02     2.7e-02
    5.5e-11      1     1.2e-07     1.5e-08
                 2     5.2e-03     2.5e-03
31. From the backward error bound (9.3) for orthogonal triangularization we find that solutions of the least squares problem computed by orthogonal triangularization satisfy
    ‖b̃ − b‖ / ‖b‖ ≲ c_nk [ κ(X) + κ²(X) ‖r‖ / (‖X‖ ‖b‖) ] ε_M.    (9.13)
On the other hand, solutions computed by the normal equations satisfy
    ‖b̃ − b‖ / ‖b‖ ≲ d_nk κ(A) ( 1 + ‖y‖ / (‖X‖ ‖b̃‖) ) ε_M.
Now ‖A‖ = ‖X^T X‖ ≤ ‖X‖² and ‖A⁻¹‖ = ‖X†(X†)^T‖ ≤ ‖X†‖². Hence
    κ(A) ≤ κ²(X)
(with equality for the spectral norm). If we incorporate the term ‖y‖/(‖X‖ ‖b̃‖)
32. Comparing (9.13) and (9.14), we see that in general we can expect orthog-
onal triangularization to produce more accurate solutions than the normal
equations. However, because the normal equations are computed by a simpler
algorithm the constant dnk will in general be smaller than cnk . Hence, if the
residual is small or (X ) is near one, the normal equations will compute a
slightly more accurate solution.
Summary
33. So which method should one use to use to solve least squares problems?
Actually, this is not the right question. What we have are two numerical tools
and we should be asking how should they be used. Here are some observations.
It is hard to argue against the normal equations in double precision. Very
few problems are so ill-conditioned that the results will be too inaccurate to
use. The method is easier to code and faster, especially when X has many
zero elements. Because the normal equations have been around for almost
two centuries, many auxiliary procedures have been cast in terms of them.
Although there are QR counterparts for these procedures, they are not always
immediately obvious. For this reason statisticians tend to prefer the normal
equations, and there is no reason why they should not.
On the other hand, if one wants a general purpose algorithm to run at all
levels of precision, orthogonal triangularization is the winner because of its sta-
bility. Unless the data is more accurate than the precision of the computation,
the user gets an answer that is no more inaccurate than he or she deserves.
Linear and Cubic Splines
79
Lecture 10
81
82 Son of Afternotes
l2
t0 t1 t2 t3 t4 t5
Then `i is clearly linear, and it satises `i (ti ) = fi and `i (ti+1 ) = fi+1 . More-
over, if we dene ` on t0 tn ] by
`(t) = `i(t) t 2 ti ti+1 ]
then ` is piecewise linear, interpolates the fi , and is continuous because
`i;1 (ti ) = fi = `i (ti)].2 The situation is illustrated in Figure 10.1 The piece-
wise linear curve is `. The smooth curve is a function f interpolated by ` at
the knots.
4. The spline ` is unique, since each function `i is uniquely determined by the
fact that it is linear and that `i (ti ) = fi and `i (ti+1 ) = fi+1 .
5. We have solved the problem of determining ` by breaking it up into a
set of individual linear interpolation problems. But our focus will be on the
function ` in its entirety. For example, given a set of knots t0 < < tn , we
can dene the space L of all linear splines over those knots. Note that this is
a linear space | linear combinations of continuous, piecewise linear functions
are continuous and piecewise linear.
6. We can also dene an operator L(f ) that takes a function f and produces
the linear spline ` that interpolates f at the knots. Note that if ` 2 L, then
` interpolates itself at the knots. Since the interpolant is unique we have the
following result.
If ` 2 L, then L(`) = `.
2
If you are alert, you will have noticed that the interpolation property insures continu-
ity i.e., the second condition in (10.1) is redundant. However, in some generalizations the
condition is necessary e.g., when the interpolation points and the knots do not coincide.
10. Linear and Cubic Splines 83
8. The bound (10.5) shows that the spline approximation converges as the
mesh size goes to zero. The rate of convergence is proportional to the square
of hmax. Although this convergence not very rapid as such things go, it is fast
enough the make it dicult to distinguish f from L(f ) in parts of Figure 10.1.
9. If we are lucky enough to be able to choose our knots, the local bound
(10.3) provides some guidance on how to proceed. We would like to keep the
error roughly equal in each interval between the knots. Since the error bound
is proportional to Mi h2i , where Mi represents the second derivative of f in
the interval, we pshould choose the knot spacing so that the hi are inversely
proportional to Mi . In other words, the knots should be densest where the
curvature of f is greatest. We see this in Figure 10.1, where it is clear that a
some of extra knots between t2 and t5 would reduce the error.
84 Son of Afternotes
c0 c1 c2 c3 c4 c5 c6
t0 t1 t2 t3 t4 t5 t6
Figure 10.2 shows a set of hat functions over a mesh of seven knots. To
distinguish them I have represented them alternately by solid and dashed lines.
13. It is easy to write down equations for the hat functions. Specically,
8 (t ; t )=h if t 2 t t ],
< i;1 i;1 i;1 i
ci (t) = : (ti+1 ; t)=hi if t 2 ti ti+1],
0 elsewhere.
The hat function c0 omits the rst of the above cases cn , the second.
14. In terms of hat functions, the linear spline interpolant has the following
form:
Xn
L(f ) = fi ci: (10.7)
i=0
In fact the sum on the right, being a linear combination of members of L, is
itself a member of L. Moreover
X
n
L(f )(tj ) = fici (tj ) = fj cj (tj ) = fj
i=0
so that the spline interpolates f . It is worth noting that the hat functions are
analogous to the Lagrange basis for polynomial interpolation.
Integration
15. Since L(f ) approximates f on the interval t0 tn ] it is natural to expect its
integral over the same interval to approximate the integral of f . From (10.7),
we see that the problem of integrating L(f ) reduces to that of integrating the
individual hat functions ci .
16. These integrals can be easily computed with the aid of the following
picture.
3
They are also called chapeau functions from the French for hat.
86 Son of Afternotes
h(i-1) h(i)
It shows the area under ci can is that of a triangle of height one and base
hi;1 + hi . Since the area of a triangle is one half its base times its height, we
have Z tn
ci (t) dt = hi;12+ hi :
t0
The hat functions c0 and cn must be considered separately:
Z tn h Z tn
0
c0(t) dt = 2 and cn (t) dt = hn2;1 :
t0 t0
22. The Rcomponents of the right-hand side of the normal equations are the
integrals cif . Usually the evaluation of these integrals will be the hardest
part of the procedure. However, we are helped by the fact that only integrals
of f (t) and tf (t) are required.
23. Turning to the solution of the normal equations, note that the matrix A
has three properties.
1. A is tridiagonal.
2. A is positive denite.
3. Each diagonal element of A is twice the sum of the o diagonal ele-
ments in the same row.
4
In this indexing scheme, the leading element of A is 00 .
88 Son of Afternotes
The rst property says that we can solve the normal equations by Gaussian
elimination in O(n) operations. The second tells us that pivoting is unneces-
sary in Gaussian elimination, which allows us to reduce the order constant in
the work. The third property | an example of strict diagonal dominance |
says that if the sizes of the mesh spacings do not vary too abruptly the system
will be well conditioned and we can expect accurate solutions to the normal
equations. (We will give a systematic treatment of diagonally dominant ma-
trices in Lecture 24).
24. We can also t linear splines to sets of points. Specically, suppose we are
given m points (xk yk ). Then we can attempt to determine a linear spline `
over the knots ti such that
X
yk ; `(xk )]2 = min:
k
Note that the abscissas xk are not in general the same as the knots.
The procedure is much the same as in the continuous case. We expand ` in
hat functions and then form the normal equations. For example, the elments
of the matrix of the normal equations are
X
ij = ci (xk )cj (xk ):
k
In practice, we would only sum over indices k for which both ci (xk ) and cj (xk )
are nonzero.
Once again, the matrix A of the normal equations is tridiagonal. If there
is at least one xk between each knot, it will also be positive denite. However,
we cannot guarantee that the system is well conditioned. The trouble is that
if there are no xk 's in the intervals supporting one of the hat functions, the
corresponding row of the normal equations will be zero. Moving a point to
slightly inside the union of these intervals will make the system nonsingular
but ill-conditioned. Generally speaking, you should select knots so that each
hat function has some points near its peak.
Implementation issues
25. If t 2 ti ti+1 ], we can evaluate L(f ) by the formula
; fi (t ; ti ):
L(f )(t) = fi + fti+1 ;
i+1 t i
The evaluation of this formula requires three additions, one division, and one
multiplication.
If it is required to evaluate the spline many times over the course of a
calculation, it may pay to precompute the numbers
; fi
di = fti+1 ;
i+1 t i
10. Linear and Cubic Splines 89
of the spline, one could preface (10.10) with a special test for this case, which
would save two comparisons. But the real lesson to be learned from all this is:
Know your application and make sure your software ts.
Lecture 11
wood that determines a curve and has ended up as the curve itself.
93
94 Son of Afternotes
and then use the above conditions to write down linear equations for the co-
ecients. This approach has two disadvantages. First it results in a system
with an unnecessarily large band width whose coecients may vary by many
orders of magnitude. Second, the system has 4n unknows, but the conditions
provide only 4n ; 2 equations. This indeterminacy is, of course, intrinsic to
the problem but the direct approach gives little insight into how to resolve it.
Accordingly, we will take another approach.
Derivation of the cubic spline
5. We begin by assuming that we know the second derivatives si = g 00(ti ).
Then p00i (ti ) = si and p00i (ti+1 ) = si+1 . Since pi is linear, we can write it in the
form
p00i (t) = hsi (ti+1 ; t) + shi+1 (t ; ti ): (11.1)
i i
Integrating (11.1), we nd that
p0i (t) = bi ; 2shi (ti+1 ; t)2 + s2ih+1 (t ; ti )2 (11.2)
i i
where bi is a constant to be determined. Integrating again, we get
pi (t) = ai + bi(t ; ti ) + 6shi (ti+1 ; t)3 + shi+1 (t ; ti )3
i i
where again ai is to be determined.
6. The coecients ai and bi are easily calculated. From the condition pi (ti ) =
fi , we have
2
ai + si6hi = fi
or 2
ai = fi ; si6hi :
It is worth noting that ai is just the function value fi adjusted by the second
term. This term has the dimensions of a function, since it is a second derivative
multiplied by the square of a length along the t-axis.
From the condition pi (ti+1 ) = fi+1 , we nd that
2
ai + bihi + si+16 hi = fi+1
or after some manipulation
bi = di ; (si+1 ; si )hi (11.3)
where
di = fi+1h; fi :
i
11. Linear and Cubic Splines 95
will not have zero second derivatives at the endpoints, and imposing such a
condition on g will worsen the approximation.
11. In the absence of information about the function at the endpoints, a good
alternative is to require that p0 = p1 and pn;2 = pn;1 . Since p0 and p1 have
the same rst and second derivatives at t1 and are cubic, all we need to do is
to require that the third derivatives be equal. Since
p000i = si+1h; si
i
we get the equation
h1 s0 + (h0 + h1 )s1 ; h0 s2 = 0
which can be appended to the beginning of the system (11.4). You should
verify that after one step of Gaussian elimination, the system returns to a
tridiagonal, diagonally dominant system. The condition at the other end is
handled similarly. These conditions are sometimes called not-a-knot conditions
because the form of the approximation does not change across t1 and tn;1 .
12. If one species the derivatives at the endpoints, the result is called a
complete spline. The details are left as an exercise.
11. Linear and Cubic Splines 97
Convergence
13. The convergence of complete cubic splines can be summarized as follows.
Let f have a continuous fourth derivative, and let g be the complete
cubic spline interpolating f at the knots t0 : : : tn . Let hmax be the
maximum mesh size. Then
1: kf ; gk1 3845 kf (iv)k1h4max
2: kf 0 ; g0k1 241 kf (iv)k1h3max (11.7)
3: kf 00 ; g00k1 38 kf (iv)k1h2max
14. The spline converges to f as the fourth power of the maximum step size.
The derivatives also converge, albeit more slowly. Although we have not given
the result here, even the third derivative converges, provided the mesh sizes go
to zero in a uniform manner.
Locality
15. As nice as the bounds (11.7) are, they are misleading in that they depend
on the maximum mesh size hmax. We have already encountered this problem
with the bound
kf ; L(f )k1 81 kf 00k1 h2max
for piecewise linear interpolation cf. (10.5)]. Although it appears to depend on
the maximum mesh size, in reality each of the linear functions `i of which L(f )
is composed has its own error bound. This is the reason we can take widely
spaced knots where the second derivative of f is small without compromising
the accuracy of the approximation in other parts of the interval t0 tn ]. It
would be a shame if we couldn't do the same with cubic splines. It turns out
that we can.
16. To see what is going on, it will be convenient to start with equally spaced
knots. In this case the equations (11.4) have the form
h s + 2h s + h s = d ; d :
6 i 3 i+1 6 i+2 i+1 i
If we divide each equation by 2h=3, the equations become
1 1
4 si + si+1 + 4 si+2 = ci
where
ci = 3(di+12h; di ) :
If we specify the second derivatives at s0 and sn , then we may write the equa-
tions in the form
(I ; T )s = c
98 Son of Afternotes
1e+00 3e;01 8e;02 2e;02 6e;03 1e;03 4e;04 1e;04 3e;05 7e;06
3e;01 1e+00 3e;01 8e;02 2e;02 6e;03 2e;03 4e;04 1e;04 3e;05
8e;02 3e;01 1e+00 3e;01 8e;02 2e;02 6e;03 2e;03 4e;04 1e;04
2e;02 8e;02 3e;01 1e+00 3e;01 8e;02 2e;02 6e;03 2e;03 4e;04
6e;03 2e;02 8e;02 3e;01 1e+00 3e;01 8e;02 2e;02 6e;03 1e;03
1e;03 6e;03 2e;02 8e;02 3e;01 1e+00 3e;01 8e;02 2e;02 6e;03
4e;04 2e;03 6e;03 2e;02 8e;02 3e;01 1e+00 3e;01 8e;02 2e;02
1e;04 4e;04 2e;03 6e;03 2e;02 8e;02 3e;01 1e+00 3e;01 8e;02
3e;05 1e;04 4e;04 2e;03 6e;03 2e;02 8e;02 3e;01 1e+00 3e;01
7e;06 3e;05 1e;04 4e;04 1e;03 6e;03 2e;02 8e;02 3e;01 1e+00
where 0 0 ;1 1
B
B; 1 04 ; 1 CC
B 4 . . . .4 . . CC
T =B
B . . . CC
B
@ ; 14 0 ; 41 A
; 1
4 0
Let
R = (I ; T );1:
Then since s = Rc, we have
si = ri0c0 + ri1c1 + + rin;2 cn;2 + rin;1 cn;1:
Thus a change of in the j th-element of c makes a change of rij in si . If rij
is small, si is eectively independent of cj . To give them a name, let us call
the rij transfer coecients.
Figure 11.2 exhibits the matrix of transfer coecients for n = 11. Along
the diagonal they are approximately one, and they drop o quickly as you move
away from the diagonal. In other words, if two knots are well separated, the
function values near one of them have little do do with the second derivative
calculated at the other. For example, in Figure 11.2 the inuence of the of the
rst knot on the last is reduced by a factor of about 10;5 .
17. This local behavior of splines is a consequence of the fact the the matrix
I ; T is diagonally dominant and tridiagonal. To see this we need two facts.
The rst is the expansion (I ; T );1 in an innite series.
Let k k denote a consistent matrix norm (see x9.7). If kT k < 1,
then I ; T is nonsingular, and
(I ; T );1 = I + T + T 2 + T 3 + : (11.8)
11. Linear and Cubic Splines 99
A proof of this theorem can be found in many places (you might try your hand
at it yourself). The series (11.8) is a matrix analogue of the geometric series.
It is called a Neumann series.
18. In our application it will be convenient to use the matrix 1-norm dened
by X
kAk1 = max i
jaij j:
j
This norm | which for obvious reasons is also called the row-sum norm | is
consistent. For our application
kT k1 = 21 :
Consequently the matrix of transfer coecients is represented by the Neumann
series (11.8).
19. The second fact concerns the product of tridiagonal matrices. Let us rst
look at the product of two such matrices:
0X X O O O O
1 0X X O O O
1 0X
O X X O O O
1
B
B X X X O O OCC BBX X X O O CC BBX
O X X X O OCC
B
B CC BBO CC BBX CC
B
B
O X X X O O
CC BBO X X X O O
CC = BBO X X X X O
CC :
B
@O
O O
O
X
O
X
X
X
X
O
X
CA B@O O
O
X
O
X
X
X
X
O
X
CA B@O X
O
X
X
X
X
X
X
X
X
CA
O O O O X X O O O O X X O O O X X X
As can be veried from the above Wilkinson diagrams, the product is penta-
diagonal | it has one more super- and subdiagonal. You should convince your
self that further multiplication of this product by a tridiagonal matrix ands
two more super- and subdiagonals. In general we have the following result.
Let T be tridiagonal. If k < ji ; j j, then the (i j )-element of T k is
zero. Otherwise put
eTi T k ej = 0 for k < ji ; j j: (11.9)
20. We are now in a position to get a bound on the (i j )-element of R. We
have
jrij j = jeTi (I ; T );1ej j
= jeTi I ej + eTi T ej + eTi T 2 ej + j Neumann series expansion
= jeTi T ji;j j ej + eTi T ji;j j+1 ej + j (by 11.9)
kT kj1i;jj + kT kj1i;jj+1 +
= kT kj1i;j j =(1 ; kT k1) Sum of geometric series.
More generally we have established the following result.
100 Son of Afternotes
101
Lecture 12
Eigensystems
A System of Dierential Equations
Complex Vectors and Matrices
Eigenvalues and Eigenvectors
Existence and Uniqueness
Left Eigenvectors
Real Matrices
Multiplicity and Defective Matrices
Functions of Matrices
Similarity Transformations and Diagonalization
The Schur Decomposition
103
104 Son of Afternotes
The conjugate A" of a matrix A is the matrix whose elements are "ij .
The conjugate transpose of a matrix A is the matrix
AH = A"T:
(The superscript H is in honor of the French mathematician Hermite). If we
dene the inner product of two complex vectors x and y to be xH y , then
X
xHx = jxj j2 def
= kxk22
j
becomes a norm | the natural generalization of the 2-norm to complex matri-
ces. Since xH y is a true inner product, the notion of orthogonality and all its
consequences hold for complex vectors and matrices. (See x5.16.)
9. The passage to complex matrices also involves a change of terminology. A
matrix is said to be Hermitian if
A = AH :
A matrix U is unitary if it is square and
U HU = I:
Hermitian and unitary matrices are the natural generalizations of symmetric
and orthogonal matrices | natural in the sense that they share the essential
properties of their real counterparts.
Eigenvalues and eigenvectors
10. Let A 2 Cnn . The pair ( x) is an eigenpair of A if
x 6= 0 and Ax = x:
The scalar is called an eigenvalue of A and the vector x is called an eigen-
vector.
11. The word eigenvalue is a hybrid translation of the German word Eigenwert,
which was introduced by the mathematician David Hilbert. The component
eigen is cognate with our English word \own" and means something like \par-
ticular to" or \characteristic." At one time mathematical pedants objected to
the marriage of the German eigen to the English \value," and | as pedants
will | they made matters worse by coining a gaggle of substitutes. Thus in
the older literature you will encounter \characteristic value," \proper value,"
and \latent root" | all meaning eigenvalue.
Today the terms \eigenvalue" and \eigenvector" are the norm. In fact
\eigen" has become a living English prex, meaning having to do with eigen-
values and eigenvectors. Thus we have eigenpairs, eigensystems, and even
eigencalculations. When I was a graduate student working at Oak Ridge, my
oce was dubbed the Eigencave after Batman's secret retreat.
106 Son of Afternotes
lecture (xx12.2{12.4) does not work for defective matrices, since a defective
matrix does not have n linearly independent eigenvectors.2
21. The eigenvalues of defective matrices are very sensitive to perturbations in
the elements of the matrix. Consider, for example, the following perturbation
of the matrix of (12.5): !
A~ = 1 11 :
The characteristic equation of A~ is ( ; 1)2 = , so that the eigenvalues are
1 :
p
Thus a perturbation of, say, 10;10 in A introduces a perturbation of 10;5 in
its eigenvalues!
Sensitivity is not the only ill consequence of defectiveness. Defectiveness
also slows down the convergence of algorithms for nding eigenvalues.
Functions of matrices
22. Many computational algorithms for eigensystems transform the matrix so
that the eigenvalues are more strategically placed. The simplest such transfor-
mation is a shift of origin.
Suppose that Ax = x. Then (A ; I )x = ( ; )x. Thus subtracting
a number from the diagonal of a matrix shifts the eigenvalues by . The
eigenvectors remain unchanged.
23. From the equation Ax = x it follows that
A2 x = A(Ax) = Ax = 2x:
Hence squaring a matrix squares its eigenvalues. In general, the eigenvalues of
Ak are the kth powers of the eigenvalues of A.
24. If f (t) = 0 + 1 t + + n tn is a polynomial, we can dene
f (A) = 0 I + 1A + + n An :
On multiplying f (A) and x and simplifying, we get
Ax = x =) f (A)x = f ()x:
Thus if we form a polynomial in A the eigenvalues are transformed by the same
polynomial.
2
Defective eigenvalues are associated with special solutions like te t t2 e t : : : :
12. Eigensystems 109
32. To establish this result, let an ordering for the eigenvalues of A be given,
and let 1 be the rst eigevalue in that ordering. Let v1 be an eigenvector
corresponding to 1 normalized so that kv1k2 = 1. Let the matrix (v1 V2) be
unitary. (This matrix can be constructed by computing a QR decomposition
of the vector v1 . See x13.2.)
Consider the matrix
H 1 v H AV2
! !
(v1 V2) A(v1 V2) = Vv1HAv
H 1 = 1v1T v1 hH
H 1V2H v1 B
2 Av1 V2 AV2
where hH = v1H AV2 and B = V2H AV2 . By the orthonormality of the columns
of (v1 V2), we have v1H v1 = 1 and V2H v1 = 0. Hence
H
!
(v1 V2) A(v1 V2) = 01 hB :
H
Now " H
!#
det I ; 01 hB = ( ; 1) det(I ; B ):
Hence the eigenvalues of B are the eigenvalues of A excluding 1 . By an
obvious induction, we can nd a unitary matrix W such that S = W H BW
is upper triangular with the eigenvalues of A (excluding 1) appearing in the
proper order. If we set
U = (v1 V2W )
it is easily veried that U is unitary and
H
!
T U T AU = 01 h SW :
Eigensystems
Real Schur Form
Block Diagonalization
Diagonalization
Jordan Canonical Form
Perturbation of a Simple Eigenvalue
Real Schur form
1. Most routines for solving dense nonsymmetric eigenvalue problems begin
by computing a Schur form. There are three reasons. First, the form itself is
useful in a number of applications. Second, it can be computed stably using
orthogonal transformations. Third if eigenvectors are needed, they can be
computed cheaply from the Schur form.
If the matrix in question is real, however, the Schur form may be complex.
Because complex arithmetic is expensive, complex matrices are to be avoided
whenever possible. Fortunately, there is a real variant of the Schur form.
Let A 2 Rnn . Then there is an orthogonal matrix U such that
U T AU has the block triangular form
0T T12 T13 T1k
1
BB 011 T22 T23 T2k C
C
B0 T3k C
T = U TAU = BBB . 0 T33 CC :
@ .. .. .. .. C
. . . A
0 0 0 Tkk
The diagonal blocks are either 1
1, in which case they are real
eigenvalues of T , or 2
2, in which case they contain a pair of
complex conjugate eigenvalues of T . The matrix T is called a real
Schur form.
2. The existence of the form is established in the same way as the existence of
the Schur form, except that we pick o conjugate pairs of eigenvalues together.
In outline, let ( + i x + iy ) be a complex eigenpair, so that
A(x + iy ) = ( + i )(x + iy ):
Then it is easily veried that x and y are linearly independent and
!
A(x y ) = (x y ) ; (x y)M:
113
114 Son of Afternotes
Hermitian matrices
13. About the only way you can determine if a general matrix is diagonalizable
is to try to compute its eigenvectors and see if you run into trouble. Some kinds
of matrices, however, are known a priori to be diagonalizable. Of these none
is more important than the class of Hermitian matrices (x12.9).
14. If A is Hermitian, then aii = a"ii . Since only real numbers can be equal to
their conjugates, the diagonal elements of a Hermitian matrix are real.
15. In transforming Hermitian matrices it is important to preserve the property
of being Hermitian. The most natural class of transformation are unitary
similarity transformations, since if A is Hermitian
(U H AU )H = U H AU:
In particular, any Schur form of A is Hermitian. But if a triangular matrix
is Hermitian, then it is both lower and upper triangular | i.e., it is diagonal.
Moreover, the diagonal elements are their own conjugates and are therefore
real. Hence we have the following important result.
Let A be Hermitian. Then there is a unitary matrix U such that
U H AU = diag(1 : : : n)
where the i are real. Equivalently, a Hermitian matrix has real
eigenvalues and their eigenvectors can be chosen to form an or-
thonormal basis for Cn .
16. It is worth noting that the Hermitian matrices are an example of a more
general class of matrices called normal matrices. A matrix A is normal if
AAH = AHA. It can be shown that any Schur form of a normal matrix is diag-
onal. Hence normal matrices have a complete set of orthonormal eigenvectors.
However, their eigenvalues may be complex.
Aside from Hermitian matrices, the only natural class of normal matrices
are the unitary matrices.
Perturbation of a simple eigenvalue
17. Before attempting to devise an algorithm to compute a quantity, it is
always a good idea to nd out how sensitive the quantity is to perturbation in
the data. For eigenvalues this problem takes the following form. Let
Ax = x
and let
A~ = A + E
13. Eigensystems 119
Hence,
hH + y2H C = y2H:
It follows that
6= 0, for otherwise would be an eigenvalue of C . But
yH x =
, which establishes the result.
19. Returning now to the perturbation problem, let us write the perturbed
equation in the form
(A + E )(x + h) = ( + )(x + h)
where it is assumed that h and go to zero along with E . Expanding and
throwing out second order terms we get
Ax + Ah + Ex = + h + x:
120 Son of Afternotes
yH Ex
= yHx : (13.6)
Moreover,
j~ ; j sec 6 (x y )kE k2 + O(kE k22):
21. Thus the secant of the angle between the left and right eigenvectors of an
eigenvalue is a condition number for the eigenvalue. It assumes its minimum
value of one when the angle is zero and becomes innite as the angle approaches
=2 | that is, as the left and right vectors become orthogonal. Since the left
and right eigenvectors of a Hermitian matrix are the same, the eigenvalues of
Hermitian matrices are as well-conditioned as possible.
A Jordan block, on the other hand, has orthogonal left and right eigenvec-
tors, and its eigenvalue could be said to be innitely ill conditioned. Actually,
the eigenvalue is nonanalytic, and our rst order perturbation theory does not
apply. However, for matrices near a Jordan block, the eigenvectors will be very
13. Eigensystems 121
nearly orthogonal, and the eigenvalues very ill conditioned. For example, them
matrix !
A = 0 10
p
has the eigenvalue with left and right eigenvectors
!
yT
p
= ( 1) and x = p1
Eigensystems
A Backward Perturbation Result
The Rayleigh Quotient
Powers of Matrices
The Power Method
A backward perturbation result
1. Let ( x) be an approximate eigenpair of A with kxk2 = 1. In trying to
assess the quality of this pair it is natural to look at how well and x satisfy
the equation Ax = x. Specically, consider the residual
r = Ax ; x:
If r is zero, then ( x) is an eigenpair of A. If r is small, what can we say
about the accuracy of and x?
2. Unfortunately, we can't say much in general. Suppose, for example, that
!
A = 1 11
where is small. If x = (1 0)T and = 1, then
!
krk = 0 = :
2 2
p
But as we have seen (x12.21) the eigenvalues of A are 1 . Thus if = 10;10,
the norm of the residual r underestimates the perturbation in the eigenvalues
by ve orders of magnitude.
3. Nonetheless, the residual contains useful information about the eigensystem,
as the following result shows.
For any x 6= 0 and any scalar let
r = Ax ; x:
Then there is a matrix H
E = ; krxxk2
2
with
kE kp = kkxrkk22 p = 2 F
such that
(A + E )x = x:
123
124 Son of Afternotes
40
35
30
25
A(1,2)
20
15
10
0
0 100 200 300 400 500 600 700 800 900 1000
k
17. Our rst iterative method, the power method, attempts to approximate
an eigenvector of A. Let u0 be given. Having computed uk;1 , we compute uk
according to the formula
uk = k;1 Auk;1 : (14.6)
Here k;1 is a nonzero scaling factor. In practice, k;1 will be a normalizing
constant, like kAuk;1 k;2 1 , chosen to keep the components of the uk within the
range of the machine. But since multiplication by a nonzero constant does not
change direction of a vector, we can choose any nonzero value for k . We will
take advantage of this freedom in analyzing the convergence of the method.
It follows immediately from (14.6) that
uk = k;1 0 Ak u0: (14.7)
18. To analyze the convergence of the power method, suppose that we can
order the eigenvalues of A so that
j1j > j2j j3j jnj:
(If this is possible, we call 1 the dominant eigenvalue of A.) Since 1 is simple,
we can nd a matrix
X = (x1 X2 )
such that !
;1
X AX = 0 B 1 0
equal to its spectral radius, then the conjugate eigenvalue has the same magni-
tude. Even when there is a dominant eigenvalue, if it is close to the next largest
eigenvalue in magnitude, the convergence will be slow{often heartbreakingly
so. Although there are xes involving iterating with more than one vector,
the power method and its variants must be considered to be a special-purpose
tool, not an all-purpose algorithm.
Lecture 15
Eigensystems
The Inverse Power Method
Derivation of the QR Algorithm
Local Convergence Analysis
Practical Considerations
Hessenberg Matrices
The inverse power method
1. One cure for the diculties of the power method is to transform the matrix
so that the eigenvalues are better situated. For example, a shift of the form
A ; I may improve the ratio of the dominant eigenvalue to the subdominant
eigenvalue. However, the real payo comes when we combine a shift with an
inversion.
2. Let be an approximation to a simple eigenvalue 1 of A. We have seen
that the eigenvalues of (A ; I );1 are (i ; );1 . Now as approaches 1 the
number (1 ; );1 approaches innity, while the numbers (i ; );1 (i 6= 1) will
approach xed limits. It follows that as approaches 1 , the power method
will converge ever more rapidly to the corresponding eigenvector.
Figure 15.1 illustrates how powerful this eect can be, even when is not
a particularly good approximation to . The 's in the topmost plot represent
the eigenvalues of a matrix of order ve, and the + represents an approximation
to one of them. The 's in the lower part represent the shifted-and-inverted
eigenvalues. Note how the distinguished eigenvalue has separated itself from
the rest, which are now huddling together near the origin.
3. The power method applied to a shifted, inverted matrix is called the inverse
power method. Of course we do not actually invert the matrix instead we solve
a linear system. Given a starting vector u and an approximate eigenvalue ,
one step of the inverse power method goes as follows.
1. Solve the system (A;I )v = u
2. u^ = v=kv k2
The processes may be iterated by replacing u by u^. The normalization by the
2-norm is not obligatory | any of the standard norms would do as well.
4. Since the inverse power method is a variant of the power method, the above
analysis applies unchanged. However, one objection to the method must be
addressed in detail. If we choose very near to an eigenvalue, the system
(A ; I )v = u will be ill-conditioned and will be solved inaccurately.
133
134 Son of Afternotes
2
2.5 2 1.5 1 0.5 0 0.5 1 1.5
1.5
0.5
0
0.5 0 0.5 1 1.5 2 2.5 3 3.5 4
The answer to this objection is that it that the system is indeed solved
inaccurately but that it does not matter. For if we use a stable algorithm to
solve the system, then the computed solution satises
(A + E ) ; I ]v = u
where kAk=kE k is of the same order of magnitude as the rounding unit. If the
target eigenpair is not too ill-conditioned, it will not be much perturbed by
the error E , and the method will converge.
Another way of putting it is to observe that the errors in the solution tend
to lie along the direction of the eigenvector we are seeking and disappear when
we normalize v . The failure to grasp this fact has resulted in a lot of misguided
papers.
5. The size of the intermediate vector v gives a lower bound on the quality of
the solution. For if kv k2 is large, the residual
(A ; I )^u = kvuk
2
is small, and by the result of x14.3 the pair (^u ) is an exact eigenpair of a
nearby matrix.
15. Eigensystems 135
6. The advantages of the inverse power method are its swift convergence | in
many applications, one iteration is enough | and the fact that it can get at
eigenvectors in the interior of the spectrum. Its drawbacks are that it requires a
good approximation to the eigenvalue whose eigenvector is sought and that one
must solve the system (A ; I )v = u. For a dense matrix this solution requires
O(n3 ) operations, although once a factorization has been computed subsequent
iterations require only O(n2 ) operations. For special matrices, however, the
expense may be considerably less.
Derivation of the QR algorithm
7. The QR algorithm is an iterative method for computing a Schur form of
a matrix A. Starting with A0 = A, it proceeds to reduce A iteratively to
triangular form by a sequence of unitary similarities:
Ak+1 = QHk Ak Qk k = 0 1 2 : : ::
Although the formulas by which the Qk are generated are easy to write down,
they are not in themselves very informative. We will therefore derive the
formulas by exploiting the connection of the algorithm with the inverse power
method. For the nonce we will be considering only one step of the iteration,
and we will drop the subscript k.
8. The triangularization is an iterative version of the reduction to Schur form
(see x12.31). But in this case, we start at the bottom of the matrix. Specically,
write A in the form !
B
A = gH : h (15.1)
Then the transformations Q are chosen to reduce the norm of g H. Once g is
eectively zero, we can continue with the process with the smaller matrix B ,
a process known as deation.
9. Now an ideal choice of Q would be a matrix of the form
Q = (Q q)
where q is a left eigenvector of A corresponding to, say, the eigenvalue . For
in that case, H !
Q AQ QH Aq
H
Q AQ =
0
and we could proceed immediately to the reduction of QH AQ .
10. Lacking an exact eigenvector, we might try to compute an approximate
eigenvector to use in dening Q. If in (15.1) the vector g H is small, the vector
136 Son of Afternotes
11. The heart of the QR algorithm is an elegant way of generating the entire
matrix Q = (Q q ) in one fell swoop. Specically, let
A ; I = QR (15.3)
be the QR factorization of A ; I . Then
eTn R = nneTn : (15.4)
Now write (15.3) in the form
QH = R(A ; I );1 :
Multiplying by eTn , we get
qH eTn QH = nneTn (A ; I );1 :
Since kq k2 = 1, it follows that the last column of the unitary matrix Q is the
vector in (15.2).
12. Having generated Q, there is an equally elegant way to eect the similarity
transformation QH AQ. From (15.3) it follows that
RQ = QH (A ; I )Q = QH AQ ; I:
Hence
QH AQ = RQ + I:
In other words, we multiply the factors Q and R in reverse order and add back
in the shift.
15. Eigensystems 137
13. Combining all this, we arrive at the basic formulas for the QR algorithm.1
Given A0 iterate as follows.
1. for k = 0 1 2 : : :
2. Choose a shift k
3. Factor Ak ;k I = Qk Rk , where Q is unitary and
eTn R = nnen (15.5)
4. Ak+1 = RQ+k I
5. end for k
Convergence of this kind is called quadratic, and it is very fast indeed. If, for
example,
2 = 1 and kg0k2 = 0:1, the bounds on the subsequent iterates are
10;2 , 10;4, 10;8 , 10;16 , : : : .
21. If A is Hermitian convergence occurs with blinding speed. For in that case
g = h, and
kgk+1k 2kgkk32:
If = 1 and kg0k = 0:1, the bounds on the subsequent iterates are 10;3 , 10;9 ,
10;27 , 10;91 , : : : .
Practical Considerations
22. Although the QR algorithm has excellent convergence properties, it is not
as it stands a practical way to compute eigenvalues. In the rst place, the
algorithm requires that we compute the QR factorization of Ak . For a full
matrix, this is an O(n3) process and would lead to an O(n4 ) algorithm. It is
highly desirable to get the count down to O(n3).
A second problem concerns real matrices. Since the QR algorithm is an
implementation of the inverse power method with shift , it is imperative that
be near the eigenvalue we are currently seeking. But it that eigenvalue
is complex, we must use a complex shift, in which case the iterates Ak will
become complex, and the work to implement the algorithm will more than
double. Moreover, because of rounding error the eigenvalues will not occur in
exact conjugate pairs.
23. A cure for the rst problem is to rst reduce the matrix to a special
form | called Hessenberg form | that is cheap to factor and is preserved by
the iteration. The second problem is solved by calculating the real Schur form
described in x13.1. The techniques by which the QR algorithm can be adapted
to compute the real Schur form are connected with the properties of Hessenberg
matrices, which we turn to now.
24. Since our nal goal is to devise an algorithm for real matrices, from now
on we will assume that our hero A is real.
Hessenberg matrices
25. A matrix A 2 Rnn is upper Hessenberg if
i > j + 1 =) aij = 0:
An upper Hessenberg matrix of order 5 has a Wilkinson diagram of the form
X X X X X
X X X X X
O X X X X : (15.11)
O O X X X
O O O X X
140 Son of Afternotes
26. The rst thing that we need to establish is that the QR algorithm preserves
Hessenberg form. We will prove it by describing the reduction QT A = R and
then the back multiplication RQ. To eect the reduction we will use little
2
2 Householder transformations that act on two rows or two columns of the
matrix.2
Let A have the form (15.11). The rst step of the reduction involves
forming the product H1A which operates on the rst two rows and introduces
a zero in the (2 1)-element. We represent this symbolically by the diagram
! X X X X X
! ^
X X X X X
O X X X X :
O O X X X
O O O X X
Here the arrows point to the rows operated on by the transformations and
the hat indicates the element that will be annihilated. For the second step we
compute H2(H1A) as follows:
X X X X X
! O X X X X
! O ^
X X X X :
O O X X X
O O O X X
The result is the triangular matrix R = H4H3 H2H1 A. The matrix QT is the
product H4H3 H2H1 .
2
In computational practice we would work with a more convenient class of transformations
called plane rotations, which we will introduce later.
15. Eigensystems 141
We must now compute RQ = RH1 H2H3 H4. The product RH1 is repre-
sented symbolically thus:
# #
X X X X X
^
O X X X X
O O X X X
:
O O O X X
O O O O X
Here the arrows point to the columns that are being combined and the hat
is over a zero element that becomes nonzero. The subsequent multiplications
proceed as follows.
# #
X X X X X
X X X X X
(RH1)H2 : O ^
O X X X
:
O O O X X
O O O O X
# #
X X X X X
X X X X X
(RH1H2 )H3 : O X X X X
O O ^
O X X
O O O O X
# #
X X X X X
X X X X X
(RH1H2 )H3 : O X X X X
O O X X X
O O O ^O X
The nal matrix is in Hessenberg form as required.
27. The algorithm is quite ecient. The product H1 A is eectively the product
of a 2
2 matrix with the rst two rows of A. Since these rows are of length
n the process requires about 2n additions and 4n multiplications. The second
product combines two rows of length n ; 1 and requires about 2(n ; 1) additions
and 4(n ; 1) multiplications. Thus number of additions for the reduction to
triangular form is
Xn
2 i = n2
i=1
142 Son of Afternotes
Eigensystems
Reduction to Hessenberg form
The Hessenberg QR Algorithm
Return to Upper Hessenberg
Reduction to Hessenberg form
1. We have seen that the QR algorithm preserves Hessenberg form and can be
implemented in O(n2) operations. But how do we get to a Hessenberg matrix
in the rst place? We will now describe a reduction to Hessenberg form by
Householder transformations. The reduction bears a passing similarity to the
reduction to triangular form in x8.14 however, the fact that we must work with
similarity transformations prevents us from getting all the way to triangularity.
2. We will begin in the middle of things. Suppose we have reduced our matrix
A to the form illustrated for n = 8 in the following Wilkinson diagram:
0X X X X X X X X
1
BBX X X X X X X X CC
BBO CC
BBO X X X X X X X
CC
BB O X X X X X X CC : (16.1)
BBO O O X X X X X CC
BBO O O b
X X X X X CC
@O O O b
X X X X X A
O O O b
X X X X X
143
144 Son of Afternotes
the matrix to Hessenberg form. Then apply the QR algorithm with Rayleigh
quotient shifts | except for the rst which should be complex to move the
interation into the complex plane. Typically, the algorithm will ounder for
a few iterations and then start converging. When ann;1 becomes suciently
small (say compared with jann j + jan;1n;1 j), we can set it to zero and continue
by using an;1n;1 as a shift. It turns out that as the subdiagonal element next
to the current shift is converging quadratically to zero, the other subdiagonal
elements also tend slowly to zero.1 Consequently, as we move up the matrix,
it takes fewer and fewer iterations to nd an eigenvalue.
9. There is a subtlety about applying the transformations that is easy to
overlook. Suppose that we have reduced the matrix to the form
0 1
a a a a aaa a
B
B a a a a aaC
aCa
B
B 0 a a a aaC
aCC
a
B
B 0 0 a a aaC
a a
B
B C:
B 0 0 0 a aaC
a a
C
B
B 0 aC
0 0 0 0 a C
a
B
@0 aC A
0 0 0 0 0 a
0 0 0 0 0 0 0 a
Then we only have to triangularize down to the horizontal line. Now if we
want to compute a Schur form, then we must apply the transformations across
the entire matrix. But if we are interested only in the eigenvalues then we need
only work with the leading principal submatrix indicated by the lines below:
0a a a a aaa a
1
BBa a a a aaC
aCa
BB0 a a a aaC
aCa
BB0 0 a a aaC
aCC
a
BB :
BB0 0 0 a aaC
a a
CC
BB0 aC
0 0 0 0 a a
@0 aC
0 0 0 0 0 aA
0 0 0 0 0 0 0 a
10. If we want the Shcur form, we must accumulate the transformations that
were generated during the reduction to Hessenberg form and in the course of
the QR iterations. Specically, beginning with U = I , each time we perform a
similarity transformation of the form A H H AH , we also perform the update
U UH .
1
This is a consequence of a relation between the unshifted QR algorithm and the power
method.
146 Son of Afternotes
11. We can compute eigenvectors from the Schur form. To compute an eigen-
vector corresponding to an eigenvalue , we partition the Schur form to expose
the eigenvalue: 0 1
B@B0 c eDT CA :
0 0 F
We then seek an eigenvector from the equation
0 10 1 0 1
B@ 0 T CA B@w1 CA = B@w1 CA :
B c D
0 0 F 0 0
It follows that
(B ; I )w = ;c: (16.3)
This is an upper triangular system that can be solved for w. The eigenvector
of the original matrix is then given by
0 1
w
x=UB
@ 1 CA :
0
This procedure might appear risky when A has multiple eigenvalues, since
could then appear on diagonal of B . However, rounding error will almost
certainly separate the eigenvalues a little (and if it fails to, you can help it
along), so that B ; I is not singular. Nor is there any danger in dividing by a
small element as long as the eigenvector is well conditioned. For in that case,
the numerator will also be small. However, care must be taken to scale to avoid
overows in solving (16.3), just in case there is a nearly defective eigenvalue.
12. When A is symmetric, the Hessenberg form becomes tridiagonal. Both
the reduction and the QR algorithm simplify and, as we have seen, the con-
vergence is cubic. The details can be found in many places, but you will nd
it instructive to work them out for yourself.
Return to Upper Hessenberg
13. The algorithm just described is a viable method for computing eigenvalues
and eigenvectors of a general complex matrix. But with real matrices we can
do better by computing the real Schur form of A (x13.1) in real arithmetic.
In order to derive the algorithm, we must establish result about Hessenberg
matrices that is of interest in its own right.
14. First some nomenclature. We will say that an upper Hessenberg matrix
of order n is unreduced if ai+1i 6= 0 (i = 1 : : : n ; 1). In other words, the
subdiagonal elements of an unreduced upper Hessenberg matrix are nonzero.
16. Eigensystems 147
Eigensystems
The Implicit Double Shift
Implementation Issues
The Singular Value Decomposition
The implicit double shift
1. We now turn to an ingenious technique for avoiding complex arithmetic
when A is real. The idea is to perform two complex conjugate shifts at once.
To set the stage, consider two steps of the QR algorithm:
1: A0 ; I = Q0R0 2: A1 = QH0 A0 Q0
(17.1)
3: A1 ; " I = Q1R1 4: A2 = QH1 A1 Q1 :
I claim that the product Q = Q0Q1 and hence A2 = QT A0 Q is real. To see
this, rst note that the matrix
(A0 ; " I )(A0 ; I ) = A20 ; 2<()A0 + jj2I
is real. But
(A0 ; " I )(A0 ; I ) = (A0 ; "I )Q0R0 by (17.1.1)
= Q0 QH0 (A0 ; " I )Q0R0 since QQH = I
= Q0 (A1 ; "I )Q0R0 by (17.1.2)
= (Q0 Q1)(R1R0) by (17.1.3)
Hence Q = Q0 Q1 is just the Q factor of the QR factorization of (A0 ; " I )(A0 ;
I ) and hence is real.1
2. Thus if we perform two steps of the QR algorithm, with complex conjugate
shifts, the result is nominally real. In practice, however, rounding error will give
us a complex matrix, and one that can have substantial imaginary components.
Fortunately, there is an indirect way to compute Q and A2 without going
through A1 .
Consider the following algorithm.
1. Compute the rst column q of Q
2. Let H be an orthogonal matrix such that H e1 = q
3. Let B = H T AH (17.2)
4. Use algorithm (16.2) to nd U such that C = U T BU
is Hessenberg
1
Well, almost. It's columns could be scaled by complex factors of magnitude one.
149
150 Son of Afternotes
5. The rst column of Q has only three nonzero elements. Hence, if we take H
in (17.2) to be a Householder transformation, it will aect only the rst three
rows and columns of A. Specically H T AH will have the form
0a aa aaa a a
1
BBa aa aaa a aC C
BBa^ aa aaa a aCC
BBa^ aa aaa a aCCC :
BB
BB0 0 0 a
a a a aC C
BB0 0 0 0 a
a a aC C
@0 0 0 0 0 a
a aC A
0 0 0 0 0 0 a a
17. Eigensystems 151
The next step is to reduce the rst column of this matrix, i.e., to annihilate the
elements with hats. This can be done by a Householder transformation that
aects only rows and columns two through four of the matrix. The transformed
matrix has the form
0 1
a a a
aa aa a
BBa a a
aa aa aCCC
BB0 a a
aa aa aC
BB0 a^ a
aa aa aC CC :
BB (17.4)
BB0 a^ a
aa aa aC C
BB0 0 0 0 a
a a aC C
@0 0 0 0 0 a
a aC A
0 0 0 0 0 0 a a
We continue by using a 3
3 Householder transformations to eliminated the
hatted elements in (17.4). In this way we chase the bulge below the subdiagonal
out the bottom of the matrix. At the kth stage, the elements ak+2k and ak+3k
are annihilated, but two new elements are introduced into the (k + 4 k + 1)-
and (k +4 k +2)-positions. The very last step is special, since only one element
is annihilated in the matrix
0a a a a a
a a a
1
B
B a a a a a
a a aCC
B
B 0 a a a a
a a aCCC
B
B 0 0 a a a
a a aC
B
B C:
B 0 0 0 a a
a a aC C
B
B 0 0 0 0 a
a a aC C
B
@0 0 0 0 0 a a aC A
0 0 0 0 0 a^ a a
6. The following is a program implementing this reduction.
152 Son of Afternotes
Eigensystems
Rank and Schmidt's Theorem
Computational Considerations
Reduction to Bidiagonal Form
The Implicit QR Algorithm for Singular Values
Plane Rotations
The Unshifted QR Algorithm for Singular Values
Rank and Schmidt's theorem
1. The singular value decomposition has many uses, but perhaps the most
important lies in its ability to reveal the rank of a matrix. Suppose that X
has rank p. Since U and V are nonsingular, $ must also have rank p, or
equivalently
1 p > p+1 = = k = 0:
This shows that in principle we can determine the rank of a matrix by inspect-
ing its singular values.
2. But life is not that simple. In practice, the matrix X we actually observe
will be a perturbation of the original matrix Xtrue | say X = Xtrue + E . The
zero singular values of Xtrue will become nonzero in X , though it can be shown
that they will be bounded by kE k2. Thus, if the nonzero singular values of
Xtrue are well above kE k2, and we compute the singular value of X , we will
observe p singular values above the error level and n ; p singular values at the
error level. Thus we can guess the original rank of X .1
3. Having found an ostensible rank p of the original matrix Xtrue, we would like
to nd an approximation of that matrix of exactly rank p. Since our knowledge
of Xtrue is derived from X , we would not want our approximation to stray too
far from X . The following construction does the trick.
4. Partitioning U and V by columns, let
Up = (u1 u2 up) and Vp = (v1 v2 vp)
and let
$p = diag(1 2 : : : p):
1
This is an oversimplication of a hard problem. Among other things things, we need to
know the size of the error. It is not clear what to do about singular values that are near
but above the error level. And one must decide how to scale the columns of X , since such
scaling changes the singular values. Still the problems are not insurmountable | at least in
individual cases where we can apply a dose of common sense.
155
156 Son of Afternotes
-4
-8
-12
-16
the vector Xvi has to be computed with cancellation and will be inaccurate.
These inaccuracies will propagate to ui .
8. A third problem is the squaring of the singular values as we pass from
X to A. Figure 18.1 depicts representative singular values of X by white
arrows descending from a nominal value of 1. They are hanging over a sea of
rounding error, represented by the dashed line at 10;16 . The of accuracy we
can expect of a singular value is the distance from the arrow to the sea. The
black arrows represent the eigenvalues of A = X T X . The accuracy of the rst
is diminished slightly, the second signicantly. The third has become shark
bait | no accuracy at all.
9. It is important to keep some perspective on these defects. The method of
computing U from V is certainly bad | V is orthogonal to working accuracy,
while U is not. However, in some applications, all that is required is V and
$. If X is sparse, the most ecient algorithm will be to compute A and its
eigensystem. The above considerations suggest that if the singular values are
1), then the
well above the square root of the rounding unit (assuming kX k2 =
results may be satisfactory. (Compare the comments on the normal equations
in x9.33).
Reduction to bidiagonal form
10. As an alternative to the algorithm (18.1) we are going to show how the
QR algorithm can be used to calculate singular values without forming the
cross-product matrix A. Now an ecient implementation of the QR algorithm
requires that A be reduced to Hessenberg form. Since A is symmetric, the
158 Son of Afternotes
Hessenberg form becomes tridiagonal | that is, a form in which all elements of
A above the rst superdiagonal and below the rst subdiagonal are zero. The
corresponding form for X is a bidiagonal form, in which only the diagonal and
the rst superdiagonal are nonzero. We will now show how to compute this
form.
11. The process begins like the orthogonal triangularization algorithm of x8.14.
A Householder transformation introduces zeros into the rst column as illus-
trated below: 0 1
X X X X
BBO X X X CC
BO CC
P1X = BBBO X X X
CC :
B@O X
X
X
X
X
X
CA
O X X X
Next the matrix is postmultiplied by a Householder transformation that intro-
duces zeros into the rst row:
0X X O O
1
BBO X X X CC
BO CC
P1XQ1 = BBBO X X X
CC :
B@O X
X
X
X
X
X
CA
O X X X
This second transformation does not touch the rst column, so that the zeros
already introduced are undisturbed. A third transformation introduces zeros
into the second column:
0X X O O
1
BBO X X X CC
BO CC
P2 P1XQ1 = BBBO O X X
CC :
B@O O
O
X
X
X
X
CA
O O X X
Note for later reference that the product of the Q's has the form
!
Q1 Q2 Qk;2 = 10 Q0 : (18.2)
The implicit QR algorithm for singular values
12. Let us call the nal result of the bidiagonalization B , and assume without
loss of generality that it is square (the last n ; k rows are already zero). Then
the matrix
C = BTB
is tridiagonal, and its eigenvalues are the squares of the singular values of B .
For the reasons cited in xx18.7{18.8, we do not want to work with C . We
will now show how we can achieve the eect of a shifted QR step on C by
manipulating B .
13. The implicitly shifted QR algorithm with shift would begin by computing
the rst column of C ; I : 0 1
B@c11c21; CA
0
and then reducing it to a multiple e1. Since the rst column of C has only
two nonzero elements, it can be reduced by a 2
2 orthogonal transforma-
tion Q0 (embedded as always in an identity matrix of appropriate order).
We then form QT0 CQ0 and determine an orthogonal transformation Q such
that QT QT0 CQ0Q. If we use the natural variant of the algorithm (16.2), the
transformation Q0 Q will have the same rst column as Q0 and hence is the
transformation that we would get from an explicit QR step.
14. Now suppose that we postmultiply the transformation Q0 into B and then
reduce B to bidiagonal form:
B^ = PBQ0Q:
Then
B^ TB^ = (Q0Q)TC (Q0Q)
160 Son of Afternotes
18. There are a number of natural candidates for the shift. The traditional
shift is the smallest singular value of the trailing 2
2 submatrix of B . With
this shift, the superdiagonal element bk;1k will generally converge cubically
to zero. After it has converged, the problem can be deated by working with
the leading principal submatrix of order one less. Singular vectors can be
computed by accumulating the transformations applied in the reduction to
bidiagonal form and the subsequent QR steps.
19. The algorithm sketched above was for many years the standard. Recently,
however, an unshifted variant that can calculate small singular values of bidi-
agonal matrices with great accuracy has come in favor. Before we describe
it, however, we must digress to discuss the 2
2 transformations we use in
practice.
Plane rotations
20. A plane (or Givens) rotation is a matrix of the form
!
P = ;cs sc
where
c2 + s2 = 1:
The matrix P is clearly orthogonal. Moreover, if the 2-vector (x y ) is nonzero
and we set
c = p 2x 2 and s = p 2y 2
x +y x +y
then ! ! p !
c s x = x2 + y 2 :
;s c y 0
21. Plane rotations, suitably embedded in an identity matrix, can be used to
introduce zeros in vectors and matrices. For example, with the above deni-
tions of c and s, we have
01 0 0 0 0 01 0z 1 0 z 1
BB0 c 0 0 s 0CC BBxCC BBpx2 + y2CC
BB0 0 1 0 0 0CC BBz CC BB z CC
BB0 0 0 1 0 0CC BBz CC = BB z CC :
B@0 ;s 0 0 c 0CA B@yCA B@ 0 CA
0 0 0 0 0 1 z z
162 Son of Afternotes
The plane rotation illustrated above is called a rotation in the (2 5)-plane.
22. Some care must be taken in generating plane rotations to avoid overows
and harmful underows. The following code does the job.
1. rotgen (x y c s r)
2. = jxj+jy j
3. if ( = 0)
4. c = 1 s = 0 return
5. end ifp
6. r = (x= )2 + (y= )2
7. c = x=r s = y=r return
8. end rotgen
The unshifted QR algorithm for singular values
23. To describe the unshifted algorithm, it will be convenient to write B in
the form 0 1
BBd1 de12 e2 CC
BB d3 e3 CC :
@ . . . . . .A
24. In the unshifted algorithm we use a zero shift at all stages. Since the rst
column of C = B T B is !
d1 de1
1
the starting rotation can be generate directly from the rst row of B :
rotgen (d1 e1 c s r)
If we postmultiply by this rotation, we get
0 10
d1 e1
BB d2 e2 C B c ;s 1C 0Bd1 0
1
CC
BB C
C B
B s c C B f d2 e2
@ d e C B 0 0 C CA = BB@ d3 e3 CC (18.3)
. . . . . . A @ ... . . . . . .A
3 3
..
.
where
d1 r
f = sd2 (18.4)
d2 cd2
We now generate a rotation to eliminate f :
rotgen (d1 f c^ s^ r^)
18. Eigensystems 163
The key observation is that we do not have to apply the rotation directly. For
if we do apply it, we get
0
BB;c^s^ s^ 1C 0Bd1 0
1 0
d1 s^d2 s^e2
CC BB 0 c^d2 c^e2
1
CC
BB 0 c^ CC BB f d2 e2 C B C
@. 0 CA B@ d3 e3 =
CA B@ d3 e3 CA
.. ... ... ... ... ...
where
d1 r^:
We now observe that the row vectors (^sd2 s^e2) and (^cd2 c^e2 ) are both propor-
tional to (d^2 e^2 ). Hence if we generate the next rotation by
rotgen (d2 e2 c s r)
the transformed matrix will have the form
0 10 1 0 1
d1 s^d2 s^e2 1 0 0 d1 s^r
BB 0 c^d2 c^e2 CC BB0 c ;s CC BB 0 c^r 0 CC
BB d3 e3 C B
CA B@0 s c CA B@ f d3 e3 CCA
C = B
@ . . . . . . ... ... ... ... ...
where
e1 s^r
d2 c^r (18.5)
f = sd3
d3 cd3
Let us now set (18.4) beside (18.5). Except for the replacement e1
s^r they are essentially identical. The become exactly so if we replace the
statement d1 r in (18.4) by d1 c^r with c^ = 1. Thus we have a makings
of a loop.
The following algorithm implements the reduction.
1. c^ = 1
2. for i = 1 to n;1
3. rotgen (di ei c s r)
4. if (i 6= 1) ei;1 = s^r
5. di = c^r
6. f = sdi+1
7. di+1 = cdi+1
8. rotgen (di f c^ s^ di)
9. end for i
164 Son of Afternotes
25. The formulas in the above algorithm (including rotgen ) consist entirely of
multiplications, divisions, and additions of positive quantities. Consequently,
cancellation cannot occur, and all quantities are computed to high relative
accuracy. This accounts for the fact that even small singular values are cal-
culated accurately | provided a bad convergence criterion does not stop the
iteration prematurely.
26. It might be thought that the unshifted algorithm would be considerably
slower that the shifted algorithm. However, experiments have shown that for
most problems the two algorithms are more or less comparable.3
3
For some problems, the unshifted algorithm is a disaster. For example, it will converge
very slowly with a matrix whose diagoanl elements are one and superdiagonal elements are
0:1.
Krylov Sequence Methods
165
Lecture 19
Introduction
In the next several lectures we are going to consider algorithms that | loosely
speaking | are based on orthogonalizing a sequence of vectors of the form
u Au A2u : : ::
Such a sequences is called a Krylov sequence . The fundamental operation
in forming a Krylov sequence is the computation of the product of A with
a vector. For this reason Krylov sequence methods are well suited for large
sparse problems in which it is impractical to decompose the matrix A.
Unfortunately, Krylov sequence methods are not easy to manage, and the
resulting programs are often long and intricate. It has been my experience that
these programs cannot be treated as black boxes you need to know something
about the basic algorithm to use them intelligently. This requirement sets the
tone of the following lectures. We will present the general algorithm and glide
around the implementation details.
We will begin with Krylov methods for the eigenvalue problem. Computing
the complete eigensystem of a large, sparse matrix is generally out of the
question. The matrix of eigenvectors will seldom be sparse and cannot be
stored on a computer. Fortunately, we are often interested in a small part of
the eigensystem of the matrix | usually a part corresponding to a subspace
called an invariant subspace. Accordingly, we will begin with a treatment of
these subspaces.
Invariant subspaces
1. Let A 2 Cnn and let U be a subspace of Cn of dimension k. Then U is
an invariant subspace of A if
A U fAu : u 2 Ug U :
In other words an invariant subspace of A is unchanged (or possibly diminished)
on multiplication by the A.
167
168 Son of Afternotes
Then partitioning
U (X1 X2) = (U^1 U^2 ) (19.1)
we see that ^ ^ !
A(U^1 U^2 ) = (U^1 U^2) H011 H
^
H
12 :
22
It follows that AU^1 = U^1 H^ 11, and the columns of U span an invariant subspace.
6. If we go further and block diagonalize H so that
^ !
; 1
(X1 X2) H (X1 X2) = 0 H^ H 11 0
22
then both U^1 and U^2 span invariant subspaces. In particular, y H A = A where
is not an eigenvalue of H22, then
(y H U^1 y H U^2 ) = y H A(U^1 U^2 ) = (y H U^1H^ 11 y H U^2 H^ 22):
or y H U^2 H^ 22 = y H U^2 . Since is not an eigenvalue of H^ 22 , it follows that
y H U2 = 0. In other words, the vectors of U^2 contain no components along any
eigenvector corresponding to an eigenvalue that is not an eigenvalue of H22.
7. In practice we will seldom have an exact invariant subspace. Instead we
will have
AU ; UH = G
where G is small. In this case we have the following analogue of the results of
xx14.1{14.6. The proof is left as an exercise.
Let U have orthonormal columns and let
AU ; UH = G:
Then there is a matrix E = ;GU H with
kE kp = kGkp p = 2 F
such that
(A + E )U = UH:
Moreover, kGkF assumes its minimum value when
H = U H AU: (19.2)
The matrix H dened by (19.2) is sometimes called the matrix Rayleigh
quotient, since is is dened in analogy with the scalar Rayleigh quotient. If
( z ) is an eigenpair of H , the pair ( Uz ) is an eigenpair of. A + E . If the pair
is well conditioned and E is small, it is a good approximation to an eigenpair
of A. The pair ( Uz ) is called a Ritz pair.
170 Son of Afternotes
Krylov subspaces
8. We have already noted that a step of the power method can easily take
advantage of sparsity in the matrix, since it involves only a matrix-vector
multiplication. However the convergence of the powers Ai u to the dominant
eigenvector can be extremely slow. For example, consider the matrix
A = diag(1 :95 :952 : : : :9599)
whose dominant eigenvector is e1 . The rst column of Table 19.1 gives the
norm of the vector consisting of the last 99 components of Ak u=kAk uk2, where
u is a randomly chosen vector. After 15 iterations we have at best a two digit
approximation to the eigenvector.
9. On the other hand, suppose we try to take advantage of all the powers we
have generated. Specically, let
Kk = (u Au A2 u Ak;1 u)
and let us look for an approximation to the eigenvector e1 in the form
e1 =2 Kkz
where \=2 " means approximation in the least squares sense. The third column
in Table 19.1 shows the results. The method starts o slowly, but converges
one step of the QR algorithm with shift . The rst step is to determine an
orthogonal matrix Q such that
R = QH (H ; I )
is upper triangular. Now H ; I is singular, and hence R must have a zero on
its diagonal. Because H was unreduced, that zero cannot occur at a diagonal
position other than the last (see the reduction in x15.26). Consequently, the
last row of R is zero and so is the last row of RQ. Hence H^ = RQ + I has
the form ^ ^!
H h :
0
In other words the shifted QR step has found the eigenvalue exactly and has
deated the problem.
In the presence of rounding error we cannot expect the last element of R
to be nonzero. This means that the matrix H^ will have the form
^ !
H h^
h^ kk;1 eTk;1 ^
16. We are now going to show how to use the transformation Q to reduce
the size of the Arnoldi decomposition. The rst step is to note that Q =
P12 P23 Pn;1n, where Pii+1 is a rotation in the (i i+1)-plane. Consequently,
Q is Hessenberg and can be partitioned in the form
!
Q = q QeT q q :
kk;1 k;1 kk
From the relation
AUk = Uk Hk + hk+1k uk+1 eTk
we have
AUk Q = Uk QQH Hk Q + hk+1k uk+1 eTk Q:
If we partition
U^k = Uk Q = (U^k;1 u^k )
Then
^ ^
!
H h
A(U^k;1 u^k ) = (U^k;1 u^k ) ^h eT ^ + hk+1k uk+1(qkk;1 eTk;1 qkk ):
kk;1 k;1
Computing the rst k ; 1 columns of this partition, we get
AU^k;1 = U^k;1H + (^hkk;1 u^k + hk+1k qkk;1 uk+1 )eTk;1 : (19.4)
174 Son of Afternotes
The matrix H is Hessenberg. The vector h^ kk;1 u^k + hk+1k qkk;1 uk+1 is or-
thogonal to the columns of U^k;1 . Hence (19.4) is an Arnoldi decomposition of
length k ; 1. With exact computations, the eigenvalue is not present in H .
17. The process may be repeated to remove other unwanted values from H .
If the matrix is real, complex eigenvalues can be removed two at a time via
an implicit double shift. The key observation here is that Q is zero below
its second subdiagonal element, so that truncating the last two columns and
adjusting the residual results in an Arnoldi decomposition. Once undesired
eigenvalues have been removed from H , the Arnoldi decomposition may be
expanded to one of order k and the process repeated.
Deation
18. There are two important additions to the algorithm that are beyond the
scope of this course. The correspond to the two kinds of deation in x19.5 and
x19.6. They are used for quite distinct purposes.
19. First, as Ritz pairs converge they can be locked into the decomposition.
The procedure amounts to computing an Arnoldi decomposition of the form
!
A(U1 U2 ) = (U1 U2 ) H011 H
H21 + hk+1k uk+1ek :
12 T
When this is done, one can work with the part of the decomposition corre-
sponding to U2 , thus saving multiplications by A. (However, care must be
taken to maintain orthogonality to the columns of U1 .)
20. The second addition concerns unwanted Ritz pairs. The restarting proce-
dure will tend to purge the unwanted eigenvalues from H . But the columns
of U may have signicant components along the eigenvectors corresponding to
the purged pairs, which will then reappear as the decomposition is expanded.
If certain pairs are too persistent, it is best to keep them around by computing
a block diagonal decomposition of the form
!
A(U1 U2) = (U1 U2 ) H011 H0 +
uk+1eTk
22
where H11 contains the unwanted eigenvalues. This insures that U2 has negli-
gible components along the unwanted eigenvectors, as we showed in x19.6. We
can then compute an Arnoldi decomposition from by reorthogonalizing the
relation
AU2 = H22U2 +
uk+1eTm
where m is the order of H2 .
Krylov Sequence Methods
175
Lecture 20
It follows that
1u2 = Au1 ; 1u1
2u3 = Au2 ; 2 u2 ; 1u1
and in general
k uk+1 = Auk ; k uk ; k;1uk;1 : (20.2)
177
178 Son of Afternotes
7. This modied process is called the Lanczos algorithm with selective re-
orthogonalization. It is the method of choice for large, sparse symmetric eigen-
value problems. But it is very tricky to implement, and the person who needs
it should go to an archive like Netlib to get code.
Relation to orthogonal polynomials
8. The fact that orthogonal polynomials and the vectors in the Lanczos process
satisfy a three term recurrence suggests that there might be some relation
between the two. To see what that relation might be, let
Kk = span(u Au A2u : : : Ak;1u)
be the kth Krylov subspace associated with A and u. We will suppose that
dim(Kk ) = k
i.e., the Krylov vectors that span Kk are linearly independent.
If v 2 Kk , then
v = c0u + c1Au + + ck;1Ak;1 u:
Consequently, if we set
p(x) = c0 + c1x + + ck;1 xk;1
we have
v = p(A)u:
Thus there is a one-one correspondence between polynomials of degree k ; 1
or less and members of Kk .
9. Now let us dene an inner product between two polynomials by
(p q ) = p(A)u]Tq (A)u]:
6 0. Then by the
To see that this formula denes an inner product, let p =
independence of the Krylov vectors, p(A)u 6= 0, and
(p p) = kp(A)uk20 > 0:
Thus our inner product is denite. It is easy to show that it is bilinear. It is
symmetric because
(p q ) = (p q )T = uT p(A)T q (A)u]T = uT q (A)T p(A)u = (q p):
180 Son of Afternotes
10. The inner product ( ) on the space of polynomials generates a sequence
p0 p1 : : : of orthogonal polynomials (see the comments in x7.17), normalized
so that (pj pj ) = 1. If we dene uj = pj (A)u, then
uTi uj = (pi pj )
and hence the vectors uj are orthonormal.
However, if A is not symmetric these polynomials do not satisfy a three-
term recurrence. The reason is that to derive the three-term recurrence we
needed the fact that (tp q ) = (p tq ). For symmetric matrices we have
(tp q ) = uT p(A)T AT ]q (A)u = uT p(A)T Aq (A)]u = (q tp):
Thus when the Arnoldi process is applied to a symmetric matrix, the result
is a tridiagonal matrix whose elements are the coecients of the three-term
recurrence for a sequence of orthogonal polynomials.
Golub{Kahan bidiagonalization
11. We have seen that the symmetric eigenvalue problem and the singular value
decomposition are closely related | the singular values of a square matrix X
are the square roots of the eigenvalues of X T X . Consequently we can calculate
singular values by applying the Lanczos algorithm to X T X . The matrix-vector
product required by the algorithm can be computed in the form X T (Xu)| i.e.,
a multiplication by X followed by a multiplication by X T.
12. As a general rule singular value problems are best treated on their own
terms and not reduced to an eigenproblem (see xx18.5{18.9). Now the Lanc-
zos algorithm can be regarded as using a Krylov sequence to tridiagonalize
X TX . The natural analogue is to bidiagonalize X . Since the bidiagonaliza-
tion algorithm described in x18.11 involves two sided transformations, the cor-
responding Lanczos-like process will involve two Krylov sequence. It is called
Golub{Kahan bidiagonalization.
13. For simplicity we will assume that X is square. The algorithm can be
derived directly from the bidiagonalization algorithm in x18.11. Specically,
let v1 be given with kv1k2 = 1, and let Q0 be an orthogonal matrix whose
rst column is v1 . If we apply the bidiagonalization algorithm to XQ0, we
end up with orthogonal matrices U and Q such that U T (XQ0)Q is bidiagonal.
Moreover, Q has the form !
1 0
0 Q
see (18.2)]. Hence if we set V = Q0 Q, the rst column of V will be v1.
We have thus shown that given any vector v1 with kv1 k2 = 1, there are
orthogonal matrices U and V such that
V e1 = v1
20. Krylov Sequence Methods 181
and 0 1
1 1
BB 2 2 CC
BB 3 3 CC
T
U XV = B B ... ... CC : (20.3)
BB CC
@ n;1 n;1 A
n
The constants k and k are given by
k = uTk Xvk and k = uTk Xvk+1 : (20.4)
14. From the bidiagonal form (20.3) we may derive a double recursion for the
the columns uk and vk of U and V . Multiplying by U , we have
0 1
1 1
BB 2 2 CC
X (v1 v2 v3 ) = (u1 u2 u3 ) BB 3 3 C C
@ ... ... A
On computing the kth column of this relation, we nd that Xvk = k;1 uk;1 +
k vk , or
k uk = Xvk ; k;1 uk;1 :
(To get us started we assume that 0 = 0.) From the relation
0 1
1
BB 1 2 CC
X (u1 u2 u3 ) = (v1 v2 v3 ) B 2 3 C
T B C
@ ... ... A
we get X T uk = k vk + k vk+1 or
k vk+1 = X T uk ; k vk :
15. We do not quite have a recursion because the formulas (20.4) for the
k and k involve the very vectors we are trying to compute. But since the
columns of U and V are normalized, we must have
k = kXvk ; k;1uk;1 k2 and k = kX T uk ; k vk k2 :
We summarize the recursion in the following algorithm.
182 Son of Afternotes
1. 0 = 0
2. for k = 1 2 : : :
3. uk = Xvk ; k;1 uk;1
4. k = kuk k2
5. uk = uk =k
6. vk = X T uk ;k vk
7. k = kvk+1k2
8. vk+1 = vk+1 = k
9. end for k
16. Although Golub{Kahan bidiagonalization can be used to nd singular
values, it is more important as the basis for iterative algorithms to solve least
squares problems and nonsymmetric linear systems. It would take us too far
aeld to develop these algorithms. Instead, in the following lectures we will
treat a method for solving positive denite systems that is closely related to
the Lanczos algorithm.
Lecture 21
183
184 Son of Afternotes
The minimum of the function '(x1 + Sk ^a) occurs at the unique point
x^k+1 = x1 + Sk a^k for which
SkT r^k = 0:
To see this, write in analogy with (21.4)
r = r1 ; ASa:
Then by (21.5),
'(x1 + Sk a^) = (r1 ; ASk ^a)T A;1(r1 ; ASk ^a) = kA; 21 r1 ; A 12 Sk a^k22
where A 2 is the positive denite square root of A. This equation exhibits our
1
1. r1 = s1 = b ; Ax1
2. for k = 1 2 : : :
3. k = krk k22=sTk Ask
4. xk+1 = xk + k sk
5. rk+1 = rk ; k Ask (21.11)
6. k = ;krk+1k22=krk k22
7. sk+1 = rk+1 ; k sk
8. end for k
The reason these formulas are superior is that the norms are always accu-
rately computed, whereas cancellation can occur in the computation of sTk rk
and sTk Ark+1 .
Termination
19. We have already noted (x21.15) that if s1 is an eigenvector of A then the
conjugation process never gets o the ground. However, if s1 = r1, we have
for some > 0 (remember that A is positive denite),
e1 = ;Ar1 = ;r1 = ;s1 :
Thus the rst step is along the direction of the error, which is immediately
reduced to zero.
20. An analogous termination of the procedure occurs when Kk is an invariant
subspace of A. Specically, let m be the smallest integer such that Km = Km+1 ,
i.e., such that Am r1 is a linear combination of r1 : : : Am;1 r1. Then
AKm Km+1 = Km
so that Km is an invariant subspace of A. Now rm+1 ? Km . But rm+1 2
Km+1 = Km. Thus rm+1 is in and is orthogonal to Km and hence must be
zero. We have established the following result.
Let m be the smallest integer such that
Km = Km+1:
Then xm+1 = x .
Since we always have Kn = Kn+1 , we have shown that:
The conjugate gradient method terminates with a solution in at
most n iterations.
Lecture 22
Hence
kp(A)e1kA = kp(A)A e1k2 kp(A)k2kA e1k2 = kp(A)k2ke1kA:
1
2
1
2
But the 2-norm of any symmetric matrix is the magnitude of its largest eigen-
value. Hence if we denote the set of eigenvalues of A by (A),
kp(A)e1kA = max jp()jke0kA :
2 (A)
7. Ideally we would like to choose p to minimize max 2 (A) jp()j. If we know
something about the distribution of the eigenvalues of A we can try to tailor
p so that the maximum is small. Here we will suppose that we know only the
smallest and largest eigenvalues of A | call them a and b. We then wish to
determine p so that
max jp(t)j = min
t2ab]
(22.2)
22. Krylov Sequence Methods 193
it is possible for the A-norm of a vector to be small, while its 2-norm is large.
For example, suppose !
A = 10 0
where
is small. Then kxpkA = 12 +
22. Thus if kxkA = 1, it is possible for
j2j to be be as large as 1=
. It is even possible for the A-norm of a vector to
decrease while its 2-norm increases. Given such phenomena, the optimality of
the conjugate gradient method seems less compelling. What good does it do us
to minimize the A-norm of the error if its 2-norm is increasing? Fortunately,
the 2-norm also decreases, a result which we shall now establish.
12. Let's begin be establishing a relation between kek k2 and kek+1 k2 . From
the equation
xk+1 = xk + k sk
we have ek+1 = ek + k sk or
ek = ek+1 ; k sk :
Hence
kek k22 = (ek+1 ; k sk )T (ek+1 ; k sk ) = kek+1k22 ; 2k sTk ek+1 + 2ksk k22:
or
kek+1 k22 = kek k22 + 2k sTk ek ; 2ksk k22:
Now the term ;2 ksk k22 reduces the error. Since k = krk k22=sTk Ask > 0, the
term 2k sTk ek+1 will reduce the error provided sTk ek+1 is not positive.
13. Actually we will show a great deal more than the fact that sTk ek+1 0. We
will show that the inner product of any error with any step is nonpositive. The
approach is to use matrix representations of the conjugate gradient recursions.
14. Let m be the iteration at which the conjugate gradient algorithm termi-
nates, so that rm+1 = 0. For purposes of illustration we will take m = 4.
Write the relation sk+1 = rk+1 ; k sk in the form
rk = sk+1 + k sk :
Recalling that r1 = s1 , we can write these relations in the form
0 1
1 1 0 0
B0 1 2 0 CC
(r1 r2 r3 r4 ) = (s1 s2 s3 s4 ) B
B@0 0 1 3CA :
0 0 0 1
We abbreviate this by writing
R = SU (22.5)
22. Krylov Sequence Methods 195
Now let
E = (e1 e2 e3 e4)
be the matrix of errors. Then E = ;A;1 R. Hence from (22.6),
E = ;SDL;1 :
On multiplying by S T , we have
S TE = ;(S T S )DL;1:
The matrices S T S , D, and L are all nonnegative. Hence the elements of S T E
are nonpositive. We may summarize as follows.
In the conjugate gradient algorithm, the inner products sTi sj are
nonnegative and the inner products sTi ej are nonpositive. The quan-
tity 2k sTk ek is nonpositive, and
kek+1 k22 = kek k22 ; 2ksk k22 + 2k sTk ek :
Consequently, the conjugate gradient method converges monotoni-
cally in the 2-norm.
Lecture 23
197
198 Son of Afternotes
1. r^1 = (M ; 12 b) ; (M ; 12 AM ; 21 )(M 12 x1 )
2.
3. s^1 = r^1
4. for k = 1 2 : : :
5. k =1 kr^k k22=s^Tk (M1 ; 12 AM ; 12 )^sk
6. (M 2 xk+1 ) = (M 2 xk ) + k s^k
7. r^k+1 = r^k ; k (M ; 21 AM ; 12 )^sk
8.
9. k = ;kr^k+1 k22=kr^k k22
10. s^k+1 = r^k+1 ; ^k s^k
11. end for k
The quantities with hats over them will change in the nal algorithm. Note
the two blank lines, which will be lled in later.
The logical place to begin transforming1 this algorithm is with the updating
of xk . If we multiply statement 6 by M ; 2 and set
sk = M ; 12 s^k
the result is
6. xk+1 = xk + k sk
Now for the residuals. If we multiply statement 1 by M 12 and set
rk = M 21 r^k
we get
1. r1 = b ; Ax1
Similarly statement 7 becomes
7. rk+1 = rk ; k Ask
So far we have merely reproduced statements in the original conjugate
gradient algorithm. Something dierent happens1 when we try to update sk .
Consider rst s1 . If we multiply statement 3 by1M 2 to transform r^1 into r1 , the
right hand side of the assignment becomes M 2 s^1 = Ms1 . Thus s1 = M ;1 r1.
We can get around this problem by dening
zk = M ;1 rk
and replacing statements 2 and 3 by
23. Krylov Sequence Methods 199
These two desiderata generally work against each other. For example choosing
M = I makes the system easy to solve, but gives a poor approximation to A
(unless of course A = I , in which case we do not need to precondition). On
the other hand, the choice M = A causes the conjugate gradient algorithm to
converge in one step, but at the expense of solving the original problem with
a dierent right-hand side. We must therefore compromise between these two
extremes.
6. A choice near the extreme of choosing M = I is to chose M = D, where D
is the diagonal of A. The system Mz = r is trivial to solve. Moreover, if the el-
ements of D vary over many orders of magnitude, this diagonal preconditioning
may speed things up dramatically.
It might be objected that since D is diagonal it is easy to form the matrix
D; 12 AD; 12 . Why then go through the trouble of using the preconditioned
version of an algorithm? An answer is that in some circumstances we may
not be able to manipulate the matrix. We have already noted that the ma-
trix A enters into the algorithm only in the formation of the product Ax. If
this computation is done in a black-box subprogram, it may be easier to use
preconditioned conjugate gradients than to modify the subprogram.
7. Another possibility is to use larger pieces of A. For example, a matrix which
arises in the solution of Poisson's problem has the form
0 T ;I 1
B
B
B ;I T ;I CC
CC
B
B ;I T ;I CC (23.2)
B
B ... ... ... C
B
@ ;I T ;I CA
;I T
where 0 4 ;1 1
BB;1 4 ;1 CC
BB ;1 4 ;1 CC
T =B BB ... ... ... CC :
B@ C
;1 4 ;1CA
;1 4
In this case a natural preconditioner is
M = diag(T T T : : : T T ):
This preconditioner has more of the avor of A that the diagonal of A. And
since the matrix T is tridiagonal, it is cheap to solve systems involving M .
23. Krylov Sequence Methods 201
Incomplete LU preconditioners
8. Except for diagonal matrices, the solution of the system Mz = r requires
that we have a suitable decomposition of M . In many instances this will be
an LU decomposition | the factorization M = LU into the product of a unit
lower triangular matrix and an upper triangular matrix produced by Gaussian
elimination. The idea of an incomplete LU preconditioner is to to perform an
abbreviated form of Gaussian elimination on A and to declare the product of
the resulting factors to be M . Since M is by construction already factored,
systems involving M will be easy to solve.
9. To get an idea of how the process works, consider the matrix (23.2). Since
we will be concerned with only the structure of this matrix, we will use a
Wilkinson diagram to depict an initial segment.
0X X X
1
BBX CC
BB X X X
CC
BB X X X X
XC
C
BB X X CC
BBX X X CC
BB X X X X CC
@ X X X XA
X X X
10. It is now time to get precise. The sparsity pattern of L and U does not
necessarily have to be the same as A (as it was above). We will therefore
introduce a sparsity set Z to control the pattern of zeros. Specically, let Z
be a set of ordered pairs of integers from f1 2 : : : ng containing no pairs of
the form (i i). An incomplete LU factorization of A is a decomposition of the
form
A = LU + R (23.3)
where L is unit lower triangular (i.e., lower triangular with ones on its diago-
nal), U is upper triangular, and L, U , and R have the following properties.
1: If (i j ) 2 Z with i > j , then `ij = 0.
2: If (i j ) 2 Z with i < j , then uij = 0.
3: If (i j ) 62 Z , then rij = 0.
In other words, the elements of L and U are zero on the sparsity set Z , and
o the sparsity set the decomposition reproduces A.
11. It is instructive to consider two extreme cases. If the sparsity Z set is
empty, we get the LU decomposition of A | i.e., we are using A as a precon-
ditioner. On the other hand, if Z is everything except diagonal pairs of the
form (i i), the we are eectively using the diagonal of A as a preconditioner.
12. To establish the existence and uniqueness of the incomplete factorization,
we will rst give an algorithm for computing it, which will establish the unique-
ness. We will then give conditions that insure existence and hence that the
algorithm goes to completion.
13. The algorithm generates L and U rowwise. Suppose we have computed
the rst k ; 1 rows of L and U , and we wish to compute the kth row. Write
the rst k rows of (23.3) in the form
! ! ! !
A11 A12 = L11 0 U11 U12 + R11 R12 (23.4)
aT21 aT22 `T12 1 0 uT22 T rT
r21 22
1. incompleteLU (A Z )
2. for k = 1 to n
3. for j = 1 to k;1
4. if ((k j ) 2 Z )
5. Lk j ] = 0
6. else
7. Lk j ] = (Ak j ];Lk 1:j ;1]U 1:j ;1 j])=U j j]
8. end if
9. end for j
10. for j = k to n
11. if ((k j ) 2 Z )
12. U k j ] = 0
13. else
14. U k j ] = Ak j ];Lk 1:k;1]U 1:k;1 j]
15. end if
16. end for j
17. end for k
18. end incompleteLU
Suppose that we have computed `k1 : : : `kj ;1. If (k j ) is not in the spar-
sity set, then rkj = 0, and (23.5) gives
;1
kX
kj = `ki ij + `kj jj
i=1
from which we get Pk;1 `
`kj = kj ; i=1 ki ij : (23.7)
jj
The key observation here is that it does not matter how the values of the
preceding `'s and 's were determined. If `kj is dened by (23.7), then when
we compute LU is (k j )-element will be kj . Thus we can set `'s and 's to
zero on the sparsity set without interfering of the values of LU o the sparsity
set. A similar analysis applies to the determination of kk : : : kn from (23.6).
14. Figure 23.1 contains an implementation of the algorithm sketched above.
If Z is the empty set, the algorithm produces an LU factorization of A, and
hence is a variant of classical Gaussian elimination. When Z is nonempty,
the algorithm produces matrices L and U with the right sparsity pattern.
Moreover, it must reproduce A o the sparsity set, since that is exactly what
the statements 7 and 14 say.
204 Son of Afternotes
205
206 Son of Afternotes
Hence P
0 jaii j ; j 6=i jaij j jjxxjijj
jaiij ; P j j
j 6=i aij
the last inequality following from (24.2). Moreover, the second inequality is
strict inequality if and only if for some j we have jaij j > 0 and jxj j=jxij < 1.
But strict inequality would contradict diagonal dominance. Hence (24.3) and
(24.4).
4. The above result implies that if A is strictly diagonal dominant then we
cannot have Ax = 0 for a nonzero x. For if so, the inequality cannot be strict
in any row corresponding to a maximal component of x. Thus we have shown
that:
A strictly diagonally dominant matrix is nonsingular.
5. Although strictly diagonally dominant matrices must be nonsingular, diag-
onal dominance alone does not imply either nonsingularity or singularity. For
example, the matrix 0 1
2 ;1 0
B@;1 2 ;1CA :
0 ;1 2
is nonsingular. (Check it out by your favorite method.) On the other hand
the matrix 0 1
1 1 0
B@1 1 0CA
1 2 4
is singular. Note that its null vector is (1 ;1 1=4). Since the third component
is not maximal whereas the rst two are, by the result of x24.3 the two elements
of A in the northeast must be zero | as indeed they are.
6. If we can verify that a diagonally dominant matrix is nonsingular, we can
generate a lot of other nonsingular diagonally matrices.
Let A be diagonally dominant and nonsingular. Let A^ be obtained
from A by decreasing the magnitudes of a set (perhaps empty) of
o-diagonal elements and increasing the magnitudes of a set (also
perhaps empty) of diagonal elements. Then A^ is nonsingular.
^ = 0, where x 6= 0. Now if we
To establish this result, suppose that Ax
interchange two rows and the same two columns of A^, the diagonals are in-
terchanged along with the elements in their respective rows (though the latter
24. Krylov Sequence Methods 207
^ = 0, the correspond-
appear in a dierent order). Moreover, in the equation Ax
ing components of x are interchanged. By a sequence of such interchanges, we
may bring the maximal components of x to the beginning.
Assuming such interchanges have been made, partition Ax = 0 in the form
^ ^ ! ! !
A11 A12 x1 = 0
A^21 A^22 x2 0
where x1 contains all the maximal components of x. By the result of x24.3,
the matrix A^12 must be zero, so that A^ has the form
^ !
A11 0 (24.5)
A^21 A^22
where A11 is singular and diagonally dominant with equality in all rows.
Now in passing back to A, we must decrease the magnitude of some diago-
nals and increase the magnitude of some o-diagonals. Some of these modica-
tions must take place in the rst row of the partition (24.5), for otherwise A will
be singular. But any such modication will destroy the diagonal dominance of
A.
7. It is worth pointing out that this result is about changing magnitudes,
independently of signs. Thus we may change the sign of an o-diagonal element
as long as the magnitude decreases. But in general we cannot do without a
strict decrease, as the matrices
! !
1 1 and 1 1
1 ;1 1 1
show.
8. An important consequence of the result of x24.6 is that:
Any principal submatrix of a nonsingular diagonally dominant ma-
trix is a nonsingular diagonally dominant matrix.
To see this assume without loss of generality that the principal submatrix
is a leading submatrix, say A11 in the partition
!
A= A 11 A12
A21 A22 :
Then A11 is clearly diagonally dominant. Moreover by the result of x24.6, the
matrix !
A11 0
0 A22
208 Son of Afternotes
Here the absolute value of a matrix A is dened to be the matrix whose ele-
ments are jij j, and the inequality is taken componentwise.
We are going to show that the Schur complement is diagonally dominant
by showing that
s jDje ; jB j; j;111a21aT12 je 0:
Note that we have lumped the diagonals of j;111 a21a12 j with the o-diagonals
because they could potentially decrease the diagonal elements D and decreasing
a diagonal element is morally equivalent to increasing a corresponding o-
diagonal element.
First we have
s jDje ; jB je ; j;111 jja21jja12jTe:
Now by diagonal dominance ja12jT e j11j, or j;111 jja12jT e 1. It follows
that
s jDje ; jBje ; ja21j:
But since the last n ; 1 row rows of A are diagonally dominant, the vector
on the right hand side of this inequality is nonnegative. Thus s 0, which
establishes the result.
11. This result immediately implies that Gaussian elimination goes to comple-
tion when it is applied to a nonsingular diagonally dominant matrix. A similar
manipulation gives an inequality that can be used to show that the algorithm
is numerically stable. We do not give it here, not because it is dicult, but
because to interpret it requires the rounding-error analysis of Gaussian elimi-
nation as a background.
12. The course of Gaussian elimination is not aected by scaling of the form
A D1AD2
where D1 and D2 are nonsingular diagonal matrices.1 Thus Gaussian elimina-
tion goes to completion and the results are stable for any matrix that can be
scaled to be diagonally dominant.
An important class of matrices that can be so scaled is the class of M-
matrices. These matrices matrices have nonpositive o-diagonal elements and
nonnegative inverses i.e.,
1. i 6= j =) ij 0,
2. A;1 0.
It can be shown that if A is an M-matrix, then there is a diagonal matrix D
such that D;1 AD is strictly diagonally dominant. Hence the results of this
lecture apply to M-matrices as well as to diagonally dominant matrices.
1
Warning: We are talking about the unpivoted algorithm. Pivoted versions which move
other elements into the diagonals before performing elimination steps are aected by diagonal
scaling.
210 Son of Afternotes
where a^21 and a^T12 are obtained by setting to zero the elements in the sparsity
set. Then r21 and r12 T are zero o the sparsity set, and by the result of x24.6
T
!
A^ = ^a 11 Aa^12
21 22
is a a nonsingular diagonally dominant matrix. Let
!
L1 = ;11a^ 00
11 21
and T
!
U1 = 011 a012
Then it is easy to verify that
T
!
A ; L1U1 = r0 Ar~12
21 22
where
A~22 = A22 ; ;111 a^21 ^aT12:
Now A~22 is the Schur complement of 11 in A^ and hence is a nonsingu-
lar diagonally dominant matrix. Thus, by induction, it has an incomplete
factorization
A~22 = L22U22 + R22:
If we set !
1
L = ;1 ^a L 0
11 21 22
24. Krylov Sequence Methods 211
and T
!
U = 011 Ua^12
22
then it is easy to verify that
T
!
A ; LU = r0 Rr12 R
21 22
Thus L, U , and R exhibit the right sparsity patterns, so that LU is an incom-
plete LU factorization of A.
Iterations, Linear and Nonlinear
213
Lecture 25
215
216 Son of Afternotes
truly better that xki , shouldn't we use it in forming the subsequent components
of xk+1 ? In other words, why not iterate according to the formulas
xki +1 = a;ii 1(bi ; Pj<i aij xkj +1 ; Pj>i aij xkj )? (25.2)
It turns out that this method | the Gauss{Seidel method | is often faster than
Jacobi's method.
4. An important generalization of the Gauss{Seidel method can be obtained
by partitioning the equation Ax = b in the form
0 A A12 A1k;1 A1k
10 x1
1 0
b1
1
BB A11 A2k C
CC BB 2 CC BB b2 CCC
B C B
BB ..21 A..22 A2k;1
..
x
.. C B .. C = B .. C
BB . . . . CC BB . CC BB . CC
@Ak;11 Ak;12 Ak;1k;1 Ak;1k @xk;1 A @bk;1A
A
Ak1 Ak2 Akk;1 Akk xk bk
where the diagonal blocks Aii are nonsingular. Then in analogy with (25.2),
we can write
xki +1 = A;ii 1(bi ; Pj<i Aij xkj +1 ; Pj<i Aij xkj ):
This is the block Gauss{Seidel method.
5. The essentially scalar character of the above formulas makes it dicult to
analyze the methods. To get a nicer formulation write
A = D ; L ; U
where D consists of the diagonal elements of A, and L and P U are the subdi-
agonal and superdiagonal triangles of A. Then the sums j 6=i aij xkj in (25.1)
are the components of the vector ;(L + U )xk . Similarly, multiplication by
a;ii 1 corresponds to multiplication by D;1 . Thus we can represent the Jacobi
iteration in the form
xk+1 = D;1b + D;1(L + U )xk :
P
6. For the Gauss-Seidel interation, thePsum j<i aij xkj +1 written in matrix
terms becomes ;Lxk+1 , and the sum j>i aij xkj becomes ;Uxk . Thus the
Gauss{Seidel iteration assumes the form
xk+1 = D;1(b + Lxk+1 + Uxk ):
With a little algebra, we can put this relation in the form
xk+1 = (D ; L);1b + (D ; L);1Uxk : (25.3)
25. Iterations, Linear and Nonlinear 217
Convergence
10. It is now time to investigate the convergence of the iteration
xk+1 = M ;1 b + M ;1Nxk : (25.7)
The rst thing to note is that the solution x is a xed point of the iteration
i.e.,
x = M ;1 b + M ;1 Nx: (25.8)
If we subtract (25.8) from (25.7), we obtain the following recursion for the
error in the iterates:
xk+1 ; x = M ;1 N (xk ; x):
The solution of this recursion is
xk = (M ;1N )k x0 :
Consequently a sucient condition for the convergence of the method is that
lim (M ;1 N )k = 0
k!1
(25.9)
By the results of x14.14, a necessary and sucient for (25.9) is that
(M ;1N ) < 1
where (M ;1N ) is the spectral radius of A.
11. The condition that (M ;1N ) < 1 is sucient for the convergence of
the iteration (25.7). It is not quite necessary. For example if x0 ; x is an
eigenvector of M ;1 N corresponding to an eigenvalue whose absolute values is
less than one, then the iteration will converge. As a practical matter, however,
rounding error will introduce components along eigenvectors corresponding to
the eigenvalues greater than or equal to one, and the iteration will fail to
converge. For this reason we are not much interested in splittings for which
(M ;1N ) 1.
12. We must now determine conditions under which (M ;1N ) < 1. The solu-
tion of this problem depends on the structure of A. As we said at the beginning
of the lecture, we are going to consider splitting of diagonally dominant ma-
trices. However, it turns out that diagonal dominance alone is not enough to
do the job. We also need the concept of irreducibility, to which we now turn.
Irreducibility
13. In x24.6 we used row and column interchanges to order the diagonals of
a diagonally dominant matrix. More generally, suppose we want the diagonal
elements of A to appear in order k1 : : : kn . Let
P = (ek1 ekn )
25. Iterations, Linear and Nonlinear 219
where ei is the vector with ith component one and its other components zero.
(The matrix P is called a permutation matrix .) It is easy to verify that the
(i j )-element of P T AP is aki kj . In particular, the ith row of P T AP has the
same elements as ki th row of A, so that diagonal dominance is preserved.
14. We now say that A is reducible if there is a permutation matrix P such
that !
T
P AP = A A A 11 0 (25.10)
21 22
where A11 is square. Otherwise A is irreducible . We have already seen a
reducible matrix in (24.5).
15. Although reducibility or irreducibility is not easily determined from the
denition, there is an alternative characterization that is easy to apply. Let us
say that you can \step" from the diagonal element aii to ajj if aij 6= 0. Then
A is irreducible if and only if you can travel from any diagonal element to any
other by a sequence of steps. In using this result, it is convenient to put down
n dots representing the diagonals and connect them with arrows representing
permissible steps.2
This result is a little tricky to prove, but you can easily convince yourself
that in the reducible matrix (25.10) there is no way to travel from a diagonal
element of A11 to a diagonal element of A22.
16. We will now couple irreducibility with diagonal dominance in the following
denition. A matrix A is said to be irreducibly diagonally dominant if
1. A is irreducible,
2. A is diagonally dominant with strict inequality in at least one row.
Note that the strict-inequality requirement is not reected in the term \irre-
ducibly diagonally dominant" | sometimes a source of confusion.
Splittings of irreducibly diagonally dominant matrices
17. The principle result of this division is the following.
Let A be irreducibly diagonally dominant, and let A = M ; N be
a splitting such that
1: The diagonal elements of N are zero (25.11)
2: jAj = jM j + jN j:
Then M is nonsingular and (M ;1N ) < 1.
2
This structure is called the graph of A, and it is useful in many applications.
220 Son of Afternotes
Here jAj is the matrix obtained from A by taking the absolute value of its
elements.
18. Of the two conditions, the rst simply says that the diagonal of A and M
are the same. The second condition, written elementwise in the form jaij j =
jmij j + jnij j, says that the elements of M are obtained by reducing the elements
of A in magnitude, but without changing their signs.
19. To prove this result, rst note that by the result of x24.6 M is nonsingular.
The proof that (M ;1N ) > 1 is by contradiction. Assume that
M ;1 Nx = x x 6= 0 jj 1:
Let xi be a maximal component of x, i.e., a component whose absolute value
is largest. Then from the equation Mx = ;1 Nx, we get
X
aii xi + (mij ; ;1 nij )xj = 0:
6i
j=
(Here we use the fact that mii = aii ). From this it follows that
jaij j Pj6=i (jmij j; j;1jjnij j) jjxxjijj
Pj6=i (jmij j; jnij j) jjxxjijj (25.12)
Pj6=i jaij j jjxxjijj
Pj6=i jaij j:
Now if if the inequality is strict, we contradict the diagonal dominance of A.
If equality holds, then aij = 0 whenever xj is not maximal see (24.3)].
Suppose equality holds for all maximal components (otherwise we have
our contradiction). Now there must be at least one component that is not
maximal for otherwise A would be diagonally dominant with equality in all
row sums. Let P be a permutation that moves the maximal components to
the beginning. Partition
!
P T AP = A 11 A12
A21 A22
where A11 is square and associated with the maximal components. Since the
elements of A12 are associated with nonmaximal components in (25.12), they
are zero. It follows that A is reducible | a contradiction.
20. This is a fairly powerful result, in that it says that any reasonable splitting
of an irreducibly diagonally dominant matrix denes a convergent iteration.
In particular, the Jacobi, Gauss{Seidel, and block Gauss{Seidel iterations all
25. Iterations, Linear and Nonlinear 221
converge. Unfortunately, the result does not say which is the best But if I were
forced to bet, I would choose block Gauss{Seidel and would probably win.
21. The irreducibility hypothesis was used to argue to a contradiction when
equality obtained in all the sums (25.12). But if we assume that A is strictly
diagonally dominant, even equality is a contradiction. Hence we have the
following result.
The result of x25.17 remains true if irreducible diagonal dominance
is replaced by strict diagonal dominance.
223
224 Son of Afternotes
about the size of (kxk ; xk). We will make the following restriction. Because
(kxk ; x k) = o(1), there is a > 0 such that
kx ; xk =) kg0(x)k + (kxk ; xk) < 1:
We will require that
kx0 ; xk :
9. With these assumptions, it follows from (26.7) that
kx1 ; xk kx0 ; xk:
This implies that kx1 ; xk . Hence
kx2 ; xk kx1 ; xk 2kx0 ; xk:
In general
kxk ; xk k kx0 ; xk:
and since < 1, the xk converge to x .
10. The analysis suggests that the iteration converges at least as fast as k
approaches zero. However, has been inated to account for the behavior
of g (x) when x is away from x . A more realistic assessment of the rate of
convergence may be obtained by writing (26.7) in the form
kxk+1 ; xk kg0(x)k + (kxk ; xk):
kxk ; xk
Since = o(1)
lim kx ; xk kg0(x)k:
k+1
k!1 kxk ; x k
Hence
kxk+1 ; xk < kg0(x)kkxk ; xk:
In other words, the error in xk+1 is asymptotically less than the error in xk by
a factor of at least kg 0(x)k.
11. We may summarize these results as follows.
26. Iterations, Linear and Nonlinear 227
lim kx ; x k kg 0(x)k:
k+1
k!1 kxk ; x k
Linear and nonlinear iterations compared
12. It is instructive to compare these results with those for the linear iteration
xk+1 = (M ;1N )xk + M ;1 b g(xk ):
The condition for the convergence of this iteration is (M ;1N ) < 1. But in this
case the derivative of g is identically M ;1 N . Since the norm of a matrix bounds
its spectral radius, the condition (26.8) insures that the iteration converges.2
13. The nonlinear iteration diers from the linear iteration in an important
aspect. The linear iteration is globally convergent | it converges from any
starting point x0 . The nonlinear iteration is, in general, only locally conver-
gent | the starting point must be suciently near the xed point to insure
convergence.
Rates of convergence
14. An converging sequence for which
kxk+1 ; xk ! < 1
kxk ; xk
is said to converge linearly. A xed-point iteration usually converges linearly,
and will be the spectral radius of g 0(x ). An important exception is when
g 0(x) = 0, in which case the convergence is said to be superlinear. In this
case, the convergence accelerates as the iteration progresses.
15. A particularly important case of superlinear convergence occurs when
g 0(x) = 0 and (kxk ; xk) K kxk ; x k, for some constant K . In this case
kxk+1 ; xk K kxk ; xk2:
2
In fact, we could replace (26.8) by the condition g0 (x )] < 1. For by the result of x14.12,
if g0 (x)] < 1, there is a norm such that kg0 (x )k < 1.
228 Son of Afternotes
In particular, if
kxk+1 ; xk = K kxk ; xk2
then we say the the sequence converges quadratically. We have encountered
quadratic convergence in the nonsymmetric QR algorithm (see x15.20).
Newton's method
16. We are now going to apply our analysis of the xed-point iteration to
analyze the multivariate version of Newton's method. To derive the algorithm,
suppose f : Rn ! Rn , and let f (x) = 0. Let us assume that f 0 (x) is continuous
and nonsingular about x . Then
0 = f (x ) = f (x) + f 0 (x)(x ; x) + o(x ; x) = f 0(x )(x ; x) + o(x ; x):
If we drop the o(x ; x) term and solve for x , we get
x
= x ; f 0 (x);1f (x):
The multivariate Newton's method is obtained by iterating this formula start-
ing with some initial approximation x0 to x :
xk+1 = xk ; f 0 (xk );1f (xk ) k = 0 1 : : ::
17. The iteration function for Newton's method is
g(x) = x ; f 0(x);1 f (x): (26.9)
Because f (x) = 0, we have g (x) = x i.e., x is a xed point of g . According
to our local convergence theory, the behavior of Newton's method depends on
the size of g 0(x ).
In the univariate case, we can evaluate g 0 using elementary dierentiation
formulas. We can do something like that in the multivariate case however,
the formulas are not simple. Instead we will compute the rst two terms of
the Taylor series for g (x) about x . Whatever multiplies x ; x must be the
required derivative.
Since f is dierentiable at x and f (x ) = 0,
f (x) = f (x )+ f 0(x )(x ; x)+ o(x ; x ) = f 0 (x)(x ; x)+ o(x ; x ): (26.10)
Since f 0 (x) is continuous,
f 0 (x) = f 0(x) + o(1):
Since the inverse of a matrix is continuous,
f 0(x);1 = f 0 (x) + o(1)];1 = f 0 (x);1 + o(1): (26.11)
26. Iterations, Linear and Nonlinear 229
23. We are now in a position to state and prove the contractive mapping
theorem.
Let C be a closed subset of Rn and let g : C ! Rn be a contraction
with constant . If
x 2 C =) g (x) 2 C
then g has a unique xed point x in C . Moreover, for any x0 2 C
the sequence dened by
xk+1 = g (xk ) k = 0 1 : : :
converges to x and