Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
http://about.jstor.org/terms
Society for Industrial and Applied Mathematics is collaborating with JSTOR to digitize, preserve and
extend access to Journal of the Society for Industrial and Applied Mathematics: Series B,
Numerical Analysis
This content downloaded from 202.78.175.199 on Thu, 01 Sep 2016 12:17:33 UTC
All use subject to http://about.jstor.org/terms
(1.1) $A = U\Sigma V^*$,
(1.2) $A^\dagger = V\Sigma^\dagger U^*$,
$$A^\dagger = (A^*A)^{-1}A^*$$

[Example: a matrix with entries $\pm 1$, of rank 3.]
of rows and columns, then, although it may not appear to the naked eye to be deficient in rank, it is violently ill-conditioned (it has a very tiny singular value). On the other hand, when all the $-1$'s in the matrix are replaced by $+1$'s, then the resulting matrix is quite docile. Therefore, it would be hard to tell, by looking at only the diagonal elements of the row-echelon form, whether the matrix is ill-conditioned.
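This point is easy to confirm numerically. The following sketch is ours, not the paper's (the matrix size n = 20 is arbitrary): an upper-triangular matrix with unit diagonal and $-1$'s above it has every diagonal element of its row-echelon form equal to 1, yet its smallest singular value is tiny, while the all-$+1$ variant is docile.

```python
import numpy as np

# Sketch (not from the paper): an upper-triangular matrix with unit
# diagonal and -1's above it looks full-rank in row-echelon form,
# yet its smallest singular value is tiny.
n = 20
A = np.eye(n) + np.triu(-np.ones((n, n)), k=1)

sigma = np.linalg.svd(A, compute_uv=False)
print("smallest singular value:", sigma[-1])         # tiny
print("condition number:", sigma[0] / sigma[-1])     # huge

# Replacing every -1 by +1 gives a quite docile matrix.
B = np.triu(np.ones((n, n)))
tau = np.linalg.svd(B, compute_uv=False)
print("docile case, smallest singular value:", tau[-1])
```

The smallest singular value of the $\pm 1$ matrix shrinks roughly geometrically with n, even though every pivot is 1.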
of $U$ are the eigenvectors of $AA^*$ and the columns of $V$ are the eigenvectors of $A^*A$.
hardware. But for most other machines, and especially when a program
The matrix

$$\begin{pmatrix} 0 & A \\ A^* & 0 \end{pmatrix}$$

has for its eigenvalues the singular values of $A$, each appearing with both a positive and a negative sign. This representation of $A$ could not be treated directly by a standard eigenvalue-vector program without dealing with the problems which we shall discuss in detail in what follows.
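The eigenvalue assertion is easy to check numerically. The sketch below is our own illustration (an arbitrary random real 5 × 3 matrix), not part of the paper:

```python
import numpy as np

# Check (real case): [[0, A], [A*, 0]] has eigenvalues +sigma_i and
# -sigma_i for each singular value sigma_i of A, plus |m - n| zeros.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))

H = np.block([[np.zeros((m, m)), A],
              [A.T, np.zeros((n, n))]])

eigs = np.sort(np.linalg.eigvalsh(H))                    # ascending
sigma = np.sort(np.linalg.svd(A, compute_uv=False))

assert np.allclose(eigs[-n:], sigma)       # +sigma_i
assert np.allclose(eigs[:n], -sigma[::-1]) # -sigma_i
assert np.allclose(eigs[n:-n], 0.0)        # m - n zero eigenvalues
```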
$A = PJQ^*$, where $J$ is the $m \times n$ bidiagonal matrix

$$J = \begin{pmatrix}
\alpha_1 & \beta_1 &        &              &             \\
         & \alpha_2 & \beta_2 &             &             \\
         &          & \ddots  & \ddots      &             \\
         &          &         & \alpha_{n-1} & \beta_{n-1} \\
         &          &         &              & \alpha_n
\end{pmatrix}$$

followed by an $(m-n) \times n$ block of zeros.
formations (see [17], [21], [32]) are used. Let $A = A^{(1)}$ and let $A^{(3/2)}, A^{(2)}, \ldots, A^{(n)}, A^{(n+1/2)}$ be defined as follows:

$$A^{(k+1/2)} = P^{(k)}A^{(k)}, \quad k = 1, 2, \ldots, n,$$
$$A^{(k+1)} = A^{(k+1/2)}Q^{(k)}, \quad k = 1, 2, \ldots, n-1.$$
Here

$$P^{(k)} = I - 2x^{(k)}x^{(k)*}, \quad x^{(k)*}x^{(k)} = 1,$$
$$Q^{(k)} = I - 2y^{(k)}y^{(k)*}, \quad y^{(k)*}y^{(k)} = 1,$$

where $P^{(k)}$ is chosen to annihilate the elements $a_{ik}^{(k)}$, $i = k+1, \ldots, m$, and $Q^{(k)}$ to annihilate $a_{kj}^{(k+1/2)}$, $j = k+2, \ldots, n$, so that after $k$ steps the first $k$ rows and columns of $A^{(k+1)}$ are already in bidiagonal form, with entries $\alpha_1, \beta_1, \ldots, \alpha_k, \beta_k$.
Moreover,

$$x_i^{(k)} = 0, \quad i = 1, 2, \ldots, k - 1.$$
Since $P^{(k)}$ is a unitary transformation, length is preserved and consequently

(2.1)
$$|\alpha_k|^2 = \sum_{i=k}^{m} |a_{ik}^{(k)}|^2,$$

$$2x_i^{(k)}\,\bar{x}_k^{(k)}\,\alpha_k = a_{ik}^{(k)}, \quad i = k+1, \ldots, m,$$

and hence

$$\alpha_k = \pm\left(\sum_{i=k}^{m} |a_{ik}^{(k)}|^2\right)^{1/2}.$$
Summarizing, we have

$$A^{(k+1/2)} = A^{(k)} - x^{(k)}\cdot 2\,(x^{(k)*}A^{(k)}),$$

with

$$s_k = \left(\sum_{i=k}^{m} |a_{ik}^{(k)}|^2\right)^{1/2},$$

$$x_k^{(k)} = \left(\frac{1}{2}\left(1 + \frac{|a_{kk}^{(k)}|}{s_k}\right)\right)^{1/2},$$

$$x_i^{(k)} = \frac{a_{ik}^{(k)}\,\bar{a}_{kk}^{(k)}}{2\,s_k\,|a_{kk}^{(k)}|\,x_k^{(k)}}, \quad i = k+1, \ldots, m.$$
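The alternating Householder process can be sketched compactly in matrix terms. The code below is our own illustration in real arithmetic, with our own helper names (the complex phase factors in the formulas above then reduce to signs):

```python
import numpy as np

# A compact real-arithmetic sketch of the alternating Householder
# process: P(k) zeroes column k below the diagonal, Q(k) zeroes
# row k beyond the superdiagonal.
def householder(v):
    # Reflector H = I - 2 x x^T (x normalized) sending v to a
    # multiple of the first unit vector.
    x = v.astype(float).copy()
    x[0] += np.copysign(np.linalg.norm(v), v[0])
    nrm = np.linalg.norm(x)
    if nrm == 0.0:
        return np.eye(len(v))
    x /= nrm
    return np.eye(len(v)) - 2.0 * np.outer(x, x)

def bidiagonalize(A):
    m, n = A.shape
    P, Q, J = np.eye(m), np.eye(n), A.astype(float).copy()
    for k in range(n):
        # P(k): annihilate column k below the diagonal.
        Pk = np.eye(m)
        Pk[k:, k:] = householder(J[k:, k])
        J, P = Pk @ J, P @ Pk
        if k < n - 2:
            # Q(k): annihilate row k beyond the superdiagonal.
            Qk = np.eye(n)
            Qk[k + 1:, k + 1:] = householder(J[k, k + 1:])
            J, Q = J @ Qk, Q @ Qk
    return P, J, Q          # A = P J Q^T, with J upper bidiagonal

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
P, J, Q = bidiagonalize(A)

assert np.allclose(P @ J @ Q.T, A)
assert np.allclose(np.tril(J, -1), 0.0, atol=1e-10)  # zero below diagonal
assert np.allclose(np.triu(J, 2), 0.0, atol=1e-10)   # zero past superdiagonal
```

Since $P$ and $Q$ are orthogonal, the singular values of $J$ agree with those of $A$.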
Similarly,

$$A^{(k+1)} = A^{(k+1/2)} - 2\,(A^{(k+1/2)}y^{(k)})\,y^{(k)*},$$

with

$$t_k = \left(\sum_{j=k+1}^{n} |a_{kj}^{(k+1/2)}|^2\right)^{1/2},$$

$$y_{k+1}^{(k)} = \left(\frac{1}{2}\left(1 + \frac{|a_{k,k+1}^{(k+1/2)}|}{t_k}\right)\right)^{1/2},$$

$$y_j^{(k)} = \frac{a_{kj}^{(k+1/2)}\,\bar{a}_{k,k+1}^{(k+1/2)}}{2\,t_k\,|a_{k,k+1}^{(k+1/2)}|\,y_{k+1}^{(k)}}, \quad j = k+2, \ldots, n,$$

and

$$\beta_k = -t_k\,\frac{a_{k,k+1}^{(k+1/2)}}{|a_{k,k+1}^{(k+1/2)}|} \quad \text{(say)}.$$
and

$$Aq_1 = \alpha_1 p_1,$$
$$Aq_i = \beta_{i-1}p_{i-1} + \alpha_i p_i, \quad i = 2, 3, \ldots, n,$$
$$p_n^*A = \alpha_n q_n^*.$$
where $\alpha_n \geq 0$.

$$\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0.$$

These are the numbers which appear on the diagonal of the matrix $\Sigma$ which was introduced in (1.1), i.e.,

$$\Sigma = \begin{pmatrix}
\sigma_1 &          &        &          \\
         & \sigma_2 &        &          \\
         &          & \ddots &          \\
         &          &        & \sigma_n
\end{pmatrix}$$

followed by an $(m-n) \times n$ block of zeros, and

(3.1) $J = X\Sigma Y^*$,
$$A = U\Sigma V^*,$$
$$J = \begin{pmatrix}
\alpha_1 & \beta_1 &         &          &          \\
         & \alpha_2 & \beta_2 &          &          \\
         &          & \alpha_3 & \beta_3  &          \\
         &          &          & \ddots   & \ddots   \\
         &          &          &          & \alpha_n
\end{pmatrix},$$

without introducing any new notation to distinguish this $n \times n$ matrix $J$ from the $m \times n$ matrix $J$. This can be done because the previous equations remain valid after the following process of "abbreviation":

$$J^* \to (J^* \;\; O).$$

The matrix

$$\begin{pmatrix} 0 & J^* \\ J & 0 \end{pmatrix}$$

has for its eigenvalues just $+\sigma_i$ and $-\sigma_i$ for $i = 1, 2, \ldots, n$. The calculation of the eigenvalues of $J$ is simplified conceptually by a transformation
(3.2)
$$\begin{pmatrix} 0 & J^* \\ J & 0 \end{pmatrix}\begin{pmatrix} y \\ x \end{pmatrix} = \sigma\begin{pmatrix} y \\ x \end{pmatrix} \quad\longrightarrow\quad Tz = \sigma z, \qquad z_{2i-1} = y_i, \quad z_{2i} = x_i,$$
where

$$T = \begin{pmatrix}
0        & \alpha_1 &         &          &          &          \\
\alpha_1 & 0        & \beta_1 &          &          &          \\
         & \beta_1  & 0       & \alpha_2 &          &          \\
         &          & \alpha_2 & 0       & \ddots   &          \\
         &          &          & \ddots  & \ddots   & \alpha_n \\
         &          &          &         & \alpha_n & 0
\end{pmatrix}.$$
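The interleaving in (3.2) can be checked directly. The sketch below is ours (real case, n = 4, arbitrary entries): permuting $\begin{pmatrix} 0 & J^* \\ J & 0 \end{pmatrix}$ into the order $y_1, x_1, y_2, x_2, \ldots$ produces exactly the tridiagonal $T$ with zero diagonal and off-diagonal entries $\alpha_1, \beta_1, \alpha_2, \beta_2, \ldots$:

```python
import numpy as np

# Sketch (our indexing): the "perfect shuffle" permutation turns
# [[0, J^T], [J, 0]] into the tridiagonal T with zero diagonal and
# off-diagonal entries alpha_1, beta_1, alpha_2, beta_2, ...
n = 4
alpha = np.array([4.0, 3.0, 2.0, 1.0])
beta = np.array([0.5, 0.7, 0.9])

J = np.diag(alpha) + np.diag(beta, k=1)          # upper bidiagonal
H = np.block([[np.zeros((n, n)), J.T],
              [J, np.zeros((n, n))]])

# Interleave the unknowns: y_1, x_1, y_2, x_2, ...
perm = np.ravel(np.column_stack([np.arange(n), np.arange(n) + n]))
T = H[np.ix_(perm, perm)]

assert np.allclose(np.diag(T), 0.0)
assert np.allclose(np.diag(T, 1), [4.0, 0.5, 3.0, 0.7, 2.0, 0.9, 1.0])
assert np.allclose(T, np.tril(np.triu(T, -1), 1))   # tridiagonal
```

The eigenvalues of $T$ are then $\pm\sigma_i$, the singular values of $J$ with both signs.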
Clearly there exists a unitary diagonal matrix $D$ such that the similarity transformation

(3.3)
$$DTD^* = S = \begin{pmatrix}
0   & s_1 &     &     &        \\
s_1 & 0   & t_1 &     &        \\
    & t_1 & 0   & s_2 &        \\
    &     & s_2 & 0   & \ddots \\
    &     &     & \ddots & \ddots
\end{pmatrix}$$

yields a tridiagonal matrix $S$ whose elements are real and nonnegative. The $\sigma_i^2$ are the eigenvalues of

$$J^*J = \begin{pmatrix}
|\alpha_1|^2            & \bar{\alpha}_1\beta_1            &                       &        \\
\alpha_1\bar{\beta}_1   & |\alpha_2|^2 + |\beta_1|^2       & \bar{\alpha}_2\beta_2 &        \\
                        & \alpha_2\bar{\beta}_2            & \ddots                & \ddots \\
                        &                                  & \ddots                & |\alpha_n|^2 + |\beta_{n-1}|^2
\end{pmatrix}.$$
$$D(J^*J)D^* = K = \begin{pmatrix}
s_1^2   & s_1 t_1       &         &        \\
s_1 t_1 & s_2^2 + t_1^2 & s_2 t_2 &        \\
        & s_2 t_2       & \ddots  & \ddots \\
        &               & \ddots  & s_n^2 + t_{n-1}^2
\end{pmatrix},$$

where $s_i = |\alpha_i|$ and $t_i = |\beta_i|$. Hence $K$ is a real, symmetric, positive semidefinite, tridiagonal matrix and its eigenvalues can be computed by the Sturm sequence algorithm.
Although the smaller eigenvalues of $A^*A$ are usually poorly determined, a simple error analysis shows that all the eigenvalues of $K$ are as well-determined as those of $T$. The reason for this is that the computation of the Sturm sequences is algebraically the same for both $T$ and $K$. Thus it is preferable to use $K$, since the total number of operations in calculating its eigenvalues is certainly less than in computing the eigenvalues of $T$.
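The Sturm sequence approach can be sketched as follows. This is a standard bisection scheme, not the paper's code, and all names here are ours:

```python
import numpy as np

# Minimal Sturm-sequence bisection sketch.  sturm_count(d, e, x) is
# the number of eigenvalues of the symmetric tridiagonal matrix with
# diagonal d and off-diagonal e that are less than x, read off from
# the signs of the pivots of the LDL^T recurrence.
def sturm_count(d, e, x):
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 if i > 0 else 0.0
        denom = q if q != 0.0 else 1e-300
        q = d[i] - x - off / denom
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    # Bisect until the k-th smallest eigenvalue (k = 0, 1, ...)
    # is pinned down to an interval of width tol.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = np.array([2.0, 3.0, 4.0])   # toy diagonal of K
e = np.array([1.0, 1.0])        # toy off-diagonal of K
K = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

exact = np.sort(np.linalg.eigvalsh(K))
approx = [kth_eigenvalue(d, e, k, -10.0, 10.0) for k in range(len(d))]
assert np.allclose(approx, exact, atol=1e-8)
```

Each bisection step costs only one pass of the pivot recurrence, which is why operating on $K$ (order $n$) is cheaper than on $T$ (order $2n$).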
4. Orthogonal vectors properly paired. We consider now the calculation of the unitary matrices $X$ and $Y$ which were introduced in (3.1):
$$J = X\Sigma Y^*.$$
As pointed out in §3, $J$ can be transformed into a real matrix by means of unitary diagonal transformations, and we shall assume henceforth that this
extra error. And if $\sigma_i$ is negligible one can find $x_i$ and $y_i$ such that they are normalized. The claims in the last two sentences could be justified, but there is no point in doing so because the second source of error is more drastic; if the $z_i$'s are not orthogonal then neither will the $x_i$'s be orthogonal, nor will the $y_i$'s. The problem of ensuring that the $z_i$'s are orthogonal is, in the present state of the art of computation, a serious one. One way to ensure the orthogonality of calculated eigenvectors is deflation. Consider
$$K = \begin{pmatrix}
a_1 & b_1 &        &        \\
b_1 & a_2 & b_2    &        \\
    & b_2 & \ddots & \ddots \\
    &     & \ddots & a_n
\end{pmatrix}.$$
One would determine a unitary matrix $H$ from the vector $v$ so that $K_1 = H^*KH$ will have zero in place of $b_{n-1}$. After deleting the last row and column of the tridiagonal matrix $K_1$, another eigenvalue, eigenvector and deflation would be calculated, and so on. The eigenvectors of $K$ would be the columns of an orthogonal matrix obtained by multiplying together all the $H$'s. The orthogonality of the eigenvectors would be guaranteed (to within the limits of acceptable rounding
The first rotation, which fixes all the subsequent ones, can be determined from the first two elements of $K$'s eigenvector $v$ as suggested by Rutishauser [28, p. 226] or else from the first two elements of $K - \lambda I$. In effect, the de… of $K$'s successive deflations each of its eigenvalues will be at some time the greatest or the smallest of the current matrix on hand. Next we apply
$$L = \begin{pmatrix}
1       &        &         &   \\
l_1     & 1      &         &   \\
        & \ddots & \ddots  &   \\
        &        & l_{n-1} & 1
\end{pmatrix}
\qquad\text{and}\qquad
U = \begin{pmatrix}
u_1 & b_1 &        &        \\
    & u_2 & b_2    &        \\
    &     & \ddots & \ddots \\
    &     &        & u_n
\end{pmatrix},$$
$$\delta_1 = +1,$$
than a few units in the last place (cf. the argument in [30]). Now even if v
is contaminated by components of the eigenvectors corresponding to other
$$P_j = \begin{pmatrix}
I_{j-1} &      &      &           \\
        & c_j  & s_j  &           \\
        & -s_j & c_j  &           \\
        &      &      & I_{n-j-1}
\end{pmatrix}, \qquad j = 1, 2, \ldots, n-1,$$

with $c_j$ for its $j$th and $(j+1)$th diagonal elements, where $c_j^2 + s_j^2 = 1$.
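The action of such a rotation is easy to exhibit. The helper below is ours, with the sign convention chosen so that both affected diagonal elements are $c_j$ as stated above:

```python
import numpy as np

# Sketch (our helper): P_j is the identity except for a 2x2 rotation
# in rows/columns j and j+1, with c_j^2 + s_j^2 = 1.
def rotation(n, j, c, s):
    P = np.eye(n)
    P[j, j], P[j, j + 1] = c, s
    P[j + 1, j], P[j + 1, j + 1] = -s, c
    return P

# Choosing c and s from the first two components of a vector
# annihilates the second of them when the rotation is applied.
v = np.array([3.0, 4.0, 1.0])
r = np.hypot(v[0], v[1])
c, s = v[0] / r, v[1] / r
w = rotation(3, 0, c, s) @ v
print(w)   # [5. 0. 1.]
```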
Suppose the products […]

and

$$t_1 = a_1 - \lambda, \qquad w_0 = b_1, \qquad \theta_1 = v_1.$$
in conjunction with $c_{j+1}^2 + s_{j+1}^2 = 1$, the values $c_{j+1}$ and $s_{j+1}$. This method seems to be effective and we believe that it should always work, but since we cannot prove the method's infallibility, our work is incomplete.
Now we can show how to construct a deflation process for the bidiagonal matrix $J$. The first step is to obtain $J$'s largest singular value $\sigma$; $\sigma^2$ is the largest eigenvalue of the tridiagonal matrix $J^*J$ (see §3). The next step requires the corresponding vectors $x$ and $y$, which can be obtained either by solving $J^*Jy = \sigma^2 y$ for $y$ and setting $x = \sigma^{-1}Jy$, or by calculating $\sigma$'s eigenvector $z$ of $S$ in (3.3) and hence obtaining $x$ and $y$ from $z$'s even and odd components respectively. Both methods for getting $x$ and $y$ are numerically stable when performed in floating point. The deflation of $J$ is accomplished by a sequence of $2 \times 2$ rotations applied in succession to its first and second columns, its first and second rows, its second and third columns, its second and third rows, its third and fourth columns, $\ldots$, its $(n-1)$th and $n$th rows. The $i$th rotation applied to rows $i$ and $i+1$ of $J$ must simultaneously annihilate a spurious subdiagonal element, introduced into row $i+1$ by the previous column rotation, and the $i$th element in the current $x$-vector. The $i$th column rotation, except for the first, must annihilate a spurious term introduced by the previous row rotation into the $(i+1)$th column just above the first superdiagonal, and simultaneously the transpose of the $i$th column rotation must liquidate the $i$th element of the current $y$-vector. The first column rotation would, when applied to $J^*J - \sigma^2 I$, annihilate the element in its first row and second column. At the end of the deflation process $J$'s element $\beta_{n-1}$ should have been replaced by zero. Of course, rounding errors will prevent the rotations from performing their roles exactly upon both the matrix $J$ and the vectors $x$ and $y$, but just as in the deflation of a tridiagonal matrix we are able so to determine the rotations that negligible residuals are
$$U = PX, \qquad V = QY,$$

to exhibit the decomposition (1.1):

$$A = U\Sigma V^*.$$

[…] $B \in S_p$,

$$\|A - \hat{A}\| \leq \|A - B\|,$$

where

$$\hat{A} = U\hat{\Sigma}V^*$$

and $\hat{\Sigma}$ is obtained from the $\Sigma$ of (1.1) by setting to zero all but its $p$ largest singular values $\sigma_i$. Since the norm is unitarily invariant,

$$\|A - B\| = \|\Sigma - U^*BV\|.$$
Let $U^*BV = C$. Then […] Thus,

$$\|A - \hat{A}\| = (\sigma_{p+1}^2 + \cdots + \sigma_n^2)^{1/2}.$$
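The approximation property is easy to confirm numerically. The example below is ours (a random matrix; the Frobenius norm is used, matching the sum-of-squares formula):

```python
import numpy as np

# Numerical check: zeroing all but the p largest singular values
# gives an approximation whose Frobenius-norm error is
# (sigma_{p+1}^2 + ... + sigma_n^2)^(1/2).
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

p = 2
A_hat = U[:, :p] @ np.diag(sigma[:p]) @ Vt[:p, :]   # keep p largest

err = np.linalg.norm(A - A_hat, 'fro')
assert np.isclose(err, np.sqrt(np.sum(sigma[p:] ** 2)))
```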
Finding the vector x of shortest length which minimizes || b ? Ax \\ is
equivalent to finding the vector y of shortest length which minimizes
$$A = U\Sigma V^*, \qquad A^\dagger = V\Sigma^\dagger U^*$$

to obtain the least squares solution $x = A^\dagger b$. Once again, to ignore some singular values $\sigma_{r+1}, \sigma_{r+2}, \ldots, \sigma_n$ is equivalent to perturbing $A$ by a matrix whose norm is $\left(\sum_{i=r+1}^{n} \sigma_i^2\right)^{1/2}$.
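The truncated least squares solution can be sketched as follows (our own helper and tolerance choice, compared against a library pseudo-inverse for a rank-deficient example):

```python
import numpy as np

# Sketch of x = A'b via the singular value decomposition:
# reciprocate only the singular values above a cutoff, treating
# the rest as zero.
def pinv_solve(A, b, tol=1e-10):
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    keep = sigma > tol * sigma[0]
    inv = np.where(keep, 1.0 / np.where(keep, sigma, 1.0), 0.0)
    return Vt.T @ (inv * (U.T @ b))

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]      # force rank deficiency (rank 3)
b = rng.standard_normal(6)

x = pinv_solve(A, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)
```

The result is the minimum-norm least squares solution; dropping the tiny singular value perturbs $A$ only by the amount stated above.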
In some scientific calculations it is preferable that a given square matrix $A$ […]

$$\Sigma y = c + \delta c,$$

in which the permissible perturbation $\delta c$ still satisfies

(5.1) $\|\delta c\| < \varepsilon$.

Subject to this constraint, $\delta c$ may be
$$\int A(i,j)\,x(j)\,dj = b(i).$$
[28] ——, On Jacobi rotation patterns, Proc. Symp. Appl. Math. XV: Experimental Arithmetic, High Speed Computing, and Mathematics, Amer. Math. Soc., 1963, pp. 219–240.
[29] F. Smithies, Integral Equations, Cambridge University Press, Cambridge, 1958, Chap. VIII.
[30] J. H. Wilkinson, The calculation of the eigenvectors of codiagonal matrices, Comput. J., 1 (1958), pp. 148–152.
[31] ——, Stability of the reduction of a matrix to almost triangular and triangular forms by elementary similarity transformations, J. Assoc. Comput. Mach., 6 (1959), pp. 336–359.