
CSE6643/MATH6643: Numerical Linear Algebra

Haesun Park
hpark@cc.gatech.edu

School of Computational Science and Engineering


College of Computing
Georgia Institute of Technology
Atlanta, GA 30332, USA

Lecture 1

What is Numerical Analysis?


Three great branches of science:
Theory, Experiment, and Computation
The purpose of computing is insight, not numbers (Hamming, 1961).
Numerical Analysis is
Study of Algorithms for Problems of Continuous Mathematics.
Ex.: Newton's method, Lagrange interpolation polynomials, Gaussian
elimination, Euler's method, ...
Computational mathematics is mainly based on two ideas
(extreme simplification) Taylor series and linear algebra.
Role of Computers in Numerical Computing
Computers certainly play a part in numerical computing, but even if rounding
error vanished, 95% of numerical analysis would remain.
Most mathematical problems cannot be solved
by a finite sequence of elementary operations
Need: fast algorithms that converge to approximate answers accurate to
many digits of precision, in science and engineering applications.

CSE6643/MATH6643: Numerical Linear Algebra p.1/9

Different Types of Problems in Numerical Computing


Problem F: can be solved in a finite sequence of elementary operations:
Roots of polynomials of degree up to 4: a closed-form formula exists (Ferrari, 1540)
Solving linear equations
Linear programming
Problem I: cannot be solved in a finite sequence of elementary operations:
Roots of polynomials of degree 5 and higher: no closed-form formula
exists (Ruffini and Abel, around 1800)
Finding eigenvalues of an n × n matrix with n ≥ 5
Minimize a function of several variables
Evaluate an integral
Solve an ODE
Solve a PDE
Problem F is not necessarily easier than Problem I
When the problem dimension is very high, one often forgoes the exact solution
and uses approximate, fast methods instead.
*** World's largest matrix computation as of April 2007:
Google's PageRank - the eigenvector of a matrix of order 2.7 billion

Gauss (1777-1855) and Numerical Computing


least squares data fitting (1795)
systems of linear equations (1809)
numerical quadrature (1814)
fast Fourier transform (1805) - not well known until it was rediscovered by
Cooley and Tukey (1965)
Numerical Linear Algebra
square linear system solving
least squares problems
eigenvalue problem
Often, Algorithms = Matrix Factorizations


Square Linear System Solving


When does the solution exist?
When is it easy to solve?
Diagonalization: expensive
Make it triangular: A = LU, with L lower triangular and U upper triangular
Gaussian elimination: turn the problem into triangular system solving
May break down: A = [ 0 1 ; 1 0 ]
Even for matrices that have an LU factorization, it can be unstable.
Pivoting: by interchanging rows, stability can be achieved.
Gaussian elimination with pivoting: P A = LU
Discovery of pivoting was easy but its theoretical analysis has been hard.
For most matrices, it is stable but in 1960 Wilkinson and others found that
for certain exceptional matrices, Gaussian elimination with pivoting is
unstable.
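The breakdown and its pivoting fix can be sketched in a few lines. This is an illustrative NumPy sketch (the function name `lu_partial_pivoting` is mine, not a library routine); in practice one would call a library factorization such as `scipy.linalg.lu`:

```python
# Sketch of Gaussian elimination with partial pivoting, P A = L U.
# Illustrative only; not a production implementation.
import numpy as np

def lu_partial_pivoting(A):
    """Return P, L, U with P @ A = L @ U, picking the largest pivot in each column."""
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)
    L = np.eye(n)
    for k in range(n - 1):
        # Partial pivoting: swap in the row with the largest entry in column k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p], :] = A[[p, k], :]
            P[[k, p], :] = P[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
        # Eliminate below the pivot; multipliers go into L.
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)

# Without pivoting, A = [0 1; 1 0] breaks down on a zero pivot;
# with a row interchange the factorization is immediate.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
P, L, U = lu_partial_pivoting(A)
```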


Orthogonal (Unitary) Transformations


Use of orthogonal matrices was introduced in the late 1950s
Q ∈ R^(n×n) with Q^(-1) = Q^T, or Q ∈ C^(n×n) with Q^(-1) = Q^H
QR factorization: for any matrix A : m × n, m ≥ n, a QR factorization of
A exists, A = Q [ R ; 0 ], where Q : m × m has orthonormal columns and
R : n × n is upper triangular. Reduced QRD: A = Q1 R, where
Q = [ Q1, Q2 ]

Gram (1883)-Schmidt (1907) orthogonalization: the columns of Q are obtained
one at a time, and R is obtained as a by-product, in a process of triangular
orthogonalization
Modified Gram-Schmidt (Laplace 1816, Rice 1966)
Householder method (1958; Householder reflector: Turnbull and Aitken,
1932): A is reduced to an upper triangular matrix R via orthogonal
operations. More stable numerically, because orthogonal operations
preserve the 2- and Frobenius norms and thus do not amplify the rounding
errors introduced at each step: H = I - 2vv^T/(v^T v), v ≠ 0
Givens method: built from 2×2 plane rotations
plane rotations: [ c s ; -s c ] = [ cos θ sin θ ; -sin θ cos θ ]
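The modified Gram-Schmidt idea (orthogonalize one column at a time, subtracting each new direction from all remaining columns immediately) can be sketched as follows; `mgs_qr` is an illustrative name, not a library function:

```python
import numpy as np

def mgs_qr(A):
    """Reduced QRD A = Q1 @ R via modified Gram-Schmidt."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # Remove the q_k component from every remaining column right away;
        # this reordering is what makes MGS more stable than classical GS.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
Q, R = mgs_qr(A)
```

For the Householder and Givens approaches, orthogonal transformations are instead applied to A itself until it becomes triangular.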

Important matrix computation algorithms in 1960s


Based on the QR factorization:
to solve least squares problems
to construct orthonormal bases
used at the core of other algorithms, especially EVD and SVD algorithms


Least Squares
Overdetermined system solving: Ax ≈ b, where A : m × n with m ≥ n
For a square system, we know how to solve it: normal equations, or the QRD
Reduced QR decomposition: a distance-preserving dimension reduction
method
QRD: efficient updating and downdating methods exist
Rank deficiency: Q1 is not a basis for range(A) if rank(A) is not full
A pivoted QR decomposition, a rank-revealing QR decomposition, or the SVD
is needed if rank(A) is not full
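The QRD route for the full-rank case can be sketched as: factor A = Q1 R, then solve the triangular system R x = Q1^T b (a NumPy sketch, checked here against the library least-squares solver):

```python
import numpy as np

# Solve min ||Ax - b||_2 via the reduced QRD: A = Q1 R  =>  R x = Q1^T b.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))   # overdetermined, generically full column rank
b = rng.standard_normal(10)

Q1, R = np.linalg.qr(A)            # reduced QR: Q1 is 10x3, R is 3x3
x = np.linalg.solve(R, Q1.T @ b)   # back-substitution on the triangular factor
```

Unlike the normal equations A^T A x = A^T b, this avoids squaring the condition number of A.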


Singular Value Decomposition (SVD)


Beltrami, Jordan, Sylvester, in the late 19th century
made well known by Golub 1965
The SVD: any matrix A ∈ C^(m×n) (assume m ≥ n, though this is not necessary)
can be decomposed as A = U Σ V^H, where U ∈ C^(m×m) is unitary, V ∈ C^(n×n)
is unitary, and Σ ∈ R^(m×n) is diagonal with nonnegative entries,
Σ = diag(σ1, ..., σn) with σ1 ≥ σ2 ≥ ... ≥ σn ≥ 0. If σr > σ(r+1) = 0,
then rank(A) = r.
Ap = Up Σp Vp^T, p ≤ rank(A), is the best rank-p approximation of A
The singular values of A are the nonnegative square roots of the eigenvalues
of A^T A
A^T A = V Σ^T Σ V^T and A A^T = U Σ Σ^T U^T
Latent Semantic Indexing:
lower rank approximation of the term-document matrix
Principal Component Analysis:
Let C = A - c e^T (the columns of A centered by their mean c).
Then the leading left singular vectors of C are the PCA solutions. Let the
SVD of C be C = U Σ V^T. Then C C^T = U Σ Σ^T U^T, so Uk gives the k
principal vectors.
But SVD is expensive.
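The best-rank-p property above can be verified numerically: truncating the SVD to p terms gives Ap, and the 2-norm error ||A - Ap||_2 equals the first discarded singular value (Eckart-Young). A NumPy sketch:

```python
import numpy as np

# Best rank-p approximation Ap = Up @ diag(s_p) @ Vp^T from the truncated SVD.
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

p = 2
Ap = U[:, :p] @ np.diag(s[:p]) @ Vt[:p, :]

# The 2-norm error equals the first dropped singular value, sigma_{p+1}.
err = np.linalg.norm(A - Ap, 2)
```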

Eigenvalue problem Ax = λx, x ≠ 0
Symmetric vs. Non-symmetric
For which matrices are eigenvalues easy to find?
Which transformations are allowed? Similarity transformations. Two matrices A
and B are similar if B = X^(-1) A X for a nonsingular matrix X. Then the
characteristic polynomials of A and B are the same.
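Since similar matrices share a characteristic polynomial, they share eigenvalues, which is easy to check numerically (a NumPy sketch with an arbitrary random similarity transformation):

```python
import numpy as np

# Similar matrices B = X^{-1} A X have the same characteristic polynomial,
# hence the same eigenvalues.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))    # generically nonsingular
B = np.linalg.solve(X, A @ X)      # X^{-1} A X without forming the inverse

eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_B = np.sort_complex(np.linalg.eigvals(B))
```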
Algorithms for Symmetric Eigenvalue Problems
Jacobi algorithm
QR algorithm (1960)
One of the matrix factorization algorithms with the greatest impact
Francis, Kublanovskaya, Wilkinson
Iterative; in the symmetric case it typically converges cubically
EISPACK, LAPACK, NAG, IMSL, Numerical Recipes, MATLAB, Maple,
Mathematica
Power method: used to find the leading eigenvalue and eigenvector (e.g., in PageRank)
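The power method is just repeated multiplication and normalization; a minimal sketch (the function name `power_method` and the test matrix are mine):

```python
import numpy as np

def power_method(A, iters=200):
    """Power iteration for the dominant eigenpair, as used in PageRank."""
    n = A.shape[0]
    x = np.arange(1, n + 1, dtype=float)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = A @ x                  # amplify the dominant eigendirection
        x /= np.linalg.norm(x)     # renormalize to avoid overflow
    lam = x @ A @ x                # Rayleigh quotient estimate of the eigenvalue
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 3 and 1
lam, x = power_method(A)
```

Convergence is linear, at a rate governed by the ratio |λ2/λ1| of the two largest eigenvalues in magnitude.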

