
The spectral decomposition

Let A be an n × n symmetric matrix. From the spectral theorem, we know that there is an orthonormal basis u_1, ..., u_n of R^n such that each u_j is an eigenvector of A. Let λ_j be the eigenvalue corresponding to u_j, that is,
\[
Au_j = \lambda_j u_j.
\]
Then
\[
A = PDP^{-1} = PDP^T
\]
where P is the orthogonal matrix P = [u_1 ⋯ u_n] and D is the diagonal matrix with diagonal entries λ_1, ..., λ_n. The equation A = PDP^T can be rewritten as:
\[
A = \begin{bmatrix} u_1 & \cdots & u_n \end{bmatrix}
\begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}
\begin{bmatrix} u_1^T \\ \vdots \\ u_n^T \end{bmatrix}
= \begin{bmatrix} \lambda_1 u_1 & \cdots & \lambda_n u_n \end{bmatrix}
\begin{bmatrix} u_1^T \\ \vdots \\ u_n^T \end{bmatrix}
= \lambda_1 u_1 u_1^T + \cdots + \lambda_n u_n u_n^T.
\]
The expression
\[
A = \lambda_1 u_1 u_1^T + \cdots + \lambda_n u_n u_n^T
\]
is called the spectral decomposition of A. Note that each matrix u_j u_j^T has rank 1 and is the matrix of the projection onto the one-dimensional subspace spanned by u_j. In other words, the linear map x ↦ (u_j u_j^T)x is the orthogonal projection onto the subspace spanned by u_j.
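The rank-one form of the spectral decomposition is easy to check numerically. Below is a minimal sketch in Python, using a small symmetric matrix chosen for illustration (it does not appear in the notes): the matrix is rebuilt as the sum λ_1 u_1 u_1^T + λ_2 u_2 u_2^T.

```python
import math

# Illustrative symmetric matrix (an assumption, not from the notes):
# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3 with orthonormal
# eigenvectors u1 = (1, -1)/sqrt(2) and u2 = (1, 1)/sqrt(2).
s = 1 / math.sqrt(2)
eigs = [(1.0, (s, -s)), (3.0, (s, s))]

# Assemble A as the sum of eigenvalue-weighted rank-one projections.
A = [[0.0, 0.0], [0.0, 0.0]]
for lam, u in eigs:
    for i in range(2):
        for j in range(2):
            A[i][j] += lam * u[i] * u[j]

print(A)  # recovers [[2, 1], [1, 2]] up to floating-point rounding
```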
Quadratic forms
Definition. A quadratic form on R^n is a function of the form
\[
Q(x) = x^T A x
\]
where A is an n × n symmetric matrix.

Examples. Let
\[
A = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 3 & 1 \\ 3 & 1 & 1 \end{bmatrix}.
\]
We calculated its eigenvalues and eigenvectors in class. The characteristic polynomial of A is
\[
p_A(\lambda) = \det(A - \lambda I) = -20 + 4\lambda + 5\lambda^2 - \lambda^3 = (5 - \lambda)(\lambda + 2)(\lambda - 2).
\]
So the eigenvalues are −2, 2, 5. The corresponding eigenspaces are all one dimensional, spanned by the eigenvectors
\[
v_1 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}, \quad
v_2 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}, \quad
v_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]
Scaling the eigenvectors to unit length, we get the orthogonal matrix
\[
P = \begin{bmatrix} \frac{v_1}{\|v_1\|} & \frac{v_2}{\|v_2\|} & \frac{v_3}{\|v_3\|} \end{bmatrix}
= \begin{bmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\
0 & -\frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} \\
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}}
\end{bmatrix}.
\]
Then P^T = P^{-1} and A = PDP^T, where D is the diagonal matrix with diagonal entries −2, 2, 5.
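The factorization A = PDP^T above can be checked numerically. The sketch below rebuilds A entrywise from the normalized eigenvectors and the eigenvalues −2, 2, 5 found in the example.

```python
import math

# Eigenvectors from the notes, normalized to unit length.
vs = [[1, 0, -1], [1, -2, 1], [1, 1, 1]]
cols = []
for v in vs:
    n = math.sqrt(sum(x * x for x in v))
    cols.append([x / n for x in v])

# P has the normalized eigenvectors as columns; D = diag(-2, 2, 5).
P = [[cols[j][i] for j in range(3)] for i in range(3)]
lams = [-2, 2, 5]

# Compute P D P^T entrywise: (PDP^T)_{ij} = sum_k lam_k P_{ik} P_{jk}.
PDPt = [[sum(P[i][k] * lams[k] * P[j][k] for k in range(3))
         for j in range(3)] for i in range(3)]

print([[round(x, 10) for x in row] for row in PDPt])
# Should recover A = [[1, 1, 3], [1, 3, 1], [3, 1, 1]] up to rounding.
```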


Diagonalization of quadratic forms
Let y = P^{-1}x = P^T x. Then we have
\[
Q = x^T A x = x^T P D P^{-1} x = (P^{-1}x)^T D (P^{-1}x) = y^T D y.
\]
Explicitly, one has
\[
Q = x_1^2 + 3x_2^2 + x_3^2 + 2x_1x_2 + 6x_1x_3 + 2x_2x_3 = -2y_1^2 + 2y_2^2 + 5y_3^2
\]
where
\[
y_1 = \tfrac{1}{\sqrt{2}}x_1 - \tfrac{1}{\sqrt{2}}x_3, \qquad
y_2 = \tfrac{1}{\sqrt{6}}x_1 - \tfrac{2}{\sqrt{6}}x_2 + \tfrac{1}{\sqrt{6}}x_3, \qquad
y_3 = \tfrac{1}{\sqrt{3}}x_1 + \tfrac{1}{\sqrt{3}}x_2 + \tfrac{1}{\sqrt{3}}x_3.
\]
From the diagonalization of the quadratic form Q we find that the level surfaces Q = c are hyperboloids whose axes of symmetry are the lines spanned by the columns of P. So the columns of P are called the principal axes of Q. It also shows that the quadratic form Q takes both positive and negative values, so it is "indefinite".
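The identity Q = −2y_1^2 + 2y_2^2 + 5y_3^2 can be spot-checked at a sample point; the sketch below compares the two expressions for Q (the sample point is arbitrary, chosen only for illustration).

```python
import math

def Q(x1, x2, x3):
    # The quadratic form from the notes, in the original coordinates.
    return x1**2 + 3*x2**2 + x3**2 + 2*x1*x2 + 6*x1*x3 + 2*x2*x3

def Q_diag(x1, x2, x3):
    # Change of variables y = P^T x, then Q = -2 y1^2 + 2 y2^2 + 5 y3^2.
    y1 = x1 / math.sqrt(2) - x3 / math.sqrt(2)
    y2 = x1 / math.sqrt(6) - 2 * x2 / math.sqrt(6) + x3 / math.sqrt(6)
    y3 = (x1 + x2 + x3) / math.sqrt(3)
    return -2 * y1**2 + 2 * y2**2 + 5 * y3**2

# Arbitrary sample point: both expressions should agree up to rounding.
print(Q(1.0, 2.0, -3.0), Q_diag(1.0, 2.0, -3.0))
```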
Singular value decomposition
A rectangular matrix is called diagonal if all the entries away from the main diagonal are zero.

Theorem. Let A be an m × n real matrix of rank r. Then A can be written in the form
\[
A = U \Sigma V^T
\]
where

◦ Σ (of size m × n) is a rectangular diagonal matrix with r nonzero diagonal entries.

◦ U (of size m × m) and V (of size n × n) are orthogonal matrices.

Such a decomposition A = UΣV^T is called the singular value decomposition of A and is one of the most important matrix factorizations for many applications.

◦ The nonzero diagonal entries of Σ are positive and usually arranged in (weakly) decreasing order. These nonzero entries are the square roots of the nonzero eigenvalues of A^T A (or AA^T). They are called the singular values of A. The number of singular values (counted with multiplicity) is equal to the rank of A.

◦ The columns of V form an orthonormal basis of R^n consisting of eigenvectors of A^T A. These columns are called the right singular vectors of A.

◦ The columns of U form an orthonormal basis of R^m consisting of eigenvectors of AA^T. These columns are called the left singular vectors of A.
Example: How to find SVD
Let
\[
A = \begin{bmatrix} 3 & 2 \\ 2 & 3 \\ 2 & -2 \end{bmatrix}.
\]
Find the singular value decomposition of A.

We compute
\[
A^T A = \begin{bmatrix} 17 & 8 \\ 8 & 17 \end{bmatrix}.
\]
The symmetric matrix A^T A has eigenvalues 25 and 9, with eigenvectors
\(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\) and \(\begin{bmatrix} -1 \\ 1 \end{bmatrix}\) respectively. Scaling these eigenvectors we find an orthonormal basis
\[
v_1 = \begin{bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}, \quad
v_2 = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \end{bmatrix}.
\]
Let V = [v_1 v_2]. Let
\[
u_1 = \frac{Av_1}{5} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad
u_2 = \frac{Av_2}{3} = \frac{1}{3\sqrt{2}}\begin{bmatrix} -1 \\ 1 \\ -4 \end{bmatrix}.
\]
Extend {u_1, u_2} to an orthonormal basis {u_1, u_2, u_3} of R^3:
\[
u_3 = \frac{1}{3}\begin{bmatrix} -2 \\ 2 \\ 1 \end{bmatrix}.
\]
Let U = [u_1 u_2 u_3] and
\[
\Sigma = \begin{bmatrix} 5 & 0 \\ 0 & 3 \\ 0 & 0 \end{bmatrix}.
\]
Then U and V have orthonormal columns, Σ is diagonal, and
\[
AV = \begin{bmatrix} Av_1 & Av_2 \end{bmatrix}
= \begin{bmatrix} 5u_1 & 3u_2 \end{bmatrix}
= \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix}
\begin{bmatrix} 5 & 0 \\ 0 & 3 \\ 0 & 0 \end{bmatrix} = U\Sigma.
\]
Since V^T = V^{-1}, we obtain
\[
A = U \Sigma V^T.
\]
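The factorization can be verified by multiplying out UΣV^T with the matrices found above; a minimal sketch in plain Python:

```python
import math

r2, r3_2 = math.sqrt(2), 3 * math.sqrt(2)

# U, Sigma, V from the worked example (u1, u2, u3 are the columns of U).
U = [[1/r2, -1/r3_2, -2/3],
     [1/r2,  1/r3_2,  2/3],
     [0,    -4/r3_2,  1/3]]
S = [[5, 0], [0, 3], [0, 0]]
V = [[1/r2, -1/r2], [1/r2, 1/r2]]

def matmul(X, Y):
    # Plain triple-loop matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Vt = [[V[j][i] for j in range(2)] for i in range(2)]
A = matmul(matmul(U, S), Vt)
print([[round(x, 10) for x in row] for row in A])
# Should recover A = [[3, 2], [2, 3], [2, -2]] up to rounding.
```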
Math 207, December 07, 2018, A few practice problems

1. Let
\[
A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.
\]
Let L : R^2 → R^2 be the linear map L(v) = Av. Let S denote the unit circle in the plane. Find the major and minor axes of the ellipse L(S). Find \(\max_{\|v\|=1} \|Av\|\) and find a vector v for which the maximum is attained.

sketch of solution. One computes
\[
M = A^T A = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.
\]
The characteristic polynomial of M is
\[
\det\begin{bmatrix} 1-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = \lambda^2 - 3\lambda + 1.
\]
The eigenvalues are λ_1 = (3 + √5)/2 and λ_2 = (3 − √5)/2. One computes
\[
M - \lambda_1 I = \begin{bmatrix} 1-\lambda_1 & 1 \\ 1 & 2-\lambda_1 \end{bmatrix}.
\]
Note that M − λ_1 I has rank 1, so its columns are scalar multiples of one another. In fact the first column is (1 − λ_1) times the second, since (1 − λ_1)(2 − λ_1) = λ_1^2 − 3λ_1 + 2 = 1. That is,
\[
\begin{bmatrix} 1-\lambda_1 & 1 \\ 1 & 2-\lambda_1 \end{bmatrix}
\begin{bmatrix} 1 \\ -(1-\lambda_1) \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
So
\[
w_1 = \begin{bmatrix} 1 \\ -(1-\lambda_1) \end{bmatrix}
\]
is an eigenvector of M with eigenvalue λ_1. Since the eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal,
\[
w_2 = \begin{bmatrix} 1-\lambda_1 \\ 1 \end{bmatrix}
\]
must be an eigenvector with eigenvalue λ_2. The vectors Aw_1 and Aw_2 point in the directions of the major and minor axes of the ellipse L(S). The required maximum is the largest singular value of A, which is √λ_1 = (√5 + 1)/2, and the maximum is attained when v = w_1/∥w_1∥. (By the way, √λ_1 is also the half-length of the major axis of the ellipse L(S).)

2. Let V be the vector space of all infinitely differentiable functions on R. Define v_1, v_2, v_3 ∈ V by v_1 = e^x, v_2 = sin x, v_3 = cos x. Let H be the span of v_1, v_2, v_3.

(a) Show that B = {v_1, v_2, v_3} is a basis of H.

(b) Let L : H → H be the linear map defined by L(f) = f′. Find the matrix of L with respect to the basis B.

sketch of solution. (a) Suppose c_1, c_2, c_3 are scalars such that
\[
c_1 v_1 + c_2 v_2 + c_3 v_3 = 0.
\]
This means
\[
c_1 e^x + c_2 \sin x + c_3 \cos x = 0
\]
for all real numbers x. Take x = 0, π/2, π to get the system of linear equations:
\[
\begin{aligned}
c_1 e^0 + c_2 \sin 0 + c_3 \cos 0 &= 0 \\
c_1 e^{\pi/2} + c_2 \sin \tfrac{\pi}{2} + c_3 \cos \tfrac{\pi}{2} &= 0 \\
c_1 e^{\pi} + c_2 \sin \pi + c_3 \cos \pi &= 0
\end{aligned}
\]
which is equivalent to
\[
A\vec{c} = \vec{0} \quad \text{where} \quad
A = \begin{bmatrix} 1 & 0 & 1 \\ e^{\pi/2} & 1 & 0 \\ e^{\pi} & 0 & -1 \end{bmatrix}.
\]
Note that adding the third row of A to the first row, we get the matrix
\[
A' = \begin{bmatrix} 1 + e^{\pi} & 0 & 0 \\ e^{\pi/2} & 1 & 0 \\ e^{\pi} & 0 & -1 \end{bmatrix}
\]
which is lower triangular with nonzero entries on the diagonal. It follows that det(A) = det(A′) ≠ 0. So A is invertible, and the only solution of A⃗c = ⃗0 is ⃗c = ⃗0. Thus we have argued that c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 implies c_1 = c_2 = c_3 = 0. So B = {v_1, v_2, v_3} is linearly independent. By definition, B spans H. So B is a basis of H.

(b) From the basic rules of differentiation, one gets L(v_1) = v_1, L(v_2) = v_3 and L(v_3) = −v_2. We compute the coordinates of L(v_1), L(v_2), L(v_3) with respect to the basis B:
\[
[L(v_1)]_B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad
[L(v_2)]_B = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad
[L(v_3)]_B = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}.
\]
The matrix of L with respect to the basis B is
\[
[L]_B = \begin{bmatrix} [L(v_1)]_B & [L(v_2)]_B & [L(v_3)]_B \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix}.
\]

3. Let J be the 5 × 5 matrix all of whose entries are equal to 1. Find the eigenvalues and eigenvectors of J.

sketch of solution. Let v = [1, 1, 1, 1, 1]^T be the first column of J. First observe that Jv = 5v. So v is an eigenvector of J corresponding to eigenvalue 5.

Since each column of J is equal to v, the column space is spanned by v, hence col(J) is one dimensional. So rank(J) = 1, hence nul(J) = {x ∈ R^5 : Jx = 0} has dimension 4. Observe that nul(J) is just the eigenspace for eigenvalue 0. So 0 is an eigenvalue of J with a 4-dimensional eigenspace. We can find the eigenvectors for the eigenvalue 0 by solving the equation Jx = 0. Applying the Gaussian algorithm to J we get
\[
J' = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}.
\]
So the system of equations J′x = 0 is equivalent to the single equation x_1 + x_2 + x_3 + x_4 + x_5 = 0, and the general solution of J′x = 0 is
\[
\begin{bmatrix} -t_2 - t_3 - t_4 - t_5 \\ t_2 \\ t_3 \\ t_4 \\ t_5 \end{bmatrix}
= t_2\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
+ t_3\begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}
+ t_4\begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}
+ t_5\begin{bmatrix} -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.
\]
So the vectors
\[
\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} -1 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}
\]
form a basis for the eigenspace corresponding to eigenvalue 0.

At this point we have already found five linearly independent eigenvectors of J. So 5 and 0 are the only eigenvalues of J. The eigenspace for 5 is one dimensional, spanned by v. The eigenspace for eigenvalue 0 is four dimensional; a basis is given above.

4. Mark each statement true or false. Justify each answer.

a. An n × n matrix that is orthogonally diagonalizable must be symmetric.

b. If A^T = A and if u and v are vectors such that Au = 3u and Av = 4v, then u and v are orthogonal.

c. An n × n symmetric matrix has n distinct real eigenvalues.

sketch of solution. a. TRUE. Let A = PDP^T where P is orthogonal and D is diagonal. Then A^T = (PDP^T)^T = (P^T)^T D^T P^T = PDP^T = A. So A is symmetric.

b. TRUE. We compute
\[
(3u)^T v = (Au)^T v = u^T A^T v = u^T A v = u^T (4v).
\]
It follows that 3(u · v) = 4(u · v). So u · v = 0, that is, u and v are orthogonal.

c. FALSE. Take A to be the n × n identity matrix. Then A is a symmetric matrix whose only eigenvalue is 1.

5. Section 7.1 problem 30.

sketch of solution of problem 30. Suppose A = PDP^T and B = QD′Q^T with P and Q orthogonal and D, D′ diagonal. Then verify that A and B are symmetric. Since AB = BA, we find that (AB)^T = B^T A^T = BA = AB. So AB is also symmetric, hence orthogonally diagonalizable.
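The claims in problem 3 are easy to verify directly. The sketch below checks Jv = 5v and that each basis vector of the 0-eigenspace is sent to zero; all arithmetic is over integers, so the checks are exact.

```python
# J is the 5x5 all-ones matrix from problem 3.
J = [[1] * 5 for _ in range(5)]

def apply(M, v):
    # Matrix-vector product for a 5x5 matrix.
    return [sum(M[i][j] * v[j] for j in range(5)) for i in range(5)]

v = [1, 1, 1, 1, 1]
print(apply(J, v))  # → [5, 5, 5, 5, 5], i.e. Jv = 5v

# Each basis vector of the 0-eigenspace found in the solution is killed by J.
basis = [[-1, 1, 0, 0, 0], [-1, 0, 1, 0, 0],
         [-1, 0, 0, 1, 0], [-1, 0, 0, 0, 1]]
print([apply(J, w) for w in basis])  # each is the zero vector
```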
