Similar Matrices
Let A and B be n × n matrices. We say that A is similar to B if there is an
invertible n × n matrix P such that P^{-1}AP = B. If A is similar to B, we
write A ∼ B.
Remarks
a. A ∼ A.
b. If A ∼ B, then B ∼ A.
c. If A ∼ B and B ∼ C, then A ∼ C.
If A ∼ B, then:
a. det A = det B.
b. A is invertible if and only if B is invertible.
c. A and B have the same rank.
d. A and B have the same characteristic polynomial.
e. A and B have the same eigenvalues.
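These shared invariants are easy to check numerically. Below is a minimal sketch using NumPy, with an arbitrarily chosen matrix A and invertible P (both are illustrative values, not from the notes):

```python
import numpy as np

# Hypothetical 2x2 matrix A and invertible P, chosen for illustration.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
P = np.array([[1.0, 1.0], [1.0, 2.0]])   # det P = 1, so P is invertible
B = np.linalg.inv(P) @ A @ P             # B = P^{-1} A P, so A ~ B

# Invariants shared by similar matrices:
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))        # same determinant
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))  # same rank
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))            # same eigenvalues
```

All three checks print True, matching properties a, c, and e above.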
MATH10212 Linear Algebra Brief lecture notes 49
Diagonalization
Definition. An n × n matrix A is diagonalizable if there is a diagonal
matrix D such that A is similar to D; that is, if there is an invertible
matrix P such that P^{-1}AP = D.
Note that the eigenvalues of D are its diagonal elements, and these are
the same eigenvalues as for A.
Theorem 4.24. Let λ1, λ2, ..., λk be distinct eigenvalues of A and, for each i,
let Bi be a basis of the eigenspace E_{λi}. Then
B = B1 ∪ B2 ∪ ... ∪ Bk
(i.e., the total collection of basis vectors for all of the eigenspaces) is linearly
independent.
(The proof uses the fact that eigenvectors for distinct eigenvalues are linearly
independent, by Th. 4.20.)
The following statements are equivalent:
a. A is diagonalizable.
b. The union B of the bases of the eigenspaces of A (as in Theorem 4.24)
contains n vectors (which is equivalent to Σ_{i=1}^{k} dim E_{λi} = n).
c. The algebraic multiplicity of each eigenvalue equals its geometric multiplicity,
and all eigenvalues are real numbers (this condition is missing in the textbook!).
Theorem 4.27 and Th. 4.23 actually give a method to decide whether A
is diagonalizable and, if yes, to find P such that P^{-1}AP is diagonal: the
columns of P are the vectors of the bases of the eigenspaces.
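The method can be sketched numerically: NumPy's eig returns the eigenvectors as the columns of a matrix P, and conjugating A by that P yields a diagonal matrix. This is a sketch using the symmetric matrix from the example that follows:

```python
import numpy as np

# The columns of P are eigenvectors of A (bases of the eigenspaces);
# then P^{-1} A P is diagonal with the eigenvalues on the diagonal.
A = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, 2.0, 1.0]])

eigenvalues, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P             # should equal diag(eigenvalues)

print(np.round(np.sort(eigenvalues.real), 6))   # -1 (twice) and 5
print(np.allclose(D, np.diag(eigenvalues)))     # True
```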
Example. For
    [1 2 2]
A = [2 1 2]
    [2 2 1]
the characteristic polynomial is
              |1−λ   2    2 |
det(A − λI) = | 2   1−λ   2 | = (1−λ)^3 + 8 + 8 − 4(1−λ) − 4(1−λ) − 4(1−λ)
              | 2    2   1−λ|
            = −(λ − 5)(λ + 1)^2.
Thus, the eigenvalues are 5 (of algebraic multiplicity 1) and −1 (of algebraic multiplicity 2).
Example. For
    [3 20 29]
A = [0  1 82]
    [0  0  7]
the eigenvalues are 3, 1, and 7 (the diagonal entries of a triangular matrix). Since
they are distinct, the matrix is diagonalizable.
(To find P such that
           [3 0 0]
P^{-1}AP = [0 1 0],
           [0 0 7]
one still needs to solve the linear systems (A − λI)~x = ~0 for λ = 3, 1, 7.)
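A quick numerical check of this example (a sketch; eig solves those linear systems for us):

```python
import numpy as np

# Triangular example: eigenvalues are the diagonal entries 3, 1, 7.
A = np.array([[3.0, 20.0, 29.0],
              [0.0,  1.0, 82.0],
              [0.0,  0.0,  7.0]])

eigenvalues, P = np.linalg.eig(A)        # columns of P solve (A - lambda*I)x = 0
print(np.sort(eigenvalues.real))         # [1. 3. 7.]

# Distinct eigenvalues guarantee P is invertible, so A is diagonalizable:
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigenvalues)))   # True
```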
Example. For
    [3 1 0]
A = [0 3 1]
    [0 0 3]
the only eigenvalue is 3, of algebraic multiplicity 3. Eigenspace E_3:
[0 1 0]
[0 0 1] ~x = ~0;
[0 0 0]
this matrix has rank 2, so dim E_3 = 1. Since 1 < 3, A is
not diagonalizable.
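The rank computation in this example can be sketched with NumPy, using the rank–nullity theorem to get dim E_3:

```python
import numpy as np

# Jordan-block example: algebraic multiplicity of 3 is 3, but the eigenspace
# E_3 = null(A - 3I) is only 1-dimensional, so A is not diagonalizable.
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 3.0]])

M = A - 3 * np.eye(3)                    # the matrix (A - 3I)
rank = np.linalg.matrix_rank(M)
dim_E3 = 3 - rank                        # rank-nullity: dim null(M) = n - rank(M)
print(rank, dim_E3)                      # 2 1
```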
Example. Use diagonalization to find A^100 for
    [1 2]
A = [2 1].
The eigenvalues are −1 and 3. Eigenspace E_3:
[−2  2]
[ 2 −2] ~x = ~0;  x1 = x2;  basis (1, 1)^T.
Eigenspace E_{−1}:
[2 2]
[2 2] ~x = ~0;  x1 = −x2;  basis (1, −1)^T.
Let
    [ 1 1]
P = [−1 1];
then
               [−1 0]
P^{-1}AP = D = [ 0 3].
Now A = PDP^{-1}, so
A^100 = (PDP^{-1})^100 = (PDP^{-1})(PDP^{-1})···(PDP^{-1}) = P D^100 P^{-1}.
Since D^100 = diag((−1)^100, 3^100) = diag(1, 3^100), we get
        [ 1 1] [1   0   ] [1/2 −1/2]   [ 1 3^100] [1/2 −1/2]
A^100 = [−1 1] [0  3^100] [1/2  1/2] = [−1 3^100] [1/2  1/2]
              [3^100 + 1   3^100 − 1]
      = (1/2) [3^100 − 1   3^100 + 1].
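The worked example can be replayed numerically (floating point, so 3^100 is approximate but the comparison holds to machine precision):

```python
import numpy as np

# Recomputing the example: A^100 = P D^100 P^{-1}.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
P = np.array([[1.0, 1.0], [1.0, -1.0]])  # eigenvectors for 3 and -1 as columns
D = np.array([[3.0, 0.0], [0.0, -1.0]])  # matching order of eigenvalues

A100 = P @ np.linalg.matrix_power(D, 100) @ np.linalg.inv(P)

# Closed form from the notes: (1/2) [[3^100+1, 3^100-1], [3^100-1, 3^100+1]]
t = 3.0 ** 100
expected = 0.5 * np.array([[t + 1, t - 1], [t - 1, t + 1]])
print(np.allclose(A100, expected))       # True
```

(The column order of P here differs from the notes; any consistent ordering of eigenvectors and diagonal entries works.)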
Orthogonality in R^n
We introduce the dot product of vectors in R^n by setting
~u · ~v = ~u^T ~v;
that is, if
~u = (u1, ..., un)^T and ~v = (v1, ..., vn)^T (as columns),
then
~u · ~v = ~u^T ~v = u1 v1 + u2 v2 + ··· + un vn.
Properties of the dot product:
~u · ~v = ~v · ~u (commutativity).
~u · (~v + ~w) = ~u · ~v + ~u · ~w.
~u · (c~v) = c(~u · ~v). (The last two properties are referred to as linearity
of the dot product.)
~u · ~u = u1^2 + ··· + un^2, and therefore ~u · ~u ≥ 0. Moreover, if ~u · ~u = 0, then
~u = ~0.
The length (or norm) of a vector ~v in R^n is defined by
‖~v‖ = √(~v · ~v) = √(v1^2 + v2^2 + ··· + vn^2).
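Both definitions translate directly to code; a small sketch with arbitrary illustrative vectors:

```python
import numpy as np

# The dot product as a matrix product u^T v, and the norm as sqrt(v . v).
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -5.0, 6.0])

dot = u @ v                      # u1*v1 + u2*v2 + u3*v3
print(dot)                       # 4 - 10 + 18 = 12.0

norm_v = np.sqrt(v @ v)          # ||v|| = sqrt(v . v)
print(np.isclose(norm_v, np.linalg.norm(v)))   # True: agrees with NumPy's norm
```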
Solution. We must show that every pair of vectors from this set is orthogonal.
This is true, since each of the pairwise dot products equals zero (direct computation).
Theorem 5.1. If ~v1, ~v2, ..., ~vk is an orthogonal set of nonzero vectors in R^n,
then these vectors are linearly independent.
Proof. Suppose that c1~v1 + c2~v2 + ··· + ck~vk = ~0. Then, for each i = 1, ..., k,
(c1~v1 + c2~v2 + ··· + ck~vk) · ~vi = ~0 · ~vi = 0,
or, equivalently,
c1(~v1 · ~vi) + c2(~v2 · ~vi) + ··· + ck(~vk · ~vi) = 0.    (1)
Since ~v1, ~v2, ..., ~vk is an orthogonal set, all of the dot products in equation (1)
are zero, except ~vi · ~vi. Thus, equation (1) reduces to
ci(~vi · ~vi) = 0.
Since ~vi ≠ ~0, we have ~vi · ~vi ≠ 0, so ci = 0 for every i, and the vectors are
linearly independent.
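Theorem 5.1 is easy to illustrate numerically: an orthogonal set of nonzero vectors, stacked as columns, gives a full-rank matrix. The vectors below are a hypothetical orthogonal set, not the ones from Example 5.1:

```python
import numpy as np

# A hypothetical orthogonal set in R^3 (pairwise dot products are zero).
v1 = np.array([1.0,  1.0,  0.0])
v2 = np.array([1.0, -1.0,  2.0])
v3 = np.array([1.0, -1.0, -1.0])

# Pairwise orthogonality:
print(v1 @ v2, v1 @ v3, v2 @ v3)          # 0.0 0.0 0.0

# Linear independence: the matrix with these columns has rank 3.
M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))           # 3
```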
Example. The vectors ~v1, ~v2, ~v3 from Example 5.1 are orthogonal and, hence,
linearly independent. Since any three linearly independent vectors in R^3 form a
basis of R^3 (by the Fundamental Theorem of Invertible Matrices), it follows that
~v1, ~v2, ~v3 is an orthogonal basis for R^3.
Theorem. Let ~v1, ~v2, ..., ~vk be an orthogonal basis for a subspace W of R^n, and
let ~w be a vector in W. Then the unique scalars c1, ..., ck in the expansion
~w = c1~v1 + c2~v2 + ··· + ck~vk
are given by
ci = (~w · ~vi) / (~vi · ~vi)   for i = 1, ..., k.
Proof. Since ~v1, ~v2, ..., ~vk is a basis for W, we know that there are unique
scalars c1, c2, ..., ck such that
~w = c1~v1 + c2~v2 + ··· + ck~vk
(from Theorem 3.29). To establish the formula for ci, we take the dot product
of this linear combination with ~vi to obtain
~w · ~vi = (c1~v1 + c2~v2 + ··· + ck~vk) · ~vi = ci(~vi · ~vi),
since ~vj · ~vi = 0 for j ≠ i. Since ~vi ≠ ~0, ~vi · ~vi ≠ 0. Dividing by ~vi · ~vi, we obtain
the desired result.
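The formula ci = (~w · ~vi)/(~vi · ~vi) means coordinates with respect to an orthogonal basis need no linear system. A sketch with a hypothetical orthogonal basis of R^3 and an arbitrary ~w:

```python
import numpy as np

# Hypothetical orthogonal basis of R^3 and an arbitrary vector w in R^3.
v1 = np.array([1.0,  1.0,  0.0])
v2 = np.array([1.0, -1.0,  2.0])
v3 = np.array([1.0, -1.0, -1.0])
w  = np.array([3.0,  1.0,  2.0])

# c_i = (w . v_i) / (v_i . v_i), one dot product per coefficient:
c = [(w @ v) / (v @ v) for v in (v1, v2, v3)]

reconstructed = c[0] * v1 + c[1] * v2 + c[2] * v3
print(np.allclose(reconstructed, w))      # True: w = c1 v1 + c2 v2 + c3 v3
```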
Every nonzero vector ~v can be written as ~v = ‖~v‖ ~u, where ~u = ~v/‖~v‖ is the
unit vector in the direction of ~v.
Let ~qi denote the ith column of Q (and, hence, the ith row of Q^T). Since
the (i, j) entry of Q^T Q is the dot product of the ith row of Q^T and the jth
column of Q, it follows that Q^T Q = I if and only if ~qi · ~qi = 1 for each i and
~qi · ~qj = 0 whenever i ≠ j; that is, if and only if the columns of Q form an
orthonormal set.
Example
Each of the following matrices is orthogonal:
[1 0]   [1  0]   [1/√2  −1/√2]   [cos θ  −sin θ]
[0 1],  [0 −1],  [1/√2   1/√2],  [sin θ   cos θ].
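The orthogonality of the rotation matrix can be verified directly from Q^T Q = I (a sketch with an arbitrary angle):

```python
import numpy as np

# Checking Q^T Q = I for a rotation matrix (its columns are orthonormal).
theta = 0.7                                # arbitrary illustrative angle
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))     # True: columns are orthonormal
print(np.allclose(np.linalg.inv(Q), Q.T))  # True: Q^{-1} = Q^T
```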
Theorem. Let Q be an orthogonal matrix. Then:
a. Q^{-1} is orthogonal.
b. det Q = ±1.
c. If λ is an eigenvalue of Q, then |λ| = 1.
d. If Q1 and Q2 are orthogonal n × n matrices, then so is Q1Q2.
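All four properties can be checked numerically; a sketch using a rotation and a permutation matrix (both illustrative choices):

```python
import numpy as np

def is_orthogonal(Q):
    """Q is orthogonal iff Q^T Q = I."""
    return np.allclose(Q.T @ Q, np.eye(Q.shape[0]))

theta = 1.2
Q1 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])      # rotation
Q2 = np.array([[0.0, 1.0], [1.0, 0.0]])               # permutation (also orthogonal)

print(is_orthogonal(np.linalg.inv(Q1)))               # a. Q^{-1} is orthogonal
print(np.isclose(abs(np.linalg.det(Q1)), 1.0))        # b. det Q = +/-1
print(np.allclose(np.abs(np.linalg.eigvals(Q1)), 1))  # c. |lambda| = 1
print(is_orthogonal(Q1 @ Q2))                         # d. product is orthogonal
```

(For the rotation matrix the eigenvalues are e^{±iθ}, so check c uses the complex modulus.)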