
Contents

1. Linear Equations and Matrices
   Linear Equations
   Matrices
2. Vector Spaces
3. Linear Transformations and Matrix Representations
   3.1. The Existence of Linear Transformations
   3.2. The Dimension Theorem
   3.3. Matrix Representation
   Dual Spaces
4. Matrices
   Systems of Linear Equations
   Determinants
5. Polynomial Rings
6. Diagonalizations
   Matrix Limits
7. Jordan Canonical Forms
   Jordan Canonical Form Theorem
   Minimal Polynomials
8. Inner Product Spaces
   Orthogonal Projection and Spectral Theorem
LECTURE NOTE : LINEAR ALGEBRA
DONG SEUNG KANG
1. Linear Equations and Matrices
Linear Equations. One of the central motivations for linear algebra is solving systems of linear equations. We call the following a system of linear equations:
(*)  a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
     a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
       ...
     a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m ,

where x_1, ..., x_n are unknowns (or variables), the a_ij are coefficients, and the b_j are constants. In particular, when all b_j = 0, we call the system homogeneous.
We note that if the system is homogeneous, the system of linear equations always has at least one solution, x_1 = x_2 = ... = x_n = 0, called the trivial solution.
Our main goal is to determine whether a given system of linear equations has a solution or not. To solve the system of linear equations, we will use Gaussian elimination, whose three main operations (called elementary operations) are:
(1) multiply an equation throughout by a nonzero constant;
(2) interchange two equations;
(3) change an equation by adding a constant multiple of another equation.
Theorem 1.1. Any system of linear equations has either no solution, exactly one solution, or infinitely many solutions.
Problem Set
Problem 1.2. Describe all the possible types of solution set:

(1)  a_1 x + b_1 y = c_1
     a_2 x + b_2 y = c_2 .

(2)  a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1
     a_21 x_1 + a_22 x_2 + a_23 x_3 = b_2
     a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3 .
The system of linear equations (*) can be written in matrix form as

    AX = b ,

where

    ⎛ a_11 ... a_1n ⎞       ⎛ x_1 ⎞           ⎛ b_1 ⎞
A = ⎜  ⋮         ⋮  ⎟ , X = ⎜  ⋮  ⎟ , and b = ⎜  ⋮  ⎟ .
    ⎝ a_m1 ... a_mn ⎠       ⎝ x_n ⎠           ⎝ b_m ⎠
We call this equation a matrix equation. To apply Gaussian elimination, we modify the matrix A as follows:

     ⎛ a_11 ... a_1n | b_1 ⎞
(**) ⎜  ⋮         ⋮  |  ⋮  ⎟ .
     ⎝ a_m1 ... a_mn | b_m ⎠

We call this matrix an augmented matrix. To solve the equation (**), there are operations analogous to the elementary operations.
Definition 1.3. Let A ∈ M_n(F). Any one of the following three operations on the rows of A is called an elementary row operation:
ROP 1: interchanging any two rows of A (ROP 1 is R_i ↔ R_j, for some 1 ≤ i ≠ j ≤ n);
ROP 2: multiplying any row of A by a nonzero constant (ROP 2 is R_i = c·R_i, for some 1 ≤ i ≤ n);
ROP 3: adding any constant multiple of a row of A to another row (ROP 3 is R_i = R_i + c·R_j, for some 1 ≤ i ≠ j ≤ n),
where R_i denotes the ith row of A.
In fact, the elementary operations on a system of linear equations (*) are rephrased as the elementary row operations on the augmented matrix (**). Now, I will introduce a special kind of matrix.
Definition 1.4. A matrix is said to be in row-echelon form if it has the following properties:
(1) the first nonzero entry of each row is 1, called a leading 1;
(2) a row containing only 0s comes after all rows with some nonzero entries;
(3) the leading 1s appear from left to right in successive rows.
Moreover, a matrix in reduced row-echelon form satisfies, in addition to the above three properties,
(4) each column that contains a leading 1 has zeros everywhere else.
By a finite sequence of elementary row operations, any given matrix can be transformed into row-echelon form (or reduced row-echelon form). In particular, the reduced row-echelon form is unique.
The whole process of obtaining the reduced row-echelon form is called Gauss-Jordan elimination.
Definition 1.5. Two augmented matrices (or systems of linear equations) are said to be row-equivalent if one can be transformed into the other by a finite sequence of elementary row operations.
Theorem 1.6. If two systems of linear equations are row-equivalent, then they have the same set of solutions.
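The row-reduction process above can be sketched in code. The following is a minimal illustration (not a numerically robust library routine) that applies the three elementary row operations to reduce an augmented matrix; the example system is chosen here purely for illustration.

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row-echelon form using the three
    elementary row operations (ROP 1-3)."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # find a pivot row and swap it up (ROP 1)
        p = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[p, c]) < tol:
            continue
        A[[r, p]] = A[[p, r]]
        A[r] = A[r] / A[r, c]          # scale pivot row to leading 1 (ROP 2)
        for i in range(rows):          # clear the pivot column (ROP 3)
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
    return A

# augmented matrix of: x + 2y = 5, 3x + 4y = 11  ->  x = 1, y = 2
aug = np.array([[1, 2, 5],
                [3, 4, 11]])
print(rref(aug))
```

The reduced form directly exhibits the solution in its last column.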
Matrices. Historically, Cayley introduced the word matrix in the year 1858. The word means "that within which something originates." In this section we will investigate some properties of such matrices.
Definition 1.7. Let I = {1, 2, ..., m} and J = {1, 2, ..., n} be sets and let F be a field. A function A : I × J → F is called an m × n matrix over the field F.
In general, a matrix A is written in the following form:

    ⎛ a_11 a_12 ... a_1n ⎞
A = ⎜ a_21 a_22 ... a_2n ⎟ = (a_ij) ,
    ⎜  ⋮    ⋮         ⋮  ⎟
    ⎝ a_m1 a_m2 ... a_mn ⎠

where the number a_ij is called the (i, j)-entry of the matrix A.
Definition 1.8. Let A be an m × n matrix over a field F.
(1) The transpose of A is the n × m matrix, denoted by A^t, whose jth column is taken from the jth row of A.
(2) The matrix A is called a symmetric matrix if A^t = A.
(3) The matrix A is called a skew-symmetric matrix if A^t = −A.
(4) If n = m, we call A an n-square matrix.
Theorem 1.9. Every square matrix can be written as a sum of a symmetric and a skew-symmetric matrix.
In particular, when A is a square matrix and has an inverse, the matrix equation (**) can be easily solved: X = A^{-1} b.
Definition 1.10. Let A be an n-square matrix. We call a matrix B an inverse of A if AB = I_n = BA. If a given matrix A has an inverse, A is said to be invertible. If not, it is said to be singular or not invertible.
It is a natural question how to find the inverse of a given matrix.
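One classical answer is to row-reduce the augmented matrix [A | I] until the left block becomes I; the right block is then A^{-1}. The sketch below assumes A is square and invertible, and the example matrix is illustrative only.

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by reducing [A | I] to [I | A^{-1}] with row operations."""
    A = A.astype(float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])
    for c in range(n):
        p = c + np.argmax(np.abs(aug[c:, c]))   # pick a pivot row
        if abs(aug[p, c]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[[c, p]] = aug[[p, c]]               # ROP 1: swap
        aug[c] /= aug[c, c]                     # ROP 2: scale pivot to 1
        for i in range(n):                      # ROP 3: clear the column
            if i != c:
                aug[i] -= aug[i, c] * aug[c]
    return aug[:, n:]

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = inverse_gauss_jordan(A)
print(np.allclose(A @ B, np.eye(2)) and np.allclose(B @ A, np.eye(2)))  # True
```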
2. Vector Spaces
Definition 2.1. A vector space V over a field F consists of a non-empty set on which two operations, called addition and scalar multiplication, are defined so that for each pair x, y ∈ V a unique element x + y ∈ V is defined, and for each a ∈ F and v ∈ V a unique element av ∈ V is defined, such that the following hold:
(1) for all x, y ∈ V, x + y = y + x;
(2) for all x, y, z ∈ V, x + (y + z) = (x + y) + z;
(3) there exists an element in V, denoted by 0, such that x + 0 = x for all x ∈ V;
(4) for each x ∈ V, there exists a unique element y ∈ V such that x + y = 0 (y is denoted by −x);
(5) for each x ∈ V, 1x = x;
(6) for each a, b ∈ F and v ∈ V, (ab)v = a(bv);
(7) for each a ∈ F and v, w ∈ V, a(v + w) = av + aw;
(8) for each a, b ∈ F and v ∈ V, (a + b)v = av + bv.
Definition 2.2. A subset W of a vector space V is called a subspace of V if W is a vector space under the operations of addition and scalar multiplication defined on V.
Note that V and {0} are both subspaces of V.
Theorem 2.3. Let V be a vector space and W a subset of V. Then W is a subspace of V if and only if the following hold:
(1) W is a non-empty set (0 ∈ W);
(2) x + y ∈ W whenever x, y ∈ W;
(3) aw ∈ W whenever a ∈ F and w ∈ W.
Definition 2.4. Let V be a vector space and let v_1, ..., v_n ∈ V and a_1, ..., a_n ∈ F. Then a_1 v_1 + ... + a_n v_n is called a linear combination of v_1, ..., v_n with coefficients a_1, ..., a_n. If S is a non-empty subset of V, we define the span of S, span(S), to be the set of all linear combinations of elements of S. If S = ∅, the null set, we define span(∅) to be {0}. We say that S spans (generates) V if V = span(S).
Theorem 2.5. Let S be a subset of a vector space V. Then span(S) is a subspace of V.
Note that if v_1, ..., v_r are column vectors in F^n, then by (v_1, ..., v_r) ∈ M_{n×r}(F) we mean the n × r matrix having v_i as its ith column. For a_1, ..., a_r ∈ F, we have

                ⎛ a_1 ⎞
(v_1, ..., v_r) ⎜  ⋮  ⎟ = a_1 v_1 + ... + a_r v_r .
                ⎝ a_r ⎠

Let S = {v_1, ..., v_r} ⊆ F^n and let w ∈ F^n. A basic question is whether or not w ∈ span(S). This is the same as asking whether there exist x_1, ..., x_r ∈ F such that w = x_1 v_1 + ... + x_r v_r. This is the case if and only if the matrix equation Ax = w has a solution, where A ∈ M_{n×r}(F) is the matrix whose ith column is v_i and w is written as a column vector.
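The membership question above can be tested numerically over R: form A with the v_i as columns and check whether Ax = w is solvable. A sketch using least squares, with vectors chosen here purely for illustration:

```python
import numpy as np

# Is w in span{v1, v2}?
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
w  = np.array([2.0, 3.0, 5.0])          # happens to equal 2*v1 + 3*v2

A = np.column_stack([v1, v2])
x, _res, _rank, _ = np.linalg.lstsq(A, w, rcond=None)
in_span = np.allclose(A @ x, w)         # solvable <=> residual is zero
print(in_span, x)                       # True, coefficients close to [2, 3]
```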
Theorem 2.6. Let A ∈ M_{m×n}(F), where m < n. Then the equation Ax = 0 has a non-trivial solution.
Definition 2.7. A subset S of a vector space V is called linearly dependent if there exist a finite subset {v_1, ..., v_r} of S and a_1, ..., a_r ∈ F, not all 0, such that

a_1 v_1 + ... + a_r v_r = 0 ;

if S is not linearly dependent, S is said to be linearly independent.
Note that S is linearly independent if and only if the only representation of 0 as a linear combination of elements of S is the trivial representation, i.e., the one in which all coefficients are 0.
The following are easy consequences of the definition.
(1) Any set which contains a linearly dependent set is linearly dependent.
(2) Any subset of a linearly independent set is linearly independent.
(3) Any set which contains the 0 vector is linearly dependent, for 1·0 = 0.
(4) A set S of vectors is linearly independent if and only if each finite subset of S is linearly independent, i.e., if and only if for any distinct vectors v_1, ..., v_r of S, a_1 v_1 + ... + a_r v_r = 0 implies each a_i = 0.
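Over R, a finite set of column vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A small illustration (the vectors are my own example):

```python
import numpy as np

S = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 2.0]])   # third column = first + second

independent = np.linalg.matrix_rank(S) == S.shape[1]
print(independent)   # False: the columns are linearly dependent
```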
Definition 2.8. A basis of a vector space V is a subset S of V such that
(1) S is a linearly independent subset of V;
(2) span(S) = V.
Note that ∅ is a basis for the zero vector space.
Theorem 2.9. A subset S of a vector space V is a basis for V if and only if for each element v ∈ V there exist unique elements v_1, ..., v_r ∈ S and unique a_1, ..., a_r ∈ F such that v = a_1 v_1 + ... + a_r v_r.
Theorem 2.10. Let S be a linearly independent subset of a vector space V, and let v ∈ V. Then S ∪ {v} is linearly independent if and only if v ∉ span(S).
Theorem 2.11. Suppose that the vector space V has a finite spanning set S and I is a linearly independent set of vectors with I ⊆ S, possibly I = ∅. Then there exists a basis B for V with I ⊆ B ⊆ S. In particular, V has a finite basis.
A vector space V having a finite spanning set is called a finite dimensional vector space; if V does not have a finite spanning set, V is said to be infinite dimensional.
Corollary 2.12. Let V be a finite dimensional vector space. Then
(1) every spanning set for V contains a basis for V;
(2) every linearly independent subset of V is contained in a basis for V.
Theorem 2.13. Let V be a finite dimensional vector space, let S be a finite spanning set for V, and let T be a subset of V having more elements than S. Then T is linearly dependent.
Corollary 2.14. Let V be a finite dimensional vector space and let B and C be bases for V. Then B and C are finite sets having the same number of elements.
Definition 2.15. By Corollary 2.14, if V is a finite dimensional vector space, then there is an integer n such that every basis for V has exactly n elements. We call n the dimension of V and say that V is n-dimensional; we write dim(V) = n.
Theorem 2.16. Suppose that V is an n-dimensional vector space. Then
(1) any linearly independent subset of V containing n elements is a basis for V;
(2) any spanning set for V which contains exactly n elements is a basis for V.
Theorem 2.17. Suppose that V is an n-dimensional vector space and W is a subspace of V. Then
(1) W is finite dimensional;
(2) dim(W) ≤ dim(V), and dim(W) = dim(V) if and only if W = V;
(3) any basis for W can be extended to a basis for V.
Problem Set
Problem 2.18. Let V be a vector space over a field F and let S ⊆ V be a nonempty set. Show that span(S) is the intersection of all subspaces of V that contain S.
Problem 2.19. Let W_1 and W_2 be subspaces of a vector space V. Prove that W_1 ∪ W_2 is a subspace of V if and only if W_1 ⊆ W_2 or W_2 ⊆ W_1.
Problem 2.20. Let C be the complex numbers and R the real numbers. Show that
(1) {1, i} is linearly dependent when C is regarded as a C-vector space;
(2) {1, i} is linearly independent when C is regarded as an R-vector space.
Problem 2.21. Let

    ⎛ 1 3 0 ⎞
A = ⎜ 0 2 0 ⎟ ∈ M_3(R) .
    ⎝ 2 1 1 ⎠

Show that there are a positive integer n and real numbers, not all zero, say c_0, c_1, ..., c_n, such that

c_0 I + c_1 A + ... + c_n A^n = 0 .
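Problem 2.21 amounts to showing that A satisfies some polynomial identity; by the Cayley-Hamilton theorem (stated here as background, not proved in this note), the characteristic polynomial always works. For this particular A the characteristic polynomial factors as (x − 1)²(x − 2), which we can verify numerically:

```python
import numpy as np

A = np.array([[1.0, 3.0, 0.0],
              [0.0, 2.0, 0.0],
              [2.0, 1.0, 1.0]])
I = np.eye(3)

# characteristic polynomial of A: (x - 1)^2 (x - 2)
P = (A - I) @ (A - I) @ (A - 2 * I)
print(np.allclose(P, np.zeros((3, 3))))   # True
```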
Problem 2.22. Let W = {(a, b, c, d, e) ∈ R^5 | 3a = d + e, e = a − b + 2c} be a subspace of R^5. Show that (1, 1, 2, −1, 4) ∈ W, and find a basis for W that contains (1, 1, 2, −1, 4). Be sure to explain why your proposed basis is a basis.
Definition 2.23. A vector space V is called the direct sum of W_1 and W_2 if W_1 and W_2 are subspaces of V such that

W_1 ∩ W_2 = {0} and W_1 + W_2 = V .

We denote that V is the direct sum of W_1 and W_2 by writing V = W_1 ⊕ W_2.
Problem 2.24. Let W_1 and W_2 be subspaces of a vector space V. Show that V is the direct sum of W_1 and W_2 if and only if each element in V can be uniquely written as x_1 + x_2, where x_1 ∈ W_1 and x_2 ∈ W_2.
Problem 2.25. Show that F^n is the direct sum of the subspaces
W_1 = {(a_1, ..., a_n) ∈ F^n | a_n = 0} and W_2 = {(a_1, ..., a_n) ∈ F^n | a_1 = ... = a_{n−1} = 0}.
Problem 2.26. A matrix M is called skew-symmetric if M^t = −M. Clearly, a skew-symmetric matrix is square.
(1) Prove that the set W_1 of all skew-symmetric n × n real matrices is a subspace of M_n(R).
(2) Let W_2 be the subspace of M_n(R) consisting of the symmetric n × n matrices. Prove that M_n(R) = W_1 ⊕ W_2.
(3) Find dim W_1 and dim W_2.
(4) Find bases for W_1 and W_2, respectively.
Problem 2.27. Show that a subset W of a vector space V is a subspace of
V if and only if span (W) = W .
Problem 2.28. Show that the set W of all n × n matrices having trace equal to zero is a subspace of M_n(F), and find a basis for W.
Problem 2.29. For a fixed a ∈ R, determine the dimension of the subspace of P_n(R) defined by {f ∈ P_n(R) | f(a) = 0}.
Problem 2.30. Let D_0[0, 1] = {f ∈ D[0, 1] | f(0) = 0}. Show that D_0 = D_0[0, 1] is a subspace of D = D[0, 1], and show that D = D_0 ⊕ W, where W is a simple finite dimensional subspace of D. Do this by finding W and proving the direct sum statement.
3. Linear Transformations and Matrix Representations
Definition 3.1. Let V and W be vector spaces over a field F. A function T : V → W is called a linear transformation from V into W if for all x, y ∈ V and c ∈ F we have

T(x + y) = T(x) + T(y) ,
T(c x) = c T(x) .

We denote the set of all linear transformations from V into W by L(V, W).
Example 3.2. Let A ∈ M_{m×n}(F). View F^m and F^n as column vectors. The function L_A : F^n → F^m defined by L_A(v) = Av is a linear transformation, called the left-multiplication transformation.
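The left-multiplication transformation is just the matrix-vector product. A small R^3 → R^2 illustration (matrix and vectors are my own example), checking the linearity property from Definition 3.1:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])

def L_A(v):
    """The left-multiplication transformation v -> Av."""
    return A @ v

v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, 1.0])
# linearity: L_A(v + 2w) == L_A(v) + 2 L_A(w)
print(np.allclose(L_A(v + 2 * w), L_A(v) + 2 * L_A(w)))   # True
```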
3.1. The Existence of Linear Transformations.
Theorem 3.3. Let V and W be vector spaces over a field F, and suppose V is finite-dimensional with a basis {b_1, ..., b_n}. For any vectors w_1, ..., w_n in W there exists exactly one linear transformation T : V → W such that

T(b_i) = w_i for all i = 1, ..., n.
The following Theorem 3.4 may be viewed as a generalized existence theorem for linear transformations.
Theorem 3.4. Let V and W be vector spaces over a field F, and suppose V is finite-dimensional with a basis B. Let f : B → W be a function. Then there exists a unique T ∈ L(V, W) such that T(x) = f(x) for all x ∈ B.
Corollary 3.5. Let V and W be vector spaces over a field F, and let B be a basis for V. Let T, S ∈ L(V, W). If T(x) = S(x) for all x ∈ B, then T = S.
Hence, to classify linear transformations from V, we need only check them on a basis for V; that is, any linear transformation T on a vector space V is determined by its values on a basis B for V.
3.2. The Dimension Theorem.
Definition 3.6. Let V and W be vector spaces over a field F, and let T ∈ L(V, W).
(1) The null space (or kernel) of T is defined to be N(T) = {x ∈ V | T(x) = 0}.
(2) The range (or image) of T is defined to be R(T) = {T(x) | x ∈ V}.
Theorem 3.7. Let V and W be vector spaces over a field F, and let T ∈ L(V, W). Then
(1) N(T) is a subspace of V.
(2) R(T) is a subspace of W.
(3) T is one-to-one if and only if N(T) = {0}.
(4) If B is a basis for V, then R(T) = span({T(x) | x ∈ B}).
Theorem 3.8. Let V and W be vector spaces over a field F with V finite-dimensional, and let T ∈ L(V, W). Then

dim(V) = dim N(T) + dim R(T) .
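For T = L_A this identity can be checked numerically: dim(V) is the number of columns of A, dim R(T) is rank(A), and dim N(T) is the difference. The 3×4 matrix below is an illustrative example with a dependent row:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 2.0],
              [1.0, 2.0, 1.0, 3.0]])   # row 3 = row 1 + row 2

dim_V   = A.shape[1]                    # dim of the domain F^4
rank    = np.linalg.matrix_rank(A)      # dim R(L_A)
nullity = dim_V - rank                  # dim N(L_A)
print(dim_V == rank + nullity, rank, nullity)   # True 2 2
```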
Definition 3.9. If N(T) and R(T) are finite-dimensional, then we define the nullity of T, denoted nullity(T), and the rank of T, denoted rank(T), to be the dimensions of N(T) and R(T), respectively.
Theorem 3.10. Let V and W be vector spaces over a field F, and let T ∈ L(V, W). Then the following statements are equivalent:
(1) T is both one-to-one and onto;
(2) there exists S ∈ L(W, V) such that ST = I_V and TS = I_W.
If the linear transformation T satisfies one of the conditions in Theorem 3.10, we say that T is invertible. The S is the inverse of T; it is denoted by S = T^{-1}.
The following Corollary is a special case of Theorems 3.10 and 3.8.
Corollary 3.11. Let V and W be n-dimensional vector spaces over a field F, and let T ∈ L(V, W). Then the following statements are equivalent (note that dim V = n = dim W):
(1) T is invertible;
(2) N(T) = {0};
(3) R(T) = W;
(4) there exists S ∈ L(W, V) such that ST = I_V;
(5) there exists S ∈ L(W, V) such that TS = I_W.
Theorem 3.12. Every n-dimensional vector space over a field F is isomorphic to F^n.
Example 3.13. Let F be a field and V the vector space of all polynomial functions from F into F. Let D be the differentiation linear transformation in L(V), and let T be the linear transformation in L(V) defined by T(f)(x) = x f(x). Then DT ≠ TD, but DT − TD = id_V.
3.3. Matrix Representation. We will define the matrix representation of a given linear transformation. To express the matrix, we have to use a special kind of basis for V, called an ordered basis, which is a basis endowed with a specific order. Now let B = {b_1, ..., b_n} be an ordered basis for V.
Definition 3.14. For v ∈ V, there exist unique scalars a_1, ..., a_n such that

v = Σ_{i=1}^{n} a_i b_i .

We define the coordinate vector of v relative to B, denoted [v]_B, by

        ⎛ a_1 ⎞
[v]_B = ⎜  ⋮  ⎟ .
        ⎝ a_n ⎠

Let V be an n-dimensional vector space over a field F with an ordered basis B, and let T ∈ L(V). We define the matrix representation of T for the ordered basis B, denoted [T]_B, to be

([T(b_1)]_B, ..., [T(b_n)]_B) .
Theorem 3.15. Let V be an n-dimensional vector space over a field F with an ordered basis B. Let φ : V → F^n be defined by φ(x) = [x]_B for all x ∈ V. Then φ is an isomorphism.
Example 3.16. By Example 3.2, if A ∈ M_n(F) and B is the standard ordered basis, then [L_A]_B = A.
Let V and W be vector spaces over a field F. Then L(V, W) can be considered as an F-vector space with addition and scalar multiplication defined as follows:

(S + T)(x) = S(x) + T(x) and (cT)(x) = c T(x) ,

for all S, T ∈ L(V, W), c ∈ F, and x ∈ V.
Theorem 3.17. Let V be an n-dimensional vector space over a field F with an ordered basis B. Let Φ : L(V) → M_n(F) be defined by Φ(T) = [T]_B for all T ∈ L(V). Then Φ is an isomorphism.
The following Theorem is similar to Corollary 3.11. You should know the
relation between them.
Theorem 3.18. Let A ∈ M_n(F) be a matrix. Then the following are equivalent:
(1) A is invertible;
(2) if Ax = 0 for x ∈ F^n, then x = 0 (that is, the equation Ax = 0 has only the trivial solution);
(3) there exists B ∈ M_n(F) such that AB = I_n;
(4) there exists B ∈ M_n(F) such that BA = I_n;
(5) det(A) ≠ 0;
(6) the columns of A are linearly independent;
(7) A is row equivalent to I_n;
(8) A is a product of elementary matrices;
(9) Ax = b has a solution for every b ∈ F^n;
(10) rank(A) = n;
(11) the linear transformation L_A : F^n → F^n via L_A(x) = Ax is injective;
(12) the linear transformation L_A : F^n → F^n via L_A(x) = Ax is surjective;
(13) zero is not an eigenvalue of A.
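Several of these equivalent conditions can be checked numerically for a concrete invertible matrix over R (the matrix is my own example; this is a spot check, not a proof of the equivalences):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

nonzero_det = abs(np.linalg.det(A)) > 1e-12          # condition (5)
full_rank   = np.linalg.matrix_rank(A) == 2          # condition (10)
eigs        = np.linalg.eigvals(A)
no_zero_eig = bool(np.all(np.abs(eigs) > 1e-12))     # condition (13)
print(nonzero_det, full_rank, no_zero_eig)           # True True True
```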
The following theorem follows from Corollary 3.11 and the previous Theorem 3.18.
Theorem 3.19. Let V be an n-dimensional vector space over a field F with an ordered basis B, and let T ∈ L(V). Then T is invertible if and only if Φ(T) = [T]_B is invertible in M_n(F).
Definition 3.20. Let A, B ∈ M_n(F) be matrices. We say that B is similar to A if there is an invertible matrix Q ∈ M_n(F) such that B = Q^{-1} A Q.
Theorem 3.21. Let B = {b_1, ..., b_n} and C = {c_1, ..., c_n} be ordered bases for V, and let T ∈ L(V). Let P ∈ M_n(F) be the matrix whose jth column is [c_j]_B. Then
(1) P is invertible;
(2) [x]_C = P^{-1} [x]_B for all x ∈ V;
(3) [T]_C = P^{-1} [T]_B P.
Corollary 3.22. Similar matrices have the same determinant.
Let V and W be finite dimensional vector spaces with ordered bases B = {b_1, ..., b_n} and C = {c_1, ..., c_m}, respectively, and let T ∈ L(V, W). Then the matrix representation of T for the ordered bases B and C is

[T] = ([T(b_1)], ..., [T(b_n)]) ,

where

T(b_i) = Σ_{j=1}^{m} x_ji c_j for all 1 ≤ i ≤ n;

that is, the ith column of [T] is

           ⎛ x_1i ⎞
[T(b_i)] = ⎜ x_2i ⎟
           ⎜  ⋮   ⎟
           ⎝ x_mi ⎠ .

Then R(T) = span{T(b_1), ..., T(b_n)}. Hence R(T) is the column space of [T], and thus dim R(T) = rank of [T].
Note that, by the Dimension Theorem, we may conclude that

dim N(T) = dim V − rank of [T] .
Example 3.23. Let V = R^3 and W = P_2(R) be vector spaces with B and A as standard bases for V and W, and let B_v = {(0, 1, 1), (1, 0, 1), (1, 1, 0)} and A_w = {x + x^2, 1 + x^2, 1 + x} be bases for V and W, respectively. Define a linear transformation T : R^3 → P_2(R) via

T(a, b, c) = c + (b + c)x + (a + b + c)x^2 .
Then the following diagram commutes:

               T
    (V, B) ---------> (W, A)          [T]^A_B
       ^                 |
  id_v |                 | id_w
       |                 v
    (V, B_v) -------> (W, A_w)        [T]^{A_w}_{B_v}
               T

where [id_v]^B_{B_v} is a transition matrix (coordinate change matrix). Compute all the matrices in the diagram above, and check that

[T]^{A_w}_{B_v} = [id_w]^{A_w}_A [T]^A_B [id_v]^B_{B_v} .
To compute [T]^A_B:

T(1, 0, 0) = x^2 , T(0, 1, 0) = x + x^2 , T(0, 0, 1) = 1 + x + x^2 .

Hence

          ⎛ 0 0 1 ⎞
[T]^A_B = ⎜ 0 1 1 ⎟ .
          ⎝ 1 1 1 ⎠
To compute the transition matrix [id_v]^B_{B_v}:

id_v(0, 1, 1) = 0(1, 0, 0) + 1(0, 1, 0) + 1(0, 0, 1)
id_v(1, 0, 1) = 1(1, 0, 0) + 0(0, 1, 0) + 1(0, 0, 1)
id_v(1, 1, 0) = 1(1, 0, 0) + 1(0, 1, 0) + 0(0, 0, 1) .

Hence we have

                 ⎛ 0 1 1 ⎞
[id_v]^B_{B_v} = ⎜ 1 0 1 ⎟ .
                 ⎝ 1 1 0 ⎠
Similarly, we have

                 ⎛ −1/2  1/2  1/2 ⎞
[id_w]^{A_w}_A = ⎜  1/2 −1/2  1/2 ⎟ .
                 ⎝  1/2  1/2 −1/2 ⎠
Then [T]^{A_w}_{B_v} can be computed by two methods. First, directly:

T(0, 1, 1) = 1 + 2x + 2x^2 = (3/2)(x + x^2) + (1/2)(1 + x^2) + (1/2)(1 + x)
T(1, 0, 1) = 1 + x + 2x^2 = 1(x + x^2) + 1(1 + x^2) + 0(1 + x)
T(1, 1, 0) = x + 2x^2 = (3/2)(x + x^2) + (1/2)(1 + x^2) − (1/2)(1 + x) .
Also, we may use

[T]^{A_w}_{B_v} = [id_w]^{A_w}_A [T]^A_B [id_v]^B_{B_v} .

Thus we have

                  ⎛ 3/2  1  3/2 ⎞
[T]^{A_w}_{B_v} = ⎜ 1/2  1  1/2 ⎟ .
                  ⎝ 1/2  0 −1/2 ⎠
Note that [T]^A_B and [T]^{A_w}_{B_v} are similar, because [id_w]^{A_w}_A = ([id_v]^B_{B_v})^{-1}. Also, both matrices have determinant −1. Use Theorem 3.12, and compare with Theorem 3.21.
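The computation in Example 3.23 can be verified numerically: the product formula reproduces the directly computed matrix, and both representations have the same determinant, as similar matrices must.

```python
import numpy as np

# matrices from Example 3.23
T_AB = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
P    = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # [id_v]^B_{B_v}
Q    = np.linalg.inv(P)                                          # [id_w]^{A_w}_A

T_AwBv = Q @ T_AB @ P
expected = np.array([[1.5, 1.0, 1.5],
                     [0.5, 1.0, 0.5],
                     [0.5, 0.0, -0.5]])
print(np.allclose(T_AwBv, expected))                        # True
print(np.isclose(np.linalg.det(T_AB), -1.0),
      np.isclose(np.linalg.det(T_AwBv), -1.0))              # True True
```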
Dual Spaces. Let V be a vector space over a field F, and let L(V, F) = {f : V → F | f is a linear transformation}, with the following operations: for f, g ∈ L(V, F) and c ∈ F,

(f + g)(x) = f(x) + g(x) and (cf)(x) = c f(x) ,

for all x ∈ V. Then V* forms a vector space over the field F.

Definition 3.24. For a vector space V over F, we define the dual space of V to be the vector space L(V, F), denoted by V*. Note that

dim(V*) = dim(L(V, F)) = dim(V) dim(F) = dim(V) .

Theorem 3.25. Suppose that V is a finite-dimensional vector space with the ordered basis B = {x_1, ..., x_n}. Let f_i (1 ≤ i ≤ n) be the ith coordinate function with respect to B, and let B* = {f_1, ..., f_n}. Then B* is an ordered basis for V*, and for any f ∈ V* we have

f = Σ_{i=1}^{n} f(x_i) f_i .
Definition 3.26. Using the notation of Theorem 3.25, we call the ordered basis B* = {f_1, ..., f_n} of V* that satisfies

f_i(x_j) = δ_ij (1 ≤ i, j ≤ n)

the dual basis of B.
Theorem 3.27. Let V and W be finite-dimensional vector spaces over a field F with ordered bases B and A, respectively. For any linear transformation T : V → W, the mapping T* : W* → V* defined by

T*(g) = g ∘ T

for all g ∈ W* is a linear transformation with the property that

[T*]^{B*}_{A*} = ([T]^A_B)^t .
Note that we have the diagram

               T
    (V, B) ---------> (W, A)          [T]^A_B

               T*
    (V*, B*) <------- (W*, A*)        [T*]^{B*}_{A*}

For any g ∈ W*, the composite

V --T--> W --g--> F

is linear. Hence we may define T*(g) = g ∘ T.
Problem Set
Problem 3.28. Let T : R^3 → R be a linear transformation. Show that there exist scalars a, b, and c such that T(x, y, z) = ax + by + cz for all (x, y, z) ∈ R^3. Can you generalize this result to T : R^n → R?
Problem 3.29. Let V = M_2(R), let

A = ⎛ 1 2 ⎞
    ⎝ 3 4 ⎠ ,

and let T ∈ L(V) be defined by T(B) = AB − BA for all B ∈ V.
(1) Show that T is a linear transformation.
(2) Pick any basis B you wish for V and compute [T]_B for that basis.
Problem 3.30. Let v = (5, 2, 3, 1) ∈ F^4 and let B = {b_1, b_2, b_3, b_4} be the standard basis for F^4.
(1) Does there exist T ∈ L(F^4) such that

T(b_1) = v, T(v) = b_2, T(b_2) = b_3, T(b_3) = b_1 ?

(2) Let T be as in (1). Determine [T]_B.
Problem 3.31. Find A ∈ M_6(R) such that A^3 = 5I and neither A nor A^2 is a diagonal matrix.
Problem 3.32. Let V be a finite-dimensional vector space and let T ∈ L(V). Show that

V = Im T ⊕ Ker T if and only if R(T) = R(T^2) .
Problem 3.33. Assume that there exists T ∈ L(V) such that N(T) = R(T). Prove that dim V is even.
Problem 3.34. Assume that dim V is even. Prove that there exists a T ∈ L(V) such that N(T) = R(T).
Problem 3.35. Let V be a vector space of dimension n over a field F. If T ∈ L(V), prove that the following statements are equivalent:
(1) N(T) = R(T);
(2) T^2 = 0, T ≠ 0, and rank(T) = n/2.
Problem 3.36. Let r ≤ n and let U and W be subspaces of F^n with dim U = r and dim W = n − r. Prove that there exists T ∈ L(F^n) such that N(T) = W and R(T) = U.
Problem 3.37. Let A be a 3 × 2 matrix and let B be a 2 × 3 matrix. Prove that C = AB is not invertible. Generalize this or give a counterexample (Hint: use Theorem 3.18).
Problem 3.38. Let T : R^3 → R^2 and U : R^2 → R^3 be linear transformations. Show that UT is not invertible.
Problem 3.39. Let V be a finite-dimensional vector space and let T ∈ L(V). Establish the chains

V ⊇ Im T ⊇ Im T^2 ⊇ ... ⊇ Im T^n ⊇ Im T^{n+1} ⊇ ...
{0} ⊆ Ker T ⊆ Ker T^2 ⊆ ... ⊆ Ker T^n ⊆ Ker T^{n+1} ⊆ ...

Show that there is a positive integer p such that Im T^p = Im T^{p+1}, and deduce that

Im T^p = Im T^{p+k} and Ker T^p = Ker T^{p+k} for all k ≥ 1.

Show also that

V = Im T^p ⊕ Ker T^p .
Problem 3.40. Let V be a finite-dimensional vector space and let T ∈ L(V). Suppose there is U ∈ L(V) such that TU = id_V. Show that T is invertible and U = T^{-1}. Give an example which shows that this is false when V is not finite dimensional.
Problem 3.41. Consider R as a vector space over the field Q of rational numbers. Let B be a basis for R over Q. Determine whether B is finite, countable, or uncountable, and prove your assertion.
Problem 3.42. (1) Let f : R → R be a continuous function such that f(x + y) = f(x) + f(y) for all x, y ∈ R. Prove that there exists c ∈ R such that f(x) = cx for all x ∈ R.
(2) Show that there exists a discontinuous function f : R → R such that f(x + y) = f(x) + f(y) for all x, y ∈ R. (Hint: view R as a Q-vector space. Then {1} is linearly independent, so there exists a basis B for R as a Q-vector space with 1 ∈ B. Any function f_0 : B → R can be extended to a function f : R → R as follows: if r ∈ R, then r = a_1 b_1 + ... + a_t b_t, where a_i ∈ Q and b_i ∈ B. Define f(r) = a_1 f_0(b_1) + ... + a_t f_0(b_t). Choose f_0 appropriately so that the resulting f cannot be continuous.)
Problem 3.43. For each positive integer n, there exists a complex number ζ_n = e^{2πi/n} with the property that ζ_n^n = 1 but ζ_n^m ≠ 1 for all 1 ≤ m < n. View C as an R-vector space and let n ≥ 3. Define T : C → C by T(z) = ζ_n z for all z ∈ C.
(1) Prove that T ∈ L(C).
(2) Prove that B = {1, ζ_n} is a basis for C.
(3) Let A = [T]_B. Prove that A^n = I but A^m ≠ I for all 1 ≤ m < n.
Problem 3.44. A diagram of finite-dimensional vector spaces and linear transformations of the form

V_1 --T_1--> V_2 --T_2--> ... --T_n--> V_{n+1}

is called an exact sequence if (a) T_1 is injective, (b) T_n is surjective, and (c) R(T_i) = N(T_{i+1}) for all i = 1, ..., n − 1. Prove that, for each exact sequence,

Σ_{i=1}^{n+1} (−1)^i dim V_i = 0 .
4. Matrices
This section is devoted to two related objectives:
(1) the study of certain rank-preserving operations on matrices;
(2) the application of these operations and the theory of linear transformations to the solution of systems of linear equations.
Definition 4.1. Let A ∈ M_n(F). Any one of the following three operations on the rows of A is called an elementary row operation:
ROP 1: interchanging any two rows of A (ROP 1 is R_i ↔ R_j, for some 1 ≤ i ≠ j ≤ n);
ROP 2: multiplying any row of A by a nonzero constant (ROP 2 is R_i = c·R_i, for some 1 ≤ i ≤ n);
ROP 3: adding any constant multiple of a row of A to another row (ROP 3 is R_i = R_i + c·R_j, for some 1 ≤ i ≠ j ≤ n),
where R_i denotes the ith row of A.
Definition 4.2. If A ∈ M_{m×n}(F), we define the rank of A, denoted rank(A), to be the rank of the linear transformation L_A : F^n → F^m defined by L_A(x) = Ax for all x ∈ F^n.
Theorem 4.3. Let A ∈ M_{m×n}(F) be a matrix over a field F. Then
(1) dim Row(A) = dim Col(A).
(2) Let R = (r_ij) be the row-reduced echelon form of A over F. Suppose that R ≠ 0, that the first s (1 ≤ s ≤ m) rows are not zero rows, and that the leading coefficient 1 of the ith row (i = 1, ..., s) lies in the k_i th column (these are the pivot columns, and hence k_1 < k_2 < ... < k_s). Let a_1, a_2, ..., a_m be the row vectors of R, let b_1, b_2, ..., b_n be the column vectors of A, and let

S = {a_1, a_2, ..., a_s} and P = {b_{k_1}, b_{k_2}, ..., b_{k_s}} .

Then S is a basis for Row(A) and P is a basis for Col(A).
We recall that the image of A is Col(A), and hence P is a basis for the range of A.
Example 4.4. Let

    ⎛ 1 2 2 3 ⎞
A = ⎜ 1 2 3 5 ⎟ .
    ⎝ 2 4 5 8 ⎠

Then

          ⎛ 1 2 0 −1 ⎞
rref(A) = ⎜ 0 0 1  2 ⎟ .
          ⎝ 0 0 0  0 ⎠

Then a_1 = (1, 2, 0, −1) and a_2 = (0, 0, 1, 2), and the pivot columns are the 1st and 3rd columns of rref(A), so b_1 = (1, 1, 2) and b_3 = (2, 3, 5). Hence {a_1, a_2} is a basis for Row(A) and {b_1, b_3} is a basis for Col(A).
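The conclusions of Example 4.4 can be spot-checked with numpy: A has rank 2, column 2 is a multiple of the pivot column 1, and column 4 is the combination of the pivot columns read off from rref(A).

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0, 3.0],
              [1.0, 2.0, 3.0, 5.0],
              [2.0, 4.0, 5.0, 8.0]])

print(np.linalg.matrix_rank(A))            # 2
b1, b2, b3, b4 = A.T                       # the columns of A
print(np.allclose(b2, 2 * b1))             # True: b2 = 2 b1
print(np.allclose(b4, -b1 + 2 * b3))       # True: b4 = -b1 + 2 b3
```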
Theorem 4.5. Let A ∈ M_{m×n}(F) be a matrix over a field F. Then
(1) if P ∈ M_m(F), then Row(PA) ⊆ Row(A);
(2) in particular, if P is invertible, then Row(PA) equals Row(A);
(3) if Q ∈ M_n(F), then Col(AQ) ⊆ Col(A);
(4) in particular, if Q is invertible, then Col(AQ) equals Col(A).
Systems of Linear Equations.
Definition 4.6. A system Ax = b of m linear equations in n unknowns is said to be homogeneous if b = 0. Otherwise the system is said to be nonhomogeneous.
Note that any homogeneous system has at least one solution, namely the zero vector. This solution is called the trivial solution. Also, the solution set of a homogeneous system is the same as the null space of the linear transformation T : F^n → F^m defined by T(x) = Ax, where x ∈ F^n.
Remark 4.7. Any system of linear equations has one of only three types of solution set:
type 1: there is exactly one solution;
type 2: there are infinitely many solutions;
type 3: there is no solution.
Since a homogeneous system always has a solution, the third type cannot occur; that is, the solution type of a homogeneous system is either type 1 or type 2.
Remark 4.8. The systems Ax = b and Rx = b′ have the same solutions, where [R | b′] is obtained from [A | b] by the row operations of Definition 4.1. Note that R and A are row equivalent.
Remark 4.9. The only case in which there is no solution is when [R | b′] contains a row of the form

( 0 0 ... 0 | r ) with r ≠ 0 ,

that is, R contains a zero row whose corresponding constant r in b′ is nonzero.
Determinants. The following remark gives us the properties and applications of determinants.
Remark 4.10. (1) For any A ∈ M_n(F), det(A) = −det(ROP1(A)), where ROP 1 is R_i ↔ R_j, for some 1 ≤ i ≠ j ≤ n.
(2) For any A ∈ M_n(F), det(A) = c^{-1} det(ROP2(A)), where ROP 2 is R_i = c·R_i, for some 1 ≤ i ≤ n.
(3) For any A ∈ M_n(F), det(A) = det(ROP3(A)), where ROP 3 is R_i = R_i + c·R_j, for some 1 ≤ i ≠ j ≤ n.
(4) For any A, B ∈ M_n(F), det(AB) = det(A) det(B).
(5) For any A ∈ M_n(F), det(A) = det(A^t).
(6) For any A ∈ M_n(F) and k ∈ F, det(kA) = k^n det(A).
(7) Let A, B ∈ M_n(F) be such that AB = −BA. Prove that if n is odd and F is not a field of characteristic two, then A or B is not invertible.
(8) Prove that if n is odd, then there does not exist an A ∈ M_n(R) such that A^2 = −I_n.
(9) Let M ∈ M_n(C). If M is skew-symmetric and n is odd, then M is not invertible. What if n is even?
(10) Prove that an upper triangular n × n matrix is invertible if and only if all its diagonal entries are nonzero.
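Properties (4)-(6) can be spot-checked numerically on random real matrices (this is evidence, not a proof; the matrices and scalar are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
k = 3.0

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # (4): True
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))  # (5): True
print(np.isclose(np.linalg.det(k * A),
                 k ** 4 * np.linalg.det(A)))             # (6): True, n = 4
```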
Problem Set
Problem 4.11. Prove that E is an elementary matrix if and only if E^t is an elementary matrix.
Problem 4.12. If the real matrix

⎛ a 1 a 0 0 0 ⎞
⎜ 0 b 1 b 0 0 ⎟
⎜ 0 0 c 1 c 0 ⎟
⎝ 0 0 0 d 1 d ⎠

has rank r, show that
(1) r > 2;
(2) r = 3 if and only if a = d = 0 and bc = 1;
(3) r = 4 in all other cases.
Problem 4.13. Prove that for any m × n matrix A, rank(A) = 0 if and only if A is the zero matrix.
Problem 4.14. Let A be a m n matrix with rank m. Prove that there
exists an n m matrix B such that AB = I
m
.
Problem 4.15. Let B be a n m matrix with rank m. Prove that there
exists an mn matrix A such that AB = I
m
.
Problem 4.16. Suppose A M
56
(F) such that (1, 2, 3, 4, 5, 6) and
(1, 1, 1, 1, 1, 1) are solutions to Ax = 0 . Must the reduced echelon form of A
contain a row of zeros ? Be sure to fully justify your answer.
Problem 4.17. Let F be a eld, and let A, B M
n
(F) .
(1) Show that tr(AB) = tr(BA) .
(2) Show that AB BA = I
n
is impossible.
(3) Let A M
mn
(R) . Show that A = 0 if and only if tr(A
t
A) = 0 .
Problem 4.18. Let A M
n
(C) be a skew-symmetric matrix. Suppose that
n is odd. Then det A = 0 .
Problem 4.19. Let A M
n
(F) be called orthogonal if AA
t
= I . If A is
orthogonal, show that det A = 1 . Give an example of an orthogonal matrix
for which det A = 1 .
LINEAR ALGEBRA 21
Problem 4.20. (1) Let A ∈ M_n(F). Show that there are at most n distinct scalars c ∈ F such that det(cI − A) = 0.
(2) Let A, B ∈ M_n(F). Show that if A is invertible, then there are at most n scalars c ∈ F for which the matrix cA + B is not invertible.
Problem 4.21. Prove or give a counterexample to the following statement: If the coefficient matrix of a system of m linear equations in n unknowns has rank m, then the system has a solution.
Problem 4.22. Let A be an n×n matrix. Prove that A is row-equivalent to the n×n identity matrix if and only if the system of equations AX = 0 has only the trivial solution.
Problem 4.23. Let A and B be 2×2 matrices such that AB = I_2. Prove that BA = I_2.
Problem 4.24. Determine all solutions to the following infinite system of linear equations in the infinitely many unknowns x_1, x_2, … :
x_1 + x_3 + x_5 = 0
x_2 + x_4 + x_6 = 0
x_3 + x_5 + x_7 = 0
⋮
How many free parameters are required?
5. Polynomial Rings
Let F[x] be the set of polynomials in x over a field F, and let f(x), g(x) ∈ F[x]. If g(x) ≠ 0, deg g(x) is the highest power of x in g(x). We call g(x) monic if 1 is the coefficient of the highest power of x in g(x).
[ Division Algorithm ] Let f(x), g(x) ∈ F[x], with g(x) ≠ 0. Then there exist unique q(x), r(x) ∈ F[x] such that
f(x) = g(x)q(x) + r(x),
where either r(x) = 0 or deg r(x) < deg g(x).
Proposition 5.1. If deg f(x) = n, then f(x) has at most n roots in F, counting multiplicity.
Definition 5.2. Let I ⊆ F[x]. I is called an ideal of F[x] if for every f(x), g(x) ∈ I and h(x) ∈ F[x],
f(x) + g(x) ∈ I and h(x)f(x) ∈ I.
Example 5.3. I = { f(x) ∈ F[x] | f(3) = 0 } is an ideal of F[x].
From the example above, we may conclude that the set of all polynomials f such that f(T) = 0 is an ideal, where T ∈ L(V).
Proof. For f(x), g(x) ∈ I and h(x) ∈ F[x], we have to check whether f(x) + g(x) ∈ I and h(x)f(x) ∈ I. Indeed, (f + g)(3) = f(3) + g(3) = 0 + 0 = 0 and (hf)(3) = h(3)f(3) = h(3) · 0 = 0, as desired.
Theorem 5.4. Let I ≠ {0} be an ideal of F[x]. Then there exists a unique monic polynomial f(x) ∈ I such that I = { g(x)f(x) | g(x) ∈ F[x] }. We call the polynomial f(x) a generator of the ideal I.
Let T ∈ L(V). Hence, again, the set of all polynomials f such that f(T) = 0 is an ideal I, and we can find a generator for I, say m(x). We call the monic generator m(x) of I = { f(x) ∈ F[x] | f(T) = 0 } the minimal polynomial of the linear transformation T.
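The division algorithm can be carried out concretely. As a small sketch (using Python's NumPy polynomial helpers, an assumption of ours rather than part of the note), we divide f(x) = x³ + 2x + 1 by g(x) = x² + 1 and check f = gq + r with deg r < deg g:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Coefficients listed in increasing degree: f(x) = 1 + 2x + x^3, g(x) = 1 + x^2
f = [1.0, 2.0, 0.0, 1.0]
g = [1.0, 0.0, 1.0]

q, r = P.polydiv(f, g)  # quotient and remainder of the division algorithm

# q(x) = x and r(x) = 1 + x, with deg r = 1 < 2 = deg g
assert np.allclose(q, [0.0, 1.0])
assert np.allclose(r, [1.0, 1.0])
# f = g q + r
assert np.allclose(P.polyadd(P.polymul(g, q), r), f)
```

Over a general field F one would implement the same long-division loop exactly; `polydiv` works with floating-point coefficients.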
6. Diagonalizations
For a given linear transformation T on a finite-dimensional vector space V, we seek answers to the following questions:
1. Does there exist an ordered basis B for V such that [T]_B is a diagonal matrix?
2. If such a basis exists, how can it be found?
Let V be a finite dimensional vector space over a field F, and let T ∈ L(V). Then we may have
[T]_B = diag(c_1, …, c_n)
in some basis B = {b_1, …, b_n} for V. For each j, we write
T(b_j) = c_j b_j.
Definition 6.1. Let v ∈ V, v ≠ 0. v is called an eigenvector of T if T(v) = λv for some λ ∈ F; λ is called the eigenvalue of T corresponding to the eigenvector v.
Similar to the previous definition, we may define an eigenvector and eigenvalue of a matrix M by Mv = λv, where v ≠ 0.
Example 6.2. Let T ∈ L(C²) be defined by T(x, y) = (−y, x). Then i and −i are eigenvalues of T, and (i, 1) and (−i, 1) are eigenvectors of T associated with i and −i, respectively. Let B = {(i, 1), (−i, 1)}. What is [T]_B?
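In the standard basis of C², the T of Example 6.2 has matrix [[0, −1], [1, 0]], since T(1, 0) = (0, 1) and T(0, 1) = (−1, 0). A quick numerical check, assuming NumPy (not part of the note itself):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalues are i and -i
assert np.allclose(sorted(eigenvalues, key=lambda z: z.imag), [-1j, 1j])
# Each column v of `eigenvectors` satisfies A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that although A is real, its eigenvalues are genuinely complex, which is why the example is posed over C² rather than R².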
Theorem 6.3. T ∈ L(V) can be written in the diagonal form diag(c_1, …, c_n) in the basis B = {b_1, …, b_n} if and only if each b_j is an eigenvector of T with associated eigenvalue c_j.
Example 6.4. Let T ∈ L(R³) be defined by T(x, y, z) = (0, x, y). Since T³ = 0, every eigenvalue of T is 0. Hence if there were a basis {b_1, b_2, b_3} for R³ consisting of eigenvectors of T, then T(b_1) = T(b_2) = T(b_3) = 0, i.e., T = 0. This is a contradiction. Hence we may conclude that R³ has no basis consisting of eigenvectors of T.
Now, we will investigate which conditions are required for T to be similar to a diagonal matrix.
Proposition 6.5. Let dim V < ∞, let T ∈ L(V), and let B and C be ordered bases for V. Then
det([T]_B) = det([T]_C).
Now, we may define det(T) to be det([T]_B), where B is any ordered basis for V.
Theorem 6.6. Let dim V < ∞, let T ∈ L(V), and let λ ∈ F. The following statements are equivalent:
(1) λ is an eigenvalue of T;
(2) N(T − λI_V) ≠ {0};
(3) det([T − λI_V]_B) = 0 for some basis B for V.
Moreover, if λ is an eigenvalue of T and v ∈ V, v ≠ 0, then v is an eigenvector corresponding to λ if and only if v ∈ N(T − λI_V).
Hence we may consider the nonzero vectors of N(T) to be the eigenvectors of T corresponding to the eigenvalue 0 of T.
It is a natural next step to ask how to find eigenvectors and eigenvalues of T.
Definition 6.7. Let dim V = n and T ∈ L(V). Then c_T(x) = det(T − xI_V) ∈ F[x] is called the characteristic polynomial of T.
Proposition 6.8. c_T(x) is a polynomial of degree n with leading coefficient (−1)^n. λ ∈ F is an eigenvalue of T if and only if λ is a root of c_T(x).
Proof. Suppose λ ∈ F is an eigenvalue of T. Then there exists a non-zero vector α ∈ V such that T(α) = λα. Hence (T − λI)(x) = 0 has a non-trivial solution, namely α. Thus (T − λI) is singular, i.e., det(T − λI) = 0. Conversely, suppose λ is a root of c_T(x). Then the system of linear equations (T − λI)(x_1, …, x_n)^t = 0 has a non-trivial solution, i.e., λ ∈ F is an eigenvalue of T.
Hence the characteristic polynomial c_T(x) is in the ideal I = { f ∈ F[x] | f(T) = 0 }. We asserted that we can find the generator (the monic minimal polynomial) of I, say m_T(x) ∈ I; that is, c_T(x) = g(x)m_T(x), for some g(x) ∈ F[x]. Note that m_T(T) = 0, since m_T(x) ∈ I. The characteristic and minimal polynomials for T have the same roots, except for multiplicities.
Definition 6.9. Let dim V = n and T ∈ L(V), and let λ be an eigenvalue of T. The eigenspace of T corresponding to the eigenvalue λ is E_λ(T) = { v ∈ V | T(v) = λv }.
Definition 6.10. Let V be a vector space, T ∈ L(V). A subspace W of V is called T-invariant if T(w) ∈ W for all w ∈ W.
Lemma 6.11. N(T), R(T), and E_λ(T) are T-invariant subspaces of V.
Now, we want to know some properties which determine which linear transformations (matrices) are diagonalizable.
Definition 6.12. Let V be an F-vector space, and let W_1, …, W_r be subspaces of V. V is the direct sum of the W_i if there exist bases B_i for W_i, i = 1, …, r, such that
B_i ∩ B_j = ∅ for i ≠ j and ∪_{i=1}^r B_i is a basis for V.
Then we write V = W_1 ⊕ ⋯ ⊕ W_r.
Proposition 6.13. Suppose that V = W_1 ⊕ ⋯ ⊕ W_r. Then each element of V has a unique representation in the form w_1 + ⋯ + w_r, where w_i ∈ W_i, for i = 1, …, r.
Lemma 6.14. Let λ_1, …, λ_k be distinct eigenvalues of T. For each i = 1, …, k, let v_i ∈ E_{λ_i}(T), and suppose that v_1 + ⋯ + v_k = 0. Then v_i = 0 for all i = 1, …, k.
Theorem 6.15. Let λ_1, …, λ_k be distinct eigenvalues of T ∈ L(V). For each i = 1, …, k, let S_i be a linearly independent subset of E_{λ_i}(T). Then S = ∪_{i=1}^k S_i is linearly independent.
Theorem 6.16. Let λ_1, …, λ_k be distinct eigenvalues of T ∈ L(V). Let W be the subspace spanned by all eigenvectors of T. Then
W = E_{λ_1}(T) ⊕ ⋯ ⊕ E_{λ_k}(T).
Theorem 6.17. Let dim V < ∞ and let T ∈ L(V). Assume that c_T(x) splits over F and let λ_1, …, λ_k be the distinct eigenvalues of T. Then the following statements are equivalent:
(1) T is diagonalizable;
(2) V = E_{λ_1}(T) ⊕ ⋯ ⊕ E_{λ_k}(T);
(3) for each i = 1, …, k, the geometric multiplicity of λ_i equals the algebraic multiplicity of λ_i;
(4) the minimal polynomial m_T(x) has distinct roots.
Corollary 6.18. Let dim V = n and let T ∈ L(V). If T has n distinct eigenvalues, then T is diagonalizable (note that the converse is not true).
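Numerically, diagonalizability of a matrix with distinct eigenvalues shows up as a factorization A = PDP⁻¹, where the columns of P are eigenvectors and D is diagonal. A minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # distinct eigenvalues 2 and 3, hence diagonalizable

eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigenvalues)

# A = P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```

For a non-diagonalizable matrix such as [[2, 1], [0, 2]] the eigenvector matrix returned is (numerically) singular, reflecting the failure of condition (3) of Theorem 6.17.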
Remark 6.19. Let V be a vector space with a basis B. We may write A = [T]_B ∈ M_n(F). Then we have the following chain of equivalences:
AX = 0 has a non-trivial solution ⟺ the null space of T ≠ {0} ⟺ zero is an eigenvalue of A ⟺ A is not invertible, that is, det A = 0 ⟺ T is not one-to-one.
Also, we may say: all eigenvalues of A are nonzero ⟺ AX = 0 has only the trivial solution ⟺ A is invertible, that is, det A ≠ 0.
Problem Set
Problem 6.20. Find the eigenvalues and eigenvectors of
( 1 1 1 1 )
( 1 1 1 1 )
( 1 1 1 1 )
( 1 1 1 1 ).
Problem 6.21. Let D : P_3(R) → P_3(R) be the differentiation operator defined by D(f(x)) = f′(x) for f(x) ∈ P_3(R). Find all eigenvalues and eigenvectors of D and D².
Problem 6.22. For any square matrix A, show that A and A^t have the same characteristic polynomial.
Problem 6.23. Let T ∈ L(M_n(R)) be defined by T(A) = A^t, the transpose of A.
(1) Show that T is a linear transformation.
(2) Show that ±1 are the only eigenvalues of T.
(3) Describe the eigenvectors corresponding to each eigenvalue of T.
(4) Find an ordered basis B for M_2(R) such that [T]_B is a diagonal matrix.
(5) Find an ordered basis B for M_n(R) such that [T]_B is a diagonal matrix for n > 2.
Problem 6.24. Let A, B ∈ M_n(C).
(1) Show that if B is invertible, then there exists a scalar c ∈ C such that A + cB is not invertible. Hint: Check its determinant.
(2) Find 2×2 matrices A and B such that A is invertible, B ≠ 0, but A + cB is invertible for all c ∈ C.
Problem 6.25. Let λ_1, …, λ_n be the eigenvalues of an n×n matrix A. Then
(1) A is invertible if and only if λ_i ≠ 0 for all i.
(2) If A is invertible, then the inverse A^{-1} has eigenvalues λ_1^{-1}, …, λ_n^{-1}. (How about eigenvectors for A and A^{-1}?)
(3) Let A, B ∈ M_n(F), where F is a field. Show that if (I − AB) is invertible, then (I − BA) is invertible and
(I − BA)^{-1} = I + B(I − AB)^{-1}A.
(4) For all A, B ∈ M_n(R), show that AB and BA have the same eigenvalues.
(5) Let V = M_n(F), A ∈ M_n(F), and let T ∈ L(V) be defined by T(B) = AB. Show that the minimal polynomial for T is the minimal polynomial for A.
(6) Let A, B ∈ M_n(F), where F is a field. Show that AB and BA have the same eigenvalues. Do they have the same characteristic polynomial? Do they have the same minimal polynomial?
(7) Show that A and A^t have the same eigenvalues. How about eigenvectors for A and A^t?
(8) If A^m = 0 for some m > 0, then all eigenvalues of A are zero.
(9) If A² = I, then the sum of the eigenvalues of A is an integer.
(10) Let A be a real skew-symmetric matrix with eigenvalue λ. Show that the real part of λ is zero, and that λ̄ is also an eigenvalue.
Problem 6.26. Let A ∈ M_n(F). Suppose A has two distinct eigenvalues λ_1 and λ_2 with dim(E_{λ_1}) = n − 1. Show that A is diagonalizable.
Problem 6.27. If A ∈ M_3(F) has eigenvalues 1, 2, and 3, what are the eigenvectors of B = (A − I_3)(A − 2I_3)(A − 3I_3)?
Problem 6.28. Let f(x) = det(A − xI) be the characteristic polynomial of A. Evaluate f(A) for
A = ( 1 2 2 )
    ( 1 2 1 )
    ( 1 1 4 ).
(In fact, this is the Cayley–Hamilton Theorem.)
Problem 6.29. Let
A = ( 0 0 0 0 )
    ( a 0 0 0 )
    ( 0 b 0 0 )
    ( 0 0 c 0 ).
Find the conditions on a, b, and c ∈ R such that A is diagonalizable.
Problem 6.30. Let P : R² → R² be defined by P(x, y) = (x, 0). Show that P is a linear transformation. What is the minimal polynomial for P?
Problem 6.31. Every matrix A such that A² = A is similar to a diagonal matrix.
Problem 6.32. Compute A^{2004} b for
A = ( 1 2 1 )
    ( 0 5 2 )
    ( 0 6 2 ), b = ( 2 )
                   ( 4 )
                   ( 7 ).
Problem 6.33. Let V = M_2(F) and let T ∈ L(V) be defined by T(A) = A^t for A ∈ V.
(1) Determine a maximal linearly independent set of eigenvectors of T.
(2) Determine whether or not T is diagonalizable. If so, determine a diagonal matrix D such that D = [T]_B for a suitable basis B; if not, prove that T is not diagonalizable.
Matrix Limits. In this part we will investigate the limit of a sequence of
powers A, A
2
, , where A is a square matrix with complex entries. Such
sequences and their limits have practical applications in the life and natural
sciences.
Definition 6.34. Let L, A_1, A_2, … ∈ M_{n×p}(C). The sequence A_1, A_2, … is said to converge to the matrix L if
lim_{m→∞} (A_m)_{ij} = L_{ij}
for all 1 ≤ i ≤ n and 1 ≤ j ≤ p.
Note that
e^x = 1 + x + x²/2! + ⋯ + x^n/n! + ⋯ .
Definition 6.35. For A ∈ M_n(C), define e^A = lim_{m→∞} B_m, where
B_m = I + A + A²/2! + ⋯ + A^m/m! .
Thus we have
e^A = I + A + A²/2! + ⋯ + A^n/n! + ⋯ .
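The partial sums B_m of Definition 6.35 can be computed directly. The sketch below (NumPy assumed; `expm_series` is our own helper, not a standard routine) compares the truncated series against the exact value of e^A for a diagonal A, where e^A is diagonal with entries e^{a_ii}:

```python
import numpy as np

def expm_series(A, terms=30):
    """Partial sum B_m = I + A + A^2/2! + ... + A^terms/terms! of e^A."""
    B = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k   # term is now A^k / k!
        B = B + term
    return B

# For D = diag(d_1, ..., d_n), the series gives e^D = diag(e^{d_1}, ..., e^{d_n}).
D = np.diag([1.0, 2.0])
assert np.allclose(expm_series(D), np.diag(np.exp([1.0, 2.0])))
```

Accumulating `term = term @ A / k` avoids recomputing powers and factorials from scratch at every step, which also keeps the intermediate numbers from overflowing.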
Problem 6.36. (1) Let L, A_1, A_2, … ∈ M_{n×p}(C). If lim_{m→∞} A_m = L, then lim_{m→∞} (A_m)^t = L^t.
(2) Let P^{-1}AP = D be a diagonal matrix. Prove that e^A = P e^D P^{-1}.
(3) Find A, B ∈ M_2(R) such that e^A e^B ≠ e^{A+B}.
7. Jordan Canonical Forms
In the previous section we studied the properties that determine which linear transformations (matrices) are diagonalizable. Unfortunately, only some of them are diagonalizable. Hence in this section we will show that even when they are not diagonalizable, they can be reduced to considerably simpler forms (we call these forms Jordan canonical forms).
Example 7.1. Let T : C⁸ → C⁸ be a linear transformation with an ordered basis (we call this basis a Jordan canonical basis) B = {b_1, …, b_8} for C⁸ such that
[T]_B =
( 2 1 0 0 0 0 0 0 )
( 0 2 1 0 0 0 0 0 )
( 0 0 2 0 0 0 0 0 )
( 0 0 0 2 0 0 0 0 )
( 0 0 0 0 3 1 0 0 )
( 0 0 0 0 0 3 0 0 )
( 0 0 0 0 0 0 0 1 )
( 0 0 0 0 0 0 0 0 )
is a Jordan canonical form of T. Then the characteristic polynomial of T is c_T(x) = (x − 2)⁴(x − 3)²x². Also, we know that
T(b_1) = 2b_1, T(b_4) = 2b_4, T(b_5) = 3b_5, T(b_7) = 0b_7.
Hence b_1, b_4 are eigenvectors for λ = 2, b_5 is an eigenvector for λ = 3, and b_7 is an eigenvector for λ = 0. In particular, the minimal polynomial of T is m_T(x) = (x − 2)³(x − 3)²x².
For comparison, let S : C⁸ → C⁸ be a linear transformation with an ordered basis (a set of eigenvectors) C = {b_1, …, b_8} for C⁸ such that
[S]_C = diag(2, 2, 2, 2, 3, 3, 0, 0)
is a Jordan canonical form (a diagonal matrix) different from [T]_B. Then the characteristic polynomial of S is c_S(x) = (x − 2)⁴(x − 3)²x². Also, the minimal polynomial of S is m_S(x) = (x − 2)(x − 3)x.
Note that T and S have the same characteristic polynomial, but they have different Jordan canonical forms. Also,
T(b_2) = b_1 + 2b_2 ⟺ (T − 2I)(b_2) = b_1.
Similarly,
(T − 2I)(b_3) = b_2, (T − 2I)²(b_3) = b_1.
Note that (T − 2I)³(b_j) = 0 for all j = 1, 2, 3. Also, T(b_4) = 2b_4. Hence b_1 = (T − 2I)²(b_3) and b_2 = (T − 2I)(b_3). Similarly, we have b_5 = (T − 3I)(b_6) and b_7 = (T − 0I)(b_8).
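The claims of Example 7.1 about m_T can be checked numerically. The sketch below (NumPy assumed; `jb` and `block_diag` are our own helper names) builds [T]_B from its four Jordan blocks and verifies that (x − 2)³(x − 3)²x² annihilates it while lowering the exponent of (x − 2) does not:

```python
import numpy as np

def jb(r, lam):
    """The r x r Jordan block with eigenvalue lam."""
    return lam * np.eye(r) + np.eye(r, k=1)

def block_diag(*blocks):
    """Assemble square blocks along the diagonal of a zero matrix."""
    n = sum(b.shape[0] for b in blocks)
    M = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        M[i:i + k, i:i + k] = b
        i += k
    return M

T = block_diag(jb(3, 2.0), jb(1, 2.0), jb(2, 3.0), jb(2, 0.0))
I = np.eye(8)
mp = np.linalg.matrix_power

# m_T(x) = (x-2)^3 (x-3)^2 x^2 annihilates T ...
assert np.allclose(mp(T - 2 * I, 3) @ mp(T - 3 * I, 2) @ mp(T, 2), 0)
# ... but (x-2)^2 (x-3)^2 x^2 does not, since the largest block for 2 has size 3
assert not np.allclose(mp(T - 2 * I, 2) @ mp(T - 3 * I, 2) @ mp(T, 2), 0)
```

This is exactly the block-size criterion stated later in Theorem 7.14.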
Now, we generalize this as follows.
Definition 7.2. The Jordan block of size r corresponding to λ, J(r, λ), is the r×r matrix such that
J(r, λ)_{i,i} = λ for i = 1, …, r,
J(r, λ)_{i,i+1} = 1 for i = 1, …, r − 1, and
J(r, λ)_{i,j} = 0 for all other i, j.
We say that a matrix is in Jordan form if it is block diagonal with each block being a Jordan block.
Definition 7.3. Let V be an F-vector space, and let T ∈ L(V). Let λ ∈ F and let v ∈ V, v ≠ 0.
(1) v is called a generalized eigenvector of T corresponding to λ if (T − λI_V)^p(v) = 0 for some positive integer p.
(2) The set
K_λ(T) = { v ∈ V | (T − λI_V)^p(v) = 0 for some positive integer p }
is called the generalized eigenspace of T corresponding to λ.
Remark 7.4. Let v ∈ K_λ(T), and let p > 0 be minimal with (T − λI_V)^p(v) = 0. Set v_i = (T − λI_V)^{p−i}(v) for i = 1, …, p. Then v = v_p, and v_i = (T − λI_V)(v_{i+1}) for i = 1, …, p − 1. We call the set
{v_1, …, v_p} = { (T − λI_V)^{p−1}(v_p), …, (T − λI_V)(v_p), v_p }
a cycle of generalized eigenvectors of T corresponding to λ. v_1 is called the initial vector, v_p the end vector, and p is called the length of the cycle. Note that v_1 is an eigenvector of T corresponding to the eigenvalue λ.
Proposition 7.5. K_λ(T) is a T-invariant subspace of V, and K_λ(T) ⊇ E_λ(T).
Theorem 7.6. Let T be a linear transformation on a finite-dimensional vector space V such that the characteristic polynomial of T splits, and let λ_1, …, λ_k be the distinct eigenvalues of T with corresponding multiplicities m_1, …, m_k. For each j = 1, …, k, let B_j be an ordered basis for K_{λ_j}(T). Then
(1) B_i ∩ B_j = ∅ for i ≠ j.
(2) B = B_1 ∪ ⋯ ∪ B_k is an ordered basis for V.
(3) dim K_{λ_j}(T) = m_j for j = 1, …, k.
Theorem 7.7. Let T be a linear transformation on a finite-dimensional vector space V such that the characteristic polynomial of T splits, and suppose that B is a basis for V such that B is a disjoint union of cycles of generalized eigenvectors of T. Then
(1) For each cycle of generalized eigenvectors γ contained in B, W = span(γ) is T-invariant, and [T|_W]_γ is a Jordan block.
(2) B is a Jordan canonical basis for V.
Theorem 7.8. A cycle of generalized eigenvectors is linearly independent.
Remark 7.9. Let B_W = {v_1, …, v_p} be a cycle of generalized eigenvectors of T corresponding to λ, and let W = span{v_1, …, v_p}. Then W is T-invariant. Indeed,
(T − λI_V)(v_j) = T(v_j) − λv_j ,
so that
T(v_j) = λv_j + v_{j−1} if j ≠ 1, and T(v_j) = λv_j if j = 1.
Let T|_W be the linear transformation T restricted to W. Then we have
[T|_W]_{B_W} = [ [T|_W(v_1)]_{B_W}, …, [T|_W(v_p)]_{B_W} ]
= ( λ 1 0 ⋯ 0 )
  ( 0 λ 1 ⋯ 0 )
  ( ⋮     ⋱ ⋮ )
  ( 0 0 0 ⋱ 1 )
  ( 0 0 0 ⋯ λ ),
and hence the matrix is the Jordan block of size p corresponding to λ, denoted by J(p, λ).
Jordan Canonical Form Theorem. Let V be an F-vector space with dim V < ∞, and let T ∈ L(V). Suppose c_T(x) splits over F. Then there exists a basis B for V which is a disjoint union of cycles of generalized eigenvectors for T. We call such a basis a Jordan canonical basis for T. Hence we have
[T]_B = diag( J(r_1, λ_1), …, J(r_s, λ_s) ),
where each J(r_i, λ_i) is an r_i × r_i Jordan block corresponding to λ_i (the λ_i not necessarily distinct).
We will now show that a Jordan canonical form exists.
Lemma 7.10. Let V be an n-dimensional vector space over the field F = C of complex numbers, and let T ∈ L(V). Then there exists an (n − 1)-dimensional subspace W of V such that T(W) ⊆ W.
Proof. Consider the linear transformation T^t : V* → V*, where V* is the dual space of V. Since F = C, T^t has an eigenvector l ∈ V* (l ≠ 0) such that T^t(l) = λl, for some λ ∈ C. Since l ∈ V*, l : V → C is a non-zero linear transformation. Hence let W = Ker l. Then dim W = n − 1 (because dim C = 1 and the Dimension Theorem). Now, we claim that T(W) ⊆ W. For any w ∈ W = Ker l,
l(T(w)) = (T^t l)(w) = (λl)(w) = λ(l(w)) = 0.
Thus T(w) ∈ Ker l = W, as claimed.
Theorem 7.11 (Existence of Jordan Canonical Forms). Let V be an n-dimensional vector space over the field F = C of complex numbers, and let T ∈ L(V). Then there exists a basis B for V such that
[T]_B = diag( J(p, λ_1), J(q, λ_2), …, J(s, λ_r) ),
where p + q + ⋯ + s = n and λ_1, λ_2, …, λ_r are eigenvalues of T.
Proof. We prove it by induction on dim V = n. Let n = 1. Then for any α ∈ V, T(α) = λα for some λ ∈ C, so [T]_B = (λ), the 1×1 matrix, where B = {α}, α ≠ 0. Suppose the statement holds in dimension n − 1.
Now, we show the case dim V = n. By the previous Lemma, there exists an (n − 1)-dimensional subspace W of V such that T(W) ⊆ W. Suppose B_0 = {e_1, …, e_p, f_1, …, f_q, …, h_1, …, h_s} is a basis for W such that
T(e_j) = e_{j−1} + λ_1 e_j for j ≠ 1, and T(e_1) = λ_1 e_1,
T(f_j) = f_{j−1} + λ_2 f_j for j ≠ 1, and T(f_1) = λ_2 f_1,
and
T(h_j) = h_{j−1} + λ_r h_j for j ≠ 1, and T(h_1) = λ_r h_1.
Complete B_0 to a basis B for V by adding a vector e. Then
T(e) = Σ_{j=1}^p α_j e_j + Σ_{j=1}^q β_j f_j + ⋯ + Σ_{j=1}^s γ_j h_j + λe.
We may assume λ = 0; otherwise replace T by T − λI. Let
e′ = e − Σ_{j=1}^p x_j e_j − Σ_{j=1}^q y_j f_j − ⋯ − Σ_{j=1}^s z_j h_j.
We want to find the x_j, y_j, …, z_j that make T(e′) as simple as possible. Then
T(e′) = T(e) − Σ_{j=1}^p x_j T(e_j) − Σ_{j=1}^q y_j T(f_j) − ⋯ − Σ_{j=1}^s z_j T(h_j)
= (α_1 − λ_1 x_1 − x_2)e_1 + (α_2 − λ_1 x_2 − x_3)e_2 + ⋯ + (α_{p−1} − λ_1 x_{p−1} − x_p)e_{p−1} + (α_p − λ_1 x_p)e_p
+ (β_1 − λ_2 y_1 − y_2)f_1 + ⋯ + (β_q − λ_2 y_q)f_q + ⋯ .
Now, we consider two cases:
(1) λ_i ≠ 0 for some i. Say i = 1. Then we may take the x_j to make T(e′) as simple as possible, namely x_p = α_p/λ_1, x_{p−1} = (α_{p−1} − x_p)/λ_1, and so on. Then T(e′) ∈ span{f_1, …, f_q, …, h_1, …, h_s, e′}. Now, let V_0 = span{f_1, …, f_q, …, h_1, …, h_s, e′}, a subspace of V. Then T(V_0) ⊆ V_0 and dim V_0 ≤ n − 1. By the induction step, there exists a Jordan canonical basis B′ for V_0. Hence {e_1, …, e_p} ∪ B′ is a Jordan canonical basis for V.
(2) Suppose all the eigenvalues are zero. Then
T(e′) = T(e) − Σ_{j=1}^p x_j T(e_j) − Σ_{j=1}^q y_j T(f_j) − ⋯ − Σ_{j=1}^s z_j T(h_j)
= (α_1 − x_2)e_1 + (α_2 − x_3)e_2 + ⋯ + (α_{p−1} − x_p)e_{p−1} + α_p e_p + (β_1 − y_2)f_1 + ⋯ + β_q f_q + ⋯ .
Then we may choose the coefficients such that
T(e′) = α_p e_p + β_q f_q + ⋯ + γ_s h_s.
If α_p = 0, then T(e′) ∈ span{f_1, …, f_q, …, h_1, …, h_s}, and the argument of case (1) applies.
So we may assume α_p ≠ 0, β_q ≠ 0, …, γ_s ≠ 0, and p ≥ q ≥ ⋯ ≥ s. Then we have
T(e′) = α_p e_p + β_q f_q + ⋯ + γ_s h_s,
T²(e′) = α_p T(e_p) + β_q T(f_q) + ⋯ + γ_s T(h_s) = α_p e_{p−1} + β_q f_{q−1} + ⋯ + γ_s h_{s−1},
⋮
T^{p−1}(e′) = α_p e_2 + ⋯ ,
T^p(e′) = α_p e_1 + ⋯ .
It is easy to check that
B = { T^p(e′), T^{p−1}(e′), …, T(e′), e′, f_1, …, f_q, …, h_1, …, h_s }
is a basis for V. Thus we have a Jordan canonical basis B for V.
Remark 7.12. (1) The number of Jordan blocks for the eigenvalue λ equals dim E_λ(T).
(2) V is the direct sum of the K_{λ_i}(T).
(3) If B and B′ are two Jordan canonical bases for T, then B and B′ have exactly the same number of cycles of generalized eigenvectors corresponding to λ, and those cycles have the same lengths.
(4) In particular, [T]_B = [T]_{B′} up to permutation of the Jordan blocks.
Minimal polynomials.
Theorem 7.13. Let V be a finite dimensional vector space over F and let T ∈ L(V). Let I = { p(x) ∈ F[x] | p(T) = 0 }. Then I contains a unique monic polynomial m_T(x) of minimal degree among all non-zero polynomials in I.
Moreover, I = { g(x)m_T(x) | g(x) ∈ F[x] }.
Recall that m_T(x) is called the minimal polynomial of T.
Theorem 7.14. Let V be a finite dimensional vector space over F and let T ∈ L(V). Assume that c_T(x) splits over F and let λ_1, …, λ_k be the distinct eigenvalues of T. For each i, let m_i denote the size of the largest Jordan block corresponding to λ_i in the Jordan canonical form of T. Then
m_T(x) = (x − λ_1)^{m_1} ⋯ (x − λ_k)^{m_k}.
Remark 7.15. Let V be a finite dimensional vector space over F and let T ∈ L(V). Assume that c_T(x) splits over F and that, for each eigenvalue λ, the Jordan canonical form of T has t Jordan blocks corresponding to λ, say of sizes r_1, …, r_t. Then
c_T(x) = ∏_λ (x − λ)^{r_1 + ⋯ + r_t}
and
m_T(x) = ∏_λ (x − λ)^{max{r_1, …, r_t}} .
Corollary 7.16. Let V be a finite dimensional vector space over F and let T ∈ L(V). Then λ ∈ F is an eigenvalue of T if and only if λ is a root of m_T(x); that is, c_T(x) and m_T(x) have exactly the same roots.
Proof. Let m_T(x) = (x − λ_1)^{e_1}(x − λ_2)^{e_2} ⋯ (x − λ_r)^{e_r}. We claim that all λ_j are eigenvalues of T. It is enough to show that λ_r is an eigenvalue of T; the others are similar. Let
S = (T − λ_1 I)^{e_1}(T − λ_2 I)^{e_2} ⋯ (T − λ_r I)^{e_r − 1},
a linear transformation on V. Note that S ≠ 0; otherwise the polynomial s(x) = m_T(x)/(x − λ_r) would satisfy s(T) = 0, a contradiction to the minimality of m_T(x). Let W = S(V) = Im S, which is nonzero since V ≠ Ker S. We show that T(w) = λ_r w for w ∈ W. Let w = S(α), for some α ∈ V. Since
(T − λ_r I)S = m_T(T) = 0,
applying this to α gives
0 = (T − λ_r I)S(α) = (T − λ_r I)(w).
Hence T(w) = λ_r w, as desired; that is, λ_r is an eigenvalue of T. Conversely, let λ be an eigenvalue of T; that is, T(α) = λα for some α ≠ 0. Note that (T − λ_j I)(α) = (λ − λ_j)(α) for each j. Hence we have
0 = m_T(T)(α) = (λ − λ_1)^{e_1}(λ − λ_2)^{e_2} ⋯ (λ − λ_r)^{e_r} (α).
Since α ≠ 0, we get (λ − λ_1)^{e_1}(λ − λ_2)^{e_2} ⋯ (λ − λ_r)^{e_r} = 0. Hence there exists 1 ≤ j ≤ r such that λ = λ_j. Thus λ is a root of m_T(x).
Corollary 7.17. Let V be a finite dimensional vector space over F and let T ∈ L(V). Then the following statements are equivalent:
(1) T is diagonalizable;
(2) m_T(x) has no multiple roots;
(3) there exists f(x) ∈ F[x] which splits over F and has no multiple roots such that f(T) = 0.
Corollary 7.18 (Cayley–Hamilton Theorem). c_T(x) is a multiple of m_T(x).
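The Cayley–Hamilton theorem can be observed numerically: substituting a matrix into its own characteristic polynomial yields the zero matrix. A sketch assuming NumPy, whose `np.poly` returns the coefficients of det(xI − A), highest degree first:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

coeffs = np.poly(A)  # characteristic polynomial in the convention det(xI - A)
n = A.shape[0]

# Evaluate c_A(A) by Horner's scheme: result = (((I*c0) A + c1 I) A + ...)
result = np.zeros((n, n))
for c in coeffs:
    result = result @ A + c * np.eye(n)

assert np.allclose(result, 0)  # Cayley-Hamilton: c_A(A) = 0
```

The sign convention (det(xI − A) versus the note's det(T − xI)) does not matter here, since the two polynomials differ only by the factor (−1)^n.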
Example 7.19. Let T ∈ L(M_n(C)) be defined by T(A) = A − A^t, where A^t is the transpose of A. We will determine the Jordan canonical form of T.
(1) We have to find some polynomial p(x) ∈ { g(x) ∈ C[x] | g(T) = 0 }.
(2) Since T(A) = A − A^t, we know that T²(A) = T(A − A^t) = (A − A^t) − (A − A^t)^t = 2(A − A^t) = 2T(A); that is, T² − 2T = 0. Hence p(x) = x² − 2x.
(3) Now, we have to find the candidates for the minimal polynomial in { g(x) ∈ C[x] | g(T) = 0 }.
(4) The candidates are the monic divisors of p(x):
x, x − 2, x(x − 2) = p(x).
(5) By Corollary 7.17, we may conclude that T is diagonalizable, and either [T] = 0, [T] = 2I, or m_T(x) = p(x).
(6) Now we check the possibilities for T. The first two cases are impossible, because if A is symmetric (A^t = A), then T(A) = 0, and if A is skew-symmetric (A^t = −A), then T(A) = 2A.
(7) Hence m_T(x) = x(x − 2). Recall that dim(Sym) = n(n+1)/2 and dim(Skew-Sym) = n(n−1)/2, where dim(M_n(C)) = n².
(8) Thus the Jordan canonical form of T is the diagonal matrix
( 0 · I_{n(n+1)/2}   0               )
( 0                  2 · I_{n(n−1)/2} ),
with the eigenvalue 0 of multiplicity n(n+1)/2 and the eigenvalue 2 of multiplicity n(n−1)/2 (all Jordan blocks have size 1, since m_T has distinct roots).
Example 7.20. Let dim V = 4 and let B = {b_1, b_2, b_3, b_4} be a basis for V. Let T ∈ L(V) be defined by
T(b_1) = 0, T(b_2) = 5b_1, T(b_3) = −5b_1, and T(b_4) = 2b_2 + 5b_3.
We will determine the Jordan canonical form for T.
(1) From the defining relations we have
T²(b_1) = 0, T²(b_2) = 0, T²(b_3) = 0, and T²(b_4) = 2T(b_2) + 5T(b_3) = −15b_1.
Again, we have
T³(b_i) = 0 for all i = 1, …, 4.
Note that T² ≠ 0 and T³ = 0.
(2) Since c_T(x) and m_T(x) have exactly the same roots, we may conclude that c_T(x) = x⁴ (because dim V = 4) and m_T(x) = x³.
(3) By Remark 7.15 or Theorem 7.14, the Jordan canonical form for T is
( J(3, 0)   0       )
( 0         J(1, 0) ).
Example 7.21. Let T be the linear transformation on P_2(R) defined by T(f) = −f − f′. Let B = {1, x, x²} be the standard basis for P_2(R). Then
[T]_B = ( −1 −1  0 )
        (  0 −1 −2 )
        (  0  0 −1 ),
which has characteristic polynomial c_T(x) = (x + 1)³ up to sign. Hence λ = −1 is the only eigenvalue of T, and so K_λ = P_2(R). Also, dim E_λ = 1. Now, we want to compute a Jordan canonical basis; i.e., we have to find a vector f ∈ P_2(R) such that {(T + I)²(f), (T + I)(f), f} is a basis for P_2(R), noting that (T + I)³(f) = 0. Taking f(x) = x², the set A = {(T + I)²(x²), (T + I)(x²), x²} is a Jordan canonical basis for T such that
[T]_A = ( −1  1  0 )
        (  0 −1  1 )
        (  0  0 −1 ).
Note that the minimal polynomial is m_T(x) = (x + 1)³. Hence the maximal size of a Jordan block with respect to λ = −1 is 3.
Problem Set
Problem 7.22. Let dim V = 4 and let B = {b_1, b_2, b_3, b_4} be a basis for V. Let T ∈ L(V) be defined by
T(b_1) = 0, T(b_2) = 5b_1, T(b_3) = −5b_1, T(b_4) = 2b_2 + 5b_3.
(1) Determine c_T(x) and m_T(x) without determining a Jordan canonical basis for V.
(2) Determine the Jordan canonical form for T without determining a Jordan canonical basis for V.
(3) Determine a Jordan canonical basis for V.
Problem 7.23. Let A be the complex matrix
( 2 0 0 0 0 0 )
( 1 2 0 0 0 0 )
( 1 0 2 0 0 0 )
( 0 1 0 2 0 0 )
( 0 0 0 0 2 0 )
( 0 0 0 0 1 1 ).
Compute the Jordan canonical form for A.
Problem 7.24. If A is a complex 5×5 matrix with characteristic polynomial c_A(x) = (x − 2)³(x + 7)² and minimal polynomial m_A(x) = (x − 2)²(x + 7), what is the Jordan canonical form for A?
Problem 7.25. How many possible Jordan canonical forms are there for a 6×6 complex matrix with characteristic polynomial (x + 2)⁴(x − 1)²?
?
Problem 7.26. Suppose that T L(V ) . Let c
T
(x) = x
10
and N(T) =
R(T) . Determine the Jordan canonical form of T .
Problem 7.27. Determine the Jordan canonical form of the linear trans-
formation T L(C
n
) dened by
T(e
i
) =
n

k=1
e
k
for all i = 1, , n,
where e
1
, , e
n
is the standard basis for C
n
.
Problem 7.28. Let F = C, let T L(V ) , and let W be a Tinvariant
subspace of V . Let T
|
W
L(W) denote the restriction of T to W .
(1) Prove that m
T
|
W
(x) divides m
T
(x) in C[x] .
(2) Suppose that T is diagonalizable. Prove that T
|
W
is also diagonaliz-
able.
Problem 7.29. Let A M
n
(F) be such that A
k
= 0 for some positive
integer k . Prove that A
n
= 0 .
Problem 7.30. Classify up to similarity all 3 3 complex matrices A such
that A
3
= I .
Problem 7.31. Classify up to similarity all nn complex matrices A such
that A
n
= I .
Problem 7.32. Suppose A, B M
n
(C) have the same characteristic and
minimal polynomials. Can we conclude that A and B are similar ? (a) if
n = 3 ? (b) if n = 4 ?
Problem 7.33. Describe, up to similarity, all 3 3 complex matrices A
such that A
2
+ 2A3I = 0 .
Problem 7.34. Let V be a finite dimensional vector space, and let T ∈ L(V). If T² = T, prove that there exists a basis B for V such that for every v ∈ B we have either T(v) = 0 or T(v) = v.
Problem 7.35. Let V be an n-dimensional R-vector space and let T ∈ L(V) have minimal polynomial x² + 1. Show that n must be even.
8. Inner Product Spaces
Definition 8.1. Let V be an n-dimensional inner product space over F. Then B = {b_1, …, b_n} is called an orthogonal basis for V if
⟨b_i, b_j⟩ = 0 if i ≠ j.
Moreover, B = {b_1, …, b_n} is called an orthonormal basis for V if
⟨b_i, b_j⟩ = 0 if i ≠ j and ⟨b_i, b_i⟩ = 1 for all i.
Theorem 8.2 (Gram–Schmidt Orthogonalization Process). Let V be a finite dimensional inner product space over F. Then V has an orthogonal (moreover, an orthonormal) basis.
Remark 8.3 (Gram–Schmidt orthogonalization process). Let V be a finite dimensional inner product space over F, and let B = {b_1, …, b_n} be a basis for V. Then we define
a_k = b_k − Σ_{j=1}^{k−1} ( ⟨b_k, a_j⟩ / ||a_j||² ) a_j for 1 ≤ k ≤ n.
Then {a_1, …, a_n} is an orthogonal basis for V. Hence
{ a_1/||a_1||, …, a_n/||a_n|| }
is an orthonormal basis for V, where ||α||² = ⟨α, α⟩.
Example 8.4. Let {(1, 1, 0), (2, 0, 1), (2, 2, 1)} be a basis for R³. Then {(1, 1, 0), (1, −1, 1), (−1/3, 1/3, 2/3)} is an orthogonal basis for R³.
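The recursion of Remark 8.3 translates directly into code. The following sketch (NumPy assumed; `gram_schmidt` is our own name for the helper) reproduces Example 8.4:

```python
import numpy as np

def gram_schmidt(basis):
    """Orthogonalize a list of vectors by the formula of the Gram-Schmidt process."""
    ortho = []
    for b in basis:
        a = np.array(b, dtype=float)
        # Subtract the projection of b onto each earlier orthogonal vector
        for u in ortho:
            a = a - (np.dot(b, u) / np.dot(u, u)) * u
        ortho.append(a)
    return ortho

a1, a2, a3 = gram_schmidt([(1, 1, 0), (2, 0, 1), (2, 2, 1)])
assert np.allclose(a1, [1, 1, 0])
assert np.allclose(a2, [1, -1, 1])
assert np.allclose(a3, [-1/3, 1/3, 2/3])
# The result is pairwise orthogonal
assert np.isclose(np.dot(a1, a2), 0) and np.isclose(np.dot(a2, a3), 0)
```

Dividing each a_k by its norm afterward would give the orthonormal basis of Remark 8.3.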
Theorem 8.5. Let V be a finite dimensional inner product space over F, and let B = {b_1, …, b_n} be an orthonormal basis for V. Then for any α ∈ V, we have
α = Σ_{i=1}^n ⟨α, b_i⟩ b_i.
Corollary 8.6. Let V be a finite dimensional inner product space over F, let B = {b_1, …, b_n} be an orthonormal basis for V, and let T ∈ L(V). Then for each j = 1, …, n,
T(b_j) = Σ_{i=1}^n ⟨T(b_j), b_i⟩ b_i.
Theorem 8.7. Let V be a finite dimensional inner product space over F and let g ∈ L(V, F). Then there exists a unique y ∈ V such that
g(x) = ⟨x, y⟩ for all x ∈ V.
Theorem 8.8. Let V be a finite dimensional inner product space over F and let T ∈ L(V). Then there exists a unique T* ∈ L(V) such that
⟨T(x), y⟩ = ⟨x, T*(y)⟩ for all x, y ∈ V.
We call T* the adjoint of T.
Note that for A ∈ M_n(F),
A* = Ā^t,
the conjugate transpose of A.
Theorem 8.9. Let V be a finite dimensional inner product space over F, let T ∈ L(V), and let B be an orthonormal basis for V. Then
[T*]_B = [T]_B* .
Note that this is false if B is not an orthonormal basis for V.
Remark 8.10. Let W be a subspace of a vector space V, and let T be a linear transformation of V to itself. Also, let A, B ∈ M_n(F) and k ∈ F.
(1) If W is T-invariant, then W^⊥ is T*-invariant.
(2) T** = T.
(3) N(T*T) = N(T) and rank(T*T) = rank(T).
(4) rank(T) = rank(T*) and rank(TT*) = rank(T).
(5) (kA)* = k̄ A*.
(6) (AB)* = B*A*.
(7) (A + B)* = A* + B*.
(8) det(A*) is the complex conjugate of det(A).
(9) For any A ∈ M_n(F), rank(A*A) = rank(AA*) = rank(A).
Theorem 8.11 (Schur's Theorem). Let V be a finite-dimensional inner product space over a field F and let T ∈ L(V). Assume c_T(x) splits over F. Then there exists an orthonormal basis B for V such that [T]_B is upper triangular.
Proof. Proceed by induction on dim V. If dim V = 1, we are done. Assume that the result is true for inner product spaces of dimension less than n. We recall that (1) [T*]_γ = [T]_γ* for any orthonormal basis γ; (2) if W is T-invariant, then W^⊥ is T*-invariant; (3) T** = T.
We claim that T* has an eigenvector. Let γ be an orthonormal basis for V. Then
c_{T*}(x) = det([T*]_γ − xI_n) = det([T]_γ* − xI_n),
so if c_T(x) = (x − λ_1) ⋯ (x − λ_n), then c_{T*}(x) = (x − λ̄_1) ⋯ (x − λ̄_n). Since c_T(x) splits over F, c_{T*}(x) splits over F. Let λ be an eigenvalue of T*. Then there exists z ≠ 0 such that T*(z) = λz. Set U = span(z), so that dim U = 1, and then dim U^⊥ = n − 1; that is, V = U ⊕ U^⊥. Also, since T*(z) = λz, U is T*-invariant. This implies that U^⊥ is T**-invariant, i.e., T-invariant. Since c_{T|_{U^⊥}}(x) divides c_T(x), c_{T|_{U^⊥}}(x) splits over F. By induction, there is an orthonormal basis γ₀ for U^⊥ such that [T|_{U^⊥}]_{γ₀} is upper triangular. Let B = γ₀ ∪ { z/||z|| }. Then B is orthonormal, and [T]_B consists of the (n − 1)×(n − 1) upper triangular block [T|_{U^⊥}]_{γ₀} in the upper left, an arbitrary last column, and a last row (0 ⋯ 0 ∗).
Thus [T]_B is upper triangular.
Remark 8.12. If in addition we are given T = T*, then
[T]_B = [T*]_B = ([T]_B)* .
Since [T]_B is upper triangular and equals its own conjugate transpose,
[T]_B is diagonal and its diagonal entries are real, i.e., T is diagonalizable
with real eigenvalues.
Definition 8.13. Let V be an inner product space and let T ∈ L(V).
(1) T is called self-adjoint if T = T*.
(2) T is called normal if TT* = T*T.
(3) T is called unitary if ⟨T(x), T(x)⟩ = ⟨x, x⟩ for all x ∈ V and F = C. In particular,
T is called orthogonal if ⟨T(x), T(x)⟩ = ⟨x, x⟩ for all x ∈ V and F = R.
Example 8.14. Let A ∈ M_n(F) be a skew-symmetric matrix. Then AA^t
and A^tA are normal.
Note that if T is self-adjoint then it is normal, but the converse is not
true.
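Example 8.14 is easy to test numerically. Below, the 3×3 skew-symmetric matrix is a made-up instance, and `is_normal` is our own helper (assuming NumPy):

```python
import numpy as np

A = np.array([[ 0.,  2., -1.],
              [-2.,  0.,  3.],
              [ 1., -3.,  0.]])   # skew-symmetric: A^t = -A

def is_normal(M):
    """Check M M* = M* M, the definition of a normal matrix."""
    M_star = M.conj().T
    return np.allclose(M @ M_star, M_star @ M)
```

In fact a skew-symmetric A is itself normal, since A A^t = −A² = A^t A.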
Theorem 8.15. Let V be an inner product space over a field F, and let T
be a normal linear transformation on V. Then the following are true:
(1) ⟨T(x), T(x)⟩ = ⟨T*(x), T*(x)⟩ for all x ∈ V.
(2) T − cI is a normal linear transformation for all c ∈ F.
(3) If T(v) = λv, then T*(v) = λ̄v.
(4) If T(v_1) = λ_1v_1 and T(v_2) = λ_2v_2 with λ_1 ≠ λ_2, then
⟨v_1, v_2⟩ = 0.
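Properties (1), (3), and (4) can be observed on a concrete normal matrix. The 2×2 matrix below is a made-up example (it satisfies T T^t = T^t T = 5I, with eigenvalues 1 ± 2i); this check is ours, assuming NumPy:

```python
import numpy as np

T = np.array([[1., -2.],
              [2.,  1.]])        # normal: T T^t = T^t T = 5 I
T_star = T.conj().T

x = np.array([3., 4.])
# (1): ||T(x)|| = ||T*(x)|| for every x.
n1, n2 = np.linalg.norm(T @ x), np.linalg.norm(T_star @ x)

# (3)/(4): eigenvectors for distinct eigenvalues are orthogonal,
# and T*(v) = conj(lambda) v.
evals, evecs = np.linalg.eig(T)
v1, v2 = evecs[:, 0], evecs[:, 1]
```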
Theorem 8.16. Let V be a finite dimensional inner product space over F,
let T ∈ L(V), and suppose c_T(x) splits over F. Then V has an orthonormal
basis of eigenvectors of T if and only if either
(1) F = C and T is normal, or
(2) F = R and T is self-adjoint.
Proof. By Schur's Theorem 8.11, there is an orthonormal basis B for V such
that [T]_B is an upper triangular matrix.
First, let F = C. Suppose that the basis B consists of eigenvectors of T.
Then [T]_B is a diagonal matrix, say
[T]_B = diag(λ_1, …, λ_n) ,
where λ_1, …, λ_n are the eigenvalues of T. Hence
[T]_B ([T]_B)* = diag(λ_1λ̄_1, …, λ_nλ̄_n) = ([T]_B)* [T]_B ,
i.e., T is normal.
Conversely, suppose T is normal; we show inductively that the upper
triangular matrix [T]_B is in fact diagonal. Write the first row of [T]_B
as (z_1, z_2, …, z_n); since [T]_B is upper triangular, its first column is
(z_1, 0, …, 0)^t. Comparing the (1,1) entries of
[T]_B ([T]_B)* = ([T]_B)* [T]_B
gives z_1z̄_1 + z_2z̄_2 + ⋯ + z_nz̄_n = z_1z̄_1, where all z_j ∈ C. Since
0 = (x + yi)(x − yi) = x² + y² with x, y ∈ R forces x = y = 0, we conclude
z_2 = ⋯ = z_n = 0. Thus [T]_B has the simplified form

[T]_B = [ z_1   0 ⋯ 0 ]
        [  0           ]
        [  ⋮     A′    ]
        [  0           ] ,

where A′ is an (n − 1) × (n − 1) upper triangular block. Continuing in this
way with A′, we obtain a diagonal matrix [T]_B, as desired; hence B is an
orthonormal basis of eigenvectors of T.
Now let F = R. Then T is self-adjoint if and only if
[T]_B = [T*]_B = ([T]_B)* = ([T]_B)^t
if and only if the upper triangular matrix [T]_B is diagonal, if and only if
B is an orthonormal basis of eigenvectors of T. □
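For the real self-adjoint case of Theorem 8.16, NumPy's `eigh` produces exactly such an orthonormal eigenbasis. The symmetric matrix below is a made-up example (this check is ours):

```python
import numpy as np

S = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])     # real symmetric, hence self-adjoint

evals, Q = np.linalg.eigh(S)     # columns of Q: orthonormal eigenvectors
# Q is orthogonal and Q^t S Q is the (real) diagonal matrix of eigenvalues.
D = Q.T @ S @ Q
```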
Corollary 8.17. If T is self-adjoint, then every eigenvalue of T is real.
Interestingly, when F = C and T is normal, Theorem 8.16 does not
extend to infinite-dimensional complex inner product spaces.
Example 8.18. Let H be the inner product space consisting of continuous
complex-valued functions defined on the interval [0, 2π] with the following
inner product:
⟨f, g⟩ = (1/2π) ∫_0^{2π} f(t) ḡ(t) dt .
For each integer j, let f_j(t) = e^{ijt}, where i is the imaginary number
√−1 (recall that e^{ijt} = cos jt + i sin jt), and let S = {f_j : j ∈ Z}.
It is easy to check that the set S is an orthonormal subset of H and linearly
independent. Now let V = span(S), and define T, U ∈ L(V) by T(f) = f_1f and
U(f) = f_{−1}f. For every integer k we have
T(f_k) = f_{k+1} and U(f_k) = f_{k−1} .
Then
⟨T(f_i), f_j⟩ = ⟨f_{i+1}, f_j⟩ = δ_{(i+1),j} = δ_{i,(j−1)} = ⟨f_i, f_{j−1}⟩ = ⟨f_i, U(f_j)⟩ .
Hence U = T*. It implies that TT* = I = T*T, that is, T is normal. Now
we show that T has no eigenvectors. Suppose that f is an eigenvector of T.
Then we have T(f) = λf for some λ. Since V = span(S), we may write
f = Σ_{j=−n}^{m} a_j f_j, where a_m ≠ 0. Applying T to both sides of the
preceding equation, we obtain
Σ_{j=−n}^{m} a_j f_{j+1} = λ Σ_{j=−n}^{m} a_j f_j .
Since a_m ≠ 0, we can write f_{m+1} as a linear combination of
f_{−n}, f_{−n+1}, …, f_m. But this is a contradiction because S is linearly
independent.
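The orthonormality of S and the shift T(f_k) = f_{k+1} in Example 8.18 can be approximated numerically with a Riemann sum over [0, 2π] (our own check, assuming NumPy; the helper `inner` is not from the notes):

```python
import numpy as np

def inner(f, g, n=20000):
    """<f, g> = (1/2π) ∫_0^{2π} f(t) conj(g(t)) dt, via a Riemann sum."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.mean(f(t) * np.conj(g(t)))

f = lambda j: (lambda t: np.exp(1j * j * t))   # f_j(t) = e^{ijt}

same = inner(f(3), f(3))      # <f_3, f_3> = 1
diff = inner(f(3), f(-2))     # <f_3, f_-2> = 0
# T multiplies by f_1, shifting indices: f_1 * f_2 = f_3.
shifted = inner(lambda t: np.exp(1j * t) * f(2)(t), f(3))
```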
The following statements are equivalent to the definition of a unitary or
orthogonal linear transformation.
Theorem 8.19. Let T be a linear transformation on a finite-dimensional
inner product space V. Then the following are equivalent.
(1) TT* = I = T*T.
(2) ⟨T(x), T(y)⟩ = ⟨x, y⟩ for all x, y ∈ V.
(3) If B is an orthonormal basis for V, then T(B) is an orthonormal
basis for V.
(4) There exists an orthonormal basis B for V such that T(B) is an
orthonormal basis for V.
(5) ⟨T(x), T(x)⟩ = ⟨x, x⟩ for all x ∈ V.
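A rotation of R² is the standard example of an orthogonal (real unitary) transformation; conditions (1), (2), and (5) of Theorem 8.19 can be checked directly (our own numerical sketch, assuming NumPy):

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by theta: orthogonal

x = np.array([1., 2.])
y = np.array([-3., 0.5])
```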
Orthogonal Projection and Spectral Theorem. Recall that if V =
W_1 ⊕ W_2, then a linear transformation T on V is the projection on W_1
along W_2 if, whenever x = x_1 + x_2 with x_1 ∈ W_1 and x_2 ∈ W_2, we have
T(x) = x_1. Then we have
R(T) = W_1 = {x ∈ V | T(x) = x} and N(T) = W_2 .
So
V = R(T) ⊕ N(T) .
Note that T is a projection if and only if T² = T.
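A small concrete instance: take W_1 = span{(1, 0)} and W_2 = span{(1, 1)} in R². The projection on W_1 along W_2 is the identity on W_1 and zero on W_2, and satisfies T² = T, but it is not symmetric, so it is not an orthogonal projection in the sense defined next (our own example, assuming NumPy):

```python
import numpy as np

# Columns of B: a basis vector of W_1, then a basis vector of W_2.
B = np.array([[1., 1.],
              [0., 1.]])
# T acts as identity on W_1 and as zero on W_2.
T = B @ np.diag([1., 0.]) @ np.linalg.inv(B)   # T(x, y) = (x - y, 0)
```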
Definition 8.20. Let V be an inner product space, and let T : V → V
be a projection. We say that T is an orthogonal projection if R(T)^⊥ =
N(T) and N(T)^⊥ = R(T).
Theorem 8.21. Let V be an inner product space, and let T : V → V be a
linear transformation. Then T is an orthogonal projection if and only if T*
exists and T² = T = T*.
Proof. Suppose T is an orthogonal projection. Since T is a projection, we
have T² = T, so it is enough to show that T* exists and T = T*. For all
x = x_1 + x_2 and y = y_1 + y_2 in V = R(T) ⊕ N(T), decomposed with respect
to this direct sum, we have
⟨x, T(y)⟩ = ⟨x_1 + x_2, T(y_1)⟩ = ⟨x_1, T(y_1)⟩ + ⟨x_2, T(y_1)⟩ = ⟨x_1, y_1⟩
⟨T(x), y⟩ = ⟨T(x_1), y_1 + y_2⟩ = ⟨T(x_1), y_1⟩ + ⟨T(x_1), y_2⟩ = ⟨x_1, y_1⟩ .
So ⟨x, T(y)⟩ = ⟨T(x), y⟩ for all x, y ∈ V. Hence by Theorem 8.8, T* exists
and T = T*. Conversely, suppose that T² = T = T*. Then T is a
projection, and hence we need only show that R(T) = N(T)^⊥ and R(T)^⊥ =
N(T). We leave this as an exercise. □
Let V be a finite-dimensional inner product space, W be a subspace of V,
and T be the orthogonal projection on W. We may choose an orthonormal
basis B = {b_1, …, b_n} for V such that {b_1, …, b_k} is a basis for W. Then
we have

[T]_B = [ I_k  O_1 ]
        [ O_2  O_3 ] ,

where the O_i are zero matrices of the appropriate sizes.
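Equivalently, stacking an orthonormal basis of W as the columns of a matrix Q gives the orthogonal projection P = QQ^t, which is idempotent and symmetric as Theorem 8.21 requires (our own sketch for a plane in R³, assuming NumPy):

```python
import numpy as np

# Orthogonal projection onto W = span{w1, w2} in R^3, via P = Q Q^t
# where the columns of Q form an orthonormal basis of W.
w1 = np.array([1., 1., 0.]) / np.sqrt(2)
w2 = np.array([0., 0., 1.])
Q = np.column_stack([w1, w2])
P = Q @ Q.T
```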
Theorem 8.22 (The Spectral Theorem). Suppose that T is a linear trans-
formation on a finite-dimensional inner product space V over F with the
distinct eigenvalues λ_1, …, λ_k. Assume that T is normal if F = C and that
T is self-adjoint if F = R. For each i = 1, …, k, let W_i be the eigenspace of
T corresponding to the eigenvalue λ_i, and let T_i be the orthogonal projection
on W_i. Then the following are true.
(1) V = W_1 ⊕ ⋯ ⊕ W_k.
(2) If W′_i denotes the direct sum of the subspaces W_j (j ≠ i), then
W_i^⊥ = W′_i.
(3) T_iT_j = δ_{ij}T_i for 1 ≤ i, j ≤ k.
(4) I = T_1 + ⋯ + T_k.
(5) T = λ_1T_1 + ⋯ + λ_kT_k.
Note that the set {λ_1, …, λ_k} of eigenvalues of T is called the spectrum
of T, the sum I = T_1 + ⋯ + T_k is called the resolution of the identity operator
induced by T, and the sum T = λ_1T_1 + ⋯ + λ_kT_k is called the spectral
decomposition of T.
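The conclusions of the spectral theorem can be exhibited on a small symmetric matrix, building each projection T_i as the outer product of a unit eigenvector with itself (our own example, assuming NumPy):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])            # real symmetric; eigenvalues 1 and 3

evals, Q = np.linalg.eigh(A)
# Orthogonal projection onto each (one-dimensional) eigenspace: T_i = q_i q_i^t.
projs = [np.outer(Q[:, i], Q[:, i]) for i in range(2)]

identity_resolution = projs[0] + projs[1]                 # (4): I = T_1 + T_2
spectral_sum = evals[0] * projs[0] + evals[1] * projs[1]  # (5): A = λ_1 T_1 + λ_2 T_2
```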
Remark 8.23. Let T ∈ L(V) be self-adjoint over R, where B is an orthonormal
basis for V. Then [T]_B is a real symmetric matrix; moreover, every eigenvalue
of [T]_B is real and c_T(x) splits over R.
We now list several interesting properties of the spectral theorem.
Corollary 8.24. Suppose that T is a linear transformation on a finite-
dimensional inner product space V over F.
(1) Let F = C. Then T is normal if and only if T* = g(T) for some
polynomial g.
(2) Let F = C. Then T is unitary if and only if T is normal and |λ| = 1 for
every eigenvalue λ of T.
(3) Let F = C and T be normal. Then T is self-adjoint if and only if every
eigenvalue of T is real.
(4) Let T be as in the spectral theorem with spectral decomposition T =
λ_1T_1 + ⋯ + λ_kT_k. Then each T_j is a polynomial in T.
Problem Set
Problem 8.25. Suppose that ⟨·, ·⟩_1 and ⟨·, ·⟩_2 are two inner products on a
vector space V. Prove that ⟨·, ·⟩ = ⟨·, ·⟩_1 + ⟨·, ·⟩_2 is another inner product on
V.
Problem 8.26. Let B be a basis for a finite-dimensional inner product
space. Prove that if ⟨x, y⟩ = 0 for all x ∈ B, then y = 0.
Problem 8.27. Let T be a self-adjoint linear transformation in L(V), where
V is a finite-dimensional inner product space. Show that if ⟨x, T(x)⟩ = 0
for all x ∈ V, then T = 0, where 0 is the zero linear transformation.
Problem 8.28. Let A ∈ M_n(R) be a symmetric matrix. Prove that A is
similar to a diagonal matrix.
Problem 8.29. Let A ∈ M_n(R) be symmetric (or A ∈ M_n(C) be normal).
Then prove the following:
tr(A) = Σ_{i=1}^n λ_i , tr(A*A) = Σ_{i=1}^n |λ_i|² , and det(A) = Π_{i=1}^n λ_i ,
where the λ_i are the (not necessarily distinct) eigenvalues of A.
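The three identities of Problem 8.29 are easy to confirm numerically for a concrete symmetric matrix (our own check, assuming NumPy):

```python
import numpy as np

A = np.array([[4., 1., 2.],
              [1., 5., 3.],
              [2., 3., 6.]])        # real symmetric

evals = np.linalg.eigvalsh(A)
trace_sum = evals.sum()              # tr(A)   = Σ λ_i
abs_square_sum = (evals ** 2).sum()  # tr(A*A) = Σ |λ_i|² (real symmetric case)
det_prod = evals.prod()              # det(A)  = Π λ_i
```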
Problem 8.30. Let V be a finite-dimensional complex inner product space
over a field F, and let T ∈ L(V). Use the spectral decomposition T =
λ_1T_1 + ⋯ + λ_kT_k of T to prove the following statements:
(1) If g is a polynomial, then
g(T) = Σ_{i=1}^k g(λ_i)T_i .
(2) If T^n = 0 for some n, then T = 0, where 0 is the zero linear transformation.
(3) Let U ∈ L(V). Then U commutes with T if and only if U commutes
with each T_i.
(4) There exists a normal U ∈ L(V) such that U² = T.
(5) T is invertible if and only if λ_i ≠ 0 for all 1 ≤ i ≤ k.
(6) T is a projection if and only if every eigenvalue of T is 0 or 1.
(7) T = −T* if and only if every λ_i is an imaginary number.