
Solutions to Axler, Linear Algebra Done Right 2nd Ed.

Edvard Fagerholm
edvard.fagerholm@helsinki.fi | gmail.com

Beware of errors. I read the book and solved the exercises during spring break (one week), so the problems were solved in a hurry. However, if you do find better or interesting solutions to the problems, I'd still like to hear about them. Also, please don't put this on the Internet to encourage copying homework solutions...
1 Vector Spaces
1. Assuming that C is a field, write z = a + bi. Then we have 1/z = z̄/(zz̄) = z̄/|z|².
Plugging in the numbers we get 1/(a + bi) = a/(a² + b²) − bi/(a² + b²) = c + di. A
straightforward calculation of (c + di)(a + bi) = 1 shows that this is indeed an inverse.
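The computation in this solution can be sanity-checked numerically; the values of a and b below are arbitrary illustrative choices.

```python
# Numerical check of the inverse formula 1/(a+bi) = a/(a^2+b^2) - b i/(a^2+b^2).
# a and b are arbitrary example values.
a, b = 3.0, -4.0
z = complex(a, b)
w = complex(a / (a**2 + b**2), -b / (a**2 + b**2))  # the claimed inverse c + di
assert abs(z * w - 1) < 1e-12  # (c+di)(a+bi) = 1
```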
2. Just calculate ((−1 + √3 i)/2)³.
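A quick numerical check that the suggested calculation comes out to 1, i.e. that the number is a cube root of 1:

```python
# Verify that ((-1 + sqrt(3) i)/2)^3 = 1.
import cmath

w = (-1 + cmath.sqrt(3) * 1j) / 2
assert abs(w**3 - 1) < 1e-12
```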
3. We have v + (−v) = 0, so by the uniqueness of the additive inverse (prop. 1.3),
−(−v) = v.
4. Choose a ≠ 0 and v ≠ 0. Then assuming av = 0 we get v = a⁻¹av = a⁻¹0 = 0.
Contradiction.
5. Denote the set in question by A in each part.
(a) Let v, w ∈ A, v = (x₁, x₂, x₃), w = (y₁, y₂, y₃). Then x₁ + 2x₂ + 3x₃ = 0 and
y₁ + 2y₂ + 3y₃ = 0, so that 0 = x₁ + 2x₂ + 3x₃ + y₁ + 2y₂ + 3y₃ = (x₁ + y₁) +
2(x₂ + y₂) + 3(x₃ + y₃), so v + w ∈ A. Similarly 0 = a0 = ax₁ + 2ax₂ + 3ax₃, so
av ∈ A. Thus A is a subspace.
(b) This is not a subspace, as 0 ∉ A.
(c) We have that (1, 1, 0) ∈ A and (0, 0, 1) ∈ A, but (1, 1, 0) + (0, 0, 1) = (1, 1, 1) ∉ A,
so A is not a subspace.
(d) Let (x₁, x₂, x₃), (y₁, y₂, y₃) ∈ A. If x₁ = 5x₃ and y₁ = 5y₃, then ax₁ = 5ax₃,
so a(x₁, x₂, x₃) ∈ A. Similarly x₁ + y₁ = 5(x₃ + y₃), so that (x₁, x₂, x₃) +
(y₁, y₂, y₃) ∈ A. Thus A is a subspace.
6. Set U = Z².
7. The set {(x, x) ∈ R² | x ∈ R} ∪ {(x, −x) ∈ R² | x ∈ R} is closed under scalar
multiplication but is trivially not a subspace ((x, x) + (x, −x) = (2x, 0) doesn't belong
to it unless x = 0).
8. Let {Vᵢ} be a collection of subspaces of V. Set U = ∩ᵢ Vᵢ. If u, v ∈ U, then
u, v ∈ Vᵢ for all i. Because each Vᵢ is a subspace, au ∈ Vᵢ for all i, so that
au ∈ U. Similarly u + v ∈ Vᵢ for all i, so u + v ∈ U.
9. Let U, W ⊆ V be subspaces. If U ⊆ W or W ⊆ U, then U ∪ W is clearly a
subspace. Assume then that U ⊄ W and W ⊄ U. Then we can choose u ∈ U \ W
and w ∈ W \ U. Assuming that U ∪ W is a subspace, we have u + w ∈ U ∪ W.
Assuming that u + w ∈ U, we get w = u + w − u ∈ U. Contradiction. Similarly for
u + w ∈ W. Thus U ∪ W is not a subspace.
10. Clearly U = U + U as U is closed under addition.
11. Yes and yes. This follows directly from commutativity and associativity of vector addition.
12. The zero subspace, {0}, is clearly an additive identity. Assuming that we have
inverses, the whole space V should have an inverse U such that U + V = {0}.
Since U + V = V, this is clearly impossible unless V is the trivial vector space.
13. Let W = R². Then for any two subspaces U₁, U₂ of W we have U₁ + W = U₂ + W = W,
so the statement is clearly false in general.
14. Let W = {p ∈ 𝒫(F) | p = Σⁿᵢ₌₀ aᵢxⁱ, a₂ = a₅ = 0}.
15. Let V = R². Let W = {(x, 0) ∈ R² | x ∈ R}. Set U₁ = {(x, x) ∈ R² | x ∈ R} and
U₂ = {(x, −x) ∈ R² | x ∈ R}. Then it's easy to see that U₁ + W = U₂ + W = R²,
but U₁ ≠ U₂, so the statement is false.
2 Finite Dimensional Vector Spaces
1. Let uₙ = vₙ and uᵢ = vᵢ − vᵢ₊₁ for i = 1, …, n − 1. Now we see that vᵢ = Σⁿⱼ₌ᵢ uⱼ. Thus
vᵢ ∈ span(u₁, …, uₙ), so V = span(v₁, …, vₙ) ⊆ span(u₁, …, uₙ).
2. From the previous exercise we know that the span is V. As (v₁, …, vₙ) is a linearly
independent spanning list of vectors, we know that dim V = n. The claim now follows
from proposition 2.16.
3. If (v₁ + w, …, vₙ + w) is linearly dependent, then we can write
0 = a₁(v₁ + w) + … + aₙ(vₙ + w) = Σⁿᵢ₌₁ aᵢvᵢ + w Σⁿᵢ₌₁ aᵢ,
where aᵢ ≠ 0 for some i. Now Σⁿᵢ₌₁ aᵢ ≠ 0, because otherwise we would get
0 = Σⁿᵢ₌₁ aᵢvᵢ + w Σⁿᵢ₌₁ aᵢ = Σⁿᵢ₌₁ aᵢvᵢ
and by the linear independence of (vᵢ) we would get aᵢ = 0 for all i, contradicting our
assumption. Thus
w = −(Σⁿᵢ₌₁ aᵢ)⁻¹ Σⁿᵢ₌₁ aᵢvᵢ,
so w ∈ span(v₁, …, vₙ).
4. No: the set is not closed under addition. For example, xᵐ and −xᵐ + x both have
degree m, but their sum x has degree 1, so it is neither the zero polynomial nor of
degree m. (Multiplying by a nonzero constant does preserve the set, since it doesn't
change the degree.)
5. (1, 0, …), (0, 1, 0, …), (0, 0, 1, 0, …), … is trivially linearly independent. Thus F^∞
isn't finite dimensional.
6. Clearly 𝒫(F) is a subspace which is infinite dimensional.
7. Choose v₁ ∈ V. Assuming that V is infinite dimensional, span(v₁) ≠ V. Thus we can
choose v₂ ∈ V \ span(v₁). Now continue inductively.
8. ((3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1)) is clearly linearly independent. Let
(x₁, x₂, x₃, x₄, x₅) ∈ U. Then (x₁, x₂, x₃, x₄, x₅) = (3x₂, x₂, 7x₄, x₄, x₅) =
x₂(3, 1, 0, 0, 0) + x₄(0, 0, 7, 1, 0) + x₅(0, 0, 0, 0, 1), so the vectors span U. Hence,
they form a basis.
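The membership and spanning claims can be checked mechanically; the coordinates x2, x4, x5 below are arbitrary test values.

```python
# Check that the three vectors lie in U = {x in F^5 : x1 = 3 x2 and x3 = 7 x4}
# and reproduce an arbitrary element of U with coordinates x2, x4, x5.
def in_U(x):
    return x[0] == 3 * x[1] and x[2] == 7 * x[3]

basis = [(3, 1, 0, 0, 0), (0, 0, 7, 1, 0), (0, 0, 0, 0, 1)]
assert all(in_U(v) for v in basis)

x2, x4, x5 = 2, -1, 5  # arbitrary example coordinates
x = (3 * x2, x2, 7 * x4, x4, x5)
combo = tuple(x2 * a + x4 * b + x5 * c for a, b, c in zip(*basis))
assert combo == x
```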
9. Choose the polynomials (1, x, x² − x³, x³). They clearly span 𝒫₃(F) and form a basis
by proposition 2.16.
10. Choose a basis (v₁, …, vₙ). Let Uᵢ = span(vᵢ). Now clearly V = U₁ ⊕ … ⊕ Uₙ by
proposition 2.19.
11. Let dim U = n = dim V. Choose a basis (u₁, …, uₙ) for U and extend it to a
basis (u₁, …, uₙ, v₁, …, vₖ) for V. By assumption a basis for V has length n, so
k = 0 and (u₁, …, uₙ) spans V. Hence U = V.
12. Let U = span(p₀, …, pₘ). As U ≠ 𝒫ₘ(F), we have by the previous exercise that
dim U < dim 𝒫ₘ(F) = m + 1. As (p₀, …, pₘ) is a spanning list of U having
length m + 1, it is not linearly independent.
13. Since U + W = R⁸, theorem 2.18 gives 8 = dim R⁸ = dim(U + W) = dim U + dim W −
dim(U ∩ W) = 4 + 4 − dim(U ∩ W). Thus dim(U ∩ W) = 0 and the claim follows.
14. Assuming that U ∩ W = {0}, we have by theorem 2.18 that
9 = dim R⁹ ≥ dim(U + W) = dim U + dim W − dim(U ∩ W) = 5 + 5 − 0 = 10.
Contradiction.
15. Let U₁ = {(x, 0) ∈ R² | x ∈ R}, U₂ = {(0, x) ∈ R² | x ∈ R}, U₃ = {(x, x) ∈ R² | x ∈ R}.
Then U₁ ∩ U₂ = {0}, U₁ ∩ U₃ = {0} and U₂ ∩ U₃ = {0}. Thus the left hand side
of the equation in the problem is 2 while the right hand side is 3. Thus we have a
counterexample.
16. Choose a basis (u_{i,1}, …, u_{i,nᵢ}) for each Uᵢ. Then the list of vectors (u_{1,1}, …, u_{m,n_m}) has
length dim U₁ + … + dim Uₘ and clearly spans U₁ + … + Uₘ, proving the claim.
17. By assumption the list of vectors (u_{1,1}, …, u_{m,n_m}) from the proof of the previous exercise
is linearly independent. Since V = U₁ + … + Uₘ = span(u_{1,1}, …, u_{m,n_m}), the claim follows.
3 Linear Maps
1. Any v ≠ 0 spans V. Now T(v) = av for some a ∈ F, so for w = bv ∈ V we have
T(w) = T(bv) = bT(v) = bav = aw. Thus T is multiplication by a scalar.
2. Define f by e.g.
f(x, y) = x if x = y, and f(x, y) = 0 if x ≠ y.
Then clearly f satisfies the condition f(av) = af(v), but f is not linear.
3. To define a linear map it's enough to define the images of the elements of a basis.
Choose a basis (u₁, …, uₘ) for U and extend it to a basis (u₁, …, uₘ, v₁, …, vₙ) of
V. Let S ∈ L(U, W) and choose some vectors w₁, …, wₙ ∈ W. Now define T ∈ L(V, W)
by T(uᵢ) = S(uᵢ), i = 1, …, m, and T(vᵢ) = wᵢ, i = 1, …, n. These relations define
the linear map and clearly T|_U = S.
4. By theorem 3.4, dim V = dim null T + dim range T. If u ∉ null T, then dim range T >
0, but as dim range T ≤ dim F = 1, we have dim range T = 1 and it follows that
dim null T = dim V − 1. Now choose a basis (u₁, …, uₙ) for null T. By our assumption
(u₁, …, uₙ, u) is linearly independent and has length dim V. Hence, it's a basis for
V. This implies that
V = span(u₁, …, uₙ, u) = span(u₁, …, uₙ) ⊕ span(u) = null T ⊕ {au | a ∈ F}.
5. Assume 0 = Σⁿᵢ₌₁ aᵢT(vᵢ). By linearity 0 = Σⁿᵢ₌₁ aᵢT(vᵢ) = Σⁿᵢ₌₁ T(aᵢvᵢ) = T(Σⁿᵢ₌₁ aᵢvᵢ). By injectivity we
have Σⁿᵢ₌₁ aᵢvᵢ = 0, so that aᵢ = 0 for all i. Hence (T(v₁), …, T(vₙ)) is linearly
independent.
6. If n = 1 the claim is trivially true. Assume that the claim is true for n = k.
If S₁, …, S_{k+1} satisfy the assumptions, then S₁⋯Sₖ is injective by the induction
hypothesis. Let T = S₁⋯Sₖ. If u ≠ 0, then by injectivity of S_{k+1} we have S_{k+1}u ≠ 0,
and by injectivity of T we have TS_{k+1}u ≠ 0. Hence TS_{k+1} is injective and the claim
follows.
7. Let w ∈ W. By surjectivity of T we can find a vector v ∈ V such that T(v) = w.
Writing v = a₁v₁ + … + aₙvₙ, we get w = T(v) = T(a₁v₁ + … + aₙvₙ) = a₁T(v₁) +
… + aₙT(vₙ), proving the claim.
8. Let (u₁, …, uₙ) be a basis for null T and extend it to a basis (u₁, …, uₙ, v₁, …, vₖ)
of V. Let U := span(v₁, …, vₖ). Then by construction null T ∩ U = {0}, and an
arbitrary v ∈ V can be written v = a₁u₁ + … + aₙuₙ + b₁v₁ + … + bₖvₖ, so that
T(v) = b₁T(v₁) + … + bₖT(vₖ).
Hence range T = T(U) = {Tu | u ∈ U}.
9. It's easy to see that ((5, 1, 0, 0), (0, 0, 7, 1)) is a basis for null T (see exercise 2.8). Hence
dim range T = dim F⁴ − dim null T = 4 − 2 = 2. Thus range T = F², so that T is
surjective.
10. It's again easy to see that ((3, 1, 0, 0, 0), (0, 0, 1, 1, 1)) is a basis of null T. Hence, we get
dim range T = dim F⁵ − dim null T = 5 − 2 = 3, which is impossible.
11. This follows trivially from dimV = dimnull T + dimrange T.
12. From dim V = dim null T + dim range T it follows trivially that if we have a surjective
linear map T ∈ L(V, W), then dim V ≥ dim W. Assume then that dim V ≥ dim W.
Choose a basis (v₁, …, vₙ) for V and a basis (w₁, …, wₘ) for W. We can define
a linear map T ∈ L(V, W) by letting T(vᵢ) = wᵢ, i = 1, …, m, and T(vᵢ) = 0, i =
m + 1, …, n. Clearly T is surjective.
13. We have that dim range T ≤ dim W. Thus we get dim null T = dim V − dim range T ≥
dim V − dim W. Choose an arbitrary subspace U of V such that dim U ≥ dim V −
dim W. Let (u₁, …, uₙ) be a basis of U and extend it to a basis (u₁, …, uₙ, v₁, …, vₖ)
of V. Now we know that k ≤ dim W, so let (w₁, …, wₘ) be a basis for W. Define
a linear map T ∈ L(V, W) by T(uᵢ) = 0, i = 1, …, n, and T(vᵢ) = wᵢ, i = 1, …, k.
Clearly null T = U.
14. Clearly if we can find such an S, then T is injective: if Tv = 0, then v = S(Tv) = S(0) = 0.
Assume then that T is injective and let (v₁, …, vₙ) be a basis of V. By exercise 5,
(T(v₁), …, T(vₙ)) is linearly independent, so we can extend it to a basis
(T(v₁), …, T(vₙ), w₁, …, wₘ) of W. Define S ∈ L(W, V) by S(T(vᵢ)) = vᵢ, i = 1, …, n,
and S(wᵢ) = 0, i = 1, …, m. Clearly ST = I_V.
15. Clearly if we can find such an S, then T is surjective. Assume then that T is
surjective and let (v₁, …, vₙ) be a basis of V. By assumption (T(v₁), …, T(vₙ)) spans
W. By the linear dependence lemma we can make the list of vectors (T(v₁), …, T(vₙ))
a basis by removing some vectors. Without loss of generality we can assume that the
first m vectors form the basis (just permute the indices). Thus (T(v₁), …, T(vₘ)) is a
basis of W. Define the map S ∈ L(W, V) by S(T(vᵢ)) = vᵢ, i = 1, …, m. Now clearly
TS = I_W.
16. Since range ST = S(range T), applying the dimension formula to the restriction of S
to range T gives
dim range T = dim(null S ∩ range T) + dim range ST ≤ dim null S + dim range ST.
Hence
dim null ST = dim U − dim range ST = dim null T + dim range T − dim range ST ≤ dim null T + dim null S.
17. This is nothing but pencil pushing. Just take arbitrary matrices of the required
dimensions and calculate each expression; the equalities easily fall out.
18. Ditto.
19. Let v = (x₁, …, xₙ). From proposition 3.14 we have that

M(Tv) = M(T)M(v) =
[ a_{1,1} ⋯ a_{1,n} ] [ x₁ ]   [ a_{1,1}x₁ + … + a_{1,n}xₙ ]
[    ⋮         ⋮    ] [ ⋮  ] = [             ⋮             ]
[ a_{m,1} ⋯ a_{m,n} ] [ xₙ ]   [ a_{m,1}x₁ + … + a_{m,n}xₙ ]

which shows that Tv = (a_{1,1}x₁ + … + a_{1,n}xₙ, …, a_{m,1}x₁ + … + a_{m,n}xₙ).
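The coordinate formula can be confirmed with a small concrete example (the 2×3 matrix and the vector below are arbitrary choices):

```python
# Compare M(T)M(v) against the coordinate formula for Tv.
A = [[1, 2, 3],
     [4, 5, 6]]          # a_{i,j}, with m = 2, n = 3
x = [7, -1, 2]           # coordinates of v
# i-th coordinate of Tv is the sum a_{i,1} x_1 + ... + a_{i,n} x_n
Av = [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]
assert Av == [1*7 + 2*(-1) + 3*2, 4*7 + 5*(-1) + 6*2]
```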
20. Clearly dim Mat(n, 1, F) = n = dim V. We have that Tv = 0 if and only if v =
0v₁ + … + 0vₙ = 0, so null T = {0}. Thus T is injective and hence invertible.
21. Let eᵢ denote the n × 1 matrix with a 1 in the i-th row and 0 everywhere else. Let T
be the linear map. Define a matrix A := [T(e₁) ⋯ T(eₙ)], whose i-th column is T(eᵢ).
Now it's trivial to verify that
Aeᵢ = T(eᵢ).
By distributivity of matrix multiplication (exercise 17) we get for an arbitrary v =
a₁e₁ + … + aₙeₙ that
Av = A(a₁e₁ + … + aₙeₙ) = a₁Ae₁ + … + aₙAeₙ
= a₁T(e₁) + … + aₙT(eₙ) = T(a₁e₁ + … + aₙeₙ) = T(v),
so the claim follows.
22. From theorem 3.21 we have: ST invertible ⇔ ST bijective ⇔ S and T bijective ⇔ S
and T invertible.
23. By symmetry it's sufficient to prove this in one direction only. Thus assume TS = I. Then
ST(Su) = S(TSu) = Su for all u. From TS = I we see that S is injective, hence
bijective by theorem 3.21, so Su runs through the whole space V as u varies. Thus ST = I.
24. Clearly TS = ST for every S if T is a scalar multiple of the identity. For the other direction
assume that TS = ST for every linear map S ∈ L(V). Let v ≠ 0, extend it to a basis
of V, and let S be the operator with Sv = v and Su = 0 for the other basis vectors,
so that range S = span(v). Then Tv = TSv = STv ∈ range S = span(v), i.e. Tv = λ_v v
for some scalar λ_v. Finally, the scalars agree: if v and w are linearly independent, then
λ_{v+w}(v + w) = T(v + w) = λ_v v + λ_w w forces λ_v = λ_{v+w} = λ_w, and for w = av
the claim is trivial. Thus T is a scalar multiple of the identity.
25. Let T ∈ L(F²) be the operator T(a, b) = (a, 0) and S ∈ L(F²) the operator S(a, b) =
(0, b). Then neither one is injective, hence neither is invertible by 3.21. However, T + S is
the identity operator, which is trivially invertible. This generalizes trivially
to arbitrary spaces of dimension ≥ 2.
26. Write

A :=
[ a_{1,1} ⋯ a_{1,n} ]        [ x₁ ]
[    ⋮         ⋮    ] , x =  [ ⋮  ] .
[ a_{n,1} ⋯ a_{n,n} ]        [ xₙ ]

Then the homogeneous system in (a) reduces to Ax = 0. Now A defines a linear map
from Mat(n, 1, F) to Mat(n, 1, F). What (a) states is that the map is injective,
while (b) states that it is surjective. By theorem 3.21 these are equivalent.

4 Polynomials
1. Let λ₁, …, λₘ be m distinct numbers and k₁, …, kₘ > 0 integers such that their sum is n.
Then ∏ᵐᵢ₌₁ (x − λᵢ)^{kᵢ} is clearly such a polynomial.
2. Let pᵢ(x) = ∏_{j≠i} (x − zⱼ), so that deg pᵢ = m. Now we have that pᵢ(zⱼ) ≠ 0 if and
only if i = j. Let cᵢ = pᵢ(zᵢ). Define
p(x) = Σ^{m+1}_{i=1} wᵢ cᵢ⁻¹ pᵢ(x).
Now deg p ≤ m and clearly p(zᵢ) = wᵢ.
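The interpolation formula from this solution can be sketched directly in code; the points z and values w below are arbitrary example data.

```python
# Evaluate p = sum_i w_i * p_i / p_i(z_i), where p_i(x) = prod_{j != i} (x - z_j),
# and check that p takes the prescribed values at the prescribed points.
from fractions import Fraction as F

z = [F(0), F(1), F(2)]        # distinct points z_1, ..., z_{m+1} (example data)
w = [F(5), F(-1), F(3)]       # prescribed values w_1, ..., w_{m+1}

def p(x):
    total = F(0)
    for i, zi in enumerate(z):
        pi_x, pi_zi = F(1), F(1)
        for j, zj in enumerate(z):
            if j != i:
                pi_x *= (x - zj)     # p_i evaluated at x
                pi_zi *= (zi - zj)   # c_i = p_i(z_i)
        total += w[i] * pi_x / pi_zi
    return total

assert [p(zi) for zi in z] == w
```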
3. We ony need to prove uniqueness as existence is theorem 4.5.

Let s

, r
be other such
poynomias. Then we get that
0 = (s s
)p + (r r
) r
r = (s s
)p
We know that for any poynomias
deg p + deg q.
Assuming that s ,= s

p ,= 0 ,= q

we have deg pq

we have deg(r
r) < deg(s s
)p which is impossibe.
s = s

Thus

impying r = r
.
4. Let λ be a root of p, so we can write p(x) = (x − λ)q(x). Now we get
p′(x) = q(x) + (x − λ)q′(x).
Thus λ is a root of p′ if and only if λ is a root of q, i.e. λ is a multiple root of p. The
statement follows.
5. Let p(x) = Σⁿᵢ₌₀ aᵢxⁱ, where aᵢ ∈ R. By the fundamental theorem of algebra we
have a complex root z. By proposition 4.10, z̄ is also a root of p. Moreover, z and z̄ are
roots of equal multiplicity (divide by (x − z)(x − z̄), which is real). Thus if we had
no real roots, we would have an even number of roots counting multiplicity, but the number
of roots counting multiplicity is deg p, hence odd. Thus p has a real root.
5 Eigenvalues and Eigenvectors
1. An arbitrary element of U₁ + … + Uₙ is of the form u₁ + … + uₙ, where uᵢ ∈ Uᵢ. Thus
we get T(u₁ + … + uₙ) = T(u₁) + … + T(uₙ) ∈ U₁ + … + Uₙ by the assumption
that T(uᵢ) ∈ Uᵢ for all i.
2. Let U = ∩ᵢ Uᵢ, where each Uᵢ is invariant under T. Let v ∈ U, so that v ∈ Uᵢ for all
i. Now T(v) ∈ Uᵢ for all i by assumption, so T(v) ∈ ∩ᵢ Uᵢ = U. Thus U is invariant
under T.
3. The claim is clearly true for U = {0} or U = V. Assume that {0} ≠ U ≠ V. Let
(u₁, …, uₙ) be a basis for U and extend it to a basis (u₁, …, uₙ, v₁, …, vₘ) of V. By
our assumption m ≥ 1. Define a linear operator T by T(uᵢ) = v₁, i = 1, …, n, and
T(vᵢ) = v₁, i = 1, …, m. Then clearly U is not invariant under T.
4. Let u ∈ null(T − λI), so that Tu − λu = 0. ST = TS gives us
0 = S(Tu − λu) = STu − λSu = TSu − λSu = (T − λI)Su,
so that Su ∈ null(T − λI).
5. Clearly T(1, 1) = (1, 1), so that 1 is an eigenvalue. Also T(1, −1) = (−1, 1) =
−(1, −1), so −1 is another eigenvalue. By corollary 5.9 these are all the eigenvalues.
6. We easily see that T(0, 0, 1) = (0, 0, 5), so that 5 is an eigenvalue. Also T(1, 0, 0) =
(0, 0, 0), so 0 is an eigenvalue. Assume that λ ≠ 0 and T(z₁, z₂, z₃) = (2z₂, 0, 5z₃) =
λ(z₁, z₂, z₃). From the assumption λ ≠ 0 we get z₂ = 0, so the equation is of the
form (0, 0, 5z₃) = (λz₁, 0, λz₃). Again we see that z₁ = 0, so we get the equation
(0, 0, 5z₃) = (0, 0, λz₃). Thus 5 is the only nonzero eigenvalue.
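A small script confirming the eigenvalue bookkeeping for this particular T:

```python
# Confirm the eigenvalue analysis of T(z1, z2, z3) = (2 z2, 0, 5 z3).
def T(z):
    z1, z2, z3 = z
    return (2 * z2, 0, 5 * z3)

assert T((0, 0, 1)) == (0, 0, 5)   # (0,0,1) is an eigenvector for 5
assert T((1, 0, 0)) == (0, 0, 0)   # (1,0,0) is an eigenvector for 0
# For lam != 0: lam*z2 = 0 forces z2 = 0; then lam*z1 = 2*z2 = 0 forces z1 = 0;
# a nonzero eigenvector must then have z3 != 0, which gives lam = 5.
```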
7. Notice that the range of T is the subspace {(x, …, x) ∈ Fⁿ | x ∈ F}, which has dimension
1. Thus dim range T = 1, so dim null T = n − 1. Assume that T has two distinct
eigenvalues λ₁, λ₂ with λ₁ ≠ 0 ≠ λ₂. Let v₁, v₂ be the corresponding
eigenvectors, so by theorem 5.6 they are linearly independent. Then v₁, v₂ ∉ null T,
but dim null T = n − 1, so this is impossible. Hence T has at most one nonzero
eigenvalue, hence at most two eigenvalues.
Because T is not injective, we know that 0 is an eigenvalue. We also see that
T(1, …, 1) = n(1, …, 1), so n is another eigenvalue. By the previous paragraph,
these are all the eigenvalues of T.
8. Let a ∈ F. Now we have T(a, a², a³, …) = (a², a³, …) = a(a, a², …), so every a ∈ F
is an eigenvalue (for a = 0, note T(1, 0, 0, …) = 0).
9. Assume that T has k + 2 distinct eigenvalues λ₁, …, λ_{k+2} with corresponding eigenvectors
v₁, …, v_{k+2}. By theorem 5.6 these eigenvectors are linearly independent. Now
Tvᵢ = λᵢvᵢ and dim span(Tv₁, …, Tv_{k+2}) = dim span(λ₁v₁, …, λ_{k+2}v_{k+2}) ≥ k + 1
(it's k + 2 if all λᵢ are nonzero, otherwise k + 1). This is a contradiction, as
dim range T = k and span(λ₁v₁, …, λ_{k+2}v_{k+2}) ⊆ range T.
10. As T = (T⁻¹)⁻¹, we only need to show this in one direction. If T is invertible, then
0 is not an eigenvalue. Now let λ be an eigenvalue of T and v a corresponding
eigenvector. From Tv = λv we get v = T⁻¹Tv = λT⁻¹v, so that T⁻¹v = λ⁻¹v.
11. Let λ be an eigenvalue of TS and v a corresponding eigenvector. Then we get
ST(Sv) = S(TSv) = S(λv) = λSv, so if Sv ≠ 0, then Sv is an eigenvector of ST for the eigenvalue λ. If
Sv = 0, then TSv = 0, so λ = 0. As Sv = 0 with v ≠ 0, we know that S is not injective, so ST
is not injective and it has eigenvalue 0. Thus if λ is an eigenvalue of TS, then it's an
eigenvalue of ST. The other implication follows by symmetry.
12. Let (v₁, …, vₙ) be a basis of V. By assumption (Tv₁, …, Tvₙ) = (λ₁v₁, …, λₙvₙ).
We need to show that λᵢ = λⱼ for all i, j. Choose i ≠ j. By assumption vᵢ + vⱼ
is an eigenvector, so T(vᵢ + vⱼ) = λᵢvᵢ + λⱼvⱼ = λ(vᵢ + vⱼ) = λvᵢ + λvⱼ. This
means that (λᵢ − λ)vᵢ + (λⱼ − λ)vⱼ = 0. Because (vᵢ, vⱼ) is linearly independent, we
get λᵢ − λ = 0 = λⱼ − λ, i.e. λᵢ = λⱼ.
13. Let v₁ ∈ V, v₁ ≠ 0, and extend it to a basis (v₁, …, vₙ) of V. Let Uᵢ be the subspace
generated by the vectors vⱼ, j ≠ i. By our assumption each Uᵢ is an invariant subspace.
Write Tv₁ = a₁v₁ + … + aₙvₙ. Now v₁ ∈ Uⱼ for j > 1, so for j > 1 the invariance of
Uⱼ gives Tv₁ ∈ Uⱼ, which implies aⱼ = 0. Thus Tv₁ = a₁v₁, so v₁ is an
eigenvector. The result now follows from the previous exercise.
14. Clearly (STS⁻¹)ⁿ = STⁿS⁻¹, so
Σⁿᵢ₌₀ aᵢ(STS⁻¹)ⁱ = Σⁿᵢ₌₀ aᵢSTⁱS⁻¹ = S(Σⁿᵢ₌₀ aᵢTⁱ)S⁻¹,
i.e. p(STS⁻¹) = S p(T) S⁻¹.
15. Let λ be an eigenvalue of T and v ∈ V a corresponding eigenvector. Let
p(x) = Σⁿᵢ₌₀ aᵢxⁱ. Then we have
p(T)v = Σⁿᵢ₌₀ aᵢTⁱv = Σⁿᵢ₌₀ aᵢλⁱv = p(λ)v.
Thus p(λ) is an eigenvalue of p(T). Then let a be an eigenvalue of p(T) and v ∈ V
a corresponding eigenvector. Let q(x) = p(x) − a. By the fundamental theorem of
algebra we can write q(x) = c ∏ⁿᵢ₌₁ (x − λᵢ). Now q(T)v = 0 and q(T) = c ∏ⁿᵢ₌₁ (T −
λᵢI). As q(T) is non-injective, T − λᵢI is non-injective for some i. Hence
λᵢ is an eigenvalue of T. Thus we get 0 = q(λᵢ) = p(λᵢ) − a, so that a = p(λᵢ).
16. Let T ∈ L(R²) be the map T(x, y) = (−y, x). On page 78 it was shown that it has
no eigenvalue. However, T²(x, y) = T(−y, x) = (−x, −y), so −1 is an eigenvalue of T².
17. By theorem 5.13, T has an upper triangular matrix with respect to some basis (v₁, …, vₙ).
The claim now follows from proposition 5.12.
18. Let T ∈ L(F²) be the operator T(a, b) = (b, a). Now T² = I, so T is clearly invertible.
However, with respect to the standard basis T has the matrix
[ 0 1 ]
[ 1 0 ]
with zeros on the diagonal.
19. Take the operator T in exercise 7. It is clearly not invertible, but it has the matrix
[ 1 ⋯ 1 ]
[ ⋮    ⋮ ]
[ 1 ⋯ 1 ]
with ones everywhere, in particular on the diagonal.
20. By theorem 5.6 we can choose a basis (v₁, …, vₙ) for V where vᵢ is an eigenvector of
T corresponding to the eigenvalue λᵢ. Let μᵢ be the eigenvalue of S corresponding to
vᵢ. Then we get
STvᵢ = S(λᵢvᵢ) = λᵢSvᵢ = λᵢμᵢvᵢ = μᵢλᵢvᵢ = μᵢTvᵢ = T(μᵢvᵢ) = TSvᵢ,
so ST and TS agree on a basis of V. Hence they are equal.
21. Clearly 0 is an eigenvalue, and the corresponding eigenvectors are the elements of
null P_{U,W} \ {0} = W \ {0}. Now assume λ ≠ 0 is an eigenvalue and v = u + w is a
corresponding eigenvector, with u ∈ U, w ∈ W. Then P_{U,W}(u + w) = u = λu + λw,
so (1 − λ)u = λw. Since the left side is in U and the right side in W, both are 0; as
λ ≠ 0 we get w = 0, which implies (1 − λ)u = 0. Because v is an eigenvector we
have v = u + w = u ≠ 0, so that 1 − λ = 0, i.e. λ = 1. As we can choose u ∈ U freely,
so that it's nonzero, the eigenvectors corresponding to 1 are the elements of U \ {0}.
22. As dim V = dim null P + dim range P, it's clearly sufficient to prove that null P ∩
range P = {0}. Let v ∈ null P ∩ range P. As v ∈ range P, we can find a u ∈ V such
that Pu = v. Thus we have that v = Pu = P²u = Pv, but v ∈ null P, so Pv = 0.
Thus v = 0.
23. Let T(a, b, c, d) = (−b, a, −d, c); then T is clearly injective, so 0 is not an eigenvalue.
Assume λ ≠ 0 is an eigenvalue and λ(a, b, c, d) = (−b, a, −d, c). We clearly see that
a ≠ 0 ⇔ b ≠ 0, and similarly for c, d. By symmetry we can assume that a ≠ 0 (an
eigenvector is nonzero). Then we have λa = −b and λb = a. Substituting, we get
λ²b = −b, i.e. (λ² + 1)b = 0. As b ≠ 0, we have λ² + 1 = 0, but this equation has no
solutions in R. Hence T has no eigenvalue.
24. If U is an invariant subspace of odd dimension, then by theorem 5.26 T|_U has an eigenvalue
λ with eigenvector v. Then λ is an eigenvalue of T with eigenvector v, against our
assumption. Thus T has no invariant subspace of odd dimension.
6 Inner Product Spaces
1. From the law of cosines we get ‖x − y‖² = ‖x‖² + ‖y‖² − 2‖x‖‖y‖cos θ. Solving, we
get
‖x‖‖y‖cos θ = (‖x‖² + ‖y‖² − ‖x − y‖²)/2.
Now let x = (x₁, x₂), y = (y₁, y₂). A straight calculation shows that
‖x‖² + ‖y‖² − ‖x − y‖² = x₁² + x₂² + y₁² + y₂² − (x₁ − y₁)² − (x₂ − y₂)² = 2(x₁y₁ + x₂y₂),
so that
‖x‖‖y‖cos θ = (‖x‖² + ‖y‖² − ‖x − y‖²)/2 = (2(x₁y₁ + x₂y₂))/2 = ⟨x, y⟩.
2. If ⟨u, v⟩ = 0, then ⟨u, av⟩ = 0, so by the Pythagorean theorem ‖u + av‖² = ‖u‖² +
‖av‖² ≥ ‖u‖², i.e. ‖u‖ ≤ ‖u + av‖. Then assume that ‖u‖ ≤ ‖u + av‖ for all a ∈ F.
We have ‖u‖² ≤ ‖u + av‖², i.e. ⟨u, u⟩ ≤ ⟨u + av, u + av⟩. Thus we get
⟨u, u⟩ ≤ ⟨u, u⟩ + ⟨u, av⟩ + ⟨av, u⟩ + ⟨av, av⟩,
so that
−2 Re ā⟨u, v⟩ ≤ |a|²‖v‖².
Choose a = −t⟨u, v⟩ with t > 0, so that
2t|⟨u, v⟩|² ≤ t²|⟨u, v⟩|²‖v‖², i.e. 2|⟨u, v⟩|² ≤ t|⟨u, v⟩|²‖v‖².
If v = 0, then clearly ⟨u, v⟩ = 0. If not, choose t = 1/‖v‖², so that we get
2|⟨u, v⟩|² ≤ |⟨u, v⟩|².
Thus ⟨u, v⟩ = 0.
3. Let a = (a₁, √2 a₂, …, √n aₙ) ∈ Rⁿ and b = (b₁, b₂/√2, …, bₙ/√n) ∈ Rⁿ. The
inequality is then simply ⟨a, b⟩² ≤ ‖a‖²‖b‖², which follows directly from the
Cauchy–Schwarz inequality.
4. From the parallelogram equality we have ‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²).
Solving for ‖v‖ we get ‖v‖ = √17.
5. Set e.g. u = (1, 0), v = (0, 1). Then ‖u‖ = 1, ‖v‖ = 1, ‖u + v‖ = 2, ‖u − v‖ = 2.
Assuming that the norm is induced by an inner product, we would have by the
parallelogram equality
8 = 2² + 2² = 2(1² + 1²) = 4,
which is clearly false.
6. Just expand using ‖u‖² = ⟨u, u⟩ and simplify.
7. See previous exercise.
8. This exercise is a lot trickier than it might seem. I'll prove it for R; the proof for C
is almost identical except that the calculations are longer and more tedious. All
norms on a finite dimensional real vector space are equivalent. This gives us
limₙ ‖rₙx + y‖ = ‖λx + y‖
when rₙ → λ. This probably doesn't make any sense, so check a book on topology
or functional analysis, or just assume the result.
Define ⟨u, v⟩ by
⟨u, v⟩ = (‖u + v‖² − ‖u − v‖²)/4.
Trivially we have ‖u‖² = ⟨u, u⟩, so positive definiteness follows. Now
4(⟨u + v, w⟩ − ⟨u, w⟩ − ⟨v, w⟩)
= ‖u + v + w‖² − ‖u + v − w‖² − (‖u + w‖² − ‖u − w‖²) − (‖v + w‖² − ‖v − w‖²)
= ‖u + v + w‖² − ‖u + v − w‖² + (‖u − w‖² + ‖v − w‖²) − (‖u + w‖² + ‖v + w‖²).
We can apply the parallelogram equality to the two parentheses, getting
4(⟨u + v, w⟩ − ⟨u, w⟩ − ⟨v, w⟩)
= ‖u + v + w‖² − ‖u + v − w‖² + ½(‖u + v − 2w‖² + ‖u − v‖²) − ½(‖u + v + 2w‖² + ‖u − v‖²)
= ‖u + v + w‖² − ‖u + v − w‖² + ½‖u + v − 2w‖² − ½‖u + v + 2w‖²
= (‖u + v + w‖² + ‖w‖²) + ½‖u + v − 2w‖² − (‖u + v − w‖² + ‖w‖²) − ½‖u + v + 2w‖².
Applying the parallelogram equality to the two parentheses and simplifying, we
are left with 0. Hence we get additivity in the first slot. It's easy to see that
⟨−u, v⟩ = −⟨u, v⟩, so we get ⟨nu, v⟩ = n⟨u, v⟩ for all n ∈ Z. Now we get
⟨u, v⟩ = ⟨n(u/n), v⟩ = n⟨u/n, v⟩,
so that ⟨u/n, v⟩ = (1/n)⟨u, v⟩. Thus we have ⟨(n/m)u, v⟩ = (n/m)⟨u, v⟩. It follows that
we have homogeneity in the first slot when the scalar is rational. Now let λ ∈ R and
choose a sequence (rₙ) of rational numbers such that rₙ → λ. This gives us
λ⟨u, v⟩ = limₙ rₙ⟨u, v⟩ = limₙ ⟨rₙu, v⟩
= limₙ ¼(‖rₙu + v‖² − ‖rₙu − v‖²)
= ¼(‖λu + v‖² − ‖λu − v‖²)
= ⟨λu, v⟩.
Thus we have homogeneity in the first slot. We trivially also have symmetry, so we
have an inner product. The proof for F = C can be done by defining the inner product
from the complex polarizing identity, i.e. by using the identities
⟨u, v⟩₀ = ¼(‖u + v‖² − ‖u − v‖²), ⟨u, v⟩ = ⟨u, v⟩₀ + ⟨u, iv⟩₀ i,
and using the properties just proved for ⟨·, ·⟩₀.
9. This is really an exercise in calculus. We have integrals of the types
⟨sin nx, sin mx⟩ = ∫_{−π}^{π} sin nx sin mx dx,
⟨cos nx, cos mx⟩ = ∫_{−π}^{π} cos nx cos mx dx,
⟨sin nx, cos mx⟩ = ∫_{−π}^{π} sin nx cos mx dx,
and they can be evaluated using the trigonometric identities
sin nx sin mx = (cos((n − m)x) − cos((n + m)x))/2,
cos nx cos mx = (cos((n − m)x) + cos((n + m)x))/2,
sin mx cos nx = (sin((n + m)x) − sin((n − m)x))/2.
10. Take e₁ = 1. Then e₂ = (x − ⟨x, 1⟩1)/‖x − ⟨x, 1⟩1‖, where
⟨x, 1⟩ = ∫₀¹ x dx = 1/2,
so that
‖x − 1/2‖ = √⟨x − 1/2, x − 1/2⟩ = √(∫₀¹ (x − 1/2)² dx) = √(1/12),
which gives e₂ = √12 (x − 1/2) = √3 (2x − 1). Then, to continue the pencil pushing, we
have
⟨x², 1⟩ = 1/3, ⟨x², √3(2x − 1)⟩ = √3/6,
so that x² − ⟨x², 1⟩1 − ⟨x², √3(2x − 1)⟩√3(2x − 1) = x² − 1/3 − ½(2x − 1) = x² − x + 1/6. Now
‖x² − x + 1/6‖ = 1/(6√5),
giving e₃ = 6√5(x² − x + 1/6) = √5(6x² − 6x + 1). The orthonormal basis is thus (1, √3(2x − 1), √5(6x² −
6x + 1)).
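The orthonormality of the computed basis can be double-checked with exact rational arithmetic, representing polynomials as coefficient lists (lowest degree first) and accounting for the irrational factors √3 and √5 via their squares:

```python
# Check that (1, sqrt(3)(2x-1), sqrt(5)(6x^2-6x+1)) is orthonormal for
# <p, q> = integral_0^1 p(x) q(x) dx.
from fractions import Fraction as F

def mul(p, q):  # polynomial product of coefficient lists
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integral01(p):  # integral of the polynomial over [0, 1]
    return sum(c / (k + 1) for k, c in enumerate(p))

e1 = [F(1)]
e2 = [F(-1), F(2)]          # rational part of sqrt(3)(2x - 1)
e3 = [F(1), F(-6), F(6)]    # rational part of sqrt(5)(6x^2 - 6x + 1)
assert integral01(mul(e1, e1)) == 1
assert 3 * integral01(mul(e2, e2)) == 1   # factor 3 = (sqrt 3)^2
assert 5 * integral01(mul(e3, e3)) == 1   # factor 5 = (sqrt 5)^2
assert integral01(mul(e1, e2)) == 0
assert integral01(mul(e1, e3)) == 0
assert integral01(mul(e2, e3)) == 0
```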
11. Let (v₁, …, vₙ) be a linearly dependent list. We can assume that (v₁, …, v_{n−1}) is
linearly independent and vₙ ∈ span(v₁, …, v_{n−1}). Let P denote the projection onto the
subspace spanned by (v₁, …, v_{n−1}). Then calculating eₙ with the Gram–Schmidt
algorithm gives us
eₙ = (vₙ − P(vₙ))/‖vₙ − P(vₙ)‖,
but vₙ = P(vₙ) by our assumption, so the numerator vₙ − P(vₙ) is the zero vector and
the formula breaks down. It's easy to see that we can extend the algorithm to work
for linearly dependent lists by simply tossing away the vectors for which vₙ − P(vₙ) = 0.
12. Let (e₁, …, e_{n−1}) be an orthonormal list of vectors. Assume that (e₁, …, e_{n−1}, eₙ) and
(e₁, …, e_{n−1}, eₙ′) are orthonormal and span the same subspace. Then we can write
eₙ′ = a₁e₁ + … + aₙeₙ. Now we have ⟨eₙ′, eᵢ⟩ = 0 for all i < n, so that aᵢ = 0 for
i < n. Thus we have eₙ′ = aₙeₙ, and from ‖eₙ′‖ = 1 we have aₙ = ±1.
Let (e₁, …, eₙ) be the orthonormal basis produced from (v₁, …, vₙ) by Gram–Schmidt.
Then, if (e₁′, …, eₙ′) satisfies the hypothesis of the problem, we have by the previous
paragraph that eᵢ′ = ±eᵢ. Thus we have 2ⁿ possible such orthonormal lists.
13. Extend (e₁, …, eₘ) to an orthonormal basis (e₁, …, eₙ) of V. By theorem 6.17,
v = ⟨v, e₁⟩e₁ + … + ⟨v, eₙ⟩eₙ,
so that by the Pythagorean theorem
‖v‖² = |⟨v, e₁⟩|² + … + |⟨v, eₙ⟩|².
Thus we have ‖v‖² = |⟨v, e₁⟩|² + … + |⟨v, eₘ⟩|² if and only if ⟨v, eᵢ⟩ = 0 for i > m, i.e.
if and only if v ∈ span(e₁, …, eₘ).
14. It's easy to see that the differentiation operator has an upper triangular matrix in the
orthonormal basis calculated in exercise 10.
15. This follows directly from V = U ⊕ U⊥.
16. This follows directly from the previous exercise.
17. From exercise 5.21 we have that V = range P ⊕ null P. Let U := range P; then by
assumption null P ⊆ U⊥. From exercise 15 we have dim U⊥ = dim V − dim U =
dim null P, so that null P = U⊥. An arbitrary v ∈ V can be written as v = Pv + (v −
Pv). From P² = P we have that P(v − Pv) = 0, so v − Pv ∈ null P = U⊥. Hence
the decomposition v = Pv + (v − Pv) is the unique decomposition in U ⊕ U⊥. For
u ∈ U we have u = Pw for some w ∈ V, so Pu = P²w = Pw = u, i.e. P is the identity
on U. By definition P = P_U.
18. Let u ∈ range P; then u = Pv for some v ∈ V, hence Pu = P²v = Pv = u, so P is
the identity on range P. Let w ∈ null P. Then for a ∈ F,
‖u‖² = ‖P(u + aw)‖² ≤ ‖u + aw‖².
By exercise 2 we have ⟨u, w⟩ = 0. Thus null P ⊆ (range P)⊥, and from the dimension
equality null P = (range P)⊥. Hence the claim follows.


19. If TP_U = P_U TP_U, then for u ∈ U we have Tu = TP_U u = P_U TP_U u ∈ U, so U is
invariant. Now assume that U is invariant. Then for arbitrary v ∈ V we have P_U v ∈ U,
so TP_U v ∈ U and hence P_U TP_U v = TP_U v.
20. Let u ∈ U; then we have Tu = TP_U u = P_U Tu ∈ U. Thus U is invariant. Then let
w ∈ U⊥. Now we can write Tw = u + u′, where u ∈ U and u′ ∈ U⊥. Now P_U Tw = u,
but u = P_U Tw = TP_U w = T0 = 0, so Tw ∈ U⊥. Thus U⊥ is also invariant.
21. First we need to find an orthonormal basis for U. With Gram–Schmidt we get
e₁ = (1/√2, 1/√2, 0, 0), e₂ = (0, 0, 1/√5, 2/√5), and U = span(e₁, e₂). Then we have
u = P_U(1, 2, 3, 4) = ⟨(1, 2, 3, 4), e₁⟩e₁ + ⟨(1, 2, 3, 4), e₂⟩e₂ = (3/2, 3/2, 11/5, 22/5).
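A numerical check of the projection, using nothing beyond the formulas above:

```python
# Verify P_U(1,2,3,4) = (3/2, 3/2, 11/5, 22/5) with the orthonormal e1, e2.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

e1 = (1/math.sqrt(2), 1/math.sqrt(2), 0.0, 0.0)
e2 = (0.0, 0.0, 1/math.sqrt(5), 2/math.sqrt(5))
v = (1.0, 2.0, 3.0, 4.0)
proj = tuple(dot(v, e1) * a + dot(v, e2) * b for a, b in zip(e1, e2))
assert all(abs(p - q) < 1e-12 for p, q in zip(proj, (1.5, 1.5, 2.2, 4.4)))
```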
22. If p(0) = 0 and p′(0) = 0, then p(x) = ax² + bx³. Thus we want to find the
projection of 2 + 3x onto the subspace U := span(x², x³). With Gram–Schmidt we get
the orthonormal basis (√5 x², 6√7(x³ − (5/6)x²)). We get
P_U(2 + 3x) = ⟨2 + 3x, √5 x²⟩√5 x² + ⟨2 + 3x, 6√7(x³ − (5/6)x²)⟩ 6√7(x³ − (5/6)x²).
Here
⟨2 + 3x, √5 x²⟩ = (17/12)√5, ⟨2 + 3x, 6√7(x³ − (5/6)x²)⟩ = −(29/60)√7,
so that
P_U(2 + 3x) = (85/12)x² − (203/10)(x³ − (5/6)x²) = 24x² − (203/10)x³.
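The answer 24x² − (203/10)x³ can be verified via the normal equations: the residual 2 + 3x − p(x) must be orthogonal to both x² and x³.

```python
# Check <2 + 3x - p, x^2> = 0 and <2 + 3x - p, x^3> = 0 for
# p(x) = 24 x^2 - (203/10) x^3, with <f, g> = integral_0^1 f g.
from fractions import Fraction as F

def mul(p, q):  # polynomial product, coefficient lists lowest degree first
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integral01(p):
    return sum(c / (k + 1) for k, c in enumerate(p))

residual = [F(2), F(3), F(-24), F(203, 10)]   # 2 + 3x - p(x)
assert integral01(mul(residual, [F(0), F(0), F(1)])) == 0        # orthogonal to x^2
assert integral01(mul(residual, [F(0), F(0), F(0), F(1)])) == 0  # orthogonal to x^3
```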
23. There is nothing special about this exercise compared to the previous two, except
that it takes ages to calculate.
24. We see that the map T : 𝒫₂(R) → R defined by p ↦ p(1/2) is linear. From exercise
10 we get that (1, √3(2x − 1), √5(6x² − 6x + 1)) is an orthonormal basis for 𝒫₂(R).
From theorem 6.45 we see that
q(x) = 1 + √3(2·(1/2) − 1)·√3(2x − 1) + √5(6·(1/2)² − 6·(1/2) + 1)·√5(6x² − 6x + 1)
= −3/2 + 15x − 15x².
25. The map T : 𝒫₂(R) → R defined by p ↦ ∫₀¹ p(x) cos(πx) dx is clearly linear. Again
we have the orthonormal basis (1, √3(2x − 1), √5(6x² − 6x + 1)), so that
q(x) = (∫₀¹ cos(πx) dx)·1 + (∫₀¹ √3(2x − 1) cos(πx) dx)·√3(2x − 1)
+ (∫₀¹ √5(6x² − 6x + 1) cos(πx) dx)·√5(6x² − 6x + 1).
Here ∫₀¹ cos(πx) dx = 0, ∫₀¹ (2x − 1) cos(πx) dx = −4/π², and ∫₀¹ (6x² − 6x + 1) cos(πx) dx = 0,
so
q(x) = −(12/π²)(2x − 1).
26. Choose an orthonormal basis (e₁, …, eₙ) for V and give F the usual basis. By
proposition 6.47 we have
M(T) = [⟨e₁, v⟩ ⋯ ⟨eₙ, v⟩],
so that M(T*) is the conjugate transpose, the column vector with entries
⟨v, e₁⟩, …, ⟨v, eₙ⟩. Finally,
T*a = a⟨v, e₁⟩e₁ + … + a⟨v, eₙ⟩eₙ = av.

27. With the usual basis for Fⁿ, M(T) is the matrix with ones on the subdiagonal and
zeros elsewhere, so by proposition 6.47 M(T*) is its conjugate transpose, the matrix
with ones on the superdiagonal. Clearly T*(z₁, …, zₙ) = (z₂, …, zₙ, 0).
28. By additivity and conjugate homogeneity we have (T − λI)* = T* − λ̄I. From exercise
31 we see that T* − λ̄I is non-injective if and only if T − λI is non-injective. That is,
λ̄ is an eigenvalue of T* if and only if λ is an eigenvalue of T.


29. As (U⊥)⊥ = U and (T*)* = T, it's enough to show only one of the implications. Let
U be invariant under T and choose w ∈ U⊥. Now we can write T*w = u + v, where
u ∈ U and v ∈ U⊥, so that
0 = ⟨Tu, w⟩ = ⟨u, T*w⟩ = ⟨u, u + v⟩ = ⟨u, u⟩ = ‖u‖².
Thus u = 0, which completes the proof.
30. As (T*)* = T, it's sufficient to prove the first part. Assume that T is injective. Then
by exercise 31 we have
dim V = dim range T = dim range T*,
but range T* is a subspace of V, so that range T* = V, i.e. T* is surjective.
31. From proposition 6.46 and exercise 15 we get
dim null T* = dim(range T)⊥ = dim W − dim range T = dim null T + dim W − dim V.
For the second part we have
dim range T* = dim W − dim null T* = dim V − dim null T = dim range T,
where the second equality follows from the first part.
32. Let T be the operator induced by A. The columns are the images of the basis vectors
under T, so they generate range T. Hence the dimension of the span of the column
vectors equals dim range T. By proposition 6.47 the span of the row vectors has the
same dimension as range T*. By the previous exercise dim range T = dim range T*,
so the claim follows.
7 Operators on Inner Product Spaces

1. For the first part, let p(x) = x and q(x) = 1. Then clearly 1/2 = ⟨Tp, q⟩ ≠ ⟨p, Tq⟩ = ⟨p, 0⟩ = 0. For the second part it's not a contradiction, since the basis is not orthogonal.
2. Choose the standard basis of R². Let T and S be the operators defined by the matrices

[0 1; 1 0],  [0 0; 0 1].

Then T and S are clearly self-adjoint, but TS has the matrix

[0 1; 0 0],

so TS is not self-adjoint.
3. If T and S are self-adjoint, then (aT + bS)* = (aT)* + (bS)* = aT + bS for any a, b ∈ R, so the self-adjoint operators form a subspace. The identity operator I is self-adjoint, but clearly (iI)* = −iI, so iI is not self-adjoint.

4. Assume that P is self-adjoint. Then from proposition 6.46 we have null P = null P* = (range P)⊥, so P is clearly the orthogonal projection onto range P. Then assume that P is an orthogonal projection. Let (u₁, …, uₘ) be an orthonormal basis of range P and extend it to an orthonormal basis of V. Then clearly P has a diagonal matrix (with 1s and 0s) with respect to this basis, so P is self-adjoint.
5. Choose the operators corresponding to the matrices

A = [0 1; −1 0],  B = [1 1; 1 0].

Then we easily see that A*A = AA* and B*B = BB*. However, A + B doesn't define a normal operator, which is easy to check.
6. From proposition 7.6 we get null T = null T*. It follows from proposition 6.46 that

range T = (null T*)⊥ = (null T)⊥ = range T*.
7. Clearly null T ⊆ null T^k. Let v ∈ null T^k. Then we have

⟨T*T^{k−1}v, T*T^{k−1}v⟩ = ⟨TT*T^{k−1}v, T^{k−1}v⟩ = ⟨T*T^k v, T^{k−1}v⟩ = 0,

using TT* = T*T, so that T*T^{k−1}v = 0. Now

⟨T^{k−1}v, T^{k−1}v⟩ = ⟨T*T^{k−1}v, T^{k−2}v⟩ = 0,

so that v ∈ null T^{k−1}. Thus null T^k ⊆ null T^{k−1}, and continuing we get null T^k ⊆ null T.

Now let u ∈ range T^k; then we can find a v ∈ V such that u = T^k v = T(T^{k−1}v), so that range T^k ⊆ range T. From the first part we get dim range T^k = dim range T, so range T^k = range T.
8. The vectors u = (1, 2, 3) and v = (2, 5, 7) are both eigenvectors corresponding to different eigenvalues. A self-adjoint operator is normal, so by corollary 7.8, if T were normal, u and v would be orthogonal. Clearly ⟨u, v⟩ ≠ 0, so T can't even be normal, much less self-adjoint.
9. If T is normal, we can choose a basis of V consisting of eigenvectors of T. Let A be the matrix of T corresponding to the basis. Now T is self-adjoint if and only if the conjugate transpose of A equals A, that is, if and only if the eigenvalues of T are real.
10. For any v ∈ V we get 0 = T⁹v − T⁸v = T⁸(Tv − v). Thus Tv − v ∈ null T⁸ = null T by the normality of T. Hence T(Tv − v) = T²v − Tv = 0, so T² = T. By the spectral theorem we can choose a basis of eigenvectors for T such that T has a diagonal matrix with λ₁, …, λₙ on the diagonal. Now T² has the diagonal matrix with λ₁², …, λₙ² on the diagonal, and from T² = T we must have λᵢ² = λᵢ for all i. Hence λᵢ = 0 or λᵢ = 1, so the matrix of T equals its conjugate transpose. Hence T is self-adjoint.
11. By the spectral theorem we can choose a basis of V such that the matrix of T corresponding to the basis is diagonal. Let λ₁, …, λₙ be the diagonal elements. Let S be the operator corresponding to the diagonal matrix having √λ₁, …, √λₙ on the diagonal. Clearly S² = T.
12. Let T be the operator corresponding to the matrix [0 −1; 1 0]. Then T² + I = 0.
13. By the spectral theorem we can choose a basis consisting of eigenvectors of T. Then T has a diagonal matrix with respect to the basis. Let λ₁, …, λₙ be the diagonal elements. Let S be the operator corresponding to the diagonal matrix having ∛λ₁, …, ∛λₙ on the diagonal. Then clearly S³ = T.
14. By the spectral theorem we can choose an orthonormal basis (v₁, …, vₙ) consisting of eigenvectors of T, with corresponding eigenvalues λ₁, …, λₙ. Let v ∈ V be such that ‖v‖ = 1. Writing v = a₁v₁ + ⋯ + aₙvₙ, this implies that Σᵢ |aᵢ|² = 1. Assume that ‖Tv − λv‖ < ε. If |λ − λᵢ| ≥ ε for all i, then

‖Tv − λv‖ = ‖Σᵢ (aᵢTvᵢ − λaᵢvᵢ)‖ = ‖Σᵢ aᵢ(λᵢ − λ)vᵢ‖ = (Σᵢ |aᵢ|²|λᵢ − λ|²)^{1/2} ≥ ε(Σᵢ |aᵢ|²)^{1/2} = ε,

which is a contradiction. Thus we can find an eigenvalue λᵢ such that |λ − λᵢ| < ε.
15. If such an inner product exists, we have a basis consisting of eigenvectors by the spectral theorem. Assume then that (v₁, …, vₙ) is a basis consisting of eigenvectors of T. Define an inner product by

⟨vᵢ, vⱼ⟩ = 0 if i ≠ j,  ⟨vᵢ, vᵢ⟩ = 1,

and extend this to an inner product by bilinearity and homogeneity. Then we have an inner product, and clearly T is self-adjoint as (v₁, …, vₙ) is an orthonormal basis of V consisting of eigenvectors of T.
16. Let T ∈ L(R²) be the operator T(a, b) = (a, a + b). Then span((0, 1)) is invariant under T, but its orthogonal complement span((1, 0)) is clearly not, since T(1, 0) = (1, 1).
17. Let T, S be two positive operators. Then they are self-adjoint and (S + T)* = S* + T* = S + T, so S + T is self-adjoint. Also ⟨(S + T)v, v⟩ = ⟨Sv, v⟩ + ⟨Tv, v⟩ ≥ 0. Thus S + T is positive.
18. Clearly T^k is self-adjoint for every positive integer k. For k = 2 we have ⟨T²v, v⟩ = ⟨Tv, Tv⟩ ≥ 0. Assume that the result is true for all positive k < n with n ≥ 2. Then

⟨Tⁿv, v⟩ = ⟨T^{n−2}Tv, Tv⟩ ≥ 0,

by hypothesis and self-adjointness. Hence the result follows by induction.
19. Clearly ⟨Tv, v⟩ > 0 for all v ∈ V ∖ {0} implies that T is injective, hence invertible. So assume that T is invertible. Since T is self-adjoint we can choose an orthonormal basis (v₁, …, vₙ) of eigenvectors such that T has a diagonal matrix with respect to the basis. Let the elements on the diagonal be λ₁, …, λₙ; by assumption λᵢ > 0 for all i. Let v = a₁v₁ + ⋯ + aₙvₙ ∈ V ∖ {0}, so that

⟨Tv, v⟩ = |a₁|²λ₁ + ⋯ + |aₙ|²λₙ > 0.
20. The matrix

[sin θ  cos θ; cos θ  −sin θ]

is a square root of the identity for every θ ∈ R, which shows that the identity has infinitely many square roots.
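This one-parameter family is easy to spot-check numerically; the numpy sketch below (an illustration only) squares the matrix for a few values of θ:

```python
import numpy as np

def identity_root(theta):
    # A self-adjoint square root of the 2x2 identity, one for each theta
    return np.array([[np.sin(theta),  np.cos(theta)],
                     [np.cos(theta), -np.sin(theta)]])

for theta in (0.0, 0.7, 2.5):
    R = identity_root(theta)
    assert np.allclose(R @ R, np.eye(2))
```

The assertion holds for every θ because sin²θ + cos²θ = 1 and the off-diagonal terms cancel.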

21. Lt S L(R


2
) b d nd by S(a, b) = (a + b, 0). Thn |S(1, 0)| = |(1, 0)| = 1 and
|S(0, 1)| = 1, but S is cary not an isomtry.
22. R³ is an odd-dimensional real vector space. Hence S has an eigenvalue λ and a corresponding eigenvector v. Since ‖Sv‖ = |λ|‖v‖ = ‖v‖ we have λ² = 1. Hence S²v = λ²v = v.
23. The matrices corresponding to T and T* are

M(T) =
[ 0 0 1 ]
[ 2 0 0 ]
[ 0 3 0 ],
M(T*) =
[ 0 2 0 ]
[ 0 0 3 ]
[ 1 0 0 ].

Thus we get

M(T*T) =
[ 4 0 0 ]
[ 0 9 0 ]
[ 0 0 1 ],

so that T*T is the operator (z₁, z₂, z₃) ↦ (4z₁, 9z₂, z₃) and √(T*T)(z₁, z₂, z₃) = (2z₁, 3z₂, z₃). Hence in T = S√(T*T) we just need S to permute the indices, which corresponds to the isometry S(z₁, z₂, z₃) = (z₃, z₁, z₂).

24. Clearly T*T is positive, so by proposition 7.26 it has a unique positive square root. Thus it's sufficient to show that R² = T*T. Now we have T* = (SR)* = R*S* = RS* by the self-adjointness of R. Thus R² = RIR = RS*SR = T*T.
25. Assume that T is invertible. Then √(T*T) must be invertible (polar decomposition). Hence S = T(√(T*T))⁻¹, so S is uniquely determined.

Now assume that T is not invertible, so that √(T*T) is not invertible (hence not surjective). By the spectral theorem we can choose an orthonormal basis (v₁, …, vₙ) of eigenvectors of √(T*T), and we can assume that v₁ corresponds to the eigenvalue 0. Now let T = S√(T*T). Define U by Uv₁ = −Sv₁ and Uvᵢ = Svᵢ for i > 1. Clearly U ≠ S and T = U√(T*T). Now choose u, v ∈ V. Then it's easy to verify that ⟨Uu, Uv⟩ = ⟨Su, Sv⟩ = ⟨u, v⟩, so U is an isometry.
26. Choose a basis of eigenvectors for T such that T has a diagonal matrix with eigenvalues λ₁, …, λₙ on the diagonal. Now T*T = T² corresponds to the diagonal matrix with λ₁², …, λₙ² on the diagonal, and the square root √(T*T) corresponds to the diagonal matrix having |λ₁|, …, |λₙ| on the diagonal. These are the singular values, so the claim follows.
27. Let T ∈ L(R²) be the map T(a, b) = (0, a). Then from the matrix representation we see that T*T(a, b) = (a, 0), hence √(T*T) = T*T and 1 is clearly a singular value. However, T² = 0, so √((T²)*T²) = 0, and thus 1² = 1 is not a singular value of T².
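A quick numerical confirmation of this counterexample (numpy, illustration only):

```python
import numpy as np

T = np.array([[0.0, 0.0],
              [1.0, 0.0]])  # the map T(a, b) = (0, a)

s_T = sorted(np.linalg.svd(T, compute_uv=False))
s_T2 = np.linalg.svd(T @ T, compute_uv=False)

assert np.allclose(s_T, [0.0, 1.0])   # singular values of T are 0 and 1
assert np.allclose(s_T2, [0.0, 0.0])  # T^2 = 0, so both its singular values are 0
```

So 1 is a singular value of T while 1² = 1 is not a singular value of T², exactly as claimed.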
28. The composition of two bijections is a bijection. If T = S√(T*T), then since S is an isometry we have that T is bijective (hence invertible) if and only if √(T*T) is bijective. But √(T*T) is injective, hence bijective, if and only if 0 is not an eigenvalue of it. Thus T is bijective if and only if 0 is not a singular value.
29. From the polar decomposition theorem we see that dim range T = dim range √(T*T). Since √(T*T) is self-adjoint we can choose a basis of eigenvectors of √(T*T) such that its matrix is diagonal. Clearly dim range √(T*T) equals the number of non-zero elements on the diagonal, i.e. the number of non-zero singular values.
30. If S is an isometry, then clearly √(S*S) = I, so all singular values are 1. Now assume that all singular values are 1. Then S*S is a self-adjoint (positive) operator with all eigenvalues equal to 1. By the spectral theorem we can choose a basis of V such that S*S has a diagonal matrix. As all eigenvalues are 1, the matrix of √(S*S) is the identity matrix. Thus √(S*S) = I, so S is an isometry.
31. Let T₁ = S₁√(T₁*T₁) and T₂ = S₂√(T₂*T₂) be polar decompositions. Assume first that T₁ and T₂ have the same singular values s₁, …, sₙ. Then we can choose bases of eigenvectors (v₁, …, vₙ) of √(T₁*T₁) and (w₁, …, wₙ) of √(T₂*T₂) such that the two operators have the same matrix. Let S be the operator defined by Swᵢ = vᵢ; then clearly S is an isometry and √(T₂*T₂) = S⁻¹√(T₁*T₁)S. Thus we get T₂ = S₂√(T₂*T₂) = S₂S⁻¹√(T₁*T₁)S. Writing S₃ = S₂S⁻¹S₁⁻¹, which is clearly an isometry, we get

T₂ = S₃S₁√(T₁*T₁)S = S₃T₁S.

Now assume that T₂ = S₁T₁S₂ with S₁, S₂ isometries. Then we have

T₂*T₂ = (S₁T₁S₂)*(S₁T₁S₂) = S₂*T₁*T₁S₂ = S₂⁻¹T₁*T₁S₂.

Let λ be an eigenvalue of T₁*T₁ and v a corresponding eigenvector. Because S₂ is bijective we have a u ∈ V such that v = S₂u. This gives us

T₂*T₂u = S₂⁻¹T₁*T₁S₂u = S₂⁻¹T₁*T₁v = S₂⁻¹λv = λu,

so that λ is an eigenvalue of T₂*T₂. If (v₁, …, vₙ) is an orthonormal basis of eigenvectors of T₁*T₁, then (S₂⁻¹v₁, …, S₂⁻¹vₙ) is an orthonormal basis of V consisting of eigenvectors of T₂*T₂. Hence T₂*T₂ and T₁*T₁ have the same eigenvalues. The singular values are simply the non-negative square roots of these eigenvalues, so the claim follows.
32. For the first part, denote by S the map defined by the formula. Set

A := M(S, (f₁, …, fₙ), (e₁, …, eₙ)),  B := M(T, (e₁, …, eₙ), (f₁, …, fₙ)).

Clearly A = B and both are diagonal matrices with s₁, …, sₙ on the diagonal. All the sᵢ are positive real numbers, so that B* = B = A. It follows from proposition 6.47 that S = T*. For the second part, observe that the linear map S defined by mapping fᵢ to sᵢ⁻¹eᵢ satisfies TS = I, since T maps eᵢ to sᵢfᵢ. The claim now follows from exercise 3.23.

33. By definition √(T*T) is positive, so all singular values of T are non-negative. From the singular value decomposition theorem we can choose orthonormal bases (e₁, …, eₙ) and (f₁, …, fₙ) of V such that

‖Tv‖ = ‖Σᵢ sᵢ⟨v, eᵢ⟩fᵢ‖ = (Σᵢ sᵢ²|⟨v, eᵢ⟩|²)^{1/2}.

Now clearly

ŝ‖v‖ = (Σᵢ ŝ²|⟨v, eᵢ⟩|²)^{1/2} ≤ (Σᵢ sᵢ²|⟨v, eᵢ⟩|²)^{1/2} = ‖Tv‖,

where ŝ denotes the smallest singular value, and similarly ‖Tv‖ ≤ s̄‖v‖ for the largest singular value s̄.
34. From the triangle inequality and the previous exercise we get

‖(T′ + T′′)v‖ ≤ ‖T′v‖ + ‖T′′v‖ ≤ (s′ + s′′)‖v‖.

Now let v be an eigenvector of √((T′ + T′′)*(T′ + T′′)) corresponding to the singular value s, and let T′ + T′′ = S√((T′ + T′′)*(T′ + T′′)) be the polar decomposition, so that

‖(T′ + T′′)v‖ = ‖S√((T′ + T′′)*(T′ + T′′))v‖ = ‖S(sv)‖ = s‖v‖ ≤ (s′ + s′′)‖v‖.

Hence s ≤ s′ + s′′.
8 Operators on Complex Vector Spaces

1. T is not injective, so 0 is an eigenvalue of T. We see that T² = 0, so that any v ∈ V is a generalized eigenvector corresponding to the eigenvalue 0.
2. On page 78 it's shown that ±i are the eigenvalues of T. We have dim null(T − iI)² + dim null(T + iI)² = dim C² = 2, so that dim null(T − iI)² = dim null(T + iI)² = 1. Now (1, −i) is an eigenvector corresponding to the eigenvalue i and (1, i) is an eigenvector corresponding to the eigenvalue −i, so the sets of generalized eigenvectors are simply the spans of these corresponding eigenvectors.
3. Let Σ_{i=0}^{m−1} aᵢTⁱv = 0. Then 0 = T^{m−1}(Σ_{i=0}^{m−1} aᵢTⁱv) = a₀T^{m−1}v, so that a₀ = 0. Applying repeatedly T^{m−i} for i = 2, … we see that aᵢ = 0 for all i. Hence (v, Tv, …, T^{m−1}v) is linearly independent.
4. We see that T³ = 0 but T² ≠ 0. Assume that S is a square root of T; then S is nilpotent, so by corollary 8.8 we have S^{dim V} = S³ = 0, so that 0 = S⁴ = T². Contradiction.

5. If ST is nilpotent, then there is an n ∈ N such that (ST)ⁿ = 0, so (TS)^{n+1} = T(ST)ⁿS = 0.

6. Assume that λ ≠ 0 is an eigenvalue of N with corresponding eigenvector v. Then N^{dim V}v = λ^{dim V}v ≠ 0. This contradicts corollary 8.8.
7. By lemma 8.26 we can find a basis of V such that the matrix of N is upper triangular with zeros on the diagonal. Now the matrix of N* is the conjugate transpose. Since N* = N, the matrix of N must be both upper and lower triangular with zeros on the diagonal, hence zero, so the claim follows.
8. If null N^{dim V − 1} ≠ null N^{dim V}, then dim null N^{i−1} < dim null N^i for all i ≤ dim V by proposition 8.5. By corollary 8.8, dim null N^{dim V} = dim V, so the claim clearly follows.
9. range T^m = range T^{m+1} implies dim null T^m = dim null T^{m+1}, i.e. null T^m = null T^{m+1}. From proposition 8.5 we get dim null T^m = dim null T^{m+k} for all k ≥ 1. Again this implies that dim range T^m = dim range T^{m+k} for all k ≥ 1, so the claim follows.
10. Let T be the operator defined in exercise 1. Clearly null T ∩ range T ≠ {0}, so the claim is false.
11. We have dim V = dim null Tⁿ + dim range Tⁿ, so it's sufficient to prove that null Tⁿ ∩ range Tⁿ = {0}. Let v ∈ null Tⁿ ∩ range Tⁿ. Then we can find a u ∈ V such that Tⁿu = v. From 0 = Tⁿv = T^{2n}u we see that u ∈ null T^{2n} = null Tⁿ, which implies that v = Tⁿu = 0.
12. From theorem 8.23 we have V = null T^{dim V}, so that T^{dim V} = 0. Then let T ∈ L(R³) be the operator T(a, b, c) = (−b, a, 0). Then clearly 0 is the only eigenvalue, but T is not nilpotent.
13. From null T^{n−2} ≠ null T^{n−1} and proposition 8.5 we see that dim null Tⁿ ≥ n − 1. Assume that T has three different eigenvalues 0, λ₁, λ₂. Then dim null(T − λᵢI)ⁿ ≥ 1, so from theorem 8.23

n = dim null Tⁿ + dim null(T − λ₁I)ⁿ + dim null(T − λ₂I)ⁿ ≥ n − 1 + 1 + 1 = n + 1,

which is impossible, so T has at most two different eigenvalues.
14. Let T ∈ L(C⁴) be defined by T(a, b, c, d) = (7a, 7b, 8c, 8d). From the matrix of T it's easy to see that the characteristic polynomial is (z − 7)²(z − 8)².
15. Let d₁ be the multiplicity of the eigenvalue 5 and d₂ the multiplicity of the eigenvalue 6. Then d₁, d₂ ≥ 1 and d₁ + d₂ = n. It follows that d₁, d₂ ≤ n − 1, so that (z − 5)^{d₁}(z − 6)^{d₂} divides (z − 5)^{n−1}(z − 6)^{n−1}. By the Cayley–Hamilton theorem (T − 5I)^{d₁}(T − 6I)^{d₂} = 0, so that (T − 5I)^{n−1}(T − 6I)^{n−1} = 0.
16. If every generalized eigenvector is an eigenvector, then by theorem 8.23 V has a basis of eigenvectors of T. If there's a generalized eigenvector that is not an eigenvector, then we have an eigenvalue λ such that dim null(T − λI)^{dim V} ≠ dim null(T − λI). Thus if λ₁, …, λₘ are the eigenvalues of T, then Σ_{i=1}^{m} dim null(T − λᵢI) < dim V, so there doesn't exist a basis of eigenvectors.
17. By lemma 8.26, choose a basis (v₁, …, vₙ) such that N has an upper triangular matrix. Apply Gram–Schmidt orthogonalization to the basis to get an orthonormal basis (e₁, …, eₙ). Then Ne₁ = 0, and assuming Neᵢ ∈ span(e₁, …, eᵢ) we have

Ne_{i+1} = N( (v_{i+1} − ⟨v_{i+1}, e₁⟩e₁ − ⋯ − ⟨v_{i+1}, eᵢ⟩eᵢ) / ‖v_{i+1} − ⟨v_{i+1}, e₁⟩e₁ − ⋯ − ⟨v_{i+1}, eᵢ⟩eᵢ‖ ) ∈ span(e₁, …, e_{i+1}).

Thus N has an upper triangular matrix in the basis (e₁, …, eₙ).
18. Continuing as in the proof of lemma 8.30 up to j = 4 we see that a₄ = −5/128, so that

√(I + N) = I + (1/2)N − (1/8)N² + (1/16)N³ − (5/128)N⁴.

19. Just replace the Taylor polynomial of √(1 + x) in lemma 8.30 with the Taylor polynomial of ∛(1 + x) and copy the proof of the lemma and theorem 8.32.

20. Let p(x) = Σ_{i=0}^{m} aᵢxⁱ be the minimal polynomial of T. If a₀ = 0, then p(x) = x·Σ_{i=1}^{m} aᵢx^{i−1}. Since T is invertible, we must then have Σ_{i=1}^{m} aᵢT^{i−1} = 0, which contradicts the minimality of p. Hence a₀ ≠ 0, and solving for I in p(T) = 0 we get

−a₀⁻¹(a₁I + a₂T + ⋯ + aₘT^{m−1})T = I.

Hence setting q(x) = −a₀⁻¹(a₁ + a₂x + ⋯ + aₘx^{m−1}) we have q(T) = T⁻¹.
21. The operator defined by the matrix

[ 0 0 1 ]
[ 0 0 0 ]
[ 0 0 0 ]

is clearly an example.
22. Choose the matrix

A =
[ 1 0 0 0 ]
[ 0 1 1 0 ]
[ 0 0 1 0 ]
[ 0 0 0 0 ].

It's easy to see that A(A − I)² = 0. Thus the minimal polynomial divides z(z − 1)². However, no proper factor of z(z − 1)² annihilates A, so z(z − 1)² is the minimal polynomial.
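The divisibility claims in exercise 22 can be machine-checked (numpy, illustration only):

```python
import numpy as np

A = np.array([[1, 0, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]], dtype=float)
I = np.eye(4)

assert np.allclose(A @ (A - I) @ (A - I), 0)  # z(z-1)^2 annihilates A
assert not np.allclose(A @ (A - I), 0)        # the proper factor z(z-1) does not
assert not np.allclose((A - I) @ (A - I), 0)  # nor does (z-1)^2
```

So z(z − 1)² annihilates A while its proper factors do not, confirming it is the minimal polynomial.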
23. Let λ₁, …, λₘ be the eigenvalues of T with multiplicities d₁, …, dₘ. Assume that V has a basis of eigenvectors (v₁, …, vₙ). Let p(z) = Π_{i=1}^{m}(z − λᵢ). Then we get

p(T)vᵢ = (Π_{j≠i}(T − λⱼI))(T − λᵢI)vᵢ = 0,

so that the minimal polynomial divides p and hence has no double root.

Now assume that the minimal polynomial has no double roots. Let the minimal polynomial be p and let (v₁, …, vₙ) be a Jordan basis of T. Let A be the largest Jordan block of T, with eigenvalue λ. Now clearly p(A) = 0. However, by exercise 29 the minimal polynomial of A − λI is z^{m+1}, where m is the length of the longest consecutive string of 1s appearing just above the diagonal, so the minimal polynomial of A is (z − λ)^{m+1}. Hence m = 0, so that A is a 1 × 1 matrix and the basis vector corresponding to A is an eigenvector. Since A was the largest Jordan block, it follows that all basis vectors are eigenvectors (each corresponding to a 1 × 1 block).
24. If V is a complex vector space, then V has a basis of eigenvectors. Let λ₁, …, λₘ be the distinct eigenvalues. Now clearly p(z) = Π_{i=1}^{m}(z − λᵢ) annihilates T, so the minimal polynomial, being a factor of p, doesn't have a double root.

Now assume that V is a real vector space. By theorem 7.25 we can find a basis of V such that T has a block diagonal matrix where each block is a 1 × 1 or 2 × 2 matrix. Let λ₁, …, λₘ be the distinct eigenvalues corresponding to the 1 × 1 blocks. Then p(z) = Π_{i=1}^{m}(z − λᵢ) annihilates all but the 2 × 2 blocks of the matrix. Now it's sufficient to show that each 2 × 2 block is annihilated by a polynomial which doesn't have real roots. By theorem 7.25 we can choose the 2 × 2 blocks to be of the form

[ a −b ]
[ b  a ]

where b > 0, and it's easy to see that

[ a −b ]²   − 2a [ a −b ]   + (a² + b²) [ 1 0 ]   = 0.
[ b  a ]         [ b  a ]               [ 0 1 ]

Clearly the polynomial z² − 2az + (a² + b²) has a negative discriminant, so the claim follows.
25. Let q be the minimal polynomial of T. Write q = sp + r, where deg r < deg p, so that 0 = q(T)v = s(T)p(T)v + r(T)v = r(T)v. Assuming r ≠ 0, we can multiply both sides by the inverse of the highest coefficient of r, yielding a monic polynomial r₂ of degree less than deg p such that r₂(T)v = 0, contradicting the minimality of p. Hence r = 0 and p divides q.
26. It's easy to see that no proper factor of z(z − 1)²(z − 3) annihilates the matrix

A =
[ 3 1 1 1 ]
[ 0 1 1 0 ]
[ 0 0 1 0 ]
[ 0 0 0 0 ],

so its minimal polynomial is z(z − 1)²(z − 3), which by definition is also the characteristic polynomial.
27. It's easy to see that no proper factor of z(z − 1)(z − 3) annihilates the matrix

A =
[ 3 1 1 1 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 0 ],

but A(A − I)(A − 3I) = 0, so the claim follows.
28. We see that T(eᵢ) = e_{i+1} for i < n. Hence (e₁, Te₁, …, T^{n−1}e₁) = (e₁, …, eₙ) is linearly independent. Thus for any non-zero polynomial p of degree less than n we have p(T)e₁ ≠ 0. Hence the minimal polynomial has degree n, so that the minimal polynomial equals the characteristic polynomial.

Now from the matrix of T we see that Tⁿe₁ = Teₙ = −a₀e₁ − a₁Te₁ − ⋯ − a_{n−1}T^{n−1}e₁. Set p(z) = a₀ + a₁z + ⋯ + a_{n−1}z^{n−1} + zⁿ. Now p(T)e₁ = 0, and since no non-zero polynomial of degree less than n annihilates e₁, exercise 25 shows that p divides the minimal polynomial. However, the minimal polynomial is monic of degree n, so p is the minimal polynomial.
29. The biggest Jordan block of N is the (m + 1) × (m + 1) matrix with 1s just above the diagonal and 0s elsewhere. Now clearly N^{m+1} = 0, so the minimal polynomial divides z^{m+1}. Let vᵢ, …, v_{i+m} be the basis elements corresponding to the biggest Jordan block. Then N^m v_{i+m} = vᵢ ≠ 0, so the minimal polynomial is z^{m+1}.
30. Assume that V can't be decomposed into two proper invariant subspaces. Then T has only one eigenvalue λ. If there were more than one Jordan block, we could let (v₁, …, vₘ) be the vectors corresponding to all but the last Jordan block and (v_{m+1}, …, vₙ) the vectors corresponding to the last Jordan block; then clearly V = span(v₁, …, vₘ) ⊕ span(v_{m+1}, …, vₙ). Thus we have only one Jordan block, and T − λI is nilpotent with minimal polynomial z^{dim V} by the previous exercise. Hence T has minimal polynomial (z − λ)^{dim V}.

Now assume that T has minimal polynomial (z − λ)^{dim V}. If V = U ⊕ W with U and W proper invariant subspaces, let p₁ be the minimal polynomial of T|U and p₂ the minimal polynomial of T|W. Now (p₁p₂)(T) = 0, so that (z − λ)^{dim V} divides p₁p₂. Hence deg p₁p₂ = deg p₁ + deg p₂ ≥ dim V, but deg p₁ + deg p₂ ≤ dim U + dim W = dim V. Thus deg p₁p₂ = dim V. This means that p₁(z) = (z − λ)^{dim U} and p₂(z) = (z − λ)^{dim W}. Now if n = max{dim U, dim W} < dim V, we have (T|U − λI)ⁿ = (T|W − λI)ⁿ = 0. This means that (T − λI)ⁿ = 0, contradicting the fact that (z − λ)^{dim V} is the minimal polynomial.
31. Reversing the Jordan basis simply reverses the order of the Jordan blocks, and each block gets replaced by its transpose.
9 Operators on Real Vector Spaces

1. Let a be the value in the upper left corner. Then the matrix must have the form

[ a     1 − a ]
[ 1 − a   a   ]

and it's easy to see that (1, 1) is an eigenvector corresponding to the eigenvalue 1.
2. The characteristic polynomial of the matrix is p(x) = (x − a)(x − d) − bc. Now the discriminant equals (a + d)² − 4(ad − bc) = (a − d)² + 4bc. Hence the characteristic polynomial has a root if and only if (a − d)² + 4bc ≥ 0. If p(x) = (x − λ₁)(x − λ₂), then p(A) = 0 implies that either A − λ₁I or A − λ₂I is not invertible, hence A has an eigenvalue. If p doesn't have a root, then assuming that Av = λv with v ≠ 0 we get

0 = p(A)v = p(λ)v,

so that v = 0 (as p(λ) ≠ 0), a contradiction. Hence A has no eigenvalues. The claim follows.
3. See the proof of the next problem, though in this special case the proof is trivial.
4. First let λ be an eigenvalue of A. Assume that λ is not an eigenvalue of any of A₁, …, Aₘ, so that Aᵢ − λI is invertible for each i. Now let B be the block diagonal matrix

[ (A₁ − λI)⁻¹           0          ]
[            ⋱                     ]
[ 0             (Aₘ − λI)⁻¹        ].

It's now easy to verify that B(A − λI) is an upper triangular matrix with all 1s on the diagonal, hence invertible. However, this is impossible, since A − λI is not. Thus λ is an eigenvalue of some Aᵢ.

Now let λ be an eigenvalue of Aᵢ. Let T be the operator corresponding to A − λI in L(Fⁿ). Let Aᵢ correspond to the columns j, …, j + k and let (e₁, …, eₙ) be the standard basis. Let (a₁, …, a_{k+1}) be an eigenvector of Aᵢ corresponding to λ and set v = a₁e_j + ⋯ + a_{k+1}e_{j+k}. Then it's easy to see that T maps the space span(e₁, …, e_{j−1}, v) into the space span(e₁, …, e_{j−1}). Hence T is not injective, so that λ is an eigenvalue of A.

5. The proof is identical to the argument used in the solution of exercise 2.
6. Let (v₁, …, vₙ) be the basis with respect to which T has the given matrix. Applying Gram–Schmidt to this basis, the matrix of T with respect to the resulting orthonormal basis trivially has the same form.
7. Choose a basis (v₁, …, vₙ) such that T has a matrix of the form in theorem 9.10. Now if vⱼ corresponds to a 1 × 1 matrix on the diagonal, or to the second vector of a pair corresponding to a 2 × 2 matrix, then span(v₁, …, vⱼ) is an invariant subspace. The only other possibility is that vⱼ corresponds to the first vector of such a pair, in which case span(v₁, …, v_{j+1}) is an invariant subspace.
8. Assuming that such an operator existed, we would have a basis (v₁, …, v₇) such that the matrix of T has the eigenpair (1, 1) exactly

dim null(T² + T + I)^{dim V} / 2 = 7/2

times on the diagonal. This contradicts theorem 9.9, as 7/2 is not a whole number. It follows that such an operator doesn't exist.
9. The equation x² + x + 1 = 0 has a solution λ in C. The operator corresponding to the matrix λI is clearly an example.
10. Let T be the operator in L(R^{2k}) corresponding to the block diagonal matrix where each 2 × 2 block has characteristic polynomial x² + αx + β. Then by theorem 9.9 we have

k = dim null(T² + αT + βI)^{2k} / 2,

so that dim null(T² + αT + βI)^{2k} is even. However, the minimal polynomial of T is a power of x² + αx + β of degree at most 2k, so that (T² + αT + βI)^k = 0. It follows that dim null(T² + αT + βI)^{2k} = dim null(T² + αT + βI)^k and the claim follows.
11. We see that (α, β) is an eigenpair, and from the nilpotency of T² + αT + βI and theorem 9.9 the eigenpair occurs

dim null(T² + αT + βI)^{dim V} / 2 = (dim V)/2

times on the diagonal; it follows that dim V is even. For the second part we know from the Cayley–Hamilton theorem that the minimal polynomial p(x) of T has degree at most dim V. Thus we have p(x) = (x² + αx + β)^k, and from deg p ≤ dim V we get k ≤ (dim V)/2. Hence (T² + αT + βI)^{(dim V)/2} = 0.
12. By theorem 9.9 we can choose a basis such that the matrix of T has the form of 9.10. Now we have the matrices [5] and [7] at least once on the diagonal. Assuming that T has an eigenpair, we would have at least one 2 × 2 matrix on the diagonal. This is impossible, as the diagonal has only length 3.
13. By proposition 8.5, dim null T^{n−1} ≥ n − 1. As in the previous exercise, the matrix [0] appears at least n − 1 times on the diagonal, and it's impossible to fit a 2 × 2 matrix in the only place left.
14. We see that A satisfies the polynomial p(z) = (z − a)(z − d) − bc no matter whether F is R or C. If F = C, then because p is monic, has degree 2 and annihilates A, it follows that p is the characteristic polynomial.

Assume then that F = R. If A has no eigenvalues, then p is the characteristic polynomial by definition. Assume then that p(z) = (z − λ₁)(z − λ₂), which implies that p(T) = (T − λ₁I)(T − λ₂I) = 0. If neither T − λ₁I nor T − λ₂I is invertible, then by definition p is the characteristic polynomial. Assume then that T − λ₂I is invertible. Then dim null(T − λ₁I) = 2, so that T − λ₁I = 0. Hence we get c = b = 0 and a = d = λ₁, so p(z) = (z − λ₁)², which is the characteristic polynomial by definition.
15. S is normal, so there's an orthonormal basis of V such that S has a block diagonal matrix with respect to the basis, each block being a 1 × 1 or 2 × 2 matrix with the 2 × 2 blocks having no eigenvalue (theorem 7.25). From S*S = I we see that each block Aᵢ satisfies Aᵢ*Aᵢ = I, so the blocks are isometries. Hence a 2 × 2 block is of the form

[ cos θ  −sin θ ]
[ sin θ   cos θ ].

We see that S² + αS + βI is not injective if and only if Aᵢ² + αAᵢ + βI is not injective for some 2 × 2 matrix Aᵢ. Hence x² + αx + β must equal the characteristic polynomial of some Aᵢ, so by the previous exercise it's of the form

(x − cos θ)² + sin²θ = x² − 2x cos θ + cos²θ + sin²θ,

so that β = cos²θ + sin²θ = 1.
10 Trace and Determinant

1. The map T ↦ M(T, (v₁, …, vₙ)) is bijective and satisfies M(ST, (v₁, …, vₙ)) = M(S, (v₁, …, vₙ))M(T, (v₁, …, vₙ)). Hence

ST = I ⟺ M(S, (v₁, …, vₙ))M(T, (v₁, …, vₙ)) = I.

The claim now follows trivially.
2. Both matrices represent an operator in L(Fⁿ). The claim now follows from exercise 3.23.
3. Choose a basis (v₁, …, vₙ) and let A = (a_{ij}) be the matrix of T corresponding to the basis. Then Tv₁ = a₁₁v₁ + ⋯ + a_{n1}vₙ. Now (v₁, 2v₂, …, 2vₙ) is also a basis, and by our assumption Tv₁ = a₁₁v₁ + 2(a₂₁v₂ + ⋯ + a_{n1}vₙ). We thus get

a₂₁v₂ + ⋯ + a_{n1}vₙ = 2(a₂₁v₂ + ⋯ + a_{n1}vₙ),

which implies a₂₁v₂ + ⋯ + a_{n1}vₙ = 0. By linear independence we get a₂₁ = ⋯ = a_{n1} = 0, so that Tv₁ = a₁₁v₁. The claim clearly follows.
4. Follows directly from the definition of M(T, (u₁, …, uₙ), (v₁, …, vₙ)).
5. Let (e₁, …, eₙ) be the standard basis of Cⁿ. Then we can find an operator T ∈ L(Cⁿ) such that M(T, (e₁, …, eₙ)) = B. Now T has an upper triangular matrix with respect to some basis (v₁, …, vₙ). Let A = M((v₁, …, vₙ), (e₁, …, eₙ)). Then

A⁻¹BA = M(T, (v₁, …, vₙ)),

which is upper triangular. Clearly A is an invertible square matrix.
6. Let T be the operator corresponding to the matrix

[0 −1; 1 0],  with  [0 −1; 1 0]² = [−1 0; 0 −1].

By theorem 10.11 we have trace(T²) = −2 < 0.
7. Let (v₁, …, vₙ) be a basis of eigenvectors of T and λ₁, …, λₙ the corresponding eigenvalues. Then T has the diagonal matrix with λ₁, …, λₙ on the diagonal. Clearly trace(T²) = λ₁² + ⋯ + λₙ² ≥ 0 by theorem 10.11.
8. Extend v/‖v‖ to an orthonormal basis (v/‖v‖, e₁, …, e_{n−1}) of V and let A = M(T, (v/‖v‖, e₁, …, e_{n−1})). Let a, a₁, …, a_{n−1} denote the diagonal elements of A. Then we have

aᵢ = ⟨Teᵢ, eᵢ⟩ = ⟨⟨eᵢ, v⟩w, eᵢ⟩ = ⟨0, eᵢ⟩ = 0,

so that

trace(T) = a = ⟨T(v/‖v‖), v/‖v‖⟩ = ⟨⟨v/‖v‖, v⟩w, v/‖v‖⟩ = ⟨w, v⟩.
9. From exercise 5.21 we have V = null P ⊕ range P. Now let v ∈ range P. Then v = Pu for some u ∈ V. Hence Pv = P²u = Pu = v, so that P is the identity on range P. Now choose a basis (v₁, …, vₘ) of range P and extend it with a basis (u₁, …, u_{n−m}) of null P to get a basis of V. Then clearly the matrix of P in this basis is diagonal, with 1s on the part of the diagonal corresponding to (v₁, …, vₘ) and 0s on the rest. Hence trace P = dim range P ≥ 0.
10. Let (e₁, …, eₙ) be an orthonormal basis of V and let A = M(T, (e₁, …, eₙ)), with diagonal elements a₁, …, aₙ. By proposition 6.47 T* has the matrix A*, whose diagonal elements are ā₁, …, āₙ. By theorem 10.11 we have

trace(T*) = ā₁ + ⋯ + āₙ,

which is the complex conjugate of a₁ + ⋯ + aₙ = trace(T).
11. A positive operator is self-adjoint. By the spectral theorem we can find a basis (v₁, …, vₙ) of eigenvectors of T. Then A = M(T, (v₁, …, vₙ)) is a diagonal matrix and by positivity of T all the diagonal elements a₁, …, aₙ are non-negative. Hence a₁ + ⋯ + aₙ = 0 implies a₁ = ⋯ = aₙ = 0, so that A = 0 and hence T = 0.
12. The trace of T is the sum of the eigenvalues. Hence −48 + 24 + λ = 51 − 40 + 1, so that λ = 36.
13. Choose a basis (v₁, …, vₙ) of V and let A = M(T, (v₁, …, vₙ)). Then the matrix of cT is cA. Let a₁, …, aₙ be the diagonal elements of A. Then we have

trace(cT) = ca₁ + ⋯ + caₙ = c(a₁ + ⋯ + aₙ) = c·trace(T).
14. The example in exercise 6 shows that this is false. For another example take S = T = I.
15. Choose a basis (v₁, …, vₙ) of V and let A = M(T, (v₁, …, vₙ)). It's sufficient to prove that A = 0. Now let a_{i,j} be the element in row i, column j of A. Let B be the matrix with 1 in row j, column i and 0 elsewhere. Then it's easy to see that in BA the only non-zero diagonal element is a_{i,j}, so trace(BA) = a_{i,j}. Let S be the operator corresponding to B. It follows that a_{i,j} = trace(ST) = 0, so that A = 0.
16. Let T*Teᵢ = a₁e₁ + ⋯ + aₙeₙ. Then we have that aᵢ = ⟨T*Teᵢ, eᵢ⟩ = ‖Teᵢ‖² by the orthonormality of (e₁, …, eₙ). Clearly aᵢ is the ith diagonal element of the matrix M(T*T, (e₁, …, eₙ)), so that

trace(T*T) = ‖Te₁‖² + ⋯ + ‖Teₙ‖².

The second assertion follows immediately.
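In matrix language exercise 16 says that trace(A*A) is the sum of the squared column norms, i.e. the squared Frobenius norm. A numpy spot-check on a random complex matrix (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

lhs = np.trace(A.conj().T @ A).real
rhs = sum(np.linalg.norm(A[:, j])**2 for j in range(4))  # sum of ||T e_j||^2

assert np.isclose(lhs, rhs)
assert np.isclose(lhs, np.linalg.norm(A, 'fro')**2)
```

The columns of A are the coordinate vectors of Te₁, …, Teₙ, so the two sides match exactly as in the solution.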
17. Choose an orthonormal basis (v₁, …, vₙ) such that T has an upper triangular matrix B = (b_{ij}) with the eigenvalues λ₁, …, λₙ on the diagonal. Now the ith diagonal element of B*B is Σ_{j=1}^{n} b̄ⱼᵢbⱼᵢ = Σ_{j=1}^{n} |bⱼᵢ|², where |bᵢᵢ|² = |λᵢ|². Hence

Σᵢ |λᵢ|² ≤ Σᵢ Σⱼ |bⱼᵢ|² = trace(B*B) = trace(A*A) = Σₖ Σⱼ |a_{jk}|²,

since B*B and A*A both represent T*T with respect to orthonormal bases.
18. Positivity and definiteness follow from exercise 16, and additivity in the first slot from corollary 10.12. Homogeneity in the first slot is exercise 13, and by exercise 10

⟨S, T⟩ = trace(ST*) = trace((TS*)*),

which is the conjugate of trace(TS*) = ⟨T, S⟩. It follows that the formula defines an inner product.
19. We have 0 ≤ ⟨Tv, Tv⟩ − ⟨T*v, T*v⟩ = ⟨(T*T − TT*)v, v⟩. Hence T*T − TT* is a positive operator (it's clearly self-adjoint), but trace(T*T − TT*) = 0, so all its eigenvalues are 0. Hence T*T − TT* = 0 by the spectral theorem, so T is normal.
20. Let dim V = n and write A = M(T). Then we have

det cT = det cA = Σ_σ (sign σ)(ca_{σ(1),1}) ⋯ (ca_{σ(n),n}) = cⁿ det A = cⁿ det T.
21. Let S = I and T = −I on an even-dimensional space. Then we have 0 = det(S + T) ≠ det S + det T = 1 + 1 = 2.
22. We can use the result of the next exercise, which means that we only need to prove this for the complex case. Let T ∈ L(Cⁿ) be the operator corresponding to the matrix A. Now each block Aⱼ on the diagonal corresponds to some basis vectors (eᵢ, …, e_{i+k}). These can be replaced with another set of vectors, (vᵢ, …, v_{i+k}), spanning the same subspace, such that Aⱼ in this new basis is upper triangular. We have T(span(vᵢ, …, v_{i+k})) ⊆ span(vᵢ, …, v_{i+k}), so after doing this for all blocks we can assume that A is upper triangular. The claim follows immediately.
23. This follows trivially from the formulas for the trace and determinant of a matrix, because the formulas depend only on the elements of the matrix.
24. Let dim V = n and let A = M(T) with respect to an orthonormal basis, so that B = M(T*) equals the conjugate transpose of A, i.e. b_{ij} = ā_{ji}. Then we have

det T* = det B = Σ_σ (sign σ) b_{σ(1),1} ⋯ b_{σ(n),n} = Σ_σ (sign σ) ā_{1,σ(1)} ⋯ ā_{n,σ(n)},

which is the conjugate of

Σ_σ (sign σ) a_{σ(1),1} ⋯ a_{σ(n),n} = det A = det T.

The first equality on the second line follows easily because every term in the upper sum is represented by a term in the lower sum and vice versa. The second claim follows immediately from theorem 10.31.
25. Let Ω = {(x, y, z) ∈ R³ : x² + y² + z² < 1} and let T be the operator T(x, y, z) = (ax, by, cz). It's easy to see that T(Ω) is the ellipsoid. Now Ω is a ball of radius 1. Hence

|det T| · volume(Ω) = abc · (4/3)π = (4/3)π·abc,

which is the volume of the ellipsoid.