
Homework 6 Solutions

Joshua Hernandez
November 11, 2009
2.5 - The Change of Coordinate Matrix
2. For each of the following pairs of ordered bases $\beta$ and $\beta'$ for $\mathbb{R}^2$, find the change of coordinate matrix that changes $\beta'$-coordinates into $\beta$-coordinates.

b. $\beta = \{(-1, 3), (2, -1)\}$ and $\beta' = \{(0, 10), (5, 0)\}$.


Solution: We want to find $Q = [I_{\mathbb{R}^2}]_{\beta'}^{\beta}$. The usual procedure:
$$I_{\mathbb{R}^2}(0, 10) = (0, 10) = a(-1, 3) + b(2, -1), \qquad I_{\mathbb{R}^2}(5, 0) = (5, 0) = c(-1, 3) + d(2, -1).$$
We can write these two systems as matrix equations
$$\begin{pmatrix} -1 & 2 \\ 3 & -1 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 10 \end{pmatrix}, \qquad \begin{pmatrix} -1 & 2 \\ 3 & -1 \end{pmatrix}\begin{pmatrix} c \\ d \end{pmatrix} = \begin{pmatrix} 5 \\ 0 \end{pmatrix},$$
or together as
$$\begin{pmatrix} -1 & 2 \\ 3 & -1 \end{pmatrix}\begin{pmatrix} a & c \\ b & d \end{pmatrix} = \begin{pmatrix} 0 & 5 \\ 10 & 0 \end{pmatrix}.$$
The matrix $\begin{pmatrix} a & c \\ b & d \end{pmatrix}$ will be our change-of-coordinate matrix. We solve by taking inverses:
$$Q = \begin{pmatrix} a & c \\ b & d \end{pmatrix} = \begin{pmatrix} -1 & 2 \\ 3 & -1 \end{pmatrix}^{-1}\begin{pmatrix} 0 & 5 \\ 10 & 0 \end{pmatrix} = \frac{1}{-5}\begin{pmatrix} -1 & -2 \\ -3 & -1 \end{pmatrix}\begin{pmatrix} 0 & 5 \\ 10 & 0 \end{pmatrix} = \frac{1}{-5}\begin{pmatrix} -20 & -5 \\ -10 & -15 \end{pmatrix} = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}.$$
Note: In general, let $V$ be some finite-dimensional vector space, let $\sigma$ be the standard basis of $V$, and let $\beta$ and $\beta'$ be two other bases of $V$. Then
$$[I_V]_{\beta'}^{\beta} = [I_V]_{\sigma}^{\beta}\,[I_V]_{\beta'}^{\sigma} = \bigl([I_V]_{\beta}^{\sigma}\bigr)^{-1}[I_V]_{\beta'}^{\sigma}.$$
The matrices $[I_V]_{\beta}^{\sigma}$ and $[I_V]_{\beta'}^{\sigma}$ are easily computed; one can simply read off the coefficients from the basis vectors.
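As a quick numerical check of part (b), not part of the original solution, this recipe can be run through NumPy (the variable names here are my own):

```python
import numpy as np

# Columns of B and Bp hold the beta and beta' vectors, respectively.
B  = np.array([[-1, 2],
               [ 3, -1]])    # beta  = {(-1,3), (2,-1)}
Bp = np.array([[ 0, 5],
               [10, 0]])     # beta' = {(0,10), (5,0)}

# Q = ([I]_beta^sigma)^{-1} [I]_{beta'}^sigma changes beta'-coordinates into beta-coordinates.
Q = np.linalg.inv(B) @ Bp
print(Q)  # [[4. 1.]
          #  [2. 3.]]
```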
d. $\beta = \{(-4, 3), (2, -1)\}$ and $\beta' = \{(2, 1), (-4, 1)\}$.

Solution: Proceeding as above,
$$Q = \begin{pmatrix} -4 & 2 \\ 3 & -1 \end{pmatrix}^{-1}\begin{pmatrix} 2 & -4 \\ 1 & 1 \end{pmatrix} = \frac{1}{-2}\begin{pmatrix} -1 & -2 \\ -3 & -4 \end{pmatrix}\begin{pmatrix} 2 & -4 \\ 1 & 1 \end{pmatrix} = \frac{1}{-2}\begin{pmatrix} -4 & 2 \\ -10 & 8 \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ 5 & -4 \end{pmatrix}.$$
3. For each of the following pairs of ordered bases $\beta$ and $\beta'$ for $P_2(\mathbb{R})$, find the change of coordinate matrix that changes $\beta'$-coordinates into $\beta$-coordinates.

b. $\beta = \{1, x, x^2\}$, and $\beta' = \{a_2x^2 + a_1x + a_0,\; b_2x^2 + b_1x + b_0,\; c_2x^2 + c_1x + c_0\}$
Solution: We can easily write vectors in $\beta'$ as linear combinations in $\beta$:
$$a_2x^2 + a_1x + a_0 = a_0 \cdot 1 + a_1 \cdot x + a_2 \cdot x^2$$
$$b_2x^2 + b_1x + b_0 = b_0 \cdot 1 + b_1 \cdot x + b_2 \cdot x^2$$
$$c_2x^2 + c_1x + c_0 = c_0 \cdot 1 + c_1 \cdot x + c_2 \cdot x^2$$
Thus
$$Q = \begin{pmatrix} a_0 & b_0 & c_0 \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{pmatrix}.$$
d. $\beta = \{x^2 - x + 1,\; x + 1,\; x^2 + 1\}$ and $\beta' = \{x^2 + x + 4,\; 4x^2 - 3x + 2,\; 2x^2 + 3\}$

Solution: Let $\sigma = \{1, x, x^2\}$, the standard ordered basis of $P_2(\mathbb{R})$. We compute $Q = [I_{P_2(\mathbb{R})}]_{\beta'}^{\beta}$ using the identity $Q = \bigl([I_{P_2(\mathbb{R})}]_{\beta}^{\sigma}\bigr)^{-1}[I_{P_2(\mathbb{R})}]_{\beta'}^{\sigma}$:
$$Q = \begin{pmatrix} 1 & 1 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 4 & 2 & 3 \\ 1 & -3 & 0 \\ 1 & 4 & 2 \end{pmatrix} = \begin{pmatrix} 1 & -1 & -1 \\ 1 & 0 & -1 \\ -1 & 1 & 2 \end{pmatrix}\begin{pmatrix} 4 & 2 & 3 \\ 1 & -3 & 0 \\ 1 & 4 & 2 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 1 \\ 3 & -2 & 1 \\ -1 & 3 & 1 \end{pmatrix}.$$
(I found the inverse matrix using Cramer's rule, but you could also find it by solving the systems
$$a_i(1, -1, 1) + b_i(1, 1, 0) + c_i(1, 0, 1) = e_i$$
(where $e_i$ is the $i$th standard basis vector of $\mathbb{R}^3$) and writing those coefficients down the $i$th column.)
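A numerical sanity check of this $Q$ (my own sketch, not part of the original solution; encoding each polynomial by its coefficient vector in $\sigma$ is the assumption here):

```python
import numpy as np

# Each column encodes a polynomial as (constant, x, x^2) coefficients in sigma.
B  = np.array([[1, 1, 1], [-1, 1, 0], [1, 0, 1]])   # beta  = {x^2 - x + 1, x + 1, x^2 + 1}
Bp = np.array([[4, 2, 3], [1, -3, 0], [1, 4, 2]])   # beta' = {x^2 + x + 4, 4x^2 - 3x + 2, 2x^2 + 3}

Q = np.linalg.inv(B) @ Bp
print(np.round(Q))  # [[ 2.  1.  1.]
                    #  [ 3. -2.  1.]
                    #  [-1.  3.  1.]]
```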
6. For each matrix $A$ and ordered basis $\beta$, find $[L_A]_\beta$. Also, find an invertible matrix $Q$ such that $[L_A]_\beta = Q^{-1}AQ$.
Solution: Let $\sigma$ be the standard ordered basis of the relevant $\mathbb{R}^n$ (i.e. the basis in which $[L_A]_\sigma = A$). We can write
$$[L_A]_\beta = [I_{\mathbb{R}^n} L_A I_{\mathbb{R}^n}]_\beta = [I_{\mathbb{R}^n}]_\sigma^\beta\,[L_A]_\sigma\,[I_{\mathbb{R}^n}]_\beta^\sigma = \bigl([I_{\mathbb{R}^n}]_\beta^\sigma\bigr)^{-1} A\,[I_{\mathbb{R}^n}]_\beta^\sigma.$$
Letting $Q = [I_{\mathbb{R}^n}]_\beta^\sigma$ (this is just the matrix whose column vectors are the corresponding elements of $\beta$), we have our matrix such that $[L_A]_\beta = Q^{-1}AQ$.
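A small helper makes this recipe concrete. This is my own sketch, not part of the original solution, and `matrix_in_basis` is a hypothetical name rather than a library function:

```python
import numpy as np

def matrix_in_basis(A, basis_vectors):
    """Return [L_A]_beta = Q^{-1} A Q, where Q's columns are the beta vectors."""
    Q = np.column_stack(basis_vectors)
    return np.linalg.inv(Q) @ A @ Q

# Part (b) below as a usage example:
A = np.array([[1, 2], [2, 1]])
print(np.round(matrix_in_basis(A, [[1, 1], [1, -1]])))
# [[ 3.  0.]
#  [ 0. -1.]]
```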
b. $A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$ and $\beta = \left\{\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix}\right\}$

Solution: Here $Q = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ and
$$[L_A]_\beta = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}^{-1}\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \frac{1}{-2}\begin{pmatrix} -1 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 3 & -1 \\ 3 & 1 \end{pmatrix} = \frac{1}{-2}\begin{pmatrix} -6 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}.$$
d. $A = \begin{pmatrix} 13 & 1 & 4 \\ 1 & 13 & 4 \\ 4 & 4 & 10 \end{pmatrix}$ and $\beta = \left\{\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}\right\}$

Solution: Now, to avoid another instance of Cramer's rule, I'll try to compute $[L_A]_\beta$ directly; that is, send each of the basis vectors through the matrix, then try to express the result as a linear combination in $\beta$. If the previous problem is any indication, this should be easier.
$$\begin{pmatrix} 13 & 1 & 4 \\ 1 & 13 & 4 \\ 4 & 4 & 10 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} = \begin{pmatrix} 6 \\ 6 \\ -12 \end{pmatrix} = 6\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} + 0\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} + 0\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},$$
$$\begin{pmatrix} 13 & 1 & 4 \\ 1 & 13 & 4 \\ 4 & 4 & 10 \end{pmatrix}\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} = \begin{pmatrix} 12 \\ -12 \\ 0 \end{pmatrix} = 0\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} + 12\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} + 0\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},$$
$$\begin{pmatrix} 13 & 1 & 4 \\ 1 & 13 & 4 \\ 4 & 4 & 10 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 18 \\ 18 \\ 18 \end{pmatrix} = 0\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} + 0\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix} + 18\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.$$
So $[L_A]_\beta = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 12 & 0 \\ 0 & 0 & 18 \end{pmatrix}$. As before, $Q = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ -2 & 0 & 1 \end{pmatrix}$.
I hasten to add that $[L_A]_\beta$ is not usually a diagonal matrix. In this case, the basis $\beta$ was carefully picked by the textbook authors.
10. Prove that if $A$ and $B$ are similar $n \times n$ matrices, then $\mathrm{trace}(A) = \mathrm{trace}(B)$.

Lemma: If $A$ and $B$ are $n \times n$ matrices, then $\mathrm{trace}(AB) = \mathrm{trace}(BA)$:
$$\mathrm{trace}(AB) = \sum_{i=1}^n (AB)_{ii} = \sum_{i=1}^n \sum_{k=1}^n A_{ik}B_{ki} = \sum_{k=1}^n \sum_{i=1}^n B_{ki}A_{ik} = \sum_{k=1}^n (BA)_{kk} = \mathrm{trace}(BA).$$
Solution: Suppose that $B = Q^{-1}AQ$, for some (invertible $n \times n$) matrix $Q$. Then
$$\mathrm{trace}(B) = \mathrm{trace}(Q^{-1}(AQ)) = \mathrm{trace}((AQ)Q^{-1}) = \mathrm{trace}(A(QQ^{-1})) = \mathrm{trace}(A).$$
Here $Q^{-1}$ takes the place of $A$, and $AQ$ takes the place of $B$ in the lemma above.
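A quick numerical illustration of the lemma and its consequence (my own addition; random matrices stand in for $A$, $B$, and $Q$):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
Q = rng.standard_normal((4, 4))  # generically invertible

# Lemma: trace(AB) = trace(BA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# Consequence: similar matrices have equal trace.
assert np.isclose(np.trace(np.linalg.inv(Q) @ A @ Q), np.trace(A))
```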
13. Let $V$ be a finite-dimensional vector space over a field $F$, and let $\beta = \{x_1, \ldots, x_n\}$ be an ordered basis for $V$. Let $Q$ be an $n \times n$ invertible matrix with entries from $F$. Define
$$x'_j = \sum_{i=1}^n Q_{ij}\,x_i, \quad \text{for } 1 \le j \le n, \tag{1}$$
and set $\beta' = \{x'_1, \ldots, x'_n\}$. Prove that $\beta'$ is a basis for $V$ and hence that $Q$ is the change of coordinate matrix changing $\beta'$-coordinates into $\beta$-coordinates.
Solution: Applying the coordinate map $\phi_\beta$ to both sides of (1),
$$[x'_j]_\beta = \sum_{i=1}^n Q_{ij}\,[x_i]_\beta, \quad \text{for } 1 \le j \le n.$$
Equating the $k$th entries of these vectors,
$$\bigl([x'_j]_\beta\bigr)_k = \sum_{i=1}^n Q_{ij}\,\bigl([x_i]_\beta\bigr)_k, \quad \text{for } 1 \le j \le n.$$
Now, since we're taking our vectors $x_i$ from the ordered basis $\beta$, the coordinate vectors $[x_i]_\beta$ should be just the standard basis vectors $e_i$. So
$$\bigl([x_i]_\beta\bigr)_k = (e_i)_k = \delta_{ik},$$
and our sum becomes
$$\bigl([x'_j]_\beta\bigr)_k = \sum_{i=1}^n Q_{ij}\,\delta_{ik} = Q_{kj}.$$
In matrix form, we have
$$\bigl(\, [x'_1]_\beta \;\cdots\; [x'_n]_\beta \,\bigr) = Q. \tag{2}$$
The RHS of (2) is an invertible matrix, so its column vectors $C_j$ must be linearly independent. So the $[x'_j]_\beta$, and consequently the $x'_j = \phi_\beta^{-1}\bigl([x'_j]_\beta\bigr)$, must be linearly independent. Thus $\beta'$ is a basis.

Equation (2) also shows that $Q = [I_V]_{\beta'}^{\beta}$, so it is the change-of-coordinate matrix from $\beta'$-coordinates to $\beta$-coordinates.
4.4 - Summary: Important Facts about Determinants
4h. Evaluate the determinant by any legitimate method.

Solution: Working down the first column,
$$\det\begin{pmatrix} 1 & -2 & 3 & -12 \\ -5 & 12 & -14 & 19 \\ -8 & 22 & -20 & 31 \\ -4 & 8 & -14 & 15 \end{pmatrix} = \det\begin{pmatrix} 12 & -14 & 19 \\ 22 & -20 & 31 \\ 8 & -14 & 15 \end{pmatrix} - (-5)\det\begin{pmatrix} -2 & 3 & -12 \\ 22 & -20 & 31 \\ 8 & -14 & 15 \end{pmatrix} + (-8)\det\begin{pmatrix} -2 & 3 & -12 \\ 12 & -14 & 19 \\ 8 & -14 & 15 \end{pmatrix} - (-4)\det\begin{pmatrix} -2 & 3 & -12 \\ 12 & -14 & 19 \\ 22 & -20 & 31 \end{pmatrix}$$
$$= \left[\, 12\det\begin{pmatrix} -20 & 31 \\ -14 & 15 \end{pmatrix} - 22\det\begin{pmatrix} -14 & 19 \\ -14 & 15 \end{pmatrix} + 8\det\begin{pmatrix} -14 & 19 \\ -20 & 31 \end{pmatrix} \right] - (-5)\left[ -2\det\begin{pmatrix} -20 & 31 \\ -14 & 15 \end{pmatrix} - 22\det\begin{pmatrix} 3 & -12 \\ -14 & 15 \end{pmatrix} + 8\det\begin{pmatrix} 3 & -12 \\ -20 & 31 \end{pmatrix} \right]$$
$$\quad + (-8)\left[ -2\det\begin{pmatrix} -14 & 19 \\ -14 & 15 \end{pmatrix} - 12\det\begin{pmatrix} 3 & -12 \\ -14 & 15 \end{pmatrix} + 8\det\begin{pmatrix} 3 & -12 \\ -14 & 19 \end{pmatrix} \right] - (-4)\left[ -2\det\begin{pmatrix} -14 & 19 \\ -20 & 31 \end{pmatrix} - 12\det\begin{pmatrix} 3 & -12 \\ -20 & 31 \end{pmatrix} + 22\det\begin{pmatrix} 3 & -12 \\ -14 & 19 \end{pmatrix} \right]$$
$$= [12(134) - 22(56) + 8(-54)] - (-5)[-2(134) - 22(-123) + 8(-147)] + (-8)[-2(56) - 12(-123) + 8(-111)] - (-4)[-2(-54) - 12(-147) + 22(-111)]$$
$$= 166.$$
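The same expansion can be scripted. The following is my own sketch (not part of the original solution) of Laplace expansion down the first column, checked against NumPy's determinant:

```python
import numpy as np

def det_by_first_column(M):
    """Laplace expansion down the first column, mirroring the hand computation."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** i * row[0] * det_by_first_column(
                   [r[1:] for k, r in enumerate(M) if k != i])
               for i, row in enumerate(M))

M = [[1, -2, 3, -12], [-5, 12, -14, 19], [-8, 22, -20, 31], [-4, 8, -14, 15]]
print(det_by_first_column(M))             # 166
print(round(np.linalg.det(np.array(M))))  # 166
```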
5. Suppose $M \in M_{n\times n}(F)$ can be written in the form
$$M = \begin{pmatrix} A & B \\ 0 & I \end{pmatrix},$$
where $A$ is a square matrix. Prove that $\det(M) = \det(A)$.
Solution: This is a special case of problem 6, with C = I. By property 4, det(I) = 1, so
det(M) = det(A) det(I) = det(A).
6. Prove that if $M \in M_{n\times n}(F)$ can be written in the form
$$M = \begin{pmatrix} A & B \\ 0 & C \end{pmatrix},$$
where $A$ and $C$ are square matrices, then $\det(M) = \det(A)\det(C)$.
Solution: Assume $A \in M_{k\times k}(F)$, and let $m = n - k$. We proceed by induction on $k$.

For the case $k = 1$, we have the picture
$$M = \begin{pmatrix} A & B \\ 0 & C \end{pmatrix} = \begin{pmatrix} a & b_1 & \cdots & b_m \\ 0 & & & \\ \vdots & & C & \\ 0 & & & \end{pmatrix}.$$
Expanding the determinant down the first column,
$$\det(M) = a\det(\tilde{M}_{11}) + 0 + \cdots + 0 = a\det(C) = \det(A)\det(C).$$
Now, suppose we've proven this fact for matrices of dimensions up to $k - 1$. We have the picture
$$M = \begin{pmatrix} A & B \\ 0 & C \end{pmatrix} = \left(\begin{array}{ccc|ccc} a_{11} & \cdots & a_{1k} & b_{11} & \cdots & b_{1m} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{k1} & \cdots & a_{kk} & b_{k1} & \cdots & b_{km} \\ \hline 0 & \cdots & 0 & & & \\ \vdots & \ddots & \vdots & & C & \\ 0 & \cdots & 0 & & & \end{array}\right).$$
Expanding down the first column:
$$\det(M) = a_{11}\det(\tilde{M}_{11}) - \cdots + (-1)^{k+1}a_{k1}\det(\tilde{M}_{k1}) + 0 + \cdots + 0$$
$$= a_{11}\det\begin{pmatrix} \tilde{A}_{11} & \tilde{B}_1 \\ 0 & C \end{pmatrix} - \cdots + (-1)^{k+1}a_{k1}\det\begin{pmatrix} \tilde{A}_{k1} & \tilde{B}_k \\ 0 & C \end{pmatrix},$$
where $\tilde{B}_j$ is the $(k-1) \times m$ matrix formed by removing the $j$th row of $B$. Note that $\tilde{A}_{j1}$ is a $(k-1) \times (k-1)$ matrix, so we can apply our inductive assumption:
$$= a_{11}\det(\tilde{A}_{11})\det(C) - \cdots + (-1)^{k+1}a_{k1}\det(\tilde{A}_{k1})\det(C)$$
$$= \bigl[\, a_{11}\det(\tilde{A}_{11}) - \cdots + (-1)^{k+1}a_{k1}\det(\tilde{A}_{k1}) \,\bigr]\det(C)$$
$$= \det(A)\det(C).$$
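A numerical spot-check of the block-triangular formula (my own addition; the block sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 2))
M = np.block([[A, B], [np.zeros((2, 3)), C]])

assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(C))
```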
5.1 - Eigenvalues and Eigenvectors

2d. For $V = P_2(\mathbb{R})$, let $T : V \to V$ be the operator given by
$$T(a + bx + cx^2) = (-4a + 2b - 2c) - (7a + 3b + 7c)x + (7a + b + 5c)x^2.$$
Let $\beta = \{x - x^2,\; -1 + x^2,\; -1 - x + x^2\}$. Find $[T]_\beta$, and determine whether $\beta$ consists of eigenvectors of $T$.
Solution:
$$T(x - x^2) = \bigl(-4(0) + 2(1) - 2(-1)\bigr) - \bigl(7(0) + 3(1) + 7(-1)\bigr)x + \bigl(7(0) + 1(1) + 5(-1)\bigr)x^2$$
$$= 4 + 4x - 4x^2 = 0(x - x^2) + 0(-1 + x^2) + (-4)(-1 - x + x^2).$$
$$T(-1 + x^2) = \bigl(-4(-1) + 2(0) - 2(1)\bigr) - \bigl(7(-1) + 3(0) + 7(1)\bigr)x + \bigl(7(-1) + 1(0) + 5(1)\bigr)x^2$$
$$= 2 - 2x^2 = 0(x - x^2) + (-2)(-1 + x^2) + 0(-1 - x + x^2).$$
$$T(-1 - x + x^2) = \bigl(-4(-1) + 2(-1) - 2(1)\bigr) - \bigl(7(-1) + 3(-1) + 7(1)\bigr)x + \bigl(7(-1) + 1(-1) + 5(1)\bigr)x^2$$
$$= 3x - 3x^2 = 3(x - x^2) + 0(-1 + x^2) + 0(-1 - x + x^2).$$
So this transformation has matrix
$$[T]_\beta = \begin{pmatrix} 0 & 0 & 3 \\ 0 & -2 & 0 \\ -4 & 0 & 0 \end{pmatrix}.$$
This matrix is not diagonal (in the usual sense), and so $\beta$ is not a basis of eigenvectors.
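As a cross-check (my own addition, not part of the original solution), one can represent $T$ in the standard basis $\sigma = \{1, x, x^2\}$ and conjugate by the matrix $Q$ whose columns are the $\beta$ vectors; the $\sigma$-matrix below is read off from the formula for $T$:

```python
import numpy as np

# [T]_sigma: columns are the sigma-coordinates of T(1), T(x), T(x^2).
T_sigma = np.array([[-4,  2, -2],
                    [-7, -3, -7],
                    [ 7,  1,  5]])
# Columns of Q: x - x^2, -1 + x^2, -1 - x + x^2 in sigma-coordinates.
Q = np.array([[ 0, -1, -1],
              [ 1,  0, -1],
              [-1,  1,  1]])

print(np.round(np.linalg.inv(Q) @ T_sigma @ Q))
# [[ 0.  0.  3.]
#  [ 0. -2.  0.]
#  [-4.  0.  0.]]
```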
3. For each of the following matrices $A \in M_{n\times n}(F)$,
(i) Determine all the eigenvalues of $A$.
(ii) For each eigenvalue $\lambda$, find a set of corresponding eigenvectors.
(iii) If possible, find a basis of $F^n$ consisting of eigenvectors of $A$.
(iv) If successful in finding such a basis, determine an invertible matrix $Q$ and a diagonal matrix $D$ such that $Q^{-1}AQ = D$.

b. $A = \begin{pmatrix} 0 & -2 & -3 \\ -1 & 1 & -1 \\ 2 & 2 & 5 \end{pmatrix}$ for $F = \mathbb{R}$.
Solution:
$$\chi_A(\lambda) = \det\begin{pmatrix} 0-\lambda & -2 & -3 \\ -1 & 1-\lambda & -1 \\ 2 & 2 & 5-\lambda \end{pmatrix}$$
$$= (0-\lambda)(1-\lambda)(5-\lambda) + (-2)(-1)2 + (-3)(-1)2 - (0-\lambda)(-1)2 - (-3)(1-\lambda)2 - (-2)(-1)(5-\lambda)$$
$$= -\lambda^3 + (0 + 1 + 5)\lambda^2 + \bigl(-5 - 2 - 6 - (-2)\bigr)\lambda + \bigl(0 + 4 + 6 - 0 - (-6) - 10\bigr)$$
$$= -\lambda^3 + 6\lambda^2 - 11\lambda + 6.$$
Guessing correctly that 1 is a root of this polynomial, I can reduce it by long division:
$$-\lambda^3 + 6\lambda^2 - 11\lambda + 6 = (\lambda - 1)(-\lambda^2) + 5\lambda^2 - 11\lambda + 6$$
$$= (\lambda - 1)(-\lambda^2 + 5\lambda) - 6\lambda + 6$$
$$= (\lambda - 1)(-\lambda^2 + 5\lambda - 6)$$
$$= -1(\lambda - 1)(\lambda - 2)(\lambda - 3).$$
So my solutions are $\lambda = 1, 2, 3$. The corresponding eigenspaces must all have dimension 1. Computing eigenspaces:
$$E_1 = N\begin{pmatrix} 0-1 & -2 & -3 \\ -1 & 1-1 & -1 \\ 2 & 2 & 5-1 \end{pmatrix} = N\begin{pmatrix} -1 & -2 & -3 \\ -1 & 0 & -1 \\ 2 & 2 & 4 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}\right\},$$
$$E_2 = N\begin{pmatrix} 0-2 & -2 & -3 \\ -1 & 1-2 & -1 \\ 2 & 2 & 5-2 \end{pmatrix} = N\begin{pmatrix} -2 & -2 & -3 \\ -1 & -1 & -1 \\ 2 & 2 & 3 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}\right\},$$
$$E_3 = N\begin{pmatrix} 0-3 & -2 & -3 \\ -1 & 1-3 & -1 \\ 2 & 2 & 5-3 \end{pmatrix} = N\begin{pmatrix} -3 & -2 & -3 \\ -1 & -2 & -1 \\ 2 & 2 & 2 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}\right\}.$$
Each of these vectors was computed by observing the (obvious) linear relations between the columns of the corresponding matrices. For instance, the first and third columns of the last matrix are identical. The diagonalizing basis and matrices are
$$\beta = \left\{\begin{pmatrix} 1 \\ 1 \\ -1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}\right\}, \quad Q = \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 0 \\ -1 & 0 & -1 \end{pmatrix}, \quad\text{and}\quad D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}.$$
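A one-line numerical confirmation (my own addition) that this $Q$ diagonalizes $A$:

```python
import numpy as np

A = np.array([[0, -2, -3], [-1, 1, -1], [2, 2, 5]])
Q = np.array([[1, 1, 1], [1, -1, 0], [-1, 0, -1]])

print(np.round(np.linalg.inv(Q) @ A @ Q))  # diag(1, 2, 3)
```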
d. $A = \begin{pmatrix} 2 & 0 & -1 \\ 4 & 1 & -4 \\ 2 & 0 & -1 \end{pmatrix}$ for $F = \mathbb{R}$.

Solution:
$$\chi_A(\lambda) = \det\begin{pmatrix} 2-\lambda & 0 & -1 \\ 4 & 1-\lambda & -4 \\ 2 & 0 & -1-\lambda \end{pmatrix} = (2-\lambda)(1-\lambda)(-1-\lambda) - (-1)(1-\lambda)2$$
$$= (1-\lambda)\bigl(\lambda^2 - (2 + (-1))\lambda + (2(-1) - (-1)2)\bigr) = (1-\lambda)(\lambda^2 - \lambda) = \lambda(1-\lambda)(\lambda - 1).$$
My solutions are $\lambda = 0, 1$. Then
$$E_0 = N\begin{pmatrix} 2 & 0 & -1 \\ 4 & 1 & -4 \\ 2 & 0 & -1 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 4 \\ 2 \end{pmatrix}\right\},$$
$$E_1 = N\begin{pmatrix} 2-1 & 0 & -1 \\ 4 & 1-1 & -4 \\ 2 & 0 & -1-1 \end{pmatrix} = N\begin{pmatrix} 1 & 0 & -1 \\ 4 & 0 & -4 \\ 2 & 0 & -2 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}\right\}.$$
The diagonalizing basis and matrices are
$$\beta = \left\{\begin{pmatrix} 1 \\ 4 \\ 2 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}\right\}, \quad Q = \begin{pmatrix} 1 & 0 & 1 \\ 4 & 1 & 0 \\ 2 & 0 & 1 \end{pmatrix}, \quad\text{and}\quad D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
where the columns of $Q$ are the vectors of $\beta$.
4g. Find the eigenvalues of $T$ and an ordered basis $\beta$ for $V$ such that $[T]_\beta$ is a diagonal matrix, where $V = P_3(\mathbb{R})$ and $T : V \to V$ is given by
$$T(f(x)) = xf'(x) + f''(x) - f(2).$$
Solution: Observe that
$$T(a + bx + cx^2 + dx^3) = x(b + 2cx + 3dx^2) + (2c + 6dx) - (a + 2b + 4c + 8d)$$
$$= (-a - 2b - 2c - 8d) + (0a + b + 0c + 6d)x + (0a + 0b + 2c + 0d)x^2 + (0a + 0b + 0c + 3d)x^3.$$
Let $\sigma = \{1, x, x^2, x^3\}$, the standard ordered basis for $P_3(\mathbb{R})$. Then
$$[T]_\sigma = \begin{pmatrix} -1 & -2 & -2 & -8 \\ 0 & 1 & 0 & 6 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$
Problem 9 will show that $-1, 1, 2,$ and $3$ (the diagonal entries of this triangular matrix) are the eigenvalues of $T$. Then
$$E_{-1} = N\begin{pmatrix} 0 & -2 & -2 & -8 \\ 0 & 2 & 0 & 6 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}\right\}, \quad E_1 = N\begin{pmatrix} -2 & -2 & -2 & -8 \\ 0 & 0 & 0 & 6 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix}\right\},$$
$$E_2 = N\begin{pmatrix} -3 & -2 & -2 & -8 \\ 0 & -1 & 0 & 6 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} 2 \\ 0 \\ -3 \\ 0 \end{pmatrix}\right\}, \quad E_3 = N\begin{pmatrix} -4 & -2 & -2 & -8 \\ 0 & -2 & 0 & 6 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} = \mathrm{span}\left\{\begin{pmatrix} -7 \\ 6 \\ 0 \\ 2 \end{pmatrix}\right\}.$$
The diagonalizing basis is therefore
$$\beta = \{1,\; 1 - x,\; 2 - 3x^2,\; -7 + 6x + 2x^3\}.$$
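A numerical check (my own addition) that these four coordinate vectors diagonalize $[T]_\sigma$:

```python
import numpy as np

T = np.array([[-1, -2, -2, -8],
              [ 0,  1,  0,  6],
              [ 0,  0,  2,  0],
              [ 0,  0,  0,  3]])
# Columns: coordinates of 1, 1 - x, 2 - 3x^2, -7 + 6x + 2x^3 in {1, x, x^2, x^3}.
Q = np.array([[1,  1,  2, -7],
              [0, -1,  0,  6],
              [0,  0, -3,  0],
              [0,  0,  0,  2]])

print(np.round(np.linalg.inv(Q) @ T @ Q))  # diag(-1, 1, 2, 3)
```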
9. Prove that the eigenvalues of an upper-triangular matrix $M$ are the diagonal entries of $M$.

Solution: Assume $M \in M_{n\times n}(F)$. Then
$$\chi_M(\lambda) = \det\begin{pmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ 0 & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn}-\lambda \end{pmatrix} = (a_{11}-\lambda)(a_{22}-\lambda)\cdots(a_{nn}-\lambda),$$
since the determinant of an upper-triangular matrix is the product of its diagonal entries (determinant property 4). The roots of this polynomial, i.e. the eigenvalues of $M$, are $\lambda = a_{11}, a_{22}, \ldots, a_{nn}$.
11. A scalar matrix is a square matrix of the form $\lambda I$.

a. Prove that if a square matrix $A$ is similar to a scalar matrix $\lambda I$, then $A = \lambda I$.

Solution: Suppose $Q^{-1}AQ = \lambda I$, for some invertible matrix $Q$. Then
$$A = Q(\lambda I)Q^{-1} = \lambda(QIQ^{-1}) = \lambda QQ^{-1} = \lambda I.$$

b. Show that a diagonalizable matrix having only one eigenvalue is a scalar matrix.
Solution: If $A$ is diagonalizable and has only one eigenvalue $\lambda$, then there exists some invertible matrix $Q$ such that
$$Q^{-1}AQ = \begin{pmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \end{pmatrix} = \lambda I.$$
By (a) above, $A$ must be a scalar matrix.
c. Prove that $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is not diagonalizable.

Solution: Observe that $\chi_A(\lambda) = (1-\lambda)(1-\lambda)$, so $A$ has only one eigenvalue. However, $A$ is not a scalar matrix, so by (b) above, $A$ must not be diagonalizable.
14. For any square matrix $A$, prove that $A$ and $A^t$ have the same characteristic polynomial, and hence the same eigenvalues.

Solution: Determinant property 8 states that if $A$ is a square matrix, then $\det(A) = \det(A^t)$. Observe, then,
$$\chi_A(\lambda) = \det(A - \lambda I) = \det\bigl((A - \lambda I)^t\bigr) = \det(A^t - \lambda I^t) = \det(A^t - \lambda I) = \chi_{A^t}(\lambda).$$
The roots of the LHS and RHS are the eigenvalues of $A$ and $A^t$, so these eigenvalues must be the same.
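A quick numerical illustration (my own addition; NumPy's `np.poly` returns the characteristic-polynomial coefficients of a square matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

# A and A^t share a characteristic polynomial, hence the same eigenvalues.
assert np.allclose(np.poly(A), np.poly(A.T))
```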
15. a. Let $T$ be a linear operator on the vector space $V$, and let $x$ be an eigenvector of $T$ corresponding to the eigenvalue $\lambda$. For any positive integer $m$, prove that $x$ is an eigenvector of $T^m$ corresponding to the eigenvalue $\lambda^m$.

Solution: Clearly $T^1(x) = \lambda^1 x$. Now, suppose $T^{m-1}(x) = \lambda^{m-1}x$. By associativity of function composition and the linearity of $T$,
$$T^m(x) = T(T^{m-1}(x)) = T(\lambda^{m-1}x) = \lambda^{m-1}T(x) = \lambda^{m-1}\lambda x = \lambda^m x.$$
The rest follows by induction.
b. State and prove the analogous result for matrices.

Solution: Let $A \in M_{n\times n}(F)$, and let $x \in F^n$ be an eigenvector of $A$ corresponding to the eigenvalue $\lambda$. For any positive integer $m$, $x$ is an eigenvector of $A^m$ corresponding to the eigenvalue $\lambda^m$.

Proof by induction: Clearly $A^1x = \lambda^1 x$. Now, suppose $A^{m-1}x = \lambda^{m-1}x$. By associativity of matrix multiplication,
$$A^m x = A(A^{m-1}x) = A(\lambda^{m-1}x) = \lambda^{m-1}Ax = \lambda^{m-1}(\lambda x) = \lambda^m x.$$
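A small numerical check of part (b), reusing the matrix and an eigenvector from problem 3b (my own addition):

```python
import numpy as np

A = np.array([[0, -2, -3], [-1, 1, -1], [2, 2, 5]])  # matrix from problem 3b
x = np.array([1, -1, 0])                             # eigenvector with eigenvalue 2

m = 5
assert np.allclose(np.linalg.matrix_power(A, m) @ x, 2 ** m * x)
```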
17. Let $T$ be the linear operator on $M_{n\times n}(\mathbb{R})$ defined by $T(A) = A^t$.

a. Show that $\pm 1$ are the only eigenvalues of $T$.

Solution: Suppose $A$ is an eigenvector of $T$ corresponding to the eigenvalue $\lambda$. Then
$$A = (A^t)^t = T(T(A)) = T^2(A) = \lambda^2 A.$$
Since $A \neq 0$, it follows that $\lambda^2 = 1$, and so $\lambda = \pm 1$.
b. Find the eigenvectors corresponding to these eigenvalues.
Solution: The eigenvectors corresponding to $1$, those which satisfy
$$A^t = T(A) = A,$$
are the symmetric matrices. The eigenvectors corresponding to $-1$, those which satisfy
$$A^t = T(A) = -A,$$
are the skew-symmetric matrices.
c. Find an ordered basis $\beta$ for $M_{2\times 2}(\mathbb{R})$ such that $[T]_\beta$ is a diagonal matrix.

Solution: We showed, a long time ago, that $M_{2\times 2}$ is the direct sum of the symmetric and skew-symmetric matrices. We can therefore form a basis of $M_{2\times 2}$ from the disjoint union of bases of the two spaces, i.e.
$$\beta = \left\{\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\right\}.$$
Then
$$[T]_\beta = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$$
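A direct check (my own addition) that $T$ fixes the three symmetric basis matrices and negates the skew-symmetric one, so that $[T]_\beta = \mathrm{diag}(1, 1, 1, -1)$:

```python
import numpy as np

beta = [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]]),
        np.array([[0, 1], [1, 0]]), np.array([[0, 1], [-1, 0]])]

for A, eig in zip(beta, [1, 1, 1, -1]):
    assert np.array_equal(A.T, eig * A)  # T(A) = A^t = (+/-1) A on each basis matrix
```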