
Math 107 August 6, 2009 (Partial) Solution Manual

1 Matrices and Determinants
1.1 Systems of Linear Equations
2. Solve the system:

      2x + y - 2z = 0
      2x - y - 2z = 0
       x + 2y - 4z = 0

      [ 2  1 -2 | 0 ]        [ 1 0 0 | 0 ]
      [ 2 -1 -2 | 0 ]  rref  [ 0 1 0 | 0 ]
      [ 1  2 -4 | 0 ]        [ 0 0 1 | 0 ]

   x = 0, y = 0, z = 0
8. Solve the system:

      x1 + x2 - x3 + 2x4 = 1
      x1 + x2 - x3 - x4 = -1
      x1 + 2x2 + x3 + 2x4 = -1
      2x1 + 2x2 + x3 + x4 = 2

      [ 1 1 -1  2 |  1 ]        [ 1 0 0 0 |  11/3 ]
      [ 1 1 -1 -1 | -1 ]  rref  [ 0 1 0 0 | -10/3 ]
      [ 1 2  1  2 | -1 ]        [ 0 0 1 0 |   2/3 ]
      [ 2 2  1  1 |  2 ]        [ 0 0 0 1 |   2/3 ]

   x1 = 11/3, x2 = -10/3, x3 = 2/3, x4 = 2/3
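As a sanity check on the row reduction in Exercise 8, the system (with the signs as reconstructed above) can be solved in exact rational arithmetic; this sketch is not part of the original manual:

```python
from fractions import Fraction

def solve_exact(aug):
    """Gauss-Jordan elimination on an augmented matrix, using Fractions.
    Assumes the system has a unique solution."""
    n = len(aug)
    m = [[Fraction(v) for v in row] for row in aug]
    for col in range(n):
        # find a row with a nonzero pivot and swap it into place
        piv = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        m[col] = [v / m[col][col] for v in m[col]]
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [row[-1] for row in m]

# Augmented matrix for Exercise 8 (signs as reconstructed in this copy)
aug = [[1, 1, -1,  2,  1],
       [1, 1, -1, -1, -1],
       [1, 2,  1,  2, -1],
       [2, 2,  1,  1,  2]]
x = solve_exact(aug)
print(x)  # x1 = 11/3, x2 = -10/3, x3 = 2/3, x4 = 2/3
```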
15. Solve the system:

      2x1 - x2 - x3 + x4 + x5 = 0
       x1 - x2 + x3 + 2x4 - 3x5 = 0
      3x1 - 2x2 - x3 - x4 + 2x5 = 0

      [ 2 -1 -1  1  1 | 0 ]        [ 1 0 0 7 -4 | 0 ]
      [ 1 -1  1  2 -3 | 0 ]  rref  [ 0 1 0 9 -5 | 0 ]
      [ 3 -2 -1 -1  2 | 0 ]        [ 0 0 1 4 -4 | 0 ]

    x4 and x5 are free variables, while x1 = -7x4 + 4x5, x2 = -9x4 + 5x5,
    and x3 = -4x4 + 4x5.
17. Determine conditions on a, b, and c so that the system is consistent:

      2x - y + 3z = a
       x - 3y + 2z = b
       x + 2y + z = c

      [ 2 -1 3 | a ]      [ 2  -1    3  | a          ]
      [ 1 -3 2 | b ]  ->  [ 0 -5/2  1/2 | b - (1/2)a ]
      [ 1  2 1 | c ]      [ 0   0    0  | c + b - a  ]

    Thus, we need c + b - a = 0 for the system to be consistent.
19. Determine conditions on a, b, c, and d so that the system is consistent:

      [ 1  1  1 -1 | a ]        [ 1 0 0 0 | (1/2)(a + b) ]
      [ 1 -1 -1  1 | b ]  rref  [ 0 1 0 0 | (1/2)(c - d) ]
      [ 1  1  1  1 | c ]        [ 0 0 1 0 | (1/2)(d - b) ]
      [ 1 -1  1  1 | d ]        [ 0 0 0 1 | (1/2)(c - a) ]

    Since the coefficient matrix row reduces to the identity, there are no
    conditions on a, b, c, and d: the system is always consistent.
22. In this case, we have 4 unknowns and 3 equations in a homogeneous linear system. By
Theorem 1.1, there are infinitely many solutions.
23. Are there any nontrivial solutions to the system?

       x - y + z = 0
      2x + y + 2z = 0
      3x - 5y + 3z = 0

      [ 1 -1 1 | 0 ]      [ 1 -1 1 | 0 ]
      [ 2  1 2 | 0 ]  ->  [ 0  3 0 | 0 ]
      [ 3 -5 3 | 0 ]      [ 0  0 0 | 0 ]

    Yes: in the process of row reduction we get a row of zeroes, so there is
    a free variable and hence nontrivial solutions.
26. There are infinitely many answers to each part of this problem. Here are some examples:

    One solution:
       x + 2y = 1
       x - y = 0

    No solution:
       x + 2y + 3z = 0
       x + 2y - 3z = 0
      -x - 2y - z = 1

    Infinitely many solutions:
       x + 2y + 3z = 0
       x + 2y - 3z = 0
      2x + 4y - z = 0

28. There are 4 possibilities. With three variables, each equation represents a plane in 3-D space.
    There are 4 ways that three planes can intersect: (1) no intersection, (2) intersection at a
    single point, (3) intersection on a line, and (4) all three planes identical (intersection
    everywhere on that plane).
1.2 Matrices and Matrix Operations
5.
      A - 4B = [  7  6 ]
               [ 15  7 ]
               [  2 17 ]

9.
      EF = [ 0 8  9 ]
           [ 3 5 13 ]
           [ 3 4 10 ]
11. AE is not a valid multiplication; it is undefined.
12.
      EA = [ 2 0 ]
           [ 3 4 ]
           [ 4 1 ]

14.
      B(C + D) = [  0  4 ]
                 [ 14  8 ]
                 [ 16 16 ]
18. A^3 = A*A*A. But A*A is not defined, since A is not a square matrix.
20. In matrix form:

      [ 1 3 1 5 ]   [ x1 ]     [ 2 ]
      [ 1 1 1 1 ] * [ x2 ]  =  [ 1 ]
      [ 1 1 1 6 ]   [ x3 ]     [ 6 ]
                    [ x4 ]

21. Written out as equations:

      2x1 - 2x2 + 5x3 + 7x4 = 12
      4x1 + 5x2 - 11x3 + 3x4 = 3
23. A and B are n x n matrices:

    (a) (A+B)^2 = (A+B)(A+B) = A(A+B) + B(A+B) = A^2 + AB + BA + B^2

    (b) In general, AB != BA. This means that AB + BA need not equal 2AB, so the
    expansion in (a) does not in general simplify to A^2 + 2AB + B^2.
28. There are many possible answers to this question. Here is one:

      A = [ 1 0 ],  B = [ 0 0 ],  AB = [ 0 0 ]
          [ 0 0 ]       [ 0 1 ]        [ 0 0 ]
29. (a) Write A = [ a1 a2 ... an ] in terms of its columns, and write B as a
    column vector with entries B1, B2, ..., Bn. Decomposing B into a sum of
    vectors, each with a single nonzero entry:

       AB = A( [B1, 0, ..., 0]^T + [0, B2, 0, ..., 0]^T + ... + [0, ..., 0, Bn]^T )
          = A [B1, 0, ..., 0]^T + A [0, B2, 0, ..., 0]^T + ... + A [0, ..., 0, Bn]^T

    Multiplying A against a vector whose only nonzero entry is Bk in position
    k picks out Bk times the k-th column of A. Hence

       AB = a1 B1 + a2 B2 + ... + an Bn,

    i.e., AB is the linear combination of the columns of A weighted by the
    entries of B.

    (b)
       AB = 2 [ 1 ]   [ 1 ]     [ 0 ]   [ 3 ]
              [ 2 ] + [ 1 ] + 6 [ 1 ] = [ 3 ]
              [ 4 ]   [ 1 ]     [ 2 ]   [ 15 ]
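The column picture of Exercise 29 can be illustrated with a small made-up example (the exact entries of the textbook's A and B in part (b) are not fully recoverable in this copy): A*b computed entry-by-entry agrees with the linear combination of A's columns weighted by the entries of b.

```python
# Column picture of a matrix-vector product: A*b equals the linear
# combination of the columns of A weighted by the entries of b.
# (Illustrative matrix and vector, chosen for this sketch.)
A = [[1, -1, 0],
     [2,  1, 1],
     [4,  1, 2]]
b = [2, -1, 6]

n = len(b)
direct = [sum(A[i][k] * b[k] for k in range(n)) for i in range(len(A))]
by_columns = [sum(b[k] * A[i][k] for k in range(n)) for i in range(len(A))]
print(direct, by_columns)  # the two agree: [3, 9, 19]
```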
30. (a) Let B = [b1, b2, ..., bn]^T and A = [ A1 A2 ... An ], where Ai is
    the i-th column of A. Splitting A into matrices that each keep only one
    column:

       AB = ( [ A1 0 ... 0 ] + [ 0 A2 0 ... 0 ] + ... + [ 0 ... 0 An ] ) B
          = [ A1 0 ... 0 ] B + [ 0 A2 0 ... 0 ] B + ... + [ 0 ... 0 An ] B

    Each term [ 0 ... Ai ... 0 ] B equals bi Ai, so

       AB = b1 A1 + b2 A2 + ... + bn An.

    (b)
       AB = 2 [ 3 ]   [ 2 ]     [ -1 ]   [ 1  ]
              [ 0 ] - [ 3 ] + 3 [  5 ] = [ 12 ]
              [ 1 ]   [ 1 ]     [  2 ]   [ 7  ]
1.3 Inverses of Matrices
1. Inverse:

      [ 1/7 -2/7 ]
      [ 3/7  1/7 ]
6. Inverse:

      [ 1/2    0     1/2 ]
      [ 1/11  -3/11   0  ]
      [ 4/11  -1/11   0  ]
7. Inverse:

      [ 11/12  9/12  1/12  -1/4 ]
      [ -1/4  -1/4  -1/4    1/4 ]
      [  1     1/2  -1/2     0  ]
      [ -7/12 -1/4   1/12  -1/4 ]
10. Multiplying the inverse from Exercise 7 by the right-hand side:

      [ 11/12  9/12  1/12  -1/4 ] [ 3 ]   [  73/12 ]
      [ -1/4  -1/4  -1/4    1/4 ] [ 5 ]   [  -7/4  ]
      [  1     1/2  -1/2     0  ] [ 1 ] = [   0    ]
      [ -7/12 -1/4   1/12  -1/4 ] [ 2 ]   [ -29/12 ]

    x1 = 73/12, x2 = -7/4, x3 = 0, and x4 = -29/12.
11b. Replace the first row by itself plus twice the second row:

      E = [ 1 2 ]
          [ 0 1 ]
13. Proof of Theorem 1.6: Let E = [e_ij] and A = [a_ij], so that
    ent_ij(EA) = sum_{k=1}^{n} e_ik a_kj.

    (a) Suppose E switches rows l and m of I. This means that e_ij = 1 if i
    equals j and they are not equal to l or m, or if i = l and j = m, or if
    i = m and j = l; otherwise e_ij = 0. Consider the ij entry of EA:

    If i is not equal to l or m, then the only nonzero e_ik is e_ii, so

       ent_ij(EA) = sum_k e_ik a_kj = e_ii a_ij = a_ij.

    Therefore, the i-th row of EA is the i-th row of A.

    If i = l, then the only nonzero e_lk is e_lm, so

       ent_lj(EA) = sum_k e_lk a_kj = e_lm a_mj = a_mj.

    Therefore, the l-th row of EA is the m-th row of A.

    If i = m, then the only nonzero e_mk is e_ml, so

       ent_mj(EA) = sum_k e_mk a_kj = e_ml a_lj = a_lj.

    Therefore, the m-th row of EA is the l-th row of A. Thus, EA switches
    rows l and m of A, leaving the rest of A the same.

    (b) Suppose E multiplies row l of I by c. Then e_ll = c, e_ii = 1 for
    all i != l, and e_ij = 0 for i != j. Consider the ij entry of EA:

    If i != l, then the only nonzero e_ik is e_ii = 1, so

       ent_ij(EA) = sum_k e_ik a_kj = e_ii a_ij = a_ij.

    Therefore, the i-th row of EA is the i-th row of A.

    If i = l, then the only nonzero e_lk is e_ll = c, so

       ent_lj(EA) = sum_k e_lk a_kj = e_ll a_lj = c a_lj.

    Therefore, the l-th row of EA is the l-th row of A times c. Thus, EA
    multiplies the l-th row of A by c, leaving the rest of A the same.

    (c) Suppose E replaces row l by itself plus c times row m. Then
    e_ii = 1 for all i, e_lm = c, and all other entries of E are 0.
    Consider the ij entry of EA:

    If i != l, then the only nonzero e_ik is e_ii = 1, so

       ent_ij(EA) = sum_k e_ik a_kj = e_ii a_ij = a_ij.

    Therefore, the i-th row of EA is the i-th row of A.

    If i = l, then the only nonzero e_lk are e_ll = 1 and e_lm = c, so

       ent_lj(EA) = sum_k e_lk a_kj = e_ll a_lj + e_lm a_mj = a_lj + c a_mj.

    Therefore, the l-th row of EA is the l-th row of A plus c times the m-th
    row of A. Thus, EA replaces the l-th row of A by itself plus c times the
    m-th row of A, leaving the rest of A the same.
14. (a) If E is obtained by switching rows i and j of I, let B = E. Using
    Theorem 1.6, Part (1), it is clear that EB = I and BE = I. Thus,
    B = E^{-1}.

    (b) If E is obtained by multiplying row i of I by a nonzero scalar c,
    let B be obtained by multiplying row i of I by 1/c. Using Theorem 1.6,
    Part (2), EB = I and BE = I. Thus, B = E^{-1}.

    (c) If E is obtained by replacing row i of I by itself plus c times row
    j of I, let B be the matrix obtained by replacing row i of I by itself
    minus c times row j of I. Using Theorem 1.6, Part (3), EB = I and
    BE = I. Thus, B = E^{-1}.
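The three cases of Exercise 14 can be checked concretely (a sketch, not part of the original manual; the 3x3 matrices below are illustrative choices with c = 5 for the scaling type and c = 3 for the replacement type):

```python
# Each type of elementary matrix is inverted by the elementary matrix
# that undoes the same row operation.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# (a) swap rows 0 and 1: the swap matrix is its own inverse
E_swap = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
# (b) multiply row 1 by 5; the inverse multiplies row 1 by 1/5
E_scale = [[1, 0, 0], [0, 5, 0], [0, 0, 1]]
E_scale_inv = [[1, 0, 0], [0, 0.2, 0], [0, 0, 1]]
# (c) add 3 times row 2 to row 0; the inverse adds -3 times row 2 to row 0
E_add = [[1, 0, 3], [0, 1, 0], [0, 0, 1]]
E_add_inv = [[1, 0, -3], [0, 1, 0], [0, 0, 1]]

print(matmul(E_swap, E_swap) == I3)        # True
print(matmul(E_scale, E_scale_inv) == I3)  # True
print(matmul(E_add, E_add_inv) == I3)      # True
```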
16. If a square matrix A contains a zero row, then the reduced row echelon form of A will also
contain at least one zero row. By the inverse algorithm, A will not have an inverse; therefore,
A is not invertible. If a square matrix A contains a zero column, then any row operations
will maintain that zero column, so A cannot be row reduced to I. Therefore, by the
inverse algorithm, A is not invertible.
1.4 Special Matrices and Additional Properties of Matrices
4.
      (A^{-1})^4 = [ 1   0    0 ]
                   [ 0  1/16  0 ]
                   [ 0   0    1 ]

12.
      B^T A^T = [ 16 12 ]
                [  8  8 ]
17. A + B is not symmetric:

      A + B = [ 4 2 2 ]
              [ 1 4 6 ]
              [ 2 6 6 ]

20. B^T B is symmetric for any square matrix B (Theorem 1.14, Part 3). In this case,

      B^T B = [ 11  2  2 ]
              [  2 20 10 ]
              [  2 10  6 ]
22c. Let A = [a_ij] be an n x n upper triangular matrix. (The proof for a lower
    triangular matrix is similar.)

    (=>) Assume A has all diagonal entries nonzero. This implies that we can do
    row operations on A (multiply row i by 1/a_ii) to get an upper triangular
    matrix with ones on the diagonal. These ones can then be used to eliminate
    everything above them in their columns using row operations (eliminate an
    off-diagonal a_ij by replacing row i by row i minus a_ij times row j, etc.).
    This leaves the identity. Thus, A is invertible.

    (<=) (Contrapositive) Assume one of the diagonal entries is zero. If
    a_nn = 0, then the last row is zero, the reduced row echelon form of A will
    have a zero row, and thus A will not be invertible. Otherwise, let m be the
    largest index such that a_mm = 0, i.e., a_ii != 0 for i > m. Similar to the
    above, we can do row operations on A (multiply row i by 1/a_ii) for all rows
    below row m to get ones on the diagonal in every row beyond row m. These
    ones can then be used to eliminate everything in row m by successively
    replacing row m by row m minus a_mj times row j for j > m. Since a_mm = 0,
    row m will then be a zero row. Thus, the reduced row echelon form of A will
    have a zero row, and A will not be invertible.
24d. Since A is a symmetric matrix, A = A^T. Since A is invertible, it is also
    true that A^{-1} = (A^T)^{-1}. By Theorem 1.13, Part (5), for any invertible
    square matrix, (A^T)^{-1} = (A^{-1})^T. Thus,

       A^{-1} = (A^{-1})^T,

    so A^{-1} is symmetric.
26. (=> and <=) A and B are row equivalent if and only if there exist
    elementary matrices E_1, ..., E_k such that

       E_k E_{k-1} ... E_1 A = B.

    Let C = E_k E_{k-1} ... E_1. By Theorem 1.10, C is a product of elementary
    matrices if and only if C is invertible. Thus, C is invertible, and A and B
    are row equivalent if and only if B = CA for some invertible matrix C.
32.
      A^3 = [ 0 0 0 ]
            [ 0 0 0 ]
            [ 0 0 0 ]
33. Assume A = [a_ij] is an n x n upper triangular matrix with zeros along the
    diagonal. (The proof is similar for a lower triangular matrix.) The proof
    proceeds by induction on the number of zero superdiagonals, where the k-th
    superdiagonal is defined by the entries {a_i(i+k)} for i = 1, ..., n-k: the
    entries whose columns are k over from the diagonal.

    Base case, k = 1: A has possibly nonzero entries a_12, a_13, ..., a_1n,
    a_23, ..., a_(n-1)n, all strictly above the diagonal. Multiplying,

       A^2 = A * A = B,

    where B = [b_ij] has b_ij = 0 whenever j <= i + 1: every entry on the
    first superdiagonal (one column over from the diagonal) is now 0 as well,
    and the possibly nonzero entries b_13, ..., b_1n, ..., b_(n-2)n all lie at
    least two columns over from the diagonal.

    Induction step: assume A^k = [b_ij] has every entry that is k or fewer
    columns over from the diagonal equal to 0, i.e., the 1st through k-th
    superdiagonals of A^k are zero, and the possibly nonzero entries are
    b_1(k+1), ..., b_(n-k)n. Then A^{k+1} = A^k * A = [c_ij], where c_ij = 0
    whenever j <= i + k + 1, and the possibly nonzero entries are c_1(k+2),
    ..., c_(n-k-1)n: every entry k+1 or fewer columns over from the diagonal
    is 0.

    Now consider k + 1 = n. The first possibly nonzero entry would be
    c_1(k+2) = c_1(n+1), which lies outside the matrix. In other words, when
    k + 1 = n, the entire matrix A^n is 0. Thus, A is nilpotent of order at
    most n.
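The nilpotency in Exercise 33 is easy to watch numerically (a sketch, not part of the original manual; the 4x4 matrix is an illustrative choice):

```python
# A strictly upper triangular n x n matrix satisfies A^n = 0: each
# multiplication pushes the nonzero band one superdiagonal further
# from the main diagonal.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 2, 3, 4],
     [0, 0, 5, 6],
     [0, 0, 0, 7],
     [0, 0, 0, 0]]

P = A
for _ in range(3):  # compute A^2, A^3, A^4
    P = matmul(P, A)
print(P)  # A^4 is the 4x4 zero matrix
```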
1.5 Determinants
5. Expanding along the first row:

      det(A) = (1) | 4 2 | + (1) | 2 3 | + 2 | 2 3 |  =  41
               | 3 1 |       | 3 1 |     | 4 2 |

8. Expanding along the first column, then along the third column:

      | 2 1  5 6 |
      | 0 3 -4 0 |        | 3 -4 0 |
      | 0 1  5 2 |  =  2  | 1  5 2 |  =  2 (-2) | 3 -4 |  =  -52
      | 0 1  3 0 |        | 1  3 0 |            | 1  3 |

12.
      | 2 1 3 1 |
      | 1 2 1 4 |
      | 1 1 3 1 |  =  5
      | 3 2 1 5 |
15. Show that the determinant of a lower triangular matrix is the product of
    its diagonal entries. Proof by induction on the size of the matrix. First,
    the base case, n = 1: the determinant of a 1 x 1 lower triangular matrix is
    just its single entry. Assume the determinant of any n x n lower triangular
    matrix is the product of its diagonal entries, and let A be an
    (n+1) x (n+1) lower triangular matrix:

      A = [ a_11     0      ...    0          ]
          [ a_21    a_22    0 ...  0          ]
          [  :       :       .     :          ]
          [ a_(n+1)1 ...     ...   a_(n+1)(n+1) ]

    Do the cofactor expansion of A about the first row to find the
    determinant. Only the (1,1) entry of the first row is nonzero, so

       det(A) = a_11 det(B_1),

    where B_1 is the n x n lower triangular matrix obtained by deleting the
    first row and column of A. By the induction hypothesis, the determinant of
    B_1 is the product of its diagonal entries:

       det(B_1) = a_22 ... a_(n+1)(n+1),

    and therefore

       det(A) = a_11 a_22 ... a_(n+1)(n+1).
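The result of Exercise 15 can be spot-checked with a direct cofactor-expansion determinant (a sketch, not part of the original manual; the matrix below is an illustrative choice):

```python
# Cofactor-expansion determinant of a lower triangular matrix equals
# the product of its diagonal entries.
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        if a:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * a * det(minor)
    return total

L = [[2,  0, 0,  0],
     [5,  3, 0,  0],
     [1, -2, 4,  0],
     [7,  0, 6, -1]]
diag_product = 2 * 3 * 4 * -1
print(det(L), diag_product)  # both -24
```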
16. Let A be a square matrix such that row i is c times row j. Let B be the
matrix obtained from A by replacing row i by itself minus c times row j. Thus,
row i of B is a zero row. By Corollary 1.17, det(B) = 0. By Theorem 1.20,
det(B) = det(A). Thus, det(A) = 0.

If a square matrix has a column that is a scalar multiple of another column,
then its determinant is also zero. In this case, take the transpose of the
matrix and apply the statement just proven. Then, by Theorem 1.19, the
transpose and the original matrix have the same determinant.
1.6 Further Properties of Determinants
4. The determinant of the matrix is 0. The matrix is not invertible.

6. A = [ 2 3 ]  =>  det(A) = 1.
       [ 1 2 ]

   adj(A) = [  2 -3 ]  =>  inv(A) = adj(A)/det(A) = [  2 -3 ]
            [ -1  2 ]                               [ -1  2 ]
10.
      A = [ 5 4 1 ]
          [ 2 3 2 ]   and det(A) = 24
          [ 3 1 3 ]

    Replacing one column of A at a time by the right-hand side [2, 4, 2]^T:

      A_1 = [ 2 4 1 ]                       A_2 = [ 5 2 1 ]
            [ 4 3 2 ]  and det(A_1) = 60,         [ 2 4 2 ]  and det(A_2) = 48,
            [ 2 1 3 ]                             [ 3 2 3 ]

      A_3 = [ 5 4 2 ]
            [ 2 3 4 ]  and det(A_3) = 60
            [ 3 1 2 ]

    By Cramer's rule, x = 60/24 = 2.5, y = 48/24 = 2, and z = 60/24 = 2.5.
13. Suppose E is an elementary matrix.

    (a) If E is obtained from I by multiplying a row of I by a scalar c, then by
    Theorem 1.20, part (2), det(E) = c det(I) = c.

    (b) If E is obtained from I by replacing a row of I by itself plus c times
    another row, then by Theorem 1.20, part (3), det(E) = det(I) = 1.

15c. det(A + B) = 28, det(A) = 14, det(B) = 7, so det(A) + det(B) = 21. Thus,
    det(A + B) != det(A) + det(B).

16. If A and B are square matrices of the same size, then, by Theorem 1.24,
    det(AB) = det(A)det(B) and det(BA) = det(B)det(A). But determinants of
    matrices are real numbers, so det(A)det(B) = det(B)det(A). Thus,
    det(AB) = det(BA).
1.7 Proofs of Theorems on Determinants
5. Suppose A is an n x n matrix. Since ent_ik(adj(A)) = C_ki, the ij entry of
   adj(A)A is

      ent_ij(adj(A)A) = sum_{k=1}^{n} C_ki a_kj.

   If i = j, then

      ent_ii(adj(A)A) = sum_{k=1}^{n} a_ki C_ki = det(A),

   since the sum is the cofactor expansion of A about column i. If i != j, then

      ent_ij(adj(A)A) = sum_{k=1}^{n} a_kj C_ki

   is the determinant of the matrix, call it B, obtained from A by replacing
   the i-th column of A by the j-th column of A. Since this matrix contains two
   columns with the same entries, the transpose of B has two identical rows.
   This implies det(B^T) = 0. By Theorem 1.19, det(B) = 0. Thus, for i != j,

      ent_ij(adj(A)A) = 0   =>   adj(A)A = det(A) I
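The identity adj(A)A = det(A)I from Exercise 5 can be checked on a concrete 3x3 matrix, with the cofactors computed directly (a sketch, not part of the original manual; the matrix is an illustrative choice):

```python
# Verify adj(A) * A = det(A) * I for one 3x3 matrix.
def det2(a, b, c, d):
    return a * d - b * c

A = [[2, 1, 0],
     [1, 3, 2],
     [0, 1, 4]]

def minor_det(M, i, j):
    """2x2 determinant of M with row i and column j deleted."""
    rows = [r for k, r in enumerate(M) if k != i]
    (a, b), (c, d) = [[x for k, x in enumerate(r) if k != j] for r in rows]
    return det2(a, b, c, d)

n = 3
C = [[(-1) ** (i + j) * minor_det(A, i, j) for j in range(n)] for i in range(n)]
adjA = [[C[j][i] for j in range(n)] for i in range(n)]  # transpose of cofactors
prod = [[sum(adjA[i][k] * A[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
detA = sum(A[0][j] * C[0][j] for j in range(n))
print(detA, prod)  # prod equals detA times the identity
```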
6. Let A have rows A_1, A_2, ..., A_n, so that cA has rows cA_1, cA_2, ...,
   cA_n. Pulling the scalar c out of one row at a time (Theorem 1.20, part
   (2)), applied n times:

      det(cA) = c det( [ A_1; cA_2; cA_3; ...; cA_n ] )
              = c^2 det( [ A_1; A_2; cA_3; ...; cA_n ] )
              = ...
              = c^n det( [ A_1; A_2; ...; A_n ] )
              = c^n det(A)
7. The following lemma will be helpful.

   Lemma 1. If the matrix B is obtained from the matrix A by multiplying one of
   the columns of A by a scalar c, then det(B) = c det(A).

   Proof: Suppose B is obtained from A by multiplying one of the columns of A
   by c. Note that B^T can be obtained from A^T by multiplying a row of A^T by
   c. By Theorem 1.20, part (2), det(B^T) = c det(A^T). By Theorem 1.19,
   det(A^T) = det(A) and det(B^T) = det(B). Thus, det(B) = c det(A).

   Let V_n be the n x n Vandermonde matrix.

   (a) n = 2:

      V_2 = [ 1    1  ]
            [ x_1  x_2 ]

   By Theorem 1.20 (add -x_1 times row 1 to row 2),

      det(V_2) = | 1      1       |  =  x_2 - x_1
                 | 0  x_2 - x_1   |

   (b) n = 3:

      V_3 = [ 1      1      1    ]
            [ x_1    x_2    x_3  ]
            [ x_1^2  x_2^2  x_3^2 ]

   By Theorem 1.20 (subtract multiples of row 1 from the rows below),

      det(V_3) = | 1       1              1             |
                 | 0   x_2 - x_1      x_3 - x_1         |
                 | 0   x_2^2 - x_1^2  x_3^2 - x_1^2     |

   Expanding the determinant along the first column,

      det(V_3) = |        x_2 - x_1                 x_3 - x_1         |
                 | (x_2 + x_1)(x_2 - x_1)   (x_3 + x_1)(x_3 - x_1)    |

   By Lemma 1 (factoring x_2 - x_1 out of the first column and x_3 - x_1 out of
   the second),

      det(V_3) = (x_2 - x_1)(x_3 - x_1) |     1           1       |
                                        | x_2 + x_1   x_3 + x_1   |

                = (x_2 - x_1)(x_3 - x_1) | 1     1   |
                                         | x_2   x_3 |

   where the second determinant is obtained by adding -x_1 times the first row
   to the second row. The remaining determinant is that of a 2 x 2 Vandermonde
   matrix with x_2 and x_3 instead of x_1 and x_2. Thus,

      det(V_3) = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2)
   (c) n = 4:

      V_4 = [ 1      1      1      1    ]
            [ x_1    x_2    x_3    x_4  ]
            [ x_1^2  x_2^2  x_3^2  x_4^2 ]
            [ x_1^3  x_2^3  x_3^3  x_4^3 ]

   By Theorem 1.20 (subtract multiples of row 1 from the rows below),

      det(V_4) = | 1       1              1              1             |
                 | 0   x_2 - x_1      x_3 - x_1      x_4 - x_1         |
                 | 0   x_2^2 - x_1^2  x_3^2 - x_1^2  x_4^2 - x_1^2     |
                 | 0   x_2^3 - x_1^3  x_3^3 - x_1^3  x_4^3 - x_1^3     |

   Expanding along the first column and factoring each difference of powers,
   the j-th column of the resulting 3 x 3 determinant has entries

      x_j - x_1,  (x_j + x_1)(x_j - x_1),  (x_j^2 + x_j x_1 + x_1^2)(x_j - x_1).

   Thus, by Lemma 1 (factoring x_j - x_1 out of each column),

      det(V_4) = (x_2 - x_1)(x_3 - x_1)(x_4 - x_1) *

         | 1                         1                         1                       |
         | x_2 + x_1                 x_3 + x_1                 x_4 + x_1               |
         | x_2^2 + x_2 x_1 + x_1^2   x_3^2 + x_3 x_1 + x_1^2   x_4^2 + x_4 x_1 + x_1^2 |

   By Theorem 1.20, if we add -x_1 times the second row to the third row, the
   determinant is unchanged:

      det(V_4) = (x_2 - x_1)(x_3 - x_1)(x_4 - x_1) | 1          1          1         |
                                                   | x_2 + x_1  x_3 + x_1  x_4 + x_1 |
                                                   | x_2^2      x_3^2      x_4^2     |

   Likewise, if we add -x_1 times the first row to the second row, the
   determinant is unchanged:

      det(V_4) = (x_2 - x_1)(x_3 - x_1)(x_4 - x_1) | 1      1      1     |
                                                   | x_2    x_3    x_4   |
                                                   | x_2^2  x_3^2  x_4^2 |

   The remaining determinant is that of a 3 x 3 Vandermonde matrix with x_2,
   x_3, and x_4 instead of x_1, x_2, and x_3. Thus,

      det(V_4) = (x_2 - x_1)(x_3 - x_1)(x_4 - x_1)(x_3 - x_2)(x_4 - x_2)(x_4 - x_3)
   (d) We will prove this by induction on n, the size of the Vandermonde
   matrix. The base cases are demonstrated above. Assume V_n is the n x n
   Vandermonde matrix and, as our induction hypothesis,

      det(V_n) = prod_{j=2}^{n} [ prod_{i=1}^{j-1} (x_j - x_i) ]

   Let V_{n+1} be the (n+1) x (n+1) Vandermonde matrix:

      V_{n+1} = [ 1      1      1     ...  1         ]
                [ x_1    x_2    x_3   ...  x_{n+1}   ]
                [ x_1^2  x_2^2  x_3^2 ...  x_{n+1}^2 ]
                [  :      :      :          :        ]
                [ x_1^n  x_2^n  x_3^n ...  x_{n+1}^n ]

   Subtracting multiples of row 1 from each row below clears the first column:

      det(V_{n+1}) = | 1   1              ...  1                 |
                     | 0   x_2 - x_1      ...  x_{n+1} - x_1     |
                     | 0   x_2^2 - x_1^2  ...  x_{n+1}^2 - x_1^2 |
                     | :   :                   :                 |
                     | 0   x_2^n - x_1^n  ...  x_{n+1}^n - x_1^n |

   Expanding along the first column leaves the n x n determinant with entries
   x_j^m - x_1^m. By Lemma 1 and the factorization, valid for any positive
   integer m,

      y^{m+1} - z^{m+1} = (y - z) sum_{k=0}^{m} y^{m-k} z^k,

   each column j carries the common factor (x_j - x_1), and the determinant
   becomes

      det(V_{n+1}) = (x_2 - x_1)(x_3 - x_1) ... (x_{n+1} - x_1) *

         | 1                                   ...  1                                       |
         | x_2 + x_1                           ...  x_{n+1} + x_1                           |
         | :                                        :                                       |
         | sum_{k=0}^{n-1} x_2^{n-1-k} x_1^k   ...  sum_{k=0}^{n-1} x_{n+1}^{n-1-k} x_1^k   |

   Consider row i of the matrix above; its j-th entry is

      sum_{k=0}^{i-1} x_j^{i-1-k} x_1^k

   If we multiply row i by x_1, the j-th entry becomes
   sum_{k=1}^{i} x_j^{i-k} x_1^k. Thus, subtracting x_1 times row i from row
   i + 1, the j-th entry of row i + 1 becomes

      sum_{k=0}^{i} x_j^{i-k} x_1^k  -  sum_{k=1}^{i} x_j^{i-k} x_1^k  =  x_j^i

   By Theorem 1.20, adding a multiple of one row to another does not change the
   determinant, so performing this for each i (working from the bottom row up)
   gives

      det(V_{n+1}) = (x_2 - x_1)(x_3 - x_1) ... (x_{n+1} - x_1) *

         | 1           1           ...  1             |
         | x_2         x_3         ...  x_{n+1}       |
         | :           :                :             |
         | x_2^{n-1}   x_3^{n-1}   ...  x_{n+1}^{n-1} |

   The remaining matrix is an n x n Vandermonde matrix with x_2, ..., x_{n+1}
   instead of x_1, ..., x_n, so

      det(V_{n+1}) = (x_2 - x_1)(x_3 - x_1) ... (x_{n+1} - x_1)
                     * prod_{j=3}^{n+1} [ prod_{i=2}^{j-1} (x_j - x_i) ]

   by the induction hypothesis. Thus,

      det(V_{n+1}) = prod_{j=2}^{n+1} [ prod_{i=1}^{j-1} (x_j - x_i) ]
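The Vandermonde formula just proven can be checked numerically in exact arithmetic (a sketch, not part of the original manual; the sample points are an arbitrary choice):

```python
# Check det(V_n) = product over i < j of (x_j - x_i) for one set of points.
from fractions import Fraction
from itertools import permutations

def det(M):
    """Leibniz-formula determinant (fine for small n)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

xs = [Fraction(1), Fraction(2), Fraction(5), Fraction(-3)]
V = [[x ** i for x in xs] for i in range(len(xs))]  # Vandermonde matrix
product = 1
for j in range(len(xs)):
    for i in range(j):
        product *= xs[j] - xs[i]
print(det(V), product)  # the two agree
```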
2 Vector Spaces
2.1 Vector Spaces
3. (a) Not a vector space. Properties 2, 5, 7, and 8 hold, but 1, 3, 4, and 6 do not.
(b) Not a vector space. Properties 1, 2, 3, and 4 hold, but 5, 6, 7, and 8 do not.
(c) Not a vector space. Properties 3, 4, 5, 7, and 8 hold, but 1, 2, and 6 do not.
6. The complex numbers under the addition and scalar multiplication given are a vector space.
2.2 Subspaces and Spanning Sets
1. (c) The set is not a subspace because

      [ x      ]   [ y      ]   [ x + y         ]
      [ 2 - 5x ] + [ 2 - 5y ] = [ 4 - 5(x + y)  ],

   which is not of the form given. Thus, the set is not closed under addition
   and the vectors given do not constitute a subspace.

   (d) We could also write these vectors as [x, -x]^T. First,

      [ x  ]   [ y  ]   [ x + y    ]
      [ -x ] + [ -y ] = [ -(x + y) ],

   so the set is closed under addition. Also,

      c [ x  ] = [ cx    ]
        [ -x ]   [ -(cx) ],

   so the set is closed under scalar multiplication. Thus, the set is a
   subspace of R^2.
2. (b) Not a subspace, since it is not closed under scalar multiplication:

      c [ y + z + 1 ]   [ cy + cz + c ]
        [ y         ] = [ cy          ]
        [ z         ]   [ cz          ],

   which is not in the subset if c != 1.

   (d) Not a subspace, since it is not closed under scalar multiplication:

      c [ x         ]   [ cx           ]
        [ y         ] = [ cy           ]
        [ x^2 + y^2 ]   [ c(x^2 + y^2) ],

   and c(x^2 + y^2) != (cx)^2 + (cy)^2 in general, so the result is not in the
   subset for all c.
3. (a) Subspace
   (b) Not a subspace (not closed under either addition or scalar multiplication)
   (c) Subspace
   (d) Subspace
   (e) Not a subspace (not closed under either addition or scalar multiplication)

5. The solutions to that linear system do not define a subspace. The set of
   solutions is not closed under addition or scalar multiplication: let X_1 and
   X_2 be two solutions to AX = B, with B nonzero. Then,

      A(X_1 + X_2) = AX_1 + AX_2 = B + B = 2B != B
      A(cX_1) = c AX_1 = cB != B, if c != 1
11. Consider

      [  1 2 ] [ c_1 ]   [ x_1 ]
      [ -1 3 ] [ c_2 ] = [ x_2 ]

    Since the coefficient determinant is 5 != 0, this system has a unique
    solution for c_1 and c_2 given any x_1 and x_2. Thus, the vectors span R^2.
    Since the solution c_1 = c_2 = 0 is unique for x_1 = 0 and x_2 = 0, the
    vectors are also linearly independent. Thus, they are a basis for R^2.
12. Consider

      [ 1 5 ] [ c_1 ]   [ x_1 ]
      [ 2 4 ] [ c_2 ] = [ x_2 ]

    Since the coefficient determinant is -6 != 0, this system has a unique
    solution for c_1 and c_2 given any x_1 and x_2. Thus, the vectors span R^2.
    Since the solution c_1 = c_2 = 0 is unique for x_1 = 0 and x_2 = 0, the
    vectors are also linearly independent. Thus, they are a basis for R^2.
13. Consider

      [ 1 0  2 ] [ c_1 ]   [ x_1 ]
      [ 3 1  1 ] [ c_2 ] = [ x_2 ]
      [ 1 2 -3 ] [ c_3 ]   [ x_3 ]

    Since the coefficient determinant is 5 != 0, this system has a unique
    solution for c_1, c_2, and c_3 given any x_1, x_2, and x_3. Thus, the
    vectors span R^3. Since the solution c_1 = c_2 = c_3 = 0 is unique for
    x_1 = 0, x_2 = 0, x_3 = 0, the vectors are also linearly independent. Thus,
    they are a basis for R^3.
2.3 Linear Independence and Bases
3.
      [ 9 ]        [ -6 ]
      [ 6 ] = -3/2 [ -4 ],
      [ 3 ]        [ -2 ]

   so the vectors are linearly dependent.
6. Since

      | 0 1  1 |
      | 4 5 -3 |  =  0,
      | 1 3  1 |

   the homogeneous system with this coefficient matrix has infinitely many
   solutions for (c_1, c_2, c_3). Thus, the vectors are linearly dependent.
7. Assume

      c_1 [ 1 0 ] + c_2 [ 0 1 ] + c_3 [ 1 1 ] = [ 0 0 ]
          [ 1 1 ]       [ 1 0 ]       [ 1 1 ]   [ 0 0 ]

   for some c_1, c_2, c_3. Entry by entry, this gives

      c_1 + c_3 = 0,  c_2 + c_3 = 0,  c_1 + c_2 + c_3 = 0,  c_1 + c_3 = 0

   => c_1 = c_2 = c_3 = 0. Thus, the matrices are linearly independent.
10. Assume c_1(x^3 - 1) + c_2(x^2 - 1) + c_3(x - 1) + c_4 = 0 for some
    c_1, c_2, c_3, c_4. Relative to the basis {x^3, x^2, x, 1}, the coordinate
    vectors of these polynomials are the rows of

      [ 1 0 0 -1 ]
      [ 0 1 0 -1 ]
      [ 0 0 1 -1 ]
      [ 0 0 0  1 ]

    This matrix has determinant 1, so the system has only the solution
    c_1 = c_2 = c_3 = c_4 = 0. Thus, the polynomials are linearly independent.
14. Assume c_1 [2, 1, 0]^T + c_2 [1, 3, 1]^T + c_3 [1, -4, 1]^T = 0. Since

      det(A) = | 2 1  1 |
               | 1 3 -4 |  =  14 != 0,
               | 0 1  1 |

    there is a unique solution to the coefficient problem
    A [c_1, c_2, c_3]^T = b. If b = 0, then since the solution is unique,
    [c_1, c_2, c_3]^T = 0, so the vectors are linearly independent. For any
    other b, a solution exists, so the vectors span R^3. Therefore, the
    vectors form a basis.
17. Assume c_1(x^2 + x + 1) + c_2(x^2 - x + 1) + c_3(x^2 - 1) = 0. Then

      [ 1  1  1 ] [ c_1 ]
      [ 1 -1  0 ] [ c_2 ] = 0
      [ 1  1 -1 ] [ c_3 ]

    The matrix has determinant equal to 4, so the system has the unique
    solution c_1 = c_2 = c_3 = 0. Thus, the vectors are linearly independent.
    The nonzero determinant also implies a unique solution to the
    nonhomogeneous problem, which implies that the vectors span P_2. Thus, the
    vectors are a basis.
21. Consider c_1(x + 1) + c_2(x + 2) + c_3(x + 3) = 0, or the equivalent system

      [ 1 1 1 ] [ c_1 ]
      [ 1 2 3 ] [ c_2 ] = 0
                [ c_3 ]

    Applying row operations, the coefficient matrix becomes

      [ 1 0 -1 ]
      [ 0 1  2 ]

    Thus, c_1 = c_3 and c_2 = -2c_3, with c_3 a free variable. For example,
    c_1 = 1, c_2 = -2, c_3 = 1 is a solution: (x+1) - 2(x+2) + (x+3) = 0.
    Thus, the vectors are not linearly independent and, therefore, not a basis.
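The dependence relation found in Exercise 21 can be confirmed on coefficient vectors (a sketch, not part of the original manual):

```python
# 1*(x+1) - 2*(x+2) + 1*(x+3) is the zero polynomial.
p1 = (1, 1)   # x + 1 as (constant, x) coefficients
p2 = (2, 1)   # x + 2
p3 = (3, 1)   # x + 3
combo = tuple(1 * a - 2 * b + 1 * c for a, b, c in zip(p1, p2, p3))
print(combo)  # (0, 0): the scalars c1=1, c2=-2, c3=1 give zero
```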
24. (a) [v]_beta = [ 1/2, 3/14, -3/14 ]^T.

    (b) v = [ 1, 4, 1 ]^T.
27. Let v_i be the zero vector in a set of vectors v_1, ..., v_k. Consider the
    linear combination of v_1, ..., v_k given by 7v_i = 0. This linear
    combination gives zero without all of the scalars being zero. Thus, the set
    of vectors is linearly dependent.

28. We'll prove the contrapositive of the given statement. Let B be a linearly
    dependent subset of the vectors v_1, ..., v_n. Then, by Theorem 2.6, there
    exists an element of B that is a linear combination of the other elements
    of B. This implies that this element is a linear combination of the other
    vectors in the set v_1, ..., v_n. By Theorem 2.6, this implies that
    v_1, ..., v_n is linearly dependent.
2.4 Dimension; Nullspace, Row Space, and Column Space
2. (a) Since

      |  1 2 0 |
      | -1 1 1 |  =  32 != 0,
      |  4 0 8 |

   the vectors are linearly independent. Since there are 3 of them, they form a
   basis for R^3.

   (b) Since

      | 3 1 3 |
      | 2 1 1 |  =  0,
      | 1 0 2 |

   the vectors are not linearly independent and therefore not a basis.

   (c) There are 4 vectors, so they cannot form a basis for a vector space of
   dimension 3.

   (d) There are 2 vectors, so they cannot form a basis for a vector space of
   dimension 3.

3. (a) Since

      | 1 2 1 |
      | 1 0 1 |  =  -2 != 0,
      | 1 3 2 |

   the vectors are linearly independent. Since there are 3 of them, they form a
   basis for P_2.

   (b) There are only 2 vectors, not enough to form a basis for P_2, which has
   dimension 3.
4. (c) Setting

      c_1 [ 0 1 ] + c_2 [ 1 0 ] + c_3 [ 1 0 ] + c_4 [ 0 1 ] = [ 0 0 ]
          [ 0 1 ]       [ 0 1 ]       [ 0 0 ]       [ 1 0 ]   [ 0 0 ]

   implies

      c_2 + c_3 = 0
      c_1 + c_4 = 0
      c_4 = 0
      c_1 + c_2 = 0

   => c_1 = 0 => c_2 = 0 => c_3 = 0. Thus, the matrices are linearly
   independent. Since there are 4 of them, they form a basis for M_22(R).

   (d) There are 5 matrices in the set and dim(M_22(R)) = 4, so the set cannot
   be a basis.
7.
      [ 1 -1 1 ]        [ 1 -1 0 ]
      [ 1 -1 0 ]  rref  [ 0  0 1 ]
      [ 1 -1 2 ]        [ 0  0 0 ]

   Thus, x_2 is a free variable, with x_1 - x_2 = 0 and x_3 = 0. Thus, every
   vector in the nullspace is of the form [x_2, x_2, 0]^T, and a basis for the
   nullspace is {[1, 1, 0]^T}. A basis for the row space is
   {[1 -1 0], [0 0 1]}.

      [ 1 -1 1 ]        [ 1  0 0 ]
      [ 1 -1 0 ]  rcef  [ 0  1 0 ]
      [ 1 -1 2 ]        [ 2 -1 0 ]

   So, a basis for the column space is {[1, 0, 2]^T, [0, 1, -1]^T}. Thus, the
   rank of the matrix is 2.
10.
      [ 2 -1  3 4 ]        [ 1 0 -1 3 ]
      [ 1  0 -1 3 ]  rref  [ 0 1 -5 2 ]

    Thus, x_3 and x_4 are free variables, with x_1 - x_3 + 3x_4 = 0 and
    x_2 - 5x_3 + 2x_4 = 0. Therefore, any vector in the nullspace can be
    written as

      x_3 [ 1 ]       [ -3 ]
          [ 5 ] + x_4 [ -2 ]
          [ 1 ]       [  0 ]
          [ 0 ]       [  1 ]

    Therefore, a basis for the nullspace is {[1, 5, 1, 0]^T, [-3, -2, 0, 1]^T}.
    A basis for the row space is {[1 0 -1 3], [0 1 -5 2]}.

      [ 2 -1  3 4 ]        [ 1 0 0 0 ]
      [ 1  0 -1 3 ]  rcef  [ 0 1 0 0 ]

    So, a basis for the column space is {[1, 0]^T, [0, 1]^T}. Thus, the rank of
    the matrix is 2.
14.
      [ 0 1 2 4 ]        [ 1 0 1 0 ]
      [ 2 1 0 2 ]  rref  [ 0 1 2 0 ]
      [ 1 0 1 1 ]        [ 0 0 0 1 ]

    So, the rank of the matrix is 3. Thus, the dimension of the subspace
    spanned by the vectors is 3, so it is all of R^3. A basis for the subspace
    is {[1, 0, 0]^T, [0, 1, 0]^T, [0, 0, 1]^T}.
18. Suppose A is an m x n matrix, with m > n. Let B be the reduced row echelon
form of A. Then RS(A) = RS(B) by Theorem 2.13, since A and B are row
equivalent. Also, since B has only n columns, B has at most n nonzero rows,
and since B is in reduced row echelon form, the nonzero rows of B form a basis
for RS(B), and thus for RS(A). Therefore, the set of rows of A is a subset of
RS(A) with more vectors in it than the basis has. Thus, by Lemma 2.8, the set
of rows is linearly dependent.
2.5 Wronskians
5.
      w(x^2 - 1, x^2 + 1, x + 1) = | x^2 - 1   x^2 + 1   x + 1 |
                                   | 2x        2x        1     |  =  4  !=  0
                                   | 2         2         0     |

   for some x, so the functions are linearly independent.

6.
      w(e^x, e^{2x}, e^{3x}) = | e^x   e^{2x}    e^{3x}  |
                               | e^x   2e^{2x}   3e^{3x} |  =  2e^{6x}  !=  0
                               | e^x   4e^{2x}   9e^{3x} |

   for some x, so the functions are linearly independent.
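The constant Wronskian in Exercise 5 can be spot-checked numerically with hand-computed derivative rows (a sketch, not part of the original manual):

```python
# Wronskian of f1 = x^2 - 1, f2 = x^2 + 1, f3 = x + 1: constant 4.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def wronskian(x):
    return det3([
        [x * x - 1, x * x + 1, x + 1],   # the functions
        [2 * x,     2 * x,     1    ],   # first derivatives
        [2,         2,         0    ],   # second derivatives
    ])

values = [wronskian(x) for x in (-2, 0, 1, 3)]
print(values)  # [4, 4, 4, 4]
```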
12. By a trigonometric identity, for all x,

       cos(2x) = cos^2(x) - sin^2(x)

    Thus, for all x,

       cos(2x) - cos^2(x) + sin^2(x) = 0

    Therefore, the functions are linearly dependent.
14. (a) No, since one (or all) of the functions could be zero on the interval
    (c, d), and hence linearly dependent there, but be, for example, x, x^2,
    x^3, ..., etc. on the rest of the interval and still be linearly
    independent on (a, b).

    (b) Yes, since if there are no scalars (not all zero) that make a linear
    combination of the functions equal to zero on all of (c, d), there cannot
    be scalars that make a linear combination of the functions equal to zero on
    all of any interval (a, b) that includes (c, d).
3 First Order Ordinary Differential Equations

4 Linear Differential Equations

4.1 The Theory of Higher Order Linear Differential Equations

2. 2nd order, linear

3. 4th order, linear
6. (1, ∞)
10. The two functions solve the ODE:

      1/x:  x^2 (2/x^3) + x (-1/x^2) - 1/x = 0
      x:    x^2 (0) + x (1) - x = 0

    The two functions are linearly independent, since

       | 1/x     x |
       | -1/x^2  1 |  =  2/x,

    which is nonzero wherever it is defined. Thus, they are a fundamental set
    of solutions and the general solution is given by

       c_1 (1/x) + c_2 x

    Solving using the initial conditions y(1) = 0, y'(1) = 1,

       c_1 + c_2 = 0
      -c_1 + c_2 = 1

    gives c_1 = -1/2, c_2 = 1/2, which gives the solution -1/(2x) + (1/2)x.
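The Exercise 10 solution can be checked against the ODE and the initial conditions in exact arithmetic, using the hand-computed derivatives of y (a sketch, not part of the original manual):

```python
# y = -1/(2x) + x/2 should satisfy x^2 y'' + x y' - y = 0,
# with y(1) = 0 and y'(1) = 1.
from fractions import Fraction

def y(x):   return -1 / (2 * x) + x / 2
def yp(x):  return 1 / (2 * x ** 2) + Fraction(1, 2)
def ypp(x): return -1 / x ** 3

for x in [Fraction(1), Fraction(2), Fraction(-3), Fraction(1, 4)]:
    residual = x ** 2 * ypp(x) + x * yp(x) - y(x)
    assert residual == 0
print(y(Fraction(1)), yp(Fraction(1)))  # 0 and 1: the initial conditions
```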
11. The three functions solve the ODE:

      e^x:      e^x - 7e^x + 6e^x = 0
      e^{2x}:   8e^{2x} - 7(2e^{2x}) + 6e^{2x} = 0
      e^{-3x}:  -27e^{-3x} - 7(-3e^{-3x}) + 6e^{-3x} = 0

    The Wronskian of the three functions is

       | e^x   e^{2x}    e^{-3x}   |
       | e^x   2e^{2x}   -3e^{-3x} |  =  20,
       | e^x   4e^{2x}   9e^{-3x}  |

    so the functions are linearly independent. The general solution to the ODE
    is

       c_1 e^x + c_2 e^{2x} + c_3 e^{-3x}

    Using the initial conditions y(0) = 1, y'(0) = 0, y''(0) = 0 gives

       c_1 + c_2 + c_3 = 1,  c_1 + 2c_2 - 3c_3 = 0,  c_1 + 4c_2 + 9c_3 = 0,

    which has solution c_1 = 3/2, c_2 = -3/5, and c_3 = 1/10, giving the
    solution to the ODE as

       (3/2) e^x - (3/5) e^{2x} + (1/10) e^{-3x}
15. The general solution is given by

       e^{-x} + c_1 e^x + c_2 e^{2x} + c_3 e^{-3x}

    For the initial conditions y(0) = 1, y'(0) = 0, y''(0) = 0, this gives

       1 + c_1 + c_2 + c_3 = 1,  -1 + c_1 + 2c_2 - 3c_3 = 0,
       1 + c_1 + 4c_2 + 9c_3 = 0,

    which has solution c_1 = 0, c_2 = 1/5, c_3 = -1/5, giving the solution to
    the ODE as

       e^{-x} + (1/5) e^{2x} - (1/5) e^{-3x}
17. {x, x^2} is a fundamental set of solutions:

      x:    (1/2) x^2 (0) - x (1) + x = 0
      x^2:  (1/2) x^2 (2) - x (2x) + x^2 = 0

    If y_p = e^x, then

       (1/2) x^2 y_p'' - x y_p' + y_p = e^x ((1/2) x^2 - x + 1)

    The general solution to this ODE is given by

       e^x + c_1 x + c_2 x^2

    With the initial conditions y(1) = 1, y'(1) = 0, we get the system

       e + c_1 + c_2 = 1,  e + c_1 + 2c_2 = 0,

    which has solution c_1 = 2 - e, c_2 = -1, giving the solution to the ODE as

       e^x + (2 - e) x - x^2
4.2 Homogeneous Constant Coefficient Linear Differential Equations

2. Roots of the polynomial are 3 and 2; the general solution is c_1 e^{3x} + c_2 e^{2x}.

5. Roots of the polynomial are 1, -1, and 4; the general solution is c_1 e^x + c_2 e^{-x} + c_3 e^{4x}.

7. The polynomial has the double root 3; the general solution is c_1 e^{3x} + c_2 x e^{3x}.

10. Roots of the polynomial are 0 and 1, each a double root; the general solution is
c_1 + c_2 x + c_3 e^x + c_4 x e^x.

11. Roots of the polynomial are ±2i; the general solution is c_1 cos(2x) + c_2 sin(2x).

12. Roots of the polynomial are 2 ± i√3; the general solution is
c_1 e^{2x} cos(√3 x) + c_2 e^{2x} sin(√3 x).

13. Roots of the polynomial are 0 and 1 ± i√3; the general solution is

c_1 + c_2 e^x cos(√3 x) + c_3 e^x sin(√3 x)
20. (a) The characteristic polynomial has roots 1 and 1/4; the general solution is

y = c_1 e^x + c_2 e^{x/4}

(b) The initial conditions imply that c_1 + c_2 = 0 and c_1 + (1/4) c_2 = 1. Thus, c_1 = 4/3,
c_2 = -4/3, so the solution to the IVP is

y = (4/3) e^x - (4/3) e^{x/4}
22. (a) The characteristic polynomial has the double root 0 and the root 5; the general
solution is

y = c_1 + c_2 x + c_3 e^{5x}

(b) The initial conditions imply that c_1 - c_2 + c_3 e^{-5} = 1, c_2 + 5c_3 e^{-5} = 0, and
25 c_3 e^{-5} = 2. Thus, c_1 = 13/25, c_2 = -2/5, c_3 = (2/25) e^5, so the solution to the IVP is

y = 13/25 - (2/5) x + (2/25) e^{5(x+1)}
23. (a) The characteristic polynomial has roots 0, -2, and 3; the general solution is

y = c_1 + c_2 e^{-2x} + c_3 e^{3x}

(b) The initial conditions imply that c_1 + c_2 + c_3 = 1, -2c_2 + 3c_3 = 0, and 4c_2 + 9c_3 = 2.
Thus, c_1 = 2/3, c_2 = 1/5, c_3 = 2/15, so the solution to the IVP is

y = 2/3 + (1/5) e^{-2x} + (2/15) e^{3x}
24. The hard way.

(a) The characteristic polynomial has roots 0 and 2 ± i; the general solution is

y = c_1 + c_2 e^{2x} sin(x) + c_3 e^{2x} cos(x)

(b) The initial conditions imply that

c_1 + c_2 e^2 sin(1) + c_3 e^2 cos(1) = 1,
e^2 (2 sin(1) + cos(1)) c_2 + e^2 (2 cos(1) - sin(1)) c_3 = 0,
e^2 (3 sin(1) + 4 cos(1)) c_2 + e^2 (3 cos(1) - 4 sin(1)) c_3 = 2.

Thus, c_1 = 7/5, c_2 = e^{-2} ((4/5) cos(1) - (2/5) sin(1)), and
c_3 = -e^{-2} ((2/5) cos(1) + (4/5) sin(1)), so the solution to the IVP is

y = 7/5 + e^{-2} ((4/5) cos(1) - (2/5) sin(1)) e^{2x} sin(x)
        - e^{-2} ((2/5) cos(1) + (4/5) sin(1)) e^{2x} cos(x)
24. The easy way.

(a) The characteristic polynomial has roots 0 and 2 ± i; the general solution can also be
written

y = c_1 + c_2 e^{2x-2} sin(x - 1) + c_3 e^{2x-2} cos(x - 1)

(b) The initial conditions imply that c_1 + c_3 = 1, c_2 + 2c_3 = 0, and 4c_2 + 3c_3 = 2. Thus,
c_1 = 7/5, c_2 = 4/5, c_3 = -2/5, so the solution to the IVP is

y = 7/5 + (4/5) e^{2x-2} sin(x - 1) - (2/5) e^{2x-2} cos(x - 1)

The two solutions can be seen to be identical by the angle-sum identities

sin(A - B) = sin(A) cos(B) - cos(A) sin(B)   and   cos(A - B) = cos(A) cos(B) + sin(A) sin(B)
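As a quick numerical cross-check (not in the original manual), the "hard way" and "easy way" answers can be compared at a few sample points; the angle-sum identities predict they agree exactly.

```python
import math

# Compare the two forms of the Problem 24 solution numerically.
def hard(x):
    c2 = math.exp(-2) * ((4/5)*math.cos(1) - (2/5)*math.sin(1))
    c3 = -math.exp(-2) * ((2/5)*math.cos(1) + (4/5)*math.sin(1))
    return 7/5 + c2*math.exp(2*x)*math.sin(x) + c3*math.exp(2*x)*math.cos(x)

def easy(x):
    return (7/5 + (4/5)*math.exp(2*x - 2)*math.sin(x - 1)
                - (2/5)*math.exp(2*x - 2)*math.cos(x - 1))

gap = max(abs(hard(x) - easy(x)) for x in [0.0, 0.5, 1.0, 2.0])
print(gap)
```

The gap is at the level of floating-point roundoff, confirming the two forms coincide.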
29. The general solution is c_1 e^{2x} + c_2 e^{3x} + c_3 x e^{3x} + c_4 e^{-3x} + c_5 x e^{-3x}.
30. The general solution is

c_1 + c_2 x + c_3 x^2 + c_4 e^{4x} + c_5 x e^{4x}
  + e^x [ c_6 cos((√2/2) x) + c_7 sin((√2/2) x) + c_8 x cos((√2/2) x) + c_9 x sin((√2/2) x) ]
  + e^{-x/2} [ c_10 cos((3/2) x) + c_11 sin((3/2) x) ]
40. With the roots r_1, . . . , r_n real and distinct, the condition for every solution to decay
to 0 as x → ∞ is that r_i < 0 for all i.

41. With the roots r_1, . . . , r_n real, but not necessarily distinct, the condition for every
solution to decay to 0 as x → ∞ is again that r_i < 0 for all i.

42. With the roots r_1, . . . , r_n possibly complex, the condition for every solution to decay to
0 as x → ∞ is that Re(r_i) < 0 (the real part is negative) for all i.
4.3 The Method of Undetermined Coefficients

1. The characteristic polynomial of the homogeneous equation has roots 3 and -2, so a
fundamental set for the homogeneous equation is e^{3x}, e^{-2x}. Trying a particular solution of
the form A e^{2x} yields

4A e^{2x} - 2A e^{2x} - 6A e^{2x} = 3 e^{2x},

or A = -3/4. Thus, the general solution is

y = -(3/4) e^{2x} + c_1 e^{3x} + c_2 e^{-2x}
4. The characteristic polynomial of the homogeneous equation has roots -1 and -1/2, so a
fundamental set for the homogeneous equation is e^{-x}, e^{-x/2}. Trying a particular solution
of the form A cos(x) + B sin(x) yields

-2A cos(x) - 2B sin(x) - 3A sin(x) + 3B cos(x) + A cos(x) + B sin(x) = cos(x),

or -A + 3B = 1 and -B - 3A = 0, which has solution A = -1/10, B = 3/10. Thus, the general
solution is

y = -(1/10) cos(x) + (3/10) sin(x) + c_1 e^{-x} + c_2 e^{-x/2}
9. The characteristic polynomial of the homogeneous equation has roots ±2i, so a fundamental
set for the homogeneous equation is cos(2x), sin(2x). Trying a particular solution of the form
A x cos(2x) + B x sin(2x) yields

-16A sin(2x) + 16B cos(2x) - 16Ax cos(2x) - 16Bx sin(2x) + 16Ax cos(2x) + 16Bx sin(2x) = 3 cos(2x),

or -16A = 0 and 16B = 3, which has solution A = 0, B = 3/16. Thus, the general solution is

y = (3/16) x sin(2x) + c_1 cos(2x) + c_2 sin(2x)
11. The characteristic polynomial of the homogeneous equation has roots 0 and -8, so a
fundamental set for the homogeneous equation is 1, e^{-8x}. Trying a particular solution of the
form A x^3 + B x^2 + C x yields

6Ax + 2B + 24Ax^2 + 16Bx + 8C = 2x^2 - 7x + 3,

or 24A = 2, 6A + 16B = -7, and 2B + 8C = 3, which has solution A = 1/12, B = -15/32,
C = 63/128. Thus, the general solution is

y = (1/12) x^3 - (15/32) x^2 + (63/128) x + c_1 + c_2 e^{-8x}
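The undetermined coefficients in Problem 11 can be checked exactly: both sides of y'' + 8y' = 2x^2 - 7x + 3 are degree-2 polynomials, so agreement at more than three points forces identity. The short check below is a sketch added here, not part of the original manual.

```python
from fractions import Fraction as F

# Exact check of Problem 11: for y = A x^3 + B x^2 + C x with the coefficients
# found above, y'' + 8 y' must equal 2x^2 - 7x + 3.
A, B, C = F(1, 12), F(-15, 32), F(63, 128)

def lhs(x):
    ypp = 6*A*x + 2*B            # second derivative of y
    yp = 3*A*x*x + 2*B*x + C     # first derivative of y
    return ypp + 8*yp

# Degree-2 polynomials agreeing at four points are identical.
ok = all(lhs(x) == 2*x*x - 7*x + 3 for x in [F(0), F(1), F(-2), F(1, 3)])
print(ok)
```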
14. The characteristic polynomial of the homogeneous equation has roots 2 and -4, so a
fundamental set for the homogeneous equation is e^{2x}, e^{-4x}. Trying particular solutions of
the forms A x^2 + B x + C, D x e^{2x}, and F cos(4x) + G sin(4x) yields

6A + 12Ax + 6B - 24Ax^2 - 24Bx - 24C = 3x^2
12Dxe^{2x} + 12De^{2x} + 6De^{2x} + 12Dxe^{2x} - 24Dxe^{2x} = -5e^{2x}
-48F cos(4x) - 48G sin(4x) - 24F sin(4x) + 24G cos(4x) - 24F cos(4x) - 24G sin(4x) = -6 sin(4x),

or -24A = 3, 12A - 24B = 0, 6A + 6B - 24C = 0, 18D = -5, -72F + 24G = 0, and
-72G - 24F = -6, which has solution A = -1/8, B = -1/16, C = -3/64, D = -5/18, F = 1/40,
G = 3/40. Thus, the general solution is

y = -(1/8) x^2 - (1/16) x - 3/64 - (5/18) x e^{2x} + (1/40) cos(4x) + (3/40) sin(4x)
      + c_1 e^{2x} + c_2 e^{-4x}
18. The characteristic polynomial of the homogeneous equation has the double root -2, so a
fundamental set for the homogeneous equation is e^{-2x}, x e^{-2x}. Trying particular solutions
of the forms A x + B and C cos(3x) + D sin(3x) yields

4A + 4Ax + 4B = 2x
-9C cos(3x) - 9D sin(3x) - 12C sin(3x) + 12D cos(3x) + 4C cos(3x) + 4D sin(3x) = -sin(3x),

or 4A = 2, 4A + 4B = 0, -5C + 12D = 0, and -5D - 12C = -1, which has solution A = 1/2,
B = -1/2, C = 12/169, D = 5/169. Thus, the general solution is

y = (1/2) x - 1/2 + (12/169) cos(3x) + (5/169) sin(3x) + c_1 e^{-2x} + c_2 x e^{-2x}

Using the initial conditions y(0) = 0 and y'(0) = 1 to find c_1 and c_2,

-1/2 + 12/169 + c_1 = 0
1/2 + 15/169 - 2c_1 + c_2 = 1

gives c_1 = 145/338 and c_2 = 33/26. So, the solution to the IVP is

y = (1/2) x - 1/2 + (12/169) cos(3x) + (5/169) sin(3x) + (145/338) e^{-2x} + (33/26) x e^{-2x}
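Since the scanned constants in Problem 18 are hard to read, the reconstructed IVP solution is worth verifying numerically. The check below (a sketch, not from the manual) assumes the ODE y'' + 4y' + 4y = 2x - sin(3x) with y(0) = 0, y'(0) = 1, and uses central finite differences for the derivatives.

```python
import math

# Spot-check the reconstructed Problem 18 solution against the assumed ODE
# y'' + 4y' + 4y = 2x - sin(3x) and the initial conditions y(0)=0, y'(0)=1.
def y(x):
    return (0.5*x - 0.5 + (12/169)*math.cos(3*x) + (5/169)*math.sin(3*x)
            + (145/338)*math.exp(-2*x) + (33/26)*x*math.exp(-2*x))

h = 1e-5
def d1(f, x): return (f(x+h) - f(x-h)) / (2*h)
def d2(f, x): return (f(x+h) - 2*f(x) + f(x-h)) / (h*h)

max_resid = max(abs(d2(y, x) + 4*d1(y, x) + 4*y(x) - (2*x - math.sin(3*x)))
                for x in [0.3, 1.0, 2.5])
print(y(0), d1(y, 0), max_resid)
```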
26. 0, 4, ±4i, and ±2i are not roots of the characteristic polynomial, so the particular
solution y_p can be of the form

Ax + B + Ce^{4x} + D cos(4x) + F sin(4x) + Gx cos(2x) + Hx sin(2x) + J cos(2x) + L sin(2x)
4.4 Variation of Parameters

1. The roots of the corresponding characteristic polynomial are 1 and -1, making e^x and e^{-x}
a fundamental set of solutions to the homogeneous equation.

(a) Inputting the solution y_p = Ax^2 + Bx + C, we find

2A - Ax^2 - Bx - C = 3x^2 - 1,

which leads to A = -3, B = 0, and C = -5, for a particular solution of y_p = -3x^2 - 5.

(b) y_p(x) = u_1(x) e^x + u_2(x) e^{-x}, where

u_1(x) = ∫ -e^{-x} (3x^2 - 1) / w(e^x, e^{-x}) dx   and   u_2(x) = ∫ e^x (3x^2 - 1) / w(e^x, e^{-x}) dx,

where w is the Wronskian. w(e^x, e^{-x}) = -2, so

u_1(x) = (1/2) ∫ (3x^2 - 1) e^{-x} dx = -(1/2) (3x^2 e^{-x} + 6x e^{-x} + 5e^{-x})

and

u_2(x) = -(1/2) ∫ (3x^2 - 1) e^x dx = -(1/2) (3x^2 e^x - 6x e^x + 5e^x)

So,

y_p(x) = -(1/2)(3x^2 e^{-x} + 6x e^{-x} + 5e^{-x}) e^x - (1/2)(3x^2 e^x - 6x e^x + 5e^x) e^{-x}
       = -(3/2) x^2 - 3x - 5/2 - (3/2) x^2 + 3x - 5/2
       = -3x^2 - 5
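The antiderivatives found in part (b) can be checked by differentiating them back, and the resulting particular solution compared against part (a). This numerical sketch was added here for verification; it is not part of the original manual.

```python
import math

# Check the variation-of-parameters pieces from Problem 1(b):
# u1' should equal -y2*g/W and u2' should equal y1*g/W, with
# y1 = e^x, y2 = e^{-x}, W = -2, g = 3x^2 - 1.
def u1(x): return -0.5 * (3*x*x + 6*x + 5) * math.exp(-x)
def u2(x): return -0.5 * (3*x*x - 6*x + 5) * math.exp(x)

def u1_target(x): return -math.exp(-x) * (3*x*x - 1) / (-2.0)
def u2_target(x): return math.exp(x) * (3*x*x - 1) / (-2.0)

h = 1e-6
def d(f, x): return (f(x+h) - f(x-h)) / (2*h)

pts = [-1.0, 0.2, 1.5]
err = max(max(abs(d(u1, x) - u1_target(x)), abs(d(u2, x) - u2_target(x))) for x in pts)
# The assembled particular solution should match part (a): y_p = -3x^2 - 5.
yp_err = max(abs(u1(x)*math.exp(x) + u2(x)*math.exp(-x) - (-3*x*x - 5)) for x in pts)
print(err, yp_err)
```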
2. The roots of the characteristic polynomial are 3 and -2, making e^{3x} and e^{-2x} a
fundamental set of solutions to the homogeneous equation. w(e^{3x}, e^{-2x}) = -5e^x, so, then,
y_p = u_1(x) e^{3x} + u_2(x) e^{-2x}, with

u_1(x) = ∫ -e^{-2x} · 4e^x / (-5e^x) dx = -(2/5) e^{-2x}

and

u_2(x) = ∫ e^{3x} · 4e^x / (-5e^x) dx = -(4/15) e^{3x}

so,

y_p(x) = -(2/5) e^{-2x} e^{3x} - (4/15) e^{3x} e^{-2x} = -(2/3) e^x

So, the general solution is

y = -(2/3) e^x + c_1 e^{3x} + c_2 e^{-2x}
3. The characteristic polynomial has a double root at 3, so a fundamental set of solutions is
e^{3x}, x e^{3x}. Thus, w(e^{3x}, x e^{3x}) = e^{6x}. Then, if y_p(x) = u_1(x) e^{3x} + u_2(x) x e^{3x},

u_1(x) = ∫ -3e^{-3x} · x e^{3x} / e^{6x} dx = (1/2) x e^{-6x} + (1/12) e^{-6x}

and

u_2(x) = ∫ 3e^{-3x} · e^{3x} / e^{6x} dx = -(1/2) e^{-6x}

Thus,

y_p(x) = ((1/2) x e^{-6x} + (1/12) e^{-6x}) e^{3x} - (1/2) e^{-6x} x e^{3x} = (1/12) e^{-3x}

So, the general solution is

y = (1/12) e^{-3x} + c_1 e^{3x} + c_2 x e^{3x}
5 Linear Transformations and Eigenvalues and Eigenvectors

5.1 Linear Transformations

3. Not a linear transformation: T(x_1 + x_2, y_1 + y_2, z_1 + z_2) ≠ T(x_1, y_1, z_1) + T(x_2, y_2, z_2).

4. Linear transformation

7. Linear transformation

12. Not a linear transformation: T(A + B) = det(A + B) ≠ det(A) + det(B) = T(A) + T(B).
18.

[ 1 1 3 1 ; 2 3 1 2 ; 3 7 5 3 ]
20. Let

A = [ 1 1 0 ; 1 0 1 ; 0 1 1 ]

Then,

A^{-1} = [ 1 1 1 ; 0 1 1 ; 1 0 1 ]

(a) A^{-1} (2, 1, 4)^T = (5, -3, 6)^T. So,

T((2, 1, 4)) = 5 T((1, 1, 0)) - 3 T((1, 0, 1)) + 6 T((0, 1, 1)),

or

T((2, 1, 4)) = (5, 0, 5, 0) + (-6, -3, 0, 0) + (6, 0, 0, 6) = (5, -3, 5, 6)

(b) A^{-1} (x, y, z)^T = (x - y - z, y + z, x - z)^T. So,

T((x, y, z)) = (x - y - z) T((1, 1, 0)) + (y + z) T((1, 0, 1)) + (x - z) T((0, 1, 1)),

or

T((x, y, z)) = (x - y - z, 0, x - y - z, 0) + (2y + 2z, y + z, 0, 0) + (x - z, 0, 0, x - z)
             = (2x + y, y + z, x - y - z, x - z)
33. Suppose (x, y)^T = (r cos θ, r sin θ)^T. Then,

T (x, y)^T = ( r cos(θ + φ) ; r sin(θ + φ) )
           = ( r cos θ cos φ - r sin θ sin φ ; r sin θ cos φ + r cos θ sin φ )
           = ( x cos φ - y sin φ ; y cos φ + x sin φ )
           = [ cos φ  -sin φ ; sin φ  cos φ ] (x, y)^T
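The rotation formula just derived is easy to exercise in code. The sketch below (added for illustration, not from the manual) applies the matrix to a point and checks that two successive rotations compose to a rotation by the summed angle.

```python
import math

# Apply the rotation matrix [[cos a, -sin a], [sin a, cos a]] from Problem 33.
def rotate(x, y, a):
    return (x*math.cos(a) - y*math.sin(a), y*math.cos(a) + x*math.sin(a))

# Rotating (1, 0) by 90 degrees should land (up to rounding) on (0, 1).
px, py = rotate(1.0, 0.0, math.pi/2)

# Composition check: rotating by 0.3 then 0.5 equals rotating by 0.8.
qx, qy = rotate(*rotate(3.0, 4.0, 0.3), 0.5)
rx, ry = rotate(3.0, 4.0, 0.8)
print((px, py), (qx - rx, qy - ry))
```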
35. For any linear transformation T : V → V, either dim(ker(T)) = 0 or dim(ker(T)) > 0, but
not both can be true. The second inequality is statement 2. We show that dim(ker(T)) = 0
implies statement 1.

Let v_1, . . . , v_n be a basis of the n-dimensional vector space V. Then, T(v_1), . . . , T(v_n) are a
set of n vectors in V. Assume there are scalars {c_i} so that

c_1 T(v_1) + c_2 T(v_2) + . . . + c_n T(v_n) = 0

Then,

T(c_1 v_1 + c_2 v_2 + . . . + c_n v_n) = 0

Since dim(ker(T)) = 0, this implies that c_1 v_1 + . . . + c_n v_n = 0, and consequently, that
c_1 = c_2 = · · · = c_n = 0, since v_1, . . . , v_n is a basis. Thus, T(v_1), . . . , T(v_n) are linearly
independent vectors in V. Since there are n of them, they also form a basis for V. Thus, they
also span V. Therefore, for any vector v ∈ V, there exist {a_i} so that

v = a_1 T(v_1) + a_2 T(v_2) + . . . + a_n T(v_n) = T(a_1 v_1 + . . . + a_n v_n)

Defining u = a_1 v_1 + . . . + a_n v_n gives T(u) = v and statement 1.
36. If T : V → W is 1-1, then T(x_1) = T(x_2) ⟹ x_1 = x_2. Suppose u ∈ ker(T). Then,
T(u) = 0 = T(0) ⟹ u = 0. So, the kernel is only the zero vector.

If ker(T) is only the zero vector, suppose T(x_1) = T(x_2) for x_1, x_2 ∈ V. Then, T(x_1 - x_2) = 0,
which implies x_1 - x_2 ∈ ker(T). But, zero is the only vector in the kernel of T, so x_1 - x_2 = 0,
or x_1 = x_2. So, T is 1-1.
5.2 The Algebra of Linear Transformations

5. ST (x, y)^T = S((x + 3y, x - y)^T) = (x + 7y, 3x + y)^T

6. TS (x, y)^T = T((2x - y, x + 2y)^T) = (5x + 5y, x - 3y)^T

8. TS(ax + b) = T(ax - 2a + b) = ax + 2a - 2a + b = ax + b

11. Basis = {e^{3x}, e^{x}}

14. Basis = {sin x, cos x, x sin x, x cos x}
17. Let f(x), h(x) ∈ C^∞. Then,

T_{g(x)}(f + h) = g(x)(f(x) + h(x)) = g(x)f(x) + g(x)h(x) = T_{g(x)}(f) + T_{g(x)}(h)

and

T_{g(x)}(cf) = g(x)(c f(x)) = c g(x) f(x) = c T_{g(x)}(f)

So, T_{g(x)} is a linear transformation.

20. (a) If u ∈ ker(T), then T(u) = 0, so T(u + v) = T(u) + T(v) = T(v).

(b) If T(u) = T(v), then T(u) - T(v) = 0 ⟹ T(u - v) = 0, so that u - v ∈ ker(T).

23. (a) A = [ 1 3 ; 1 -1 ],   B = [ 2 -1 ; 1 2 ]

(b) C = [ 1 7 ; 3 1 ],   BA = [ 2 -1 ; 1 2 ][ 1 3 ; 1 -1 ] = [ 1 7 ; 3 1 ]
5.3 Matrices for Linear Transformations

1. (a) [T]_β = [ 1 -1 ; -1 -1 ]

(b) The change of basis from γ to β is given by P = [ 1 2 ; 1 1 ].

(c) The change of basis from β to γ is given by P^{-1} = [ -1 2 ; 1 -1 ].

(d) [T]_γ = P^{-1} [T]_β P = [ -4 -7 ; 2 4 ].

4. (Assuming a spurious x^2 in the fourth row.)
(a) [T]_β = [ 1 1 1 ; 3 15 16 ; 3 13 14 ]

(b) The change of basis from γ to β is given by P = [ 1 2 0 ; 1 3 1 ; 1 2 1 ].

(c) The change of basis from β to γ is given by P^{-1} = [ 1 2 2 ; 0 1 1 ; 1 4 5 ].

(d) [T]_γ = P^{-1} [T]_β P = [ 1 1 0 ; 0 2 0 ; 1 0 1 ].
5. (a) [T]_β = [ 1 0 0 ; 2 1 0 ; 1 1 1 ]

(b) The change of basis from γ to β is given by P = [ 1 1 0 ; 0 0 1 ; -1 1 1 ].

(c) The change of basis from β to γ is given by
P^{-1} = [ 1/2 1/2 -1/2 ; 1/2 -1/2 1/2 ; 0 1 0 ].

(d) [T]_γ = P^{-1} [T]_β P = [ 3/2 1/2 -1/2 ; -1/2 1/2 1/2 ; 2 2 1 ].

(e) [v]_β = (1, 1, 1)^T.  [v]_γ = P^{-1} [v]_β = (1/2, 1/2, 1)^T.

(f) [T(v)]_γ = [T]_γ [v]_γ = (1/2, 1/2, 3)^T.

(g) T(v) = (1/2)(x^2 - 1) + (1/2)(x^2 + 1) + 3(x + 1) = x^2 + 3x + 3.
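The change-of-basis computation in Problem 5(d) is a good candidate for an exact check: with rational arithmetic, P^{-1}P should come out to the identity and P^{-1}[T]_β P to the matrix above. The sketch below restates those matrices; it is not part of the original manual.

```python
from fractions import Fraction as F

# Verify [T]_gamma = P^{-1} [T]_beta P for Problem 5 with exact rationals.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T_beta = [[F(1), F(0), F(0)], [F(2), F(1), F(0)], [F(1), F(1), F(1)]]
P      = [[F(1), F(1), F(0)], [F(0), F(0), F(1)], [F(-1), F(1), F(1)]]
P_inv  = [[F(1, 2), F(1, 2), F(-1, 2)],
          [F(1, 2), F(-1, 2), F(1, 2)],
          [F(0), F(1), F(0)]]

identity = matmul(P_inv, P)
T_gamma = matmul(matmul(P_inv, T_beta), P)
print(identity)
print(T_gamma)
```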
7. (a) [T]_β = [ 0 -1 ; 1 0 ]

(b) The change of basis from γ to β is given by P = [ 1 1 ; 1 -1 ].

(c) The change of basis from β to γ is given by P^{-1} = [ 1/2 1/2 ; 1/2 -1/2 ].

(d) [T]_γ = P^{-1} [T]_β P = [ 0 1 ; -1 0 ].

(e) [v]_β = (2, 3)^T.  [v]_γ = P^{-1} [v]_β = (5/2, -1/2)^T.

(f) [T(v)]_γ = [T]_γ [v]_γ = (-1/2, -5/2)^T.

(g) T(v) = -(1/2)(sin(x) + cos(x)) - (5/2)(sin(x) - cos(x)) = -3 sin(x) + 2 cos(x).
9. (a) [T]_β = [ 1 0 -1 ; 1 1 0 ; 0 1 1 ]

(b) [T(v)]_β = [T]_β [v]_β = [ 1 0 -1 ; 1 1 0 ; 0 1 1 ] (1, 2, 3)^T = (-2, 3, 5)^T

(c) T(v) = -2v_1 + 3v_2 + 5v_3.
5.4 Eigenvalues and Eigenvectors

5. The eigenvalues are 2√3 and -2√3, with basis eigenvectors (√3/2, 1)^T and (-√3/2, 1)^T,
respectively.

8. The eigenvalues are 2 and 1, with basis eigenvectors (1, 1, 1)^T for 2 and (1, 0, 0)^T for 1.

9. The eigenvalue is 4 with basis eigenvector (0, 0, 1)^T.

16. The eigenvalues are 2i and -2i with basis eigenvectors (1 - 2i, 1)^T and (1 + 2i, 1)^T,
respectively.

17. The eigenvalues are 1, i, and -i with basis eigenvectors (1, 0, 0)^T, (0, 1, i)^T, and
(0, 1, -i)^T, respectively.
20. If D is a diagonal matrix, the eigenvalues of D are the diagonal entries of D. The bases for
the eigenspaces are just the standard basis vectors for R^n.

26. If A is a square matrix, then the eigenvalues of A are the λ so that det(λI - A) = 0. Since
det(B) = det(B^T) for any square matrix B, det(λI - A) = det((λI - A)^T) = det(λI - A^T).
Thus, det(λI - A) = 0 if and only if det(λI - A^T) = 0, so A and A^T have the same
eigenvalues.
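For a 2-by-2 matrix the argument in Problem 26 can be made concrete: the characteristic polynomial is λ^2 - tr(A)λ + det(A), and trace and determinant are unchanged by transposition. The example matrix below is an arbitrary illustration chosen here, not one from the text.

```python
# Illustrate Problem 26 in the 2x2 case: A and A^T share the characteristic
# polynomial lambda^2 - tr(A) lambda + det(A), hence the same eigenvalues.
def char_poly_2x2(M):
    (a, b), (c, d) = M
    return (1, -(a + d), a*d - b*c)  # coefficients of lambda^2, lambda, 1

A = [[2, -1], [-3, 4]]
At = [[A[j][i] for j in range(2)] for i in range(2)]
print(char_poly_2x2(A), char_poly_2x2(At))
```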
5.5 Similar Matrices, Diagonalization, and Jordan Form

5. A is diagonalizable:

D = [ √12 0 ; 0 -√12 ],   P = [ √3/2 -√3/2 ; 1 1 ]

8. A is not diagonalizable; there are not enough eigenvectors.

9. A is not diagonalizable; there are not enough eigenvectors.

16. A is diagonalizable:

D = [ 2i 0 ; 0 -2i ],   P = [ 1 - 2i  1 + 2i ; 1 1 ]

17. A is diagonalizable:

D = [ 1 0 0 ; 0 i 0 ; 0 0 -i ],   P = [ 1 0 0 ; 0 1 1 ; 0 i -i ]
31. Suppose A and B are similar matrices. Then, A = P^{-1} B P for some invertible matrix P.
Thus, A - λI = P^{-1} B P - λI for any scalar λ. Therefore,

det(A - λI) = det(P^{-1} B P - λI)
            = det( P^{-1} (B P - P λI) )
            = det( P^{-1} (B - λ P I P^{-1}) P )
            = det(P^{-1}) det(B - λI) det(P)
            = det(B - λI)

Thus, A and B have the same characteristic polynomial.

36. Let A be an n × n matrix with n distinct eigenvalues. Since the bases of eigenspaces
corresponding to different eigenvalues are linearly independent and A has n eigenspaces, there
are n linearly independent eigenvectors of A. Thus, A is diagonalizable by Theorem 5.18.
6 Systems of Linear Differential Equations

6.1 The Theory of Systems of Linear Differential Equations

1. With Y = ( c_1 e^{2x} + c_2 e^{3x} ; -2c_1 e^{2x} - c_2 e^{3x} ),

Y' = ( 2c_1 e^{2x} + 3c_2 e^{3x} ; -4c_1 e^{2x} - 3c_2 e^{3x} )

and

[ 4 1 ; -2 1 ] Y = ( 4c_1 e^{2x} + 4c_2 e^{3x} - 2c_1 e^{2x} - c_2 e^{3x} ;
                     -2c_1 e^{2x} - 2c_2 e^{3x} - 2c_1 e^{2x} - c_2 e^{3x} ) = Y'
4. W( (e^{2x}, 0, 0)^T, (0, 3 cos(5x), -3 sin(5x))^T, (0, sin(5x), cos(5x))^T )
   = e^{2x} (3 cos^2(5x) + 3 sin^2(5x)) = 3e^{2x} ≠ 0 for all x.

Therefore, the functions are linearly independent on the whole interval (-∞, ∞).
5. The general solution to the problem is ( c_1 e^x ; c_2 e^{2x} ). A matrix of fundamental
solutions is [ e^x 0 ; 0 e^{2x} ].

9. Y(0) = (2, 1)^T gives the specific solution ( 2e^x ; e^{2x} ).
17. Being a matrix, M, of fundamental solutions implies that each column is a solution to the
system and the columns are linearly independent. Since det(M) = e^{(d_1 + d_2 + ... + d_n) x} ≠ 0
for all x, the columns are linearly independent. Consider the i-th column of M:
Y_i = (0, . . . , e^{d_i x}, . . . , 0)^T, with e^{d_i x} in the i-th entry. Then,

Y_i' = (0, . . . , d_i e^{d_i x}, . . . , 0)^T   and   D Y_i = (0, . . . , d_i e^{d_i x}, . . . , 0)^T,

so Y_i' = D Y_i and Y_i is a solution. Since this is true for all i, M is a matrix of fundamental
solutions.
27. (AB)' = [ 4e^{2x} + 8e^{2x}    2e^x - 2xe^x ;
              8x - 4e^{4x}    2e^x + 4x - 2e^x - 2xe^x + e^{3x} + 3xe^{3x} ]
6.2 Homogeneous Systems with Constant Coefficients - Diagonalizable

3. From before, the eigenvalues are 2√3 and -2√3, with basis eigenvectors (√3/2, 1)^T and
(-√3/2, 1)^T, respectively. Therefore, the general solution to Y' = AY is

c_1 ( (√3/2) e^{2√3 x} ; e^{2√3 x} ) + c_2 ( -(√3/2) e^{-2√3 x} ; e^{-2√3 x} )
  = ( (√3/2) c_1 e^{2√3 x} - (√3/2) c_2 e^{-2√3 x} ; c_1 e^{2√3 x} + c_2 e^{-2√3 x} )
5. The eigenvalues are 1, 2, and 3, with corresponding basis eigenvectors (0, 1, 0)^T,
(1, -2, -2)^T, and (1, -1, -1)^T, respectively. Therefore, the general solution to Y' = AY is

c_1 ( 0 ; e^x ; 0 ) + c_2 ( e^{2x} ; -2e^{2x} ; -2e^{2x} ) + c_3 ( e^{3x} ; -e^{3x} ; -e^{3x} )
  = ( c_2 e^{2x} + c_3 e^{3x} ; c_1 e^x - 2c_2 e^{2x} - c_3 e^{3x} ; -2c_2 e^{2x} - c_3 e^{3x} )
11. The eigenvalues are 1, i, and -i with basis eigenvectors (1, 0, 0)^T, (0, 1, i)^T, and
(0, 1, -i)^T, respectively. Therefore the general solution to Y' = AY is

c_1 ( e^x ; 0 ; 0 ) + c_2 ( 0 ; cos(x) ; -sin(x) ) + c_3 ( 0 ; sin(x) ; cos(x) )
  = ( c_1 e^x ; c_2 cos(x) + c_3 sin(x) ; -c_2 sin(x) + c_3 cos(x) )
15. Y(0) = (0, 1, -1)^T = ( c_2 + c_3 ; c_1 - 2c_2 - c_3 ; -2c_2 - c_3 ) ⟹ c_1 = 2, c_2 = 1,
and c_3 = -1. So, the solution is

Y(x) = ( e^{2x} - e^{3x} ; 2e^x - 2e^{2x} + e^{3x} ; -2e^{2x} + e^{3x} )
22. The eigenvalues are solutions to 0 = λ^2 - 16 - 9 = λ^2 - 25 = (λ - 5)(λ + 5), so the
eigenvalues are ±5. The corresponding basis eigenvectors are (3, 1)^T and (1, -3)^T. Therefore
the general solution is

c_1 ( 3e^{5x} ; e^{5x} ) + c_2 ( e^{-5x} ; -3e^{-5x} )
  = ( 3c_1 e^{5x} + c_2 e^{-5x} ; c_1 e^{5x} - 3c_2 e^{-5x} )
25. The eigenvalues are 1 ± 3i with corresponding basis eigenvectors (3i, 1)^T and (-3i, 1)^T.
The general solution is then

c_1 ( -3e^x sin(3x) ; e^x cos(3x) ) + c_2 ( 3e^x cos(3x) ; e^x sin(3x) )
  = ( -3c_1 e^x sin(3x) + 3c_2 e^x cos(3x) ; c_1 e^x cos(3x) + c_2 e^x sin(3x) )
28. Y' = [ 2 -1 0 ; -3 4 0 ; 5 6 3 ] Y. The eigenvalues are 1, 3, and 5 with basis eigenvectors
(2, 2, -11)^T, (0, 0, 1)^T, and (2, -6, -13)^T. Thus, the general solution is

( 2c_1 e^x + 2c_3 e^{5x} ; 2c_1 e^x - 6c_3 e^{5x} ; -11c_1 e^x + c_2 e^{3x} - 13c_3 e^{5x} )

Thus,

(5, -3, 4)^T = Y(0) = ( 2c_1 + 2c_3 ; 2c_1 - 6c_3 ; -11c_1 + c_2 - 13c_3 )
  ⟹ c_3 = 1, c_1 = 3/2, c_2 = 67/2.

So, the solution is

( 3e^x + 2e^{5x} ; 3e^x - 6e^{5x} ; -(33/2) e^x + (67/2) e^{3x} - 13e^{5x} )
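Because the signs of the matrix and eigenvectors in Problem 28 are hard to read in the scan, the eigenpairs as reconstructed here can be checked directly against the definition A v = λ v. This sketch is not part of the original manual; the sign choices below are the reconstruction's assumptions.

```python
# Sanity-check the reconstructed eigenpairs of Problem 28: A v = lambda v.
A = [[2, -1, 0], [-3, 4, 0], [5, 6, 3]]  # coefficient matrix as reconstructed

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

pairs = [(1, [2, 2, -11]), (3, [0, 0, 1]), (5, [2, -6, -13])]
ok = all(matvec(A, v) == [lam*x for x in v] for lam, v in pairs)
print(ok)
```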
30. The condition on the eigenvalues is that the real parts of the eigenvalues are strictly
negative (Re(λ) < 0).
6.3 Homogeneous Systems with Constant Coefficients - Nondiagonalizable

17. The matrix has eigenvalue 6 with multiplicity 2 and a single basis eigenvector (1, 1)^T.
Using Schur's Theorem and the Schur Decomposition, we can use Theorem 9.7 to find an
orthogonal matrix P so that P^T A P is upper triangular. Using (1, 1)^T as the start, the
matrix P found using Theorem 9.7 is

P = [ 1/√2 1/√2 ; 1/√2 -1/√2 ]

with P^T = P. Thus, U = P^T A P = [ 6 2 ; 0 6 ]. The system Z' = UZ gives

z_1'(t) = 6z_1(t) + 2z_2(t)
z_2'(t) = 6z_2(t)

which has general solution z_2(t) = c_2 e^{6t}, z_1(t) = c_1 e^{6t} + 2c_2 t e^{6t}, or
c_1 ( e^{6t} ; 0 ) + c_2 ( 2t e^{6t} ; e^{6t} ). Thus, the general solution to the original
problem is PZ, or

(1/√2) ( (c_1 + c_2) e^{6t} + 2c_2 t e^{6t} ; (c_1 - c_2) e^{6t} + 2c_2 t e^{6t} )

Using the initial value Y(0) = (1, 0)^T, we get the system of equations

(1, 0)^T = (1/√2) ( c_1 + c_2 ; c_1 - c_2 )

Thus, c_1 = c_2 = √2/2. Therefore, the solution is

( x(t) ; y(t) ) = ( e^{6t} + t e^{6t} ; t e^{6t} )
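The coefficient matrix A is not printed in the surviving text of Problem 17, but A = P U P^T with the Schur factors above gives [[7, -1], [1, 5]]; take that as an assumption. The sketch below (added here, not from the manual) confirms numerically that the final solution solves Y' = AY with that matrix.

```python
import math

# Assumed coefficient matrix, reconstructed as A = P U P^T from the Schur
# factors P = (1/sqrt 2)[[1,1],[1,-1]] and U = [[6,2],[0,6]].
A = [[7.0, -1.0], [1.0, 5.0]]

def x(t): return math.exp(6*t) + t*math.exp(6*t)
def y(t): return t*math.exp(6*t)

h = 1e-7
def d(f, t): return (f(t+h) - f(t-h)) / (2*h)

resid = max(max(abs(d(x, t) - (A[0][0]*x(t) + A[0][1]*y(t))),
                abs(d(y, t) - (A[1][0]*x(t) + A[1][1]*y(t))))
            for t in [0.0, 0.2])
print(x(0), y(0), resid)
```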
6.4 Nonhomogeneous Linear Systems

1. Y' = [ 3 0 ; 8 -1 ] Y + ( 2 ; x ). The eigenvalues of the matrix are 3 and -1, with basis
eigenvectors (1, 2)^T and (0, 1)^T. Thus, the general solution to the homogeneous linear
system is

Y_H = ( c_1 e^{3x} ; 2c_1 e^{3x} + c_2 e^{-x} )

and a matrix of fundamental solutions is M = [ e^{3x} 0 ; 2e^{3x} e^{-x} ]. Thus,

M^{-1} = [ e^{-3x} 0 ; -2e^x e^x ]

and

Y_P = M ∫ M^{-1} G(x) dx = [ e^{3x} 0 ; 2e^{3x} e^{-x} ] ∫ [ e^{-3x} 0 ; -2e^x e^x ] ( 2 ; x ) dx.

So,

Y_P = [ e^{3x} 0 ; 2e^{3x} e^{-x} ] ∫ ( 2e^{-3x} ; -4e^x + xe^x ) dx
    = [ e^{3x} 0 ; 2e^{3x} e^{-x} ] ( -(2/3) e^{-3x} ; -4e^x + xe^x - e^x )
    = ( -2/3 ; -19/3 + x )

Thus, the general solution is

Y = Y_P + Y_H = ( -2/3 + c_1 e^{3x} ; -19/3 + x + 2c_1 e^{3x} + c_2 e^{-x} )
5. The eigenvalues are 1, 2, and 3, with corresponding basis eigenvectors (0, 1, 0)^T,
(1, -2, -2)^T, and (1, -1, -1)^T, respectively. Therefore, the general solution to Y' = AY is

c_1 ( 0 ; e^x ; 0 ) + c_2 ( e^{2x} ; -2e^{2x} ; -2e^{2x} ) + c_3 ( e^{3x} ; -e^{3x} ; -e^{3x} )
  = ( c_2 e^{2x} + c_3 e^{3x} ; c_1 e^x - 2c_2 e^{2x} - c_3 e^{3x} ; -2c_2 e^{2x} - c_3 e^{3x} )

and a matrix of fundamental solutions is

M = [ 0 e^{2x} e^{3x} ; e^x -2e^{2x} -e^{3x} ; 0 -2e^{2x} -e^{3x} ]

Thus,

M^{-1} = [ 0 e^{-x} -e^{-x} ; -e^{-2x} 0 -e^{-2x} ; 2e^{-3x} 0 e^{-3x} ]

and

Y_P = M ∫ [ 0 e^{-x} -e^{-x} ; -e^{-2x} 0 -e^{-2x} ; 2e^{-3x} 0 e^{-3x} ] ( 1 - 2x ; xe^{-x} ; 0 ) dx

    = M ∫ ( xe^{-2x} ; -e^{-2x} + 2xe^{-2x} ; 2e^{-3x} - 4xe^{-3x} ) dx

    = [ 0 e^{2x} e^{3x} ; e^x -2e^{2x} -e^{3x} ; 0 -2e^{2x} -e^{3x} ]
      ( -(1/2) xe^{-2x} - (1/4) e^{-2x} ; -xe^{-2x} ; -(2/9) e^{-3x} + (4/3) xe^{-3x} )

    = ( (1/3) x - 2/9 ; -(1/2) xe^{-x} - (1/4) e^{-x} + 2/9 + (2/3) x ; (2/3) x + 2/9 )

Thus, the general solution is

( c_2 e^{2x} + c_3 e^{3x} + (1/3) x - 2/9 ;
  c_1 e^x - 2c_2 e^{2x} - c_3 e^{3x} - (1/2) xe^{-x} - (1/4) e^{-x} + 2/9 + (2/3) x ;
  -2c_2 e^{2x} - c_3 e^{3x} + (2/3) x + 2/9 )
11. The general solution was

( -2/3 + c_1 e^{3x} ; -19/3 + x + 2c_1 e^{3x} + c_2 e^{-x} )

Using the initial condition Y_0 = (2, 1)^T, this gives the system

-2/3 + c_1 = 2
-19/3 + 2c_1 + c_2 = 1

which leads to c_1 = 8/3, c_2 = 2 and the solution

( -2/3 + (8/3) e^{3x} ; -19/3 + x + (16/3) e^{3x} + 2e^{-x} )
13. The general solution was

( c_2 e^{2x} + c_3 e^{3x} + (1/3) x - 2/9 ;
  c_1 e^x - 2c_2 e^{2x} - c_3 e^{3x} - (1/2) xe^{-x} - (1/4) e^{-x} + 2/9 + (2/3) x ;
  -2c_2 e^{2x} - c_3 e^{3x} + (2/3) x + 2/9 )

Using the initial condition Y_0 = (0, 1, -1)^T gives the system

c_2 + c_3 - 2/9 = 0
c_1 - 2c_2 - c_3 - 1/4 + 2/9 = 1
-2c_2 - c_3 + 2/9 = -1

which leads to c_1 = 9/4, c_2 = 1, c_3 = -7/9 and the solution

( e^{2x} - (7/9) e^{3x} + (1/3) x - 2/9 ;
  (9/4) e^x - 2e^{2x} + (7/9) e^{3x} - (1/2) xe^{-x} - (1/4) e^{-x} + 2/9 + (2/3) x ;
  -2e^{2x} + (7/9) e^{3x} + (2/3) x + 2/9 )
15. This leads to the system

Y' = [ 2 1 ; 2 3 ] Y + ( t ; -t )

The matrix above has eigenvalues 4 and 1 with eigenvectors (1, 2)^T and (1, -1)^T,
respectively. Thus, the general solution to the homogeneous problem is

c_1 ( e^{4t} ; 2e^{4t} ) + c_2 ( e^t ; -e^t )

with a matrix of fundamental solutions given by M = [ e^{4t} e^t ; 2e^{4t} -e^t ] and

M^{-1} = [ (1/3) e^{-4t} (1/3) e^{-4t} ; (2/3) e^{-t} -(1/3) e^{-t} ]

and

Y_P = M ∫ [ (1/3) e^{-4t} (1/3) e^{-4t} ; (2/3) e^{-t} -(1/3) e^{-t} ] ( t ; -t ) dt
    = M ∫ ( 0 ; te^{-t} ) dt
    = [ e^{4t} e^t ; 2e^{4t} -e^t ] ( 0 ; -te^{-t} - e^{-t} )
    = ( -t - 1 ; t + 1 )

So, the general solution is

( x(t) ; y(t) ) = ( -t - 1 + c_1 e^{4t} + c_2 e^t ; t + 1 + 2c_1 e^{4t} - c_2 e^t )

Using the initial condition (1, 1) gives the system

-1 + c_1 + c_2 = 1
1 + 2c_1 - c_2 = 1

leading to c_1 = 2/3, c_2 = 4/3 and the solution

( x(t) ; y(t) ) = ( -t - 1 + (2/3) e^{4t} + (4/3) e^t ; t + 1 + (4/3) e^{4t} - (4/3) e^t )
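Because the constants for Problem 15 were recomputed here (c_2 = 4/3 rather than the barely legible printed value), the final answer is worth checking against the system and the initial condition. This numerical sketch is an addition, not part of the manual.

```python
import math

# Check the recomputed Problem 15 solution against
# x' = 2x + y + t, y' = 2x + 3y - t with (x(0), y(0)) = (1, 1).
def x(t): return -t - 1 + (2/3)*math.exp(4*t) + (4/3)*math.exp(t)
def y(t): return  t + 1 + (4/3)*math.exp(4*t) - (4/3)*math.exp(t)

h = 1e-7
def d(f, t): return (f(t+h) - f(t-h)) / (2*h)

resid = max(max(abs(d(x, t) - (2*x(t) + y(t) + t)),
                abs(d(y, t) - (2*x(t) + 3*y(t) - t)))
            for t in [0.0, 0.3])
print(x(0), y(0), resid)
```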
9 Inner Product Spaces

9.1 Inner Product Spaces

6. <v, u> = 0, so the angle between u and v is π/2.

7. <f, g> = ∫_a^b f(x) g(x) dx = ∫_a^b g(x) f(x) dx = <g, f>

<f + g, h> = ∫_a^b (f(x) + g(x)) h(x) dx = ∫_a^b f(x) h(x) dx + ∫_a^b g(x) h(x) dx = <f, h> + <g, h>

<cf, g> = ∫_a^b c f(x) g(x) dx = c ∫_a^b f(x) g(x) dx = c <f, g>

<f, f> = ∫_a^b f^2(x) dx ≥ 0 for all f, and ∫_a^b f^2(x) dx = 0 iff f(x) = 0 for all x.
8. <v, u> = a_1 w_1 b_1 + a_2 w_2 b_2 + . . . + a_n w_n b_n = <u, v>

<v + u, w> = (a_1 + b_1) w_1 c_1 + (a_2 + b_2) w_2 c_2 + . . . + (a_n + b_n) w_n c_n
           = a_1 w_1 c_1 + . . . + a_n w_n c_n + b_1 w_1 c_1 + . . . + b_n w_n c_n
           = <v, w> + <u, w>

<cv, u> = c a_1 w_1 b_1 + c a_2 w_2 b_2 + . . . + c a_n w_n b_n = c <v, u>

<v, v> = w_1 a_1^2 + w_2 a_2^2 + . . . + w_n a_n^2 ≥ 0, since w_i > 0 for all i. <v, v> = 0 if
and only if a_i^2 = 0 for all i ⟺ a_i = 0 for all i ⟺ v = 0.
16. Let v be a nonzero vector in an inner product space (V, < , >). Then ||v|| = √(<v, v>) > 0
and

< v/||v||, v/||v|| > = <v, v> / ||v||^2 = 1,

so v/||v|| is a unit vector.
18. ∫_0^π sin(x) sin(2x) dx = ∫_0^π 2 sin^2(x) cos(x) dx = (2/3) sin^3(x) |_0^π = 0
20.

||v + u||^2 = <v + u, v + u>
            = <v, v> + <v, u> + <u, v> + <u, u>
            = ||v||^2 + 2<v, u> + ||u||^2
            ≤ ||v||^2 + 2||u|| ||v|| + ||u||^2
            = (||v|| + ||u||)^2

The second-to-last step is from the Cauchy-Schwarz Inequality. Thus,

||v + u|| ≤ ||v|| + ||u||

21.

||v + u||^2 = <v + u, v + u>
            = <v, v> + <v, u> + <u, v> + <u, u>
            = ||v||^2 + 2<v, u> + ||u||^2
            = ||v||^2 + ||u||^2

The last step is from the orthogonality of u and v, so <v, u> = 0.
22. Let w be a fixed vector in the inner product space (V, < , >). Let U be the set of all
vectors in V orthogonal to w. Let v_1, v_2 ∈ U. Then, <v_1 + v_2, w> = <v_1, w> + <v_2, w> = 0,
so v_1 + v_2 ∈ U. Also, <cv_1, w> = c <v_1, w> = 0, so cv_1 ∈ U. Thus, U is closed under
addition and scalar multiplication, and is a subspace.
9.2 Orthonormal Bases

2. (a) <v_i, v_j> = 0 if i ≠ j and <v_i, v_i> = 1 for 1 ≤ i ≤ 4.

(b) <v, v_1> = 1/√3, <v, v_2> = (8 + √3)/3, <v, v_3> = -(13 + √3)/3, <v, v_4> = (5 + √3)/3.
Thus,

v = (1/√3) v_1 + ((8 + √3)/3) v_2 - ((13 + √3)/3) v_3 + ((5 + √3)/3) v_4
6. v_1 = (1/√2, 0, 1/√2)^T.

w_2 = (1, 1, 0)^T - (1/√2) (1/√2, 0, 1/√2)^T = (1/2, 1, -1/2)^T,
v_2 = w_2/||w_2|| = (1/√6, 2/√6, -1/√6)^T.

w_3 = (0, 1, 1)^T - (1/√2) (1/√2, 0, 1/√2)^T - (1/√6) (1/√6, 2/√6, -1/√6)^T = (-2/3, 2/3, 2/3)^T,
v_3 = w_3/||w_3|| = (-1/√3, 1/√3, 1/√3)^T.
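The Gram-Schmidt steps above can be sketched as a small routine and run on the same three starting vectors; it reproduces the orthonormal basis just computed. The code is an added illustration, not part of the manual.

```python
import math

# Compact Gram-Schmidt for the standard dot product on R^3 (Problem 6).
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v[:]
        for u in basis:
            c = sum(a*b for a, b in zip(w, u))   # component along u
            w = [a - c*b for a, b in zip(w, u)]  # subtract the projection
        n = math.sqrt(sum(a*a for a in w))
        basis.append([a/n for a in w])           # normalize
    return basis

v1, v2, v3 = gram_schmidt([[1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
print(v1, v2, v3)
```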
9. <1, 1> = ∫_{-1}^{1} (1)(1) dx = 2, so v_1 = 1/√2.

w_2 = x - ( ∫_{-1}^{1} x/√2 dx ) (1/√2) = x,   v_2 = w_2/||w_2|| = √(3/2) x.

w_3 = x^2 - ( ∫_{-1}^{1} x^2/√2 dx ) (1/√2) - ( ∫_{-1}^{1} x^3 √(3/2) dx ) √(3/2) x = x^2 - 1/3,

v_3 = w_3/||w_3|| = (3√5 / (2√2)) (x^2 - 1/3)
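Orthonormality of the three polynomials just produced can be confirmed with a simple quadrature. The sketch below uses composite Simpson's rule, which is essentially exact for these low-degree integrands; it is an added check, not part of the manual.

```python
import math

# Verify Problem 9's polynomials are orthonormal under
# <f, g> = integral of f(x) g(x) over [-1, 1], via Simpson's rule.
def integrate(f, a=-1.0, b=1.0, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k*h) for k in range(1, n))
    return s * h / 3

v1 = lambda x: 1/math.sqrt(2)
v2 = lambda x: math.sqrt(1.5) * x
v3 = lambda x: (3*math.sqrt(5)/(2*math.sqrt(2))) * (x*x - 1/3)

checks = [integrate(lambda x: f(x)*g(x)) for f, g in
          [(v1, v1), (v2, v2), (v3, v3), (v1, v2), (v1, v3), (v2, v3)]]
print([round(c, 9) for c in checks])
```

The first three inner products come out to 1 (unit norms) and the last three to 0 (orthogonality), as expected.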
13. Showing the matrix is orthogonal is equivalent to showing that
{ v_1 = (cos θ, sin θ)^T, v_2 = (-sin θ, cos θ)^T } is an orthonormal basis for R^2.

v_1 · v_1 = cos^2 θ + sin^2 θ = 1,
v_1 · v_2 = -cos θ sin θ + sin θ cos θ = 0, and
v_2 · v_2 = sin^2 θ + cos^2 θ = 1.

Thus, the set is an orthonormal basis and the matrix is orthogonal.