
# Solutions to Assignment 7

## Math 217, Fall 2002

4.3.10 Find a basis for the null space of the following matrix:
$$A = \begin{pmatrix} 1 & 0 & -5 & -1 & 4 \\ 2 & -1 & -6 & -2 & 2 \\ 0 & 2 & -8 & -1 & 9 \end{pmatrix}.$$
We need to find a basis for the solutions to the equation $Ax = 0$. To do this we first put $A$ in row reduced echelon form. The result (according to the computer) is:
$$\begin{pmatrix} 1 & 0 & -5 & 0 & 7 \\ 0 & 1 & -4 & 0 & 6 \\ 0 & 0 & 0 & 1 & 3 \end{pmatrix}.$$
From this we can read off the general solution,
$$x = \begin{pmatrix} 5x_3 - 7x_5 \\ 4x_3 - 6x_5 \\ x_3 \\ -3x_5 \\ x_5 \end{pmatrix}.$$
We can also write this as
$$x = x_3 \begin{pmatrix} 5 \\ 4 \\ 1 \\ 0 \\ 0 \end{pmatrix} + x_5 \begin{pmatrix} -7 \\ -6 \\ 0 \\ -3 \\ 1 \end{pmatrix},$$
or $\operatorname{Span}\left\{ \begin{pmatrix} 5 \\ 4 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -7 \\ -6 \\ 0 \\ -3 \\ 1 \end{pmatrix} \right\}$. Because these two vectors are clearly not multiples of one another, they also give a basis. So a basis for $\operatorname{null}(A)$ is
$$\left\{ \begin{pmatrix} 5 \\ 4 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -7 \\ -6 \\ 0 \\ -3 \\ 1 \end{pmatrix} \right\}.$$
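As a sanity check on the computer's reduction, here is a short Python sketch that redoes it in exact rational arithmetic (no rounding); the `rref` helper is written just for this check and is not part of the assignment:

```python
from fractions import Fraction

def rref(M):
    """Reduce M to row reduced echelon form using exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    pivot_row = 0
    for c in range(len(M[0])):
        # look for a nonzero entry in column c, at or below pivot_row
        r = next((i for i in range(pivot_row, len(M)) if M[i][c] != 0), None)
        if r is None:
            continue
        M[pivot_row], M[r] = M[r], M[pivot_row]
        piv = M[pivot_row][c]
        M[pivot_row] = [x / piv for x in M[pivot_row]]
        for i in range(len(M)):
            if i != pivot_row and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[pivot_row])]
        pivot_row += 1
        if pivot_row == len(M):
            break
    return M

A = [[1, 0, -5, -1, 4],
     [2, -1, -6, -2, 2],
     [0, 2, -8, -1, 9]]
R = rref(A)
print([[int(x) for x in row] for row in R])
# [[1, 0, -5, 0, 7], [0, 1, -4, 0, 6], [0, 0, 0, 1, 3]]

# both basis vectors are solutions of Ax = 0
for v in [(5, 4, 1, 0, 0), (-7, -6, 0, -3, 1)]:
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```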
4.3.28 Let $v_1 = \begin{pmatrix} 7 \\ 4 \\ -9 \\ -5 \end{pmatrix}$, $v_2 = \begin{pmatrix} 4 \\ -7 \\ 2 \\ 5 \end{pmatrix}$, $v_3 = \begin{pmatrix} 1 \\ -5 \\ 3 \\ 4 \end{pmatrix}$. It can be verified that $v_1 - 3v_2 + 5v_3 = 0$. Use this information to find a basis for $H = \operatorname{Span}\{v_1, v_2, v_3\}$.
By the Spanning Set Theorem, some subset of the $v_i$ is a basis for $H$. In class we showed how to find this subset: we simply remove any of the vectors involved in a non-trivial linear relation. So I choose to remove $v_3$ (I could have removed any of the $v_i$, because they each occur with a non-zero coefficient in the dependency relation $v_1 - 3v_2 + 5v_3 = 0$). The remaining vectors then give a basis for $H$. We know they span by the Spanning Set Theorem. They are also linearly independent, because they are not multiples of one another.
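The dependency relation can be verified componentwise, and the independence of the two remaining vectors checked, with a few lines of Python (a sketch, using the sign conventions of the vectors above):

```python
v1 = [7, 4, -9, -5]
v2 = [4, -7, 2, 5]
v3 = [1, -5, 3, 4]

# check v1 - 3*v2 + 5*v3 = 0 componentwise
relation = [a - 3 * b + 5 * c for a, b, c in zip(v1, v2, v3)]
print(relation)  # [0, 0, 0, 0]

# v1 and v2 are multiples of one another only if every 2x2 minor vanishes
multiples = all(v1[i] * v2[j] == v1[j] * v2[i]
                for i in range(4) for j in range(4))
print(multiples)  # False, so {v1, v2} is linearly independent
```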
4.3.30 Let $S = \{v_1, \dots, v_k\}$ be a set of $k$ vectors in $\mathbb{R}^n$, with $k > n$. Use a theorem from Chapter 1 to explain why $S$ cannot be a basis for $\mathbb{R}^n$.
Theorem 8, page 69, claims that any set $\{v_1, \dots, v_p\}$ in $\mathbb{R}^n$ is linearly dependent if $p > n$. That is exactly the situation we find ourselves in (except that we have used the letter $k$ instead of $p$). Thus $\{v_1, \dots, v_k\}$ is linearly dependent, and cannot be a basis.
4.3.32 Suppose that $T$ is a one-to-one transformation. Show that if the set of images $\{T(v_1), \dots, T(v_p)\}$ is linearly dependent, then $\{v_1, \dots, v_p\}$ is linearly dependent.
This problem should say that $T$ is a linear transformation (the book has a typo).
If the set $\{T(v_1), \dots, T(v_p)\}$ is linearly dependent, then there are $c_1, \dots, c_p \in \mathbb{R}$, not all zero, such that $c_1 T(v_1) + \cdots + c_p T(v_p) = 0$. Because $T$ is a linear transformation, we can rewrite this as $T(c_1 v_1 + \cdots + c_p v_p) = 0$. We know that $T(0) = 0$, so because $T$ is one-to-one it must be the case that $c_1 v_1 + \cdots + c_p v_p = 0$. This gives a dependency relation on the $v_i$ (recall that not all the $c_i$ are zero), and thus $\{v_1, \dots, v_p\}$ is linearly dependent.
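The key step, that linearity carries a dependency among the images back to the same coefficients on the $v_i$, can be illustrated numerically. This is a hypothetical example invented for illustration, not part of the proof:

```python
# T(x) = Ax for a hypothetical matrix A with independent columns,
# so T is linear and one-to-one (this example is not from the text)
A = [[1, 0], [0, 1], [1, 1]]

def T(v):
    """Matrix-vector product Av."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

v1, v2 = [1, 2], [2, 4]        # dependent: 2*v1 - v2 = 0
w1, w2 = T(v1), T(v2)          # their images under T
combo = [2 * a - b for a, b in zip(w1, w2)]
print(combo)  # [0, 0, 0] -- the same coefficients relate the images
```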
4.4.14 The set $B = \{1 - t^2,\ t - t^2,\ 2 - 2t + t^2\}$ is a basis for $P_2$. Find the coordinate vector of $p(t) = 3 + t - 6t^2$ relative to $B$.
We need to write $p$ in terms of the basis $B$, that is, find $x_1, x_2, x_3 \in \mathbb{R}$ such that $x_1(1 - t^2) + x_2(t - t^2) + x_3(2 - 2t + t^2) = 3 + t - 6t^2$. Multiplying things out, we get $(x_1 + 2x_3) + (x_2 - 2x_3)t + (-x_1 - x_2 + x_3)t^2 = 3 + t - 6t^2$. Thus we have to solve the three linear equations:
$$\begin{aligned} x_1 + 2x_3 &= 3 \\ x_2 - 2x_3 &= 1 \\ -x_1 - x_2 + x_3 &= -6 \end{aligned}$$
We form the augmented matrix for this system,
$$\begin{pmatrix} 1 & 0 & 2 & 3 \\ 0 & 1 & -2 & 1 \\ -1 & -1 & 1 & -6 \end{pmatrix}.$$
In row reduced echelon form, this is the matrix $\begin{pmatrix} 1 & 0 & 0 & 7 \\ 0 & 1 & 0 & -3 \\ 0 & 0 & 1 & -2 \end{pmatrix}$. So we see that $x_1 = 7$, $x_2 = -3$, and $x_3 = -2$.
This implies that $7(1 - t^2) + (-3)(t - t^2) + (-2)(2 - 2t + t^2) = 3 + t - 6t^2$, so
$$[3 + t - 6t^2]_B = \begin{pmatrix} 7 \\ -3 \\ -2 \end{pmatrix} \in \mathbb{R}^3.$$
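The coordinates can be double-checked by expanding the combination in the monomial basis; a minimal Python sketch, representing each polynomial by its coefficient list:

```python
# basis polynomials as coefficient lists [constant, t, t^2]
b1 = [1, 0, -1]      # 1 - t^2
b2 = [0, 1, -1]      # t - t^2
b3 = [2, -2, 1]      # 2 - 2t + t^2
x = [7, -3, -2]      # the coordinates found above

combo = [x[0] * p + x[1] * q + x[2] * r for p, q, r in zip(b1, b2, b3)]
print(combo)  # [3, 1, -6], the coefficients of 3 + t - 6t^2
```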
4.4.20 Suppose that $\{v_1, \dots, v_4\}$ is a linearly dependent spanning set for a vector space $V$. Show that each $w$ in $V$ can be expressed in more than one way as a linear combination of $v_1, \dots, v_4$.
Because the $v_i$ are linearly dependent, there are $c_1, \dots, c_4 \in \mathbb{R}$, not all zero, such that $c_1 v_1 + \cdots + c_4 v_4 = 0$. Given $w$ there are $d_1, \dots, d_4 \in \mathbb{R}$ such that $d_1 v_1 + \cdots + d_4 v_4 = w$ (because the $v_i$ are a spanning set). Consider the linear combination $(c_1 + d_1)v_1 + \cdots + (c_4 + d_4)v_4$. We have that $(c_1 + d_1)v_1 + \cdots + (c_4 + d_4)v_4 = (c_1 v_1 + \cdots + c_4 v_4) + (d_1 v_1 + \cdots + d_4 v_4) = 0 + w = w$. This constitutes a different linear combination than $d_1 v_1 + \cdots + d_4 v_4$ because not all of the $c_i$ are zero, and hence for some $i$ between 1 and 4 we have that $c_i + d_i \neq d_i$.
4.4.32 Let $p_1(t) = 1 + t^2$, $p_2(t) = 2 - t + 3t^2$, $p_3(t) = 1 + 2t - 4t^2$.
(a) Use coordinate vectors to show that these polynomials form a basis for $P_2$.
We know that a basis for $P_2$ is $B = \{1, t, t^2\}$. It is not difficult to see that if the $p_i$ are dependent, then so are the images $[p_1]_B$, $[p_2]_B$, $[p_3]_B$. That is, if $0 = c_1 p_1 + c_2 p_2 + c_3 p_3$ is a nontrivial dependency (i.e., not all $c_i$ are zero), then taking the coordinate mapping of both sides of the equation yields the dependency relation $0 = [0]_B = [c_1 p_1 + c_2 p_2 + c_3 p_3]_B = c_1 [p_1]_B + c_2 [p_2]_B + c_3 [p_3]_B$.
We also know that if $[p_1]_B$, $[p_2]_B$, $[p_3]_B$ span $\mathbb{R}^3$, then the $p_i$ span $P_2$. That is, assuming the $[p_i]_B$ span, we know that for each $f \in P_2$ there are $d_i$ such that $[f]_B = d_1 [p_1]_B + d_2 [p_2]_B + d_3 [p_3]_B = [d_1 p_1 + d_2 p_2 + d_3 p_3]_B$, and because the coordinate mapping is one-to-one, this implies that $f = d_1 p_1 + d_2 p_2 + d_3 p_3$.
So to show the $p_i$ are a basis of $P_2$, it is enough to show that the $[p_i]_B$ are a basis of $\mathbb{R}^3$. By the Invertible Matrix Theorem, this set will be a basis if and only if the matrix
$$\begin{pmatrix} [p_1]_B & [p_2]_B & [p_3]_B \end{pmatrix} = \begin{pmatrix} 1 & 2 & 1 \\ 0 & -1 & 2 \\ 1 & 3 & -4 \end{pmatrix}$$
has a pivot in every row (because then by the IMT the columns will span and will be linearly independent). So we put this matrix in row reduced echelon form and obtain:
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
We conclude that the set $\{p_1, p_2, p_3\}$ does form a basis of $P_2$.
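Instead of row reducing, invertibility can also be checked via the determinant; a short sketch using cofactor expansion (an alternative check, not the method used above):

```python
# columns are [p1]_B, [p2]_B, [p3]_B
M = [[1, 2, 1],
     [0, -1, 2],
     [1, 3, -4]]

# cofactor expansion along the first row
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
print(det)  # 3, nonzero, so the columns form a basis of R^3
```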
(b) Consider the basis $B = \{p_1, p_2, p_3\}$ for $P_2$. Find $q$ in $P_2$ given that $[q]_B = \begin{pmatrix} -3 \\ 1 \\ 2 \end{pmatrix}$.
Because $[q]_B = \begin{pmatrix} -3 \\ 1 \\ 2 \end{pmatrix}$, we know that
$$q = -3p_1 + p_2 + 2p_3 = -3(1 + t^2) + (2 - t + 3t^2) + 2(1 + 2t - 4t^2) = 1 + 3t - 8t^2.$$
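A quick check of this arithmetic, again with polynomials as coefficient lists:

```python
# the p_i as coefficient lists [constant, t, t^2]
p1 = [1, 0, 1]       # 1 + t^2
p2 = [2, -1, 3]      # 2 - t + 3t^2
p3 = [1, 2, -4]      # 1 + 2t - 4t^2
coords = [-3, 1, 2]  # [q]_B

q = [sum(c * p for c, p in zip(coords, col)) for col in zip(p1, p2, p3)]
print(q)  # [1, 3, -8], the coefficients of 1 + 3t - 8t^2
```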
4.5.12 Find the dimension of the vector space spanned by $\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}$, $\begin{pmatrix} 3 \\ 4 \\ 1 \end{pmatrix}$, $\begin{pmatrix} 8 \\ 6 \\ 5 \end{pmatrix}$, and $\begin{pmatrix} 3 \\ 0 \\ 7 \end{pmatrix}$.
To find the dimension, we need to find the number of elements in a basis. So we form the matrix
$$\begin{pmatrix} 1 & 3 & 8 & 3 \\ 2 & 4 & 6 & 0 \\ 0 & 1 & 5 & 7 \end{pmatrix}$$
and count the number of pivot columns (this because we know that $\dim(\operatorname{Col}(A))$ for a matrix $A$ is precisely the number of pivot columns of $A$). So we put this matrix in row reduced echelon form, obtaining:
$$\begin{pmatrix} 1 & 0 & -7 & 0 \\ 0 & 1 & 5 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
This has 3 pivot columns, so the dimension of the subspace spanned by the vectors given is 3.
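The pivot count can be reproduced with a small script; this is a sketch that performs forward elimination in exact arithmetic (the `rank` helper is written just for this check):

```python
from fractions import Fraction

def rank(M):
    """Count pivots via forward elimination in exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue               # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 3, 8, 3],
     [2, 4, 6, 0],
     [0, 1, 5, 7]]
print(rank(A))  # 3
```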
4.5.22 The first four Laguerre polynomials are $1$, $1 - t$, $2 - 4t + t^2$, and $6 - 18t + 9t^2 - t^3$. Show that these polynomials form a basis of $P_3$.
Consider the basis $B = \{1, t, t^2, t^3\}$ of $P_3$. Utilizing the same arguments as we did in question 4.4.32, we know that it is enough to show that the images of these polynomials under the coordinate mapping form a basis of $\mathbb{R}^4$. Their images under this mapping are:
$$[1]_B = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad [1 - t]_B = \begin{pmatrix} 1 \\ -1 \\ 0 \\ 0 \end{pmatrix}, \quad [2 - 4t + t^2]_B = \begin{pmatrix} 2 \\ -4 \\ 1 \\ 0 \end{pmatrix}, \quad [6 - 18t + 9t^2 - t^3]_B = \begin{pmatrix} 6 \\ -18 \\ 9 \\ -1 \end{pmatrix}.$$
By the IMT, these will form a basis if and only if the matrix
$$\begin{pmatrix} 1 & 1 & 2 & 6 \\ 0 & -1 & -4 & -18 \\ 0 & 0 & 1 & 9 \\ 0 & 0 & 0 & -1 \end{pmatrix}$$
is invertible. This matrix will be invertible if and only if its determinant is non-zero (Theorem 4, page 194). The computer tells me that the determinant of this matrix is equal to 1. So we conclude that the given polynomials do form a basis.
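What the computer is doing here is easy to reproduce by hand: writing the coordinate vectors of the four polynomials as columns gives an upper triangular matrix, so the determinant is just the product of the diagonal entries. A minimal sketch:

```python
M = [[1, 1, 2, 6],
     [0, -1, -4, -18],
     [0, 0, 1, 9],
     [0, 0, 0, -1]]

# M is upper triangular, so its determinant is the product
# of its diagonal entries
det = 1
for i in range(len(M)):
    det *= M[i][i]
print(det)  # 1
```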
4.5.32 Let $H$ be a nonzero subspace of $V$, and suppose $T$ is a one-to-one (linear) mapping of $V$ into $W$. Prove that $\dim T(H) = \dim H$. If $T$ happens to be a one-to-one mapping of $V$ onto $W$, then $\dim V = \dim W$. Isomorphic finite dimensional vector spaces have the same dimension.
We know that if $v_1, \dots, v_p$ is linearly independent, then because $T$ is one-to-one, $T(v_1), \dots, T(v_p)$ is linearly independent (that was an earlier homework problem, 4.3.32). We showed in class that if $v_1, \dots, v_p$ spans $H$, then $T(v_1), \dots, T(v_p)$ spans $T(H)$. This means that because the $v_1, \dots, v_p$ form a basis of $H$, the $T(v_1), \dots, T(v_p)$ give a basis of $T(H)$. They both have the same size, so both spaces have the same dimension.
When $T$ is also onto, then taking $H = V$ we have that $T(V) = W$, and the statement follows from what we have already done.