
Definitions, Theorems and Exercises

Abstract Algebra
Math 332
Ethan D. Bloch
December 26, 2013
Contents
1 Binary Operations 3
1.1 Binary Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Isomorphic Binary Operations . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Groups 9
2.1 Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Isomorphic Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Basic Properties of Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Subgroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3 Various Types of Groups 25
3.1 Cyclic Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Finitely Generated Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 Dihedral Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.4 Permutations and Permutation Groups . . . . . . . . . . . . . . . . . . . . 31
3.5 Permutations Part II and Alternating Groups . . . . . . . . . . . . . . . . . 33
4 Basic Constructions 35
4.1 Direct Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2 Finitely Generated Abelian Groups . . . . . . . . . . . . . . . . . . . . . . 38
4.3 Infinite Products of Groups . . . . . . . . . . . . . . . . . . . . . . . . 39
4.4 Cosets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.5 Quotient Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5 Homomorphisms 45
5.1 Homomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.2 Kernel and Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6 Applications of Groups 51
6.1 Group Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
7 Rings and Fields 55
7.1 Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7.2 Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
8 Vector Spaces 61
8.1 Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
8.2 Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8.3 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.4 Linear Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
8.5 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.6 Bases and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.7 Bases for Arbitrary Vector Spaces . . . . . . . . . . . . . . . . . . . . . . 79
9 Linear Maps 81
9.1 Linear Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
9.2 Kernel and Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
9.3 Rank-Nullity Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
9.4 Isomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
9.5 Spaces of Linear Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
10 Linear Maps and Matrices 95
10.1 Review of Matrices--Multiplication . . . . . . . . . . . . . . . . . . . 96
10.2 Linear Maps Given by Matrix Multiplication . . . . . . . . . . . . . . . . . 98
10.3 All Linear Maps F^n → F^m . . . . . . . . . . . . . . . . . . . . . . . 100
10.4 Coordinate Vectors with respect to a Basis . . . . . . . . . . . . . . . . . . 101
10.5 Matrix Representation of Linear Maps--Basics . . . . . . . . . . . . . 102
10.6 Matrix Representation of Linear Maps--Composition . . . . . . . . . . 104
10.7 Matrix Representation of Linear Maps--Isomorphisms . . . . . . . . . 106
10.8 Matrix Representation of Linear Maps--The Big Picture . . . . . . . . 108
10.9 Matrix Representation of Linear Maps--Change of Basis . . . . . . . . 109
1
Binary Operations
1.1 Binary Operations
Fraleigh, 7th ed. Section 2
Gallian, 8th ed. Section 2
Judson, 2013 Section 3.2
Definition 1.1.1. Let A be a set. A binary operation on A is a function A × A → A. A unary operation on A is a function A → A.

Definition 1.1.2. Let A be a set, let ∗ be a binary operation on A and let H ⊆ A. The subset H is closed under ∗ if a ∗ b ∈ H for all a, b ∈ H.

Definition 1.1.3. Let A be a set, and let ∗ be a binary operation on A. The binary operation ∗ satisfies the Commutative Law (an alternative expression is that ∗ is commutative) if a ∗ b = b ∗ a for all a, b ∈ A.

Definition 1.1.4. Let A be a set, and let ∗ be a binary operation on A. The binary operation ∗ satisfies the Associative Law (an alternative expression is that ∗ is associative) if (a ∗ b) ∗ c = a ∗ (b ∗ c) for all a, b, c ∈ A.

Definition 1.1.5. Let A be a set, and let ∗ be a binary operation on A.

1. Let e ∈ A. The element e is an identity element for ∗ if a ∗ e = a = e ∗ a for all a ∈ A.

2. If ∗ has an identity element, the binary operation ∗ satisfies the Identity Law.

Lemma 1.1.6. Let A be a set, and let ∗ be a binary operation on A. If ∗ has an identity element, the identity element is unique.

Proof. Let e, ê ∈ A. Suppose that e and ê are both identity elements for ∗. Then e = e ∗ ê = ê, where in the first equality we are thinking of ê as an identity element, and in the second equality we are thinking of e as an identity element. Therefore the identity element is unique.

Definition 1.1.7. Let A be a set, and let ∗ be a binary operation on A. Let e ∈ A. Suppose that e is an identity element for ∗.

1. Let a ∈ A. An inverse for a is an element a′ ∈ A such that a ∗ a′ = e and a′ ∗ a = e.

2. If every element in A has an inverse, the binary operation ∗ satisfies the Inverses Law.
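To make Definitions 1.1.3 through 1.1.7 concrete, here is a short Python sketch (an added illustration, not part of the text; the operation used and the function names are ours) that checks the Commutative, Associative and Identity Laws for a binary operation given by a finite operation table, in the spirit of the exercises below.

    # A binary operation on a finite set, stored as a dictionary (an "operation table").
    # Sample operation: addition modulo 3 on A = {0, 1, 2}.
    A = [0, 1, 2]
    op = {(a, b): (a + b) % 3 for a in A for b in A}

    def is_commutative(A, op):
        return all(op[(a, b)] == op[(b, a)] for a in A for b in A)

    def is_associative(A, op):
        return all(op[(op[(a, b)], c)] == op[(a, op[(b, c)])]
                   for a in A for b in A for c in A)

    def identity_element(A, op):
        # Returns an identity element if one exists, and None otherwise.
        for e in A:
            if all(op[(a, e)] == a == op[(e, a)] for a in A):
                return e
        return None

    print(is_commutative(A, op), is_associative(A, op), identity_element(A, op))
    # Expected output: True True 0

The same functions can be applied to any of the operation tables appearing in Exercise 1.1.3 by listing the table entries in the dictionary.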
Exercises
Exercise 1.1.1. Which of the following formulas defines a binary operation on the given set?

(1) Let ∗ be defined by x ∗ y = xy for all x, y ∈ {1, 2, 3, . . .}.

(2) Let ∗ be defined by x ∗ y = √(xy) for all x, y ∈ [2, ∞).

(3) Let ∗ be defined by x ∗ y = x − y for all x, y ∈ Q.

(4) Let ∗ be defined by (x, y) ∗ (z, w) = (x + z, y + w) for all (x, y), (z, w) ∈ R^2 − {(0, 0)}.

(5) Let ∗ be defined by x ∗ y = |x + y| for all x, y ∈ N.

(6) Let ∗ be defined by x ∗ y = ln(|xy| − e) for all x, y ∈ N.
Exercise 1.1.2. For each of the following binary operations, state whether the binary operation is associative, whether it is commutative, whether there is an identity element and, if there is an identity element, which elements have inverses.

(1) The binary operation ∗ on Z defined by x ∗ y = xy for all x, y ∈ Z.

(2) The binary operation ∗ on R defined by x ∗ y = x + 2y for all x, y ∈ R.

(3) The binary operation ∗ on R defined by x ∗ y = x + y for all x, y ∈ R.

(4) The binary operation ∗ on Q defined by x ∗ y = (x + y) for all x, y ∈ Q.

(5) The binary operation ∗ on R defined by x ∗ y = x for all x, y ∈ R.

(6) The binary operation ∗ on Q defined by x ∗ y = x + y + xy for all x, y ∈ Q.

(7) The binary operation ∗ on R^2 defined by (x, y) ∗ (z, w) = (xz, y + w) for all (x, y), (z, w) ∈ R^2.
Exercise 1.1.3. For each of the following binary operations given by operation tables, state whether the binary operation is commutative, whether there is an identity element and, if there is an identity element, which elements have inverses. (Do not check for associativity.)
(1)
∗ 1 2 3
1 1 2 1
2 2 2
3 1 2
(2)
∗ j k l m
j k j m j
k j k l m
l k l j l
m j m l m

(3)
∗ x y z w
x x z w y
y z w y x
z w y x z
w y x z w

(4)
∗ a b c d e
a d e a b b
b e a b a d
c a b c d e
d b a d e c
e b d e c a

(5)
∗ i r s a b c
i i r s a b c
r r s i c a b
s s i r b c a
a a b c i s r
b b c a r i s
c c a b s r i
Exercise 1.1.4. Find an example of a set and a binary operation on the set such that the
binary operation satisfies the Identity Law and Inverses Law, but not the Associative Law,
and for which at least one element of the set has more than one inverse. The simplest way
to solve this problem is by constructing an appropriate operation table.
Exercise 1.1.5. Let n ∈ N. Recall the definition of the set Z_n and the binary operation · on Z_n. Observe that [1] is the identity element for Z_n with respect to multiplication. Let a ∈ Z. Prove that the following are equivalent.

a. The element [a] ∈ Z_n has an inverse with respect to multiplication.

b. The equation ax ≡ 1 (mod n) has a solution.

c. There exist p, q ∈ Z such that ap + nq = 1.

(It turns out that the three conditions listed above are equivalent to the fact that a and n are relatively prime.)
Exercise 1.1.6. Let A be a set. A ternary operation on A is a function A × A × A → A. A ternary operation ⋆ : A × A × A → A is left-induced by a binary operation ∗ : A × A → A if ⋆((a, b, c)) = (a ∗ b) ∗ c for all a, b, c ∈ A.
Is every ternary operation on a set left-induced by a binary operation? Give a proof or a counterexample.

Exercise 1.1.7. Let A be a set, and let ∗ be a binary operation on A. Suppose that ∗ satisfies the Associative Law and the Commutative Law. Prove that (a ∗ b) ∗ (c ∗ d) = b ∗ [(d ∗ a) ∗ c] for all a, b, c, d ∈ A.

Exercise 1.1.8. Let B be a set, and let ∗ be a binary operation on B. Suppose that ∗ satisfies the Associative Law. Let
P = {b ∈ B | b ∗ w = w ∗ b for all w ∈ B}.
Prove that P is closed under ∗.

Exercise 1.1.9. Let C be a set, and let ∗ be a binary operation on C. Suppose that ∗ satisfies the Associative Law and the Commutative Law. Let
Q = {c ∈ C | c ∗ c = c}.
Prove that Q is closed under ∗.

Exercise 1.1.10. Let A be a set, and let ∗ be a binary operation on A. An element c ∈ A is a left identity element for ∗ if c ∗ a = a for all a ∈ A. An element d ∈ A is a right identity element for ∗ if a ∗ d = a for all a ∈ A.

(1) If A has a left identity element, is it unique? Give a proof or a counterexample.

(2) If A has a right identity element, is it unique? Give a proof or a counterexample.

(3) If A has a left identity element and a right identity element, do these elements have to be equal? Give a proof or a counterexample.
1.2 Isomorphic Binary Operations
Fraleigh, 7th ed. Section 3
Gallian, 8th ed. Section 6
Definition 1.2.1. Let (G, ∗) and (H, ⋄) be sets with binary operations, and let f : G → H be a function. The function f is an isomorphism of the binary operations if f is bijective and if f (a ∗ b) = f (a) ⋄ f (b) for all a, b ∈ G.

Definition 1.2.2. Let (G, ∗) and (H, ⋄) be sets with binary operations. The binary operations ∗ and ⋄ are isomorphic if there is an isomorphism G → H.

Theorem 1.2.3. Let (G, ∗) and (H, ⋄) be sets with binary operations. Suppose that (G, ∗) and (H, ⋄) are isomorphic.

1. (G, ∗) satisfies the Commutative Law if and only if (H, ⋄) satisfies the Commutative Law.

2. (G, ∗) satisfies the Associative Law if and only if (H, ⋄) satisfies the Associative Law.

3. (G, ∗) satisfies the Identity Law if and only if (H, ⋄) satisfies the Identity Law. If f : G → H is an isomorphism, then f (e_G) = e_H.

4. (G, ∗) satisfies the Inverses Law if and only if (H, ⋄) satisfies the Inverses Law.
Exercises
Exercise 1.2.1. Prove that the two sets with binary operations in each of the following pairs are isomorphic.

(1) (Z, +) and (3Z, +), where 3Z = {3n | n ∈ Z}.

(2) (R − {0}, ·) and (R − {−1}, ∗), where x ∗ y = x + y + xy for all x, y ∈ R − {−1}.

(3) (R^4, +) and (M_{2×2}(R), +), where M_{2×2}(R) is the set of all 2 × 2 matrices with real entries.
Exercise 1.2.2. Let f : Z → Z be defined by f (n) = n + 1 for all n ∈ Z.

(1) Define a binary operation ∗ on Z so that f is an isomorphism of (Z, +) and (Z, ∗), in that order.

(2) Define a binary operation ⋄ on Z so that f is an isomorphism of (Z, ⋄) and (Z, +), in that order.
Exercise 1.2.3. Prove Theorem 1.2.3 (2).
Exercise 1.2.4. Prove Theorem 1.2.3 (3).
Exercise 1.2.5. Prove Theorem 1.2.3 (4).
2
Groups
2.1 Groups
Fraleigh, 7th ed. Section 4
Gallian, 8th ed. Section 2
Judson, 2013 Section 3.2
Definition 2.1.1. Let G be a non-empty set, and let ∗ be a binary operation on G. The pair (G, ∗) is a group if ∗ satisfies the Associative Law, the Identity Law and the Inverses Law.

Definition 2.1.2. Let (G, ∗) be a group. The group (G, ∗) is abelian if ∗ satisfies the Commutative Law.

Lemma 2.1.3. Let G be a group. If g ∈ G, then g has a unique inverse.

Definition 2.1.4. Let G be a group. If G is a finite set, then the order of the group, denoted |G|, is the cardinality of the set G.

Definition 2.1.5. Let n ∈ N, and let a, b ∈ Z. The number a is congruent to the number b modulo n, denoted a ≡ b (mod n), if a − b = kn for some k ∈ Z.

Theorem 2.1.6. Let n ∈ N, and let a ∈ Z. Then there is a unique r ∈ {0, . . . , n − 1} such that a ≡ r (mod n).

Theorem 2.1.7. Let n ∈ N.

1. Let a, b ∈ Z. If a ≡ b (mod n), then [a] = [b]. If a ≢ b (mod n), then [a] ∩ [b] = ∅.

2. [0] ∪ [1] ∪ · · · ∪ [n − 1] = Z.

Definition 2.1.8. Let n ∈ N. The set of integers modulo n, denoted Z_n, is the set defined by Z_n = {[0], [1], . . . , [n − 1]}, where the relation classes are for congruence modulo n.

Definition 2.1.9. Let n ∈ N. Let + and · be the binary operations on Z_n defined by [a] + [b] = [a + b] and [a] · [b] = [ab] for all [a], [b] ∈ Z_n.

Lemma 2.1.10. Let n ∈ N, and let a, b, c, d ∈ Z. Suppose that a ≡ c (mod n) and b ≡ d (mod n). Then a + b ≡ c + d (mod n) and ab ≡ cd (mod n).

Proof. There exist k, j ∈ Z such that a − c = kn and b − d = jn. Then a = c + kn and b = d + jn, and therefore
a + b = (c + kn) + (d + jn) = c + d + (k + j)n,
ab = (c + kn)(d + jn) = cd + (cj + dk + kjn)n.
The desired result now follows.
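As a quick numerical illustration of Lemma 2.1.10 (a sketch of ours, not part of the text), the following Python snippet checks the two congruences on randomly chosen integers.

    import random

    n = 7
    for _ in range(1000):
        a, b = random.randrange(-100, 100), random.randrange(-100, 100)
        c, d = a + n * random.randrange(-5, 5), b + n * random.randrange(-5, 5)
        # a ≡ c (mod n) and b ≡ d (mod n) by construction; check the conclusions.
        assert (a + b - (c + d)) % n == 0
        assert (a * b - c * d) % n == 0
    print("Lemma 2.1.10 holds on all sampled cases.")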
Corollary 2.1.11. Let n ∈ N, and let [a], [b], [c], [d] ∈ Z_n. Suppose that [a] = [c] and [b] = [d]. Then [a + b] = [c + d] and [ab] = [cd].

Lemma 2.1.12. Let n ∈ N. Then (Z_n, +) is an abelian group.
Example 2.1.1. We wish to list all possible symmetries of an equilateral triangle. Such a triangle is shown in Figure 2.1.1. The letters A, B and C are not part of the triangle, but are added for our convenience. Mathematically, a symmetry of an object in the plane is an isometry of the plane (that is, a motion that does not change lengths between points) that takes the object onto itself. In other words, a symmetry of an object in the plane is an isometry of the plane that leaves the appearance of the object unchanged.

Because the letters A, B and C in Figure 2.1.1 are not part of the triangle, a symmetry of the triangle may interchange these letters; we use the letters to keep track of what the isometry did. There are only two types of isometries that will leave the triangle looking unchanged: reflections (that is, flips) of the plane in certain lines, and rotations of the plane by certain angles about the center of the triangle. (This fact takes a proof, but is beyond the scope of this course.) In Figure 2.1.2 we see the three possible lines in which the plane can be reflected without changing the appearance of the triangle. Let M_1, M_2 and M_3 denote the reflections of the plane in these lines. For example, if we apply reflection M_2 to the plane, we see that the vertex labeled B is unmoved, and that the vertices labeled A and C are interchanged, as seen in Figure 2.1.3. The only two possible non-trivial rotations of the plane about the center of the triangle that leave the appearance of the triangle unchanged are rotation by 120° clockwise and rotation by 240° clockwise, denoted R_120 and R_240. We do not need rotation by 120° counterclockwise and rotation by 240° counterclockwise, even though they also leave the appearance of the triangle unchanged, because they have the same net effect as rotation by 240° clockwise and rotation by 120° clockwise, respectively, and it is only the net effect of isometries that is relevant to the study of symmetry. Let I denote the identity map of the plane, which is an isometry, and which can be thought of as rotation by 0°.

[Figure 2.1.1: an equilateral triangle with vertices labeled A, B and C]
[Figure 2.1.2: the three lines L_1, L_2 and L_3 in which the plane can be reflected without changing the appearance of the triangle]

[Figure 2.1.3: the effect of the reflection M_2, which fixes the vertex labeled B and interchanges the vertices labeled A and C]

The set G = {I, R_120, R_240, M_1, M_2, M_3} is the collection of all isometries of the plane that take the equilateral triangle onto itself. Each of these isometries can be thought of as a function R^2 → R^2, and as such we can combine these isometries by composition of functions. It can be proved that the composition of isometries is an isometry, and therefore
composition becomes a binary operation ∘ on the set G; the details are omitted. We can then form the operation table for this binary operation:

∘      I      R_120  R_240  M_1    M_2    M_3
I      I      R_120  R_240  M_1    M_2    M_3
R_120  R_120  R_240  I      M_3    M_1    M_2
R_240  R_240  I      R_120  M_2    M_3    M_1
M_1    M_1    M_2    M_3    I      R_120  R_240
M_2    M_2    M_3    M_1    R_240  I      R_120
M_3    M_3    M_1    M_2    R_120  R_240  I
Composition of functions is associative, and hence this binary operation on G is associative. Observe that I is an identity element. It is seen that I, M_1, M_2 and M_3 are their own inverses, and that R_120 and R_240 are inverses of each other. Therefore (G, ∘) is a group. This group is not abelian, however. For example, we see that R_120 ∘ M_1 ≠ M_1 ∘ R_120.
The group G is called the symmetry group of the equilateral triangle.
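The symmetry group above can also be modeled on a computer. The following Python sketch (ours, not Bloch's) encodes three of the six symmetries as permutations of the vertex labels; the particular dictionaries are an assumed labelling of the geometric maps, chosen only to illustrate that the group is not abelian.

    # Each symmetry is recorded by where it sends the vertex labels A, B, C.
    I    = {'A': 'A', 'B': 'B', 'C': 'C'}
    R120 = {'A': 'B', 'B': 'C', 'C': 'A'}   # assumed labelling of the rotation
    M1   = {'A': 'A', 'B': 'C', 'C': 'B'}   # assumed reflection fixing the label A

    def compose(f, g):
        # (f ∘ g)(x) = f(g(x)), composition of functions as in the example
        return {x: f[g[x]] for x in g}

    print(compose(R120, M1))                          # R120 ∘ M1
    print(compose(M1, R120))                          # M1 ∘ R120
    print(compose(R120, M1) == compose(M1, R120))     # False: the group is not abelian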
Exercises
Exercise 2.1.1. For each of the following sets with binary operations, state whether the set with binary operation is a group, and whether it is an abelian group.

(1) The set (0, 1], and the binary operation multiplication.

(2) The set of positive rational numbers, and the binary operation multiplication.

(3) The set of even integers, and the binary operation addition.

(4) The set of even integers, and the binary operation multiplication.

(5) The set Z, and the binary operation ∗ on Z defined by a ∗ b = a − b for all a, b ∈ Z.

(6) The set Z, and the binary operation ∗ on Z defined by a ∗ b = ab + a for all a, b ∈ Z.

(7) The set Z, and the binary operation ∗ on Z defined by a ∗ b = a + b + 1 for all a, b ∈ Z.

(8) The set R − {−1}, and the binary operation ∗ on R − {−1} defined by a ∗ b = a + b + ab for all a, b ∈ R − {−1}.

Exercise 2.1.2. Let P = {a, b, c, d, e}. Find a binary operation ∗ on P given by an operation table such that (P, ∗) is a group.

Exercise 2.1.3. Find an example of a set and a binary operation on the set given by an operation table such that each element of the set appears once and only once in each row of the operation table and once and only once in each column, but the set together with this binary operation is not a group.
Exercise 2.1.4. Let A be a set. Let P(A) denote the power set of A. Define the binary operation △ on P(A) by X △ Y = (X − Y) ∪ (Y − X) for all X, Y ∈ P(A). (This binary operation is called symmetric difference.) Prove that (P(A), △) is an abelian group.
Exercise 2.1.5. Let (G, ∗) be a group. Prove that if x′ = x for all x ∈ G, then G is abelian. Is the converse to this statement true?

Exercise 2.1.6. Let (H, ∗) be a group. Suppose that H is finite, and has an even number of elements. Prove that there is some h ∈ H such that h ≠ e_H and h ∗ h = e_H.
2.2 Isomorphic Groups
Fraleigh, 7th ed. Section 3
Gallian, 8th ed. Section 6
Judson, 2013 Section 9.1
Definition 2.2.1. Let (G, ∗) and (H, ⋄) be groups, and let f : G → H be a function. The function f is an isomorphism (sometimes called a group isomorphism) if f is bijective and if f (a ∗ b) = f (a) ⋄ f (b) for all a, b ∈ G.

Definition 2.2.2. Let (G, ∗) and (H, ⋄) be groups. The groups G and H are isomorphic if there is an isomorphism G → H. If G and H are isomorphic, it is denoted G ≅ H.
Theorem 2.2.3. Let G, H be groups, and let f : G → H be an isomorphism.

1. f (e_G) = e_H.

2. If a ∈ G, then f (a′) = [ f (a)]′, where the first inverse is in G, and the second is in H.

Proof. We will prove Part (2), leaving the rest to the reader.
Let ∗ and ⋄ be the binary operations of G and H, respectively.
(2). Let a ∈ G. Then f (a) ⋄ f (a′) = f (a ∗ a′) = f (e_G) = e_H, where the last equality uses Part (1) of this theorem, and the other two equalities use the fact that f is an isomorphism and that G is a group. A similar calculation shows that f (a′) ⋄ f (a) = e_H. By Lemma 2.1.3, it follows that [ f (a)]′ = f (a′).
Theorem 2.2.4. Let G, H and K be groups, and let f : G → H and j : H → K be isomorphisms.

1. The identity map 1_G : G → G is an isomorphism.

2. The function f^{-1} is an isomorphism.

3. The function j ∘ f is an isomorphism.

Lemma 2.2.5. Let G and H be groups. Suppose that G and H are isomorphic. Then G is abelian if and only if H is abelian.

Proof. This lemma follows immediately from Theorem 1.2.3 (1).

Lemma 2.2.6. Let (G, ∗) be a group, let A be a set, and let f : A → G be a bijective map. Then there is a unique binary operation ⋄ on A such that (A, ⋄) is a group and f is an isomorphism.
Exercises
Exercise 2.2.1. Which of the following functions are isomorphisms? The groups under consideration are (R, +), and (Q, +), and ((0, ∞), ·).

(1) Let f : Q → (0, ∞) be defined by f (x) = 3^x for all x ∈ Q.

(2) Let k : (0, ∞) → (0, ∞) be defined by k(x) = x^3 for all x ∈ (0, ∞).

(3) Let m : R → R be defined by m(x) = x + 3 for all x ∈ R.

(4) Let g : (0, ∞) → R be defined by g(x) = ln x for all x ∈ (0, ∞).
Exercise 2.2.2. Prove Theorem 2.2.3 (1).
Exercise 2.2.3. Prove Theorem 2.2.4 (1) and (2).
Exercise 2.2.4. Prove that up to isomorphism, the only two groups with four elements are Z_4 and the Klein 4-group K. Consider all possible operation tables for the binary operation of a group with four elements; use the fact that each element of a group appears once in each row and once in each column of the operation table for the binary operation of the group, as stated in Remark 2.3.2.

Exercise 2.2.5. Let G be a group, and let g ∈ G. Let i_g : G → G be defined by i_g(x) = gxg^{-1} for all x ∈ G. Prove that i_g is an isomorphism.
2.3 Basic Properties of Groups
Fraleigh, 7th ed. Section 4
Gallian, 8th ed. Section 2
Judson, 2013 Section 3.2
Theorem 2.3.1. Let G be a group, and let a, b, c ∈ G.

1. e^{-1} = e.

2. If ac = bc, then a = b (Cancellation Law).

3. If ca = cb, then a = b (Cancellation Law).

4. (a^{-1})^{-1} = a.

5. (ab)^{-1} = b^{-1}a^{-1}.

6. If ba = e, then b = a^{-1}.

7. If ab = e, then b = a^{-1}.
Proof. We prove Part (5), leaving the rest to the reader.
(5). By Lemma 2.1.3 we know that ab has a unique inverse. If we can show that (ab)(b^{-1}a^{-1}) = e and (b^{-1}a^{-1})(ab) = e, then it will follow that b^{-1}a^{-1} is the unique inverse for ab, which means that (ab)^{-1} = b^{-1}a^{-1}. Using the definition of a group we see that
(ab)(b^{-1}a^{-1}) = [(ab)b^{-1}]a^{-1} = [a(bb^{-1})]a^{-1} = [ae]a^{-1} = aa^{-1} = e.
A similar computation shows that (b^{-1}a^{-1})(ab) = e.
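As an added illustration of Part (5) (our own example, using invertible 2 × 2 matrices under multiplication as the group), the following Python snippet checks that (AB)^{-1} = B^{-1}A^{-1}, and that A^{-1}B^{-1} generally differs.

    from fractions import Fraction as F

    def mul(X, Y):
        # product of two 2x2 matrices
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

    def inv(X):
        # inverse of a 2x2 matrix with nonzero determinant, using exact fractions
        det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
        return [[F(X[1][1], det), F(-X[0][1], det)],
                [F(-X[1][0], det), F(X[0][0], det)]]

    A = [[1, 2], [3, 5]]
    B = [[2, 1], [1, 1]]
    print(inv(mul(A, B)) == mul(inv(B), inv(A)))   # True: (AB)^{-1} = B^{-1}A^{-1}
    print(inv(mul(A, B)) == mul(inv(A), inv(B)))   # False here: the order matters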
Remark 2.3.2. A useful consequence of Theorem 2.3.1 (2) and (3) is that if the binary operation of a group with finitely many elements is given by an operation table, then each element of the group appears once and only once in each row of the operation table and once and only once in each column (consider what would happen otherwise). On the other hand, just because an operation table does have each element once and only once in each row and once and only once in each column does not guarantee that the operation yields a group; the reader is asked to find such an operation table in Exercise 2.1.3.
Theorem 2.3.3. Let G be a group. The following are equivalent.

a. G is abelian.

b. (ab)^{-1} = a^{-1}b^{-1} for all a, b ∈ G.

c. aba^{-1}b^{-1} = e for all a, b ∈ G.

d. (ab)^2 = a^2 b^2 for all a, b ∈ G.

Theorem 2.3.4 (Definition by Recursion). Let H be a set, let e ∈ H and let k : H → H be a function. Then there is a unique function f : N → H such that f (1) = e, and that f (n + 1) = k( f (n)) for all n ∈ N.
Definition 2.3.5. Let G be a group and let a ∈ G.

1. The element a^n ∈ G is defined for all n ∈ N by letting a^1 = a, and a^{n+1} = a a^n for all n ∈ N.

2. The element a^0 ∈ G is defined by a^0 = e. For each n ∈ N, the element a^{-n} is defined by a^{-n} = (a^n)^{-1}.
Lemma 2.3.6. Let G be a group, let a ∈ G and let n, m ∈ Z.

1. a^n a^m = a^{n+m}.

2. (a^n)′ = a^{-n}.

Lemma 2.3.7. Let G be a group, let a ∈ G and let n, m ∈ Z.

1. a^n a^m = a^{n+m}.

2. (a^n)^{-1} = a^{-n}.
Proof. First, we prove Part (1) in the case where n ∈ N and m = −n. Using the definition of a^{-n} we see that a^n a^m = a^n a^{-n} = a^n (a^n)^{-1} = e = a^0 = a^{n+m}.

We now prove Part (2). There are three cases. First, suppose n > 0. Then n ∈ N. Then the result is true by definition. Second, suppose n = 0. Then (a^n)^{-1} = (a^0)^{-1} = e^{-1} = e = a^0 = a^{-0} = a^{-n}. Third, suppose n < 0. Then −n ∈ N. We then have e = e^{-1} = (a^0)^{-1} = (a^{-n} a^n)^{-1} = (a^n)^{-1} (a^{-n})^{-1}, and similarly e = (a^{-n})^{-1} (a^n)^{-1}. By the uniqueness of inverses we deduce that [(a^{-n})^{-1}]^{-1} = (a^n)^{-1}, and hence a^{-n} = (a^n)^{-1}.

We now prove Part (1) in general. There are five cases. First, suppose that n > 0 and m > 0. Then n, m ∈ N. This part of the proof is by induction. We will use induction on k to prove that for each k ∈ N, the formula a^k a^p = a^{k+p} holds for all p ∈ N.

Let k = 1. Let p ∈ N. Then by the definition of a^p we see that a^k a^p = a^1 a^p = a a^p = a^{p+1} = a^{1+p} = a^{k+p}. Hence the result is true for k = 1.

Now let k ∈ N. Suppose that the result is true for k. Let p ∈ N. Then by the definition of a^k we see that a^{k+1} a^p = (a a^k) a^p = a (a^k a^p) = a a^{k+p} = a^{(k+p)+1} = a^{(k+1)+p}. Hence the result is true for k + 1. It follows by induction that a^k a^p = a^{k+p} for all p ∈ N, for all k ∈ N.

Second, suppose that n = 0 or m = 0. Without loss of generality, assume that n = 0. Then by the definition of a^m we see that a^n a^m = a^0 a^m = e a^m = a^m = a^{0+m} = a^{n+m}.

Third, suppose that n > 0 and m < 0. Then −m > 0, and hence n ∈ N and −m ∈ N. There are now three subcases.

For the first subcase, suppose that n + m > 0. Therefore n + m ∈ N. Let r = n + m. Then n = r + (−m). Because r > 0 and −m > 0, then by the first case in this proof, together with the definition of a^{−(−m)}, we see that a^n a^m = a^{r+(−m)} a^{−(−m)} = (a^r a^{−m}) a^{−(−m)} = a^r (a^{−m} a^{−(−m)}) = a^r (a^{−m} (a^{−m})^{-1}) = a^r e = a^r = a^{n+m}.

For the second subcase, suppose that n + m = 0. Then m = −n. Using the definition of a^{-n} we see that a^n a^m = a^n a^{-n} = a^n (a^n)^{-1} = e = a^0 = a^{n+m}.

For the third subcase, suppose that n + m < 0. Then (−n) + (−m) = −(n + m) > 0. Because −m > 0 and −n < 0, we can use the first of our three subcases, together with the definition of a^n, to see that a^n a^m = ((a^n)^{-1})^{-1} ((a^m)^{-1})^{-1} = (a^{-n})^{-1} (a^{-m})^{-1} = [a^{-m} a^{-n}]^{-1} = [a^{(−m)+(−n)}]^{-1} = [a^{−(n+m)}]^{-1} = [(a^{n+m})^{-1}]^{-1} = a^{n+m}. We have now completed the proof in the case where n > 0 and m < 0.

Fourth, suppose that n < 0 and m > 0. This case is similar to the previous case, and we omit the details.

Fifth, suppose that n < 0 and m < 0. Then −n > 0 and −m > 0, and we can proceed similarly to the third subcase of the third case of this proof; we omit the details.
Definition 2.3.8. Let A be a set, and let ∗ be a binary operation on A. An element e ∈ A is a left identity element for ∗ if e ∗ a = a for all a ∈ A. If ∗ has a left identity element, the binary operation ∗ satisfies the Left Identity Law.

Definition 2.3.9. Let A be a set, and let ∗ be a binary operation on A. Let e ∈ A. Suppose that e is a left identity element for ∗. If a ∈ A, a left inverse for a is an element a′ ∈ A such that a′ ∗ a = e. If every element in A has a left inverse, the binary operation ∗ satisfies the Left Inverses Law.

Theorem 2.3.10. Let G be a set, and let ∗ be a binary operation on G. If the pair (G, ∗) satisfies the Associative Law, the Left Identity Law and the Left Inverses Law, then (G, ∗) is a group.
Exercises
Exercise 2.3.1. Let H be a group, and let a, b, c ∈ H. Prove that if abc = e_H, then bca = e_H.

Exercise 2.3.2. Let G be a group. An element g ∈ G is idempotent if g^2 = g. Prove that G has precisely one idempotent element.

Exercise 2.3.3. Let H be a group. Suppose that h^2 = e_H for all h ∈ H. Prove that H is abelian.

Exercise 2.3.4. Let G be a group, and let a, b ∈ G. Prove that (ab)^2 = a^2 b^2 if and only if ab = ba.

Exercise 2.3.5. Let G be a group, and let a, b ∈ G. Prove that (ab)^{-1} = a^{-1}b^{-1} if and only if ab = ba. (Do not use Theorem 2.3.3.)
Exercise 2.3.6. Find an example of a group G, and elements a, b ∈ G, such that (ab)^{-1} ≠ a^{-1}b^{-1}.

Exercise 2.3.7. Let (H, ∗) be a group. Let ⋄ be the binary operation on H defined by a ⋄ b = b ∗ a for all a, b ∈ H.

(1) Prove that (H, ⋄) is a group.

(2) Prove that (H, ∗) and (H, ⋄) are isomorphic.

Exercise 2.3.8. Let G be a group, and let g ∈ G. Let i_g : G → G be defined by i_g(x) = gxg′ for all x ∈ G. Prove that i_g is an isomorphism.

Exercise 2.3.9. Let G be a group, and let g ∈ G. Suppose that G is finite. Prove that there is some n ∈ N such that g^n = e_G.
2.4 Subgroups
Fraleigh, 7th ed. Section 5
Gallian, 8th ed. Section 3
Judson, 2013 Section 3.3
Definition 2.4.1. Let G be a group, and let H ⊆ G be a subset. The subset H is a subgroup of G if the following two conditions hold.

(a) H is closed under ∗.

(b) (H, ∗) is a group.

If H is a subgroup of G, it is denoted H ≤ G.
Lemma 2.4.2. Let G be a group, and let H ≤ G.

1. The identity element of G is in H, and it is the identity element of H.

2. The inverse operation in H is the same as the inverse operation in G.

Proof.
(1). Let e_G be the identity element of G. Because (H, ∗) is a group, it has an identity element, say e_H. (We cannot assume, until we prove it, that e_H is the same as e_G.) Then e_H ∗ e_H = e_H thinking of e_H as being in H, and e_G ∗ e_H = e_H thinking of e_H as being in G. Hence e_H ∗ e_H = e_G ∗ e_H. Because both e_H and e_G are in G, we can use Theorem 2.3.1 (2) to deduce that e_H = e_G. Hence e_G ∈ H, and e_G is the identity element of H.

(2). Now let a ∈ H. Because (G, ∗) is a group, the element a has an inverse a^{-1} ∈ G. We will show that a^{-1} ∈ H. Because (H, ∗) is a group, then a has an inverse â ∈ H. (Again, we cannot assume, until we prove it, that â is the same as a^{-1}.) Using the definition of inverses, and what we saw in the previous paragraph, we know that a^{-1} ∗ a = e_G and â ∗ a = e_G. Hence â ∗ a = a^{-1} ∗ a. Using Theorem 2.3.1 (2) again, we deduce that a^{-1} = â.
Theorem 2.4.3. Let G be a group, and let H ⊆ G. Then H ≤ G if and only if the following three conditions hold.

(i) e ∈ H.

(ii) If a, b ∈ H, then ab ∈ H.

(iii) If a ∈ H, then a^{-1} ∈ H.

Proof. First suppose that H is a subgroup. Then Property (ii) holds by the definition of a subgroup, and Properties (i) and (iii) hold by Lemma 2.4.2.
Now suppose that Properties (i), (ii) and (iii) hold. To show that H is a subgroup, we need to show that (H, ∗) is a group. We know that ∗ is associative with respect to all the elements of G, so it certainly is associative with respect to the elements of H. The other properties of a group follow from what is being assumed.

Theorem 2.4.4. Let G be a group, and let H ⊆ G. Then H ≤ G if and only if the following three conditions hold.

(i) H ≠ ∅.

(ii) If a, b ∈ H, then ab ∈ H.

(iii) If a ∈ H, then a^{-1} ∈ H.
Proof. First, suppose that H is a subgroup. Then Properties (i), (ii) and (iii) hold by Theorem 2.4.3.
Second, suppose that Properties (i), (ii) and (iii) hold. Because H ≠ ∅, there is some b ∈ H. By Property (iii) we know that b^{-1} ∈ H. By Property (ii) we deduce that b^{-1}b ∈ H, and hence e ∈ H. We now use Theorem 2.4.3 to deduce that H is a subgroup.
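Theorem 2.4.4 gives a practical test. The following Python sketch (ours, not part of the text) applies it to subsets of Z_n under addition modulo n.

    def is_subgroup(subset, n):
        """Check the criterion of Theorem 2.4.4 for H ⊆ Z_n under addition mod n."""
        H = set(subset)
        if not H:                                                 # (i) H must be non-empty
            return False
        closed   = all((a + b) % n in H for a in H for b in H)    # (ii) closure
        inverses = all((-a) % n in H for a in H)                  # (iii) inverses
        return closed and inverses

    print(is_subgroup({0, 3, 6, 9}, 12))   # True:  the subgroup generated by [3]
    print(is_subgroup({0, 5}, 12))         # False: 5 + 5 = 10 is not in the subset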
Lemma 2.4.5. Let G be a group, and let K ⊆ H ⊆ G. If K ≤ H and H ≤ G, then K ≤ G.

Lemma 2.4.6. Let G be a group, and let {H_i}_{i∈I} be a family of subgroups of G indexed by I. Then ⋂_{i∈I} H_i ≤ G.

Theorem 2.4.7. Let G, H be groups, and let f : G → H be an isomorphism.

1. If A ≤ G, then f (A) ≤ H.

2. If B ≤ H, then f^{-1}(B) ≤ G.
Proof. We will prove Part (1), leaving the rest to the reader.
Let ∗ and ⋄ be the binary operations of G and H, respectively.
(1). Let A ≤ G. By Lemma 2.4.2 (1) we know that e_G ∈ A, and by Theorem 2.2.3 (1) we know that e_H = f (e_G) ∈ f (A). Hence f (A) is non-empty. We can therefore use Theorem 2.4.4 to show that f (A) is a subgroup of H. Let x, y ∈ f (A). Then there are a, b ∈ A such that x = f (a) and y = f (b). Hence x ⋄ y = f (a) ⋄ f (b) = f (a ∗ b), because f is an isomorphism. Because A is a subgroup of G we know that a ∗ b ∈ A, and hence x ⋄ y ∈ f (A). Using Theorem 2.2.3 (2) we see that x^{-1} = [ f (a)]^{-1} = f (a^{-1}). Because A is a subgroup of G, it follows from Theorem 2.4.4 (iii) that a^{-1} ∈ A. We now use Theorem 2.4.4 to deduce that f (A) is a subgroup of H.
Lemma 2.4.8. Let (G, ∗) and (H, ⋄) be groups, and let f : G → H be a function. Suppose that f is injective, and that f (a ∗ b) = f (a) ⋄ f (b) for all a, b ∈ G.

1. f (G) ≤ H.

2. The map f : G → f (G) is an isomorphism.

Proof. Part (1) is proved using the same proof as Theorem 2.4.7 (1), and Part (2) is trivial.
Exercises
Exercise 2.4.1. Let GL_2(R) denote the set of invertible 2 × 2 matrices with real number entries, and let SL_2(R) denote the set of all 2 × 2 matrices with real number entries that have determinant 1. Prove that SL_2(R) is a subgroup of GL_2(R). (This exercise requires familiarity with basic properties of determinants.)

Exercise 2.4.2. Let n ∈ N.

(1) Prove that (Z_n, +) is an abelian group.

(2) Suppose that n is not a prime number. Then n = ab for some a, b ∈ N such that 1 < a < n and 1 < b < n. Prove that the set {[0], [a], [2a], . . . , [(b − 1)a]} is a subgroup of Z_n.

(3) Is (Z_n − {[0]}, ·) a group for all n? If not, can you find any conditions on n that would guarantee that (Z_n − {[0]}, ·) is a group?
Exercise 2.4.3. Find all the subgroups of the symmetry group of the square.
Exercise 2.4.4. Let G be a group, and let A, B ⊆ G. Suppose that G is abelian. Let AB denote the subset
AB = {ab | a ∈ A and b ∈ B}.
Prove that if A, B ≤ G, then AB ≤ G.

Exercise 2.4.5. Let G be a group, and let H ⊆ G. Prove that H ≤ G if and only if the following two conditions hold.

(i) H ≠ ∅.

(ii) If a, b ∈ H, then ab^{-1} ∈ H.

Exercise 2.4.6. Let G be a group. Suppose that G is abelian. Let I denote the subset
I = {g ∈ G | g^2 = e_G}.
Prove that I ≤ G.
Exercise 2.4.7. Let G be a group, and let H ⊆ G. Suppose that the following three conditions hold.

(i) H ≠ ∅.

(ii) H is finite.

(iii) H is closed under ∗.

Prove that H ≤ G.

Exercise 2.4.8. Let G be a group, and let s ∈ G. Let C_s denote the subset
C_s = {g ∈ G | gs = sg}.
Prove that C_s ≤ G.

Exercise 2.4.9. Let G be a group, and let A ⊆ G. Let C_A denote the subset
C_A = {g ∈ G | ga = ag for all a ∈ A}.
Prove that C_A ≤ G.
3
Various Types of Groups
3.1 Cyclic Groups
Fraleigh, 7th ed. Section 6
Gallian, 8th ed. Section 4
Judson, 2013 Section 4.1
Lemma 3.1.1. Let G be a group and let a ∈ G. Then
{a^n | n ∈ Z} = ⋂ {H ≤ G | a ∈ H}.   (3.1.1)

Definition 3.1.2. Let G be a group and let a ∈ G. The cyclic subgroup of G generated by a, denoted ⟨a⟩, is the set in Equation 3.1.1.

Definition 3.1.3. Let G be a group. Then G is a cyclic group if G = ⟨a⟩ for some a ∈ G; the element a is a generator of G.

Definition 3.1.4. Let G be a group and let H ≤ G. Then H is a cyclic subgroup of G if H = ⟨a⟩ for some a ∈ G; the element a is a generator of H.

Definition 3.1.5. Let G be a group and let a ∈ G. If ⟨a⟩ is finite, the order of a, denoted |a|, is the cardinality of ⟨a⟩. If ⟨a⟩ is infinite, then a has infinite order.

Theorem 3.1.6 (Well-Ordering Principle). Let A ⊆ N be a set. If A is non-empty, then there is a unique m ∈ A such that m ≤ a for all a ∈ A.

Theorem 3.1.7 (Division Algorithm). Let a, b ∈ Z. Suppose that b ≠ 0. Then there are unique q, r ∈ Z such that a = qb + r and 0 ≤ r < |b|.
Theorem 3.1.8. Let G be a group, let a ∈ G and let m ∈ N. Then |a| = m if and only if a^m = e and a^i ≠ e for all i ∈ {1, . . . , m − 1}.

Proof. First, suppose that |a| = m. Then ⟨a⟩ = {a^n | n ∈ Z} has m elements. Therefore the elements a^0, a, a^2, . . . , a^m cannot be all distinct. Hence there are i, j ∈ {0, . . . , m} such that i ≠ j and a^i = a^j. WLOG assume i < j. Then a^{j−i} = e. Observe that j − i ∈ N, and j − i ≤ m. Let M = {p ∈ N | a^p = e}. Then M ⊆ N, and M ≠ ∅ because j − i ∈ M. By the Well-Ordering Principle (Theorem 3.1.6), there is a unique k ∈ M such that k ≤ x for all x ∈ M. Hence a^k = e and a^q ≠ e for all q ∈ {1, . . . , k − 1}.

Because j − i ∈ M, it follows that k ≤ j − i ≤ m. Let y ∈ Z. By the Division Algorithm (Theorem 3.1.7), there are unique q, r ∈ Z such that y = qk + r and 0 ≤ r < k. Then a^y = a^{qk+r} = (a^k)^q a^r = e^q a^r = a^r. It follows that ⟨a⟩ ⊆ {a^n | n ∈ {0, . . . , k − 1}}. Therefore |a| ≤ k. Because |a| = m, we deduce that m ≤ k. It follows that k = m. Hence a^m = e and a^i ≠ e for all i ∈ {1, . . . , m − 1}.

Second, suppose a^m = e and a^i ≠ e for all i ∈ {1, . . . , m − 1}. We claim that ⟨a⟩ ⊆ {a^n | n ∈ {0, . . . , m − 1}}, using the Division Algorithm similarly to an argument seen previously in this proof. Additionally, we claim that all the elements a^0, a, a^2, . . . , a^{m−1} are distinct. If not, then an argument similar to the one above would show that there are s, t ∈ {0, . . . , m − 1} such that s < t and a^{t−s} = e; because t − s ≤ m − 1 < m, we would have a contradiction. It follows that ⟨a⟩ = {a^n | n ∈ {0, . . . , m − 1}}, and hence |a| = m.
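The characterization in Theorem 3.1.8 translates directly into a computation. Here is a small Python sketch (ours, not part of the text) that finds the order of an element of (Z_n, +), that is, the least m ≥ 1 with m·a ≡ 0 (mod n).

    def order_in_Zn(a, n):
        """Order of [a] in the group (Z_n, +): the least m >= 1 with m*a ≡ 0 (mod n)."""
        total, m = a % n, 1
        while total != 0:
            total = (total + a) % n
            m += 1
        return m

    print(order_in_Zn(4, 12))   # 3, since <[4]> = {[0], [4], [8]}
    print(order_in_Zn(5, 12))   # 12, so [5] generates Z_12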
Lemma 3.1.9. Let G be a group. If G is cyclic, then G is abelian.

Theorem 3.1.10. Let G be a group and let H ≤ G. If G is cyclic, then H is cyclic.

Corollary 3.1.11. Every subgroup of Z has the form nZ for some n ∈ N ∪ {0}.

Theorem 3.1.12. Let G be a group. Suppose that G is cyclic.

1. If G is infinite, then G ≅ Z.

2. If |G| = n for some n ∈ N, then G ≅ Z_n.

Definition 3.1.13. Let a, b ∈ Z. If at least one of a or b is not zero, the greatest common divisor of a and b, denoted (a, b), is the largest integer that divides both a and b. Let (0, 0) = 0.

Lemma 3.1.14. Let a, b ∈ Z. Then (a, b) exists and (a, b) ≥ 0.

Definition 3.1.15. Let a, b ∈ Z. The numbers a and b are relatively prime if (a, b) = 1.

Lemma 3.1.16. Let a, b ∈ Z. Then there are m, n ∈ Z such that (a, b) = ma + nb.

Corollary 3.1.17. Let a, b ∈ Z. If r is a common divisor of a and b, then r | (a, b).

Corollary 3.1.18. Let a, b ∈ Z. Then (a, b) = 1 if and only if there are m, n ∈ Z such that ma + nb = 1.

Corollary 3.1.19. Let a, b, r ∈ Z. Suppose that (a, b) = 1. If a | br then a | r.
Theorem 3.1.20. Let G be a group. Suppose that G = ⟨a⟩ for some a ∈ G, and that |G| = n for some n ∈ N. Let s, r ∈ N.

1. |a^s| = n/(n, s).

2. ⟨a^s⟩ = ⟨a^r⟩ if and only if (n, s) = (n, r).

Corollary 3.1.21. Let G be a group. Suppose that G = ⟨a⟩ for some a ∈ G, and that |G| = n for some n ∈ N. Let s ∈ N. Then G = ⟨a^s⟩ if and only if (n, s) = 1.
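Corollary 3.1.21 says that [s] generates Z_n exactly when (n, s) = 1. The short Python sketch below (ours, not part of the text) lists the generators of Z_n, which is one way to approach Exercises 3.1.1 and 3.1.4.

    from math import gcd

    def generators_of_Zn(n):
        """By Corollary 3.1.21, [s] generates Z_n if and only if gcd(n, s) = 1."""
        return [s for s in range(1, n) if gcd(n, s) == 1]

    print(generators_of_Zn(6))          # [1, 5]
    print(len(generators_of_Zn(60)))    # 16 generators, the Euler phi-value of 60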
Exercises
Exercise 3.1.1. Let C be a cyclic group of order 60. How many generators does C have?

Exercise 3.1.2. List all orders of subgroups of Z_20.

Exercise 3.1.3. Find all the subgroups of Z_6.

Exercise 3.1.4. Let p, q ∈ N be prime numbers. How many generators does the group Z_pq have?
Exercise 3.1.5. Let G be a group, and let g, h ∈ G. Prove that if gh has order p for some p ∈ N, then hg has order p.

Exercise 3.1.6. Let G, H be groups, and let f, k : G → H be isomorphisms. Suppose that G = ⟨a⟩ for some a ∈ G. Prove that if f (a) = k(a), then f = k.

Exercise 3.1.7. Let G be a group. Prove that if G has a finite number of subgroups, then G is finite.

Exercise 3.1.8. Let G be a group, and let A, B ≤ G. Suppose that G is abelian, and that A and B are cyclic and finite. Suppose that |A| and |B| are relatively prime. Prove that G has a cyclic subgroup of order |A| · |B|.
3.2 Finitely Generated Groups
Fraleigh, 7th ed. Section 7
Lemma 3.2.1. Let G be a group and let S ⊆ G.

1. {H ≤ G | S ⊆ H} ≠ ∅.

2. ⋂ {H ≤ G | S ⊆ H} ≤ G.

3. If K ≤ G and S ⊆ K, then ⋂ {H ≤ G | S ⊆ H} ⊆ K.

Proof. Part (1) holds because G ∈ {H ≤ G | S ⊆ H}, Part (2) holds by Lemma 2.4.6, and Part (3) is straightforward.

Definition 3.2.2. Let G be a group and let S ⊆ G. The subgroup generated by S, denoted ⟨S⟩, is defined by ⟨S⟩ = ⋂ {H ≤ G | S ⊆ H}.

Theorem 3.2.3. Let G be a group and let S ⊆ G. Then
⟨S⟩ = {(a_1)^{n_1} (a_2)^{n_2} · · · (a_k)^{n_k} | k ∈ N, and a_1, . . . , a_k ∈ S, and n_1, . . . , n_k ∈ Z}.

Remark 3.2.4. In an abelian group, using additive notation, we would write the result of Theorem 3.2.3 as
⟨S⟩ = {n_1 a_1 + n_2 a_2 + · · · + n_k a_k | k ∈ N, and a_1, . . . , a_k ∈ S, and n_1, . . . , n_k ∈ Z}.

Definition 3.2.5. Let G be a group and let S ⊆ G. Suppose ⟨S⟩ = G. The set S generates G, and the elements of S are generators of G.

Definition 3.2.6. Let G be a group. If G = ⟨S⟩ for some finite subset S ⊆ G, then G is finitely generated.
3.3 Dihedral Groups
Fraleigh, 7th ed. Section 8
Gallian, 8th ed. Section 1
Judson, 2013 Section 5.2
Theorem 3.3.1. Let n ∈ N. Suppose n ≥ 3. For a regular n-gon, let 1 denote the identity symmetry, let r denote the smallest possible clockwise rotation symmetry, and let m denote a reflection symmetry.

1. r^n = 1, and r^k ≠ 1 for k ∈ {1, . . . , n − 1}.

2. m^2 = 1 and m ≠ 1.

3. rm = mr^{-1}.

4. If p ∈ Z, then r^p m = mr^{-p}.

5. If p ∈ N, then r^p = r^k for a unique k ∈ {0, 1, . . . , n − 1}.

6. If p, s ∈ {0, 1, . . . , n − 1}, then r^p = r^s and mr^p = mr^s if and only if p = s.

7. If p ∈ {0, 1, . . . , n − 1}, then m ≠ r^p.

8. If p ∈ {0, 1, . . . , n − 1}, then (r^p)^{-1} = r^{n−p} and (mr^p)^{-1} = mr^p.
Proof. Parts (1) and (2) follow immediately from the definition of r and m. Part (3) can be verified geometrically by doing both sides to a polygon. For Part (4), prove it for positive p by induction, for p = 0 trivially, and for negative p by using the fact that it has been proved for positive p and then, for p negative, looking at mr^p m = mr^{−(−p)} m = r^{-p} mm = r^{-p}. Part (5) uses the Division Algorithm (Theorem 3.1.7). Part (6) is proved similarly to proofs we saw for cyclic groups. Part (7) is geometric, because m reverses orientation and powers of r preserve orientation. Part (8) is also geometric, using basic ideas about rotations and reflections.
Definition 3.3.2. Let n ∈ N. Suppose n ≥ 3. For a regular n-gon, let 1 denote the identity symmetry, let r denote the smallest possible clockwise rotation symmetry, and let m denote a reflection symmetry. The n-th dihedral group, denoted D_n, is the group
D_n = {1, r, r^2, . . . , r^{n−1}, m, mr, mr^2, . . . , mr^{n−1}}.

Theorem 3.3.3. Let n ∈ N. Suppose n ≥ 3. Then there is a unique group with generators a and b that satisfy the following three conditions.

(i) a^n = 1, and a^k ≠ 1 for all k ∈ {1, . . . , n − 1}.

(ii) b^2 = 1 and b ≠ 1.

(iii) ab = ba^{-1}.
3.4 Permutations and Permutation Groups
Fraleigh, 7th ed. Section 8
Gallian, 8th ed. Section 5, 6
Judson, 2013 Section 5.1
Definition 3.4.1. Let A be a non-empty set. A permutation of A is a bijective map A → A. The set of all permutations of A is denoted S_A. The identity permutation A → A is denoted ι.

Definition 3.4.2. Let A be a non-empty set. The composition of two permutations of A is called permutation multiplication.

Lemma 3.4.3. Let A be a non-empty set. The pair (S_A, ∘) is a group.

Remark 3.4.4. In the group S_A, we will usually write στ as an abbreviation of σ ∘ τ.

Lemma 3.4.5. Let A and B be non-empty sets. Suppose that A and B have the same cardinality. Then S_A ≅ S_B.

Definition 3.4.6. Let n ∈ N, and let A = {1, . . . , n}. The group S_A is denoted S_n. The group S_n is the symmetric group on n letters.
Proposition 3.4.7. Let A be a non-empty set. Then S_A is abelian if and only if A is finite and |A| ≤ 2.

Proof. It is easy to see that S_1 and S_2 are abelian (look at the groups explicitly). We see that S_3 is not abelian by looking at the two permutations with second rows (2 1 3) and (1 3 2). However, those two permutations can be thought of as belonging to S_A for any A that has three or more elements (including the case when A is infinite), and hence all such groups are not abelian.
Theorem 3.4.8 (Cayley's Theorem). Let G be a group. Then there is a set A such that G is isomorphic to a subgroup of S_A. If G is finite, a finite set A can be found.
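Cayley's Theorem can be seen concretely: each group element g yields the permutation x ↦ gx of the underlying set. The Python sketch below (ours, not part of the text) does this for Z_4 under addition.

    n = 4
    elements = list(range(n))          # Z_4 = {0, 1, 2, 3} under addition mod 4

    def left_translation(g):
        # The permutation of Z_4 given by x -> g + x; Cayley's Theorem says that
        # g -> (this permutation) embeds the group into the symmetric group on Z_4.
        return tuple((g + x) % n for x in elements)

    embedding = {g: left_translation(g) for g in elements}
    print(embedding)
    # {0: (0, 1, 2, 3), 1: (1, 2, 3, 0), 2: (2, 3, 0, 1), 3: (3, 0, 1, 2)}
    # Distinct elements give distinct permutations, so the map is injective.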
Exercises
Exercise 3.4.1. Let σ, τ ∈ S_3 be defined by

σ = ( 1 2 3 )        τ = ( 1 2 3 )
    ( 2 3 1 )            ( 2 1 3 ).

Compute each of the following.

(1) σ^2.

(2) σ^{-1}.

(3) στ.

(4) τσ.
Exercise 3.4.2. Find all subgroups of S_3.

Exercise 3.4.3. Let n ∈ N. Suppose n ≥ 3. Let σ ∈ S_n. Suppose that στ = τσ for all τ ∈ S_n. Prove that σ = ι.

Exercise 3.4.4. Let A be a non-empty set, and let P ≤ S_A. The subgroup P is transitive on A if for each x, y ∈ A, there is some σ ∈ P such that σ(x) = y.
Prove that if A is finite, then there is a subgroup Q ≤ S_A such that Q is cyclic, that |Q| = |A|, and that Q is transitive on A.
3.5 Permutations Part II and Alternating Groups
Fraleigh, 7th ed. Section 9
Gallian, 8th ed. Section 5
Judson, 2013 Section 5.1
Definition 3.5.1. Let A be a non-empty set, and let σ ∈ S_A. Let ∼ be the relation on A defined by a ∼ b if and only if b = σ^n(a) for some n ∈ Z, for all a, b ∈ A.

Lemma 3.5.2. Let A be a non-empty set, and let σ ∈ S_A. The relation ∼ is an equivalence relation on A.

Definition 3.5.3. Let A be a set, and let σ ∈ S_A. The equivalence classes of ∼ are called the orbits of σ.

Definition 3.5.4. Let n ∈ N, and let σ ∈ S_n. The permutation σ is a cycle if it has at most one orbit with more than one element. The length of a cycle is the number of elements in its largest orbit. A cycle of length 2 is a transposition.

Lemma 3.5.5. Let n ∈ N, and let σ ∈ S_n. Then σ is the product of disjoint cycles; the cycles of length greater than 1 are unique, though not their order.

Corollary 3.5.6. Let n ∈ N, and let σ ∈ S_n. Suppose n ≥ 2. Then σ is the product of transpositions.

Theorem 3.5.7. Let n ∈ N, and let σ ∈ S_n. Suppose n ≥ 2. Then either all representations of σ as a product of transpositions have an even number of transpositions, or all have an odd number of transpositions.

Definition 3.5.8. Let n ∈ N, and let σ ∈ S_n. Suppose n ≥ 2. The permutation σ is even or odd, respectively, if it is the product of an even number or odd number, respectively, of transpositions.
Definition 3.5.9. Let n ∈ N. Suppose n ≥ 2. The set of all even permutations in S_n is denoted A_n.

Lemma 3.5.10. Let n ∈ N. Suppose n ≥ 2.

1. The set A_n is a subgroup of S_n.

2. |A_n| = n!/2.

Definition 3.5.11. Let n ∈ N. Suppose n ≥ 2. The group A_n is the alternating group on n letters.
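The parity of Definition 3.5.8 can be computed from the disjoint cycle decomposition, since a cycle of length ℓ is a product of ℓ − 1 transpositions (Corollary 3.5.6). Here is a short Python sketch (ours, not part of the text).

    def parity(perm):
        """perm maps {0, ..., n-1} to itself, given as a tuple; returns 'even' or 'odd'.
        Count length - 1 transpositions for each cycle of the permutation."""
        seen, transpositions = set(), 0
        for start in range(len(perm)):
            if start in seen:
                continue
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x]
                length += 1
            transpositions += length - 1
        return 'even' if transpositions % 2 == 0 else 'odd'

    print(parity((1, 2, 0)))      # a 3-cycle: even
    print(parity((1, 0, 2, 3)))   # a single transposition: odd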
Exercises
Exercise 3.5.1. Compute the product of cycles (1, 3, 2)(4, 5) as a single permutation in the following groups.

(1) In S_5.

(2) In S_6.

Exercise 3.5.2. Let σ ∈ S_6 be defined by

σ = ( 1 2 3 4 5 6 )
    ( 2 3 1 5 4 6 ).

(1) Write σ as a product of cycles.

(2) Write σ as a product of transpositions.
Exercise 3.5.3. Let n ∈ N. Suppose n ≥ 3. Let σ ∈ S_n.

(1) Prove that σ can be written as a product of at most n − 1 transpositions.

(2) Prove that if σ is not a cycle, it can be written as a product of at most n − 2 transpositions.

(3) Prove that if σ is odd, it can be written as a product of 2n + 3 transpositions.

(4) Prove that if σ is even, it can be written as a product of 2n + 8 transpositions.

Exercise 3.5.4. Let n ∈ N. Suppose n ≥ 2. Let K ≤ S_n. Prove that either all the permutations in K are even, or exactly half the permutations in K are even.

Exercise 3.5.5. Let n ∈ N. Suppose n ≥ 2. Let σ ∈ S_n. Suppose that σ is odd. Prove that if τ ∈ S_n is odd, then there is some ρ ∈ A_n such that τ = σρ.

Exercise 3.5.6. Let n ∈ N. Let σ ∈ S_n. Prove that if σ is a cycle of odd length, then σ^2 is a cycle.
4
Basic Constructions
4.1 Direct Products
Fraleigh, 7th ed. Section 11
Gallian, 8th ed. Section 8
Judson, 2013 Section 9.2
Definition 4.1.1. Let H and K be groups. The product binary operation on H × K is the binary operation defined by (h_1, k_1)(h_2, k_2) = (h_1 h_2, k_1 k_2) for all (h_1, k_1), (h_2, k_2) ∈ H × K.

Lemma 4.1.2. Let H and K be groups. The set H × K with the product binary operation is a group.

Definition 4.1.3. Let H and K be groups. The set H × K with the product binary operation is the direct product of the groups H and K.

Lemma 4.1.4. Let H and K be groups. Then H × K ≅ K × H.

Lemma 4.1.5. Let H and K be groups. Suppose that H and K are abelian. Then H × K is abelian.

Theorem 4.1.6. Let m, n ∈ N. The group Z_m × Z_n is cyclic and is isomorphic to Z_mn if and only if m and n are relatively prime.
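Theorem 4.1.6 can be checked by computing the order of the element ([1], [1]) in Z_m × Z_n, which is the least common multiple of m and n; the group is cyclic of order mn exactly when this order equals mn. The sketch below (ours, not part of the text) makes the comparison.

    from math import gcd

    def order_of_1_1(m, n):
        # The order of ([1], [1]) in Z_m × Z_n is lcm(m, n).
        return m * n // gcd(m, n)

    for m, n in [(2, 3), (4, 6)]:
        cyclic = order_of_1_1(m, n) == m * n     # true exactly when gcd(m, n) = 1
        print(m, n, "cyclic of order mn:", cyclic)
    # (2, 3): True, so Z_2 × Z_3 is isomorphic to Z_6; (4, 6): False, since gcd(4, 6) = 2.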
Definition 4.1.7. Let G be a group, and let A, B ⊆ G. Let AB denote the subset AB = {ab | a ∈ A and b ∈ B}.

Lemma 4.1.8. Let H and K be groups. Let H̄ = H × {e_K} and K̄ = {e_H} × K.

1. H̄, K̄ ≤ H × K.

2. H̄K̄ = H × K.

3. H̄ ∩ K̄ = {(e_H, e_K)}.

4. hk = kh for all h ∈ H̄ and k ∈ K̄.

Lemma 4.1.9. Let G be a group, and let H, K ≤ G. Suppose that the following properties hold.

(i) HK = G.

(ii) H ∩ K = {e}.

(iii) hk = kh for all h ∈ H and k ∈ K.

Then G ≅ H × K.

Lemma 4.1.10. Let G be a group, and let H, K ≤ G. Then HK = G and H ∩ K = {e} if and only if for every g ∈ G there are unique h ∈ H and k ∈ K such that g = hk.
Theorem 4.1.11. Let m_1, . . . , m_r ∈ N. The group ∏_{i=1}^{r} Z_{m_i} is cyclic and is isomorphic to Z_{m_1 m_2 ··· m_r} if and only if m_i and m_k are relatively prime for all i, k ∈ {1, . . . , r} such that i ≠ k.
Exercises
Exercise 4.1.1. List all the elements of Z_3 × Z_4, and find the order of each element.

Exercise 4.1.2. Find all the subgroups of Z_2 × Z_2 × Z_2.
Exercise 4.1.3. Prove Lemma 4.1.8.
Exercise 4.1.4. Prove Lemma 4.1.9.
4.2 Finitely Generated Abelian Groups
Fraleigh, 7th ed. Section 11
Gallian, 8th ed. Section 11
Judson, 2013 Section 13.1
Theorem 4.2.1 (Fundamental Theorem of Finitely Generated Abelian Groups). Let G be a finitely generated abelian group. Then
G ≅ Z_{(p_1)^{n_1}} × Z_{(p_2)^{n_2}} × · · · × Z_{(p_k)^{n_k}} × Z × Z × · · · × Z
for some k ∈ N, and prime numbers p_1, . . . , p_k ∈ N, and n_1, . . . , n_k ∈ N. This direct product is unique up to the rearrangement of factors.
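For finite abelian groups the theorem reduces classification to partitions of the exponents in the prime factorization of the order: each way of partitioning the exponent of p^e gives one choice of factors Z_{p^{n_1}} × · · · × Z_{p^{n_j}}. The following Python sketch (ours, not part of the text) counts the isomorphism classes of abelian groups of a given order, which is one way to check answers to Exercises 4.2.1 through 4.2.3.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def partitions(k):
        """Number of partitions of k; each partition of k gives one abelian group of order p^k."""
        def p(k, largest):
            if k == 0:
                return 1
            return sum(p(k - i, i) for i in range(1, min(k, largest) + 1))
        return p(k, k)

    def abelian_group_count(n):
        # Factor n and multiply the partition counts of the exponents.
        count, d = 1, 2
        while d * d <= n:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            if e:
                count *= partitions(e)
            d += 1
        if n > 1:
            count *= partitions(1)
        return count

    print(abelian_group_count(16))   # 5 abelian groups of order 16, up to isomorphism
    print(abelian_group_count(20))   # 2 abelian groups of order 20, up to isomorphism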
Exercises
Exercise 4.2.1. Find, up to isomorphism, all abelian groups of order 16.

Exercise 4.2.2. Find, up to isomorphism, all abelian groups of order 20.

Exercise 4.2.3. How many abelian groups are there, up to isomorphism, of order 24?

Exercise 4.2.4. Let G be a group. Suppose that G is finite and abelian. Prove that G is not cyclic if and only if there is some prime number p ∈ N such that G has a subgroup isomorphic to Z_p × Z_p.
4.3 Infinite Products of Groups
Definition 4.3.1. Let I be a non-empty set, and let {A_i}_{i∈I} be a family of sets indexed by I. The product of the family of sets, denoted ∏_{i∈I} A_i, is the set defined by
∏_{i∈I} A_i = { f ∈ F(I, ⋃_{i∈I} A_i) | f (i) ∈ A_i for all i ∈ I}.
If all the sets A_i are equal to a single set A, the product ∏_{i∈I} A_i is denoted by A^I.

Theorem 4.3.2. Let I be a non-empty set, and let {A_i}_{i∈I} be a family of non-empty sets indexed by I. Then ∏_{i∈I} A_i ≠ ∅.
Definition 4.3.3. Let I be a non-empty set, and let {G_i}_{i∈I} be a family of non-empty groups indexed by I. Let f, g ∈ ∏_{i∈I} G_i.

1. Let fg : I → ⋃_{i∈I} G_i be defined by (fg)(i) = f (i)g(i) for all i ∈ I.

2. Let f′ : I → ⋃_{i∈I} G_i be defined by (f′)(i) = [ f (i)]^{-1} for all i ∈ I.

3. Let e : I → ⋃_{i∈I} G_i be defined by e(i) = e_{G_i} for all i ∈ I.
Lemma 4.3.4. Let I be a non-empty set, and let {G_i}_{i∈I} be a family of non-empty groups indexed by I. Let f, g ∈ ∏_{i∈I} G_i.

1. fg ∈ ∏_{i∈I} G_i.

2. f′ ∈ ∏_{i∈I} G_i.

3. e ∈ ∏_{i∈I} G_i.

Lemma 4.3.5. Let I be a non-empty set, and let {G_i}_{i∈I} be a family of non-empty groups indexed by I. Then ∏_{i∈I} G_i is a group.

Proof. We will show the Associative Law; the other properties are similar. Let f, g, h ∈ ∏_{i∈I} G_i. Let i ∈ I. Then
[(fg)h](i) = [(fg)(i)]h(i) = [ f (i)g(i)]h(i) = f (i)[g(i)h(i)] = f (i)[(gh)(i)] = [ f (gh)](i).
Hence (fg)h = f (gh).
4.4 Cosets
Fraleigh, 7th ed. Section 10
Gallian, 8th ed. Section 7
Judson, 2013 Section 6.1, 6.2
Definition 4.4.1. Let G be a group and let H ≤ G. Let ∼_L and ∼_R be the relations on G defined by a ∼_L b if and only if a^{-1}b ∈ H for all a, b ∈ G, and a ∼_R b if and only if ab^{-1} ∈ H for all a, b ∈ G.

Lemma 4.4.2. Let G be a group and let H ≤ G. The relations ∼_L and ∼_R are equivalence relations on G.

Definition 4.4.3. Let G be a group, let H ≤ G and let a ∈ G. Let aH and Ha be defined by aH = {ah | h ∈ H} and Ha = {ha | h ∈ H}.

Lemma 4.4.4. Let G be a group, let H ≤ G and let a ∈ G.

1. The equivalence class of a with respect to ∼_L is aH.

2. The equivalence class of a with respect to ∼_R is Ha.

Proof. With respect to ∼_L, we have
[a] = {b ∈ G | a ∼_L b} = {b ∈ G | a^{-1}b ∈ H}
    = {b ∈ G | a^{-1}b = h for some h ∈ H}
    = {b ∈ G | b = ah for some h ∈ H} = {ah | h ∈ H} = aH.
The proof for ∼_R is similar.

Definition 4.4.5. Let G be a group, let H ≤ G and let a ∈ G. The left coset of a (with respect to H) is the set aH. The right coset of a (with respect to H) is the set Ha.
Lemma 4.4.6. Let G be a group, let H ≤ G and let a, b ∈ G.

1. aH = bH if and only if a^{-1}b ∈ H.

2. Ha = Hb if and only if ab^{-1} ∈ H.

3. aH = H if and only if a ∈ H.

4. Ha = H if and only if a ∈ H.

Proof. This lemma follows immediately from the definition of ∼_L and ∼_R and Lemma 4.4.4.
Lemma 4.4.7. Let G be a group and let H ≤ G.

1. All left cosets of G with respect to H and all right cosets of G with respect to H have the same cardinality as H.

2. The family of all left cosets of G with respect to H has the same cardinality as the family of all right cosets of G with respect to H.
Definition 4.4.8. Let G be a group, and let H ≤ G. The index of H in G, denoted (G : H), is the number of left cosets of G with respect to H.

Theorem 4.4.9. Let G be a group and let H ≤ G. Suppose that G is finite. Then |G| = |H| · (G : H).

Corollary 4.4.10 (Lagrange's Theorem). Let G be a group and let H ≤ G. Suppose that G is finite. Then |H| divides |G|.

Corollary 4.4.11. Let G be a group and let a ∈ G. Suppose that G is finite. Then |a| divides |G|.

Corollary 4.4.12. Let G be a group. If |G| is a prime number, then G is cyclic.

Corollary 4.4.13. Let p ∈ N be a prime number. The only group of order p, up to isomorphism, is Z_p.

Theorem 4.4.14. Let G be a group and let K ≤ H ≤ G. Suppose that (G : H) and (H : K) are finite. Then (G : K) is finite, and (G : K) = (G : H) · (H : K).
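The following Python sketch (ours, not part of the text) lists the left cosets of a subgroup of Z_12 and checks the count against Theorem 4.4.9.

    n = 12
    G = set(range(n))                       # Z_12 under addition mod 12
    H = {0, 4, 8}                           # the subgroup generated by [4]

    cosets = {frozenset((a + h) % n for h in H) for a in G}
    for c in sorted(map(sorted, cosets)):
        print(c)
    # Four cosets of size 3; indeed |G| = |H| * (G : H) = 3 * 4, as in Theorem 4.4.9.
    print(len(cosets) * len(H) == len(G))   # True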
Exercises
Exercise 4.4.1. Find all cosets of the group 2Z with respect to the subgroup 4Z.

Exercise 4.4.2. Find all cosets of the group Z_12 with respect to the subgroup ⟨[3]⟩.

Exercise 4.4.3. Find (Z_24 : ⟨[4]⟩).
Exercise 4.4.4. Let G be a group, and let p, q ∈ N be prime numbers. Suppose that |G| = pq. Prove that every proper subgroup of G is cyclic.

Exercise 4.4.5. Let G be a group, and let H ≤ G. Prove that there is a bijective map from the set of all left cosets of G with respect to H to the set of all right cosets of G with respect to H. (Note: the group G is not necessarily finite.)

Exercise 4.4.6. Prove Theorem 4.4.14.

Exercise 4.4.7. Let G be a group, and let H ≤ G. Suppose that G is finite, and that (G : H) = 2. Prove that every left coset of G with respect to H is a right coset of G with respect to H.

Exercise 4.4.8. Let G be a group. Suppose that G is finite. Let n = |G|. Prove that if g ∈ G, then g^n = e_G.
4.5 Quotient Groups
Fraleigh, 7th ed. Section 14, 15
Gallian, 8th ed. Section 9
Judson, 2013 Section 10.1
Lemma 4.5.1. Let G be a group and let H ≤ G. The formula (aH)(bH) = (ab)H for all a, b ∈ G gives a well-defined binary operation on the set of all left cosets of G with respect to H if and only if gH = Hg for all g ∈ G.

Lemma 4.5.2. Let G be a group and let H ≤ G. The following are equivalent.

a. gHg^{-1} ⊆ H for all g ∈ G.

b. gHg^{-1} = H for all g ∈ G.

c. gH = Hg for all g ∈ G.

Definition 4.5.3. Let G be a group and let H ≤ G. The subgroup H is normal if any of the conditions in Lemma 4.5.2 are satisfied. If H is a normal subgroup of G, it is denoted H ⊴ G.

Definition 4.5.4. Let G be a group and let H ≤ G. The set of all left cosets of G with respect to H is denoted G/H.

Lemma 4.5.5. Let G be a group and let H ⊴ G. The set G/H with the binary operation given by (aH)(bH) = (ab)H for all a, b ∈ G is a group.

Definition 4.5.6. Let G be a group and let H ⊴ G. The set G/H with the binary operation given by (aH)(bH) = (ab)H for all a, b ∈ G is the quotient group of G by H.

Corollary 4.5.7. Let G be a group and let H ⊴ G. Suppose that G is finite. Then |G/H| = (G : H) = |G|/|H|.

Proof. This corollary follows immediately from Theorem 4.4.9.

Lemma 4.5.8. Let G be a group and let H ⊴ G. Suppose that G is abelian. Then G/H is abelian.

Lemma 4.5.9. Let G be a group and let H ⊴ G. Suppose that G is cyclic. Then G/H is cyclic.

Lemma 4.5.10. Let G be a group and let H ≤ G. If (G : H) = 2, then H ⊴ G.

Lemma 4.5.11. Let G be a group and let H ≤ G. Suppose that G is finite. If |H| = (1/2)|G|, then H ⊴ G.

Lemma 4.5.12. Let H and K be groups. Let G = H × K.

1. H × {e_K} ⊴ G and {e_H} × K ⊴ G.

2. G/(H × {e_K}) ≅ K and G/({e_H} × K) ≅ H.

Lemma 4.5.13. Let G be a group, and let H, K ≤ G. Suppose that the following properties hold.

(i) HK = G.

(ii) H ∩ K = {e}.

Then H, K ⊴ G if and only if hk = kh for all h ∈ H and k ∈ K.
Definition 4.5.14. Let G be a group. The group G is simple if it has no non-trivial proper normal subgroups.
Exercises
Exercise 4.5.1. Compute each of the following quotient groups; state what the group is in the form given by the Fundamental Theorem of Finitely Generated Abelian Groups.

(1) (Z_2 × Z_4)/⟨(0, 2)⟩.

(2) (Z_2 × Z_4)/⟨(1, 2)⟩.

(3) (Z_4 × Z_4 × Z_8)/⟨(1, 2, 4)⟩.

(4) (Z × Z)/⟨(0, 1)⟩.

(5) (Z × Z × Z)/⟨(1, 1, 1)⟩.
Exercise 4.5.2. Let G be a group and let H ⊴ G. Suppose that (G : H) is finite. Let m = (G : H). Prove that if g ∈ G, then g^m ∈ H.

Exercise 4.5.3. Let G be a group, and let {H_i}_{i∈I} be a family of normal subgroups of G indexed by I. Prove that ⋂_{i∈I} H_i ⊴ G.

Exercise 4.5.4. Let G be a group and let S ⊆ G.

(1) Prove that {H ⊴ G | S ⊆ H} ≠ ∅.

(2) Prove that ⋂ {H ⊴ G | S ⊆ H} ⊴ G.

(3) Prove that if K ⊴ G and S ⊆ K, then ⋂ {H ⊴ G | S ⊆ H} ⊆ K.

The normal subgroup generated by S, which is defined by ⋂ {H ⊴ G | S ⊆ H}, is the smallest normal subgroup of G that contains S.
Exercise 4.5.5. Let G be a group. A commutator in G is an element of G that can be expressed in the form aba^{-1}b^{-1} for some a, b ∈ G. The commutator subgroup of G is the smallest normal subgroup of G that contains all the commutators in G; such a subgroup exists by Exercise 4.5.4.
Let C denote the commutator subgroup of G. Prove that G/C is abelian.

Exercise 4.5.6. Let G be a group and let H ≤ G. Suppose that no other subgroup of G has the same cardinality as H. Prove that H ⊴ G.

Exercise 4.5.7. Let G be a group, let H ≤ G, and let N ⊴ G.

(1) Prove that H ∩ N ⊴ H.

(2) Is H ∩ N a normal subgroup of G? Give a proof or a counterexample.

Exercise 4.5.8. Let G be a group, and let m ∈ N. Suppose that G has a subgroup of order m. Let K = ⋂ {H ≤ G | |H| = m}. Prove that K ⊴ G.
5
Homomorphisms
5.1 Homomorphisms
Fraleigh, 7th ed. Section 13
Gallian, 8th ed. Section 10
Judson, 2013 Section 11.1
Definition 5.1.1. Let (G, ∗) and (H, ⋄) be groups, and let f : G → H be a function. The function f is a homomorphism (sometimes called a group homomorphism) if f(a ∗ b) = f(a) ⋄ f(b) for all a, b ∈ G.
Theorem 5.1.2. Let G, H be groups, and let f : G → H be a homomorphism.
1. f(e_G) = e_H.
2. If a ∈ G, then f(a^{-1}) = [f(a)]^{-1}, where the first inverse is in G, and the second is in H.
3. If A ≤ G, then f(A) ≤ H.
4. If B ≤ H, then f^{-1}(B) ≤ G.
Theorem 5.1.3. Let G, H and K be groups, and let f : G → H and j : H → K be homomorphisms. Then j ∘ f is a homomorphism.
Lemma 5.1.4. Let G and H be groups. Suppose that G is cyclic with generator a. If b H,
there is a unique homomorphism f : G H such that f (a) = b.
Denition 5.1.5. Let G be a group. An endomorphism of G is a homomorphism G G.
An automorphism of G is an isomorphism G G.
Definition 5.1.6. Let G be a group. An automorphism f : G → G is an inner automorphism if there is some g ∈ G such that f(x) = gxg^{-1} for all x ∈ G.
Lemma 5.1.7. Let G be a group, and let H ≤ G. Then H ⊴ G if and only if f(H) = H for all inner automorphisms f of G.
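Before the exercises, here is a small numerical check (a Python sketch, not part of the formal text; the choice n = 7 is arbitrary) of the properties in Theorem 5.1.2 for the reduction homomorphism f : (Z, +) → (Z_n, +), f(x) = x mod n.

n = 7
def f(x):
    return x % n

samples = range(-20, 21)
assert f(0) == 0                                   # f(e_G) = e_H
for a in samples:
    assert f(-a) == (-f(a)) % n                    # f(a^{-1}) = [f(a)]^{-1}
    for b in samples:
        assert f(a + b) == (f(a) + f(b)) % n       # the homomorphism property
print("checks passed")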
Exercises
Exercise 5.1.1. Which of the following functions are homomorphisms? Which of the ho-
momorphisms are isomorphisms? The groups under consideration are (R, +), and (Q, +),
and ((o, ), ).
(1) Let f : Q(o, ) be dened by f (x) =
x
for all x Q.
(2) Let k: (o, ) (o, ) be dened by k(x) = x

for all x (o, ).


(3) Let m: R R be dened by m(x) = x + for all x R.
5.1 Homomorphisms 47
(4) Let g: (o, ) R be dened by g(x) = lnx for all x (o, ).
(5) Let h: R R be dened by h(x) = |x| for all x R.
Exercise 5.1.2. Prove that the function det : GL_2(R) → R − {0} is a homomorphism, where the binary operation for both groups is multiplication.
Exercise 5.1.3.
(1) Let j : Z

be dened by j([x]) = [x] for all [x] Z

, where the two appearances


of [x] in the denition of j refer to elements in different groups. Is this function
well-dened? If it is well-dened, is it a homomorphism?
(2) Let k: Z
6
Z

be dened by k([x]) = [x] for all [x] Z


6
. Is this function well-
dened? If it is well-dened, is it a homomorphism?
(3) Can you find criteria on n, m ∈ N that will determine when the function r : Z_n → Z_m defined by r([x]) = [x] for all [x] ∈ Z_n is well-defined and is a homomorphism? Prove your claim.
Exercise 5.1.4. Let G, H be groups. Prove that the projection maps π_1 : G × H → G and π_2 : G × H → H are homomorphisms.
Exercise 5.1.5. Let G be a group, and let f : G → G be defined by f(x) = x^{-1} for all x ∈ G. Is f a homomorphism? Give a proof or a counterexample.
Exercise 5.1.6. Let G, H be groups, and let f, k : G → H be homomorphisms. Suppose that G = ⟨S⟩ for some S ⊆ G. Prove that if f(a) = k(a) for all a ∈ S, then f = k.
48 5. Homomorphisms
5.2 Kernel and Image
Fraleigh, 7th ed. Section 13, 14
Gallian, 8th ed. Section 10
Judson, 2013 Section 11.1, 11.2
Definition 5.2.1. Let G and H be groups, and let f : G → H be a homomorphism.
1. The kernel of f, denoted ker f, is the set ker f = f^{-1}({e_H}).
2. The image of f, denoted im f, is the set im f = f(G).
Remark 5.2.2. Observe that
ker f = {g ∈ G | f(g) = e_H}
and
im f = {h ∈ H | h = f(g) for some g ∈ G}.
Lemma 5.2.3. Let G, H be groups, and let f : G → H be a homomorphism.
1. ker f ≤ G.
2. im f ≤ H.
Proof. Straightforward, using Theorem 5.1.2.
Lemma 5.2.4. Let G, H be groups, and let f : G → H be a homomorphism. Then ker f ⊴ G.
Theorem 5.2.5. Let G and H be groups, and let f : G → H be a homomorphism. The function f is injective if and only if ker f = {e_G}.
Proof. Suppose that f is injective. Because f(e_G) = e_H by Theorem 5.1.2 (1), it follows from the injectivity of f that ker f = f^{-1}({e_H}) = {e_G}.
Now suppose that ker f = {e_G}. Let a, b ∈ G, and suppose that f(a) = f(b). By Theorem 5.1.2 (2) and the definition of homomorphisms we see that
f(ba^{-1}) = f(b) f(a^{-1}) = f(a) [f(a)]^{-1} = e_H.
It follows that ba^{-1} ∈ f^{-1}({e_H}) = ker f. Because ker f = {e_G}, we deduce that ba^{-1} = e_G. A similar calculation shows that a^{-1}b = e_G. By Lemma 2.1.3 we deduce that (a^{-1})^{-1} = b, and therefore by Theorem 2.3.1 (4) we see that b = a. Hence f is injective.
Lemma 5.2.6. Let G, H be groups, and let f : G → H be a homomorphism. Let h ∈ H. If a ∈ f^{-1}({h}), then f^{-1}({h}) = a(ker f).
5.2 Kernel and Image 49
Definition 5.2.7. Let G be a group and let N ⊴ G. The canonical map for G and N is the function π : G → G/N defined by π(g) = gN for all g ∈ G.
Lemma 5.2.8. Let G be a group and let N ⊴ G. The canonical map π : G → G/N is a surjective homomorphism, and ker π = N.
Theorem 5.2.9 (First Isomorphism Theorem). Let G, H be groups, and let f : G → H be a homomorphism. Then there is a unique isomorphism g : G/ker f → im f such that f = g ∘ π, where π : G → G/ker f is the canonical map.
Remark 5.2.10. The condition f = g ∘ π in the First Isomorphism Theorem is represented by a commutative diagram (discuss what that means): G maps to im f ⊆ H via f, G maps to G/ker f via π, and G/ker f maps to im f via g.
Corollary 5.2.11. Let G, H be groups, and let f : G → H be a homomorphism.
1. G/ker f ≅ im f.
2. If f is surjective, then G/ker f ≅ H.
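The following Python sketch (an illustration only; the map is my own example) computes the kernel and image of the homomorphism f : Z_12 → Z_12 given by f(x) = 3x mod 12, and checks the numerical consequence of the First Isomorphism Theorem, |Z_12/ker f| = |im f|.

G = list(range(12))
def f(x):
    return (3 * x) % 12

kernel = [x for x in G if f(x) == 0]     # [0, 4, 8]
image  = sorted({f(x) for x in G})       # [0, 3, 6, 9]
print(kernel, image)
assert len(G) // len(kernel) == len(image)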
Exercises
Exercise 5.2.1. Find the kernel of each of the following homomorphisms.
(1) Let f : Z Z
1
be the unique homomorphism determined by f (1) = [1o].
(2) Let g: ZZ Z be the unique homomorphism determined by g((1, o)) = : and
g((o, 1)) = .
Exercise 5.2.2. In Exercise 5.1.3, you found criteria on n, m ∈ N that determine when the function r : Z_n → Z_m defined by r([x]) = [x] for all [x] ∈ Z_n is well-defined and is a homomorphism. Find the kernel for those functions that are well-defined and are homomorphisms.
Exercise 5.2.3. Let G, H be groups, and let f : G → H be a homomorphism. Prove that im f is abelian if and only if aba^{-1}b^{-1} ∈ ker f for all a, b ∈ G.
Exercise 5.2.4. Let G be a group, and let g ∈ G. Let p : Z → G be defined by p(n) = g^n for all n ∈ Z. Find ker p and im p.
50 5. Homomorphisms
6
Applications of Groups
52 6. Applications of Groups
6.1 Group Actions
Fraleigh, 7th ed. Section 16, 17
Gallian, 8th ed. Section 29
Judson, 2013 Section 14.114.3
Definition 6.1.1. Let X be a set and let G be a group. An action of G on X is a function G × X → X, written (g, x) ↦ gx, that satisfies the following two conditions.
1. e_G x = x for all x ∈ X.
2. (ab)x = a(bx) for all x ∈ X and a, b ∈ G.
Lemma 6.1.2. Let X be a set and let G be a group.
1. Suppose that G acts on X. Let Φ : G → S_X be defined by Φ(g)(x) = gx for all g ∈ G and x ∈ X. Then Φ is well-defined and is a homomorphism.
2. Let Φ : G → S_X be a homomorphism. Define G × X → X by gx = Φ(g)(x) for all g ∈ G and x ∈ X. Then this function is an action of G on X.
Denition 6.1.3. Let X be a set and let G be a group. The set X is a G-set if there is an
action of G on X.
Definition 6.1.4. Let X be a set and let G be a group. Suppose that X is a G-set. Let g ∈ G. The fixed set of g, denoted X^g, is the set X^g = {x ∈ X | gx = x}.
Definition 6.1.5. Let X be a set and let G be a group. Suppose that X is a G-set.
1. The group G acts faithfully on X if X^g ≠ X for all g ∈ G − {e_G}.
2. The group G acts transitively on X if for each x, y ∈ X there is some g ∈ G such that gx = y.
Definition 6.1.6. Let X be a set and let G be a group. Suppose that X is a G-set. Let x ∈ X. The isotropy subgroup of x (also called the stabilizer of x), denoted G_x, is the set G_x = {g ∈ G | gx = x}.
Lemma 6.1.7. Let X be a set and let G be a group. Suppose that X is a G-set. Let x ∈ X. Then G_x ≤ G.
Definition 6.1.8. Let X be a set and let G be a group. Suppose that X is a G-set. Let ∼ be the relation on X defined by x ∼ y if and only if there is some g ∈ G such that gx = y, for all x, y ∈ X.
Lemma 6.1.9. Let X be a set and let G be a group. Suppose that X is a G-set. The relation ∼ is an equivalence relation on X.
6.1 Group Actions 53
Denition 6.1.10. Let X be a set and let G be a group. Suppose that X is a G-set. Let x X.
The orbit of x (with respect to G), denoted Gx, is the set Gx = {gx | g G}.
Lemma 6.1.11. Let X be a set and let G be a group. Suppose that X is a G-set. Let x ∈ X.
1. Suppose Gx is finite. Then |Gx| = (G : G_x).
2. Suppose G is finite. Then |G| = |Gx| · |G_x|.
Theorem 6.1.12 (Burnside's Formula). Let X be a set and let G be a group. Suppose that X and G are finite, and that X is a G-set. Let r be the number of orbits in X with respect to G. Then
r · |G| = Σ_{g∈G} |X^g|.
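As a computational sketch (not part of the text; the example is my own), the following Python code counts 2-colourings of the four corners of a square up to rotation, once by listing orbits directly and once via Burnside's Formula; both give 6.

from itertools import product

rotations = [lambda x, k=k: tuple(x[(i - k) % 4] for i in range(4)) for k in range(4)]
X = list(product([0, 1], repeat=4))      # all 16 colourings

# Direct orbit count.
seen, orbits = set(), 0
for x in X:
    if x not in seen:
        orbits += 1
        seen.update(g(x) for g in rotations)

# Burnside: r * |G| = sum over g of |X^g|.
fixed_total = sum(sum(1 for x in X if g(x) == x) for g in rotations)
assert orbits * len(rotations) == fixed_total
print(orbits, fixed_total // len(rotations))   # 6 6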
Exercises
Exercise 6.1.1. An action of (R, +) on the plane R^2 is obtained by assigning to each θ ∈ R the rotation of R^2 about the origin counterclockwise by angle θ. Let P ∈ R^2. Suppose that P is not the origin.
(1) Prove that R^2 is an R-set.
(2) Describe the orbit RP geometrically.
(3) Find the isotropy subgroup R_P.
Exercise 6.1.2. Let X be a set and let G be a group. Suppose that X is a G-set. Prove that G acts faithfully on X if and only if for each g, h ∈ G such that g ≠ h, there is some x ∈ X such that gx ≠ hx.
Exercise 6.1.3. Let X be a set, let Y ⊆ X, and let G be a group. Suppose that X is a G-set. Let G_Y = {g ∈ G | gy = y for all y ∈ Y}. Prove that G_Y ≤ G.
Exercise 6.1.4. The four faces of a tetrahedral die are labeled with 1, 2, 3 and 4 dots, respectively. How many different tetrahedral dice can be made?
Exercise 6.1.5. Each face of a cube is painted with one of eight colors; no two faces can
have the same color. How many different cubes can be made?
Exercise 6.1.6. Each corner of a cube is painted with one of four colors; different corners
may have the same color. How many different cubes can be made?
54 6. Applications of Groups
7
Rings and Fields
56 7. Rings and Fields
7.1 Rings
Fraleigh, 7th ed. Section 18
Gallian, 8th ed. Section 12, 13
Judson, 2013 Section 16.1, 16.2
Definition 7.1.1. Let A be a set, and let + and · be binary operations on A.
1. The binary operations + and · satisfy the Left Distributive Law (an alternative expression is that · is left distributive over +) if a · (b + c) = (a · b) + (a · c) for all a, b, c ∈ A.
2. The binary operations + and · satisfy the Right Distributive Law (an alternative expression is that · is right distributive over +) if (b + c) · a = (b · a) + (c · a) for all a, b, c ∈ A.
Definition 7.1.2. Let R be a non-empty set, and let + and · be binary operations on R. The triple (R, +, ·) is a ring if the following three properties hold.
(i) (R, +) is an abelian group.
(ii) The binary operation · is associative.
(iii) The binary operation · is left distributive and right distributive over +.
Lemma 7.1.3. Let (R, +, ·) be a ring, and let a, b ∈ R.
1. 0 · a = 0 and a · 0 = 0.
2. a(−b) = (−a)b = −(ab).
3. (−a)(−b) = ab.
Denition 7.1.4. Let (R, +, ) be a ring.
1. The ring R is commutative if the binary operation satises the Commutative Law.
2. The ring R is a ring with unity if the binary operation has an identity element
(usually denoted 1).
Lemma 7.1.5. Let (R, +, ·) be a ring with unity. Then 0 = 1 if and only if R = {0}.
Denition 7.1.6. Let (R, +, ) be a ring with unity, and let a R. Then a is a unit if there
is some b R such that ab = 1 and ba = 1.
Lemma 7.1.7. Let (R, +, ) be a ring with unity, and let a R.
1. The unity is unique.
7.1 Rings 57
2. If there is some b R such that ab = 1 and ba = 1, then b is unique.
Lemma 7.1.8. Let (R, +, ·) be a ring with unity, and let a, b, c ∈ R.
1. If 0 ≠ 1, then 0 is not a unit.
2. If c is a unit and ca = cb, then a = b.
3. If c is a unit and ac = bc, then a = b.
4. If a is a unit, then a^{-1} is a unit and (a^{-1})^{-1} = a.
5. If a and b are units, then ab is a unit and (ab)^{-1} = b^{-1}a^{-1}.
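A brief computational aside (a Python sketch, with n = 12 an arbitrary choice): find the units of the ring Z_n by brute force and spot-check the closure properties in Lemma 7.1.8 (4) and (5).

n = 12
units = [a for a in range(n) if any((a * b) % n == 1 for b in range(n))]
print(units)                                   # [1, 5, 7, 11]
for a in units:
    inv = next(b for b in range(n) if (a * b) % n == 1)
    assert inv in units                        # a^{-1} is a unit
    for b in units:
        assert (a * b) % n in units            # ab is a unit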
Definition 7.1.9. Let (R, +, ·) be a ring.
1. The ring R is an integral domain if it is a commutative ring with unity, if 0 ≠ 1, and if ab = 0 implies a = 0 or b = 0 for all a, b ∈ R.
2. The ring R is a division ring (also called a skew field) if it is a ring with unity, if 0 ≠ 1, and if every non-zero element is a unit.
3. The ring R is a field if it is a commutative ring with unity, if 0 ≠ 1, and if every non-zero element is a unit.
Lemma 7.1.10. Let (R, +, ·) be a ring.
1. If R is a division ring, then it is an integral domain.
2. If R is a field, it is a division ring.
Proof. Part (2) is evident. For Part (1), suppose that R is a division ring. Let a, b ∈ R. Suppose ab = 0. Suppose further that a ≠ 0. Then a is a unit. Then a^{-1}(ab) = a^{-1} · 0. Using the Associative Law, Inverses Law and Identity Law for ·, together with Lemma 7.1.3 (1), we deduce that b = 0.
Exercises
Exercise 7.1.1. For each of the following sets with two binary operations, state whether
the set with binary operations are a ring, whether it is a commutative ring, whether it is a
ring with unity, and whether it is a eld.
(1) The set N, with the standard addition and multiplication.
(2) Let n N. The set nZ, with the standard addition and multiplication.
(3) The set ZZ, with the standard addition and multiplication on each component.
58 7. Rings and Fields
(4) The set 2Z × Z, with the standard addition and multiplication on each component.
(5) The set {a + b√2 | a, b ∈ Z}, with the standard addition and multiplication.
(6) The set {a + b√2 | a, b ∈ Q}, with the standard addition and multiplication.
Exercise 7.1.2. Let (R, +, ) be a ring with unity. Let U be the set of all the units of R.
Prove that (U, ) is a group.
Exercise 7.1.3. Let (R, +, ·) be a ring. Prove that a^2 − b^2 = (a + b)(a − b) for all a, b ∈ R if and only if R is commutative.
Exercise 7.1.4. Let (G, +) be an abelian group. Let a binary operation ∗ on G be defined by a ∗ b = 0 for all a, b ∈ G, where 0 is the identity element for +. Prove that (G, +, ∗) is a ring.
Exercise 7.1.5. Let (R, +, ·) be a ring. An element a ∈ R is idempotent if a^2 = a. Suppose that R is commutative. Let P be the set of all the idempotent elements of R. Prove that P is closed under multiplication.
Exercise 7.1.6. Let (R, +, ·) be a ring. An element a ∈ R is nilpotent if a^n = 0 for some n ∈ N. Suppose that R is commutative. Let c, d ∈ R. Prove that if c and d are nilpotent, then c + d is nilpotent.
Exercise 7.1.7. Let (R, +, ·) be a ring. The ring R is a Boolean Ring if a^2 = a for all a ∈ R (that is, if every element of R is idempotent). Prove that if R is a Boolean ring, then R is commutative.
Exercise 7.1.8. Let A be a set. Let P(A) denote the power set of A. Let binary operations + and · on P(A) be defined by
X + Y = (X ∪ Y) − (X ∩ Y) and X · Y = X ∩ Y
for all X, Y ∈ P(A). Prove that (P(A), +, ·) is a Boolean ring (as defined in Exercise 7.1.7); make sure to prove first that it is a ring.
7.2 Polynomials 59
7.2 Polynomials
Fraleigh, 7th ed. Section 22
Gallian, 8th ed. Section 16
Judson, 2013 Section 17.1
Definition 7.2.1. Let R be a ring. The set of polynomials over R, denoted R[x], is the set
R[x] = {f : N ∪ {0} → R | there is some N ∈ N ∪ {0} such that f(i) = 0 for all i ∈ N ∪ {0} such that i > N}.
Definition 7.2.2. Let R be a ring.
1. Let 0 : N ∪ {0} → R be defined by 0(i) = 0 for all i ∈ N ∪ {0}.
2. Suppose that R has a unity. Let 1 : N ∪ {0} → R be defined by 1(0) = 1 and 1(i) = 0 for all i ∈ N.
Definition 7.2.3. Let R be a ring, and let f ∈ R[x]. Suppose that f ≠ 0. The degree of f, denoted deg f, is the smallest N ∈ N ∪ {0} such that f(i) = 0 for all i ∈ N ∪ {0} such that i > N.
Definition 7.2.4. Let R be a ring, and let f, g ∈ R[x]. Let f + g, f g, −f : N ∪ {0} → R be defined by (f + g)(i) = f(i) + g(i), and (f g)(i) = Σ_{k=0}^{i} f(k)g(i − k), and (−f)(i) = −f(i) for all i ∈ N ∪ {0}.
Lemma 7.2.5. Let R be a ring, and let f, g ∈ R[x].
1. f + g, f g, −f ∈ R[x].
2. deg(f + g) ≤ max{deg f, deg g}.
3. deg(f g) ≤ deg f + deg g. If R is an integral domain, then deg(f g) = deg f + deg g.
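As an informal illustration of Definition 7.2.4 and Lemma 7.2.5 (3), the Python sketch below stores a polynomial as a finite list of coefficients (index i holding f(i)) and shows that the degree of a product can drop when R is not an integral domain: in Z_4[x], (2x)(2x) = 4x^2 = 0.

def add(f, g, R):
    get = lambda h, i: h[i] if i < len(h) else 0
    return [R(get(f, i) + get(g, i)) for i in range(max(len(f), len(g)))]

def mul(f, g, R):
    h = [R(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for k, b in enumerate(g):
            h[i + k] = R(h[i + k] + a * b)   # discrete convolution: (fg)(i) = sum_k f(k)g(i-k)
    return h

def deg(f):
    nz = [i for i, a in enumerate(f) if a != 0]
    return max(nz) if nz else None           # degree of the zero polynomial left undefined

Z  = lambda a: a        # the integral domain Z
Z4 = lambda a: a % 4    # the ring Z_4
print(add([1, 2, 3], [3, 2], Z4))            # [0, 0, 3]
print(deg(mul([0, 2], [0, 2], Z)))           # 2 = deg f + deg g over Z
print(deg(mul([0, 2], [0, 2], Z4)))          # None: the product is zero in Z_4[x]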
Lemma 7.2.6. Let R be a ring.
1. (R[x], +, ) is a ring.
2. If R is commutative, then R[x] is commutative.
3. If R has a unity, then R[x] has a unity.
4. If R is an integral domain, then R[x] is an integral domain.
Definition 7.2.7. Let R be a ring, let f ∈ R[x] and let r ∈ R. Let r f : N ∪ {0} → R be defined by (r f)(i) = r f(i) for all i ∈ N ∪ {0}.
Lemma 7.2.8. Let R be a ring, let f ∈ R[x] and let r ∈ R. Then r f ∈ R[x].
60 7. Rings and Fields
Definition 7.2.9. Let R be an integral domain. Let x : N ∪ {0} → R be defined by x(1) = 1 and x(i) = 0 for all i ∈ N ∪ {0} − {1}.
Lemma 7.2.10. Let R be an integral domain, let f ∈ R[x], and let n ∈ N. If f ≠ 0, suppose that n ≥ deg f. Then there are unique a_0, a_1, ..., a_n ∈ R such that f = a_0 1 + a_1 x + ··· + a_n x^n.
Definition 7.2.11. Let R be a commutative ring with unity, let S ⊆ R be a commutative subring with identity, and let α ∈ R. The evaluation map with respect to α is the function φ_α : S[x] → R defined by
φ_α(a_0 1 + a_1 x + ··· + a_n x^n) = a_0 1 + a_1 α + ··· + a_n α^n
for all a_0 1 + a_1 x + ··· + a_n x^n ∈ S[x].
Definition 7.2.12. Let R be a commutative ring with unity, let S ⊆ R be a commutative subring with identity, and let f ∈ S[x]. The polynomial function induced by f is the function f̄ : R → R defined by f̄(α) = φ_α(f) for all α ∈ R.
Definition 7.2.13. Let R be a commutative ring with unity, let S ⊆ R be a commutative subring with identity, and let f ∈ S[x]. A zero of f is any α ∈ R such that f̄(α) = 0.
Exercises
Exercise 7.2.1. List all the polynomials in Z

[x] that have degree less than or equal to :.


Exercise 7.2.2. Find the units in each of the following rings.
(1) Z[x].
(2) Z

[x].
Exercise 7.2.3. Let D be an integral domain. Describe the units in D[x].
8
Vector Spaces
62 8. Vector Spaces
8.1 Fields
Definition 8.1.1. Let F be a non-empty set, and let + and · be binary operations on F. The triple (F, +, ·) is a field if the following four properties hold:
(i) (F, +) is an abelian group.
(ii) (F − {0}, ·) is an abelian group.
(iii) The binary operation · is left distributive and right distributive over +.
(iv) 0 ≠ 1.
Lemma 8.1.2. Let F be a field, and let x, y, z ∈ F.
1. 0 is unique.
2. 1 is unique.
3. −x is unique.
4. If x ≠ 0, then x^{-1} is unique.
5. x + y = x + z implies y = z.
6. If x ≠ 0, then x · y = x · z implies y = z.
7. x · 0 = 0.
8. −(−x) = x.
9. If x ≠ 0, then (x^{-1})^{-1} = x.
10. (−x) · y = x · (−y) = −(x · y).
11. (−x) · (−y) = x · y.
12. 0 has no multiplicative inverse.
13. xy = 0 if and only if x = 0 or y = 0.
8.2 Vector Spaces 63
8.2 Vector Spaces
Friedberg-Insel-Spence, 4th ed. Section 1.2
Definition 8.2.1. Let F be a field. A vector space (also called a linear space) over F is a set V with a binary operation + : V × V → V and scalar multiplication F × V → V that satisfy the following properties. Let x, y, z ∈ V and let a, b ∈ F.
1. (x + y) + z = x + (y + z).
2. x + y = y + x.
3. There is an element 0 ∈ V such that x + 0 = x.
4. There is an element −x ∈ V such that x + (−x) = 0.
5. 1x = x.
6. (ab)x = a(bx).
7. a(x + y) = ax + ay.
8. (a + b)x = ax + bx.
Definition 8.2.2. Let F be a field, and let m, n ∈ N. The set of all m × n matrices with entries in F is denoted M_{m×n}(F). An element A ∈ M_{m×n}(F) is abbreviated by the notation A = [a_{ij}].
Definition 8.2.3. Let F be a field, and let m, n ∈ N.
1. The m × n zero matrix is the matrix O_{m×n} defined by O_{m×n} = [c_{ij}], where c_{ij} = 0 for all i ∈ {1, ..., m} and j ∈ {1, ..., n}.
2. The n × n identity matrix is the matrix I_n defined by I_n = [δ_{ij}], where δ_{ij} = 1 if i = j and δ_{ij} = 0 if i ≠ j, for all i, j ∈ {1, ..., n}.
Definition 8.2.4. Let F be a field, and let m, n ∈ N. Let A, B ∈ M_{m×n}(F), and let c ∈ F. Suppose that A = [a_{ij}] and B = [b_{ij}].
1. The matrix A + B ∈ M_{m×n}(F) is defined by A + B = [c_{ij}], where c_{ij} = a_{ij} + b_{ij} for all i ∈ {1, ..., m} and j ∈ {1, ..., n}.
2. The matrix −A ∈ M_{m×n}(F) is defined by −A = [d_{ij}], where d_{ij} = −a_{ij} for all i ∈ {1, ..., m} and j ∈ {1, ..., n}.
3. The matrix cA ∈ M_{m×n}(F) is defined by cA = [s_{ij}], where s_{ij} = ca_{ij} for all i ∈ {1, ..., m} and j ∈ {1, ..., n}.
Lemma 8.2.5. Let F be a field, and let m, n ∈ N. Let A, B, C ∈ M_{m×n}(F), and let s, t ∈ F.
1. A + (B + C) = (A + B) + C.
2. A + B = B + A.
3. A + O_{m×n} = A and O_{m×n} + A = A.
4. A + (−A) = O_{m×n} and (−A) + A = O_{m×n}.
5. 1A = A.
6. (st)A = s(tA).
7. s(A + B) = sA + sB.
8. (s + t)A = sA + tA.
Corollary 8.2.6. Let F be a field, and let m, n ∈ N. Then M_{m×n}(F) is a vector space over F.
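As a quick informal check of a few of the identities in Lemma 8.2.5 (a Python sketch; the matrices are my own example, over Q with exact arithmetic):

from fractions import Fraction as Fr

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * a for a in row] for row in A]

A = [[Fr(1), Fr(2), Fr(3)], [Fr(0), Fr(-1), Fr(5)]]
B = [[Fr(2), Fr(0), Fr(1)], [Fr(7), Fr(1), Fr(-2)]]
s, t = Fr(3, 2), Fr(-2)

assert madd(A, B) == madd(B, A)                          # item 2
assert smul(s * t, A) == smul(s, smul(t, A))             # item 6
assert smul(s + t, A) == madd(smul(s, A), smul(t, A))    # item 8
print("identities hold on this example")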
Lemma 8.2.7. Let V be a vector space over a field F. Let x, y, z ∈ V and let a ∈ F.
1. 0 is unique.
2. −x is unique.
3. x + y = x + z implies y = z.
4. −(x + y) = (−x) + (−y).
5. 0x = 0.
6. a0 = 0.
7. (−a)x = a(−x) = −(ax).
8. (−1)x = −x.
9. ax = 0 if and only if a = 0 or x = 0.
Proof. Parts (1), (2), (3) and (4) are immediate, using the fact (V, +) is an abelian group,
and the fact that we have proved these properties for groups.
The remaining parts of this lemma are left to the reader in Exercise 8.2.1.
8.2 Vector Spaces 65
Exercises
Exercise 8.2.1. Prove Lemma 8.2.7 (5), (6), (7), (8) and (9).
Exercise 8.2.2. Let V, W be vector spaces over a eld F. Dene addition and scalar multi-
plication on V W as follows. For each (v, w), (x, y) V W and c F, let
(v, w) +(x, y) = (v +x, w+y) and c(v, w) = (cv, cw).
Prove that V W is a vector space over F with these operations. This vector space is called
the product vector space of V and W.
Exercise 8.2.3. Let F be a eld, and let S be a non-empty set. Let F(S, F) be the set of all
functions S F. Dene addition and scalar multiplication on F(S, F) as follows. For each
f , g F(S, F) and c F, let f +g, c f F(S, F) be dened by ( f +g)(x) = f (x) +g(x)
and (c f )(x) = c f (x) for all x S.
Prove that F(S, F) is a vector space over F with these operations.
66 8. Vector Spaces
8.3 Subspaces
Friedberg-Insel-Spence, 4th ed. Section 1.3
Denition 8.3.1. Let V be a vector space over a eld F, and let W V. The subset W is
closed under scalar multiplication by F if av W for all v W and a F.
Denition 8.3.2. Let V be a vector space over a eld F, and let W V. The subset W is a
subspace of V if the following three conditions hold.
1. W is closed under +.
2. W is closed under scalar multiplication by F.
3. W is a vector space over F.
Lemma 8.3.3. Let V be a vector space over a eld F, and let W V be a subspace.
1. The additive identity element of V is in W, and it is the additive identity element of
W.
2. The additive inverse operation in W is the same as the additive inverse operation in
V.
Proof. Because (V, +) is an abelian group, this result is the same as Lemma 2.4.2.
Lemma 8.3.4. Let V be a vector space over a eld F, and let W V. Then W is a subspace
of V if and only if the following three conditions hold.
1. W ,= / 0.
2. W is closed under +.
3. W is closed under scalar multiplication by F.
Proof. First, suppose that W is a subspace of V. Then W ≠ ∅, because 0 ∈ W, and hence Property (1) holds. Properties (2) and (3) hold by definition.
Second, suppose that Properties (1), (2) and (3) hold. To show that W is a subspace of V, we need to show that W is a vector space over F. We know that + is associative and commutative with respect to all the elements of V, so it certainly is associative and commutative with respect to the elements of W.
Because W ≠ ∅, there is some x ∈ W. Then −x = (−1)x by Lemma 8.2.7 (8). It follows from Property (3) that −x ∈ W, and by Property (2) we deduce that 0 = x + (−x) ∈ W. Hence Parts (1), (2), (3) and (4) of Definition 8.2.1 hold for W. Parts (5), (6), (7) and (8) of that definition immediately hold for W because they hold for V.
8.3 Subspaces 67
Lemma 8.3.5. Let V be a vector space over a field F, and let U ⊆ W ⊆ V be subsets. If U is a subspace of W, and W is a subspace of V, then U is a subspace of V.
Proof. Straightforward.
Lemma 8.3.6. Let V be a vector space over a field F, and let {W_i}_{i∈I} be a family of subspaces of V indexed by I. Then ∩_{i∈I} W_i is a subspace of V.
Proof. Note that 0 ∈ W_i for all i ∈ I by Lemma 8.3.3. Hence 0 ∈ ∩_{i∈I} W_i. Therefore ∩_{i∈I} W_i ≠ ∅.
Let x, y ∈ ∩_{i∈I} W_i and let a ∈ F. Let k ∈ I. Then x, y ∈ W_k, so x + y ∈ W_k and ax ∈ W_k. Therefore x + y ∈ ∩_{i∈I} W_i and ax ∈ ∩_{i∈I} W_i. Therefore ∩_{i∈I} W_i is a subspace of V by Lemma 8.3.4.
Exercises
Exercise 8.3.1. Let
W = {(x, y, z) ∈ R^3 | x + y + z = 0}.
Prove that W is a subspace of R^3.
Exercise 8.3.2. Let F be a eld, and let S be a non-empty set. Let F(S, F) be as dened in
Exercise 8.2.3. Let C(S, F) be dened by
C(S, F) = { f F(S, F) | f (s) = 0 for all but a nite number of elements s S}.
Prove that C(S, F) is a subspace of F(S, F).
Exercise 8.3.3. Let V be a vector space over a eld F, and let W V. Prove that W is a
subspace of V if and only if the following conditions hold.
1. W ,= / 0.
2. If x, y W and a F, then ax +y W.
Exercise 8.3.4. Let V be a vector space over a field F, and let W ⊆ V be a subspace. Let w_1, ..., w_n ∈ W and a_1, ..., a_n ∈ F. Prove that a_1 w_1 + ··· + a_n w_n ∈ W.
Exercise 8.3.5. Let V be a vector space over a eld F, and let S, T V. The sum of S and
T is the subset of V dened by
S+T = {s +t | s S and t T}.
Let X,Y V be subspaces.
(1) Prove that X X +Y and Y X +Y.
(2) Prove that X +Y is a subspace of V.
(3) Prove that if W is a subspace of V such that X W and Y W, then X +Y W.
68 8. Vector Spaces
8.4 Linear Combinations
Friedberg-Insel-Spence, 4th ed. Section 1.4
Definition 8.4.1. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset. Let v ∈ V. The vector v is a linear combination of vectors of S if
v = a_1 v_1 + a_2 v_2 + ··· + a_n v_n
for some n ∈ N and some v_1, v_2, ..., v_n ∈ S and a_1, a_2, ..., a_n ∈ F.
Definition 8.4.2. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset. The span of S, denoted span(S), is the set of all linear combinations of the vectors in S. Also, let span(∅) = {0}.
Lemma 8.4.3. Let V be a vector space over a field F, and let S ⊆ V be a non-empty subset.
1. S ⊆ span(S).
2. span(S) is a subspace of V.
3. If W ⊆ V is a subspace and S ⊆ W, then span(S) ⊆ W.
4. span(S) = ∩{U ⊆ V | U is a subspace of V and S ⊆ U}.
Proof. Straightforward.
Denition 8.4.4. Let V be a vector space over a eld F, and let S V be a non-empty
subset. The set S spans (also generates) V if span(S) =V.
Remark 8.4.5. There is a standard strategy for showing that a set S spans V, as follows.
Proof. Let v ∈ V.
... (argumentation) ...
Let v_1, ..., v_n ∈ S and a_1, ..., a_n ∈ F be defined by ...
... (argumentation) ...
Then v = a_1 v_1 + ··· + a_n v_n. Hence S spans V.
In the above strategy, if S is finite, then we can take v_1, ..., v_n to be all of S.
Exercises
8.4 Linear Combinations 69
Exercise 8.4.1. Using only the denition of spanning, prove that {[
1
:
] , [

]} spans R
:
.
Exercise 8.4.2. Let V be a vector space over a eld F, and let W V. Prove that W is a
subspace of V if and only if span(W) =W.
Exercise 8.4.3. Let V be a vector space over a eld F, and let S V. Prove that
span(span(S)) = span(S).
Exercise 8.4.4. Let V be a vector space over a eld F, and let S, T V. Suppose that
S T.
(1) Prove that span(S) span(T).
(2) Prove that if span(S) =V, then span(T) =V.
Exercise 8.4.5. Let V be a vector space over a eld F, and let S, T V.
(1) Prove that span(ST) span(S) span(T).
(2) Give an example of subsets S, T R
:
such that S and T are non-empty, not equal to
each other, and span(ST) =span(S)span(T). A proof is not needed; it sufces to
state what each of S, T, ST, span(S), span(T), span(ST) and span(S) span(T)
are.
(3) Give an example of subsets S, T R
:
such that S and T are non-empty, not equal to
each other, and span(ST) span(S)span(T). A proof is not needed; it sufces to
state what each of S, T, ST, span(S), span(T), span(ST) and span(S) span(T)
are.
70 8. Vector Spaces
8.5 Linear Independence
Friedberg-Insel-Spence, 4th ed. Section 1.5
Definition 8.5.1. Let V be a vector space over a field F, and let S ⊆ V. The set S is linearly dependent if there are n ∈ N, distinct vectors v_1, v_2, ..., v_n ∈ S, and a_1, a_2, ..., a_n ∈ F that are not all 0, such that a_1 v_1 + ··· + a_n v_n = 0.
Lemma 8.5.2. Let V be a vector space over a eld F, and let S V. If 0 S, then S is
linearly dependent.
Proof. Observe that 1 0 = 0.
Lemma 8.5.3. Let V be a vector space over a field F, and let S ⊆ V. Suppose that S ≠ ∅ and S ≠ {0}. The following are equivalent.
a. S is linearly dependent.
b. There is some v ∈ S such that v ∈ span(S − {v}).
c. There is some v ∈ S such that span(S − {v}) = span(S).
Proof. Suppose S is linearly dependent. Then there are n ∈ N, distinct vectors v_1, ..., v_n ∈ S, and a_1, ..., a_n ∈ F not all 0, such that a_1 v_1 + ··· + a_n v_n = 0. Then there is some k ∈ {1, ..., n} such that a_k ≠ 0. Therefore
v_k = −(a_1/a_k) v_1 − ··· − (a_{k−1}/a_k) v_{k−1} − (a_{k+1}/a_k) v_{k+1} − ··· − (a_n/a_k) v_n.
Hence v_k ∈ span(S − {v_k}).
Suppose that there is some v ∈ S such that v ∈ span(S − {v}). Then there are p ∈ N, and w_1, w_2, ..., w_p ∈ S − {v} and c_1, c_2, ..., c_p ∈ F such that v = c_1 w_1 + ··· + c_p w_p.
By Exercise 8.4.4 (1) we know that span(S − {v}) ⊆ span(S).
Let x ∈ span(S). Then there are m ∈ N, and u_1, u_2, ..., u_m ∈ S and b_1, b_2, ..., b_m ∈ F such that x = b_1 u_1 + ··· + b_m u_m. First, suppose that v is not any of u_1, u_2, ..., u_m. Then clearly x ∈ span(S − {v}). Second, suppose that v is one of u_1, u_2, ..., u_m. Without loss of generality, suppose that v = u_1. Then
x = b_1(c_1 w_1 + ··· + c_p w_p) + b_2 u_2 + ··· + b_m u_m = b_1 c_1 w_1 + ··· + b_1 c_p w_p + b_2 u_2 + ··· + b_m u_m.
Hence x ∈ span(S − {v}). Putting the two cases together, we conclude that span(S) ⊆ span(S − {v}). Therefore span(S − {v}) = span(S).
Suppose that there is some w ∈ S such that span(S − {w}) = span(S). Because w ∈ S, then w ∈ span(S), and hence w ∈ span(S − {w}). Hence there are r ∈ N, and x_1, ..., x_r ∈ S − {w} and d_1, ..., d_r ∈ F such that w = d_1 x_1 + ··· + d_r x_r. Without loss of generality, we can assume that x_1, ..., x_r are distinct. Therefore
1 · w + (−d_1)x_1 + ··· + (−d_r)x_r = 0.
Because 1 ≠ 0, and because w, x_1, ..., x_r are distinct, we deduce that S is linearly dependent.
Denition 8.5.4. Let V be a vector space over a eld F, and let S V. The set S is linearly
independent if it is not linearly dependent.
Lemma 8.5.5. Let V be a vector space over a field F.
1. ∅ is linearly independent.
2. If v ∈ V and v ≠ 0, then {v} is linearly independent.
Proof. Straightforward.
Remark 8.5.6. There is a standard strategy for showing that a set S in a vector space is linearly independent, as follows.
Proof. Let v_1, ..., v_n ∈ S and a_1, ..., a_n ∈ F. Suppose that v_1, ..., v_n are distinct, and that a_1 v_1 + ··· + a_n v_n = 0.
... (argumentation) ...
Then a_1 = 0, ..., a_n = 0. Hence S is linearly independent.
In the above strategy, if S is finite, then we simply take v_1, ..., v_n to be all of S.
Lemma 8.5.7. Let V be a vector space over a field F, and let S_1 ⊆ S_2 ⊆ V.
1. If S_1 is linearly dependent, then S_2 is linearly dependent.
2. If S_2 is linearly independent, then S_1 is linearly independent.
Proof. Straightforward.
Lemma 8.5.8. Let V be a vector space over a field F, let S ⊆ V and let v ∈ V − S. Suppose that S is linearly independent. Then S ∪ {v} is linearly dependent if and only if v ∈ span(S).
Proof. Suppose that S ∪ {v} is linearly dependent. Then there are n ∈ N, and v_1, v_2, ..., v_n ∈ S ∪ {v} and a_1, a_2, ..., a_n ∈ F not all equal to zero such that a_1 v_1 + ··· + a_n v_n = 0. Because S is linearly independent, it must be the case that v is one of the vectors v_1, v_2, ..., v_n. Without loss of generality, assume v = v_1. It must be the case that a_1 ≠ 0, again because S is linearly independent. Then
v = −(a_2/a_1) v_2 − ··· − (a_n/a_1) v_n.
Because v_2, ..., v_n ∈ S, then v ∈ span(S).
Suppose that v ∈ span(S). Then v is a linear combination of the vectors of S, and hence v ∈ span((S ∪ {v}) − {v}). Thus S ∪ {v} is linearly dependent by Lemma 8.5.3.
Exercises
Exercise 8.5.1. Using only the denition of linear independence, prove that {x
:
+1, x
:
+
:x, x +} is a linearly independent subset of R
:
[x].
Exercise 8.5.2. Let V be a vector space over a eld F, and let u, v V. Suppose that u ,= v.
Prove that {u, v} is linearly dependent if and only if at least one of u or v is a multiple of the
other.
Exercise 8.5.3. Let V be a vector space over a field F, and let u_1, ..., u_n ∈ V. Prove that the set {u_1, ..., u_n} is linearly dependent if and only if u_1 = 0 or u_{k+1} ∈ span({u_1, ..., u_k}) for some k ∈ {1, ..., n − 1}.
8.6 Bases and Dimension 73
8.6 Bases and Dimension
Friedberg-Insel-Spence, 4th ed. Section 1.6
Denition 8.6.1. Let V be a vector space over a eld F, and let B V. The set B is a basis
for V if B is linearly independent and B spans V.
Theorem 8.6.2. Let V be a vector space over a field F, and let B ⊆ V.
1. The set B is a basis for V if and only if every vector in V can be written as a linear combination of vectors in B, where the set of vectors in B with non-zero coefficients in any such linear combination, together with their non-zero coefficients, are unique.
2. Suppose that B = {u_1, ..., u_n} for some n ∈ N and u_1, ..., u_n ∈ V. Then B is a basis for V if and only if for each vector v ∈ V, there are unique a_1, ..., a_n ∈ F such that v = a_1 u_1 + ··· + a_n u_n.
Proof.
(1). Suppose that B is a basis for V. Then B spans V, and hence every vector in V can be written as a linear combination of vectors in B. Let v ∈ V. Suppose that there are n, m ∈ N, and v_1, ..., v_n, u_1, ..., u_m ∈ B and a_1, ..., a_n, b_1, ..., b_m ∈ F such that
v = a_1 v_1 + a_2 v_2 + ··· + a_n v_n and v = b_1 u_1 + b_2 u_2 + ··· + b_m u_m.
Without loss of generality, suppose that n ≤ m. It might be the case that the sets {v_1, ..., v_n} and {u_1, ..., u_m} overlap. By renaming and reordering the vectors in these two sets appropriately, we may assume that {v_1, ..., v_n} and {u_1, ..., u_m} are both subsets of a set {z_1, ..., z_p} for some p ∈ N and z_1, ..., z_p ∈ B. It will then suffice to show that if
v = c_1 z_1 + c_2 z_2 + ··· + c_p z_p and v = d_1 z_1 + d_2 z_2 + ··· + d_p z_p    (8.6.1)
for some c_1, ..., c_p, d_1, ..., d_p ∈ F, then c_i = d_i for all i ∈ {1, ..., p}.
Suppose that Equation 8.6.1 holds. Then
(c_1 − d_1)z_1 + ··· + (c_p − d_p)z_p = 0.
Because B is linearly independent, it follows that c_i − d_i = 0 for all i ∈ {1, ..., p}. Hence c_i = d_i for all i ∈ {1, ..., p}, and in particular c_i = 0 if and only if d_i = 0. Hence every vector in V can be written as a linear combination of vectors in B, where the set of vectors in B with non-zero coefficients in any such linear combination, together with their non-zero coefficients, are unique.
Next, suppose that every vector in V can be written as a linear combination of vectors in B, where the set of vectors in B with non-zero coefficients in any such linear combination, together with their non-zero coefficients, are unique. Clearly B spans V. Suppose that there are n ∈ N, and v_1, ..., v_n ∈ B and a_1, ..., a_n ∈ F such that a_1 v_1 + a_2 v_2 + ··· + a_n v_n = 0. It is also the case that 0 · v_1 + 0 · v_2 + ··· + 0 · v_n = 0. By uniqueness, we deduce that a_i = 0 for all i ∈ {1, ..., n}. Hence B is linearly independent.
(2). This part of the theorem follows from the previous part.
Lemma 8.6.3. Let V be a vector space over a eld F, and let S V. The following are
equivalent.
a. S is a basis for V.
b. S is linearly independent, and is contained in no linearly independent subset of V
other than itself.
Proof. Suppose that S is a basis for V. Then S is linearly independent. Suppose that S ⊊ T for some linearly independent subset T ⊆ V. Let v ∈ T − S. Because S is a basis, then span(S) = V, and hence v ∈ span(S). It follows from Lemma 8.5.8 that S ∪ {v} is linearly dependent. It follows from Lemma 8.5.7 (1) that T is linearly dependent, a contradiction. Hence S is contained in no linearly independent subset of V other than itself.
Suppose that S is linearly independent, and is contained in no linearly independent subset of V other than itself. Let w ∈ V. First, suppose that w ∈ S. Then w ∈ span(S) by Lemma 8.4.3 (1). Second, suppose that w ∈ V − S. By the hypothesis on S we see that S ∪ {w} is linearly dependent. Using Lemma 8.5.8 we deduce that w ∈ span(S). Combining the two cases, it follows that V ⊆ span(S). By definition span(S) ⊆ V. Therefore span(S) = V, and hence S is a basis.
Theorem 8.6.4. Let V be a vector space over a eld F, and let S V. Suppose that S is
nite. If S spans V, then some subset of S is a basis for V.
Proof. Suppose that S spans V. If S is linearly independent then S is a basis for V. Now suppose that S is linearly dependent.
Case One: Suppose S = {0}. Then V = span(S) = {0}. This case is trivial because ∅ is a basis.
Case Two: Suppose S contains at least one non-zero vector. Let v_1 ∈ S be such that v_1 ≠ 0. Then {v_1} is linearly independent by Lemma 8.5.5. By adding one vector from S at a time, we obtain a linearly independent subset {v_1, ..., v_n} ⊆ S such that adding any more vectors from S would render the subset linearly dependent.
Let B = {v_1, ..., v_n}. Because S is finite and B ⊆ S, we can write S = {v_1, ..., v_n, v_{n+1}, ..., v_p} for some p ∈ N such that p ≥ n.
Let i ∈ {n + 1, ..., p}. Then by the construction of B we know that B ∪ {v_i} is linearly dependent. It follows from Lemma 8.5.8 that v_i ∈ span(B).
Let w ∈ V − B. Because S spans V, there are a_1, ..., a_p ∈ F such that w = a_1 v_1 + a_2 v_2 + ··· + a_p v_p. Because each of v_{n+1}, ..., v_p is a linear combination of the elements of B, it follows that w can be written as a linear combination of elements of B. We then use Lemma 8.5.3 (b) to deduce that B ∪ {w} is linearly dependent. It now follows from Lemma 8.6.3 that B is a basis.
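The proof of Theorem 8.6.4 is effectively an algorithm: walk through a finite spanning set and keep a vector only when it is not already a linear combination of the vectors kept so far. A rough Python sketch over Q (the spanning set is my own example, and membership in the span is tested via a rank computation):

from fractions import Fraction as Fr

def rank(rows):
    M = [list(map(Fr, r)) for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def extract_basis(S):
    basis = []
    for v in S:
        if rank(basis + [v]) > rank(basis):   # v is not in span(basis)
            basis.append(v)
    return basis

S = [(1, 2, 3), (2, 4, 6), (0, 1, 1), (1, 3, 4), (0, 0, 1)]
print(extract_basis(S))    # [(1, 2, 3), (0, 1, 1), (0, 0, 1)]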
8.6 Bases and Dimension 75
Theorem 8.6.5 (Replacement Theorem). Let V be a vector space over a field F, and let S, L ⊆ V. Suppose that S and L are finite sets. Suppose that S spans V, and that L is linearly independent.
1. |L| ≤ |S|.
2. There is a subset H ⊆ S such that |H| = |S| − |L|, and such that L ∪ H spans V.
Proof. Let m = |L| and n = |S|. We will show that this theorem holds by induction on m.
Base Case: Suppose m = 0. Then L = ∅ and m ≤ n. Let H = S. Then H and S have n − m = n − 0 = n elements, and L ∪ H = ∅ ∪ S = S, and so L ∪ H spans V.
Induction Step: Suppose the result is true for m, and suppose L has m + 1 vectors.
Suppose L = {v_1, ..., v_{m+1}}. Let L′ = {v_1, ..., v_m}. By Lemma 8.5.7 we know that L′ is linearly independent. Hence, by the inductive hypothesis, we know that m ≤ n and that there is a subset H′ ⊆ S such that H′ has n − m elements and L′ ∪ H′ spans V. Suppose H′ = {u_1, ..., u_{n−m}}. Because L′ ∪ H′ spans V, there are a_1, ..., a_m, b_1, ..., b_{n−m} ∈ F such that v_{m+1} = a_1 v_1 + ··· + a_m v_m + b_1 u_1 + ··· + b_{n−m} u_{n−m}. Because v_1, ..., v_{m+1} are linearly independent, then v_{m+1} is not a linear combination of {v_1, ..., v_m}. Hence n − m > 0 and not all b_1, ..., b_{n−m} are zero.
Because n − m > 0, then n > m, and therefore n ≥ m + 1.
Without loss of generality, assume b_1 ≠ 0. Then
u_1 = (1/b_1)v_{m+1} − (a_1/b_1)v_1 − ··· − (a_m/b_1)v_m − (b_2/b_1)u_2 − ··· − (b_{n−m}/b_1)u_{n−m}.
Let H = {u_2, ..., u_{n−m}}. Clearly H has n − (m + 1) elements. Then
L ∪ H = {v_1, ..., v_{m+1}, u_2, ..., u_{n−m}}.
We claim that L ∪ H spans V. Clearly, v_1, ..., v_m, u_2, ..., u_{n−m} ∈ span(L ∪ H). Also u_1 ∈ span(L ∪ H). Hence L′ ∪ H′ ⊆ span(L ∪ H). We know that span(L′ ∪ H′) = V, and hence by Exercise 8.4.4 (2) we see that span(span(L ∪ H)) = V. It follows from Exercise 8.4.3 that span(L ∪ H) = V.
Corollary 8.6.6. Let V be a vector space over a eld F. Suppose that V has a nite basis.
Then all bases of V are nite, and have the same number of vectors.
Proof. Let B be a finite basis for V. Let n = |B|. Let K be some other basis of V. Suppose that K has more elements than B. Then K has at least n + 1 elements (it could be that K is infinite). In particular, let C be a subset of K that has precisely n + 1 elements. Then C is linearly independent by Lemma 8.5.7. Because B spans V, then by Theorem 8.6.5 (1) we deduce that n + 1 ≤ n, which is a contradiction.
Next, suppose that K has fewer elements than B. Then K is finite. Let m = |K|. Then m < n. Because K spans V and B is linearly independent, then by Theorem 8.6.5 (1) we deduce that n ≤ m, which is a contradiction.
We conclude that K has the same number of vectors as B.
Denition 8.6.7. Let V be a vector space over a eld F. The vector space V is nite-
dimensional if V has a nite basis. The vector space V is innite-dimensional if V does
not have a nite basis. If V is nite-dimensional, the dimension of V, denoted dim(V), is
the number of elements in any basis.
Lemma 8.6.8. Let V be a vector space over a field F. Then dim(V) = 0 if and only if V = {0}.
Proof. By Lemma 8.5.5 (1) we know that ∅ is linearly independent. Using Definition 8.4.2 we see that dim(V) = 0 if and only if ∅ is a basis for V if and only if V = span(∅) if and only if V = {0}.
Corollary 8.6.9. Let V be a vector space over a field F, and let S ⊆ V. Suppose that V is finite-dimensional. Suppose that S is finite.
1. If S spans V, then |S| ≥ dim(V).
2. If S spans V and |S| = dim(V), then S is a basis.
3. If S is linearly independent, then |S| ≤ dim(V).
4. If S is linearly independent and |S| = dim(V), then S is a basis.
5. If S is linearly independent, then it can be extended to a basis.
Proof. We prove Parts (1) and (5), leaving the rest to the reader in Exercise 8.6.2.
Let n = dim(V).
(1). Suppose that S spans V. By Theorem 8.6.4 we know that there is some H ⊆ S such that H is a basis for V. Corollary 8.6.6 implies that |H| = n. It follows that |S| ≥ n.
(5). Suppose that S is linearly independent. Let B be a basis for V. Then |B| = n. Because B is a basis for V, then B spans V. By the Replacement Theorem (Theorem 8.6.5) there is a subset K ⊆ B such that |K| = |B| − |S|, and such that S ∪ K spans V. Note that |S ∪ K| = |B| = n. It follows from Part (2) of this corollary that S ∪ K is a basis. Therefore S can be extended to a basis.
Theorem 8.6.10. Let V be a vector space over a field F, and let W ⊆ V be a subspace. Suppose that V is finite-dimensional.
1. W is finite-dimensional.
2. dim(W) ≤ dim(V).
3. If dim(W) = dim(V), then W = V.
4. Any basis for W can be extended to a basis for V.
Proof. Let n = dim(V). We prove all four parts of the theorem together.
Case One: Suppose W = {0}. Then all four parts of the theorem hold.
Case Two: Suppose W ≠ {0}. Then there is some x_1 ∈ W such that x_1 ≠ 0. Note that {x_1} is linearly independent. It might be the case that there is some x_2 ∈ W such that {x_1, x_2} is linearly independent. Keep going, adding one vector at a time while maintaining linear independence. Because W ⊆ V, then there are at most n linearly independent vectors in W by Corollary 8.6.9 (3). Hence we can keep adding vectors until we get {x_1, ..., x_k} ⊆ W for some k ∈ N such that k ≤ n, where adding any other vector in W would render the set linearly dependent. By Lemma 8.6.3 we see that {x_1, ..., x_k} is a basis for W. Therefore W is finite-dimensional and dim(W) ≤ dim(V).
Now suppose dim(W) = dim(V). Then k = n and {x_1, ..., x_n} is a linearly independent set in V with n elements. By Corollary 8.6.9 (4), we know that {x_1, ..., x_n} is a basis for V. Then W = span({x_1, ..., x_n}) = V.
From Corollary 8.6.9 (5) we deduce that any basis for W, which is a linearly independent set in V, can be extended to a basis for V.
Exercises
Exercise 8.6.1. Let
W = {(x, y, z) ∈ R^3 | x + y + z = 0}.
It was proved in Exercise 8.3.1 that W is a subspace of R^3. What is dim(W)? Prove your answer.
Exercise 8.6.2. Prove Corollary 8.6.9 (2), (3) and (4).
Exercise 8.6.3. Let V be a vector space over a eld F, and let X,Y V be subspaces.
Suppose that X and Y are nite-dimensional. Find necessary and sufcient conditions on X
and Y so that dim(X Y) = dim(X).
Exercise 8.6.4. Let V, W be vector spaces over a eld F. Suppose that V and W are nite-
dimensional. Let V W be the product vector space, as dened in Exercise 8.2.2. Express
dim(V W) in terms of dim(V) and dim(W). Prove your answer.
Exercise 8.6.5. Let V be a vector space over a eld F, and let L S V. Suppose that S
spans V. Prove that the following are equivalent.
78 8. Vector Spaces
a. L is a basis for V.
b. L is linearly independent, and is contained in no linearly independent subset of S
other than itself.
8.7 Bases for Arbitrary Vector Spaces 79
8.7 Bases for Arbitrary Vector Spaces
Friedberg-Insel-Spence, 4th ed. Section 1.7
Definition 8.7.1. Let P be a non-empty family of sets, and let M ∈ P. The set M is a maximal element of P if there is no Q ∈ P such that M ⊊ Q.
Lemma 8.7.2. Let V be a vector space over a eld F. Let B be the family of all linearly
independent subsets of V. Let S B. Then S is a basis for V if and only if S is a maximal
element of B.
Proof. This lemma follows immediately from Lemma 8.6.3.
Definition 8.7.3. Let P be a non-empty family of sets, and let C ⊆ P. The family C is a chain if A, B ∈ C implies A ⊆ B or B ⊆ A.
Theorem 8.7.4 (Zorn's Lemma). Let P be a non-empty family of sets. Suppose that for each chain C in P, the set ∪_{C∈C} C is in P. Then P has a maximal element.
Theorem 8.7.5. Let V be a vector space over a eld F. Then V has a basis.
Proof. Let B be the family of all linearly independent subsets of V. We will show that B has a maximal element by using Zorn's Lemma (Theorem 8.7.4). The maximal element of B will be a basis for V by Lemma 8.7.2.
Because ∅ is a linearly independent subset of V, as stated in Lemma 8.5.5 (1), we see that ∅ ∈ B, and hence B is non-empty.
Let C be a chain in B. Let U = ∪_{C∈C} C. We need to show that U ∈ B. That is, we need to show that U is linearly independent. Let v_1, ..., v_n ∈ U and suppose a_1 v_1 + ··· + a_n v_n = 0 for some a_1, ..., a_n ∈ F. By the definition of union, we know that for each i ∈ {1, ..., n}, there is some C_i ∈ C such that v_i ∈ C_i. Because C is a chain, we know that for any two of C_1, ..., C_n, one contains the other. Hence we can find k ∈ {1, ..., n} such that C_i ⊆ C_k for all i ∈ {1, ..., n}. Hence v_1, ..., v_n ∈ C_k. Because C_k ∈ C ⊆ B, then C_k is linearly independent, and so a_1 v_1 + ··· + a_n v_n = 0 implies a_i = 0 for all i ∈ {1, ..., n}. Hence U is linearly independent, and therefore U ∈ B.
We have now seen that B satisfies the hypotheses of Zorn's Lemma, and by that lemma we deduce that B has a maximal element.
Exercises
Exercise 8.7.1. Let V be a vector space over a eld F, and let S V. Prove that if S spans
V, then some subset of S is a basis for V.
80 8. Vector Spaces
9
Linear Maps
82 9. Linear Maps
9.1 Linear Maps
Friedberg-Insel-Spence, 4th ed. Section 2.1
Denition 9.1.1. Let V,W be vector spaces over a eld F. Let f : V W be a function.
The function f is a linear map (also called linear transformation or vector space homo-
morphism) if
(i) f (x +y) = f (x) + f (y)
(ii) f (cx) = c f (x)
for all x, y V and c F
Lemma 9.1.2. Let V, W be vector spaces over a field F, and let f : V → W be a linear map.
1. f(0) = 0.
2. If x ∈ V, then f(−x) = −f(x).
Proof. We already proved this result about groups, because linear maps are group homomorphisms.
Lemma 9.1.3. Let V, W be vector spaces over a field F, and let f : V → W be a function. The following are equivalent.
a. f is a linear map.
b. f(cx + y) = c f(x) + f(y) for all x, y ∈ V and c ∈ F.
c. f(a_1 x_1 + ··· + a_n x_n) = a_1 f(x_1) + ··· + a_n f(x_n) for all x_1, ..., x_n ∈ V and a_1, ..., a_n ∈ F.
Proof. Showing (a) ⇒ (b) is straightforward. Showing (b) ⇒ (c) requires induction on n. Showing (c) ⇒ (a) is straightforward.
Lemma 9.1.4. Let V, W, Z be vector spaces over a field F, and let f : V → W and g : W → Z be linear maps.
1. The identity map 1_V : V → V is a linear map.
2. The function g ∘ f is a linear map.
Proof.
(1). This part is straightforward.
(2). Let x, y ∈ V and c ∈ F. Then
(g ∘ f)(x + y) = g(f(x + y)) = g(f(x) + f(y)) = g(f(x)) + g(f(y)) = (g ∘ f)(x) + (g ∘ f)(y)
and
(g ∘ f)(cx) = g(f(cx)) = g(c f(x)) = c g(f(x)) = c (g ∘ f)(x).
Lemma 9.1.5. Let V, W be vector spaces over a field F, and let f : V → W be a linear map.
1. If A is a subspace of V, then f(A) is a subspace of W.
2. If B is a subspace of W, then f^{-1}(B) is a subspace of V.
Proof. The proof of this result is similar to the proof of the analogous result for group homomorphisms.
Theorem 9.1.6. Let V, W be vector spaces over a field F.
1. Let B be a basis for V. Let g : B → W be a function. Then there is a unique linear map f : V → W such that f|_B = g.
2. Let {v_1, ..., v_n} be a basis for V, and let w_1, ..., w_n ∈ W. Then there is a unique linear map f : V → W such that f(v_i) = w_i for all i ∈ {1, ..., n}.
Proof. We prove Part (1); Part (2) follows immediately from Part (1).
Let v ∈ V. Then by Theorem 8.6.2 (1) we know that v can be written as v = a_1 x_1 + ··· + a_n x_n for some x_1, ..., x_n ∈ B and a_1, ..., a_n ∈ F, where the set of vectors with non-zero coefficients, together with their non-zero coefficients, is unique. Then define f(v) = a_1 g(x_1) + ··· + a_n g(x_n). If v is written in two different ways as a linear combination of elements of B, then the uniqueness of the vectors in B with non-zero coefficients, together with their non-zero coefficients, implies that f(v) is well-defined.
Observe that if v ∈ B, then v = 1 · v is the unique way of expressing v as a linear combination of vectors in B, and therefore f(v) = 1 · g(v) = g(v). Hence f|_B = g.
Let v, w ∈ V and let c ∈ F. Then we can write v = a_1 x_1 + ··· + a_n x_n and w = b_1 x_1 + ··· + b_n x_n where x_1, ..., x_n ∈ B and a_1, ..., a_n, b_1, ..., b_n ∈ F. Then v + w = Σ_{i=1}^{n} (a_i + b_i)x_i, and hence
f(v + w) = Σ_{i=1}^{n} (a_i + b_i)g(x_i) = Σ_{i=1}^{n} a_i g(x_i) + Σ_{i=1}^{n} b_i g(x_i) = f(v) + f(w).
A similar proof shows that f(cv) = c f(v). Hence f is a linear map.
Let h : V → W be a linear map such that h|_B = g. Let v ∈ V. Then v = a_1 x_1 + ··· + a_n x_n for some x_1, ..., x_n ∈ B and a_1, ..., a_n ∈ F. Hence
h(v) = h(Σ_{i=1}^{n} a_i x_i) = Σ_{i=1}^{n} a_i h(x_i) = Σ_{i=1}^{n} a_i g(x_i) = f(v).
Therefore h = f. It follows that f is unique.
Corollary 9.1.7. Let V, W be vector spaces over a eld F, and let f , g: V W be linear
maps. Let B be a basis for V. Suppose that f (v) = g(v) for all v B. Then f = g.
Proof. Trivial.
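Theorem 9.1.6 (2) is also how one computes with linear maps in practice: the map is pinned down by the images of the basis vectors. The Python sketch below (the particular image vectors are my own illustration, not the data of the exercises) builds the unique linear map f : Q^2 → Q^3 with prescribed values on the standard basis and evaluates it by linearity.

from fractions import Fraction as Fr

w1 = (Fr(1), Fr(0), Fr(2))     # f(e_1)
w2 = (Fr(1), Fr(1), Fr(3))     # f(e_2)

def f(v):
    a1, a2 = v
    return tuple(a1 * x + a2 * y for x, y in zip(w1, w2))   # f(a1 e_1 + a2 e_2) = a1 w1 + a2 w2

print(f((Fr(8), Fr(11))))      # determined by linearity alone
# Linearity spot-check: f(c v + u) = c f(v) + f(u) for one sample.
c, v, u = Fr(3), (Fr(2), Fr(-1)), (Fr(5), Fr(4))
lhs = f((c * v[0] + u[0], c * v[1] + u[1]))
rhs = tuple(c * a + b for a, b in zip(f(v), f(u)))
assert lhs == rhs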
Exercises
Exercise 9.1.1. Prove that there exists a linear map f : R
:
R

such that f ([
1
1
]) =
_
1
o
:
_
and f ([
:

]) =
_
1
1

_
. What is f ([
8
11
])?
Exercise 9.1.2. Does there exist a linear map g: R

R
:
such that g(
_
1
o

_
) = [
1
1
] and
g(
_
:
o
6
_
) = [
:
1
]? Explain why or why not.
9.2 Kernel and Image 85
9.2 Kernel and Image
Friedberg-Insel-Spence, 4th ed. Section 2.1
Definition 9.2.1. Let V, W be vector spaces over a field F, and let f : V → W be a linear map.
1. The kernel (also called the null space) of f, denoted ker f, is the set ker f = f^{-1}({0}).
2. The image of f, denoted im f, is the set im f = f(V).
Remark 9.2.2. Observe that
ker f = {v V | f (v) = 0}
and
im f = {w W | w = f (v) for some v V}.
Lemma 9.2.3. Let V, W be vector spaces over a eld F, and let f : V W be a linear
map.
1. ker f is a subspace of V.
2. im f is a subspace of W.
Proof. Use Lemma 9.1.5.
Lemma 9.2.4. Let V, W be vector spaces over a eld F, and let f : V W be a linear
map. Then f is injective if and only if ker f = {0}.
Proof. Vector spaces are abelian groups, and linear maps are group homomorphisms, and
therefore this result follows from Theorem 5.2.5, which is the analogous result for group
homomorphisms.
Lemma 9.2.5. Let V, W be vector spaces over a field F, and let f : V → W be a linear map. Let w ∈ W. If a ∈ f^{-1}({w}), then f^{-1}({w}) = a + ker f.
Proof. Again, this result follows from Lemma 5.2.6, which is the analogous result for
group homomorphisms (though here we use additive notation instead of multiplicative no-
tation).
Lemma 9.2.6. Let V, W be vector spaces over a eld F, let f : V W be a linear map and
let B be a basis for V. Then im f = span( f (B)).
86 9. Linear Maps
Proof. Clearly f(B) ⊆ im f. By Lemma 9.2.3 (2) and Lemma 8.4.3 (3), we deduce that span(f(B)) ⊆ im f.
Let y ∈ im f. Then y = f(v) for some v ∈ V. Then v = a_1 v_1 + ··· + a_n v_n for some v_1, ..., v_n ∈ B and a_1, ..., a_n ∈ F. Then
y = f(v) = f(a_1 v_1 + ··· + a_n v_n) = a_1 f(v_1) + ··· + a_n f(v_n) ∈ span(f(B)).
Therefore im f ⊆ span(f(B)), and hence im f = span(f(B)).
Lemma 9.2.7. Let V, W be vector spaces over a eld F, and let f : V W be a linear
map. Suppose that V is nite-dimensional. Then ker f and im f are nite-dimensional.
Proof. By Lemma 9.2.3 (1) we know that ker f is a subspace of V, and hence ker f is
nite-dimensional by Theorem 8.6.10 (1).
Let B be a basis for V. By Corollary 8.6.6 we know that B is nite. Hence f (B) is nite.
By Lemma 9.2.6 we see that im f = span( f (B)). It follows from Theorem 8.6.4 that a
subset of f (B) is a basis for im f , which implies that im f is nite-dimensional.
Exercises
Exercise 9.2.1. Let V, W be vector spaces over a field F, and let f : V → W be a linear map. Let w_1, ..., w_k ∈ im f be linearly independent vectors. Let v_1, ..., v_k ∈ V be vectors such that f(v_i) = w_i for all i ∈ {1, ..., k}. Prove that v_1, ..., v_k are linearly independent.
Exercise 9.2.2. Let V and W be vector spaces over a eld F, and let f : V W be a linear
map.
(1) Prove that f is injective if and only if for every linearly independent subset S V,
the set f (S) is linearly independent.
(2) Suppose that f is injective. Let T ⊆ V. Prove that T is linearly independent if and only if f(T) is linearly independent.
(3) Suppose that f is bijective. Let B ⊆ V. Prove that B is a basis for V if and only if f(B) is a basis for W.
Exercise 9.2.3. Find an example of two linear maps f, g : R^2 → R^2 such that ker f = ker g and im f = im g, and none of these kernels and images is the trivial vector space, and f ≠ g.
9.3 Rank-Nullity Theorem 87
9.3 Rank-Nullity Theorem
Friedberg-Insel-Spence, 4th ed. Section 2.1
Denition 9.3.1. Let V, W be vector spaces over a eld F, and let f : V W be a linear
map.
1. If ker f is nite-dimensional, the nullity of f , denoted nullity( f ), is dened by
nullity( f ) = dim(ker f ).
2. If im f is nite-dimensional, the rank of f , denoted rank( f ), is dened by rank( f ) =
dim(im f ).
Theorem 9.3.2 (Rank-Nullity Theorem). Let V, W be vector spaces over a eld F, and
let f : V W be a linear map. Suppose that V is nite-dimensional. Then
nullity( f ) +rank( f ) = dim(V).
Proof. Let n = dim(V). By Lemma 9.2.3 (1) we know that ker f is a subspace of V, and hence ker f is finite-dimensional by Theorem 8.6.10 (1), and nullity(f) = dim(ker f) ≤ dim(V) by Theorem 8.6.10 (2). Let k = nullity(f). Then k ≤ n. Let {v_1, ..., v_k} be a basis for ker f. By Theorem 8.6.10 (4) {v_1, ..., v_k} can be extended to a basis {v_1, ..., v_n} for V. We will show that {f(v_{k+1}), ..., f(v_n)} is a basis for im f. It will then follow that rank(f) = n − k, which will prove the theorem.
By Lemma 9.2.6 we know that im f = span({f(v_1), ..., f(v_n)}). Note that v_1, ..., v_k ∈ ker f, and therefore f(v_1) = ··· = f(v_k) = 0. It follows that im f = span({f(v_{k+1}), ..., f(v_n)}).
Suppose b_{k+1} f(v_{k+1}) + ··· + b_n f(v_n) = 0 for some b_{k+1}, ..., b_n ∈ F. Hence f(b_{k+1} v_{k+1} + ··· + b_n v_n) = 0. Therefore b_{k+1} v_{k+1} + ··· + b_n v_n ∈ ker f. Because {v_1, ..., v_k} is a basis for ker f, then b_{k+1} v_{k+1} + ··· + b_n v_n = b_1 v_1 + ··· + b_k v_k for some b_1, ..., b_k ∈ F. Then b_1 v_1 + ··· + b_k v_k + (−b_{k+1})v_{k+1} + ··· + (−b_n)v_n = 0. Because {v_1, ..., v_n} is a basis for V, then b_1 = ··· = b_n = 0. Therefore f(v_{k+1}), ..., f(v_n) are linearly independent.
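A worked numerical instance of the Rank-Nullity Theorem (a Python sketch; the matrix is my own example): for f : Q^4 → Q^3 given by f(x) = Ax, where the third row of A is the sum of the first two, the rank is 2, a basis of the kernel found by hand has 2 elements, and 2 + 2 = dim(Q^4).

from fractions import Fraction as Fr

A = [[1, 0, 1, 2], [0, 1, 1, 1], [1, 1, 2, 3]]

def rank(rows):
    M = [list(map(Fr, r)) for r in rows]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def apply(A, x):
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

kernel_basis = [(-1, -1, 1, 0), (-2, -1, 0, 1)]         # found by solving Ax = 0 by hand
assert all(apply(A, k) == (0, 0, 0) for k in kernel_basis)
assert rank(kernel_basis) == len(kernel_basis)           # they are linearly independent
print(rank(A) + len(kernel_basis))                       # 2 + 2 = 4 = dim(Q^4)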
Corollary 9.3.3. Let V, W be vector spaces over a field F, and let f : V → W be a linear map. Suppose that V is finite-dimensional. Then rank(f) ≤ dim(V).
Proof. Trivial.
Corollary 9.3.4. Let V, W be vector spaces over a eld F, and let f : V W be a lin-
ear map. Suppose that V and W are nite-dimensional, and that dim(V) = dim(W). The
following are equivalent.
a. f is injective.
b. f is surjective
c. f is bijective.
88 9. Linear Maps
d. rank( f ) = dim(V).
Proof. Clearly (c) ⇒ (a), and (c) ⇒ (b). We will show below that (a) ⇔ (d), and (b) ⇔ (d). It will then follow that (a) ⇔ (b), and from that we will deduce that (a) ⇒ (c), and (b) ⇒ (c).
(a) ⇔ (d). By Lemma 9.2.4 we know that f is injective if and only if ker f = {0}. By Lemma 8.6.8 we deduce that f is injective if and only if dim(ker f) = 0, and by definition that is true if and only if nullity(f) = 0. By the Rank-Nullity Theorem (Theorem 9.3.2), we know that nullity(f) = dim(V) − rank(f). It follows that f is injective if and only if dim(V) − rank(f) = 0, which is the same as rank(f) = dim(V).
(b) ⇔ (d). By definition f is surjective if and only if im f = W. By Lemma 9.2.3 (2) we know that im f is a subspace of W. If im f = W then clearly dim(im f) = dim(W); by Theorem 8.6.10 (3) we know that if dim(im f) = dim(W) then im f = W. Hence f is surjective if and only if dim(im f) = dim(W), and by definition that is true if and only if rank(f) = dim(W). By hypothesis dim(W) = dim(V), and therefore f is surjective if and only if rank(f) = dim(V).
Corollary 9.3.5. Let V, W, Z be vector spaces over a field F, and let f : V → W and g : W → Z be linear maps. Suppose that V and W are finite-dimensional.
1. rank(g ∘ f) ≤ rank(g).
2. rank(g ∘ f) ≤ rank(f).
Proof.
(1). Observe that im(g ∘ f) = (g ∘ f)(V) = g(f(V)) ⊆ g(W) = im g. By Lemma 9.2.3 (2) we know that im(g ∘ f) and im g are subspaces of Z. It is straightforward to see that im(g ∘ f) is a subspace of im g. It follows from Theorem 8.6.10 (2) that rank(g ∘ f) = dim(im(g ∘ f)) ≤ dim(im g) = rank(g).
(2). By Corollary 9.3.3 we see that rank(g ∘ f) = dim(im(g ∘ f)) = dim((g ∘ f)(V)) = dim(g(f(V))) = dim(g|_{f(V)}(f(V))) = rank(g|_{f(V)}) ≤ dim(f(V)) = dim(im f) = rank(f).
Exercises
Exercise 9.3.1. Let V, W be vector spaces over a eld F, and let f : V W be a linear
map. Suppose that V and W are nite-dimensional.
(1) Prove that if dim(V) < dim(W), then f cannot be surjective.
(2) Prove that if dim(V) > dim(W), then f cannot be injective.
9.4 Isomorphisms 89
9.4 Isomorphisms
Friedberg-Insel-Spence, 4th ed. Section 2.4
Definition 9.4.1. Let V and W be vector spaces over a field F, and let f: V → W be a function. The function f is an isomorphism if f is bijective and is a linear map.

Definition 9.4.2. Let V, W be vector spaces over a field F. The vector spaces V and W are isomorphic if there is an isomorphism V → W.
Lemma 9.4.3. Let V, W be vector spaces over a field F, and let f: V → W be an isomorphism. Then f^{-1} is an isomorphism.

Proof. The proof of this result is similar to the proof of Theorem 2.2.4 (2), which is the analogous result for group isomorphisms.
Corollary 9.4.4. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional, and that dim(V) = dim(W). The following are equivalent.

a. f is injective.
b. f is surjective.
c. f is an isomorphism.
d. rank(f) = dim(V).
Lemma 9.4.5. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Let B be a basis for V. Then f is an isomorphism if and only if f(B) is a basis for W.

Proof. Suppose that f is an isomorphism. Let v_1, v_2, ..., v_n ∈ f(B) and a_1, a_2, ..., a_n ∈ F, and suppose that v_1, ..., v_n are distinct, and that a_1 v_1 + ... + a_n v_n = 0. There are w_1, ..., w_n ∈ B such that f(w_i) = v_i for all i ∈ {1, ..., n}. Clearly w_1, ..., w_n are distinct. Then a_1 f(w_1) + ... + a_n f(w_n) = 0. It follows that f(a_1 w_1 + ... + a_n w_n) = 0, which means that a_1 w_1 + ... + a_n w_n ∈ ker f. Because f is injective, then by Lemma 9.2.4 we know that ker f = {0}. Therefore a_1 w_1 + ... + a_n w_n = 0. Because {w_1, ..., w_n} ⊆ B, and because B is linearly independent, it follows from Lemma 8.5.7 (2) that {w_1, ..., w_n} is linearly independent. Hence a_1 = a_2 = ... = a_n = 0. We deduce that f(B) is linearly independent. Because f is surjective, we know that im f = W. It follows from Lemma 9.2.6 that span(f(B)) = W. We conclude that f(B) is a basis for W.

Suppose that f(B) is a basis for W. Then span(f(B)) = W, and by Lemma 9.2.6 we deduce that im f = W, which means that f is surjective. Let v ∈ ker f. Because B is a basis for V, there are m ∈ N, vectors u_1, ..., u_m ∈ B and c_1, ..., c_m ∈ F such that v = c_1 u_1 + ... + c_m u_m. Then f(c_1 u_1 + ... + c_m u_m) = 0, and hence c_1 f(u_1) + ... + c_m f(u_m) = 0. Because f(B) is linearly independent, it follows that c_1 = ... = c_m = 0. We deduce that v = 0. Therefore ker f = {0}. By Lemma 9.2.4 we conclude that f is injective.
Theorem 9.4.6. Let V, W be vector spaces over a field F. Then V and W are isomorphic if and only if there is a basis B of V and a basis C of W such that B and C have the same cardinality.

Proof. Suppose V and W are isomorphic. Let f: V → W be an isomorphism, and let D be a basis of V. Then by Lemma 9.4.5 we know that f(D) is a basis of W, and clearly D and f(D) have the same cardinality.

Suppose that there is a basis B of V and a basis C of W such that B and C have the same cardinality. Let g: B → C be a bijective map. Extend g to a linear map h: V → W by Theorem 9.1.6 (1). Then h(B) = C, so h(B) is a basis for W, and it follows by Lemma 9.4.5 that h is an isomorphism.
Corollary 9.4.7. Let V, W be vector spaces over a field F. Suppose that V and W are isomorphic. Then V is finite-dimensional if and only if W is finite-dimensional. If V and W are both finite-dimensional, then dim(V) = dim(W).

Proof. This result follows immediately from Theorem 9.4.6, because a vector space is finite-dimensional if and only if it has a finite basis, and the dimension of a finite-dimensional vector space is the cardinality of any basis of the vector space.
Corollary 9.4.8. Let V, W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Then V and W are isomorphic if and only if dim(V) = dim(W).

Proof. This result follows immediately from Theorem 9.4.6, because the dimension of a finite-dimensional vector space is the cardinality of any basis of the vector space.
Corollary 9.4.9. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let n = dim(V). Then V is isomorphic to F^n.

Proof. Observe that dim(F^n) = n. The result then follows immediately from Corollary 9.4.8.
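For example, let V be the vector space of all polynomials over R of degree at most 2. The set {1, x, x^2} is a basis for V, so dim(V) = 3, and Corollary 9.4.9 says that V is isomorphic to R^3; an explicit isomorphism sends a + bx + cx^2 to (a, b, c).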
Exercises
Exercise 9.4.1. Let V be a vector space over a field F. Suppose that V is non-trivial. Let B be a basis for V. Let C(B, F) be as defined in Exercise 8.3.2. It was seen in Exercise 8.3.2 that C(B, F) is a vector space over F. Let Φ: C(B, F) → V be defined by

Φ(f) = ∑_{v ∈ B, f(v) ≠ 0} f(v)v

for all f ∈ C(B, F). Prove that Φ is an isomorphism. Hence every non-trivial vector space can be viewed as a space of functions.
9.5 Spaces of Linear Maps
Friedberg-Insel-Spence, 4th ed. Section 2.2
Definition 9.5.1. Let V and W be vector spaces over a field F. The set of all linear maps V → W is denoted L(V,W). The set of all linear maps V → V is denoted L(V).
Definition 9.5.2. Let A be a set, let W be a vector space over a field F, let f, g: A → W be functions and let c ∈ F.

1. Let f + g: A → W be defined by (f + g)(x) = f(x) + g(x) for all x ∈ A.
2. Let -f: A → W be defined by (-f)(x) = -f(x) for all x ∈ A.
3. Let cf: A → W be defined by (cf)(x) = c f(x) for all x ∈ A.
4. Let 0: A → W be defined by 0(x) = 0 for all x ∈ A.
Lemma 9.5.3. Let V, W be vector spaces over a field F, let f, g: V → W be linear maps and let c ∈ F.

1. f + g is a linear map.
2. -f is a linear map.
3. cf is a linear map.
4. 0 is a linear map.
Proof. We prove Part (1); the other parts are similar, and are left to the reader.

(1). Let x, y ∈ V and let d ∈ F. Then

(f + g)(x + y) = f(x + y) + g(x + y) = [f(x) + f(y)] + [g(x) + g(y)] = [f(x) + g(x)] + [f(y) + g(y)] = (f + g)(x) + (f + g)(y)

and

(f + g)(dx) = f(dx) + g(dx) = df(x) + dg(x) = d[f(x) + g(x)] = d(f + g)(x).
Lemma 9.5.4. Let V, W be vector spaces over a field F. Then L(V,W) is a vector space over F.
Proof. We will show Property (7) in the definition of vector spaces; the other properties are similar. Let f, g ∈ L(V,W) and let a ∈ F. Let x ∈ V. Then

[a(f + g)](x) = a[(f + g)(x)] = a[f(x) + g(x)] = af(x) + ag(x) = (af)(x) + (ag)(x) = [af + ag](x).

Hence a(f + g) = af + ag.
Lemma 9.5.5. Let V, W, X, Z be vector spaces over a field F. Let f, g: V → W and k: X → V and h: W → Z be linear maps, and let c ∈ F.

1. (f + g) ∘ k = (f ∘ k) + (g ∘ k).
2. h ∘ (f + g) = (h ∘ f) + (h ∘ g).
3. c(h ∘ f) = (ch) ∘ f = h ∘ (cf).

Proof. We prove Part (1); the other parts are similar, and are left to the reader.

(1). Let x ∈ X. Then

[(f + g) ∘ k](x) = (f + g)(k(x)) = f(k(x)) + g(k(x)) = (f ∘ k)(x) + (g ∘ k)(x) = [(f ∘ k) + (g ∘ k)](x).

Hence (f + g) ∘ k = (f ∘ k) + (g ∘ k).
Theorem 9.5.6. Let V, W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Then L(V,W) is finite-dimensional, and dim(L(V,W)) = dim(V) · dim(W).

Proof. Let n = dim(V) and m = dim(W). Let {v_1, ..., v_n} be a basis for V, and let {w_1, ..., w_m} be a basis for W.

For each i ∈ {1, ..., n} and j ∈ {1, ..., m}, let e_{ij}: V → W be defined as follows. First, let e_{ij}(v_k) = w_j if k = i, and e_{ij}(v_k) = 0 if k ∈ {1, ..., n} and k ≠ i. Next, because {v_1, ..., v_n} is a basis for V, we can use Theorem 9.1.6 (2) to extend e_{ij} to a unique linear map V → W.

We claim that the set T = {e_{ij} | i ∈ {1, ..., n} and j ∈ {1, ..., m}} is a basis for L(V,W). Once we prove that claim, the result will follow, because T has nm elements.

Suppose that there is some a_{ij} ∈ F for each i ∈ {1, ..., n} and j ∈ {1, ..., m} such that

∑_{i=1}^{n} ∑_{j=1}^{m} a_{ij} e_{ij} = 0.

Let k ∈ {1, ..., n}. Then

∑_{i=1}^{n} ∑_{j=1}^{m} a_{ij} e_{ij}(v_k) = 0(v_k),

which implies that

∑_{j=1}^{m} a_{kj} w_j = 0.

Because {w_1, ..., w_m} is linearly independent, it follows that a_{kj} = 0 for all j ∈ {1, ..., m}. We deduce that a_{ij} = 0 for all i ∈ {1, ..., n} and j ∈ {1, ..., m}. Hence T is linearly independent.

Let f ∈ L(V,W). Let r ∈ {1, ..., n}. Then f(v_r) ∈ W. Because {w_1, ..., w_m} spans W, there is some c_{rj} ∈ F for each j ∈ {1, ..., m} such that f(v_r) = ∑_{j=1}^{m} c_{rj} w_j.

Observe that

∑_{i=1}^{n} ∑_{j=1}^{m} c_{ij} e_{ij}(v_r) = ∑_{j=1}^{m} c_{rj} w_j = f(v_r).

Hence f and ∑_{i=1}^{n} ∑_{j=1}^{m} c_{ij} e_{ij} agree on {v_1, ..., v_n}, and it follows from Corollary 9.1.7 that f = ∑_{i,j} c_{ij} e_{ij}. Hence T spans L(V,W), and we conclude that T is a basis for L(V,W).
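For example, if V = F^2 and W = F^3, then dim(L(F^2, F^3)) = 2 · 3 = 6. Concretely, with bases {v_1, v_2} of F^2 and {w_1, w_2, w_3} of F^3, the six maps e_{ij} (for i ∈ {1, 2} and j ∈ {1, 2, 3}) constructed in the proof form a basis of L(F^2, F^3): each e_{ij} sends v_i to w_j and sends the other basis vector of F^2 to 0, and every linear map F^2 → F^3 is a unique linear combination of these six maps.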
Exercises
Exercise 9.5.1. Let V, W be vector spaces over a field F, and let f, g: V → W be non-zero linear maps. Suppose that im f ∩ im g = {0}. Prove that {f, g} is a linearly independent subset of L(V,W).
Exercise 9.5.2. Let V, W be vector spaces over a field F, and let S ⊆ V. Let S^0 ⊆ L(V,W) be defined by

S^0 = {f ∈ L(V,W) | f(x) = 0 for all x ∈ S}.

(1) Prove that S^0 is a subspace of L(V,W).

(2) Let T ⊆ V. Prove that if S ⊆ T, then T^0 ⊆ S^0.

(3) Let X, Y ⊆ V be subspaces. Prove that (X + Y)^0 = X^0 ∩ Y^0. (See Exercise 8.3.5 for the definition of X + Y.)
10 Linear Maps and Matrices
10.1 Review of Matrices–Multiplication
Friedberg-Insel-Spence, 4th ed. Section 2.3
Definition 10.1.1. Let F be a field, and let m, n, p ∈ N. Let A ∈ M_{m×n}(F) and B ∈ M_{n×p}(F). Suppose that A = [a_{ij}] and B = [b_{ij}]. The matrix AB ∈ M_{m×p}(F) is defined by AB = [c_{ij}], where c_{ij} = ∑_{k=1}^{n} a_{ik} b_{kj} for all i ∈ {1, ..., m} and j ∈ {1, ..., p}.
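For example, over F = R take

A = ( 1 2 0 )        B = ( 1 1 )
    ( 0 1 1 ),           ( 0 2 )
                         ( 3 0 ),

so that A ∈ M_{2×3}(R) and B ∈ M_{3×2}(R). Then c_{11} = 1·1 + 2·0 + 0·3 = 1, c_{12} = 1·1 + 2·2 + 0·0 = 5, c_{21} = 0·1 + 1·0 + 1·3 = 3 and c_{22} = 0·1 + 1·2 + 1·0 = 2, so

AB = ( 1 5 )
     ( 3 2 ).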
Lemma 10.1.2. Let F be a field, and let m, n, p, q ∈ N. Let A ∈ M_{m×n}(F), let B ∈ M_{n×p}(F) and let C ∈ M_{p×q}(F).

1. A(BC) = (AB)C.
2. AI_n = A and I_m A = A.
Proof.

(1). Suppose that A = [a_{ij}] and B = [b_{ij}] and C = [c_{ij}], and AB = [s_{ij}] and BC = [t_{ij}] and A(BC) = [u_{ij}] and (AB)C = [w_{ij}]. Then s_{ij} = ∑_{k=1}^{n} a_{ik} b_{kj} for all i ∈ {1, ..., m} and j ∈ {1, ..., p}; and t_{ij} = ∑_{z=1}^{p} b_{iz} c_{zj} for all i ∈ {1, ..., n} and j ∈ {1, ..., q}. Then u_{ij} = ∑_{x=1}^{n} a_{ix} t_{xj} = ∑_{x=1}^{n} a_{ix} (∑_{z=1}^{p} b_{xz} c_{zj}) for all i ∈ {1, ..., m} and j ∈ {1, ..., q}; and w_{ij} = ∑_{y=1}^{p} s_{iy} c_{yj} = ∑_{y=1}^{p} (∑_{k=1}^{n} a_{ik} b_{ky}) c_{yj} for all i ∈ {1, ..., m} and j ∈ {1, ..., q}. Rearranging shows that u_{ij} = w_{ij} for all i ∈ {1, ..., m} and j ∈ {1, ..., q}.

(2). Straightforward.
Definition 10.1.3. Let F be a field, and let n ∈ N. Let A ∈ M_{n×n}(F). The matrix A is invertible if there is some B ∈ M_{n×n}(F) such that BA = I_n and AB = I_n. Such a matrix B is an inverse of A.
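For example, in M_{2×2}(R) the matrix

A = ( 1 1 )       has inverse       B = ( 1 -1 )
    ( 0 1 )                             ( 0  1 ),

since a direct computation gives AB = BA = I_2. By contrast, the 2×2 matrix with all four entries equal to 1 is not invertible: both of its rows are equal, so both rows of any product of it with another matrix are equal, and hence no such product can equal I_2.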
Lemma 10.1.4. Let F be a field, and let n ∈ N. Let A ∈ M_{n×n}(F). If A has an inverse, then the inverse is unique.

Proof. The proof is similar to that used for groups.
Definition 10.1.5. Let F be a field, and let n ∈ N. Let A ∈ M_{n×n}(F). If A has an inverse, then the inverse is denoted A^{-1}.
Lemma 10.1.6. Let F be a field, and let n ∈ N. Let A, B ∈ M_{n×n}(F). Suppose that A and B are invertible.

1. A^{-1} is invertible, and (A^{-1})^{-1} = A.
2. AB is invertible, and (AB)^{-1} = B^{-1}A^{-1}.

Proof. The proof is similar to that used for groups.
Definition 10.1.7. Let F be a field, and let n ∈ N. The set of all n×n invertible matrices with entries in F is denoted GL_n(F).

Corollary 10.1.8. Let F be a field, and let n ∈ N. Then (GL_n(F), ·) is a group.
Lemma 10.1.9. Let F be a field, and let m, n, p ∈ N. Let A, B ∈ M_{m×n}(F) and let C, D ∈ M_{n×p}(F). Then A(C + D) = AC + AD and (A + B)C = AC + BC.

Proof. Straightforward.
Corollary 10.1.10. Let F be a field, and let n ∈ N. Then (M_{n×n}(F), +, ·) is a ring with unity.
Exercises
Exercise 10.1.1. Let F be a field, and let n ∈ N. Let A, B ∈ M_{n×n}(F). The trace of A is defined by

tr A = ∑_{i=1}^{n} a_{ii},

where A = [a_{ij}]. Prove that tr(AB) = tr(BA).
10.2 Linear Maps Given by Matrix Multiplication
Friedberg-Insel-Spence, 4th ed. Section 2.3
Definition 10.2.1. Let F be a field, and let m, n ∈ N. Let A ∈ M_{m×n}(F). The linear map induced by A is the function L_A: F^n → F^m defined by L_A(v) = Av for all v ∈ F^n.
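For example, if A is the matrix

A = ( 1 2 0 )
    ( 0 1 1 )

in M_{2×3}(R), then L_A: R^3 → R^2 is given (writing elements of R^3 and R^2 as column vectors) by L_A(x, y, z) = (x + 2y, y + z).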
Lemma 10.2.2. Let F be a field, and let m, n, p ∈ N. Let A, B ∈ M_{m×n}(F), let C ∈ M_{n×p}(F), and let s ∈ F.

1. L_A is a linear map.
2. L_A = L_B if and only if A = B.
3. L_{A+B} = L_A + L_B.
4. L_{sA} = sL_A.
5. L_{AC} = L_A ∘ L_C.
6. Suppose m = n. Then L_{I_n} = 1_{F^n}.
Proof. Suppose that A = [a_{ij}] and B = [b_{ij}]. Let {e_1, ..., e_n} be the standard basis for F^n.

(1). Let v, w ∈ F^n. Then L_A(v + w) = A(v + w) = Av + Aw = L_A(v) + L_A(w), and L_A(sv) = A(sv) = s(Av) = sL_A(v).

(2). If A = B, then clearly L_A = L_B. Suppose L_A = L_B. Let j ∈ {1, ..., n}. Then L_A(e_j) = L_B(e_j), and hence Ae_j = Be_j, which means that the j-th column of A equals the j-th column of B. Hence A = B.

(3). Let v ∈ F^n. Then L_{A+B}(v) = (A + B)(v) = Av + Bv = L_A(v) + L_B(v). Hence L_{A+B} = L_A + L_B.

(4). The proof is similar to the proof of Part (3).

(5). Let {e_1, ..., e_p} be the standard basis for F^p, and let j ∈ {1, ..., p}. Then L_{AC}(e_j) = (AC)(e_j), and (L_A ∘ L_C)(e_j) = L_A(L_C(e_j)) = A(C(e_j)). Observe that (AC)(e_j) is the j-th column of AC, and that C(e_j) is the j-th column of C. However, the j-th column of AC is defined by A times the j-th column of C. Hence L_{AC}(e_j) = (L_A ∘ L_C)(e_j). Therefore L_{AC} and L_A ∘ L_C agree on a basis, and by Corollary 9.1.7 we deduce that L_{AC} = L_A ∘ L_C.

(6). Trivial.
Corollary 10.2.3. Let F be a field, and let m, n, p, q ∈ N. Let A ∈ M_{m×n}(F), let B ∈ M_{n×p}(F), and let C ∈ M_{p×q}(F). Then (AB)C = A(BC).
Proof. Using Lemma 10.2.2 (5) together with the associativity of the composition of functions, we see that

L_{A(BC)} = L_A ∘ L_{BC} = L_A ∘ (L_B ∘ L_C) = (L_A ∘ L_B) ∘ L_C = L_{AB} ∘ L_C = L_{(AB)C}.

By Lemma 10.2.2 (2) we deduce that A(BC) = (AB)C.
10.3 All Linear Maps F^n → F^m
Friedberg-Insel-Spence, 4th ed. Section 2.2
Lemma 10.3.1. Let F be a field. Let n, m ∈ N, and let f: F^n → F^m be a linear map. Then f = L_A, where A ∈ M_{m×n}(F) is the matrix that has columns f(e_1), ..., f(e_n).
Proof. Let i ∈ {1, ..., n}. Let (a_{1i}, ..., a_{mi})^T = f(e_i), written as a column vector.

Let v ∈ F^n. Then v = (x_1, ..., x_n)^T for some x_1, ..., x_n ∈ F. Then

f(v) = f(x_1 e_1 + ... + x_n e_n) = x_1 f(e_1) + ... + x_n f(e_n)
     = x_1 (a_{11}, ..., a_{m1})^T + ... + x_n (a_{1n}, ..., a_{mn})^T
     = (x_1 a_{11} + ... + x_n a_{1n}, ..., x_1 a_{m1} + ... + x_n a_{mn})^T
     = Av = L_A(v).

Hence f = L_A.
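For example, the map f: R^2 → R^3 defined by f(x, y) = (x + y, x - y, 2y) is linear, and f(e_1) = (1, 1, 0) and f(e_2) = (1, -1, 2). Hence f = L_A, where A ∈ M_{3×2}(R) is the matrix whose columns are f(e_1) and f(e_2), that is,

A = ( 1  1 )
    ( 1 -1 )
    ( 0  2 ).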
10.4 Coordinate Vectors with respect to a Basis
Friedberg-Insel-Spence, 4th ed. Section 2.2
Definition 10.4.1. Let V be a vector space over a field F, and let β ⊆ V be a basis for V. The set β is an ordered basis if the elements of β are given a specific order.
Definition 10.4.2. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let n = dim(V). Let β = {v_1, ..., v_n} be an ordered basis for V. Let x ∈ V. Then there are unique a_1, ..., a_n ∈ F such that x = a_1 v_1 + ... + a_n v_n. The coordinate vector of x relative to β is the column vector [x]_β = (a_1, ..., a_n)^T ∈ F^n.
Lemma 10.4.3. Let F be a field, and let n ∈ N. Let β be the standard ordered basis for F^n. If v ∈ F^n, then [v]_β = v.

Proof. Straightforward.
Definition 10.4.4. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let n = dim(V). Let β be an ordered basis for V. The standard representation of V with respect to β is the function φ_β: V → F^n defined by φ_β(x) = [x]_β for all x ∈ V.
Theorem 10.4.5. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let n = dim(V). Let β be an ordered basis for V. Then φ_β is an isomorphism.
Proof. Let {e_1, ..., e_n} be the standard basis for F^n.

Let β = {u_1, ..., u_n}. Let i ∈ {1, ..., n}. Then φ_β(u_i) = e_i. By Theorem 9.1.6 (2) there is a unique linear map g: V → F^n such that g(u_i) = e_i for all i ∈ {1, ..., n}.

Let v ∈ V. Then there are unique a_1, ..., a_n ∈ F such that v = a_1 u_1 + ... + a_n u_n. Hence

φ_β(v) = (a_1, ..., a_n)^T = a_1 e_1 + ... + a_n e_n = a_1 g(u_1) + ... + a_n g(u_n) = g(a_1 u_1 + ... + a_n u_n) = g(v).

Hence φ_β = g. It follows that φ_β is linear.

We know by Lemma 9.2.6 that im φ_β = span(φ_β(β)) = span({e_1, ..., e_n}) = F^n. Hence φ_β is surjective. Because dim(V) = n = dim(F^n), it follows from Corollary 9.4.4 that φ_β is an isomorphism.
10.5 Matrix Representation of Linear Maps–Basics
Friedberg-Insel-Spence, 4th ed. Section 2.2
Definition 10.5.1. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional. Let n = dim(V) and m = dim(W). Let β = {v_1, ..., v_n} be an ordered basis for V and γ = {w_1, ..., w_m} be an ordered basis for W. The matrix representation of f with respect to β and γ is the matrix [f]_β^γ with j-th column equal to [f(v_j)]_γ for all j ∈ {1, ..., n}.

If V = W and β = γ, the matrix [f]_β^β is written [f]_β.
Remark 10.5.2. With the hypotheses of Definition 10.5.1, we see that [f]_β^γ = [a_{ij}], where the elements a_{ij} ∈ F are the elements such that

f(v_j) = ∑_{i=1}^{m} a_{ij} w_i for all j ∈ {1, ..., n}.

Note that [f]_β^γ is an m×n matrix.
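For example, let f: R^2 → R^3 be the linear map f(x, y) = (x + y, x - y, 2y), let β = {(1, 0), (0, 1)} be the standard ordered basis for R^2 and let γ be the standard ordered basis for R^3. Then f(1, 0) = (1, 1, 0) and f(0, 1) = (1, -1, 2), so the columns of [f]_β^γ are the γ-coordinate vectors of these images, that is,

[f]_β^γ = ( 1  1 )
          ( 1 -1 )
          ( 0  2 ),

a 3×2 matrix; by Lemma 10.3.1 this is also the matrix A with f = L_A.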
Lemma 10.5.3. Let V, W be vector spaces over a field F, let f, g: V → W be linear maps, and let c ∈ F. Suppose that V and W are finite-dimensional. Let n = dim(V). Let β be an ordered basis for V, and let γ be an ordered basis for W.

1. [f]_β^γ = [g]_β^γ if and only if f = g.
2. [f + g]_β^γ = [f]_β^γ + [g]_β^γ.
3. [cf]_β^γ = c[f]_β^γ.
4. [1_V]_β = I_n.
Proof. We prove Part (1); the other parts are straightforward.

(1). If f = g, then clearly [f]_β^γ = [g]_β^γ.

Suppose that [f]_β^γ = [g]_β^γ. Let β = {v_1, ..., v_n}. Let j ∈ {1, ..., n}. Then [f(v_j)]_γ is the j-th column of [f]_β^γ, and [g(v_j)]_γ is the j-th column of [g]_β^γ. It follows that f(v_j) and g(v_j) have the same coordinate vector relative to γ. Hence f(v_j) = g(v_j). Therefore f and g agree on a basis, and by Corollary 9.1.7 we deduce that f = g.
Exercises
Exercise 10.5.1. Let V, W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Let n = dim(V) and m = dim(W). Let β be an ordered basis for V, and let γ be an ordered basis for W. Let A ∈ M_{m×n}(F). Prove that there is a linear map f: V → W such that [f]_β^γ = A.
Exercise 10.5.2. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional.

(1) Suppose that f is an isomorphism. Then there is an ordered basis β for V and an ordered basis γ for W such that [f]_β^γ is the identity matrix.

(2) Suppose that f is an arbitrary linear map. Then there is an ordered basis β for V and an ordered basis γ for W such that [f]_β^γ has the block form

[f]_β^γ = ( I_r O )
          ( O   O ),

where O denotes the appropriate zero matrices, for some r ∈ {0, 1, ..., n}.
10.6 Matrix Representation of Linear Maps–Composition
Friedberg-Insel-Spence, 4th ed. Section 2.3
Theorem 10.6.1. Let V, W, Z be vector spaces over a field F, and let f: V → W and g: W → Z be linear maps. Suppose that V, W and Z are finite-dimensional. Let α be an ordered basis for V, let β be an ordered basis for W, and let γ be an ordered basis for Z. Then [g ∘ f]_α^γ = [g]_β^γ [f]_α^β.
Proof. Suppose that [f]_α^β = [a_{ij}], that [g]_β^γ = [b_{ij}], that [g ∘ f]_α^γ = [c_{ij}], and that [g]_β^γ [f]_α^β = [d_{ij}].

Let n = dim(V), let m = dim(W) and let p = dim(Z). Let α = {v_1, ..., v_n}, let β = {w_1, ..., w_m} and let γ = {z_1, ..., z_p}.

By the definition of matrix multiplication, we see that d_{ij} = ∑_{k=1}^{m} b_{ik} a_{kj} for all i ∈ {1, ..., p} and j ∈ {1, ..., n}.

Let j ∈ {1, ..., n}. Then by Remark 10.5.2 we see that

(g ∘ f)(v_j) = ∑_{r=1}^{p} c_{rj} z_r.

On the other hand, using Remark 10.5.2 again, we have

(g ∘ f)(v_j) = g(f(v_j)) = g(∑_{i=1}^{m} a_{ij} w_i) = ∑_{i=1}^{m} a_{ij} g(w_i) = ∑_{i=1}^{m} a_{ij} (∑_{r=1}^{p} b_{ri} z_r) = ∑_{r=1}^{p} (∑_{i=1}^{m} b_{ri} a_{ij}) z_r.

Because {z_1, ..., z_p} is a basis, it follows from Theorem 8.6.2 (2) that ∑_{i=1}^{m} b_{ri} a_{ij} = c_{rj} for all r ∈ {1, ..., p}.

Hence d_{ij} = c_{ij} for all i ∈ {1, ..., p} and j ∈ {1, ..., n}, which means that [g ∘ f]_α^γ = [g]_β^γ [f]_α^β.
Theorem 10.6.2. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional. Let β be an ordered basis for V and let γ be an ordered basis for W. Let v ∈ V. Then [f(v)]_γ = [f]_β^γ [v]_β.

Proof. Let h: F → V be defined by h(a) = av for all a ∈ F. Let g: F → W be defined by g(a) = af(v) for all a ∈ F. It can be verified that h and g are linear maps; the details are left to the reader.

Let α = {1} be the standard ordered basis for F as a vector space over itself. Observe that f ∘ h = g, because f(h(a)) = f(av) = af(v) = g(a) for all a ∈ F. Then

[f(v)]_γ = [g(1)]_γ = [g]_α^γ = [f ∘ h]_α^γ = [f]_β^γ [h]_α^β = [f]_β^γ [h(1)]_β = [f]_β^γ [v]_β.
Lemma 10.6.3. Let F be a field, and let m, n ∈ N. Let β be the standard ordered basis for F^n, and let γ be the standard ordered basis for F^m.

1. Let A ∈ M_{m×n}(F). Then [L_A]_β^γ = A.
2. Let f: F^n → F^m be a linear map. Then f = L_C, where C = [f]_β^γ.
Proof.

(1). Let {e_1, ..., e_n} be the standard basis for F^n. Let j ∈ {1, ..., n}. By Lemma 10.4.3, we see that Ae_j = L_A(e_j) = [L_A(e_j)]_γ. Observe that Ae_j is the j-th column of A, and [L_A(e_j)]_γ is the j-th column of [L_A]_β^γ. Hence A = [L_A]_β^γ.

(2). Let v ∈ F^n. Using Lemma 10.4.3 and Theorem 10.6.2, we see that f(v) = [f(v)]_γ = [f]_β^γ [v]_β = Cv = L_C(v). Hence f = L_C.
10.7 Matrix Representation of Linear Maps–Isomorphisms
Friedberg-Insel-Spence, 4th ed. Section 2.4
Theorem 10.7.1. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional, and that dim(V) = dim(W). Let β be an ordered basis for V, and let γ be an ordered basis for W.

1. f is an isomorphism if and only if [f]_β^γ is invertible.
2. If f is an isomorphism, then [f^{-1}]_γ^β = ([f]_β^γ)^{-1}.
Proof. Both parts of the theorem are proved together. Let n = dim(V) = dim(W).

Suppose that f is an isomorphism. By definition of inverse maps we know that f^{-1} ∘ f = 1_V and f ∘ f^{-1} = 1_W. By Lemma 9.4.3 we know that f^{-1} is a linear map. Hence, using Theorem 10.6.1 and Lemma 10.5.3 (4), we deduce that

[f^{-1}]_γ^β [f]_β^γ = [f^{-1} ∘ f]_β = [1_V]_β = I_n.

A similar argument shows that

[f]_β^γ [f^{-1}]_γ^β = I_n.

It follows that [f]_β^γ is invertible and ([f]_β^γ)^{-1} = [f^{-1}]_γ^β.

Suppose that [f]_β^γ is invertible. Let A = [f]_β^γ. Then there is some B ∈ M_{n×n}(F) such that AB = I_n and BA = I_n. Suppose that B = [b_{ij}].

Suppose that β = {v_1, ..., v_n} and that γ = {w_1, ..., w_n}. By Theorem 9.1.6 (2) there is a unique linear map g: W → V such that g(w_i) = ∑_{k=1}^{n} b_{ki} v_k for all i ∈ {1, ..., n}. Then by definition we have [g]_γ^β = B.

Using Theorem 10.6.1 and Lemma 10.5.3 (4), we deduce that

[g ∘ f]_β = [g]_γ^β [f]_β^γ = BA = I_n = [1_V]_β.

A similar argument shows that

[f ∘ g]_γ = [1_W]_γ.

It follows from Lemma 10.5.3 (1) that g ∘ f = 1_V and f ∘ g = 1_W. Hence f has an inverse, and it is therefore bijective. We conclude that f is an isomorphism.
Corollary 10.7.2. Let F be a field, and let n ∈ N. Let A ∈ M_{n×n}(F).

1. A is invertible if and only if L_A is an isomorphism.
2. If A is invertible, then (L_A)^{-1} = L_{A^{-1}}.
Proof. Left to the reader in Exercise 10.7.3.
Exercises
Exercise 10.7.1. In this exercise, we will use the notation f(β) = γ in the sense of ordered bases, so that f takes the first element of β to the first element of γ, the second element of β to the second element of γ, etc.

Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional.

(1) Let β be an ordered basis for V, and let γ be an ordered basis for W. Then [f]_β^γ is the identity matrix if and only if f(β) = γ.

(2) The map f is an isomorphism if and only if there is an ordered basis β for V and an ordered basis γ for W such that [f]_β^γ is the identity matrix.
Exercise 10.7.2. Let V, W be vector spaces over a field F, and let f: V → W be a linear map. Suppose that V and W are finite-dimensional. Let β be an ordered basis for V, and let γ be an ordered basis for W. Let A = [f]_β^γ.

(1) Prove that rank(f) = rank(L_A).

(2) Prove that nullity(f) = nullity(L_A).
Exercise 10.7.3. Prove Corollary 10.7.2.
10.8 Matrix Representation of Linear Maps–The Big Picture
Friedberg-Insel-Spence, 4th ed. Section 2.4
Theorem 10.8.1. Let V, W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Let n = dim(V) and let m = dim(W). Let β be an ordered basis for V, and let γ be an ordered basis for W. Let Φ: L(V,W) → M_{m×n}(F) be defined by Φ(f) = [f]_β^γ for all f ∈ L(V,W).

1. Φ is an isomorphism.
2. L_{Φ(f)} ∘ φ_β = φ_γ ∘ f for all f ∈ L(V,W).
Proof.

(1). The fact that Φ is a linear map is just a restatement of Lemma 10.5.3 (2) and (3). We know by Theorem 9.5.6 that dim(L(V,W)) = nm. We also know that dim(M_{m×n}(F)) = nm. Hence dim(L(V,W)) = dim(M_{m×n}(F)). The fact that Φ is injective is just a restatement of Lemma 10.5.3 (1). It now follows from Corollary 9.4.4 that Φ is an isomorphism.

(2). Let f ∈ L(V,W). Let v ∈ V. Using Theorem 10.6.2, we see that

(φ_γ ∘ f)(v) = φ_γ(f(v)) = [f(v)]_γ = [f]_β^γ [v]_β = Φ(f) [v]_β = L_{Φ(f)}([v]_β) = L_{Φ(f)}(φ_β(v)) = (L_{Φ(f)} ∘ φ_β)(v).

Hence L_{Φ(f)} ∘ φ_β = φ_γ ∘ f.
Remark 10.8.2. The equation L_{Φ(f)} ∘ φ_β = φ_γ ∘ f in Theorem 10.8.1 (2) is represented by the following commutative diagram, where commutative here means that going around the diagram either way yields the same result.

    V ---------- f ----------→ W
    |                          |
   φ_β                        φ_γ
    ↓                          ↓
   F^n ------ L_{Φ(f)} ------→ F^m
10.9 Matrix Representation of Linear Maps–Change of Basis
Friedberg-Insel-Spence, 4th ed. Section 2.5
Lemma 10.9.1. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let β and β′ be ordered bases for V.

1. [1_V]_{β′}^{β} is invertible.
2. If v ∈ V, then [v]_β = [1_V]_{β′}^{β} [v]_{β′}.
Proof.

(1). We know that 1_V is an isomorphism, and therefore Theorem 10.7.1 (1) implies that [1_V]_{β′}^{β} is invertible.

(2). Let v ∈ V. Then 1_V(v) = v, and hence [1_V(v)]_β = [v]_β. It follows from Theorem 10.6.2 that [1_V]_{β′}^{β} [v]_{β′} = [v]_β.
Definition 10.9.2. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let β and β′ be ordered bases for V. The change of coordinate matrix (also called the change of basis matrix) that changes β′-coordinates into β-coordinates is the matrix [1_V]_{β′}^{β}.
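For example, in R^2 let β be the standard ordered basis and let β′ = {(1, 1), (1, -1)}. The j-th column of [1_{R^2}]_{β′}^{β} is the β-coordinate vector of the j-th vector of β′, so the change of coordinate matrix that changes β′-coordinates into β-coordinates is

Q = ( 1  1 )
    ( 1 -1 ).

For instance, the vector whose β′-coordinate vector is (2, 1)^T is 2(1, 1) + 1(1, -1) = (3, 1), and indeed Q(2, 1)^T = (3, 1)^T, its coordinate vector relative to the standard ordered basis.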
Lemma 10.9.3. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let α, β and γ be ordered bases for V. Let Q be the change of coordinate matrix that changes α-coordinates into β-coordinates, and let R be the change of coordinate matrix that changes β-coordinates into γ-coordinates.

1. RQ is the change of coordinate matrix that changes α-coordinates into γ-coordinates.
2. Q^{-1} is the change of coordinate matrix that changes β-coordinates into α-coordinates.
Proof. Left to the reader in Exercise 10.9.1.
Theorem 10.9.4. Let V, W be vector spaces over a field F. Suppose that V and W are finite-dimensional. Let β and β′ be ordered bases for V, and let γ and γ′ be ordered bases for W. Let Q be the change of coordinate matrix that changes β′-coordinates into β-coordinates, and let P be the change of coordinate matrix that changes γ′-coordinates into γ-coordinates. If f: V → W is a linear map, then [f]_{β′}^{γ′} = P^{-1} [f]_β^γ Q.
Proof. Let f: V → W be a linear map. Observe that f = 1_W ∘ f ∘ 1_V. Then [f]_{β′}^{γ′} = [1_W ∘ f ∘ 1_V]_{β′}^{γ′}. It follows from Theorem 10.6.1 that [f]_{β′}^{γ′} = [1_W]_γ^{γ′} [f]_β^γ [1_V]_{β′}^{β}. By Lemma 10.9.3, we deduce that [f]_{β′}^{γ′} = P^{-1} [f]_β^γ Q.
Corollary 10.9.5. Let V be a vector space over a field F. Suppose that V is finite-dimensional. Let β and β′ be ordered bases for V. Let Q be the change of coordinate matrix that changes β′-coordinates into β-coordinates. If f: V → V is a linear map, then [f]_{β′} = Q^{-1} [f]_β Q.
Corollary 10.9.6. Let F be a field, and let n ∈ N. Let A ∈ M_{n×n}(F). Let β = {v_1, ..., v_n} be an ordered basis for F^n. Let Q ∈ M_{n×n}(F) be the matrix whose j-th column is v_j. Then [L_A]_β = Q^{-1}AQ.
Definition 10.9.7. Let F be a field, and let n ∈ N. Let A, B ∈ M_{n×n}(F). The matrices A and B are similar if there is an invertible matrix Q ∈ M_{n×n}(F) such that A = Q^{-1}BQ.
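For example, in M_{2×2}(R) the matrices

A = ( 1 0 )        B = ( 1 1 )
    ( 0 2 )            ( 0 2 )

are similar: taking

Q = ( 1 1 )        so that        Q^{-1} = ( 1 -1 )
    ( 0 1 ),                               ( 0  1 ),

a direct computation gives Q^{-1}BQ = A. Equivalently, by Corollary 10.9.6, A = [L_B]_β for the ordered basis β = {(1, 0), (1, 1)} of R^2.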
Lemma 10.9.8. Let F be a field, and let n ∈ N. The relation of matrices being similar is an equivalence relation on M_{n×n}(F).
Proof. Left to the reader in Exercise 10.9.2.
Exercises
Exercise 10.9.1. Prove Lemma 10.9.3.
Exercise 10.9.2. Prove Lemma 10.9.8.