MATRICES

by FRANK AYRES, JR.
Elementary matrix algebra has now become an integral part of the mathematical background necessary for such diverse fields as electrical engineering and education, chemistry and sociology, as well as for statistics and pure mathematics. This book, in presenting the more essential material, is designed primarily to serve as a useful supplement to current texts and as a handy reference book for those working in the several fields which require some knowledge of matrix theory. Moreover, the statements of theory and principle are sufficiently complete that the book could be used as a text by itself.
www.TheSolutionManual.com
The material has been divided into twenty-six chapters, since the logical arrangement is thereby not disturbed while the usefulness as a reference book is increased. This also permits a separation of the treatment of real matrices, with which the majority of readers will be concerned, from that of matrices with complex elements. Each chapter contains a statement of pertinent definitions, principles, and theorems, fully illustrated by examples. These, in turn, are followed by a carefully selected set of solved problems and a considerable number of supplementary exercises.
The beginning student in matrix algebra soon finds that the solutions of numerical exercises are disarmingly simple. Difficulties are likely to arise from the constant round of definition, theorem, proof. The trouble here is essentially a matter of lack of mathematical maturity, and is normally to be expected, since usually the student's previous work in mathematics has been concerned with the solution of numerical problems while precise statements of principles and proofs of theorems have in large part been deferred for later courses. The aim of the present book is to enable the reader, if he persists through the introductory paragraphs and solved problems in any chapter, to develop a reasonable degree of self-assurance about the material.
The solved problems, in addition to giving more variety to the examples illustrating the theorems, contain most of the proofs of any considerable length together with representative shorter proofs. The supplementary problems call both for the solution of numerical exercises and for proofs. Some of the latter require only proper modifications of proofs given earlier; more important, however, are the many theorems whose proofs require but a few lines. Some are of the type frequently misnamed "obvious" while others will be found to call for considerable ingenuity. None should be treated lightly, however, for it is due precisely to the abundance of such theorems that elementary matrix algebra becomes a natural first course for those seeking to attain a degree of mathematical maturity. While the large number of these problems in any chapter makes it impractical to solve all of them before moving to the next, special attention is directed to the supplementary problems of the first two chapters. A mastery of these will do much to give the reader confidence to stand on his own feet thereafter.
The author wishes to take this opportunity to express his gratitude to the staff of the Schaum
Publishing Company for their splendid cooperation.
CONTENTS

Page
Chapter 1 MATRICES 1
matrix. Inverse of a matrix. Transpose of a matrix. Symmetric
matrices. Skew-symmetric matrices. Conjugate of a matrix. Hermitian
matrices. Skew-Hermitian matrices. Direct sums.
Chapter 5 EQUIVALENCE 39
Rank of a matrix. Nonsingular and singular matrices. Elementary
transformations. Inverse of an elementary transformation. Equivalent
matrices. Row canonical form. Normal form. Elementary matrices.
Canonical sets under equivalence. Rank of a product.
Chapter 8 FIELDS 64
Number fields. General fields. Subfields. Matrices over a field.
Chapter 9 LINEAR DEPENDENCE OF VECTORS AND FORMS 67
Vectors. Linear dependence of vectors, linear forms, polynomials, and
matrices.
Chapter 12 LINEAR TRANSFORMATIONS 94
Singular and nonsingular transformations. Change of basis. Invariant
space. Permutation matrix.
Chapter 18 HERMITIAN FORMS 146
Matrix form. Transformations. Canonical forms. Definite and semi-definite forms.
Chapter 21 SIMILARITY TO A DIAGONAL MATRIX 163
Real symmetric matrices. Orthogonal similarity. Pairs of real quadratic
forms. Hermitian matrices. Unitary similarity. Normal matrices.
Spectral decomposition. Field of values.
INDEX 215
chapter 1

Matrices

A RECTANGULAR ARRAY OF NUMBERS enclosed by a pair of brackets, such as

(a) [2 3 7; 1 -1 5]   and   (b) [1 3 1; 2 1 4; 4 7 6]

and subject to certain rules of operations given below is called a matrix. The matrix (a) could be considered as the coefficient matrix of the system of homogeneous linear equations

{ 2x + 3y + 7z = 0
{  x -  y + 5z = 0

or as the augmented matrix of the system of non-homogeneous linear equations

{ 2x + 3y = 7
{  x -  y = 5
In the matrix

(1.1)   A = [a11 a12 a13 ... a1n; a21 a22 a23 ... a2n; ......; am1 am2 am3 ... amn]

the numbers or functions aij are called its elements. In the double subscript notation, the first subscript indicates the row and the second subscript indicates the column in which the element stands. Thus, all elements in the second row have 2 as first subscript and all the elements in the fifth column have 5 as second subscript. A matrix of m rows and n columns is said to be of order "m by n" or m×n.
(In indicating a matrix, pairs of parentheses, ( ), and double bars, || ||, are sometimes used. We shall use the bracket notation throughout.)
At times the matrix (1.1) will be called "the m×n matrix [aij]" or "the m×n matrix A = [aij]". When the order has been established, we shall write simply "the matrix A".
SQUARE MATRICES. When m = n, (1.1) is square and will be called a square matrix of order n or an n-square matrix.
In a square matrix, the elements a11, a22, ..., ann are called its diagonal elements.
The sum of the diagonal elements of a square matrix A is called the trace of A.
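The trace is just the sum a11 + a22 + ... + ann, which a one-line helper can compute; the following is an illustrative sketch in plain Python (the function name is ours, not the book's).

```python
def trace(A):
    """Sum of the diagonal elements a11 + a22 + ... + ann of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(trace(A))  # 1 + 5 + 9 = 15
```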
MATRICES [CHAP. 1
EQUAL MATRICES. Two matrices A = [aij] and B = [bij] are said to be equal (A = B) if and only if they have the same order and each element of one is equal to the corresponding element of the other, that is, if and only if

aij = bij    (i = 1, 2, ..., m; j = 1, 2, ..., n)

Thus, two matrices are equal if and only if one is a duplicate of the other.
ZERO MATRIX. A matrix, every element of which is zero, is called a zero matrix. When A is a zero matrix and there can be no confusion as to its order, we shall write A = 0 instead of the m×n array of zero elements.
SUMS OF MATRICES. If A = [aij] and B = [bij] are two m×n matrices, their sum (difference), A ± B, is defined as the m×n matrix C = [cij], where each element of C is the sum (difference) of the corresponding elements of A and B.

Example 1. If A = [1 2 3; 0 1 4] and B = [2 3 0; -1 2 5], then

A + B = [1+2 2+3 3+0; 0+(-1) 1+2 4+5] = [3 5 3; -1 3 9]

and

A - B = [1-2 2-3 3-0; 0-(-1) 1-2 4-5] = [-1 -1 3; 1 -1 -1]
Two matrices of the same order are said to be conformable for addition or subtraction. Two
matrices of different orders cannot be added or subtracted. For example, the matrices (a) and
(b) above are nonconformable for addition and subtraction.
The sum of k identical matrices A is a matrix of the same order as A and each of its elements is k times the corresponding element of A. We define: if k is any scalar (we call k a scalar to distinguish it from [k], which is a 1×1 matrix), then by kA = Ak is meant the matrix obtained from A by multiplying each of its elements by k.
Example 2. If A = [1 2; 2 3], then

A + A + A = 3A = A(3) = [3 6; 6 9]   and   5A = [5(1) 5(2); 5(2) 5(3)] = [5 10; 10 15]
In particular, by -A, called the negative of A, is meant the matrix obtained from A by multiplying each of its elements by -1 or by simply changing the sign of all of its elements. For every A, we have A + (-A) = 0, where 0 indicates the zero matrix of the same order as A.
Assuming that the matrices A,B,C are conformable for addition, we state:
(a) A + B = B + A (commutative law)
(b) A + (B+C) = (A + B)+C (associative law)
(c) k(A + B) = kA + kB = (A + B)k, k a scalar
(d) There exists a matrix D such that A + D = B.
These laws are a result of the laws of elementary algebra governing the addition of numbers
and polynomials. They show, moreover,
1. Conformable matrices obey the same laws of addition as the elements of these matrices.
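Laws (a)-(c) can be spot-checked numerically. The following is a sketch in plain Python with made-up matrices; the helper names madd and smul are ours.

```python
def madd(A, B):
    """Elementwise sum of two matrices of the same order."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(k, A):
    """Scalar multiple kA."""
    return [[k * a for a in row] for row in A]

A = [[1, 2, 3], [0, 1, 4]]
B = [[2, 3, 0], [-1, 2, 5]]
C = [[1, 0, 1], [2, 2, 2]]

assert madd(A, B) == madd(B, A)                              # (a) commutative law
assert madd(A, madd(B, C)) == madd(madd(A, B), C)            # (b) associative law
assert smul(3, madd(A, B)) == madd(smul(3, A), smul(3, B))   # (c) k(A+B) = kA + kB
```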
MULTIPLICATION. By the product AB in that order of the 1×m matrix A = [a11 a12 a13 ... a1m] and the m×1 matrix B = [b11; b21; ...; bm1] is meant the 1×1 matrix C = [a11 b11 + a12 b21 + ... + a1m bm1].
Note that the operation is row by column; each element of the row is multiplied into the corresponding element of the column and then the products are summed.

Example 3.
(b) [3 1 4][2; -6; 3] = [3(2) + 1(-6) + 4(3)] = [6 - 6 + 12] = [12]
By the product AB in that order of the m×p matrix A = [aij] and the p×n matrix B = [bij] is meant the m×n matrix C = [cij], where

cij = ai1 b1j + ai2 b2j + ... + aip bpj = Σ(k=1 to p) aik bkj    (i = 1, 2, ..., m; j = 1, 2, ..., n).
Example 4.
The product AB is defined or A is conformable to B for multiplication only when the number of columns of A is equal to the number of rows of B. If A is conformable to B for multiplication (AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not be defined).
See Problems 3-4.
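The defining sum cij = Σ aik bkj translates directly into nested loops; a sketch in plain Python (the function name is illustrative):

```python
def matmul(A, B):
    """C = AB, where c_ij = sum over k of a_ik * b_kj; A is m*p and B is p*n."""
    m, p, n = len(A), len(B), len(B[0])
    assert all(len(row) == p for row in A), "A must be conformable to B"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

row = [[3, 1, 4]]        # a 1x3 matrix
col = [[2], [-6], [3]]   # a 3x1 matrix
print(matmul(row, col))  # [[3*2 + 1*(-6) + 4*3]] = [[12]]
```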
Assuming that A,B,C are conformable for the indicated sums and products, we have
(e) A(B + C) = AB + AC (first distributive law)
(f) (A + B)C = AC + BC (second distributive law)
(g) A(BC) = (AB)C (associative law)
However,
(h) AB ≠ BA, generally,
(i) AB = 0 does not necessarily imply A = 0 or B = 0.
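Both cautions (h) and (i) are easy to exhibit with 2-square matrices; an illustrative sketch (the particular matrices are ours, not the book's):

```python
def matmul(A, B):
    """Row-by-column product of two conformable matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# (h) AB != BA, generally:
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]

# (i) AB = 0 with neither factor the zero matrix:
C = [[1, 1], [1, 1]]
D = [[1, -1], [-1, 1]]
print(matmul(C, D))  # [[0, 0], [0, 0]]
```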
PRODUCTS BY PARTITIONING. Let A = [aij] be of order m×p and B = [bij] be of order p×n. In forming the product AB, the matrix A is in effect partitioned into m matrices of order 1×p and B into n matrices of order p×1. Other partitions may be used. For example, let A and B be partitioned into matrices of indicated orders by drawing in the dotted lines as

A = [(m1×p1) (m1×p2) (m1×p3); (m2×p1) (m2×p2) (m2×p3)] = [A11 A12 A13; A21 A22 A23]

B = [(p1×n1) (p1×n2); (p2×n1) (p2×n2); (p3×n1) (p3×n2)] = [B11 B12; B21 B22; B31 B32]

In any such partitioning, it is necessary that the columns of A and the rows of B be partitioned in exactly the same way; however m1, m2, n1, n2 may be any nonnegative (including 0) integers such that m1 + m2 = m and n1 + n2 = n. Then

AB = [A11 B11 + A12 B21 + A13 B31    A11 B12 + A12 B22 + A13 B32;
      A21 B11 + A22 B21 + A23 B31    A21 B12 + A22 B22 + A23 B32]
Example 5. Compute AB, given A = [2 1 0; 3 2 0; 1 0 1] and B = [1 1 1 0; 2 1 1 0; 2 3 1 2].

Partitioning so that

A = [A11 A12; A21 A22] = [2 1 | 0; 3 2 | 0; 1 0 | 1]   and   B = [B11 B12; B21 B22] = [1 1 1 | 0; 2 1 1 | 0; 2 3 1 | 2]

we have

AB = [A11 B11 + A12 B21    A11 B12 + A12 B22;
      A21 B11 + A22 B21    A21 B12 + A22 B22]

   = [[2 1; 3 2][1 1 1; 2 1 1] + [0; 0][2 3 1]    [2 1; 3 2][0; 0] + [0; 0][2];
      [1 0][1 1 1; 2 1 1] + [1][2 3 1]            [1 0][0; 0] + [1][2]]

   = [[4 3 3; 7 5 5]    [0; 0];
      [1 1 1] + [2 3 1]    [0] + [2]]

   = [4 3 3 0; 7 5 5 0; 3 4 2 2]
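Block multiplication can be checked against the ordinary product. A sketch in plain Python with a 2+1 partition of a 3×3 by 3×4 product; the helper names are ours:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, rows, cols):
    """Submatrix of M picked out by the given row and column index ranges."""
    return [[M[i][j] for j in cols] for i in rows]

A = [[2, 1, 0], [3, 2, 0], [1, 0, 1]]
B = [[1, 1, 1, 0], [2, 1, 1, 0], [2, 3, 1, 2]]

# Partition A's columns and B's rows in the same way (2 + 1):
A11, A12 = block(A, range(0, 2), range(0, 2)), block(A, range(0, 2), range(2, 3))
A21, A22 = block(A, range(2, 3), range(0, 2)), block(A, range(2, 3), range(2, 3))
B1 = block(B, range(0, 2), range(4))   # first block of rows of B
B2 = block(B, range(2, 3), range(4))   # last row of B

top = madd(matmul(A11, B1), matmul(A12, B2))     # upper block of rows of AB
bottom = madd(matmul(A21, B1), matmul(A22, B2))  # lower block of rows of AB
assert top + bottom == matmul(A, B)  # list concatenation stacks the row blocks
```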
Let A, B, C, ... be n-square matrices. Let A be partitioned into matrices of the indicated orders

A = [(p1×p1) (p1×p2) ... (p1×ps); (p2×p1) (p2×p2) ... (p2×ps); ......; (ps×p1) (ps×p2) ... (ps×ps)] = [A11 A12 ... A1s; A21 A22 ... A2s; ......; As1 As2 ... Ass]

and let B, C, ... be partitioned in exactly the same manner. Then sums, differences, and products may be formed using the matrices A11, A12, ...; B11, B12, ...; C11, C12, ....
SOLVED PROBLEMS
1. (a) [1 2 1 0; 4 0 2 1; 2 -5 1 2] + [3 -4 1 2; 1 5 0 3; 2 -2 3 -1] = [1+3 2+(-4) 1+1 0+2; 4+1 0+5 2+0 1+3; 2+2 -5+(-2) 1+3 2+(-1)] = [4 -2 2 2; 5 5 2 4; 4 -7 4 1]

(b) [1 2 1 0; 4 0 2 1; 2 -5 1 2] - [3 -4 1 2; 1 5 0 3; 2 -2 3 -1] = [1-3 2+4 1-1 0-2; 4-1 0-5 2-0 1-3; 2-2 -5+2 1-3 2+1] = [-2 6 0 -2; 3 -5 2 -2; 0 -3 -2 3]

(c) 3[1 2 1 0; 4 0 2 1; 2 -5 1 2] = [3 6 3 0; 12 0 6 3; 6 -15 3 6]

(d) -[1 2 1 0; 4 0 2 1; 2 -5 1 2] = [-1 -2 -1 0; -4 0 -2 -1; -2 5 -1 -2]
2. If A = [1 2; 3 4; 5 6] and B = [3 -2; 1 5; 4 3], find D = [p q; r s; t u] such that A + B - D = 0.

Each element of D must be the sum of the corresponding elements of A and B; for example, p = 1 + 3 = 4 and r = 3 + 1 = 4. Then D = [4 0; 4 9; 9 9] = A + B.
(c) [1 2 3][4 6 9 6; 0 7 10 7; 5 8 11 8] = [1(4)+2(0)+3(5)  1(6)+2(7)+3(8)  1(9)+2(10)+3(11)  1(6)+2(7)+3(8)] = [19 44 62 44]

(e) [1 2 1; 4 0 2][3 4; 1 5; 2 2] = [1(3)+2(1)+1(2)  1(4)+2(5)+1(2); 4(3)+0(1)+2(2)  4(4)+0(5)+2(2)] = [7 16; 16 20]
5. Show that:
(a) Σ(k=1 to 2) aik(bkj + ckj) = Σ(k=1 to 2) aik bkj + Σ(k=1 to 2) aik ckj
(b) Σ(i=1 to 2) Σ(j=1 to 3) aij = Σ(j=1 to 3) Σ(i=1 to 2) aij
(c) Σ(k=1 to 2) aik (Σ(h=1 to 3) bkh chj) = Σ(h=1 to 3) (Σ(k=1 to 2) aik bkh) chj

(a) Σ(k=1 to 2) aik(bkj + ckj) = ai1(b1j + c1j) + ai2(b2j + c2j) = (ai1 b1j + ai2 b2j) + (ai1 c1j + ai2 c2j)
    = Σ(k=1 to 2) aik bkj + Σ(k=1 to 2) aik ckj

(b) Σ(i=1 to 2) Σ(j=1 to 3) aij = Σ(i=1 to 2) (ai1 + ai2 + ai3) = (a11 + a12 + a13) + (a21 + a22 + a23)
    = (a11 + a21) + (a12 + a22) + (a13 + a23)
    = Σ(i=1 to 2) ai1 + Σ(i=1 to 2) ai2 + Σ(i=1 to 2) ai3 = Σ(j=1 to 3) Σ(i=1 to 2) aij

This is simply the statement that in summing all of the elements of a matrix, one may sum first the elements of each row or the elements of each column.

(c) Σ(k=1 to 2) aik (Σ(h=1 to 3) bkh chj) = Σ(k=1 to 2) aik (bk1 c1j + bk2 c2j + bk3 c3j)
    = (Σ(k=1 to 2) aik bk1) c1j + (Σ(k=1 to 2) aik bk2) c2j + (Σ(k=1 to 2) aik bk3) c3j
    = Σ(h=1 to 3) (Σ(k=1 to 2) aik bkh) chj
6. Prove: If A = [aik] is of order m×n and if B = [bkj] and C = [ckj] are of order n×p, then A(B + C) = AB + AC.

The elements of the ith row of A are ai1, ai2, ..., ain and the elements of the jth column of B + C are b1j + c1j, b2j + c2j, ..., bnj + cnj. Then the element standing in the ith row and jth column of A(B + C) is

Σ(k=1 to n) aik(bkj + ckj) = Σ(k=1 to n) aik bkj + Σ(k=1 to n) aik ckj

the sum of the elements standing in the ith row and jth column of AB and AC.
7. Prove: If A = [aij] is of order m×n, if B = [bjk] is of order n×p, and if C = [ckl] is of order p×q, then A(BC) = (AB)C.

The elements of the ith row of A are ai1, ai2, ..., ain and the elements of the jth column of BC are Σ(h=1 to p) b1h chj, Σ(h=1 to p) b2h chj, ..., Σ(h=1 to p) bnh chj; hence the element standing in the ith row and jth column of A(BC) is

ai1 Σ(h=1 to p) b1h chj + ai2 Σ(h=1 to p) b2h chj + ... + ain Σ(h=1 to p) bnh chj = Σ(k=1 to n) aik (Σ(h=1 to p) bkh chj)
= Σ(h=1 to p) (Σ(k=1 to n) aik bkh) chj = (Σ(k=1 to n) aik bk1) c1j + (Σ(k=1 to n) aik bk2) c2j + ... + (Σ(k=1 to n) aik bkp) cpj

This is the element standing in the ith row and jth column of (AB)C; hence, A(BC) = (AB)C.
8. Assuming A, B ,C,D conformable, show in two ways that (A +B)(C +D) = AC + AD + BC + BD.
Using (e) and then (f), (A+B)(C+D) = (A+B)C + (A+B)D = AC + BC + AD + BD.
Using (f) and then (e), (A+B)(C+D) = A(C+D) + B(C+D) = AC + AD + BC + BD.
"l o'
1 l'
1
"l 10 1 'l o' 3 1 2
"4
1 2"
9. (a) 1 2 1 1 + 2 [3 1 2] 1 + 6 2 4 = 6 3 4
1
1 3 1 1 3 1 9 3 6 9 3 7
_3 1 2_
(b) and (c): the remaining products are computed in the same manner from the indicated partitions.
10. Let

x1 = a11 y1 + a12 y2
x2 = a21 y1 + a22 y2
x3 = a31 y1 + a32 y2

be three linear forms in y1 and y2, and let

y1 = b11 z1 + b12 z2
y2 = b21 z1 + b22 z2

be a linear transformation of the coordinates (y1, y2) into new coordinates (z1, z2). The result of applying the transformation to the given forms is a set of three linear forms in z1 and z2.

Using matrix notation, we have the three forms [x1; x2; x3] = [a11 a12; a21 a22; a31 a32][y1; y2] and the transformation [y1; y2] = [b11 b12; b21 b22][z1; z2]. The result of applying the transformation is the set of three forms

[x1; x2; x3] = [a11 a12; a21 a22; a31 a32][b11 b12; b21 b22][z1; z2]

Thus, when a set of m linear forms in n variables with matrix A is subjected to a linear transformation of the variables with matrix B, there results a set of m linear forms with matrix C = AB.
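The conclusion C = AB can be tested numerically: substituting the transformation by hand must agree with applying C directly. A sketch with made-up coefficients (all names are ours):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4], [5, 6]]  # matrix of three forms x_i = a_i1*y1 + a_i2*y2
B = [[2, 1], [0, 3]]          # transformation y1 = 2*z1 + z2, y2 = 3*z2
C = matmul(A, B)              # matrix of the forms in z1, z2

z1, z2 = 7, -2
y1 = 2 * z1 + 1 * z2          # apply the transformation by hand
y2 = 0 * z1 + 3 * z2
for i in range(3):
    direct = A[i][0] * y1 + A[i][1] * y2   # substitute the y's
    via_C = C[i][0] * z1 + C[i][1] * z2    # use C = AB
    assert direct == via_C
```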
SUPPLEMENTARY PROBLEMS
2
3' 1
"l 3 2I 4 ]
2I
Given A = 5 2 , B = 4 2 5 , and C = 3 2
1
.1 1_ _2 3_
J 2 3_
4 1 f 3 I 51
(a) Compute: A + B = 9 2 7 , AC = 53
_3 1 4j _ t2j
(b) Compute: 2A = [2 4 6; 10 4 0; 2 2 2].
(c) Verify: A + (B - C) = (A + B) - C.
(d) Find the matrix D such that A + D = B. Verify that D = B - A = -(A - B).
12. Given A = [1 -1 1; -3 2 -1; -2 1 0] and B = [1 2 3; 2 4 6; 1 2 3], compute AB = 0 and BA = [-11 6 -1; -22 12 -2; -11 6 -1]. Hence, AB ≠ BA generally.
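Treating the entries as read above (a cleaner copy may differ in signs), the computation is easy to verify mechanically:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, -1, 1], [-3, 2, -1], [-2, 1, 0]]
B = [[1, 2, 3], [2, 4, 6], [1, 2, 3]]

assert matmul(A, B) == [[0, 0, 0], [0, 0, 0], [0, 0, 0]]            # AB = 0
assert matmul(B, A) == [[-11, 6, -1], [-22, 12, -2], [-11, 6, -1]]  # BA != 0
```

Note that A and B are both nonzero yet AB = 0: they are "divisors of zero" in the sense discussed for square matrices later in the book.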
13. Given A = [1 -3 2; 2 1 -3; 4 -3 -1], B = [1 4 1 0; 2 1 1 1; 1 -2 1 2], and C = [2 1 -1 -2; 3 -2 -1 -1; 2 -5 -1 0], show that AB = AC. Thus, AB = AC does not necessarily imply B = C.
14. Given conformable matrices A, B, and C, show that (AB)C = A(BC).
15. Using the matrices of Problem 11, show that A(B + C) = AB + AC and (A+B)C = AC +BC.
17. Given A = [2 -3 -5; -1 4 5; 1 -3 -4], B = [-1 3 5; 1 -3 -5; -1 3 5], and C = [2 -2 -4; -1 3 4; 1 -2 -3],
(a) show that AB = BA = 0, AC = A, CA = C.
(b) use the results of (a) to show that ACB = CBA, A² - B² = (A - B)(A + B), (A ± B)² = A² + B².
18. Given A, where i² = -1, derive a formula for the positive integral powers of A.

19. Show that the product of any two matrices of the given set is again a matrix of the set.
20. Given the matrices A of order m×n, B of order n×p, and C of order r×q, under what conditions on p, q, and r would the matrices be conformable for finding the products, and what is the order of each: (a) ABC, (b) ACB, (c) A(B + C)?
Ans. (a) p = r; m×q   (b) r = n = q; m×p   (c) r = n, p = q; m×p
21. Compute AB by partitioning, given the matrix pairs indicated in (a), (b), and (c).
22. Prove: (a) trace(A + B) = trace A + trace B, (b) trace(kA) = k·trace A.
23. Show that the given pair of linear forms in y1 and y2, under the indicated linear transformation of the coordinates (y1, y2) into (z1, z2), becomes the pair of forms [z1 + 7z2; 2z1 - 6z2].
24. If A = [aij] and B = [bij] are of order m×n and if C = [cij] is of order n×p, show that (A + B)C = AC + BC.
25. Let A = [aik] and B = [bkj], where i = 1, 2, ..., m; j = 1, 2, ..., p; k = 1, 2, ..., n. Denote by βk the sum of the elements of the kth row of B, that is, let βk = Σ(j=1 to p) bkj. Show that the element in the ith row of A[β1; β2; ...; βn] is the sum of the elements lying in the ith row of AB. Use this procedure to check the products formed in Problems 12 and 13.
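The check of Problem 25 — multiply A by the column of row sums of B, and compare with the row sums of AB — can be sketched as follows (made-up matrices, helper names ours):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6, 7], [8, 9, 10]]

beta = [[sum(row)] for row in B]             # column of row sums of B
lhs = matmul(A, beta)                        # the quantity in Problem 25
rhs = [[sum(row)] for row in matmul(A, B)]   # row sums of AB
assert lhs == rhs
```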
26. A relation (such as parallelism or congruency) between mathematical entities possessing the properties of being reflexive, symmetric, and transitive is called an equivalence relation.
27. Show that conformability for addition of matrices is an equivalence relation while conformability for multiplication is not.
28. Prove: If A, B, C are matrices such that AC = CA and BC = CB, then (AB ± BA)C = C(AB ± BA).
chapter 2

Some Types of Matrices
THE IDENTITY MATRIX. A square matrix A whose elements aij = 0 for i > j is called upper triangular; a square matrix A whose elements aij = 0 for i < j is called lower triangular. Thus,

[a11 a12 a13 ... a1n; 0 a22 a23 ... a2n; 0 0 a33 ... a3n; ......; 0 0 0 ... ann]   is upper triangular

and

[a11 0 0 ... 0; a21 a22 0 ... 0; a31 a32 a33 ... 0; ......; an1 an2 an3 ... ann]   is lower triangular.

The matrix D = [a11 0 0 ... 0; 0 a22 0 ... 0; ......; 0 0 0 ... ann], which is both upper and lower triangular, is called a diagonal matrix. It will frequently be written as D = diag(a11, a22, ..., ann).
If in the diagonal matrix D above, a11 = a22 = ... = ann = k, D is called a scalar matrix; if, in addition, k = 1, the matrix is called the identity matrix and is denoted by In. For example,

I2 = [1 0; 0 1]   and   I3 = [1 0 0; 0 1 0; 0 0 1]

When the order is evident or immaterial, an identity matrix will be denoted by I. Clearly,

I + I + ... to p terms = pI = diag(p, p, ..., p)   and   I^p = I·I·... to p factors = I.

Identity matrices behave, in multiplication, like the number 1: for any m×n matrix A, Im·A = A·In = A.
SPECIAL SQUARE MATRICES. If A and B are square matrices such that AB = BA, then A and B are called commutative or are said to commute. It is a simple matter to show that if A is any n-square matrix, it commutes with itself and also with In.
See Problem 2.
If A and B are such that AB = -BA, the matrices A and B are said to anticommute.
A matrix A for which A^(k+1) = A, where k is a positive integer, is called periodic. If k is the least positive integer for which A^(k+1) = A, then A is said to be of period k. If k = 1, so that A² = A, then A is called idempotent. A matrix A for which A^p = 0, where p is a positive integer, is called nilpotent; if p is the least such integer, A is said to be nilpotent of order p.
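These definitions are easy to test numerically; the two matrices below are standard examples of the two types (not necessarily the book's own):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Idempotent: A*A = A
A = [[2, -2, -4], [-1, 3, 4], [1, -2, -3]]
assert matmul(A, A) == A

# Nilpotent of order 3: N^2 != 0 but N^3 = 0
N = [[1, 1, 3], [5, 2, 6], [-2, -1, -3]]
N2 = matmul(N, N)
zero = [[0] * 3 for _ in range(3)]
assert N2 != zero and matmul(N2, N) == zero
```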
THE INVERSE OF A MATRIX. If A and B are square matrices such that AB = BA = I, then B is called the inverse of A and we write B = A⁻¹ (B equals A inverse). The matrix B also has A as its inverse and we may write A = B⁻¹.

Example 1. Since [1 2 3; 1 3 3; 1 2 4][6 -2 -3; -1 1 0; -1 0 1] = [1 0 0; 0 1 0; 0 0 1] = I, each matrix in the product is the inverse of the other.

We shall find later (Chapter 7) that not every square matrix has an inverse. We can show here, however, that if A has an inverse then that inverse is unique.
See Problem 7.
If A and B are square matrices of the same order with inverses A⁻¹ and B⁻¹ respectively, then
I. The inverse of the product of two matrices, having inverses, is the product in reverse order of these inverses, that is, (AB)⁻¹ = B⁻¹A⁻¹.
See Problem 8.
A matrix A such that A² = I is called involutory. An identity matrix, for example, is involutory. An involutory matrix is its own inverse.
See Problem 9.
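A quick numeric check of an involutory matrix, using a permutation matrix as our example:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

I = [[1, 0], [0, 1]]
A = [[0, 1], [1, 0]]      # swaps the two coordinates; clearly A != I
assert matmul(A, A) == I  # involutory: A^2 = I, so A is its own inverse
```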
THE TRANSPOSE OF A MATRIX. The matrix of order n×m obtained by interchanging the rows and columns of an m×n matrix A is called the transpose of A and is denoted by A' (A transpose). For example, the transpose of A = [a11 a12 a13; a21 a22 a23] is A' = [a11 a21; a12 a22; a13 a23]. Note that the element aij in the ith row and jth column of A stands in the jth row and ith column of A'.
If A' and B' are transposes respectively of A and B, and if k is a scalar, we have immediately
II. The transpose of the sum of two matrices is the sum of their transposes, i.e.,
(A + B)' = A' + B'
SOME TYPES OF MATRICES [CHAP. 2
and
III. The transpose of the product of two matrices is the product in reverse order of their transposes, i.e.,
(AB)' = B'A'
See Problems 10-12.
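Rules II and III can be spot-checked directly; a sketch in plain Python (helper names are ours):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def T(A):
    """Transpose: rows become columns."""
    return [list(col) for col in zip(*A)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2, 3], [4, 5, 6]]    # 2x3
B = [[1, 0], [2, 1], [0, 3]]  # 3x2
C = [[0, 1, 1], [1, 0, 2]]    # 2x3

assert T(madd(A, C)) == madd(T(A), T(C))      # II: (A + C)' = A' + C'
assert T(matmul(A, B)) == matmul(T(B), T(A))  # III: (AB)' = B'A'
```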
SYMMETRIC MATRICES. A square matrix A such that A' = A is called symmetric. Thus, a square matrix A = [aij] is symmetric provided aij = aji for all values of i and j. For example,

A = [1 2 3; 2 4 -5; 3 -5 6]

is symmetric and so also is kA for any scalar k.
In Problem 13, we prove
IV. If A is an n-square matrix, then A + A' is symmetric.
A square matrix A such that A' = -A is called skew-symmetric. Thus, a square matrix A is skew-symmetric provided aij = -aji for all values of i and j. Clearly, the diagonal elements of a skew-symmetric matrix are zeroes. For example,

A = [0 2 -3; -2 0 4; 3 -4 0]

is skew-symmetric and so also is kA for any scalar k.
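Every square matrix splits into a symmetric part plus a skew-symmetric part, (A + A')/2 + (A - A')/2 — the identity behind theorem IV. A sketch with an arbitrary matrix of ours:

```python
def T(A):
    return [list(col) for col in zip(*A)]

A = [[1, 7, 3], [0, 4, -2], [5, 8, 6]]
S = [[(A[i][j] + A[j][i]) / 2 for j in range(3)] for i in range(3)]  # (A + A')/2
K = [[(A[i][j] - A[j][i]) / 2 for j in range(3)] for i in range(3)]  # (A - A')/2

assert T(S) == S                                   # symmetric part
assert T(K) == [[-x for x in row] for row in K]    # skew-symmetric part
assert [[S[i][j] + K[i][j] for j in range(3)] for i in range(3)] == A
```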
THE CONJUGATE OF A MATRIX. Let a and b be real numbers and let i = √(-1); then z = a + bi is called a complex number. The complex numbers a + bi and a - bi are called conjugates, each being the conjugate of the other. If z = a + bi, its conjugate is denoted by z̄ = a - bi.
If z1 = a + bi and z2 = c + di, then
(i) z1 + z2 = (a + c) + (b + d)i and its conjugate is (a + c) - (b + d)i = (a - bi) + (c - di) = z̄1 + z̄2,
that is, the conjugate of the sum of two complex numbers is the sum of their conjugates.
(ii) z1·z2 = (ac - bd) + (ad + bc)i and its conjugate is (ac - bd) - (ad + bc)i = (a - bi)(c - di) = z̄1·z̄2,
that is, the conjugate of the product of two complex numbers is the product of their conjugates.
When A is a matrix having complex numbers as elements, the matrix obtained from A by replacing each element by its conjugate is called the conjugate of A and is denoted by Ā (A conjugate).

Example 2. When A = [1+2i  i; 3  2-3i], then Ā = [1-2i  -i; 3  2+3i].
If Ā and B̄ are the conjugates of the matrices A and B and if k is any scalar, we have readily
VII. The conjugate of the sum of two matrices is the sum of their conjugates, i.e., the conjugate of (A + B) is Ā + B̄.
VIII. The conjugate of the product of two matrices is the product, in the same order, of their conjugates, i.e., the conjugate of AB is Ā·B̄.
Example 3. For the matrix A of Example 2, (Ā)' = [1-2i  3; -i  2+3i] while A' = [1+2i  3; i  2-3i], and the conjugate of A' is (Ā)'.
HERMITIAN MATRICES. A square matrix A = [aij] such that (Ā)' = A is called Hermitian. Thus, A is Hermitian provided aij = āji for all values of i and j. Clearly, the diagonal elements of an Hermitian matrix are real numbers.

Example 4. The matrix A = [1  1-i  2; 1+i  3  i; 2  -i  0] is Hermitian. Is kA Hermitian if k is any real number? any complex number?

A square matrix A = [aij] such that (Ā)' = -A is called skew-Hermitian. Thus, A is skew-Hermitian provided aij = -āji for all values of i and j. Clearly, the diagonal elements of a skew-Hermitian matrix are zeroes or pure imaginaries.

Example 5. The matrix B = [i  1-i  2; -1-i  3i  i; -2  i  0] is skew-Hermitian. Is kB skew-Hermitian if k is any real number?
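Python's complex numbers make the Hermitian and skew-Hermitian tests mechanical; a sketch (the sample matrices are ours):

```python
def conj_T(A):
    """Conjugate transpose: element (j, i) of the result is the conjugate of a_ij."""
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

H = [[1, 1 - 1j, 2],
     [1 + 1j, 3, 1j],
     [2, -1j, 0]]
assert conj_T(H) == H      # Hermitian: equal to its conjugate transpose

S = [[1j, 1 - 1j], [-1 - 1j, 3j]]
neg = [[-x for x in row] for row in S]
assert conj_T(S) == neg    # skew-Hermitian: conjugate transpose equals -S
```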
DIRECT SUM. Let A1, A2, ..., As be square matrices of respective orders m1, m2, ..., ms. The generalization

diag(A1, A2, ..., As) = [A1 0 ... 0; 0 A2 ... 0; ......; 0 0 ... As]

of the diagonal matrix is called the direct sum of the Ai.

Example 6. Let A1 = [2] and A2 = [1 2; 3 4]. The direct sum of A1 and A2 is

diag(A1, A2) = [2 0 0; 0 1 2; 0 3 4]
SOLVED PROBLEMS
1. Since

diag(a11, a22, ..., amm)·[b11 b12 ... b1n; b21 b22 ... b2n; ......; bm1 bm2 ... bmn] = [a11 b11  a11 b12 ... a11 b1n; a22 b21  a22 b22 ... a22 b2n; ......; amm bm1  amm bm2 ... amm bmn]

the product AB of an m-square diagonal matrix A = diag(a11, a22, ..., amm) and any m×n matrix B is obtained by multiplying the first row of B by a11, the second row of B by a22, and so on.
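The rule can be confirmed directly; a sketch with an illustrative diagonal (names are ours):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

d = [2, -1, 3]
D = [[d[i] if i == j else 0 for j in range(3)] for i in range(3)]  # diag(2, -1, 3)
B = [[1, 2], [3, 4], [5, 6]]

scaled = [[d[i] * x for x in B[i]] for i in range(3)]  # row i of B times d[i]
assert matmul(D, B) == scaled
```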
3. Show that A = [2 -2 -4; -1 3 4; 1 -2 -3] is idempotent.

A² = [2 -2 -4; -1 3 4; 1 -2 -3][2 -2 -4; -1 3 4; 1 -2 -3] = [2 -2 -4; -1 3 4; 1 -2 -3] = A

4. Show that if AB = A and BA = B, then A and B are idempotent.

ABA = (AB)A = A·A = A² and ABA = A(BA) = AB = A; then A² = A and A is idempotent. Use BAB to show that B is idempotent.
5. Show that A = [1 1 3; 5 2 6; -2 -1 -3] is nilpotent of order 3.

A² = [1 1 3; 5 2 6; -2 -1 -3][1 1 3; 5 2 6; -2 -1 -3] = [0 0 0; 3 3 9; -1 -1 -3]   and   A³ = A²·A = [0 0 0; 3 3 9; -1 -1 -3][1 1 3; 5 2 6; -2 -1 -3] = 0
7. Let A, B, C be square matrices such that AB = I and CA = I. Then (CA)B = C(AB), so that B = C. Thus, B = C = A⁻¹ is the unique inverse of A. (What is B⁻¹?)
8. Prove: (AB)⁻¹ = B⁻¹A⁻¹.

Since (B⁻¹A⁻¹)(AB) = B⁻¹(A⁻¹A)B = B⁻¹B = I and (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AA⁻¹ = I, the matrix B⁻¹A⁻¹ is the inverse of AB.

10. Prove: (A + B)' = A' + B'.

Let A = [aij] and B = [bij]. We need only check that the elements in the ith row and jth column of A', B', and (A + B)' are respectively aji, bji, and aji + bji.

11. Prove: (AB)' = B'A'.

Let A = [aij] be of order m×n and B = [bij] be of order n×p; then C = AB = [cij] is of order m×p. The element standing in the ith row and jth column of AB is cij = Σ(k=1 to n) aik bkj, and this is also the element standing in the jth row and ith column of (AB)'.
The elements of the jth row of B' are b1j, b2j, ..., bnj and the elements of the ith column of A' are ai1, ai2, ..., ain. Then the element in the jth row and ith column of B'A' is Σ(k=1 to n) bkj aik = Σ(k=1 to n) aik bkj. Thus, (AB)' = B'A'.

13. Prove: If A is n-square, then B = A + A' is symmetric.

First Proof.
The element in the ith row and jth column of A is aij and the corresponding element of A' is aji; hence, bij = aij + aji. The element in the jth row and ith column of A is aji and the corresponding element of A' is aij; hence, bji = aji + aij. Thus, bij = bji and B is symmetric.
14. Prove: If A and B are n-square symmetric matrices, then AB is symmetric if and only if A and B commute.
Suppose A and B commute so that AB = BA. Then (AB)' = B'A' = BA = AB and AB is symmetric.
Suppose AB is symmetric so that (AB)' = AB. Now (AB)' = B'A' = BA; hence, AB = BA and the matrices A and B commute.
15. Prove: If the m-square matrix A is symmetric (skew-symmetric) and if P is of order m×n, then B = P'AP is symmetric (skew-symmetric).

If A is symmetric then (see Problem 12) B' = (P'AP)' = P'A'(P')' = P'A'P = P'AP = B and B is symmetric.
16. Prove: If A and B are n-square matrices, then A and B commute if and only if A - kI and B - kI commute for every scalar k.
SUPPLEMENTARY PROBLEMS
17. Show that the product of two upper (lower) triangular matrices is upper (lower) triangular.
18. Derive a rule for forming the product BA of an m×n matrix B and A = diag(a11, a22, ..., ann).
Hint. See Problem 1.
19. Show that the scalar matrix with diagonal element k can be written as kI and that kA = (kI)A = diag(k, k, ..., k)·A, where the order of I is the row order of A.
20. If A is n-square, show that A^p·A^q = A^q·A^p, where p and q are positive integers.
21. (a) Show that A = [2 -3 -5; -1 4 5; 1 -3 -4] and B = [-1 3 5; 1 -3 -5; -1 3 5] are idempotent.
(b) Using A and B, show that the converse of Problem 4 does not hold.
23. (a) If A = [1 2 2; 2 1 2; 2 2 1], show that A² - 4A - 5I = 0.
(b) If A = [2 1 3; 1 -1 2; 1 2 1], show that A³ - 2A² - 9A = 0, but A² - 2A - 9I ≠ 0.
1 1 1" 2
1 o"
3
1
24. Show that 1 = 1 1 1 = =
1 /.
_ 1 1 1 1 1
25. Show that A = [1 -2 -6; -3 2 9; 2 0 -3] is periodic, of period 2.
26. Show that A = [1 -3 -4; -1 3 4; 1 -3 -4] is nilpotent.
'12 3 2 1 6
27. Show that (a) A = 3 2 and B = 3 2 9 commute.
1 ~1 1 _l 1 4_
"112' 2/3 1/3'
(b) A = 2 3 and B = 3/5
1 2/5 1/5 commute.
12 4 7/15 1/5 1/15
30. Prove: The only matrices which commute with every n-square matrix are the n-square scalar matrices.
31. (a) Find all matrices which commute with diag(1, 2, 3).
(b) Find all matrices which commute with diag(a11, a22, ..., ann).
Ans. (a) diag(a, b, c) where a, b, c are arbitrary.
32. Show that (a) [1 2 3; 2 5 7; -2 -4 -5] is the inverse of [3 -2 -1; -4 1 -1; 2 0 1].
(b) [1 0 0 0; -2 1 0 0; 4 -2 1 0; -8 4 -2 1] is the inverse of [1 0 0 0; 2 1 0 0; 0 2 1 0; 0 0 2 1].
34. Show that the inverse of a diagonal matrix A, all of whose diagonal elements are different from zero, is a diagonal matrix whose diagonal elements are the inverses of those of A and in the same order. Thus, the inverse of I is I.
36. Let A = [I2 0; A21 I2], where A21 = [a b; c d]. Show, by partitioning, that A⁻¹ = [I2 0; -A21 I2].
37. Prove: (a) (A')' = A, (b) (kA)' = kA', (c) (A^p)' = (A')^p for p a positive integer.
39. Prove: (a) the conjugate of Ā is A, (b) the conjugate of kA is k̄Ā, (c) the conjugate of A^p is (Ā)^p for p a positive integer.
42. Show that (a) A = [1  1+i  2+3i; 1-i  2  -i; 2-3i  i  0] is Hermitian, (b) B = [i  1+i  2-3i; -1+i  2i  1; -2-3i  -1  0] is skew-Hermitian, and (c) iB is Hermitian.
43. If A is n-square, show that (a) AA' and A'A are symmetric, (b) A + (Ā)', A(Ā)', and (Ā)'A are Hermitian.
44. Prove: If H is Hermitian and A is any conformable matrix, then (Ā)'HA is Hermitian.
45. Prove: Every Hermitian matrix A can be written as B + iC, where B is real and symmetric and C is real and skew-symmetric.

46. Prove: (a) Every skew-Hermitian matrix A can be written as A = B + iC, where B is real and skew-symmetric and C is real and symmetric. (b) A² is real if and only if B and C anticommute.
47. Prove: If A and B commute, so also do A⁻¹ and B⁻¹, A' and B', and Ā and B̄.
48. Show that, for m and n positive integers, A^m and B^n commute if A and B commute.
49. Show that

(a) [λ 1; 0 λ]^n = [λ^n  nλ^(n-1); 0  λ^n]

(b) [λ 1 0; 0 λ 1; 0 0 λ]^n = [λ^n  nλ^(n-1)  (1/2)n(n-1)λ^(n-2); 0  λ^n  nλ^(n-1); 0  0  λ^n]
50. Prove: If A is symmetric or skew-symmetric, then AA' = A'A and A² are symmetric.
51. Prove: If A is symmetric, so also is aA + bA² + ... + gA^p, where a, b, ..., g are scalars and p is a positive integer.
52. Prove: Every square matrix A can be written as A = B + C, where B is Hermitian and C is skew-Hermitian.
53. Prove: If A is real and skew-symmetric, or if A is complex and skew-Hermitian, then ±iA are Hermitian.
55. Prove: If A and B are such that AB = A and BA = B, then (a) B'A' = A' and A'B' = B', (b) A' and B' are idempotent, (c) A = B = I if A has an inverse.
56. If A is involutory, show that (1/2)(I + A) and (1/2)(I - A) are idempotent and (1/2)(I + A)·(1/2)(I - A) = 0.
58. Find all matrices which commute with (a) diag(1, 1, 2, 3), (b) diag(1, 1, 2, 2).
Ans. (a) diag(A, b, c), where A is 2-square with arbitrary elements and b, c are scalars; (b) diag(A, B), where A and B are 2-square matrices with arbitrary elements.
59. If A1, A2, ..., As are scalar matrices of respective orders m1, m2, ..., ms, find all matrices which commute with diag(A1, A2, ..., As).
Ans. diag(B1, B2, ..., Bs), where B1, B2, ..., Bs are of respective orders m1, m2, ..., ms with arbitrary elements.
60. If AB = 0, where A and B are nonzero n-square matrices, then A and B are called divisors of zero. Show that the matrices A and B of Problem 21 are divisors of zero.
61. If A = diag(A1, A2, ..., As) and B = diag(B1, B2, ..., Bs), where Ai and Bi are of the same order (i = 1, 2, ..., s), show that
(a) A + B = diag(A1 + B1, A2 + B2, ..., As + Bs)
(b) AB = diag(A1B1, A2B2, ..., AsBs)
(c) trace AB = trace A1B1 + trace A2B2 + ... + trace AsBs.
62. Prove: If A and B are n-square skew-symmetric matrices, then AB is symmetric if and only if A and B commute.

63. Prove: If A is n-square and B = rA + sI, where r and s are scalars, then A and B commute.

64. Let A and B be n-square matrices and let r1, r2, s1, s2 be scalars such that r1s2 ≠ r2s1. Prove that C1 = r1A + s1B and C2 = r2A + s2B commute if and only if A and B commute.
65. Show that the n-square matrix A will not have an inverse when (a) A has a row (column) of zero elements, or (b) A has two identical rows (columns), or (c) A has a row (column) which is the sum of two other rows (columns).
66. If A and B are n-square matrices and A has an inverse, show that

(A + B)A⁻¹(A - B) = (A - B)A⁻¹(A + B)
chapter 3

Determinant of a Square Matrix

(3.2) 1324  2314  3214  4213

and a product

a1j1 a2j2 ... anjn

The sequence j1, j2, ..., jn of second subscripts is then some one of the n! permutations of the integers 1, 2, ..., n. (Facility will be gained if the reader will parallel the work of this section beginning with a product arranged so that the sequence of second subscripts is in natural order.)
For a given permutation j1, j2, ..., jn of the second subscripts, define εj1j2...jn = +1 or -1 according as the permutation is even or odd, and form the signed product

(3.5) εj1j2...jn a1j1 a2j2 ... anjn

By the determinant of A, denoted by |A|, is meant the sum of all the different signed products of the form (3.5), called terms of |A|, which can be formed from the elements of A; thus,

(3.6) |A| = Σ εj1j2...jn a1j1 a2j2 ... anjn

where the summation extends over p = n! permutations j1j2...jn of the integers 1, 2, ..., n.
CHAP. 3] DETERMINANT OF A SQUARE MATRIX
DETERMINANTS OF ORDER TWO AND THREE. From (3.6) we have for n = 2 and n = 3,

(3.7) |a11 a12; a21 a22| = a11 a22 - a12 a21

and

(3.8) |a11 a12 a13; a21 a22 a23; a31 a32 a33|
      = a11 a22 a33 + a12 a23 a31 + a13 a21 a32 - a13 a22 a31 - a11 a23 a32 - a12 a21 a33
      = a11(a22 a33 - a23 a32) - a12(a21 a33 - a23 a31) + a13(a21 a32 - a22 a31)
      = a11 |a22 a23; a32 a33| - a12 |a21 a23; a31 a33| + a13 |a21 a22; a31 a32|
Example 1.

(a) |1 2; 3 4| = 1(4) - 2(3) = 4 - 6 = -2

(b) |2 -1; 3 0| = 2(0) - (-1)(3) = 3

(c) |2 3 5; 1 0 1; 2 1 1| = 2|0 1; 1 1| - 3|1 1; 2 1| + 5|1 0; 2 1| = 2(-1) - 3(-1) + 5(1) = 6

(d) |2 -3 -4; 1 0 -2; 0 5 -6| = 2{0(-6) - (-2)(5)} - (-3){1(-6) - (-2)(0)} + (-4){1(5) - 0(0)} = 20 - 18 - 20 = -18

See Problem 1.
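Definition (3.6) can be implemented literally as a sum over permutations and checked against (3.7) and Example 1; a sketch in plain Python (the function name is ours):

```python
from itertools import permutations

def det(A):
    """Sum of the signed products of (3.6): epsilon * a_1j1 * a_2j2 * ... * a_njn."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # epsilon: +1 for an even permutation, -1 for an odd one,
        # found by counting inversions in the sequence of second subscripts.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for i in range(n):
            prod *= A[i][perm[i]]
        total += sign * prod
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3             # (3.7)
assert det([[2, -3, -4], [1, 0, -2], [0, 5, -6]]) == -18  # Example 1(d)
```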
PROPERTIES OF DETERMINANTS. Throughout this section, A is the square matrix whose determinant |A| is given by (3.6).
Suppose that every element of the ith row (every element of the jth column) is zero. Since every term of (3.6) contains one element from this row (column), every term in the sum is zero and we have
I. If every element of a row (column) of a square matrix A is zero, then |A| = 0.
Consider the transpose A' of A. It can be seen readily that every term of (3.6) can be obtained from A' by choosing properly the factors in order from the first, second, ... columns. Thus,

    II. If A is a square matrix, then |A'| = |A|; that is, for every theorem concerning the rows of a determinant there is a corresponding theorem concerning the columns, and vice versa.
Denote by B the matrix obtained by multiplying each of the elements of the ith row of A by a scalar k. Since each term in the expansion of |B| contains one and only one element from its ith row, that is, one and only one element having k as a factor,

    |B|  =  Σ ε_{j1 ... jn} a_{1 j1} ... (k a_{i ji}) ... a_{n jn}  =  k Σ ε_{j1 ... jn} a_{1 j1} ... a_{i ji} ... a_{n jn}  =  k|A|

Thus,

    III. If B is obtained from A by multiplying every element of a row (column) by a scalar k, then |B| = k|A|.
Let B denote the matrix obtained from A by interchanging its ith and (i+1)st rows. Each product in (3.6) of |A| is a product of |B|, and vice versa; hence, except possibly for signs, (3.6) is the expansion of |B|. In counting the inversions in subscripts of any term of (3.6) as a term of |B|, i before i+1 in the row subscripts is an inversion; thus, each product of (3.6) with its sign changed is a term of |B| and |B| = −|A|. Hence,
    IV. If B is obtained from A by interchanging any two adjacent rows (columns), then |B| = −|A|.

Since any interchange of two rows (columns) can be effected by an odd number of interchanges of adjacent rows (columns),

    V. If B is obtained from A by interchanging any two of its rows (columns), then |B| = −|A|.

    VI. If B is obtained from A by carrying its ith row (column) over p rows (columns), then |B| = (−1)^p |A|.

Suppose next that two rows of A are identical. Interchanging these rows leaves the matrix unchanged while, by Theorem V, the determinant changes sign; hence |A| = −|A| and

    VII. If two rows (columns) of A are identical, then |A| = 0.

Suppose that every element of the ith row of A is the sum of two terms, a_{i ji} = b_{i ji} + c_{i ji}. Then

    |A|  =  Σ ε a_{1 j1} ... (b_{i ji} + c_{i ji}) ... a_{n jn}  =  Σ ε a_{1 j1} ... b_{i ji} ... a_{n jn}  +  Σ ε a_{1 j1} ... c_{i ji} ... a_{n jn}
In general,
VIII. If every element of the ith row (column) of A is the sum of p terms, then \A\ can
be expressed as the sum of p determinants. The elements in the ith rows (columns) of these
p determinants are respectively the first, second, ..., pth terms of the sums and all other rows
(columns) are those of A.
    IX. If B is obtained from A by adding to the elements of its ith row (column) a scalar multiple of the corresponding elements of another row (column), then |B| = |A|. For example,

        | a11 a12 |   =   | a11 + k·a12   a12 |
        | a21 a22 |       | a21 + k·a22   a22 |
See Problems 2–7.
FIRST MINORS AND COFACTORS. Let A be the n-square matrix (3.3) whose determinant |A| is given by (3.6). When from A the elements of its ith row and jth column are removed, the determinant of the remaining (n−1)-square matrix is called a first minor of A or of |A| and denoted by |M_ij|. More frequently, it is called the minor of a_ij. The signed minor, (−1)^{i+j} |M_ij|, is called the cofactor of a_ij and is denoted by α_ij.

Thus, (3.8) is

    |A|  =  a11|M11| − a12|M12| + a13|M13|
         =  a11 α11 + a12 α12 + a13 α13
In Problem 9, we prove

    X. The value of the determinant |A|, where A is the matrix of (3.3), is the sum of the products obtained by multiplying each element of a row (column) of |A| by its cofactor, i.e.,

(3.9)    |A|  =  a_{i1}α_{i1} + a_{i2}α_{i2} + ... + a_{in}α_{in}  =  Σ_{k=1}^{n} a_{ik}α_{ik},   (i = 1, 2, ..., n)

and

(3.10)   |A|  =  a_{1j}α_{1j} + a_{2j}α_{2j} + ... + a_{nj}α_{nj}  =  Σ_{k=1}^{n} a_{kj}α_{kj},   (j = 1, 2, ..., n)

For the matrix of (3.8), for example,

    a31 α31 + a32 α32 + a33 α33  =  |A|     and     a12 α12 + a22 α22 + a32 α32  =  |A|
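Theorem X says that the expansion along any row (column) yields the same value. A short recursive sketch (our helper names; 0-based indices) makes the check mechanical:

```python
def minor(a, i, j):
    # delete row i and column j, leaving the matrix of |M_ij|
    return [row[:j] + row[j+1:] for r, row in enumerate(a) if r != i]

def det(a):
    # recursive expansion along the first row
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det(minor(a, 0, k)) for k in range(len(a)))

def cofactor(a, i, j):
    # alpha_ij = (-1)^(i+j) |M_ij|   (here i, j are 0-based)
    return (-1) ** (i + j) * det(minor(a, i, j))

A = [[1, 2, 3], [2, 3, 2], [1, 2, 2]]

# (3.9): expanding along ANY of the three rows gives the same value |A|
expansions = [sum(A[i][k] * cofactor(A, i, k) for k in range(3)) for i in range(3)]
assert expansions == [det(A)] * 3
```

The same loop with `A[k][j] * cofactor(A, k, j)` checks the column expansions (3.10).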
MINORS AND ALGEBRAIC COMPLEMENTS. Let A be the n-square matrix (3.3). When m rows, numbered i1, i2, ..., im, and m columns, numbered j1, j2, ..., jm, are selected from A, the elements common to them form the m-square submatrix

(3.11)   A^{i1, i2, ..., im}_{j1, j2, ..., jm}

while the elements of the remaining (n−m) rows, numbered i_{m+1}, ..., i_n, and (n−m) columns, numbered j_{m+1}, ..., j_n, form the (n−m)-square submatrix

(3.12)   A^{i_{m+1}, i_{m+2}, ..., i_n}_{j_{m+1}, j_{m+2}, ..., j_n}

called submatrices of A.

The determinant of each of these submatrices is called a minor of A, and the pair of minors

    |A^{i1, i2, ..., im}_{j1, j2, ..., jm}|     and     |A^{i_{m+1}, i_{m+2}, ..., i_n}_{j_{m+1}, j_{m+2}, ..., j_n}|

are called complementary minors of A, each being the complement of the other.
"I'Z %4 '^15
1,3 '^21 '^23 2,4,5
I
i
'^2,5
I
'
~ and '^l, 3.4 "32 ^^34 '^SS
I
I
f^Sl 63 I
"42 "44 ^^45
Let
(3.13) U + In + + in + h + h + + h
and
(3.14) q  in +1 + fm+2 + + *n + ^ra + i + /m+2 + " + In
p Ji, is. J
Jm'i'Jni2 Jn
7(41. 'to+2 ^n
J+i'Jm,+2 7n
and (l)'^ /I. . is called the algebraic complement of
''m,+ l>^m+2> ''
J1.72 Jm
H>h ^n
,2 4 5 ,
l(3 + 4^2l446 I .2,4,5 2 4 5 1
Of I
A.{3\i I
and (1) I
A^ 34 I
= I ^ijg',4 I
is
1.3
^^is I
Note that the sign given to the two complementary minors is the same. Is this
always true?
When m = 1, (3.11) becomes A^{i1}_{j1} = [a_{i1 j1}], an element of A. The complementary minor is |M_{i1 j1}| in the notation of the section above, and the algebraic complement is the cofactor α_{i1 j1}.

A minor of A whose diagonal elements are also diagonal elements of A is called a principal minor of A. The complement of a principal minor of A is also a principal minor of A; the algebraic complement of a principal minor is its complement.

The terms minor, complementary minor, algebraic complement, and principal minor as defined above for a square matrix A will also be used without change in connection with |A|.
SOLVED PROBLEMS
1 (") ! !l = 24  3(l) = 11
11 4
1 2
4 5l 3 5 3 4
(b) 3 4 5 (1)  (l)(47  56)  + 2(36  45)
6 71 5 7 5 6
5 6 7 24 = 6
1 6
4 1 3
2. Adding to the elements of the first column the corresponding elements of the other columns,

    | −4  1  1  1  1 |        | 0  1  1  1  1 |
    |  1 −4  1  1  1 |        | 0 −4  1  1  1 |
    |  1  1 −4  1  1 |   =    | 0  1 −4  1  1 |   =   0
    |  1  1  1 −4  1 |        | 0  1  1 −4  1 |
    |  1  1  1  1 −4 |        | 0  1  1  1 −4 |

by Theorem I.
3. Adding the second column to the third, removing the common factor from this third column, and using Theorem VII,

    | 1 a b+c |        | 1 a a+b+c |                   | 1 a 1 |
    | 1 b c+a |   =    | 1 b a+b+c |   =   (a+b+c) | 1 b 1 |   =   0
    | 1 c a+b |        | 1 c a+b+c |                   | 1 c 1 |
4. Adding to the third row the first and second rows, then removing the common factor 2; subtracting the second row from the third; subtracting the third row from the first; subtracting the first row from the second; finally, carrying the third row over the other rows.

5. Prove:

    |A|  =  | a1² a1 1 |   =   (a1 − a2)(a1 − a3)(a2 − a3)
            | a2² a2 1 |
            | a3² a3 1 |

Subtracting the second row from the first,

(i)    |A|  =  | a1²−a2²  a1−a2  0 |                    | a1+a2  1  0 |
               | a2²      a2     1 |   =   (a1 − a2) | a2²    a2 1 |      by Theorem III
               | a3²      a3     1 |                    | a3²    a3 1 |

and a1 − a2 is a factor of |A|. Similarly, a1 − a3 and a2 − a3 are factors. Now |A| is of order three in the letters; hence,

(ii)   |A|  =  k(a1 − a2)(a1 − a3)(a2 − a3)

The product of the diagonal elements, a1² a2, is a term of |A| and, from (ii), the term is k·a1² a2. Thus, k = 1 and |A| = (a1 − a2)(a1 − a3)(a2 − a3). Note that |A| vanishes if and only if two of a1, a2, a3 are equal.
8. For the matrix A = | 1 2 3 ; 2 3 2 ; 1 2 2 |, the cofactors are

    α11 = (−1)^{1+1} | 3 2 ; 2 2 | =  2      α12 = (−1)^{1+2} | 2 2 ; 1 2 | = −2      α13 = (−1)^{1+3} | 2 3 ; 1 2 | =  1

    α21 = (−1)^{2+1} | 2 3 ; 2 2 | =  2      α22 = (−1)^{2+2} | 1 3 ; 1 2 | = −1      α23 = (−1)^{2+3} | 1 2 ; 1 2 | =  0

    α31 = (−1)^{3+1} | 2 3 ; 3 2 | = −5      α32 = (−1)^{3+2} | 1 3 ; 2 2 | =  4      α33 = (−1)^{3+3} | 1 2 ; 2 3 | = −1

Note that the signs given to the minors of the elements in forming the cofactors follow the pattern

    + − +
    − + −
    + − +

where each sign occupies the same position in the display as the element, whose cofactor is required, occupies in A. Write the display of signs for a 5-square matrix.
9. Prove: The value of the determinant |A| of an n-square matrix A is the sum of the products obtained by multiplying each element of a row (column) of A by its cofactor.

We shall prove this for a row. The terms of (3.6) having a11 as a factor are

(a)    a11 Σ ε_{1 j2 j3 ... jn} a_{2 j2} a_{3 j3} ... a_{n jn}

where the summation extends over the σ = (n−1)! permutations j2 j3 ... jn of the integers 2, 3, ..., n, and hence may be written as

(b)    a11 | a22 a23 ... a2n |
           | ................ |
           | an2 an3 ... ann |

that is, as

(c)    a11 |M11|  =  a11 α11

Consider the matrix B obtained from A by moving its sth column over the first s−1 columns. By Theorem VI, |B| = (−1)^{s−1} |A|. Moreover, the element standing in the first row and first column of B is a_{1s}, and the minor of a_{1s} in B is precisely the minor |M_{1s}| of a_{1s} in A. By the argument leading to (c), the terms of a_{1s} |M_{1s}| are all the terms of |B| having a_{1s} as a factor and, thus, all the terms of (−1)^{s−1} |A| having a_{1s} as a factor. Then the terms of a_{1s} (−1)^{s−1} |M_{1s}| are all the terms of |A| having a_{1s} as a factor. Thus,

(3.15)    |A|  =  Σ_{k=1}^{n} a_{1k} (−1)^{1+k} |M_{1k}|  =  Σ_{k=1}^{n} a_{1k} α_{1k}

since (−1)^{s−1} = (−1)^{s+1}. We have (3.9) with i = 1. We shall call (3.15) the expansion of |A| along its first row.

To obtain the expansion of |A| along its rth row, we repeat the above argument. Let B be the matrix obtained from A by moving its rth row over the first r−1 rows and then its sth column over the first s−1 columns. Then

    |B|  =  (−1)^{r−1} (−1)^{s−1} |A|  =  (−1)^{r+s} |A|

The element standing in the first row and first column of B is a_{rs}, and the minor of a_{rs} in B is precisely the minor |M_{rs}| of a_{rs} in A. Thus, the terms of

    a_{rs} (−1)^{r+s} |M_{rs}|  =  a_{rs} α_{rs}

are all the terms of |A| having a_{rs} as a factor. Summing over the elements of the rth row,

    |A|  =  Σ_{k=1}^{n} a_{rk} α_{rk}

and we have (3.9) for i = r.
.
10. When oLij is the cofactor of aij in the rasquare matrix A = [a^j] , show that
(h,ji 02 *2n
i +
^
This relation follows from (3.10) by replacing a^j with k^, 027 with k^ 0^7 with Atj, In making these ,
^2J
replacements none of the cofactors OLij.QL^j. (t^j appearing is affected since none contains an element ,
By Theorem VII, the determinant in {i) is when A^= a^^, (r = 1,2 n and s ^ j). By Theorems Vin,
and VII, the determinant in (i) is I
4 
when Ay + fea^g, (r = 1,2 n and s ^ /).
\7
Write the eauality similar to (0 obtained from (3.9) when the elements of the ith row of A are replaced
by /ci,/c2.
11. Evaluate:

(a)   | 1 2 3 |        (b)   |  1 4 8 |        (e)   | 28 25 38 |
      | 3 0 4 |              |  2 1 5 |              | 42 38 65 |
      | 2 5 1 |              | −3 2 4 |              | 56 47 83 |

(a) Expanding along the second column,

    |A|  =  a12 α12 + a22 α22 + a32 α32  =  −2 | 3 4 |  +  0  −  5 | 1 3 |  =  −2(3 − 8) − 5(4 − 9)  =  10 + 25  =  35
                                               | 2 1 |             | 3 4 |

(b) Subtracting twice the second column from the third (see Theorem IX),

    |  1 4 8 |        |  1 4 8−2·4 |        |  1 4 0 |
    |  2 1 5 |   =    |  2 1 5−2·1 |   =    |  2 1 3 |   =   3(−1)^{2+3} |  1 4 |   =   −3(2 + 12)   =   −42
    | −3 2 4 |        | −3 2 4−2·2 |        | −3 2 0 |                   | −3 2 |
(c) Subtracting three times the second row from the first and adding twice the second row to the third
(d) Subtracting the first column from the second and then proceeding as in (c).
(e) Factoring 14 from the first column, then using Theorem IX to reduce the elements in the remaining columns,

    | 28 25 38 |          | 2 25 38 |          | 2  25−12(2)  38−20(2) |          | 2  1 −2 |
    | 42 38 65 |   =   14 | 3 38 65 |   =   14 | 3  38−12(3)  65−20(3) |   =   14 | 3  2  5 |   =   14(55)   =   770
    | 56 47 83 |          | 4 47 83 |          | 4  47−12(4)  83−20(4) |          | 4 −1  3 |
12. Show that p and q, given by (3.13) and (3.14), are either both even or both odd.

Since each row (column) index is found in either p or q but never in both,

    p + q  =  (1 + 2 + ... + n) + (1 + 2 + ... + n)  =  n(n + 1)

Now p + q is even (either n or n+1 is even); hence, p and q are either both even or both odd. Thus, (−1)^p = (−1)^q, and only one need be computed.
13. For the matrix A = [a_ij] =

    |  1  2  3  4  5 |
    |  6  7  8  9 10 |
    | 11 12 13 14 15 |
    | 16 17 18 19 20 |
    | 21 22 23 24 25 |

the algebraic complement of |A^{2,3}_{2,4}| is

    (−1)^{2+3+2+4} |A^{1,4,5}_{1,3,5}|   =   − |  1  3  5 |
                                               | 16 18 20 |      (see Problem 12)
                                               | 21 23 25 |
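The bookkeeping of complementary minors is easy to automate. The sketch below (our helper names; 1-based indices to match the text) rebuilds the Problem 13 example and confirms the result of Problem 12 that (−1)^p = (−1)^q:

```python
def submatrix(a, rows, cols):
    # the submatrix of A with the given 1-based row and column indices
    return [[a[i - 1][j - 1] for j in cols] for i in rows]

# the 5-square matrix of Problem 13, entries 1..25
A = [[5 * r + c + 1 for c in range(5)] for r in range(5)]

rows, cols = (2, 3), (2, 4)                               # the selected minor |A^{2,3}_{2,4}|
crows = tuple(i for i in range(1, 6) if i not in rows)    # complementary rows (1, 4, 5)
ccols = tuple(j for j in range(1, 6) if j not in cols)    # complementary columns (1, 3, 5)

p = sum(rows) + sum(cols)        # (3.13)
q = sum(crows) + sum(ccols)      # (3.14)

assert (-1) ** p == (-1) ** q    # Problem 12: the two signs always agree
assert submatrix(A, crows, ccols) == [[1, 3, 5], [16, 18, 20], [21, 23, 25]]
```

Here p = 11 and q = 19, so the algebraic complement carries the factor (−1)^{11} = −1, as in the text.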
SUPPLEMENTARY PROBLEMS
14. Show that the permutation 12534 of the integers 1, 2, 3. 4, 5 is even, 24135 is odd, 41532 is even, 53142 is
odd, and 52314 is even.
15. List the complete set of permutations of 1, 2,3,4, taken together; show that half are even and half are odd.
16. Let the elements of the diagonal of a 5-square matrix A be a, b, c, d, e. Show, using (3.6), that when A is diagonal, upper triangular, or lower triangular, then |A| = abcde.
17. Given 2-square matrices A and B, show by example that, in general, AB ≠ BA ≠ A'B ≠ AB' ≠ A'B' ≠ B'A', but that the determinant of each product is |A|·|B|.
2 1 1 2 22 2 3
(a) 3 2 4 = 27 12
(b) 3 = 4 (c) 2 4 =
1 3 2 3 4 3 4
19. (a) Evaluate

    |A|   =   | 1 2 10 |
              | 2 3  9 |
              | 4 5 11 |

(c) Denote by |C| the determinant obtained from |A| by interchanging its first and third columns. Evaluate |C| to verify Theorem V.

(d) Show that

    |A|   =   | 1 2 7 |     | 1 2 3 |
              | 2 3 5 |  +  | 2 3 4 |      thus verifying Theorem VIII.
              | 4 5 8 |     | 4 5 3 |

(e) Obtain from |A| the determinant

    |D|   =   | 1 2  7 |
              | 2 3  3 |
              | 4 5 −1 |

by subtracting three times the elements of the first column from the corresponding elements of the third column. Evaluate |D| to verify Theorem IX.

(f) In |A| subtract twice the first row from the second and four times the first row from the third. Evaluate the resulting determinant.

(g) In |A| multiply the first column by three and from it subtract the third column. Evaluate to show that |A| has been tripled. Compare with (e); do not confuse (e) and (g).
22. (a) Count the number of interchanges of adjacent rows (columns) necessary to obtain 6 from A in Theorem V
and thus prove the theorem.
(b) Same, for Theorem VI.
23. Prove Theorem VII. Hint: Interchange the identical rows and use Theorem V.
24. Prove: If any two rows (columns) of a square matrix A are proportional, then |A| = 0.
25. Use Theorems VIII, III, and VII to prove Theorem IX.
27. Use (3.6) to evaluate

    |A|   =   | a b 0 0 |
              | c d 0 0 |
              | 0 0 e f |
              | 0 0 g h |

then check that |A| = (ad − bc)(eh − fg). Thus, if A = diag(A1, A2), where A1, A2 are 2-square matrices, |A| = |A1|·|A2|.
29. Show that the cofactor of an element of any row of

    | −4 −3 −3 |
    |  1  0  1 |
    |  4  4  3 |

is the corresponding element of the same-numbered column.
be a a^
32. Multiply the columns of 6^ ca 52 respectively by a,b.c ; remove the common factor from each of
c^ c2 ab
be ab ca
the rows to show that A ab ca be
ca be ab
33. Without evaluating, show that

    | a² a 1 bcd |        | a³ a² a 1 |
    | b² b 1 acd |   =    | b³ b² b 1 |   =   (a − b)(a − c)(a − d)(b − c)(b − d)(c − d)
    | c² c 1 abd |        | c³ c² c 1 |
    | d² d 1 abc |        | d³ d² d 1 |
34. Prove:

    | 0 1 1 ... 1 |
    | 1 0 1 ... 1 |
    | 1 1 0 ... 1 |   =   (−1)^{n−1} (n − 1)
    | ........... |
    | 1 1 1 ... 0 |
35. Prove:

    | a1^{n−1}  a1^{n−2}  ...  a1  1 |
    | a2^{n−1}  a2^{n−2}  ...  a2  1 |   =   (a1−a2)(a1−a3)...(a1−an)(a2−a3)(a2−a4)...(a_{n−1}−a_n)
    | .............................. |
    | an^{n−1}  an^{n−2}  ...  an  1 |

the product of all differences (a_i − a_j) with i < j.
X a xb
37. Without expanding, show that the equation xia xc has as a root.
x+b x+c
38. Prove that the n-square determinant

    | a+b   a    a  ...   a  |
    |  a   a+b   a  ...   a  |
    | ...................... |   =   b^{n−1} (na + b)
    |  a    a    a  ...  a+b |
chapter 4
Evaluation of Determinants
PROCEDURES FOR EVALUATING determinants of orders two and three are found in Chapter 3. In Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element +1 or −1 if the given determinant contains no such element, (b) to replace an element of a given determinant with 0.
For determinants of higher orders, the general procedure is to replace, by repeated use of Theorem IX, Chapter 3, the given determinant |A| by another |B| = |b_ij| having the property that all elements, except one, in some row (column) are zero. If b_pq is this nonzero element and β_pq is its cofactor, then

    |A|  =  |B|  =  b_pq β_pq  =  (−1)^{p+q} b_pq |M_pq|

Then the minor |M_pq| of b_pq is treated in similar fashion and the process is continued until a determinant of order two or three is obtained.
Example 1. Adding twice the second row to the first and subtracting three times the second row from the third, then expanding along the third column,

    |  2  3 −2 4 |        |  8 −1 0  8 |
    |  3 −2  1 2 |   =    |  3 −2 1  2 |   =   (−1)^{2+3} (1) |  8 −1  8 |   =   −286
    |  3  2  3 4 |        | −6  8 0 −2 |                      | −6  8 −2 |
    | −2  4  0 5 |        | −2  4 0  5 |                      | −2  4  5 |

See Problems 1–3.
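The hand reduction of Example 1 is, in effect, triangularization. A small routine of our own, using exact rational arithmetic and only Theorems V and IX, evaluates a determinant of any order this way:

```python
from fractions import Fraction

def det_by_elimination(rows):
    # reduce to upper triangular form: row swaps flip the sign (Theorem V),
    # adding a multiple of one row to another changes nothing (Theorem IX)
    a = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(a), 1
    for c in range(n):
        p = next((r for r in range(c, n) if a[r][c]), None)
        if p is None:
            return Fraction(0)            # a zero column below the diagonal: |A| = 0
        if p != c:
            a[c], a[p] = a[p], a[c]
            sign = -sign                  # Theorem V
        for r in range(c + 1, n):
            m = a[r][c] / a[c][c]
            a[r] = [x - m * y for x, y in zip(a[r], a[c])]   # Theorem IX
    prod = Fraction(sign)
    for i in range(n):
        prod *= a[i][i]                   # product of diagonal elements
    return prod
```

Applied to the matrix of Example 1 it returns −286, agreeing with the hand computation; the cost grows only as n³, against n! for the defining sum (3.6).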
For determinants having elements of the type in Example 2 below, the following variation
may be used: divide the first row by one of its nonzero elements and proceed to obtain zero
elements in a row or column.
Example 2.
0.921 0.185 0.476 0.614 1 0.201 0.517 0.667 1 0.201 0.517 0.667
0.782 0.157 0.527 0.138 0.782 0.157 0.527 0.138 0.123 0.384
0.921 0.921
0.872 0.484 0.637 0.799 0.872 0.484 0.637 0.799 0.309 0.196 0.217
0.312 0.555 0.841 0.448 0.312 0.555 0.841 0.448 0.492 0.680 0.240
1
0.309 0.265
0.921(0.384) 0.309 0.265 0.217 0.921(0.384)
0.492 0.757
0.492 0.757 0.240
0.921(0.384)(0.104) = 0.037
THE LAPLACE EXPANSION. The expansion of |A| along a row (column) is the case m = 1 of the Laplace expansion: select m rows of A, numbered i1, i2, ..., im when arranged in order of magnitude; then

(4.1)    |A|  =  Σ (−1)^s |A^{i1, i2, ..., im}_{j1, j2, ..., jm}| · |A^{i_{m+1}, ..., i_n}_{j_{m+1}, ..., j_n}|

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm and the summation extends over the p selections of the column indices taken m at a time.
Example 3. Evaluate

    |A|   =   |  2  3 −2 4 |
              |  3 −2  1 2 |      using minors of the first two rows.
              |  3  2  3 4 |
              | −2  4  0 5 |

From (4.1),

    |A|  =  (−1)^{1+2+1+2} |A^{1,2}_{1,2}| |A^{3,4}_{3,4}|  +  (−1)^{1+2+1+3} |A^{1,2}_{1,3}| |A^{3,4}_{2,4}|  +  (−1)^{1+2+1+4} |A^{1,2}_{1,4}| |A^{3,4}_{2,3}|
          +  (−1)^{1+2+2+3} |A^{1,2}_{2,3}| |A^{3,4}_{1,4}|  +  (−1)^{1+2+2+4} |A^{1,2}_{2,4}| |A^{3,4}_{1,3}|  +  (−1)^{1+2+3+4} |A^{1,2}_{3,4}| |A^{3,4}_{1,2}|

          =  | 2  3 |·| 3 4 |  −  | 2 −2 |·| 2 4 |  +  | 2 4 |·| 2 3 |  +  |  3 −2 |·|  3 4 |  −  |  3 4 |·|  3 3 |  +  | −2 4 |·|  3 2 |
             | 3 −2 | | 0 5 |     | 3  1 | | 4 5 |     | 3 2 | | 4 0 |     | −2  1 | | −2 5 |     | −2 2 | | −2 0 |     |  1 2 | | −2 4 |

          =  (−13)(15) − (8)(−6) + (−8)(−12) + (−1)(23) − (14)(6) + (−8)(16)   =   −286
See Problems 4–6.
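The bookkeeping of Example 3 can be written out directly. The sketch below (our helper names, specialized to a 4-square matrix expanded by minors of its first two rows) implements (4.1) with m = 2:

```python
from itertools import combinations

def det2(m):
    # 2-square determinant, formula (3.7)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def laplace_first_two_rows(a):
    # (4.1) with m = 2 for a 4-square matrix: sum over column pairs (j1, j2) of
    # (-1)^(1+2+j1+j2) |A^{1,2}_{j1,j2}| * |A^{3,4}_{complementary columns}|
    n = len(a)
    total = 0
    for j1, j2 in combinations(range(n), 2):
        rest  = [j for j in range(n) if j not in (j1, j2)]
        minor = [[a[0][j1], a[0][j2]], [a[1][j1], a[1][j2]]]
        comp  = [[a[r][c] for c in rest] for r in (2, 3)]
        s = (1 + 2) + (j1 + 1) + (j2 + 1)     # index sums, converted to 1-based
        total += (-1) ** s * det2(minor) * det2(comp)
    return total
```

On the matrix of Example 3 this returns −286, the same value obtained in Example 1 by row reduction.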
THE DETERMINANT OF A PRODUCT. If A and B are n-square matrices, then

(4.2)    |AB|  =  |A|·|B|

See Problem 7.
EXPANSION ALONG THE FIRST ROW AND COLUMN. If A = [a_ij] is n-square, then

(4.3)    |A|  =  a11 α11  −  Σ_{i=2}^{n} Σ_{j=2}^{n} a_{i1} a_{1j} ρ_{ij}

where α11 is the cofactor of a11 and ρ_{ij} is the algebraic complement of the minor | a11 a1j ; ai1 aij | of |A|.
DERIVATIVE OF A DETERMINANT. Let the n-square matrix A = [a_ij] have as elements differentiable functions of a variable x. Then the derivative d|A|/dx of |A| with respect to x is the sum of n determinants, obtained by replacing in turn the elements of one row (column) of |A| by their derivatives with respect to x.
Example 4.
x^ x^\ 3 2x 1 x2 x + 1 3 x+ 1 3
X 2 X 2 X 2
5 + 4x  12x^ 6x
See Problem 8
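The derivative rule can be checked numerically on a 2-square example of our own: for A = [[x², x], [1, 2x]], |A| = 2x³ − x, so d|A|/dx = 6x² − 1, and the rule gives the same polynomial as the sum of two determinants:

```python
def det2(m):
    # 2-square determinant, formula (3.7)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

for x in (0, 1, 2, 3):
    direct = 6 * x**2 - 1                      # d/dx of |A| = 2x^3 - x
    # replace row 1, then row 2, by its derivative and add the two determinants
    by_rule = det2([[2 * x, 1], [1, 2 * x]]) + det2([[x**2, x], [0, 2]])
    assert by_rule == direct
```

Algebraically, the first determinant is 4x² − 1 and the second is 2x²; their sum is 6x² − 1, as required.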
SOLVED PROBLEMS
1.  |  2 3 −2  4 |        | 2       3       −2        4       |        |  2  3 −2 4 |
    |  7 4 −3 10 |   =    | 7−2(2)  4−2(3)  −3−2(−2)  10−2(4) |   =    |  3 −2  1 2 |   =   −286      (See Example 1.)
    |  3 2  3  4 |        | 3       2        3        4       |        |  3  2  3 4 |
    | −2 4  0  5 |        | −2      4        0        5       |        | −2  4  0 5 |
There are, of course, many other ways of obtaining an element +1 or 1; for example, subtract the first
column from the second, the fourth column from the second, the first row from the second, etc.
1 1 2 1 1 + 1 22(1) 10
2 3 2 2 2 3 2+2 22(2) 2 3 46
2 4 2 1 2 4 2 +2 12(2) 2 4 43
3 1 5 3 3 1 5 + 3 32(3) 3 189
3
4
1
4
43
8
6
9
=
32(4)
13(4)
4
42(4)
83(4)
43 62(3)
93(3)
5
4
11 4
4
43
5 4
3 = 72
11 4
1 + J 1+ 2j
3. Evaluate \A\ 1  i 23i
1 2i 2 + : i
Multiply the second row by 1+i and the third row by 1+2i; then
1 +i 1+2j l +j 1+2; 1 +J 1 + 2J
5 4 + 7i 1 4 + 7j 10 + 2J 1 4 + 7J 10 + 2i
1+i l + 2j
I 6 + 18
 14i 25  5j
and ,4
4. Derive the Laplace expansion of |A| = |a_ij| of order n, using minors of order m < n.

Consider the m-square minor |A^{i1,...,im}_{j1,...,jm}| of |A| in which the row and column indices are arranged in order of magnitude. Now by i1 − 1 interchanges of adjacent rows of A, the row numbered i1 can be brought into the first row; by i2 − 2 interchanges of adjacent rows, the row numbered i2 can be brought into the second row; ...; by im − m interchanges of adjacent rows, the row numbered im can be brought into the mth row. Thus, after

    (i1 − 1) + (i2 − 2) + ... + (im − m)   =   i1 + i2 + ... + im − ½m(m+1)

interchanges of adjacent rows, the rows numbered i1, i2, ..., im occupy the position of the first m rows. Similarly, after j1 + j2 + ... + jm − ½m(m+1) interchanges of adjacent columns, the columns numbered j1, j2, ..., jm occupy the position of the first m columns. As a result of the interchanges of adjacent rows and adjacent columns, the minor selected above occupies the upper left corner and its complement occupies the lower right corner of the determinant; moreover, |A| has changed sign

    σ   =   i1 + ... + im + j1 + ... + jm − m(m+1)

times, which is equivalent to s = i1 + ... + im + j1 + ... + jm changes. Thus,

    |A^{i1,...,im}_{j1,...,jm}| · |A^{i_{m+1},...,i_n}_{j_{m+1},...,j_n}|      yields m!(n−m)! terms of (−1)^s |A|

or

(a)    (−1)^s |A^{i1,...,im}_{j1,...,jm}| · |A^{i_{m+1},...,i_n}_{j_{m+1},...,j_n}|      yields m!(n−m)! terms of |A|.

Let i1, i2, ..., im be held fixed. From these rows,

    p   =   n(n−1)···(n−m+1) / m!

different m-square minors may be selected. Each of these minors, when multiplied by its algebraic complement, yields m!(n−m)! terms of |A|. Since, by their formation, there are no duplicate terms of |A| among these products,

(4.1)    |A|   =   Σ (−1)^s |A^{i1,...,im}_{j1,...,jm}| · |A^{i_{m+1},...,i_n}_{j_{m+1},...,j_n}|

where s = i1 + ... + im + j1 + ... + jm and the summation extends over the p different selections j1, j2, ..., jm of the column indices.
12 3 4
5. Evaluate A
2 12 1 using minors of the first two columns.
11
3 4 12
1 21 1 1 1 21 2 ll 2 11 13 4
(1)^ + (ir
2 2
+ (If
1 I ll 3 41 1 ll 3 41 ll 1
6. Prove: If P = | A 0 ; C B |, where A and B are n-square matrices, then |P| = |A|·|B|.

From the first n rows of |P| only one nonzero n-square minor, |A|, can be formed. Its algebraic complement is |B|. Hence, by the Laplace expansion, |P| = |A|·|B|.
7. Prove: |AB| = |A|·|B|.

Suppose A = [a_ij] and B = [b_ij] are n-square. Let C = [c_ij] = AB, so that c_ij = Σ_k a_ik b_kj. From Problem 6,

    |P|   =   | A  0 |   =   |A|·|B|
              | −I B |

written out,

    | a11 a12 ... a1n   0   0  ...  0  |
    | a21 a22 ... a2n   0   0  ...  0  |
    | ................................ |
    | an1 an2 ... ann   0   0  ...  0  |
    | −1   0  ...  0   b11 b12 ... b1n |
    |  0  −1  ...  0   b21 b22 ... b2n |
    | ................................ |
    |  0   0  ... −1   bn1 bn2 ... bnn |

To the (n+1)st column of |P| add b11 times the first column, b21 times the second column, ..., bn1 times the nth column; the upper part of that column becomes c11, c21, ..., cn1 and the lower part becomes zero. Next, to the (n+2)nd column of |P| add b12 times the first column, b22 times the second column, ..., bn2 times the nth column. Continuing this process, we obtain finally

    |P|   =   | A  C |
              | −I 0 |

From the last n rows of |P| only one nonzero n-square minor, |−I| = (−1)^n, can be formed. Its algebraic complement is

    (−1)^{(n+1)+(n+2)+...+2n+1+2+...+n} |C|   =   (−1)^{n(2n+1)} |C|   =   (−1)^n |C|

Hence, |P| = (−1)^n (−1)^n |C| = |C|, and |C| = |AB| = |A|·|B|.
8. Let A = | a11 a12 a13 ; a21 a22 a23 ; a31 a32 a33 |, where a_ij = a_ij(x), (i, j = 1, 2, 3), are differentiable functions of x. Then

    |A|  =  a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a11 a23 a32 − a12 a21 a33 − a13 a22 a31

and, differentiating each term by the product rule and regrouping,

    d|A|/dx   =   | a11′ a12′ a13′ |       | a11  a12  a13  |       | a11  a12  a13  |
                  | a21  a22  a23  |   +   | a21′ a22′ a23′ |   +   | a21  a22  a23  |
                  | a31  a32  a33  |       | a31  a32  a33  |       | a31′ a32′ a33′ |
SUPPLEMENTARY PROBLEMS
9. Evaluate:
3 5 7 2 1 2 4
3
2 4 11 2 1 4 3
304
(o) 156
2000 ic)
2 3 4 5
113 4 3 4 5 6
1 2 3 2 2
1116 2 1 1 3 2
{b)
2 4 16 = 41 (d) 1 1 2 1 1 118
4 12 9 4 3 2 5
1
2 4 2 7
3 2 2 2 2
11. Evaluate the determinant of Problem 9(o) using minors from the first two rows; also using minors from the
first two columns.
12. Use

    |  a1 a2 | · |  b1 b2 |   =   |AB|
    | −a2 a1 |   | −b2 b1 |

to show that (a1² + a2²)(b1² + b2²) = (a1 b1 − a2 b2)² + (a2 b1 + a1 b2)².
2 1
3 2 1
13. Evaluate using minors from the first three rows. Ans. 720
4 3 2 1
5 4 3 2 1
6 5 4 3 2 1
112 12 1
111
14. Evaluate 110 using minors from the first two columns. .4ns. 2
112
12 2 11
*1 *2 ^3 *4
16. Expand using minors of the first two rows and show that
a^ a^ flg a^
*1 ^2 ^3 *4
www.TheSolutionManual.com
a^ a2
K 62 60 6.. 62 63
A
17. Use the Laplace expansion to show that the n-square determinant | 0 A ; B C |, in which the diagonal zero block is k-square, is zero when 2k > n.
18. In |A| = a11 α11 + a12 α12 + a13 α13 + a14 α14, expand each of the cofactors α12, α13, α14 along its first column to show

    |A|   =   a11 α11  −  Σ_{i=2}^{4} Σ_{j=2}^{4} a_{i1} a_{1j} ρ_{ij}

thus verifying (4.3) for n = 4; here ρ_{ij} is the algebraic complement of the minor | a11 a1j ; ai1 aij |.
19. If α_ij denotes the cofactor of a_ij in the n-square matrix A = [a_ij], show that the bordered determinant

    | a11 a12 ... a1n p1 |
    | a21 a22 ... a2n p2 |
    | .................. |   =   − Σ_{i=1}^{n} Σ_{j=1}^{n} p_i q_j α_ij
    | an1 an2 ... ann pn |
    | q1  q2  ... qn  0  |
X I 2 a: 1 xl 1
() (b) x^ + 4 3
2*: 1 ;<: (c) a; a: 2a: +5
2x Zx + l
32 x^+l x+l x^
4ns. (a) 2a: + 9a:^ 8a;=^ , (6) 1  6a: + 21a:^ + 12a;^  15a:*, (c) 6a:^  5*"^  28x^ + 9a:^ + 20a;  2
21. Prove : If A and B are real nsquare matrices with A nonsingular and if ff = 4 + iS is Hermitian, then
chapter 5
Equivalence
THE RANK OF A MATRIX. A nonzero matrix A is said to have rank r if at least one of its r-square minors is different from zero while every (r+1)-square minor, if any, is zero. A zero matrix is said to have rank 0.

Example 1. The rank of A = | 1 2 3 |  is r = 2, since  | 1 2 |  ≠ 0  while  |A| = 0.
                           | 2 3 4 |                   | 2 3 |
                           | 3 5 7 |

See Problem 1.
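The definition of rank can be applied literally: search for the largest order of a nonvanishing square minor. The brute-force sketch below (our helper names) does exactly that, and reproduces Example 1:

```python
from itertools import combinations, permutations

def det(a):
    # determinant by the defining sum (3.6)
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

def rank_by_minors(a):
    # the largest r for which some r-square minor of A is nonzero
    m, n = len(a), len(a[0])
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if det([[a[i][j] for j in cols] for i in rows]):
                    return r
    return 0
```

For the matrix of Example 1, `rank_by_minors([[1, 2, 3], [2, 3, 4], [3, 5, 7]])` returns 2. This search is exponential in the matrix size; the reduction to an equivalent matrix B described below is the practical method.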
A square matrix A is called nonsingular if |A| ≠ 0 and singular if |A| = 0. From |AB| = |A|·|B| follows

    I. The product of two or more nonsingular n-square matrices is nonsingular; the product of two or more n-square matrices is singular if at least one of the matrices is singular.
THE ELEMENTARY TRANSFORMATIONS on a matrix are:

(1) The interchange of the ith and jth rows, denoted by H_ij;
    the interchange of the ith and jth columns, denoted by K_ij.

(2) The multiplication of every element of the ith row by a nonzero scalar k, denoted by H_i(k);
    the multiplication of every element of the ith column by a nonzero scalar k, denoted by K_i(k).

(3) The addition to the elements of the ith row of k, a scalar, times the corresponding elements of the jth row, denoted by H_ij(k);
    the addition to the elements of the ith column of k, a scalar, times the corresponding elements of the jth column, denoted by K_ij(k).

The transformations H are called elementary row transformations; the transformations K are called elementary column transformations.

The elementary transformations, being precisely those performed on the rows (columns) of a determinant, need no elaboration. It is clear that an elementary transformation cannot alter the order of a matrix. In Problem 2, it is shown that an elementary transformation does not alter its rank.
Example 2. Let A = | 1 2 3 ; 4 5 6 ; 7 8 9 |.

The effect of the elementary row transformation H21(−2) is to produce B = | 1 2 3 ; 2 1 0 ; 7 8 9 |. The effect of the elementary row transformation H21(+2) on B is to produce A again. Thus, H21(−2) and H21(+2) are inverse elementary row transformations.

The inverse elementary transformations are

    H_ij⁻¹ = H_ij             K_ij⁻¹ = K_ij
    H_i⁻¹(k) = H_i(1/k)       K_i⁻¹(k) = K_i(1/k)
    H_ij⁻¹(k) = H_ij(−k)      K_ij⁻¹(k) = K_ij(−k)

We have

    II. The inverse of an elementary transformation is an elementary transformation of the same type.
EQUIVALENT MATRICES. Two matrices A and B are called equivalent, A ~ B, if one can be obtained from the other by a sequence of elementary transformations. Equivalent matrices have the same order and the same rank.

Example 3. Applying in turn the row transformations H21(−2), H31(1), H32(−1),

    A   =   |  1  2 −1  4 |        | 1 2 −1  4 |        | 1 2 −1  4 |
            |  2  4  3  5 |   ~    | 0 0  5 −3 |   ~    | 0 0  5 −3 |   =   B
            | −1 −2  6 −7 |        | 0 0  5 −3 |        | 0 0  0  0 |

Since all 3-square minors of B are zero while | −1 4 ; 5 −3 | ≠ 0, the rank of B is 2; hence, the rank of A is 2. This procedure of obtaining from A an equivalent matrix B from which the rank is evident by inspection is to be compared with that of computing the various minors of A.

See Problem 3.
THE ROW EQUIVALENT CANONICAL MATRIX. By means of elementary row transformations alone, any nonzero matrix A of rank r can be reduced to a canonical matrix C in which:

(a) one or more elements of each of the first r rows are nonzero, while all other rows have only zero elements;
(b) in the ith row, (i = 1, 2, ..., r), the first nonzero element is 1; let the column in which this element stands be numbered j_i;
(c) j1 < j2 < ... < jr;
(d) the only nonzero element in the column numbered j_i, (i = 1, 2, ..., r), is the element 1 of the ith row.
CHAP. 5] EQUIVALENCE 41
To reduce A to C, let j1 be the number of the first nonzero column of A.

(i1) If a_{1 j1} ≠ 0, use H1(1/a_{1 j1}) to reduce it to 1, when necessary.

(i2) If a_{1 j1} = 0 but a_{p j1} ≠ 0, use H_{1p} and proceed as in (i1).

(ii) Use row transformations of type (3) with appropriate multiples of the first row to obtain zeroes elsewhere in the j1st column.

If nonzero elements of the resulting matrix B occur only in the first row, B = C. Otherwise, suppose j2 is the number of the first column in which this does not occur. If b_{2 j2} ≠ 0, use H2(1/b_{2 j2}) as in (i1); if b_{2 j2} = 0 but b_{q j2} ≠ 0, use H_{2q} and proceed as in (i2). Then, as in (ii), clear the j2nd column of all other nonzero elements.

If nonzero elements of the resulting matrix occur only in the first two rows, we have C. Otherwise, the procedure is repeated until C is reached.
Example 4. The sequence of row transformations H21(−2), H31(1); H2(1/5); H12(1), H32(−5) applied to A of Example 3 yields

    |  1  2 −1  4 |        | 1 2 −1  4 |        | 1 2 −1    4 |        | 1 2 0 17/5 |
    |  2  4  3  5 |   ~    | 0 0  5 −3 |   ~    | 0 0  1 −3/5 |   ~    | 0 0 1 −3/5 |   =   C
    | −1 −2  6 −7 |        | 0 0  5 −3 |        | 0 0  5   −3 |        | 0 0 0    0 |

See Problem 4.
THE NORMAL FORM OF A MATRIX. By means of elementary transformations, any matrix A of rank r > 0 can be reduced to one of the forms

(5.1)    I_r ,      [ I_r  0 ] ,      | I_r | ,      | I_r  0 |
                                      | 0   |        | 0    0 |

called its normal form. A zero matrix is its own normal form.

Since both row and column transformations may be used here, the element 1 of the first row obtained in the section above can be moved into the first column. Then both the first row and the first column can be cleared of other nonzero elements. Similarly, the element 1 of the second row can be brought into the second column, and so on.
See Problem 5.
ELEMENTARY MATRICES. The matrix which results when an elementary row (column) transformation is applied to the identity matrix I_n is called an elementary row (column) matrix. Here, an elementary matrix will be denoted by the symbol introduced to denote the elementary transformation which produces the matrix.

Example 5. Examples of elementary matrices obtained from I3 are

    H12 = K12 = | 0 1 0 |        H2(k) = K2(k) = | 1 0 0 |        H23(k) = K32(k) = | 1 0 0 |
                | 1 0 0 |                        | 0 k 0 |                          | 0 1 k |
                | 0 0 1 |                        | 0 0 1 |                          | 0 0 1 |
To effect a given elementary row transformation on A of order m×n, apply the transformation to I_m to form the corresponding elementary matrix H and multiply A on the left by H. To effect a given elementary column transformation on A, apply the transformation to I_n to form the corresponding elementary matrix K and multiply A on the right by K.

Example 6. When A = | 1 2 3 ; 4 5 6 ; 7 8 9 |,

    H13·A   =   | 0 0 1 | | 1 2 3 |        | 7 8 9 |
                | 0 1 0 | | 4 5 6 |   =    | 4 5 6 |      interchanges the first and third rows of A;
                | 1 0 0 | | 7 8 9 |        | 1 2 3 |

    A·K13(2)   =   | 1 2 3 | | 1 0 0 |        |  7 2 3 |
                   | 4 5 6 | | 0 1 0 |   =    | 16 5 6 |      adds to the first column of A two times
                   | 7 8 9 | | 2 0 1 |        | 25 8 9 |      the third column.
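The left/right multiplication convention of Example 6 is easy to verify by direct computation (matrix values as in the example; helper names are ours):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# H13: interchange rows 1 and 3 of I3; multiplying on the LEFT performs the row swap
H13 = identity(3)
H13[0], H13[2] = H13[2], H13[0]
assert matmul(H13, A) == [[7, 8, 9], [4, 5, 6], [1, 2, 3]]

# K13(2): add 2 times column 3 to column 1 of I3; multiplying on the RIGHT
# performs the same column operation on A
K = identity(3)
K[2][0] = 2
assert [row[0] for row in matmul(A, K)] == [7, 16, 25]
```

Note the asymmetry: row matrices act from the left, column matrices from the right; this is exactly the distinction drawn in the text.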
LET A AND B BE EQUIVALENT MATRICES. The elementary row and column matrices corresponding to the elementary row and column transformations which reduce A to B may be designated as H1, H2, ..., Hs; K1, K2, ..., Kt, where H1 is the first row transformation, H2 is the second, ...; K1 is the first column transformation, K2 is the second, .... Then

(5.2)    Hs ··· H2 H1 · A · K1 K2 ··· Kt   =   PAQ   =   B

where

(5.3)    P  =  Hs ··· H2 H1      and      Q  =  K1 K2 ··· Kt

We have

    III. Two matrices A and B are equivalent if and only if there exist nonsingular matrices P and Q defined in (5.3) such that PAQ = B.
"1 2 1 2~
Example 7. When A 2 523, ^3i(l) //2i(2) ^ ^2i(2) Ksid) .K4i(2) K^sd) .Ks(i)
_1 2 1
2J
1200 ~1 1 0~ 1
2" "1000" "1000"
["100"! r 1 o"j
02 10 10 10 10 1 10
1 1
10 10 1 10 5
[j^i ij L iJ
1 _0 1 1_ _0 1_ _0 1_
1254
o"l
10 1
[1
= PAQ 10
[:;=} 2
1 oj
1
    IV. If A is an n-square nonsingular matrix, there exist nonsingular matrices P and Q as defined in (5.3) such that PAQ = I_n.

See Problem 6.
We have proved

    V. Every nonsingular matrix can be expressed as a product of elementary matrices.

See Problem 7.

From this follow

    VI. If A is nonsingular, the rank of AB (also that of BA) is the rank of B.

    VII. If P and Q are nonsingular, the rank of PAQ is the rank of A.
RANK OF A PRODUCT. Let A be an m×p matrix of rank r. By Theorem III there exist nonsingular matrices P and Q such that

    PAQ   =   N   =   | I_r 0 |
                      | 0   0 |

Then A = P⁻¹NQ⁻¹ and

(5.6)    AB   =   P⁻¹NQ⁻¹B

By Theorem VI, the rank of AB is that of NQ⁻¹B. Now the rows of NQ⁻¹B consist of the first r rows of Q⁻¹B and m−r rows of zeroes. Hence, the rank of AB cannot exceed r, the rank of A. Similarly, the rank of AB cannot exceed that of B. We have proved

    IX. The rank of the product of two matrices cannot exceed the rank of either factor.
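Theorem IX can be checked numerically; the sketch below (our helpers, with the brute-force rank-by-minors from the start of the chapter) confirms it on one singular example:

```python
from itertools import combinations, permutations

def det(a):
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= a[i][p[i]]
        total += term
    return total

def rank(a):
    # largest r with a nonzero r-square minor
    m, n = len(a), len(a[0])
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if det([[a[i][j] for j in cols] for i in rows]):
                    return r
    return 0

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

A = [[1, 2], [2, 4], [3, 6]]     # 3x2, rank 1 (second column is twice the first)
B = [[1, 0], [0, 1]]             # 2-square, rank 2
assert rank(matmul(A, B)) <= min(rank(A), rank(B))   # Theorem IX
```

Since B here is nonsingular, Theorem VI says the bound is attained: rank(AB) equals rank(A) = 1.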
SOLVED PROBLEMS
1. (a) The rank of A = | 1 2 3 |  is 2, since  | 1 2 |  ≠ 0  and there are no minors of order three.
                       | 4 5 6 |               | 4 5 |

   (b) The rank of A = | 1 2 3 |  is 2, since  |A| = 0  while  | 2 3 |  ≠ 0.
                       | 1 2 5 |                               | 2 5 |
                       | 2 4 8 |

   (c) The rank of A = | 0 2 3 |  is 1, since  |A| = 0, each of the nine 2-square minors is 0, but not
                       | 0 4 6 |  every element is 0.
                       | 0 6 9 |
2. Show that the elementary transformations do not alter the rank of a matrix.

We shall consider only row transformations here and leave consideration of the column transformations as an exercise. Let the rank of the m×n matrix A be r, so that every (r+1)-square minor of A, if any, is zero. Let B be the matrix obtained from A by a row transformation. Denote by |R| any (r+1)-square minor of A and by |S| the (r+1)-square minor of B having the same position as |R|.

Let the row transformation be H_ij. Its effect on |R| is either (i) to leave it unchanged, (ii) to interchange two of its rows, or (iii) to interchange one of its rows with a row not of |R|. In the case (i), |S| = |R| = 0; in the case (ii), |S| = −|R| = 0; in the case (iii), |S| is, except possibly for sign, another (r+1)-square minor of A and, hence, is 0.

Let the row transformation be H_i(k). Its effect on |R| is either (i) to leave it unchanged or (ii) to multiply one of its rows by k. Then, respectively, |S| = |R| = 0 or |S| = k|R| = 0.

Let the row transformation be H_ij(k). Its effect on |R| is either (i) to leave it unchanged, (ii) to increase one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of |R|. In the cases (i) and (ii), |S| = |R| = 0; in the case (iii), |S| = |R| + k·(another (r+1)-square minor of A) = 0 + k·0 = 0.

Thus, an elementary row transformation cannot raise the rank of a matrix. On the other hand, it cannot lower the rank for, if it did, the inverse transformation would have to raise it. Hence, an elementary row transformation does not alter the rank of a matrix.
For each of the matrices A obtain an equivalent matrix B and from it, by inspection, determine the
rank of A.
"1 3" "1 3~ '1 3"
2 1 2 3 2 2
"^
(a) A = 2 1 3
'^/
3 3 1 1
""J
1 1
_3 2 1_ 4 8 _0 1 2_ _0 1_
The transformations used were //2i(2). ffsi(3); H^(l/3), HQ(l/i); Hg^il). The rank is 3.
1+i i 1 1
'~^
(c) A i + 2j i 1 + 2j
'Vy
i 1 + 2J = B. The rank is 2.
1 + 2j l+i_ _1
i 1 + 2J_
Note. The equivalent matrices B obtained here are not unique. In particular, since in (a) and (b) only row transformations were used, the reader may obtain others by using only column transformations. When the elements are rational numbers, there generally is no gain in mixing row and column transformations.
4. Obtain the canonical matrix C row equivalent to each of the given matrices A.
13 113 113 2 10 4
12 6 12 6 132 132
(a) A = '>'
2 3 9 2 3 9 132
113 13 13 2_ GOOD
1 2 2 3 f 1 2 2 3 l" 1 2 3 3" '10 3 7~
10 1
(b) A =
1 3 2 3 v^
1 1 '^ 1 1 ^\y
10 1 O
10 01
2 4 3 6 4 1 2 1 2 10 2 10 2
.1 1 1 4 6_ p 1 1 1 5_ 1 1 4_ pool 2_ 1 2
(a) A 3 4 1 2
^\y
2 1 5 02 1 5 p 12 5 12 10 'V.
10
2 3 2 5 7 2 3 7 2 sj Lo 2 7 3_ 11 P U 7_ p 1 0_
= Us o]
^2i(3). 3i{2); K2i(2), K4i(l); Kgg; Hg^^y. ^32(2). /f42(~5); /fsd/ll), ^43(7)
4. p 1 0_ p 0_
^12: Ki(2); H3i(2); KsiCS), X3i(5), K4i(4); KsCi); /fs2(3), K42(4); ftjgC 1)
6. Reduce A = | 1 2 3 −2 ; 2 −2 1 3 ; 3 0 4 1 | to normal form N and compute the matrices P1 and Q1 such that P1·A·Q1 = N.

Since A is 3×4, we shall work with the array

    | A   I3 |
    | I4     |

Each row transformation is performed on a row of seven elements and each column transformation is performed on a column of seven elements.
1 3 2 1 2 3 2
10 1
1 1 1
1 1
12 3 1 2 32 1 1
22 1 10 6 5 7 1 6 5 7 2 5 7 2
3 4 1 6 5 7 1 6 5 7 3 011
1 1/3 3 2 1 1/3 4/3 1/3
1/6 01/6 5/6 7/6
10
1
10
or
1
157210
10 1
N Pi
1 0210
0111 011 1
1
10 and PiAQi = 10 N.
7. Express A = | 1 3 3 ; 1 4 3 ; 1 3 4 | as a product of elementary matrices.

The elementary transformations H21(−1), H31(−1); K21(−3), K31(−3) reduce A to I3; that is [see (5.2)],

    I   =   H2 H1 · A · K1 K2   =   H31(−1) H21(−1) · A · K21(−3) K31(−3)

Then, using the inverse transformations in reverse order,

    A   =   H21(1) H31(1) K31(3) K21(3)   =   | 1 0 0 | | 1 0 0 | | 1 0 3 | | 1 3 0 |
                                              | 1 1 0 | | 0 1 0 | | 0 1 0 | | 0 1 0 |
                                              | 0 0 1 | | 1 0 1 | | 0 0 1 | | 0 0 1 |
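A factorization into elementary matrices, such as the one obtained in Problem 7 for A = | 1 3 3 ; 1 4 3 ; 1 3 4 |, can be checked by direct multiplication (helper name is ours):

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

# the four elementary matrices of the factorization
H21_1 = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]   # H21(1): add row 1 to row 2 of I3
H31_1 = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]   # H31(1): add row 1 to row 3 of I3
K31_3 = [[1, 0, 3], [0, 1, 0], [0, 0, 1]]   # K31(3): add 3 * column 1 to column 3 of I3
K21_3 = [[1, 3, 0], [0, 1, 0], [0, 0, 1]]   # K21(3): add 3 * column 1 to column 2 of I3

product = matmul(matmul(matmul(H21_1, H31_1), K31_3), K21_3)
assert product == [[1, 3, 3], [1, 4, 3], [1, 3, 4]]   # recovers A
```

Each factor is nonsingular (determinant 1), which illustrates Theorem V: a nonsingular matrix is a product of elementary matrices.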
8. Prove: Two mxn matrices A and B are equivalent if and only if they have the same rank.
If A and B have the same rank, both are equivalent to the same matrix (5.1) and hence are equivalent to each other. Conversely, if A and B are equivalent, there exist nonsingular matrices P and Q such that B = PAQ. By Theorem VII, A and B have the same rank.
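The rank test in Problem 8 is easy to check mechanically. The following Python sketch (ours, not the text's) computes the rank of a matrix by Gaussian elimination over the rationals and confirms that matrices of equal rank are equivalent to the same normal form while a matrix of different rank is not:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of row lists) via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        # find a pivot in column c at or below row r
        p = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Two 2x3 matrices of rank 2 are equivalent; the rank-1 matrix below is not equivalent to them.
A = [[1, 2, 3], [0, 1, 1]]
B = [[1, 0, 0], [0, 1, 0]]
C = [[1, 2, 3], [2, 4, 6]]
print(rank(A), rank(B), rank(C))  # 2 2 1
```

Exact rational arithmetic is used so that no pivot is lost to rounding, matching the book's assumption of elements in a field.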
9. A canonical set for nonzero 3x4 matrices is
   [1 0 0 0; 0 0 0 0; 0 0 0 0],  [1 0 0 0; 0 1 0 0; 0 0 0 0],  [1 0 0 0; 0 1 0 0; 0 0 1 0]
one matrix for each of the ranks 1, 2, 3.
10. If from a square matrix A of order n and rank r_A a submatrix B consisting of s rows (columns) of A is selected, the rank r_B of B is equal to or greater than r_A + s - n.
The normal form of A has n - r_A rows whose elements are zeroes and the normal form of B has s - r_B rows whose elements are zeroes. Clearly s - r_B <= n - r_A, from which r_B >= r_A + s - n follows as required.
SUPPLEMENTARY PROBLEMS
12. Show by considering minors that A, A', Ā, and Ā' have the same rank.
13. Show that the canonical matrix C, row equivalent to a given matrix A, is uniquely determined by A.
14. Find the canonical matrix row equivalent to each of the following:
(a) [1 2 3; 2 5 4] ~ [1 0 7; 0 1 -2], and similarly for the remaining matrices (b)-(e).
15. Write the normal form of each of the matrices of Problem 14.
16. Let A = [1 2 3 4; 2 3 4 1; 3 4 1 2].
(a) From I3 form H12, H2(3), H13(-4) and check that each HA effects the corresponding row transformation.
(b) From I4 form K12, K3(-1), K14(3) and show that each AK effects the corresponding column transformation.
(c) Write the inverses H12^(-1), H2^(-1)(3), H13^(-1)(-4) of the elementary matrices of (a). Check that for each H, H·H^(-1) = I.
(d) Write the inverses K12^(-1), K3^(-1)(-1), K14^(-1)(3) of the elementary matrices of (b). Check that for each K, K·K^(-1) = I.
(e) Compute B = H12·H2(3)·H13(-4) and C = H13(4)·H2(1/3)·H12.
(f) Show that BC = CB = I.
17. (a) Show that K'ij = Hij, [Ki(k)]' = Hi(k), and [Kij(k)]' = Hji(k).
(b) Show that if R is a product of elementary column matrices, then R' is the product in reverse order of the corresponding elementary row matrices.
18. Prove: (a) AB and BA are nonsingular if A and B are nonsingular n-square matrices.
(b) AB and BA are singular if at least one of the n-square matrices A and B is singular.
19. If P and Q are nonsingular, show that A, PA, AQ, and PAQ have the same rank.
Hint. Express P and Q as products of elementary matrices.
20. Reduce B = [1 3 6 -1; 1 4 5 1; 1 5 4 3] to normal form N and compute the matrices P2 and Q2 such that P2·B·Q2 = N.
21. (a) Show that the number of matrices in a canonical set of n-square matrices under equivalence is n+1.
(b) Show that the number of matrices in a canonical set of mxn matrices under equivalence is the smaller of m+1 and n+1.
22. Given A = [1 2 4 4; 1 3 2 6; 2 5 6 10] of rank 2, find a 4-square matrix B ≠ 0 such that AB = 0.
Hint. Follow the proof of Theorem X, taking the last two rows of the factor as [a b c d] and [e f g h], arbitrary.
23. The matrix A of Problem 6 and the matrix B of Problem 20 are equivalent. Find P and Q such that B = PAQ.
24. If the mxn matrices A and B are of rank r1 and r2 respectively, show that the rank of A+B cannot exceed r1 + r2.
25. Let A be an arbitrary n-square matrix and B be an n-square elementary matrix. By considering each of the six different types of matrix B, show that |AB| = |A|·|B|.
26. Let A and B be n-square matrices. (a) If at least one is singular, show that |AB| = |A|·|B|. (b) If both are nonsingular, use (5.5) and Problem 25 to show that |AB| = |A|·|B|.
28. Prove: The row equivalent canonical form of a nonsingular matrix A is I, and conversely.
29. Prove: Not every matrix A can be reduced to normal form by row transformations alone.
Hint. Exhibit a matrix which cannot be so reduced.
30. Show how to effect on any matrix A the transformation Hij by using a succession of row transformations of types (2) and (3).
31. Prove: If A is an mxn matrix (m <= n) of rank m, then AA' is a nonsingular symmetric matrix. State the theorem when the rank of A is less than m.
chapter 6
The Adjoint of a Square Matrix

THE ADJOINT. Let A = [a_ij] be an n-square matrix and α_ij be the cofactor of a_ij; then by definition
(6.1)  adj A = [α11 α21 ... αn1; α12 α22 ... αn2; ...; α1n α2n ... αnn]
Note carefully that the cofactors of the elements of the ith row (column) of A are the elements of the ith column (row) of adj A.

Example 1. For the matrix A = [1 2 3; 2 3 2; 3 3 4],
   α11 = 6, α12 = -2, α13 = -3, α21 = 1, α22 = -5, α23 = 3, α31 = -5, α32 = 4, α33 = -1
and
   adj A = [6 1 -5; -2 -5 4; -3 3 -1]
See Problems 1-2.

Since the sum of the products of the elements of a row (column) of A by their cofactors is |A|, while the sum of the products of the elements of a row (column) by the cofactors of another row (column) is 0,
(6.2)  A·(adj A) = (adj A)·A = |A|·I_n
There follow
(6.3)  |A|·|adj A| = |A|^n
and, when A is nonsingular,
(6.4)  |adj A| = |A|^(n-1)
If A is of rank < n-1, then adj A = 0. If A is of rank n-1, then adj A is of rank 1.
See Problem 3.
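The definition (6.1) is easy to put to the test numerically. The short Python sketch below (our own illustration; the helper names `minor`, `det`, `adjoint` are not from the text) builds the adjoint from cofactors, transposing them as (6.1) requires, and can be used to check an example such as the 3-square matrix above:

```python
def minor(m, i, j):
    """Submatrix of m with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def adjoint(m):
    """adj A: entry (i, j) is the cofactor of a_ji (the cofactor matrix, transposed)."""
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, j, i)) for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]
print(adjoint(A))  # [[6, 1, -5], [-2, -5, 4], [-3, 3, -1]]
```

One may also confirm the identity (6.2), A·(adj A) = |A|·I, directly from the computed values.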
MINOR OF AN ADJOINT. In Problem 6, we prove:
IV. Let A(i1, i2, ..., im | j1, j2, ..., jm) denote the m-square minor of the n-square matrix A = [a_ij] standing in the rows numbered i1, i2, ..., im and the columns numbered j1, j2, ..., jm; let A(i(m+1), ..., in | j(m+1), ..., jn) be its complement in A; and let M denote the m-square minor of adj A whose elements occupy the same position in adj A as those of A(i1, i2, ..., im | j1, j2, ..., jm) occupy in A. Then
(6.6)  M = (-1)^s · |A|^(m-1) · [algebraic complement of A(i1, i2, ..., im | j1, j2, ..., jm)]
where s = i1 + i2 + ··· + im + j1 + j2 + ··· + jm.
SOLVED PROBLEMS
a h '^ll 0^21 d b
1. The adjoint of i4 = IS I _ I
c d c a
3 4 2 3 2 3
4 3 4 3l 3 4
7 6 1
2. The adjoint of A = 1 4 1 3 1 3
IS 1
[;] 1 3 1 3 1 4
1
1
2 1
1 3 1 2 1 2
1 4 1 4 1 3
3. Prove: If A is of order n and rank n-1, then adj A is of rank 1.
Since A is of rank n-1, |A| = 0 and, by (6.2), A·(adj A) = |A|·I = 0; hence the rank of adj A is at most n - (n-1) = 1. Since A has at least one nonzero (n-1)-square minor, adj A ≠ 0, and its rank is exactly 1.

6. Prove Theorem IV.
Form the product of A by the n-square matrix obtained from adj A by replacing the columns not occupied by M with the corresponding columns of I_n. Evaluating the determinant of this product in two ways yields |A|·M on the one hand and, on the other, |A|^m times the algebraic complement of A(i1, i2, ..., im | j1, j2, ..., jm), apart from the sign (-1)^s; from this (6.6) follows.
7. Prove: If A is skew-symmetric of order 2n, then |A| is the square of a polynomial in the elements of A.
By its definition, |A| is a polynomial in its elements; we are to show that under the conditions given above this polynomial is a perfect square.
The theorem is true for n = 1 since, when A = [0 a; -a 0], |A| = a².
Assume now that the theorem is true when n = k and consider the skew-symmetric matrix A = [a_ij] of order 2k+2. By partitioning, write A = [B C; -C' D], where B is skew-symmetric of order 2k; by assumption, |B| = f², where f is a polynomial in the elements of B.
If α_ij denotes the cofactor of a_ij in |B|, we have, by Problem 6, Chapter 3, and (6.8), that |A| reduces to a perfect square.
SUPPLEMENTARY PROBLEMS
8. Compute the adjoint of each of the given matrices.

9. Verify:
(a) The adjoint of a scalar matrix is a scalar matrix.
(b) The adjoint of a diagonal matrix is a diagonal matrix.
(c) The adjoint of a triangular matrix is a triangular matrix.

12. Show that the adjoint of A = [1 2 2; 2 1 -2; 2 -2 1] is -3A' and that the adjoint of A = [-4 -3 -3; 1 0 1; 4 4 3] is A itself.
19. Prove: If A is an n-square matrix of rank n or n-1 and if Hs···H2·H1·A·K1·K2···Kt = N, where N is the normal form of A, then
adj A = adj K1 · adj K2 ··· adj Kt · adj N · adj Hs ··· adj H2 · adj H1
20. Compute, as in Problem 19, the adjoint of (a) the matrix A of Problem 7, Chapter 5, and (b) the given 4-square matrix.
21. Let A = [a_ij] and B = [k·a_ij] be 3-square matrices. If S(C) denotes the sum of the elements of matrix C, show that
23. Let A_n = [a_ij], (i, j = 1, 2, ..., n), be the lower triangular matrix whose triangle is the Pascal triangle; for example,
   A_4 = [1 0 0 0; 1 1 0 0; 1 2 1 0; 1 3 3 1]
Define b_ij = (-1)^(i+j)·a_ij and verify for n = 2, 3, 4 that B = [b_ij] = adj A_n.
24. Let B be obtained from A by deleting its ith and pth rows and jth and qth columns. Show that
   |A|·|B| = ± |α_ij α_pj; α_iq α_pq|
where α_ij is the cofactor of a_ij in |A|.
chapter 7

IF A AND B are n-square matrices such that AB = BA = I, B is called the inverse of A (B = A^(-1)) and A is called the inverse of B (A = B^(-1)).
In Problem 1, we prove:
I. An n-square matrix A has an inverse if and only if it is nonsingular.
When A is nonsingular, its inverse is given by A^(-1) = (1/|A|)·adj A, by (6.2).

Example 1. From Problem 2, Chapter 6, the adjoint of A = [1 2 3; 1 3 4; 1 4 3] is [-7 6 -1; 1 0 -1; 1 -2 1]. Since |A| = -2,
   A^(-1) = (1/|A|)·adj A = [7/2 -3 1/2; -1/2 0 1/2; -1/2 1 -1/2]
See Problem 2.
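The adjoint formula for the inverse of a 3-square matrix can be sketched in a few lines of Python (our illustration; the name `inverse3` is not from the text), using exact rational arithmetic so that entries like 7/2 come out exactly:

```python
from fractions import Fraction

def inverse3(m):
    """Inverse of a 3x3 matrix as adj(A)/|A|, exact over the rationals."""
    def minor_det(i, j):
        # determinant of the 2x2 submatrix with row i and column j deleted
        rows = [r for k, r in enumerate(m) if k != i]
        sub = [[x for l, x in enumerate(r) if l != j] for r in rows]
        return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
    d = sum((-1) ** j * m[0][j] * minor_det(0, j) for j in range(3))
    if d == 0:
        raise ValueError("singular matrix has no inverse")
    # adj(A)[i][j] is the cofactor of a_ji (cofactors transposed)
    return [[Fraction((-1) ** (i + j) * minor_det(j, i), d) for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [1, 3, 4], [1, 4, 3]]
for row in inverse3(A):
    print(row)
```

Applied to the matrix of Example 1, the first row comes out as 7/2, -3, 1/2, agreeing with the computation in the text.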
56 THE INVERSE OF A MATRIX [CHAP. 7
INVERSE FROM ELEMENTARY MATRICES. Let the nonsingular n-square matrix A be reduced to I by elementary transformations so that
(7.1)  Hs···H2·H1·A·K1·K2···Kt = I
Then
(7.2)  A^(-1) = K1·K2···Kt·Hs···H2·H1

Example 2. For A = [1 3 3; 1 4 3; 1 3 4], the transformations H21(-1), H31(-1); K21(-3), K31(-3) of Problem 7, Chapter 5, reduce A to I. Then
   A^(-1) = K21(-3)·K31(-3)·H31(-1)·H21(-1) = [7 -3 -3; -1 1 0; -1 0 1]
In Chapter 5 it was shown that a nonsingular matrix can be reduced to normal form by row transformations alone. Then, from (7.2) with the K's equal to I, we have
(7.3)  A^(-1) = Hs···H2·H1
That is, the sequence of row transformations which carries A into I carries I into A^(-1).

Example 3. Find the inverse of A = [1 3 3; 1 4 3; 1 3 4] of Example 2 using only row transformations to reduce A to I.
Write the matrix [A I3] and perform, on its rows of six elements, the sequence of row transformations which carries A into I3:
   [A I3] = [1 3 3 1 0 0; 1 4 3 0 1 0; 1 3 4 0 0 1]
          ~ [1 3 3 1 0 0; 0 1 0 -1 1 0; 0 0 1 -1 0 1]
          ~ [1 0 0 7 -3 -3; 0 1 0 -1 1 0; 0 0 1 -1 0 1] = [I3 A^(-1)]
Thus, as A is reduced to I3, I3 is carried into A^(-1) = [7 -3 -3; -1 1 0; -1 0 1], by (7.3).
See Problem 3.
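The [A I] → [I A^(-1)] procedure of Example 3 is mechanical enough to automate. A minimal Python sketch (ours; it assumes A is nonsingular, as the method requires):

```python
from fractions import Fraction

def inverse_by_row_ops(a):
    """Carry [A | I] into [I | A^-1] by elementary row transformations."""
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(1) if j == i else Fraction(0) for j in range(n)]
         for i, row in enumerate(a)]
    for c in range(n):
        p = next(i for i in range(c, n) if m[i][c] != 0)   # pivot search; assumes A nonsingular
        m[c], m[p] = m[p], m[c]                            # type (1): interchange rows
        m[c] = [x / m[c][c] for x in m[c]]                 # type (2): make the pivot 1
        for i in range(n):
            if i != c and m[i][c] != 0:                    # type (3): clear the rest of the column
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[c])]
    return [row[n:] for row in m]

A = [[1, 3, 3], [1, 4, 3], [1, 3, 4]]
print(inverse_by_row_ops(A))  # entries of A^-1 = [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]]
```

Run on the matrix of Example 3, it reproduces the inverse found in the text.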
INVERSE BY PARTITIONING. Let the matrix A = [a_ij] of order n and its inverse B = [b_ij] be partitioned into submatrices of indicated orders:
   A = [A11 A12; A21 A22],  B = [B11 B12; B21 B22]
where A11 and B11 are of order p, and A22 and B22 of order q, p + q = n. Setting ξ = A22 - A21·(A11^(-1)·A12),
(7.5)  B11 = A11^(-1) + (A11^(-1)·A12)·ξ^(-1)·(A21·A11^(-1)),  B12 = -(A11^(-1)·A12)·ξ^(-1),
       B21 = -ξ^(-1)·(A21·A11^(-1)),  B22 = ξ^(-1)
In practice, A11 is usually taken of order n-1. To obtain the inverse of a matrix of large order, the following procedure is used. Let
   G2 = [a11 a12; a21 a22],  G3 = [a11 a12 a13; a21 a22 a23; a31 a32 a33],  G4 = ...,
each G being obtained by bordering the one before. After computing G2^(-1), partition G3 so that A22 = [a33] and use (7.5) to obtain G3^(-1). Repeat the process on G4, after partitioning it so that A22 = [a44], and so on.

Example 4. Find the inverse of A = [1 3 3; 1 4 3; 1 3 4] using partitioning.
Take A11 = [1 3; 1 4], A12 = [3; 3], A21 = [1 3], and A22 = [4]. Now
   A11^(-1) = [4 -3; -1 1],  A11^(-1)·A12 = [3; 0],  A21·A11^(-1) = [1 0],
   ξ = A22 - A21·(A11^(-1)·A12) = [4] - [3] = [1],  ξ^(-1) = [1]
Then
   B11 = A11^(-1) + (A11^(-1)·A12)·ξ^(-1)·(A21·A11^(-1)) = [7 -3; -1 1],  B12 = [-3; 0],  B21 = [-1 0],  B22 = [1]
and
   A^(-1) = [B11 B12; B21 B22] = [7 -3 -3; -1 1 0; -1 0 1]
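The partitioning formulas (7.5) can be traced step by step in Python. The sketch below (our own; `mul` and `inv2` are illustrative helpers, and the partition follows Example 4 with A11 of order 2 and A22 of order 1):

```python
from fractions import Fraction

def mul(a, b):
    """Matrix product of nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def inv2(a):
    """Inverse of a nonsingular 2x2 matrix."""
    (p, q), (r, s) = a
    d = Fraction(p * s - q * r)
    return [[s / d, -q / d], [-r / d, p / d]]

# Partition A = [1 3 3; 1 4 3; 1 3 4] as in Example 4.
A11, A12, A21, A22 = [[1, 3], [1, 4]], [[3], [3]], [[1, 3]], [[4]]

A11i = inv2(A11)                    # A11^-1
U = mul(A11i, A12)                  # A11^-1 * A12
V = mul(A21, A11i)                  # A21 * A11^-1
xi = A22[0][0] - mul(A21, U)[0][0]  # xi = A22 - A21 * (A11^-1 * A12), here a 1x1 block
B22 = [[1 / Fraction(xi)]]
B12 = [[-u[0] * B22[0][0]] for u in U]
B21 = [[-B22[0][0] * v for v in V[0]]]
corr = mul(U, mul(B22, V))          # (A11^-1 A12) * xi^-1 * (A21 A11^-1)
B11 = [[A11i[i][j] + corr[i][j] for j in range(2)] for i in range(2)]

B = [B11[0] + B12[0], B11[1] + B12[1], B21[0] + B22[0]]
print(B)  # assembled inverse: [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]]
```

The assembled B agrees with the inverse found in Examples 3 and 4; the quantity ξ is what later literature calls a Schur complement.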
THE INVERSE OF A SYMMETRIC MATRIX. When A is symmetric, α_ij = α_ji and only ½n(n+1) cofactors need be computed, instead of the usual n², in obtaining A^(-1) from adj A.
If A^(-1) is to be found from a sequence of elementary transformations, the symmetry can be preserved by pairing each row transformation with the corresponding column transformation: Hij with Kij, Hi(k) with Ki(k), Hij(k) with Kij(k). However, when a diagonal element a is to be replaced by 1, the pair of transformations is Hi(1/√a) and Ki(1/√a); in general, √a is either irrational or imaginary, and hence this procedure is not recommended.
The maximum gain occurs when the method of partitioning is used, since for symmetric A (so that A21 = A12') the formulas (7.5) reduce to
(7.6)  B11 = A11^(-1) + (A11^(-1)·A12)·ξ^(-1)·(A11^(-1)·A12)',  B12 = -(A11^(-1)·A12)·ξ^(-1),  B21 = B12',  B22 = ξ^(-1)
When A is not symmetric, the above procedure may be used to find the inverse of A'A, which is symmetric, and then the inverse of A is found by
(7.7)  A^(-1) = (A'A)^(-1)·A'
SOLVED PROBLEMS
1. Prove: An n-square matrix A has an inverse if and only if it is nonsingular.
Suppose A is nonsingular. By Theorem IV, Chapter 5, there exist nonsingular matrices P and Q such that PAQ = I. Then A = P^(-1)·Q^(-1) and A^(-1) = Q·P exists.
Suppose A^(-1) exists. Then A·A^(-1) = I is of rank n. If A were singular, A·A^(-1) would be of rank < n; hence, A is nonsingular.
2. (a) When A = [2 3; 1 4], |A| = 5, adj A = [4 -3; -1 2], and A^(-1) = [4/5 -3/5; -1/5 2/5].
(b) When A = [2 3 1; 1 2 3; 3 1 2], |A| = 18, adj A = [1 -5 7; 7 1 -5; -5 7 1], and A^(-1) = (1/18)·[1 -5 7; 7 1 -5; -5 7 1].
3. Find the inverse of A = [2 4 3 2; 3 6 5 2; 2 5 2 -3; 4 5 14 14].
Write [A I4] and perform on its rows, of eight elements each, the sequence of row transformations which carries A into I4; as A is reduced to I4, I4 is carried into A^(-1).
4. Derive the formulas (7.5). Writing AB = I in partitioned form,
   (i) A11·B11 + A12·B21 = I,  (ii) A11·B12 + A12·B22 = 0,  (iii) A21·B11 + A22·B21 = 0,  (iv) A21·B12 + A22·B22 = I
Set ξ = A22 - A21·A11^(-1)·A12 and B22 = ξ^(-1). From (ii), B12 = -(A11^(-1)·A12)·ξ^(-1); from (iii), B21 = -ξ^(-1)·(A21·A11^(-1)); and, from (i), B11 = A11^(-1) + (A11^(-1)·A12)·ξ^(-1)·(A21·A11^(-1)).

5. Find the inverse of the given 4-square matrix A by partitioning, taking A11 of order 2 and proceeding as in (7.5).
6. Find the inverse of A = [1 3 3; 1 3 4; 1 4 3] by partitioning.
Here A11 = [1 3; 1 3] is singular, so (7.5) cannot be applied directly. Interchanging the second and third rows of A yields the matrix B of Example 3, whose inverse is B^(-1) = [7 -3 -3; -1 1 0; -1 0 1]. Since A = H23·B, we have A^(-1) = B^(-1)·K23, that is, B^(-1) with its second and third columns interchanged:
   A^(-1) = [7 -3 -3; -1 0 1; -1 1 0]
Thus, if the (n-1)-square minor A11 of the n-square nonsingular matrix A is singular, we first bring a nonsingular (n-1)-square matrix into the upper left corner to obtain B, find the inverse of B, and by the proper transformation on B^(-1) obtain A^(-1).
7. Compute the inverse of the symmetric matrix A = [2 1 -1 2; 1 3 2 -3; -1 2 1 -1; 2 -3 -1 4].
Consider first the submatrix G3 = [2 1 -1; 1 3 2; -1 2 1], partitioned so that A11 = [2 1; 1 3], A12 = [-1; 2], A21 = [-1 2], A22 = [1]. Using (7.6),
   G3^(-1) = (1/10)·[1 3 -5; 3 -1 5; -5 5 -5]
Now partition A so that A11 = G3, A12 = [2; -3; -1], A21 = [2 -3 -1], A22 = [4]. Then
   A11^(-1)·A12 = [-1/5; 2/5; -2],  ξ = A22 - A21·(A11^(-1)·A12) = [18/5],  ξ^(-1) = [5/18]
Then B22 = [5/18], B21 = -ξ^(-1)·(A11^(-1)·A12)' = (1/18)·[1 -2 10], B12 = B21', and
   B11 = A11^(-1) + (A11^(-1)·A12)·ξ^(-1)·(A11^(-1)·A12)' = (1/18)·[2 5 -7; 5 -1 5; -7 5 11]
so that
   A^(-1) = (1/18)·[2 5 -7 1; 5 -1 5 -2; -7 5 11 10; 1 -2 10 5]
SUPPLEMENTARY PROBLEMS
8. Find the adjoint and inverse of each of the following:
(a) [1 2 1; 1 1 2; 2 1 1]  (b) [2 3 4; 4 3 1; 1 2 4]  (c) [1 2 3; 2 4 5; 3 5 6]  (d) the given diagonal matrix
10. Obtain the inverses of the matrices of Problem 8 using the method of Problem 3.

11. The same, for the given 4-square matrices (a)-(d).

12. Use the result of Example 4 to obtain the inverse of the matrix of Problem 11(d) by partitioning.

13. Obtain by partitioning the inverses of the matrices of Problems 8(a), 8(b), and 11(a)-11(c).
17. Prove: (A^(-1))' = (A')^(-1).
Hint: I = (A·A^(-1))' = (A^(-1))'·A'.
18. Show that if the nonsingular symmetric matrices A and B commute, then (a) A^(-1)·B, (b) A·B^(-1), and (c) A^(-1)·B^(-1) are symmetric.
Hint: (a) (A^(-1)·B)' = B'·(A^(-1))' = B·A^(-1) = A^(-1)·B.
19. An mxn matrix A is said to have a right inverse B if AB = I and a left inverse C if CA = I. Show that A has a right inverse if and only if A is of rank m, and a left inverse if and only if the rank of A is n.
20. Find a right inverse of A = [1 3 2 3; 1 4 1 3; 1 3 5 4], if one exists.
Hint. The rank of A is 3 and the submatrix S = [1 3 2; 1 4 1; 1 3 5] is nonsingular, with inverse
   S^(-1) = (1/3)·[17 -9 -5; -4 3 1; -1 0 1]
A right inverse of A is then the 4x3 matrix B whose first three rows are those of S^(-1) and whose last row is zero.
21. Show that the submatrix T = [1 3 3; 1 4 3; 1 3 4], formed from columns 1, 2, 4 of the A of Problem 20, is nonsingular, and obtain
   [7 -3 -3; -1 1 0; 0 0 0; -1 0 1]
as another right inverse of A.
22. Obtain [7 -1 -1 a; -3 1 0 b; -3 0 1 c], where a, b, and c are arbitrary, as a left inverse of [1 1 1; 3 4 3; 3 3 4; 0 0 0].
23. Show that A = [1 3 4 7; 1 4 5 9; 2 3 5 8] has neither a right nor a left inverse.
NUMBER FIELDS. A collection or set S of real or complex numbers, consisting of more than the ele
ment 0, is called a number field provided the operations of addition, subtraction, multiplication,
and division (except by 0) on any two of the numbers yield a number of S.
Examples of number fields are:
(a) the set of all rational numbers,
(b) the set of all real numbers,
(c) the set of all numbers of the form a + b·√3, where a and b are rational numbers,
(d) the set of all complex numbers a + bi, where a and b are real numbers.
The set of all integers and the set of all numbers of the form b·√3, where b is a rational number, are not number fields.
GENERAL FIELDS. A collection or set S of two or more elements, together with two operations called addition (+) and multiplication (·), is called a field F provided that, a, b, c, ... being elements of F, i.e. scalars:
A1: a + b is a unique element of F
A2: a + b = b + a
A3: a + (b + c) = (a + b) + c
A4: There exists a unique element 0 in F such that a + 0 = a.
A5: For each element a in F there exists a unique element -a in F such that a + (-a) = 0.
M1: a·b is a unique element of F
M2: ab = ba
M3: (ab)c = a(bc)
M4: There exists a unique element 1 in F such that a·1 = a.
M5: For each element a ≠ 0 in F there exists a unique element a^(-1) in F such that a·a^(-1) = 1.
D1: a(b + c) = ab + ac
D2: (a + b)c = ac + bc
In addition to the number fields listed above, other examples of fields are:
CHAP. 8] FIELDS 65
SUBFIELDS. If S and T are two sets and if every member of S is also a member of T, then S is called
a subset of T.
If S and T are fields and if S is a subset of T, then S is called a subfield of T. For exam
ple, the field of all real numbers is a subfield of the field of all complex numbers; the field of
all rational numbers is a subfield of the field of all real numbers and the field of all complex
numbers.
MATRICES OVER A FIELD. When all of the elements of a matrix A are in a field F, we say that "A is over F". For example,
   A = [1 1/2; 1/4 2/3] is over the rational field and B = [i 1+i; 2 1-3i] is over the complex field.
Here, A is also over the real field and the complex field, while B is over the complex field only.
In what follows, we let F be the smallest field which contains all the elements; that is, if all the elements are rational numbers, the field F is the rational field and not the real or complex field. An examination of the various operations defined on these matrices, individually or collectively, in the previous chapters shows that no elements other than those in F are ever required. For example:
The sum, difference, and product are matrices over F.
Hereafter, when A is said to be over F, it will be assumed that F is the smallest field containing all of its elements. In later chapters it will at times be necessary to restrict the field, say, to the real field. At other times, the field of the elements will be extended, say, from the rational field to the real field. Otherwise, the statement "A over F" implies no restriction on the field, except for the excluded field of characteristic two.
SOLVED PROBLEM
1. Verify that the set of all complex numbers constitutes a field.
To do this we simply check the properties A1-A5, M1-M5, and D1-D2. The zero element (A4) is 0 and the unit element (M4) is 1. If a + bi and c + di are two elements, the negative (A5) of a + bi is -a - bi; the product (M1) is (a+bi)(c+di) = (ac - bd) + (ad + bc)i; and the inverse (M5) of a + bi ≠ 0 is
   1/(a+bi) = (a - bi)/(a² + b²) = a/(a² + b²) - b·i/(a² + b²)
Verification of the remaining properties is left as an exercise for the reader.
66 FIELDS [CHAP. 8
SUPPLEMENTARY PROBLEMS
2. Verify that (a) the set of all real numbers of the form a + b·√5, where a and b are rational numbers, and (b) the set of all quotients P(x)/Q(x) of polynomials in x with real coefficients, constitute fields.

4. Verify that the set of all 2x2 matrices of the form [a b; -b a], where a and b are rational numbers, forms a field. Show that this is a subfield of the field of all 2x2 matrices of the form [a b; -b a], where a and b are real numbers.
5. Does the set of all 2x2 matrices with real elements form a field?
6. A set R of elements a, b, c, ... satisfying the conditions A1-A5, M1, M3, D1, D2 of Page 64 is called a ring. To emphasize the fact that multiplication need not be commutative, R may be called a non-commutative ring. When a ring R satisfies M2, it is called commutative. When a ring R satisfies M4, it is spoken of as a ring with unit element.
Verify:
(a) the set of even integers 0, ±2, ±4, ... is an example of a commutative ring without unit element.
(b) the set of all integers 0, ±1, ±2, ±3, ... is an example of a commutative ring with unit element.
(c) the set of all n-square matrices over F is an example of a non-commutative ring with unit element.
(d) the set of all 2x2 matrices of the form [a b; -b a], where a and b are real numbers, is an example of a commutative ring with unit element.
7. Can the set (a) of Problem 6 be made into a commutative ring with unit element by simply adjoining the elements ±1 to the set?
8. By Problem 4, the set (d) of Problem 6 is a field. Is every field a ring? Is every commutative ring with unit element a field?
9. Describe the ring of all 2x2 matrices of the form [a b; 0 0], where a and b are in F. If A is any matrix of the ring and L = [1 0; 0 0], show that LA = A. Call L a left unit element. Is there a right unit element?
10. Let C be the field of all complex numbers p + qi and K be the field of all 2x2 matrices of the form [p q; -q p], where p and q are real numbers. Take the complex number a + bi and the matrix [a b; -b a] as corresponding elements of the two sets, and call each the image of the other.
(a) Show that the image of 2 + 3i is [2 3; -3 2] and that the image of [0 4; -4 0] is 4i.
(b) Show that the image of the sum (product) of two elements of K is the sum (product) of their images in C.
(c) Show that the image of the identity element of K is the identity element of C.
(d) What is the image of the conjugate of a + bi?
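The correspondence of Problem 10 can be exercised directly in Python. In this sketch (ours; the sign convention a + bi ↔ [a b; -b a] follows the problem statement), the image of a product is checked against the product of the images:

```python
def to_mat(z):
    """Image of the complex number a + bi in the matrix field K."""
    a, b = z.real, z.imag
    return [[a, b], [-b, a]]

def mul(m, n):
    """2x2 matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*n)] for row in m]

z, w = 2 + 3j, 1 - 1j
# the image of a product equals the product of the images
print(to_mat(z * w) == mul(to_mat(z), to_mat(w)))  # True
```

The same check works for sums, and squaring the image of i reproduces the image of -1, mirroring i² = -1 inside K.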
THE ORDERED PAIR of real numbers (x1, x2) is used to denote a point X in a plane. The same pair of numbers, written as [x1, x2], will be used here to denote the two-dimensional vector or 2-vector OX.
(Fig. 9-1 shows the points X(x1, x2) and X2(x21, x22) and their associated vectors from the origin.)
If X1 = [x11, x12] and X2 = [x21, x22] are distinct 2-vectors, the parallelogram law for their sum (see Fig. 9-2) yields
   X1 + X2 = [x11 + x21, x12 + x22]
Treating X1 and X2 as 1x2 matrices, we see that this is merely the rule for adding matrices given in Chapter 1. Moreover, if k is any scalar, k·X1 = [k·x11, k·x12].
These notions extend to n dimensions: the n-vector is
(9.1)  X = [x1, x2, ..., xn]
The elements x1, x2, ..., xn are called respectively the first, second, ..., nth components of X. Later we shall find it more convenient to write the components of a vector in a column, as
(9.1')  X = [x1, x2, ..., xn]'
Now (9.1) and (9.1') denote the same vector; however, we shall speak of (9.1) as a row vector and (9.1') as a column vector. We may, then, consider the pxq matrix A as defining p row vectors (the elements of a row being the components of a q-vector) or as defining q column vectors.
68 LINEAR DEPENDENCE OF VECTORS AND FORMS [CHAP. 9
The vector, all of whose components are zero, is called the zero vector and is denoted by 0.
The sum and difference of two row (column) vectors and the product of a scalar and a vec
tor are formed by the rules governing matrices.
The vectors used here are row vectors. Note that if each bracket is primed to denote col
umn vectors, the results remain correct.
LINEAR DEPENDENCE OF VECTORS. The m n-vectors over F
(9.2)  X1 = [x11, x12, ..., x1n], X2 = [x21, x22, ..., x2n], ..., Xm = [xm1, xm2, ..., xmn]
are said to be linearly dependent over F provided there exist m elements k1, k2, ..., km of F, not all zero, such that
(9.3)  k1·X1 + k2·X2 + ··· + km·Xm = 0
Otherwise, the vectors are said to be linearly independent.

Example 2. Consider the four vectors of Example 1. By (b) the vectors X2 and X4 are linearly dependent; so also are X1, X2, and X4 by (c), and the entire set by (d).
The vectors X1 and X2, however, are linearly independent. For, assume the contrary, so that
   k1·X1 + k2·X2 = [3k1 + 2k2, k1 + 2k2, 4k1 - 3k2] = [0, 0, 0]
Then 3k1 + 2k2 = 0, k1 + 2k2 = 0, and 4k1 - 3k2 = 0. From the first two relations k1 = 0 and then k2 = 0.
Thus:
I. If m vectors are linearly dependent, some one of them may always be expressed as a linear combination of the others.
II. If m vectors X1, X2, ..., Xm are linearly independent while the set obtained by adding another vector Xm+1 is linearly dependent, then Xm+1 can be expressed as a linear combination of X1, X2, ..., Xm.

Example 3. From Example 2, the vectors X1 and X2 are linearly independent while X1, X2, and X3 are linearly dependent, satisfying the relation 2X1 + 3X2 - X3 = 0. Clearly, X3 = 2X1 + 3X2.
III. If among the m vectors X1, X2, ..., Xm there is a subset of r < m vectors which are linearly dependent, the vectors of the entire set are linearly dependent.

Example 4. By (b) of Example 1, the vectors X2 and X4 are linearly dependent; by (d), the set of four vectors is linearly dependent. See Problem 1.
In Problem 2, we prove:
IV. If the rank of the matrix
(9.5)  [x11 x12 ... x1n; x21 x22 ... x2n; ...; xm1 xm2 ... xmn],  m <= n,
associated with the m vectors (9.2) is r < m, there are exactly r vectors of the set which are linearly independent, while each of the remaining m-r vectors can be expressed as a linear combination of these r vectors. See Problems 2-3.
V. A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix (9.5) of the vectors be of rank r < m. If the rank is m, the vectors are linearly independent.
VI. If the set of vectors (9.2) is linearly independent, so also is every subset of them.
LINEAR FORMS. Consider the m linear forms in n variables
(9.7)  f1 = a11·x1 + a12·x2 + ··· + a1n·xn,  f2 = a21·x1 + a22·x2 + ··· + a2n·xn,  ...,  fm = am1·x1 + am2·x2 + ··· + amn·xn
If there exist m elements k1, k2, ..., km of F, not all zero, such that k1·f1 + k2·f2 + ··· + km·fm = 0, the forms (9.7) are said to be linearly dependent; otherwise the forms are said to be linearly independent. Thus, the linear dependence or independence of the forms of (9.7) is equivalent to the linear dependence or independence of the row vectors of the coefficient matrix A = [a_ij].

Example 5. The forms f1 = 2x1 - x2 + 3x3, f2 = x1 + 2x2 + 4x3, f3 = 4x1 - 7x2 + x3 are linearly dependent since A = [2 -1 3; 1 2 4; 4 -7 1] is of rank 2. Here, 3f1 - 2f2 - f3 = 0.
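The dependence test of Example 5 can be verified in Python (our sketch; `rank` and `combo` are illustrative names). The rank of the coefficient matrix is computed by elimination, and the stated relation 3f1 - 2f2 - f3 = 0 is checked coefficient by coefficient:

```python
from fractions import Fraction

rows = [[2, -1, 3], [1, 2, 4], [4, -7, 1]]  # coefficient rows of f1, f2, f3

def rank(a):
    """Rank of a 3x3 matrix by Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in a]
    r = 0
    for c in range(3):
        p = next((i for i in range(r, 3) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        for i in range(3):
            if i != r:
                m[i] = [u - m[i][c] / m[r][c] * v for u, v in zip(m[i], m[r])]
        r += 1
    return r

print(rank(rows))  # 2
combo = [3 * a - 2 * b - c for a, b, c in zip(*rows)]  # coefficients of 3*f1 - 2*f2 - f3
print(combo)  # [0, 0, 0]
```

The zero combination of rows corresponds exactly to the identically-zero combination of forms, as the section asserts.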
SOLVED PROBLEMS
1. Prove: If among the m vectors X1, X2, ..., Xm there is a subset, say X1, X2, ..., Xr, r < m, which is linearly dependent, so also are the m vectors.
Since, by hypothesis, k1·X1 + k2·X2 + ··· + kr·Xr = 0 with not all of the k's equal to zero, then
   k1·X1 + k2·X2 + ··· + kr·Xr + 0·Xr+1 + ··· + 0·Xm = 0
with not all of the k's equal to zero, and the entire set of vectors is linearly dependent.
2. Prove: If the rank of the matrix associated with a set of m n-vectors is r < m, there are exactly r vectors which are linearly independent while each of the remaining m-r vectors can be expressed as a linear combination of these r vectors.
Let (9.5) be the matrix and suppose first that m <= n. If the r-rowed minor in the upper left-hand corner is equal to zero, we interchange rows and columns as necessary to bring a nonvanishing r-rowed minor Δ into this position and then renumber all rows and columns in natural order. Consider the (r+1)-rowed minor
   V = [x11 x12 ... x1r x1q; x21 x22 ... x2r x2q; ...; xr1 xr2 ... xrr xrq; xp1 xp2 ... xpr xpq]
where the elements xpt and xtq are respectively from any row and any column not included in Δ. Let k1, k2, ..., kr, kr+1 = Δ be the respective cofactors of the elements x1q, x2q, ..., xrq, xpq of the last column of V. Then, by (3.10),
   k1·x1q + k2·x2q + ··· + kr·xrq + kr+1·xpq = |V| = 0
Now let the last column of V be replaced by another of the remaining columns, say the column numbered u, not appearing in Δ. The cofactors of the elements of this column are precisely the k's obtained above, so that
   k1·x1u + k2·x2u + ··· + kr·xru + kr+1·xpu = 0
Thus,
   k1·x1t + k2·x2t + ··· + kr·xrt + kr+1·xpt = 0    (t = 1, 2, ..., n)
Since kr+1 = Δ ≠ 0, Xp is a linear combination of the r linearly independent vectors X1, X2, ..., Xr. But Xp was any one of the m-r vectors Xr+1, Xr+2, ..., Xm; hence, each of these may be expressed as a linear combination of X1, X2, ..., Xr.
Thus, in either case, the vectors Xr+1, ..., Xm are linear combinations of the linearly independent vectors X1, X2, ..., Xr, as was to be proved.
3. Each of the following sets of vectors is linearly dependent. In each, determine a maximum subset of linearly independent vectors and express the others as linear combinations of these.
(a) X1 = [1, 2, 3, 4], X2 = [3, -1, 2, 1], X3 = [5, 3, 8, 9]
(b) X1 = [2, 3, 1, -1], X2 = [2, 3, 1, -2], X3 = [4, 6, 2, -3]

(a) Here [1 2 3 4; 3 -1 2 1; 5 3 8 9] is of rank 2; there are two linearly independent vectors, say X1 and X2. The minor [1 2; 3 -1] is not zero. Consider then the minor [1 2 3; 3 -1 2; 5 3 8]. The cofactors of the elements of the third column are respectively 14, 7, and -7. Then 14·X1 + 7·X2 - 7·X3 = 0 and X3 = 2·X1 + X2.

(b) Here [2 3 1 -1; 2 3 1 -2; 4 6 2 -3] is of rank 2; there are two linearly independent vectors, say X1 and X2. Now the minor formed from the first two columns, [2 3; 2 3], is zero; we interchange the 2nd and 4th columns to obtain [2 -1 1 3; 2 -2 1 3; 4 -3 2 6], for which [2 -1; 2 -2] = -2 ≠ 0. The cofactors of the elements of the last column of [2 -1 1; 2 -2 1; 4 -3 2] are 2, 2, -2 respectively. Then 2·X1 + 2·X2 - 2·X3 = 0 and X3 = X1 + X2.
4. Let P1(1, 1, 1), P2(1, 2, 3), P3(3, 1, 2), and P4(2, 3, 4) be points in ordinary space. The points P1, P2, and the origin of coordinates determine a plane π of equation
(i)  det[x y z; 1 1 1; 1 2 3] = x - 2y + z = 0
Since 2 - 2(3) + 4 = 0, P4 lies in π. The significant fact here is that [P4; P1; P2] = [2 3 4; 1 1 1; 1 2 3] is of rank 2.
We have verified: Any three points of ordinary space lie in a plane through the origin provided the matrix of their coordinates is of rank 2.
SUPPLEMENTARY PROBLEMS
5. Prove: If m vectors X1, X2, ..., Xm are linearly independent while the set obtained by adding another vector Xm+1 is linearly dependent, then Xm+1 can be expressed as a linear combination of X1, X2, ..., Xm.
7. Prove: A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix (9.5) of the vectors be of rank r < m.
Hint: Suppose the m vectors are linearly dependent so that (9.4) holds. In (9.5), subtract from the ith row the product of the first row by s1, the product of the second row by s2, ..., as indicated in (9.4). For the converse, see Problem 2.
8. Examine each of the following sets of vectors over the real field for linear dependence or independence. In each dependent set select a maximum linearly independent subset and express each of the remaining vectors as a linear combination of these.
9. Why can there be no more than n linearly independent n-vectors over F?
10. Show that if in (9.2) either Xi = Xj or Xi = a·Xj, a in F, the set of vectors is linearly dependent. Is the converse true?
11. Show that any n-vector X and the n-zero vector are linearly dependent; hence, X and 0 are considered proportional.
Hint: Consider k1·X + k2·0 = 0, where k1 = 0 and k2 ≠ 0.
12. (a) Show that X1 = [1, 1+i, i], X2 = [i, -i, 1-i], and X3 = [1-2i, 1+3i, -2+3i] are linearly dependent over the rational field and, hence, over the complex field.
(b) Show that X1 = [1, 1+i, i], X2 = [i, -i, 1-i], and X3 = [0, 1-2i, 2-i] are linearly independent over the real field but are linearly dependent over the complex field.
13. Examine the given systems of linear forms for dependence or independence; for example, (a) f2 = 2x1 + 3x2 - x3 + 2x4, (b) f2 = 3x1 + 2x2 - 2x3 + 5x4.

14. Consider a system of polynomials Pi = ai0 + ai1·x + ··· + ain·x^n, (i = 1, 2, ..., m), and show that the system is linearly dependent or independent according as the row vectors of the coefficient matrix
   A = [a10 a11 ... a1n; a20 a21 ... a2n; ...; am0 am1 ... amn]
are linearly dependent or independent, that is, according as the rank of A is less than m or equal to m.

15. If the polynomials of either given system are linearly dependent, find a linear combination which is identically zero.
16. Let M1 = [a b; c d], M2 = [e f; g h], M3 = [p q; s t]. Show that k1·M1 + k2·M2 + k3·M3 = 0, where not all the k's (in F) are zero, requires that the rank of the matrix [a b c d; e f g h; p q s t] be < 3. (Note that the matrices M1, M2, M3 are considered as defining vectors of four components.)
18. Show that any 2x2 matrix can be written as a linear combination of the matrices [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], and [0 0; 0 1].
19. If the n-vectors X1, X2, ..., Xn are linearly independent, show that the vectors Y1, Y2, ..., Yn, where Yi = Σj aij·Xj, are linearly independent if and only if A = [aij] is nonsingular.
20. If A is of rank r, show how to construct a nonsingular matrix B such that AB = [C1, C2, ..., Cr, 0], where C1, C2, ..., Cr are a given set of r linearly independent columns of A.
21. Given the points P1(1, 1, 1, 1), P2(1, 2, 3, 4), P3(2, 2, 2, 2), and P4(3, 4, 5, 6) of four-dimensional space:
(a) Show that the rank of [P1; P3]' is 1, so that the points P1 and P3 lie on a line through the origin.
(b) Show that [P1; P2; P3; P4]' is of rank 2, so that these points lie in a plane through the origin.
(c) Does P5(2, 3, 2, 5) lie in the plane of (b)?
22. Show that every n-square matrix A over F satisfies an equation of the form A^p + a1·A^(p-1) + a2·A^(p-2) + ··· + ap·I = 0, with the a's in F.
23. Find the equation of minimum degree (see Problem 22) which is satisfied by each of the given 2-square matrices (a), (b), (c).
Ans. (a) A² - 2A = 0, (b) A² - 2A + 2I = 0, (c) A² - 2A + I = 0
24. In Problem 23(b) and (c), multiply each equation by A^(-1) to obtain (b) A^(-1) = I - ½A and (c) A^(-1) = 2I - A, and thus verify: If A over F is nonsingular, then A^(-1) can be expressed as a polynomial in A whose coefficients are scalars of F.
chapter 10
Linear Equations
DEFINITIONS. Consider a system of m linear equations in the n unknowns x1, x2, ..., xn:
(10.1)  a11·x1 + a12·x2 + ··· + a1n·xn = h1
        a21·x1 + a22·x2 + ··· + a2n·xn = h2
        ..........................................
        am1·x1 + am2·x2 + ··· + amn·xn = hm
in which the coefficients (a's) and the constant terms (h's) are in F.
By a solution in F of the system is meant any set of values of %,%2. x^ in F which sat
isfy simultaneously the m equations. When the system has a solution, it is said to be consistent;
otherwise, the system is said to be inconsistent. A consistent system has either just one solu
tion or infinitely many solutions.
Two systems of linear equations over F in the same number of unknowns are called equivalent if every solution of either system is a solution of the other. A system of equations equivalent to (10.1) may be obtained from it by applying one or more of the transformations: (a) interchanging any two of the equations, (b) multiplying any equation by any non-zero constant in F, or (c) adding to any equation a constant multiple of another equation. Solving a system of consistent equations consists in replacing the given system by an equivalent system of prescribed form.
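The three elementary transformations lend themselves to mechanical computation. The following sketch (function and variable names are my own, not the text's) reduces a matrix to the row equivalent canonical form using exactly the transformations (a), (b), (c), with exact rational arithmetic:

```python
from fractions import Fraction

def row_canonical(M):
    """Reduce M (a list of rows) to the row equivalent canonical form,
    using only the three elementary row transformations."""
    M = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(M), len(M[0])
    r = 0
    for c in range(ncols):
        # (a) interchange: bring a row with a non-zero entry in column c to row r
        pivot = next((i for i in range(r, nrows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # (b) multiply the pivot row by a constant so its leading entry is 1
        M[r] = [x / M[r][c] for x in M[r]]
        # (c) add a multiple of the pivot row to every other row
        for i in range(nrows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == nrows:
            break
    return M
```

Applied to the augmented matrix of a consistent system, the resulting canonical form displays the solution directly, as in the examples that follow.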
SOLUTION USING A MATRIX. In matrix notation the system of linear equations (10.1) may be written as

(10.2)   [a11 a12 ... a1n] [x1]   [h1]
         [a21 a22 ... a2n] [x2] = [h2]
         [ ...          ] [...]   [...]
         [am1 am2 ... amn] [xn]   [hm]

or, more compactly, as

(10.3)   AX = H

where A = [a_ij] is the coefficient matrix. The matrix A with the column of constants adjoined,

(10.4)   [A H]

is called the augmented matrix of the system. (Each row of (10.4) is simply an abbreviation of a corresponding equation of (10.1); to read the equation from the row, we simply supply the unknowns and the + and = signs properly.)
76 LINEAR EQUATIONS [CHAP. 10
To solve the system (10.1) by means of (10.4), we proceed by elementary row transformations
to replace A by the row equivalent canonical matrix of Chapter 5. In doing this, we operate on
the entire rows of (10.4).
Example 1. For a certain system of three equations in x1, x2, x3, reduction of the augmented matrix by elementary row transformations yields

      [A H]  ~  [1 0 0 | 1]
                [0 1 0 | 0]
                [0 0 1 | 1]

   Thus, the solution is the equivalent system of equations: x1 = 1, x2 = 0, x3 = 1. Expressed in vector form, we have X = [1, 0, 1]'.
FUNDAMENTAL THEOREMS. When the coefficient matrix A of the system (10.1) is reduced to the row equivalent canonical form C, suppose [A H] is reduced to [C K], where K = [k1, k2, ..., km]'. If A is of rank r, the first r rows of C contain one or more non-zero elements. The first non-zero element in each of these rows is 1 and the column in which that 1 stands has zeros elsewhere. The remaining rows consist of zeros. From the first r rows of [C K], we may obtain each of the variables x_j1, x_j2, ..., x_jr (the notation is that of Chapter 5) in terms of the remaining variables x_j(r+1), x_j(r+2), ..., x_jn and one of k1, k2, ..., kr.

If k_(r+1) = k_(r+2) = ... = k_m = 0, arbitrary values assigned to x_j(r+1), ..., x_jn together with the resulting values of x_j1, ..., x_jr constitute a solution. On the other hand, if at least one of k_(r+1), k_(r+2), ..., k_m is different from zero, say k_(r+s) ≠ 0, the corresponding row reads

      0·x1 + 0·x2 + ... + 0·xn = k_(r+s) ≠ 0

and no values of the unknowns can satisfy it.

In the consistent case, A and [A H] have the same rank; in the inconsistent case, they have different ranks. Thus

   I. The system AX = H is consistent if and only if the coefficient matrix and the augmented matrix of the system have the same rank.
   II. In a consistent system (10.1) of rank r < n, n - r of the unknowns may be chosen so that the coefficient matrix of the remaining r unknowns is of rank r. When these n - r unknowns are assigned any values whatever, the other r unknowns are uniquely determined.

If a consistent system of equations over F has a unique solution (Example 1), that solution is over F. If the system has infinitely many solutions (Example 2), it has infinitely many solutions over F when the arbitrary values to be assigned are over F. Moreover, the system has infinitely many solutions over any field of which F is a subfield. For example, the system of Example 2 has infinitely many solutions over F (the rational field) if a is restricted to rational numbers, infinitely many real solutions if a is restricted to real numbers, and infinitely many complex solutions if a is any complex number whatever.
See Problems 1-2.
In Problem 3 we prove

   III. A system of n non-homogeneous equations in n unknowns has a unique solution provided the rank of its coefficient matrix A is n, that is, provided |A| ≠ 0.

In addition to the method above, two additional procedures for solving a consistent system of n non-homogeneous equations in as many unknowns, AX = H, are given below. The first of these is the familiar solution by determinants.
(a) Solution by Cramer's Rule. Denote by A_i (i = 1, 2, ..., n) the matrix obtained from A by replacing its ith column with the column of constants (the h's). Then, if |A| ≠ 0, the system AX = H has the unique solution

(10.5)   x1 = |A1|/|A|,   x2 = |A2|/|A|,   ...,   xn = |An|/|A|

See Problem 4.
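Cramer's Rule translates directly into code. The sketch below is illustrative only (the names `det` and `cramer` are mine, not the text's); it uses cofactor expansion, adequate for the small systems of this chapter though far too slow for large n. Because `Fraction` is used throughout, the quotients |A_i|/|A| come out exact:

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    total = Fraction(0)
    for j, a in enumerate(A[0]):
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * a * det(minor)
    return total

def cramer(A, H):
    """Solve AX = H by Cramer's Rule; requires |A| != 0."""
    d = det(A)
    assert d != 0, "Cramer's Rule applies only when |A| != 0"
    xs = []
    for i in range(len(A)):
        # A_i: A with its ith column replaced by the column of constants
        Ai = [row[:i] + [h] + row[i+1:] for row, h in zip(A, H)]
        xs.append(Fraction(det(Ai)) / d)
    return xs
```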
Example 3. Solve the system

      2x1 + x2 + 5x3 + x4 = 5
      x1 + x2 - 3x3 - 4x4 = -1
      3x1 + 6x2 - 2x3 + x4 = 8
      2x1 + 2x2 + 2x3 - 3x4 = 2

   using Cramer's Rule.

   Here

      |A|  = |2 1  5  1; 1 1 -3 -4; 3 6 -2 1; 2 2 2 -3| = -120
      |A1| = |5 1  5  1; -1 1 -3 -4; 8 6 -2 1; 2 2 2 -3| = -240
      |A2| = |2 5  5  1; 1 -1 -3 -4; 3 8 -2 1; 2 2 2 -3| = -24
      |A3| = |2 1  5  1; 1 1 -1 -4; 3 6 8 1; 2 2 2 -3| = 0
      |A4| = |2 1  5  5; 1 1 -3 -1; 3 6 -2 8; 2 2 2 2| = -96

   Then x1 = |A1|/|A| = 2, x2 = |A2|/|A| = 1/5, x3 = 0, and x4 = |A4|/|A| = 4/5.
(b) Solution using the inverse of the coefficient matrix. If A is nonsingular, multiply AX = H by A^(-1) to obtain

(10.6)   A^(-1)AX = A^(-1)H   or   X = A^(-1)H

Example 4. Solve the system  2x1 + 3x2 + x3 = 9,  x1 + 2x2 + 3x3 = 6,  3x1 + x2 + 2x3 = 8  using the inverse of the coefficient matrix.

   Here A = [2 3 1; 1 2 3; 3 1 2]. From Problem 2(b), Chapter 7,

      A^(-1) = (1/18)[1 -5 7; 7 1 -5; -5 7 1]

   Then

      X = A^(-1)H = (1/18)[1 -5 7; 7 1 -5; -5 7 1]·[9; 6; 8] = (1/18)[35; 29; 5]

   and x1 = 35/18, x2 = 29/18, x3 = 5/18.
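Formula (10.6) can be exercised with a small Gauss-Jordan routine. This is a sketch under my own naming, not code from the text; it assumes A is nonsingular:

```python
from fractions import Fraction

def inverse(A):
    """Invert nonsingular A by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

def solve_by_inverse(A, H):
    """X = A^(-1) H, as in (10.6)."""
    Ainv = inverse(A)
    return [sum(a * h for a, h in zip(row, H)) for row in Ainv]
```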
HOMOGENEOUS EQUATIONS. A system of linear equations

(10.8)   AX = 0

in n unknowns is called a system of homogeneous equations. For the system (10.8) the rank of the coefficient matrix A and the augmented matrix [A 0] are the same; thus, the system is always consistent. Note that X = 0, that is, x1 = x2 = ... = xn = 0, is always a solution; it is called the trivial solution.

If the rank of A is n, then n of the equations of (10.8) can be solved by Cramer's Rule for the unique solution x1 = x2 = ... = xn = 0, and the system has only the trivial solution. If the rank of A is r < n, Theorem II assures the existence of non-trivial solutions. Thus,

   IV. A necessary and sufficient condition for (10.8) to have a solution other than the trivial solution is that the rank of A be r < n.

   V. A necessary and sufficient condition that a system of n homogeneous equations in n unknowns have a solution other than the trivial solution is |A| = 0.

   VI. If the rank of (10.8) is r < n, the system has exactly n - r linearly independent solutions such that every solution is a linear combination of these n - r and every such linear combination is a solution.   See Problem 6.
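Theorem VI is constructive: after reducing A, set each free unknown to 1 in turn (the others to 0) to obtain one of the n - r independent solutions. A sketch of the procedure (the code and names are my own, not the text's):

```python
from fractions import Fraction

def null_space_basis(A):
    """Return n - r linearly independent solutions of AX = 0 (Theorem VI)."""
    A = [[Fraction(x) for x in row] for row in A]
    nrows, ncols = len(A), len(A[0])
    r, pivots = 0, []
    for c in range(ncols):          # reduce to row canonical form
        p = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(nrows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    free = [c for c in range(ncols) if c not in pivots]
    basis = []
    for f in free:                  # one solution per free unknown
        x = [Fraction(0)] * ncols
        x[f] = Fraction(1)
        for row, pc in zip(A, pivots):
            x[pc] = -row[f]
        basis.append(x)
    return basis
```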
Let X1 and X2 be two distinct solutions of AX = H. Then AX1 = H, AX2 = H, and A(X1 - X2) = AY = 0; that is, Y = X1 - X2 is a non-trivial solution of the associated homogeneous system AX = 0. Conversely, if Xp is a particular solution of AX = H (found, for example, by assigning the value 0 to each of the free unknowns) and Y, involving one or more arbitrary parameters, is the complete solution of AX = 0, then Xp + Y is the complete solution of the given system.

      Note. The above procedure may be extended to larger systems. However, it is first necessary to show that the system is consistent. This is a long step in solving the system by the augmented matrix method given earlier.
SOLVED PROBLEMS
1. Solve the system

      x1 + x2 - 2x3 + x4 + 3x5 = 1
      2x1 - x2 + 2x3 + 2x4 + 6x5 = 2
      3x1 + 2x2 - 4x3 - 3x4 - 9x5 = 3

Solution:

   [A H] = [1 1 -2 1 3 | 1]     [1 1 -2 1 3 | 1]       [1 0 0 0 0 | 1]
           [2 -1 2 2 6 | 2]  ~  [0 -3 6 0 0 | 0]   ~   [0 1 -2 0 0 | 0]
           [3 2 -4 -3 -9 | 3]   [0 -1 2 -6 -18 | 0]    [0 0 0 1 3 | 0]

Then x1 = 1, x2 - 2x3 = 0, and x4 + 3x5 = 0. Take x3 = a and x5 = b, where a and b are arbitrary; the complete solution may be given as x1 = 1, x2 = 2a, x3 = a, x4 = -3b, x5 = b or as X = [1, 2a, a, -3b, b]'.
2. Solve the system

      x1 + x2 + 2x3 + x4 = 5
      2x1 + 3x2 - x3 - 2x4 = 2
      4x1 + 5x2 + 3x3 = 7

Solution:

   [A H] = [1 1 2 1 | 5]      [1 1 2 1 | 5]        [1 0 7 5 | 13]
           [2 3 -1 -2 | 2] ~  [0 1 -5 -4 | -8]  ~  [0 1 -5 -4 | -8]
           [4 5 3 0 | 7]      [0 1 -5 -4 | -13]    [0 0 0 0 | -5]

The last row reads 0·x1 + 0·x2 + 0·x3 + 0·x4 = -5; thus the given system is inconsistent and has no solution.
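Theorem I gives a mechanical consistency test: compare the rank of A with the rank of [A H]. A sketch (the code and names are mine, not the text's), applied to the inconsistent system of Problem 2:

```python
from fractions import Fraction

def rank(M):
    """Rank of M, computed by elementary row transformations."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            if M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1,1,2,1],[2,3,-1,-2],[4,5,3,0]]
H = [5, 2, 7]
aug = [row + [h] for row, h in zip(A, H)]
consistent = rank(A) == rank(aug)   # Theorem I
```

Here `rank(A)` is 2 while `rank(aug)` is 3, confirming that the system has no solution.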
3. Prove Theorem III. Since A is nonsingular, X = K = A^(-1)H is a solution. Suppose next that X = L is a second solution of the system; then AK = H and AL = H, so that AK = AL. Since A is nonsingular, K = L, and the solution is unique.
4. Prove Cramer's Rule. Write the system AX = H as

(i)   a11·x1 + a12·x2 + ... + a1n·xn = h1
      a21·x1 + a22·x2 + ... + a2n·xn = h2
      ..........................................
      an1·x1 + an2·x2 + ... + ann·xn = hn

Denote by A the coefficient matrix [a_ij] and let α_ij be the cofactor of a_ij in A. Multiply the first equation of (i) by α11, the second equation by α21, ..., the last equation by αn1, and add. We have

   (Σ_i a_i1·α_i1)·x1 + (Σ_i a_i2·α_i1)·x2 + ... + (Σ_i a_in·α_i1)·xn = Σ_i h_i·α_i1

Here Σ_i a_i1·α_i1 = |A|, while each remaining coefficient, being an expansion in terms of alien cofactors, is zero; thus

   |A|·x1 = |h1 a12 ... a1n; h2 a22 ... a2n; ...; hn an2 ... ann| = |A1|   so that   x1 = |A1|/|A|

Next, multiply the equations of (i) respectively by α12, α22, ..., αn2 and sum to obtain |A|·x2 = |A2| and x2 = |A2|/|A|. Continuing in this manner, we finally multiply the equations of (i) respectively by α1n, α2n, ..., αnn and sum to obtain |A|·xn = |An| and xn = |An|/|A|.
5. Solve the system

      2x1 + x2 + 5x3 + x4 = 5
      x1 + x2 - 3x3 - 4x4 = -1
      3x1 + 6x2 - 2x3 + x4 = 8
      2x1 + 2x2 + 2x3 - 3x4 = 2

   using the inverse of the coefficient matrix.

6. Solve

      x1 + x2 + x3 + x4 = 0
      x1 + 3x2 + 2x3 + 4x4 = 0
      2x1 + x3 - x4 = 0

Solution:

   [A 0] ~ [1 0 1/2 -1/2 | 0]
           [0 1 1/2 3/2 | 0]
           [0 0 0 0 | 0]

The complete solution of the system is x1 = -(1/2)a + (1/2)b, x2 = -(1/2)a - (3/2)b, x3 = a, x4 = b. Since the rank of A is 2, we may obtain exactly n - r = 4 - 2 = 2 linearly independent solutions. One such pair, obtained by first taking a = 1, b = 1 and then a = 3, b = 1, is [0, -2, 1, 1]' and [-1, -3, 3, 1]'.
7. Prove: In a square matrix A of order n and rank n-1, the cofactors of the elements of any two rows (columns) are proportional.

Since |A| = 0, the vector of cofactors of the elements of any row (column) of A is a solution X1 of the system AX = 0 (A'X = 0). Now the system has but one linearly independent solution, since A (A') is of rank n-1. Hence, for the cofactors of another row (column) of A (another solution X2 of the system), we have X2 = kX1.
8. Prove: If f1, f2, ..., fm are m < n linearly independent linear forms over F in n variables, then the linear forms

      g_j = Σ_{i=1}^{m} s_ij·f_i    (j = 1, 2, ..., p)

are linearly dependent if and only if the m×p matrix [s_ij] is of rank r < p.

The g's are linearly dependent if and only if there exist scalars a1, a2, ..., ap in F, not all zero, such that

      a1·g1 + a2·g2 + ... + ap·gp = Σ_{j=1}^{p} a_j Σ_{i=1}^{m} s_ij·f_i = Σ_{i=1}^{m} (Σ_{j=1}^{p} s_ij·a_j)·f_i = 0

Since the f's are linearly independent, this requires

      Σ_{j=1}^{p} s_ij·a_j = 0    (i = 1, 2, ..., m)

Now, by Theorem IV, this system of m homogeneous equations in the p unknowns a_j has a non-trivial solution if and only if [s_ij] is of rank r < p.
9. Suppose A = [a_ij] of order n is singular. Show that there always exists a matrix B = [b_ij] ≠ 0 of order n such that AB = 0.

Let B1, B2, ..., Bn be the column vectors of B, so that AB = 0 requires AB1 = AB2 = ... = ABn = 0. Consider then the system ABt = 0, that is,

      a11·b1t + a12·b2t + ... + a1n·bnt = 0
      ..........................................
      an1·b1t + an2·b2t + ... + ann·bnt = 0

Since the coefficient matrix A is singular, the system in the unknowns b1t, b2t, ..., bnt has solutions other than the trivial solution. Similarly, AB1 = 0, AB2 = 0, ... have non-trivial solutions, each furnishing a column of B.
SUPPLEMENTARY PROBLEMS
10. Find all solutions of:

   (a) x1 + x2 + x3 = 4

   (b) x1 + x2 + x3 = 4
       2x1 + 5x2 - 2x3 = 3

   (c) x1 + x2 + x3 + x4 = 0
       x1 + x2 - x3 + x4 = 4

   (d) x1 + x2 + x3 + x4 = 0
       x1 + x2 - x3 + x4 = 4
       x1 - x2 + x3 + x4 = 2
11. Find all non-trivial solutions of:

   (g) 4x1 + 2x3 + x4 = 0         (h) 3x1 + 2x2 + x3 = 0
       2x1 - x2 + 3x3 = 0             7x2 - 4x3 - 5x4 = 0
       2x1 + 3x3 + x4 = 0             x1 - 4x2 + 5x3 = 0
       x2 - x4 = 0                    2x1 - 11x2 - x3 - 5x4 = 0
13. Given A = [1 1 2; 2 2 4; 3 3 6], find a matrix B of rank 2 such that AB = 0. Hint. Select the columns of B from the solutions of AX = 0.
14. Show that a square matrix is singular if and only if its rows (columns) are linearly dependent.
15. Let AX = 0 be a system of n homogeneous equations in n unknowns and suppose A of rank r = n-1. Show that any non-zero vector of cofactors [α_i1, α_i2, ..., α_in]' of a row of A is a solution of AX = 0.
16. Use the result of Problem 15 to solve:

   (a) x1 - 2x2 + 3x3 = 0     (b) 2x1 - x2 + 6x3 = 0     (c) 2x1 + 3x2 + 4x3 = 0
       2x1 + 5x2 + 6x3 = 0        3x1 - 4x2 + 2x3 = 0        2x1 + 5x2 + 6x3 = 0

   Hint. To the equations of (a) adjoin 0·x1 + 0·x2 + 0·x3 = 0 and find the cofactors of the elements of the third row of [1 -2 3; 2 5 6; 0 0 0].

   Ans. (a) x1 = -27a, x2 = 0, x3 = 9a or [-3a, 0, a]', (b) [22a, 14a, -5a]', (c) [a, 2a, -2a]'
17. Let the coefficient matrix and the augmented matrix of the system AX = H of 3 non-homogeneous equations in 5 unknowns be of rank 2, and assume the canonical form of the augmented matrix to be

   [1 0 b13 b14 b15 | c1]
   [0 1 b23 b24 b25 | c2]
   [0 0 0   0   0   | 0 ]

Obtain four linearly independent solutions X1, X2, X3, X4 of AX = H.
18. Consider the linear combination Y = s1·X1 + s2·X2 + s3·X3 + s4·X4 of the solutions of Problem 17. Show that Y is a solution of AX = H if and only if (i) s1 + s2 + s3 + s4 = 1. Thus, with s1, s2, s3, s4 arbitrary except for (i), Y is a complete solution of AX = H.
24. Show that

   B = [1 1 1 1 | 4]                          [1 0 0 0 0]
       [1 2 3 4 | 2]                          [0 1 0 0 0]
       [2 1 1 2 | 6]   is row equivalent to   [0 0 1 0 0]
       [3 2 1 1 | 3]                          [0 0 0 1 0]
       [1 2 2 2 | 4]                          [0 0 0 0 1]
       [2 3 3 1 | 1]                          [0 0 0 0 0]

From B = [A H] infer that the system of 6 linear equations in 4 unknowns has 5 linearly independent equations. Show that a system of m > n linear equations in n unknowns can have at most n + 1 linearly independent equations, and that when there are n + 1, the system is inconsistent.
25. If AX =H is consistent and of rank r, for what set of r variables can one solve?
26. Use the fact that a consistent system has coefficient and augmented matrix of the same rank r to prove: If the coefficient and the augmented matrix of the system AX = H of m non-homogeneous equations in n unknowns have rank r, and if X1, X2, ..., X_(n-r+1) are linearly independent solutions of the system, then X = s1·X1 + s2·X2 + ... + s_(n-r+1)·X_(n-r+1), where s1 + s2 + ... + s_(n-r+1) = 1, is the complete solution.
27. In a four-pole electrical network, the input quantities E1 and I1 are given in terms of the output quantities E2 and I2 by

      E1 = a·E2 + b·I2          [E1]   [a b] [E2]
      I1 = c·E2 + d·I2    or    [I1] = [c d] [I2]

Express E2 and I2 in terms of E1 and I1, assuming ad - bc ≠ 0.
30. Let A be n-square and nonsingular, and let S_i be the solution of AX = E_i, (i = 1, 2, ..., n), where E_i is the n-vector whose ith component is 1 and whose other components are 0. Identify the matrix [S1, S2, ..., Sn].

31. Let A be an m×n matrix with m < n and let S_i be a solution of AX = E_i, (i = 1, 2, ..., m), where E_i is the m-vector whose ith component is 1 and whose other components are 0. If K = [k1, k2, ..., km]', show that X = k1·S1 + k2·S2 + ... + km·Sm is a solution of AX = K.
chapter 11
Vector Spaces
UNLESS STATED OTHERWISE, all vectors will now be column vectors. When components are displayed, we shall write [x1, x2, ..., xn]'. The transpose mark (') indicates that the elements are to be written in a column.
A set of such n-vectors over F is said to be closed under addition if the sum of any two of
them is a vector of the set. Similarly, the set is said to be closed under scalar multiplication
if every scalar multiple of a vector of the set is a vector of the set.
Example 1. (a) The set of all vectors [x1, x2, x3]' of ordinary space having equal components (x1 = x2 = x3) is closed under both addition and scalar multiplication. For the sum of any two of the vectors and k times any vector (k real) are again vectors having equal components.

   (b) The set of all vectors [x1, x2, x3]' of ordinary space is closed under addition and scalar multiplication.
VECTOR SPACES. Any set of n-vectors over F which is closed under both addition and scalar multiplication is called a vector space. Thus, if X1, X2, ..., Xm are n-vectors over F, the set of all linear combinations

(11.1)   k1·X1 + k2·X2 + ... + km·Xm    (k_i in F)

is a vector space over F.
The vectors X1 and X2 are linearly independent; they span a subspace (the plane π) of S which contains every vector hX1 + kX2, where h and k are real numbers. The vector X1 spans a subspace (the line L) of S which contains every vector hX1, where h is a real number.
See Problem 1.

The plane π is of dimension 2; the line L is of dimension 1. A vector space of dimension r consisting of n-vectors will be denoted by V_n^r(F); when r = n, we write simply V_n(F).

Expressing an arbitrary vector of S in terms of X1, X2, X3 requires the solution of a system of three linear equations in three unknowns; when X1, X2, X3 are linearly independent, this system has a unique solution and the vectors X1, X2, X3 are a basis of S. The vectors X1, X2, X4 are not a basis of S. (Show this.) They span the subspace π of Example 2, whose basis is the set X1, X2.

The discussion above may be stated as:

   I. If X1, X2, ..., Xm are a set of n-vectors over F and if r is the rank of the m×n matrix of their components, then from the set r linearly independent vectors may be selected. These r vectors span a V_n^r(F) in which the remaining m - r vectors lie.
See Problems 2-3.
CHAP. 11] VECTOR SPACES 87
   II. If X1, X2, ..., Xm are m < n linearly independent n-vectors of V_n(F) and if X_(m+1), X_(m+2), ..., X_n are any n - m vectors of V_n(F) which together with X1, X2, ..., Xm form a linearly independent set, then the set X1, X2, ..., Xn is a basis of V_n(F).
See Problem 4.
   III. If X1, X2, ..., Xm are m < n linearly independent n-vectors over F, then the p vectors

      Y_j = Σ_{i=1}^{m} s_ij·X_i    (j = 1, 2, ..., p)

are linearly dependent if p > m or, when p ≤ m, if [s_ij] is of rank r < p.

   IV. If X1, X2, ..., Xn are linearly independent n-vectors over F, then the vectors

      Y_i = Σ_{j=1}^{n} a_ij·X_j    (i = 1, 2, ..., n)

are linearly independent if and only if [a_ij] is nonsingular.
IDENTICAL SUBSPACES. If V_n^h(F) and V_n^k(F) are two subspaces of V_n(F), they are identical if and only if each vector of V_n^h(F) is a vector of V_n^k(F) and conversely, that is, if and only if each is a subspace of the other.
See Problem 5.
SUM AND INTERSECTION OF TWO SPACES. Let V_n^h(F) and V_n^k(F) be two vector spaces. By their sum is meant the totality of vectors X + Y, where X is in V_n^h(F) and Y is in V_n^k(F). Clearly, this is a vector space; we call it the sum space V_n^s(F). The dimension s of the sum space of two vector spaces does not exceed the sum of their dimensions.

By the intersection of the two vector spaces is meant the totality of vectors common to the two spaces. Now if X is a vector common to the two spaces, so also is aX; likewise, if X and Y are common to the two spaces, so also is aX + bY. Thus, the intersection of two spaces is a vector space; we call it the intersection space V_n^t(F). The dimension t of the intersection space of two vector spaces cannot exceed the smaller of the dimensions of the two spaces.
Now 4X1 - X2 = X3; thus, X3 lies in both π1 and π2. The subspace (line L) spanned by X3 is then the intersection space of π1 and π2. Note that π1 and π2 are each of dimension 2, while their sum space is of dimension 3 and their intersection space is of dimension 1.
NULL SPACE AND NULLITY. The totality of solutions X of AX = 0 is a vector space, called the null space of A. If A has n columns and rank r_A, then by Theorem VI of Chapter 10 there exist N_A = n - r_A linearly independent solutions X1, X2, ..., X_(N_A) such that every solution of AX = 0 is a linear combination of them and every such linear combination is a solution.

A basis for the null space of A is any set of N_A linearly independent solutions of AX = 0.
See Problem 9.

The number N_A is called the nullity of A; thus

(11.2)   r_A + N_A = n
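Relation (11.2) is easy to check computationally. The sketch below (the code and names are mine, not the text's) computes the rank by row reduction and takes N_A = n - r_A for a sample 4-square matrix of rank 2:

```python
from fractions import Fraction

def rank(M):
    """Rank of M, by elementary row transformations."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1,1,3,3],[0,2,2,4],[1,0,2,1],[1,1,3,3]]   # a 4-square matrix of rank 2
n = len(A[0])
r_A = rank(A)
N_A = n - r_A        # the nullity, by (11.2)
```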
SYLVESTER'S LAWS OF NULLITY. If A and B are of order n and respective ranks r_A and r_B, the nullity of their product satisfies

(11.3)   N_AB ≥ N_A,   N_AB ≥ N_B,   N_AB ≤ N_A + N_B

ELEMENTARY VECTORS. The n-vectors

   E1 = [1, 0, 0, ..., 0]',  E2 = [0, 1, 0, ..., 0]',  ...,  En = [0, 0, ..., 0, 1]'

are called elementary or unit vectors over F. The elementary vector Ej, whose jth component is 1, is called the jth elementary vector. The elementary vectors E1, E2, ..., En constitute an important basis for V_n(F).
Every vector X = [x1, x2, ..., xn]' of V_n(F) can be expressed uniquely as the sum

      X = Σ_{i=1}^{n} x_i·E_i = x1·E1 + x2·E2 + ... + xn·En

of the elementary vectors. The components x1, x2, ..., xn of X are now called the coordinates of X relative to the E-basis. Hereafter, unless otherwise specified, we shall assume that a vector X is given relative to this basis.
Let Z1, Z2, ..., Zn be another basis of V_n(F). Then there exist unique scalars a1, a2, ..., an in F such that

      X = Σ_{i=1}^{n} a_i·Z_i = a1·Z1 + a2·Z2 + ... + an·Zn

These scalars a1, a2, ..., an are called the coordinates of X relative to the Z-basis. Writing X_Z = [a1, a2, ..., an]', we have

(11.5)   X = [Z1, Z2, ..., Zn]·X_Z = Z·X_Z

where Z denotes the matrix whose columns are the basis vectors Z1, Z2, ..., Zn.
Let W1, W2, ..., Wn be yet another basis of V_n(F). Suppose X_W = [b1, b2, ..., bn]', so that X = W·X_W. Then Z·X_Z = W·X_W and

(11.6)   X_W = W^(-1)·Z·X_Z = P·X_Z,   where P = W^(-1)·Z

Thus,

   VIII. If a vector of V_n(F) has coordinates X_Z and X_W respectively relative to two bases of V_n(F), then there exists a nonsingular matrix P, determined solely by the two bases and given by (11.6), such that X_W = PX_Z.
See Problem 12.
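The matrix P of (11.6) can be computed mechanically. In the sketch below (function names are my own, not the text's), the columns of Z and W are the two bases; P = W^(-1)Z then transports coordinates as in Theorem VIII:

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan inverse of a nonsingular matrix."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for i in range(n):
            if i != c:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# columns of Z and W are the two bases (sample bases, chosen for illustration)
Z = [[1,1,1],[1,0,1],[0,1,1]]
W = [[1,2,1],[1,2,2],[2,1,2]]
P = matmul(inverse(W), Z)      # X_W = P X_Z, as in (11.6)
```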
SOLVED PROBLEMS
1. The set of all vectors X = [x1, x2, x3, x4]', where x1 + x2 + x3 + x4 = 0, is a subspace V of V_4(F) since the sum of any two vectors of the set and any scalar multiple of a vector of the set have components whose sum is zero, that is, are vectors of the set.
2. Since

   [1 3 1]
   [2 4 0]
   [2 4 0]
   [1 3 1]

is of rank 2, the vectors X1 = [1,2,2,1]', X2 = [3,4,4,3]', and X3 = [1,0,0,1]' span a V_4^2(F). Now any two of these vectors are linearly independent; hence, we may take X1 and X2, X1 and X3, or X2 and X3 as a basis of the V_4^2(F).
3. Since

   [1 4 2 4]
   [1 3 1 2]
   [1 2 0 0]
   [0 1 1 2]

is of rank 2, the vectors X1 = [1,1,1,0]', X2 = [4,3,2,1]', X3 = [2,1,0,1]', X4 = [4,2,0,2]' span a V_4^2(F). For a basis, we may take any two of the vectors except the pair X3, X4.

4. The space of Problem 3 is a V_4^2(F). For a basis of V_4(F) we may take X1, X2, X5 = [1,0,0,0]', and X6 = [0,1,0,0]', or X1, X2, X6 = [1,2,3,4]', and X7 = [1,3,6,8]', since the matrices [X1, X2, X5, X6] and [X1, X2, X6, X7] are of rank 4.
5. Let X1 = [1,2,1]', X2 = [1,2,3]', X3 = [3,6,5]', Y1 = [0,0,1]', Y2 = [1,2,5]' be vectors of V_3(F). Show that the space spanned by X1, X2, X3 and the space spanned by Y1, Y2 are identical.

First, we note that X1 and X2 are linearly independent while X3 = 2X1 + X2. Thus the X's span a space of dimension two, as do the linearly independent Y's. Since Y1 = (1/2)(X2 - X1) and Y2 = 2X2 - X1 lie in the space spanned by the X's, the two spaces are identical.
6. (a) If X = [x1, x2, x3]' lies in the V_3^2(F) spanned by X1 = [1,1,1]' and X2 = [3,4,2]', then the matrix

   [x1 1 3]
   [x2 1 4]
   [x3 1 2]

is of rank 2. This requires

   -2x1 + x2 + x3 = 0

so that the V_3^2(F) is the totality of solutions of this single homogeneous equation.

(b) If X = [x1, x2, x3, x4]' lies in the V_4^2(F) spanned by X1 = [1,1,2,3]' and X2 = [-1,0,2,1]', then [X, X1, X2] is of rank 2. Since the minor |1 -1; 1 0| ≠ 0, this requires

   |x1 1 -1; x2 1 0; x3 2 2| = 2x1 - 4x2 + x3 = 0   and   |x1 1 -1; x2 1 0; x4 3 1| = x1 - 4x2 + x4 = 0

These problems verify: Every V_n^k(F) may be defined as the totality of solutions over F of a system of n - k linearly independent homogeneous linear equations over F in n unknowns.
7. Prove: If two vector spaces V_n^h(F) and V_n^k(F) have V_n^s(F) as sum space and V_n^t(F) as intersection space, then h + k = s + t.

Suppose t = h; then V_n^h(F) is a subspace of V_n^k(F) and their sum space is V_n^k(F) itself. Thus s = k, t = h, and s + t = h + k. The reader will show that the same is true if t = k.

Suppose next that t < h, t < k, and let X1, X2, ..., Xt span V_n^t(F). Then by Theorem II there exist vectors Y_(t+1), Y_(t+2), ..., Y_h such that X1, X2, ..., Xt, Y_(t+1), ..., Y_h span V_n^h(F), and vectors Z_(t+1), Z_(t+2), ..., Z_k such that X1, X2, ..., Xt, Z_(t+1), ..., Z_k span V_n^k(F). Clearly the X's, Y's, and Z's together span V_n^s(F).

Now suppose there exist scalars a's and b's such that

(11.4)   Σ_{i=1}^{t} a_iX_i + Σ_{i=t+1}^{h} a_iY_i + Σ_{i=t+1}^{k} b_iZ_i = 0

Then Σ_{i=t+1}^{h} a_iY_i = -(Σ_{i=1}^{t} a_iX_i + Σ_{i=t+1}^{k} b_iZ_i). The vector on the left belongs to V_n^h(F) and, from the right member, belongs also to V_n^k(F); thus it belongs to V_n^t(F). But X1, X2, ..., Xt span V_n^t(F) while X1, ..., Xt, Y_(t+1), ..., Y_h are linearly independent; hence a_(t+1) = a_(t+2) = ... = a_h = 0.

Now from (11.4), Σ_{i=1}^{t} a_iX_i + Σ_{i=t+1}^{k} b_iZ_i = 0. But the X's and Z's are linearly independent, so that a1 = a2 = ... = at = b_(t+1) = b_(t+2) = ... = b_k = 0. Thus the X's, Y's, and Z's are a linearly independent set and are a basis of V_n^s(F). Then s = t + (h - t) + (k - t) = h + k - t, and s + t = h + k, as was to be proved.
8. Consider the sV_3^2(F) having X1 = [1,2,2]' and X2 = [-1,1,1]' as basis and the tV_3^2(F) having Y1 = [3,1,2]' and Y2 = [-1,0,1]' as basis. Since the matrix of the components

   [1 -1 3 -1]
   [2  1 1  0]
   [2  1 2  1]

is of rank 3, the sum space is V_3(F). From h + k = s + t, the intersection space is a V_3^1(F). To find a basis, equate linear combinations of the vectors of the two bases, aX1 + bX2 = cY1 + dY2, that is,

      a - b = 3c - d
      2a + b = c
      2a + b = 2c + d

Take d = 1 for convenience and solve, obtaining c = -1, a = -5/3, b = 7/3. Then aX1 + bX2 = [-4,-1,-1]' is a basis of the intersection space; the vector [4,1,1]' is also a basis.
9. Determine a basis for the null space of

   A = [1 1 3 3]
       [0 2 2 4]
       [1 0 2 1]
       [1 1 3 3]

Consider the system of equations AX = 0, which reduces to

      x1 + 2x3 + x4 = 0
      x2 + x3 + 2x4 = 0

A basis for the null space of A is the pair of linearly independent solutions [-1,-2,0,1]' and [-2,-1,1,0]' of these equations.
10. Prove: r_AB ≥ r_A + r_B - n.

Suppose first that A has the form [I 0; 0 0], with I of order r_A. Then the first r_A rows of AB are the first r_A rows of B while the remaining rows are zeros. By Problem 10, Chapter 5, the rank of AB is r_AB ≥ r_A + r_B - n.

Suppose next that A is not of the above form. Then there exist nonsingular matrices P and Q such that PAQ has that form, while the rank of (PAQ)(Q^(-1)B) = PAB is exactly that of AB (why?); the preceding case then applies.
11. Let X = [1,2,1]' relative to the E-basis. Find its coordinates relative to a new basis Z1 = [1,1,0]', Z2 = [1,0,1]', Z3 = [1,1,1]'.

We require scalars a, b, c such that aZ1 + bZ2 + cZ3 = X, that is, such that

      a + b + c = 1
      a + c = 2
      b + c = 1

The solution is a = 0, b = -1, c = 2; hence X_Z = [0, -1, 2]'.
12. Let X_Z and X_W be the coordinates of a vector X with respect to the two bases Z1 = [1,1,0]', Z2 = [1,0,1]', Z3 = [1,1,1]' and W1 = [1,1,2]', W2 = [2,2,1]', W3 = [1,2,2]'. Determine the matrix P such that X_W = PX_Z.

By (11.6),

   P = W^(-1)·Z = (1/3)[2 -3 2; 2 0 -1; -3 3 0]·[1 1 1; 1 0 1; 0 1 1] = (1/3)[-1 4 1; 2 1 1; 0 -3 0]
SUPPLEMENTARY PROBLEMS
13. Let [x1, x2, x3, x4]' be an arbitrary vector of V_4(R), where R denotes the field of real numbers. Which of the following sets are subspaces of V_4(R)?
(a) All vectors with x1 = x2 = x3 = x4.   (d) All vectors with x1 = 1.
14. Show that [1,1,1,1]' and [2,3,3,2]' are a basis of the V_4^2(F) of Problem 2.
15. Determine the dimension of the vector space spanned by each set of vectors. Select a basis for each.

   (a) [1,2,3,4,5]'      (b) [1,2,3,4]'      (c) [1,1,1,1]'
       [5,4,3,2,1]'          [3,4,5,6]'          [1,1,0,1]'
       [1,1,1,1,1]'                              [2,3,3,3]'
                                                 [1,0,1,2]'
16. (a) Show that the vectors X1 = [1,1,1]' and X2 = [3,4,2]' span the same space as Y1 = [9,5,13]' and Y2 = [17,11,23]'.
(b) Show that the vectors X1 = [1,1,1]' and X2 = [3,4,2]' do not span the same space as Y1 = [2,2,2]' and Y2 = [4,3,1]'.
17. Show that if the set X1, X2, ..., Xk is a basis for a V_n^k(F), then any other vector Y of the space can be represented uniquely as a linear combination of X1, X2, ..., Xk.
Hint. Assume Y = Σ a_iX_i = Σ b_iX_i and subtract.
18. Consider the 4x4 matrix whose columns are the vectors of a basis of the V_4^2(R) of Problem 2 and a basis of the V_4^2(R) of Problem 3. Show that the rank of this matrix is 4; hence, V_4(R) is the sum space and the zero space is the intersection space of the two subspaces.

19. Follow the proof given in Problem 8, Chapter 10, to prove Theorem III.
20. Show that the space spanned by [1,0,0,0,0]', [0,0,0,0,1]', [1,0,1,0,0]', [0,0,1,0,0]', [1,0,0,1,1]' and the space spanned by [1,0,0,0,1]', [0,1,0,1,0]', [0,1,2,1,0]', [1,0,1,0,1]', [0,1,1,1,0]' are of dimensions 4 and 3, respectively. Show that [1,0,1,0,1]' and [1,0,2,0,1]' are a basis for the intersection space.
21. Find, relative to the basis Z1 = [1,1,2]', Z2 = [2,2,1]', Z3 = [1,2,2]', the coordinates of the vectors (a) [1,1,0]', (b) [1,0,1]', (c) [1,1,1]'.
Ans. (a) [-1/3, 2/3, 0]', (b) [4/3, 1/3, -1]', (c) [1/3, 1/3, 0]'
22. Find, relative to the basis Z1 = [0,1,0]', Z2 = [1,1,1]', Z3 = [3,2,1]', the coordinates of the vectors (a) [2,-1,0]', (b) [1,-3,5]', (c) [0,0,1]'.
Ans. (a) [-2,-1,1]', (b) [-6,7,-2]', (c) [-1/2, 3/2, -1/2]'
23. Let X_Z and X_W be the coordinates of a vector X with respect to a given pair of bases. Determine the matrix P such that X_W = PX_Z.
24. Prove: If P_j is a solution of AX = E_j, (j = 1, 2, ..., n), then Σ_{j=1}^{n} h_jP_j is a solution of AX = H, where H = [h1, h2, ..., hn]'.
Hint. H = h1·E1 + h2·E2 + ... + hn·En.
25. The vector space defined by all linear combinations of the columns of a matrix A is called the column space of A. The vector space defined by all linear combinations of the rows of A is called the row space of A. Show that the columns of AB are in the column space of A and the rows of AB are in the row space of B.
26. Show that AX = H, a system of m non-homogeneous equations in n unknowns, is consistent if and only if the vector H belongs to the column space of A.
27. Determine a basis for the null space of (a) A = [1 -1 0; 0 1 -1], (b) A = [1 1 1 1; 1 2 1 2; 3 4 3 4].
Ans. (a) [1,1,1]', (b) [-1,1,1,-1]', [-1,2,1,-2]'
29. Derive a procedure for Problem 16 using only column transformations on A = [X1, X2, Y1, Y2]. Then resolve Problem 5.
chapter 12
Linear Transformations
DEFINITION. Let X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' be two vectors of V_n(F), their coordinates being relative to the same basis of the space. Suppose that the coordinates of X and Y are related by

(12.1)   y1 = a11·x1 + a12·x2 + ... + a1n·xn
         y2 = a21·x1 + a22·x2 + ... + a2n·xn
         ..........................................
         yn = an1·x1 + an2·x2 + ... + ann·xn

or, briefly, Y = AX, where A = [a_ij] is over F. Then (12.1) is a transformation T which carries any vector X of V_n(F) into (usually) another vector Y of the same space, called its image.
If Y1 and Y2 are the images of X1 and X2 respectively, then (a) the transformation carries kX1 into kY1 for every scalar k, and (b) it carries aX1 + bX2 into aY1 + bY2 for every pair of scalars a and b. For this reason, the transformation is called linear.
Example 1. Consider the linear transformation Y = AX of V_3(R) with A = [1 1 2; 1 2 5; 1 3 3]. To find the vector X whose image is Y = [2, 0, 5]', reduce the augmented matrix of AX = Y:

   Since [1 1 2 | 2]      [1 0 0 | 13/5]
         [1 2 5 | 0]  ~   [0 1 0 | 11/5],     X = [13/5, 11/5, -7/5]'.
         [1 3 3 | 5]      [0 0 1 | -7/5]
BASIC THEOREMS. If in (12.1) X = [1, 0, ..., 0]' = E1, then Y = [a11, a21, ..., an1]'; in general, if X = Ej, then Y = [a1j, a2j, ..., anj]'. Hence,

   I. A linear transformation (12.1) is uniquely determined when the images (Y's) of the basis vectors are known, the respective columns of A being the coordinates of the images of these vectors.   See Problem 1.
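Theorem I amounts to one line of code: place the given images side by side as columns. A sketch (the function names are mine, not the text's), using the images from Problem 1 below:

```python
# Theorem I: the columns of A are the images of the elementary vectors.
def transformation_from_images(images):
    """Build A = [Y1, Y2, ..., Yn] from the images Yi of the basis vectors Ei."""
    return [list(col) for col in zip(*images)]   # place each Yi as a column

def apply(A, X):
    """Compute the image Y = AX."""
    return [sum(a * x for a, x in zip(row, X)) for row in A]

# E1 -> [1,2,3]', E2 -> [3,1,2]', E3 -> [2,1,3]'
A = transformation_from_images([[1,2,3],[3,1,2],[2,1,3]])
```

By construction, `apply(A, E1)` returns the first image again, and any other vector's image follows by linearity.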
CHAP. 12] LINEAR TRANSFORMATIONS 95
II. A linear transformation (12.1) is nonsingular if and only if A, the matrix of the
transformation, is nonsingular. See Problem 2.
The inverse transformation

      X = A^(-1)·Y

carries the set of vectors Y1, Y2, ..., Yn, whose components are the columns of A, into the basis vectors of the space. It is also a linear transformation.
   V. The elementary vectors E_i of V_n(F) may be transformed into any set of n linearly independent n-vectors by a nonsingular linear transformation, and conversely.

   VII. When any two sets of n linearly independent n-vectors are given, there exists a nonsingular linear transformation which carries the vectors of one set into the vectors of the other.
CHANGE OF BASIS. Relative to a Z-basis, let Y_Z = AX_Z be a linear transformation of V_n(F). Suppose now that the basis is changed, and let X_W and Y_W be the coordinates of X and Y respectively relative to the new basis. By Theorem VIII, Chapter 11, there exists a nonsingular matrix P such that X_W = PX_Z and Y_W = PY_Z or, setting P^(-1) = Q, such that X_Z = QX_W and Y_Z = QY_W. Then Y_W = PY_Z = PAX_Z = PAQX_W = BX_W, where

(12.2)   B = PAQ = Q^(-1)AQ
Note. Since Q = P^(-1), (12.2) might have been written as B = PAP^(-1). A study of similar matrices will be made later; there we shall agree to write B = R^(-1)AR instead of B = SAS^(-1), but for no compelling reason.
Example. Let Y = AX be a linear transformation of V_3(R) and let W1 = [1,2,1]', W2 = [1,-1,2]', W3 = [1,-1,-1]' be a new basis. Given the vector X = [3,0,2]',
(a) find the coordinates of its image relative to the W-basis,
(b) find the linear transformation Y_W = BX_W corresponding to Y = AX,
(c) use the result of (b) to find the image Y_W of X_W = [1,3,3]'.

(a) Relative to the W-basis, the vector X = [3,0,2]' has coordinates X_W = W^(-1)X = [1,1,1]', and the coordinates of its image are Y_W = W^(-1)Y = W^(-1)AX.

(b) Y_W = W^(-1)AX = (W^(-1)AW)X_W = BX_W, where B = W^(-1)AW.

(c) The required image is Y_W = B·[1,3,3]'.

See Problem 5.
SOLVED PROBLEMS
1. (a) Set up the linear transformation Y = AX which carries E1 into Y1 = [1,2,3]', E2 into Y2 = [3,1,2]', and E3 into Y3 = [2,1,3]'.
(b) Find the images of X1 = [1,1,1]', X2 = [3,-1,4]', and X3 = [4,0,5]'.
(c) Show that X1 and X2 are linearly independent, as also are their images.
(d) Show that X1, X2, and X3 are linearly dependent, as also are their images.

(a) By Theorem I, A = [Y1, Y2, Y3] = [1 3 2; 2 1 1; 3 2 3]; the equation of the linear transformation is Y = AX.

(b) The image of X1 = [1,1,1]' is AX1 = [6,4,8]'. The image of X2 is [8,9,19]' and the image of X3 is [14,13,27]'.

(c) The rank of [X1, X2] = [1 3; 1 -1; 1 4] is 2, as also is that of the matrix of images [6 8; 4 9; 8 19]. Thus, X1 and X2 are linearly independent, as are their images.

(d) We may compare the ranks of [X1, X2, X3] and of the matrix of their images; however, X3 = X1 + X2 and [14,13,27]' = [6,4,8]' + [8,9,19]', so that both sets are linearly dependent.
2. Prove Theorem II. Suppose A is nonsingular and that the images of X1 ≠ X2 are Y = AX1 = AX2. Then A(X1 - X2) = 0, and the system of homogeneous linear equations AX = 0 has the non-trivial solution X = X1 - X2. This is possible if and only if |A| = 0, a contradiction of the hypothesis that A is nonsingular.
3. Prove: If A is nonsingular, the images Y_i = AX_i of linearly independent vectors X1, X2, ..., Xp are linearly independent.

Assume the contrary, that is, suppose the images are linearly dependent. Then there exist scalars s1, s2, ..., sp, not all zero, such that

      Σ_{i=1}^{p} s_iY_i = s1·Y1 + s2·Y2 + ... + sp·Yp = 0

Then Σ_{i=1}^{p} s_i(AX_i) = A(s1·X1 + s2·X2 + ... + sp·Xp) = 0. Since A is nonsingular, s1·X1 + s2·X2 + ... + sp·Xp = 0. But this is contrary to the hypothesis that the X_i are linearly independent. Hence, the Y_i are linearly independent.
4. By Theorem I, the linear transformation carrying E1, E2, E3 into given images Y1 = [1,2,2]', Y2 = [1,3,1]', Y3 = [1,1,1]' is

      Y = [Y1, Y2, Y3]X = [1 1 1; 2 3 1; 2 1 1]X
5. If Y_Z = AX_Z = [1 1 2; 2 2 1; 3 1 2]X_Z is a linear transformation relative to the Z-basis of Problem 12, Chapter 11, find the same transformation Y_W = BX_W relative to the W-basis of that problem.

From Problem 12, Chapter 11, P = (1/3)[-1 4 1; 2 1 1; 0 -3 0]; then X_Z = QX_W and Y_Z = QY_W, where

      Q = P^(-1) = [-1 1 -1; 0 0 -1; 2 1 3]

Substituting in Y_Z = AX_Z gives QY_W = AQX_W, and

      Y_W = Q^(-1)AQ·X_W = (1/3)[-2 14 -6; 7 14 9; 0 -9 3]X_W
SUPPLEMENTARY PROBLEMS
6. In Problem 1 show: (a) the transformation is nonsingular, (b) X = A^(-1)Y carries the column vectors of A into the elementary vectors.

7. Using the transformation of Problem 1, find (a) the image of X = [1,1,2]', (b) the vector X whose image is [2,5,5]'.
Ans. (a) [8,5,11]', (b) [3,1,-2]'
9. Set up the linear transformation which carries E1 into [1,2,3]', E2 into [3,1,2]', and E3 into [2,-1,-1]'. Show that the transformation is singular and carries the linearly independent vectors [1,1,1]' and [2,0,2]' into the same image vector.
10. Suppose (12.1) is nonsingular and show that if X1, X2, ..., Xp are linearly dependent, so also are their images Y1, Y2, ..., Yp.
11. Use Theorem III to show that under a nonsingular transformation the dimension of a vector space is unchanged. Hint. Consider the images of a basis of V_n^r(F).
12. Given the linear transformation Y = [1 1 0; 2 3 1; -2 3 5]X, show (a) it is singular, (b) the images of the linearly independent vectors X1 = [1,1,1]', X2 = [2,1,2]', X3 = [1,2,3]' are linearly dependent, (c) the image of V_3(R) is a V_3^2(R).
13. Given the linear transformation Y = [1 1 3; 1 2 4; 1 1 3]X, show (a) it is singular, (b) the image of every vector of the V_3^2(R) spanned by [1,1,1]' and [3,2,0]' lies in the V_3^1(R) spanned by [5,7,5]'.
16. Let
                 [1 2 3]
        Y = AX = [3 2 1] X
                 [1 1 1]
    be a linear transformation relative to the E-basis and let a new basis, say Z1 = [1,1,0]', Z2 = [1,0,1]', Z3 = [1,1,1]', be chosen. Let X = [1,2,3]' relative to the E-basis. Show that
    (a) Y = [14,10,6]' is the image of X under the transformation,
    (b) X, when referred to the new basis, has coordinates XZ = [-2,-1,4]' and Y has coordinates YZ = [8,4,2]',
    (c) XZ = PX and YZ = PY, where
            [ 1  0 -1]
        P = [ 1 -1  0]  =  [Z1, Z2, Z3]⁻¹
            [-1  1  1]
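The coordinate change in Problem 16 is easy to check mechanically. The sketch below (plain Python with exact rational arithmetic; the matrix and basis are those stated in Problem 16) solves [Z1, Z2, Z3]a = X for the new coordinates instead of forming P explicitly:

```python
from fractions import Fraction

def solve(M, b):
    """Solve M a = b by Gauss-Jordan elimination over the rationals."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)   # pivot row
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [A[i][n] for i in range(n)]

# Columns are the new basis vectors Z1, Z2, Z3.
Z = [[1, 1, 1],
     [1, 0, 1],
     [0, 1, 1]]
X = [1, 2, 3]
A = [[1, 2, 3], [3, 2, 1], [1, 1, 1]]
Y = [sum(a * x for a, x in zip(row, X)) for row in A]   # image of X: [14, 10, 6]
X_Z = solve(Z, X)   # coordinates of X relative to the Z-basis: [-2, -1, 4]
Y_Z = solve(Z, Y)   # coordinates of Y relative to the Z-basis: [8, 4, 2]
```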
17. Given the linear transformation YW = A XW relative to the W-basis W1 = [0,1,2]', W2 = [4,1,0]', W3 = [2,0,4]', find the representation relative to the Z-basis Z1 = [1,1,1]', Z2 = [1,0,1]', Z3 = [1,2,1]'.
    Ans.
        [1 0 3]
        [2 2 5]
        [1 0 2]
18. If, in the linear transformation Y = AX, A is singular, then the null space of A is the vector space each of whose vectors transforms into the zero vector. Determine the null space of the transformation of (a) Problem 12, (b) Problem 13,
            [1 2 3]
    (c) Y = [2 4 6] X.
            [3 6 9]
19. If Y = AX carries every vector of a vector space Vh into a vector of that same space, Vh is called an invariant space of the transformation. Show that in the real space V3(R) under the linear transformation
            [1 1  ]
    (a) Y = [1 2  ] X, the V1 spanned by [1,1,0]', the V1 spanned by [2,1,2]', and the V2 spanned by these two vectors are invariant spaces,
            [2 2 3]
            [2 2 1]
    (b) Y = [1 3 1] X, the V1 spanned by [1,1,1]' and the V2 spanned by [1,0,-1]' and [2,-1,0]' are invariant spaces,
            [1 2 2]
    (c) Y = AX, A the given 4-square matrix, the V1 spanned by [1,1,1,1]' is an invariant vector space.
(c) Prove: If P1 and P2 are permutation matrices so also are P3 = P1P2 and P4 = P2P1.
(d) Prove: If P is a permutation matrix so also is P', and PP' = I.
(e) Show that each permutation matrix P can be expressed as a product of a number of the elementary column matrices K12, K23, ..., K(n-1,n).
(f) Write P = [Ei1, Ei2, ..., Ein], where i1, i2, ..., in is a permutation of 1, 2, ..., n and the Ei are the elementary n-vectors. Find a rule (other than P⁻¹ = P') for writing P⁻¹. For example, when n = 4 and P = [E3, E1, E4, E2], then P⁻¹ = [E2, E4, E1, E3]; when P = [E4, E2, E1, E3], then P⁻¹ = [E3, E2, E4, E1].
chapter 13
INNER PRODUCT. In this chapter all vectors are real and Vn(R) is the space of all real n-vectors. If X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' are two vectors of Vn(R), their inner product is defined to be the scalar
    X·Y = x1y1 + x2y2 + ... + xnyn
Example 1. (a) X1·X2 = 1·2 + 1·1 + 1·2 = 5
ORTHOGONAL VECTORS. Two vectors X and Y of Vn(R) are said to be orthogonal if their inner product is 0. The vectors X2 and X3 of Example 1 are orthogonal.
THE LENGTH OF A VECTOR X of Vn(R), denoted by ||X||, is defined as the square root of the inner product of X and X; thus,
    ||X|| = √(X·X) = √(x1² + x2² + ... + xn²)
CHAP. 13] VECTORS OVER THE REAL FIELD 101
A vector X whose length is ||X|| = 1 is called a unit vector. The elementary vectors Ei are examples of unit vectors.
THE SCHWARZ INEQUALITY. If X and Y are vectors of Vn(R), then |X·Y| ≤ ||X||·||Y||; that is, the numerical value of the inner product of two real vectors is at most the product of their lengths.
See Problem 3.
THE TRIANGLE INEQUALITY. If X and Y are vectors of Vn(R), then ||X + Y|| ≤ ||X|| + ||Y||.
ORTHOGONAL VECTORS AND SPACES. If X1, X2, ..., Xm are m ≤ n mutually orthogonal nonzero n-vectors and if c1X1 + c2X2 + ... + cmXm = 0, then for i = 1, 2, ..., m,
    (c1X1 + c2X2 + ... + cmXm)·Xi = ci(Xi·Xi) = 0
Since this requires ci = 0 for i = 1, 2, ..., m, we have
    I. Any set of m ≤ n mutually orthogonal nonzero n-vectors is a linearly independent set and spans a vector space Vm(R).
    III. If Vh(R) is a subspace of Vk(R), k > h, there exists at least one vector X of Vk(R) which is orthogonal to Vh(R).
See Problem 5.
Since mutually orthogonal vectors are linearly independent, a vector space Vm(R), m > 0, can contain no more than m mutually orthogonal vectors. Suppose we have found r < m mutually orthogonal vectors of a Vm(R). They span a Vr(R), a subspace of Vm(R), and, by Theorem III, there exists at least one vector of Vm(R) which is orthogonal to the Vr(R). We now have r+1 mutually orthogonal vectors of Vm(R) and, by repeating the argument, we show
    IV. Every vector space Vm(R), m > 0, contains m but not more than m mutually orthogonal vectors.
Two vector spaces are said to be orthogonal if every vector of one is orthogonal to every vector of the other space. For example, the space spanned by X1 = [1,0,0,1]' and X2 = [0,1,1,0]' is orthogonal to the space spanned by X3 = [1,0,0,-1]' and X4 = [0,1,-1,0]', since (aX1 + bX2)·(cX3 + dX4) = 0 for all a, b, c, d.
    V. The set of all vectors orthogonal to every vector of a given Vm(R) is a unique vector space Vn-m(R).
See Problem 6.
We may associate with any vector X ≠ 0 a unique unit vector U obtained by dividing the components of X by ||X||. This operation is called normalization. Thus, to normalize the vector X = [2,4,4]', divide each component by ||X|| = √(4 + 16 + 16) = 6 and obtain the unit vector [1/3, 2/3, 2/3]'.
A basis of Vm(R) which consists of mutually orthogonal vectors is called an orthogonal basis of the space; if the mutually orthogonal vectors are also unit vectors, the basis is called a normal orthogonal or orthonormal basis. The elementary vectors are an orthonormal basis of Vn(R).
See Problem 7.
THE GRAM-SCHMIDT PROCESS. Let X1, X2, ..., Xm be a basis of Vm(R). Define
    Y1 = X1
    Y2 = X2 - (Y1·X2 / Y1·Y1) Y1
    Y3 = X3 - (Y1·X3 / Y1·Y1) Y1 - (Y2·X3 / Y2·Y2) Y2
    ..........................................................
    Ym = Xm - (Y1·Xm / Y1·Y1) Y1 - (Y2·Xm / Y2·Y2) Y2 - ... - (Y(m-1)·Xm / Y(m-1)·Y(m-1)) Y(m-1)
Then the unit vectors Gi = Yi / ||Yi||, (i = 1, 2, ..., m), are mutually orthogonal and are an orthonormal basis of Vm(R).
Example 3. Construct, using the Gram-Schmidt process, an orthogonal basis of V3(R), given the basis X1 = [1,1,1]', X2 = [1,-2,1]', X3 = [1,2,3]'.
    (i)   Y1 = X1 = [1,1,1]'
    (ii)  Y2 = X2 - (Y1·X2 / Y1·Y1) Y1 = X2 - 0·Y1 = [1,-2,1]'
    (iii) Y3 = X3 - (Y1·X3 / Y1·Y1) Y1 - (Y2·X3 / Y2·Y2) Y2 = [1,2,3]' - 2[1,1,1]' - 0·[1,-2,1]' = [-1,0,1]'
The vectors Gi = Yi / ||Yi|| are an orthonormal basis of V3(R). Each vector Gi is a unit vector and each product Gi·Gj = 0, i ≠ j. Note that Y2 = X2 here because X1 and X2 are orthogonal vectors.
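Example 3 can be replayed in exact arithmetic. A Python sketch (the basis is read as X1 = [1,1,1]', X2 = [1,-2,1]', X3 = [1,2,3]', the sign in X2 being forced by the note that X1 and X2 are orthogonal):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthogonalize the given vectors, in order, by the Gram-Schmidt process."""
    ys = []
    for x in basis:
        y = [Fraction(c) for c in x]
        for p in ys:
            c = dot(p, y) / dot(p, p)          # coefficient of the projection on p
            y = [yi - c * pi for yi, pi in zip(y, p)]
        ys.append(y)
    return ys

Y = gram_schmidt([[1, 1, 1], [1, -2, 1], [1, 2, 3]])
# Y[0] = [1,1,1], Y[1] = [1,-2,1] (X2 already orthogonal to Y1), Y[2] = [-1,0,1]
```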
Let X1, X2, ..., Xm be a basis of a Vm(R) and suppose that X1, X2, ..., Xs, (1 ≤ s < m), are mutually orthogonal. Then, by the Gram-Schmidt process, we may obtain an orthogonal basis Y1, Y2, ..., Ym of the space of which, it is easy to show, Yi = Xi, (i = 1, 2, ..., s). Thus,
    VI. If X1, X2, ..., Xs, (1 ≤ s < m), are mutually orthogonal unit vectors of a Vm(R), there exist unit vectors X(s+1), ..., Xm in the space such that the set X1, X2, ..., Xm is an orthonormal basis.
THE GRAMIAN. Let X1, X2, ..., Xp be a set of real n-vectors and define the Gramian matrix

                [X1·X1  X1·X2  ...  X1·Xp]
    (13.8)  G = [X2·X1  X2·X2  ...  X2·Xp]
                [ ...    ...   ...   ... ]
                [Xp·X1  Xp·X2  ...  Xp·Xp]
ORTHOGONAL MATRICES. An n-square matrix A is called orthogonal if
    (13.9)  AA' = A'A = I
that is, if A' = A⁻¹.
From (13.9) it is clear that the column vectors (row vectors) of an orthogonal matrix A are mutually orthogonal unit vectors.
    VIII. If the real n-square matrix A is orthogonal, its column vectors (row vectors) are an orthonormal basis of Vn(R), and conversely.
IX. The inverse and the transpose of an orthogonal matrix are orthogonal.
(13.10) Y = AX
be a linear transformation in Vn(R) and let the images of the n-vectors X1 and X2 be denoted by Y1 and Y2 respectively. From (13.4) we have
Comparing right and left members, we see that if (13.10) preserves lengths it preserves inner
products, and conversely. Thus,
    XII. A linear transformation preserves lengths if and only if it preserves inner products.
XIII. A linear transformation preserves lengths if and only if its matrix is orthogonal.
Example. The linear transformation

                 [ 1/√2   1/√6   1/√3]
        Y = AX = [   0   -2/√6   1/√3] X
                 [-1/√2   1/√6   1/√3]

is orthogonal. The image of X = [a,b,c]' is
        Y = [a/√2 + b/√6 + c/√3,  -2b/√6 + c/√3,  -a/√2 + b/√6 + c/√3]'
and ||Y|| = ||X||.
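Orthogonality of a matrix like the one in this example is quickly verified numerically. A floating-point sketch (entries as reconstructed above) checks A'A = I and the length preservation of Theorem XIII on a sample vector:

```python
import math

s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
A = [[ 1/s2,  1/s6, 1/s3],
     [  0.0, -2/s6, 1/s3],
     [-1/s2,  1/s6, 1/s3]]

def col(M, j):
    return [row[j] for row in M]

# A'A = I: the columns are mutually orthogonal unit vectors.
for i in range(3):
    for j in range(3):
        d = sum(a * b for a, b in zip(col(A, i), col(A, j)))
        assert abs(d - (1.0 if i == j else 0.0)) < 1e-12

# An orthogonal transformation preserves lengths (Theorem XIII).
X = [1.0, -2.0, 3.0]
Y = [sum(a * x for a, x in zip(row, X)) for row in A]
assert abs(sum(y * y for y in Y) - sum(x * x for x in X)) < 1e-12
```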
SOLVED PROBLEMS
1. (a) X1·X2 = X1'X2 = [1,2,3][2,-3,4]' = 1(2) + 2(-3) + 3(4) = 8
2. (a) Show that X = [1/3, 2/3, 2/3]' and Y = [2/3, 1/3, -2/3]' are orthogonal.
   (b) Find a vector Z orthogonal to both X and Y.
   (a) X·Y = X'Y = [1/3, 2/3, 2/3][2/3, 1/3, -2/3]' = 2/9 + 2/9 - 4/9 = 0 and the vectors are orthogonal.
   (b) Write
                   [1/3   2/3  0]
       [X, Y, 0] = [2/3   1/3  0]
                   [2/3  -2/3  0]
       and compute the cofactors -2/3, 2/3, -1/3 of the elements of the column of zeros. Then, by (3.11), Z = [-2/3, 2/3, -1/3]' is orthogonal to both X and Y.
3. Prove the Schwarz Inequality: If X and Y are vectors of Vn(R), then |X·Y| ≤ ||X||·||Y||.
   Clearly, the theorem is true if X or Y is the zero vector. Assume then that X and Y are nonzero vectors. If a is any real number,
       ||aX + Y||² = (aX + Y)·(aX + Y) = a²(X·X) + 2a(X·Y) + Y·Y ≥ 0
   Now a quadratic polynomial in a is greater than or equal to zero for all real values of a if and only if its discriminant is less than or equal to zero. Thus,
       4(X·Y)² - 4(X·X)(Y·Y) ≤ 0,    (X·Y)² ≤ ||X||²·||Y||²,    and    |X·Y| ≤ ||X||·||Y||
4. Prove: If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, it is orthogonal to the space spanned by these vectors.
   Any vector of the space spanned by the X's can be written as a1X1 + a2X2 + ... + amXm. Then
       (a1X1 + a2X2 + ... + amXm)·Y = a1(X1·Y) + a2(X2·Y) + ... + am(Xm·Y) = 0
   since Xi·Y = 0, (i = 1, 2, ..., m). Thus, Y is orthogonal to every vector of the space and, by definition, is orthogonal to the space. In particular, if Y is orthogonal to every vector of a basis of a vector space, it is orthogonal to that space.
5. Prove Theorem III. Let X1, X2, ..., Xh be a basis of the Vh(R), let X(h+1) be a vector in the Vk(R) but not in the Vh(R), and consider the vector
       X = a1X1 + a2X2 + ... + ahXh + X(h+1)
   The condition that X be orthogonal to each of X1, X2, ..., Xh consists of h linear equations in the h unknowns a1, a2, ..., ah. Since the Xi are linearly independent, these equations have a solution, and the resulting vector X is the required vector.
6. Prove Theorem V. Let X1, X2, ..., Xk be a basis of the Vk(R). The n-vectors X orthogonal to each of the Xi satisfy the system of homogeneous equations
       (i)  X1·X = 0,  X2·X = 0,  ...,  Xk·X = 0
   Since the Xi are linearly independent, the coefficient matrix of the system (i) is of rank k; hence, there are n-k linearly independent solutions (vectors) which span a Vn-k(R). (See Theorem VI, Chapter 10.) Uniqueness follows from the fact that the intersection space of the Vk(R) and the Vn-k(R) is the zero space, so that the sum space is Vn(R).
7. Let X1, X2, ..., Xm be a given basis of Vm(R) and denote by Y1, Y2, ..., Ym the set of mutually orthogonal vectors to be found.
   (a) Take Y1 = X1.
   (b) Take Y2 = X2 + aY1. Since Y1 and Y2 are to be mutually orthogonal,
           Y1·Y2 = Y1·X2 + a(Y1·Y1) = 0
       and a = -(Y1·X2)/(Y1·Y1). Thus, Y2 = X2 - (Y1·X2 / Y1·Y1) Y1.
   (c) Take Y3 = X3 + aY2 + bY1. Since Y1, Y2, Y3 are to be mutually orthogonal,
           Y1·Y3 = Y1·X3 + b(Y1·Y1) = 0    and    Y2·Y3 = Y2·X3 + a(Y2·Y2) = 0
       Then b = -(Y1·X3)/(Y1·Y1) and a = -(Y2·X3)/(Y2·Y2), so that Y3 = X3 - (Y1·X3 / Y1·Y1) Y1 - (Y2·X3 / Y2·Y2) Y2. The argument continues through Ym.
9. Construct an orthonormal basis of V3(R), given the basis X1 = [2,1,3]', X2 = [1,2,3]', and X3 = [1,1,1]'.
   Take Y1 = X1 = [2,1,3]'. Then
       Y2 = X2 - (Y1·X2 / Y1·Y1) Y1 = [1,2,3]' - (13/14)[2,1,3]' = [-6/7, 15/14, 3/14]'
       Y3 = X3 - (Y1·X3 / Y1·Y1) Y1 - (Y2·X3 / Y2·Y2) Y2 = [1,1,1]' - (3/7)[2,1,3]' - (2/9)[-6/7, 15/14, 3/14]' = [1/3, 1/3, -1/3]'
   Normalizing the Y's, we have the orthonormal basis
       [2/√14, 1/√14, 3/√14]',  [-4/√42, 5/√42, 1/√42]',  [1/√3, 1/√3, -1/√3]'
10. Prove: A linear transformation preserves lengths if and only if its matrix is orthogonal.
    Let Y1, Y2 be the respective images of X1, X2 under the linear transformation Y = AX.
SUPPLEMENTARY PROBLEMS
11. Given the vectors X1 = [1,2,1]', X2 = [2,1,2]', X3 = [2,1,4]', find:
    (a) the inner product of each pair,
    (b) the length of each vector,
    (c) a vector orthogonal to X1 and X2; one orthogonal to X2 and X3.
14. Let X = [1,2,3,4]' and Y = [2,1,1,1]' be a basis of a V2(R) and let Z = [4,2,3,1]' lie in a V3(R) containing X and Y.
    (a) Show that Z is not in the V2(R).
    (b) Write W = aX + bY + cZ and find a vector W of the V3(R) orthogonal to both X and Y.
15. (a) Prove: A vector of Vn(R) is orthogonal to itself if and only if it is the zero vector.
    (b) Prove: If X1, X2, X3 are a set of linearly dependent nonzero n-vectors and if X1·X2 = X1·X3 = 0, then X2 and X3 are linearly dependent.
16. Prove: A vector X is orthogonal to every vector of a Vm(R) if and only if it is orthogonal to every vector of a basis of the space.

17. Prove: If two spaces Vh(R) and Vk(R) are orthogonal, their intersection space is the zero space.
19. Prove: ||X + Y|| = ||X|| + ||Y|| if and only if X and Y are linearly dependent.
21. Show that the vectors X, Y, Z of Problem 2 are an orthonormal basis of V3(R).
22. (a) Show that if X1, X2, ..., Xm are linearly independent so also are the unit vectors obtained by normalizing them.
    (b) Show that if the vectors of (a) are mutually orthogonal nonzero vectors, so also are the unit vectors obtained by normalizing them.
    (b) If A is orthogonal and |A| = -1, each element of A is equal to the negative of its cofactor in A.
25. Prove: If A and B commute and C is orthogonal, then C'AC and C'BC commute.

26. Prove that AA' (or A'A), where A is n-square, is a diagonal matrix if and only if the rows (or columns) of A are orthogonal.
28. Prove: If X and Y are n-vectors and A is n-square, then X·(AY) = (A'X)·Y.
29. Prove: If X1, X2, ..., Xn is an orthonormal basis and if X = c1X1 + c2X2 + ... + cnXn, then (a) X·Xi = ci, (i = 1, 2, ..., n); (b) X·X = c1² + c2² + ... + cn².
30. Find an orthonormal basis of V3(R), given (a) X1 = [3/√17, 2/√17, 2/√17]'; (b) [3,0,2]'.
    Ans. (a) X1, [0, 1/√2, -1/√2]', [-4/√34, 3/√34, 3/√34]'
         (b) [3/√13, 0, 2/√13]', [-2/√13, 0, 3/√13]', [0,1,0]'
31. Construct orthonormal bases of V3(R) by the Gram-Schmidt process, using the given vectors in order:
    (a) [1,1,0]', [2,1,2]', [1,1,2]'
    (b) [1,0,1]', [1,3,1]', [3,2,1]'
    (c) [2,1,0]', [4,1,0]', [4,0,1]'
    Ans. (a) [√2/2, √2/2, 0]', [√2/6, -√2/6, 2√2/3]', [-2/3, 2/3, 1/3]'
         (b) [√2/2, 0, √2/2]', [0,1,0]', [√2/2, 0, -√2/2]'
         (c) [2√5/5, √5/5, 0]', [√5/5, -2√5/5, 0]', [0,0,1]'
32. Obtain an orthonormal basis of V3(R), given X1 = [1,1,1]' and X2 = [2,1,0]'.
    Hint: Take Y1 = X1, obtain Y2 by the Gram-Schmidt process, and Y3 by the method of Problem 2(b).
    Ans. [√3/3, √3/3, √3/3]', [√2/2, 0, -√2/2]', [-√6/6, √6/3, -√6/6]'
34. Show in two ways that the vectors [1,2,3,4]', [1,1,2,3]', and [5,4,5,6]' are linearly dependent.
37. Prove: If A is an orthogonal matrix and if B = AP, where P is nonsingular, then PB⁻¹ is orthogonal.
38. In a transformation of coordinates from the E-basis to an orthonormal Z-basis with matrix P, Y = AX becomes YZ = P⁻¹AP·XZ or YZ = B·XZ (see Chapter 12). Show that if A is orthogonal so also is B, and conversely, to prove Theorem XIV.
40. Let X = [x1,x2,x3]' and Y = [y1,y2,y3]' be two vectors of V3(R) and define the vector product, X × Y, of X and Y as Z = X × Y = [z1, z2, z3]', where
        z1 = x2y3 - x3y2,    z2 = x3y1 - x1y3,    z3 = x1y2 - x2y1
    After identifying the zi as the cofactors of the elements of the third column of [X, Y, 0], establish:
    (a) The vector product of two linearly dependent vectors is the zero vector.
    (b) The vector product of two linearly independent vectors is orthogonal to each of the two vectors.
    (c) X × Y = -(Y × X)
    (d) (kX) × Y = k(X × Y) = X × (kY), k a scalar.
    (e) (X × Y)·(X × Y) = | X·X  X·Y |
                          | Y·X  Y·Y |
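Parts (a)-(e) are easy to spot-check numerically; a minimal Python sketch with an arbitrary integer pair:

```python
def cross(x, y):
    """Vector product of two 3-vectors, as defined in Problem 40."""
    return [x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

X, Y = [1, 2, 3], [4, 5, 6]
Z = cross(X, Y)
assert dot(Z, X) == 0 and dot(Z, Y) == 0                  # (b)
assert cross(X, [2*x for x in X]) == [0, 0, 0]            # (a) dependent pair
assert cross(Y, X) == [-z for z in Z]                     # (c)
assert cross([3*x for x in X], Y) == [3*z for z in Z]     # (d)
assert dot(Z, Z) == dot(X, X)*dot(Y, Y) - dot(X, Y)**2    # (e), Lagrange's identity
```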
chapter 14
COMPLEX NUMBERS. If x and y are real numbers and i is defined by the relation i² = -1, z = x + iy is called a complex number. The real number x is called the real part and the real number y is called the imaginary part of x + iy.
Two complex numbers are equal if and only if the real and imaginary parts of one are equal respectively to the real and imaginary parts of the other.
A complex number x + iy = 0 if and only if x = 0 and y = 0.
The conjugate of the complex number z = x + iy is given by z̄ = x - iy. The sum (product) of any complex number and its conjugate is a real number.
The absolute value |z| of the complex number z = x + iy is given by |z| = √(z·z̄) = √(x² + y²).
It follows immediately that for any complex number z = x + iy, |x| ≤ |z| and |y| ≤ |z|.
VECTORS. Let X be an n-vector over the complex field C. The totality of such vectors constitutes the vector space Vn(C). Since R is a subfield of C, it is to be expected that each theorem concerning vectors of Vn(C) will reduce to a theorem of Chapter 13 when only real vectors are considered.
If X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' are two vectors of Vn(C), their inner product is defined as
    X·Y = x̄1y1 + x̄2y2 + ... + x̄nyn
In particular, X·(Y + Z) = X·Y + X·Z.
CHAP. 14] VECTORS OVER THE COMPLEX FIELD 111
    II. If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, then it is orthogonal to the space spanned by these vectors.
    III. If Vh(C) is a subspace of Vk(C), k > h, then there exists at least one vector X in Vk(C) which is orthogonal to Vh(C).
    IV. Every vector space Vm(C), m > 0, contains m but not more than m mutually orthogonal vectors.
A basis of Vm(C) which consists of mutually orthogonal vectors is called an orthogonal basis. If the mutually orthogonal vectors are also unit vectors, the basis is called a normal or orthonormal basis.
THE GRAM-SCHMIDT PROCESS. Let X1, X2, ..., Xm be a basis for Vm(C). Define
    Y1 = X1
    Y2 = X2 - (Y1·X2 / Y1·Y1) Y1
(14.6)
    Y3 = X3 - (Y1·X3 / Y1·Y1) Y1 - (Y2·X3 / Y2·Y2) Y2
    ..........................................................
    Ym = Xm - (Y1·Xm / Y1·Y1) Y1 - ... - (Y(m-1)·Xm / Y(m-1)·Y(m-1)) Y(m-1)
The unit vectors Gi = Yi / ||Yi||, (i = 1, 2, ..., m), are an orthonormal basis for Vm(C).
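The process carries over directly to Python's built-in complex numbers. In the sketch below the inner product conjugates the first factor, the convention of this chapter, and the input vectors are those of Problem 12(b) below, as far as they can be read:

```python
def cdot(u, v):
    """Inner product of complex n-vectors, conjugating the first factor."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthogonalize the given complex vectors, in order, by (14.6)."""
    ys = []
    for x in basis:
        y = list(x)
        for p in ys:
            c = cdot(p, y) / cdot(p, p)
            y = [yi - c * pi for yi, pi in zip(y, p)]
        ys.append(y)
    return ys

ys = gram_schmidt([[1 + 1j, 1j, 1], [2, 1 - 2j, 2 + 1j], [1 - 1j, 0, 1j]])
for i in range(3):
    for j in range(i):
        assert abs(cdot(ys[j], ys[i])) < 1e-12    # mutually orthogonal
```

Dividing each resulting vector by its length gives the orthonormal basis Gi.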
THE GRAMIAN. Let X1, X2, ..., Xp be a set of n-vectors with complex elements and define the Gramian matrix G = [Xi·Xj], (i, j = 1, 2, ..., p).
UNITARY MATRICES. An n-square matrix A is called unitary if (Ā)'A = A(Ā)' = I, that is, if (Ā)' = A⁻¹. The column vectors (row vectors) of a unitary matrix are mutually orthogonal unit vectors.
    VIII. The inverse and the transpose of a unitary matrix are unitary.
The linear transformation
    (14.8)  Y = AX
where A is unitary, is called a unitary transformation.
    XI. A linear transformation preserves lengths (and hence, inner products) if and only if its matrix is unitary.
SOLVED PROBLEMS
As in the case of real vectors, the Inequality is true if X = 0 or Y = 0. When X and Y are nonzero vectors and a is real, then
    ||aX + Y||² = (aX + Y)·(aX + Y) = a²(X·X) + a(X·Y + Y·X) + Y·Y = a²||X||² + 2aR(X·Y) + ||Y||² ≥ 0
Since the quadratic function in a is nonnegative if and only if its discriminant is nonpositive,
Since B + iC is Hermitian, (B̄ + iC̄)' = B + iC; thus,
This is real if and only if BC + CB = 0, that is, BC = -CB; thus, if and only if B and C anticommute.
SUPPLEMENTARY PROBLEMS
6. Given the vectors X1 = [i, 2i, 1]', X2 = [1, 1+i, 0]', and X3 = [i, 1-i, 2]',
   (a) find X1·X2 and X1·X3,
   (b) find the length of each vector Xi,
   (c) show that [1-i, 1, -i]' is orthogonal to both X1 and X2,
   (d) find a vector orthogonal to both X2 and X3.
7. Show that [1+i, i, 1]', [i, 1-i, 0]', and [1-i, 1, 3i]' are both linearly independent and mutually orthogonal.
12. Using the relations (14.6) and the given vectors in order, construct an orthonormal basis for V3(C) when the vectors are
    (a) [0,1,1]', [1+i, 1, 1]', [1-i, 1, 1]'
    (b) [1+i, i, 1]', [2, 1-2i, 2+i]', [1-i, 0, i]'
13. Prove: If A is a matrix over the complex field, then A + Ā has only real elements and A - Ā has only pure imaginary elements.
    (b) Ā'A = I if and only if the columns of A are mutually orthogonal unit vectors.
16. Prove: If X and Y are n-vectors and A is n-square, then X·(AY) = (Ā'X)·Y.
18. Prove: If A is skew-Hermitian such that I + A is nonsingular, then B = (I - A)(I + A)⁻¹ is unitary.
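Problem 18's Cayley transform is easy to try on a small instance. In the sketch below (plain Python; the 2-square skew-Hermitian A is a hypothetical example of my own, satisfying (Ā)' = -A), B = (I - A)(I + A)⁻¹ comes out unitary:

```python
def mat2_mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat2_inv(A):
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[A[1][1]/d, -A[0][1]/d], [-A[1][0]/d, A[0][0]/d]]

A = [[0, 1 + 1j], [-1 + 1j, 0]]           # skew-Hermitian (hypothetical instance)
I = [[1, 0], [0, 1]]
ImA = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
IpA = [[I[i][j] + A[i][j] for j in range(2)] for i in range(2)]
B = mat2_mul(ImA, mat2_inv(IpA))          # B = (I - A)(I + A)^{-1}

# B is unitary: (B-bar)'B = I.
Bh = [[B[j][i].conjugate() for j in range(2)] for i in range(2)]
C = mat2_mul(Bh, B)
for i in range(2):
    for j in range(2):
        assert abs(C[i][j] - (1 if i == j else 0)) < 1e-12
```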
19. Use Problem 18 to form a unitary matrix, given
        (a) A = [  0    1+i]        (b) A = [  i    1+i]
                [-1+i    0 ]                [-1+i    i ]
20. Prove: If A and B are unitary and of the same order, then AB and BA are unitary.
21. Follow the proof in Problem 10, Chapter 13, to prove Theorem XI.
23. Show that
        [ (1+i)/2     i/√3    (3+i)/(2√15) ]
    A = [  -1/2       1/√3   (4+3i)/(2√15) ]
        [   1/2      -i/√3     5i/(2√15)   ]
    is unitary.
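Whatever the intended signs, a matrix of this shape is quickly tested for unitarity; the sketch below checks (Ā)'A = I for the entries as read above:

```python
import math

s3, s15 = math.sqrt(3), math.sqrt(15)
A = [[(1 + 1j)/2,   1j/s3,  (3 + 1j)/(2*s15)],
     [-1/2,          1/s3,  (4 + 3j)/(2*s15)],
     [ 1/2,         -1j/s3,       5j/(2*s15)]]

# (A-bar)'A = I: columns are mutually orthogonal unit vectors.
for i in range(3):
    for j in range(3):
        d = sum(A[k][i].conjugate() * A[k][j] for k in range(3))
        assert abs(d - (1 if i == j else 0)) < 1e-12
```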
24. Prove: If A is unitary and if B = AP, where P is nonsingular, then PB⁻¹ is unitary.
25. Prove: If A is unitary and I + A is nonsingular, then B = (I - A)(I + A)⁻¹ is skew-Hermitian.
chapter 15
Congruence
CONGRUENT MATRICES. Two n-square matrices A and B over F are called congruent over F if there exists a nonsingular matrix P over F such that
    (15.1)  B = P'AP
Clearly, congruence is a special case of equivalence, so that congruent matrices have the same rank.
When P is expressed as a product of elementary column matrices, P' is the product in reverse order of the corresponding elementary row matrices; that is, A and B are congruent provided A can be reduced to B by a sequence of pairs of elementary transformations, each pair consisting of a column transformation and the corresponding row transformation.
In Problem 1 we prove
    I. Every n-square symmetric matrix over F of rank r is congruent over F to a diagonal matrix whose first r diagonal elements are nonzero while all other elements are zero.
Example 1. Find a nonsingular matrix P with rational elements such that D = P'AP is diagonal, given
        [1  2  3  2]
    A = [2  3  5  8]
        [3  5  8 10]
        [2  8 10  8]
In reducing A to D, we use [A | I] and calculate en route the matrix P'. First we use H21(-2) and K21(-2), then H31(-3) and K31(-3), then H41(-2) and K41(-2) to obtain zeros in the first row and in the first column. Considerable time is saved, however, if the three row transformations are made first and then the three column transformations. If A is not then transformed into a symmetric matrix, an error has been made. We have

            [1  2  3  2 | 1 0 0 0]      [1  0  0  0 |  1 0 0 0]
    [A|I] = [2  3  5  8 | 0 1 0 0]   ~  [0 -1 -1  4 | -2 1 0 0]
            [3  5  8 10 | 0 0 1 0]      [0 -1 -1  4 | -3 0 1 0]
            [2  8 10  8 | 0 0 0 1]      [0  4  4  4 | -2 0 0 1]

        [1  0  0  0 |   1  0 0 0]
     ~  [0 -1  0  0 |  -2  1 0 0]   =   [D | P']
        [0  0 20  0 | -10  4 0 1]
        [0  0  0  0 |  -1 -1 1 0]
116 CONGRUENCE [CHAP. 15
Then
        [1 -2 -10 -1]
    P = [0  1   4 -1]      and    D = P'AP = diag(1, -1, 20, 0)
        [0  0   0  1]
        [0  0   1  0]
The matrix D to which A has been reduced is not unique. The additional transformations H2(3) and K2(3), for example, will replace D by the diagonal matrix diag(1, -9, 20, 0), while the transformations H3(1/2) and K3(1/2) replace D by diag(1, -1, 5, 0). There is, however, no pair of rational or real transformations which will replace D by a diagonal matrix having only nonnegative elements in the diagonal.
REAL SYMMETRIC MATRICES. Let the real symmetric matrix A be reduced by real elementary transformations to a congruent diagonal matrix D, that is, let P'AP = D. While the nonzero diagonal elements of D depend both on A and P, it will be shown in Chapter 17 that the number of positive nonzero diagonal elements depends solely on A.
By a sequence of row and the same column transformations of type 1, the diagonal elements of D may be rearranged so that the positive elements precede the negative elements. Then a sequence of real row and the same column transformations of type 2 may be used to reduce the diagonal matrix to one in which the nonzero diagonal elements are either +1 or -1. We have
    II. A real symmetric matrix of rank r is congruent over the real field to the canonical matrix
        (15.2)  C = diag(Ip, -I(r-p), 0)
The integer p of (15.2) is called the index of the matrix and s = p - (r - p) is called the signature.
Example 2. Applying the transformations H23, K23 and H2(1/2√5), K2(1/2√5) to the result of Example 1, we have

               [1 0  0 0 |  1     0     0    0  ]
    [D|P'] ~   [0 1  0 0 | -√5  2√5/5   0  √5/10]   =   [C|Q']
               [0 0 -1 0 | -2     1     0    0  ]
               [0 0  0 0 | -1    -1     1    0  ]

and Q'AQ = C. Thus, A is of rank r = 3, index p = 2, and signature s = 1.
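The two-sided reduction of Examples 1-2 can be automated when the pivots behave. A Python sketch in exact arithmetic (it reproduces, for the matrix of Example 1, a congruent diagonal matrix with diagonal 1, -1, 0, 20 in place and the invariants r = 3, p = 2, s = 1; the zero-pivot repairs of Problem 1 are not implemented):

```python
from fractions import Fraction

def congruent_diagonal(A):
    """Reduce symmetric A to diagonal D = P'AP by paired transformations
    H_rk(-c), K_rk(-c); returns (D, P') as lists of rows. Zero pivots are
    simply skipped (enough here; the awkward cases are treated in Problem 1)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    Pt = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        if M[k][k] == 0:
            continue
        for r in range(k + 1, n):
            c = M[r][k] / M[k][k]
            if c != 0:
                M[r] = [a - c * b for a, b in zip(M[r], M[k])]   # row transformation
                for row in M:
                    row[r] -= c * row[k]                         # same column transformation
                Pt[r] = [a - c * b for a, b in zip(Pt[r], Pt[k])]
    return M, Pt

A = [[1, 2, 3, 2], [2, 3, 5, 8], [3, 5, 8, 10], [2, 8, 10, 8]]
D, Pt = congruent_diagonal(A)
diag = [D[i][i] for i in range(4)]          # [1, -1, 0, 20]
r = sum(1 for d in diag if d != 0)          # rank 3
p = sum(1 for d in diag if d > 0)           # index 2
s = 2 * p - r                               # signature 1
```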
    III. Two n-square real symmetric matrices are congruent over the real field if and only if they have the same rank and the same index, or the same rank and the same signature.
In the real field the set of all n-square matrices of the type (15.2) is a canonical set over congruence for real n-square symmetric matrices.
    IV. Every n-square complex symmetric matrix of rank r is congruent over the field of complex numbers to the canonical matrix
        (15.3)  C = diag(Ir, 0)
Example 3. Applying the transformations H3(i) and K3(i) to the result of Example 2, we have

               [1 0 0 0 |  1     0     0    0  ]
    [C|Q'] ~   [0 1 0 0 | -√5  2√5/5   0  √5/10]   =   [diag(I3, 0) | R']
               [0 0 1 0 | -2i    i     0    0  ]
               [0 0 0 0 | -1    -1     1    0  ]

and R'AR = diag(1, 1, 1, 0).
See Problems 2-3.
    V. Two n-square complex symmetric matrices are congruent over the field of complex numbers if and only if they have the same rank.
In Problem 4, we prove
    VI. Every n-square skew-symmetric matrix A over F of rank 2t is congruent over F to a canonical matrix
        (15.4)  B = diag(D1, D2, ..., Dt, 0, ..., 0)
where
        Di = [ 0  1]
             [-1  0]  ,  (i = 1, 2, ..., t)
The rank of A is r = 2t.
See Problem 5.
There follows
    VIII. Two n-square skew-symmetric matrices over F are congruent over F if and only if they have the same rank.
The set of all matrices of the type (15.4) is a canonical set over congruence for n-square skew-symmetric matrices.
HERMITIAN MATRICES. Two n-square Hermitian matrices A and B are called Hermitely congruent, or conjunctive, if there exists a nonsingular matrix P such that
    (15.5)  B = P̄'AP
Thus,
    IX. Two n-square Hermitian matrices are conjunctive if and only if one can be obtained from the other by a sequence of pairs of elementary transformations, each pair consisting of a column transformation and the corresponding conjugate row transformation.
    X. Every n-square Hermitian matrix A of rank r is conjunctive to the canonical matrix
        (15.6)  C = diag(Ip, -I(r-p), 0)
The integer p of (15.6) is called the index of A and s = p - (r - p) is called the signature.
    XI. Two n-square Hermitian matrices are conjunctive if and only if they have the same rank and index, or the same rank and the same signature.
The reduction of an Hermitian matrix to the canonical form (15.6) follows the procedure of Problem 1 with attention to the proper pairs of elementary transformations. The troublesome case is covered in Problem 7.
See Problems 6-7.
SKEW-HERMITIAN MATRICES. If A is an n-square skew-Hermitian matrix, then H = iA is Hermitian. With P a nonsingular matrix such that
    P̄'HP = C = diag(Ip, -I(r-p), 0)
of (15.6), we have
    (15.7)  B = P̄'AP = diag(-iIp, iI(r-p), 0)
Thus,
    XIV. Two n-square skew-Hermitian matrices A and B are conjunctive if and only if they have the same rank while iA and iB have the same index.
See Problem 8.
SOLVED PROBLEMS
1. Prove: Every symmetric matrix over F of rank r can be reduced to a diagonal matrix having exactly r nonzero elements in the diagonal.
   Suppose the symmetric matrix A = [aij] is not diagonal. If a11 ≠ 0, a sequence of pairs of elementary transformations of type 3, each pair consisting of a row transformation and the same column transformation, will reduce A to
       [a11  0 ]
       [ 0   A1]
   in which A1 is symmetric of order n-1. Now the continued reduction is routine so long as the diagonal elements b22, c33, ... which present themselves in turn are different from zero. Suppose then that at some stage in the reduction the remaining symmetric block has every diagonal element zero. If this block is not the zero matrix, some of its elements bij = bji ≠ 0; adding the jth row to the ith row and the jth column to the ith column produces the nonzero diagonal element 2bij, and the reduction proceeds as before. Since congruent matrices have the same rank, the resulting diagonal matrix has exactly r nonzero diagonal elements.
2. Reduce the symmetric matrix
       [1 2 2]
   A = [2 3 5]
       [2 5 5]
   to the canonical form (15.2) and to the canonical form (15.3). In each case obtain the matrix P which effects the reduction.

           [1 2 2 | 1 0 0]      [1  0 0 |  1 0 0]      [1  0 0 |  1 0 0]
   [A|I] = [2 3 5 | 0 1 0]   ~  [0 -1 1 | -2 1 0]   ~  [0 -1 0 | -2 1 0]   =   [D|P1']
           [2 5 5 | 0 0 1]      [0  1 1 | -2 0 1]      [0  0 2 | -4 1 1]

   Applying H23, K23 and H2(√2/2), K2(√2/2), we have

       [1 0  0 |   1     0     0  ]
       [0 1  0 | -2√2  √2/2  √2/2 ]   =   [C|Q'],    Q'AQ = diag(1, 1, -1)
       [0 0 -1 |  -2     1     0  ]

   so that, for (15.2),
           [1  -2√2  -2]
       P = [0  √2/2   1]
           [0  √2/2   0]
   For (15.3), apply also H3(i), K3(i) to obtain R'AR = I3 with
           [1  -2√2  -2i]
       P = [0  √2/2    i]
           [0  √2/2    0]
3. Find a nonsingular matrix P such that P̄'AP is in canonical form (15.6), given
       [ 1    i   1+i]
   A = [-i    2   -2i]
       [1-i  2i    10]
4. Prove: Every n-square skew-symmetric matrix A over F of rank 2t is congruent over F to a matrix
       B = diag(D1, D2, ..., Dt, 0, ..., 0)
   where
       Di = [ 0  1]
            [-1  0]  ,  (i = 1, 2, ..., t)
   If A = 0, then B = A. If A ≠ 0, then some aij = -aji ≠ 0. Interchange the ith and first rows and the ith and first columns; then interchange the jth and second rows and the jth and second columns to obtain a skew-symmetric matrix whose element in the first row and second column is a = aij ≠ 0. Next multiply the first row and the first column by 1/a to obtain
       D1 = [ 0  1]
            [-1  0]
   as the upper left-hand block; from it, by elementary row and the same column transformations, the remaining elements of the first two rows and columns may be made zero. Repetition of the argument on the remaining skew-symmetric block yields D2, and so on, until B is reached.