
SCHAUM'S OUTLINE SERIES

THEORY AND PROBLEMS
of
MATRICES

by FRANK AYRES, JR.

including solved problems
Completely Solved in Detail

SCHAUM PUBLISHING CO.
NEW YORK
SCHAUM'S OUTLINE OF

THEORY AND PROBLEMS

OF

MATRICES

BY

FRANK AYRES, JR., Ph.D.

Formerly Professor and Head,
Department of Mathematics
Dickinson College

SCHAUM'S OUTLINE SERIES

McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Toronto, Sydney
Copyright © 1962 by McGraw-Hill, Inc. All Rights Reserved. Printed in the
United States of America. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the prior
written permission of the publisher.

02656

78910 SHSH 7543210


Preface

Elementary matrix algebra has now become an integral part of the mathematical background
necessary for such diverse fields as electrical engineering and education, chemistry and sociology,
as well as for statistics and pure mathematics. This book, in presenting the more essential mate-
rial, is designed primarily to serve as a useful supplement to current texts and as a handy refer-
ence book for those working in the several fields which require some knowledge of matrix theory.
Moreover, the statements of theory and principle are sufficiently complete that the book could
be used as a text by itself.

The material has been divided into twenty-six chapters, since the logical arrangement is
thereby not disturbed while the usefulness as a reference book is increased. This also permits
a separation of the treatment of real matrices, with which the majority of readers will be con-
cerned, from that of matrices with complex elements. Each chapter contains a statement of perti-
nent definitions, principles, and theorems, fully illustrated by examples. These, in turn, are
followed by a carefully selected set of solved problems and a considerable number of supple-
mentary exercises.

The beginning student in matrix algebra soon finds that the solutions of numerical exercises
are disarmingly simple. Difficulties are likely to arise from the constant round of definition, the-
orem, proof. The trouble here is essentially a matter of lack of mathematical maturity, and is
normally to be expected, since usually the student's previous work in mathematics has been
concerned with the solution of numerical problems while precise statements of principles and
proofs of theorems have in large part been deferred for later courses. The aim of the present
book is to enable the reader, if he persists through the introductory paragraphs and solved prob-
lems in any chapter, to develop a reasonable degree of self-assurance about the material.

The solved problems, in addition to giving more variety to the examples illustrating the
theorems, contain most of the proofs of any considerable length together with representative
shorter proofs. The supplementary problems call both for the solution of numerical exercises
and for proofs. Some of the latter require only proper modifications of proofs given earlier;
more important, however, are the many theorems whose proofs require but a few lines. Some are
of the type frequently misnamed "obvious" while others will be found to call for considerable
ingenuity. None should be treated lightly, however, for it is due precisely to the abundance of
such theorems that elementary matrix algebra becomes a natural first course for those seeking
to attain a degree of mathematical maturity. While the large number of these problems in any
chapter makes it impractical to solve all of them before moving to the next, special attention
is directed to the supplementary problems of the first two chapters. A mastery of these will do
much to give the reader confidence to stand on his own feet thereafter.

The author wishes to take this opportunity to express his gratitude to the staff of the Schaum
Publishing Company for their splendid cooperation.

Frank Ayres, Jr.


Carlisle, Pa.
October, 1962
CONTENTS

Page
Chapter 1 MATRICES 1

Matrices. Equal matrices. Sums of matrices. Products of matrices.


Products by partitioning.

Chapter 2 SOME TYPES OF MATRICES 10


Triangular matrices. Scalar matrices. Diagonal matrices. The identity
matrix. Inverse of a matrix. Transpose of a matrix. Symmetric
matrices. Skew-symmetric matrices. Conjugate of a matrix. Hermitian
matrices. Skew-Hermitian matrices. Direct sums.

Chapter 3 DETERMINANT OF A SQUARE MATRIX 20


Determinants of orders 2 and 3. Properties of determinants. Minors
and cofactors. Algebraic complements.

Chapter 4 EVALUATION OF DETERMINANTS 32


Expansion along a row or column. The Laplace expansion. Expansion
along the first row and column. Determinant of a product. Derivative
of a determinant.

Chapter 5 EQUIVALENCE 39
Rank of a matrix. Non-singular and singular matrices. Elementary
transformations. Inverse of an elementary transformation. Equivalent
matrices. Row canonical form. Normal form. Elementary matrices.
Canonical sets under equivalence. Rank of a product.

Chapter 6 THE ADJOINT OF A SQUARE MATRIX 49


The adjoint. The adjoint of a product. Minor of an adjoint.

Chapter 7 THE INVERSE OF A MATRIX 55


Inverse of a diagonal matrix. Inverse from the adjoint. Inverse from
elementary matrices. Inverse by partitioning. Inverse of symmetric
matrices. Right and left inverses of m×n matrices.

Chapter 8 FIELDS 64
Number fields. General fields. Sub-fields. Matrices over a field.
CONTENTS

Page
Chapter 9 LINEAR DEPENDENCE OF VECTORS AND FORMS 67
Vectors. Linear dependence of vectors, linear forms, polynomials, and
matrices.

Chapter 10 LINEAR EQUATIONS 75


Systems of non-homogeneous equations. Solution using matrices. Cramer's
rule. Systems of homogeneous equations.

Chapter 11 VECTOR SPACES 85


Vector spaces. Sub-spaces. Basis and dimension. Sum space. Inter-
section space. Null space of a matrix. Sylvester's laws of nullity.
Bases and coordinates.

Chapter 12 LINEAR TRANSFORMATIONS 94
Singular and non-singular transformations. Change of basis. Invariant
space. Permutation matrix.

Chapter 13 VECTORS OVER THE REAL FIELD 100


Inner product. Length. Schwarz inequality. Triangle inequality.
Orthogonal vectors and spaces. Orthonormal basis. Gram-Schmidt
orthogonalization process. The Gramian. Orthogonal matrices. Orthog-
onal transformations. Vector product.

Chapter 14 VECTORS OVER THE COMPLEX FIELD 110


Complex numbers. Inner product. Length. Schwarz inequality. Tri-
angle inequality. Orthogonal vectors and spaces. Orthonormal basis.
Gram-Schmidt orthogonalization process. The Gramian. Unitary mat-
rices. Unitary transformations.

Chapter 15 CONGRUENCE 115


Congruent matrices. Congruent symmetric matrices. Canonical forms
of real symmetric, skew-symmetric, Hermitian, skew-Hermitian matrices
under congruence.

Chapter 16 BILINEAR FORMS 125


Matrix form. Transformations. Canonical forms. Cogredient trans-
formations. Contragredient transformations. Factorable forms.

Chapter 17 QUADRATIC FORMS 131


Matrix form. Transformations. Canonical forms. Lagrange reduction.
Sylvester's law of inertia. Definite and semi-definite forms. Principal
minors. Regular form. Kronecker's reduction. Factorable forms.
CONTENTS

Page
Chapter 18 HERMITIAN FORMS 146
Matrix form. Transformations. Canonical forms. Definite and semi-
definite forms.

Chapter 19 THE CHARACTERISTIC EQUATION OF A MATRIX 149


Characteristic equation and roots. Invariant vectors and spaces.

Chapter 20 SIMILARITY 156


Similar matrices. Reduction to triangular form. Diagonable matrices.

Chapter 21 SIMILARITY TO A DIAGONAL MATRIX 163
Real symmetric matrices. Orthogonal similarity. Pairs of real quadratic
forms. Hermitian matrices. Unitary similarity. Normal matrices.
Spectral decomposition. Field of values.

Chapter 22 POLYNOMIALS OVER A FIELD 172


Sum, product, quotient of polynomials. Remainder theorem. Greatest
common divisor. Least common multiple. Relatively prime polynomials.
Unique factorization.

Chapter 23 LAMBDA MATRICES 179


The λ-matrix or matrix polynomial. Sums, products, and quotients.
Remainder theorem. Cayley-Hamilton theorem. Derivative of a matrix.

Chapter 24 SMITH NORMAL FORM 188


Smith normal form. Invariant factors. Elementary divisors.

Chapter 25 THE MINIMUM POLYNOMIAL OF A MATRIX 196


Similarity invariants. Minimum polynomial. Derogatory and non-
derogatory matrices. Companion matrix.

Chapter 26 CANONICAL FORMS UNDER SIMILARITY 203


Rational canonical form. A second canonical form. Hypercompanion
matrix. Jacobson canonical form. Classical canonical form. A reduction
to rational canonical form.

INDEX 215

INDEX OF SYMBOLS 219



chapter 1

Matrices

A RECTANGULAR ARRAY OF NUMBERS enclosed by a pair of brackets, such as

          [2  3  7]                    [1  3  1]
     (a)  [1 -1  5]      and      (b)  [2  1  4]
                                       [4  7  6]

and subject to certain rules of operations given below is called a matrix. The matrix (a) could be
considered as the coefficient matrix of the system of homogeneous linear equations

     { 2x + 3y + 7z = 0
     {  x -  y + 5z = 0

or as the augmented matrix of the system of non-homogeneous linear equations

     { 2x + 3y = 7
     {  x -  y = 5

Later, we shall see how the matrix may be used to obtain solutions of these systems. The ma-
trix (b) could be given a similar interpretation or we might consider its rows as simply the coor-
dinates of the points (1,3,1), (2,1,4), and (4,7,6) in ordinary space. The matrix will be used
later to settle such questions as whether or not the three points lie in the same plane with the
origin or on the same line through the origin.

In the matrix

            [a11  a12  a13  ...  a1n]
     (1.1)  [a21  a22  a23  ...  a2n]
            [ .....................  ]
            [am1  am2  am3  ...  amn]

the numbers or functions a_ij are called its elements. In the double subscript notation, the first
subscript indicates the row and the second subscript indicates the column in which the element
stands. Thus, all elements in the second row have 2 as first subscript and all the elements in
the fifth column have 5 as second subscript. A matrix of m rows and n columns is said to be of
order "m by n" or m×n.

(In indicating a matrix, pairs of parentheses, ( ), and double bars, || ||, are sometimes
used. We shall use the double bracket notation throughout.)

At times the matrix (1.1) will be called "the m×n matrix [a_ij]" or "the m×n matrix A =
[a_ij]". When the order has been established, we shall write simply "the matrix A".

SQUARE MATRICES. When m = n, (1.1) is square and will be called a square matrix of order n or an
n-square matrix.

In a square matrix, the elements a11, a22, ..., ann are called its diagonal elements.

The sum of the diagonal elements of a square matrix A is called the trace of A.
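The definitions above translate directly into a short sketch (Python is our illustration choice, not the book's; the sample matrix is hypothetical):

```python
# Trace of a square matrix: the sum of its diagonal elements a11 + a22 + ... + ann.
def trace(A):
    n = len(A)
    assert all(len(row) == n for row in A), "trace is defined only for square matrices"
    return sum(A[i][i] for i in range(n))

A = [[2, -1, 1],
     [0,  1, 2],
     [1,  0, 1]]
print(trace(A))  # diagonal elements 2, 1, 1 -> 4
```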

1
MATRICES [CHAP. 1

EQUAL MATRICES. Two matrices A = [a_ij] and B = [b_ij] are said to be equal (A = B) if and only if
they have the same order and each element of one is equal to the corresponding element of the
other, that is, if and only if

     a_ij = b_ij          (i = 1, 2, ..., m; j = 1, 2, ..., n)

Thus, two matrices are equal if and only if one is a duplicate of the other.

ZERO MATRIX. A matrix, every element of which is zero, is called a zero matrix. When A is a zero
matrix and there can be no confusion as to its order, we shall write A = 0 instead of the m×n
array of zero elements.

SUMS OF MATRICES. If A = [a_ij] and B = [b_ij] are two m×n matrices, their sum (difference), A ± B,
is defined as the m×n matrix C = [c_ij], where each element of C is the sum (difference) of the
corresponding elements of A and B. Thus, A ± B = [a_ij ± b_ij].

     Example 1. If  A = [1  2  3]   and   B = [ 2  3  0] ,  then
                        [0  1  4]             [-1  2  5]

          A + B  =  [1+2     2+3  3+0]   =   [ 3  5  3]
                    [0+(-1)  1+2  4+5]       [-1  3  9]
     and
          A - B  =  [1-2     2-3  3-0]   =   [-1  -1   3]
                    [0-(-1)  1-2  4-5]       [ 1  -1  -1]

Two matrices of the same order are said to be conformable for addition or subtraction. Two
matrices of different orders cannot be added or subtracted. For example, the matrices (a) and
(b) above are non-conformable for addition and subtraction.

The sum of k matrices A is a matrix of the same order as A and each of its elements is k
times the corresponding element of A. We define: If k is any scalar (we call k a scalar to dis-
tinguish it from [k] which is a 1×1 matrix) then by kA = Ak is meant the matrix obtained from
A by multiplying each of its elements by k.

     Example 2. If  A = [1  -2] ,  then
                        [2   3]

          A + A + A  =  [3  -6]  =  3A
                        [6   9]
     and
          -5A  =  [-5(1)  -5(-2)]   =   [ -5   10]
                  [-5(2)  -5(3) ]       [-10  -15]

In particular, by -A, called the negative of A, is meant the matrix obtained from A by mul-
tiplying each of its elements by -1 or by simply changing the sign of all of its elements. For
every A, we have A + (-A) = 0, where 0 indicates the zero matrix of the same order as A.

Assuming that the matrices A, B, C are conformable for addition, we state:
     (a) A + B = B + A                          (commutative law)
     (b) A + (B + C) = (A + B) + C              (associative law)
     (c) k(A + B) = kA + kB = (A + B)k,  k a scalar
     (d) There exists a matrix D such that A + D = B.
These laws are a result of the laws of elementary algebra governing the addition of numbers
and polynomials. They show, moreover,
     1. Conformable matrices obey the same laws of addition as the elements of these matrices.
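The element-wise definitions of sum and scalar multiple, and laws (a) and the 3A = A + A + A remark of Example 2, can be spot-checked with a sketch (ours; the matrices are those of Examples 1 and 2):

```python
# Element-wise sum and scalar multiple, exactly as defined above.
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar(k, A):
    return [[k * a for a in row] for row in A]

A = [[1, 2, 3], [0, 1, 4]]
B = [[2, 3, 0], [-1, 2, 5]]

print(add(A, B))                           # Example 1: [[3, 5, 3], [-1, 3, 9]]
print(add(A, B) == add(B, A))              # law (a), commutativity: True

C = [[1, -2], [2, 3]]
print(scalar(3, C) == add(C, add(C, C)))   # Example 2: A + A + A = 3A -> True
```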

MULTIPLICATION. By the product AB in that order of the 1×m matrix A = [a11 a12 a13 ... a1m] and

                     [b11]
the m×1 matrix  B =  [b21]   is meant the 1×1 matrix  C = [a11 b11 + a12 b21 + ... + a1m bm1].
                     [ ...]
                     [bm1]

That is,
                          [b11]
     [a11 a12 ... a1m]    [b21]   =   [a11 b11 + a12 b21 + ... + a1m bm1]
                          [ ...]
                          [bm1]

Note that the operation is row by column; each element of the row is multiplied into the cor-
responding element of the column and then the products are summed.

     Example 3. (a)  [2 3 4] [ 1]   =   [2(1) + 3(-1) + 4(2)]   =   [7]
                             [-1]
                             [ 2]

                (b)  [3 -1 4] [-2]   =   [-6 - 6 + 12]   =   [0]
                              [ 6]
                              [ 3]

By the product AB in that order of the m×p matrix A = [a_ij] and the p×n matrix B = [b_ij]
is meant the m×n matrix C = [c_ij], where

     c_ij  =  a_i1 b_1j + a_i2 b_2j + ... + a_ip b_pj  =  Σ(k=1..p) a_ik b_kj     (i = 1, 2, ..., m; j = 1, 2, ..., n).

Think of A as consisting of m rows and B as consisting of n columns. In forming C = AB
each row of A is multiplied once and only once into each column of B. The element c_ij of C is then
the product of the ith row of A and the jth column of B.

     Example 4.

          [a11  a12]                    [a11 b11 + a12 b21    a11 b12 + a12 b22]
     AB = [a21  a22]  [b11  b12]   =    [a21 b11 + a22 b21    a21 b12 + a22 b22]
          [a31  a32]  [b21  b22]        [a31 b11 + a32 b21    a31 b12 + a32 b22]

The product AB is defined or A is conformable to B for multiplication only when the number
of columns of A is equal to the number of rows of B. If A is conformable to B for multiplication
(AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not be
defined).
                                                              See Problems 3-4.
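The defining formula c_ij = Σ(k) a_ik b_kj translates line for line into code; this sketch (ours, not the book's) recomputes Example 3(a):

```python
# Product of an m x p matrix A and a p x n matrix B, from the definition
# c_ij = sum over k of a_ik * b_kj  (row i of A times column j of B).
def matmul(A, B):
    p = len(B)
    assert all(len(row) == p for row in A), "columns of A must equal rows of B"
    n = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(len(A))]

print(matmul([[2, 3, 4]], [[1], [-1], [2]]))  # Example 3(a): [[7]]
```

The conformability assertion mirrors the text: a 1×3 row times a 3×1 column is defined, while the reverse pairing of shapes need not be.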

Assuming that A, B, C are conformable for the indicated sums and products, we have
     (e) A(B + C) = AB + AC          (first distributive law)
     (f) (A + B)C = AC + BC          (second distributive law)
     (g) A(BC) = (AB)C               (associative law)

However,
     (h) AB ≠ BA, generally,
     (i) AB = 0 does not necessarily imply A = 0 or B = 0,
     (j) AB = AC does not necessarily imply B = C.

                                                              See Problems 3-8.

PRODUCTS BY PARTITIONING. Let A = [a_ij] be of order m×p and B = [b_ij] be of order p×n. In
forming the product AB, the matrix A is in effect partitioned into m matrices of order 1×p and B
into n matrices of order p×1. Other partitions may be used. For example, let A and B be parti-
tioned into matrices of indicated orders by drawing in the dotted lines as

          [(m1×p1) | (m1×p2) | (m1×p3)]        [A11 | A12 | A13]
     A =  [--------+---------+--------]   =    [----+-----+----]
          [(m2×p1) | (m2×p2) | (m2×p3)]        [A21 | A22 | A23]

          [(p1×n1) | (p1×n2)]        [B11 | B12]
     B =  [--------+--------]        [----+----]
          [(p2×n1) | (p2×n2)]   =    [B21 | B22]
          [--------+--------]        [----+----]
          [(p3×n1) | (p3×n2)]        [B31 | B32]

In any such partitioning, it is necessary that the columns of A and the rows of B be partitioned
in exactly the same way; however m1, m2, n1, n2 may be any non-negative (including 0) integers
such that m1 + m2 = m and n1 + n2 = n. Then

     AB  =  [A11 B11 + A12 B21 + A13 B31    A11 B12 + A12 B22 + A13 B32]   =   [C11  C12]
            [A21 B11 + A22 B21 + A23 B31    A21 B12 + A22 B22 + A23 B32]       [C21  C22]

     Example 5. Compute AB, given  A = [2  1  0]   and   B = [1  1  1  0]
                                       [3  2  0]             [2  1  1  0]
                                       [1  0  1]             [2  3  1  2]

Partitioning so that

     A  =  [2  1 | 0]   =  [A11  A12]        B  =  [1  1  1 | 0]   =  [B11  B12]
           [3  2 | 0]      [A21  A22]              [2  1  1 | 0]      [B21  B22]
           [-----+--]                              [--------+--]
           [1  0 | 1]                              [2  3  1 | 2]

we have

     AB  =  [A11 B11 + A12 B21    A11 B12 + A12 B22]
            [A21 B11 + A22 B21    A21 B12 + A22 B22]

            [ [2 1][1 1 1] + [0][2 3 1]    [2 1][0] + [0][2] ]
         =  [ [3 2][2 1 1]   [0]           [3 2][0]   [0]    ]
            [ [1 0][1 1 1] + [1][2 3 1]    [1 0][0] + [1][2] ]
            [      [2 1 1]                      [0]          ]

            [ [4 3 3] + [0 0 0]    [0] + [0] ]        [4  3  3  0]
         =  [ [7 5 5]   [0 0 0]    [0]   [0] ]   =    [7  5  5  0]
            [ [1 1 1] + [2 3 1]    [0] + [2] ]        [3  4  2  2]

                                                              See also Problem 9.
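Example 5's block computation can be checked against the unpartitioned product with a short sketch (ours), using the blocks A11, ..., B22 exactly as displayed above:

```python
# Multiply and add small matrices represented as lists of rows.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A11, A12 = [[2, 1], [3, 2]], [[0], [0]]
A21, A22 = [[1, 0]], [[1]]
B11, B12 = [[1, 1, 1], [2, 1, 1]], [[0], [0]]
B21, B22 = [[2, 3, 1]], [[2]]

# AB = [A11 B11 + A12 B21 | A11 B12 + A12 B22]
#      [A21 B11 + A22 B21 | A21 B12 + A22 B22]
top = [r1 + r2 for r1, r2 in zip(add(matmul(A11, B11), matmul(A12, B21)),
                                 add(matmul(A11, B12), matmul(A12, B22)))]
bottom = [r1 + r2 for r1, r2 in zip(add(matmul(A21, B11), matmul(A22, B21)),
                                    add(matmul(A21, B12), matmul(A22, B22)))]
print(top + bottom)  # [[4, 3, 3, 0], [7, 5, 5, 0], [3, 4, 2, 2]]
```

The result agrees with multiplying the full 3×3 matrix A by the full 3×4 matrix B directly.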

Let A, B, C, ... be n-square matrices. Let A be partitioned into matrices of the indicated
orders

          [(p1×p1) | (p1×p2) | ... | (p1×ps)]        [A11  A12  ...  A1s]
     A =  [(p2×p1) | (p2×p2) | ... | (p2×ps)]   =    [A21  A22  ...  A2s]
          [  ...   |   ...   |     |   ...  ]        [ ...              ]
          [(ps×p1) | (ps×p2) | ... | (ps×ps)]        [As1  As2  ...  Ass]

and let B, C, ... be partitioned in exactly the same manner. Then sums, differences, and products
may be formed using the matrices A11, A12, ...; B11, B12, ...; C11, C12, ...

SOLVED PROBLEMS
1. (a)  [1  2 -1  0]   [3 -4  1  2]     [1+3  2+(-4)  -1+1  0+2   ]     [4 -2  0  2]
        [4  0  2  1] + [1  5  0  3]  =  [4+1  0+5      2+0  1+3   ]  =  [5  5  2  4]
        [2 -5  1  2]   [2 -2  3 -1]     [2+2  -5+(-2)  1+3  2+(-1)]     [4 -7  4  1]

   (b)  [1  2 -1  0]   [3 -4  1  2]     [1-3  2+4   -1-1  0-2]     [-2  6 -2 -2]
        [4  0  2  1] - [1  5  0  3]  =  [4-1  0-5    2-0  1-3]  =  [ 3 -5  2 -2]
        [2 -5  1  2]   [2 -2  3 -1]     [2-2  -5+2   1-3  2+1]     [ 0 -3 -2  3]

   (c)    [1  2 -1  0]     [ 3   6 -3  0]
        3 [4  0  2  1]  =  [12   0  6  3]
          [2 -5  1  2]     [ 6 -15  3  6]

   (d)    [1  2 -1  0]     [-1 -2  1  0]
        - [4  0  2  1]  =  [-4  0 -2 -1]
          [2 -5  1  2]     [-2  5 -1 -2]
2. If  A = [1  2]   and   B = [-3 -2] ,  find  D = [p  q]   such that  A + B - D = 0.
           [3  4]             [ 1 -5]              [r  s]
           [5  6]             [ 4  3]              [t  u]

   If  A + B - D  =  [1-3-p  2-2-q]     [-2-p   -q ]     [0  0]
                     [3+1-r  4-5-s]  =  [4-r   -1-s]  =  [0  0]
                     [5+4-t  6+3-u]     [9-t    9-u]     [0  0]

   then -2-p = 0 and p = -2, 4-r = 0 and r = 4, and so on. Then

        D  =  [-2  0]   =  A + B
              [ 4 -1]
              [ 9  9]

3. (a)  [4 5 6] [ 2]   =   [4(2) + 5(3) + 6(-1)]   =   [17]
                [ 3]
                [-1]

   (b)  [ 2]             [ 2(4)  2(5)  2(6)]     [ 8  10  12]
        [ 3] [4 5 6]  =  [ 3(4)  3(5)  3(6)]  =  [12  15  18]
        [-1]             [-1(4) -1(5) -1(6)]     [-4  -5  -6]

   (c)  [1 2 3] [4 -6   9  6]
                [0 -7  10  7]
                [5  8 -11 -8]
          =  [1(4)+2(0)+3(5)   1(-6)+2(-7)+3(8)   1(9)+2(10)+3(-11)   1(6)+2(7)+3(-8)]
          =  [19  4  -4  -4]

   (d)  [2 3 4] [1]     [2(1) + 3(2) + 4(3)]     [20]
        [1 5 6] [2]  =  [1(1) + 5(2) + 6(3)]  =  [29]
                [3]

   (e)  [1 2 1] [ 3 -4]     [1(3)+2(1)+1(-2)   1(-4)+2(5)+1(2)]     [3   8]
        [4 0 2] [ 1  5]  =  [4(3)+0(1)+2(-2)   4(-4)+0(5)+2(2)]  =  [8 -12]
                [-2  2]

4. Let  A = [2 -1  1]
            [0  1  2] .  Then
            [1  0  1]

        A²  =  [2 -1  1] [2 -1  1]     [5 -3  1]
               [0  1  2] [0  1  2]  =  [2  1  4]
               [1  0  1] [1  0  1]     [3 -1  2]
   and
        A³  =  A²·A  =  [5 -3  1] [2 -1  1]     [11 -8  0]
                        [2  1  4] [0  1  2]  =  [ 8 -1  8]
                        [3 -1  2] [1  0  1]     [ 8 -4  3]

   The reader will show that A·A² = A²·A and A²·A² = A·A³ = A⁴.

5. Show that:

   (a)  Σ(k=1..2) a_ik (b_kj + c_kj)  =  Σ(k=1..2) a_ik b_kj + Σ(k=1..2) a_ik c_kj

   (b)  Σ(i=1..2) Σ(j=1..3) a_ij  =  Σ(j=1..3) Σ(i=1..2) a_ij

   (c)  Σ(k=1..2) a_ik ( Σ(h=1..3) b_kh c_hj )  =  Σ(h=1..3) ( Σ(k=1..2) a_ik b_kh ) c_hj

   (a)  Σ(k=1..2) a_ik (b_kj + c_kj)  =  a_i1(b_1j + c_1j) + a_i2(b_2j + c_2j)
          =  (a_i1 b_1j + a_i2 b_2j) + (a_i1 c_1j + a_i2 c_2j)
          =  Σ(k=1..2) a_ik b_kj + Σ(k=1..2) a_ik c_kj

   (b)  Σ(i=1..2) Σ(j=1..3) a_ij  =  (a_11 + a_12 + a_13) + (a_21 + a_22 + a_23)
          =  (a_11 + a_21) + (a_12 + a_22) + (a_13 + a_23)
          =  Σ(i=1..2) a_i1 + Σ(i=1..2) a_i2 + Σ(i=1..2) a_i3  =  Σ(j=1..3) Σ(i=1..2) a_ij

        This is simply the statement that in summing all of the elements of a matrix, one may sum first the
        elements of each row or the elements of each column.

   (c)  Σ(k=1..2) a_ik ( Σ(h=1..3) b_kh c_hj )  =  Σ(k=1..2) a_ik (b_k1 c_1j + b_k2 c_2j + b_k3 c_3j)

          =  a_i1(b_11 c_1j + b_12 c_2j + b_13 c_3j) + a_i2(b_21 c_1j + b_22 c_2j + b_23 c_3j)

          =  (a_i1 b_11 + a_i2 b_21) c_1j + (a_i1 b_12 + a_i2 b_22) c_2j + (a_i1 b_13 + a_i2 b_23) c_3j

          =  ( Σ(k=1..2) a_ik b_k1 ) c_1j + ( Σ(k=1..2) a_ik b_k2 ) c_2j + ( Σ(k=1..2) a_ik b_k3 ) c_3j

          =  Σ(h=1..3) ( Σ(k=1..2) a_ik b_kh ) c_hj
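Identity (c) — the interchange of the two summations — is the heart of the associative-law proof in Problem 7. A numeric spot check (our sketch; the random data is hypothetical):

```python
import random

def interchange_holds(a, b, c):
    """Check sum_k a_k (sum_h b_kh c_h) == sum_h (sum_k a_k b_kh) c_h
    for a of length 2, b of shape 2 x 3, c of length 3."""
    lhs = sum(a[k] * sum(b[k][h] * c[h] for h in range(3)) for k in range(2))
    rhs = sum(sum(a[k] * b[k][h] for k in range(2)) * c[h] for h in range(3))
    return lhs == rhs

random.seed(1)
a = [random.randint(-5, 5) for _ in range(2)]
b = [[random.randint(-5, 5) for _ in range(3)] for _ in range(2)]
c = [random.randint(-5, 5) for _ in range(3)]
print(interchange_holds(a, b, c))  # True: the order of summation may be interchanged
```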

6. Prove: If A = [a_ij] is of order m×n and if B = [b_ij] and C = [c_ij] are of order n×p, then A(B + C)
   = AB + AC.

      The elements of the ith row of A are a_i1, a_i2, ..., a_in and the elements of the jth column of B + C are
   b_1j + c_1j, b_2j + c_2j, ..., b_nj + c_nj. Then the element standing in the ith row and jth column of A(B + C) is

      a_i1(b_1j + c_1j) + a_i2(b_2j + c_2j) + ... + a_in(b_nj + c_nj)  =  Σ(k=1..n) a_ik (b_kj + c_kj)
         =  Σ(k=1..n) a_ik b_kj + Σ(k=1..n) a_ik c_kj,

   the sum of the elements standing in the ith row and jth column of AB and AC.

7. Prove: If A = [a_ij] is of order m×n, if B = [b_ij] is of order n×p, and if C = [c_ij] is of order p×q,
   then A(BC) = (AB)C.

      The elements of the ith row of A are a_i1, a_i2, ..., a_in and the elements of the jth column of BC are
   Σ(h=1..p) b_1h c_hj, Σ(h=1..p) b_2h c_hj, ..., Σ(h=1..p) b_nh c_hj; hence the element standing in the ith row
   and jth column of A(BC) is

      a_i1 Σ(h=1..p) b_1h c_hj + a_i2 Σ(h=1..p) b_2h c_hj + ... + a_in Σ(h=1..p) b_nh c_hj
         =  Σ(k=1..n) a_ik ( Σ(h=1..p) b_kh c_hj )
         =  Σ(h=1..p) ( Σ(k=1..n) a_ik b_kh ) c_hj
         =  ( Σ(k=1..n) a_ik b_k1 ) c_1j + ( Σ(k=1..n) a_ik b_k2 ) c_2j + ... + ( Σ(k=1..n) a_ik b_kp ) c_pj

   This is the element standing in the ith row and jth column of (AB)C; hence, A(BC) = (AB)C.

8. Assuming A, B, C, D conformable, show in two ways that (A+B)(C+D) = AC + AD + BC + BD.

      Using (e) and then (f), (A+B)(C+D) = (A+B)C + (A+B)D = AC + BC + AD + BD.
      Using (f) and then (e), (A+B)(C+D) = A(C+D) + B(C+D) = AC + AD + BC + BD
                                         = AC + BC + AD + BD.

"l o'
1 l'
1
"l 10 1 'l o' 3 1 2
"4
1 2"

9. (a) 1 2 1 1 + 2 [3 1 2] 1 + 6 2 4 = 6 3 4
1
1 3 1 1 3 1 9 3 6 9 3 7
_3 1 2_

'1
10 10 0' '1
10 10 o" 1
[0]
2 '
1 1 1
1 2 :]

10 oil
(b)
1

1
3
4 i

1 5
10 3 10

1
1

j2
:o]
[0 1 3]
[0]

[0] [0: ]
10 10 6 1 1 3 [0 "][o

2
3
12
10

www.TheSolutionManual.com
18

"1 0'
1 ! 1 1 2 1 3 4 5 1 2 3 4 5 1 1

2 1
-4- -
!

0^0 2 3|4 5 6 2 3 4 5 6_ 2 1

3 4' 7"
i
3 1 2 !
3 4 15 6 7 1 2 3 3 1 2 5 6 3 1 2
(c)
1
1 2 1 1 4 5 1 6 7 8 1 2 1 4 5 1 2 1 6 7 8 1 2 1

\
1 1
-
1

+-
9 8
-^- 1
7 6 5

4
1

[l]-[8 7]
1 9 8. 1

[l]-[6 5 4]
1 7 6 5 1 1

_0 1 i 1 8 7 1
6 5 [1]-[1]

'7 9 11' "13"


3 5 7 9 11 13
10 13 16 19 4 10 13 16 19
7
31 33 "35 37 39" 41" 31 33 35 37 39 41
20 22 24 26 28 30 20 22 24 26 28 30
13 13 .13 13 13 .13. 13 13 13 13 13 13
8 7 6 5 4 1
[6 5 4] [l]

10. Let
         x1 = a11 y1 + a12 y2
         x2 = a21 y1 + a22 y2
         x3 = a31 y1 + a32 y2

    be three linear forms in y1 and y2 and let

         y1 = b11 z1 + b12 z2
         y2 = b21 z1 + b22 z2

    be a linear transformation of the coordinates (y1, y2) into new coordinates (z1, z2). The result of applying
    the transformation to the given forms is the set of forms

         x1 = (a11 b11 + a12 b21) z1 + (a11 b12 + a12 b22) z2
         x2 = (a21 b11 + a22 b21) z1 + (a21 b12 + a22 b22) z2
         x3 = (a31 b11 + a32 b21) z1 + (a31 b12 + a32 b22) z2

    Using matrix notation, we have the three forms

         [x1]     [a11  a12]                                    [y1]     [b11  b12] [z1]
         [x2]  =  [a21  a22] [y1]   and the transformation      [y2]  =  [b21  b22] [z2]
         [x3]     [a31  a32] [y2]

    The result of applying the transformation is the set of three forms

         [x1]     [a11  a12]
         [x2]  =  [a21  a22] [b11  b12] [z1]
         [x3]     [a31  a32] [b21  b22] [z2]

    Thus, when a set of m linear forms in n variables with matrix A is subjected to a linear trans-
    formation of the variables with matrix B, there results a set of m linear forms with matrix C = AB.
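The conclusion of Problem 10 — substituting y = Bz into x = Ay yields x = (AB)z — can be verified numerically; a sketch (ours; the particular matrices and the point z are hypothetical):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # coefficient matrix of three forms in y1, y2
B = [[1, -1], [2, 0]]          # transformation y = B z
z = [[7], [9]]                 # a sample point in the new coordinates

x_direct = matmul(A, matmul(B, z))   # substitute y = Bz first, then apply the forms
x_via_AB = matmul(matmul(A, B), z)   # single set of forms with matrix C = AB
print(x_direct == x_via_AB)  # True
```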


SUPPLEMENTARY PROBLEMS

11. Given  A = [1  2 -3]     B = [3 -1  2]     and   C = [4  1  2]
               [5  0  2] ,       [4  2  5] ,             [0  3  2]
               [1 -1  1]         [2  0  3]               [1 -2  3]

    (a) Compute:  A + B = [4  1 -1]        A - C = [-3  1 -5]
                          [9  2  7] ,              [ 5 -3  0]
                          [3 -1  4]                [ 0  1 -2]

    (b) Compute:  -2A = [ -2 -4  6]        0·B = 0
                        [-10  0 -4] ,
                        [ -2  2 -2]

    (c) Verify: A + (B - C) = (A + B) - C.
    (d) Find the matrix D such that A + D = B. Verify that D = B - A = -(A - B).

12. Given  A = [ 1 -1  1]    and   B = [1  2  3] ,   compute  AB = 0  and  BA = [-11   6  -1]
               [-3  2 -1]              [2  4  6]                                [-22  12  -2] .
               [-2  1  0]              [1  2  3]                                [-11   6  -1]

    Hence, AB ≠ BA generally.

13. Given  A = [1 -3  2]     B = [1  4  1  0]     and   C = [2  1 -1 -2]
               [2  1 -3] ,       [2  1  1  1] ,             [3 -2 -1 -1] ,
               [4 -3 -1]         [1 -2  1  2]               [2 -5 -1  0]

    show that AB = AC. Thus, AB = AC does not necessarily imply B = C.

14. Given  A = [1  1 -1]     B = [ 1  3]     and   C = [1  2  3 -4] ,   show that (AB)C = A(BC).
               [2  0  3] ,       [ 0  2] ,             [2  0 -2  1]
               [3 -1  2]         [-1  4]
15. Using the matrices of Problem 11, show that A(B + C) = AB + AC and (A+B)C = AC +BC.

16. Explain why, in general, (A ± B)² ≠ A² ± 2AB + B² and A² - B² ≠ (A - B)(A + B).

17. Given  A = [ 2 -3 -5]     B = [-1  3  5]     and   C = [ 2 -2 -4]
               [-1  4  5] ,       [ 1 -3 -5] ,             [-1  3  4]
               [ 1 -3 -4]         [-1  3  5]               [ 1 -2 -3]

    (a) show that AB = BA = 0, AC = A, CA = C.
    (b) use the results of (a) to show that ACB = CBA, A² - B² = (A - B)(A + B), (A ± B)² = A² + B².

18. Given  A = [0  i] ,  where i² = -1, derive a formula for the positive integral powers of A.
               [i  0]

    Ans. Aⁿ = I, A, -I, -A according as n = 4p, 4p+1, 4p+2, 4p+3, where I = I₂.

19. Show that the product of any two or more of the matrices

        [1  0]     [-1  0]     [i  0]     [-i  0]
        [0  1] ,   [ 0 -1] ,   [0 -i] ,   [ 0  i]

    is a matrix of the set.

20. Given the matrices A of order m×n, B of order n×p, and C of order r×q, under what conditions on p, q,
    and r would the matrices be conformable for finding the products and what is the order of each: (a) ABC,
    (b) ACB, (c) A(B + C)?

    Ans. (a) p = r;  m×q     (b) r = n = q;  m×p     (c) r = n, q = p;  m×p
CHAP. 1] MATRICES

21. Compute AB, given:


10 11 10 10 2 1

(a) A 1 j
1 and B 1
j
Ans. 1 2
1 1
0^ 1 1 1 I

m
"
1 o"
1
(b) A and B 1 Ans.
1
--1 2

12 10 10 1
4 1
1 '

'2 2
(c) A and B Ans
1
1 1 i 10
I 2 2 1 i
2 2

22. Prove: (a) trace (A + B) = trace A + trace B,   (b) trace (kA) = k trace A.

2
h -2 1
1

-1
2
= 2n+r.-3y. -3 -3
V, Y^l
_
^;^^^l^ y,\ [2
1 [2 1
2 3

r -zi + 722"]

[-221 - 622J'

24. If A = [a_ij] and B = [b_ij] are of order m×n and if C = [c_ij] is of order n×p, show that (A + B)C = AC + BC.

25. Let A = [a_ij] and B = [b_jk], where (i = 1, 2, ..., m; j = 1, 2, ..., n; k = 1, 2, ..., p). Denote by β_j the sum
    of the elements of the jth row of B, that is, let β_j = Σ(k=1..p) b_jk. Show that the element in the ith row of
    the column matrix A·[β_1, β_2, ..., β_n]' is the sum of the elements lying in the ith row of AB. Use this
    procedure to check the products formed in Problems 12 and 13.
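The row-sum check of Problem 25 in a short sketch (ours), applied to the matrices of Problem 12:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, -1, 1], [-3, 2, -1], [-2, 1, 0]]   # Problem 12
B = [[1, 2, 3], [2, 4, 6], [1, 2, 3]]

beta = [[sum(row)] for row in B]             # column of row sums of B
check = matmul(A, beta)                      # i-th entry = sum of the i-th row of AB
row_sums_AB = [[sum(row)] for row in matmul(A, B)]
print(check == row_sums_AB)  # True (here both are the zero column, since AB = 0)
```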

26. A relation (such as parallelism, congruency) between mathematical entities possessing the following properties:

    (i)   Determinative: Either a is in the relation to b or a is not in the relation to b.
    (ii)  Reflexive: a is in the relation to a, for all a.
    (iii) Symmetric: If a is in the relation to b then b is in the relation to a.
    (iv)  Transitive: If a is in the relation to b and b is in the relation to c then a is in the relation to c.

    is called an equivalence relation.

    Show that the parallelism of lines, similarity of triangles, and equality of matrices are equivalence
    relations. Show that perpendicularity of lines is not an equivalence relation.

27. Show that conformability for addition of matrices is an equivalence relation while conformability for multi-
    plication is not.

28. Prove: If A, B, C are matrices such that AC = CA and BC = CB, then (AB ± BA)C = C(AB ± BA).
chapter 2

Some Types of Matrices

THE IDENTITY MATRIX. A square matrix A whose elements a_ij = 0 for i > j is called upper triangu-
lar; a square matrix A whose elements a_ij = 0 for i < j is called lower triangular. Thus

     [a11  a12  a13  ...  a1n]                            [a11   0    0   ...   0 ]
     [ 0   a22  a23  ...  a2n]                            [a21  a22   0   ...   0 ]
     [ 0    0   a33  ...  a3n]   is upper triangular and  [a31  a32  a33  ...   0 ]   is lower triangular.
     [ ..................... ]                            [ ..................... ]
     [ 0    0    0   ...  ann]                            [an1  an2  an3  ...  ann]

                  [a11   0    0   ...   0 ]
                  [ 0   a22   0   ...   0 ]
The matrix  D  =  [ 0    0   a33  ...   0 ]   which is both upper and lower triangular, is call-
                  [ ..................... ]
                  [ 0    0    0   ...  ann]

ed a diagonal matrix. It will frequently be written as

     D = diag(a11, a22, a33, ..., ann)
                                                              See Problem 1.
See Problem 1.

     If in the diagonal matrix D above, a11 = a22 = ... = ann = k, D is called a scalar matrix; if,
in addition, k = 1, the matrix is called the identity matrix and is denoted by I_n. For example,

     I2 = [1  0]     and     I3 = [1  0  0]
          [0  1]                  [0  1  0]
                                  [0  0  1]

When the order is evident or immaterial, an identity matrix will be denoted by I. Clearly,
I + I + ... to p terms = pI = diag(p, p, ..., p) and I^p = I·I·... to p factors = I. If

     A = [1  2  3]
         [4  5  6]

then I2·A = A·I3 = A, as the reader may readily show.


SPECIAL SQUARE MATRICES. If A and B are square matrices such that AB = BA, then A and B are
called commutative or are said to commute. It is a simple matter to show that if A is any n-square
matrix, it commutes with itself and also with I_n.
                                                              See Problem 2.

     If A and B are such that AB = -BA, the matrices A and B are said to anti-commute.

     A matrix A for which A^(k+1) = A, where k is a positive integer, is called periodic. If k is
the least positive integer for which A^(k+1) = A, then A is said to be of period k.

     If k = 1, so that A² = A, then A is called idempotent.
                                                              See Problems 3-4.

     A matrix A for which A^p = 0, where p is a positive integer, is called nilpotent. If p is the
least positive integer for which A^p = 0, then A is said to be nilpotent of index p.
                                                              See Problems 5-6.

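Idempotent and nilpotent matrices are easy to exhibit; a sketch (ours — these particular matrices are standard illustrations, not taken from the text's problems):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# An idempotent matrix (A^2 = A): projection onto the first coordinate axis.
P = [[1, 0], [0, 0]]
print(matmul(P, P) == P)  # True

# A nilpotent matrix of index 2 (N != 0 but N^2 = 0).
N = [[0, 1], [0, 0]]
print(matmul(N, N))  # [[0, 0], [0, 0]]
```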
THE INVERSE OF A MATRIX. If A and B are square matrices such that AB = BA = I, then B is call-
ed the inverse of A and we write B = A⁻¹ (B equals A inverse). The matrix B also has A as its
inverse and we may write A = B⁻¹.

     Example 1. Since  [1  2  3] [ 6 -2 -3]     [1  0  0]
                       [1  3  3] [-1  1  0]  =  [0  1  0]  =  I ,  each matrix in the product is the inverse of
                       [1  2  4] [-1  0  1]     [0  0  1]
     the other.

     We shall find later (Chapter 7) that not every square matrix has an inverse. We can show
here, however, that if A has an inverse then that inverse is unique.
                                                              See Problem 7.

     If A and B are square matrices of the same order with inverses A⁻¹ and B⁻¹ respectively,
then (AB)⁻¹ = B⁻¹·A⁻¹, that is,

     I. The inverse of the product of two matrices, having inverses, is the product in re-
        verse order of these inverses.
                                                              See Problem 8.
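Theorem I can be checked numerically; a sketch (ours — the 2×2 matrices are hypothetical, and the `inv2` helper uses the adjoint formula treated later, in Chapter 7):

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):
    """Inverse of a 2 x 2 matrix via the adjoint (see Chapter 7); assumes det != 0."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [3, 5]]
B = [[2, 1], [1, 1]]
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

AB_inv = inv2(matmul(A, B))
print(AB_inv == matmul(inv2(B), inv2(A)))   # Theorem I: (AB)^-1 = B^-1 A^-1 -> True
print(matmul(A, inv2(A)) == I)              # A A^-1 = I -> True
```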

THE TRANSPOSE OF A MATRIX. The matrix of order n×m obtained by interchanging the rows and
columns of an m×n matrix A is called the transpose of A and is denoted by A' (A transpose). For

     example, the transpose of  A = [1  2  3]   is   A' = [1  4]
                                    [4  5  6]             [2  5]
                                                          [3  6]

Note that the element a_ij in the ith row and jth column of A stands in the jth row and ith column of A'.

     If A' and B' are transposes respectively of A and B, and if k is a scalar, we have immediately

     (a) (A')' = A     and     (b) (kA)' = kA'

In Problems 10 and 11, we prove:

     II. The transpose of the sum of two matrices is the sum of their transposes, i.e.,

               (A + B)' = A' + B'

and
     III. The transpose of the product of two matrices is the product in reverse order of their
          transposes, i.e.,

               (AB)' = B'·A'
                                                              See Problems 10-12.
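Theorem III — note the reversal of order, matching Theorem I for inverses — can be spot-checked with a sketch (ours; the matrices are hypothetical):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2, 3], [4, 5, 6]]        # 2 x 3
B = [[1, 0], [2, -1], [0, 3]]     # 3 x 2

print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # (AB)' = B' A' -> True
print(transpose(transpose(A)) == A)                                   # (A')' = A -> True
```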

SYMMETRIC MATRICES. A square matrix A such that A' = A is called symmetric. Thus, a square
matrix A = [a_ij] is symmetric provided a_ij = a_ji for all values of i and j. For example,

     A = [1  2  3]
         [2  4 -5]   is symmetric and so also is kA for any scalar k.
         [3 -5  6]

In Problem 13, we prove
     IV. If A is an n-square matrix, then A + A' is symmetric.

     A square matrix A such that A' = -A is called skew-symmetric. Thus, a square matrix A is
skew-symmetric provided a_ij = -a_ji for all values of i and j. Clearly, the diagonal elements are
zeroes. For example,

     A = [ 0 -2  3]
         [ 2  0  4]   is skew-symmetric and so also is kA for any scalar k.
         [-3 -4  0]

With only minor changes in Problem 13, we can prove

     V. If A is any n-square matrix, then A - A' is skew-symmetric.

From Theorems IV and V follows

     VI. Every square matrix A can be written as the sum of a symmetric matrix B = ½(A + A')
         and a skew-symmetric matrix C = ½(A - A').
                                                              See Problems 14-15.
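The decomposition of Theorem VI can be carried out directly; a sketch (ours; the starting matrix is a hypothetical non-symmetric example):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 7], [3, 2]]   # a hypothetical non-symmetric square matrix

B = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]  # (A + A')/2
C = [[(A[i][j] - A[j][i]) / 2 for j in range(2)] for i in range(2)]  # (A - A')/2

print(B == transpose(B))                                  # True: B is symmetric
print(C == [[-x for x in row] for row in transpose(C)])   # True: C is skew-symmetric
print([[b + c for b, c in zip(rb, rc)] for rb, rc in zip(B, C)] == A)  # True: B + C = A
```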

THE CONJUGATE OF A MATRIX. Let a and b be real numbers and let i = √-1; then z = a + bi is
called a complex number. The complex numbers a + bi and a - bi are called conjugates, each
being the conjugate of the other. If z = a + bi, its conjugate is denoted by z̄ = a - bi.

     If z1 = a + bi and z2 = z̄1 = a - bi, then z̄2 = a + bi, that is, the conjugate of
the conjugate of a complex number z is z itself.

     If z1 = a + bi and z2 = c + di, then

     (i)  z1 + z2 = (a+c) + (b+d)i  and the conjugate of z1 + z2 is (a+c) - (b+d)i = (a-bi) + (c-di) = z̄1 + z̄2,

that is, the conjugate of the sum of two complex numbers is the sum of their conjugates.

     (ii) z1·z2 = (ac-bd) + (ad+bc)i  and the conjugate of z1·z2 is (ac-bd) - (ad+bc)i = (a-bi)(c-di) = z̄1·z̄2,

that is, the conjugate of the product of two complex numbers is the product of their conjugates.

     When A is a matrix having complex numbers as elements, the matrix obtained from A by re-
placing each element by its conjugate is called the conjugate of A and is denoted by Ā (A conjugate).

     Example 2. When  A = [1+2i    i ] ,  then  Ā = [1-2i   -i ]
                          [3     2-3i]              [3    2+3i]

     If Ā and B̄ are the conjugates of the matrices A and B and if k is any scalar, we have readily

     (c) the conjugate of Ā is A     and     (d) the conjugate of kA is k̄·Ā

Using (i) and (ii) above, we may prove


VII. The conjugate of the sum of two matrices is the sum of their conjugates, i.e.,
(A + B)‾ = Ā + B̄.

VIII. The conjugate of the product of two matrices is the product, in the same order, of
their conjugates, i.e., (AB)‾ = Ā·B̄.

The transpose of Ā is denoted by Ā' (A conjugate transpose). It is sometimes written as A*.


We have

IX. The transpose of the conjugate of A is equal to the conjugate of the transpose of
A, i.e., (Ā)' = (A')‾.

Example 3. From Example 2,

       [1-2i    3  ]             [1+2i    3  ]              [1-2i    3  ]
(Ā)' = [ -i    2+3i]  while A' = [  i    2-3i]  and (A')‾ = [ -i    2+3i] = (Ā)'

HERMITIAN MATRICES. A square matrix A = [a_ij] such that Ā' = A is called Hermitian. Thus, A
is Hermitian provided a_ij = ā_ji for all values of i and j. Clearly, the diagonal elements of an
Hermitian matrix are real numbers.

                       [ 1    1-i   2]
Example 4. The matrix A = [1+i    3    i] is Hermitian.
                       [ 2    -i    0]

Is kA Hermitian if k is any real number? any complex number?

A square matrix A = [a_ij] such that Ā' = -A is called skew-Hermitian. Thus, A is skew-
Hermitian provided a_ij = -ā_ji for all values of i and j. Clearly, the diagonal elements of a
skew-Hermitian matrix are either zeroes or pure imaginaries.

                       [  i    1-i   2]
Example 5. The matrix A = [-1-i   3i    i] is skew-Hermitian. Is kA skew-Hermitian if k is any real
                       [ -2     i    0]

number? any complex number? any pure imaginary?

By making minor changes in Problem 13, we may prove

X. If A is an n-square matrix, then A + Ā' is Hermitian and A - Ā' is skew-Hermitian.

From Theorem X follows


XI. Every square matrix A with complex elements can be written as the sum of an
Hermitian matrix B = ½(A + Ā') and a skew-Hermitian matrix C = ½(A - Ā').
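Theorem XI can be illustrated with Python's built-in complex numbers. The sketch below is our own (the function names are ours, not the book's); it builds the conjugate transpose element-wise and splits a matrix into its Hermitian and skew-Hermitian parts:

```python
# B = (1/2)(A + conj(A)') is Hermitian, C = (1/2)(A - conj(A)') is
# skew-Hermitian, and A = B + C, mirroring Theorem XI.

def conj_transpose(M):
    # Entry (i, j) of the conjugate transpose is the conjugate of M[j][i].
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def hermitian_split(A):
    H = conj_transpose(A)
    n = len(A)
    B = [[(A[i][j] + H[i][j]) / 2 for j in range(n)] for i in range(n)]
    C = [[(A[i][j] - H[i][j]) / 2 for j in range(n)] for i in range(n)]
    return B, C

A = [[1 + 2j, 3j], [1, 4]]
B, C = hermitian_split(A)
# B equals its own conjugate transpose; C equals its own negative
# conjugate transpose, and the diagonal of C is pure imaginary.
```

As a check, the diagonal of B comes out real and the diagonal of C pure imaginary, exactly as the definitions require.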

DIRECT SUM. Let A1, A2, ..., As be square matrices of respective orders m1, m2, ..., ms. The general-
ization

                        [A1   0  ...   0]
                        [ 0  A2  ...   0]
diag(A1, A2, ..., As) = [................]
                        [ 0   0  ...  As]

of the diagonal matrix is called the direct sum of the Ai.



                              [1  2]           [1  2  -1]
Example 6. Let A1 = [2], A2 = [3  4]  and A3 = [0  2   3].
                                               [4  1  -2]

The direct sum of A1, A2, A3 is

                      [2  0  0  0  0   0]
                      [0  1  2  0  0   0]
diag(A1, A2, A3)  =   [0  3  4  0  0   0]
                      [0  0  0  1  2  -1]
                      [0  0  0  0  2   3]
                      [0  0  0  4  1  -2]

Problem 9(b), Chapter 1, illustrates

XII. If A = diag(A1, A2, ..., As) and B = diag(B1, B2, ..., Bs), where Ai and Bi have
the same order for (i = 1, 2, ..., s), then AB = diag(A1B1, A2B2, ..., AsBs).
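Theorem XII can be verified on small blocks. The sketch below is our own illustration (helper names are ours): it assembles a block-diagonal matrix and checks that multiplying two direct sums multiplies the blocks pairwise.

```python
# Build diag(A1, ..., As) and check AB = diag(A1*B1, ..., As*Bs)
# on a 1x1 block and a 2x2 block.

def direct_sum(blocks):
    n = sum(len(b) for b in blocks)
    M = [[0] * n for _ in range(n)]
    r = 0
    for b in blocks:
        for i, row in enumerate(b):
            for j, v in enumerate(row):
                M[r + i][r + j] = v
        r += len(b)
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A1, B1 = [[2]], [[5]]
A2, B2 = [[1, 2], [3, 4]], [[0, 1], [1, 0]]
left = matmul(direct_sum([A1, A2]), direct_sum([B1, B2]))
right = direct_sum([matmul(A1, B1), matmul(A2, B2)])
# left == right, as Theorem XII asserts
```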

SOLVED PROBLEMS

1. Since

   [a11   0  ...   0 ][b11  b12  ...  b1n]     [a11·b11  a11·b12  ...  a11·b1n]
   [ 0   a22 ...   0 ][b21  b22  ...  b2n]  =  [a22·b21  a22·b22  ...  a22·b2n]
   [ 0    0  ...  amm][bm1  bm2  ...  bmn]     [amm·bm1  amm·bm2  ...  amm·bmn]

the product AB of an m-square diagonal matrix A = diag(a11, a22, ..., amm) and any mxn matrix B is obtained by multi-
plying the first row of B by a11, the second row of B by a22, and so on.
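The row-scaling rule of Problem 1 can be confirmed directly. A quick sketch (ours, not the book's):

```python
# Multiplying by diag(a11, ..., amm) on the left scales the rows of B.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

D = [[2, 0], [0, 3]]            # diag(2, 3)
B = [[1, 4, 5], [6, 7, 8]]
product = matmul(D, B)
scaled_rows = [[2 * x for x in B[0]], [3 * x for x in B[1]]]
# product == scaled_rows == [[2, 8, 10], [18, 21, 24]]
```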

         [a  b]     [c  d]
2. Show that [b  a] and [d  c] commute for all values of a, b, c, d.

This follows from

   [a  b][c  d]     [ac+bd  ad+bc]     [c  d][a  b]
   [b  a][d  c]  =  [ad+bc  ac+bd]  =  [d  c][b  a]

             [ 2  -2  -4]
3. Show that [-1   3   4] is idempotent.
             [ 1  -2  -3]

        [ 2  -2  -4][ 2  -2  -4]     [ 2  -2  -4]
   A² = [-1   3   4][-1   3   4]  =  [-1   3   4]  =  A
        [ 1  -2  -3][ 1  -2  -3]     [ 1  -2  -3]

4. Show that if AB = A and BA = B, then A and B are idempotent.

ABA = (AB)A = A·A = A² and ABA = A(BA) = AB = A; then A² = A and A is idempotent. Use BAB to
show that B is idempotent.

             [ 1   1   3]
5. Show that A = [ 5   2   6] is nilpotent of order 3.
             [-2  -1  -3]

        [ 1   1   3][ 1   1   3]     [ 0   0   0]                  [ 0   0   0][ 1   1   3]
   A² = [ 5   2   6][ 5   2   6]  =  [ 3   3   9]  and A³ = A²·A = [ 3   3   9][ 5   2   6]  =  0
        [-2  -1  -3][-2  -1  -3]     [-1  -1  -3]                  [-1  -1  -3][-2  -1  -3]

6. If A is nilpotent of index 2, show that A(I ± A)ⁿ = A for n any positive integer.

Since A² = 0, A³ = A⁴ = ... = Aⁿ = 0. Then A(I ± A)ⁿ = A(I ± nA) = A ± nA² = A.

7. Let A, B, C be square matrices such that AB = I and CA = I. Then (CA)B = C(AB) so that B = C.
Thus, B = C = A⁻¹ is the unique inverse of A. (What is B⁻¹?)

8. Prove: (AB)⁻¹ = B⁻¹·A⁻¹.

By definition (AB)⁻¹(AB) = (AB)(AB)⁻¹ = I. Now

   (B⁻¹·A⁻¹)AB = B⁻¹(A⁻¹·A)B = B⁻¹·I·B = B⁻¹·B = I
   and AB(B⁻¹·A⁻¹) = A(B·B⁻¹)A⁻¹ = A·A⁻¹ = I

By Problem 7, (AB)⁻¹ is unique; hence, (AB)⁻¹ = B⁻¹·A⁻¹.
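The identity of Problem 8 is easy to test on concrete 2x2 matrices. The sketch below is our own (exact arithmetic via the standard-library Fraction type; the helper names are ours):

```python
# Check (AB)^-1 = B^-1 A^-1 on a concrete pair, using the adjugate
# formula for the 2x2 inverse.
from fractions import Fraction

def inv2(M):
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 1]]
lhs = inv2(matmul(A, B))
rhs = matmul(inv2(B), inv2(A))
# lhs == rhs, confirming (AB)^-1 = B^-1 A^-1
```

Reversing the factors (inv2(A) first) gives a different matrix in general, which is the point of the theorem's "in reverse order."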

9. Prove: A matrix A is involutory if and only if (I - A)(I + A) = 0.

Suppose (I - A)(I + A) = I - A² = 0; then A² = I and A is involutory.

Suppose A is involutory; then A² = I and (I - A)(I + A) = I - A² = I - I = 0.

10. Prove: (A+B)' = A' + B'.

Let A = [a_ij] and B = [b_ij]. We need only check that the elements in the ith row and jth column of
A', B', and (A+B)' are respectively a_ji, b_ji, and a_ji + b_ji.

11. Prove: (AB)' = B'A'.

Let A = [a_ij] be of order mxn, B = [b_ij] be of order nxp; then C = AB = [c_ij] is of order mxp. The
element standing in the ith row and jth column of AB is c_ij = Σ_k a_ik·b_kj, and this is also the element stand-
ing in the jth row and ith column of (AB)'.

The elements of the jth row of B' are b_1j, b_2j, ..., b_nj and the elements of the ith column of A' are a_i1,
a_i2, ..., a_in. Then the element in the jth row and ith column of B'A' is

   Σ_k b_kj·a_ik  =  Σ_k a_ik·b_kj  =  c_ij

Thus, (AB)' = B'A'.
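A direct numerical check of Problem 11 (our own sketch; not part of the text):

```python
# The transpose of a product equals the product of the transposes
# taken in reverse order.

def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 0], [3, -1, 4]]        # 2x3
B = [[2, 1], [0, 1], [5, -2]]      # 3x2
lhs = transpose(matmul(A, B))      # (AB)'
rhs = matmul(transpose(B), transpose(A))   # B'A'
# lhs == rhs; note the shapes force the reversed order, since
# A' is 3x2 and B' is 2x3.
```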

12. Prove: (ABC)' = C'B'A'.

Write ABC = (AB)C. Then by Problem 11,

   (ABC)' = [(AB)C]' = C'(AB)' = C'B'A'.

13. Show that if A = [a_ij] is n-square, then B = [b_ij] = A + A' is symmetric.

First Proof.

The element in the ith row and jth column of A is a_ij and the corresponding element of A' is a_ji; hence,
b_ij = a_ij + a_ji. The element in the jth row and ith column of A is a_ji and the corresponding element of A' is
a_ij; hence, b_ji = a_ji + a_ij. Thus, b_ij = b_ji and B is symmetric.

Second Proof.

By Problem 10, (A+A')' = A' + (A')' = A' + A = A + A' and (A + A') is symmetric.

14. Prove: If A and B are n-square symmetric matrices, then AB is symmetric if and only if A and B
commute.

Suppose A and B commute so that AB = BA. Then (AB)' = B'A' = BA = AB and AB is symmetric.

Suppose AB is symmetric so that (AB)' = AB. Now (AB)' = B'A' = BA; hence, AB = BA and the ma-
trices A and B commute.

15. Prove: If the m-square matrix A is symmetric (skew-symmetric) and if P is of order mxn, then B =
P'AP is symmetric (skew-symmetric).

If A is symmetric then (see Problem 12) B' = (P'AP)' = P'A'(P')' = P'A'P = P'AP = B and B is symmetric.

If A is skew-symmetric then B' = (P'AP)' = -P'AP = -B and B is skew-symmetric.

16. Prove: If A and B are n-square matrices, then A and B commute if and only if A - kI and B - kI
commute for every scalar k.

Suppose A and B commute; then AB = BA and

   (A - kI)(B - kI) = AB - k(A+B) + k²I = BA - k(A+B) + k²I = (B - kI)(A - kI)

Thus, A - kI and B - kI commute.

Suppose A - kI and B - kI commute; then

   (A - kI)(B - kI) = AB - k(A+B) + k²I = (B - kI)(A - kI) = BA - k(A+B) + k²I,

so AB = BA and A and B commute.



SUPPLEMENTARY PROBLEMS
17. Show that the product of two upper (lower) triangular matrices is upper (lower) triangular.

18. Derive a rule for forming the product BA of an mxn matrix B and A = diag(a11, a22, ..., ann).
Hint. See Problem 1.

19. Show that the scalar matrix with diagonal element k can be written as kI and that kA = kI·A = diag(k, k, ..., k)·A,
where the order of I is the row order of A.

20. If A is n-square, show that Aᵖ·Aᵠ = Aᵠ·Aᵖ where p and q are positive integers.

                 [ 2  -3  -5]         [-1   3   5]
21. (a) Show that A = [-1   4   5] and B = [ 1  -3  -5] are idempotent.
                 [ 1  -3  -4]         [-1   3   5]

    (b) Using A and B, show that the converse of Problem 4 does not hold.

22. If A is idempotent, show that B = I - A is idempotent and that AB = BA = 0.

           [1  2  2]
23. (a) If A = [2  1  2], show that A² - 4A - 5I = 0.
           [2  2  1]

           [2   1  3]
    (b) If A = [1  -1  2], show that A³ - 2A² - 9A = 0, but A² - 2A - 9I ≠ 0.
           [1   2  1]

-1 -1 -1" 2
1 o"
3
1
24. Show that 1 = 1 -1 -1 = =
1 /.
_ 1 1 -1 -1 -1

              [ 1  -2  -6]
25. Show that A = [-3   2   9] is periodic, of period 2.
              [ 2   0  -3]

              [ 1  -3  -4]
26. Show that [-1   3   4] is nilpotent.
              [ 1  -3  -4]

'12 3 2 -1 -6
27. Show that (a) A = 3 2 and B = 3 2 9 commute.
-1 ~1 -1 _-l -1 -4_
"112' 2/3 -1/3'
(b) A = 2 3 and B = -3/5
1 2/5 1/5 commute.
-12 4 7/15 -1/5 1/15

28. Show that A and B anti-commute and that (A + B)² = A² + B².

29. Show that each of anti-commutes with the others.

30. Prove: The only matrices which commute with every n-square matrix are the n-square scalar matrices.

31. (a) Find all matrices which commute with diag(1, 2, 3).
    (b) Find all matrices which commute with diag(a11, a22, ..., ann).
    Ans. (a) diag(a, b, c) where a, b, c are arbitrary.


              [ 1   2   3]                   [ 3  -2  -1]
32. Show that (a) [ 2   5   7] is the inverse of [-4   1  -1]
              [-2  -4  -5]                   [ 2   0   1]

        [ 1  0  0  0]                   [ 1   0   0  0]
    (b) [ 2  1  0  0] is the inverse of [-2   1   0  0]
        [ 4  2  1  0]                   [ 0  -2   1  0]
        [-2  3  1  1]                   [ 8  -1  -1  1]

33. Set  [1  2][a  b]  =  [1  0]  to find the inverse of [1  2].    Ans. [ -2     1 ]
         [3  4][c  d]     [0  1]                         [3  4]          [3/2  -1/2]

34. Show that the inverse of a diagonal matrix A, all of whose diagonal elements are different from zero, is a
diagonal matrix whose diagonal elements are the inverses of those of A and in the same order. Thus, the
inverse of I is I.

              [0   1  -1]         [ 4   3   3]
35. Show that A = [4  -3   4] and B = [-1   0  -1] are involutory.
              [3  -3   4]         [-4  -4  -3]

        [1  0 |  0   0]
        [0  1 |  0   0]     [I2     0 ]
36. Let A = [a  b | -1   0]  =  [A21  -I2], by partitioning. Show that A² = I4.
        [c  d |  0  -1]

37. Prove: (a) (A')' = A, (b) (kA)' = kA', (c) (Aᵖ)' = (A')ᵖ for p a positive integer.

38. Prove: (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹. Hint. Write ABC = (AB)C.

39. Prove: (a) (A⁻¹)⁻¹ = A, (b) (kA)⁻¹ = (1/k)A⁻¹, (c) (Aᵖ)⁻¹ = (A⁻¹)ᵖ for p a positive integer.

40. Show that every real symmetric matrix is Hermitian.

41. Prove: (a) (Ā)‾ = A, (b) (A + B)‾ = Ā + B̄, (c) (kA)‾ = k̄·Ā, (d) (AB)‾ = Ā·B̄.

              [ 1     1+i   2+3i]
42. Show: (a) A = [1-i     2     -i ] is Hermitian,
              [2-3i    i     0 ]

              [  i    1+i   2-3i]
    (b) B = [-1+i    2i     1 ] is skew-Hermitian,
              [-2-3i   -1     0 ]

    (c) iB is Hermitian,

    (d) Ā is Hermitian and B̄ is skew-Hermitian.

43. If A is n-square, show that (a) AA' and A'A are symmetric, (b) A + Ā', AĀ', and Ā'A are Hermitian.

44. Prove: If H is Hermitian and A is any conformable matrix, then (Ā)'HA is Hermitian.

45. Prove: Every Hermitian matrix A can be written as B + iC where B is real and symmetric and C is real and
skew-symmetric.

46. Prove: (a) Every skew-Hermitian matrix A can be written as A = B + iC where B is real and skew-symmetric
and C is real and symmetric. (b) Ā'A is real if and only if B and C anti-commute.

47. Prove: If A and B commute, so also do A⁻¹ and B⁻¹, A' and B', and Ā and B̄.

48. Show that for m and n positive integers, Aᵐ and Bⁿ commute if A and B commute.

             [λ  1]ⁿ     [λⁿ  nλⁿ⁻¹]        [λ  1  0]ⁿ     [λⁿ  nλⁿ⁻¹  ½n(n-1)λⁿ⁻²]
49. Show (a) [0  λ]   =  [0     λⁿ ],   (b) [0  λ  1]   =  [0    λⁿ       nλⁿ⁻¹   ]
                                            [0  0  λ]      [0    0         λⁿ     ]

50. Prove: If A is symmetric or skew-symmetric, then AA' = A'A and A² are symmetric.

51. Prove: If A is symmetric, so also is aA + bA² + ... + gAᵖ where a, b, ..., g are scalars and p is a positive
integer.

52. Prove: Every square matrix A can be written as A = B + C where B is Hermitian and C is skew-Hermitian.

53. Prove: If A is real and skew-symmetric, or if A is complex and skew-Hermitian, then ±iA are Hermitian.

54. Show that the theorem of Problem 52 can be stated:


Every square matrix A can be written as A =B+iC where B and C are Hermitian.

55. Prove: If A and B are such that AB = A and BA = B, then (a) B'A' = A' and A'B' = B', (b) A' and B' are
idempotent, (c) A = B = I if A has an inverse.

56. If A is involutory, show that ½(I + A) and ½(I - A) are idempotent and that ½(I + A)·½(I - A) = 0.

57. If the n-square matrix A has an inverse A⁻¹, show:

(a) (A⁻¹)' = (A')⁻¹,  (b) (A⁻¹)‾ = (Ā)⁻¹,  (c) (Ā')⁻¹ = ((A⁻¹)‾)'
Hint. (a) From the transpose of AA⁻¹ = I, obtain (A⁻¹)' as the inverse of A'.

58. Find all matrices which commute with (a) diag(1, 1, 2, 3), (b) diag(1, 1, 2, 2).
Ans. (a) diag(A, b, c), (b) diag(A, B), where A and B are 2-square matrices with arbitrary elements and b, c
are scalars.

59. If A1, A2, ..., As are scalar matrices of respective orders m1, m2, ..., ms, find all matrices which commute
with diag(A1, A2, ..., As).
Ans. diag(B1, B2, ..., Bs) where B1, B2, ..., Bs are of respective orders m1, m2, ..., ms with arbitrary elements.

60. If AB = 0, where A and B are non-zero n-square matrices, then A and B are called divisors of zero. Show
that the matrices A and B of Problem 21 are divisors of zero.

61. If A = diag(A1, A2, ..., As) and B = diag(B1, B2, ..., Bs) where Ai and Bi are of the same order, (i = 1, 2,
..., s), show that
(a) A + B = diag(A1+B1, A2+B2, ..., As+Bs)
(b) AB = diag(A1B1, A2B2, ..., AsBs)
(c) trace AB = trace A1B1 + trace A2B2 + ... + trace AsBs.

62. Prove: If A and B are n-square skew-symmetric matrices, then AB is symmetric if and only if A and B commute.

63. Prove: If A is n-square and B = rA + sI, where r and s are scalars, then A and B commute.

64. Let A and B be n-square matrices and let r1, r2, s1, s2 be scalars such that r1·s2 ≠ r2·s1. Prove that C1 =
r1·A + s1·B and C2 = r2·A + s2·B commute if and only if A and B commute.

65. Show that the n-square matrix A will not have an inverse when (a) A has a row (column) of zero elements, or
(b) A has two identical rows (columns), or (c) A has a row (column) which is the sum of two other rows (columns).

66. If A and B are n-square matrices and A has an inverse, show that
(A+B)A⁻¹(A-B) = (A-B)A⁻¹(A+B)
chapter 3

Determinant of a Square Matrix

PERMUTATIONS. Consider the 3! = 6 permutations of the integers 1, 2, 3 taken together

(3.1)   123   132   213   231   312   321

and eight of the 4! = 24 permutations of the integers 1, 2, 3, 4 taken together

(3.2)   1234   2134   3124   4123
        1324   2314   3214   4213

If in a given permutation a larger integer precedes a smaller one, we say that there is an
inversion. If in a given permutation the number of inversions is even (odd), the permutation is
called even (odd). For example, in (3.1) the permutation 123 is even since there is no inver-
sion, the permutation 132 is odd since in it 3 precedes 2, the permutation 312 is even since in
it 3 precedes 1 and 3 precedes 2. In (3.2) the permutation 4213 is even since in it 4 precedes
2, 4 precedes 1, 4 precedes 3, and 2 precedes 1.
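The inversion count defined above is easy to compute mechanically. A small sketch (our own; the function names are ours):

```python
# Count inversions: pairs (i, j) with i < j but perm[i] > perm[j].
from itertools import combinations

def inversions(perm):
    return sum(1 for i, j in combinations(range(len(perm)), 2)
               if perm[i] > perm[j])

def is_even(perm):
    return inversions(perm) % 2 == 0

# 132 has one inversion (3 before 2), so it is odd;
# 4213 has four inversions, so it is even.
```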

THE DETERMINANT OF A SQUARE MATRIX. Consider the n-square matrix

        [a11  a12  a13  ...  a1n]
        [a21  a22  a23  ...  a2n]
(3.3)   [.......................]
        [an1  an2  an3  ...  ann]

and a product

(3.4)   a_{1j1}·a_{2j2}·a_{3j3}···a_{njn}

of n of its elements, selected so that one and only one element comes from any row and one
and only one element comes from any column. In (3.4), as a matter of convenience, the factors
have been arranged so that the sequence of first subscripts is the natural order 1, 2, ..., n; the
sequence j1, j2, ..., jn of second subscripts is then some one of the n! permutations of the inte-
gers 1, 2, ..., n. (Facility will be gained if the reader will parallel the work of this section be-
ginning with a product arranged so that the sequence of second subscripts is in natural order.)

For a given permutation j1, j2, ..., jn of the second subscripts, define ε_{j1 j2 ... jn} = +1 or -1
according as the permutation is even or odd and form the signed product

(3.5)   ε_{j1 j2 ... jn} a_{1j1} a_{2j2} ··· a_{njn}

By the determinant of A, denoted by |A|, is meant the sum of all the different signed prod-
ucts of the form (3.5), called terms of |A|, which can be formed from the elements of A; thus,

(3.6)   |A|  =  Σ ε_{j1 j2 ... jn} a_{1j1} a_{2j2} ··· a_{njn}

where the summation extends over the p = n! permutations j1 j2 ... jn of the integers 1, 2, ..., n.

The determinant of a square matrix of order n is called a determinant of order n.
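Definition (3.6) can be transcribed almost literally into code. The sketch below is our own illustration (not the book's): it sums the signed products over all n! permutations of the column subscripts, so it is exponential-time and meant only to mirror the definition, not to be an efficient algorithm.

```python
# |A| = sum over all permutations p of epsilon(p) * a_{1,p1} * ... * a_{n,pn}
from itertools import combinations, permutations

def sign(perm):
    # epsilon: +1 for an even permutation, -1 for an odd one.
    inv = sum(1 for i, j in combinations(range(len(perm)), 2)
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

# det([[1, 2], [3, 4]]) == 1*4 - 2*3 == -2
```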


DETERMINANTS OF ORDER TWO AND THREE. From (3.6) we have for n = 2 and n = 3,

        |a11  a12|
(3.7)   |a21  a22|  =  ε12·a11·a22 + ε21·a12·a21  =  a11·a22 - a12·a21

and

        |a11  a12  a13|
(3.8)   |a21  a22  a23|  =  ε123·a11a22a33 + ε132·a11a23a32 + ε213·a12a21a33
        |a31  a32  a33|      + ε231·a12a23a31 + ε312·a13a21a32 + ε321·a13a22a31

        =  a11a22a33 - a11a23a32 - a12a21a33 + a12a23a31 + a13a21a32 - a13a22a31

        =  a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)

               |a22  a23|        |a21  a23|        |a21  a22|
        =  a11·|a32  a33| - a12·|a31  a33| + a13·|a31  a32|

Example 1.

     |1  2|
(a)  |3  4|  =  1·4 - 2·3  =  4 - 6  =  -2

     |2  -1|
(b)  |3   0|  =  2·0 - (-1)·3  =  3

     |2  3  5|        |0  1|       |1  1|       |1  0|
(c)  |1  0  1|  =  2·|1  0| - 3·|2  0| + 5·|2  1|
     |2  1  0|

     =  2(0·0 - 1·1) - 3(1·0 - 1·2) + 5(1·1 - 0·2)  =  2(-1) - 3(-2) + 5(1)  =  9

     |2  -3  -4|
(d)  |1   0  -2|  =  2{0(-6) - (-2)(-5)} - (-3){1(-6) - (-2)·0} + (-4){1(-5) - 0·0}
     |0  -5  -6|
                  =  -20 - 18 + 20  =  -18

See Problem 1.

PROPERTIES OF DETERMINANTS. Throughout this section, A is the square matrix whose determi-
nant |A| is given by (3.6).

Suppose that every element of the ith row (every element of the jth column) is zero. Since
every term of (3.6) contains one element from this row (column), every term in the sum is zero
and we have

I. If every element of a row (column) of a square matrix A is zero, then |A| = 0.

Consider the transpose A' of A. It can be seen readily that every term of (3.6) can be ob-
tained from A' by choosing properly the factors in order from the first, second, ... columns. Thus,

II. If A is a square matrix, then |A'| = |A|; that is, for every theorem concerning the rows
of a determinant there is a corresponding theorem concerning the columns and vice versa.

Denote by B the matrix obtained by multiplying each of the elements of the ith row of A by
a scalar k. Since each term in the expansion of |B| contains one and only one element from its
ith row, that is, one and only one element having k as a factor,

   |B|  =  k·Σ ε_{j1 j2 ... jn} a_{1j1} a_{2j2} ··· a_{njn}  =  k|A|

Thus,

III. If every element of a row (column) of a determinant |A| is multiplied by a scalar k, the
determinant is multiplied by k; if every element of a row (column) of a determinant |A| has k as
a factor, then k may be factored from |A|. For example,

   |k·a11  a12  a13|       |a11  a12  a13|
   |k·a21  a22  a23|  =  k·|a21  a22  a23|
   |k·a31  a32  a33|       |a31  a32  a33|

Let B denote the matrix obtained from A by interchanging its ith and (i+1)st rows. Each
product in (3.6) of |A| is a product of |B|, and vice versa; hence, except possibly for signs,
(3.6) is the expansion of |B|. In counting the inversions in subscripts of any term of (3.6) as a
term of |B|, i before i+1 in the row subscripts is an inversion; thus, each product of (3.6) with
its sign changed is a term of |B| and |B| = -|A|. Hence,

IV. If B is obtained from A by interchanging any two adjacent rows (columns), then |B| =
-|A|.

As a consequence of Theorem IV, we have

V. If B is obtained from A by interchanging any two of its rows (columns), then |B| = -|A|.

VI. If B is obtained from A by carrying its ith row (column) over p rows (columns), then
|B| = (-1)ᵖ|A|.

VII. If two rows (columns) of A are identical, then |A| = 0.

Suppose that each element of the first row of A is expressed as a binomial a_{1j} = b_{1j} + c_{1j},
(j = 1, 2, ..., n). Then

   |A|  =  Σ ε_{j1 j2 ... jn} (b_{1j1} + c_{1j1}) a_{2j2} ··· a_{njn}

        =  Σ ε_{j1 j2 ... jn} b_{1j1} a_{2j2} ··· a_{njn}  +  Σ ε_{j1 j2 ... jn} c_{1j1} a_{2j2} ··· a_{njn}

           |b11  b12  ...  b1n|     |c11  c12  ...  c1n|
           |a21  a22  ...  a2n|     |a21  a22  ...  a2n|
        =  |....................|  +  |....................|
           |an1  an2  ...  ann|     |an1  an2  ...  ann|

In general,

VIII. If every element of the ith row (column) of A is the sum of p terms, then |A| can
be expressed as the sum of p determinants. The elements in the ith rows (columns) of these
p determinants are respectively the first, second, ..., pth terms of the sums and all other rows
(columns) are those of A.

The most useful theorem is

IX. If B is obtained from A by adding to the elements of its ith row (column) a scalar mul-
tiple of the corresponding elements of another row (column), then |B| = |A|. For example,

   |a11 + k·a12   a12  a13|     |a11  a12  a13|
   |a21 + k·a22   a22  a23|  =  |a21  a22  a23|
   |a31 + k·a32   a32  a33|     |a31  a32  a33|

See Problems 2-7.
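Theorems III, V, and IX can be spot-checked numerically. The sketch below is our own illustration; it reuses the permutation-sum definition (3.6), and the helper names are ours:

```python
# det() implements (3.6) directly; then we scale a row, swap two rows,
# and add a multiple of one row to another, checking Theorems III, V, IX.
from itertools import combinations, permutations

def det(A):
    n = len(A)
    def sgn(p):
        inv = sum(1 for i, j in combinations(range(n), 2) if p[i] > p[j])
        return -1 if inv % 2 else 1
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

A = [[1, 2, 3], [0, 1, 4], [5, 6, 0]]
d = det(A)                                               # here d == 1
scaled  = [[3 * x for x in A[0]], A[1], A[2]]            # Theorem III: 3|A|
swapped = [A[1], A[0], A[2]]                             # Theorem V:  -|A|
added   = [A[0],
           [x + 2 * y for x, y in zip(A[1], A[0])],      # row2 + 2*row1
           A[2]]                                         # Theorem IX:  |A|
```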

FIRST MINORS AND COFACTORS. Let A be the n-square matrix (3.3) whose determinant |A| is given
by (3.6). When from A the elements of its ith row and jth column are removed, the determinant
of the remaining (n-1)-square matrix is called a first minor of A or of |A| and denoted by |M_ij|.
More frequently, it is called the minor of a_ij. The signed minor, (-1)^{i+j}·|M_ij|, is called the
cofactor of a_ij and is denoted by α_ij.
"V
                 [a11  a12  a13]
Example 2. If A = [a21  a22  a23], then
                 [a31  a32  a33]

           |a22  a23|           |a21  a23|           |a21  a22|
   |M11| = |a32  a33|,  |M12| = |a31  a33|,  |M13| = |a31  a32|

and

   α11 = (-1)^{1+1}|M11| = |M11|,   α12 = (-1)^{1+2}|M12| = -|M12|,   α13 = (-1)^{1+3}|M13| = |M13|

Then (3.8) is

   |A|  =  a11|M11| - a12|M12| + a13|M13|
        =  a11·α11 + a12·α12 + a13·α13

In Problem 9, we prove

X. The value of the determinant |A|, where A is the matrix of (3.3), is the sum of the prod-
ucts obtained by multiplying each element of a row (column) of |A| by its cofactor, i.e.,

(3.9)    |A|  =  a_{i1}·α_{i1} + a_{i2}·α_{i2} + ... + a_{in}·α_{in}  =  Σ_{k=1..n} a_{ik}·α_{ik},   (i = 1, 2, ..., n)

(3.10)   |A|  =  a_{1j}·α_{1j} + a_{2j}·α_{2j} + ... + a_{nj}·α_{nj}  =  Σ_{k=1..n} a_{kj}·α_{kj},   (j = 1, 2, ..., n)

Using Theorem VII, we can prove

XI. The sum of the products formed by multiplying the elements of a row (column) of an
n-square matrix A by the corresponding cofactors of another row (column) of A is zero.

Example 3. If A is the matrix of Example 2, we have

   a31·α31 + a32·α32 + a33·α33 = |A|   and   a12·α12 + a22·α22 + a32·α32 = |A|

while

   a31·α21 + a32·α22 + a33·α23 = 0   and   a12·α13 + a22·α23 + a32·α33 = 0

See Problems 10-11.
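Theorems X and XI can likewise be checked in code. In this sketch (our own; helper names are ours), cofactor(A, i, j) deletes row i and column j and applies the sign (-1)^{i+j}; the test matrix is the one used in Problem 8 of this chapter.

```python
# Expansion along a row gives |A| (Theorem X); a row paired with another
# row's cofactors gives 0 (Theorem XI).
from itertools import combinations, permutations

def det(A):
    n = len(A)
    def sgn(p):
        inv = sum(1 for i, j in combinations(range(n), 2) if p[i] > p[j])
        return -1 if inv % 2 else 1
    total = 0
    for p in permutations(range(n)):
        term = sgn(p)
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def cofactor(A, i, j):
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return (-1) ** (i + j) * det(minor)

A = [[1, 2, 3], [2, 3, 2], [1, 2, 2]]
expand_row0 = sum(A[0][j] * cofactor(A, 0, j) for j in range(3))  # == det(A)
alien       = sum(A[0][j] * cofactor(A, 1, j) for j in range(3))  # == 0
```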

MINORS AND ALGEBRAIC COMPLEMENTS. Consider the matrix (3.3). Let i1, i2, ..., im, arranged in
order of magnitude, be m, (1 ≤ m < n), of the row indices 1, 2, ..., n and let j1, j2, ..., jm, arrang-
ed in order of magnitude, be m of the column indices. Let the remaining row and column indi-
ces, arranged in order of magnitude, be respectively i_{m+1}, i_{m+2}, ..., i_n and j_{m+1}, j_{m+2}, ..., j_n. Such
a separation of the row and column indices determines uniquely two matrices

                                     [a_{i1,j1}   a_{i1,j2}   ...  a_{i1,jm}]
(3.11)   A^{j1, j2, ..., jm}_{i1, i2, ..., im}  =  [a_{i2,j1}   a_{i2,j2}   ...  a_{i2,jm}]
                                     [a_{im,j1}   a_{im,j2}   ...  a_{im,jm}]

and

                                                 [a_{i_{m+1},j_{m+1}}  a_{i_{m+1},j_{m+2}}  ...  a_{i_{m+1},j_n}]
(3.12)   A^{j_{m+1}, j_{m+2}, ..., j_n}_{i_{m+1}, i_{m+2}, ..., i_n}  =  [a_{i_{m+2},j_{m+1}}  a_{i_{m+2},j_{m+2}}  ...  a_{i_{m+2},j_n}]
                                                 [a_{i_n,j_{m+1}}      a_{i_n,j_{m+2}}      ...  a_{i_n,j_n}    ]

called sub-matrices of A.

The determinant of each of these sub-matrices is called a minor of A and the pair of minors

   |A^{j1, j2, ..., jm}_{i1, i2, ..., im}|   and   |A^{j_{m+1}, j_{m+2}, ..., j_n}_{i_{m+1}, i_{m+2}, ..., i_n}|

are called complementary minors of A, each being the complement of the other.

Example 3. For the 5-square matrix A = [a_ij],

                    |a21  a23|                           |a12  a14  a15|
   |A^{1,3}_{2,5}| = |a51  a53|   and   |A^{2,4,5}_{1,3,4}| = |a32  a34  a35|
                                                         |a42  a44  a45|

are a pair of complementary minors.

Let
(3.13)   p  =  i1 + i2 + ... + im + j1 + j2 + ... + jm
and
(3.14)   q  =  i_{m+1} + i_{m+2} + ... + i_n + j_{m+1} + j_{m+2} + ... + j_n

The signed minor (-1)ᵖ |A^{j_{m+1}, j_{m+2}, ..., j_n}_{i_{m+1}, i_{m+2}, ..., i_n}| is called the algebraic complement of

   |A^{j1, j2, ..., jm}_{i1, i2, ..., im}|

and (-1)ᵠ |A^{j1, j2, ..., jm}_{i1, i2, ..., im}| is called the algebraic complement of

   |A^{j_{m+1}, j_{m+2}, ..., j_n}_{i_{m+1}, i_{m+2}, ..., i_n}|

Example 4. For the minors of Example 3, (-1)^{2+5+1+3} |A^{2,4,5}_{1,3,4}| = -|A^{2,4,5}_{1,3,4}| is the algebraic complement

   of |A^{1,3}_{2,5}|, and (-1)^{1+3+4+2+4+5} |A^{1,3}_{2,5}| = -|A^{1,3}_{2,5}| is the algebraic complement of

   |A^{2,4,5}_{1,3,4}|. Note that the sign given to the two complementary minors is the same. Is this

   always true?

When m = 1, (3.11) becomes A^{j1}_{i1} = [a_{i1,j1}], an element of A. The
complementary minor is |M_{i1,j1}| in the notation of the section above, and the
algebraic complement is the cofactor α_{i1,j1}.

A minor of A, whose diagonal elements are also diagonal elements of A, is called a principal
minor of A. The complement of a principal minor of A is also a principal minor of A; the alge-
braic complement of a principal minor is its complement.



Example 5. For the 5-square matrix A = [a_ij],

                    |a11  a13|                           |a22  a24  a25|
   |A^{1,3}_{1,3}| = |a31  a33|   and   |A^{2,4,5}_{2,4,5}| = |a42  a44  a45|
                                                         |a52  a54  a55|

are a pair of complementary principal minors of A. What is the algebraic complement of each?

The terms minor, complementary minor, algebraic complement, and principal minor as de-
fined above for a square matrix A will also be used without change in connection with |A|.

See Problems 12-13.

SOLVED PROBLEMS

    |2   3|
1. (a) |-1  4|  =  2·4 - 3·(-1)  =  11

    |1  0  2|        |4  5|       |3  5|       |3  4|
   (b) |3  4  5|  =  1·|6  7| - 0·|5  7| + 2·|5  6|  =  1(4·7 - 5·6) + 2(3·6 - 4·5)  =  -2 - 4  =  -6
    |5  6  7|

    |1  0   6|        |4  15|       |3  4|
   (c) |3  4  15|  =  1·|6  21| + 6·|5  6|  =  1(4·21 - 15·6) + 6(3·6 - 4·5)  =  -6 - 12  =  -18
    |5  6  21|

    |1  0  0|
   (d) |2  3  5|  =  1·(3·3 - 5·1)  =  4
    |4  1  3|

2. Adding to the elements of the first column the corresponding elements of the other columns,

   |-4   1   1   1   1|     |0   1   1   1   1|
   | 1  -4   1   1   1|     |0  -4   1   1   1|
   | 1   1  -4   1   1|  =  |0   1  -4   1   1|  =  0
   | 1   1   1  -4   1|     |0   1   1  -4   1|
   | 1   1   1   1  -4|     |0   1   1   1  -4|

by Theorem I.

3. Adding the second column to the third, removing the common factor from this third column, and
using Theorem VII,

   |1  a  b+c|     |1  a  a+b+c|               |1  a  1|
   |1  b  c+a|  =  |1  b  a+b+c|  =  (a+b+c)·|1  b  1|  =  0
   |1  c  a+b|     |1  c  a+b+c|               |1  c  1|

4. Adding to the third row the first and second rows, then removing the common factor 2; subtracting
the second row from the third; subtracting the third row from the first; subtracting the first row
from the second; finally carrying the third row over the other rows:

   |a1+b1  a2+b2  a3+b3|       |a1+b1  a2+b2  a3+b3|       |a1+b1  a2+b2  a3+b3|
   |b1+c1  b2+c2  b3+c3|  =  2·|b1+c1  b2+c2  b3+c3|  =  2·|b1+c1  b2+c2  b3+c3|
   |c1+a1  c2+a2  c3+a3|       |a1+b1+c1  a2+b2+c2  a3+b3+c3|   |a1     a2     a3   |

        |b1     b2     b3   |       |b1  b2  b3|       |a1  a2  a3|
   =  2·|b1+c1  b2+c2  b3+c3|  =  2·|c1  c2  c3|  =  2·|b1  b2  b3|
        |a1     a2     a3   |       |a1  a2  a3|       |c1  c2  c3|

                                      |a1²  a1  1|
5. Without expanding, show that |A| = |a2²  a2  1|  =  -(a1 - a2)(a2 - a3)(a3 - a1)
                                      |a3²  a3  1|

Subtract the second row from the first; then

         |a1²-a2²  a1-a2  0|               |a1+a2  1   0|
   |A| = |a2²      a2     1|  =  (a1 - a2)·|a2²    a2  1|   by Theorem III
         |a3²      a3     1|               |a3²    a3  1|

and a1 - a2 is a factor of |A|. Similarly, a2 - a3 and a3 - a1 are factors. Now |A| is of order three in the
letters; hence,

(i)   |A|  =  k(a1 - a2)(a2 - a3)(a3 - a1)

The product of the diagonal elements, a1²·a2, is a term of |A| and, from (i), the term is -k·a1²·a2. Thus,
k = -1 and |A| = -(a1 - a2)(a2 - a3)(a3 - a1). Note that |A| vanishes if and only if two of a1, a2, a3 are
equal.

6. Prove: If A is skew-symmetric and of odd order 2p-1, then |A| = 0.

Since A is skew-symmetric, A' = -A; then |A'| = |-A| = (-1)^{2p-1}|A| = -|A|. But, by Theorem II,
|A'| = |A|; hence, |A| = -|A| and |A| = 0.

7. Prove: If A is Hermitian, then |A| is a real number.

Since A is Hermitian, A = Ā'; then |A| = |Ā'| = |Ā| by Theorem II. But if

   |A|  =  Σ ε_{j1 j2 ... jn} a_{1j1} a_{2j2} ··· a_{njn}  =  a + bi

then

   |Ā|  =  Σ ε_{j1 j2 ... jn} ā_{1j1} ā_{2j2} ··· ā_{njn}  =  a - bi

Now |Ā| = |A| requires b = 0; hence, |A| is a real number.

                  [1  2  3]
8. For the matrix A = [2  3  2]:
                  [1  2  2]

   α11 = (-1)^{1+1}|3  2| = 2,    α12 = (-1)^{1+2}|2  2| = -2,   α13 = (-1)^{1+3}|2  3| = 1
                   |2  2|                        |1  2|                         |1  2|

   α21 = (-1)^{2+1}|2  3| = 2,    α22 = (-1)^{2+2}|1  3| = -1,   α23 = (-1)^{2+3}|1  2| = 0
                   |2  2|                        |1  2|                         |1  2|

   α31 = (-1)^{3+1}|2  3| = -5,   α32 = (-1)^{3+2}|1  3| = 4,    α33 = (-1)^{3+3}|1  2| = -1
                   |3  2|                        |2  2|                         |2  3|

Note that the signs given to the minors of the elements in forming the cofactors follow the pattern

   +  -  +
   -  +  -
   +  -  +

where each sign occupies the same position in the display as the element, whose cofactor is required, oc-
cupies in A. Write the display of signs for a 5-square matrix.

9. Prove: The value of the determinant |A| of an n-square matrix A is the sum of the products obtained
by multiplying each element of a row (column) of A by its cofactor.

We shall prove this for a row. The terms of (3.6) having a11 as a factor are

(a)   a11 Σ ε_{1 j2 ... jn} a_{2j2} a_{3j3} ··· a_{njn}

Now ε_{1 j2 ... jn} = ε_{j2 j3 ... jn} since the 1 is in natural order. Then (a) may be written as

(b)   a11 Σ ε_{j2 j3 ... jn} a_{2j2} a_{3j3} ··· a_{njn}

where the summation extends over the σ = (n-1)! permutations of the integers 2, 3, ..., n, and hence, as

          |a22  a23  ...  a2n|
(c)   a11·|...................|   =   a11·|M11|
          |an2  an3  ...  ann|

Consider the matrix B obtained from A by moving its sth column over the first s-1 columns. By Theorem
VI, |B| = (-1)^{s-1}|A|. Moreover, the element standing in the first row and first column of B is a_{1s} and the
minor of a_{1s} in B is precisely the minor |M_{1s}| of a_{1s} in A. By the argument leading to (c), the terms of
a_{1s}|M_{1s}| are all the terms of |B| having a_{1s} as a factor and, thus, all the terms of (-1)^{s-1}|A| having a_{1s} as
a factor. Then the terms of a_{1s}{(-1)^{s-1}|M_{1s}|} are all the terms of |A| having a_{1s} as a factor. Thus,

(3.15)   |A|  =  a11|M11| - a12|M12| + ... + a_{1s}(-1)^{1+s}|M_{1s}| + ... + a_{1n}(-1)^{1+n}|M_{1n}|

              =  a11·α11 + a12·α12 + ... + a1n·α1n

since (-1)^{s-1} = (-1)^{s+1}. We have (3.9) with i = 1. We shall call (3.15) the expansion of |A| along its first
row.

In a similar manner, (3.10) with j = 1 is seen to be the expansion of |A| along its first column, obtained by repeating the above argu-
ments. Let B be the matrix obtained from A by moving its rth row over the first r-1 rows and then its sth col-
umn over the first s-1 columns. Then

   |B|  =  (-1)^{r-1}(-1)^{s-1}|A|  =  (-1)^{r+s}|A|

The element standing in the first row and the first column of B is a_{rs} and the minor of a_{rs} in B is precisely
the minor |M_{rs}| of a_{rs} in A. Thus, the terms of

   a_{rs}{(-1)^{r+s}|M_{rs}|}

are all the terms of |A| having a_{rs} as a factor. Then

   |A|  =  Σ_{k=1..n} a_{rk}{(-1)^{r+k}|M_{rk}|}  =  Σ_{k=1..n} a_{rk}·α_{rk}

and we have (3.9) for i = r.
10. When α_ij is the cofactor of a_ij in the n-square matrix A = [a_ij], show that

                                             |a11  a12  ...  a1,j-1  k1  a1,j+1  ...  a1n|
(i)   k1·α_{1j} + k2·α_{2j} + ... + kn·α_{nj}  =  |a21  a22  ...  a2,j-1  k2  a2,j+1  ...  a2n|
                                             |an1  an2  ...  an,j-1  kn  an,j+1  ...  ann|

This relation follows from (3.10) by replacing a_{1j} with k1, a_{2j} with k2, ..., a_{nj} with kn. In making these
replacements none of the cofactors α_{1j}, α_{2j}, ..., α_{nj} appearing is affected since none contains an element
from the jth column of A.

By Theorem VII, the determinant in (i) is 0 when k_r = a_{rs}, (r = 1, 2, ..., n and s ≠ j). By Theorems VIII
and VII, the determinant in (i) is |A| when k_r = a_{rj} + k·a_{rs}, (r = 1, 2, ..., n and s ≠ j).

Write the equality similar to (i) obtained from (3.9) when the elements of the ith row of A are replaced
by k1, k2, ..., kn.
11. Evaluate:

    |1   0  2|        |1   4  8|        |3   4   5|        |2   3  -4|        |28  25  38|
(a) |3   0  4|    (b) |-2  1  5|    (c) |1   2   3|    (d) |5  -6   3|    (e) |42  38  65|
    |2  -5  1|        |-3  2  4|        |-2  5  -4|        |4   2  -3|        |56  47  83|

(a) Expanding along the second column (see Theorem X),

   |A|  =  a12·α12 + a22·α22 + a32·α32  =  0·α12 + 0·α22 + (-5)·α32

                        |1  2|
        =  -5·(-1)^{3+2}|3  4|  =  5(4 - 6)  =  -10

(b) Subtracting twice the second column from the third (see Theorem IX),

   |1   4  8|     |1   4  8-2·4|     |1   4  0|                  |1   4|
   |-2  1  5|  =  |-2  1  5-2·1|  =  |-2  1  3|  =  3·(-1)^{2+3}·|-3  2|  =  -3(14)  =  -42
   |-3  2  4|     |-3  2  4-2·2|     |-3  2  0|

(c) Subtracting three times the second row from the first and adding twice the second row to the third,

   |3   4   5|     |3-3(1)   4-3(2)  5-3(3) |     |0  -2  -4|      |-2  -4|
   |1   2   3|  =  |1        2       3      |  =  |1   2   3|  =  -| 9   2|  =  -(-4 + 36)  =  -32
   |-2  5  -4|     |-2+2(1)  5+2(2)  -4+2(3)|     |0   9   2|

(d) Subtracting the first column from the second and then proceeding as in (c),

   |2   3  -4|     |2    1  -4|     |2-2(1)     1   -4+4(1)  |     |0    1    0|
   |5  -6   3|  =  |5  -11   3|  =  |5-2(-11)  -11   3+4(-11)|  =  |27  -11  -41|
   |4   2  -3|     |4   -2  -3|     |4-2(-2)   -2   -3+4(-2) |     |8    -2  -11|

                |27  -41|
           =  - | 8  -11|  =  -(-297 + 328)  =  -31

(e) Factoring 14 from the first column, then using Theorem IX to reduce the elements in the remaining columns,

   |28  25  38|        |2  25  38|        |2  25-12(2)  38-20(2)|        |2   1  -2|
   |42  38  65|  =  14·|3  38  65|  =  14·|3  38-12(3)  65-20(3)|  =  14·|3   2   5|  =  14(55)  =  770
   |56  47  83|        |4  47  83|        |4  47-12(4)  83-20(4)|        |4  -1   3|

12. Show that p and q, given by (3.13) and (3.14), are either both even or both odd.

Since each row (column) index is found in either p or q but never in both,

   p + q  =  (1 + 2 + ... + n) + (1 + 2 + ... + n)  =  2·½n(n+1)  =  n(n+1)

Now p + q is even (either n or n+1 is even); hence, p and q are either both even or both odd. Thus,
(-1)ᵖ = (-1)ᵠ and only one need be computed.

                              [ 1   2   3   4   5]
                              [ 6   7   8   9  10]
13. For the matrix A = [a_ij] = [11  12  13  14  15], the algebraic complement of |A^{2,4}_{2,3}| is
                              [16  17  18  19  20]
                              [21  22  23  24  25]

                                      | 1   3   5|
   (-1)^{2+3+2+4} |A^{1,3,5}_{1,4,5}|  =  -|16  18  20|   (see Problem 12)
                                      |21  23  25|

   and the algebraic complement of |A^{1,3,5}_{1,4,5}| is  - | 7   9|
                                                             |12  14|

SUPPLEMENTARY PROBLEMS
14. Show that the permutation 12534 of the integers 1, 2, 3. 4, 5 is even, 24135 is odd, 41532 is even, 53142 is
odd, and 52314 is even.

15. List the complete set of permutations of 1, 2,3,4, taken together; show that half are even and half are odd.

16. Let the elements of the diagonal of a 5-square matrix A be a, b, c, d, e. Show, using (3.6), that when A is
diagonal, upper triangular, or lower triangular, then |A| = abcde.

17. Given 2-square matrices A = [· ·; · ·] and B = [· ·; · ·], show that AB ≠ BA ≠ A'B ≠ AB' ≠ A'B' ≠ B'A' but that the determinant of
each product is 4.

18. Evaluate, as in Problem 1,

2 -1 1 2 2-2 2 3
(a) 3 2 4 = 27 12
(b) 3 = 4 (c) -2 4 =
-1 3 2 3 4 -3 -4

                     |1  2  10|
19. (a) Evaluate |A| = |2  3   9|
                     |4  5  11|

(b) Denote by |B| the determinant obtained from |A| by multiplying the elements of its second column by 5.
Evaluate |B| to verify Theorem III.
to verify Theorem III.

(c) Denote by |C| the determinant obtained from |A| by interchanging its first and third columns. Evaluate
|C| to verify Theorem V.

               |1  2  7|   |1  2  3|
(d) Show that |A| = |2  3  5| + |2  3  4|, thus verifying Theorem VIII.
               |4  5  8|   |4  5  3|

                                    |1  2   7|
(e) Obtain from |A| the determinant |D| = |2  3   3| by subtracting three times the elements of the first
                                    |4  5  -1|
column from the corresponding elements of the third column. Evaluate |D| to verify Theorem IX.

(f) In |A| subtract twice the first row from the second and four times the first row from the third.
Evaluate the resulting determinant.

(g) In |A| multiply the first column by three and from it subtract the third column. Evaluate to show that
|A| has been tripled. Compare with (e). Do not confuse (e) and (g).

20. If -4 is an n-square matrix and 4 is a scalar, use (3.6) to show that U^ |


= /t^M |.

21. Prove: (a) If U |


= k. then \a\ = k = \a'\.
(b) If A is skew-Hermitian, then |
-4 |
is either real or is a pure imaginary number.

22. (a) Count the number of interchanges of adjacent rows (columns) necessary to obtain 6 from A in Theorem V
and thus prove the theorem.
(b) Same, for Theorem VI.

23. Prove Theorem VII. Hint: Interchange the identical rows and use Theorem V.

24. Prove: If any two rows (columns) of a square matrix A are proportional, then |
,4 |
= o.

25. Use Theorems VIII, III, and VII to prove Theorem IX.

26. Evaluate the determinants of Problem 18 as in Problem 11.

a b
c d
27. Use (3.6) to evaluate \A\ =
e /
; then check that \a\ = ".P/. Thus, if A = diag(A-^. A^). where
g h
A^, A^ are 2-square matrices, | A U4i|

-1/3 -2/3 -2/3


28. Show that the cofactor of each element of 2/3 1/3 -2/3 is that element.

2/3 -2/3 1/3

-4 -3 -3
29. Show that the cofactor of an element of any row of 1 1 is the corresponding element of the same
4 4 3
numbered column.

30. Prove: (a) If A is symmetric then <Xij= 0^j^ when i 4 j

(b) If A is n-square and skew-symmetric then aij = (-1)


"P when 4
Ot--^ i
j

31. For the matrix A of Problem 8 ;

(a) show that j


^ |
= 1

(b) form the matrix C 12 (^^22 32 and show that AC = [.


_'^13 ^23 Otgg
(c) explain why the result in (6) is known as soon as (a) is known.

be a a^
32. Multiply the columns of 6^ ca 52 respectively by a,b.c ; remove the common factor from each of
c^ c2 ab
be ab ca
the rows to show that A ab ca be
ca be ab

a^ a bed a^ a

1 a-^ 1

6^ 6 1 aed
33. Without evaluating show that
(a - b)(a - c)(a - d)(b - c)(i - d)(e - d).
e^ e I abd c^ c'^ e 1

d^ d 1 abc d d^ d 1

1 1 ... 1 1 1 ... 1 1

1 1...1 1 1 ... 1 1

34. Show that the ra-square determinant


1 1 0...1 1 1 ... 1 1 n-1
a\ - ("-!) (-1) (n~l).
\

1 1 1 ... 1

1 1 1 ...0 1 1 1 ... 1 1

re-i n-2
a^ 1
ra-i n-2
ar, 1
35. Prove: = S(i - 2)(i - as). (ai-a^H(a2- 03X02-04)... (02 -a^j 7i-i - "ni
n-i ra-2
o 1

na^+b-^ na^+b^ na^+bg O]^ 02 O3


36. Without expanding, show that nb^ + c^
nb-j^+e^ nb^+Cg (n + l)(rf--n + 1) 61 62 is
nc-i^+a.^ nCQ+a^ ncQ + a^ Ci Cg Cg

X a xb
37. Without expanding, show that the equation x-i-a x-c has as a root.
x+b x+c

+6 a a
a a+ b a
38. Prove ,n-i
b {na + b).

a a a +6
chapter 4

Evaluation of Determinants

PROCEDURES FOR EVALUATING determinants of orders two and three are found in Chapter 3. In
Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element +1
or -1 if the given determinant contains no such element, (b) to replace an element of a given
determinant with 0.

For determinants of higher orders, the general procedure is to replace, by repeated use of
Theorem IX, Chapter 3, the given determinant |A| by another |B| = |b_ij| having the property
that all elements, except one, in some row (column) are zero. If b_pq is this non-zero element
and β_pq is its cofactor,

    |A| = |B| = b_pq · β_pq = (-1)^(p+q) · b_pq · (minor of b_pq)

Then the minor of b_pq is treated in similar fashion and the process is continued until a determi-
nant of order two or three is obtained.

Example 1.

    | 2  3 -2 4|   |2+2(3)  3+2(-2)  -2+2(1)  4+2(2)|   | 8 -1 0  8|
    | 3 -2  1 2| = |3       -2        1        2    | = | 3 -2 1  2|
    | 3  2  3 4|   |3-3(3)  2-3(-2)   3-3(1)  4-3(2)|   |-6  8 0 -2|
    |-2  4  0 5|   |-2       4        0        5    |   |-2  4 0  5|

                 | 8 -1  8|           |8+8(-1)  -1  8+8(-1)|           | 0 -1  0|
    = (-1)^(2+3) |-6  8 -2| = (-1)^(2+3) |-6+8(8)   8  -2+8(8)| = (-1)^(2+3) |58  8 62|
                 |-2  4  5|           |-2+8(4)   4   5+8(4)|           |30  4 37|

                                 |58 62|
    = (-1)^(2+3) (-1)^(1+2) (-1) |30 37|  =  -286

See Problems 1-3
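The condensation of Example 1 can be checked with any exact determinant routine. A sketch using plain cofactor expansion (adequate for small orders) confirms the value:

```python
def det(m):
    # Laplace expansion along the first row; fine for small matrices.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = [[ 2,  3, -2, 4],
     [ 3, -2,  1, 2],
     [ 3,  2,  3, 4],
     [-2,  4,  0, 5]]
print(det(A))  # -286
```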

For determinants having elements of the type in Example 2 below, the following variation
may be used: divide the first row by one of its non-zero elements and proceed to obtain zero
elements in a row or column.

Example 2.

0.921 0.185 0.476 0.614 1 0.201 0.517 0.667 1 0.201 0.517 0.667
0.782 0.157 0.527 0.138 0.782 0.157 0.527 0.138 0.123 -0.384
0.921 0.921
0.872 0.484 0.637 0.799 0.872 0.484 0.637 0.799 0.309 0.196 0.217
0.312 0.555 0.841 0.448 0.312 0.555 0.841 0.448 0.492 0.680 0.240

0.123 -0.384 -0.320 1

0.921 0.309 0.196 0.217 0.921(-0.384) 0.309 0.196 0.217


0.492 0.680 0.240 0.492 0.680 0.240

1
0.309 0.265
0.921(-0.384) 0.309 0.265 0.217 0.921(-0.384)
0.492 0.757
0.492 0.757 0.240

0.921(-0.384)(0.104) = -0.037


CHAP. 4] EVALUATION OF DETERMINANTS 33

THE LAPLACE EXPANSION. The expansion of a determinant |A| of order n along a row (column) is
a special case of the Laplace expansion. Instead of selecting one row of |A|, let m rows num-
bered i_1, i_2, ..., i_m, when arranged in order of magnitude, be selected. From these m rows,

    p = n(n-1)...(n-m+1) / (1·2·...·m)

minors can be formed by making all possible selections of m columns from the n columns. Using these
minors and their algebraic complements, we have the Laplace expansion

(4.1)    |A| = Σ (-1)^s |A^(j_1,...,j_m)_(i_1,...,i_m)| · |A^(j_(m+1),...,j_n)_(i_(m+1),...,i_n)|

where s = i_1 + i_2 + ... + i_m + j_1 + j_2 + ... + j_m and the summation extends over the p selections of the
column indices taken m at a time.

Example 3.

    Evaluate |A| = | 2  3 -2 4|
                   | 3 -2  1 2|   using minors of the first two rows.
                   | 3  2  3 4|
                   |-2  4  0 5|

From (4.1),

    |A| = (-1)^(1+2+1+2) |A^(1,2)_(1,2)|·|A^(3,4)_(3,4)| + (-1)^(1+2+1+3) |A^(1,3)_(1,2)|·|A^(2,4)_(3,4)|
        + (-1)^(1+2+1+4) |A^(1,4)_(1,2)|·|A^(2,3)_(3,4)| + (-1)^(1+2+2+3) |A^(2,3)_(1,2)|·|A^(1,4)_(3,4)|
        + (-1)^(1+2+2+4) |A^(2,4)_(1,2)|·|A^(1,3)_(3,4)| + (-1)^(1+2+3+4) |A^(3,4)_(1,2)|·|A^(1,2)_(3,4)|

        = |2  3|·|3 4| - |2 -2|·|2 4| + |2 4|·|2 3| + | 3 -2|·| 3 4| - | 3 4|·| 3 3| + |-2 4|·| 3 2|
          |3 -2| |0 5|   |3  1| |4 5|   |3 2| |4 0|   |-2  1| |-2 5|   |-2 2| |-2 0|   | 1 2| |-2 4|

        = (-13)(15) - (8)(-6) + (-8)(-12) + (-1)(23) - (14)(6) + (-8)(16)

        = -286

See Problems 4-6
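The six terms of Example 3 can be generated programmatically. A sketch of (4.1) specialised to minors drawn from the first two rows of a 4-square matrix (the function names are ours):

```python
from itertools import combinations

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def laplace_two_rows(a):
    # Expand |a| (4-square) along minors from rows 1 and 2, per (4.1).
    total = 0
    for j1, j2 in combinations(range(4), 2):
        rest = [j for j in range(4) if j not in (j1, j2)]
        minor      = [[a[i][j] for j in (j1, j2)] for i in (0, 1)]
        complement = [[a[i][j] for j in rest]     for i in (2, 3)]
        sign = (-1) ** ((1 + 2) + (j1 + 1) + (j2 + 1))   # 1-based indices in the exponent
        total += sign * det2(minor) * det2(complement)
    return total

A = [[2, 3, -2, 4], [3, -2, 1, 2], [3, 2, 3, 4], [-2, 4, 0, 5]]
print(laplace_two_rows(A))  # -286
```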

DETERMINANT OF A PRODUCT. If A and B are n-square matrices, then

(4.2)    |AB| = |A|·|B|
See Problem 7
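A quick numerical check of (4.2), with small matrices of our own choosing:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 3]]
print(det2(matmul(A, B)), det2(A) * det2(B))  # both -12
```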

EXPANSION ALONG THE FIRST ROW AND COLUMN. If A = [a_ij] is n-square, then

(4.3)    |A| = a_11·α_11 - Σ_(i=2..n) Σ_(j=2..n) a_i1·a_1j·α̂_ij

where α_11 is the cofactor of a_11 and α̂_ij is the algebraic complement of the minor

    |a_11  a_1j|
    |a_i1  a_ij|    of A.

DERIVATIVE OF A DETERMINANT. Let the n-square matrix A = [a_ij] have as elements differen-
tiable functions of a variable x. Then

I. The derivative, d|A|/dx, of |A| with respect to x is the sum of the n determinants obtained
by replacing in all possible ways the elements of one row (column) of |A| by their derivatives
with respect to x.

Example 4.

x^ x^\ 3 2x 1 x2 x + 1 3 x+ 1 3

dx 1 2x-\ x"" = 1 2x-\ x^ + 2 3^^ + 1 2x-i x-"

X -2 X -2 X -2

5 + 4x - 12x^ 6x
See Problem 8
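Theorem I can be verified on a small illustrative matrix of our own (not the one of Example 4): take A(x) = [[x, 1], [2, x^2]], so |A| = x^3 - 2 and d|A|/dx = 3x^2. Replacing each row in turn by its derivative and summing the two determinants reproduces this:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

x = 2.0
dA_row1 = [[1.0, 0.0], [2.0, x * x]]   # row 1 replaced by its derivative (d/dx of [x, 1])
dA_row2 = [[x, 1.0], [0.0, 2 * x]]     # row 2 replaced by its derivative (d/dx of [2, x^2])
print(det2(dA_row1) + det2(dA_row2), 3 * x * x)  # both 12.0
```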

SOLVED PROBLEMS
1.
    | 2 3 -2  4|   | 2      3      -2        4     |   | 2  3 -2 4|
    | 7 4 -3 10| = | 7-2(2) 4-2(3) -3-2(-2) 10-2(4)| = | 3 -2  1 2|  =  -286     (See Example 1)
    | 3 2  3  4|   | 3      2       3        4     |   | 3  2  3 4|
    |-2 4  0  5|   |-2      4       0        5     |   |-2  4  0 5|

There are, of course, many other ways of obtaining an element +1 or -1; for example, subtract the first
column from the second, the fourth column from the second, the first row from the second, etc.

1 -1 2 1 -1 + 1 2-2(1) 10
2 3 2 -2 2 3 2+2 -2-2(2) 2 3 4-6
2 4 2 1 2 4 2 +2 1-2(2) 2 4 4-3
3 1 5 -3 3 1 5 + 3 -3-2(3) 3 18-9
3

4
1
4
4-3
8
-6

-9
=
3-2(4)

1-3(4)
4
4-2(4)

8-3(4)
4-3 -6-2(-3)

-9-3(-3)
-5
4
-11 -4
-4
4-3

-5 -4
3 = -72
-11 -4

1 + J 1+ 2j
3. Evaluate \A\ 1 - i 2-3i
1- 2i 2 + : i

Multiply the second row by l + j and the third row by l + 2J; then

1 +i 1+2j l +j 1+2; 1 +J 1 + 2J

(l+J)(l+2j)Ml = (-1+3j)|-^| 2 5-J 2 5-J 8- 14j 25 - 5j

5 -4 + 7i 1 -4 + 7j -10 + 2J 1 -4 + 7J -10 + 2i

1+i l + 2j
I -6 + 18
- 14i 25 - 5j

and ,4

4. Derive the Laplace expansion of |A| = |a_ij| of order n, using minors of order m < n.

Consider the m-square minor |A^(j_1,...,j_m)_(i_1,...,i_m)| of |A| in which the row and column indices are each arranged in
order of magnitude. By i_1 - 1 interchanges of adjacent rows of |A|, the row numbered i_1 can be brought
into the first row; by i_2 - 2 interchanges of adjacent rows, the row numbered i_2 can be brought into the second
row; ...; by i_m - m interchanges of adjacent rows, the row numbered i_m can be brought into the mth row. Thus,
after

    (i_1 - 1) + (i_2 - 2) + ... + (i_m - m)  =  i_1 + i_2 + ... + i_m - ½m(m+1)

interchanges of adjacent rows, the rows numbered i_1, i_2, ..., i_m occupy the position of the first m rows. Similarly,
after j_1 + j_2 + ... + j_m - ½m(m+1) interchanges of adjacent columns, the columns numbered j_1, j_2, ..., j_m
occupy the position of the first m columns. As a result of these interchanges of adjacent rows and adjacent
columns, the minor selected above occupies the upper left corner and its complement occupies the lower right
corner of the determinant; moreover, |A| has changed sign

    σ = i_1 + i_2 + ... + i_m + j_1 + j_2 + ... + j_m - m(m+1)

times, which is equivalent to s = i_1 + i_2 + ... + i_m + j_1 + j_2 + ... + j_m changes. Thus

    |A^(j_1,...,j_m)_(i_1,...,i_m)| · |A^(j_(m+1),...,j_n)_(i_(m+1),...,i_n)|   yields m!(n-m)! terms of (-1)^s |A|,

or

    (-1)^s |A^(j_1,...,j_m)_(i_1,...,i_m)| · |A^(j_(m+1),...,j_n)_(i_(m+1),...,i_n)|   yields m!(n-m)! terms of |A|.

Let i_1, i_2, ..., i_m be held fixed. From these rows,

    p = n(n-1)...(n-m+1)/(1·2·...·m) = n! / (m!(n-m)!)

different m-square minors may be selected. Each of these minors, when multiplied by its algebraic complement,
yields m!(n-m)! terms of |A|. Since, by their formation, there are no duplicate terms of |A| among these products,

    |A| = Σ (-1)^s |A^(j_1,...,j_m)_(i_1,...,i_m)| · |A^(j_(m+1),...,j_n)_(i_(m+1),...,i_n)|

where s = i_1 + ... + i_m + j_1 + ... + j_m and the summation extends over the p different selections
j_1, j_2, ..., j_m of the column indices.

12 3 4
5. Evaluate A
2 12 1 using minors of the first two columns.
11
3 4 12

1 21 |1 1 1 21 2 ll 2 11 13 4
(-1)^ + (-ir
2 2
+ (-If
1 I ll 3 41 1 ll 3 41 ll 1

(-3)(1) + (-2)(1) (5)(-l)

6. If A, B, and C are n-square matrices, prove

    |P| = |A  0|
          |C  B|  =  |A|·|B|

From the first n rows of |P| only one non-zero n-square minor, |A|, can be formed. Its algebraic com-
plement is |B|. Hence, by the Laplace expansion, |P| = |A|·|B|.

7. Prove |AB| = |A|·|B|.

Suppose A = [a_ij] and B = [b_ij] are n-square. Let C = [c_ij] = AB, so that c_ij = Σ_k a_ik·b_kj. From
Problem 6,

ail 012
"m

021 022 2n

"ni n2 "nn

-1 6ii 612 ..
hi

-1 621 ^22
.. ?,2

-1

To the (n+l)st column of \P\ add fen times the first column, 621 times the second column 6i times
the nth column; we have

On 012
"m Cii

"21 "22 2ra C21

"ni "n2 "nn Cm

-1 612
h,
-1 622 .. 62

Next, to the (n + 2)nd column of |P| add fei2 times the first column, 622 times the second column,
times the nth column. We have

in Cii C12

2n C21 C22

"nn Cm Cn2

^13 ^m
*23 b^n

Continuing this process, we obtain finally

    |P| = | A  C|
          |-I  0|

From the last n rows of |P| only one non-zero n-square minor, |-I| = (-1)^n, can be formed. Its algebraic
complement is (-1)^(1+2+...+n+(n+1)+...+2n) |C| = (-1)^(n(2n+1)) |C|. Hence,

    |P| = (-1)^n · (-1)^(n(2n+1)) |C| = |C|   and   |C| = |AB| = |A|·|B|.

Oil %2 %S
8. Let A = 021 '%2 '^a where Oif = aij(x), (i, j = 1,2,3), are differentiable functions of x. Then
031 032 033

    |A| = a_11·a_22·a_33 + a_12·a_23·a_31 + a_13·a_21·a_32 - a_11·a_23·a_32 - a_12·a_21·a_33 - a_13·a_22·a_31

and, denoting da_ij/dx by a'_ij,

    d|A|/dx = a'_11·a_22·a_33 + a_11·a'_22·a_33 + a_11·a_22·a'_33
            + a'_12·a_23·a_31 + a_12·a'_23·a_31 + a_12·a_23·a'_31
            + a'_13·a_21·a_32 + a_13·a'_21·a_32 + a_13·a_21·a'_32
            - a'_11·a_23·a_32 - a_11·a'_23·a_32 - a_11·a_23·a'_32
            - a'_12·a_21·a_33 - a_12·a'_21·a_33 - a_12·a_21·a'_33
            - a'_13·a_22·a_31 - a_13·a'_22·a_31 - a_13·a_22·a'_31

            = a'_11·α_11 + a'_12·α_12 + a'_13·α_13 + a'_21·α_21 + a'_22·α_22 + a'_23·α_23
              + a'_31·α_31 + a'_32·α_32 + a'_33·α_33

              |a'_11 a'_12 a'_13|   |a_11  a_12  a_13 |   |a_11  a_12  a_13 |
            = |a_21  a_22  a_23 | + |a'_21 a'_22 a'_23| + |a_21  a_22  a_23 |
              |a_31  a_32  a_33 |   |a_31  a_32  a_33 |   |a'_31 a'_32 a'_33|

by Problem 10, Chapter 3.

SUPPLEMENTARY PROBLEMS
9. Evaluate:

3 5 7 2 1 -2 -4
3
2 4 11 2 -1 4 -3
-304
(o) 156
-2000 ic)
2 3 -4 -5
113 4 3 -4 5 6

1 -2 3 -2 -2
1116 2 -1 1 3 2
{b)
2 4 16 = 41 (d) 1 1 2 1 1 118
4 12 9 -4 -3 -2 -5
1
2 4 2 7
3 -2 2 2 -2

10. If A is n-square, show that |Ā'A| is real and non-negative.

11. Evaluate the determinant of Problem 9(o) using minors from the first two rows; also using minors from the
first two columns.

12. (o) Let aJ"' "^ and B = r ''


'^

Use |^B|
4B|
[^-02 ojj

= |.4|"|B|
i.4|-|B| to show
1^-62

that
bjj

(oi
(<
0000
+ 02)(6i+62) = (0161-0262)
Q
+ (0261+0163)
O
.

+ t03 a2+iaA 61 + 163 62 + 164


(6) Let A tOj^ I and B =
-Og + ja^ Oi-iogJ [-62 + 864 6i-j6gJ
Use \AB inii.
\ts\ to
222222,2,2
express (01 + 02+03 + 04X61+62+63+64) as a sum of four squares.

2 1

3 2 1
13. Evaluate using minors from the first three rows. Ans. -720
4 3 2 1

5 4 3 2 1

6 5 4 3 2 1

112 12 1

111
14. Evaluate 110 using minors from the first two columns. .4ns. 2

112
12 2 11

15. If A_1, A_2, ..., A_s are square matrices, use the Laplace expansion to prove

    |diag(A_1, A_2, ..., A_s)| = |A_1|·|A_2|·...·|A_s|
a^ a^ Og a^

*1 *2 ^3 *4
16. Expand using minors of the first two rows and show that
a-^ a^ flg a^

*1 ^2 ^3 *4

a^ a2

K 62 60 6.. 62 63

A
17. Use the Laplace expansion to show that the n-square determinant , where is fe-square, is zero when
B C
A > 2"-

18. In |A| = a_11·α_11 + a_12·α_12 + a_13·α_13 + a_14·α_14, expand each of the cofactors α_12, α_13, α_14 along its first col-
umn to show

    |A| = a_11·α_11 - Σ_(i=2..4) Σ_(j=2..4) a_i1·a_1j·α̂_ij

where α̂_ij is the algebraic complement of the minor

    |a_11  a_1j|
    |a_i1  a_ij|    of |A|.

19. If α_ij denotes the cofactor of a_ij in the n-square matrix A = [a_ij], show that the bordered determinant

    |a_11  a_12  ...  a_1n  p_1|
    |a_21  a_22  ...  a_2n  p_2|
    | ...                      |   =   - Σ_(i=1..n) Σ_(j=1..n) p_i·q_j·α_ij
    |a_n1  a_n2  ...  a_nn  p_n|
    |q_1   q_2   ...  q_n   0  |

Hint. Use (4.3).

20. For each of the determinants |.4| . find the derivative.

X I 2 a: -1 x-l 1

() (b) x^ + 4 3
2*: 1 ;<: (c) a; a: 2a: +5
2x Zx + l
3-2 x^+l x+l x^

-4ns. (a) 2a: + 9a:^- 8a;=^ , (6) 1 - 6a: + 21a:^ + 12a;^ - 15a:*, (c) 6a:^ - 5*"^ - 28x^ + 9a:^ + 20a; - 2

21. Prove : If A and B are real n-square matrices with A non-singular and if ff = 4 + iS is Hermitian, then
. .

chapter 5

Equivalence

THE RANK OF A MATRIX. A non-zero matrix A is said to have rank r if at least one of its r-square
minors is different from zero while every (r+l)-square minor, if any, is zero. A zero matrix is
said to have rank 0.

Example 1. The rank of

    A = |1 2 3|
        |2 3 4|        is r = 2, since  |1 2|
        |3 5 7|                         |2 3|  =  -1 ≠ 0   while   |A| = 0.

See Problem 1.
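The definition can be applied directly, if inefficiently, by searching for the largest non-vanishing minor. A sketch (the function names are ours) confirming that the matrix of Example 1 has rank 2:

```python
from itertools import combinations

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def rank_by_minors(a):
    # Largest r for which some r-square minor is non-zero.
    m, n = len(a), len(a[0])
    for r in range(min(m, n), 0, -1):
        for rows in combinations(range(m), r):
            for cols in combinations(range(n), r):
                if det([[a[i][j] for j in cols] for i in rows]) != 0:
                    return r
    return 0

print(rank_by_minors([[1, 2, 3], [2, 3, 4], [3, 5, 7]]))  # 2
```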

An n-square matrix A is called non-singular if its rank r = n, that is, if |A| ≠ 0. Otherwise,
A is called singular. The matrix of Example 1 is singular.

From |AB| = |A|·|B| follows

I. The product of two or more non-singular re-square matrices is non-singular; the prod-
uct of two or more re-square matrices is singular if at least one of the matrices is singular.

ELEMENTARY TRANSFORMATIONS. The following operations, called elementary transformations,


on a matrix do not change either its order or its rank:

(1) The interchange of the ith and jth rows, denoted by H_ij;
The interchange of the ith and jth columns, denoted by K_ij.

(2) The multiplication of every element of the ith row by a non-zero scalar k, denoted by H^(k);
The multiplication of every element of the ith column by a non-zero scalar k, denoted by Ki(k).

(3) The addition to the elements of the ith row of k, a scalar, times the corresponding elements
of the jth row, denoted by H_ij(k);

The addition to the elements of the ith column of k, a scalar, times the corresponding ele-
ments of the jth column, denoted by K_ij(k).

The transformations H are called elementary row transfonnations; the transformations K are
called elementary column transformations.

The elementary transformations, being precisely those performed on the rows (columns) of a
determinant, need no elaboration. It is clear that an elementary transformation cannot alter the
order of a matrix. In Problem 2, it is shown that an elementary transformation does not alter its
rank.
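The three row transformations are easy to state as operations on a list-of-rows representation. A minimal sketch (the function names are ours, and indices here are 0-based):

```python
def H_interchange(a, i, j):        # H_ij : interchange rows i and j
    a[i], a[j] = a[j], a[i]

def H_scale(a, i, k):              # H_i(k): multiply row i by the non-zero scalar k
    a[i] = [k * x for x in a[i]]

def H_add(a, i, j, k):             # H_ij(k): add k times row j to row i
    a[i] = [x + k * y for x, y in zip(a[i], a[j])]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
H_add(A, 1, 0, -2)                 # the H21(-2) of Example 2
print(A[1])                        # [2, 1, 0]
```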

THE INVERSE OF AN ELEMENTARY TRANSFORMATION. The inverse of an elementary transforma-


tion is an operation which undoes the effect of the elementary transformation; that is, after A
has been subjected to one of the elementary transformations and then the resulting matrix has
been subjected to the inverse of that elementary transformation, the final result is the matrix A.


40 EQUIVALENCE [CHAP. 5

Example 2. Let A = |1 2 3|
                   |4 5 6|
                   |7 8 9|

The effect of the elementary row transformation H21(-2) is to produce B = |1 2 3|
                                                                          |2 1 0|
                                                                          |7 8 9|
The effect of the elementary row transformation H21(+2) on B is to produce A again.
Thus, H21(-2) and H21(+2) are inverse elementary row transformations.

The inverse elementary transformations are

(1')    H_ij^(-1) = H_ij              K_ij^(-1) = K_ij

(2')    H_i^(-1)(k) = H_i(1/k)        K_i^(-1)(k) = K_i(1/k)

(3')    H_ij^(-1)(k) = H_ij(-k)       K_ij^(-1)(k) = K_ij(-k)

We have
II. The inverse of an elementary transformation is an elementary transformation of the
same type.

EQUIVALENT MATRICES. Two matrices A and B are called equivalent, A'^B, if one can be obtained
from the other by a sequence of elementary transformations.

Equivalent matrices have the same order and the same rank.

Example 3. Applying in turn the elementary transformations H21(-2), H31(1), H32(-1),

    A = | 1  2 -1  4|   | 1  2 -1  4|   |1 2 -1  4|   |1 2 -1  4|
        | 2  4  3  5| ~ | 0  0  5 -3| ~ |0 0  5 -3| ~ |0 0  5 -3|  =  B
        |-1 -2  6 -7|   |-1 -2  6 -7|   |0 0  5 -3|   |0 0  0  0|

Since all 3-square minors of B are zero while |1 -1|
                                              |0  5| ≠ 0, the rank of B is 2; hence,
the rank of A is 2. This procedure of obtaining from A an equivalent matrix B from which the
rank is evident by inspection is to be compared with that of computing the various minors of A.

See Problem 3.

ROW EQUIVALENCE. If a matrix A is reduced to B by the use of elementary row transformations a-


lone, B is said to be row equivalent to A and conversely. The matrices A and B of Example 3
are row equivalent.

Any non-zero matrix A of rank r is row equivalent to a canonical matrix C in which

(a) one or more elements of each of the first r rows are non-zero while all other rows have
only zero elements.

(b) in the ith row, (i = 1, 2, ..., r), the first non-zero element is 1; let the column in which
this element stands be numbered j_i,

(c) j_1 < j_2 < ... < j_r,

(d) the only non-zero element in the column numbered j_i, (i = 1, 2, ..., r), is the element 1 of
the ith row.

To reduce A to C, suppose j_1 is the number of the first non-zero column of A.

(i1) If a_(1,j_1) ≠ 0, use H_1(1/a_(1,j_1)) to reduce it to 1, when necessary.
(i2) If a_(1,j_1) = 0 but a_(p,j_1) ≠ 0, use H_1p and proceed as in (i1).

(ii) Use row transformations of type (3) with appropriate multiples of the first row to obtain
zeroes elsewhere in the j_1st column.

If non-zero elements of the resulting matrix B occur only in the first row, B = C. Other-
wise, suppose j_2 is the number of the first column in which this does not occur. If b_(2,j_2) ≠ 0,
use H_2(1/b_(2,j_2)) as in (i1); if b_(2,j_2) = 0 but b_(q,j_2) ≠ 0, use H_2q and proceed as in (i1). Then, as
in (ii), clear the j_2nd column of all other non-zero elements.

If non-zero elements of the resulting matrix occur only in the first two rows, we have C.
Otherwise, the procedure is repeated until C is reached.

Example 4. The sequence of row transformations H21(-2), H31(1); H2(1/5); H12(1), H32(-5) applied
to A of Example 3 yields

    | 1  2 -1  4|   |1 2 -1  4|   |1 2 -1   4  |   |1 2 0 17/5|
    | 2  4  3  5| ~ |0 0  5 -3| ~ |0 0  1 -3/5| ~ |0 0 1 -3/5|
    |-1 -2  6 -7|   |0 0  5 -3|   |0 0  5  -3 |   |0 0 0   0 |

having the properties (a)-(d).

See Problem 4.
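The reduction (i1)-(ii) is precisely Gauss-Jordan elimination, so it is easy to mechanise. A sketch using exact rational arithmetic, applied to the matrix of Example 3 (the function name is ours):

```python
from fractions import Fraction

def row_canonical(a):
    # Reduce a to the row-equivalent canonical form C with properties (a)-(d).
    a = [[Fraction(x) for x in row] for row in a]
    rows, cols = len(a), len(a[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if a[i][c] != 0), None)
        if pivot is None:
            continue
        a[r], a[pivot] = a[pivot], a[r]
        a[r] = [x / a[r][c] for x in a[r]]          # first non-zero element becomes 1
        for i in range(rows):
            if i != r and a[i][c] != 0:             # clear the rest of the column
                a[i] = [x - a[i][c] * y for x, y in zip(a[i], a[r])]
        r += 1
    return a

C = row_canonical([[1, 2, -1, 4], [2, 4, 3, 5], [-1, -2, 6, -7]])
for row in C:
    print(row)   # rows [1, 2, 0, 17/5], [0, 0, 1, -3/5], [0, 0, 0, 0]
```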

THE NORMAL FORM OF A MATRIX. By means of elementary transformations any matrix A of rank
r > 0 can be reduced to one of the forms

(5.1)        I_r,    [I_r  0],    |I_r|,    |I_r  0|
                                  | 0 |     | 0   0|

called its normal form. A zero matrix is its own normal form.

Since both row and column transformations may be used here, the element 1 of the first row
obtained in the section above can be moved into the first column. Then both the first row and
first column can be cleared of other non-zero elements. Similarly, the element 1 of the second
row can be brought into the second column, and so on.

For example, the sequence H21(-2), H31(1); K21(-2), K31(1), K41(-4); K23, K2(1/5); H32(-1),
K42(3) applied to A of Example 3 yields |I_2  0|, the normal form.
                                        | 0   0|

See Problem 5.

ELEMENTARY MATRICES. The matrix which results when an elementary row (column) transforma-
tion is applied to the identity matrix /^ is called an elementaryrow (column) matrix. Here, an
elementary matrix will be denoted by the symbol introduced to denote the elementary transforma-
tion which produces the matrix.
Example 5. Examples of elementary matrices obtained from I_3 = |1 0 0|  are
                                                               |0 1 0|
                                                               |0 0 1|

    H_13 = |0 0 1|            H_3(k) = |1 0 0|             H_23(k) = |1 0 0|
           |0 1 0| = K_13,             |0 1 0| = K_3(k),             |0 1 k| = K_32(k)
           |1 0 0|                     |0 0 k|                       |0 0 1|

Every elementary matrix is non-singular. (Why?)


The effect of applying an elementary transformation to an mxn matrix A can be produced by
multiplying A by an elementary matrix.

To effect a given elementary row transformation on A of order mxn, apply the transformation
to Ijn to form the corresponding elementary matrix H and multiply A on the left by H.

To effect a given elementary column transformation on A, apply the transformation to I_n to
form the corresponding elementary matrix K and multiply A on the right by K.
Example 6. When A = |1 2 3|,
                    |4 5 6|
                    |7 8 9|

    H_13·A = |0 0 1| |1 2 3|   |7 8 9|
             |0 1 0| |4 5 6| = |4 5 6|   interchanges the first and third rows of A;
             |1 0 0| |7 8 9|   |1 2 3|

    A·K_13(2) = |1 2 3| |1 0 0|   | 7 2 3|
                |4 5 6| |0 1 0| = |16 5 6|   adds to the first column of A two times
                |7 8 9| |2 0 1|   |25 8 9|
the third column.
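Example 6 can be reproduced directly: build the elementary matrix from I_3 and multiply. A sketch:

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

I3  = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H13 = [I3[2], I3[1], I3[0]]                   # interchange rows 1 and 3 of I3
A   = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(matmul(H13, A))    # [[7, 8, 9], [4, 5, 6], [1, 2, 3]]

K13_2 = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]     # K13(2) applied to I3
print(matmul(A, K13_2))  # [[7, 2, 3], [16, 5, 6], [25, 8, 9]]
```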

LET A AND B BE EQUIVALENT MATRICES. Let the elementary row and column matrices corre-
sponding to the elementary row and column transformations which reduce A to B be designated
as H_1, H_2, ..., H_s; K_1, K_2, ..., K_t, where H_1 is the first row transformation, H_2 is the second, ...;
K_1 is the first column transformation, K_2 is the second, .... Then

(5.2)    H_s ··· H_2·H_1·A·K_1·K_2 ··· K_t = PAQ = B

where

(5.3)    P = H_s ··· H_2·H_1   and   Q = K_1·K_2 ··· K_t

We have
III. Two matrices A and B are equivalent if and only if there exist non-singular matrices
P and Q defined in (5.3) such that PAQ = B.
"1 2 -1 2~|

Example 7. When A 2 5-23, ^3i(-l) //2i(-2) -^ ^2i(-2) -Ksid) .K4i(-2) -K^sd) .Ks(i)
_1 2 1
2J

1-200 ~1 1 0~ 1
2" "1000" "1000"
["100"! r 1 o"j

0-2 10 10 10 10 1 10
1 1
10 10 1 10 5
[j^i ij L iJ
1 _0 1 1_ _0 1_ _0 1_

1-25-4
o"l
10 1
[1
= PAQ 10
[:;=} 2
1 oj
1

Since any matrix is equivalent to its normal form, we have

IV. If A is an n-square non-singular matrix, there exist non-singular matrices P and Q
as defined in (5.3) such that PAQ = I_n.

See Problem 6.

INVERSE OF A PRODUCT OF ELEMENTARY MATRICES. Let

    P = H_s ··· H_2·H_1   and   Q = K_1·K_2 ··· K_t

as in (5.3). Since each H and K has an inverse and since the inverse of a product is the product
in reverse order of the inverses of the factors,

(5.4)    P^(-1) = H_1^(-1)·H_2^(-1) ··· H_s^(-1)   and   Q^(-1) = K_t^(-1) ··· K_2^(-1)·K_1^(-1)

Let A be an n-square non-singular matrix and let P and Q defined above be such that PAQ
= I_n. Then

(5.5)    A = P^(-1)(PAQ)Q^(-1) = P^(-1)·I_n·Q^(-1) = P^(-1)·Q^(-1)
We have proved

V. Every non-singular matrix can be expressed as a product of elementary matrices.

See Problem 7.
From this follow

VI. If A is non-singular, the rank of AB (also of BA) is that of B.

VII. If P and Q are non-singular, the rank of PAQ is that of A.

CANONICAL SETS UNDER EQUIVALENCE. In Problem 8, we prove


VIII. Two mxn matrices A and B are equivalent if and only if they have the same rank.
A set of m×n matrices is called a canonical set under equivalence if every m×n matrix is
equivalent to one and only one matrix of the set. Such a canonical set is given by (5.1) as r
ranges over the values 1, 2, ..., m or 1, 2, ..., n, whichever is the smaller.
See Problem 9.

RANK OF A PRODUCT. Let A be an m×p matrix of rank r. By Theorem III there exist non-singular
matrices P and Q such that

    PAQ = N = |I_r  0|
              | 0   0|

Then A = P^(-1)·N·Q^(-1). Let B be a p×n matrix and consider the rank of

(5.6)    AB = P^(-1)·N·Q^(-1)·B

By Theorem VI, the rank of AB is that of N·Q^(-1)·B. Now the rows of N·Q^(-1)·B consist of the first r
rows of Q^(-1)·B and m - r rows of zeroes. Hence, the rank of AB cannot exceed r, the rank of A.
Similarly, the rank of AB cannot exceed that of B. We have proved

IX. The rank of the product of two matrices cannot exceed the rank of either factor.

Suppose AB = 0; then, from (5.6), N·Q^(-1)·B = 0. This requires that the first r rows of Q^(-1)·B
be zeroes while the remaining rows may be arbitrary. Thus, the rank of Q^(-1)·B, and hence the
rank of B, cannot exceed p - r. We have proved

X. If the m×p matrix A is of rank r and if the p×n matrix B is such that AB = 0, the
rank of B cannot exceed p - r.
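Theorem IX is easy to test numerically. The sketch below (function name ours) computes ranks by Gaussian elimination over the rationals:

```python
from fractions import Fraction

def rank(a):
    a = [[Fraction(x) for x in row] for row in a]
    r = 0
    for c in range(len(a[0])):
        pivot = next((i for i in range(r, len(a)) if a[i][c] != 0), None)
        if pivot is None:
            continue
        a[r], a[pivot] = a[pivot], a[r]
        for i in range(r + 1, len(a)):
            f = a[i][c] / a[r][c]
            a[i] = [x - f * y for x, y in zip(a[i], a[r])]
        r += 1
    return r

A  = [[1, 2], [2, 4]]                 # rank 1
B  = [[1, 1], [0, 1]]                 # non-singular, rank 2
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(rank(A), rank(B), rank(AB))     # 1 2 1 -- rank(AB) <= min(rank A, rank B)
```

Since B is non-singular here, rank(AB) = rank(A), in agreement with Theorem VI as well.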

SOLVED PROBLEMS
1 2 3] 1 2
1. (a) The rank of A is 2 since ^ and there are no minors of order three.
-4 sj -4

(b) The rank of A = |1 2 3|
                    |1 2 5|   is 2 since |A| = 0 and |2 3|
                    |2 4 8|                          |2 5| ≠ 0.

(c) The rank of A = |0 2 3|
                    |0 4 6|   is 1 since |A| = 0 and each of the nine 2-square minors is 0, but not
                    |0 6 9|
every element is 0.

2. Show that the elementary transformations do not alter the rank of a matrix.

We shall consider only row transformations here and leave consideration of the column transformations

as an exercise. Let the rank of the m×n matrix A be r, so that every (r+1)-square minor of A, if any, is zero.
Let B be the matrix obtained from A by a row transformation. Denote by |R| any (r+1)-square minor of A and
by |S| the (r+1)-square minor of B having the same position as |R|.

Let the row transformation be H_ij. Its effect on |R| is either (i) to leave it unchanged, (ii) to interchange
two of its rows, or (iii) to interchange one of its rows with a row not of |R|. In the case (i), |S| = |R| = 0;
in the case (ii), |S| = -|R| = 0; in the case (iii), |S| is, except possibly for sign, another (r+1)-square minor
of A and, hence, is 0.

Let the row transformation be H_i(k). Its effect on |R| is either (i) to leave it unchanged or (ii) to multi-
ply one of its rows by k. Then, respectively, |S| = |R| = 0 or |S| = k|R| = 0.

Let the row transformation be H_ij(k). Its effect on |R| is either (i) to leave it unchanged, (ii) to increase
one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of |R|.
In the cases (i) and (ii), |S| = |R| = 0; in the case (iii), |S| = |R| + k·(another (r+1)-square minor of A) =
0 + k·0 = 0.

Thus, an elementary row transformation cannot raise the rank of a matrix. On the other hand, it cannot
lower the rank tor, if it did, the inverse transformation would have to raise it. Hence, an elementary row
transformation does not alter the rank of a matrix.

3. For each of the matrices A, obtain an equivalent matrix B and, from it by inspection, determine the
rank of A.
"1 3" "1 3~ '1 3"
2 1 2 3 2 2
"^
(a) A = 2 1 3
'-^-/
-3 -3 1 1
-""J
1 1

_3 2 1_ -4 -8 _0 1 2_ _0 1_

The transformations used were //2i(-2). ffsi(-3); H^(-l/3), HQ(-l/i); Hg^i-l). The rank is 3.

"1 0" 0" "1 0'


2 3 1 2 3 2 3 2 3 0" 1 2 3
2 4 3 2 -3 2 -4 -8 3 4 -8 3 r^ -4 -8 3
(b) A ''V. ''\j '~^
S. The rank is 3.
3 2 1 3 4 -8 3 -3 2 -3 2 -3 2

_6 8 7 5_ -44 -11 5_ p -4 -11 5_ -3 2

1+i i 1 1
'~^
(c) A i + 2j i 1 + 2j
'-Vy
i 1 + 2J = B. The rank is 2.
1 + 2j l+i_ _1
i 1 + 2J_

Note. The equivalent matrices 5 obtained here are not unique. In particular, since in (o) and (i)
only
row transformations were used, the reader may obtain others by using only column
transformations.
When the elements are rational numbers, there generally is no gain in mixing row and column
transformations.

4. Obtain the canonical matrix C row equivalent to each of the given matrices A.

13 113 113 2 10 4
12 6 12 6 13-2 13-2
(a) A = '->-'

2 3 9 2 3 9 13-2
113 13 13 -2_ GOOD
1 2 -2 3 f 1 2 -2 3 l" 1 -2 3 3" '10 3 7~
10 1

(b) A =
1 3 -2 3 --v^
1 -1 '^ 1 -1 ^\y
10 -1 --O
10 0-1
2 4 -3 6 4 1 2 1 2 10 2 10 2
.1 1 -1 4 6_ p -1 1 1 5_ 1 1 4_ pool 2_ 1 2

5. Reduce each of the following to normal form.

1 2 -1 1 2 -1 "l o"| fl o" 1 'l o' 'l o'

(a) A 3 4 1 2
^\y
-2 1 5 0-2 1 5 p- 1-2 5 1-2 10 'V.
10
2 3 2 5 7 2 3 7 2 sj Lo 2 7 3_ 11 P U -7_ p 1 0_

= Us o]

The elementary transformations are:

^2i(-3). 3i{2); K2i(-2), K4i(l); Kgg; Hg^-^y. ^32(2). /f42(~5); /fsd/ll), ^43(7)

[0234"! P 354 1 3 5 "1000" "1000' "1000"


(6) A 23 5 4^-0 2 3 4
'-^
2 3 'Xy
13 4
^\j
10 <^
10
4 8 13 I2J 8 13 12 2 8 13 2 3 4 13
|_4 i

4. p 1 0_ p 0_

The elementary transformations are:

^12: Ki(2); H3i(-2); KsiC-S), X3i(-5), K4i(-4); KsCi); /fs2(-3), K42(-4); ftjgC- 1)

1 2 3-2
6. Reduce A 2-2 1 3 to normal form A' and compute the matrices P^ and (^^ such that P-i_AQ^ = A^.
3 04 1
Since A is 3x4, we shall work with the array Each row transformation is performed on a row of
A U
seven elements and each column transformation is performed on a column of seven elements.

1 -3 2 1 -2 -3 2
10 1

1 1 1

1 1
12 3 1 2 3-2 1 1
2-2 1 10 -6 -5 7 1 -6 -5 7 -2 -5 7 -2
3 4 1 -6 -5 7 1 -6 -5 7 -3 0-1-1
1 1/3 -3 2 1 1/3 -4/3 -1/3
-1/6 0-1/6 -5/6 7/6
10
1
10
or
1

1-57-210
10 1

N Pi
1 0-210
0-1-11 0-1-1 1

1 1/3 -4/3 -1/3


1
-1/6 -5/6
10
7/6
Thus, Pi = -2
-1 -1
1

1
10 and PiAQ-i = 10 N.

1 3 3
7. Express A = 1 4 3 as a product of elementary matrices.
.1 3 4_

The elementary transformations //2i(-l). ^siC-D; ^2i(-3). 'f3i(-3) reduce A to 4 , that is, [see (5.2)]

/ = H^-H^-A-K^-K^ = ff3i(-l)-^2i(-l)-'4-X2i(-3)-?f3:i.(-3)

fl 0"1 fl 0"j fl 3"| fl 3 O'

Prom (5.5), Hl-H'^-K^-Kt = 110 010 010 010

www.TheSolutionManual.com
_0 ij [l ij [p ij _0 1_

8. Prove: Two mxn matrices A and B are equivalent if and only if they have the same rank.
If A and B have the same rank, both are equivalent to the same matrix (5.1) and are therefore equivalent to each
other. Conversely, if A and B are equivalent, there exist non-singular matrices P and Q such that B = PAQ.
By Theorem VII, A and B have the same rank.

9. A canonical set for non-zero matrices of order 3 is

    I_3,    |I_2  0|,    |I_1  0|
            | 0   0|     | 0   0|

A canonical set for non-zero 3×4 matrices is

    [I_3  0],    |I_2  0|,    |I_1  0|
                 | 0   0|     | 0   0|

10. If from a square matrix A of order n and rank r_A a submatrix B consisting of s rows (columns) of A
is selected, then the rank r_B of B is equal to or greater than r_A + s - n.

The normal form of A has n - r_A rows whose elements are zeroes and the normal form of B has s - r_B rows
whose elements are zeroes. Clearly

    n - r_A ≥ s - r_B

from which r_B ≥ r_A + s - n follows, as required.


SUPPLEMENTARY PROBLEMS
4 6
2 3 2
2 1 2 12-23 5
5
6 7
7
8

11. Find the rank of (a) (b)


3 2 2
(c)
2 5-46 (d)
3 5 1 6 7 8 9
4 3 4 -1 -3 2 -2
3 4 5 10 11 12 13 14
7 4 6 2 4-16 15 16 17 18 19

Ans. (a) 2, (b) 3, (c) 4. (d) 2

12. Show by considering minors that A, A', Ā, and Ā' have the same rank.

13. Show that the canonical matrix C, row equivalent to a given matrix A, is uniquely determined by A.

14. Find the canonical matrix row equivalent to each of the following:

12 3 4 1 1/9" 1 1 10
Tl 2-3"]^ri 0-7]

(a) (b) 3 4 12 'X^
10 1/9 (c) 2 1 10
[2 5 -4j [o 1 2j
_4 3 1 2 1 11/9 3 -3 12
1 2 10 1

3 2 10-1 1 -1 1 1 1 10 0-12
(d) 2 -1 1 1 (e)
1 -1 2 3 1
^V/
10 1

2 -2 1 2 1 2
5 6
1 1 1 -3 3
1 3

15. Write the normal form of each of the matrices of Problem 14.

Ans. (a) [I, 0], (b).(c) [/g o] (d) P^ (e) P' jl


j]

12 3 4
16. Let A = 2 3 4 1

3 4 12
(a) From /a form Z/^^. /^O). //i3(-4) and check that each HA effects the corresponding row transformation.
(6) Prom U form K^^. Ks(-l). K^^O) and show that each AK effects the corresponding column transformation.
(c) Write the inverses H^l, H^ {?,), H^lc-i) of the elementary matrices of (a). Check that for each H,H-H~^^l
(d) Write the inverses K^l. ifg^C-l).
K^lo) of the elementary matricesof (6) . Check that for each K. KK~^ = I
"0 0'
3 "0 1 4"|
(e) Compute B = ^12 ft,(3) -//isC- 4) 1 -4 and C = H^^(-4:)-H^(3)-Hi 1/3 0.
1 ij
(/) Show that BC ^ CB = I .

17. (a) Show that /?',-= H-. . K-(k) = H^(k). and K^,(/t) = H^-(k)
(b) Show that if /? is a product of elementary column matrices. R'is the product in reverse order of the same
elementary row matrices.

18. Prove: (a) AB and BA are non-singular if .4 and B are non-singularra -square matrices.
(b) AB and BA are singular if at least one of the n-square matrices A and B is singular.

19. UP and Q are non-singular, show that A,PA,AQ, and PAQ have the same rank.
Hint. Express P and Q as products of elementary matrices.

             1 3 6 -1
20. Reduce B = 1 4 5  1 to normal form N and compute the matrices P2 and Q2 such that P2·B·Q2 = N.
             1 5 4  3

48 EQUIVALENCE [CHAP. 5

21. (a) Show that the number of matrices in a canonical set of n-square matrices under equivalence is n+1.
(b) Show that the number of matrices in a canonical set of mxn matrices under equivalence is the smaller of m+1 and n+1.

            1 2 4  4
22. Given A = 1 3 2  6 of rank 2, find a 4-square matrix B ≠ 0 such that AB = 0.
            2 5 6 10

Hint. Follow the proof of Theorem X and take

            0 0 0 0
    Q^-1·B = 0 0 0 0
            a b c d
            e f g h

where a, b, ..., h are arbitrary.

23. The matrix A of Problem 6 and the matrix B of Problem 20 are equivalent. Find P and Q such that B = PAQ.

24. If the mxn matrices A and B are of rank r1 and r2 respectively, show that the rank of A+B cannot exceed r1 + r2.
25. Let A be an arbitrary n-square matrix and B be an n-square elementary matrix. By considering each of the six different types of matrix B, show that |AB| = |A|·|B|.

26. Let A and B be n-square matrices. (a) If at least one is singular, show that |AB| = |A|·|B|; (b) if both are non-singular, use (5.5) and Problem 25 to show that |AB| = |A|·|B|.

27. Show that equivalence of matrices is an equivalence relation.

28. Prove: The row equivalent canonical form of a non-singular matrix A is I and conversely.

29. Prove: Not every matrix A can be reduced to normal form by row transformations alone.
Hint. Exhibit a matrix which cannot be so reduced.

30. Show how to effect on any matrix A the transformation Hij by using a succession of row transformations of types (2) and (3).

31. Prove: If A is an mxn matrix, (m ≤ n), of rank m, then AA' is a non-singular symmetric matrix. State the theorem when the rank of A is < m.
chapter 6

The Adjoint of a Square Matrix

THE ADJOINT. Let A = [a_ij] be an n-square matrix and let α_ij be the cofactor of a_ij; then by definition

                  α11  α21  ...  αn1
(6.1)   adj A  =  α12  α22  ...  αn2
                  ..................
                  α1n  α2n  ...  αnn

Note carefully that the cofactors of the elements of the ith row (column) of A are the elements of the ith column (row) of adj A.

                              1 2 3
Example 1. For the matrix A = 2 3 2
                              3 3 4

α11 = 6, α12 = -2, α13 = -3, α21 = 1, α22 = -5, α23 = 3, α31 = -5, α32 = 4, α33 = -1

                  6  1 -5
and   adj A  =   -2 -5  4
                 -3  3 -1

See Problems 1-2.
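The placement rule above (cofactors of row i land in column i of adj A) can be checked mechanically. The following Python sketch is my own illustration; `det` and `adj` are not the book's notation. It reproduces the cofactors of Example 1.

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def adj(m):
    """Adjoint: the transpose of the matrix of cofactors [alpha_ij]."""
    n = len(m)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j+1:]
                                   for k, row in enumerate(m) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]
print(adj(A))   # [[6, 1, -5], [-2, -5, 4], [-3, 3, -1]], as in Example 1
print(det(A))   # -7
```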

Using Theorems X and XI of Chapter 3, we find

(6.2)     A·(adj A)  =  (adj A)·A  =  diag(|A|, |A|, ..., |A|)  =  |A|·I

Example 2. For the matrix A of Example 1, |A| = -7 and

             1 2 3     6  1 -5      -7  0  0
A(adj A)  =  2 3 2    -2 -5  4  =    0 -7  0   =   -7·I
             3 3 4    -3  3 -1       0  0 -7

By taking determinants in (6.2), we have

(6.3)     |A|·|adj A|  =  |adj A|·|A|  =  |A|^n

There follow

I. If A is n-square and non-singular, then

(6.4)     |adj A|  =  |A|^{n-1}


II. If A is n-square and singular, then

A(adj A)  =  (adj A)A  =  0

If A is of rank < n-1, then adj A = 0. If A is of rank n-1, then adj A is of rank 1.

See Problem 3.

THE ADJOINT OF A PRODUCT. In Problem 4, we prove

III. If A and B are n-square matrices,

(6.5)     adj AB  =  adj B · adj A
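Note that (6.5) reverses the order of the factors. A quick numerical spot-check in Python (my own illustration, using the 2-square adjoint formula of Solved Problem 1 below; the matrices are chosen arbitrarily):

```python
def adj2(m):
    """Adjoint of a 2-square matrix [[a, b], [c, d]] is [[d, -b], [-c, a]]."""
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def mul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B = [[1, 2], [3, 4]], [[2, 0], [1, 5]]
print(adj2(mul(A, B)) == mul(adj2(B), adj2(A)))   # True: adj AB = adj B . adj A
```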

MINOR OF AN ADJOINT. In Problem 6, we prove

IV. Let A(i1,...,im | j1,...,jm) be the m-square minor of the n-square matrix A = [a_ij] standing in rows i1,...,im and columns j1,...,jm, let A(i_{m+1},...,in | j_{m+1},...,jn) be its complement in A, and let M(j1,...,jm | i1,...,im) denote the m-square minor of adj A whose elements occupy the same position in adj A as those of A(i1,...,im | j1,...,jm) occupy in A. Then

(6.6)   M(j1,...,jm | i1,...,im)  =  (-1)^s |A|^{m-1} A(i_{m+1},...,in | j_{m+1},...,jn)

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm.

If in (6.6), A is non-singular, then

(6.7)   A(i_{m+1},...,in | j_{m+1},...,jn)  =  (-1)^s M(j1,...,jm | i1,...,im) / |A|^{m-1}

When m = 2, (6.6) becomes

(6.8)   | α_{i1 j1}  α_{i2 j1} |
        | α_{i1 j2}  α_{i2 j2} |   =   (-1)^s |A| A(i3,...,in | j3,...,jn)   =   |A| · {algebraic complement of A(i1,i2 | j1,j2)}

When m = n-1, (6.6) becomes

(6.9)   M(j1,...,j_{n-1} | i1,...,i_{n-1})  =  (-1)^s |A|^{n-2} a_{i_n j_n}

When m = n, (6.6) becomes (6.4).



SOLVED PROBLEMS
                      a b       α11 α21        d -b
1. The adjoint of A = c d  is   α12 α22   =   -c  a

                      1 2 3        -7  6 -1
2. The adjoint of A = 1 3 4  is     1  0 -1
                      1 4 3         1 -2  1

where, for instance, the (1,1) element is α11 = |3 4; 4 3| = -7, the (1,2) element is α21 = -|2 3; 4 3| = 6, and the (2,2) element is α22 = |1 3; 1 3| = 0.

3. Prove: If A is of order n and rank n-1, then adj A is of rank 1.

First we note that, since A is of rank n-1, there is at least one non-vanishing cofactor and the rank of adj A is at least one. Since A(adj A) = 0, Theorem X, Chapter 5, shows that the rank of adj A is at most n - (n-1) = 1. Hence, the rank is exactly one.

4. Prove: adj AB = adj B · adj A.

By (6.2),    AB·adj AB  =  |AB|·I  =  (adj AB)·AB

Since    AB(adj B·adj A)  =  A(B adj B)adj A  =  A(|B|·I)adj A  =  |B|(A adj A)  =  |B|·|A|·I  =  |AB|·I

and    (adj B·adj A)AB  =  adj B{(adj A)A}B  =  adj B(|A|·I)B  =  |A|{(adj B)B}  =  |A|·|B|·I  =  |AB|·I

we conclude that adj AB = adj B·adj A.


5. Show that adj(adj A) = |A|^{n-2}·A, if |A| ≠ 0.

By (6.2) and (6.4),

adj A · adj(adj A)  =  diag(|adj A|, |adj A|, ..., |adj A|)  =  |A|^{n-1}·I

Then

A · adj A · adj(adj A)  =  |A|·adj(adj A)  =  |A|^{n-1}·A

and    adj(adj A)  =  |A|^{n-2}·A
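Problem 5 can be spot-checked numerically. For n = 3 the identity reads adj(adj A) = |A|·A; the sketch below (mine, reusing cofactor expansion) tests it on the matrix of Example 1, for which |A| = -7.

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def adj(m):
    """Adjoint: the transpose of the matrix of cofactors."""
    n = len(m)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j+1:]
                                   for k, row in enumerate(m) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [2, 3, 2], [3, 3, 4]]           # |A| = -7, from Example 2
print(adj(adj(A)) == [[det(A) * x for x in row] for row in A])   # True
```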

6. Prove: Let A(i1,...,im | j1,...,jm) be an m-square minor of the n-square matrix A = [a_ij], let A(i_{m+1},...,in | j_{m+1},...,jn) be its complement in A, and let M(j1,...,jm | i1,...,im) be the m-square minor of adj A whose elements occupy the same position in adj A as those of A(i1,...,im | j1,...,jm) occupy in A. Then

M(j1,...,jm | i1,...,im)  =  (-1)^s |A|^{m-1} A(i_{m+1},...,in | j_{m+1},...,jn)

where s = i1 + i2 + ... + im + j1 + j2 + ... + jm.

Consider first the case i1,...,im = 1,2,...,m = j1,...,jm, in which s is even. Form the n-square matrix C whose first m rows are the first m rows of adj A and whose last n-m rows are the last n-m rows of I_n. By (6.2), the product of each row of adj A into A is |A| times the corresponding row of I_n; hence

          |A|·I_m    0
C·A   =
          A21      A22

where [A21  A22] consists of the last n-m rows of A, so that A22 = A(m+1,...,n | m+1,...,n) is the complement. Expanding |C| by minors of its last n-m rows gives |C| = M(1,...,m | 1,...,m); taking determinants of both sides above,

M(1,...,m | 1,...,m)·|A|  =  |A|^m · A(m+1,...,n | m+1,...,n)

If |A| ≠ 0, division by |A| yields the theorem for this case. If |A| = 0 and m ≥ 2, both members vanish (by Theorem II, adj A is of rank at most one, so that every m-square minor of adj A is zero); for m = 1 the statement is simply the definition of the cofactor.

For the general case, move rows i1,...,im of A into the first m rows and columns j1,...,jm into the first m columns by adjacent interchanges; this requires (i1 - 1) + (i2 - 2) + ... + (im - m) + (j1 - 1) + ... + (jm - m) = s - m(m+1) interchanges. Since m(m+1) is even, collecting the resulting changes of sign produces the factor (-1)^s, where s is as defined in the theorem. From this, the required form follows immediately.

7. Prove: If A is skew-symmetric of order 2n, then |A| is the square of a polynomial in the elements of A.

By its definition, |A| is a polynomial in its elements; we are to show that under the conditions given above this polynomial is a perfect square.

The theorem is true for n = 1 since, when A = [0 a; -a 0], |A| = a².

Assume now that the theorem is true when n = k and consider the skew-symmetric matrix A = [a_ij] of order 2k+2. By partitioning, write

A  =   B    C
      -C'   D

where B is of order 2k. Then B is skew-symmetric and, by the assumption, |B| = f², where f is a polynomial in the elements of B.

If α_pq denotes the cofactor of a_pq in A, we have, by Problem 6, Chapter 3, and (6.8),

| α_{2k+1,2k+1}   α_{2k+2,2k+1} |
| α_{2k+1,2k+2}   α_{2k+2,2k+2} |   =   |A|·|B|

Now α_{2k+1,2k+1} = α_{2k+2,2k+2} = 0, each being the determinant of a skew-symmetric matrix of odd order 2k+1. Moreover, α_{2k+2,2k+1} = -α_{2k+1,2k+2}; hence,

|A|·f²  =  α²_{2k+1,2k+2}    and    |A|  =  (α_{2k+1,2k+2} / f)²

a perfect square.

SUPPLEMENTARY PROBLEMS
8. Compute the adjoint of:

2"
"l 2 3~ ~1 5
2 S' 'l 2*

(o) 1 2 (b) 1 2 (c) 2 1 (rf)


110 2

_0 0_ 2 1
_0 1_ _3 2 1_
10 1

1 -2 2 -4
1 1 1 4 -2
Ans. (a) 0-2 (i) -2 2 6 -16
1 (c) -2 -5 4 (rf)

_0 1_ 1 1 -2 1
10 3-5
-2 10

9. Verify:
(a) The adjoint of a scalar matrix is a scalar matrix.
(b) The adjoint of a diagonal matrix is a diagonal matrix.
(c) The adjoint of a triangular matrix is a triangular matrix.

10. Write a matrix A ≠ 0 of order 3 such that adj A = 0.

11. If A is a 2-square matrix, show that adj(adj A) = A.

                                 -1 -2 -2                                    -4 -3 -3
12. Show that the adjoint of A =  2  1 -2  is 3A' and the adjoint of A =      1  0  1  is A itself.
                                  2 -2  1                                     4  4  3

13. Prove: If an n-square matrix A is of rank < n-1, then adj A = 0.

14. Prove: If A is symmetric, so also is adj A.

15. Prove: If A is Hermitian, so also is adj A.

16. Prove: If A is skew-symmetric of order n, then adj A is symmetric or skew-symmetric according as n is odd or even.

17. Is there a theorem similar to that of Problem 16 for skew-Hermitian matrices?

18. For the elementary matrices, show that

(a) adj Hij^-1 = -Hij^-1
(b) adj Hi^-1(k) = diag(1/k, ..., 1/k, 1, 1/k, ..., 1/k), where the element 1 stands in the ith row
(c) adj Hij^-1(k) = Hij(k), with similar results for the K's.

19. Prove: If A is an n-square matrix of rank n or n-1 and if Hs ⋯ H2·H1·A·K1·K2 ⋯ Kt = B, where B is I or N, then

adj A  =  adj K1^-1 · adj K2^-1 ⋯ adj Kt^-1 · adj B · adj Hs^-1 ⋯ adj H2^-1 · adj H1^-1

20. Use the method of Problem 19 to compute the adjoint of

1110
2 3 3 2
(o) A of Problem 7 Chapter 5 (b)
12
,

3 2
4 6 7 4

-14 2-2 2
7 -3 -3
14 -2 2 -2
Ans. (a) -1 1 (b)
-1 1
-7 1-1 1

21. Let A = [a^--] and B = [^k-a^A be 3-square matrices. If S{C) = sum of elements of matrix C, show that

S(adj-4) = S(adjB) and \b\ = k S(adj,4) - Ui

22. Prove: If A is n-square, then |adj(adj A)| = |A|^{(n-1)²}.

23. Let A_n = [a_ij], (i,j = 1,2,...,n), be the lower triangular matrix whose triangle is the Pascal triangle; for example,

       1 0 0 0
A4  =  1 1 0 0
       1 2 1 0
       1 3 3 1

Define b_ij = (-1)^{i+j} a_ij and verify for n = 2, 3, 4 that

(i)   adj A_n = [b_ij]

24. Let B be obtained from A by deleting its ith and pth rows and jth and qth columns. Show that

| α_ij  α_pj |
| α_iq  α_pq |   =   ±|A|·|B|

where α_ij is the cofactor of a_ij in |A|.
chapter 7

The Inverse of a Matrix

IF A AND B are n-square matrices such that AB = BA = I, B is called the inverse of A, (B = A^-1), and A is called the inverse of B, (A = B^-1).

In Problem 1, we prove

I. An n-square matrix A has an inverse if and only if it is non-singular.

The inverse of a non-singular n-square matrix is unique. (See Problem 7, Chapter 2.)

II. If A is non-singular, then AB = AC implies B = C.

THE INVERSE of a non-singular diagonal matrix diag(k1, k2, ..., kn) is the diagonal matrix

diag(1/k1, 1/k2, ..., 1/kn)

If A1, A2, ..., As are non-singular matrices, then the inverse of the direct sum diag(A1, A2, ..., As) is

diag(A1^-1, A2^-1, ..., As^-1)

Procedures for computing the inverse of a general non-singular matrix are given below.

INVERSE FROM THE ADJOINT. From A·adj A = |A|·I, (6.2), if A is non-singular

                   α11/|A|  α21/|A|  ...  αn1/|A|
                   α12/|A|  α22/|A|  ...  αn2/|A|        adj A
(7.1)   A^-1   =   ................................   =  ------
                   α1n/|A|  α2n/|A|  ...  αnn/|A|         |A|

                                                             1 2 3      -7  6 -1
Example 1. From Problem 2, Chapter 6, the adjoint of    A =  1 3 4  is   1  0 -1
                                                             1 4 3       1 -2  1

                                      7/2  -3   1/2
Since |A| = -2,   A^-1  =  adj A/|A| = -1/2  0   1/2
                                      -1/2  1  -1/2

See Problem 2.
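Formula (7.1) translates directly into code. The sketch below (my own illustration) computes A^-1 = adj A / |A| for the matrix of Example 1 with exact rational arithmetic, then confirms A·A^-1 = I.

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def adj(m):
    """Adjoint: the transpose of the matrix of cofactors."""
    n = len(m)
    cof = [[(-1) ** (i + j) * det([row[:j] + row[j+1:]
                                   for k, row in enumerate(m) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

A = [[1, 2, 3], [1, 3, 4], [1, 4, 3]]
d = det(A)                                          # -2
Ainv = [[Fraction(x, d) for x in row] for row in adj(A)]
print(Ainv[0])        # [Fraction(7, 2), Fraction(-3, 1), Fraction(1, 2)]
n = len(A)
print([[sum(A[i][k] * Ainv[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)] == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # True
```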


INVERSE FROM ELEMENTARY MATRICES. Let the non-singular n-square matrix A be reduced to I by elementary transformations so that

Hs ⋯ H2·H1·A·K1·K2 ⋯ Kt  =  PAQ  =  I

Then A = P^-1·Q^-1 by (5.5) and, since (S^-1)^-1 = S,

(7.2)   A^-1  =  (P^-1·Q^-1)^-1  =  Q·P  =  K1·K2 ⋯ Kt·Hs ⋯ H2·H1

Example 2. From Problem 7, Chapter 5,

              1 0 0    1 0 0        1 -3 0   1 0 -3
H2·H1·A·K1·K2 = 0 1 0   -1 1 0  · A ·  0  1 0   0 1  0   =   I
             -1 0 1    0 0 1        0  0 1   0 0  1

Then

                        1 -3 -3     1 0 0      7 -3 -3
A^-1  =  K1·K2·H2·H1  =  0  1  0    -1 1 0  =   -1  1  0
                        0  0  1    -1 0 1     -1  0  1

In Chapter 5 it was shown that a non-singular matrix can be reduced to normal form by row transformations alone. Then, from (7.2) with Q = I, we have

(7.3)   A^-1  =  Hs ⋯ H2·H1

That is,

III. If A is reduced to I by a sequence of row transformations alone, then A^-1 is equal to the product in reverse order of the corresponding elementary matrices.

                             1 3 3
Example 3. Find the inverse of A = 1 4 3 of Example 2 using only row transformations to reduce A to I.
                             1 3 4

Write the matrix [A I3] and perform the sequence of row transformations which carry A into I3 on the rows of six elements. We have

           1 3 3  1 0 0      1 3 3   1 0 0      1 3 0   4 0 -3      1 0 0   7 -3 -3
[A I3]  =  1 4 3  0 1 0  ~   0 1 0  -1 1 0  ~   0 1 0  -1 1  0  ~   0 1 0  -1  1  0   =  [I3 A^-1]
           1 3 4  0 0 1      0 0 1  -1 0 1      0 0 1  -1 0  1      0 0 1  -1  0  1

                                                            7 -3 -3
Thus, as A is reduced to I3, I3 is carried into   A^-1  =  -1  1  0
                                                           -1  0  1
by (7.3).

See Problem 3.
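The [A I] reduction of Example 3 is exactly Gauss-Jordan elimination, and it is short to code. The sketch below is mine (the book naturally predates such tools); it assumes the input is non-singular, as the text requires.

```python
from fractions import Fraction

def inverse(a):
    """Carry [A | I] into [I | A^-1] by row transformations alone."""
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(a)]
    for c in range(n):
        p = next(i for i in range(c, n) if m[i][c] != 0)  # assumes A non-singular
        m[c], m[p] = m[p], m[c]                           # type (1): interchange
        m[c] = [x / m[c][c] for x in m[c]]                # type (2): scale pivot row
        for i in range(n):
            if i != c and m[i][c] != 0:                   # type (3): clear column
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[c])]
    return [row[n:] for row in m]

A = [[1, 3, 3], [1, 4, 3], [1, 3, 4]]
print(inverse(A) == [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]])   # True
```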

INVERSE BY PARTITIONING. Let the matrix A = [a_ij] of order n and its inverse B = [b_ij] be partitioned into submatrices of indicated orders:

         A11   A12                  B11   B12
A   =   (pxp) (pxq)     and   B  =  (pxp) (pxq)       where p + q = n
         A21   A22                  B21   B22
        (qxp) (qxq)                 (qxp) (qxq)

Since AB = BA = I_n, we have

          (i)  A11·B11 + A12·B21 = I_p        (iii) B21·A11 + B22·A21 = 0
(7.4)
          (ii) A11·B12 + A12·B22 = 0          (iv)  B21·A12 + B22·A22 = I_q

Then, provided A11 is non-singular,

          B11 = A11^-1 + (A11^-1·A12)ξ^-1(A21·A11^-1)        B12 = -(A11^-1·A12)ξ^-1
(7.5)
          B21 = -ξ^-1(A21·A11^-1)                            B22 = ξ^-1

where ξ = A22 - A21(A11^-1·A12).

See Problem 4.

In practice, A11 is usually taken of order n-1. To obtain A^-1, the following procedure is used. Let

       a11 a12          a11 a12 a13          a11 a12 a13 a14
G2  =  a21 a22 ,  G3  =  a21 a22 a23 ,  G4  =  a21 a22 a23 a24 ,  ...
                        a31 a32 a33          a31 a32 a33 a34
                                             a41 a42 a43 a44

After computing G2^-1, partition G3 so that A11 = G2 and A22 = [a33], and use (7.5) to obtain G3^-1. Repeat the process on G4, after partitioning it so that A11 = G3 and A22 = [a44], and so on.

                              1 3 3
Example 4. Find the inverse of A = 1 4 3 using partitioning.
                              1 3 4

Take A11 = [1 3; 1 4], A12 = [3; 3], A21 = [1 3], and A22 = [4]. Now

A11^-1 = [4 -3; -1 1],   A11^-1·A12 = [3; 0],   A21·A11^-1 = [1 0]

ξ  =  A22 - A21(A11^-1·A12)  =  [4] - [1 3][3; 0]  =  [1],    and    ξ^-1 = [1]

Then

B11  =  A11^-1 + (A11^-1·A12)ξ^-1(A21·A11^-1)  =  [4 -3; -1 1] + [3; 0][1 0]  =  [7 -3; -1 1]

B12  =  -(A11^-1·A12)ξ^-1  =  [-3; 0]

B21  =  -ξ^-1(A21·A11^-1)  =  [-1 0]

B22  =  ξ^-1  =  [1]

         B11  B12        7 -3 -3
and      B21  B22   =   -1  1  0
                        -1  0  1
See Problems 5-6.
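The arithmetic of Example 4 can be traced step by step. Below is a sketch of mine of (7.5) for the same A, with A11 the leading 2-square minor; the variable `xi` is the 1x1 quantity ξ of the text (in modern terminology, the Schur complement of A11).

```python
from fractions import Fraction

def mul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

# A = [[1,3,3],[1,4,3],[1,3,4]] partitioned with p = 2, q = 1.
A11i = [[Fraction(4), Fraction(-3)], [Fraction(-1), Fraction(1)]]  # A11^-1, |A11| = 1
A12, A21, A22 = [[3], [3]], [[1, 3]], [[4]]

u = mul(A11i, A12)                     # A11^-1 A12 = [[3], [0]]
v = mul(A21, A11i)                     # A21 A11^-1 = [[1, 0]]
xi = A22[0][0] - mul(A21, u)[0][0]     # xi = 4 - 3 = 1

B11 = [[A11i[i][j] + u[i][0] * v[0][j] / xi for j in range(2)] for i in range(2)]
B12 = [[-u[i][0] / xi] for i in range(2)]
B21 = [[-v[0][j] / xi for j in range(2)]]
B22 = [[1 / xi]]
print(B11 == [[7, -3], [-1, 1]], B12 == [[-3], [0]],
      B21 == [[-1, 0]], B22 == [[1]])   # True True True True
```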



THE INVERSE OF A SYMMETRIC MATRIX. When A is symmetric, α_ij = α_ji and only ½n(n+1) cofactors need be computed instead of the usual n² in obtaining A^-1 from adj A.

If there is to be any gain in computing A^-1 as the product of elementary matrices, the elementary transformations must be performed so that the property of being symmetric is preserved. This requires that the transformations occur in pairs, a row transformation followed immediately by the same column transformation, as with the pair Hij, Kij or the pair Hij(k), Kij(k). However, when an element a in the diagonal is to be replaced by 1, the pair of transformations is Hi(1/√a) and Ki(1/√a). In general, √a is either irrational or imaginary; hence, this procedure is not recommended.

The maximum gain occurs when the method of partitioning is used since then (7.5) reduces to

          B11 = A11^-1 + (A11^-1·A12)ξ^-1(A11^-1·A12)'        B12 = -(A11^-1·A12)ξ^-1
(7.6)
          B21 = (B12)'                                        B22 = ξ^-1

where ξ = A22 - A12'(A11^-1·A12).

See Problem 7.

When A is not symmetric, the above procedure may be used to find the inverse of A'A, which is symmetric, and then the inverse of A is found by

(7.7)   A^-1  =  (A'A)^-1·A'
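Formula (7.7) in miniature (my own example; A is deliberately non-symmetric, while A'A is symmetric, and the Gauss-Jordan routine from the sketch after Example 3 is repeated here to keep the fragment self-contained):

```python
from fractions import Fraction

def mul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

def inverse(a):
    """Gauss-Jordan inversion over the rationals; assumes a is non-singular."""
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(i == j) for j in range(n)]
         for i, row in enumerate(a)]
    for c in range(n):
        p = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[p] = m[p], m[c]
        m[c] = [x / m[c][c] for x in m[c]]
        for i in range(n):
            if i != c:
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[c])]
    return [row[n:] for row in m]

A = [[1, 2], [3, 4]]                      # not symmetric
At = [list(col) for col in zip(*A)]       # A'
AtA = mul(At, A)                          # [[10, 14], [14, 20]], symmetric
Ainv = mul(inverse(AtA), At)              # (A'A)^-1 A' = A^-1 by (7.7)
print(mul(A, Ainv) == [[1, 0], [0, 1]])   # True
```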

SOLVED PROBLEMS
1. Prove: An n-square matrix A has an inverse if and only if it is non-singular.

Suppose A is non-singular. By Theorem IV, Chapter 5, there exist non-singular matrices P and Q such that PAQ = I. Then A = P^-1·Q^-1 and A^-1 = (P^-1·Q^-1)^-1 = Q·P exists.

Suppose A^-1 exists. Then A·A^-1 = I is of rank n. If A were singular, A·A^-1 would be of rank < n; hence, A is non-singular.

2. (a) When A = [2 3; 1 4]:  |A| = 5,  adj A = [4 -3; -1 2],  and  A^-1 = [4/5 -3/5; -1/5 2/5].

                2 3 1                          1 -5  7                 1 -5  7
(b) When A  =   1 2 3 ,  then |A| = 18, adj A = 7  1 -5 ,  and A^-1 = (1/18)· 7  1 -5
                3 1 2                         -5  7  1                -5  7  1

                            2 4  3  2
3. Find the inverse of A  =  3 6  5  2
                            2 5  2 -3
                            4 5 14 14

Write [A I4] and apply the sequence of row transformations which carries A into I4, exactly as in Example 3. The reduction carries I4 into A^-1, so that

           2 4  3  2  1 0 0 0                      1 0 0 0  -23  29  -64/5  -18/5
[A I4]  =  3 6  5  2  0 1 0 0    ~   ⋯   ~         0 1 0 0   10 -12   26/5    7/5   =  [I4 A^-1]
           2 5  2 -3  0 0 1 0                      0 0 1 0    1  -2    6/5    2/5
           4 5 14 14  0 0 0 1                      0 0 0 1    2  -2    3/5    1/5

                             -23  29  -64/5  -18/5
The inverse is     A^-1  =     10 -12   26/5    7/5
                               1  -2    6/5    2/5
                               2  -2    3/5    1/5

4. Solve

(i)  A11·B11 + A12·B21 = I_p        (iii) B21·A11 + B22·A21 = 0
(ii) A11·B12 + A12·B22 = 0          (iv)  B21·A12 + B22·A22 = I_q

for B11, B12, B21, and B22.

Set B22 = ξ^-1. From (ii), B12 = -(A11^-1·A12)ξ^-1; from (iii), B21 = -ξ^-1(A21·A11^-1); and, from (i),

B11  =  A11^-1 - A11^-1·A12·B21  =  A11^-1 + (A11^-1·A12)ξ^-1(A21·A11^-1)

Finally, substituting in (iv),

-ξ^-1(A21·A11^-1)A12 + ξ^-1·A22  =  I_q    and    ξ  =  A22 - A21(A11^-1·A12)

                            1 2 3 1
5. Find the inverse of A  =  1 3 3 2   by partitioning.
                            2 4 3 3
                            1 1 1 1

              1 2 3
(a) Take G3 = 1 3 3  and partition it so that
              2 4 3

A11 = [1 2; 1 3],   A12 = [3; 3],   A21 = [2 4],   A22 = [3]

Now A11^-1 = [3 -2; -1 1],   A11^-1·A12 = [3; 0],   A21·A11^-1 = [2 0],

ξ  =  A22 - A21(A11^-1·A12)  =  [3] - [2 4][3; 0]  =  [-3],    and    ξ^-1 = [-1/3]

Then

B11  =  A11^-1 + (A11^-1·A12)ξ^-1(A21·A11^-1)  =  [3 -2; -1 1] - (1/3)[3; 0][2 0]  =  [1 -2; -1 1]

B12  =  -(A11^-1·A12)ξ^-1  =  [1; 0],    B21  =  -ξ^-1(A21·A11^-1)  =  [2/3 0],    B22  =  [-1/3]

                          3 -6  3
so that    G3^-1  =  (1/3)· -3  3  0
                          2  0 -1

(b) Partition A so that A11 = G3, A12 = [1; 2; 3], A21 = [1 1 1], and A22 = [1].

Now A11^-1·A12 = [0; 1; -1/3],   A21·A11^-1 = (1/3)[2 -3 2],

ξ  =  [1] - [1 1 1][0; 1; -1/3]  =  [1/3],    and    ξ^-1 = [3]

Then

                                                 1 -2  1
B11  =  G3^-1 + (A11^-1·A12)ξ^-1(A21·A11^-1)  =   1 -2  2
                                                 0  1 -1

B12  =  -(A11^-1·A12)ξ^-1  =  [0; -3; 1],    B21  =  -ξ^-1(A21·A11^-1)  =  [-2 3 -2],    B22  =  [3]

         B11  B12        1 -2  1  0
and      B21  B22   =    1 -2  2 -3
                         0  1 -1  1
                        -2  3 -2  3

CHAP. 7] THE INVERSE OP A MATRIX 61

                            1 3 3
6. Find the inverse of A  =  1 3 4   by partitioning.
                            1 4 3

We cannot take A11 = [1 3; 1 3] since this is singular. By Example 3, the inverse of

              1 3 3                     7 -3 -3
H23·A  =  B  =  1 4 3     is    B^-1  =  -1  1  0
              1 3 4                    -1  0  1

Then A = H23·B and A^-1 = B^-1·H23; postmultiplying B^-1 by H23 interchanges its second and third columns, so that

          7 -3 -3
A^-1  =   -1  0  1
         -1  1  0

Thus, if the (n-1)-square minor A11 of the n-square non-singular matrix A is singular, we first bring a non-singular (n-1)-square matrix into the upper left corner to obtain B, find the inverse of B, and by the proper transformation on B^-1 obtain A^-1.

                                                2  1 -1  2
7. Compute the inverse of the symmetric matrix A = 1  3  2 -3
                                               -1  2  1 -1
                                                2 -3 -1  4

                               2 1 -1
Consider first the submatrix G3 = 1 3  2 , partitioned so that
                              -1 2  1

A11 = [2 1; 1 3],   A12 = [-1; 2],   A21 = [-1 2],   A22 = [1]

Now A11^-1 = (1/5)[3 -1; -1 2],   A11^-1·A12 = [-1; 1],

ξ  =  A22 - A12'(A11^-1·A12)  =  [1] - [-1 2][-1; 1]  =  [-2]    and    ξ^-1 = [-1/2]

Then, by (7.6),

B11 = A11^-1 + (A11^-1·A12)ξ^-1(A11^-1·A12)' = (1/10)[1 3; 3 -1],   B12 = [-1/2; 1/2],   B21 = (B12)',   B22 = [-1/2]

                            1  3 -5
and      G3^-1  =  (1/10)·   3 -1  5
                           -5  5 -5

Consider now the matrix A partitioned so that

A11 = G3,   A12 = [2; -3; -1],   A21 = [2 -3 -1],   A22 = [4]

Now A11^-1·A12 = [-1/5; 2/5; -2],

ξ  =  [4] - [2 -3 -1][-1/5; 2/5; -2]  =  [18/5],    and    ξ^-1 = [5/18]

                          2  5 -7
Then    B11  =  (1/18)·    5 -1  5 ,    B21  =  (1/18)[1 -2 10],    B22  =  [5/18]
                         -7  5 11

                          2  5 -7  1
and     A^-1  =  (1/18)·   5 -1  5 -2
                         -7  5 11 10
                          1 -2 10  5

SUPPLEMENTARY PROBLEMS
8. Find the adjoint and inverse of each of the following:
1 2
"
1 2-1" "2
3 4" "1 2
3"
3
(a) -1 1 2 (b) 4 3 1 (O 2 4 5 (d)
n n
n
U w ^ 1
J.

_ 2 -1 1_ _1 2 4 3 5 6
3

-2/3
3-15 -10 4 9 1 -3 2
1

1/3
Inverses (a)
^ -153-1 5

3

^'^i
-5
15 -4 -14
1 6
,
(c) -3
2-1
3 -1 .
(d)
1/2 -1/6
- -J - ^ - - 1/3

9. Find the inverse of the matrix of Problem 8((?) as a direct sum.

10. Obtain the inverses of the matrices of Problem 8 using the method of Problem 3.

1 1 1 1 3 4 2 7 2 5 2 3
13 3 2 1

1 4 3 3-1
-4 4
11. Same, for the matrices (a)
1

2
2

3
3

5 -5
.
(b)
2
5
3

7
3

3
2

9
(c)
2
3
3

6
3

3 2
(d) 13 4 11
-4 -5
1 11-1
1
3 8 2 3 2 3 4 12 8
1-2-12 2
2 16 -6 4 -144 36 60 21

1 22 41 - 30 -1 48 -20 -12 -5
(a) (c)
18 -10 -44 30 -2 48 48 -4 -12 -13
4 -13 6 -1 12 -12 3

30 -20 -15 25 -5
-1 11 7 -26
30 -11 -18 7 -8
-1 -7 -3 16
(b) 1 (^)i -30 12 21 -9 6
2 1 1 -1
-15 12 6-9 6
1 -1 -1 2
15 -7 -6 -1 -1

12. Use the result of Example 4 to obtain the inverse of the matrix of Problem 11(d) by partitioning.

13. Obtain by partitioning the inverses of the matrices of Problems 8(a), 8(6), 11(a) - 11(c).

12-12 12 2

14. Obtain by partitioning the inverses of the symmetric matrices (a)


2 2-11 (b)
112 3
-1 -1 1 -1 2 2 2 3
2 1-12 2 3 3 3
1 -1 -1 -1 -3 3-3 2
Ans. (a)
-1 -1 -1 1 3-4 4-2
(b)
-1 -1 -5 -1 -3 4-5 3
-1 1 -1 -1 2-2 3-2

15. Prove: If A is non-singular, then AB = AC implies B = C.

16. Show that if the non-singular matrices A and B commute, so also do
(a) A^-1 and B, (b) A and B^-1, (c) A^-1 and B^-1.    Hint. (a) A^-1(AB)A^-1 = A^-1(BA)A^-1.

17. Show that if the non-singular matrix A is symmetric, so also is A^-1.
Hint: I = (AA^-1)' = (A^-1)'A' = (A^-1)'A.

18. Show that if the non-singular symmetric matrices A and B commute, then (a) A^-1·B, (b) A·B^-1, and (c) A^-1·B^-1 are symmetric.    Hint: (a) (A^-1·B)' = (B·A^-1)' = (A^-1)'B' = A^-1·B.

19. An mxn matrix A is said to have a right inverse B if AB = I and a left inverse C if CA = I. Show that A has a right inverse if and only if A is of rank m, and has a left inverse if and only if the rank of A is n.

                               1 3 2 3
20. Find a right inverse of A = 1 4 1 3   if one exists.
                               1 3 5 4

                                           1 3 2
Hint. The rank of A is 3 and the submatrix S = 1 4 1 is non-singular, with inverse
                                           1 3 5

                  17 -9 -5
S^-1  =  (1/3)·    -4  3  1
                  -1  0  1

                                                       17 -9 -5
A right inverse of A is the 4x3 matrix   B  =  (1/3)·    -4  3  1
                                                       -1  0  1
                                                        0  0  0
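The hint can be verified directly. Here is a sketch of mine that builds B from S^-1 = adj S / |S| (with |S| = 3) and an appended zero row, then checks AB = I3; since the fourth column of A multiplies the zero row, AB collapses to S·S^-1.

```python
from fractions import Fraction

def mul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(len(q)))
             for j in range(len(q[0]))] for i in range(len(p))]

A = [[1, 3, 2, 3], [1, 4, 1, 3], [1, 3, 5, 4]]
# S = the first three columns of A; |S| = 3 and S^-1 = adj S / 3:
Sinv = [[Fraction(x, 3) for x in row]
        for row in [[17, -9, -5], [-4, 3, 1], [-1, 0, 1]]]
B = Sinv + [[Fraction(0)] * 3]            # append a zero row -> 4x3
print(mul(A, B) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # True: B is a right inverse
```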

                               1 3 3
21. Show that the submatrix T = 1 4 3 , formed from columns 1, 2, 4 of A of Problem 20, is non-singular and obtain
                               1 3 4

     7 -3 -3
    -1  1  0
     0  0  0
    -1  0  1

as another right inverse of A.

1 1 1
7 -1 -1 a
22. Obtain -310b as a left inverse of
3

3
4
3
3

4
, where a,b. and c are arbitrary.
-3 1 c

                  1 3 4 7
23. Show that A  =  1 4 5 9   has neither a right nor a left inverse.
                  2 3 5 8

              A11 A12
24. Prove: If A = A21 A22  with |A11| ≠ 0, then  |A| = |A11|·|A22 - A21·A11^-1·A12|.

25. If |I + A| ≠ 0, then (I + A)^-1 and (I - A) commute.

26. Prove (i) of Problem 23, Chapter 6.


chapter 8
Fields

NUMBER FIELDS. A collection or set S of real or complex numbers, consisting of more than the element 0, is called a number field provided the operations of addition, subtraction, multiplication, and division (except by 0) on any two of the numbers yield a number of S.

Examples of number fields are:

(a) the set of all rational numbers,
(b) the set of all real numbers,
(c) the set of all numbers of the form a + b√3, where a and b are rational numbers,
(d) the set of all complex numbers a + bi, where a and b are real numbers.

The set of all integers and the set of all numbers of the form b√3, where b is a rational number, are not number fields.

GENERAL FIELDS. A collection or set S of two or more elements, together with two operations called addition (+) and multiplication (·), is called a field F provided that, a, b, c, ... being elements of F, i.e. scalars,

A1: a + b is a unique element of F
A2: a + b = b + a
A3: a + (b + c) = (a + b) + c
A4: For every element a in F there exists an element 0 in F such that a + 0 = 0 + a = a.
A5: For each element a in F there exists a unique element -a in F such that a + (-a) = 0.

M1: ab = a·b is a unique element of F
M2: ab = ba
M3: (ab)c = a(bc)
M4: For every element a in F there exists an element 1 ≠ 0 such that 1·a = a·1 = a.
M5: For each element a ≠ 0 in F there exists a unique element a^-1 in F such that a·a^-1 = a^-1·a = 1.

D1: a(b + c) = ab + ac
D2: (a + b)c = ac + bc

In addition to the number fields listed above, other examples of fields are:

(e) the set of all quotients P(x)/Q(x) of polynomials in x with real coefficients,

(f) the set of all 2x2 matrices of the form [a -b; b a], where a and b are real numbers,

(g) the set {0, 1} in which 1 + 1 = 0.

This last field, called a field of characteristic 2, will be excluded hereafter. In this field, for example, the customary proof that a determinant having two rows identical is 0 is not valid. By interchanging the two identical rows, we are led to D = -D or 2D = 0; but D is not necessarily 0.


SUBFIELDS. If S and T are two sets and if every member of S is also a member of T, then S is called
a subset of T.

If S and T are fields and if S is a subset of T, then S is called a subfield of T. For exam-
ple, the field of all real numbers is a subfield of the field of all complex numbers; the field of
all rational numbers is a subfield of the field of all real numbers and the field of all complex
numbers.

MATRICES OVER A FIELD. When all of the elements of a matrix A are in a field F, we say that "A is over F". For example,

A = [1 1/2; 1/4 2/3] is over the rational field and B = [i 1+i; 2 1-3i] is over the complex field.

Here, A is also over the real field while B is not; also, A is over the complex field.

Let A, B, C, ... be matrices over the same field F and let F be the smallest field which contains all the elements; that is, if all the elements are rational numbers, the field F is the rational field and not the real or complex field. An examination of the various operations defined on these matrices, individually or collectively, in the previous chapters shows that no elements other than those in F are ever required. For example:

The sum, difference, and product are matrices over F.

If A is non-singular, its inverse is over F.

If A ~ I, then there exist matrices P and Q over F such that PAQ = I, and I is over F.

If A is over the rational field and is of rank r, its rank is unchanged when considered over the real or the complex field.

Hereafter when A is said to be over F it will be assumed that F is the smallest field containing all of its elements.

In later chapters it will at times be necessary to restrict the field, say, to the real field. At other times, the field of the elements will be extended, say, from the rational field to the real field. Otherwise, the statement "A over F" implies no restriction on the field, except for the excluded field of characteristic two.

SOLVED PROBLEM
1. Verify that the set of all complex numbers constitutes a field.

To do this we simply check the properties A1-A5, M1-M5, and D1-D2. The zero element (A4) is 0 and the unit element (M4) is 1. If a + bi and c + di are two elements, the negative (A5) of a + bi is -a - bi; the product (M1) is (a + bi)(c + di) = (ac - bd) + (ad + bc)i; the inverse (M5) of a + bi ≠ 0 is

1/(a + bi)  =  a/(a² + b²)  -  b/(a² + b²)·i

Verification of the remaining properties is left as an exercise for the reader.
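The M1 and M5 computations above can be sanity-checked numerically. The sketch below is my own illustration (not part of the original text); complex numbers are represented as pairs (a, b) standing for a + bi, with exact rational arithmetic for the inverse.

```python
from fractions import Fraction

def c_mul(z, w):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i, with pairs (a, b)."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def c_inv(z):
    """Inverse of a + bi != 0: a/(a^2 + b^2) - b/(a^2 + b^2) i."""
    a, b = z
    n = a * a + b * b
    return (Fraction(a, n), Fraction(-b, n))

z = (3, 4)                         # 3 + 4i
print(c_inv(z))                    # (Fraction(3, 25), Fraction(-4, 25))
print(c_mul(z, c_inv(z)) == (1, 0))   # True: z . z^-1 is the unit element 1
```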

SUPPLEMENTARY PROBLEMS
2. Verify that (a) the set of all real numbers of the form a + b√5, where a and b are rational numbers, and (b) the set of all quotients P(x)/Q(x) of polynomials in x with real coefficients constitute fields.

3. Verify that (a) the set of all rational numbers, (b) the set of all numbers a + b√3, where a and b are rational numbers, and (c) the set of all numbers a + bi, where a and b are rational numbers, are subfields of the complex field.

4. Verify that the set of all 2x2 matrices of the form [a -b; b a], where a and b are rational numbers, forms a field. Show that this is a subfield of the field of all 2x2 matrices of the form [a -b; b a], where a and b are real numbers.

5. Why does the set of all 2x2 matrices with real elements not form a field?

6. A set R of elements a, b, c, ... satisfying the conditions A1, A2, A3, A4, A5; M1, M3; D1, D2 of Page 64 is called a ring. To emphasize the fact that multiplication need not be commutative, R may be called a non-commutative ring. When a ring R satisfies M2, it is called commutative. When a ring R satisfies M4, it is spoken of as a ring with unit element.

Verify:
(a) the set of even integers 0, ±2, ±4, ... is an example of a commutative ring without unit element.
(b) the set of all integers 0, ±1, ±2, ±3, ... is an example of a commutative ring with unit element.
(c) the set of all n-square matrices over F is an example of a non-commutative ring with unit element.
(d) the set of all 2x2 matrices of the form [a -b; b a], where a and b are real numbers, is an example of a commutative ring with unit element.

7. Can the set (a) of Problem 6 be turned into a commutative ring with unit element by simply adjoining the element 1 to the set?

8. By Problem 4, the set (d) of Problem 6 is a field. Is every field a ring? Is every commutative ring with unit element a field?

9. Describe the ring of all 2x2 matrices of the form [a b; 0 0], where a and b are in F. If A is any matrix of the ring and L = [1 0; 0 0], show that LA = A. Call L a left unit element. Is there a right unit element?

10. Let C be the field of all complex numbers p + qi and K be the field of all 2x2 matrices of the form [u -v; v u], where u and v are real numbers. Take the complex number a + bi and the matrix [a -b; b a] as corresponding elements of the two sets and call each the image of the other.

(a) What are the images of [2 -3; 3 2], of 3 + 2i, of [0 -4; 4 0], and of 5?
(b) Show that the image of the sum (product) of two elements of K is the sum (product) of their images in C.
(c) Show that the image of the identity element of K is the identity element of C.
(d) What is the image of the conjugate of a + bi?
(e) What is the image of the inverse of [a -b; b a]?

This is an example of an isomorphism between two sets.
chapter 9

Linear Dependence of Vectors and Forms

THE ORDERED PAIR of real numbers (x1, x2) is used to denote a point X in a plane. The same pair of numbers, written as [x1, x2], will be used here to denote the two-dimensional vector or 2-vector OX (see Fig. 9-1).

[Fig. 9-1 shows the vector OX to the point X(x1, x2); Fig. 9-2 shows the parallelogram on X1(x11, x12) and X2(x21, x22) with diagonal X3(x11 + x21, x12 + x22).]

If X1 = [x11, x12] and X2 = [x21, x22] are distinct 2-vectors, the parallelogram law for their sum (see Fig. 9-2) yields

X3  =  X1 + X2  =  [x11 + x21, x12 + x22]

Treating X1 and X2 as 1x2 matrices, we see that this is merely the rule for adding matrices given in Chapter 1. Moreover, if k is any scalar,

kX1  =  [kx11, kx12]

is the familiar multiplication of a vector by a real number of physics.

VECTORS. By an n-dimensional vector or n-vector X over F is meant an ordered set of n elements x_i of F, as

(9.1)   X = [x1, x2, ..., xn]

The elements x1, x2, ..., xn are called respectively the first, second, ..., nth components of X.

Later we shall find it more convenient to write the components of a vector in a column, as

(9.1')   X = [x1, x2, ..., xn]'

Now (9.1) and (9.1') denote the same vector; however, we shall speak of (9.1) as a row vector and (9.1') as a column vector. We may, then, consider the pxq matrix A as defining p row vectors (the elements of a row being the components of a q-vector) or as defining q column vectors.

The vector, all of whose components are zero, is called the zero vector and is denoted by 0.

The sum and difference of two row (column) vectors and the product of a scalar and a vector are formed by the rules governing matrices.

Example 1. Consider the 3-vectors

X1 = [3, 1, -4],  X2 = [2, 2, -3],  X3 = [0, -4, 1],  and  X4 = [-4, -4, 6]

(a) 2X1 - 5X2 = 2[3, 1, -4] - 5[2, 2, -3] = [6, 2, -8] - [10, 10, -15] = [-4, -8, 7]
(b) 2X2 + X4 = 2[2, 2, -3] + [-4, -4, 6] = [0, 0, 0] = 0
(c) 2X1 - 3X2 - X3 = 0
(d) 2X1 - X2 - X3 + X4 = 0

The vectors used here are row vectors. Note that if each bracket is primed to denote column vectors, the results remain correct.

www.TheSolutionManual.com
LINEAR DEPENDENCE OF VECTORS. The m n-vectors over F

(9.2)   Xi = [xi1, xi2, ..., xin],   (i = 1, 2, ..., m)

are said to be linearly dependent over F provided there exist m elements k1, k2, ..., km of F, not all zero, such that

(9.3)   k1X1 + k2X2 + ... + kmXm  =  0

Otherwise, the m vectors are said to be linearly independent.

Example 2. Consider the four vectors of Example 1. By (b) the vectors X2 and X4 are linearly dependent; so also are X1, X2, and X3 by (c), and the entire set by (d).

The vectors X1 and X2, however, are linearly independent. For, assume the contrary, so that

k1X1 + k2X2  =  [3k1 + 2k2, k1 + 2k2, -4k1 - 3k2]  =  [0, 0, 0]

Then 3k1 + 2k2 = 0, k1 + 2k2 = 0, and -4k1 - 3k2 = 0. From the first two relations k1 = 0 and then k2 = 0.

Any n-vector X and the n-zero vector 0 are linearly dependent.

A vector X_{m+1} is said to be expressible as a linear combination of the vectors X1, X2, ..., Xm if there exist elements k1, k2, ..., km of F such that

X_{m+1}  =  k1X1 + k2X2 + ... + kmXm

BASIC THEOREMS. If in (9.3), ki ≠ 0, we may solve for

Xi  =  -(1/ki){k1X1 + ... + k_{i-1}X_{i-1} + k_{i+1}X_{i+1} + ... + kmXm},   or

(9.4)   Xi  =  s1X1 + ... + s_{i-1}X_{i-1} + s_{i+1}X_{i+1} + ... + smXm

Thus,

I. If m vectors are linearly dependent, some one of them may always be expressed as a linear combination of the others.

II. If m vectors X1, X2, ..., Xm are linearly independent while the set obtained by adding another vector X_{m+1} is linearly dependent, then X_{m+1} can be expressed as a linear combination of X1, X2, ..., Xm.

Example 3. From Example 2, the vectors X1 and X2 are linearly independent while X1, X2, and X3 are linearly dependent, satisfying the relation 2X1 - 3X2 - X3 = 0. Clearly, X3 = 2X1 - 3X2.

III. If among the m vectors X1, X2, ..., Xm there is a subset of r < m vectors which are linearly dependent, the vectors of the entire set are linearly dependent.

Example 4. By (b) of Example 1, the vectors X2 and X4 are linearly dependent; by (d), the set of four vectors is linearly dependent. See Problem 1.

     IV. If the rank of the matrix

               | x11  x12  ...  x1n |
(9.5)          | x21  x22  ...  x2n |          (m < n)
               | .................. |
               | xm1  xm2  ...  xmn |

associated with the m vectors (9.2) is r < m, there are exactly r vectors of the set which
are linearly independent while each of the remaining m-r vectors can be expressed as a
linear combination of these r vectors.                    See Problems 2-3.

V. A necessary and sufficient condition that the vectors (9.2) be linearly dependent
is that the matrix (9.5) of the vectors be of rank r<m. If the rank is m, the vectors are
linearly independent.

The set of vectors (9.2) is necessarily linearly dependent if m>n.

If the set of vectors (9.2) is linearly independent so also is every subset of them.
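Theorem IV suggests a simple procedure for extracting a maximal independent subset: scan the vectors in order and keep one only when it raises the rank of the set collected so far. This is an illustrative sketch, not the book's own method; it reuses exact rational elimination via the fractions module.

```python
from fractions import Fraction

def rank(rows):
    """Exact rank over the rationals by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent_subset(vectors):
    """Greedy selection of a maximal linearly independent subset:
    keep a vector only if it enlarges the rank (Theorem IV)."""
    kept = []
    for v in vectors:
        if rank(kept + [v]) > rank(kept):
            kept.append(v)
    return kept

vs = [[1, 2, -3, 4], [3, -1, 2, 1], [1, -5, 8, -7]]
print(independent_subset(vs))   # keeps the first two; the third is -2*X1 + X2
```

Repeatedly recomputing the rank is wasteful for large sets, but it keeps the sketch close to the statement of the theorem.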

A LINEAR FORM over F in n variables x1, x2, ..., xn is a polynomial of the type

                n
(9.6)           Σ ai·xi   =   a1x1 + a2x2 + ... + anxn
               i=1

where the coefficients are in F.
Consider a system of m linear forms in n variables

          f1 = a11x1 + a12x2 + ... + a1nxn
          f2 = a21x1 + a22x2 + ... + a2nxn
(9.7)     ................................
          fm = am1x1 + am2x2 + ... + amnxn

and the associated matrix

               | a11  a12  ...  a1n |
          A =  | a21  a22  ...  a2n |
               | .................. |
               | am1  am2  ...  amn |

If there exist elements k1, k2, ..., km, not all zero, in F such that

          k1f1 + k2f2 + ... + kmfm  =  0



the forms (9.7) are said to be linearly dependent; otherwise the forms are said to be linearly
independent. Thus, the linear dependence or independence of the forms of (9.7) is equivalent
to the linear dependence or independence of the row vectors of A.

Example 5. The forms f1 = 2x1 - x2 + 3x3, f2 = x1 + 2x2 + 4x3, f3 = 4x1 - 7x2 + x3 are linearly depend-

                          | 2  -1  3 |
           ent since A =  | 1   2  4 |  is of rank 2. Here, 3f1 - 2f2 - f3 = 0.
                          | 4  -7  1 |

The system (9.7) is necessarily dependent if m>n. Why?
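Because dependence of forms is exactly dependence of the rows of A, a relation such as the one in Example 5 can be verified by combining coefficient rows. A small illustrative sketch (the coefficient rows below are those of Example 5 as given here):

```python
# Each linear form is stored as its row of coefficients, so a linear
# combination of forms is the same combination of coefficient rows.
f1 = [2, -1, 3]
f2 = [1, 2, 4]
f3 = [4, -7, 1]

def combine(coeffs, forms):
    """Coefficient row of k1*f1 + k2*f2 + ... for the given k's."""
    n = len(forms[0])
    return [sum(k * f[j] for k, f in zip(coeffs, forms)) for j in range(n)]

print(combine([3, -2, -1], [f1, f2, f3]))   # [0, 0, 0]: 3*f1 - 2*f2 - f3 vanishes
```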

SOLVED PROBLEMS

1. Prove: If among the m vectors X1, X2, ..., Xm there is a subset, say, X1, X2, ..., Xr, r < m, which is
   linearly dependent, so also are the m vectors.

   Since, by hypothesis, k1X1 + k2X2 + ... + krXr = 0 with not all of the k's equal to zero, then

        k1X1 + k2X2 + ... + krXr + 0·X(r+1) + ... + 0·Xm  =  0

   with not all of the k's equal to zero and the entire set of vectors is linearly dependent.

2. Prove: If the rank of the matrix associated with a set of m n-vectors is r < m, there are exactly r

vectors which are linearly independent while each of the remaining m-r vectors can be expressed
as a linear combination of these r vectors.

   Let (9.5) be the matrix and suppose first that m ≤ n. If the r-rowed minor A in the upper left-hand corner
   is equal to zero, we interchange rows and columns as necessary to bring a non-vanishing r-rowed minor
   into this position and then renumber all rows and columns in natural order. Thus, we have

              | x11  x12  ...  x1r |
        A  =  | x21  x22  ...  x2r |   ≠   0
              | .................. |
              | xr1  xr2  ...  xrr |

   Consider now an (r+1)-rowed minor

              | x11  x12  ...  x1r  x1q |
              | x21  x22  ...  x2r  x2q |
        V  =  | ....................... |
              | xr1  xr2  ...  xrr  xrq |
              | xp1  xp2  ...  xpr  xpq |

   where the elements xpj and xiq are respectively from any row and any column not included in A. Let k1, k2,
   ..., kr+1 = A be the respective cofactors of the elements x1q, x2q, ..., xrq, xpq of the last column of V. Then,
   by (3.10)

        k1x1i + k2x2i + ... + krxri + kr+1·xpi  =  0          (i = 1, 2, ..., r)

   and by hypothesis     k1x1q + k2x2q + ... + krxrq + kr+1·xpq  =  V  =  0

   Now let the last column of V be replaced by another of the remaining columns, say the column numbered
   u, not appearing in A. The cofactors of the elements of this column are precisely the k's obtained above,
   so that
        k1x1u + k2x2u + ... + krxru + kr+1·xpu  =  0

   Thus,
        k1x1t + k2x2t + ... + krxrt + kr+1·xpt  =  0          (t = 1, 2, ..., n)

   and, summing over all values of t,

        k1X1 + k2X2 + ... + krXr + kr+1·Xp  =  0

   Since kr+1 = A ≠ 0, Xp is a linear combination of the r linearly independent vectors X1, X2, ..., Xr. But Xp
   was any one of the m-r vectors X(r+1), X(r+2), ..., Xm; hence, each of these may be expressed as a linear com-
   bination of X1, X2, ..., Xr.

   For the case m > n, consider the matrix when to each of the given m vectors m-n additional zero compo-
   nents are added. This matrix is [A 0]. Clearly the linear dependence or independence of the vectors and
   also the rank of A have not been changed.

   Thus, in either case, the vectors X(r+1), ..., Xm are linear combinations of the linearly independent vec-
   tors X1, X2, ..., Xr, as was to be proved.

3. Show, using a matrix, that each triple of vectors

        X1 = [1, 2, -3, 4]                X1 = [2, 3, 1, -1]
   (a)  X2 = [3, -1, 2, 1]      and  (b)  X2 = [2, 3, 1, -2]
        X3 = [1, -5, 8, -7]               X3 = [4, 6, 2, -3]

   is linearly dependent. In each determine a maximum subset of linearly independent vectors and
   express the others as linear combinations of these.

             | 1  2 -3  4 |
   (a) Here  | 3 -1  2  1 |  is of rank 2; there are two linearly independent vectors, say X1 and X2.
             | 1 -5  8 -7 |

                     | 1  2 |                                | 1  2 -3 |
       The minor     | 3 -1 |  ≠ 0. Consider then the minor  | 3 -1  2 | .  The cofactors of the elements
                                                             | 1 -5  8 |

       of the third column are respectively -14, 7, and -7. Then -14X1 + 7X2 - 7X3 = 0 and X3 = -2X1 + X2.

             | 2  3  1 -1 |
   (b) Here  | 2  3  1 -2 |  is of rank 2; there are two linearly independent vectors, say X1 and X2. Now the
             | 4  6  2 -3 |

             | 2  3 |                                                    | 2 -1 |
       minor | 2  3 | = 0; we interchange the 2nd and 4th columns to obtain | 2 -2 | ≠ 0.
                                                                         

                                                           | 2 -1  1 |
       The cofactors of the elements of the last column of | 2 -2  1 |  are 2, 2, -2 respectively. Then
                                                           | 4 -3  2 |

       2X1 + 2X2 - 2X3 = 0 and X3 = X1 + X2.


4. Let P1(1, 1, 1), P2(1, 2, 3), P3(2, 1, 2), and P4(2, 3, 4) be points in ordinary space. The points P1, P2,
   and the origin of coordinates determine a plane π of equation

              | x  y  z |
   (i)        | 1  1  1 |   =   x - 2y + z   =   0
              | 1  2  3 |

   Substituting the coordinates of P4 into the left member of (i), we have

              | 2  3  4 |
              | 1  1  1 |   =   0
              | 1  2  3 |

                                                                         | 2  3  4 |
   Thus, P4 lies in π. The significant fact here is that [P4, P1, P2]' = | 1  1  1 |  is of rank 2.
                                                                         | 1  2  3 |

   We have verified: Any three points of ordinary space lie in a plane through the origin provided the matrix
   of their coordinates is of rank 2.

   Show that P3 does not lie in π.

SUPPLEMENTARY PROBLEMS
5. Prove: If m vectors X1, X2, ..., Xm are linearly independent while the set obtained by adding another vector
   Xm+1 is linearly dependent, then Xm+1 can be expressed as a linear combination of X1, X2, ..., Xm.

6. Show that the representation of Xm+1 in Problem 5 is unique.

                       m          m                         m
   Hint: Suppose Xm+1 = Σ ki·Xi = Σ si·Xi  and consider     Σ (ki - si)·Xi = 0.
                      i=1        i=1                       i=1

7. Prove: A necessary and sufficient condition that the vectors (9.2) be linearly dependent is that the matrix
   (9.5) of the vectors be of rank r < m.
   Hint: Suppose the m vectors are linearly dependent so that (9.4) holds. In (9.5) subtract from the ith row the
   product of the first row by s1, the product of the second row by s2, ..., as indicated in (9.4). For the
   converse, see Problem 2.

8. Examine each of the following sets of vectors over the real field for linear dependence or independence. In
   each dependent set select a maximum linearly independent subset and express each of the remaining vectors
   as a linear combination of these.

        X1 = [1, 2, 1]          X1 = [2, -1, 3, 2]          X1 = [2, 1, 3, 2, -1]
   (a)  X2 = [2, 1, 4]     (b)  X2 = [1, 3, 4, 2]      (c)  X2 = [4, 2, 1, -2, 3]
        X3 = [4, 5, 6]          X3 = [3, -5, 2, 2]          X3 = [0, 0, 5, 6, -5]
        X4 = [1, 8, -3]                                     X4 = [6, 3, -1, -6, 7]

   Ans. (a) X3 = 2X1 + X2,  X4 = 5X1 - 2X2     (b) X3 = 2X1 - X2     (c) X3 = 2X1 - X2,  X4 = 2X2 - X1

9. Why can there be no more than n linearly independent n-vectors over F?

10. Show that if in (9.2) either Xi = Xj or Xi = aXj, a in F, the set of vectors is linearly dependent. Is the
    converse true?

11. Show that any n-vector X and the n-zero vector 0 are linearly dependent; hence, X and 0 are considered proportional.
    Hint: Consider k1·X + k2·0 = 0 where k1 = 0 and k2 ≠ 0.

12. (a) Show that X1 = [1, 1+i, i], X2 = [i, -i, 1-i], and X3 = [1+2i, 1-i, 2-i] are linearly dependent over
        the rational field and, hence, over the complex field.
    (b) Show that X1 = [1, 1+i, i], X2 = [i, -i, 1-i], and X3 = [0, 1-2i, 2-i] are linearly independent over
        the real field but are linearly dependent over the complex field.

13. Investigate the linear dependence or independence of the linear forms:

         f1 = 3x1 -  x2 + 2x3 +  x4          f1 = 2x1 - 3x2 + 4x3 - 2x4
    (a)  f2 = 2x1 + 3x2 -  x3 + 2x4     (b)  f2 = 3x1 + 2x2 - 2x3 + 5x4
         f3 = 5x1 - 9x2 + 8x3 -  x4          f3 = 5x1 -  x2 + 2x3 +  x4

    Ans. (a) 3f1 - 2f2 - f3 = 0

14. Consider the linear dependence or independence of a system of m polynomials

         ai0·x^n + ai1·x^(n-1) + ... + ai,n-1·x + ain          (i = 1, 2, ..., m)

    and show that the system is linearly dependent or independent according as the row vectors of the coeffi-
    cient matrix

               | a10  a11  ...  a1n |
         A  =  | a20  a21  ...  a2n |
               | .................. |
               | am0  am1  ...  amn |

    are linearly dependent or independent, that is, according as the rank r of A is less than m or equal to m.

15. If the polynomials of either system are linearly dependent, find a linear combination which is identically
    zero.

         P1 = x^3 - 3x^2 + 4x - 2          P1 = 2x^4 + 3x^3 - 4x^2 + 5x + 3
    (a)  P2 = 2x^2 - 6x + 4           (b)  P2 = x^3 + 2x^2 - 3x + 1
         P3 = x^3 - 2x^2 + x               P3 = x^4 + 2x^3 - x^2 + x + 2

    Ans. (a) 2P1 + P2 - 2P3 = 0     (b) P1 + P2 - 2P3 = 0

16. Consider the linear dependence or independence of a set of 2x2 matrices over F

         M1 = | a  b |        M2 = | e  f |        M3 = | p  q |
              | c  d |             | g  h |             | s  t |

    Show that k1M1 + k2M2 + k3M3 = 0, with not all the k's (in F) equal to zero, requires that the rank of the

               | a  b  c  d |
    matrix     | e  f  g  h |     be < 3. (Note that the matrices M1, M2, M3 are considered as defining vectors
               | p  q  s  t |

    of four components.)

    Extend the result to a set of m x n matrices.



                  | 1  2  3 |        | 2  1  3 |              | 0  3  3 |
17. Show that     | 3  2  4 |,       | 3  4  2 |,    and      | 3  0  6 |     are linearly dependent.
                  | 1  3  2 |        | 2  2  1 |              | 0  4  3 |

18. Show that any 2x2 matrix can be written as a linear combination of the matrices

         | 1  0 |      | 0  1 |      | 0  0 |      | 0  0 |
         | 0  0 |,     | 0  0 |,     | 1  0 |,     | 0  1 |

    Generalize to n x n matrices.

19. If the n-vectors X1, X2, ..., Xn are linearly independent, show that the vectors Y1, Y2, ..., Yn, where
         n
    Yi = Σ aij·Xj, are linearly independent if and only if A = [aij] is non-singular.
        j=1

20. If A is of rank r, show how to construct a non-singular matrix B such that AB = [C1, C2, ..., Cr, 0],
    where C1, C2, ..., Cr are a given set of linearly independent columns of A.

21. Given the points P1(1, 1, 1, 1), P2(1, 2, 3, 4), P3(2, 2, 2, 2), and P4(3, 4, 5, 6) of four-dimensional space,

    (a) Show that the rank of [P1, P3]' is 1 so that the points lie on a line through the origin.
    (b) Show that [P1, P2, P3, P4]' is of rank 2 so that these points lie in a plane through the origin.
    (c) Does P5(2, 3, 2, 5) lie in the plane of (b)?

22. Show that every n-square matrix A over F satisfies an equation of the form

         A^p + k1·A^(p-1) + k2·A^(p-2) + ... + k(p-1)·A + kp·I  =  0

    where the ki are scalars of F.

    Hint: Consider I, A, A^2, ..., A^(n^2) in the light of Problem 16.

23. Find the equation of minimum degree (see Problem 22) which is satisfied by

        (a)  A = | 1  1 |       (b)  A = |  1  1 |       (c)  A = | 1  1 |
                 | 1  1 |                | -1  1 |                | 0  1 |

    Ans. (a) A^2 - 2A = 0,     (b) A^2 - 2A + 2I = 0,     (c) A^2 - 2A + I = 0

24. In Problems 23(b) and (c), multiply each equation by A^(-1) to obtain (b) A^(-1) = I - A/2, (c) A^(-1) = 2I - A, and
    thus verify: If A over F is non-singular, then A^(-1) can be expressed as a polynomial in A whose coeffi-
    cients are scalars of F.

chapter 10

Linear Equations

DEFINITIONS. Consider a system of m linear equations in the n unknowns x1, x2, ..., xn

          a11x1 + a12x2 + ... + a1nxn = h1
          a21x1 + a22x2 + ... + a2nxn = h2
(10.1)    ................................
          am1x1 + am2x2 + ... + amnxn = hm

in which the coefficients (a's) and the constant terms (h's) are in F.

By a solution in F of the system is meant any set of values of x1, x2, ..., xn in F which sat-
isfy simultaneously the m equations. When the system has a solution, it is said to be consistent;
otherwise, the system is said to be inconsistent. A consistent system has either just one solu-
tion or infinitely many solutions.

Two systems of linear equations over F in the same number of unknowns are called equiv-
alent if every solution of either system is a solution of the other. A system of equations equiv-
alent to (10.1) may be obtained from it by applying one or more of the transformations: (a) in-
terchanging any two of the equations, (b) multiplying any equation by any non-zero constant in
F, or (c) adding to any equation a constant multiple of another equation. Solving a system of
consistent equations consists in replacing the given system by an equivalent system of pre-
scribed form.

SOLUTION USING A MATRIX. In matrix notation the system of linear equations (10.1) may be written
as
          | a11  a12  ...  a1n | | x1 |     | h1 |
(10.2)    | a21  a22  ...  a2n | | x2 |  =  | h2 |
          | .................. | | .. |     | .. |
          | am1  am2  ...  amn | | xn |     | hm |

or, more compactly, as

(10.3)         AX  =  H

where A = [aij] is the coefficient matrix, X = [x1, x2, ..., xn]', and H = [h1, h2, ..., hm]'.

Consider now for the system (10.1) the augmented matrix

               | a11  a12  ...  a1n  h1 |
(10.4)         | a21  a22  ...  a2n  h2 |     =     [A H]
               | ...................... |
               | am1  am2  ...  amn  hm |

(Each row of (10.4) is simply an abbreviation of a corresponding equation of (10.1); to read the
equation from the row, we simply supply the unknowns and the + and = signs properly.)


To solve the system (10.1) by means of (10.4), we proceed by elementary row transformations
to replace A by the row equivalent canonical matrix of Chapter 5. In doing this, we operate on
the entire rows of (10.4).

                              3x1 +  x2 - 2x3 = 1
Example 1. Solve the system   2x1 +  x2 +  x3 = 3
                              2x1 + 4x2 + 2x3 = 4

           The augmented matrix [A H] is reduced by elementary row transformations to the row
           equivalent canonical form

                | 1  0  0  1 |
                | 0  1  0  0 |
                | 0  0  1  1 |

           Thus, the solution is the equivalent system of equations: x1 = 1, x2 = 0, x3 = 1. Ex-
           pressed in vector form, we have X = [1, 0, 1]'.
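The reduction of [A H] can be sketched in code. The Python function below (illustrative only, not from the text) performs Gauss-Jordan reduction of an augmented matrix with exact rational arithmetic; it assumes a square system with non-singular A, and is checked here against the system of Example 4 below, whose solution is x1 = 35/18, x2 = 29/18, x3 = 5/18.

```python
from fractions import Fraction

def row_reduce_solve(aug):
    """Gauss-Jordan reduction of an augmented matrix [A H] over the
    rationals; returns the unique solution when A reduces to I."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)                      # assumes a square, non-singular A
    for c in range(n):
        pivot = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[pivot] = m[pivot], m[c]
        m[c] = [a / m[c][c] for a in m[c]]      # make the pivot 1
        for i in range(n):
            if i != c:                          # clear the rest of the column
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return [row[-1] for row in m]

# The system 2x1 + 3x2 + x3 = 9, x1 + 2x2 + 3x3 = 6, 3x1 + x2 + 2x3 = 8:
print(row_reduce_solve([[2, 3, 1, 9], [1, 2, 3, 6], [3, 1, 2, 8]]))
# [Fraction(35, 18), Fraction(29, 18), Fraction(5, 18)]
```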

FUNDAMENTAL THEOREMS. When the coefficient matrix A of the system (10.1) is reduced to the
row equivalent canonical form C, suppose [A H] is reduced to [C K], where K = [k1, k2, ..., km]'.

If A is of rank r, the first r rows of C contain one or more non-zero elements. The first non-zero
element in each of these rows is 1 and the column in which that 1 stands has zeroes elsewhere.
The remaining rows consist of zeroes. From the first r rows of [C K], we may obtain each of
the variables xj1, xj2, ..., xjr (the notation is that of Chapter 5) in terms of the remaining varia-
bles xj(r+1), xj(r+2), ..., xjn and one of the k1, k2, ..., kr.

If k(r+1) = k(r+2) = ... = km = 0, then (10.1) is consistent and an arbitrarily selected set of
values for xj(r+1), xj(r+2), ..., xjn together with the resulting values of xj1, xj2, ..., xjr constitute
a solution. On the other hand, if at least one of k(r+1), k(r+2), ..., km is different from zero, say
ks ≠ 0, the corresponding equation reads

          0·x1 + 0·x2 + ... + 0·xn  =  ks  ≠  0

and (10.1) is inconsistent.

In the consistent case, A and [A H] have the same rank; in the inconsistent case, they
have different ranks. Thus

     I. A system AX = H of m linear equations in n unknowns is consistent if and only if
the coefficient matrix and the augmented matrix of the system have the same rank.

     II. In a consistent system (10.1) of rank r < n, n-r of the unknowns may be chosen
so that the coefficient matrix of the remaining r unknowns is of rank r. When these n-r
unknowns are assigned any values whatever, the other r unknowns are uniquely determined.

                               x1 + 2x2 - 3x3 - 4x4 =  6
Example 2. For the system      x1 + 3x2 +  x3 - 2x4 =  4
                              2x1 + 5x2 - 2x3 - 5x4 = 10

              | 1  2 -3 -4   6 |     | 1  2 -3 -4   6 |     | 1  0 -11 -8  10 |
   [A H]  =   | 1  3  1 -2   4 |  ~  | 0  1  4  2  -2 |  ~  | 0  1   4  2  -2 |
              | 2  5 -2 -5  10 |     | 0  1  4  3  -2 |     | 0  0   0  1   0 |

                 | 1  0 -11  0  10 |
              ~  | 0  1   4  0  -2 |     =     [C K]
                 | 0  0   0  1   0 |

           Since A and [A H] are each of rank r = 3, the given system is consistent; moreover,
           the general solution contains n-r = 4-3 = 1 arbitrary constant. From the last row
           of [C K], x4 = 0. Let x3 = a, where a is arbitrary; then x1 = 10 + 11a and x2 = -2 - 4a.
           The solution of the system is given by x1 = 10 + 11a, x2 = -2 - 4a, x3 = a, x4 = 0 or
           X = [10 + 11a, -2 - 4a, a, 0]'.

If a consistent system of equations over F has a unique solution (Example 1), that solution
is over F. If the system has infinitely many solutions (Example 2), it has infinitely many solu-
tions over F when the arbitrary values to be assigned are over F. However, the system has
infinitely many solutions over any field F' of which F is a subfield. For example, the system
of Example 2 has infinitely many solutions over F (the rational field) if a is restricted to rational
numbers; it has infinitely many real solutions if a is restricted to real numbers; it has infinitely
many complex solutions if a is any complex number whatever.
                                                                 See Problems 1-2.

NON-HOMOGENEOUS EQUATIONS. A linear equation

          a1x1 + a2x2 + ... + anxn  =  h

is called non-homogeneous if h ≠ 0. A system AX = H is called a system of non-homogeneous
equations provided H is not a zero vector. The systems of Examples 1 and 2 are non-homogeneous
systems.

In Problem 3 we prove
     III. A system of n non-homogeneous equations in n unknowns has a unique solution
provided the rank of its coefficient matrix A is n, that is, provided |A| ≠ 0.

In addition to the method above, two additional procedures for solving a consistent system
of n non-homogeneous equations in as many unknowns AX = H are given below. The first of
these is the familiar solution by determinants.

(a) Solution by Cramer's Rule. Denote by Ai, (i = 1, 2, ..., n), the matrix obtained from A by re-
    placing its ith column with the column of constants (the h's). Then, if |A| ≠ 0, the system
    AX = H has the unique solution

    (10.5)         x1 = |A1| / |A|,     x2 = |A2| / |A|,     ...,     xn = |An| / |A|

                                                                 See Problem 4.
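Rule (10.5) translates directly into code. The sketch below (illustrative only) uses a cofactor-expansion determinant, which is adequate for the small systems of this chapter; the 4x4 system used as a check is the one solved in Example 3.

```python
from fractions import Fraction

def det(a):
    """Determinant by cofactor expansion along the first row
    (fine for small matrices only)."""
    if len(a) == 1:
        return a[0][0]
    total = 0
    for j, x in enumerate(a[0]):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * x * det(minor)
    return total

def cramer(A, H):
    """Solve AX = H by (10.5): xi = |Ai| / |A|, where Ai is A with its
    ith column replaced by H. Requires |A| != 0."""
    d = Fraction(det(A))
    assert d != 0
    xs = []
    for i in range(len(A)):
        Ai = [row[:i] + [h] + row[i + 1:] for row, h in zip(A, H)]
        xs.append(Fraction(det(Ai)) / d)
    return xs

A = [[2, 1, 5, 1], [1, 1, -3, -4], [3, 6, -2, 1], [2, 2, 2, -3]]
print(det(A))                    # -120
print(cramer(A, [5, -1, 8, 2]))  # [2, 1/5, 0, 4/5] as Fractions
```

For larger systems, elimination is far cheaper than evaluating n+1 determinants; Cramer's Rule is mainly of theoretical interest.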

                              2x1 +  x2 + 5x3 +  x4 =  5
                               x1 +  x2 - 3x3 - 4x4 = -1
Example 3. Solve the system                                 using Cramer's Rule.
                              3x1 + 6x2 - 2x3 +  x4 =  8
                              2x1 + 2x2 + 2x3 - 3x4 =  2

           We find

                    | 2  1  5  1 |                     | 5  1  5  1 |
                    | 1  1 -3 -4 |                     |-1  1 -3 -4 |
            |A|  =  | 3  6 -2  1 |  =  -120,   |A1| =  | 8  6 -2  1 |  =  -240,
                    | 2  2  2 -3 |                     | 2  2  2 -3 |

                    | 2  5  5  1 |                     | 2  1  5  1 |
                    | 1 -1 -3 -4 |                     | 1  1 -1 -4 |
           |A2|  =  | 3  8 -2  1 |  =  -24,    |A3| =  | 3  6  8  1 |  =  0,
                    | 2  2  2 -3 |                     | 2  2  2 -3 |

                    | 2  1  5  5 |
                    | 1  1 -3 -1 |
           and  |A4|  =  | 3  6 -2  8 |  =  -96
                    | 2  2  2  2 |

           Then  x1 = |A1|/|A| = -240/-120 = 2,  x2 = |A2|/|A| = -24/-120 = 1/5,  x3 = 0,  and
           x4 = |A4|/|A| = -96/-120 = 4/5.
(b) Solution using A^(-1). If |A| ≠ 0, A^(-1) exists and the solution of the system AX = H is given
    by
    (10.6)         A^(-1)·AX = A^(-1)H     or     X = A^(-1)H

                                                   2x1 + 3x2 +  x3 = 9               | 2  3  1 |
    Example 4. The coefficient matrix of the system  x1 + 2x2 + 3x3 = 6   is   A  =  | 1  2  3 |
                                                   3x1 +  x2 + 2x3 = 8               | 3  1  2 |

                                                        |  1 -5  7 |
               From Problem 2(b), Chapter 7,  A^(-1) = (1/18)|  7  1 -5 | .  Then
                                                        | -5  7  1 |

                                       |  1 -5  7 | | 9 |            | 35 |
                    X  =  A^(-1)H  =  (1/18)|  7  1 -5 | | 6 |  =  (1/18)| 29 |
                                       | -5  7  1 | | 8 |            |  5 |

               The solution of the system is x1 = 35/18, x2 = 29/18, x3 = 5/18.

                                                                 See Problem 5.
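Formula (10.6) can be checked by computing A^(-1) with Gauss-Jordan reduction of [A I]. A Python sketch (illustrative only), run against the system of Example 4:

```python
from fractions import Fraction

def inverse(A):
    """Invert A by Gauss-Jordan reduction of the block matrix [A I]."""
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(i for i in range(c, n) if m[i][c] != 0)
        m[c], m[p] = m[p], m[c]
        m[c] = [a / m[c][c] for a in m[c]]
        for i in range(n):
            if i != c:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return [row[n:] for row in m]     # the right-hand block is A^(-1)

def solve_by_inverse(A, H):
    # (10.6): X = A^(-1) H
    return [sum(a * h for a, h in zip(row, H)) for row in inverse(A)]

A = [[2, 3, 1], [1, 2, 3], [3, 1, 2]]
print(solve_by_inverse(A, [9, 6, 8]))   # [35/18, 29/18, 5/18] as Fractions
```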

HOMOGENEOUS EQUATIONS. A linear equation

(10.7)         a1x1 + a2x2 + ... + anxn  =  0

is called homogeneous. A system of linear equations

(10.8)         AX  =  0

in n unknowns is called a system of homogeneous equations. For the system (10.8) the rank
of the coefficient matrix A and the augmented matrix [A 0] are the same; thus, the system is
always consistent. Note that X = 0, that is, x1 = x2 = ... = xn = 0, is always a solution; it is
called the trivial solution.

If the rank of A is n, then n of the equations of (10.8) can be solved by Cramer's rule for the
unique solution x1 = x2 = ... = xn = 0 and the system has only the trivial solution. If the rank of
A is r < n, Theorem II assures the existence of non-trivial solutions. Thus,
     IV. A necessary and sufficient condition for (10.8) to have a solution other than the
trivial solution is that the rank of A be r < n.
     V. A necessary and sufficient condition that a system of n homogeneous equations in
n unknowns has a solution other than the trivial solution is |A| = 0.

     VI. If the rank of (10.8) is r < n, the system has exactly n-r linearly independent solu-
tions such that every solution is a linear combination of these n-r and every such linear
combination is a solution.                    See Problem 6.
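Theorem VI can be illustrated in code: reduce A to canonical form, then build one solution per free (non-pivot) column. This is an illustrative sketch, not the book's procedure; the homogeneous system of Problem 6 serves as a check.

```python
from fractions import Fraction

def null_space_basis(A):
    """Return n - r independent solutions of AX = 0, one per free
    column of the reduced matrix (Theorem VI)."""
    m = [[Fraction(x) for x in row] for row in A]
    rows, n = len(m), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        m[r] = [a / m[r][c] for a in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in range(n):
        if free in pivots:
            continue
        v = [Fraction(0)] * n
        v[free] = Fraction(1)                # set the free variable to 1
        for i, c in enumerate(pivots):
            v[c] = -m[i][free]               # back out the pivot variables
        basis.append(v)
    return basis

A = [[1, 1, 1, 1], [1, 3, 2, 4], [2, 0, 1, -1]]
for v in null_space_basis(A):
    print(v)    # two independent solutions, as n - r = 4 - 2 = 2
```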

LET X1 AND X2 be two distinct solutions of AX = H. Then AX1 = H, AX2 = H, and A(X1 - X2) = AY = 0.
Thus, Y = X1 - X2 is a non-trivial solution of AX = 0.

Conversely, if Z is any non-trivial solution of AX = 0 and if Xp is any solution of AX = H,
then X = Xp + Z is also a solution of AX = H. As Z ranges over the complete solution of AX = 0,
Xp + Z ranges over the complete solution of AX = H. Thus,

     VII. If the system of non-homogeneous equations AX = H is consistent, a complete
solution of the system is given by the complete solution of AX = 0 plus any particular so-
lution of AX = H.

                              x1 - 2x2 + 3x3 = 4
Example 5. In the system                           set x1 = 0; then x3 = 2 and x2 = 1. A particular
                              x1 +  x2 + 2x3 = 5

                                                                  x1 - 2x2 + 3x3 = 0
           solution is X1 = [0, 1, 2]'. The complete solution of
                                                                  x1 +  x2 + 2x3 = 0

           is X = [-7a, a, 3a]', where a is arbitrary. Then the complete solution of the given system is

                X  =  [-7a, a, 3a]' + [0, 1, 2]'  =  [-7a, 1 + a, 2 + 3a]'

           Note. The above procedure may be extended to larger systems. However, it is first
           necessary to show that the system is consistent. This is a long step in solving the
           system by the augmented matrix method given earlier.

SOLVED PROBLEMS

             x1 +  x2 - 2x3 +  x4 + 3x5 = 1
1. Solve    2x1 -  x2 + 2x3 + 2x4 + 6x5 = 2
            3x1 + 2x2 - 4x3 - 3x4 - 9x5 = 3

   Solution:

   The augmented matrix

             | 1  1 -2  1  3  1 |     | 1  0  0  0  0  1 |
   [A H]  =  | 2 -1  2  2  6  2 |  ~  | 0  1 -2  0  0  0 |
             | 3  2 -4 -3 -9  3 |     | 0  0  0  1  3  0 |

   Then x1 = 1, x2 - 2x3 = 0, and x4 + 3x5 = 0. Take x3 = a and x5 = b, where a and b are arbitrary; the complete
   solution may be given as x1 = 1, x2 = 2a, x3 = a, x4 = -3b, x5 = b or as X = [1, 2a, a, -3b, b]'.

             x1 +  x2 + 2x3 +  x4 = 5
2. Solve    2x1 + 3x2 -  x3 - 2x4 = 2
            4x1 + 5x2 + 3x3       = 7

   Solution:

             | 1  1  2  1  5 |     | 1  1  2  1   5 |     | 1  0  7  5  13 |
   [A H]  =  | 2  3 -1 -2  2 |  ~  | 0  1 -5 -4  -8 |  ~  | 0  1 -5 -4  -8 |
             | 4  5  3  0  7 |     | 0  1 -5 -4 -13 |     | 0  0  0  0  -5 |

   The last row reads 0·x1 + 0·x2 + 0·x3 + 0·x4 = -5; thus the given system is inconsistent and has no solution.

3. Prove: A system AX = H of n non-homogeneous equations in n unknowns has a unique solution
   provided |A| ≠ 0.

   If A is non-singular, it is equivalent to I. When A is reduced by row transformations only to I, suppose
   [A H] is reduced to [I K]. Then X = K is a solution of the system.

   Suppose next that X = L is a second solution of the system; then AK = H and AL = H, so that AK = AL.
   Since A is non-singular, K = L, and the solution is unique.

4. Derive Cramer's Rule.

   Let the system of non-homogeneous equations be

             a11x1 + a12x2 + ... + a1nxn = h1
   (i)       a21x1 + a22x2 + ... + a2nxn = h2
             ................................
             an1x1 + an2x2 + ... + annxn = hn

   Denote by A the coefficient matrix [aij] and let αij be the cofactor of aij in A. Multiply the first equa-
   tion of (i) by α11, the second equation by α21, ..., the last equation by αn1, and add. We have

         n                  n                          n                    n
       ( Σ ai1·αi1 )x1 + ( Σ ai2·αi1 )x2 + ... + ( Σ ain·αi1 )xn    =    Σ hi·αi1
        i=1                i=1                        i=1                  i=1

   which, by Theorems X and XI, and Problem 10, Chapter 3, reduces to

                       | h1  a12  ...  a1n |
        |A|·x1    =    | h2  a22  ...  a2n |    =    |A1|          so that  x1 = |A1| / |A|
                       | ................. |
                       | hn  an2  ...  ann |

   Next, multiply the equations of (i) respectively by α12, α22, ..., αn2 and sum to obtain

                       | a11  h1  a13  ...  a1n |
        |A|·x2    =    | a21  h2  a23  ...  a2n |    =    |A2|     so that  x2 = |A2| / |A|
                       | ...................... |
                       | an1  hn  an3  ...  ann |

   Continuing in this manner, we finally multiply the equations of (i) respectively by α1n, α2n, ..., αnn
   and sum to obtain

                       | a11  ...  a1,n-1  h1 |
        |A|·xn    =    | a21  ...  a2,n-1  h2 |    =    |An|       so that  xn = |An| / |A|
                       | ..................... |
                       | an1  ...  an,n-1  hn |
                        2x1 +  x2 + 5x3 +  x4 =  5
                         x1 +  x2 - 3x3 - 4x4 = -1
5. Solve the system                                  using the inverse of the coefficient matrix.
                        3x1 + 6x2 - 2x3 +  x4 =  8
                        2x1 + 2x2 + 2x3 - 3x4 =  2

   Solution:
                        | 2  1  5  1 |                       | 120  120    0  -120 |
                        | 1  1 -3 -4 |                  1    | -69  -73   17    80 |
   The inverse of A  =  | 3  6 -2  1 |   is  A^(-1) =  ---   | -15  -35   -5    40 | .   Then
                        | 2  2  2 -3 |                 120   |  24    8    8   -40 |

                   | 120  120    0  -120 | |  5 |          | 240 |     |  2  |
             1     | -69  -73   17    80 | | -1 |     1    |  24 |     | 1/5 |
   X    =   ---    | -15  -35   -5    40 | |  8 |  = ---   |   0 |  =  |  0  |
            120    |  24    8    8   -40 | |  2 |    120   |  96 |     | 4/5 |

   (See Example 3.)

             x1 +  x2 +  x3 +  x4 = 0
6. Solve     x1 + 3x2 + 2x3 + 4x4 = 0
            2x1       +  x3 -  x4 = 0

   Solution:

             | 1  1  1  1  0 |     | 1  1  1  1  0 |     | 1  0  1/2  -1/2  0 |
   [A 0]  =  | 1  3  2  4  0 |  ~  | 0  2  1  3  0 |  ~  | 0  1  1/2   3/2  0 |
             | 2  0  1 -1  0 |     | 0 -2 -1 -3  0 |     | 0  0   0     0   0 |

   The complete solution of the system is x1 = -a/2 + b/2, x2 = -a/2 - 3b/2, x3 = a, x4 = b. Since the rank of
   A is 2, we may obtain exactly n-r = 4-2 = 2 linearly independent solutions. One such pair, obtained by
   first taking a = 1, b = 1 and then a = 3, b = 1, is

        x1 = 0, x2 = -2, x3 = 1, x4 = 1     and     x1 = -1, x2 = -3, x3 = 3, x4 = 1

   What can be said of the pair of solutions obtained by taking a = b = 1 and a = b = 3?

7. Prove: In a square matrix A of order n and rank n-1, the cofactors of the elements of any two rows
   (columns) are proportional.

   Since |A| = 0, the cofactors of the elements of any row (column) of A are a solution X1 of the system
   AX = 0 (A'X = 0).

   Now the system has but one linearly independent solution since A (A') is of rank n-1. Hence, for the
   cofactors of another row (column) of A (another solution X2 of the system), we have X2 = kX1.

8. Prove: If f1, f2, ..., fm are m < n linearly independent linear forms over F in n variables, then the
   linear forms

              m
        gj =  Σ sij·fi,          (j = 1, 2, ..., p)
             i=1

   are linearly dependent if and only if the m x p matrix [sij] is of rank r < p.

   The g's are linearly dependent if and only if there exist scalars a1, a2, ..., ap in F, not all zero, such
   that
                                          m               m                      m
        a1g1 + a2g2 + ... + apgp  =  a1   Σ si1fi  + a2   Σ si2fi  + ... + ap    Σ sipfi
                                         i=1             i=1                    i=1

                                      m    p
                                  =   Σ  ( Σ aj·sij ) fi     =     0
                                     i=1  j=1

   Since the f's are linearly independent, this requires

         p
         Σ sij·aj  =  0          (i = 1, 2, ..., m)
        j=1

   Now, by Theorem IV, the system of m homogeneous equations in p unknowns  Σ sij·xj = 0  has a non-
   trivial solution X = [a1, a2, ..., ap]' if and only if [sij] is of rank r < p.

9. Suppose A = [aij] of order n is singular. Show that there always exists a matrix B = [bij] ≠ 0 of
   order n such that AB = 0.

   Let B1, B2, ..., Bn be the column vectors of B. Then, by hypothesis, AB1 = AB2 = ... = ABn = 0. Con-
   sider any one of these, say ABt = 0, or

        a11b1t + a12b2t + ... + a1nbnt = 0
        ..................................
        an1b1t + an2b2t + ... + annbnt = 0

   Since the coefficient matrix A is singular, the system in the unknowns b1t, b2t, ..., bnt has solutions other
   than the trivial solution. Similarly, AB1 = 0, AB2 = 0, ... have non-trivial solutions, each being a column of B.

SUPPLEMENTARY PROBLEMS
10. Find all solutions of:

                                                         x1 +  x2 +  x3 = 4
    (a)  x1 - 2x2 + x3 - 3x4 = 1                  (c)   2x1 + 5x2 - 2x3 = 3
                                                         x1 + 7x2 - 7x3 = 5

          x1 +  x2 +  x3 = 4                            x1 + x2 + x3 + x4 =  0
    (b)                                                 x1 + x2 + x3 - x4 =  4
         2x1 + 5x2 - 2x3 = 3                      (d)   x1 + x2 - x3 + x4 = -4
                                                        x1 - x2 + x3 + x4 =  2

    Ans. (a) x1 = 1 + 2a - b + 3c, x2 = a, x3 = b, x4 = c

         (b) x1 = -7a/3 + 17/3, x2 = 4a/3 - 5/3, x3 = a

         (d) x1 = -x2 = 1, x3 = -x4 = 2

11. Find all non-trivial solutions of:

          x1 - 2x2 + 3x3 = 0                x1 + 2x2 + 3x3 = 0
    (a)                               (c)  2x1 +  x2 + 3x3 = 0
         2x1 + 5x2 + 6x3 = 0               3x1 + 2x2 +  x3 = 0

         2x1 -  x2 + 3x3 = 0               4x1 + 2x2 +   x3 +  x4 = 0
    (b)  3x1 + 2x2 +  x3 = 0          (d)  2x1 + 7x2 -  4x3 - 5x4 = 0
          x1 - 4x2 + 5x3 = 0               2x1 - 11x2 + 7x3 - 5x4 = 0

    Ans. (a) x1 = -3a, x2 = 0, x3 = a          (b) -x1 = x2 = x3 = a


12. Reconcile the solution of Problem 10(d) with another solution given in terms of arbitrary constants
    c and d.

                   | 1  1  2 |
13. Given   A  =   | 2  2  4 | ,   find a matrix B of rank 2 such that AB = 0.
                   | 3  3  6 |
    Hint. Select the columns of B from the solutions of AX = 0.

14. Show that a square matrix is singular if and only if its rows (columns) are linearly dependent.

15. Let AX = 0 be a system of n homogeneous equations in n unknowns and suppose A of rank r = n-1. Show
    that any non-zero vector of cofactors [αi1, αi2, ..., αin]' of a row of A is a solution of AX = 0.

16. Use Problem 15 to solve:

          x1 - 2x2 + 3x3 = 0          2x1 + 3x2 -  x3 = 0          2x1 + 3x2 + 4x3 = 0
    (a)                          (b)                          (c)
         2x1 + 5x2 + 6x3 = 0          3x1 - 4x2 + 2x3 = 0          2x1 -  x2 + 6x3 = 0

    Hint. To the equations of (a) adjoin 0·x1 + 0·x2 + 0·x3 = 0 and find the cofactors of the elements of the

                     | 1 -2  3 |
    third row of     | 2  5  6 |
                     | 0  0  0 |

    Ans. (a) x1 = -27a, x2 = 0, x3 = 9a or [3a, 0, -a]',   (b) [2a, -7a, -17a]',   (c) [11a, -2a, -4a]'

17. Let the coefficient and the augmented matrix of the system of 3 non-homogeneous equations in 5 unknowns
    AX = H be of rank 2 and assume the canonical form of the augmented matrix to be

         | 1  0  b13  b14  b15  c1 |
         | 0  1  b23  b24  b25  c2 |
         | 0  0   0    0    0   0  |

    with not both of c1, c2 equal to 0. First choose x3 = x4 = x5 = 0 and obtain X1 = [c1, c2, 0, 0, 0]' as a solu-
    tion of AX = H. Then choose x3 = 1, x4 = x5 = 0; also x3 = x5 = 0, x4 = 1; and x3 = x4 = 0, x5 = 1 to ob-
    tain other solutions X2, X3, and X4. Show that these 5-2+1 = 4 solutions are linearly independent.

18. Consider the linear combination Y = s1X1 + s2X2 + s3X3 + s4X4 of the solutions of Problem 17. Show
    that Y is a solution of AX = H if and only if (i) s1 + s2 + s3 + s4 = 1. Thus, with s1, s2, s3, s4 arbitrary except
    for (i), Y is a complete solution of AX = H.

19. Prove: Theorem VI. Hint. Follow Problem 17 with c1 = c2 = 0.

20. Prove: If A is an m x p matrix of rank r1 and B is a p x n matrix of rank r2 such that AB = 0, then
    r1 + r2 ≤ p.
    Hint. Use Theorem VI.

21. Using the 4x5 matrix A = [aij] of rank 2, verify: In an m x n matrix A of rank r, the r-square determi-
    nants formed from the columns of a submatrix consisting of any r rows of A are proportional to the r-square
    determinants formed from any other submatrix consisting of r rows of A.
    Hint. Suppose the first two rows are linearly independent so that a3j = p31·a1j + p32·a2j and
    a4j = p41·a1j + p42·a2j, (j = 1, 2, ..., 5). Evaluate the 2-square determinants

         | a1q  a1s |              | a3q  a3s |
         | a2q  a2s |     and      | a4q  a4s |

22. Write a proof of the theorem of Problem 21.

23. If A = [aij] is an n-square matrix of rank n-1, show that the cofactors of the elements of A (see
    Problem 7) satisfy

    (a) αhi·αjk = αhk·αji,          (b) αih·αkj = αij·αkh

    where h, i, j, k = 1, 2, ..., n.

                   | 1  1  1  1  4 |                           | 1  0  0  0  0 |
                   | 1  2  3 -4  2 |                           | 0  1  0  0  0 |
24. Show that  B = | 2  1  1  2  6 |    is row equivalent to   | 0  0  1  0  0 | .  From B = [A H] infer that the
                   | 3  2 -1 -1  3 |                           | 0  0  0  1  0 |
                   | 1  2  2 -2  4 |                           | 0  0  0  0  1 |
                   | 2  3 -3  1  1 |                           | 0  0  0  0  0 |

    system of 6 linear equations in 4 unknowns has 5 linearly independent equations. Show that a system of
    m > n linear equations in n unknowns can have at most n+1 linearly independent equations. Show that when
    there are n+1, the system is inconsistent.

25. If AX =H is consistent and of rank r, for what set of r variables can one solve?

26. Generalize the results of Problems 17 and 18 to m non-homogeneous equations in n unknowns with coeffi-
    cient and augmented matrix of the same rank r to prove: If the coefficient and the augmented matrix of the
    system AX = H of m non-homogeneous equations in n unknowns have rank r and if X1, X2, ..., X(n-r+1) are
    linearly independent solutions of the system, then

         X  =  s1X1 + s2X2 + ... + s(n-r+1)·X(n-r+1)

           n-r+1
    where    Σ  si  =  1,  is a complete solution.
            i=1

27. In a four-pole electrical network, the input quantities E1 and I1 are given in terms of the output quantities
    E2 and I2 by

         E1 = aE2 + bI2                    | E1 |     | a  b | | E2 |
                          that is,         |    |  =  |      | |    |
         I1 = cE2 + dI2                    | I1 |     | c  d | | I2 |

    Show that

         E2 = (d·E1 - b·I1) / (ad - bc)     and     I2 = (-c·E1 + a·I1) / (ad - bc)

    Solve also for the remaining pairings of input and output quantities.

28. Let the system of n linear equations in n unknowns AX = H, H ≠ 0, have a unique solution. Show that the
    system AX = K has a unique solution for any n-vector K ≠ 0.
                                          | 1 -1  1 | | x1 |     | y1 |
29. Solve the set of linear forms  AX  =  | 2  1  3 | | x2 |  =  | y2 |  =  Y  for the x's as linear forms in the y's.
                                          | 1  2  3 | | x3 |     | y3 |

    Now write down the solution of A'X = Y.

30. Let A be n-square and non-singular, and let Si be the solution of AX = Ei, (i = 1, 2, ..., n), where Ei is the
    n-vector whose ith component is 1 and whose other components are 0. Identify the matrix [S1, S2, ..., Sn].

31. Let A be an m x n matrix with m < n and let Si be a solution of AX = Ei, (i = 1, 2, ..., m), where Ei is the
    m-vector whose ith component is 1 and whose other components are 0. If K = [k1, k2, ..., km]', show that

         k1S1 + k2S2 + ... + kmSm

    is a solution of AX = K.
Chapter 11

Vector Spaces

UNLESS STATED OTHERWISE, all vectors will now be column vectors. When components are dis-
played, we shall write [x1, x2, ..., xn]'. The transpose mark (') indicates that the elements are
to be written in a column.

A set of such n-vectors over F is said to be closed under addition if the sum of any two of
them is a vector of the set. Similarly, the set is said to be closed under scalar multiplication
if every scalar multiple of a vector of the set is a vector of the set.

Example 1. (a) The set of all vectors [x1, x2, x3]' of ordinary space having equal components (x1 = x2 = x3)
               is closed under both addition and scalar multiplication. For, the sum of any two of the
               vectors and k times any vector (k real) are again vectors having equal components.
           (b) The set of all vectors [x1, x2, x3]' of ordinary space is closed under addition and scalar
               multiplication.

VECTOR SPACES. Any set of n-vectors over F which is closed under both addition and scalar multi-
plication is called a vector space. Thus, if X1, X2, ..., Xm are n-vectors over F, the set of all
linear combinations

(11.1)         k1X1 + k2X2 + ... + kmXm          (ki in F)

is a vector space over F. For example, both of the sets of vectors (a) and (b) of Example 1 are
vector spaces. Clearly, every vector space (11.1) contains the zero n-vector while the zero
n-vector alone is a vector space. (The space (11.1) is also called a linear vector space.)

The totality Vn(F) of all n-vectors over F is called the n-dimensional vector space over F.

SUBSPACES. A set V of the vectors of V_n(F) is called a subspace of V_n(F) provided V is closed un-
der addition and scalar multiplication. Thus, the zero n-vector is a subspace of V_n(F); so also
is V_n(F) itself. The set (a) of Example 1 is a subspace (a line) of ordinary space. In general,
if X1, X2, ..., Xm belong to V_n(F), the space of all linear combinations (11.1) is a subspace of
V_n(F).

A vector space V is said to be spanned or generated by the n-vectors X1, X2, ..., Xm pro-
vided (a) the Xi lie in V and (b) every vector of V is a linear combination (11.1). Note that the
vectors X1, X2, ..., Xm are not restricted to be linearly independent.

Example 2. Let F be the field R of real numbers so that the 3-vectors X1 = [1,1,1]', X2 = [1,2,3]',
X3 = [1,3,2]', and X4 = [3,2,1]' lie in ordinary space S = V_3(R). Any vector [a, b, c]' of
S can be expressed as
86 VECTOR SPACES [CHAP. 11

        y1·X1 + y2·X2 + y3·X3 + y4·X4

since the resulting system of equations

              y1 +  y2 +  y3 + 3y4  =  a
(i)           y1 + 2y2 + 3y3 + 2y4  =  b
              y1 + 3y2 + 2y3 +  y4  =  c

is consistent. Thus, the vectors X1, X2, X3, X4 span S.

The vectors X1 and X2 are linearly independent. They span a subspace (the plane π) of
S which contains every vector hX1 + kX2, where h and k are real numbers.

The vector X4 spans a subspace (the line L) of S which contains every vector hX4, where
h is a real number.
See Problem 1.
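The claims of Example 2 reduce to rank computations on the matrix whose rows are the given vectors. A minimal sketch with exact rational row reduction:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a copy of `rows` over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

X1, X2, X3, X4 = [1, 1, 1], [1, 2, 3], [1, 3, 2], [3, 2, 1]

span_dim = rank([X1, X2, X3, X4])   # 3: the four vectors span all of S
plane_dim = rank([X1, X2])          # 2: the plane pi
line_dim = rank([X4])               # 1: the line L
```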

BASIS AND DIMENSION. By the dimension of a vector space V is meant the maximum number of lin-
early independent vectors in V or, what is the same thing, the minimum number of linearly in-
dependent vectors required to span V. In elementary geometry, ordinary space is considered as
a 3-space (space of dimension three) of points (a, b, c). Here we have been considering it as a
3-space of vectors [a, b, c]'. The plane π of Example 2 is of dimension 2 and the line L is of
dimension 1.

A vector space of dimension r consisting of n-vectors will be denoted by V_n^r(F). When r = n,
we shall agree to write V_n(F) for V_n^n(F).

A set of r linearly independent vectors of V_n^r(F) is called a basis of the space. Each vec-
tor of the space is then a unique linear combination of the vectors of this basis. All bases of
V_n^r(F) have exactly the same number of vectors, but any r linearly independent vectors of the
space will serve as a basis.

Example 3. The vectors X1, X2, X3 of Example 2 span S since any vector [a, b, c]' of S can be expressed
as

        y1·X1 + y2·X2 + y3·X3

The resulting system of equations

        y1 +  y2 +  y3  =  a
        y1 + 2y2 + 3y3  =  b
        y1 + 3y2 + 2y3  =  c

unlike the system (i), has a unique solution. The vectors X1, X2, X3 are a basis of S. The
vectors X1, X2, X4 are not a basis of S. (Show this.) They span the subspace π of Example 2,
whose basis is the set X1, X2.

Theorems I-V of Chapter 9 apply here, of course. In particular, Theorem IV may be re-
stated as:

I. If X1, X2, ..., Xm are a set of n-vectors over F and if r is the rank of the m x n matrix
of their components, then from the set r linearly independent vectors may be selected. These
r vectors span a V_n^r(F) in which the remaining m - r vectors lie.

See Problems 2-3.

Of considerable importance are:

II. If X1, X2, ..., Xm are m < n linearly independent n-vectors of V_n(F) and if X_{m+1},
X_{m+2}, ..., Xn are any n - m vectors of V_n(F) which together with X1, X2, ..., Xm form a linearly
independent set, then the set X1, X2, ..., Xn is a basis of V_n(F).

See Problem 4.

III. If X1, X2, ..., Xm are m < n linearly independent n-vectors over F, then the p vectors

        Yj  =  Σ_{i=1}^{m} s_ji · Xi        (j = 1, 2, ..., p)

are linearly dependent if p > m or, when p ≤ m, if [s_ji] is of rank r < p.

IV. If X1, X2, ..., Xn are linearly independent n-vectors over F, then the vectors

        Yi  =  Σ_{j=1}^{n} a_ij · Xj        (i = 1, 2, ..., n)

are linearly independent if and only if [a_ij] is non-singular.
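Theorem IV can be illustrated numerically: mix the basis X1, X2, X3 of Example 3 by a matrix [a_ij]; the resulting Y's are independent exactly when that matrix has non-zero determinant. In the sketch below the two mixing matrices are arbitrary choices, the second deliberately singular (its third row is the sum of the first two).

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def mix(A, X):
    """Form Y_i = sum_j a_ij X_j from the rows of A."""
    n = len(X)
    return [[sum(A[i][j] * X[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

X = [[1, 1, 1], [1, 2, 3], [1, 3, 2]]       # independent (Example 3)

A_ns = [[1, 0, 1], [0, 1, 1], [1, 1, 1]]    # non-singular mixing matrix
A_s  = [[1, 0, 1], [0, 1, 1], [1, 1, 2]]    # singular: row3 = row1 + row2

Y_ns = mix(A_ns, X)    # independent vectors: det != 0
Y_s  = mix(A_s, X)     # dependent vectors:   det == 0
```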

IDENTICAL SUBSPACES. If V_n^h(F) and V_n^k(F) are two subspaces of V_n(F), they are identical if and
only if each vector of the one is a vector of the other, that is, if and only if each
is a subspace of the other.

See Problem 5.

SUM AND INTERSECTION OF TWO SPACES. Let V_n^h(F) and V_n^k(F) be two vector spaces. By their
sum is meant the totality of vectors X + Y where X is in V_n^h(F) and Y is in V_n^k(F). Clearly, this
is a vector space; we call it the sum space V_n^s(F). The dimension s of the sum space of two
vector spaces does not exceed the sum of their dimensions.

By the intersection of the two vector spaces is meant the totality of vectors common to the
two spaces. Now if X is a vector common to the two spaces, so also is aX; likewise, if X and
Y are common to the two spaces, so also is aX + bY. Thus, the intersection of two spaces is a
vector space; we call it the intersection space V_n^t(F). The dimension t of the intersection space
of two vector spaces cannot exceed the smaller of the dimensions of the two spaces.

V. If two vector spaces V_n^h(F) and V_n^k(F) have V_n^s(F) as sum space and V_n^t(F) as inter-
section space, then h + k = s + t.

Example 4. Consider the subspace π1 spanned by X1 and X2 of Example 2 and the subspace π2 spanned
by X3 and X4. Since π1 and π2 are not identical (prove this) and since the four vectors span
S, the sum space of π1 and π2 is S.

Now 4X1 - X2 = X4; thus, X4 lies in both π1 and π2. The subspace (line L) spanned
by X4 is then the intersection space of π1 and π2. Note that π1 and π2 are each of dimension
2, S is of dimension 3, and L is of dimension 1. This agrees with Theorem V.

See Problems 6-8.
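Theorem V and Example 4 can be verified with the same kind of rank routine: h and k are the ranks of the two spanning sets, s is the rank of their union, and t = h + k - s. A sketch with exact rational row reduction:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a copy of `rows` over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

X1, X2, X3, X4 = [1, 1, 1], [1, 2, 3], [1, 3, 2], [3, 2, 1]

h = rank([X1, X2])                  # dim of pi_1
k = rank([X3, X4])                  # dim of pi_2
s = rank([X1, X2, X3, X4])          # dim of the sum space (all of S)
t = h + k - s                       # Theorem V: dim of the intersection

# the intersection is the line L: 4*X1 - X2 equals X4
combo = [4 * a - b for a, b in zip(X1, X2)]
```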

NULLITY OF A MATRIX. For a system of homogeneous equations AX = 0, the solution vectors X
constitute a vector space called the null space of A. The dimension of this space, denoted by
N_A, is called the nullity of A.

Restating Theorem VI, Chapter 10, we have

VI. If A has nullity N_A, then AX = 0 has N_A linearly independent solutions X1, X2, ...,
X_{N_A} such that every solution of AX = 0 is a linear combination of them and every such
linear combination is a solution.

A basis for the null space of A is any set of N_A linearly independent solutions of AX = 0.

See Problem 9.

VII. For an m x n matrix A of rank r_A and nullity N_A,

(11.2)        r_A + N_A  =  n

SYLVESTER'S LAWS OF NULLITY. If A and B are of order n and respective ranks r_A and r_B, the
rank and nullity of their product AB satisfy the inequalities

              r_AB  ≥  r_A + r_B - n

(11.3)        N_AB  ≥  N_A,    N_AB  ≥  N_B
              N_AB  ≤  N_A + N_B

See Problem 10.
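Formula (11.2) and Sylvester's inequalities are easy to check numerically. The sketch below uses the matrix A of Problem 9 below; B is an arbitrarily chosen rank-3 projection matrix.

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a copy of `rows` over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1, 3, 3], [0, 2, 2, 4], [1, 0, 2, 1], [1, 1, 3, 3]]
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]]
n = 4

rA, rB = rank(A), rank(B)
NA, NB = n - rA, n - rB      # nullities from (11.2)
AB = matmul(A, B)
rAB = rank(AB)
NAB = n - rAB
```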

BASES AND COORDINATES. The n-vectors

        E1 = [1,0,0,...,0]',    E2 = [0,1,0,...,0]',    ...,    En = [0,0,0,...,1]'

are called elementary or unit vectors over F. The elementary vector Ej, whose jth component
is 1, is called the jth elementary vector. The elementary vectors E1, E2, ..., En constitute an
important basis for V_n(F).

Every vector X = [x1, x2, ..., xn]' of V_n(F) can be expressed uniquely as the sum

        X  =  Σ_{i=1}^{n} xi·Ei  =  x1·E1 + x2·E2 + ... + xn·En

of the elementary vectors. The components x1, x2, ..., xn of X are now called the coordinates of
X relative to the E-basis. Hereafter, unless otherwise specified, we shall assume that a vector
X is given relative to this basis.
Let Z1, Z2, ..., Zn be another basis of V_n(F). Then there exist unique scalars a1, a2, ..., an
in F such that

        X  =  Σ_{i=1}^{n} ai·Zi  =  a1·Z1 + a2·Z2 + ... + an·Zn

These scalars a1, a2, ..., an are called the coordinates of X relative to the Z-basis. Writing
X_Z = [a1, a2, ..., an]', we have

(11.4)        X  =  [Z1, Z2, ..., Zn]·X_Z  =  Z·X_Z

where Z is the matrix whose columns are the basis vectors Z1, Z2, ..., Zn.

Example 5. If Z1 = [2,-1,3]', Z2 = [1,2,-1]', Z3 = [1,-1,-1]' is a basis of V_3(F) and X_Z = [1,2,3]'
is a vector of V_3(F) relative to that basis, then

        X  =  [Z1, Z2, Z3]·X_Z  =  [ 2  1  1] [1]
                                   [-1  2 -1] [2]   =   [7, 0, -2]'
                                   [ 3 -1 -1] [3]

relative to the E-basis. See Problem 11.
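Equation (11.4) is just a matrix-vector product, and Example 5 can be reproduced directly:

```python
# columns of Z are the basis vectors Z1, Z2, Z3 of Example 5
Z = [[2, 1, 1], [-1, 2, -1], [3, -1, -1]]
Xz = [1, 2, 3]                       # coordinates relative to the Z-basis

# X = Z * Xz gives the coordinates relative to the E-basis
X = [sum(Z[i][j] * Xz[j] for j in range(3)) for i in range(3)]
```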


CHAP. 11] VECTOR SPACES 89

Let W1, W2, ..., Wn be yet another basis of V_n(F). Suppose X_W = [b1, b2, ..., bn]' so that

(11.5)        X  =  [W1, W2, ..., Wn]·X_W  =  W·X_W

From (11.4) and (11.5), X = Z·X_Z = W·X_W and

(11.6)        X_W  =  W^{-1}·Z·X_Z  =  P·X_Z

where P = W^{-1}·Z. Thus,

VIII. If a vector X of V_n(F) has coordinates X_Z and X_W respectively relative to two bases
of V_n(F), then there exists a non-singular matrix P, determined solely by the two bases and
given by (11.6), such that X_W = P·X_Z.

See Problem 12.
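For the matrix P of (11.6) one can avoid computing W^{-1} explicitly: P = W^{-1}Z is equivalent to WP = Z. The sketch below confirms this for the bases of Problem 12 and checks that W·X_W and Z·X_Z give the same vector X for a sample coordinate vector (the choice X_Z = [1,2,3]' is arbitrary).

```python
from fractions import Fraction

W = [[1, 2, 1], [1, 2, 2], [2, 1, 2]]    # columns W1, W2, W3
Z = [[1, 1, 1], [1, 0, 1], [0, 1, 1]]    # columns Z1, Z2, Z3
t = Fraction(1, 3)
P = [[-t, 4 * t, t],
     [2 * t, t, t],
     [0, -1, 0]]                         # P = (1/3)[[-1,4,1],[2,1,1],[0,-3,0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

WP = [[sum(W[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]                 # W P should equal Z

Xz = [1, 2, 3]                           # arbitrary Z-coordinates
Xw = matvec(P, Xz)                       # X_W = P X_Z
same_vector = matvec(W, Xw) == matvec(Z, Xz)   # both give X in the E-basis
```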

SOLVED PROBLEMS
1. The set of all vectors X = [x1, x2, x3, x4]', where x1 + x2 + x3 + x4 = 0, is a subspace V of V_4(F)
since the sum of any two vectors of the set and any scalar multiple of a vector of the set have
components whose sum is zero, that is, are vectors of the set.

2. Since

        [1  3  1]
        [2  4  0]
        [2  4  0]
        [1  3  1]

is of rank 2, the vectors X1 = [1,2,2,1]', X2 = [3,4,4,3]', and X3 = [1,0,0,1]' are linearly
dependent and span a vector space V_4^2(F).

Now any two of these vectors are linearly independent; hence, we may take X1 and X2, X1 and X3,
or X2 and X3 as a basis of the V_4^2(F).

3. Since

        [1  4  2  4]
        [1  3  1  2]
        [1  2  0  0]
        [0 -1 -1 -2]

is of rank 2, the vectors X1 = [1,1,1,0]', X2 = [4,3,2,-1]', X3 = [2,1,0,-1]', and X4 = [4,2,0,-2]'
are linearly dependent and span a V_4^2(F).

For a basis, we may take any two of the vectors except the pair X3, X4.

4. The vectors X1, X2, X3 of Problem 2 lie in V_4(F). Find a basis.

For a basis of this space we may take X1, X2, X5 = [1,0,0,0]', and X6 = [0,1,0,0]', or X1, X2,
X5 = [1,2,3,4]', and X6 = [1,3,6,8]', since in either case the matrix [X1, X2, X5, X6] is of rank 4.

5. Let X1 = [1,2,1]', X2 = [1,2,3]', X3 = [3,6,5]', Y1 = [0,0,1]', Y2 = [1,2,5]' be vectors of V_3(F).
Show that the space spanned by X1, X2, X3 and the space spanned by Y1, Y2 are identical.

First, we note that X1 and X2 are linearly independent while X3 = 2X1 + X2. Thus, the Xi span a space
of dimension two, say 1V_3^2(F). Also, the Yi, being linearly independent, span a space of dimension two, say
2V_3^2(F).

Next, Y1 = (1/2)(X2 - X1), Y2 = 2X2 - X1, X1 = Y2 - 4Y1, X2 = Y2 - 2Y1. Thus, any vector aY1 + bY2 of
2V_3^2(F) is ((1/2)a + 2b)X2 - ((1/2)a + b)X1, a vector of 1V_3^2(F), and any vector cX1 + dX2 of 1V_3^2(F)
is a vector (c + d)Y2 - (4c + 2d)Y1 of 2V_3^2(F). Hence, the two spaces are identical.

6. (a) If X = [x1, x2, x3]' lies in the V_3^2(F) spanned by X1 = [1,-1,1]' and X2 = [3,4,-2]', then the
matrix [X, X1, X2] is of rank 2, which requires

        det [x1   1   3]
            [x2  -1   4]   =   -2x1 + 5x2 + 7x3   =   0
            [x3   1  -2]

(b) If X = [x1, x2, x3, x4]' lies in the V_4^2(F) spanned by X1 = [1,1,2,3]' and X2 = [1,0,-2,1]', then
the matrix

        [x1  1  1]
        [x2  1  0]
        [x3  2 -2]
        [x4  3  1]

is of rank 2. Since det [1 1; 1 0] = -1 ≠ 0, this requires

        det [x1  1  1]                                    det [x1  1  1]
            [x2  1  0]  =  -2x1 + 4x2 - x3  =  0    and       [x2  1  0]  =  x1 + 2x2 - x4  =  0
            [x3  2 -2]                                        [x4  3  1]

These problems verify: Every V_n^k(F) may be defined as the totality of solutions over F of a system of n - k
linearly independent homogeneous linear equations over F in n unknowns.

7. Prove: If two vector spaces V_n^h(F) and V_n^k(F) have V_n^s(F) as sum space and V_n^t(F) as intersection
space, then h + k = s + t.

Suppose t = h; then V_n^h(F) is a subspace of V_n^k(F) and their sum space is V_n^k(F) itself. Thus, s = k, t = h and
s + t = h + k. The reader will show that the same is true if t = k.

Suppose next that t < h, t < k and let X1, X2, ..., Xt span V_n^t(F). Then by Theorem II there exist vectors
Y_{t+1}, Y_{t+2}, ..., Y_h so that X1, X2, ..., Xt, Y_{t+1}, ..., Y_h span V_n^h(F) and vectors Z_{t+1}, Z_{t+2}, ..., Z_k so that
X1, X2, ..., Xt, Z_{t+1}, ..., Z_k span V_n^k(F).

Now suppose there exist scalars a's and b's such that

(i)        Σ_{i=1}^{t} ai·Xi  +  Σ_{i=t+1}^{h} ai·Yi  +  Σ_{i=t+1}^{k} bi·Zi   =   0

Then

        Σ_{i=t+1}^{k} bi·Zi   =   - Σ_{i=1}^{t} ai·Xi  -  Σ_{i=t+1}^{h} ai·Yi

The vector on the left belongs to V_n^k(F) and, from the right member, belongs also to V_n^h(F); thus it belongs
to V_n^t(F). But X1, X2, ..., Xt span V_n^t(F); hence, a_{t+1} = a_{t+2} = ... = a_h = 0.

Now from (i), Σ_{i=1}^{t} ai·Xi + Σ_{i=t+1}^{k} bi·Zi = 0. But the X's and Z's are linearly independent so that
a1 = a2 = ... = at = b_{t+1} = b_{t+2} = ... = b_k = 0. Thus,
the X's, Y's, and Z's are a linearly independent set and span V_n^s(F). Then s = h + k - t, as was to be proved.

8. Consider 1V_3^2(F) having X1 = [1,2,3]' and X2 = [1,1,1]' as basis and 2V_3^2(F) having Y1 = [3,1,2]'
and Y2 = [1,0,1]' as basis. Since the matrix of the components

        [1  1  3  1]
        [2  1  1  0]
        [3  1  2  1]

is of rank 3, the sum space is V_3(F). As a basis, we may take X1, X2, and Y1.

From h + k = s + t, the intersection space is a V_3^1(F). To find a basis, we equate linear combinations
of the vectors of the bases of 1V_3^2(F) and 2V_3^2(F) as aX1 + bX2 = cY1 + dY2, take d = 1 for convenience,
and solve

         a + b - 3c  =  1
        2a + b -  c  =  0
        3a + b - 2c  =  1

obtaining a = 1/3, b = -4/3, c = -2/3. Then
aX1 + bX2 = [-1, -2/3, -1/3]' is a basis of the intersection space. The vector [3,2,1]' is also a basis.

9. Determine a basis for the null space of

        A  =  [1  1  3  3]
              [0  2  2  4]
              [1  0  2  1]
              [1  1  3  3]

Consider the system of equations AX = 0 which reduces to

        x1 + 2x3 +  x4  =  0
        x2 +  x3 + 2x4  =  0

A basis for the null space of A is the pair of linearly independent solutions [1,2,0,-1]' and [2,1,-1,0]'
of these equations.

10. Prove: r_AB ≥ r_A + r_B - n.

Suppose first that A has the form

        [I_{r_A}  0]
        [  0      0]

Then the first r_A rows of AB are the first r_A rows of B while
the remaining rows are zeros. By Problem 10, Chapter 5, the rank of AB is r_AB ≥ r_A + r_B - n.

Suppose next that A is not of the above form. Then there exist non-singular matrices P and Q such that
PAQ has that form while the rank of PAQB is exactly that of AB (why?).

The reader may consider the special case when B =

11. Let X = [1,2,1]' relative to the E-basis. Find its coordinates relative to a new basis Z1 = [1,1,0]',
Z2 = [1,0,1]', and Z3 = [1,1,1]'.

Solution (a). Write

(i)        X  =  aZ1 + bZ2 + cZ3,    that is,    a + b + c = 1,    a + c = 2,    b + c = 1

Then a = 0, b = -1, c = 2. Thus, relative to the Z-basis, we have X_Z = [0,-1,2]'.

Solution (b). Rewriting (i) as X = [Z1, Z2, Z3]·X_Z = Z·X_Z, we have

        X_Z  =  Z^{-1}·X  =  [ 1  0 -1] [1]
                             [ 1 -1  0] [2]   =   [0, -1, 2]'
                             [-1  1  1] [1]

. _ _

12. Let X_Z and X_W be the coordinates of a vector X with respect to the two bases Z1 = [1,1,0]',
Z2 = [1,0,1]', Z3 = [1,1,1]' and W1 = [1,1,2]', W2 = [2,2,1]', W3 = [1,2,2]'. Determine the ma-
trix P such that X_W = P·X_Z.

Here

        Z  =  [Z1, Z2, Z3]  =  [1 1 1]        W  =  [1 2 1]        W^{-1}  =  (1/3) [ 2 -3  2]
                               [1 0 1]               [1 2 2]                        [ 2  0 -1]
                               [0 1 1]               [2 1 2]                        [-3  3  0]

Then by (11.6),

        P  =  W^{-1}·Z  =  (1/3) [-1  4  1]
                                 [ 2  1  1]
                                 [ 0 -3  0]

SUPPLEMENTARY PROBLEMS

13. Let [x1, x2, x3, x4]' be an arbitrary vector of V_4(R), where R denotes the field of real numbers. Which of the
following sets are subspaces of V_4(R)?
(a) All vectors with x1 = x2 = x3 = x4.
(b) All vectors with x1 = x2, x3 = 2x4.
(c) All vectors with x4 = 0.
(d) All vectors with x1 = 1.
(e) All vectors with x1, x2, x3, x4 integral.

Ans. All except (d) and (e).

14. Show that [1,1,1,1]' and [2,3,3,2]' are a basis of the V_4^2(F) of Problem 2.

15. Determine the dimension of the vector space spanned by each set of vectors. Select a basis for each.

(a) [1,2,3,4,5]', [5,4,3,2,1]', [1,1,1,1,1]'
(b) [1,1,0,-1]', [1,2,3,4]', [2,3,3,3]'
(c) [1,1,1,1]', [3,4,5,6]', [1,2,3,4]', [1,0,-1,-2]'

Ans. (a), (b), (c): r = 2

16. (a) Show that the vectors X1 = [1,-1,1]' and X2 = [3,4,-2]' span the same space as Y1 = [9,5,-1]' and
Y2 = [-17,-11,3]'.
(b) Show that the vectors X1 = [1,-1,1]' and X2 = [3,4,-2]' do not span the same space as Y1 = [-2,2,-2]'
and Y2 = [4,3,1]'.

17. Show that if the set X1, X2, ..., Xk is a basis for V_n^k(F), then any other vector Y of the space can be repre-
sented uniquely as a linear combination of X1, X2, ..., Xk.

Hint. Assume Y = Σ ai·Xi = Σ bi·Xi.

18. Consider the 4x4 matrix whose columns are the vectors of a basis of the V_4^2(R) of Problem 2 and a basis of
the V_4^2(R) of Problem 3. Show that the rank of this matrix is 4; hence, V_4(R) is the sum space and V_4^0(R), the
zero space, is the intersection space of the two given spaces.

19. Follow the proof given in Problem 8, Chapter 10, to prove Theorem III.

20. Show that the space spanned by [1,0,0,0,0]', [0,0,0,0,1]', [1,0,1,0,0]', [0,0,1,0,0]', [1,0,0,1,1]' and the
space spanned by [1,0,0,0,1]', [0,1,0,1,0]', [0,1,-2,1,0]', [1,0,-1,0,1]', [0,1,1,1,0]' are of dimensions
4 and 3, respectively. Show that [1,0,1,0,1]' and [1,0,2,0,1]' are a basis for the intersection space.

21. Find, relative to the basis Z1 = [1,1,2]', Z2 = [2,2,1]', Z3 = [1,2,2]', the coordinates of the vectors
(a) [1,1,0]', (b) [1,0,1]', (c) [1,1,1]'.
Ans. (a) [-1/3, 2/3, 0]', (b) [4/3, 1/3, -1]', (c) [1/3, 1/3, 0]'

22. Find, relative to the basis Z1 = [0,1,0]', Z2 = [1,1,1]', Z3 = [3,2,1]', the coordinates of the vectors
(a) [2,-1,0]', (b) [1,-3,5]', (c) [0,0,1]'.
Ans. (a) [-2,-1,1]', (b) [-6,7,-2]', (c) [-1/2, 3/2, -1/2]'

23. Let X_Z and X_W be the coordinates of a vector X with respect to the given pair of bases. Determine the ma-
trix P such that X_W = P·X_Z.

(a) Z1 = [1,0,0]', Z2 = [1,0,1]', Z3 = [1,1,1]';  W1 = [0,1,0]', W2 = [1,2,3]', W3 = [1,-1,1]'
(b) Z1 = [0,1,0]', Z2 = [1,1,0]', Z3 = [1,2,3]';  W1 = [1,1,0]', W2 = [1,1,1]', W3 = [1,2,1]'

Ans. (a) P = (1/2) [ 5  2  4]        (b) P = [ 0  1 -2]
                   [-1  0  0]                [-1  0  2]
                   [ 3  2  2]                [ 1  0  1]

24. Prove: If Pj is a solution of AX = Ej, (j = 1, 2, ..., n), then Σ_{j=1}^{n} hj·Pj is a solution of AX = H, where H =
[h1, h2, ..., hn]'.
Hint. H = h1·E1 + h2·E2 + ... + hn·En.

25. The vector space defined by all linear combinations of the columns of a matrix A is called the column space
of A. The vector space defined by all linear combinations of the rows of A is called the row space of A.
Show that the columns of AB are in the column space of A and the rows of AB are in the row space of B.

26. Show that AX = H, a system of m non-homogeneous equations in n unknowns, is consistent if and only if
the vector H belongs to the column space of A.

1 1 1111
27. Determine a basis for the null 1-1,
space of (a) (6) 12 12
1 1 3 4 3 4
Ans. (a) [1,-1,-1]', (6) [ 1,1,-1, -i ]', [l, 2,-1, -2]'

28. Prove: (a) N_AB ≥ N_A, N_AB ≥ N_B;  (b) N_AB ≤ N_A + N_B.

Hint: (a) N_AB = n - r_AB; r_AB ≤ r_A and r_B.

(b) Consider n - r_AB, using the theorem of Problem 10.

29. Derive a procedure for Problem 16 using only column transformations on A = [X1, X2, Y1, Y2]. Then resolve
Problem 5.
chapter 12

Linear Transformations

DEFINITION. Let X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' be two vectors of V_n(F), their co-
ordinates being relative to the same basis of the space. Suppose that the coordinates of X and
Y are related by

(12.1)        yi  =  a_i1·x1 + a_i2·x2 + ... + a_in·xn        (i = 1, 2, ..., n)

or, briefly,

        Y  =  AX

where A = [a_ij] is over F. Then (12.1) is a transformation T which carries any vector X of
V_n(F) into (usually) another vector Y of the same space, called its image.

If (12.1) carries X1 into Y1 and X2 into Y2, then

(a) it carries kX1 into kY1, for every scalar k, and

(b) it carries aX1 + bX2 into aY1 + bY2, for every pair of scalars a and b. For this reason, the
transformation is called linear.

Example 1. Consider the linear transformation

        Y  =  AX  =  [1 1 2]
                     [1 2 5] X
                     [1 3 3]

in ordinary space V_3(R).

(a) The image of X = [2,0,5]' is

        Y  =  [1 1 2] [2]
              [1 2 5] [0]   =   [12, 27, 17]'
              [1 3 3] [5]

(b) The vector X whose image is Y = [2,0,5]' is obtained by solving AX = [2,0,5]'. Reducing
the augmented matrix [A | Y] gives X = [13/5, 11/5, -7/5]'.
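Both parts of Example 1 are one step each once a small solver is at hand: part (a) is the product AX, and part (b) solves AX = Y exactly. A minimal sketch:

```python
from fractions import Fraction

A = [[1, 1, 2], [1, 2, 5], [1, 3, 3]]

def image(A, X):
    """Y = AX."""
    return [sum(a * x for a, x in zip(row, X)) for row in A]

def solve(A, b):
    """Gauss-Jordan solve of AX = b (A square, non-singular)."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

Y = image(A, [2, 0, 5])          # part (a)
X = solve(A, [2, 0, 5])          # part (b): pre-image of [2,0,5]'
```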

BASIC THEOREMS. If in (12.1), X = [1,0,...,0]' = E1, then Y = [a11, a21, ..., an1]' and, in general,
if X = Ej, then Y = [a1j, a2j, ..., anj]'. Hence,

I. A linear transformation (12.1) is uniquely determined when the images (Y's) of the
basis vectors are known, the respective columns of A being the coordinates of the images
of these vectors. See Problem 1.
CHAP. 12] LINEAR TRANSFORMATIONS 95

A linear transformation (12.1) is called non-singular if the images of distinct vectors Xi
are distinct vectors Yi. Otherwise the transformation is called singular.

II. A linear transformation (12.1) is non-singular if and only if A, the matrix of the
transformation, is non-singular. See Problem 2.

III. A non-singular linear transformation carries linearly independent (dependent) vec-
tors into linearly independent (dependent) vectors. See Problem 3.

From Theorem III follows

IV. Under a non-singular transformation (12.1) the image of a vector space V_n^k(F) is a
vector space V_n^k(F), that is, the dimension of the vector space is preserved. In particular,
the transformation is a mapping of V_n(F) onto itself.

When A is non-singular, the inverse of (12.1),

        X  =  A^{-1}·Y

carries the set of vectors Y1, Y2, ..., Yn whose components are the columns of A into the basis
vectors of the space. It is also a linear transformation.

V. The elementary vectors Ei of V_n(F) may be transformed into any set of n linearly
independent n-vectors by a non-singular linear transformation and conversely.

VI. If Y = AX carries a vector X into a vector Y, if Z = BY carries Y into Z, and if
W = CZ carries Z into W, then Z = BY = (BA)X carries X into Z and W = (CBA)X carries
X into W.

VII. When any two sets of n linearly independent n-vectors are given, there exists a
non-singular linear transformation which carries the vectors of one set into the vectors of
the other.

CHANGE OF BASIS. Relative to a Z-basis, let Y_Z = AX_Z be a linear transformation of V_n(F). Suppose
that the basis is changed and let X_W and Y_W be the coordinates of X_Z and Y_Z respectively rela-
tive to the new basis. By Theorem VIII, Chapter 11, there exists a non-singular matrix P such
that X_W = P·X_Z and Y_W = P·Y_Z or, setting P^{-1} = Q, such that

        X_Z = Q·X_W    and    Y_Z = Q·Y_W

Then    Y_W  =  Q^{-1}·Y_Z  =  Q^{-1}·A·X_Z  =  Q^{-1}·A·Q·X_W  =  B·X_W

where

(12.2)        B  =  Q^{-1}·A·Q

Two matrices A and B such that there exists a non-singular matrix Q for which B = Q^{-1}AQ
are called similar. We have proved

VIII. If Y_Z = AX_Z is a linear transformation of V_n(F) relative to a given basis (Z-basis)
and Y_W = BX_W is the same linear transformation relative to another basis (W-basis), then
A and B are similar.

Note. Since Q = P^{-1}, (12.2) might have been written as B = PAP^{-1}. A study of similar matrices
will be made later. There we shall agree to write B = R^{-1}AR instead of B = SAS^{-1}, but
for no compelling reason.

Example 2. Let

        Y  =  AX  =  [1 1 3]
                     [1 2 1] X
                     [1 3 2]

be a linear transformation relative to the E-basis and let W1 = [1,2,1]', W2 = [1,-1,2]',
W3 = [1,-1,-1]' be a new basis. Given the vector X = [3,0,2]', (a) find the coordinates of its
image relative to the W-basis. (b) Find the linear transformation Y_W = BX_W corresponding
to Y = AX. (c) Use the result of (b) to find the image Y_W of X_W = [1,3,3]'.

Write

        W  =  [W1, W2, W3]  =  [1  1  1]        then    W^{-1}  =  (1/9) [3  3  0]
                               [2 -1 -1]                                 [1 -2  3]
                               [1  2 -1]                                 [5 -1 -3]

(a) Relative to the W-basis, the vector X = [3,0,2]' has coordinates X_W = W^{-1}·X = [1,1,1]'.
The image of X is Y = AX = [9,5,7]' which, relative to the W-basis, is Y_W = W^{-1}·Y =
[14/3, 20/9, 19/9]'.

(b)     Y_W  =  W^{-1}·Y  =  W^{-1}·A·X  =  (W^{-1}·A·W)·X_W  =  B·X_W  =  (1/9) [36  21 -15]
                                                                                 [21  10 -11] X_W
                                                                                 [-3  23  -1]

(c)     Y_W  =  (1/9) [36  21 -15] [1]
                      [21  10 -11] [3]   =   [6, 2, 7]'
                      [-3  23  -1] [3]

See Problem 5.
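The change-of-basis matrix B of Example 2 can be verified without inverting W: B = W^{-1}AW is equivalent to WB = AW, and part (c) is then a single product. A sketch with exact arithmetic:

```python
from fractions import Fraction

A = [[1, 1, 3], [1, 2, 1], [1, 3, 2]]
W = [[1, 1, 1], [2, -1, -1], [1, 2, -1]]   # columns W1, W2, W3
B = [[Fraction(v, 9) for v in row] for row in
     [[36, 21, -15], [21, 10, -11], [-3, 23, -1]]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

check = matmul(W, B) == matmul(A, W)       # W B = A W, i.e. B = W^{-1} A W

Xw = [1, 3, 3]
Yw = [sum(B[i][j] * Xw[j] for j in range(3)) for i in range(3)]   # part (c)
```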

SOLVED PROBLEMS

1. (a) Set up the linear transformation Y = AX which carries E1 into Y1 = [1,2,3]', E2 into Y2 = [3,1,2]',
and E3 into Y3 = [2,1,3]'.
(b) Find the images of X1 = [1,1,1]', X2 = [3,-1,4]', and X3 = [4,0,5]'.
(c) Show that X1 and X2 are linearly independent as also are their images.
(d) Show that X1, X2, and X3 are linearly dependent as also are their images.

(a) By Theorem I, A = [Y1, Y2, Y3]; the equation of the linear transformation is

        Y  =  AX  =  [1 3 2]
                     [2 1 1] X
                     [3 2 3]

(b) The image of X1 = [1,1,1]' is

        Y1  =  [1 3 2] [1]
               [2 1 1] [1]   =   [6, 4, 8]'
               [3 2 3] [1]

The image of X2 is Y2 = [8,9,19]' and the image of X3 is Y3 = [14,13,27]'.

(c) The rank of

        [X1, X2]  =  [1  3]        is 2, as also is that of    [Y1, Y2]  =  [6  8]
                     [1 -1]                                                 [4  9]
                     [1  4]                                                 [8 19]

Thus, X1 and X2 are linearly independent as also are their images.

(d) We may compare the ranks of [X1, X2, X3] and [Y1, Y2, Y3]; however, X3 = X1 + X2 and Y3 = Y1 + Y2 so that
both sets are linearly dependent.

2. Prove: A linear transformation (12.1) is non-singular if and only if A is non-singular.

Suppose A is non-singular and the images of X1 ≠ X2 are Y = AX1 = AX2. Then A(X1 - X2) = 0 and
the system of homogeneous linear equations AX = 0 has the non-trivial solution X = X1 - X2. This is pos-
sible if and only if |A| = 0, a contradiction of the hypothesis that A is non-singular.

3. Prove: A non-singular linear transformation carries linearly independent vectors into linearly in-
dependent vectors.

Assume the contrary, that is, suppose that the images Yi = AXi, (i = 1, 2, ..., p) of the linearly independ-
ent vectors X1, X2, ..., Xp are linearly dependent. Then there exist scalars s1, s2, ..., sp, not all zero, such that

        Σ_{i=1}^{p} si·Yi  =  s1·Y1 + s2·Y2 + ... + sp·Yp  =  0

Then

        Σ_{i=1}^{p} si·(A·Xi)  =  A·(s1·X1 + s2·X2 + ... + sp·Xp)  =  0

Since A is non-singular, s1·X1 + s2·X2 + ... + sp·Xp = 0. But this is contrary to the hypothesis that the Xi are
linearly independent. Hence, the Yi are linearly independent.

4. A certain linear transformation Y = AX carries X1 = [1,0,1]' into [2,3,-1]', X2 = [1,-1,1]' into
[3,0,-2]', and X3 = [1,2,-1]' into [-2,7,-1]'. Find the images of E1, E2, E3 and write the equa-
tion of the transformation.

Let aX1 + bX2 + cX3 = E1; then

        a + b + c  =  1,        -b + 2c  =  0,        a + b - c  =  0

and a = -1/2, b = 1, c = 1/2. Thus, E1 = -(1/2)X1 + X2 + (1/2)X3
and its image is Y1 = -(1/2)[2,3,-1]' + [3,0,-2]' + (1/2)[-2,7,-1]' = [1,2,-2]'. Similarly, the image of E2 is
Y2 = [-1,3,1]' and the image of E3 is Y3 = [1,1,1]'. The equation of the transformation is

        Y  =  [Y1, Y2, Y3]·X  =  [ 1 -1  1]
                                 [ 2  3  1] X
                                 [-2  1  1]

5. If

        Y_Z  =  A·X_Z  =  [1 1 2]
                          [2 2 1] X_Z
                          [3 1 2]

is a linear transformation relative to the Z-basis of Problem 12, Chap-
ter 11, find the same transformation Y_W = BX_W relative to the W-basis of that problem.

From Problem 12, Chapter 11,

        X_W  =  P·X_Z  =  (1/3) [-1  4  1]
                                [ 2  1  1] X_Z
                                [ 0 -3  0]

Then

        X_Z  =  P^{-1}·X_W  =  Q·X_W  =  [-1  1 -1]
                                         [ 0  0 -1] X_W
                                         [ 2  1  3]

and

        Y_W  =  P·Y_Z  =  P·A·X_Z  =  (Q^{-1}·A·Q)·X_W  =  (1/3) [-2  14 -6]
                                                                 [ 7  14  9] X_W
                                                                 [ 0  -9  3]

SUPPLEMENTARY PROBLEMS

6. In Problem 1 show: (a) the transformation is non-singular, (b) X = A^{-1}Y carries the column vectors of A into
the elementary vectors.

7. Using the transformation of Problem 1, find (a) the image of X = [1,1,2]', (b) the vector X whose image is
[-2,-5,-5]'.  Ans. (a) [8,5,11]', (b) [-3,-1,2]'

8. Study the effect of the transformation Y = IX, also Y = kIX.

9. Set up the linear transformation which carries E1 into [1,2,3]', E2 into [3,1,2]', and E3 into [2,-1,-1]'.
Show that the transformation is singular and carries the linearly independent vectors [1,1,1]' and [2,0,2]'
into the same image vector.

10. Suppose (12.1) is non-singular and show that if X1, X2, ..., Xp are linearly dependent so also are their im-
ages Y1, Y2, ..., Yp.

11. Use Theorem III to show that under a non-singular transformation the dimension of a vector space is un-
changed. Hint. Consider the images of a basis of V_n^k(F).

12. Given the linear transformation

        Y  =  [ 1 1 0]
              [ 2 3 1] X
              [-2 3 5]

show (a) it is singular, (b) the images of the linearly in-
dependent vectors X1 = [1,1,1]', X2 = [2,1,2]', and X3 = [1,2,3]' are linearly dependent, (c) the image
of V_3(R) is a V_3^2(R).

13. Given the linear transformation

        Y  =  [1 1 3]
              [1 2 4] X
              [1 1 3]

show (a) it is singular, (b) the image of every vector of the
V_3^2(R) spanned by [1,1,1]' and [3,2,0]' lies in the V_3^1(R) spanned by [5,7,5]'.

14. Prove Theorem VII.

Hint. Let Xi and Yi, (i = 1, 2, ..., n) be the given sets of vectors. Let Z = AX carry the set Xi into Ei and
Y = BZ carry the Ei into Yi.

15. Prove: Similar matrices have equal determinants.

16. Let

        Y  =  AX  =  [1 2 3]
                     [3 2 1] X
                     [1 1 1]

be a linear transformation relative to the E-basis and let a new basis, say Z1 =
[1,1,0]', Z2 = [1,0,1]', Z3 = [1,1,1]' be chosen. Let X = [1,2,3]' relative to the E-basis. Show that
(a) Y = [14,10,6]' is the image of X under the transformation.
(b) X, when referred to the new basis, has coordinates X_Z = [-2,-1,4]' and Y has coordinates Y_Z = [8,4,2]'.
(c) X_Z = PX and Y_Z = PY, where

        P  =  [ 1  0 -1]
              [ 1 -1  0]   =   [Z1, Z2, Z3]^{-1}
              [-1  1  1]

(d) Y_Z = Q^{-1}AQ·X_Z, where Q = P^{-1}.

17. Given the linear transformation

        Y_W  =  [1 1 0]
                [0 1 1] X_W
                [1 0 1]

relative to the W-basis: W1 = [0,-1,2]', W2 = [4,1,0]',


W3 = [-2,0,-4]'. Find the representation relative to the Z-basis: Z1 = [1,-1,1]', Z2 = [1,0,-1]', Z3 = [1,2,1]'.

Ans.    [-1  0  3]
        [ 2  2 -5]
        [-1  0  2]

18. If, in the linear transformation Y = AX, A is singular, then the null space of A is the vector space each of
whose vectors transforms into the zero vector. Determine the null space of the transformation of

(a) Problem 12,  (b) Problem 13,  (c)  Y  =  [1 2 3]
                                             [2 4 6] X
                                             [3 6 9]

Ans. (a) V_3^1(R) spanned by [1,-1,1]'
     (b) V_3^1(R) spanned by [2,1,-1]'
     (c) V_3^2(R) spanned by [2,-1,0]' and [3,0,-1]'

19. If Y = AX carries every vector of a vector space V_n^h into a vector of that same space, V_n^h is called an in-
variant space of the transformation. Show that in the real space V_3(R) under the linear transformation

(a)  Y  =  [1 0 -1]
           [1 2  1] X,   the V_3^1 spanned by [1,-1,0]', the V_3^1 spanned by [2,-1,-2]', and the V_3^1 spanned by
           [2 2  3]

[1,-1,-2]' are invariant vector spaces.

(b)  Y  =  [2 2 1]
           [1 3 1] X,   the V_3^1 spanned by [1,1,1]' and the V_3^2 spanned by [1,0,-1]' and [2,-1,0]' are invariant
           [1 2 2]

spaces. (Note that every vector of the V_3^2 is carried into itself.)

(c)  Y  =  [ 0  1  0  0]
           [ 0  0  1  0] X,   the V_4^1 spanned by [1,1,1,1]' is an invariant vector space.
           [ 0  0  0  1]
           [-1  4 -6  4]

20. Consider the linear transformation Y = PX: yi = x_{j_i}, (i = 1, 2, ..., n), in which j1, j2, ..., jn is a permuta-
tion of 1, 2, ..., n.

(a) Describe the permutation matrix P.

(b) Prove: There are n! permutation matrices of order n.

(c) Prove: If P1 and P2 are permutation matrices so also are P3 = P1·P2 and P4 = P2·P1.

(d) Prove: If P is a permutation matrix so also is P', and PP' = I.

(e) Show that each permutation matrix P can be expressed as a product of a number of the elementary col-
umn matrices K_12, K_23, ..., K_{n-1,n}.

(f) Write P = [E_{i_1}, E_{i_2}, ..., E_{i_n}] where i1, i2, ..., in is a permutation of 1, 2, ..., n and the E_i are the ele-
mentary n-vectors. Find a rule (other than P^{-1} = P') for writing P^{-1}. For example, when n = 4 and
P = [E3, E1, E4, E2], then P^{-1} = [E2, E4, E1, E3]; when P = [E4, E2, E1, E3], then P^{-1} = [E3, E2, E4, E1].
chapter 13

Vectors Over the Real Field

INNER PRODUCT. In this chapter all vectors are real and V_n(R) is the space of all real n-vectors.

If X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' are two vectors of V_n(R), their inner product is
defined to be the scalar

(13.1)        X·Y  =  x1·y1 + x2·y2 + ... + xn·yn

Example 1. For the vectors X1 = [1,1,1]', X2 = [2,1,2]', X3 = [1,-2,1]':

(a) X1·X2 = 1·2 + 1·1 + 1·2 = 5

(b) X1·X3 = 1·1 + 1·(-2) + 1·1 = 0

(c) X1·X1 = 1·1 + 1·1 + 1·1 = 3

(d) X1·2X2 = 1·4 + 1·2 + 1·4 = 10 = 2(X1·X2)

Note. The inner product is frequently defined as

(13.1')        X·Y  =  X'Y  =  Y'X

The use of X'Y and Y'X is helpful; however, X'Y and Y'X are 1x1 matrices while
X·Y is the element of the matrix. With this understanding, (13.1') will be used
here. Some authors write X|Y for X·Y. In vector analysis, the inner product is call-
ed the dot product.

The following rules for inner products are immediate:

              (a) X1·X2 = X2·X1,    X1·kX2 = k(X1·X2)

(13.2)        (b) X1·(X2 + X3) = (X2 + X3)·X1 = X1·X2 + X1·X3

              (c) (X1 + X2)·(X3 + X4) = X1·X3 + X1·X4 + X2·X3 + X2·X4

ORTHOGONAL VECTORS. Two vectors X and Y of V_n(R) are said to be orthogonal if their inner
product is 0. The vectors X1 and X3 of Example 1 are orthogonal.

THE LENGTH OF A VECTOR X of V_n(R), denoted by ||X||, is defined as the square root of the in-
ner product of X and X; thus,

(13.3)        ||X||  =  sqrt(X·X)  =  sqrt(x1^2 + x2^2 + ... + xn^2)

Example 2. From Example 1(c), ||X1|| = sqrt(3).

See Problems 1-2.
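Definitions (13.1) and (13.3) translate directly into code; the sketch below reproduces Example 1 and Example 2:

```python
import math

def dot(X, Y):
    """Inner product (13.1)."""
    return sum(x * y for x, y in zip(X, Y))

X1, X2, X3 = [1, 1, 1], [2, 1, 2], [1, -2, 1]

a = dot(X1, X2)                      # 5
b = dot(X1, X3)                      # 0: X1 and X3 are orthogonal
c = dot(X1, X1)                      # 3
length = math.sqrt(dot(X1, X1))      # ||X1|| = sqrt(3)
```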


Using (13.1) and (13.3), it may be shown that

(13.4)        X·Y  =  (1/2){ ||X + Y||^2 - ||X||^2 - ||Y||^2 }

A vector X whose length is ||X|| = 1 is called a unit vector. The elementary vectors Ei
are examples of unit vectors.

THE SCHWARZ INEQUALITY. If X and Y are vectors of V_n(R), then

(13.5)    |X·Y| ≤ ||X||·||Y||

that is, the numerical value of the inner product of two real vectors is at most the product of their lengths.

See Problem 3.

THE TRIANGLE INEQUALITY. If X and Y are vectors of V_n(R), then

(13.6)    ||X+Y|| ≤ ||X|| + ||Y||

ORTHOGONAL VECTORS AND SPACES. If X1, X2, ..., Xm are m ≤ n mutually orthogonal non-zero n-vectors and if c1X1 + c2X2 + ... + cmXm = 0, then for i = 1, 2, ..., m, (c1X1 + c2X2 + ... + cmXm)·Xi = ci(Xi·Xi) = 0. Since this requires ci = 0 for i = 1, 2, ..., m, we have

I. Any set of m ≤ n mutually orthogonal non-zero n-vectors is a linearly independent set and spans a vector space V_n^m(R).

A vector Y is said to be orthogonal to a vector space V_n^m(R) if it is orthogonal to every vector of the space.

II. If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, it is orthogonal to the space spanned by them.
See Problem 4.

III. If V_n^h(R) is a subspace of V_n^k(R), k > h, there exists at least one vector X of V_n^k(R) which is orthogonal to V_n^h(R).
See Problem 5.

Since mutually orthogonal vectors are linearly independent, a vector space V_n^m(R), m > 0, can contain no more than m mutually orthogonal vectors. Suppose we have found r < m mutually orthogonal vectors of a V_n^m(R). They span a V_n^r(R), a subspace of V_n^m(R), and by Theorem III there exists at least one vector of V_n^m(R) which is orthogonal to the V_n^r(R). We now have r+1 mutually orthogonal vectors of V_n^m(R) and, by repeating the argument, we show

IV. Every vector space V_n^m(R), m > 0, contains m but not more than m mutually orthogonal vectors.

Two vector spaces are said to be orthogonal if every vector of one is orthogonal to every vector of the other space. For example, the space spanned by X1 = [1,0,0,1]' and X2 = [0,1,1,0]' is orthogonal to the space spanned by X3 = [1,0,0,-1]' and X4 = [0,1,-1,0]' since (aX1 + bX2)·(cX3 + dX4) = 0 for all a, b, c, d.
V. The set of all vectors orthogonal to every vector of a given V_n^k(R) is a unique vector space V_n^(n-k)(R).
See Problem 6.

We may associate with any vector X ≠ 0 a unique unit vector U obtained by dividing the components of X by ||X||. This operation is called normalization. Thus, to normalize the vector X = [2,4,4]', divide each component by ||X|| = √(4+16+16) = 6 and obtain the unit vector [1/3, 2/3, 2/3]'.

A basis of V_n(R) which consists of mutually orthogonal vectors is called an orthogonal basis of the space; if the mutually orthogonal vectors are also unit vectors, the basis is called a normal orthogonal or orthonormal basis. The elementary vectors are an orthonormal basis of V_n(R).
See Problem 7.

THE GRAM-SCHMIDT ORTHOGONALIZATION PROCESS. Suppose X1, X2, ..., Xm are a basis of V_n^m(R). Define

         Y1 = X1

         Y2 = X2 − [(Y1·X2)/(Y1·Y1)] Y1

(13.7)   Y3 = X3 − [(Y2·X3)/(Y2·Y2)] Y2 − [(Y1·X3)/(Y1·Y1)] Y1

         ..........................

         Ym = Xm − [(Y(m-1)·Xm)/(Y(m-1)·Y(m-1))] Y(m-1) − ... − [(Y1·Xm)/(Y1·Y1)] Y1

Then the unit vectors Gi = Yi/||Yi||, (i = 1, 2, ..., m), are mutually orthogonal and are an orthonormal basis of V_n^m(R).

Example 3. Construct, using the Gram-Schmidt process, an orthonormal basis of V_3(R), given the basis X1 = [1,1,1]', X2 = [1,-2,1]', X3 = [1,2,3]'.

(i)   Y1 = X1 = [1,1,1]'

(ii)  Y2 = X2 − [(Y1·X2)/(Y1·Y1)] Y1 = [1,-2,1]' − (0/3)Y1 = [1,-2,1]'

(iii) Y3 = X3 − [(Y2·X3)/(Y2·Y2)] Y2 − [(Y1·X3)/(Y1·Y1)] Y1 = [1,2,3]' − (0/6)Y2 − (6/3)[1,1,1]' = [-1,0,1]'

The vectors G1 = Y1/||Y1|| = [1/√3, 1/√3, 1/√3]', G2 = Y2/||Y2|| = [1/√6, -2/√6, 1/√6]', and G3 = Y3/||Y3|| = [-1/√2, 0, 1/√2]' are an orthonormal basis of V_3(R). Each vector Gi is a unit vector and each product Gi·Gj, i ≠ j, is 0. Note that Y2 = X2 here because X1 and X2 are orthogonal vectors.

See Problems 8-9.
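The steps of (13.7) can be sketched in a few lines of NumPy (a modern check on Example 3, not part of the original text):

```python
import numpy as np

def gram_schmidt(vectors):
    """Equations (13.7): subtract from each Xi its components along the
    previously found Y's, then normalize to obtain the unit vectors Gi."""
    ys = []
    for x in vectors:
        y = np.asarray(x, dtype=float)
        for prev in ys:
            y = y - (prev @ y) / (prev @ prev) * prev
        ys.append(y)
    return [y / np.sqrt(y @ y) for y in ys]

# The basis of Example 3.
G1, G2, G3 = gram_schmidt([[1, 1, 1], [1, -2, 1], [1, 2, 3]])
print(np.round(G3, 4))          # parallel to [-1, 0, 1]', as in Example 3
print(abs(G1 @ G2) < 1e-12)     # True: the Gi are mutually orthogonal
```
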



Let X1, X2, ..., Xm be a basis of a V_n^m(R) and suppose that X1, X2, ..., Xs, (1 ≤ s ≤ m), are mutually orthogonal. Then, by the Gram-Schmidt process, we may obtain an orthogonal basis Y1, Y2, ..., Ym of the space of which, it is easy to show, Yi = Xi, (i = 1, 2, ..., s). Thus,

VI. If X1, X2, ..., Xs, (1 ≤ s < m), are mutually orthogonal unit vectors of a V_n^m(R), there exist unit vectors X(s+1), ..., Xm in the space such that the set X1, X2, ..., Xm is an orthonormal basis.

THE GRAMIAN. Let X1, X2, ..., Xp be a set of real n-vectors and define the Gramian matrix

              | X1·X1  X1·X2  ...  X1·Xp |
(13.8)    G = | X2·X1  X2·X2  ...  X2·Xp |   =   [Xi'Xj]
              | ........................ |
              | Xp·X1  Xp·X2  ...  Xp·Xp |

Clearly, the vectors are mutually orthogonal if and only if G is diagonal.

In Problem 14, Chapter 17, we shall prove

VII. For a set of real n-vectors X1, X2, ..., Xp, |G| ≥ 0. The equality holds if and only if the vectors are linearly dependent.
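Theorem VII is easy to check numerically. A sketch (NumPy, my own illustration): the Gramian of independent vectors has positive determinant, and it collapses to zero when the vectors are dependent.

```python
import numpy as np

def gramian(vectors):
    # (13.8): G[i, j] = Xi . Xj
    V = np.column_stack(vectors)
    return V.T @ V

# Independent vectors: |G| > 0 (Theorem VII).
G_ind = gramian([[1, 1, 1], [1, -2, 1], [1, 2, 3]])
print(np.linalg.det(G_ind) > 0)               # True

# Dependent vectors (third = first + second): |G| = 0.
G_dep = gramian([[1, 0, 1], [0, 1, 1], [1, 1, 2]])
print(round(abs(np.linalg.det(G_dep)), 8))    # 0.0
```
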

ORTHOGONAL MATRICES. A square matrix A is called orthogonal if

(13.9)    AA' = A'A = I

that is, if

(13.9')   A' = A⁻¹

From (13.9) it is clear that the column vectors (row vectors) of an orthogonal matrix A are mutually orthogonal unit vectors.

Example 4. By Example 3,

         | 1/√3   1/√6  -1/√2 |
    A =  | 1/√3  -2/√6    0   |
         | 1/√3   1/√6   1/√2 |

is orthogonal.

There follow readily

VIII. If the real n-square matrix A is orthogonal, its column vectors (row vectors) are an orthonormal basis of V_n(R), and conversely.

IX. The inverse and the transpose of an orthogonal matrix are orthogonal.

X. The product of two or more orthogonal matrices is orthogonal.

XI. The determinant of an orthogonal matrix is ±1.
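The matrix of Example 4 can be checked against (13.9) and Theorem XI directly (NumPy sketch, my own verification):

```python
import numpy as np

s3, s6, s2 = np.sqrt(3), np.sqrt(6), np.sqrt(2)
A = np.array([[1/s3,  1/s6, -1/s2],
              [1/s3, -2/s6,  0.0 ],
              [1/s3,  1/s6,  1/s2]])

print(np.allclose(A.T @ A, np.eye(3)))   # True: (13.9) holds, A is orthogonal
print(np.allclose(A @ A.T, np.eye(3)))   # True
print(abs(round(np.linalg.det(A), 6)))   # 1.0, illustrating Theorem XI
```
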

ORTHOGONAL TRANSFORMATIONS. Let

(13.10)    Y = AX

be a linear transformation in V_n(R) and let the images of the n-vectors X1 and X2 be denoted by Y1 and Y2 respectively. From (13.4) we have

    X1·X2 = ½{||X1+X2||² − ||X1||² − ||X2||²}
and
    Y1·Y2 = ½{||Y1+Y2||² − ||Y1||² − ||Y2||²}

Comparing right and left members, we see that if (13.10) preserves lengths it preserves inner products, and conversely. Thus,

XII. A linear transformation preserves lengths if and only if it preserves inner products.

A linear transformation Y = AX is called orthogonal if its matrix A is orthogonal. In Problem 10, we prove

XIII. A linear transformation preserves lengths if and only if its matrix is orthogonal.

Example 5. The linear transformation

              | 1/√3   1/√6  -1/√2 |
    Y = AX =  | 1/√3  -2/√6    0   | X
              | 1/√3   1/√6   1/√2 |

is orthogonal. The image of X = [a,b,c]' is

    Y = [a/√3 + b/√6 − c/√2,  a/√3 − 2b/√6,  a/√3 + b/√6 + c/√2]'

and both vectors are of length √(a² + b² + c²).

XIV. If (13.10) is a transformation of coordinates from the E-basis to another, the Z-basis, then the Z-basis is orthonormal if and only if A is orthogonal.
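Theorems XII and XIII can be illustrated numerically; the sketch below (NumPy, my own example using the orthogonal matrix of Example 4) applies the transformation to arbitrary vectors and compares lengths and inner products.

```python
import numpy as np

s3, s6, s2 = np.sqrt(3), np.sqrt(6), np.sqrt(2)
A = np.array([[1/s3,  1/s6, -1/s2],
              [1/s3, -2/s6,  0.0 ],
              [1/s3,  1/s6,  1/s2]])

X = np.array([3.0, -1.0, 2.0])   # an arbitrary vector [a, b, c]'
Y = A @ X                        # its image under (13.10)

# An orthogonal transformation preserves lengths (Theorem XIII) ...
print(np.isclose(np.linalg.norm(Y), np.linalg.norm(X)))   # True
# ... and therefore inner products (Theorem XII).
Z = np.array([1.0, 4.0, 1.0])
print(np.isclose((A @ X) @ (A @ Z), X @ Z))                # True
```
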

SOLVED PROBLEMS

1. Given the vectors X1 = [1,2,3]' and X2 = [2,-3,4]', find: (a) their inner product, (b) the length of each.

(a) X1·X2 = X1'X2 = [1,2,3][2,-3,4]' = 1(2) + 2(-3) + 3(4) = 8

(b) ||X1||² = X1·X1 = X1'X1 = 1(1) + 2(2) + 3(3) = 14 and ||X1|| = √14

    ||X2||² = 2(2) + (-3)(-3) + 4(4) = 29 and ||X2|| = √29


2. (a) Show that X = [1/3, -2/3, -2/3]' and Y = [2/3, -1/3, 2/3]' are orthogonal.
(b) Find a vector Z orthogonal to both X and Y.

(a) X·Y = X'Y = [1/3, -2/3, -2/3][2/3, -1/3, 2/3]' = 2/9 + 2/9 − 4/9 = 0 and the vectors are orthogonal.

(b) Write

               | 1/3   2/3  0 |
    [X, Y, 0] = | -2/3 -1/3  0 |
               | -2/3  2/3  0 |

and compute the cofactors -2/3, -2/3, 1/3 of the elements of the column of zeros. Then by (3.11) Z = [-2/3, -2/3, 1/3]' is orthogonal to both X and Y.
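The cofactor computation of part (b) is exactly the vector product X × Y of Problem 40; in NumPy (a modern tool used here only for checking) it is `np.cross`:

```python
import numpy as np

X = np.array([1/3, -2/3, -2/3])
Y = np.array([2/3, -1/3,  2/3])

Z = np.cross(X, Y)     # the cofactors of the column of zeros in [X, Y, 0]
print(np.round(Z, 6))  # [-0.666667 -0.666667  0.333333], i.e. [-2/3, -2/3, 1/3]'
print(np.isclose(X @ Z, 0) and np.isclose(Y @ Z, 0))   # True
```
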

3. Prove the Schwarz Inequality: If X and Y are vectors of V_n(R), then |X·Y| ≤ ||X||·||Y||.

Clearly, the theorem is true if X or Y is the zero vector. Assume then that X and Y are non-zero vectors. If a is any real number,

    ||aX + Y||² = (aX + Y)·(aX + Y)
                = [ax1+y1, ax2+y2, ..., axn+yn]·[ax1+y1, ax2+y2, ..., axn+yn]'
                = (a²x1² + 2ax1y1 + y1²) + (a²x2² + 2ax2y2 + y2²) + ... + (a²xn² + 2axnyn + yn²)
                = a²||X||² + 2a(X·Y) + ||Y||² ≥ 0

Now a quadratic polynomial in a is greater than or equal to zero for all real values of a if and only if its discriminant is less than or equal to zero. Thus,

    4(X·Y)² − 4||X||²·||Y||² ≤ 0    and    |X·Y| ≤ ||X||·||Y||

4. Prove: If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, it is orthogonal to the space spanned by these vectors.

Any vector of the space spanned by the X's can be written as a1X1 + a2X2 + ... + amXm. Then

    (a1X1 + a2X2 + ... + amXm)·Y = a1X1·Y + a2X2·Y + ... + amXm·Y = 0

since Xi·Y = 0, (i = 1, 2, ..., m). Thus, Y is orthogonal to every vector of the space and by definition is orthogonal to the space. In particular, if Y is orthogonal to every vector of a basis of a vector space, it is orthogonal to that space.

5. Prove: If a V_n^h(R) is a subspace of a V_n^k(R), k > h, then there exists at least one vector X of V_n^k(R) which is orthogonal to the V_n^h(R).

Let X1, X2, ..., Xh be a basis of the V_n^h(R), let X(h+1) be a vector in the V_n^k(R) but not in the V_n^h(R), and consider the vector

(i)    X = a1X1 + a2X2 + ... + ahXh + a(h+1)X(h+1)

The condition that X be orthogonal to each of X1, X2, ..., Xh consists of h homogeneous linear equations

    a1 X1·X1 + a2 X2·X1 + ... + ah Xh·X1 + a(h+1) X(h+1)·X1 = 0
    ..........................................................
    a1 X1·Xh + a2 X2·Xh + ... + ah Xh·Xh + a(h+1) X(h+1)·Xh = 0

in the h+1 unknowns a1, a2, ..., a(h+1). By Theorem IV, Chapter 10, a non-trivial solution exists. When these values are substituted in (i), we have a non-zero (why?) vector X orthogonal to the basis vectors of the V_n^h(R) and hence to that space.

^ ""^^ ^ ^^^ ^^''^'''^ orthogonal to every vector of a given V^{R) is a unique


Vn-k'J'^^ vector space

^^* -^i-^s A-fe be a basis of the V^{R). The ..-vectors X orthogonal to each of the Jt,- satisfy the
system of homogeneous equations

www.TheSolutionManual.com
^'> X^.X=o.X^.X=Q Xk.X=0
Since the X^^ are linearly independent, the coefficient matrix of the
system (i) is of rank k ; hence, there are
n-k linearly independent solutions (vectors) which span
a K"-\/?). (See Theorem VI, Chapter 10.)
Uniqueness follows from the fact that the intersection space of the
V^iR) and the V^'^(R) is the zero-
space so that the sum space is Xi{R).
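The construction in Problem 6 — solving the homogeneous system Xi·X = 0 — can be sketched with an SVD (my choice of tool; the text of course relies on Chapter 10's theory). Using the basis X, Y of Problem 14 below:

```python
import numpy as np

# Rows are a basis of a V_4^2(R).
M = np.array([[1., 2.,  3., 4.],
              [2., 1., -1., 1.]])

# The solutions of M X = 0 span the orthogonal complement, a V_4^(4-2)(R).
_, _, Vt = np.linalg.svd(M)
complement = Vt[2:]                       # n - k = 2 independent solutions

print(np.allclose(M @ complement.T, 0))   # True: each solution is orthogonal
print(complement.shape)                   # (2, 4)
```
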

7. Find an orthonormal basis of V_3(R), given X = [1/√6, 2/√6, 1/√6]'.

Note that X is a unit vector. Choose Y = [1/√2, 0, -1/√2]', another unit vector such that X·Y = 0. Then, as in Problem 2(b), obtain Z = [1/√3, -1/√3, 1/√3]' to complete the set.

8. Derive the Gram-Schmidt equations (13.7).

Let X1, X2, ..., Xm be a given basis of V_n^m(R) and denote by Y1, Y2, ..., Ym the set of mutually orthogonal vectors to be found.

(a) Take Y1 = X1.

(b) Take Y2 = X2 + aY1. Since Y1 and Y2 are to be mutually orthogonal,

    Y1·Y2 = Y1·X2 + aY1·Y1 = 0

and a = −(Y1·X2)/(Y1·Y1). Thus, Y2 = X2 − [(Y1·X2)/(Y1·Y1)] Y1.

(c) Take Y3 = X3 + aY2 + bY1. Since Y1, Y2, Y3 are to be mutually orthogonal,

    Y1·Y3 = Y1·X3 + bY1·Y1 = 0    and    Y2·Y3 = Y2·X3 + aY2·Y2 = 0

Then a = −(Y2·X3)/(Y2·Y2), b = −(Y1·X3)/(Y1·Y1), and Y3 = X3 − [(Y2·X3)/(Y2·Y2)] Y2 − [(Y1·X3)/(Y1·Y1)] Y1.

(d) Continue the process until Ym is obtained.



9. Construct an orthonormal basis of V_3(R), given the basis X1 = [2,1,3]', X2 = [1,2,3]', and X3 = [1,1,1]'.

Take Y1 = X1 = [2,1,3]'. Then

    Y2 = X2 − [(Y1·X2)/(Y1·Y1)] Y1 = [1,2,3]' − (13/14)[2,1,3]' = [-6/7, 15/14, 3/14]'

    Y3 = X3 − [(Y2·X3)/(Y2·Y2)] Y2 − [(Y1·X3)/(Y1·Y1)] Y1 = [1,1,1]' − (2/9)[-6/7, 15/14, 3/14]' − (3/7)[2,1,3]' = [1/3, 1/3, -1/3]'

Normalizing the Y's, we have

    [2/√14, 1/√14, 3/√14]',  [-4/√42, 5/√42, 1/√42]',  [1/√3, 1/√3, -1/√3]'

as the required orthonormal basis.

10. Prove: A linear transformation preserves lengths if and only if its matrix is orthogonal.

Let Y1, Y2 be the respective images of X1, X2 under the linear transformation Y = AX.

Suppose A is orthogonal so that A'A = I. Then

(i)    Y1·Y2 = Y1'Y2 = (X1'A')(AX2) = X1'X2 = X1·X2

and, by Theorem XII, lengths are preserved.

Conversely, suppose lengths (and hence inner products) are preserved. Then

    Y1·Y2 = X1'(A'A)X2 = X1'X2

so that A'A = I and A is orthogonal.

SUPPLEMENTARY PROBLEMS

11. Given the vectors X1 = [1,2,1]', X2 = [2,1,2]', X3 = [2,1,-4]', find:
(a) the inner product of each pair,
(b) the length of each vector,
(c) a vector orthogonal to the vectors X1, X2; X1, X3.

Ans. (a) 6, 0, -3    (b) √6, 3, √21    (c) [1,0,-1]', [3,-2,1]'

12. Using arbitrary vectors of V_3(R), verify (13.2).

13. Prove (13.4).

14. Let X = [1,2,3,4]' and Y = [2,1,-1,1]' be a basis of a V_4^2(R) and let Z = [4,2,3,1]' lie in a V_4^3(R) containing X and Y.
(a) Show that Z is not in the V_4^2(R).
(b) Write W = aX + bY + cZ and find a vector W of the V_4^3(R) orthogonal to both X and Y.

15. (a) Prove: A vector of V_n(R) is orthogonal to itself if and only if it is the zero vector.
(b) Prove: If X1, X2, X3 are a set of linearly dependent non-zero n-vectors and if X1·X2 = X1·X3 = 0, then X2 and X3 are linearly dependent.

16. Prove: A vector X is orthogonal to every vector of a V_n^m(R) if and only if it is orthogonal to every vector of a basis of the space.

17. Prove: If two spaces V_n^h(R) and V_n^k(R) are orthogonal, their intersection space is the zero space.

18. Prove the Triangle Inequality.
Hint. Show that ||X+Y||² ≤ (||X|| + ||Y||)², using the Schwarz Inequality.

19. Prove: ||X+Y|| = ||X|| + ||Y|| if and only if X and Y are linearly dependent.

20. Normalize the vectors of Problem 11.
Ans. [1/√6, 2/√6, 1/√6]', [2/3, 1/3, 2/3]', [2/√21, 1/√21, -4/√21]'

21. Show that the vectors X, Y, Z of Problem 2 are an orthonormal basis of V_3(R).

22. (a) Show that if X1, X2, ..., Xm are linearly independent, so also are the unit vectors obtained by normalizing them.
(b) Show that if the vectors of (a) are mutually orthogonal non-zero vectors, so also are the unit vectors obtained by normalizing them.

23. Prove: (a) If A is orthogonal and |A| = 1, each element of A is equal to its cofactor in |A|.
(b) If A is orthogonal and |A| = -1, each element of A is equal to the negative of its cofactor in |A|.

24. Prove Theorems VIII, IX, X, XI.

25. Prove: If A and B commute and C is orthogonal, then C'AC and C'BC commute.

26. Prove that AA' (or A'A), where A is n-square, is a diagonal matrix if and only if the rows (or columns) of A are mutually orthogonal.

27. Prove: If X and Y are n-vectors, then XY' + YX' is symmetric and XY' − YX' is skew-symmetric.

28. Prove: If X and Y are n-vectors and A is n-square, then X·(AY) = (A'X)·Y.

29. Prove: If X1, X2, ..., Xn is an orthonormal basis and if X = Σ ciXi (i = 1, ..., n), then (a) X·Xi = ci, (i = 1, 2, ..., n); (b) X·X = c1² + c2² + ... + cn².

30. Find an orthonormal basis of V_3(R), given (a) X1 = [3/√17, -2/√17, 2/√17]'; (b) X1 = [3,0,2]'.
Ans. (a) X1, [0, 1/√2, 1/√2]', [-4/√34, -3/√34, 3/√34]'
(b) [3/√13, 0, 2/√13]', [2/√13, 0, -3/√13]', [0,1,0]'

31. Construct orthonormal bases of V_3(R) by the Gram-Schmidt process, using the given vectors in order:
(a) [1,-1,0]', [2,-1,-2]', [1,-1,-2]'
(b) [1,0,1]', [1,3,1]', [3,2,1]'
(c) [2,-1,0]', [4,-1,0]', [4,0,-1]'
Ans. (a) [½√2, -½√2, 0]', [√2/6, √2/6, -2√2/3]', [-2/3, -2/3, -1/3]'
(b) [½√2, 0, ½√2]', [0,1,0]', [½√2, 0, -½√2]'
(c) [2√5/5, -√5/5, 0]', [√5/5, 2√5/5, 0]', [0,0,-1]'

32. Obtain an orthonormal basis of V_3(R), given X1 = [1,1,-1]' and X2 = [2,1,0]'.

Hint. Take Y1 = X1, obtain Y2 by the Gram-Schmidt process, and Y3 by the method of Problem 2(b).

Ans. [√3/3, √3/3, -√3/3]', [½√2, 0, ½√2]', [√6/6, -√6/3, -√6/6]'

33. Obtain an orthonormal basis of V_3(R), given X1 = [7,-1,-1]'.

34. Show in two ways that the vectors [1,2,3,4]', [1,-1,-2,-3]', and [5,4,5,6]' are linearly dependent.

35. Prove: If A is skew-symmetric and I + A is non-singular, then B = (I − A)(I + A)⁻¹ is orthogonal.

36. Use Problem 35 to obtain the orthogonal matrix B, given

    (a) A = |  0  5 |        (b) A = |  0   1  2 |
            | -5  0 |                | -1   0  3 |
                                     | -2  -3  0 |

    Ans. (a) (1/13) | -12  -5 |      (b) (1/15) |   5  -14    2 |
                    |   5 -12 |                 | -10   -5  -10 |
                                                |  10    2  -11 |

37. Prove: If A is an orthogonal matrix and if B = AP, where P is non-singular, then PB⁻¹ is orthogonal.

38. In a transformation of coordinates from the E-basis to an orthonormal Z-basis with matrix P, Y = AX becomes Y_Z = P⁻¹APX_Z, or Y_Z = BX_Z (see Chapter 12). Show that if A is orthogonal so also is B, and conversely, to prove Theorem XIV.

39. Prove: If A is orthogonal and I + A is non-singular, then B = (I − A)(I + A)⁻¹ is skew-symmetric.

40. Let X = [x1,x2,x3]' and Y = [y1,y2,y3]' be two vectors of V_3(R) and define the vector product, X×Y, of X and Y as Z = X×Y = [z1, z2, z3]' where

    z1 = | x2 y2 | ,   z2 = | x3 y3 | ,   z3 = | x1 y1 |
         | x3 y3 |          | x1 y1 |          | x2 y2 |

that is, z1 = x2y3 − x3y2, z2 = x3y1 − x1y3, z3 = x1y2 − x2y1. After identifying the zi as cofactors of the elements of the third column of [X, Y, ·], establish:

(a) The vector product of two linearly dependent vectors is the zero vector.
(b) The vector product of two linearly independent vectors is orthogonal to each of the two vectors.
(c) X×Y = -(Y×X)
(d) (kX)×Y = k(X×Y) = X×(kY), k a scalar.

41. If W, X, Y, Z are four vectors of V_3(R), establish:

(a) X×(Y+Z) = X×Y + X×Z

(b) X·(Y×Z) = Y·(Z×X) = Z·(X×Y) = |X Y Z|

(c) (W×X)·(Y×Z) = | W·Y  W·Z |
                  | X·Y  X·Z |

(d) (X×Y)·(X×Y) = | X·X  X·Y |
                  | Y·X  Y·Y |
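The identities of Problem 41 are easy to spot-check on random vectors; a sketch (NumPy, my own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W, X, Y, Z = rng.standard_normal((4, 3))

# 41(b): the triple product X.(Y x Z) equals the determinant |X Y Z|.
print(np.isclose(X @ np.cross(Y, Z),
                 np.linalg.det(np.column_stack([X, Y, Z]))))    # True

# 41(c): (W x X).(Y x Z) = (W.Y)(X.Z) - (W.Z)(X.Y)
print(np.isclose(np.cross(W, X) @ np.cross(Y, Z),
                 (W @ Y) * (X @ Z) - (W @ Z) * (X @ Y)))        # True
```
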
chapter 14

Vectors Over the Complex Field

COMPLEX NUMBERS. If x and y are real numbers and i is defined by the relation i² = −1, z = x + iy is called a complex number. The real number x is called the real part and the real number y is called the imaginary part of x + iy.

Two complex numbers are equal if and only if the real and imaginary parts of one are equal respectively to the real and imaginary parts of the other.

A complex number x + iy = 0 if and only if x = y = 0.

The conjugate of the complex number z = x + iy is given by z̄ = x − iy. The sum (product) of any complex number and its conjugate is a real number.

The absolute value |z| of the complex number z = x + iy is given by |z| = √(z·z̄) = √(x² + y²). It follows immediately that for any complex number z = x + iy,

(14.1)    |z| ≥ |x|    and    |z| ≥ |y|

VECTORS. Let X be an n-vector over the complex field C. The totality of such vectors constitutes the vector space V_n(C). Since R is a subfield of C, it is to be expected that each theorem concerning vectors of V_n(C) will reduce to a theorem of Chapter 13 when only real vectors are considered.

If X = [x1, x2, ..., xn]' and Y = [y1, y2, ..., yn]' are two vectors of V_n(C), their inner product is defined as

(14.2)    X·Y = X̄'Y = x̄1y1 + x̄2y2 + ... + x̄nyn

where x̄i denotes the conjugate of xi. The following laws governing inner products are readily verified:

         (a) X·Y = the conjugate of Y·X
         (b) (cX)·Y = c̄(X·Y)
(14.3)   (c) X·(cY) = c(X·Y)
         (d) X·(Y+Z) = X·Y + X·Z
         (e) (Y+Z)·X = Y·X + Z·X
         (f) X·Y + Y·X = 2R(X·Y), where R(X·Y) is the real part of X·Y
         (g) X·Y − Y·X = 2iC(X·Y), where C(X·Y) is the imaginary part of X·Y
See Problem 1.

The length of a vector X is given by ||X|| = √(X·X) = √(x̄1x1 + x̄2x2 + ... + x̄nxn).

Two vectors X and Y are orthogonal if X·Y = Y·X = 0.

For vectors of V_n(C), the Triangle Inequality

(14.4)    ||X+Y|| ≤ ||X|| + ||Y||

and the Schwarz Inequality (see Problem 2)

(14.5)    |X·Y| ≤ ||X||·||Y||

hold. Moreover, we have (see Theorems I-IV of Chapter 13)
CHAP. 14] VECTORS OVER THE COMPLEX FIELD 111

I. Any set of m mutually orthogonal non-zero n-vectors over C is linearly independent and, hence, spans a vector space V_n^m(C).

II. If a vector Y is orthogonal to each of the n-vectors X1, X2, ..., Xm, then it is orthogonal to the space spanned by these vectors.

III. If V_n^h(C) is a subspace of V_n^k(C), k > h, then there exists at least one vector X in V_n^k(C) which is orthogonal to V_n^h(C).

IV. Every vector space V_n^m(C), m > 0, contains m but not more than m mutually orthogonal vectors.

A basis of V_n^m(C) which consists of mutually orthogonal vectors is called an orthogonal basis. If the mutually orthogonal vectors are also unit vectors, the basis is called a normal or orthonormal basis.

THE GRAM-SCHMIDT PROCESS. Let X1, X2, ..., Xm be a basis for V_n^m(C). Define

         Y1 = X1

         Y2 = X2 − [(Y1·X2)/(Y1·Y1)] Y1

(14.6)   Y3 = X3 − [(Y2·X3)/(Y2·Y2)] Y2 − [(Y1·X3)/(Y1·Y1)] Y1

         ..........................

         Ym = Xm − [(Y(m-1)·Xm)/(Y(m-1)·Y(m-1))] Y(m-1) − ... − [(Y1·Xm)/(Y1·Y1)] Y1

The unit vectors Gi = Yi/||Yi||, (i = 1, 2, ..., m), are an orthonormal basis for V_n^m(C).

V. If X1, X2, ..., Xs, (1 ≤ s < m), are mutually orthogonal unit vectors of V_n^m(C), there exist unit vectors (obtained by the Gram-Schmidt process) X(s+1), ..., Xm in the space such that the set X1, X2, ..., Xm is an orthonormal basis.

THE GRAMIAN. Let X1, X2, ..., Xp be a set of n-vectors with complex elements and define the Gramian matrix

              | X1·X1  X1·X2  ...  X1·Xp |
(14.7)    G = | X2·X1  X2·X2  ...  X2·Xp |   =   [X̄i'Xj]
              | ........................ |
              | Xp·X1  Xp·X2  ...  Xp·Xp |

Clearly, the vectors are mutually orthogonal if and only if G is diagonal.

Following Problem 14, Chapter 17, we may prove

VI. For a set of n-vectors X1, X2, ..., Xp with complex elements, |G| ≥ 0. The equality holds if and only if the vectors are linearly dependent.

UNITARY MATRICES. An n-square matrix A is called unitary if (Ā)'A = A(Ā)' = I, that is, if (Ā)' = A⁻¹.

The column vectors (row vectors) of a unitary matrix are mutually orthogonal unit vectors.


VII. The column vectors (row vectors) of an re-square unitary matrix are an orthonormal
basis of l^(C), and conversely.

VIII. The inverse and the transpose of a unitary matrix are unitary.

IX. The product of two or more unitary matrices is unitary.

X. The determinant of a unitary matrix has absolute value 1.
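A small numerical illustration of the definition (this particular matrix is my own example, not one from the text):

```python
import numpy as np

U = np.array([[1,   1],
              [1j, -1j]]) / np.sqrt(2)

# (U-bar)'U = I: the columns are mutually orthogonal unit vectors (Theorem VII).
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
# Theorem X: the determinant has absolute value 1.
print(round(abs(np.linalg.det(U)), 6))          # 1.0
```
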

UNITARY TRANSFORMATIONS. The linear transformation

(14.8)    Y = AX

where A is unitary, is called a unitary transformation.

XI. A linear transformation preserves lengths (and hence, inner products) if and only if its matrix is unitary.

XII. If Y = AX is a transformation of coordinates from the E-basis to another, the Z-basis, then the Z-basis is orthonormal if and only if A is unitary.

SOLVED PROBLEMS

1. Given X = [1+i, -i, 1]' and Y = [2+3i, 1-2i, i]',

(a) find X·Y and Y·X            (c) verify X·Y + Y·X = 2R(X·Y)
(b) verify X·Y = conjugate of Y·X    (d) verify X·Y − Y·X = 2iC(X·Y)

(a) X·Y = X̄'Y = [1-i, i, 1][2+3i, 1-2i, i]' = (1-i)(2+3i) + i(1-2i) + 1(i) = (5+i) + (2+i) + i = 7 + 3i

    Y·X = Ȳ'X = [2-3i, 1+2i, -i][1+i, -i, 1]' = (2-3i)(1+i) + (1+2i)(-i) + (-i)(1) = (5-i) + (2-i) − i = 7 − 3i

(b) From (a): the conjugate of Y·X is 7 + 3i = X·Y.

(c) X·Y + Y·X = (7+3i) + (7-3i) = 14 = 2(7) = 2R(X·Y)

(d) X·Y − Y·X = (7+3i) − (7-3i) = 6i = 2i(3) = 2iC(X·Y)
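Problem 1's arithmetic can be verified with NumPy (`np.vdot` conjugates its first argument, matching definition (14.2)); this check is mine, not part of the text:

```python
import numpy as np

X = np.array([1 + 1j, -1j, 1])
Y = np.array([2 + 3j, 1 - 2j, 1j])

xy = np.vdot(X, Y)   # (14.2): conjugate the components of X first
yx = np.vdot(Y, X)

print(xy)                  # (7+3j), as in Problem 1(a)
print(yx)                  # (7-3j)
print(xy == np.conj(yx))   # True, verifying (14.3)(a)
```
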

2. Prove the Schwarz Inequality: |X·Y| ≤ ||X||·||Y||.

As in the case of real vectors, the inequality is true if X = 0 or Y = 0. When X and Y are non-zero vectors and a is real, then

    ||aX + Y||² = (aX+Y)·(aX+Y) = a²X·X + a(X·Y + Y·X) + Y·Y = a²||X||² + 2aR(X·Y) + ||Y||² ≥ 0

Since the quadratic function in a is non-negative if and only if its discriminant is non-positive,

    R(X·Y)² − ||X||²||Y||² ≤ 0    and    R(X·Y) ≤ ||X||·||Y||

If X·Y = 0, then |X·Y| = 0 ≤ ||X||·||Y||. If X·Y ≠ 0, define c = (X·Y)/|X·Y|. Then R(cX·Y) ≤ ||cX||·||Y|| = |c|·||X||·||Y|| = ||X||·||Y|| while, by (14.3(b)), R(cX·Y) = R[c̄(X·Y)] = |X·Y|. Thus, |X·Y| ≤ ||X||·||Y|| for all X and Y.

3. Prove: B = (Ā)'A is Hermitian for any square matrix A.

    (B̄)' = (A'Ā)' = (Ā)'A = B and B is Hermitian.

4. If A = B + iC is Hermitian, show that (Ā)'A is real if and only if B and C anti-commute.

Since A = B + iC is Hermitian, (Ā)' = A; thus,

    (Ā)'A = A² = (B + iC)(B + iC) = B² + i(BC + CB) − C²

This is real if and only if BC + CB = 0, or BC = −CB; thus, if and only if B and C anti-commute.

5. Prove: If A is skew-Hermitian, then ±iA is Hermitian.

Consider B = −iA. Since A is skew-Hermitian, (Ā)' = −A. Then

    (B̄)' = (iĀ)' = i(Ā)' = i(−A) = −iA = B

and B is Hermitian. The reader will consider the case B = iA.

SUPPLEMENTARY PROBLEMS

6. Given the vectors X1 = [i, 2i, 1]', X2 = [1, 1+i, 0]', and X3 = [i, 1-i, 2]',
(a) find X1·X2 and X2·X3,
(b) find the length of each vector Xi,
(c) show that [1-i, -1, 1-i]' is orthogonal to both X1 and X2,
(d) find a vector orthogonal to both X1 and X3.

Ans. (a) 2-3i, -i    (b) √6, √3, √7    (d) [-1-5i, i, 3-i]'

7. Show that [1+i, i, 1]', [i, 1-i, 0]', and [1-i, 1, 3i]' are both linearly independent and mutually orthogonal.

8. Prove the relations (14.3).

9. Prove the Triangle Inequality.

10. Prove Theorems I-IV.

11. Derive the relations (14.6).



12. Using the relations (14.6) and the given vectors in order, construct an orthonormal basis for V_3(C) when the vectors are
(a) [0,1,-1]', [1+i, 1, 1]', [1-i, 1, 1]'
(b) [1+i, i, 1]', [2, 1-2i, 2+i]', [1-i, 0, -i]'

Ans. (a) [0, ½√2, -½√2]', [½(1+i), ½, ½]', [-½√2·i, ¼√2(1+i), ¼√2(1+i)]'
(b) [½(1+i), ½i, ½]', [1/(2√3), (1-5i)/(4√3), (3+3i)/(4√3)]', [(7-i)/(2√30), -5/(2√30), (-6+3i)/(2√30)]'

13. Prove: If A is a matrix over the complex field, then A + Ā has only real elements and A − Ā has only pure imaginary elements.

14. Prove Theorem V.

15. If A is n-square, show
(a) (Ā)'A is diagonal if and only if the columns of A are mutually orthogonal vectors,
(b) (Ā)'A = I if and only if the columns of A are mutually orthogonal unit vectors.

16. Prove: If X and Y are n-vectors and A is n-square, then X·(AY) = ((Ā)'X)·Y.

17. Prove Theorems VII-X.

18. Prove: If A is skew-Hermitian such that I + A is non-singular, then B = (I − A)(I + A)⁻¹ is unitary.

19. Use Problem 18 to form a unitary matrix, given

    (a) A = |   0   1+i |        (b) A = |   0    i   1+i |
            | -1+i   i  |                |   i    0    i  |
                                         | -1+i   i    0  |

    Ans. (a) (1/5) | -1+2i  -4-2i |      (b) (1/29) |  -9+8i   -10-4i   -16-18i |
                   |  2-4i  -2-i  |                 | -2-24i    1+12i   -10-4i  |
                                                    |  4-10i   -2-24i    -9+8i  |
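The construction of Problems 18-19 (a Cayley transform) is easy to check numerically; the sketch below (NumPy, my own verification) works part (a):

```python
import numpy as np

A = np.array([[0, 1 + 1j],
              [-1 + 1j, 1j]])          # skew-Hermitian: (A-bar)' = -A
I = np.eye(2)

B = (I - A) @ np.linalg.inv(I + A)     # Problem 18's construction

print(np.allclose(A.conj().T, -A))         # True
print(np.allclose(B.conj().T @ B, I))      # True: B is unitary
print(np.allclose(B * 5, np.array([[-1 + 2j, -4 - 2j],
                                   [2 - 4j, -2 - 1j]])))    # True
```
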

20. Prove: If A and B are unitary and of the same order, then AB and BA are unitary.

21. Follow the proof in Problem 10, Chapter 13, to prove Theorem XI.

22. Prove: If A is unitary and Hermitian, then A is involutory.

23. Show that

    | ½(1+i)    i/√3    (3+i)/(2√15)  |
    |  -½       1/√3    (4+3i)/(2√15) |
    |   ½      -i/√3      5i/(2√15)   |

is unitary.

24. Prove: If A is unitary and if B = AP where P is non-singular, then PB⁻¹ is unitary.

25. Prove: If A is unitary and I + A is non-singular, then B = (I − A)(I + A)⁻¹ is skew-Hermitian.
chapter 15

Congruence

CONGRUENT MATRICES. Two n-square matrices A and B over F are called congruent over F, written B ≅ A, if there exists a non-singular matrix P over F such that

(15.1)    B = P'AP

Clearly, congruence is a special case of equivalence so that congruent matrices have the same rank.

When P is expressed as a product of elementary column matrices, P' is the product in reverse order of the corresponding elementary row matrices; that is, A and B are congruent provided A can be reduced to B by a sequence of pairs of elementary transformations, each pair consisting of an elementary row transformation followed by the same elementary column transformation.

SYMMETRIC MATRICES. In Problem 1, we prove

I. Every symmetric matrix A over F of rank r is congruent over F to a diagonal matrix whose first r diagonal elements are non-zero while all other elements are zero.

Example 1. Find a non-singular matrix P with rational elements such that D = P'AP is diagonal, given

        | 1  2   3   2 |
    A = | 2  3   5   8 |
        | 3  5   8  10 |
        | 2  8  10  -8 |

In reducing A to D, we use [A | I] and calculate en route the matrix P'. First we use H21(-2) and K21(-2), then H31(-3) and K31(-3), then H41(-2) and K41(-2) to obtain zeroes in the first row and in the first column. Considerable time is saved, however, if the three row transformations are made first and then the three column transformations. If A is not then transformed into a symmetric matrix, an error has been made. We have

            | 1  2   3   2 | 1 0 0 0 |       | 1   0   0    0 |  1 0 0 0 |
    [A|I] = | 2  3   5   8 | 0 1 0 0 |   ~   | 0  -1  -1    4 | -2 1 0 0 |
            | 3  5   8  10 | 0 0 1 0 |       | 0  -1  -1    4 | -3 0 1 0 |
            | 2  8  10  -8 | 0 0 0 1 |       | 0   4   4  -12 | -2 0 0 1 |

Next we use H32(-1) and K32(-1), then H42(4) and K42(4), and finally H34 and K34 to obtain

        | 1   0  0  0 |   1  0  0  0 |
    ~   | 0  -1  0  0 |  -2  1  0  0 |   =   [D | P']
        | 0   0  4  0 | -10  4  0  1 |
        | 0   0  0  0 |  -1 -1  1  0 |
116 CONGRUENCE [CHAP. 15

Then

        | 1  -2  -10  -1 |
    P = | 0   1    4  -1 |        and        P'AP = D = diag(1, -1, 4, 0)
        | 0   0    0   1 |
        | 0   0    1   0 |
The matrix D to which A has been reduced is not unique. The additional transformations H3(½) and K3(½), for example, will replace D by the diagonal matrix diag(1, -1, 1, 0), while the transformations H2(3) and K2(3) replace D by diag(1, -9, 4, 0). There is, however, no pair of rational or real transformations which will replace D by a diagonal matrix having only non-negative elements in the diagonal.

REAL SYMMETRIC MATRICES. Let the real symmetric matrix A be reduced by real elementary transformations to a congruent diagonal matrix D, that is, let P'AP = D. While the non-zero diagonal elements of D depend both on A and P, it will be shown in Chapter 17 that the number of positive non-zero diagonal elements depends solely on A.

By a sequence of row and the same column transformations of type 1, the diagonal elements of D may be rearranged so that the positive elements precede the negative elements. Then a sequence of real row and the same column transformations of type 2 may be used to reduce the diagonal matrix to one in which the non-zero diagonal elements are either +1 or -1. We have

II. A real symmetric matrix of rank r is congruent over the real field to a canonical matrix

(15.2)    C = diag(I_p, -I_(r-p), 0)

The integer p of (15.2) is called the index of the matrix and s = p − (r−p) is called the signature.
Example 2. Applying the transformations H23, K23 and then H2(½), K2(½) to the result of Example 1, we have

                  | 1  0   0  0 |   1   0  0  0 |
    [A|I] ~ ~     | 0  1   0  0 |  -5   2  0  ½ |   =   [C | Q']
                  | 0  0  -1  0 |  -2   1  0  0 |
                  | 0  0   0  0 |  -1  -1  1  0 |

and Q'AQ = C. Thus, A is of rank r = 3, index p = 2, and signature s = 1.

III. Two n-square real symmetric matrices are congruent over the real field if and only if they have the same rank and the same index or the same rank and the same signature.

In the real field the set of all n-square matrices of the type (15.2) is a canonical set over congruence for real n-square symmetric matrices.
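Examples 1 and 2 can be verified by direct multiplication; a sketch in NumPy (my own check, not the text's method):

```python
import numpy as np

A = np.array([[1, 2,  3,  2],
              [2, 3,  5,  8],
              [3, 5,  8, 10],
              [2, 8, 10, -8]])

P = np.array([[1, -2, -10, -1],
              [0,  1,   4, -1],
              [0,  0,   0,  1],
              [0,  0,   1,  0]])

D = P.T @ A @ P
print(np.allclose(D, np.diag([1, -1, 4, 0])))   # True: D = P'AP of Example 1

# Rank 3; index p = 2 once the diagonal is scaled to +1/-1 (Example 2).
print(np.linalg.matrix_rank(D))                 # 3
```
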

IN THE COMPLEX FIELD, we have

IV. Every n-square complex symmetric matrix of rank r is congruent over the field of complex numbers to a canonical matrix

(15.3)    diag(I_r, 0)

Example 3. Applying the transformations H3(i) and K3(i) to the result of Example 2, we have

                  | 1  0  0  0 |   1    0  0  0 |
    [A|I] ~ ~     | 0  1  0  0 |  -5    2  0  ½ |   =   [diag(I_3, 0) | R']
                  | 0  0  1  0 | -2i    i  0  0 |
                  | 0  0  0  0 |  -1   -1  1  0 |

and R'AR = diag(I_3, 0).
See Problems 2-3.

V. Two n-square complex symmetric matrices are congruent over the field of complex numbers if and only if they have the same rank.

SKEW-SYMMETRIC MATRICES. If A is skew-symmetric, then

    (P'AP)' = P'A'P = P'(-A)P = -P'AP

Thus,

VI. Every matrix B = P'AP congruent to a skew-symmetric matrix A is also skew-symmetric.

In Problem 4, we prove

VII. Every n-square skew-symmetric matrix A over F is congruent over F to a canonical matrix

(15.4)    B = diag(D1, D2, ..., Dt, 0, ..., 0)

where Di = |  0  1 |, (i = 1, 2, ..., t). The rank of B is r = 2t.
           | -1  0 |
See Problem 5.

There follows

VIII. Two n-square skew-symmetric matrices over F are congruent over F if and only if they have the same rank.

The set of all matrices of the type (15.4) is a canonical set over congruence for n-square skew-symmetric matrices.

HERMITIAN MATRICES. Two n-square Hermitian matrices A and B are called Hermitely congruent, B ≅ A, or conjunctive if there exists a non-singular matrix P such that

(15.5)    B = (P̄)'AP

Thus,

IX. Two n-square Hermitian matrices are conjunctive if and only if one can be obtained from the other by a sequence of pairs of elementary transformations, each pair consisting of a column transformation and the corresponding conjugate row transformation.

X. An Hermitian matrix A of rank r is conjunctive to a canonical matrix

(15.6)    C = diag(I_p, -I_(r-p), 0)

The integer p of (15.6) is called the index of A and s = p − (r−p) is called the signature.

XI. Two n-square Hermitian matrices are conjunctive if and only if they have the same rank and index or the same rank and the same signature.

The reduction of an Hermitian matrix to the canonical form (15.6) follows the procedures of Problem 1 with attention to the proper pairs of elementary transformations. The extremely troublesome case is covered in Problem 7.
See Problems 6-7.

SKEW-HERMITIAN MATRICES. If A is skew-Hermitian, then

(FAPy = (PAT) FAP


Thus,

XII. Every matrix B = FAP conjunctive to a skew-Hermitian matrix A is also skew-


Hermitian.

By a problem of Chapter 14, H = -iA is Hermitian if A is skew-Hermitian. By Theorem X there exists a non-singular matrix P such that

     P̄'HP = C = diag(I_p, -I_{r-p}, 0)

Then i·P̄'HP = P̄'(iH)P = P̄'AP = iC and

     (15.7)     B = P̄'AP = diag(i·I_p, -i·I_{r-p}, 0)

Thus,

XIII. Every n-square skew-Hermitian matrix A is conjunctive to a matrix (15.7) in which r is the rank of A and p is the index of -iA.

XIV. Two n-square skew-Hermitian matrices A and B are conjunctive if and only if they have the same rank while -iA and -iB have the same index.
See Problem 8.
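The relation between A, H = -iA, and the invariants r and p of (15.7) can be checked on a small example (the matrix is ours):

```python
import numpy as np

# For skew-Hermitian A, H = -iA is Hermitian; the r and p of (15.7) are
# the rank of A and the index of -iA (number of positive eigenvalues of H).
A = np.array([[2j, 1 + 1j], [-1 + 1j, -3j]])
assert np.allclose(A.conj().T, -A)              # A is skew-Hermitian

H = -1j * A
assert np.allclose(H.conj().T, H)               # H = -iA is Hermitian
p = int(np.sum(np.linalg.eigvalsh(H) > 1e-10))  # index of -iA
r = np.linalg.matrix_rank(A)
assert (r, p) == (2, 1)                         # det(H) < 0: one eigenvalue of each sign
```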

SOLVED PROBLEMS
1. Prove: Every symmetric matrix over F of rank r can be reduced to a diagonal matrix having exactly r non-zero elements in the diagonal.

Suppose the symmetric matrix A = [a_ij] is not diagonal. If a_11 ≠ 0, a sequence of pairs of elementary transformations of type 3, each consisting of a row transformation and the same column transformation, will reduce A to

     [ a_11   0    ...   0    ]
     [  0    b_22  ...  b_2n  ]
     [ ...................... ]
     [  0    b_n2  ...  b_nn  ]

Now the continued reduction is routine so long as b_22, c_33, ... are different from zero. Suppose then that somewhere along in the reduction we have obtained the matrix

     [ h_11                                          ]
     [       ...                 0                   ]
     [            h_ss                               ]
     [                  k_{s+1,s+1}  ...  k_{s+1,n}  ]
     [        0         ..........................   ]
     [                  k_{n,s+1}    ...  k_{n,n}    ]

in which the diagonal element k_{s+1,s+1} = 0. If every k_ij = 0, we have proved the theorem with s = r. If, however, some k_ij, say k_{s+u,s+v} ≠ 0, we move it into the (s+1, s+1) position by the proper row and column transformation of type 1 when u = v; otherwise, we add the (s+u)th row to the (s+v)th row and, after the corresponding column transformation, have a diagonal element different from zero. (When a_11 = 0, we proceed as in the case k_{s+1,s+1} = 0 above.)

Since we are led to a sequence of congruent matrices, A is ultimately reduced to a diagonal matrix whose first r diagonal elements are non-zero while all other elements are zero.
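The reduction of Problem 1 can be sketched for real symmetric matrices as follows. This is our own simplified rendering: here P accumulates the row transformations, so the routine returns D = P A P', i.e. this P plays the role of the book's P'.

```python
import numpy as np

# Diagonalize a real symmetric matrix by paired row/column transformations.
def congruent_diagonalize(A, tol=1e-12):
    A = A.astype(float).copy()
    n = A.shape[0]
    P = np.eye(n)

    def swap(i, j):                 # type-1: interchange rows i,j and columns i,j
        A[[i, j]] = A[[j, i]]; A[:, [i, j]] = A[:, [j, i]]; P[[i, j]] = P[[j, i]]

    for s in range(n):
        if abs(A[s, s]) < tol:
            # bring a non-zero diagonal entry into the (s, s) position, if any
            d = [u for u in range(s + 1, n) if abs(A[u, u]) > tol]
            if d:
                swap(s, d[0])
            else:
                # all remaining diagonal entries vanish; find k_uv != 0
                od = [(u, v) for u in range(s, n) for v in range(u + 1, n)
                      if abs(A[u, v]) > tol]
                if not od:
                    break           # trailing block is zero: finished with s = r
                u, v = od[0]
                # add row u to row v, then the same for columns ...
                A[v] += A[u]; A[:, v] += A[:, u]; P[v] += P[u]
                swap(s, v)          # ... giving diagonal entry 2*A[u,v] != 0
        for t in range(s + 1, n):   # paired type-3 transformations clear row/col s
            m = A[t, s] / A[s, s]
            A[t] -= m * A[s]; A[:, t] -= m * A[:, s]; P[t] -= m * P[s]
    return A, P                     # D diagonal with D = P @ A0 @ P.T

A0 = np.array([[0., 1., 2.], [1., 0., 3.], [2., 3., 0.]])
D, P = congruent_diagonalize(A0)
assert np.allclose(D, P @ A0 @ P.T)
assert np.allclose(D, np.diag(np.diag(D)))
```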

2. Reduce the symmetric matrix

         [ 1  2  2 ]
     A = [ 2  3  5 ]
         [ 2  5  5 ]

to canonical form (15.2) and to canonical form (15.3). In each case obtain the matrix P which effects the reduction.

Applying to [A | I] each row transformation together with the corresponding column transformation (the latter acting on the left block only):

     [ 1 2 2 | 1 0 0 ]     [ 1  0 0 |  1 0 0 ]     [ 1  0 0 |  1 0 0 ]
     [ 2 3 5 | 0 1 0 ]  ~  [ 0 -1 1 | -2 1 0 ]  ~  [ 0 -1 0 | -2 1 0 ]   =   [D | P']
     [ 2 5 5 | 0 0 1 ]     [ 0  1 1 | -2 0 1 ]     [ 0  0 2 | -4 1 1 ]

To obtain (15.2), we multiply the third row and the third column of [D | P'] by ½√2 and then interchange the second and third rows and columns:

     [ 1 0  0 |   1     0    0  ]
     [ 0 1  0 | -2√2  ½√2  ½√2  ]   =   [C | P_1']
     [ 0 0 -1 |  -2     1    0  ]

and

         [ 1  -2√2  -2 ]
     P = [ 0   ½√2   1 ],          P'AP = diag(1, 1, -1)
         [ 0   ½√2   0 ]

To obtain (15.3), we multiply the third row and the third column of [C | P_1'] by -i:

     [ 1 0 0 |   1     0    0  ]
     [ 0 1 0 | -2√2  ½√2  ½√2  ]
     [ 0 0 1 |  2i    -i    0  ]

and

         [ 1  -2√2  2i ]
     P = [ 0   ½√2  -i ],          P'AP = I_3
         [ 0   ½√2   0 ]
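A numerical check of Problem 2, taking A and the two matrices P as stated in the worked solution (the variable names P1, P2 are ours):

```python
import numpy as np

A = np.array([[1., 2., 2.], [2., 3., 5.], [2., 5., 5.]])
r2 = np.sqrt(2)

P1 = np.array([[1, -2*r2, -2],        # P effecting canonical form (15.2)
               [0, r2/2,   1],
               [0, r2/2,   0]])
assert np.allclose(P1.T @ A @ P1, np.diag([1., 1., -1.]))

P2 = np.array([[1, -2*r2, 2j],        # P effecting canonical form (15.3)
               [0, r2/2, -1j],
               [0, r2/2,  0]])
assert np.allclose(P2.T @ A @ P2, np.eye(3))
```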
3. Find a non-singular matrix P such that P'AP is in canonical form (15.3), given

         [ 1     i     1+i   ]
     A = [ i     0     2-i   ]
         [ 1+i   2-i   10+2i ]

     [ 1    i    1+i   | 1 0 0 ]     [ 1   0     0    |  1    0  0 ]
     [ i    0    2-i   | 0 1 0 ]  ~  [ 0   1     3-2i |  -i   1  0 ]
     [ 1+i  2-i  10+2i | 0 0 1 ]     [ 0   3-2i  10   | -1-i  0  1 ]

         [ 1  0    0    |  1     0     0 ]
      ~  [ 0  1    0    | -i     1     0 ]
         [ 0  0  5+12i  | 1+2i  -3+2i  1 ]

Since (3+2i)² = 5+12i, multiplying the third row and the third column by 1/(3+2i) = (3-2i)/13 gives

         [ 1  0  0 |     1           0           0      ]
      ~  [ 0  1  0 |    -i           1           0      ]   =   [I_3 | P']
         [ 0  0  1 | (7+4i)/13  (-5+12i)/13  (3-2i)/13  ]

Here

         [ 1   -i    (7+4i)/13  ]
     P = [ 0    1   (-5+12i)/13 ]
         [ 0    0    (3-2i)/13  ]
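A numerical check of Problem 3, taking A and P as stated in the worked solution; note that P' here is the plain transpose (congruence over the complex field), not the conjugate transpose.

```python
import numpy as np

A = np.array([[1,      1j,     1 + 1j],
              [1j,     0,      2 - 1j],
              [1 + 1j, 2 - 1j, 10 + 2j]])

P = np.array([[1, -1j, (7 + 4j)/13],
              [0,  1,  (-5 + 12j)/13],
              [0,  0,  (3 - 2j)/13]])

# P'AP (plain transpose) reduces A to canonical form (15.3) with r = 3.
assert np.allclose(P.T @ A @ P, np.eye(3))
```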

4. Prove: Every n-square skew-symmetric matrix A over F of rank 2t is congruent over F to a matrix

     B = diag(D_1, D_2, ..., D_t, 0, ..., 0)

where

     D_i = [  0  1 ],     (i = 1, 2, ..., t)
           [ -1  0 ]

If A = 0, then B = A. If A ≠ 0, then some a_ij = -a_ji ≠ 0. Interchange the ith and first rows and the jth and second rows; then interchange the ith and first columns and the jth and second columns to replace A by a skew-symmetric matrix whose upper left-hand corner is

     [   0     a_ij ]
     [ -a_ij    0   ]

Next multiply the first row and the first column by 1/a_ij

to obtain

     [  0  1 ]
     [ -1  0 ]

and from it, by elementary row and column transformation