Matrix algebra
The mystique surrounding matrix algebra is perhaps due to texts on the subject
requiring a student to swallow too much at one time. It will be found that in order
to follow the present text and carry out the necessary computations only a limited
knowledge of a few basic definitions is required.
Definition of a matrix
The linear relationship between a set of variables x and b
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 &= b_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 &= b_3
\end{aligned}
\tag{A.1}
\]
can be written, in a shorthand way, as
[A] {x} = {b} (A.2)
or
Ax = b (A.3)
where
\[
\mathbf{A} = [A] =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}
\qquad
\mathbf{x} = \{x\} =
\begin{Bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{Bmatrix}
\qquad
\mathbf{b} = \{b\} =
\begin{Bmatrix} b_1 \\ b_2 \\ b_3 \end{Bmatrix}
\tag{A.4}
\]
The above notation contains within it the definition of both a matrix and the process
of multiplication of two matrices. Matrices are defined as arrays of numbers of the
type shown in Eq. (A.4). The particular form listing a single column of numbers is
often referred to as a vector or column matrix, whereas a matrix with multiple columns
and rows is called a rectangular matrix. The multiplication of a matrix by a column
vector is defined by the equivalence of the left and right sides of Eqs (A.1) and (A.2).
The use of bold characters to define both vectors and matrices will be followed
throughout the text, generally with lower case letters denoting vectors and capital letters
matrices.
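As a quick numerical illustration of the matrix–vector product defined by Eqs (A.1)–(A.3), the following sketch uses NumPy; the particular numbers are arbitrary examples, not taken from the text:

```python
import numpy as np

# Illustrative 3x4 coefficient matrix A and 4-vector x (example values only)
A = np.array([[1.0,  2.0,  3.0,  4.0],
              [5.0,  6.0,  7.0,  8.0],
              [9.0, 10.0, 11.0, 12.0]])
x = np.array([1.0, 0.0, -1.0, 2.0])

# Each b_i = a_i1*x_1 + a_i2*x_2 + a_i3*x_3 + a_i4*x_4, as in Eq. (A.1)
b = A @ x
print(b)  # -> [ 6. 14. 22.]
```

Each entry of `b` is the row-by-column sum of products that Eq. (A.1) writes out in full.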
If another relationship, using the same coefficients $a_{ij}$ but a different set of variables $x'$ and $b'$,
exists and is written as
\[
\begin{aligned}
a_{11}x'_1 + a_{12}x'_2 + a_{13}x'_3 + a_{14}x'_4 &= b'_1 \\
a_{21}x'_1 + a_{22}x'_2 + a_{23}x'_3 + a_{24}x'_4 &= b'_2 \\
a_{31}x'_1 + a_{32}x'_2 + a_{33}x'_3 + a_{34}x'_4 &= b'_3
\end{aligned}
\tag{A.5}
\]
then we could write
[A] [X] = [B] or AX = B (A.6)
in which
\[
\mathbf{X} = [X] =
\begin{bmatrix}
x_1 & x'_1 \\
x_2 & x'_2 \\
x_3 & x'_3 \\
x_4 & x'_4
\end{bmatrix}
\qquad
\mathbf{B} = [B] =
\begin{bmatrix}
b_1 & b'_1 \\
b_2 & b'_2 \\
b_3 & b'_3
\end{bmatrix}
\tag{A.7}
\]
implying that both statements (A.1) and (A.5) are arranged simultaneously as
\[
\begin{bmatrix}
a_{11}x_1 + \cdots & a_{11}x'_1 + \cdots \\
a_{21}x_1 + \cdots & a_{21}x'_1 + \cdots \\
a_{31}x_1 + \cdots & a_{31}x'_1 + \cdots
\end{bmatrix}
= [B] =
\begin{bmatrix}
b_1 & b'_1 \\
b_2 & b'_2 \\
b_3 & b'_3
\end{bmatrix}
\tag{A.8}
\]
It is seen, incidentally, that matrices can be equal only if each of the individual terms
is equal.
The multiplication of full matrices is defined above, and it is obvious that it has a
meaning only if the number of columns in $\mathbf{A}$ is equal to the number of rows in $\mathbf{X}$ for
a relation of the type (A.6). One property that distinguishes matrix multiplication is
that, in general,
\[
\mathbf{A}\mathbf{X} \neq \mathbf{X}\mathbf{A}
\]
i.e., multiplication of matrices is not commutative as in ordinary algebra.
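The lack of commutativity is easy to verify numerically; a minimal sketch with arbitrary example matrices:

```python
import numpy as np

# Two illustrative square matrices (example values only)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# In general AX differs from XA: matrix multiplication is not commutative
print(A @ X)  # [[2. 1.], [4. 3.]]  -- columns of A swapped
print(X @ A)  # [[3. 4.], [1. 2.]]  -- rows of A swapped
```

Here multiplying by the permutation matrix `X` on the right swaps the columns of `A`, while multiplying on the left swaps its rows, so the two products differ.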
Matrix addition or subtraction
If the relations (A.1) and (A.5) are added then we have
\[
\begin{aligned}
a_{11}(x_1 + x'_1) + a_{12}(x_2 + x'_2) + a_{13}(x_3 + x'_3) + a_{14}(x_4 + x'_4) &= b_1 + b'_1 \\
a_{21}(x_1 + x'_1) + a_{22}(x_2 + x'_2) + a_{23}(x_3 + x'_3) + a_{24}(x_4 + x'_4) &= b_2 + b'_2 \\
a_{31}(x_1 + x'_1) + a_{32}(x_2 + x'_2) + a_{33}(x_3 + x'_3) + a_{34}(x_4 + x'_4) &= b_3 + b'_3
\end{aligned}
\tag{A.9}
\]
which will also follow from
\[
\mathbf{A}\mathbf{x} + \mathbf{A}\mathbf{x}' = \mathbf{b} + \mathbf{b}'
\]
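The additive property just stated can be checked numerically; a minimal sketch with example values:

```python
import numpy as np

A  = np.array([[1.0, 2.0],
               [3.0, 4.0]])     # example coefficient matrix
x  = np.array([1.0, 2.0])       # first set of variables
xp = np.array([0.5, -1.0])      # the "primed" set x'

# Adding the relations term by term: A(x + x') = Ax + Ax' = b + b'
lhs = A @ (x + xp)
rhs = A @ x + A @ xp
print(np.allclose(lhs, rhs))  # -> True
```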
Consider, for example, a set of $n$ forces listed as the vector
\[
\mathbf{f} =
\begin{Bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{Bmatrix}
\tag{A.14}
\]
These, in turn, are often associated with the same number of displacements given by
another vector, say,
\[
\mathbf{a} =
\begin{Bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{Bmatrix}
\tag{A.15}
\]
It is known that the work is represented as a sum of products of force and displacement
\[
W = \sum_{k=1}^{n} f_k u_k
\]
Clearly the transpose becomes useful here as we can write, by the rule of matrix
multiplication,
\[
W = \begin{bmatrix} f_1 & f_2 & \ldots & f_n \end{bmatrix}
\begin{Bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{Bmatrix}
= \mathbf{f}^{T}\mathbf{a} = \mathbf{a}^{T}\mathbf{f}
\tag{A.16}
\]
Use of this fact is made frequently in this book.
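The work expression of Eq. (A.16) is simply an inner product; a short NumPy sketch with illustrative force and displacement values:

```python
import numpy as np

# Example force and displacement vectors (values are illustrative only)
f = np.array([10.0, -5.0, 2.0])
u = np.array([0.1, 0.2, 0.5])

# W = sum_k f_k u_k = f^T u = u^T f, as in Eq. (A.16)
W = f @ u
print(W)  # -> 1.0
```

Note that because the result is a scalar, the order of the two vectors in the product does not matter, which is exactly the statement $\mathbf{f}^{T}\mathbf{a} = \mathbf{a}^{T}\mathbf{f}$.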
Transpose of a product
An operation that sometimes occurs is that of taking the transpose of a matrix product.
It can be left to the reader to prove from the previous definitions that
\[
(\mathbf{A}\mathbf{B})^{T} = \mathbf{B}^{T}\mathbf{A}^{T}
\tag{A.17}
\]
Symmetric matrices
In structural problems symmetric matrices are often encountered. If a term of a matrix
$\mathbf{A}$ is defined as $a_{ij}$, then for a symmetric matrix
\[
a_{ij} = a_{ji} \qquad \text{or} \qquad \mathbf{A} = \mathbf{A}^{T}
\]
A symmetric matrix must be square. It can be shown that the inverse of a symmetric
matrix is also symmetric:
\[
\mathbf{A}^{-1} = \left(\mathbf{A}^{-1}\right)^{T} = \left(\mathbf{A}^{T}\right)^{-1}
\]
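This property is easy to confirm numerically; a minimal sketch with an example symmetric matrix:

```python
import numpy as np

# An example symmetric (and invertible) matrix
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.allclose(A, A.T))          # A is symmetric -> True

Ainv = np.linalg.inv(A)
print(np.allclose(Ainv, Ainv.T))    # ... and so is its inverse -> True
```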
Partitioning
It is easy to verify that a matrix product AB in which, for example,
\[
\mathbf{A} = \left[\begin{array}{ccc|cc}
a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ \hline
a_{31} & a_{32} & a_{33} & a_{34} & a_{35}
\end{array}\right]
\qquad
\mathbf{B} = \left[\begin{array}{cc}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32} \\ \hline
b_{41} & b_{42} \\
b_{51} & b_{52}
\end{array}\right]
\]
could be obtained by dividing each matrix into submatrices, indicated by the lines, and
applying the rules of matrix multiplication first to each such submatrix as if it were
a scalar number and then carrying out further multiplication in the usual way. Thus, if
we write
\[
\mathbf{A} =
\begin{bmatrix}
\mathbf{A}_{11} & \mathbf{A}_{12} \\
\mathbf{A}_{21} & \mathbf{A}_{22}
\end{bmatrix}
\qquad
\mathbf{B} =
\begin{bmatrix}
\mathbf{B}_{1} \\
\mathbf{B}_{2}
\end{bmatrix}
\]
then
\[
\mathbf{A}\mathbf{B} =
\begin{bmatrix}
\mathbf{A}_{11}\mathbf{B}_{1} + \mathbf{A}_{12}\mathbf{B}_{2} \\
\mathbf{A}_{21}\mathbf{B}_{1} + \mathbf{A}_{22}\mathbf{B}_{2}
\end{bmatrix}
\]
can be verified as representing the complete product by further multiplication.
The essential feature of partitioning is that the size of the subdivisions has to be such as
to make products of the type $\mathbf{A}_{11}\mathbf{B}_{1}$ meaningful, i.e., the number of columns in $\mathbf{A}_{11}$
must be equal to the number of rows in $\mathbf{B}_{1}$, etc. If the above definition holds, then all
further operations can be conducted on partitioned matrices, treating each partition as
if it were a scalar.
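The partitioned product described above can be verified numerically. The sketch below uses the same 3x5 by 5x2 shapes as the example in the text, with the column split (3 | 2) and row splits (2 | 1) and (3 | 2) chosen to match; the random values are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 2))

# Partition A by columns (3 | 2) and rows (2 | 1); partition B by rows (3 | 2)
A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]

# The partitioned product, treating each block like a scalar entry
AB = np.block([[A11 @ B1 + A12 @ B2],
               [A21 @ B1 + A22 @ B2]])
print(np.allclose(AB, A @ B))  # -> True
```

Note the column split of A (3 | 2) matches the row split of B (3 | 2), so that each block product such as `A11 @ B1` conforms, as the text requires.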
It should be noted that any matrix can be multiplied by a scalar (number). Here,
obviously, the requirements of equality of appropriate rows and columns no longer
apply.
If a symmetric matrix is divided into an equal number of submatrices $\mathbf{A}_{ij}$ in rows
and columns then
\[
\mathbf{A}_{ij} = \mathbf{A}_{ji}^{T}
\]
The eigenvalue problem
An eigenvalue of a symmetric matrix $\mathbf{A}$ of size $n \times n$ is a scalar $\lambda_i$ which allows the
solution of
\[
(\mathbf{A} - \lambda_i \mathbf{I})\,\boldsymbol{\phi}_i = \mathbf{0}
\qquad \text{and} \qquad
\det(\mathbf{A} - \lambda_i \mathbf{I}) = 0
\tag{A.18}
\]
where $\boldsymbol{\phi}_i$ is called the eigenvector.
There are, of course, $n$ such eigenvalues $\lambda_i$, to each of which corresponds an eigenvector
$\boldsymbol{\phi}_i$. Such vectors can be shown to be orthonormal and we write
\[
\boldsymbol{\phi}_i^{T}\boldsymbol{\phi}_j = \delta_{ij} =
\begin{cases}
1 & \text{for } i = j \\
0 & \text{for } i \neq j
\end{cases}
\]
The full set of eigenvalues and eigenvectors can be written as
\[
\boldsymbol{\Lambda} =
\begin{bmatrix}
\lambda_1 & & 0 \\
& \ddots & \\
0 & & \lambda_n
\end{bmatrix}
\qquad
\boldsymbol{\Phi} =
\begin{bmatrix}
\boldsymbol{\phi}_1, & \ldots, & \boldsymbol{\phi}_n
\end{bmatrix}
\]
Using these, the matrix $\mathbf{A}$ may be written in its spectral form by noting from the
orthonormality conditions on the eigenvectors that
\[
\boldsymbol{\Phi}^{-1} = \boldsymbol{\Phi}^{T}
\]
then from
\[
\mathbf{A}\boldsymbol{\Phi} = \boldsymbol{\Phi}\boldsymbol{\Lambda}
\]
it follows immediately that
\[
\mathbf{A} = \boldsymbol{\Phi}\boldsymbol{\Lambda}\boldsymbol{\Phi}^{T}
\tag{A.19}
\]
The condition number (which is related to equation-solution roundoff) is defined as
\[
\operatorname{cond}(\mathbf{A}) = \frac{|\lambda_{\max}|}{|\lambda_{\min}|}
\tag{A.20}
\]
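The spectral form (A.19) and the condition number (A.20) can both be illustrated with a small symmetric example; `numpy.linalg.eigh` returns the eigenvalues and an orthonormal set of eigenvectors (the columns of `Phi`):

```python
import numpy as np

# Example symmetric matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues lam and orthonormal eigenvectors (columns of Phi)
lam, Phi = np.linalg.eigh(A)
print(np.allclose(Phi.T @ Phi, np.eye(2)))         # orthonormality -> True
print(np.allclose(Phi @ np.diag(lam) @ Phi.T, A))  # spectral form (A.19) -> True

# Condition number of Eq. (A.20)
cond = np.abs(lam).max() / np.abs(lam).min()
print(cond)  # -> 3.0
```

A large condition number indicates that roundoff errors in solving $\mathbf{A}\mathbf{x} = \mathbf{b}$ may be strongly amplified; here the ratio is a modest 3.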