

Math & Physics for Flight Testers, Volume I


Chapter 4
Matrix Algebra
Table of Contents

Introduction
Determinants
Matrices
    Example 1
    Example 2
Cramer's Rule
    Example
    Solution
References
Chapter 4
Matrix Algebra

Introduction
This chapter on matrices is a prerequisite for the chapters on Equations of Motion, Dynamics, Linear
Controls, and Flight Control Systems. The chapter deals only with applied mathematics; therefore, the
theoretical scope of the subject is limited.
The text begins with sections dealing with determinants and matrices as a prerequisite to the
remainder of the chapter.

Determinants
In a restricted sense, at least, the concept of a determinant is already familiar from elementary
algebra. In solving systems of two or three simultaneous linear equations, it is convenient to introduce
what are called determinants of the second and third order. In this chapter we will generalize these ideas
to the solution of systems of three or more linear equations.
A determinant is a function which associates a single value with a square array of numbers (the entries themselves may be real, imaginary, scalar, or vector quantities). The determinant is denoted by vertical bars on either side of the square array. Thus, if A is an (n × n) array of numbers in which i designates the row and j designates the column, the determinant of A can be written:
$$|A| = |a_{ij}| = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}$$

When the elements of the ith row and the jth column are removed from the array, the determinant of the remaining (n − 1) × (n − 1) square array is called the first-order minor of A and is denoted by Mij. It is also called the minor of aij. The signed minor, with the sign determined by the sum of the row and column indices, is called the cofactor of aij and is denoted by:

$$A_{ij} = (-1)^{i+j} M_{ij}$$
The value of the determinant is equal to the sum of the products of the elements of any single row or
column and their respective cofactors; i.e.:
$$|A| = a_{i1} A_{i1} + a_{i2} A_{i2} + \cdots + a_{in} A_{in} = \sum_{j=1}^{n} a_{ij} A_{ij}$$

for any single ith row, or:

$$|A| = a_{1j} A_{1j} + a_{2j} A_{2j} + \cdots + a_{nj} A_{nj} = \sum_{i=1}^{n} a_{ij} A_{ij}$$

for any single jth column.

Expanding a 2 × 2 determinant about the first row is the easiest. The sign of the cofactor of an
element can be determined quickly by observing that the sums of the subscripts alternate from even to
odd when advancing across rows or down columns, meaning the signs alternate also. For example, if:
$$|A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$$

the signs of the associated cofactors alternate as shown:

$$\begin{bmatrix} + & - \\ - & + \end{bmatrix}$$

By deleting the row and column of a11, we find its cofactor is just the element a22 with the sign (−1)^(1+1). Likewise, the cofactor of a12 is the element a21 with the sign (−1)^(1+2). The sum of the two products gives us the expanded value of the determinant:

$$|A| = a_{11} A_{11} + a_{12} A_{12} = a_{11} a_{22} + a_{12}(-a_{21}) = a_{11} a_{22} - a_{12} a_{21}$$
This simple example has been shown for clarity. Actual calculation of a 2 × 2 determinant is easy if we just remember it as the difference of the cross-products of the elements.
To expand a 3 × 3 determinant:
$$|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$$

If we arbitrarily choose to expand about the first row:


$$|A| = a_{11} A_{11} + a_{12} A_{12} + a_{13} A_{13}$$

$$|A| = a_{11}(+1)\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} + a_{12}(-1)\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13}(+1)\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$$

which expands to give the final solution:


$$|A| = a_{11}(a_{22} a_{33} - a_{23} a_{32}) - a_{12}(a_{21} a_{33} - a_{23} a_{31}) + a_{13}(a_{21} a_{32} - a_{22} a_{31})$$
The quick cross-multiplication method is useful for the 2 × 2 determinant, but the row or column expansion method is better for calculating the values of determinants of 3 × 3 and higher. While the general rules for evaluating determinants by hand are simple, for determinants larger than 3 × 3 the process becomes laborious. A 5 × 5 determinant would contain 120 terms of 5 factors each.
Evaluating larger determinants is an ideal task for the computer, and standard programs are available for
this task. The use of determinants for solving sets of linear equations is discussed next. Determinants are
also used in solving sets of linear differential equations in Chapter 3.
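The cofactor-expansion rule translates directly into a short recursive routine. The sketch below is a minimal Python illustration (the helper names det and minor are our own, not taken from any standard program) that expands about the first row exactly as in the 3 × 3 example above; for anything beyond hand-sized matrices a library routine such as numpy.linalg.det should be used instead.

```python
# Minimal sketch: evaluate a determinant by cofactor expansion about the first row.
# This mirrors the hand method above; it is O(n!) and not meant for large matrices.

def minor(a, i, j):
    """Return the array with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """|A| = sum over j of a[0][j] * (-1)**j * det(minor of a[0][j])."""
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j)) for j in range(len(a)))

# The matrix used in Example 1 later in this chapter; the expansion gives -19.
print(det([[3, 2, 0], [1, 5, 1], [0, 2, -1]]))   # -19
```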

Matrices
An m × n matrix is a rectangular array of quantities arranged in m rows and n columns. When
there is no possibility of confusion, matrices are often represented by single capital letters. More
commonly, however, they are represented by displaying the quantities between brackets:
$$A = [A] = [a_{ij}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
Note that aij refers to the element in the ith row and jth column of [A]. Thus, a23 is the element in
the second row and third column. Matrices having only one column (or one row) are called column (or
row) vectors. A matrix, unlike the determinant, is not assigned a "value"; it is simply an array of
quantities. Matrices may be considered as single algebraic entities and combined (added, subtracted,
multiplied) in a manner similar to the combination of ordinary numbers. It is necessary, however, to
observe specialized algebraic rules for combining matrices. These rules are somewhat more complicated
than for "ordinary" algebra. The effort required to learn the rules of matrix algebra is well justified,
however, by the simplification and organization which matrices bring to problems in linear algebra.


Two matrices having the same number of rows and the same number of columns are defined as being
conformable for addition and may be added by adding the corresponding elements; i.e.:
$$\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} + \begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & & \vdots \\ b_{m1} & \cdots & b_{mn} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & \cdots & a_{1n}+b_{1n} \\ \vdots & & \vdots \\ a_{m1}+b_{m1} & \cdots & a_{mn}+b_{mn} \end{bmatrix}$$

A scalar is a single number. A matrix of any dimension may be multiplied by a scalar by multiplying
each element of the matrix by the scalar. Matrix multiplication can be defined for any two matrices only
when the number of columns of the first is equal to the number of rows of the second matrix.
Multiplication is not defined for other matrices. This multiplication of two matrices can be stated
mathematically as:
$$[A][B] = [C]$$

$$[a_{im}][b_{mj}] = [c_{ij}]$$

$$c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj} \qquad (3.1)$$

The product of a pair of 2 × 2 matrices is:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\ a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22} \end{bmatrix}$$

This example points the way to an orderly multiplication process for matrices of any order. In the indicated product of equation (3.1), the left-hand factor [A] may be thought of as a bundle of row-vectors and the right-hand factor [B] may be thought of as a bundle of column-vectors. If the rows of [A] and the columns of [B] are treated as vectors, then cij in the resulting product [C] is the dot product of the ith row of [A] and the jth column of [B]. This rule holds for matrices of any size. Matrix multiplication is therefore a "row on column" process. The indicated product [A][B] can be carried out if [A] and [B] are conformable; again, for conformability in multiplication, the number of columns in [A] must equal the number of rows in [B]. A matrix comprised of row vectors may be transformed into a matrix of column vectors by transposing rows and columns. The transpose of matrix [A], labeled [A]T, is formed by interchanging the rows and columns of [A]. That is, the jth row vector becomes the jth column vector, and vice versa.
Matrix algebra differs significantly from "ordinary" algebra in that multiplication is not commutative. Because multiplication is non-commutative, care must be taken in describing the product [C] = [A][B]: we say that [B] is pre-multiplied by [A] or, equivalently, that [A] is post-multiplied by [B].
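The "row on column" rule and the failure of commutativity are both easy to see in a few lines of code. The following is a minimal Python sketch with our own helper name matmul (not a library routine); each cij is formed as the dot product of row i of [A] with column j of [B].

```python
# Minimal sketch of "row on column" multiplication: c[i][j] is the dot product
# of row i of A (m x p) with column j of B (p x n); the result C is m x n.

def matmul(a, b):
    m, p, n = len(a), len(b), len(b[0])
    assert all(len(row) == p for row in a), "columns of A must equal rows of B"
    return [[sum(a[i][k] * b[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]
print(matmul(B, A))   # [[23, 34], [31, 46]] -- AB and BA differ (non-commutative)
```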
The identity (or unit) matrix [I] occupies the same position in matrix algebra that the value of unity
does in ordinary algebra. That is, for any matrix [A]:
[I][A] = [A][I] = [A]
The identity [I] is a square matrix consisting of ones on the principal diagonal and zeros everywhere else; i.e.:
$$[I] = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

The order (the number of rows and columns) of an identity matrix depends entirely on the
requirement for conformability with adjacent matrices.


Now that matrix multiplication has been defined, it is natural to ask whether there is some way to divide matrices. There is not, properly speaking, a division operation in matrix algebra; however, an equivalent result is obtained through the use of the inverse matrix. In ordinary algebra, every number a (except zero) has a multiplicative inverse a⁻¹ defined as follows:

$$a \cdot a^{-1} = a^{1-1} = a^0 = 1$$

In the same way, the matrix [A]⁻¹ is called the inverse matrix of [A] since:

$$[A][A]^{-1} = [A]^{-1}[A] = [A]^0 = [I]$$
Matrices which cannot be inverted are called singular. For inversion to be possible, a matrix must
possess a determinant not equal to zero. There is a straightforward four-step method for computing the
inverse of a given matrix [A]:

Step 1 Compute the determinant of [A]. This determinant is written |A|. If the determinant is zero or
does not exist, the matrix [A] is defined as singular and an inverse cannot be found.

Step 2 Transpose matrix [A]. The resultant matrix is written [A]T.

Step 3 Replace each element aij of the transposed matrix by its cofactor Aij. This resulting matrix is
defined as the adjoint of matrix [A] and is written Adj[A].

Step 4 Divide the adjoint matrix by the scalar value of the determinant of [A] which was computed in
Step 1. The resulting matrix is the inverse and is written [A]⁻¹.

From the definition of the inverse matrix, [A]⁻¹[A] = [I], the computed inverse may be checked.
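The four steps can be followed literally in code. Below is a minimal Python sketch (the helper names are our own; it reuses the det and minor routines from the cofactor-expansion sketch in the Determinants section), not a production inversion routine, which in practice would be something like numpy.linalg.inv.

```python
# Minimal sketch of the four-step inverse: determinant, transpose, adjoint
# (cofactors of the transpose), then division by the determinant.
# Reuses det() and minor() from the cofactor-expansion sketch above.

def transpose(a):
    return [list(col) for col in zip(*a)]

def inverse(a):
    d = det(a)                                        # Step 1: determinant
    if d == 0:
        raise ValueError("matrix is singular; no inverse exists")
    at = transpose(a)                                 # Step 2: transpose
    n = len(at)
    adj = [[(-1) ** (i + j) * det(minor(at, i, j))    # Step 3: adjoint
            for j in range(n)] for i in range(n)]
    return [[e / d for e in row] for row in adj]      # Step 4: divide by |A|

A = [[3, 2, 0], [1, 5, 1], [0, 2, -1]]   # the matrix of Example 1 below
print(inverse(A))   # equals -1/19 times [[-7, 2, 2], [1, -3, -3], [2, -6, 13]]
```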
Consider the set of algebraic equations:
$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= y_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= y_2 \\ &\;\;\vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n &= y_n \end{aligned} \qquad (3.2)$$
That is:
$$[A][X] = [Y]$$

Assuming that the inverse of [A] has been computed, both sides of this equation may be pre-multiplied by [A]⁻¹, giving:

$$[A]^{-1}[A][X] = [A]^{-1}[Y]$$

From the definition of the inverse matrix:

$$[I][X] = [A]^{-1}[Y]$$

we get, finally:

$$[X] = [A]^{-1}[Y]$$

Thus, the system of equations (3.2) may be solved for x1, x2, …, xn by simply computing the inverse of [A]. Solution of sets of simultaneous equations using matrix algebra techniques has wide application in a variety of engineering problems and will be used extensively in this text.
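Numerically, this is a one-line operation in most linear-algebra libraries. The snippet below is a hedged sketch assuming NumPy is available (the 2 × 2 system is our own illustrative example): it solves [A][X] = [Y] both by forming [A]⁻¹ explicitly, as in the text, and by a direct solve, which is generally preferred in practice.

```python
# Minimal sketch (assuming NumPy): solve [A][X] = [Y] via the inverse and via
# a direct solve; the direct solve avoids forming the inverse explicitly.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([3.0, 5.0])

x_via_inverse = np.linalg.inv(A) @ y   # [X] = [A]^-1 [Y], as in the text
x_via_solve = np.linalg.solve(A, y)    # same result, better numerically

print(x_via_inverse, x_via_solve)      # both print [0.8 1.4]
```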
Two example problems will help clarify the matrix procedures described above.

Example 1: Find [A]⁻¹, if


3 2 0
 A  1 5 1  (3.3)
0 2  1

Step 1. Compute the determinant of [A], expanding about the first row:

$$|A| = \begin{vmatrix} 3 & 2 & 0 \\ 1 & 5 & 1 \\ 0 & 2 & -1 \end{vmatrix}$$

$$|A| = 3(-5 - 2) - 2(-1 - 0) + 0(2 - 0)$$

$$|A| = -21 + 2 + 0 = -19$$

The determinant has the value −19; therefore, an inverse can be computed.

Step 2. Transpose [A].


3 1 0
 A T
 2 5 2 
0 1  1

Step 3. Replace each element aij of [A]T by its cofactor Aij to determine the adjoint matrix. Note that the signs alternate, beginning with a positive sign on A11.

$$\text{Adj}[A] = \begin{bmatrix} +\begin{vmatrix} 5 & 2 \\ 1 & -1 \end{vmatrix} & -\begin{vmatrix} 2 & 2 \\ 0 & -1 \end{vmatrix} & +\begin{vmatrix} 2 & 5 \\ 0 & 1 \end{vmatrix} \\[1ex] -\begin{vmatrix} 1 & 0 \\ 1 & -1 \end{vmatrix} & +\begin{vmatrix} 3 & 0 \\ 0 & -1 \end{vmatrix} & -\begin{vmatrix} 3 & 1 \\ 0 & 1 \end{vmatrix} \\[1ex] +\begin{vmatrix} 1 & 0 \\ 5 & 2 \end{vmatrix} & -\begin{vmatrix} 3 & 0 \\ 2 & 2 \end{vmatrix} & +\begin{vmatrix} 3 & 1 \\ 2 & 5 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} -7 & 2 & 2 \\ 1 & -3 & -3 \\ 2 & -6 & 13 \end{bmatrix}$$

Step 4. Divide by the scalar value of the determinant of [A], which was computed as −19 in Step 1.

$$[A]^{-1} = -\frac{1}{19}\begin{bmatrix} -7 & 2 & 2 \\ 1 & -3 & -3 \\ 2 & -6 & 13 \end{bmatrix}$$

Product Check
From the definition of the inverse matrix
[A]1 [A] = [I]
This fact may be used to check a computed inverse. In the case just completed:

$$[A]^{-1}[A] = -\frac{1}{19}\begin{bmatrix} -7 & 2 & 2 \\ 1 & -3 & -3 \\ 2 & -6 & 13 \end{bmatrix}\begin{bmatrix} 3 & 2 & 0 \\ 1 & 5 & 1 \\ 0 & 2 & -1 \end{bmatrix}$$

$$[A]^{-1}[A] = -\frac{1}{19}\begin{bmatrix} -19 & 0 & 0 \\ 0 & -19 & 0 \\ 0 & 0 & -19 \end{bmatrix}$$

$$[A]^{-1}[A] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$[A]^{-1}[A] = [I]$$
Since the product does come out to be the identity matrix, the computation was correct.
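The same check can be run numerically. A minimal sketch assuming NumPy (the array values are simply those of Example 1):

```python
# Minimal NumPy check of Example 1: the computed inverse times [A] is [I].
import numpy as np

A = np.array([[3.0, 2.0, 0.0],
              [1.0, 5.0, 1.0],
              [0.0, 2.0, -1.0]])
A_inv = (-1.0 / 19.0) * np.array([[-7.0, 2.0, 2.0],
                                  [1.0, -3.0, -3.0],
                                  [2.0, -6.0, 13.0]])

print(np.allclose(A_inv @ A, np.eye(3)))      # True
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```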

Example 2: Given the following set of simultaneous equations, solve for x1, x2, and x3.
$$\begin{aligned} 3 x_1 + 2 x_2 - 2 x_3 &= y_1 \\ -x_1 + x_2 + 4 x_3 &= y_2 \\ 2 x_1 - 3 x_2 + 4 x_3 &= y_3 \end{aligned} \qquad (3.4)$$
This set of equations can be written as:

[A][x] = [y]

and solved as follows:

[x] = [A]⁻¹[y]

Thus, the system of equations (3.4) can be solved for the values of x1, x2, and x3 by computing the inverse of [A].
[A][x] = [y]

$$\begin{bmatrix} 3 & 2 & -2 \\ -1 & 1 & 4 \\ 2 & -3 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$

Step 1. Compute the determinant of [A], expanding about the first row:

$$|A| = 3(4 + 12) - 2(-4 - 8) - 2(3 - 2)$$

$$|A| = 48 + 24 - 2 = 70$$

Step 2. Transpose [A].

$$[A]^T = \begin{bmatrix} 3 & -1 & 2 \\ 2 & 1 & -3 \\ -2 & 4 & 4 \end{bmatrix}$$

Step 3. Determine the adjoint matrix by replacing each element in [A]T by its cofactor.

$$\text{Adj}[A] = \begin{bmatrix} +\begin{vmatrix} 1 & -3 \\ 4 & 4 \end{vmatrix} & -\begin{vmatrix} 2 & -3 \\ -2 & 4 \end{vmatrix} & +\begin{vmatrix} 2 & 1 \\ -2 & 4 \end{vmatrix} \\[1ex] -\begin{vmatrix} -1 & 2 \\ 4 & 4 \end{vmatrix} & +\begin{vmatrix} 3 & 2 \\ -2 & 4 \end{vmatrix} & -\begin{vmatrix} 3 & -1 \\ -2 & 4 \end{vmatrix} \\[1ex] +\begin{vmatrix} -1 & 2 \\ 1 & -3 \end{vmatrix} & -\begin{vmatrix} 3 & 2 \\ 2 & -3 \end{vmatrix} & +\begin{vmatrix} 3 & -1 \\ 2 & 1 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} 16 & -2 & 10 \\ 12 & 16 & -10 \\ 1 & 13 & 5 \end{bmatrix}$$

Step 4. Divide by the scalar value of the determinant of [A], which was computed as 70 in Step 1.

$$[A]^{-1} = \frac{1}{70}\begin{bmatrix} 16 & -2 & 10 \\ 12 & 16 & -10 \\ 1 & 13 & 5 \end{bmatrix}$$

Product Check
[A]1 [A] = [I ]
16 2 10   3 2  2
 A 1
 A  12
1 16  10  1 1 4 
70
 1 13 5   2 3 4 
70 0 0
1 0
 A 1  A  70 70 0 

 0 0 70
1 0 0
 A  A  0 1
1
0
0 0 1 

Since the product in the above equation is the identity matrix, the computation is correct. The values of x1, x2, and x3 can now be found for any y1, y2, and y3 by pre-multiplying [y] by [A]⁻¹.

[x] = [A]⁻¹[y]

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \frac{1}{70}\begin{bmatrix} 16 & -2 & 10 \\ 12 & 16 & -10 \\ 1 & 13 & 5 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$

For example, if y1 = 1, y2 = 13, and y3 = 8:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \frac{1}{70}\begin{bmatrix} 16 & -2 & 10 \\ 12 & 16 & -10 \\ 1 & 13 & 5 \end{bmatrix}\begin{bmatrix} 1 \\ 13 \\ 8 \end{bmatrix}$$


$$x_1 = \frac{1}{70}(16 - 26 + 80) = \frac{70}{70} = 1$$

$$x_2 = \frac{1}{70}(12 + 208 - 80) = \frac{140}{70} = 2$$

$$x_3 = \frac{1}{70}(1 + 169 + 40) = \frac{210}{70} = 3$$
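As a quick cross-check, the same numbers fall out of a direct numerical solve. A minimal sketch assuming NumPy:

```python
# Minimal NumPy check of Example 2: for y = [1, 13, 8] the solution is x = [1, 2, 3].
import numpy as np

A = np.array([[3.0, 2.0, -2.0],
              [-1.0, 1.0, 4.0],
              [2.0, -3.0, 4.0]])
y = np.array([1.0, 13.0, 8.0])

print(np.linalg.det(A))        # 70.0 (to within rounding)
print(np.linalg.solve(A, y))   # [1. 2. 3.]
```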

Cramer's Rule
It is often useful to have a formula for the solution of a system of equations that can be used to study
properties of the solution without solving the system. Cramer's rule establishes a formula for systems of n
equations in n unknowns.
If AX = B is a system of n linear equations in n unknowns such that det(A) ≠ 0, then the system has a unique solution. This solution is
$$x_1 = \frac{\det(A_1)}{\det(A)}, \quad x_2 = \frac{\det(A_2)}{\det(A)}, \quad \ldots, \quad x_n = \frac{\det(A_n)}{\det(A)}$$
where Aj is the matrix obtained by replacing the entries in the jth column of A by the entries in the matrix

$$B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
Proof. If det(A) ≠ 0, then A is invertible and X = A⁻¹B is the unique solution of AX = B. Therefore we have

$$X = A^{-1}B = \frac{1}{\det(A)}\,\text{adj}(A)\,B = \frac{1}{\det(A)}\begin{bmatrix} C_{11} & C_{21} & \cdots & C_{n1} \\ C_{12} & C_{22} & \cdots & C_{n2} \\ \vdots & \vdots & & \vdots \\ C_{1n} & C_{2n} & \cdots & C_{nn} \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$
Multiplying the matrices out gives

$$X = \frac{1}{\det(A)}\begin{bmatrix} b_1 C_{11} + b_2 C_{21} + \cdots + b_n C_{n1} \\ b_1 C_{12} + b_2 C_{22} + \cdots + b_n C_{n2} \\ \vdots \\ b_1 C_{1n} + b_2 C_{2n} + \cdots + b_n C_{nn} \end{bmatrix}$$
The entry in the jth row of X is therefore

$$x_j = \frac{b_1 C_{1j} + b_2 C_{2j} + \cdots + b_n C_{nj}}{\det(A)}$$
Now let

$$A_j = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1,j-1} & b_1 & a_{1,j+1} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2,j-1} & b_2 & a_{2,j+1} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{n,j-1} & b_n & a_{n,j+1} & \cdots & a_{nn} \end{bmatrix}$$

Since Aj differs from A only in the jth column, the cofactors of the entries b1, b2, …, bn in Aj are the same as the cofactors of the corresponding entries in the jth column of A. The cofactor expansion of det(Aj) along the jth column is therefore

$$\det(A_j) = b_1 C_{1j} + b_2 C_{2j} + \cdots + b_n C_{nj}$$
Substituting this result gives

$$x_j = \frac{\det(A_j)}{\det(A)}$$

(Reference 4.3)

Example
Use Cramer's Rule to solve
$$\begin{aligned} x_1 + 2 x_3 &= 6 \\ -3 x_1 + 4 x_2 + 6 x_3 &= 30 \\ -x_1 - 2 x_2 + 3 x_3 &= 8 \end{aligned}$$

Solution

$$A = \begin{bmatrix} 1 & 0 & 2 \\ -3 & 4 & 6 \\ -1 & -2 & 3 \end{bmatrix} \qquad A_1 = \begin{bmatrix} 6 & 0 & 2 \\ 30 & 4 & 6 \\ 8 & -2 & 3 \end{bmatrix}$$

$$A_2 = \begin{bmatrix} 1 & 6 & 2 \\ -3 & 30 & 6 \\ -1 & 8 & 3 \end{bmatrix} \qquad A_3 = \begin{bmatrix} 1 & 0 & 6 \\ -3 & 4 & 30 \\ -1 & -2 & 8 \end{bmatrix}$$

Therefore
det A1   40  10 det A2  72 18 det A3  152 38
x1    , x2    , x3   
det A 44 11 det A 44 11 det A 44 11
To solve a system of n equations in n unknowns by Cramer's Rule, it is necessary to evaluate determinants of n × n matrices. For systems with more than three equations, Gaussian elimination is superior computationally, since it is only necessary to reduce one n × (n + 1) augmented matrix. Cramer's Rule, however, gives a formula for the solution.
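The rule is equally direct to state in code. The sketch below is a minimal illustration assuming NumPy (the function name cramer is our own, and the approach is not an efficient solver): it replaces the jth column of A by B, takes the ratio of determinants, and reproduces the example above.

```python
# Minimal sketch of Cramer's Rule with NumPy: x_j = det(A_j) / det(A),
# where A_j is A with its jth column replaced by b.
import numpy as np

def cramer(A, b):
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0; Cramer's Rule does not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                       # replace the jth column by b
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[1.0, 0.0, 2.0],
              [-3.0, 4.0, 6.0],
              [-1.0, -2.0, 3.0]])
b = np.array([6.0, 30.0, 8.0])
print(cramer(A, b))   # approximately [-10/11, 18/11, 38/11]
```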

References
4.1 U.S.A.F. Test Pilot School, Volume II, Flying Qualities, Chapter 2, Vectors and Matrices, January
1988.
4.2 Shames, Irving H., Engineering Mechanics: Statics and Dynamics, 2nd Edition, Prentice-Hall,
Inc., 1967.
4.3 Anton, Howard, Elementary Linear Algebra, John Wiley & Sons, 1981.

