PART B.

LINEAR ALGEBRA,
VECTOR CALCULUS

Content. Matrices, vectors, determinants, linear systems of equations (Chap. 7)
         Matrix eigenvalue problems (Chap. 8)
         Vectors in differential calculus. Grad, div, curl (Chap. 9)
         Vector integral calculus, integral theorems (Chap. 10)

linalg package. Load it by typing with(linalg) . Type ?linalg, ?matrix, ?vector,
?determinant, ?gausselim. You will need the package practically all the time.

Chapter 7

Matrices, Vectors, Determinants.
Linear Systems of Equations

Content. Addition, scalar multiplication, matrix multiplication, determinants
         (Ex. 7.1, Prs. 7.1-7.6)
         Special matrices, composing matrices (Exs. 7.2, 7.3, Prs. 7.7, 7.9, 7.17)
         Transpose, inverse, linear transformations (Prs. 7.8, 7.10-7.12)
         Orthogonality, norm (Prs. 7.13-7.15)
         Solution of linear systems of equations (Exs. 7.4, 7.5, Pr. 7.14)
         Rank of a matrix, row space, linear independence (Ex. 7.6, Prs. 7.16, 7.18-7.20)

For numeric methods for matrices see Chap. 20.

Examples for Chapter 7

EXAMPLE 7.1 MATRIX ADDITION, SCALAR MULTIPLICATION,
            MATRIX MULTIPLICATION. VECTORS

> with(linalg): # Ignore the warning.


> A := matrix([[5, -3, 0], [6, 1, -4]]);
A := [ 5  -3   0 ]
     [ 6   1  -4 ]
This shows how to type a matrix. If you prefer to arrange the entries as they will
occur, you can type, receiving the same response,


> A := matrix([[5, -3, 0],
> [6, 1, -4]]); # Response as before
As another way of writing matrices, you can type
> A := matrix(2, 3, [5, -3, 0, 6, 1, -4]);
A := [ 5  -3   0 ]
     [ 6   1  -4 ]
2 is the number of rows and 3 the number of columns. For larger matrices the other
way is perhaps better because you can see the positions of the individual entries more
easily.
Addition of matrices. To define addition of matrices, let
> B := matrix([[-2, 4, -1], [1, 1, 5]]);
B := [ -2  4  -1 ]
     [  1  1   5 ]
The sum is typed with a + sign,
> C := A + B; # Resp. A + B
> C := evalm(A + B); # evalm suggests "evaluate matrix".
C := [ 3  1  -1 ]
     [ 7  2   1 ]
Scalar multiplication. Transposition. Multiplication of matrices. Try
> D := evalm(4*C);
Error, attempting to assign to 'D' which is protected

> evalm(A*B);
Error, (in evalm/evaluate) use the &* operator for matrix/vector multiplication
> E := evalm(A&*B);
Error, (in linalg[multiply]) non matching dimensions for vector/matrix product
So the second message told you the right sign &* for multiplication, and the third
that the product AB is not defined. But AF (where F = B^T is the transpose of B)
should work, by the definition of matrix multiplication.
> F := transpose(B); # Transpose of a matrix
F := [ -2  1 ]
     [  4  1 ]
     [ -1  5 ]
> G := evalm(A&*F); # Product of two matrices
G := [ -22    2 ]
     [  -4  -13 ]

> evalm(G^2); # Square of a matrix
[ 476  -70 ]
[ 140  161 ]
Inverse of a matrix. Determinant. Trace
> inverse(G); # Inverse of a matrix
[ -13/294   -1/147 ]
[   2/147  -11/147 ]
> det(G); # Resp. 294 # Determinant of a matrix
> trace(G); # Resp. −35 # trace = −22 − 13 = −35

Vectors occur together with matrices, particularly in connection with linear systems.
Let
> v := [3, 5]; # Resp. v := [3, 5]
> evalm(v&*A); # Resp. [45, −4, −20]
> evalm(A&*v);
Error, (in linalg[multiply]) non matching dimensions for vector/matrix product
> evalm(F&*v); # Resp. [−1, 17, 22]

Note that for vectors entered in the form v = [v1, v2, ...] Maple does not distinguish
between row and column vectors, leaving the interpretation to you. Thus the last
response is a column vector. If C is a 3 × 3 matrix and x = [x1, x2, x3], you can
compute C&*x or x&*C, obtaining a vector r = [r1, r2, r3] in both cases, but in
the first case x and r are column vectors and in the second case they are row vectors.
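As a sketch of this distinction (with hypothetical names C1 and x1, not used elsewhere in this Guide), you can check it yourself:
> C1 := matrix([[1, 0, 2], [0, 3, 0], [4, 0, 5]]): x1 := [1, 1, 1]:
> evalm(C1&*x1); # Resp. [3, 3, 9] # x1 and the result regarded as column vectors
> evalm(x1&*C1); # Resp. [5, 3, 7] # x1 and the result regarded as row vectors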
Further let
> u := [4, -2]; # Resp. u := [4, −2]
> u*v; # Resp. [4, −2][3, 5]
> evalm(u&*v); # Resp. 2 # Inner product (dot product)
> dotprod(u, v); # Resp. 2 # Inner product
> innerprod(u, v); # Resp. 2 # Inner product

So here you have three different commands for the inner product (dot product)
u · v or, regarding the vectors as column vectors, u^T v. Similarly, you obtain v^T G (v
regarded as a column vector).
> evalm(v&*G); # Resp. [−86, −59]
On rare occasions you may need the n × n matrix u v^T (where n = 2 for our vectors).
> v := matrix([[3, 5]]); # Resp. v := [ 3  5 ]
> u := matrix([[4], [-2]]);
u := [  4 ]
     [ -2 ]
> evalm(u&*v);
[ 12   20 ]
[ -6  -10 ]
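Closely related to the inner product is the Euclidean norm (length) of a vector, needed in Pr.7.15. The linalg package computes it with norm and second argument 2; a fresh vector w is used in this sketch, since u and v have just been redefined as matrices:
> w := [4, -2]:
> norm(w, 2); # Resp. 2*5^(1/2), that is, sqrt(16 + 4)
> dotprod(w, w); # Resp. 20 # The squared norm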
Similar Material in AEM: Secs. 7.1, 7.2

EXAMPLE 7.2 SPECIAL MATRICES
Matrices of the subsequent form will be needed quite frequently.
> with(linalg): # Load the linalg package. Ignore the warning.
> matrix(3, 5, 0); # Zero matrix with 3 rows and 5 columns
[ 0  0  0  0  0 ]
[ 0  0  0  0  0 ]
[ 0  0  0  0  0 ]
> diag(a, b, c); # A diagonal matrix
[ a  0  0 ]
[ 0  b  0 ]
[ 0  0  c ]
> diag(1, 1, 1); # The 3 × 3 unit matrix
[ 1  0  0 ]
[ 0  1  0 ]
[ 0  0  1 ]
Matrices whose entries are given by a formula of the subscripts of the entries can be
obtained as illustrated by a famous example (the 3 × 3 Hilbert matrix):
> matrix(3, 3, (j, k) -> 1/(j + k - 1));
[  1   1/2  1/3 ]
[ 1/2  1/3  1/4 ]
[ 1/3  1/4  1/5 ]
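For the Hilbert matrix the linalg package also has a ready-made command, which should produce the same response as the formula above:
> hilbert(3); # The 3 × 3 Hilbert matrix; response as above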
Similar Material in AEM: Sec. 7.2

EXAMPLE 7.3 CHANGING AND COMPOSING MATRICES,
            ACCESSING ENTRIES. SUBMATRICES
Having discussed the basic operations with matrices and vectors in Example 7.1, we
now turn to operations of accessing entries and of accessing, interchanging or changing
rows or columns of a matrix.
> with(linalg): # Load the linalg package.
> A := matrix([[0, 1, -1, 2], [1, -2, 4, -3], [3, -4, 0, 0]]); # Given matrix
A := [ 0   1  -1   2 ]
     [ 1  -2   4  -3 ]
     [ 3  -4   0   0 ]
> A[2,3]; # Resp. 4

This shows how to extract an entry of A = [a_jk], namely, a_23 = 4. A whole row or
column is obtained by

> row(A, 3); # Resp. [3, −4, 0, 0]


> col(A, 1); # Resp. [0, 1, 3]
> submatrix(A, 2..3, 2..4); # Command for submatrix
[ -2  4  -3 ]
[ -4  0   0 ]
The first range 2..3 indicates the rows to be included (Rows 2 and 3), and the second
range 2..4 indicates the columns (the last three columns). Instead of ranges you can
list which rows (1 and 3, e.g.) and which columns (2 and 4, e.g.) you want to include,
so that you can get any desired submatrix.
 
> submatrix(A, [1, 3], [2, 4]);
[  1  2 ]
[ -4  0 ]
Changing rows and columns. Rows are interchanged by swaprow . For instance,
> B := swaprow(A, 1, 2);
B := [ 1  -2   4  -3 ]
     [ 0   1  -1   2 ]
     [ 3  -4   0   0 ]
You will need this in the Gauss elimination. Similarly for interchanging columns, say,
2 and 4, type swapcol(A, 2, 4) . The next command, basic in the Gauss elimination,
adds −3 times Row 1 to Row 3, creating a 0 in the left lower corner.
> C := addrow(B, 1, 3, -3);
C := [ 1  -2    4  -3 ]
     [ 0   1   -1   2 ]
     [ 0   2  -12   9 ]
Composition of matrices from vectors. The augmented matrix [A, b] of A
and a vector b, say,
> b := [4, 5, 2]; # Resp. b := [4, 5, 2]

is obtained by typing
> augment(A, b); # Here, Maple takes b as a column vector.
[ 0   1  -1   2  4 ]
[ 1  -2   4  -3  5 ]
[ 3  -4   0   0  2 ]
More generally, the command augment composes a matrix from given vectors as
columns, and by taking the transpose you get the matrix with these vectors as rows.
For instance, let
> a := [2, 1]: b := [3, 8]: c := [-1, 2]:
Then
> augment(a, b, c);
[ 2  3  -1 ]
[ 1  8   2 ]

> transpose(%);
[  2  1 ]
[  3  8 ]
[ -1  2 ]
Similar Material in AEM: Sec. 7.2

EXAMPLE 7.4 SOLUTION OF A LINEAR SYSTEM
Solve the linear system
−x1 + x2 + 2 x3 = 2
3 x1 − x2 + x3 = 6
−x1 + 3 x2 + 4 x3 = 4

Solution. First method. Write the system in matrix form Ax = b and type the
coefficient matrix A and the vector b as shown.
> with(linalg): # Ignore the warning.
> A := matrix([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]]);
A := [ -1   1  2 ]
     [  3  -1  1 ]
     [ -1   3  4 ]
> b := [2, 6, 4]; # Resp. b := [2, 6, 4]
> x := linsolve(A, b); # Resp. x := [1, −1, 2]

Second method. In the method just discussed you get the solution without
seeing what is going on. The second method does the elimination and the back
substitution separately. Accordingly, first type the augmented matrix by the command
augment (see the previous example) and then apply gausselim , which does the Gauss
elimination. Step 2 of the method is then the back substitution, done by the command
backsub .
> A1 := augment(A, b);
A1 := [ -1   1  2  2 ]
      [  3  -1  1  6 ]
      [ -1   3  4  4 ]
> B := gausselim(A1);
B := [ -1  1   2    2 ]
     [  0  2   7   12 ]
     [  0  0  -5  -10 ]
> x := ’x’: # Unassign x (used just before).
> x := backsub(B);
x := [1, −1, 2]
> rank(A); # Resp. 3

This implies that the solution is unique.
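As a quick cross-check (not part of the two methods above), the determinant of the coefficient matrix is nonzero precisely when a square system has a unique solution:
> det(A); # Resp. 10 # Nonzero, hence the solution is unique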


Similar Material in AEM: Sec. 7.3

EXAMPLE 7.5 GAUSS ELIMINATION; FURTHER CASES

Case 1. Infinitely many solutions. The system is

3.0 x1 + 2.0 x2 + 2.0 x3 − 5.0 x4 = 8.0
0.6 x1 + 1.5 x2 + 1.5 x3 − 5.4 x4 = 2.7
1.2 x1 − 0.3 x2 − 0.3 x3 + 2.4 x4 = 2.1

The process of solution is the same as in the first method in the previous example.
> with(linalg): # Ignore the warning.
> Digits := 5: # This restricts floating numbers to 5 digits.
> A := matrix([[3.0, 2.0, 2.0, -5.0],
> [0.6, 1.5, 1.5, -5.4],
> [1.2, -0.3, -0.3, 2.4]]);
A := [ 3.0   2.0   2.0  -5.0 ]
     [ 0.6   1.5   1.5  -5.4 ]
     [ 1.2  -0.3  -0.3   2.4 ]
> b := [8.0, 2.7, 2.1];
b := [8.0, 2.7, 2.1]
> x := linsolve(A, b);
x := [−0.2500, 10. − 1. t1, t1, 2.2500]

Hence x1 = −0.25, x3 arbitrary (here denoted by t1), x2 = 10 − x3, x4 = 2.25. But
these are not all the solutions. We claim that x4 can also be arbitrary, and then, for
instance, you can take x1 = 2 − x4, x2 = 1 − x3 + 4 x4, x3 and x4 arbitrary. Indeed,
substituting these expressions into the equations on the left, you obtain the values on
the right, that is, the components of b,
> x1 := 2 - x4; x2 := 1 - x3 + 4*x4;
x1 := 2 − x4
x2 := 1 − x3 + 4 x4
> 3*x1 + 2*x2 + 2*x3 - 5*x4; # Resp. 8
> 0.6*x1 + 1.5*x2 + 1.5*x3 - 5.4*x4; # Resp. 2.7
> 1.2*x1 - 0.3*x2 - 0.3*x3 + 2.4*x4; # Resp. 2.1

Furthermore, if in the general solution you choose x4 = 2.25, you obtain the partial
solution set first obtained. Hence be prepared that your software may not always
give you all the solutions. Thus it may often be worthwhile to do all the steps of
the Gauss elimination.
Try out whether the second method in the previous example will give better
results. Type
> A1 := augment(A, b);
A1 := [ 3.0   2.0   2.0  -5.0  8.0 ]
      [ 0.6   1.5   1.5  -5.4  2.7 ]
      [ 1.2  -0.3  -0.3   2.4  2.1 ]

> B := gausselim(A1);
B := [ 3.0  2.0     2.0     -5.0     8.0    ]
     [ 0    1.1000  1.1000  -4.4000  1.1000 ]
     [ 0    0       0        0       0      ]
> x := ’x’: # Unassign x used just before.
> x := backsub(B);
x := [2.0000 − 0.99999 t1, 1.0000 − 1.0000 t2 + 4.0000 t1, t2, t1]

With this method you have obtained the full solution set (except for the small
roundoff error; −0.99999 should be −1): x1 = 2 − t1, x2 = 1 − t2 + 4 t1, x3 = t2
arbitrary, x4 = t1 arbitrary.
Case 2. A unique solution. See the previous example.
Case 3. No solutions. For instance, x1 + x2 = 3, 2 x1 + 2 x2 = 5 has no
solutions. To see what happens, type
> A := matrix([[1, 1], [2, 2]]); b := [3, 5];
A := [ 1  1 ]
     [ 2  2 ]
b := [3, 5]
> linsolve(A, b); # No response. Hence you get no solutions.
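To see the inconsistency explicitly, you can again apply the Gauss elimination to the augmented matrix, as in the second method of Example 7.4:
> gausselim(augment(A, b));
[ 1  1   3 ]
[ 0  0  -1 ]
The last row represents the false equation 0 = −1; hence the system has no solutions.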
Similar Material in AEM: Sec. 7.3

EXAMPLE 7.6 RANK. ROW SPACE. LINEAR INDEPENDENCE
The rank of a matrix is the key concept in connection with the existence and
uniqueness of solutions of a linear system of equations. Illustrate by the following
matrix A that rank A is invariant under transposition.
> with(linalg): # Ignore the warning.
> A := matrix([[3, 0, 2, 2], [-6, 42, 24, 54], [21, -21, 0, -15]]);
A := [  3    0   2    2 ]
     [ -6   42  24   54 ]
     [ 21  -21   0  -15 ]
> rank(A); rank(transpose(A)); # Resp. 2 2

Show the invariance of rank A under elementary row operations. Interchange, for
instance, Rows 1 and 3:
> rank(swaprow(A, 1, 3)); # Resp. 2

The second row operation is the addition of a constant multiple of a row to another
row, for instance, add −5 times Row 2 to Row 3:

> B := addrow(A, 2, 3, -5); # Type ?addrow , ?addcol
B := [  3     0     2     2 ]
     [ -6    42    24    54 ]
     [ 51  -231  -120  -285 ]
> rank(B); # Resp. 2

The third row operation is the multiplication of a row by a nonzero constant c. For
instance, multiply Row 3 of A by 1/3 (by adding −2/3 of Row 3 to Row 3):
> addrow(A, 3, 3, -2/3);
[  3    0   2   2 ]
[ -6   42  24  54 ]
[  7   -7   0  -5 ]
> rank(%); # Resp. 2

Methods of determining rank. I. From the "triangularized form" of the matrix
(see Example 7.5 in this Guide) you see directly that rank A = 2:
> gausselim(A);
[ 3   0   2   2 ]
[ 0  42  28  58 ]
[ 0   0   0   0 ]
II. By the command rowspace(A) or colspace(A) , which compute a basis of the
row space or the column space of A, respectively, and the fact that rank A equals
the dimension 2 of these spaces:
> rowspace(A); colspace(A);
{[0, 1, 2/3, 29/21], [1, 0, 2/3, 2/3]}
{[1, 0, 6], [0, 1, -1/2]}
Linear independence and dependence of vectors can also be tested by the use
of rank, namely, the rank of the matrix whose rows or columns are the given
vectors. For instance, let the vectors be
> a := [-6, 42, 24, 54]; b := [21, -21, 0, -15]; c := [3, 0, 2, 2];
a := [−6, 42, 24, 54]

b := [21, −21, 0, −15]

c := [3, 0, 2, 2]

Obtain the matrix with these vectors as columns by the command augment ,

> M := augment(a, b, c);
M := [ -6   21  3 ]
     [ 42  -21  0 ]
     [ 24    0  2 ]
     [ 54  -15  2 ]
> rank(M); # Resp. 2

Hence the given vectors are linearly dependent. Indeed, M is obtained from A by
interchanging the rows and then taking the transpose.
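You can even obtain the dependence relation itself from the nullspace (kernel) of M; a vector [k1, k2, k3] in it satisfies k1 a + k2 b + k3 c = 0. (A sketch; the scaling of the basis vector that Maple returns may differ.)
> nullspace(M); # A basis vector proportional to [1, 2, -12]
Indeed, a + 2 b − 12 c = 0, as you can verify by typing evalm(a + 2*b - 12*c) .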
Similar Material in AEM: Sec. 7.4

Problem Set for Chapter 7
Pr.7.1 (Addition, scalar multiplication) In Example 7.1 of this Guide, compute 3A,
A − B, and (2A − (1/2)B)^T. (AEM Sec. 7.1)
Pr.7.2 (Matrix multiplication) In Example 7.1 compute A A^T, A^T A, (A^T A)^2,
(A + B)(A − B)^T. (AEM Sec. 7.2)
Pr.7.3 (Matrix multiplication) Let
A = [ 1  3  2 ]    B = [  0  2   1 ]    c = [  1 ]    d = [ 3 ]
    [ 3  5  0 ],       [ -2  0  -3 ],       [  0 ],       [ 1 ].
    [ 2  0  4 ]        [ -1  3   0 ]        [ -2 ]        [ 2 ]
Compute A^2, A^4, B^2, B^4, AB − BA, A − A^T, B + B^T, det A, det B, Ac, B(c − 3d),
c · d. (AEM Sec. 7.2)
Pr.7.4 (Matrices and vectors) Using the data given in Pr.7.3, compute Bc, c^T A c, ABc,
(Ac) · (Bd), d^T B d. Why is the last result 0?
Pr.7.5 (Experiments on multiplication) By experimenting with 3 × 3 or 4 × 4 symmetric
and skew-symmetric matrices whose entries are numbers or general letters,
try to answer the following questions. Are sums, products, and powers of symmetric
matrices symmetric? Study the same questions for skew-symmetric matrices. What
can you say about products of a symmetric matrix times a skew-symmetric one? Is
det A = 0 for every skew-symmetric matrix? For some symmetric matrices?
(AEM Sec. 7.2)
Pr.7.6 (Associativity, distributivity) Verify the associativity of matrix multiplication and
the distributivity by 3 × 3 matrices of your own choice. (AEM Sec. 7.2)
Pr.7.7 (Rotation, the command map) If
A = [ cos θ  −sin θ ],  verify that  A^n = [ cos nθ  −sin nθ ]
    [ sin θ   cos θ ]                      [ sin nθ   cos nθ ]
for n = 2, 3, 4. What does this mean in terms of rotations through an angle θ? (Use
the command map(combine, evalm(A^2)) , etc., which operates on each entry separately.
Type ?map for information.) (AEM Sec. 7.2)
Pr.7.8 (Transposition rule for products) Prove (AB)^T = B^T A^T for general 2 × 2
matrices on the computer. (AEM Sec. 7.2)

Pr.7.9 (Experiment on Hankel matrices) Find empirically a law for the smallest n as a
function of m (> 0, integer) such that det A = 0, where the n × n matrix A = [a_jk] has
the entries a_jk = (j + k)^m. (Enjoy these special Hankel matrices, whose determinants
have very fast growing values, but all of a sudden become 0 from some n on. This is of
practical interest in connection with the so-called Padé approximation.)
Pr.7.10 (Inverse) Using the computer, find the formula for the inverse of a 2 × 2 matrix
A = [a_jk] in terms of the a_jk and det A. (AEM Sec. 7.8)
Pr.7.11 (Inverse of a product) Verify the basic relation (AB)^−1 = B^−1 A^−1 for the
matrices
A = [  0  -2  -1 ]        B = [ 1  2  3 ]
    [ -2   3   2 ]   and      [ 2  3  4 ] .
    [ -1   2   1 ]            [ 3  4  6 ]
(AEM Sec. 7.8)
Pr.7.12 (Linear transformations) With respect to Cartesian coordinates in space, let
y = Ax and x = Bw with A and B as in the previous problem. Find the trans-
formation y = Cw which transforms w directly into y. Find the inverse of this
transformation. (AEM Sec. 7.2)
Pr.7.13 (Orthogonal vectors) Show that the following vectors are orthogonal.
(AEM Sec. 7.9)

c = [3, 2, −2, 1, 0],  d = [2, 0, 3, 0, 4],  e = [1, −3, −2, −1, 1]

Pr.7.14 (Extension of an orthogonal system) Find a vector x orthogonal to the three
vectors in Pr.7.13. (AEM Sec. 7.9)
Pr.7.15 (Norm) Find the Euclidean norm (c_1^2 + c_2^2 + ... + c_n^2)^(1/2), etc., of the
vectors in Pr.7.13. (AEM Sec. 7.9)
Pr.7.16 (Linear independence) Check the set of vectors [ −1 5 0 ], [ 16 8 −3 ],
[ −64 56 9 ] for linear independence. (AEM Sec. 7.4)
Pr.7.17 (Hilbert matrices) Find the determinant and the inverse of the Hilbert matrix
H = [hjk ] with n = 3 rows and columns, where hjk = 1/(j + k − 1). Comment on the
size of det H and of the entries of the inverse. Do the same tasks when n = 4 and 5.
(See also Example 7.2 in this Guide.)
Pr.7.18 (Experiment on rank) Find the rank of the n × n matrix A = [a_jk] with entries
a_jk = j + k − 1 and any n. To understand the reason for the perhaps somewhat
surprising result, use the command gausselim and then write a program for the
steps of the Gauss elimination in the case of the present matrix. (AEM Sec. 7.4)
Pr.7.19 (Linear independence) Are the vectors [ 0 16 0 −24 0 ], [ 1 0 −1 0 2 ],
[ 0 −14 0 21 0 ] linearly independent or dependent? (AEM Sec. 7.4)
Pr.7.20 (Row and column space) Find a basis of the row space and of the column space
of the matrix with the rows [ 3 1 4 ], [ 0 5 8 ], [ −3 4 4 ], [ 1 2 4 ].
(Type ?rowspace , ?columnspace . AEM Sec. 7.4)
