
MAS171: Matrices and Geometry

Koji Ohkitani, Applied Mathematics, University of Sheffield

Part 1: Solution of linear equations

Consider the system of 3 eqns. in the 3 unknowns x1, x2, x3:

    2x1 + 3x2 - 2x3 =  1    (1)
    4x1 + 7x2 + 3x3 =  3    (2)
    6x1 + 3x2 - 2x3 = -3    (3)

We solve by elimination in a systematic way:
Multiply (1) by 2 & subtract from (2).
Multiply (1) by 3 & subtract from (3).
The system is now

Stage [1]
    2x1 + 3x2 - 2x3 =  1    (1)
          x2  + 7x3 =  1    (2a)
        -6x2  + 4x3 = -6    (3a)

Now eliminate x2 from (3a): multiply (2a) by -6 & subtract from (3a). Now we have

Stage [2]
    2x1 + 3x2 - 2x3 = 1    (1)
          x2  + 7x3 = 1    (2a)
              46x3  = 0    (3b)

The above process is called Gaussian elimination (GE, hereafter) & this system is in upper triangular form, which can be solved (systematically) by back-substitution. From (3b) we have

    x3 = 0.

Thus from (2a)

    x2 + 7·0 = 1  ⇒  x2 = 1

& from (1)

    2x1 + 3(1) - 2(0) = 1  ⇒  x1 = -1.

Now, think of this system in a (slightly) different way. The array of numbers which are the coefficients of x1, x2, x3,

        ( 2  3  -2 )
    A = ( 4  7   3 )
        ( 6  3  -2 )

is called the matrix of coefficients for our system (the numbers in the matrix are called elements), & the RHS is the column

        (  1 )
    b = (  3 )    (a column vector/matrix).
        ( -3 )

We can write down our elimination process in terms of A and b by forming the augmented matrix (A|b):

    R1 ( 2  3  -2 |  1 )
    R2 ( 4  7   3 |  3 )
    R3 ( 6  3  -2 | -3 )

(R1, R2, R3 denote row labels) and performing eliminations on the rows:

Stage [1]
    R1                ( 2   3  -2 |  1 )
    R2a = R2 - 2R1    ( 0   1   7 |  1 )
    R3a = R3 - 3R1    ( 0  -6   4 | -6 )

Stage [2]
    R1                ( 2   3  -2 |  1 )
    R2a               ( 0   1   7 |  1 )
    R3b = R3a + 6R2a  ( 0   0  46 |  0 )

We can now perform back-substitution as before, giving x3 = 0, x2 = 1, x1 = -1, or, written as a vector,

        ( -1 )
    x = (  1 ) .
        (  0 )
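The elimination & back-substitution steps above can be sketched in code. A minimal illustration (not part of the notes), using exact rational arithmetic so the pivots appear exactly:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """GE without row interchanges, then back-substitution.
    A sketch that assumes every pivot is nonzero."""
    n = len(A)
    # form the augmented matrix (A|b) with exact rational entries
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(A, b)]
    for k in range(n - 1):                 # eliminate below the pivot M[k][k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiplier, e.g. 2 and 3 at Stage [1]
            M[i] = [a - m * p for a, p in zip(M[i], M[k])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):           # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# the system solved above: x1 = -1, x2 = 1, x3 = 0
print(gauss_solve([[2, 3, -2], [4, 7, 3], [6, 3, -2]], [1, 3, -3]))
```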



[Our linear system is in this case Ax = b.]

The diagonal elements 2, 1 & 46 of the upper triangular matrix at Stage [2] are called pivots. The steps in the GE method (which fails if any pivot is zero) can be summarised as follows:

(1) Write down the augmented matrix (A|b) with rows labeled R1, ..., Rn.
(2) Subtract multiples of R1 from R2, ..., Rn to reduce the elements below the leading diagonal in the first column to zero. In the matrix obtained, subtract multiples of R2 from R3, ..., Rn to reduce the elements below the leading diagonal in the second column to zero. Continue this process as far as possible until (A|b) is reduced to (H|c), where H is an upper triangular matrix.
(3) Solve (H|c) by back-substitution.

If a pivot is zero the method breaks down. This can be overcome by interchanging the row with zero pivot with one of the rows beneath it (an essential row interchange). We may end up with a zero pivot which cannot be interchanged - the eqns then have either no soln or an infinite number of solutions.
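Steps (1)-(2) of the summary, including essential row interchanges, can be sketched as follows (a sketch under the notes' conventions, not a definitive implementation; the helper name is ours):

```python
from fractions import Fraction

def gauss_echelon(M):
    """Reduce an augmented matrix (A|b) to the form (H|c) of step (2),
    interchanging rows whenever a pivot is zero (essential row interchange)."""
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):              # last column is the RHS c
        # look for a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue                       # zero pivot that cannot be interchanged
        M[r], M[piv] = M[piv], M[r]        # essential row interchange (if needed)
        for i in range(r + 1, rows):
            m = M[i][c] / M[r][c]
            M[i] = [a - m * p for a, p in zip(M[i], M[r])]
        r += 1
    return M

# the first system of these notes reduces to Stage [2]:
H = gauss_echelon([[2, 3, -2, 1], [4, 7, 3, 3], [6, 3, -2, -3]])
print(H[1], H[2])   # rows reduce to (0 1 7 | 1) and (0 0 46 | 0)
```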

Example 1:
    2x1 + 3x2 - 2x3 = 1
    4x1 + 6x2 + 3x3 = 2
    6x1 + 3x2 - 2x3 = 9

Solution

            R1 ( 2  3  -2 | 1 )
    (A|b) = R2 ( 4  6   3 | 2 )
            R3 ( 6  3  -2 | 9 )

Stage [1]
    R1                ( 2   3  -2 | 1 )
    R2a = R2 - 2R1    ( 0   0   7 | 0 )
    R3a = R3 - 3R1    ( 0  -6   4 | 6 )

Cannot use R2a since its pivot is zero, hence interchange R2a and R3a:

    R1           ( 2   3  -2 | 1 )
    R2b = R3a    ( 0  -6   4 | 6 )
    R3b = R2a    ( 0   0   7 | 0 )

Now in upper triangular form. Back-subst.:

    x3 = 0,
    -6x2 + 4·0 = 6  ⇒  x2 = -1,
    2x1 + 3(-1) - 2·0 = 1  ⇒  x1 = 2,

        (  2 )
    x = ( -1 ) .
        (  0 )

Example 2:
    2x1 + 3x2 - 2x3 = 1
    4x1 + 6x2 + 3x3 = 2
    6x1 + 9x2 - 2x3 = 3

Solution
            R1 ( 2  3  -2 | 1 )
    (A|b) = R2 ( 4  6   3 | 2 )
            R3 ( 6  9  -2 | 3 )

Stage [1]
    R1                ( 2  3  -2 | 1 )
    R2a = R2 - 2R1    ( 0  0   7 | 0 )
    R3a = R3 - 3R1    ( 0  0   4 | 0 )

Although this matrix is already in upper triangular form we continue the elimination process as far as possible.

Stage [2]
    R1                       ( 2  3  -2 | 1 )
    R2a                      ( 0  0   7 | 0 )
    R3b = R3a - (4/7)R2a     ( 0  0   0 | 0 )

making R3b, which has all zero elements. From R2a, 7x3 = 0 ⇒ x3 = 0, & from R1, 2x1 + 3x2 = 1. This is one eqn. in two unknowns ⇒ we can choose one unknown arbitrarily, e.g. x1 = l. Then x2 = (1/3)(1 - 2l), & the soln is

        (       l       )
    x = ( (1/3)(1 - 2l) ) .
        (       0       )

Alternatively, if we had set x2 = k then x1 = (1/2)(1 - 3k), & the soln would be written as

        ( (1/2)(1 - 3k) )
    x = (       k       )
        (       0       )

which is equivalent to the previous form of the soln with l = (1/2)(1 - 3k).
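That every member of this one-parameter family really solves the eqns can be checked directly (a small sketch; the helper residual is ours, not from the notes):

```python
from fractions import Fraction

def residual(A, x, b):
    """Componentwise value of A x - b (hypothetical helper)."""
    return [sum(a * xi for a, xi in zip(row, x)) - rhs
            for row, rhs in zip(A, b)]

A = [[2, 3, -2], [4, 6, 3], [6, 9, -2]]
b = [1, 2, 3]
# the one-parameter family x = (l, (1 - 2l)/3, 0) found above
for l in [Fraction(0), Fraction(1), Fraction(-5, 2)]:
    x = [l, (1 - 2 * l) / 3, Fraction(0)]
    assert residual(A, x, b) == [0, 0, 0]
print("every member of the family solves the system")
```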

Example 3:
    2x1 + 3x2 - 2x3 = 1
    4x1 + 6x2 + 3x3 = 2
    6x1 + 9x2 - 2x3 = 7

Solution
            R1 ( 2  3  -2 | 1 )
    (A|b) = R2 ( 4  6   3 | 2 )
            R3 ( 6  9  -2 | 7 )

Stage [1]
    R1                ( 2  3  -2 | 1 )
    R2a = R2 - 2R1    ( 0  0   7 | 0 )
    R3a = R3 - 3R1    ( 0  0   4 | 4 )

Stage [2]
    R1                       ( 2  3  -2 | 1 )
    R2a                      ( 0  0   7 | 0 )
    R3b = R3a - (4/7)R2a     ( 0  0   0 | 4 )

The last row is 0·x3 = 4 ⇒ no solution.
Notice here that a row of H is all zero, whereas in the previous example a row of the augmented matrix (H|c) was all zero. The reason for the appearance of zero row(s) of H or (H|c) will become clear later. For large systems there may be more than one zero row & several variables may turn out to have arbitrary values.
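The three outcomes met so far - a unique soln, infinitely many, or none - can be read off from the reduced form (H|c) by counting the non-zero rows of H and of (H|c). A sketch (the helper classify is hypothetical, not from the notes):

```python
def classify(H_aug):
    """Classify a reduced augmented matrix (H|c) as 'unique', 'infinite'
    or 'none'; the last entry of each row is the RHS."""
    nonzero_H = sum(1 for row in H_aug if any(a != 0 for a in row[:-1]))
    nonzero_Hc = sum(1 for row in H_aug if any(a != 0 for a in row))
    n_unknowns = len(H_aug[0]) - 1
    if nonzero_H < nonzero_Hc:
        return "none"        # a row (0 ... 0 | nonzero): inconsistent
    if nonzero_H < n_unknowns:
        return "infinite"    # free unknowns remain
    return "unique"

print(classify([[2, 3, -2, 1], [0, 1, 7, 1], [0, 0, 46, 0]]))  # unique
print(classify([[2, 3, -2, 1], [0, 0, 7, 0], [0, 0, 0, 0]]))   # infinite
print(classify([[2, 3, -2, 1], [0, 0, 7, 0], [0, 0, 0, 4]]))   # none
```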

Linear dependence & independence

This concept applies not only to sets of vectors but also to sets of functions (see ex. sheet). Later we apply this idea to the rows of A or rows of (A|b). Consider the 2-component vectors

    ( 1 )     ( 3 )
    ( 2 )  &  ( 6 ) .

Clearly

    ( 3 )     ( 1 )        ( 1 )   ( 3 )   ( 0 )
    ( 6 ) = 3 ( 2 ) ,    3 ( 2 ) - ( 6 ) = ( 0 ) .    (A)

One vector is simply a multiple of the other. Next, consider

    ( 1 )     ( 0 )
    ( 0 )  &  ( 1 ) .

It is not possible to write one as a multiple of the other - all we can say is

      ( 1 )     ( 0 )   ( 0 )
    0 ( 0 ) + 0 ( 1 ) = ( 0 ) .    (B)

In case (A) the vectors are called linearly dependent (l.d.), whereas for (B) they are called linearly independent (l.i.). Notice that it is not possible to have 3 (or more) 2-component vectors which are l.i., e.g.

    ( 1 )   ( 0 )   ( 3 )
    ( 0 ) , ( 1 ) , ( 7 )

since

    ( 3 )     ( 1 )     ( 0 )
    ( 7 ) = 3 ( 0 ) + 7 ( 1 ) .

Consider next the two 3-cpt vectors

    ( 1 )   (  4 )
    ( 2 ) , (  8 ) .
    ( 7 )   ( 28 )

Clearly l.d. since

    (  4 )     ( 1 )   ( 0 )
    (  8 ) - 4 ( 2 ) = ( 0 ) .
    ( 28 )     ( 7 )   ( 0 )

Whereas for

    ( 1 )     ( 0 )
    ( 0 )  &  ( 1 ) ,
    ( 0 )     ( 0 )

we have

      ( 1 )     ( 0 )   ( 0 )
    0 ( 0 ) + 0 ( 1 ) = ( 0 ) .
      ( 0 )     ( 0 )   ( 0 )

For 3 3-cpt vectors it is not so easy.

Example 4:
    ( 1 )   ( 4 )   (  -7 )
    ( 2 ) , ( 7 ) , ( -12 ) .
    ( 3 )   ( 2 )   (   1 )

Solution We try to find q1, q2, q3 such that

       ( 1 )      ( 4 )      (  -7 )   ( 0 )
    q1 ( 2 ) + q2 ( 7 ) + q3 ( -12 ) = ( 0 ) .
       ( 3 )      ( 2 )      (   1 )   ( 0 )

If the only soln is q1 = q2 = q3 = 0 ⇒ the vectors are l.i. Otherwise they are l.d. We need to solve

     q1 + 4q2 -  7q3 = 0
    2q1 + 7q2 - 12q3 = 0
    3q1 + 2q2 -   q3 = 0

by G.E.:

    ( 1  4   -7 | 0 )      ( 1    4   -7 | 0 )      ( 1   4  -7 | 0 )
    ( 2  7  -12 | 0 )  →   ( 0   -1    2 | 0 )  →   ( 0  -1   2 | 0 )
    ( 3  2   -1 | 0 )      ( 0  -10   20 | 0 )      ( 0   0   0 | 0 )

Thus 0·q3 = 0 ⇒ q3 = k (arbitrary),
    -q2 + 2k = 0  ⇒  q2 = 2k,
    q1 + 8k - 7k = 0  ⇒  q1 = -k.

Check:

       ( 1 )      ( 4 )     (  -7 )   ( 0 )
    -k ( 2 ) + 2k ( 7 ) + k ( -12 ) = ( 0 )
       ( 3 )      ( 2 )     (   1 )   ( 0 )

⇒ the vectors are l.d.

Example 5:
    ( 1 )   ( 0 )   ( 0 )
    ( 0 ) , ( 1 ) , ( 0 ) .
    ( 0 )   ( 0 )   ( 1 )

Solution Solve

       ( 1 )      ( 0 )      ( 0 )   ( 0 )
    q1 ( 0 ) + q2 ( 1 ) + q3 ( 0 ) = ( 0 ) ,
       ( 0 )      ( 0 )      ( 1 )   ( 0 )

i.e.

    ( 1  0  0 | 0 )
    ( 0  1  0 | 0 ) .
    ( 0  0  1 | 0 )

Clearly the only soln is q1 = q2 = q3 = 0 & hence the vectors are l.i.

(Note: 4 or more 3-cpt vectors must be l.d.)
(The above applies equally to row vectors as well as column vectors.)

Definition In general, vectors v1, v2, ..., vm are l.d. if there is a non-zero linear combination of them which is equal to the vector 0, i.e. if there are numbers q1, q2, ..., qm, not all zero, such that

    q1 v1 + q2 v2 + ... + qm vm = 0.

If the only soln to the above eqn. is q1 = q2 = ... = qm = 0 then the vectors are l.i.

In our examples on linear equations we have sometimes found rows of zeros, or nearly all zeros. There are basically 3 cases to consider.

(1) In one example we found the form (H|c) to be

    ( 2  3  -2 | 1 )
    ( 0  0   7 | 0 )
    ( 0  0   0 | 0 )    ← all zeros

This means that the rows of (A|b) are l.d. & the eqns have an infinite number of solutions. This is generally true (there may be more than one row of zeros).

(2) In another example we found (H|c) to be

    ( 2  3  -2 | 1 )
    ( 0  0   7 | 0 )
    ( 0  0   0 | 4 )    ← zeros | non-zero

This shows that the rows of A are l.d. but the rows of (A|b) are l.i. In this case the eqns are inconsistent (no soln) (generally true).

(3) In a further example we found (H|c) to be

    ( 2  3  -2 | 1 )
    ( 0  1   7 | 1 )
    ( 0  0  46 | 0 )    ← the RHS entries are in general not necessarily zero

The rows of A are l.i. & the eqns have a unique soln.

So far we have only considered systems of n eqns. in n unknowns, but the general approach can be applied to systems with more unknowns than eqns. & with more eqns. than unknowns. In particular, in such cases we will not produce an upper triangular matrix but a matrix which is in row echelon form.

Row echelon form
(1) All rows containing only zeros appear below rows with non-zeros. (There may not be any rows with all zeros.)
(2) The first non-zero entry in any row (reading from left to right) appears in a column to the right of the first non-zero entry in the preceding row (i.e. not necessarily in the next one).

Ex. (i) The following two matrices are in row echelon form:

    ( 2  1  1 )      ( 3  2  1  1 )
    ( 0  1  1 ) ,    ( 0  0  1  0 )
    ( 0  0  2 )      ( 0  0  0  1 )
                     ( 0  0  0  0 )

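Conditions (1) & (2) of row echelon form can be checked mechanically; a short sketch (the helper is ours, not from the notes):

```python
def is_row_echelon(M):
    """Check conditions (1) & (2) of row echelon form (hypothetical helper)."""
    def lead(row):
        # column index of the first non-zero entry, or None for an all-zero row
        return next((j for j, a in enumerate(row) if a != 0), None)
    seen_zero_row = False
    prev = -1
    for l in map(lead, M):
        if l is None:
            seen_zero_row = True       # (1) zero rows must come last
        elif seen_zero_row or l <= prev:
            return False               # (2) leading entries must move right
        else:
            prev = l
    return True

print(is_row_echelon([[2, 1, 1], [0, 1, 1], [0, 0, 2]]))   # True
print(is_row_echelon([[2, 1, 1], [0, 0, 1], [0, 1, 2]]))   # False
```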
(The left one is just upper triangular.)

(ii) The following two are not:

    ( 2  1  1 )      ( 3  2  1  1 )
    ( 0  0  1 ) ,    ( 0  0  0  1 )
    ( 0  1  2 )      ( 0  0  1  0 )
                     ( 0  0  0  0 )

Example 6:
    3x + y + z = 0
     x + y     = 1

Solution Here

    (A|b) = ( 3  1  1 | 0 )
            ( 1  1  0 | 1 )

We reduce (A|b) to row echelon form. Interchange R1 ↔ R2:

    ( 1  1  0 | 1 )
    ( 3  1  1 | 0 )

then

    R2a = R2 - 3R1    ( 1   1  0 |  1 )
                      ( 0  -2  1 | -3 )  = (H|c)

[rows of A l.i., rows of (A|b) l.i.] Then

     x +  y     =  1
        -2y + z = -3

(2 eqns., 3 unknowns). We let z = λ. Hence

    y = (1/2)(3 + λ),   x = 1 - y = -1/2 - λ/2 .



Infinitely many solns.:

    ( x )   ( -1/2 )     ( -1/2 )
    ( y ) = (  3/2 ) + λ (  1/2 )
    ( z )   (   0  )     (   1  )

Notice that the vector (-1/2, 3/2, 0) satisfies the eqns with RHS (0, 1) [a particular sol. of the inhomogeneous eq.], whereas the vector (-1/2, 1/2, 1) satisfies the eqns with RHS (0, 0) [general solution of the homogeneous eq.] (more next year!). The general soln is a linear combination of these solns.

Example 7:
     x + 2y + z = 2
          y + z = 2
     x +  y + z = 1
    2x +  y + z = 0

Solution
            R1 ( 1  2  1 | 2 )
    (A|b) = R2 ( 0  1  1 | 2 )
            R3 ( 1  1  1 | 1 )
            R4 ( 2  1  1 | 0 )

    R3a = R3 - R1     ( 1   2   1 |  2 )
    R4a = R4 - 2R1    ( 0   1   1 |  2 )
                      ( 0  -1   0 | -1 )
                      ( 0  -3  -1 | -4 )

    R3b = R3a + R2    ( 1  2  1 | 2 )
    R4b = R4a + 3R2   ( 0  1  1 | 2 )
                      ( 0  0  1 | 1 )
                      ( 0  0  2 | 2 )

    R4c = R4b - 2R3b  ( 1  2  1 | 2 )
                      ( 0  1  1 | 2 )
                      ( 0  0  1 | 1 )  = (H|c)
                      ( 0  0  0 | 0 )

This shows that the rows of A are l.d. and the rows of (A|b) are l.d. (one eqn. is a linear combination of the others).

    x + 2y + z = 2
         y + z = 2
             z = 1

⇒ z = 1, y = 2 - z = 1, x = 2 - 2(1) - 1 = -1.

Ex. If the RHS is

        ( 2 )
    b = ( 2 )
        ( 1 )
        ( 1 )

we would find that the rows of A are l.d. but the rows of (A|b) are l.i.: the last row of the echelon form is ( 0 0 0 | non-zero ). Hence, no solution. [A more formal approach in 2nd year.]
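The splitting observed in Example 6 - a particular soln of the inhomogeneous eqns plus a multiple of a soln of the homogeneous eqns - can be verified directly (a sketch; the helper matvec is ours, not from the notes):

```python
from fractions import Fraction

def matvec(A, x):
    """Compute A x componentwise (hypothetical helper)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[3, 1, 1], [1, 1, 0]]                              # Example 6
b = [0, 1]
xp = [Fraction(-1, 2), Fraction(3, 2), Fraction(0)]     # particular soln
xh = [Fraction(-1, 2), Fraction(1, 2), Fraction(1)]     # homogeneous soln

assert matvec(A, xp) == b                  # xp solves the eqns with RHS (0, 1)
assert matvec(A, xh) == [0, 0]             # xh solves the eqns with RHS (0, 0)
for lam in [Fraction(0), Fraction(7), Fraction(-3, 4)]:
    x = [p + lam * h for p, h in zip(xp, xh)]
    assert matvec(A, x) == b               # every xp + lambda*xh is a soln
print("general soln = particular + multiple of homogeneous")
```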
