
System of Linear Equations

Definition 1: Let A ∈ F^{m×n}, b ∈ F^m. Then

a11 x1 + ⋯ + a1n xn = b1
        ⋮
am1 x1 + ⋯ + amn xn = bm

is called a system of linear equations for the unknowns
x1, . . . , xn with coefficients in F.
If the bi are all zero, the system is said to be homogeneous.
We consider the matrix A as a linear transformation
A : F^n → F^m, and the system is written as Ax = b.
The solution set of the system Ax = b is
Sol(A, b) = {x ∈ F^n : Ax = b}.
The system Ax = b is solvable if Sol(A, b) ≠ ∅.
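For a concrete illustration (not part of the original notes; the system over F = R is made up for the example), Ax = b can be set up and a candidate solution checked in numpy:

    import numpy as np

    # A hypothetical 2x3 system:
    #   x1 + 2 x2 + 3 x3 = 6
    #   4 x1 + 5 x2 + 6 x3 = 15
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])   # A in F^{2x3}
    b = np.array([6.0, 15.0])         # b in F^2

    x = np.array([1.0, 1.0, 1.0])     # a candidate x in F^3
    print(np.allclose(A @ x, b))      # True, so x lies in Sol(A, b)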

Augmented Matrix

Proposition 1: Ax = b is solvable iff rank(A) = rank(A|b).


Proof. Ax = b is solvable iff b = Ax for some x ∈ F^n iff
b ∈ R(A).
Suppose b ∈ R(A). As R(A) = span{Ae_i} = span of the columns of
A, the rank of A does not change if b is added to the set of
column vectors of A. Then rank(A) = rank(A|b).
Conversely, suppose rank(A) = rank(A|b). The columns of
(A|b) generate a subspace, call it U, containing the columns of A
and b. R(A) is a subspace of this space U. But R(A) and U
now have the same dimension.
Hence, U = R(A). That is, b ∈ R(A).
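A minimal numpy sketch of this criterion, assuming F = R and using numerical rank (the function name and the test matrices are made up for the example):

    import numpy as np

    def is_solvable(A, b):
        # Proposition 1: Ax = b is solvable iff rank(A) = rank(A|b).
        Ab = np.column_stack([A, b])      # the augmented matrix (A|b)
        return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab)

    A = np.array([[1.0, 2.0], [2.0, 4.0]])       # rank 1
    print(is_solvable(A, np.array([3.0, 6.0])))  # True: b is in R(A)
    print(is_solvable(A, np.array([3.0, 7.0])))  # False: rank(A|b) = 2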


Translate of N(A)
Proposition 2: Let x0 ∈ F^n be a solution of Ax = b. Then
Sol(A, b) = x0 + N(A) = {x0 + x : x ∈ N(A)}.
Proof. If x ∈ N(A), then A(x0 + x) = Ax0 + Ax = Ax0 = b.
That is, x0 + x ∈ Sol(A, b).
If v ∈ Sol(A, b) and Ax0 = b, then
A(v - x0) = Av - Ax0 = b - b = 0. That is, v - x0 ∈ N(A).
Then, v ∈ x0 + N(A).

Corollaries 1-3:
1. If x0 is a solution of Ax = b and {v1, . . . , vr} is a basis for
N(A), then Sol(A, b) = {x0 + α1 v1 + ⋯ + αr vr : αi ∈ F}.
Here, r = nullity(A) = n - rank(A).
2. A solvable system Ax = b is uniquely solvable iff N(A) = {0} iff
rank(A) = n.
3. If A is a square matrix, then Ax = b is uniquely solvable iff
det(A) ≠ 0.
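Corollary 1 suggests a computational recipe over F = R: find one particular solution and a basis of N(A). A minimal sketch (the helper name is my own; the null-space basis is read off from the SVD):

    import numpy as np

    def solution_set(A, b, tol=1e-12):
        # Returns (x0, N) with Sol(A, b) = x0 + span(columns of N),
        # assuming the system is solvable.
        x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # one particular solution
        _, s, Vh = np.linalg.svd(A)
        r = int(np.sum(s > tol))                   # r = rank(A)
        N = Vh[r:].T                               # n - r basis vectors of N(A)
        return x0, N

    A = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 3.0]])
    b = np.array([1.0, 1.0])
    x0, N = solution_set(A, b)
    print(np.allclose(A @ x0, b))   # True
    print(np.allclose(A @ N, 0))    # True: the columns of N span N(A)
    print(N.shape[1])               # 1 = nullity(A) = n - rank(A)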

Cramer's Rule
Proposition 3: Let det(A) ≠ 0. Denote by Ci the ith column of
A, and by A[Ci → b] the matrix A with its ith column replaced
by b. Then the solution of Ax = b is given by
xi = det(A[Ci → b])/det(A).
Proof. Write Ax = b as x1 C1 + ⋯ + xn Cn = b. Move b to the left
side: x1 C1 + ⋯ + (xi Ci - b) + ⋯ + xn Cn = 0.
That is, the columns of the new matrix with columns
C1, . . . , xi Ci - b, . . . , Cn
are linearly dependent. Thus, det(A[Ci → xi Ci - b]) = 0.
This gives xi det(A) - det(A[Ci → b]) = 0.

Cramer's rule helps in studying the map (A, b) ↦ x when
det(A) ≠ 0.
But it is an impractical method for computing the solution of a
system.
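For completeness, a direct (and, as noted, impractical) rendering of Cramer's rule in numpy; the 2 × 2 system is made up for the example:

    import numpy as np

    def cramer(A, b):
        # Proposition 3: x_i = det(A[C_i -> b]) / det(A); assumes det(A) != 0.
        d = np.linalg.det(A)
        x = np.empty(A.shape[0])
        for i in range(A.shape[0]):
            Ai = A.copy()
            Ai[:, i] = b            # replace the ith column C_i by b
            x[i] = np.linalg.det(Ai) / d
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(cramer(A, b))                      # [0.8 1.4]
    print(np.allclose(A @ cramer(A, b), b))  # True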

Elementary Operations
Definition 2: There are three kinds of Elementary Row
Operations for a matrix A ∈ F^{m×n}:
R1. Interchange of two rows
R2. Multiplication of a row by a nonzero constant
R3. Replacing a row by the sum of that row with another row.
Elementary Column Operations are defined similarly.
Column operations do not alter the column rank of a matrix.
Since the column rank of a matrix equals the rank of the matrix,
the rank can be computed using column operations.
Similarly, using elementary row operations, the row rank of a
matrix can be computed.
We need to bring the matrix to a form where the rows below the
rth row are zero, the diagonal elements a11, . . . , arr in the
new matrix are nonzero, and the entries below the diagonal are
all zero. This shows that the row rank of the matrix is r.
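A quick sketch of the three row operations in numpy on a made-up matrix, checking that the rank is unchanged (anticipating Proposition 4 below):

    import numpy as np

    A = np.array([[0.0, 2.0, 2.0],
                  [1.0, 4.0, 5.0]])
    r0 = np.linalg.matrix_rank(A)   # rank before the operations

    A[[0, 1]] = A[[1, 0]]           # R1: interchange two rows
    A[1] = 3.0 * A[1]               # R2: multiply a row by a nonzero constant
    A[1] = A[1] + A[0]              # R3: replace a row by its sum with another

    print(np.linalg.matrix_rank(A) == r0)   # True: the rank is unchanged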

Rank revisited
Proposition 4: Elementary row operations preserve the row
rank of a matrix. Elementary column operations preserve the
column rank of a matrix.
Proposition 5: For a matrix A,
rowrank(A) = colrank(A) = rank(A).
Proof. Call a row of A superfluous if it is a linear combination
of the other rows. Omitting a superfluous row does not alter the
row rank.
Superfluous columns are defined similarly, and we consider their
omission as well.
If the jth row is superfluous, then in every column, the jth
entry is the same linear combination of the other entries of that
column.
Hence, a linear combination of the columns is zero exactly when
the corresponding combination of the columns with the jth entries
deleted is zero.
Thus, omitting the jth row does not alter the column rank.
A similar argument proves that omitting a superfluous column
does not alter the row rank.
Omitting superfluous rows and columns until none remain leaves an
m' × n' matrix with independent rows and independent columns. Its
row space sits in F^{n'} and its column space in F^{m'}, so
m' ≤ n' and n' ≤ m'. Hence m' = n', both ranks of the reduced
matrix equal this common size, and therefore
rowrank(A) = colrank(A).


An Example

Using row operations, subtract 2 times the first row from the
second and 3 times the first row from the third; then, using
column operations, subtract the first column from the second and
from the third:

[ 1 1 1 ]      [ 1 1 1 ]      [ 1 0 0 ]
[ 2 2 2 ]  →   [ 0 0 0 ]  →   [ 0 0 0 ]
[ 3 3 3 ]      [ 0 0 0 ]      [ 0 0 0 ]

Hence, the rank of the matrix is 1.
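The hand computation can be checked numerically (matrix_rank computes the rank via the SVD rather than by row reduction):

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0],
                  [2.0, 2.0, 2.0],
                  [3.0, 3.0, 3.0]])
    print(np.linalg.matrix_rank(A))    # 1, as obtained by hand above
    print(np.linalg.matrix_rank(A.T))  # 1: row rank equals column rank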


Proposition 6: If we change the augmented matrix (A|b) by
elementary row transformations into a matrix (A'|b'), then
Sol(A, b) = Sol(A', b').
Proof. A row operation is just an abbreviated way of manipulating
the equations linearly, and each elementary operation is
reversible, so no solutions are gained or lost.

Gaussian Elimination when det(A) ≠ 0

1. Start with the augmented matrix (A|b). If the (1,1) entry is 0,
then interchange the first row with another row to make the (1,1)
entry of the new matrix nonzero. Replace every other row by that
row minus a suitable multiple of the first row to kill all (j, 1)
entries for j > 1.
2. After the kth step, do not touch the first k rows. Continue as
in Step 1 with the rest of the matrix.
3. After the (n - 1)th step, the matrix is of the form (A'|b'),
where A' is upper triangular with nonzero diagonal entries. Use
back-substitution to solve it.
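A minimal Python sketch of Steps 1-3. It pivots on the largest available entry in each column rather than interchanging rows only when the diagonal entry is exactly zero; that is the numerically safer variant of the same idea:

    import numpy as np

    def gauss_solve(A, b):
        # Gaussian elimination with row interchanges, then
        # back-substitution; assumes A is square with det(A) != 0.
        n = len(b)
        M = np.column_stack([A.astype(float), b.astype(float)])  # (A|b)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(M[k:, k]))   # choose a pivot row
            if p != k:
                M[[k, p]] = M[[p, k]]             # row interchange
            for j in range(k + 1, n):             # kill the (j, k) entries
                M[j] -= (M[j, k] / M[k, k]) * M[k]
        x = np.zeros(n)                           # back-substitution
        for i in range(n - 1, -1, -1):
            x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(gauss_solve(A, b))   # [0.8 1.4], agreeing with np.linalg.solve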

Gaussian Elimination when A ∈ F^{m×n}

(A) Start as in Gaussian Elimination as long as possible. After,
say, t steps, you find that the first t diagonal elements are
nonzero, but no interchange among the last m - t rows can fill
the (t + 1, t + 1) entry with a nonzero element.
Now, the augmented matrix looks like:

[ a'11  *    ⋯   *     *  | b'1 ]
[       a'22 ⋯   *     *  |  ⋮  ]
[            ⋱   ⋮     ⋮  |  ⋮  ]
[                a'tt  *  |  ⋮  ]
[  0    ⋯        0     B' |  ⋮  ]
[  0    ⋯        0        | b'm ]

where the block B' occupies the last m - t rows and the columns
after the tth, and its first column is zero.

Gaussian Elimination when A ∈ F^{m×n}, Contd.

(B) Use interchanges of columns from among the last n - t
columns and keep track of the corresponding variables.
Continue to proceed as in Step (A). Finally reach a matrix in
which the first r diagonal elements are nonzero and the
remaining m - r rows of the A-part are zero rows. Call the
b-entries bi'.
Now, the augmented matrix looks like:

[ a'11  *    ⋯   *     *  | b'1     ]
[       a'22 ⋯   *     *  |  ⋮      ]
[            ⋱   ⋮     ⋮  |  ⋮      ]
[                a'rr  *  | b'r     ]
[  0    ⋯        0     0  | b'(r+1) ]
[  ⋮              ⋮    ⋮  |  ⋮      ]
[  0    ⋯        0     0  | b'm     ]

Gaussian Elimination when A ∈ F^{m×n}, Contd.

(C) If one of b'(r+1), . . . , b'm is nonzero, then Ax = b is not
solvable.
(D) Otherwise, omit the last m - r rows. You now end up with the
augmented matrix (T|S|b'), where T is an upper triangular
r × r invertible matrix, S is an r × k matrix with k = n - r,
and b' is an r-vector.
Now, the augmented matrix looks like:

[ a'11  *    ⋯   *    |     | b'1 ]
[       a'22 ⋯   *    |  S  |  ⋮  ]
[            ⋱   ⋮    |     |  ⋮  ]
[                a'rr |     | b'r ]
Gaussian Elimination when A ∈ F^{m×n}, Contd.

(E) Write the unknowns as y1, . . . , yr, z1, . . . , zk. It is a
permutation of the original x1, . . . , xn. The system is
Ty + Sz = b'.
(F) Solve Ty = b' and adjoin k zeros to get w0. For the jth
vector wj, solve Ty = -Sj, where Sj is the jth column of S, and
adjoin a 1 at the jth z-position and 0 at the rest to get wj.
(We want k linearly independent solutions of the homogeneous
system, since the nullity of (T|S) is k.)
(G) Finally reorder the entries of w0, . . . , wk to get
v0, . . . , vk, undoing the reordering of the unknowns done in
Step (B).
(H) Sol(A, b) = {v0 + α1 v1 + ⋯ + αk vk : αi ∈ F}.
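A Python sketch of Steps (E)-(G), assuming T, S, b' and the permutation of the unknowns have already been produced by Steps (A)-(D); the function name and the perm convention are my own. (Example 2 below supplies concrete inputs.)

    import numpy as np

    def solutions_from_TSb(T, S, bp, perm):
        # From T y + S z = b' build v0 and a basis v1, ..., vk of N(A).
        # perm[i] = original index (into x1..xn) of the ith unknown
        # in the (y, z) ordering.
        r, k = S.shape
        W = np.zeros((k + 1, r + k))
        W[0, :r] = np.linalg.solve(T, bp)     # Step (F): w0 = (T^{-1} b', 0)
        for j in range(k):
            W[j + 1, :r] = np.linalg.solve(T, -S[:, j])   # solve Ty = -Sj
            W[j + 1, r + j] = 1.0             # a 1 in the jth z-position
        V = np.zeros_like(W)
        V[:, perm] = W                        # Step (G): undo the column swaps
        return V[0], V[1:]                    # v0 and a basis of N(A)

    # The data of Example 2 below, after Steps (A)-(D):
    T = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
    S = np.array([[1.0, -3.0], [-1.0, -3.0], [0.0, -2.0]])
    v0, V = solutions_from_TSb(T, S, np.array([6.0, 5.0, 2.0]), [0, 1, 3, 2, 4])
    print(v0)   # [3. 1. 0. 2. 0.]
    print(V)    # rows (-2, 1, 1, 0, 0) and (2, -1, 0, 2, 1)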

Examples
Example 1: Suppose after Step (D), you obtain:
x + y + z = 1, y + 3z = 1.

You might take y = 1 - 3z, x = 1 - z - (1 - 3z) = 2z, giving

Sol(A, b) = {(2z, 1 - 3z, z) : z ∈ F}
          = {(0, 1, 0) + α(2, -3, 1) : α ∈ F}.
Here, (0, 1) is obtained by solving x + y = 1, y = 1,
and (2, -3) is obtained by solving x + y = -1, y = -3.
Note that 1 is the coefficient of z in the first equation,
and 3 is that in the second equation.
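A quick numerical check of Example 1's solution set for a few parameter values (the code is illustrative, not from the notes):

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0],
                  [0.0, 1.0, 3.0]])
    b = np.array([1.0, 1.0])

    for z in (0.0, 1.0, -2.5):             # a few values of the parameter
        x = np.array([0.0, 1.0, 0.0]) + z * np.array([2.0, -3.0, 1.0])
        print(np.allclose(A @ x, b))       # True each time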

Examples
Example 2: Solve the following system Ax = b by Gaussian
Elimination:
2x1 + 3x2 + x3 + 4x4 - 9x5 = 17
x1 + x2 + x3 + x4 - 3x5 = 6
x1 + x2 + x3 + 2x4 - 5x5 = 8
2x1 + 2x2 + 2x3 + 3x4 - 8x5 = 14.

Interchange the first two rows of the augmented matrix; then
subtract 2·(row 1) from rows 2 and 4, and row 1 from row 3:

[ 2  3  1  4  -9 | 17 ]      [ 1  1  1  1  -3 |  6 ]
[ 1  1  1  1  -3 |  6 ]  →   [ 0  1 -1  2  -3 |  5 ]
[ 1  1  1  2  -5 |  8 ]      [ 0  0  0  1  -2 |  2 ]
[ 2  2  2  3  -8 | 14 ]      [ 0  0  0  1  -2 |  2 ]

Subtracting row 3 from row 4:

[ 1  1  1  1  -3 |  6 ]
[ 0  1 -1  2  -3 |  5 ]
[ 0  0  0  1  -2 |  2 ]
[ 0  0  0  0   0 |  0 ]

Example 2 Contd.

Interchanging columns 3 and 4 (and recording that the unknowns
x3 and x4 are thereby swapped):

[ 1  1  1  1  -3 |  6 ]
[ 0  1  2 -1  -3 |  5 ]
[ 0  0  1  0  -2 |  2 ]
[ 0  0  0  0   0 |  0 ]

This leaves x1, x2, x4; the other unknowns are arbitrary. Write
x3 = α, x5 = β.

Then the system looks like:
x1 + x2 + x4 = 6 - α + 3β
x2 + 2x4 = 5 + α + 3β
x4 = 2 + 2β
Back substitution gives:
x1 = 3 - 2α + 2β, x2 = 1 + α - β, x3 = α, x4 = 2 + 2β, x5 = β.

Thus, Sol(A, b) = {(3, 1, 0, 2, 0) + α(-2, 1, 1, 0, 0)
                   + β(2, -1, 0, 2, 1) : α, β ∈ F}.
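A numerical check that v0 = (3, 1, 0, 2, 0) solves the system and that the other two vectors lie in N(A):

    import numpy as np

    A = np.array([[2.0, 3.0, 1.0, 4.0, -9.0],
                  [1.0, 1.0, 1.0, 1.0, -3.0],
                  [1.0, 1.0, 1.0, 2.0, -5.0],
                  [2.0, 2.0, 2.0, 3.0, -8.0]])
    b = np.array([17.0, 6.0, 8.0, 14.0])

    v0 = np.array([3.0, 1.0, 0.0, 2.0, 0.0])
    v1 = np.array([-2.0, 1.0, 1.0, 0.0, 0.0])
    v2 = np.array([2.0, -1.0, 0.0, 2.0, 1.0])

    print(np.allclose(A @ v0, b))   # True: v0 is a particular solution
    print(np.allclose(A @ v1, 0))   # True: v1 is in N(A)
    print(np.allclose(A @ v2, 0))   # True: v2 is in N(A)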
