
Chapter 3

Lecture 4
SOLVING SYSTEMS OF LINEAR EQUATIONS
Contents of lecture:
a. Graphical method
b. Direct method
c. Indirect method

Direct methods consist of:
 Cramer's rule,
 Gaussian elimination,
 Gauss-Jordan method and
 LU decomposition.

Indirect methods consist of:
 Jacobi method,
 Gauss-Seidel method, etc.

A Linear Equation representation

a. Graphical solution: for simple systems, the solution is the point of
   intersection of the two lines.
   Unique solution: the two lines cross at exactly one point.
   No solution: e.g. x + 2y = 1 and 2x + 4y = 4 (parallel lines).
   Infinite solutions: e.g. x + 2y = 1 and 2x + 4y = 2 (coincident lines).

[Figure: Graphical method for LE, Y1 versus X; the plotted lines intersect at x = 3, y = -1.]
[Figure: Graphical method for LE, Y1 versus X and Y2 versus X plotted on the same axes; the two lines intersect at x = 3, y = -1.]

Tabulated points used for the plot:

   x     y1     y2
 -10    5.5   -14
  -8    4.5   -12
  -6    3.5   -10
  -4    2.5    -8
  -2    1.5    -6
   0    0.5    -4
   2   -0.5    -2
   4   -1.5     0
   6   -2.5     2
   8   -3.5     4
  10   -4.5     6
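The tabulated values are consistent with the lines y1 = (1 - x)/2 (i.e. x + 2y = 1) and y2 = x - 4; these explicit formulas are inferred from the table, not stated on the slide. A minimal Python sketch that reproduces the table and confirms the intersection algebraically:

```python
# Graphical-method sketch: tabulate the two lines and check their intersection.
# The line formulas below are inferred from the tabulated values (an assumption):
#   y1 = (1 - x)/2   (from x + 2y = 1)
#   y2 = x - 4

def y1(x):
    return (1 - x) / 2

def y2(x):
    return x - 4

# Reproduce the plotting table.
for x in range(-10, 11, 2):
    print(f"x = {x:4d}   y1 = {y1(x):6.1f}   y2 = {y2(x):6.1f}")

# Solve the 2x2 system exactly to confirm the point read off the graph:
#   x + 2y = 1
#   x -  y = 4
det = 1 * (-1) - 1 * 2                # determinant of the coefficient matrix
x_sol = (1 * (-1) - 4 * 2) / det      # x = 3
y_sol = (1 * 4 - 1 * 1) / det         # y = -1
print("Intersection:", x_sol, y_sol)  # expected: 3.0 -1.0
```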
System of linear algebraic equations
A system of linear algebraic equations has the general form

  a11 x1 + a12 x2 + ... + a1n xn = b1
  a21 x1 + a22 x2 + ... + a2n xn = b2
  ...
  an1 x1 + an2 x2 + ... + ann xn = bn

where the a's are constant coefficients, the b's are constants, and n is the
number of equations.
Representing systems of linear equation with matrix
• As depicted below, [A] is the shorthand notation for the
matrix and aij designates an individual element of the
matrix.
A horizontal set of elements is a row; a vertical set of elements is a column.
The first subscript i and the second subscript j always designate the row and
the column, respectively, in which the element lies.
        | a11  a12  ...  a1n |          | x1 |          | b1 |
  [A] = | a21  a22  ...  a2n |    {x} = | x2 |    {b} = | b2 |
        | ...  ...  ...  ... |          | .. |          | .. |
        | an1  an2  ...  ann |          | xn |          | bn |

{x} and {b} are column vectors.
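As a small aside (not on the slide), this is how such a system can be stored and solved with NumPy arrays; the 3x3 system used here is a hypothetical example:

```python
import numpy as np

# Hypothetical 3x3 system (not from the slides):
#    2*x1 + 1*x2 - 1*x3 =   8
#   -3*x1 - 1*x2 + 2*x3 = -11
#   -2*x1 + 1*x2 + 2*x3 =  -3
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])   # coefficient matrix [A]; element a_ij is A[i-1, j-1]
b = np.array([8.0, -11.0, -3.0])     # right-hand-side column vector {b}

x = np.linalg.solve(A, b)            # unknown column vector {x}
print(x)                             # expected: [ 2.  3. -1.]
```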
Summary of operating rules that govern matrices

• Two n by m matrices are equal if, and only if, every element in the
first is equal to every element in the second, that is, [A] = [B] if aij
= bij for all i and j.
• Addition and subtraction of two matrices, say [A] and [B], are accomplished
  by adding or subtracting the corresponding terms of each matrix. For addition
  the elements of the resulting matrix [C] are computed as cij = aij + bij; for
  a subtraction [D] = [E] - [F], dij = eij - fij.

• Addition is commutative: [A] + [B] = [B] + [A]


• Both Addition and multiplication are also associative; that is,
([A] + [B]) + [C] = [A] + ([B] + [C])
• The multiplication of a matrix [A] by a scalar g is obtained by
  multiplying every element of [A] by g: bij = g * aij.
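A minimal sketch of these element-wise rules using plain Python lists; the helper names mat_add and scalar_mult are mine, not from the slides:

```python
# Element-wise rules from this slide: c_ij = a_ij + b_ij and the scalar product g * a_ij.

def mat_add(A, B):
    """Matrix addition: c_ij = a_ij + b_ij (A and B must have equal dimensions)."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_mult(g, A):
    """Scalar multiplication: every element of A is multiplied by g."""
    return [[g * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))      # [[6, 8], [10, 12]]  (commutative: same as mat_add(B, A))
print(scalar_mult(2, A))  # [[2, 4], [6, 8]]
```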
…cont.
A Simple Method for Multiplying Two Matrices
 Suppose that we want to multiply [X] by [Y] to yield [Z]. Each element of the
  product is the row-by-column sum z_ij = x_i1 y_1j + x_i2 y_2j + ... + x_in y_nj,
  which requires the number of columns of [X] to equal the number of rows of [Y].

If the dimensions of the matrices are suitable, matrix multiplication is
associative,
    ([A][B])[C] = [A]([B][C])
and distributive,
    [A]([B] + [C]) = [A][B] + [A][C]
or
    ([A] + [B])[C] = [A][C] + [B][C]
However, multiplication is not generally commutative:
    [A][B] ≠ [B][A]
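A short sketch of the row-by-column multiplication rule and the non-commutativity remark; the helper name mat_mult is mine:

```python
def mat_mult(X, Y):
    """Matrix product [Z] = [X][Y]: z_ij = sum_k x_ik * y_kj.
    Requires the number of columns of X to equal the number of rows of Y."""
    n, m, p = len(X), len(Y), len(Y[0])
    assert len(X[0]) == m, "inner dimensions must agree"
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

X = [[1, 2], [3, 4]]
Y = [[0, 1], [1, 0]]
print(mat_mult(X, Y))   # [[2, 1], [4, 3]]
print(mat_mult(Y, X))   # [[3, 4], [1, 2]] : not equal, so multiplication is not commutative
```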
Determinant and rank
Both quantities can be used to test whether a system has a unique solution, but
it is much easier to evaluate the rank of a matrix than its determinant for
larger systems.

Determinant evaluation is very computationally intensive; that is the reason
why Cramer's rule, although it looks fairly straightforward to implement, is
not very useful for practical problems of any size.

    det(A) = |A| = sum over i = 1..n of a_1i * C_1i   (expansion along row 1)
    Minor M_ij = the matrix obtained from A by eliminating the ith row and the
    jth column.
    Cofactor C_ij = (-1)^(i+j) * det(M_ij)
    Inverse: A^-1 = (1 / det(A)) * [C_ij]^T

The rank of a matrix is equal to the number of linearly independent columns of
that matrix, which can easily be identified by reducing the matrix to its upper
triangular form.
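A minimal recursive sketch of the cofactor expansion along the first row; the exponential cost of this recursion is exactly why determinant-based approaches become impractical for large n (the function name and test matrices are illustrative):

```python
def det(A):
    """Determinant by cofactor expansion along row 1:
    det(A) = sum_j a_1j * (-1)^(1+j) * det(M_1j)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        # Minor M_1j: delete row 1 and column j+1 (0-based column j).
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += A[0][j] * (-1) ** j * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                      # -2.0
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))    # -3.0
```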
Solution existence test
Given: M = coefficient matrix, Ma = augmented matrix, and D = determinant of
the coefficient matrix.

D ≠ 0: unique solution.
D = 0: no solution or infinitely many solutions (no solution if
rank(M) < rank(Ma); infinitely many if rank(M) = rank(Ma) < n).
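A small sketch of this existence test using matrix ranks in NumPy (the function name and the 2x2 test systems are mine, not from the slides):

```python
import numpy as np

def classify_system(A, b):
    """Classify A x = b as 'unique', 'none', or 'infinite' using ranks."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    rank_M = np.linalg.matrix_rank(A)
    rank_Ma = np.linalg.matrix_rank(np.hstack([A, b]))   # augmented matrix [A | b]
    n = A.shape[1]
    if rank_M == rank_Ma == n:
        return "unique"
    if rank_M < rank_Ma:
        return "none"
    return "infinite"

print(classify_system([[1, 2], [2, 4]], [1, 4]))   # 'none'      (parallel lines)
print(classify_system([[1, 2], [2, 4]], [1, 2]))   # 'infinite'  (coincident lines)
print(classify_system([[1, 2], [1, -1]], [1, 4]))  # 'unique'    (x = 3, y = -1)
```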
Linear equation solution methods
• There are several methods that are used to solve the equations directly.
  Prominent among these are the direct methods:
Cramer’s rule,
Gaussian elimination,
 Gauss-Jordan method and
LU decomposition.
• Another class of methods for solving linear equations is known as indirect
  methods (iterative methods).
• In this approach, we start from an initial guess, say x(0), and generate an
  improved estimate x(k+1) from the previous approximation x(k).
Jacobi Method
Gauss-Seidel Method etc.
Cramer's Rule (The Determinant Method)

If D = determinant of matrix A and Di = determinant of Ai, where Ai is obtained
by replacing the ith column of A with b, then each unknown is given by
xi = Di / D.
• A x = b has a unique solution if D ≠ 0 (equivalently, if the rank of A equals n).
• A x = b has either no solution or infinitely many solutions if D = 0.
Example:
D ≠ 0, so a unique solution exists.
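A compact sketch of Cramer's rule with NumPy determinants; as noted above it is illustrative only and too expensive for large systems (the example system is hypothetical):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = D_i / D,
    where D_i is the determinant of A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    if np.isclose(D, 0.0):
        raise ValueError("D = 0: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b            # replace the i-th column of A with b
        x[i] = np.linalg.det(Ai) / D
    return x

# Hypothetical 3x3 example (not the one on the slide):
A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(cramer_solve(A, b))       # expected: [ 2.  3. -1.]
```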
Direct method: Naïve Gaussian Elimination
A method to solve simultaneous linear equations of the
form [A][X]=[C]
Two steps
1. Forward Elimination
2. Back Substitution

Steps to apply the Gaussian elimination method

1. Create the augmented matrix:

     | a11  a12  ...  a1n | b1 |
     | a21  a22  ...  a2n | b2 |
     | ...  ...  ...  ... | .. |
     | an1  an2  ...  ann | bn |

2. Elimination (Ri(k) denotes row i after the kth round; the m's are the
   elimination multipliers computed from the current coefficients)
   1st round of elimination: m21 = a21/a11, m31 = a31/a11, ..., mn1 = an1/a11
     R2(1) = R2(0) - m21 R1(0), R3(1) = R3(0) - m31 R1(0), ..., Rn(1) = Rn(0) - mn1 R1(0)
   2nd round of elimination: m32 = a32/a22, m42 = a42/a22, ..., mn2 = an2/a22
     R3(2) = R3(1) - m32 R2(1), R4(2) = R4(1) - m42 R2(1), ..., Rn(2) = Rn(1) - mn2 R2(1)
   3rd round of elimination: m43 = a43/a33, ...
     R4(3) = R4(2) - m43 R3(2), ..., Rn(3) = Rn(2) - mn3 R3(2)

3. Then finally back substitution.
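A minimal sketch of naïve forward elimination followed by back substitution, assuming no zero pivots are encountered (the function name and example system are illustrative):

```python
import numpy as np

def naive_gauss(A, b):
    """Solve A x = b by naive Gaussian elimination (no pivoting)."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # Forward elimination: zero out the entries below the pivot in each column k.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier m_ik = a_ik / a_kk
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution: solve the upper triangular system from the bottom up.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# Hypothetical example system (not necessarily the slide's example):
print(naive_gauss([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
# expected: [ 2.  3. -1.]
```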
Example
x1  x2  x3  4
2 x1  x2  3x3  7
3x1  4 x2  2 x3  9

Are there any pitfalls of Naïve Gauss Elimination Method?
Division by zero:
It is possible that division by zero may occur during the forward elimination
steps whenever a pivot element becomes zero, for example when a11 = 0 in the
original set of equations.
Round-off error:
The Naive Gauss Elimination Method is prone to round-off errors. This is
especially true when there is a large number of equations, because errors
propagate from step to step. Also, subtraction of nearly equal numbers can
create large errors.

(Exercise: repeat the calculations using six and then five significant digits
with chopping, and compare the results.)
Gaussian Elimination with Partial Pivoting method

To solve simultaneous linear equations of the form


[A][X]=[C]
Two steps
• 1. Forward Elimination
• 2. Back Substitution
Forward Elimination
• Same as naïve Gauss elimination method except that we
switch rows before each of the (n-1) steps of forward
elimination.

Steps to apply the Gaussian elimination with partial pivoting method

1. Create the augmented matrix:

     | a11  a12  ...  a1n | b1 |
     | a21  a22  ...  a2n | b2 |
     | ...  ...  ...  ... | .. |
     | an1  an2  ...  ann | bn |

2. Elimination with partial pivoting
   Before the 1st round, check the first-column entries of all rows and
   interchange row 1 with the row that has the largest absolute value in column 1.
   1st round of elimination: m21 = a21/a11, m31 = a31/a11, ..., mn1 = an1/a11
     R2(1) = R2(0) - m21 R1(0), R3(1) = R3(0) - m31 R1(0), ..., Rn(1) = Rn(0) - mn1 R1(0)
   Before the 2nd round, check the second-column entries of rows 2 through n and
   interchange row 2 with the row that has the largest absolute value in column 2.
   2nd round of elimination: m32 = a32/a22, m42 = a42/a22, ..., mn2 = an2/a22
     R3(2) = R3(1) - m32 R2(1), R4(2) = R4(1) - m42 R2(1), ..., Rn(2) = Rn(1) - mn2 R2(1)
   Before the 3rd round, check the third-column entries of rows 3 through n and
   interchange row 3 with the row that has the largest absolute value in column 3.
   3rd round of elimination: m43 = a43/a33, ...
     R4(3) = R4(2) - m43 R3(2), ..., Rn(3) = Rn(2) - mn3 R3(2)

3. Then finally back substitution.
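A sketch of the same elimination with the partial-pivoting row swap added before each round (the example system, with a zero in the a11 position, is mine and would break the naïve method):

```python
import numpy as np

def gauss_partial_pivot(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest |a_ik| (i >= k) into the pivot position.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# A zero pivot in a11 is handled by the row swap, e.g.:
print(gauss_partial_pivot([[0, 2, 1], [1, 1, 1], [2, 1, -1]], [3, 3, 0]))
# expected roughly: [0.6 0.6 1.8]
```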
Matrix form at the beginning of the 2nd step of forward elimination: column 1
has been eliminated below the pivot a11.

Matrix form at the end of forward elimination: the coefficient part of the
augmented matrix is upper triangular.
Back substitution starting equations: solve the upper triangular system from
the bottom up,
    xn = bn / ann
    xi = ( bi - ai,i+1 xi+1 - ... - ain xn ) / aii,    for i = n-1, ..., 1
Example
x1  x2  x3  4
2 x1  x2  3 x3  7
3 x1  4 x2  2 x3  9
Direct method: Gauss-Jordan
• The Gauss-Jordan method is a variation of Gauss
elimination.
• The major difference is that when an unknown is
eliminated in the Gauss-Jordan method, it is eliminated
from all other equations rather than just the subsequent
ones.
• In addition, all rows are normalized by dividing them by
their pivot elements.
• Thus, the elimination step results in an identity matrix
rather than a triangular matrix
• Consequently, it is not necessary to employ back
substitution to obtain the solution.
Cont’d
Starting from a system Ax = b of the general form, we continue the process of
elimination until the set of equations reduces to the identity-matrix form
shown below:

    | 1  0  0  ...  0 |
    | 0  1  0  ...  0 |
    | .  .  .  ...  . |
    | 0  0  ...  1  0 |
    | 0  0  0  ...  1 |
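A minimal sketch of Gauss-Jordan elimination with pivot-row normalization, reducing [A | b] to [I | x]; a partial-pivoting row swap is included as a safeguard even though the slide does not require it (the example system is hypothetical):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | x] and return x."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    Ab = np.hstack([A, b])                 # augmented matrix
    n = len(b)
    for k in range(n):
        # Safeguard (not part of the slide's description): swap in the largest pivot.
        p = k + np.argmax(np.abs(Ab[k:, k]))
        Ab[[k, p]] = Ab[[p, k]]
        Ab[k] /= Ab[k, k]                  # normalize the pivot row
        for i in range(n):
            if i != k:
                Ab[i] -= Ab[i, k] * Ab[k]  # eliminate column k from ALL other rows
    return Ab[:, -1]                       # the last column now holds the solution

print(gauss_jordan([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3]))
# expected: [ 2.  3. -1.]
```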
Direct method: LU decomposition
• LU Decomposition is another method to solve a set of
simultaneous linear equations
• Which is better, Gauss Elimination or LU Decomposition?
• To answer this, a closer look at LU decomposition is
needed.
Method
• For any non-singular matrix [A] on which one can conduct the forward
  elimination steps of naïve Gauss elimination, one can always write it as:
                    [A] = [L][U]
• where [L] = lower triangular matrix
        [U] = upper triangular matrix
How does LU Decomposition work?

If we are solving a set of linear equations [A][X] = [B]
and [A] = [L][U], then [L][U][X] = [B].

Multiplying by [L]^-1 gives [L]^-1[L][U][X] = [L]^-1[B].

Remember that [L]^-1[L] = [I], which leads to [I][U][X] = [L]^-1[B];
and since [I][U] = [U], we have [U][X] = [L]^-1[B].

Now let [L]^-1[B] = [Y], which is the same as [L][Y] = [B].

We end up with [U][X] = [Y].

So we first solve [L][Y] = [B] for [Y] (forward substitution) and then
[U][X] = [Y] for [X] (back substitution).
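A short sketch of Doolittle LU decomposition without pivoting plus the two triangular solves; the helper names and the example system are illustrative, not from the slides:

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L U,
    with L unit lower triangular and U upper triangular (assumes no zero pivots)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

def lu_solve(L, U, b):
    """Solve L y = b by forward substitution, then U x = y by back substitution."""
    n = len(b)
    y = np.empty(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]      # L has a unit diagonal
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
L, U = lu_decompose(A)
print(lu_solve(L, U, b))        # expected: [ 2.  3. -1.]
```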
Example: Solve the following system of equations using:
a) Cramer's rule
b) Naïve Gauss elimination
c) Gauss-Jordan
d) LU decomposition

x1  x2  x3  4
2 x1  x2  3x3  7
3x1  x2  6 x3  2
Iterative methods (indirect methods)
Indirect methods consist of:
Jacobi method
Gauss-Seidel method, etc.

Iterative Methods (Example)

E1: 10 x1 -  x2 +  2 x3        =   6
E2: - x1 + 11 x2 -  x3 + 3 x4  =  25
E3: 2 x1 -  x2 + 10 x3 -  x4   = -11
E4:        3 x2 -  x3 + 8 x4   =  15
Iterative Methods (Example)

E1: 10 x1 -  x2 +  2 x3        =   6
E2: - x1 + 11 x2 -  x3 + 3 x4  =  25
E3: 2 x1 -  x2 + 10 x3 -  x4   = -11
E4:        3 x2 -  x3 + 8 x4   =  15

We rewrite the system in the form x = Tx + c:

x1 =  (1/10) x2 - (1/5) x3              + 3/5
x2 =  (1/11) x1 + (1/11) x3 - (3/11) x4 + 25/11
x3 = -(1/5) x1  + (1/10) x2 + (1/10) x4 - 11/10
x4 =             -(3/8) x2  + (1/8) x3  + 15/8
Jacobi method iteration
Start the iterations with x(0) = (0, 0, 0, 0):

x1(1) =  (1/10) x2(0) - (1/5) x3(0) + 3/5                    =  0.6000
x2(1) =  (1/11) x1(0) + (1/11) x3(0) - (3/11) x4(0) + 25/11  =  2.2727
x3(1) = -(1/5) x1(0) + (1/10) x2(0) + (1/10) x4(0) - 11/10   = -1.1000
x4(1) = -(3/8) x2(0) + (1/8) x3(0) + 15/8                    =  1.8750

Continuing the iterations, the results are in the Table:
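A small sketch of the Jacobi iteration applied to this system; the tolerance and iteration cap are my choices, not from the slides:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-6, max_iter=50):
    """Jacobi iteration: every component of x(k+1) is computed from x(k) only."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    D = np.diag(A)                       # diagonal entries a_ii
    R = A - np.diagflat(D)               # off-diagonal part of A
    for k in range(max_iter):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

A = [[10, -1, 2, 0], [-1, 11, -1, 3], [2, -1, 10, -1], [0, 3, -1, 8]]
b = [6, 25, -11, 15]
x, iters = jacobi(A, b, [0, 0, 0, 0])
print(x, iters)    # converges to approximately [1, 2, -1, 1]
```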


The Gauss-Seidel Iterative Method
The idea of Gauss-Seidel is to compute x(k) using the most recently calculated
values. In our example:

x1(k) =  (1/10) x2(k-1) - (1/5) x3(k-1) + 3/5
x2(k) =  (1/11) x1(k) + (1/11) x3(k-1) - (3/11) x4(k-1) + 25/11
x3(k) = -(1/5) x1(k) + (1/10) x2(k) + (1/10) x4(k-1) - 11/10
x4(k) = -(3/8) x2(k) + (1/8) x3(k) + 15/8

Starting the iterations with x(0) = (0, 0, 0, 0), we obtain:
