
NUMERICAL METHODS WITH

APPLICATIONS
(MEC500)

Dr. Siti Mariam binti Abdul Rahman


Faculty of Mechanical Engineering
Office: T1-A14-12C
e-mail: mariam4528@salam.uitm.edu.my
Outcomes of Chapter 3
After completing Chapter 3, students should be able to:
To solve problems involving linear algebraic equations
using several different techniques.
To understand the trade-offs involved in choosing the
best method for a particular problem of interest.
To apply the concepts learned in practical engineering
fields.
Learning Outcome
Knowing how to solve small sets of linear equations with
the graphical method and Cramer's rule.
Understanding how to implement forward elimination and
back substitution as in Gauss elimination.
Understanding the concepts of singularity and ill-conditioning.
Understanding how partial pivoting is implemented and
how it differs from complete pivoting.
Recognizing how the banded structure of a tridiagonal
system can be exploited to obtain extremely efficient
solutions.
Introduction
This chapter concerns the solution of systems of linear algebraic
equations.
There are two fundamentally different approaches:
elimination methods and iterative methods.
Three main sections to be covered:
Gauss Elimination: combining equations to eliminate unknowns.
Matrix Inversion and Gauss-Seidel: computing the inverse to find
the solution, and using iterative methods to approximate the solution.
LU Decomposition: performing operations on the matrix of
coefficients.
Introduction
Linear algebraic equation:

ax + by + c = 0                        (two variables)
ax + by + cz + d = 0                   (three variables)
a1x1 + a2x2 + ... + anxn + b = 0       (n variables)

System of linear equations (SLE):

a11x1 + a12x2 + ... + a1nxn = b1       aij : constant coefficients
a21x1 + a22x2 + ... + a2nxn = b2       bi  : constants
  :                                    n   : number of equations
an1x1 + an2x2 + ... + annxn = bn

The equations must be solved simultaneously for the solution.

Review on Matrix
Matrix notation and algebra provide a concise way to
represent and manipulate linear algebraic equations.
A matrix consists of a rectangular array of elements
represented by a single symbol (example: [A]).
An individual entry of a matrix is an element (example: a23).

      [ a11  a12  a13  ...  a1n ]
[A] = [ a21  a22  a23  ...  a2n ]   (row i, column j)
      [  :    :    :         :  ]
      [ an1  an2  an3  ...  ann ]
Review on Matrix
A horizontal set of elements is called a row and a vertical
set of elements is called a column.
The first subscript of an element indicates the row while the
second indicates the column.
The size of a matrix is given as m rows by n columns, or
simply m by n (or m x n).
1 x n matrices are row vectors:

[ a1  a2  ...  an ]

m x 1 matrices are column vectors:

[ a1 ]
[ a2 ]
[  : ]
[ am ]
Special Matrices
Matrices where m = n are called square matrices.
There are a number of special forms of square matrices:
Symmetric              Diagonal                 Identity

[ 5  2  9 ]            [ a11           ]        [ 1       ]
[ 2  1  5 ]      [A] = [      a22      ]  [A] = [    1    ]
[ 9  5  1 ]            [           a33 ]        [       1 ]

Upper Triangular           Lower Triangular           Banded

      [ a11  a12  a13 ]          [ a11           ]          [ a11  a12           ]
[A] = [      a22  a23 ]    [A] = [ a21  a22      ]    [A] = [ a21  a22  a23      ]
      [           a33 ]          [ a31  a32  a33 ]          [      a32  a33  a34 ]
                                                            [           a43  a44 ]
Matrix Operation
Two matrices are considered equal if and only if every
element in the first matrix is equal to every corresponding
element in the second. This means the two matrices must
be the same size.
Matrix addition and subtraction are performed by adding or
subtracting the corresponding elements. This requires that
the two matrices be the same size.
Scalar matrix multiplication is performed by multiplying
each element by the same scalar.
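The equality, addition, and scalar-multiplication rules above can be sketched in a few lines of plain Python, using lists of rows (function names are mine):

```python
def mat_add(A, B):
    """Element-wise sum of two equally sized matrices (lists of rows)."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "matrices must be the same size"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_mul(g, A):
    """Multiply every element of A by the scalar g."""
    return [[g * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))     # [[6, 8], [10, 12]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```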
Matrix Multiplication
The elements in the matrix [C] that result from multiplying
the matrix [A] by a scalar g are calculated using:

cij = g * aij

The elements in the matrix [C] that result from multiplying
matrices [A] and [B] are calculated using:

        n
cij =  SUM  aik * bkj
       k=1
Matrix Multiplication
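The summation formula translates directly into a triple loop; a minimal sketch (function name is mine):

```python
def mat_mul(A, B):
    """[C] = [A][B]: c_ij = sum over k of a_ik * b_kj.
    Requires the column count of A to equal the row count of B."""
    n_inner = len(B)
    assert len(A[0]) == n_inner, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n_inner))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```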
Matrix Inverse
The inverse of a square, nonsingular matrix [A] is that
matrix which, when multiplied by [A], yields the identity
matrix.

Scalar analogy: 8 x (1/8) = (1/8) x 8 = 1
Matrix form:    [A][A]^-1 = [A]^-1[A] = [I]

Matrix Transpose
The transpose of a matrix involves transforming its rows
into columns and its columns into rows.
            T
[ 1  2  3 ]     [ 1  4 ]
[ 4  5  6 ]  =  [ 2  5 ]
                [ 3  6 ]
Remark:
A square matrix A is symmetric if A^T = A.
(A^T)^T = A   (applying the transpose twice gets you back where you started)
(A + B)^T = A^T + B^T
For a scalar c, (cA)^T = cA^T
(AB)^T = B^T A^T
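The transpose rules above can be checked numerically; a small sketch with helper functions of my own naming:

```python
def transpose(A):
    """Turn rows into columns and columns into rows."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def mat_mul(A, B):
    """Standard matrix product c_ij = sum_k a_ik * b_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]

# Verify (AB)^T = B^T A^T on a small example.
B = [[1, 0], [2, 1], [0, 3]]
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```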
Remark on Matrix
Properties of matrix addition:
A + B = B + A
A + (B + C) = (A + B) + C
c(A + B) = cA + cB
(p + q)A = pA + qA

Properties of matrix multiplication:
(AB)C = A(BC)
(A + B)C = AC + BC
C(A + B) = CA + CB
AB != BA (in general)
System of linear equations in
matrix form
Matrices provide a concise notation for representing and
solving simultaneous linear equations:

a11x1 + a12x2 + a13x3 = b1      [ a11  a12  a13 ] { x1 }   { b1 }
a21x1 + a22x2 + a23x3 = b2      [ a21  a22  a23 ] { x2 } = { b2 }
a31x1 + a32x2 + a33x3 = b3      [ a31  a32  a33 ] { x3 }   { b3 }

System of linear equations (SLE):     [A]{x} = {b}

Multiply both sides by [A]^-1:        [A]^-1[A]{x} = [A]^-1{b}

Since [A]^-1[A] = [I]:                {x} = [A]^-1{b}
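For a 2 x 2 system the inverse is available in closed form, so {x} = [A]^-1{b} can be sketched directly (the function name and sample system are my own illustration):

```python
def solve_2x2_by_inverse(A, b):
    """{x} = [A]^-1 {b} using the closed-form inverse of a 2x2 matrix.
    Fails (by design) when det(A) = 0, i.e. when A is singular."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    inv = [[ A[1][1] / det, -A[0][1] / det],
           [-A[1][0] / det,  A[0][0] / det]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

# 3x1 + 2x2 = 18 and -x1 + 2x2 = 2  ->  x1 = 4, x2 = 3
print(solve_2x2_by_inverse([[3, 2], [-1, 2]], [18, 2]))  # [4.0, 3.0]
```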

Methods for solving systems of
linear equations
For a small number of equations (n <= 3), linear equations can
be solved by simple techniques such as:
Graphical method
Cramer's rule
Elimination of unknowns

For a large number of equations (n > 3):
Computer methods
Graphical Method
For small sets of simultaneous equations (n <= 3), graphing
them and determining the location of the intersection provides
a solution. Example of two equations:

a11x1 + a12x2 = b1
a21x1 + a22x2 = b2

Solve both equations for x2:

x2 = -(a11/a12) x1 + b1/a12      i.e.  x2 = (slope) x1 + intercept
x2 = -(a21/a22) x1 + b2/a22
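The same slope-intercept rearrangement also gives the intersection point algebraically; a sketch (names and the sample system are mine):

```python
def intersection(a11, a12, b1, a21, a22, b2):
    """Intersection of the two lines, each written in slope-intercept form:
    x2 = -(a11/a12) x1 + b1/a12  and  x2 = -(a21/a22) x1 + b2/a22."""
    s1, c1 = -a11 / a12, b1 / a12
    s2, c2 = -a21 / a22, b2 / a22
    x1 = (c2 - c1) / (s1 - s2)  # fails if the slopes are equal (parallel lines)
    x2 = s1 * x1 + c1
    return x1, x2

# 3x1 + 2x2 = 18 and -x1 + 2x2 = 2 intersect at (4, 3)
print(intersection(3, 2, 18, -1, 2, 2))  # (4.0, 3.0)
```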
Graphical Method
Plot x2 vs. x1 on graph
paper; the intersection
of the lines represents
the solution.
For n > 3, the graphical
method breaks down.
Graphical Method
Graphing the equations can be used to visualize the
properties of the solutions.
Three problem cases in solving an SLE:
Singular system: no solution exists (parallel lines),
determinant |A| = 0
Singular system: infinite solutions exist (coincident lines),
|A| = 0
Ill-conditioned system: |A| is close to zero (nearly parallel
lines), so the intersection point is difficult to identify
Cramer's Method
For a system of linear equations, [A]{x} = {b}:
[A] is the coefficient matrix and the determinant is D = |A|

    | a11  a12  a13 |
D = | a21  a22  a23 |
    | a31  a32  a33 |

A matrix is singular if its determinant is equal to
zero; the inverse then does not exist.

Example 1
Cramer's Method
Compute x, y and z for the following example:
2x + 1y + 1z = 3
1x - 1y - 1z = 0
1x + 2y + 1z = 0

Matrix's determinant:     Answer column:

| 2   1   1 |             { 3 }
| 1  -1  -1 |             { 0 }
| 1   2   1 |             { 0 }

Determinants formed by substituting the answer column:

| 3   1   1 |      | 2   3   1 |      | 2   1   3 |
| 0  -1  -1 |      | 1   0  -1 |      | 1  -1   0 |
| 0   2   1 |      | 1   0   1 |      | 1   2   0 |
Example 1
Cramer's Method
Calculation of the matrix's determinant (cofactor expansion
along the first row):

| 2   1   1 |
| 1  -1  -1 |  =  2((-1)(1) - (-1)(2)) - 1((1)(1) - (-1)(1)) + 1((1)(2) - (-1)(1))
| 1   2   1 |  =  2(-1 + 2) - 1(1 + 1) + 1(2 + 1)
               =  2 - 2 + 3 = 3
Example 1
Cramer's Method
Evaluating the other determinants:

| 3   1   1 |        | 2   3   1 |         | 2   1   3 |
| 0  -1  -1 | = 3    | 1   0  -1 | = -6    | 1  -1   0 | = 9
| 0   2   1 |        | 1   0   1 |         | 1   2   0 |

Cramer's rule says that x = Dx/D, y = Dy/D, and z = Dz/D. That is:

x = Dx/D =  3/3 =  1
y = Dy/D = -6/3 = -2
z = Dz/D =  9/3 =  3
Elimination of unknowns
The basic strategy is to successively solve one of the
equations of the set for one of the unknowns and to eliminate
that variable from the remaining equations by substitution.
For example, with equations [1] and [2]:

a11x1 + a12x2 = b1   ...... [1]
a21x1 + a22x2 = b2   ...... [2]

The elimination steps reduce the pair through multiplication
and subtraction of the equations:

[1] x a21:   a11a21x1 + a12a21x2 = b1a21   ...... [3]
[2] x a11:   a11a21x1 + a22a11x2 = b2a11   ...... [4]
[4] - [3]:   (a22a11 - a12a21)x2 = b2a11 - b1a21

Thus:   x2 = (a11b2 - a21b1) / (a11a22 - a21a12)
        x1 = (a22b1 - a12b2) / (a11a22 - a21a12)

Elimination of unknowns
The elimination of unknowns can be extended to systems
with more than three equations and is readily programmed for
computer implementation. It is the basis for computer
techniques such as Gauss elimination.

Three basic row operations are employed in solving
systems of linear algebraic equations:
1. Any equation may be multiplied by a constant
   (scaling process)
2. The order of equations may be interchanged
   (pivoting process)
3. Any equation can be replaced by a weighted linear
   combination of that equation with any other equation
   (elimination process)
Naïve Gauss Elimination
An extension of the elimination-of-unknowns method for
solving large SLEs.
One of the most popular techniques for solving
simultaneous linear equations of the form [A]{x} = {b}.

The Gauss elimination process consists of:
1. Forward elimination of unknowns
2. Back substitution to find the solution

"Naïve" Gauss elimination simply means the process does
not check for potential problems resulting from division by
zero.
Naïve Gauss Elimination
Forward Elimination
The goal of forward elimination is to transform the
coefficient matrix into an upper triangular matrix:

[ a11  a12  a13 ]       [ a11  a12   a13   ]
[ a21  a22  a23 ]  -->  [  0   a'22  a'23  ]
[ a31  a32  a33 ]       [  0    0    a''33 ]

The goal of back substitution is to solve each of the
equations using the upper triangular matrix:

[ a11  a12   a13   ] { x1 }   { b1   }
[  0   a'22  a'23  ] { x2 } = { b'2  }
[  0    0    a''33 ] { x3 }   { b''3 }
Naïve Gauss Elimination
Forward Elimination
A set of n linear equations and n unknowns:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1   ...... [1]
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2   ...... [2]
  :
an1x1 + an2x2 + an3x3 + ... + annxn = bn   ...... [n]

Transform to an upper triangular matrix.
Step 1: Eliminate x1 from the 2nd equation, using equation [1]
as the pivot equation. Multiply equation [1] by a21/a11,
which yields:

a21x1 + (a21/a11)a12x2 + (a21/a11)a13x3 + ... + (a21/a11)a1nxn = (a21/a11)b1   ...... [1']
Naïve Gauss Elimination
Forward Elimination
Zero out the coefficient of x1 in the 2nd equation by
subtracting equation [1'] from the 2nd equation:

(a22 - (a21/a11)a12)x2 + (a23 - (a21/a11)a13)x3 + ... + (a2n - (a21/a11)a1n)xn = b2 - (a21/a11)b1

or   a'22x2 + a'23x3 + ... + a'2nxn = b'2

where   a'22 = a22 - (a21/a11)a12
        a'23 = a23 - (a21/a11)a13
          :
        a'2n = a2n - (a21/a11)a1n
        b'2  = b2  - (a21/a11)b1
Naïve Gauss Elimination
Forward Elimination
Repeat this procedure for the remaining equations to
reduce the set of equations to:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
        a'32x2 + a'33x3 + ... + a'3nxn = b'3
          :
        a'n2x2 + a'n3x3 + ... + a'nnxn = b'n
Naïve Gauss Elimination
Forward Elimination
Step 2: Eliminate x2 from the 3rd equation. Multiply equation
[2'] by a'32/a'22 and subtract it from the 3rd equation.
This procedure is repeated for the remaining equations to
reduce the set of equations to:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
                a''33x3 + ... + a''3nxn = b''3
                  :
                a''n3x3 + ... + a''nnxn = b''n
Naïve Gauss Elimination
Forward Elimination
Continue this procedure, using the third equation as the
pivot equation, and so on.
At the end of (n-1) forward elimination steps, the system of
equations will look like:

a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
        a'22x2 + a'23x3 + ... + a'2nxn = b'2
                a''33x3 + ... + a''3nxn = b''3
                  :
                     ann^(n-1) xn = bn^(n-1)

or, in matrix form:

[ a11  a12   a13    ...  a1n       ] { x1 }   { b1       }
[      a'22  a'23   ...  a'2n      ] { x2 }   { b'2      }
[            a''33  ...  a''3n     ] { x3 } = { b''3     }
[                    .     :       ] {  : }   {  :       }
[                        ann^(n-1) ] { xn }   { bn^(n-1) }
Naïve Gauss Elimination
Back Substitution
The goal of back substitution is to solve each of the
equations using the upper triangular matrix.
Example for a system of 3 equations:

[ a11  a12   a13   ] { x1 }   { b1   }
[  0   a'22  a'23  ] { x2 } = { b'2  }
[  0    0    a''33 ] { x3 }   { b''3 }

Start with the last equation, because it has only one
unknown:

xn = bn^(n-1) / ann^(n-1)

Then solve the second-to-last equation, the (n-1)th, using the
xn solved for previously. This solves for x(n-1). Continue upward.
Naïve Gauss Elimination
Back Substitution
Back substitution for all equations is represented by the
formulas:

xn = bn^(n-1) / ann^(n-1)

                      n
xi = ( bi^(i-1)  -   SUM   aij^(i-1) xj ) / aii^(i-1)
                    j=i+1

for i = n-1, n-2, ..., 1
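The forward-elimination steps and the back-substitution formula above can be collected into one short routine; a minimal sketch without pivoting (the function name is mine):

```python
def naive_gauss(A, b):
    """Naive Gauss elimination: forward elimination to upper-triangular form,
    then back substitution. No pivoting, so a zero pivot raises ZeroDivisionError."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    # Forward elimination.
    for k in range(n - 1):          # pivot row k
        for i in range(k + 1, n):   # rows below the pivot
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution: x_i = (b_i - sum_{j>i} a_ij x_j) / a_ii
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(naive_gauss([[2, 1, 3], [4, 4, 7], [2, 5, 9]], [1, 1, 3]))  # [-0.5, -1.0, 1.0]
```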

Naïve Gauss Elimination


Forward elimination
Starting with the first row, add or subtract multiples of that
row to eliminate the first coefficient from the second row and
beyond.
Continue this process with the second row to remove the
second coefficient from the third row and beyond.
Stop when an upper triangular matrix remains.

Back substitution
Starting with the last row, solve for the unknown, then
substitute that value into the next highest row.
Because of the upper-triangular nature of the matrix, each
row will contain only one more unknown.
Example 2
Naïve Gauss Elimination
Solve the following system of linear algebraic equations by
Gauss elimination method.
2x1 + 1x 2 + 3x 3 = 1
4x1 + 4x 2 + 7x 3 = 1
2x1 + 5x 2 + 9x 3 = 3
Solution:
          [ 2  1  3 | 1 ]
[A | b] = [ 4  4  7 | 1 ]
          [ 2  5  9 | 3 ]

Start with forward elimination.



Example 2
Naïve Gauss Elimination

[ 2  1  3 | 1 ]   Eq1 = Eq1/2    [ 1  1/2  3/2 | 1/2 ]
[ 4  4  7 | 1 ]   ---------->    [ 4   4    7  |  1  ]
[ 2  5  9 | 3 ]                  [ 2   5    9  |  3  ]

[ 1  1/2  3/2 | 1/2 ]   Eq2 = Eq2 - (Eq1 * 4)    [ 1  1/2  3/2 | 1/2 ]
[ 4   4    7  |  1  ]   Eq3 = Eq3 - (Eq1 * 2)    [ 0   2    1  | -1  ]
[ 2   5    9  |  3  ]   ---------------------->  [ 0   4    6  |  2  ]

[ 1  1/2  3/2 | 1/2 ]   Eq2 = Eq2/2    [ 1  1/2  3/2 |  1/2 ]
[ 0   2    1  | -1  ]   ---------->    [ 0   1   1/2 | -1/2 ]
[ 0   4    6  |  2  ]                  [ 0   4    6  |   2  ]
Example 2
Naïve Gauss Elimination

[ 1  1/2  3/2 |  1/2 ]   Eq3 = Eq3 - (Eq2 * 4)    [ 1  1/2  3/2 |  1/2 ]
[ 0   1   1/2 | -1/2 ]   ---------------------->  [ 0   1   1/2 | -1/2 ]
[ 0   4    6  |   2  ]                            [ 0   0    4  |   4  ]

Then, perform back substitution:

4x3 = 4                        ->  x3 = 1
x2 + (1/2)x3 = -1/2            ->  x2 = -1/2 - 1/2 = -1
x1 + (1/2)x2 + (3/2)x3 = 1/2   ->  x1 = 1/2 + 1/2 - 3/2 = -1/2
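As a check, substitute the computed solution back into the original equations; every residual should vanish:

```python
# Check x = (-1/2, -1, 1) against the original system of Example 2.
A = [[2, 1, 3], [4, 4, 7], [2, 5, 9]]
b = [1, 1, 3]
x = [-0.5, -1.0, 1.0]
residuals = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
print(residuals)  # [0.0, 0.0, 0.0]
```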
Pitfalls of elimination
methods
Division by zero (zero pivot element)
It is possible that during both the elimination and back-
substitution phases a division by zero can occur. Example:

        10x2 - 7x3 = 7
6x1 + 2.099x2 + 3x3 = 3.901
5x1 -     x2 + 5x3 = 6

Here a11 = 0, so the first elimination factor a21/a11 is
undefined.
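A minimal sketch (the helper name is mine) showing why a zero first pivot defeats naïve elimination:

```python
def first_multiplier(A):
    """Factor a21/a11 used by naive elimination to zero out x1 in equation 2."""
    return A[1][0] / A[0][0]

A = [[0, 10, -7], [6, 2.099, 3], [5, -1, 5]]  # a11 = 0, as in the example above
try:
    first_multiplier(A)
except ZeroDivisionError:
    print("zero pivot: naive Gauss elimination fails; swap rows first")
```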
Round-off errors
The Naïve Gauss Elimination method is prone to round-off error.
This is particularly important when a large number of equations is
involved, because errors tend to propagate.
Always check the answer by substituting the solution into the
original equations.

Pitfalls of elimination
methods
Ill-conditioned systems: check whether the determinant is close to zero!
Small changes in coefficients result in large changes in the
solution.
This may also happen when two or more equations are nearly
identical, so that a wide range of answers approximately
satisfies the equations.
Round-off errors can induce small changes in the coefficients,
and these changes can lead to large solution errors.
Use scaling on the coefficients of the matrix!

Singular systems: check whether the determinant is zero!
When two equations are identical, we would lose one degree of
freedom and be dealing with the impossible case of n-1 equations
for n unknowns. For large sets of equations, however, this may not
be obvious.
The determinant of a singular system is zero, so evaluate the
determinant after the elimination stage. If a zero diagonal element
is created, the calculation is terminated.
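A small sketch of an ill-conditioned 2 x 2 system (the coefficients are my own illustration, not from the slides): the lines are nearly parallel, so perturbing one coefficient from 1.1 to 1.05 moves the solution from about (4, 3) to about (8, 1):

```python
def solve_2x2(A, b):
    """Solve a 2x2 system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x1 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    x2 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return x1, x2

# Nearly parallel lines: a tiny change in one coefficient moves the answer a lot.
print(solve_2x2([[1.0, 2.0], [1.1,  2.0]], [10.0, 10.4]))  # ~(4, 3)
print(solve_2x2([[1.0, 2.0], [1.05, 2.0]], [10.0, 10.4]))  # ~(8, 1)
```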
Techniques for improving
solutions
1. Increase the number of significant figures.
   Decreases round-off error
   Simplest remedy for ill-conditioning
   Doesn't eliminate the division-by-zero problem
2. Pivoting.
   Avoids the division-by-zero problem as well as reducing round-off error
   Gauss elimination with partial pivoting is the method of choice.
   Partial pivoting: switching the rows so that the largest element is the
   pivot element.
   Complete pivoting: searching for the largest element in all rows and
   columns, then switching.
3. Scaling
   Circumvents/prevents problems with ill-conditioning
   Scale the equations so that the maximum coefficient in any
   row is 1.
Partial pivoting
To avoid division by zero, swap the row having the zero
pivot with one of the rows below it.
To minimize the effect of round-off, always choose the row
that puts the largest pivot element on the diagonal, i.e., find
p such that |a(p,i)| = max |a(k,i)| for k = i, ..., n
Full or Complete Pivoting
Exchange both rows and columns
Full or Complete Pivoting
Column exchange requires changing the order of the xi.
For increased numerical stability, make sure the largest
possible pivot element is used. This requires searching in
the pivot row, and in all rows below the pivot row, starting at
the pivot column.
Full pivoting is less susceptible to round-off, but the
increase in stability comes at a cost of more complex
programming (not a problem if you use a library routine)
and an increase in work associated with searching and data
movement.
Example 3
Gauss Elimination with partial pivoting
Solve the following system of linear algebraic equations by
the Gauss elimination method with partial pivoting.

        2x2 +  x3 = 5
4x1 +    x2 -  x3 = 3
2x1 +   3x2 - 3x3 = 5

Solution:

          [ 0  2   1 | 5 ]   partial pivoting    [ 4  1  -1 | 3 ]
[A | b] = [ 4  1  -1 | 3 ]   (swap rows 1 & 2)   [ 0  2   1 | 5 ]
          [ 2  3  -3 | 5 ]   ----------------->  [ 2  3  -3 | 5 ]

Then, one can continue with forward elimination and back
substitution to solve the problem.
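The remaining forward elimination and back substitution can be carried out numerically; a sketch on the pivoted augmented matrix (sign placement follows my reading of the slide), checked by substituting the result into the original, unpivoted equations:

```python
# Pivoted augmented matrix from Example 3 (rows 1 and 2 already swapped).
M = [[4.0, 1.0, -1.0, 3.0],
     [0.0, 2.0,  1.0, 5.0],
     [2.0, 3.0, -3.0, 5.0]]

# Forward elimination on the 3x4 augmented matrix.
for k in range(2):
    for i in range(k + 1, 3):
        f = M[i][k] / M[k][k]
        M[i] = [mij - f * mkj for mij, mkj in zip(M[i], M[k])]

# Back substitution (column 3 holds the right-hand side).
x = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]

# Verify against the original (unpivoted) equations: residuals should be ~0.
A = [[0, 2, 1], [4, 1, -1], [2, 3, -3]]
b = [5, 3, 5]
print(x)
print(max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3)))
```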
