
TMS3133

Numerical Methods
Learning Unit 6
System of Linear Equations

REVIEW
Linear Algebraic Equations and
Matrices

Overview
A matrix consists of a rectangular array of
elements represented by a single symbol
(example: [A]).
An individual entry of a matrix is an element
(example: a23)

Overview (cont)
A horizontal set of elements is called a row and a
vertical set of elements is called a column.
The first subscript of an element indicates the row
while the second indicates the column.
The size of a matrix is given as m rows by n columns,
or simply m by n (or m x n).
1 x n matrices are row vectors.
m x 1 matrices are column vectors.

Special Matrices
Matrices where m=n are called square matrices.
There are a number of special forms of square
matrices:
Symmetric
[A] = [5 1 2; 1 3 7; 2 7 8]

Diagonal
[A] = [a11 0 0; 0 a22 0; 0 0 a33]

Identity
[A] = [1 0 0; 0 1 0; 0 0 1]

Upper Triangular
[A] = [a11 a12 a13; 0 a22 a23; 0 0 a33]

Lower Triangular
[A] = [a11 0 0; a21 a22 0; a31 a32 a33]

Banded (e.g. tridiagonal)
[A] = [a11 a12 0 0; a21 a22 a23 0; 0 a32 a33 a34; 0 0 a43 a44]

Matrix Operations
Two matrices are considered equal if and only if
every element in the first matrix is equal to the
corresponding element in the second. This means
the two matrices must be the same size.
Matrix addition and subtraction are performed by
adding or subtracting the corresponding elements.
This requires that the two matrices be the same size.
Scalar matrix multiplication is performed by
multiplying each element by the same scalar.
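These rules map directly onto MATLAB operators; a quick illustration (the matrices here are arbitrary examples, not taken from the slides):

A = [1 2; 3 4];
B = [5 6; 7 8];
C = A + B      % matrix addition: element-by-element
D = A - B      % matrix subtraction: element-by-element
F = 2*A        % scalar multiplication: every element multiplied by 2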

Matrix Multiplication
The elements in the matrix [C] that results
from multiplying matrices [A] and [B] are
calculated using:
cij = Σ (k = 1 to n) aik bkj
where n is the number of columns of [A], which must equal the number of rows of [B].
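A minimal sketch of this summation as explicit loops in MATLAB (the example matrices are my own), checked against the built-in product:

A = [1 2 3; 4 5 6];        % 2 x 3
B = [7 8; 9 10; 11 12];    % 3 x 2
[m, n] = size(A);
p = size(B, 2);
C = zeros(m, p);
for i = 1:m
    for j = 1:p
        for k = 1:n
            C(i,j) = C(i,j) + A(i,k)*B(k,j);   % cij = sum of aik*bkj
        end
    end
end
C          % should equal the built-in product A*B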

Matrix Inverse and Transpose


The inverse of a square, nonsingular matrix [A]
is that matrix which, when multiplied by [A],
yields the identity matrix.
[A][A]-1=[A]-1[A]=[I]

The transpose of a matrix involves


transforming its rows into columns and its
columns into rows.
(aij)T=aji
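In MATLAB (using an arbitrary nonsingular example matrix of my own):

A = [4 7; 2 6];
Ainv = inv(A);     % the inverse
A*Ainv             % numerically the 2 x 2 identity matrix
At = A'            % the transpose: rows become columns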

Representing Linear Algebra

Matrices provide a concise notation for
representing and solving simultaneous linear
equations:

a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In matrix form:
[a11 a12 a13; a21 a22 a23; a31 a32 a33] {x1; x2; x3} = {b1; b2; b3}

[A]{x} = {b}

Solving With MATLAB

MATLAB provides two direct ways to solve
systems of linear algebraic equations [A]{x} = {b}:
Left-division
x = A\b
Matrix inversion
x = inv(A)*b

The matrix inverse is less efficient than left-
division and also only works for square, non-
singular systems.
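A short worked example (the coefficients are illustrative, not from these slides):

% 3x1  - 0.1x2 - 0.2x3 =   7.85
% 0.1x1 +  7x2 - 0.3x3 = -19.3
% 0.3x1 - 0.2x2 + 10x3 =  71.4
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b = [7.85; -19.3; 71.4];
x = A\b           % left-division: x = [3; -2.5; 7]
x = inv(A)*b      % same answer, but more work for MATLAB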

What have you learned thus far?

Understanding matrix notation.
Being able to identify the following types of
matrices: identity, diagonal, symmetric, triangular,
and tridiagonal.
Knowing how to perform matrix multiplication and
being able to assess when it is feasible.
Knowing how to represent a system of linear
equations in matrix form.
Knowing how to solve linear algebraic equations with
left division and matrix inversion in MATLAB.

Contents
Elimination methods
Gauss elimination method
GE with pivoting

What is a system of linear equations?

General form of the system

Matrix Form

Augmented Matrix Form

Solutions

In this course, only the case of a unique solution, where the matrix A must be square, will be discussed.

Graphical Method
For small sets of simultaneous equations,
graphing them and determining the location
of their intersection provides a solution.
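A minimal sketch in MATLAB (the two equations are my own example): plot both lines and read off where they cross.

% 3x1 + 2x2 = 18  and  -x1 + 2x2 = 2
x1 = linspace(0, 6, 100);
x2a = (18 - 3*x1)/2;       % first equation solved for x2
x2b = (2 + x1)/2;          % second equation solved for x2
plot(x1, x2a, x1, x2b), grid on
xlabel('x_1'), ylabel('x_2')
% the lines intersect at the solution x1 = 4, x2 = 3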

Graphical Method (cont)

Graphing the equations can also show systems where:
a) No solution exists
b) Infinite solutions exist
c) The system is ill-conditioned

Concept of the Elimination Method

Definition

Gauss Elimination Method (1/6)

Discuss the algorithm by example.
Consider 3 linear equations with 3 unknowns
below:

This concept can be extended to n linear equations with n unknowns.

Gauss Elimination Algorithm (2/6)

The system of linear equations can be written
as

In augmented matrix form

Gauss Elimination Algorithm (3/6)

Gauss Elimination Algorithm (4/6)

Gauss Elimination Algorithm (5/6)

Gauss Elimination Algorithm (6/6)

Gauss Elimination Summary

1. Transform the augmented matrix to upper
triangular form
(do the elimination process)

2. Back substitution

Naïve Gauss Elimination (summary)

Forward elimination
Starting with the first row, add or subtract
multiples of that row to eliminate the first
coefficient from the second row and
beyond.
Continue this process with the second
row to remove the second coefficient
from the third row and beyond.
Stop when an upper triangular matrix
remains.

Back substitution
Starting with the last row, solve for the
unknown, then substitute that value into
the next highest row.
Because of the upper-triangular nature of
the matrix, each row will contain only one
more unknown.

GE Hand Calculation
Solve the system Ax = b where

GE Hand Calculation (another example)

Solve the system
4x1 + 3x2 + 2x3 +  x4 = 1
3x1 + 4x2 + 3x3 + 2x4 = 1
2x1 + 3x2 + 4x3 + 3x4 = 1
 x1 + 2x2 + 3x3 + 4x4 = 1
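The hand calculation can be checked in MATLAB (this check is mine, not part of the slide):

A = [4 3 2 1; 3 4 3 2; 2 3 4 3; 1 2 3 4];
b = [1; 1; 1; 1];
x = A\b        % should give x = [0.2; 0; 0; 0.2]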

General nonsingular system of n linear equations

Consider
a11 x1 + ... + a1n xn = b1     (Eq 1)
   ...
an1 x1 + ... + ann xn = bn     (Eq n)

For k = 1, 2, ..., n-1, carry out the following
elimination step:

Step k: Eliminate xk from Eq k+1 through Eq n.

Elimination Step (1/3)

The result of the preceding step will yield a new system:

a11 x1 + a12 x2 + ... + a1n xn = b1     (Eq 1)
         c22 x2 + ... + c2n xn = d2     (Eq 2)
                  ...
         ekk xk + ... + ekn xn = fk     (Eq k)
                  ...
         enk xk + ... + enn xn = fn     (Eq n)

Assume ekk ≠ 0 and define the multipliers
mik = eik / ekk,    i = k+1, ..., n

Elimination Step (2/3)

For equations i = k+1, ..., n, subtract mik times
Eq k from Eq i, eliminating xk from Eq i.
The new coefficients and the RHS values in
Eq k+1 through Eq n are defined by
eij = eij - mik ekj,    i, j = k+1, ..., n
fi  = fi  - mik fk,     i = k+1, ..., n

Elimination Step (3/3)

When step n-1 is completed, the linear system
will be in upper triangular form, denoted by
u11 x1 + ... + u1n xn = g1
             ...
              unn xn = gn

Back Substitution
Solve successively for xn, x(n-1), ..., x1 using
back substitution:
xn = gn / unn
xi = ( gi - Σ (j = i+1 to n) uij xj ) / uii,    i = n-1, ..., 1

Gaussian Elimination Pseudocode

function X = mygauss(A,b)
% naive Gauss elimination: returns the solution X (row vector) of A*X' = b
E = [A b];                   % augmented matrix
[r,c] = size(E);

% forward elimination
for i = 1:r-1                        % pivot row
    for k = i+1:r                    % rows below the pivot
        m(k,i) = E(k,i)/E(i,i);      % multiplier
        for j = i+1:c
            E(k,j) = E(k,j) - m(k,i)*E(i,j);
        end
    end
end

% backward substitution
X(r) = E(r,c)/E(r,r);
for i = r-1:-1:1
    s = 0;
    for j = i+1:r                    % sum over the already-known unknowns
        s = s + E(i,j)*X(j);
    end
    X(i) = (E(i,c) - s)/E(i,i);
end
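A quick way to test the routine (the random test system is my own):

A = rand(5); b = rand(5,1);     % random 5 x 5 system, almost surely nonsingular
X = mygauss(A, b);
norm(A*X' - b)                  % residual should be near machine precision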

GE Cartoon (animation of the loops in mygauss)

for i = 1


GE Cartoon
i = 1: k = 2, 3, ..., r


GE Cartoon
i = 1: k = 2, 3, ..., r, each with j = 2:c


GE Cartoon
i = 2: k = 3, ..., r, each with j = 3:c

GE Cartoon
i = r-1: k = r, with j = r:c

GE Summary
GE is an orderly process of transforming an
augmented matrix into an equivalent upper
triangular form.
The elimination operation is
eij = eij - mik ekj,    i, j = k+1, ..., n
with
mik = eik / ekk,    i = k+1, ..., n

Naïve Gauss Elimination Program

Gauss Program Efficiency

The execution time of Gauss elimination depends on the number of
floating-point operations (flops). The flop count for an n x n
system is:

Forward elimination:   2n^3/3 + O(n^2)
Back substitution:     n^2 + O(n)
Total:                 2n^3/3 + O(n^2)

Conclusions:
As the system gets larger, the computation time increases greatly.
Most of the effort is incurred in the elimination step.
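A rough timing sketch of that growth (the sizes and the use of MATLAB's built-in solver are my own choices; with ~n^3 scaling, doubling n should raise the time by roughly a factor of 8):

for n = [500 1000 2000]
    A = rand(n); b = rand(n,1);    % random n x n test system
    tic; x = A\b; t = toc;
    fprintf('n = %4d:  %.3f s\n', n, t);
end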

Pivoting
Problems arise with naïve Gauss elimination if a
coefficient along the diagonal is 0 (problem: division
by 0) or close to 0 (problem: round-off error).
One way to combat these issues is to determine the
coefficient with the largest absolute value in the
column below the pivot element. The rows can then
be switched so that the largest element is the pivot
element. This is called partial pivoting.
If the columns to the right of the pivot element are
also searched for the largest element, and columns as
well as rows are switched, this is called complete pivoting.

Question
How does your GE function change to include
partial pivoting?

Partial Pivoting Program
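The program itself is not reproduced in this handout; a minimal sketch of how the mygauss loop could be extended with partial pivoting (the structure and names are my own, not necessarily the original program):

function X = mygausspivot(A,b)
% Gauss elimination with partial pivoting (sketch)
E = [A b];
[r,c] = size(E);

% forward elimination with row pivoting
for i = 1:r-1
    [~, p] = max(abs(E(i:r, i)));    % largest |entry| in column i, rows i..r
    p = p + i - 1;
    if p ~= i
        E([i p], :) = E([p i], :);   % swap rows so the largest entry is the pivot
    end
    for k = i+1:r
        m = E(k,i)/E(i,i);
        E(k,i:c) = E(k,i:c) - m*E(i,i:c);
    end
end

% back substitution (unchanged from mygauss)
X(r) = E(r,c)/E(r,r);
for i = r-1:-1:1
    X(i) = (E(i,c) - E(i,i+1:r)*X(i+1:r)')/E(i,i);
end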

Tridiagonal Systems
A tridiagonal system is a banded system with a
bandwidth of 3:

[A]{x} = {r} with

[A] = [ f1  g1
        e2  f2  g2
            e3  f3  g3
                 ...
              e(n-1)  f(n-1)  g(n-1)
                      e(n)    f(n)   ]

{x} = {x1; x2; ...; x(n)},    {r} = {r1; r2; ...; r(n)}

Tridiagonal systems can be solved using the same method
as Gauss elimination, but with much less effort
because most of the matrix elements are already 0.

Tridiagonal System Solver
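The solver itself is not shown in this handout; a minimal sketch of the standard Thomas algorithm in the e/f/g/r notation above (my own implementation, offered as an illustration):

function x = thomas(e, f, g, r)
% Thomas algorithm for a tridiagonal system:
%   e = subdiagonal (e(1) unused), f = diagonal,
%   g = superdiagonal (g(n) unused), r = right-hand side
n = length(f);

% forward elimination: only one subdiagonal entry to remove per row
for k = 2:n
    m = e(k)/f(k-1);
    f(k) = f(k) - m*g(k-1);
    r(k) = r(k) - m*r(k-1);
end

% back substitution: only one superdiagonal entry per row
x = zeros(n,1);
x(n) = r(n)/f(n);
for k = n-1:-1:1
    x(k) = (r(k) - g(k)*x(k+1))/f(k);
end

The work grows only linearly with n, rather than as 2n^3/3 for full Gauss elimination.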

What have you learned today?

Knowing how to solve small sets of linear equations with the
graphical method.
Understanding how to implement forward elimination and
back substitution as in Gauss elimination.
Understanding how to count flops to evaluate the efficiency
of an algorithm.
Understanding the concepts of singularity and ill-conditioning.
Understanding how partial pivoting is implemented and how
it differs from complete pivoting.
Recognizing how the banded structure of a tridiagonal system
can be exploited to obtain extremely efficient solutions.
