Systems of Linear Equations
Eduardo E. Descalsota, Jr.
Email: descaltronix@gmail.com
Course Website: http://descaltronix.ucoz.com
Subtopics
1.1 Methods of Solution
1.2 Matrix Inversion Method
1.3 Gauss Elimination Method
1.4 Gauss-Jordan Method
1.5 LU Decomposition Methods
1.6 Jacobi’s Iteration Method
1.7 Gauss-Seidel Iteration Method
Notations of Linear Equations
Matrix Notation
• a system of n linear equations in n unknowns can be written
as Ax = b, where A is the n×n coefficient matrix, x is the
vector of unknowns, and b is the constant vector
Augmented Matrix
• [A|b], obtained by adjoining the constant vector b to the
coefficient matrix A
Uniqueness of Solution
A system of n linear equations in n unknowns has a
unique solution, provided that:
• the coefficient matrix A is nonsingular, i.e., |A| ≠ 0
• the rows and columns of a nonsingular matrix are
linearly independent
– no row (or column) is a linear combination of the other rows
(or columns)
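The nonsingularity test |A| ≠ 0 can be checked directly. A minimal pure-Python sketch (the slides use Matlab; the sample matrices here are assumed for illustration):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0.0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[1, 4, 1],
     [1, 6, -1],
     [2, -1, 2]]
print(det(A))   # nonzero, so Ax = b has a unique solution

S = [[1, 2],
     [2, 4]]    # row 2 = 2 * row 1: rows are linearly dependent
print(det(S))   # zero, so no unique solution exists
```

Cofactor expansion is O(n!) and only suitable for tiny examples; in practice the determinant falls out of an elimination or LU pass.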
1.1 Methods of Solution
• direct methods
–transform the original equations into equivalent
equations that can be solved more easily
–the transformation is carried out by applying certain
operations
Methods of Solutions (cont’d.)
• indirect or iterative methods
–start with a guess of the solution x
–then repeatedly refine the solution until a certain
convergence criterion is reached
–less efficient than direct methods due to the large
number of operations or iterations required
Direct Methods
1. Matrix Inverse Method
2. Gauss Elimination Method
3. Gauss-Jordan Method
4. LU Decomposition Methods
Advantages and Drawbacks
• do not introduce any truncation error
• round-off error is introduced by floating-point
operations
Indirect or Iterative Methods
1. Jacobi’s Iteration Method
2. Gauss-Seidel Iteration Method
Advantages and Drawbacks
• more useful for solving sets of ill-conditioned
equations
• round-off errors (or even arithmetic mistakes) in
one iteration cycle are corrected in subsequent
cycles
• contain truncation error
• do not always converge to the solution
– when the iteration does converge, the initial guess affects
only the number of iterations required for convergence
1.2 Matrix Inversion Method
• the inverse of a matrix A is obtained by dividing its
adjoint (adjugate) matrix by its determinant: A⁻¹ = adj(A)/|A|
• the solution of Ax = b is then x = A⁻¹b
Requirements for obtaining a unique
inverse of a matrix
1. The matrix is a square matrix.
2. The determinant of the matrix is not zero (the
matrix is non-singular)
– if |A| = 0, then the elements of A⁻¹ approach infinity
• the inverse can also be computed using:
(a) Gaussian elimination and
(b) Gauss-Jordan elimination
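As a sketch of the matrix inversion method, the inverse can be built by Gauss-Jordan elimination on the augmented matrix [A|I]. A pure-Python illustration (the 2×2 system is an assumed example, not one from the slides):

```python
def inverse(A):
    """Invert A by Gauss-Jordan elimination on [A | I] with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular (|A| = 0)")
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row, then clear the column elsewhere.
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]       # right half is A^-1

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 1], [1, 3]]       # |A| = 5, nonsingular
b = [5, 10]
x = matvec(inverse(A), b)  # x = A^-1 b
```

Inverting A just to solve one system costs roughly three times as much as elimination; the method is mainly useful when many right-hand sides share the same A.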
1.5 LU Decomposition Methods
• expressing the matrix as the multiplication of a
lower triangular matrix L and an upper triangular
matrix U
• A = LU
• Doolittle’s Method
• Crout’s Method
• Choleski’s Method
LU Decomposition
• aka LU Factorization
• process of computing L and U for a given A
• expressed as a product of a lower triangular matrix L and
an upper triangular matrix U
Constraints
• LU decomposition is not unique unless certain constraints
are placed on L or U
– Doolittle: L has 1s on its diagonal
– Crout: U has 1s on its diagonal
– Choleski: U = Lᵀ
Doolittle’s Decomposition Method
• transforms Ax = b to LUx = b, which is solved in two steps:
–forward substitution: Ly = b
–backward substitution: Ux = y
Doolittle’s Decomposition Method
Example:
• Use Doolittle’s decomposition method to solve the
equations Ax = b, where
A = [1 4 1; 1 6 -1; 2 -1 2], b = [7 13 5]'
Decomposition Phase:
L = [1 0 0; 1 1 0; 2 -4.5 1], U = [1 4 1; 0 2 -2; 0 0 -9]
Solution Phase:
Forward substitution, Ly = b:
y1 = 7
y1 + y2 = 13 → y2 = 13 – y1 = 13 – 7 = 6
2y1 – 4.5y2 + y3 = 5 → y3 = 5 – 2(7) + 4.5(6) = 18
Backward substitution, Ux = y:
x1 + 4x2 + x3 = 7
2x2 – 2x3 = 6
-9x3 = 18 → x3 = -2
2x2 = 6 + 2x3 = 6 + 2(-2) = 2 → x2 = 1
x1 = 7 – 4x2 – x3 = 7 – 4(1) + 2 = 5
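The worked Doolittle example can be sketched in pure Python (the slides use Matlab elsewhere; A and b here are read off from the substitution equations above):

```python
def doolittle(A):
    """Doolittle LU decomposition: A = LU with 1s on the diagonal of L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L (the multipliers)
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve_lu(L, U, b):
    n = len(b)
    # Forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # Backward substitution: U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 4, 1], [1, 6, -1], [2, -1, 2]]
b = [7, 13, 5]
L, U = doolittle(A)
x = solve_lu(L, U, b)   # should reproduce x1 = 5, x2 = 1, x3 = -2
```

Note the sketch does no pivoting, so it fails if a zero appears on the diagonal of U; production LU routines pivot rows.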
Problem:
• Solve AX = B with Doolittle’s decomposition and compute
|A|, where
Crout’s Decomposition Method
• A = LU, where L is lower triangular and U is upper
triangular with 1s on its diagonal
Example: Solve the following set of equations by
Crout’s method:
2x + y + 4z =12
8x – 3y + 2z =20
4x + 11y – z =33
Decomposition Phase (column operations on A):
c2 = c2 – 0.5c1, c3 = c3 – 2c1
c3 = c3 – 2c2
L = [2 0 0; 8 -7 0; 4 9 -27], U = [1 0.5 2; 0 1 2; 0 0 1]
Solution Phase:
Forward substitution, Ly = b:
2y1 = 12 → y1 = 6
8y1 – 7y2 = 20 → y2 = (20 – 8(6))/(-7) = 4
4y1 + 9y2 – 27y3 = 33 → y3 = (33 – 4(6) – 9(4))/(-27) = 1
Backward substitution, Ux = y:
z = y3 = 1
y + 2z = y2 → y = 4 – 2(1) = 2
x + ½y + 2z = y1 → x = 6 – ½(2) – 2(1) = 3
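Crout's method can be sketched the same way in pure Python (illustrative, mirroring the example above; here the nonunit diagonal sits in L rather than U):

```python
def crout(A):
    """Crout LU decomposition: A = LU with 1s on the diagonal of U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):        # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):    # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

A = [[2, 1, 4], [8, -3, 2], [4, 11, -1]]
b = [12, 20, 33]
L, U = crout(A)
n = 3
# Forward substitution L y = b (L carries the nonunit diagonal here).
y = [0.0] * n
for i in range(n):
    y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
# Backward substitution U x = y (unit diagonal, so no division needed).
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))
# x should reproduce x = 3, y = 2, z = 1
```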
Problem:
• Solve the following set of equations by using the Crout’s
method:
2x1 + x2 + x3 = 7
x1 + 2x2 + x3 = 8
x1 + x2 + 2x3 = 9
Choleski’s Decomposition
• A = LLᵀ, where U = Lᵀ
• Limitations:
–requires A to be symmetric since the matrix
product LLᵀ is symmetric
–involves taking square roots of certain
combinations of the elements of A
– square roots of negative numbers can be avoided only
if A is positive definite
Looking at Choleski’s A = LLᵀ
Example:
• Compute the Choleski’s decomposition of matrix A and
solve x by using the constant vector b.
Solution:
Using Matlab:
>> A = [1 1 1; 1 2 2; 1 2 3];
>> b = [1 3/2 3]';
>> L = chol(A), U = L'
L =
     1     1     1
     0     1     1
     0     0     1
U =
     1     0     0
     1     1     0
     1     1     1
>> x = L\(U\b)
x =
    0.5000
   -1.0000
    1.5000
Note: Matlab’s chol returns an upper triangular factor, so here
A = L'*L = U*L; solve U*y = b by forward substitution, then
L*x = y by backward substitution.
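The same decomposition can be sketched in pure Python (illustrative; A and b are the matrices from the Matlab run above, with L lower triangular so that A = LLᵀ):

```python
import math

def cholesky(A):
    """Choleski decomposition A = L L^T, L lower triangular.

    Requires A symmetric positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            # Diagonal entries take a square root; off-diagonals divide.
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    return L

A = [[1, 1, 1], [1, 2, 2], [1, 2, 3]]
b = [1, 1.5, 3]
L = cholesky(A)
n = 3
# Forward substitution: L y = b
y = [0.0] * n
for i in range(n):
    y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
# Backward substitution: L^T x = y (read L by columns)
x = [0.0] * n
for i in reversed(range(n)):
    x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
```

A negative value of `s` on the diagonal would make `math.sqrt` fail, which is exactly the positive-definiteness limitation noted above.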
Problem:
• Solve the equation Ax = b by Choleski’s decomposition
method, where
Additional Problem:
• Given the LU decomposition A = LU, determine A and
|A|.
1.6 Jacobi’s Iteration Method
Consider the equation:
3x + 1 = 0
which can be cast into an iterative scheme as:
2x = -x – 1, or x(k+1) = -(x(k) + 1)/2
Will it converge? Yes: each iteration multiplies the error
by -1/2, so the iterates approach the root x = -1/3.
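The scalar iteration can be tried in a few lines of Python (illustrative):

```python
# Iterate x(k+1) = -(x(k) + 1) / 2, starting from x = 0.
x = 0.0
for _ in range(40):
    x = -(x + 1) / 2
# The multiplier -1/2 has magnitude < 1, so the iterates converge
# to the root of 3x + 1 = 0, i.e., x = -1/3.
print(x)
```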
Jacobi’s Iteration Method
• aka the method of simultaneous displacements
• applicable to diagonally dominant (predominantly diagonal) systems
• Consider the system of linear equations:
2x – y + 3z = 4
x + 9y – 2z = -8
4x – 8y + 11z = 15
• The unknowns are solved using the equations:
x = (4 + y – 3z)/2
y = (-8 – x + 2z)/9
z = (15 – 4x + 8y)/11
• in each iteration, all unknowns are updated simultaneously
from the values of the previous iteration
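A pure-Python sketch of both iteration methods applied to this system (Gauss-Seidel included for comparison; a fixed iteration count stands in for a proper convergence test, and the exact solution works out to x = 61/53, y = -51/53, z = 13/53):

```python
def jacobi(A, b, iters=300):
    """Jacobi: every unknown is updated from the *previous* iterate
    (simultaneous displacements)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Building a new list means every x[j] on the right is the old value.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, iters=300):
    """Gauss-Seidel: each unknown immediately uses the values already
    updated in the current sweep (successive displacements)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

A = [[2, -1, 3], [1, 9, -2], [4, -8, 11]]
b = [4, -8, 15]
xj = jacobi(A, b)
xg = gauss_seidel(A, b)
```

Both iterations happen to converge here, though slowly for Jacobi: the first row is not strictly diagonally dominant (|2| < |-1| + |3|), so convergence is not guaranteed in advance by the dominance test.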
Matlab Functions
• x = A\b
– returns the solution x of Ax =b
– obtained by Gauss elimination
• [L,U] = lu(A)
– returns an upper triangular matrix in U and a permuted lower triangular matrix L
• L = chol(A)
– Choleski’s decomposition A = LLᵀ
• B = inv(A)
– returns B as the inverse of A
• c = cond(A)
– returns the condition number of the matrix A
Matlab Functions
• A = spdiags(B,d,n,n)
– creates an n×n sparse matrix from the columns of matrix B by placing the
columns along the diagonals specified by d
• A = full(S)
– converts the sparse matrix S into a full matrix A
• S = sparse(A)
– converts the full matrix A into a sparse matrix S
• x = lsqr(A,b)
– conjugate gradient method for solving Ax = b
• spy(S)
– draws a map of the nonzero elements of S
Exercises: Set 1
Solve the following set of simultaneous linear equations by the matrix
inverse method.
(a)
(b)
Exercises: Set 2
Solve the following systems using Gaussian elimination
and Gauss-Jordan process:
a)
b)
Exercises: Set 3
1) Solve using appropriate LU method.
a)
b)
Exercises: Set 4
Solve using Jacobi’s and Gauss-Seidel iteration methods:
a)
b)