Matrix
A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]).
An individual entry of a matrix is called an element (example: a23).
      a11  a12  a13  ⋯  a1n
      a21  a22  a23  ⋯  a2n
[A] =  ⋮    ⋮    ⋮         ⋮
      am1  am2  am3  ⋯  amn
A horizontal set of elements is called a row and a vertical set of elements is called a column.
The first subscript of an element indicates the row while the second indicates the column.
The size of a matrix is given as m rows by n columns, or simply m by n (or m x n).
1 x n matrices are row vectors.
m x 1 matrices are column vectors.
Matrices where m=n are called square matrices.
There are a number of special forms of square matrices:
Symmetric:          Diagonal:              Identity:
    5 1 2               a11  0   0             1 0 0
A = 1 3 7           A =  0  a22  0         A = 0 1 0
    2 7 8                0   0  a33            0 0 1
Prepared BY
Shahadat Hussain Parvez
EEE 305 Lecture 6: Simultaneous Linear Algebraic Equations
Thus, the equations are now in the form of straight lines; that is, x2 = (slope) x1 + intercept. These
lines can be graphed on Cartesian coordinates with x2 as the ordinate and x1 as the abscissa. The
values of x1 and x2 at the intersection of the lines represent the solution. The figure below shows an
example of how the graphical method works.
Figure 1
For three simultaneous equations, each equation would be represented by a plane in a three-
dimensional coordinate system. The point where the three planes intersect would represent the
solution. Beyond three equations, graphical methods break down and, consequently, have little
practical value for solving simultaneous equations. However, they sometimes prove useful in
visualizing properties of the solutions.
Graphing the equations (Example in figure 2) can also show systems where:
a) No solution exists
b) Infinite solutions exist
c) System is ill-conditioned
Figure 2
Cramer’s Rule
Cramer’s rule uses determinants to find the solution of linear algebraic equations.
Determinant of a matrix
The determinant D=|A| of a matrix is formed from the coefficients of [A]. Determinants for small
matrices are:
For a 1 x 1 matrix:  D = |a11| = a11
For a 2 x 2 matrix:  D = a11 a22 − a12 a21
For a 3 x 3 matrix:  D = a11 (a22 a33 − a23 a32) − a12 (a21 a33 − a23 a31) + a13 (a21 a32 − a22 a31)
Determinants for matrices larger than 3 x 3 can be very complicated.
Cramer’s Rule states that each unknown in a system of linear algebraic equations may be
expressed as a fraction of two determinants with denominator D and with the numerator obtained
from D by replacing the column of coefficients of the unknown in question by the constants b1, b2,
…, bn.
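As a concrete sketch, the small-matrix determinant formulas above can be combined into a Cramer's-rule solver for a 3 x 3 system. The function names here are illustrative, not from any library:

```python
def det2(m):
    # Determinant of a 2x2 matrix: a11*a22 - a12*a21
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    # Cofactor expansion along the first row, as in the 3x3 formula above.
    return (m[0][0] * det2([[m[1][1], m[1][2]], [m[2][1], m[2][2]]])
          - m[0][1] * det2([[m[1][0], m[1][2]], [m[2][0], m[2][2]]])
          + m[0][2] * det2([[m[1][0], m[1][1]], [m[2][0], m[2][1]]]))

def cramer3(A, b):
    # Cramer's rule: x_i = det(A with column i replaced by b) / det(A)
    D = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]      # copy A ...
        for r in range(3):
            Ai[r][i] = b[r]             # ... and replace column i by b
        xs.append(det3(Ai) / D)
    return xs

# Example (hand-made system):
# cramer3([[2, 1, 1], [1, 3, 2], [1, 0, 0]], [4, 5, 6]) -> [6.0, 15.0, -23.0]
```

Note that this approach is only practical for small systems; for n unknowns, cofactor expansion of the determinants costs on the order of n! operations.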
Figure 3
The first phase is designed to reduce the set of equations to an upper triangular system. The initial
step will be to eliminate the first unknown, x1, from the second through the nth equations. To do
this, multiply Eq. (a) by a21/a11 to give
Or,
Here the prime indicates that the coefficient has been changed. If a coefficient is changed two
times, it carries two primes.
The procedure is then repeated for the remaining equations. For instance, Eq. (a) can be multiplied
by a31/a11 and the result subtracted from the third equation. Repeating the procedure for the remaining
equations results in the following modified system:
(d)  a11 x1 + a12 x2 + a13 x3 + ⋯ + a1n xn = b1
(e)        a′22 x2 + a′23 x3 + ⋯ + a′2n xn = b′2
(f)        a′32 x2 + a′33 x3 + ⋯ + a′3n xn = b′3
           ⋮
(g)        a′n2 x2 + a′n3 x3 + ⋯ + a′nn xn = b′n
For the foregoing steps, Eq. (a) is called the pivot equation and a11 is called the pivot coefficient or
element. Note that the process of multiplying the first row by a21/a11 is equivalent to dividing it by
a11 and multiplying it by a21. Sometimes the division operation is referred to as normalization.
Now repeat the above to eliminate the second unknown from Eq. (f) through (g). To do this multiply
Eq. (e) by a’32/a’22 and subtract the result from Eq. (f). Perform a similar elimination for the
remaining equations to yield
The procedure can be continued using the remaining pivot equations. The final manipulation in the
sequence is to use the (n − 1)th equation to eliminate the xn−1 term from the nth equation. At this
point, the system will have been transformed to an upper triangular system
(1)  a11 x1 + a12 x2 + a13 x3 + ⋯ + a1n xn = b1
(2)        a′22 x2 + a′23 x3 + ⋯ + a′2n xn = b′2
(3)              a″33 x3 + ⋯ + a″3n xn = b″3
           ⋮
(4)                    ann⁽ⁿ⁻¹⁾ xn = bn⁽ⁿ⁻¹⁾
Back substitution
Equation 4 can be solved for xn using the equation
xn = bn⁽ⁿ⁻¹⁾ / ann⁽ⁿ⁻¹⁾
This result can be back-substituted into the (n − 1)th equation to solve for xn−1. The procedure, which
is repeated to evaluate the remaining x's, can be represented by the following formula:
xi = ( bi⁽ⁱ⁻¹⁾ − Σ(j = i+1 to n) aij⁽ⁱ⁻¹⁾ xj ) / aii⁽ⁱ⁻¹⁾    for i = n − 1, n − 2, …, 1
Pseudo code for (a) forward elimination and (b) back substitution
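The forward-elimination and back-substitution phases described above can be sketched as follows. This is a minimal naive implementation (no pivoting), with names chosen for illustration:

```python
def gauss_naive(A, b):
    """Naive Gauss elimination: forward elimination to an upper
    triangular system, then back substitution. No pivoting, so a
    zero on the diagonal raises ZeroDivisionError."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies, keep caller's data intact
    b = b[:]
    # Forward elimination: for each pivot row k, eliminate x_k from
    # rows k+1..n-1 by subtracting (a_ik / a_kk) times the pivot row.
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution: solve for x_n first, then work upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```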
Division by 0
The primary reason that this technique is called “naive” is that during both the elimination and the
back-substitution phases, it is possible that a division by zero can occur. For example, if we use naive
Gauss elimination to solve
Since the a11 element is 0, a division by zero occurs. This problem can be overcome using the
pivoting technique.
Ill-Conditioned Systems
Well-conditioned systems are those where a small change in one or more of the coefficients results
in a similar small change in the solution. Ill-conditioned systems are those where small changes in
coefficients result in large changes in the solution.
An alternative interpretation of ill-conditioning is that a wide range of answers can approximately
satisfy the equations. Because round-off errors can induce small changes in the coefficients, these
artificial changes can lead to large solution errors for ill-conditioned systems.
See example 9.6 for an example of an ill-conditioned system.
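A small numerical illustration of ill-conditioning (a hand-made 2 x 2 system chosen for this sketch): the two lines below are nearly parallel, so a change of only 0.05 in one coefficient moves the solution from (4, 3) to (8, 1).

```python
def solve2(a11, a12, a21, a22, b1, b2):
    # Direct solve of a 2x2 system via Cramer's rule.
    d = a11 * a22 - a12 * a21
    return ((b1 * a22 - a12 * b2) / d,
            (a11 * b2 - b1 * a21) / d)

# Original system:  x1 + 2 x2 = 10;  1.1 x1 + 2 x2 = 10.4
x = solve2(1.0, 2.0, 1.1, 2.0, 10.0, 10.4)          # ~ (4, 3)
# Slightly perturbed coefficient: 1.1 -> 1.05
x_perturbed = solve2(1.0, 2.0, 1.05, 2.0, 10.0, 10.4)  # ~ (8, 1)
```

Because round-off errors act like such small coefficient perturbations, an ill-conditioned system can return a badly wrong answer without any warning.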
Singular system
When two or more equations are identical, the system is singular: we are effectively dealing with
the impossible case of (n − 1) independent equations in n unknowns.
Fortunately, the determinant of a singular system is 0. A computer can therefore evaluate the
determinant during elimination and, if it is found to be 0, stop the calculation immediately.
Pivoting
Problems arise with naïve Gauss elimination if a coefficient along the diagonal is 0 (problem:
division by 0) or close to 0 (problem: round-off error).
One way to combat these issues is to determine the coefficient with the largest absolute value in the
column below the pivot element. The rows can then be switched so that the largest element is the
pivot element. This is called partial pivoting.
If, in addition, the columns to the right of the pivot element are searched for the largest element
and columns are switched as well, this is called complete pivoting.
See example 9.9 for an example of pivoting.
Scaling effect
Scaling has value in standardizing the size of the determinant. It has utility in minimizing round-off
errors for those cases where some of the equations in a system have much larger coefficients than
others. Such situations are frequently encountered in engineering practice when widely different
units are used in the development of simultaneous equations.
See examples 9.7, 9.8 and 9.10 for examples of the scaling effect.
Gauss-Jordan
The Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an
unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather
than just the subsequent ones. In addition, all rows are normalized by dividing them by their pivot
elements. Thus, the elimination step results in an identity matrix rather than a triangular matrix.
Consequently, it is not necessary to employ back substitution to obtain the solution.
The figure below shows a depiction of how Gauss Jordan method works.
Figure 4
See example 9.12 for a worked example of the Gauss-Jordan method.
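The normalize-and-eliminate-everywhere procedure can be sketched as follows (a naive version without pivoting, for illustration only):

```python
def gauss_jordan(A, b):
    """Gauss-Jordan: normalize each pivot row by its pivot element and
    eliminate the unknown from *all* other rows, so the coefficient
    matrix is reduced to the identity and no back substitution is
    needed. Naive version: no pivoting."""
    n = len(A)
    aug = [A[i][:] + [b[i]] for i in range(n)]   # augmented matrix [A | b]
    for k in range(n):
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]       # normalize the pivot row
        for i in range(n):
            if i != k:                           # eliminate from every other row
                f = aug[i][k]
                aug[i] = [aug[i][j] - f * aug[k][j] for j in range(n + 1)]
    return [aug[i][n] for i in range(n)]         # last column is the solution
```

Note that Gauss-Jordan requires more total operations than Gauss elimination with back substitution, which is why the latter is usually preferred.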
For lab
Page 264, Figure 9.6 gives pseudocode for the implementation of Gauss elimination with partial
pivoting. This is important for lab.
LU Factorization [Decomposition]
As described previously, Gauss elimination is designed to solve systems of linear algebraic equations,
[A]{X} = {B}
Although it certainly represents a sound way to solve such systems, it becomes inefficient when
solving equations with the same coefficients [A] but different right-hand-side constants (the b's).
LU decomposition methods separate the time-consuming elimination of the matrix [A] from the
manipulations of the right-hand side {B}. Thus, once [A] has been "decomposed," multiple right-
hand-side vectors can be evaluated in an efficient manner.
Interestingly, Gauss elimination itself can be expressed as an LU decomposition. Before showing how
this can be done, let us first provide a mathematical overview of the decomposition strategy.
[A]{X} = {B}
→ [A]{X} − {B} = 0
Suppose this equation can be written as an upper triangular system:
u11 u12 u13   x1     d1
 0  u22 u23   x2  =  d2
 0   0  u33   x3     d3
This is similar to the manipulation that occurs in the first step of Gauss elimination. That is,
elimination is used to reduce the system to upper triangular form.
[𝑈 ]{𝑋 } − {𝐷} = 0
Now, assume that there is a lower triangular matrix with 1's on the diagonal,
     1   0   0
L =  l21  1   0
     l31 l32  1
Since the matrix [A] is decomposed into [U] and [L], it can be written as
[𝐿]{[𝑈 ]{𝑋 } − {𝐷}} = [ 𝐴]{𝑋 } − {𝐵}
If this equation holds, it follows from the rules for matrix multiplication that
[𝐿][𝑈 ] = [ 𝐴]
And [𝐿]{𝐷} = {𝐵}
We can also write
[A]{X} = {B}
[L][U]{X} = {B}
[L]⁻¹[L][U]{X} = [L]⁻¹{B}
[U]{X} = [L]⁻¹{B} = {D}
So {D} = [L]⁻¹{B}
∴ {B} = [L]{D}
A two-step strategy for obtaining solutions can be used:
1. LU decomposition step. [A] is factored or "decomposed" into lower [L] and upper [U] triangular
matrices.
2. Substitution step. [L] and [U] are used to determine a solution {X} for a right-hand side {B}.
a. First, Eq. [L]{D} = {B} is used to generate an intermediate vector {D} by forward
substitution.
b. Then, the result is substituted into Eq. [U]{X} − {D} = 0, which can be solved by back
substitution for {X}.
The result of the elimination phase can be stored compactly in a single matrix, with the
elimination factors fij saved below the diagonal:
a11  a12  a13
f21  a′22 a′23
f31  f32  a″33
Where
      a11  a12  a13              1    0   0
[U] =  0   a′22 a′23   &   [L] = f21   1   0
       0    0   a″33             f31  f32  1
The pseudo subroutine in Figure 6a shows pseudocode for the implementation of the decomposition.
This algorithm is "naive" in the sense that pivoting is not included.
The forward-substitution step can be represented concisely as
di = bi − Σ(j = 1 to i−1) fij dj    for i = 2, 3, …, n (with d1 = b1)
The pseudo subroutine in Figure 6b shows pseudocode for the implementation of the substitution step.
Figure 6: (a) decomposition; (b) substitution
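The decomposition and substitution phases described above can be sketched as follows (illustrative code, not the pseudocode of Figure 6): the decomposition stores the factors fij below the diagonal, and the substitution routine then serves any number of right-hand-side vectors.

```python
def lu_decompose(A):
    """Naive LU factorization (no pivoting), done exactly as the
    elimination phase of Gauss elimination: [U] overwrites the upper
    part and the factors f_ij of [L] are stored below the diagonal
    (the unit diagonal of [L] is implied, not stored)."""
    n = len(A)
    m = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            m[i][k] = f                  # store the factor (entry of [L])
            for j in range(k + 1, n):
                m[i][j] -= f * m[k][j]   # update the row (entry of [U])
    return m

def lu_solve(m, b):
    """Solve [L]{d} = {b} by forward substitution, then
    [U]{x} = {d} by back substitution."""
    n = len(m)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(m[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / m[i][i]
    return x
```

Once `lu_decompose` has run, `lu_solve` can be called repeatedly with different {b} vectors at only the cheap substitution cost, which is the main payoff of the method.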
To summarize
• [A]{x}={b} can be rewritten as [L][U]{x}={b} using LU factorization.
• The LU factorization algorithm requires the same total flops as for Gauss elimination.
• The main advantage is once [A] is decomposed, the same [L] and [U] can be used for
multiple {b} vectors.
• To solve [A]{x}={b}, first decompose [A] to get [L][U]{x}={b}
• Set up and solve [L]{d}={b}, where {d} can be found using forward substitution.
• Set up and solve [U]{x}={d}, where {x} can be found using backward substitution.
• MATLAB’s lu function can be used to generate the [L] and [U] matrices:
[L, U] = lu(A)
• To solve in MATLAB:
[L, U] = lu(A)
d = L\b
x = U\d
For lab
Page 282, Figure 10.2 gives pseudocode for the implementation of LU decomposition. This is
important for lab.