
EEE 305 Lecture 6: Simultaneous Linear Algebraic Equations

Matrix
A matrix consists of a rectangular array of elements represented by a single symbol (example: [A]).
An individual entry of a matrix is called an element (example: a23).
      [ a11  a12  a13  ...  a1n ]
      [ a21  a22  a23  ...  a2n ]
[A] = [  .    .    .          . ]
      [ am1  am2  am3  ...  amn ]
• A horizontal set of elements is called a row and a vertical set of elements is called a column.
• The first subscript of an element indicates the row while the second indicates the column.
• The size of a matrix is given as m rows by n columns, or simply m by n (or m x n).
• 1 x n matrices are row vectors.
• m x 1 matrices are column vectors.
• Matrices where m = n are called square matrices.
• There are a number of special forms of square matrices:
Symmetric:
      [ 5  1  2 ]
[A] = [ 1  3  7 ]
      [ 2  7  8 ]

Diagonal:
      [ a11           ]
[A] = [      a22      ]
      [           a33 ]

Identity:
      [ 1       ]
[A] = [    1    ]
      [       1 ]

Upper triangular:
      [ a11  a12  a13 ]
[A] = [      a22  a23 ]
      [           a33 ]

Lower triangular:
      [ a11           ]
[A] = [ a21  a22      ]
      [ a31  a32  a33 ]

Banded (here tridiagonal):
      [ a11  a12           ]
[A] = [ a21  a22  a23      ]
      [      a32  a33  a34 ]
      [           a43  a44 ]
Matrix operations
• Two matrices are considered equal if and only if every element in the first matrix is equal to
every corresponding element in the second. This means the two matrices must be the same
size.
• Matrix addition and subtraction are performed by adding or subtracting the corresponding
elements. This requires that the two matrices be the same size.
• Scalar matrix multiplication is performed by multiplying each element by the same scalar.
Matrix multiplication
The elements in the matrix [C] that results from multiplying matrices [A] and [B] are calculated using:
c_ij = Σ (k = 1 to n) a_ik b_kj

Matrix inverse and transpose
The inverse of a square, nonsingular matrix [A] is that matrix which, when multiplied by [A], yields
the identity matrix.
[A][A]-1=[A]-1[A]=[I]
The transpose of a matrix involves transforming its rows into columns and its columns into rows.
(a_ij)^T = a_ji
Representing linear algebra
Matrices provide a concise notation for representing and solving simultaneous linear equations:
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

or, in matrix form, [A]{x} = {b}:

[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]
Graphical method
For small sets of simultaneous equations, graphing them and determining the location of the
intercept provides a solution.
For a system with two variables, such as

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

both equations can be solved for x2:

x2 = -(a11/a12) x1 + b1/a12
x2 = -(a21/a22) x1 + b2/a22
Thus, the equations are now in the form of straight lines; that is, x2 = (slope) x1 + intercept. These
lines can be graphed on Cartesian coordinates with x2 as the ordinate and x1 as the abscissa. The
values of x1 and x2 at the intersection of the lines represent the solution. The figure below shows an
example of how the graphical method works.

Figure 1
Page

Prepared BY
Shahadat Hussain Parvez
EEE 305 Lecture 6: Simultaneous Linear Algebraic Equations

For three simultaneous equations, each equation would be represented by a plane in a three-
dimensional coordinate system. The point where the three planes intersect would represent the
solution. Beyond three equations, graphical methods break down and, consequently, have little
practical value for solving simultaneous equations. However, they sometimes prove useful in
visualizing properties of the solutions.
Graphing the equations (Example in figure 2) can also show systems where:
a) No solution exists
b) Infinite solutions exist
c) System is ill-conditioned

Figure 2

The graphical method can be summarized in two steps:


1. Plot the equations
2. The intersecting point is the solution
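As a minimal MATLAB sketch of these two steps, take for instance the system 3x1 + 2x2 = 18 and -x1 + 2x2 = 2 (solution x1 = 4, x2 = 3):

% plot both equations solved for x2 and read off the intersection
x1 = linspace(0, 8, 100);
x2a = (18 - 3*x1) / 2;    % first line solved for x2
x2b = (2 + x1) / 2;       % second line solved for x2
plot(x1, x2a, x1, x2b)
xlabel('x_1'), ylabel('x_2')
% the lines cross at x1 = 4, x2 = 3, which is the solution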

Cramer’s Rule
Cramer’s rule uses determinants to find the solution of linear algebraic equations.
Determinant of a matrix
The determinant D=|A| of a matrix is formed from the coefficients of [A]. Determinants for small
matrices are:
For a 1 x 1 matrix:  D = a11

For a 2 x 2 matrix:
    D = | a11  a12 | = a11 a22 - a12 a21
        | a21  a22 |

For a 3 x 3 matrix:
    D = | a11  a12  a13 |
        | a21  a22  a23 | = a11 | a22  a23 | - a12 | a21  a23 | + a13 | a21  a22 |
        | a31  a32  a33 |       | a32  a33 |       | a31  a33 |       | a31  a32 |
Determinants for matrices larger than 3 x 3 can be very complicated.

Cramer’s Rule states that each unknown in a system of linear algebraic equations may be
expressed as a fraction of two determinants with denominator D and with the numerator obtained
from D by replacing the column of coefficients of the unknown in question by the constants b1, b2,
…, bn.

Example of Cramer’s rule


• Find x2 in the following system of equations:
0.3 x1 + 0.52 x2 +     x3 = -0.01
0.5 x1 +      x2 + 1.9 x3 =  0.67
0.1 x1 + 0.3  x2 + 0.5 x3 = -0.44
• Find the determinant D
    D = | 0.3  0.52  1   |
        | 0.5  1     1.9 | = 0.3 | 1    1.9 | - 0.52 | 0.5  1.9 | + 1 | 0.5  1   | = -0.0022
        | 0.1  0.3   0.5 |       | 0.3  0.5 |        | 0.1  0.5 |     | 0.1  0.3 |
• Find determinant D2 by replacing D’s second column with b
    D2 = | 0.3  -0.01  1   |
         | 0.5   0.67  1.9 | = 0.3 | 0.67   1.9 | - (-0.01) | 0.5  1.9 | + 1 | 0.5   0.67 | = 0.0649
         | 0.1  -0.44  0.5 |       | -0.44  0.5 |           | 0.1  0.5 |     | 0.1  -0.44 |
• Divide
x2 = D2 / D = 0.0649 / (-0.0022) = -29.5
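The same computation can be checked numerically with MATLAB's det function (a minimal sketch of the example above):

% verify the Cramer's-rule result for x2
A = [0.3 0.52 1; 0.5 1 1.9; 0.1 0.3 0.5];
b = [-0.01; 0.67; -0.44];
D = det(A);             % -0.0022
A2 = A;
A2(:,2) = b;            % replace D's second column with b
x2 = det(A2) / D        % -29.5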
Elimination of unknowns
The elimination of unknowns by combining equations is an algebraic approach that can be illustrated
for a set of two equations:
𝑎11 𝑥1 + 𝑎12 𝑥2 = 𝑏1
𝑎21 𝑥1 + 𝑎22 𝑥2 = 𝑏2
The basic strategy is to multiply the equations by constants so that one of the unknowns will be
eliminated when the two equations are combined. The result is a single equation that can be solved
for the remaining unknown. This value can then be substituted into either of the original equations
to compute the other variable.
For example, the first equation can be multiplied by a21 and the second by a11 to give
a11 a21 x1 + a12 a21 x2 = b1 a21
a21 a11 x1 + a22 a11 x2 = b2 a11
Subtracting the first of these from the second eliminates x1 and yields
(a22 a11 - a12 a21) x2 = b2 a11 - b1 a21
which can be solved for
x2 = (a11 b2 - a21 b1) / (a11 a22 - a12 a21)
and substituted back into either original equation to give
x1 = (a22 b1 - a12 b2) / (a11 a22 - a12 a21)

Notice that these results are exactly Cramer's rule for a 2 x 2 system: each unknown is a ratio of two determinants.

Naïve Gauss Elimination


For larger systems, Cramer’s Rule can become unwieldy.
Instead, a sequential process of removing unknowns from equations using (i) forward elimination
followed by (ii) back substitution may be used - this is Gauss elimination.
“Naïve” Gauss elimination simply means the process does not check for potential problems resulting
from division by zero.
• Forward elimination
– Starting with the first row, add or subtract multiples of that row to eliminate the first
coefficient from the second row and beyond.
– Continue this process with the second row to remove the second coefficient from
the third row and beyond.
– Stop when an upper triangular matrix remains.
• Back substitution
– Starting with the last row, solve for the unknown, then substitute that value into the
next highest row.
– Because of the upper-triangular nature of the matrix, each row will contain only one
more unknown.

Figure 3

Forward elimination of unknowns


Let’s consider the following set of n equations:

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1        (a)
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2        (b)
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn        (c)

The first phase is designed to reduce the set of equations to an upper triangular system. The initial step is to eliminate the first unknown, x1, from the second through the nth equations. To do this, multiply Eq. (a) by a21/a11 to give

a21 x1 + (a21/a11) a12 x2 + ... + (a21/a11) a1n xn = (a21/a11) b1

Now this equation can be subtracted from Eq. (b) to give

(a22 - (a21/a11) a12) x2 + ... + (a2n - (a21/a11) a1n) xn = b2 - (a21/a11) b1

or

a'22 x2 + ... + a'2n xn = b'2
Here the prime indicates that the element has been modified once; an element modified twice carries two primes.
The procedure is then repeated for the remaining equations. For instance, Eq. (a) can be multiplied by a31/a11 and the result subtracted from the third equation. Repeating the procedure for the remaining equations results in the following modified system:

a11 x1 + a12 x2  + a13 x3  + ... + a1n xn  = b1        (d)
         a'22 x2 + a'23 x3 + ... + a'2n xn = b'2       (e)
         a'32 x2 + a'33 x3 + ... + a'3n xn = b'3       (f)
         ...
         a'n2 x2 + a'n3 x3 + ... + a'nn xn = b'n       (g)
For the foregoing steps, Eq. (a) is called the pivot equation and a11 is called the pivot coefficient or
element. Note that the process of multiplying the first row by a21/a11 is equivalent to dividing it by
a11 and multiplying it by a21. Sometimes the division operation is referred to as normalization.
Now repeat the above to eliminate the second unknown from Eq. (f) through Eq. (g). To do this, multiply Eq. (e) by a'32/a'22 and subtract the result from Eq. (f). Perform a similar elimination for the remaining equations to yield

a11 x1 + a12 x2  + a13 x3   + ... + a1n xn   = b1
         a'22 x2 + a'23 x3  + ... + a'2n xn  = b'2
                   a''33 x3 + ... + a''3n xn = b''3
                   ...
                   a''n3 x3 + ... + a''nn xn = b''n
The procedure can be continued using the remaining pivot equations. The final manipulation in the
sequence is to use the (n − 1)th equation to eliminate the xn−1 term from the nth equation. At this
point, the system will have been transformed to an upper triangular system
a11 x1 + a12 x2  + a13 x3   + ... + a1n xn  = b1            (1)
         a'22 x2 + a'23 x3  + ... + a'2n xn = b'2           (2)
                   a''33 x3 + ... + a''3n xn = b''3         (3)
                   ...
                              a_nn^(n-1) xn = b_n^(n-1)     (4)

Back substitution
Equation (4) can be solved for xn:

xn = b_n^(n-1) / a_nn^(n-1)

This result can be back-substituted into the (n-1)th equation to solve for x(n-1). The procedure, which is repeated to evaluate the remaining x's, can be represented by the formula

xi = ( b_i^(i-1) - Σ (j = i+1 to n) a_ij^(i-1) xj ) / a_ii^(i-1),        i = n-1, n-2, ..., 1
Pseudo code for (a) forward elimination and (b) back substitution
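A minimal MATLAB sketch of both phases (an assumed implementation, not the book's figure; note that it fails outright if any pivot is zero):

% naive Gauss elimination: no pivoting, no zero-pivot check
function x = naive_gauss(A, b)
n = length(b);
% (a) forward elimination
for k = 1:n-1
    for i = k+1:n
        factor = A(i,k) / A(k,k);    % division by zero if A(k,k) == 0
        A(i,k:n) = A(i,k:n) - factor*A(k,k:n);
        b(i) = b(i) - factor*b(k);
    end
end
% (b) back substitution
x = zeros(n,1);
x(n) = b(n) / A(n,n);
for i = n-1:-1:1
    x(i) = (b(i) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
end
end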

Gauss program efficiency


• The execution time of Gauss elimination depends on the number of floating-point operations (flops) performed. The flop count for an n x n system is:

  Forward elimination:   2n^3/3 + O(n^2)
  Back substitution:     n^2 + O(n)
  Total:                 2n^3/3 + O(n^2)

• Conclusions:
  – As the system gets larger, the computation time increases greatly.
  – Most of the effort is incurred in the elimination step.

Pitfalls of Elimination methods


Some of the common drawbacks of naive Gauss elimination include:
1. Division by zero
2. Round-off error
3. Ill-conditioned systems
4. Singular systems

Division by 0
The primary reason that this technique is called “naive” is that during both the elimination and the back-substitution phases it is possible for a division by zero to occur. For example, if naive Gauss elimination is applied to a system whose a11 element is 0, the very first normalization requires dividing by zero. This problem can be overcome by using a pivoting technique.

Round off error


Because of the large number of arithmetic operations involved in the technique, small discrepancies are introduced by rounding during intermediate calculations.
The problem becomes significant when a large number of equations must be solved.

Ill-Conditioned Systems
Well-conditioned systems are those where a small change in one or more of the coefficients results
in a similar small change in the solution. Ill-conditioned systems are those where small changes in
coefficients result in large changes in the solution.
An alternative interpretation of ill-conditioning is that a wide range of answers can approximately
satisfy the equations. Because round-off errors can induce small changes in the coefficients, these
artificial changes can lead to large solution errors for ill-conditioned systems.
See Example 9.6 for an example of an ill-conditioned system.
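The effect is easy to demonstrate numerically. The following is a minimal sketch using an assumed pair of nearly parallel lines (not the text's Example 9.6): perturbing one coefficient by under 5% completely changes the solution.

% two nearly parallel lines form an ill-conditioned system
A  = [1 2; 1.1 2];
b  = [10; 10.4];
x  = A\b              % x1 = 4, x2 = 3
A2 = [1 2; 1.05 2];   % slope coefficient perturbed by about 4.5%
x2 = A2\b             % x1 = 8, x2 = 1 -- a drastically different solution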

Singular system
The problem of a singular system arises when two or more equations are identical or linearly dependent; we are then effectively dealing with the impossible case of n - 1 independent equations in n unknowns.
This case is neatly detected by the fact that the determinant of a singular system is zero: a computer can evaluate the determinant during elimination and, if it finds it to be zero, stop the calculation immediately.

Techniques for improving solutions


Use more significant figures
Carrying more significant figures in the computation reduces round-off error and mitigates the effects of ill-conditioning.

Pivoting
Problems arise with naive Gauss elimination when a coefficient along the diagonal is 0 (problem: division by zero) or close to 0 (problem: round-off error).
One way to combat these issues is to determine the coefficient with the largest absolute value in the column below the pivot element. The rows can then be switched so that this largest element becomes the pivot element. This is called partial pivoting.
If columns as well as rows are searched for the largest element, and both rows and columns are switched, the procedure is called complete pivoting.
See Example 9.9 for an example of pivoting.
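In code, partial pivoting is a small addition at the top of each elimination pass. A minimal MATLAB sketch (assuming the naive elimination loop shown earlier, with k as the current pivot row):

% partial pivoting: find the largest pivot candidate in column k
[~, p] = max(abs(A(k:n, k)));   % position of the largest magnitude
p = p + k - 1;                  % convert to an actual row index of A
if p ~= k
    A([k p], :) = A([p k], :);  % swap the two rows of A
    b([k p])    = b([p k]);     % keep b consistent with the swap
end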

Scaling effect
Scaling has value in standardizing the size of the determinant. It has utility in minimizing round-off
errors for those cases where some of the equations in a system have much larger coefficients than
others. Such situations are frequently encountered in engineering practice when widely different
units are used in the development of simultaneous equations.
See Examples 9.7, 9.8, and 9.10 for examples of the scaling effect.

Gauss -Jordan
The Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an
unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather
than just the subsequent ones. In addition, all rows are normalized by dividing them by their pivot
elements. Thus, the elimination step results in an identity matrix rather than a triangular matrix.
Consequently, it is not necessary to employ back substitution to obtain the solution.
The figure below depicts how the Gauss-Jordan method works.

Figure 4

See Example 9.12 for a worked example of the Gauss-Jordan method.
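Conceptually the algorithm is compact. Here is a minimal MATLAB sketch of naive Gauss-Jordan (no pivoting), operating on the augmented matrix:

% naive Gauss-Jordan: reduce [A | b] to [I | x]
Aug = [A b];                          % augmented matrix
n = size(A, 1);
for k = 1:n
    Aug(k,:) = Aug(k,:) / Aug(k,k);   % normalize the pivot row
    for i = [1:k-1, k+1:n]            % eliminate in ALL other rows
        Aug(i,:) = Aug(i,:) - Aug(i,k)*Aug(k,:);
    end
end
x = Aug(:, n+1);                      % solution sits in the last column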

For lab
Page 264, Figure 9.6 has pseudo code for the implementation of Gauss elimination with partial pivoting. This is important for lab.

LU Factorization [Decomposition]
As described previously, Gauss elimination is designed to solve systems of linear algebraic equations,
[A]{X} = {B}
Although it certainly represents a sound way to solve such systems, it becomes inefficient when
solving equations with the same coefficients [A], but with different right-hand-side constants (the
b’s).
LU decomposition methods separate the time-consuming elimination of the matrix [A] from the manipulations of the right-hand side {B}. Thus, once [A] has been “decomposed,” multiple right-hand-side vectors can be evaluated in an efficient manner.
Interestingly, Gauss elimination itself can be expressed as an LU decomposition. Before showing how
this can be done, let us first provide a mathematical overview of the decomposition strategy.
[A]{X} = {B}  →  [A]{X} - {B} = 0

Suppose this equation can be expressed as an upper triangular system:

[ u11  u12  u13 ] [ x1 ]   [ d1 ]
[ 0    u22  u23 ] [ x2 ] = [ d2 ]
[ 0    0    u33 ] [ x3 ]   [ d3 ]

This is similar to the manipulation that occurs in the first step of Gauss elimination; that is, elimination is used to reduce the system to upper triangular form:

[U]{X} - {D} = 0
Now, assume that there is a lower triangular matrix with 1's on the diagonal,

      [ 1    0    0 ]
[L] = [ l21  1    0 ]
      [ l31  l32  1 ]
that has the property that when it premultiplies [U]{X} - {D}, the result is [A]{X} - {B}:

[L]{[U]{X} - {D}} = [A]{X} - {B}

If this equation holds, it follows from the rules for matrix multiplication that

[L][U] = [A]
and
[L]{D} = {B}
We can also write:

[A]{X} = {B}
[L][U]{X} = {B}
[L]^-1 [L][U]{X} = [L]^-1 {B}
[U]{X} = [L]^-1 {B} = {D}

so {D} = [L]^-1 {B}, and therefore {B} = [L]{D}.
A two-step strategy for obtaining solutions can thus be used:
1. LU decomposition step. [A] is factored, or “decomposed,” into lower [L] and upper [U] triangular matrices.
2. Substitution step. [L] and [U] are used to determine a solution {X} for a right-hand side {B}.
   a. First, [L]{D} = {B} is used to generate an intermediate vector {D} by forward substitution.
   b. Then, the result is substituted into [U]{X} - {D} = 0, which can be solved by back substitution for {X}.


The two-step strategy is summarized by the figure below.

Figure 5 The steps in LU decomposition

LU Decomposition Version of Gauss Elimination


Gauss elimination can be used to decompose [A] into [L] and [U]
[U] is the direct product of the forward elimination step of Gauss elimination:

      [ u11  u12  u13 ]   [ a11  a12   a13   ]
[U] = [ 0    u22  u23 ] = [ 0    a'22  a'23  ]
      [ 0    0    u33 ]   [ 0    0     a''33 ]

which is in the desired upper triangular format.
Though it might not be as apparent, the matrix [L] is also produced during this step. This can be readily illustrated for a three-equation system,

[ a11  a12  a13 ] [ x1 ]   [ b1 ]
[ a21  a22  a23 ] [ x2 ] = [ b2 ]
[ a31  a32  a33 ] [ x3 ]   [ b3 ]
The first step in Gauss elimination is to multiply row 1 by the factor

f21 = a21 / a11

and subtract the result from the second row to eliminate a21. Similarly, row 1 is multiplied by

f31 = a31 / a11

and the result subtracted from the third row to eliminate a31. The final step is to multiply the modified second row by

f32 = a'32 / a'22

and subtract the result from the third row to eliminate a'32.
We can store these coefficients as:
[ a11  a12   a13   ]
[ f21  a'22  a'23  ]
[ f31  f32   a''33 ]

where

      [ a11  a12   a13   ]             [ 1    0    0 ]
[U] = [ 0    a'22  a'23  ]   and [L] = [ f21  1    0 ]
      [ 0    0     a''33 ]             [ f31  f32  1 ]
The pseudo code subroutine in Figure 6(a) shows an implementation of this decomposition step.
This algorithm is “naive” in the sense that pivoting is not included.
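A minimal MATLAB sketch of this factorization, storing each factor f_ij in the position it just zeroed (the function name naive_lu is mine; pivoting is deliberately omitted):

% naive LU decomposition: factors overwrite the lower triangle of A
function [L, U] = naive_lu(A)
n = size(A, 1);
for k = 1:n-1
    for i = k+1:n
        A(i,k) = A(i,k) / A(k,k);                    % store factor f_ik
        A(i,k+1:n) = A(i,k+1:n) - A(i,k)*A(k,k+1:n);
    end
end
U = triu(A);                 % the a, a', a'', ... rows
L = tril(A,-1) + eye(n);     % stored factors plus the unit diagonal
end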
The forward-substitution step can be represented concisely as

d1 = b1
di = bi - Σ (j = 1 to i-1) l_ij d_j,          i = 2, 3, ..., n

The back-substitution step can be represented concisely as

xn = dn / u_nn
xi = ( di - Σ (j = i+1 to n) u_ij x_j ) / u_ii,          i = n-1, n-2, ..., 1

The pseudo code subroutine in Figure 6(b) shows an implementation of this substitution step.
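These two formulas translate directly into a minimal MATLAB sketch (lu_solve is again a hypothetical helper name):

% substitution step: solve [L]{d} = {b}, then [U]{x} = {d}
function x = lu_solve(L, U, b)
n = length(b);
d = zeros(n,1);
x = zeros(n,1);
d(1) = b(1);
for i = 2:n                         % forward substitution
    d(i) = b(i) - L(i,1:i-1)*d(1:i-1);
end
x(n) = d(n) / U(n,n);
for i = n-1:-1:1                    % back substitution
    x(i) = (d(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);
end
end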


Figure 6 Subroutines for (a) decomposition and (b) substitution


To summarize
• [A]{x}={b} can be rewritten as [L][U]{x}={b} using LU factorization.
• The LU factorization algorithm requires the same total flops as for Gauss elimination.
• The main advantage is once [A] is decomposed, the same [L] and [U] can be used for
multiple {b} vectors.
• To solve [A]{x}={b}, first decompose [A] to get [L][U]{x}={b}
• Set up and solve [L]{d}={b}, where {d} can be found using forward substitution.
• Set up and solve [U]{x}={d}, where {x} can be found using backward substitution.
• MATLAB’s lu function can be used to generate the [L] and [U] matrices:
[L, U] = lu(A)
• To solve in MATLAB:
[L, U] = lu(A)
d = L\b
x = U\d
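One caveat: when called with two output arguments, MATLAB's lu returns a row-permuted (only “psychologically” lower triangular) [L]. To obtain a strictly triangular [L], request the permutation matrix as well:

[L, U, P] = lu(A)   % P*A = L*U, with L truly lower triangular
d = L \ (P*b)       % forward substitution on the permuted right-hand side
x = U \ d           % back substitution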
For lab
Page 282, Figure 10.2 has pseudo code for the implementation of LU decomposition. This is important for lab.

THE MATRIX INVERSE


If a matrix [A] is square, there is another matrix, [A]^-1, called the inverse of [A], for which

[A][A]^-1 = [A]^-1[A] = [I]
Now we will focus on how the inverse can be computed numerically. Then we will explore how it can
be used for engineering analysis.
Calculating the Inverse
The inverse can be computed in a column-by-column fashion by generating solutions with unit
vectors as the right-hand-side constants. For example, if the right-hand-side constant has a 1 in the
first position and zeros elsewhere,
      { 1 }
{b} = { 0 }
      { 0 }
the resulting solution will be the first column of the matrix inverse. Similarly, if a unit vector with a 1
at the second row is used
      { 0 }
{b} = { 1 }
      { 0 }
the result will be the second column of the matrix inverse.
The best way to implement such a calculation is with the LU decomposition algorithm. Recall that
one of the great strengths of LU decomposition is that it provides a very efficient means to evaluate
multiple right-hand-side vectors. Thus, it is ideal for evaluating the multiple unit vectors needed to
compute the inverse.
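A minimal sketch of this column-by-column computation, reusing the hypothetical naive_lu and lu_solve helpers sketched earlier:

% compute the inverse one column at a time from a single factorization
n = size(A, 1);
Ainv = zeros(n);
[L, U] = naive_lu(A);               % decompose [A] once
for j = 1:n
    e = zeros(n,1);
    e(j) = 1;                       % j-th unit vector as right-hand side
    Ainv(:,j) = lu_solve(L, U, e);  % becomes the j-th column of [A]^-1
end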

1. What is the difference between complete pivoting and partial pivoting?


2. Chapra problems 9.1-9.10
3. Chapra Problems 10.2 – 10.6
4. Chapra examples
a. 9.1 – 9.3
b. 9.5 - 9.6
c. 9.7, 9.8, 9.10
d. 9.9
e. 9.12 (VI)
f. 10.1-10.2
g. 10.3
