
EUM114

ADVANCED ENGINEERING CALCULUS

LINEAR ALGEBRA (LA)

Dr. Mohammad Nishat Akhtar


nishat@usm.my
LESSON PLAN (We Lead)

Ø Matrix analysis and operations
Ø Concept of determinants, rank and inverse matrix
Ø Solution of linear systems using
– Inverse matrix
– Cramer’s rule
– Gauss elimination
– LU (Doolittle, Crout)
Ø Eigenvalues & eigenvectors
Ø Numerical methods for solving linear systems (Gauss-Seidel)
Linear Algebra (LA)

Ø A fairly extensive subject that covers vectors and matrices,
determinants, systems of linear equations, vector spaces and
linear transformations, eigenvalue problems, etc.

Ø Matrices and vectors are the main tools of linear algebra.

Ø Matrices let us express large amounts of data and functions
in an organised and concise form.
Example of matrix application

Suppose that in a weight-watching program, a person of 185 lb
burns 350 cal/hr in walking (3 mph), 500 in bicycling (13 mph),
and 950 in jogging (5.5 mph). Bob, weighing 185 lb, plans to
exercise according to the matrix shown. Verify the calculations
(W = walking, B = bicycling, J = jogging).
Matrix

Matrix addition

• Matrices must have the same size/order.

• Rules for matrix addition:

A + B = B + A (commutative law)

(A + B) + C = A + (B + C) (associative law)

0 denotes the zero matrix (all entries equal to zero), so A + 0 = A.
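The addition rules above can be checked numerically; the matrices here are arbitrary example values, not taken from the slides.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[0, 1], [1, 0]])
Z = np.zeros((2, 2), dtype=int)            # the zero matrix

assert (A + B == B + A).all()              # commutative law
assert ((A + B) + C == A + (B + C)).all()  # associative law
assert (A + Z == A).all()                  # 0 is the additive identity
```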


Scalar multiplication

• The product of a matrix A by a scalar p, written p·A or simply
pA, is the matrix obtained by multiplying each element of A
by p.
Scalar multiplication

Rules for scalar multiplication:

p(A + B) = pA + pB (distributive law)
(p + q)A = pA + qA (distributive law)
Transposition

The transpose of an m × n matrix A is the n × m matrix A^T (read
“A transpose”) that has the first row of A as its first column, the
second row of A as its second column, and so on. Thus the transpose
of the example matrix A is written out as shown.
Transposition

• Rules for transposition:

(A^T)^T = A
(A + B)^T = A^T + B^T
(cA)^T = cA^T
(AB)^T = B^T A^T
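A quick numerical check of the transposition rules; the matrices A, B and the scalar c are arbitrary choices.

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[7, 8, 9], [0, 1, 2]])
c = 5

assert (A.T.T == A).all()              # (A^T)^T = A
assert ((A + B).T == A.T + B.T).all()  # (A + B)^T = A^T + B^T
assert ((c * A).T == c * A.T).all()    # (cA)^T = c A^T
assert ((A @ B.T).T == B @ A.T).all()  # (AB)^T = B^T A^T
```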
Examples

=?

=?

=?
Matrix multiplication

Condition: r = n (B must have as many rows as A has columns).


Matrix multiplication

Rules for matrix multiplication:

(kA)B = k(AB) = A(kB)
A(BC) = (AB)C (associative law)
(A + B)C = AC + BC (distributive law)
C(A + B) = CA + CB (distributive law)
In general, AB ≠ BA (matrix multiplication is NOT commutative).
Special Matrices

ØSymmetric/skew-symmetric matrices
ØTriangular matrices
ØLower triangular matrix
ØUpper triangular matrix
ØDiagonal matrices
ØScalar matrix
ØIdentity/ unit matrix
ØTrace of a matrix
Symmetric/ Skew-Symmetric Matrix

A square matrix A is symmetric if A^T = A, i.e. a_ij = a_ji for all i, j.
A square matrix B (r × r) is skew-symmetric if B^T = −B. That is,
b_ij = −b_ji for all i, j, so every entry on the main diagonal of B is zero.
Triangular matrices

Upper triangular matrices are square matrices that can have nonzero
entries only on and above the main diagonal; any entry below the
diagonal must be zero. Similarly, lower triangular matrices can have
nonzero entries only on and below the main diagonal. Any entry on the
main diagonal of a triangular matrix may or may not be zero.
Diagonal matrices

Diagonal Matrices. These are square matrices that can have nonzero
entries only on the main diagonal. Any entry above or below the main
diagonal must be zero.
Scalar matrices

If all the diagonal entries of a diagonal matrix S are equal, say, c, we
call S a scalar matrix, because multiplication of any square matrix A
of the same size by S has the same effect as multiplication by the
scalar c; that is,

AS = SA = cA
Identity matrix

Identity/ Unit Matrices. A scalar matrix, whose entries on the main


diagonal are all 1, is called a unit matrix (or identity matrix) and is
denoted by In or simply by I.
Trace of a matrix

Trace of a matrix: the sum of the diagonal elements.

Example:

Let A = [  1  2  3
          -4 -4 -4
           5  6  7 ]

So,

Diagonal of A = [1, -4, 7] and tr(A) = 1 - 4 + 7 = 4
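The trace example above, checked numerically:

```python
import numpy as np

A = np.array([[ 1,  2,  3],
              [-4, -4, -4],
              [ 5,  6,  7]])

assert list(np.diag(A)) == [1, -4, 7]  # the main diagonal
assert np.trace(A) == 4                # 1 - 4 + 7 = 4
```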


Determinants

The matrix must be square (same number of rows and columns).


Determinants

Cofactor expansion of a matrix (along row i):

det(A) = Σ_j a_ij C_ij, where C_ij = (−1)^(i+j) M_ij
(M_ij, the minor, is the determinant of the submatrix obtained by
deleting row i and column j of A).
Examples

1. A = [ 0  2 -1
         4  3  5
         2  0 -4 ]

2. A = [  2  0  3
         -1  4 -2
          1 -3  5 ]
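The two example determinants can be checked with NumPy (expansion along the first row gives the same values):

```python
import numpy as np

A1 = np.array([[0, 2, -1],
               [4, 3,  5],
               [2, 0, -4]])
A2 = np.array([[ 2,  0,  3],
               [-1,  4, -2],
               [ 1, -3,  5]])

print(round(np.linalg.det(A1)))  # 58
print(round(np.linalg.det(A2)))  # 25
```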
Solution of linear systems

ØInverse matrix
ØCramer’s rule
ØGauss elimination
ØLU (Doolittle, Crout)
Inverse of a Matrix

Example: Determine the inverse of

A= if it exists.
Inverse of a Matrix

For larger matrices (3×3, 4×4, etc.), we can
find the inverse using:
1. Minors, cofactors and adjugate/adjoint
2. Elementary row operations (Gauss-Jordan)
Inverse of a Matrix – Minors, cofactors & adjugate

Determinants for the first 2 elements in the first row

Determinants for the last 2 elements in the last row
Inverse of a Matrix – Gauss-Jordan method
Elementary Row Operations on a Matrix

1. R_ij: interchange of the ith and the jth rows.
2. R_i(k): multiplication of every element of the ith row
by a non-zero scalar k.
3. R_ij(k): addition to the elements of the ith row of k
times the corresponding elements of the jth row.
Inverse of a Matrix – Gauss-Jordan method

Example: Determine the inverse of

A=
Inverse of a Matrix – Gauss-Jordan method

Turn A into I (elementary row operations):


• swap rows
• multiply or divide each element in a row by a constant
• replace a row by adding or subtracting a multiple of another row
to it
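The Gauss-Jordan procedure (augment A with I and reduce [A | I] to [I | A^-1] by elementary row operations) can be sketched as a small routine; the example matrix is an assumption, not one from the slides.

```python
import numpy as np

def gauss_jordan_inverse(A):
    n = len(A)
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A | I]
    for i in range(n):
        p = i + np.argmax(np.abs(M[i:, i]))      # partial pivoting: pick largest pivot
        M[[i, p]] = M[[p, i]]                    # R_ij: swap rows
        M[i] /= M[i, i]                          # R_i(k): scale pivot row so pivot = 1
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]           # R_ij(k): clear the rest of the column
    return M[:, n:]                              # right half is now A^-1

A = np.array([[1, 2], [3, 4]])
assert np.allclose(gauss_jordan_inverse(A) @ A, np.eye(2))
```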
Properties of Inverse Matrix

1. A^-1 exists only if det(A) ≠ 0 [if det(A) = 0, A is singular]
2. (AB)^-1 = B^-1 A^-1
3. If D is a diagonal matrix with diagonal elements d_ii, then D^-1 is a
diagonal matrix with diagonal elements 1/d_ii
4. (A^-1)^T = (A^T)^-1
5. (A^-1)^-1 = A
6. If A is n × n, then A A^-1 = A^-1 A = I
Inverse of a Matrix – Exercises

Determine the inverse of the matrix

(a) (c)

(b)
(d)
Row Echelon Form (ref)

A matrix is in row echelon form (ref) when it satisfies the following
conditions:
q The first non-zero element in each row, called the leading entry, is 1.
q Each leading entry is in a column to the right of the leading entry in the
previous row.
q Rows with all zero elements, if any, are below rows having a non-zero
element.

Each of the matrices shown below is an example of a matrix in row echelon form.
Reduced Row Echelon Form (rref)

A matrix is in reduced row echelon form (rref) when it satisfies the following
conditions:

q The matrix satisfies the conditions for row echelon form.
q The leading entry in each row is the only non-zero entry in its column.

Each of the matrices shown below is an example of a matrix in reduced row echelon form.
Transforming a matrix into ref/ rref

Using a series of elementary row operations:

1. Pivot the matrix
• Find the pivot/leading entry, the first non-zero entry in the first column
of the matrix.
• Interchange rows, moving the pivot row to the first row.
• Multiply each element in the pivot row by the inverse of the pivot, so the
pivot equals 1.
• Add multiples of the pivot row to each of the lower rows, so every element
in the pivot column of the lower rows equals 0.

2. To get the matrix in row echelon form, repeat the pivot procedure
• Repeat the procedure from Step 1 above, ignoring previous pivot rows.
• Continue until there are no more pivots to be processed.
Transforming a matrix into ref/ rref

3. To get the matrix in reduced row echelon form, process non-zero entries
above each pivot.
• Identify the last row having a pivot equal to 1, and let this be the pivot
row.
• Add multiples of the pivot row to each of the upper rows, until every
element above the pivot equals 0.
• Moving up the matrix, repeat this process for each row.

Worked example (row operations applied in order):
R1, R2, R3 = -2R1 + R3, R3 = -3R2 + R3, R1 = -2R2 + R1
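The pivoting steps above can be sketched as a small reduction routine; the example matrix is a hypothetical one, not the slide's.

```python
import numpy as np

def rref(M):
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for c in range(cols):
        nz = np.nonzero(M[pivot_row:, c])[0]   # step 1: find a pivot in this column
        if len(nz) == 0:
            continue
        r = pivot_row + nz[0]
        M[[pivot_row, r]] = M[[r, pivot_row]]  # interchange rows (move pivot up)
        M[pivot_row] /= M[pivot_row, c]        # scale so the pivot equals 1
        for r2 in range(rows):                 # clear the column above AND below
            if r2 != pivot_row:
                M[r2] -= M[r2, c] * M[pivot_row]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

M = np.array([[1, 2, 1], [2, 4, 0], [3, 6, 2]])
R = rref(M)
```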
Rank of a Matrix

Definition 1:
The rank of matrix A is the maximum number of linearly independent
row vectors of A. It is denoted by rank A or r(A).

For an r × c matrix:

q If r < c, then the max rank of the matrix is r

q If r > c, then the max rank of the matrix is c

• The rank of a matrix is zero only if the matrix is a zero
matrix (all elements are zero).
Rank of a matrix

Using Definition 1:

The maximum number of linearly independent row vectors in a matrix is
equal to the number of non-zero rows in its row echelon form.
Therefore, to find the rank of a matrix, we simply transform the
matrix to its row echelon form and count the number of non-zero
rows.

Use elementary row operations to transform the matrix into row
echelon form.
Example: Rank of a matrix by using row echelon form

r(A) = 2
2 non-zero rows in its row echelon matrix
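The slide's example matrix is not reproduced in the text, so a hypothetical rank-2 matrix is used here (row 3 = row 1 + row 2):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [5, 7, 9]])   # third row is the sum of the first two

print(np.linalg.matrix_rank(A))  # 2
```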
Rank of a matrix

Definition 2:

The rank of A is the order of the largest square submatrix of A with a
non-zero determinant, where a square submatrix is formed by deleting
rows and columns of A.

Use determinants.
System of Linear Equations

A system of m linear equations in n unknowns x1, x2, . . . , xn
is a set of equations of the form

a11 x1 + a12 x2 + · · · + a1n xn = b1
a21 x1 + a22 x2 + · · · + a2n xn = b2
. . .
am1 x1 + am2 x2 + · · · + amn xn = bm

where for 1 ≤ i ≤ m and 1 ≤ j ≤ n; aij, bi ∈ R.

The above linear system is called homogeneous if
b1 = b2 = · · · = bm = 0, and non-homogeneous otherwise.
System of Linear Equations

We rewrite the previous equations in the form Ax = b.
The matrix A is called the coefficient matrix, and the block matrix [A | b]
is called the augmented matrix of the linear system.

For a system of linear equations Ax = b, the system Ax = 0 is called the
associated homogeneous system.

A set of values x1, x2, . . . , xn which satisfies the above system is called a solution of
the system. The system of equations is said to be inconsistent if it has no solution.
System of Linear Equations

A system of linear equations is either:
• Inconsistent → No Solution
• Consistent → Unique Solution OR Infinitely Many Solutions
Examples

The linear system
x + y = 1
x + y = 0
has no solution, since no values of x and y can satisfy both equations.

The linear system
x + y = 1
x + 2y = 2
has the single solution x = 0 and y = 1.

And the linear system
x + 2y = 1
2x + 4y = 2
has infinitely many solutions: if y = α then x = 1 − 2α.
Solving Linear Equations

ØInverse matrix
ØCramer’s rule
ØGauss elimination
ØLU
ØDoolittle
ØCrout
Solving Linear Equation – Inverse Matrix

Solve 5x = 15:   x = 15/5 = 3

Solve Ax = b:    x = b/A ???  ✗ (division by a matrix is not defined)
                 x = A^-1 b

We can obtain the inverse of A by:

1. Using the matrix of minors, cofactors and adjoint.
2. Using the Gauss-Jordan method.
Example

Solve the following system of equations using the inverse matrix:

x + 2y = 150
3x + 4y = 250

Solution:
The above system can be written as Ax = b with

A = [ 1 2      x = [ x      b = [ 150
      3 4 ],         y ],         250 ]

[ 1 2 ] [ x ]   [ 150 ]
[ 3 4 ] [ y ] = [ 250 ]
Example

By using the Gauss-Jordan method, find A^-1:

A^-1 = [ -2    1
         3/2  -1/2 ]

Then,

x = A^-1 b = [ -2    1   ] [ 150 ]   [ -50 ]
             [ 3/2  -1/2 ] [ 250 ] = [ 100 ]

Hence, x = -50 and y = 100.
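The worked example, checked numerically:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
b = np.array([150, 250])

x = np.linalg.inv(A) @ b           # x = A^-1 b
assert np.allclose(x, [-50, 100])  # x = -50, y = 100
```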


Solving Linear Equation – Cramer’s Rule

Cramer’s rule states that if Ax = b, where A is invertible, then
each component xi of x may be computed as the ratio of two
determinants: the denominator is det(A), and the numerator is
the determinant of the matrix A with its ith column
replaced by b.

It uses determinants to find the solution of the system.
Solving Linear Equation – Cramer’s Rule

If
a11 x + a12 y + a13 z = b1
a21 x + a22 y + a23 z = b2
a31 x + a32 y + a33 z = b3

then
x = det(Ax)/det(A),  y = det(Ay)/det(A),  z = det(Az)/det(A)

where

det(A)  = | a11 a12 a13 |     det(Ay) = | a11 b1 a13 |
          | a21 a22 a23 |               | a21 b2 a23 |
          | a31 a32 a33 |               | a31 b3 a33 |

det(Ax) = | b1 a12 a13 |      det(Az) = | a11 a12 b1 |
          | b2 a22 a23 |                | a21 a22 b2 |
          | b3 a32 a33 |                | a31 a32 b3 |

Replace the corresponding column with the b column.
Solving Linear Equation – Cramer’s Rule

Solve the following system of linear equations by Cramer’s rule:

x1 + 3x2 + x3 = -2
2x1 + 5x2 + x3 = -5
x1 + 2x2 + 3x3 = 6

Solution:
The coefficient matrix A and the vector b are

A = [ 1 3 1      b = [ -2
      2 5 1 ],         -5
      1 2 3 ]           6 ]
Solving Linear Equation – Cramer’s Rule

then,

A1 = [ -2 3 1     A2 = [ 1 -2 1     A3 = [ 1 3 -2
       -5 5 1            2 -5 1            2 5 -5
        6 2 3 ],         1  6 3 ],         1 2  6 ]

det(A) = -3

thus,
x1 = det(A1)/det(A) = -3/-3 = 1
x2 = det(A2)/det(A) = 6/-3 = -2
x3 = det(A3)/det(A) = -9/-3 = 3
Exercise

Solve the following simultaneous equations using Cramer’s rule

(1)

(2)

(3)
Gauss Elimination/ Gauss-Jordan

The Gaussian elimination method is a procedure for solving a linear
system Ax = b (consisting of m equations in n unknowns) by bringing
the augmented matrix

to an upper triangular (row echelon) form

by application of elementary row operations, and then solving by
back substitution.
Gauss Elimination/ Gauss-Jordan

Elementary row operations for matrices:

1. Interchange of 2 rows.
2. Addition of a constant multiple of one
row to another row.
3. Multiplication of a row by a nonzero
constant.
Gauss Elimination

Gauss elimination ends in one of three outcomes:
• Unique solution
• No solution
• Infinitely many solutions
Gauss Elimination

Example:

Kirchhoff’s Current Law (KCL): At any point of a circuit, the sum of the inflowing currents
equals the sum of the outflowing currents.

Kirchhoff’s Voltage Law (KVL): In any closed loop, the sum of all voltage drops equals the
impressed electromotive force.
Gauss Elimination

Step 1. Elimination of x1
Call the first row of A the pivot row and the first equation the pivot equation. Call the
coefficient 1 of its x1-term the pivot in this step. Use this equation to eliminate (get rid of)
x1 in the other equations.
Gauss Elimination

Step 2. Elimination of x2
The first equation remains as it is. We want the new second equation to serve as the next
pivot equation. But since it has no x2-term, we must first change the order of the equations
and the corresponding rows of the new matrix. We put it at the end and move the third
equation and the fourth equation one place up. This is called partial pivoting (as opposed
to the rarely used total pivoting, in which the order of the unknowns is also changed).
It gives
Gauss Elimination

Unique solution
Gauss Elimination

Step 1. Elimination of x1
Gauss Elimination

Step 2. Elimination of x2

False statement 0 = 12, no solution


Gauss Elimination

Step 1. Elimination of x1
Gauss Elimination

Step 2. Elimination of x2

Back Substitution
The second equation,

From both equations we get ,

Since x3 and x4 remain arbitrary, we have infinitely many solutions. If we choose a value of x3
and a value of x4, then the corresponding values of x1 and x2 are uniquely determined.
Gauss-Jordan Elimination

The Gauss-Jordan method consists of first applying the Gauss
elimination method to the augmented matrix [A | b], and then
applying further elementary row operations to obtain the reduced
row echelon form.
ð We aim to get the identity matrix when we reduce the matrix.

Exercise:
Determine the solution of the following system using:
1) Gauss elimination
2) Gauss-Jordan elimination

x1 + x2 – x3 = 1
3x1 + x2 + x3 = 9
x1 - x2 + 4x3 = 8
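A sketch of Gauss elimination with back substitution, applied to the exercise system above:

```python
import numpy as np

A = np.array([[1.0,  1, -1],
              [3,    1,  1],
              [1,   -1,  4]])
b = np.array([1.0, 9, 8])

M = np.hstack([A, b[:, None]])      # augmented matrix [A | b]
n = len(b)
for i in range(n):                  # forward elimination to upper triangular form
    for r in range(i + 1, n):
        M[r] -= (M[r, i] / M[i, i]) * M[i]

x = np.zeros(n)
for i in range(n - 1, -1, -1):      # back substitution
    x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]

assert np.allclose(x, [3, -1, 1])   # x1 = 3, x2 = -1, x3 = 1
```

Gauss-Jordan would continue the reduction to rref, reading the same solution directly off the last column.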
LU – Factorization/ Decomposition

An n × n matrix A is said to have an LU-factorization if there exist matrices L and U
such that A = LU.

We will cover 2 methods:

1. Doolittle
q L is a lower triangular matrix with all diagonal entries being 1
q U is an upper triangular matrix (a row echelon form of A)
2. Crout
q U is an upper triangular matrix with all diagonal entries being 1
q L is a lower triangular matrix

Suppose we want to solve a system AX = b.

If we can find an LU-factorization of A, then to solve AX = b it is enough to solve
the systems
LY = b
UX = Y

The system LY = b is solved by the method of forward substitution, and
UX = Y is solved by the method of backward substitution.
LU – Factorization/ Decomposition

A = LU – Doolittle’s factorization: require that L have main diagonal 1, …, 1.

[ a11 a12 a13 ]   [ 1   0   0 ] [ u11 u12 u13 ]
[ a21 a22 a23 ] = [ l21 1   0 ] [ 0   u22 u23 ]
[ a31 a32 a33 ]   [ l31 l32 1 ] [ 0   0   u33 ]

A = LU – Crout’s factorization: require that U have main diagonal 1, …, 1.

[ a11 a12 a13 ]   [ l11 0   0   ] [ 1 u12 u13 ]
[ a21 a22 a23 ] = [ l21 l22 0   ] [ 0 1   u23 ]
[ a31 a32 a33 ]   [ l31 l32 l33 ] [ 0 0   1   ]
LU – Factorization/ Decomposition

The solution X to the linear system AX = B is found in
three steps:

1. Construct the matrices L and U

2. Solve LY = B for Y using forward substitution

3. Solve UX = Y for X using back substitution
LU – Factorization/ Decomposition

Example:
Solve the following system of linear equations by using:
a) Doolittle’s method
b) Crout’s method

3x1 + 5x2 + 2x3 = 8
8x2 + 2x3 = -7
6x1 + 2x2 + 8x3 = 26
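A sketch of Doolittle's method (L unit lower triangular) with forward and back substitution, applied to the example system:

```python
import numpy as np

A = np.array([[3.0, 5, 2],
              [0,   8, 2],
              [6,   2, 8]])
b = np.array([8.0, -7, 26])

n = len(b)
L, U = np.eye(n), np.zeros((n, n))
for i in range(n):
    for j in range(i, n):                       # row i of U
        U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
    for j in range(i + 1, n):                   # column i of L (diagonal stays 1)
        L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]

y = np.zeros(n)
for i in range(n):                              # forward substitution: L Y = b
    y[i] = b[i] - L[i, :i] @ y[:i]
x = np.zeros(n)
for i in range(n - 1, -1, -1):                  # back substitution: U X = Y
    x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

assert np.allclose(L @ U, A)
assert np.allclose(x, [4, -1, 0.5])             # x1 = 4, x2 = -1, x3 = 1/2
```

Crout's method is analogous, with the unit diagonal placed on U instead of L.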
Eigenvalue and eigenvector

Let A = [aij] be a real square matrix of order n.
If there exists a non-zero column vector x and a scalar λ such that

A x = λ x        (1)

then λ is called an eigenvalue (or characteristic value) of A, and
x is called the eigenvector (or characteristic vector) corresponding to λ.

Now, (1) can also be expressed as a set of separate scalar equations.
Eigenvalue and eigenvector

Bringing the RHS terms to the LHS, this simplifies to:

A x = λ x becomes
A x − λ x = 0

By introducing the n × n unit matrix I, we then have

(A − λI) x = 0
Eigenvalue and eigenvector

The set of all solutions to (A − λI) x = 0 (i.e. the null space of A − λI)
is called the eigenspace of A corresponding to λ.

For this set of homogeneous linear equations to have a non-trivial solution,
det(A − λI) must be zero.

det(A − λI) ð is called the characteristic determinant of A.

The determinant det(A − λI) is a polynomial of degree n in λ and is also called
the characteristic polynomial of A.

det(A − λI) = 0 ð is called the characteristic equation of A.
Eigenvalue and eigenvector

Schaum’s Outline of Theory and Problems of Matrix Operations (Richard Bronson),
McGraw-Hill, p. 60
Eigenvalue and eigenvector

The process of finding eigenvalues and eigenvectors:

To solve for the eigenvalues, λ, and the corresponding eigenvectors,
x, of an n × n matrix A, do the following:
1. Multiply an n × n identity matrix by the scalar λ.
2. Subtract this multiple of the identity matrix from the matrix A.
3. Find the determinant of the difference, A − λI.
4. Solve for the values of λ that satisfy the equation det(A − λI) = 0.
5. Solve for the corresponding eigenvector for each λ.
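The slide's example matrices are not reproduced in the text, so the five steps are illustrated here on a hypothetical 2×2 matrix:

```python
import numpy as np

A = np.array([[4.0, 1],
              [2,   3]])

# det(A - lambda*I) = lambda^2 - 7*lambda + 10 = (lambda - 5)(lambda - 2),
# so the eigenvalues are 5 and 2.
vals, vecs = np.linalg.eig(A)
assert np.allclose(sorted(vals), [2, 5])

# Each eigenvector satisfies A x = lambda x.
for lam, x in zip(vals, vecs.T):
    assert np.allclose(A @ x, lam * x)
```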
Eigenvalue and eigenvector

Example:
Find the eigenvalues of the following matrices:

(1)

(2)

(3)
Eigenvalue and eigenvector

Eigenvector
Each eigenvalue obtained corresponds to a solution x
called an eigenvector.
In matrices, the term “vector” indicates a row matrix OR a column matrix.

Example:
Find the eigenvectors of the following matrices:

(1)

(2)
Eigenvalue and eigenvector

Exercise:
Determine the eigenvalues and corresponding eigenvectors of the
following matrices

(1)

(2)

(3)
Eigenvalue and eigenvector

Exercise:
Linear System: Solution by Iteration

What is Gauss-Seidel?

Why do we want to use it?


Gauss-Seidel Iteration

Consider,
Gauss-Seidel Iteration

Repeat using the new x’s,

where k = current iteration, k−1 = previous iteration.
Gauss-Seidel Iteration

Gauss-Seidel – Convergence criteria

Gauss-Seidel is an iterative method which gives an approximate solution by
successive approximation.

This method will always converge if the system is diagonally dominant.
An n × n matrix A is diagonally dominant if the absolute value of each entry on
the main diagonal is greater than the sum of the absolute values of the other
entries in the same row. That is,

|aii| > Σ_{j≠i} |aij|,  for each i = 1, …, n
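The iteration can be sketched as follows; this diagonally dominant system and its solution are assumptions for illustration, not the slide's example.

```python
import numpy as np

A = np.array([[12.0, 3, -5],
              [1,    5,  3],
              [3,    7, 13]])
b = np.array([1.0, 28, 76])

# Diagonal dominance: |a_ii| > sum of |a_ij| over j != i, for every row.
assert all(abs(A[i, i]) > sum(abs(A[i])) - abs(A[i, i]) for i in range(3))

x = np.zeros(3)                  # initial guess
for k in range(50):              # each sweep immediately reuses the newest x values
    for i in range(3):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]

assert np.allclose(x, [1, 3, 4])  # converges to the exact solution
```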
Gauss-Seidel Iteration

Example:
Which of the following systems of linear equations has a diagonally
dominant coefficient matrix?
Gauss-Seidel Iteration

Example (1):
Find the solution to the following system of equations using the Gauss-
Seidel method

Use as the initial guess


Gauss-Seidel Iteration

Solution:

Hence, the solution should converge using the Gauss-Seidel method.


Rewriting the equations,

Assuming an initial guess of


Gauss-Seidel Iteration

The absolute relative approximate error for each xi,

|ea|i = | (xi_new − xi_old) / xi_new | × 100%

The maximum absolute relative approximate error is 100.00%


Gauss-Seidel Iteration

The absolute relative approximate error for each xi,

The maximum absolute relative approximate error is 240.61%. This is
greater than the value of 100.00% we obtained in the first iteration. Is the
solution diverging? No; as you conduct more iterations, the solution
converges as follows.
Gauss-Seidel Iteration

This is close to the exact solution vector of
