Matrices and vectors are the main tools of linear algebra.
A + B = B + A (commutative law)
(A + B) + C = A + (B + C) (associative law)
c(A + B) = cA + cB (distributive law)
Transposition
Matrix multiplication
• Symmetric/skew-symmetric matrices
• Triangular matrices
• Lower triangular matrix
• Upper triangular matrix
• Diagonal matrices
• Scalar matrix
• Identity/unit matrix
• Trace of a matrix
Symmetric/Skew-Symmetric Matrix
A square matrix A is symmetric if Aᵀ = A and skew-symmetric if Aᵀ = −A.
Example
Diagonal Matrices. These are square matrices that can have nonzero
entries only on the main diagonal. Any entry above or below the main
diagonal must be zero.
Scalar matrices
A scalar matrix is a diagonal matrix whose diagonal entries are all equal; it has the form cI.
Example:
Let
$$A = \begin{pmatrix} 1 & 2 & 3 \\ -4 & -4 & -4 \\ 5 & 6 & 7 \end{pmatrix}$$
1. $A = \begin{pmatrix} 0 & 2 & -1 \\ 4 & 3 & 5 \\ 2 & 0 & -4 \end{pmatrix}$
2. $A = \begin{pmatrix} 2 & 0 & 3 \\ -1 & 4 & -2 \\ 1 & -3 & 5 \end{pmatrix}$
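Every square matrix splits into a symmetric plus a skew-symmetric part, A = ½(A + Aᵀ) + ½(A − Aᵀ). A minimal sketch (NumPy assumed available, not part of the slides), using the matrix A from the example above:

```python
import numpy as np

# Matrix A from the example above
A = np.array([[1, 2, 3],
              [-4, -4, -4],
              [5, 6, 7]], dtype=float)

# Symmetric part: S = (A + A^T) / 2, so S^T == S
S = (A + A.T) / 2
# Skew-symmetric part: K = (A - A^T) / 2, so K^T == -K
K = (A - A.T) / 2

assert np.allclose(S, S.T)     # S is symmetric
assert np.allclose(K, -K.T)    # K is skew-symmetric
assert np.allclose(S + K, A)   # they recombine to A
```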
Determinants
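The determinant can be computed by cofactor (Laplace) expansion along the first row. A minimal sketch in plain Python; the 3 × 3 matrix chosen here is the coefficient matrix that reappears in the Cramer's rule example later in these notes:

```python
def det_cofactor(M):
    """Determinant by cofactor (Laplace) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        # Minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_cofactor(minor)
    return total

A = [[1, 3, 1], [2, 5, 1], [1, 2, 3]]
print(det_cofactor(A))   # -3.0
```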
Solution of linear systems
• Inverse matrix
• Cramer's rule
• Gauss elimination
• LU (Doolittle, Crout)
Inverse of a Matrix
The inverse of an n × n matrix A, denoted A⁻¹, is the matrix satisfying AA⁻¹ = A⁻¹A = I, if it exists.
Inverse of a Matrix – Gauss-Jordan method
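The Gauss-Jordan method can be sketched as code: row-reduce the augmented matrix [A | I] until the left block becomes I, at which point the right block is A⁻¹. A minimal float implementation (NumPy assumed; the 2 × 2 matrix is the one used in the inverse-matrix example later in these notes):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = len(A)
    M = np.hstack([np.array(A, dtype=float), np.eye(n)])
    for i in range(n):
        # Partial pivoting: swap in the row with the largest pivot candidate
        p = i + np.argmax(np.abs(M[i:, i]))
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]                   # scale the pivot row so the pivot is 1
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]    # clear the rest of the column
    return M[:, n:]

A = [[1, 2], [3, 4]]
print(inverse_gauss_jordan(A))   # [[-2, 1], [1.5, -0.5]]
```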
Row Echelon Form (ref)
A matrix is in row echelon form (ref) when:
• all nonzero rows lie above every all-zero row, and
• the leading (pivot) entry of each nonzero row lies in a column to the right of the leading entry of the row above it.
Reduced Row Echelon Form (rref)
A matrix is in reduced row echelon form (rref) when, in addition to the ref conditions:
• the leading entry of every nonzero row is 1, and
• each leading 1 is the only nonzero entry in its column.
Transforming a matrix into ref/rref
3. To get the matrix in reduced row echelon form, clear the nonzero entries
above each pivot:
• Identify the last row having a pivot equal to 1, and let this be the pivot
row.
• Add multiples of the pivot row to each of the upper rows, until every
element above the pivot equals 0.
• Moving up the matrix, repeat this process for each row.
Worked example (row operations): R3 ← −2R1 + R3, then R3 ← −3R2 + R3, then R1 ← −2R2 + R1.
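The procedure above can be sketched as code: forward elimination gives ref, and clearing the entries above each pivot gives rref. A simplified float implementation with partial pivoting (the 3 × 4 augmented matrix is a hypothetical example, not from the slides):

```python
import numpy as np

def rref(A):
    """Reduce A to reduced row echelon form with partial pivoting."""
    M = np.array(A, dtype=float)
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        p = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[p, c]) < 1e-12:
            continue                      # no pivot in this column
        M[[r, p]] = M[[p, r]]             # swap the pivot row into place
        M[r] /= M[r, c]                   # make the pivot 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]    # zero out the rest of the column
        r += 1
    return M

# Hypothetical augmented matrix for illustration
A = [[1, 2, 1, 4],
     [2, 5, 1, 9],
     [3, 6, 4, 15]]
print(rref(A))   # rows [1,0,0,-7], [0,1,0,4], [0,0,1,3]
```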
Rank of a Matrix
Definition 1:
The rank of a matrix A is the maximum number of linearly independent row vectors of A. It is denoted rank A or r(A).
For an r × c matrix, the rank equals the number of nonzero rows in its row echelon form; for instance, r(A) = 2 when there are 2 nonzero rows in its row echelon matrix.
Rank of a matrix
Definition 2:
Using determinants: the rank of A is the order of the largest square submatrix of A whose determinant is nonzero.
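Both definitions agree with what NumPy computes. A small sketch with a hypothetical matrix whose second row is twice the first (so only two rows are linearly independent):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6],    # 2 x row 1, so linearly dependent
              [1, 0, 1]], dtype=float)

# Definition 1 in practice: the number of nonzero rows in an echelon form;
# numpy.linalg.matrix_rank computes the same quantity numerically (via SVD)
print(np.linalg.matrix_rank(A))   # 2
```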
System of Linear Equations
The matrix A is called the coefficient matrix, and the block matrix [A b] is called the augmented matrix of the linear system.
A system of linear equations is either:
• Inconsistent: no solution, or
• Consistent: either a unique solution or infinitely many solutions.
Examples
• Inverse matrix
• Cramer's rule
• Gauss elimination
• LU: Doolittle, Crout
Solving Linear Equation – Inverse Matrix
Solve 5x = 15: x = 15/5 = 3.
Solve Ax = b: writing x = b/A is not defined for matrices; instead, x = A⁻¹b.
Solution:
The above system of equations can be written as Ax = b with
$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad x = \begin{pmatrix} x \\ y \end{pmatrix}, \quad b = \begin{pmatrix} 150 \\ 250 \end{pmatrix},$$
that is,
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 150 \\ 250 \end{pmatrix}.$$
Example
$$x = \begin{pmatrix} x \\ y \end{pmatrix} = A^{-1}b = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix} \begin{pmatrix} 150 \\ 250 \end{pmatrix} = \begin{pmatrix} -50 \\ 100 \end{pmatrix},$$
so x = −50 and y = 100.
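The worked example can be checked in NumPy (a sketch, not part of the slides):

```python
import numpy as np

# System from the example: x + 2y = 150, 3x + 4y = 250
A = np.array([[1, 2], [3, 4]], dtype=float)
b = np.array([150, 250], dtype=float)

x = np.linalg.inv(A) @ b   # x = A^-1 b, as on the slide
print(x)                   # [-50. 100.]

# In practice np.linalg.solve is preferred: it avoids forming A^-1 explicitly
assert np.allclose(np.linalg.solve(A, b), x)
```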
If
$$a_{11}x + a_{12}y + a_{13}z = b_1$$
$$a_{21}x + a_{22}y + a_{23}z = b_2$$
$$a_{31}x + a_{32}y + a_{33}z = b_3,$$
then
$$x = \frac{\det(A_1)}{\det(A)}, \quad y = \frac{\det(A_2)}{\det(A)}, \quad z = \frac{\det(A_3)}{\det(A)},$$
where Aᵢ is A with its i-th column replaced by b.
Solution:
The coefficient matrix A and the vector b are
$$A = \begin{pmatrix} 1 & 3 & 1 \\ 2 & 5 & 1 \\ 1 & 2 & 3 \end{pmatrix}, \quad b = \begin{pmatrix} -2 \\ -5 \\ 6 \end{pmatrix}.$$
Solving Linear Equation – Cramer's Rule
then
$$A_1 = \begin{pmatrix} -2 & 3 & 1 \\ -5 & 5 & 1 \\ 6 & 2 & 3 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1 & -2 & 1 \\ 2 & -5 & 1 \\ 1 & 6 & 3 \end{pmatrix}, \quad A_3 = \begin{pmatrix} 1 & 3 & -2 \\ 2 & 5 & -5 \\ 1 & 2 & 6 \end{pmatrix},$$
and det(A) = −3,
thus,
$$x_1 = \frac{\det(A_1)}{\det(A)} = \frac{-3}{-3} = 1, \quad x_2 = \frac{\det(A_2)}{\det(A)} = \frac{6}{-3} = -2, \quad x_3 = \frac{\det(A_3)}{\det(A)} = \frac{-9}{-3} = 3.$$
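Cramer's rule as stated above, applied to the same system; a minimal NumPy implementation (a sketch, not the slides' own code):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

# The worked example above
A = [[1, 3, 1], [2, 5, 1], [1, 2, 3]]
b = [-2, -5, 6]
print(cramer(A, b))   # [ 1. -2.  3.]
```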
Exercise
Gauss Elimination/Gauss-Jordan
Gauss Elimination
Example:
Kirchhoff's Current Law (KCL). At any point of a circuit, the sum of the inflowing currents
equals the sum of the outflowing currents.
Kirchhoff's Voltage Law (KVL). In any closed loop, the sum of all voltage drops equals the
impressed electromotive force.
Gauss Elimination
Step 1. Elimination of x1
Call the first row of A the pivot row and the first equation the pivot equation. Call the
coefficient 1 of its x1-term the pivot in this step. Use this equation to eliminate (get rid of) x1
in the other equations.
Gauss Elimination
Step 2. Elimination of x2
The first equation remains as it is. We want the new second equation to serve as the next
pivot equation. But since it has no x2-term (in fact, it is 0 = 0), we must first change the order of
the equations and the corresponding rows of the new matrix. We put 0 = 0 at the end and move
the third equation and the fourth equation one place up. This is called partial pivoting (as
opposed to the rarely used total pivoting, in which the order of the unknowns is also
changed). It gives
Gauss Elimination
Unique solution
Gauss Elimination
Step 1. Elimination of x1
Gauss Elimination
Step 2. Elimination of x2
Step 1. Elimination of x1
Gauss Elimination
Step 2. Elimination of x2
Back Substitution
The second equation expresses x2 in terms of x3 and x4. Since x3 and x4 remain arbitrary, we have
infinitely many solutions. If we choose a value of x3 and a value of x4, then the corresponding
values of x1 and x2 are uniquely determined.
Gauss-Jordan Elimination
Exercise:
Determine the solution of the following system using:
1) Gauss elimination
2) Gauss-Jordan elimination
x1 + x2 − x3 = 1
3x1 + x2 + x3 = 9
x1 − x2 + 4x3 = 8
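The elimination steps described above can be sketched as code, applied to the exercise system. One possible implementation with partial pivoting and back substitution (NumPy assumed; not the slides' own code):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))            # partial pivoting
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]    # swap pivot row up
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                      # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                     # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The exercise system above
A = [[1, 1, -1], [3, 1, 1], [1, -1, 4]]
b = [1, 9, 8]
print(gauss_eliminate(A, b))   # [ 3. -1.  1.]
```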
LU – Factorization/Decomposition
1. Doolittle
• L is an m × n lower triangular matrix with all diagonal entries equal to 1
• U is an m × n matrix in some echelon form
2. Crout
• U is an m × n upper triangular matrix with all diagonal entries equal to 1
• L is an m × n matrix in some echelon form
Thus the system LY = b can be solved by forward substitution, and then UX = Y can be solved by backward substitution.
LU – Factorization/Decomposition
Example:
Solve the following system of linear equations by using:
a) Doolittle’s method
b) Crout’s method
3x1 + 5x2+2x3 = 8
8x2 + 2x3 = -7
6x1 + 2x2 + 8x3 = 26
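Doolittle's method for the example system can be sketched as follows: factor A = LU with unit diagonal on L, then solve LY = b by forward substitution and UX = Y by backward substitution. This sketch omits pivoting, so it assumes nonzero pivots, which holds for this example (NumPy assumed):

```python
import numpy as np

def doolittle_solve(A, b):
    """Doolittle LU (unit diagonal on L, no pivoting), then LY = b and UX = Y."""
    A = np.array(A, dtype=float)
    n = len(A)
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                          # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):                      # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    y = np.empty(n)
    for i in range(n):                                 # forward: LY = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                     # backward: UX = Y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# The example system above
A = [[3, 5, 2], [0, 8, 2], [6, 2, 8]]
b = [8, -7, 26]
print(doolittle_solve(A, b))   # x = [4, -1, 0.5]
```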
Eigenvalue and eigenvector
$$A x = \lambda x \qquad (1)$$
Then λ is called an eigenvalue (or characteristic value) of A, and x is called an eigenvector (or characteristic vector) corresponding to λ.
Ax = λx becomes
$$A x - \lambda x = 0.$$
By introducing the n × n unit matrix I, we then have
$$(A - \lambda I)x = 0.$$
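Eigenvalues are the roots of the characteristic equation det(A − λI) = 0; numerically, NumPy computes them directly. A sketch with a hypothetical 2 × 2 matrix (not one of the slides' examples):

```python
import numpy as np

# Hypothetical matrix for illustration
A = np.array([[4, 1],
              [2, 3]], dtype=float)

# Characteristic equation: det(A - lambda*I) = (4-l)(3-l) - 2
#                        = l^2 - 7l + 10 = 0  ->  l = 2, 5
vals, vecs = np.linalg.eig(A)
print(sorted(vals))   # [2.0, 5.0]

# Each column of vecs is an eigenvector: it satisfies (A - lambda*I)x = 0
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```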
Example:
Find the eigenvalues of the following matrices:
Eigenvector
Each eigenvalue obtained corresponds to a solution x, called an eigenvector.
In matrix terms, "vector" means either a row matrix or a column matrix.
Example:
Find the eigenvectors of the following matrices:
Exercise:
Determine the eigenvalues and corresponding eigenvectors of the following matrices.
Linear System: Solution by Iteration
What is Gauss-Seidel?
Gauss-Seidel Iteration
This method will always converge when the system is diagonally dominant, and usually does so rapidly.
An n × n matrix A is diagonally dominant if the absolute value of each entry on
the main diagonal is greater than the sum of the absolute values of the other
entries in the same row. That is,
$$|a_{ii}| > \sum_{j \ne i} |a_{ij}|, \qquad i = 1, \dots, n.$$
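The Gauss-Seidel sweep can be sketched as code: each new xᵢ immediately uses the most recently updated components. The 3 × 3 diagonally dominant system below is a hypothetical example, not from the slides (NumPy assumed):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=100):
    """Gauss-Seidel: sweep the rows, always using the newest values of x."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Off-diagonal terms: x[:i] already updated this sweep
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# A diagonally dominant system (|4|>2, |5|>3, |6|>3), so convergence is assured
A = [[4, 1, 1], [1, 5, 2], [1, 2, 6]]
b = [6, 8, 9]
print(gauss_seidel(A, b))   # converges to [1, 1, 1]
```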
Gauss-Seidel Iteration
Example:
Which of the following systems of linear equations has a diagonally
dominant coefficient matrix?
Gauss-Seidel Iteration
Example (1):
Find the solution to the following system of equations using the Gauss-Seidel method.
Solution: