
Linear equations, matrices, determinants

• Introduction
• Rank of a matrix and inverse of a matrix
○ Singular and regular matrices
○ Rank of a matrix
○ Adjoint matrix of a square matrix A
○ Cofactors property
○ Inverse matrix of a regular square matrix
○ Uniqueness of the inverse matrix
○ Calculating the inverse of a regular square matrix
○ Singular matrices and the inverse.
○ Cramer's rule
○ Homogeneous system of Cramer
• Classification of systems of linear equations
○ Main equations; side equations; main unknowns; side unknowns
○ Characteristic determinant
○ Classification
 First kind
 Second kind
 Third kind
 Fourth kind
○ Homogeneous equations
 Important conclusions
 Application about collinear points in a plane
 Application about lines in a plane

Introduction
In previous articles, we have seen the fundamental properties of linear equation
systems, matrices and determinants.
In this part II, we bring these concepts together and establish many relations between
them.
Rank of a matrix and inverse of a matrix
Singular and regular matrices
If the determinant of a square matrix is 0, we call the matrix singular; otherwise, we
call it regular.
Rank of a matrix
Take a fixed matrix A. By crossing out, in a suitable way, some rows and some columns
of A, we can construct many square matrices from A.
Among all these, find the largest regular square matrix.
The number of rows of that matrix is called the rank of A.
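This definition can be applied directly, by brute force: try all square submatrices, from the largest size down, and stop at the first regular one. A minimal sketch in Python (the helper names det and rank are our own):

```python
from itertools import combinations

def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def rank(a):
    # Rank = number of rows of the largest regular square submatrix.
    rows, cols = len(a), len(a[0])
    for r in range(min(rows, cols), 0, -1):
        for rs in combinations(range(rows), r):
            for cs in combinations(range(cols), r):
                if det([[a[i][j] for j in cs] for i in rs]) != 0:
                    return r
    return 0  # only the zero matrix has rank 0
```

This is exponential in the matrix size and meant only to mirror the definition; in practice, row reduction is used instead.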
Adjoint matrix of a square matrix A
Replace each element of A with its own cofactor and transpose the result; the matrix
you obtain is the adjoint matrix of A.
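The two steps (cofactors, then transpose) can be sketched as follows; det, cofactor and adjoint are our own helper names:

```python
def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cofactor(m, i, j):
    # Signed minor: delete row i and column j, take the determinant.
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

def adjoint(m):
    # Transposed cofactor matrix: entry (i, j) is the cofactor of (j, i).
    n = len(m)
    return [[cofactor(m, j, i) for j in range(n)] for i in range(n)]
```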
Cofactors property
Theorem: When we multiply the elements of a row of a square matrix with the
corresponding cofactors of another row, then the sum of these products is 0.
Proof:
We'll prove this property for 3x3 matrices but the method of the proof is universal.
So take P =

[a b c]
[d e f]
[g h i]
Let A,B,C,D,E,F,G,H,I be the cofactors of a,b,c,d,e,f,g,h,i.
We multiply the elements of a row, say the second, with the corresponding cofactors
of another row, say the first.
We have to prove that dA+eB+fC = 0.
Take now the matrix Q =
[d e f]
[d e f]
[g h i]
Since the matrix has two equal rows, its determinant is 0. So det(Q) = 0.
Furthermore, the cofactors of corresponding elements of the first row of P and Q are
the same. These cofactors are A, B and C.
Hence expanding det(Q) along the first row gives dA+eB+fC.
Since det(Q) = 0, we conclude dA+eB+fC = 0.
Q.E.D.
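A quick numerical check of the theorem, reusing the helper functions sketched above:

```python
def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cofactor(m, i, j):
    # Signed minor: delete row i and column j, take the determinant.
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

P = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
# Row 2 against the cofactors of row 1: the mixed sum vanishes.
mixed = sum(P[1][j] * cofactor(P, 0, j) for j in range(3))
assert mixed == 0
# Row 1 against its own cofactors gives det(P), as usual.
assert sum(P[0][j] * cofactor(P, 0, j) for j in range(3)) == det(P)
```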
Inverse matrix of a regular square matrix
We say that B is an inverse matrix of A if and only if A.B = I = B.A (I is the identity
matrix).
Uniqueness of the inverse matrix
We show that it is impossible that there are two inverse matrices for A.
Say, there are two inverse matrices B and C for A, then we have
A.B=I=B.A and A.C=I=C.A
Then, B=I.B=C.A.B=C.I=C
So B=C is the unique inverse of A.
Calculating the inverse of a regular square matrix
We'll show that the unique inverse B of a regular matrix A satisfies:
B = (adjoint of A) / det(A)
We'll prove this property for 3x3 matrices, but the method of the proof is universal.
First we'll calculate
A.(adjoint of A) =
[a b c] [A D G]
[d e f] . [B E H] =
[g h i] [C F I]

[aA+bB+cC aD+bE+cF aG+bH+cI]
[dA+eB+fC dD+eE+fF dG+eH+fI] =
[gA+hB+iC gD+hE+iF gG+hH+iI]

Because of the cofactors property,

[aA+bB+cC 0 0 ]
[ 0 dD+eE+fF 0 ] =
[ 0 0 gG+hH+iI]

The diagonal elements of this matrix are det(A)

[ det(A) 0 0 ]
[ 0 det(A) 0 ] =
[ 0 0 det(A)]

[ 1 0 0]
det(A). [ 0 1 0] =
[ 0 0 1]

det(A).I
In the same way, (adjoint of A).A =det(A).I
Remark: All regular matrices have an inverse matrix, and we can now calculate this
inverse matrix.
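The formula B = (adjoint of A) / det(A) is easy to sketch in code; using Fraction keeps the arithmetic exact for integer matrices (the helper names are our own):

```python
from fractions import Fraction

def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cofactor(m, i, j):
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

def adjoint(m):
    # Transposed cofactor matrix: entry (i, j) is the cofactor of (j, i).
    n = len(m)
    return [[cofactor(m, j, i) for j in range(n)] for i in range(n)]

def inverse(m):
    d = det(m)
    if d == 0:
        raise ValueError("singular matrix: no inverse")
    adj = adjoint(m)
    n = len(m)
    return [[Fraction(adj[i][j], d) for j in range(n)] for i in range(n)]

A = [[2, 1], [1, 1]]
B = inverse(A)
# A.B must be the identity matrix:
prod = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```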
Singular matrices and the inverse.
Assume B is the inverse of a singular matrix A. Then A.B = I => |A|.|B| = 1. Since
|A| = 0, this is impossible. So, a singular matrix has no inverse.
Cramer's rule
A system of n linear equations in n unknowns is called a Cramer system if and only if
the matrix formed by the coefficients is regular.
There is a special method to solve such a system. This method is called Cramer's rule.
We'll prove the rule for a system of 3 equations in 3 unknowns, but the rule is
universal.
Take,

/ ax + by + cz = d
| a'x+ b'y + c'z = d' (1)
\ a"x+ b"y + c"z = d"

|a b c |
with |N| = |a' b' c'|
|a" b" c"|

Then we have
|xa b c |
x.|N| = |xa' b' c'|
|xa" b" c"|
and using the properties of determinants
|xa +by +cz b c |
x.|N| = |xa'+b'y+c'z b' c'|
|xa"+b"y+c"z b" c"|
and appealing to (1)
|d b c |
x.|N| = |d' b' c'|
|d" b" c"|
Thus,
|d b c |
x = |d' b' c'| / |N| (2)
|d" b" c"|
Similarly,
|a d c |
y = |a' d' c'| / |N| (3)
|a" d" c"|

|a b d |
z = |a' b' d'| / |N| (4)
|a" b" d"|
The formulas (2), (3), (4) constitute Cramer's rule. It can be proved that this solution is
the only solution of (1).
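Cramer's rule translates directly into code: replace column k of N by the column of known terms and divide the two determinants. A sketch (the function name cramer is ours):

```python
from fractions import Fraction

def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer(N, d):
    # Unknown k = det(N with column k replaced by d) / det(N).
    dn = det(N)
    return [Fraction(det([row[:k] + [d[i]] + row[k + 1:]
                          for i, row in enumerate(N)]), dn)
            for k in range(len(N))]

# x + y + z = 6 ; x - y + z = 2 ; x + y - z = 0  has solution (1, 2, 3):
assert cramer([[1, 1, 1], [1, -1, 1], [1, 1, -1]], [6, 2, 0]) == [1, 2, 3]
```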
Homogeneous system of Cramer
If all the known terms of a Cramer system are 0, the system is homogeneous in the
unknowns. It follows directly from Cramer's rule that the only solution of such a
system is the zero solution (all unknowns = 0). This solution is also called the obvious
solution.
Classification of systems of linear equations
Main equations; side equations; main unknowns; side unknowns
Take any system with m linear equations in n unknowns.
Let A be the matrix of coefficients, and say rank(A) = r. Then at least one regular
r x r matrix M can be made from A.
The equations corresponding with the rows of M are called the main equations; the
other equations are the side equations.
The unknowns corresponding with the columns of M are called the main unknowns;
the other unknowns are the side unknowns.
Characteristic determinant
The characteristic matrix, associated with a particular side equation, is the matrix
formed by adding to the main matrix:
1. a row at the bottom, with the coefficients of the main unknowns in that side
equation;
2. a column at the right, with the known terms of the main equations and the known
term of that side equation.
The characteristic determinant is the determinant of the characteristic matrix.
So there are as many characteristic determinants as side equations.
Example:

/ x+2y+z+2u=1
| 4x+4y=-3
| 3x+6y=-4.5
\ 2x+4y+2z+4u=2

The matrix of coefficients is

[1 2 1 2]
[4 4 0 0]
[3 6 0 0]
[2 4 2 4]

The rank of this matrix is 3.

We choose a main matrix from that matrix.

[1 2 1]
[4 4 0]
[3 6 0]
By this choice, the main equations are the first, the second and the third.
The main unknowns are x, y and z. The last equation is the only side equation and u is
the side unknown. The characteristic determinant of this side equation is
|1 2 1  1  |
|4 4 0 -3  |
|3 6 0 -4.5|
|2 4 2  2  |
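Since the fourth equation is twice the first (known term included), this characteristic determinant must vanish; a quick check with the determinant helper sketched earlier:

```python
def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

char_matrix = [[1, 2, 1, 1],
               [4, 4, 0, -3],
               [3, 6, 0, -4.5],
               [2, 4, 2, 2]]
# The last row is twice the first, so the determinant is 0
# and the side equation is compatible with the main equations.
assert det(char_matrix) == 0
```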
With all these new concepts we can classify all the systems of linear equations.
Classification
First kind
The systems with n equations and n unknowns and with n = rank of the matrix of
coefficients.
These are the Cramer systems mentioned earlier. They have exactly one solution.
Second kind
The systems with m equations and n unknowns and with m = rank of the matrix of
coefficients.
In that case, there are side unknowns, but no side equations.
It can be proved that all the side unknowns can be chosen arbitrarily. Each choice
of these side unknowns corresponds to exactly one solution of the system.
Example:

/ 2x+3y+z=4
\ x+2y-z=3

Choose the main matrix

[2 3]
[1 2]
z is the side unknown. Each choice of z corresponds to exactly one solution of the
system.
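For this example we can make "one solution per choice of z" concrete: fix z, move it to the right-hand side, and solve the remaining 2x2 Cramer system in x and y (the helper names are our own):

```python
def det2(m):
    # 2x2 determinant.
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def solve_for(z):
    # Main system: 2x + 3y = 4 - z ; x + 2y = 3 + z.
    d = [4 - z, 3 + z]
    dn = det2([[2, 3], [1, 2]])  # = 1, the main matrix is regular
    x = det2([[d[0], 3], [d[1], 2]]) / dn
    y = det2([[2, d[0]], [1, d[1]]]) / dn
    return x, y

# Every choice of z yields a solution of the full system:
for z in (0, 1, -2):
    x, y = solve_for(z)
    assert 2 * x + 3 * y + z == 4 and x + 2 * y - z == 3
```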
Third kind
The systems with m equations and n unknowns and with n = rank of the matrix of
coefficients.
In that case, there are (m - n) side equations, but no side unknowns.
Thus there are (m - n) characteristic determinants. It can be proved that the system has
a solution if and only if all the characteristic determinants are zero. In that case,
the solution is unique! Furthermore, all the side equations are linear combinations
of the main equations.
The unique solution can be found by omitting all the side equations, and solving the
remaining system of the first kind.
Example:
/ 2x+y=1
| x+y=0
| 3x+2y=1
\ 4x+3y=1

I choose the main matrix

[2 1]
[1 1]
The characteristic determinants are
|2 1 1|
|1 1 0| = 0
|3 2 1|
and

|2 1 1|
|1 1 0| = 0
|4 3 1|

The unique solution is the solution of the system

/ 2x+y=1
\ x+y=0

x=1, y=-1
Fourth kind
The systems with m equations and n unknowns and with r = rank of the matrix of
coefficients.
In that case, there are (m - r) side equations, and (n - r) side unknowns.
Thus there are (m - r) characteristic determinants. It can be proved that the system has
a solution if and only if all the characteristic determinants are zero.
Furthermore, all the side equations are linear combinations of the main
equations and can be omitted. Then, the remaining system is a system of the second
kind.
All the side unknowns can be chosen arbitrarily. Each choice of these side
unknowns corresponds to exactly one solution of the system.
Homogeneous equations
If all the known terms of a system are 0, the system is homogeneous in the unknowns.
A homogeneous system always has the obvious solution. All properties stated in the
previous classification also hold for homogeneous systems.
Because all known terms are zero, it is easy to verify that all characteristic
determinants are 0.
Important conclusions
1. A homogeneous Cramer system has exactly one solution, the obvious solution.
2. In a homogeneous system of the second kind, we can choose all the side
unknowns arbitrarily. Each choice of these side unknowns corresponds to
exactly one solution of the system.
3. A homogeneous system of the third kind, has exactly one solution, the obvious
solution.
4. In a homogeneous system of the fourth kind, we can choose all the side
unknowns arbitrarily and omit the side equations. Each choice of these
side unknowns corresponds to exactly one solution of the system.
5. A homogeneous system of n equations in n unknowns is either a Cramer
system or a system of the fourth kind. It has respectively only the obvious
solution, or an infinite number of solutions. Hence we can state the important
conclusion:
A homogeneous system of n equations in n unknowns has a solution
different from the obvious one, if and only if the determinant of the
coefficient matrix is zero.
6. It is immediate that, if a solution is found for a homogeneous system, each real
multiple of this solution is also a solution of the system.
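Conclusion 5 in code form: the determinant decides whether a homogeneous n x n system has a non-obvious solution. A small sketch, reusing the determinant helper from earlier:

```python
def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# x + 2y = 0 ; 2x + 4y = 0: singular, so non-obvious solutions exist.
A = [[1, 2], [2, 4]]
assert det(A) == 0
x, y = 2, -1                      # one non-obvious solution...
assert A[0][0] * x + A[0][1] * y == 0
assert A[1][0] * x + A[1][1] * y == 0
x, y = 6, -3                      # ...and every real multiple works too
assert A[0][0] * x + A[0][1] * y == 0

# x + 2y = 0 ; 2x + 5y = 0: regular, so only the obvious solution.
assert det([[1, 2], [2, 5]]) != 0
```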
Application about collinear points in a plane
Take three points A, B, C in a plane. The coordinates with respect to an orthonormal
coordinate system are (a,a'); (b,b'); (c,c') respectively.
The three points are collinear
<=>
there is a line ux+vy+w=0 through A,B and C
<=>
There are values of u, v, w, not all 0, such that
u.a + v.a' + w = 0
u.b + v.b' + w = 0
u.c + v.c' + w = 0
<=>
the system
a.u + a'.v + 1.w = 0
b.u + b'.v + 1.w = 0
c.u + c'.v + 1.w = 0

has a solution for u,v,w different from the obvious solution.


<=>
|a a' 1|
|b b' 1| = 0
|c c' 1|
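The determinant test is straightforward to apply in code (the function name collinear is ours):

```python
def det(m):
    # Determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def collinear(p, q, r):
    # Three points are collinear iff the 3x3 determinant above is 0.
    (a, a2), (b, b2), (c, c2) = p, q, r
    return det([[a, a2, 1], [b, b2, 1], [c, c2, 1]]) == 0

assert collinear((0, 0), (1, 1), (2, 2))
assert not collinear((0, 0), (1, 1), (2, 3))
```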
Application about lines in a plane
Take two points A, B in a plane. The coordinates with respect to an orthonormal
coordinate system are (a,a'), (b,b') respectively. Appealing to the previous application,
we can write
A third point P(x,y) is on the line AB
<=>
|x y 1|
|a a' 1| = 0
|b b' 1|
This is the equation of the line AB.
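Expanding this determinant along its first row gives the coefficients u, v, w of the line directly; a sketch (the function name is ours):

```python
def line_through(p, q):
    # Expand |x y 1; a a' 1; b b' 1| along the first row:
    # the line is u.x + v.y + w = 0 with the coefficients below.
    (a, a2), (b, b2) = p, q
    u = a2 - b2
    v = b - a
    w = a * b2 - a2 * b
    return u, v, w

u, v, w = line_through((1, 2), (3, 4))
# Both points must satisfy the equation:
assert u * 1 + v * 2 + w == 0
assert u * 3 + v * 4 + w == 0
```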
