
MATH2099 LECTURE 1

GAUSSIAN ELIMINATION

Any augmented matrix may be reduced to echelon form via the elementary row
operations

Ri = Ri - λRj   and   Ri ↔ Rj.

Once in echelon form the system may be solved via back-substitution.

An inconsistent equation at any stage of reduction indicates that there is no
solution and you may stop. Else

If every column on the LHS is a leading column then the solution is unique. Else

The presence of a non-leading column on the LHS of the echelon form indicates
infinitely many solutions, with the non-leading variables then serving as parameters.
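
A minimal numerical sketch of the above procedure (not from the lecture notes; it
assumes a unique solution and that no row swaps are needed):

```python
import numpy as np

# Solve a 3x3 system by forward elimination on [A|b], then back-substitution.
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

M = np.hstack([A, b.reshape(-1, 1)])      # augmented matrix [A|b]
n = len(b)

# Forward elimination: Ri = Ri - lambda*Rj below each pivot
for j in range(n):
    for i in range(j + 1, n):
        lam = M[i, j] / M[j, j]
        M[i] = M[i] - lam * M[j]

# Back-substitution on the echelon form
x = np.zeros(n)
for i in range(n - 1, -1, -1):
    x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]

print(x)                      # [ 2.  3. -1.]
print(np.allclose(A @ x, b))  # True
```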

MATH2099 LECTURE 2
MATRIX ANALYSIS

An m × n matrix is a rectangular array of real or complex numbers with m rows
and n columns.

Matrices are added and subtracted using simple entrywise operations.

If A is m × n and B is p × q then the product AB exists if and only if n = p, and
the resulting matrix is m × q.

In general AB ≠ BA.

Given a square n × n matrix A, the inverse of A (denoted by A^{-1}) is another
n × n matrix with the property that AA^{-1} = A^{-1}A = I.
[ a  b ]^{-1}                  [  d  -b ]
[ c  d ]      =  1/(ad - bc)   [ -c   a ]

[ a  b ]^{-1}
[ c  d ]      exists if and only if ad - bc ≠ 0.

MATH2099 LECTURE 3
VECTOR SPACES, INDEPENDENCE AND SPANNING

A subset S of a vector space V is a subspace of V if it is non-empty and closed
under vector addition and scalar multiplication.

A set of vectors {v1, v2, ..., vn} in a vector space V is said to be linearly
independent if

λ1 v1 + λ2 v2 + ... + λn vn = 0  ⟹  λ1 = λ2 = ... = λn = 0.

(Linearly independent ⇔ every column in the echelon form is a leading column.)

The span of a set of vectors {v1, v2, ..., vn} in a vector space V is the set of all
possible linear combinations λ1 v1 + λ2 v2 + ... + λn vn where λ1, λ2, ..., λn are scalars.
(Spanning set for V ⇔ no zero rows in the echelon form.)
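
A quick numerical check of both criteria (a sketch, not from the notes), placing
the vectors as the columns of a matrix so that rank = number of leading columns:

```python
import numpy as np

# Independent  <=>  rank == number of vectors (every column leading).
# Spans R^m    <=>  rank == m (no zero rows in the echelon form).
v1, v2, v3 = [1, 0, 2], [0, 1, 1], [1, 1, 3]   # note v3 = v1 + v2
A = np.column_stack([v1, v2, v3])

r = np.linalg.matrix_rank(A)
print("independent:", r == A.shape[1])   # False (v3 is dependent)
print("spans R^3:  ", r == A.shape[0])   # False (rank 2 < 3)
```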

MATH2099 LECTURE 4
BASES AND DIMENSION

A basis for a vector space is a linearly independent spanning set.

The number of vectors in any basis is the dimension of the vector space.

If V is an n-dimensional vector space with ordered basis B = {v1, v2, ..., vn} and
v ∈ V is (uniquely) expressed as v = λ1 v1 + λ2 v2 + ... + λn vn then the coordinate
vector [v]_B of v with respect to B is given by

          [ λ1 ]
          [ λ2 ]
[v]_B  =  [ .. ]
          [ λn ]

Every n-dimensional vector space V is essentially just R^n. Once an ordered basis
B for V is chosen, the map T : V → R^n given by T(v) = [v]_B will serve as an
isomorphism (structure-preserving map) between V and R^n.
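
As a small sketch (the example values are my own): writing the basis vectors as
the columns of a matrix B gives v = B[v]_B, so the coordinate vector is found by
solving a linear system.

```python
import numpy as np

B = np.column_stack([[1, 1], [1, -1]])   # ordered basis {(1,1), (1,-1)} of R^2
v = np.array([3, 1])

coords = np.linalg.solve(B, v)           # [v]_B
print(coords)                            # [2. 1.], i.e. v = 2(1,1) + 1(1,-1)
```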

MATH2099 LECTURE 5
DOT PRODUCTS AND PROJECTIONS

Let u = (u1, u2, ..., un)^T and v = (v1, v2, ..., vn)^T be two vectors in R^n. Then:

The vector from u to v is given by uv = v - u.

The dot product of u and v is given by u · v = u1 v1 + u2 v2 + ... + un vn.

The magnitude of the vector u is ‖u‖ = √(u1^2 + u2^2 + ... + un^2) = √(u · u).

The angle θ between u and v is given by cos(θ) = (u · v) / (‖u‖ ‖v‖).

u ⊥ v  ⇔  u · v = 0.

The projection of u onto v is given by Proj_v(u) = ((u · v) / (v · v)) v.
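
A short numpy sketch of these formulas (the example vectors are my own):

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])

length = np.sqrt(u @ u)                   # ||u|| = 5
theta = np.arccos((u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))
proj = ((u @ v) / (v @ v)) * v            # projection of u onto v

print(length, np.degrees(theta), proj)    # 5.0  53.13...  [3. 0.]
```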

MATH2099 LECTURE 6
THE GRAM-SCHMIDT ALGORITHM
Given a linearly independent set {v1, v2, ..., vn} in R^n, let

a1 = v1
a2 = v2 - Proj_{a1}(v2)
a3 = v3 - Proj_{a1}(v3) - Proj_{a2}(v3)
...
an = vn - Proj_{a1}(vn) - Proj_{a2}(vn) - ... - Proj_{a_{n-1}}(vn)

Then {a1, a2, ..., an} is an orthogonal set with the same span as {v1, v2, ..., vn}.

To produce an orthonormal spanning set simply normalise the vectors to produce

{u1, u2, ..., un} = { a1/‖a1‖, a2/‖a2‖, ..., an/‖an‖ }.

Any matrix A with linearly independent columns can be decomposed as A = QR
where Q has orthonormal columns and R is upper triangular. Q is found by
applying Gram-Schmidt to the columns of A and R is found via R = Q^T A.
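
A minimal sketch of the algorithm and the QR check (assuming, as above, that the
columns of A are linearly independent):

```python
import numpy as np

def gram_schmidt(A):
    """Return Q with orthonormal columns spanning the columns of A."""
    _, n = A.shape
    Q = np.zeros_like(A, dtype=float)
    for i in range(n):
        a = A[:, i].astype(float)
        for j in range(i):                 # subtract projections onto the
            a -= (a @ Q[:, j]) * Q[:, j]   # earlier (already unit) vectors
        Q[:, i] = a / np.linalg.norm(a)    # normalise
    return Q

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q = gram_schmidt(A)
R = Q.T @ A                                # R = Q^T A, upper triangular

print(np.allclose(A, Q @ R))               # True
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: orthonormal columns
```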

MATH2099 LECTURE 7
LINEAR TRANSFORMATIONS

A transformation T mapping the vector space V into the vector space W is linear
if:

T(v1 + v2) = T(v1) + T(v2)   (1)
T(λv1) = λT(v1)              (2)

for all vectors v1, v2 ∈ V and scalars λ.

Ker(T) = {v ∈ V | T(v) = 0}. (The kernel of T)

Im(T) = {T(v) | v ∈ V}. (The image of T)

Ker(T) is a subspace of V and Im(T) is a subspace of W.

Dim(Ker(T)) is referred to as the nullity of T.

Dim(Im(T)) is referred to as the rank of T.

Rank(T) + Nullity(T) = Dim(V) (The Rank-Nullity Theorem)

For a linear transformation T induced by a matrix A:

Ker(T) is simply the solution set of Ax = 0.

Nullity(T) is the number of non-leading columns in the echelon form.

Im(T) is the span of the leading columns of A.

Rank(T) is the number of leading columns in the echelon form.
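
A numerical illustration (a sketch; the matrix is my own example):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # second row = 2 x first row

rank = np.linalg.matrix_rank(A)       # number of leading columns
N = null_space(A)                     # orthonormal basis for Ker(T), as columns
print(rank, N.shape[1])               # 1 2, and rank + nullity = 3 = Dim(V)
```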

MATH2099 LECTURE 8
MATRIX TRANSFORMATIONS ON R^2
SOME STANDARD MATRIX TRANSFORMATIONS ON R^2

Reflection in the x-axis:   A = [ 1   0 ]
                                [ 0  -1 ]

Reflection in the y-axis:   A = [ -1  0 ]
                                [  0  1 ]

Reflection across the origin:   A = [ -1   0 ]
                                    [  0  -1 ]

Rotation anticlockwise about the origin by θ:   A = [ cos θ  -sin θ ]
                                                    [ sin θ   cos θ ]

For clockwise rotations use -θ in the above matrix.

Dilation by α in the x direction and β in the y direction:   A = [ α  0 ]
                                                                 [ 0  β ]
α, β > 1 yields expansions; α, β < 1 yields contractions.

k-shear in the x direction:   A = [ 1  k ]
                                  [ 0  1 ]

k-shear in the y direction:   A = [ 1  0 ]
                                  [ k  1 ]

Reflection in the line y = mx:   A = 1/(1+m^2) [ 1-m^2    2m   ]
                                               [  2m    m^2-1  ]

Projection onto the line y = mx:   A = 1/(1+m^2) [ 1    m  ]
                                                 [ m   m^2 ]

In all of the above the inverse of the transformation (if it exists) is given by A^{-1}.
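
A small sketch applying one of these matrices (θ = 90° is my own choice):

```python
import numpy as np

theta = np.pi / 2                     # anticlockwise rotation by 90 degrees
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
y = A @ x
print(np.round(y, 10))                        # [0. 1.]
print(np.allclose(np.linalg.inv(A) @ y, x))   # True: A^{-1} undoes the rotation
```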

MATH2099 LECTURE 9
MATRIX OF A LINEAR TRANSFORMATION
Let T : V → W be a linear transformation from an n-dimensional vector space V
to an m-dimensional vector space W. Suppose that B = {v1, v2, ..., vn} is an
ordered basis for V and that C = {w1, w2, ..., wm} is an ordered basis for W.
Then there is a unique matrix A, referred to as the matrix of the linear
transformation, with the property that [T(v)]_C = A[v]_B for all v ∈ V.

The ith column of A is the coordinate vector [T(vi)]_C.

The matrix A depends only upon the transformation T and the two bases B and C.

MATH2099 LECTURE 10
COMMUTATIVE DIAGRAMS

          A
      V ------> W
      ↑         ↑
    P |         | Q
      V ------> W
          F

Here A and F are the matrices of the same linear transformation with respect to
the old and new bases respectively, and P and Q are the transition (change of
basis) matrices. The diagram commutes, giving F = Q^{-1}AP.

MATH2099 LECTURE 11
ORTHOGONAL COMPLEMENTS AND SUBSPACE PROJECTIONS

W^⊥ = {v ∈ R^n | v · w = 0 for all w ∈ W}.

Let W be a k-dimensional subspace of R^n with an orthogonal basis
{a1, a2, ..., ak}. Then for any vector v in R^n

Proj_W(v) = Proj_{a1}(v) + Proj_{a2}(v) + ... + Proj_{ak}(v).
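
A sketch of the formula in R^3 (the orthogonal basis and v are my own example):

```python
import numpy as np

a1 = np.array([1.0, 0.0,  1.0])
a2 = np.array([1.0, 0.0, -1.0])        # a1 . a2 = 0: an orthogonal basis for W
v  = np.array([2.0, 5.0,  4.0])

proj = sum(((v @ a) / (a @ a)) * a for a in (a1, a2))
print(proj)                                # [2. 0. 4.]
print((v - proj) @ a1, (v - proj) @ a2)    # 0.0 0.0: the error lies in W-perp
```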

MATH2099 LECTURE 12
MATRIX TRANSFORMATIONS IN SPACE

Some standard rotation matrices in R^3 are:

                                                      [ cos θ  -sin θ   0 ]
Rotation anticlockwise by θ about the z-axis:   Rz =  [ sin θ   cos θ   0 ]
                                                      [   0       0     1 ]

                                                      [  cos θ   0   sin θ ]
Rotation anticlockwise by θ about the y-axis:   Ry =  [    0     1     0   ]
                                                      [ -sin θ   0   cos θ ]

                                                      [ 1     0       0    ]
Rotation anticlockwise by θ about the x-axis:   Rx =  [ 0   cos θ  -sin θ  ]
                                                      [ 0   sin θ   cos θ  ]

All rotations are anticlockwise about the positive axis according to the right-hand rule.

Let G be the n × k matrix G = (u1 | u2 | ... | uk) which has as its columns an
orthonormal basis for a subspace W of R^n and let S = GG^T. Then the standard
matrix S projects R^n onto W.

S is a projection matrix  ⇔  S is symmetric and S^2 = S.
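
A sketch using the same subspace as in the Lecture 11 example above, after
normalising the basis so that the columns of G are orthonormal:

```python
import numpy as np

G = np.column_stack([[1, 0, 1], [1, 0, -1]]) / np.sqrt(2)   # orthonormal columns
S = G @ G.T

print(np.allclose(S, S.T))              # True: symmetric
print(np.allclose(S @ S, S))            # True: idempotent
print(S @ np.array([2.0, 5.0, 4.0]))    # [2. 0. 4.], matching Lecture 11
```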

MATH2099 LECTURE 13
LINES OF BEST FIT AND REFLECTIONS

To find a curve of best fit, find the least squares solution to Ax = y by moving
over to the normal equations A^T A x = A^T y (a numerical sketch follows at the
end of this lecture's summary).

Let W be an (n-1)-dimensional subspace (hyperplane) in R^n with d ⊥ W. Then
the standard matrix which reflects R^n across W is given by

R = I - (2/(d · d)) d d^T.

If R is a reflection matrix then R is symmetric and orthogonal and hence R^2 = I.
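
A sketch of the least squares step (the data points are my own):

```python
import numpy as np

# Fit the line y = a + bx through four points via A^T A x = A^T y.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 4.0, 4.0])

A = np.column_stack([np.ones_like(xs), xs])    # columns: 1, x
coef = np.linalg.solve(A.T @ A, A.T @ ys)
print(coef)                                    # [1.5 1. ]  ->  y = 1.5 + x

# numpy's built-in least-squares routine agrees:
print(np.linalg.lstsq(A, ys, rcond=None)[0])   # [1.5 1. ]
```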

MATH2099 LECTURE 14
DETERMINANTS

Given a square n × n matrix A, the inverse of A (denoted by A^{-1}) is another
n × n matrix with the property that AA^{-1} = I.
[ a  b ]^{-1}                  [  d  -b ]
[ c  d ]      =  1/(ad - bc)   [ -c   a ]
 
det [ a  b ]  =  | a  b |  =  ad - bc
    [ c  d ]     | c  d |

More generally det(A) = a11|A11| - a12|A12| + a13|A13| - ... + (-1)^{n+1} a1n|A1n|.

A^{-1} exists if and only if det(A) ≠ 0.

det(AB) = det(A) det(B).

det(A^{-1}) = 1/det(A).

In general det(A + B) ≠ det(A) + det(B).

det(A) = det(A^T).

If a square matrix has a zero row or column then its determinant is 0.

Swapping rows or columns multiplies the determinant by -1.

Multiplying a row or column by λ multiplies the determinant by λ.

If a row of a square matrix is a scalar multiple of any other row then the
determinant of the matrix is 0 and hence the matrix is singular.

If a column of a square matrix is a scalar multiple of any other column then the
determinant of the matrix is 0 and hence the matrix is singular.

The determinant of a square matrix in echelon form is the product of the diagonal
elements of the echelon form.
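
A quick numerical check of two of these rules (the matrices are my own examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # det(A) = 1*4 - 2*3 = -2
B = np.array([[2.0, 0.0], [1.0, 1.0]])   # det(B) = 2

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1 / np.linalg.det(A)))                   # True
```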

MATH2099 LECTURE 15
EIGENVALUES, EIGENVECTORS AND DIAGONALISATION

Given a square matrix A, a non-zero vector v is said to be an eigenvector of A if
Av = λv for some λ ∈ R. The number λ is referred to as the associated eigenvalue
of A.

We first find eigenvalues through the characteristic equation det(A - λI) = 0. The
eigenvectors are then found via row reduction and back-substitution.

The zero vector is never an eigenvector but it is OK to have a zero eigenvalue.

If an n × n matrix A has n linearly independent eigenvectors and P is the matrix
with these eigenvectors as its columns then P^{-1}AP = D where D is the diagonal
matrix of eigenvalues. The order of the eigenvalues in D must match the order of
the eigenvectors in P. This is referred to as a diagonalisation of A.

A matrix can be non-diagonalisable by coming up short on eigenvectors. The only
general way to find out if a matrix has a full set of eigenvectors is to find them all.

A useful check is the fact that Σ(eigenvalues) = Trace(A).

Establishing the eigenanalysis of a particular matrix gives you a clear vision of the
internal workings of that matrix, and through diagonalisation the matrix may be
transformed into a more workable diagonal structure.

Eigenvectors from different eigenvalues are linearly independent.

Two n × n matrices A and B are said to be similar if there exists an invertible
n × n matrix P with the property that P^{-1}AP = B.

Similar matrices share the same eigenvalues.
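
A sketch of diagonalisation with numpy (the matrix is my own example; eig
returns the eigenvalues and the matrix P of eigenvectors-as-columns in matching
order):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

evals, P = np.linalg.eig(A)
D = np.diag(evals)

print(evals)                                     # e.g. [5. 2.] (order may vary)
print(np.allclose(np.linalg.inv(P) @ A @ P, D))  # True: P^{-1}AP = D
print(np.isclose(evals.sum(), np.trace(A)))      # True: sum of eigenvalues = trace
```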

MATH2099 LECTURE 16
APPLICATIONS OF EIGENVECTORS

P^{-1}AP = D  ⟹  A = PDP^{-1}  ⟹  A^n = PD^nP^{-1}.

e^{At} = I + At + (At)^2/2! + (At)^3/3! + ... + (At)^n/n! + ...

P^{-1}AP = D  ⟹  A = PDP^{-1}  ⟹  e^{At} = Pe^{Dt}P^{-1}.

e^{At} e^{Bt} = e^{(A+B)t} provided that AB = BA.

(e^{At})^{-1} = e^{-At}.

d/dt (e^{At}) = Ae^{At}.

The solution of the system of differential equations y' = Ay with initial conditions
y(0) = c is given by y = e^{At}c.

Suppose that A is a 2 × 2 matrix with two linearly independent eigenvectors v1, v2
and associated eigenvalues λ1, λ2. Then the solution to y' = Ay takes the form

y = α1 v1 e^{λ1 t} + α2 v2 e^{λ2 t}

where α1 and α2 are arbitrary constants which may be determined by applying
initial conditions. Mutatis mutandis the result also holds for larger matrices.
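
A sketch comparing the eigenvector form of the solution with y = e^{At}c (the
matrix and initial condition are my own):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0], [2.0, 3.0]])
c = np.array([1.0, 0.0])
t = 1.0

y_expm = expm(A * t) @ c                  # y = e^{At} c

evals, P = np.linalg.eig(A)
alpha = np.linalg.solve(P, c)             # constants alpha_i from y(0) = c
y_eig = P @ (alpha * np.exp(evals * t))   # sum of alpha_i v_i e^{lambda_i t}

print(np.allclose(y_expm, y_eig))         # True
```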

MATH2099 LECTURE 17
SYMMETRIC MATRICES AND QUADRATIC CURVES

A matrix A is said to be symmetric if A = A^T.

The eigenvectors from different eigenvalues of a symmetric matrix are mutually
perpendicular.

A square matrix Q is said to be orthogonal if Q^T Q = I or equivalently Q^{-1} = Q^T.

The linear transformations induced by orthogonal matrices are rotations, reflections or
a composition of both.

The columns of an orthogonal matrix are an orthonormal set.

Let A be a symmetric matrix and Q the orthogonal matrix made up of unit eigenvectors
of A. Then Q^T AQ = D is an orthogonal diagonalisation of A with the matrix D being
the diagonal matrix of corresponding eigenvalues of A.

(AB)^{-1} = B^{-1}A^{-1}.

(AB)^T = B^T A^T.

x^2/a^2 + y^2/b^2 = 1 is an ellipse in R^2. (+ +)

x^2/a^2 - y^2/b^2 = 1 is a hyperbola in R^2. (+ -)

The quadratic form x^T Ax in the plane, where A is a symmetric matrix, has principal
axes given by the orthogonal eigenvectors of A. The associated quadratic curve
x^T Ax = C may be orthogonally transformed through rotations and/or reflections into a
standard ellipse or hyperbola with the eigenvalues appearing as coefficients.
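
A sketch: orthogonally diagonalising the symmetric matrix of the form
2x^2 + 2xy + 2y^2 (my own example; numpy's eigh is designed for symmetric
matrices and returns an orthogonal Q):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])           # x^T A x = 2x^2 + 2xy + 2y^2

evals, Q = np.linalg.eigh(A)
print(evals)                                     # [1. 3.]: curve becomes X^2 + 3Y^2 = C
print(np.allclose(Q.T @ Q, np.eye(2)))           # True: Q is orthogonal
print(np.allclose(Q.T @ A @ Q, np.diag(evals)))  # True: Q^T A Q = D
```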

MATH2099 LECTURE 18
SYMMETRIC MATRICES AND QUADRIC SURFACES IN SPACE

x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 is an ellipsoid in R^3. (+ + +)

x^2/a^2 + y^2/b^2 - z^2/c^2 = 1 is a hyperboloid of one sheet in R^3 (+ + -). (Axis on the negative.)

x^2/a^2 - y^2/b^2 - z^2/c^2 = 1 is a hyperboloid of two sheets in R^3 (+ - -). (Axis on the positive.)

In R^3 the number of sheets is the number of (-) signs.

The quadric surface x^T Ax = C in space, where A is a symmetric matrix, has principal axes
given by the orthogonal eigenvectors of A. The surface may be orthogonally transformed
through rotations and/or reflections into a standard hyperboloid or ellipsoid with the
eigenvalues appearing as coefficients.

MATH2099 LECTURE 19
JORDAN DECOMPOSITIONS
The Jordan block Jn(λ) is an n × n matrix with λ on the diagonal, 1s
immediately above the diagonal and zeros everywhere else.

A Jordan matrix J is a direct sum of Jordan blocks.

Every square matrix A is similar to a Jordan matrix J.

The algebraic multiplicity of an eigenvalue is the number of times the eigenvalue
appears as a root of the characteristic equation, while the geometric multiplicity is
the number of linearly independent eigenvectors that it eventually delivers.

If we are short on eigenvectors diagonalisation is impossible and we instead
establish a Jordan decomposition, which has similar applications.

To produce a Jordan decomposition, inflate the eigenspace to produce a sequence of
nested generalised eigenspaces terminating with a dimension equal to the algebraic
multiplicity of the eigenvalue.

Using the stepping tool v_{n-1} = (A - λI)v_n, form eigen-chains from the outer rings
to the eigencore. Each eigen-chain produces a Jordan block, with the size of the
block equalling the length of the chain. The number of blocks = the number of
chains = dimension of the eigenspace = geometric multiplicity of the eigenvalue.
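
A sketch using sympy's built-in Jordan form on a matrix that is short one
eigenvector (my own example: eigenvalue 2 with algebraic multiplicity 2 but
geometric multiplicity 1):

```python
from sympy import Matrix

A = Matrix([[3, 1], [-1, 1]])   # (lambda - 2)^2 = 0, but only one eigenvector

P, J = A.jordan_form()          # J = P^{-1} A P is the Jordan matrix
print(J)                        # Matrix([[2, 1], [0, 2]]): a single block J_2(2)
print(A == P * J * P**-1)       # True
```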

MATH2099 LECTURE 20
JORDAN DECOMPOSITIONS PART 2

(The summary points for this lecture are the same as for Lecture 19 above.)

MATH2099 LECTURES 21 and 22
MATRIX EXPONENTIALS AND SYSTEMS OF DEs REVISITED



e^{J2(λ)t} = e^{λt} [ 1  t ]
                    [ 0  1 ]

e^{J3(λ)t} = e^{λt} [ 1  t  t^2/2! ]
                    [ 0  1    t    ]
                    [ 0  0    1    ]

e^{J4(λ)t} = e^{λt} [ 1  t  t^2/2!  t^3/3! ]
                    [ 0  1    t     t^2/2! ]
                    [ 0  0    1       t    ]
                    [ 0  0    0       1    ]

Given a Jordan decomposition P^{-1}AP = J we have e^{At} = Pe^{Jt}P^{-1}.

The exponential of a direct sum is the direct sum of the exponentials.

The solution of the system of linear differential equations y' = Ay with initial
conditions y(0) = c is given by y = e^{At}c.
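
A sketch checking the J2(λ) formula above against scipy's matrix exponential
(λ = 2 and t = 0.5 are my own choices):

```python
import numpy as np
from scipy.linalg import expm

lam, t = 2.0, 0.5
J = np.array([[lam, 1.0], [0.0, lam]])           # the Jordan block J_2(lambda)

formula = np.exp(lam * t) * np.array([[1.0, t],
                                      [0.0, 1.0]])
print(np.allclose(expm(J * t), formula))         # True
```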

MATH2099 LECTURE 23
NON-HOMOGENEOUS SYSTEMS OF DIFFERENTIAL EQUATIONS

The solution to y' = Ay + b satisfying the initial condition y(0) = c is given by

y = e^{At}c + e^{At} ∫₀ᵗ e^{-As} b(s) ds.

