
Linear Algebra and Control Applications

Notes

Synopsis
The modern control approach using the state space method requires extensive use of matrices and a deep understanding of linear algebra.

Contents

Synopsis

1 Introduction
  1.1 Overview
  1.2 Literature Survey
  1.3 Linear Algebra Basics
      1.3.1 Field
      1.3.2 Vector Space
  1.4 Linear Combination and Vector Spaces

2 Matrix Algebra
  2.1 Introduction
  2.2 Matrices
  2.3 Matrices and Linear Equations
  2.4 Linear Independence and Matrices
      2.4.1 Linear Dependence/Independence of Vectors and Determinants
  2.5 Vector Span and Matrices
      2.5.1 Column Space and Row Space
  2.6 Basis and Matrices
      2.6.1 Pivot Rows and Columns of a Matrix and Basis
  2.7 Dimension, Rank and Matrices
      2.7.1 Dimension of Whole Matrix
      2.7.2 Dimension of Upper Triangular Matrix
      2.7.3 Dimension of Diagonal Matrix
      2.7.4 Dimension of Symmetric Matrix
  2.8 The Null Space and the Four Subspaces of a Matrix
  2.9 The Complete Solution to Rx = 0
      2.9.1 Row Space
      2.9.2 Column Space
      2.9.3 Dimension of Null Space
      2.9.4 The Complete Solution to Rx = 0
  2.10 The Complete Solution to Ax = 0
  2.11 The Complete Solution to Ax = b
      2.11.1 Full Row Rank (r = m); Full Column Rank (r = n)
      2.11.2 Full Row Rank (r = m); Column Rank Less (r < n)
      2.11.3 Full Column Rank (r = n); Row Rank Less (r < m)
      2.11.4 Row Rank and Column Rank Less (r < m; r < n)
  2.12 Orthogonality of the Four Subspaces
  2.13 Projections
      2.13.1 Projection Onto a Line
      2.13.2 Projection with Trigonometry
      2.13.3 Projection Onto a Subspace
  2.14 Least Squares Solution to Ax = b

3 Matrices and Determinants
  3.1 Introduction
  3.2 Determinant of a Square Matrix

4 Solution to Dynamic Problems with Eigen Values and Eigen Vectors
  4.1 Introduction
  4.2 Linear Differential Equations and Matrices

5 Matrices and Linear Transformations
  5.1 Introduction
  5.2 Linear Transformations

Chapter 1 Introduction
1.1 Overview

Control systems engineering can be divided into two types of approaches: classical or traditional control, wherein transfer functions (the ratio of the Laplace transforms of output to input) are used, and the modern control approach, wherein state space methods are used. The state space approach has several advantages over the transfer function approach. Chief among them is that state space methods are computationally straightforward and simple to apply to multivariable as well as non-linear control problems. State space methods use matrices extensively and require a thorough understanding of linear algebra.

1.2 Literature Survey

The books Linear Algebra and its Applications and Introduction to Linear Algebra by Gilbert Strang, together with his video lectures on the MIT website, give a very good insight into linear algebra theory. These notes are based on the above books and video lectures.

1.3 Linear Algebra Basics

In the study of linear systems of equations, the vector space over a field is an important definition that stems from linear algebra.

1.3.1 Field

A field, F, is a set of elements called scalars, together with the two operations of addition and multiplication, for which the following axioms hold:
(1) For any pair of elements a, b ∈ F, there is a unique sum a + b ∈ F and a unique product a·b ∈ F. Further, by the law of commutativity, a + b = b + a and a·b = b·a.
(2) For any three elements a, b and c ∈ F, the associative laws a + (b + c) = (a + b) + c and a(bc) = (ab)c as well as the distributive law a(b + c) = ab + ac hold good.
(3) F contains the zero element, denoted by 0, and the unity element, denoted by 1, such that a + 0 = a and a·1 = a for every a ∈ F.
(4) F contains the additive inverse, i.e., for every a ∈ F there exists an element b ∈ F such that a + b = 0.
(5) F contains the multiplicative inverse, i.e., for every non-zero a ∈ F there exists an element b ∈ F such that a·b = 1.

1.3.2 Vector Space

A vector space or linear vector space or linear space over the field F is a set V of elements called vectors (also denoted as V(F)), together with two operations of vector addition and scalar multiplication, for which the following axioms hold:
(1) For any pair of vectors x, y ∈ V, there is a unique sum x + y ∈ V. Further, by the law of commutativity, x + y = y + x.
(2) For any vector x ∈ V and scalar α ∈ F, there is always a unique product αx ∈ V.
(3) For any three vectors x, y and z ∈ V, the associative law x + (y + z) = (x + y) + z holds good.
(4) For any two vectors x and y ∈ V and scalar α ∈ F, the distributive law α(x + y) = αx + αy holds good.
(5) For any two scalars α and β ∈ F and vector x ∈ V, the associative law α(βx) = (αβ)x and the distributive law (α + β)x = αx + βx hold good.
(6) V contains the zero or null vector, denoted by 0, such that x + 0 = x for every x ∈ V.
(7) The unity scalar 1 ∈ F satisfies 1·x = x for every x ∈ V.
(8) For every x ∈ V, there exists an element −x ∈ V such that x + (−x) = 0.

1.4 Linear Combination and Vector Spaces

Linear algebra is based on two operations on vectors which define the vector space: vector addition and scalar multiplication. If c and d are scalars ∈ F, and u and v are two vectors ∈ V, then the linear combination of u and v is defined as cu + dv.

Chapter 2 Matrix Algebra


2.1 Introduction

In this chapter, an understanding of matrices is developed from linear algebra basics.

2.2 Matrices

We can form linear combinations of vectors using matrices. For example, let three vectors u, v and w be given as

u = \begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}, \quad v = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}, \quad w = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

Their linear combination in three-dimensional space is cu + dv + ew, i.e.,

c\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix} + d\begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix} + e\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} c \\ d - c \\ e - d \end{bmatrix}    (2.1)

The above linear combination can be rewritten using matrices as

\begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = \begin{bmatrix} c \\ d - c \\ e - d \end{bmatrix}    (2.2)

i.e., the matrix times the vector can be given as

Ax = \begin{bmatrix} u & v & w \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = cu + dv + ew    (2.3)

where the scalars c, d and e are defined as the components of the vector x.

Thus rewriting the linear combination in matrix form has brought about a crucial change in viewpoint, explained as follows:
(a) At first, the scalars c, d and e were multiplying the vectors u, v and w to form the linear combination cu + dv + ew.
(b) In the matrix form, the matrix A is multiplying the scalars:

Ax = \begin{bmatrix} u & v & w \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = cu + dv + ew    (2.4)

Thus the matrix A acts on the vector x. The result of the matrix multiplication of the vector x, i.e., Ax, can be defined as the column vector b expressed as

b = Ax    (2.5)

Linear combinations are the key to linear algebra, and the output Ax is a linear combination of the columns of A. Thus matrices can be said to be made up of row vectors and column vectors: an m × n matrix consists of m row vectors of n elements each, or n column vectors of m elements each.
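As a quick numerical illustration (a NumPy sketch; the scalar values c, d, e chosen here are arbitrary, not from the notes), the matrix built from the columns u, v, w reproduces the linear combination cu + dv + ew:

```python
import numpy as np

# Columns u, v, w from the example above (they form the difference matrix A)
u = np.array([1, -1, 0])
v = np.array([0, 1, -1])
w = np.array([0, 0, 1])
A = np.column_stack([u, v, w])

c, d, e = 2.0, 3.0, 5.0
x = np.array([c, d, e])

# Ax is exactly the linear combination of the columns of A
print(A @ x)              # [2. 1. 2.]
print(c*u + d*v + e*w)    # same result
```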

2.3 Matrices and Linear Equations

Matrices and linear algebra concepts can be very useful in solving a system of linear equations. For the example discussed above, where

Ax = \begin{bmatrix} u & v & w \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = b    (2.6)

from the linear algebra point of view, we were interested in computing the linear combination cu + dv + ew to find b. In the case of linear equations, we consider c, d and e to represent the elements x_1, x_2 and x_3 of the column vector x, and consider the problem to be that of finding which combination of u, v and w produces a particular vector b, i.e., finding the input x that gives the desired output b = Ax. Thus Ax = b can be seen as a system of linear equations that has to be solved for x_1, x_2 and x_3.

2.4 Linear Independence and Matrices

A set of vectors v_1, v_2, ..., v_n is said to be linearly independent if their linear combination

α_1 v_1 + α_2 v_2 + \cdots + α_n v_n,    (2.7)

where α_1, α_2, ..., α_n are scalars, equals the zero vector if and only if α_1 = α_2 = ... = α_n = 0. If even one α_i, i = 1, 2, ..., n, is non-zero, then the set is linearly dependent. The columns of a matrix A can be considered as column vectors. Thus the columns of A are linearly independent when the only solution to Ax = 0 is x = 0. If x = [x_1 x_2 ... x_n]^T and

A = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}    (2.8)

then v_1, v_2, ..., v_n are linearly independent if x_1 v_1 + x_2 v_2 + ... + x_n v_n = 0 holds if and only if x_1 = x_2 = ... = x_n = 0.

2.4.1 Linear Dependence/Independence of Vectors and Determinants

The linear dependence or independence of the columns of a matrix is an important property, and it can be verified by calculating the determinant of the matrix A. If the determinant |A| is zero, i.e., the matrix A is singular, then the column vectors are linearly dependent; that is, Ax = 0 has a non-zero solution x ≠ 0. If the determinant |A| is not zero, i.e., the matrix A is invertible, then Ax = 0 with |A| ≠ 0 implies x = 0 and hence the column vectors are linearly independent.
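A small numerical check of this determinant test (a NumPy sketch; the two example matrices below are hypothetical, chosen only for illustration):

```python
import numpy as np

A_indep = np.array([[1., 0., 0.],
                    [0., 2., 0.],
                    [0., 1., 1.]])
A_dep = np.array([[1., 2., 3.],
                  [2., 4., 6.],   # row 2 = 2 * row 1, so the matrix is singular
                  [1., 1., 1.]])

print(np.linalg.det(A_indep))   # 2.0 -> non-zero, columns independent
print(np.linalg.det(A_dep))     # 0.0 -> singular, columns dependent
```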

2.5 Vector Span and Matrices

A set of vectors is said to span a vector space or linear space if their linear combinations ll that space. The set of vectors may be dependent or independent. In other words, if the given set of vectors are enough to produce the rest of the vectors in that space, then that set is said to span its vector space.

2.5.1 Column Space and Row Space

The columns of a matrix span its column space, i.e., the columns are considered as vectors and the column space is the vector space consisting of linear combinations of these column vectors. In a similar manner, the rows of a matrix span its row space. The row space of the matrix A is the column space of its transpose, i.e., AT .

2.6 Basis and Matrices

A basis for a vector space is a set of vectors with two properties:
(a) The basis vectors are linearly independent.
(b) They span the vector space.

This combination of properties is fundamental to linear algebra. Every vector v in the space is a combination of the basis vectors, because they span the space. Most important is that the combination of the basis vectors that forms v is unique, since these basis vectors, say v_1, v_2, ..., v_n, are independent. Thus there is one and only one way to write v as a combination of the basis vectors.

The columns of every invertible n × n matrix form a basis for ℝⁿ. In other words, the vectors v_1, v_2, ..., v_n are a basis of ℝⁿ exactly when they are the columns of an n × n invertible matrix. Thus ℝⁿ has infinitely many different bases.

If the set of vectors is exactly the number required to produce the rest of the vectors in the space (neither too many nor too few), then they form a basis.

2.6.1 Pivot Rows and Columns of a Matrix and Basis

The pivot columns of a matrix A are a basis for its column space. The pivot rows of A are a basis for its row space. The pivot rows and columns of the echelon form R (reduced form) also form a basis for its row space and column space respectively. The columns of the n × n identity matrix give the standard basis for ℝⁿ.

2.7 Dimension, Rank and Matrices

The dimension of a vector space is the number of basis vectors in that space. The rank of the matrix is the dimension of the column space. The rank is also the dimension of the row space of that matrix.

2.7.1 Dimension of Whole Matrix

The dimension of the whole n × n matrix space is n². For a 2 × 2 matrix, considering the basis

A_1, A_2, A_3, A_4 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}    (2.9)

the dimension is 4.

2.7.2 Dimension of Upper Triangular Matrix

The dimension of the subspace of upper triangular matrices is ½n² + ½n. For the example basis in eqn. (2.9) above, A_1, A_2 and A_4 form a basis for the 2 × 2 upper triangular matrices and hence the dimension is 3.

2.7.3 Dimension of Diagonal Matrix

The dimension of the subspace of diagonal matrices is n. For the example basis in eqn.(2.9) above, A1 and A4 form a basis for diagonal matrices. Hence dimension is 2.

2.7.4 Dimension of Symmetric Matrix

The dimension of the subspace of symmetric matrices is ½n² + ½n.

For the example basis in eqn. (2.9) above, A_1, A_4 and the symmetric combination A_2 + A_3 form a basis for the 2 × 2 symmetric matrices. Hence the dimension is 3.

2.8 The Null Space and the Four Subspaces of a Matrix

The null space of a given matrix A consists of all solutions to Ax = 0. It is denoted by N(A). One immediate solution to Ax = 0 is x = 0. For invertible matrices, this is the only solution. For other matrices that are not invertible, i.e., singular matrices, there are non-zero solutions to Ax = 0. Each such solution x belongs to the null space N(A). The m × n matrix A can be square or rectangular. The solution vector x has n components in this case, so the solutions are vectors in ℝⁿ and the null space is a subspace of ℝⁿ.

The column space of an m × n matrix A, denoted by C(A), is a subspace of ℝᵐ. The row space of an m × n matrix A, denoted by C(Aᵀ), is a subspace of ℝⁿ.

The matrices A and Aᵀ are usually different. Their column spaces and null spaces are also different. For the m × n matrix A, the left null space consists of all solutions to Aᵀy = 0. It is denoted by N(Aᵀ). The left null space is a subspace of ℝᵐ, since the solution vector y has m components in this case.

2.9 The Complete Solution to Rx = 0

The matrix A can generally be reduced to its row echelon form R so that the four subspaces can be easily identified. This is possible because the dimensions of the four subspaces are the same for A and R. The basis for each subspace is found and its dimension is checked as follows. Consider the following example of a 3 × 5 matrix in reduced form R:

R = \begin{bmatrix} 1 & 2 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}    (2.10)

The pivot rows in the above matrix are 1 and 2 and the pivot columns are 1 and 4. Thus the rank of this matrix R is 2, since there are two pivots and the two pivots form a 2 × 2 identity matrix.

2.9.1 Row Space

The row space of the above matrix R contains combinations of all three rows. However, the third row contains only zeros. The first two rows are non-zero, independent, and form a basis since they span the row space C(Rᵀ). The dimension of the row space, which equals the rank, is 2.


2.9.2 Column Space

The column space of the above matrix R contains combinations of all five columns. However, the pivot columns are 1 and 4, and they form a basis for the column space C(R). They are independent since they form the 2 × 2 identity matrix, and all other columns are linear combinations of these two columns; thus columns 1 and 4 span the column space. Also, no combination of columns 1 and 4 gives the zero column, hence they are independent and form a basis for the column space C(R). The dimension of the column space is the rank, 2. This is the same as for the row space, since both the pivot rows and the pivot columns form a 2 × 2 identity matrix of rank 2.

2.9.3 Dimension of Null Space

In the above example, the number of columns (n) is 5 and the rank (r) is 2. Hence the null space has dimension n − r = 5 − 2 = 3. Thus there are three free variables: columns 2, 3 and 5 are free and yield three special solutions to Rx = 0. Thus the null space N(R) has dimension n − r. From inspection of the matrix R, it is found that:
(a) Columns 1 and 4 are linearly independent and form a 2 × 2 identity matrix when placed together. With all free variables set to zero, the only solution is x_1 = x_2 = ... = x_5 = 0.
(b) Column 2 is two times column 1. The solution vector of Rx = 0 corresponding to column 2 is (−2, 1, 0, 0, 0). This is the special solution for x_2 = 1.
(c) Column 3 is five times column 1. The solution vector of Rx = 0 corresponding to column 3 is (−5, 0, 1, 0, 0). This is the special solution for x_3 = 1.
(d) Column 5 is seven times column 1 plus three times column 4. The solution vector of Rx = 0 corresponding to column 5 is (−7, 0, 0, −3, 1). This is the special solution for x_5 = 1.


(e) Thus the null space is spanned by the above three special solutions, i.e.,

N(R) = \text{span}\left\{ \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -5 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -7 \\ 0 \\ 0 \\ -3 \\ 1 \end{bmatrix} \right\}    (2.11)

These solution vectors are independent and hence form a basis. All solutions to Rx = 0 are linear combinations of these three column vectors.
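These special solutions can also be checked numerically; the sketch below uses SymPy (assuming it is available), whose nullspace() method returns exactly this kind of special-solution basis:

```python
from sympy import Matrix

R = Matrix([[1, 2, 5, 0, 7],
            [0, 0, 0, 1, 3],
            [0, 0, 0, 0, 0]])

# Each basis vector sets one free variable (x2, x3 or x5) to 1
for vec in R.nullspace():
    print(vec.T)     # [-2, 1, 0, 0, 0], [-5, 0, 1, 0, 0], [-7, 0, 0, -3, 1]
    print(R * vec)   # the zero vector in every case
```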

2.9.4 The Complete Solution to Rx = 0

Thus the complete solution of Rx = 0 for the given example matrix R can be described in two steps:
(i) If all the free variables are set to zero, the only solution is x_1 = x_2 = ... = x_5 = 0, corresponding to the independent pivot columns 1 and 4.
(ii) For the free columns 2, 3 and 5, which are dependent (linear combinations of columns 1 and/or 4), there are infinitely many solutions. For the given example, the three special solutions which form a basis for the null space are as follows:
(aa) If x_2 = 1 is chosen, then x_1 = −2 and x_3 = x_4 = x_5 = 0. Thus all vectors which satisfy x_1 = −2x_2 and x_3 = x_4 = x_5 = 0 are solutions to Rx = 0. This corresponds to

x = \begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}    (2.12)

(ab) If x_3 = 1 is chosen, then x_1 = −5 and x_2 = x_4 = x_5 = 0. Thus all vectors which satisfy x_1 = −5x_3 and x_2 = x_4 = x_5 = 0 are solutions to Rx = 0. This corresponds to

x = \begin{bmatrix} -5 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}    (2.13)

(ac) If x_5 = 1 is chosen, then x_1 = −7, x_4 = −3 and x_2 = x_3 = 0. Thus all vectors which satisfy x_1 = −7x_5, x_4 = −3x_5 and x_2 = x_3 = 0 are solutions to Rx = 0. This corresponds to

x = \begin{bmatrix} -7 \\ 0 \\ 0 \\ -3 \\ 1 \end{bmatrix}    (2.14)

2.10 The Complete Solution to Ax = 0

The dimensions of the four subspaces of a matrix A are the same as those of its reduced form R. A can be reduced to R through an elimination matrix E. This invertible elimination matrix E is a product of elementary matrices such that R = EA and A = E⁻¹R. For the reduced form R given by

R = \begin{bmatrix} 1 & 2 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}    (2.15)

a matrix A can be chosen such that it reduces to R, for example

A = \begin{bmatrix} 1 & 2 & 5 & 0 & 7 \\ 0 & 0 & 0 & 1 & 3 \\ 2 & 4 & 10 & 1 & 17 \end{bmatrix}    (2.16)

It can be seen from the above matrices A and R that the following properties hold:

(a) A has the same row space as R. Thus the dimensions are also the same and the basis is also the same. This is because every row of A is a combination of the rows of R. Elimination changes rows but not row spaces.
(b) The matrix A may not have the same column space as R. This is because columns of R may end in zeros while the corresponding columns of A may not. However, for every matrix, the number of independent rows equals the number of independent columns. Thus the dimensions of the column spaces of A and R are the same even though the column spaces themselves are different, because the number of pivot columns (which are independent) is the same for A and R. Thus the pivot columns of A form a basis for its column space.
(c) The matrix A has the same null space as R and hence the same dimension and the same basis. This is because the elimination steps that reduce A to R do not change the solutions to Ax = 0.
(d) The dimension of the left null space of A, i.e., of the null space of Aᵀ, is the same as that of the left null space of R, since the column spaces of A and R have the same dimension.

For the m × n matrix A, if the column space has dimension r, then the dimension of its null space is n − r. The whole space is ℝⁿ, so that r + (n − r) = n. Similarly, for the transpose of A, i.e., the n × m matrix Aᵀ, the column space has the same dimension r while the dimension of its null space is m − r. The whole space is ℝᵐ, so that r + (m − r) = m.

The Fundamental Theorem of Linear Algebra for an m × n matrix A is given as follows: the column space and row space both have dimension r for A as well as Aᵀ. The null spaces have dimensions n − r for A and m − r for Aᵀ.

2.11 The Complete Solution to Ax = b

The solution to Ax = b when b = 0 was dealt with in the previous sections. The elimination operation reduced the matrix A to its reduced form R and converted the problem to solving Rx = 0. The free variables were assigned the values one and zero so that the pivot variables could be found by back substitution. Since x = 0 is one of the solutions when b = 0, the solution x was in the null space of A. When the right hand side of Ax = b, i.e., b, is not zero, then the solution can be separated into two parts as x = x_p + x_n, where x is known as the complete solution, x_p is known as the particular solution and x_n is known as the null space solution. There are four possibilities for the complete solution of Ax = b depending on the rank r, which are discussed as follows with the consideration that A is an m × n non-zero matrix.

2.11.1 Full Row Rank (r = m); Full Column Rank (r = n)

This condition occurs when A is a non-singular square matrix. Thus A is invertible and Ax = b has exactly one solution. The Null space N (A) contains the zero vector only. The column vector b is a linear combination of one or more column vectors of the matrix A. In this case, the complete solution x = xp + 0.

Example 1
Let

A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 1 & 1 \end{bmatrix}    (2.17)

and

b = \begin{bmatrix} 1 \\ 6 \\ 1 \end{bmatrix}    (2.18)

A is a non-singular matrix since its determinant |A| = 2 ≠ 0. Thus the solution can be found from x = A⁻¹b as given below:

x = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & -0.5 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 6 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ -2 \end{bmatrix}    (2.19)

Thus the solution is unique (the only solution), given by x = [1 3 −2]ᵀ.
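The same result can be obtained numerically (a NumPy sketch of this example):

```python
import numpy as np

A = np.array([[1., 0., 0.],
              [0., 2., 0.],
              [0., 1., 1.]])
b = np.array([1., 6., 1.])

# A is square and non-singular, so Ax = b has exactly one solution
x = np.linalg.solve(A, b)
print(x)           # [ 1.  3. -2.]
print(A @ x - b)   # residual is (numerically) zero
```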

2.11.2 Full Row Rank (r = m); Column Rank Less (r < n)

In this case, Ax = b has an infinite number of solutions. Every matrix A with full row rank, i.e., r = m, satisfies the following properties:
(a) All rows have pivots and the reduced form R has no zero rows.
(b) There is a solution to Ax = b for any and every right hand side b.
(c) The null space N(A) contains the zero vector x = 0 and also n − r = n − m special solution vectors such that Ax = 0. Thus the null space has a basis consisting of n − m non-zero vectors, and there are infinitely many solution vectors, which are the linear combinations of these non-zero basis vectors.

Example 2
Let

A = \begin{bmatrix} 1 & 5 & 0 & 3 \\ 0 & 0 & 1 & 2 \\ 1 & 5 & 1 & 5 \end{bmatrix}    (2.20)

and

b = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}    (2.21)

To solve this easily, the system of equations Ax = b is reduced to a simpler system Rx = d by using the augmented matrix [A b] (wherein b is added as an extra column to the matrix A) and applying elimination steps to this augmented matrix. Thus for the above system, the augmented matrix is given by

[A\ b] = \begin{bmatrix} 1 & 5 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 & 3 \\ 1 & 5 & 1 & 5 & 4 \end{bmatrix}    (2.22)

Applying the elimination step R3 → R3 − R1 gives

\begin{bmatrix} 1 & 5 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 2 & 3 \end{bmatrix}    (2.23)

Applying the elimination step R3 → R3 − R2 gives

[R\ d] = \begin{bmatrix} 1 & 5 & 0 & 3 & 1 \\ 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}    (2.24)

Let the solution vector be x = [x_1 x_2 x_3 x_4]ᵀ. The pivot columns of R are 1 and 3 while the free columns are 2 and 4. Setting the free variables x_2 and x_4 equal to zero, back-substitution gives x_3 = 3 and x_1 = 1. Thus the particular solution of Ax = b for the given b is x_p = [1 0 3 0]ᵀ. However, since the rank of the matrix A is 2 while the number of columns is 4, the null space N(A) contains n − r = 4 − 2 = 2 special solution vectors apart from the zero vector. These two vectors form a basis of the null space and their linear combinations give an infinite number of solutions to Ax = b. For the given Ax = b, the basis of the null space is found by reversing the signs of the entries 5, 3 and 2 in the free columns of R:

s_1 = \begin{bmatrix} -5 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \quad s_2 = \begin{bmatrix} -3 \\ 0 \\ -2 \\ 1 \end{bmatrix}    (2.25)

Thus the complete solution to Ax = b in this case is given by x = x_p + x_n, where x_p is the particular solution which solves Ax_p = b and x_n is a combination of the n − r special solutions which solve Ax_n = 0. For the given example, the complete solution is

x = \begin{bmatrix} 1 \\ 0 \\ 3 \\ 0 \end{bmatrix} + x_2 \begin{bmatrix} -5 \\ 1 \\ 0 \\ 0 \end{bmatrix} + x_4 \begin{bmatrix} -3 \\ 0 \\ -2 \\ 1 \end{bmatrix}    (2.26)

As can be seen from the above system of equations, Ax = b comprises three planes in the x_1x_2x_3x_4 space. The first and third planes are parallel to each other, while the first and second planes are not parallel and intersect along lines. Adding the null space vectors x_n moves the solution along these lines of intersection, and hence within the plane containing them. Then x = x_p + x_n gives the complete solution.
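A quick numerical check of the complete solution above (a NumPy sketch; the values chosen for the free variables x_2 and x_4 are arbitrary):

```python
import numpy as np

A = np.array([[1., 5., 0., 3.],
              [0., 0., 1., 2.],
              [1., 5., 1., 5.]])
b = np.array([1., 3., 4.])

xp = np.array([1., 0., 3., 0.])    # particular solution
s1 = np.array([-5., 1., 0., 0.])   # special solution for x2 = 1
s2 = np.array([-3., 0., -2., 1.])  # special solution for x4 = 1

for x2, x4 in [(0.0, 0.0), (1.5, -2.0), (-7.0, 3.0)]:
    x = xp + x2 * s1 + x4 * s2
    print(np.allclose(A @ x, b))   # True for every choice of x2, x4
```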

2.11.3 Full Column Rank (r = n); Row Rank Less (r < m)

In this case, Ax = b has either no solution or exactly one solution. Every matrix A with full column rank, i.e., r = n, satisfies the following properties:
(a) All columns of the matrix A are pivot columns.
(b) Thus the null space contains the zero vector x = 0 only.
(c) If Ax = b has a solution, then it has only one solution. Otherwise there is no solution.

Example 3

TO BE CONTINUED
2.11.4 Row Rank and Column Rank Less (r < m; r < n)

In this case, Ax = b has either no solution or an infinite number of solutions. In practice, this situation rarely occurs.

2.12 Orthogonality of the Four Subspaces

Two vectors are said to be orthogonal to each other if their dot product is zero, i.e., v·w = 0 or vᵀw = 0. If v and w are the perpendicular sides of a right-angled triangle, then

||v||² + ||w||² = ||v + w||²

The right hand side of the above expression can also be written as

||v + w||² = (v + w)ᵀ(v + w) = vᵀv + vᵀw + wᵀv + wᵀw

This equals vᵀv + wᵀw if and only if vᵀw = wᵀv = 0, which establishes the orthogonality condition.

The Fundamental Theorem of Linear Algebra consists of two parts:
(a) Part I. The row and column spaces have the same dimension r, and the two null spaces N(A) and N(Aᵀ) have the remaining dimensions n − r and m − r.
(b) Part II. The row space is perpendicular to the null space of A, i.e., N(A), while the column space is perpendicular to the null space of Aᵀ, i.e., N(Aᵀ). Thus the row space C(Aᵀ) and the null space N(A) are orthogonal subspaces of ℝⁿ, while the column space C(A) and the left null space N(Aᵀ) are orthogonal subspaces of ℝᵐ.

When b is not in the column space of A, we cannot get an exact solution to Ax = b. In that case, the least squares solution of Ax = b is computed using the error e = b − Ax, which lies in the null space of Aᵀ.
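The orthogonality of the row space and the null space can be checked numerically for the matrix A of eqn. (2.16) (a NumPy sketch):

```python
import numpy as np

A = np.array([[1., 2., 5., 0., 7.],
              [0., 0., 0., 1., 3.],
              [2., 4., 10., 1., 17.]])

# Special solutions spanning N(A) (the same as N(R) found earlier)
null_basis = np.array([[-2., 1., 0., 0., 0.],
                       [-5., 0., 1., 0., 0.],
                       [-7., 0., 0., -3., 1.]])

# Every row of A (a row space vector) is orthogonal to every null space vector
print(A @ null_basis.T)   # a 3x3 block of zeros
```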

2.13 Projections

Projection of a vector, say b, onto a line is the part of b along that line. Similarly, the projection of a vector b onto a plane is the part of b in that plane. If p is the projection vector, then it is given by p = Pb, where P is the projection matrix. For example, considering the standard three-dimensional space xyz, if a point is described by the vector b = (x_1, y_1, z_1), then the projection of b along the z-axis can be found with the help of the projection matrix given by

P = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}    (2.27)


The projection of b onto the xy plane can be found with the help of the projection matrix

P = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}    (2.28)

Every subspace of ℝᵐ has its own m × m projection matrix.

A subspace is best described by its basis. Collecting the basis vectors as the columns of a matrix A, the projection matrix P projects any vector b onto the column space of that m × n matrix A. In the case of a line, the dimension is one and the matrix A has only one column. For example,

a = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}    (2.29)

gives the z-axis. Similarly, in the case of the two-dimensional xy plane,

A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}    (2.30)

The z-axis (a line) and the xy plane are orthogonal complements, since their dimensions add up to three and the sum of their projection matrices P_1 (eqn. 2.27) and P_2 (eqn. 2.28) is the identity matrix, i.e., P_1 + P_2 = I. Thus b = P_1b + P_2b = (P_1 + P_2)b = Ib.

2.13.1 Projection Onto a Line

The projection p of a vector b onto the line through another vector a is given by the product of a scalar x̂ and a, i.e., p is a multiple of a. The key to projection is orthogonality: the perpendicular dropped from b onto the line through a determines the error vector e, given by

e = b - \hat{x}a    (2.31)


Since e is perpendicular to a, we can determine the scalar x̂ from the fact that the dot product of a and e is zero. Thus

a \cdot e = 0 \;\Rightarrow\; a \cdot (b - \hat{x}a) = 0 \;\Rightarrow\; a \cdot b - \hat{x}\, a \cdot a = 0 \;\Rightarrow\; \hat{x} = \frac{a \cdot b}{a \cdot a} = \frac{a^T b}{a^T a}

If b = a, then x̂ = 1 and the projection of a onto a is a itself. If b is perpendicular to a, then aᵀb = 0 and the projection is p = 0, i.e., x̂ = 0.

Example 1
Problem: Project the vector b = [1 1 1]ᵀ onto the vector a = [1 2 2]ᵀ.
Solution: Here we need to find the projection of b onto a, given by p = x̂a. From the formula given above,

\hat{x} = \frac{a^T b}{a^T a} = \frac{5}{9}    (2.32)

Hence the projection vector p is given by

p = \hat{x}a = \frac{5}{9}a = \begin{bmatrix} 5/9 \\ 10/9 \\ 10/9 \end{bmatrix}    (2.33)

The error vector between b and p is e = b − p. Thus

e = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 5/9 \\ 10/9 \\ 10/9 \end{bmatrix} = \begin{bmatrix} 4/9 \\ -1/9 \\ -1/9 \end{bmatrix}    (2.34)

This error e must be perpendicular to a. This can be verified as

e^T a = \begin{bmatrix} 4/9 & -1/9 & -1/9 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} = 0
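The same projection can be computed numerically (a NumPy sketch of Example 1):

```python
import numpy as np

a = np.array([1., 2., 2.])
b = np.array([1., 1., 1.])

x_hat = (a @ b) / (a @ a)   # 5/9
p = x_hat * a               # [5/9, 10/9, 10/9]
e = b - p                   # [4/9, -1/9, -1/9]

print(x_hat, p, e)
print(np.isclose(e @ a, 0.0))   # True: the error is perpendicular to a
```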

2.13.2 Projection with Trigonometry

As shown in the above example, the vector b has been split into two parts:
(a) the component along the line through a, given by p, and
(b) the component perpendicular to the line through a, given by e.
These two components form the sides of a right-angled triangle with lengths ||b|| cos θ and ||b|| sin θ. Thus the components of b from trigonometry are given as

||p|| = ||b|| \cos\theta \quad \text{and} \quad ||e|| = ||b|| \sin\theta

These formulas involve calculating square roots. Hence the better way to arrive at the projection is through p = x̂a, with

\hat{x} = \frac{a^T b}{a^T a}

Thus the projection matrix can now be formulated from p = x̂a:

p = a\hat{x} = a\,\frac{a^T b}{a^T a} = \frac{aa^T}{a^T a}\, b = Pb

Thus the projection matrix P onto the line through a is given by

P = \frac{aa^T}{a^T a}

Example 2
For Example 1, calculate the projection matrix P.

P = \frac{aa^T}{a^T a} = \frac{1}{9}\begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}\begin{bmatrix} 1 & 2 & 2 \end{bmatrix} = \frac{1}{9}\begin{bmatrix} 1 & 2 & 2 \\ 2 & 4 & 4 \\ 2 & 4 & 4 \end{bmatrix}    (2.35)

This matrix projects any vector b onto a, as can be verified for the b of Example 1:

p = Pb = \frac{1}{9}\begin{bmatrix} 1 & 2 & 2 \\ 2 & 4 & 4 \\ 2 & 4 & 4 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 5/9 \\ 10/9 \\ 10/9 \end{bmatrix}    (2.36)

Note: If the vector a is doubled, the matrix P stays the same; it still projects onto the same line. Squaring the matrix gives P² = P, since projecting a second time changes nothing. When P projects onto one subspace, I − P projects onto the perpendicular subspace; applied to b, it gives the error vector e = (I − P)b.

2.13.3 Projection Onto a Subspace

The above results are for the one-dimensional problem of projection onto a line, i.e., p = x̂a is the projection of any vector b onto the line passing through a. In the case of an n-dimensional subspace, we start with n vectors a_1, a_2, ..., a_n in ℝᵐ, where all the a's are independent. The projection onto this n-dimensional subspace is given by

p = \hat{x}_1 a_1 + \hat{x}_2 a_2 + \cdots + \hat{x}_n a_n = A\hat{x}

This projection, which is closest to b, is calculated from

A^T(b - A\hat{x}) = 0, \quad \text{which gives} \quad A^T A\hat{x} = A^T b

The symmetric matrix AᵀA is an n × n matrix and is invertible since the a's are independent. The solution is given by

\hat{x} = (A^T A)^{-1} A^T b

Thus the projection p can now be given as

p = A\hat{x} = A(A^T A)^{-1} A^T b

Hence, if p = Pb is the projection of b onto the column space of A, then the projection matrix P is given by the formula

P = A(A^T A)^{-1} A^T    (2.37)

If the matrix A is rectangular, then it has no inverse matrix. However, when A has independent columns, AᵀA is invertible. In fact, for every matrix A, AᵀA has the same null space as A.

Proof: If x is in the null space of A, then Ax = 0. Multiplying by Aᵀ gives AᵀAx = 0, so x is also in the null space of AᵀA. Conversely, to show that every vector in the null space of AᵀA is in the null space of A, suppose AᵀAx = 0. Multiplying by xᵀ gives xᵀAᵀAx = 0, i.e., (Ax)ᵀ(Ax) = ||Ax||² = 0. Thus Ax has length zero, so Ax = 0. Hence the two null spaces are equal.

Note:
1. The projection of b onto the column space of A is p = x̂_1 a_1 + x̂_2 a_2 + ... + x̂_n a_n = Ax̂, with x̂ = (AᵀA)⁻¹Aᵀb.    (2.38)
2. The error is given by e = b − p = b − Ax̂.    (2.39)
3. The projection matrix P has two properties, namely Pᵀ = P and P² = P.    (2.40)
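A short numerical sketch of these formulas (NumPy; the basis matrix A below is the same one that appears in the least squares example later in these notes):

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 1.],
              [2., 1.]])

# Projection matrix onto the column space of A
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P, P.T))     # True: P is symmetric
print(np.allclose(P @ P, P))   # True: projecting twice changes nothing
```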

2.14 Least Squares Solution to Ax = b

If x is an exact solution of Ax = b, then the error defined by e = b − Ax will be equal to zero. This is the case when the matrix A is invertible or, more generally, when b is in the column space of A. However, quite often Ax = b has no solution. This happens when A has full column rank (r = n) and row rank less than m (r < m), i.e., the matrix has more rows than columns. Simply put, there are more equations than unknowns (m > n). In this case, an exact solution is possible if and only if b is in the column space of A.


In most practical cases of a control problem, if all the measurements are perfect, then b will be in the column space of A. However, if the measurements include noise, external disturbances or uncertainties, then b lies outside the column space of A. In this case, the error e will not be zero. Thus we have the special case of the least squares solution, where x̂ is the solution for which the length of e is as small as possible.

Thus if we consider e = b − Ax, then e is zero when x is an exact solution of Ax = b, or in other words when b is in the column space of A. If b is not in the column space of A, then e is not zero, and the problem is to make this error e as small as possible. When the length of e is as small as possible, x̂ is the least squares solution. When Ax = b has no exact solution, we consider the projection p of b onto the column space of A, given by p = Ax̂. This projection, which is closest to b, is calculated from

A^T(b - A\hat{x}) = 0    (2.41)

or

A^T A\hat{x} = A^T b    (2.42)

Thus the solution is given by

\hat{x} = (A^T A)^{-1} A^T b    (2.43)

It can be shown that x̂ is the least squares solution, i.e., the best solution of Ax = b, which minimises the error e = b − Ax; this can be established through geometry, algebra, or calculus (by setting the derivative of the squared error to zero).

Example 3
Find the closest line to the points (0, 6), (1, 0) and (2, 0).
Solution: Let y = Cx + D be the equation of the straight line. For the given problem, let the points be (x_1, y_1), (x_2, y_2) and (x_3, y_3). Thus x_1 = 0, x_2 = 1, x_3 = 2. Substituting x_1, x_2, x_3 in y = Cx + D, we get y_1 = D, y_2 = C + D, y_3 = 2C + D.

Since y_1 = 6 and y_2 = y_3 = 0, we get the set of equations

D = 6, \quad C + D = 0, \quad 2C + D = 0    (2.44)

The above system of equations does not have a solution. Expressing the system in matrix form, we get

\begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} C \\ D \end{bmatrix} = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix}    (2.45)

Considering

A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix}, \quad x = \begin{bmatrix} C \\ D \end{bmatrix}, \quad b = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix}

Ax = b is not solvable since b is not in the column space of A. To find the least squares solution, we solve AᵀAx̂ = Aᵀb, i.e.,

\begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \hat{x} = \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix}, \quad \text{i.e.,} \quad \begin{bmatrix} 5 & 3 \\ 3 & 3 \end{bmatrix} \hat{x} = \begin{bmatrix} 0 \\ 6 \end{bmatrix}    (2.46)

Thus

\hat{x} = \begin{bmatrix} 5 & 3 \\ 3 & 3 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 6 \end{bmatrix} = \begin{bmatrix} -3 \\ 5 \end{bmatrix}    (2.47)

Thus with C = −3 and D = 5, y = −3x + 5 is the equation of the line closest to the three given points. The projection p of b onto the column space of A is then given by

p = A\hat{x} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} -3 \\ 5 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ -1 \end{bmatrix}    (2.48)

Thus the error e = b − p is given as

e = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 5 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix}    (2.49)

The squared length of e for this solution is

||e||^2 = e^T e = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} = 6

To check and verify:
(1) e must be perpendicular to both columns of A. Hence

e^T A_1 = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} = 0    (2.50)

e^T A_2 = \begin{bmatrix} 1 & -2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} = 0

which verifies the same.
(2) p = Pb. To verify this, first we find P:

P = A(A^T A)^{-1} A^T = \begin{bmatrix} 0 & 1 \\ 1 & 1 \\ 2 & 1 \end{bmatrix} \cdot \frac{1}{6}\begin{bmatrix} 3 & -3 \\ -3 & 5 \end{bmatrix} \cdot \begin{bmatrix} 0 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} = \frac{1}{6}\begin{bmatrix} 5 & 2 & -1 \\ 2 & 2 & 2 \\ -1 & 2 & 5 \end{bmatrix}    (2.51)

Hence, calculating

p = Pb = \frac{1}{6}\begin{bmatrix} 5 & 2 & -1 \\ 2 & 2 & 2 \\ -1 & 2 & 5 \end{bmatrix} \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 5 \\ 2 \\ -1 \end{bmatrix}    (2.52)

which agrees with the earlier result.
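The same fit can be obtained with NumPy's least squares routine (a sketch of Example 3):

```python
import numpy as np

A = np.array([[0., 1.],
              [1., 1.],
              [2., 1.]])
b = np.array([6., 0., 0.])

# Solve the normal equations A^T A x = A^T b directly ...
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
# ... or use the built-in least squares solver
x_lstsq, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal)   # [-3.  5.]  ->  y = -3x + 5
print(x_lstsq)    # same solution
print(residual)   # [6.]  ->  ||e||^2 = 6
```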


Chapter 3 Matrices and Determinants


3.1 Introduction

In this chapter, properties of determinants are discussed.

3.2 Determinant of a Square Matrix

The determinant of a square matrix is a scalar which can immediately tell us whether the given matrix is invertible or not. If the determinant is zero, i.e., the matrix is singular then it is not invertible. A square matrix is invertible if and only if its determinant is not equal to zero.


Chapter 4 Solution to Dynamic Problems with Eigen Values and Eigen Vectors
4.1 Introduction

In this chapter, eigen values and eigen vectors are discussed.

4.2 Linear Differential Equations and Matrices


Steady state problems can be expressed by linear equations of the form Ax = b. Dynamic problems are those of the form du/dt = Au. Their solutions change with time and can decay with time, grow with time or oscillate. Eigenvalues and eigenvectors help us arrive at the solution of these differential equations, expressed in matrix form, in simple and easy steps.
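As a small illustrative sketch (the system matrix below is hypothetical, chosen only as an example), the solution of du/dt = Au can be written as a combination of eigenvectors, u(t) = Σ c_i e^{λ_i t} x_i, which NumPy can evaluate directly:

```python
import numpy as np

A = np.array([[0., 1.],
              [-2., -3.]])    # example system matrix (eigenvalues -1 and -2)
u0 = np.array([1., 0.])       # initial condition u(0)

lam, X = np.linalg.eig(A)     # eigenvalues lam, eigenvectors as columns of X
c = np.linalg.solve(X, u0)    # expand u0 in the eigenvector basis

def u(t):
    # u(t) = sum_i c_i * exp(lam_i * t) * x_i
    return X @ (c * np.exp(lam * t))

print(u(0.0))   # recovers u0
print(u(1.0))   # decaying solution, since both eigenvalues are negative
```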


Chapter 5 Matrices and Linear Transformations


5.1 Introduction

In this chapter, linear transformations and their applications using matrices are discussed.

5.2 Linear Transformations

A transformation is like a function. In the case of a function, for every input x the output is expressed as f(x). Similarly, if v is an input vector in the vector space V, then a transformation T assigns an output T(v) to each input vector v. A linear transformation is a transformation which satisfies the two conditions of homogeneity and superposition.
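Every matrix gives a linear transformation T(v) = Av; a quick numerical check of homogeneity and superposition (a NumPy sketch with an arbitrary example matrix):

```python
import numpy as np

A = np.array([[1., 2.],
              [0., 3.]])      # arbitrary matrix defining T(v) = A v

def T(v):
    return A @ v

u = np.array([1., -1.])
v = np.array([2., 4.])
alpha, beta = 2.5, -0.5

# Superposition and homogeneity: T(alpha*u + beta*v) = alpha*T(u) + beta*T(v)
print(np.allclose(T(alpha * u + beta * v), alpha * T(u) + beta * T(v)))   # True
```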

