

Chapter 2. Matrix Eigenvalue Problems


Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
Chapter 2
Linear Algebra:
Matrix Eigenvalue Problems

Fall 2014

HeeChang LIM Ph.D.
Engineering Building 9. (213)

Tel. 051-510-2302
Reference:
1. Hildebrand, F. B. (1965), Methods of Applied Mathematics, Prentice-Hall: Englewood Cliffs, New Jersey.
2. Kreyszig, Erwin (2006), Advanced Engineering Mathematics, Wiley.


- From the viewpoint of engineering applications, eigenvalue problems are among the most important problems in connection with matrices. We begin by defining the basic concepts and show how to solve these problems.
- Let A = [ajk] be a given n × n matrix and consider the vector equation

  Ax = λx.    (1)

  Here x is an unknown vector and λ an unknown scalar. Our task is to determine λ's and x's that satisfy (1).
- Geometrically, we are looking for vectors x for which multiplication by A has the same effect as multiplication by the scalar λ; in other words, Ax should be proportional to x.
- Clearly, the zero vector x = 0 is a solution of (1) for any value of λ, because A0 = 0. This is of no interest. A value of λ for which (1) has a solution x ≠ 0 is called an eigenvalue or characteristic value (or latent root) of the matrix A. The corresponding solutions x ≠ 0 of (1) are called the eigenvectors of A corresponding to that eigenvalue λ.
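Equation (1) can be checked numerically. The sketch below is a minimal illustration using NumPy, with an example matrix of my own choosing (not taken from the slides): it computes eigenvalues and eigenvectors and verifies Ax = λx for each eigenpair.

```python
import numpy as np

# Example matrix (my own choice, for illustration only).
A = np.array([[-5.0, 2.0],
              [ 2.0, -2.0]])

# np.linalg.eig returns the eigenvalues w and a matrix V
# whose columns are the corresponding eigenvectors.
w, V = np.linalg.eig(A)

# Verify the defining relation (1): A x = lambda x for each pair.
for lam, x in zip(w, V.T):
    assert np.allclose(A @ x, lam * x)

print(np.sort(w))
```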
How to Find Eigenvalues and Eigenvectors
- The problem of determining the eigenvalues and eigenvectors of a matrix is
called an eigenvalue problem.
- Such problems occur in physical, technical, geometric, and other applications.
Example 1
Generalization of this procedure
- Eq. (1) written in components is

  a11 x1 + a12 x2 + … + a1n xn = λ x1
  a21 x1 + a22 x2 + … + a2n xn = λ x2
  ⋯
  an1 x1 + an2 x2 + … + ann xn = λ xn.

  Transferring the terms on the right side to the left side, we obtain the homogeneous linear system (A − λI)x = 0.
- By Cramer's theorem, this homogeneous linear system of equations has a nontrivial solution if and only if the corresponding determinant of the coefficients is zero, that is, if and only if det(A − λI) = 0.
Theorem 1
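The statement of Theorem 1 is not reproduced in the extracted text; assuming it is the usual result that the eigenvalues of A are the roots of the characteristic equation det(A − λI) = 0, the sketch below (NumPy, example matrix of my own choosing) forms the characteristic polynomial and checks that its roots agree with the eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Coefficients of the characteristic polynomial det(A - lambda*I),
# highest power first: lambda^2 - 7*lambda + 10 for this A.
coeffs = np.poly(A)

# Its roots coincide with the eigenvalues returned by eigvals.
roots = np.sort(np.roots(coeffs))
eigs = np.sort(np.linalg.eigvals(A))
assert np.allclose(roots, eigs)
print(eigs)
```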
Theorem 2
Proof
Example 2
Example 1
Example 2
Symmetric, Skew-Symmetric and Orthogonal Matrices
Definitions
Example 1
Example 2
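The definitions and examples on this slide are not reproduced in the extracted text. Assuming the standard definitions (A is symmetric if A^T = A, skew-symmetric if A^T = −A, and orthogonal if A^T = A^(−1)), the sketch below (NumPy, matrices of my own choosing) illustrates the characteristic eigenvalue behaviour of each class.

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric: S^T == S
K = np.array([[0.0, 3.0], [-3.0, 0.0]])   # skew-symmetric: K^T == -K
c, s = np.cos(0.7), np.sin(0.7)
Q = np.array([[c, -s], [s, c]])           # orthogonal (a plane rotation)

assert np.allclose(S.T, S)
assert np.allclose(K.T, -K)
assert np.allclose(Q.T @ Q, np.eye(2))

# Symmetric: all eigenvalues are real.
assert np.allclose(np.linalg.eigvals(S).imag, 0.0)
# Skew-symmetric: all eigenvalues are purely imaginary (or zero).
assert np.allclose(np.linalg.eigvals(K).real, 0.0)
# Orthogonal: all eigenvalues have absolute value 1.
assert np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)
```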
Orthogonal Matrix?
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors), i.e.,

  Q^T Q = Q Q^T = I,

where I is the identity matrix.

This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse:

  Q^T = Q^(−1).

An orthogonal matrix is necessarily invertible (with inverse Q^(−1) = Q^T), unitary (Q^(−1) = Q*), and therefore normal (Q*Q = QQ*) in the reals. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the dot product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation or reflection. In other words, it is a unitary transformation.
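The properties listed above (transpose equals inverse, determinant ±1, preservation of dot products) can all be verified numerically; a minimal sketch with NumPy, using a reflection matrix of my own choosing:

```python
import numpy as np

# A reflection across the line x1 = x2 (orthogonal, determinant -1).
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])

assert np.allclose(Q.T @ Q, np.eye(2))         # orthonormal columns: Q^T Q = I
assert np.allclose(Q.T, np.linalg.inv(Q))      # transpose equals inverse
assert np.isclose(abs(np.linalg.det(Q)), 1.0)  # determinant is +1 or -1

# As a linear transformation, Q preserves dot products (it is an isometry).
rng = np.random.default_rng(0)
a, b = rng.standard_normal(2), rng.standard_normal(2)
assert np.isclose((Q @ a) @ (Q @ b), a @ b)
```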
Theorem 1
Example 3
Orthogonal Transformations and Orthogonal Matrices
Theorem 2
Theorem 3
Theorem 4
Theorem 5
Example 5
Eigenbases. Diagonalization. Quadratic Forms
So far we have emphasized properties of eigenvalues. We now turn to general properties of eigenvectors. Eigenvectors of an n × n matrix A may (or may not!) form a basis for R^n. An eigenbasis (basis of eigenvectors), if it exists, is of great advantage, because then we can represent any x in R^n uniquely as a linear combination of the eigenvectors x1, …, xn, say,

  x = c1 x1 + c2 x2 + … + cn xn.

And, denoting the corresponding eigenvalues of the matrix A by λ1, …, λn, we have A xj = λj xj, so that we simply obtain

  y = Ax = A(c1 x1 + c2 x2 + … + cn xn)
         = c1 A x1 + … + cn A xn
         = c1 λ1 x1 + … + cn λn xn.

Now if the eigenvalues are all different, we do obtain a basis:
Theorem 1
Example 1
Example 2
Theorem 2
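The eigenbasis expansion above (x = c1 x1 + … + cn xn, hence Ax = c1 λ1 x1 + … + cn λn xn) can be verified numerically. A minimal sketch with NumPy and an example matrix of my own choosing:

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [3.0, 5.0]])      # distinct eigenvalues 2 and 8, so an eigenbasis exists

w, V = np.linalg.eig(A)         # eigenvectors are the columns of V

# Expand an arbitrary x in the eigenbasis: solve V c = x for the coefficients.
x = np.array([1.0, -4.0])
c = np.linalg.solve(V, x)
assert np.allclose(V @ c, x)

# Then A x equals sum_j c_j * lambda_j * x_j, as in the expansion above.
y = sum(c[j] * w[j] * V[:, j] for j in range(len(w)))
assert np.allclose(y, A @ x)
```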
Diagonalization of Matrices
Definition
Eigenbases also play a role in reducing a matrix A to a diagonal matrix whose entries are the eigenvalues of A. This is done by a similarity transformation,

  Â = P^(−1) A P    (P nonsingular),

in which case Â is called similar to A.

The key property of this transformation is that it preserves the eigenvalues of A:
Theorem 3
Proof
Example 3
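The similarity transformation and its eigenvalue-preserving property can be illustrated numerically; a sketch (NumPy, matrices of my own choosing):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])       # eigenvalues 3 and -1

# Diagonalization: take P = X, the matrix of eigenvectors.
w, X = np.linalg.eig(A)
D = np.linalg.inv(X) @ A @ X
assert np.allclose(D, np.diag(w))        # X^{-1} A X is diagonal

# Any similarity transformation P^{-1} A P preserves the eigenvalues.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # some nonsingular P
B = np.linalg.inv(P) @ A @ P
assert np.allclose(np.sort(np.linalg.eigvals(B)), np.sort(w))
```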
Theorem 4
Example 4
Quadratic Forms. Transformation to Principal Axes
By definition, a quadratic form Q in the components x1, …, xn of a vector x is a sum of n² terms, namely,

  Q = x^T A x = Σ(j=1..n) Σ(k=1..n) ajk xj xk
    = a11 x1² + a12 x1 x2 + … + a1n x1 xn
    + a21 x2 x1 + a22 x2² + … + a2n x2 xn
    + ⋯
    + an1 xn x1 + an2 xn x2 + … + ann xn².    (7)

A = [ajk] is called the coefficient matrix of the form. We may assume that A is symmetric, because we can take off-diagonal terms together in pairs and write the result as a sum of two equal terms (see the example).
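The symmetrization remark can be checked in code; the sketch below (NumPy, example of my own choosing) replaces a non-symmetric coefficient matrix A by its symmetric part (A + A^T)/2 and verifies that the quadratic form is unchanged.

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [6.0, 2.0]])            # non-symmetric coefficient matrix

A_sym = (A + A.T) / 2.0               # pair off-diagonal terms: 4 and 6 both become 5

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(2)
    # x^T A x == x^T A_sym x for every x
    assert np.isclose(x @ A @ x, x @ A_sym @ x)
```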
Example 5
Quadratic forms occur in physics and geometry, for instance, in connection with conic sections (ellipses x1²/a1² + x2²/a2² = 1, etc.) and quadratic surfaces (cones, etc.). Their transformation to principal axes is an important practical task related to the diagonalization of matrices, as follows.
Recall Theorem 2 (the symmetric coefficient matrix A of (7) has an orthonormal basis of eigenvectors). Hence if we take these as column vectors, we obtain a matrix X that is orthogonal, so that X^(−1) = X^T. Thus we have A = X D X^(−1) = X D X^T. Substitution into (7) gives

  (8)  Q = x^T X D X^T x.

If we set X^T x = y, then, since X^(−1) = X^T, we get

  (9)  x = X y.

Furthermore, in (8) we have x^T X = (X^T x)^T = y^T and X^T x = y, so that Q becomes simply

  (10)  Q = y^T D y = λ1 y1² + λ2 y2² + … + λn yn².

This proves the following basic theorem.
Theorem 5
Example 6
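The transformation to principal axes, (7) → (10), can be checked numerically. A minimal sketch using NumPy, with a symmetric coefficient matrix of my own choosing:

```python
import numpy as np

A = np.array([[3.0, 5.0],
              [5.0, 3.0]])                  # symmetric coefficient matrix

# eigh returns the eigenvalues and an orthonormal eigenbasis for symmetric A.
w, X = np.linalg.eigh(A)
assert np.allclose(X.T @ X, np.eye(2))      # X is orthogonal

x = np.array([2.0, -1.0])
y = X.T @ x                                 # principal-axes coordinates, x = X y

Q_form = x @ A @ x                          # the form (7)
Q_axes = sum(w[j] * y[j]**2 for j in range(len(w)))   # the form (10)
assert np.isclose(Q_form, Q_axes)
```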
