Important Definitions
Linear combination: Let V be a vector space and S a nonempty subset of V. A vector v ∈ V is
called a linear combination of vectors of S if there exist a finite number of vectors
u_1, u_2, ..., u_n in S and scalars a_1, a_2, ..., a_n such that
v = a_1 u_1 + a_2 u_2 + ... + a_n u_n.
Span: Let S be a nonempty subset of a vector space V. The span of S, denoted span(S), is the set
consisting of all linear combinations of the vectors in S. For convenience, we define span(∅) =
{0}.
Linear Transformation: Let V and W be vector spaces over F. We call a function T: V → W a
linear transformation from V to W if, for all x, y ∈ V and all c ∈ F, we have (closure under linear
combination)
1. T(x + y) = T(x) + T(y)
2. T(cx) = cT(x)
We will often simply state that T is linear if the above conditions are satisfied.
Null space (kernel) and Nullity: Let V and W be vector spaces, and let T: V → W be linear. We
define the null space (or kernel) N(T) to be the set of all vectors x in V s.t. T(x) = 0; that is,
N(T) = {x ∈ V : T(x) = 0}
Nullity is dim(N(T)).
Image (Range) and Rank: We define the image R(T) of T to be the subset of W consisting of
all images (under the linear transformation T) of vectors in V; that is, R(T) = {T(x) : x ∈ V}. If
{v_1, ..., v_n} is a basis for V, then R(T) = span({T(v_1), ..., T(v_n)}).
The rank is dim(R(T)).
Onto: If f: A → B is a function whose range is B, that is, f(A) = B, then f is called onto. So f is
onto if and only if the range of f equals the codomain of f.

One to One: f: A → B is one to one if f(x) = f(y) implies x = y, or equivalently, if x ≠ y
implies f(x) ≠ f(y).




Subspace: Let V be a vector space and W a subset of V. Then W is a subspace of V iff the
following three conditions hold for the operations defined on V.
1. 0 ∈ W
2. x + y ∈ W whenever x ∈ W and y ∈ W
3. cx ∈ W whenever c ∈ F and x ∈ W
Linearly Independent / Linearly Dependent: A subset S of a vector space V is called linearly
dependent if there exist a finite number of distinct vectors u_1, u_2, ..., u_n in S and scalars
a_1, a_2, ..., a_n, not all zero, such that
a_1 u_1 + a_2 u_2 + ... + a_n u_n = 0
In this case we also say that the vectors of S are linearly dependent; if not, then the subset of
vectors is linearly independent.
Basis: A basis β for a vector space V is a linearly independent subset of V that generates (spans)
V. If β is a basis for V, we also say that the vectors of β form a basis for V.
Dimension: A vector space is called finite-dimensional if it has a basis consisting of a finite
number of vectors. The unique number of vectors in each basis for V is called the dimension of
V and is denoted by dim(V). A vector space that is not finite-dimensional is called infinite-
dimensional.
Coordinates: If β = {u_1, u_2, ..., u_n} is an ordered basis for V and x ∈ V, then the coordinate
vector [x]_β of x with respect to β is given by
[x]_β = (a_1, a_2, ..., a_n)^t, where x = Σ_{i=1}^n a_i u_i
Determinant: If
A = ( a  b ; c  d )
is a 2x2 matrix with entries from a field F, then we define the determinant of A, denoted det(A) or
|A|, to be the scalar ad − bc. More generally, the determinant is a transformation det: M_{n×n}(F) → F.
Useful Properties of Determinants:
det(AB) = det(A)det(B)
Adding a multiple of a row to another row of the matrix does not change the determinant.
If A is an nxn matrix, it is invertible if and only if det(A) ≠ 0.
The determinant of an upper triangular matrix is the product of its diagonal entries.

If a row of A is written as the sum of two row vectors, then det(A) is the sum of the
determinants of the two matrices obtained by replacing that row with each of the two
summands, with the rest of the matrix untouched.
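A quick numerical sanity check of these properties (a sketch in NumPy; the matrices below are arbitrary examples chosen only for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 4.0],
              [1.0, 0.0, 1.0]])

# det(AB) = det(A) det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))

# Adding a multiple of one row to another row does not change the determinant
C = A.copy()
C[2] += 5 * C[0]            # row3 <- row3 + 5*row1
print(np.isclose(np.linalg.det(C), np.linalg.det(A)))

# Upper triangular: determinant is the product of the diagonal entries
U = np.triu(A)
print(np.isclose(np.linalg.det(U), np.prod(np.diag(U))))
```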
Eigenvector and Eigenvalue:
Let T be a linear operator on a vector space V. A nonzero vector v ∈ V is called an
eigenvector of T if there exists a scalar λ s.t. T(v) = λv. The scalar λ is called the eigenvalue
corresponding to the eigenvector v.
Let A ∈ M_{n×n}(F). A nonzero vector v ∈ F^n is called an eigenvector of A if v is an
eigenvector of L_A; that is, if Av = λv for some scalar λ. The scalar λ is also called the eigenvalue
of A corresponding to the eigenvector v.
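As a computational illustration (a sketch; the matrix is an arbitrary example), NumPy returns eigenvalue/eigenvector pairs satisfying Av = λv:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors

for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    print(lam, np.allclose(A @ v, lam * v))    # check A v = lambda v for each pair
```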
Isomorphism: Let V and W be vector spaces. We say that V is isomorphic to W if there exists a
linear transformation T: V → W that is invertible. Such a linear transformation is called an
isomorphism from V onto W. Isomorphisms are one to one and onto (bijective).
Transpose: If A ∈ M_{m×n}(F), then the transpose of A is the matrix A^t given by the map
(·)^t : M_{m×n}(F) → M_{n×m}(F)
Thus the transpose takes each entry A_ij to the entry (A^t)_ji; that is, (A^t)_ij = A_ji for all entries.
Trace: If A ∈ M_{n×n}(F), then the trace is the linear transformation defined by
tr(A) = Σ_{i=1}^n A_ii
Thus tr: M_{n×n}(F) → F.
Rank Nullity Theorem: If T: V → W is linear and dim(V) < ∞, then the following equality
holds:
nullity(T) + rank(T) = dim(V)
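A sketch of checking this numerically for a matrix viewed as the map L_A: F^n → F^m (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # a 2x3 matrix of rank 1

n = A.shape[1]                        # dimension of the domain F^n
rank = np.linalg.matrix_rank(A)
nullity = n - rank                    # rank-nullity gives the nullity directly

# Null space basis via SVD: right singular vectors beyond the rank
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T
print(rank, nullity, null_basis.shape[1] == nullity)
```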
Inverse: Let T: V → W be linear and U: W → V. U is an inverse of T if UT = I_V and TU = I_W.
If these conditions are satisfied, then we say that T is invertible.

Algebraic Multiplicity: If is a eigenvalue of : the algebraic multiplicity of is the
largest positive integer M s.t. ( )

is a factor of the characteristic polynomial.




Geometric Multiplicity: The geometric multiplicity of an eigenvalue is the dimension of its
eigenspace.
Invariant Subspace: If V is a vector space and T: V → V is a linear operator, then a subspace
W ⊆ V is an invariant subspace (T-invariant) if the following holds:
T(W) ⊆ W
That is, for each w ∈ W, T(w) ∈ W.
T-Invariant Cyclic Subspace: Fix a vector v in V and let W be the T-cyclic subspace generated
by this vector. The subspace is
W = span{v, T(v), T^2(v), ...}
so {v, T(v), T^2(v), ...} generates W. (If W has finite dimension, say k, then the T-cyclic subspace
generated by v has as a basis v together with the first k − 1 iterates T(v), ..., T^{k−1}(v).)
Invariant Subspace Characteristic Polynomial: If W is a T-invariant subspace and V is
finite dimensional, then the characteristic polynomial of T_W (the transformation restricted to the
subspace) divides the characteristic polynomial of T.
De Moivre's Formula:
(cos θ + i sin θ)^n = cos(nθ) + i sin(nθ)
Theorem 7.5.3: If A ∈ M_{2×2}(R) has complex eigenvalues λ = a ± bi with eigenvectors {u + iw} and
{u − iw}, where b ≠ 0 and u + iw corresponds to a − bi, then
A = P C P^{-1}, where P = ( u | w ) and C = ( a  −b ; b  a )
and also
C = |λ| ( cos φ  −sin φ ; sin φ  cos φ ), where |λ| = √(a² + b²) and φ is the argument of λ.
And by de Moivre's formula
C^n = |λ|^n ( cos(nφ)  −sin(nφ) ; sin(nφ)  cos(nφ) ), so A^n = P C^n P^{-1}.
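A numerical sketch of this rotation-scaling decomposition (the matrix is an arbitrary example with complex eigenvalues; the statement assumed is the standard form above):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [1.0,  3.0]])                      # complex eigenvalues 2 ± i

eigvals, eigvecs = np.linalg.eig(A)
idx = np.argmin(eigvals.imag)                    # pick lambda = a - bi (negative imaginary part)
lam, x = eigvals[idx], eigvecs[:, idx]
a, b = lam.real, -lam.imag

P = np.column_stack([x.real, x.imag])            # P = (u | w)
C = np.array([[a, -b],
              [b,  a]])
print(np.allclose(A, P @ C @ np.linalg.inv(P)))  # A = P C P^{-1}

r, phi = np.hypot(a, b), np.arctan2(b, a)
R = r * np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
print(np.allclose(C, R))                         # C = |lambda| * rotation by phi
```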

Theorem 5.22: Let T: V → V be a linear transformation with V finite dimensional. Let v ∈ V,
let W = span{v, T(v), T^2(v), ...} be the T-cyclic subspace generated by v, and let dim(W) = k. Then:
1. {v, T(v), T^2(v), ..., T^{k−1}(v)} is a basis for W.
2. If
a_0 v + a_1 T(v) + ... + a_{k−1} T^{k−1}(v) + T^k(v) = 0
then the characteristic polynomial of T_W is
f(t) = det(T_W − tI) = (−1)^k (a_0 + a_1 t + ... + a_{k−1} t^{k−1} + t^k)
Cayley-Hamilton Theorem: Let T: V → V be a linear transformation with the dimension of
V finite, and let f(t) = det(T − tI) be the characteristic polynomial of T. Then f(T) =
T_0 (the zero transformation); that is, the transformation satisfies its own characteristic
polynomial.
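A numerical sketch, evaluating a matrix's characteristic polynomial at the matrix itself (np.poly returns the characteristic polynomial coefficients; the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)              # coefficients of det(tI - A), highest power first
n = A.shape[0]

# Evaluate f(A) = A^n + c_{n-1} A^{n-1} + ... + c_0 I using Horner's scheme
f_of_A = np.zeros_like(A)
for c in coeffs:
    f_of_A = f_of_A @ A + c * np.eye(n)

print(np.allclose(f_of_A, 0))    # Cayley-Hamilton: f(A) is the zero matrix
```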


Change of Coordinate Basis: If
β = {u_1, u_2, ..., u_n} and β' = {u'_1, u'_2, ..., u'_n}
are ordered bases, then the change of coordinate matrix is the matrix that corresponds to the identity
transformation relative to the bases above. The coordinate transform in this way changes β'
coordinates into β coordinates. Below is the formulaic method as well as the diagram method.
Let V be a vector space generated by either of the bases above.
Q = [I_V]_{β'}^{β} = ( [u'_1]_β  ...  [u'_n]_β )
Where
[u'_j]_β = (Q_{1j}, ..., Q_{nj})^t, i.e. u'_j = Q_{1j} u_1 + Q_{2j} u_2 + ... + Q_{nj} u_n
or rather, for the jth column of Q, the entries Q_{ij} of the change of coordinate matrix are the
coefficients of the ith vector u_i of β:
u'_j = Σ_{i=1}^n Q_{ij} u_i


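A small numerical sketch of the change of coordinates in R² (the two bases below are arbitrary examples, written as columns in standard coordinates):

```python
import numpy as np

# Two ordered bases of R^2, written as columns in standard coordinates
B  = np.array([[1.0, 0.0],
               [0.0, 1.0]])          # beta  = standard basis
Bp = np.array([[1.0, 1.0],
               [1.0, -1.0]])         # beta' = {(1,1), (1,-1)}

# Change of coordinate matrix Q = [I]_{beta'}^{beta}: columns are [u'_j]_beta
Q = np.linalg.solve(B, Bp)

x_bp = np.array([2.0, 3.0])          # coordinates of some x relative to beta'
x_b = Q @ x_bp                       # Q converts beta'-coordinates to beta-coordinates

# Sanity check: both coordinate vectors reconstruct the same vector of V
print(np.allclose(B @ x_b, Bp @ x_bp))
```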

General Transforms: Let
β = {u_1, u_2, ..., u_n} and β' = {u'_1, u'_2, ..., u'_n}
be ordered bases for V. A transform T: V → V may be represented as two different matrices doing the same
transformation relative to the different bases above:
[T]_β [x]_β = [T(x)]_β
[T]_β = ( [T(u_1)]_β  ...  [T(u_n)]_β )
Where
[T(u_j)]_β = (A_{1j}, ..., A_{nj})^t, i.e. T(u_j) = A_{1j} u_1 + A_{2j} u_2 + ... + A_{nj} u_n
with A = [T]_β. The representation relative to β' is defined the same way, and the two matrices
are related by the change of coordinate matrix Q: [T]_{β'} = Q^{-1} [T]_β Q.

Diagram Method: (the commutative diagram from the notes is not reproduced here).
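In place of the diagram, a numerical sketch of the same idea: one operator represented in two bases, with the matrices related by the change of coordinate matrix (the matrices below are arbitrary examples):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # [T]_beta, beta = standard basis of R^2

Bp = np.array([[1.0, 1.0],
               [1.0, -1.0]])          # columns: vectors of beta' in beta-coordinates
Q = Bp                                 # change of coordinate matrix [I]_{beta'}^{beta}

A_bp = np.linalg.inv(Q) @ A @ Q        # [T]_{beta'} = Q^{-1} [T]_beta Q
print(A_bp)                            # diagonal here, since beta' consists of eigenvectors of A

# Both matrices represent the same T: compare [T(x)]_beta computed both ways
x_bp = np.array([3.0, -1.0])           # some x given in beta'-coordinates
print(np.allclose(Q @ (A_bp @ x_bp), A @ (Q @ x_bp)))
```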



Direct Sum: If W_1 and W_2 are both subspaces, then the direct sum is the ordinary sum together with the
condition that the intersection is zero:
W_1 ⊕ W_2 = V
on the condition that
W_1 + W_2 = V = {w_1 + w_2 : w_1 ∈ W_1, w_2 ∈ W_2}
and that
W_1 ∩ W_2 = {0_V}
Direct Sum of Matrices: It is convenient to write the direct sum of matrices as a block-diagonal matrix.
THE MATRICES MUST BE SQUARE, and if the matrices in question are B_1, B_2, ..., B_k, then
B_1 ⊕ B_2 ⊕ ... ⊕ B_k =
( B_1   0   ...   0
   0   B_2  ...   0
   ...
   0    0   ...  B_k )
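A small sketch of building such a block-diagonal (direct sum) matrix with NumPy (the helper `direct_sum` and the blocks are illustrative only):

```python
import numpy as np

def direct_sum(*blocks):
    """Place square matrices along the diagonal of a larger matrix, zeros elsewhere."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

B1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
B2 = np.array([[5.0]])
print(direct_sum(B1, B2))
```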

Inner Product: Let V be a vector space over F. An inner product on V is a function that assigns,
to every ordered pair of vectors x and y in V, a scalar in F, denoted ⟨x, y⟩, such that for all x, y,
and z in V and for all a in F the following hold.
1. ⟨x + z, y⟩ = ⟨x, y⟩ + ⟨z, y⟩
2. ⟨ax, y⟩ = a⟨x, y⟩
3. ⟨x, y⟩ equals the complex conjugate of ⟨y, x⟩, where the bar denotes the complex conjugate
4. ⟨x, x⟩ > 0 if x ≠ 0
Conjugate Transpose: Let A ∈ M_{m×n}(F). We define the conjugate transpose or adjoint of A to
be the n×m matrix A* such that (A*)_ij is the complex conjugate of A_ji for all i, j.


Theorem 6.1: Let V be an inner product space (a space where an inner product is assigned).
Then for x, y, z ∈ V and c ∈ F the following statements are true. (They follow from the definition of the
inner product.)
1. ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩
2. ⟨x, cy⟩ = (conjugate of c)⟨x, y⟩
3. ⟨x, 0⟩ = ⟨0, x⟩ = 0
4. ⟨x, x⟩ = 0 if and only if x = 0
5. If ⟨x, y⟩ = ⟨x, z⟩ for all x ∈ V, then y = z


Norm or Length: Let V be an inner product space. For x ∈ V, we define the norm or length of x
by
‖x‖ = √⟨x, x⟩



Theorem 6.2: Let V be an inner product space over F. Then for all x, y ∈ V and c ∈ F the
following statements are true.
1. ‖cx‖ = |c| ‖x‖
2. ‖x‖ = 0 if and only if x = 0; in any case ‖x‖ ≥ 0
3. (Cauchy-Schwarz Inequality) |⟨x, y⟩| ≤ ‖x‖ ‖y‖
4. (Triangle Inequality) ‖x + y‖ ≤ ‖x‖ + ‖y‖
Perpendicular or Orthogonal: If V is an inner product space with vectors x, y ∈ V, then the
vectors x, y are orthogonal if ⟨x, y⟩ = 0. A subset S of V is orthogonal if any two distinct
vectors in S are orthogonal. A vector x in V is said to be a unit vector if ‖x‖ = 1. Finally, a
subset S of V is orthonormal if S is orthogonal and consists entirely of unit vectors.
Orthonormal Basis: Let V be an inner product space. A subset of V is an orthonormal basis for
V if it is an ordered basis that is orthonormal.
Theorem 6.4 (The Gram-Schmidt Process): Let V be an inner product space and S =
{w_1, w_2, ..., w_n} be a linearly independent subset of V. Define
S' = {v_1, v_2, ..., v_n}, where v_1 = w_1 and, for 2 ≤ k ≤ n,
v_k = w_k − Σ_{j=1}^{k−1} (⟨w_k, v_j⟩ / ‖v_j‖²) v_j
Then S' is an orthogonal set of nonzero vectors such that span(S') = span(S).
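A minimal sketch of the process for vectors in R^n with the standard dot product (the helper name and example vectors are illustrative; normalizing at the end gives an orthonormal set):

```python
import numpy as np

def gram_schmidt(W):
    """W: list of linearly independent vectors. Returns an orthogonal list with the same span."""
    V = []
    for w in W:
        v = np.array(w, dtype=float)
        for u in V:
            v = v - (np.dot(w, u) / np.dot(u, u)) * u   # subtract projection of w onto each earlier v_j
        V.append(v)
    return V

W = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]

V = gram_schmidt(W)
orthonormal = [v / np.linalg.norm(v) for v in V]
print(np.isclose(np.dot(V[0], V[1]), 0),
      np.isclose(np.dot(V[0], V[2]), 0),
      np.isclose(np.dot(V[1], V[2]), 0))
```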

Theorem 6.6: Let W be a finite dimensional subspace of an inner product space V, and let
y ∈ V. Then there exist unique vectors u ∈ W and z ∈ W⊥ such that y = u + z. Furthermore, if
{v_1, v_2, ..., v_k} is an orthonormal basis for W, then
u = Σ_{i=1}^k ⟨y, v_i⟩ v_i
The vector u is the unique vector in W that is closest to y. We call u the orthogonal projection of
y onto W. (The accompanying picture of y, u, and z is not reproduced here.)


Fourier Coefficients: Let β be an orthonormal subset (possibly infinite) of an inner product
space V, and let x ∈ V. We define the Fourier coefficients of x relative to β to be the scalars
⟨x, y⟩, where y ∈ β.

Orthogonal Complement: Let S be a nonempty subset of an inner product space V. We define
S⊥ (read as "S perp") to be the set of all vectors in V that are orthogonal to every vector in S; that
is,
S⊥ = {x ∈ V : ⟨x, y⟩ = 0 for all y ∈ S}

Theorem 6.7: Suppose that S = {v_1, v_2, ..., v_k} is an orthonormal set in an n-dimensional inner
product space V. Then the following are true:
1. S can be extended to an orthonormal basis {v_1, v_2, ..., v_k, v_{k+1}, ..., v_n} for V.
2. If W = span(S), then S_1 = {v_{k+1}, v_{k+2}, ..., v_n} is an orthonormal basis for W⊥.
3. If W is any subspace of V, then dim(V) = dim(W) + dim(W⊥).

Theorem 6.8: Let V be a finite dimensional inner product space over F, and let g: V → F be a linear
transformation. Then there exists a unique vector y ∈ V such that g(x) = ⟨x, y⟩ for all x ∈ V.
We may also calculate y by the following: if
β = {v_1, v_2, ..., v_n}
is an orthonormal basis for V, then
y = Σ_{i=1}^n (conjugate of g(v_i)) v_i

Theorem 6.9: Let V be a finite dimensional inner product space, and let T be a linear operator on
V. Then there exists a unique function T*: V → V such that ⟨T(x), y⟩ = ⟨x, T*(y)⟩ for all x, y ∈ V.
Furthermore, T* is linear.
Theorem 6.10: Let V be a finite dimensional inner product space with an orthonormal basis β. If
T is a linear operator on V, then
[T*]_β = ([T]_β)*
Theorem 6.11: Let V be an inner product space, and let T and U be linear operators on V. Then
the following are true:
1. (T + U)* = T* + U*
2. (cT)* = (conjugate of c) T* for any c in the scalar field
3. (TU)* = U*T*
4. T** = T
5. I* = I
The same properties hold for nxn matrices, as they represent linear transformations.
Theorem 6.12: Let A ∈ M_{m×n}(F) and y ∈ F^m. Then there exists x_0 ∈ F^n s.t.
(A*A)x_0 = A*y and ‖Ax_0 − y‖ ≤ ‖Ax − y‖ for all x ∈ F^n. Furthermore, if rank(A) = n, then
x_0 = (A*A)^{-1} A*y


The Process of Least Squares:
For an abstract space: use Theorem 6.6 to find the closest vector / orthogonal projection onto the
subspace of interest.
For Theorem 6.12: in the x-y plane with m observations (t_1, y_1), ..., (t_m, y_m), to fit a line y = ct + d,
let
A = ( t_1  1 ; t_2  1 ; ... ; t_m  1 ),  x_0 = ( c ; d ),  y = ( y_1 ; y_2 ; ... ; y_m )
and solve the normal equations (A*A)x_0 = A*y.
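A numerical sketch of this line fit (the data points are made-up examples):

```python
import numpy as np

# observations (t_i, y_i)
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

A = np.column_stack([t, np.ones_like(t)])     # rows (t_i, 1)

# Solve the normal equations (A* A) x0 = A* y  (A is real here, so A* = A^T)
x0 = np.linalg.solve(A.T @ A, A.T @ y)
c, d = x0
print(c, d)

# np.linalg.lstsq solves the same least-squares problem directly
x0_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x0, x0_lstsq))
```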
Theorem 6.13: Let A ∈ M_{m×n}(F) and b ∈ F^m. Suppose that Ax = b is consistent. Then the
following statements are true.
a) There exists exactly one minimal solution s of Ax = b, and s ∈ R(L_{A*}).
b) The vector s is the only solution to Ax = b that lies in R(L_{A*}); that is, if u satisfies
(AA*)u = b, then s = A*u.



Lemma: Let T be a linear operator on a finite dimensional inner product space V. If T has an
eigenvector, then so does T*.

Theorem 6.14 (Schur): Let T be a linear operator on a finite dimensional inner product space V.
Suppose that the characteristic polynomial of T splits. Then there exists an orthonormal basis β
for V such that the matrix [T]_β is upper triangular.

Normal: Let V be an inner product space, and let T be a linear operator on V. We say that T is
normal if TT* = T*T. An n x n real or complex matrix A is normal if AA* = A*A.

Theorem 6.15: Let V be an inner product space, and let T be a normal operator on V. Then the
following statements are true.
a) ‖T(x)‖ = ‖T*(x)‖ for all x ∈ V.
b) T − cI is normal for all c in the scalar field.
c) If x is an eigenvector of T, then x is also an eigenvector of T*. In fact, if T(x) = λx,
then T*(x) = (conjugate of λ)x.
d) If λ_1 and λ_2 are distinct eigenvalues of T with corresponding eigenvectors x_1 and x_2, then
the eigenvectors are orthogonal.
Theorem 6.16: Let T be a linear operator on a finite dimensional complex inner product space
V. Then T is normal if and only if there exists an orthonormal basis for V consisting of
eigenvectors of T.
Self-Adjoint: Let T be a linear operator on an inner product space V. We say that T is self-
adjoint if T = T*. An n x n real or complex matrix A is self-adjoint if A = A*.

Lemma: Let T be a self-adjoint operator on a finite dimensional inner product space V. Then
a) Every eigenvalue of T is real.
b) Suppose that V is a REAL inner product space. Then the characteristic polynomial of T
splits. (Over a complex inner product space the characteristic polynomial always splits, because of
the fundamental theorem of algebra.)
Theorem 6.17: Let T be a linear operator on a finite-dimensional real inner product space V.
Then T is self-adjoint if and only if there exists an orthonormal basis for V consisting of
eigenvectors of T.

Thus self-adjoint implies there exists an orthonormal basis of eigenvectors.
Conversely, an orthonormal basis of eigenvectors of T ensures that T is self-adjoint, and hence also
normal (as being self-adjoint is sufficient for being normal).
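A numerical sketch for the real symmetric (self-adjoint) case using NumPy's eigh, which returns an orthonormal eigenbasis (the matrix is an arbitrary symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])         # real symmetric, hence self-adjoint

eigenvalues, Q = np.linalg.eigh(A)       # columns of Q: orthonormal eigenvectors

print(np.allclose(Q.T @ Q, np.eye(3)))                   # Q is orthogonal
print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A))    # A = Q D Q^T
print(np.isrealobj(eigenvalues))                          # eigenvalues of a self-adjoint operator are real
```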

Positive Definite / Positive Semidefinite: A linear operator T on a finite dimensional inner
product space is called positive definite if T is self-adjoint and ⟨T(x), x⟩ > 0 for all x ≠ 0, and
positive semidefinite if T is self-adjoint and ⟨T(x), x⟩ ≥ 0 for all x.
An nxn matrix A with entries from R or C (real or complex field) is called positive
definite/positive semidefinite if L_A (the linear transformation given by left-multiplication by A) is positive
definite/positive semidefinite.

Problem 17 Consequence of P.D.: T is positive definite if and only if T is self-adjoint and ALL the
eigenvalues of T are positive. This also means that all the eigenvalues are positive real numbers.

Unitary/Orthogonal Operator: Let T be a linear operator on a finite dimensional inner product
space V (over F) such that ‖T(x)‖ = ‖x‖ for all x ∈ V. We call T a unitary operator if F = C (field
of complex numbers) and an orthogonal operator if F = R.

Theorem 6.18: Let T be a linear operator on a finite dimensional inner product space V. Then the
following statements are equivalent.
a) TT* = T*T = I
b) ⟨T(x), T(y)⟩ = ⟨x, y⟩ for all x, y ∈ V
c) If β is an orthonormal basis for V, then T(β) is an orthonormal basis for V.
d) There exists an orthonormal basis β for V such that T(β) is an orthonormal basis for V.
e) ‖T(x)‖ = ‖x‖ for all x ∈ V
(a)-(e) all imply each other, so it is sufficient to show only one property above to prove that T is
in fact a unitary or orthogonal operator.
Lemma: Let U be a self-adjoint operator on a finite-dimensional inner product space V. If
⟨U(x), x⟩ = 0 for all x ∈ V, then U = T_0 (the zero transformation).


Corollary 1: Let T be a linear operator on a finite- dimensional real inner product space V. Then
V has an orthonormal basis of eigenvectors of T with corresponding eigenvalues of absolute
value 1 if and only if T is both self-adjoint and orthogonal.

Corollary 2: Let T be a linear operator on a finite dimensional complex inner product space V.
Then V has an orthonormal basis of eigenvectors of T with corresponding eigenvalues of
absolute value 1 if and only if T is unitary.

Reflection: Let L be a one-dimensional subspace of R². We may view L as a line in the plane
through the origin. A linear operator T on R² is called a reflection of R² about L if T(x) = x for
all x ∈ L and T(x) = −x for all x ∈ L⊥.



Unitary/Orthogonal Matrix: A square matrix A is called an orthogonal matrix if
A^t A = AA^t = I and unitary if A*A = AA* = I.

Theorem 6.19: Let A be a complex nxn matrix. Then A is normal if and only if A is unitarily
equivalent to a diagonal matrix. By unitarily equivalent we mean that there is a unitary matrix Q with
D = Q*AQ
For example, if Q were a matrix whose columns are orthonormal eigenvectors of A (so that Q is
unitary), then Q*AQ would be diagonal, and so A would be normal. (Think about the rotation
matrix: it is not diagonalizable over R, but over C it works.)

Theorem 6.20: Let A be a real nxn matrix. Then A is symmetric if and only if A is orthogonally
equivalent to a real diagonal matrix.

Theorem 6.21 (Schur): Let A ∈ M_{n×n}(F) be a matrix whose characteristic polynomial splits
over F.
a) If F = C, then A is unitarily equivalent to a complex upper triangular matrix.
b) If F = R, then A is orthogonally equivalent to a real upper triangular matrix.

Rigid Motion: Let V be a real inner product space. A function f: V → V is called a rigid motion
if
‖f(x) − f(y)‖ = ‖x − y‖ for all x, y ∈ V
Basically, distances between vectors are preserved under the map. Rigid motions include
translations, rotations, and all orthogonal operators.

Theorem 6.22: Let f: V → V be a rigid motion on a finite dimensional real inner product space
V. Then there exist a unique orthogonal operator T on V and a unique translation g on V such
that
f = g ∘ T
Theorem 6.23: Let T be an orthogonal operator on R², and let A = [T]_β, where β is the standard
ordered basis for R². Then exactly ONE of the following conditions is satisfied.
a) T is a rotation and det(A) = 1.
b) T is a reflection about a line through the origin, and det(A) = -1.

Corollary: Any rigid motion in R² is either a rotation followed by a translation or a reflection
about a line through the origin followed by a translation.

Conic Sections: We are interested in quadratics of the form
ax² + 2bxy + cy²
We may rewrite this in the following matrix notation when we are interested in figuring out what
the quadratic form represents in R²:
X^t A X = ax² + 2bxy + cy²
Where
X = ( x ; y ),  A = ( a  b ; b  c )
We may rewrite the system as a change of coordinates with respect to the eigenvalues and
eigenvectors of the matrix A. Because the matrix A is symmetric, we can diagonalize it with
orthonormal eigenvectors. The diagonal matrix is
D = ( λ_1  0 ; 0  λ_2 )
and, in the new coordinates (x', y') determined by the orthonormal eigenvectors, the form of the
quadratic becomes
X^t A X = λ_1 (x')² + λ_2 (y')²
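A numerical sketch of this diagonalization (the coefficients a, b, c are arbitrary examples):

```python
import numpy as np

a, b, c = 5.0, 2.0, 2.0                     # quadratic form 5x^2 + 4xy + 2y^2
A = np.array([[a, b],
              [b, c]])

lam, P = np.linalg.eigh(A)                   # P: orthonormal eigenvectors, lam: eigenvalues
print(lam)                                   # both positive here, so x^t A x = 1 describes an ellipse

# Check the form in the rotated coordinates: X = P X'  =>  X^t A X = lam1 x'^2 + lam2 y'^2
Xp = np.array([0.7, -1.3])                   # arbitrary point in primed coordinates
X = P @ Xp
print(np.isclose(X @ A @ X, lam[0] * Xp[0]**2 + lam[1] * Xp[1]**2))
```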

Orthogonal Projection: Let V be an inner product space and let T: V → V be a projection. We
say that T is an orthogonal projection if the following two conditions hold.
R(T)⊥ = N(T)
N(T)⊥ = R(T)

Theorem 6.24: Let V be an inner product space, and let T be a linear operator on V. Then T is
an orthogonal projection if and only if T has an adjoint T* and T² = T = T*.

Theorem 6.25 (The Spectral Theorem): Suppose that T is a linear operator on a finite
dimensional inner product space V over F with the distinct eigenvalues λ_1, λ_2, ..., λ_k. Assume
that T is normal if F = C and that T is self-adjoint if F = R. For each i (1 ≤ i ≤ k), let W_i be
the eigenspace of T corresponding to the eigenvalue λ_i, and let T_i be the orthogonal projection of
V onto W_i. Then the following statements are true:
a) V = W_1 ⊕ W_2 ⊕ ... ⊕ W_k
b) If W_i' denotes the direct sum of the subspaces W_j for j ≠ i, then W_i⊥ = W_i'
c) T_i T_j = δ_ij T_i for 1 ≤ i, j ≤ k
d) I = T_1 + T_2 + ... + T_k
e) T = λ_1 T_1 + λ_2 T_2 + ... + λ_k T_k
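A numerical sketch of the spectral decomposition for a real symmetric (self-adjoint) matrix, building the orthogonal projections T_i from an orthonormal eigenbasis (the matrix is an arbitrary example; its eigenvalues are distinct, so each eigenspace here is one-dimensional):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                    # symmetric, eigenvalues 3 and 1

eigenvalues, Q = np.linalg.eigh(A)            # columns of Q: orthonormal eigenvectors

# Orthogonal projection onto each one-dimensional eigenspace: T_i = q_i q_i^T
projections = [np.outer(Q[:, i], Q[:, i]) for i in range(len(eigenvalues))]

print(np.allclose(sum(projections), np.eye(2)))                                   # I = T_1 + ... + T_k
print(np.allclose(sum(lam * P for lam, P in zip(eigenvalues, projections)), A))   # A = sum lam_i T_i
print(np.allclose(projections[0] @ projections[1], 0))                            # T_i T_j = 0 for i != j
```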



Corollary 1: If F = C, then T is normal if and only if T*=g(T) for some polynomial g.

Corollary 2: If F = C, then T is unitary if and only if T is normal and |λ| = 1 for every
eigenvalue λ of T.


Corollary 3: If F = C and T is normal, then T is self adjoint if and only if every eigenvalue of T
is real.

Corollary 4: Let T be as in the spectral theorem with spectral decomposition T = λ_1 T_1 +
λ_2 T_2 + ... + λ_k T_k. Then each T_j is a polynomial in T.
