
the nullity of T measures how much information (or degrees of freedom) is lost when applying T.
example: T(x1, ..., x5) = (x1, x2, x3, 0, 0)
the null space has dimension 2: exactly the freedom to vary x4 and x5 is lost after applying T.
for T : V → W linear, the rank of T is the dimension of R(T): rank(T) = dim(R(T))
rank measures how much information (or degrees of freedom) is retained by T.
dimension theorem: nullity(T) + rank(T) = dim(V)
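The projection example above can be checked numerically; a minimal sketch with NumPy (representing T by the 5×5 diagonal projection matrix, an illustrative choice):

```python
import numpy as np

# T(x1, ..., x5) = (x1, x2, x3, 0, 0) as a 5x5 matrix
T = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])

rank = np.linalg.matrix_rank(T)      # dim R(T)
nullity = T.shape[1] - rank          # dimension theorem: dim V - rank
print(rank, nullity)                 # 3 2
```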

corollary: if dim(V ) = dim(W ), then a linear transformation T : V → W is one-to-one iff T is onto.
if T : V → W is linear, and {v1, ..., vn} spans V, then {T v1, ..., T vn} spans R(T).
if T : V → W is linear and one-to-one, and {v1, ..., vn} is linearly independent, then {T v1, ..., T vn} is linearly independent.
if T : V → W is both one-to-one and onto, and {v1, ..., vn} is a basis of V, then {T v1, ..., T vn} is a basis of W.
ex. T : P3(R) → R^4 defined by T(ax^3 + bx^2 + cx + d) = (a, b, c, d)
T converts a basis of P3(R) to a basis of R^4: {1, x, x^2, x^3} maps to {(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)}.
we can use a basis to describe a linear transformation in a compact way.
let V be a vector space with basis {v1, ..., vn}. let W be another vector space and let w1, ..., wn be arbitrary vectors in W. then there exists exactly one linear transformation T : V → W such that T vj = wj for j = 1, ..., n.
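In R^n this statement is concrete: if the vj are the columns of an invertible matrix V and the wj the columns of W, the unique T has matrix W V^{-1}. A sketch (the particular basis and target vectors are made up for illustration):

```python
import numpy as np

V = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # columns v1, v2: a basis of R^2
W = np.array([[2.0, 0.0],
              [1.0, 3.0],
              [0.0, 1.0]])      # columns w1, w2: arbitrary targets in R^3

A = W @ np.linalg.inv(V)        # the unique linear map with A @ v_j = w_j
print(np.allclose(A @ V, W))    # True
```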
standard ordered basis for R^n: (e1, ..., en)
let β = (v1, ..., vn) be an ordered basis for V. then every v has a unique expression v = a1 v1 + ... + an vn. define the coordinate vector of v relative to β as the column vector [v]_β = (a1, ..., an)^T.
thus we represent any vector as a familiar column vector, provided that we supply a basis β.
units of measurement are to scalars as bases are to vectors.
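Computing [v]_β amounts to solving a linear system whose coefficient matrix has the basis vectors as columns; a sketch in R^3 (the basis here is an arbitrary example):

```python
import numpy as np

# ordered basis beta = (v1, v2, v3) of R^3, stored as columns of B
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 4.0])

# [v]_beta solves B @ a = v, i.e. v = a1*v1 + a2*v2 + a3*v3
a = np.linalg.solve(B, v)
print(a)                        # the coordinate vector [v]_beta
```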
let V, W be finite dimensional vector spaces, and let β = (v1, ..., vn) and γ = (w1, ..., wm) be ordered bases for V and W.
take a vector v in V, [v]_β = (x1, ..., xn)^T; similarly, [T v]_γ = (y1, ..., ym)^T.
v = x1 v1 + ... + xn vn
T vj = a1j w1 + ... + amj wm
then
(y1, ..., ym)^T = (aij) (x1, ..., xn)^T, i.e. yi = Σj aij xj (Einstein summation convention).
[T]_β^γ is defined to be that m × n matrix (aij), and [T]_β^γ = ([T v1]_γ [T v2]_γ ... [T vn]_γ); recall (v1, ..., vn) is denoted β.
[T v]_γ = [T]_β^γ [v]_β
[T]_β^γ is the matrix representation of T with respect to β and γ. the j-th column is just the coordinate vector of T vj with respect to γ.
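As a worked instance, the derivative map d/dx : P3(R) → P2(R) with β = (1, x, x^2, x^3) and γ = (1, x, x^2) gives a 3×4 matrix whose j-th column is [T vj]_γ; a sketch:

```python
import numpy as np

# column j holds the gamma-coordinates of d/dx applied to x^j
def deriv_coords(j):
    c = np.zeros(3)
    if j >= 1:
        c[j - 1] = j          # d/dx x^j = j * x^(j-1)
    return c

M = np.column_stack([deriv_coords(j) for j in range(4)])

# p(x) = 1 + 2x + 3x^2 + 4x^3  ->  p'(x) = 2 + 6x + 12x^2
p = np.array([1.0, 2.0, 3.0, 4.0])
print(M @ p)                  # coordinates of p' w.r.t. (1, x, x^2)
```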

L(V, W), the set of linear transformations from V to W, is a subspace of F(V, W), the space of all functions from V to W.
if S : V → W and T : U → V, then S(T(u)) defines another linear transformation ST : U → W.
[S + T]_β^γ = [S]_β^γ + [T]_β^γ
[cT]_β^γ = c[T]_β^γ
more importantly, for T : X → Y and S : Y → Z with ordered bases α, β, γ for X, Y, Z:
[ST]_α^γ = [S]_β^γ [T]_α^β
(after applying T, a vector in the α basis transfers to a vector in the β basis; then after S, it transfers to the γ basis)
if A is m × n, then Im A = A and A In = A
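The composition rule [ST] = [S][T] is easy to sanity-check numerically, with random matrices standing in for the two maps (a sketch; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3))    # [T]: a map R^3 -> R^4
S = rng.standard_normal((2, 4))    # [S]: a map R^4 -> R^2
x = rng.standard_normal(3)

# applying T then S agrees with multiplying the matrices first
print(np.allclose(S @ (T @ x), (S @ T) @ x))   # True
```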


summary:
given a vector space X and an ordered basis β for X, we can write v ∈ X as a column vector [v]_β.
given two vector spaces X and Y and ordered bases β and γ, we can write a linear transformation T : X → Y as a matrix [T]_β^γ; the action of T is interpreted as:
[T v]_γ = [T]_β^γ [v]_β
similarly, composition of two linear transformations corresponds to matrix multiplication: if S : Y → Z and δ is an ordered basis for Z, then
[ST]_β^δ = [S]_γ^δ [T]_β^γ (matrix multiplication)
let A be an m × n matrix; then define L_A : R^n → R^m by L_A x = Ax for x ∈ R^n.
matrix multiplication rules translate into linear transformation composition rules.

change of basis: how are [v]_β and [v]_β′ related?
use the equation I_V v = v, and convert this equation into matrices, measuring the domain V using β′ but the range V using β:
[I_V]_β′^β [v]_β′ = [v]_β
recall that [I_V]_β′^β is obtained by applying I_V to the elements of β′ and writing them in terms of β.
[I_V]_β′^β is the change of coordinate matrix from β′ to β. change of coordinate matrices are always square and invertible.
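A sketch of the change-of-coordinate identity in R^2 (the bases are arbitrary examples; since β is taken to be the standard basis, the columns of Q are just the β′ vectors themselves):

```python
import numpy as np

Bp = np.array([[1.0, 1.0],
               [1.0, -1.0]])          # beta' = ((1,1), (1,-1)); beta = standard basis

Q = Bp                                # [I]_{beta'}^{beta}: beta'-vectors in beta coords
v = np.array([3.0, 1.0])              # [v]_beta
v_p = np.linalg.solve(Bp, v)          # [v]_{beta'}
print(np.allclose(Q @ v_p, v))        # True: [I] [v]_{beta'} = [v]_beta
```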
linear T : V → V: given a basis β of V, we can write a matrix [T]_β representing T in the basis β. if we use another basis β′, we get a different matrix [T]_β′, which is related to [T]_β.
pf: let Q = [I_V]_β′^β; then
[T]_β′ = [I_V]_β^β′ [T]_β [I_V]_β′^β = Q^{-1} [T]_β Q
two n × n matrices A and B are similar if B = Q^{-1} A Q for some invertible n × n matrix Q.
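Similar matrices represent the same transformation in different bases, so basis-independent quantities like trace and determinant must agree; a quick numerical sketch (A and Q random, Q invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))      # [T]_beta
Q = rng.standard_normal((3, 3))      # a change of coordinate matrix

B = np.linalg.inv(Q) @ A @ Q         # [T]_{beta'} = Q^{-1} [T]_beta Q
print(np.isclose(np.trace(B), np.trace(A)))            # True
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))  # True
```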

recall the rank of a linear transformation T : V → W is the dimension of its range R(T).
lemma: suppose dim(V ) = dim(W ) = n. then T is invertible iff rank(T ) = n.
pf: if rank(T ) = n, then R(T ) has the same dimension as W, hence T is onto. from the dimension theorem we see that nullity(T ) = 0, so T is one-to-one. conversely, if T is invertible then it is onto, so R(T ) = W and rank(T ) = n.
lemma: let T : V → W be a linear transformation, S : U → V an invertible transformation, and Q : W → Z another invertible transformation. then rank(T ) = rank(QT ) = rank(T S) = rank(QT S).
so if you multiply a linear transformation on the left or right by an invertible transformation, the rank doesn't change.
computing the rank of an arbitrary linear transformation directly is difficult. the best way is to convert the transformation to a matrix, whose rank can be calculated. let A be a matrix in row-echelon form; then rank(A) = # of non-zero rows.
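In practice a numerical library computes rank directly (NumPy uses the SVD rather than row reduction); a sketch with a matrix whose second row is a multiple of the first:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # = 2 * row 1: no new information
              [0.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(A))    # 2
```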
let A be m × n with rank r. then we can use elementary row and column operations to place A in the block form
( I_r           0_{r×(n-r)}     )
( 0_{(m-r)×r}   0_{(m-r)×(n-r)} )
let A be m × n with rank r. then there exist B (m × m) and C (n × n), products of elementary matrices (corresponding to row and column operations respectively), hence invertible, such that:
A = B ( I_r, 0_{r×(n-r)} ; 0_{(m-r)×r}, 0_{(m-r)×(n-r)} ) C.
this is an example of a factorization theorem, which takes a general matrix and factors it into simpler pieces.
to mimic the properties of linear transformations, we have for matrices:
let A be m × n, B be m × m invertible, and C be n × n invertible; then rank(A) = rank(BA) = rank(AC) = rank(BAC).
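The matrix version of rank invariance can be sanity-checked numerically (B and C random, hence invertible with probability 1; a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.zeros((4, 3))
A[:2, :2] = rng.standard_normal((2, 2))   # a 4x3 matrix of rank 2

B = rng.standard_normal((4, 4))           # invertible (almost surely)
C = rng.standard_normal((3, 3))

r = np.linalg.matrix_rank(A)
print(r,
      np.linalg.matrix_rank(B @ A),
      np.linalg.matrix_rank(A @ C),
      np.linalg.matrix_rank(B @ A @ C))   # all equal
```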
if A is invertible, then I = I^t = (AA^{-1})^t = (A^{-1})^t A^t, implying A^t is also invertible, and has the same rank as A (as a consequence of the factorization example).
let T : V → W be a linear transformation, and β and γ be bases for V and W. then rank(T ) = rank([T]_β^γ).
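The identity rank(T) = rank([T]_β^γ) lets us compute, e.g., the rank of d/dx on P3(R) from its matrix with respect to (1, x, x^2, x^3); a sketch:

```python
import numpy as np

# [T]_beta for T = d/dx : P3(R) -> P3(R), beta = (1, x, x^2, x^3)
M = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

# rank 3: only the constants are killed, so nullity(T) = 1
print(np.linalg.matrix_rank(M))   # 3
```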
