Invertible Linear Transformations
A First Course in Linear Algebra
One preliminary definition, and then we will have our main definition for this section.

Definition IDLT (Identity Linear Transformation). The identity linear transformation on the vector space $W$ is defined as
$$I_W\colon W\to W,\qquad I_W(w) = w$$

Definition IVLT (Invertible Linear Transformations). Suppose that $T\colon U\to V$ is a linear transformation. If there is a function $S\colon V\to U$ such that
$$S\circ T = I_U \qquad\qquad T\circ S = I_V$$
then $T$ is invertible. In this case, we call $S$ the inverse of $T$ and write $S = T^{-1}$.
1 de 11
05/05/15 01:55
http://linear.ups.edu/html/section-IVLT.html
Example AIVLT: An Invertible Linear Transformation. Consider the linear transformation
$$T\colon P_3\to M_{22},\qquad T\left(a+bx+cx^2+dx^3\right) = \begin{bmatrix} a+b & a-2c \\ d & b-d \end{bmatrix}$$
and define the function
$$S\colon M_{22}\to P_3,\qquad S\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = (a-c-d) + (c+d)x + \frac{1}{2}(a-b-c-d)x^2 + cx^3$$
Then
\begin{align*}
(T\circ S)\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right)
&= T\left(S\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right)\right)\\
&= T\left((a-c-d) + (c+d)x + \frac{1}{2}(a-b-c-d)x^2 + cx^3\right)\\
&= \begin{bmatrix} (a-c-d)+(c+d) & (a-c-d)-2\left(\frac{1}{2}(a-b-c-d)\right) \\ c & (c+d)-c \end{bmatrix}\\
&= \begin{bmatrix} a & b \\ c & d \end{bmatrix}
= I_{M_{22}}\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right)
\end{align*}
and
\begin{align*}
(S\circ T)\left(a+bx+cx^2+dx^3\right)
&= S\left(T\left(a+bx+cx^2+dx^3\right)\right)\\
&= S\left(\begin{bmatrix} a+b & a-2c \\ d & b-d \end{bmatrix}\right)\\
&= \left((a+b)-d-(b-d)\right) + \left(d+(b-d)\right)x\\
&\quad + \left(\frac{1}{2}\left((a+b)-(a-2c)-d-(b-d)\right)\right)x^2 + (d)x^3\\
&= a+bx+cx^2+dx^3
= I_{P_3}\left(a+bx+cx^2+dx^3\right)
\end{align*}
So by Definition IVLT, $T$ is invertible and $S = T^{-1}$.
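The two composition computations in Example AIVLT are easy to mis-transcribe, so here is a quick mechanical check. This is a sketch in plain Python, not part of the text: a polynomial $a+bx+cx^2+dx^3$ in $P_3$ is encoded as the tuple `(a, b, c, d)` and a matrix in $M_{22}$ as nested tuples, an encoding chosen purely for convenience here.

```python
from fractions import Fraction
from itertools import product

# T : P3 -> M22 and S : M22 -> P3, transcribed from Example AIVLT.
def T(p):
    a, b, c, d = p
    return ((a + b, a - 2 * c), (d, b - d))

def S(m):
    (a, b), (c, d) = m
    return (a - c - d, c + d, Fraction(a - b - c - d, 2), c)

# Spot-check S o T = identity on P3 and T o S = identity on M22
# over a grid of small integer coefficients.
for a, b, c, d in product(range(-2, 3), repeat=4):
    p = (a, b, c, d)
    assert S(T(p)) == p                  # (S o T)(p) = p
    m = ((a, b), (c, d))
    assert T(S(m)) == m                  # (T o S)(m) = m
print("both compositions act as the identity")
```

Exact rational arithmetic (`Fraction`) avoids any floating-point fuzz in the $\frac{1}{2}$ coefficient of $S$.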
Example ANILT: A Non-Invertible Linear Transformation. Consider the linear transformation
$$T\colon \mathbb{C}^3\to M_{22},\qquad T\left(\begin{bmatrix} a \\ b \\ c \end{bmatrix}\right) = \begin{bmatrix} a-b & 2a+2b+c \\ 3a+b+c & -2a-6b-2c \end{bmatrix}$$
First verify that the $2\times 2$ matrix
$$A = \begin{bmatrix} 5 & 3 \\ 8 & 2 \end{bmatrix}$$
is not in the range of $T$. This will amount to finding an input to $T$, $\begin{bmatrix} a \\ b \\ c \end{bmatrix}$, such that
\begin{align*}
a-b &= 5\\
2a+2b+c &= 3\\
3a+b+c &= 8\\
-2a-6b-2c &= 2
\end{align*}
As this system of equations is inconsistent, there is no such input column vector, and $A\notin R(T)$.
Second, observe that the two computations
$$T\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right) = \begin{bmatrix} 3 & 2 \\ 5 & 2 \end{bmatrix} = B
\qquad\qquad
T\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right) = \begin{bmatrix} 3 & 2 \\ 5 & 2 \end{bmatrix} = B$$
send two different inputs to the same output matrix. If an inverse function $S$ existed, then we would need
$$S(B) = S\left(T\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right)\right) = (S\circ T)\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right) = I_{\mathbb{C}^3}\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right) = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}$$
or
$$S(B) = S\left(T\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right)\right) = (S\circ T)\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right) = I_{\mathbb{C}^3}\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right) = \begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}$$
Which definition should we provide for $S(B)$? Both are necessary. But then $S$ is not a function. So we have a second reason to know that there is no function $S$ that will allow us to conclude that $T$ is invertible. It happens that there are infinitely many column vectors that $S$ would have to take to $B$. Construct the kernel of $T$,
$$K(T) = \left\langle\left\{\begin{bmatrix} -1 \\ -1 \\ 4 \end{bmatrix}\right\}\right\rangle$$
Now choose either of the two inputs used above for $T$ and add to it a scalar multiple of the basis vector for the kernel of $T$. For example,
$$x = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix} + (-2)\begin{bmatrix} -1 \\ -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \\ -4 \end{bmatrix}$$
then verify that $T(x) = B$. Practice creating a few more inputs for $T$ that would be sent to $B$, and see why it is hopeless to think that we could ever provide a reasonable definition for $S(B)$! There is a whole subspace's worth of values that $S(B)$ would have to take on.
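The claims of Example ANILT can be verified mechanically. A minimal Python sketch follows; the tuple encoding of vectors in $\mathbb{C}^3$ and matrices in $M_{22}$ is our own bookkeeping, not part of the text.

```python
# T : C^3 -> M22 from Example ANILT, inputs encoded as (a, b, c).
def T(v):
    a, b, c = v
    return ((a - b, 2*a + 2*b + c), (3*a + b + c, -2*a - 6*b - 2*c))

# Two different inputs land on the same output matrix B ...
B = ((3, 2), (5, 2))
assert T((1, -2, 4)) == B
assert T((0, -3, 8)) == B

# ... and adding any scalar multiple of the kernel basis vector
# (-1, -1, 4) to a preimage of B produces yet another preimage.
k = (-1, -1, 4)
assert T(k) == ((0, 0), (0, 0))
for t in range(-3, 4):
    x = tuple(w + t * z for w, z in zip((1, -2, 4), k))
    assert T(x) == B
print("a whole line of inputs is sent to B, so no inverse S can exist")
```

The loop is exactly the "practice creating a few more inputs" suggestion: every vector of the form $(1,-2,4) + t(-1,-1,4)$ is sent to $B$.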
In Example ANILT you may have noticed that $T$ is not surjective, since the matrix $A$ was not in the range of $T$. And $T$ is not injective since there are two different input column vectors that $T$ sends to the matrix $B$. Linear transformations $T$ that are not surjective lead to putative inverse functions $S$ that are undefined on inputs outside of the range of $T$. Linear transformations $T$ that are not injective lead to putative inverse functions $S$ that are multiply-defined on each of their inputs. We will formalize these ideas in Theorem ILTIS.

But first notice in Definition IVLT that we only require the inverse (when it exists) to be a function. When it does exist, it too is a linear transformation.
Proof
So when $T$ has an inverse, $T^{-1}$ is also a linear transformation. Furthermore, $T^{-1}$ is an invertible linear transformation and its inverse is what you might expect: $\left(T^{-1}\right)^{-1} = T$.

Proof
Theorem ILTIS (Invertible Linear Transformations are Injective and Surjective). Suppose that $T\colon U\to V$ is a linear transformation. Then $T$ is invertible if and only if $T$ is injective and surjective.

Proof
When a linear transformation is both injective and surjective, the pre-image of
any element of the codomain is a set of size one (a singleton). This fact
allowed us to construct the inverse linear transformation in one half of the
proof of Theorem ILTIS (see Proof Technique C) and is illustrated in the
following cartoon. This should remind you of the very general Diagram KPI
which was used to illustrate Theorem KPI about pre-images, only now we have
an invertible linear transformation which is therefore surjective and injective
(Theorem ILTIS). As a surjective linear transformation, there are no vectors
depicted in the codomain, V , that have empty pre-images. More importantly,
as an injective linear transformation, the kernel is trivial (Theorem KILT), so each pre-image is a single vector. This makes it possible to turn around all the arrows to create the inverse linear transformation $T^{-1}$.
Many will call an injective and surjective function a bijective function or just a
bijection. Theorem ILTIS tells us that this is just a synonym for the term
invertible (which we will use exclusively).
We can follow the constructive approach of the proof of Theorem ILTIS to construct the inverse of a specific linear transformation, as the next example shows.
Theorem CIVLT (Composition of Invertible Linear Transformations). Suppose that $T\colon U\to V$ and $S\colon V\to W$ are invertible linear transformations. Then the composition $S\circ T\colon U\to W$ is an invertible linear transformation.

Proof
When a composition is invertible, the inverse is easy to construct.

Theorem ICLT (Inverse of a Composition of Linear Transformations). Suppose that $T\colon U\to V$ and $S\colon V\to W$ are invertible linear transformations. Then $S\circ T$ is invertible and $\left(S\circ T\right)^{-1} = T^{-1}\circ S^{-1}$.

Proof
Notice that this theorem not only establishes what the inverse of $S\circ T$ is, it also duplicates the conclusion of Theorem CIVLT and so also establishes the invertibility of $S\circ T$. But somehow, the proof of Theorem CIVLT is a nicer way to get this property.
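Theorem ICLT can be illustrated concretely by letting matrices stand in for the invertible linear transformations. A small Python sketch in exact arithmetic; the helper names `matmul` and `inv2` and the particular matrices are our own choices, not from the text.

```python
from fractions import Fraction

# Exact 2x2 matrix product and inverse (via the adjugate formula).
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv2(A):
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    return ((d / det, -b / det), (-c / det, a / det))

I = ((1, 0), (0, 1))
S = ((2, 1), (1, 1))   # invertible: det = 1
T = ((1, 3), (0, 2))   # invertible: det = 2

# Inverse of the composition equals the composition of the inverses,
# in the opposite order: (S o T)^{-1} = T^{-1} o S^{-1}.
lhs = inv2(matmul(S, T))
rhs = matmul(inv2(T), inv2(S))
assert lhs == rhs
assert matmul(matmul(S, T), lhs) == I
print("(S o T)^-1 equals T^-1 o S^-1")
```

The order reversal is the "getting dressed" hint: socks on before shoes, but shoes off before socks.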
Does Theorem ICLT remind you of the flavor of any theorem we have seen about matrices? (Hint: Think about getting dressed.) Hmmmm.
A few comments on this definition. First, be careful with your language (Proof Technique L). Two vector spaces are isomorphic, or they are not. It is a yes/no situation and the term only applies to a pair of vector spaces. Any invertible linear transformation can be called an isomorphism; it is a term that applies to functions. Second, given a pair of vector spaces there might be several different isomorphisms between the two vector spaces. But it only takes the existence of one to call the pair isomorphic. Third, is $U$ isomorphic to $V$, or is $V$ isomorphic to $U$? It does not matter, since the inverse linear transformation will provide the needed isomorphism in the opposite direction. Being "isomorphic to" is an equivalence relation on the set of all vector spaces (see Theorem SER for a reminder about equivalence relations).
In Example IVSAV we avoided a computation in $P_3$ by a conversion of the computation to a new vector space, $M_{22}$, via an invertible linear transformation (also known as an isomorphism). Here is a diagram meant to illustrate the more general situation of two vector spaces, $U$ and $V$, and an invertible linear transformation, $T$. The diagram is simply about a sum of two vectors from $U$, rather than a more involved linear combination. It should remind you of Diagram DLTA.
The rank and nullity of a linear transformation are closely related. Here are the definitions and theorems; see the Archetypes (Archetypes) for loads of examples.

Definition ROLT (Rank Of a Linear Transformation). Suppose that $T\colon U\to V$ is a linear transformation. Then the rank of $T$ is $r(T) = \dim\left(R(T)\right)$.

Definition NOLT (Nullity Of a Linear Transformation). Suppose that $T\colon U\to V$ is a linear transformation. Then the nullity of $T$ is $n(T) = \dim\left(K(T)\right)$.

Here are two quick theorems: $T$ is injective if and only if $n(T) = 0$ (Theorem NOILT), and $T$ is surjective if and only if $r(T) = \dim(V)$ (Theorem ROSLT).

Theorem RPNDD (Rank Plus Nullity is Domain Dimension). Suppose that $T\colon U\to V$ is a linear transformation. Then
$$r(T) + n(T) = \dim(U)$$

Proof
Theorem RPNC said that the rank and nullity of a matrix sum to the number of columns of the matrix. This result is now an easy consequence of Theorem RPNDD when we consider the linear transformation $T\colon\mathbb{C}^n\to\mathbb{C}^m$ defined with the $m\times n$ matrix $A$ by $T(x) = Ax$. The range and kernel of $T$ are identical to the column space and null space of the matrix $A$ (Exercise ILT.T20, Exercise SLT.T20), so the rank and nullity of the matrix $A$ are identical to the rank and nullity of the linear transformation $T$. The dimension of the domain of $T$ is the dimension of $\mathbb{C}^n$, exactly the number of columns of the matrix $A$.
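The correspondence between Theorem RPNC and Theorem RPNDD can be watched in action: row-reduce a matrix, count the pivot columns (the rank) and the free columns (the nullity), and see them sum to the number of columns. A Python sketch in exact rational arithmetic; the $3\times 5$ matrix is a hypothetical example of ours, not from the text.

```python
from fractions import Fraction

# Gaussian elimination: every column either yields a pivot (counted
# toward the rank) or is free (counted toward the nullity), so the
# two counts must split the columns between them.
def rank_and_nullity(rows):
    A = [[Fraction(x) for x in row] for row in rows]
    m, n_cols = len(A), len(A[0])
    r = 0
    for col in range(n_cols):
        piv = next((i for i in range(r, m) if A[i][col] != 0), None)
        if piv is None:
            continue                      # free column
        A[r], A[piv] = A[piv], A[r]
        p = A[r][col]
        A[r] = [x / p for x in A[r]]      # normalize the pivot row
        for i in range(m):
            if i != r and A[i][col] != 0:
                A[i] = [x - A[i][col] * y for x, y in zip(A[i], A[r])]
        r += 1
    return r, n_cols - r

# Stand-in for T(x) = Ax with T : C^5 -> C^3.
A = [[1, 2, 0, 1, -1],
     [0, 0, 1, 4, 2],
     [1, 2, 1, 5, 1]]
r, n = rank_and_nullity(A)
print(r, n, r + n)   # rank + nullity equals 5, the dimension of the domain
```

Here the third row is the sum of the first two, so the rank is 2 and the nullity is 3.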
This theorem can be especially useful in determining basic properties of linear transformations. For example, suppose that $T\colon\mathbb{C}^6\to\mathbb{C}^6$ is a linear transformation and you are able to quickly establish that the kernel is trivial. Then $n(T) = 0$. First this means that $T$ is injective by Theorem NOILT. Also, Theorem RPNDD becomes
$$6 = \dim\left(\mathbb{C}^6\right) = r(T) + n(T) = r(T) + 0 = r(T)$$
So the rank of $T$ is equal to the dimension of the codomain, and by Theorem ROSLT we know $T$ is surjective. Finally, we know $T$ is invertible by Theorem ILTIS. So from the determination that the kernel is trivial, and consideration of various
dimensions, the theorems of this section allow us to conclude the existence of
an inverse linear transformation for T . Similarly, Theorem RPNDD can be used
to provide alternative proofs for Theorem ILTD, Theorem SLTD and Theorem
IVSED. It would be an interesting exercise to construct these proofs.
It would be instructive to study the archetypes that are linear transformations
and see how many of their properties can be deduced just from considering
only the dimensions of the domain and codomain. Then add in just knowledge
of either the nullity or rank, and see how much more you can learn about the
linear transformation. The table preceding all of the archetypes (Archetypes)
could be a good place to start this analysis.
$$D = \begin{bmatrix} 2 & 1 & 7 & -7 \\ -3 & 4 & -5 & -6 \\ 1 & 1 & 4 & -5 \end{bmatrix}$$
To apply the theory of linear transformations to these two archetypes, employ the matrix-vector product (Definition MVP) and define the linear transformation,
$$T\colon\mathbb{C}^4\to\mathbb{C}^3,\qquad T(x) = Dx = x_1\begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix} + x_2\begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} + x_3\begin{bmatrix} 7 \\ -5 \\ 4 \end{bmatrix} + x_4\begin{bmatrix} -7 \\ -6 \\ -5 \end{bmatrix}$$
In the language of linear transformations, solving the system of Archetype D is equivalent to asking for $T^{-1}(b)$, the preimage of the vector of constants $b$. In the language of vectors and matrices it asks for a linear combination of the four columns of $D$ that will equal $b$. One solution listed is
$$w = \begin{bmatrix} 7 \\ 8 \\ 1 \\ 3 \end{bmatrix}$$
With a nonempty preimage, Theorem KPI tells us that the complete solution set of the linear system is the preimage of $b$,
$$w + K(T) = \left\{\, w + z \mid z\in K(T) \,\right\}$$
The kernel of the linear transformation $T$ is exactly the null space of the matrix $D$ (see Exercise ILT.T20), so this approach to the solution set should be reminiscent of Theorem PSPHS. The kernel of the linear transformation is the preimage of the zero vector, exactly equal to the solution set of the homogeneous system $\mathcal{LS}(D,\,0)$. Since $D$ has a null space of dimension two, every preimage (and in particular the preimage of $b$) is as "big" as a subspace of dimension two (but is not a subspace).
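For Archetype D this structure can be checked directly. A Python sketch; the particular kernel basis vectors `z1` and `z2` below are one choice among many, read off by row-reducing $D$, and are our computation rather than values quoted from the text.

```python
# Coefficient matrix D, vector of constants b, and the listed
# particular solution w for Archetype D.
D = [[2, 1, 7, -7],
     [-3, 4, -5, -6],
     [1, 1, 4, -5]]
b = [8, -12, 4]
w = [7, 8, 1, 3]

def T(x):
    """The linear transformation T(x) = Dx."""
    return [sum(row[j] * x[j] for j in range(4)) for row in D]

assert T(w) == b

# A basis for K(T), the null space of D (dimension two).
z1 = [-3, -1, 1, 0]
z2 = [2, 3, 0, 1]
assert T(z1) == [0, 0, 0] and T(z2) == [0, 0, 0]

# Every element of w + K(T) is another preimage of b: sweep a small
# grid of coefficients s, t for the kernel basis.
for s in range(-2, 3):
    for t in range(-2, 3):
        x = [wi + s * a + t * c for wi, a, c in zip(w, z1, z2)]
        assert T(x) == b
print("w + K(T) is a whole plane of solutions to T(x) = b")
```

This is Theorem KPI made tangible: one particular solution plus anything from the kernel.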
Archetype E is identical to Archetype D but with a different vector of constants, $d$. We can use the same linear transformation $T$ to discuss this system of equations since the coefficient matrix is identical. Now the set of solutions to $\mathcal{LS}(D,\,d)$ is the pre-image of $d$, $T^{-1}(d)$. However, the vector $d$ is not in the range of the linear transformation (nor is it in the column space of the matrix, since these two sets are equal by Exercise SLT.T20). So the empty pre-image is equivalent to the inconsistency of the linear system.
These two archetypes each have three equations in four variables, so either the resulting linear systems are inconsistent, or they are consistent and application of Theorem CMVEI tells us that the system has infinitely many solutions. Considering these same parameters for the linear transformation, the dimension of the domain, $\mathbb{C}^4$, is four, while the codomain, $\mathbb{C}^3$, has dimension three. Then
\begin{align*}
n(T) &= \dim\left(\mathbb{C}^4\right) - r(T) && \text{Theorem RPNDD}\\
&= 4 - \dim\left(R(T)\right) && \text{Definition ROLT}\\
&\geq 4 - 3 && \text{$R(T)$ is a subspace of $\mathbb{C}^3$}\\
&= 1
\end{align*}
So the nullity of $T$ is at least one, and any nonempty preimage is infinite, no matter which elements of the codomain of the linear transformation we choose. For every theorem about systems of linear equations there is an analogue about linear transformations. The theory of linear transformations provides all the tools to recreate the theory of solutions to linear systems of equations.
We will continue this adventure in Chapter R.