
http://linear.ups.edu/html/section-IVLT.html

A First Course in Linear Algebra

Invertible Linear Transformations

In this section we will conclude our introduction to linear transformations by bringing together the twin properties of injectivity and surjectivity and considering linear transformations with both of these properties.

Subsection IVLT: Invertible Linear Transformations

One preliminary definition, and then we will have our main definition for this section.

Definition IDLT: Identity Linear Transformation. The identity linear transformation on the vector space W is defined as

$$I_W : W \to W, \qquad I_W(w) = w$$

Informally, $I_W$ is the "do-nothing" function. You should check that $I_W$ is really a linear transformation, as claimed, and then compute its kernel and range to see that it is both injective and surjective. All of these facts should be straightforward to verify (Exercise IVLT.T05). With this in hand we can make our main definition.

Definition IVLT: Invertible Linear Transformations. Suppose that $T : U \to V$ is a linear transformation. If there is a function $S : V \to U$ such that

$$S \circ T = I_U \qquad\qquad T \circ S = I_V$$

then T is invertible. In this case, we call S the inverse of T and write $S = T^{-1}$.

Informally, a linear transformation T is invertible if there is a companion linear transformation, S, which "undoes" the action of T. When the two linear transformations are applied consecutively (composition), in either order, the result is to have no real effect. It is entirely analogous to squaring a positive number and then taking its (positive) square root.

Here is an example of a linear transformation that is invertible. As usual at the beginning of a section, do not be concerned with where S came from, just understand how it illustrates Definition IVLT.

Example AIVLT: An invertible linear transformation

Archetype V is the linear transformation

$$T : P_3 \to M_{22}, \qquad T\left(a + bx + cx^2 + dx^3\right) = \begin{bmatrix} a + b & a - 2c \\ d & b - d \end{bmatrix}$$

Define the function $S : M_{22} \to P_3$ by

$$S\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right) = (a - c - d) + (c + d)x + \frac{1}{2}(a - b - c - d)x^2 + cx^3$$

Then

$$\begin{aligned}
(T \circ S)\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right)
&= T\left(S\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right)\right) \\
&= T\left((a - c - d) + (c + d)x + \frac{1}{2}(a - b - c - d)x^2 + cx^3\right) \\
&= \begin{bmatrix} (a - c - d) + (c + d) & (a - c - d) - 2\left(\frac{1}{2}(a - b - c - d)\right) \\ c & (c + d) - c \end{bmatrix} \\
&= \begin{bmatrix} a & b \\ c & d \end{bmatrix} = I_{M_{22}}\left(\begin{bmatrix} a & b \\ c & d \end{bmatrix}\right)
\end{aligned}$$

and

$$\begin{aligned}
(S \circ T)\left(a + bx + cx^2 + dx^3\right)
&= S\left(T\left(a + bx + cx^2 + dx^3\right)\right) \\
&= S\left(\begin{bmatrix} a + b & a - 2c \\ d & b - d \end{bmatrix}\right) \\
&= ((a + b) - d - (b - d)) + (d + (b - d))x \\
&\qquad + \left(\frac{1}{2}((a + b) - (a - 2c) - d - (b - d))\right)x^2 + (d)x^3 \\
&= a + bx + cx^2 + dx^3 = I_{P_3}\left(a + bx + cx^2 + dx^3\right)
\end{aligned}$$

For now, understand why these computations show that T is invertible, and that $S = T^{-1}$. Maybe even be amazed by how S works so perfectly in concert with T! We will see later just how to arrive at the correct form of S (when it is possible).
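For readers who want to experiment, the computations in Example AIVLT can also be checked mechanically. Here is a sketch using SymPy (the library choice is an assumption of this note; the computational cells of the text itself use Sage), storing a polynomial $a + bx + cx^2 + dx^3$ as its coefficient tuple (a, b, c, d):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')

def T(p):
    # Archetype V: P3 -> M22
    a, b, c, d = p
    return sp.Matrix([[a + b, a - 2*c], [d, b - d]])

def S(M):
    # The candidate inverse: M22 -> P3
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (a - c - d, c + d, sp.Rational(1, 2)*(a - b - c - d), c)

p = (a, b, c, d)
M = sp.Matrix([[a, b], [c, d]])

# S after T is the identity on P3, and T after S is the identity on M22.
print([sp.simplify(u - v) for u, v in zip(S(T(p)), p)])  # [0, 0, 0, 0]
print(sp.simplify(T(S(M)) - M))                          # Matrix([[0, 0], [0, 0]])
```

Because the symbols are left unspecified, this verifies both compositions for every polynomial and every matrix at once.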

It can be just as instructive to study a linear transformation that is not invertible.

Example ANILT: A non-invertible linear transformation

Consider the linear transformation $T : \mathbb{C}^3 \to M_{22}$ defined by

$$T\left(\begin{bmatrix} a \\ b \\ c \end{bmatrix}\right) = \begin{bmatrix} a - b & 2a + 2b + c \\ 3a + b + c & -2a - 6b - 2c \end{bmatrix}$$

Suppose we were to search for an inverse function $S : M_{22} \to \mathbb{C}^3$.

First verify that the $2 \times 2$ matrix $A = \begin{bmatrix} 5 & 3 \\ 8 & 2 \end{bmatrix}$ is not in the range of T. This will amount to finding an input to T, $\begin{bmatrix} a \\ b \\ c \end{bmatrix}$, such that

$$\begin{aligned} a - b &= 5 \\ 2a + 2b + c &= 3 \\ 3a + b + c &= 8 \\ -2a - 6b - 2c &= 2 \end{aligned}$$

As this system of equations is inconsistent, there is no input column vector, and $A \notin R(T)$. How should we define $S(A)$? Note that

$$T(S(A)) = (T \circ S)(A) = I_{M_{22}}(A) = A$$

So any definition we would provide for $S(A)$ must then be a column vector that T sends to A and we would have $A \in R(T)$, contrary to the definition of T. This is enough to see that there is no function S that will allow us to conclude that T is invertible, since we cannot provide a consistent definition for $S(A)$ if we assume T is invertible.
Even though we now know that T is not invertible, let us not leave this example just yet. Check that

$$T\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right) = \begin{bmatrix} 3 & 2 \\ 5 & 2 \end{bmatrix} = B \qquad\qquad T\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right) = \begin{bmatrix} 3 & 2 \\ 5 & 2 \end{bmatrix} = B$$

How would we define $S(B)$?

$$S(B) = S\left(T\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right)\right) = (S \circ T)\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right) = I_{\mathbb{C}^3}\left(\begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}\right) = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}$$

or

$$S(B) = S\left(T\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right)\right) = (S \circ T)\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right) = I_{\mathbb{C}^3}\left(\begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}\right) = \begin{bmatrix} 0 \\ -3 \\ 8 \end{bmatrix}$$

Which definition should we provide for $S(B)$? Both are necessary. But then S is not a function. So we have a second reason to know that there is no function S that will allow us to conclude that T is invertible. It happens that there are infinitely many column vectors that S would have to take to B. Construct the kernel of T,

$$K(T) = \left\langle \left\{ \begin{bmatrix} -1 \\ -1 \\ 4 \end{bmatrix} \right\} \right\rangle$$

Now choose either of the two inputs used above for T and add to it a scalar multiple of the basis vector for the kernel of T. For example,

$$x = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix} + (-2)\begin{bmatrix} -1 \\ -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \\ -4 \end{bmatrix}$$

then verify that $T(x) = B$. Practice creating a few more inputs for T that would be sent to B, and see why it is hopeless to think that we could ever provide a reasonable definition for $S(B)$! There is a whole subspace's worth of values that $S(B)$ would have to take on.
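Both dead ends in this example can be replayed mechanically. Here is a SymPy sketch (the library choice is an assumption; the text's own cells use Sage) that flattens T into its $4 \times 3$ matrix, with rows ordered by the output entries (1,1), (1,2), (2,1), (2,2):

```python
import sympy as sp

# Matrix of T : C^3 -> M22 from Example ANILT.
M = sp.Matrix([[ 1, -1,  0],
               [ 2,  2,  1],
               [ 3,  1,  1],
               [-2, -6, -2]])

# A = [[5, 3], [8, 2]] flattened the same way; the augmented matrix has
# larger rank, so T(v) = A is inconsistent and A is not in the range.
A = sp.Matrix([5, 3, 8, 2])
print(M.rank(), M.row_join(A).rank())   # 2 3

# Two different inputs that T sends to the same matrix B.
v1, v2 = sp.Matrix([1, -2, 4]), sp.Matrix([0, -3, 8])
print(M * v1 == M * v2)                 # True

# The kernel is a whole line of vectors, not just the zero vector.
print(len(M.nullspace()))               # 1
```

The mismatched ranks give the first failure (no candidate for S(A)); the repeated output and one-dimensional null space give the second (too many candidates for S(B)).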

In Example ANILT you may have noticed that T is not surjective, since the matrix A was not in the range of T. And T is not injective since there are two different input column vectors that T sends to the matrix B. Linear transformations T that are not surjective lead to putative inverse functions S that are undefined on inputs outside of the range of T. Linear transformations T that are not injective lead to putative inverse functions S that are multiply defined on each of their inputs. We will formalize these ideas in Theorem ILTIS.

But first notice in Definition IVLT that we only require the inverse (when it exists) to be a function. When it does exist, it too is a linear transformation.

Theorem ILTLT: Inverse of a Linear Transformation is a Linear Transformation. Suppose that $T : U \to V$ is an invertible linear transformation. Then the function $T^{-1} : V \to U$ is a linear transformation.

Proof

So when T has an inverse, $T^{-1}$ is also a linear transformation. Furthermore, $T^{-1}$ is an invertible linear transformation and its inverse is what you might expect.

Theorem IILT: Inverse of an Invertible Linear Transformation. Suppose that $T : U \to V$ is an invertible linear transformation. Then $T^{-1}$ is an invertible linear transformation and $\left(T^{-1}\right)^{-1} = T$.

Proof

Sage IVLT: Invertible Linear Transformations

Subsection IV: Invertibility

We now know what an inverse linear transformation is, but just which linear transformations have inverses? Here is a theorem we have been preparing for all chapter long.

Theorem ILTIS: Invertible Linear Transformations are Injective and Surjective. Suppose $T : U \to V$ is a linear transformation. Then T is invertible if and only if T is injective and surjective.

Proof
When a linear transformation is both injective and surjective, the pre-image of any element of the codomain is a set of size one (a "singleton"). This fact allowed us to construct the inverse linear transformation in one half of the proof of Theorem ILTIS (see Proof Technique C) and is illustrated in the following cartoon. This should remind you of the very general Diagram KPI which was used to illustrate Theorem KPI about pre-images, only now we have an invertible linear transformation which is therefore surjective and injective (Theorem ILTIS). As a surjective linear transformation, there are no vectors depicted in the codomain, V, that have empty pre-images. More importantly, as an injective linear transformation, the kernel is trivial (Theorem KILT), so each pre-image is a single vector. This makes it possible to "turn around" all the arrows to create the inverse linear transformation $T^{-1}$.

Diagram IVLT Invertible Linear Transformation

Many will call an injective and surjective function a bijective function, or just a bijection. Theorem ILTIS tells us that this is just a synonym for the term invertible (which we will use exclusively).

We can follow the constructive approach of the proof of Theorem ILTIS to construct the inverse of a specific linear transformation, as the next example shows.

Example CIVLT: Computing the Inverse of a Linear Transformation

We will make frequent use of the characterization of invertible linear transformations provided by Theorem ILTIS. The next theorem is a good example of this, and we will use it often, too.

Theorem CIVLT: Composition of Invertible Linear Transformations. Suppose that $T : U \to V$ and $S : V \to W$ are invertible linear transformations. Then the composition, $(S \circ T) : U \to W$, is an invertible linear transformation.

Proof

When a composition is invertible, the inverse is easy to construct.

Theorem ICLT: Inverse of a Composition of Linear Transformations. Suppose that $T : U \to V$ and $S : V \to W$ are invertible linear transformations. Then $S \circ T$ is invertible and $(S \circ T)^{-1} = T^{-1} \circ S^{-1}$.

Proof
Notice that this theorem not only establishes what the inverse of $S \circ T$ is, it also duplicates the conclusion of Theorem CIVLT and so also establishes the invertibility of $S \circ T$. But somehow, the proof of Theorem CIVLT is a nicer way to get this property.

Does Theorem ICLT remind you of the flavor of any theorem we have seen about matrices? (Hint: Think about getting dressed.) Hmmmm.
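For matrix representatives of invertible linear transformations, Theorem ICLT mirrors the matrix identity $(AB)^{-1} = B^{-1}A^{-1}$. Here is a SymPy sketch with made-up matrices (both the library and the specific matrices are assumptions of this note):

```python
import sympy as sp

A = sp.Matrix([[2, 1], [1, 1]])   # stands in for S
B = sp.Matrix([[1, 3], [0, 1]])   # stands in for T

lhs = (A * B).inv()               # inverse of the composition
rhs = B.inv() * A.inv()           # undo T, then undo S: shoes off before socks
print(lhs == rhs)                 # True
print(lhs == A.inv() * B.inv())   # False: the order genuinely matters
```

The second comparison is the point of the dressing hint: the last transformation applied must be the first one undone.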

Sage CIVLT: Computing the Inverse of a Linear Transformation

Subsection SI: Structure and Isomorphism

A vector space is defined (Definition VS) as a set of objects ("vectors") endowed with a definition of vector addition (+) and a definition of scalar multiplication (written with juxtaposition). Many of our definitions about vector spaces involve linear combinations (Definition LC), such as the span of a set (Definition SS) and linear independence (Definition LI). Other definitions are built up from these ideas, such as bases (Definition B) and dimension (Definition D). The defining properties of a linear transformation require that a function "respect" the operations of the two vector spaces that are the domain and the codomain (Definition LT). Finally, an invertible linear transformation is one that can be "undone": it has a companion that reverses its effect. In this subsection we are going to begin to roll all these ideas into one.

A vector space has structure derived from definitions of the two operations and the requirement that these operations interact in ways that satisfy the ten properties of Definition VS. When two different vector spaces have an invertible linear transformation defined between them, then we can translate questions about linear combinations (spans, linear independence, bases, dimension) from the first vector space to the second. The answers obtained in the second vector space can then be translated back, via the inverse linear transformation, and interpreted in the setting of the first vector space. We say that these invertible linear transformations "preserve structure." And we say that the two vector spaces are "structurally the same." The precise term is isomorphic, from Greek meaning "of the same form." Let us begin to try to understand this important concept.

Definition IVS: Isomorphic Vector Spaces. Two vector spaces U and V are isomorphic if there exists an invertible linear transformation T with domain U and codomain V, $T : U \to V$. In this case, we write $U \cong V$, and the linear transformation T is known as an isomorphism between U and V.

A few comments on this definition. First, be careful with your language (Proof Technique L). Two vector spaces are isomorphic, or they are not. It is a yes/no situation and the term only applies to a pair of vector spaces. Any invertible linear transformation can be called an isomorphism; it is a term that applies to functions. Second, given a pair of vector spaces there might be several different isomorphisms between the two vector spaces. But it only takes the existence of one to call the pair isomorphic. Third, is U isomorphic to V, or is V isomorphic to U? It does not matter, since the inverse linear transformation will provide the needed isomorphism in the opposite direction. Being "isomorphic to" is an equivalence relation on the set of all vector spaces (see Theorem SER for a reminder about equivalence relations).

Example IVSAV: Isomorphic vector spaces, Archetype V

In Example IVSAV we avoided a computation in $P_3$ by a conversion of the computation to a new vector space, $M_{22}$, via an invertible linear transformation (also known as an isomorphism). Here is a diagram meant to illustrate the more general situation of two vector spaces, U and V, and an invertible linear transformation, T. The diagram is simply about a sum of two vectors from U, rather than a more involved linear combination. It should remind you of Diagram DLTA.

Diagram AIVS Addition in Isomorphic Vector Spaces

To understand this diagram, begin in the upper-left corner, and by going straight down we can compute the sum of the two vectors using the addition for the vector space U. The more circuitous alternative, in the spirit of Example IVSAV, is to begin in the upper-left corner and then proceed clockwise around the other three sides of the rectangle. Notice that the vector addition is accomplished using the addition in the vector space V. Then, because T is a linear transformation, we can say that the result of $T(u_1) + T(u_2)$ is equal to $T(u_1 + u_2)$. Then the key feature is to recognize that applying $T^{-1}$ obviously converts the second version of this result into the sum in the lower-left corner. So there are two routes to the sum $u_1 + u_2$, each employing an addition from a different vector space, but one is "direct" and the other is "roundabout". You might try designing a similar diagram for the case of scalar multiplication (see Diagram DLTM) or for a full linear combination.
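The two routes around the diagram can be traced symbolically. This SymPy sketch (the library choice is an assumption) uses Archetype V's T from Example AIVLT, with polynomials stored as coefficient tuples; the pull-back below is that example's inverse S:

```python
import sympy as sp

a1, b1, c1, d1, a2, b2, c2, d2 = sp.symbols('a1 b1 c1 d1 a2 b2 c2 d2')

def T(p):
    # Archetype V: P3 -> M22
    a, b, c, d = p
    return sp.Matrix([[a + b, a - 2*c], [d, b - d]])

def Tinv(M):
    # Its inverse, from Example AIVLT: M22 -> P3
    a, b, c, d = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
    return (a - c - d, c + d, sp.Rational(1, 2)*(a - b - c - d), c)

u1, u2 = (a1, b1, c1, d1), (a2, b2, c2, d2)
direct = tuple(x + y for x, y in zip(u1, u2))   # straight down: add in U
roundabout = Tinv(T(u1) + T(u2))                # clockwise: add in V, pull back
print([sp.simplify(x - y) for x, y in zip(direct, roundabout)])  # [0, 0, 0, 0]
```

The zero differences confirm that the direct and roundabout routes land on the same sum for arbitrary inputs.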
Checking the dimensions of two vector spaces can be a quick way to establish that they are not isomorphic. Here is the theorem.

Theorem IVSED: Isomorphic Vector Spaces have Equal Dimension. Suppose U and V are isomorphic vector spaces. Then $\dim(U) = \dim(V)$.

Proof

The contrapositive of Theorem IVSED says that if U and V have different dimensions, then they are not isomorphic. Dimension is the simplest "structural" characteristic that will allow you to distinguish non-isomorphic vector spaces. For example, $P_6$ is not isomorphic to $M_{34}$ since their dimensions (7 and 12, respectively) are not equal. With tools developed in Section VR we will be able to establish that the converse of Theorem IVSED is true. Think about that one for a moment.

Subsection RNLT: Rank and Nullity of a Linear Transformation

Just as a matrix has a rank and a nullity, so too do linear transformations. And just like the rank and nullity of a matrix are related (they sum to the number of columns, Theorem RPNC) the rank and nullity of a linear transformation are related. Here are the definitions and theorems; see the Archetypes (Archetypes) for loads of examples.

Definition ROLT: Rank Of a Linear Transformation. Suppose that $T : U \to V$ is a linear transformation. Then the rank of T, $r(T)$, is the dimension of the range of T,

$$r(T) = \dim(R(T))$$

Definition NOLT: Nullity Of a Linear Transformation. Suppose that $T : U \to V$ is a linear transformation. Then the nullity of T, $n(T)$, is the dimension of the kernel of T,

$$n(T) = \dim(K(T))$$
Here are two quick theorems.

Theorem ROSLT: Rank Of a Surjective Linear Transformation. Suppose that $T : U \to V$ is a linear transformation. Then the rank of T is the dimension of V, $r(T) = \dim(V)$, if and only if T is surjective.

Proof

Theorem NOILT: Nullity Of an Injective Linear Transformation. Suppose that $T : U \to V$ is a linear transformation. Then the nullity of T is zero, $n(T) = 0$, if and only if T is injective.

Proof
Just as injectivity and surjectivity come together in invertible linear transformations, there is a clear relationship between rank and nullity of a linear transformation. If one is big, the other is small.

Theorem RPNDD: Rank Plus Nullity is Domain Dimension. Suppose that $T : U \to V$ is a linear transformation. Then

$$r(T) + n(T) = \dim(U)$$

Proof
Theorem RPNC said that the rank and nullity of a matrix sum to the number of columns of the matrix. This result is now an easy consequence of Theorem RPNDD when we consider the linear transformation $T : \mathbb{C}^n \to \mathbb{C}^m$ defined with the $m \times n$ matrix A by $T(x) = Ax$. The range and kernel of T are identical to the column space and null space of the matrix A (Exercise ILT.T20, Exercise SLT.T20), so the rank and nullity of the matrix A are identical to the rank and nullity of the linear transformation T. The dimension of the domain of T is the dimension of $\mathbb{C}^n$, exactly the number of columns for the matrix A.
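The agreement between the matrix and transformation versions can be seen on the $4 \times 3$ matrix of T from Example ANILT. A SymPy sketch (the library choice is an assumption):

```python
import sympy as sp

# The matrix of T from Example ANILT, rows ordered by output entries.
M = sp.Matrix([[1, -1, 0], [2, 2, 1], [3, 1, 1], [-2, -6, -2]])
r, n = M.rank(), len(M.nullspace())
print(r, n, r + n == M.cols)  # 2 1 True: rank plus nullity is the column count
```

The column count of the matrix is the dimension of the domain of the associated transformation, just as Theorem RPNDD requires.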
This theorem can be especially useful in determining basic properties of linear transformations. For example, suppose that $T : \mathbb{C}^6 \to \mathbb{C}^6$ is a linear transformation and you are able to quickly establish that the kernel is trivial. Then $n(T) = 0$. First this means that T is injective by Theorem NOILT. Also, Theorem RPNDD becomes

$$6 = \dim\left(\mathbb{C}^6\right) = r(T) + n(T) = r(T) + 0 = r(T)$$

So the rank of T is equal to the dimension of the codomain, and by Theorem ROSLT we know T is surjective. Finally, we know T is invertible by Theorem ILTIS. So from the determination that the kernel is trivial, and consideration of various dimensions, the theorems of this section allow us to conclude the existence of an inverse linear transformation for T. Similarly, Theorem RPNDD can be used to provide alternative proofs for Theorem ILTD, Theorem SLTD and Theorem IVSED. It would be an interesting exercise to construct these proofs.
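The $\mathbb{C}^6$ argument is easy to rehearse numerically. Here is a SymPy sketch with a made-up $6 \times 6$ matrix (both the library and the matrix are assumptions of this note):

```python
import sympy as sp

# An upper triangular matrix of ones: its null space is clearly trivial.
A = sp.Matrix(6, 6, lambda i, j: 1 if j >= i else 0)

n = len(A.nullspace())            # nullity: 0, so injective (Theorem NOILT)
r = A.rank()                      # rank: forced to 6 by Theorem RPNDD
print(n, r)                       # 0 6
print(A.inv() * A == sp.eye(6))   # True: the inverse transformation exists
```

Only the trivial kernel was checked directly; injectivity, surjectivity, and invertibility then follow exactly as in the paragraph above.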
It would be instructive to study the archetypes that are linear transformations
and see how many of their properties can be deduced just from considering
only the dimensions of the domain and codomain. Then add in just knowledge
of either the nullity or rank, and see how much more you can learn about the
linear transformation. The table preceding all of the archetypes (Archetypes)
could be a good place to start this analysis.

Sage LTOE: Linear Transformation Odds and Ends

Subsection SLELT: Systems of Linear Equations and Linear Transformations

This subsection does not really belong in this section, or any other section, for that matter. It is just the right time to have a discussion about the connections between the central topic of linear algebra, linear transformations, and our motivating topic from Chapter SLE, systems of linear equations. We will discuss several theorems we have seen already, but we will also make some forward-looking statements that will be justified in Chapter R.
Archetype D and Archetype E are ideal examples to illustrate connections with linear transformations. Both have the same coefficient matrix,

$$D = \begin{bmatrix} 2 & 1 & 7 & -7 \\ -3 & 4 & -5 & -6 \\ 1 & 1 & 4 & -5 \end{bmatrix}$$

To apply the theory of linear transformations to these two archetypes, employ the matrix-vector product (Definition MVP) and define the linear transformation,

$$T : \mathbb{C}^4 \to \mathbb{C}^3, \qquad T(x) = Dx = x_1 \begin{bmatrix} 2 \\ -3 \\ 1 \end{bmatrix} + x_2 \begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} + x_3 \begin{bmatrix} 7 \\ -5 \\ 4 \end{bmatrix} + x_4 \begin{bmatrix} -7 \\ -6 \\ -5 \end{bmatrix}$$

Theorem MBLT tells us that T is indeed a linear transformation. Archetype D asks for solutions to $\mathcal{LS}(D, b)$, where $b = \begin{bmatrix} 8 \\ -12 \\ 4 \end{bmatrix}$. In the language of linear transformations this is equivalent to asking for $T^{-1}(b)$. In the language of vectors and matrices it asks for a linear combination of the four columns of D that will equal b. One solution listed is $w = \begin{bmatrix} 7 \\ 8 \\ 1 \\ 3 \end{bmatrix}$. With a nonempty preimage, Theorem KPI tells us that the complete solution set of the linear system is the preimage of b,

$$w + K(T) = \{\, w + z \mid z \in K(T) \,\}$$
The kernel of the linear transformation T is exactly the null space of the matrix D (see Exercise ILT.T20), so this approach to the solution set should be reminiscent of Theorem PSPHS. The kernel of the linear transformation is the preimage of the zero vector, exactly equal to the solution set of the homogeneous system $\mathcal{LS}(D, 0)$. Since D has a null space of dimension two, every preimage (and in particular the preimage of b) is as "big" as a subspace of dimension two (but is not a subspace).
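These claims about Archetype D can be replayed in SymPy (the library choice is an assumption; D, b and w are taken from the discussion above):

```python
import sympy as sp

D = sp.Matrix([[2, 1, 7, -7], [-3, 4, -5, -6], [1, 1, 4, -5]])
b = sp.Matrix([8, -12, 4])
w = sp.Matrix([7, 8, 1, 3])

print(D * w == b)             # True: w is one solution
print(len(D.nullspace()))     # 2: a null space of dimension two

# Any w + z with z in the null space (kernel) is another solution.
z = 5 * D.nullspace()[0] - 2 * D.nullspace()[1]
print(D * (w + z) == b)       # True
```

Varying the two coefficients in z sweeps out the entire preimage of b, the "plane's worth" of solutions described above.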
Archetype E is identical to Archetype D but with a different vector of constants, $d = \begin{bmatrix} 2 \\ 3 \\ 2 \end{bmatrix}$. We can use the same linear transformation T to discuss this system of equations since the coefficient matrix is identical. Now the set of solutions to $\mathcal{LS}(D, d)$ is the pre-image of d, $T^{-1}(d)$. However, the vector d is not in the range of the linear transformation (nor is it in the column space of the matrix, since these two sets are equal by Exercise SLT.T20). So the empty pre-image is equivalent to the inconsistency of the linear system.
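A quick rank comparison confirms the empty pre-image, again as a SymPy sketch (the library choice is an assumption):

```python
import sympy as sp

D = sp.Matrix([[2, 1, 7, -7], [-3, 4, -5, -6], [1, 1, 4, -5]])
d = sp.Matrix([2, 3, 2])

# Augmenting with d raises the rank, so d is outside the column space
# of D (the range of T) and the system is inconsistent.
print(D.rank(), D.row_join(d).rank())  # 2 3
```

Compare with the computation for b above, where no solution-blocking rank jump occurs.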
These two archetypes each have three equations in four variables, so either the resulting linear systems are inconsistent, or they are consistent and application of Theorem CMVEI tells us that the system has infinitely many solutions. Considering these same parameters for the linear transformation, the dimension of the domain, $\mathbb{C}^4$, is four, while the codomain, $\mathbb{C}^3$, has dimension three. Then

$$\begin{aligned}
n(T) &= \dim\left(\mathbb{C}^4\right) - r(T) && \text{Theorem RPNDD} \\
&= 4 - \dim(R(T)) && \text{Definition ROLT} \\
&\geq 4 - 3 && \text{$R(T)$ subspace of $\mathbb{C}^3$} \\
&= 1
\end{aligned}$$

So the kernel of T is nontrivial simply by considering the dimensions of the domain (number of variables) and the codomain (number of equations). Pre-images of elements of the codomain that are not in the range of T are empty (inconsistent systems). For elements of the codomain that are in the range of T (consistent systems), Theorem KPI tells us that the pre-images are built from the kernel, and with a nontrivial kernel, these pre-images are infinite (infinitely many solutions).
When do systems of equations have unique solutions? Consider the system of linear equations $\mathcal{LS}(C, f)$ and the linear transformation $S(x) = Cx$. If S has a trivial kernel, then pre-images will either be empty or be finite sets with single elements. Correspondingly, the coefficient matrix C will have a trivial null space and solution sets will either be empty (inconsistent) or contain a single solution (unique solution). Should the matrix be square and have a trivial null space then we recognize the matrix as being nonsingular. A square matrix means that the corresponding linear transformation, S, has equal-sized domain and codomain. With a nullity of zero, S is injective, and also Theorem RPNDD tells us that the rank of S is equal to the dimension of the domain, which in turn is equal to the dimension of the codomain. In other words, S is surjective. Injective and surjective, and Theorem ILTIS tells us that S is invertible. Just as we can use the inverse of the coefficient matrix to find the unique solution of any linear system with a nonsingular coefficient matrix (Theorem SNCM), we can use the inverse of the linear transformation to construct the unique element of any pre-image (proof of Theorem ILTIS).
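A small nonsingular system shows this whole chain at once. In this SymPy sketch (the library and the matrices are made-up assumptions of this note), the trivial null space guarantees a unique solution, and the inverse produces it:

```python
import sympy as sp

C = sp.Matrix([[1, 2], [3, 5]])   # nonsingular: det = -1
f = sp.Matrix([4, 9])

print(len(C.nullspace()))         # 0: trivial null space, so unique solutions
x = C.inv() * f                   # the unique element of the pre-image of f
print(x.T, C * x == f)            # Matrix([[-2, 3]]) True
```

This is Theorem SNCM in miniature: the matrix inverse and the inverse linear transformation are doing the same job.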
The executive summary of this discussion is that to every coefficient matrix of a system of linear equations we can associate a natural linear transformation. Solution sets for systems with this coefficient matrix are preimages of elements of the codomain of the linear transformation. For every theorem about systems of linear equations there is an analogue about linear transformations. The theory of linear transformations provides all the tools to recreate the theory of solutions to linear systems of equations.
We will continue this adventure in Chapter R.

Sage SUTH1: Sage Under The Hood, Round 1


Reading Questions
Exercises
