
The Invertible Matrix Theorem I

Recall that the inverse of an n × n matrix A is an n × n matrix A−1 for which


AA−1 = In = A−1 A,
where In is the n × n identity matrix. Not all matrices have inverses, and those that do are called
invertible or nonsingular.
In general, a matrix being invertible/nonsingular tells you a tremendous amount about the ma-
trix + its corresponding linear system(s) / linear transformation(s). To highlight this, your textbook
regularly adds to + revisits a monolithic colossus it calls The Invertible Matrix Theorem.
This so-called “theorem” is really just a collection of statements/observations which mean the
same thing as (and hence are logically equivalent to) A has an inverse. However, because many
of the statements lumped into this “theorem” are important—and indeed, many are related to /
duplicates of statements we’ve already visited before—I want to make sure you have them explicitly
given and explained to you. Hence, this handout!
As we learned in class, A has an inverse if and only if det(A) ≠ 0; as it turns out, this
determinant condition is the very first item in our “theorem.” Throughout, assume that
A = [v1 · · · vn] is an n × n matrix, i.e. that T(x) = Ax is a linear transformation Rn → Rn!

(1) A is invertible if and only if det(A) ≠ 0.

Why should this be true? When you think of a matrix being nonsingular, you
should think of a linear transformation which neither “squishes” nor “blows up” in-
formation.
This is possible if and only if such a transformation doesn’t transform parallelo-
grams with positive area into those with zero area, and because
(area after transformation) = |det(A)| · (area before transformation) ≠ 0,
it follows that a matrix is nonsingular if and only if its determinant is not zero.
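
If you'd like to see the determinant-as-area idea numerically, here is a minimal sketch
using NumPy (the matrices are my own illustrative choices, not ones from class):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])   # nonsingular: nothing gets "squished"
    print(np.linalg.det(A))      # 6.0 (up to rounding): areas scale by |det(A)| = 6

    B = np.array([[1.0, 2.0],
                  [2.0, 4.0]])   # second column = 2 * first column
    print(np.linalg.det(B))      # 0.0: B squishes the plane onto a line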

A number of other conditions are related to having nonzero determinant. For example:

(2) A is invertible if and only if the columns of A form a linearly independent set.

Why should this be true? A matrix which has linearly dependent rows or columns
will result in a cofactor expansion which equals zero (check this!).
Thus, det(A) ≠ 0 if and only if the columns of A form a linearly independent set.
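
Here is a quick numerical check of (2), again a NumPy sketch with an illustrative matrix:
the third column below is the sum of the first two, so the columns are dependent and the
determinant vanishes.

    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [2.0, 1.0, 3.0],
                  [0.0, 4.0, 4.0]])   # column 3 = column 1 + column 2
    print(np.linalg.det(A))           # ~0.0, so A is singular
    print(np.linalg.matrix_rank(A))   # 2 < 3: the columns are linearly dependent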

(3) A is invertible if and only if Ax = 0 has only the trivial solution.

Why should this be true? As we’ve seen before, Ax = 0 has nontrivial solutions
if and only if the columns of A are linearly dependent. Now, refer to (2).
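
You can exhibit a nontrivial solution of Ax = 0 for a singular A directly. One standard
trick (sketched here with the same illustrative matrix as above) is to take the right-singular
vector belonging to the smallest singular value:

    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [2.0, 1.0, 3.0],
                  [0.0, 4.0, 4.0]])   # singular (see the previous sketch)

    _, s, Vt = np.linalg.svd(A)
    x = Vt[-1]             # a unit vector, hence nontrivial
    print(s[-1])           # smallest singular value: ~0
    print(A @ x)           # ~[0, 0, 0]: a nontrivial solution of Ax = 0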

(4) A is invertible if and only if the linear transformation T(x) = Ax is one-to-one.

Why should this be true? T(x) = Ax is one-to-one if and only if Ax = 0 has only
the trivial solution. Now, refer to (3).
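
Equivalently, a singular A sends two different inputs to the same output, so T fails to be
one-to-one. Continuing the sketch (z below is a null vector of the same illustrative matrix):

    import numpy as np

    A = np.array([[1.0, 0.0, 1.0],
                  [2.0, 1.0, 3.0],
                  [0.0, 4.0, 4.0]])
    z = np.array([1.0, 1.0, -1.0])   # Az = 0, since col1 + col2 - col3 = 0

    x1 = np.array([5.0, -2.0, 7.0])  # any input
    x2 = x1 + z                      # a different input...
    print(A @ x1)
    print(A @ x2)                    # ...with the same output: T is not one-to-one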

Now, we know that a general linear transformation may be one-to-one but not onto, or vice versa.
However, because A is n × n (i.e. T : Rn → Rn), this cannot happen here: T is one-to-one if and
only if T is also onto.
If you want, you can accept this as fact; however, it also follows from A being invertible:

(5) A is invertible if and only if the linear transformation T(x) = Ax is onto.

Why should this be true? T(x) = Ax is onto if and only if every vector b in (the
codomain) Rn is hit by some vector x in (the domain) Rn.
Because A = [v1 · · · vn], this means that T is onto if and only if the collection
{v1, . . . , vn} spans Rn; but a set of n vectors spans Rn if and only if {v1, . . . , vn} is
linearly independent. Now, refer to (2).
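
Numerically, “the columns span Rn” is a rank condition: rank(A) = n. A one-line NumPy
sanity check (the matrix is an illustrative choice):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(np.linalg.matrix_rank(A))   # 3 = n, so T is onto (and A is invertible)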

As a result of (5), there are a number of equivalent conditions which stem from various rewritings
of “onto.”

(6) A is invertible if and only if the equation Ax = b has ≥ 1 solution for each b ∈ Rn .

Why should this be true? The equation Ax = b has ≥ 1 solution for each b ∈ Rn
if and only if T(x) = Ax is onto. Now, apply (5).
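
In code, “Ax = b has a solution for each b” is exactly what a linear solver exploits. A
sketch, reusing the invertible illustrative matrix from the previous sketch:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 0.0, 4.0])    # any right-hand side works

    x = np.linalg.solve(A, b)        # the (here unique) solution of Ax = b
    print(np.allclose(A @ x, b))     # True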

(7) A is invertible if and only if the columns of A span Rn .

Why should this be true? The columns of A = [v1 · · · vn] span Rn if and only
if T(x) = Ax is onto. Now, apply (5).
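
The solution x of Ax = b from the previous sketch is precisely the list of weights writing
b as a combination of the columns, which you can verify directly:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 0.0, 4.0])

    x = np.linalg.solve(A, b)
    combo = x[0]*A[:, 0] + x[1]*A[:, 1] + x[2]*A[:, 2]   # x1 v1 + x2 v2 + x3 v3
    print(np.allclose(combo, b))     # True: b lies in the span of the columns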

The next ingredient of our “theorem” comes from the process of finding the inverse of a matrix
(when it exists).

(8) A is invertible if and only if A is row equivalent to the n × n identity matrix In .

Why should this be true? Recall that—when it exists—you can find A−1 by
forming the augmented matrix [A | In] and putting the “A part” into RREF. The
result is guaranteed to be the augmented matrix

    [In | A−1],

and so A can be transformed into In by elementary row operations.
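
The row-reduction procedure is short enough to write out in full. Below is a minimal
Gauss-Jordan sketch (with partial pivoting for numerical stability; the function name and
test matrix are my own illustrative choices):

    import numpy as np

    def inverse_via_row_reduction(A):
        """Row reduce [A | In] until the A part becomes In; return the right half."""
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])   # the augmented matrix [A | In]
        for col in range(n):
            pivot = col + np.argmax(np.abs(M[col:, col]))
            if np.isclose(M[pivot, col], 0.0):
                raise ValueError("A is singular: no pivot in column %d" % col)
            M[[col, pivot]] = M[[pivot, col]]         # swap rows
            M[col] /= M[col, col]                     # scale the pivot entry to 1
            for r in range(n):
                if r != col:
                    M[r] -= M[r, col] * M[col]        # clear the rest of the column
        return M[:, n:]                               # the A−1 half of [In | A−1]

    A = np.array([[2.0, 1.0],
                  [1.0, 1.0]])
    print(inverse_via_row_reduction(A))   # [[ 1. -1.] [-1.  2.]]
    print(np.linalg.inv(A))               # same answer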

Another equivalence involves the relationship between A and its transpose AT .
Recall that the transpose of an m × n matrix B is the n × m matrix BT whose first row is the
first column of B, whose second row is the second column of B, etc.
As was shown on a previous handout, det(A) = det(AT) for all square matrices A, and this is
the key to the point at hand:

(9) A is invertible if and only if AT is invertible.

Why should this be true? A is invertible if and only if det(A) ≠ 0 (see (1)) and
det(A) = det(AT). Hence, A is invertible if and only if det(AT) ≠ 0 if and only if AT
is invertible.
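
And a one-line numerical spot check of det(A) = det(AT), sketched with a random matrix:

    import numpy as np

    A = np.random.rand(4, 4)          # any square matrix
    print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # True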

The above list of conditions is just a small fraction of the many equivalent ways one can say “A
is invertible,” and we’ll most definitely be revisiting this handout ≥ 1 time throughout the semester
to make updates!
In the meantime, take some time to read through the above and digest everything thoroughly.
To help, use the above to work through the following practice problems!
