
Matrices

Kevin James



Theorem
Suppose that $X$, $U$ are finite dimensional vector spaces over a field $K$ of dimensions $n$, $m$ respectively. Then $X \cong K^n$, $U \cong K^m$, and $L(X, U) \cong K^{m \times n}$.

Corollary (Theorem 4.1 of Lax)
Suppose that $T \in L(K^n, K^m)$. Then for $1 \le i \le m$, $1 \le j \le n$, there are $t_{ij} \in K$ such that if $u = Tx$ then
$$u_i = \sum_{j=1}^{n} t_{ij} x_j, \quad \text{for } 1 \le i \le m.$$

Note
Note that $t_{ij} = u_i'(Tx_j) = (u_i', Tx_j) = (u_i' T, x_j) = (T'(u_i'), x_j)$, where $\{u_i'\}$ is the dual basis of $U'$ and $T'$ is the transpose of $T$.
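For a concrete check of this identity, here is a minimal NumPy sketch (the matrix below is just an illustrative stand-in for $A_T$): the $(i, j)$ entry of $A_T$ is recovered by applying $T$ to the $j$-th standard basis vector and reading off the $i$-th coordinate.

```python
import numpy as np

# A plays the role of A_T, the matrix of T with respect to the standard bases.
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])   # m = 2, n = 3
m, n = A.shape

# t_ij = u_i'(T x_j): apply T to the j-th standard basis vector e_j,
# then read off the i-th coordinate (the i-th dual basis functional).
for i in range(m):
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0                  # j-th standard basis vector of K^n
        t_ij = (A @ e_j)[i]           # i-th coordinate of T(e_j)
        assert np.isclose(t_ij, A[i, j])
```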



Notation
• We will write the matrix associated to $T$ as $A_T$ or $M_T$. We will write the entries of this matrix as $t_{ij}$ or $[T]_{ij}$, $a_{ij}$, $[A]_{ij}$, especially when the choice of bases is clear.
• We may write $A_T = (c_1, \ldots, c_n)$, where $c_j = \begin{pmatrix} t_{1j} \\ \vdots \\ t_{mj} \end{pmatrix}$ is the $j$-th column.
• We may write $A_T = \begin{pmatrix} r_1 \\ \vdots \\ r_m \end{pmatrix}$, where $r_i = (t_{i1}, \ldots, t_{in})$ is the $i$-th row.

Note
Note that if $T \in L(X, U)$ then with respect to bases $B_X = \{x_1, \ldots, x_n\}$ of $X$ and $B_U$ of $U$,
$$A_T = [\,[T(x_1)]_{B_U}, \ldots, [T(x_n)]_{B_U}\,],$$
i.e. the $j$-th column of $A_T$ is the coordinate vector of $T(x_j)$ with respect to $B_U$.

Convention
In light of the above result, we will write all vectors in $K^\ell$ as column vectors, regardless of whether they are in the range or domain.
Note
Since the matrix representation of $\ell \in L(K^n, K)$ is a row vector (i.e. $\ell = (\ell_1, \ldots, \ell_n)$), and we have $\ell(x) = \ell_1 x_1 + \cdots + \ell_n x_n$, we deduce that
$$A_T = \begin{pmatrix} r_1 \\ \vdots \\ r_m \end{pmatrix} \implies A_T x = \begin{pmatrix} r_1 x \\ \vdots \\ r_m x \end{pmatrix}.$$
Similarly, if $A_T = [c_1, \ldots, c_n]$, then
$$A_T \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \sum_{j=1}^{n} x_j c_j.$$
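Both descriptions of the product $A_T x$ (rows acting as functionals, and a linear combination of columns) can be verified numerically; a minimal NumPy sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
x = np.array([1., 0., -1.])

# Row picture: the i-th entry of Ax is the functional r_i applied to x.
row_picture = np.array([A[i, :] @ x for i in range(A.shape[0])])

# Column picture: Ax is the linear combination sum_j x_j c_j of the columns.
col_picture = sum(x[j] * A[:, j] for j in range(A.shape[1]))

assert np.allclose(A @ x, row_picture)
assert np.allclose(A @ x, col_picture)
```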



Matrix Algebra
Suppose $T, R \in L(K^n, K^m)$, $k \in K$ and $S \in L(K^m, K^t)$. Then
$$[T + R]_{ij} = [T]_{ij} + [R]_{ij},$$
$$[kT]_{ij} = k[T]_{ij},$$
$$[ST]_{ij} = \sum_{k=1}^{m} S_{ik} T_{kj}.$$
We say that the matrix $A_S$ is invertible if $S$ is, and then $(A_S)^{-1} := A_{S^{-1}}$.

Note
Suppose that $X$, $U$, $V$ are finite dimensional vector spaces and $T : X \to U$, $S : U \to V$ are linear maps. The definition of matrix multiplication is such that $A_{ST} = A_S A_T$.
In particular, $L(X, X) \cong K^{n \times n}$ (where $n = \dim X$) as algebras as well as vector spaces.
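A minimal NumPy sketch of the composition rule $A_{ST} = A_S A_T$, with small matrices standing in for $S$ and $T$:

```python
import numpy as np

A_T = np.array([[1., 2.],
                [3., 4.],
                [0., 1.]])      # T : K^2 -> K^3
A_S = np.array([[1., 0., 2.],
                [0., 1., 1.]])  # S : K^3 -> K^2

x = np.array([5., -2.])

# [ST]_{ij} = sum_k S_{ik} T_{kj}: the matrix of the composition S∘T
# is the matrix product A_S A_T.
A_ST = A_S @ A_T
assert np.allclose(A_ST @ x, A_S @ (A_T @ x))
```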

Remark
From the above note, it follows that matrix multiplication is associative and distributes over matrix addition, just as composition of functions is associative and distributes over addition of maps.
Convention
Suppose that $X$ is a finite dimensional vector space over a field $K$ of dimension $n$. Then $X' \cong (K^n)' \cong K^n$, and we will associate row vectors to elements of $X'$, since row vectors naturally give a linear map from the column representation of $K^n$ to $K$ (i.e. $\ell(x) = (\ell_1, \ldots, \ell_n) \cdot \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$).

Matrix of the Transpose

Suppose that $X$, $U$ are finite dimensional vector spaces over a field $K$ of dimensions $n$, $m$ respectively and that $T \in L(X, U)$. Let $A_T$ denote the matrix of $T$ with respect to some fixed choice of bases for $X$ and $U$. Recall that $T' \in L(U', X')$. With respect to the dual bases of $U'$ and $X'$, the matrix of $T'$ is denoted $A_{T'} = {}^T\!A_T$ and satisfies
$$[{}^T\!A_T]_{ij} = [A_T]_{ji}.$$
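The defining relation of the transpose, $(\ell, Tx) = (T'\ell, x)$, can be checked numerically; a minimal NumPy sketch with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A_T, the matrix of T : K^4 -> K^3
x = rng.standard_normal(4)        # a vector in the domain
ell = rng.standard_normal(3)      # a functional on K^3, written as a row vector

# (ell, Tx) = (T' ell, x): applying A^T to ell gives the pulled-back functional.
assert np.isclose(ell @ (A @ x), (A.T @ ell) @ x)
```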



Theorem
Suppose that $A = [a_1, \ldots, a_n] \in K^{m \times n}$. Then $R_A = \mathrm{Span}(a_1, \ldots, a_n)$, where $R_A$ denotes the range of $A$.

Definition
Suppose that $A = [a_1, \ldots, a_n] = \begin{pmatrix} r_1 \\ \vdots \\ r_m \end{pmatrix} \in K^{m \times n}$.
1. $\mathrm{Col}(A) = \mathrm{Span}(a_1, \ldots, a_n)$.
2. $\mathrm{Row}(A) = \mathrm{Span}(r_1, \ldots, r_m)$.

Note
$\dim(\mathrm{Col}(A_T)) = \dim(R_T) = \dim(R_{T'}) = \dim(\mathrm{Col}(A_{T'})) = \dim(\mathrm{Col}({}^T\!A_T)) = \dim(\mathrm{Row}(A_T))$.

Definition
We call the common value from the note above the rank of the map.
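A minimal NumPy sketch of the equality of column rank and row rank, using a matrix with a dependent row:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # a multiple of the first row
              [0., 1., 1.]])

# dim Col(A) = rank(A) = rank(A^T) = dim Row(A).
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2
```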



Theorem (Change of Basis)
Suppose that $X$, $U$ are finite dimensional vector spaces over a field $K$ of dimensions $n$, $m$ respectively and that $T : X \to U$ is a linear map. Suppose that $B_1$ and $B_2$ are bases for $X$ and that $C_1$ and $C_2$ are bases for $U$. Then there are invertible matrices $R \in K^{n \times n}$ and $S \in K^{m \times m}$ such that
$${}_{B_2}[T]_{C_2} = S \; {}_{B_1}[T]_{C_1} \; R^{-1}.$$
In particular, if $T \in L(X, X)$ and $B_i = C_i$ ($i = 1, 2$), then we have an invertible $S \in K^{n \times n}$ such that
$${}_{B_2}[T]_{B_2} = S \; {}_{B_1}[T]_{B_1} \; S^{-1}.$$

Note
Similar matrices describe the same mapping of a space into itself with
respect to different bases.
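A minimal NumPy sketch of the similarity statement, taking a random (almost surely invertible) matrix as the change of basis $S$:

```python
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.standard_normal((3, 3))   # matrix of T in the basis B_1
S = rng.standard_normal((3, 3))    # change-of-basis matrix (almost surely invertible)

# Matrix of the same map T in the basis B_2: A2 = S A1 S^{-1}.
A2 = S @ A1 @ np.linalg.inv(S)

# Similar matrices represent the same linear map, so basis-independent
# quantities such as the trace and determinant agree.
assert np.isclose(np.trace(A1), np.trace(A2))
assert np.isclose(np.linalg.det(A1), np.linalg.det(A2))
```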



Miscellaneous definitions

Definition
1. A matrix which is not invertible is called singular.
2. The square matrix $I$ with $I_{ij} = \delta_{ij}$ is called the unit matrix.
3. A square matrix $(t_{ij})$ for which $j > i \Rightarrow t_{ij} = 0$ is called lower triangular.
4. A square matrix $(t_{ij})$ for which $j < i \Rightarrow t_{ij} = 0$ is called upper triangular.
5. A square matrix $(t_{ij})$ for which $i \ne j \Rightarrow t_{ij} = 0$ is called diagonal.
6. A square matrix $(t_{ij})$ for which $|i - j| > 1 \Rightarrow t_{ij} = 0$ is called tridiagonal.
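For concreteness, each of these shapes can be produced directly in NumPy; a minimal sketch with arbitrary example entries:

```python
import numpy as np

n = 4
I = np.eye(n)                                    # unit matrix: I_ij = delta_ij
L = np.tril(np.arange(1., 17.).reshape(n, n))    # lower triangular: t_ij = 0 for j > i
U = np.triu(np.arange(1., 17.).reshape(n, n))    # upper triangular: t_ij = 0 for j < i
D = np.diag([1., 2., 3., 4.])                    # diagonal: t_ij = 0 for i != j
T = (np.diag([2.] * n)
     + np.diag([1.] * (n - 1), 1)
     + np.diag([1.] * (n - 1), -1))              # tridiagonal: t_ij = 0 for |i - j| > 1
```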



Theorem (Detecting Solubility / Gaussian Elimination)
Suppose that $A \in K^{m \times n}$ and that $\{\ell_1, \ldots, \ell_k\} \subset K^{1 \times m}$ is a basis for the left-nullspace of $A$ (equivalently, $\{{}^T\ell_1, \ldots, {}^T\ell_k\}$ is a basis for the nullspace of ${}^T\!A$). Then the matrix equation $Ax = u$ is solvable if and only if $(\ell_i, u) = 0$ for $1 \le i \le k$.
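A minimal sketch of this criterion using SciPy's null_space routine (applied to ${}^T\!A$ to obtain a left-nullspace basis); the matrix and vectors below are illustrative stand-ins:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2.],
              [2., 4.],
              [1., 0.]])

# Basis for the left null space of A = null space of A^T;
# the columns of L span { ell : ell A = 0 }.
L = null_space(A.T)

u_good = A @ np.array([1., 1.])    # in the column space, so Ax = u is solvable
u_bad = np.array([1., 0., 0.])     # not in the column space of this A

# Ax = u is solvable iff (ell_i, u) = 0 for every basis functional ell_i.
assert np.all(np.abs(L.T @ u_good) < 1e-10)
assert not np.all(np.abs(L.T @ u_bad) < 1e-10)
```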
