
Lecture 2: Background

• Linear Algebra

Vectors
• Column vector of dimension (length) N:

    x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}

• Row vector: the transpose of a column vector,

    x^T = [x_1, x_2, \ldots, x_N]
Vectors
• Hermitian transpose: conjugate transpose,

    x^H = (x^T)^* = [x_1^*, x_2^*, \ldots, x_N^*]

• Other notations:

    x = [x(0), x(1), \ldots, x(N-1)]^T

    x(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^T
Vector Norms
• Provide a metric for vector length; can be used to
  measure the distance between two vectors.
• L1 norm:

    \|x\|_1 = \sum_{i=1}^{N} |x_i|

• L2 norm (Euclidean norm):

    \|x\|_2 = \|x\| = \left( \sum_{i=1}^{N} |x_i|^2 \right)^{1/2}

• L∞ norm:

    \|x\|_\infty = \max_i |x_i|

• Unit norm vector:

    v_x = \frac{x}{\|x\|}
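A quick numerical check of these norms (a minimal sketch in NumPy; the vector x below is an arbitrary example, not taken from the text):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])           # example vector (arbitrary)

l1   = np.sum(np.abs(x))                  # L1 norm: sum of magnitudes
l2   = np.sqrt(np.sum(np.abs(x)**2))      # L2 (Euclidean) norm
linf = np.max(np.abs(x))                  # L-infinity norm: largest magnitude

v_x = x / l2                              # unit-norm vector in the direction of x

# The same norms via the library routine:
assert np.isclose(l1,   np.linalg.norm(x, 1))
assert np.isclose(l2,   np.linalg.norm(x, 2))
assert np.isclose(linf, np.linalg.norm(x, np.inf))
print(l1, l2, linf, np.linalg.norm(v_x))  # 8.0  5.099...  4.0  1.0
```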
Inner Product
• Complex vectors:

    \langle a, b \rangle = a^H b = \sum_{i=1}^{N} a_i^* b_i

• Real vectors:

    \langle a, b \rangle = a^T b = \sum_{i=1}^{N} a_i b_i

• Geometrical relationship between two vectors:

    \langle a, b \rangle = \|a\| \|b\| \cos\theta

• Orthogonal vectors:

    \langle a, b \rangle = 0

• Orthonormal vectors: orthogonal vectors having
  unit norm. See Example 2.3.1
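A small sketch of inner products in NumPy, using arbitrary example vectors: it checks orthogonality and the angle relation, and uses np.vdot for the complex case (which conjugates its first argument, matching a^H b):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, -1.0])

# Real inner product <a, b> = a^T b
ab = a @ b                               # 1*2 + 2*0 + 2*(-1) = 0 -> orthogonal

# Angle between two vectors from <a, b> = ||a|| ||b|| cos(theta)
theta = np.arccos(ab / (np.linalg.norm(a) * np.linalg.norm(b)))
print(ab, np.degrees(theta))             # 0.0 90.0

# Complex inner product <a, b> = a^H b (first argument is conjugated)
u = np.array([1 + 1j, 2j])
v = np.array([3.0, 1 - 1j])
print(np.vdot(u, v))                     # (1-5j)
```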
Inner Product
• Cauchy-Schwarz inequality:

    |\langle a, b \rangle| \le \|a\| \|b\|

  Equality holds iff a and b are collinear, i.e., a = \alpha b
  for some constant \alpha.
• Another useful inequality:

    2 |\langle a, b \rangle| \le \|a\|^2 + \|b\|^2

  Equality holds iff a and b are collinear with equal norm,
  i.e., a = \alpha b with |\alpha| = 1.
Inner Product
• FIR filter convolution sum:

    y(n) = \sum_{k=0}^{N-1} h(k) x(n-k)

• Impulse response in vector form:

    h = [h(0), h(1), \ldots, h(N-1)]^T

• Output expressed in terms of an inner product:

    y(n) = h^T x(n)
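To make the inner-product view of the convolution sum concrete, here is a minimal sketch (the filter h and signal x are arbitrary examples) that forms x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T and compares h^T x(n) against a library convolution:

```python
import numpy as np

# Example 3-tap FIR filter and input (arbitrary values for illustration)
h = np.array([0.5, 0.3, 0.2])             # impulse response h(0), h(1), h(2)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # input x(0), ..., x(4)
N = len(h)

def x_vec(n):
    """State vector x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T, with x(k) = 0 for k < 0."""
    return np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(N)])

# y(n) = h^T x(n), the convolution sum written as an inner product
y = np.array([h @ x_vec(n) for n in range(len(x))])

# Same result from the library convolution (truncated to len(x) samples)
assert np.allclose(y, np.convolve(x, h)[:len(x)])
print(y)
```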
Linear Independence
• A set of n vectors v_1, v_2, \ldots, v_n is linearly
  independent if

    \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = 0

  implies that \alpha_i = 0 for all i.
• Linearly dependent vectors: at least one of
  the vectors may be expressed as a linear
  combination of the others,

    v_1 = \beta_2 v_2 + \beta_3 v_3 + \cdots + \beta_n v_n

  for some set of scalars \beta_i.
  See Example 2.3.2
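A practical way to test linear independence is to stack the vectors as columns of a matrix and compare its rank to the number of vectors; a minimal sketch with arbitrary example vectors:

```python
import numpy as np

# Full column rank <=> the columns are linearly independent.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2                       # deliberately dependent on v1, v2

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))        # 2 < 3 -> the set is linearly dependent
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))   # 2 -> independent
```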
Vector Spaces
• Given a set of N vectors,

    V = \{ v_1, v_2, \ldots, v_N \}

• The vector space spanned by V is the set of all
  vectors that may be formed from a linear
  combination of the vectors v_i,

    v = \sum_{i=1}^{N} \alpha_i v_i

• The vectors v_i are said to span this space.
Basis Vectors
• If the vectors v_i are linearly independent,
  then they are said to form a basis for the
  space they span.
• The number of vectors N in the basis is called
  the dimension of the space.
• Reading: Section 2.3.2
Matrices
• Notation: n × m matrix A with elements a_{ij}
• Convolution matrix: consider the output of
  the FIR LSI filter,

    y(n) = h^T x(n) = x^T(n) h

• If x(n) = 0 for n < 0, then y(n) for n ≥ 0 is given
  by

    X_0 h = y
Matrices
where the convolution matrix is

    X_0 = \begin{bmatrix}
      x(0)   & 0      & 0      & \cdots & 0 \\
      x(1)   & x(0)   & 0      & \cdots & 0 \\
      x(2)   & x(1)   & x(0)   & \cdots & 0 \\
      \vdots & \vdots & \vdots &        & \vdots \\
      x(N-1) & x(N-2) & x(N-3) & \cdots & x(0) \\
      \vdots & \vdots & \vdots &        & \vdots
    \end{bmatrix}
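A minimal sketch of building X_0 and checking that X_0 h = y reproduces the convolution sum (only the first len(x) rows of the matrix are formed here; the example signal and filter are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # x(0), ..., x(3); x(n) = 0 for n < 0
N = 3                                # filter length
h = np.array([0.5, 0.3, 0.2])

# Row n of X0 is [x(n), x(n-1), ..., x(n-N+1)], so X0 @ h gives y(n) for n >= 0
L = len(x)
X0 = np.zeros((L, N))
for n in range(L):
    for k in range(N):
        if n - k >= 0:
            X0[n, k] = x[n - k]

y = X0 @ h
assert np.allclose(y, np.convolve(x, h)[:L])
print(X0)
```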
Matrices
• Symmetric matrix: a square matrix is
  symmetric if

    A = A^T

• Hermitian matrix: a square (n × n) complex matrix is
  Hermitian if

    A = A^H, where A^H = (A^*)^T = (A^T)^*

• See Section 2.3.3 for some properties of
  Hermitian matrices
Matrix Rank
• The rank of a matrix A, denoted by ρ(A), is defined as the
  number of linearly independent columns in A.
• ρ(A) = ρ(A^H), so if A is partitioned in terms of n
  row vectors,

    A = \begin{bmatrix} r_1^H \\ r_2^H \\ \vdots \\ r_n^H \end{bmatrix}

  then the rank of A is equivalently equal to the
  number of linearly independent row vectors.
Matrix Rank
• ρ(A) = ρ(A A^H) = ρ(A^H A)
• For an n × m matrix A,

    ρ(A) ≤ min(m, n)

• Full rank matrix:

    ρ(A) = min(m, n)
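A short illustration of rank and the identity ρ(A) = ρ(AA^H) = ρ(A^H A), using an arbitrary rank-deficient example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # second row = 2 * first row

r = np.linalg.matrix_rank(A)
print(r)                              # 1, less than min(m, n) = 2 -> not full rank

# rho(A) = rho(A A^H) = rho(A^H A)
print(np.linalg.matrix_rank(A @ A.T), np.linalg.matrix_rank(A.T @ A))   # 1 1
```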
Matrix Inverse
• Exists for square matrices of full rank
• Some properties:

    (AB)^{-1} = B^{-1} A^{-1}

    (A^H)^{-1} = (A^{-1})^H

• Matrix inversion lemma:

    (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}

  where A is n × n, B is n × m, C is m × m, and D is m × n,
  with A and C nonsingular matrices.
Matrix Inverse
• Woodbury’s identity:

    (A + u v^H)^{-1} = A^{-1} - \frac{A^{-1} u v^H A^{-1}}{1 + v^H A^{-1} u}

• Obtained from the matrix inversion lemma when C = 1,
  B = u, and D = v^H, where u and v are n × 1 vectors.
• Special case, A = I:

    (I + u v^H)^{-1} = I - \frac{u v^H}{1 + v^H u}
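A numerical sanity check of Woodbury's identity on a small arbitrary example (a diagonal A is chosen so that A^{-1} is trivial):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = np.diag(rng.uniform(1.0, 2.0, n))        # easy-to-invert diagonal A
u = rng.standard_normal(n)
v = rng.standard_normal(n)

Ainv = np.diag(1.0 / np.diag(A))

# Woodbury: (A + u v^H)^{-1} = A^{-1} - (A^{-1} u v^H A^{-1}) / (1 + v^H A^{-1} u)
lhs = np.linalg.inv(A + np.outer(u, v))
rhs = Ainv - (Ainv @ np.outer(u, v) @ Ainv) / (1.0 + v @ Ainv @ u)

print(np.allclose(lhs, rhs))                 # True
```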
Determinant and Trace
• Determinant and its properties (see Section 2.3.5)
• Condition for the inverse of an n × n matrix A to exist:

    det(A) ≠ 0

• Trace of an n × n matrix A: sum of the terms along the
  diagonal,

    tr(A) = \sum_{i=1}^{n} a_{ii}
Linear Equations
• Set of n linear equations in m unknowns x_i, i = 1, 2, \ldots, m
• In matrix form:

    Ax = b

  where A is n × m, x is m × 1, and b is n × 1.
• The above equation can be viewed as an expansion
  of b in terms of a linear combination of the column
  vectors a_i of A,

    b = \sum_{i=1}^{m} x_i a_i    (2.32)
Solutions to Ax = b
• A is an n × n square matrix (m = n): if A is nonsingular,

    x = A^{-1} b

• If A is singular => no solution or many solutions.
• If A is singular => linearly dependent columns, and there
  exist nonzero solutions to the homogeneous equation

    Az = 0

  – There are k = n − ρ(A) linearly independent solutions to the
    homogeneous equation.
  – If at least one vector x_0 exists that solves Ax = b, then any
    vector of the form

      x = x_0 + \alpha_1 z_1 + \cdots + \alpha_k z_k

    will also be a solution, where z_i, i = 1, \ldots, k are linearly
    independent solutions to Az = 0.
Solutions to Ax = b
• A is a rectangular matrix with n < m: fewer equations than
  unknowns => many solutions exist; the solution is
  underdetermined or incompletely specified.
  – A unique minimum norm solution is found as the vector
    satisfying the equations that has minimum norm,

      min \|x\| such that Ax = b

  – Minimum norm solution:

      x_0 = A^H (A A^H)^{-1} b

    where A A^H is invertible, and the matrix

      A^+ = A^H (A A^H)^{-1}

    is the pseudo-inverse of the matrix A.
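A minimal sketch of the minimum norm solution for an arbitrary underdetermined example, comparing the closed form x_0 = A^H (A A^H)^{-1} b with NumPy's pseudo-inverse:

```python
import numpy as np

# Underdetermined system: 2 equations, 4 unknowns (n < m)
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])
b = np.array([1.0, 2.0])

# Minimum norm solution x0 = A^H (A A^H)^{-1} b
x0 = A.T @ np.linalg.solve(A @ A.T, b)

# Same answer from the pseudo-inverse routine
assert np.allclose(x0, np.linalg.pinv(A) @ b)
print(A @ x0, np.linalg.norm(x0))     # reproduces b with the smallest norm
```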
Solutions to Ax = b
• A is a rectangular matrix with m < n: more equations than
  unknowns => the equations are generally inconsistent and the
  solution is said to be overdetermined.
• We need to find an approximate solution.
• Fig. 2.5: case of 3 equations in 2 unknowns.
Solutions to Ax = b
• An arbitrary vector b in this case cannot be represented as a
  linear combination of the columns of A (Eq. 2.32).
• Goal: find the coefficients x_i that produce the best
  approximation to b,

    \hat{b} = \sum_{i=1}^{m} x_i a_i

• Least squares solution: find the vector x that minimizes the
  norm of the error,

    \|e\|^2 = \|b - Ax\|^2

• The least squares solution has the property that the error

    e = b - Ax

  is orthogonal to the column vectors a_i of A used to
  approximate b,

    A^H e = 0    (2.37)
Solutions to Ax = b
or

    A^H A x = A^H b

which are known as the normal equations. If the columns of A are
linearly independent (A is full rank), then A^H A is invertible and
the least squares solution is

    x_0 = (A^H A)^{-1} A^H b

or

    x_0 = A^+ b

where

    A^+ = (A^H A)^{-1} A^H

is the pseudo-inverse of A for the overdetermined problem.
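A short sketch solving an arbitrary overdetermined example three equivalent ways: the normal equations, the pseudo-inverse, and NumPy's least squares routine:

```python
import numpy as np

# Overdetermined system: 5 equations, 2 unknowns (m < n)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

# Least squares solution from the normal equations A^H A x = A^H b
x_ls = np.linalg.solve(A.T @ A, A.T @ b)

# Equivalent library solutions
x_pinv  = np.linalg.pinv(A) @ b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x_ls, x_pinv), np.allclose(x_ls, x_lstsq))   # True True
```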
Solutions to Ax = b
• The best approximation to b is given by the projection of b
  onto the subspace spanned by the vectors a_i,

    \hat{b} = A x_0 = A (A^H A)^{-1} A^H b

  or

    \hat{b} = P_A b

  where

    P_A = A (A^H A)^{-1} A^H

  is called the projection matrix.
• Minimum least squares error: using the orthogonality
  condition (Eq. 2.37),

    \min \|e\|^2 = b^H e = b^H b - b^H A x_0
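A small check of the projection matrix and the orthogonality condition (Eq. 2.37) on an arbitrary example; it also verifies that P_A is idempotent:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 3.0])

# Projection matrix P_A = A (A^H A)^{-1} A^H and best approximation b_hat = P_A b
PA = A @ np.linalg.inv(A.T @ A) @ A.T
b_hat = PA @ b

# The residual e = b - b_hat is orthogonal to the columns of A (Eq. 2.37)
e = b - b_hat
print(A.T @ e)                         # ~[0, 0]
print(np.allclose(PA @ PA, PA))        # True: a projection matrix is idempotent
```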
Special matrix Forms
• Diagonal matrix: a square matrix whose elements off the
  main diagonal are all zero, A = diag{a_{11}, a_{22}, \ldots, a_{nn}}
Special matrix Forms
• Block diagonal matrix: a square matrix whose diagonal
  "elements" are themselves matrices and whose off-diagonal
  blocks are all zero
Special matrix Forms
• Upper triangular matrix: a square matrix whose elements
  below the main diagonal are all zero
Special matrix Forms
• Toeplitz matrix: all of the elements along each of
  the diagonals have the same value
• A special case of persymmetric matrices: symmetric
  about the cross-diagonal,

    a_{ij} = a_{n-j+1, n-i+1}

• A persymmetric matrix may not necessarily be
  Toeplitz
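Toeplitz matrices can be built directly from their first column and first row, e.g. with scipy.linalg.toeplitz; a minimal sketch with arbitrary values:

```python
import numpy as np
from scipy.linalg import toeplitz

# A Toeplitz matrix is fully specified by its first column and first row
c = np.array([1.0, 2.0, 3.0, 4.0])    # first column
r = np.array([1.0, 5.0, 6.0, 7.0])    # first row (scipy uses c[0] for the corner)
T = toeplitz(c, r)
print(T)

# Every diagonal is constant: T[i, j] depends only on i - j
i, j = 2, 1
assert T[i, j] == T[i + 1, j + 1]
```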
Special matrix Forms
• Symmetric Toeplitz matrix: a special case of
  centrosymmetric matrices => both symmetric and
  persymmetric
• Hermitian Toeplitz matrix
Special matrix Forms
• Orthogonal matrix: a real n × n matrix having
  orthonormal columns (and rows), so that

    A^T A = I  and  A^{-1} = A^T
Special matrix Forms
• Unitary matrix: a complex n × n matrix having
  orthonormal columns (and rows), so that

    A^H A = I  and  A^{-1} = A^H

Reading: Section 2.3.7
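One convenient way to obtain an orthogonal (or unitary) matrix numerically is the Q factor of a QR decomposition; a brief check that Q^T Q = I and Q^{-1} = Q^T for a random real example:

```python
import numpy as np

# The Q factor of a QR decomposition has orthonormal columns, so for a square
# real matrix it is orthogonal (for a complex matrix it would be unitary).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))        # True: orthogonal
print(np.allclose(np.linalg.inv(Q), Q.T))     # True: inverse equals transpose
```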
Quadratic and Hermitian Forms
• The quadratic form of a real symmetric n × n matrix A is
  the scalar

    Q_A(x) = x^T A x

  where x is a real vector.
Quadratic and Hermitian Forms
• The Hermitian form of a Hermitian n × n matrix A is the
  scalar

    Q_A(x) = x^H A x

  where x is a complex vector.
• Positive definite matrix A, written A > 0: the quadratic
  form of A is positive for all nonzero vectors x,

    x^H A x > 0
Quadratic and Hermitian Forms
• Positive semidefinite matrix A: the quadratic form of A is
  nonnegative for all nonzero vectors x,

    x^H A x ≥ 0

• Negative definite matrix A: for all nonzero vectors x,

    x^H A x < 0

• Negative semidefinite matrix A: for all nonzero
  vectors x,

    x^H A x ≤ 0

• Indefinite: none of the aforementioned

Reading: Section 2.3.8
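For Hermitian (or real symmetric) matrices, definiteness can be checked from the signs of the eigenvalues (a standard equivalent criterion, not stated on this slide); a minimal sketch:

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a Hermitian matrix by the signs of its (real) eigenvalues."""
    w = np.linalg.eigvalsh(A)            # eigenvalues of a Hermitian matrix
    if np.all(w >  tol):  return "positive definite"
    if np.all(w >= -tol): return "positive semidefinite"
    if np.all(w < -tol):  return "negative definite"
    if np.all(w <=  tol): return "negative semidefinite"
    return "indefinite"

print(definiteness(np.array([[2.0, 1.0], [1.0, 2.0]])))    # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, -1.0]])))   # indefinite
```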