Linear Algebra
Vectors
• Column vector of dimension (length) N:
  x = [x_1, x_2, ..., x_N]^T
• Row vector:
  x^T = [x_1, x_2, ..., x_N]
• Hermitian transpose:
  x^H = (x^T)* = [x_1*, x_2*, ..., x_N*]
• Other notations (signal vectors):
  x = [x(0), x(1), ..., x(N-1)]^T
  x(n) = [x(n), x(n-1), ..., x(n-N+1)]^T
Vector Norms
• Provide a measure of vector length; can be used to
  measure the distance between two vectors
• L1 norm:
  ||x||_1 = Σ_{i=1}^{N} |x_i|
• L2 norm (Euclidean norm):
  ||x||_2 = ||x|| = ( Σ_{i=1}^{N} |x_i|^2 )^{1/2}
• L∞ norm:
  ||x||_∞ = max_i |x_i|
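A quick numerical check of the three norms (a minimal sketch; numpy.linalg.norm computes all of them):

import numpy as np

x = np.array([3.0, -4.0, 1.0])

l1 = np.linalg.norm(x, 1)         # sum of absolute values: 8.0
l2 = np.linalg.norm(x, 2)        # Euclidean norm (the default): sqrt(26)
linf = np.linalg.norm(x, np.inf)  # largest absolute entry: 4.0
print(l1, l2, linf)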
Inner Product
• Inner product of two vectors: <a, b> = a^H b
• Geometrical relationship between two vectors:
  <a, b> = ||a|| ||b|| cos θ
• Orthogonal vectors:
  <a, b> = 0
• Orthonormal vectors: orthogonal vectors having
  unit norm (see Example 2.3.1)
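A small sketch of these definitions (the vectors are arbitrary; np.vdot conjugates its first argument, so it computes a^H b):

import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 1.0])

inner = np.vdot(a, b)                                       # <a, b> = a^H b
cos_theta = inner / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)                                # angle between a and b
print(inner, theta)

u = np.array([1.0, -1.0]) / np.sqrt(2)                      # orthonormal pair
v = np.array([1.0, 1.0]) / np.sqrt(2)
print(np.vdot(u, v), np.linalg.norm(u), np.linalg.norm(v))  # 0.0 1.0 1.0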
• Cauchy-Schwarz inequality:
  |<a, b>| ≤ ||a|| ||b||
  Equality holds iff a and b are collinear, i.e., a = αb
  for some constant α
• Another useful inequality:
  2 |<a, b>| ≤ ||a||^2 + ||b||^2
  Equality holds if and only if a and b are collinear
  and ||a|| = ||b||
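Both inequalities are easy to verify numerically (a minimal sketch with random vectors; the small tolerance only absorbs floating-point error):

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5)
b = rng.standard_normal(5)

lhs = abs(np.vdot(a, b))
na, nb = np.linalg.norm(a), np.linalg.norm(b)
assert lhs <= na * nb + 1e-12            # Cauchy-Schwarz
assert 2 * lhs <= na**2 + nb**2 + 1e-12  # second inequality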
• FIR filter convolution sum:
  y(n) = Σ_{k=0}^{N-1} h(k) x(n-k)
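Each output sample is the inner product of h with the signal vector x(n) (a minimal sketch; the filter and input are arbitrary):

import numpy as np

h = np.array([1.0, 0.5, 0.25])       # FIR filter, N = 3
x = np.array([1.0, 2.0, 3.0, 4.0])   # input signal

y = np.convolve(h, x)                # full convolution sum

n = 3                                # compare one sample against h^T x(n)
x_n = x[n::-1][:len(h)]              # x(n), x(n-1), x(n-2)
print(y[n], np.dot(h, x_n))          # 6.0 6.0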
Basis Vectors
• If the vectors v_i are linearly independent and span
  the space, then they are said to form a basis for the
  space
• The number of vectors N in the basis is called
the dimension of the space.
• Reading: Section 2.3.2
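Linear independence can be tested by stacking the vectors as columns and checking the rank (a minimal sketch):

import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])       # v3 = v1 + v2: not independent

V = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(V))      # 2 < 3, so {v1, v2, v3} is not a basis for R^3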
Matrices
• Notation: n × m matrix A with elements a_ij
• Convolution matrix: Consider the output of
the FIR LSI filter,
  y(n) = h^T x(n) = x^T(n) h
• Collecting successive output samples into a vector
  gives y = X_0 h, where the convolution matrix
        [ x(0)    0        0        ...  0    ]
        [ x(1)    x(0)     0        ...  0    ]
  X_0 = [ x(2)    x(1)     x(0)     ...  0    ]
        [ ...     ...      ...           ...  ]
        [ x(N-1)  x(N-2)   x(N-3)   ...  x(0) ]
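X_0 is lower triangular and Toeplitz, so it can be built from its first column (a minimal sketch using scipy.linalg.toeplitz; x and h are arbitrary):

import numpy as np
from scipy.linalg import toeplitz

x = np.array([1.0, 2.0, 3.0, 4.0])   # x(0), ..., x(N-1)
N = len(x)

X0 = toeplitz(x, np.r_[x[0], np.zeros(N - 1)])  # first column x, first row [x(0), 0, ..., 0]

h = np.array([1.0, 0.5, 0.25, 0.0])
print(X0 @ h)                        # first N output samples
print(np.convolve(x, h)[:N])         # same result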
• Symmetric matrix: A square matrix is
symmetric if
  A = A^T
• Hermitian matrix: An n × n complex matrix is
  Hermitian if
  A = A^H = (A^T)* = (A*)^T
• See Section 2.3.3 for some properties of
Hermitian matrices
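A quick check of the Hermitian property (a minimal sketch with a small hand-built matrix):

import numpy as np

A = np.array([[2.0,    1 + 1j],
              [1 - 1j, 3.0   ]])

print(np.allclose(A, A.conj().T))           # True: A = A^H
print(np.allclose(A.conj().T, A.T.conj()))  # (A^T)* = (A*)^T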
Matrix Rank
• Rank of matrix A, denoted by ρ(A), is defined as the
  number of linearly independent columns in A
• ρ(A) = ρ(A^H) => if A is partitioned into n row
  vectors,
        [ r_1^H ]
  A =   [ r_2^H ]
        [  ...  ]
        [ r_n^H ]
  then the rank of A is equivalently equal to the
  number of linearly independent row vectors
• ρ(A) = ρ(AA^H) = ρ(A^H A)
• For an n × m matrix A,
  ρ(A) ≤ min(m, n)
• Full rank matrix:
  ρ(A) = min(m, n)
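These rank identities can be checked numerically (a minimal sketch with a random rectangular matrix):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))      # n = 4, m = 6

print(np.linalg.matrix_rank(A))               # <= min(4, 6); full rank here
print(np.linalg.matrix_rank(A @ A.conj().T))  # rho(A A^H) = rho(A)
print(np.linalg.matrix_rank(A.conj().T @ A))  # rho(A^H A) = rho(A)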
Matrix Inverse
• Exists for square matrices of full rank
• Some properties:
  (AB)^{-1} = B^{-1} A^{-1}
  (A^H)^{-1} = (A^{-1})^H
• Matrix inversion lemma:
  (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
• where A is n × n, B is n × m, C is m × m, and D is m × n,
  with A and C nonsingular matrices
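The lemma is easy to verify numerically (a minimal sketch; diagonal loading keeps the random A and C safely nonsingular):

import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, m)) + m * np.eye(m)
D = rng.standard_normal((m, n))

Ai = np.linalg.inv(A)
lhs = np.linalg.inv(A + B @ C @ D)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
print(np.allclose(lhs, rhs))         # True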
• Woodbury's identity:
  (A + uv^H)^{-1} = A^{-1} - (A^{-1} u v^H A^{-1}) / (1 + v^H A^{-1} u)
• Obtained from the matrix inversion lemma when C = 1,
  B = u, and D = v^H, where u and v are n × 1 vectors
• Special case, A = I:
  (I + uv^H)^{-1} = I - uv^H / (1 + v^H u)
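A numerical check of the special case (a minimal sketch with arbitrary real vectors, so v^H = v^T; the identity requires 1 + v^H u ≠ 0):

import numpy as np

rng = np.random.default_rng(3)
n = 4
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

lhs = np.linalg.inv(np.eye(n) + u @ v.T)
rhs = np.eye(n) - (u @ v.T) / (1 + v.T @ u)
print(np.allclose(lhs, rhs))         # True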
Determinant and Trace
• Determinant and its properties (See Section 2.3.5)
• Condition for invertibility of an n × n matrix A:
  det(A) ≠ 0
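For example, a matrix with linearly dependent rows has zero determinant (a minimal sketch):

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])           # second row is twice the first

print(np.linalg.det(A))              # ~0.0: A is singular and has no inverse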
Linear Equations
• Set of n linear equations in m unknowns x_i, i = 1, 2, ..., m
• In matrix form:
Ax = b
  where A is n × m, x is m × 1, b is n × 1
• The above equation can be viewed as an expansion
  of b in terms of a linear combination of the column
  vectors a_i of A:
  b = Σ_{i=1}^{m} x_i a_i    (2.32)
Solutions to Ax = b
• A is an n × n square matrix, m = n:
  x = A^{-1} b
• If A is singular => no solution or many solutions
• If A is singular => linearly dependent columns, so there
  exist nonzero solutions to the homogeneous
  equation
  Az = 0
– There are k = n − ρ(A) linearly independent solutions to the
  homogeneous equation
– If at least one vector x_0 exists that solves Ax = b, then any
  vector of the form
  x = x_0 + α_1 z_1 + ... + α_k z_k
  will also be a solution, where z_i, i = 1, ..., k, are linearly
  independent solutions of Az = 0
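This solution family can be constructed from a null-space basis (a minimal sketch using scipy.linalg.null_space; the matrix is built to be singular):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])      # rank 2, so k = 3 - 2 = 1
b = A @ np.array([1.0, 1.0, 1.0])    # choose b so a solution exists

Z = null_space(A)                    # basis for the solutions of Az = 0
x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution

x = x0 + 2.5 * Z[:, 0]               # any such combination still solves Ax = b
print(np.allclose(A @ x, b))         # True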
• A is a rectangular matrix, n < m: fewer equations than
  unknowns => many solutions exist; the solution is
  underdetermined or incompletely specified
  – A unique minimum norm solution is found by choosing, among
    the vectors satisfying the equations, the one with minimum norm:
    min ||x|| such that Ax = b
  – Minimum norm solution:
    x_0 = A^H (A A^H)^{-1} b
  – where A A^H is invertible and the matrix
    A^+ = A^H (A A^H)^{-1}
    is the pseudo-inverse of matrix A
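A sketch of the minimum norm solution for an underdetermined system (np.linalg.pinv computes the pseudo-inverse; here n = 2 equations in m = 3 unknowns):

import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])      # n = 2 < m = 3, full row rank
b = np.array([2.0, 3.0])

x0 = A.T @ np.linalg.inv(A @ A.T) @ b          # x0 = A^H (A A^H)^{-1} b
print(np.allclose(x0, np.linalg.pinv(A) @ b))  # True: same as the pseudo-inverse
print(A @ x0)                                  # reproduces b exactly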
• A is a rectangular matrix, m < n: more equations than
  unknowns => the equations are generally inconsistent and the
  system is said to be overdetermined
• We need to find an approximate solution
• Fig. 2.5: Case of 3 equations in 2 unknowns
• An arbitrary vector b in this case cannot be represented as a
  linear combination of the columns of A (Eq. 2.32)
• Goal: find the coefficients x_i that produce the best
  approximation to b:
  b̂ = Σ_{i=1}^{m} x_i a_i
• The best approximation to b is given by the projection of b
  onto the subspace spanned by the vectors a_i:
  b̂ = A x_0 = A (A^H A)^{-1} A^H b
  or
  b̂ = P_A b
  where
  P_A = A (A^H A)^{-1} A^H
  is called the projection matrix
• Minimum least squares error: using the orthogonality
  condition (Eq. 2.37),
  min ||e||^2 = b^H e = b^H b - b^H A x_0
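The projection-matrix formula and a standard least squares solver agree (a minimal sketch for 3 equations in 2 unknowns, as in Fig. 2.5):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # m = 2 < n = 3: overdetermined
b = np.array([1.0, 2.0, 0.0])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]            # least squares solution
PA = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T  # projection matrix P_A
print(np.allclose(PA @ b, A @ x0))   # True: both give the projection b_hat
print(b @ b - b @ A @ x0)            # minimum squared error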
Special Matrix Forms
• Diagonal matrix
• Block diagonal matrix
• Upper triangular matrix
• Toeplitz matrix: all of the elements along each of
  the diagonals have the same value
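scipy.linalg.toeplitz builds such a matrix from its first column and first row (a minimal sketch):

import numpy as np
from scipy.linalg import toeplitz

T = toeplitz([1, 2, 3], [1, 4, 5])   # first column [1, 2, 3], first row [1, 4, 5]
print(T)
# [[1 4 5]
#  [2 1 4]
#  [3 2 1]]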
• Orthogonal matrix: real n × n matrix having
  orthonormal columns (and rows), i.e., A^T A = I
• Unitary matrix: complex n × n matrix having
  orthonormal columns (and rows), i.e., A^H A = I
• Example: see the sketch below
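One standard example of a unitary matrix is the normalized DFT matrix (a minimal sketch; that this was the slide's example is an assumption):

import numpy as np

n = 4
F = np.fft.fft(np.eye(n)) / np.sqrt(n)         # normalized n-point DFT matrix

print(np.allclose(F.conj().T @ F, np.eye(n)))  # True: F^H F = I, so F is unitary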
Quadratic and Hermitian Forms
• Quadratic form of a symmetric n × n matrix A is the
  scalar x^T A x
• Hermitian form of a Hermitian n × n matrix A is the
  scalar x^H A x
• Positive semidefinite matrix A: the quadratic form of A is
  nonnegative for all vectors x, i.e., x^T A x ≥ 0
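Positive semidefiniteness of a symmetric or Hermitian matrix can be checked through its (real) eigenvalues (a minimal sketch):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric

eigs = np.linalg.eigvalsh(A)         # eigenvalues of a Hermitian matrix are real
print(eigs)                          # [1. 3.]
print(np.all(eigs >= 0))             # True: x^H A x >= 0 for all x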