Overview
Goals of the tutorial:
See that geometric notions unify (simplify!) signal processing
Learn/review basics of Hilbert space view
See Hilbert space view in action
Learn about textbooks Foundations of Signal Processing and
Fourier and Wavelet Signal Processing
Unifying principles
Signal processing has various dichotomies
continuous time vs. discrete time
infinite intervals vs. finite intervals
periodic vs. aperiodic
deterministic vs. stochastic
Example payoffs:
Unified understanding of best approximation (projection theorem)
Unified understanding of Fourier domains
Unified understanding of signal expansions (including sampling)
Mathematical rigor
Everything should be made as simple as possible, but no simpler.
– Common paraphrasing of Albert Einstein
Make everything as simple as possible without being wrong.
– Our variant for teaching
Vector spaces
A vector space generalizes easily beyond the R2 Euclidean plane
Axioms
A vector space over a field of scalars C (or R) is a set of vectors V together
with two operations
◮ vector addition: V × V → V
◮ scalar multiplication: C × V → V
that satisfy the following axioms:
1. x + y = y + x
2. (x + y) + z = x + (y + z)
3. there exists 0 ∈ V such that x + 0 = x for all x ∈ V
4. α(x + y) = αx + αy
5. (α + β)x = αx + βx
6. (αβ)x = α(βx)
7. 0x = 0 and 1x = x
Vector spaces
Examples
CN : complex (column) vectors of length N
Vector spaces
Key notions
Subspace
◮ S ⊆ V is a subspace when it is closed under vector addition and scalar
multiplication:
⋆ For all x, y ∈ S, x + y ∈ S
⋆ For all x ∈ S and α ∈ C, αx ∈ S
Span
◮ S: set of vectors (could be infinite)
◮ span(S) = set of all finite linear combinations of vectors in S:

      span(S) = { Σ_{k=0}^{N−1} α_k ϕ_k : α_k ∈ C, ϕ_k ∈ S, N ∈ N }
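In finite dimensions, linear independence and span membership reduce to rank conditions, which gives a quick numerical check of the two definitions above. A small sketch (the vectors are chosen here for illustration, not taken from the slides):

```python
import numpy as np

# Columns of Phi are the vectors phi_0, phi_1 in R^3 (illustrative choices).
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [0.0, 1.0]])

# Linear independence: rank equals the number of vectors
independent = np.linalg.matrix_rank(Phi) == Phi.shape[1]

# x lies in span(Phi) iff appending x leaves the rank unchanged
x = np.array([2.0, 3.0, 1.0])   # = 2*phi_0 + 1*phi_1
in_span = (np.linalg.matrix_rank(np.column_stack([Phi, x]))
           == np.linalg.matrix_rank(Phi))

y = np.array([1.0, 0.0, 0.0])   # not a combination of phi_0, phi_1
y_in_span = (np.linalg.matrix_rank(np.column_stack([Phi, y]))
             == np.linalg.matrix_rank(Phi))
```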
Vector spaces
Key notions
Linear independence
◮ S = {ϕ_k}_{k=0}^{N−1} is linearly independent when

      Σ_{k=0}^{N−1} α_k ϕ_k = 0  only when  α_k = 0 for all k
Dimension
◮ dim(V ) = N if V contains a linearly independent set with N vectors and every
set with N + 1 or more vectors is linearly dependent
◮ V is infinite dimensional if no such finite N exists
Inner products
Inner products generalize angles (especially right angles) and orientation
Inner products
Examples
On C^N:  ⟨x, y⟩ = Σ_{n=0}^{N−1} x_n y_n* = y* x

On C^Z:  ⟨x, y⟩ = Σ_{n∈Z} x_n y_n* = y* x

On C^R:  ⟨x, y⟩ = ∫_{−∞}^{∞} x(t) y*(t) dt
Orthogonality
Let S = {ϕ_i}_{i∈I} be a set of vectors
Definition (Orthogonality)
x and y are orthogonal, written x ⊥ y, when ⟨x, y⟩ = 0
Norm
Norms generalize length in ordinary Euclidean space
Definition (Norm)
A norm on V is a function ‖·‖ : V → R satisfying
1. Positive definiteness: ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0
2. Positive scalability: ‖αx‖ = |α| ‖x‖
3. Triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖ (with equality if and only if
   y = αx with α ≥ 0, for norms induced by an inner product)
Examples

On C^N:  ‖x‖ = √⟨x, x⟩ = ( Σ_{n=0}^{N−1} |x_n|² )^{1/2}

On C^Z:  ‖x‖ = √⟨x, x⟩ = ( Σ_{n∈Z} |x_n|² )^{1/2}

On C^R:  ‖x‖ = √⟨x, x⟩ = ( ∫_{−∞}^{∞} |x(t)|² dt )^{1/2}

On C-valued random variables:  ‖x‖ = √⟨x, x⟩ = √(E|x|²)
Properties
Pythagorean theorem
◮ x ⊥ y  ⇒  ‖x + y‖² = ‖x‖² + ‖y‖²
◮ {x_k}_{k∈K} orthogonal  ⇒  ‖Σ_{k∈K} x_k‖² = Σ_{k∈K} ‖x_k‖²
Properties
Parallelogram law

    ‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²)

[Figure: parallelogram with sides x, y and diagonals x + y, x − y]
Properties
Cauchy–Schwarz inequality
|⟨x, y⟩| ≤ ‖x‖ ‖y‖
Examples

On C^N:  ‖x‖_p = ( Σ_{n=0}^{N−1} |x_n|^p )^{1/p},  p ∈ [1, ∞)

On C^Z:  ‖x‖_p = ( Σ_{n∈Z} |x_n|^p )^{1/p},  p ∈ [1, ∞)
         ‖x‖_∞ = sup_{n∈Z} |x_n|

On C^R:  ‖x‖_p = ( ∫_{−∞}^{∞} |x(t)|^p dt )^{1/p},  p ∈ [1, ∞)
Geometry of ℓ^p: Unit balls

[Figure: unit balls {x ∈ R² : ‖x‖_p = 1} for p = 1/2, 1, 2, 4, ∞]

Valid norm (and convex unit ball) for p ≥ 1; ordinary geometry for p = 2
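A short numerical check of the p ≥ 1 boundary (the unit vectors below are illustrative): the triangle inequality holds for p = 2 but fails for p = 1/2, which is exactly why ‖·‖_{1/2} is not a norm and its unit ball is not convex.

```python
import numpy as np

def pnorm(v, p):
    # (sum of |v_n|^p)^(1/p); a norm only for p >= 1
    return float(np.sum(np.abs(v) ** p) ** (1.0 / p))

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# Triangle inequality holds for p = 2: sqrt(2) <= 1 + 1
ok_p2 = pnorm(x + y, 2.0) <= pnorm(x, 2.0) + pnorm(y, 2.0)

# ... but fails for p = 1/2: (1 + 1)^2 = 4 > 1 + 1 = 2
bad_half = pnorm(x + y, 0.5) > pnorm(x, 0.5) + pnorm(y, 0.5)
```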
A normed vector space is a set satisfying the axioms of a vector space on which
the norm is finite for every vector
Definition
A sequence of vectors x_0, x_1, . . . in a normed vector space V is said to converge
to v ∈ V when lim_{k→∞} ‖v − x_k‖ = 0; that is, for any ε > 0, there exists K_ε such
that ‖v − x_k‖ < ε for all k > K_ε.

Example
For k ∈ Z⁺, let

    x_k(t) = { 1, for t ∈ [0, 1/k];
               0, otherwise

and v(t) = 0 for all t. Then, for p ∈ [1, ∞),

    ‖v − x_k‖_p = ( ∫_{−∞}^{∞} |v(t) − x_k(t)|^p dt )^{1/p} = (1/k)^{1/p} → 0 as k → ∞
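The example can be verified numerically; the Riemann-sum grid resolution below is an implementation choice, not part of the example.

```python
import numpy as np

def lp_distance(k, p, n_grid=200000):
    # ||v - x_k||_p with v = 0, computed as a midpoint Riemann sum on [0, 1]
    dt = 1.0 / n_grid
    t = dt * (np.arange(n_grid) + 0.5)
    xk = (t <= 1.0 / k).astype(float)      # x_k = 1 on [0, 1/k], else 0
    return float((np.sum(xk ** p) * dt) ** (1.0 / p))

# Matches the closed form (1/k)^{1/p} and tends to 0 as k grows
errors = [lp_distance(k, p=2) for k in (1, 4, 16, 64)]
```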
Definitions
A sequence {x_n} is a Cauchy sequence in a normed space when for any
ε > 0, there exists K_ε such that ‖x_k − x_m‖ < ε for all k, m > K_ε
Hilbert spaces
Examples

Q is not a complete space
◮ Σ_{n=1}^{∞} 1/n² = π²/6 ∈ R, ∉ Q
◮ Σ_{n=0}^{∞} 1/n! = e ∈ R, ∉ Q
  (the rational partial sums 1, 2, 5/2, 8/3, 65/24, . . . converge to e)

R is a complete space
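A sketch of the incompleteness of Q, using exact rational arithmetic for the partial sums of Σ 1/n² (the number of terms is an illustrative choice): every partial sum is rational, so the sequence is Cauchy in Q, yet it converges to the irrational π²/6.

```python
from fractions import Fraction
import math

# Partial sums of sum 1/n^2 computed exactly in Q
s = Fraction(0)
for n in range(1, 2001):
    s += Fraction(1, n * n)

# Remaining distance to the (irrational) limit pi^2/6; the tail after
# 2000 terms is roughly 1/2000, so the gap is small but strictly positive
gap = math.pi ** 2 / 6 - float(s)
```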
Hilbert spaces
Examples
All finite-dimensional spaces are complete
C^q([a, b]), the functions on [a, b] with q continuous derivatives, are not complete,
except C⁰([a, b]) under the L^∞ norm
Summary on spaces

[Figure: nested sets of spaces: vector spaces ⊃ normed vector spaces ⊃ Hilbert spaces]
• Hilbert spaces: C^N, ℓ²(Z), L²(R)
• Normed vector spaces that are not Hilbert spaces: ℓ¹(Z), ℓ^∞(Z), L¹(R), L^∞(R),
  C([a, b]), (C¹([a, b]), ‖·‖_∞)
• Outside the normed spaces: Q^N, ℓ⁰(Z), (V, d)
Linear operators
Linear operators generalize matrices
Definitions
A : H0 → H1 is a linear operator when for all x, y ∈ H0 and α ∈ C:
1. Additivity: A(x + y) = Ax + Ay
2. Scalability: A(αx) = α(Ax)
Null space (subspace of H0): N(A) = {x ∈ H0 : Ax = 0}
Range space (subspace of H1): R(A) = {Ax ∈ H1 : x ∈ H0}
Operator norm: ‖A‖ = sup_{‖x‖=1} ‖Ax‖
Adjoint operators
The adjoint generalizes Hermitian transposition of matrices: A* : H1 → H0 is the
operator satisfying ⟨Ax, y⟩_{H1} = ⟨x, A*y⟩_{H0} for all x ∈ H0, y ∈ H1
If A = A*, A is self-adjoint or Hermitian
Adjoint operators
Example: for A with (Ax)_n = ∫_{n−1/2}^{n+1/2} x(t) dt,

    ⟨Ax, y⟩_{ℓ²} = Σ_{n∈Z} (Ax)_n y_n* = Σ_{n∈Z} ( ∫_{n−1/2}^{n+1/2} x(t) dt ) y_n*
                 = Σ_{n∈Z} ∫_{n−1/2}^{n+1/2} x(t) y_n* dt = ∫_{−∞}^{∞} x(t) x̂*(t) dt
                 = ⟨x, x̂⟩_{L²} = ⟨x, A*y⟩_{L²}

where x̂ = A*y is the staircase function with x̂(t) = y_n for t ∈ [n − 1/2, n + 1/2)

[Figure: x(t) with its local averages y_n = (Ax)_n; a sequence y_n with its staircase (A*y)(t)]
Unitary operators
Projection operators
Theorem
If A : H0 → H1 and B : H1 → H0 are bounded and A is a left inverse of B, then BA is
a projection operator. If B = A*, then BA = A*A is an orthogonal projection operator
Nearest point problem: Find x̂ ∈ S that is closest to x

[Figure: x and its nearest point x̂ in the subspace S]
Nearest point problem: Find x̂ ∈ S that is closest to x
Solution uniquely determined by x − x̂ ⊥ S
◮ Circle must touch S in one point, radius ⊥ tangent
Find x̂ ∈ S that is closest to x:

    x̂ = argmin_{s∈S} ‖x − s‖
Orthogonality: x − x̂ ⊥ S is necessary and sufficient to determine x̂
Uniqueness: x̂ is unique
Linearity: x̂ = Px where P is a linear operator
Self-adjointness: P = P*
All “nearest vector in a subspace” problems in Hilbert spaces are the same
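These four properties can be checked numerically for a subspace of R⁵ (the dimensions and random vectors are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((5, 2))                 # S = span of two vectors in R^5
x = rng.standard_normal(5)

# Orthogonal projector onto S: P = Phi (Phi^T Phi)^{-1} Phi^T
P = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
xhat = P @ x                                      # nearest point to x in S

orthogonal = np.allclose(Phi.T @ (x - xhat), 0)   # x - xhat is perpendicular to S
self_adjoint = np.allclose(P, P.T)                # P = P*
idempotent = np.allclose(P @ P, P)                # P^2 = P
```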
[Figure: a signal x(t) on [0, 1] and its best approximation x̂(t)]
Alternative:
◮ Expand into quadratic function of c and minimize with calculus
◮ Not too difficult, but lacks insight
    c_{x,x̂,k} = a_{x̂,k},   k ∈ Z

◮ DTFT-domain version: H(e^{jω}) = C_{x,y}(e^{jω}) / A_y(e^{jω}),   ω ∈ R
Alternative:
◮ Expand into quadratic function of h and minimize with calculus
◮ Messy (!)—especially in complex case—and lacks insight
Local averaging

    A : L²(R) → ℓ²(Z),   (Ax)_k = ∫_{k−1/2}^{k+1/2} x(t) dt

has adjoint A* : ℓ²(Z) → L²(R) that produces a staircase function
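The adjoint relation ⟨Ax, y⟩_{ℓ²} = ⟨x, A*y⟩_{L²} for this averaging/staircase pair can be checked on a grid (the grid resolution and test signals below are illustrative choices):

```python
import numpy as np

N, m = 8, 1000                                  # N cells, m grid points per cell
dt = 1.0 / m
t = -0.5 + dt * (np.arange(N * m) + 0.5)        # grid covering [-1/2, N - 1/2)
x = np.exp(-0.1 * t ** 2) * np.cos(3.0 * t)     # arbitrary test function
y = np.arange(N, dtype=float) - 3.0             # arbitrary test sequence

Ax = x.reshape(N, m).sum(axis=1) * dt           # (Ax)_k: integral over cell k
Astar_y = np.repeat(y, m)                       # staircase (A*y)(t) = y_k on cell k

lhs = float(np.dot(Ax, y))                      # <Ax, y> in l2(Z)
rhs = float(np.dot(x, Astar_y) * dt)            # <x, A*y> in L2(R), Riemann sum
```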
Bases
Definition (Basis)
Φ = {ϕ_k}_{k∈K} ⊂ V is a basis when
1. Φ is linearly independent and
2. Φ is complete in V: V = span(Φ)

Expansion formula: for any x ∈ V,   x = Σ_{k∈K} α_k ϕ_k
{α_k}_{k∈K} is unique
α_k: expansion coefficients

Example
The standard basis for R^N:

    e_k = [0 0 · · · 0 1 0 0 · · · 0]^T (single 1 in position k),   k = 0, . . . , N − 1

For any x ∈ R^N,   x = Σ_{k=0}^{N−1} x_k e_k
Bases
Examples

[Figure: expansion of x with coefficients α_0, α_1 in an orthonormal basis {ϕ_0, ϕ_1},
in a biorthogonal basis, and with the dual basis {ϕ̃_0, ϕ̃_1}]
Note that the analysis operator is the adjoint of the synthesis operator
Orthonormal bases
Example
[Figure: x ∈ R³ in the standard basis ϕ_k = e_k, with coefficients ⟨x, ϕ_0⟩, ⟨x, ϕ_1⟩]
In general:

    ⟨x, y⟩ = ⟨Φ*x, Φ*y⟩ = ⟨α, β⟩,   where α_k = ⟨x, ϕ_k⟩, β_k = ⟨y, ϕ_k⟩
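This Parseval relation can be sketched numerically with the normalized DFT as the orthonormal basis (the choice of DFT and the dimension are illustrative): the analysis operator preserves inner products between vectors and their coefficient sequences.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # unitary DFT matrix
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

alpha, beta = F @ x, F @ y                      # analysis coefficients

lhs = np.vdot(y, x)                             # <x, y> in C^N
rhs = np.vdot(beta, alpha)                      # <alpha, beta> in coefficient space
```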
[Figure: the analysis operator Φ* maps x, y ∈ H to coefficient sequences α, β ∈ ℓ²(K);
it is an isometry, so the angle θ between vectors is preserved]
Vetterli & Goyal www.FourierAndWavelets.org August 27, 2012 52 / 96
Basic Principles in Teaching SP with Geometry Hilbert space tools—Part II: Bases through discrete-time systems
Theorem
Let Φ = {ϕ_k}_{k∈I} ⊂ H with I ⊂ K. Then

    P_I x = Σ_{k∈I} ⟨x, ϕ_k⟩ ϕ_k = Φ_I Φ_I* x
Definition (Biorthogonal pairs of bases)
Φ = {ϕ_k}_{k∈K} ⊂ H and Φ̃ = {ϕ̃_k}_{k∈K} ⊂ H are a biorthogonal pair of bases when
1. Φ and Φ̃ are both bases for H
2. Φ and Φ̃ are biorthogonal: ⟨ϕ_i, ϕ̃_k⟩ = δ_{i−k} for all i, k ∈ K
Roles of Φ and Φ̃ are interchangeable

Example

    ϕ_0 = [1 1 0]^T,   ϕ_1 = [0 1 1]^T,   ϕ_2 = [1 1 1]^T,
    Φ = [1 0 1
         1 1 1
         0 1 1]

    ϕ̃_0 = [0 1 −1]^T,   ϕ̃_1 = [−1 1 0]^T,   ϕ̃_2 = [1 −1 1]^T,
    Φ^{−1} = [ 0  1 −1
              −1  1  0
               1 −1  1]
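The example can be verified numerically: the dual vectors are the rows of Φ^{−1} (conjugated; real here), i.e. the columns of (Φ^{−1})^T, and the two sets are biorthogonal.

```python
import numpy as np

Phi = np.array([[1.0, 0.0, 1.0],
                [1.0, 1.0, 1.0],
                [0.0, 1.0, 1.0]])        # columns are phi_0, phi_1, phi_2

Phi_dual = np.linalg.inv(Phi).T          # columns are the dual vectors
expected = np.array([[ 0.0, -1.0,  1.0],
                     [ 1.0,  1.0, -1.0],
                     [-1.0,  0.0,  1.0]])   # (0,1,-1), (-1,1,0), (1,-1,1) as columns

matches_slide = np.allclose(Phi_dual, expected)
# Biorthogonality: <phi_i, dual phi_k> = delta_{ik}
biorthogonal = np.allclose(Phi.T @ Phi_dual, np.eye(3))
```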
Theorem
Let Φ = {ϕ_k}_{k∈K}, Φ̃ = {ϕ̃_k}_{k∈K} be a biorthogonal pair of bases for H.
Any x ∈ H has expansion coefficients

    α_k = ⟨x, ϕ̃_k⟩ for k ∈ K,   or   α = Φ̃*x

Synthesis:   x = Σ_{k∈K} ⟨x, ϕ̃_k⟩ ϕ_k = Φα = ΦΦ̃*x
Also   x = Σ_{k∈K} ⟨x, ϕ_k⟩ ϕ̃_k

[Figure: basis {ϕ_0, ϕ_1} and its dual {ϕ̃_0, ϕ̃_1} in the plane]
Example
In R², given

    Φ = [1 0 −1
         0 1 −1]

{ϕ_0, ϕ_1, ϕ_2} is a frame for R²
Assume x = Φα = Ψβ
Then β = Ψ*x = Ψ*Φα
As a matrix,

    C_{Φ,Ψ} = [ · · · ⟨ϕ_{−1}, ψ_{−1}⟩  ⟨ϕ_0, ψ_{−1}⟩  ⟨ϕ_1, ψ_{−1}⟩ · · ·
                · · · ⟨ϕ_{−1}, ψ_0⟩    ⟨ϕ_0, ψ_0⟩    ⟨ϕ_1, ψ_0⟩   · · ·
                · · · ⟨ϕ_{−1}, ψ_1⟩    ⟨ϕ_0, ψ_1⟩    ⟨ϕ_1, ψ_1⟩   · · · ]
Let y = Ax with A : H → H
How are expansion coefficients of x and y related?
◮ {ϕ_k}_{k∈K} orthonormal basis of H
◮ x = Φα, y = Φβ
Matrix representation allows computation of A directly on coefficient sequences:
Γ : ℓ²(K) → ℓ²(K) s.t. β = Γα
As a matrix:

    Γ = [ · · · ⟨Aϕ_{−1}, ϕ_{−1}⟩  ⟨Aϕ_0, ϕ_{−1}⟩  ⟨Aϕ_1, ϕ_{−1}⟩ · · ·
          · · · ⟨Aϕ_{−1}, ϕ_0⟩    ⟨Aϕ_0, ϕ_0⟩    ⟨Aϕ_1, ϕ_0⟩   · · ·
          · · · ⟨Aϕ_{−1}, ϕ_1⟩    ⟨Aϕ_0, ϕ_1⟩    ⟨Aϕ_1, ϕ_1⟩   · · · ]
[Diagram: A maps x ∈ H to y ∈ H; Φ* maps x to α ∈ ℓ²(K) and y to β ∈ ℓ²(K);
Γ maps α to β, so the diagram commutes]
Example
Let A : H0 → H1 with

    y(t) = (Ax)(t) = ½ ∫_{2ℓ}^{2(ℓ+1)} x(τ) dτ   for 2ℓ ≤ t < 2(ℓ + 1), ℓ ∈ Z

H0: piecewise-constant, finite-energy functions with breakpoints at integers
H1: piecewise-constant, finite-energy functions with breakpoints at even integers

Given the indicator function

    χ_I(t) = { 1, for t ∈ I;
               0, otherwise

let Φ = {ϕ_k(t)}_{k∈Z} = {χ_{[k,k+1)}(t)}_{k∈Z} and
    Ψ = {ψ_i(t)}_{i∈Z} = {(1/√2) χ_{[2i,2(i+1))}(t)}_{i∈Z}
Example (cont.)

    Aϕ_0(t) = ½ χ_{[0,2)}(t)   ⇒   ⟨Aϕ_0, ψ_0⟩ = ∫_0^2 ½ · (1/√2) dτ = 1/√2

Then

    Γ = (1/√2) [ · · · 1 1 0 0 0 0 · · ·
                 · · · 0 0 1 1 0 0 · · ·
                 · · · 0 0 0 0 1 1 · · · ]
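A finite section of Γ can be built and checked directly (the truncation to six coefficients and the test coefficients are illustrative choices): Γ pairs adjacent unit-interval coefficients and scales by 1/√2.

```python
import numpy as np

K = 6                                           # finite section of the infinite matrix
Gamma = np.zeros((K // 2, K))
for i in range(K // 2):
    Gamma[i, 2 * i : 2 * i + 2] = 1.0 / np.sqrt(2)

alpha = np.array([1.0, 3.0, 2.0, 2.0, -1.0, 1.0])   # coefficients of x in Phi
beta = Gamma @ alpha                                # coefficients of y = Ax in Psi

# Direct check: y equals the mean of each coefficient pair on an interval of
# length 2, and beta_i = <y, psi_i> = mean * 2 * (1/sqrt(2))
means = alpha.reshape(-1, 2).mean(axis=1)
check = np.allclose(beta, means * 2 / np.sqrt(2))
```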
Discrete-time systems
Doing all computation in discrete time is the essence of digital signal processing:

    x(t) → [Sampling] → y_n → [DT processing] → w_n → [Interpolation] → v(t)

    x_n → [Interpolation] → y(t) → [CT channel] → v(t) → [Sampling] → x̂_n
Bandwidth

[Figure: three example sample sequences with bandwidths ω_0 = π, ω_0 = π/2, ω_0 = π/4]
Classical Sampling

Recall:  sinc(t) = { (sin t)/t,  for t ≠ 0;
                     1,          for t = 0 }
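A sketch of classical sinc reconstruction (the bandwidth 0.7π, sampling period T = 1, and truncation of the interpolation sum are illustrative choices; note that np.sinc uses the normalized convention sin(πu)/(πu), unlike the unnormalized sinc above):

```python
import numpy as np

T = 1.0                                     # sampling period; band limit is pi/T
t = np.linspace(-20.0, 20.0, 4001)
x = lambda s: np.sinc(0.7 * s)              # bandlimited to 0.7*pi < pi

n = np.arange(-60, 61)
samples = x(n * T)

# Shannon reconstruction: xhat(t) = sum_n x(nT) sinc((t - nT)/T)
xhat = samples @ np.sinc((t[None, :] - n[:, None] * T) / T)

err = float(np.max(np.abs(x(t) - xhat)))    # only a small truncation error remains
```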
History
Many names:
◮ Shannon sampling theorem
◮ Nyquist–Shannon sampling theorem
◮ Nyquist–Shannon–Kotel’nikov sampling theorem
◮ Whittaker–Shannon–Kotel’nikov sampling theorem
◮ Whittaker–Nyquist–Shannon–Kotel’nikov sampling theorem
Well-known people associated with sampling (but less often so):
◮ Cauchy (1841) – apparently not true
◮ Borel (1897)
◮ de la Vallée Poussin (1908)
◮ E. T. Whittaker (1915)
◮ J. M. Whittaker (1927)
◮ Gabor (1946)
Less-known people, almost lost to history:
◮ Ogura (1920)
◮ Küpfmüller (Küpfmüller filter)
◮ Raabe [PhD 1939] (assistant to Küpfmüller)
◮ Someya (1949)
◮ Weston (1949)
Best approach: use no names?
◮ Cardinal Theorem of interpolation theory
Vetterli & Goyal www.FourierAndWavelets.org August 27, 2012 70 / 96
Example Lecture: Sampling and Interpolation Classical View and Historical Notes
    X̂(ω) = (1/T) G(ω) Σ_{k∈Z} X(ω − (2π/T)k)
Dissatisfaction
Mathematical rigor of derivation:

    Σ_{n∈Z} δ(t − nT)  ←FT→  (2π/T) Σ_{k∈Z} δ(ω − (2π/T)k)
Interpolation operator Φ:

    y_n → [↑ every T] → [g(t)] → x̂(t)

Reconstruction space
Range of interpolation operator has special form

Sampling operator Φ*:

    x(t) → [g*(−t)] → [sample every T] → y_n
Theorem
Sampling and interpolation operators are adjoints
    x(t) → [g*(−t)] → [sample every T] → y_n → [↑ T] → [g(t)] → x̂(t)

x̂ is the best approximation of x within the shift-invariant subspace generated by g if
P = ΦΦ* is an orthogonal projection operator
P is automatically self-adjoint: P* = (ΦΦ*)* = P
Need P idempotent: P² = ΦΦ*ΦΦ* = P, which holds when Φ*Φ = I
Check Φ*Φ = I by interpolating and then sampling:

    y_n → [↑ T] → [g(t)] → x̂(t) → [g*(−t)] → [sample every T] → ŷ_n
Theorem
Let g be orthogonal to its integer shifts: ⟨g(t − n), g(t)⟩_t = δ_n. The system

    x(t) → [g*(−t)] → [sample every T] → y_n → [↑ T] → [g(t)] → x̂(t)

yields x̂ = Px, where P is the orthogonal projection operator onto the
shift-invariant subspace S generated by g.

Corollaries:
If x ∈ S, then x is recovered exactly from the samples y
If x ∉ S, then x̂ is the best approximation of x in S
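The theorem can be checked numerically for g = χ_{[0,1)}, which is orthonormal to its integer shifts, with T = 1 (the grid resolution and test signals are illustrative choices): sample-and-interpolate acts as an orthogonal projection onto the piecewise-constant functions.

```python
import numpy as np

N, m = 6, 1000                        # N unit cells, m grid points per cell
dt = 1.0 / m
t = dt * (np.arange(N * m) + 0.5)

def sample(x):                        # y_n = <x, g(. - n)>: integral over cell n
    return x.reshape(N, m).sum(axis=1) * dt

def interpolate(y):                   # xhat(t) = sum_n y_n g(t - n): staircase
    return np.repeat(y, m)

x = np.sin(2.0 * np.pi * t / 3.0)     # not piecewise constant, so x is not in S
xhat = interpolate(sample(x))

idempotent = np.allclose(interpolate(sample(xhat)), xhat)   # P^2 = P
xs = np.repeat([1.0, -2.0, 0.5, 3.0, 0.0, -1.0], m)         # xs already in S
recovered = np.allclose(interpolate(sample(xs)), xs)        # exact recovery
```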
When g*(−t) = g(t), the same filter is used for sampling and interpolation:

    x(t) → [g(t)] → [sample every T] → y_n → [↑ T] → [g(t)] → x̂(t)
Discrete-time counterpart:

Sampling operator Φ*:   x_n → [g*_{−n}] → [↓ N] → y_n

Interpolation operator Φ:   y_n → [↑ N] → [g_n] → x̂_n
Theorem
Let g be orthogonal to its shifts by multiples of N: ⟨g_{n−kN}, g_n⟩_n = δ_k. The system

    x_n → [g*_{−n}] → [↓ N] → y_n → [↑ N] → [g_n] → x̂_n

yields x̂ = Px, where P is the orthogonal projection operator onto the
shift-invariant subspace S generated by g with shift N.
Sampling with a kernel g̃ different from g:

    x(t) → [g̃(t)] → [sample every T] → y_n → [↑ T] → [g(t)] → x̂(t)

Sampling operator Φ̃*, interpolation operator Φ
ΦΦ̃* is generally not self-adjoint, but can still be a projection operator
Check Φ̃*Φ = I for an oblique projection
[Figure: oblique projection of x onto S, with x̂ ∈ S and the projection direction
determined by S̃]
Variations
Multichannel sampling:

    x(t) → [g̃_0(t)] → [sample every 2T] → y_{0,n}
    x(t) → [g̃_1(t)] → [sample every 2T] → y_{1,n}

Many inverse problems have linear forward models, perhaps not shift-invariant
Similar subspace geometry holds
Beyond subspaces
where

    lim sup_{T→∞} #{t_k in [−T/2, T/2]} / T = ρ/2

W is not a subspace
For some g, exact recovery from uniform samples at rate ρ is possible
Many classes of techniques
Summary
Adjoints
◮ Time reversal between sampling and interpolation
Subspaces
◮ Shift-invariant, range of interpolator Φ
◮ Null space of sampler Φ*
Projection
◮ ΦΦ* always self-adjoint
◮ Φ*Φ = I implies ΦΦ* is a projection operator
◮ Together: orthogonal projection operator, best approximation
Basis expansions
◮ Sampling produces analysis coefficients for a basis expansion
◮ Interpolation synthesizes from expansion coefficients
Textbooks
Two books:
M. Vetterli, J. Kovačević, and V. K. Goyal, Foundations of Signal Processing
M. Vetterli, J. Kovačević, and V. K. Goyal, Fourier and Wavelet Signal Processing
Textbooks
Foundations of Signal Processing
1 On Rainbows and Spectra
2 From Euclid to Hilbert
3 Sequences and Discrete-Time Systems
4 Functions and Continuous-Time Systems
5 Sampling and Interpolation
6 Approximation and Compression
7 Localization and Uncertainty
Features:
About 640 pages illustrated with more than 200 figures
More than 200 exercises (more than 30 with solutions within the text)
Solutions manual for instructors
Summary tables, guides to further reading, historical notes
Teaching Materials Textbooks
Textbooks
Fourier and Wavelet Signal Processing
1 Filter Banks: Building Blocks of Time-Frequency Expansions
2 Local Fourier Bases on Sequences
3 Wavelet Bases on Sequences
4 Local Fourier and Wavelet Frames on Sequences
5 Local Fourier Transforms, Frames and Bases on Functions
6 Wavelet Bases, Frames and Transforms on Functions
7 Approximation, Estimation, and Compression
Prerequisites
Solutions manual
Convolution of Derivative and Primitive
Let h and x be differentiable functions, and let
Z t Z t
h(1) (t) = h(τ ) dτ and x (1) (t) = x(τ ) dτ
−∞ −∞
Solutions manual
From the definition of convolution, (4.35),
Z ∞
(h ∗ x)(t) = x(τ )h(t − τ ) dτ.
−∞
we obtain
u ′ (τ ) = x ′ (τ ) and v (τ ) = −h(1) (t − τ ).
Mathematica figures

[Figure: DTFT magnitude plots |X_10(e^{jω})|, |X_100(e^{jω})|, |X_1000(e^{jω})|]