
Compressed Sensing Simple Explanations

May 14, 2018

1 Byrne
The objective of Compressed Sensing (CS) is to exploit the properties of sparsity
to reconstruct a sparse signal vector (x) from relatively few linear functional
measurements (y) of the original signal (f ).

The original discrete signal f is represented by a Jx1 column vector in RJ .

The signal f can be sparsely represented as a vector x in the orthonormal basis
V = {v1 , v2 ,...,vJ } spanning RJ , where vi is the ith Jx1 column of V. This means
that the vector f can be expanded in the V basis as:

f = x1 v1 + x2 v2 + ... + xJ vJ = Vx (1)
From the above equation it follows that:

x = VT f (2)

This follows since VT V = VVT = I (the identity matrix), as V is a unitary (orthogonal) matrix by virtue of its columns forming an orthonormal basis.
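As a concrete numerical check of equations 1 and 2, here is a minimal Python sketch; the basis V is an arbitrary orthonormal matrix obtained from a QR factorisation, and the sizes and values are illustrative assumptions, not anything specified in the notes:

import numpy as np

J = 8
rng = np.random.default_rng(0)

# An arbitrary orthonormal basis V for RJ (columns v1,...,vJ),
# obtained from the QR factorisation of a random matrix.
V, _ = np.linalg.qr(rng.standard_normal((J, J)))

# A sparse coefficient vector x and the corresponding signal f = Vx.
x = np.zeros(J)
x[[1, 5]] = [2.0, -3.0]
f = V @ x

# Because VT V = I, the coefficients are recovered as x = VT f (equation 2).
assert np.allclose(V.T @ f, x)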

We say the vector x = (x1 ,...,xJ ) is s-sparse if s is the number of nonzero entries xj. The lower the value of s, the sparser x is. A common convention is to write ||x||0 for the size of the support of x (the number of nonzero elements present in x). Thus ||x||0 must be sufficiently small for x to be considered sparse.
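In code, ||x||0 is simply a count of nonzero entries (a trivial illustrative sketch with made-up values):

import numpy as np

x = np.array([0.0, 2.0, 0.0, 0.0, -3.0, 0.0])
s = np.count_nonzero(x)   # ||x||0 = 2, so x is 2-sparse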

1
A second orthonormal basis U={u1 , u2 ,...,uJ } also spanning RJ is selected
to maximise the incoherence between V and U.

A matrix AT is created through the uniformly random selection of I members from the basis U, where I < J.

AT = UK (3)

Where K is a JxI matrix consisting of I column vectors ki . Each ki has all elements equal to zero except ki,j , which equals 1. The j in ki,j is the row index of ki and corresponds to the desired column vector uj of U. Additionally, ki = kj only if i = j, so the selected columns are distinct.
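A minimal sketch of this selection matrix (the sizes and random seed are illustrative assumptions):

import numpy as np

J, I = 6, 3
rng = np.random.default_rng(1)

# Uniformly select I distinct indices; column ki of K is all zeros except
# for a 1 in row j = idx[i], so that UK picks out I columns of U.
idx = rng.choice(J, size=I, replace=False)
K = np.zeros((J, I))
K[idx, np.arange(I)] = 1.0
print(K)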

The matrix AT = {aT1 , aT2 ,...,aTI }, where aTi = uj if ki,j = 1. The inner-product functional measurement between the sparse signal and a member of AT is defined to be:

yi = ⟨aTi , x⟩ (4)
Following this the measured vector is:

y = Ax (5)
Where y is an Ix1 vector and A is an IxJ matrix.

Substituting 2 into 5 gives:

y = AVT f = Φf (6)

Where Φ = AVT is the IxJ sensing matrix.
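Putting equations 3 to 6 together, the following is a self-contained sketch; both bases are arbitrary orthonormal matrices standing in for V and U, and the sparse vector is illustrative:

import numpy as np

J, I = 8, 4
rng = np.random.default_rng(2)

# Two arbitrary orthonormal bases standing in for V and U.
V, _ = np.linalg.qr(rng.standard_normal((J, J)))
U, _ = np.linalg.qr(rng.standard_normal((J, J)))

# A: I rows taken from the columns of U (equation 3, transposed).
idx = rng.choice(J, size=I, replace=False)
A = U[:, idx].T

# Sparse coefficients x and the signal f = Vx.
x = np.zeros(J)
x[[0, 3]] = [1.5, -2.0]
f = V @ x

y = A @ x            # equation 5
Phi = A @ V.T        # equation 6: the IxJ sensing matrix
assert np.allclose(y, Phi @ f)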

An approximation of the signal's sparse representation x, denoted x̂, is created; the corresponding signal estimate is:

f̂ = x̂1 v1 + x̂2 v2 + ... + x̂J vJ (7)

Because x is sparse, the approximation x̂ approaches x when the support of x̂ is minimised subject to:

y = Ax̂ (8)

This holds provided I is sufficiently large. Minimising the support of x̂, represented as the l0 norm ||x̂||0, requires computationally intensive combinatorial algorithms. It has been proven that, for sufficiently sparse vectors, minimising the l1 norm yields the same solution. This allows less taxing Linear Programming (LP) algorithms to be applied to recover x.

Therefore x̂∗ = x for the x̂∗ that solves:

min ||x̂||1 (9)

Subject to:
y = Ax̂ (10)
Where
||x̂||1 = |x̂1 | + |x̂2 | + ... + |x̂J | (11)

The original signal f can be recovered by substituting x̂∗ into 1.
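The l1 recovery step can be carried out with an off-the-shelf LP solver. Below is a minimal basis-pursuit sketch using scipy's linprog; the function name, problem sizes, and test data are illustrative assumptions, not part of the original notes:

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to Ax = y as a linear program.

    Split x = u - w with u, w >= 0, so ||x||_1 = sum(u + w) at the optimum.
    """
    I, J = A.shape
    c = np.ones(2 * J)                 # objective: sum of u and w
    A_eq = np.hstack([A, -A])          # A u - A w = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, w = res.x[:J], res.x[J:]
    return u - w

# Hypothetical demonstration with a random Gaussian A.
rng = np.random.default_rng(3)
J, I = 40, 20
A = rng.standard_normal((I, J)) / np.sqrt(I)
x = np.zeros(J)
x[rng.choice(J, 3, replace=False)] = rng.standard_normal(3)
y = A @ x
x_hat = basis_pursuit(A, y)
print(np.max(np.abs(x_hat - x)))   # typically ~0 for sufficiently sparse x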

2 Elad and Bruckstein


A signal represented in an over-complete dictionary constructed from the vectors of a pair of orthonormal bases must have a sufficiently sparse, unique representation for the l0 optimisation problem to be replaced by an l1 Linear Programming (LP) problem.

The signal S is represented as a real column vector in RN . S has a unique representation in every basis of this space.

Let Φ = {φ1 , φ2 , ..., φN } be N orthogonal vectors of unit length, so that ⟨φi , φj ⟩ = δij , where δij = 1 if i = j and δij = 0 otherwise. Then:

S = [φ1 φ2 . . . φN ] (α1 , α2 , ..., αN )T = α1 φ1 + α2 φ2 + ... + αN φN (12)

Where [φ1 , φ2 , ..., φN ] is an NxN matrix and the coefficients αi are given by αi = ⟨φi , S⟩.

Now suppose we have two different bases for RN : Φ = {φ1 , φ2 , ..., φN } and Ψ = {ψ1 , ψ2 , ..., ψN }. Then

S = [φ1 φ2 . . . φN ] (α1 , α2 , ..., αN )T = [ψ1 ψ2 . . . ψN ] (β1 , β2 , ..., βN )T (13)

A dictionary is a joint, over-complete set of vectors created by concatenating the vectors from different bases. For this example the dictionary will be {Φ, Ψ} = {φ1 , φ2 , ..., φN , ψ1 , ψ2 , ..., ψN }, which gives an Nx2N matrix. The problem arises when representing a signal in terms of the dictionary, since multiple representations are possible for the same signal. Of these multiple representations, finding the sparsest is a difficult optimisation problem. Suppose we have:

S = [φ1 . . . φN ψ1 . . . ψN ] (γ1φ , ..., γNφ , γ1ψ , ..., γNψ )T = [Φ Ψ] γ = γ1φ φ1 + ... + γNφ φN + γ1ψ ψ1 + ... + γNψ ψN (14)
Choosing γ involves solving an under-determined set of N equations with 2N unknowns. To obtain the sparsest representation, the additional requirement of minimising the support of γ must be met. The support of γ is the number of nonzero entries present in the vector γ. Hence the following problem must be solved:

(P0 ) Minimise ||γ||0 subject to S = [Φ Ψ] γ

where ||γ||0 is the size of the support of γ. Problem (P0 ) can be addressed using two different approximation methods, named "matching pursuit" and "basis pursuit". Donoho and Huo formalised conditions under which the basis pursuit method exactly finds the desired sparse representation.
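To make the combinatorial nature of (P0 ) explicit, the sketch below searches supports of increasing size by brute force. This is an illustrative toy, exponential in the support size, and is neither of the pursuit methods mentioned above; the function name and tolerance are assumptions:

import numpy as np
from itertools import combinations

def solve_p0_brute_force(D, S, tol=1e-10):
    """Find the sparsest gamma with D @ gamma = S by exhaustive search."""
    N2 = D.shape[1]
    for s in range(1, N2 + 1):
        for support in combinations(range(N2), s):
            cols = D[:, list(support)]
            g, *_ = np.linalg.lstsq(cols, S, rcond=None)
            if np.linalg.norm(cols @ g - S) < tol:
                gamma = np.zeros(N2)
                gamma[list(support)] = g
                return gamma
    return None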

S is considered to have a very sparse representation when there exists a γ such that:

S = [Φ Ψ] γ (15)

And
||γ||0 < (1/2)(1 + M−1 ) (16)
Where
M = sup1≤i,j≤N |⟨φi , ψj ⟩| (17)

This very sparse representation is the unique solution for both (P0 ) and (P1 ).

(P1 ) Minimise ||γ||1 subject to S = [Φ Ψ] γ

Where ||γ||1 = |γ1 | + |γ2 | + ... + |γN |
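To make M concrete, the following sketch evaluates equations 16 and 17 for one specific pair of orthonormal bases, the standard spike (identity) basis and the orthonormal DCT basis; this choice is purely illustrative and is not made in the paper:

import numpy as np
from scipy.fft import dct

N = 64

Phi = np.eye(N)                              # spike (identity) basis
Psi = dct(np.eye(N), axis=0, norm="ortho")   # orthonormal DCT basis

# Equation 17: the largest magnitude of an inner product between the bases.
M = np.max(np.abs(Phi.T @ Psi))

# Equation 16: representations with support below this bound are unique
# and recoverable by (P1).
bound = 0.5 * (1 + 1 / M)
print(M, bound)   # M is close to sqrt(2/N) for this pair of bases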
