CHAPTER 1
INTRODUCTION
Imaging is an extensive field, and it is evolving at a rapid rate. It provides various frameworks for image acquisition, modification, generalization, visualization, reconstruction, and more. It offers a wide range of reconstruction methods with differing requirements and goals, which helps in achieving the various objectives of any proposed work. Imaging is a vast and immensely vital field which covers all aspects of the analysis, modification, compression, visualization, and generation of images. It is a highly interdisciplinary field in which researchers from biology, medicine, engineering, computer science, physics, and mathematics, among others, work together to produce the best possible image. Imaging science is profoundly mathematical and challenging from the modeling and scientific computing points of view. There are at least two major areas in imaging science in which applied mathematics has a strong impact: image processing and image reconstruction. In image processing the input is a (digital) image such as a photograph or video frame, while in image reconstruction the input is a set of data from which the desired image can be recovered. In the latter case, the data is limited, and its poor information content is not enough to generate an image directly. Image reconstruction refers to the techniques used to create an image of the interior of a body (or region) noninvasively, from data collected on its boundary. Image reconstruction can be seen as the solution of a mathematical inverse problem in which the cause is inferred from the effect.
Image reconstruction can be achieved by a wide range of techniques in an image processing framework. The traditional techniques involve transformation, iterative methods, tomography, total variation, and greedy pursuits, while recovery of a high dimensional sparse signal from a small number of linear measurements is successfully possible through compressive sensing [1], which provides the means to reconstruct a signal from undersampled data. Image reconstruction has a history dating back to 1917, but the field still provides challenging opportunities for researchers, either to improve the existing techniques or to propose new ones. The major reconstruction methods are based on Radon's work of 1917, i.e. the classic paper on image reconstruction from projections. In 1972, Hounsfield developed the first commercial x-ray computed tomography scanner. The most fundamental, or classical, reconstruction method is based on the Radon transform, which acquaints the researcher with the method known as back projection. Alternative approaches proposed later involve the Fourier transform and iterative series-expansion methods, and further take into account statistical estimation methods, wavelet and other multiresolution methods. These days, signal recovery from undersampled data or inaccurate measurements is in trend. This motive of signal recovery can be accomplished with the amalgamation of modified recovery techniques with compressive sensing.
As our modern technology-driven civilization acquires and exploits ever-increasing amounts of data, everyone now knows that most of the data we acquire can be thrown away with almost no perceptual loss; witness the broad success of lossy compression formats for sounds, images, and specialized technical data [2]. The phenomenon of ubiquitous compressibility raises very natural questions: why go to so much effort to acquire all the data when most of what we get will be thrown away? Can we not just directly measure the part
that will not end up being thrown away? Due to the paradigm of acquiring sparse signals at a rate significantly below the Nyquist rate, compressive sensing has attracted much attention in recent years. The field of CS has existed for around four decades. It was first used in seismology in the 1970s, when Claerbout and Muir gave an attractive alternative to least squares solutions [3]. In the 1990s, Rudin, Osher, and Fatemi used total variation minimization in image processing, which is very close to l1 minimization. The idea of CS got a new life in 2004, when David Donoho, Emmanuel Candes, Justin Romberg, and Terence Tao gave important results regarding the mathematical foundation of CS. A series of papers have come out in the last six years, and the field is witnessing significant advancement almost on a daily basis.
compressive sensing theorem states that a sparse signal can be perfectly reconstructed even
though it is sampled at a rate lower than the Nyquist rate [1]. It has gained an increasing
interest due to its promising results in various applications. The goal of Compressive sensing
is to recover the sparse vector using a small number of linearly transformed measurements.
The process of acquiring compressed measurements is referred to as sensing while that of
recovering the original sparse signals from compressed measurements is called reconstruction
[4]. The reconstruction problem basically requires answers to two distinct questions: how many measurements are necessary, and, given these measurements, what algorithms can be used for recovery. There are two popular classes of reconstruction algorithms for compressive sensing: basis pursuit (BP) and matching pursuit (MP). A number of variants of these techniques have been proposed. In this report, the orthogonal matching pursuit method is used for recovering the signal from inaccurate measurements. The class of greedy algorithms solves the reconstruction problem by finding the answer, step by step, in an iterative fashion. The idea is to select columns of the measurement matrix in a greedy fashion: at each iteration, the column that correlates most with y (the measurement vector) is selected, and the least squares error is minimized over the selected columns. That column's contribution is subtracted from y, and iterations are carried out on the residual until the correct set of columns is identified. This is usually achieved in M iterations. The stopping criterion varies from algorithm to algorithm. The most used greedy algorithms are matching pursuit [5] and its derivative orthogonal matching pursuit (OMP) [6], because of their low implementation cost and high speed of recovery. The
other methods include regularized OMP, subspace pursuit, and iterative thresholding algorithms, each having particular advantages and uses. The report starts by presenting a brief historical background of image reconstruction, of CS during the last four decades, and of the matching pursuit techniques. It is followed by a comparison of the modified technique with the conventional sampling technique. A succinct mathematical and theoretical foundation necessary for grasping the idea behind CS, which is used with OMP, is given. The report then describes the modified technique and its simulation results. In the end, open research areas are identified, results are justified, and the report is concluded.
1.1 OBJECTIVES
The following objectives are considered for the thesis work in order to design a framework for image reconstruction using compressed sensing and Orthogonal Matching Pursuit. The objectives are as follows:
- Recovery of the image from inaccurate and undersampled measurements via OMP with an explicit stopping rule and an exact recovery condition.
- Recovery of the image from inaccurate and undersampled data via ROMP with an explicit stopping rule, i.e. selecting a group of indices and then cutting it down on the basis of maximal energy.
- Generalization of the implemented technique, i.e. improving the criterion for identifying multiple significant indices for better correlation on the basis of thresholding the residual value.
- Analysis of parameters such as sampling ratio (M/N), PSNR, running time, and percentage of recovery for the evaluation of the implemented technique.
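The evaluation parameters listed above can be computed as in the following sketch (a minimal illustration; the function names, the 8-bit peak value, and the synthetic image are my own assumptions, not the thesis code):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images (peak=255 assumes 8-bit data)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)

def sampling_ratio(M, N):
    """Fraction of measurements taken relative to the signal length."""
    return M / N

# Example with a hypothetical 8-bit image and a noisy reconstruction of it
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
rec = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(psnr(img, rec), sampling_ratio(512, 4096))
```

Percentage of recovery and running time are measured analogously, by counting exactly recovered pixels (or support indices) and timing the reconstruction loop.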
CHAPTER 2
IMAGE RECONSTRUCTION USING COMPRESSIVE SENSING
Conventional approaches to sampling signals or images follow Shannon's celebrated
theorem: the sampling rate must be at least twice the maximum frequency present in the
signal (the so-called Nyquist rate). In fact, this principle underlies nearly all signal acquisition
protocols used in consumer audio and visual electronics, medical imaging devices, radio
receivers, and so on. For some signals, such as images that are not naturally bandlimited, the
sampling rate is dictated not by the Shannon theorem but by the desired temporal or spatial
resolution. However, it is common in such systems to use an antialiasing low-pass filter to
band limit the signal before sampling, and so the Shannon theorem plays an implicit role. In
the field of data conversion, for example, standard analog-to-digital converter (ADC)
technology implements the usual quantized Shannon representation: the signal is uniformly
sampled at or above the Nyquist rate [7]. This report surveys the theory of compressive
sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that
goes against the common wisdom in data acquisition. CS theory asserts that one can recover
certain signals and images from far fewer samples or measurements than traditional methods
use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals
of interest, and incoherence, which pertains to the sensing modality.
2.1 COMPRESSIVE SENSING PARADIGM
Compressive sensing (CS) has witnessed increased interest recently, courtesy of the high demand for fast, efficient, and inexpensive signal processing algorithms, applications, and devices. Contrary to the traditional Nyquist paradigm, the CS paradigm, banking on finding sparse solutions to underdetermined linear systems, can reconstruct signals from far fewer samples than is possible using the Nyquist sampling rate. The problem of a limited number of
samples can occur in multiple scenarios, e.g., when we have limitations on the number of
data capturing devices, measurements are very expensive or slow to capture such as in
radiology and imaging techniques via neutron scattering. In such situations, CS provides a
promising solution. CS exploits sparsity of signals in some transform domain and the
incoherency of these measurements with the original domain. In essence, CS combines the
sampling and compression into one step by measuring the minimum samples that contain maximum information about the signal: this eliminates the need to acquire and store a large number of samples only to drop most of them because of their minimal value. CS has seen
major applications in diverse fields, ranging from image processing to gathering geophysics
data. Most of this has been possible because of the inherent sparsity of many real world
signals like sound, image, and video [8]. These applications of CS are the main focus of this report, with added attention given to the reconstruction of images in the imaging domain. The novel technique of CS is applied with OMP to achieve image recovery from random and inaccurate data with improved performance parameters.
2.1.1 Sparsity
Sparsity expresses the idea that the information rate of a continuous-time signal may be much smaller than suggested by its bandwidth, or that a discrete-time signal depends on a number of degrees of freedom which is comparably much smaller than its (finite) length. More precisely, CS exploits the fact that many natural signals are sparse or compressible in the sense that they have concise representations when expressed in the proper basis.
Mathematically speaking, we have a vector f ∈ R^n (such as the n-pixel image in Figure 2.2) which we expand in an orthonormal basis (such as a wavelet basis) Ψ = [ψ_1 ψ_2 . . . ψ_n] as follows:

f(t) = Σ_{i=1}^{n} x_i ψ_i(t)    (2.1)

where x is the coefficient sequence of f, with x_i = ⟨f, ψ_i⟩. The implication of sparsity is now clear: when a signal has a sparse expansion, one can discard the small coefficients
without much perceptual loss. Natural signals such as sound, image or seismic data can be
stored in compressed form, in terms of their projections onto a suitable basis [9]. When the basis is chosen properly, a large number of projection coefficients are zero or small enough to be ignored. If a signal has only s non-zero coefficients, it is said to be s-sparse. If a large number of projection coefficients are small enough to be ignored, then the signal is said to be compressible. Well known compressive-type bases include two-dimensional (2D) wavelets for images, localized sinusoids for music, fractal-type waveforms for spiky reflectivity data, and curvelets for wave-field propagation.
Figure 2.2 Original image and its image in Wavelet transform domain
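The keep-the-largest-coefficients idea can be illustrated with a short sketch. As an assumption on my part, it uses an orthonormal DCT-II basis built by hand (standing in for the wavelet basis of Figure 2.2) and a smooth synthetic signal; all names are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix; row k is the k-th basis function."""
    k, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    Psi = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    Psi[0] *= np.sqrt(1 / n)
    Psi[1:] *= np.sqrt(2 / n)
    return Psi

n, K = 256, 10
t = np.linspace(0, 1, n)
f = np.cos(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)  # smooth test signal

Psi = dct_matrix(n)
x = Psi @ f                                  # coefficients x_i = <f, psi_i>
x_K = np.where(np.abs(x) >= np.sort(np.abs(x))[-K], x, 0)  # keep K largest
f_K = Psi.T @ x_K                            # K-sparse approximation of f

rel_err = np.linalg.norm(f - f_K) / np.linalg.norm(f)
print(rel_err)                               # small: f is compressible in this basis
```

Because the signal is smooth, only a handful of coefficients carry almost all of its energy, which is exactly the compressibility the text describes.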
2.1.2 Incoherence
Incoherence extends the duality between time and frequency and expresses the idea that
objects having a sparse representation in
they are acquired, just as a Dirac or a spike in the time domain is spread out in the frequency
domain. Put differently, incoherence says that unlike the signal of interest, the
sampling/sensing waveforms have an extremely dense representation in
[8]. Coherence
measures the maximum correlation between any two elements of two different matrices.
......
as columns and
is an M N matrix with
is a
1.......
N max
(2.2)
for 1 j N and 1 k M. It follows from linear algebra that 1 ( , )
N .
In CS, we are concerned with the incoherence of matrix used to sample/sense signal of
interest (hereafter referred as measurement matrix
and
). Within
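Equation (2.2) can be evaluated numerically; the sketch below (with illustrative names) pairs the spike (identity) basis with hand-built, unit-norm DCT rows, a classically incoherent pair:

```python
import numpy as np

def coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max_{k,j} |<phi_k, psi_j>|,
    assuming unit-norm rows of Phi and columns of Psi."""
    N = Psi.shape[1]
    G = Phi @ Psi                       # matrix of inner products <phi_k, psi_j>
    return np.sqrt(N) * np.abs(G).max()

N = 64
Psi = np.eye(N)                         # spike (identity) sparsity basis
j = np.arange(N)
k = np.arange(N)
Phi = np.cos(np.pi * (2 * j[None, :] + 1) * k[:, None] / (2 * N))  # DCT-II rows
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)                  # unit-norm rows
print(coherence(Phi, Psi))              # lies between 1 and sqrt(N) = 8
```

Lower values mean better incoherence, and hence fewer measurements needed for recovery.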
Inverse problems: In still other situations, the only way to acquire f may be to use a measurement system Φ of some sort.
Biological Applications: CS can also be used for efficient and inexpensive sensing in
biological applications. Recent works show usage of CS in comparative
deoxyribonucleic acid (DNA) microarrays [14]. Traditional microarray biosensors are useful for the detection of a limited number of microorganisms. To detect a greater number of species, large and expensive microarrays are required. However, natural phenomena are
sparse in nature and easily compressible in some basis. DNA microarrays consist of
millions of probe spots to test a large number of targets in a single experiment. CS
gives an alternative design of compressed microarrays in which each spot contains
copies of different probe sets reducing the overall number of measurements and still
efficiently reconstructing from them.
Sparse Channel Estimation: CS has been used in communications domain for sparse
channel estimation. The adoption of multiple antennas in communication system design, and operation at large bandwidths, possibly in the gigahertz range, enable sparse representation of channels in appropriate bases. The conventional technique of training-based estimation using least-squares (LS) methods may not be an optimal choice. Various recent studies
have employed CS for sparse channel estimation. Compressed channel estimation
(CCS) gives much better reconstruction using its non-linear reconstruction algorithm
as opposed to linear reconstruction of LS-based estimators. In addition to nonlinearity, CCS framework also provides scaling analysis. The use of high time
resolution over-complete dictionaries further enhances channel estimation. BP and
OMP are used to estimate multipath channels with Doppler spread ranging from mild,
like on a normal day, to severe, like on stormy days.
Given the N x N basis matrix Ψ = [ψ_1 | ψ_2 | . . . | ψ_N] with the vectors {ψ_i} as columns, a signal x can be expressed as

x = Σ_{i=1}^{N} s_i ψ_i,  or  x = Ψs    (2.3)

where s is the N x 1 column vector of weighting coefficients s_i = ⟨x, ψ_i⟩. The signal x is K-sparse if it is a linear combination of only K basis vectors; that is, only K of the s_i coefficients in (2.3) are nonzero and (N - K) are zero. The case of interest is when K << N. The signal x is compressible if the representation (2.3) has just a few large coefficients and many small coefficients [15].
Transform domain: The fact that compressible signals are well approximated by K-sparse representations forms the foundation of transform coding. In data acquisition systems (for example, digital cameras) transform coding plays a central role: the full N-sample signal x is acquired; the complete set of transform coefficients {s_i} is computed via s = Ψ^T x; the K largest coefficients are located and the (N - K) smallest coefficients are discarded; and the K values and locations of the largest coefficients are encoded. Unfortunately, this sample-then-compress framework suffers from three inherent inefficiencies. First, the initial number of samples N may be large even if the desired K is small. Second, the set of all N transform coefficients {s_i} must be computed even though all but K of them will be discarded. Third, the locations of the large coefficients must be encoded, thus introducing an overhead.
Compressive Sensing Problem: Compressive sensing addresses these inefficiencies by directly acquiring a compressed signal representation without going through the intermediate stage of acquiring N samples. Consider a general linear measurement process that computes M < N inner products between x and a collection of vectors {φ_j}, for j = 1, ..., M, as y_j = ⟨x, φ_j⟩. Arranging the measurements y_j in an M x 1 vector y and the measurement vectors φ_j^T as rows in an M x N matrix Φ, and then substituting Ψs for x from (2.3), we obtain

y = Φx = ΦΨs = Θs    (2.4)

where Θ = ΦΨ is an M x N matrix. The measurement process is not adaptive, meaning that Φ is fixed and does not depend on the signal x. The problem consists of designing (a) a stable measurement matrix Φ and (b) a reconstruction algorithm. The solution consists of two steps. In the first step, a stable measurement matrix Φ is developed that ensures that the notable information in any K-sparse or compressible signal is not damaged by the dimensionality reduction from x ∈ R^N down to y ∈ R^M. In the second step, we develop a reconstruction algorithm to recover x from the measurements y. Initially, we focus on exactly K-sparse signals.
Figure 2.4 (a) Compressive sensing measurement process with (random Gaussian) measurement matrix Φ and transform matrix Ψ; the coefficient vector s is sparse with K = 4. (b) Measurement process in terms of the matrix product Θ = ΦΨ.
Clearly, reconstruction will not be possible if the measurement process damages the
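A minimal numerical sketch of the measurement process in (2.4), under the simplifying assumption Ψ = I (so x itself is K-sparse); the dimensions and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 256, 64, 4

# K-sparse coefficient vector s; with Psi = I the signal is x = s
s = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
s[support] = rng.normal(0, 1, K)
x = s

# Random Gaussian measurement matrix Phi (M x N), entries with variance 1/N
Phi = rng.normal(0, 1 / np.sqrt(N), size=(M, N))

# Non-adaptive linear measurements: y = Phi x = Theta s
y = Phi @ x
print(y.shape)          # M measurements instead of N samples
```

Note that y is exactly a linear combination of the K columns of Φ indexed by the support of s, which is the structure the reconstruction algorithms exploit.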
information in x. Unfortunately, this is the case in general: since the measurement process is linear and defined in terms of the matrices Φ and Ψ, solving for s given y in (2.4) is just a linear algebra problem with M < N, i.e. fewer equations than unknowns. However, the K-sparsity of s comes to the rescue. In this case the measurement vector y is just a linear combination of the K columns of Θ corresponding to the nonzero entries of s. If we knew a priori which K entries were nonzero, then we could form an M x K system of linear equations to solve for these nonzero entries, where now the number of equations M equals or exceeds the number of unknowns K. A necessary and sufficient condition to ensure that this M x K system is well-conditioned, and hence sports a stable inverse, is that for any vector p sharing the same K nonzero entries as s we have

1 - ε ≤ ||Θp||_2 / ||p||_2 ≤ 1 + ε    (2.5)

for some ε > 0; that is, Θ must approximately preserve the lengths of these particular K-sparse vectors. This is the so-called restricted isometry property (RIP) [17]. A related condition for ensuring stability is that the measurement matrix Φ be incoherent with the sparsifying basis Ψ, in the sense that the vectors {φ_j} cannot sparsely represent the vectors {ψ_i} and vice versa. Both properties can be achieved with high probability simply by selecting Φ as a random matrix, for instance with entries φ_{j,i} drawn as independent and identically distributed random variables from a zero-mean, 1/N-variance Gaussian density (white noise). Then the measurement vector y is merely M different randomly weighted linear combinations of the elements of x, and a Gaussian Φ is, with high probability, incoherent with any fixed basis Ψ.
The reconstruction algorithm must take the M measurements in the vector y, the random measurement matrix Φ, and the basis Ψ, and reconstruct the length-N signal x or, equivalently, its sparse coefficient vector s. For K-sparse signals, since M < N, there are infinitely many vectors s' that satisfy Θs' = y. This is because if Θs = y then Θ(s + r) = y for any vector r in the null space of Θ. Therefore, the reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N - K)-dimensional translated null space.
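The near-isometry in (2.5) can be probed empirically for a random Gaussian matrix. This is only a sketch, sampling random K-sparse vectors rather than certifying the RIP (which is computationally hard in general), and it scales the entries by 1/M, a common normalization I chose so the ratios concentrate near one:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 256, 128, 5
Theta = rng.normal(0, 1 / np.sqrt(M), size=(M, N))  # variance 1/M per entry

ratios = []
for _ in range(200):
    p = np.zeros(N)
    idx = rng.choice(N, size=K, replace=False)      # random K-sparse test vector
    p[idx] = rng.normal(0, 1, K)
    ratios.append(np.linalg.norm(Theta @ p) / np.linalg.norm(p))

# For RIP-like behaviour the ratios should concentrate around 1
print(min(ratios), max(ratios))
```

The spread of these ratios shrinks as M grows, mirroring the statement that enough random measurements preserve the lengths of K-sparse vectors.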
2.4.1 OMP for Signal Recovery
For reconstruction using compressed sensing, a number of basis pursuit algorithms based on l1-norm minimization have been used to date; these give reliable reconstruction but at the cost of slower reconstruction time. Orthogonal Matching Pursuit is a greedy pursuit algorithm for recovery that provides faster processing than basis pursuit, though with weaker recovery guarantees. If the complexity of OMP can be reduced by certain means, it offers a much more effective algorithm than basis pursuit in terms of reconstruction time and exact recovery. The OMP algorithm has been studied by Rauhut [19], whose work focused on recovery via random frequency measurements. The highest correlation between the columns of the measurement matrix and the residual of y is calculated, and one coordinate of the support of the signal is produced per iteration; hence the complete signal x can be recovered in the total iterations performed by the algorithm. In its alternative, compressive sampling matching pursuit, a proxy signal of y is generated and then its correlations are found. There has been a great deal of research on image reconstruction using compressed sensing with Orthogonal Matching Pursuit. Major research in this area was performed by Donoho: D. L. Donoho, with Tsaig, gave a stagewise orthogonal matching pursuit for the sparse solution of underdetermined systems of linear equations [20]. Tropp and Gilbert focused on measurement matrices such as Gaussian and Bernoulli. Inverse problems are often solved with the help of greedy algorithms. Two popular greedy algorithms used for compressed sensing reconstruction are orthogonal least squares and orthogonal matching pursuit. Generally the two are taken as the same, but that is not true; the confusion between the two was cleared up by the work of Davies and Blumensath. Soussen and Gribonval's work is based on data without noise taken into account: a subset of the true support is formed from the available partial information, and they derive a condition complementary to restricted isometry for the success of greedy algorithms. This condition relaxes the coherence constraint, which is considered a necessity for the implementation of compressed sensing. T. Tony Cai and Lie Wang reconstructed high dimensional signals in the presence of noise with OMP, giving OMP with explicit stopping conditions; their work shows that reconstruction is possible under mutual incoherence of coefficients by using OMP. Signal reconstruction using tree-based orthogonal matching pursuit has been performed by La, C. and Do, M. N.; their recovery results show that OMP gives better reconstruction compared to recovery algorithms that only use the assumption of sparse representation, and their work solved linear inverse problems with a limit on the total number of measurements. Beck and Teboulle proposed fast recovery algorithms called the iterative shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding algorithm (FISTA). These algorithms first solve the optimization problem without the l1 penalty and then select values from x using a predefined threshold, decreasing the l1 norm. The selection is based on hard and soft thresholding: for soft thresholding, the value zero is assigned to atoms of x whose magnitude lies below a certain predefined variable and the remaining atoms are shrunk, while for hard thresholding the algorithm simply assigns the value zero to the entries which have smaller magnitude. Rebollo-Neira and Lowe proposed an optimized orthogonal matching pursuit reconstruction that builds upon functions selected from a dictionary; at each iteration an approximation of the signal is given that minimizes the residual norm [21]. A fast orthogonal matching pursuit algorithm was given by Gharavi-Alkhansari and Huang; for a number of coding applications, its computational complexity is very close to that of the non-orthogonal version. Signal recovery from random frequency measurements has been performed by Rauhut and Kunis, who proved that OMP is faster than l1-norm-based methods of reconstruction.
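The hard and soft thresholding rules mentioned in the survey above can be written compactly. This is a generic sketch of the two operators only, not the exact ISTA/FISTA implementation of Beck and Teboulle:

```python
import numpy as np

def soft_threshold(x, lam):
    """Shrink toward zero: entries with |x_i| <= lam become 0,
    all others move lam closer to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Keep entries with |x_i| > lam unchanged, zero the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

x = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(soft_threshold(x, 1.0))
print(hard_threshold(x, 1.0))
```

Soft thresholding is the proximal operator of the l1 norm, which is why it appears inside ISTA-type iterations; hard thresholding instead preserves the magnitudes of the surviving entries.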
2.4.2 OMP algorithm
Matching Pursuit is an approach to computing adaptive signal representations. The prevalent goal of this technique is to obtain a sparse signal representation by choosing, at each iteration, the dictionary atom that is best adapted to approximate part of the signal. Nonetheless, the MP algorithm in its original form [22] does not provide at each iteration the linear expansion of the selected atoms that best approximates the signal. A later refinement which does provide such an approximation has been termed orthogonal matching pursuit (OMP). The OMP approach improves upon MP in the following sense: from the atoms selected through the MP criterion, the OMP approach gives rise to the set of coefficients yielding the linear expansion that minimizes the distance to the signal, i.e. the least mean square error. OMP is an iterative greedy algorithm that selects at each step the column which is most correlated with the current residual. This column is then added into the set of selected columns. The algorithm updates the residual by projecting the observed measurements y onto the linear subspace spanned by the columns that have already been selected, and then iterates. Compared with other alternative methods, a major advantage of OMP is its simplicity and fast implementation.
Algorithm for OMP for Signal Recovery:
Input: An M x N measurement matrix Φ; an M-dimensional data vector v; the sparsity level K.
Output: An estimate of the ideal signal and an index set containing K elements.
Procedure:
1) Initialize the residual r_0 = v, the index set Λ_0 = ∅, and the iteration counter t = 1.
2) Find the index λ_t that solves the optimization problem

λ_t = argmax_{j=1,...,N} |⟨r_{t-1}, φ_j⟩|    (2.6)

If the maximum occurs for multiple indices, break the tie deterministically.
3) Augment the index set Λ_t = Λ_{t-1} ∪ {λ_t} and the matrix of chosen atoms Φ_t = [Φ_{t-1}, φ_{λ_t}].
4) Solve the least squares problem to obtain a new signal estimate

x_t = argmin_x || v - Φ_t x ||_2    (2.7)

5) Calculate the new approximation of the data and the new residual

a_t = Φ_t x_t    (2.8.a)
r_t = v - a_t    (2.8.b)

6) Increment t, and return to Step 2 if t < K.
Provided that the residual r_{t-1} is nonzero, the algorithm selects a new atom at iteration t and the matrix Φ_t has full column rank. At each iteration, the least squares problem can be solved with marginal cost by updating the solution from the previous iteration.
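The steps above can be sketched in a direct, unoptimized implementation. For clarity it re-solves the full least squares problem at every iteration instead of the incremental update mentioned in the text; the synthetic test problem is my own:

```python
import numpy as np

def omp(Phi, v, K):
    """Orthogonal Matching Pursuit following steps (2.6)-(2.8):
    greedily pick the column of Phi most correlated with the residual,
    then re-fit all selected coefficients by least squares."""
    M, N = Phi.shape
    residual = v.copy()                      # r_0 = v
    support = []                             # Lambda_0 = empty set
    for _ in range(K):
        # Step 2: index maximizing |<r_{t-1}, phi_j>|      (eq. 2.6)
        lam = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(lam)                  # Step 3: augment the index set
        # Step 4: least squares over the chosen columns    (eq. 2.7)
        x_t, *_ = np.linalg.lstsq(Phi[:, support], v, rcond=None)
        # Step 5: new approximation and residual           (eq. 2.8)
        residual = v - Phi[:, support] @ x_t
    x_hat = np.zeros(N)
    x_hat[support] = x_t
    return x_hat

# Sanity check on a synthetic K-sparse problem with a Gaussian matrix
rng = np.random.default_rng(3)
M, N, K = 80, 256, 5
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
x_hat = omp(Phi, Phi @ x, K)
print(np.linalg.norm(x - x_hat))
```

With M comfortably larger than K log N, the greedy selection picks the true support and the least squares step makes the recovery exact up to numerical precision.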
CHAPTER 3
LITERATURE REVIEW
Image reconstruction is a mathematical process that generates images from acquired data in many different ways. Image reconstruction has a fundamental impact on image quality and on the application for which the image is used. Compressed sensing relies on l1 techniques, which several other scientific fields have used historically: matching pursuit was introduced in 1993 and basis pursuit in 1998. There were theoretical results describing when these algorithms recovered sparse solutions, but the required type and number of measurements were sub-optimal and were subsequently greatly improved by compressed sensing.
At first glance, compressed sensing might seem to violate the sampling theorem, because
compressed sensing depends on the sparsity of the signal in question and not its highest
frequency. This is a misconception, because the sampling theorem guarantees perfect
reconstruction given sufficient, not necessary, conditions. Sparse signals with high frequency
components can be highly under-sampled using compressed sensing compared to classical
fixed-rate sampling.
Image reconstruction using compressive sensing can be accomplished with different techniques such as the TV minimization algorithm, OMP, and basis pursuit. The image can be transformed into a sparse form to allow more economical recovery. An image can be reconstructed with transform-code reconstruction [23], in which the reconstructed quality is decided by the quantization level. Compressive sensing (CS) breaks this limit and states that sparse signals can be perfectly recovered from incomplete or even corrupted information by solving a convex optimization problem. Under the same acquisition of images, if images are represented sparsely enough, they can be reconstructed more accurately by CS recovery than by the inverse transform. So a modified TV operator is used to enhance image sparse representation and reconstruction accuracy, and image information is acquired from transform coefficients corrupted by quantization noise in image transform coding.
Improved total variation (TV) minimization algorithms [24] recover sparse signals or images in compressive sensing (CS) and reduce the undesirable staircase effect, either by intra-prediction [25] or by a gradient descent method. The new method conducts intra-prediction block by block in the CS reconstruction process and generates a residual for the image block being decoded in the CS measurement domain. The gradient of the residual is sparser than that of the image itself, which can lead to better reconstruction quality in CS by TV regularization. The staircase effect can also be eliminated by the gradient descent approach.
The OMP [34] approach improves upon MP in the following sense: from the atoms selected
through the MP criterion, the OMP approach produces, at each iteration, the set of
coefficients yielding the linear expansion that minimizes the distance to the signal. However,
since it selects the atoms according to the MP prescription, the selection criterion is not
optimal in the sense of minimizing the residual of the new approximation. OMP [35] is an
iterative greedy algorithm that selects at each step the column which is most correlated with
the current residual. Orthogonal matching pursuit approximates the signal estimate in terms
of a dictionary; at each iteration the recovered signal is computed and compared with the
estimates so as to have the maximum inner product. This procedure is repeated until the
stopping condition is met. With OMP [36], side information has been used, the noise
component has been considered in the estimates, generalized OMP has been implemented,
and many other variants exist. Each implementation takes a different kind of sparse domain
into consideration.
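The select-and-project loop described above can be sketched in a few lines (a minimal NumPy illustration under stated assumptions; the function name, tolerance, and iteration bound are my own, not the thesis code):

```python
import numpy as np

def omp(Phi, y, max_atoms, tol=1e-6):
    """Orthogonal Matching Pursuit sketch: greedily pick the column of Phi
    most correlated with the current residual, then re-fit by least squares
    on all selected columns and update the residual."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(N)
    coeffs = np.zeros(0)
    for _ in range(max_atoms):
        # Selection step: column with the largest correlation to the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal projection of y onto the span of the selected columns.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coeffs
    return x_hat
```

For a K-sparse signal measured with a Gaussian matrix, the loop typically terminates with a near-zero residual once the true support has been captured.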
The paper titled "Signal Recovery from Incomplete and Inaccurate Measurements via
Regularized Orthogonal Matching Pursuit" by Deanna Needell and Roman Vershynin
proposes Regularized Orthogonal Matching Pursuit (ROMP), which seeks to provide the
benefits of the two major approaches to sparse recovery: it combines the speed and ease of
implementation of the greedy methods with the strong guarantees of the convex programming
methods. For any measurement matrix satisfying the restricted isometry (uniform uncertainty)
principle, ROMP recovers a signal x with O(n) nonzeros from its inaccurate measurements in
at most O(n) iterations, where each iteration amounts to solving a least squares problem. In
particular, if the error term vanishes the reconstruction is exact. This stability result extends
naturally to the very accurate recovery of approximately sparse signals [37].
The paper titled "Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise",
by T. Tony Cai and Lie Wang in the year 2011, considers the OMP technique for the
recovery of high dimensional sparse signals based on a small number of noisy linear
measurements. In this paper OMP, as an iterative greedy algorithm, selects at each step the
column which is most correlated with the current residual, together with explicit stopping
rules. The paper addresses the problem of identifying the significant components in the case
where some of the nonzero components are possibly small. With these modified rules, the
OMP algorithm can ensure that no zero components are selected [36].
The next paper, "Generalized Orthogonal Matching Pursuit" by Jian Wang, Seokbeop Kwon,
and Byonghyo Shim in the year 2012, generalizes the traditional OMP technique in the sense
that multiple (N) indices are identified per iteration. Owing to the selection of multiple
"correct" indices, the gOMP algorithm finishes in a much smaller number of iterations
compared with OMP [38].
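The multiple-index selection idea can be sketched as a small variation of the plain OMP loop (an illustrative version under stated assumptions; the parameter `S` for indices per iteration and all names are mine, and the original gOMP paper gives the precise formulation):

```python
import numpy as np

def gomp(Phi, y, max_iter, S=2, tol=1e-6):
    """Generalized OMP sketch: identify S indices per iteration (S=1
    reduces to plain OMP), so fewer iterations are typically needed."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(max_iter):
        corr = np.abs(Phi.T @ residual)
        # Pick the S columns most correlated with the current residual.
        new = np.argsort(corr)[-S:]
        support = sorted(set(support) | set(new.tolist()))
        # Least squares re-fit over all selected columns, then update residual.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat
```

With S greater than one, a K-sparse signal is often captured in far fewer than K iterations, which is the speed-up the paper reports.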
The paper titled "Signal Recovery from Random Measurements via Orthogonal Matching
Pursuit" by Joel A. Tropp demonstrates theoretically and empirically that the greedy OMP
algorithm can reliably recover a signal with K nonzero entries in dimension N given
O(K ln N) random linear measurements of that signal. This is a massive improvement over
previous results, which require O(K²) measurements [39].
The following table summarizes the previous work done in the field of image reconstruction,
with the corresponding advantages and disadvantages. The evaluation parameters are also
listed.
Table 3.1: Evaluation of literature review

1. Compressive Sensing Variation Minimization Algorithm (IEEE), 2010.
   Details: CS with TV + norm-1; TV + DCT constraint + norm-1; TV + contourlet constraint + norm-1.
   Pros/Cons: decreases the cost of hardware; higher performance; visual improvement of edges.
   Parameters: PSNR, measurement matrix dimension (M).

2. An iterative weighing coefficient reconstruction in compressive sensing (CS), 2010.
   Details: TV + norm-1, Kronecker product, image decomposed into a set of atoms.
   Pros/Cons: enhances sparsity; adaptability can be improved for complicated images.
   Parameters: PSNR, ratio of data acquisition.

3. An image reconstruction algorithm based on compressed sensing using conjugate gradient (IEEE), 2010.
   Pros/Cons: simple, less memory requirement, better speed.
   Parameters: PSNR, time cost.

4. Compressive sensing image reconstruction based on wavelet transform in contourlet domain (Elsevier, vol. 91), 2011.
   Details: each subband is transformed by a 2-D orthonormal wavelet basis; solve the optimal problem + thresholding + smoothing (contourlet + orthonormal wavelet transform + optimization + thresholding + smoothing by Wiener filter).
   Pros/Cons: reduced measurement matrix, optimal approximation; heavier computation.
   Parameters: PSNR.

5. Image decoding optimization based on compressive sensing (Elsevier, vol. 92), 2011.
   Details: modified TV operator + CS; quantization noise considered; minimize TV.
   Pros/Cons: reduces staircase effect.
   Parameters: PSNR.

6. Improved total variation minimization for compressive sensing by intra prediction (Elsevier), 2012.
   Details: modified TV operator + CS + intra prediction + in-loop deblocking filter.
   Pros/Cons: reduces blocking effect.
   Parameters: PSNR, comparison to previous methods, measurement rate, strength parameter.

7. Reconstruction based on block compressed sensing (IEEE), 2012.
   Details: CS + WT + OMP.
   Pros/Cons: reduces sampling complexity and time consumed.
   Parameters: PSNR, sampling rate (M/N) for different block sizes.

8. Image representation using block compressive sensing for compression applications (Elsevier, vol. 24), 2013.
   Details: measurement is reduced by using block sampling and adaptive measurements (RWS or RO).
   Pros/Cons: data compression with reduced measurements.
   Parameters: PSNR, block size, visual quality, energy, compression ratio.

9. Sampling adaptive block compressed sensing reconstruction, 2013.
   Details: BCS + SA + SPL + ED + DDWT; BCS + SA + SPL + ED + CT; adaptive sampling; robust coding.
   Pros/Cons: fast calculation speed; better reconstruction quality.
   Parameters: PSNR, sampling ratios.

10. Compressive sensing based on TV norm with CG optimization (IEEE, vol. 49), 2013.
    Details: CS + reweighted TV, computed towards a maximum a posteriori estimation of gradients + non-local self-similarity constraint.
    Pros/Cons: improved PSNR; reduced computational effort.
    Parameters: PSNR, MSE, CPU time, number of measurements.

11. Iterative gradient based two dimensional compressive sensing sparse image reconstruction, 2014.
    Details: CS + TV + DDWT + gradient descent (GD) + bivariate shrinkage (BS) + projection; GD decreases aliasing iteratively.
    Parameters: PSNR, CPU time, measurement rate, iterative numbers, noise level.

12. Generalized orthogonal matching pursuit (IEEE, vol. 60), 2012.
    Pros/Cons: reduced computational time; reduced complexity.
    Parameters: number of iterations and total time taken.

13. Orthogonal Matching Pursuit (IEEE, vol. 57), 2011.
    Details: mutual incoherence.
    Pros/Cons: exact recovery; reduced computational time.
    Parameters: incoherence parameter, sparsity level (k), sensing matrix, recovered signal.

14. Optimized Orthogonal Matching Pursuit, 2002.
    Details: the representation is built up from atoms selected from a dictionary.
    Pros/Cons: exact recovery; selection criterion is optimal.
The recovery technique can be observed under the effect of noise, and a merger of the
recovery and noise-removal techniques can be considered.
For OMP, various apt criteria such as energy, variance, and least mean square can be
considered for selecting the significant columns that correlate most with the residual.
The image can be recovered from inaccurate and undersampled data via ROMP under the
influence of explicit stopping rules.
Generalization of OMP can be applied with modified stopping rules in the presence of
bounded noise.
Stopping rules can involve the mutual incoherence property and the restricted isometry
property with the desired sparsity level.
CHAPTER 4
IMAGE RECONSTRUCTION
USING COMPRESSIVE SENSING AND MODIFIED OMP
In the previous section, the conventional OMP technique with compressive sensing was
discussed, which provides the means to recover the desired image. In this chapter, a
modified technique is used that proceeds under certain rules in order to reduce the number
of computations required and the overall time elapsed.
4.1 MODIFIED OMP
Orthogonal matching pursuit (OMP) is a greedy search algorithm popularly used for the
recovery of compressively sensed sparse signals. In this report the discrete wavelet transform
is used to obtain the sparse form of the test images. In OMP, the greedy algorithm selects at
each step the column of the measurement matrix Φ which is most correlated with the current
residual. This column is then added to the set of selected columns. The algorithm updates the
residual by projecting the observation y onto the linear subspace spanned by the columns that
have already been selected, and then iterates. The major advantage is its simplicity.
The measurement vector is given as:

y = Φx                                                            (4.1)

where Φ is the M x N measurement matrix and x is the sparse signal. At iteration t, the
selected column index maximizes the correlation with the residual r_{t-1}:

λ_t = arg max_j |⟨r_{t-1}, φ_j⟩|                                  (4.2)

and the residual is updated after the orthogonal projection of y onto the selected columns Φ_t:

r_t = y - Φ_t x_t                                                 (4.3)

The complete signal can be recovered within the total number of iterations performed by the
algorithm.
Reconstruction using OMP is an inverse problem. Initially the residual is set to y and its
correlation with the measurement matrix is found. For each iteration, an approximation of the
given image signal is generated, which is the orthogonal projection of the signal onto the
subspace generated by the selected entries of the signal and which minimizes the norm of the
corresponding residual error. In the second step the minimum of the residual is calculated.
After the orthogonal projection, the entry with minimum residual error, i.e. r_n = S - S_n, is
selected. The continuous update then yields the overall recovered image. The recovery
result with conventional OMP can be improved one step further by sparsifying the low
frequency coefficients rather than applying the recovery algorithm to the overall image, in
order to conserve the memory storage required. The OMP algorithm can perform even better
with explicit stopping rules and properties. It has been shown that, under the mutual
incoherence condition and with the number of iterations specified together with the minimum
magnitude of the nonzero components of the signal, the OMP algorithm still selects all
significant components before possibly selecting incorrect ones [40]. In this report the
stopping rules are also discussed and the properties of OMP are investigated. The mutual
incoherence property can be included in the stopping rule to modify the algorithm.
Incoherence says that, unlike the signal of interest, the sampling/sensing waveforms have an
extremely dense representation in the sparsifying domain. Coherence measures the maximum
correlation between any two elements of two different matrices; these two matrices might
represent two different basis/representation domains. If Φ is an M x N matrix with sensing
waveforms {φ_j} as columns and Ψ is an N x N matrix with basis vectors {ψ_k} as columns,
the mutual coherence is

μ(Φ, Ψ) = √N · max |⟨φ_j, ψ_k⟩|                                   (4.4)

for 1 ≤ j ≤ N and 1 ≤ k ≤ M. It follows from linear algebra that 1 ≤ μ(Φ, Ψ) ≤ √N.
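The coherence measure in equation (4.4) can be computed directly; the following is a small NumPy sketch (assuming unit-norm columns in both matrices), using the maximally incoherent spike/Fourier pair as the example:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """mu(Phi, Psi) = sqrt(N) * max |<phi_j, psi_k>| over all column
    pairs; for orthonormal bases the value lies in [1, sqrt(N)]."""
    N = Psi.shape[0]
    return np.sqrt(N) * np.max(np.abs(Phi.conj().T @ Psi))

# Spike basis (identity) versus the orthonormal DFT basis: every inner
# product has magnitude 1/sqrt(N), so the coherence attains its minimum.
N = 16
spike = np.eye(N)
fourier = np.fft.fft(np.eye(N)) / np.sqrt(N)
mu = mutual_coherence(spike, fourier)   # -> 1.0
```

Low coherence between the sensing and sparsifying domains is exactly what allows a small number of measurements to suffice.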
In CS, we are concerned with the incoherence of the matrix used to sample/sense the signal
of interest (hereafter referred to as the measurement matrix Φ) and of the sparsifying basis Ψ.
Incoherence reduces the number of measurements required for reconstruction of the signal.
The mutual incoherence property (MIP) requires that the mutual incoherence μ be bounded
from above. A condition to ensure that the M x K system is well-conditioned, and hence
supports a stable inverse, is that for any vector p sharing the same K nonzero entries as s we
have

(1 - δ) ||p||₂ ≤ ||Φp||₂ ≤ (1 + δ) ||p||₂                         (4.5)

for some δ > 0 and for all such K-sparse vectors. This is the so-called restricted isometry
property (RIP). OMP can reconstruct all K-sparse vectors if the restricted isometry constant
of the measurement matrix satisfies δ_{K+1} < 1/(√K + 1). Incoherence can be understood in
the sense that the vectors {ψ_j} cannot sparsely represent the vectors {φ_i} and vice versa.
The parameter for the exact recovery condition (ERC) can be given as:

M = max_t ||(Φ_T' Φ_T)^{-1} Φ_T' φ_t||₁ < 1                       (4.6)

where Φ_T contains the selected columns and the maximum is taken over the columns φ_t
outside the selected set. This condition is called the Exact Recovery Condition (ERC) [41]
and is a sufficient condition for exact recovery of the signal in the noiseless case. The
bounded stopping condition allows only the specified iterations, by selecting the significantly
correlated columns before the process reaches the non-significant (zero) columns. The
modified stopping rules can ensure that no zero components are selected.
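As an illustration, the ERC statistic can be evaluated numerically for a given support set. This is a sketch under the assumption that equation (4.6) takes Tropp's pseudo-inverse form over the selected columns; the function name is mine:

```python
import numpy as np

def erc_value(Phi, support):
    """Exact Recovery Condition statistic: the maximum, over columns
    phi_t outside the support T, of ||pinv(Phi_T) @ phi_t||_1.
    ERC holds (noiseless exact recovery is guaranteed) when the value
    is strictly below 1."""
    T = list(support)
    pinv_T = np.linalg.pinv(Phi[:, T])
    outside = [j for j in range(Phi.shape[1]) if j not in set(T)]
    return max(np.linalg.norm(pinv_T @ Phi[:, j], 1) for j in outside)

# With orthonormal columns every off-support projection is zero, so the
# ERC value is 0 and the condition holds trivially.
Phi = np.linalg.qr(np.random.randn(8, 8))[0][:, :6]
val = erc_value(Phi, [0, 1, 2])
```

For correlated (coherent) columns the statistic grows towards and past 1, which is when greedy selection can pick an incorrect column first.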
4.1.1 Algorithm for Image Recovery
The Modified Orthogonal Matching Pursuit algorithm selects at each step the column of the
measurement matrix Φ which is most correlated with the current residual. This column is
then added to the set of selected columns. The algorithm updates the residual by projecting
the observation y onto the linear subspace spanned by the columns that have already been
selected, and then iterates. The algorithm only selects the significant components which
satisfy the modified stopping rule given by equation (4.6), and thus ensures that no zero
components are selected. The MIP ensures the proper selection of the significant columns
that correlate most with the residual and significantly reduces the computational time of the
overall algorithm. The modified algorithm commences in the same way as conventional
OMP and, under the stopping condition, iterates fewer times while yielding a better quality
recovered image.
The algorithm is stated as follows:
Step 1: Consider an N x N image x. Choose an appropriate M and construct the measurement
matrix Φ (M x N).
Step 2: Make a sparse representation of the image and obtain the low frequency coefficients
L_i (i = 1, 2, ..., N) and the high frequency coefficients H_i, V_i, D_i (i = 1, 2, ..., N). Then
measure only the low frequency coefficients using the compressive sensing technique:

y = ΦL_i                                                          (4.7)

Step 3: Reconstruct the low frequency coefficients using the modified OMP algorithm under
the stopping condition in the presence of Gaussian noise.
Step 4: Initialize the residual r_0 = y and the set of selected columns to the empty set. Let the
iteration count be k = 1, with Φ the measurement matrix (M x N) and x the input image
(N x N).
Step 5: While the incoherence-based stopping condition holds, i.e. while (norm(r) > threshold
and k < min{K, M/N}), increment the iteration count and select the indices {λ(i)},
i = 1, 2, ..., N, corresponding to the largest entries of Φ'r_{k-1}, adding them to the selected
set {λ(1), ..., λ(N)}.
Step 6: Solve the least squares problem over the selected columns: u = arg min ||y - Φ_k u||₂.
Step 7: Update the residual, r_k = y - Φ_k u, and retrieve the recovered image as
x = arg min ||y - Φu||₂.
The flow chart signifies the step-wise implementation of the modified algorithm along with
the stopping condition and its properties.
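The steps above can be sketched as a single loop. This is an illustrative NumPy version, not the thesis implementation: the threshold value, the iteration bound, and the function name are assumptions, and the DWT sparsification of Step 2 is assumed to have already produced the measured vector y:

```python
import numpy as np

def modified_omp(Phi, y, K, threshold=1e-6):
    """Modified OMP loop with the bounded stopping rule: iterate only
    while the residual norm exceeds a threshold and the iteration count
    stays below a bound, so insignificant (zero) columns are never
    selected."""
    M, N = Phi.shape
    support, residual = [], y.copy()
    coeffs = np.zeros(0)
    k = 0
    # Stopping rule: significant columns only, limited number of iterations.
    while np.linalg.norm(residual) > threshold and k < min(K, M):
        k += 1
        # Column most correlated with the current residual (Step 5).
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j in support:
            break  # no new significant column remains
        support.append(j)
        # Least squares over the selected columns (Step 6).
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # Residual update r_k = y - Phi_T u (Step 7).
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(N)
    x_hat[support] = coeffs
    return x_hat, k
```

Because the loop exits as soon as the residual drops below the threshold, the iteration count returned is typically close to the true sparsity rather than the full bound.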
CHAPTER 5
RESULT AND CONCLUSION
In this section, the analysis and execution of the modified OMP technique under the stated
conditions is observed. The results can be observed on the given test images.
5.1 TEST IMAGES
The following are standard test images of size 256 x 256. The technique is implemented
on these test images, i.e. Lena, Cameraman, and Barbara. The results are in the form of
reconstructed images.
Figure 5.1: Test images (a) Lena (b) Cameraman (c) Barbara
Figure 5.2: Reconstruction result with modified OMP (a) Original image (b) Reconstructed image with 128
measurements (M=128) (c) Reconstructed image with M=190
Figure 5.3: Reconstructed image with modified OMP (a) Original image (b) Reconstructed image with M=220
The image reconstructed in Figure 5.3 (b) represents the optimum recovery of the
desired image under the specified exact recovery condition and the mutual incoherence
property rule, with a smaller number of measurements for the low frequency coefficients and
hence, in turn, less storage space. The results are also displayed in tabular form, comparing
the PSNR values for the various techniques. The modified technique displays relatively
improved PSNR.
Table 5.1: Comparison of PSNR (dB)

Technique        M=128    M=150    M=170    M=190
OMP              26.44    28.23    30.72    32.63
OMP              25.33    26.45    -        28.04
Modified OMP     32.09    31.87    32.09    33.67
OMP (dct)        28.01    28.06    28.17    28.19

Table 5.2: Comparison of elapsed time (seconds)

Technique        M=128    M=150    M=170    M=190
OMP              4.64     5.02     5.26     5.29
Modified OMP     3.91     4.24     4.30     4.43
OMP (dct)        8.48     9.08     11.72    11.45
The tabular results show that a smaller number of measurements is sufficient to reconstruct
the image. The table indicates that 150 samples of the cameraman image are sufficient to
reconstruct it, instead of the total 256, if the modified stopping and exact recovery rules are
applied; with the sparsity level taken into account, reconstruction is possible from 190
samples instead of 256. PSNR values after reconstruction are shown in the table for the
different techniques. A similar result follows for the Lena image. The elapsed time for the
three implemented algorithms is computed and the comparison is shown in Table 5.2; the
time is given in seconds. The tabular results show that the implemented OMP algorithm is
better than the existing techniques. The reconstruction process is faster and gives a stable
result. The elapsed time is calculated for recovery of the image from its sparse-domain form.
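The PSNR values reported above are computed in the standard way; a small sketch (assuming 8-bit images with a peak value of 255, which is an assumption since the thesis does not state the peak explicitly):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)
    between the original and reconstructed images."""
    diff = original.astype(float) - reconstructed.astype(float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means a smaller mean squared error between the original and the reconstruction, which is why the Modified OMP rows in Table 5.1 represent the better recovery.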
5.3 CONCLUSION AND DISCUSSIONS
The theoretical and empirical work in this report demonstrates that OMP is an effective
alternative for signal recovery from random measurements. Compressive sensing based
image reconstruction is performed by implementing orthogonal matching pursuit using
Gaussian measurements under the modified conditions. The simulation results demonstrate
that the implemented OMP gives a faster reconstruction than the existing algorithms, using a
smaller number of dimensions than previous work on OMP. Modified OMP can be used
effectively to recover sparse images. The implemented OMP technique operates under
certain conditions and stopping rules, and it is observed that the complexity of the algorithm
can be reduced by limiting the number of iterations performed. It provides feasible results in
reduced running time for the undersampled data provided. The modified technique can be
further optimized, or even generalized, under these conditions to obtain a better
reconstruction result in a reduced amount of elapsed time.
5.4 WORK TO BE DONE
So far, OMP with explicit stopping rules has been implemented. The next step is the
amalgamation of the restricted isometry property condition as a stopping rule together with
the MIP. Generalization of the already implemented technique can also be considered in
upcoming efforts. A further aim is to optimize the design of the measurement matrix and the
procedure for identifying and selecting the significant components, in order to improve the
percentage of correlation. Generalization can be obtained by improving the criteria for
identifying multiple significant indices for better correlation. Regularization of the technique
yields an algorithm with reduced computational time.
REFERENCES
[1] S. Auethavekiat, "Introduction to the implementation of compressive sensing", AU
Journal of Technology, vol. 14, no. 1, pp. 39-46, 2010.
[2] D. Donoho, "Compressed sensing", IEEE Transactions on Information Theory, vol. 52,
pp. 1289-1306, 2006.
[3] D. Donoho, Y. Tsaig, I. Drori, and J. Starck, "Sparse solutions of underdetermined linear
equations by stagewise orthogonal matching pursuit", 2006. Available:
http://www-stat.stanford.edu/~donoho/Reports/2006/StOMP-20060403.
[4] R. Baraniuk, "Compressive sensing", IEEE Signal Processing Magazine, vol. 24, no. 4,
pp. 118-121, 2007.
[5] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal
reconstruction from highly incomplete frequency information", IEEE Transactions on
Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[6] M. A. Davenport and M. B. Wakin, "Analysis of orthogonal matching pursuit using the
restricted isometry property", IEEE Transactions on Information Theory, vol. 56, no. 9,
pp. 4395-4401, Sep. 2010.
[7] R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, "Model-based compressive
sensing", IEEE Transactions on Information Theory, vol. 56, pp. 1982-2001, 2010.
[8] S. Qaisar, R. M. Bilal, W. Iqbal, M. Naureen, and S. Lee, "Compressive sensing: From
theory to applications, a survey", Journal of Communications and Networks, vol. 15,
pp. 443-456, 2013.
[9] E. J. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling",
Inverse Problems, vol. 23, pp. 969-985, Apr. 2007.
[10] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling", IEEE Signal
Processing Magazine, pp. 21-30, 2008.
[11] D. Baron, M. B. Wakin, M. F. Duarte, S. Sarvotham, and R. G. Baraniuk, "Distributed
compressed sensing", 2005, preprint.
[12] M. Lustig, D. L. Donoho, and J. M. Pauly, "Rapid MR imaging with compressed sensing
and randomly under-sampled 3DFT trajectories", in Proc. 14th Ann. Meeting ISMRM,
Seattle, WA, May 2006.
[13] V. Cevher, A. Sankaranarayanan, M. Duarte, D. Reddy, R. Baraniuk, and R. Chellappa,
"Compressive sensing for background subtraction", Computer Vision - ECCV 2008,
pp. 155-168, 2008.
[14] M. Mohtashemi, H. Smith, D. Walburger, F. Sutton, and J. Diggans, "Sparse sensing
DNA microarray-based biosensor: Is it feasible?", in Proc. SAS, 2010, pp. 127-130.
[15] E. J. Candès and M. B. Wakin, "An introduction to compressive sampling", IEEE Signal
Processing Magazine, pp. 21-30, 2008.
[16] R. Baraniuk, "Compressed sensing", IEEE Signal Processing Magazine, vol. 24,
pp. 1-9, 2007.
[17] E. J. Candès, "The restricted isometry property and its implications for compressed
sensing", Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589-592, 2008.
[18] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin, "A simple proof of the restricted
isometry property for random matrices", Constructive Approximation, vol. 28, pp. 253-263,
2008.
[19] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal
reconstruction from highly incomplete frequency information", IEEE Transactions on
Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[20] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck, "Sparse solution of underdetermined
linear equations by stagewise orthogonal matching pursuit (StOMP)", IEEE Transactions on
Information Theory, vol. 58, no. 2, 2012.
[21] L. Rebollo-Neira and D. Lowe, "Optimized orthogonal matching pursuit approach",
IEEE Signal Processing Letters, vol. 9, no. 4, pp. 137-140, Apr. 2002.
[22] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries", IEEE
Transactions on Signal Processing, vol. 41, pp. 3397-3415, Dec. 1993.
[23] F. Natterer and F. Wubbeling, Mathematical Methods in Image Reconstruction, SIAM
Monographs on Mathematical Modeling and Computation, 2001.
[24] Z. Zhang, Y. Shi, D. Kong, W. Ding, and B. Yin, "Image decoding optimization based
on compressive sensing", Elsevier, Image and Vision Computing, vol. 236, pp. 812-818,
2011.
[25] J. Xu, J. Ma, D. Zhang, Y. Zhang, and S. Lin, "Improved total variation minimization
method for compressive sensing by intra-prediction", Elsevier, Image and Vision Computing,
vol. 92, pp. 2614-2623, 2013.
[26] Z. Gao, C. Xiong, L. Ding, and C. Zhou, "Image representation using block compressive
sensing for compression applications", Elsevier, Image and Vision Computing, vol. 24,
pp. 885-894, 2013.
[27] H. Zheng, Y. Ma, and X. Zhu, "Sampling adaptive block compressed sensing
reconstruction algorithms for images based on edge detection", Elsevier, Image and Vision
Computing, vol. 28, pp. 97-103, 2013.
[28] D. L. Donoho, M. Elad, and V. N. Temlyakov, "Stable recovery of sparse overcomplete
representations in the presence of noise", IEEE Transactions on Information Theory, vol. 52,
pp. 6-18, 2006.
[29] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck, "Sparse solution of underdetermined
linear equations by stagewise orthogonal matching pursuit (StOMP)", IEEE Transactions on
Information Theory, vol. 58, no. 2, 2012.
[30] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via
orthogonal matching pursuit", IEEE Transactions on Information Theory, vol. 53, no. 12,
Dec. 2007.
[31] A. Beck and M. Teboulle, "Fast gradient-based algorithms for constrained total variation
image denoising and deblurring problems", IEEE Transactions on Image Processing, vol. 18,
pp. 2419-2434, 2009.
[32] L. Rebollo-Neira and D. Lowe, "Optimized orthogonal matching pursuit approach",
IEEE Signal Processing Letters, vol. 9, no. 4, Apr. 2002.
[33] M. Gharavi-Alkhansari and T. S. Huang, "A fast orthogonal matching pursuit
algorithm", in Proc. 1998 IEEE International Conference on Acoustics, Speech and Signal
Processing, vol. 3, 1998.
[34] P. Tseng, "Further results on stable recovery of sparse overcomplete representations in
the presence of noise", IEEE Transactions on Information Theory, vol. 55, no. 2,
pp. 888-899, 2009.
[35] L. Rebollo-Neira and D. Lowe, "Optimized orthogonal matching pursuit approach",
IEEE Signal Processing Letters, vol. 9, no. 4, pp. 137-140, Apr. 2002.
[36] T. T. Cai and L. Wang, "Orthogonal matching pursuit for sparse signal recovery with
noise", IEEE Transactions on Information Theory, vol. 57, no. 7, pp. 4680-4688, Jul. 2011.
[37] D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate
measurements via regularized orthogonal matching pursuit", IEEE Journal of Selected Topics
in Signal Processing, vol. 4, no. 2, pp. 310-316, Apr. 2010.
[38] J. Wang, S. Kwon, and B. Shim, "Generalized orthogonal matching pursuit", IEEE
Transactions on Signal Processing, vol. 60, pp. 6202-6216, 2012.
[39] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via
orthogonal matching pursuit", IEEE Transactions on Information Theory, vol. 53, no. 12,
pp. 4655-4666, Dec. 2007.
[40] J. A. Tropp, "Greed is good: Algorithmic results for sparse approximation", IEEE
Transactions on Information Theory, vol. 50, pp. 2231-2242, 2004.