
www.1000projects.com
www.fullinterview.com
www.chetanasprojects.com

CHAPTER 1
Introduction

A majority of today’s Internet bandwidth is estimated to be used for images and
video. Recent multimedia applications for handheld and portable devices are
constrained by the limited wireless bandwidth available, even with new connection
standards. The JPEG image compression standard in widespread use today took several
years to perfect. Wavelet-based techniques such as JPEG2000 offer substantially
higher compression ratios than conventional methods, although wavelet
implementations are still being refined. Flexible, energy-efficient hardware
implementations that can handle multimedia functions such as image processing,
coding, and decoding are therefore critical, especially in handheld portable
multimedia wireless devices.

1.1 Background

Data compression is a powerful enabling technology that plays a vital role in the
information age. Among the various types of data commonly transferred over
networks, image and video data comprise the bulk of the bit traffic; current
estimates indicate that image data alone take up over 40% of the volume on the
Internet. The explosive growth in demand for image and video data, coupled with
delivery bottlenecks, has kept compression technology at a premium. Among the
several compression standards available, the JPEG image compression standard is in
widespread use today. JPEG uses the Discrete Cosine Transform (DCT), applied to
8-by-8 blocks of image data. The newer standard, JPEG2000, is based on the
Wavelet Transform (WT), which offers multi-resolution image analysis, well matched
to the low-level characteristics of human vision. The

DCT is essentially unique, but the WT has many possible realizations. Wavelets
provide a basis more suitable for representing images.

This is because wavelets can represent information at a variety of scales, with
local contrast changes as well as larger-scale structures, and are thus a better
fit for image data.

1.2 Aim of the project

The main aim of the project is to implement and verify an image compression
technique and to investigate the possibility of hardware acceleration of the DWT
for signal processing applications. A hardware design has to be provided to achieve
high performance in comparison with a software implementation of the DWT. The goals
of the project are to:

• Implement the design in a hardware description language (here, VHDL).

• Perform simulation using tools such as Xilinx ISE 8.1i.

• Check correctness and synthesize the design for a Spartan-3E FPGA kit.

1.3 Block Diagram

Fig 1.1: Image Compression Model

Fig 1.2: Image Decompression Model

CHAPTER 2
Description

Fig 2.1: Block Diagram of Lifting based DWT

The block diagram consists of 4 blocks:


1) DWT
2) Compression block
3) Decompression block
4) IDWT

The input image is given to the DWT block, which uses the lifting scheme to split
the image into sequences of even and odd coefficients. These sequences are passed
to the compression block, where the image is compressed using the SPIHT algorithm.
The compressed image is converted into a bit stream by an entropy encoder. The
reconstructed image is obtained by passing the compressed data through the
decompression block and the IDWT.
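The even/odd split and its exact inverse can be sketched in a few lines. The fragment below uses a Haar-style lifting step as an assumed simplest case (the report has not fixed the lifting filters at this point); the function names are illustrative.

```python
import numpy as np

def lifting_forward(x):
    """One Haar-style lifting step: split into even/odd samples,
    predict the odd samples from the even ones, then update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even        # predict: detail coefficients
    s = even + d / 2      # update: approximation keeps pairwise means
    return s, d

def lifting_inverse(s, d):
    """Undo the lifting steps in reverse order (exact reconstruction)."""
    even = s - d / 2
    odd = d + even
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 7, 3, 1, 2, 4, 6, 8])
s, d = lifting_forward(x)
assert np.allclose(lifting_inverse(s, d), x)   # lossless round trip
```

Each lifting step is trivially invertible and uses only additions and shifts, which is part of what makes lifting-based DWT attractive for an FPGA implementation.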

INTRODUCTION TO WAVELETS AND WAVELET TRANSFORMS

2.1 Fourier Analysis


Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps
the most well-known of these is Fourier analysis, which breaks down a signal into
constituent sinusoids of different frequencies. Another way to think of Fourier analysis is
as a mathematical technique for transforming our view of the signal from time-based to
frequency-based.

Fig 2.2 Fourier analysis


For many signals, Fourier analysis is extremely useful because the signal’s
frequency content is of great importance.

2.2 Short-Time Fourier analysis

In an effort to correct the Fourier transform’s loss of time information, Dennis
Gabor (1946) adapted the Fourier transform to analyze only a small section of the
signal at a time, a technique called windowing the signal. Gabor’s adaptation,
called the Short-Time Fourier Transform (STFT), maps a signal into a
two-dimensional function of time and frequency.

Fig 2.3 Short-Time Fourier analysis

The STFT represents a sort of compromise between the time- and frequency-
based views of a signal. It provides some information about both when and at what
frequencies a signal event occurs. However, you can only obtain this information with
limited precision, and that precision is determined by the size of the window.
While the STFT compromise between time and frequency information can be
useful, the drawback is that once you choose a particular size for the time window, that
window is the same for all frequencies. Many signals require a more flexible approach—
one where we can vary the window size to determine more accurately either time or
frequency.

2.3 Problem Present in Fourier Transform


The fundamental idea behind wavelets is to analyze according to scale. Indeed,


some researchers feel that using wavelets means adopting a whole new mind-set or

perspective in processing data. Wavelets are functions that satisfy certain mathematical
requirements and are used in representing data or other functions. This idea is not new.
Approximation using superposition of functions has existed since the early 1800s, when
Joseph Fourier discovered that he could superpose sines and cosines to represent other
functions. However, in wavelet analysis, the scale used to look at data plays a special
role. Wavelet algorithms process data at different scales or resolutions. Looking at
a signal (or a function) through a large “window,” we notice gross features;
looking through a small “window,” we notice small features. The result of wavelet
analysis is to see both the forest and the trees, so to speak.

This makes wavelets interesting and useful. For many decades scientists have
wanted more appropriate functions than the sines and cosines, which are the basis of
Fourier analysis, to approximate choppy signals. By their definition, these functions are
non-local (and stretch out to infinity). They therefore do a very poor job in approximating
sharp spikes. But with wavelet analysis, we can use approximating functions that are
contained neatly in finite domains. Wavelets are well-suited for approximating data with
sharp discontinuities.

The wavelet analysis procedure is to adopt a wavelet prototype function, called an


analyzing wavelet or mother wavelet. Temporal analysis is performed with a contracted,
high-frequency version of the prototype wavelet, while frequency analysis is performed
with a dilated, low-frequency version of the same wavelet. Because the original signal or
function can be represented in terms of a wavelet expansion (using coefficients in a linear
combination of the wavelet functions), data operations can be performed using just the
corresponding wavelet coefficients. If wavelets best adapted to the data are
selected and the coefficients below a threshold are truncated, the resulting data
are sparsely represented. This sparse coding makes wavelets an excellent tool in
the field of data compression. Other applied fields that are using wavelets include
astronomy, acoustics, nuclear engineering,
sub-band coding, signal and image processing, neurophysiology, music, magnetic


resonance imaging, speech discrimination, optics, fractals, turbulence, earthquake
prediction, radar, human vision, and pure mathematics applications such as solving
partial differential equations.
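The threshold-and-truncate idea described above can be made concrete with a toy example; the coefficient values and threshold below are arbitrary illustration numbers.

```python
import numpy as np

# Hard-thresholding a vector of wavelet coefficients: everything whose
# magnitude is below the threshold is set to zero, leaving a sparse vector.
coeffs = np.array([9.1, -0.02, 0.4, 7.5, 0.01, -0.3, 0.05, 2.2])
threshold = 0.5

sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

# Only the significant coefficients survive, so the data compress well.
assert np.count_nonzero(sparse) == 3        # 9.1, 7.5 and 2.2 remain
```

Only the few surviving coefficients (plus their positions) need to be stored, which is the essence of wavelet-based sparse coding.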

Basically wavelet transform (WT) is used to analyze non-stationary signals, i.e.,


signals whose frequency response varies in time, as Fourier transform (FT) is not suitable
for such signals. To overcome the limitation of FT, short time Fourier transform (STFT)
was proposed. There is only a minor difference between STFT and FT. In STFT, the
signal is divided into small segments, where these segments (portions) of the signal can
be assumed to be stationary. For this purpose, a window function “w” is chosen. The
width of this window in time must match the length of the segment over which the
signal can still be considered stationary. With the STFT, one obtains the
time-frequency response of a signal simultaneously, which cannot be obtained with
the FT.

The short time Fourier transform for a real continuous signal is defined as:


X(f, τ) = ∫_{−∞}^{+∞} [x(t) w*(t − τ)] e^{−2jπft} dt

where w(t − τ) is the window of finite width centered at τ; by varying τ we shift
the window along the signal, and for each position we obtain the frequency content
of that segment of the signal.

The Heisenberg uncertainty principle explains the problem with STFT. This
principle states that one cannot know the exact time-frequency representation of a signal,
i.e., one cannot know what spectral components exist at what instants of time. What
one can know are the time intervals in which a certain band of frequencies exists;
this is called the resolution problem. The problem has to do with the width of the
window function that is used, known as the support of the window. If the window
function is narrow, it is said to be compactly supported. The narrower we make the
window, the better the
time resolution, and better the assumption of the signal to be stationary, but poorer the
frequency resolution:

Narrow window ===> good time resolution, poor frequency resolution.


Wide window ===> good frequency resolution, poor time resolution.
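The two rules above can be checked with simple arithmetic: the frequency resolution of an N-point windowed transform is fs/N. The sampling rate and window lengths below are arbitrary illustration values.

```python
import numpy as np

# Window-size trade-off in numbers: frequency resolution of an N-point
# windowed transform is fs / N.
fs = 1000.0                  # sampling rate in Hz (assumed)
narrow, wide = 64, 1024      # short vs long analysis windows, in samples

df_narrow = fs / narrow      # 15.625 Hz per bin: poor frequency resolution
df_wide = fs / wide          # ~0.98 Hz per bin: good frequency resolution

# Two tones 10 Hz apart merge inside one bin of the narrow window but fall
# into clearly separate bins of the wide one.
assert df_narrow > 10 > df_wide
```

The narrow window, however, localizes events to within 64 samples in time, while the wide one smears them over 1024 samples: the other side of the same trade-off.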

The wavelet transform (WT) has been developed as an alternative approach to


STFT to overcome the resolution problem. The wavelet analysis is done such that the
signal is multiplied with the wavelet function, similar to the window function in the
STFT, and the transform is computed separately for different segments of the time-
domain signal at different frequencies. This approach is called multiresolution analysis
(MRA), as it analyzes the signal at different frequencies giving different resolutions.

MRA is designed to give good time resolution and poor frequency resolution at
high frequencies and good frequency resolution and poor time resolution at low

frequencies. This approach is good especially when the signal has high frequency
components for short durations and low frequency components for long durations, e.g.,
images and video frames.

So why do we need other techniques, like wavelet analysis?

Fourier analysis has a serious drawback. In transforming to the frequency domain,


time information is lost. When looking at a Fourier transform of a signal, it is impossible
to tell when a particular event took place. If the signal properties do not change much
over time — that is, if it is what is called a stationary signal—this drawback isn’t very
important. However, most interesting signals contain numerous nonstationary or
transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of
events. These characteristics are often the most important part of the signal, and Fourier
analysis is not suited to detecting them.

2.4 Wavelet Analysis

Wavelet analysis [1] represents the next logical step: a windowing technique with
variable-sized regions. Wavelet analysis allows the use of long time intervals where we
want more precise low-frequency information, and shorter regions where we want
high-frequency information.

Fig 2.4 Wavelet Analysis

Here’s what this looks like in contrast with the time-based, frequency-based, and
STFT views of a signal:

Fig 2.5
You may have noticed that wavelet analysis does not use a time-frequency region,
but rather a time-scale region. For more information about the concept of scale and
the link between scale and frequency, see “How to Connect Scale to Frequency?”.
Now that we know some situations when wavelet analysis is useful, it is
worthwhile asking “What is wavelet analysis?” and even more fundamentally,

“What is a wavelet?”
A wavelet is a waveform of effectively limited duration that has an average value
of zero.
Compare wavelets with sine waves, which are the basis of Fourier analysis.
Sinusoids do not have limited duration — they extend from minus to plus infinity. And
where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.

Fig 2.5 sine wave


Fourier analysis consists of breaking up a signal into sine waves of various
frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and
scaled versions of the original (or mother) wavelet. Just looking at pictures of wavelets
and sine waves, you can see intuitively that signals with sharp changes might be better
analyzed with an irregular wavelet than with a smooth sinusoid, just as some foods are
better handled with a fork than a spoon. It also makes sense that local features can be
described better with wavelets that have local extent.

2.5 What Can Wavelet Analysis Do?

One major advantage afforded by wavelets is the ability to perform local analysis,
that is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a
small discontinuity — one so tiny as to be barely visible. Such a signal easily could be
generated in the real world, perhaps by a power fluctuation or a noisy switch.

Fig 2.6 Sinusoid with a small discontinuity

A plot of the Fourier coefficients (as provided by the fft command) of this signal
shows nothing particularly interesting: a flat spectrum with two peaks representing a
single frequency. However, a plot of wavelet coefficients clearly shows the exact location
in time of the discontinuity.

Fig 2.7 Fourier coefficients and wavelet coefficients
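This behavior is easy to reproduce numerically. The sketch below (signal length, frequency, and jump size are arbitrary illustration values) builds a slow sinusoid with a tiny jump; the first-level Haar detail coefficients peak exactly at the pair of samples straddling the jump, while the Fourier spectrum barely changes.

```python
import numpy as np

# Sinusoid with a barely visible discontinuity: the frequency, length and
# jump size are arbitrary illustration values.
n = np.arange(1000)
x = np.sin(2 * np.pi * 0.001 * n)
x[251:] += 0.01                      # tiny jump between samples 250 and 251

# First-level Haar detail coefficients: scaled differences of sample pairs.
detail = (x[1::2] - x[0::2]) / np.sqrt(2)

loc = int(np.argmax(np.abs(detail)))
assert loc == 125                    # pair (x[250], x[251]) straddles the jump
```

The largest detail coefficient sits exactly at the sample pair containing the discontinuity, which is the localization the text describes.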

Wavelet analysis is capable of revealing aspects of data that other signal analysis
techniques miss: trends, breakdown points, discontinuities in higher derivatives,
and self-similarity. Furthermore, because it affords a different view of data
than those presented by traditional techniques, wavelet analysis can often compress or de-
noise a signal without appreciable degradation. Indeed, in their brief history within the
signal processing field, wavelets have already proven themselves to be an indispensable
addition to the analyst’s collection of tools and continue to enjoy a burgeoning popularity
today.

Thus far, we’ve discussed only one-dimensional data, which encompasses most
ordinary signals. However, wavelet analysis can be applied to two-dimensional data
(images) and, in principle, to higher-dimensional data. The Wavelet Toolbox uses
only one- and two-dimensional analysis techniques.

2.6 Wavelet Transform


When we analyze a signal in time for its frequency content, we use wavelet
functions rather than the sines and cosines of Fourier analysis.

2.6.1 The Continuous Wavelet Transform


Mathematically, the process of Fourier analysis is represented by the Fourier
transform:

F(ω) = ∫_{−∞}^{+∞} f(t) e^{−jωt} dt

which is the sum over all time of the signal f(t) multiplied by a complex
exponential. (Recall that a complex exponential can be broken down into real and
imaginary sinusoidal components.) The results of the transform are the Fourier
coefficients F(ω), which when multiplied by a sinusoid of frequency ω yield the
constituent sinusoidal components of the original signal. Graphically, the process
looks like:

Fig 2.8 Continuous Wavelets of different frequencies


Similarly, the continuous wavelet transform (CWT) is defined as the sum over all
time of signal multiplied by scaled, shifted versions of the wavelet function.

C(scale, position) = ∫_{−∞}^{+∞} f(t) ψ(scale, position, t) dt

The result of the CWT is a set of many wavelet coefficients C, which are a
function of scale and position.
Multiplying each coefficient by the appropriately scaled and shifted wavelet
yields the constituent wavelets of the original signal:

Fig 2.9 Continuous Wavelets of different scales and positions
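A direct numerical reading of C(scale, position) is sketched below. The Ricker (Mexican-hat) mother wavelet and the impulse test signal are illustrative choices, not anything mandated by the text.

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet, width parameter a, odd length 'points'."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(x, widths):
    """C(scale, position): correlate the signal with scaled wavelets."""
    out = np.empty((len(widths), len(x)))
    for i, a in enumerate(widths):
        w = ricker(10 * a + 1, a)            # odd-length, centered wavelet
        out[i] = np.convolve(x, w, mode="same")
    return out

x = np.zeros(128)
x[64] = 1.0                                  # impulse test signal
C = cwt(x, widths=[1, 2, 4, 8])

assert C.shape == (4, 128)                   # one row of coefficients per scale
assert int(np.argmax(C[0])) == 64            # small scales localize the impulse
```

Each row of C holds the coefficients for one scale, which is exactly the scale-position plane that wavelet plots display.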


The wavelet transform involves projecting a signal onto a complete set of
translated and dilated versions of a mother wavelet Ψ (t). The strict definition of a mother
wavelet will be dealt with later so that the form of the wavelet transform can be examined
first. For now, assume the loose requirement that Ψ (t) has compact temporal and spectral
support (limited by the uncertainty principle, of course), upon which a set of
basis functions can be defined. The basis set of wavelets generated from the
mother or basic wavelet is defined as

ψ_{a,b}(t) = (1/√a) ψ((t − b)/a) ;   a, b ∈ ℝ, a > 0          (2.2)

The variable ‘a’ (inverse of frequency) reflects the scale (width) of a particular
basis function: large values give low frequencies and small values give high
frequencies. The variable ‘b’ specifies its translation along the time axis. The
term 1/√a is used for normalization. The 1-D wavelet transform is given by:

W_f(a, b) = ∫_{−∞}^{+∞} x(t) ψ_{a,b}(t) dt          (2.3)

The inverse 1-D wavelet transform is given by:

x(t) = (1/C) ∫_{0}^{∞} ∫_{−∞}^{+∞} W_f(a, b) ψ_{a,b}(t) db (da/a²)          (2.4)

where C = ∫_{−∞}^{+∞} (|Ψ(ω)|² / |ω|) dω < ∞          (2.5)

Here Ψ(ω) denotes the Fourier transform of the mother wavelet ψ(t). C is required
to be finite, which leads to one of the required properties of a mother wavelet:
since C must be finite, Ψ(0) = 0 to avoid a singularity in the integral, and thus
ψ(t) must have zero mean. This condition can be stated as

∫_{−∞}^{+∞} ψ(t) dt = 0          (2.6)

and is known as the admissibility condition. The other main requirement is that the
mother wavelet must have finite energy:

∫_{−∞}^{+∞} |ψ(t)|² dt < ∞          (2.7)
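Both conditions (2.6) and (2.7) can be verified numerically for a concrete mother wavelet; the Mexican-hat wavelet below is an illustrative choice, with the integrals approximated by Riemann sums.

```python
import numpy as np

# Numerical sanity check of the zero-mean (admissibility) and finite-energy
# conditions for the Mexican-hat mother wavelet (an illustrative choice).
t = np.linspace(-10.0, 10.0, 100001)
dt = t[1] - t[0]
psi = (1 - t**2) * np.exp(-t**2 / 2)

mean = np.sum(psi) * dt          # approximates the integral of psi(t)
energy = np.sum(psi**2) * dt     # approximates the energy integral

assert abs(mean) < 1e-6          # zero mean: admissibility holds
assert np.isfinite(energy)       # finite energy
```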

A mother wavelet and its scaled versions are depicted in figure 2.10 indicating the
effect of scaling.

Fig 2.10 Mother wavelet and its scaled versions

Unlike the STFT which has a constant resolution at all times and frequencies, the
WT has a good time and poor frequency resolution at high frequencies, and good
frequency and poor time resolution at low frequencies.

2.6.2 The Discrete Wavelet Transform


Calculating wavelet coefficients at every possible scale is a fair amount of work,
and it generates an awful lot of data. What if we choose only a subset of scales and
positions at which to make our calculations? It turns out rather remarkably that if we
choose scales and positions based on powers of two—so-called dyadic scales and
positions—then our analysis will be much more efficient and just as accurate. We obtain
such an analysis from the discrete wavelet transform (DWT)[12].

An efficient way to implement this scheme using filters was developed in 1988 by
Mallat. The Mallat algorithm is in fact a classical scheme known in the signal
processing community as a two-channel subband coder. This very practical filtering
algorithm
yields a fast wavelet transform — a box into which a signal passes, and out of which
wavelet coefficients quickly emerge. Let’s examine this in more depth.

Now consider, discrete wavelet transform (DWT), which transforms a discrete


time signal to a discrete wavelet representation. The first step is to discretize
the wavelet parameters, which reduces the previously continuous basis set of
wavelets to a discrete and orthogonal/orthonormal set of basis wavelets.

ψ_{m,n}(t) = 2^{m/2} ψ(2^m t − n) ;   m, n ∈ ℤ          (2.8)

The 1-D DWT is given as the inner product of the signal x(t) being transformed
with each of the discrete basis functions.

W_{m,n} = ⟨x(t), ψ_{m,n}(t)⟩ ;   m, n ∈ ℤ          (2.9)

The 1-D inverse DWT is given as:

x(t) = Σ_m Σ_n W_{m,n} ψ_{m,n}(t) ;   m, n ∈ ℤ          (2.10)
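A minimal instance of the analysis/synthesis pair (2.9)-(2.10) is the one-level orthonormal Haar DWT. The sketch below (Haar is an assumed choice for illustration) shows perfect reconstruction and energy preservation.

```python
import numpy as np

def haar_dwt(x):
    """One level of an orthonormal Haar DWT (even-length input assumed)."""
    s2 = np.sqrt(2.0)
    cA = (x[0::2] + x[1::2]) / s2     # approximation coefficients
    cD = (x[0::2] - x[1::2]) / s2     # detail coefficients
    return cA, cD

def haar_idwt(cA, cD):
    """Inverse of one Haar DWT level: interleave the reconstructed samples."""
    s2 = np.sqrt(2.0)
    x = np.empty(2 * len(cA))
    x[0::2] = (cA + cD) / s2
    x[1::2] = (cA - cD) / s2
    return x

x = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
cA, cD = haar_dwt(x)

assert np.allclose(haar_idwt(cA, cD), x)                         # perfect reconstruction
assert np.isclose(np.sum(x**2), np.sum(cA**2) + np.sum(cD**2))   # energy preserved
```

Because the basis is orthonormal, the coefficient vector has the same energy as the signal, and the inverse transform recovers the signal exactly.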

2.7 Properties of Wavelet Transforms

2.7.1 Scaling
We’ve seen the interrelation of wavelets and quadrature mirror filters. The
wavelet function ψ is determined by the high-pass filter, which also produces the
details of the wavelet decomposition.

There is an additional function associated with some, but not all, wavelets: the
so-called scaling function. The scaling function is very similar to the wavelet
function and is determined by the low-pass quadrature mirror filter. Iteratively
upsampling and convolving the high-pass filter produces a shape approximating the
wavelet function, while iteratively upsampling and convolving the low-pass filter
produces a shape approximating the scaling function. We’ve already alluded to the
fact that wavelet analysis produces a time-scale view of a signal, and now we’re
talking about scaling and shifting wavelets.
What exactly do we mean by scale in this context?
Scaling a wavelet simply means stretching (or compressing) it. To go beyond
colloquial descriptions such as “stretching,” we introduce the scale factor, often denoted
by the letter a.
If we’re talking about sinusoids, for example, the effect of the scale factor is
very easy to see:

Fig 2.11 Scaling


The scale factor works exactly the same with wavelets. The smaller the scale factor, the
more “compressed” the wavelet.

Fig 2.12 Scaling


It is clear from the diagrams that for a sinusoid sin(ωt) the scale factor ‘a’ is
inversely related to the radian frequency ‘ω’. Similarly, with wavelet analysis the
scale is inversely related to the frequency of the signal.

2.7.2 Shifting
Shifting a wavelet simply means delaying (or hastening) its onset.


Mathematically, delaying a function Ψ(t) by k is represented by Ψ(t-k).

Fig 2.13 Shifting

2.8 Decomposition of Wavelets

2.8.1 One-Stage Decomposition

For many signals, the low-frequency content is the most important part. It is what
gives the signal its identity. The high-frequency content on the other hand imparts flavor
or nuance. Consider the human voice. If you remove the high-frequency components, the
voice sounds different but you can still tell what’s being said. However, if you remove
enough of the low-frequency components, you hear gibberish. In wavelet analysis, we
often speak of approximations and details. The approximations are the high-scale, low-
frequency components of the signal. The details are the low-scale, high-frequency
components. The filtering process at its most basic level looks like this:
The original signal S passes through two complementary filters and emerges as
two signals. Unfortunately, if we actually perform this operation on a real digital signal,
we wind up with twice as much data as we started with. Suppose, for instance, that
the original signal S consists of 1000 samples of data. Then the resulting signals
will each
have 1000 samples, for a total of 2000. These signals A and D are interesting, but we get
2000 values instead of the 1000 we had. There exists a more subtle way to perform the
decomposition using wavelets.

Fig 2.14 One-Stage Decomposition

By looking carefully at the computation, we may keep only one point out of two
in each of the two 2000-length samples to get the complete information. This is the
notion of down sampling. We produce two sequences called cA and cD.
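The filter-then-downsample step can be sketched as follows, using the orthonormal Haar filter pair as an assumed stand-in for the complementary filters: 1000 input samples give roughly 2000 filtered samples, and keeping one point out of two brings the count back down.

```python
import numpy as np

# One-stage decomposition sketch: filter the signal with complementary
# low/high-pass filters, then keep every second sample (downsampling).
h_lo = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass (assumed filters)
h_hi = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass

S = np.random.default_rng(0).standard_normal(1000)
A_full = np.convolve(S, h_lo)              # 1001 filtered samples
D_full = np.convolve(S, h_hi)              # 1001 filtered samples
cA, cD = A_full[1::2], D_full[1::2]        # downsample by 2

assert len(A_full) + len(D_full) == 2002   # about twice the data before downsampling
assert len(cA) + len(cD) == 1000           # same count as the input afterwards
```

With longer filters each convolution output has N + L - 1 samples, so after downsampling each branch holds slightly more than half the input length, which matches the coefficient-length remark made later in this section.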

Fig 2.15 Two-Stage Decomposition


The process on the right, which includes downsampling, produces DWT coefficients.
To gain a better appreciation of this process, let’s perform a one-stage
discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-
frequency noise added to it.
Here is our schematic diagram with real signals inserted into it:
Notice that the detail coefficients cD are small and consist mainly of
high-frequency noise, while the approximation coefficients cA contain much less
noise than does the original signal.
You may observe that the actual lengths of the detail and approximation
coefficient vectors are slightly more than half the length of the original signal.

Fig 2.16
2.8.2 Multi-step Decomposition and Reconstruction

A multi-step analysis-synthesis process can be represented as follows:


Fig 2.17 Decomposition and Reconstruction


This process involves two aspects: breaking up a signal to obtain the wavelet
coefficients, and reassembling the signal from the coefficients. We’ve
already discussed decomposition and reconstruction at some length. Of course, there is
no point breaking up a signal merely to have the satisfaction of immediately
reconstructing it. We may modify the wavelet coefficients before performing the
reconstruction step. We perform wavelet analysis because the coefficients thus obtained
have many known uses, de-noising and compression being foremost among them. But
wavelet analysis is still a new and emerging field. No doubt, many uncharted uses of the
wavelet coefficients lie in wait. The Wavelet Toolbox can be a means of exploring
possible uses and hitherto unknown applications of wavelet analysis. Explore the toolbox
functions and see what you discover.


2.9 Wavelet Reconstruction

We’ve learned how the discrete wavelet transform can be used to analyze or
decompose signals and images. This process is called decomposition or analysis. The
other half of the story is how those components can be assembled back into the original
signal without loss of information. This process is called reconstruction, or synthesis.
The mathematical manipulation that effects synthesis is called the inverse
discrete wavelet transform (IDWT). To synthesize a signal in the Wavelet Toolbox, we
reconstruct it from the wavelet coefficients:

Fig 2.18 Wavelet Reconstruction


Upsampling is the process of lengthening a signal component by inserting zeros
between samples.
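The zero-insertion step can be sketched directly (the component values are arbitrary illustration numbers):

```python
import numpy as np

# Upsampling for reconstruction: lengthen a component by inserting a zero
# between every pair of samples.
c = np.array([3.0, 1.0, 4.0])
up = np.zeros(2 * len(c))
up[0::2] = c                     # even slots hold the original samples

assert list(up) == [3.0, 0.0, 1.0, 0.0, 4.0, 0.0]
```

The reconstruction filters then interpolate these zero-stuffed sequences back into smooth signal components.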

The Wavelet Toolbox includes commands like idwt and waverec that perform
single-level or multilevel reconstruction, respectively, on the components of
one-dimensional signals. These commands have their two-dimensional analogs, idwt2
and waverec2.

2.10 Dissimilarities between Fourier and Wavelet Transforms

The most interesting dissimilarity between these two kinds of transforms is that
individual wavelet functions are localized in space; Fourier sine and cosine
functions are not. This localization feature, along with wavelets’ localization of
frequency, makes many functions and operators sparse when transformed into the
wavelet domain. This sparseness, in turn, results in a number of useful
applications such as data compression, detecting features in images, and removing
noise from time series. One way to see the time-frequency resolution differences
between the Fourier transform and the wavelet transform is to look

at the basis function coverage of the time-frequency plane. Figure 2.19 shows a
windowed Fourier transform, where the window is simply a square wave. The square wave window
truncates the sine or cosine function to a window of a particular width. Because a single
window is used for all frequencies in the WFT, the resolution of the analysis is the same
at all locations in the time-frequency plane.
An advantage of wavelet transforms is that the windows vary. In order to isolate
signal discontinuities, one would like to have some very short basis functions. At the
same time, in order to obtain detailed frequency analysis, one would like to have some
very long basis functions.

Fig 2.19 Fourier basis functions, time-frequency tiles, and coverage of the
time-frequency plane.
A way to achieve this is to have short high-frequency basis functions and long
low-frequency ones. This happy medium is exactly what you get with wavelet
transforms. Figure 2.20 shows the coverage in the time-frequency plane with one
wavelet function, the Daubechies [15] wavelet.


One thing to remember is that wavelet transforms do not have a single set of basis
functions like the Fourier transform, which utilizes just the sine and cosine functions.
Instead, wavelet transforms have an infinite set of possible basis functions. Thus wavelet
analysis provides immediate access to information that can be obscured by other time-
frequency methods such as Fourier analysis.

Fig 2.20 Daubechies wavelet basis functions, time-frequency tiles, and coverage of
the time-frequency plane.

2.11 WAVELET APPLICATIONS


The following applications show just a small sample of what researchers can do
with wavelets.
• Computer and Human Vision
• FBI Fingerprint compression
• De-noising Noisy Data
• Detecting self similar behavior in a time series
• Musical tone generation

CHAPTER 3
DWT Architecture
3.1 Discrete Wavelet Transform
The next step toward developing a DWT is to be able to transform a discrete-time
signal. The wavelet transform can be interpreted as applying a set of filters. Digital
filters are very efficient to implement and thus provide us with the tool needed for
performing the DWT; they are usually applied as equivalent low-pass and high-pass filters. The
design of these filters is similar to subband coding: only the low-pass filter has to be
designed, such that the high-pass filter has an additional phase shift of 180 degrees
compared to the low-pass filter. Unlike subband coding, these filters are designed to give
a flat or smooth spectral response and are bi-orthogonal.

$$\mathrm{DWT}(k, j) = \langle x, \psi_{k,j} \rangle = 2^{-j/2} \sum_{n=-\infty}^{\infty} x(n)\, \psi(2^{-j} n - k)$$


The relation to multiresolution filter trees is derived next. With a closer look at
the equation above, it can be recognized that the signal x(t) is filtered by Ψ. This
filter changes its length according to the scale. When j is increased by 1, the wavelet is
dilated by 2. In discrete time this also means that its frequency is halved, and the obtained
coefficients can be sub-sampled according to the sampling theorem. Starting with j = 0,
the DWT first computes the coefficients with the highest frequency content (or the
highest detail level) and continues with a filter of half the frequency, and so on. A filter tree
can be built according to this principle, and it links Wavelet Theory to
Multi-resolution Analysis (MRA) and Sub-band Coding techniques.

Fig 3.1(a) Filter Tree structure for the fast wavelet transforms

Fig 3.1(b) The position of the samples on the time-scale plane


The Sub-band Coding Scheme splits the input signal into two signals by the use of a
half-band lowpass filter h(t) and its orthogonal highpass filter g(t). The sampling theorem then
allows both signals to be sub-sampled by 2 (drop every other sample). This signal
decomposition can now be iterated in a way that matches the intentions of the DWT. If
the lowpass signal is separated again using the same coding scheme, a band-pass and a
lowpass signal at half the resolution are obtained. This is equivalent to increasing the scaling
parameter j. For the wavelet analysis, the structure given in figure 3.1(a) is obtained.
A reversed structure can be used to synthesize the signal from its wavelet coefficients.
The whole procedure is known as the Mallat Algorithm (also referred to as the Pyramid
Algorithm). The sub-sampling by 2 at every stage leads to the arrangement of the wavelet
coefficients on the time-scale grid as shown in figure 3.1(b).
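The iterated filter tree described above can be sketched in a few lines of Python. This is an illustrative sketch only, not from the source: it assumes orthonormal Haar filters and circular (periodic) extension, and the function names are our own.

```python
# Orthonormal Haar analysis filters (assumed for illustration)
H = [0.7071067811865476, 0.7071067811865476]   # lowpass
G = [0.7071067811865476, -0.7071067811865476]  # highpass

def analyze(x, h, g):
    """One subband split: circular filtering with h and g, keeping every other sample."""
    n = len(x)
    lo = [sum(h[k] * x[(i - k) % n] for k in range(len(h))) for i in range(0, n, 2)]
    hi = [sum(g[k] * x[(i - k) % n] for k in range(len(g))) for i in range(0, n, 2)]
    return lo, hi

def dwt_tree(x, levels):
    """Mallat pyramid: re-split the lowpass branch at every level of the tree."""
    details = []
    approx = list(x)
    for _ in range(levels):
        approx, d = analyze(approx, H, G)
        details.append(d)
    return approx, details

a, ds = dwt_tree([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0], 3)
print(len(a), [len(d) for d in ds])  # 1 [4, 2, 1] -- each stage halves the rate
```

The coefficient counts halve at every stage, exactly as in the dyadic grid of figure 3.1(b), and for an orthonormal pair the total signal energy is preserved across the tree.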

3.1.1 One Dimensional Discrete Wavelet Transform

The discrete wavelet transform (DWT) transforms a discrete-time signal
into a discrete wavelet representation. The first step is to discretize the wavelet parameters,
which reduces the previously continuous basis set of wavelets to a discrete and
orthogonal / orthonormal set of basis wavelets.

$$\psi_{m,n}(t) = 2^{m/2}\, \psi(2^m t - n); \quad m, n \in \mathbb{Z} \qquad (3.1)$$

The 1-D DWT is given as the inner product of the signal x(t) being transformed
with each of the discrete basis functions.

$$w_{m,n} = \langle x(t), \psi_{m,n}(t) \rangle; \quad m, n \in \mathbb{Z} \qquad (3.2)$$


The 1-D inverse DWT is given as:

$$x(t) = \sum_{m} \sum_{n} w_{m,n}\, \psi_{m,n}(t); \quad m, n \in \mathbb{Z} \qquad (3.3)$$
The generic form of the 1-D DWT is depicted in Figure 3.2. Here a discrete signal is
passed through lowpass and highpass filters H and G, then down-sampled by a factor of
2, constituting one level of the transform. The inverse transform is obtained by up-sampling
by a factor of 2 and then using the reconstruction filters H’ and G’, which in most
instances are the filters H and G reversed.

Fig 3.2 Perfect reconstruction filter bank used for the 1-D DWT
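The analysis/synthesis structure of Fig 3.2 can be demonstrated numerically. The Python sketch below is a minimal illustration, not the source's implementation: it assumes the orthonormal Haar pair with circular extension, so the synthesis step reduces to the transpose of the analysis operator (equivalently, upsampling and filtering with the time-reversed filters).

```python
def analyze(x, h, g):
    """Filter circularly with h (lowpass) and g (highpass), keep every other sample."""
    n = len(x)
    lo = [sum(h[k] * x[(2 * i - k) % n] for k in range(len(h))) for i in range(n // 2)]
    hi = [sum(g[k] * x[(2 * i - k) % n] for k in range(len(g))) for i in range(n // 2)]
    return lo, hi

def synthesize(lo, hi, h, g):
    """Reconstruct via the transpose of the analysis operator; for an orthonormal
    bank this equals upsampling by 2 and filtering with the reversed filters."""
    n = 2 * len(lo)
    x = [0.0] * n
    for m in range(n):
        for i in range(len(lo)):
            k = (2 * i - m) % n
            if k < len(h):
                x[m] += lo[i] * h[k] + hi[i] * g[k]
    return x

p = 0.5 ** 0.5                 # 1/sqrt(2)
H, G = [p, p], [p, -p]         # orthonormal Haar analysis pair (illustrative)
x = [1.0, 3.0, 2.0, 8.0, 5.0, 5.0, 7.0, 1.0]
lo, hi = analyze(x, H, G)
xr = synthesize(lo, hi, H, G)
print(max(abs(a - b) for a, b in zip(x, xr)) < 1e-12)  # True -- perfect reconstruction
```

Despite the down-sampling in the analysis stage, the carefully matched synthesis filters cancel the aliasing and the input is recovered to machine precision.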

3.1.2 Two Dimensional Discrete Wavelet Transform

The 1-D DWT can be extended to 2-D transform using separable wavelet filters.
With separable filters, applying a 1-D transform to all the rows of the input and then
repeating on all of the columns can compute the 2-D transform. When one-level 2-D
DWT is applied to an image, four transform coefficient sets are created. As depicted in
Figure 3.3, the four sets are LL, HL, LH, and HH, where the first letter corresponds to


applying either a low pass or high pass filter to the rows, and the second letter refers to
the filter applied to the columns.

Fig 3.3: Level one 2-D DWT applied on an image

Fig 3.4: DWT for Lena image (a) Original Image (b) Output image after the 1-D
applied on column input (c) Output image after the second 1-D applied on row input

The Two-Dimensional DWT (2D-DWT) converts images from the spatial domain to the
frequency domain. At each level of the wavelet decomposition, each column of an image
is first transformed using a 1D vertical analysis filter-bank. The same filter-bank is then
applied horizontally to each row of the filtered and subsampled data. One-level of
wavelet decomposition produces four filtered and subsampled images, referred to as sub
bands. The upper and lower areas of Fig. 3.4(b), respectively, represent the low pass and
high pass coefficients after vertical 1D-DWT and subsampling. The result of the
horizontal 1D-DWT and subsampling to form a 2D-DWT output image is shown in
Fig.3.4(c).
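The separable row/column procedure can be sketched in Python. This is an illustrative sketch under our own assumptions (Haar pair, rows transformed first and then columns; the order may equally be reversed), with hypothetical helper names:

```python
R2 = 2.0 ** 0.5

def haar_split(v):
    """One-level 1-D Haar split of an even-length sequence into low/high halves."""
    lo = [(v[i] + v[i + 1]) / R2 for i in range(0, len(v), 2)]
    hi = [(v[i] - v[i + 1]) / R2 for i in range(0, len(v), 2)]
    return lo, hi

def transpose(m):
    return [list(col) for col in zip(*m)]

def dwt2(img):
    """Separable one-level 2-D DWT: 1-D transform on rows, then on columns."""
    lo_rows, hi_rows = [], []
    for row in img:
        lo, hi = haar_split(row)
        lo_rows.append(lo)
        hi_rows.append(hi)

    def split_columns(mat):
        los, his = [], []
        for col in transpose(mat):
            lo, hi = haar_split(col)
            los.append(lo)
            his.append(hi)
        return transpose(los), transpose(his)

    LL, LH = split_columns(lo_rows)   # lowpass rows -> low/high along columns
    HL, HH = split_columns(hi_rows)   # highpass rows -> low/high along columns
    return LL, HL, LH, HH

img = [[1.0, 1.0, 2.0, 2.0],
       [1.0, 1.0, 2.0, 2.0],
       [3.0, 3.0, 4.0, 4.0],
       [3.0, 3.0, 4.0, 4.0]]
LL, HL, LH, HH = dwt2(img)
print([[round(v, 10) for v in row] for row in LL])  # [[2.0, 4.0], [6.0, 8.0]]
```

For this piecewise-constant test image the LL subband holds scaled block averages while the three detail subbands are zero, which is why the energy concentrates in LL.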


We can use multiple levels of wavelet transforms to concentrate data energy in
the lowest sampled bands. Specifically, the LL subband in fig 3.3 can be transformed
again to form LL2, HL2, LH2, and HH2 subbands, producing a two-level wavelet
transform. An (R-1) level wavelet decomposition is associated with R resolution levels
numbered from 0 to (R-1), with 0 and (R-1) corresponding to the coarsest and finest
resolutions.
The straightforward convolution implementation of the 1D-DWT requires a large
amount of memory and has high computational complexity. An alternative implementation of
the 1D-DWT, known as the lifting scheme, provides a significant reduction in memory
and computational complexity. Lifting also allows in-place computation of the wavelet
coefficients. Nevertheless, the lifting approach computes the same coefficients as the
direct filter-bank convolution.
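The split/predict/update structure of lifting can be illustrated with the Haar wavelet. This sketch is our own illustration, not the source's design: it computes the unnormalized Haar transform (pair averages and differences), which differs from the orthonormal filter-bank coefficients only by constant scale factors.

```python
def haar_lift(x):
    """Haar via lifting: split into even/odd samples, then predict and update.
    Yields the unnormalized Haar transform: s = pair averages, d = differences."""
    s = x[0::2]            # even samples ("lazy" split)
    d = x[1::2]            # odd samples
    for i in range(len(d)):
        d[i] -= s[i]       # predict step: detail = odd - even
        s[i] += d[i] / 2   # update step: approximation = pair average
    return s, d

def haar_unlift(s, d):
    """Invert by running the lifting steps backwards with opposite signs."""
    s, d = list(s), list(d)
    x = []
    for i in range(len(s)):
        s[i] -= d[i] / 2   # undo update
        d[i] += s[i]       # undo predict
        x.extend([s[i], d[i]])
    return x

x = [2.0, 4.0, 6.0, 8.0, 5.0, 1.0, 3.0, 7.0]
s, d = haar_lift(list(x))
print(s)                        # [3.0, 7.0, 3.0, 5.0] -- pairwise averages
print(haar_unlift(s, d) == x)   # True -- lifting is exactly invertible
```

Because each lifting step only adds a function of one subsequence to the other, inversion is trivial (subtract in reverse order), and the coefficients can overwrite the input buffer in place.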

Fig 3.5 Computation of 2D DWT an example


3-Level DWT:

Figure 3.6 gives a model of 3-level decomposition in 1D and 2D.


Recently a new algorithm for constructing integer wavelet transforms, known as the
lifting scheme (LS), has been proposed. Bi-orthogonal wavelet filters built with this scheme
have been found to be well suited to lossy image compression applications.

Fig 3.6(a) Level-3 dyadic DWT scheme


Fig 3.6(b) Level-3 dyadic DWT scheme

3.2 Reconstruction Filters

The filtering part of the reconstruction process also bears some discussion,
because it is the choice of filters that is crucial in achieving perfect reconstruction of the
original signal. The down sampling of the signal components performed during the
decomposition phase introduces a distortion called aliasing. It turns out that by carefully
choosing filters for the decomposition and reconstruction phases that are closely related
(but not identical), we can “cancel out” the effects of aliasing.
The low- and high pass decomposition filters (L and H), together with their
associated reconstruction filters (L' and H'), form a system of what is called quadrature
mirror filters:


Fig 3.7 Reconstruction Filters

3.2.1 Reconstructing Approximations and Details

We have seen that it is possible to reconstruct our original signal from the
coefficients of the approximations and details.

Fig 3.8
It is also possible to reconstruct the approximations and details themselves from
their coefficient vectors. As an example, let’s consider how we would reconstruct the
first-level approximation A1 from the coefficient vector cA1. We pass the coefficient
vector cA1 through the same process we used to reconstruct the original signal. However,
instead of combining it with the level-one detail cD1, we feed in a vector of zeros in
place of the detail coefficients vector:


Fig 3.9

The process yields a reconstructed approximation A1, which has the same length
as the original signal S and which is a real approximation of it. Similarly, we can
reconstruct the first-level detail D1, using the analogous process:

Fig 3.10
The reconstructed details and approximations are true constituents of the original
signal. In fact, we find when we combine them that:
A1 + D1 = S
Note that the coefficient vectors cA1 and cD1—because they were produced by
down sampling and are only half the length of the original signal — cannot directly be
combined to reproduce the signal.
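The zero-feeding procedure just described can be checked numerically. The sketch below is illustrative only, assuming the orthonormal Haar pair (for which synthesis is a simple upsample-and-filter of each coefficient stream); the function names are our own.

```python
R2 = 2.0 ** 0.5   # sqrt(2)

def haar_analyze(s):
    """One-level Haar analysis: approximation and detail coefficient vectors."""
    cA = [(s[i] + s[i + 1]) / R2 for i in range(0, len(s), 2)]
    cD = [(s[i] - s[i + 1]) / R2 for i in range(0, len(s), 2)]
    return cA, cD

def haar_synthesize(cA, cD):
    """Upsample both coefficient streams by 2 and apply the synthesis filters."""
    out = []
    for a, d in zip(cA, cD):
        out.append((a + d) / R2)
        out.append((a - d) / R2)
    return out

S = [3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 5.0, 9.0]
cA1, cD1 = haar_analyze(S)
A1 = haar_synthesize(cA1, [0.0] * len(cD1))   # feed zeros for the details
D1 = haar_synthesize([0.0] * len(cA1), cD1)   # feed zeros for the approximations
# A1 and D1 have the full length of S, and A1 + D1 reproduces S
print([round(a + d, 10) for a, d in zip(A1, D1)])  # [3.0, 1.0, 0.0, 4.0, 8.0, 6.0, 5.0, 9.0]
```

Note that A1 and D1 are full-length signals while cA1 and cD1 are half-length coefficient vectors, which is exactly why the coefficients cannot be added directly.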


It is necessary to reconstruct the approximations and details before combining
them. Extending this technique to the components of a multilevel analysis, we find that
similar relationships hold for all the reconstructed signal constituents.
That is, there are several ways to reassemble the original signal:

Fig 3.11

3.2.2 Relationship of Filters to Wavelet Shapes


In the section “Reconstruction Filters”, we spoke of the importance of choosing
the right filters. In fact, the choice of filters not only determines whether perfect
reconstruction is possible, it also determines the shape of the wavelet we use to perform
the analysis. To construct a wavelet of some practical utility, you seldom start by drawing
a waveform. Instead, it usually makes more sense to design the appropriate quadrature
mirror filters, and then use them to create the waveform. Let’s see how this is done by
focusing on an example.
Consider the low pass reconstruction filter (L') for the db2 wavelet.



Figure 3.12(a)
The filter coefficients can be obtained from the dbaux command:
Lprime = dbaux(2)
Lprime = 0.3415 0.5915 0.1585 -0.0915
If we reverse the order of this vector (see wrev), and then multiply every even sample by
-1, we obtain the high pass filter H':
Hprime = -0.0915 -0.1585 0.5915 -0.3415
Next, up-sample Hprime by two (see dyadup), inserting zeros in alternate positions:
HU = -0.0915 0 -0.1585 0 0.5915 0 -0.3415 0
Finally, convolve the up-sampled vector with the original low pass filter:
H2 = conv(HU, Lprime);
plot(H2)

Figure 3.12(b)

If we iterate this process several more times, repeatedly up-sampling and
convolving the resultant vector with the four-element filter vector Lprime, a pattern
begins to emerge:


Figure 3.12(c)
The curve begins to look progressively more like the db2 wavelet. This means that the
wavelet’s shape is determined entirely by the coefficients of the reconstruction filters.
This relationship has profound implications. It means that you cannot choose just any
shape, call it a wavelet, and perform an analysis. At least, you can’t choose an arbitrary
wavelet waveform if you want to be able to reconstruct the original signal accurately.
You are compelled to choose a shape determined by quadrature mirror decomposition
filters.
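The cascade iteration above is not tied to MATLAB; the Python sketch below mirrors the wrev/dyadup/conv steps. The four coefficients are the rounded values printed in the text; the helper names are our own, and this is only an illustration of the iteration, not a toolbox-accurate db2 computation.

```python
# db2 lowpass reconstruction coefficients, as printed (rounded) in the text
Lprime = [0.3415, 0.5915, 0.1585, -0.0915]

def conv(a, b):
    """Full linear convolution (like MATLAB's conv)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def upsample2(a):
    """Insert a zero after every sample (like MATLAB's dyadup)."""
    out = []
    for v in a:
        out.extend([v, 0.0])
    return out

# QMF relation: reverse the lowpass filter and alternate the signs
Hprime = [c * (-1) ** i for i, c in enumerate(reversed(Lprime))]
print(Hprime)  # [-0.0915, -0.1585, 0.5915, -0.3415]

# Iterate: upsample, then convolve with the lowpass filter again
curve = Hprime
for _ in range(4):
    curve = conv(upsample2(curve), Lprime)
print(len(curve))  # 109 samples; the curve approaches the db2 wavelet shape
```

Each pass roughly doubles the sample count while refining the curve, which is why the plotted iterates converge to the wavelet waveform determined by the filter coefficients.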

CHAPTER 4
Introduction to Tools

4.1 Introduction to MATLAB


MATLAB is a software package for computation in engineering, science, and
applied mathematics. It offers a powerful programming language, excellent graphics, and
a wide range of expert knowledge. MATLAB is published by and a trademark of The
Math Works, Inc. The focus in MATLAB is on computation, not mathematics: Symbolic
expressions and manipulations are not possible (except through the optional Symbolic
Toolbox, a clever interface to Maple). All results are not only numerical but inexact,
thanks to the rounding errors inherent in computer arithmetic. The limitation to numerical
computation can be seen as a drawback, but it is a source of strength too: MATLAB is
much preferred to Maple, Mathematica, and the like when it comes to numerics.

On the other hand, compared to other numerically oriented languages like C++
and FORTRAN, MATLAB is much easier to use and comes with a huge standard library.
The unfavorable comparison here is a gap in execution speed. This gap is not always as
dramatic as popular lore has it, and it can often be narrowed or closed with good
MATLAB programming. Moreover, one can link other codes into MATLAB, or vice
versa, and MATLAB now optionally supports parallel computing. Still, MATLAB is
usually not the tool of choice for maximum-performance computing.

The MATLAB niche is numerical computation on workstations for non-experts in
computation. This is a huge niche; one way to tell is to look at the number of MATLAB-related
books on mathworks.com. Even for supercomputer users, MATLAB can be a
valuable environment in which to explore and fine-tune algorithms before more laborious
coding in another language.


Most successful computing languages and environments acquire a distinctive
character or culture. In MATLAB, that culture contains several elements: an
experimental and graphical bias, resulting from the interactive environment and
compression of the write-compile-link-execute analyze cycle; an emphasis on syntax that
is compact and friendly to the interactive mode, rather than tightly constrained and
verbose; a kitchen-sink mentality for providing functionality; and a high degree of
openness and transparency (though not to the extent of being open source software).

Creating a GUI in MATLAB

One of the major comments that we hear about Matlab is "I don't know how I can use
it in my classroom. It is too hard for my students to use." This may be a correct
comment - we don't necessarily want the students to construct a model from scratch in
Matlab, but would want them to use a previously prepared model. We can do this using
input and disp statements in an M-file which is run from the Matlab window. A much
better way is to use a Graphical User Interface (GUI) where the student just "fills in the
blanks" and all of the Matlab commands are completely hidden from the student.

Creating GUIs in Matlab Version 7 is significantly easier than in previous versions of
the program. In particular, the guide program allows one to easily place controls onto a
model and set their properties. In the following exercise, we will construct a GUI which
includes a RESET button.


The Example Files and Code:

First, download the GUI skeleton here. Unzip the files and place them wherever you
please.

1. Now, type guide at the command prompt.


2. Choose to open the sample GUI by clicking on “Open Existing GUI”. Click on
“Browse” to locate where you saved the GUI files.


3. Here is what the GUI should look like when you open it:


4. Next, we must allow the GUI to run multiple instances. Go to the Tools tab, and
then to GUI Options. Disable the following option as shown:


5. Click on the icon on the GUI figure to bring up the accompanying .m file.
6. Find the reset Callback and add the following code:

closeGUI = handles.figure1; %handles.figure1 is the GUI figure
guiPosition = get(handles.figure1,'Position'); %get the position of the GUI
guiName = get(handles.figure1,'Name'); %get the name of the GUI
eval(guiName) %call the GUI again
close(closeGUI); %close the old GUI
set(gcf,'Position',guiPosition); %set the position for the new GUI

7. Now run the GUI and test it out! You should see a brief flicker when you reset
the GUI, as a new GUI is being opened while the old one is closed.


4.2 Introduction to Visual Basic 6.0

What is Visual Basic?


· Visual Basic is a tool that allows you to develop Windows (Graphic User Interface -
GUI) applications. The applications have a familiar appearance to the user.
· Visual Basic is event-driven, meaning code remains idle until called upon to
respond to some event (button pressing, menu selection , ...). Visual Basic is governed by
an event processor. Nothing happens until an event is detected. Once an event is detected,
the code corresponding to that event (event procedure) is executed. Program control is
then returned to the event processor.
· Some Features of Visual Basic:
- Full set of objects - you 'draw' the application
- Lots of icons and pictures for your use
- Response to mouse and keyboard actions
- Clipboard and printer access
- Full array of mathematical, string handling, and graphics functions


- Can handle fixed and dynamic variable and control arrays
- Sequential and random access file support
- Useful debugger and error-handling facilities
- Powerful database access tools
- ActiveX support
- Package & Deployment Wizard makes distributing your applications easy

Visual Basic 6.0 versus Other Versions of Visual Basic

· The original Visual Basic for DOS and Visual Basic For Windows were introduced in
1991.
· Visual Basic 3.0 (a vast improvement over previous versions) was released in 1993.
· Visual Basic 4.0 released in late 1995 (added 32 bit application support).
· Visual Basic 5.0 released in late 1996. New environment, supported creation of ActiveX
controls, deleted 16 bit application support.
· And now Visual Basic 6.0 - some identified new features of Visual Basic 6.0:
- Faster compiler
- New ActiveX data control object
- Allows database integration with wide variety of applications
- New data report designer
- New Package & Deployment Wizard
- Additional internet capabilities
- The Form Window is central to developing Visual Basic applications. It is where you
draw your application.


- The Toolbox is the selection menu for controls used in your application.
- The Properties Window is used to establish initial property values for objects. The
drop-down box at the top of the window lists all objects in the current form. Two views
are available: Alphabetic and Categorized. Under this box are the available properties
for the currently selected object.
- The Form Layout Window shows where (upon program execution) your form will be
displayed relative to your monitor's screen.
- The Project Window displays a list of all forms and modules making up your
application. You can also obtain a view of the Form or Code windows (window
containing the actual Basic coding) from the Project window.

As mentioned, the user interface is ‘drawn’ in the form window. There are two
ways to place controls on a form:
1. Double-click the tool in the toolbox and it is created with a default size on the
form. You can then move it or resize it.
2. Click the tool in the toolbox, then move the
mouse pointer to the form window. The cursor changes to a crosshair. Place the crosshair
at the upper left corner of where you want the control to be, press the left mouse button


and hold it down while dragging the cursor toward the lower right corner. When you
release the mouse button, the control is drawn.

· To move a control you have drawn, click the object in the form window and
drag it to the new location. Release the mouse button.
· To resize a control, click the object so that it is selected and sizing handles appear.
Use these handles to resize the object.

4.3 Introduction to FPGA Kit


Figure 4.1 FPGA KIT

Spartan-3E FPGA Features and Embedded Processing Functions


The Spartan-3E Starter Kit board highlights the unique features of the Spartan-3E
FPGA family and provides a convenient development board for embedded processing
applications. The board highlights these features:
 Spartan-3E specific features
 Parallel NOR Flash configuration
 MultiBoot FPGA configuration from Parallel NOR Flash PROM
 SPI serial Flash configuration
 Embedded development
 MicroBlaze™ 32-bit embedded RISC processor
 PicoBlaze™ 8-bit embedded controller
 DDR memory interfaces

Key Components and Features:


The key features of the Spartan-3E Starter Kit board are:

 Xilinx XC3S500E Spartan-3E FPGA


 Up to 232 user-I/O pins
 320-pin FBGA package
 Over 10,000 logic cells
 Xilinx 4 Mbit Platform Flash configuration PROM

 Xilinx 64-macrocell XC2C64A CoolRunner CPLD


 64 MByte (512 Mbit) of DDR SDRAM, x16 data interface, 100+ MHz
 16 MByte (128 Mbit) of parallel NOR Flash (Intel StrataFlash)
 FPGA configuration storage
 MicroBlaze code storage/shadowing
 16 Mbits of SPI serial Flash (STMicro)
 FPGA configuration storage
 MicroBlaze code shadowing
 2-line, 16-character LCD screen
 PS/2 mouse or keyboard port
 VGA display port
 10/100 Ethernet PHY (requires Ethernet MAC in FPGA)
 Two 9-pin RS-232 ports (DTE- and DCE-style)
 On-board USB-based FPGA/CPLD download/debug interface
 50 MHz clock oscillator
 SHA-1 1-wire serial EEPROM for bit stream copy protection
 Hirose FX2 expansion connector
 Three Digilent 6-pin expansion connectors
 Four-output, SPI-based Digital-to-Analog Converter (DAC)
 Two-input, SPI-based Analog-to-Digital Converter (ADC) with programmable-
gain Pre-amplifier
 ChipScope™ SoftTouch debugging port
 Rotary-encoder with push-button shaft
 Eight discrete LEDs
 Four slide switches
 Four Push Button Switches

 SMA clock input


 8 pin DIP socket for auxiliary clock oscillator

4.4 Introduction to EDK Tool


EDK 8.1 MicroBlaze Tutorial in Spartan-3E
Objectives:
This tutorial will demonstrate the process of creating and testing a MicroBlaze system design
using the Embedded Development Kit (EDK). EDK combines the software and hardware
tools needed for embedded design. EDK itself contains two parts:
1. Xilinx Platform Studio (XPS)
2. Software Development Kit (SDK)

4.4.1 Defining the Hardware Design (XPS)


The tutorial contains these sections:
• System Requirements
• MicroBlaze System Description
• Tutorial Steps
The following steps are described in this tutorial:
• Starting XPS
• Using the Base System Builder Wizard
• Create – Import IP Peripheral
• Design Modification using Platform Studio
• Implementing the Design
• Defining the Software Design
• Downloading the Design
• Debugging the Design
• Performing Behavioral Simulation of the Embedded System


System Requirements:
You must have the following software installed on your PC to complete this tutorial:
• Windows 2000 SP2/Windows XP
Note: This tutorial can be completed on Linux or Solaris, but the screenshots and
directories illustrated in this tutorial are based on the Windows Platform.
• EDK 8.1i or later
• ISE 8.1i sp1 or later
In order to download the completed processor system, you must have the following
hardware:
• Xilinx Spartan-3 Evaluation Board (3S200 FT256 -4)
• Xilinx Parallel Cable IV, used to program and debug the device
• Serial Cable

MicroBlaze System Description:


In general, to design an embedded processor system, you need the following:
• Hardware components
• Memory map
• Software application
Tutorial Design Hardware
The MicroBlaze (MB) tutorial design includes the following hardware components:
• MicroBlaze
• Local Memory Bus (LMB) Bus
• LMB_BRAM_IF_CNTLR
• BRAM_BLOCK
• On-chip Peripheral Bus (OPB) BUS
• OPB_MDM

• OPB_UARTLITE
• 3 - OPB_GPIOs
• OPB_EMC

Setup:
Spartan-3 board with an RS-232 terminal connected to the serial port and
configured for 57600 baud, 8 data bits, no parity, and no handshaking.

Creating the Project File in XPS


The first step in this tutorial is using the Xilinx Platform Studio (XPS) to create a
project file. XPS allows you to control the hardware and software development of the
MicroBlaze system, and includes the following:
• An editor and a project management interface for creating and editing source code
• Software tool flow configuration options

You can use XPS to create the following files:


• Project Navigator project file that allows you to control the hardware implementation
flow
• Microprocessor Hardware Specification (MHS) file
Note: For more information on the MHS file, refer to the “Microprocessor Hardware
Specification (MHS)” chapter in the Platform Specification Format Reference Manual.
• Microprocessor Software Specification (MSS) file
Note: For more information on the MSS file, refer to the “Microprocessor Software
Specification (MSS)” chapter in the Platform Specification Format Reference Manual...


XPS supports the software tool flows associated with these software specifications.
Additionally, you can use XPS to customize software libraries, drivers, and interrupt
handlers, and to compile your programs.
Starting XPS
• To open XPS, select Start → Programs → Xilinx Platform Studio 8.1i → Xilinx
Platform Studio
• Select Base System Builder Wizard (BSB) to open the Create New Project Using BSB
Wizard dialog box shown
in Figure 1.
• Click Ok.
• Use the Project File Browse button to browse to the folder you want as your project
directory.
• Click Open to create the system.xmp file then Save.
• Click Ok to start the BSB wizard.
Note: XPS does not support directory or project names which include spaces.

MHS and MPD Files


The next step in the tutorial is defining the embedded system hardware with the
Microprocessor Hardware Specification (MHS) and Microprocessor Peripheral
Description (MPD) files.

MHS File
The Microprocessor Hardware Specification (MHS) file describes the following:
• Embedded processor: either the soft core MicroBlaze processor or the hard core
PowerPC (only available in Virtex-II Pro and Virtex-4 FX devices)
• Peripherals and associated address spaces
• Buses
• Overall connectivity of the system

The MHS file is a readable text file that is an input to the Platform Generator (the
hardware system building tool).
Conceptually, the MHS file is a textual schematic of the embedded system. To instantiate
a component in the MHS file, you must include information specific to the component.

MPD File
Each system peripheral has a corresponding MPD file. The MPD file is the symbol of the
embedded system peripheral to the MHS schematic of the embedded system. The MPD
file contains all of the available ports and hardware parameters for a peripheral. The
tutorial MPD file is located in the following directory:
$XILINX_EDK/hw/XilinxProcessorIPLib/pcores/<peripheral name>/data

Note: For more information on the MPD and MHS files, refer to the
“Microprocessor Peripheral Description (MPD)” and “Microprocessor Hardware
Specification (MHS)” chapters in the Embedded System Tools Guide.
EDK provides two methods for creating the MHS file. The Base System Builder
Wizard and the Add/Edit Cores dialog assist you in building the processor system, which
is defined in the MHS file. This tutorial illustrates the Base System Builder.

Using the Base System Builder Wizard


Use the following steps to create the processor system:
• In the Base System Builder – Select “I would like to create a new design” then click
Next.
• In the Base System Builder - Select Board Dialog select the following, as shown in
Figure 2:
• Board Vendor: Xilinx

• Board Name: Spartan-3E


• Board Version: C, D.
To debug the design, follow these steps:
In XPS select Debug -> XMD Debug Options. The XMD Debug Options dialog box
allows the user to specify the connection type and JTAG chain definition. The following
connection types are available for MicroBlaze:
 Simulator – enables XMD to connect to the MicroBlaze ISS
 Hardware – enables XMD to connect to the MDM peripheral in the hardware
 Stub – enables XMD to connect to the JTAG UART or UART via XMDSTUB
 Virtual platform – enables a virtual (c model) to be used
Verify that Hardware is selected
Select Save
Select Debug -> Launch XMD

CHAPTER 5
Implementation

5.1 Fundamentals of Digital Image

A digital image is defined as a two-dimensional function f(x, y), where x and y are
spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is
called intensity or grey level of the image at that point. The field of digital image
processing refers to processing digital images by means of a digital computer. The digital
image is composed of a finite number of elements, each of which has a particular location
and value. The elements are referred to as picture elements, image elements, pels, and
pixels. Pixel is the term most widely used.

5.1.1 Image Compression

Digital Image compression addresses the problem of reducing the amount of data
required to represent a digital image. The underlying basis of the reduction process is
removal of redundant data. From the mathematical viewpoint, this amounts to
transforming a 2D pixel array into a statistically uncorrelated data set. The data
redundancy is not an abstract concept but a mathematically quantifiable entity. If n1
and n2 denote the number of information-carrying units in two data sets that represent
the same information, the relative data redundancy R_D [2] of the first data set (the one
characterized by n1) can be defined as

R_D = 1 − 1/C_R

where C_R is called the compression ratio [2]. It is defined as

C_R = n1/n2
In image compression, three basic data redundancies can be identified and
exploited: coding redundancy, interpixel redundancy, and psychovisual redundancy.
Image compression is achieved when one or more of these redundancies are reduced or
eliminated.
Image compression is mainly used for image transmission and storage. Image
transmission applications are in broadcast television; remote sensing via satellite, aircraft,
radar, or sonar; teleconferencing; computer communications; and facsimile transmission.
Image storage is required most commonly for educational and business documents,
medical images that arise in computer tomography (CT), magnetic resonance imaging
(MRI) and digital radiology, motion pictures, satellite images, weather maps, geological
surveys, and so on.


Fig 5.1 Block Diagram


5.1.2 Image Compression Types
There are two types of image compression techniques:
1. Lossy Image compression
2. Lossless Image compression

5.1.2.1 Lossy Image compression


Lossy compression provides higher levels of data reduction but results in a less
than perfect reproduction of the original image. It provides a high compression ratio.
Lossy image compression is useful in applications such as broadcast television,
videoconferencing, and facsimile transmission, in which a certain amount of error is an
acceptable trade-off for increased compression performance.

5.1.2.2 Lossless Image compression


Lossless image compression is the only acceptable choice in applications where
any loss of information is intolerable. It provides a low compression ratio compared with
lossy compression. Lossless image compression techniques are composed of two relatively
independent operations: (1) devising an alternative representation of the image in which
its interpixel redundancies are reduced and (2) coding the representation to eliminate
coding redundancies. Lossless image compression is useful in applications such as
medical imagery, business documents, and satellite images.

5.1.3 Image Compression Standards

There are many methods available for lossy and lossless image compression. The
efficiency of these coding methods has been standardized by several organizations. The
International Standardization Organization (ISO) and the Consultative Committee of the
International Telephone and Telegraph (CCITT) have defined image compression
standards for both binary and continuous-tone (monochrome and colour) images. Some
of the image compression standards are
1. JBIG1
2. JBIG2
3. JPEG-LS
4. DCT based JPEG
5. Wavelet based JPEG2000
Currently, JPEG2000 [4] [5] is widely used because the JPEG2000 standard
supports lossy and lossless compression of single-component (e.g., grayscale) and
multicomponent (e.g., color) imagery. In addition to this basic compression functionality,
numerous other features are provided, including: 1) progressive recovery of an
image by fidelity or resolution; 2) region-of-interest coding, whereby different parts of an
image can be coded with differing fidelity; 3) random access to particular regions of an
image without the need to decode the entire code stream; 4) a flexible file format with
provisions for specifying opacity information and image sequences; and 5) good error


resilience. Due to its excellent coding performance and many attractive features, JPEG
2000 has a very large potential application base.
Some possible application areas include: image archiving, Internet, web browsing,
document imaging, digital photography, medical imaging, remote sensing, and desktop
publishing.
JPEG2000 has two main advantages over other standards. First, it addresses a
number of weaknesses in the existing JPEG standard. Second, it provides a number
of new features not available in the JPEG standard.

5.2 Lifting Scheme


The wavelet transform of the image is implemented using the lifting scheme [3]. The
lifting operation consists of three steps. First, the input signal x[n] is downsampled into
the even position signal xe(n) and the odd position signal xo(n); these values are then
modified using alternating prediction and update steps.

xe(n) = x[2n] and xo(n) = x[2n + 1]

A prediction step consists of predicting each odd sample as a linear combination


of the even samples and subtracting it from the odd sample to form the prediction error.
An update step consists of updating the even samples by adding them to a linear
combination of the prediction error to form the updated sequence. The prediction and
update may be evaluated in several steps until the forward transform is completed. The
block diagram of forward lifting and inverse lifting is shown in figure 5.2
Fig 5.2 The Lifting Scheme. (a) Forward Transform (b) Inverse Transform

The inverse transform is similar to the forward transform. It is based on three
operations: undo update, undo prediction, and merge. A simple lifting technique using
the Haar wavelet is explained in the next section.

5.2.1 Lifting Using Haar


The lifting scheme is a useful way of looking at the discrete wavelet transform. It is
easy to understand, since it performs all operations in the time domain, rather than in the
frequency domain, and has other advantages as well. This section illustrates the lifting
approach using the Haar Transform [6].
The Haar transform is based on the calculation of averages (approximation
coefficients) and differences (detail coefficients). Given two adjacent pixels a and b, the
principle is to calculate the average s = (a + b)/2 and the difference d = b − a. If a and b
are similar, s will be similar to both and d will be small, i.e., require few bits to represent.
This transform is reversible, since a = s − d/2 and b = s + d/2, and it can be written using
matrix notation as

(s, d) = (a, b)A,   (a, b) = (s, d)A^(-1)

where

A = | 1/2  −1 |        A^(-1) = |  1     1  |
    | 1/2   1 |                 | −1/2  1/2 |


Consider a row of 2^n pixel values S_{n,l} for 0 ≤ l < 2^n. There are 2^(n−1) pairs of
pixels S_{n,2l}, S_{n,2l+1} for l = 0, 1, ..., 2^(n−1) − 1. Each pair is transformed into an
average S_{n−1,l} = (S_{n,2l} + S_{n,2l+1})/2 and a difference d_{n−1,l} = S_{n,2l+1} − S_{n,2l}.
The result is a set S_{n−1} of 2^(n−1) averages and a set d_{n−1} of 2^(n−1) differences.

Wavelet decompositions are widely used in signal and image processing


applications. Classical linear wavelet transforms perform homogeneous smoothing of
signal contents. In a number of cases, in particular in applications in image and video
processing, such homogeneous smoothing is undesirable. This has led to a growing
interest in nonlinear wavelet representations, called adaptive wavelet decompositions,
that can preserve discontinuities such as transitions in signals and edges in images.
Adaptive wavelet decomposition is very useful in various applications, such as
image analysis, compression, feature extraction, and denoising. Adaptive
multiresolution representations take into account the characteristics of the underlying
signal and leave intact important signal characteristics, such as sharp transitions,
edges, singularities, and other regions of interest.
In the adaptive update lifting framework, the update lifting step, which yields a
low pass filter or a moving average process, is performed first on the input polyphase
components (the output from the splitting process, according to the lifting terminology),
followed by a fixed prediction step yielding the wavelet coefficients.

5.2.2 General Adaptive Update Lifting


We consider a (K + 1)-band filter bank decomposition with inputs x, y(1), y(2),
y(3), ..., y(K), with K ≥ 1, which represent the polyphase components of the analyzed
signal. The first polyphase component, x, is updated using the neighboring signal
elements from the other polyphase components, thus yielding an approximation signal.
Subsequently, the signal elements in the polyphase components y (1), y (2)…y (K) are
predicted using the neighboring signal elements from the approximated polyphase


component and the other polyphase components. The prediction steps, which are non-
adaptive, result in detail coefficients. The adaptive update step is illustrated in Figure 5.3.

Fig 5.3 Adaptive update lifting scheme

Here, x and y(1), y(2), ..., y(K) are the inputs to a decision map D, whose output
at location n is the binary decision

dn = D(x, y(1), ..., y(K))(n) ∈ {0, 1}

which triggers the update filter Ud and the addition ⊕d. More precisely, if dn is the
binary decision at location n, then the updated value x1(n) is given by

x1(n) = x(n) ⊕dn Udn(y)(n) ------------ (5.1)

We assume that the addition ⊕d is of the form x ⊕d u = αd(x + u) with αd ≠ 0, so
that the operation is invertible. The update filter is taken to be of the form

Ud(y)(n) = Σ_{j=−L1}^{L2} λ_{d,j} y_j(n) ------------ (5.2)


where y_j(n) = y(n + j) and L1 and L2 are nonnegative integers. The filter
coefficients λ_{d,j} depend on the decision d at location n. Henceforth, we will use Σ_j to
denote the summation from −L1 to L2.
From (5.1) and (5.2), we infer the update equation used at analysis:

x1(n) = αdn x(n) + Σ_j βdn,j y_j(n) ---------- (5.3)

where βd,j = αd λd,j. Clearly, we can easily invert (5.3) through

x(n) = (1/αdn) (x1(n) − Σ_j βdn,j y_j(n)) ------------ (5.4)
provided that the decision dn is known at every location n. Thus, in order to have perfect
reconstruction, it must be possible to recover the decision dn = D(x, y)(n) from x1 (rather
than x, which is not available at synthesis) and y. This amounts to the problem of finding
another decision map D1 such that

D(x, y_j)(n) = D1(x1, y_j)(n) ------------ (5.5)

where x1 is given by (5.1). It can be shown that a necessary, but by no means sufficient,
condition for perfect reconstruction is that αdn + Σ_j βdn,j = 1.

5.2.3 Threshold Technique


The input images x, y1, y2, and y3 are obtained by a polyphase decomposition of an
original image x0, given by
x (m, n) = x0(2m, 2n)
y1 (m, n) = x0(2m, 2n+1)
y2 (m, n) = x0(2m+1, 2n)

y3 (m, n) = x0(2m+1, 2n+1)


where x(m, n) represents the current pixel value, and y1(m, n), y2(m, n), and y3(m, n)
are the horizontal, vertical, and diagonal pixel values, respectively. These are obtained
by context formation, as shown in figure 5.3. The gradient vector v(n), with
components {v1(n), v2(n), v3(n)}^T (where T represents transposition), is given by
v1 (n) = x ( n) − y1 (n) ;
v 2 ( n) = x ( n ) − y 2 ( n ) ;
v3 ( n) = x (n) − y3 (n) ;

Then the L2 norm of v(n) is given by

d(s) = v1^2 + v2^2 + v3^2

Here, the decision map makes a binary decision, i.e., either 1 or 0:

If d(s) > T, then dn = 1; else dn = 0.

Here d(s) is called a seminorm and T denotes the given threshold, which can take an
arbitrary (user-defined) value. Depending on the condition, the decision map chooses
between different update filters, followed by the fixed prediction step. The filter
equations for the two decision regions are given by

Decision Region I
If d(s) > T, then xx = 0.4*y + 0.2*yh + 0.2*yv + 0.2*yd
Decision Region II
If d(s) ≤ T, then xx = 0.5*y + 0.2*yh + 0.15*yv + 0.15*yd

Moreover, we are exclusively interested in the case where the decision map D1 at
synthesis is of the same form as D, but possibly with a different threshold. Thus we need
that
d(s) > T ⇔ d1(s) > T1


Here d1(s) is the norm of the gradient vector v1(n) at synthesis, where v1(n) is given by
v1_1(n) = x1(n) − y1_1(n)
v1_2(n) = x1(n) − y1_2(n)
v1_3(n) = x1(n) − y1_3(n)
where y1_1(n), y1_2(n), and y1_3(n) correspond to the horizontal, vertical, and diagonal
detail bands, respectively.
The L2 norm of v1(n) is given by

d1(s) = (v1_1)^2 + (v1_2)^2 + (v1_3)^2

Then the decision map is given by

If d1(s) > T, then d1n = 1; else d1n = 0.

and the filters for the two decision regions are given by

Decision Region I
If d1(s) > T, then x = (1/0.4) * (ry − (0.2*xh + 0.2*xv + 0.2*xd))
Decision Region II
If d1(s) ≤ T, then x = (1/0.5) * (ry − (0.2*xh + 0.15*xv + 0.15*xd))

Here, different threshold values are chosen arbitrarily and the image is reconstructed.
The reconstructed image is then compared with the original image and the Peak Signal
to Noise Ratio is found.


5.3 SPIHT Algorithm

Embedded zero tree wavelet (EZW) coding, introduced by J. M. Shapiro, is a very


effective and computationally simple technique for image compression. Here we offer an
alternative explanation of the principles of its operation, so that the reasons for its
excellent performance can be better understood. These principles are partial ordering by
magnitude with a set partitioning sorting algorithm, ordered bit plane transmission, and
exploitation of self-similarity across different scales of an image wavelet transform.
Moreover, we present a new and different implementation based on set partitioning in
hierarchical trees (SPIHT), which provides even better performance than our previously
reported extension of EZW that surpassed the performance of the original EZW. The
image coding results, calculated from actual file sizes and images reconstructed by the
decoding algorithm, are either comparable to or surpass previous results obtained through
much more sophisticated and computationally complex methods. In addition, the new
coding and decoding procedures are extremely fast, and they can be made even faster,
with only small loss in performance.

5.3.1 Progressive Image Transmission


After converting the image pixels into wavelet coefficients, SPIHT [10] is applied.
We assume the original image is defined by a set of pixel values p(i, j), where (i, j) are
the pixel coordinates. The wavelet transform is applied to this array, giving

c(i, j) = DWT{p(i, j)} --------------- (5.6)

where c(i, j) are the wavelet coefficients.

In SPIHT, the decoder initially sets the reconstruction vector ĉ to zero and
updates its components according to the coded message. After receiving the value
(approximate or exact) of some coefficients, the decoder can obtain a reconstructed
image by taking the inverse wavelet transform,

p̂(i, j) = IDWT{ĉ(i, j)} --------------- (5.7)


This is called “progressive transmission”.


A major objective in a progressive transmission scheme is to select the most
important information (that which yields the largest distortion reduction) to be
transmitted first. For this selection, we use the mean squared-error (MSE) distortion
measure

D_MSE(p − p̂) = (1/N) ||p − p̂||^2 = (1/N) Σ_i Σ_j (p(i,j) − p̂(i,j))^2 ---------- (5.8)

where N is the number of image pixels, p(i, j) is the original pixel value, and p̂(i, j) is
the reconstructed pixel value. Furthermore, we use the fact that the Euclidean norm is
invariant to the unitary transformation:

D_MSE(p − p̂) = D_MSE(c − ĉ) = (1/N) Σ_i Σ_j (c(i,j) − ĉ(i,j))^2 ----------- (5.9)

From the above equation, it is clear that if the exact value of the transform
coefficient c(i, j) is sent to the decoder, then the MSE decreases by c(i, j)^2/N. This
means that the coefficients with larger magnitude should be transmitted first because
they have a larger content of information. This is the progressive transmission method.
Extending this approach, we can see that the information in the value of c(i, j) can also be ranked
according to its binary representation, and the most significant bits should be transmitted
first. This idea is used, for example, in the bit-plane method for progressive transmission.
Following, we present a progressive transmission scheme that incorporates these two
concepts: ordering the coefficients by magnitude and transmitting the most significant
bits first. To simplify the exposition, we first assume that the ordering information is
explicitly transmitted to the decoder. Later, we show a much more efficient method to
code the ordering information.

5.3.2 Set Partitioning Sorting Algorithm

One of the main features of the proposed coding method is that the ordering data
is not explicitly transmitted. Instead, it is based on the fact that the execution path of any


algorithm is defined by the results of the comparisons on its branching points. So, if the
encoder and decoder have the same sorting algorithm, then the decoder can duplicate the
encoder’s execution path if it receives the results of the magnitude comparisons, and the
ordering information can be recovered from the execution path.

One important fact used in the design of the sorting algorithm is that we do not
need to sort all coefficients. Actually, we need an algorithm that simply selects the
coefficients such that 2^n ≤ |c(i, j)| < 2^(n+1), with n decremented in each pass. Given n,
if |c(i, j)| ≥ 2^n, then we say that the coefficient is significant; otherwise, it is called
insignificant. The sorting algorithm divides the set of pixels into partitioning subsets
Tm and performs the magnitude test

max_{(i,j)∈Tm} |c(i, j)| ≥ 2^n ------------- (5.10)

If the decoder receives a “no” (the subset is insignificant), then it knows that all
coefficients in Tm are insignificant. If the answer is “yes” (the subset is significant),
then a certain rule shared by the encoder and the decoder is used to partition Tm into
new subsets Tm,l, and the significance test is then applied to the new subsets.

This set division process continues until the magnitude test is done to all single
coordinate significant subsets in order to identify each significant coefficient.
To reduce the number of magnitude comparisons (message bits) we define a set
partitioning rule that uses an expected ordering in the hierarchy defined by the subband
pyramid. The objective is to create new partitions such that subsets expected to be
insignificant contain a large number of elements, and subsets expected to be significant
contain only one element.
To make clear the relationship between magnitude comparisons and message bits,
we use the function


S_n(T) = 1 if max_{(i,j)∈T} |c(i, j)| ≥ 2^n; 0 otherwise ---------------- (5.11)

to indicate the significance of a set of coordinates T. To simplify the notation for
single-pixel sets, we write S_n({(i, j)}) as S_n(i, j).

5.3.3 Spatial Orientation Trees

A tree structure, called spatial orientation tree, naturally defines the spatial
relationship on the hierarchical pyramid. Fig.5.4 shows how our spatial orientation tree is
defined in a pyramid constructed with recursive four-subband splitting. Each node of the
tree corresponds to a pixel and is identified by the pixel coordinate. Its direct descendants
(offspring) correspond to the pixels of the same spatial orientation in the next finer level
of the pyramid. The tree is defined in such a way that each node has either no offspring
(the leaves) or four offspring, which always form a group of 2 x 2 adjacent pixels.
In Fig 5.4, the arrows are oriented from the parent node to its four offspring. The
pixels in the highest level of the pyramid are the tree roots and are also grouped in 2 x 2
adjacent pixels. However, their offspring branching rule is different, and in each group,
one of them (indicated by the star in Fig.5.4) has no descendants.


Fig 5.4 Examples of parent-offspring dependencies in the spatial orientation tree

The following sets of coordinates are used to present the new coding method:
O(i, j): set of coordinates of all offspring of node (i, j);
D(i, j): set of coordinates of all descendants of node (i, j);
H: set of coordinates of all spatial orientation tree roots (nodes in the highest
pyramid level);
L(i, j) = D(i, j) − O(i, j).
For instance, except at the highest and lowest pyramid levels, we have
O(i, j) = {(2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1)}
We use parts of the spatial orientation trees as the partitioning subsets in the sorting
algorithm. The set partitioning rules are simply the following:
1) The initial partition is formed with the sets {(i, j)} and D(i, j), for all (i, j) ∈ H.
2) If D(i, j) is significant, then it is partitioned into L(i, j) plus the four single-element
sets with (k, l) ∈ O(i, j).
3) If L(i, j) is significant, then it is partitioned into the four sets D(k, l), with (k, l) ∈ O(i, j).


Future Scope

Future work aims at extending this framework to color images, video
compression, and denoising applications.


Conclusion
The architecture has been implemented in C. The model has been verified
with a set of test data. The derived architecture has many advantages, especially the
reduction in the number of operations per sample, leading to reduced chip area and
power consumption. It can be said that the original design criteria have been considered
and an effective and feasible architecture has been designed.


The major disadvantage of this design is that it cannot perform an online transform,
i.e., the output is not generated at every instance of data input. This work can be further
extended by using a multiprocessor architecture to speed up the data throughput. The
maximum achievable clock speed is around 131.6 MHz.

BIBLIOGRAPHY

1. Ajith Boopardikar, Wavelet Theory and Application, TMH.

2. I. Daubechies and W. Sweldens, “Factoring wavelet transforms into lifting steps,”
J. Fourier Anal. Appl., vol. 4, pp. 247–269, 1998.


3. W. Sweldens, “The lifting scheme: A new philosophy in biorthogonal wavelet
constructions,” in Proc. SPIE, vol. 2569, 1995.
4. JPEG2000 Committee Drafts [Online].
Available: http://www.jpeg.org/CDs15444.htm
5. JPEG2000 Verification Model 8.5 (Technical Description), Sept. 13, 2000.
6. K. Andra, C. Chakrabarti, and T. Acharya, “A VLSI architecture for lifting based
wavelet transform,” in Proc. IEEE Workshop Signal Process. Syst., Oct. 2000,
pp. 70–79.
7. S. Hatami, S. Sharifi, H. Ahmadi, and M. Kamarei, “Real-time image compression
based on wavelet vector quantization: Algorithm and VLSI architecture,” Dept. of
Electr. & Comput. Eng., Univ. of Tehran, Iran, IEEE Trans., May 2005.
8. Fourier Analysis: http://www.sunlightd.com/Fourier/
9. K. Andra, C. Chakrabarti, and T. Acharya, “A VLSI architecture for lifting-based
forward and inverse wavelet transform,” IEEE Trans. Signal Processing, vol. 50,
no. 4, April 2002.
10. K. K. Parhi and T. Nishitani, “VLSI architectures for discrete wavelet
transforms,” IEEE Trans. VLSI Syst., vol. 1, pp. 191–202, June 1993.
11. M. Ferretti and D. Rizzo, “A parallel architecture for the 2-D discrete wavelet
transform with integer lifting scheme,” J. VLSI Signal Processing, vol. 28, pp.
165–185, July 2001.
12. Discrete Wavelet Transform: http://en.wikipedia.org/wiki/Discrete_wavelet_transform
13. A. Rushton, VHDL for Logic Synthesis, Wiley, 1998.
14. C. H. Roth, Digital Systems Design Using VHDL, PWS Pub. Co., 1998.
15. I. Daubechies, “Orthonormal bases of compactly supported wavelets,” Comm.
Pure Appl. Math., vol. 41, no. 7, pp. 909–996, 1988.


16. C.-C. Cheng, C.-T. Huang, C.-Y. Chen, C.-J. Lian, and L.-G. Chen, “On-chip
memory optimization scheme for VLSI implementation of line-based two-
dimensional discrete wavelet transform,” IEEE Trans. Circuits Syst. Video
Technol., vol. 17, no. 7, pp. 814–822, July 2007.


APPENDIX
Source Code
#include <stdio.h>
#include <math.h>
#define ROW 64
#define COL 32
#define R_1 64
#define C_1 64
#define True 1
#define False 0
#define true1 1
#define false1 0
#define Len 4096
int Input[64][64];
int REDB[R_1][C_1];
int st[40000];
int Len_Array=10000;
int Ad[64][64];
int Adr[64][64];
int desc[4][3];
int xDim=64;
int yDim=64;
int level=1;
int rowS=32;
int colS=32;


int rowL=64;
int colL=64;
int L;
int max;
int currSet[1][4];
int currset;
int D=0;
int T=256;
int Level=2;
int rows=64;
int columns=64;
int Even[64][32];
int Odd[64][32];
int Low[64][32];
int High[64][32];
int LEven[32][32];
int wavedecode[64][64];
int waveencodeImage[64][64];
int HOdd[32][32];
int LOdd[32][32];
int HEven[32][32];
int LL[32][32];
int LH[32][32];
int HL[32][32];
int HH[32][32];
int RLL[32][32];
int RHL[32][32];
int RHH[32][32];

int RLH[32][32];
int totalbitCount;
int RL[ROW][COL];
int RH[ROW][COL];
int R[ROW][COL];
int H[ROW][COL];
int Output[ROW][ROW];
FILE *R1=(FILE *)0x22000000; /* memory-mapped DDR address, not a real stream */
FILE *R2;
int aa;
int cc;
struct descArr{
int desc1[4];
int desc2[4];
int desc3[4];
};
struct descArr descArrS;
void integerdwt();
void reversedwt();

/////////////////////////////////////////////////////////////////////

void integerdwt()
{//begin of integer dwt
int i,j,k,a;
int columns1;
int rows1;
rows1=rows/2;

columns1=columns/2;

//////// one dimensional Even component////////


for (j=0;j<rows;j++)
{
a=1;
for (k=0;k<columns1;k++)
{
Even[j][k]=Input[j][a];
a=a+2;
}
}

//////// one dimensional Odd component////////


for (j=0;j<rows;j++)
{
a=0;
for (k=0;k<columns1;k++)
{
Odd[j][k]=Input[j][a];
a=a+2;
}
}

//////////// comput L AND H pass component


for (j=0;j<rows;j++)
{
for (k=0;k<columns1;k++)

{
High[j][k]=Odd[j][k]-Even[j][k];
aa=Odd[j][k]-Even[j][k];
aa=aa/2;
cc=ceil(aa);
Low[j][k]=(Even[j][k]+cc);
}
}
///////////////////one dimensional L even
for (j=0;j<columns1;j++)
{
a=1;
for (k=0;k<columns1;k++)
{
LEven[k][j]=Low[a][j];
HEven[k][j]=High[a][j];
a=a+2;
}
}
////////////////////////////////////////////////////////////////
//////// one dimensional L Odd component////////
for (j=0;j<columns1;j++)
{
a=0;
for (k=0;k<columns1;k++)
{
LOdd[k][j]=Low[a][j];
HOdd[k][j]=High[a][j];

a=a+2;
}
}

/////////////////////////////////////////////////////////

for (j=0;j<columns1;j++)
{
for (k=0;k<columns1;k++)
{

LH[j][k]=LOdd[j][k]-LEven[j][k];
aa=LOdd[j][k]-LEven[j][k];
aa=aa/2;
cc=ceil(aa);
LL[j][k]=(LEven[j][k]+cc);
HH[j][k]=HOdd[j][k]-HEven[j][k];
aa=HOdd[j][k]-HEven[j][k];
aa=aa/2;
cc=ceil(aa);
HL[j][k]=(HEven[j][k]+cc);
}
}

}// end of integer dwt

void reversedwt()

{//begin of reverse dwt

int i,j,k,a;
int columns1=32;
int Lenr =32;
int Lenc =32;
int rlen2r=64;
int valc;

/////init the RL RH arrays (4x2)

for(i=0;i<ROW;i++){
for(j=0;j<COL;j++){
RL[i][j]=0;
RH[i][j]=0;
}
}

for (j=0;j<columns1;j++)
{
for (k=0;k<columns1;k++)
{

aa=LH[j][k];
aa=aa/2;
cc=ceil(aa);


RLL[j][k]=(LL[j][k]-cc);

RLH[j][k]=RLL[j][k]+LH[j][k];

aa=HH[j][k];
aa=aa/2;
cc=ceil(aa);

RHL[j][k]=(HL[j][k]-cc);

RHH[j][k]=HH[j][k]+RHL[j][k];

}
}

k=0;
for(i=1;i<ROW;i=i+2){
for(j=0;j<COL;j++){
RL[i][j]=RLL[k][j];
RH[i][j]=RHL[k][j];

}
k++;
}

k=0;
for(i=0;i<ROW;i=i+2){

for(j=0;j<COL;j++){
RL[i][j]=RLH[k][j];
RH[i][j]=RHH[k][j];

}
k++;
}

for(i=0;i<ROW;i=i+1)
{
for(j=0;j<COL;j++)
{
aa=RH[i][j]/2;
cc=ceil(aa);
R[i][j]=RL[i][j]-cc;
H[i][j]=R[i][j]+RH[i][j];
}
}
for(i=0;i<ROW;i++){
k=0;
for(j=1;j<ROW;j=j+2){
Output[i][j]=R[i][k];
k++;

}
}

for(i=0;i<ROW;i++){
k=0;
for(j=0;j<ROW;j=j+2){
Output[i][j]=H[i][k];
k++;

}// end of reverse dwt

int main()
{
char word[15];
char c;
int v,m,t;
char ch;
int manti,tempI,remind;
char charVa;
int i,j,ind;
int index2=0;
int temp1,temp2,temp3;

int min,index,k;
int val;
char *cp=(char *)0x22000000; // base address of the image text data
int matVal=0;
int intValC;

for( i=0;i<R_1;i++)
{
for( j=0;j<C_1;j++)
{
while(1){
ch=*cp;

if(ch>='0'&&ch<='9'){
// accumulate the next decimal digit: '0'..'9' are contiguous in ASCII
matVal=(matVal*10)+(ch-'0');
}
else{
//printf(" %d ",matVal);
REDB[i][j]=matVal;
matVal=0;
cp++;
ch=*cp;
intValC=ch;
// skip newline and control characters between values
while(ch=='\n'||intValC==010||intValC==020||intValC==003||intValC==032){
cp++;
ch=*cp;
intValC=ch;
}
break; //break while
}

intValC=*cp;
if(ch=='\n'||intValC==010||intValC==020||intValC==003||intValC==032){
//printf("LB");
}
cp++;
}//while loop end
}// for j
}//for i

for( i=0;i<R_1;i++)
{
for( j=0;j<C_1;j++)
{
//printf("%d \n",REDB[i][j]);
}
}//end of outer for


// printf("value\n");

// printf("matrix Input: \n");


for( i=0;i<R_1;i++)
{
for( j=0;j<C_1;j++)
{

Input[i][j]=REDB[i][j];
printf("%d\n",Input[i][j]);

}
}

integerdwt();
// printf("matrix all: \n");

for(i=0;i<rowL;i++){
for(j=0;j<colL;j++){

if(i<rowS){
if(j<colS){
waveencodeImage[i][j]=LL[i][j];
}
else{
waveencodeImage[i][j]=LH[i][j-colS];
}
}
else{
if(j<colS){
waveencodeImage[i][j]=HL[i-rowS][j];
}
else{
waveencodeImage[i][j]=HH[i-rowS][j-colS];
}

}
// printf(" %d\n",waveencodeImage[i][j]);

}
//printf("\n");
}

for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
//printf("%d\n",Output[i][j]);
Ad[i][j]=waveencodeImage[i][j];
printf(" %d\n",Ad[i][j]);
}
}

for( i=0;i<64;i++)
{
for( j=0;j<64;j++)
{
//printf("%d\n",Output[i][j]);
// Adr[i][j]=waveencodeImage[i][j];
printf(" %d\n",Adr[i][j]);
}
}

for(i=0;i<rowL;i++){

for(j=0;j<colL;j++){

if(i<rowS){
if(j<colS){
RLL[i][j]=Adr[i][j];
}
else{
RLH[i][j-colS]=Adr[i][j];
}
}
else{
if(j<colS){
RHL[i-rowS][j]=Adr[i][j];
}
else{
RHH[i-rowS][j-colS]=Adr[i][j];
}

}
//printf(" %d ",all[i][j]);
}
//printf("\n");
}

reversedwt();

//printf("matrix Output: \n");


for( i=0;i<64;i++)

{
for( j=0;j<64;j++)
{
printf("%d\n",Output[i][j]);
//wavedecode[i][j]=Output[i][j];
}

}
// printf("exit\n");

return 0;

}//end of main

Simulation & Synthesis Results


1 Simulation Results
1.1 Matlab Output


The figure above shows the Matlab GUI interface. The input image is loaded with the
“image” button and converted into a text file with the “convert_text” button.

1.2 Xilinx Results


Input file:

The figure above shows the text file created with the Matlab GUI for the given input image.


This figure shows the text files of the even, odd, LL, LH, HL, and HH components.


Comparing the input text file with the retrieved text file (after decompression).


Verifying the “input text” file in the Matlab GUI interface.


Verifying the “retrieval” text file in the Matlab GUI interface.


Synthesis Results

Input Image:


Compressed Image:


Reconstructed Image:


3 Synthesis Report
Overview

Generated on       Fri Apr 03 12:03:07 2010
Source             D:/New_Folder/lifting/system.xmp
EDK Version        8.1.02
FPGA Family        spartan3e
Device             xc3s500efg320-4
# IP Instantiated  17
# Processors       1
# Busses           3


Timing Information
Post Synthesis Clock Limits

These are the post-synthesis clock frequencies; the critical frequencies are marked in green.
The values reported here are post-synthesis estimates calculated for each individual module,
and they will change after place and route is performed on the entire system.

MODULE            CLK Port             MAX FREQ
microblaze_0      FSL3_S_CLK           65.595 MHz
microblaze_0      DBG_CLK              65.595 MHz
DDR_SDRAM_16Mx16  Device_Clk           83.914 MHz
DDR_SDRAM_16Mx16  OPB_Clk              83.914 MHz
DDR_SDRAM_16Mx16  DDR_Clk90_in         83.914 MHz
DDR_SDRAM_16Mx16  Device_Clk90_in      83.914 MHz
DDR_SDRAM_16Mx16  Device_Clk90_in_n    83.914 MHz
DDR_SDRAM_16Mx16  Device_Clk_n         83.914 MHz
DDR_SDRAM_16Mx16  DDR_Clk90_in_n       83.914 MHz
opb_intc_0        OPB_Clk              117.233 MHz
RS232_DCE         OPB_Clk              138.083 MHz
debug_module      debug_module/drck_i  146.951 MHz
debug_module      OPB_Clk              146.951 MHz
debug_module      bscan_update         146.951 MHz
mb_opb            OPB_Clk              181.719 MHz
ilmb              LMB_Clk              249.128 MHz
dlmb              LMB_Clk              249.128 MHz

