
Image compression using 3D-SPIHT

2007*2008

ABSTRACT

We propose a highly scalable image compression scheme based on the set partitioning in hierarchical trees (SPIHT) algorithm. Our algorithm, called highly scalable SPIHT (HS-SPIHT), supports spatial and SNR scalability and provides a bit stream that can be easily adapted (reordered) to given bandwidth and resolution requirements by a simple transcoder (parser). The HS-SPIHT algorithm adds the spatial scalability feature without sacrificing the SNR embeddedness property found in the original SPIHT bit stream. HS-SPIHT finds applications in progressive Web browsing, flexible image storage and retrieval, and image transmission over heterogeneous networks.

Vivekananda Institute Of Technology



TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES

CHAPTER 1  INTRODUCTION
  1.1 MOTIVATION
  1.2 PRINCIPLE OF COMPRESSION
  1.3 COMPRESSION ALGORITHMS
  1.4 IMAGE COMPRESSION PROCESS
  1.5 ADAPTIVE COMPRESSION
  1.6 ORGANIZATION OF THE THESIS
CHAPTER 2  LITERATURE SURVEY
CHAPTER 3  OVERVIEW OF WAVELET CODER
  3.1 WAVELET IMAGE COMPRESSION
  3.2 COMPRESSION STEPS
  3.3 TYPICAL IMAGE CODER
  3.4 WAVELET TRANSFORM OVERVIEW
    3.4.1 WAVELET TRANSFORM
    3.4.2 SCALING
    3.4.3 SHIFTING
    3.4.4 SCALE AND FREQUENCY
  3.5 DISCRETE WAVELET TRANSFORM
    3.5.1 ONE-STAGE FILTERING
    3.5.2 MULTIPLE-LEVEL DECOMPOSITION
    3.5.3 WAVELET RECONSTRUCTION
    3.5.4 RECONSTRUCTING APPROXIMATIONS AND DETAILS
  3.6 1-D WAVELET TRANSFORM
  3.7 2-D TRANSFORM HIERARCHY
  3.8 AWIC FILTER CHOICE
  3.9 WAVELET COMPUTATION
  3.10 QUANTIZATION
  3.11 ENTROPY ENCODING
CHAPTER 4  PROPOSED SYSTEM
CHAPTER 5  ADAPTIVE MULTIWAVELET IMAGE COMPRESSION
  5.1 WAVELET IMAGE COMPRESSION PARAMETERS
    5.1.1 VARYING WAVELET TRANSFORM LEVEL
    5.1.2 VARYING QUANTIZATION LEVEL
    5.1.3 VARYING ELIMINATION LEVEL
  5.2 ADAPTIVE IMAGE COMMUNICATION
CHAPTER 6  SIMULATION RESULTS
CHAPTER 7  CONCLUSION
APPENDICES
  1 ANALYSIS OF TRANSFORM CONSTANTS
  2 PERFORMANCE METRICS
    2.1 DEGREE OF COMPRESSION
    2.2 IMAGE QUALITY
REFERENCES


CHAPTER 1
INTRODUCTION
1.1 MOTIVATION

One of the major challenges in enabling mobile multimedia data services will be the need to process and wirelessly transmit a very large volume of data. While significant improvements in achievable bandwidth are expected with future wireless access technologies, improvements in battery technology will lag the rapidly growing energy requirements of future wireless data services. One approach to mitigating this problem is to reduce the volume of multimedia data transmitted over the wireless channel via data compression techniques. This has motivated active research on multimedia data compression techniques such as JPEG [1, 2], JPEG 2000 [3, 4] and MPEG [5]. These approaches concentrate on achieving a higher compression ratio without sacrificing the quality of the image. However, these efforts ignore the energy consumed during compression and RF transmission.


Today many hospitals handle their medical image data with computers. The use of computers and a network makes it possible to distribute the image data efficiently among the staff. As health care is computerized, new techniques and applications are developed, among them the MR and CT techniques. MR and CT produce sequences of images (image stacks), each a cross-section of an object. The amount of data produced by these techniques is vast, and this can be a problem when sending the data over a network. To overcome this, the image data can be compressed. For two-dimensional data there exist many compression techniques such as JPEG, GIF and the new wavelet-based JPEG 2000 standard [7]. All of the schemes above are designed for two-dimensional data (images), and while they are excellent for images, they may not be as well suited for the compression of three-dimensional data such as image stacks.

1.2 THESIS GOAL

The purpose of this thesis is to look at coding schemes based on wavelets for medical volumetric data. The thesis discusses theoretical issues as well as suggests a practically feasible implementation of a coding scheme. A short comparison between two- and three-dimensional coding is also included. Another goal is to implement highly scalable image compression based on SPIHT.

1.6 ORGANIZATION OF THE THESIS

CHAPTER 2: Overview of the compression schemes
CHAPTER 3: Design specifications of the wavelet coder
CHAPTER 4: Proposed system of SPIHT
CHAPTER 5: Simulation results
CHAPTER 6: Conclusion
REFERENCES

CHAPTER 2
OVERVIEW OF THE IMAGE COMPRESSION TECHNIQUES

2.1 PRINCIPLE OF COMPRESSION
Image compression addresses the problem of reducing the amount of data required to represent a digital image. The underlying basis of the reduction process is the removal of redundant data. From a mathematical viewpoint, this amounts to transforming a 2-D pixel array into a statistically uncorrelated data set. The transformation is applied prior to storage and transmission of the image. The compressed image is decompressed at some later time to reconstruct the original image or an approximation of it. The different types of data redundancy are:


Interpixel redundancy: Neighboring pixels have similar values. This property is exploited in the wavelet transform stage.

Psychovisual redundancy: The human visual system cannot simultaneously distinguish all colors. This property is exploited in the lossy quantization stage.

Coding redundancy: Fewer bits are used to represent frequent symbols. This property is exploited in the entropy coding stage.

2.2 COMPRESSION ALGORITHMS

There are various algorithms for image transformation and coding:
- Discrete cosine transform (DCT) / JPEG
- Sub-band coding
- Embedded zerotree wavelet (EZW) coding
- SPIHT

2.3 IMAGE COMPRESSION PROCESS

Fig.1 illustrates the main block diagram of the image compression process.


The image first goes through a transform, which generates a set of frequency coefficients. The transformed coefficients are then quantized to reduce the volume of data. The output of this step is a stream of integers, each of which corresponds to the index of a particular quantization bin. Encoding is the final step, where the stream of quantized data is converted into a sequence of binary symbols in which shorter binary codewords are used to encode integers that occur with relatively high probability. This reduces the number of bits transmitted.

CHAPTER 3

OVERVIEW OF WAVELET CODER

3.1 WAVELET IMAGE COMPRESSION

The foremost goal is to attain the best compression performance possible for a wide range of image classes while minimizing the computational and implementation complexity of the algorithm. For a compression algorithm to be widely useful, it must perform well on a wide variety of image content while maintaining a practical compression/decompression time on modest computers. In order to allow a broad range of implementations, an algorithm must be amenable to both software and hardware implementation.

3.2 COMPRESSION STEPS

The steps needed to compress an image are as follows:
1. Digitize the source image into a signal, which is a string of numbers.
2. Decompose the signal into a sequence of wavelet coefficients.
3. Use quantization to convert the coefficients to a sequence of binary symbols.
4. Apply entropy coding to compress the symbols into binary strings.

The first step in the wavelet compression process is to digitize the image. The digitized image can be characterized by its intensity levels, or scales of gray that range from 0 (black) to 255 (white), and its resolution, or how many pixels per square inch it contains. The wavelets then process the signal, but up to this point compression has not yet occurred. The next step is quantization, which converts a sequence of floating-point numbers to a sequence of integers. The simplest form is to round to the nearest integer. Another option is to multiply each number by a constant and then round to the nearest integer. Quantization is called lossy because it introduces error into the process, since the conversion is not a one-to-one function. The last step is encoding, which is responsible for the actual compression. One method to compress the data is Huffman entropy coding. With this method, an integer sequence is changed into a shorter sequence, with the numbers being 8-bit integers. The conversion is made by an entropy coding table.
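As an illustration of the multiply-and-round form of quantization described above, the following minimal Python sketch quantizes coefficients and approximately reverses the mapping; the step size and the helper names are illustrative assumptions, not values used by the thesis.

```python
import numpy as np

def quantize(coeffs, step=8.0):
    # Scale by 1/step and round to the nearest integer (lossy, many-to-one).
    return np.round(np.asarray(coeffs, dtype=float) / step).astype(int)

def dequantize(indices, step=8.0):
    # Map each integer index back to a reproduction value.
    return indices * step

coeffs = np.array([103.7, -12.2, 0.4, 55.1])
q = quantize(coeffs)           # e.g. [13, -2, 0, 7]
rec = dequantize(q)            # e.g. [104., -16., 0., 56.]
print(q, rec, np.abs(coeffs - rec).max())   # the error never exceeds step/2
```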


3.3 TYPICAL IMAGE CODER

Fig. 3.1 (a) Wavelet coder: Source image data -> MWT -> Quantizer -> Entropy encoder -> Compressed image data
(b) Wavelet decoder: Compressed image data -> Entropy decoder -> Inverse quantizer -> Inverse MWT -> Reconstructed image data

A typical image compression system consists of three closely connected components, namely (a) source encoder, (b) quantizer, and (c) entropy encoder, as shown in Fig. 3.1. Compression is accomplished by applying a linear transform to decorrelate the image data, quantizing the resulting transform coefficients, and entropy coding the quantized values. The source coder decorrelates the pixels. A variety of linear transforms have been developed, including the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and many more, each with its own advantages and disadvantages.


A quantizer simply reduces the number of bits needed to store the transformed coefficients by reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process. Quantization can be performed on each individual coefficient, which is known as Scalar Quantization (SQ). Quantization can also be performed on a group of coefficients together, which is known as Vector Quantization (VQ). An entropy encoder further compresses the quantized values losslessly to give better overall compression. It uses a model to accurately determine the probabilities of each quantized value and produces an appropriate code based on these probabilities, so that the resultant output code stream is smaller than the input stream. The most commonly used entropy encoders are the Huffman encoder and the arithmetic encoder, although for applications requiring fast execution, simple run-length encoding (RLE) has proven very effective. It is important to note that a properly designed quantizer and entropy encoder are absolutely necessary, along with an optimum signal transform, to get the best possible compression.

3.4 WAVELET TRANSFORM OVERVIEW 3.4.1 Wavelet Transform


Wavelets are mathematical functions, defined over a finite interval and having an average value of zero, that transform data into different frequency components and represent each component with a resolution matched to its scale. The basic idea of the wavelet transform is to represent any arbitrary function as a superposition of a set of such wavelets or basis functions. These basis functions or baby wavelets are obtained from a single prototype wavelet called the mother wavelet, by dilations or contractions (scaling) and translations (shifts). They have advantages over traditional Fourier methods in analyzing physical situations where the signal contains discontinuities and sharp spikes. Many new wavelet applications such as image compression, turbulence, human vision, radar, and earthquake prediction have been developed in recent years. In the wavelet transform the basis functions are wavelets. Wavelets tend to be irregular and asymmetric. All wavelet functions, w(2^k t - m), are derived from a single mother wavelet, w(t). This wavelet is a small wave or pulse like the one shown in Fig. 3.2.

Fig. 3.2 Mother wavelet w(t)

Normally it starts at time t = 0 and ends at t = T. The shifted wavelet w(t - m) starts at t = m and ends at t = m + T. The scaled wavelets w(2^k t) start at t = 0 and end at t = T/2^k. Their graphs are w(t) compressed by a factor of 2^k, as shown in Fig. 3.3. For example, when k = 1, the wavelet is shown in Fig. 3.3 (a); for k = 2 and 3, it is shown in (b) and (c), respectively.


Fig. 3.3 Scaled wavelets: (a) w(2t), (b) w(4t), (c) w(8t)

The wavelets are called orthogonal when their inner products are zero. The smaller the scaling factor is, the wider the wavelet is. Wide wavelets are comparable to low-frequency sinusoids and narrow wavelets are comparable to high-frequency sinusoids.
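To make the scaling and shifting relations concrete, here is a small Python sketch using the Haar function as the mother wavelet; the choice of Haar and the function names are illustrative assumptions. It checks numerically that w(2^k t - m) is supported on [m/2^k, (m + 1)/2^k) when T = 1.

```python
import numpy as np

def mother(t):
    # Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere (so T = 1).
    t = np.asarray(t, dtype=float)
    return np.where((t >= 0) & (t < 0.5), 1.0,
           np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def scaled_shifted(t, k, m):
    # w(2^k t - m): compressed by 2^k and shifted to start at t = m / 2^k.
    return mother((2.0 ** k) * np.asarray(t) - m)

t = np.linspace(-1, 4, 5001)
support = t[scaled_shifted(t, k=2, m=3) != 0]
print(support.min(), support.max())   # approximately 0.75 and just under 1.0
```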

3.4.2 Scaling

Wavelet analysis produces a time-scale view of a signal. Scaling a wavelet simply means stretching (or compressing) it. The scale factor is used to express the compression of wavelets and often denoted by the letter a. The smaller the scale factor, the more compressed the wavelet. The scale is inversely related to the frequency of the signal in wavelet analysis.

3.4.3 Shifting

Shifting a wavelet simply means delaying (or hastening) its onset. Mathematically, delaying a function f(t) by k is represented by f(t - k); the schematic is shown in Fig. 3.4.

Fig. 3.4 (a) Wavelet function w(t), (b) shifted wavelet function w(t - k)


3.4.4 Scale and Frequency

The higher scales correspond to the most stretched wavelets. The more stretched the wavelet, the longer the portion of the signal with which it is being compared, and thus the coarser the signal features being measured by the wavelet coefficients. The relation between the scale and the frequency is shown in Fig. 3.5.

Fig. 3.5 Scale and frequency (low scale versus high scale)

Thus, there is a correspondence between wavelet scales and frequency as revealed by wavelet analysis:
Low scale a: compressed wavelet, rapidly changing details, high frequency.
High scale a: stretched wavelet, slowly changing coarse features, low frequency.

3.5 DISCRETE WAVELET TRANSFORM

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. If the scales and positions are chosen based on powers of two, the so-called dyadic scales and positions, then calculating the wavelet coefficients is efficient and just as accurate. This is the basis of the discrete wavelet transform (DWT).

3.5.1 One-Stage Filtering


For many signals, the low-frequency content is the most important part. It is the identity of the signal. The high-frequency content, on the other hand, imparts details to the signal. In wavelet analysis, the approximations and details are obtained after filtering. The approximations are the high-scale, low frequency components of the signal. The details are the low-scale, high frequency components. The filtering process is schematically represented as in Fig. 3.6.

Fig. 3.6 Single stage filtering

The original signal, S, passes through two complementary filters and emerges as two signals. Unfortunately, this may result in a doubling of the number of samples, and hence downsampling is introduced to avoid it. The process on the right, which includes downsampling, produces the DWT coefficients. The schematic diagram with real signals inserted is shown in Fig. 3.7.


Fig. 3.7 Decomposition and decimation
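A single stage of this filter-and-downsample process can be reproduced with the PyWavelets library; the sketch below (wavelet choice and test signal are illustrative assumptions) shows that the approximation and detail branches each carry roughly half as many samples as the input.

```python
import numpy as np
import pywt

s = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)

# One level of DWT: low-pass (approximation) and high-pass (detail) outputs,
# each downsampled by two.
cA, cD = pywt.dwt(s, 'db2')
print(len(s), len(cA), len(cD))   # 256 -> roughly 128 coefficients per branch
```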

3.5.2 Multiple-Level Decomposition

The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower resolution components. This is called the wavelet decomposition tree and is depicted as in Fig. 3.8.

Fig. 3.8 Multilevel decomposition

3.5.3 Wavelet Reconstruction


The reconstruction of the image is achieved by the inverse discrete wavelet transform (IDWT). The values are first upsampled and then passed to the filters. This is represented as shown in Fig. 3.9.

Fig. 3.9 Wavelet Reconstruction

The wavelet analysis involves filtering and downsampling, whereas the wavelet reconstruction process consists of upsampling and filtering. Upsampling is the process of lengthening a signal component by inserting zeros between samples as shown in Fig. 3.10.

Fig. 3.10 Reconstruction using upsampling

3.5.4 Reconstructing Approximations and Details

It is possible to reconstruct the original signal from the coefficients of the approximations and details. The process yields a reconstructed approximation which has the same length as the original signal and which is a real approximation of it.


The reconstructed details and approximations are true constituents of the original signal. Since the details and approximations are produced by downsampling and are only half the length of the original signal, they cannot be directly combined to reproduce the signal. It is necessary to reconstruct (upsample and filter) the approximations and details before combining them. The reconstructed signal is schematically represented in Fig. 3.11.

Fig. 3.11 Reconstructed signal components

3.6 1-D WAVELET TRANSFORM

The generic form of a one-dimensional (1-D) wavelet transform is shown in Fig. 3.12. Here a signal is passed through a low-pass and a high-pass filter, h and g respectively, and then downsampled by a factor of two, constituting one level of the transform.

Fig. 3.12 1-D wavelet decomposition

Repeating the filtering and decimation process on the low-pass branch outputs produces multiple levels, or scales, of the wavelet transform. The process is typically carried out for a finite number of levels K, and the resulting coefficients are called wavelet coefficients.
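A multi-level 1-D decomposition and its inverse can be sketched with PyWavelets as below; the test signal, the wavelet and the number of levels K = 3 are illustrative assumptions. The sketch also confirms that decomposition followed by reconstruction is essentially lossless.

```python
import numpy as np
import pywt

x = np.cumsum(np.random.randn(512))          # a test signal

# K = 3 levels: coeffs = [cA3, cD3, cD2, cD1]
coeffs = pywt.wavedec(x, 'db2', level=3)
print([len(c) for c in coeffs])

# Inverse transform: perfect reconstruction up to floating-point error.
x_rec = pywt.waverec(coeffs, 'db2')
print(np.max(np.abs(x - x_rec[:len(x)])))    # on the order of 1e-12
```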


The one-dimensional forward wavelet transform is defined by a pair of filters s and t that are convolved with the data at either the even or odd locations. The filters s and t used for the forward transform,

l_i = \sum_{j=-n_L}^{n_L} s_j x_{2i+j}   and   h_i = \sum_{j=-n_H}^{n_H} t_j x_{2i+1+j},

are called analysis filters.

Although l and h are two separate output streams, together they have the same total number of coefficients as the original data. The output stream l, which is commonly referred to as the low-pass data, may then have the identical process applied to it again repeatedly. The other output stream, h (the high-pass data), generally remains untouched. The inverse process expands the two separate low- and high-pass data streams by inserting zeros between every other sample, convolves the resulting data streams with two new synthesis filters s' and t', and adds them together to regenerate the original double-size data stream:

y_i = \sum_j t'_j l'_{i+j} + \sum_j s'_j h'_{i+j},   where   l'_{2i} = l_i, l'_{2i+1} = 0, h'_{2i+1} = h_i, h'_{2i} = 0,

and the sums run over the supports of the synthesis filters.
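As a concrete, minimal instance of these analysis/synthesis relations, the following Python sketch uses the two-tap Haar filters and the equivalent even/odd (polyphase) form of the equations; the filter choice and the function names are illustrative assumptions, not the filters used later in the thesis.

```python
import numpy as np

def haar_analysis(x):
    # l_i and h_i from even/odd samples (Haar analysis filters [1, 1]/sqrt(2) and [1, -1]/sqrt(2)).
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    l = (even + odd) / np.sqrt(2.0)
    h = (even - odd) / np.sqrt(2.0)
    return l, h

def haar_synthesis(l, h):
    # Inverse relations: x_{2i} = (l_i + h_i)/sqrt(2), x_{2i+1} = (l_i - h_i)/sqrt(2).
    x = np.empty(2 * len(l))
    x[0::2] = (l + h) / np.sqrt(2.0)
    x[1::2] = (l - h) / np.sqrt(2.0)
    return x

x = np.random.randn(16)
l, h = haar_analysis(x)
print(np.allclose(x, haar_synthesis(l, h)))   # True: lossless, same total coefficient count
```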

To meet the definition of a wavelet transform, the analysis and synthesis filters s, t, s' and t' must be chosen so that the inverse transform perfectly reconstructs the original data. Since the wavelet transform maintains the same number of coefficients as the original data, the transform itself does not provide any compression. However, the structure provided by the transform and the expected values of the coefficients give a form that is much more amenable to compression than the original data. Since the filters s, t, s' and t' are chosen to be perfectly invertible, the wavelet transform itself is lossless. Later application of the quantization step will cause some data loss and can be used to control the degree of compression.

The forward wavelet-based transform uses a 1-D subband decomposition process; here a 1-D set of samples is converted into the low-pass subband (Li) and the high-pass subband (Hi). The low-pass subband represents a downsampled low-resolution version of the original image. The high-pass subband represents residual information of the original image, needed for perfect reconstruction of the original image from the low-pass subband.

3.7 2-D TRANSFORM HIERARCHY

The 1-D wavelet transform can be extended to a two-dimensional (2-D) wavelet transform using separable wavelet filters. With separable filters the 2-D transform can be computed by applying a 1-D transform to all the rows of the input, and then repeating on all of the columns.

Fig. 3.13 Subband labeling scheme for a one-level, 2-D wavelet transform:
LL1 | HL1
LH1 | HH1

The subband labeling for a one-level (K = 1) 2-D wavelet transform of the original image is shown in Fig. 3.13. The example is repeated for a three-level (K = 3) wavelet expansion in Fig. 3.14. In all of the discussion, K represents the highest level of the decomposition of the wavelet transform.


[Figure: subband layout for a three-level decomposition, showing the low-resolution band together with the HL, LH and HH subbands at levels 1, 2 and 3.]

Fig. 3.14 Subband labeling scheme for a three-level, 2-D wavelet transform

The 2-D subband decomposition is just an extension of the 1-D subband decomposition. The entire process is carried out by executing the 1-D subband decomposition twice, first in one direction (horizontal), then in the orthogonal (vertical) direction. For example, the low-pass subband (Li) resulting from the horizontal direction is further decomposed in the vertical direction, leading to the LLi and LHi subbands. Similarly, the high-pass subband (Hi) is further decomposed into HLi and HHi. After one level of transform, the image can be further decomposed by applying the 2-D subband decomposition to the existing LLi subband. This iterative process results in multiple transform levels. In Fig. 3.14 the first level of transform results in LH1, HL1, and HH1, in addition to LL1, which is further decomposed into LH2, HL2, HH2 and LL2 at the second level, and LL2 is used for the third-level transform. The subband LLi is a low-resolution subband, and the high-pass subbands LHi, HLi, HHi are the horizontal, vertical, and diagonal subbands respectively, since they represent the horizontal, vertical, and diagonal residual information of the original image.


An example of three-level decomposition into subbands of the image CASTLE is illustrated in Fig. 3.15.

Fig. 3.15 The process of the 2-D wavelet transform applied through three transform levels

To obtain a two-dimensional wavelet transform, the one-dimensional transform is applied first along the rows and then along the columns to produce four subbands: low-resolution, horizontal, vertical, and diagonal. (The vertical subband is created by applying a horizontal high-pass, which yields vertical edges.) At each level, the wavelet transform can be reapplied to the low-resolution subband to further decorrelate the image. Fig. 3.16 illustrates the image decomposition, defining the level and subband conventions used in the AWIC algorithm. The final configuration contains a small low-resolution subband. In addition to the various transform levels, the phrase level 0 is used to refer to the original image data. When the user requests zero levels of transform, the original image data (level 0) is treated as a low-pass band and processing follows its natural flow.


Fig. 3.16 Image decomposition using wavelets (the low-resolution subband and the vertical HL, horizontal LH, and diagonal HH subbands at each level)
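The row-then-column separable construction can be sketched with PyWavelets as below (the image, wavelet and level count are illustrative assumptions); a single call to dwt2 produces the four subbands of Fig. 3.13, and wavedec2 iterates the process on the approximation band.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)    # stand-in for an image

# One level: approximation plus horizontal, vertical and diagonal detail subbands,
# each about 128 x 128.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
print(cA.shape, cH.shape, cV.shape, cD.shape)

# Three levels (K = 3): the approximation band is decomposed again at each level.
coeffs = pywt.wavedec2(img, 'haar', level=3)
print(coeffs[0].shape)            # small low-resolution subband, 32 x 32 here
```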

3.8 AWIC FILTER CHOICE

The main difference between subband and wavelet coding is the choice of filters used in the transform. The filters used in wavelet coding systems were typically designed to satisfy certain smoothness constraints. In contrast, subband filters were designed to approximately satisfy the criterion of non-overlapping frequency responses. There are two types of filter choices, orthogonal and biorthogonal. The biorthogonal wavelet transform has the advantage that it can use linear-phase filters, but the disadvantage is that it is not energy preserving. The fact that biorthogonal wavelets are not energy preserving does not turn out to be a big problem, since there are linear-phase biorthogonal filter coefficients which are close to being orthogonal. The 9-7 and 5-3 Daubechies biorthogonal filters were chosen as ideal for image compression. Both the biorthogonal 9-7 and 5-3 filters were used in AWIC (Rushanan 1997). The detracting visual artifact at high compression rates using the longer filter is ringing, while the shorter filter produces staircasing. The design trade-off between these two filters is speed performance versus quality performance. Visually, the level of ringing artifacts and staircasing artifacts appears to be nearly equivalent,


depending on subjective opinion. The peak signal-to-noise ratio favors the ringing artifact over the staircasing. The Daubechies 5-tap/3-tap filter is used in the forward wavelet transform; it has good localization and symmetry properties, which allow simple edge treatment, high-speed computation, and a high-quality compressed image. The following equation represents the Daubechies 5-tap/3-tap filter.
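The equation itself did not survive in this copy; for reference, the standard analysis filter taps of the 5/3 (LeGall/CDF) biorthogonal pair, which this passage presumably refers to, are:

h_low  = { -1/8,  2/8,  6/8,  2/8,  -1/8 }   (5-tap low-pass analysis filter)
h_high = { -1/2,  1,  -1/2 }                 (3-tap high-pass analysis filter)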

3.9 WAVELET COMPUTATION

In order to obtain an efficient wavelet computation, it is important to eliminate as many unnecessary computations as possible. A careful examination of the forward and reverse transforms shows that about half the operations either lead to data which are destroyed or are null operations (as in multiplication by 0). The one-dimensional wavelet transform is computed by separately applying two analysis filters at alternating even and odd locations. The inverse process first doubles the length of each signal by inserting zeros in every other position, then applies the appropriate synthesis filter to each signal and adds the filtered signals to get the final reverse transform.

3.10 QUANTIZATION

Quantization refers to the process of approximating the continuous set of values in the image data with a finite set of values. The input to a quantizer is the original data, and the output is always one among a finite number of levels. The quantizer is a function whose set of output values is discrete, and usually finite. Obviously, this is a


process of approximation, and a good quantizer is one which represents the original signal with minimum loss or distortion. There are two types of quantization: scalar quantization and vector quantization. In scalar quantization, each input symbol is treated separately in producing the output. If the input range is divided into levels of equal spacing, the quantizer is termed a uniform quantizer; if not, it is termed a non-uniform quantizer. A uniform quantizer can be easily specified by its lower bound and its step size. Also, implementing a uniform quantizer is easier than implementing a non-uniform one. Just as a quantizer partitions its input and outputs discrete levels, an inverse quantizer is one which receives the output levels of a quantizer and converts them back into data values by translating each level into a 'reproduction point' in the actual range of the data. The quantization error (x - x') is used as a measure of the optimality of the quantizer and inverse quantizer.

3.11 ENTROPY ENCODING

There are many ways of compressing images. An image is composed of many dots, called pixels, and each pixel has a color. Pictures contain some number of rows (R) and columns (C) of pixels. An image is stored in the computer as a set of values, each of which represents the color of a pixel. The total number of values in the image is R times C. To transmit an image, all of these values could be sent. Alternatively, a set of instructions that allows the picture to be redrawn when it is decompressed can be sent instead. Encoding can be done by a number of methods. The simplest and most basic encoding method is run-length coding (Sayood 2000); it has low compression performance but is lossless. Huffman coding is also a lossless coding method which gained popularity for its compression; it is performed with the use of standard tables. More recent standard methods such as arithmetic coding and LZW coding are very effective.
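To illustrate the simplest of these methods, here is a minimal run-length encoder and decoder in Python; it is a sketch for a 1-D sequence of pixel values, and the function names are illustrative assumptions.

```python
def rle_encode(values):
    # Replace each run of identical values with a (value, run_length) pair.
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

row = [0, 0, 0, 255, 255, 7, 7, 7, 7]
encoded = rle_encode(row)            # [(0, 3), (255, 2), (7, 4)]
assert rle_decode(encoded) == row    # lossless
```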


CHAPTER 4

PROPOSED SPIHT
Parents and children
SPIHT was introduced in [9]. It is a refinement of the algorithm presented by Shapiro in [10]. SPIHT assumes that the decomposition structure is the octave-band structure and then uses the fact that sub-bands at different levels but of the same orientation display similar characteristics. As seen in Figure 4.1, the band LLHL has similarities with the band HL (both have high-pass filtered rows). To utilize this observation, SPIHT defines spatial parent-children relationships in the decomposition structure. The squares in Figure 4.1 represent the same spatial location of the original image and the same orientation, but at different scales. The different scales of the subbands imply that a region in the sub-band LLHL is spatially co-located (represents the same region in the original image) with a region 4 times larger (in the two-dimensional case) in the band HL. SPIHT describes this collocation with one-to-four parent-children relationships, where the


parent is in a sub-band of the same orientation as the children but at a smaller scale. If this prediction is successful, then SPIHT can represent the parent and all its descendants with a single symbol, called a zero-tree, introduced in [10]. Predicting the energy of coefficients in lower-level sub-bands (children) using coefficients in higher-level sub-bands (parents) makes sense, since there should be more energy per coefficient in these small bands than in the bigger ones. To see how SPIHT uses zero-trees, the workings of SPIHT are briefly explained below. For more information the reader is referred to [9].

SPIHT consists of two passes, the ordering pass and the refinement pass. In the ordering pass SPIHT attempts to order the coefficients according to their magnitude. In the refinement pass the quantization of the coefficients is refined. The ordering and refining are made relative to a threshold. The threshold is appropriately initialised and then continuously made smaller with each round of the algorithm. SPIHT maintains three lists of coordinates of coefficients in the decomposition: the List of Insignificant Pixels (LIP), the List of Significant Pixels (LSP) and the List of Insignificant Sets (LIS). To decide whether a coefficient is significant or not, SPIHT uses the following definition: a coefficient is deemed significant at a certain threshold if its magnitude is larger than or equal to the threshold. Using the notion of significance, the LIP, LIS and LSP can be explained. The LIP contains coordinates of coefficients that are insignificant at the current threshold. The LSP contains the coordinates of coefficients that are significant at the same threshold. The LIS contains coordinates of the roots of the spatial parent-children trees.

1. In the ordering pass of SPIHT (marked by the dotted line in the accompanying schematic), the LIP is first searched for coefficients that are significant at the current threshold; if one is found, a 1 is output and then the sign of the coefficient is marked by outputting either 1 or 0 (positive or negative). The significant coefficient is then moved to the LSP. If a coefficient in the LIP is insignificant, a 0 is output.

2. Next in the ordering pass, the sets in the LIS are processed. For every set in the LIS it is decided whether the set is significant or insignificant. A set is deemed significant if at least one coefficient in the set is significant. If the set is significant, the immediate


children of the root are sorted into the LIP and LSP depending on their significance, and 0s and 1s are output as when processing the LIP. After sorting the children, a new set (spatial coefficient tree) for each child is formed in the LIS. If the set is deemed insignificant, that is, the set is a zero-tree, a 0 is output and no more processing is needed. The above is a simplification of the LIS processing, but the important thing to remember is that entire sets of insignificant coefficients, zero-trees, are represented with a single 0. The idea behind defining spatial parent-children relationships as in (4.1) is to increase the possibility of finding these zero-trees.

3. The SPIHT algorithm continues with the refinement pass. In the refinement pass the next bit in the binary representation of each coefficient in the LSP is output. The next bit is related to the current threshold. The processing of the LSP ends one round of the SPIHT algorithm; before the next round starts, the current threshold is halved.

Below is a schematic of how SPIHT works.
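As a rough illustration of the ordering-pass machinery described above, the following Python sketch shows the significance test, the parent-children mapping, and how an entire zero-tree collapses to a single 0 bit. It is a simplification under the assumption of a square coefficient array from an octave-band decomposition; all names and the toy data are illustrative, not the thesis implementation.

```python
import numpy as np

def children(i, j, shape):
    # Standard 2-D parent-children mapping: (i, j) -> (2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1).
    rows, cols = shape
    kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1), (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
    return [(r, c) for r, c in kids if r < rows and c < cols]

def descendants(i, j, shape):
    # All descendants of (i, j) in the spatial orientation tree.
    out, stack = [], children(i, j, shape)
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(children(*node, shape))
    return out

def set_significant(coeffs, root, threshold):
    # A set is significant if at least one descendant reaches the threshold.
    return any(abs(coeffs[r, c]) >= threshold for r, c in descendants(*root, coeffs.shape))

coeffs = np.zeros((8, 8)); coeffs[0, 0] = 37.0                    # toy transform: energy only at one root
threshold = 2 ** int(np.floor(np.log2(np.abs(coeffs).max())))     # initial threshold, here 32
print(set_significant(coeffs, (1, 1), threshold))                 # False -> the whole tree is coded with a single 0
```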


CHAPTER 5


Results

FIRST LEVEL DECOMPOSITION

SECOND LEVEL DECOMPOSITION


THIRD LEVEL DECOMPOSITION


CHAPTER 6 IMPLEMENTATION

BRAIN IMAGE IN 3D
mri.bmp

This is the three-dimensional view of the brain image that is used for compression.


The image shown above is sliced into 8 frames (image slices), as shown below.


Brain images from frames 1-8:


mri1.bmp, mri2.bmp, mri3.bmp, mri4.bmp, mri5.bmp, mri6.bmp, mri7.bmp, mri8.bmp

The above images are applied to the discrete wavelet transform in order to be sampled and quantized. In image compression, such a decomposition scheme is extended to two dimensions. By choosing separable filters, a two-dimensional subband coding scheme can be implemented as a straightforward extension of the one-dimensional version. This is performed by applying the same filters along the two dimensions iteratively. For an M x M image, M one-dimensional transforms are carried out along the rows (one for each row). This results in two M x M/2 subimages/subbands, one corresponding to the low-pass filtered components and the other corresponding to the high-pass filtered rows of the image. Next, each of these subbands is filtered further along the columns to partition the resulting data into four M/2 x M/2 subbands (low-pass row and low-pass column, low-pass row and high-pass column, high-pass row and low-pass column, high-pass row and high-pass column). This constitutes one stage of the subband decomposition of an image, and is depicted in Figure 2.3. In this thesis, we are concerned with 3-D datasets, so one stage of decomposition results in eight sub-volumes. This is accomplished by transforming each slice of the volumetric dataset using a two-dimensional transform and then applying a one-dimensional transform along the slices.
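A minimal sketch of this slice-wise scheme in Python is given below; the 8-slice volume size and the wavelet are illustrative assumptions. Each slice gets a 2-D transform and a 1-D transform is then applied across the slice axis, yielding the eight sub-volumes mentioned above.

```python
import numpy as np
import pywt

volume = np.random.rand(8, 256, 256)        # 8 slices of 256 x 256 (stand-in for the MRI stack)

# 2-D transform of every slice: approximation plus the three detail subbands per slice.
slice_coeffs = [pywt.dwt2(s, 'haar') for s in volume]
LL = np.stack([c[0] for c in slice_coeffs])              # shape (8, 128, 128)

# 1-D transform along the slice axis (shown here for the LL sub-volume);
# applying it to all four subbands gives the eight sub-volumes of one 3-D stage.
low, high = pywt.dwt(LL, 'haar', axis=0)
print(LL.shape, low.shape, high.shape)                    # (8, 128, 128) (4, 128, 128) (4, 128, 128)
```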


Coding stage:

Once the image is decomposed into subbands, an encoding scheme is used to compress the transform coefficients. Traditionally, scalar quantization is used in this stage [16]. Among state-of-the-art coding schemes based on the wavelet transform, SPIHT encoding is used here.

SPIHT Coding Algorithm

The Set Partitioning in Hierarchical Trees (SPIHT) algorithm is based on the embedded zerotree wavelet (EZW) coding method; it employs spatial orientation trees and uses a set-partitioning sorting algorithm [24]. An octave-band decomposition structure is used in SPIHT. Coefficients corresponding to the same spatial location in different subbands in the pyramid structure


display self-similarity characteristics. Note that SPIHT defines parent-children relationships between these self-similar subbands to establish spatial orientation trees. Such parent-children relationships in the two-dimensional case can be expressed as in the equation below.
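The equation is missing from this copy; the standard 2-D SPIHT parent-children (offspring) relation, which this passage presumably refers to, maps a coefficient at (i, j) to its four children:

O(i, j) = \{ (2i, 2j),\ (2i, 2j+1),\ (2i+1, 2j),\ (2i+1, 2j+1) \}

with coefficients in the coarsest low-pass band treated specially as tree roots.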

An example showing octave-band decomposition is depicted in Figure

The arrows in the figure indicate the parent-children relationship in the subband pyramid. The start of an arrow is at the parent coefficient, and the end of the arrow indicates the four children coefficients. The coefficients in the shaded area are all descendants of the root coefficient A. It is assumed that lower-frequency subbands have more energy, and if the magnitude of a parent coefficient in a lower-frequency subband is less than a threshold, it is highly


likely that the magnitudes of its descendants are also less than that threshold. Thus, the significance map of the transformed coefficients with respect to a preset threshold can be efficiently coded by a zero-tree coding scheme. For most images, the assumption used for the construction of the spatial orientation tree is satisfied. The next step is the set partitioning sorting algorithm, which consists of two passes: the sorting pass and the refinement pass. In the sorting pass, the ordering information of the coefficients is found according to the results of a magnitude test. In the refinement pass, the quantization of each significant coefficient is refined in a successive manner. A quantization threshold is used in the coefficient magnitude test. The threshold is initialized as

T_0 = 2^{\lfloor \log_2 ( \max_{(i,j)} |c_{i,j}| ) \rfloor}

(where c_{i,j} are the transform coefficients) and is then successively decreased by a factor of two in each pass of the algorithm. When the bit budget is reached, the algorithm stops. Three ordered lists, in which each element indicates the coordinate pair (i, j) of a coefficient, are used to implement the coding algorithm. They are named the List of Insignificant Pixels (LIP), the List of Significant Pixels (LSP) and the List of Insignificant Sets (LIS). A magnitude test is performed to decide whether a coefficient is significant or not. A coefficient is said to be significant at a given threshold if its magnitude is larger than or equal to that threshold. The construction of the LIP, LIS and LSP takes place according to the significance of the coefficients with respect to the threshold value:
- LIP contains coordinates of coefficients that are insignificant with respect to the current threshold
- LSP contains the coordinates of coefficients that are significant at the same threshold
- LIS contains coordinates of the roots of the spatial orientation parent-children trees


The flowchart given in Figure illustrates how the SPIHT algorithm works.


1. In the sorting pass, the LIP is scanned to determine whether each entry is significant at the current threshold. If an entry is found to be significant, a bit 1 is output together with another bit for the sign of the coefficient (1 for positive, 0 for negative), and the significant entry is moved to the LSP. If an entry in the LIP is insignificant, a bit 0 is output.

2. Next, the entries in the LIS are processed. When an entry is the set of all descendants of a coefficient, named type A, magnitude tests over all descendants of the current entry are carried out to decide whether the set is significant. If the set is found to be significant, the direct offspring of the entry undergo magnitude tests: a significant offspring is moved into the LSP (and its sign is output), while an insignificant offspring is moved into the LIP. If the set is deemed insignificant, the spatial orientation tree rooted at the current entry is a zero-tree, so a bit 0 is output and no further processing is needed. Finally, if a significant type A entry still has descendants beyond its direct offspring, it is moved to the end of the LIS as type B, which stands for the set of all descendants except the immediate offspring of a coefficient. Alternatively, if the entry in the LIS is of type B, the significance test is performed on the descendants of its direct offspring. If the test is true, the spatial orientation tree with the type B entry as its root is split into four sub-trees rooted at the direct offspring, and these direct offspring are added to the end of the LIS as type A entries. The important point in the LIS sorting is that entire sets of insignificant coefficients, zero-trees, are represented with a single 0. The purpose of defining spatial parent-children relationships is to increase the possibility of finding these zero-trees.

3. Finally, the refinement pass is used to output the refinement bit (the nth bit) of each coefficient in the LSP at the current threshold. Before the algorithm proceeds to the next round, the current threshold is halved.

After the encoding process, the compressed data is obtained; it can then be stored or transmitted.


At the receiver:

The inverse process is performed. First, inverse SPIHT decoding is applied; it uses the threshold and the received bit stream to reconstruct the wavelet coefficient matrix. The next step is the inverse DWT.

Synthesis Stage
In the previous section, we described the subband decomposition of images into different frequency components. In order to reconstruct the input image, a synthesis scheme is needed to restore the original input. The process of decomposition can be reversed, so it is possible to obtain the reconstructed image given that the decomposed components are available. Such a synthesis process can be realized by upsampling followed by filtering, and then combining the results at the outputs of the filters. For the analysis scheme given earlier, the corresponding synthesis scheme is shown in the figure.


Thus we obtain a reconstructed image that is close to the original image. When lossy compression is employed, we need to measure the difference (distortion) between the original signal and the reconstructed one. A common metric for measuring distortion is the Mean Squared Error, or MSE. In the MSE measurement, the total squared difference between the original signal and the reconstructed one is averaged over the entire signal and can be computed as follows:
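The formula did not survive in this copy; the standard definition over N samples x_i of the original and x'_i of the reconstruction, which the text presumably shows here, is:

\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( x_i - x'_i \right)^2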

Another distortion measure is the Peak Signal-to-Noise Ratio, or PSNR. Since the MSE is sensitive to the range of the signal scale, the PSNR incorporates the maximum amplitude of the original signal into the metric, making the measurement independent of the range of the data. PSNR is usually measured in decibels as:
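Again the equation itself is missing; the standard decibel definition, which the following remark about the peak value 255 presumably refers to, is:

\mathrm{PSNR} = 10 \log_{10} \frac{x_{\mathrm{peak}}^2}{\mathrm{MSE}} = 20 \log_{10} \frac{x_{\mathrm{peak}}}{\sqrt{\mathrm{MSE}}}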


For 8-bit images or image volumes, for instance, x_peak = 255. The graphs of these distortion values are shown below.

Plot: MSE versus bit plane (MSE values roughly between 900 and 1250)


Plot: PSNR versus bit plane (PSNR values roughly between 65.4 and 67 dB)


The original image, the reconstructed image and their difference for each frame are shown below.

Original Frame1

Reconst Frame1

DIFFERENCE1


Original Frame2

Reconst Frame2

DIFFERENCE2

Original Frame3

Reconst Frame3

DIFFERENCE3


Original Frame4

Reconst Frame4

DIFFERENCE4

Original Frame5

Reconst Frame5

DIFFERENCE5


Original Frame6

Reconst Frame6

DIFFERENCE6

Original Frame7

Reconst Frame7

DIFFERENCE7


Original Frame8

Reconst Frame8

DIFFERENCE8


CHAPTER 7 CONCLUSION AND FUTURE WORK

The performance of the wavelet-based coder could be raised if a better cost function were developed. If the gain in PSNR from using such a new cost function is significant, the wavelet coder may become the preferred choice. In the next phase, the proposed system is to be simulated and implemented in VLSI.


REFERENCES

[1] Forchheimer R. (1999), Image coding and data compression, Linköping: Department of Electrical Engineering, Linköping University.
[2] Chui C. K. (1992), An introduction to wavelets, Boston: Academic Press, ISBN 0121745848.
[3] Sayood K. (2000), Introduction to data compression, Morgan Kaufmann Publishers, US, 2nd revised edition, ISBN 1558605584.
[4] Daubechies I., Barlaud M., Antonini M., Mathieu P. (1992), Image coding using wavelet transform, IEEE Transactions on Image Processing, Vol. 1, No. 2, pp. 205-220.
[5] Brislawn C. M. (1996), Classification of nonexpansive symmetric extension transforms for multirate filter banks, Applied and Computational Harmonic Analysis, Vol. 3, pp. 337-357.
[6] Azpiroz-Leehan J., Lerallut J.-F. (2000), Selection of biorthogonal filters for image compression of MR images using wavelet packets, Medical Engineering & Physics, Vol. 22, pp. 225-243.
