
Available online at www.ijart.org

IJART, Vol. 2, Issue 3, 2012, pp. 115-119

ISSN: 6602 3127


RESEARCH ARTICLE

Design and Analysis of VLSI-Based FELICS Algorithm for Lossless Image Compression

N. Muthukumaran, Asst. Professor, Dept. of ECE, Francis Xavier Engineering College
Dr. R. Ravi, Professor & Head, Dept. of CSE, Francis Xavier Engineering College

ABSTRACT
In this research paper, a VLSI-oriented FELICS (Fast, Efficient, Lossless Image Compression System) algorithm is proposed for lossless image compression applications such as medical imaging and technical drawings. The goals are to analyze the performance of the architecture against existing models, to improve quality-progressive, resolution-progressive, and component-progressive lossless image compression, to reduce image size without losing image quality, and to design a high-speed VLSI-based image compression architecture. The FELICS algorithm consists of a simplified adjusted binary code and a Golomb-Rice code with storage-less k parameter selection. The storage-less k parameter selection is used to achieve high processing speed. The simplified adjusted binary code reduces the number of arithmetic operations and likewise achieves high processing speed. A color difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-oriented FELICS algorithm provides an effective solution for hardware architecture design, with a regular pipelined data flow of four stages. With two-level parallelism, consecutive pixels are classified into even and odd samples, and an individual hardware engine is dedicated to each. For High-Definition (HD) display applications, the encoding capability can achieve the high-quality specification of Full-HD 1080p at 50 Hz with complete red, green, and blue color components. The method can be further enhanced by multilevel parallelism and applied to QHD and QFHD.

Key words: Image compression, parallel processing, adjusted binary code, Golomb-Rice code, High-Definition display, VLSI architecture.

1. INTRODUCTION
Compression is useful because it reduces the consumption of expensive resources, such as hard disk space or transmission bandwidth [1]. On the downside, compressed data must be decompressed to be used, and this extra processing may be unfavorable for some applications. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed. The design of data compression schemes therefore involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced, and the computational resources required to compress and uncompress the data. Most lossless compression programs do two things in sequence: the first step generates a statistical model



for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g., frequently encountered) data will produce shorter output than "improbable" data [2]. The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by DEFLATE) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1. There are two primary ways of constructing statistical models. In a static model, the data are analyzed and a model is constructed, and this model is stored with the compressed data. This approach is simple and modular, but it has the disadvantage that the model itself can be expensive to store, and that it forces a single model to be used for all data being compressed, so it performs poorly on files containing heterogeneous data. Adaptive models dynamically update the model as the data are compressed [5]. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. In the lossy compression field, many sophisticated standards have been intensively developed, such as JPEG and JPEG 2000 for still images and H.264 for multimedia communications and high-end video applications. Therefore, both the algorithms and their hardware implementations have attracted massive research effort throughout the evolution of lossy compression techniques. Lossless compression removes only redundant information, and the reconstruction procedure recovers the original exactly: the decoded information is identical to the original information. According to its coding principle, lossless compression can be categorized into two fields: dictionary-based and prediction-based. In dictionary-based methods, frequently occurring and repetitive patterns are assigned shorter codewords, and less efficient codewords are assigned to the others. Many famous methods, including Huffman coding, run-length coding, arithmetic coding, LZ77, and LZW, have been widely developed, and some of them are further applied in lossy compression standards. Prediction-based algorithms apply a prediction technique to generate the residual and utilize an entropy coding tool to encode it. Many methods, including the fast, efficient, lossless image compression system (FELICS), context-based adaptive lossless image coding (CALIC), and JPEG-LS, have been extensively developed in this field. The FELICS algorithm provides an efficient coding principle without data dependency while maintaining competitive coding efficiency. Two main techniques,

including the simplified adjusted binary code and the Golomb-Rice code with storage-less k parameter selection, are incorporated. The proposed color difference preprocessing (CDP) can efficiently improve the coding efficiency with simple arithmetic operations [4]. The rest of this paper is organized as follows. Section II introduces the description of the methods. Section III presents the detailed description of the VLSI-oriented FELICS algorithm and its proposed hardware architecture. Experimental results and discussions are described in Section IV. Finally, the conclusions and future enhancements are given in Section V.

2. DESCRIPTION OF THE METHODS


The intensity distribution model is exploited to predict the correlation between the current pixel and the reference pixels. In this model, an intensity that occurs between L and H follows an almost uniform distribution and is regarded as in range. An intensity higher than H or smaller than L is regarded as above range or below range, respectively [7]. For in range, the adjusted binary code is adopted; the Golomb-Rice code is used for both above range and below range.

Fig.1. Intensity Model

Adjusted Binary Code

For in range, the probability distribution is slightly higher in the middle section and lower in both side sections. Therefore, the adjusted binary code assigns the shorter codewords to the middle section and the longer ones to both side sections.

Sample of P - L | 0   | 1  | 2  | 3  | 4
Codeword        | 111 | 00 | 01 | 10 | 110

Table.1. Codeword of Adjusted Binary Code

To describe the coding flow of the adjusted binary code, the coding parameters are first declared as follows:

delta = H - L
range = delta + 1
upper_bound = ceil(log2(range))
lower_bound = floor(log2(range))
threshold = 2^upper_bound - range
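As a quick check of these definitions, the following sketch (ours, not from the paper; the function and variable names are our own) computes the coding parameters and reproduces the delta = 4 worked example discussed in the next paragraph:

```python
import math

def coding_parameters(L, H):
    """Coding parameters of the adjusted binary code, per the
    definitions above. A minimal sketch; names are our own."""
    delta = H - L
    rng = delta + 1                           # number of possible in-range samples
    upper_bound = math.ceil(math.log2(rng))   # max codeword length in bits
    lower_bound = math.floor(math.log2(rng))  # min codeword length in bits
    threshold = 2 ** upper_bound - rng
    return delta, rng, upper_bound, lower_bound, threshold

# delta = 4 gives range 5, upper bound 3, lower bound 2, threshold 3.
print(coding_parameters(L=10, H=14))          # -> (4, 5, 3, 2, 3)
```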



The adjusted binary code takes the sample of P - L to be encoded, and range indicates the number of possible samples to be encoded for a given delta. The upper bound and lower bound denote the maximum and minimum numbers of bits used to represent the codeword of each sample, respectively. In particular, the lower bound is identical to the upper bound when the range is exactly a power of two. The threshold and shift number are utilized to determine which samples are encoded with upper-bound bits and which with lower-bound bits. If delta = 4, the sample lies in [0, 4] and the range is 5. The required number of bits is 2 for the lower bound and 3 for the upper bound. Following the intensity distribution of in range, 2 bits are allocated to the middle section, comprising the samples {1, 2, 3}, and 3 bits to the side sections, comprising the samples {0, 4}.

Golomb-Rice Code

For both above range and below range, the probability distribution varies sharply with an exponential decay rate, and the more efficient codewords should be assigned to the intensities with high probability. Therefore, the Golomb-Rice code is adopted as the coding tool for both above range and below range. With the Golomb code, the codeword of a sample x is partitioned into unary and binary parts:

Unary part: floor(x / m)
Binary part: x mod m

where m is a positive integer that dominates the coding efficiency of the Golomb code. If m is assigned a power of 2, the coding scheme is regarded as the Golomb-Rice code, and its parts become:

Unary part: floor(x / 2^k)
Binary part: x mod 2^k

where k is a positive integer. The entire codeword concatenates the unary part and the binary part, with one bit inserted between the two for identification. The Golomb-Rice code is therefore a special case of the Golomb code, and its parameter m, exactly equal to a power of 2, is efficient for hardware implementation [8]. However, the selection procedure for the k parameter induces serious data dependency and consumes considerable storage capacity, as discussed below.
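The following sketch illustrates the scheme just described. The bit polarity (a run of zeros for the unary part, a single one as the separator) is one common convention and an assumption on our part, not quoted from the paper:

```python
def golomb_rice_encode(x, k):
    """Golomb-Rice codeword for a non-negative residual x with m = 2^k.
    Zeros encode the unary part and a '1' separates it from the k-bit
    binary part; this polarity is one common convention."""
    unary = x >> k                  # floor(x / 2^k)
    binary = x & ((1 << k) - 1)     # x mod 2^k
    return "0" * unary + "1" + format(binary, f"0{k}b")

# Residual x = 2 under k = 1..4 costs 3, 3, 4, and 5 bits, matching
# the worked example later in the paper.
for k in (1, 2, 3, 4):
    cw = golomb_rice_encode(2, k)
    print(k, cw, len(cw))
```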

Table.2. Pixel value of an entire image

3. DETAILED DESCRIPTION OF VLSI-ORIENTED FELICS ALGORITHM


Complex Coding Flow in Adjusted Binary Code

The adjusted binary code is partitioned into three coding procedures: parameter computation, circular rotation, and codeword generation. The parameter computation generates the coding parameters; the circular rotation shifts the samples so that those less than the threshold fall in the middle section and the others in both side sections. After circular rotation, the codeword generation adds the threshold to each sample that is greater than or equal to the threshold, in the side sections, and encodes it with upper-bound bits [3]. Lower-bound bits are assigned to the remaining samples in the middle section. As a result, the codeword length of each sample is consistent with the probability distribution of in range. A sketch of these three procedures is given below.
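The sketch below organizes the encoder as the three procedures just named. The rotation constant is our reconstruction, chosen so that the output matches Table 1; it is not quoted from the paper:

```python
import math

def adjusted_binary_encode(sample, rng):
    """Adjusted binary code for a sample in [0, rng - 1], organized as
    the three procedures named above. A sketch; the shift constant is
    reconstructed so that the output reproduces Table 1."""
    # 1) Parameter computation.
    upper = math.ceil(math.log2(rng))
    lower = math.floor(math.log2(rng))
    threshold = (1 << upper) - rng
    shift = (rng - threshold) >> 1

    # 2) Circular rotation: middle samples land below the threshold.
    v = (sample - shift) % rng

    # 3) Codeword generation: lower-bound bits below the threshold;
    #    otherwise add the threshold and use upper-bound bits.
    if v < threshold:
        return format(v, f"0{lower}b")
    return format(v + threshold, f"0{upper}b")

# Reproduces Table 1 for delta = 4 (range 5):
print([adjusted_binary_encode(s, 5) for s in range(5)])
# -> ['111', '00', '01', '10', '110']
```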

Coding Flow

1. The first two pixels of the first row are directly packed into the bit stream without any encoding procedure.
2. Find the corresponding two reference pixels N1 and N2.
3. Assign L = min(N1, N2) and H = max(N1, N2).
4. Apply the adjusted binary code to P - L for in range, the Golomb-Rice code to L - P - 1 for below range, and the Golomb-Rice code to P - H - 1 for above range, as sketched below.
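A per-pixel sketch of this flow, reusing adjusted_binary_encode and golomb_rice_encode from the sketches above. The range-indicator prefix bits ("0", "10", "11") are our own illustrative convention, not taken from the paper:

```python
def felics_encode_pixel(P, N1, N2, k=2):
    """Encode one pixel P against its two reference pixels, following
    the coding flow listed above. Prefix bits are illustrative only."""
    L, H = min(N1, N2), max(N1, N2)
    if L <= P <= H:                                   # in range
        return "0" + adjusted_binary_encode(P - L, H - L + 1)
    if P < L:                                         # below range
        return "10" + golomb_rice_encode(L - P - 1, k)
    return "11" + golomb_rice_encode(P - H - 1, k)    # above range

print(felics_encode_pixel(P=12, N1=10, N2=14))        # in-range example
```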

Extra Storage Capacity in Golomb-Rice Code

The FELICS algorithm adaptively estimates the k parameter of the Golomb-Rice code to provide the best fit for the exponential decay rate in above range and below range. This estimation procedure induces additional computation load and storage requirements. Since various delta values incur different exponential decay rates due to the diversity of image texture, FELICS prepares a candidate set consisting of several k parameters, and the most efficient k parameter is selected from this candidate set [5]. An individual candidate set is assigned to each delta value, and the k parameter with the minimum cumulative codeword length in the candidate set is selected as the most efficient one for the Golomb-Rice code. For a residual of 2, the Golomb-Rice code produces a new codeword for the entire candidate set k = {1, 2, 3, 4}, and the codeword lengths are 3, 3, 4, and 5 bits, respectively. The cumulative codeword lengths of the k parameters in the candidate set are then immediately updated to 20, 61, 27, and 85 bits. The updated cumulation table is successively referenced by the next Golomb-Rice coding procedure [6]. A sketch of this selection-and-update step follows.
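In the sketch below, the delta index 7 and the prior cumulative lengths (17, 58, 23, 80) are hypothetical values chosen so the update lands on the 20, 61, 27, and 85 figures quoted above; the names are ours:

```python
def gr_length(x, k):
    """Codeword length (in bits) of Golomb-Rice for residual x."""
    return (x >> k) + 1 + k     # unary + separator + k binary bits

CANDIDATES = (1, 2, 3, 4)
# One cumulative-length row per delta value: 256 rows x 4 candidates.
cumulation = [[0] * len(CANDIDATES) for _ in range(256)]

def select_and_update(delta, residual):
    """Pick the k with minimum cumulative length, then charge every
    candidate the cost it would have paid for this residual."""
    row = cumulation[delta]
    best = CANDIDATES[min(range(len(CANDIDATES)), key=row.__getitem__)]
    for i, k in enumerate(CANDIDATES):
        row[i] += gr_length(residual, k)
    return best

cumulation[7] = [17, 58, 23, 80]        # hypothetical prior state
print(select_and_update(7, 2), cumulation[7])
# -> 1 [20, 61, 27, 85]
```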



3.3 DATA DEPENDENCY

Fig.2. Consecutive Pixels

Even though FELICS applies a simple and efficient method for k parameter selection in the Golomb-Rice code, this method also induces serious data dependency and limits hardware performance in parallel processing. Both P1 and P2 are encoded with the Golomb-Rice code and mapped to the same delta value in the cumulation table. In sequential processing, the cumulation table is completely updated by the P1 encoding procedure, so the updated cumulation table is available to be referenced by the subsequent P2 encoding procedure [3]. In parallel processing, the P2 encoding procedure cannot be performed simultaneously with the P1 encoding procedure, since the cumulation table is still being updated by P1 and is not yet available to P2. This data dependency is a bottleneck in high-throughput applications [1].
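The contrast can be made concrete. With the shared cumulation table, P2's k depends on P1's update, so the two calls below must run in order; once k is fixed (as in the modified algorithm of the next section), even and odd samples can be dispatched to independent engines. A toy illustration, reusing select_and_update and golomb_rice_encode from the sketches above:

```python
# Adaptive k is serial by necessity: the second call reads the table
# row that the first call has just written.
k1 = select_and_update(7, 2)   # P1's encoding updates the table ...
k2 = select_and_update(7, 5)   # ... and P2 must wait to read it.

# Fixed k leaves no shared state, so even and odd samples can be
# handled by two independent engines (here modeled as two lists).
FIXED_K = 2

def encode_two_engines(residuals):
    even = [golomb_rice_encode(r, FIXED_K) for r in residuals[0::2]]
    odd = [golomb_rice_encode(r, FIXED_K) for r in residuals[1::2]]
    return even, odd
```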

MODIFIED FELICS ALGORITHM


A. Simplified Adjusted Binary Code

To obtain a simplified adjusted binary code, a compact probability distribution model, SSGM, is adopted to reduce the arithmetic operations of the adjusted binary code [7]. With SSGM, residuals smaller than the threshold are allocated the shorter codewords, and the longer codewords are assigned to residuals greater than or equal to the threshold.

B. Fixed k Parameter in Golomb-Rice Code

Since the Golomb-Rice code is adopted for above range and below range, which exhibit various kinds of exponential decay rate, the exponential distribution function (EDF) can be exploited to analyze the impact on coding efficiency. It is defined as

EDF(x, u) = exp(-u * x)

where x represents the input residual of the Golomb-Rice code and u models the various kinds of exponential decay rate. EDF(x, u) is further divided by a normalization factor to yield the probability density function (PDF), defined as

PDF(x, u) = EDF(x, u) / F(u)

where F(u) represents the normalization factor. Since the residual of the Golomb-Rice code is L - P - 1 or P - H - 1 in 8-bit format, its range is [0, 254], and F(u) is defined as

F(u) = sum of EDF(x, u) for x = 0, 1, ..., 254.
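A small numerical sketch (ours) that evaluates the expected Golomb-Rice codeword length under this PDF for each candidate k, which is one way to motivate choosing a single fixed k; the decay values of u are hypothetical:

```python
import math

def expected_gr_length(k, u, max_x=254):
    """Expected Golomb-Rice codeword length for 8-bit residuals drawn
    from PDF(x, u) = exp(-u * x) / F(u), following the definitions
    above; the exact EDF form is reconstructed, not quoted."""
    F = sum(math.exp(-u * x) for x in range(max_x + 1))      # F(u)
    return sum(((x >> k) + 1 + k) * math.exp(-u * x) / F
               for x in range(max_x + 1))

# Slow decay favors a larger k, fast decay a smaller one:
for u in (0.05, 0.2, 0.8):
    best = min((1, 2, 3, 4), key=lambda k: expected_gr_length(k, u))
    print(f"u = {u}: best k = {best}")
```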

4. EXPERIMENT RESULTS AND DISCUSSIONS

Fig 3.1 Input Image

Fig 3.2 Re-Sized Image

Fig 3.3 Pixel Value of an Image





Fig 3.4 Output Waveform

Arithmetic operation | Adjusted Binary Code | Simplified Adjusted Binary Code
Add/Sub              | 6                    | 3
Compare              | 2                    | 1
Shift                | 2                    | 0
Total                | 10                   | 4

Table.3. Arithmetic operations of adjusted binary code and simplified adjusted binary code

                                | For Variable k (1, 2, 3, 4) | For Fixed k
No. of data in cumulation table | 4 x 256 = 1024              | 1 x 256 = 256
Data dependency                 | Yes                         | NA

Table.4. Output comparison of variable k value and fixed k value

The simplified adjusted binary code reduces the total arithmetic operations from 10 to 4, so the processing speed is increased. The number of data entries in the cumulation table for the variable k parameter is 1024, whereas for the fixed k it is 256; therefore, the storage area is also reduced.

5. CONCLUSION AND FUTURE ENHANCEMENT

The simplified adjusted binary code reduces the number of arithmetic operations and improves processing speed. The storage-less k parameter selection applies a fixed value in the Golomb-Rice code to remove the data dependency and the extra storage for the cumulation table. Two-level parallelism and four-stage pipelining are adopted to increase the throughput of the engine. This method can be further enhanced by multilevel parallelism.


REFERENCES

[1] Tsung-Han Tsai, Yu-Hsuan Lee, and Yu-Yu Lee, "Design and analysis of high-throughput lossless image compression engine using VLSI-oriented FELICS algorithm," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 18, no. 1, pp. 39-52, Jan. 2010.
[2] L. Xiaowen, X. Chen, X. Xie, G. Li, L. Zhang, C. Zhang, and Z. Wang, "A low power, fully pipelined JPEG-LS encoder for lossless image compression," in Proc. IEEE Int. Conf. Multimedia & Expo, 2007, pp. 1906-1909.
[3] W. D. Leon-Salas, S. Balkir, K. Sayood, N. Schemm, and M. W. Hoffman, "A CMOS imager with focal plane compression using predictive coding," IEEE J. Solid-State Circuits, vol. 42, no. 11, pp. 2555-2572, Nov. 2007.
[4] L. Yang, H. Lekatsas, and R. P. Dick, "High performance operating system controlled memory compression," in Proc. Des. Autom. Conf., Jul. 2006.
[5] R. Mehboob, S. A. Khan, and Z. Ahmed, "High speed lossless data compression architecture," in Proc. IEEE Int. Multitopic Conf., 2006, pp. 84-88.
[6] X. Xie, G. L. Li, X. K. Chen, C. Zhang, and Z. H. Wang, "A low complexity near-lossless image compression method and its ASIC design for wireless endoscopy system," in Proc. Int. Conf. ASICON, 2005.
[7] C.-C. Cheng, P.-C. Tseng, C.-T. Huang, and L.-G. Chen, "Multi-mode embedded compression codec engine for power-aware video coding system," in Proc. IEEE Workshop Signal Process. Syst., 2005.
[8] M. Milward, J. L. Nunez, and D. Mulvaney, "Design and implementation of a lossless parallel high-speed data compression system," IEEE Trans. Parallel Distrib. Syst., vol. 15, no. 6, pp. 481-490, Jun. 2004.