
CHAPTER 1: LOSSLESS DATA COMPRESSION

1. Debate on the topic: Efficiency of Entropy Encoding and Dictionary-Based Techniques for Lossless Data Compression.
1.1 Introduction to lossless data compression:
Data compression techniques have enabled the multimedia revolution. Compression is needed because communication and data exchange take place in digital form, i.e. as bytes of data, and multimedia content requires a very large number of bytes. Data compression is divided into two broad classes: lossless compression schemes and lossy compression schemes. Lossless data compression is a class of algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows an approximation of the original data to be reconstructed, in exchange for better compression ratios. Lossless compression preserves the accuracy of the source data by removing redundancy from it; during decompression, the original data is reconstructed by restoring the removed redundancy. The amount of redundancy that can be removed varies and depends strongly on the statistics of the source data. The reconstructed data is an exact replica of the original source data.

1.2 Efficient usage for different data types:
Different data types include text files, images, and so on. Text compression always uses lossless compression, because even very small differences in the reconstructed text can result in statements with very different meanings. Similarly, if images captured in the radiology domain are not reconstructed exactly, small, undetectable parts of the image could mislead the radiologist when processed. General multimedia images, however, can be compressed with lossy techniques to save additional bandwidth and storage space at the cost of losing finer detail. File formats such as GIF, TIFF and PDF use the dictionary-based LZW encoding algorithm.
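The defining property of lossless compression described above, that decompression yields an exact replica of the original, can be illustrated with a short sketch using Python's standard zlib library (the sample data below is purely illustrative):

```python
import zlib

# Minimal sketch of the lossless property: after compression and
# decompression the data is an exact replica of the original.
original = b"multimedia data, multimedia data, multimedia data " * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original             # exact reconstruction (lossless)
assert len(compressed) < len(original)  # redundancy has been removed
```

The highly repetitive sample data compresses well precisely because, as stated above, the amount of removable redundancy depends on the statistics of the source data.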

M.S Ramaiah School of Advanced Studies Postgraduate Engineering and Management Programme (PEMP)

1.3 Merits and demerits of entropy coding and dictionary based technique:
1.3.1 Merits of entropy coding: This technique is based on information-theoretic lossless data compression. Entropy coding creates and assigns a unique prefix code to each symbol that occurs in the input, using fewer bits to encode more frequent symbols.

1.3.2 Demerits of entropy coding: The encoder must first estimate, for the source being compressed:
- the self-information of an individual message or symbol taken from a given probability distribution,
- the entropy of a given probability distribution of messages or symbols, and
- the entropy rate of a stochastic process.
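The first two quantities listed above can be computed directly from a symbol probability distribution. A short sketch follows, using an illustrative four-symbol distribution (the probabilities are hypothetical, not taken from the text):

```python
import math

# Hypothetical symbol distribution, chosen so the numbers come out cleanly.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

def self_information(p):
    # Self-information of one symbol with probability p, in bits.
    return -math.log2(p)

# Entropy of the distribution: the expected self-information per symbol.
entropy = sum(p * self_information(p) for p in probs.values())

print(self_information(probs["a"]))  # 1.0 bit for p = 0.5
print(entropy)                       # 1.75 bits per symbol
```

The entropy (here 1.75 bits/symbol) is the lower bound on the average code length any lossless entropy coder can achieve for this source.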

1.3.3 Merits of dictionary-based techniques: They encode variable-length strings of symbols. Most common are methods where the dictionary starts in some predetermined state but its contents change during the encoding process, based on the data that has already been encoded. 1.3.4 Demerits of dictionary-based techniques: Compression relies on repeating occurrences being replaced by code words that reference the pattern; if the data contains few repeating patterns, little compression is achieved.
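The adaptive behaviour described in 1.3.3, a dictionary that starts in a predetermined state and grows during encoding, is exactly how the LZW algorithm mentioned in section 1.2 works. A minimal LZW encoder sketch:

```python
def lzw_encode(data: str):
    # Dictionary starts in a predetermined state (all 256 single-byte
    # characters) and grows as longer repeated patterns are seen.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w = ""
    out = []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                       # extend the current match
        else:
            out.append(dictionary[w])    # emit code for longest known string
            dictionary[wc] = next_code   # learn the new pattern
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out

# Classic example string: 24 symbols are encoded as only 16 codes.
codes = lzw_encode("TOBEORNOTTOBEORTOBEORNOT")
```

Note that the decoder can rebuild the same dictionary from the code stream alone, which is why no dictionary needs to be transmitted; the demerit in 1.3.4 appears when the input has few repeats and the dictionary grows without paying for itself.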

1.4 Coding Complexity & Decoding Capabilities:

Coding complexity increases for large colour images, video, audio, etc., in both classes of lossless data compression. In entropy encoding, if the input is an image, the number of probabilities grows with the image size, so the matrix computation becomes difficult and slow. In the dictionary technique, a large input image increases the number of symbols, and encoding becomes slow. In the modern world, however, coding complexity matters much less, because machines have high computation power. Lossless data compression techniques depend on the code words used to encode the data, whether text, audio, video or image, at the transmitter end. Each technique has its own specific code words generated during compression. The data can be decoded at the receiver end only when the same code words that were used to compress it at the transmitter are available. The question is whether every compression technique yields the matching code words at the receiver end; it is here that the compression techniques demonstrate their decoding capabilities through a correct algorithm.

1.5 Compression Ratio:

Compression algorithms are measured by their complexity in terms of the memory required to implement the algorithm, the speed of the algorithm, and how well the reconstruction resembles the original. The compression ratio is another way of measuring a compression algorithm. It is defined as the ratio of the number of bits required to represent the data before compression to the number of bits required to represent the data after compression. For example, storing an image made up of a square array of 256 x 256 pixels at one byte per pixel requires 65536 bytes; if the same image compresses to 16384 bytes, the compression ratio is 4:1.

Compression Ratio = Uncompressed Size / Compressed Size
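The worked example above can be checked in a few lines (the 16384-byte compressed size is the figure given in the text):

```python
# 256 x 256 image at one byte per pixel, as in the example above.
uncompressed_size = 256 * 256   # 65536 bytes
compressed_size = 16384         # bytes after compression (from the text)

# Compression ratio = uncompressed size / compressed size
ratio = uncompressed_size / compressed_size
print(f"{ratio:.0f}:1")  # 4:1
```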

1.6 Conclusion:
Entropy encoding generates optimal prefix codes of variable length from fixed-length input data. Dictionary-based encoding schemes generate variable-to-fixed-length codes. Depending on the deciding factor, both encoding schemes are efficient in their own ways. If memory is the factor, entropy encoding saves memory space, whereas a dictionary may sometimes even run out of memory. If decoding capability is the factor, entropy encoding is somewhat complex to decode, because the code words are of variable length and obtaining the probabilities involves complexity, whereas decoding fixed-length codes against a dictionary is very fast and easy.

Embedded communication system


CHAPTER 2: HUFFMAN CODING


2.1 Flow Chart for Huffman Coding:

START
1. Load the image file and store it in a variable.
2. Convert the image to grey scale.
3. Find the probability of each grey level in the image and assign the probabilities to a variable p.
4. Do:
   a. Arrange the probabilities in descending order.
   b. Add the two least probabilities.
   c. Replace the added value in the second-to-last place.
   d. Reduce the length of p by 1.
   While length(p) > 1.
5. Find redundancy = average length - entropy.
6. Find efficiency.
STOP

Figure 2.1 Flow chart for Huffman coding
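The steps of the flow chart can be sketched in Python. This is an illustrative implementation, not the assignment's own code: the repeated merging of the two least probabilities is done with a heap rather than by re-sorting p each pass, and the four grey-level probabilities are hypothetical stand-ins for step 3:

```python
import heapq
import itertools
import math

def huffman_codes(probs):
    """Build Huffman codes by repeatedly merging the two least-probable
    nodes, mirroring the Do/While loop of the flow chart."""
    counter = itertools.count()  # tie-breaker so equal probabilities compare cleanly
    heap = [(p, next(counter), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                       # While length(p) > 1
        p1, _, codes1 = heapq.heappop(heap)    # two least probabilities
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))  # added value replaces them
    return heap[0][2]

# Hypothetical probabilities of four grey levels (stand-in for steps 1-3).
probs = {"g0": 0.4, "g1": 0.3, "g2": 0.2, "g3": 0.1}
codes = huffman_codes(probs)

avg_length = sum(probs[s] * len(c) for s, c in codes.items())
entropy = -sum(p * math.log2(p) for p in probs.values())
redundancy = avg_length - entropy    # step 5: redundancy = average length - entropy
efficiency = entropy / avg_length    # step 6: fraction of emitted bits carrying information
```

For this distribution the average code length is 1.9 bits/symbol against an entropy of about 1.846 bits/symbol, giving a small positive redundancy and an efficiency of roughly 97%.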

