NOORUL ISLAM COLLEGE OF ENGINEERING


DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ENGINEERING
2 and 16 MARK QUESTIONS AND ANSWERS

(FOR II SEMESTER M.E COMMUNICATION SYSTEM)

SUBJECT CODE : WS1621

SUBJECT NAME : MULTIMEDIA COMPRESSION TECHNIQUES

Prepared by
C.P.SREE BALA LEKSHMI
LECTURER,
DEPARTMENT OF ECE.

www.Vidyarthiplus.com

MULTIMEDIA COMPRESSION TECHNIQUES


TWO MARK QUESTIONS AND ANSWERS
1. What is rate distortion theory?
Rate distortion theory is concerned with the trade-off between rate and distortion in
lossy compression schemes. If the average number of bits used to represent each sample
value (i.e., the rate) is decreased, there will be an increase in distortion, and vice versa.
Rate distortion theory quantifies this trade-off.

2. Define the basic concepts of information theory.


If the probability of an event is low, the amount of self-information associated with it
is high; conversely, if the probability of an event is high, the information associated with it is low.
The self-information associated with any event A is given by
i(A) = -log P(A).
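As a quick numerical check of the formula above, here is a minimal sketch in Python (the function name is illustrative, and the logarithm base is taken as 2 so the result is in bits):

```python
import math

def self_information(p):
    """Self-information i(A) = -log2 P(A), in bits."""
    return -math.log2(p)

# A rare event carries more information than a common one.
assert self_information(0.125) == 3.0   # P = 1/8 -> 3 bits
assert self_information(0.5) == 1.0     # P = 1/2 -> 1 bit
```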

3. Write any three techniques for lossless compression?


i) Huffman coding.
ii) Shannon-Fano coding.
iii) Arithmetic coding.

3. How is entropy related to performance measures?


The entropy is a measure of the average number of binary symbols needed to code
the output of the source. Hence, for a compression scheme to be lossless, it is necessary to
code the output of the source with an average number of bits at least equal to the entropy of the
source.
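The entropy referred to above can be computed directly from the symbol probabilities; a minimal sketch (function name is my own choice):

```python
import math

def entropy(probs):
    """First-order entropy H = -sum p*log2(p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 4-symbol source needs 2 bits/symbol on average,
# so no lossless code for it can average fewer than 2 bits/symbol.
assert entropy([0.25, 0.25, 0.25, 0.25]) == 2.0
```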

5. What do you mean by lossy and lossless compression?


If the reconstructed data at the receiving end is the same as the original data,
then it is a lossless compression.


If the reconstructed data at the receiving end differs from the original data,
then it is a lossy compression.

6. Write any three techniques for lossy compression?


i) Subband coding
ii) Wavelet based compression
iii) JPEG

7. Define vector quantization and give its merit over scalar quantization.
If the set of inputs and outputs of a quantizer are vectors, then it is called vector
quantization. For a given rate, the use of vector quantization results in lower distortion
than scalar quantization.

8. What are the important applications of data compression?


Data compression schemes find application in mobile communication, digital
TV, and satellite TV.

9. Write the taxonomy of compression techniques?


Based on the requirements of reconstruction, data compression schemes can be
classified as lossy compression and lossless compression.

10. What is meant by companded quantization?


In companded quantization the input is first mapped through a compressor
function. This function stretches the high-probability regions close to the origin and
correspondingly compresses the low-probability regions away from the origin. The output
of the compressor function is quantized using a uniform quantizer, and the quantized value
is then transformed via an expander function. This is known as companded quantization.
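The compressor-quantizer-expander chain described above can be sketched with the standard mu-law compressor as the compressor function (mu = 255 is the North American telephony convention; the step size below is an arbitrary illustrative choice):

```python
import math

MU = 255.0  # mu-law parameter used in North American telephony

def compress(x):
    """Compressor: stretches the high-probability region near the origin."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Expander: the inverse of the compressor."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def companded_quantize(x, step=2.0 / 16):
    """Uniform quantization applied between compressor and expander."""
    y = compress(x)                   # map input through compressor
    yq = step * round(y / step)       # uniform quantizer
    return expand(yq)                 # map back through expander

# Without quantization the chain is (numerically) the identity.
assert abs(expand(compress(0.3)) - 0.3) < 1e-12
```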

11. What is meant by modelling?


Modelling is extracting the information about any redundancy that
exists in the data and describing the redundancy in the form of a model.


12. What are the parts of human audio visual system?


The various parts of the human audio-visual system are the retina, fovea, tympanic
membrane, cochlea, oval window, etc.

13. Give some models that are used in lossless compression?


The various models that are used in lossless compression schemes are
probabilistic models, physical models, Markov models, composite
source models, etc.

14. Give some models that are used in a lossy compression.


The various models that are used in a lossy compression scheme are probabilistic
models, physical models, and linear system models.

15. What is a composite source model?


In many applications, it is not easy to use a single model to describe the source. In
such cases, we can define a composite source, a combination of several sources with only one source being active at a
time.

16. What are prefix codes?


A code in which no codeword is a prefix of another codeword is called a prefix
code (e.g., a Huffman code).
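The prefix property can be checked mechanically; a minimal sketch (function name is illustrative):

```python
def is_prefix_code(codewords):
    """True if no codeword is a prefix of another, so the code can be
    decoded uniquely by reading bits left to right (as with Huffman codes)."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

assert is_prefix_code(["0", "10", "110", "111"])   # a Huffman-style code
assert not is_prefix_code(["0", "01", "11"])       # "0" is a prefix of "01"
```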

17. Give any two characteristics of a code.


i) A code should be uniquely decodable.
ii) The codewords for letters that occur more frequently should be shorter than those
for letters that occur less frequently.

18. What are the two types of quantization error?


Granular error and slope overload error.


19. What are two types of adaptive quantization?


The two types of adaptive quantization are forward adaptive quantization and
backward adaptive quantization.

20. What do you mean by forward adaptive quantization?


In forward adaptive quantization, the source output is divided into blocks of data.
Each block is analyzed before quantization, and the quantizer parameters are set
accordingly. The settings of the quantizer are then transmitted to the receiver as side
information.

21. What is meant by optimum prefix codes?


In an optimum code, symbols that occur more frequently (have a higher
probability of occurrence) will have shorter codewords than symbols that
occur less frequently.
In an optimum code, the two symbols that occur least frequently will have
codewords of the same length.

21. Write any three techniques for lossless compression?


a. Huffman coding
b. Adaptive Huffman coding
c. Arithmetic coding
d. Shannon-Fano coding
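The first technique in the list above, Huffman coding, can be sketched compactly. This is a minimal illustrative implementation (the function name and the tie-breaking scheme are my own choices, not part of any standard):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code from {symbol: weight}; returns {symbol: bitstring}."""
    # Each heap entry: (weight, unique tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        # Merge the two least-probable subtrees, prefixing their codes.
        w1, _, c1 = heapq.heappop(heap)
        w2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

code = huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
# More probable symbols get shorter codewords; the code is prefix-free.
assert len(code["a"]) < len(code["c"])
```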

22. What are the applications of Arithmetic coding?


a. Bi-level Image compression-JBIG standard
b. JBIG2
c. Image compression

23. What does digram-coding mean?


a. It is one of the most common forms of static dictionary coding.
b. In this form of coding, the dictionary consists of all the letters of the source
alphabet, followed by as many pairs of letters, called digrams, as can be accommodated.
c. The digram encoder reads a two-character input and searches the dictionary to
see if this pair exists. If so, its index is encoded and transmitted.
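The encoder loop described in (b) and (c) can be sketched as follows (the dictionary contents and function names are illustrative):

```python
def build_digram_dict(alphabet, pairs):
    """Static dictionary: all single letters first, then frequent digrams."""
    entries = list(alphabet) + list(pairs)
    return {entry: index for index, entry in enumerate(entries)}

def digram_encode(text, dictionary):
    """Read two characters; emit the digram index if the pair is in the
    dictionary, otherwise emit the single-letter index and advance by one."""
    out, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if len(pair) == 2 and pair in dictionary:
            out.append(dictionary[pair])
            i += 2
        else:
            out.append(dictionary[text[i]])
            i += 1
    return out

d = build_digram_dict("abc", ["ab", "ca"])
assert digram_encode("abcab", d) == [d["ab"], d["ca"], d["b"]]
```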

24. What is LZ family algorithm?


a. It is one of the most widely used adaptive dictionary-based techniques.
b. LZ77, LZ78, and LZW are the different approaches in this family.

25. Define offset in LZ77 approach?


To encode the sequence in the look-ahead buffer, the encoder moves a
search pointer back through the search buffer until it encounters a match to the
first symbol in the look-ahead buffer. The distance of the pointer from the look-ahead
buffer is called the offset.

26. Define search buffer and look ahead buffer?


The encoder examines the input sequence through a sliding window. The
window consists of two parts: a search buffer that contains a portion of the recently
encoded sequence, and a look-ahead buffer that contains the next portion of the
sequence to be encoded.
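The search for the (offset, length) pair described in the two answers above can be sketched as follows. This is a simplified illustration: it restricts matches to the search buffer, whereas full LZ77 also allows a match to run into the look-ahead buffer.

```python
def longest_match(window, lookahead):
    """Find the longest match for the start of `lookahead` inside the
    search buffer `window`; returns (offset, length), where the offset is
    measured back from the end of the window ((0, 0) if no match)."""
    best_off, best_len = 0, 0
    for start in range(len(window)):
        length = 0
        while (length < len(lookahead)
               and start + length < len(window)
               and window[start + length] == lookahead[length]):
            length += 1
        if length > best_len:
            best_off, best_len = len(window) - start, length
    return best_off, best_len

# "abra" appears 4 symbols back from the end of the search buffer.
assert longest_match("cadabra", "abra") == (4, 4)
```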

27. Give some application of LZW?


a. File compression - the UNIX compress utility
b. Image compression - GIF (Graphics Interchange Format)
c. Compression over modems - V.42bis

28. What does Static Dictionary mean?


a. A static dictionary is used when prior knowledge about the source is
available.
b. The dictionary is built in advance and does not change with the input.


29. What is meant by adaptive dictionary?


a. This is used in situations where there is no prior knowledge about the
source.
b. The dictionary is adapted depending on the input source.

30. Is Huffman coding lossy or lossless compression?


Huffman coding is a lossless compression.

31. Give an example for offline encoding and online encoding?


a. Huffman coding is an offline coding in which the data are stored in a
buffer and then encoded.
b. Adaptive Huffman coding is an online encoding in which the input data
are coded directly using a tree-updating technique.

32. Define lexicographic ordering?


In lexicographic ordering the ordering of the letters in an alphabet induces an
ordering on the words constructed from this alphabet.
Ex: ordering of words in a dictionary.

33. What is the algorithm used in JBIG?


a. Progressive transmission algorithm
b. Lossless compression algorithm.
34. What are the different approaches used in a adaptive dictionary technique?
a. LZ77
b. LZ78
c. LZW
35. What is meant by progressive transmission?
In progressive transmission of an image, a low-resolution representation of
the image is sent first. This low-resolution representation requires only a few bits to encode. The


image is then updated or refined to the desired fidelity by transmitting more and
more information.

38. Define lossless channel.


The channel described by a channel matrix with only one nonzero element
in each column is called a lossless channel. In a lossless channel, no source
information is lost in transmission.

39. Define Deterministic channel


A channel described by a channel matrix with only one nonzero element in
each row is called a deterministic channel and this element must be unity.

40. What is the importance of sub-band coding in audio compression?


In sub-band coding the source output is separated into different bands of
frequencies. This results in frequency bands with different characteristics, so
we can choose the compression scheme most suited to each characteristic. It also
allows a variable bit allocation to the various frequency components depending
upon their information content. This decreases the average number of bits required
to code the source output.

41. What are the parameters used in silence compression?


Silence compression in sound files is the equivalent of run-length coding on
normal data files. The various parameters are:
A threshold value below which a sample can be considered silence.
A special silence code followed by a single byte that indicates how many
consecutive silence codes are present.
A threshold to recognize the start of a run of silence: only if we have
sufficient bytes of silence do we apply silence coding.
A parameter to indicate how many consecutive non-silence codes are needed,
after a string of silence, before we can declare the silence run to be over.
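The first three parameters above can be sketched in a toy run-length coder. All the constants (threshold, minimum run length, escape byte, and the assumption of 8-bit samples centered at 128) are illustrative choices, not part of any standard:

```python
SILENCE_THRESHOLD = 2   # |sample - 128| below this counts as silence (8-bit samples assumed)
MIN_RUN = 4             # only runs at least this long are coded
SILENCE_CODE = 0xFF     # escape byte, followed by the run length

def silence_compress(samples):
    """Run-length code long stretches of near-silence; pass other bytes through."""
    out, i = [], 0
    while i < len(samples):
        j = i
        while j < len(samples) and abs(samples[j] - 128) < SILENCE_THRESHOLD:
            j += 1
        if j - i >= MIN_RUN:            # long enough: emit escape + run length
            out += [SILENCE_CODE, j - i]
            i = j
        else:                           # too short: copy samples through
            out.append(samples[i])
            i += 1
    return out

quiet = [128, 129, 127, 128, 128, 128]
assert silence_compress(quiet + [200]) == [SILENCE_CODE, 6, 200]
```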

42. List the various analysis / synthesis speech schemes.


The various analysis / synthesis schemes are
Channel vocoders (each segment of input speech is analyzed using a bank
of filters)
Linear predictive vocoders (US government standard at the rate of 2.4 kbps)
Code-excited linear prediction (CELP) based schemes [Federal Standard
1016 (FS-1016), G.728 speech standard]
Sinusoidal coders, which provide excellent performance at rates of
4.8 kbps, and


Mixed excitation linear prediction (MELP), which is the new
2.4 kbps federal standard speech coder.

43. What are the factors to be considered for a voiced/ unvoiced decision in
predictive coders?
The following factors are considered:
Voiced speech (/a/, /e/, /o/) has larger amplitude and hence more energy than unvoiced
signals (/s/, /f/).
Unvoiced speech has higher frequencies; hence unvoiced speech crosses the x = 0 line
more often than voiced speech.
Checking the magnitudes of the coefficients of the equivalent vocal tract filter.

Therefore, we can decide whether the speech is voiced or unvoiced based on
the energy in the segment relative to the background noise and the number of zero
crossings within a specified window.

44. What are the components of MPEG audio scheme?


The Moving Picture Experts Group (MPEG) has proposed three audio coding
schemes, called Layer 1, Layer 2, and Layer 3 coding. The coders are upward
compatible: a Layer N decoder is able to decode the bit stream generated by a Layer N-1
encoder.
Layer 1 and Layer 2 coders both use a bank of 32 filters, splitting the input into 32
bands. Each band has a bandwidth of fs/64, where fs is the sampling frequency.
Allowable sampling frequencies are 32,000, 44,100, and 48,000 samples/sec.
The output of each sub-band is quantized using a uniform quantizer with a
variable number of bits. The number of bits is assigned based on the
masking property of human hearing: if we have a large-amplitude signal at
one frequency, the audibility of the neighboring signals is affected. Hence, if we
have a large signal in one of the sub-bands, we need only a few bits to code a
neighboring sub-band.

45. Define vocoders. What are the types of vocoders?
Vocoders (voice coders) reproduce synthetic-sounding speech with a somewhat
artificial quality. They can transmit signals at a low bit rate,
in the range of 1.2 to 2.4 kbps. The receiver uses the model parameters along with
the transmitted parameters to synthesize an approximation to the source output.
The types of vocoders include the linear predictive coder and code-excited linear
prediction.

46. What is known as a quadrature mirror filter?


The filter bank in sub-band coding consists of a cascade of filter stages,
where each stage consists of a low-pass filter (LPF) and a high-pass filter (HPF).
The most popular among these filters is the QMF. These filters have the property
that if the impulse response of the LPF is h(n), then the high-pass impulse response
is given by g(n) = (-1)^n h(N-1-n), with the symmetry condition
h(N-1-n) = h(n), n = 0, 1, ..., (N/2)-1.
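The relation g(n) = (-1)^n h(N-1-n) above is easy to verify mechanically; a minimal sketch (the two-tap Haar prototype is just a convenient illustrative choice):

```python
def qmf_highpass(h):
    """Derive the high-pass impulse response g[n] = (-1)^n * h[N-1-n]
    from a low-pass prototype h of even length N."""
    N = len(h)
    return [((-1) ** n) * h[N - 1 - n] for n in range(N)]

# Two-tap Haar-style low-pass prototype.
h = [0.5, 0.5]
g = qmf_highpass(h)
assert g == [0.5, -0.5]   # alternating signs, time-reversed taps
```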


47. Application to speech coding: G.722


G.722 provides a technique for wideband coding of speech signals that is based
on sub-band coding. It provides high-quality speech at 64 kbps. Two other
modes encode the input at 56 and 48 kbps; these modes are used
when auxiliary data channels are needed.

48. Define bit allocation?


The allocation of bits between the sub-bands is an important design parameter.
Different sub-bands contain different amounts of information, so we need to allocate
the available bits among the sub-bands according to some measure of information.
The bit allocation can have a significant impact on the quality of the final
reconstruction, especially when the information content of the different bands is very
different.

49. Give an example of bit allocation procedure in basic sub band coding?
Suppose we are decomposing the source output into four bands and we want a
coding rate of 1 bit per sample. We can accomplish this by using 1 bit per sample
for each of the four bands. Alternatively, we could discard the output of two
of the bands and use 2 bits per sample for the two remaining bands. Another way is
to discard the output of three of the four filters and use 4 bits per sample to
encode the output of the remaining filter.

50. What is meant by decimation & interpolation?


Suppose we have a sequence X0, X1, X2, .... We can divide the sequence
into two subsequences, X0, X2, X4, ... and X1, X3, X5, ..., where 1/z
corresponds to a delay of one sample and M denotes sub-sampling by a factor of
M. This sub-sampling process is called down-sampling or decimation. The original
sequence can be recovered from the two down-sampled sequences by inserting zeros
between consecutive samples of the subsequences. This is called up-sampling or
interpolation.
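The down-sampling and zero-insertion steps described above can be sketched directly (function names are illustrative; a real interpolator would follow the zero insertion with a low-pass filter):

```python
def decimate(x, M=2):
    """Down-sample: keep every M-th sample."""
    return x[::M]

def interpolate(x, M=2):
    """Up-sample: insert M-1 zeros after each sample."""
    out = []
    for sample in x:
        out.append(sample)
        out.extend([0] * (M - 1))
    return out

x = [10, 11, 12, 13, 14, 15]
assert decimate(x) == [10, 12, 14]
assert interpolate(decimate(x)) == [10, 0, 12, 0, 14, 0]
```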

51. The speech is sampled at 16,000 samples per second, yet the cutoff frequency
of the anti-aliasing filter is 7 kHz rather than 8 kHz. Give reasons.
Even though the speech output is sampled at 16,000 samples per second, the
cutoff frequency of the anti-aliasing filter is 7 kHz. The reason is that the cutoff
of a practical anti-aliasing filter is not sharp like that of an ideal
low-pass filter; therefore, the highest frequency component in the filter output
will be greater than 7 kHz.

52. How are the masking properties of the human ear used in reducing the number of
bits in a uniform quantizer?
If we have a large amplitude signal at one frequency, it affects the
audibility of signals at other frequencies. In particular, a loud signal at one
frequency may make quantization noise at other frequencies inaudible. Therefore,
if we have a large signal in one of the sub bands, we can tolerate more


quantization error in the neighboring bands and use fewer bits, so the number of bits
can be reduced in the uniform quantizer.

53. Generally, the autocorrelation function is used as a tool for obtaining the pitch
period, but in linear predictive coders AMDF is used. Why?
Voiced speech is not exactly periodic, which makes the maximum lower
than we would expect from a periodic signal. Generally, a maximum is detected
by checking the autocorrelation value against a threshold; if the value is greater
than the threshold, a maximum is declared to have occurred. When there is
uncertainty about the magnitude of the maximum value, it is difficult to select a
value for the threshold. Another problem occurs because of interference due to
other resonances in the vocal tract. So the Average Magnitude Difference Function
(AMDF) is used instead.
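The AMDF itself is simple to compute: for a candidate lag P it averages |y[n] - y[n-P]|, and dips (rather than peaks) at the pitch period. A minimal sketch (function name is illustrative):

```python
def amdf(y, P):
    """Average Magnitude Difference Function at lag P:
    small values indicate the signal nearly repeats with period P."""
    diffs = [abs(y[n] - y[n - P]) for n in range(P, len(y))]
    return sum(diffs) / len(diffs)

# A perfectly periodic sequence gives AMDF = 0 at its period,
# and a larger value at other lags.
y = [0, 3, 1, 0, 3, 1, 0, 3, 1]
assert amdf(y, 3) == 0
assert amdf(y, 2) > 0
```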

54. What are formants? What are its properties?


Not all frequency components of speech are equally important. As the vocal
tract is a tube of non-uniform cross-section, it resonates at a number of different
frequencies. These frequencies are called formants.
The formant values change with different sounds, but the ranges in which they
occur can be identified. For example, the first formant occurs in the range
200-800 Hz for a male speaker and in the range 250-1000 Hz for a female
speaker.

55. What is regular pulse excitation coding?


The Multipulse Linear Predictive Coding (MP-LPC) algorithm was later
modified. Instead of using excitation vectors in which the nonzero values are
separated by an arbitrary number of zero values, the nonzero values were forced to
occur at regularly spaced intervals. Furthermore, the nonzero values were allowed
to take on a number of different values. This scheme is called regular pulse
excitation (RPE) coding.

56. Define aliasing. What is anti-aliasing filter?


Components with frequencies higher than half the sampling rate show up
at lower frequencies. This is called aliasing.
In order to prevent aliasing, most systems that require sampling will
contain an anti-aliasing filter that restricts the input to the sampler to be less than
half the sampling frequency.

57. Give the Nyquist rule.


If the highest frequency component of a signal is f0, then we need to
sample the signal at a frequency of more than 2f0 times per second. This result is known as
the Nyquist theorem or Nyquist rule.
It can also be extended to signals that only have frequency components
between two frequencies f1 and f2. In order to recover the signal exactly, we need
to sample the signal at a rate of at least 2(f2-f1) samples per second.


58. What are filter coefficients?


The general form of the input-output relationship of the filter is given by

yn = sum(i = 0 to N) ai x(n-i) + sum(i = 1 to M) bi y(n-i)

where the sequence {xn} is the input to the filter, the sequence {yn} is the output
from the filter, and the values {ai} and {bi} are called the filter coefficients.
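The difference equation above can be evaluated directly; a minimal sketch (function name is illustrative). Running it on an impulse also demonstrates the FIR/IIR distinction of the next question:

```python
def filter_signal(x, a, b):
    """Evaluate y[n] = sum_i a[i]*x[n-i] + sum_j b[j]*y[n-j] (j >= 1),
    with a = [a0, a1, ..., aN] and b = [b1, ..., bM]."""
    y = []
    for n in range(len(x)):
        acc = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        acc += sum(b[j - 1] * y[n - j] for j in range(1, len(b) + 1) if n - j >= 0)
        y.append(acc)
    return y

impulse = [1, 0, 0, 0, 0]
# FIR (all b = 0): the impulse response dies out after the taps end.
assert filter_signal(impulse, [1, 0.5], []) == [1, 0.5, 0, 0, 0]
# IIR (some b nonzero): the impulse response keeps decaying forever.
assert filter_signal(impulse, [1], [0.5]) == [1, 0.5, 0.25, 0.125, 0.0625]
```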

59. What is FIR and IIR filters?


If the input sequence is a single 1 followed by all 0s, the output sequence is called
the impulse response of the filter.
If the filter coefficients bi are all 0, then the impulse response will die out after N
samples. These filters are called Finite Impulse Response (FIR) filters. The
number N is sometimes called the number of taps in the filter.
If any of the bi have nonzero values, the impulse response can continue forever.
Filters with nonzero values for some of the bi are called Infinite Impulse Response
(IIR) filters.

60. Define quad tree?


A quad tree is a partitioning method used in fractal compression. In this
method we start by dividing the image into maximum-size range blocks. If a
particular block does not give a satisfactory reconstruction, we divide it into
four blocks. These blocks in turn can, if needed, also be divided into four blocks. This
method of partitioning is called quad tree partitioning.

61. In what way SPIHT is more efficient than EZW?


In EZW, when an entire subtree is insignificant, we transmit all the
coefficients in it with a single zerotree root (ZTR) label.
The SPIHT algorithm uses partitioning of the trees in a manner that
tends to keep insignificant coefficients together in larger subsets.

62. What is massic transformation?


In fractal compression, the massic transformation adjusts the intensity
and orientation of the pixels in a domain block.
63. What is progressive transmission?


In progressive transmission of an image, a low-resolution
representation of the image is sent first. This low-resolution representation requires very
few bits to encode.

64. What is the difference in JPEG and JPEG 2000?


JPEG 2000 differs mainly in the transform coding used:
in JPEG 2000, wavelets are used to perform the decomposition of the image, whereas in JPEG
the DCT is used.

65. What is fractal compression?


Fractal compression is a lossy compression method used to compress
images using fractals. The method is best suited for photographs of natural scenes.
The fractal compression technique relies on the fact that, in certain images, parts of
the image resemble other parts of the same image.

66. What is EBCOT?


Embedded Block Coding with Optimized Truncation (EBCOT) is a block
coding scheme that generates an embedded bit stream. It organizes the bit stream in a
succession of layers, each layer corresponding to a certain distortion level. Within
each layer, each block is coded with a variable number of bits. The partitioning of
bits between blocks is obtained using a Lagrangian optimization. The quality of the
reproduction is proportional to the number of layers received.

67. What is a wavelet transform?


Wavelets are functions defined over a finite interval and having an average
value of zero. The basic idea of the wavelet transform is to represent an arbitrary
function as a superposition of a set of such wavelets or basis functions. These
basis functions are obtained from a single prototype wavelet, called the mother
wavelet, by dilations or contractions and translations.

68. Define delta modulation


Delta modulation is the one-bit version of differential pulse code modulation.

69. Define adaptive delta modulation


The performance of a delta modulator can be improved significantly by
making the step size of the modulator assume a time-varying form. In
particular, during a steep segment of the input signal the step size is
increased; conversely, when the input signal is varying slowly, the step size is
reduced. In this way, the step size adapts to the level of the signal. The
resulting method is called adaptive delta modulation (ADM).

70. Name the types of uniform quantizer?


Mid-tread type quantizer.
Mid-riser type quantizer.

71. Define mid-tread quantizer?


The origin of the signal lies in the middle of a tread of the staircase.
(Figure: staircase input-output characteristic with output levels 0, ±a, ±2a, ±3a;
decision boundaries at ±a/2, ±3a/2, ...; an overload level at the peak-to-peak
excursion, where a = delta, the step size.)
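The mid-tread characteristic can be expressed in one line of code: rounding the input to the nearest multiple of the step size places the origin in the middle of a tread, so small inputs map to a zero output. A minimal sketch (function name is illustrative):

```python
def mid_tread_quantize(x, a):
    """Mid-tread uniform quantizer with step size a: inputs in
    (-a/2, a/2) map to the output level 0."""
    return a * round(x / a)

assert mid_tread_quantize(0.2, 1.0) == 0.0    # inside the central tread
assert mid_tread_quantize(0.6, 1.0) == 1.0
assert mid_tread_quantize(-1.4, 1.0) == -1.0
```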

72. Define quantization error?


Quantization error is the difference between the output and input values of
quantizer.


73. Define mid-riser quantizer?


The origin of the signal lies in the middle of a riser of the staircase.
(Figure: staircase input-output characteristic with output levels ±a/2, ±3a/2, ...;
decision boundaries at a, 2a, 3a, 4a, ...; an overload level at the input extremes.)

74. Draw the quantization error for the mid tread and mid-rise type of
quantizer?
For the mid-tread type: the quantization error is a sawtooth function of the input,
bounded by ±a/2, and zero at the origin.
For the mid-riser type: the quantization error is likewise a sawtooth function of the
input, bounded by ±a/2, but equal to a/2 at the origin.

75. What do you mean by non-uniform quantization?


The step size is not uniform. A non-uniform quantizer is characterized by a step size
that increases as the separation from the origin of the transfer characteristic
increases. Non-uniform quantization is otherwise called robust quantization.

76. What is the disadvantage of uniform quantization over non-uniform
quantization?
The SNR decreases with decreasing input power level in a uniform quantizer, whereas
a non-uniform quantizer maintains a nearly constant SNR over a wide range of input
power levels. This type of quantization is called robust quantization.

77.What is video compression?


Video compression can be viewed as the compression of a sequence of
images; in other words, image compression with a temporal component. Video
compression makes use of temporal correlation to remove this redundancy.

78.What is motion compensation?


The previous reconstructed frame is used to generate a
prediction for the current frame, and the prediction error, or residual, is encoded and
transmitted to the receiver. The previous reconstructed frame is also available at
the receiver, and the receiver knows the manner in which the prediction was
performed, so it can use this information to generate the prediction values and add
them to the prediction error to generate the reconstruction. This prediction
operation in video coding takes into account the motion of objects in the frame,
which is known as motion compensation.
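The motion estimation behind this prediction is commonly done by block matching: for each block of the current frame, search a small window of the previous frame for the block that matches best. A minimal full-search sketch using the sum of absolute differences (all names, the 2x2 block size, and the search range are illustrative choices):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(p - q) for row_a, row_b in zip(block_a, block_b)
               for p, q in zip(row_a, row_b))

def block(frame, top, left, size):
    """Extract a size x size block with its top-left corner at (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def best_motion_vector(prev, cur, top, left, size, search=1):
    """Full search in a small window of the previous frame for the block
    of `cur` at (top, left); returns the (dy, dx) with minimum SAD."""
    target = block(cur, top, left, size)
    candidates = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if 0 <= ty <= len(prev) - size and 0 <= tx <= len(prev[0]) - size:
                candidates.append((sad(block(prev, ty, tx, size), target), (dy, dx)))
    return min(candidates)[1]

prev = [[0, 0, 0, 0],
        [0, 9, 8, 0],
        [0, 7, 6, 0],
        [0, 0, 0, 0]]
cur  = [[9, 8, 0, 0],
        [7, 6, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
# The 2x2 block at (0, 0) in `cur` is found at offset (1, 1) in `prev`.
assert best_motion_vector(prev, cur, 0, 0, 2) == (1, 1)
```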

79.What are the disadvantages of video compression?


We do not perceive motion video in the same manner as we
perceive still images. Motion video may mask coding artifacts that would be
visible in still images. On the other hand, artifacts that may not be visible in
reconstructed still images can be very annoying in reconstructed motion video
sequences.


80.What is the advantage of loop filter?


Sharp edges in the block used for prediction can result in the
generation of sharp changes in the prediction error. This in turn causes high values for the
high-frequency coefficients in the transform, which can increase the transmission rate. To
avoid this, prior to taking the difference, the prediction block can be smoothed by
using a two-dimensional spatial filter. The filter is separable: it can be implemented as
a one-dimensional filter that first operates on the rows and then on the columns. The filter
coefficients are 1/4, 1/2, 1/4, except at block boundaries where one of the filter taps
would fall outside the block.

81.Differentiate global motion & local motion?


In the three-dimensional model-based approach to compression of
facial image sequences, a generic wire-frame model is constructed using triangles.
Once this model is available to both transmitter and receiver, only changes in the face
are transmitted to the receiver. These changes can be classified as global motion or local
motion. Global motion involves movement of the head, while local motion involves
changes in features, i.e., changes in facial expression.

82.What is MPEG-4?
The standard views a multimedia scene as a collection of objects. These
objects can be visual, such as a still background or a talking head, or aural, such as music or speech.
Each of these objects can be coded independently using different
techniques to generate separate elementary bit streams. These bit streams are
multiplexed along with a scene description. The protocol for managing the elementary
streams and their multiplexed version, called the Delivery Multimedia
Integration Framework (DMIF), is an important part of MPEG-4.

83. What is H.261 standard?


The earliest DCT-based video coding standard is the H.261 standard. An
input image is divided into blocks of 8x8 pixels. For a given 8x8 block, we subtract
the prediction generated using the previous frame. The difference between the block

being encoded and the prediction is transformed using a DCT. The transform
coefficients are quantized, and the quantization labels are encoded using a variable-length
code.

84. What is MPEG?


Standards for asymmetric applications have been developed by ISO and
IEC, known as MPEG. MPEG standards were set up at different rates for
applications that require storage of audio and video on digital storage media.
MPEG-1, MPEG-2, and MPEG-3 were targeted at rates of 1.5, 10, and 40 Mb/s, respectively.

85. List out one application of wavelet based computation?


One of the most popular applications of wavelets has been image
compression. The JPEG 2000 standard, designed to update and replace the
current JPEG standard, uses wavelets instead of the DCT to perform the
decomposition of the image.

86. What is a mother wavelet?


The single prototype function, from which all the other wavelets are obtained
by scaling (dilations or contractions) and translations, is called the mother wavelet.

87. What is group of pictures?


Different frames, such as I frames, P frames, and B frames, are organized together
in a group, called a group of pictures (GOP). The group of pictures is the
smallest random access unit in the video sequence.

88. Give the different orders in MPEG1 standarad.


Display order and bitstream order.
Display order is the sequence in which the video frames are displayed to the
user.
Bitstream order is the order in which frames are processed and transmitted, which can differ from the display order.


89. What is constrained parameter bitstream?


The MPEG committee has provided suggested values for various
parameters such as the vertical picture size, horizontal picture size, and pixel rate. These
suggested values are called the constrained parameter bitstream.

90. What do you meant by profiles and levels in MPEG 2 standard?


A profile defines the algorithms used in the MPEG-2 standard, and a level
defines the constraints on the parameters.

91. Name the profiles and levels used in MPEG 2 standard.


The profiles are simple, main, SNR-scalable, spatially scalable, and high. The levels are
low, main, high 1440, and high.
The low level corresponds to a frame size of 352 x 240.
The main level corresponds to a frame size of 720 x 480.
The high 1440 level corresponds to a frame size of 1440 x 1152.
The high level corresponds to a frame size of 1920 x 1080.

92. What is DMIF?


DMIF means Delivery Multimedia Integration Framework. It is a protocol for
managing the elementary bit streams and their multiplexed versions.
It is used in the MPEG-4 standard.

93. What is a post detection filter?


The post-detection filter, a base-band low-pass filter, has a bandwidth
that is just large enough to accommodate the highest frequency component of the
message signal.

95. What are the two fold effects of quantizing process.


The peak-to-peak range of input sample values is subdivided into a finite set of
decision levels or decision thresholds, and the output is assigned a discrete value
selected from a finite set of representation levels, or reconstruction values, that are
aligned with the treads of the staircase.

96. What is meant by idle channel noise?


Idle channel noise is the coding noise measured at the receiver output with
zero transmitter input.

97.What are the applications of Huffman coding?


a. Image compression
b. Audio compression
c. Text compression
98. What is forward and backward adaptation?
In forward adaptation, the adaptation is based on the input to the encoder before
encoding, so the adaptation parameters must be sent to the decoder as side information.
In backward adaptation, the adaptation is based on the encoder output, which is also
available to the decoder, so no side information is needed.

99. What are the applications of MPEG-7?


Digital libraries: video libraries, image catalogs, musical dictionaries, and
future home multimedia databases.
Multimedia directory services: for example, yellow pages.
Broadcast media selection: radio channel and Internet broadcast
search and selection.
Multimedia editing: personalized electronic news services and media
authoring.

100.Describe the aims of MPEG-21


To understand if and how the various components fit together.
To discuss which new standards may be required, if gaps in the infrastructure
exist.
To accomplish the integration of the different standards once the above two points
have been addressed.


16 MARKS QUESTIONS

1. Discuss in detail various evaluation techniques for compression.


2. Explain the concept of down sampling and up sampling.
3. Explain in detail the error analysis and methodologies?
4. a) Explain the taxonomy of compression technique.
b) Explain the concept of scalar Quantization theory and rate distortion
theory
5. a) Define redundancy. What are the various types of redundancy? Explain how
redundancy can be removed?
b) Explain the following terms:
i) Source encoding
ii) Vector Quantization.

6. What is motion compensation in H.261? Discuss it briefly.


Explain model-based coding.
7. Write short notes on video signal representation?
8. Define wavelet. Discuss the concept of wavelet-based compression techniques.
Explain with examples.
9. Discuss various MPEG standards?
10. Explain the DVI technology for symmetric and asymmetric motion video
compression/decompression.
11. Explain various predictive techniques for image compression?
12. Bring out the differences between JPEG and JBIG.
13. Explain clearly various processes in JPEG image compression?
14. Discuss contour based compression technique for image?
15. Explain transform coding?
16. Explain EPIC, SPIHT, JPEG, and JBIG?
17. Write an algorithm and explain the fractal compression technique for images.
What are its applications?
18. Explain DPCM with backward adaptive prediction?


19. What is meant by Huffman coding? Explain its types. Give some application of
Huffman coding?
Refer Khalid Sayood, Introduction to Data Compression, Morgan Kaufmann
(Harcourt), Second Edition, 2000.
Page No: 39 to 71.
20. What are the applications of arithmetic coding? Explain anyone application.
Page No: 106 to 113.

21. Explain about Adaptive Huffman coding


Page No: 55 to 60

22. What is meant by arithmetic coding? Explain it with an example.


Page No: 78 to 88.

23. Define the concept of predictive technique?


The technique that uses the past values of a sequence to predict the current
value, and then encodes the error in prediction, or residual, is called a predictive technique.

24. What are the predictive techniques for image compression?


1. CALIC (Context Adaptive Lossless Image Compression)
2. JPEG (Joint Photographic Experts Group) lossless mode
3. MTF (move-to-front) algorithm

25. What are the various steps of JPEG compression process?


i. Transformation: the discrete cosine transform (DCT) is applied to blocks of pixels
ii. Quantization: scalar quantization of the transform coefficients
iii. Encoding: Huffman encoding of the quantized values

26. What are the various transformations used for image compression?
1. Karhunen-Loeve transform
2. Discrete cosine transform
3. Discrete Walsh-Hadamard transform


27. Explain the concept of the linear predictive coder in speech compression.


28. Discuss the role of the QMF in sub-band coding.
29. Explain the various audio compression techniques.
30. Give a detailed description of the G.722 audio coding scheme.
31. Explain vocoders.
32. Explain an application to speech coding.
33. Write notes on audio silence compression.
34. Explain the basic sub-band coding algorithm.
35. Explain an application to audio coding.
