UNIT-1
PART-1
1. Define Image and Digital Image.
2. Define Digital Image Processing.
3. Define picture elements.
4. List a few areas where digital image processing techniques are utilized.
5. List the categories of digital storage for image processing applications.
6. What are the types of light receptors? Differentiate photopic and scotopic vision.
7. Define Weber ratio.
8. What is simultaneous contrast?
9. Write short notes on neighbors of a pixel.
10. Find the number of bits required to store a 256 X 256 image with 32 gray levels.

PART-2
1. What are the fundamental steps in Digital Image Processing? Explain in detail.
2. Explain image sampling and quantization.
3. Explain image sensing and acquisition.
4. Explain color models.

UNIT-1
PART-1

1. Define Image and Digital Image.


An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
When x, y, and the intensity values of f are all finite, discrete quantities, the image is called a digital image.
2. Define Digital Image Processing.
Digital image processing encompasses processes whose inputs and outputs are images, processes that extract attributes from images, and the recognition of individual objects within an image.
The field of digital image processing refers to processing digital images by means of a digital computer.
3. Define picture elements.
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, also referred to as pixels, pels, or image elements. Pixel is the term most widely used to denote the elements of a digital image.
4. List a few areas where digital image processing techniques are utilized.
Digital image processing techniques are widely utilized in the following areas:
1. Medical imaging
2. Remote earth resources observation
3. Astronomy
4. Archaeology
5. Physics and related fields
6. Defense
7. Industry
8. Solving problems dealing with machine perception

5. List the categories of digital storage for image processing applications.
1. Short-term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access.
6. What are the types of light receptors? Differentiate photopic and scotopic vision.
The two types of light receptors are cones and rods.
Main differences between photopic and scotopic vision:
Photopic vision: Cone vision is called photopic or bright-light vision. Humans can resolve fine details with cones because each cone is connected to its own nerve end.
Scotopic vision: Rod vision is called scotopic or dim-light vision. Several rods are connected to one nerve end, so rods give an overall picture of the image rather than fine detail.

7. Define Weber ratio.
The ratio of the increment of illumination (ΔI) to the background illumination (I) is called the Weber ratio, i.e., ΔI/I.
If the ratio ΔI/I is small, only a small percentage change in intensity is noticeable, i.e., brightness discrimination is good. If the ratio ΔI/I is large, a large percentage change in intensity is needed, i.e., brightness discrimination is poor.
8. What is simultaneous contrast?
Simultaneous contrast is the phenomenon whereby a region's perceived brightness does not depend simply on its own intensity but also on its background. For example, center squares of identical intensity appear to the eye to become darker as their background becomes lighter.
9. Write short notes on neighbors of a pixel.
A pixel p at coordinates (x, y) has four horizontal and vertical neighbors, whose coordinates are given by (x+1, y), (x-1, y), (x, y-1), and (x, y+1). These are called the 4-neighbors of p and are denoted N4(p).

The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y-1), and (x-1, y+1), and are denoted ND(p).
The eight neighbors of p, denoted N8(p), are the combination of the 4-neighbors and the four diagonal neighbors.
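These neighborhood definitions translate directly into code; a minimal sketch in Python (coordinates only, ignoring image borders; the function names are illustrative):

```python
def n4(x, y):
    """4-neighbors of pixel p at (x, y): horizontal and vertical."""
    return [(x + 1, y), (x - 1, y), (x, y - 1), (x, y + 1)]

def nd(x, y):
    """Diagonal neighbors of p."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y - 1), (x - 1, y + 1)]

def n8(x, y):
    """8-neighbors: union of N4(p) and ND(p)."""
    return n4(x, y) + nd(x, y)
```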
10. Find the number of bits required to store a 256 X 256 image with 32 gray levels.
32 gray levels = 2^5, so 5 bits are needed per pixel.
Therefore the number of bits required to store a 256 X 256 image with 32 gray levels is 256 * 256 * 5 = 327,680 bits.
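The calculation generalizes to any image size and number of gray levels; a small sketch (the helper name is illustrative):

```python
import math

def storage_bits(rows, cols, gray_levels):
    """Bits to store an image: rows * cols * bits-per-pixel,
    where bits-per-pixel = log2(number of gray levels)."""
    bits_per_pixel = int(math.log2(gray_levels))
    return rows * cols * bits_per_pixel

print(storage_bits(256, 256, 32))  # 256 * 256 * 5 = 327680
```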

PART-2
1. What are the fundamental steps in Digital Image Processing? Explain in detail.
The fundamental steps in digital image processing are as follows:
1. Image Acquisition
2. Image Enhancement
3. Image Restoration
4. Color Image Processing
5. Wavelets and Multiresolution Processing
6. Compression
7. Morphological Processing
8. Segmentation
9. Representation and Description
10. Object Recognition
Refer textbook page no. 25 to 28.
2. Explain image sampling and quantization.
Refer textbook page no. 52 to 68.
3. Explain image sensing and acquisition.
Refer textbook page no. 46 to 52.
4. Explain color models.
Refer textbook page no. 402 to 414.

UNIT-2
PART-1

1. How do spatial domain techniques differ from frequency domain techniques?
2. What do you mean by gray level transformation?
3. Define histogram.
4. What is spatial filtering?
5. What is the use of a smoothing spatial filter?
6. How is smoothing achieved in the frequency domain? List some of the low pass filters in the frequency domain.
7. What is contrast stretching?
8. Define averaging filters.
9. What is an image negative?
10. Write the applications of sharpening filters.

PART-2
1. Explain the methods of histogram processing.
2. Discuss in detail smoothing and sharpening spatial filtering.
3. Explain image smoothing using frequency domain filters.
4. Explain image sharpening using frequency domain filters.

PART-1
1. How do spatial domain techniques differ from frequency domain techniques?
Spatial domain techniques operate directly on the pixels of the image, whereas frequency domain techniques operate on the Fourier transform of the image. Spatial domain techniques are generally more efficient computationally and require fewer processing resources to implement.
2. What do you mean by gray level transformation?
Spatial domain processes can be denoted by the expression g(x, y) = T[f(x, y)], where f(x, y) is the input image, g(x, y) is the output image, and T is an operator on f defined over a neighborhood of the point (x, y).
The smallest possible neighborhood is of size 1 X 1. In this case, g depends only on the value of f at the single point (x, y), and T becomes an intensity or gray level transformation function of the form s = T(r), where s and r denote the intensities of g and f at any point (x, y).
3. Define histogram.
The histogram of a digital image with intensity levels in the range [0, L-1] is a discrete function h(rk) = nk, where rk is the kth intensity level and nk is the number of pixels in the image with intensity rk. Histograms are the basis for numerous spatial domain processing techniques, and histogram manipulation can be used for image enhancement.
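The definition h(rk) = nk can be computed directly; a minimal pure-Python sketch (function name illustrative), assuming the image is stored as a list of rows of integer intensities:

```python
def histogram(image, levels):
    """h(r_k) = n_k: count of pixels at each intensity level 0..levels-1."""
    h = [0] * levels
    for row in image:
        for pixel in row:
            h[pixel] += 1
    return h

img = [[0, 1, 1],
       [2, 1, 0],
       [3, 3, 1]]
print(histogram(img, 4))  # [2, 4, 1, 2]
```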
4. What is spatial filtering?
Spatial filtering is the process of moving a filter mask from point to point in an image. For a linear spatial filter, the response at each point is given by a sum of products of the filter coefficients and the corresponding image pixels in the area spanned by the filter mask.
5. What is the use of a smoothing spatial filter?
Smoothing filters are used for blurring and for noise reduction. Blurring is used in preprocessing tasks such as removal of small details from an image prior to object extraction and bridging of small gaps in lines or curves. Noise reduction can be accomplished by blurring with a linear filter and also by nonlinear filtering.
6. How is smoothing achieved in the frequency domain? List some of the low pass filters in the frequency domain.
Smoothing is achieved in the frequency domain by high-frequency attenuation, that is, by lowpass filtering.
Three types of lowpass filters are:
1. Ideal lowpass filter
2. Butterworth lowpass filter
3. Gaussian lowpass filter
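The Gaussian lowpass case can be sketched with NumPy, assuming a grayscale image stored as a 2-D array; the function name and cutoff parameter d0 are illustrative:

```python
import numpy as np

def gaussian_lowpass(image, d0):
    """Smooth an image by attenuating high frequencies with a
    Gaussian lowpass transfer function H(u, v) = exp(-D^2 / (2*D0^2))."""
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    d2 = U**2 + V**2                          # squared distance from center
    h = np.exp(-d2 / (2.0 * d0**2))           # Gaussian transfer function
    f = np.fft.fftshift(np.fft.fft2(image))   # centered spectrum
    g = np.fft.ifft2(np.fft.ifftshift(h * f))
    return np.real(g)
```

Smaller d0 attenuates more of the high frequencies and thus blurs more strongly.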
7. What is contrast stretching?
Contrast stretching produces an image of higher contrast than the original by darkening the levels below some value m and brightening the levels above m in the image.
8. Define averaging filters.
The output of a smoothing linear spatial filter is the average of the pixels contained in the neighborhood of the filter mask. These filters are called averaging filters.
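A minimal pure-Python sketch of a 3 X 3 averaging filter (border pixels are simply copied; the function name is illustrative):

```python
def average_filter_3x3(image):
    """Replace each interior pixel by the mean of its 3x3 neighborhood
    (border pixels are left unchanged for simplicity)."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            s = sum(image[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = s / 9.0
    return out
```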
9. What is an image negative?
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression s = L - 1 - r, where s is the output pixel and r is the input pixel.
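The transformation s = L - 1 - r in code (assuming L = 256 gray levels by default; function name illustrative):

```python
def negative(image, L=256):
    """Apply the negative transformation s = L - 1 - r to every pixel."""
    return [[L - 1 - r for r in row] for row in image]

print(negative([[0, 128, 255]]))  # [[255, 127, 0]]
```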
10. Write the applications of sharpening filters.
The applications of sharpening filters include:
i. Electronic printing, medical imaging, and industrial inspection.
ii. Autonomous target detection in smart weapons.

PART-2

1. Explain the methods of histogram processing.
Refer chapter 3.3 in textbook (page no. 120 to 144).
2. Discuss in detail smoothing and sharpening spatial filtering.
Refer chapters 3.5 and 3.6 in textbook (page no. 152 to 168).
3. Explain image smoothing using frequency domain filters.
Refer chapter 4.8 in textbook (page no. 269 to 277).
4. Explain image sharpening using frequency domain filters.
Refer chapter 4.9 in textbook (page no. 280 to 286).

UNIT-3
PART-1

1. What are the various noise models available in DIP?
2. Define restoration.
3. Why is the image subjected to Wiener filtering?
4. What are the three methods of estimating the degradation function?
5. Give the relation for impulse noise.
6. What is inverse filtering?
7. What is segmentation? Write a few applications of segmentation.
8. Define region growing.
9. What are the two main operations in morphological processing, and what is the difference between them?
10. What are the three basic objectives of Canny's approach?

PART-2

1. Explain noise models.
2. Explain the following:
a) Mean filters (8m)
b) Median, Max, and Min filters (4m)
c) Band pass and band reject filters (4m)
3. Explain region-based segmentation.
4. Explain the concepts of erosion and dilation.

PART-1
1. What are the various noise models available in DIP?
Gaussian noise
Rayleigh noise
Uniform noise
Gamma (Erlang) noise
Impulse (salt-and-pepper) noise
Exponential noise

2. Define Restoration.
Restoration is a process of reconstructing or recovering an image that has been degraded by using
a priori knowledge of the degradation phenomenon. Thus restoration techniques are oriented towards
modeling the degradation and applying the inverse process in order to recover the original image.
3. Why is the image subjected to Wiener filtering?
This method of filtering considers images and noise as random processes, and the objective is to find an estimate of the uncorrupted image f such that the mean square error between them is minimized. The image is therefore subjected to Wiener filtering in order to minimize this error.
4. What are the three methods of estimating the degradation function?
The three methods of estimating the degradation function are:
i. Observation
ii. Experimentation
iii. Mathematical modeling
5. Give the relation for impulse noise.
The PDF of (bipolar) impulse noise is:
p(z) = Pa  for z = a
p(z) = Pb  for z = b
p(z) = 0   otherwise
6. What is inverse filtering?
The simplest approach to restoration is direct inverse filtering, where an estimate F^(u, v) of the transform of the original image is computed simply by dividing the transform of the degraded image G(u, v) by the degradation function H(u, v):

F^(u, v) = G(u, v) / H(u, v)
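The division G(u, v) / H(u, v) can be sketched with NumPy; note that frequencies where H is near zero must be handled specially, since direct division there would amplify noise without bound (the eps guard below is an illustrative choice, not part of the basic method):

```python
import numpy as np

def inverse_filter(degraded, h, eps=1e-3):
    """Direct inverse filtering: F_hat(u, v) = G(u, v) / H(u, v).
    Frequencies where |H| is tiny are left untouched to avoid
    dividing by (near) zero."""
    G = np.fft.fft2(degraded)
    H = np.fft.fft2(h, s=degraded.shape)
    H_safe = np.where(np.abs(H) < eps, 1.0, H)
    F_hat = G / H_safe
    return np.real(np.fft.ifft2(F_hat))
```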

7. What is segmentation? Write a few applications of segmentation.
The first step in image analysis is to segment the image. Segmentation subdivides an image into its constituent parts or objects. The applications of segmentation include:
i. Detection of isolated points.
ii. Detection of lines and edges in an image.
8. Define region growing.
Region growing is a procedure that groups pixels or subregions into larger regions based on predefined criteria. The basic approach is to start with a set of seed points and from these grow regions by appending to each seed those neighboring pixels that have properties similar to the seed.
9. What are the two main operations in morphological processing, and what is the difference between them?
The two main operations of morphological processing are erosion and dilation.
The main difference between the two operations is that erosion performs a shrinking or thinning operation, whereas dilation performs a growing or thickening operation.
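A minimal pure-Python sketch of binary erosion and dilation with a small structuring element (function names illustrative; borders are handled simplistically):

```python
def erode(image, se):
    """Binary erosion: a pixel stays 1 only if the structuring element,
    centered there, fits entirely inside the foreground (shrinks objects)."""
    rows, cols = len(image), len(image[0])
    k = len(se) // 2  # structuring element is (2k+1) x (2k+1)
    out = [[0] * cols for _ in range(rows)]
    for i in range(k, rows - k):
        for j in range(k, cols - k):
            out[i][j] = int(all(
                image[i + di - k][j + dj - k]
                for di in range(len(se)) for dj in range(len(se))
                if se[di][dj]))
    return out

def dilate(image, se):
    """Binary dilation: a pixel becomes 1 if the structuring element,
    centered there, hits any foreground pixel (grows objects)."""
    rows, cols = len(image), len(image[0])
    k = len(se) // 2
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            out[i][j] = int(any(
                0 <= i + di - k < rows and 0 <= j + dj - k < cols
                and se[di][dj] and image[i + di - k][j + dj - k]
                for di in range(len(se)) for dj in range(len(se))))
    return out
```

Eroding a 3 X 3 block of foreground with a 3 X 3 structuring element leaves only its center pixel; dilating it grows the block by one pixel in every direction.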
10. What are the three basic objectives of Canny's approach?
1. Low error rate.
2. Edge points should be well localized.
3. Single edge point response.
PART-2

1. Explain noise models.
Refer textbook (page no. 313 to 321).
2. Explain the following:
a) Mean filters (8m) (Refer textbook page no. 322 to 325)
b) Median, Max, and Min filters (4m) (Refer textbook page no. 326 to 327)
c) Band pass and band reject filters (4m) (Refer textbook page no. 336 to 337)
3. Explain region-based segmentation.
Refer textbook (page no. 763 to 769).
4. Explain the concepts of erosion and dilation.
Refer textbook (page no. 630 to 635).

UNIT-4
PART-1

1. What are the applications of the wavelet transform?
2. Define subband coding.
3. What is the need for compression?
4. What is lossless compression?
5. What is lossy compression?
6. Define compression ratio.
7. What is JPEG?
8. What is bit plane decomposition?
9. What are the basic steps in JPEG?
10. What is Huffman coding?
PART-2
1. Explain subband coding.
2. Explain multiresolution expansions.
3. Explain predictive coding.
4. List the popular image compression standards.

PART-1
1. What are the applications of the wavelet transform?
1. Compressing images efficiently.
2. Transmitting images efficiently.
3. Analyzing images at multiple resolutions.
2. Define subband coding.
In subband coding, an image is decomposed into a set of bandlimited components called subbands. The decomposition is carried out by filtering and downsampling. If the filters are properly selected, the image may be reconstructed without error by filtering and upsampling.
The goal of subband coding is to select the analysis and synthesis filters so as to achieve perfect reconstruction of the signal.
3. What is the need for compression?
In terms of storage, the capacity of a storage device can be effectively increased with methods that compress a body of data on its way to the device and decompress it when it is retrieved. In terms of communications, the bandwidth of a digital communication link can be effectively increased by compressing data at the sending end and decompressing it at the receiving end.
At any given time, the ability of the Internet to transfer data is fixed. Thus, if data can be compressed wherever possible, significant improvements in data throughput can be achieved. Many files can also be combined into one compressed document, making sending easier.
4. What is lossless compression?
Lossless compression recovers the exact original data after decompression. It is used mainly for compressing database records, spreadsheets, and word processing files, where exact replication of the original is essential.
5. What is lossy compression?
Lossy compression will result in a certain loss of accuracy in exchange for a substantial increase
in compression. Lossy compression is more effective when used to compress graphic images and
digitized voice where losses outside visual or aural perception can be tolerated.
6. Define compression ratio.
Compression ratio is defined as the ratio of the original size of the image to the compressed size of the image. It is given as
Compression Ratio = original size / compressed size : 1
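In code (the 4:1 example below is illustrative):

```python
def compression_ratio(original_bytes, compressed_bytes):
    """Compression ratio = original size / compressed size."""
    return original_bytes / compressed_bytes

# A 1 MB image compressed to 256 KB has ratio 4:1.
print(compression_ratio(1_048_576, 262_144))  # 4.0
```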

7. What is JPEG?
JPEG stands for "Joint Photographic Experts Group". It became an international standard in 1992. It works with both color and grayscale images and is used in many applications, e.g., satellite and medical imaging.
8. What is bit plane decomposition?
An effective technique for reducing an image's interpixel redundancies is to process the image's bit planes individually. This technique is based on the concept of decomposing a multilevel image into a series of binary images and compressing each binary image via one of several well-known binary compression methods.
9. What are the basic steps in JPEG?
The major steps in JPEG coding are:
i. DCT (Discrete Cosine Transform)
ii. Quantization
iii. Zigzag scan
iv. DPCM on the DC component
v. RLE on the AC components
vi. Entropy coding
10. What is Huffman coding?
Huffman coding reduces the average code length used to represent the symbols of an alphabet. Symbols of the source alphabet that occur frequently are assigned short codes. The general strategy is to allow the code length to vary from character to character and to ensure that the frequently occurring characters have shorter codes.
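The strategy can be sketched with Python's heapq module: repeatedly merge the two least frequent subtrees, then read codewords off the tree (function name illustrative):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code for the symbols in `text`: frequent
    symbols get shorter codewords. Returns {symbol: bitstring}."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: single-symbol alphabet
        return {next(iter(freq)): "0"}
    # Heap of (frequency, tiebreaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

For the input "aaaabbc", the frequent symbol 'a' receives a 1-bit code while 'b' and 'c' receive 2-bit codes.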

PART-2

1. Explain subband coding.
Refer textbook page no. 466 to 477.
2. Explain multiresolution expansions.
Refer textbook page no. 477 to 486.
3. Explain predictive coding.
Refer textbook page no. 584 to 603.
4. List the popular image compression standards.
Refer textbook page no. 539 to 541.

UNIT-5
PART-1

1. Define chain code.
2. What are the demerits of chain codes?
3. What is the polygonal approximation method?
4. Specify the various polygonal approximation methods.
5. Name a few boundary descriptors.
6. Define the length of a boundary.
7. Define texture.
8. List the approaches to describing the texture of a region.
9. What are the choices for representing a region?
10. Define pattern, feature, and pattern class.

PART-2
1. Explain the chain code and polygonal approximation techniques for object representation.
2. Explain in detail boundary descriptors.
3. Explain in detail any two region descriptors.
4. Explain in detail recognition based on matching.

PART-1
1. Define chain code.
Chain codes are used to represent a boundary by a connected sequence of straight line segments of specified length and direction. Typically this representation is based on 4- or 8-connectivity of the segments. The direction of each segment is coded using a numbering scheme.
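A minimal sketch of 4-connectivity chain coding (the direction-number convention below, 0 = right, 1 = up, 2 = left, 3 = down with y growing downward, is one common choice; function name illustrative):

```python
# Direction numbers for the 4-connectivity chain code:
# 0 = right, 1 = up, 2 = left, 3 = down (y grows downward).
DIRECTIONS_4 = {(1, 0): 0, (0, -1): 1, (-1, 0): 2, (0, 1): 3}

def chain_code_4(boundary):
    """Encode a 4-connected boundary (list of (x, y) points) as the
    sequence of direction numbers between consecutive points."""
    code = []
    for (x1, y1), (x2, y2) in zip(boundary, boundary[1:]):
        code.append(DIRECTIONS_4[(x2 - x1, y2 - y1)])
    return code

# Trace a unit square clockwise in image coordinates.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code_4(square))  # [0, 3, 2, 1]
```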
2. What are the demerits of chain codes?
The demerits of chain codes are:
i. The resulting chain code tends to be quite long.
ii. Any small disturbance along the boundary due to noise causes changes in the code that may not be related to the shape of the boundary.
3. What is the polygonal approximation method?
Polygonal approximation is an image representation approach in which a digital boundary is approximated with arbitrary accuracy by a polygon. For a closed curve, the approximation is exact when the number of segments in the polygon equals the number of points in the boundary, so that each pair of adjacent points defines a segment of the polygon.
4. Specify the various polygonal approximation methods.
The various polygonal approximation methods are:
i. Minimum perimeter polygons.
ii. Merging techniques.
iii. Splitting techniques.
5. Name a few boundary descriptors.
i. Simple descriptors.
ii. Shape numbers.
iii. Fourier descriptors.
6. Define the length of a boundary.
The length of a boundary is the number of pixels along the boundary. For example, for a chain-coded curve with unit spacing in both directions, the number of vertical and horizontal components plus √2 times the number of diagonal components gives its exact length.
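The length formula in code, for an 8-direction chain code in which even direction numbers are horizontal/vertical moves and odd numbers are diagonal moves (a common convention; function name illustrative):

```python
import math

def boundary_length(chain_code_8):
    """Length of a chain-coded curve with unit spacing: horizontal and
    vertical moves (even direction numbers) count 1, diagonal moves
    (odd direction numbers) count sqrt(2)."""
    straight = sum(1 for d in chain_code_8 if d % 2 == 0)
    diagonal = sum(1 for d in chain_code_8 if d % 2 == 1)
    return straight + math.sqrt(2) * diagonal
```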
7. Define texture.
Texture is one of the regional descriptors. It provides measures of properties such as smoothness, coarseness, and regularity.

8. List the approaches to describing the texture of a region.
The approaches to describing the texture of a region are:
i. Statistical approaches.
ii. Structural approaches.
iii. Spectral approaches.
9. What are the choices for representing a region?
A region can be represented in two ways:
1) In terms of its external characteristics (its boundary).
2) In terms of its internal characteristics (the pixels comprising the region).
10. Define pattern, feature, and pattern class.
A pattern is an arrangement of descriptors. The name feature is used in pattern recognition to denote a descriptor. A pattern class is a family of patterns that share some common properties.

PART-2
1. Explain the chain code and polygonal approximation techniques for object representation.
Refer textbook page no. 798 to 808.
2. Explain in detail boundary descriptors.
Refer textbook page no. 815 to 822.
3. Explain in detail any two region descriptors.
Refer textbook page no. 822 to 832.
4. Explain in detail recognition based on matching.
Refer textbook page no. 866 to 872.
