
JERUSALEM COLLEGE OF ENGINEERING, CHENNAI 600 100
Department of Computer Applications

Assignment Answers
MC9284 DIGITAL IMAGING (Regulation 2009)
UNIT I FUNDAMENTALS OF IMAGE PROCESSING

Part A Questions

1. Define image.

An image is defined as a two-dimensional function, f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
2. Define digital image.

When in an image f(x, y) the spatial coordinates x and y and the amplitude values of f are all finite, discrete quantities, the image is called a digital image.
3. What is digital image processing?

Processing of a digital image by a digital computer is referred to as digital image processing.


4. What is meant by pixel?

A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels.

5. List the applications of image processing.

Medical imaging, industrial inspection, remote sensing, microscopy, law enforcement, machine vision, astronomy; laser, radar, sonar, and acoustic image processing.
6. List the fundamental steps in image processing.

Acquisition
Enhancement
Restoration
Compression
Segmentation
Representation and description
Object recognition

7. What is meant by illumination and reflectance?

Illumination is the amount of source light incident on a scene. It is represented as i(x, y). Reflectance is the amount of light reflected by the objects in the scene. It is represented by r(x, y).
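As developed in Part B, these two components combine as a product to form the image function, f(x, y) = i(x, y) r(x, y). A minimal sketch in Python (the numeric values are hypothetical, chosen only to illustrate the product):

```python
def intensity(i_xy, r_xy):
    """f(x, y) = i(x, y) * r(x, y), with illumination positive and
    reflectance bounded by 0 (total absorption) and 1 (total reflectance)."""
    assert i_xy > 0, "illumination must be positive"
    assert 0.0 <= r_xy <= 1.0, "reflectance is bounded by 0 and 1"
    return i_xy * r_xy

# Example: bright illumination on a moderately reflective surface.
print(intensity(90.0, 0.5))  # 45.0
```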

8. What do you mean by gray level?

Gray level refers to a scalar measure of intensity that ranges from black to grays and finally to white.
9. What is sampling and quantization?

If an image is defined as a two-dimensional function, f(x, y), then digitizing the coordinate values x and y is called sampling and digitizing the amplitude values of f is called quantization.
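The two operations can be sketched for a 1-D scan line in Python; the sample count, level count, and test signal below are hypothetical choices for illustration:

```python
import math

def sample(f, n_samples, x_max):
    """Sampling: evaluate the continuous function f at equally spaced points."""
    step = x_max / (n_samples - 1)
    return [f(k * step) for k in range(n_samples)]

def quantize(values, levels, v_min, v_max):
    """Quantization: map each amplitude to the nearest of `levels` discrete gray levels."""
    scale = (levels - 1) / (v_max - v_min)
    return [round((v - v_min) * scale) for v in values]

# Hypothetical continuous signal along a scan line.
signal = lambda x: 0.5 + 0.5 * math.sin(x)
samples = sample(signal, n_samples=8, x_max=math.pi)
gray = quantize(samples, levels=4, v_min=0.0, v_max=1.0)
print(gray)  # eight integer gray levels in [0, 3]
```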
10. Write the expression to find the number of bits to store a digital image and find the number of bits required to store 256 x 256 image with 32 gray levels.

The number of bits to store an M x N, k-bit digital image is given by b = M x N x k. When M = N, b = N^2 k.

For a 256 x 256 image with 32 gray levels, k = 5 (since 2^5 = 32), so b = 256 x 256 x 5 = 327680 bits.
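The formula can be checked with a few lines of Python (the helper name is my own):

```python
import math

def bits_to_store(M, N, gray_levels):
    """b = M * N * k, where k = log2(gray_levels) bits per pixel."""
    k = int(math.log2(gray_levels))
    return M * N * k

# 256 x 256 image with 32 gray levels -> k = 5
print(bits_to_store(256, 256, 32))  # 327680
```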
Part B Questions

1. Explain the steps involved in digital image processing.

Block Diagram: Refer Notes

The fundamental steps in digital image processing can be divided into two broad categories: methods whose input and output are images, and methods whose inputs may be images but whose outputs are attributes extracted from those images. This organization is summarized in the block diagram. The diagram does not imply that every process is applied to an image; rather, it conveys an idea of all the methodologies that can be applied to images for different purposes and possibly with different objectives.

Image acquisition
o involves preprocessing

Image enhancement
o techniques to bring out detail that is hidden, or to highlight certain features of interest in an image
o examples: contrast and edge enhancement, pseudocoloring, noise filtering, sharpening, and magnifying

o useful in feature extraction, image analysis, and visual information display
o enhancement is subjective

Image restoration
o deals with improving the appearance of an image
o refers to removal or minimization of known degradations in an image
o includes deblurring of images, noise filtering, and correction of geometric distortion or nonlinearities due to sensors
o image restoration is objective

Color image processing
o deals with color models and color processing

Wavelets
o foundation for representing images in various degrees of resolution
o used for image data compression and for pyramidal representation

Compression
o deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it, without any appreciable loss of information

Morphological processing
o deals with tools for extracting image components that are useful in the representation and description of shape

Segmentation
o partitions an image into its constituent parts or objects
o autonomous segmentation
o rugged segmentation
o weak or erratic segmentation
o the more accurate the segmentation, the more likely recognition is to succeed

Representation and description
o follows the output of a segmentation stage
o converts the data to a form suitable for computer processing
o description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another

Recognition
o the process that assigns a label to an object based on its descriptors

Knowledge base
o knowledge database, simple or complex
o controls the interaction between modules

Viewing the results of image processing can take place at the output of any stage in the figure. As the complexity of an image processing task increases, so does the number of processes required to solve the problem.

2. Write notes on image acquisition.

Images are generated by the combination of an illumination source and the reflection or absorption of energy from that source by the elements of the scene being imaged. Depending on the nature of the source, illumination energy is reflected from, or transmitted through, objects. For example:
o light is reflected from a planar surface
o X-rays pass through a patient's body

In some applications, the reflected or transmitted energy is focused onto a photoconverter (e.g., a phosphor screen), which converts the energy into visible light.

Three principal sensor arrangements are used to transform illumination energy into digital images. Diagram: RB No.1, Page: 46

Working principle: Incoming energy is transformed into a voltage by the combination of input electrical power and sensor material that is responsive to the particular type of energy being detected. The output voltage waveform is the response of the sensor(s), and a digital quantity is obtained from each sensor by digitizing its response.

Image Acquisition Using a Single Sensor
o photodiode: the most familiar single sensor, constructed of silicon materials; its output voltage waveform is proportional to light
o use of a filter in front of a sensor improves selectivity (example)
o generation of a 2-D image using a single sensor; diagram: RB No.1, Page: 47; working principle

Image Acquisition Using a Sensor Strip
Diagram: RB No.1, Page: 48
o in-line arrangement of sensors in the form of a sensor strip
o the strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction
o airborne imaging applications; working principle
o sensor strips mounted in a ring configuration; working principle

Image Acquisition Using Sensor Arrays
o CCD sensors are used widely in digital cameras and other light-sensing instruments
o the response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor
o the key advantage of a two-dimensional sensor array is that a complete image can be obtained by focusing the energy pattern onto the surface of the array
Diagram: RB No.1, Page: 50; working principle

A Simple Image Formation Model

Images are denoted by two-dimensional functions of the form f(x, y). The value or amplitude of f at (x, y) is a positive scalar quantity. f(x, y) must be nonzero and finite; that is, 0 < f(x, y) < ∞.

The function f(x, y) may be characterized by two components: the amount of source illumination incident on the scene being viewed, and the amount of illumination reflected by the objects in the scene. These are called the illumination and reflectance components and are denoted by i(x, y) and r(x, y), respectively. The two functions combine as a product to form f(x, y):

f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1

The inequality indicates that reflectance is bounded by 0 (total absorption) and 1 (total reflectance). In the case of transmission, a transmissivity function is used instead of a reflectivity function.

3. Explain image sampling and quantization.

The objective of image sensing and acquisition is to generate digital images from sensed data. The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed. A digital image is created by converting the continuous sensed data into digital form. This involves two processes: sampling and quantization.

An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. To convert it to digital form, the function must be sampled in both coordinates and in amplitude. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Basic Concepts in Sampling and Quantization
Figures: RB No.1, Page: 53

To convert a continuous image f(x, y) into digital form, start at the top of the image and carry out the following procedure line by line to produce a two-dimensional digital image:
o consider a scan line AB (a 1-D function); plot this function for the amplitude (gray-level) values of the continuous image along the line segment AB
o to sample this function, take equally spaced samples along line AB and superimpose them on the function; the set of these discrete locations gives the sampled function
o to form a digital function, quantize the gray-level values into discrete quantities: the continuous gray levels are quantized simply by assigning one of the discrete gray levels to each sample

Representing Digital Images

The result of sampling and quantization is a matrix of real numbers. Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns. The values of the coordinates (x, y) now become discrete quantities. The coordinate convention used to represent an image is as shown below.

According to this coordinate convention, an M x N digital image can be represented in the following matrix form:

Each element of this matrix is called an image element, picture element, pixel, or pel. The above matrix can be simply written as

where a_ij = f(x = i, y = j) = f(i, j).

Expressing Sampling and Quantization in More Formal Mathematical Terms

Let Z and R denote the set of integers and the set of real numbers, respectively. If f(x, y) is a digital image, then (x, y) are integers from Z^2. If the gray levels are also integers, Z replaces R, and a digital image becomes a 2-D function whose coordinates and amplitude values are integers.

The digitization process requires:
o M and N to be positive integers
o the number of gray levels L typically to be an integer power of 2, i.e., L = 2^k

The discrete gray levels are equally spaced and are integers in the interval [0, L-1]. If b bits are required to store a digitized image, then b = M x N x k. When M = N, this becomes b = N^2 k. In general, an image with 2^k gray levels is referred to as a k-bit image. For example, an image with 256 possible gray-level values is called an 8-bit image (2^8 = 256).

Spatial and Gray-Level Resolution

Sampling is the principal factor determining the spatial resolution of an image. Spatial resolution is the measure of how closely lines can be resolved in an image, and it depends on properties of the system creating the image. Gray-level resolution similarly refers to the smallest discernible change in gray level; the term denotes the number of shades of gray used in preparing the image for display.

Aliasing and Moiré Patterns

Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space). The sampling rate in images is the number of samples taken (in both spatial directions) per unit distance. If the function is undersampled, a phenomenon called aliasing corrupts the sampled image. The corruption takes the form of additional frequency components introduced into the sampled function; these are called aliased frequencies. The effect of aliased frequencies can be seen, under the right conditions, in the form of so-called Moiré patterns. The Moiré effect is a visual perception that occurs when viewing a set of lines or dots superimposed on another set of lines or dots, where the sets differ in relative size, angle, or spacing.
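Aliasing can be demonstrated numerically: sampling a high-frequency sinusoid below its Nyquist rate yields sample values identical to those of a much lower frequency. The rate and frequencies below are hypothetical choices for illustration.

```python
import math

fs = 10.0        # hypothetical sampling rate (samples per unit distance)
n_samples = 20

def sampled(freq):
    """Sample sin(2*pi*freq*x) at rate fs."""
    return [math.sin(2 * math.pi * freq * n / fs) for n in range(n_samples)]

high = sampled(9.0)  # undersampled: 9 cycles/unit would need fs > 18
low = sampled(1.0)   # properly sampled: 1 cycle/unit

# At these sample points the 9 cycles/unit signal is indistinguishable from
# a negated 1 cycle/unit signal -- an aliased frequency.
print(all(abs(h + l) < 1e-9 for h, l in zip(high, low)))  # True
```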

Zooming and Shrinking Digital Images

Zooming may be viewed as oversampling, while shrinking may be viewed as undersampling.

Zooming
o nearest-neighbor interpolation
o pixel replication
o bilinear interpolation

Shrinking
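Nearest-neighbor interpolation, the simplest of the zooming methods mentioned above, can be sketched as follows; for integer zoom factors it coincides with pixel replication. The tiny 2 x 2 "image" is a made-up example.

```python
def zoom_nearest(image, factor):
    """Zoom a 2-D list-of-lists image by an integer factor using
    nearest-neighbor interpolation: each output pixel takes the value
    of the closest input pixel."""
    rows, cols = len(image), len(image[0])
    return [
        [image[r // factor][c // factor] for c in range(cols * factor)]
        for r in range(rows * factor)
    ]

img = [[10, 20],
       [30, 40]]
print(zoom_nearest(img, 2))
# [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
```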
