
SEMINAR REPORT

Image Processing Introduction and Application


Guided by: Mayank Srivastava, Asst. Professor, ECE Dept., RKGIT
By: Ravi Kumar Verma, ECE 3rd Year, 0803331092

CERTIFICATE
This is to certify that Ravi Kumar Verma of ECE 6th semester has worked hard under my guidance on the seminar topic assigned to him. He has been honest and determined throughout the conduct of the seminar.

Guide Faculty: Mr. Mayank Srivastava, Asst. Professor, Dept. of ECE, RKGIT

ACKNOWLEDGMENT
I extend my sincere gratitude to Prof. K. K. Tripathi, Head of Department, for sharing his invaluable knowledge and wonderful technical guidance. I express my thanks to Mayank Srivastava Sir, who guided me and provided me with all the useful information for presenting this seminar. I also thank all the other faculty members of the ECE department and my friends for their help and support.

Ravi Kumar Verma, ECE 3rd Year, 0803331092

ABSTRACT
Image processing, in its broadest and most literal sense, aims to provide practical, reliable and affordable means to allow machines to cope with images while assisting man in his general endeavors. The term image processing itself has become firmly associated with the more specific objective of modifying images such that they are either:
a) corrected for errors introduced during acquisition or transmission (restoration); or
b) enhanced to overcome the weaknesses of the human visual system (enhancement).


TABLE OF CONTENTS

1. Acknowledgment
2. Abstract
3. Introduction
4. Image and Image Processing
5. Vision and Computer Vision
6. Types of Image Processing
7. Steps Involved in Image Processing
8. Components of Image Processing
9. Image Sensors (CCD and CMOS)
10. Applications
11. Conclusion
12. References

Introduction
Images are a vital and integral part of everyday life. On an individual or person-to-person basis, images are used to reason, interpret, illustrate, represent, memorize, educate, communicate, evaluate, navigate, survey, entertain, and so on. We do this continuously and almost entirely without conscious effort. As man builds machines to facilitate his ever more complex lifestyle, the only reason for NOT providing them with the ability to exploit or transparently convey such images is a weakness of available technology.

Interest in image processing stems from two principal application areas:
a) Improvement of pictorial information for better human interpretation
b) Processing of scene data for autonomous machine perception

One of the first applications of image processing techniques in the first category was in improving digitized newspaper pictures sent by submarine cable between London and New York. From then until the present day, image processing has continuously extended human vision. The field has grown so vigorously that it is now used to solve a variety of problems, ranging from improving vision to the space program, geographical information systems, medicine and surveillance. Geographers use the same techniques to study pollution patterns from aerial and satellite imagery. Image enhancement and restoration techniques are used to process degraded images of unrecoverable objects or experimental results too expensive to duplicate. In archaeology, image processing methods have successfully restored blurred pictures that were the only available records of rare artifacts lost or damaged after being photographed. In physics and related fields, computer techniques routinely enhance images of experiments in areas such as high-energy plasmas and electron microscopy. Similarly successful applications of image processing can be found in astronomy, biology, nuclear medicine, law enforcement, defense, and industry.

Typical problems in machine perception that routinely utilize image processing techniques are automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic processing of fingerprints, screening of X-rays and blood samples, and machine processing of aerial and satellite imagery for weather prediction and crop assessment.

IMAGE
An image (Latin: imago) is an artifact, for example a two-dimensional picture, that has a similar appearance to some subject, usually a physical object or a person. Mathematically, an image can be defined as a two-dimensional light intensity function f(x, y), where the value of f at a spatial location (x, y) is the intensity of the image at that point. A digital image is obtained by sampling and quantizing the function f(x, y). The function f(x, y) can be a measure of reflected light (photography), X-ray attenuation (radiography) or any other physical parameter.

A digital image is thus an image discretized both in spatial coordinates and in brightness. It can be considered a matrix whose row and column indices identify a point in the image and whose corresponding element value identifies the gray level at that point. The elements of such a digital array are called image elements, picture elements, pixels, or pels.
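As a minimal sketch of this idea (assuming Python with OpenCV and NumPy installed; the file name used is only a placeholder), a digital image can be handled directly as such a matrix:

import cv2

# Read an image as a grayscale matrix f(x, y); "example.png" is a placeholder file name.
f = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)

rows, cols = f.shape                      # spatial sampling: discrete rows and columns
print("image dimensions:", rows, "x", cols)

# The matrix element at (row, col) is the quantized gray level (pixel value) at that point.
print("gray level at (10, 20):", f[10, 20])

# Quantization: gray levels are discrete, here 8-bit values in the range 0-255.
print("dtype:", f.dtype, "min:", f.min(), "max:", f.max())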

IMAGE PROCESSING
In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it. In short, it is the act of examining images for the purpose of identifying objects and judging their significance.

An image may be considered to contain sub-images, sometimes referred to as regions of interest (ROIs), or simply regions. This concept reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region. In a sophisticated image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image (region) might be processed to suppress motion blur while another part might be processed to improve color rendition.
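A short sketch of region-of-interest processing (again assuming OpenCV and NumPy; the coordinates and file names are illustrative only):

import cv2

img = cv2.imread("scene.png")                 # placeholder input image
roi = img[50:200, 100:300]                    # select one region of interest as a sub-array

# Apply an operation to the selected region only (here, smoothing),
# leaving the rest of the image untouched.
img[50:200, 100:300] = cv2.GaussianBlur(roi, (7, 7), 0)

cv2.imwrite("scene_processed.png", img)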

WHY DO WE NEED IMAGE PROCESSING?


A) Improvement of pictorial information for human interpretation

B) Processing of scene data for autonomous machine perception

Improvement of pictorial information for human interpretation

A) Involves the selection of printing procedures and the distribution of brightness levels
B) Improvements in processing methods for transmitted digital pictures

Application areas include:
a) Archaeology
b) Astronomy
c) Biology
d) Industrial applications
e) Law enforcement
f) Medical imaging
g) Space program, etc.

Processing of scene data for autonomous machine perception
Focuses on procedures for extracting from an image information in a form suitable for computer processing.

NOTE: Often this information bears little resemblance to the visual features that human beings use in interpreting the content of an image.

Application areas include:

a) Automatic optical character recognition
b) Machine vision for product assembly and inspection
c) Military reconnaissance
d) Automatic fingerprint matching, etc.

Vision and Computer Vision


VISION: whatever the human eyes see and the way we then perceive the world around us.
COMPUTER VISION: the attempt to duplicate the human eye by electronically perceiving and understanding an image by any means.

[Figure: illustrations contrasting Vision and Computer Vision]

TYPES OF IMAGE PROCESSING


Based on the mode of techniques used, image processing can be broadly categorized into the following three types:

A) Analog Image Processing
B) Digital Image Processing
C) Optical Image Processing

ANALOG IMAGE PROCESSING

Analog image processing is any image processing task conducted on two-dimensional analog signals by analog means.

DIGITAL IMAGE PROCESSING

Digital image processing is the use of computer algorithms to perform image processing on digital images.

OPTICAL IMAGE PROCESSING

Optical image processing is the use of optical techniques to process an image in order to increase clarity and extract information from the image.

BASED ON THE TRANSFORMATIONS INVOLVED, IMAGE PROCESSING IS CLASSIFIED INTO THE FOLLOWING TYPES


a) Image-to-image transformation
b) Image-to-information transformation
c) Information-to-image transformation

IMAGE TO IMAGE TRANSFORMATION


Enhancement (making the image more useful or pleasing)
Restoration (de-blurring; grid and line removal)
Geometry (scaling, sizing, zooming, morphing, etc.)
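A brief sketch of image-to-image transformations using OpenCV (file names, the zoom factor and the gain/bias values are assumptions for illustration only):

import cv2

img = cv2.imread("input.png")

# Geometry: zoom the image by a factor of 2 in both directions.
zoomed = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_LINEAR)

# Enhancement: a simple contrast/brightness adjustment (alpha = gain, beta = bias).
enhanced = cv2.convertScaleAbs(img, alpha=1.3, beta=10)

cv2.imwrite("zoomed.png", zoomed)
cv2.imwrite("enhanced.png", enhanced)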

IMAGE TO INFORMATION TRANSFORMATION


Image statistics (histograms)
Image compression
Image analysis (segmentation, feature extraction)
Computer-aided detection and diagnosis (CAD)
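A minimal image-to-information example: computing an intensity histogram with OpenCV (the input file name is a placeholder); the output here is information about the image rather than another image.

import cv2

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# 256-bin histogram of gray levels.
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])
print("pixels with gray level 128:", int(hist[128][0]))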

INFORMATION TO IMAGE TRANSFORMATION


Decompression of compressed image data
Reconstruction of images
Computer graphics, animation and virtual reality
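A small sketch of an information-to-image transformation: reconstructing a displayable image from a compressed byte stream (assuming OpenCV; the JPEG quality setting is arbitrary):

import cv2

img = cv2.imread("input.png")

# Compress the image to an in-memory JPEG byte stream (image -> information).
ok, buf = cv2.imencode(".jpg", img, [cv2.IMWRITE_JPEG_QUALITY, 90])

# Decompress the data back into a displayable image (information -> image).
restored = cv2.imdecode(buf, cv2.IMREAD_COLOR)
print("reconstructed image shape:", restored.shape)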

STEPS INVOLVED IN IMAGE PROCESSING


Image processing encompasses a broad range of hardware, software, and theoretical underpinnings.

The following flow diagram depicts the important steps involved in image processing:

IMAGE ACQUISITION
The first step in the process is image acquisition, that is, acquiring a digital image. Doing so requires the following elements:
a) Imaging sensor
b) Digitizer

The imaging sensor acquires the image and the digitizer converts that image into digital form that a computer can work with. The imaging sensor could be a monochrome or color TV camera that produces an entire image of the problem domain every 1/30 second. The imaging sensor could also be a line camera that produces a single image line at a time; in this case the object's motion past the line scanner produces a two-dimensional image. If the output of the camera or other imaging sensor is not in digital form, digitization is achieved by an ADC (analog-to-digital converter). The nature of the sensor and of the image it produces are determined by the application; for example, mail-reading applications rely heavily on line-scan cameras.
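A minimal acquisition sketch using OpenCV (assuming a camera is attached at index 0; with a typical webcam, the sensor and its driver already deliver digitized frames):

import cv2

cap = cv2.VideoCapture(0)              # open the default camera (sensor + digitizer)
ok, frame = cap.read()                 # grab one digitized frame
cap.release()

if ok:
    # The frame is already a digital image: a NumPy array of pixel values.
    cv2.imwrite("acquired.png", frame)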

PREPROCESSING
After a digital image has been acquired, the next step deals with preprocessing of the image. The key function of preprocessing is to improve the image in ways that increase the chances of success for the other processes. Typically, preprocessing deals with techniques for enhancing contrast, removing noise, and isolating regions whose texture indicates a likelihood of alphanumeric information.
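A short preprocessing sketch, noise removal followed by contrast enhancement (assuming OpenCV and a grayscale input; the filter size and file names are illustrative):

import cv2

gray = cv2.imread("acquired.png", cv2.IMREAD_GRAYSCALE)

denoised = cv2.medianBlur(gray, 3)        # suppress impulse ("salt and pepper") noise
contrast = cv2.equalizeHist(denoised)     # spread gray levels to enhance contrast

cv2.imwrite("preprocessed.png", contrast)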

SEGMENTATION
The next stage deals with segmentation. Broadly defined, segmentation partitions an input image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. On the one hand, a rugged segmentation procedure brings the process a long way toward a successful solution of an imaging problem. On the other hand, erratic segmentation almost always leads to eventual failure.

The output of the segmentation stage is raw pixel data, constituting either the boundary of a region or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. A boundary representation is appropriate when the focus is on external shape characteristics, such as corners or inflections. A regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some situations both representations coexist.
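A minimal segmentation sketch using Otsu thresholding, which also shows the two output forms mentioned above: a region representation (the binary mask) and a boundary representation (the extracted contours). OpenCV 4 is assumed; file names are placeholders.

import cv2

gray = cv2.imread("preprocessed.png", cv2.IMREAD_GRAYSCALE)

# Partition the image into object and background pixels (region representation).
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Extract the boundaries of the segmented regions (boundary representation).
# Note: OpenCV 4 returns (contours, hierarchy); OpenCV 3 returned three values.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("number of segmented regions:", len(contours))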

REPRESENTATION AND DESCRIPTION


Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting features that yield some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
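A small sketch of the description step: computing a few simple shape descriptors for a segmented region, here a contour as produced by cv2.findContours (OpenCV assumed; the choice of features is illustrative):

import cv2

def describe(contour):
    # Simple quantitative descriptors that can help differentiate object classes.
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    x, y, w, h = cv2.boundingRect(contour)
    aspect_ratio = w / float(h)
    # Compactness is a size-independent shape feature (large for elongated/ragged shapes).
    compactness = (perimeter ** 2) / area if area > 0 else 0.0
    return [area, perimeter, aspect_ratio, compactness]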

RECOGNITION AND INTERPRETATION


The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. For example, identifying a character as, say, "c" requires associating the descriptors for that character with the label "c".

Interpretation then attempts to assign meaning to the set of labeled entities.
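A toy recognition sketch: assigning a label to an object by comparing its descriptor vector against stored prototype descriptors (a nearest-neighbour rule). The class names and prototype values here are entirely hypothetical.

import math

# Hypothetical prototype descriptors (e.g. [aspect_ratio, compactness]) per class label.
prototypes = {
    "bolt":   [3.0, 45.0],
    "washer": [1.0, 14.0],
}

def recognize(descriptor):
    # Assign the label whose prototype is closest in descriptor space.
    return min(prototypes, key=lambda label: math.dist(descriptor, prototypes[label]))

print(recognize([1.1, 15.2]))   # -> "washer"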

KNOWLEDGE BASE
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge base may be as simple as detailing regions of an image where the information of interest is known to be located. It can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition, the knowledge base also controls the interaction between the processing modules.
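As a minimal illustration of the simplest kind of knowledge base described above, a lookup table of image regions where the information of interest is expected to be located (the field names and coordinates are hypothetical):

# A simple knowledge base: prior knowledge of where information is expected,
# used to restrict later processing to those regions.
knowledge_base = {
    "postal_code": {"roi": (420, 40, 180, 60)},   # (x, y, width, height)
    "address":     {"roi": (60, 120, 400, 200)},
}

def region_for(field):
    return knowledge_base[field]["roi"]

print(region_for("postal_code"))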

COMPONENTS OF IMAGE PROCESSING


Image sensors
Image displays
Image processing software (OpenCV, MATLAB, CImg)
Image processing hardware
Memory

IMAGE SENSORS
Sensors are devices that convert illumination energy into digitized form. An image sensor is a device that converts an optical image into an electrical signal. It is used mostly in digital cameras and other imaging devices. Early sensors were video camera tubes, but a modern sensor is typically a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) active pixel sensor. The following sensors are used dominantly in image processing:

Charge-Coupled Devices (CCD)
Complementary Metal-Oxide-Semiconductor (CMOS) sensors

CHARGE COUPLED DEVICES (CCD)

A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example converted into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. Often the device is integrated with an image sensor, such as a photoelectric device, to produce the charge that is being read, thus making the CCD a major technology for digital imaging. Although CCDs are not the only technology that allows light detection, they are widely used in professional, medical, and scientific applications where high-quality image data is required.

COMPLEMENTARY MOS SENSORS (CMOS)

CMOS sensors, also known as active pixel sensors (APS), use integrated circuit elements such as transistors at each pixel to amplify and move the charge using more conventional wiring.

The CMOS approach is more flexible as each pixel can be read individually.

CCD VS CMOS
Most digital still cameras use either a CCD image sensor or a CMOS sensor. Both types of sensor accomplish the same task of capturing light and converting it into electrical signals.

A CCD is an analog device. When light strikes the chip, it is held as a small electrical charge in each photo sensor. The charges are converted to voltage one pixel at a time as they are read from the chip, and additional circuitry in the camera converts the voltage into digital information. A CMOS chip is a type of active pixel sensor (APS) made using the CMOS semiconductor process. Extra circuitry next to each photo sensor converts the light energy to a voltage, and additional circuitry on the chip may be included to convert the voltage to digital data.

Neither technology has a clear advantage in image quality. CCD sensors are more susceptible to vertical smear from bright light sources when the sensor is overloaded, although high-end CCDs do not suffer from this problem. CMOS sensors can potentially be implemented with fewer components, use less power, and/or provide faster readout than CCDs. CCD is a more mature technology and is in most respects the equal of CMOS, while CMOS sensors are less expensive to manufacture than CCD sensors.

APPLICATIONS
Medicine
Defense
Meteorology
Environmental science
Manufacturing
Surveillance
Crime investigation
Script recognition
Optical character recognition
Handwritten signature verification

CONCLUSION
Using image processing techniques, we can sharpen images, adjust contrast to make a graphic display more useful, reduce the amount of memory required for storing image information, and more. Owing to such techniques, image processing is applied in the recognition of images, as in factory-floor quality assurance systems; image enhancement, as in satellite reconnaissance systems; image synthesis, as in law enforcement suspect identification systems; and image construction, as in plastic surgery design systems.

REFERENCES

[1] R. C. Gonzalez, Digital Image Processing, Third Edition.
[2] Wikipedia.
[3] G. W. Awcock and R. Thomas (1996), Applied Image Processing.
[4] M. A. Sid-Ahmed (1995), Image Processing.
[5] W. K. Pratt (1978), Digital Image Processing.
[6] C. Watkins, A. Sadun and S. Marenka, Modern Image Processing.
[7] M. A. Sid-Ahmed, Image Processing.
[8] G. W. Awcock and R. Thomas, Applied Image Processing.
[9] Google.
