INTRODUCTION:
The field of digital image processing is continually evolving. During the past five years, there
has been a significant increase in the level of interest in image data compression, image
recognition, and knowledge-based analysis systems.
Interest in digital image processing methods stems from two principal application areas:
improvement of pictorial information for human interpretation, and processing of scene data
for autonomous machine perception.
One of the first applications of image processing techniques in the first category was in
improving digitized newspaper pictures sent by submarine cable between London and New York.
Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the
time required to transport a picture across the Atlantic from more than a week to less than
three hours. Some of the initial problems in improving the visual quality of these early digital
pictures were related to the selection of printing procedures and the distribution of brightness
levels. The early printing method, in which specialized printing equipment coded pictures for
cable transmission and reproduced them at the receiving end, was abandoned toward the end of
1921 in favor of a technique based on photographic reproduction made from tapes perforated at
the telegraph receiving terminal; the improvements are evident in both tonal quality and
resolution. During this period, the introduction of a system for developing a film plate via light
beams that were modulated by the coded picture tape improved the reproduction process
considerably.
The term monochrome image, or simply image, refers to a 2-dimensional light intensity function
f(x, y), where x and y denote spatial coordinates and the value of f at any point (x, y) is
proportional to the brightness (or gray level) of the image at that point. The figure below
illustrates the axis convention.
FIGURE 1
Sometimes it is useful to view an image function in perspective, with the third axis being
brightness. Viewed in this way, the figure would appear as a series of active peaks in
regions with numerous changes in brightness levels, and as smoother regions or plateaus where
the brightness levels vary little or are constant. Using the convention of assigning
proportionately higher values to brighter areas makes the height of the components in the
plot proportional to the corresponding brightness in the image.
A digital image is an image f(x, y) that has been discretized in both spatial coordinates and
brightness. A digital image can be considered a matrix whose row and column indices identify a
point in the image and whose corresponding element value identifies the gray level at that point.
The elements of such a digital array are called image elements, picture elements, pixels, or pels,
with the last two being commonly used abbreviations of "picture elements". The computer
breaks the image down into thousands of pixels, the smallest components of an image.
They are the small dots in the horizontal lines across a television screen. Each pixel is converted
into a number that represents the brightness of the dot. For a black-and-white image, the pixel
represents different shades between total black and full white. The computer can then adjust the
pixels to enhance image quality.
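This matrix view can be sketched in a few lines of Python; numpy is assumed here, and the tiny 4x4 gray-level values are made up purely for illustration:

```python
import numpy as np

# A tiny 4x4 grayscale "digital image": each matrix element is the
# gray level (0 = black, 255 = white) of the pixel at that row/column.
image = np.array([
    [ 10,  20,  30,  40],
    [ 50,  60,  70,  80],
    [ 90, 100, 110, 120],
    [130, 140, 150, 160],
], dtype=np.uint8)

# The value f(x, y) at row 1, column 2 is simply the gray level there.
gray_level = int(image[1, 2])   # 70

# "Adjusting the pixels" can be as simple as adding a brightness offset,
# clipping so values stay within the valid 0..255 range.
brightened = np.clip(image.astype(np.int16) + 50, 0, 255).astype(np.uint8)
```

Indexing the matrix recovers f at any point, and whole-array arithmetic adjusts every pixel at once.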
The first step in digital image processing is to transfer an image to a computer: digitizing the
image and turning it into a computer image file that can be stored in a computer's memory or on
a storage medium such as a hard disk or CD-ROM. Digitization involves translating the image
into a numerical code that can be understood by a computer. It can be accomplished using a
scanner or a video camera linked to a frame-grabber board in the computer.
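Digitization amounts to sampling a continuous intensity function on a grid and quantizing each sample to a discrete gray level. A minimal sketch, assuming numpy and a hypothetical smooth intensity function chosen only for illustration:

```python
import numpy as np

# Hypothetical continuous intensity f(x, y), sampled on a coarse 4x4 grid.
x = np.linspace(0.0, 1.0, 4)
y = np.linspace(0.0, 1.0, 4)
xx, yy = np.meshgrid(x, y)
analog = (xx + yy) / 2.0          # continuous intensities in [0, 1]

# Quantization: map each sample to one of 256 discrete gray levels,
# producing the numerical code a computer can store.
digital = np.round(analog * 255).astype(np.uint8)
```

Finer grids and more quantization levels trade storage for fidelity to the original scene.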
FUNDAMENTAL STEPS IN IMAGE PROCESSING
Digital image processing encompasses a broad range of hardware, software, and theoretical
underpinnings.
FIGURE 2
IMAGE ACQUISITION:
The first step in this process is to acquire a digital image, which requires an imaging sensor
and the capability to digitize the signal produced by the sensor. The imaging sensor could, for
example, be a line-scan camera that produces a single image line at a time; in this case the
motion of the object past the line scanner produces a 2-dimensional image. If the output of the
camera or other imaging sensor is not already in digital form, an analog-to-digital converter
digitizes it.
The digitizer is a device for converting the electrical output of the physical sensing
device into digital form.
As an example of a physical sensing device, consider the basics of X-ray imaging systems. The
output of an X-ray source is directed at an object, and a medium sensitive to X-rays is placed on
the other side of the object. The medium thus acquires an image of materials (such as bones and
tissue) having various degrees of X-ray absorption. The medium itself can be film, or a
television camera combined with a converter of X-rays to photons, whose outputs are combined
to reconstruct a digital image.
Another major sensor category deals with visible and infrared light. Among
the devices most frequently used for this purpose are microdensitometers and image
dissectors. In microdensitometers the image to be digitized is in the form of a transparency
(such as a negative film) or a photograph. Although these are slow devices, they are capable of
high degrees of positional accuracy due to the essentially continuous nature of the mechanical
translation used in the digitization process.
Image digitization is achieved by feeding the video output of the cameras into a digitizer, as
stated earlier, which converts the given input to its equivalent digital form.
IMAGE PREPROCESSING:
After a digital image has been obtained, the next step deals with preprocessing that image. The
key function of preprocessing is to improve the image in ways that increase the chances for
success of other processes. It typically deals with techniques for enhancing contrast,
removing noise, and isolating regions whose texture indicates a likelihood of alphanumeric
information. The three main categories of digital image processing are image compression,
image enhancement and restoration, and measurement extraction.
Image compression
Image compression is a mathematical technique used to reduce the amount of computer memory
needed to store a digital image. The computer discards some information, while retaining
sufficient information to make the image pleasing to the human eye.
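The paragraph above describes lossy compression, which discards information; as a simpler, hedged illustration of the general idea of shrinking a representation, here is a lossless run-length encoder (everything in this sketch, including the sample row, is invented for illustration):

```python
def run_length_encode(row):
    """Encode a sequence of gray levels as (value, run_length) pairs.

    Long runs of identical gray levels, common in flat image regions,
    collapse into a single pair, reducing the storage required.
    """
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

row = [255, 255, 255, 255, 0, 0, 128]
encoded = run_length_encode(row)   # [(255, 4), (0, 2), (128, 1)]
```

Seven pixel values compress to three pairs here; real image compressors combine such redundancy removal with the perceptual discarding described above.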
The spatial domain refers to the image plane itself, and approaches in this category are based on
direct manipulation of pixels in an image, whereas frequency-domain processing techniques are
based on modifying the Fourier transform of an image.
Histogram Equalization
Histogram equalization is a process that increases the contrast of an image, as shown below.
An image with poor contrast, such as the one at the left of the figure, can be improved by
adjusting the image histogram to produce the image shown at the right of it.
FIGURE 3
The gray levels of an image that has been subjected to histogram equalization are spread out and
always reach white. This process increases the dynamic range of gray levels and, consequently,
produces an increase in image contrast. Overall, histogram equalization significantly
improves the visual appearance of the image.
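One common way histogram equalization can be implemented is via the cumulative distribution function of the gray levels; the sketch below assumes numpy, and the cramped low-contrast image is invented for illustration:

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Spread the gray levels using the cumulative distribution function."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic mapping: rescale the CDF so the output spans 0..levels-1.
    mapping = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return mapping.astype(np.uint8)[image]

# A low-contrast image: gray levels cramped into the range 100..103.
low_contrast = np.array([[100, 100, 101, 101],
                         [102, 102, 103, 103]], dtype=np.uint8)
equalized = equalize_histogram(low_contrast)
```

After equalization the four cramped levels are stretched across the full 0..255 range, matching the "spread out and always reach white" behavior described above. (The sketch assumes a non-constant image; a flat image would make the denominator zero.)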
Filters
Filters are used to enhance the appearance of raw images. However, some information is lost in
the process. Filters include
• Low pass (softening)
• Median (noise removal)
The image at the left of the figure has been corrupted by noise during the digitization process.
The 'clean' image at the right of it was obtained by applying a median filter to the image.
FIGURE 5
FIGURE 6
FIGURE 7
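One naive way the median filter described above could be implemented is to slide a small window over the image and replace each pixel by the median of its neighborhood; numpy is assumed, and the noisy test image is fabricated for illustration:

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighborhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')  # repeat edge pixels at borders
    out = np.empty_like(image)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out

# A flat image corrupted by a single bright "salt" noise pixel.
noisy = np.full((5, 5), 10, dtype=np.uint8)
noisy[2, 2] = 255
cleaned = median_filter(noisy)
```

The isolated noise pixel is outvoted by its eight neighbors and vanishes, which is why the median filter removes impulse noise so well while preserving edges better than simple averaging.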
SEGMENTATION:
The first step in image analysis generally is to segment the image. Segmentation subdivides an
image into its constituent parts or objects. The level to which this subdivision is carried depends
on the problem being solved. That is, segmentation should stop when the objects of interest in
an application have been isolated.
For example, in autonomous air-to-ground target acquisition applications, interest lies in
identifying vehicles on a road. The first step is to segment the road from the image and then to
segment the contents of the road down to objects in a range of sizes that correspond to potential
vehicles. There is no point in carrying segmentation below this scale, nor is there any need to
attempt segmentation of components that lie outside the boundaries of the road. In general,
autonomous segmentation is one of the most difficult tasks in image processing.
The segmentation algorithms for monochrome images generally are based on one of two basic
properties of gray-level values: discontinuity and similarity. In the first category, the
approach is to partition an image based on abrupt changes in gray level. The principal areas of
interest within this category are detection of isolated points and detection of lines and edges in an
image. In the second category, the principal approaches are based on thresholding, region
growing, and region splitting and merging.
The concept of segmenting an image based on discontinuity and similarity of the gray-level values
of its pixels is applicable to both static and dynamic (time-varying) images.
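The simplest similarity-based approach, global thresholding, can be sketched in a few lines; numpy is assumed and the small image with a bright object on a dark background is made up for illustration:

```python
import numpy as np

# Gray-level image: a bright object on a dark background.
image = np.array([[ 20,  25,  30,  22],
                  [ 24, 200, 210,  28],
                  [ 26, 205, 198,  21],
                  [ 23,  27,  29,  25]], dtype=np.uint8)

# Similarity-based segmentation via a global threshold: pixels whose
# gray level exceeds T are assigned to the object, the rest to background.
T = 128
object_mask = image > T
```

The boolean mask isolates the four bright pixels; real systems choose T automatically, for example from the image histogram, rather than fixing it by hand as done here.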
FIGURE 8
REPRESENTATION AND DESCRIPTION:
A region can be represented in terms of its external characteristics (its boundary) or in terms of
its internal characteristics (the pixels comprising the region), for example by regional
descriptors such as topological descriptors, or by describing its texture using statistical or
structural approaches.
Choosing a representation scheme, however, is only part of the task of making the data useful to
the computer. The next task is to describe the region based on the chosen representation. For
example, a region may be represented by its boundary, with the boundary described by features
such as its length and the orientation of the straight line joining its extreme points. Generally, an
external representation is chosen when the primary focus is on shape characteristics, and an
internal representation is selected when the focus is on reflectivity properties, such as color and
texture.
Recognition is the process that assigns a label to an object based on the information provided by
its descriptors. Interpretation involves assigning meaning to an ensemble of recognized
objects. For example, identifying a character as, say, a 'c' requires associating the
descriptors for that character with the label 'c'. We conclude the coverage of digital image
processing by developing several techniques for recognition and interpretation.
APPLICATION OF DIGITAL IMAGE PROCESSING
Digital image processing finds its applications in many areas such as:
Criminology
Morphology
Microscopy
Bio medical
Meteorology
MORPHOLOGY: The word morphology commonly denotes a branch of biology that deals
with the form and structure of animals and plants. In image processing, the term is used in the
context of mathematical morphology, a tool for extracting image components that are useful in
the representation and description of region shape, such as boundaries, skeletons, and the convex
hull. The language of mathematical morphology is set theory; sets in this theory represent the
shapes of objects in an image.
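The set-theoretic flavor of morphology can be sketched with binary dilation, one of its basic operations; numpy is assumed, and the single-pixel shape and cross-shaped structuring element are chosen purely for illustration:

```python
import numpy as np

def dilate(binary, struct):
    """Binary dilation: an output pixel is set if the structuring element,
    centered there, overlaps any foreground pixel of the input set."""
    pad = struct.shape[0] // 2
    padded = np.pad(binary, pad, mode='constant')
    rows, cols = binary.shape
    out = np.zeros_like(binary)
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + struct.shape[0], c:c + struct.shape[1]]
            out[r, c] = np.any(window & struct)
    return out

shape = np.zeros((5, 5), dtype=bool)
shape[2, 2] = True                      # the "object" is a single pixel
cross = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)
grown = dilate(shape, cross)            # the pixel grows into a cross
```

Dilation grows object sets and its dual, erosion, shrinks them; combinations of the two yield the boundaries, skeletons, and other shape descriptors mentioned above.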
Digital image processing deals with the process in which a given image is processed
using techniques such as Image Acquisition, Preprocessing, Segmentation, and so on.
These elements of image processing are addressed in order to keep pace with new developments
in image-processing hardware and software.
Firstly, in the processing flow, Image Acquisition plays a prominent role: the image is
acquired so that it can be processed. After acquiring the image, it is Preprocessed, i.e. the
image is Enhanced, Restored, and then Compressed. During image enhancement, several
techniques are adopted, such as gray-scale mappings for image negatives, contrast stretching,
gray-level slicing, and so on. After enhancement, the image is Restored; the ultimate goal of
restoration is to improve an image or to reconstruct an image that has been degraded, using
some a priori knowledge of the degradation phenomenon. Then comes Compression, in which
the amount of data required to represent a digital image is reduced. Image compression plays a
crucial role in many important and diverse applications, including video conferencing, remote
sensing, and fax transmission. In the next process, Segmentation, the image is divided into
several parts or segments. After segmentation, Representation and Description come into the
picture: choosing a representation scheme is only part of the task of making the data useful to
the computer, and the next task is to describe the region based on the chosen representation.
Next comes Recognition and Interpretation, a process in which the acquired image is
recognized and interpreted to obtain a final image of high resolution and clarity.
The image processing techniques described here are found in almost all present
communication systems, such as remote sensing, and have also found applications in areas like
FORENSIC DEPARTMENTS, SPACE SERVICES, MEDICINE, and many more.