
Unit 3

What is Image Processing?


Image processing is a method of converting an image into digital form and performing operations on it, in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal processing methods to them. It is among the rapidly growing technologies today, with applications in many aspects of business. Image processing also forms a core research area within the engineering and computer science disciplines.

Image processing basically includes the following three steps:

1. Importing the image with an optical scanner or by digital photography.
2. Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to the human eye, as in satellite photographs.
3. Output, the last stage, in which the result can be an altered image or a report based on the image analysis.
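The three steps above can be sketched in a few lines of Python, using a small hard-coded 4x4 array of pixel values in place of a scanner; the sample data and the contrast-stretch enhancement are illustrative assumptions, not part of any particular system.

```python
# Step 1: "import" the image (here, hard-coded sample data, values 0-255).
image = [
    [ 10,  20,  30,  40],
    [ 50,  60,  70,  80],
    [ 90, 100, 110, 120],
    [130, 140, 150, 160],
]

# Step 2: analyze and manipulate - a simple contrast stretch that maps
# the darkest pixel to 0 and the brightest to 255.
lo = min(min(row) for row in image)
hi = max(max(row) for row in image)
enhanced = [[(p - lo) * 255 // (hi - lo) for p in row] for row in image]

# Step 3: output - here a report of basic statistics instead of a file.
pixels = [p for row in enhanced for p in row]
print("min:", min(pixels), "max:", max(pixels))
```

A real pipeline would read the image from a file and write the enhanced result back out; only the middle step changes with the application.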

Purpose of Image processing


The purpose of image processing is divided into five groups:
1. Visualization - Observe objects that are not visible.
2. Image sharpening and restoration - Create a better image.
3. Image retrieval - Seek the image of interest.
4. Measurement of pattern - Measure various objects in an image.
5. Image recognition - Distinguish the objects in an image.

Types
The two types of methods used for image processing are analog and digital image processing. Analog, or visual, techniques of image processing can be used for hard copies such as printouts and photographs. Image analysts apply various fundamentals of interpretation while using these visual techniques. Such processing is not confined to the area being studied; it also depends on the knowledge of the analyst. Association is another important tool in visual image processing, so analysts apply a combination of personal knowledge and collateral data.
Digital processing techniques help in the manipulation of digital images using computers. Raw data from imaging sensors on a satellite platform contains deficiencies; to overcome such flaws and recover the original information, it has to undergo various phases of processing. The three general phases that all types of data go through in digital processing are pre-processing, enhancement and display, and information extraction.

1.1.3 Components of Image Processing System

i) Image sensors
With reference to sensing, two elements are required to acquire a digital image. The first is a physical device that is sensitive to the energy radiated by the object we wish to image; the second is specialized image processing hardware.

ii) Specialized image processing hardware
It consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.
iii) Computer
It is a general-purpose computer and can range from a PC to a supercomputer, depending on the application. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.
iv) Software
It consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of these modules.

v) Mass storage
This capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. Image processing applications fall into three principal categories of storage:
i) Short-term storage for use during processing
ii) Online storage for relatively fast retrieval
iii) Archival storage, such as magnetic tapes and disks
vi) Image displays
Image displays in use today are mainly color TV monitors. These monitors are driven by the outputs of the image and graphics display cards that are an integral part of the computer system.
vii) Hardcopy devices
Devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written applications.
viii) Networking
It is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.

Fundamental Steps in Digital Image Processing


The steps involved in image processing fall into two categories:
(1) Methods whose inputs and outputs are images.
(2) Methods whose outputs are attributes extracted from those images.
i) Image acquisition
It could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing such as scaling.
ii) Image enhancement
It is among the simplest and most appealing areas of digital image processing. The idea behind it is to bring out details that are obscured, or simply to highlight certain features of interest in an image. Image enhancement is a very subjective area of image processing.
iii) Image restoration
It deals with improving the appearance of an image. It is an objective approach, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a good enhancement result.
iv) Color image processing
It is an area that has been gaining importance because of the use of digital images over the internet. Color image processing deals basically with color models and their implementation in image processing applications.
v) Wavelets and multiresolution processing
These are the foundation for representing images in various degrees of resolution.
vi) Compression
It deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it over a network. It has two major approaches:
a) Lossless compression
b) Lossy compression
vii) Morphological processing
It deals with tools for extracting image components that are useful in the representation and description of the shape and boundary of objects. It is mainly used in automated inspection applications.
viii) Representation and description
It always follows the output of the segmentation step, that is, raw pixel data constituting either the boundary of a region or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary.
ix) Recognition
It is the process that assigns a label to an object based on its descriptors. It is the last step of image processing, and it often uses artificial intelligence software.
Knowledge base
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with a change detection application.

Image resolution
Image resolution can be defined in many ways. One type is pixel resolution, which has been discussed in the tutorial on pixel resolution and aspect ratio.

Spatial resolution:
Spatial resolution states that the clarity of an image cannot be determined by the pixel resolution alone; the number of pixels in an image does not matter by itself. Spatial resolution can be defined as the smallest discernible detail in an image (Digital Image Processing - Gonzalez, Woods - 2nd Edition). Alternatively, we can define spatial resolution as the number of independent pixel values per inch.
In short, spatial resolution means we cannot compare two different types of images to see which one is clearer. If we have to compare two images, to see which one is clearer or which has more spatial resolution, we have to compare two images of the same size.

MEASURING SPATIAL RESOLUTION:


Since spatial resolution refers to clarity, different measures have been devised for different devices:
Dots per inch
Lines per inch
Pixels per inch
DOTS PER INCH:
Dots per inch, or DPI, is usually used for monitors.
LINES PER INCH:
Lines per inch, or LPI, is usually used for laser printers.
PIXELS PER INCH:
Pixels per inch, or PPI, is the measure used for devices such as tablets, mobile phones, etc.
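PPI for a display can be computed from its pixel dimensions and its physical diagonal; a quick sketch (the 1920x1080, 5.5-inch figures below are hypothetical example values):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """PPI from the pixel dimensions and the physical diagonal of a screen."""
    diagonal_px = math.hypot(width_px, height_px)  # pixels along the diagonal
    return diagonal_px / diagonal_inches

# A hypothetical 5.5-inch, 1920x1080 phone screen.
ppi = pixels_per_inch(1920, 1080, 5.5)
print(round(ppi, 1))  # about 400.5 PPI
```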

What is quantization?
Quantization is the opposite of sampling: it is done on the y-axis. When you quantize an image, you are actually dividing a signal into quanta (partitions).
On the x-axis of the signal are the coordinate values, and on the y-axis we have the amplitudes. So digitizing the amplitudes is known as quantization.

Here is how it is done:

[Figure: quantization of a signal]
You can see in this figure that the signal has been quantized into three different levels. That means that when we sample an image, we actually gather a lot of values, and in quantization, we set levels for these values. This is clearer in the figure below.

[Figure: quantization levels]
In the figure shown for sampling, although the samples had been taken, they still spanned vertically over a continuous range of gray level values. In the figure shown above, these vertically ranging values have been quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white). The number of levels can vary according to the type of image you want.
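The same idea can be sketched numerically: divide the amplitude axis into equal partitions and map each sample to its partition index. The sample values below are made up for illustration, and the amplitudes are assumed to be normalized to the range [0, 1).

```python
# Hypothetical sampled amplitudes, already normalized to [0, 1).
samples = [0.05, 0.31, 0.48, 0.77, 0.98, 0.62]

# Quantize into 5 gray levels: 0 = black ... 4 = white.
levels = 5
quantized = [min(int(s * levels), levels - 1) for s in samples]
print(quantized)  # [0, 1, 2, 3, 4, 3]
```

Raising `levels` to 256 gives the 8-bit grayscale case discussed below.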

The relation of quantization with gray levels has been further discussed below.

Relation of Quantization with gray level resolution:

The quantized figure shown above has 5 different levels of gray. It means that the image formed from this signal would have only 5 different shades: more or less a black-and-white image with some grays. Now, if you want to improve the quality of the image, there is one thing you can do here: increase the number of levels, i.e., raise the gray level resolution. If you increase this level to 256, you have a grayscale image, which is far better than a simple black-and-white image.

Now 256, or 5, or whatever number of levels you choose is called the gray level. Remember the formula that we discussed in the previous tutorial on gray level resolution:

L = 2^k

We have discussed that gray level can be defined in two ways:

Gray level = number of bits per pixel (BPP) (k in the equation)

Gray level = number of levels per pixel (L in the equation)


In this case the gray level is equal to 256. If we have to calculate the number of bits, we simply put the values in the equation. In the case of 256 levels, we have 256 different shades of gray and 8 bits per pixel, hence the image would be a grayscale image.
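The relation L = 2^k between gray levels and bits per pixel can be checked directly; the two helper function names below are illustrative, not from any library.

```python
import math

def levels_from_bits(k):
    """Gray levels L from k bits per pixel: L = 2**k."""
    return 2 ** k

def bits_from_levels(L):
    """Bits per pixel k from L gray levels (assumes L is a power of two)."""
    return int(math.log2(L))

print(levels_from_bits(8))    # 256 levels for an 8-bit grayscale image
print(bits_from_levels(256))  # 8 bits per pixel
```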

High-pass filter
A high-pass filter is an electronic filter that passes signals with a frequency
higher than a certain cutoff frequency and attenuates signals with frequencies
lower than the cutoff frequency. The amount of attenuation for each frequency
depends on the filter design. A high-pass filter is usually modeled as a linear
time-invariant system. It is sometimes called a low-cut filter or bass-cut filter.[1]

High-pass filters have many uses, such as blocking DC from circuitry sensitive to
non-zero average voltages or radio frequency devices. They can also be used in
conjunction with a low-pass filter to produce a bandpass filter.
Image
High-pass and low-pass filters are also used in digital image processing to
perform image modifications, enhancements, noise reduction, etc., using designs
done in either the spatial domain or the frequency domain.[6]

A high-pass filter, if the imaging software does not have one, can be implemented by duplicating the layer, applying a gaussian blur, inverting, and then blending with the original layer at some opacity (say 50%).[7]
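The layer recipe above can be sketched on a single row of pixels, with a simple 3-tap box blur standing in for the gaussian blur; the pixel values are illustrative.

```python
# A hypothetical row of 8-bit pixels with one edge in the middle.
row = [100, 100, 100, 200, 200, 200]

# "Duplicate the layer" and blur it (edges clamped).
blurred = [
    (row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) // 3
    for i in range(len(row))
]

# Invert, then blend 50/50 with the original layer. Constant regions land
# near mid-gray (127); only the edge deviates - the high-pass response.
inverted = [255 - p for p in blurred]
highpass = [(o + i) // 2 for o, i in zip(row, inverted)]
print(highpass)  # [127, 127, 111, 144, 127, 127]
```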

The unsharp masking, or sharpening, operation used in image editing software is a high-boost filter, a generalization of the high-pass filter.
A high-pass filter is the basis for most sharpening methods. An image is sharpened when contrast is enhanced between adjoining areas with little variation in brightness or darkness (see Sharpening an Image for more detailed information).

A high-pass filter tends to retain the high-frequency information within an image while reducing the low-frequency information. The kernel of the high-pass filter is designed to increase the brightness of the center pixel relative to neighboring pixels. The kernel array usually contains a single positive value at its center, completely surrounded by negative values. The following array is an example of a 3 by 3 kernel for a high-pass filter:
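One common kernel of this form has a positive center surrounded by negatives, summing to zero so that flat regions give no response; a minimal sketch (the sample patches are illustrative):

```python
# A 3x3 high-pass kernel: positive center, negative surround, sums to zero.
kernel = [
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
]

def apply_at(image, kernel, r, c):
    """Correlate the kernel with the 3x3 neighborhood centered at (r, c)."""
    return sum(
        image[r + i - 1][c + j - 1] * kernel[i][j]
        for i in range(3) for j in range(3)
    )

flat = [[50] * 3 for _ in range(3)]                  # uniform region
spike = [[50, 50, 50], [50, 60, 50], [50, 50, 50]]   # brighter center pixel

print(apply_at(flat, kernel, 1, 1))   # 0: no response in a flat region
print(apply_at(spike, kernel, 1, 1))  # 80: the center pixel stands out
```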

Low-pass filter
A low-pass filter is a filter that passes signals with a frequency lower than a certain cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The amount of attenuation for each frequency depends on the filter design. The filter is sometimes called a high-cut filter, or a treble-cut filter in audio applications. A low-pass filter is the opposite of a high-pass filter. A band-pass filter is a combination of a low-pass and a high-pass filter.

Low-pass filters exist in many different forms, including electronic circuits (such
as a hiss filter used in audio), anti-aliasing filters for conditioning signals prior to
analog-to-digital conversion, digital filters for smoothing sets of data, acoustic
barriers, blurring of images, and so on. The moving average operation used in
fields such as finance is a particular kind of low-pass filter, and can be analyzed
with the same signal processing techniques as are used for other low-pass filters.
Low-pass filters provide a smoother form of a signal, removing the short-term
fluctuations, and leaving the longer-term trend.
A low-pass filter is the basis for most smoothing methods. An image is smoothed by decreasing the disparity between pixel values, averaging nearby pixels (see Smoothing an Image for more information).

Using a low-pass filter tends to retain the low-frequency information within an image while reducing the high-frequency information. An example is an array of ones divided by the number of elements within the kernel, such as the following 3 by 3 kernel:
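A minimal sketch of that averaging (box) kernel in action, applied to a noisy spike; the pixel values are illustrative.

```python
# The 3x3 averaging kernel: nine ones divided by the element count (9).
kernel = [[1 / 9] * 3 for _ in range(3)]

def apply_at(image, kernel, r, c):
    """Correlate the kernel with the 3x3 neighborhood centered at (r, c)."""
    return sum(
        image[r + i - 1][c + j - 1] * kernel[i][j]
        for i in range(3) for j in range(3)
    )

# A noisy spike in an otherwise uniform region is pulled toward its neighbors.
noisy = [[90, 90, 90], [90, 180, 90], [90, 90, 90]]
print(round(apply_at(noisy, kernel, 1, 1)))  # 100: the spike is smoothed
```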
