Types
The two types of methods used for image processing are analog and digital image processing. Analog, or visual, techniques of image processing can be used for hard copies such as printouts and photographs. Image analysts apply various fundamentals of interpretation while using these visual techniques. The analysis is not confined to the area being studied; it also draws on the knowledge of the analyst. Association is another important tool in visual image processing, so analysts apply a combination of personal knowledge and collateral data.
Digital processing techniques help in the manipulation of digital images using computers. Because the raw data from the imaging sensors on a satellite platform contains deficiencies, it has to undergo various phases of processing to remove those flaws and recover the original information. The three general phases that all types of data undergo in digital processing are pre-processing, enhancement and display, and information extraction.
i) Image Sensors
With reference to sensing, two elements are required to acquire a digital image. The first is a physical device that is sensitive to the energy radiated by the object we wish to image; the second is specialized image processing hardware.
v) Mass storage
This capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. Image processing applications fall into three principal categories of storage: short-term storage for use during processing, on-line storage for relatively fast recall, and archival storage characterized by infrequent access.
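The one-megabyte figure for an uncompressed 1024 x 1024, 8-bit image can be checked with a few lines of arithmetic, sketched here in Python:

```python
# Storage needed for an uncompressed 8-bit image of 1024 x 1024 pixels.
width, height = 1024, 1024
bits_per_pixel = 8

total_bits = width * height * bits_per_pixel
total_bytes = total_bits // 8            # 1,048,576 bytes
megabytes = total_bytes / (1024 * 1024)  # exactly 1 MB

print(f"{total_bytes} bytes = {megabytes} MB")
```

Doubling the bit depth to 16 bits, or storing three color channels, scales the requirement proportionally.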
vi) Image displays
The displays in use today are mainly color TV monitors, driven by the outputs of the image and graphics display cards that are an integral part of the computer system.
vii) Hardcopy devices
The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written applications.
viii) Networking
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
iii) Image Restoration
Image restoration deals with improving the appearance of an image. It is an objective approach, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a good enhancement result.
iv) Color image processing
This is an area that has been gaining importance because of the use of digital images over the internet. Color image processing deals basically with color models and their implementation in image processing applications.
v) Wavelets and Multiresolution Processing
These are the foundation for representing images at various degrees of resolution.
vi) Compression
Compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it over a network. It has two major approaches:
a) Lossless compression
b) Lossy compression
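As a sketch of the lossless case, Python's standard zlib module (a general-purpose DEFLATE codec, used here as a stand-in for image formats such as PNG) shows that the original bytes are recovered exactly; lossy schemes such as JPEG instead discard some information in exchange for smaller files:

```python
import zlib

# A synthetic "image" row with long runs of identical pixels compresses well.
raw = bytes([0] * 500 + [255] * 500)

compressed = zlib.compress(raw)   # lossless DEFLATE compression
restored = zlib.decompress(compressed)

print(len(raw), "->", len(compressed), "bytes")
assert restored == raw  # lossless: every byte recovered exactly
```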
vii) Morphological processing
This deals with tools for extracting image components that are useful in the representation and description of the shape and boundary of objects. It is mainly used in automated inspection applications.
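As a minimal sketch of one morphological operator, the following pure-Python dilation grows a binary shape by a 3x3 square structuring element (erosion, its dual, shrinks the shape instead); the `dilate` helper is written here for illustration:

```python
# Binary dilation with a 3x3 square structuring element (pure Python sketch).
def dilate(image):
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Output pixel is 1 if any 3x3 neighbour (including itself) is 1.
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and image[rr][cc]:
                        out[r][c] = 1
    return out

shape = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0]]
print(dilate(shape))  # the single 1 grows into a 3x3 block
```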
viii) Representation and Description
This step always follows the output of the segmentation step, that is, raw pixel data constituting either the boundary of a region or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary.
ix) Recognition
Recognition is the process that assigns a label to an object based on its descriptors. It is the last step of image processing, and it typically draws on artificial intelligence techniques in software.
Knowledge base
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge base. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with a change-detection application.
Image resolution
Image resolution can be defined in many ways. One type, pixel resolution, has been discussed in the tutorial on pixel resolution and aspect ratio.
Spatial resolution:
Spatial resolution recognizes that the clarity of an image cannot be determined by the pixel resolution alone; the number of pixels in an image does not matter by itself. Spatial resolution can be defined as the smallest discernible detail in an image (Digital Image Processing - Gonzalez, Woods - 2nd Edition). In other words, we can define spatial resolution as the number of independent pixel values per inch.
In short, what spatial resolution implies is that we cannot compare two different types of images to see which one is clearer. If we have to compare two images to see which one is clearer, or which has more spatial resolution, we have to compare two images of the same size.
What is quantization?
Quantization is the counterpart of sampling, and it is done on the y axis. When you are quantizing an image, you are actually dividing a signal into quanta (partitions). On the x axis of the signal are the coordinate values, and on the y axis we have the amplitudes. So digitizing the amplitudes is known as quantization.
(Figure: a signal quantized into three levels)
You can see in this image that the signal has been quantized into three different levels. That means that when we sample an image, we actually gather a lot of values, and in quantization, we set levels for these values. This is made clearer in the image below.
(Figure: quantization levels, from 0 to 4)
In the figure shown in the sampling tutorial, although the samples had been taken, they still spanned vertically over a continuous range of gray level values. In the figure shown above, these vertically ranging values have been quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white). The number of levels can vary according to the type of image you want.
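The idea can be sketched in a few lines of Python: a helper (hypothetical, written for illustration) that maps a continuous amplitude in [0, 1] to the nearest of five levels, 0 for black up to 4 for white:

```python
# Quantize a continuous amplitude into a fixed number of discrete levels.
def quantize(value, levels=5, vmax=1.0):
    # Map value in [0, vmax] onto the nearest of `levels` levels (0..levels-1).
    level = round(value / vmax * (levels - 1))
    return min(levels - 1, max(0, level))

samples = [0.05, 0.3, 0.55, 0.8, 1.0]
print([quantize(s) for s in samples])  # each amplitude snaps to a level
```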
The relation of quantization to gray levels is discussed further below.
The quantized figure shown above has 5 different levels of gray. It means that the image formed from this signal would have only 5 different shades; it would be more or less a black and white image with some shades of gray. Now if you want to improve the quality of the image, there is one thing you can do here: increase the number of levels, that is, the gray level resolution. If you increase this level to 256, it means you have a grayscale image, which is far better than a simple black and white image.
Now 256, or 5, or whatever number of levels you choose, is called the gray level. Remember the formula that we discussed in the previous tutorial on gray level resolution, which is L = 2^k, where L is the number of gray levels and k is the number of bits per pixel. We have also discussed that gray level can be defined in two ways, as covered in that tutorial.
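The formula L = 2^k is easy to verify directly:

```python
# Gray level resolution: L = 2**k gray levels for k bits per pixel.
for k in (1, 4, 8):
    print(k, "bits ->", 2 ** k, "levels")

# 1 bit gives a pure black-and-white image; 8 bits give a 256-level grayscale image.
```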
High-pass filter
A high-pass filter is an electronic filter that passes signals with a frequency
higher than a certain cutoff frequency and attenuates signals with frequencies
lower than the cutoff frequency. The amount of attenuation for each frequency
depends on the filter design. A high-pass filter is usually modeled as a linear
time-invariant system. It is sometimes called a low-cut filter or bass-cut filter.[1]
High-pass filters have many uses, such as blocking DC from circuitry sensitive to
non-zero average voltages or radio frequency devices. They can also be used in
conjunction with a low-pass filter to produce a bandpass filter.
Image
High-pass and low-pass filters are also used in digital image processing to
perform image modifications, enhancements, noise reduction, etc., using designs
done in either the spatial domain or the frequency domain.[6]
A high-pass filter, if the imaging software does not have one, can be implemented by duplicating the layer, applying a Gaussian blur, inverting, and then blending the result with the original layer at some opacity (say 50%).[7]
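The duplicate / blur / invert / blend recipe can be sketched on a one-dimensional row of grayscale pixels. The box blur below is a hypothetical stand-in for the Gaussian blur, and 50% opacity becomes a 50/50 weighted average:

```python
# Layer-based high-pass trick: duplicate, blur, invert, blend 50/50.
def box_blur(row):
    # Simple 3-tap moving average as a stand-in for a Gaussian blur.
    out = []
    for i in range(len(row)):
        window = row[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

row = [10, 10, 200, 10, 10]            # a bright detail on a dark background
blurred = box_blur(row)
inverted = [255 - p for p in blurred]  # invert the blurred copy
blended = [0.5 * o + 0.5 * i for o, i in zip(row, inverted)]

print([round(p) for p in blended])
```

Flat regions settle near mid-gray (128) while the contrast around the bright detail survives, which is exactly the high-pass look this technique produces in image editors.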
A high-pass filter tends to retain the high-frequency information within an image while reducing the low-frequency information. The kernel of the high-pass filter is designed to increase the brightness of the center pixel relative to neighboring pixels. The kernel array usually contains a single positive value at its center, which is completely surrounded by negative values. The following array is an example of a 3 by 3 kernel for a high-pass filter:

-1 -1 -1
-1  8 -1
-1 -1 -1
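A small sketch of how such a kernel responds, using a zero-sum kernel with a positive center (8) surrounded by -1s, as described above; the `apply_kernel` helper is hypothetical, written for illustration:

```python
# Response of a 3x3 high-pass kernel at a single pixel.
kernel = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def apply_kernel(image, r, c, kernel):
    # Weighted sum of the 3x3 neighbourhood centred on (r, c).
    total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            total += kernel[dr + 1][dc + 1] * image[r + dr][c + dc]
    return total

flat = [[10] * 3 for _ in range(3)]   # uniform region: no detail
edge = [[10, 10, 10],
        [10, 50, 10],
        [10, 10, 10]]                 # bright centre pixel: high-frequency detail

print(apply_kernel(flat, 1, 1, kernel))  # 0: flat areas are suppressed
print(apply_kernel(edge, 1, 1, kernel))  # strong response where intensity changes
```

Because the kernel weights sum to zero, constant (low-frequency) regions are wiped out while abrupt intensity changes produce a large output.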
Low-pass filter
A low-pass filter is a filter that passes signals with a frequency lower than a
certain cutoff frequency and attenuates signals with frequencies higher than the
cutoff frequency. The amount of attenuation for each frequency depends on the
filter design. The filter is sometimes called a high-cut filter, or treble cut filter in
audio applications. A low-pass filter is the opposite of a high-pass filter. A bandpass filter is a combination of a low-pass and a high-pass filter.
Low-pass filters exist in many different forms, including electronic circuits (such
as a hiss filter used in audio), anti-aliasing filters for conditioning signals prior to
analog-to-digital conversion, digital filters for smoothing sets of data, acoustic
barriers, blurring of images, and so on. The moving average operation used in
fields such as finance is a particular kind of low-pass filter, and can be analyzed
with the same signal processing techniques as are used for other low-pass filters.
Low-pass filters provide a smoother form of a signal, removing the short-term
fluctuations, and leaving the longer-term trend.
A low-pass filter is the basis for most smoothing methods. An image is smoothed by decreasing the disparity between pixel values, averaging nearby pixels (see Smoothing an Image for more information).
Using a low-pass filter tends to retain the low-frequency information within an image while reducing the high-frequency information. An example is an array of ones divided by the number of elements within the kernel, such as the following 3 by 3 kernel:

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
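A sketch of that averaging kernel in Python (the `smooth_pixel` helper is hypothetical, written for illustration): convolving a noisy spike with the 3x3 box kernel pulls it toward the neighbourhood mean:

```python
# A 3x3 averaging (box) kernel: all ones divided by the element count.
n = 3
kernel = [[1 / (n * n)] * n for _ in range(n)]

def smooth_pixel(image, r, c):
    # Mean of the 3x3 neighbourhood centred on (r, c).
    total = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            total += kernel[dr + 1][dc + 1] * image[r + dr][c + dc]
    return total

noisy = [[10, 10, 10],
         [10, 100, 10],
         [10, 10, 10]]   # a single noisy spike

print(smooth_pixel(noisy, 1, 1))  # the spike is averaged down toward 20
```

This is the opposite behaviour of the high-pass kernel above: sharp, high-frequency deviations are attenuated while the smooth background is preserved.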