Ans - There are various fields which use digital image processing; they are categorized according to their imaging sources, e.g. the X-ray, gamma-ray, ultraviolet, visible, infrared and microwave bands.
i) X-ray imaging: X-rays are best known for their use in medical diagnostics, but they also have extensive use in industry and in other areas such as astronomy. X-rays are among the oldest sources of electromagnetic (EM) radiation used for imaging.
EM waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or as a stream of massless particles, each travelling in a wavelike pattern at the speed of light. Each massless particle carries a certain amount of energy, or bundle of energy, called a photon. If spectral bands are grouped according to energy per photon, we obtain a spectrum ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other.
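The grouping by energy per photon can be made concrete with the Planck relation E = hc/λ. A minimal sketch follows; the band wavelengths used are rough illustrative values, an assumption rather than figures from the text:

```python
# Energy per photon, E = h*c / wavelength, for representative EM bands.
# The wavelengths chosen here are rough, illustrative values.

H_PLANCK = 6.626e-34   # Planck's constant, J*s
C_LIGHT = 2.998e8      # speed of light, m/s
EV = 1.602e-19         # one electron-volt in joules

def photon_energy_ev(wavelength_m):
    """Energy of a single photon, in electron-volts."""
    return H_PLANCK * C_LIGHT / wavelength_m / EV

bands = {
    "gamma ray": 1e-12,   # metres
    "X-ray": 1e-10,
    "visible": 5.5e-7,
    "radio": 1.0,
}

for name, wavelength in bands.items():
    print(f"{name:9s} {photon_energy_ev(wavelength):10.3e} eV")
```

Running this reproduces the ordering described above: gamma rays carry the most energy per photon and radio waves the least.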
X-rays for medical and industrial imaging are generated using an X-ray tube, which is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. These electrons flow at very high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of X-ray radiation. The penetrating power of the X-rays is controlled by the voltage applied across the anode, and the number of X-rays is controlled by the current applied to the filament in the cathode.
The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film. In digital radiography, digital images are obtained by one of two methods:
a) by digitizing X-ray films, or
b) by having the X-rays that pass through the patient fall directly onto devices that convert X-rays to light. The light signal in turn is captured by a light-sensitive digitizing system.
ii) Imaging in Microwave band: The dominant application of imaging in the microwave band is
radar. The unique feature of imaging radar is its ability to collect data over virtually any
region at any time, regardless of weather or ambient lighting conditions. Some radar waves
can penetrate clouds, and under certain conditions can also see through vegetation, ice, and
extremely dry sand. In many cases, radar is the only way to explore inaccessible regions of the Earth's surface.
An imaging radar works like a flash camera in that it provides its own illumination
(microwave pulses) to illuminate an area on the ground and take a snapshot image. Instead
of a camera lens, a radar uses an antenna and digital computer processing to record its
images. In a radar image, one can see only the microwave energy that was reflected back
toward the radar antenna. Fig. 1.9 shows a spaceborne radar image covering a rugged
mountainous area of Southeast Tibet, about 90 km east of the city of Lhasa. In the lower
right corner is a wide valley of the Lhasa River, which is populated by Tibetan farmers and
yak herders and includes the village of Menba. Mountains in this area reach about 5800 m
(19,000 ft.) above sea level, while the valley floors lie about 4300 m (14,000 ft.) above sea
level. Note the clarity and detail of the image, unencumbered by clouds or other atmospheric
conditions that normally interfere with images in the visual band.
Medium wavelengths travel further because they reflect from layers in the atmosphere.
3. Cooking (Microwaves)
Dangers: microwaves are absorbed by living tissue; internal heating will damage or kill cells.
Infrared radiation is used for night vision and security cameras, since it can be detected in daytime or at night.
4. Ultraviolet
Over-exposure to UVA and UVB damages surface cells and eyes and can cause cancer.
There is a problem with current sunscreens, which protect against skin burning from high UVB but give inadequate protection against free-radical damage caused by UVA.
Sun exposure for the skin is best restricted to before 11am and after 3pm in the UK in summer months.
UVC is germicidal, destroying bacteria, viruses and moulds in the air, in water and on surfaces.
It is used in state-of-the-art air-handling units, personal air purifiers and swimming-pool technology.
UV is used to detect forged bank notes: forgeries fluoresce in UV light; real bank notes don't.
It is also used to identify items outside the visible spectrum, a technique known as 'black lighting'.
5. X-rays
X-rays pass through flesh but not dense material like bone.
Dangers: X-rays damage cells and cause cancers. Radiographer precautions include wearing lead aprons and standing behind a lead screen to minimise exposure.
6. Gamma Rays
In high doses, gamma rays can kill normal cells and cause cancers.
G(j,k) = F(j,k) ⊕ H(j,k)

where F(j,k), for 1 ≤ j, k ≤ N, is a binary-valued image and H(j,k), for 1 ≤ j, k ≤ L, where L is an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F(j,k) and H(j,k) are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The Minkowski addition definition is

G(j,k) = ∪ T_{r,c}{F(j,k)} over all (r,c) for which H(r,c) = 1

where T_{r,c} denotes translation by r rows and c columns. It states that G(j,k) is formed by the union of all translates of F(j,k) in which the translation distance (r,c) is the row and column index of a pixel of H(j,k) that is a logical 1. Fig. 1 illustrates the concept.
Generalized erosion is defined analogously by Minkowski subtraction:

G(j,k) = F(j,k) ⊖ H(j,k) = ∩ T_{r,c}{F(j,k)} over all (r,c) for which H(r,c) = 1

The meaning of this relation is that the erosion of F(j,k) by H(j,k) is the intersection of all translates of F(j,k) in which the translation distance is the row and column index of a pixel of H(j,k) that is in the logical 1 state. Fig. 2 illustrates this. Fig. 3 illustrates generalized dilation and erosion.
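The union-of-translates and intersection-of-translates definitions can be coded directly. The sketch below is an illustrative implementation that assumes wrap-around (circular) shifts via np.roll for brevity; a production routine would handle the image border with zero padding instead:

```python
import numpy as np

def dilate(F, H):
    """Generalized dilation via Minkowski addition: union (logical OR) of
    translates of F at every logical-1 position of the structuring element H."""
    c = H.shape[0] // 2          # centre index of the L x L structuring element
    G = np.zeros_like(F)
    for r in range(H.shape[0]):
        for s in range(H.shape[1]):
            if H[r, s]:
                # translate F by (r - c, s - c) and OR into the result
                G |= np.roll(np.roll(F, r - c, axis=0), s - c, axis=1)
    return G

def erode(F, H):
    """Generalized erosion via Minkowski subtraction: intersection (logical
    AND) of the corresponding translates of F."""
    c = H.shape[0] // 2
    G = np.ones_like(F)
    for r in range(H.shape[0]):
        for s in range(H.shape[1]):
            if H[r, s]:
                G &= np.roll(np.roll(F, -(r - c), axis=0), -(s - c), axis=1)
    return G
```

Dilating a 3 x 3 square of ones with a 3 x 3 structuring element of ones grows it to 5 x 5, while eroding it shrinks it to its single centre pixel.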
Que 5 - What are the two quantitative approaches used for the evaluation of image features? Explain.
Ans: There are two quantitative approaches to the evaluation of image features:
a) prototype performance
b) figure of merit
In the prototype performance approach for image classification, a prototype image with
regions (segments) that have been independently categorized is classified by a classification
procedure using various image features to be evaluated. The classification error is then
measured for each feature set. The best set of features is, of course, that which results in the
least classification error.
The prototype performance approach for image segmentation is similar in nature. A
prototype image with independently identified regions is segmented by a segmentation
procedure using a test set of features. Then, the detected segments are compared to the
known segments, and the segmentation error is evaluated. The problems associated with the
prototype performance methods of feature evaluation are the integrity of the prototype data
and the fact that the performance indication is dependent not only on the quality of the
features but also on the classification or segmentation ability of the classifier or segmenter.
The figure-of-merit approach to feature evaluation involves the establishment of some
functional distance measurements between sets of image features such that a large distance
implies a low classification error, and vice versa. Faugeras and Pratt have utilized the
Bhattacharyya distance figure-of-merit for texture feature evaluation. The method should be
extensible for other features as well. The Bhattacharyya distance (B-distance for simplicity) is
a scalar function of the probability densities of features of a pair of classes, defined as

B(S1, S2) = -ln ∫ [ p(x | S1) p(x | S2) ]^(1/2) dx

where x denotes a vector containing the individual image feature measurements, with conditional density p(x | S1) for class S1.
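The B-distance can be evaluated numerically for one-dimensional feature densities. The sketch below uses two hypothetical Gaussian class densities; for equal variances the closed form is (μ1 − μ2)² / (8σ²), which gives 0.5 for the parameters chosen here:

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def b_distance(p1, p2, x):
    """B = -ln( integral of sqrt(p1(x) * p2(x)) dx ) on a uniform grid x."""
    dx = x[1] - x[0]
    return -np.log(np.sum(np.sqrt(p1 * p2)) * dx)

# Hypothetical class-conditional feature densities p(x | S1) and p(x | S2)
x = np.linspace(-10.0, 12.0, 20001)
p1 = gaussian(x, 0.0, 1.0)
p2 = gaussian(x, 2.0, 1.0)
print(b_distance(p1, p2, x))   # ≈ 0.5 = (0 - 2)^2 / (8 * 1^2)
```

A larger B-distance between two classes indicates better feature separability and hence a lower expected classification error, in line with the figure-of-merit idea above.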
Que 6 - Explain region splitting and merging with an example.
Ans: Splitting and merging attempts to divide an image into uniform regions. The basic
representational structure is pyramidal, i.e. a square region of size m by m at one level of a
pyramid has 4 sub-regions of size m/2 by m/2 below it in the pyramid. Usually the algorithm
starts from the initial assumption that the entire image is a single region, then computes the
homogeneity criterion to see if it is TRUE. If FALSE, then the square region is split into the
four smaller regions. This process is then repeated on each of the sub-regions until no further
splitting is necessary. These small square regions are then merged if they are similar, giving larger irregular regions. The problem (at least from a programming point of view) is that any two regions may be merged if they are adjacent and the merged region satisfies the homogeneity criterion, but regions which are adjacent in image space may have different parents or be at different levels (i.e. different sizes) in the pyramidal structure. The process terminates when no further merges are possible.
A common homogeneity criterion is that the intensity variance of the region be below a threshold T:

σ² = (1/N) Σ [F(j,k) − μ]² < T, summed over the pixels (j,k) of the region,

where μ is the mean intensity of the N pixels in the region. Whereas splitting is quite simple, merging is more complex. Different algorithms are possible: some use the same test for homogeneity, but others use the difference in average values. Generally, pairs of regions are compared, allowing more complex shapes to emerge.
A program in use at Heriot-Watt is spam (split and merge), which takes regions a pair at a time and uses the difference of averages to judge similarity, i.e. it merges region A with neighbouring region B if the difference in the average intensities of A and B is below a threshold.
Algorithm for successive region merging:

    Put all regions on ProcessList
    Repeat
        Extract each region from ProcessList
        Traverse the remainder of the list to find a similar region
          (homogeneity criterion)
        If they are neighbours then merge the regions and recalculate
          property values
    Until no merges are possible
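The whole procedure can be sketched in a few dozen lines. This is a simplified illustration under stated assumptions (a square image whose side is a power of two, an intensity-variance homogeneity test for splitting, and the difference-of-averages test for merging, as in spam), not the actual Heriot-Watt program:

```python
import numpy as np

def quadtree_split(img, var_thresh):
    """Split an N x N image (N a power of two) into homogeneous square
    leaves: a block is split into four quadrants while its intensity
    variance exceeds var_thresh. Each leaf is (row, col, size, mean)."""
    leaves = []
    def rec(y, x, size):
        block = img[y:y + size, x:x + size]
        if size > 1 and block.var() > var_thresh:
            h = size // 2
            rec(y, x, h); rec(y, x + h, h)
            rec(y + h, x, h); rec(y + h, x + h, h)
        else:
            leaves.append((y, x, size, float(block.mean())))
    rec(0, 0, img.shape[0])
    return leaves

def touching(a, b):
    """True if two square leaves share part of an edge."""
    ay, ax, s1, _ = a
    by, bx, s2, _ = b
    vert = (ax + s1 == bx or bx + s2 == ax) and (ay < by + s2 and by < ay + s1)
    horz = (ay + s1 == by or by + s2 == ay) and (ax < bx + s2 and bx < ax + s1)
    return vert or horz

def region_mean(region):
    """Area-weighted mean intensity of a region (a list of leaves)."""
    area = sum(s * s for _, _, s, _ in region)
    return sum(m * s * s for _, _, s, m in region) / area

def merge(leaves, mean_thresh):
    """Successively merge neighbouring regions whose average intensities
    differ by less than mean_thresh, until no merges are possible."""
    regions = [[leaf] for leaf in leaves]
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                close = abs(region_mean(regions[i])
                            - region_mean(regions[j])) < mean_thresh
                adjacent = any(touching(a, b)
                               for a in regions[i] for b in regions[j])
                if close and adjacent:
                    regions[i] += regions.pop(j)
                    merged = True
                    break
            if merged:
                break
    return regions
```

On an 8 x 8 image whose left half is dark and right half is bright, the split step yields four uniform 4 x 4 leaves, and the merge step combines them into two final regions, one per half.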