CHAPTER 1: INTRODUCTION

1.1 Introduction
1.2 Image fusion
1.3 Reconfigurable hardware
1.4 Thesis outline
1. INTRODUCTION

1.1 INTRODUCTION
Image processing is gaining importance in areas that place increasing demands on computational efficiency. In various remote sensing applications, satellites provide information about large areas of the planet [1]. To meet the needs of several remote sensing applications such as weather, meteorological and environmental monitoring, remote sensing systems offer different spatial, spectral, radiometric and temporal resolutions [2]. Generally, satellites capture images at different frequencies in the visual and non-visual ranges, called monochrome images. Depending on the frequency range, each monochrome image contains different information about the object. Each monochrome image is represented as a band, and combining the bands produces a color image. Satellites usually provide panchromatic and multispectral images.
1.2 IMAGE FUSION
Image fusion has gained importance in remote sensing applications such as the monitoring of climate changes and the preservation of the environment. Owing to physical constraints, a single sensor cannot capture an image with both high spatial and high spectral resolution. A better way to overcome this limitation is image fusion. The main objective of image fusion in remote sensing is merging the grey-level high-resolution panchromatic image and the colored low-resolution multispectral image [3]. When the input images are taken from different satellites, fusing these images is called multi-sensor image fusion; otherwise it is said to be single-sensor image fusion. Multi-sensor image fusion overcomes the constraints of single-sensor image fusion by combining several sensor images to form a composite image. Multi-sensor image fusion offers various benefits, viz., robust system performance and improved reliability.
Figure: Levels of image fusion. Two sensors observe the same scene; the sensor images are combined directly by pixel-level fusion, by feature-level fusion after feature extraction (feature vectors), or by symbol-level fusion after decision making (decision images), each producing a fused image.
Figure: Classification of image fusion techniques. Pixel-level image fusion includes averaging, Brovey, PCA, wavelet transform and intensity hue saturation transform; feature-level image fusion includes region-based segmentation, fusion based on support vector machine, K-means clustering and neural networks; decision-level image fusion includes similarity matching to content-level retrieving.
Non-subsampled wavelet families such as the contourlet transform and the curvelet transform have also been used for image fusion. Though their performance is good when compared to the Discrete Wavelet Transform, these transforms are computationally complex. Hence, the Discrete Wavelet Transform is considered in this study.
Figure: Top level image fusion scheme. The panchromatic image and the multispectral image are preprocessed and then combined by image fusion to produce the fused image.
1.3 RECONFIGURABLE HARDWARE
Most image processing algorithms operate on large volumes of data and place heavy demands on computational resources. For the implementation of such processing algorithms, the available application-specific solutions include Application-Specific Standard Parts (ASSP), standard cell based Application-Specific Integrated Circuits (ASIC) and programmable solutions such as Field Programmable Gate Arrays (FPGA). Compared with conventional general-purpose processors, the chosen hardware must provide sufficient computational capabilities at a reasonable cost.
1.4 THESIS OUTLINE
The present investigation develops a hardware software co-simulation algorithm to fuse multispectral and panchromatic images. Chapter 2 presents a detailed literature survey on image preprocessing, image fusion and reconfigurable hardware. Chapter 3 provides the image preprocessing stages, viz., image registration and image resampling; nearest neighbor, bilinear and bicubic interpolation techniques are performed on the input images. Chapter 4 describes DWT based image fusion, Chapter 5 deals with the FPGA implementation of DWT based image fusion, and Chapter 6 presents the conclusions and future work.
CHAPTER 2: LITERATURE REVIEW
2.1 Introduction
2.2 Image preprocessing
2.3 Image fusion
2.3.1 Averaging
2.3.2 Intensity Hue Saturation
2.3.3 Principal Component Analysis
2.3.4 Pyramid Techniques
2.3.4.1 Laplacian Pyramid
2.3.4.2 Morphological Pyramid
2.3.4.3 Gradient Pyramid
2.3.5 Discrete Cosine Transform Technique
2.3.6 Discrete Wavelet Transform Technique
2.4 Reconfigurable Hardware
2.5 Summary
2. LITERATURE REVIEW

2.1 INTRODUCTION
Owing to the importance of multi-sensor data in many fields, numerous fusion techniques have been reported that combine low spatial resolution multispectral data (color) with high spatial resolution panchromatic data. These include Smoothing Filter-based Intensity Modulation (SFIM), Modified Brovey and High Pass Filter (HPF) methods.
Applications such as change detection were considered, and some recommendations on the selection of suitable fusion techniques were addressed.
David L. Hall and James Llinas [18] discussed an introduction to multisensor data fusion.
2.2 IMAGE PREPROCESSING
Le Moigne, Campbell and Cromp [26] proposed an automatic image registration technique based on wavelet coefficients. Costa [27] proposed an illuminant direction estimator based on local curvature.
Nicchiotti G [30] proposed an automatic spline-based method for image registration, and standard data sets were used for evaluation purposes. Several techniques have been reported for feature extraction and feature correspondence, together with different interpolation techniques.
Heather Studley and Keith T Weber [37] stated that image resampling is a process used to interpolate the new cell values of a raster image during a resizing operation. They mentioned that even though many resampling methods are available, each method has strengths and weaknesses which should be considered carefully. The purpose of their study was to explore how different methods are implemented by different software vendors (ArcGIS and Paint Shop Pro). Aggregated Average and Nearest Neighbor were considered in their comparison.
2.3 IMAGE FUSION
Based on the domain of operation, pixel level image fusion can be broadly classified into two types: spatial domain fusion methods and transform domain fusion methods. Spatial domain techniques directly deal with the image pixels [38]; the pixel values are manipulated to achieve the desired result. Fusion methods such as Averaging, Intensity Hue Saturation (IHS) and Principal Component Analysis (PCA) are examples of spatial domain techniques.
2.3.1 Averaging

Averaging is the simplest process to fuse two input images by taking the mean value of the corresponding pixels [39]. This is a fundamental image fusion technique. Image fusion is performed by simply averaging the corresponding pixels of each input image as

F(x, y) = [I_1(x, y) + I_2(x, y)] / 2        (2.1)

where I_1(x, y) is input image 1, I_2(x, y) is input image 2 and F(x, y) is the fused image.
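As an illustration, a minimal sketch of Equation (2.1) in Python (NumPy assumed; the function name and stand-in test images are hypothetical):

    import numpy as np

    def average_fusion(i1, i2):
        # F(x, y) = [I1(x, y) + I2(x, y)] / 2, per Equation (2.1)
        return (i1.astype(float) + i2.astype(float)) / 2.0

    # Example with two random stand-in images of the same size
    img1 = np.random.randint(0, 256, (128, 128))
    img2 = np.random.randint(0, 256, (128, 128))
    fused = average_fusion(img1, img2)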
2.3.2 Intensity Hue Saturation

The IHS fusion technique separates the spatial (intensity) and spectral (hue and saturation) information of an image [40 & 41]. The fusion method first converts an RGB image into Intensity (I), Hue (H) and Saturation (S) components. In the next step, the intensity (I) is substituted with the high spatial resolution panchromatic image. A reverse IHS transform is then performed on the PAN together with the Hue (H) and Saturation (S) bands to get an IHS fused image.
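For illustration, a minimal sketch of the fast (generalized) IHS substitution in the spirit of [43], assuming NumPy and co-registered, equally sized inputs (function and variable names are hypothetical):

    import numpy as np

    def fast_ihs_fusion(ms, pan):
        # ms: (H, W, 3) multispectral image resampled to the PAN grid
        # pan: (H, W) panchromatic image
        intensity = ms.mean(axis=2)        # I = (R + G + B) / 3
        detail = pan - intensity           # spatial detail to inject
        return ms + detail[..., None]      # add the detail to every band

This additive form avoids an explicit forward and reverse IHS transform, since substituting I with PAN and inverting reduces to adding (PAN - I) to each band.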
Firouz Abdullah Al-Wassai, N V Kalyankar and Ali A Al-Zuky [41] reported that the IHS technique is one of the commonly used techniques for image fusion. They explained different IHS image fusion algorithms to transfer a color image from the RGB space to the IHS space. The experiments mainly targeted remote sensing applications, like fusing Multispectral (MS) and Panchromatic (PAN) images, and the performance of the methods was evaluated using quality metrics. The concept of the IHS method [42] is based on the representation of low-resolution MS images in the IHS system and then substituting the intensity component I with the PAN image. However, the IHS method introduces spectral distortion into the resulting MS images, which appears as a change in colors between compositions of the resampled and fused multispectral bands. Hence, a histogram matching method was proposed to avoid this problem and improve the spectral fidelity of the IHS method.
Tu T M, Huang P S, Hung C L and Chang C P [43] explained that if more than three MS bands are available, a viable solution is the Generalized IHS (GIHS) transform, i.e. including the response of the Near-Infrared (NIR) band into the intensity component, which is defined on the basis of the edges of the panchromatic and MS images.

Yee Leung, Jianmin Liu and Jiangshe Zhang [44] mentioned that conventional IHS methods substitute the PAN image into the intensity components of the bands. Due to this substitution, the spectral responses of the MS bands do not perfectly overlap with the bandwidth of the PAN image. To overcome this problem, an adaptive intensity scheme was proposed.
2.3.3 Principal Component Analysis

In PCA based fusion, eigenvectors are computed from the covariance matrices of each source image. Different image fusion rules can then be applied, and the image data are reduced into a few variables (principal components).
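A minimal sketch of PCA-weighted fusion of two co-registered images, assuming NumPy (the weighting scheme shown is one common variant, not necessarily the exact formulation of the cited works):

    import numpy as np

    def pca_fusion(img1, img2):
        # Treat the two source images as observations of a 2-D random vector.
        data = np.stack([img1.ravel(), img2.ravel()]).astype(float)
        cov = np.cov(data)                          # 2x2 covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)
        principal = eigvecs[:, np.argmax(eigvals)]  # dominant eigenvector
        w = principal / principal.sum()             # normalized fusion weights
        return w[0] * img1 + w[1] * img2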
2.3.4 Pyramid Techniques

In pyramid based techniques, the fusion methodology is carried out on multi-resolution pyramid representations of the source images.

2.3.4.1 Laplacian Pyramid

In a typical Laplacian pyramid fusion scheme, the top level adopts the maximum region information rule and the remaining levels adopt the maximum region energy rule. Finally, the fused image is obtained by the inverse Laplacian pyramid transform.
Chhamman Sahu and Raj Kumar Sahu [55] proposed image fusion based on pyramid transform techniques. In this approach, images are decomposed into smaller images using different filters (Laplacian pyramid). The images at each pyramid level are fused and expanded to the next level. Finally, the small images of each pyramid level of both input images are combined to get a fused image.
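A minimal sketch of Laplacian pyramid fusion, assuming SciPy and NumPy; the reduce/expand filters and the max-absolute selection rule are illustrative choices, not necessarily those of [55]:

    import numpy as np
    from scipy import ndimage

    def reduce_level(img):
        # Gaussian smoothing followed by decimation by 2 (REDUCE step)
        return ndimage.gaussian_filter(img, sigma=1.0)[::2, ::2]

    def expand_level(img, shape):
        # Resize back up to the next finer level's shape (EXPAND step)
        zoom = (shape[0] / img.shape[0], shape[1] / img.shape[1])
        return ndimage.zoom(img, zoom, order=1)

    def laplacian_pyramid(img, levels=3):
        pyr, cur = [], img.astype(float)
        for _ in range(levels):
            nxt = reduce_level(cur)
            pyr.append(cur - expand_level(nxt, cur.shape))  # detail image
            cur = nxt
        pyr.append(cur)                                     # coarsest approximation
        return pyr

    def fuse_and_reconstruct(img1, img2, levels=3):
        p1 = laplacian_pyramid(img1, levels)
        p2 = laplacian_pyramid(img2, levels)
        fused = [np.where(np.abs(a) > np.abs(b), a, b)      # keep stronger detail
                 for a, b in zip(p1[:-1], p2[:-1])]
        fused.append((p1[-1] + p2[-1]) / 2.0)               # average the top level
        out = fused[-1]
        for lap in reversed(fused[:-1]):
            out = expand_level(out, lap.shape) + lap        # inverse pyramid transform
        return out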
2.3.4.2 Morphological Pyramid

A morphological pyramid represents the input image at different scales. Any level L is created by applying morphological filtering with a 3×3 structuring element to the image at level (L-1), followed by down-sampling the filtered image by a factor of 2 [56].
2.3.4.3 Gradient Pyramid

The fusion procedure of the Gradient pyramid is similar to that of the Laplacian pyramid method. A gradient pyramid is obtained by applying gradient operators to each level of the Gaussian pyramid representation of the images. Initially, all the images must be of the same size; otherwise they are resampled before the gradient operators are applied.

Pyramid based fusion, however, has limitations: it gives a poor signal to noise ratio and does not provide any directional information. Therefore, the discrete cosine transform can be used to overcome these problems.
2.3.5 Discrete Cosine Transform Technique

Discrete Cosine Transform (DCT) based image fusion is processed in the frequency domain. The DCT method was introduced to solve the problems created by averaging and pyramid based image fusion.
Rao [60] carried out an image fusion method based on an average measure in the DCT domain. A new version of direct DCT image fusion is introduced by taking the average of the DCT representations of all the input images. This image fusion method is referred to as the combined DCT and simple average, or improved DCT, technique.
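A minimal sketch of this transform-domain averaging, assuming SciPy (by linearity of the DCT this is equivalent to pixel averaging, but it illustrates the formulation):

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_average_fusion(img1, img2):
        # Average the DCT representations of the inputs, then invert.
        c1 = dctn(img1.astype(float), norm='ortho')
        c2 = dctn(img2.astype(float), norm='ortho')
        return idctn((c1 + c2) / 2.0, norm='ortho')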
VPS Naidu [61] implemented six different types of image fusion algorithms based on the Discrete Cosine Transform (DCT) and evaluated their performance. These algorithms are DCTe, DCTch, DCTcm, DCTah, DCTmx and DCTav. If the image size or block size is less than 8×8, the fusion performance is not good. Among these algorithms, the DCTe and DCTmx image fusion algorithms give good fusion performance and are suitable for real time applications.

VPS Naidu [62] implemented block DCT based image fusion techniques. Five image fusion architectures, such as Feature DCT (FDCT), Resizing DCT (RDCT), Wavelet Structure DCT (WSDCT) and Sub-band DCT, were implemented and evaluated.
2.3.6 Discrete Wavelet Transform Technique

Stephane G. Mallat [66] provided the theory of multiresolution signal decomposition using the wavelet representation. Several works fused the panchromatic and multispectral images in the wavelet domain; the wavelet coefficients are combined based on a decision (verification) map, and finally the inverse wavelet transform is applied to obtain the fused image.
Other studies compared the fused images of the IHS method and the wavelet method. Principal component analysis and wavelet based schemes were also applied to fuse the low resolution Multispectral (MS) image and the high resolution Panchromatic (PAN) image. The performance of all these image fusion techniques was observed and compared in the reported literature.
2.4 RECONFIGURABLE HARDWARE
Sun Ying Li [80] implemented image fusion technology on FPGA, using Quartus II 5.0 for module design and simulation. Another work reported the fusion of Electro-Optic sensor data using FPGAs, in which modules were designed and combined for the conversion of sequential image data.
A design flow was also described for moving image processing algorithms from the software environment (MATLAB Simulink) to the hardware. FPGAs enable quick implementation of complex image processing algorithms, meeting the computation requirement and providing enough flexibility. Saegusa, Tsutomu Maruyama and Yoshiki Yamaguchi examined the performance of FPGAs for image processing.
Image fusion combines source images from the same scene to generate one single image containing more precise details of the scene than any of the source images. Among many image fusion methods, the Discrete Wavelet Transform transfers the details of the source images to the fused image so that clear images are obtained. The principle of selecting the low and high frequency coefficients depends on the different frequency subbands of the wavelet transform. In choosing the low frequency coefficients, the concept of local area variance was chosen as the measuring criterion. In choosing the high frequency coefficients, the window property and local characteristics of pixels are analyzed. The method was applied successfully in the image processing field. They further felt that its excellent characteristics in one dimension cannot simply be extended to two dimensions or multiple dimensions.
Steffen Klupsch, Markus Ernst, Sorin A Huss, Rumpf M and Strzodka R [89] reported on speeding up image processing methods on 2D and 3D images using FPGA technology, delivering high performance at an economical price. They concluded that FPGA technology has become a viable target for the implementation of real time algorithms.
Madhumathi et al. [90] proposed a black box model for providing a system integration platform for the design of DSP FPGAs. This model allows RTL to be imported into Simulink and co-simulated with either ModelSim or the Xilinx ISE Simulator. They concluded that a hardware software co-synthesis platform with System Generator makes it possible to incorporate a design implemented on a Xilinx Spartan-3E FPGA.
Suthar A C, Mohammed Vayada, Patel C B and Kulkarni G R [11] demonstrated a model based approach for image processing applications using MATLAB Simulink and Xilinx System Generator (XSG). These tools support software simulation along with hardware co-simulation. Other works dealt with Infrared Image (IRI) enhancement of thermographic images, and with architectures providing basic digital blocks with flexible Universal Asynchronous Receiver Transmitter (UART) serial communication. The importance of digital image processing on reconfigurable hardware was also explained in [96].
2.5 SUMMARY

From the literature, it is noted that the satellites IRS-P6 and Cartosat-2 record multispectral and panchromatic data respectively at different resolutions, and these data are used in this study.
CHAPTER 3: IMAGE PREPROCESSING
3.1 Introduction
3.2 Satellite Information
3.3 Image Registration
3.4 Image Resampling
3.4.1 Nearest Neighbor
3.4.2 Bilinear
3.4.3 Bicubic
3.5 Performance Evaluation
3.6 Results and Discussion
3. IMAGE PREPROCESSING

3.1 INTRODUCTION

3.2 SATELLITE INFORMATION
This section describes the IRS-P6 and Cartosat-2 satellites. The PSLV-C5 launch vehicle placed IRS-P6, a satellite of the IRS series, into orbit. The satellite IRS-P6 can acquire images in the following spectral bands.
IRS-P6 spectral band characteristics:

Spectral band    Wavelength (µm)    Resolution       Repeat interval
2 (Green)        0.52 - 0.59        5.8 m × 5.8 m    5 days
3 (Red)          0.62 - 0.68        5.8 m × 5.8 m    5 days
4 (NIR)          0.77 - 0.86        5.8 m × 5.8 m    5 days
57
Information
System
(GIS)
applications.
Cartosat-2
Resolution
Repeat interval
No.of bits
1x1m
4 days
10
0.50 0.85m
The MS and PAN images used in this study are shown in Figure 3.1.

Figure 3.1 (a) MS image and (b) PAN image
3.3 IMAGE REGISTRATION

Image registration plays a vital role in remote sensing applications such as the monitoring of climate changes and the preservation of the environment. In this study, a feature-based registration method is used to align the MS and PAN images.
Figure 3.3 shows the registered input images that are used in this study.

3.4 IMAGE RESAMPLING

Image fusion takes place only when the spatial resolutions of the input images match; hence the low resolution MS image is resampled to the resolution of the PAN image. The most common resampling techniques are nearest neighbor, bilinear and bicubic [37]. For better understanding, these techniques are described in this section on a one-dimensional basis. It is well known that these techniques can also be implemented on a two-dimensional basis.
3.4.1 Nearest Neighbor

From a computational standpoint, nearest neighbor is said to be the simplest interpolation technique, where each output pixel is assigned the value of the nearest input pixel. For large-scale changes, nearest neighbor interpolation produces noticeable blockiness.

3.4.2 Bilinear

Bilinear interpolation computes new pixels using linear interpolation of the neighboring pixel values.
3.4.3 Bicubic

In bicubic interpolation, one-dimensional cubic convolution is performed in one direction and then in the other, so that the interpolated surface over each cell is

p(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} a_{ij} x^i y^j        (3.1)

where the sixteen coefficients a_{ij} are determined from the neighboring pixel values.
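For illustration, the three resampling techniques can be compared with SciPy's spline interpolation orders (a sketch with stand-in data; the zoom factor shown assumes the 5.8 m to 1 m resolution change used in this study):

    import numpy as np
    from scipy import ndimage

    ms_band = np.random.rand(64, 64)      # stand-in for a 5.8 m resolution MS band
    factor = 5.8                          # upsampling toward the 1 m PAN grid

    nearest  = ndimage.zoom(ms_band, factor, order=0)  # nearest neighbor
    bilinear = ndimage.zoom(ms_band, factor, order=1)  # bilinear
    bicubic  = ndimage.zoom(ms_band, factor, order=3)  # cubic spline (bicubic-like)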
3.5 PERFORMANCE EVALUATION

The resampled images are evaluated using the Mean Square Error (MSE) and the Peak Signal to Noise Ratio (PSNR):

MSE = (1 / (m n)) Σ_{i=1}^{m} Σ_{j=1}^{n} (F_{i,j} - M_{i,j})^2        (3.2)

PSNR = 20 log_{10} (Max_i / √MSE)        (3.3)

where F_{i,j} and M_{i,j} are the pixel values of the two images being compared and Max_i is the maximum possible pixel value of the image.
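A minimal sketch of Equations (3.2) and (3.3) in Python (NumPy assumed; max_val corresponds to Max_i):

    import numpy as np

    def psnr(reference, test, max_val=255.0):
        mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)  # Eq. (3.2)
        return 20.0 * np.log10(max_val / np.sqrt(mse))                      # Eq. (3.3)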
3.6 RESULTS AND DISCUSSION

The PSNR values between the resampled images and the standard test images are calculated and represented in Table 3.3.
Figure: Resampled Band2, Band3 and Band4 images using the nearest neighbor, bilinear and bicubic techniques.
67
Nearest
Neighbor
Bilinear
Bicubic
Band2
22.8916
25.5712
27.1940
Band3
28.0435
31.3259
33.71072
Band4
22.8658
22.8658
27.4211
CHAPTER 4: DWT BASED IMAGE FUSION

4.1 Introduction
4.2 Wavelet Transform
4.3 DWT based image fusion
4.4 Fusion Rules
4.4.1 Wavelet Average Method
4.4.2 Wavelet Additive Method
4.4.3 Wavelet Substitution Method
4.5 Performance Evaluation
4.6 Results and Discussion
4.1 INTRODUCTION

In remote sensing, images are characterized by their different spatial and spectral resolutions, and fusion seeks to combine the complementary information of the panchromatic and multispectral images.
4.2 WAVELET TRANSFORM

The aim of an image transform is to pack as much information as possible into a small number of coefficients. A function ψ(t) qualifies as a wavelet when it has finite energy, zero mean and satisfies the admissibility condition:

∫ |ψ(t)|^2 dt < ∞        (4.1)

∫ ψ(t) dt = 0        (4.2)

∫ |Ψ(ω)|^2 / |ω| dω < ∞        (4.3)

where Ψ(ω) is the Fourier transform of ψ(t).
The wavelet ψ(t) is an oscillatory function of finite duration. A family of wavelets is generated by scaling and shifting the mother wavelet:

ψ_{a,b}(t) = (1/√a) ψ*((t - b) / a)        (4.4)

The continuous wavelet transform of a signal x(t) is then

W(a, b) = (1/√a) ∫ x(t) ψ*((t - b) / a) dt        (4.5)

where a is the scale parameter and b is the shift parameter. In the discrete case, a signal f[n] of length M is expressed in terms of scaling functions φ_{j0,k}[n] and wavelets ψ_{j,k}[n] as

f[n] = (1/√M) Σ_k W_φ(j_0, k) φ_{j0,k}[n] + (1/√M) Σ_{j=j_0} Σ_k W_ψ(j, k) ψ_{j,k}[n]        (4.6)

where W_φ(j_0, k) = (1/√M) Σ_n f[n] φ_{j0,k}[n] and W_ψ(j, k) = (1/√M) Σ_n f[n] ψ_{j,k}[n].
The outputs of these filters are decimated by two row-wise. The two resulting sequences are further applied to low pass and high pass filters, followed by decimation column-wise. After the first level of decomposition, the image has three directional detail subbands (horizontal, vertical and diagonal) and an approximation subband. Further levels of decomposition are carried out on the approximation subband. This is called multilevel decomposition, or the Mallat algorithm [66]. Each sub image is one fourth the size of the image at the previous level.
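As an illustration of one decomposition level and of the Mallat multilevel scheme, a sketch using the PyWavelets package (the test image is a stand-in):

    import numpy as np
    import pywt

    img = np.random.rand(128, 128)
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')      # one level: approximation + 3 details
    assert cA.shape == (64, 64)                    # each subband is 1/4 of the image
    coeffs = pywt.wavedec2(img, 'haar', level=3)   # multilevel (Mallat) decomposition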
The Haar (db1) wavelet and scaling functions are defined as

ψ(t) = 1 for 0 ≤ t < 1/2, ψ(t) = -1 for 1/2 ≤ t < 1, and ψ(t) = 0 otherwise        (4.7)

φ(t) = 1 for 0 ≤ t ≤ 1, and φ(t) = 0 otherwise        (4.8)

Table 4.1 db1 filter coefficients

LPF: 1/√2, 1/√2
HPF: 1/√2, -1/√2
In Figure 4.4, H0(z) and H1(z) are the decomposition filters (low pass and high pass), F0(z) and F1(z) are the reconstruction filters, and X(z) and Y(z) are the input and output (either image or signal). The output at each stage of the low pass branch is determined as follows.

Stage 1 (after H0) is X(z) H0(z)        (4.9)

Stage 2 (after decimation by 2) is (1/2) [X(z^{1/2}) H0(z^{1/2}) + X(-z^{1/2}) H0(-z^{1/2})]        (4.10)

Stage 3 (after interpolation by 2) is (1/2) [X(z) H0(z) + X(-z) H0(-z)]        (4.11)

Stage 4 (after F0) is (1/2) [X(z) H0(z) + X(-z) H0(-z)] F0(z)        (4.12)

Stages 5, 6 and 7 are similar to stages 1, 2 and 3 except that H0(z) is replaced with H1(z). So, similarly, the stage 8 output is

(1/2) [X(z) H1(z) + X(-z) H1(-z)] F1(z)        (4.13)

Adding the two branches, the reconstructed output is

Y(z) = (1/2) [H0(z) F0(z) + H1(z) F1(z)] X(z) + (1/2) [H0(-z) F0(z) + H1(-z) F1(z)] X(-z)        (4.14)

The second term, which depends on X(-z), is the aliasing term:

A(z) = (1/2) [H0(-z) F0(z) + H1(-z) F1(z)]        (4.15)
For perfect reconstruction, the aliasing term must vanish:

H0(-z) F0(z) + H1(-z) F1(z) = 0        (4.16)

Finally, this condition is satisfied by choosing the reconstruction filters as

F0(z) = -H1(-z)        (4.17)

F1(z) = H0(-z)        (4.18)

The remaining distortion term must then reduce to a pure delay (within a constant gain c):

(1/2) [H0(z) F0(z) + H1(z) F1(z)] = c z^{-k}        (4.19)
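The perfect reconstruction property can be checked numerically with PyWavelets (a sketch with a random test signal):

    import numpy as np
    import pywt

    x = np.random.rand(64)
    cA, cD = pywt.dwt(x, 'haar')        # analysis: H0, H1 followed by decimation
    xr = pywt.idwt(cA, cD, 'haar')      # synthesis: interpolation followed by F0, F1
    assert np.allclose(x, xr)           # the input is recovered exactly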
The db1 and db3 low pass and high pass filter coefficients are shown in Tables 4.1 and 4.2 respectively.

Table 4.2 db3 filter coefficients

LPF: 0.33267, 0.8068, 0.459, -0.135, -0.085, 0.03522
HPF: 0.03522, 0.085, -0.135, -0.459, 0.8068, -0.33267
Figure: Two-channel filter bank with decomposition filters H0(z) and H1(z), reconstruction filters F0(z) and F1(z), input X(z) and output Y(z).
The reconstruction low pass filter F0(z) and the reconstruction high pass filter F1(z) are derived from the decomposition filter coefficients, as given by Equations 4.17 and 4.18.
The input images must be preprocessed in order to achieve successful image fusion. The images captured by IRS-P6 (MS bands) and Cartosat-2 (PAN) are taken as input images Image 1 and Image 2 respectively. In this research, these input images are preprocessed so that DWT fusion can be applied effectively. Figure 4.7 shows the top level block diagram of image fusion using the wavelet transform. During the decomposition process, the DWT decomposes the input images into different types of coefficients while retaining the original information. The coefficients coming from the input images are then combined according to some fusion rules to get the new fused coefficients [33]. During the reconstruction process, the Inverse Discrete Wavelet Transform (IDWT) is performed on the combined fused coefficients to get the resultant fused image.
4.4 FUSION RULES

The decomposed wavelet coefficients must be combined in a proper way using fusion rules to obtain the best quality fused image. In this study, the following fusion rules are used to fuse the low resolution multispectral and high resolution panchromatic images (a sketch of the first rule follows this list):

1. Wavelet Averaging based image fusion
2. Wavelet Additive based image fusion
3. Wavelet Substitution based image fusion

In the wavelet average and substitutive methods, up to five levels of decomposition have been performed for each wavelet [97]. Performance evaluation has been done for all the resulting fused images.
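A minimal sketch of the wavelet averaging rule using PyWavelets, assuming co-registered inputs of equal size (in PyWavelets the CDF 9/7 filters correspond to the 'bior4.4' wavelet):

    import pywt

    def dwt_average_fusion(ms_band, pan, wavelet='bior4.4', level=3):
        c_ms = pywt.wavedec2(ms_band, wavelet, level=level)
        c_pan = pywt.wavedec2(pan, wavelet, level=level)
        fused = [(c_ms[0] + c_pan[0]) / 2.0]              # average the approximations
        for d_ms, d_pan in zip(c_ms[1:], c_pan[1:]):      # (cH, cV, cD) per level
            fused.append(tuple((a + b) / 2.0 for a, b in zip(d_ms, d_pan)))
        return pywt.waverec2(fused, wavelet)              # IDWT of the fused coefficients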
4.4.1 Wavelet Average Method

The Indian satellite IRS-P6 gives low resolution images (R band, G band, NIR band) and Cartosat-2 gives a high resolution PAN image. Initially, the R band image and the PAN image are taken as inputs. Having done the image preprocessing of these images (discussed in Chapter 3), the registered images have been passed as input signals through two different one-dimensional digital filters, H0 and H1. These H0 and H1 digital filters perform low pass and high pass filtering operations respectively on both input images. The output of each filter is followed by sub-sampling by a factor of two. This step is referred to as the row compression; the same filtering and sub-sampling are then applied column-wise.
4.4.3 Wavelet Substitution Method

In the wavelet substitution method, the PAN approximation is replaced with the MS band approximation at each decomposition level. The wavelet decompositions of the MS bands and the PAN image can be written as

R → A_N(R) + Σ_{i=1}^{N} (H_R^i + V_R^i + D_R^i)        (4.20)

G → A_N(G) + Σ_{i=1}^{N} (H_G^i + V_G^i + D_G^i)        (4.21)

NIR → A_N(NIR) + Σ_{i=1}^{N} (H_NIR^i + V_NIR^i + D_NIR^i)        (4.22)

PAN → A_N(P) + Σ_{i=1}^{N} (H_P^i + V_P^i + D_P^i)        (4.23)
where

A_N : approximation coefficient at level N (approximation plane)
H^i : horizontal coefficient at level i (horizontal wavelet plane)
V^i : vertical coefficient at level i (vertical wavelet plane)
D^i : diagonal coefficient at level i (diagonal wavelet plane)

After decomposition, substitution has been done by placing the MS band approximation in place of the PAN approximation at each level. For each MS band and the PAN, a single set of fused image coefficients is obtained; similarly, for each level, three sets of fused image coefficients have been obtained. Once substitution is done, the inverse wavelet transform has been applied as shown in Equations 4.24 to 4.26.
F_R = A_N(R) + Σ_{i=1}^{N} (H_P^i + V_P^i + D_P^i)        (4.24)

F_G = A_N(G) + Σ_{i=1}^{N} (H_P^i + V_P^i + D_P^i)        (4.25)

F_NIR = A_N(NIR) + Σ_{i=1}^{N} (H_P^i + V_P^i + D_P^i)        (4.26)
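A minimal sketch of Equations 4.24 to 4.26 with PyWavelets (one MS band shown; names are illustrative):

    import pywt

    def dwt_substitution_fusion(ms_band, pan, wavelet='bior4.4', level=3):
        c_ms = pywt.wavedec2(ms_band, wavelet, level=level)
        c_pan = pywt.wavedec2(pan, wavelet, level=level)
        c_pan[0] = c_ms[0]     # MS approximation replaces the PAN approximation
        return pywt.waverec2(c_pan, wavelet)  # fused band = MS approximation + PAN details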
4.5 PERFORMANCE EVALUATION

Correlation Coefficient (CC): It measures the similarity of two images and ranges from -1 to +1, where +1 indicates that the two images are highly similar and -1 indicates that they are highly dissimilar. It is calculated using Equation 4.27:

CC(F, M) = Σ (F - F̄)(M - M̄) / √( Σ (F - F̄)^2 Σ (M - M̄)^2 )        (4.27)

where F is the fused image, F̄ is the mean of the fused image, M is the MS image and M̄ is the mean of the MS image.
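Equation (4.27) as a short Python function (NumPy assumed):

    import numpy as np

    def correlation_coefficient(fused, ms):
        f = fused.astype(float) - fused.mean()
        m = ms.astype(float) - ms.mean()
        return (f * m).sum() / np.sqrt((f ** 2).sum() * (m ** 2).sum())  # Eq. (4.27)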
4.6 RESULTS AND DISCUSSION

The fusion performance is evaluated using the PSNR (refer Section 3.5) and CC values between the original low resolution MS image and the fused image [123]. This chapter describes the performance of the Haar, db3 and CDF 9/7 DWT based image fusion techniques.

One of the most frequently published band combinations uses NIR light as red, red light as green and green light as blue. In this case, plants, which reflect NIR and green light, appear prominently, cities and exposed ground appear grey or tan, and clear water appears black. This appearance can be observed in the resultant fused images.
Figure 4.12 Input images: MS bands (Band 2, Band 3, Band 4) and PAN
Table: PSNR and CC values of the wavelet average method

              Haar                           db3                            CDF 9/7
Parameters    Band2    Band3    Band4       Band2    Band3    Band4       Band2    Band3    Band4
PSNR (dB)     78.4984  78.5002  77.6000     77.4138  76.6752  76.1554     78.9464  78.9958  78.0229
CC            0.7631 …                                                    0.8423 …
Figure 4.13 Wavelet average method output images: 1-level Haar, 3-level Haar, 1-level db3 and 3-level db3
Table 4.7 PSNR and CC values of the wavelet additive method

              Haar                           db3                            CDF 9/7
Parameters    Band2    Band3    Band4       Band2    Band3    Band4       Band2    Band3    Band4
PSNR (dB)     76.5763  76.7641  70.4521     77.2057  77.3400  70.5678     77.1342  77.2650  76.5596
CC            0.6017 …                                                    0.6801 …
Figure 4.14 Wavelet additive method output images (Haar, db3 and CDF 9/7)
It is observed from Figure 4.14 that the CDF 9/7 wavelet fused image has good visual quality when compared to that of Haar and db3. It is also observed from Table 4.7 that the PSNR and CC values of the CDF 9/7 wavelet fused image show higher performance when compared to the other two wavelets.
Table: PSNR and CC values of the wavelet substitution method (Haar, db3 and CDF 9/7 for Band2, Band3 and Band4)

Figure: Wavelet substitution method output images: 1-level Haar, 3-level Haar, 1-level db3 and 3-level db3
In order to speed up the fusion process, an FPGA based implementation is considered in the next chapter.
CHAPTER 5: FPGA IMPLEMENTATION OF DWT BASED IMAGE FUSION
5.1 Introduction
5.2 DWT-IDWT based image fusion
...
5.5 MBD process
5.5.1 Designing of sub blocks
...
5.7 Experimental setup of hardware software co-simulation
5.8 Results and discussion
5.1 INTRODUCTION

In the past few years, image fusion has become a very popular field in the area of image processing. This is primarily due to the fast entrance of digital imaging into remote sensing and satellite applications. There is often a need to store large amounts of image data and process them very quickly. These tasks are very complex and require a large amount of computation [10, 12 & 14]. Creating specialized hardware would greatly reduce the time of this computation-dominated process. For this reason, image fusion on reconfigurable hardware is proposed [98]. Field Programmable Gate Array (FPGA) technology supports the implementation of algorithms suited to image processing applications.

This chapter deals with the FPGA implementation of image fusion using the CDF 9/7 filter transform through hardware software co-simulation. In this study, the fusion model has been designed using the averaging method. Model based design gives the opportunity to perform rapid prototyping of the image fusion algorithms while the image fusion hardware is being developed.
5.2 DWT-IDWT BASED IMAGE FUSION

Figure 5.1 Top level block diagram of DWT-IDWT based image fusion using the averaging method

The input images are converted to the required format, and the model is simulated in MATLAB Simulink.
5.5 MODEL BASED DESIGN (MBD) PROCESS
In this study, a hardware software co-simulation algorithm has been designed for fusing multisensor images and implementing the fusion on an FPGA. The registered multisensor images are considered for this study. The PAN wavelengths (0.5 µm to 0.8 µm) are converted to intensity using an RGB to Intensity converter block. Then the multisensor images (MS and PAN) of size 256×256 have been resized to 128×128 due to the memory constraints of the FPGA. These resized images are applied to a 2D-to-1D block for the conversion of the two-dimensional image data to a one-dimensional bit stream using Simulink blocksets. The stream is then applied as input to the System Generator model for the FPGA implementation process.
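The 2D-to-1D conversion and its inverse can be pictured with a short NumPy sketch (row-major serialization assumed; the Simulink blocksets perform the equivalent operation in the model):

    import numpy as np

    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    stream = img.ravel()                   # 2-D image -> 1-D serial stream (row major)
    restored = stream.reshape(128, 128)    # 1-D serial stream -> 2-D image
    assert np.array_equal(img, restored)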
5.5.1 DESIGNING OF SUB BLOCKS

This section describes the System Generator blocks, like the image conversion blocks, the low pass and high pass filters and the DWT and IDWT blocks. The process of converting the input image to a serial stream is illustrated [125] in Figure 5.5.
In the fusion step, the image coefficients have been added and, in the next step, the resultant output has been multiplied by a factor of 0.5. The fused coefficients have then been applied to the synthesis filters to restore the fused image bit stream in the IDWT process. This fused image 1-D data (serial stream) is converted back to a 2-D image using the serial data to image block [125] after the DWT and IDWT operations, as illustrated in Figure 5.8.
The CDF 9/7 decomposition low pass filter coefficients are:

Index Number    Coefficient
K = 0           0.6029
K = 1           0.2666
K = 2           -0.0782
K = 3           -0.0168
K = 4           0.0267

With the Maximum_Possible oversampling specification [125], automatic determination of the hardware format is performed.
The CDF 9/7 decomposition high pass filter coefficients are:

Index Number    Coefficient
K = 0           1.1150
K = 1           -0.5912
K = 2           -0.0575
K = 3           0.0912
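For reference, a sketch of one row of DWT analysis with the full filters mirrored from the half-filter coefficients listed above (NumPy assumed; symmetric extension and the exact alignment details of the hardware blocks are omitted):

    import numpy as np

    # Full 9-tap low pass and 7-tap high pass analysis filters, built by mirroring
    # the K = 1..4 (resp. K = 1..3) coefficients around the K = 0 center tap.
    lp = np.array([0.0267, -0.0168, -0.0782, 0.2666, 0.6029,
                   0.2666, -0.0782, -0.0168, 0.0267])
    hp = np.array([0.0912, -0.0575, -0.5912, 1.1150,
                   -0.5912, -0.0575, 0.0912])

    row = np.random.rand(256)                        # one image row
    approx = np.convolve(row, lp, mode='same')[::2]  # low pass, then decimate by 2
    detail = np.convolve(row, hp, mode='same')[::2]  # high pass, then decimate by 2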
Figures: DWT decomposition and DWT reconstruction sub-blocks of the model.

Figure 5.13 shows the block diagram of the fusion block of the developed System Generator model.
5.7 EXPERIMENTAL SETUP OF HARDWARE SOFTWARE CO-SIMULATION
5.8 RESULTS AND DISCUSSION

Figure: (a) Input images BAND2, BAND3, BAND4 and PAN; (b) hardware co-simulation outputs FUSED PAN&BAND2, FUSED PAN&BAND3 and FUSED PAN&BAND4.
The device utilization summary of the design is as follows (resource names inferred from the available counts of the XC6VSX315T device).

Logic utilization                     Used    Available    Utilization
Number of slice registers             217     393600       0%
Number of slice LUTs                  216     196800       0%
Number of fully used LUT-FF pairs     216     217          99%
Number of bonded IOBs                 144     600          24%
Number of BUFG/BUFGCTRLs              1       32           3%
Figure 5.19 shows the RTL schematic of the image fusion design. This schematic representation shows the pre-optimized design in terms of generic symbols. It helps in discovering design issues early in the design process.

Figure 5.19 (a) Top level RTL and (b) RTL internal schematic of the averaging method

Figure 5.20 (a) Top level technology schematic and (b) internal technology schematic of the averaging method
The FPGA device properties and timing summary are as follows.

Family: Virtex 6
Device: XC6VSX315T-3FF1156
Minimum period: 1.177 ns
Maximum frequency: 849.618 MHz

The use of System Generator DSP blocks for FPGA design greatly shortens the development time.
CHAPTER 6: CONCLUSIONS AND FUTURE WORK
6.1 Conclusions
6.2 Future work
6. CONCLUSIONS AND FUTURE WORK
This chapter deals with the conclusions drawn from the
research and also includes the future scope that is recommended for
the continuation of this investigation.
The present investigation is mainly aimed at developing a hardware software co-simulation algorithm to fuse multispectral and panchromatic images. Hence, in this investigation, bicubic interpolation has been adopted for resampling the input images.
6.1 CONCLUSIONS
During the image registration process, a feature-based method has been adopted to extract and match the common features of the MS and PAN images. ERDAS IMAGINE software has been used to obtain the registered images.
Nearest neighbor, bilinear and bicubic resampling techniques have been performed to identify the best technique. Based on the PSNR values, the bicubic interpolation technique has been selected and implemented throughout the study.
For the averaging and substitutive methods, the 5.8 m resolution MS image has been upsampled to the PAN image resolution (1 m). For the additive method, the MS image resolution has been upsampled from 5.8 m to 4 m (nearest power of 2).
In MATLAB Simulink R2010b, the averaging, additive and substitution based image fusion methods have been implemented and their performance has been evaluated.
6.2 FUTURE WORK

In this study, a single level DWT based hardware software co-simulation has been implemented. In future work, multi-level decomposition and higher abstract level implementation techniques can be considered.
The present study used basic wavelets (DWT) for the image fusion algorithm. Hence, it is recommended to extend the research with advanced wavelet families.
REFERENCES
[1]
[3]
[4] Chetan K. Solanki; and Narendra M. Patel; National …
Yang, J.;
[7]
[8]
[9]
[10] Adhyana Gupta; International Journal of Computational …
[12] International Journal of Advanced Research in Computer Science and Software Engineering.
[14]
[15]
[16]
[17]
[18] Hall, D.L.; and Llinas, J.; Proceedings of the IEEE, 85(1), 1997, 6-23.
[19]
[22]
[23] Barbara Zitova; and Jan Flusser; Image and Vision Computing, 21, 2003, 977-1000.
[24]
[25]
[26] Le Moigne, J.; Campbell, R.P.; IEEE …
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35] Parker, J.A.; Kenyon, R.V.; and Troxel, D.; IEEE Transactions on Medical Imaging, 2(1), 1983, 31-39.
[36]
[37]
[38]
[39]
[40] Chavez, P.S.; Revisited and improved; Photogrammetric …
[42] Wen Dou; and Yunhao Chen; Journal of Computers …
[43] Tu, T.M.; Huang, P.S.; Hung, C.L.; and Chang, C.P.; Information Fusion, 2(3), 2001, 177-186.
[44]
[45]
Yan Luo; Rong Liu; and Yu Feng Zhu; Remote Sensing and Spatial Information Sciences, XXXVII(B7), 2008, 1155-1158.
[46]
[47]
[48]
[49]
[50]
[51]
[52]
[53]
[54]
[55]
[56]
[57]
[58]
[59]
[60]
[61]
[62]
[63]
[64]
[65]
[66] Stephane G. Mallat; Transactions of the American Mathematical Society.
Román;
[69] Vadher Jagruti; IOSR Journal of Electronics and Communication Engineering.
[71] … Conference on Electronic & Mechanical …
[73] Lavanya, A.; Vani, K.; Sanjeevi, S.; and Suresh Kumar, R.; IEEE International Conference on Recent Trends in Information Technology, 3(5), 2011, 920-925.
[74]
[75]
[76]
[77]
[78]
[79]
[80]
[81]
[82] Stephan Blokzyl; Matthias Vodel; and Wolfram Hardt;
[85]
[86]
[87]
[88]
[89] Steffen Klupsch; Markus Ernst; Sorin A. Huss; Rumpf, M.; and Strzodka, R.; Proceedings of the IEEE Workshop on Heterogeneous Reconfigurable Systems on Chip, 2002, 1-7.
[90]
[91]
[92]
[93]
[94]
[95]
[96]
[97]
Feng Qu; Bochao Liu; Jian Zhao; and Qiang Sun; Optics and Photonics Journal, 3, 2013, 76-78.
[98]
[99] Zhang, B.;
[112] Huber, W.; http://www.quantdec.com/SYSEN597/GTKAV/section9/map_algebra.htm, 2009.
[113] Gagandeep Kour; and Sharad Singh, P.; International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering.
Cohen, A.; Ingrid Daubechies; and Feauveau, J.C.;
Klonus; and Manfred Ehlers; 12th International …
[124] Saidani, T.; Dia, D.; Elhamzi, W.; Atri, M.; and Tourki, R.; Proceedings of the World Congress on Engineering, 1, 2009, 37.
[125] Xilinx system generator reference guide.
[126] Shajan, P.X.; Muniraj, N.J.R.; and John Abraham, T.;
LIST OF PUBLICATIONS

1. List of Research Papers Published in Journals:

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, S. Varadarajan, DWT Based Pan-Sharpening of Low Resolution … Resampling Methods, International Journal of Advanced …

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, FPGA …

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, FPGA … Method …

2. List of Research Papers Presented in IEEE and SPRINGER technically sponsored conferences: