

CHAPTER 1

INTRODUCTION

CHAPTER 1: INTRODUCTION

1.1  Introduction
1.2  Image fusion
1.3  Reconfigurable hardware
1.4  Thesis outline

1. INTRODUCTION

1.1  INTRODUCTION

Image processing is gaining importance in science and technology. It constitutes a promising area of research due to the ever growing importance of scientific visualization in various applications. The need for better performance in image processing has increased the demands on computational efficiency. Various alternatives are available to improve the performance of image processing using specialized architectures.
Image fusion is a process of merging the relevant information
from several input images into a single image. It is extensively used in
image processing applications like management of natural resources,
remote sensing, defense and medical imaging. Various fusion
techniques are available to improve the quality of the fused image.
In remote sensing applications, satellites provide information about large areas of the planet [1]. To meet the needs of several remote sensing applications such as weather, meteorological and environmental monitoring, the remote sensing system offers spatial, spectral, radiometric and temporal resolutions [2]. Generally, satellites take various images at different frequencies in the visual and non-visual ranges, called monochrome images. Based on the frequency range, each monochrome image contains different information about the object. Each monochrome image is represented as a band, and a collection of these bands of the same scene obtained by a sensor is called a multispectral (MS) image. In general, an MS image contains three bands (Red, Green and Blue). The combination of these three bands produces a color image. Satellites usually provide a panchromatic (PAN) image along with the MS image. A PAN image refers to a gray scale image that contains data over a wide range of wavelengths from the visible to the thermal infrared.
1.2  IMAGE FUSION
The main reason for the increased importance of image fusion in remote sensing is that remote sensing is currently moving towards many important social and scientific applications. These applications include the management of natural disasters and natural resources, the assessment of climate change and the preservation of the environment. Furthermore, there is an increasing availability of images with different characteristics, increased temporal flexibility, shorter satellite revisit times and continuing evolution of sensor technologies. Therefore, a growing need emerges to simultaneously process different data from remote sensing images for information extraction and data fusion. In remote sensing, most sensors operate either in panchromatic mode or in multispectral mode. A panchromatic sensor gives a high spatial resolution image that does not contain any color information, whereas a multispectral sensor gives a color image with low spatial resolution. Neither of these images alone provides complete information about the object.

The better idea to overcome this limitation is image fusion. The main
objective of image fusion in remote sensing is merging the grey-level
high-resolution panchromatic image and the colored low-resolution
multispectral image [3]. When the input images are taken from
different satellites, fusing of these images can be called multi-sensor
image fusion otherwise it is said to be single-sensor image fusion. A
multi-sensor image fusion overcomes the constraints of a singlesensor image fusion by combining the several sensor images to form a
composite image. The multi-sensor image fusion includes various
benefits

viz.,

robust

system

performance,

improved

reliability,

compact representation of information, extended range of operation


and reduced uncertainty.
Image fusion is generally done at one of three different processing levels depending on the stage at which fusion takes place viz., pixel, feature and decision level, as illustrated in Figure 1.1. In pixel level fusion, the combination mechanism operates directly on the pixels obtained from the different sensors. Feature level fusion works on image features extracted from the source images, and decision level fusion works at a higher level, merging the interpretations of the different images obtained after image understanding.
Based on the domain of operation, pixel level image fusion methods are classified into two types: spatial domain fusion and transform domain fusion methods [3].

[Figure 1.1 appears here: a block diagram in which two sensors observe the same scene; their images are combined directly in pixel-level fusion, their extracted feature vectors are combined in feature-level fusion, and their individual decisions are combined in decision (symbol) level fusion, each path producing a fused image.]

Figure 1.1 Image fusion levels


The spatial domain fusion techniques directly manipulate the pixel values of the source images [4]. Fusion methods such as Averaging, Principal Component Analysis (PCA) and Intensity Hue Saturation (IHS) are techniques in the spatial domain [5]. The drawback of spatial domain techniques is that they introduce spatial distortion in the resultant fused image [6]. In the transform domain, the fusion methodology is carried out on the transformed coefficients, which provides better spectral and spatial quality of the fused image than the spatial domain fusion techniques [5]. Transform domain fusion comprises pyramid based and wavelet based fusion techniques. Figure 1.2 describes image fusion methods.

[Figure 1.2 appears here: a classification tree of image fusion methods. Pixel level image fusion covers averaging, Brovey, PCA, wavelet transform and intensity hue saturation transform; feature level image fusion covers neural networks, region based segmentation, K-means clustering and similarity matching to content level retrieving; decision level image fusion covers fusion based on fuzzy and unsupervised FCM, fusion based on support vector machines and fusion based on the information level in the regions of the image.]

Figure 1.2 Image fusion methods


The pyramid transform based fusion methods mainly suffer from blocking effects in the regions where the input images are different. They do not provide any directional information and also have a poor Peak Signal to Noise Ratio (PSNR) [7]. Therefore the wavelet transform has been used for image fusion. Compared to pyramids, wavelet transforms give a better representation of the detailed features of the image. The Discrete Wavelet Transform (DWT) is the most commonly used wavelet transform for image fusion. There are some improved wavelet families, such as the contourlet transform, the non-subsampled contourlet transform and the curvelet transform, which have been used for image fusion. Though their performance is good compared to the Discrete Wavelet Transform, these transforms are computationally expensive and require large memory [8]. Hence, the two dimensional Discrete Wavelet Transform (DWT) has become one of the standard tools for image fusion. The standard image fusion process is shown in Figure 1.3.

[Figure 1.3 appears here: the panchromatic and multispectral images pass through preprocessing and then through image fusion (spatial and transform domain) to produce the fused image.]

Figure 1.3 Standard image fusion process


In image fusion, the first step is to prepare the input images for the fusion process. This step, also called image preprocessing, includes registration and resampling of the input images. Registration is needed to align the corresponding pixels in the input images and is usually done by geometric normalization of the images. If the Multispectral (MS) and Panchromatic (PAN) images are taken from the same sensor, they are usually already co-registered and can be used directly for fusion processing. However, if the images are taken from different sensors, a registration process is necessary to ensure that corresponding pixels in the input images represent exactly the same location on the ground. After registration, the images are resampled so that the ratio between the pixel spacings of the PAN and MS images becomes one (spatial domain methods) or a power of 2 (DWT).
In the transform domain approach, depending on the resolution of the images, different levels of decomposition are performed to obtain transformed image coefficients at the same scale. Coefficients coming from different images can then be appropriately merged to obtain new coefficients, ensuring that the original image information is retained. Once the coefficients are combined, the final fused image is obtained through the inverse transform.
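As a minimal sketch of this standard process, the fragment below (assuming MATLAB with the Image Processing and Wavelet Toolboxes, a single decomposition level, the Haar filter and hypothetical file names) resamples one MS band to the PAN grid, applies a one-level DWT to both images, keeps the MS approximation together with the PAN detail coefficients (a substitutive rule), and reconstructs the fused band with the inverse transform:

    % Single-band, single-level DWT fusion sketch (substitutive rule).
    % File names, the 'haar' filter and one decomposition level are illustrative assumptions.
    MS  = double(imread('ms_band.tif'));                        % low-resolution MS band
    PAN = double(imread('pan.tif'));                            % high-resolution PAN image
    MSr = imresize(MS, [size(PAN,1) size(PAN,2)], 'bicubic');   % resample MS to the PAN grid

    [aM, hM, vM, dM] = dwt2(MSr, 'haar');                       % decompose resampled MS band
    [aP, hP, vP, dP] = dwt2(PAN, 'haar');                       % decompose PAN image

    % Substitutive rule: spectral content (approximation) from MS, spatial detail from PAN.
    fused = idwt2(aM, hP, vP, dP, 'haar');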
1.3  RECONFIGURABLE HARDWARE
Most image processing algorithms generally operate in software. Software implementations of these algorithms have several limitations: complex operations have to be realized as a large sequence of simple operations that can only be executed serially. As a result, it is difficult to meet real-time requirements with software [12]. Hence, a system that supports the real-time requirements is desirable. Hardware implementation of image processing algorithms has emerged as the most viable solution for improving the performance of image processing systems.
Hardware implementation solutions include standard cell Application-Specific Integrated Circuits (ASIC), Application-Specific Standard Parts (ASSP) and programmable solutions such as Digital Signal Processors (DSP), media processors and FPGAs. Once a design has been programmed onto an ASIC or ASSP, it cannot be altered. In ASICs, if an error exists in the hardware design and is not discovered before product shipment, it cannot be corrected without a very costly product recall. Powerful DSPs, on the other hand, are costly, and their corresponding software applications may not match the performance of dedicated hardware systems [13].


The introduction of reconfigurable devices and system level hardware programming languages has further accelerated the design of image processing algorithms in hardware [10]. A reconfigurable hardware device such as the Field Programmable Gate Array (FPGA) is one of the finest alternatives. FPGAs are also used to speed up image processing applications [9]; their salient features, like greater I/O bandwidth to local memory, parallelism, pipelining and the availability of optimizing compilers, make them superior in speed to conventional general-purpose processors such as Pentiums [11]. Complex tasks that involve multiple image operations run much faster on FPGAs than on Pentiums.
The reconfigurable computing technology in FPGAs, along with many other features of FPGAs, makes them ideally suited for real-time image processing. Creating reconfigurable applications is not as straightforward as designing either software or hardware, as the application is intrinsically a hardware-software co-simulation [9]. The best approach for co-simulation is the high level graphical interface of MATLAB Simulink with the Xilinx System Generator. The co-simulation interface must provide sufficient capabilities and reasonable simulation speeds. The System Generator automatically handles the FPGA-specific details. It is the best solution for the hardware approach as it provides easier hardware verification and implementation compared to a Hardware Description Language (HDL) based approach, attaining low cost, high performance and a short development time [14]. It automatically generates the User Constraints File (.UCF) and the VHDL or Verilog code which can be downloaded directly onto the FPGA board.
1.4  THESIS OUTLINE
The present investigation develops a hardware-software co-simulation algorithm to fuse multispectral and panchromatic satellite images and to implement it on reconfigurable hardware. The thesis is organized as follows:
1. Chapter-2 presents a detailed literature survey on image preprocessing, image fusion using DWT and the implementation of image fusion on FPGA.
2. Chapter-3 provides the image preprocessing stages viz., registration and resampling. The resampling methods viz., nearest neighbor, bilinear and bicubic are performed on multispectral images and the best method is evaluated.
3. Chapter-4 explains DWT based image fusion using averaging, additive and substitutive fusion rules with Haar, db3 and CDF 9/7 filters in MATLAB. Performance parameters of these algorithms are analyzed.
4. Chapter-5 deals with the FPGA implementation of image fusion using the CDF 9/7 filter transform through hardware-software co-simulation. The fusion model has been designed using the averaging method in MATLAB Simulink and Xilinx System Generator.
5. Chapter-6 describes the overall conclusions of this research and the scope for future work.


CHAPTER 2

LITERATURE REVIEW


CHAPTER 2: LITERATURE REVIEW

2.1  Introduction
2.2  Image preprocessing
     2.2.1  Image registration
     2.2.2  Image resampling
2.3  Image fusion
     2.3.1  Averaging
     2.3.2  Intensity Hue Saturation
     2.3.3  Principal Component Analysis
     2.3.4  Pyramid Transform
            2.3.4.1  Laplacian Pyramid
            2.3.4.2  Morphological Pyramid
            2.3.4.3  Gradient Pyramid
     2.3.5  Discrete Cosine Transform (DCT) technique
     2.3.6  Discrete Wavelet Transform (DWT)
            2.3.6.1  Discrete Wavelet Transform overview
2.4  Reconfigurable Hardware
2.5  Motivation and Objective of the project

2. LITERATURE REVIEW

2.1  INTRODUCTION
Owing to the importance of multi-sensor data in many fields such as remote sensing, medical and military imaging applications, image fusion has become a prominent area of research. In remote sensing applications, the satellite MS image bands give color information while the PAN image gives the details of the target. However, neither of these individual images alone provides the required information about the target. The aim of image fusion is to produce new images that contain both the low spatial resolution multispectral data (color information) and the high spatial resolution panchromatic data (details). In principle, multi-sensor fusion provides significant benefits compared to single-sensor fusion, since the use of different types of sensors may improve the quality of the target information [15]. This chapter deals with the various image fusion techniques and the importance of reconfigurable hardware for image fusion.
Wu Wenbo, Yao Jing and Kang Tingjun [16] obtained good quality information in satellite image fusion by matching multispectral images with the Thematic Mapper panchromatic image, with an error control of 0.3 pixels. They used Smoothing Filter-based Intensity Modulation (SFIM), Modified Brovey, High Pass Filter (HPF), Multiplication, Principal Component Analysis (PCA) Transform and IHS methods for the image fusion. They evaluated the quality of the fused images by using the mean, entropy, standard deviation and the correlation coefficient with the MS and PAN images as parameters. The results revealed that, out of the six methods, HPF and SFIM are the best at preserving the spectral information of the original images.
Yun Zhang and Ruisheng Wang [17] explained an approach for object extraction from high-resolution satellite images. This approach integrates multispectral classification, image fusion, feature segmentation and feature extraction into the object extraction process. Both the spatial information from the Panchromatic (PAN) image and the spectral information from the Multispectral (MS) image are utilized to improve extraction accuracy. They mainly concentrated on road extraction from QuickBird MS and PAN images and concluded that the proposed approach was very effective, with a road network extraction correctness of 0.95, which is significantly higher than that of other existing road extraction methods such as multispectral classification, PAN based feature extraction and MS and PAN integrated classification.
Jiang Dong, Dafang Zhuang, Yaohuan Huang and Jingying Fu [15] presented a brief overview of recent advances in multi-sensor satellite image fusion. Initially, they explained the most useful existing image fusion algorithms in remote sensing applications, which include object identification and classification, target tracking and change detection. They also made some recommendations on the development and improvement of fusion algorithms for establishing an automatic quality assessment scheme.


David L. Hall and James Llinas [18] discussed multi-sensor data fusion. They introduced multi-sensor data fusion and mentioned that fused data from multiple sensors provides several advantages over data from a single sensor, such as better observability and more accurate determination of the position of an object. They explained the need for multi-sensor image fusion in military and non-military applications. In their view, co-registration is the key challenge in multi-image data fusion. In the co-registration process, two or more images are aligned and overlaid so that each image represents the same location on earth.
2.2  IMAGE PREPROCESSING

The Earth Observing System Data and Information System (EOSDIS) receives raw data from all the spacecraft and processes it to remove telemetry errors and eliminate communication artifacts. Image preprocessing is a preliminary phase that improves image quality degraded by undesirable effects such as atmospheric interference, sensor motion and system noise [19]. This preprocessing includes registration and resampling of the multisensor images (MS and PAN).
Yoonsuk Choi, Sharifahmadian E and Latifi S [20] explained the importance of preprocessing the source images for satellite image fusion and the various effects of preprocessing on the fusion results. They reported that preprocessing should be done properly to achieve high quality results in the main fusion process.


2.2.1 Image Registration


Initially, the different images are in different coordinate systems. The image registration process spatially aligns them by considering one of the images as a reference and transforming the other images one at a time. Hence, a selection of corresponding structures or elements in the reference image and in each of the other images is necessary in order to determine an appropriate transformation. Once the registration process is completed, the images can be processed for information extraction. Registration can be done either manually or automatically, and many methods have been proposed for image registration [21].
Manjunath B S, Shekhar C and Chellappa R [22] explained the importance of feature detection in intermediate level problems like image registration, object recognition and face recognition. In their algorithm, feature detection is based on an image interaction model at different scales. The feature detector detects salient image feature points such as line endings, short lines, corners and other sharp changes in curvature, and is designed to be simple and robust.
Barbara Zitova and Jan Flusser [23] provided a survey of the latest and most popular image registration methods, which are area based and feature based. They also explained the procedure for image registration, namely object feature detection and matching, mapping function design, image transformation and resampling, and mentioned the merits and demerits of each registration method.
Le Yu et al. [24] developed a pre-registration technique to align the input image to the reference image. In this process, the Scale Invariant Feature Transform (SIFT) and an affine transformation model detect the matching points automatically. Fine-scale registration is then performed by a piecewise linear transformation technique using feature points detected by the Harris corner detector soon after the coarse registration is completed. They made experiments with QuickBird, SPOT5, SPOT4 and TM remote sensing images.
Yuanxin Ye and Jie Shan [25] concluded that automatic registration of multispectral remote sensing images is a challenging task due to significant non-linear differences caused by radiometric variations. To counter this problem they proposed a two stage process consisting of pre-registration and fine registration. Pre-registration is achieved using the Scale Restriction Scale Invariant Feature Transform (SR-SIFT) to eliminate the translation, rotation and scale differences between the reference image and the sensed image. In the fine registration stage, evenly distributed interest points are first extracted in the pre-registered image using the Harris corner detector. Their proposed process was evaluated with three pairs of multispectral remote sensing images from the ETM+, TM and ASTER Worldview satellites, and the proposed method was observed to achieve reliable registration outcomes.
Le Moigne J, Campbell and Cromp [26] implemented an image registration technique based on correlation coefficients. In this algorithm, wavelet decomposition was used for feature selection. Horizontal (HL) and vertical (LH) coefficients are computed using a histogram in which 13% to 15% of the points are retained from the wavelet coefficients. This method is similar to an edge-based correlation method.
Fonseca and Costa [27] proposed an automatic registration of satellite images, selecting features using the local modulus maxima of the wavelet transform coefficients. Thresholding was then applied to the feature coefficients in order to eliminate insignificant feature points.
Qinfen Zheng and Chellappa R [28] implemented an approach which estimates the 2-D translation, scale and rotation of partially overlapping images. They estimated the initial camera rotation using an illuminant direction estimator. Local curvature discontinuities were detected by locating feature points with a Gabor wavelet model. As a result, the proposed approach gave good results.
Li et al. [29] proposed a wavelet based registration scheme, starting with feature extraction carried out by contour extraction. After feature extraction, a voting algorithm was performed on each contour point. The algorithm then uses the normalized correlation as the similarity measure to filter out mismatched points.
Corvi M and Nicchiotti G [30] proposed an automatic registration procedure based on a multiresolution analysis of the images. This technique used the residue images of the Discrete Wavelet Transform (DWT) and a clustering technique to obtain the initial transformation parameters. The algorithm used both the maxima and minima of the DWT coefficients to provide more points for feature correspondence and least mean square estimation.
Wu J and Chung A [31] developed a coarse-to-fine wavelet-based image registration algorithm to estimate dense motion vectors between two images. Compared to the finer-scale basis functions, the coarser-scale basis functions have larger support. With these variable supports at full resolution, the basis functions serve as large-to-small windows, so that global and local information can be incorporated concurrently for image matching, especially for recovering motion vectors containing large displacements. Two sets of test images were experimented with, using both the wavelet-based method and a leading pyramid spline-based method for evaluation purposes. The experimental results showed that the wavelet-based method produced images with smaller mean and standard deviation.


Christopher Paulson, Soundararajan Ezekiel and Dapeng Wu [32] proposed a registration technique which uses wavelet coefficients for feature extraction and feature correspondence. A projective transformation was performed, after which bicubic interpolation was applied to the images. With this method they achieved a good Peak Signal to Noise Ratio (PSNR), but the algorithm is slow and lacks robustness.
2.2.2 Image resampling
Image resampling is used to develop a new version of an image with different pixel dimensions in order to obtain the relevant information [33]. Resampling is needed because satellite imaging is done at fixed time intervals, whereas the final output generation depends on the image samples being at regular spatial intervals. Hence, there is a need to shift the original samples of the image or to interpolate between the input values to obtain image samples at the output locations.
Resampling is used for increasing or decreasing the size of an image in order to match the characteristics of the multisensor images [34]. Directly changing the size of the image cannot increase the information in the image or its resolution. In the resampling process, the image quality depends strongly on the interpolation technique used.
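As an illustrative sketch (assuming MATLAB's Image Processing Toolbox and hypothetical file names), the fragment below upsamples a low-resolution MS band to the PAN pixel grid with the three interpolation methods considered in this thesis:

    % Upsample an MS band to the PAN pixel grid with three interpolators.
    MS  = double(imread('ms_band.tif'));        % assumed low-resolution input
    PAN = double(imread('pan.tif'));            % assumed high-resolution reference
    target = [size(PAN,1) size(PAN,2)];         % output grid matches the PAN image

    MSnn  = imresize(MS, target, 'nearest');    % nearest neighbor: fastest, blocky result
    MSbil = imresize(MS, target, 'bilinear');   % bilinear: smoother, slight blurring
    MSbic = imresize(MS, target, 'bicubic');    % bicubic: sharper edges, more computation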


Parker J A, Kenyon R V and Troxel D E [35] compared different types of interpolation techniques for image resampling. They discussed how to resample the original image and explained different types of interpolation techniques for image resampling. They concluded that, at the expense of some increase in computing time, image quality can be improved by resampling with the high-resolution cubic spline function as compared to the nearest neighbor, linear or cubic B-spline functions.
Philippe Thevenaz, Thierry Blu and Michael Unser [36] presented a survey of interpolation techniques for image resampling. They defined interpolation as representing an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions. They highlighted several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. Finally, they performed a cost-performance analysis of the interpolation techniques.
Heather Studley and Keith T Weber [37] stated that image resampling is a process used to interpolate the new cell values of a raster image during a resizing operation. They mentioned that even though many resampling methods are available, each method has strengths and weaknesses which should be considered carefully. The purpose of their study was to explore how different methods are implemented by different software vendors (ArcGIS and Paint Shop Pro). Aggregated Average and Nearest Neighbor were considered for the experiment. Landsat imagery was resampled from 28.5 to 100 meters per pixel (mpp) using the two methods, and the correlation coefficient was used as the evaluation parameter. They concluded that the selection of resampling methods should be done carefully.
2.3  IMAGE FUSION
Based on the domain of operation, pixel level image fusion can be broadly classified into two types: spatial domain fusion methods and transform domain fusion methods. Spatial domain techniques deal directly with the image pixels [38]; the pixel values are manipulated to achieve the desired result. Fusion methods such as Averaging, Intensity Hue Saturation (IHS) and Principal Component Analysis (PCA) are some examples of spatial domain techniques.
2.3.1 Averaging

Averaging is the simplest process for fusing two input images, taking the mean value of the corresponding pixels [39]. This is a fundamental image fusion technique. Image fusion is performed by simply averaging the corresponding pixels in each input image, as represented in equation 2.1:

    If(x, y) = [I1(x, y) + I2(x, y)] / 2                    (2.1)

where,
I1(x, y) is input image 1,
I2(x, y) is input image 2, and
If(x, y) is the resultant fused image.
This technique is valid only for some simpler applications, since one of the input images may have poor lighting/brightness and thus the quality of the resultant averaged image will decrease. The averaging method does not offer very good results, as it reduces the contrast of the resultant fused image.
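A minimal sketch of equation 2.1 (assuming two co-registered, equally sized grayscale images with hypothetical file names) is:

    % Pixel-wise averaging fusion as in equation 2.1 (illustrative file names).
    I1 = double(imread('image1.png'));
    I2 = double(imread('image2.png'));
    If = (I1 + I2) / 2;                         % mean of corresponding pixels
    imwrite(uint8(If), 'fused_average.png');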
2.3.2 Intensity Hue Saturation

Intensity Hue Saturation (IHS) is a color fusion technique. It effectively separates the spatial (intensity) and spectral (hue and saturation) information of an image [40 & 41]. The fusion method first converts an RGB image into Intensity (I), Hue (H) and Saturation (S) components. In the next step, the intensity (I) component is substituted with the high spatial resolution panchromatic image. A reverse IHS transform is then performed on the PAN image together with the Hue (H) and Saturation (S) bands to obtain the IHS fused image.
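For the common linear intensity I = (R + G + B)/3, the IHS substitution can be written in its additive ("fast IHS") form, in which replacing I by the PAN image and inverting the transform is equivalent to adding the difference (PAN - I) to every band. A minimal sketch under that assumption, with hypothetical file names and the PAN image already registered and resampled to the MS grid, is:

    % Fast (additive) form of the linear IHS substitution: F_k = MS_k + (PAN - I).
    MS  = double(imread('ms_rgb.tif'));          % M x N x 3 multispectral image
    PAN = double(imread('pan.tif'));             % M x N panchromatic image on the same grid
    I     = mean(MS, 3);                         % intensity component (band average)
    delta = PAN - I;                             % spatial detail to inject
    fused = MS + repmat(delta, [1 1 3]);         % add the detail to every band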
Firouz Abdullah Al-Wassai, Kalyankar N V and Al-Zuky A A [41] reported that the IHS technique is one of the most commonly used techniques for image fusion. They explained different IHS image fusion algorithms that transfer a color image from the RGB space to the IHS space. Their experiments mainly targeted remote sensing applications, such as fusing Multispectral (MS) and Panchromatic (PAN) images. The performance of the methods was evaluated using the Standard Deviation (SD), Correlation Coefficient (CC), Normalized Root Mean Square Error (NRMSE), Entropy (En), Deviation Index (DI) and Signal-to-Noise Ratio (SNR) parameters. Their study makes it clear that IHS transformation based fusion shows different results depending on the formula of the IHS transformation that is used.
Wen Dou and Yunhao Chen [42] reported that the main concept of the IHS method is the representation of the low-resolution MS images in the IHS system and the substitution of the intensity component I with the PAN image. However, the IHS method introduces spectral distortion into the resulting MS images, which appears as a change in colors between the compositions of the resampled and the fused multispectral bands. Hence, they proposed a histogram matching method to avoid this problem and improve the spectral fidelity of the IHS method.
Tu T M, Huang P S, Hung C L and Chang C P [43] explained that if more than three MS bands are available for the IHS transform, a viable solution is the GIHS transform, i.e. including the response of the Near-Infrared (NIR) band in the intensity component, which is defined on the basis of the edges of the panchromatic and MS images.
Yee Leung, Jianmin Liu and Jiangshe Zhang [44] mentioned that conventional IHS methods substitute the PAN image for the intensity component of the bands. Due to this substitution, the spectral responses of the MS bands do not perfectly overlap with the bandwidth of the PAN image. To overcome this problem, an adaptation of the intensity is required. Therefore, the authors implemented an Adaptive Intensity Hue Saturation (AIHS) method for image fusion, in which the amount of spatial detail injected into each band of the Multispectral (MS) image is determined by an appropriate weighting matrix.
2.3.3 Principal Component Analysis
Principal Component Analysis (PCA) is a mathematical tool which transforms a number of correlated variables into a number of uncorrelated variables. The PCA transform converts the inter-correlated Multispectral (MS) bands into a new set of uncorrelated components [45]. In this approach the principal components of the MS image bands are first computed. Then, the first principal component, which contains most of the information in the image, is substituted by the PAN image. Finally, the inverse principal component transform is performed to obtain the new RGB (Red, Green and Blue) bands of the multispectral image from the principal components.
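A minimal sketch of this substitution (assuming MATLAB, a three-band MS image already resampled to the PAN grid, and hypothetical file names; the PAN image is matched to the mean and standard deviation of the first component before substitution) is:

    % PCA-based pan-sharpening sketch: substitute PC1 with the (matched) PAN image.
    MS  = double(imread('ms_rgb.tif'));                % M x N x 3, resampled to the PAN grid
    PAN = double(imread('pan.tif'));                   % M x N panchromatic image
    [M, N, B] = size(MS);
    X  = reshape(MS, M*N, B);                          % each band becomes one column
    mu = mean(X, 1);
    Xc = X - repmat(mu, M*N, 1);                       % zero-mean bands
    [V, D] = eig(cov(Xc));                             % eigen-decomposition of band covariance
    [~, order] = sort(diag(D), 'descend');
    V  = V(:, order);                                  % components ordered by decreasing variance
    PC = Xc * V;                                       % principal components
    p  = PAN(:);
    p  = (p - mean(p)) / std(p) * std(PC(:,1)) + mean(PC(:,1));   % match PAN to PC1 statistics
    PC(:,1) = p;                                       % substitute the first component
    Xf = PC * V' + repmat(mu, M*N, 1);                 % inverse PCA transform
    fused = reshape(Xf, M, N, B);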
Chavez Jr P S, Sides S C and Anderson J A [46] mentioned that Principal Component Analysis (PCA) is an alternative to the IHS technique. In the PCA method the PAN image is substituted for the first principal component (PC1); for this, the PAN image must be histogram matched to PC1 before substitution, because PC1 has a far larger mean and variance than the PAN image. They also noted that spectral distortion in the fused bands is less in PCA than in IHS, but cannot be avoided completely.


Naidu V P S and Rao J R [47] proposed pixel-level image fusion using PCA. The fusion was achieved by a weighted average of the input images, where the weights for each source image were obtained from the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the sources. Different image fusion performance metrics, with and without a reference image, were evaluated. The simple averaging fusion algorithm showed degraded performance compared to PCA.
Nisha Gawari and Lalitha Y S [48] discussed the formulation, process flow diagrams and algorithms of PCA (Principal Component Analysis), DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) based image fusion techniques. The proposed PCA uses a vector space transform to reduce the dimensionality of large data sets; that is, by means of a mathematical projection the image is reduced to a few variables (principal components). They explained that the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) convert the image from the spatial domain to the frequency domain and then perform the fusion on the transformed coefficients; the fused image is obtained by performing the inverse transform. A comparative analysis was performed on the PCA, DCT and DWT techniques, and their study concludes that DWT is the best approach for image fusion.
Nirosha Joshitha J and Medona Selin R [49] proposed an image fusion technique based on principal component analysis for palm print recognition. This method extracted optimal weighting coefficients with respect to information content and redundancy removal.

Zhang Y [50] notes that most advanced satellites, like Ikonos and QuickBird, provide very high spatial resolution images. For those images, the spectral responses of the MS bands do not perfectly overlap with the bandwidth of the PAN image, which yields poor results in terms of spectral fidelity for the IHS and PCA based methods.
The drawback of spatial domain approaches is that they produce spatial distortion in the fused image. In the transform domain fusion techniques, the fusion methodology is carried out on the transformed coefficients, which provides better spectral and spatial quality of the fused image than the spatial domain fusion techniques [51]. Transform domain fusion comprises pyramid based and wavelet based fusion techniques.
2.3.4 Pyramid Transform

A pyramid structure is defined as a collection of images of the same scene captured at different scales which, combined together, represent the original image [52]. An image can be represented as a pyramid structure when analyzed using a pyramid transform. The pyramid transform can be performed in three ways: Laplacian Pyramid (LP), Morphological Pyramid (MP) and Gradient Pyramid (GP).


2.3.4.1 Laplacian Pyramid

The Laplacian Pyramid (LP) is derived from the Gaussian Pyramid (GP), which is a multi-scale representation obtained by recursive reduction. LP uses a pattern selective method to fuse the input images, so that a feature at a time is fused instead of a pixel at a time [53].
The basic idea of the Laplacian Pyramid transform is to perform a pyramid decomposition on each source image first and then integrate all these decompositions to produce a composite representation. The fused image is reconstructed by performing an inverse pyramid transform. The fusion is implemented for each level of the pyramid using a feature selection decision mechanism, and several modes of combination, such as averaging or selection, can be used [54].
In the first step, a pyramid structure is constructed for each source image. The combination process then selects the most salient component pattern from the source images, copies it to the composite pyramid, and discards the less salient pattern.
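A minimal sketch of this scheme (assuming MATLAB's Image Processing Toolbox, two co-registered grayscale inputs with hypothetical file names, two pyramid levels, averaging of the coarsest approximation and maximum-magnitude selection of the detail levels) is:

    % Two-level Laplacian-pyramid fusion sketch with a simple selection rule.
    A = double(imread('image1.png'));  B = double(imread('image2.png'));
    levels = 2;
    gA = {A}; gB = {B};
    for k = 1:levels                                   % Gaussian pyramids: blur then downsample
        gA{k+1} = imresize(imgaussfilt(gA{k}, 1), 0.5);
        gB{k+1} = imresize(imgaussfilt(gB{k}, 1), 0.5);
    end
    lA = cell(1, levels); lB = cell(1, levels);
    for k = 1:levels                                   % Laplacian level = level minus expanded next level
        lA{k} = gA{k} - imresize(gA{k+1}, size(gA{k}));
        lB{k} = gB{k} - imresize(gB{k+1}, size(gB{k}));
    end
    F = (gA{levels+1} + gB{levels+1}) / 2;             % average the coarsest approximation
    for k = levels:-1:1                                % keep the stronger detail, then reconstruct
        d = lA{k};  pick = abs(lB{k}) > abs(lA{k});
        tmp = lB{k};  d(pick) = tmp(pick);
        F = imresize(F, size(d)) + d;
    end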
Wencheng Wang [53] presented an efficient algorithm for image fusion based on the Laplacian pyramid. The method consists of three steps. First, the Laplacian pyramids of the source images are constructed separately, and then each level of the new Laplacian pyramid is fused by adopting different fusion rules: at the top level the maximum region information rule is adopted, and at the remaining levels the maximum region energy rule is adopted. Finally, the fused image is obtained by the inverse Laplacian pyramid transform.
Chhamman Sahu and Raj Kumar Sahu [55] proposed image fusion based on pyramid transform techniques. In this approach the images are decomposed into smaller images at each level using different filters (the Laplacian pyramid). The images at each pyramid level are fused and expanded to the next level, and finally the small images at each pyramid level of both input images are combined to obtain the fused image.
2.3.4.2 Morphological Pyramid

In the Morphological Pyramid the input images are captured at different scales; any level L is created by applying morphological filtering with a 3x3 structuring element to the image at level (L-1), followed by down-sampling the filtered image by a factor of 2 [56].
2.3.4.3 Gradient Pyramid

The fusion procedure in the Gradient Pyramid is similar to that of the Laplacian Pyramid method. Based on the Gaussian pyramid decomposition, it uses four gradient operators to filter in the horizontal, vertical and two diagonal directions. In this way, the edge information extracted from the source images better preserves the characteristic details. The fused image has better definition and contains adequate effective information [57].


The pyramid transform based fusion methods mainly suffer from blocking effects in the regions where the input images are different. They have a poor signal to noise ratio and do not provide any directional information. Therefore the wavelet transform has been used for image fusion.
Burt and Kolczynski [58] implemented gradient pyramid based image fusion. In the gradient pyramid approach, the image is separated into sub-bands according to direction as well as scale. The gradient pyramid is derived from the filter-subtract-decimate Laplacian pyramid by applying four directionally sensitive filters. They reported that, when applied at all levels of scale, each filter removes all the information that does not fall within its well defined orientation.
Geetha G, Raja Mohammad S and Murthy Y S S R [59] proposed a method which combines the multiresolution transform and a local phase coherence measure to measure the sharpness in the images. Initially, all the images must be of the same size, otherwise the fusion process is rejected. The images are decomposed into multiple resolutions at different scales using the multiresolution transform. The image fusion is done subband by subband by applying the bilateral gradient, and the image is reconstructed from the resultant subbands to obtain the fused image.


As noted above, the pyramid transform based fusion methods suffer from blocking effects in the regions where the input images are different, have a poor signal to noise ratio and do not provide any directional information. The discrete cosine transform can be used to overcome these problems.
2.3.5 Discrete Cosine Transform Technique

Discrete Cosine Transform (DCT) based image fusion is processed in the frequency domain. The DCT method was introduced to solve the problems created by averaging and pyramid based image fusion.
Rao [60] carried out an image fusion method based on an average measure in the DCT domain. A new version of direct DCT image fusion is introduced by taking the average of the DCT representations of all the input images. This image fusion method is referred to as the combined DCT and simple average, or improved DCT, technique.
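A minimal sketch of this averaging idea (assuming MATLAB's Image Processing Toolbox and two co-registered grayscale inputs with hypothetical file names) is:

    % Direct DCT fusion: average the DCT representations and invert.
    I1 = double(imread('image1.png'));
    I2 = double(imread('image2.png'));
    F  = idct2((dct2(I1) + dct2(I2)) / 2);      % average in the DCT domain, then inverse DCT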
Naidu V P S [61] implemented six different image fusion algorithms based on the Discrete Cosine Transform (DCT) and evaluated their performance. These algorithms are DCTe, DCTch, DCTcm, DCTah, DCTma and DCTav. If the image size or block size is less than 8x8, the fusion performance is not good. Among these algorithms, the DCTe and DCTmx image fusion algorithms perform well and are suitable for real time applications.
Naidu V P S [62] implemented block DCT based image fusion techniques, with five image fusion architectures based on block DCT: Feature DCT (FDCT), Resizing DCT (RDCT), Wavelet Structure DCT (WSDCT), Sub-band DCT (SDCT) and Morphological DCT (MpDCT). It was concluded that WSDCT based image fusion performs well.
DCT based image fusion produces better results than the pyramid techniques, but it can still suffer from low quality, low PSNR and high mean square error.
Jagdeep Singh and Vijay Kumar Banga [63] proposed image fusion using both PCA and DCT. Initially, principal component analysis was applied to both images individually, and then the principal components were fused using the DCT technique. Next, histogram equalization was applied for better clarity of the fused image. This combination of the PCA and DCT techniques provided better results.
It is very difficult to completely decorrelate the blocks at their boundaries using the DCT. Only the spatial correlation of the pixels within a single 2-D block is considered, and the correlations with pixels of the neighboring blocks are neglected. At very low bit rates or high compression ratios, unwanted blocking artifacts affect the reconstructed images. These are some of the limitations of DCT based image fusion. To overcome these limitations, the DWT has been used for image fusion [64].
2.3.6 Discrete Wavelet Transform (DWT)

Wavelet theory is an extension of Fourier theory in many respects and was introduced as an alternative to the Short-Time Fourier Transform (STFT). In the Fourier transform, the signal is decomposed into sines and cosines, whereas in the wavelet transform the signal is projected onto a set of wavelet functions [65]. The Fourier transform provides good resolution in the frequency domain, but it is not suitable for non-stationary signals whose frequency response varies in time.
2.3.6.1 Discrete Wavelet Transform overview

The Discrete Wavelet Transform (DWT) is the most commonly used wavelet transform for image fusion and is associated with Mallat's algorithm [66]. Stephane G. Mallat provided the theory and mathematical model for multiresolution signal decomposition, the computation and interpretation of the concept of multiresolution representation, explained how to extract the difference of information between successive resolutions, and defined the new representation as the wavelet representation. One of the main advantages of wavelets is that they offer simultaneous localization in the time and frequency domains. Wavelets also have the great advantage of being able to separate the approximation details from the orientation details. Although wavelet theory was introduced in the 1980s, it has been used extensively in image processing.
Basically, wavelet transform based image fusion has three steps. The first step is the decomposition of both registered images. The second step is the combination of the transform coefficients. The third step is the reconstruction of the fused image from the combined transform coefficients.
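A minimal sketch of these three steps (assuming MATLAB's Wavelet Toolbox, two co-registered grayscale inputs with hypothetical file names, two decomposition levels, averaging of the approximation coefficients and maximum-magnitude selection of the detail coefficients) is:

    % Three-step wavelet fusion: decompose, combine coefficients, reconstruct.
    A = double(imread('image1.png'));
    B = double(imread('image2.png'));

    [C1, S] = wavedec2(A, 2, 'db3');             % step 1: two-level decomposition of both images
    [C2, ~] = wavedec2(B, 2, 'db3');

    nApp = prod(S(1,:));                         % approximation coefficients come first in C
    C = C1;
    C(1:nApp) = (C1(1:nApp) + C2(1:nApp)) / 2;   % step 2a: average the approximations
    idx = nApp+1:numel(C1);                      % step 2b: keep the larger-magnitude details
    d1 = C1(idx);  d2 = C2(idx);
    pick = abs(d2) > abs(d1);
    d1(pick) = d2(pick);
    C(idx) = d1;

    F = waverec2(C, S, 'db3');                   % step 3: reconstruct the fused image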


Stephane G Mallat [66] provided a theory for multiresolution decomposition in image and signal analysis. He proved that different resolutions 2^(j+1) and 2^j provide different information, and explained that wavelet analysis amounts to convolving the image with quadrature mirror filters followed by subsampling. It represents the image at different resolutions with different spatial orientations.
Jorge Nunez et al. [67] proposed a technique for the fusion of panchromatic and multispectral images based on multiresolution wavelet decomposition. The method adds the panchromatic image wavelet coefficients to the multispectral image wavelet coefficients. They concluded that this technique is better than IHS and preserves both spectral and spatial information.
Hui Li et al. [68] proposed a wavelet based image fusion technique. In this algorithm both images are first decomposed, then an area based maximum selection rule with a consistency verification map is applied as the fusion rule, and finally the inverse wavelet transform is applied to obtain the fused image. The algorithm is suitable for optical to SAR and visible to infrared fusion, and it can be used in a straightforward way to fuse more than two multi-sensor images.
Vadher Jagruti [69] implemented image fusion based on the discrete wavelet transform. In this method both images were decomposed, the approximation coefficients of image 1 and image 2 were averaged, and the maximum coefficients of the orientation (detail) subbands of image 1 and image 2 were selected. Finally, the inverse wavelet transform was applied to the new approximation and orientation details to obtain the fused image.
Gonzalo Pajares and Jesus Manuel de la Cruz [70] reported on image fusion to combine information from multiple images of the same scene based on wavelet decomposition. The images can be fused at the same or at different resolution levels. They concluded that the wavelet based methods can achieve results similar to the classical methods.
Shaoqing Yang et al. [71] proposed a color image fusion algorithm using IHS and the discrete wavelet frame transform. The algorithm takes the simple average of the wavelet coefficients of both images. They verified the results by computer simulation, and the approach was successfully applied to combine two color images from different electro-optical trackers.
Yong Yang [72] proposed an image fusion method based on the wavelet transform using a new technique for coefficient selection. Initially, both images are decomposed into low frequency and high frequency bands. The low frequency bands of both decompositions are then combined by an edge based scheme and the high frequency bands by a variance based scheme. Finally, the fused image is constructed by performing the inverse wavelet transform on the combined coefficients from all frequency bands.


Lavanya et al. [73] discussed two image fusion methods: Principal Component Analysis and a wavelet combined Intensity Hue Saturation. They first implemented the IHS method, then the wavelet fused method based on substitution, and finally combined the fused images of the IHS and wavelet methods. Principal Component Analysis transformations of the remotely sensed lunar image are used to extract features accurately. They compared these transforms and concluded that the PCA combined transform gives better results than the other techniques.
Yue Jin et al. [74] implemented pixel level fusion techniques using Principal Component Analysis (PCA), the wavelet transform and a combined approach of PCA and wavelets for SAR images at different polarizations. They observed that the combined approach is more effective than the individual approaches, concluding that the combined approach greatly enhances the spatial detail information of the fused image and suppresses speckle noise well.
Joshing and Chao [75] reported the use of a wavelet packet based method to decompose the images. They introduced a fusion rule based on the high and low frequency parts: a manually determined threshold and weight for the low frequency parts, and a high pass filtering fusion rule for the high frequency parts.
Pushkar S Pradhan [76] reported the difficulties in fusing a low resolution Multispectral (MS) image with a high resolution Panchromatic (PAN) image. He expressed that too few decomposition levels result in fused images of poor spatial quality when the shift invariant wavelet transform is used, while increasing the number of decomposition levels results in high computational complexity. He concluded that the number of decomposition levels should be chosen based on the application.
Deepak Kumar Sahu and Parsai M P [77] provided a critical review of different image fusion techniques. They compared the spatial domain techniques (average, maximum selection and minimum selection), the discrete wavelet transform and Principal Component Analysis (PCA). They concluded that combining the spatial domain fused image (PCA) with the DWT based fused image provides both high spectral quality and high spatial resolution content.
Zhang Bin et al. [78] developed a new image fusion technique based on the lifting wavelet transform. The lifting wavelet transform is a very fast image fusion approach and has several unique advantages compared to conventional wavelet transforms. The new fusion rule is based on regional feature selection, and fusion simulation experiments were carried out with different types of source images. They concluded that this method can be used as a fast fusion process for multiple source images.
Ming Li et al. [79] proposed a novel image fusion algorithm based on the lifting wavelet transform and fractal dimension theory, fusing visual and near infrared images of the same object from an agricultural picking machine vision system. They observed that their proposed fusion algorithm provided more effective fused image quality than traditional image fusion algorithms based on the wavelet transform.
Observing the performance of all the image fusion techniques, the DWT gives efficient results. Due to its orthogonality, the DWT technique has been chosen in the present investigation for compression and decompression in the FPGA implementation of the image fusion technique.
2.4  RECONFIGURABLE HARDWARE
Sun YingLi [80] implemented image fusion technology on an FPGA, which provides an ideal platform for the hardware implementation of image or data processing algorithms. He explains that FPGA devices are highly integrated, small and application-specific programmable devices that allow designers to take advantage of computer-based development platforms for design entry, simulation, testing and validation to achieve the desired results, reducing the development cycle. He implemented image fusion using IHS, the wavelet transform, and an integration of both IHS and the wavelet transform. He also observed that the lifting scheme wavelet transform is simple, fast and has good compatibility. He completed the hardware structure design in the Verilog hardware description language on Altera's FPGA development software Quartus II 5.0, covering module design, simulation and implementation, and concluded that the design achieves good image fusion.
Ibrahim Melih Olova [81], in his thesis, proposed a modified 2D Discrete Cosine Transform based electro-optic and IR image fusion algorithm and implemented it on an Altera Stratix III family FPGA. The algorithm was also compared with an image fusion software application GUI developed in MATLAB. The proposed algorithm takes corresponding 4x4 pixel blocks of the two images to be fused and transforms them by means of the 2D Discrete Cosine Transform. The L2 norm of each block is then calculated and used as the weighting factor for the AC values of the fused image block, while the DC value of the fused block is the arithmetic mean of the DC coefficients of both input blocks. Based on this mechanism, the two images are processed so that the output image is a composition of the processed 4x4 blocks. He concluded that the algorithm performs well compared to other state of the art image fusion algorithms in both subjective and objective quality evaluations.
Stephan Blokzyl, Matthias Vodel and Wolfram Hardt [82] proposed a novel concept for hardware-accelerated computation of high-resolution electro-optic sensor data using FPGAs. They introduced two data processing approaches that utilize specific FPGA capabilities, data and task parallelization, used separately and combined, for converting sequential image processing chains into parallelized hardware designs. They observed that for the processing of huge amounts of high-resolution image data with a strongly software-based character, a significant speedup can be reached by hardware implementation alone.
Adhyana Gupta [10] discussed, for one particular application, how hardware-software co-simulation can be used with the Xilinx System Generator (XSG), which provides a Simulink blockset for several hardware operations, and implemented it on various Xilinx Field Programmable Gate Arrays (FPGAs). The method described was object feature identification and detection. The Xilinx System Generator provides blocks to transfer data from the software side of the simulation environment (MATLAB Simulink) to the hardware side (System Generator). It was also mentioned that the Xilinx System Generator, embedded in MATLAB Simulink, is used to program the model, which is then tested on the FPGA board using the hardware co-simulation tools.

Abhijeet Nimbalkar and Sathyanarayana R [83] reported that the image fusion algorithm based on the Discrete Wavelet Transform, a multi-resolution analysis image fusion method developed in the recent decade, is fast and has good time-frequency characteristics. They proposed that the method could extract useful information from the source images into the fused images so that clear images are obtained. The hardware implementation of the image fusion system was carried out using MATLAB and Xilinx tools. They concluded that the hardware realization, which is based on FPGA technology, provides a fast, compact and low-power solution for image fusion.
Aniket Burkule and Borole P B [12] reported that most image processing algorithms are time consuming due to the large number of calculations involved. Hence, it is desirable to use a high speed system, and the most effective solution is to use an FPGA. A single FPGA can perform a single image processing algorithm at a time; to make it multitasking, the option of reconfigurability can be used. They concluded that edge detection in software is not a tough job, but when implemented in hardware many challenges are faced; for example, the complete VHDL or Verilog code becomes very bulky (about 5000 lines). To shrink the design effort they used the Xilinx System Generator for hardware-software co-simulation, with which the simulation speed increased; the approach also allows an easy path to an ASIC prototype, and the design was implemented on a Xilinx FPGA development kit.
Hanen Chenini, Jean Pierre Derutin, Romuald Aufrere and Roland Chapuis [84] reported on providing a software abstraction to enable quick implementation of complex image processing applications on a Field Programmable Gate Array (FPGA) platform. The design of a homogeneous network of communicating processors is presented, from the hardware and software specification down to a synthesizable hardware description. They concluded that the design balanced the computation requirements and provided enough computational performance to ensure real time processing of complex image/video processing algorithms.
Gribbon K T, Bailey D G and Bainbridge-Smith A [85] reported that Field Programmable Gate Arrays (FPGAs) offer many performance benefits for executing image processing applications. Algorithms for various image processing applications are mapped to the FPGA; mapping an algorithm requires building and utilizing FPGA specific hardware, which is a fundamentally different approach from the design of software for the fixed architectures of conventional processors.
Takashi Saegusa, Tsutomu Maruyama and Yoshiki Yamaguchi [86] reported that in image processing FPGAs have shown very high performance in spite of their low operating frequency. This high performance comes from the high parallelism in the applications, the high ratio of 8 bit operations in image processing, and the large number of internal memory banks on FPGAs which can be accessed in parallel. They concluded that the performance of FPGAs is limited by the size of the FPGA and the memory bandwidth (image data are too large to store on FPGAs). The performance of FPGAs can be improved by dividing an image into sub-images and processing them in parallel, in the same way as multi-threaded execution on a microprocessor, if the memory throughput allows it.
Mohamed M A and El-Den R M [87] reported that image fusion is a process which combines the data from two or more source images of the same scene to generate one single image containing more precise details of the scene than any of the source images. Among the many image fusion methods are averaging, principal component analysis, various types of pyramid transforms, the Discrete Cosine Transform and the Discrete Wavelet Transform. They compared the different techniques to determine the best approach and implemented the best technique using Field Programmable Gate Arrays (FPGA).
Khasim Hussain D, Laxmikanth Reddy C and Ashok Kumar V [88] reported that the wavelet transform has good time-frequency characteristics. The method could extract useful information from the source images into the fused images so that clear images are obtained. The selection of the low and high frequency coefficients is done according to the different frequency subbands of the wavelet transform: in choosing the low frequency coefficients, the local area variance was chosen as the measuring criterion, and in choosing the high frequency coefficients, the window property and local characteristics of the pixels are analyzed. The approach was applied successfully in the image processing field. They further felt that its excellent one-dimensional characteristics cannot be extended simply to two or more dimensions.
Steffen Klupsch, Markus Ernst, Sorin A Huss, Rumpf M and Strzodka R [89], reported on speeding up image processing methods on 2D and 3D images using FPGA technology. They developed level set methods within a workflow that allows the mathematical methods to be exchanged easily. They reported that the FPGA implementation profits from the high parallelism in the algorithm and from the moderate number precision required to preserve the qualitative effects of the mathematical models.

Sambashivudu K, Javeed Md and Kiran R [13], discussed that the high computational complexity present in image processing applications requires reconfigurable hardware devices in the form of Field Programmable Gate Arrays (FPGAs) to obtain high performance in terms of speed at an economical price. They concluded that FPGA technology has become a viable target for the implementation of real time algorithms.
Madhumathi et al. [90], proposed a black box model for providing a system integration platform for the design of DSP FPGAs. This model allows RTL to be imported into Simulink and co-simulated with either ModelSim or the Xilinx ISE Simulator. They finally concluded that a hardware software co-synthesis platform with System Generator makes it possible to incorporate a design implemented on a Xilinx Spartan3E FPGA.
Suthar A C, Mohammed Vayada, Patel C B and Kulkarni G R [11], demonstrated a model based approach for image processing applications using MATLAB SIMULINK and Xilinx System Generator (XSG). These tools support software simulation along with the capability to synthesize the design on FPGA hardware, providing the parallelism, speed and robustness that are essential in image processing applications. They concluded that the Xilinx System Generator tool is a new application in image processing and offers a friendly design environment, because the processing units are designed with blocks.
Munawar Ali S and Naveen Kumar S [91], proposed an FPGA based design and implementation of an image processing architecture using Xilinx System Generator. Recent advances in synthesis tools for SIMULINK suggest a feasible high-level approach to algorithm implementation for embedded DSP systems and an FPGA based hardware design for enhancement of color and grey scale images in image and video processing. Xilinx System Generator is a very useful tool for developing computer vision based algorithms. They further concluded that to process an image in real time, an implementation on hardware which offers parallelism is needed to reduce the processing time significantly.
Elamaran V and Rajkumar G [92], discussed point processes, which use only the information in individual pixels to produce new images; arithmetic operations, XOR operations, histograms, contrast stretching and intensity transformations are implemented using Xilinx System Generator (XSG). XSG is a useful tool to understand fundamental Digital Signal Processing (DSP) algorithms for Field Programmable Gate Array (FPGA) implementation.

Chandrashekar et al. [14], analyzed image enhancement capabilities and properties, and also dealt with the hardware implementation of Infrared Image (IRI) enhancement of thermographic images. The Successive Mean Quantization Transform (SMQT) was used for the FPGA implementation and the results were compared with MATLAB experiments.
David C Zhang, Sek Chai and Gooitzen VanderWal [93], discussed that image fusion is an important visualization technique for integrating coherent spatial and temporal information into a compact form, with a Laplacian fusion process which combines regions of images from different sources into a single fused image based on a salience selection rule for each region. They also proposed an algorithmic approach using a mask pyramid to better localize the selection process, which operates at different scales of the image to improve the fused image quality beyond a global selection rule. A new embedded system architecture that builds upon the Acadia II Vision Processor is used for the hardware implementation. They concluded that the presented technique is not limited to pyramid fusion and may apply to any wavelet fusion.
Abhishek Acharya, Rajesh Mehra and Vikram Singh Takher [94], discussed an FPGA based hardware design for enhancement of color and gray scale images in image and video processing. The approach used is known as adaptive histogram equalization, which works very effectively for images captured under extremely dark or non-uniform lighting environments, where bright regions are kept unaffected while dark objects in bright backgrounds are enhanced. The paper shows that reconfigurable FPGAs meet both the real time and the parallel computing expectations for the enhancement process in images. XSG is a very useful tool for developing computer vision algorithms and could be described as a timely, advantageous option for developing in a much more comfortable way than that permitted by Hardware Description Languages (HDLs).
Devika S V, Khumuruddeen Sk and Alekya [95], explained that FPGAs are widely used in the implementation of real time algorithms suited to video and image processing applications. The FPGA implementation provides basic digital blocks with flexible interconnections to achieve realization of high speed digital hardware. The FPGA consists of a system of logic blocks such as gates or flip flops, LUTs and some amount of memory. The image is transferred from the PC to the FPGA board using Universal Asynchronous Receiver/Transmitter (UART) serial communication. They finally concluded that, in their research, hardware and software were combined to achieve accurate as well as considerably high performance, which is attributed to the parallel implementation that increases the speed.
Manan [96], explained the importance of digital image processing and the significance of its implementation on hardware to achieve better performance. The work addresses the implementation of image processing algorithms like the median filter, morphological operations, convolution, smoothing operations and edge detection on FPGA using the Very High Speed Integrated Circuit Hardware Description Language (VHDL).
Feng Qu, Bochao Liu, Jian Zhao and Qiang Sun [97], analyzed five-band image fusion and implemented it on an FPGA with multiple DSPs to solve the complex algorithm. Image acquisition, image registration, image fusion and display output can be done within the system by using the FPGA as the main processor and three DSPs as algorithm processors, which utilizes the high speed characteristics of the FPGA and takes full advantage of the powerful computing capability of the DSPs. To coordinate the asynchronous timing among the various modules and to improve the efficiency of data exchange, FIFOs generated in the FPGA are used to complete the five-band image data acquisition. The image fusion algorithm based on the multi-wavelet transform is then optimized and transplanted.
Johnston C T et al. [98], reported that FPGAs are able to
exploit spatial and temporal parallelism for implementation of image
processing algorithms. They presented some general techniques for
evaluating complex expressions to deal with resource constraints and
efficient mapping for three types of image processing operations.
Qian Weixian et al. [99], performed fusion of low level light and ultraviolet images, and also developed a hardware system for the same using an FPGA+SDRAM architecture. For the implementation of the fusion they used the advanced video processor TMS320C6711. They finally concluded that the developed system performs image fusion with a minimum amount of noise.
Anbumozhi S and Manoharan P S [100], proposed a study on an image fusion algorithm based on the wavelet transform and fuzzy reasoning. The edges in medical images are detected using a set of fuzzy rules. The hardware implementation of a fusion method used for medical diagnosis has been presented on FPGA. They mentioned that the hardware realization of the proposed fusion technique based on FPGA technology provides a fast, compact and low-power solution for medical image fusion.
Neha P Raut and Gokhale A V [101], reported that image processing applications can be implemented on FPGA using Xilinx System Generator (XSG) for MATLAB, a very efficient tool. Use of Xilinx System Generator for image processing applications effectively reduces complexity and also provides the additional feature of hardware co-simulation. They concluded that Xilinx System Generator is a versatile tool to perform software and hardware image processing tasks, and that hardware implementations of complex image processing techniques utilize minimum resources and minimum delay. They further felt that prototyping tools such as MATLAB Simulink and Xilinx System Generator are increasingly important in recent times for hardware implementation and time-to-market constraints.

2.5 MOTIVATION AND OBJECTIVE OF THE PROJECT

From the past research, it is observed that wavelet based multiresolution multisensor image fusion algorithms have already been developed in software (MATLAB, ERDAS etc). Coming to the hardware implementation, it is noticed that the majority of the reported image processing works process images from a single sensor and implement the corresponding algorithms on FPGA. Only a few works have been reported in the area of hardware implementation of image fusion algorithms: A M El Ejaily et al. reported single sensor image fusion using independent component analysis with the help of a genetic algorithm, and Feng Qu et al. carried out single sensor five-band image fusion with a resolution of 1392 x 1040 and successfully implemented it on FPGA. In the present work, a DWT based algorithm is implemented to fuse a three-band multispectral image and a single-band panchromatic image with different resolutions obtained from different sensors. It is a challenging task to implement this on reconfigurable hardware and very few works have been reported in this area; this forms the motivation for the current research.
Most of the earth observation satellites such as SPOT, Landsat 7, IKONOS and QuickBird record image data in two different modes viz., a low resolution multispectral mode and a high resolution panchromatic mode, whereas the satellites IRS-P6 and Cartosat-2 record the multispectral and the panchromatic data respectively, at different resolutions. As each of these images alone will not provide the required information due to the physical limitations of the sensors, image fusion should be done to get a better result. Generally, the software algorithms consume more time due to the large amount of calculations. Hence, the objective of this investigation is mainly to implement a high speed Discrete Wavelet Transform (DWT) based satellite image fusion algorithm on reconfigurable hardware.

In this study, Multispectral (MS) bands having 5.8m spatial resolution captured by IRS-P6 and a Panchromatic (PAN) image having 1m spatial resolution captured by Cartosat-2 are used. These satellite images are obtained from the National Remote Sensing Centre (NRSC), Hyderabad.

CHAPTER 3

IMAGE PREPROCESSING

CHAPTER 3: IMAGE PREPROCESSING

                                                      Page No
3.1   Introduction                                       55
3.2   Satellite Information                              55
3.3   Image Registration                                 58
3.4   Image Resampling                                   60
      3.4.1 Nearest Neighbor                             61
      3.4.2 Bilinear Interpolation                       61
      3.4.3 Bicubic Interpolation                        62
3.5   Peak Signal to Noise Ratio                         63
3.6   Results and Discussion                             64

3. IMAGE PREPROCESSING

3.1 INTRODUCTION

Image preprocessing is a preliminary phase of image processing to improve the quality of an image by correcting undesirable degradation, distortion and system noise [102]. This preprocessing includes registration and resampling of the multisensor images.

In this study, Multispectral (MS) bands having 5.8m spatial resolution captured by IRS-P6 (Date of Pass 26-FEB-2014) and a Panchromatic (PAN) image having 1m spatial resolution captured by Cartosat-2 (Date of Pass 07-FEB-14) are used. A comparative study of resampling techniques has been performed to identify the better technique and to implement the fusion using that technique.
3.2 SATELLITE INFORMATION

This section describes the IRS-P6 and Cartosat-2 satellite information. The PSLV-C5 launch vehicle placed the satellite RESOURCESAT-1 into a polar sun synchronous orbit at an altitude of 817 km on 17th October, 2003, with a design life span of 5 years. RESOURCESAT-1 is also called IRS-P6 [103]. It is the tenth satellite of the IRS series. The satellite IRS-P6 can simultaneously acquire Multispectral (MS) data in three different spatial resolutions of 23.5m, 5.8m and 56m from three sensors viz., the medium resolution Linear Imaging Self-Scanner (LISS-III), the high resolution Linear Imaging Self-Scanner (LISS-IV) and the Advanced Wide Field Sensor (AWiFS) [104].

These three sensors operate on push broom scanning using linear arrays of Charge Coupled Devices (CCDs). Onboard the satellite, the image data is digitized into 10 bits, but only 7 bits are transmitted to the ground. The selection of which 7 bits to transfer from the 10 bit signal is performed by the satellite operator during collection tasking. The satellite has a data rate of 105 Mbps, a weight of 169.5 kg, a repeat interval of 5 days and a swath width of 23.9 km [105]. The IRS-P6 LISS-IV sensor multispectral mode band details are illustrated in Table 3.1.
Table 3.1 IRS-P6, LISS-IV sensor details

Band        Spectral band     Resolution     Repeat interval   No. of bits
2 (Green)   0.52 - 0.59 µm    5.8 x 5.8 m    5 days            7
3 (Red)     0.62 - 0.68 µm    5.8 x 5.8 m    5 days            7
4 (NIR)     0.77 - 0.86 µm    5.8 x 5.8 m    5 days            7
Cartosat-2 is the second satellite of the Cartosat series. The Polar Satellite Launch Vehicle (PSLV) placed the Earth observation satellite Cartosat-2 into a sun synchronous orbit on 10th January, 2007. This satellite was built, launched and maintained by the Indian Space Research Organization (ISRO).

Cartosat-2 carries a PAN camera that captures black and white pictures of the earth. The swath width covered by this PAN camera is 9.6 km with less than 1 metre spatial resolution. The satellite image data is used for cartographic, Land Information System (LIS) and Geographical Information System (GIS) applications. The Cartosat-2 satellite panchromatic mode details are illustrated in Table 3.2.

Table 3.2 Cartosat-2 sensor details

Band   Spectral band    Resolution   Repeat interval   No. of bits
PAN    0.50 - 0.85 µm   1 x 1 m      4 days            10

The MS and PAN images used in this study are shown in Figure 3.1.

(a) MS image

(b) PAN image

Figure 3.1 MS and PAN images

3.3 IMAGE REGISTRATION

Image registration plays a vital role in remote sensing applications. Image registration refers to the primary task in image processing of matching two or more images which have been taken of the same object or scene from different viewpoints, from different sensors or at different times [106].

The increased importance of image registration in remote sensing arises because remote sensing is currently moving towards many important social and scientific applications [107]. These applications include the management of natural disasters and natural resources, assessment of climate changes and the preservation of the environment. Furthermore, there is an increasing availability of images with different characteristics, increased flexibility of time, shorter revisiting time of satellites and the evolution of sensor technologies. Therefore, a growing need emerges to simultaneously process different data from the remote sensing images for information extraction and data fusion. This includes the comparison of newly obtained images with previous images taken with different sensors. The remote sensing images can, therefore, be multisource (obtained from multiple sensors), multitemporal (taken at different dates), multimode (obtained with different acquisition modalities), or stereo images (taken from various viewpoints).
Initially, the different images are in different coordinate systems.
The image registration process spatially aligns them by considering


one of the images as a reference and transforming the images one at a


time. Hence, a selection of corresponding structure/elements in the
reference image and in each of the other images is necessary in order
to determine an appropriate transformation. Once the registration
process is completed, the images can be processed for information
extraction.
The registration can be done either manually or automatically. Many methods have been proposed for image registration. They can be classified into two categories: area-based methods and feature-based methods [108].

In this study, a feature-based method has been adopted to extract and match the common features of the two images. ERDAS IMAGINE software has been used to obtain the registered images as shown in Figure 3.2.

Figure 3.2 Registered area in MS and PAN images


Figure 3.3 shows the registered input images that are used in this
study.

Figure 3.3 Registered Input images


3.4 IMAGE RESAMPLING

Image fusion takes place only when the spatial resolutions of the input images are the same. Image resampling is necessary to obtain the required spatial resolution of an image [109]. Image resampling is the mathematical technique used to alter the scale of an image. In image resampling, interpolation and sampling are often combined so that the image is interpolated only at those pixels which need to be predicted. Simply increasing or decreasing the image size by resampling cannot increase the resolution of the image or the information in the image. In the resampling process, the image quality depends highly on the interpolation technique used. The popularly known basic interpolation techniques are nearest neighbor, bilinear and bicubic [37]. For better understanding, these techniques are described in this section on a one-dimensional basis; it is well known that they can also be implemented on a two-dimensional basis.
3.4.1 Nearest Neighbor

From a computational standpoint, nearest neighbor is the simplest interpolation technique: each interpolated output pixel is assigned the value of the nearest sample point in the input image. This procedure is fast and does not introduce any artificial data into the final result. This technique is also known as the point shift algorithm.

For large-scale changes, the nearest neighbor interpolation technique produces images with a blocky appearance. In addition, shift errors of up to one-half pixel are possible. Hence, this technique is inappropriate when sub-pixel accuracy is required.
3.4.2 Bilinear Interpolation

Bilinear interpolation computes new pixels using linear interpolation. This interpolation method operates on the 2 by 2 cell of pixels surrounding each new pixel location. The resulting images are much smoother and retain better positional accuracy than those produced by the nearest neighbor interpolation method [110].

This method determines the grey level value from the weighted average of the four closest pixels to the specified input coordinates and assigns that value to the output coordinates. Initially, two linear interpolations are performed in the horizontal direction and then one more linear interpolation is performed in the perpendicular direction.

Bilinear interpolation offers improved image quality when compared to nearest neighbor. It is the most widely used interpolation technique for reconstruction as it produces reasonably good results at moderate cost.
3.4.3 Bicubic Interpolation

Bicubic interpolation determines the grey level value from the sixteen closest pixels to the specified input coordinates and assigns that value to the output coordinates [111].

The output image is slightly sharper than that produced by bilinear interpolation and it does not have the disjointed appearance produced by nearest neighbor interpolation [112].

In bicubic interpolation, initially four one-dimensional cubic interpolations are performed in the horizontal direction and then one more one-dimensional cubic convolution is performed in the perpendicular direction. This means that a bicubic interpolation requires five cubic interpolations. Bicubic interpolation calculations are often done using matrix techniques, as they can otherwise be hard to follow. The two-dimensional cubic interpolation kernel is illustrated in Equation 3.1.

p(x, y) = \sum_{i=0}^{3} \sum_{j=0}^{3} a_{ij}\, x^{i} y^{j}                (3.1)

To evaluate the performance of the above resampling techniques, the MS bands viz., Band2 (green), Band3 (red) and Band4 (near infrared) are tested. For the purpose of evaluation, the original 256x256 image of each band is scaled down to a size of 128x128. Then, using the three interpolation techniques, the scaled-down images are resampled back to the original size.
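As an illustration of this evaluation procedure (a minimal MATLAB sketch assuming the Image Processing Toolbox is available; the file name 'band2.tif' and the variable names are only placeholders), one band can be scaled down and resampled back with the three interpolation kernels as follows:

    % Illustrative sketch of the resampling evaluation (placeholder names)
    band  = im2double(imread('band2.tif'));        % original 256x256 MS band (assumed)
    small = imresize(band, [128 128], 'bicubic');  % scale down to half size

    % Resample back to the original size with the three interpolation techniques
    nn  = imresize(small, [256 256], 'nearest');   % nearest neighbor
    bil = imresize(small, [256 256], 'bilinear');  % bilinear
    bic = imresize(small, [256 256], 'bicubic');   % bicubic

The resampled images are then compared with the original band using the PSNR measure described in Section 3.5.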
3.5 PEAK SIGNAL TO NOISE RATIO

The Peak Signal to Noise Ratio (PSNR) is the ratio of the maximum available power of a signal to the power of the noise present in the signal. For better performance, the PSNR should be high. The PSNR is calculated using Equation 3.2.

PSNR (dB) = 20 \log_{10} \left( \dfrac{Max_{I}}{\sqrt{\dfrac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( F(i,j) - M(i,j) \right)^{2}}} \right)                (3.2)

where m and n represent the size of the fused image, M(i,j) is the pixel value of the MS image and F(i,j) is the pixel value of the fused image at location (i,j), and Max_{I} represents the maximum image pixel value.

A higher peak signal to noise ratio normally indicates a higher quality of the output image. Another way of representing the PSNR is as a logarithmic function of the Mean Square Error (MSE):

PSNR = 20 \log_{10} \left( \dfrac{Max_{I}}{\sqrt{MSE}} \right)
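A direct MATLAB implementation of Equation 3.2 is sketched below; ref and test stand for the reference band and the resampled (or fused) image, both assumed to be of the same size. Values of the kind listed in Table 3.3 are obtained by calling, for example, psnr_db(band, bic).

    function val = psnr_db(ref, test)
    % PSNR_DB  Peak Signal to Noise Ratio (in dB) between two equally sized images.
        ref  = double(ref);
        test = double(test);
        maxI = max(ref(:));                    % maximum image pixel value
        mse  = mean((test(:) - ref(:)).^2);    % mean square error
        val  = 20 * log10(maxI / sqrt(mse));   % Equation 3.2
    end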

3.6 RESULTS AND DISCUSSION

The original and resampled images are shown in Figures 3.4 to 3.6. After interpolation, the PSNR values between the interpolated images and the standard test images are calculated and presented in Table 3.3.

Figure 3.4 Original and resampled images of Band2 (nearest neighbor, bilinear and bicubic)

Figure 3.5 Original and resampled images of Band3 (nearest neighbor, bilinear and bicubic)

Figure 3.6 Original and resampled images of Band4 (nearest neighbor, bilinear and bicubic)

Table 3.3 PSNR (dB) Comparison

Test Image \ Method   Nearest Neighbor   Bilinear   Bicubic
Band2                 22.8916            25.5712    27.1940
Band3                 28.0435            31.3259    33.71072
Band4                 22.8658            22.8658    27.4211

From Table 3.3, it is observed that bicubic interpolation gives higher PSNR values for all bands compared to the bilinear and nearest neighbor interpolation techniques. Hence, in this investigation, the bicubic interpolation technique has been used.

In general, for images from the same sensor, the proportion between the PAN and MS image resolutions is a power of two. In this study, a PAN image with 1m resolution and an MS image with 5.8m resolution are used. As required by the wavelet transform based fusion, the proportion between the PAN and MS image resolutions must be a power of two. Therefore, resampling of the MS image is essential before fusing it with the PAN image, so that the actual resolutions of both images are maintained after the fusion process.

In this study, resampling of the MS image is done in two approaches. In the first approach, the 5.8m resolution MS image has been upsampled to the PAN image resolution (1m). But, because of the large extent of this resampling, it is observed that the quality of the image is degraded; it is to be noted that resampling over a large extent degrades the quality and structure of the image. For the analysis purpose, this first approach has been used in the averaging and substitution fusion rules. Keeping in view the image quality and the wavelet transform, a second approach has been performed to upsample the MS image resolution from 5.8m to 4m (the nearest power of 2 with respect to the 1m PAN resolution). The second approach has been used in the additive fusion rule. A detailed analysis has been carried out in Chapter 4 using these fusion rules for the choice of appropriate wavelet filters like Haar, db3 and CDF 9/7.

CHAPTER 4

DWT BASED IMAGE FUSION

CHAPTER 4: DWT BASED IMAGE FUSION

                                                         Page No
4.1   Introduction                                          71
4.2   Wavelet Transform                                     72
      4.2.1 Multi-resolution Analysis                       73
      4.2.2 Fundamentals of Wavelet Transform               73
      4.2.3 Types of Wavelet Transform                      74
            4.2.3.1 Continuous Wavelet Transform            75
            4.2.3.2 Discrete Wavelet Transform              75
      4.2.4 Wavelet Families                                76
            4.2.4.1 Haar Wavelet                            76
            4.2.4.2 Daubechies Wavelet                      78
            4.2.4.3 Cohen-Daubechies-Feauveau Wavelet       80
4.3   DWT Based Image Fusion                                82
4.4   Wavelet Based Image Fusion Rules                      84
      4.4.1 Wavelet Averaging Method                        84
      4.4.2 Wavelet Additive Method                         86
      4.4.3 Wavelet Substitutive Method                     87
4.5   Correlation Coefficient (CC)                          90
4.6   Results and Discussion                                90
      4.6.1 Wavelet Averaging Method                        92
      4.6.2 Wavelet Additive Method                         94
      4.6.3 Wavelet Substitutive Method                     96

4. DWT BASED IMAGE FUSION


This chapter presents the design and modeling of the fusion of multisensor images using 2-D DWT fusion rules like averaging, additive and substitutive with Haar, Daubechies and CDF 9/7 filters using MATLAB software. The results of all fusion rules for the input images are presented, and the Peak Signal-to-Noise Ratio (PSNR) and Correlation Coefficient (CC) values of the fused images for the different fusion rules are tabulated.
4.1 INTRODUCTION

In remote sensing, images are characterized by their different resolutions. Spectral resolution refers to the bandwidth, whereas the smallest feature separation in the scene is referred to as the spatial resolution. Remote sensing images have either a high spatial resolution and low spectral resolution or a low spatial and high spectral resolution due to the limitations of satellite sensors. A large Instantaneous Field Of View (IFOV) reduces the spatial resolution, while collecting with a larger bandwidth reduces the spectral resolution. There are several situations that simultaneously require high spatial and high spectral resolution in a single image. This is particularly important in remote sensing applications like distinguishing different objects in the same scene, enhancing visual interpretation, mapping of land use and extracting urban features like buildings and roads. Image fusion is also an effective tool for urban mapping.

The standard data fusion methods may not be satisfactory for merging a high resolution panchromatic image and a low resolution multispectral image because they can distort the spectral characteristics of the multispectral data. DWT based image fusion techniques are widely adopted as they provide better fusion results. Due to this advantage, in this study the 2-D DWT has been adopted for the design of the image fusion [114]. In all the fusion rules, Haar, Daubechies 3 (db3) and Cohen-Daubechies-Feauveau (CDF) 9/7 filters [96] are used for image decomposition [115].
4.2 WAVELET TRANSFORM

The aim of an image transform is to pack as much information as possible into the smallest number of coefficients. An image transform is needed to convert the data into a form where compression is easier, which facilitates the reduction of redundant and irrelevant information. Fast data computation is also possible in the transform domain.

A wavelet is a finite energy, limited duration signal which is referred to as a basis function. Each basis function represents a small wave. Representing an image in terms of such basis functions is called the wavelet transform. It gives both time resolution and frequency resolution [116]. The wavelet transform provides multi-resolution analysis for a given signal or image.

4.2.1 Multi-resolution Analysis

Representation of signals or images at different resolutions is called multi-resolution. For better interpretation, large objects or high contrast images require low resolution, whereas small objects or low contrast images require high resolution. This can be achieved by the multi-resolution approach. Different resolutions of signals or images are achieved by filtering and sub-sampling operations; sub-sampling means reducing the sampling rate, i.e., removing some of the samples. This multi-resolution concept is very useful in wavelet analysis in image processing.
4.2.2 Fundamentals of Wavelet Transform

A wavelet means a small wave that decays quickly. The equivalent mathematical conditions for a wavelet \psi(t) are [117]

\int_{-\infty}^{\infty} \left| \psi(t) \right|^{2} dt < \infty                (4.1)

\int_{-\infty}^{\infty} \psi(t)\, dt = 0                (4.2)

\int_{-\infty}^{\infty} \dfrac{\left| \Psi(\omega) \right|^{2}}{\left| \omega \right|}\, d\omega < \infty                (4.3)

Equation 4.1 represents the wavelet's finite energy in the time domain, Equation 4.2 represents the zero average value of the wavelet, and Equation 4.3 represents the wavelet's finite energy in the frequency domain.
In wavelet analysis, a single prototype function \psi(t), called the basic wavelet or mother wavelet, generates a set of basis functions \psi_{a,b}(t) by dilating and translating \psi(t). The mathematical expression of this function is shown in Equation 4.4. The function \psi(t) is an oscillatory function; it has a limited duration and dies out rapidly as |t| \to \infty [9].

\psi_{a,b}(t) = \dfrac{1}{\sqrt{\left| a \right|}}\, \psi^{*}\!\left( \dfrac{t-b}{a} \right)                (4.4)

The parameter a is the scaling parameter or scale, and it measures the degree of compression. The parameter b is the translation parameter, which determines the time location of the wavelet. When |a| < 1, the wavelet is a compressed version (smaller support in the time domain) of the mother wavelet and corresponds to higher frequencies. On the other hand, when |a| > 1, \psi_{a,b}(t) has a larger time-width than \psi(t) and corresponds to lower frequencies. Thus, wavelets have time-widths adapted to their frequencies. This is the main reason for the success of the Morlet wavelets in signal processing and time-frequency signal analysis.
4.2.3 Types of Wavelet Transform

The wavelet transform is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the mother wavelet \psi(t). There are mainly two types of wavelet transform: the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) [118].

4.2.3.1 Continuous Wavelet Transform

The continuous wavelet transform is also called the integral wavelet transform. The decomposition of a signal x(t) on the continuous wavelet basis is shown in Equation 4.5.

x_{w}(a,b) = \dfrac{1}{\sqrt{\left| a \right|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left( \dfrac{t-b}{a} \right) dt                (4.5)

where a and b are the dilation and translation parameters, x(t) is the input signal, x_{w}(a,b) is the wavelet transform of x(t), and the scaled and shifted functions \psi_{a,b}(t) are called baby wavelets.


4.2.3.2 Discrete Wavelet Transform

The continuous wavelet transform has the drawbacks of redundancy and incompatibility with digital computers. A discrete signal f[n] can be approximated as follows:

f[n] = \dfrac{1}{\sqrt{M}} \sum_{k} W_{\varphi}(j_{0},k)\, \varphi_{j_{0},k}[n] + \dfrac{1}{\sqrt{M}} \sum_{j=j_{0}}^{\infty} \sum_{k} W_{\psi}(j,k)\, \psi_{j,k}[n]                (4.6)

where

W_{\varphi}(j_{0},k) = \dfrac{1}{\sqrt{M}} \sum_{n} f[n]\, \varphi_{j_{0},k}[n]

W_{\psi}(j,k) = \dfrac{1}{\sqrt{M}} \sum_{n} f[n]\, \psi_{j,k}[n]

Here \varphi_{j_{0},k}[n] is the scaling function and \psi_{j,k}[n] is the basis (wavelet) function.

Generally, an image has two dimensions and hence 2D DWT analysis is required. During the first level of decomposition, the image is passed through the low pass filter (H0(z)) and the high pass filter (H1(z)). The outputs of these filters are decimated by two row-wise. The two resulting sequences are again applied to low pass and high pass filters, followed by decimation column-wise. After the first level of decomposition, the image has three different directional details (horizontal, vertical and diagonal) and the approximation details. Further levels of decomposition are carried out on the approximation details. This is called multilevel decomposition or the Mallat algorithm [66]. Each sub-image is one-fourth the size of the parent image. The decomposition of an image is shown in Figure 4.1.

Figure 4.1. One level image decomposition
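As a minimal illustration (a sketch assuming the MATLAB Wavelet Toolbox; the image file name is a placeholder), the one-level decomposition of Figure 4.1 and its reconstruction can be obtained as:

    img = im2double(imread('ms_band.tif'));   % placeholder input image

    % One level of 2D DWT: approximation (LL) and horizontal, vertical and
    % diagonal detail sub-bands, each one-fourth the size of the parent image
    [cA, cH, cV, cD] = dwt2(img, 'haar');

    % Reconstruction of the image from the four sub-bands
    rec = idwt2(cA, cH, cV, cD, 'haar');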


4.2.4 Wavelet Families

Wavelets can be classified into orthogonal and biorthogonal filter banks. The orthogonal (Haar and Daubechies) and biorthogonal (CDF 9/7 or Cohen-Daubechies-Feauveau) wavelets are discussed in the following sections.

4.2.4.1 Haar Wavelet

The Haar wavelet is the first known wavelet and was proposed in 1909 by Alfred Haar. It is a special case of the Daubechies wavelet and is also known as db1 [119].

The Haar wavelet is also the simplest possible wavelet. The disadvantage of the Haar wavelet is that it is not continuous and therefore not differentiable. The Haar basis function is shown in Figure 4.2 and the Haar (db1) filter coefficients are shown in Table 4.1.

\psi(t) = \begin{cases} 1 & 0 \le t < \tfrac{1}{2} \\ -1 & \tfrac{1}{2} \le t < 1 \\ 0 & \text{otherwise} \end{cases}                (4.7)

Its scaling function \varphi(t) can be described as

\varphi(t) = \begin{cases} 1 & 0 \le t < 1 \\ 0 & \text{otherwise} \end{cases}                (4.8)

Figure 4.2 Haar wavelet basis function

Table 4.1 Haar (db1) filter coefficients

LPF    1/\sqrt{2}     1/\sqrt{2}
HPF    1/\sqrt{2}    -1/\sqrt{2}
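As a small numeric illustration of Table 4.1 (a sketch, not part of the fusion design), one level of the Haar DWT applied to a short sequence produces scaled pairwise sums and differences of neighbouring samples:

    x = [4 6 10 12 8 6 5 3];            % example input sequence

    % One-level Haar (db1) DWT: the approximation coefficients are scaled
    % pairwise sums of neighbouring samples and the detail coefficients are
    % scaled pairwise differences, in line with the filters of Table 4.1
    [approx, detail] = dwt(x, 'haar');

    xr = idwt(approx, detail, 'haar');  % perfect reconstruction of x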

4.2.4.2 Daubechies Wavelet

Ingrid Daubechies developed a new family of wavelets called the Daubechies wavelets [119]. They are very popular because they are compactly supported orthonormal wavelets. A member is denoted dbN, where db indicates the family and N represents either the number of filter coefficients or the vanishing moment order of the wavelet function. The Daubechies wavelet basis functions are shown in Figure 4.3, and the analysis of Daubechies wavelets (decomposition and reconstruction) is shown in Figure 4.4.

Figure 4.3 Daubechies wavelet basis function

Figure 4.4 Wavelet analysis for decomposition and reconstruction
In Figure 4.4, H_0(z) and H_1(z) are the decomposition (analysis) filters (low pass and high pass), F_0(z) and F_1(z) are the reconstruction (synthesis) filters (low pass and high pass), and X(z) and Y(z) are the input and output (either image or signal). The output at each stage of the low pass branch is determined as follows.

Stage 1 (analysis filtering):     X(z) H_0(z)                (4.9)

Stage 2 (downsampling by two):    \tfrac{1}{2} \left[ X(z^{1/2}) H_0(z^{1/2}) + X(-z^{1/2}) H_0(-z^{1/2}) \right]                (4.10)

Stage 3 (upsampling by two):      \tfrac{1}{2} \left[ X(z) H_0(z) + X(-z) H_0(-z) \right]                (4.11)

Stage 4 (synthesis filtering):    \tfrac{1}{2} \left[ X(z) H_0(z) + X(-z) H_0(-z) \right] F_0(z)                (4.12)

Stages 5, 6 and 7 of the high pass branch are similar to stages 1, 2 and 3, except that H_0(z) and F_0(z) are replaced with H_1(z) and F_1(z). Similarly, the stage 8 output is

\tfrac{1}{2} \left[ X(z) H_1(z) + X(-z) H_1(-z) \right] F_1(z)                (4.13)

So, the final output is

Y(z) = \tfrac{1}{2} \left[ X(z) H_0(z) + X(-z) H_0(-z) \right] F_0(z) + \tfrac{1}{2} \left[ X(z) H_1(z) + X(-z) H_1(-z) \right] F_1(z)                (4.14)

Y(z) = \tfrac{1}{2} \left[ H_0(z) F_0(z) + H_1(z) F_1(z) \right] X(z) + \tfrac{1}{2} \left[ H_0(-z) F_0(z) + H_1(-z) F_1(z) \right] X(-z)                (4.15)

In order to achieve perfect reconstruction, the aliasing term in X(-z) has to be eliminated:

\left[ H_0(-z) F_0(z) + H_1(-z) F_1(z) \right] X(-z) = 0                (4.16)

Finally,

F_0(z) = -H_1(-z)                (4.17)

F_1(z) = H_0(-z)                (4.18)

The perfect reconstruction filter can also be expressed as

F_1(z) = H_1(z^{-1})\, z^{-D}                (4.19)
The db1, db3 low pass and high pass filter coefficients are
shown in Tables 4.1 and 4.2 respectively.
Table 4.2 db3 filter coefficients

LPF   0.33267   0.8068    0.459    -0.135    -0.085     0.03522
HPF   0.03522   0.085    -0.135    -0.459     0.8068   -0.33267
4.2.4.3 Cohen-Daubechies-Feauveau Wavelet

The Cohen-Daubechies-Feauveau wavelet is also called the CDF 9/7 filter (where 9 and 7 indicate the numbers of filter taps) [120]. This filter is used by the FBI for fingerprint compression. The single level decomposition and reconstruction structure of CDF 9/7 is shown in Figure 4.5.

Figure 4.5 Single level decomposition and reconstruction of CDF 9/7

Let H_0(z) and H_1(z) be the wavelet decomposition (forward/analysis) filters, where H_0(z) is a low pass filter and H_1(z) is a high pass filter, and let the dual filters F_0(z) and F_1(z) be the wavelet reconstruction (reverse/synthesis) filters. The analysis and synthesis filter coefficients are shown in Tables 4.3 and 4.4, and the CDF 9/7 wavelet filter is shown in Figure 4.6.

H_0(z) = h_3 (z^{3} + z^{-3}) + h_2 (z^{2} + z^{-2}) + h_1 (z + z^{-1}) + h_0

H_1(z) = g_4 (z^{4} + z^{-4}) + g_3 (z^{3} + z^{-3}) + g_2 (z^{2} + z^{-2}) + g_1 (z + z^{-1}) + g_0

F_0(z) = \tilde{h}_4 (z^{4} + z^{-4}) + \tilde{h}_3 (z^{3} + z^{-3}) + \tilde{h}_2 (z^{2} + z^{-2}) + \tilde{h}_1 (z + z^{-1}) + \tilde{h}_0

F_1(z) = \tilde{g}_3 (z^{3} + z^{-3}) + \tilde{g}_2 (z^{2} + z^{-2}) + \tilde{g}_1 (z + z^{-1}) + \tilde{g}_0

Here h_3, h_2, h_1 and h_0 are the coefficients of the low pass decomposition filter H_0(z); g_4, g_3, g_2, g_1 and g_0 are the coefficients of the high pass decomposition filter H_1(z); \tilde{h}_4, \tilde{h}_3, \tilde{h}_2, \tilde{h}_1 and \tilde{h}_0 are the coefficients of the low pass reconstruction filter F_0(z); and \tilde{g}_3, \tilde{g}_2, \tilde{g}_1 and \tilde{g}_0 are the coefficients of the high pass reconstruction filter F_1(z).

Table 4.3 Forward/analysis filter coefficients

Low pass filter (H_0)            High pass filter (H_1)
h_0 = 0.788 485 616 614          g_0 = 0.852 698 679 009
h_1 = 0.418 092 273 333          g_1 = 0.377 402 855 613
h_2 = 0.040 689 417 620          g_2 = 0.110 624 404 418
h_3 = 0.064 538 882 646          g_3 = 0.023 849 465 019
                                 g_4 = 0.037 828 455 507

Table 4.4 Reverse/synthesis filter coefficients

Low pass filter (F_0)        High pass filter (F_1)
\tilde{h}_0 = g_0            \tilde{g}_0 = h_0
\tilde{h}_1 = g_1            \tilde{g}_1 = h_1
\tilde{h}_2 = g_2            \tilde{g}_2 = h_2
\tilde{h}_3 = g_3            \tilde{g}_3 = h_3
\tilde{h}_4 = g_4

Figure 4.6. CDF 9/7 wavelet filter
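In the MATLAB Wavelet Toolbox, the CDF 9/7 pair is available as the biorthogonal wavelet 'bior4.4'; the sketch below simply retrieves the analysis/synthesis filters corresponding to Tables 4.3 and 4.4 (their normalization may differ from the tables) and applies one level of decomposition and reconstruction to a placeholder image:

    % Analysis (decomposition) and synthesis (reconstruction) filters of CDF 9/7
    [LoD, HiD, LoR, HiR] = wfilters('bior4.4');

    % One level of decomposition and reconstruction of an image with CDF 9/7
    img = im2double(imread('pan.tif'));        % placeholder input image
    [cA, cH, cV, cD] = dwt2(img, 'bior4.4');
    rec = idwt2(cA, cH, cV, cD, 'bior4.4');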


4.3 DWT BASED IMAGE FUSION

The images have to be properly aligned pixel-by-pixel in order to achieve successful image fusion. The images captured from IRS-P6 (MS bands) and Cartosat-2 (PAN) are taken as the input images Image 1 and Image 2 respectively. In this research, these input images are pre-processed so that the DWT fusion can be applied effectively. Figure 4.7 shows the top level block diagram of image fusion using the wavelet transform. During the decomposition process, the DWT decomposes the input images into different types of coefficients while retaining the original information. The coefficients coming from the several input images are then combined according to some fusion rule to get the new fused coefficients [33]. During the reconstruction process, the Inverse Discrete Wavelet Transform (IDWT) is performed on the combined fused coefficients to get the resultant fused image.

Figure 4.7 DWT based image fusion approach

4.4 WAVELET BASED IMAGE FUSION RULES

The important step in image fusion is combining the coefficients in a proper way using fusion rules to obtain the best quality fused image. In this study, the following fusion rules are used to fuse the low resolution multispectral and high resolution panchromatic images:

1. Wavelet averaging based image fusion
2. Wavelet additive based image fusion
3. Wavelet substitution based image fusion

In the wavelet averaging and substitutive methods, up to five levels of decomposition have been performed for each wavelet [97]. Performance evaluation has been carried out for all of the fused images.
4.4.1 Wavelet Averaging Method

The Indian satellite IRS-P6 gives the low resolution images (R band, G band and NIR band) and Cartosat-2 gives the high resolution PAN image. Initially, the R band image and the PAN image are taken as inputs. After the image preprocessing of these images (discussed in Chapter 3), the registered images are passed as input signals through two different one-dimensional digital filters H0 and H1, which perform low pass and high pass filtering operations respectively, for both input images. The output of each filter is followed by sub-sampling by a factor of two. This step is referred to as the row compression, and the results are called the L (low frequency) and H (high frequency) components. The down-sampled outputs are then passed to two further one-dimensional digital filters in order to obtain the column compression. After the row and column compressions of both input images, the output frequency components High High (HH), High Low (HL), Low High (LH) and Low Low (LL) are obtained.

The frequency components of one input image are fused with the corresponding frequency components of the second image. The HH components of both images are added and the resultant output is divided by a factor of two [98]. Similarly, the averages of the HL, LL and LH components are taken. This entire process is known as image fusion. The averaging is followed by the reconstruction process, i.e., the Inverse Discrete Wavelet Transform (IDWT). IDWT is the reverse process of DWT: the output frequency components (HH, HL, LH and LL) are first up-sampled and then the filtering operation is carried out. The sub-bands are added to get the resultant fused image as shown in Figure 4.8.

Figure 4.8 Wavelet Average Method

This process is repeated for the other two MS bands (G and NIR) with the PAN image individually to get the other two fused images. Finally, these three fused images are concatenated to form a new three-band fused image as shown in Figure 4.9.

Figure 4.9 Image concatenation process

The entire operation above has been performed for up to five levels of decomposition with the different wavelets. The DWT based image fusion technique produced a quality fused image even though the images have been taken from different sensors.
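A software sketch of this averaging rule for one MS band and the PAN image (already registered and resampled to the same size) is given below; it is a simplified single-level MATLAB illustration with placeholder file names, not the exact multi-level model used for the hardware implementation in Chapter 5.

    % Registered and resampled inputs of equal size (placeholder file names)
    ms  = im2double(imread('ms_band_red.tif'));    % one MS band
    pan = im2double(imread('pan.tif'));            % PAN image

    % One-level decomposition of both images with CDF 9/7 ('bior4.4')
    [aM, hM, vM, dM] = dwt2(ms,  'bior4.4');
    [aP, hP, vP, dP] = dwt2(pan, 'bior4.4');

    % Averaging fusion rule: add corresponding sub-bands and divide by two,
    % then reconstruct the fused band with the inverse transform
    fusedR = idwt2((aM+aP)/2, (hM+hP)/2, (vM+vP)/2, (dM+dP)/2, 'bior4.4');

    % Repeating for the G and NIR bands and concatenating the three fused
    % bands gives the three-band fused image of Figure 4.9 (band order assumed)
    % fused = cat(3, fusedR, fusedG, fusedNIR);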
4.4.2 Wavelet Additive Method

Keeping in view the image quality and the wavelet transform, the MS image resolution has been resampled from 5.8m to 4m (nearest power of 2), so that the resolutions of the MS and PAN images are 4m and 1m respectively. A one-level wavelet transform is applied to the individual bands of the MS image to get the wavelet coefficients. Since the pixel spacing of the PAN image is four times smaller than that of the MS bands, a three-level wavelet decomposition of the PAN image is done so that the pixel spacings become equal. That is, the coefficients (approximation, horizontal, vertical and diagonal) of the one-level decomposed MS image bands and the three-level decomposed PAN image are matched pixel by pixel. Then, the wavelet additive fusion rule is performed to merge the PAN image and each MS band individually to create a new fused image [121]. After obtaining the new fused image, a three-level inverse wavelet transform is performed. As a result, an MS image with 1m spatial resolution is attained. This process is repeated for each individual MS band. Finally, the three fused images are concatenated to form a new three-band fused image. The entire process is illustrated in Figure 4.10.

Figure 4.10 Wavelet based Additive Method
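One simple software realization of such an additive rule is sketched below (an illustrative MATLAB variant with placeholder names, not the exact coefficient-by-coefficient matching described above): the high-frequency wavelet planes of the PAN image are extracted by zeroing the coarsest approximation of a three-level decomposition and are then added to the MS band resampled onto the PAN grid.

    msUp = im2double(imread('ms_band_resampled.tif'));  % MS band resampled to the PAN grid (placeholder)
    pan  = im2double(imread('pan_1m.tif'));              % 1m PAN image (placeholder)

    % Three-level decomposition of the PAN image
    [C, S] = wavedec2(pan, 3, 'bior4.4');

    % Keep only the detail (wavelet) planes by zeroing the coarsest approximation
    nApprox = prod(S(1, :));                 % number of approximation coefficients
    C(1:nApprox) = 0;
    panDetails = waverec2(C, S, 'bior4.4');  % reconstructed high-frequency planes

    % Additive fusion: inject the PAN details into the MS band
    fusedBand = msUp + panDetails;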

4.4.3 Wavelet Substitutive Method


In this approach, the 5.8m resolution MS image has been resampled to the PAN image resolution (1m) so that the pixel spacings become equal. Proper care has been taken in this step to make sure that the dimensions of the PAN and MS images are equal, so that the same number of decomposition levels can be applied to both the PAN and MS images. Once the decomposition is performed, the substitution fusion rule is applied, in which the PAN approximation is replaced with the MS band approximation [122], as shown in Figure 4.11.

Figure 4.11 Wavelet Substitutive Method


In this approach, MS bands and PAN image have been
decomposed up to five levels. In each level, the detail coefficients
(approximation, horizontal, vertical and diagonal) of MS bands and
PAN image were obtained.

R, G and NIR bands of MS image have

been decomposed to five levels as shown in Equations 4.20-4.22.


R ANR +

N
i
i (HR

+ VRi + DiR )

(4.20)

G ANG +

N
i
i (HG

+ VGi + DiG )

(4.21)

NIR ANNIR +

N
i
i (HNIR

i
+ VNIR
+ DiNIR )

(4.22)

Similarly, PAN image has been decomposed to five levels as


shown in Equation 4.23.

89

PAN ANP +

N
i
i (HP

+ VPi + DiP )

(4.23)

Where,
AN : Approximation coefficient at level N or approximation plane.
Hi : Horizontal coefficient at level i or horizontal wavelet plane.
Vi : Vertical coefficient at level i or vertical wavelet plane.
Di : Diagonal coefficient at level i or diagonal wavelet plane.
After decomposition, the substitution has been done by placing the MS band approximation in place of the PAN approximation at each level. For each MS band and the PAN image, a single set of fused image coefficients is obtained; similarly, for each level, three sets of fused image coefficients are obtained.

Once the substitution is done, the inverse wavelet transform is applied as shown in Equations 4.24 to 4.26.

A_{R}^{N} + \sum_{i=1}^{N} \left( H_{P}^{i} + V_{P}^{i} + D_{P}^{i} \right) \rightarrow R_{NEW}                (4.24)

A_{G}^{N} + \sum_{i=1}^{N} \left( H_{P}^{i} + V_{P}^{i} + D_{P}^{i} \right) \rightarrow G_{NEW}                (4.25)

A_{NIR}^{N} + \sum_{i=1}^{N} \left( H_{P}^{i} + V_{P}^{i} + D_{P}^{i} \right) \rightarrow NIR_{NEW}                (4.26)

After the inverse wavelet transform, three new fused images have been obtained at each level. Finally, these three fused images are concatenated to form a new three-band fused image at each level.
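The substitution rule of Equations 4.24 to 4.26 can be sketched in MATLAB as follows for one band and N decomposition levels (placeholder names; both images are assumed registered, resampled to the same size and decomposed with the same wavelet):

    N   = 5;                                           % number of decomposition levels
    ms  = im2double(imread('ms_band_resampled.tif'));  % MS band resampled to PAN resolution
    pan = im2double(imread('pan.tif'));                % PAN image of the same size

    % N-level decompositions of the MS band and the PAN image
    [Cm, Sm] = wavedec2(ms,  N, 'bior4.4');
    [Cp, Sp] = wavedec2(pan, N, 'bior4.4');

    % Substitution: replace the PAN approximation with the MS band approximation,
    % keeping the PAN detail (horizontal, vertical, diagonal) planes
    nApprox = prod(Sp(1, :));
    Cf = Cp;
    Cf(1:nApprox) = Cm(1:nApprox);

    % The inverse transform gives the new fused band (Equations 4.24 to 4.26)
    fusedBand = waverec2(Cf, Sp, 'bior4.4');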

4.5 CORRELATION COEFFICIENT (CC)

The Correlation Coefficient measures the similarity of two images and ranges from -1 to +1, where +1 indicates that the two images are highly similar and -1 indicates that they are highly dissimilar. It is calculated using Equation 4.27.

CC(F, M) = \dfrac{\sum_{i}\sum_{j} \left( F(i,j) - \bar{F} \right) \left( M(i,j) - \bar{M} \right)}{\sqrt{\left( \sum_{i}\sum_{j} \left( F(i,j) - \bar{F} \right)^{2} \right) \left( \sum_{i}\sum_{j} \left( M(i,j) - \bar{M} \right)^{2} \right)}}                (4.27)

where F is the fused image, \bar{F} is the mean of the fused image, M is the MS image and \bar{M} is the mean of the MS image.
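Equation 4.27 translates directly into a few lines of MATLAB, as sketched below; the Image Processing Toolbox function corr2 computes the same quantity.

    function cc = corr_coeff(F, M)
    % CORR_COEFF  Correlation coefficient between a fused image F and an MS image M.
        F = double(F);  M = double(M);
        Fd = F - mean(F(:));                 % deviations from the mean
        Md = M - mean(M(:));
        cc = sum(Fd(:) .* Md(:)) / sqrt(sum(Fd(:).^2) * sum(Md(:).^2));  % Equation 4.27
    end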
4.6 RESULTS AND DISCUSSION

In this study, the fusion methods are evaluated by calculating the PSNR (refer to Section 3.5) and CC values between the original low resolution MS image and the fused image [123]. This chapter describes the performance of the Haar, db3 and CDF 9/7 DWT based image fusion techniques.

One of the most frequently published band combinations uses NIR light as red, red light as green and green light as blue. Since plants strongly reflect NIR and green light, vegetation appears bright in this combination, cities and exposed ground appear grey or tan, and clear water appears black. This appearance can be observed in the resultant fused images.

Figure 4.12 Input images: Band 2, Band 3, Band 4, MS (combination of Bands 2, 3 and 4) and PAN


4.6.1 Wavelet Averaging Method


The output images obtained by wavelet averaging method for
Level-1 and Level-3 are shown in Figure 4.13. The PSNR and CC
values of the fused images for levels 1 and 3 are given in Tables 4.5
and 4.6.
Table 4.5 Averaging method results at Level 1

                      Haar                            db3                             CDF 9/7
Parameters    Band2     Band3     Band4      Band2     Band3     Band4      Band2     Band3     Band4
PSNR (dB)     78.4984   78.5002   77.6000    78.4984   78.5002   77.6000    78.9464   78.9958   78.0229
CC            0.7631    0.8423    0.7530     0.7617    0.8412    0.7522     0.7844    0.8569    0.7560

Table 4.6 Averaging method results at Level 3

                      Haar                            db3                             CDF 9/7
Parameters    Band2     Band3     Band4      Band2     Band3     Band4      Band2     Band3     Band4
PSNR (dB)     78.4984   78.5002   77.6000    77.4138   76.6752   76.1554    78.9464   78.9958   78.0229
CC            0.7631    0.8423    0.7530     0.6404    0.7033    0.6198     0.7844    0.8569    0.7560

Figure 4.13 Wavelet averaging method output images (1-level and 3-level Haar, db3 and CDF 9/7)

From Figure 4.13, it is observed that a good visual quality fused image has been obtained at levels 1 and 3 by using the CDF 9/7 wavelet when compared to Haar and db3. It is also observed from Table 4.5 that the PSNR and CC values of the CDF 9/7 wavelet fused image show higher performance when compared to the other two wavelets. Interestingly, from Tables 4.5 and 4.6 it is also noted that at levels 1 and 3 the Haar and CDF 9/7 wavelets exhibit similar performance, while the performance of db3 clearly decreases from level 1 to level 3.
4.6.2 Wavelet Additive Method
The output images obtained by wavelet additive method are
shown in Figure 4.14. The PSNR and CC values of the fused images
are tabulated in Table 4.7.
Table 4.7 Additive method results

                      Haar                            db3                             CDF 9/7
Parameters    Band2     Band3     Band4      Band2     Band3     Band4      Band2     Band3     Band4
PSNR (dB)     76.5763   76.7641   70.4521    77.2057   77.3400   70.5678    77.1342   77.2650   76.5596
CC            0.6017    0.6801    0.5427     0.6288    0.7044    0.5548     0.7256    0.8069    0.7163

Figure 4.14 Wavelet additive method output images (Haar, db3 and CDF 9/7)

It is observed from the Figure 4.14 that CDF 9/7 wavelet fused
image has good visual quality when compared to that of Haar and
db3. It is also observed from the Table 4.7 that PSNR and CC values of
CDF 9/7 wavelet fused image are showing high performance when
compared to the other two wavelets.

Whereas, the Haar and db3 wavelets exhibit almost similar performance, as shown in Table 4.7.
4.6.3 Wavelet Substitutive Method

The output images obtained by the wavelet substitutive method for Level-1 and Level-3 are shown in Figure 4.15. The PSNR and CC values of the fused images for levels 1 and 3 are given in Tables 4.8 and 4.9.
Table 4.8 Substitutive method results at Level 1

                      Haar                            db3                             CDF 9/7
Parameters    Band2     Band3     Band4      Band2     Band3     Band4      Band2     Band3     Band4
PSNR (dB)     74.2793   74.1996   73.8831    74.7418   75.2109   74.1738    76.1159   76.5089   76.3153
CC            0.4190    0.5200    0.4391     0.5583    0.6566    0.5102     0.5871    0.7289    0.7134

Table 4.9 Substitutive method results at Level 3

                      Haar                            db3                             CDF 9/7
Parameters    Band2     Band3     Band4      Band2     Band3     Band4      Band2     Band3     Band4
PSNR (dB)     83.4306   83.4375   83.4442    78.0840   78.2335   78.2854    83.4119   83.4168   83.4240
CC            0.8563    0.9179    0.9157     0.6675    0.6743    0.6760     0.8558    0.9176    0.9155

Figure 4.15 Wavelet substitutive method output images (1-level and 3-level Haar, db3 and CDF 9/7)

From Figure 4.15, it is observed that a good visual quality fused image has been obtained at level 1 by using the CDF 9/7 wavelet when compared to Haar and db3. It is also observed from Table 4.8 that the PSNR and CC values of the CDF 9/7 wavelet fused image show higher performance when compared to the other two wavelets. From Tables 4.8 and 4.9, it is also noted that the visual quality and performance of all three wavelets increase from level 1 to level 3, and it is clearly seen from the results that the Haar and CDF 9/7 wavelets exhibit almost similar performance.

From the results, it is revealed that the CDF 9/7 filter shows better performance in all the fusion rules when compared to the Haar and db3 filters. Hence, in this study, the implementation of 2D CDF 9/7 wavelet based image fusion on the Virtex 6 FPGA kit has been proposed in order to speed up the fusion process. The FPGA implementation of DWT based image fusion is presented in Chapter 5.

CHAPTER 5

FPGA IMPLEMENTATION OF DWT BASED IMAGE FUSION

CHAPTER 5: FPGA IMPLEMENTATION OF DWT BASED IMAGE FUSION

                                                                  Page No
5.1   Introduction                                                  101
5.2   Block Diagram of Image Fusion using Hardware Software
      Co-simulation                                                 102
5.3   Implementation Design Flow                                    103
5.4   Algorithm Design for Image Fusion                             105
5.5   Hardware Software Co-simulation Implementation Process        108
      5.5.1 Designing of sub blocks                                 109
5.6   Image Fusion using Averaging Method                           115
5.7   Experimental Setup of Hardware Software Co-simulation         119
5.8   Results and Discussion                                        120

5. FPGA IMPLEMENTATION OF DWT BASED IMAGE FUSION


5.1 INTRODUCTION

In the past few years, image fusion has become a very popular field in the area of image processing. This is primarily due to the fast entrance of digital imaging into remote sensing and satellite applications. There is often a need to store a large amount of image data and process it very quickly. These tasks are very complex and require a large amount of computation [10, 12 & 14]. Creating specialized hardware would greatly reduce the time consumed by these processes, and the use of the predominant algorithms would greatly increase the speed and effectiveness of the overall process. For this reason, the implementation of image fusion on reconfigurable hardware is proposed using Field Programmable Gate Array (FPGA) technology, which supports reconfigurable computing [98]. Reconfigurable computing technology has become a viable target for the implementation of algorithms suited to image processing applications.

This chapter deals with the FPGA implementation of image fusion using the CDF 9/7 filter transform through hardware software co-simulation. In this study, the fusion model has been designed using the averaging method. Model based design gives the opportunity to perform rapid prototyping of the image fusion algorithms while the image fusion hardware is being developed.

5.2 BLOCK DIAGRAM OF IMAGE FUSION USING HARDWARE SOFTWARE CO-SIMULATION

Figure 5.1 Top level block diagram of DWT-IDWT based image fusion

The design has been developed using the System Generator (XSG) of the Xilinx ISE 13.1 design tool configured with MATLAB R2010a, which integrates the Xilinx Blockset with the MATLAB Simulink environment and supports the Virtex 6 FPGA [9]. The MATLAB environment is a high-level technical computing language for algorithm development, data visualization, data analysis and numerical computing. One of the key features of this tool is its ability to integrate with other languages. MATLAB also includes the Simulink graphical environment used for multi-domain simulation and model-based design. Signal processing designers take advantage of Simulink as a good platform for preliminary algorithmic exploration and optimization. Using MATLAB Simulink to assist the System Generator verification relies on co-simulating the two environments; the co-simulation interface must provide sufficient capabilities and reasonable simulation speeds. System Generator automatically specifies the details of the FPGA with the help of the Xilinx DSP blockset for Simulink, and then the FPGA is programmed, as shown in Figure 5.1.
5.3 IMPLEMENTATION DESIGN FLOW

Figure 5.2 shows the image fusion implementation design flow using hardware software co-simulation [101]. The design flow steps are described as follows.

In the first step, MATLAB Simulink is used to develop the image fusion algorithm as shown in Figures 5.3 and 5.4. Once the algorithm is developed, it is modeled using the Xilinx blockset library. The input images are given to the Xilinx models in the form of vectors in Xilinx fixed point format. This model is simulated in the MATLAB Simulink environment with a suitable simulation time.

Figure 5.2 Implementation design flow

Once the expected fused image is obtained, the System Generator token is configured for the Virtex-6 FPGA board. System Generator provides hardware co-simulation, making it possible to incorporate a design running in an FPGA directly into a Simulink simulation. After the I/O clock planning is done, the model is implemented for JTAG hardware co-simulation. On compilation, the netlist and the Xilinx ISE accessible programming file are generated in Verilog HDL [11]. The developed image fusion model is checked for behavioral syntax and is then synthesized and implemented on the FPGA. The Xilinx System Generator itself has the feature of configuring the user constraints file (.UCF), test bench and test vectors for testing the architecture. Bit stream compilation is done to create an FPGA bit file that is suitable for the FPGA input and implemented on the Virtex 6 ML605 target device.
5.4 ALGORITHM DESIGN FOR IMAGE FUSION

In this study, the algorithm shown in Figure 5.3 has been used for image fusion and implemented on FPGA using hardware software co-simulation. The main stages in the developed algorithm are:

1. Loading of input images
2. Preprocessing
3. Wavelet based fusion
4. Concatenation

Figure 5.3 Flow chart for image fusion algorithm

Figure 5.4 shows the developed Simulink (software reference) model for image fusion.

Figure 5.4 Software reference model of DWT-IDWT based average image fusion

Model Based Design (MBD) has become an increasingly popular method for performing image processing design applications. MBD tools such as MATLAB and Simulink offer the advantage of designing and simulating an image fusion application in a simulation environment prior to building or implementing a hardware design. MBD reduces the design time by improving the design based on model performance. It allows the simultaneous development of the image fusion on application specific hardware and gives the opportunity to develop prototype hardware by allowing immediate testing of the fusion algorithms while the hardware is under development. This reduces the time to market.

DWT-IDWT plays an important role in image processing. In this thesis, a DWT-IDWT based image fusion model has been developed and implemented on the ML605 Virtex 6 FPGA as prototype custom hardware. The building blocks for this research have been identified as the analysis and synthesis filters (low pass filter, high pass filter) and the fusion block.
5.5 HARDWARE SOFTWARE CO-SIMULATION IMPLEMENTATION PROCESS

In this study, a hardware software co-simulation algorithm has been designed for fusing multisensor images and implementing the fusion on FPGA. The registered multisensor images are considered for this study. The PAN wavelengths (0.5µm to 0.8µm) are converted to intensity by using the RGB to Intensity converter block. Then the multisensor images (MS and PAN) of size 256x256 have been resized to 128x128 due to the memory constraints in the FPGA. These resized images have been applied to the 2D-1D block for the conversion of the two dimensional image data to a one dimensional bit stream using Simulink blocksets. Then, they are applied as inputs to the System Generator model for the FPGA implementation process.
5.5.1 DESIGNING OF SUB BLOCKS
This section describes the System Generator blocks like the image conversion blocks, the low pass and high pass filters and the DWT and IDWT blocks. The process of converting the input image to a serial stream is illustrated [125] in Figure 5.5.

Figure 5.5 Image conversion in 2D-1D block


The PAN and MS band conversion processes from two dimensional data to a one dimensional bit stream (serial stream) are illustrated in Figures 5.6 and 5.7 respectively (an array-level sketch of this conversion is given after Figure 5.7).


Figure 5.6 PAN conversion from 2D to 1D

Figure 5.7 MS band conversion from 2D to 1D
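As an array-level sketch only (an analogy to the Simulink 2D-1D and serial data to image blocks, not their actual implementation), the conversion can be pictured as a row-major flatten and reshape:

import numpy as np

def image_to_stream(img):
    # Flatten the 2-D image into a 1-D sample stream (row-major), as the 2D-1D block does.
    return img.reshape(-1), img.shape

def stream_to_image(stream, shape):
    # Rebuild the 2-D image from the serial stream, as the serial data to image block does.
    return stream.reshape(shape)

pan = np.random.randint(0, 256, (128, 128))   # stand-in for a resized PAN band
stream, shape = image_to_stream(pan)
assert np.array_equal(stream_to_image(stream, shape), pan)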


Then, the image bit stream data have been applied as inputs to the System Generator DWT building blocks of analysis filters (low-pass and high-pass) through the Gateway In block. These filters separate each input bit stream into approximation and detail coefficients [100]. These coefficients have been obtained by convolving the input values with the low-pass filter for approximation and with the high-pass filter for detail, resulting in a collection of sub-bands with smaller bandwidths and slower sample rates [91] (a minimal array-level sketch of this analysis filtering is given below).
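A minimal sketch of this analysis filtering on a 1-D stream, assuming PyWavelets' 'bior4.4' filters as a stand-in for the CDF 9/7 coefficients loaded into the Xilinx blocks:

import numpy as np
import pywt

w = pywt.Wavelet("bior4.4")                 # biorthogonal 9/7-style analysis filter pair
lp, hp = np.array(w.dec_lo), np.array(w.dec_hi)

def analyze(x):
    # One analysis step: convolve the stream with the low pass and high pass filters
    # and keep every second sample, halving the sample rate of each sub-band.
    approx = np.convolve(x, lp)[::2]        # approximation coefficients
    detail = np.convolve(x, hp)[::2]        # detail coefficients
    return approx, detail

approx, detail = analyze(np.arange(64, dtype=float))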
The two-dimensional DWT decomposition produces four sub-bands HH, HL, LH and LL. The sub-bands of both images are fused by combining the HH coefficients of the first image with the HH coefficients of the second. Similarly, for the remaining coefficients, HL has been fused with HL, LH with LH and LL with LL. The averaging fusion has been carried out in two steps: in the first step, the image coefficients have been added, and in the next step the resultant output has been multiplied by a factor of 0.5. Then the fused coefficients have been applied to the synthesis filters to restore the fused image bit stream in the IDWT process. This fused image 1-D data (serial stream) is converted back to a 2-D image using the serial data to image block [125] after the DWT and IDWT operations, as illustrated in Figure 5.8. A minimal sketch of this coefficient averaging and reconstruction follows Figure 5.8.

Figure 5.8 Fused bit stream conversion from 1D to 2D
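The two-step averaging and the IDWT restoration can be sketched as follows (again using PyWavelets' 'bior4.4' as an assumed stand-in for the CDF 9/7 filter bank). Because the rule is linear, averaging in the wavelet domain is equivalent to averaging the streams directly:

import numpy as np
import pywt

def fuse_streams(x1, x2, wavelet="bior4.4"):
    # Decompose both streams, add corresponding coefficients, multiply by 0.5,
    # then restore the fused stream through the synthesis filters (IDWT).
    a1, d1 = pywt.dwt(x1, wavelet)
    a2, d2 = pywt.dwt(x2, wavelet)
    fused_a = (a1 + a2) * 0.5
    fused_d = (d1 + d2) * 0.5
    return pywt.idwt(fused_a, fused_d, wavelet)

x1 = np.linspace(0.0, 1.0, 128)   # stand-in serial streams of equal length
x2 = np.linspace(1.0, 0.0, 128)
print(np.allclose(fuse_streams(x1, x2), 0.5 * (x1 + x2)))   # True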


Low pass filter:

Figure 5.9 Low pass filter Xilinx system generator block

The Xilinx low pass filter block shown in Figure 5.9 accepts a stream of input data and computes the filter output with a fixed delay based on the low pass filter configuration. The data input port din of the filter provides input data for all channels in a time multiplexed manner and the data output port dout provides output for all channels in a time shared manner, depending on the wavelet filter coefficients entered using the FDA tool [125]. For the CDF 9/7 wavelet, the low pass filter coefficients are considered as shown in Table 5.1.
Table 5.1 Low pass filter coefficients

Index Number    Coefficient
K = 0           0.029
K = 1           0.2666
K = 2           -0.0782
K = 3           -0.0168
K = 4           0.0267

Also, with the Maximum_Possible oversampling specification [125], automatic determination of hardware oversampling has been done based on the din sample rate.


High pass filter:

Figure 5.10 High pass Xilinx system generator block

The Xilinx high pass filter block shown in Figure 5.10 is similar to the Xilinx low pass filter but with different coefficients. The high pass filter coefficients are shown in Table 5.2.
Table 5.2 High pass filter coefficients

Index Number    Coefficient
K = 0           1.1150
K = 1           -0.5912
K = 2           -0.0575
K = 3           0.0912


DWT Decomposition:

Figure 5.11 Image decomposition

DWT Reconstruction:

Figure 5.12 Image reconstruction


Figures 5.11 and 5.12 represent DWT decomposition and reconstruction using the low pass and high pass Xilinx system generator blocks. During the DWT decomposition, the image is decomposed into four components, viz. the horizontal, vertical and diagonal details and the approximation. During the DWT reconstruction, these four components are merged again to form a single fused image. A minimal decomposition and reconstruction round trip is sketched below.
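A round trip of this decomposition and reconstruction, sketched with PyWavelets ('bior4.4' assumed as the CDF 9/7 stand-in):

import numpy as np
import pywt

img = np.random.rand(128, 128)                                   # stand-in for one resized band
approx, (horiz, vert, diag) = pywt.dwt2(img, "bior4.4")          # four components
restored = pywt.idwt2((approx, (horiz, vert, diag)), "bior4.4")  # merge them back
print(np.allclose(restored, img))                                # True: perfect reconstruction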
5.6

IMAGE FUSION USING AVERAGING METHOD

Figure 5.13 shows the block diagram of the fusion block for the developed system generator model.

Figure 5.13 Fusion block


The HH, HL, LH and LL components of PAN have been added to the HH, HL, LH and LL components of the Band2 image. The resultant coefficients have been multiplied by 0.5. This process is referred to as the averaging based fusion process [70]. The fused HH, HL, LH and LL components have been passed to the 2D IDWT block. A similar process has been repeated for Band3 and Band4 with PAN separately. All three fused images have been concatenated to form a high resolution MS image (a minimal sketch of this concatenation follows). The entire process developed in the system generator model is shown in Figure 5.14.
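A minimal sketch of the final concatenation step, assuming the three fused bands produced earlier; array names and the output file name are illustrative only:

import numpy as np
from PIL import Image

fused_band2 = np.random.rand(128, 128)   # placeholders for the PAN-fused Band2, Band3, Band4
fused_band3 = np.random.rand(128, 128)
fused_band4 = np.random.rand(128, 128)

hr_ms = np.dstack([fused_band2, fused_band3, fused_band4])   # concatenate into one MS image
scaled = (255 * (hr_ms - hr_ms.min()) / np.ptp(hr_ms)).astype(np.uint8)
Image.fromarray(scaled).save("fused_ms.png")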

The hardware software co-simulation model for the averaging method is shown in Figure 5.15.


Figure 5.14 System generator simulation for averaging method


Figure 5.15 Hardware software co-simulation model for image fusion by averaging method


5.7

EXPERIMENTAL SETUP OF HARDWARE SOFTWARE CO-SIMULATION

Hardware software co-simulation means meeting system-level objectives by exploiting the synergy of hardware and software through their concurrent design. Digital hardware design is becoming increasingly similar to software design.

In this study, the hardware implementation of a fusion method for satellite images has been presented. The FPGA based hardware realization of the proposed high speed DWT fusion technique provides a fast, compact and low power solution for satellite image fusion [126].

Figure 5.16 shows the experimental setup for the satellite image fusion algorithm.

Virtex 6 ML605 device specification (XC6VSX315T-3FF1156):

Onboard configuration circuitry (USB to JTAG)

6-input LUT structure

40nm CMOS technology

Clock frequency is 1.2GHz

Available memory 2GB (393600 slice registers)


Figure 5.16 Experimental setup

5.8

RESULTS AND DISCUSSION

Figure 5.17 (a) shows the output images of the System Generator simulation and Figure 5.17 (b) shows the FPGA result.


Figure 5.17 (a) System generator simulation output and (b) FPGA output for averaging method

Figure 5.18 shows the simulation results of the input and fused images represented as 1-D bit stream data.

(Panels: BAND2, BAND3, BAND4, PAN, FUSED PAN&BAND2, FUSED PAN&BAND3, FUSED PAN&BAND4)

Figure 5.18 Simulation results of input and fused images for averaging method
Synthesis is the process by which an abstract form of the designed circuit behavior, or Register Transfer Level (RTL) description, is converted into a design implementation in terms of logic gates [127]. The synthesis of the Verilog code has been carried out by the Xilinx Synthesis Technology (XST) tool, which is part of the Xilinx ISE software; the results are shown in Table 5.3. Table 5.4 shows the timing summary of the averaging based fusion method.
Table 5.3 Device utilization summary of averaging method

Logic utilization                    Used    Available   Utilized
Number of slice registers            217     393600      0%
Number of slice LUTs                 216     196800      0%
Number of fully used LUT-FF pairs    216     217         99%
Number of bonded IOBs                144     600         24%
Number of BUFG/BUFGCTRLs             -       32          3%

Table 5.4 Timing summary of averaging method


Speed Grade: -3
Minimum period: 1.177ns (Maximum Frequency: 849.618MHz)
Minimum input arrival time before clock: 0.437ns
Maximum output required time after clock: 0.562ns
Maximum combinational path delay: 0.448ns
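As a consistency check on Table 5.4, the reported maximum frequency is simply the reciprocal of the minimum period: 1 / 1.177 ns ≈ 849.6 MHz.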

After the HDL synthesis process, a schematic representation of the synthesized design has been extracted. Figure 5.19 represents the RTL schematic diagram and the internal structure of the proposed image fusion design. This schematic representation shows the pre-optimized design in terms of generic symbols. It helps in discovering design issues early in the design process.

Figure 5.19 (a) Top level RTL and (b) RTL internal schematic of averaging method

The technology schematic gives the design in terms of logic elements optimized for the target Xilinx device. Figure 5.20 represents the technology schematic diagram and the internal structure of the proposed image fusion design for the averaging method.

Figure 5.20 (a) Top level Technology schematic and (b) Internal technology schematic of averaging method

Figure 5.21 shows the power report of the proposed image fusion. From the report, it is observed that the design consumes a total power of 4.349 W, out of which the leakage power is 4.280 W. It is also observed that the design can withstand a maximum operating junction temperature of 56.5 °C.
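The dynamic power attributable to the design activity is therefore the difference between the two figures, 4.349 W - 4.280 W ≈ 0.069 W.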


Figure 5.21 XPOWER analyzer power report

Table 5.5 Speed performance of fusion on FPGA

FPGA Family   Device Properties      Minimum Period   Maximum Frequency
Virtex 6      XC6VSX315T-3FF1156     1.177 ns         849.618 MHz
From Table 5.5, it is observed that an efficient hardware architecture has been designed and developed on the Virtex 6, which operates at a maximum frequency of 849.618 MHz, corresponding to a minimum period of 1.177 ns.

It is to be summarized that the present investigation has successfully developed a DWT based hardware software co-simulation algorithm using the CDF 9/7 filter for multisensor image fusion and its implementation on FPGA. The proper utilization of Simulink/Xilinx System Generator DSP blocks for the FPGA greatly shortens the development cycle from software algorithm to hardware. It leads to a fast time to market for the design.


CHAPTER 6

CONCLUSIONS AND FUTURE WORK


CHAPTER 6
CHAPTER 6: CONCLUSIONS AND FUTURE WORK

Page No

6.1

Conclusions

129

6.2

Future work

131


CHAPTER 6
CONCLUSIONS AND FUTURE WORK
This chapter deals with the conclusions drawn from the
research and also includes the future scope that is recommended for
the continuation of this investigation.
The present investigation is mainly aimed at developing a hardware software co-simulation algorithm to fuse multispectral and panchromatic satellite images and implementing it on reconfigurable hardware.
The present study started with the preprocessing of the MS and PAN images, which includes registration and resampling of the images to fulfill the pre-requirements of image fusion. From this, it is observed that bicubic interpolation gives higher PSNR values for all bands compared to the bilinear and nearest neighbor interpolation techniques. Hence, the bicubic interpolation technique has been used in this investigation.


A detailed analysis has been carried out in MATLAB Simulink R2010b software using averaging, additive and substitutive fusion rules in order to choose the appropriate wavelet filter for the design and implementation on FPGA. In all rules, the Haar, Daubechies 3 (db3) and Cohen Daubechies Feauveau (CDF) 9/7 filters are used. PSNR and CC values have been calculated to measure the performance of the image fusion techniques (a minimal sketch of these two metrics is given below). A hardware software co-simulation algorithm has been developed using System Generator for the Virtex 6 kit using the averaging method for single level CDF 9/7 decomposition.
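As a hedged reference for these two metrics, the usual definitions can be computed as follows (a sketch, not the exact scripts used in this study):

import numpy as np

def psnr(reference, fused, peak=255.0):
    # Peak signal to noise ratio between the reference image and the fused image.
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def cc(reference, fused):
    # Correlation coefficient between the two images.
    return np.corrcoef(reference.ravel(), fused.ravel())[0, 1]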


6.1

CONCLUSIONS

During the image registration process, a feature-based method has been adapted to extract and match the common features from the MS and PAN images. ERDAS IMAGINE software has been used to obtain the registered images.

Nearest neighbor, bilinear and bicubic resampling techniques have been performed to identify the better technique. Based on the PSNR values, the bicubic interpolation technique has been selected and used throughout the study.

For the averaging and substitutive methods, the 5.8 m resolution MS image has been upsampled to the PAN image resolution (1 m). For the additive method, the MS image resolution has been upsampled from 5.8 m to 4 m (nearest power of 2).
In MATLAB Simulink R2010b, averaging, additive and substitutive fusion rules are used to fuse the low resolution multispectral and high resolution panchromatic images with the Haar, Daubechies 3 and CDF 9/7 filters. From the results, it is revealed that the CDF 9/7 filter shows better performance in all the fusion rules when compared to the Haar and db3 filters. Hence, CDF 9/7 has been chosen as the best filter for FPGA implementation.


The multisensor images (MS and PAN) of size 256X256 have been resized to 128X128 for FPGA implementation. Using the Xilinx ISE 13.1 System Generator, the DWT, IDWT and fusion block models have been verified.

Memory placement increases the cost and delay in the complete hardware design. Hence, this research focused on developing the hardware software co-simulation algorithm using MATLAB Simulink to improve the image processing speed. The developed algorithm has been designed and synthesized in Xilinx ISE 13.1. The ML605 Virtex 6 FPGA board is considered for implementation. Further, as data transfer from the CPU to the FPGA consumes more time, the JTAG communication protocol is used.

The implemented design is tested on the Virtex 6 FPGA with a maximum speed grade of -3. The design consumed a total power of 4.349 W, out of which the leakage power is 4.280 W, and it operates at a maximum frequency of 849.618 MHz, i.e., a minimum clock period of 1.177 ns for the fusion process.

It is to be concluded that the present investigation has successfully developed a DWT based hardware software co-simulation algorithm using the CDF 9/7 filter for multisensor image fusion and its implementation on FPGA.


6.2

FUTURE WORK

In this study, a single level DWT based hardware software co-simulation algorithm has been developed for multisensor images. However, simulation results show better image quality at the third level of DWT decomposition. Hence, a multilevel DWT based hardware software co-simulation algorithm can be developed as an extension to this work.

In this research, the hardware model for image fusion is developed using System Generator for implementation and faster processing. Hence, optimization of the hardware to further reduce power consumption and area is not feasible in this approach. To extend the work by optimizing the hardware, abstract level implementation techniques can be considered.

The present study used the basic wavelet transform (DWT) for the image fusion algorithm. Hence, it is recommended to extend the research with advanced wavelet families.


REFERENCES


REFERENCES
[1] Mohammad Sohrabinia; Sadeghianb, Saeid; and Dadfar Manavic; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVII (B4), 2008, 1351-1354.
[2] Curran, P.J.; International Journal of Remote Sensing, 6 (11), 1985, 705-708.
[3] Yufeng, Zheng.; InTech publications, first edition, 2011.
[4] Chetan, K.; Solankil; and Narendra Patel, M.; National Conference on Recent Trends in Engineering & Technology, 2011, 13-14.
[5] Yang, J.; Ma, Y.; Yao, W.; and Lu, W.T.; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVII (B7), 2006, 1147-1150.
[6] Morris, C.; and Rajesh, R.S.; International Journal of Advanced Research in Computer Science Engineering and Information Technology, 2 (3), 2014, 249-254.
[7] Simrandeep Singh; Narwant Singh Grewal; and Harbinder Singh; International Journal of Advanced Research in Computer Science and Software Engineering, 3 (11), 2013, 1639-1642.
[8] Deepak Kumar Sahu; and Parsai, M.P.; International Journal of Modern Engineering Research (IJMER), 2 (5), 2012, 4298-4301.
[9] Sulochana, T.; Dilip Chandra, E.; Manvi, S.S.; and Imran Rasheed; International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 3 (3), 2014, 8177-8184.
[10] Adhyana Gupta; International Journal of Computational Science and Information Technology (IJCSITY), 1 (2), 2013, 1-12.
[11] Suthar, A.C.; Vayada, Md.; Patel, C.B.; and Kulkarni, G.R.; IJCSI International Journal of Computer Science Issues, 9 (2), 2012, 560-562.
[12] Aniket Burkule; and Borole, P.B.; International Journal of Advanced Research in Computer Science and Software Engineering, 3 (1), 2013, 532-536.
[13] Sambashivudu, K.; Javeed, Md.; and Kiran, R.; International Journal of Innovative Technology and Exploring Engineering (IJITEE), 3 (6), 2013, 72-76.
[14] Chandrashekar, M.; NareshKumar, U.; SudershanReddy, K.; and NagabhushanRaju, K.; International Journal of Electronic Engineering Research, 1 (3), 2009, 279-285.
[15] Jiang Dong; Dafang Zhuang; Yaohuan Huang; and Jingying Fu; Sensors, 9, 2009, 7771-7784.
[16] Wu Wenbo; Yao Jing; and Kang Tingjun; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVII (B7), 2008, 1141-1146.
[17] Yun Zhang; Photogrammetric Engineering & Remote Sensing, 2004, 657-661.
[18] Hall, D.L.; and Llinas; Proceedings of the IEEE, 85 (1), 1997, 6-23.
[19] Dipti Deodhare; RangaSuri, R.; and Amit; International Journal of Computer Science and Applications, 11 (11), 2005, 131-144.
[20] Yoonsuk Choi; Sharifahmadian, E.; and Latifi, S.; Computer Architecture And Digital Systems (CADS), 2013, 111-115.
[21] Lisa Gottesfeld Brown; ACM Computing Surveys, 24 (4), 1992, 325-376.
[22] Manjunath, B.S.; Shekhar, C.; and Chellappa, R.; DACA, 76-89.
[23] Barbara Zitova; and Jan Flusser; Elsevier, Image and Vision Computing, 21, 2003, 977-1000.
[24] Le Yu; Dengrong Zhang; and Eun Jung Holden; Elsevier Journal of Computers and Geosciences, 34, 2008, 838-848.
[25] Yuanxin Ye; and Jie Shan; Elsevier ISPRS Journal of Photogrammetry and Remote Sensing, 90, 2014, 83-95.
[26] Le Moigne, J.; Campbell, W.J.; and Cromp, R.P.; IEEE Transactions on Geoscience and Remote Sensing, 40 (8), 2002, 1849-1864.
[27] Fonseca, L.; and Costa, M.; Computer Graphics And Image Processing, 1997, 219-226.
[28] Qinfen Zheng; and Chellappa, R.; IEEE Transactions on Image Processing, 3 (2), 1993, 311-326.
[29] Li, H.; and Zhou, Y.; IEEE International Conference on Image Processing, B, 1995, 161-164.
[30] Corvi, M.; and Nicchiotti, G.; International Conference on Image Processing Proceedings, 1995, 224-227.
[31] Wu, J.; and Chung, A.; Lecture Notes in Computer Science, 2004, 270-277.
[32] Christopher Paulson; Soundararajan Ezekiel; and Dapeng Wu; Department of Electrical and Computer Engineering.
[33] Jonthan Sachs; Digital Light and Color, 2001, 1-14.
[34] Neil Anthony Dodgson; University of Cambridge Computer Laboratory, 1992.
[35] Parker, Anthony J.; Kenyon, Robert V.; and Troxel, D.; IEEE Transactions on Medical Imaging, 2 (1), 1983, 31-39.
[36] Philippe Thevenaz; Thierry Blu; and Michael Unser; IEEE Transactions on Medical Imaging, 99 (7), 2000, 739-758.
[37] Heather Studley; Keith; and Weber, T.; Assessing Post-Fire Recovery of Sagebrush, 2011, 185-196.
[38] Chetan, K.; Solanki; and Narendra, M.; National Conference on Recent Trends in Engineering & Technology, 2011.
[39] Kusum Rani; and Reecha Sharma; International Journal of Emerging Technology and Advanced Engineering, 3 (5), 2013, 288-291.
[40] Chavez, P.S.; Revisited and improved, Photogrammetric Engineering and Remote Sensing, 62, 1996, 1025-1036.
[41] Firouz Abdullah Al-Wassai; Kalyankar, N.V.; and Al-Zuky Ali, A.; Computer Vision and Pattern Recognition, 8 (3), 2011.
[42] Wen Dou; and Yunhao Chen; Journal of Computers and Geoscience, 33 (2), 2007, 219-228.
[43] Tu, T.M.; Huang, P.S.; Hung, C.L.; and Chang, C.P.; Information Fusion, 2 (3), 2001, 177-186.
[44] Yee Leung; Jmnmin Liu; and Jiangshe Zhang; IEEE Geoscience and Remote Sensing Letters, 11 (5), 2014, 369-382.
[45] Yan Luo; Rong Liu; and Yu Feng Zhu; Remote Sensing and Spatial Information Sciences, XXXVII (B7), 2008, 1155-1158.
[46] Chavez, P.S.; Sides, S.C.; and Anderson, J.A.; Photogrammetric Engineering and Remote Sensing, 57 (3), 1991, 295-303.
[47] Naidu, V.P.S.; and Raols, J.R.; Defence Science Journal, 58, 2008, 338-352.
[48] Nisha, G.; and Lalitha, Y.S.; International Journal of Emerging Research in Management & Technology, 3 (5), 2014, 54-61.
[49] Nirosha Joshitha, J.; and Medona Selin, R.; International Journal of Soft Computing and Engineering (IJSCE), 2 (2), 2012, 226-230.
[50] Yang Jinghui; and Zhang Jixian; ISPRS TC VII Symposium, XXXVIII (7B), 2010, 680-686.
[51] Morris, C.; and Rajesh, R.S.; Special Issue on Video Processing For Multimedia Systems, 5 (1), 2013, 895-898.
[52] Yufeng Zheng; 12th International Conference on Information Fusion, Seattle, 2009, 1060-1067.
[53] Wencheng Wang; Journal of Computers, 6 (12), 2011, 2559-2565.
[54] Eduardo, C.; University of Bath Press, Cantabria, 2002.
[55] Chhamman Sahu; and Raj Kumar, S.; International Journal of Engineering & Computer Science, 3 (8), 2014.
[56] Xiaoli Zhang; Yuncong Feng; Xiongfei Li; and Song Wang; Journal of Computational Information Systems, 9, 2013, 2382-2391.
[57] Toet, A.; Pattern Recognition Letters, 9 (4), 1989, 255-261.
[58] Burt; and Kolczynski; Computer Vision, Proceedings, Fourth International Conference, 1993, 173-182.
[59] Geetha, G.; Raja Mohammad, S.; and Murthy, Y.S.S.R.; Computer Science & Information Technology (CS & IT), 2012, 103-115.
[60] Rao, K.R.; and Yip, P.; IEEE Transaction Pattern Analysis Machine, 11 (4), 2007, 674-693.
[61] Naidu, V.P.S.; Journal of Communication, Navigation and Signal Processing, 1 (3), 2012, 35-45.
[62] Naidu, V.P.S.; e-Journal of Science & Technology (e-JST), 9 (1), 2014, 49-66.
[63] Anil Kumar, K.; Swati, P.; and Mahesh, G.; Journal of Information Engineering and Applications, 1 (2), 2011, 7-9.
[64] Amara Graps; IEEE Computational Science and Engineering, 2 (2), 1995, 50-61.
[65] Stephane Mallat, G.; IEEE Transactions on Pattern Analysis and Machine Intelligence, II (7), 1989, 674-698.
[66] Stephane Mallat, G.; Transactions of the American Mathematical Society, 315 (1), 1989, 69-87.
[67] Jorge Nunez; Octavi Fors; Xavier Otazu; Vicenc Pala; Roman Arbiol; and Maria Teresa Merino; IEEE Transactions on Geoscience and Remote Sensing, 44 (9), 2006, 2539-2548.
[68] Hui Li; Manjunath, B.S.; and Mitra, S.K.; IEEE Transactions on Image Processing, 4 (3), 1995, 320-334.
[69] Vadher Jagruti; IOSR Journal of Electronics and Communication Engineering (IOSR-JECE), 9 (2), 2014, 107-109.
[70] Gonzalo Pajares; and Jesus Manuel de la Cruz; Elsevier Pattern Recognition, 37, 2004, 1855-1872.
[71] Shaoqing Yang; Hongwen Lin; Anqing Zhang; and Linzhou Xu; International Conference on Electronic & Mechanical Engineering and Information Technology, 2011, 2807-2810.
[72] Yong Yang; Journal of Multimedia, 6 (1), 2011, 91-98.
[73] Lavanya, A.; Vani, K.; Sanjeevi, S.; and Suresh Kumar, R.; IEEE International Conference on Recent Trends in Information Technology, 3 (5), 2011, 920-925.
[74] Yue Jin; Yang Ruliang; and Huon Ruohong; International Conference on Radar, 2006, 1-4.
[75] Joshing; and Chao; 6th International Symposium on Advanced Optical Manufacturing and Testing Technologies, 8420, 2012.
[76] Pushkar, S.; Pradhan; and Rogerl, K.; IEEE Transactions on Geoscience and Remote Sensing, 44 (12), 2006, 3674-3686.
[77] Deepak Kumar Sahu; and Parasai, M.P.; International Journal of Modern Engineering Research (IJMER), 2 (5), 2012, 4298-4301.
[78] Zhang Bin; and Zheng Yang Guo; Computational Intelligence and Natural Computing Proceedings (CINC), 2010, 390-393.
[79] Li Ming-xi; and Chen Jun; Department of Equipment, Huang-shi Institute of Technology, 2010, 385-389.
[80] Sun Ying Li; Northwestern Polytechnic University.
[81] Ibrahim Melih Olova; Thesis on a modified 2D Discrete Cosine Transform based electro-optic and IR image fusion algorithm.
[82] Stephan Blokzyl; Matthias Vodel; and Wolfram Hardt; Deutscher Luft- und Raumfahrtkongress, 2012, 1-8.
[84] Hanen Chenini; Jean Pierre Derutin; Romuald Aufrere; and Roland Chapuis; EURASIP Journal on Advances in Signal Processing, 2013, 1-23.
[85] Johnston, C.T.; Gribbon, K.T.; and Bailey, D.G.; TENCON, 2005, 1-6.
[86] Takashi Saegusa; Tsutomu Maruyama; and Yoshiki Yamaguchi; Systems and Information Engineering, University of Tsukuba, 2008, 77-82.
[87] Mohamed, M.A.; and El-Den, R.M.; IJCSNS International Journal of Computer Science and Network Security, 10 (5), 2010, 95-102.
[88] Khasim Hussain, D.; Laxmikanth Reddy, C.; and Ashok Kumar, V.; International Journal of Computer Applications Technology and Research, 2 (6), 2013, 676-679.
[89] Steffen Klupsch; Markus Ernst; Sorin A. Huss; Rumpf, M.; and Strzodka, R.; Proceedings of IEEE Workshop Heterogeneous Reconfigurable Systems on Chip, 2002, 1-7.
[90] Madhumati, G.L.; Muralikrishna, B.; and Habibulla Khan; 2014, 417-420.
[91] MunawarAli, S.; and Naveen Kumar, S.; Elixir Image Processing, 50, 2012, 10536-10538.
[92] Elamaran, V.; and Rajkumar, G.; Journal of Theoretical and Applied Information Technology, 41 (2), 2012, 201-206.
[93] David C. Zhang; Sek Chai; and Gooitzen Vander Wal; Information Fusion, 2001, 1-8.
[94] Abhishek Acharya; Rajesh Mehra; and Vikram Singh Takher; Int. J. Comp. Tech. Appl., 2 (2), 2013, 349-358.
[95] Devika, S.V.; Khumuruddeen, S.K.; and Alekya; International Journal of Engineering Research and Applications (IJERA), 2 (1), 2012, 645-650.
[96] Abdul Manan; and Ajay Kumar; 25-28.
[97] Feng Qu; Bochao Liu; Jian Zhao; and Qiang Sun; Optics and Photonics Journal, 3, 2013, 76-78.
[98] Johnston, C.T.; Gribbon, K.T.; and Bailey, D.G.; Institute of Information Sciences & Technology, Massey University, 2004, 118-124.
[99] Qian Weixian; Bai Lianfa; Gu Guohua; and Zhang, B.; Nanjing University of Science and Technology, 2005, 57-64.
[100] Anbumozhi, S.; and Manoharan, P.S.; American Journal of Applied Sciences, 11 (5), 2014, 769-781.
[101] Neha Raut, P.; and Gokhale, A.V.; IOSR Journal of VLSI and Signal Processing, 2 (4), 2013, 26-36.
[102] Dipti Deodhare; RangaSuri, R.; and Amit; International Journal of Computer Science and Applications, 11 (11), 2005, 131-144.
[103] Vinay, K.; and Dadhwal; 50th Session of Scientific & Technical Subcommittee of COPUOS, 2013, 11-22.
[104] Gyanesh Chander; USGS, 2005.
[105] David Johnson, M.; ASPRS Annual Conference, Portland, 2008.
[106] Jacqueline Le Moigne; Nathan S. Netanyahu; and Roger D. Eastman; Cambridge University Press, 2011.
[107] Bruce D. Lucas; and Takeo Kanade; Proceedings of Imaging Understanding Workshop, 1981, 121-130.
[108] Ezzeldeen, R.M.; Ramadan, H.H.; Nazmy, T.M.; Adel Yehia, M.; and Addel Wahab, M.S.; The Egyptian Journal of Remote Sensing and Space Sciences, 13, 2010, 31-36.
[109] Gonzalez, R.C.; and Woods, R.E.; Digital Image Processing, Prentice Hall, 2008.
[110] Verbyla, D.L.; Practical GIS Analysis, Taylor and Francis, 2002.
[111] Goldsmith, N.; http://www.jiscdigitalmedia.ac.uk/stillimages/advice/resampling-raster-images/, 2009.
[112] Huber, W.; http://www.quantdec.com/SYSEN597/GTKAV/section9/map_algebra.htm, 2009.
[113] Gagandeep Kour; and Sharad Singh, P.; International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, 2 (11), 2013, 5491-5496.
[114] Martin Vetterli; IEEE Transactions on Signal Processing, 40 (9), 1992, 2207-2232.
[115] Hartmann, D.L.; ATMS 552 Notes, 2014, 255-275.
[116] Sifuzzaman, M.; Islam, M.R.; and Ali, M.Z.; Journal of Physical Sciences, 13, 2009, 121-134.
[117] Daniel Lee, T.L.; and Akio Yamamoto; Hewlett Packard Journal, 1994, 44-52.
[118] Jorgensen Palle, E.T.; and Myung-Sin Song; U.S. National Science Foundation.
[119] Cedric Vonesch; Thierry Blu; and Michael Unser; ICASSP, 2005, 593-596.
[120] Cohen, A.; Ingrid Daubechies; and Feauveau, J.C.; Communications on Pure and Applied Mathematics, XLV, 1992, 485-560.
[121] Pardhan, P.S.; and King, R.; Proceedings of the World Congress on Engineering, 44, 2006, 3674-3686.
[122] Li, H.; International Conference, 13 (16), 1994, 51-55.
[123] Sascha Klonus; and Manfred Ehlers; 12th International Conference on Information Fusion, 2009, 1409-1416.
[124] Saidani, T.; Dia, D.; Elhamzi, W.; Atri, M.; and Tourki, R.; Proceedings of the World Congress on Engineering, 1, 2009, 37.
[125] Xilinx System Generator reference guide.
[126] Shajan, P.X.; Muniraj, N.J.R.; and John Abraham, T.; International Journal of Computer Science and Information Technologies, 3 (4), 2012, 168-177.
[127] Chaithra, N.M.; and Ramana Reddy, K.V.; International Journal of Engineering and Advanced Technology, 2 (6), 2013, 243-247.


INDEX


INDEX
A
Additive 69, 72, 84, 87
Approximation 35, 37, 76, 87, 89, 116
Averaging 6, 28, 45, 102, 117

B
Bicubic 12, 22, 62, 68, 128
Bilinear 56, 62, 63, 68

C
CDF 12, 69, 73, 77, 91, 113, 125
Correlation Coefficient 15, 20, 24, 90

D
Daubechies (db) 72, 77, 78, 128, 129
Diagonal 31, 76, 87, 89, 116
Discrete Wavelet Transform (DWT) 7, 12, 34, 73, 83, 102, 108, 116, 130

F
Fusion 3, 6, 16, 32, 72, 84, 88, 102, 108, 116

G
Gradient 29, 31, 32

H
Haar 12, 69, 72, 77, 95, 99
High pass 15, 39, 76, 79, 81, 85, 114

I
Image processing 3, 61, 76
Infrared 4, 26, 36, 40, 48, 64
Interpolation 22, 23, 61, 63, 65, 128

L
Low pass 76, 81, 85, 109, 112, 116

M
Matlab 10, 41, 47, 52, 72, 103, 108, 128
Multi-resolution 43, 73, 74, 75
Multi-sensor 11, 17, 5, 56, 72, 109, 130
Multispectral 4, 8, 15, 25, 56, 84, 129

N
Nearest neighbor 12, 23, 62, 68, 128

P
Panchromatic 4, 8, 15, 36, 56, 84, 128, 129

R
Registration 8, 17, 19, 51, 59, 129
Remote Sensing 3, 4, 16, 62, 75
Resampling 8, 12, 23, 56, 68

S
Satellite 4, 15, 17, 56, 57, 72, 119, 128
Simulink 10, 42, 47, 103, 106, 109, 128
System Generator 10, 42, 47, 52, 104, 109, 111, 130

V
Vertical 20, 31, 76, 87, 90, 116

X
Xilinx 10, 42, 43, 105, 116, 123, 130


LIST OF PUBLICATIONS


LIST OF PUBLICATIONS
1. List of Research Papers Presented in Journals:
G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, S. Varadarajan, DWT Based Pan-Sharpening of Low Resolution Multispectral Satellite Images, i-Manager's Journal on Image Processing, 1(2), April-June 2014, 25-32.

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, Cyril Prasanna Raj P., Evaluation of DWT based Image Fusion with three Different Resampling Methods, International Advanced Research Journal in Science, Engineering and Technology, 2(2), February 2015, 10-14.

2. List of Research Papers Presented in Conferences:

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, FPGA Implementation and Performance Analysis of Image Fusion using DWT, IEEE EIT 2013 International Conference, May 2013.

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, An Efficient Hardware Implementation for Image Fusion Using Lifting Scheme 2-D DWT, Intellectbase International Consortium Academic Conference, October 2013.

G. Mamatha, M.V. Lakshmaiah, V. Sumalatha, FPGA Implementation of Satellite Image Fusion using Wavelet Substitution Method, IEEE and SPRINGER technically supported Science and Information Conference 2015, July 28-30, London, UK. (Paper accepted, yet to be published)
