Proceedings of the 2011 International Conference on Image Information Processing (ICIIP 2011)

978-1-61284-861-7/11/$26.00 ©2011 IEEE

Image Fusion Using Hierarchical PCA
Ujwala Patil,
Uma Mudengudi
Dept. of Electronics and Communication,
BVB College of Engineering and Technology,
Hubli, Karnataka, India.
ujwalapatil@bvb.edu


Abstract— In this paper we propose an image fusion algorithm using hierarchical PCA. Image fusion is the process of combining two or more registered images of the same scene to obtain a more informative image. Hierarchical multiscale and multiresolution image processing techniques, such as pyramid decomposition, are the basis for the majority of image fusion algorithms. Principal component analysis (PCA) is a well-known scheme for feature extraction and dimension reduction and is also used for image fusion. We propose an image fusion algorithm that combines pyramid and PCA techniques, and we carry out quality analysis of the proposed fusion algorithm without a reference image. There is an increasing need for quality analysis of fusion algorithms because fusion algorithms are data-set dependent. Subjective analysis of the fusion algorithm using hierarchical PCA is done by considering the opinions of experts and non-experts, and for quantitative quality analysis we use different quality metrics. We demonstrate fusion using pyramid, wavelet, and PCA fusion techniques, carry out performance analysis of these four fusion methods using different quality measures on a variety of data sets, and show that the proposed image fusion using hierarchical PCA is better for the fusion of multimodal images. Visual inspection together with quality parameters is used to evaluate the fusion results.

Keywords— image fusion, image registration, PCA, image pyramids, no-reference quality analysis

I. INTRODUCTION
Image fusion is the process of generating a single fused image from a set of input images which are assumed to be registered. The input images could be multi-sensor, multimodal, multi-focal, or multi-temporal. One of the important preprocessing steps for the fusion process is image registration, i.e., the coordinate transformation of one image with respect to the other. We propose image fusion using hierarchical PCA. Fusion algorithms are input dependent, so in order to carry out quality analysis of the proposed fusion algorithm, different fusion algorithms using pyramids, PCA, and wavelets are implemented. We carry out quality analysis of these fusion algorithms using different quality measures proposed in [5] and the quality metrics QM1 and QM2 proposed in [8], and show that hierarchical PCA gives better results for multimodal images. Image fusion finds application in navigation guidance, object detection and recognition, medical diagnosis, satellite imaging for remote sensing, robot vision, military and civilian surveillance, etc.
Several fusion algorithms are available, ranging from simple pixel-based methods to sophisticated wavelet- and PCA-based ones. In what follows we demonstrate the results of different pyramid, kernel-PCA, and wavelet based fusion algorithms and compare them with the proposed image fusion technique.
Pyramid based image fusion: In the pyramid approach, pyramid levels obtained from the downsampling of the source images are fused at the pixel level according to fusion rules. The fused image is obtained by reconstructing the fused image pyramid [4]. Different image pyramids, such as the Gaussian, Laplacian, gradient, ratio-of-low-pass, filter-subtract-decimate (FSD), morphological, and contrast pyramids, can be used for image fusion with different fusion rules.
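As a concrete illustration of this approach, the following minimal Python sketch builds Laplacian pyramids and fuses them with a per-pixel maximum-magnitude rule; the function names are ours, OpenCV is assumed only for the up/down-sampling, and the fusion rule shown is just one of the rules mentioned above:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Build a Laplacian pyramid: band-pass levels plus the final low-pass residual."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # detail (band-pass) level
        cur = down
    pyr.append(cur)            # coarsest low-pass level
    return pyr

def fuse_max(pyr1, pyr2):
    """Pixel-wise maximum-magnitude fusion rule applied level by level."""
    return [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pyr1, pyr2)]

def reconstruct(pyr):
    """Collapse a (fused) Laplacian pyramid back into a single image."""
    img = pyr[-1]
    for level in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(level.shape[1], level.shape[0])) + level
    return img
```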
PCA based image fusion: Principal component analysis is a mathematical tool which transforms a number of correlated variables into a smaller number of uncorrelated variables. PCA is widely used in image classification. The authors in [1], [5] describe PCA based image fusion. Fusion using PCA is achieved by a weighted sum of the source images, where the weights for each source image are obtained from the normalized eigenvector of the covariance matrix of the source images. The authors in [9] suggest region based image fusion, since useful information is not present only at individual pixels but also in features such as size, shape, edges, and regions. We implement kernel based PCA by considering 3×3 kernels of both input images I1(x, y) and I2(x, y); these are the input matrices for the PCA algorithm. The covariance matrix and the corresponding normalized eigenvalues for these kernels are calculated, the normalized values are applied as weights for the input images, and the fused image is the weighted sum of the input images. In the proposed fusion algorithm, hierarchical PCA, we combine pyramid decomposition and PCA techniques to obtain the advantages of both pixel based and region based fusion methods. From the literature it is evident that PCA [1], [5], wavelet [2], [5], and pyramid based image fusion approaches [7] perform better than other fusion methods, so we compare these image fusion methods with hierarchical PCA. Subjective analysis and quantitative analysis are the major forms of quality assessment for image fusion algorithms [5]. We perform subjective analysis by considering the opinions of experts and non-experts, and quantitative analysis of the fusion algorithms using no-reference quality metrics. We carry out quality analysis of the image fusion techniques based on hierarchical PCA, pyramid, PCA, and wavelets using different quality measures as given in [3], [6]. Our contributions are as follows:

1) We propose an image fusion algorithm using hierarchical PCA.

2) We carry out quality analysis of the image fusion algorithms based on hierarchical PCA, PCA, pyramid, and wavelet fusion, and show that hierarchical PCA works better for multimodal images.

In Section II we explain the proposed image fusion algorithm using hierarchical PCA. Section III presents the results of the fusion algorithms, Section IV gives the quality analysis of the image fusion methods, and Section V concludes the paper.
II. IMAGE FUSION USING HIERARCHICAL PCA
The goal of image fusion is to generate a composite image that is more informative than its input images. We propose an image fusion model that combines pyramid and PCA fusion techniques. The proposed algorithm performs pixel-wise fusion through its pyramid decomposition and region based fusion (since we use kernel PCA) while fusing each level of the source image pyramids, as shown in Fig. 1. Image I1 and Image I2 are the image pyramids of the input images I1(x, y) and I2(x, y) respectively, and I1-Li, I2-Li, and If-Li are the levels of the pyramids of I1(x, y), I2(x, y), and the fused image If(x, y) respectively (for i = 1, 2, 3, ..., K, where K is the maximum level of the pyramid decomposition). Image fusion using kernel PCA is achieved by considering a 3×3 window to calculate the weights for each pixel of the input kernel. We calculate normalized weights P1 and P2 for I1(x, y) and I2(x, y) respectively, and the fused image is

$$I_f(x, y) = P_1 \cdot I_1(x, y) + P_2 \cdot I_2(x, y)$$
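A minimal sketch of this per-kernel weighted sum is given below. The helper names are ours, and the weight rule, the normalized components of the dominant eigenvector of the 2×2 covariance matrix of the two windows (the standard PCA fusion formulation of [5]), is our reading of the description above:

```python
import numpy as np

def pca_weights(patch1, patch2):
    # Treat the two patches as two variables; the normalized components of the
    # dominant eigenvector of their 2x2 covariance matrix give the fusion weights.
    data = np.stack([patch1.ravel(), patch2.ravel()])  # shape (2, 9) for 3x3 kernels
    cov = np.cov(data)                                 # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])         # dominant eigenvector
    return v / v.sum()                                 # weights P1, P2 summing to 1

def kernel_pca_fuse(img1, img2, k=3):
    # Fuse two registered images patch by patch as If = P1 * I1 + P2 * I2.
    # Boundary patches and degenerate (constant) patches are not specially handled.
    img1 = img1.astype(np.float32)
    img2 = img2.astype(np.float32)
    fused = np.zeros_like(img1)
    for y in range(0, img1.shape[0], k):
        for x in range(0, img1.shape[1], k):
            p1 = img1[y:y + k, x:x + k]
            p2 = img2[y:y + k, x:x + k]
            w = pca_weights(p1, p2)
            fused[y:y + k, x:x + k] = w[0] * p1 + w[1] * p2
    return fused
```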

Fig. 1. Hierarchical PCA image fusion model.
The steps involved are (a code sketch combining them is given after the list):

1) Construct the pyramids of the input images I1(x, y) and I2(x, y).

2) Fuse the Kth levels of the input image pyramids using the kernel PCA based fusion technique to construct the Kth level of the fused pyramid, If-LK.

3) Move to the next level of the pyramids and repeat step 2 until all K levels of the fused image pyramid are constructed.

4) Reconstruct the fused image If(x, y) from the fused pyramid.
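Putting the earlier sketches together, the four steps can be assembled as follows; this reuses the hypothetical laplacian_pyramid, reconstruct, and kernel_pca_fuse helpers sketched in the preceding sections and is not the authors' reference implementation:

```python
def hierarchical_pca_fusion(img1, img2, levels=4, kernel=3):
    # Step 1: build Laplacian pyramids of both registered input images.
    pyr1 = laplacian_pyramid(img1, levels)
    pyr2 = laplacian_pyramid(img2, levels)
    # Steps 2-3: fuse corresponding pyramid levels with kernel-PCA fusion.
    fused_pyr = [kernel_pca_fuse(l1, l2, kernel) for l1, l2 in zip(pyr1, pyr2)]
    # Step 4: collapse the fused pyramid back into the fused image If(x, y).
    return reconstruct(fused_pyr)
```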
III. RESULTS OF FUSION ALGORITHMS
We demonstrate the hierarchical PCA results on multimodal images (MRI and CT scans), multi-sensor images (visible and IR range images), and multi-focus images. These results are shown in Fig. 2, Fig. 3, and Fig. 4 respectively.


Fig. 2. Multimodal image fusion. (a) MRI image of a human head. (b) CT image (images taken from www.fusion.org). (c) Hierarchical PCA (Laplacian image decomposition). (d) Laplacian pyramid (max fusion rule). (e) PCA. (f) Wavelets. (g) Hierarchical PCA (FSD image decomposition). (h) FSD pyramid (max fusion rule).

IV. QUALITY ANALYSIS OF FUSION ALGORITHMS
There is an increasing need for performance or quality
assessment tools in order to compare the results obtained with
different image fusion algorithms. This analysis can be used to
select a specific fusion algorithm for a particular type of data
set.
Quantitative quality analysis: We carry out quantitative quality analysis of the fusion algorithms using spatial frequency (SF), entropy (ENTR), standard deviation (SD), cross entropy (CE), fusion mutual information (FMI), the structural similarity index (SSIM), and the quality metrics QM1 and QM2, all computed without a reference image. (These parameters are explained in the Appendix.) The quality analysis of the image fusion algorithms based on hierarchical PCA, PCA, pyramid, and wavelets is shown in Table I. Quality parameters such as SD, SSIM, FMI, CE, QM1, and QM2 are high for the fusion of multimodal data using hierarchical PCA. In a few cases the quality parameters are high but the visibility of the fused image is poor (e.g., wavelet fusion of multimodal data). Considering both visibility and the quality metrics, we conclude that hierarchical PCA works better for multimodal images, pyramids give better results for multi-sensor data, and multi-focus images can be fused well using wavelets.



Fig. 3. Multisensor image fusion. (a) Visible-range image. (b) Infrared image (images taken from www.fusion.org). (c) FSD pyramid (contrast fusion rule). (d) PCA. (e) Hierarchical PCA. (f) Wavelets.

Fig. 4. Multi-focus image fusion. (a) Image with the clock in focus. (b) Image with the man in focus (images taken from www.fusion.org). (c) Wavelets (max fusion rule). (d) Hierarchical PCA. (e) PCA. (f) Pyramid.

Subjective quality analysis: We carry out subjective quality analysis of the fusion algorithms. The results of the fusion algorithms for a variety of data sets are shown in Fig. 2, Fig. 3, and Fig. 4. The two input MRI and CT scan images of a human head are shown in Fig. 2(a) and (b), and the fused images are shown in Fig. 2(c)-(h). The results of the new image fusion algorithm for multimodal data using hierarchical PCA are shown in Fig. 2(c) and (g). Non-experts, a set of students and faculty members, opined that the visibility of the fused images obtained using Laplacian pyramid decomposition with PCA fusion, shown in Fig. 2(c), and FSD pyramid decomposition with PCA fusion, shown in Fig. 2(g), is better than that of the individual Laplacian pyramid, FSD pyramid, and PCA fusion techniques; the quantitative analysis in Table I agrees with this assessment. Experts, practicing doctors, opined that the fused image obtained using Laplacian pyramid decomposition with PCA fusion is more informative than that obtained using FSD pyramid decomposition with PCA fusion.
The two input visible and IR range images are shown in Fig. 3(a) and (b), and the fused images are shown in Fig. 3(c)-(f). Multi-sensor images can be fused well using pyramids, as shown in Fig. 3(c).
The two multi-focus input images are shown in Fig. 4(a) and (b), and the fused images are shown in Fig. 4(c)-(f). Wavelets can be used for multi-focus image fusion, as shown in Fig. 4(c).



TABLE I. QUANTITATIVE ANALYSIS OF FUSION ALGORITHMS
(QM1 is computed with α = 0.6 and β = γ = 1; QM2 with α = 0.6, β = γ = 1, and θ = 1.5.)

Input          Fusion method                 SD      SSIM   VI       SF      MI      CE      QM1     QM2
Multi-modal    Hierarchical Laplacian PCA    58.56   0.58   4139     9.43    14.23   2.33    41.8    49
               Hierarchical FSD PCA          64      0.47   3547     10.2    14.4    2.3     41.7    50
               PCA                           58      0.49   4602     16.6    3.6     0.4     19.43   27
               Laplacian pyramid             39.16   0.48   5594     5.65    13.2    0.02    35.16   43.3
               Wavelets                      36.28   0.89   1134.76  7.056   4.248   0.592   27.65   18.48
Multi-sensor   Hierarchical Laplacian PCA    48.79   0.4    602      5.34    4.34    0.53    17.6    24.7
               PCA                           4.17    0.46   580      25.56   3.6     0.4     38.9    43.3
               FSD pyramid                   51.124  0.92   433.40   2.73    3.77    0.72    91.52   96.1
               Wavelets                      56.56   0.61   539.83   5.746   5.605   0.651   71.8    76.7
Multi-focus    Hierarchical Laplacian PCA    46.7    0.8    661.3    5.2     0.26    0.06    10.25   17.2
               PCA                           46.853  0.88   681.99   10.338  0.064   0.011   12.18   17.7
               RMS-ROLP pyramid              41.669  0.59   585.87   9.704   0.368   0.165   12.08   17.52
               Wavelets                      46.978  0.83   1134.76  9.884   0.213   0.046   15.02   22.03

V. CONCLUSION
In this paper we proposed a new fusion algorithm that combines pyramid decomposition and PCA. The proposed algorithm is compared with the individual PCA and pyramid methods and with wavelet fusion algorithms on a variety of input images. We carried out qualitative and quantitative analysis of these fusion algorithms. Experts opined that the multimodal fused image obtained using the hierarchical PCA algorithm is more informative than the fused images obtained using the individual pyramid or PCA algorithms. We have demonstrated PCA, pyramid, hierarchical PCA, and wavelet based fusion algorithms on multi-modal, multi-sensor, and multi-focus data sets using different fusion rules. We carried out no-reference quality analysis using different quality parameters and found that multi-modal, multi-sensor, and multi-focus images can be effectively fused using hierarchical PCA, pyramid, and wavelet based fusion approaches respectively. However, we use visual inspection along with the quality metrics to arrive at conclusions about the fusion results.
REFERENCES
[1] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532-540, 1983.
[2] A. Cohen, I. Daubechies, and P. Vial. Wavelets on the interval and fast wavelet transforms. Applied and Computational Harmonic Analysis, 1(1):54-81, 1993.
[3] J. Kong, K. Zheng, J. Zhang, and X. Feng. Multi-focus image fusion using spatial frequency and genetic algorithm. International Journal of Computer Science and Network Security, 8(2):220-224, May 2008.
[4] J. Kong, K. Zheng, J. Zhang, and X. Feng. Multi-focus image fusion using spatial frequency and genetic algorithm. International Journal of Computer Science and Network Security, 8(2):220-224, May 2008.
[5] A. Mishra and S. Rakshit. Fusion of noisy multi-sensor imagery. Defence Science Journal, 58(1):136-146, 2008.
[6] V. P. S. Naidu and J. R. Raol. Pixel-level image fusion using wavelets and principal component analysis. Defence Science Journal, 58(3):338-352, May 2008.
[6] G. Piella and H. Heijmans. A new quality metric for image fusion. Proceedings of the International Conference on Image Processing, 3(1):173-176, 2003.
[7] F. Sadjadi. Comparative image analysis. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1-8, 2005.
[8] Ujwala Patil, Uma M., and Ganesh. Image fusion framework. International Conference on Telecommunication and Computing, Springer, pages 653-657, March 2011.
[9] Wang Zhong-hua, Qin Zheng, and Liu Yu. A framework of region based dynamic image fusion. Journal of Zhejiang University (ISSN 1009-3095), pages 56-62, August 2007.


APPENDIX:
Spatial frequency (SF): This frequency in the spatial domain indicates the overall activity level in the fused image [5].

$$SF = \sqrt{RF^2 + CF^2}$$

where RF and CF are the row and column frequencies respectively, given by


$$RF = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=2}^{N}\left[I_f(x, y) - I_f(x, y-1)\right]^2}$$

$$CF = \sqrt{\frac{1}{MN}\sum_{x=2}^{M}\sum_{y=1}^{N}\left[I_f(x, y) - I_f(x-1, y)\right]^2}$$

where $I_f(x, y)$ is the intensity of the fused image and M × N is the size of the image. A higher SF value indicates more detail in the fused image and hence a better fusion result.
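A direct NumPy transcription of these formulas might look as follows (the function name is ours):

```python
import numpy as np

def spatial_frequency(fused):
    # RF and CF are RMS values of horizontal and vertical first differences.
    img = fused.astype(np.float64)
    mn = img.size
    rf = np.sqrt(np.sum((img[:, 1:] - img[:, :-1]) ** 2) / mn)  # row frequency
    cf = np.sqrt(np.sum((img[1:, :] - img[:-1, :]) ** 2) / mn)  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```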

Entropy (ENTR): Entropy is used to measure the information
content of an image. Entropy is sensitive to noise and other
unwanted rapid fluctuations. An image with high information
content would have high entropy [5].
$$ENTR(I_f) = -\sum_{i=0}^{L} h_{I_f}(i)\,\log_2 h_{I_f}(i)$$

where $h_{I_f}(i)$ is the normalized histogram of the fused image and L is the number of bins in the histogram.
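A small sketch of this computation, assuming an 8-bit image and 256 histogram bins:

```python
import numpy as np

def entropy(fused, bins=256):
    # Normalized gray-level histogram of the fused image.
    hist, _ = np.histogram(fused.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # empty bins contribute 0 * log(0) = 0
    return -np.sum(p * np.log2(p))
```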

Standard Deviation (SD): SD is composed of the signal and
noise parts. This metric would be more efficient in the absence
of noise. It measures the contrast in the fused image. An image
with a high contrast would have high standard deviation [5].
$$SD = \sqrt{\sum_{i=0}^{L}\left(i - \bar{i}\right)^2 h_{I_f}(i)}, \qquad \bar{i} = \sum_{i=0}^{L} i\, h_{I_f}(i)$$

where $h_{I_f}(i)$ is the normalized histogram of the fused image $I_f(x, y)$ and L is the number of bins in the histogram.
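The same normalized histogram can be reused for the standard deviation; in this sketch the bin centers are taken as the gray levels, which is our assumption for integer-valued images:

```python
def standard_deviation(fused, bins=256):
    hist, edges = np.histogram(fused.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()             # normalized histogram h(i)
    i = edges[:-1]                    # gray level represented by each bin
    mean = np.sum(i * p)
    return np.sqrt(np.sum((i - mean) ** 2 * p))
```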

Cross Entropy (CE): CE evaluates the similarity in the
information content between the input images and the fused
image. Fused and reference images containing the same
information would have low cross entropy [5].

$$CE(I_1, I_2; I_f) = \frac{CE(I_1; I_f) + CE(I_2; I_f)}{2}$$

where

$$CE(I_1; I_f) = \sum_{i=0}^{L} h_{I_1}(i)\,\log_2\frac{h_{I_1}(i)}{h_{I_f}(i)}$$

$$CE(I_2; I_f) = \sum_{i=0}^{L} h_{I_2}(i)\,\log_2\frac{h_{I_2}(i)}{h_{I_f}(i)}$$

and $I_1$ and $I_2$ are the two input images and $I_f$ is the fused image.
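A sketch of the cross-entropy computation; the guard against empty histogram bins is an implementation detail of ours, not something specified in the text:

```python
import numpy as np

def cross_entropy(img1, img2, fused, bins=256):
    def norm_hist(img):
        h, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
        return h / h.sum()

    def ce(p, q):
        mask = (p > 0) & (q > 0)      # skip bins where either histogram is empty
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    hf = norm_hist(fused)
    return 0.5 * (ce(norm_hist(img1), hf) + ce(norm_hist(img2), hf))
```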

Measure of Structural Similarity (SSIM): Natural image signals are highly structured and their pixels exhibit strong dependencies, which carry vital information about the structure of the objects in the scene. SSIM compares local patterns of pixel intensities that have been normalized for luminance and contrast.
$$SSIM = \frac{\left(2\,\mu_{I_r}\,\mu_{I_f} + C_1\right)\left(2\,\sigma_{I_r I_f} + C_2\right)}{\left(\mu_{I_r}^2 + \mu_{I_f}^2 + C_1\right)\left(\sigma_{I_r}^2 + \sigma_{I_f}^2 + C_2\right)}$$

where $I_r$ is the source image being compared with the fused image $I_f$, and

$$\mu_{I_f} = \frac{1}{MN}\sum_{x=1}^{M}\sum_{y=1}^{N} I_f(x, y)$$

$$\sigma_{I_r}^2 = \frac{1}{MN - 1}\sum_{x=1}^{M}\sum_{y=1}^{N}\left(I_r(x, y) - \mu_{I_r}\right)^2$$

$$\sigma_{I_f}^2 = \frac{1}{MN - 1}\sum_{x=1}^{M}\sum_{y=1}^{N}\left(I_f(x, y) - \mu_{I_f}\right)^2$$

$$\sigma_{I_r I_f} = \frac{1}{MN - 1}\sum_{x=1}^{M}\sum_{y=1}^{N}\left(I_r(x, y) - \mu_{I_r}\right)\left(I_f(x, y) - \mu_{I_f}\right)$$
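A global, single-window transcription of this formula is sketched below; the stabilizing constants C1 = (0.01·255)² and C2 = (0.03·255)² are the common choice for 8-bit images and are our assumption rather than values stated in the paper:

```python
import numpy as np

def global_ssim(src, fused, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    x = src.astype(np.float64).ravel()
    y = fused.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)              # unbiased variances
    cov = np.sum((x - mx) * (y - my)) / (x.size - 1)   # covariance of the two images
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```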

Fusion Mutual Information (FMI) [5], [7]: FMI measures the degree of dependence of the fused image on the source images; a larger value implies better quality. If the joint histogram between $I_1(x, y)$ and $I_f(x, y)$ is defined as $h_{I_1 I_f}(x, y)$ and that between $I_2(x, y)$ and $I_f(x, y)$ as $h_{I_2 I_f}(x, y)$, then the fusion mutual information between the source images and the fused image is

$$FMI = MI_{I_1 I_f} + MI_{I_2 I_f}$$

where

$$MI_{I_1 I_f} = \sum_{x=1}^{M}\sum_{y=1}^{N} h_{I_1 I_f}(x, y)\,\log_2\frac{h_{I_1 I_f}(x, y)}{h_{I_1}(x, y)\, h_{I_f}(x, y)}$$

$$MI_{I_2 I_f} = \sum_{x=1}^{M}\sum_{y=1}^{N} h_{I_2 I_f}(x, y)\,\log_2\frac{h_{I_2 I_f}(x, y)}{h_{I_2}(x, y)\, h_{I_f}(x, y)}$$
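A histogram-based sketch of FMI, estimating the joint histogram with NumPy's histogram2d and assuming 256 bins:

```python
import numpy as np

def mutual_information(src, fused, bins=256):
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of the source image
    py = pxy.sum(axis=0, keepdims=True)       # marginal of the fused image
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask]))

def fusion_mutual_information(img1, img2, fused, bins=256):
    return mutual_information(img1, fused, bins) + mutual_information(img2, fused, bins)
```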


Visibility (VI): The visibility of an image block is defined as

$$VI = \alpha(\mu)\sum_{x=1}^{M}\sum_{y=1}^{N}\left|\frac{I_f(x, y) - \mu}{\mu}\right|$$

where $\mu$ is the mean intensity value of the block and $\alpha(\mu) = 1/\mu$ is a weighting factor.
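Under the reconstruction above, with the weighting factor taken as 1/μ (an assumption), block visibility can be sketched as:

```python
import numpy as np

def visibility(block):
    b = block.astype(np.float64)
    mu = b.mean()                             # mean intensity of the block
    return (1.0 / mu) * np.sum(np.abs(b - mu) / mu)
```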

Quality metrics QM1 and QM2: We calculate the quality metrics QM1 and QM2 to evaluate the performance of the fusion algorithms [8].

$$QM_1 = e^{\alpha(FMI + CE + SSIM)} + \beta\,\ln(SF) + \gamma\,\ln(VI + SD)$$

$$QM_2 = e^{\alpha(FMI + CE + SSIM)} + \beta\,\ln(SF) + \gamma\,\ln(VI) + \theta\,\ln(SD)$$


where QM1 is Quality Metric 1, QM2 is Quality Metric 2, and α, β, γ, and θ are tuning parameters. The first term of QM1 depends on FMI, CE, and SSIM; we apply an exponential function to this term to amplify it, since the mutual information and cross entropy values are small. SF has a relatively large value compared to the other metrics, so it is compressed with a logarithmic function. The sum of SD and VI is also relatively large, so we apply logarithmic compression to that sum as well. In QM2 we compress VI and SD with separate logarithmic terms. Table I shows the analysis using these quality parameters.
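A sketch that combines the individual measures into QM1 and QM2; the default parameter values are taken from the Table I header (α = 0.6, β = γ = 1, θ = 1.5):

```python
import numpy as np

def qm1(fmi, ce, ssim, sf, vi, sd, alpha=0.6, beta=1.0, gamma=1.0):
    # Exponential term amplifies the small FMI/CE/SSIM values; SF and (VI + SD) are log-compressed.
    return np.exp(alpha * (fmi + ce + ssim)) + beta * np.log(sf) + gamma * np.log(vi + sd)

def qm2(fmi, ce, ssim, sf, vi, sd, alpha=0.6, beta=1.0, gamma=1.0, theta=1.5):
    # Same structure, but VI and SD are log-compressed separately.
    return np.exp(alpha * (fmi + ce + ssim)) + beta * np.log(sf) + gamma * np.log(vi) + theta * np.log(sd)
```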
