
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS

A Multiple-Kernel Fuzzy C-Means Algorithm for Image Segmentation

Long Chen, C. L. Philip Chen, Fellow, IEEE, and Mingzhu Lu, Student Member, IEEE
Abstract—In this paper, a generalized multiple-kernel fuzzy C-means (FCM) (MKFCM) methodology is introduced as a framework for image-segmentation problems. In the framework, aside from the fact that composite kernels are used in the kernel FCM (KFCM), a linear combination of multiple kernels is proposed, and the updating rules for the linear coefficients of the composite kernel are derived as well. The proposed MKFCM algorithm provides a new flexible vehicle to fuse different pixel information in image-segmentation problems. That is, different pixel information represented by different kernels is combined in the kernel space to produce a new kernel. It is shown that two successful enhanced KFCM-based image-segmentation algorithms are special cases of MKFCM. Several new segmentation algorithms are also derived from the proposed MKFCM framework. Simulations on the segmentation of synthetic and medical images demonstrate the flexibility and advantages of MKFCM-based approaches.

Index Terms—Composite kernel, fuzzy C-means (FCM), image segmentation, kernel function, multiple kernel.

Manuscript received March 5, 2010; revised July 27, 2010 and January 19, 2011; accepted February 9, 2011. This work was supported in part by the National Aeronautics and Space Administration under Grant NNC04GB35G and in part by the Chinese National Basic Research Program (973 Program) under Grant 2011CB302801. The review of this paper was arranged by Editor A. Gomez Skarmeta.

L. Chen and M. Lu are with the Department of Electrical and Computer Engineering, The University of Texas, San Antonio, TX 78249-0669 USA (e-mail: gbu922@my.utsa.edu; lwf054@my.utsa.edu).

C. L. P. Chen is with the Faculty of Science and Technology, The University of Macau, Macau, China (e-mail: philip.chen@ieee.org).

Digital Object Identifier 10.1109/TSMCB.2011.2124455
I. INTRODUCTION

Image segmentation is a central task in many research fields, including computer vision [5] and intelligent image and video analysis [6]. Its essential goal is to split the pixels of an image into a set of regions such that the pixels in the same region are homogeneous according to some properties and the pixels in different regions are not similar. Clustering, particularly fuzzy C-means (FCM)-based clustering and its variants, has been widely used for image segmentation due to its simplicity and fast convergence [4], [6]-[9], [11], [12], [14], [21]. By carefully selecting input features such as pixel color, intensity, texture, or a weighted combination of these data, the FCM algorithm can segment an image into several regions in accordance with the resulting clusters. Recently, FCM and other clustering-based image-segmentation approaches have been improved by including the local spatial information of pixels in the classical clustering procedures [4], [6]-[9], [14], [15], [21], [22]. For example, an additional term about the difference between the local spatial information and the cluster centers is attached to the traditional objective function of FCM algorithms [4]. Because of the embedded local spatial information, the new FCM has demonstrated robustness to noise in images [4], [26], [31].
In addition to the incorporation of local spatial information, the kernelization of FCM has brought an important performance improvement [3], [11], [19], [23], [26], [30], [31], [36]. The kernel FCM (KFCM) algorithm is an extension of FCM that maps the original inputs into a much higher dimensional Hilbert space by some transform function. After this mapping into the reproducing kernel Hilbert space, the data can be separated or clustered more easily. Transformations to the kernel space have already been studied in the image-segmentation setting. Liao et al. [23] directly applied KFCM to image-segmentation problems, where the input data selected for clustering are the combination of the pixel intensity and the local spatial information of a pixel, represented by the mean or the median of the neighboring pixels. Chen and Zhang [31] applied the idea of kernel methods to the calculation of the distances between the examples and the cluster centers. They compute these distances in the extended Hilbert space and have demonstrated that such distances are more robust to noise. To keep the merit of applying local spatial information, an additional term about the difference between the local spatial information and the cluster centers (also computed in the extended Hilbert space) is appended to the objective function. More kernel methods, the kernelization of clustering algorithms besides FCM, and their applications to the problems of image segmentation and classification can be found in [2], [5], [10], [13], [17], [18], [28], [32], [33], [41], and [42].
Recently, developments in kernel methods and their applications have emphasized the need to consider multiple kernels or composite kernels instead of a single fixed kernel [2], [25]. With multiple kernels, kernel methods gain more flexibility in kernel selection and also reflect the fact that practical learning problems often involve data from multiple heterogeneous or homogeneous sources [1], [2], [17], [20], [24], [25], [27], [29], [41], [42]. Specifically, in image-segmentation problems, the inputs are the properties of image pixels, and they could be derived from different sources. For example, the intensity of a pixel is directly obtained from the image itself, but some complicated texture information is perhaps gained from some wavelet filtering of the image [34]. Multiple-kernel methods provide a great tool for fusing information from different sources [17]. It is necessary to clarify that, in this paper, we use the term multiple kernel in a wider sense than the one used in the machine learning community, where multiple-kernel learning refers to learning with an ensemble of basis kernels (usually a linear ensemble) whose combination is optimized in the learning process. In this paper, we focus more on flexible information fusion through the application of composite kernels constructed from multiple kernels defined on different information channels. The combination in the ensemble kernel can be adjusted automatically in the learning of multiple-kernel FCM (MKFCM), or it can be settled by trial and error or cross-validation.
In this paper, we propose a general framework of MKFCM methodology. In the framework, besides the direct application of various composite kernels in KFCM, a new algorithm that uses a linear composite of multiple kernels is proposed, and the updating rules of the linear coefficients of the combined kernel are obtained automatically. When applying the MKFCM framework to image-segmentation problems, this paper first shows that two successful enhanced KFCM-based image-segmentation algorithms, which take advantage of local spatial information [23], [31], are indeed special cases of MKFCM. After that, several new variants of MKFCM-based image-segmentation algorithms are developed. The proposed MKFCM-based algorithms demonstrate flexibility in kernel selection and combination, and therefore they offer the potential of significant improvement over traditional image-segmentation methods.
The rest of this paper is organized as follows. In Section II, we briefly review the foundations of KFCM. Based on the properties of kernel functions, the general framework of MKFCM is introduced. In Section III, two traditional KFCM-based image-segmentation algorithms are proved to be special cases of MKFCM. Moreover, several variants of MKFCM-based image-segmentation algorithms are proposed. The simulation results reported in Section IV demonstrate that better segmentation results are derived from MKFCM-based algorithms. Finally, this paper concludes in Section V.
II. KFCM AND MKFCM
A. Foundations of KFCM
Given a data set X = {x_1, ..., x_n}, where each data point x_j \in R^p (j = 1, ..., n), n is the number of data points, and p is the input dimension of a data point, traditional FCM [40] groups X into c clusters by minimizing the weighted sum of distances between the data and the cluster centers or prototypes, defined as

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| x_j - o_i \|^2.    (1)
Here, \|\cdot\| is the Euclidean distance, and u_{ij} is the membership of data point x_j belonging to cluster i, which is represented by the prototype o_i. The constraint on u_{ij} is \sum_{i=1}^{c} u_{ij} = 1, and m is the fuzzification coefficient, which usually takes the value of 2.
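For concreteness, minimizing (1) under the membership constraint alternates between the standard FCM updates u_{ij} = 1 / \sum_{h=1}^{c} (d_{ij}^2 / d_{hj}^2)^{1/(m-1)} and o_i = \sum_j u_{ij}^m x_j / \sum_j u_{ij}^m. The following is a minimal NumPy sketch of this loop; the random initialization, iteration count, and numerical floor are illustrative choices, not settings from this paper.

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Classical FCM on data X (n x p); a sketch, not the paper's exact code."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                  # enforce sum_i u_ij = 1
    for _ in range(iters):
        Um = U ** m
        O = (Um @ X) / Um.sum(axis=1, keepdims=True)    # prototypes o_i
        d2 = ((X[None, :, :] - O[:, None, :]) ** 2).sum(axis=2)  # ||x_j - o_i||^2
        inv = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=0)   # u_ij = 1 / sum_h (d_ij^2 / d_hj^2)^(1/(m-1))
    return U, O
```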
As an enhancement of classical FCM, KFCM maps the data set X from the feature space (or data space) R^p into a much higher dimensional space H (a Hilbert space, usually called the kernel space) by a transform function \Phi: R^p \to H. In the new kernel space, the data exhibit simpler structures or patterns; in particular, the clusters tend to be more spherical and can therefore be clustered more easily by FCM algorithms [3], [30], [36].
Generally, the transform function \Phi is not given explicitly; instead, a kernel function k: R^p \times R^p \to R is given, defined as

k(x, y) = \langle \Phi(x), \Phi(y) \rangle \quad \forall x, y    (2)

where \langle \cdot, \cdot \rangle is the inner product of the Hilbert space H. Such kernel functions are usually called Mercer kernels, or simply kernels. Given a Mercer kernel k, there is always a transform function \Phi: R^p \to H that satisfies k(x, y) = \langle \Phi(x), \Phi(y) \rangle, although sometimes the specific form of \Phi is unknown. Widely used Mercer kernels include the Gaussian kernel k(x, y) = \exp(-\|x - y\|^2 / r^2) and the polynomial kernel k(x, y) = (x \cdot y + d)^2, both defined over R^p \times R^p. Because only the kernel functions are known, the clustering problem in the kernel space must be solved using kernel functions alone, i.e., the inner products of the transform function \Phi. This is usually called the kernel trick [35].
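As a small illustration, the two kernels quoted above can be written directly from their formulas. The defaults below are assumptions for illustration only (r = 150 mirrors the setting used later in Section IV; d = 1 is an arbitrary placeholder).

```python
import numpy as np

def gaussian_kernel(x, y, r=150.0):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / r^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.exp(-np.sum((x - y) ** 2) / r ** 2)

def polynomial_kernel(x, y, d=1.0):
    """Polynomial kernel k(x, y) = (x . y + d)^2."""
    return (np.dot(x, y) + d) ** 2
```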
There are two types of KFCM. If the prototypes o_i are constructed in the kernel space, the KFCM is referred to as KFCM-K (with K standing for the kernel space) [36]. The objective function of KFCM-K is

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi(x_j) - o_i \|^2.    (3)
The learning algorithm of KFCM-K iteratively updates u_{ij} as

u_{ij} = 1 \Big/ \sum_{h=1}^{c} \left( d_{ij}^2 / d_{hj}^2 \right)^{1/(m-1)}    (4)

where

d_{ij}^2 = k(x_j, x_j) - \frac{2 \sum_{h=1}^{n} u_{ih}^{m} k(x_h, x_j)}{\sum_{h=1}^{n} u_{ih}^{m}} + \frac{\sum_{h=1}^{n} \sum_{l=1}^{n} u_{ih}^{m} u_{il}^{m} k(x_h, x_l)}{\left( \sum_{h=1}^{n} u_{ih}^{m} \right)^2}.    (5)

More details about the derivation of (4) and (5) can be found in [36].
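A sketch of how (4) and (5) can be evaluated with a precomputed n x n Gram matrix K (with K[h, j] = k(x_h, x_j)); the vectorized form below is an assumption about implementation, not the authors' code.

```python
import numpy as np

def kfcm_k_step(K, U, m=2.0):
    """One KFCM-K membership update from Eqs. (4)-(5)."""
    Um = U ** m                                   # c x n
    s = Um.sum(axis=1, keepdims=True)             # sum_h u_ih^m
    cross = (Um @ K) / s                          # second term of (5), per (i, j)
    quad = np.einsum('ih,hl,il->i', Um, K, Um) / (s[:, 0] ** 2)  # third term of (5)
    d2 = np.diag(K)[None, :] - 2.0 * cross + quad[:, None]       # d_ij^2
    inv = np.fmax(d2, 1e-12) ** (-1.0 / (m - 1))
    return inv / inv.sum(axis=0)                  # Eq. (4)
```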
The other type of KFCM confines the prototypes in the kernel space to be points mapped from the original data space or feature space. That is, the objective function is defined as

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi(x_j) - \Phi(o_i) \|^2.    (6)

This type of KFCM is referred to as KFCM-F (with F standing for the feature space/data space) [36].
Usually, only the Gaussian kernel k(x, y) = \exp(-\|x - y\|^2 / r^2) is applied in KFCM-F, and because k(x, x) = 1 for the Gaussian kernel,

\| \Phi(x_j) - \Phi(o_i) \|^2 = \langle \Phi(x_j), \Phi(x_j) \rangle + \langle \Phi(o_i), \Phi(o_i) \rangle - 2 \langle \Phi(x_j), \Phi(o_i) \rangle
= k(x_j, x_j) + k(o_i, o_i) - 2 k(x_j, o_i)
= 2 \left( 1 - k(x_j, o_i) \right).    (7)
The objective function in (6) is then reformulated as [26]

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(x_j, o_i) \right).    (8)

Here, 1 - k(x_j, o_i) can be considered a robust distance measurement derived in the kernel space [26].
For KFCM-F applying Gaussian kernels [36], the prototypes and memberships are updated iteratively as

u_{ij} = \frac{\left( 1 - k(x_j, o_i) \right)^{-1/(m-1)}}{\sum_{l=1}^{c} \left( 1 - k(x_j, o_l) \right)^{-1/(m-1)}}    (9)

o_i = \frac{\sum_{l=1}^{n} u_{il}^{m} k(x_l, o_i) x_l}{\sum_{l=1}^{n} u_{il}^{m} k(x_l, o_i)}.    (10)
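The Gaussian KFCM-F loop (9)-(10) can be sketched as follows; X is an n x p feature matrix, and initializing the prototypes from random data points is an assumed choice, not a prescription from the paper.

```python
import numpy as np

def kfcm_f(X, c, m=2.0, r=150.0, iters=100, seed=0):
    """Gaussian-kernel KFCM-F per Eqs. (9)-(10); a sketch, not the reference code."""
    rng = np.random.default_rng(seed)
    O = X[rng.choice(X.shape[0], size=c, replace=False)]   # prototypes in data space
    for _ in range(iters):
        K = np.exp(-((X[None, :, :] - O[:, None, :]) ** 2).sum(axis=2) / r ** 2)  # k(x_j, o_i)
        inv = np.fmax(1.0 - K, 1e-12) ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=0)                          # Eq. (9)
        W = (U ** m) * K                                   # u_il^m k(x_l, o_i)
        O = (W @ X) / W.sum(axis=1, keepdims=True)         # Eq. (10)
    return U, O
```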
B. MKFCM

Before introducing MKFCM, we first list some necessary properties of Mercer kernels [35].

Theorem 1: Let k_1 and k_2 be kernels over \Omega \times \Omega, \Omega \subseteq R^p, let k_3 be a kernel over R^p \times R^p, and let \psi: \Omega \to R^p. Then:
1) k(x, y) = k_1(x, y) + k_2(x, y) is a kernel.
2) k(x, y) = \alpha k_1(x, y) is a kernel, when \alpha > 0.
3) k(x, y) = k_1(x, y) k_2(x, y) is a kernel.
4) k(x, y) = k_3(\psi(x), \psi(y)) is a kernel.

The proofs of these properties can be found in [35].
The general framework of MKFCM aims to minimize the same objective function as single fixed-kernel KFCM, i.e.,

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi_{com}(x_j) - o_i \|^2    (11)

or

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi_{com}(x_j) - \Phi_{com}(o_i) \|^2.    (12)
Comparing (3) and (6) to (11) and (12), the only difference is that the transform function \Phi in (3) and (6) is changed to \Phi_{com}, which is derived from a composite kernel k_{com}(x, y) = \langle \Phi_{com}(x), \Phi_{com}(y) \rangle. The composite kernel k_{com} is defined as a combination of multiple kernels using the properties introduced in Theorem 1. For example, two simple composite kernels are k_{com} = k_1 + k_2 and k_{com} = k_1 k_2. Given that k_1 and k_2 are Mercer kernels, properties 1), 2), and 3) in Theorem 1 imply that the composite kernel k_{com} is a Mercer kernel as well. In other words, we can always find some transformation \Phi_{com} such that k_{com}(x, y) = \langle \Phi_{com}(x), \Phi_{com}(y) \rangle \ \forall x, y.
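These closure properties make composing kernels a one-liner in code. A tiny sketch (the example usage at the bottom assumes the gaussian_kernel and polynomial_kernel functions from the earlier snippet):

```python
def kernel_sum(k1, k2):
    return lambda x, y: k1(x, y) + k2(x, y)    # property 1): k1 + k2 is a kernel

def kernel_scale(alpha, k1):
    return lambda x, y: alpha * k1(x, y)       # property 2): alpha * k1, alpha > 0

def kernel_product(k1, k2):
    return lambda x, y: k1(x, y) * k2(x, y)    # property 3): k1 * k2 is a kernel

# e.g., k_com = kernel_sum(gaussian_kernel, polynomial_kernel)
#       k_com = kernel_product(gaussian_kernel, gaussian_kernel)
```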
MKFCM still updates u_{ij} according to (4) and (5) or (9) and (10); the difference is that the kernel function k in these equations is replaced by the combined kernel k_{com}. Similar to KFCM, if MKFCM assumes the prototypes lie in the kernel space [objective function (11)], it is referred to as MKFCM-K; if MKFCM confines the prototypes to be mapped from the feature space or data space [objective function (12)], it is referred to as MKFCM-F.

When the number of parameters in the combined kernel is small, the parameters can be adjusted by trial and error. For instance, the parameter \alpha in k_{com} = k_1 + \alpha k_2 can be selected by testing a group of values in a predefined range or set. When the number of parameters in the combined kernel is large, a more feasible method is to adjust these parameters automatically in the learning algorithm. For example, in the machine learning community, a widely used composite kernel is the linear combination of several kernels, i.e., k_{com} = w_1 k_1 + w_2 k_2 + \ldots + w_l k_l. Learning algorithms that adjust the weights w_i automatically in typical kernel learning methods, such as multiple-kernel regression and classification [20], [25], have been studied. Here, we propose a similar algorithm for MKFCM using linearly combined kernels.
To increase the number of choices for kernel functions, a linearly combined kernel function is applied in MKFCM. The new composite kernel k_L is defined as

k_L = w_1^b k_1 + w_2^b k_2 + \cdots + w_l^b k_l    (13)

where b > 1 is a coefficient similar to the fuzzy coefficient m in (1) and (3). The constraint on the weights w_1, w_2, \ldots, w_l is \sum_{i=1}^{l} w_i = 1.
The objective function of MKFCM with the linearly combined kernel is still the weighted sum of distances between the data and the prototypes in the kernel space

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi_L(x_j) - o_i \|^2    (14)
where \Phi_L is the transformation derived from the linearly combined kernel k_L(x, y) = \langle \Phi_L(x), \Phi_L(y) \rangle [k_L is defined in (13)]. Just as in KFCM [(4) and (5)], the learning rule for the membership values is

u_{ij} = 1 \Big/ \sum_{h=1}^{c} \left( d_{ij}^2 / d_{hj}^2 \right)^{1/(m-1)}    (15)
where

d_{ij}^2 = k_L(x_j, x_j) - \frac{2 \sum_{h=1}^{n} u_{ih}^{m} k_L(x_h, x_j)}{\sum_{h=1}^{n} u_{ih}^{m}} + \frac{\sum_{h=1}^{n} \sum_{l=1}^{n} u_{ih}^{m} u_{il}^{m} k_L(x_h, x_l)}{\left( \sum_{h=1}^{n} u_{ih}^{m} \right)^2}.    (16)
Introducing a Lagrange term for the constraint on the weights w_i (i = 1, \ldots, l) into the objective function, we have

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi_L(x_j) - o_i \|^2 + \lambda \left( 1 - \sum_{i=1}^{l} w_i \right).    (17)
Taking the derivative of Q with respect to w_i and setting the result to zero, \partial Q / \partial w_i = 0 (i = 1, \ldots, l), we obtain the updating rule of the weights

w_i = 1 \Big/ \sum_{h=1}^{l} \left( Q_i / Q_h \right)^{1/(b-1)}    (18)
where

Q_h = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi_h(x_j) - o_i \|^2 \quad (h = 1, \ldots, l).    (19)

Here, \Phi_h is the transform function defined by k_h (h = 1, \ldots, l) in (13), and

\| \Phi_h(x_j) - o_i \|^2 = k_h(x_j, x_j) - \frac{2 \sum_{l=1}^{n} u_{il}^{m} k_h(x_l, x_j)}{\sum_{l=1}^{n} u_{il}^{m}} + \frac{\sum_{g=1}^{n} \sum_{l=1}^{n} u_{ig}^{m} u_{il}^{m} k_h(x_g, x_l)}{\left( \sum_{g=1}^{n} u_{ig}^{m} \right)^2}.    (20)
The algorithm introduced here is named LMKFCM (Linearly combined MKFCM). LMKFCM can linearly combine more than two kernels and automatically adjusts the weight of each kernel in the optimization procedure. The objective function (14) applied in LMKFCM defines the prototypes directly in the kernel space; therefore, this LMKFCM is in fact LMKFCM-K (with K standing for the kernel space), which we shorten to LMKFCM. It is very difficult to derive a learning algorithm for the feature-space counterpart (LMKFCM-F for short) because a linear combination of basis kernels is not a Gaussian kernel.
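Putting the pieces together, one full LMKFCM sweep — memberships via (15)-(16) on the weighted kernel, then kernel weights via (18)-(20) — might look like the sketch below. Ks is a list of precomputed n x n basis Gram matrices; this input format and the helper name are assumptions, not the authors' implementation.

```python
import numpy as np

def kernel_d2(K, U, m):
    """Kernel-space distances d_ij^2 of Eqs. (16)/(20) for one Gram matrix K."""
    Um = U ** m
    s = Um.sum(axis=1, keepdims=True)
    quad = np.einsum('ih,hl,il->i', Um, K, Um) / (s[:, 0] ** 2)
    return np.diag(K)[None, :] - 2.0 * (Um @ K) / s + quad[:, None]

def lmkfcm_step(Ks, U, w, m=2.0, b=2.0):
    """One LMKFCM sweep: Eqs. (13), (15)-(16), then (18)-(20); a sketch."""
    KL = sum(wi ** b * Kh for wi, Kh in zip(w, Ks))        # Eq. (13)
    inv = np.fmax(kernel_d2(KL, U, m), 1e-12) ** (-1.0 / (m - 1))
    U = inv / inv.sum(axis=0)                              # Eqs. (15)-(16)
    Q = np.array([(U ** m * kernel_d2(Kh, U, m)).sum() for Kh in Ks])  # Eqs. (19)-(20)
    ratios = (Q[:, None] / Q[None, :]) ** (1.0 / (b - 1))  # (Q_i / Q_h)^(1/(b-1))
    w = 1.0 / ratios.sum(axis=1)                           # Eq. (18); weights sum to 1
    return U, w
```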
III. MKFCM-BASED IMAGE SEGMENTATION

The application of multiple or composite kernels in KFCM has its advantages. In addition to flexibility in selecting kernel functions, it offers a new approach to combining different information from multiple heterogeneous or homogeneous sources in the kernel space. Specifically, in image-segmentation problems, the input data involve properties of image pixels that are sometimes derived from very different sources. For example, as mentioned in Section I, the intensity of a pixel is obtained directly from the image itself, but the texture information of a pixel might be obtained from some wavelet filtering of the image. Therefore, we can define kernel functions purposely for the intensity information and the texture information separately, then combine these kernel functions and apply the composite kernel in MKFCM (including LMKFCM) to obtain better image-segmentation results. More visible examples can be found in multitemporal remote sensing images, where the pixel information comes from different temporal sensors. As a result, we can define different kernels for the different temporal channels and apply the combined kernel in a multiple-kernel learning algorithm.

In this section, we first study some successful enhanced KFCM-based image-segmentation algorithms that consider both the pixel intensity and the local spatial information. These algorithms are proved to be special cases of MKFCM-based methods that mingle a kernel for the spectral information with a kernel for the local spatial information. After that, several new variants of MKFCM-based image-segmentation algorithms are developed. These new variants demonstrate the flexibility of MKFCM in kernel selections and combinations for image-segmentation problems and offer the potential of improvement in segmentation results.

At first, we formulate a proposition that is useful in the following.
Proposition 1: For a data point x = [x_1, x_2, \ldots, x_{p+q}] \in R^{p+q}, we also write x = [x^p, x^q], where x^p \in R^p contains p dimensions of x and x^q \in R^q contains the remaining q dimensions. If k_p: R^p \times R^p \to R is a kernel over R^p \times R^p, then the function k: R^{p+q} \times R^{p+q} \to R such that k(x, y) = k_p(x^p, y^p) is also a kernel over R^{p+q} \times R^{p+q}.

Indeed, setting k_3 = k_p and \psi: R^{p+q} \to R^p such that \psi(x) = x^p, we obtain the conclusion of Proposition 1 by directly applying Theorem 1 [property 4)].
Because the Gaussian kernel k(x^p, y^p) = \exp(-\|x^p - y^p\|^2 / r^2) and the polynomial kernel k(x^p, y^p) = (x^p \cdot y^p + d)^2 are typical kernels defined on R^p \times R^p, Proposition 1 tells us that the Gaussian function k: R^{p+q} \times R^{p+q} \to R such that k(x, y) = \exp(-\|x^p - y^p\|^2 / r^2) and the polynomial function k(x, y) = (x^p \cdot y^p + d)^2 are both kernel functions over R^{p+q} \times R^{p+q}. Without loss of generality, we call these two functions the Gaussian kernel and the polynomial kernel as well. We can use such kernels for different information embedded in different subdimensions of the input data.
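A sketch of Proposition 1 in code: a Gaussian kernel that reads only the first p components of the concatenated input is still a valid kernel on the whole space (the parameters p and r below are illustrative assumptions).

```python
import numpy as np

def subspace_gaussian(x, y, p, r=150.0):
    """Gaussian kernel applied to the first p dimensions of [x^p, x^q]."""
    xp = np.asarray(x, float)[:p]      # the projection psi(x) = x^p of Proposition 1
    yp = np.asarray(y, float)[:p]
    return np.exp(-np.sum((xp - yp) ** 2) / r ** 2)
```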
A. Two Enhanced KFCM Algorithms as MKFCM

To incorporate the local spatial information of pixels into classical clustering-based image-segmentation algorithms, Liao et al. [23] select the input data x_j (j = 1, 2, \ldots, n) as x_j = [x_j, \bar{x}_j] \in R^2 and directly apply KFCM-K on these input data. Here, with a slight abuse of notation, x_j is the intensity of pixel j, and \bar{x}_j is the filtered intensity of pixel j, which represents the local spatial information. In [23], \bar{x}_j is the mean or median filtered intensity in a 3 x 3 window centered at pixel j. We denote this algorithm DKFCM (where D stands for direct application of KFCM). Specifically, DKFCM_meanf denotes DKFCM applying the mean filtered intensities as the spatial information, and DKFCM_medianf denotes DKFCM with the median filtered intensities. In DKFCM, the kernel function is the Gaussian kernel [23], and the applied learning rules are the same as (4) and (5).
We now prove that DKFCM is a special case of MKFCM.

Case 1: DKFCM is a special case of MKFCM-K with k_{com} = k_1 k_2 applied on the input data x_j = [x_j, \bar{x}_j] \in R^2 (j = 1, 2, \ldots, n). Here, k_1 is the Gaussian kernel for pixel intensity, k_1(x_i, x_j) := \exp(-|x_i - x_j|^2 / r^2), and k_2 is another Gaussian kernel for the local spatial information, k_2(\bar{x}_i, \bar{x}_j) := \exp(-|\bar{x}_i - \bar{x}_j|^2 / r^2), in which x_j is the intensity of pixel j and \bar{x}_j is the local spatial information represented by the filtered intensity of pixel j.
The kernel function k(x_i, x_j) is used in DKFCM, where x_j = [x_j, \bar{x}_j] and k is the Gaussian kernel. By its definition,

k(x_i, x_j) = \exp\left( -\|x_i - x_j\|^2 / r^2 \right)
= \exp\left( -\|[x_i, \bar{x}_i] - [x_j, \bar{x}_j]\|^2 / r^2 \right)
= \exp\left( -\left( |x_i - x_j|^2 + |\bar{x}_i - \bar{x}_j|^2 \right) / r^2 \right)
= \exp\left( -|x_i - x_j|^2 / r^2 \right) \exp\left( -|\bar{x}_i - \bar{x}_j|^2 / r^2 \right)
= k_1(x_i, x_j) k_2(\bar{x}_i, \bar{x}_j)
= k_{com}(x_i, x_j).    (21)

That is, DKFCM uses the same kernel function as MKFCM-K. Considering that both DKFCM and MKFCM-K use the updating rules (4) and (5), we conclude that DKFCM is a special case of MKFCM-K using the composite kernel k_{com} = k_1 k_2.
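The factorization in (21) is easy to verify numerically; the intensities below are arbitrary test values, and r = 150 follows the later simulations.

```python
import numpy as np

r = 150.0
g = lambda a, b: np.exp(-(a - b) ** 2 / r ** 2)       # 1-D Gaussian kernel

xi, xbar_i, xj, xbar_j = 120.0, 118.0, 60.0, 64.0     # arbitrary pixel / filtered intensities
lhs = np.exp(-((xi - xj) ** 2 + (xbar_i - xbar_j) ** 2) / r ** 2)  # k on [x, xbar] pairs
rhs = g(xi, xj) * g(xbar_i, xbar_j)                   # k_1 * k_2
assert np.isclose(lhs, rhs)                           # Eq. (21)
```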
Chen and Zhang [31] enhance the Gaussian-kernel-based KFCM-F by adding a local information term to the objective function, i.e., the new objective function becomes

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(x_j, o_i) \right) + \alpha \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(\bar{x}_j, o_i) \right)    (22)

where x_j is the intensity of pixel j. In the new objective function, the additional term is the weighted sum of differences between the filtered intensity \bar{x}_j (the local spatial information) and the clustering prototypes. These differences are also measured using the kernel-induced distance. Such an enhanced KFCM-based algorithm is denoted AKFCM (with A standing for the additional term). Like DKFCM_meanf and DKFCM_medianf, we use AKFCM_meanf for AKFCM applying the mean filtered intensities as the local spatial information, and AKFCM_medianf for AKFCM using the median filtered intensities. In AKFCM, the kernel function is the Gaussian kernel [31].
Next, we prove that AKFCM [31] is also a special case of MKFCM.

Case 2: AKFCM is a special case of MKFCM-F with k_{com} = k_1 + \alpha k_2 on the input data x_j = [x_j, \bar{x}_j] (j = 1, 2, \ldots, n), where k_1, k_2, x_j, and \bar{x}_j are the same as the ones defined in Case 1.
In AKFCM, the goal is to minimize the following objective function:

Q_1 = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(x_j, o_i) \right) + \alpha \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 - k(\bar{x}_j, o_i) \right)
= \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 + \alpha - \left( k(x_j, o_i) + \alpha k(\bar{x}_j, o_i) \right) \right)
= \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 + \alpha - \left( \exp\left( -|x_j - o_i|^2 / r^2 \right) + \alpha \exp\left( -|\bar{x}_j - o_i|^2 / r^2 \right) \right) \right).    (23)
On the other hand, if k_{com} = k_1 + \alpha k_2 is the composite kernel for MKFCM-F, the objective function of the MKFCM is

Q = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| \Phi_{com}(x_j) - \Phi_{com}(o_i) \|^2
= \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( \langle \Phi_{com}(x_j), \Phi_{com}(x_j) \rangle + \langle \Phi_{com}(o_i), \Phi_{com}(o_i) \rangle - 2 \langle \Phi_{com}(x_j), \Phi_{com}(o_i) \rangle \right)
= \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( k_{com}(x_j, x_j) + k_{com}(o_i, o_i) - 2 k_{com}(x_j, o_i) \right)
= \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( k_1(x_j, x_j) + \alpha k_2(\bar{x}_j, \bar{x}_j) + k_1(o_i, o_i) + \alpha k_2(o_i, o_i) - 2 \left( k_1(x_j, o_i) + \alpha k_2(\bar{x}_j, o_i) \right) \right)    (by k_{com} = k_1 + \alpha k_2)
= 2 \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 + \alpha - \left( k_1(x_j, o_i) + \alpha k_2(\bar{x}_j, o_i) \right) \right)    (by definitions of k_1, k_2)
= 2 \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \left( 1 + \alpha - \left( \exp\left( -|x_j - o_i|^2 / r^2 \right) + \alpha \exp\left( -|\bar{x}_j - o_i|^2 / r^2 \right) \right) \right)
= 2 Q_1.    (24)

Comparing (23) to (24), we see that AKFCM is actually MKFCM-F using k_{com} = k_1 + \alpha k_2 (minimizing Q_1 and 2Q_1 are the same problem). In other words, AKFCM is a special case of MKFCM-F.
B. Variants of MKFCM-Based Image-Segmentation Algorithms

As shown in the previous section, AKFCM is a special case of MKFCM-F with k_{com} = k_1 + \alpha k_2 (k_1 is the kernel for intensity, and k_2 is the kernel for local spatial information). Therefore, as in MKFCM-F, AKFCM confines the prototypes to points mapped from the original data space or feature space, which limits the search space of the prototypes. The natural choice to fix this shortcoming is to apply MKFCM-K, which searches for prototypes in the whole kernel space, directly to image-segmentation problems. Indeed, we have demonstrated that DKFCM is an MKFCM-K with a composite kernel k_{com} = k_1 k_2. More variants of MKFCM-K-based image-segmentation algorithms can be proposed. For instance, we propose a first variant of MKFCM-K that uses the composite kernel

k_{com} = k_1 + \alpha k_2    (25)

on the input data x_j = [x_j, \bar{x}_j] \in R^2 (j = 1, 2, \ldots, n), in which x_j \in R is the intensity of pixel j and \bar{x}_j \in R is the filtered intensity of pixel j that stands for the local spatial information. Identical to DKFCM, in the composite kernel, k_1 is the Gaussian kernel for pixel intensities, i.e., k_1(x_i, x_j) = \exp(-|x_i - x_j|^2 / r^2), and k_2 is the Gaussian kernel for the local spatial information, i.e., k_2(\bar{x}_i, \bar{x}_j) = \exp(-|\bar{x}_i - \bar{x}_j|^2 / r^2). As a variant of the general MKFCM-K introduced in Section II, this algorithm still updates u_{ij} following the rules (4) and (5), in which the kernel function k is replaced by k_{com}.

It is worth pointing out that k_1 or k_2 in the first variant of MKFCM-K-based image segmentation can be changed to any other Mercer kernel function for information related to image pixels. This gives the segmentation algorithm flexibility in kernel function selection and combination.
For example, a composite kernel that joins differently shaped kernels can be defined as

k_{com} = k_1 + k_2    (26)

where k_1 is still the Gaussian kernel for pixel intensities, k_1(x_i, x_j) = \exp(-|x_i - x_j|^2 / r^2), but k_2 is a polynomial kernel for the spatial information, k_2(\bar{x}_i, \bar{x}_j) = (\bar{x}_i \bar{x}_j + d)^2, where \bar{x}_j is the filtered intensity of pixel j. We denote this MKFCM-K-based algorithm the second variant of MKFCM-K.
Aside from the flexibility of selecting differently shaped kernel functions for the intensity and the spatial information, MKFCM-K allows us to apply kernel functions to other information derived from the image. Take texture information as an example: we can set the input data x_j as x_j = [x_j, \bar{x}_j, s_j] \in R^3, in which x_j \in R is the intensity of pixel j. The two-tuple [\bar{x}_j, s_j] \in R^2 is a simple descriptor of the texture information at pixel j [37], where \bar{x}_j is the filtered intensity of pixel j and s_j is the standard deviation of the intensities of the pixels in the neighborhood of pixel j. Then, we define the combined kernel as

k_{com} = k_1 + k_2    (27)

where k_1 is the Gaussian kernel for pixel intensities and k_2 is the Gaussian kernel for the texture information, k_2(x_i, x_j) = \exp(-\|[\bar{x}_i, s_i] - [\bar{x}_j, s_j]\|^2 / r^2). This algorithm is denoted the third variant of MKFCM-K.
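One plausible way to compute the per-pixel inputs [x_j, \bar{x}_j, s_j] is with a mean filter and a local standard deviation. The scipy-based sketch below assumes a 3 x 3 window for both statistics (the paper specifies 3 x 3 for the filtered intensity; using the same window for s_j is our assumption).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pixel_features(img, size=3):
    """Per-pixel [intensity, mean-filtered intensity, local std] features."""
    img = np.asarray(img, float)
    mean = uniform_filter(img, size=size)              # xbar_j
    sq_mean = uniform_filter(img ** 2, size=size)
    std = np.sqrt(np.fmax(sq_mean - mean ** 2, 0.0))   # s_j
    return np.stack([img, mean, std], axis=-1).reshape(-1, 3)
```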
Just as with AKFCM and DKFCM, the suffixes _meanf and _medianf can be attached to algorithm names to indicate the applied spatial information. Therefore, MKFCM-K_meanf designates variants of MKFCM-K using the mean filtered intensities as the local spatial information, and MKFCM-K_medianf is used when the median filtered intensities are selected as the local spatial information.
Fig. 1. Segmentation performance based on different alpha values.
To increase the information diversity of an image, LMKFCM can be applied to image-segmentation problems as well. Specifically, we define different kernels for different image information, linearly ensemble them into a new kernel, and then apply (15)-(16) and (18)-(20) to update the membership values and weighting coefficients.
For example, the input image data x_j is set to x_j = [x_j, \bar{x}_j, s_j] \in R^3, the same as in the third variant of MKFCM-K. Then, the composite kernel is designed as

k_L = w_1^b k_1 + w_2^b k_2 + w_3^b k_3    (28)

where k_1 is the Gaussian kernel for pixel intensities, k_1(x_i, x_j) = \exp(-|x_i - x_j|^2 / r^2); k_2 is the Gaussian kernel for spatial information, k_2(\bar{x}_i, \bar{x}_j) = \exp(-|\bar{x}_i - \bar{x}_j|^2 / r^2); and k_3 is the Gaussian kernel for texture information, k_3(x_i, x_j) = \exp(-\|[\bar{x}_i, s_i] - [\bar{x}_j, s_j]\|^2 / r^2). The next section will apply this composite kernel for LMKFCM in simulations. For simplicity, we also use LMKFCM to refer to the specific segmentation algorithm utilizing the composite kernel defined in (28).
IV. SIMULATION RESULTS

In this section, we compare the KFCM-based and the newly proposed MKFCM-based (including LMKFCM) image-segmentation algorithms on several synthetic and medical images. Because the performance of FCM-type algorithms depends on the initialization, we run the initialization 100 times and choose the result with the best objective function value. This increases the reliability of the comparison results acquired in the simulations.
A. Example 1: Synthetic Noised Two-Cluster Image

The synthetic image is similar to the one used in [4] and [31]. It is 64 x 64 pixels and contains two clusters with intensity values 128 and 0. Different noises, including Gaussian noise and salt-and-pepper noise, are added to the synthetic image. The AKFCM-, DKFCM-, and MKFCM-K-based algorithms are tested on the noised images, and their segmentation performance is measured as

Segmentation accuracy = (number of correctly classified pixels) / (total number of pixels).    (29)
Fig. 2. 5% Gaussian-noised synthetic image segmented by different methods. (a) Noised image. (b) Segmentation result of DKFCM_meanf. (c) Segmentation result of AKFCM_meanf. (d) Segmentation result of MKFCM-K_meanf.

Fig. 3. 10% salt-and-pepper-noised synthetic image segmented by different methods. (a) Noised image. (b) Segmentation result of DKFCM_medianf. (c) Segmentation result of AKFCM_medianf. (d) Segmentation result of MKFCM-K_medianf.

TABLE I
SEGMENTATION ACCURACIES OF DIFFERENT METHODS ON NOISED IMAGES
The MKFCM-K-based image-segmentation algorithm applied here is the first variant of MKFCM-K introduced in Section III. The kernel functions used in AKFCM, DKFCM, and the first variant of MKFCM-K are all Gaussian kernels k(x, y) = \exp(-\|x - y\|^2 / r^2), and the parameter r is of great importance to the performance of these kernel methods. As suggested in [31], we take r = 150. In AKFCM and the first variant of MKFCM-K, there is a parameter \alpha that balances the importance of the different kernels, more specifically, the importance of the pixel intensities and the local spatial information. Fig. 1 shows the segmentation performance of the different methods for different \alpha values. The testing image is the 10% Gaussian-noised synthetic image. From Fig. 1, it is easy to conclude that the larger \alpha is, the better the result for a heavily noised image. In other words, for a heavily noised image, the local spatial information is of greater importance. In this example, we select \alpha = 3.8 as suggested in [31].
Fig. 2 shows the 5% Gaussian-noised synthetic image and the segmentation results of AKFCM_meanf, DKFCM_meanf, and MKFCM-K_meanf. Here, the local spatial information used in the different algorithms is the mean filtered intensity of the 3 x 3 window around the considered pixel, because the mean filter is a good tool for smoothing Gaussian noise.

Fig. 3 shows the 10% salt-and-pepper-noised image and the segmentation results of AKFCM_medianf, DKFCM_medianf, and MKFCM-K_medianf. For the salt-and-pepper-noised image, the selected local spatial information is the median filtered value of the 3 x 3 window around the considered pixel. Because the median filter gives better results than the mean filter for salt-and-pepper noise reduction, it is used for this salt-and-pepper-noised image.
The segmentation accuracies (SAs) of the three methods on the different noised images are listed in Table I. From Figs. 2 and 3 and Table I, it is clear that the AKFCM- and MKFCM-K-based methods, which combine the intensity information and the spatial information in the kernel space, perform much better than the DKFCM method, in which the corresponding pieces of information are combined in the data space.
B. Example 2: Synthetic Two-Texture Image

To demonstrate the flexibility and the advantages of MKFCM, a two-texture image is tested in this simulation. The image is shown in Fig. 4(a); the left half of the image is coarse and the right half is smooth, i.e., their textures are visibly different. Traditional enhanced KFCM-based algorithms like DKFCM and AKFCM cannot deal with this kind of image very well [as shown in Fig. 4(b)-(e)] because they only consider the local spatial information. Considering the problem in the MKFCM framework, we can simply apply a combined kernel like the one in (27) (the third variant of MKFCM-K in Section III), where k_1 is the Gaussian kernel for the intensities and k_2 is the Gaussian kernel for the texture information. Fig. 4(f) shows the segmentation result of the third variant of MKFCM-K. Owing to the consideration of the texture information, it achieves a better result than DKFCM and AKFCM. To further improve the segmentation performance, we test LMKFCM, which linearly combines three kernels as in (28): the first two kernels are the kernels for the intensities and the local spatial information, and the third kernel is the texture kernel. Fig. 4(g) shows that LMKFCM achieves a much better segmentation result. The corresponding SAs of the different methods are shown in Fig. 4. The SA values clearly illustrate that the proposed variants of MKFCM-K and LMKFCM outperform AKFCM and DKFCM. The resulting weights of the three kernels in LMKFCM are 0.3967, 0.3032, and 0.2999, respectively.

Fig. 4. Two-textured image segmented by different methods. SA is the abbreviation of segmentation accuracy. (a) Two-textured image. (b) Segmentation result of AKFCM_meanf (SA = 0.716). (c) Segmentation result of AKFCM_medianf (SA = 0.748). (d) Segmentation result of DKFCM_meanf (SA = 0.715). (e) Segmentation result of DKFCM_medianf (SA = 0.747). (f) Segmentation result of the MKFCM-K third variant (SA = 0.753). (g) Segmentation result of LMKFCM (SA = 0.853). (h) Segmentation result of MKFCM-K, k_com = k_1 k_2 k_3 (SA = 0.723). (i) Segmentation result of MKFCM-K, k_com = k_1 + k_2 + k_3 (SA = 0.730). (j) Segmentation result of KFCM, single intensity kernel (SA = 0.720). (k) Segmentation result of KFCM, single spatial kernel (SA = 0.709). (l) Segmentation result of KFCM, texture kernel (SA = 0.763).
To show the advantage of automatically updating the weights of the different kernels in LMKFCM, the segmentation results obtained by applying a single kernel, or a fixed composite kernel k_{com} = k_1 + k_2 + k_3 or k_{com} = k_1 k_2 k_3, are shown in Fig. 4(h)-(l). The definitions of k_1, k_2, and k_3 are the same as in LMKFCM. The SAs of the different algorithms demonstrate that automatically updating the weights of the different kernels in LMKFCM achieves better segmentation results than MKFCM using a fixed composite kernel or a single kernel.
C. Example 3: Medical Images

The first medical image in this simulation is a magnetic resonance (MR) image. The image and its reference segmentations are obtained from [38]. It is a T1-weighted MR phantom with a slice thickness of 1 mm, 3% noise, and no intensity inhomogeneity. The image is segmented into three clusters corresponding to white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). The SA of algorithm i on class j is calculated as

S_{ij} = \frac{|A_{ij} \cap A_{ref_j}|}{|A_{ij} \cup A_{ref_j}|}    (30)

where A_{ij} stands for the set of pixels belonging to class j found by algorithm i, and A_{ref_j} stands for the set of pixels belonging to class j in the reference segmented image.
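The per-class score (30) is the intersection-over-union of the predicted and reference pixel sets; a sketch over integer label maps:

```python
import numpy as np

def class_accuracy(pred, ref, j):
    """S_ij of Eq. (30): |A_ij intersect A_ref_j| / |A_ij union A_ref_j|."""
    a = np.asarray(pred) == j
    b = np.asarray(ref) == j
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```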
After applying the AKFCM_meanf, DKFCM_meanf, and MKFCM-K_meanf (the first variant of MKFCM-K) methods to the MR image [Fig. 5(a)], the segmentation results are shown in Fig. 5(b)-(d). We use the LMKFCM algorithm with three kernels on the MR image as well; the three kernels are the same as the ones in the two-texture image example. Fig. 5(e) shows the segmentation result of LMKFCM. The resulting weights for the three kernels are 0.1213, 0.5505, and 0.3281. The local spatial information used in this example is the mean of the intensities because there is no salt-and-pepper noise in the studied image.

Owing to the flexibility of kernel types in MKFCM-K, we can easily change the second kernel in the first variant of MKFCM-K from the Gaussian kernel for the local spatial information into a polynomial one as in (26) (the second variant of MKFCM-K). For simplicity, we name it MKFCM-K_poly. The MKFCM-K_poly segmentation results are shown in Fig. 5(f). In Fig. 5, the first column is the image as a whole, the second column is the segmented CSF, the third column is the segmented GM, and the last column is the segmented WM.
Fig. 5. Segmentation results of different methods on an MR image. (a) MR image and its correct segmentation. From left to right: the integrated MR image, the CSF, the GM, and the WM. (b) Segmentation results of AKFCM_meanf. (c) Segmentation results of DKFCM_meanf. (d) Segmentation results of MKFCM-K_meanf (first variant). (e) Segmentation results of MKFCM-K_poly. (f) Segmentation results of LMKFCM.

TABLE II
SEGMENTATION ACCURACIES OF DIFFERENT METHODS ON THE MR IMAGE. S1: SEGMENTATION ACCURACY FOR THE CLUSTER OF CSF; S2: SEGMENTATION ACCURACY FOR THE CLUSTER OF GM; S3: SEGMENTATION ACCURACY FOR THE CLUSTER OF WM

Table II lists the SAs of the different methods, in which S1 is the SA for the CSF cluster, S2 for the GM cluster, and S3 for the WM cluster. In this simulation, because the noise rate of the image is low, the balance rate \alpha is selected as a relatively small value, 0.8, which is also the suggestion of [31]. From Fig. 5, it is hard to determine which method is better. However, from Table II, the new multiple-kernel methods proposed in this paper, MKFCM-K_poly and LMKFCM, have better segmentation performance than the other methods. MKFCM-K_poly's good results highlight the potential advantage of applying differently shaped kernel functions for different pieces of image information.
The second medical image, shown in Fig. 6(a), is a 128 x 128 positron emission tomography (PET) image of a dog's lung [39]. We do not have reference segmentations for this image. The segmentation results obtained by AKFCM_meanf, DKFCM_meanf, MKFCM-K_meanf (first variant), MKFCM-K_poly, and LMKFCM are shown in Fig. 6(b)-(f). These algorithms and their settings are the same as the ones defined for the previous MR image. As shown in Fig. 6, the proposed MKFCM-K_poly algorithm derives more homogeneous regions, clearly outperforming the competitors. AKFCM_meanf is worse than MKFCM-K_poly (the misclassification in the circle is noticeable), but it is better than the other algorithms. Unlike the results for the MR image in Fig. 5, LMKFCM does not demonstrate a better result here, because the texture information in the PET image is not significant.

Fig. 6. Segmentation results of different methods on a PET image of a dog's lung. (a) PET of dog's lung. (b) Segmentation result of AKFCM_meanf. (c) Segmentation results of DKFCM_meanf. (d) Segmentation results of MKFCM-K_meanf (first variant). (e) Segmentation results of MKFCM-K_poly. (f) Segmentation results of LMKFCM.
D. Discussions and Analysis

The simulations in this section are not intended to prove that the MKFCM-based (including LMKFCM) image-segmentation algorithms are inherently better than other KFCM-based image-segmentation methods. They are used to demonstrate MKFCM's significant flexibility in kernel selections and combinations and the great potential this flexibility can bring to image-segmentation problems. Under the MKFCM framework, changing the Gaussian kernel for local spatial information in the first variant of MKFCM-K to a polynomial kernel is straightforward, and the corresponding learning algorithm is unchanged. By doing so, the segmentation results are improved, as studied in Example 3. Owing to the MKFCM framework, we can easily fuse texture information into segmentation algorithms by just adding a kernel designed for the texture information to the composite kernel. As in the MR image-segmentation and two-texture image-segmentation problems, simply adding a Gaussian kernel function of the texture descriptor to the composite kernel of MKFCM or LMKFCM leads to better segmentation results.

To sum up, the merit of MKFCM-based image-segmentation algorithms is the flexibility in selecting and combining kernel functions of different shapes and for different pieces of information. After combining the different kernels in the kernel space (building the composite kernel), there is no need to change the computation procedures of MKFCM or LMKFCM. This is another advantage in reflecting and fusing image information from multiple heterogeneous or homogeneous sources.
V. CONCLUSION

In this paper, an MKFCM methodology has been proposed and applied as a general framework for image-segmentation problems, where the kernel function is composited from multiple kernels selected for different pieces of information or properties of image pixels. Aside from the application of fixed composite kernels, a new method that uses a linear combination of multiple kernels has been proposed, and the updating rules of the linear coefficients of the composite kernel have been derived. Two traditional spatial KFCM-based image-segmentation algorithms are proved to be special cases of MKFCM-based image-segmentation methods. Moreover, several new image-segmentation approaches derived under the MKFCM framework are also proposed in this paper.

Considering image-segmentation problems under the MKFCM framework, the proposed algorithms provide significant flexibility in selecting and combining different kernel functions. More importantly, a new information fusion method is obtained, in which the information of the image from multiple heterogeneous or homogeneous data sources is combined in the kernel space. Simulations on several synthetic and medical images show the flexibility and the advantages of MKFCM in image-segmentation problems.
ACKNOWLEDGMENT
The authors would like to thank the anonymous reviewers for
their valuable comments.
REFERENCES

[1] S. A. Rojas and D. Fernandez-Reyes, "Adapting multiple kernel parameters for support vector machines using genetic algorithms," in Proc. IEEE Congr. Evol. Comput., Edinburgh, U.K., 2005, pp. 626-631.
[2] G. Camps-Valls, L. Gomez-Chova, J. Munoz-Mari, J. Vila-Frances, and J. Calpe-Maravilla, "Composite kernels for hyperspectral image classification," IEEE Geosci. Remote Sens. Lett., vol. 3, no. 1, pp. 93-97, Jan. 2006.
[3] D. W. Kim, K. Y. Lee, D. Lee, and K. H. Lee, "Evaluation of the performance of clustering algorithms in kernel-induced feature space," Pattern Recognit., vol. 38, no. 4, pp. 607-611, Apr. 2005.
[4] W. L. Cai, S. C. Chen, and D. Q. Zhang, "Fast and robust fuzzy c-means clustering algorithms incorporating local information for image segmentation," Pattern Recognit., vol. 40, no. 3, pp. 825-838, Mar. 2007.
[5] S. Dambreville, Y. Rathi, and A. Tannenbaum, "A framework for image segmentation using shape models and kernel space shape priors," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 8, pp. 1385-1399, Aug. 2008.
[6] K. Sikka, N. Sinha, P. K. Singh, and A. K. Mishra, "A fully automated algorithm under modified FCM framework for improved brain MR image segmentation," Magn. Reson. Imaging, vol. 27, no. 7, pp. 994-1004, Sep. 2009.
[7] S. P. Chatzis and T. A. Varvarigou, "A fuzzy clustering approach toward hidden Markov random field models for enhanced spatially constrained image segmentation," IEEE Trans. Fuzzy Syst., vol. 16, no. 5, pp. 1351-1361, Oct. 2008.
[8] M. A. Jaffar, N. Naveed, B. Ahmed, A. Hussain, and A. M. Mirza, "Fuzzy C-means clustering with spatial information for color image segmentation," in Proc. 3rd Int. Conf. Elect. Eng., Lahore, Pakistan, Apr. 2009, pp. 136-141.
[9] K. S. Chuang, H. L. Tzeng, S. Chen, J. Wu, and T. J. Chen, "Fuzzy c-means clustering with spatial information for image segmentation," Comput. Med. Imaging Graph., vol. 30, no. 1, pp. 9-15, Jan. 2006.
[10] C. F. Juang, S. H. Chiu, and S. J. Shiu, "Fuzzy system learned through fuzzy clustering and support vector machine for human skin color segmentation," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 37, no. 6, pp. 1077-1087, Nov. 2007.
[11] M. S. Yang and H. S. Tsai, "A Gaussian kernel-based fuzzy c-means algorithm with a spatial bias correction," Pattern Recognit. Lett., vol. 29, no. 12, pp. 1713-1725, Sep. 2008.
[12] R. J. He, S. Datta, B. R. Sajja, and P. A. Narayana, "Generalized fuzzy clustering for segmentation of multi-spectral magnetic resonance images," Comput. Med. Imaging Graph., vol. 32, no. 5, pp. 353-366, Jul. 2008.
[13] X. W. Liu and D. L. Wang, "Image and texture segmentation using local spectral histograms," IEEE Trans. Image Process., vol. 15, no. 10, pp. 3066-3077, Oct. 2006.
[14] Y. A. Tolias and S. M. Panas, "Image segmentation by a fuzzy clustering algorithm using adaptive spatially constrained membership functions," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 28, no. 3, pp. 359-369, May 1998.
[15] Y. Xia, D. G. Feng, T. J. Wang, R. C. Zhao, and Y. N. Zhang, "Image segmentation by clustering of spatial patterns," Pattern Recognit. Lett., vol. 28, no. 12, pp. 1548-1555, Sep. 2007.
[16] X. C. Yang, W. D. Zhao, Y. F. Chen, and X. Fang, "Image segmentation with a fuzzy clustering algorithm based on Ant-Tree," Signal Process., vol. 88, no. 10, pp. 2453-2462, Oct. 2008.
[17] G. Camps-Valls, L. Gomez-Chova, J. Munoz-Mari, J. L. Rojo-Alvarez, and M. Martinez-Ramon, "Kernel-based framework for multitemporal and multisource remote sensing data classification and change detection," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1822-1835, Jun. 2008.
[18] H. Y. Li, V. Bochko, T. Jaaskelainen, J. Parkkinen, and I. F. Shen, "Kernel-based spectral color image segmentation," J. Opt. Soc. Amer. A, Opt. Image Sci. Vis., vol. 25, no. 11, pp. 2805-2816, Nov. 2008.
[19] J. Kawa and E. Pietka, "Kernelized fuzzy c-means method in fast segmentation of demyelination plaques in multiple sclerosis," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2007, pp. 5616-5619.
[20] S. Sonnenburg, G. Ratsch, C. Schafer, and B. Scholkopf, "Large scale multiple kernel learning," J. Mach. Learn. Res., vol. 7, pp. 1531-1565, Jul. 2006.
[21] J. Z. Wang, J. Kong, Y. H. Lu, M. Qi, and B. X. Zhang, "A modified FCM algorithm for MRI brain image segmentation using both local and non-local spatial constraints," Comput. Med. Imaging Graph., vol. 32, no. 8, pp. 685-698, Dec. 2008.
[22] L. L. He and I. R. Greenshields, "An MRF spatial fuzzy clustering method for fMRI SPMs," Biomed. Signal Process. Control, vol. 3, no. 4, pp. 327-333, Oct. 2008.
[23] L. Liao, T. S. Lin, and B. Li, "MRI brain image segmentation and bias field correction based on fast spatially constrained kernel clustering approach," Pattern Recognit. Lett., vol. 29, no. 10, pp. 1580-1588, Jul. 2008.
[24] Z. Wang, S. C. Chen, and T. K. Sun, "MultiK-MHKS: A novel multiple kernel learning algorithm," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 2, pp. 348-353, Feb. 2008.
[25] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan, "Multiple kernel learning, conic duality, and the SMO algorithm," in Proc. 21st ICML, 2004, pp. 41-48.
[26] D. Q. Zhang and S. C. Chen, "A novel kernelized fuzzy C-means algorithm with application in medical image segmentation," Artif. Intell. Med., vol. 32, no. 1, pp. 37-50, Sep. 2004.
[27] P. Guo, C. L. P. Chen, and M. R. Lyu, "Cluster number selection for a small set of samples using the Bayesian Ying-Yang model," IEEE Trans. Neural Netw., vol. 13, no. 3, pp. 757-763, May 2002.
[28] A. Guerrero-Curieses, J. L. Rojo-Alvarez, P. Conde-Pardo, I. Landesa-Vazquez, J. Ramos-Lopez, and J. L. Alba-Castro, "On the performance of kernel methods for skin color segmentation," EURASIP J. Adv. Signal Process., vol. 2009, pp. 1-13, 2009.
[29] M. Z. Lu, C. L. P. Chen, J. B. Huo, and X. Z. Wang, "Optimization of combined kernel function for SVM based on large margin learning theory," in Proc. IEEE Int. Conf. SMC, 2008, pp. 353-358.
[30] D. Graves and W. Pedrycz, "Performance of kernel-based fuzzy clustering," Electron. Lett., vol. 43, no. 25, pp. 1445-1446, Dec. 2007.
[31] S. C. Chen and D. Q. Zhang, "Robust image segmentation using FCM with spatial constraints based on new kernel-induced distance measure," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 34, no. 4, pp. 1907-1916, Aug. 2004.
[32] D. Cremers, T. Kohlberger, and C. Schnorr, "Shape statistics in kernel space for variational image segmentation," Pattern Recognit., vol. 36, no. 9, pp. 1929-1943, Sep. 2003.
[33] Q. Li, N. Mitianoudis, and T. Stathaki, "Spatial kernel K-harmonic means clustering for multi-spectral image segmentation," IET Image Process., vol. 1, no. 2, pp. 156-167, Jun. 2007.
[34] K. Muneeswaran, L. Ganesan, S. Arumugam, and K. R. Soundar, "Texture image segmentation using combined features from spatial and spectral distribution," Pattern Recognit. Lett., vol. 27, no. 7, pp. 755-764, May 2006.
[35] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[36] D. Graves and W. Pedrycz, "Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study," Fuzzy Sets Syst., vol. 161, no. 4, pp. 522-543, Feb. 2010.
[37] R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB. Upper Saddle River, NJ: Prentice-Hall, 2004.
[38] C. A. Cocosco, V. Kollokian, R. K.-S. Kwan, and A. C. Evans, "BrainWeb: Online interface to a 3D MRI simulated brain database," NeuroImage, vol. 5, no. 4, pt. 2/4, p. S425, 1997. [Online]. Available: http://mouldy.bic.mni.mcgill.ca/brainweb/
[39] D. C. Stanford, "Fast automatic unsupervised image segmentation and curve detection in spatial point pattern," Ph.D. dissertation, Dept. Stat., Univ. Washington, Seattle, WA, 1999.
[40] J. C. Bezdek, Pattern Recognition With Fuzzy Objective Function Algorithms. New York: Plenum, 1981.
[41] C. Y. Yeh, C. W. Huang, and S. J. Lee, "Multi-kernel support vector clustering for multi-class classification," Int. J. Innovative Comput. Appl., vol. 6, no. 5, pp. 2245-2262, May 2010.
[42] B. Zhao, J. Kwok, and C. Zhang, "Multiple kernel clustering," in Proc. 9th SIAM Int. Conf. Data Mining, Sparks, NV, 2009, pp. 638-649.
Long Chen received the B.S. degree in information sciences from Peking University, Beijing, China, in 2000, the M.S.E. degree from the Institute of Automation, Chinese Academy of Sciences, Beijing, in 2003, the M.S. degree in computer engineering from the University of Alberta, Edmonton, AB, Canada, in 2005, and the Ph.D. degree in electrical engineering from The University of Texas, San Antonio, in 2010.

He is currently a Postdoctoral Fellow at The University of Texas, San Antonio. His current research interests include computational intelligence, Bayesian methods, and other machine learning techniques and their applications.

Dr. Chen has worked on publications matters for several IEEE conferences. He was the Publications Cochair of the IEEE International Conference on Systems, Man, and Cybernetics in 2009.
C. L. Philip Chen (F'07) received the M.S. degree from the University of Michigan, Ann Arbor, in 1985, and the Ph.D. degree from Purdue University, West Lafayette, IN, in 1988.

He is currently the Dean and Chair Professor of the Faculty of Science and Technology, The University of Macau, Macau, China. He was a Professor and the Chair of the Department of Electrical and Computer Engineering and the Associate Dean for Research and Graduate Studies of the College of Engineering, The University of Texas, San Antonio. He was a Visiting Research Scientist at the Materials Directorate, U.S. Air Force Wright Laboratory, OH. He was also a Senior Research Fellow sponsored by the U.S. National Research Council and a Research Faculty Fellow at the National Aeronautics and Space Administration (NASA) Glenn Research Center for several years. Over the last 20 years, his research projects have been supported, continuously and consistently, by the U.S. National Science Foundation, NASA, the U.S. Air Force Office of Scientific Research, the U.S. Air Force, and the Office of Naval Research. His current research interests include theoretic development in computational intelligence, intelligent systems, robotics and manufacturing automation, networking, diagnosis and prognosis, and life prediction and life-extending control.

Dr. Chen has been involved in IEEE professional service for 20 years. He is the President-Elect and Vice President on Conferences and Meetings of the IEEE Systems, Man, and Cybernetics Society (SMCS), where he has been the Vice President of the IEEE SMCS Technical Activities on Systems Science and Engineering, a founding Cochair of three IEEE SMCS technical committees, a founding Cochair of two SMCS chapters, an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS (SMC)—PART C: APPLICATIONS AND REVIEWS and the IEEE SYSTEMS JOURNAL, the General Chair of IEEE SMC 2009, and the General Cochair of the 2007 IEEE Secure System Integration and Reliability conference. He is a member of the Tau Beta Pi and Eta Kappa Nu honor societies. He has been the Founding Faculty Advisor of an IEEE Computer Society Student Chapter and the Faculty Advisor of the Tau Beta Pi Engineering Honor Society at UTSA.
Mingzhu Lu (S'06) received the M.S. degree in computer science from Hebei University, Baoding, China, in 2007. She is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, The University of Texas, San Antonio.

She has been a Reviewer for the International Journal of Machine Learning and Cybernetics, among other journals. Her current research interests include machine learning, pattern recognition, data mining, Bayesian methods, intelligent systems, and their applications.

Ms. Lu is a member of the Tau Beta Pi and Eta Kappa Nu engineering honor societies. She serves as the Corresponding Secretary of the Texas Mu Chapter of the Tau Beta Pi National Engineering Honor Society. She has been a Reviewer for the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS. She was awarded NSF travel grants for Women in Machine Learning 2010 (Vancouver, BC, Canada) and the CRA-W Graduate Cohort Workshop 2010 (Bellevue, WA, USA) and 2011 (Boston, MA, USA). Moreover, she has served as a volunteer for several IEEE conferences, the Grace Hopper Celebration, and many campus and community activities.