
Swarm and Evolutionary Computation ()

Contents lists available at ScienceDirect

Swarm and Evolutionary Computation


journal homepage: www.elsevier.com/locate/swevo

Review

A survey on nature inspired metaheuristic algorithms for partitional clustering

Satyasai Jagannath Nanda a,*, Ganapati Panda b

a Department of Electronics and Communication Engineering, Malaviya National Institute of Technology Jaipur, Rajasthan 302017, India
b School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Odisha 751013, India

Article info

Abstract

Article history:
Received 10 October 2012
Received in revised form
23 August 2013
Accepted 20 November 2013

The partitional clustering concept started with the K-means algorithm, which was published in 1957. Since then many classical partitional clustering algorithms have been reported based on the gradient descent approach. The 1990s kick-started a new era in cluster analysis with the application of nature inspired metaheuristics. After the initial formulation nearly two decades have passed and researchers have developed numerous new algorithms in this field. This paper embodies an up-to-date review of all major nature inspired metaheuristic algorithms employed till date for partitional clustering. Further, key issues involved in the formulation of various metaheuristics as a clustering problem and major application areas are discussed.
© 2014 Published by Elsevier B.V.

Keywords:
Partitional clustering
Nature inspired metaheuristics
Evolutionary algorithms
Swarm intelligence
Multi-objective Clustering

Contents

1. Introduction
2. Single objective nature inspired metaheuristics in partitional clustering
   2.1. Problem formulation
   2.2. Historical developments in nature inspired metaheuristics for partitional clustering
        2.2.1. Evolutionary algorithms in partitional clustering
        2.2.2. Physical algorithms in partitional clustering
        2.2.3. Swarm intelligence algorithms in partitional clustering
        2.2.4. Bio-inspired algorithms in partitional clustering
        2.2.5. Other nature inspired metaheuristics for partitional clustering
   2.3. Fitness functions for partitional clustering
   2.4. Cluster validity indices
3. Multi-objective algorithms for flexible clustering
   3.1. Problem formulation
   3.2. Historical development in multi-objective algorithms for partitional clustering
   3.3. Evaluation methods
4. Real life application areas of nature inspired metaheuristics based partitional clustering
5. Conclusion
6. Future research issues
References

* Corresponding author.
E-mail addresses: nanda.satyasai@gmail.com (S.J. Nanda), ganapati.panda@gmail.com (G. Panda).

2210-6502/$ - see front matter © 2014 Published by Elsevier B.V.


http://dx.doi.org/10.1016/j.swevo.2013.11.003

Please cite this article as: S.J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering, Swarm and Evolutionary Computation (2014), http://dx.doi.org/10.1016/j.swevo.2013.11.003

S.J. Nanda, G. Panda / Swarm and Evolutionary Computation ()

1. Introduction
Data clustering determines groups of patterns in a dataset which are homogeneous in nature. The objective is to develop an automatic algorithm which can accurately classify an unlabeled dataset into groups. Recent literature [1–5] broadly classifies clustering algorithms into three categories: hierarchical, partitional and overlapping. The hierarchical algorithms provide a tree-structured output (dendrogram plot) which represents the nested grouping of the elements of a dataset [6,7]. They do not require a priori knowledge about the number of clusters present in the dataset [8,9]. However, the process involved in the algorithm is assumed to be static, and elements assigned to a given cluster cannot move to other clusters [10]. Therefore hierarchical algorithms exhibit poor performance when the separation of overlapping clusters is carried out.
The overlapping nature of clusters is better expressed in fuzzy clustering [11–13]. The popular algorithms include fuzzy c-means (FCM) [14] and the fuzzy c-shells algorithm (FCS) [15]. In this approach each element of a dataset belongs to all the clusters with a fuzzy membership grade. A fuzzy clustering can be converted to a crisp clustering (in which any element belongs to one cluster only) by assigning each element to the cluster with the highest membership value.
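The fuzzy-to-crisp conversion described above amounts to taking an argmax over the membership grades. A minimal sketch (the membership matrix U below is an illustrative example, not taken from the paper):

```python
import numpy as np

# Illustrative fuzzy membership matrix U: rows are elements, columns are
# clusters; U[i, k] is the membership grade of element i in cluster k
# (each row sums to 1, as in FCM).
U = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.3, 0.3, 0.4],
])

# Crisp assignment: each element joins the cluster with the highest
# membership value.
crisp_labels = U.argmax(axis=1)  # here: elements 0, 1, 2 -> clusters 0, 1, 2
```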
Partitional clustering divides a dataset into a number of groups based upon a certain criterion known as the fitness measure. The fitness measure directly affects the nature of the formation of clusters. Once an appropriate fitness measure is selected, the partitioning task is converted into an optimization problem (for example, grouping based on minimization of distance or maximization of correlation between patterns, or optimizing their density in the N-dimensional space). These partitional techniques are popular in various research fields due to their capability to cluster large datasets: in signal and image processing for image segmentation [16], in wireless sensor networks for classifying the sensors to enhance lifetime and coverage [17–20], in communication to design accurate blind equalizers [21], in robotics to efficiently classify humans based upon their activities [22], in computer science for web mining and pattern recognition [23], in economics research to identify groups of homogeneous consumers [24], in management studies to determine the portfolio [27], in seismology to classify aftershocks from the regular background events [28] and to perform high dimensional data analysis [29], in medical sciences to identify diseases from groups of patient reports and genomic studies [30], in library sciences for grouping books based upon their content [32], etc. In all these applications the nature of the patterns associated with the datasets differs from case to case. Therefore a single partitional algorithm cannot universally solve all problems. Thus, given a problem in hand, a user has to carefully investigate the nature of the patterns associated with the dataset and select the appropriate clustering strategy.
The K-means algorithm is the most fundamental partitional clustering concept, published by Lloyd of Bell Telephone Laboratories in 1957 [373–375]. After more than 50 years of existence, this algorithm is still popular and widely used for high dimensional datasets due to its simplicity and low computational complexity [34,376,377]. In this case the minimization of the Euclidean distance between elements and their cluster center is taken as the optimization criterion. Inspired by K-means, a number of gradient-based algorithms for partitional clustering have been developed by researchers, which include bisecting K-means [35] (recursively dividing the dataset into two clusters in each step), sort-means [36] (the means are sorted in order of increasing distance from each mean to speed up the traditional process), kd-tree [37] (determines the closest cluster centers for all the data points), X-means [38] (determines the best number of clusters K by optimizing a criterion such as the Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC)), k-harmonic means [39] (the harmonic mean is taken instead of the minimum of the Euclidean distance), the k-modes algorithm [40,41] (selects k initial modes instead of centers, followed by allocating every object to the nearest mode), Kernel K-means [42] (detects arbitrarily shaped clusters with a proper choice of the kernel function) and K-medoid [43] (the cluster center is represented by the median of the data instead of the mean). These algorithms are computationally simple, but are often trapped in local optima due to their hill-climbing approach (of cluster center movement, in the case of K-means). On the other hand, the nature inspired metaheuristics employ a population to explore the search space and thus ensure a greater probability of achieving optimal cluster partitions.
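For reference, the core of Lloyd's K-means iteration that these variants build on can be sketched as follows (a minimal illustration with synthetic data, not code from the survey):

```python
import numpy as np

def kmeans(Z, K, iters=50, seed=0):
    """Minimal Lloyd's K-means: minimize the squared Euclidean distance
    between elements and their nearest cluster center."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), K, replace=False)]  # random initial centers
    for _ in range(iters):
        # Assignment step: label each point with its nearest center.
        d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each center to the mean of its assigned points.
        new_centers = np.array([Z[labels == k].mean(axis=0) if np.any(labels == k)
                                else centers[k] for k in range(K)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated illustrative groups of points.
Z = np.vstack([np.zeros((5, 2)), np.ones((5, 2)) * 10])
labels, centers = kmeans(Z, K=2)
```

The assignment and update steps alternate until the centers stop moving, which is exactly the hill-climbing behavior that can trap the algorithm in a local optimum.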
Literature review [44,45] reveals the recent trend to name all stochastic algorithms with randomization and local search as metaheuristics. The randomization process generates arbitrary solutions, which explore the search space and are responsible for achieving a global solution. The local search governs convergence and focuses on achieving good solutions in a specific region. The first nature inspired metaheuristic is the genetic algorithm (GA), developed by Holland and his colleagues in 1975 [46,47]. It was followed by the development of simulated annealing (SA) by Kirkpatrick in 1983 [48]. Recent literature reports many established nature inspired metaheuristics, which are listed in Table 1. These algorithms are broadly classified into Evolutionary Algorithms, Physical Algorithms, Swarm Intelligence, Bio-inspired Algorithms and others. Table 1 further divides these algorithms into single objective and multi-objective, depending on the number of objective functions that they simultaneously optimize to achieve the solution.
The fundamental approach to developing a nature inspired metaheuristics based clustering algorithm using simulated annealing was proposed by Selim and Alsultan [159] in 1991. Then Bezdek et al. [100] proposed the evolutionary approach to clustering using the genetic algorithm in 1994. The research article by Sarkar and Yegnarayana [101] highlights the core issues involved in evolutionary programming for the development of clustering algorithms. Lumer and Faieta first explored the use of the swarming nature of clustering ants [191]. Subsequently swarm intelligence algorithms like ant colony optimization [183] and particle swarm optimization [219] have been applied to cluster analysis.
This paper presents an in-depth survey of nature inspired metaheuristic algorithms used for partitional clustering, focusing on those that have been applied to cluster analysis in the last two decades. A few interesting review articles on cluster analysis with overwhelming citations by researchers have been published by Jain et al. [34], Hruschka et al. [3], Xu and Wunsch [91], Freitas [92], Paterlini and Minerva [93], and Jafar and Sivakumar [94]. To the best of our knowledge, a review paper covering recently developed nature inspired metaheuristics for partitional clustering has not been reported. In 2009 Hruschka et al. [3] focused on the initialization procedures, crossover, mutation, fitness evaluation and selection associated with genetic-type evolutionary algorithms for single and multi-objective cases. Jain et al. [34] have dealt with key issues of clustering and users' dilemmas, and have suggested corresponding solutions. Jafar and Sivakumar [94] highlighted the developments in ant algorithms for cluster analysis. The book chapter by Abraham et al. [4] focuses on the use of PSO and ant algorithms for the clustering task. The basic principles and methods of clustering are embodied in the books [1,95–99].
Keeping the current research trends in mind, the present paper contributes to the survey of partitional clustering in terms of four aspects: (1) a systematic review of all the single objective nature


Table 1
Broad classification of nature inspired metaheuristic algorithms.

Evolutionary algorithms
  Single objective: Genetic Algorithm (GA) [46,47], Differential Evolution (DE) [53–58], Genetic Programming (GP) [52], Evolutionary Strategy (ES) [51,139], Granular agent evolutionary algorithm [358]
  Multi-objective: NSGA-II [305,306], Multi-objective DE [343], Multi-objective GP [317], Multi-objective ES [318], SPEA [326], PESA-II [325]

Physical algorithms
  Single objective: Simulated Annealing (SA) [48], Memetic Algorithm (MA) [167–170], Harmony Search (HS) [173,174], Shuffled Frog-Leaping algorithm (SFL) [179]
  Multi-objective: Multi-objective SA [313], Multi-objective MA [314], Multi-objective HS [315], Multi-objective SFL [316]

Swarm intelligence
  Single objective: Ant Colony Optimization (ACO) [62–67], Particle Swarm Optimization (PSO) [68–72], Artificial Bee Colony (ABC) [73–77], Fish Swarm algorithm (FSA) [254,255]
  Multi-objective: Multi-objective ACO [333], Multi-objective PSO [307], Multi-objective ABC [310], Multi-objective FSA [321]

Bio-inspired algorithms
  Single objective: Artificial Immune System (AIS) [78–83], Bacterial Foraging Optimization (BFO) [84,85], Dendritic Cell algorithm [87,88], Krill herd algorithm [356]
  Multi-objective: Multi-objective AIS [308], Multi-objective BFO [309]

Other nature inspired algorithms
  Single objective: Cat Swarm Optimization (CSO) [269,270], Cuckoo Search algorithm [272–274], Firefly algorithm [275–277], Invasive Weed Optimization (IWO) [280], Gravitational Search algorithm (GSA) [285,286], River formation dynamics [357], Bat algorithm [359,360]
  Multi-objective: Multi-objective CSO [311], Multi-objective Cuckoo [319], Multi-objective Firefly [312], Multi-objective IWO [283], Multi-objective GSA [320], Multi-objective Bat [361]
inspired metaheuristics used in partitional clustering, (2) an up-to-date survey of flexible partitional clustering based on multi-objective metaheuristic algorithms, (3) a consolidation of recently developed cluster validation measures, and (4) an exploration of new application areas of partitional clustering algorithms.
The paper is organized as follows. Section 2 deals with the advances in single objective nature inspired metaheuristics for partitional clustering, including recent developments in algorithm design, fitness function selection and the cluster validity indices used for verification. The multi-objective metaheuristics used for flexible clustering are discussed in Section 3. The real life application areas of nature inspired partitional clustering are highlighted in Section 4. The concluding remarks of the investigation made in the survey are presented in Section 5. Finally, a number of issues for innovative future research are presented in Section 6.

2. Single objective nature inspired metaheuristics in partitional clustering

In the last two decades a number of nature inspired metaheuristics have been proposed in the literature and applied to many real life applications. In recent years the metaheuristic algorithms have been successfully used to solve various unsupervised optimization problems. At the present stage, for any unsupervised optimization problem in hand, a user can easily pick a suitable metaheuristic algorithm for the purpose. The solution achieved ensures optimality as these population based algorithms explore the entire search space with the progress of generations.

2.1. Problem formulation

Given an unlabeled dataset Z_{N×D} = {z_{1D}, z_{2D}, …, z_{ND}} representing N patterns, each having D features, the partitional approach aims to cluster the dataset into K groups (K ≤ N) such that

  C_k ≠ ∅   ∀ k = 1, 2, …, K;
  C_k ∩ C_l = ∅   ∀ k, l = 1, 2, …, K and k ≠ l;
  ∪_{k=1}^{K} C_k = Z.   (1)

The clustering operation is dependent on the similarity between elements present in the dataset. If f denotes the fitness function, then the clustering task is viewed as an optimization problem:

  Optimize_{C_k} f(Z_{N×D}, C_k)   ∀ k = 1, 2, …, K.

Hence the optimization based clustering task is carried out by single objective nature inspired metaheuristic algorithms.

2.2. Historical developments in nature inspired metaheuristics for partitional clustering

The basic steps associated with the core algorithms for partitional clustering are listed in Table 2. The recent works on partitional clustering are outlined in sequence.

2.2.1. Evolutionary algorithms in partitional clustering

The evolutionary algorithms are inspired by Darwin's theory of natural selection, which is based on survival of the fittest candidate for a given environment. These algorithms begin with a population (a set of solutions) which tries to survive in an environment (defined by a fitness evaluation). The parent population passes its properties of adaptation to the environment on to the children through various mechanisms of evolution such as genetic crossover and mutation. The process continues over a number of generations (an iterative process) till the solutions are found to be most suitable for the environment. With this concept in mind, Holland proposed the Genetic Algorithm (GA) in 1975 [46,47]. It was followed by the development of Evolution Strategies (ES) by Schwefel in 1981 [49–51] and Genetic Programming (GP) by Koza [52] in 1992. Storn and Price developed another evolutionary concept in 1997, termed Differential Evolution (DE) [53]. The books [54,150] on DE, research work on adaptive DE [55,56] and opposition-based DE [57,58] made DE quite popular amongst researchers. The application of these evolutionary algorithms to partitional clustering is outlined below.
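The basic steps listed in Table 2 share a common template: a population of candidate centroid sets is evaluated with a fitness function f and iteratively refined. A minimal sketch of that template is given below, with plain random perturbation standing in for the algorithm-specific variation operators (crossover and mutation in GA, velocity updates in PSO, pheromone trails in ACO, and so on); all data and parameters are illustrative:

```python
import numpy as np

def sse_fitness(Z, centers):
    """Centroid-distance criterion: sum of squared Euclidean distances
    of each element to its nearest cluster center (to be minimized)."""
    d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def population_clustering(Z, K, pop=20, gens=100, seed=0):
    """Generic population-based template: evaluate a population of candidate
    centroid sets, keep the best, and regenerate candidates around it.
    Real metaheuristics replace the perturbation step with their own
    variation operators."""
    rng = np.random.default_rng(seed)
    lo, hi = Z.min(0), Z.max(0)
    popn = rng.uniform(lo, hi, size=(pop, K, Z.shape[1]))  # candidate centroid sets
    best = min(popn, key=lambda c: sse_fitness(Z, c))
    for _ in range(gens):
        popn = best + rng.normal(0.0, 0.5, size=popn.shape)  # perturb around the best
        cand = min(popn, key=lambda c: sse_fitness(Z, c))
        if sse_fitness(Z, cand) < sse_fitness(Z, best):       # elitist replacement
            best = cand
    return best

# Two well-separated illustrative groups of points.
Z = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
centers = population_clustering(Z, K=2)
```

The differences between the algorithms of Table 2 lie entirely in how new candidate solutions are generated from the current population.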



Table 2
Basic steps involved in single objective standard GA, DE, ACO, PSO, ABC, AIS and BFO algorithms for solving the partitional clustering problem. Each algorithm repeats its steps over successive generations until a stopping criterion is satisfied, and then delivers the cluster output.

GA:  Initialize chromosomes → Crossover → Mutation → Fitness → Selection → (next generation) → Cluster output
DE:  Initialize particles → Mutation → Crossover → Fitness → Selection → (next generation) → Cluster output
ACO: Initialize ants → Fitness → Update pheromone intensity → Drop or pick → Short memory → (next generation) → Cluster output
PSO: Initialize particles → Velocity and position update → Compute GBest and PBest → Fitness → Selection → (next generation) → Cluster output
ABC: Initialize bees → Compute employed bees → Greedy selection and fitness → Onlooker bees → Selection → (next generation) → Cluster output
AIS: Initialize immune cells → Fitness → Clone → Mutation → Selection → (next generation) → Cluster output
BFO: Initialize bacteria → Chemotaxis → Swarming → Reproduction → Elimination and dispersal → (next generation) → Cluster output
• GA-based approaches: Bezdek et al. [100] initially proposed the use of the basic genetic algorithm for partitional clustering. The standard binary encoding scheme with a fixed number of cluster centers (k) is used for initialization of chromosomes [100–102]. The reproduction operation is carried out using uniform crossover and cluster-oriented mutation (altering the bits of the binary string). Subsequently integer-based encoding of chromosomes was used by Murthy and Chowdhury [103]. They suggested the use of single point crossover and the Xiaofeng–Palmieri mutation scheme [104] for reproduction. However, theoretically this mutation may produce invalid offspring. Maulik and Bandyopadhyay have proposed the use of a real coded genetic algorithm for partitional clustering [105]. With real coding the computational complexity is reduced to O(k), compared to O(nk) for integer or binary encoding. A genetic K-means algorithm is proposed in [106], which replaces the crossover operation with the basic search operation of K-means. Based on this concept Lu et al. have developed fast genetic K-means [108] and incremental genetic K-means [109] algorithms for gene expression data analysis. Similarly Sheng and Liu [107] have proposed a genetic-based hybrid K-medoid algorithm for accurate clustering of large databases.
All these algorithms are based on a fixed number of clusters. They work satisfactorily when the suitable number of partitions for a dataset is known a priori. But in many practical scenarios the value of K (the number of clusters) is unknown to the user. The K value directly affects the partition quality; therefore the clustering algorithm should explore the number of partitions along with the process of optimization. Cowgill et al. [110] have developed a hybrid algorithm, COWCLUS, which first uses a nondeterministic genetic algorithm based approach to determine the


good partitions, then uses a hill-climbing approach to improve these partitions and produce the final best partition. Tseng and Yang [111] have proposed the automatic evolution of clusters with the genetic algorithm. In [112,113] Bandyopadhyay and Maulik have developed a nonparametric genetic algorithm for automatic selection of the number of partitions K. Based upon this concept, a self-adaptive genetic algorithm for cluster analysis is reported in [125]. Recently a quantum inspired genetic algorithm for K-means clustering was proposed by Xiao et al. [128], which reports superior performance to that obtained in [112,113]. The automatic evolution of clusters has been successfully applied to image classification [113], document clustering [115], intrusion detection [116], microarray [117] and gene-expression data analyses [118–120].
In genetic-based evolutionary approaches, normally a population is initialized where each individual searches for the optimal weight vector for all the clusters. Gancarski and Blansche [126,127] developed co-evolutionary approaches (unlike evolutionary approaches, here several populations are employed and each population searches for a local weight vector for a cluster) based upon Darwinian theory, Lamarckian theory and the Baldwin effect for feature weighting in K-means algorithms. Based upon these three theories they proposed six genetic approaches for feature weighting in K-means (three based on the evolutionary scheme, DE-LKM, LE-LKM and BE-LKM, and three co-evolutionary schemes, DC-LKM, LC-LKM and BC-LKM). They reported that the co-evolutionary approach to cluster analysis provides superior performance to the traditional evolutionary ones.
Intuitively, hybrid evolutionary algorithms (formulated by combining the good features of two individual parent processes) provide superior performance to the conventional parent algorithms. A hybrid GA and PSO based algorithm is developed in [129] for order clustering to reduce the surface mount technology (SMT) setup time. Feng-jie and Ye [130] applied the GA and PSO based hybrid clustering algorithm to image segmentation of transmission line pictures to determine faults; this system is helpful for remote video monitoring. Hong and Kwong [131] combined a steady-state genetic algorithm and ensemble learning for cluster analysis. Chaves and Lorena [132] developed a hybrid algorithm, Clustering Search (consisting of GA along with a local search heuristic), to solve the capacitated centered clustering problem. Recently a two stage genetic algorithm was proposed by He et al. [134] for cluster analysis, in which two-stage selection and mutation operations are incorporated to enhance the search capability of the algorithm. The two stage genetic algorithm provides accurate results compared to agglomerative K-means [133] and standard genetic K-means algorithms. The grouping genetic algorithm (GGA) is a compact one proposed by Falkenauer [135] to handle grouping-based problems. The GGA is successfully used for cluster analysis of benchmark UCI datasets in [136]. Recently Tan et al. [137] applied the GGA based clustering technique to improve the spectral efficiency of OFDMA (orthogonal frequency-division multiple access) based multicast systems.
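A minimal sketch of the real-coded GA encoding discussed above, in which each chromosome is a flat vector of K cluster centers evolved with tournament selection, single point crossover and Gaussian mutation (a simplified illustration under assumed parameters, not the exact scheme of any cited paper):

```python
import numpy as np

def fitness(Z, chrom, K):
    """Decode a flat chromosome into K cluster centers and return the sum
    of squared distances of each point to its nearest center (minimize)."""
    centers = chrom.reshape(K, -1)
    d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def ga_clustering(Z, K, pop_size=30, gens=100, pc=0.9, pm=0.15, seed=1):
    rng = np.random.default_rng(seed)
    D = Z.shape[1]
    pop = rng.uniform(Z.min(), Z.max(), (pop_size, K * D))  # real-coded chromosomes
    for _ in range(gens):
        fit = np.array([fitness(Z, c, K) for c in pop])
        # Tournament selection: the lower-fitness (better) chromosome wins.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Single point crossover on consecutive pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                cut = rng.integers(1, K * D)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # Gaussian mutation on roughly a fraction pm of the genes.
        mask = rng.random(children.shape) < pm
        children = children + mask * rng.normal(0.0, 0.3, children.shape)
        # Elitism: carry the best chromosome of the old population forward.
        children[0] = pop[fit.argmin()]
        pop = children
    fit = np.array([fitness(Z, c, K) for c in pop])
    return pop[fit.argmin()].reshape(K, D)

# Two well-separated illustrative groups of points.
Z = np.vstack([np.zeros((10, 2)), np.full((10, 2), 6.0)])
centers = ga_clustering(Z, K=2)
```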

• ES-based approaches: Babu and Murty [138] developed partitional and fuzzy clustering algorithms with ES in 1994. They used the minimization of the WGSS (within group sum of squared error) objective function for partitional clustering and the minimization of the FCM (fuzzy C-means) objective function for fuzzy clustering. The paper by Beyer and Schwefel [139] discusses the fundamentals and recent advancements in partitional clustering with ES. A hybrid partitional clustering algorithm based on K-means and ES is developed in [140]. It is observed that the hybrid algorithm provides better performance than the regular ES on cluster analysis of benchmark UCI datasets. The ES based partitional clustering has been suitably used for cluster analysis of a DNA microarray database [141].
• GP-based approaches: The GP is related to GA; it automatically generates computer programs based on the Darwinian principle. Each individual computer program is a solution to the optimization problem and is encoded in the form of a tree comprising functions and terminals. The GP has been widely used for supervised classification problems, and it is reported that the trees generated by GP have the capability to separate regions with a variety of shapes [142–144]. Falco et al. [145,146] developed a partitional clustering algorithm based on GP. The algorithm starts with a population of program trees generated at random and determines the optimal number of clusters by selecting a variable number of trees per individual. The user has to provide a parameter that directly influences the number of clusters present in the dataset. The trees undergo fitness evaluation, and those having higher fitness have a higher probability of serving as parents for the next generation. Genetic operators like crossover and mutation are applied on the parent trees to generate offspring. The process continues till a predefined stopping criterion corresponding to the optimal cluster partition is satisfied. Boric et al. [147] modified the GP based partitional clustering with an information theoretic fitness measure which can determine arbitrarily shaped clusters present in the dataset.
• DE-based approaches: The book on metaheuristic clustering by Das et al. [2] in 2009 discusses the fundamentals as well as the advances in DE approaches for cluster analysis [150]. In DE based clustering the individual target solutions (which combine to create a population of size P) are taken as parameter vectors or genomes. Each target vector x_i = (m_{i1}, m_{i2}, …, m_{ik}, …, m_{iK}), where m_{ik} is the centroid of cluster c_{ik} and K represents the number of clusters. DE then employs the mutation operation to produce a mutant vector v_i. The five most commonly used mutation strategies are

  v_{1,i} = x_{r_1,i} + F (x_{r_2,i} − x_{r_3,i})
  v_{2,i} = x_{best} + F (x_{r_1,i} − x_{r_2,i})
  v_{3,i} = x_i + F (x_{best,i} − x_i) + F (x_{r_1,i} − x_{r_2,i})
  v_{4,i} = x_{best} + F (x_{r_1,i} − x_{r_2,i}) + F (x_{r_3,i} − x_{r_4,i})
  v_{5,i} = x_{r_1,i} + F (x_{r_2,i} − x_{r_3,i}) + F (x_{r_4,i} − x_{r_5,i})

where i varies from 1 to P and r_1, r_2, r_3, r_4, r_5 are mutually exclusive integers randomly generated within the range [1, P]. The scale factor F is a control parameter used for amplification of the difference vector and normally lies in the range [0, 2]. A crossover operation is then applied to each pair of target vector x_i and its corresponding mutant vector v_i to obtain a trial vector u_i:

  u_i = v_i  if rand_1 ≤ CR or i = i_rand;
  u_i = x_i  otherwise,   ∀ i = 1, 2, …, P   (4)

where rand_1 is a random number in [0, 1], the crossover rate CR is a user-defined constant in the range [0, 1], and i_rand is a randomly chosen integer in the range [1, P]. The fitness of every target vector x_i and trial vector u_i is evaluated using one of the fitness functions defined in Table 3. The population for the next generation is then given by

  x_i^{t+1} = u_i^t  if f(u_i^t) ≤ f(x_i^t);
  x_i^{t+1} = x_i^t  otherwise,   ∀ i = 1, 2, …, P   (5)

where t is the generation number. The algorithm runs for a certain number of generations till it converges and the optimum clusters are achieved.
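The mutation, crossover and selection steps above can be sketched for centroid vectors using the DE/rand/1 strategy (the first of the five mutation strategies); the dataset and the control parameters F and CR below are illustrative:

```python
import numpy as np

def sse(Z, x, K):
    """Fitness: sum of squared distances of each point to the nearest of
    the K centroids encoded in the flat parameter vector x."""
    c = x.reshape(K, -1)
    d = np.linalg.norm(Z[:, None, :] - c[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def de_clustering(Z, K, P=20, F=0.8, CR=0.9, gens=80, seed=0):
    rng = np.random.default_rng(seed)
    dim = K * Z.shape[1]
    X = rng.uniform(Z.min(), Z.max(), (P, dim))  # target vectors x_i
    fit = np.array([sse(Z, x, K) for x in X])
    for _ in range(gens):
        for i in range(P):
            # DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3).
            r1, r2, r3 = rng.choice([j for j in range(P) if j != i], 3, replace=False)
            v = X[r1] + F * (X[r2] - X[r3])
            # Binomial crossover with crossover rate CR.
            jrand = rng.integers(dim)
            mask = rng.random(dim) <= CR
            mask[jrand] = True                   # guarantee at least one mutant gene
            u = np.where(mask, v, X[i])          # trial vector u_i
            # Greedy selection: keep the trial vector if it is no worse.
            fu = sse(Z, u, K)
            if fu <= fit[i]:
                X[i], fit[i] = u, fu
    return X[fit.argmin()].reshape(K, -1)

# Two well-separated illustrative groups of points.
Z = np.vstack([np.zeros((10, 2)), np.full((10, 2), 8.0)])
centers = de_clustering(Z, K=2)
```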
The benchmark research article on DE based automatic clustering [16] was published in 2008 by Das et al. Prior to that, the DE based framework introduced for partitional clustering by Paterlini and Krink [148,149] is worth mentioning. Further research work by Das et al. deals with the hybridization of kernel-based clustering with DE [151] and the application of DE based clustering algorithms to image pixel clustering [152]. Subsequently various hybrid algorithms based on DE have been developed by several researchers, including DE-K-means by Kwedlo [153], DE-K-harmonic means by Tian et al. [154] and DE-possibilistic clustering by Hu et al. [71]. These algorithms have been successfully applied to image classification [156], document clustering [157] and node selection in mobile networks [158].

2.2.2. Physical algorithms in partitional clustering


The physical algorithms are inspired by physical processes such as the heating and cooling of materials (Simulated Annealing, given by Kirkpatrick et al. in 1983 [48]), discrete cultural information treated as intermediate between genetic and cultural evolution (the Memetic algorithm by Moscato [167] in 1989), the harmony of music played by musicians (Harmony Search by Geem et al. [173] in 2001) and the cultural behavior of frogs (the Shuffled frog-leaping algorithm by Eusuff et al. [179] in 2006). These algorithms have been applied to solve the partitional clustering problem, as briefly explained in sequence:

• Simulated Annealing (SA) based approaches: Selim and Alsultan first developed SA based partitional clustering in 1991 [159]. Then in 1992 Brown and Huntley [160] applied the SA based partitional clustering algorithm to solve the multi-sensor fusion problem. The clustering algorithm begins with an initial solution x (cluster centroids) at a large initial temperature T. The fitness of the initial solution, f(x) (computed with any function from Table 3), represents the internal energy of the system. The heuristic algorithm moves to a new solution x′ (selected from the neighborhood of the current state) or remains in the old state x depending upon an acceptance probability function
Please cite this article as: S.J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering, Swarm
and Evolutionary Computation (2014), http://dx.doi.org/10.1016/j.swevo.2013.11.003i

S.J. Nanda, G. Panda / Swarm and Evolutionary Computation ()

Table 3
Similarity functions f(·) used by the single objective nature inspired metaheuristic algorithms for cluster analysis, considering a dataset Z_{N×D} = {z_1, z_2, …, z_N} to be divided into K clusters with valid partitions C_k as per (1).

Medoid distance
Explanation: Minimization of the sum of distances between the objects and the medoids of the dataset.
Representation: F_1 = Σ_{i=1}^{N} min_{j ∈ {1,…,K}} d(z_i, m_j), where the medoids {m_1, m_2, …, m_K} ⊂ Z and d is any distance.
Used in: Lucasius et al. [121], Castro and Murray [122], Sheng and Liu [107].

Centroid distance
Explanation: Minimization of the sum of squared Euclidean distances of the objects from their respective cluster means.
Representation: F_2 = Σ_{j=1}^{K} Σ_{z_i ∈ c_j} ||z_i − μ_j||², where μ_j is the mean of c_j.
Used in: Maulik and Bandyopadhyay [105], Zhang and Cao [207], Murthy and Chowdhury [103].

Distortion distance
Explanation: Minimization of the intra-cluster diversity.
Representation: F_3 = F_2/(N · D).
Used in: Krishna and Murty [106], Lu et al. [108,109], Franti et al. [124], Kivijarvi et al. [125].

Variance ratio criterion (VRC)
Explanation: The ratio of the between-cluster (B) and pooled within-cluster (W) covariance matrices; the VRC should be maximized.
Representation: F_4 = VRC = [trace(B)/(K − 1)] / [trace(W)/(N − K)].
Used in: Cowgill et al. [110], Casillas et al. [115].

Intra- and inter-cluster distance
Explanation: Difference between the inter-cluster and intra-cluster distances.
Representation: F_5 = Σ_{i=1}^{K} [w · D_inter(c_i) − D_intra(c_i)], where w is a parameter.
Used in: Tseng and Yang [111].

Dunn's index
Explanation: Dunn's index is to be maximized for an optimal partition.
Representation: F_6 = DI_K = min_{i ∈ K} { min_{j ∈ K, j ≠ i} [ δ(c_i, c_j) / max_{k ∈ K} Δ(c_k) ] }, where δ(c_i, c_j) = min{d(z_i, z_j): z_i ∈ c_i, z_j ∈ c_j}, Δ(c_k) = max{d(z_i, z_j): z_i, z_j ∈ c_k} and d is the distance.
Used in: Dunn [293], Zhang and Cao [207].

Davies–Bouldin (DB) index
Explanation: Ratio of the sum of the within-cluster scatter to the between-cluster separation; the DB index is to be minimized.
Representation: F_7 = DB_K = (1/K) Σ_{i=1}^{K} R_{i,qt}, where R_{i,qt} = max_{j ∈ K, j ≠ i} [(S_{i,q} + S_{j,q})/d_{ij,t}]. The ith cluster scatter is S_{i,q} = [(1/N_i) Σ_{z ∈ c_i} ||z − μ_i||^q]^{1/q}, where N_i and μ_i are the number of elements and the center of c_i respectively; the separation distance between the ith and jth clusters is d_{ij,t} = [Σ_{d=1}^{D} |μ_{i,d} − μ_{j,d}|^t]^{1/t}.
Used in: Davies and Bouldin [291], Cole [123], Das et al. [16], Bandyopadhyay and Maulik [113], Agustin-Blas et al. [136].

CS measure
Explanation: The CS measure is to be minimized for an optimal partitioning.
Representation: F_8 = CS_K = [Σ_{i=1}^{K} (1/N_i) Σ_{z_j ∈ c_i} max_{z_q ∈ c_i} d(z_j, z_q)] / [Σ_{i=1}^{K} min_{j ∈ K, j ≠ i} d(m_i, m_j)], with centroid m_i = (1/N_i) Σ_{z_j ∈ c_i} z_j, where N_i is the number of elements in c_i.
Used in: Chou et al. [292], Das et al. [16].

Silhouette
Explanation: A higher silhouette indicates a better assignment of the elements.
Representation: F_9 = Σ_{i=1}^{N} S(z_i)/N, where for an element z_i ∈ A (with clusters A, B ∈ C_k) S(z_i) = [b(z_i) − a(z_i)] / max{a(z_i), b(z_i)}, with silhouette range −1 ≤ S(z_i) ≤ 1; a(z_i) is the average dissimilarity of z_i to the other elements of A and the neighbor dissimilarity is b(z_i) = min_{B ≠ A} diss(z_i, B).
Used in: Kaufman and Rousseeuw [294], Hruschka et al. [3,118].

given by
P_accept = exp([f(x) − f(x′)]/T)
where f(x) is the energy and T is the temperature of the present state. The acceptance probability exceeds unity when f(x′) is lower than f(x), reflecting that solutions with smaller energy are preferred over those with greater energy. The temperature T plays a crucial role in controlling the evolution of the state through the cooling process of the system. The algorithm continues either for a fixed number of iterations or until a state with minimum energy is found (the global solution corresponds to the optimal cluster partition).
The basic SA has been suitably combined with K-means [161] and K-harmonic means [162] to develop hybrid algorithms which provide superior performance in accurately clustering the UCI datasets. A GA and SA based hybrid clustering algorithm was developed in [164] to solve the dynamic topology management and energy conservation problem in mobile ad hoc networks. Lu et al. [165] developed a fast simulated annealing based clustering approach by combining multiple clusterings based on different agreement measures between partitions. Recently, SA based clustering has been applied to group suppliers for effective management and to fulfill the demands of customers (i.e. to build a good supply chain management system) [166].
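Taken together, the move/accept/cool cycle above can be sketched in a few lines. This is a minimal illustration rather than the algorithm of [159]; the Gaussian neighborhood move, the cooling factor alpha and the centroid-distance energy are illustrative choices.

```python
import math
import random

def sa_clustering(Z, K, T0=10.0, alpha=0.95, iters=2000, seed=0):
    """Minimal simulated-annealing clustering sketch.

    A solution is a list of K centroids; its energy f(x) is the sum of
    squared distances of each point to its nearest centroid. A worse
    neighbor x' is accepted with probability exp((f(x) - f(x'))/T).
    """
    rng = random.Random(seed)
    D = len(Z[0])

    def energy(cents):
        return sum(min(sum((p[d] - c[d]) ** 2 for d in range(D)) for c in cents)
                   for p in Z)

    x = [list(rng.choice(Z)) for _ in range(K)]      # initial centroids
    fx, T = energy(x), T0
    for _ in range(iters):
        x2 = [c[:] for c in x]
        k, d = rng.randrange(K), rng.randrange(D)
        x2[k][d] += rng.gauss(0, 0.5)                # neighborhood move
        fx2 = energy(x2)
        if fx2 <= fx or rng.random() < math.exp((fx - fx2) / T):
            x, fx = x2, fx2                          # Metropolis acceptance
        T *= alpha                                   # cooling schedule
    return x, fx
```

Because better moves are always accepted and the temperature decays, the search degenerates into pure hill descent late in the run, matching the cooling behavior described above.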

 Memetic Algorithm-based approaches: The recent survey articles


by Chen et al. [169,170] highlight the recent advances in the theory and application areas of the memetic algorithm. This algorithm was used by Merz [168] to perform cluster analysis on gene expression profiles, using minimization of the sum-of-squares as the fitness measure. It begins with a population which undergoes a global search (exploration of various areas of the search space), combined with individual solution improvement (performed by a local search heuristic to provide local refinements). A balance between the local and global mechanisms ensures that the system neither converges prematurely to a local solution nor consumes excessive computational resources in reaching the solution. The memetic based partitional clustering algorithm has been applied for energy efficient clustering of nodes in wireless sensor networks [171] and segmentation of natural and remote sensing images [172].
Harmony Search: The Harmony search algorithm became popular after Lee and Geem [174] applied it to various engineering optimization problems. Mahdavi et al. developed the Harmony search based partitional algorithm for web page clustering [175,176]. The algorithm is inspired by the harmony played by musicians. Here each musician represents a decision variable which denotes a solution of the problem. The musicians try to match the harmony over time by incorporating variations and improvisations in the pitch they play. The variation in pitch is given by x′ = x + PB × ε, where PB is the pitch bandwidth, a user defined parameter to control the amount of change, and ε is a random number in the range [−1, 1]. This variation is reflected in the form of improvement of the cost function to achieve the global solution. Mahdavi and Abolhassani have also formulated a hybrid Harmony K-means algorithm for document clustering [177]. The clustering algorithm [178] has been suitably applied for designing clustering protocols for wireless sensor networks.
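The improvisation step described above can be sketched as follows. The memory consideration (HMCR) and pitch adjustment (PAR) probabilities are part of standard harmony search rather than details taken from [175,176], and the values shown are illustrative.

```python
import random

def improvise(memory, lo, hi, HMCR=0.9, PAR=0.3, PB=0.1, rng=random):
    """One harmony-search improvisation step (sketch).

    Each decision variable is drawn from the harmony memory with
    probability HMCR and then pitch-adjusted (x' = x + PB * eps,
    eps in [-1, 1]) with probability PAR; otherwise it is drawn
    uniformly from the search range [lo, hi].
    """
    n = len(memory[0])
    new = []
    for j in range(n):
        if rng.random() < HMCR:
            x = rng.choice(memory)[j]            # memory consideration
            if rng.random() < PAR:
                x = x + PB * rng.uniform(-1, 1)  # pitch adjustment
        else:
            x = rng.uniform(lo, hi)              # random selection
        new.append(min(max(x, lo), hi))          # clamp to the search range
    return new
```

In a clustering setting each harmony would encode the cluster centroids, and the new harmony replaces the worst member of the memory when its fitness is better.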
Shuffled frog-leaping algorithm (SFL): The SFL algorithm mimics the behavior of frogs in memeplexes. The algorithm has been used to solve the partitional clustering problem [180] and has been reported to yield better solutions than the ACO, simulated annealing and genetic K-means [106] approaches on several synthetic and real life datasets. The initial population consists of a set of frogs (solutions) which is grouped into subsets known as memeplexes. The frogs belonging to different memeplexes are assumed to be of different cultures and are allowed to perform local search. Within each memeplex each individual frog shares its ideas with the other frogs, and thus the group evolves with new ideas (memetic evolution). After a pre-defined number of steps of memetic evolution, the ideas are shared among the memeplexes through a shuffling process. The local (memetic) and global (shuffling) searches continue till the optimal fitness (accurate clusters) is achieved. The clustering algorithm based on SFL has been used for color image segmentation [181] and web text mining [182].

2.2.3. Swarm Intelligence algorithms in partitional clustering


Swarm intelligence is the group of natural metaheuristics inspired by collective intelligence. The collective intelligence is built up through a population of homogeneous agents interacting with each other and with their environment. Examples of such intelligence are found among colonies of ants, flocks of birds, schools of fish, etc. The books [59–61] highlight the fundamentals and developments of swarm intelligence algorithms for solving numerous real life optimization problems. The major such algorithms include: Ant colony optimization (ACO) by Dorigo [62] in 1992, Particle swarm optimization (PSO) by Kennedy and Eberhart in 1995 [68,69], the Artificial bee colony (ABC) algorithm by Karaboga and Basturk in 2006 [73], and the Fish Swarm Algorithm (FSA) by Li et al. in 2002 [254,255]. The application of these algorithms to solve partitional clustering problems is outlined in sequence:

 ACO-based approaches: The ACO algorithm is inspired by ants


behavior in determining the optimal path from the nest to a food source. The algorithm became popular after the work of Dorigo et al. was standardized in IEEE publications [63–65]. With the progress of time, Dorigo's book on ACO [66] and survey paper [67] have been heavily cited by researchers and scientists in this field.

The cluster analysis algorithms based on ACO follow either of the two fundamental behaviors of real life ants.
The first is based on the ants' foraging behavior for determining a food source. Initially ants wander randomly for food in the regions surrounding the nest. An ant's movement is observed by the neighboring ants through the pheromone intensity it lays down while searching for food. Once a food source is found, the pheromone intensity of the path increases due to the movement of ants between the source and the nest, and the other ants, instead of searching at random, follow the trail. With the progress of time the pheromone intensity evaporates and its attraction reduces. The amount of time taken by an ant to travel to the food source and back to the nest is directly proportional to the quantity of pheromone evaporation. So, over time, an optimal shortest path emerges which maintains a high pheromone intensity. With this concept cluster analysis is formulated as an optimization problem and solved using ACO to obtain the optimal partitions in [183,184]. A constrained ACO (C-ACO) [185] was proposed to handle arbitrarily shaped clusters and outliers present in the data. Then adaptive ACO was proposed by several researchers [186–188] to improve the convergence rate and to determine the optimal number of clusters. A variant of ACO, known as the APC (aggregation pheromone density based clustering) algorithm, was proposed by Ghosh et al. [189,190]. The beauty of APC is the updating of the pheromone matrix, which helps to avoid convergence of the solutions to a local optimum.
The second imitates the ants' behavior of grouping dead bodies. Ants work together to deposit dead bodies in their nest and group them with respect to their size. This grouping property of ants was first coded in the form of an algorithm for data clustering (the LF algorithm) by Lumer and Faieta [191]. The basic LF algorithm was followed and improved by several researchers [192,193]. Yang et al. [194,195] proposed the use of a multi-ant colonies algorithm for clustering. In this concept, the algorithm consists of several independent ant colonies (each having a queen ant). The moving speed of the ants and the parameters of the probability conversion function differ from colony to colony. Each colony produces a clustering result in parallel and sends it to the queen ant agent. A hypergraph model (through the queen ants) is used to combine all the parallel colonies.
Handl et al. published a number of articles on ACO [196–199] which are extensively cited by researchers. They incorporated robustness into the standard LF algorithm (known as ACA) and applied it to document retrieval [196]. The performance of these methods has been compared with that obtained by ant-based clustering with K-means, average link and 1DSOM in [197]. They suggested an improved ACO in [198] which incorporates adaptive and heterogeneous ants for better exploration of the search space. The survey article [199] deals with both approaches of ACO along with other swarm based clustering approaches (like the bird flocking algorithm and PSO). A modified version of ACA (known as ACAM) was proposed by Boryczka et al. [200], which has been shown to outperform ACA [196] in terms of accuracy (tested with five cluster validation measures). Recently an automatic clustering scheme based on ant dynamics was proposed in [201], which can detect arbitrarily shaped clusters (both convex and/or non-convex). Another algorithm, known as chaotic ant swarm (CAS), proposed by Wan et al. [202], provides optimal partitions irrespective of cluster size and density.
A number of hybrid algorithms based on ants are available in the literature. Initially Kuo et al. [203] proposed an ants based K-means algorithm, which was subsequently improved by hybridization of ACO, self-organizing maps (SOM) and K-means in [204]. Further, Jiang et al. developed new hybrid clustering algorithms by combining ACO with the K-harmonic means algorithm in [205] and the DBSCAN algorithm in [206]. Recently Zhang and Cao [207] suggested a new one by integrating ACO with kernel principal component analysis (KPCA). Here KPCA is applied to the dataset to compute efficient features, and ant based clustering is then performed in the feature space (instead of the input space). A multiple cluster detection algorithm based on the spatial scan statistic and ACO is reported in [208]. It is observed that these hybrid algorithms exhibit performance superior to that of the individual algorithms in terms of efficiency and clustering quality. The ant based clustering algorithms find applications in web mining [209], text mining [188], texture segmentation [210], intrusion detection [211,212], high dimensional data analysis [213], long-term electrocardiogram processing [214] and gene expression data analysis [215].
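A minimal sketch of a pheromone-trail clustering loop in the spirit of the foraging formulation above (not a faithful reproduction of [183,184]): a pheromone matrix biases the assignment of each sample to a cluster, good assignments deposit pheromone and evaporation forgets poor trails. The deposit rule and all parameter values are illustrative choices.

```python
import numpy as np

def aco_clustering(Z, K, ants=20, rho=0.1, iters=50, seed=0):
    """Sketch of pheromone-trail based partitional clustering.

    tau[i, k] biases the assignment of sample i to cluster k. Each ant
    builds a full assignment, the best assignments deposit pheromone,
    and evaporation (rate rho) gradually forgets poor trails.
    """
    rng = np.random.default_rng(seed)
    N, D = Z.shape
    tau = np.ones((N, K))

    def cost(lab):  # sum of squared distances to the assignment's centroids
        c = 0.0
        for k in range(K):
            pts = Z[lab == k]
            if len(pts):
                c += ((pts - pts.mean(0)) ** 2).sum()
        return c

    best_lab, best_cost = None, np.inf
    for _ in range(iters):
        p = tau / tau.sum(1, keepdims=True)
        sols = []
        for _ in range(ants):
            lab = np.array([rng.choice(K, p=p[i]) for i in range(N)])
            sols.append((cost(lab), lab))
        sols.sort(key=lambda s: s[0])
        if sols[0][0] < best_cost:
            best_cost, best_lab = sols[0]
        tau *= (1.0 - rho)                                  # evaporation
        for c_val, lab in sols[: max(1, ants // 5)]:        # elite deposit
            tau[np.arange(N), lab] += 1.0 / (1.0 + c_val)
    return best_lab, best_cost
```

The deposit being inversely related to the cost plays the role of the shorter path accumulating more pheromone in the foraging analogy.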

 PSO-based approaches: The PSO is based on the swarming


behavior of particles searching for food in a collaborative manner. The algorithm has become popular among researchers [70–72] due to its simple implementation, easy selection of parameters and fast convergence rate.
Cluster analysis using PSO was proposed by Omran et al. [219] for image clustering. Then van der Merwe and Engelbrecht [220] applied it to cluster analysis of arbitrary datasets. In its basic form for cluster analysis, the algorithm consists of a swarm in a D-dimensional search space in which each particle's position x_i = (m_{i1}, m_{i2}, …, m_{iK}) consists of K cluster centroid vectors, where m_{ik} is the centroid of cluster c_{ik}. The position of the ith particle is associated with a velocity V_i = (v_{i1}, v_{i2}, …, v_{iK}), whose components are initialized as random numbers in the search range. The fitness of the particles is then evaluated with a suitable fitness function f(·) defined in Table 3. Based on the fitness values, the best previous positions achieved by the particles represent the local solutions, given by P_i = (p_{i1}, p_{i2}, …, p_{iK}); for the initial run, P_i = x_i. The global solution P_g is the best position achieved by the swarm in a generation. The cluster centroid positions are updated through the velocity and position updates of the particles, given by
v_{ik}(t+1) = w · v_{ik}(t) + c_1 · r_1 · (p_{ik}(t) − x_{ik}(t)) + c_2 · r_2 · (p_g(t) − x_{ik}(t))
x_{ik}(t+1) = x_{ik}(t) + v_{ik}(t+1)
where t is the generation number, r_1 and r_2 represent random numbers in [0, 1], w is the inertia weight, taken as 0.4, and c_1 and c_2 are acceleration constants, taken as 2.05. The update process continues till the number of data points belonging to each cluster remains constant for a certain number of generations.
A number of variants of PSO based clustering algorithms have been reported by researchers in the last couple of years. Cohen and Castro [221] proposed a particle swarm clustering (PSC) algorithm in which the particle's velocity update is influenced by the particle's previous position along with cognitive, social and self-organizing terms. These terms help guide the particle to better solutions and avoid local stagnation. A combinatorial particle swarm optimization (CPSO) based partitional clustering is proposed in [222] for solving the multi-mode resource-constrained project scheduling problem. Chuang et al. [223] developed a chaotic PSO which replaces the convergence parameters w, c1, c2, r1, r2 with chaotic operators. These new operators incorporate the ergodic, irregular and stochastic properties of chaos into PSO to improve its convergence. A selective particle regeneration based PSO (SRPSO) and a combination of it with K-means (KSRPSO) are proposed in [224] for partitional clustering. Both algorithms provide faster convergence than PSO and K-means, due to the particle regeneration operation that enables better exploration of the search space. Sun et al. [225] proposed a quantum-behaved PSO (QPSO) algorithm for cluster analysis of gene expression databases. Recently a new PSO based partitional clustering algorithm was developed by Cura et al. [226] to handle an unknown number of clusters.
The hybrid algorithm based on K-means and PSO was proposed by van der Merwe and Engelbrecht [220] in 2003. PSO has been suitably combined with K-harmonic means [227] and rough set theory [228] to produce hybrid algorithms for partitional clustering. Du et al. formulated the DK algorithm [229] by hybridizing the particle-pair optimizer (PPO) algorithm (a variation on the traditional PSO) with K-means for microarray data cluster analysis. The DK algorithm is reported to be more accurate and robust than the K-means and fuzzy K-means (FKM) algorithms. Zhang et al. [230] combined PSO with possibilistic C-means (PCM) for image segmentation, which provides performance superior to the fuzzy C-means (FCM) algorithm. Another efficient approach based on PSO, ACO and K-means for cluster analysis is reported in [231]. Recently several researchers have produced new hybrid evolutionary clustering algorithms by suitably combining PSO with differential evolution [232], genetic algorithms [233], immune algorithms [234,235] and simulated annealing [236]. These hybrid algorithms provide performance superior to the individual traditional evolutionary algorithms in terms of efficiency, robustness and clustering accuracy.
The PSO based clustering algorithms have been effectively used in several real life applications, including node clustering in wireless sensor networks (WSN) to enhance sensor lifetime and coverage area [17], energy balanced cluster routing in WSN [237], clustering in mobile ad hoc networks to determine the cluster heads which become responsible for aggregating the topology information [238], cluster analysis of stock market data for portfolio management [27], grouping for security assessment in power systems [239], gene expression data analysis [240], color image segmentation [241], clustering for manufacturing cell design [242], image clustering [243], document clustering [244], cluster analysis of web usage data [245] and network anomaly detection [246].
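The velocity and position updates above translate directly into code. The sketch below is a generic centroid-encoded PSO, not the exact formulation of [219,220]; the stopping rule is simplified to a fixed generation count, the fitness is the centroid-distance function F2 of Table 3, and w, c1, c2 follow the values quoted in the text.

```python
import numpy as np

def pso_clustering(Z, K, swarm=20, w=0.4, c1=2.05, c2=2.05,
                   generations=100, seed=0):
    """Minimal PSO-based clustering: each particle encodes K centroids."""
    rng = np.random.default_rng(seed)
    N, D = Z.shape
    lo, hi = Z.min(0), Z.max(0)
    X = rng.uniform(np.tile(lo, K), np.tile(hi, K), (swarm, K * D))
    V = rng.uniform(-1, 1, (swarm, K * D))

    def f(x):  # centroid-distance fitness (F2 in Table 3)
        c = x.reshape(K, D)
        return (((Z[:, None, :] - c[None, :, :]) ** 2).sum(-1)).min(1).sum()

    P = X.copy()                                   # personal bests
    pf = np.array([f(x) for x in X])
    g = P[pf.argmin()].copy()                      # global best
    for _ in range(generations):
        r1 = rng.random((swarm, K * D))
        r2 = rng.random((swarm, K * D))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # velocity update
        X = X + V                                           # position update
        fx = np.array([f(x) for x in X])
        improved = fx < pf
        P[improved], pf[improved] = X[improved], fx[improved]
        g = P[pf.argmin()].copy()
    return g.reshape(K, D), pf.min()
```

Because the personal and global bests are only ever replaced by better positions, the returned solution is the best set of centroids evaluated over the whole run.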

 ABC-based approaches: The ABC algorithm mimics the foraging


behavior of a honey bee swarm. The algorithm became popular after a sequence of publications by Karaboga et al. [74–77]. Recently the ABC algorithm has been used for cluster analysis by several researchers, such as Zhang et al. [247], Zou et al. [248], Fathian et al. [249] and Karaboga et al. [250].
The clustering algorithm based on ABC begins with the initialization of the bee population with randomly selected cluster centroids from the dataset. The initial population is categorized into two parts: employed bees and onlookers. The employed bees are always associated with a food source. The food source represents the quality (in terms of fitness) of the solution to the problem to be optimized. An employed bee modifies its position (i.e. determines a new food source) depending upon local information and the fitness value of the new source. If the fitness value of the new source is higher than that of the previous one, the employed bee memorizes the new position and forgets the old one. After all the employed bees complete their search, they share the information on the food sources and their positions with the onlooker bees in the dance area. The onlooker bees are then assigned as employed bees with a probability related to the fitness of the food source. These bees now update their positions and share their information. Every bee colony also has scout bees, which search randomly in the environment surrounding the nest to discover new food sources. This process aids exploration of the search space and prevents the solutions from being trapped in a local food source (optimum). The clustering algorithm based on ABC has been suitably applied to solving network routing [251] and sensor deployment [252] problems in wireless sensor networks. Recently a hybrid clustering algorithm, HABC, was proposed by Yan et al. [253] by incorporating the crossover operation of GA into ABC, which provides performance superior to that obtained by each of the PSO, CPSO, GA, ABC and K-means algorithms.
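The onlooker assignment described above is commonly implemented as fitness-proportional (roulette wheel) selection, with p_i = fit_i / Σ_j fit_j. The sketch below shows this standard ABC rule; it is a generic illustration, not code from the cited clustering papers.

```python
import random

def onlooker_choices(fitness, n_onlookers, rng=random):
    """Assign onlooker bees to food sources with probability
    proportional to source fitness (roulette wheel selection)."""
    total = sum(fitness)
    probs = [f / total for f in fitness]
    chosen = []
    for _ in range(n_onlookers):
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                chosen.append(i)
                break
        else:
            chosen.append(len(probs) - 1)  # guard against float rounding
    return chosen
```

In a clustering context, each food source would be a candidate set of centroids and its fitness one of the functions in Table 3 (inverted for minimization problems).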

 Fish Swarm Algorithm (FSA): The FSA algorithm is derived from


the schooling behavior of fish. Cheng et al. [256] applied the FSA to cluster analysis. The algorithm operates by mimicking three important behaviors of natural fish: searching behavior (the tendency of fish to look for food), swarming behavior (fish assemble in swarms to minimize danger) and following behavior (when a fish identifies a food source, its neighboring individuals follow based on the fish's visual power). Tsai and Lin [257] have reported improved solutions provided by FSA compared to PSO for several optimization problems.

2.2.4. Bio-inspired algorithms in partitional clustering


The bio-inspired (short for biologically inspired) algorithms comprise natural metaheuristics derived from living phenomena and the behavior of biological organisms. The intelligence derived from bio-inspired algorithms is decentralized, distributed, self-organizing and adaptive in nature (under uncertain environments). The major algorithms in this field include Artificial immune systems (AIS) [78–83], Bacterial foraging optimization (BFO) [84–86], the Dendritic cell algorithm [87,88] and the Krill herd algorithm [356]. The usage of these algorithms to efficiently solve the partitional clustering problem is highlighted for each case:

AIS-based approaches: The books by Dasgupta [78], de Castro and


Timmis [79] provide the fundamental concepts of the artificial immune system for computing and its potential applications. The four core models developed by mimicking the principles of the biological immune system are: the negative selection algorithm, the clonal selection algorithm, the immune network model and danger theory. Among these four, the clonal selection principle by de Castro and Von Zuben [80] has become popular for machine learning and optimization purposes. The recent articles by Dasgupta et al. [81,82] and the thesis by Nanda [83] highlight the major advances in the theory and applications of AIS.
Initially Nasraoui et al. [258] developed an AIS based model for dynamic unsupervised learning. Then the clonal selection algorithm [259,260] was effectively used for cluster analysis. In this algorithm the immune cells (which combine to form a population responsible for protecting the body against infection) are initialized with K cluster centroid vectors. When an antigen (foreign element) invades the body, the antibodies (immune cells) that recognize the antigen survive (based on the best fitness values). These immune cells undergo clonal reproduction (new immune cells are produced which are copies of efficient parent cells). Then a portion of the cloned population undergoes a mutation mechanism (somatic hypermutation). The mutation mechanism diversifies the solutions in the search space and thus prevents the cells from being trapped in local optima. The best cells among the mutated and cloned ones are kept as the parents for the next generation. The algorithm runs for a fixed (user defined) number of generations till convergence is achieved and an optimal set of clusters is obtained.
Li and Tan [261] first developed a hybrid clustering algorithm based on AIS by combining it with the support vector machine (SVM). Then an immune K-means algorithm was developed in [262] based on the negative selection principle. Nanda et al. [234] developed an Immunized PSO (IPSO) algorithm in which the global best particle is cloned and mutated after the velocity and position update to enhance the particles' search in a focused manner; in a recent work the IPSO has been suitably employed for the partitional clustering task [234]. Graaff and Engelbrecht [263] initially developed a local network neighborhood clustering method based on AIS. Later they formulated an immune based algorithm for cluster analysis under uncertain environments [264].
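One generation of the clone-and-hypermutate cycle described above can be sketched as follows (in the spirit of the clonal selection principle [80]; the clone counts, the Gaussian hypermutation and its rank-dependent scale are illustrative choices, and in a clustering setting each antibody would encode K centroid vectors):

```python
import random

def clonal_step(antibodies, fitness_fn, n_clones=5, sigma=0.2, rng=random):
    """One clonal-selection generation over real-valued antibodies.

    Better antibodies receive more clones; clones are hypermutated
    (Gaussian noise, stronger for worse-ranked parents) and the best
    of {parent, clones} survives to the next generation.
    """
    ranked = sorted(antibodies, key=fitness_fn)   # minimization
    next_gen = []
    for rank, ab in enumerate(ranked):
        clones = max(1, n_clones - rank)          # more clones for the best
        scale = sigma * (1 + rank)                # hypermutation scale
        pool = [ab] + [[x + rng.gauss(0, scale) for x in ab]
                       for _ in range(clones)]
        next_gen.append(min(pool, key=fitness_fn))
    return next_gen
```

Keeping the parent in the candidate pool makes the step elitist, so the fitness of each lineage never worsens between generations.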

 BFO-based approaches: Passino [84] proposed the bacterial


foraging optimization (BFO) algorithm in 2002, which imitates the foraging strategies of E. coli bacteria in finding food. An E. coli bacterium can search for food in its surroundings by two types of movement: run or tumble. These movements are possible with the help of flagella (singular, flagellum) that enable the bacterium to swim. If the flagella move counterclockwise, their effects accumulate in the form of a bundle which pushes the bacterium forward in one direction (run). When the flagella rotate clockwise, each flagellum separates itself from the others and the bacterium tumbles (it has no set direction of movement and there is almost no displacement). The bacterium alternates between these two modes of operation throughout its entire lifetime. After the initial development by Passino, the algorithm gradually became popular due to its capability to provide good solutions in dynamic [85] and multi-modal [86] environments.
The literature indicates that this algorithm has recently been applied to cluster analysis [265,266]. The basic clustering algorithm based upon BFO consists of four fundamental steps: chemotaxis, swarming, reproduction, and elimination and dispersal. The initial solution space is created by assigning the bacteria positions as randomly chosen cluster centroids in the dataset. The chemotaxis process then defines the movement of a bacterium, which represents either a tumble followed by a tumble or a tumble followed by a run. The detailed mathematical expression for the movement of the bacteria (i.e. cluster heads) in chemotaxis is defined in [84]. The swarming operation represents the cell-to-cell signaling scheme of bacteria via an attractant. The clustering task can also be performed satisfactorily without the swarming scheme (which involves high computational complexity and is thus eliminated in [267]). After a fixed number of chemotaxis loops, reproduction is carried out, where the population is sorted with respect to fitness value. The first half of the bacteria is retained and the second half (i.e. the least healthy bacteria) is allowed to die. Each of the healthiest bacteria splits into two bacteria, which are placed at the same location. In order to prevent the bacteria from being trapped in local optima, the elimination and dispersal phase is carried out. Here a bacterium is chosen according to a preset probability and is allowed to disperse (i.e. move to another random position). The dispersal at times becomes useful, as it may place bacteria near good food sources (i.e. optimal cluster partitions). The BFO based clustering algorithm has been successfully applied to deploying sensor nodes in wireless sensor networks to enhance coverage and connectivity [268].

2.2.5. Other nature inspired metaheuristics for partitional clustering

 Cat Swarm Optimization (CSO) The CSO algorithm is proposed


by Chu and Tsai [269,270] by observing the natural hunting skill of cats. Santosa et al. [271] used CSO based clustering to classify benchmark UCI datasets [354]. The algorithm determines the optimal solution based on two modes of operation of cats: seeking mode (a global search technique which mimics the resting position of cats with slow movement) and tracing mode (a local search technique which reflects the rapid chase of a cat behind its target). Recently Pradhan et al. [20] applied the multi-objective CSO algorithm for the optimal deployment of sensor nodes in wireless sensor networks.
Cuckoo Search Algorithm: The cuckoo search algorithm was developed by Yang and Deb [272] in 2009. The algorithm mimics the breeding behavior of cuckoos (laying their eggs in the nests of other birds). The three basic operations associated with it are: (i) every cuckoo lays one egg at a time and dumps it in a randomly selected nest in the environment; (ii) the nests with good quality eggs carry over to the next generations; (iii) the number of host bird nests is fixed, and the egg laid by a cuckoo is identified by the host bird with a probability in the range [0, 1] (in such a situation the host bird can either destroy the egg, or abandon the present nest and build a new one). Goel et al. [274] formulated a cuckoo search based clustering algorithm and applied it to the extraction of water body information from remote sensing satellite images.
Firefly algorithm: The algorithm was proposed by Yang [275-277] by observing the rhythmic flashes of fireflies. Senthilnath et al. [278] applied the algorithm to cluster analysis of UCI datasets. The algorithm follows three rules based upon the glowing nature of fireflies: (i) all fireflies are unisex, and each firefly is attracted towards the other fireflies regardless of sex; (ii) the attraction is proportional to brightness, so between any two flashing fireflies the less bright one moves towards the brighter one; since the attraction is proportional to the brightness, both decrease as the distance between fireflies increases, and if there is no firefly in the surroundings brighter than a particular one, that firefly moves randomly; (iii) the brightness of a firefly is determined by the nature of the objective function. At the beginning of the clustering algorithm, all fireflies are randomly dispersed across the entire search space. The algorithm then determines the optimal partitions in two phases: (i) variation of light intensity: the brightness of a firefly at its current position is reflected in its fitness value; (ii) movement towards an attractive firefly: a firefly changes its position by observing the light intensity of adjacent fireflies. Hassanzadeh et al. [279] have successfully applied the firefly clustering algorithm to image segmentation.
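A minimal sketch of the two phases follows, written for a generic minimisation problem rather than the specific clustering encodings of [278,279]; the parameter values beta0, gamma and alpha are illustrative:

```python
import numpy as np

def firefly_generation(pop, objective, alpha=0.05, beta0=1.0, gamma=0.01, rng=None):
    # One generation: each firefly moves, in turn, toward every firefly that was
    # brighter (had a lower objective value) at the start of the generation.
    if rng is None:
        rng = np.random.default_rng()
    pop = pop.copy()
    light = np.array([objective(x) for x in pop])  # lower objective = brighter
    for i in range(len(pop)):
        if not np.any(light < light[i]):
            # no brighter firefly in the surroundings: move randomly (rule ii)
            pop[i] = pop[i] + alpha * (rng.random(pop[i].shape) - 0.5)
            continue
        for j in range(len(pop)):
            if light[j] < light[i]:
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)  # attraction decays with distance
                pop[i] = (pop[i] + beta * (pop[j] - pop[i])
                          + alpha * (rng.random(pop[i].shape) - 0.5))
    return pop
```

Iterating this generation step makes the swarm contract around the brightest firefly while the small random term preserves some exploration.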
Invasive Weed Optimization Algorithm (IWO): The IWO was proposed by Mehrabian and Lucas [280], following the colonizing behavior of weeds. Weeds reproduce seeds that are spread over the search area and grow into new plants in order to find the optimal position. An automatic clustering algorithm based upon IWO was formulated by Chowdhury et al. [281]. The algorithm follows four basic steps: (i) initialization of the weeds over the whole search space; (ii) reproduction of the weeds; (iii) distribution of the seeds; (iv) competitive exclusion of the weeds (fitter weeds produce more seeds). Su et al. [282] applied the algorithm to image clustering. The multi-objective IWO was proposed by Kundu et al. [283] and has recently been applied to cluster analysis by Liu et al. [284].
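The four steps can be sketched for a generic minimisation problem as follows. The seed counts, dispersal schedule and population cap are illustrative choices, not those of [281]; a clustering variant would encode cluster centres in each weed and use a partition fitness.

```python
import numpy as np

def iwo_minimise(f, dim, bounds, pop_max=20, smin=1, smax=5,
                 sigma0=1.0, sigma_f=0.01, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # (i) initialise a small colony of weeds over the whole search space
    weeds = rng.uniform(lo, hi, (5, dim))
    for t in range(iters):
        fit = np.array([f(w) for w in weeds])
        best, worst = fit.min(), fit.max()
        # dispersal radius shrinks nonlinearly over the generations
        sigma = sigma_f + (sigma0 - sigma_f) * ((iters - t) / iters) ** 2
        seeds = []
        for w, fw in zip(weeds, fit):
            # (ii) reproduction: fitter weeds produce more seeds (linear ranking)
            ratio = 0.0 if worst == best else (worst - fw) / (worst - best)
            n = int(smin + (smax - smin) * ratio)
            # (iii) seeds are dispersed normally around the parent weed
            seeds.extend(w + rng.normal(0.0, sigma, dim) for _ in range(n))
        weeds = np.vstack([weeds] + seeds)
        # (iv) competitive exclusion: only the fittest pop_max weeds survive
        fit = np.array([f(w) for w in weeds])
        weeds = weeds[np.argsort(fit)[:pop_max]]
    return weeds[0], float(f(weeds[0]))
```

Keeping the parents in the surviving population makes the scheme elitist, so the best solution found never deteriorates.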
Gravitational Search Algorithm (GSA): Rashedi et al. [285,286] proposed the GSA following the principle of Newton's law of gravity, which states that "every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them". The algorithm was used for cluster analysis by Hatamlou et al. [287]. Recently Yin et al. [288] developed a hybrid algorithm based on K-harmonic means and GSA.

2.3. Fitness functions for partitional clustering

The similarity function f(·) described in (2) plays a major role in effectively partitioning the dataset. It represents a mathematical function that quantifies the goodness of a partition based on the similarity between the patterns present in it. Various fitness functions used by the nature inspired metaheuristic algorithms for partitional clustering are listed in Table 3.
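As a concrete example, one of the most widely used choices of f(·) is the intra-cluster sum of squared errors; a minimal sketch of its evaluation for a centroid-encoded candidate solution (variable names are illustrative):

```python
import numpy as np

def intra_cluster_variance(centroids, data):
    # Assign every pattern to its nearest centre and accumulate the squared
    # Euclidean distances; a lower value means a more compact partition.
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    total = float((d[np.arange(len(data)), labels] ** 2).sum())
    return total, labels
```

A metaheuristic would call such a function once per candidate solution per generation, so vectorised evaluation matters for large datasets.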
2.4. Cluster validity indices

The cluster validity indices represent statistical functions used for quantitative evaluation of the clusters derived from a dataset. The objective is to assess the significance of the cluster structure disclosed by a clustering algorithm. In a recent review article [289], Xu et al. compared the performance of eight major validity indices used by swarm-intelligence-based clustering on synthetic and benchmark UCI datasets. Arbelaitz et al. [372] have demonstrated the use of 30 cluster validity indices on 720 synthetic and 20 real datasets. The books by Gan et al. [95], Berkhin [96] and Maulik et al. [322] present the validity indices used by the evolutionary clustering algorithms. Some popular validity indices like the DB index, Dunn index, CS measure and Silhouette are also used as fitness functions by several researchers (the details are listed in Table 3). Other validity indices used in the bio-inspired clustering literature include the CH index [290,299], I index [112], Rand index [95], Jaccard coefficient [95], Folkes and Mallows index [1], Hubert's statistic [95], SD index [298], S-Dbw index [295,296], root-mean-square standard deviation index [295,296], RS index [297], PBM index [300] and SV index [301]. Gurrutxaga et al. [302] suggested a standard methodology to evaluate internal cluster validity indices. Recently Saha and Bandyopadhyay [303] have proposed connectivity-based measures to improve the performance of standard cluster validity indices used by bio-inspired clustering techniques.
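As an illustration of such indices, the DB index mentioned above averages, over all clusters, the worst-case ratio of within-cluster scatter to between-centroid separation (lower is better). A minimal sketch, using the mean Euclidean scatter, which is one common variant of the index:

```python
import numpy as np

def davies_bouldin(data, labels):
    ks = np.unique(labels)
    cents = np.array([data[labels == k].mean(axis=0) for k in ks])
    # S_k: mean distance of the members of cluster k to its centroid
    s = np.array([np.linalg.norm(data[labels == k] - c, axis=1).mean()
                  for k, c in zip(ks, cents)])
    db = 0.0
    for i in range(len(ks)):
        # worst (largest) similarity ratio of cluster i to any other cluster
        ratios = [(s[i] + s[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)
```

Compact, well separated clusters yield small scatter and large centroid separation, hence a small index value.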

3. Multi-objective algorithms for flexible clustering

A recent survey article by Zhou et al. [304] highlights the basic principles, advancements and applications of multi-objective algorithms in several real world optimization problems. These algorithms are preferred over their single objective counterparts as they incorporate additional knowledge, in the form of extra objective functions, to achieve the optimal solution. In the last decade researchers have developed many nature inspired multi-objective algorithms, which include the non-dominated sorting GA (NSGA-II) [305,306], Pareto envelope-based selection algorithm (PESA-II) [325], Strength Pareto Evolutionary Algorithm (SPEA) [326], and Voronoi Initialised Evolutionary Nearest-Neighbour Algorithm (VIENNA) [327]. Along with these, other major nature inspired multi-objective algorithms are listed in Table 1. The recent book by Maulik et al. [322] highlights the overview and applicability of these multi-objective algorithms for partitional clustering.
3.1. Problem formulation

The partitional clustering problem can be formulated as a multi-objective problem by simultaneously minimizing M objective functions, represented by

Min_{k ∈ K} F(k) = min [f_1(k), f_2(k), ..., f_M(k)]

where K is the set of feasible clusters derived from the dataset Z_{N×D}. In multi-objective clustering, instead of achieving a single solution (the cluster partition achieved by a single objective algorithm), a group of optimal solutions (known as Pareto optimal solutions) is obtained by suitable combination of the different objective functions. Each Pareto optimal solution is better than the others in terms of some of the objective functions, and therefore such solutions are known as

Please cite this article as: S.J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering, Swarm
and Evolutionary Computation (2014), http://dx.doi.org/10.1016/j.swevo.2013.11.003i

S.J. Nanda, G. Panda / Swarm and Evolutionary Computation ()

non-dominated solutions [305]. The pictorial representation of the Pareto optimal solutions with respect to the objective functions is known as the Pareto optimal front [306].
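The notion of non-domination used above can be made concrete with a small sketch that extracts the Pareto optimal set from a list of objective vectors (minimisation assumed):

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    # Keep only the solutions that no other solution dominates.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

A multi-objective clustering algorithm maintains exactly such a non-dominated archive of partitions, from which the user selects the desired trade-off.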
3.2. Historical development in multi-objective algorithms for partitional clustering

The survey paper by Bong and Rajeswari [323] reports that the design, development and use of multi-objective bio-inspired algorithms for clustering and classification problems increased exponentially from 2006 to 2010. Research in the area of bio-inspired multi-objective clustering gained strength after the work on MOCK (multi-objective clustering with automatic K determination) by Handl and Knowles [330], published in 2007. Prior to that, Corne et al. developed the Pareto envelope-based selection algorithm (PESA) [324] and PESA-II [325] to solve the partitional clustering problem. Then the work by Handl and Knowles on VIENNA (Voronoi Initialised Evolutionary Nearest-Neighbour Algorithm) [327], multi-objective clustering with automatic determination of the number of clusters [328] and improvements in its scalability [329] drew the attention of many evolutionary computing researchers. These articles are considered to have served as the backbone for the development of MOCK [330].
In the same year as MOCK [330], Bandyopadhyay et al. [331] reported a multi-objective clustering based on NSGA-II [306] and applied it to the classification of remote sensing images. The NSGA-II based multi-objective clustering has recently been applied to MR brain image segmentation in [332]. Santosh et al. [333] have proposed a multi-ant-colony based multi-objective clustering that can effectively group distributed data. Here each colony works in parallel over the same dataset, and simultaneous optimization of two objectives provides better solutions than those achieved when the individual objectives are optimized separately.

An immune-inspired algorithm to solve multi-objective clustering was initially proposed in [334] to classify the benchmark UCI datasets UCI1-2. Then Ma et al. [335] developed the immunodominance and clonal selection inspired multi-objective clustering for classifying handwritten digits. The immune multi-objective clustering has been suitably applied to SAR image segmentation [336]. Recently Gou et al. [370] have reported the development of a multi-elitist immune clonal quantum clustering algorithm.

An automatic kernel clustering using multi-elitist PSO was proposed by Das et al. in [337]. Paoli et al. [338] have formulated a MOPSO based clustering for grouping hyperspectral images. Recently the MOPSO has been applied to energy-efficient clustering in mobile ad hoc networks [339].

A simulated annealing based multi-objective clustering algorithm which uses a symmetry distance is reported by Saha and Bandyopadhyay [340,341]. A scatter tabu search algorithm is used for multi-objective clustering problems in [342]. Suresh et al. [343] have proposed a multi-objective differential evolution based automatic clustering for micro-array data analysis. The multi-objective invasive weed optimization (MOIWO) has recently been applied to cluster analysis by Liu et al. [284].

A clustering ensemble developed by Faceli et al. [344] deals with the generation of multiple partitions of the same data. By combining the resulting partitions, a user can obtain a good data partitioning even though the original output clusters are not compact and well separated. Ripon and Siddique have proposed an evolutionary multi-objective tool for the detection of overlapping clusters.
3.3. Evaluation methods
Handl and Knowles [346] initially described the cluster validity indices for multi-objective bio-inspired clustering. Then Brusco and Steinley [347] reported the cross-validation issues in multi-objective clustering.
Recently the use of parametric and nonparametric statistical tests has become popular among evolutionary researchers. Usually these tests are carried out to decide whether one evolutionary algorithm can be considered better than another [348]. Therefore these tests can effectively be applied to evaluate the performance of new multi-objective clustering algorithms. The parametric tests described by Garcia et al. [349] are popular; the authors selected 14 UCI datasets to compare the performance of five evolutionary algorithms used for classification. They used the Wilcoxon signed-ranks test to evaluate the performance, with classification rate and Cohen's kappa as accuracy measures. However, the parametric tests are based upon the assumptions of independence, normality and homoscedasticity, which at times are not satisfied in multi-problem analysis. Under such situations the nonparametric tests are preferable. The papers by Derrac et al. [348] and Garcia et al. [350] clearly highlight the significance of nonparametric tests, which can perform two classes of analysis: pairwise comparisons and multiple comparisons. The pairwise comparisons include the Sign test, Wilcoxon test, Multiple sign test and Friedman test. The multiple comparisons consist of Friedman Aligned ranks, the Quade test and Contrast Estimation. The books [352,353] and the statistics toolbox in MATLAB [351] are helpful in implementing these statistical tests.
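As a concrete example of the pairwise procedures listed above, the two-sided sign test reduces to a binomial tail computation; a minimal sketch (the rank-based tests such as Wilcoxon are better handled by the toolboxes cited above):

```python
from math import comb

def sign_test(a, b):
    # Two-sided sign test p-value for paired results a, b (ties are dropped).
    # Under H0, each pair is equally likely to favour either algorithm.
    wins = sum(x < y for x, y in zip(a, b))    # a better (smaller error)
    losses = sum(x > y for x, y in zip(a, b))
    n = wins + losses
    k = min(wins, losses)
    # two-sided binomial tail with success probability 0.5
    p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(1.0, p)
```

A small p-value indicates that one algorithm's wins over the paired datasets are unlikely to be due to chance.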

4. Real life application areas of nature inspired metaheuristics based partitional clustering

The nature inspired partitional clustering algorithms have been successfully applied to diversified areas of engineering and science. Many researchers have employed the benchmark UCI datasets to validate the performance of nature inspired clustering algorithms. Some popular UCI datasets and their uses in the corresponding algorithms are listed in Table 4. The major applications of the nature inspired clustering literature and the corresponding authors are shown in Table 5. Along with Table 5, some more application areas include character recognition [10,335], the traveling salesman problem [91], blind channel equalizer design [21], human action classification [22,363], book clustering [32], texture segmentation [210], tourism market segmentation [371], analysis of gene expression patterns [365], electrocardiogram processing [214], security assessment in power systems [239], manufacturing cell design [242], clustering of sensor nodes [362], and identification of clusters for accurate analysis of seismic catalogs [364].

5. Conclusion

This paper provides an up-to-date review of nature inspired metaheuristic algorithms for partitional clustering. It is observed that the traditional gradient based partitional algorithms are computationally simpler but often provide inaccurate results, as the solution gets trapped in local minima. The nature inspired metaheuristics explore the entire search space with the population involved and ensure that an optimal partition is achieved. Further, single objective algorithms provide one optimal solution, whereas the multi-objective algorithms provide the flexibility to select the desired solution from a set of optimal solutions. The promising solutions of automatic clustering are very helpful as they do not need a priori information about the number of clusters present in the dataset. It is important to note that although numerous clustering algorithms have been published considering various practical aspects, no single clustering algorithm has been shown to dominate the rest for all application areas.

Table 4
Widely used UCI benchmark data sets for nature inspired metaheuristics based partitional clustering. Each row gives the dataset [patterns × features] with its class count (Cl-), its creator, and the popular research articles on partitional clustering that use it.

Iris [150 × 4], Cl-3 (R.A. Fisher): GA [100,114,91,128,134], DE [16,155,153], ACO [184,198,193,202,207], BFO [267], PSO [223,226,233,224,222,227], CSO [271], ABC [247,250,248], Firefly [278], Frog [180], NSGA II [345], MOAIS [334], MOCK [22], MODE [343]

Wine [178 × 13], Cl-3 (Forina et al.): GA [128], ACO [201,189,193,202,207], PSO [223,224,233,226,227,231], DE [16,155], BFO [267], AIS [235], ABC [247,250,248], Firefly [278], GSA [288], Frog [180], NSGA II [345], MODE [343], MOSA [340,341], VIENNA [327]

Glass [214 × 9], Cl-6 (B. German): GA [4,128,134], ACO [201,189,193,202,231], PSO [224,226,233,222,227], DE [16,155], BFO [267], ABC [250,248], Firefly [278], CSO [271], GSA [288], NSGA II [345]

Breast cancer [683 × 9], Cl-2 (W.H. Wolberg, O. Mangasarian): GA [4,134], ACO [201,231], DE [16,155], PSO [223,236,224,226,227], BFO [267], ABC [250,248], Firefly [278], GSA [288], MODE [343], MOAIS [334], MOSA [340,341], VIENNA [327]

Thyroid [215 × 5], Cl-3 (R. Quinlan): ACO [189,193], PSO [226], ABC [247,250], Firefly [278], Frog [180], MOSA [340,341]

Cont. Met. Choice [768 × 8], Cl-2 (National Indo. Cont. Survey): ACO [231], PSO [226,224,223,227], ABC [250,248,253], GSA [288]

Dermatology [366 × 34], Cl-6 (N. Ilter, H.A. Guvenir): AIS [235], ABC [250], Firefly [278], VIENNA [327]

Diabetes [768 × 8], Cl-2 (V. Sigillito): ACO [193], BFO [267], ABC [250], Firefly [278], NSGA II [345]

Image Segm. [2310 × 19], Cl-7 (Vision Group): GA [131], ACO [201,189], DE [153]

Table 5
Real life application areas of nature inspired metaheuristic based partitional clustering.

Image segmentation: GA - Feng et al. [130]; PSO - Lee et al. [241], Abraham et al. [4], Zhang et al. [230]; ACO - Ghosh et al. [190]; DE - Das et al. [2]; Review - Jain et al. [10]; NSGA II - Mukhopadhyay et al. [332], Bandyopadhyay et al. [331]; MOCLONAL - Yang et al. [336]; Multi-objective review - Bong and Rajeswari [323]

Image clustering: GA - Bandyopadhyay et al. [113]; DE - Das et al. [151,152], Omran et al. [156]; PSO - Omran et al. [219,243]; NSGA II - Bandyopadhyay et al. [331]

Document clustering: GA - Casillas et al. [115], Kuo and Lin [129]; PSO - Cui et al. [244]; ACO - Yang et al. [194], Handl and Meyer [196]; DE - Abraham et al. [157]; Review - Steinbach et al. [35], Andrews et al. [33], Jain et al. [34]

Web mining: ACO - Labroche et al. [209], Abraham and Ramos [216]; PSO - Alam et al. [245]

Text mining: ACO - Handl and Meyer [196], Vizine et al. [218]; SA - Chang [163]

Clustering in wireless sensor networks: GA - Tan et al. [137]; ABC - Karaboga et al. [251], Udgata et al. [252]; PSO - Yu et al. [237]; BFO - Gaba et al. [268]; MOCSO - Pradhan and Panda [20]; Review - O. Younis et al. [17], M. Younis et al. [18], Kumarawadu et al. [19]

Clustering in mobile networks: ACO - Merkle et al. [217]; PSO - Ji et al. [238], Ali et al. [339]; DE - Chakraborty et al. [158]; SA - W. Jin et al. [164]

Gene expression data analysis: GA - Lu et al. [109], Ma et al. [117]; ACO - He and Hui [215]; DE - Das et al. [2]; PSO - Sun et al. [225], Du et al. [229], Thangavel et al. [236]; AIS - Lie et al. [260]; Review - Jiang et al. [30], Lukashin et al. [31], Xu and Wunsch [91], Hruschka et al. [3,119,120], Jain et al. [34]; MODE - Suresh et al. [343]

Intrusion detection: GA - Liu et al. [116]; ACO - Ramos and Abraham [211], Tsang and Kwong [212]; PSO - Lima et al. [246]

Computational finance: Review - MacGregor et al. [24], Brabazon et al. [25], Amendola et al. [26], Nanda et al. [27]

Large datasets analysis: GA - Franti et al. [124], Lucasius et al. [121]; ACO - Chen et al. [213]; Evolutionary algorithm NOCEA - Sarafis et al. [29]

Geological data analysis: PSO - Cho [355]; Review - Jain et al. [10,34], Zaliapin et al. [28], Nanda et al. [364]

6. Future research issues

The field of nature inspired partitional clustering is relatively young and emerging with new concepts and applications. New research directions in this field which need investigation include:

- In order to solve any partitional clustering problem, the success of a particular nature inspired metaheuristic algorithm in achieving the optimal partition depends on its design environment (i.e. encoding scheme, operators, set of parameters, etc.). So for a given complex problem the design choices should be theoretically analyzed before simulation and implementation.

- The recently developed nature inspired metaheuristics like the Krill herd algorithm [356], Dendritic cell algorithm [87,88], River formation dynamics [357], Granular agent evolutionary algorithm [358], Bat algorithm [359-361], Glowworm swarm optimization [366], Black hole algorithm [368] and Cellular automata algorithm [369] can also be employed to solve partitional clustering problems.
- In real life datasets, cluster analysis at times has to be carried out under certain constraints. A recent research article by Xu et al. [367] and the book by Basu et al. [5] highlight the major constraint handling approaches in swarm intelligence based clustering algorithms.
- In many practical applications, it is very important to select a good feature extraction method (not necessarily the best clustering algorithm) that highlights the underlying structure of the dataset from the clustering perspective.
- Many practical datasets contain patterns which are similar and overlapping in nature. In such cases the transformed domain information of the dataset can be effectively used to group the patterns.
- In many cluster analysis applications there is a need for stability or consistency of the results. As most nature-inspired algorithms are heuristic in nature, the stability of these clustering algorithms is still a largely unexplored area of research.

References
[1] R. Xu, D.C. Wunsch, Clustering, Wiley, Oxford, 2009.
[2] S. Das, A. Abraham, A. Konar, Metaheuristic Clustering, Springer, 2009, ISBN 3540921729.
[3] E.R. Hruschka, R.J.G.B. Campello, A.A. Freitas, A.C.P.L.F. De Carvalho, A survey of evolutionary algorithms for clustering, IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 39 (2) (2009) 133-155.
[4] A. Abraham, S. Das, S. Roy, Swarm intelligence algorithms for data clustering, in: Soft Computing for Knowledge Discovery and Data Mining, Part IV, Springer, 2007, pp. 279-313.
[5] S. Basu, I. Davidson, K. Wagstaff (Eds.), Constrained Clustering: Advances in Algorithms, Theory and Applications, Data Mining and Knowledge Discovery, Chapman and Hall/CRC, 2008, ISBN 1584889977.
[6] H. Frigui, R. Krishnapuram, A robust competitive clustering algorithm with applications in computer vision, IEEE Trans. Pattern Anal. Mach. Intell. 21 (5) (1999) 450-465.
[7] Y. Leung, J. Zhang, Z. Xu, Clustering by scale-space filtering, IEEE Trans. Pattern Anal. Mach. Intell. 22 (12) (2000) 1396-1410.
[8] S.C. Johnson, Hierarchical clustering schemes, Psychometrika 2 (1967) 241-254.
[9] F. Murtagh, A survey of recent advances in hierarchical clustering algorithms, Comput. J. 26 (1983) 354-359.
[10] A.K. Jain, M.N. Murty, P.J. Flynn, Data clustering: a review, ACM Comput. Surv. 31 (3) (1999) 264-323.
[11] M. Sato-Ilic, L.C. Jain, Innovations in Fuzzy Clustering: Theory and Application, Springer-Verlag, Berlin, Germany, 2006.
[12] A. Baraldi, P. Blonda, A survey of fuzzy clustering algorithms for pattern recognition - Part I, IEEE Trans. Syst. Man Cybern. Part B Cybern. 29 (6) (1999) 778-785.
[13] A. Baraldi, P. Blonda, A survey of fuzzy clustering algorithms for pattern recognition - Part II, IEEE Trans. Syst. Man Cybern. Part B Cybern. 29 (6) (1999) 786-801.
[14] F. Hoppner, F. Klawonn, R. Kruse, T. Runkler, Fuzzy Cluster Analysis, Wiley, 1999, ISBN 0471988642.
[15] F. Hoppner, Fuzzy shell clustering algorithms in image processing: fuzzy c-rectangular and 2-rectangular shells, IEEE Trans. Fuzzy Syst. 5 (1997) 599-613.
[16] S. Das, A. Abraham, A. Konar, Automatic clustering using an improved differential evolution algorithm, IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 38 (1) (2008) 218-237.
[17] O. Younis, M. Krunz, S. Ramasubramanian, Node clustering in wireless sensor networks: recent developments and deployment challenges, IEEE Netw. (2006) 20-25.
[18] M. Younis, P. Munshi, G. Gupta, S.M. Elsharkawy, On efficient clustering of wireless sensor networks, in: Second IEEE Workshop on Dependability and Security in Sensor Networks and Systems, 2006, pp. 1-10.
[19] P. Kumarawadu, D.J. Dechene, M. Luccini, A. Sauer, Algorithms for node clustering in wireless sensor networks: a survey, in: IEEE International Conference on IAFS, 2008, pp. 295-300.


[20] P.M. Pradhan, G. Panda, Connectivity constrained wireless sensor deployment using multiobjective evolutionary algorithms and fuzzy decision making, Ad Hoc Netw. 10 (6) (2012) 1134-1145.
[21] S. Chen, S. McLaughlin, P.M. Grant, B. Mulgrew, Multi-stage blind clustering equaliser, IEEE Trans. Commun. 43 (1995) 701-705.
[22] S.J. Nanda, G. Panda, Automatic clustering using MOCLONAL for classifying actions of 3D human models, in: IEEE Symposium on Humanities, Science and Engineering Research, 2012, pp. 385-390.
[23] S.K. Halgamuge, L. Wang, Classification and Clustering for Knowledge Discovery, Springer-Verlag, Berlin, Germany, 2005.
[24] R.C. MacGregor, A.T. Hodgkinson, Small Business Clustering Technologies: Applications in Marketing, Management, IT and Economics, Idea Group Pub., Hershey, Pennsylvania, 2007.
[25] A. Brabazon, M. O'Neill, I. Dempsey, An introduction to evolutionary computation in finance, IEEE Comput. Intell. Mag. 3 (4) (2008) 42-55.
[26] A. Amendola, D. Belsley, E.J. Kontoghiorghes, H.K. van Dijk, Y. Omori, E. Zivot, Special issue on statistical and computational methods in finance, Comput. Stat. Data Anal. 52 (2008) 2842-2845.
[27] S.R. Nanda, B. Mahanty, M.K. Tiwari, Clustering Indian stock market data for portfolio management, Expert Syst. Appl. 37 (12) (2010) 8793-8798.
[28] I. Zaliapin, A. Gabrielov, V. Keilis-Borok, H. Wong, Clustering analysis of seismicity and aftershock identification, Phys. Rev. Lett. 101 (2008).
[29] I.A. Sarafis, P.W. Trinder, A.M.S. Zalzala, NOCEA: a rule-based evolutionary algorithm for efficient and effective clustering of massive high-dimensional databases, Appl. Soft Comput. 7 (3) (2007) 668-710.
[30] D. Jiang, C. Tang, A. Zhang, Cluster analysis for gene expression data: a survey, IEEE Trans. Knowl. Data Eng. 16 (11) (2004) 1370-1386.
[31] A.V. Lukashin, M.E. Lukashev, R. Fuchs, Topology of gene expression networks as revealed by data mining and modeling, Bioinformatics 19 (15) (2003) 1909-1916.
[32] M.N. Murty, A.K. Jain, Knowledge-based clustering scheme for collection management and retrieval of library books, Pattern Recognit. 28 (7) (1995) 949-963.
[33] N.O. Andrews, E.A. Fox, Recent Developments in Document Clustering, Technical Report TR-07-35, Department of Computer Science, Virginia Tech, 2007.
[34] A.K. Jain, Data clustering: 50 years beyond K-means, Pattern Recognit. Lett. 31 (2010) 651-666.
[35] M. Steinbach, G. Karypis, V. Kumar, A comparison of document clustering techniques, in: KDD Workshop on Text Mining, 2000.
[36] S. Phillips, Acceleration of k-means and related clustering algorithms, in: International Workshop on Algorithm Engineering and Experimentation, Lecture Notes in Computer Science, vol. 2409, Springer-Verlag, 2002, pp. 166-177.
[37] D. Pelleg, A. Moore, Accelerating exact k-means algorithms with geometric reasoning, in: Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, CA-ACM Press, 1999, pp. 277-281.
[38] D. Pelleg, A. Moore, x-means: extending k-means with efficient estimation of the number of clusters, in: Seventeenth International Conference on Machine Learning, Morgan Kaufmann, San Francisco, 2000, pp. 727-734.
[39] B. Zhang, M. Hsu, U. Dayal, k-harmonic means: a spatial clustering algorithm with boosting, in: International Workshop on Temporal, Spatial and Spatio-Temporal Data Mining, Lecture Notes in Artificial Intelligence, Springer, 2001, pp. 31-45.
[40] Z. Huang, Extensions to the k-means algorithm for clustering large data sets with categorical values, Data Mining Knowl. Discov. 2 (3) (1998) 283-304.
[41] A. Chaturvedi, P. Green, J. Carroll, k-modes clustering, J. Classif. 18 (1) (2001) 35-55.
[42] B. Scholkopf, A. Smola, K.R. Muller, Nonlinear component analysis as a kernel eigenvalue problem, J. Neural Comput. 10 (5) (1998) 1299-1319.
[43] L. Kaufman, P.J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, Wiley, 2008, ISBN 0471735787.
[44] X.S. Yang, Nature-Inspired Metaheuristic Algorithms, second edition, Luniver Press, 2010, ISBN 1905986106.
[45] J. Brownlee, Clever Algorithms: Nature-Inspired Programming Recipes, lulu.com, 2012.
[46] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975, ISBN 9780262581110.
[47] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989, ISBN 0201157675.
[48] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671-680.
[49] H.P. Schwefel, Numerical Optimization of Computer Models, John Wiley and Sons, 1981, ISBN 0471099880.
[50] I. Rechenberg, Evolution strategy: nature's way of optimization, in: Optimization: Methods and Applications, Possibilities and Limitations, Lecture Notes in Engineering, Springer, Berlin, 1989, pp. 106-126.
[51] T. Back, F. Hoffmeister, H.P. Schwefel, A survey of evolution strategies, in: Proceedings of the Fourth International Conference on Genetic Algorithms, 1991.
[52] J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, 1992.
[53] R. Storn, K. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341-359.
[54] K. Price, R. Storn, J. Lampinen, Differential Evolution - A Practical Approach to Global Optimization, Springer, Berlin, 2005.

[55] A.K. Qin, P.N. Suganthan, Self-adaptive differential evolution algorithm for
numerical optimization, in: IEEE Congress on Evolutionary Computation, CEC
2005, pp. 17851791.
[56] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with
strategy adaptation for global numerical optimization, IEEE Trans. Evol.
Comput. 13 (2) (2009) 398417.
[57] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential
evolution for optimization of noisy problems. in: IEEE Congress on Evolutionary Computation, CEC 2006, pp. 18651872.
[58] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential
evolution, IEEE Trans. Evol. Comput. 12 (1) (2008) 6479.
[59] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to
Articial Systems, Oxford University Press, US, 1999.
[60] J. Kennedy, R.C. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann,
2001, ISBN 1558605959.
[61] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John
Wiley and Sons, 2006, ISBN 0470091916.
[62] M. Dorigo, Optimization, learning and natural algorithms (Ph.D. thesis),
Politecnico di Milano, Italy, 1992.
[63] M. Dorigo, V. Maniezzo, A. Colorni, The ant system: optimization by a colony of
cooperating agents, IEEE Trans. Syst. Man Cybern. Part B Cybern. 26 (1996) 2941.
[64] M. Dorigo, L.M. Gambardella, Ant colony system: a cooperative learning
approach to the traveling salesman problem, IEEE Trans. Evol. Comput. 1 (1)
(1997) 5366.
[65] M. Dorigo, G. Di Caro, Ant colony optimization: a new meta-heuristic, in:
IEEE Congress on Evolutionary Computation, CEC 1999, vol. 2, 1999,
pp. 14701477.
[66] M. Dorigo, T. Stutzle, Ant Colony Optimization, The MIT Press, 2004,
ISBN 0262042193.
[67] M. Dorigo, C. Blum, Ant colony optimization theory: a survey, Theor. Comput.
Sci. 344 (2005) 243278.
[68] J. Kennedy, R. Eberhart, Particle swarm optimization, in: IEEE International
Conference on Neural Network, vol. 4, 1995, pp. 19421948.
[69] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in:
Sixth International Symposium on Micro Machine and Human Science, MHS
1995, pp. 3943.
[70] M. Clerc, J. Kennedy, The particle swarmexplosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (1)
(2002) 5873.
[71] X. Hu, Y. Shi, R. Eberhart, Recent advances in particle swarm, in: IEEE
Congress on Evolutionary Computation, CEC 2004, vol. 1, pp. 9097.
[72] R. Poli, J. Kennedy, T. Blackwell, Particle Swarms: The Second Decade,
Hindawi Publishing Corporation, 2008, ISBN 9774540379.
[73] B. Basturk, D. Karaboga, An articial bee colony (ABC) algorithm for numeric
function optimization, in: IEEE Swarm Intelligence Symposium, IN, USA,
2006.
[74] D. Karaboga, B. Basturk, Articial bee colony optimization algorithm for
solving constrained optimization problems, in: Foundations of Fuzzy Logic
and Soft Computing, Lecture Notes in Computer Science, vol. 4529, Springer,
2007, pp. 789798.
[75] D. Karaboga, B. Basturk, A powerful and efcient algorithm for numerical
function optimization: articial bee colony (ABC) algorithm, J. Global Optim.
39 (3) (2007) 171459.
[76] D. Karaboga, B. Basturk, On the performance of articial bee colony (ABC)
algorithm, Appl. Soft Comput. 8 (1) (2008) 687697.
[77] D. Karaboga, B. Akay, A comparative study of articial bee colony algorithm,
Appl. Math. Comput. 214 (1) (2009) 108132.
[78] D. Dasgupta, Articial Immune Systems and their Applications, SpringerVerlag, 1999, ISBN 3540643907.
[79] L.N. de Charsto, J. Timmis, An Introduction to Articial Immune Systems: A
New Computational Intelligence Paradigm, Springer-Verlag, 2002.
[80] L.N. de Charsto, J.V. Zuben, Learning and optimization using clonal selection
principle, IEEE Trans. Evol. Comput. 6 (3) (2002) 239251.
[81] D. Dasgupta, Advances in articial immune systems, IEEE Comput. Intell.
Mag. 1 (4) (2006) 4049.
[82] D. Dasgupta, S. Yu, F. Nino, Recent advances in articial immune systems:
models and applications, Appl. Soft Comput. 11 (2011) 15741587.
[83] S. J. Nanda, Articial immune systems: principle, algorithms and applications
(M. tech. research thesis), National Institute of Technology, Rourkela, India,
2009.
[84] K. Passino, Biomimicry of bacterial foraging for distributed optimization and
control, IEEE Control Syst. Mag. 22 (3) (2002) 5267.
[85] W.J. Tang, Q.H. Wu, J.R. Saunders, Bacterial foraging algorithm for dynamic
environments, in: IEEE Congress on Evolutionary Computation, CEC 2006,
pp. 1324–1330.
[86] S. Mishra, A hybrid least square–fuzzy bacterial foraging strategy for harmonic estimation, IEEE Trans. Evol. Comput. 9 (1) (2005) 61–73.
[87] J. Greensmith, The dendritic cell algorithm (Ph.D. thesis), University of
Nottingham, 2007.
[88] J. Greensmith, U. Aickelin, J. Twycross, Articulation and clarification of the dendritic cell algorithm, in: Fifth International Conference on Artificial Immune Systems (ICARIS 2006), Lecture Notes in Computer Science, vol. 4163, Springer-Verlag, 2006, pp. 404–417.
[89] A.K. Jain, A. Topchy, M.H.C. Law, J.M. Buhmann, Landscape of clustering algorithms, in: Proceedings of International Conference on Pattern Recognition, 2004, pp. 260–263.
[90] A.K. Jain, R. Duin, J. Mao, Statistical pattern recognition: a review, IEEE Trans. Pattern Anal. Mach. Intell. 22 (1) (2000) 4–37.
[91] R. Xu, D. Wunsch, Survey of clustering algorithms, IEEE Trans. Neural Netw. 16 (3) (2005) 645–678.
[92] A.A. Freitas, A survey of evolutionary algorithms for data mining and knowledge discovery, in: Advances in Evolutionary Computing, Springer-Verlag, New York, USA, 2003.
[93] S. Paterlini, T. Minerva, Evolutionary approaches for cluster analysis, in: Soft Computing Applications, Springer, 2003, pp. 167–178.
[94] O.A. Mohamed Jafar, R. Sivakumar, Ant-based clustering algorithms: a brief survey, Int. J. Comput. Theory Eng. 2 (5) (2010) 787–796.
[95] G. Gan, C. Ma, J. Wu, Data Clustering Theory, Algorithms and Applications,
ASA-SIAM Series on Statistics and Applied Probability, 2007.
[96] P. Berkhin, A survey of clustering data mining techniques, in: Grouping Multidimensional Data, Springer, 2006, pp. 25–73.
[97] C.M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006,
ISBN 0387310738.
[98] P.N. Tan, M. Steinbach, V. Kumar, Introduction to Data Mining, Addison-Wesley Longman Pub., USA, 2005.
[99] R. Duda, P. Hart, D. Stork, Pattern Classification, Second Ed., John Wiley and
Sons, New York, 2001.
[100] J.C. Bezdek, S. Boggavaparu, L.O. Hall, A. Bensaid, Genetic algorithm guided
clustering, in: IEEE Congress on Evolutionary Computation, CEC 1994,
pp. 3440.
[101] M. Sarkar, B. Yegnarayana, D. Khemani, A clustering algorithm using an
evolutionary programming-based approach, Pattern Recognit. Lett. 18 (1997)
975–986.
[102] L.I. Kuncheva, J.C. Bezdek, Selection of cluster prototypes from data by a genetic algorithm, in: Fifth European Congress on Intelligent Techniques and Soft Computing, 1997, pp. 1683–1688.
[103] C.A. Murthy, N. Chowdhury, In search of optimal clusters using genetic algorithm, Pattern Recognit. Lett. 17 (8) (1996) 825–832.
[104] Q. Xiaofeng, E. Palmieri, Theoretical analysis of evolutionary algorithms with an infinite population size in continuous space, Part I: basic properties of selection and mutation, IEEE Trans. Neural Netw. 5 (1) (1994) 102–119.
[105] U. Maulik, S. Bandyopadhyay, Genetic algorithm-based clustering technique, Pattern Recognit. 33 (2000) 1455–1465.
[106] K. Krishna, N. Murty, Genetic K-means algorithm, IEEE Trans. Syst. Man Cybern. Part B Cybern. 29 (3) (1999) 433–439.
[107] W. Sheng, X. Liu, A hybrid algorithm for K-medoid clustering of large data
sets, in: IEEE Congress on Evolutionary Computation, CEC 2004, pp. 77–82.
[108] Y. Lu, S. Lu, F. Fotouhi, Y. Deng, S.J. Brown, FGKA: A fast genetic K-means
clustering algorithm, in: ACM Symposium on Applied Computing, 2004,
pp. 622–623.
[109] Y. Lu, S. Lu, F. Fotouhi, Y. Deng, S.J. Brown, Incremental genetic K-means algorithm and its application in gene expression data analysis, BMC Bioinform. 5 (2004) 1–10.
[110] M.C. Cowgill, R.J. Harvey, L.T. Watson, A genetic algorithm approach to cluster
analysis, Comput. Math. Appl. 37 (7) (1999) 99–108.
[111] L.Y. Tseng, S.B. Yang, A genetic approach to the automatic clustering problem, Pattern Recognit. 34 (2001) 415–424.
[112] S. Bandyopadhyay, U. Maulik, Nonparametric genetic clustering: comparison of validity indices, IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 31 (1) (2001) 120–125.
[113] S. Bandyopadhyay, U. Maulik, Genetic clustering for automatic evolution of clusters and application to image classification, Pattern Recognit. 35 (2002) 1197–1208.
[114] S. Bandyopadhyay, U. Maulik, An evolutionary technique based on K-Means algorithm for optimal clustering in R^N, Inf. Sci. 146 (2002) 221–237.
[115] A. Casillas, M.Y.G. de Lena, R. Martynez, Document clustering into an unknown number of clusters using a genetic algorithm, in: International Conference on Text, Speech, and Dialogue, Lecture Notes in Computer Science, vol. 2807, Springer, 2003, pp. 43–49.
[116] Y. Liu, K. Chen, X. Liao, W. Zhang, A genetic clustering method for intrusion
detection, Pattern Recognit. 37 (2004) 927–942.
[117] P.C.H. Ma, K.C.C. Chan, X. Yao, D.K.Y. Chiu, An evolutionary clustering algorithm for gene expression microarray data analysis, IEEE Trans. Evol. Comput. 10 (3) (2006) 296–314.
[118] E.R. Hruschka, R.J.G.B. Campello, L.N. de Castro, Improving the efficiency of a clustering genetic algorithm, in: Ninth Ibero-American Conference on Artificial Intelligence, Lecture Notes in Computer Science, vol. 3315, Springer-Verlag, 2004, pp. 861–870.
[119] E.R. Hruschka, L.N. de Castro, R.J.G.B. Campello, Evolutionary algorithms for clustering gene-expression data, in: Fourth IEEE International Conference on Data Mining, 2004, pp. 403–406.
[120] E.R. Hruschka, R.J.G.B. Campello, L.N. de Castro, Evolving clusters in gene-expression data, Inf. Sci. 176 (2006) 1898–1927.
[121] C.B. Lucasius, A.D. Dane, G. Kateman, On k-medoid clustering of large datasets with the aid of a genetic algorithm: background, feasibility and comparison, Anal. Chim. Acta 282 (3) (1993) 647–669.
[122] V. Estivill-Castro, A.T. Murray, Spatial clustering for data mining with genetic algorithms, in: International ICSC Symposium on Engineering of Intelligent Systems, 1997, pp. 317–323.
[123] R.M. Cole, Clustering with genetic algorithms (M.S. thesis), University of
Western Australia, Perth, 1998.

Please cite this article as: S.J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering, Swarm
and Evolutionary Computation (2014), http://dx.doi.org/10.1016/j.swevo.2013.11.003i

S.J. Nanda, G. Panda / Swarm and Evolutionary Computation ()


[124] P. Franti, J. Kivijarvi, T. Kaukoranta, O. Nevalainen, Genetic algorithms for large-scale clustering problems, Comput. J. 40 (1997) 547–554.
[125] J. Kivijarvi, P. Franti, O. Nevalainen, Self-adaptive genetic algorithm for clustering, J. Heuristics 9 (2003) 113–129.
[126] A. Blansche, P. Gancarski, J.J. Korczak, Genetic algorithms for feature weighting: evolution vs. coevolution and Darwin vs. Lamarck, in: Fourth Mexican International Conference on Artificial Intelligence, Mexico, Lecture Notes in Artificial Intelligence, vol. 3789, Springer, 2005, pp. 682–691.
[127] P. Gancarski, A. Blansche, Darwinian, Lamarckian and Baldwinian (Co)evolutionary approaches for feature weighting in K-means based algorithms, IEEE Trans. Evol. Comput. 12 (5) (2008) 617–629.
[128] J. Xiao, Y. Yan, J. Zhang, Y. Tang, A quantum-inspired genetic algorithm for k-means clustering, Expert Syst. Appl. 37 (7) (2010) 4966–4973.
[129] R.J. Kuo, L.M. Lin, Application of a hybrid of genetic algorithm and particle swarm optimization algorithm for order clustering, Decis. Support Syst. 49 (4) (2010) 451–462.
[130] S. Feng-jie, T. Ye, Transmission line image segmentation based GA and PSO hybrid algorithm, in: IEEE International Conference on Computational and Information Sciences, 2010, pp. 677–680.
[131] Y. Hong, S. Kwong, To combine steady-state genetic algorithm and ensemble learning for data clustering, Pattern Recognit. Lett. 29 (2008) 1416–1423.
[132] A.A. Chaves, L.A.N. Lorena, Hybrid evolutionary algorithm for the Capacitated Centered Clustering Problem, Expert Syst. Appl. 38 (5) (2011) 5013–5018.
[133] M.J. Li, M.K. Ng, Y. Cheung, J.Z. Huang, Agglomerative fuzzy K-Means
clustering algorithm with selection of number of clusters, IEEE Trans. Knowl.
Data Eng. 20 (11) (2008) 1519–1534.
[134] H. He, Y. Tan, A two-stage genetic algorithm for automatic clustering, Neurocomputing 81 (1) (2012) 49–59.
[135] E. Falkenauer, Genetic Algorithms for Grouping Problems, Wiley, New York, 1998.
[136] L.E. Agustin-Blas, S. Salcedo-Sanz, S. Jimenez-Fernandez, L. Carro-Calvo, J. Del Ser, J.A. Portilla-Figueras, A new grouping genetic algorithm for clustering problems, Expert Syst. Appl. 39 (2012) 9695–9703.
[137] C.K. Tan, T.C. Chuah, S.W. Tan, M.L. Sim, Efficient clustering scheme for OFDMA-based multicast wireless systems using grouping genetic algorithm, Electron. Lett. 48 (3) (2012) 184–186.
[138] G.P. Babu, M.N. Murty, Clustering with evolution strategies, Pattern Recognit. 27 (2) (1994) 321–329.
[139] H. Beyer, H.P. Schwefel, Evolution strategies: a comprehensive introduction, Nat. Comput. 1 (2002) 3–52.
[140] Y. Ling, J. Jing-Ping, Application of evolution strategy in cluster analysis, in:
IEEE Fifth World Congress on Intelligent Control and Automation, vol. 3,
2004, pp. 2197–2199.
[141] K. Lee, J.H. Kim, T.S. Chung, B.S. Moon, H. Lee, I.S. Kohane, Evolution strategy
applied to global optimization of clusters in gene expression data of DNA
microarrays, in: IEEE Congress on Evolutionary Computation, CEC 2001, vol.
2, pp. 845–850.
[142] J.K. Kishore, L.M. Patnaik, V. Mani, V.K. Agrawal, Application of genetic
programming for multicategory pattern classification, IEEE Trans. Evol. Comput. 4 (3) (2000) 242–258.
[143] D. Muni, N. Pal, J. Das, A novel approach to design classifiers using genetic programming, IEEE Trans. Evol. Comput. 8 (2) (2004) 183–196.
[144] D. Muni, N. Pal, J. Das, Genetic programming for simultaneous feature selection and classifier design, IEEE Trans. Syst. Man Cybern. Part B Cybern. 36 (1) (2006) 106–117.
[145] I. De Falco, E. Tarantino, A.D. Cioppa, F. Gagliardi, A novel grammar-based genetic programming approach to clustering, in: ACM Symposium on Applied Computing, 2005, pp. 928–932.
[146] I. De Falco, E. Tarantino, A.D. Cioppa, F. Fontanella, An innovative approach to genetic programming-based clustering, in: Applied Soft Computing Technologies: The Challenge of Complexity, Advances in Soft Computing, Springer Book Series, vol. 34, 2006, pp. 55–64.
[147] N. Boric, P.A. Estevez, Genetic programming-based clustering using an information theoretic fitness measure, in: IEEE Congress on Evolutionary Computation, CEC 2007, pp. 31–38.
[148] S. Paterlini, T. Krink, High performance clustering with differential evolution, in: IEEE Congress on Evolutionary Computation, CEC 2004, vol. 2, pp. 2004–2011.
[149] S. Paterlini, T. Krink, Differential evolution and particle swarm optimization in partitional clustering, Comput. Stat. Data Anal. 50 (2006) 1220–1247.
[150] S. Das, A. Abraham, A. Konar, Automatic hard clustering using improved differential evolution algorithm, in: SCI: Metaheuristic Clustering, Springer, vol. 178, 2009, pp. 137–174.
[151] S. Das, S. Sil, U.K. Chakraborty, Kernel-based clustering of image pixels with modified differential evolution, in: IEEE Congress on Evolutionary Computation, CEC 2008, pp. 3472–3479.
[152] S. Das, A. Konar, Automatic image pixel clustering with an improved
differential evolution, Appl. Soft Comput. 9 (1) (2009) 226–236.
[153] W. Kwedlo, A clustering method combining differential evolution with the K-means algorithm, Pattern Recognit. Lett. 32 (12) (2011) 1613–1621.
[154] Y. Tian, D. Liu, H. Qi, K-harmonic means data clustering with differential evolution, in: IEEE International Conference on Future BioMedical Information Engineering, 2009, pp. 369–372.
[155] Y. Hu, F. Qu, Y. Yang, X. Gu, An improved possibilistic clustering based on differential algorithm, in: IEEE Second International Workshop on Intelligent Systems and Applications (ISA), 2010, pp. 1–4.

[156] M. Omran, A.P. Engelbrecht, A. Salman, Differential evolution methods for unsupervised image classification, in: IEEE Congress on Evolutionary Computation, CEC 2005, vol. 2, pp. 966–973.
[157] A. Abraham, S. Das, A. Konar, Document clustering using differential evolution, in: IEEE Congress on Evolutionary Computation, CEC 2006, pp. 1784–1791.
[158] U.K. Chakraborty, S.K. Das, T.E. Abbott, Clustering in mobile ad hoc networks with differential evolution, in: IEEE Congress on Evolutionary Computation, CEC 2011, pp. 2223–2228.
[159] S.Z. Selim, K. Alsultan, A simulated annealing algorithm for the clustering
problem, Pattern Recognit. 24 (10) (1991) 1003–1008.
[160] D.E. Brown, C.L. Huntley, A practical application of simulated annealing to clustering, Pattern Recognit. 25 (4) (1992) 401–412.
[161] L.X. Sun, F. Xu, Y.Z. Liang, Y.L. Xie, R.Q. Yu, Cluster analysis by the K-means algorithm and simulated annealing, Chemom. Intell. Lab. Syst. 25 (1) (1994) 51–60.
[162] Z. Gungor, A. Unler, K-harmonic means data clustering with simulated annealing heuristic, Appl. Math. Comput. 184 (2) (2007) 199–209.
[163] C.H. Chang, Simulated annealing clustering of Chinese words for contextual text recognition, Pattern Recognit. Lett. 17 (1) (1996) 57–66.
[164] W. Jin, X. Li, Z. Baoyu, A genetic annealing hybrid algorithm based clustering strategy in mobile ad hoc network, in: IEEE International Conference on Communications, Circuits and Systems, 2005, pp. 314–318.
[165] Z. Lu, Y. Peng, Horace H.S. Ip, Combining multiple clusterings using fast simulated annealing, Pattern Recognit. Lett. 32 (15) (2011) 1956–1961.
[166] Z.H. Che, Clustering and selecting suppliers based on simulated annealing algorithms, Comput. Math. Appl. 63 (1) (2012) 228–238.
[167] P. Moscato, On evolution, search, optimization, genetic algorithms and
martial arts: towards Memetic Algorithms, Caltech Concurrent Computation
Program Report 826, 1989.
[168] P. Merz, Analysis of gene expression profiles: an application of memetic algorithms to the minimum sum-of-squares clustering problem, Biosystems 72 (2003) 99–109.
[169] X.S. Chen, Y.S. Ong, M.H. Lim, K.C. Tan, A multi-facet survey on memetic computation, IEEE Trans. Evol. Comput. 15 (5) (2011) 591–607.
[170] X.S. Chen, Y.S. Ong, M.H. Lim, Research frontier: memetic computation – past, present and future, IEEE Comput. Intell. Mag. 5 (2) (2010) 24–36.
[171] A.A. Salehpour, A. Afzali-Kusha, S. Mohammadi, Efficient clustering of wireless sensor networks based on memetic algorithm, in: IEEE International Conference on Innovations in Information Technology, 2008, pp. 450–454.
[172] L. Jiao, M. Gong, S. Wang, B. Hou, Z. Zheng, Q. Wu, Natural and remote sensing
image segmentation using memetic computing, IEEE Comput. Intell. Mag. 5
(2) (2010) 78–91.
[173] Z.W. Geem, J.H. Kim, G.V. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation 76 (2) (2001) 60–68.
[174] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Methods Appl. Mech. Eng. 194 (2005) 3902–3933.
[175] M. Mahdavi, M. Chehreghani, H. Abolhassani, R. Forsati, Novel meta-heuristic
algorithms for clustering web documents, Appl. Math. Comput. 201 (2008)
441–451.
[176] R. Forsati, M. Mahdavi, M. Kangavari, B. Safarkhani, Web page clustering
using Harmony search optimization, IEEE Canadian Conference on Electrical
and Computer Engineering, 2008, pp. 1601–1604.
[177] M. Mahdavi, H. Abolhassani, Harmony K-means algorithm for document
clustering, Data Min. Knowl. Discov. 18 (3) (2009) 370–391.
[178] D.C. Hoang, P. Yadav, R. Kumar, S.K. Panda, A robust harmony search
algorithm based clustering protocol for wireless sensor networks, in: IEEE
International Conference on Communications, 2010, pp. 1–5.
[179] M. Eusuff, K. Lansey, F. Pasha, Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization, Eng. Optim. 38 (2) (2006) 129–154.
[180] B. Amiri, M. Fathian, A. Maroosi, Application of shuffled frog-leaping algorithm on clustering, Int. J. Adv. Manuf. Technol. 45 (2009) 199–209.
[181] A. Bhaduri, Color image segmentation using clonal selection-based shuffled frog leaping algorithm, in: IEEE International Conference on Advances in Recent Technologies in Communication and Computing, 2009, pp. 517–520.
[182] Y. Fang, J. Yu, Application of shuffled frog-leaping algorithm in web's text cluster technology, in: Emerging Research in Web Information Systems and Mining, Springer Book Series: CCIS 238, 2011, pp. 363–368.
[183] P.S. Shelokar, V.K. Jayaraman, B.D. Kulkarni, An ant colony approach for
clustering, Anal. Chim. Acta 509 (2004) 187–195.
[184] L. Chen, L. Tu, H.J. Chen, A Novel Ant Clustering Algorithm with Digraph,
Lecture Notes in Computer Science, vol. 3611, Springer, Berlin, Heidelberg,
2005, pp. 1218–1228.
[185] S.C. Chu, J.F. Roddick, C.J. Su, J.S. Pan, Constrained ant colony optimization for
data clustering, in: Eighth Pacific Rim International Conference on Artificial Intelligence, Lecture Notes in Artificial Intelligence, vol. 3157, Springer, 2004, pp. 534–543.
[186] X.H. Xu, L. Chen, Y.X. Chen, A4C: an adaptive artificial ants clustering algorithm, in: IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology, 2004, pp. 268–274.
[187] D.A. Ingaramo, G. Leguizamon, M. Errecalde, Adaptive clustering with artificial ants, J. Comput. Sci. Technol. 5 (4) (2005) 264–271.
[188] A. Vizine, L.N. de Castro, E.R. Hruschka, R.R. Gudwin, Towards improving clustering ants: an adaptive clustering algorithm, Informatica 29 (2005) 143–154.

[189] A. Ghosh, A. Halder, M. Kothari, S. Ghosh, Aggregation pheromone density based data clustering, Inf. Sci. 178 (2008) 2816–2831.
[190] S. Ghosh, M. Kothari, A. Halder, A. Ghosh, Use of aggregation pheromone
density for image segmentation, Pattern Recognit. Lett. 30 (2009) 939–949.
[191] E. Lumer, B. Faieta, Diversity and adaptation in populations of clustering ants,
in: Third International Conference on Simulation of Adaptive Behavior: From
Animals to Animats, MIT Press/Bradford Books, Cambridge, MA, 1994,
pp. 501–508.
[192] A.E. Langham, P.W. Grant, Using competing ant colonies to solve k-way
partitioning problems with foraging and raiding strategies, in: Fifth European
Conference on Advances in Artificial Life, Lecture Notes in Computer Science, vol. 1674, Springer, 1999, pp. 621–625.
[193] H. Azzag, N. Monmarche, M. Slimane, G. Venturini, AntTree: a new model for clustering with artificial ants, in: IEEE Congress on Evolutionary Computation, CEC 2003, vol. 4, pp. 2642–2647.
[194] Y. Yang, M. Kamel, F. Jin, Topic discovery from document using ant-based
clustering combination, in: Web Technologies Research and Development
APWeb 2005, Seventh Asia-Pacific Web Conference, Lecture Notes in Computer Science, vol. 3399, Springer, UK, 2005, pp. 100–108.
[195] Y. Yang, M.S. Kamel, An aggregated clustering approach using multi-ant
colonies algorithms, Pattern Recognit. 39 (7) (2006) 1278–1289.
[196] J. Handl, B. Meyer, Improved ant-based clustering and sorting in a document
retrieval interface, in: Seventh International Conference on Parallel Problem
Solving from Nature, Lecture Notes in Computer Science, vol. 2439, Springer,
2002, pp. 913–923.
[197] J. Handl, J. Knowles, M. Dorigo, Ant-Based Clustering: a Comparative Study of
its Relative Performance with Respect to k-means, Average Link and 1DSOM,
Technical Report TR/IRIDIA/2003-24, IRIDIA, University Libre de Bruxelles,
Belgium, 2003.
[198] J. Handl, J. Knowles, M. Dorigo, Ant-based clustering and topographic
mapping, Artif. Life 12 (1) (2004) 1–36.
[199] J. Handl, B. Meyer, Ant-based and swarm-based clustering, Swarm Intell. 1
(2) (2007) 95–113.
[200] U. Boryczka, Finding groups in data: cluster analysis with ants, Appl. Soft
Comput. 9 (1) (2009) 61–70.
[201] A. Chowdhury, S. Das, Automatic shape independent clustering inspired by
ant dynamics, Swarm Evol. Comput. 3 (2012) 33–45.
[202] M. Wan, C. Wang, L. Li, Y. Yang, Chaotic ant swarm approach for data
clustering, Appl. Soft Comput. 12 (8) (2012) 2387–2393.
[203] R.J. Kuo, H.S. Wang, T. Hu, S.H. Chou, Application of ant K-means on
clustering analysis, Comput. Math. Appl. 50 (2005) 1709–1724.
[204] S. Chi, C.C. Yang, Integration of ant Colony SOM and k-means for clustering
analysis, in: Knowledge Based Intelligent Information and Engineering Systems,
Lecture Notes in Computer Science, vol. 4251, Springer, 2006, pp. 1–8.
[205] H. Jiang, S. Yi, J. Li, F. Yang, X. Hu, Ant clustering algorithm with K-harmonic
means clustering, Expert Syst. Appl. 37 (12) (2010) 8679–8684.
[206] H. Jiang, J. Li, S. Yi, X. Wang, X. Hu, A new hybrid method based on
partitioning-based DBSCAN and ant clustering, Expert Syst. Appl. 38 (8)
(2011) 9373–9381.
[207] L. Zhang, Q. Cao, A novel ant-based clustering algorithm using the kernel
method, Inf. Sci. 181 (2011) 4658–4672.
[208] Y. Wan, T. Pei, C. Zhou, Y. Jiang, C. Qu, Y. Qiao, ACOMCD: a multiple cluster
detection algorithm based on the spatial scan statistic and ant colony
optimization, Comput. Stat. Data Anal. 56 (2) (2012) 283–296.
[209] N. Labroche, N. Monmarche, G. Venturini, AntClust: Ant Clustering and Web
Usage Mining, in: Genetic and Evolutionary Computing Conference, Chicago,
2003, pp. 25–36.
[210] A.H. Channa, N.M. Rajpoot, K.M. Rajpoot, Texture segmentation using ant tree clustering, in: IEEE International Conference on Engineering of Intelligent Systems, 2006, pp. 1–6.
[211] V. Ramos, A. Abraham, Antids: self-organized ant-based clustering model for
intrusion detection system, in: Proceedings of the Fourth IEEE International
Workshop WSTST 05, Springer Engineering Series: Soft Computing as
Transdisciplinary Science and Technology, vol. 29, 2005, pp. 977–986.
[212] C. Tsang, S. Kwong, Ant colony clustering and feature extraction for anomaly
intrusion detection, in: Swarm Intelligence in Data Mining, Studies in
Computational Intelligence, vol. 34, Springer, 2006, pp. 101–123.
[213] J. Chen, J. Sun, Y. Chen, A new ant-based clustering algorithm on high
dimensional data space, in: Complex Systems Concurrent Engineering, vol.
12, Springer, 2007, pp. 605–611.
[214] M. Bursa, L. Lhotska, Modified ant colony clustering method in long-term electrocardiogram processing, in: Twenty Ninth IEEE International Conference on Engineering in Medicine and Biology Society, 2007, pp. 3249–3252.
[215] Y. He, S.C. Hui, Exploring ant-based algorithms for gene expression data analysis, Artif. Intell. Med. 47 (2) (2009) 105–119.
[216] A. Abraham, V. Ramos, Web usage mining using artificial ant colony clustering and linear genetic programming, in: IEEE Congress on Evolutionary Computation, CEC 2003, pp. 1384–1391.
[217] D. Merkle, M. Middendorf, A. Scheidler, Decentralized packet clustering in
networks, in: Proceedings of the 18th International Parallel and Distributed
Processing Symposium, 2004, pp. 163–170.
[218] A. Vizine, L.N. de Castro, R.R. Gudwin, Text document classification using swarm intelligence, in: IEEE International Conference on the Integration of Knowledge Intensive Multi-agent Systems, 2005, pp. 134–139.

[219] M. Omran, A. Salman, A.P. Engelbrecht, Image classification using particle swarm optimization, in: Fourth Asia-Pacific Conference on Simulated Evolution and Learning, Singapore, 2002.
[220] D.W. van der Merwe, A.P. Engelbrecht, Data clustering using particle swarm
optimization, IEEE Congress on Evolutionary Computation, CEC 2003, vol. 1,
pp. 215–220.
[221] S.C.M. Cohen, L.N. de Castro, Data Clustering with Particle Swarms, IEEE
Congress on Evolutionary Computation, CEC 2006, pp. 1792–1798.
[222] B. Jarboui, M. Cheikh, P. Siarry, A. Rebai, Combinatorial particle swarm
optimization (CPSO) for partitional clustering problem, Appl. Math. Comput.
192 (2) (2007) 337–345.
[223] L. Chuang, C. Hsiao, C. Yang, Chaotic particle swarm optimization for data
clustering, Expert Syst. Appl. 38 (2011) 14555–14563.
[224] C. Tsai, I. Kao, Particle swarm optimization with selective particle regeneration for data clustering, Expert Syst. Appl. 38 (2011) 6565–6576.
[225] J. Sun, W. Chen, W. Fang, X. Wun, W. Xu, Gene expression data analysis with
the clustering method based on an improved quantum-behaved Particle
Swarm Optimization, Eng. Appl. Artif. Intell. 25 (2012) 376–391.
[226] T. Cura, A particle swarm optimization approach to clustering, Expert Syst.
Appl. 39 (2012) 1582–1588.
[227] F. Yang, T. Sun, C. Zhang, An efficient hybrid data clustering method based on K-harmonic means and Particle Swarm Optimization, Expert Syst. Appl. 36 (6) (2009) 9847–9852.
[228] K.Y. Huang, A hybrid particle swarm optimization approach for clustering
and classification of datasets, Knowl. Based Syst. 24 (2011) 420–426.
[229] Z. Du, Y. Wang, Z. Ji, PK-means: a new algorithm for gene clustering, Comput.
Biol. Chem. 32 (2008) 243–247.
[230] Y. Zhang, D. Huang, M. Ji, F. Xie, Image segmentation using PSO and PCM with
Mahalanobis distance, Expert Syst. Appl. 38 (2011) 9036–9040.
[231] T. Niknam, B. Amiri, An efficient hybrid approach based on PSO, ACO and k-means for cluster analysis, Appl. Soft Comput. 10 (2010) 183–197.
[232] R. Xu, J. Xu, D.C. Wunsch, Clustering with differential evolution particle
swarm optimization, in: IEEE Congress on Evolutionary Computation, CEC
2010, pp. 1–8.
[233] R.J. Kuo, Y.J. Syu, Zhen-Yao Chen, F.C. Tien, Integration of particle swarm
optimization and genetic algorithm for dynamic clustering, Inf. Sci. 195
(2012) 124–140.
[234] S.J. Nanda, G. Panda, B. Majhi, Improved identification of Hammerstein plants using new CPSO and IPSO algorithms, Expert Syst. Appl. 37 (2010) 6818–6831.
[235] S.J. Nanda, G. Panda, Accurate partitional clustering algorithm based on
Immunized PSO, in: IEEE International Conference on Advances in Engineering, Science and Management, 2012, pp. 537–541.
[236] K. Thangavel, J. Bagyamani, R. Rathipriya, Novel hybrid PSO-SA model for
biclustering of expression data, Proc. Eng. 30 (2012) 1048–1055 (International Conference on Communication Technology and System Design 2011).
[237] H. Yu, W. Xiaohui, PSO-based energy-balanced double cluster-heads clustering routing for wireless sensor networks, Proc. Eng. 15 (2011) 3073–3077 (Advances in Control Engineering and Information Science).
[238] C. Ji, Y. Zhang, S. Gao, P. Yuan, Z. Li, Particle swarm optimization for mobile ad
hoc networks clustering, in: IEEE International Conference on Networking,
Sensing and Control, 2004, pp. 372–375.
[239] S. Kalyani, K.S. Swarup, Particle swarm optimization based K-means clustering approach for security assessment in power systems, Expert Syst. Appl. 38
(2011) 10839–10846.
[240] D. Mishra, Discovery of overlapping pattern biclusters from gene expression
data using hash based PSO, Proc. Technol. 4 (2012) 390–394.
[241] C. Lee, J. Leou, H. Hsiao, Saliency-directed color image segmentation using
modified particle swarm optimization, Signal Process. 92 (1) (2012) 1–18.
[242] O. Duran, N. Rodriguez, L.A. Consalter, A PSO-based clustering algorithm for
manufacturing cell design, in: IEEE Workshop on Knowledge Discovery and
Data Mining, 2008, pp. 72–75.
[243] M. Omran, A.P. Engelbrecht, A. Salman, Particle swarm optimization method
for image clustering, Int. J. Pattern Recognit. Artif. Intell. 19 (3) (2005)
297–322.
[244] X. Cui, T.E. Potok, P. Palathingal, Document clustering using particle swarm
optimization, in: IEEE Swarm Intelligence Symposium, 2005, pp. 185–191.
[245] S. Alam, G. Dobbie, P. Riddle, Particle swarm optimization based clustering of
web usage data, in: IEEE/WIC/ACM International Conference on Web
Intelligence and Intelligent Agent Technology, 2008, pp. 451–454.
[246] M.F. Lima, L. Sampaio, B.B. Zarpelao, J. Rodrigues, T. Abrao, M.L. Proenca,
Networking anomaly detection using DSNS and particle swarm optimization
with re-clustering, in: IEEE Global Telecommunications Conference, 2010,
pp. 1–6.
[247] C. Zhang, D. Ouyang, J. Ning, An artificial bee colony approach for clustering, Expert Syst. Appl. 37 (7) (2010) 4761–4767.
[248] W. Zou, Y. Zhu, H. Chen, X. Sui, A clustering approach using cooperative artificial bee colony algorithm, Discrete Dyn. Nat. Soc. (2010) 1–16.
[249] M. Fathian, B. Amiri, A. Maroosi, Application of honey-bee mating optimization algorithm on clustering, Appl. Math. Comput. 190 (2) (2007) 1502–1513.
[250] D. Karaboga, C. Ozturk, A novel clustering approach: Artificial Bee Colony (ABC) algorithm, Appl. Soft Comput. 11 (1) (2011) 652–657.
[251] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor network routings using artificial bee colony algorithm, in: IEEE International Conference on Autonomous and Intelligent Systems, 2010, pp. 1–5.

[252] S.K. Udgata, S.L. Sabat, S. Mini, Sensor deployment in irregular terrain using Artificial Bee Colony algorithm, in: World Congress on Nature and Biologically Inspired Computing, 2009, pp. 1309–1314.
[253] X. Yan, Y. Zhu, W. Zou, L. Wang, A new approach for data clustering using hybrid artificial bee colony algorithm, Neurocomputing 97 (2012) 241–250.
[254] X. Li, Z. Shao, J. Qian, An optimizing method based on autonomous animats: fish swarm algorithm, Syst. Eng. Theory Pract. 22 (2002) 32–38.
[255] X. Li, A new intelligent optimization – artificial fish swarm algorithm (Doctoral thesis), Zhejiang University, Zhejiang, China, 2003.
[256] Y. Cheng, M. Jiang, D. Yuan, Novel clustering algorithms based on improved artificial fish swarm algorithm, in: IEEE Sixth International Conference on Fuzzy Systems and Knowledge Discovery, vol. 3, 2009, pp. 141–145.
[257] H.C. Tsai, Y.H. Lin, Modification of the fish swarm algorithm with particle swarm optimization formulation and communication behavior, Appl. Soft Comput. 11 (2011) 5367–5374.
[258] O. Nasraoui, F. Gonzalez, C. Cardona, C. Rojas, D. Dasgupta, A scalable artificial immune system model for dynamic unsupervised learning, in: Genetic and Evolutionary Computation, Lecture Notes in Computer Science, vol. 2723, Springer-Verlag, 2003, pp. 219–230.
[259] T. Liu, Y. Zhou, Z. Hu, Z. Wang, A new clustering algorithm based on artificial immune system, in: IEEE Fifth International Conference on Fuzzy Systems and Knowledge Discovery, 2008, pp. 347–351.
[260] R. Liu, L. Jiao, X. Zhang, Y. Li, Gene transposon based clone selection
algorithm for automatic clustering, Inf. Sci. 204 (2012) 1–22.
[261] Z. Li, H.Z. Tan, A combinational clustering method based on artificial immune system and support vector machine, in: Knowledge Based Intelligent Information and Engineering Systems, Lecture Notes in Computer Science, vol. 4251, Springer, 2006, pp. 153–162.
[262] M. Bereta, T. Burczynski, Immune K-means and negative selection algorithms
for data analysis, Inf. Sci. 179 (10) (2009) 1407–1425.
[263] A.J. Graaff, A.P. Engelbrecht, A local network neighbourhood artificial immune system for data clustering, in: IEEE Congress on Evolutionary Computation, CEC 2007, pp. 260–267.
[264] A.J. Graaff, A.P. Engelbrecht, Clustering data in an uncertain environment using an artificial immune system, Pattern Recognit. Lett. 32 (2011) 342–351.
[265] M. Wan, L. Li, J. Xiao, C. Wang, Y. Yang, Data clustering using bacterial
foraging optimization, J. Intell. Inf. Syst. 38 (2) (2011) 321–341.
[266] X. Lei, S. Wu, L. Ge, A. Zhang, Clustering PPI data based on bacteria foraging
optimization algorithm, in: IEEE International Conference on Bioinformatics
and Biomedicine, 2011, pp. 96–99.
[267] J.R. Olesen, J. Cordero H., Y. Zeng, Auto-Clustering using particle swarm
optimization and bacterial foraging, in: Agents and Data Mining Interaction,
Lecture Notes in Computer Science, vol. 5680, Springer, 2009, pp. 69–83.
[268] G.S. Gaba, K. Singh, B.S. Dhaliwal, Sensor node deployment using bacterial foraging optimization, in: IEEE International Conference on Recent Trends in Information Systems, 2011, pp. 73–76.
[269] S.C. Chu, P.W. Tsai, J.S. Pan, Cat swarm optimization, in: Proceedings of Ninth Pacific Rim International Conference on Artificial Intelligence, Lecture Notes in Computer Science, vol. 4099, Springer-Verlag, 2006, pp. 854–858.
[270] S.C. Chu, P.W. Tsai, Computational intelligence based on the behavior of cats, Int. J. Innov. Comput. Inf. Control 3 (2007) 163–173.
[271] B. Santosa, M.K. Ningrum, Cat swarm optimization for clustering, in: IEEE International Conference on Soft Computing and Pattern Recognition, 2009, pp. 54–59.
[272] X.S. Yang, S. Deb, Cuckoo search via Lévy flights, in: IEEE World Congress on Nature and Biologically Inspired Computing, 2009, pp. 210–214.
[273] X.S. Yang, S. Deb, Engineering optimisation by cuckoo search, Int. J. Math. Model. Numer. Optim. 1 (4) (2010) 330–343.
[274] S. Goel, A. Sharma, P. Bedi, Cuckoo search clustering algorithm: a novel strategy of biomimicry, in: IEEE World Congress on Information and Communication Technologies, 2011, pp. 916–921.
[275] X.S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2008, ISBN 1905986106.
[276] X.S. Yang, Firefly algorithms for multimodal optimization, in: Stochastic Algorithms: Foundations and Applications, SAGA 2009, Lecture Notes in Computer Science, vol. 5792, Springer-Verlag, 2009, pp. 169–178.
[277] X.S. Yang, Firefly algorithm, stochastic test functions and design optimization, Int. J. Bio-Inspired Comput. 2 (2) (2010) 78–84.
[278] J. Senthilnath, S.N. Omkar, V. Mani, Clustering using firefly algorithm: performance study, Swarm Evol. Comput. 1 (3) (2011) 164–171.
[279] T. Hassanzadeh, H. Vojodi, A.M.E. Moghadam, An image segmentation approach based on maximum variance intra-cluster method and firefly algorithm, in: IEEE Seventh International Conference on Natural Computation, 2011, pp. 1817–1821.
[280] A.R. Mehrabian, C. Lucas, A novel numerical optimization algorithm inspired from weed colonization, Ecol. Inf. 1 (4) (2006) 355–366.
[281] A. Chowdhury, S. Bose, S. Das, Automatic clustering based on invasive weed optimization algorithm, in: Swarm, Evolutionary, and Memetic Computing, Lecture Notes in Computer Science, vol. 7707, Springer, 2011, pp. 105–112.
[282] S. Su, J. Fang, J. Wang, B. Wang, Image clustering method based on invasive weed colonization, J. S. China Univ. Technol. (Natural Science Edition) 36 (5) (2008) 95–105.
[283] D. Kundu, K. Suresh, S. Ghosh, S. Das, B.K. Panigrahi, S. Das, Multi-objective optimization with artificial weed colonies, Inf. Sci. 181 (2011) 2441–2454.

[284] R. Liu, X. Wang, Y. Li, X. Zhang, Multi-objective invasive weed optimization algorithm for clustering, in: IEEE Congress on Evolutionary Computation, CEC 2012, pp. 1–8.
[285] E. Rashedi, Gravitational search algorithm (M.Sc. thesis), Shahid Bahonar University of Kerman, Kerman, Iran, 2007.
[286] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm, Inf. Sci. 179 (2009) 2232–2248.
[287] A. Hatamlou, S. Abdullah, H. Nezamabadi-pour, Application of gravitational search algorithm on data clustering, in: Rough Sets and Knowledge Technology, Lecture Notes in Computer Science, vol. 6954, Springer-Verlag, 2011, pp. 337–346.
[288] M. Yin, Y. Hu, F. Yang, X. Li, W. Gu, A novel hybrid K-harmonic means and gravitational search algorithm approach for clustering, Expert Syst. Appl. 38 (8) (2011) 9319–9324.
[289] R. Xu, J. Xu, D.C. Wunsch, A comparison study of validity indices on swarm-intelligence-based clustering, IEEE Trans. Syst. Man Cybern. Part B Cybern. 42 (4) (2012) 1243–1256.
[290] G. Milligan, M. Cooper, An examination of procedures for determining the number of clusters in a data set, Psychometrika 50 (2) (1985) 159–179.
[291] D.L. Davies, D.W. Bouldin, A cluster separation measure, IEEE Trans. Pattern Anal. Mach. Intell. 1 (2) (1979) 224–227.
[292] C.H. Chou, M.C. Su, E. Lai, A new cluster validity measure and its application to image compression, Pattern Anal. Appl. 7 (2) (2004) 205–220.
[293] J.C. Dunn, Well separated clusters and optimal fuzzy partitions, J. Cybern. 4 (1974) 95–104.
[294] L. Kaufman, P.J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis, in: Series in Probability and Mathematical Statistics, Wiley, 1990.
[295] M. Halkidi, Y. Batistakis, M. Vazirgiannis, Clustering algorithms and validity measures, in: IEEE Thirteenth International Conference on Scientific and Statistical Database Management, 2001, pp. 3–22.
[296] M. Halkidi, Y. Batistakis, M. Vazirgiannis, On clustering validation techniques, J. Intell. Inf. Syst. 17 (2001) 107–145.
[297] M. Halkidi, Y. Batistakis, M. Vazirgiannis, Cluster validity methods: part I, ACM SIGMOD Record 31 (2) (2002).
[298] M. Halkidi, Y. Batistakis, M. Vazirgiannis, Clustering validity checking methods: part II, ACM SIGMOD Record 31 (3) (2002).
[299] U. Maulik, S. Bandyopadhyay, Performance evaluation of some clustering algorithms and validity indices, IEEE Trans. Pattern Anal. Mach. Intell. 24 (12) (2002) 1650–1654.
[300] M.K. Pakhira, S. Bandyopadhyay, U. Maulik, Validity index for crisp and fuzzy clusters, Pattern Recognit. 37 (3) (2004) 487–501.
[301] K.R. Zalik, B. Zalik, Validity index for clusters of different sizes and densities, Pattern Recognit. Lett. 32 (2) (2011) 221–234.
[302] I. Gurrutxaga, J. Muguerza, O. Arbelaitz, J.M. Perez, J.I. Martin, Towards a standard methodology to evaluate internal cluster validity indices, Pattern Recognit. Lett. 32 (3) (2011) 505–515.
[303] S. Saha, S. Bandyopadhyay, Some connectivity based cluster validity indices, Appl. Soft Comput. 12 (5) (2012) 1555–1565.
[304] A. Zhou, B.Y. Qu, H. Li, S.Z. Zhao, P.N. Suganthan, Q. Zhang, Multiobjective evolutionary algorithms: a survey of the state of the art, Swarm Evol. Comput. 1 (1) (2011) 32–49.
[305] K. Deb, Multi-Objective Optimization using Evolutionary Algorithms, John Wiley and Sons, 2001, ISBN 8126528044.
[306] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6 (2) (2002) 182–197.
[307] C.A. Coello Coello, G.T. Pulido, M.S. Lechuga, Handling multiple objectives with particle swarm optimization, IEEE Trans. Evol. Comput. 8 (3) (2004) 256–279.
[308] C.A. Coello Coello, Solving multiobjective optimization problems using an artificial immune system, Genet. Progr. Evol. Mach. 6 (2005) 163–190.
[309] B. Niu, H. Wang, L. Tan, J. Xu, Multi-objective optimization using BFO algorithm, in: Bio-inspired Computing and Applications, Lecture Notes in Computer Science, Springer Book Series, vol. 6840, 2012, pp. 582–587.
[310] R. Akbari, R. Hedayatzadeh, K. Ziarati, B. Hassanizadeh, A multi-objective artificial bee colony algorithm, Swarm Evol. Comput. 2 (2012) 39–52.
[311] P.M. Pradhan, G. Panda, Solving multi-objective problems using cat swarm optimization, Expert Syst. Appl. 39 (3) (2012) 2956–2964.
[312] X.S. Yang, Multi-objective firefly algorithm for continuous optimization, Eng. Comput. 29 (2) (2013) 175–184.
[313] S. Bandyopadhyay, S. Saha, U. Maulik, K. Deb, A simulated annealing-based multiobjective optimization algorithm: AMOSA, IEEE Trans. Evol. Comput. 12 (3) (2008) 269–283.
[314] J. Knowles, D. Corne, M-PAES: a memetic algorithm for multiobjective optimization, in: IEEE Congress on Evolutionary Computation, CEC 2000, pp. 325–332.
[315] J. Ricart, G. Huttemann, J. Lima, B. Baran, Multiobjective harmony search algorithm proposals, Electron. Notes Theor. Comput. Sci. 281 (2011) 51–67.
[316] J. Liu, Z. Li, X. Hu, Y. Chen, F. Liu, Multi-objective dynamic population shuffled frog-leaping biclustering of microarray data, BMC Genomics 13 (3) (2012) 1–12.
[317] A.L.V. Coelho, E. Fernandes, K. Faceli, Inducing multi-objective clustering ensembles with genetic programming, Neurocomputing 74 (2010) 494–498.
[318] L. Costa, P. Oliveira, An evolution strategy for multiobjective optimization, in: IEEE Congress on Evolutionary Computation, CEC 2002, pp. 97–102.

Please cite this article as: S.J. Nanda, G. Panda, A survey on nature inspired metaheuristic algorithms for partitional clustering, Swarm
and Evolutionary Computation (2014), http://dx.doi.org/10.1016/j.swevo.2013.11.003i


[319] X.S. Yang, S. Deb, Multiobjective cuckoo search for design optimization, Comput. Oper. Res. 40 (6) (2013) 1616–1624.
[320] H.R. Hassanzadeh, A multi-objective gravitational search algorithm, in: IEEE Second International Conference on Computational Intelligence, Communication Systems and Networks, 2010, pp. 7–12.
[321] M. Jiang, K. Zhu, Multiobjective optimization by artificial fish swarm algorithm, in: IEEE International Conference on Computer Science and Automation Engineering, 2011, pp. 506–511.
[322] U. Maulik, S. Bandyopadhyay, A. Mukhopadhyay, Multiobjective Genetic Algorithms for Clustering, Springer, 2011, ISBN 3642166148.
[323] C. Bong, M. Rajeswari, Multi-objective nature-inspired clustering and classification techniques for image segmentation, Appl. Soft Comput. 11 (2011) 3271–3282.
[324] D.W. Corne, J. Knowles, M.J. Oates, The Pareto envelope-based selection algorithm for multiobjective optimization, in: Sixth Conference on Parallel Problem Solving from Nature, 2000, pp. 839–848.
[325] D.W. Corne, N.R. Jerram, J. Knowles, M.J. Oates, PESA-II: region-based selection in evolutionary multiobjective optimization, in: Genetic and Evolutionary Computing Conference, 2001, pp. 283–290.
[326] E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Trans. Evol. Comput. 3 (4) (1999) 257–271.
[327] J. Handl, J. Knowles, Evolutionary multiobjective clustering, in: Eighth Conference on Parallel Problem Solving from Nature, 2004.
[328] J. Handl, J. Knowles, Multiobjective Clustering with Automatic Determination of the Number of Clusters, Technical Report TR-COMPSYSBIO-2004-02, Institute of Science and Technology, University of Manchester, Manchester, U.K., Available: http://www.dbkweb.ch.umist.ac.uk/handl/publications.html, 2004.
[329] J. Handl, J. Knowles, Improvements to the scalability of multiobjective clustering, in: IEEE Congress on Evolutionary Computation, CEC 2005, pp. 2372–2379.
[330] J. Handl, J. Knowles, An evolutionary approach to multiobjective clustering, IEEE Trans. Evol. Comput. 11 (1) (2007) 56–76.
[331] S. Bandyopadhyay, U. Maulik, A. Mukhopadhyay, Multiobjective genetic clustering for pixel classification in remote sensing imagery, IEEE Trans. Geosci. Remote Sens. 45 (5) (2007) 1505–1511.
[332] A. Mukhopadhyay, U. Maulik, A multiobjective approach to MR brain image segmentation, Appl. Soft Comput. 11 (2011) 872–880.
[333] D.S. Santos, D.D. Oliveira, A.L.C. Bazzan, A multiagent multiobjective clustering algorithm, in: Data Mining and Multi-agent Integration, Springer, 2009.
[334] M. Gong, L. Zhang, L. Jiao, S. Gou, Solving multiobjective clustering using an immune-inspired algorithm, in: IEEE Congress on Evolutionary Computation, CEC 2007, pp. 15–22.
[335] W. Ma, L. Jiao, M. Gong, Immunodominance and clonal selection inspired multiobjective clustering, Progress Nat. Sci. 19 (2009) 751–758.
[336] D. Yang, L. Jiao, M. Gong, F. Liu, Artificial immune multi-objective SAR image segmentation with fused complementary features, Inf. Sci. 181 (2011) 2797–2812.
[337] S. Das, A. Abraham, A. Konar, Automatic kernel clustering with a multi-elitist particle swarm optimization algorithm, Pattern Recognit. Lett. 29 (2008) 688–699.
[338] A. Paoli, F. Melgani, E. Pasolli, Clustering of hyperspectral images based on multiobjective particle swarm optimization, IEEE Trans. Geosci. Remote Sens. 47 (12) (2009) 4175–4188.
[339] H. Ali, W. Shahzad, F.A. Khan, Energy-efficient clustering in mobile ad-hoc networks using multi-objective particle swarm optimization, Appl. Soft Comput. 12 (7) (2012) 1913–1928.
[340] S. Saha, S. Bandyopadhyay, A new multiobjective simulated annealing based clustering technique using symmetry, Pattern Recognit. Lett. 30 (2009) 1392–1403.
[341] S. Saha, S. Bandyopadhyay, A symmetry based multiobjective clustering technique for automatic evolution of clusters, Pattern Recognit. 43 (2010) 738–751.
[342] R. Caballero, M. Laguna, R. Marti, J. Molina, Scatter tabu search for multiobjective clustering problems, J. Oper. Res. Soc. 62 (11) (2011) 2034–2046.
[343] K. Suresh, D. Kundu, S. Ghosh, S. Das, A. Abraham, S.Y. Han, Multi-objective differential evolution for automatic clustering with application to microarray data analysis, Sensors 9 (2009) 3981–4004.
[344] K. Faceli, A.C.P.L.F. de Carvalho, M.C.P. Souto, Multi-objective clustering ensemble, Int. J. Hybrid Intell. Syst. 4 (3) (2007) 145–156.
[345] K.S.N. Ripon, M.N.H. Siddique, Evolutionary multi-objective clustering for overlapping clusters detection, in: IEEE Congress on Evolutionary Computation, CEC 2009, pp. 976–982.
[346] J. Handl, J. Knowles, Multi-objective clustering and cluster validation, in: Studies in Computational Intelligence, vol. 16, Springer, 2006, pp. 21–47.
[347] M.J. Brusco, D. Steinley, Cross validation issues in multiobjective clustering, Br. J. Math. Stat. Psychol. 62 (2009) 349–368.

[348] J. Derrac, S. Garcia, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. 1 (2011) 3–18.
[349] S. Garcia, A. Fernandez, J. Luengo, F. Herrera, A study of statistical techniques and performance measures for genetics-based machine learning: accuracy and interpretability, Soft Comput. 13 (10) (2009) 959–977.
[350] S. Garcia, D. Molina, M. Lozano, F. Herrera, A study on the use of nonparametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC 2005 special session on real parameter optimization, J. Heuristics 15 (2009) 617–644.
[351] MathWorks Statistics Toolbox, Available: http://www.mathworks.in/products/statistics/, 2012.
[352] W.L. Martinez, A.R. Martinez, Computational Statistics Handbook with MATLAB, second ed., Chapman and Hall/CRC, 2008, ISBN 1584885661.
[353] D.J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, fourth ed., Chapman and Hall/CRC, 2006, ISBN 1439858012.
[354] UCI Repository of Machine Learning Database, Available: http://archive.ics.
uci.edu/ml/datasets.html, 2012.
[355] N.F. Cho, Ergodicity and seismicity clustering with applications in statistical seismology (Ph.D. thesis), University of Western Ontario, London, Ontario, Canada, 2011.
[356] A.H. Gandomi, A.H. Alavi, Krill herd: a new bio-inspired optimization algorithm, Commun. Nonlinear Sci. Numer. Simul. 17 (12) (2012) 4831–4845.
[357] P. Rabanal, I. Rodriguez, F. Rubio, Using river formation dynamics to design heuristic algorithms, in: Unconventional Computation, Lecture Notes in Computer Science, vol. 4618, Springer, 2007, pp. 163–177.
[358] X. Pan, L. Jiao, A granular agent evolutionary algorithm for classification, Appl. Soft Comput. 11 (2011) 3093–3105.
[359] X.S. Yang, A new metaheuristic bat-inspired algorithm, in: Nature Inspired Cooperative Strategies for Optimization, Studies in Computational Intelligence, vol. 284, Springer, Berlin, 2010, pp. 65–74.
[360] X.S. Yang, A.H. Gandomi, Bat algorithm: a novel approach for global engineering optimization, Eng. Comput. 29 (5) (2012) 464–483.
[361] X.S. Yang, Bat algorithm for multiobjective optimization, Int. J. Bio-Inspired Comput. 3 (5) (2011) 267–274.
[362] T. Panigrahi, G. Panda, B. Mulgrew, B. Majhi, Distributed DOA estimation using clustering of sensor nodes and diffusion PSO algorithm, Swarm Evol. Comput. 9 (2013) 47–57.
[363] S.J. Nanda, G. Panda, Automatic clustering algorithm based on multi-objective Immunized PSO to classify actions of 3D human models, Eng. Appl. Artif. Intell. 26 (2013) 1429–1441.
[364] S.J. Nanda, K.F. Tiampo, G. Panda, L. Mansinha, N. Cho, A. Mignan, A tri-stage cluster identification model for accurate analysis of seismic catalogs, Nonlinear Process. Geophys. 20 (2013) 143–162, http://dx.doi.org/10.5194/npg-20-143-2013 (special issue on: Nonlinearity, scaling and complexity in exploration geophysics).
[365] A. Mukhopadhyay, U. Maulik, S. Bandyopadhyay, An interactive approach to multi-objective clustering of gene expression patterns, IEEE Trans. Biomed. Eng. 60 (2013) 35–41.
[366] I. Aljarah, S.A. Ludwig, A new clustering approach based on glowworm swarm optimization, in: IEEE Congress on Evolutionary Computation, CEC 2013, pp. 2642–2649.
[367] X. Xu, L. Lu, P. He, Z. Pan, L. Chen, Improving constrained clustering via swarm intelligence, Neurocomputing 116 (2013) 317–325.
[368] A. Hatamlou, Black hole: a new heuristic optimization approach for data clustering, Inf. Sci. 222 (2013) 175–184.
[369] J. de Lope, D. Maravall, Data clustering using a linear cellular automata-based algorithm, Neurocomputing 114 (2013) 86–91.
[370] S. Gou, X. Zhuang, Y. Li, C. Xu, L. Jiao, Multi-elitist immune clonal quantum clustering algorithm, Neurocomputing 101 (2013) 275–289.
[371] P. D'Urso, L. Giovanni, M. Disegna, R. Massari, Bagged clustering and its application to tourism market segmentation, Expert Syst. Appl. 40 (2013) 4944–4956.
[372] O. Arbelaitz, I. Gurrutxaga, J. Muguerza, J.M. Perez, I. Perona, An extensive comparative study of cluster validity indices, Pattern Recognit. 46 (2013) 243–256.
[373] S. Lloyd, Least Squares Quantization in PCM, Bell Telephone Laboratories, Whippany, 1957.
[374] S. Lloyd, Least squares quantization in PCM, IEEE Trans. Inf. Theory 28 (1982) 129–137.
[375] J. MacQueen, Some methods for classification and analysis of multivariate observations, in: Proceedings of 5th Berkeley Symposium on Mathematical Statistics and Probability, 1967, pp. 281–297.
[376] S.J. Nanda, P.M. Pradhan, G. Panda, L. Mansinha, K.F. Tiampo, A correlation based stochastic partitional algorithm for accurate cluster analysis, Inderscience 6 (1) (2013) 52–58.
[377] S.J. Nanda, Development of new signal processing and nature-inspired algorithms for partitional clustering (Ph.D. thesis), Indian Institute of Technology Bhubaneswar, Odisha, India, 2013.
