
International Journal of Application or Innovation in Engineering & Management (IJAIEM)

Web Site: www.ijaiem.org Email: editor@ijaiem.org


Volume 5, Issue 10, October 2016

ISSN 2319 - 4847

PERFORMANCE COMPARATIVE ANALYSIS OF IMPROVED FUZZY AND NON-FUZZY CLASSIFICATION METHODS

Simhachalam Boddana(1), Ganesan G(2)

(1) Department of Mathematics, GIT, GITAM University, Visakhapatnam, Andhra Pradesh 530045, INDIA
(2) Department of Mathematics, Adikavi Nannaya University, Rajahmundry, Andhra Pradesh 533296, INDIA

ABSTRACT
In data mining, data clustering is a key exploratory data analysis method used to extract unknown, valuable information from large volumes of data in many real-time applications. Clustering techniques have proved their efficiency in many fields, such as medical sciences, earth sciences and decision-making systems. One of the main approaches to clustering is partition-based clustering. This work reports the classification performance of three widely used algorithms of this kind: K-means (KM), also called Hard c-Means, Fuzzy Possibilistic c-Means (FPCM) and Possibilistic Fuzzy c-Means (PFCM). Two well-known data sets from the UCI Machine Learning Repository are used to test the algorithms, and the clustering output is compared against the reference labels from the repository. The experimental results demonstrate that FPCM produces results close to those of PFCM, and that the K-means algorithm yields more accurate results than both FPCM and PFCM.

Keywords: K-means, Fuzzy Possibilistic c-means, Possibilistic Fuzzy c-means, classification, improved fuzzy c-means

1. Introduction
In the field of data analysis, data mining techniques play an important role in assisting decision makers in making predictions that affect people and enterprises. Clustering, or classification, is a key element of data analysis. Clustering is the organization of data into groups of similar objects, called clusters or classes. Researchers have developed many clustering algorithms with applications in different fields, such as medical sciences, image processing, earth sciences and decision-making systems [4]. Among the various clustering algorithms, partition-based algorithms have the advantage of supporting accurate decision making through an appropriate objective function based on similarity measures [12]. The main task of the objective function is to locate the cluster prototypes, or centroids: optimizing the objective function places each centroid so that the objects most similar to it form a cluster.
Among partition-based clustering algorithms, K-means (KM) and Fuzzy c-Means (FCM) are widely used iterative algorithms. Although FCM is a popular clustering algorithm, it has some drawbacks, such as sensitivity to noise points. To overcome these drawbacks, researchers have developed several improved c-Means clustering algorithms, among which Fuzzy Possibilistic c-Means (FPCM) and Possibilistic Fuzzy c-Means (PFCM) are popular. Fuzzy cluster analysis employs a membership function that assigns to each object in the data set a membership value between 0 and 1. In the literature, many researchers have analyzed the clustering performance of these techniques: Simhachalam and Ganesan [11] evaluated the clustering performance of FCM, FPCM and PFCM on medical diagnostics and reported that PFCM is more efficient than FCM and FPCM. J. Quintanilla-Dominguez et al. [3] compared the advantages and drawbacks of KM, FCM and PFCM for the detection of microcalcifications in image segmentation. Nidhi Grover [8] studied the advantages and drawbacks of FCM, FPCM and PFCM. Rajendran and Dhanasekaran [1] analyzed FCM and PFCM on MRI brain image tissue segmentation and reported that PFCM achieved better clustering results than FCM.
In this work, the authors apply three unsupervised clustering algorithms, K-means (KM), Fuzzy Possibilistic c-Means (FPCM) and Possibilistic Fuzzy c-Means (PFCM), to two popular real data sets, the Liver disorders and Wine data sets, and present a comparative analysis of the performance of the three algorithms. The rest of the work is organized as follows: Section 2 briefly describes the data sets and the clustering algorithms, Section 3 presents the results and discussion, and Section 4 gives the conclusions.
2. MATERIALS AND METHODS
In data analysis, clustering is a discipline devoted to investigating and describing groups of similar objects. The efficiency and robustness of clustering algorithms can be assessed from the clustering output, and their performance can be improved by defining a suitable objective function. The partition-based algorithms FCM and KM were developed by introducing memberships and distance measures, respectively, into their objective functions. The FPCM and PFCM algorithms were developed by retaining memberships and introducing typicalities to improve the performance of FCM. This section briefly describes the data sets, Liver disorder and Wine, and the algorithms KM, FPCM and PFCM.
2.1. The Dataset
To evaluate the K-means (KM), Fuzzy Possibilistic c-Means (FPCM) and Possibilistic Fuzzy c-Means (PFCM) algorithms, two real-world data sets from the UCI Machine Learning Repository have been considered: the Liver disorder data set donated by Richard Forsyth [10] and the Wine data set donated by Forina et al. [2].
The Liver disorder data set contains 341 samples with 6 attributes each. The attributes are measurements of blood tests that are sensitive to liver disorders that might arise from excessive alcohol consumption: mcv (mean corpuscular volume), alkphos (alkaline phosphatase), sgpt (alanine aminotransferase), sgot (aspartate aminotransferase), gammagt (gamma-glutamyl transpeptidase) and drinks (the number of half-pint equivalents of alcoholic beverages drunk per day).
The Wine data set contains 178 samples, each with 13 attributes. The attributes come from a chemical analysis of wines derived from three different cultivars grown in the same region of Italy: Alcohol, Malic acid, Ash, Alcalinity of ash, Magnesium, Total phenols, Flavanoids, Nonflavanoid phenols, Proanthocyanins, Color intensity, Hue, OD280/OD315 of diluted wines and Proline.
The samples in the Liver disorder data set are classified into two classes according to the liver disorders: 142 samples belong to class 1 and 199 samples belong to class 2. The samples in the Wine data set are classified into three cultivars: 59 samples belong to cultivar 1, 71 samples belong to cultivar 2 and 48 samples belong to cultivar 3.
2.2 K-Means Clustering
The K-Means or Hard c-Means algorithm, developed by MacQueen [6], is a partition-based algorithm and one of the simplest algorithms for classifying data into partitions. The principle of the method is to find the center points (centroids) of the clusters by iteratively optimizing its objective function. K-means classifies the data into $c$ ($1 \le c \le N$) clusters such that objects belonging to one cluster are similar to each other, while each object is associated with exactly one cluster. Consider a data set $Z$ with $N$ observations (objects), where each observation is an $n$-dimensional row vector $z_k = [z_{k1}, z_{k2}, \ldots, z_{kn}]$. The data set is represented by an $N \times n$ matrix whose rows are the samples (observations) and whose columns are the measurements on these samples. K-Means achieves its partitioning by iterative optimization of its objective function (a squared-error function), given as

$$J(V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \| z_k - v_i \|^2 \qquad (1)$$

where $\| z_k - v_i \|$ is the Euclidean distance between the $k$th object $z_k$ and the $i$th centroid $v_i$. The algorithm comprises the following basic steps:


Step 1: Initialization: Choose desired number of clusters c and place c cluster centroids.
Step 2: Classification: Assign each object to the cluster whose centroid is nearest to it.
Step 3: Centroid update: After all objects have been assigned, recalculate the cluster centroids as

$$v_i = \frac{1}{c_i} \sum_{z_k \in C_i} z_k,$$

where $c_i$ is the number of objects in the $i$th cluster $C_i$, and update the clusters.

Step 4: Convergence criteria: If the stopping criterion has been met, then stop, otherwise repeat from step 2.
K-Means algorithm can be run multiple times to reduce the sensitivity caused by initial random selection of centroids.
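The steps above can be sketched in code. The paper's experiments were carried out in MATLAB R2010a, so the following NumPy version is only an illustrative re-implementation of the stated procedure, not the authors' code (the function name and defaults are ours):

```python
import numpy as np

def k_means(Z, c, max_iter=100, tol=1e-5, seed=0):
    """Hard c-means on an (N, n) data matrix Z, producing c clusters."""
    rng = np.random.default_rng(seed)
    # Step 1: initialization -- place c centroids on randomly chosen objects
    V = Z[rng.choice(len(Z), size=c, replace=False)].astype(float)
    labels = np.zeros(len(Z), dtype=int)
    for _ in range(max_iter):
        # Step 2: classification -- assign each object to its nearest centroid
        d = np.linalg.norm(Z[:, None, :] - V[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Step 3: centroid update -- v_i is the mean of the objects in cluster i
        V_new = np.array([Z[labels == i].mean(axis=0) if np.any(labels == i)
                          else V[i] for i in range(c)])
        # Step 4: convergence -- stop once the centroids no longer move
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return labels, V
```

As the text notes, running the function several times with different seeds and keeping the best objective value mitigates the sensitivity to the initial centroids.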


2.3. Fuzzy Possibilistic c-Mean Clustering


Traditional clustering produces a partition in which each object can belong to only one cluster at a time. Fuzzy clustering extends this notion so that each object can belong to more than one cluster at a time, with different membership values assigned by a membership function; these membership values range from 0 to 1. FPCM was developed on the basis of fuzzy theory by Pal and Bezdek [7]. The concepts of typicality and membership were combined in the FPCM model to overcome drawbacks of the FCM model proposed by Bezdek et al. [5]. The partition of the data set $Z$ into $c$ clusters is represented by the fuzzy partition matrix $U = [\mu_{ik}] \in \mathbb{R}^{c \times N}$. The fuzzy partitioning space for $Z$ is the set

$$M_{fc} = \left\{ U \in \mathbb{R}^{c \times N} \;\middle|\; \mu_{ik} \in [0,1],\ \forall i,k;\ \sum_{i=1}^{c} \mu_{ik} = 1,\ \forall k;\ 0 < \sum_{k=1}^{N} \mu_{ik} < N,\ \forall i \right\} \qquad (2)$$

The Fuzzy Possibilistic c-Means model achieves its partitioning by iteratively optimizing the objective function

$$\min_{U,T,V} J_{m,\eta}(Z; U, T, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \left( \mu_{ik}^{m} + t_{ik}^{\eta} \right) \| z_k - v_i \|_A^2 \qquad (3)$$

where $T = [t_{ik}] \in \mathbb{R}^{c \times N}$ is the typicality matrix, subject to the constraints $0 \le \mu_{ik}, t_{ik} \le 1$, $m > 1$, $\eta > 1$ and, for each $i$, $\sum_{k=1}^{N} t_{ik} = 1$.

Here $V = [v_1, v_2, \ldots, v_c]$, where $v_i \in \mathbb{R}^n$, denotes the vector of (unknown) cluster prototypes (centers), and the degree of fuzziness is determined by the weighting exponent $m \in (1, \infty)$.

$$v_i = \frac{\sum_{k=1}^{N} \left( \mu_{ik}^{m} + t_{ik}^{\eta} \right) z_k}{\sum_{k=1}^{N} \left( \mu_{ik}^{m} + t_{ik}^{\eta} \right)}, \quad 1 \le i \le c \qquad (4)$$

$$\mu_{ik} = \left[ \sum_{j=1}^{c} \left( \frac{D_{ikA}}{D_{jkA}} \right)^{2/(m-1)} \right]^{-1}, \quad 1 \le i \le c, \ 1 \le k \le N \qquad (5)$$

$$t_{ik} = \left[ \sum_{j=1}^{N} \left( \frac{D_{ikA}}{D_{ijA}} \right)^{2/(\eta-1)} \right]^{-1}, \quad 1 \le i \le c, \ 1 \le k \le N \qquad (6)$$

$$D_{ikA}^2 = \| z_k - v_i \|_A^2 = (z_k - v_i)^T A (z_k - v_i), \quad 1 \le i \le c, \ 1 \le k \le N \qquad (7)$$
The prototypes, the membership values, the typicalities and the distances are calculated by equations (4), (5), (6) and (7), respectively. Since FPCM is an iterative algorithm, it terminates when the objective function converges to a local minimum. The algorithm comprises the following basic steps:
Step 1: Initialization: Randomly initialize the partition matrix U and the typicality matrix T; choose the number of clusters c, the parameters m and η, and the termination tolerance ε > 0.
Step 2: Centroid calculation: Calculate the fuzzy cluster prototypes by using the equation (4).
Step 3: Classification: Update the membership matrix by using the equation (5) and the typicality matrix by using the
equation (6).


Step 4: Convergence criteria: Compare the membership matrices before and after the iteration. If the difference is less than the termination tolerance, stop; otherwise repeat from Step 2.
The FPCM model was developed to overcome the problems that occur in FCM, but it has drawbacks of its own. For example, if the data set is large, the typicalities associated with the objects become very small and need to be scaled up. FPCM employs the standard Euclidean norm as its distance metric.
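One iteration of the update scheme in Steps 2-3 (equations (4)-(6)) can be sketched as follows, taking A as the identity matrix so that the distance is the ordinary Euclidean norm, as the paper states. This is an illustrative NumPy sketch, not the authors' MATLAB implementation; the small eps guard against division by zero is our addition:

```python
import numpy as np

def fpcm_step(Z, V, m=2.0, eta=2.0, eps=1e-9):
    """One FPCM iteration on data Z (N x n) with current prototypes V (c x n)."""
    # D[i, k] = ||z_k - v_i||, kept away from zero to avoid division by zero
    D = np.maximum(np.linalg.norm(Z[None, :, :] - V[:, None, :], axis=2), eps)
    # eq. (5): memberships -- each column (object) sums to 1 over the clusters
    P = D ** (-2.0 / (m - 1.0))
    U = P / P.sum(axis=0, keepdims=True)
    # eq. (6): typicalities -- each row (cluster) sums to 1 over the objects
    Q = D ** (-2.0 / (eta - 1.0))
    T = Q / Q.sum(axis=1, keepdims=True)
    # eq. (4): prototypes as weighted means with weights u^m + t^eta
    W = U ** m + T ** eta
    V_new = (W @ Z) / W.sum(axis=1, keepdims=True)
    return U, T, V_new
```

Iterating fpcm_step until the membership matrix changes by less than the tolerance reproduces Steps 2-4.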
2.4 Possibilistic Fuzzy c-Mean Clustering
To achieve good clustering results, both the memberships and the typicalities are important. Nikhil Pal et al. [9] proposed the Possibilistic Fuzzy c-Means (PFCM) model, in which the FPCM constraint that the typicalities of all data points in a cluster sum to 1 is relaxed, while the constraint on the memberships is retained. The PFCM model achieves its partitioning by iteratively optimizing the objective function
$$\min_{U,T,V} J(Z; U, T, V) = \sum_{i=1}^{c} \sum_{k=1}^{N} \left( a\,\mu_{ik}^{m} + b\,t_{ik}^{\eta} \right) \| z_k - v_i \|_A^2 + \sum_{i=1}^{c} \gamma_i \sum_{k=1}^{N} \left( 1 - t_{ik} \right)^{\eta} \qquad (8)$$

subject to the constraints $0 \le \mu_{ik}, t_{ik} \le 1$, $\sum_{i=1}^{c} \mu_{ik} = 1$ for each $k$, $m > 1$, $\eta > 1$, $a, b > 0$ and $\gamma_i > 0$. The typicality matrix and the prototypes are calculated by equation (9) and equation (10), respectively.


$$t_{ik} = \frac{1}{1 + \left( \frac{b}{\gamma_i} D_{ikA}^2 \right)^{1/(\eta - 1)}}, \quad 1 \le i \le c, \ 1 \le k \le N \qquad (9)$$

$$v_i = \frac{\sum_{k=1}^{N} \left( a\,\mu_{ik}^{m} + b\,t_{ik}^{\eta} \right) z_k}{\sum_{k=1}^{N} \left( a\,\mu_{ik}^{m} + b\,t_{ik}^{\eta} \right)}, \quad 1 \le i \le c \qquad (10)$$

The basic steps of the PFCM algorithm are described as follows.


Step 1: Initialization: Randomly initialize the partition matrix U and the typicality matrix T; choose the number of clusters c, the parameters m, η, a, b and the termination tolerance ε > 0.
Step 2: Centroid calculation: Calculate the fuzzy cluster prototypes by using the equation (10).
Step 3: Classification: Update the membership matrix by using the equation (5) and the typicality matrix by using the
equation (9).
Step 4: Convergence criteria: Compare the membership matrices before and after the iteration. If the difference is less than the termination tolerance, stop; otherwise repeat from Step 2.
This model can let the prototypes be influenced more by the memberships (when a > b) or more by the typicalities (when b > a). If the values are restricted to a = 1 and b = 0, the PFCM model behaves as the FCM model. The effect of outliers can be reduced by choosing a larger value of b than of a. The experimental results are discussed in the next section.
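A single PFCM iteration (FCM-style memberships, typicalities from equation (9) and prototypes from equation (10), again with A taken as the identity) might look like the sketch below. It is illustrative only and not the authors' MATLAB code; in particular, the per-cluster penalty parameters gamma_i are passed in by the user here, whereas implementations often estimate them from an initial FCM run:

```python
import numpy as np

def pfcm_step(Z, V, gamma, a=1.0, b=1.0, m=2.0, eta=2.0, eps=1e-9):
    """One PFCM iteration; gamma is the length-c vector of penalties gamma_i."""
    D = np.maximum(np.linalg.norm(Z[None, :, :] - V[:, None, :], axis=2), eps)
    # memberships as in FCM: the probabilistic constraint on U is retained
    P = D ** (-2.0 / (m - 1.0))
    U = P / P.sum(axis=0, keepdims=True)
    # eq. (9): typicalities -- no longer required to sum to 1 over the data set
    T = 1.0 / (1.0 + (b * D ** 2 / gamma[:, None]) ** (1.0 / (eta - 1.0)))
    # eq. (10): prototypes weighted by a*u^m + b*t^eta
    W = a * U ** m + b * T ** eta
    V_new = (W @ Z) / W.sum(axis=1, keepdims=True)
    return U, T, V_new
```

With a fixed and b made larger, the prototypes are driven mainly by the typicalities, which is how the text suggests reducing the effect of outliers.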

3. RESULTS AND DISCUSSION

The algorithms were implemented in MATLAB version R2010a. To achieve good clustering results, the authors used a maximum of 100 iterations and 15 independent test runs, with the threshold value ε = 0.00001 and the weighting exponent m = 2 in FPCM and PFCM.
3.1 Results
The liver disorder data set contains 341 samples classified into two classes. Each sample is characterized by 6 attributes, and the samples are labeled 1 to 341: samples 1 to 142 (142 samples) are classified as class 1 and samples 143 to 341 (199 samples) as class 2. The algorithms KM, FPCM and PFCM were applied to generate two clusters. Both FPCM and PFCM generate two clusters, with 53 samples corresponding to class 1 and 288 samples corresponding to class 2; 36 samples that belong to class 2 are wrongly assigned to class 1, and 125 samples that belong to class 1 are wrongly assigned to class 2, by both algorithms.
The KM method generates two clusters containing 38 samples corresponding to class 1 and 303 samples corresponding to class 2; 24 samples that belong to class 2 are wrongly grouped into class 1, and 128 samples that belong to class 1 are wrongly grouped into class 2.
The wine data set contains 178 samples classified into three clusters according to their cultivars. Each sample is characterized by 13 attributes, and the samples are labeled 1 to 178: samples 1 to 59 (59 samples) are classified as cultivar 1, samples 60 to 130 (71 samples) as cultivar 2 and samples 131 to 178 (48 samples) as cultivar 3. The algorithms KM, FPCM and PFCM were applied to cluster the data set into three clusters, cultivar 1, cultivar 2 and cultivar 3. Both the FPCM and PFCM methods generate three clusters, with 46 samples corresponding to cultivar 1, 71 to cultivar 2 and 61 to cultivar 3. Sample 74, which belongs to cultivar 2, is wrongly assigned to cultivar 1, and 21 samples that belong to cultivar 3 are wrongly classified into cultivar 2, by both methods; 14 samples of cultivar 1 and 20 samples of cultivar 2 are wrongly grouped into cultivar 3 by both methods.
The KM method classified the data set into three clusters, cultivar 1, cultivar 2 and cultivar 3, containing 47, 69 and 62 samples, respectively. Sample 74, which belongs to cultivar 2, is wrongly assigned to cultivar 1; 19 samples of cultivar 3 are wrongly grouped into cultivar 2; and 13 samples of cultivar 1 and 20 samples of cultivar 2 are wrongly grouped into cultivar 3.
Table 1 summarizes the results of the clustering methods. For each method, it records the number of samples classified correctly and incorrectly into the respective clusters of the data sets.
Table 1: The clustering results obtained by the algorithms KM, FPCM and PFCM for the liver disorder and wine data sets.

                                           Liver data set (2 clusters)   Wine data set (3 clusters)
Clustering Method                          Class 1   Class 2            Cultivar 1   Cultivar 2   Cultivar 3
K-Means (KM)                   Correct     14        175                46           50           29
                               Incorrect   24        128                1            19           33
                               Total       38        303                47           69           62
Fuzzy Possibilistic c-Means    Correct     17        163                45           50           27
(FPCM)                         Incorrect   36        125                1            21           34
                               Total       53        288                46           71           61
Possibilistic Fuzzy c-Means    Correct     17        163                45           50           27
(PFCM)                         Incorrect   36        125                1            21           34
                               Total       53        288                46           71           61
3.2 Discussion
According to the results of the K-means algorithm on the liver disorder data set, out of the 142 samples of class 1, 14 samples were assigned correctly and 128 samples were incorrectly classified as class 2; the corresponding figures are 17 and 125 samples for both the Fuzzy Possibilistic c-Means and the Possibilistic Fuzzy c-Means methods. Further, out of the 199 samples of class 2, 175 samples were correctly classified and only 24 were wrongly grouped as class 1; the corresponding figures are 163 and 36 samples for both FPCM and PFCM.
The percentage of correctness and the classification performance of the three methods are summarized in Table 2. For the liver disorder data set, the K-means algorithm achieved an accuracy of about 9.85% for class 1 and 87.94% for class 2. In comparison, both the FPCM and PFCM methods achieved an accuracy of about 11.97% for class 1 and 81.91% for class 2.
According to the results of the K-means algorithm on the wine data set, out of the 59 samples of cultivar 1, 46 samples were classified correctly and 13 were wrongly assigned to cultivar 3; the corresponding figures are 45 and 14 samples for the FPCM and PFCM methods. Further, out of the 71 samples of cultivar 2, 50 samples were grouped correctly; of the remaining 21 samples, 20 were wrongly assigned to cultivar 3 and only one was wrongly assigned to cultivar 1. The corresponding figures are 50, 20 and 1 samples for both FPCM and PFCM. Out of the 48 samples of cultivar 3, 29 samples were classified correctly and 19 were wrongly assigned to cultivar 2; the corresponding figures are 27 and 21 samples for both FPCM and PFCM.
For the wine data set, the K-means algorithm achieved an accuracy of about 77.96% for cultivar 1, 70.42% for cultivar 2 and 60.41% for cultivar 3. In comparison, both the FPCM and PFCM methods achieved an accuracy of about 76.27% for cultivar 1, 70.42% for cultivar 2 and 56.25% for cultivar 3.
Overall, K-means yields the best classification performance: 55.43% on the liver disorder data set, compared with 52.79% for both FPCM and PFCM, and 70.22% on the wine data set, compared with 68.54% for both FPCM and PFCM.
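The overall classification-performance percentages above follow directly from Table 1: the correctly classified samples summed over clusters, divided by the total number of samples. A quick plain-Python check, with the counts taken from Table 1 and the class sizes from Section 2.1:

```python
def performance(correct_counts, total_samples):
    """Overall classification performance = total correct / total samples (%)."""
    return 100.0 * sum(correct_counts) / total_samples

# Liver disorder: 341 samples in total; Wine: 178 samples in total
km_liver   = performance([14, 175], 341)      # K-means, liver disorder
fpcm_liver = performance([17, 163], 341)      # FPCM/PFCM, liver disorder
km_wine    = performance([46, 50, 29], 178)   # K-means, wine
fpcm_wine  = performance([45, 50, 27], 178)   # FPCM/PFCM, wine
# km_liver ~ 55.43, fpcm_liver ~ 52.79, km_wine ~ 70.22, fpcm_wine ~ 68.54
```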
Table 2: Comparison of the performance of the clustering results obtained by the algorithms KM, FPCM and PFCM for the liver disorder and wine data sets.

                               Liver data set (2 clusters)                  Wine data set (3 clusters)
                               Correctness %        Classification         Correctness %                          Classification
Clustering Method              Class 1   Class 2    performance %          Cultivar 1   Cultivar 2   Cultivar 3   performance %
K-Means (KM)                   9.85      87.94      55.43                  77.96        70.42        60.41        70.22
Fuzzy Possibilistic c-Means    11.97     81.91      52.79                  76.27        70.42        56.25        68.54
(FPCM)
Possibilistic Fuzzy c-Means    11.97     81.91      52.79                  76.27        70.42        56.25        68.54
(PFCM)
The linkage distances between the attributes of the Liver disorder data set and the Wine data set are represented by column dendrograms in Figure 1 and Figure 2, respectively, in which the x-axis shows the variables (attributes) and the y-axis the Euclidean distance between the variables at different levels. The classification performance of the algorithms KM, FPCM and PFCM on the two data sets is shown graphically in Figure 3, with the data sets along the x-axis and the performance percentages of the algorithms along the y-axis.
[Figure: tree diagram for the 6 variables, single linkage, Euclidean distances]
Figure 1. Dendrogram of the attributes of the liver disorder data set


[Figure: tree diagram for the 13 variables, single linkage, Euclidean distances]
Figure 2. Dendrogram of the attributes of the wine data set

Figure 3. Performance comparison between KM, FPCM and PFCM algorithms

4. CONCLUSION
Data clustering is a method used to partition a data set by extracting unknown, valuable information from large volumes of data for many real-time applications. The robustness of an algorithm can be evaluated from its clustering output. In this comparative study, the authors compared three classification algorithms, K-means (KM), Fuzzy Possibilistic c-Means (FPCM) and Possibilistic Fuzzy c-Means (PFCM), on two popular real-world data sets from the UCI repository, liver disorder and wine. The efficacy of the KM, FPCM and PFCM methods was investigated and compared with respect to classification performance, and the distance relations among the attributes of each data set were shown by column dendrograms. The fuzzy algorithms FPCM and PFCM reported similar results in terms of correctness and classification accuracy. The analysis showed that, among the algorithms, KM had the best accuracy; in the empirical results on classification performance, KM was superior, and particularly simple and efficient in classification. As future work, these algorithms can be hybridized with evolutionary algorithms to improve their efficacy.
Conflict of interests
The authors declare that they have no conflict of interest.

References
[1] A. Rajendran, R. Dhanasekaran, (2011). MRI Brain Image Tissue Segmentation Analysis Using Possibilistic Fuzzy C-means Method. International Journal on Computer Science and Engineering, vol. 3, no. 12, pp. 3832-3836.
[2] Forina M, Stefan Aeberhard, Riccardo Leardi, (1991). UCI Machine Learning Repository, Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno, Genoa, Italy. From http://archive.ics.uci.edu/ml
[3] J. Quintanilla-Dominguez, B. Ojeda-Magana, M. G. Cortina-Januchs, R. Ruelas, A. Vega-Corona, D. Andina, (2011). Image Segmentation by Fuzzy and Possibilistic Clustering Algorithms for the Identification of Microcalcifications. Scientia Iranica, Transactions D: Computer Science and Engineering and Electrical Engineering, vol. 18, pp. 580-589.
[4] Jain A, Murty M and Flynn P, (1999). Data Clustering: A Review. ACM Computing Surveys, vol. 31, no. 3, pp. 264-323.
[5] James C. Bezdek, Robert Ehrlich, William Full, (1984). FCM: The Fuzzy c-Means Clustering Algorithm. Computers & Geosciences, vol. 10, no. 2-3, pp. 191-203.
[6] MacQueen J, (1967). Some Methods for Classification and Analysis of Multivariate Observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, vol. 1: Statistics, pp. 281-297.
[7] N. R. Pal and J. C. Bezdek, (1997). FPCM: A Mixed c-Means Clustering Model. In IEEE International Conference on Fuzzy Systems, Spain, pp. 11-21.
[8] Nidhi Grover, (2014). A Study of Various Fuzzy Clustering Algorithms. International Journal of Engineering Research, vol. 3, issue 3, pp. 177-181.
[9] Nikhil R. Pal, Kuhu Pal, James M. Keller, James C. Bezdek, (2005). A Possibilistic Fuzzy c-Means Clustering Algorithm. IEEE Transactions on Fuzzy Systems, vol. 13, no. 4, pp. 517-530.
[10] Richard S. Forsyth, (1990). UCI Machine Learning Repository. Mapperley Park, Nottingham NG3 5DX, England. From http://archive.ics.uci.edu/ml
[11] Simhachalam B and Ganesan G, (2015). Fuzzy Clustering Algorithms in Medical Diagnostics. Wulfenia Journal, vol. 22, no. 7, pp. 308-317.
[12] Velmurugan T and T Santhanam, (2011). A Survey of Partition Based Clustering Algorithms in Data Mining: An Experimental Approach. Information Technology Journal, vol. 10, issue 3, pp. 478-484.

AUTHOR
Simhachalam Boddana received M.Sc. and M.Phil. degrees in Applied Mathematics from Andhra University in 2005 and 2007, respectively, and an M.Tech degree in Information Technology from Andhra University in 2009. Research interests: Applied Group Theory and Soft Computing.

