
Applied Soft Computing 13 (2013) 4047–4062

Contents lists available at ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

Particle swarm optimization with grey evolutionary analysis


Min-Shyang Leu a, Ming-Feng Yeh a,∗, Shih-Chang Wang b
a Department of Electrical Engineering, Lunghwa University of Science and Technology, Taoyuan 33306, Taiwan
b Department of Business Administration, Lunghwa University of Science and Technology, Taoyuan, Taiwan

Article info

Article history:
Received 16 October 2012
Received in revised form 5 March 2013
Accepted 23 May 2013
Available online 14 June 2013
Keywords:
Evolutionary computation
Grey evolutionary analysis
Grey relational analysis
Parameter automation strategy
Particle swarm optimization

Abstract
Based on grey relational analysis, this study proposes a grey evolutionary analysis (GEA) to analyze the population distribution of particle swarm optimization (PSO) during the evolutionary process. Two GEA-based parameter automation approaches are then developed: one for the inertia weight and the other for the acceleration coefficients. With the help of the GEA technique, the proposed parameter automation approaches enable the inertia weight and acceleration coefficients to adapt to the evolutionary state. Such parameter automation behaviour also enables the GEA-based PSO to perform a global search over the search space with faster convergence speed. In addition, the proposed PSO is applied to solve the optimization problems of twelve unimodal and multimodal benchmark functions for illustration. Simulation results show that the proposed GEA-based PSO can outperform the adaptive PSO, the grey PSO, and two well-known PSO variants on most of the test functions.
© 2013 Elsevier B.V. All rights reserved.

1. Introduction
Particle swarm optimization (PSO), introduced by Kennedy and Eberhart in 1995 [1,2], was inspired by the simulation of simplified animal social behaviours such as bird flocking and fish schooling. The PSO uses a simple mechanism that imitates these swarm behaviours to guide the particles to search for globally optimal solutions. Similar to other evolutionary computation techniques, it is also a population-based iterative algorithm, but it works on the social behaviour of particles in the swarm. It finds the globally best solution by simply adjusting the trajectory of each particle not only towards its own best location but also towards the best particle of the entire swarm at each generation. Owing to its simplicity of implementation and its ability to quickly converge to a reasonably good solution [3], the PSO has been successfully applied to many real-world optimization problems, including electrical power systems [4], pattern recognition [5], and controller design [6–8].
Although the standard PSO has the above advantages, it may suffer from being trapped in local optima when solving complex multimodal functions [9]. Avoiding the local-optimum problem and accelerating the convergence speed therefore become two important issues in PSO research. A number of PSO variants have been proposed to achieve these two purposes. Generally speaking, those developments can be categorized into the following three approaches: control of algorithm parameters [10–13], combination with auxiliary search operators [14–19], and

∗ Corresponding author. Tel.: +886 2 82093211x5501; fax: +886 2 82094650.
E-mail address: mfyeh@mail.lhu.edu.tw (M.-F. Yeh).
1568-4946/$ – see front matter © 2013 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.asoc.2013.05.014

improvement of topological structure [20–23]. As far as control of algorithm parameters is concerned, time-varying control schemes, such as the linearly varying inertia weight [10] and the time-varying acceleration coefficients [11], are commonly used to improve the search performance of PSO. Although those two approaches are simple, they may suffer from improperly updated parameters because no information on the evolutionary state that reflects the diversity of the population is identified or utilized. Zhan et al. therefore proposed the adaptive PSO (APSO) [3] to solve this problem. By evaluating the population distribution and particle fitness, the APSO utilizes an evolutionary state estimation (ESE) technique to identify one out of four defined evolutionary states (i.e., exploration, exploitation, convergence and jumping out) in each generation. Then, based on the identified evolutionary state, adaptive parameter control strategies are applied to adjust the inertia weight and the acceleration coefficients. Such adaptive strategies can enhance the performance of PSO in terms of convergence speed, global optimality, and algorithm reliability.
In order to identify the evolutionary state, the ESE technique must compute the mean distance of each particle to all the other particles in advance. For a swarm with a large population size, the ESE technique may suffer from a heavy computational load. This fact gives rise to the motivation to develop a more efficient ESE technique in this study. Grey relational analysis is a similarity measure for finite sequences with incomplete information [23,24]. For a given reference sequence and a given set of comparative sequences, grey relational analysis can be used to determine the relational grade (similarity) between the reference and each comparative sequence in the given set. If each particle in the PSO is regarded as a comparative sequence and the fittest particle as the


reference one, the relationship between a specific particle and the fittest particle can be found from the corresponding relational grade. The larger the relational grade is, the closer the fittest particle and that specific particle are. Therefore the result of grey relational analysis involves some information about the population distribution. Based on this characteristic, this study proposes the so-called grey evolutionary analysis (GEA) scheme to improve the identification efficiency of the original ESE approach.
In addition, this study also proposes two GEA-based parameter automation approaches for the PSO. One is for the inertia weight and the other is for the acceleration coefficients. Rather than using the adaptive parameter control process of the APSO [3], the proposed GEA-based algorithm parameters are obtained directly from the corresponding transformation functions. In this study, those functions are determined according to the evolutionary behaviour so as to perform a global search with faster convergence speed.
The remainder of this paper is organized as follows. Section 2 briefly presents some background material on grey relational analysis and particle swarm optimization. Grey evolutionary analysis for the PSO is described in Section 3. The proposed GEA-based PSO is described in Section 4. Section 5 presents the search performance of the proposed algorithm on twelve test functions. Finally, Section 6 contains some conclusions of this study.

2. Preliminaries

2.1. Grey relational analysis

Grey relational analysis is a similarity measure for finite sequences with incomplete information [23]. Assume that the reference sequence is defined as y = (y1, y2, y3, . . ., yn) and the comparative sequences are given by xi = (xi1, xi2, . . ., xin), i = 1, 2, 3, . . ., m. The grey relational coefficient between y and xi at the kth datum, k = 1, 2, 3, . . ., n, is defined as follows:

r(yk, xik) = (Δmin + ζ·Δmax)/(Δik + ζ·Δmax),   (1)

where Δik = |yk − xik|, Δmax = maxi maxk Δik, Δmin = mini mink Δik, and ζ ∈ (0, 1] is a distinguishing coefficient used to control the resolution between Δmax and Δmin. The corresponding grey relational grade is

g(y, xi) = Σ_{k=1}^{n} [ωk·r(yk, xik)],   (2)

where ωk is the weighting factor of the grey relational coefficient r(yk, xik) and Σ_{k=1}^{n} ωk = 1. The selection of the weighting factor for a relational coefficient reflects the importance of that datum. In general, we can select ωk = 1/n for all k. The best comparative sequence is determined as the one with the largest relational grade. On the other hand, it can be derived from (1) that r(yk, xik) ∈ [ζ/(1 + ζ), 1]. This further implies that g(y, xi) ∈ [ζ/(1 + ζ), 1].

2.2. Particle swarm optimization

In the PSO, a swarm of particles are represented as potential solutions, and each particle i is associated with two vectors, i.e., the velocity vector Vi = (vi1, vi2, . . ., viD) and the position vector Xi = (xi1, xi2, . . ., xiD), where D represents the dimension of the solution space. The velocity and position of each particle are initialized by random vectors within the corresponding ranges. During the evolutionary process, the trajectory of each individual is adjusted by dynamically altering the velocity of each particle, according to its own flying experience (pBest) and the flying experience of the other particles (gBest) in the search space. That is, the velocity and position of the ith particle on dimension d are updated as

vid = w·vid + c1·rand1d·(pBestid − xid) + c2·rand2d·(gBestd − xid),   (3)

xid = xid + vid,   (4)

where w is the inertia weight, c1 and c2 are the acceleration coefficients, and rand1d and rand2d are two uniformly distributed random numbers independently generated within [0, 1] for the dth dimension [3]. In (3), pBesti represents the position with the best fitness found so far for the ith particle, and gBest is the best position discovered by the whole swarm.

The first part of (3) is the previous velocity, which provides the necessary momentum for particles to roam around the search space. The second part, known as the self-cognitive component, represents the personal thinking of each particle. The cognitive component encourages the particles to move towards their own best positions found so far. The third part, regarded as the social influence component, expresses the collaborative effect of the particles in finding the globally optimal solution. The social component always pulls the particles towards the global best position found so far.

2.3. Some variants of PSO

Shi and Eberhart [10] proposed the PSO with a linearly varying inertia weight (PSO-LVIW) over the generations to improve the performance of PSO. The corresponding mathematical representation is

w = wmax − (wmax − wmin)·(t/T),   (5)

where t is the current generation number and T is a predefined maximum number of generations. The maximal and minimal weights, wmax and wmin, are usually set to 0.9 and 0.4, respectively [10].
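As a concrete illustration of (3)–(5), the following Python sketch implements PSO-LVIW for a minimization problem. The paper's experiments were run in Matlab; the function and variable names here are illustrative, and position clamping to the search bounds is an assumption of this sketch:

```python
import random

def pso_lviw(fitness, dim, bounds, num_particles=20, T=1000,
             w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Minimize `fitness` with PSO using the linearly varying inertia weight (5)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(num_particles)]
    V = [[0.0] * dim for _ in range(num_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [fitness(x) for x in X]
    g = min(range(num_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for t in range(T):
        w = w_max - (w_max - w_min) * t / T              # Eq. (5)
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]                           # momentum
                           + c1 * r1 * (pbest[i][d] - X[i][d])   # cognitive part
                           + c2 * r2 * (gbest[d] - X[i][d]))     # social part, Eq. (3)
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))    # Eq. (4), clamped
            val = fitness(X[i])
            if val < pbest_val[i]:                               # update pBest/gBest
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val
```

On an easy unimodal function such as the Sphere model, this routine drives the fitness towards zero within a few hundred generations.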
The PSO with time-varying acceleration coefficients (PSO-TVAC) [11] is another widely used strategy to improve the performance of PSO. With a large cognitive component and a small social component at the beginning, particles are allowed to move around the search space instead of moving towards the population best. On the other hand, a small cognitive component and a large social component allow the particles to converge to the global optima in the latter part of the evolutionary process. This modification can be mathematically represented as follows:

c1 = (c1f − c1i)·(t/T) + c1i,   (6)

c2 = (c2f − c2i)·(t/T) + c2i,   (7)

where c1i, c1f, c2i, and c2f are constants. The best ranges for c1 and c2 suggested in [11] are 2.5–0.5 and 0.5–2.5, respectively. In other words, a larger c1 and a smaller c2 are set at the beginning, and the two settings are gradually reversed during the search.
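A minimal sketch of the schedules (6)–(7), with the boundary values suggested in [11] as (assumed) defaults:

```python
def tvac_coefficients(t, T, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients of Eqs. (6)-(7).

    c1 shrinks linearly from c1i to c1f while c2 grows from c2i to c2f,
    shifting the search from cognitive (exploration) towards social
    (convergence) behaviour over the run.
    """
    c1 = (c1f - c1i) * t / T + c1i   # Eq. (6)
    c2 = (c2f - c2i) * t / T + c2i   # Eq. (7)
    return c1, c2
```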
Different from the above two time-varying schemes, the APSO
[3] utilizes an evolutionary state estimation approach to identify
one out of four evolutionary states, i.e., the states of exploration,
exploitation, convergence and jumping out. The ESE approach
involves the following three main steps.
(1) Calculate the mean distance of each particle i to all the other particles, where the mean distance is defined as

di = [1/(N − 1)] Σ_{j=1, j≠i}^{N} √( Σ_{d=1}^{D} (xid − xjd)² ),   (8)


Fig. 1. Fuzzy membership functions for the ESE approach.

where N is the population size.


(2) Compute the evolutionary factor f as follows:

f = (dg − dmin)/(dmax − dmin) ∈ [0, 1],   (9)

where dg represents the mean distance of the globally best particle, and dmax and dmin are the maximum and minimum mean
distances, respectively.
(3) Classify the evolutionary factor f into one of the four evolutionary states with a fuzzy classification method. The membership functions for the four evolutionary states are given in Fig. 1.
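The first two ESE steps can be sketched as follows; step (3), the fuzzy classification of Fig. 1, is omitted, and the names are illustrative:

```python
import math

def evolutionary_factor_ese(X, g_index):
    """ESE steps (1)-(2): mean pairwise distances (8) and factor f of Eq. (9).

    X is a list of particle positions; g_index is the index of the
    globally best particle in X.
    """
    N = len(X)
    d = []
    for i in range(N):                       # Eq. (8): mean distance of particle i
        total = 0.0
        for j in range(N):
            if j != i:
                total += math.sqrt(sum((xi - xj) ** 2
                                       for xi, xj in zip(X[i], X[j])))
        d.append(total / (N - 1))
    d_g, d_min, d_max = d[g_index], min(d), max(d)
    return (d_g - d_min) / (d_max - d_min)   # Eq. (9), lies in [0, 1]
```

For three collinear particles, the middle one has the smallest mean distance, so choosing it as the global best yields f = 0 (convergence-like), while an outer particle yields f = 1 (jumping-out-like).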
Once the evolutionary state is identified, the algorithm parameters are adjusted according to the identified state. For example, the inertia weight can be obtained by the following sigmoid mapping:

w(f) = 1/(1 + 1.5e^(−2.6f)) ∈ [0.4, 0.9].   (10)
Fig. 2. Relation between grey relational grade and inertia weight.

Besides, the following adaptive control strategies are applied to adjust the acceleration coefficients: (1) increasing c1 and decreasing c2 in the exploration state; (2) increasing c1 slightly and decreasing c2 slightly in the exploitation state; (3) increasing c1 slightly and increasing c2 slightly in the convergence state; (4) decreasing c1 and increasing c2 in the jumping-out state.
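The sigmoid mapping (10) is a one-liner; this sketch just transcribes it:

```python
import math

def apso_inertia_weight(f):
    """Sigmoid mapping of Eq. (10): w rises from 0.4 (f = 0) towards 0.9 (f = 1)."""
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))
```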
2.4. Grey particle swarm optimization

Grey PSO, which combines the PSO with grey relational analysis, is a kind of hybrid PSO [25]. If the fittest particle gBest is regarded as the reference sequence and all particles Xi are viewed as the comparative ones, grey relational analysis can be applied to analyze the similarity between them. Denote the grey relational grade between the fittest particle gBest and the ith particle Xi as gi = g(gBest, Xi). The larger the relational grade gi is, the closer the fittest particle gBest and the ith particle Xi are. Since the relational grade involves some information about the population distribution, the grey PSO utilizes the relational grade gi to determine the inertia weight (wi) and the acceleration coefficients (c1i and c2i) of particle i. The corresponding mathematical representations are as follows:
wi = fw(gi) = [(wmin − wmax)/(gmax − gmin)]·gi + (wmax·gmax − wmin·gmin)/(gmax − gmin),   (11)

c2i = fc(gi) = [(cmax − cmin)/(gmax − gmin)]·gi + (cmin·gmax − cmax·gmin)/(gmax − gmin),   (12)

c1i = 4.0 − c2i,   (13)

where the subscripts max and min represent the maximal and minimal values of the corresponding parameter, respectively, and fw and fc are the transformation functions for the inertia weight w and the acceleration coefficient c2, respectively. Figs. 2 and 3 show two possible transformation functions for the grey PSO.
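The linear maps (11)–(13) can be sketched as follows; the default boundary values are taken from the grey-PSO configuration quoted in Section 5.2, and the function name is illustrative:

```python
def grey_pso_parameters(g_i, g_min, g_max,
                        w_min=0.4, w_max=0.9, c_min=1.5, c_max=2.5):
    """Per-particle parameters of Eqs. (11)-(13).

    A particle with a large relational grade (close to gBest) receives a
    small inertia weight and a large social coefficient c2.
    """
    span = g_max - g_min
    w_i = (w_min - w_max) / span * g_i \
        + (w_max * g_max - w_min * g_min) / span        # Eq. (11)
    c2_i = (c_max - c_min) / span * g_i \
        + (c_min * g_max - c_max * g_min) / span        # Eq. (12)
    c1_i = 4.0 - c2_i                                   # Eq. (13)
    return w_i, c1_i, c2_i
```

At gi = gmin the maps return (wmax, cmin) for (wi, c2i), and at gi = gmax they return (wmin, cmax), matching the endpoints of Figs. 2 and 3.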

Fig. 3. Relation between grey relational grade and acceleration coefcient.

As can be seen from (11)–(13), each particle has its own inertia weight and acceleration coefficients whose values depend on the corresponding grey relational grade. Since the relational grade of a particle varies over the generations, those parameters are also time-varying. Even in the same generation, those parameters may differ between particles. With this modification, the updating rule for the velocity of the ith particle on dimension d becomes

vid = wi·vid + c1i·rand1·(pBestid − xid) + c2i·rand2·(gBestd − xid),   (14)


where wi is the inertia weight, c1i and c2i are the acceleration coefficients, and rand1 and rand2 are two uniformly distributed random numbers independently generated within [0, 1].

3. Grey evolutionary analysis for PSO

During a PSO search process, the characteristics of the population distribution vary over the generations. At the early part of the search, the particles may be scattered all over the search space; the population distribution is therefore dispersive. As the evolutionary process goes on, the particles gradually gather and then converge to a locally or globally optimal point. The information on the population distribution at that stage is obviously different from that in the early stage. Since grey relational analysis provides a similarity measure, this section develops the grey evolutionary analysis for the PSO in order to investigate the relation between the information on the population distribution and the result of grey relational analysis.

3.1. Global-type grey relational analysis

If grey relational analysis is directly applied to an evolutionary algorithm, as in the grey PSO [25], it suffers from the following two problems. The first one is that grey relational analysis is a local-type, not a global-type, approach. That is to say, the determinations of Δmax and Δmin only take the distribution of particles in the current generation into consideration and do not consider the population distribution in other generations. The maximal factor Δmax and the minimal factor Δmin given in (1) can therefore be regarded as local versions. The second problem is that the grey relational coefficient is undefined if Δmax = 0 and Δmin = 0. When the whole swarm has converged to the optimal point, both Δmax and Δmin are equal to zero. In this situation, the grey relational coefficient (1) becomes undefined.

Considering these concerns, a global version of grey relational analysis is proposed such that it not only solves the undefined-coefficient problem but also uses almost the same metric to measure the relational grades throughout the evolutionary process. In this study, global-type grey relational analysis is modified from grey relational analysis with a global distance factor in [26]. Define Δid = |gBestd − xid| and Δmax = maxi maxd Δid, where Δmax represents the local maximal factor. Denote Γmax as the global maximal factor. At the first generation, the factor Γmax is initialized to Δmax, and then it is updated by the following adaption rule:

If Δmax > Γmax, then Γmax = Δmax; otherwise, Γmax remains unchanged.   (15)

With this adaption, the factor Γmax is guaranteed to be the global maximal factor found so far. Besides, no matter what the population distribution is, at the first generation the local best position of a particle, pBest, is equal to the initial position of that particle. The initial global best position, gBest, is discovered from those pBests and is therefore identical to one of them. Since each particle is regarded as a comparative sequence in the proposed approach, the fittest particle gBest is identical to one of the comparative sequences at the first generation. The local minimal factor Δmin is therefore equal to zero at that moment, which further implies that the global minimal factor also equals zero. Hence, when grey relational analysis is applied to the PSO, the minimal factor can be omitted. Then, according to (1), the grey relational coefficient between the fittest particle gBest and the ith particle Xi at the dth dimension can be rewritten as

rid = r(gBestd, xid) = ζ·Γmax/(Δid + ζ·Γmax).   (16)

Since Γmax > 0, the relational coefficient given in (16) is well defined. The corresponding relational grade is given as

gi = g(gBest, Xi) = (1/D) Σ_{d=1}^{D} rid,   (17)

where D represents the dimension of the solution space. Besides, rid ∈ [ζ/(1 + ζ), 1] and gi ∈ [ζ/(1 + ζ), 1]. These ranges are consistent with the results obtained by grey relational analysis.
3.2. Grey evolutionary analysis

Generally speaking, the particles may be scattered all over the search space at the early part of the search process. Since the relational grade of each particle may be widely distributed over the range [ζ/(1 + ζ), 1], the average grade would be small at the beginning. However, in the exploitation or convergence state, the swarm tends to gather together and centre around the fittest particle. It is then expected that the relational grade of each particle is close or equal to the maximal grade of 1.0, so the average grade would be large in the exploitation or convergence state. When the whole swarm has converged to the optimal point, the average grade equals the maximal value of one. Therefore, the average relational grade can reveal the evolutionary state of a swarm. Such an average value is hereafter termed the evolutionary factor, and the corresponding mathematical representation is
fGEA = (1/N) Σ_{i=1}^{N} gi ∈ [ζ/(1 + ζ), 1],   (18)

where N is the population size.
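Eqs. (15)–(18) chain naturally into one routine. The following Python sketch (illustrative names, not the authors' Matlab implementation) maintains the global maximal factor across generations and returns the evolutionary factor:

```python
def gea_evolutionary_factor(X, gbest, gamma_max, zeta=1.0):
    """Global-type grey relational analysis, Eqs. (15)-(18).

    X: list of particle positions; gbest: global best position;
    gamma_max: global maximal factor carried over from earlier
    generations (pass 0.0 at the first call).
    Returns the updated gamma_max and the evolutionary factor f_GEA.
    """
    # Eq. (15): keep the largest |gBest_d - x_id| observed so far.
    local_max = max(abs(g - x) for xi in X for g, x in zip(gbest, xi))
    gamma_max = max(gamma_max, local_max)
    if gamma_max == 0.0:
        # Degenerate case: whole swarm already sits at gBest.
        return gamma_max, 1.0

    D, N = len(gbest), len(X)
    grades = []
    for xi in X:
        # Eq. (16): per-dimension relational coefficient; Eq. (17): grade.
        r = [zeta * gamma_max / (abs(g - x) + zeta * gamma_max)
             for g, x in zip(gbest, xi)]
        grades.append(sum(r) / D)
    f_gea = sum(grades) / N                  # Eq. (18)
    return gamma_max, f_gea
```

As the swarm tightens around gBest, the grades approach 1 and so does f_GEA, while Γmax (kept by (15)) preserves the global distance scale of earlier generations.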


The evolutionary factor fGEA can be roughly classified into three states, as shown in Fig. 4. Note that the jumping-out state is not taken into consideration in this case. Also, the membership functions given in Fig. 4 are defined for a distinguishing coefficient value of 1.0.
3.3. Comparison of GEA with ESE

In order to attain the evolutionary factor, the ESE approach must determine the mean distance of each particle i to all the other particles. For a swarm with a population size of N, the Euclidean distance between two distinct particles must be computed at least N(N − 1)/2 times so that the mean distances between every pair of particles in the swarm can be obtained. However, the GEA approach requires computing only N relational grades to obtain the evolutionary factor. For N = 20, D = 30, and the system configuration given in Section 5, the computational time of the evolutionary factor required by the ESE approach is, on average, 1.9375 times as long as that required by the GEA approach. As expected, the proposed GEA approach is faster than the ESE approach.
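The operation counts behind this comparison are easy to check:

```python
def ese_distance_evaluations(N):
    """Pairwise Euclidean distances needed by the ESE approach: N(N-1)/2."""
    return N * (N - 1) // 2

def gea_grade_evaluations(N):
    """Relational grades needed by the GEA approach: one per particle."""
    return N

# For the swarm size used in Section 5 (N = 20):
# ese_distance_evaluations(20) -> 190, gea_grade_evaluations(20) -> 20
```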
4. GEA-based PSO

In order to reduce the computational load in the PSO search, the proposed GEA-based PSO directly uses the value of the evolutionary factor fGEA to determine the inertia weight and the acceleration coefficients at each generation. It is unnecessary to classify the evolutionary factor into an evolutionary state with the fuzzy classification method shown in Fig. 4.
4.1. GEA-based inertia weight

Many studies, including the APSO, have suggested that the value of w should be large in the exploration state and small in


Fig. 4. Fuzzy membership functions for the GEA approach with ζ = 1.

the exploitation or convergence state [3,10]. The commonly used setting is an inertia weight varying from 0.9 to 0.4 over the generations. However, the main disadvantage of this method is that once the inertia weight has decreased, the swarm loses its ability to search new areas because it cannot recover its exploration mode [27,28]. On the other hand, the inertia weight w can also play the role of a forgetting factor that biases towards more recent data, thus placing less emphasis on older data. As w approaches zero, the velocity update equation quickly forgets old inputs (i.e., the first part of (3)) and remembers only the most recent inputs (i.e., the second and third parts of (3)) more clearly.
Different from the widely used decreasing approach, the proposed GEA-based inertia weight increases with the evolutionary factor fGEA. A possible transformation function can be represented as

w(fGEA) = 0.4,   if fGEA < gcenter;
w(fGEA) = 1/{1 + 1.5e^(−2.6·h(fGEA))},   elsewhere,   (19)

where h(fGEA) = 2(1 + ζ)·fGEA − (1 + 2ζ) and gcenter = (1 + 2ζ)/[2(1 + ζ)],


which represents the average of the minimal relational grade and the maximal relational grade, i.e., the centre of the range [ζ/(1 + ζ), 1]. In this way, the inertia weight can adapt to the search environment characterized by the evolutionary factor. Fig. 5(a) shows the transformation function for the inertia weight with ζ = 1, for which gcenter = 0.75. In the figure, if fGEA ≥ 0.75, then w(fGEA) = 1/{1 + 1.5 exp[−2.6(4fGEA − 3)]}; otherwise, w(fGEA) = 0.4.
When the evolutionary factor fGEA is smaller than gcenter, the swarm is treated as being in the initial state. In this study, the inertia weight is initialized to 0.4. As the evolutionary process goes on, the evolutionary factor fGEA gradually increases; the larger the evolutionary factor fGEA is, the larger the inertia weight w is. When fGEA is relatively small within the range [gcenter, 1], the search process is at its early stage. A small inertia weight is adopted not only to reduce the tendency of each particle to continue in the direction it has been travelling, but also to place more emphasis on directing each particle towards the global optimum or its local best position. In addition, a small inertia weight still allows the particles to move freely to find the neighbourhood of the global optimum at the beginning of the search. Therefore it can also avoid premature convergence and maintain the diversity of the swarm in the exploration state. On the other hand, when fGEA is relatively large, the search process is in the exploitation or convergence state. The large inertia weight maintains the tendency of each particle towards the optimal region to help with fast convergence. At this moment, the recent inputs benefit the local search.

4.2. GEA-based acceleration coefficients

The acceleration coefficients c1 and c2 control the movement of each particle towards its individual and global best positions, respectively. That is, they are used to balance the global and local search capabilities. Kennedy and Eberhart suggested a fixed value of 2.0 for each coefficient [1], and this setting has been adopted by many other researchers. Rather than a fixed value of 2.0, Ratnaweera et al. proposed time-varying acceleration coefficients to efficiently control the local search and converge to the globally optimal solution [11]. They suggested that, with a large cognitive component and a small social component at the beginning, particles are allowed to move around the search space instead of moving towards the population best, while a small cognitive component and a large social component allow the particles to converge to the global optima in the latter part of the search.

The central idea of the proposed GEA-based acceleration coefficients is based on the concept of the above time-varying approach. Besides, the sum of the two acceleration coefficients is fixed at 4.0, i.e., c1 + c2 = 4.0. According to the suggestion given in [11], c1 must be larger than c2 at the early stage of the search. In this study, c1 and c2 are initialized to 2.5 and 1.5, respectively. When the swarm is in the initial state (i.e., fGEA < gcenter), the acceleration coefficients c1 and c2 remain at their initial values. However, when fGEA ≥ gcenter, c1 is monotonically decreasing with fGEA while c2 is monotonically increasing. That is, the GEA-based acceleration coefficients can also adapt to the search environment characterized by the evolutionary factor. In the exploration state, a larger c1 and a smaller c2 help to explore local optima and maintain the diversity of the swarm. In the exploitation or convergence state, a smaller c1 and a larger c2 allow the particles to converge to the global optimum.
On the basis of the above central idea, a possible transformation function for c1 can be represented as

c1(fGEA) = 2.5,   if fGEA < gcenter;
c1(fGEA) = 2 + 0.5·cos[π·h(fGEA)],   elsewhere,   (20)

where h(fGEA) = 2(1 + ζ)·fGEA − (1 + 2ζ) and gcenter = (1 + 2ζ)/[2(1 + ζ)]. Once c1 is determined, c2 can be obtained by

c2 = 4.0 − c1.   (21)

Fig. 5(b) shows the transformation function for the acceleration coefficients with ζ = 1. In the figure, if fGEA ≥ 0.75, then c1(fGEA) = 2 + 0.5 cos[π(4fGEA − 3)]; otherwise, c1(fGEA) = 2.5.
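A transcription of (20)–(21); the π inside the cosine makes c1 run continuously from 2.5 at fGEA = gcenter down to 1.5 at fGEA = 1, and the default ζ = 1 is assumed:

```python
import math

def gea_acceleration_coefficients(f_gea, zeta=1.0):
    """GEA-based acceleration coefficients, Eqs. (20)-(21).

    c1 decays from 2.5 to 1.5 along a half cosine as the evolutionary
    factor grows; c2 mirrors it so that c1 + c2 = 4.0 always holds.
    """
    g_center = (1 + 2 * zeta) / (2 * (1 + zeta))
    if f_gea < g_center:
        c1 = 2.5                                   # initial state
    else:
        h = 2 * (1 + zeta) * f_gea - (1 + 2 * zeta)
        c1 = 2.0 + 0.5 * math.cos(math.pi * h)     # Eq. (20)
    return c1, 4.0 - c1                            # Eq. (21)
```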
4.3. Procedure of GEA-based PSO

The following procedure can be used to implement the proposed GEA-based PSO algorithm.


Fig. 5. Transformation functions for ζ = 1. (a) Inertia weight. (b) Acceleration coefficients.

(1) Initialize the swarm by assigning a random position and the corresponding velocity vector to each particle. Also determine the necessary parameters, such as the grey distinguishing coefficient ζ and the maximum number of generations T.
(2) Evaluate the fitness value of each particle.
(3) Update the best position pBest of each particle and the global best position gBest.
(4) Update the global maximal factor Γmax by (15).
(5) Calculate the grey relational grade of each particle i using (16) and (17).
(6) Compute the evolutionary factor fGEA by (18).
(7) Determine the inertia weight by (19) and the acceleration coefficients by (20) and (21).
(8) Update the velocities and positions of all particles using (3) and (4).
(9) Repeat steps 2–8 until a stopping criterion is fulfilled (e.g., the maximum number of generations or the goal is reached).
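Steps (1)–(9) can be sketched end-to-end as follows. This is a minimal Python rendering under the default setting ζ = 1, with illustrative names and bound clamping as an added assumption; it is not the authors' Matlab implementation:

```python
import math
import random

def gea_pso(fitness, dim, bounds, num_particles=20, T=1000, zeta=1.0):
    """Sketch of the GEA-based PSO procedure, steps (1)-(9)."""
    lo, hi = bounds
    g_center = (1 + 2 * zeta) / (2 * (1 + zeta))
    # Step (1): random positions and zero initial velocities.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(num_particles)]
    V = [[0.0] * dim for _ in range(num_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [fitness(x) for x in X]                         # step (2)
    g = min(range(num_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]                # step (3)
    gamma_max = 0.0

    for _ in range(T):
        # Step (4): global maximal factor, Eq. (15).
        local_max = max(abs(b - x) for xi in X for b, x in zip(gbest, xi))
        gamma_max = max(gamma_max, local_max)
        # Steps (5)-(6): grades via Eqs. (16)-(17), factor via Eq. (18).
        grades = [sum(zeta * gamma_max / (abs(b - x) + zeta * gamma_max)
                      for b, x in zip(gbest, xi)) / dim for xi in X]
        f_gea = sum(grades) / num_particles
        # Step (7): parameters by Eqs. (19)-(21).
        if f_gea < g_center:
            w, c1 = 0.4, 2.5
        else:
            h = 2 * (1 + zeta) * f_gea - (1 + 2 * zeta)
            w = 1.0 / (1.0 + 1.5 * math.exp(-2.6 * h))
            c1 = 2.0 + 0.5 * math.cos(math.pi * h)
        c2 = 4.0 - c1
        # Step (8): velocity and position updates, Eqs. (3)-(4).
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = fitness(X[i])
            if val < pbest_val[i]:                              # steps (2)-(3) again
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val                                     # step (9): loop ended
```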

Table 1
Dimensions, search spaces, global optimum values, and acceptance levels of the test functions.

Test function   Dimension   Search space       Global optimum   Acceptance
Unimodal
  f1            30          [−100, 100]^D      0                0.01
  f2            30          [−10, 10]^D        0                0.01
  f3            30          [−100, 100]^D      0                100
  f4            30          [−10, 10]^D        0                100
  f5            30          [−100, 100]^D      0                0
  f6            30          [−1.28, 1.28]^D    0                0.01
Multimodal
  f7            30          [−500, 500]^D      −12569.5         −10,000
  f8            30          [−5.12, 5.12]^D    0                50
  f9            30          [−5.12, 5.12]^D    0                50
  f10           30          [−32, 32]^D        0                0.01
  f11           30          [−600, 600]^D      0                0.01
  f12           30          [−50, 50]^D        0                0.01

5. Simulation results

In order to demonstrate the search performance of the proposed GEA-based PSO algorithm, twelve benchmark test functions selected from [9,25] are used. The GEA-based PSO (GEA-PSO) is compared with the APSO [3], the grey PSO [25], and two well-known PSO algorithms, PSO-LVIW [10] and HPSO-TVAC [11]. As mentioned in [3], the APSO generally outperforms the comprehensive-learning PSO (CLPSO) [9], the dynamic multi-swarm PSO (DMS-PSO) [18], the von Neumann topological structure PSO (VPSO) [21], and the fully informed particle swarm (FIPS) algorithm [29] on most test functions. Therefore the detailed numerical results and convergence characteristics of those four PSO variants are not shown in this study.

5.1. Test functions

The following 12 benchmark functions are used to test the search performance of the proposed algorithm. Note that the first six functions are unimodal and the rest are multimodal. The corresponding dimensions, search spaces, global optimum values, and acceptance levels of the test functions are listed in Table 1.

(1) Sphere model:

f1(X) = Σ_{i=1}^{D} xi².

(2) Schwefel's problem 2.22:

f2(X) = Σ_{i=1}^{D} |xi| + Π_{i=1}^{D} |xi|.

(3) Schwefel's problem 1.2:

f3(X) = Σ_{i=1}^{D} ( Σ_{j=1}^{i} xj )².

(4) Generalized Rosenbrock's function:

f4(X) = Σ_{i=1}^{D−1} [100(xi+1 − xi²)² + (xi − 1)²].

(5) Step function:

f5(X) = Σ_{i=1}^{D} (⌊xi + 0.5⌋)².

(6) Quartic function, i.e., noise:

f6(X) = Σ_{i=1}^{D} i·xi⁴ + random[0, 1).

(7) Generalized Schwefel's problem 2.26:

f7(X) = −Σ_{i=1}^{D} xi·sin(√|xi|).

(8) Generalized Rastrigin's function:

f8(X) = Σ_{i=1}^{D} [xi² − 10 cos(2πxi) + 10].

(9) Discontinuous Rastrigin's function:

f9(X) = Σ_{i=1}^{D} [yi² − 10 cos(2πyi) + 10],

where yi = xi if |xi| < 0.5, and yi = round(2xi)/2 if |xi| ≥ 0.5.

(10) Ackley's function:

f10(X) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} xi²)) − exp((1/D) Σ_{i=1}^{D} cos(2πxi)) + 20 + e.

(11) Generalized Griewank function:

f11(X) = (1/4000) Σ_{i=1}^{D} xi² − Π_{i=1}^{D} cos(xi/√i) + 1.

(12) Generalized penalized function:

f12(X) = (π/30){10 sin²(πy1) + Σ_{i=1}^{D−1} (yi − 1)²[1 + 10 sin²(πyi+1)] + (yD − 1)²} + Σ_{i=1}^{D} u(xi, 5, 100, 4),

where yi = 1 + (1/4)(xi + 1) and

u(xi, a, k, m) = k(xi − a)^m,   xi > a;
                 0,             −a ≤ xi ≤ a;
                 k(−xi − a)^m,  xi < −a.

5.2. Parameter settings and system configuration

In the simulations, all the PSO algorithms were tested using the same population size of 20, a value which is widely adopted in PSO [3,25]. In order to reduce the statistical errors, each algorithm was run for 30 independent trials on every test function with the same maximum number of generations, i.e., T = 10,000, for each trial, and the mean results are used in the comparison. As for the APSO, the inertia weight is initialized to 0.9, and c1 and c2 to 2.0; these settings are the same as the initialization given in [3]. As for the PSO-LVIW, the inertia weight was set to change from 0.9 (wmax) to 0.4 (wmin) over the generations. The boundary values of the acceleration coefficients for the HPSO-TVAC were set as c1,0 = 2.5, c1,f = 0.5, c2,0 = 0.5, and c2,f = 2.5, which are the best ranges for c1 and c2 suggested in [11]. The best algorithm configuration of the grey PSO given in [25] is as follows. The maximal and minimal inertia weights were set as wmax = 0.9 and wmin = 0.4, respectively. The boundary values for the acceleration coefficient c2 were cmin = 1.5 and cmax = 2.5. In addition, the distinguishing coefficient was set as ζ = 1.0 and the weighting factor was ωk = 1/D for k = 1, 2, . . ., D. The last two parameters are also utilized in the proposed GEA-based PSO.

All the programs, coded in Matlab version R14, were executed on a personal computer with an Intel Pentium Dual CPU @ 1.60 GHz, 2.0 GB RAM, and the Windows XP operating system.

5.3. Search behaviours of GEA-based PSO

The search behaviours of the proposed GEA-based PSO herein


are investigated only on the Sphere model f1 (a unimodal function)
and the generalized Rastrigins function f8 (a multimodal function).
Figs. 6 and 7 depict the corresponding search behaviours. Roughly
speaking, those two gures are similar to each other. Also it can
be seen from Figs. 6(a) and 7(a) that the evolutionary factor starts
from about 0.78, and then rapidly increases to 0.88 at about the
10th generation. Then the rate of rise is slowed down and the factor
reaches about 0.92 at the end of the 50th generation. After the 50th
generation, the evolutionary factor will slightly increase to 0.95
as shown in Figs. 6(b) and 7(b). Fig. 4 has demonstrated how to
roughly classify the evolutionary factor into three states. According
to the gure, the PSO would be in the state of exploration during
the rst 10 generations, and then lead into the exploitation phase
in the subsequent 40 generations. Finally, the PSO stands in the
convergence state after the 50th generation.
At each generation, the GEA-based inertia weight and acceleration coefcients are dependent upon the evolutionary factor
which involves the information of population distribution. Therefore those algorithm parameters could adapt to the evolutionary
state. The inertia weight shown in Figs. 6(c), (d) and 7(c), (d) conrms that the GEA-based PSO maintains a small w in the exploration
state and a large w in the exploitation and convergence states.
Figs. 6(e), (f) and 7(e), (f) also show good agreement with the central idea of the GEA-based acceleration coefcients. It can be seen
that c1 decreases and c2 increases with the evolutionary factor. In
the exploration state, c1 is large than c2 . However, in the exploitation or convergence state, c1 becomes small than c2 . Besides, while
the transformation functions (19) and (20) could be properly determined in advance, the GEA-based PSO can perform a global search
over the entire search space with faster convergence speed.
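As a quick cross-check of the benchmark definitions in Section 5.1, the following is a minimal Python sketch of four representative functions (the experiments themselves were run in Matlab); D is simply the length of the input vector:

```python
import math

def sphere(x):                       # f1: unimodal, minimum 0 at x = 0
    return sum(v * v for v in x)

def rastrigin(x):                    # f8: multimodal, minimum 0 at x = 0
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):                       # f10: multimodal, minimum 0 at x = 0
    d = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / d))
            - math.exp(sum(math.cos(2 * math.pi * v) for v in x) / d)
            + 20 + math.e)

def griewank(x):                     # f11: multimodal, minimum 0 at x = 0
    s = sum(v * v for v in x) / 4000
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1

zeros = [0.0] * 30                   # D = 30; the global optimum of all four
print(sphere(zeros), rastrigin(zeros), griewank(zeros))  # -> 0.0 0.0 0.0
```

The remaining eight functions follow the same pattern from their definitions above.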


Fig. 6. Search behaviours of GEA-based PSO on Sphere model f1. (a) Evolutionary factor: the first 50 generations. (b) Evolutionary factor for a middle run. (c) Mean value of inertia weight: the first 50 generations. (d) Mean value of inertia weight for a middle run. (e) Mean value of acceleration coefficients: the first 50 generations. (f) Mean value of acceleration coefficients for a middle run.

5.4. Comparisons on solution accuracy

Fig. 8 shows the convergence characteristics of the evolutionary processes of each PSO algorithm on all the test functions. Each curve represents the variation of the mean fitness over the generations for a specific PSO. A curve that terminates before the 10,000th generation (see functions f1, f2, f3, f5, f8, f9, and f11) indicates that all 30 independent trials of that PSO achieved the global optimum before the corresponding number of generations. In addition, on functions f1, f2, f8, and f9, the final mean fitness obtained by the APSO is much smaller than that obtained by the PSO-LVIW and HPSO-TVAC. Therefore those exact final values are given in the corresponding figures.

Table 2 lists the detailed performance on the solution accuracy of each PSO, where the performance is measured in terms of the means and standard deviations of the solutions obtained over 30 independent runs. Boldface in the table indicates the best result(s) among the algorithms. As can be seen, all the PSO algorithms can attain the minimum value of f5. The main reason is that the optimum of f5 is a region rather than a single point. As far as the other unimodal functions are concerned, the GEA-based PSO attains the best accuracy on functions f1, f2, f3, and f6, and ranks second on f4, whereas the APSO has the highest accuracy only on function f4. As for the complex multimodal functions, the GEA-based PSO also achieves the best accuracy on functions f8, f9, f10, and f11. However, the proposed PSO performs second best on f7 and ranks third on f12. On those two functions, the APSO attains the best accuracy. To sum up, except on function f5, the GEA-based PSO attains the best accuracy on 8 out of 11 functions, while the APSO does so only on 3 functions. It can be concluded that the GEA-based PSO surpasses the APSO and our previous work, the grey PSO [25], in solution accuracy.
5.5. Comparisons on convergence speed

Table 3 lists the comparisons on the convergence speed of each PSO in terms of the mean number of generations needed to reach an acceptable solution given in Table 1 and the corresponding mean computational time. The APSO attains the smallest mean number of generations on functions f7, f8, and f9, while the GEA-based PSO does so on the remaining 9 functions. However, a shorter evolutionary process does not imply a smaller computational time. Many existing PSO variants, including the APSO, grey PSO, and GEA-based PSO, add extra operations that cost computational time. As seen in Table 3, the HPSO-TVAC uses the least computational time on f7, whereas the grey PSO costs the shortest time on f8 and f9. The proposed GEA-based PSO attains the least computational time on the other 9 functions. Although the APSO uses the smallest mean number of generations on f7, f8, and f9, it is not the fastest algorithm on those three functions.
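The "mean epochs" entries in Table 3 are averages of a simple per-trial measurement: the first generation at which the best-so-far fitness reaches the acceptance level. A small sketch of that measurement (the trace and threshold below are invented for illustration, not the paper's data):

```python
def generations_to_accept(history, acceptance):
    """First (1-based) generation whose best-so-far fitness is at or below
    the acceptance level, or None if the level is never reached."""
    for gen, best in enumerate(history, start=1):
        if best <= acceptance:
            return gen
    return None

trace = [12.0, 3.5, 0.9, 0.09, 0.008]      # hypothetical best-so-far fitness
print(generations_to_accept(trace, 0.01))  # -> 5
print(generations_to_accept(trace, 1e-9))  # -> None
```

Averaging this count over the 30 trials gives the mean number of generations; the computational time is measured separately, which is why a shorter evolutionary process need not imply a smaller running time.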
5.6. Comparisons using t-tests and discussions

Based on the final search results of 30 independent trials on every function, Table 4 presents the t values and the P values on every function of the two-tailed test with the 5% level of significance between the GEA-based PSO and each other PSO variant. In the table, rows "+1 (better)", "0 (same)", and "-1 (worse)" represent the number of functions on which the GEA-based PSO performs significantly


Fig. 7. Search behaviours of GEA-based PSO on generalized Rastrigin's function f8. (a) Evolutionary factor: the first 50 generations. (b) Evolutionary factor for a middle run. (c) Mean value of inertia weight: the first 50 generations. (d) Mean value of inertia weight for a middle run. (e) Mean value of acceleration coefficients: the first 50 generations. (f) Mean value of acceleration coefficients for a middle run.
Table 2
Search result comparisons on 12 test functions.

Test function          PSO-LVIW       HPSO-TVAC      APSO            Grey PSO      GEA-PSO
f1    Mean        3.16x10^-52    9.36x10^-13    2.31x10^-149    0             0
      Std. dev.   6.11x10^-52    3.62x10^-12    4.95x10^-149    0             0
f2    Mean        2.04x10^-29    5.33x10^-6     4.18x10^-79     0             0
      Std. dev.   4.05x10^-29    9.51x10^-6     9.96x10^-79     0             0
f3    Mean        1.11x10^-1     2.66x10^-1     1.71x10^-10     0             0
      Std. dev.   1.27x10^-1     6.98x10^-1     2.87x10^-10     0             0
f4    Mean        26.93          63.46          2.72            21.95         21.67
      Std. dev.   30.33          31.25          4.03            1.46x10^-2    6.06
f5    Mean        0              0              0               0             0
      Std. dev.   0              0              0               0             0
f6    Mean        8.29x10^-3     7.50x10^-3     5.01x10^-3      2.96x10^-3    1.50x10^-3
      Std. dev.   1.74x10^-3     1.86x10^-2     1.19x10^-3      2.55x10^-3    1.04x10^-3
f7    Mean        -10243.02      -10316.36      -11259.91       -10477.75     -10709.28
      Std. dev.   205.60         264.32         2.16x10^-11     268.79        522.11
f8    Mean        40.11          37.48          7.57x10^-15     0             0
      Std. dev.   8.64           9.67           1.01x10^-14     0             0
f9    Mean        33.17          38.20          8.86x10^-16     0             0
      Std. dev.   15.34          8.29           3.01x10^-15     0             0
f10   Mean        1.15x10^-14    9.57x10^-6     1.11x10^-14     8.88x10^-16   8.88x10^-16
      Std. dev.   3.55x10^-15    3.68x10^-5     5.65x10^-15     0             0
f11   Mean        2.79x10^-3     3.29x10^-3     2.03x10^-3      0             0
      Std. dev.   4.15x10^-3     4.23x10^-3     3.94x10^-3      0             0
f12   Mean        2.28x10^-32    2.76x10^-2     8.24x10^-40     2.26x10^-1    7.79x10^-4
      Std. dev.   1.64x10^-32    6.15x10^-2     6.18x10^-39     5.73x10^-2    3.59x10^-4

Boldface indicates the best result(s) among the algorithms.


Fig. 8. Convergence performance on the 12 test functions. (a) f1 . (b) f2 . (c) f3 . (d) f4 . (e) f5 . (f) f6 . (g) f7 . (h) f8 . (i) f9 . (j) f10 . (k) f11 . (l) f12 .

Fig. 8. (Continued.)

Table 3
Convergence speed comparisons on 12 test functions.

Test function            PSO-LVIW    HPSO-TVAC   APSO        Grey PSO    GEA-PSO
f1    Mean epochs    5495.4      2327.5      1065.8      1339.5      437.5
      Time (s)       0.9119      0.5354      0.5086      0.3723      0.1547
f2    Mean epochs    5289.4      2336.8      1081.0      1410.5      513.1
      Time (s)       0.9118      0.5683      0.5293      0.3981      0.1818
f3    Mean epochs    7470.5      3571.1      1996.5      5315.9      1403.5
      Time (s)       2.3859      0.8243      0.9736      1.4810      0.5296
f4    Mean epochs    5193.3      2490.1      1058.3      1601.2      422.4
      Time (s)       0.9953      0.5208      0.4565      0.3879      0.1544
f5    Mean epochs    5649.2      2447.2      795.3       1482.1      474.6
      Time (s)       1.0309      0.4535      0.3421      0.3649      0.1708
f6    Mean epochs    9253.7      5589.1      4959.0      3587.2      2550.4
      Time (s)       2.9209      1.7359      3.2196      1.3325      1.2656
f7    Mean epochs    3279.3      1480.8      955.8       6740.0      3001.8
      Time (s)       1.0953      0.4992      0.5601      2.2354      1.8636
f8    Mean epochs    5885.5      2776.1      1087.2      1738.1      1557.0
      Time (s)       1.2939      0.6105      0.4908      0.4446      0.7139
f9    Mean epochs    7715.3      3364.1      926.2       1589.0      936.0
      Time (s)       2.2411      0.9206      0.4646      0.4531      0.4859
f10   Mean epochs    5774.0      2642.7      2846.5      2231.9      554.2
      Time (s)       1.2505      0.5511      1.3141      0.5924      0.2346
f11   Mean epochs    5686.5      2430.9      1027.1      1394.1      501.3
      Time (s)       1.5537      0.6483      0.5747      0.4449      0.2376
f12   Mean epochs    5470.6      2580.9      3156.5      7792.6      1261.7
      Time (s)       1.7837      0.9268      1.8770      2.5877      0.6613

Boldface indicates the best result(s) among the algorithms.

Table 4
Comparisons between GEA-PSO and other PSOs on t-tests.

Test function        PSO-LVIW    HPSO-TVAC   APSO        Grey PSO
f1    t-Value    2.0031      1.0203      2.0394      0
      P-Value    0.0649      0.3249      0.0607      0
f2    t-Value    1.5958      1.4676      1.6669      0
      P-Value    0.1450      0.1643      0.1177      0
f3    t-Value    1.2288      1.4740      1.1480      0
      P-Value    0.4349      0.1626      0.2947      0
f4    t-Value    0.4495      5.1736      7.1209      1.7050
      P-Value    0.6600      0.0001      0.0057      0.1103
f5    t-Value    0           0           0           0
      P-Value    0           0           0           0
f6    t-Value    20.1338     16.6182     10.8430     4.2097
      P-Value    0.0000      0.0000      0.0000      0.0009
f7    t-Value    14.0018     8.2280      15.4487     17.3961
      P-Value    0.0008      0.0000      0.0000      0.0000
f8    t-Value    12.8690     4.1126      1.7145      0
      P-Value    0.0000      0.0011      0.1085      0
f9    t-Value    5.4787      7.0214      1.8889      0
      P-Value    0.0028      0.0000      0.0798      0
f10   t-Value    11.6190     1.0071      10.2324     0
      P-Value    0.0000      0.3310      0.0020      0
f11   t-Value    2.6056      3.0054      2.1277      0
      P-Value    0.0207      0.0095      0.0516      0
f12   t-Value    16.4417     1.7202      28.1059     11.1414
      P-Value    0.0000      0.1074      0.0000      0.0000

+1 (better)                  7           6           3           3
0 (same)                     5           6           6           9
-1 (worse)                   1           0           3           0
General merit over contender 6           6           0           3

1. The value of t with 29 degrees of freedom is significant at α = 0.05 by a two-tailed test; 2. Boldface font indicates that the GEA-based PSO performs significantly better than the compared algorithm; 3. Boldface and italic font indicates that the GEA-based PSO performs significantly worse than the compared algorithm.


Table 5
Search result comparisons for different distinguishing coefficients.

ζ                 0.05         0.10         0.15         0.20         0.25         0.30         0.35         0.40         0.45         0.50
f1   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f2   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f3   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f4   Mean         21.74        21.76        21.75        21.74        21.78        21.73        21.79        21.69        21.76        21.71
     Std. dev.    0.72         9.47         11.26        11.08        0.54         11.75        4.47         0.51         0.54         4.60
f5   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f6   Mean         5.97x10^-3   4.94x10^-3   4.94x10^-3   4.03x10^-3   3.33x10^-3   3.86x10^-3   2.54x10^-3   2.76x10^-3   2.92x10^-3   2.42x10^-3
     Std. dev.    3.41x10^-3   2.61x10^-3   2.63x10^-3   2.59x10^-3   2.21x10^-3   2.52x10^-3   2.29x10^-3   2.11x10^-3   2.22x10^-3   2.00x10^-3
f7   Mean         -10611.20    -10679.07    -10667.02    -10631.74    -10593.74    -10642.60    -10716.38    -10581.40    -10568.26    -10710.02
     Std. dev.    383.09       330.89       335.15       309.58       398.84       400.47       371.49       361.19       335.54       426.67
f8   Mean         0            4.99x10^-3   1.81x10^-3   0            0            0            0            0            0            0
     Std. dev.    0            1.83x10^-2   9.09x10^-3   0            0            0            0            0            0            0
f9   Mean         3.87x10^-4   0            0            0            0            0            0            0            0            0
     Std. dev.    1.34x10^-3   0            0            0            0            0            0            0            0            0
f10  Mean         8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f11  Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f12  Mean         1.88x10^-3   1.81x10^-3   1.89x10^-3   1.32x10^-3   8.87x10^-4   8.06x10^-4   8.64x10^-4   8.79x10^-4   8.15x10^-4   8.13x10^-4
     Std. dev.    7.46x10^-3   6.30x10^-3   5.92x10^-3   4.16x10^-3   3.68x10^-3   2.36x10^-3   3.67x10^-3   2.65x10^-3   3.64x10^-4   1.74x10^-3

ζ                 0.55         0.60         0.65         0.70         0.75         0.80         0.85         0.90         0.95         1.00
f1   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f2   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f3   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f4   Mean         21.72        21.69        21.65        21.66        21.68        21.67        21.69        21.68        21.67        21.67
     Std. dev.    0.72         9.47         11.26        11.08        0.54         11.75        4.47         0.51         0.54         4.60
f5   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f6   Mean         1.95x10^-3   1.91x10^-3   2.60x10^-3   2.17x10^-3   2.03x10^-3   2.02x10^-3   2.37x10^-3   1.90x10^-3   1.64x10^-3   1.50x10^-3
     Std. dev.    1.72x10^-3   1.64x10^-3   2.16x10^-3   1.73x10^-3   1.71x10^-3   1.99x10^-3   1.93x10^-3   1.64x10^-3   1.37x10^-3   1.04x10^-3
f7   Mean         -10712.90    -10657.24    -10709.06    -10593.34    -10375.85    -10562.53    -10492.82    -10429.44    -10638.50    -10709.21
     Std. dev.    368.57       354.85       389.68       341.67       248.10       281.91       398.60       431.74       395.35       522.11
f8   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f9   Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f10  Mean         8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16  8.88x10^-16
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f11  Mean         0            0            0            0            0            0            0            0            0            0
     Std. dev.    0            0            0            0            0            0            0            0            0            0
f12  Mean         7.92x10^-4   7.81x10^-4   7.84x10^-4   8.67x10^-4   7.78x10^-4   7.65x10^-3   7.54x10^-4   7.85x10^-4   7.92x10^-4   7.79x10^-4
     Std. dev.    2.68x10^-3   1.84x10^-3   1.67x10^-3   1.44x10^-4   1.65x10^-3   1.66x10^-3   1.64x10^-3   2.09x10^-4   1.77x10^-3   3.59x10^-4

Boldface and italic font indicates that the GEA-based PSO with the corresponding distinguishing coefficient performs better than or equal to the PSO-LVIW, HPSO-TVAC, APSO, and grey PSO.

better than, almost the same as, and significantly worse than the compared algorithm, respectively. The row "General merit" gives the difference between the number of +1s and the number of -1s, which is used to give an overall comparison between the two algorithms. Take the comparison between the GEA-based PSO and the APSO for instance. The former significantly outperformed the latter on three functions (f6, f10, and f11), performed as well as the latter on six functions (f1, f2, f3, f5, f8, and f9), and performed worse on three functions (f4, f7, and f12). That yields a General merit of 3 - 3 = 0, indicating that the GEA-based PSO generally performs almost the same as the APSO in solution accuracy. Although the GEA-based PSO performed slightly weaker on some functions,


Table 6
Convergence speed comparisons for different distinguishing coefficients.

ζ                 0.05       0.10       0.15       0.20       0.25       0.30       0.35       0.40       0.45       0.50
f1   Epochs       4032.8     2959.3     2169.3     1314.8     1318.7     1139.3     877.7      1028.6     983.1      842.2
     Time (s)     1.4682x    1.0477x    0.7532x    0.4744x    0.4844x    0.4492x    0.4271x    0.3712x    0.4010x    0.3347x
f2   Epochs       3070.5     2019.8     1810.8     1297.5     1372.2     1008.2     1036.0     984.1      1086.3     857.0
     Time (s)     1.1817x    0.7433x    0.6404x    0.5089x    0.5256x    0.4139x    0.5063x    0.3601x    0.4182x    0.3749x
f3   Epochs       4357.3     3373.5     2921.3     2496.6     2289.5     2259.0     2058.5     1809.2     1933.2     1703.9
     Time (s)     1.7739x    1.3406x    1.1574x    1.0216x    0.9378x    1.0482x    1.0898x    0.7388x    0.8270x    0.7406x
f4   Epochs       2762.6     1962.9     1603.4     1467.7     940.0      1139.2     771.2      808.6      767.2      652.3
     Time (s)     1.0467x    0.7246x    0.5853x    0.5848x    0.3571x    0.4900x    0.3978x    0.3357x    0.3149x    0.2772x
f5   Epochs       3800.2     2320.1     1721.8     1405.8     1275.7     971.4      935.8      837.5      764.9      831.6
     Time (s)     1.4568x    0.8757x    0.6365x    0.5377x    0.4849x    0.4167x    0.4482x    0.3189x    0.3032x    0.3468x
f6   Epochs       6894.2     5450.6     5510.3     4649.3     4207.1     4114.7     3635.9     3227.4     3242.1     2582.6
     Time (s)     3.5700x    2.7593x    2.7097x    2.4080x    2.1524x    2.2851x    2.1759x    1.6321x    1.7069x    1.4113x
f7   Epochs       4986.5     4483.3     3685.5     3341.9     2915.5     3460.2     2858.0     3274.2     2839.4     2871.8
     Time (s)     2.7318x    2.4529x    2.3246x    2.1083x    1.8785≈    2.1701x    2.0324x    1.8130≈    1.6537+    1.7244+
f8   Epochs       7721.2     5546.2     4541.2     4970.3     3865.8     3552.5     3225.5     2152.5     2317.0     2311.8
     Time (s)     3.1823x    2.2561x    1.8046x    2.0779x    1.6743x    1.6555x    1.5276x    0.9135x    1.0232x    1.0975x
f9   Epochs       6984.0     6311.2     5229.2     5323.3     5074.0     4358.3     3875.2     4054.2     3623.6     1908.0
     Time (s)     3.3205x    2.8680x    2.3616x    2.5759x    2.4896x    2.2550x    2.2520x    1.9540x    1.8941x    0.9436x
f10  Epochs       3624.2     2407.0     1793.1     1234.9     1337.3     1063.6     1055.2     755.2      714.6      740.4
     Time (s)     1.5395x    1.0022x    0.7535x    0.5257x    0.5823x    0.4932x    0.5588x    0.3199x    0.3456x    0.3519x
f11  Epochs       4226.2     2571.5     1850.7     1388.8     1259.5     1089.5     978.9      1059.1     881.4      842.5
     Time (s)     2.0306x    1.2109x    0.8720x    0.6699x    0.6174x    0.5702x    0.5508x    0.4683x    0.4190x    0.4185x
f12  Epochs       5126.2     4084.4     3171.9     2455.0     2564.0     2027.9     2030.2     1799.3     1882.4     1500.9
     Time (s)     2.4741x    1.8854x    1.5452x    1.2165x    1.3455x    1.1049x    1.1165x    0.9463x    1.0260x    0.8260x

+                 0          0          0          0          0          0          0          0          1          1
x                 12         12         12         12         11         12         12         11         11         11
≈                 0          0          0          0          1          0          0          1          0          0

ζ                 0.55       0.60       0.65       0.70       0.75       0.80       0.85       0.90       0.95       1.00
f1   Epochs       653.3      795.8      704.7      617.1      615.5      487.1      449.3      543.4      474.8      437.5
     Time (s)     0.2432x    0.3108x    0.2781x    0.2069x    0.2232x    0.1685x    0.1569≈    0.1892x    0.1659x    0.1547
f2   Epochs       904.4      996.7      806.3      666.3      939.1      766.2      558.6      464.1      815.8      513.1
     Time (s)     0.3610x    0.4258x    0.3471x    0.2444x    0.3873x    0.2940x    0.2089x    0.1807≈    0.3208x    0.1818
f3   Epochs       1542.5     1937.1     1984.2     2097.2     1905.0     1542.6     1468.8     1686.9     1735.8     1403.5
     Time (s)     0.6829x    0.8872x    0.9246x    0.8299x    0.7904x    0.6466x    0.6064x    0.7060x    0.6987x    0.5296
f4   Epochs       797.7      556.7      602.5      519.0      554.6      456.2      525.1      581.1      428.8      422.4
     Time (s)     0.3321x    0.2316x    0.2478x    0.1832x    0.2207x    0.1744x    0.2001x    0.2241x    0.1649x    0.1544
f5   Epochs       682.9      548.1      548.1      586.7      576.5      590.8      543.7      500.5      456.5      474.6
     Time (s)     0.2814x    0.2241x    0.2263x    0.2066x    0.2204x    0.2165x    0.1996x    0.1829x    0.1666≈    0.1708
f6   Epochs       2704.6     2356.7     2954.2     2773.6     2408.5     2567.5     2437.4     2694.5     1821.7     2550.4
     Time (s)     1.4694x    1.2752≈    1.6199x    1.3426x    1.2495≈    1.2761≈    1.2191≈    1.3567x    0.9154+    1.2656
f7   Epochs       2164.9     2933.9     3045.4     2409.8     2726.5     2425.5     2801.2     2460.4     3011.6     3001.8
     Time (s)     1.2472+    1.7328+    1.8067≈    1.4070+    1.6781+    1.4756+    1.6428+    1.4317+    1.8698≈    1.8636
f8   Epochs       2122.2     2056.7     1667.9     1307.6     1288.8     1934.8     1782.2     3566.5     891.0      1557.0
     Time (s)     0.9557x    0.9644x    0.9703x    0.5474+    0.5434+    0.8672x    0.7425≈    1.4873x    0.4082+    0.7139
f9   Epochs       2219.9     2396.5     1303.2     2123.6     1268.0     1088.6     2052.0     2280.2     1562.3     936.0
     Time (s)     1.1839x    1.2804x    0.6653x    1.0759x    0.6253x    0.5201x    1.0323x    1.1778x    0.8109x    0.4859
f10  Epochs       794.1      659.3      532.8      603.3      571.7      599.1      537.7      411.6      487.7      554.2
     Time (s)     0.3664x    0.2946x    0.2383≈    0.2497x    0.2469x    0.2504x    0.2242≈    0.1704+    0.2028+    0.2346
f11  Epochs       815.7      676.4      690.2      636.7      610.1      615.8      499.5      628.5      525.6      501.3
     Time (s)     0.3892x    0.3215x    0.3203x    0.3003x    0.2928x    0.2884x    0.2370≈    0.2952x    0.2487≈    0.2376
f12  Epochs       2188.0     1548.5     1623.5     1596.5     1160.5     1293.2     1159.3     1297.6     1431.8     1261.7
     Time (s)     1.2037x    0.8555x    0.7873x    0.8336x    0.6253+    0.6662≈    0.6064+    0.6787≈    0.7479≈    0.6613

+                 1          1          0          2          3          1          2          2          3
x                 11         10         10         10         8          9          5          8          6
≈                 0          1          2          0          1          2          5          2          3

Boldface and italic font indicates that the GEA-based PSO with the corresponding distinguishing coefficient converges faster than the PSO-LVIW, HPSO-TVAC, APSO, and grey PSO. "+", "x", and "≈" denote that the convergence speed with the corresponding distinguishing coefficient is faster than, slower than, and similar to that with the distinguishing coefficient of 1.00, respectively.

Table 4 also reveals that it generally outperforms the PSO-LVIW, HPSO-TVAC, and grey PSO.

Taking the solution accuracy (Table 2) and the convergence speed (Table 3) into consideration simultaneously, the GEA-based PSO outperforms the PSO-LVIW, HPSO-TVAC, APSO, and grey PSO on seven out of twelve test functions (f1, f2, f3, f5, f6, f10, and f11). On the remaining five functions, those five PSO variants cannot simultaneously attain the best solution accuracy and the fastest convergence speed. For example, the APSO attains the best accuracy on functions f4, f7, and f12, whereas the grey PSO and GEA-based PSO do so on f8 and f9. However, the GEA-based PSO has the least computational time on f4 and f12, while the HPSO-TVAC does on f7 and the grey PSO on f8 and f9. Besides, the PSO-LVIW generally yields the worst results in both convergence speed and solution accuracy.

To sum up, the GEA-based PSO outperforms the APSO, grey PSO, HPSO-TVAC, and PSO-LVIW in solution accuracy and computational time on most of the considered problems. This also reveals that the GEA-based inertia weight and acceleration coefficients can provide a global search and deliver faster convergence for the search process.
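For reference, a t statistic of the kind reported in Table 4 can be computed directly from the per-trial results of two algorithms. The sketch below assumes a paired test, which is consistent with the 29 degrees of freedom quoted for 30 trials, although the paper does not state the pairing explicitly; the sample values are invented for illustration:

```python
import math

def paired_t(a, b):
    """Paired t statistic over per-trial differences; df = len(a) - 1."""
    n = len(a)
    d = [x - y for x, y in zip(a, b)]
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # unbiased sample variance
    return mean / math.sqrt(var / n)

# hypothetical final fitness values of two algorithms over four trials
t = paired_t([0.9, 1.1, 1.0, 1.2], [1.5, 1.4, 1.6, 1.3])
print(round(t, 3))  # -> -3.266
```

A negative t here means the first algorithm's mean fitness is lower (better, for minimization); the P value then follows from the t distribution with n - 1 degrees of freedom.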

5.7. Effects of distinguishing coefficient

There are two main parameters in grey relational analysis. One is the weighting factor ω_k and the other is the distinguishing coefficient ζ. In function optimization problems, each element (dimension) of a particle generally has the same degree of importance. This study therefore adopts the equal-weighting scheme, i.e., ω_k = 1/D for all k, in the simulations. If the importance of each dimension could be obtained in advance, every weighting factor could be determined accordingly. On the other hand, the distinguishing coefficient ζ is used to control the resolution between Δmax and Δmin. It affects the magnitude of the grey relational grade, that is, g_i ∈ [ζ/(1 + ζ), 1] for ζ ∈ (0, 1]. Theoretically, a smaller distinguishing coefficient results in a wider distribution range for the grey relational grade. However, that does not guarantee a better result on the search problems. Generally speaking, the value of the distinguishing coefficient is suggested to be around 0.5 or about 1.0 [25].

Table 5 demonstrates the search result comparisons for different distinguishing coefficients. In the table, the performance is also measured in terms of the means and standard deviations of the solutions obtained over 30 independent runs. Boldface and italic font indicates that the GEA-based PSO with the corresponding distinguishing coefficient performs better than or equal to the PSO-LVIW, HPSO-TVAC, APSO, and grey PSO. Except for ζ = 0.10 and 0.15 on f8 and ζ = 0.05 on f9, all the distinguishing coefficients attain the same search result on functions f1, f2, f3, f5, f8, f9, f10, and f11. On the remaining four functions, different coefficients generally yield similar results on the same function. That is to say, the distinguishing coefficient does not heavily affect the solution accuracy of the proposed GEA-based PSO. Generally speaking, the GEA-based PSO with any distinguishing coefficient outperforms the PSO-LVIW, HPSO-TVAC, APSO, and grey PSO on nine out of twelve test functions (i.e., f1, f2, f3, f5, f6, f8, f9, f10, and f11).

Table 6 shows the comparisons on the convergence speed for different distinguishing coefficients in terms of the mean number of generations and the corresponding mean computational time. Boldface and italic font represents that the GEA-based PSO with the corresponding distinguishing coefficient converges faster than the PSO-LVIW, HPSO-TVAC, APSO, and grey PSO. Besides, the symbols "+", "x", and "≈" denote that the convergence speed with the corresponding distinguishing coefficient is faster than, slower than, and similar to that with the distinguishing coefficient of 1.00, respectively. It can be identified from Table 6 that the distinguishing coefficient of 1.00 yields the fastest speed. Therefore it can be concluded that the distinguishing coefficient of 1.00 is the best setting for the proposed algorithm. That is the main reason why the distinguishing coefficient was set as ζ = 1.0 in the initial parameter setting of the proposed GEA-based PSO.
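The bound g_i ∈ [ζ/(1 + ζ), 1] quoted above can be seen directly in the standard grey relational coefficient. The sketch below uses the generic textbook form of that coefficient (not the paper's exact GEA equations) to show how ζ stretches or compresses the spread of the coefficients:

```python
def grey_coeffs(ref, seq, zeta):
    """Grey relational coefficient in each dimension:
    (d_min + zeta*d_max) / (d_k + zeta*d_max).  With d_min = 0 this ranges
    from zeta/(1 + zeta) (where d_k = d_max) up to 1 (where d_k = d_min).
    Assumes d_max > 0."""
    deltas = [abs(r - s) for r, s in zip(ref, seq)]
    dmin, dmax = min(deltas), max(deltas)
    return [(dmin + zeta * dmax) / (d + zeta * dmax) for d in deltas]

ref = [1.0, 2.0, 3.0, 4.0]
seq = [1.0, 2.5, 2.0, 6.0]            # absolute differences: 0, 0.5, 1, 2
for zeta in (0.05, 0.5, 1.0):
    c = grey_coeffs(ref, seq, zeta)
    lo = zeta / (1 + zeta)            # theoretical lower bound
    assert all(lo - 1e-12 <= v <= 1.0 + 1e-12 for v in c)
print(grey_coeffs(ref, seq, 1.0))     # -> [1.0, 0.8, 0.6666666666666666, 0.5]
```

A smaller ζ pushes the smallest coefficient toward 0 (wider spread), while ζ = 1.0 keeps all coefficients in [0.5, 1]; the grade g_i is then the ω_k-weighted sum of these per-dimension coefficients.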
6. Conclusions

With the help of grey relational analysis, this study has developed the grey evolutionary analysis for the PSO so as to evaluate the evolutionary state of a swarm. In addition, two GEA-based parameter automation approaches were proposed to enable the inertia weight and acceleration coefficients to adapt to the evolutionary state. Those two approaches improve the search efficiency and hasten the convergence speed. From the results of empirical simulations with twelve well-known benchmarks, the GEA-based PSO outperforms the APSO, grey PSO, HPSO-TVAC, and PSO-LVIW in solution accuracy and computational time on most of the considered problems. This also reveals that the proposed GEA-based PSO performs a global search over the search space with faster convergence speed, owing to the algorithm parameters that utilize the information of grey evolutionary analysis.

Acknowledgment

This work was supported by the National Science Council, Taiwan, Republic of China, under Grants NSC 100-2221-E-262-002 and NSC 101-2221-E-262-011.
References

[1] J. Kennedy, R.C. Eberhart, A new optimizer using particle swarm theory, in: Proc. 6th Int. Symp. Micro Machine Human Sci., Nagoya, Japan, 1995, pp. 39-43.
[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. IEEE Int. Conf. Neural Netw., Piscataway, NJ, 1995, pp. 1942-1948.
[3] Z.H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics 39 (6) (2009) 1362-1381.
[4] M.R. Al-Rashidi, M.E. El-Hawary, A survey of particle swarm optimization applications in electric power systems, IEEE Transactions on Evolutionary Computation 13 (4) (2009) 913-918.
[5] C.J. Lin, M.H. Hsieh, Classification of mental task from EEG data using neural networks based on particle swarm optimization, Neurocomputing 72 (4-6) (2009) 1121-1130.
[6] C.H. Liu, Y.Y. Hsu, Design of a self-tuning PI controller for a STATCOM using particle swarm optimization, IEEE Transactions on Industrial Electronics 57 (2) (2010) 702-715.
[7] S.Z. Zhao, M.W. Iruthayarajan, S. Baskar, P.N. Suganthan, Multi-objective robust PID controller tuning using two lbests multi-objective particle swarm optimization, Information Sciences 181 (2011) 3323-3335.
[8] R.J. Wai, J.D. Lee, K.L. Chuang, Real-time PID control strategy for maglev transportation system via particle swarm optimization, IEEE Transactions on Industrial Electronics 58 (2) (2011) 629-646.
[9] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (3) (2006) 281-295.
[10] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proc. IEEE World Congr. Comput. Intell., 1998, pp. 69-73.
[11] A. Ratnaweera, S. Halgamuge, H. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (3) (2004) 240-255.
[12] Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput., vol. 1, 2001, pp. 101-106.
[13] R.C. Eberhart, Y. Shi, Tracking and optimizing dynamic systems with particle swarms, in: Proc. IEEE Congr. Evol. Comput., Seoul, Korea, 2001, pp. 94-97.
[14] P.J. Angeline, Using selection to improve particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput., Anchorage, AK, 1998, pp. 84-89.
[15] Y.P. Chen, W.C. Peng, M.C. Jian, Particle swarm optimization with recombination and dynamic linkage discovery, IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics 37 (6) (2007) 1460-1470.
[16] P.S. Andrews, An investigation into mutation operators for particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 1044-1051.
[17] W.J. Zhang, X.F. Xie, DEPSO: hybrid particle swarm with differential evolution operator, in: Proc. IEEE Conf. Syst., Man, Cybern., 2003, pp. 3816-3821.
[18] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with local search, in: Proc. IEEE Congr. Evol. Comput., 2005, pp. 522-528.
[19] H. Liu, Z. Cai, Y. Wang, Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Applied Soft Computing 10 (2) (2010) 629-640.
[20] J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in: Proc. IEEE Congr. Evol. Comput., vol. 3, 1999, pp. 1931-1938.
[21] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proc. IEEE Congr. Evol. Comput., vol. 2, 2002, pp. 1671-1676.
[22] Y. Wang, Z. Cai, A hybrid multi-swarm particle swarm optimization to solve constrained optimization problems, Frontiers of Computer Science in China 3 (1) (2009) 38-52.
[23] J.L. Deng, Introduction to grey system theory, Journal of Grey Systems 1 (1) (1989) 1-24.
[24] M.F. Yeh, K.C. Chang, A self-organizing CMAC network with grey credit assignment, IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics 36 (3) (2006) 623-635.
[25] M.S. Leu, M.F. Yeh, Grey particle swarm optimization, Applied Soft Computing 12 (9) (2012) 2985-2996.
[26] C. Wen, M.F. Yeh, K.C. Chang, ECG beat recognition using GreyART network, IET Signal Processing 1 (1) (2007) 19-28.
[27] M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58-73.
[28] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.C. Hernandez, R.G. Harley, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Transactions on Evolutionary Computation 12 (2) (2008) 171-195.
[29] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Transactions on Evolutionary Computation 8 (3) (2004) 204-210.
