Department of Electrical Engineering, Lunghwa University of Science and Technology, Taoyuan 33306, Taiwan
Department of Business Administration, Lunghwa University of Science and Technology, Taoyuan, Taiwan
ARTICLE INFO
Article history:
Received 16 October 2012
Received in revised form 5 March 2013
Accepted 23 May 2013
Available online 14 June 2013
Keywords:
Evolutionary computation
Grey evolutionary analysis
Grey relational analysis
Parameter automation strategy
Particle swarm optimization
ABSTRACT
Based on grey relational analysis, this study proposes a grey evolutionary analysis (GEA) to analyze the population distribution of particle swarm optimization (PSO) during the evolutionary process. Two GEA-based parameter automation approaches are then developed: one for the inertia weight and the other for the acceleration coefficients. With the help of the GEA technique, the proposed parameter automation approaches enable the inertia weight and acceleration coefficients to adapt to the evolutionary state. This parameter automation behaviour also allows the GEA-based PSO to perform a global search over the search space with faster convergence speed. In addition, the proposed PSO is applied to the optimization of twelve unimodal and multimodal benchmark functions for illustration. Simulation results show that the proposed GEA-based PSO outperforms the adaptive PSO, the grey PSO, and two well-known PSO variants on most of the test functions.
© 2013 Elsevier B.V. All rights reserved.
1. Introduction
Particle swarm optimization (PSO), introduced by Kennedy and Eberhart in 1995 [1,2], was inspired by the simulation of simplified animal social behaviours such as bird flocking and fish schooling. The PSO uses a simple mechanism that imitates these swarm behaviours to guide the particles in their search for globally optimal solutions. Similar to other evolutionary computation techniques, it is a population-based iterative algorithm, but it works on the social behaviour of particles in the swarm. It finds the global best solution by simply adjusting the trajectory of each particle, at each generation, towards both its own best location and the best particle of the entire swarm. Owing to its simplicity of implementation and its ability to converge quickly to a reasonably good solution [3], the PSO has been successfully applied to many real-world optimization problems, including electrical power systems [4], pattern recognition [5], controller design [6–8], etc.
Although the standard PSO has these advantages, it may become trapped in local optima when solving complex multimodal functions [9]. Avoiding local optima and accelerating the convergence speed have therefore become two important issues in PSO research. A number of PSO variants have been proposed to achieve these two purposes. Generally speaking, those developments can be categorized into the following three approaches: control of algorithm parameters [10–13], combination with auxiliary search operators [14–19], and improvement of the population topology [20–22].
2. Preliminaries

2.1. Grey relational analysis

Consider a reference sequence y = (y1, y2, . . ., yn) and a set of comparative sequences xi = (xi1, xi2, . . ., xin). The grey relational coefficient between y and xi at the kth datum is defined as

r(yk, xik) = (Δmin + ζΔmax)/(Δik + ζΔmax),   (1)

where Δik = |yk − xik|, Δmax = maxi maxk Δik, Δmin = mini mink Δik, and ζ ∈ (0, 1] is a distinguishing coefficient that controls the resolution between Δmax and Δmin. The corresponding grey relational grade is

g(y, xi) = Σ_{k=1}^{n} γk r(yk, xik),   (2)

where γk is the weighting factor of the grey relational coefficient r(yk, xik) and Σ_{k=1}^{n} γk = 1. The selection of the weighting factor for a relational coefficient reflects the importance of that datum. In general, it can be chosen as γk = 1/n for all k. The best comparative sequence is determined as the one with the largest relational grade. On the other hand, it can be derived from (1) that r(yk, xik) ∈ [ζ/(1 + ζ), 1], which further implies that g(y, xi) ∈ [ζ/(1 + ζ), 1].

2.2. Particle swarm optimization

In the PSO, a swarm of particles represents potential solutions, and each particle i is associated with two vectors, i.e., the velocity vector Vi = (vi1, vi2, . . ., viD) and the position vector Xi = (xi1, xi2, . . ., xiD), where D represents the dimension of the solution space. The velocity and position of each particle are initialized by random vectors within the corresponding ranges. During the evolutionary process, the trajectory of each individual is adjusted by dynamically altering the velocity of each particle, according to its own flying experience (pBest) and the flying experience of the other particles (gBest) in the search space. That is, the velocity and position of the ith particle on dimension d are updated as

vid = w vid + c1 rand1 (pBestid − xid) + c2 rand2 (gBestd − xid),   (3)

xid = xid + vid,   (4)

where w is the inertia weight, c1 and c2 are the acceleration coefficients, and rand1 and rand2 are two uniformly distributed random numbers independently generated within [0, 1].

In the PSO with a linearly varying inertia weight (PSO-LVIW) [10], the inertia weight is decreased linearly over the generations as

w = wmax − (wmax − wmin) t/T,   (5)
where t is the current generation number and T is a predefined maximum number of generations. The maximal and minimal weights, wmax and wmin, are usually set to 0.9 and 0.4, respectively [10].
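Returning to Section 2.1, the grey relational coefficient (1) and grade (2) are simple enough to compute directly. The following is a minimal sketch (not from the paper) that assumes equal weighting factors γk = 1/n:

```python
def grey_relational_grade(y, xs, zeta=1.0):
    """Grades g(y, xi) of eq. (2), with coefficients from eq. (1).

    y:  reference sequence (y1, ..., yn)
    xs: list of comparative sequences xi = (xi1, ..., xin)
    """
    # deltas[i][k] = |y_k - x_ik|, eq. (1)
    deltas = [[abs(yk - xk) for yk, xk in zip(y, x)] for x in xs]
    d_max = max(max(row) for row in deltas)
    d_min = min(min(row) for row in deltas)
    n = len(y)
    grades = []
    for row in deltas:
        # grey relational coefficients r(y_k, x_ik)
        r = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(r) / n)  # gamma_k = 1/n for all k
    return grades
```

With ζ = 1, every grade lies in [ζ/(1 + ζ), 1] = [0.5, 1], and a sequence identical to the reference gets the maximal grade 1.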
The PSO with time-varying acceleration coefficients (PSO-TVAC) [11] is another widely used strategy for improving the performance of PSO. With a large cognitive component and a small social component at the beginning, particles are allowed to move around the search space instead of moving towards the population best. On the other hand, a small cognitive component and a large social component allow the particles to converge to the global optimum in the latter part of the evolutionary process. This modification can be mathematically represented as follows:

c1 = (c1f − c1i) t/T + c1i,   (6)

c2 = (c2f − c2i) t/T + c2i,   (7)

where c1i, c1f, c2i, and c2f are constants. The best ranges for c1 and c2 suggested in [11] are 2.5–0.5 and 0.5–2.5, respectively. In other words, a larger c1 and a smaller c2 are set at the beginning, and the two are gradually reversed during the search.
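The two time-varying schedules of (5)–(7) can be sketched directly; the default values below follow the settings quoted in the text (wmax = 0.9, wmin = 0.4, and the 2.5–0.5 / 0.5–2.5 ranges of [11]):

```python
def pso_lviw_weight(t, T, w_max=0.9, w_min=0.4):
    # eq. (5): inertia weight decreasing linearly from w_max to w_min
    return w_max - (w_max - w_min) * t / T

def tvac_coefficients(t, T, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    # eqs. (6)-(7): c1 decreases 2.5 -> 0.5 while c2 increases 0.5 -> 2.5
    c1 = (c1f - c1i) * t / T + c1i
    c2 = (c2f - c2i) * t / T + c2i
    return c1, c2
```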
Different from the above two time-varying schemes, the APSO [3] utilizes an evolutionary state estimation (ESE) approach to identify one of four evolutionary states, i.e., exploration, exploitation, convergence, and jumping out. The ESE approach involves the following three main steps.

(1) Calculate the mean distance of each particle i to all the other particles, where the mean distance is defined as
di = (1/(N − 1)) Σ_{j=1, j≠i}^{N} √( Σ_{d=1}^{D} (xid − xjd)² ),   (8)

where N is the swarm size.
(2) Calculate the evolutionary factor f as

f = (dg − dmin)/(dmax − dmin) ∈ [0, 1],   (9)

where dg represents the mean distance of the globally best particle, and dmax and dmin are the maximum and minimum mean distances, respectively.
(3) Classify the evolutionary factor f into one of the four evolutionary states with a fuzzy classification option. The membership functions for the four evolutionary states are given in Fig. 1.

Once the evolutionary state is identified, the algorithm parameters are adjusted according to the identified state. For example, the inertia weight can be obtained by the following sigmoid mapping:

w(f) = 1/(1 + 1.5 e^(−2.6f)) ∈ [0.4, 0.9].   (10)
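A minimal sketch of the ESE computation in steps (1)–(3), i.e., eqs. (8)–(10); it assumes Euclidean distances between position vectors and omits the fuzzy classification of step (3):

```python
import math

def evolutionary_factor(positions, g_index):
    """Eqs. (8)-(9): mean pairwise distances, then
    f = (d_g - d_min) / (d_max - d_min)."""
    N = len(positions)

    def mean_distance(i):
        total = 0.0
        for j in range(N):
            if j != i:
                total += math.sqrt(sum((a - b) ** 2
                                       for a, b in zip(positions[i], positions[j])))
        return total / (N - 1)

    d = [mean_distance(i) for i in range(N)]
    d_max, d_min = max(d), min(d)
    if d_max == d_min:
        return 0.0  # fully converged swarm
    return (d[g_index] - d_min) / (d_max - d_min)

def apso_inertia_weight(f):
    # eq. (10): sigmoid mapping of f into [0.4, 0.9]
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))
```

Note that w(0) = 1/2.5 = 0.4 exactly, and w(1) ≈ 0.9, matching the stated range [0.4, 0.9].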
Fig. 2. Relation between grey relational grade and inertia weight.
wi = fw(gi) = −((wmax − wmin)/(gmax − gmin)) gi + (wmax gmax − wmin gmin)/(gmax − gmin),   (11)

c2i = fc(gi) = ((cmax − cmin)/(gmax − gmin)) gi + (cmin gmax − cmax gmin)/(gmax − gmin),   (12)

c1i = cmax + cmin − c2i,   (13)

where the subscripts max and min represent the maximal and minimal values of the corresponding parameter, respectively, and fw and fc are the transformation functions for the inertia weight w and the acceleration coefficient c2, respectively. Figs. 2 and 3 show two possible transformation functions for the grey PSO.
As can be seen from (11)–(13), each particle has its own inertia weight and acceleration coefficients, whose values depend on the corresponding grey relational grade. Since the relational grade of a particle varies over the generations, those parameters are also time-varying. Even in the same generation, those parameters may differ between particles. With this modification, the updating rule for the velocity of the ith particle on dimension d becomes

vid = wi vid + c1i rand1 (pBestid − xid) + c2i rand2 (gBestd − xid),   (14)
where wi is the inertia weight, c1i and c2i are the acceleration coefficients, and rand1 and rand2 are two uniformly distributed random numbers independently generated within [0, 1].
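The per-particle update rule (14) differs from the standard update (3) only in that w, c1, and c2 are indexed by particle; a sketch:

```python
import random

def update_particle(x, v, p_best, g_best, w_i, c1_i, c2_i):
    """Eq. (14) with per-particle parameters: each particle carries its own
    w_i, c1_i, c2_i derived from its grey relational grade."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w_i * v[d]
              + c1_i * r1 * (p_best[d] - x[d])
              + c2_i * r2 * (g_best[d] - x[d]))
        new_v.append(vd)
        new_x.append(x[d] + vd)  # position update as in eq. (4)
    return new_x, new_v
```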
gi = g(gBest, Xi) = (1/D) Σ_{d=1}^{D} rid,   (16)

where rid is the grey relational coefficient between gBest and Xi on dimension d. The evolutionary factor of the GEA is then defined as the mean relational grade over the swarm,

fGEA = (1/N) Σ_{i=1}^{N} gi,   (17)

which, from the bounds of the relational grade, satisfies

fGEA ∈ [ζ/(1 + ζ), 1].   (18)
w(fGEA) = { 0.4, if fGEA ≥ 0.75; 1/(1 + 1.5 e^(−2.6 h(fGEA))), elsewhere,   (19)

c1(fGEA) = { 2 + 0.5 cos[(4fGEA − 3)π], if fGEA ≥ 0.75; 2.5, elsewhere,   (20)

c2(fGEA) = { 2 − 0.5 cos[(4fGEA − 3)π], if fGEA ≥ 0.75; 1.5, elsewhere,   (21)

where h(fGEA) denotes the normalized evolutionary factor.
Fig. 5(b) shows the transformation function for the acceleration coefficients with ζ = 1. In the figure, if fGEA ≥ 0.75, then c1(fGEA) = 2 + 0.5 cos[(4fGEA − 3)π]; otherwise, c1(fGEA) = 2.5.
4.3. Procedure of GEA-based PSO
The following procedure can be used for implementing the proposed GEA-based PSO algorithm.
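The procedure itself is not reproduced in this excerpt, so the following is a hedged sketch of one possible GEA-based PSO loop. The grade and fGEA computations follow (1), (16), and (17) with ζ = 1; the inertia-weight and c2 mappings, and the normalization h(fGEA) = 2fGEA − 1, are illustrative assumptions, with only the c1 rule for fGEA ≥ 0.75 taken from the text:

```python
import math
import random

def gea_pso(objective, dim, bounds, n_particles=20, T=1000, zeta=1.0):
    """Sketch of a GEA-based PSO loop (minimization). The w and c2 mappings
    below are assumptions for illustration, not the paper's exact forms."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    p_best = [x[:] for x in X]
    p_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: p_val[i])
    g_best, g_val = p_best[g][:], p_val[g]

    for _ in range(T):
        # grey relational grades of every particle w.r.t. gBest, eqs. (1), (16)
        deltas = [[abs(gd - xd) for gd, xd in zip(g_best, x)] for x in X]
        d_max = max(max(row) for row in deltas) or 1.0  # guard a converged swarm
        d_min = min(min(row) for row in deltas)
        grades = [sum((d_min + zeta * d_max) / (dd + zeta * d_max) for dd in row) / dim
                  for row in deltas]
        f_gea = sum(grades) / n_particles  # eq. (17)

        # assumed parameter mappings driven by f_gea (zeta = 1 gives f_gea in [0.5, 1])
        w = 0.4 if f_gea >= 0.75 else 1.0 / (1.0 + 1.5 * math.exp(-2.6 * (2 * f_gea - 1)))
        c1 = 2.0 + 0.5 * math.cos((4 * f_gea - 3) * math.pi) if f_gea >= 0.75 else 2.5
        c2 = 4.0 - c1  # assumption: c1 + c2 held constant at 4

        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (p_best[i][d] - X[i][d])
                           + c2 * r2 * (g_best[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            val = objective(X[i])
            if val < p_val[i]:
                p_best[i], p_val[i] = X[i][:], val
                if val < g_val:
                    g_best, g_val = X[i][:], val
    return g_best, g_val
```

Here the swarm-wide parameters are recomputed once per generation from the grey evolutionary analysis, which is the central idea of the proposed method.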
Fig. 5. Transformation functions for ζ = 1. (a) Inertia weight. (b) Acceleration coefficients.
Table 1
Dimensions, search spaces, global optimum values, and acceptance levels of test functions.

Test function   Dimension   Search space       Global optimum   Acceptance
Unimodal
f1              30          [−100, 100]^D      0                0.01
f2              30          [−10, 10]^D        0                0.01
f3              30          [−100, 100]^D      0                100
f4              30          [−10, 10]^D        0                100
f5              30          [−100, 100]^D      0                0
f6              30          [−1.28, 1.28]^D    0                0.01
Multimodal
f7              30          [−500, 500]^D      −12569.5         −10,000
f8              30          [−5.12, 5.12]^D    0                50
f9              30          [−5.12, 5.12]^D    0                50
f10             30          [−32, 32]^D        0                0.01
f11             30          [−600, 600]^D      0                0.01
f12             30          [−50, 50]^D        0                0.01
5. Simulation results
In order to demonstrate the search performance of the proposed GEA-based PSO algorithm, twelve benchmark test functions selected from [9,25] are used for verification. The GEA-based PSO (GEA-PSO) is also compared with the APSO [3], the grey PSO [25], and two well-known PSO algorithms, the PSO-LVIW [10] and the HPSO-TVAC [11]. As mentioned in [3], the APSO generally outperforms the comprehensive-learning PSO (CLPSO) [9], the dynamic multi-swarm PSO (DMS-PSO) [18], the von Neumann topological structure PSO (VPSO) [21], and the fully informed particle swarm (FIPS) algorithm [29] on most of the test functions. Therefore, the detailed numerical results and convergence characteristics of those four PSO variants are not shown in this study.
Among the twelve test functions, the first six are unimodal and the rest are multimodal. The corresponding dimensions, search spaces, global optimum values, and acceptance levels of the test functions are listed in Table 1.
(1) Sphere model:

f1(X) = Σ_{i=1}^{D} xi².

(2) Schwefel's problem 2.22:

f2(X) = Σ_{i=1}^{D} |xi| + Π_{i=1}^{D} |xi|.

(3) Schwefel's problem 1.2:

f3(X) = Σ_{i=1}^{D} ( Σ_{j=1}^{i} xj )².

(4) Generalized Rosenbrock's function:

f4(X) = Σ_{i=1}^{D−1} [100(x_{i+1} − xi²)² + (xi − 1)²].

In addition, the generalized penalized function f12 uses the auxiliary definitions

yi = 1 + (1/4)(xi + 1)

and

u(xi, a, k, m) = { k(xi − a)^m, xi > a; 0, −a ≤ xi ≤ a; k(−xi − a)^m, xi < −a.
In the simulations, all the PSO algorithms were tested using the same population size of 20, a value which is widely adopted in PSO [3,25]. In order to reduce statistical errors, each algorithm was run for 30 independent trials on every test function, with the same maximum number of generations, i.e., T = 10,000, for each trial, and the mean results are used in the comparison. As for the APSO, the inertia weight is initialized to 0.9, and c1 and c2 to 2.0. These settings are the same as the initialization given in [3]. As for the PSO-LVIW, the inertia weight was set to change from 0.9 (wmax) to 0.4 (wmin) over the generations. The boundary values of the acceleration coefficients for the HPSO-TVAC were set as c1,0 = 2.5, c1,f = 0.5, c2,0 = 0.5, and c2,f = 2.5, which are the best ranges for c1 and c2 suggested in [11]. The best algorithm configuration of the grey PSO given in [25] is as follows. The maximal and minimal inertia weights were set as wmax = 0.9 and wmin = 0.4, respectively. The boundary values for the acceleration coefficient c2 were cmin = 1.5 and cmax = 2.5. In addition, the distinguishing coefficient was set as ζ = 1.0 and the weighting factor as γk = 1/D for k = 1, 2, . . ., D. The last two parameters are also utilized in the proposed GEA-based PSO.

All the programs, coded in Matlab version R14, were executed on a personal computer with an Intel Pentium Dual CPU @ 1.60 GHz, 2.0 GB of RAM, and the Windows XP SP2 operating system.
(5) Step function:

f5(X) = Σ_{i=1}^{D} (⌊xi + 0.5⌋)².

(6) Quartic function with noise:

f6(X) = Σ_{i=1}^{D} i·xi⁴ + random[0, 1).

(7) Generalized Schwefel's problem 2.26:

f7(X) = Σ_{i=1}^{D} −xi sin(√|xi|).

(8) Generalized Rastrigin's function:

f8(X) = Σ_{i=1}^{D} [xi² − 10 cos(2πxi) + 10].

(9) Noncontinuous Rastrigin's function:

f9(X) = Σ_{i=1}^{D} [yi² − 10 cos(2πyi) + 10],

where yi = xi if |xi| < 1/2, and yi = round(2xi)/2 if |xi| ≥ 1/2.

(10) Ackley's function:

f10(X) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} xi²)) − exp((1/D) Σ_{i=1}^{D} cos(2πxi)) + 20 + e.

(11) Generalized Griewank's function:

f11(X) = (1/4000) Σ_{i=1}^{D} xi² − Π_{i=1}^{D} cos(xi/√i) + 1.

(12) Generalized penalized function:

f12(X) = (π/30){10 sin²(πy1) + Σ_{i=1}^{D−1} (yi − 1)²[1 + 10 sin²(πy_{i+1})] + (yD − 1)²} + Σ_{i=1}^{D} u(xi, 10, 100, 4),

where yi and u(xi, a, k, m) are as defined earlier.
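For reference, two of the benchmark functions above (the sphere model f1 and the generalized Rastrigin function f8) and the 30-trial averaging protocol can be written as follows; `run_trials` and its `optimizer` argument are illustrative names, not from the paper:

```python
import math

def sphere(x):
    """f1: sum of squares; global optimum 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def rastrigin(x):
    """f8: generalized Rastrigin's function; global optimum 0 at the origin."""
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def run_trials(optimizer, objective, n_trials=30):
    """Mean best value over independent trials, mirroring the comparison
    protocol (30 runs per algorithm per function)."""
    results = [optimizer(objective) for _ in range(n_trials)]
    return sum(results) / n_trials
```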
Fig. 6. Search behaviours of the GEA-based PSO on the Sphere model f1. (a) Evolutionary factor: the first 50 generations. (b) Evolutionary factor for a middle run. (c) Mean value of the inertia weight: the first 50 generations. (d) Mean value of the inertia weight for a middle run. (e) Mean value of the acceleration coefficients: the first 50 generations. (f) Mean value of the acceleration coefficients for a middle run.
Fig. 7. Search behaviours of the GEA-based PSO on the generalized Rastrigin's function f8. (a) Evolutionary factor: the first 50 generations. (b) Evolutionary factor for a middle run. (c) Mean value of the inertia weight: the first 50 generations. (d) Mean value of the inertia weight for a middle run. (e) Mean value of the acceleration coefficients: the first 50 generations. (f) Mean value of the acceleration coefficients for a middle run.
Table 2
Search result comparisons on the 12 test functions.

Test function      PSO-LVIW       HPSO-TVAC      APSO            Grey PSO       GEA-PSO
f1   Mean          3.16×10^−52    9.36×10^−13    2.31×10^−149    0              0
     Std. dev.     6.11×10^−52    3.62×10^−12    4.95×10^−149    0              0
f2   Mean          2.04×10^−29    5.33×10^−6     4.18×10^−79     0              0
     Std. dev.     4.05×10^−29    9.51×10^−6     9.96×10^−79     0              0
f3   Mean          1.11×10^1      2.66×10^1      1.71×10^−10     0              0
     Std. dev.     1.27×10^1      6.98×10^1      2.87×10^−10     0              0
f4   Mean          26.93          63.46          2.72            21.95          21.67
     Std. dev.     30.33          31.25          4.03            1.46×10^−2     6.06
f5   Mean          0              0              0               0              0
     Std. dev.     0              0              0               0              0
f6   Mean          8.29×10^−3     7.50×10^−3     5.01×10^−3      2.96×10^−3     1.50×10^−3
     Std. dev.     1.74×10^−3     1.86×10^−2     1.19×10^−3      2.55×10^−3     1.04×10^−3
f7   Mean          −10243.02      −10316.36      −11259.91       −10477.75      −10709.28
     Std. dev.     205.60         264.32         2.16×10^−11     268.79         522.11
f8   Mean          40.11          37.48          7.57×10^−15     0              0
     Std. dev.     8.64           9.67           1.01×10^−14     0              0
f9   Mean          33.17          38.20          8.86×10^−16     0              0
     Std. dev.     15.34          8.29           3.01×10^−15     0              0
f10  Mean          1.15×10^−14    9.57×10^−6     1.11×10^−14     8.88×10^−16    8.88×10^−16
     Std. dev.     3.55×10^−15    3.68×10^−5     5.65×10^−15     0              0
f11  Mean          2.79×10^−3     3.29×10^−3     2.03×10^−3      0              0
     Std. dev.     4.15×10^−3     4.23×10^−3     3.94×10^−3      0              0
f12  Mean          2.28×10^−32    2.76×10^−2     8.24×10^−40     2.26×10^−1     7.79×10^−4
     Std. dev.     1.64×10^−32    6.15×10^−2     6.18×10^−39     5.73×10^−2     3.59×10^−4
Fig. 8. Convergence performance on the 12 test functions. (a) f1 . (b) f2 . (c) f3 . (d) f4 . (e) f5 . (f) f6 . (g) f7 . (h) f8 . (i) f9 . (j) f10 . (k) f11 . (l) f12 .
Fig. 8. (Continued.)
Table 3
Convergence speed comparisons on the 12 test functions.

Test function        PSO-LVIW   HPSO-TVAC   APSO     Grey PSO   GEA-PSO
f1   Mean epochs     5495.4     2327.5      1065.8   1339.5     437.5
     Time (s)        0.9119     0.5354      0.5086   0.3723     0.1547
f2   Mean epochs     5289.4     2336.8      1081.0   1410.5     513.1
     Time (s)        0.9118     0.5683      0.5293   0.3981     0.1818
f3   Mean epochs     7470.5     3571.1      1996.5   5315.9     1403.5
     Time (s)        2.3859     0.8243      0.9736   1.4810     0.5296
f4   Mean epochs     5193.3     2490.1      1058.3   1601.2     422.4
     Time (s)        0.9953     0.5208      0.4565   0.3879     0.1544
f5   Mean epochs     5649.2     2447.2      795.3    1482.1     474.6
     Time (s)        1.0309     0.4535      0.3421   0.3649     0.1708
f6   Mean epochs     9253.7     5589.1      4959.0   3587.2     2550.4
     Time (s)        2.9209     1.7359      3.2196   1.3325     1.2656
f7   Mean epochs     3279.3     1480.8      955.8    6740.0     3001.8
     Time (s)        1.0953     0.4992      0.5601   2.2354     1.8636
f8   Mean epochs     5885.5     2776.1      1087.2   1738.1     1557.0
     Time (s)        1.2939     0.6105      0.4908   0.4446     0.7139
f9   Mean epochs     7715.3     3364.1      926.2    1589.0     936.0
     Time (s)        2.2411     0.9206      0.4646   0.4531     0.4859
f10  Mean epochs     5774.0     2642.7      2846.5   2231.9     554.2
     Time (s)        1.2505     0.5511      1.3141   0.5924     0.2346
f11  Mean epochs     5686.5     2430.9      1027.1   1394.1     501.3
     Time (s)        1.5537     0.6483      0.5747   0.4449     0.2376
f12  Mean epochs     5470.6     2580.9      3156.5   7792.6     1261.7
     Time (s)        1.7837     0.9268      1.8770   2.5877     0.6613
Table 4
Comparisons between the GEA-PSO and the other PSOs on t-tests.

Test function                   PSO-LVIW   HPSO-TVAC   APSO      Grey PSO
f1   t-Value                    2.0031     1.0203      2.0394    0
     P-Value                    0.0649     0.3249      0.0607    0
f2   t-Value                    1.5958     1.4676      1.6669    0
     P-Value                    0.1450     0.1643      0.1177    0
f3   t-Value                    1.2288     1.4740      1.1480    0
     P-Value                    0.4349     0.1626      0.2947    0
f4   t-Value                    0.4495     5.1736      7.1209    1.7050
     P-Value                    0.6600     0.0001      0.0057    0.1103
f5   t-Value                    0          0           0         0
     P-Value                    0          0           0         0
f6   t-Value                    20.1338    16.6182     10.8430   4.2097
     P-Value                    0.0000     0.0000      0.0000    0.0009
f7   t-Value                    14.0018    8.2280      15.4487   17.3961
     P-Value                    0.0008     0.0000      0.0000    0.0000
f8   t-Value                    12.8690    4.1126      1.7145    0
     P-Value                    0.0000     0.0011      0.1085    0
f9   t-Value                    5.4787     7.0214      1.8889    0
     P-Value                    0.0028     0.0000      0.0798    0
f10  t-Value                    11.6190    1.0071      10.2324   0
     P-Value                    0.0000     0.3310      0.0020    0
f11  t-Value                    2.6056     3.0054      2.1277    0
     P-Value                    0.0207     0.0095      0.0516    0
f12  t-Value                    16.4417    1.7202      28.1059   11.1414
     P-Value                    0.0000     0.1074      0.0000    0.0000
+1 (better)                     7          6           3         3
0 (same)                        4          6           6         9
−1 (worse)                      1          0           3         0
General merit over contender    6          6           0         3

1. The value of t with 29 degrees of freedom is significant at α = 0.05 by a two-tailed test. 2. Boldface font indicates that the GEA-based PSO performs significantly better than the compared algorithm. 3. Boldface and italic font indicates that the GEA-based PSO performs significantly worse than the compared algorithm.
Table 5
Search result comparisons for different distinguishing coefficients.

ζ                0.05          0.10          0.15          0.20          0.25          0.30          0.35          0.40          0.45          0.50
f1   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f2   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f3   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f4   Mean        21.74         21.76         21.75         21.74         21.78         21.73         21.79         21.69         21.76         21.71
     Std. dev.   0.72          9.47          11.26         11.08         0.54          11.75         4.47          0.51          0.54          4.60
f5   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f6   Mean        5.97×10^−3    4.94×10^−3    4.94×10^−3    4.03×10^−3    3.33×10^−3    3.86×10^−3    2.54×10^−3    2.76×10^−3    2.92×10^−3    2.42×10^−3
     Std. dev.   3.41×10^−3    2.61×10^−3    2.63×10^−3    2.59×10^−3    2.21×10^−3    2.52×10^−3    2.29×10^−3    2.11×10^−3    2.22×10^−3    2.00×10^−3
f7   Mean        −10611.20     −10679.07     −10667.02     −10631.74     −10593.74     −10642.60     −10716.38     −10581.40     −10568.26     −10710.02
     Std. dev.   383.09        330.89        335.15        309.58        398.84        400.47        371.49        361.19        335.54        426.67
f8   Mean        0             4.99×10^−3    1.81×10^−3    0             0             0             0             0             0             0
     Std. dev.   0             1.83×10^−2    9.09×10^−3    0             0             0             0             0             0             0
f9   Mean        3.87×10^−4    0             0             0             0             0             0             0             0             0
     Std. dev.   1.34×10^−3    0             0             0             0             0             0             0             0             0
f10  Mean        8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f11  Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f12  Mean        1.88×10^−3    1.81×10^−3    1.89×10^−3    1.32×10^−3    8.87×10^−4    8.06×10^−4    8.64×10^−4    8.79×10^−4    8.15×10^−4    8.13×10^−4
     Std. dev.   7.46×10^−3    6.30×10^−3    5.92×10^−3    4.16×10^−3    3.68×10^−3    2.36×10^−3    3.67×10^−3    2.65×10^−3    3.64×10^−4    1.74×10^−3

ζ                0.55          0.60          0.65          0.70          0.75          0.80          0.85          0.90          0.95          1.00
f1   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f2   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f3   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f4   Mean        21.72         21.69         21.65         21.66         21.68         21.67         21.69         21.68         21.67         21.67
     Std. dev.   0.72          9.47          11.26         11.08         0.54          11.75         4.47          0.51          0.54          4.60
f5   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f6   Mean        1.95×10^−3    1.91×10^−3    2.60×10^−3    2.17×10^−3    2.03×10^−3    2.02×10^−3    2.37×10^−3    1.90×10^−3    1.64×10^−3    1.50×10^−3
     Std. dev.   1.72×10^−3    1.64×10^−3    2.16×10^−3    1.73×10^−3    1.71×10^−3    1.99×10^−3    1.93×10^−3    1.64×10^−3    1.37×10^−3    1.04×10^−3
f7   Mean        −10712.90     −10657.24     −10709.06     −10593.34     −10375.85     −10562.53     −10492.82     −10429.44     −10638.50     −10709.21
     Std. dev.   368.57        354.85        389.68        341.67        248.10        281.91        398.60        431.74        395.35        522.11
f8   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f9   Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f10  Mean        8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16   8.88×10^−16
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f11  Mean        0             0             0             0             0             0             0             0             0             0
     Std. dev.   0             0             0             0             0             0             0             0             0             0
f12  Mean        7.92×10^−4    7.81×10^−4    7.84×10^−4    8.67×10^−4    7.78×10^−4    7.65×10^−3    7.54×10^−4    7.85×10^−4    7.92×10^−4    7.79×10^−4
     Std. dev.   2.68×10^−3    1.84×10^−3    1.67×10^−3    1.44×10^−4    1.65×10^−3    1.66×10^−3    1.64×10^−3    2.09×10^−4    1.77×10^−3    3.59×10^−4

Boldface and italic font indicates that the GEA-based PSO with the corresponding distinguishing coefficient performs better than or equal to the PSO-LVIW, HPSO-TVAC, APSO, and Grey PSO.
The symbols +1, 0, and −1 in Table 4 denote that the GEA-based PSO performs significantly better than, almost the same as, and significantly worse than the compared algorithm, respectively. The row "General merit" gives the difference between the number of +1s and the number of −1s, which shows an overall comparison between the two algorithms. Take the comparison between the GEA-based PSO and the APSO for instance. The former significantly outperformed the latter on three functions (f6, f10, and f11), performed as well as the latter on six functions (f1, f2, f3, f5, f8, and f9), and performed worse on three functions (f4, f7, and f12). That yields a General merit figure of 3 − 3 = 0, indicating that the GEA-based PSO generally performs about the same as the APSO in terms of solution accuracy. Although the GEA-based PSO performed slightly worse on some functions,
Table 6
Convergence speed comparisons for different distinguishing coefficients.

ζ               0.05       0.10       0.15       0.20       0.25       0.30       0.35       0.40       0.45       0.50
f1   Epochs     4032.8     2959.3     2169.3     1314.8     1318.7     1139.3     877.7      1028.6     983.1      842.2
     Time (s)   1.4682 x   1.0477 x   0.7532 x   0.4744 x   0.4844 x   0.4492 x   0.4271 x   0.3712 x   0.4010 x   0.3347 x
f2   Epochs     3070.5     2019.8     1810.8     1297.5     1372.2     1008.2     1036.0     984.1      1086.3     857.0
     Time (s)   1.1817 x   0.7433 x   0.6404 x   0.5089 x   0.5256 x   0.4139 ≈   0.5063 x   0.3601 x   0.4182 x   0.3749 x
f3   Epochs     4357.3     3373.5     2921.3     2496.6     2289.5     2259.0     2058.5     1809.2     1933.2     1703.9
     Time (s)   1.7739 x   1.3406 x   1.1574 x   1.0216 x   0.9378 x   1.0482 x   1.0898 x   0.7388 x   0.8270 x   0.7406 x
f4   Epochs     2762.6     1962.9     1603.4     1467.7     940.0      1139.2     771.2      808.6      767.2      652.3
     Time (s)   1.0467 x   0.7246 x   0.5853 x   0.5848 x   0.3571 x   0.4900 x   0.3978 x   0.3357 x   0.3149 x   0.2772 x
f5   Epochs     3800.2     2320.1     1721.8     1405.8     1275.7     971.4      935.8      837.5      764.9      831.6
     Time (s)   1.4568 x   0.8757 x   0.6365 x   0.5377 x   0.4849 x   0.4167 x   0.4482 x   0.3189 x   0.3032 x   0.3468 x
f6   Epochs     6894.2     5450.6     5510.3     4649.3     4207.1     4114.7     3635.9     3227.4     3242.1     2582.6
     Time (s)   3.5700 x   2.7593 x   2.7097 x   2.4080 x   2.1524 x   2.2851 x   2.1759 x   1.6321 x   1.7069 x   1.4113 x
f7   Epochs     4986.5     4483.3     3685.5     3341.9     2915.5     3460.2     2858.0     3274.2     2839.4     2871.8
     Time (s)   2.7318 x   2.4529 x   2.3246 x   2.1083 x   1.8785 ≈   2.1701 x   2.0324 x   1.8130 ≈   1.6537 +   1.7244 +
f8   Epochs     7721.2     5546.2     4541.2     4970.3     3865.8     3552.5     3225.5     2152.5     2317.0     2311.8
     Time (s)   3.1823 x   2.2561 x   1.8046 x   2.0779 x   1.6743 x   1.6555 x   1.5276 x   0.9135 x   1.0232 x   1.0975 x
f9   Epochs     6984.0     6311.2     5229.2     5323.3     5074.0     4358.3     3875.2     4054.2     3623.6     1908.0
     Time (s)   3.3205 x   2.8680 x   2.3616 x   2.5759 x   2.4896 x   2.2550 x   2.2520 x   1.9540 x   1.8941 x   0.9436 x
f10  Epochs     3624.2     2407.0     1793.1     1234.9     1337.3     1063.6     1055.2     755.2      714.6      740.4
     Time (s)   1.5395 x   1.0022 x   0.7535 x   0.5257 x   0.5823 x   0.4932 x   0.5588 x   0.3199 x   0.3456 x   0.3519 x
f11  Epochs     4226.2     2571.5     1850.7     1388.8     1259.5     1089.5     978.9      1059.1     881.4      842.5
     Time (s)   2.0306 x   1.2109 x   0.8720 x   0.6699 x   0.6174 x   0.5702 x   0.5508 x   0.4683 x   0.4190 x   0.4185 x
f12  Epochs     5126.2     4084.4     3171.9     2455.0     2564.0     2027.9     2030.2     1799.3     1882.4     1500.9
     Time (s)   2.4741 x   1.8854 x   1.5452 x   1.2165 x   1.3455 x   1.1049 x   1.1165 x   0.9463 x   1.0260 x   0.8260 x
+               0          0          0          0          0          0          0          0          1          1
x               12         12         12         12         11         12         12         11         11         11
≈               0          0          0          0          1          0          0          1          0          0

ζ               0.55       0.60       0.65       0.70       0.75       0.80       0.85       0.90       0.95       1.00
f1   Epochs     653.3      795.8      704.7      617.1      615.5      487.1      449.3      543.4      474.8      437.5
     Time (s)   0.2432 x   0.3108 x   0.2781 x   0.2069 x   0.2232 x   0.1685 x   0.1569 ≈   0.1892 x   0.1659 x   0.1547
f2   Epochs     904.4      996.7      806.3      666.3      939.1      766.2      558.6      464.1      815.8      513.1
     Time (s)   0.3610 x   0.4258 x   0.3471 x   0.2444 x   0.3873 x   0.2940 x   0.2089 x   0.1807 ≈   0.3208 x   0.1818
f3   Epochs     1542.5     1937.1     1984.2     2097.2     1905.0     1542.6     1468.8     1686.9     1735.8     1403.5
     Time (s)   0.6829 x   0.8872 x   0.9246 x   0.8299 x   0.7904 x   0.6466 x   0.6064 x   0.7060 x   0.6987 x   0.5296
f4   Epochs     797.7      556.7      602.5      519.0      554.6      456.2      525.1      581.1      428.8      422.4
     Time (s)   0.3321 x   0.2316 x   0.2478 x   0.1832 x   0.2207 x   0.1744 x   0.2001 x   0.2241 x   0.1649 x   0.1544
f5   Epochs     682.9      548.1      548.1      586.7      576.5      590.8      543.7      500.5      456.5      474.6
     Time (s)   0.2814 x   0.2241 x   0.2263 x   0.2066 x   0.2204 x   0.2165 x   0.1996 x   0.1829 x   0.1666 ≈   0.1708
f6   Epochs     2704.6     2356.7     2954.2     2773.6     2408.5     2567.5     2437.4     2694.5     1821.7     2550.4
     Time (s)   1.4694 x   1.2752 ≈   1.6199 x   1.3426 x   1.2495 ≈   1.2761 ≈   1.2191 ≈   1.3567 x   0.9154 +   1.2656
f7   Epochs     2164.9     2933.9     3045.4     2409.8     2726.5     2425.5     2801.2     2460.4     3011.6     3001.8
     Time (s)   1.2472 +   1.7328 +   1.8067 ≈   1.4070 +   1.6781 +   1.4756 +   1.6428 +   1.4317 +   1.8698 ≈   1.8636
f8   Epochs     2122.2     2056.7     1667.9     1307.6     1288.8     1934.8     1782.2     3566.5     891.0      1557.0
     Time (s)   0.9557 x   0.9644 x   0.9703 x   0.5474 +   0.5434 +   0.8672 x   0.7425 ≈   1.4873 x   0.4082 +   0.7139
f9   Epochs     2219.9     2396.5     1303.2     2123.6     1268.0     1088.6     2052.0     2280.2     1562.3     936.0
     Time (s)   1.1839 x   1.2804 x   0.6653 x   1.0759 x   0.6253 x   0.5201 x   1.0323 x   1.1778 x   0.8109 x   0.4859
f10  Epochs     794.1      659.3      532.8      603.3      571.7      599.1      537.7      411.6      487.7      554.2
     Time (s)   0.3664 x   0.2946 x   0.2383 ≈   0.2497 x   0.2469 x   0.2504 x   0.2242 ≈   0.1704 +   0.2028 +   0.2346
f11  Epochs     815.7      676.4      690.2      636.7      610.1      615.8      499.5      628.5      525.6      501.3
     Time (s)   0.3892 x   0.3215 x   0.3203 x   0.3003 x   0.2928 x   0.2884 x   0.2370 ≈   0.2952 x   0.2487 ≈   0.2376
f12  Epochs     2188.0     1548.5     1623.5     1596.5     1160.5     1293.2     1159.3     1297.6     1431.8     1261.7
     Time (s)   1.2037 x   0.8555 x   0.7873 x   0.8336 x   0.6253 +   0.6662 ≈   0.6064 +   0.6787 ≈   0.7479 ≈   0.6613
+               1          1          0          2          3          1          2          2          3          –
x               11         10         10         10         8          9          5          8          6          –
≈               0          1          2          0          1          2          5          2          3          –

Boldface and italic font indicates that the GEA-based PSO with the corresponding distinguishing coefficient converges faster than the PSO-LVIW, HPSO-TVAC, APSO, and Grey PSO. +, x, and ≈ denote that the convergence speed with the corresponding distinguishing coefficient is faster than, slower than, and similar to that with the distinguishing coefficient of 1.00, respectively.
coefficient of 1.00 yields the fastest convergence speed. It can therefore be concluded that a distinguishing coefficient of 1.00 is the best setting for the proposed algorithm. That is the main reason why the distinguishing coefficient was set as ζ = 1.0 in the initial parameter setting of the proposed GEA-based PSO.
6. Conclusions
With the help of grey relational analysis, this study has developed a grey evolutionary analysis for the PSO so as to evaluate the evolutionary state of a swarm. In addition, two GEA-based parameter automation approaches were proposed to enable the inertia weight and acceleration coefficients to adapt to the evolutionary state. These two approaches improve the search efficiency and hasten convergence. From the results of empirical simulations on twelve well-known benchmarks, the GEA-based PSO outperforms the APSO, the grey PSO, the HPSO-TVAC, and the PSO-LVIW in solution accuracy and computational time on most of the considered problems. These results also reveal that the proposed GEA-based PSO performs a global search over the search space with faster convergence speed, owing to the algorithm parameters that utilize the information of the grey evolutionary analysis.
Acknowledgment
This work was supported by the National Science Council,
Taiwan, Republic of China, under Grants NSC 100-2221-E-262-002
and NSC 101-2221-E-262-011.
References
[1] J. Kennedy, R.C. Eberhart, A new optimizer using particle swarm theory, in: Proc. 6th Int. Symp. Micro Machine Human Sci., Nagoya, Japan, 1995, pp. 39–43.
[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. IEEE Int. Conf. Neural Netw., Piscataway, NJ, 1995, pp. 1942–1948.
[3] Z.H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39 (6) (2009) 1362–1381.
[4] M.R. Al-Rashidi, M.E. El-Hawary, A survey of particle swarm optimization applications in electric power systems, IEEE Transactions on Evolutionary Computation 13 (4) (2009) 913–918.
[5] C.J. Lin, M.H. Hsieh, Classification of mental task from EEG data using neural networks based on particle swarm optimization, Neurocomputing 72 (4–6) (2009) 1121–1130.
[6] C.H. Liu, Y.Y. Hsu, Design of a self-tuning PI controller for a STATCOM using particle swarm optimization, IEEE Transactions on Industrial Electronics 57 (2) (2010) 702–715.
[7] S.Z. Zhao, M.W. Iruthayarajan, S. Baskar, P.N. Suganthan, Multi-objective robust PID controller tuning using two lbests multi-objective particle swarm optimization, Information Sciences 181 (2011) 3323–3335.
[8] R.J. Wai, J.D. Lee, K.L. Chuang, Real-time PID control strategy for maglev transportation system via particle swarm optimization, IEEE Transactions on Industrial Electronics 58 (2) (2011) 629–646.
[9] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (3) (2006) 281–295.
[10] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proc. IEEE World Congr. Comput. Intell., 1998, pp. 69–73.
[11] A. Ratnaweera, S. Halgamuge, H. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (3) (2004) 240–255.
[12] Y. Shi, R.C. Eberhart, Fuzzy adaptive particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput., vol. 1, 2001, pp. 101–106.
[13] R.C. Eberhart, Y. Shi, Tracking and optimizing dynamic systems with particle swarms, in: Proc. IEEE Congr. Evol. Comput., Seoul, Korea, 2001, pp. 94–97.
[14] P.J. Angeline, Using selection to improve particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput., Anchorage, AK, 1998, pp. 84–89.
[15] Y.P. Chen, W.C. Peng, M.C. Jian, Particle swarm optimization with recombination and dynamic linkage discovery, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37 (6) (2007) 1460–1470.
[16] P.S. Andrews, An investigation into mutation operators for particle swarm optimization, in: Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 1044–1051.
[17] W.J. Zhang, X.F. Xie, DEPSO: hybrid particle swarm with differential evolution operator, in: Proc. IEEE Conf. Syst., Man, Cybern., 2003, pp. 3816–3821.
[18] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with local search, in: Proc. IEEE Congr. Evol. Comput., 2005, pp. 522–528.
[19] H. Liu, Z. Cai, Y. Wang, Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Applied Soft Computing 10 (2) (2010) 629–640.
[20] J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in: Proc. IEEE Congr. Evol. Comput., vol. 3, 1999, pp. 1931–1938.
[21] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proc. IEEE Congr. Evol. Comput., vol. 2, 2002, pp. 1671–1676.
[22] Y. Wang, Z. Cai, A hybrid multi-swarm particle swarm optimization to solve constrained optimization problems, Frontiers of Computer Science in China 3 (1) (2009) 38–52.
[23] J.L. Deng, Introduction to grey system theory, Journal of Grey Systems 1 (1) (1989) 1–24.
[24] M.F. Yeh, K.C. Chang, A self-organizing CMAC network with grey credit assignment, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 36 (3) (2006) 623–635.
[25] M.S. Leu, M.F. Yeh, Grey particle swarm optimization, Applied Soft Computing 12 (9) (2012) 2985–2996.
[26] C. Wen, M.F. Yeh, K.C. Chang, ECG beat recognition using GreyART network, IET Signal Processing 1 (1) (2007) 19–28.
[27] M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58–73.
[28] Y. del Valle, G.K. Venayagamoorthy, S. Mohagheghi, J.C. Hernandez, R.G. Harley, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Transactions on Evolutionary Computation 12 (2) (2008) 171–195.
[29] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Transactions on Evolutionary Computation 8 (3) (2004) 204–210.