Applied Soft Computing 52 (2017) 987–1008
Article history:
Received 22 April 2016
Received in revised form 16 August 2016
Accepted 23 September 2016
Available online 6 October 2016

Keywords:
Particle swarm optimization
Global exploration capability
Convergence speed
Accuracy

Abstract

In this paper, an improved global-best-guided particle swarm optimization with learning operation (IGPSO) is proposed for solving global optimization problems. The particle population is divided into a current population, a historical best population and a global best population, and each population is assigned a corresponding searching strategy. For the current population, a global neighborhood exploration strategy is employed to enhance the global exploration capability. A local learning mechanism is used to improve the local exploitation ability of the historical best population. Furthermore, stochastic learning and opposition based learning operations are applied to the global best population to accelerate convergence and improve optimization accuracy. The effects of the relevant parameters on the performance of IGPSO are assessed. Numerical experiments on some well-known benchmark test functions reveal that the IGPSO algorithm outperforms other state-of-the-art intelligent algorithms in terms of accuracy, convergence speed, and nonparametric statistical significance. Moreover, IGPSO performs well on engineering design optimization problems.

© 2016 Elsevier B.V. All rights reserved.

http://dx.doi.org/10.1016/j.asoc.2016.09.030
1568-4946/© 2016 Elsevier B.V. All rights reserved.
connected, wheel and Von Neumann; (d) hybrid algorithms. A summary of the improvements, theoretical analysis and applications of PSO can be found in [4].

In PSO, all the particles are attracted by the same globally best particle (Gbest) and the swarm has a tendency to converge quickly to the current globally best point. Recently, many researchers have pointed out that the main reason for premature convergence is that PSO does not sufficiently utilize its population's search information to guide the search direction [3]. Although a lot of research on using the local information of the current particle population has been done, little attention has been paid to utilizing the historical best population and the globally best population information independently. As many modified PSO variants cannot escape local minima, and even lack the guidance of population history information, studying the adjustment and application of the Gbest and the historical best particle (Pbest) remains an important and significant research area to be explored. This research aims to make a contribution in this regard.

In this paper, an improved version of PSO (improved global-best-guided particle swarm optimization with learning operation, IGPSO) is proposed. The IGPSO algorithm has the following attributes:

(1) Based on pyramid theory and potential equilibrium theory, the particle population is divided into the current population, the historical best population and the global best population, and each population has its own independent searching strategy. In this pyramid, from top to bottom are: global best, historical best, and the current population.
(2) For the current population, a global neighborhood exploration strategy is presented to enhance the global exploration capability. With this strategy, each particle updates its velocity and position by taking its historical best neighborhood potential information and its globally best neighborhood potential particle as exemplars.
(3) A local learning mechanism is used to improve the local exploitation ability of the historical best population. Different from former PSO algorithms, each historical best particle can learn from better historical best particles independently.
(4) Two unique learning operations, stochastic perturbation and opposition-based learning perturbation, are built to enhance the learning of Gbest and reduce the probability of premature convergence.

All the current particles, historical best particles and the global best particle are considered in our proposed algorithm. The purpose is to balance global search and local search so that trapping into local optima can be avoided. Numerical results reveal that our algorithm is superior to the other compared intelligent algorithms when applied to a well-known benchmark library and engineering design optimization problems.

The remainder of the paper is organized as follows. Related work on PSO is summarized in Section 2. The proposed algorithm IGPSO is elaborated in Section 3. Some experimental studies are presented in Section 4. Finally, concluding remarks are given in Section 5.

2. Background and related work

This section introduces the optimization principle of the canonical PSO algorithm and provides a brief overview of previous work on PSO improvements.

Table 1
Pseudo code of PSO.

Algorithm 1. Particle Swarm Optimization (PSO)
1: Initialize parameters and particles
2: While termination criterion not met do
3:   For i = 1 to PopuSize do
4:     Compute pi and pg
5:     Update velocity v
6:     Update position x
7:     Calculate fitness f(x)
8:   End For
9: End While
10: Return best solution found

2.1. Particle swarm optimization

PSO is a simple population-based algorithm with stochastic components. It was first introduced by Kennedy and Eberhart [1,2]. PSO is inspired by the movement of natural swarms and flocks. It consists of a swarm of particles and each particle represents a potential solution for a problem. PSO tunes its current position toward the global optimum according to two steps: velocity updating and position updating. The two equations are

v_{i,j}^{g+1} = ω·v_{i,j}^{g} + c1·r1·(pibest,j − x_{i,j}^{g}) + c2·r2·(pgbest,j − x_{i,j}^{g})   (1)

x_{i,j}^{g+1} = x_{i,j}^{g} + v_{i,j}^{g+1}   (2)

Here pibest,j is the personal best for particle i in the jth dimension and pgbest,j is the global best in the jth dimension. D is the maximum dimension; vi,j is the jth dimension of the ith particle's velocity and xi,j is the jth dimension of the ith particle's position; c1 and c2 are the acceleration coefficients, i.e. the cognitive and social coefficients, respectively; r1 and r2 are drawn from a uniform distribution on the range [0,1]. The pseudo code of PSO is shown in Table 1.

2.2. Previous related work

Since the introduction of PSO, many modifications have been proposed to reinforce its accuracy and convergence speed. A good literature review of the early work on PSO can be found in [3]. Here, we briefly summarize the contributions to the development of PSO with a focus on learning strategies or frameworks, neighborhood or local search, and history information exploration.

Learning strategies or operations are widely used to improve the global searching capability of the PSO algorithm. Liang et al. proposed a comprehensive learning particle swarm optimizer (CLPSO) for global optimization of multimodal functions [5]. CLPSO uses a novel learning operation to modify the velocity update, which is good at discouraging premature convergence. Wu et al. developed a modified PSO algorithm named adaptive comprehensive learning particle swarm optimization (A-CLPSO) [6], in which a more efficient learning strategy was designed to ameliorate the overall optimization performance of PSO. Zhan et al. developed an adaptive particle swarm optimization (APSO) [7]. APSO incorporates an elitist learning strategy to improve the global searching capability. In 2011, an orthogonal learning particle swarm optimization (OLPSO) was presented by Zhan et al. [8]. In OLPSO, orthogonal experimental design (OED) is used to form an orthogonal learning (OL) strategy. Experimental results demonstrate the effectiveness of the orthogonal learning strategy and the OLPSO algorithm. Wang et al. designed a self-adaptive learning framework to probabilistically steer four PSO velocity updating strategies with different features, and presented a self-adaptive learning based particle swarm optimization (SLPSO) [9]. It has been documented that applying multiple strategies or methods in one algorithmic framework helps an algorithm achieve good performance on different kinds of problems. Huang et al. developed an example-based learning particle swarm optimization for continuous optimization [10]. The idea is that highly capable examples can lead others to make progress, as they can learn from these examples to improve their own capabilities. Li et al. proposed adaptive learning PSO (ALPSO) and a self-learning particle swarm optimizer (SLPSO) [11,12]. In ALPSO, each particle can adjust its search strategy according to the selection ratios of four learning operators in different searching stages. Based on the ALPSO algorithm, the SLPSO algorithm incorporates two new strategies and a biased selection method to improve the performance of PSO. In 2011, an enhanced PSO algorithm called GOPSO was proposed by Wang et al. [13]. GOPSO employs generalized opposition-based learning (GOBL) and Cauchy mutation to prevent the PSO algorithm from falling into local optima. Lim et al. developed several improved PSO algorithms such as an adaptive two-layer particle swarm optimization algorithm with elitist learning strategy (ATLPSO-ELS) [14], teaching and peer-learning PSO (TPLPSO) [15], and bidirectional teaching and peer-learning PSO (BTPLPSO) [16]. These improved PSO algorithms use different learning strategies to help the algorithm escape from premature stagnation. Liu proposed a PSO based simultaneous learning framework for clustering and classification (PSOSLCC) [17]. Cheng designed a social learning particle swarm optimization algorithm [18], in which various learning strategies are employed to enhance the global and local searching ability of PSO.

Neighborhood or local search methods are also important and efficient for improving the PSO algorithm. In the early stage, Suganthan proposed a PSO with a neighborhood operator, in which the local neighborhood size is gradually increased in the search process [19]. Liang et al. provided a dynamic multi-swarm particle swarm optimizer with local search [20]. This algorithm uses the Quasi-Newton method to enhance the local search ability of PSO. In 2008, a new multi-strategy ensemble particle swarm optimization (MEPSO) for dynamic optimization was proposed by Du et al. [21], in which two new strategies, Gaussian local search and differential mutation, were introduced into the searching mechanism. Mendes et al. presented a fully informed particle swarm (FIPS) to solve global optimization problems in 2004 [22]. In the fully informed neighborhood, all neighbors influence each other. Nasir et al. presented a variant of single-objective PSO called the dynamic neighborhood learning particle swarm optimizer (DNLPSO) [23]. In DNLPSO, the exemplar particle is selected from a neighborhood, and the learner particle can learn from the historical information of its neighborhood or sometimes from that of its own. Hu et al. presented a modified particle swarm optimization algorithm in which a dynamic neighborhood strategy is employed to improve the performance of PSO [24].

The exploration of history information can help PSO to reduce the blindness of global search and improve the efficiency of exploration. However, not much work has been devoted to discussing or using history information for the improvement of the PSO algorithm. Tang et al. presented a feedback learning particle swarm optimization algorithm with quadratic inertia weight (FLPSO-QIW) to solve optimization problems [25]. In FLPSO-QIW, each particle's history best fitness information is used to estimate the current search environment, and the feedback fitness information is used to automatically design the learning probabilities for gaining a good convergence speed and search performance. A self-guided PSO with independent dynamic inertia weight settings on each particle was developed by Geng et al. in 2013 [26]. It is self-guided by considering the deviation between the objective value of each particle and that of the best particle, combined with the difference of the objective value of each particle's best position in two consecutive generations.

In recent years, other proposed PSO algorithms such as binary particle swarm optimization [27], particle swarm optimization with time-varying acceleration coefficients [28], adaptive search diversification in PSO (ADS-PSO) [29], self-adaptive check and repair operator-based particle swarm optimization [30], and the parallel comprehensive learning particle swarm optimizer (PCLPSO) [31] have attracted a lot of attention.

3. Improved global-best-guided PSO with learning operation (IGPSO)

In this section, we introduce an improved version of the PSO algorithm, namely improved global-best-guided PSO with learning operation (IGPSO). Three aspects of improvement are put forward in IGPSO: a global neighborhood exploration strategy, a local learning mechanism, and two learning operations. These improvements and our motivation are described in more detail below.

3.1. Motivation

In the basic PSO algorithm, all the particles are attracted by the same global best particle, and the swarm has a tendency to converge quickly to the current globally best point, so the basic PSO algorithm is generally regarded as a global version of PSO [3]. The global best particle plays a vital role in the basic PSO algorithm. It guides the swarm to move to a new, better region of the search space. Many researchers have realized that the global best particle serves as a guideline for the current swarm and can accelerate convergence effectively. Based on this observation, the concept of the global best has been applied to various algorithms. For example, the DE variant called DE/best/1 [32] uses the current global best individual to adjust the mutation operation. Another greedy DE variant, DE/current-to-best/1, further modifies the mutation of DE/best/1 to make better use of the current global best individual. In 2008, the global-best harmony search (GHS) algorithm was presented by Omran [33]. GHS modifies the pitch-adjustment step of HS by introducing the global best harmony. After that, some other improved harmony search algorithms (NGHS [34] and IGHS [35]) also introduced the global best harmony into the improvisation process. Additionally, the current global best is used in artificial bee colony algorithms such as the Gbest-guided artificial bee colony (GABC) algorithm [36].

In this paper, we carefully assess the use of the current global best and aim to further explore its utilization and adjustment. After analyzing the swarm characteristics of the basic PSO algorithm, a simple conclusion is that the particle swarm can be regrouped into three types: the current swarm, the historical best swarm, and the global best swarm (only the global best particle). The PSO algorithm and many other improved PSO variants explore the current swarm only. This exploring strategy hardly prevents the algorithm from getting stuck in local optima. Therefore, combining with the framework of the basic PSO, we propose three modifications to adjust the three types of swarm independently. Our purpose is to improve the optimization potential of the PSO algorithm and accelerate the convergence speed.

3.2. Global neighborhood exploration strategy

According to the PSO algorithm, we know that the position of a particle is influenced by the best position visited by itself (i.e. its own experience) and the position of the best particle in the swarm (i.e. the experience of the swarm). However, it is hard to achieve a satisfactory solution in a reasonable time for the basic PSO algorithm, as it solely depends on the personal best and the global best in the whole search process. In order to overcome this weakness, some researchers have studied various neighborhood topologies such as fully connected, wheel and Von Neumann. It has been found that neighborhood information can efficiently enhance PSO performance. Therefore, how to explore effective neighborhood information is the key to improving the optimization capability of the PSO algorithm. In fact, the personal best and the global best
gather the whole historical experience of the particles. It is natural to realize that many excellent potential positions may exist around the personal best and the global best positions. We thus propose a global neighborhood exploration strategy, which combines the neighborhood information of the personal best and the global best. The global neighborhood exploration strategy is given by

v_{i,j}^{g+1} = ω(g)·v_{i,j}^{g} + c1·r1·[pibest,j·(1 − λ1·U(0,1)) − x_{i,j}^{g}] + c2·r2·[pgbest,j·(1 − λ2·U(0,1)) − x_{i,j}^{g}]   (9)

ω(g) = a·exp(b·g²)·rand,   b = ln(ωmin/ωmax)/(G² − 1),   a = ωmax·exp(−b)   (10)

where λ1 and λ2 are scaling factors, U(0,1) and rand are uniform random numbers on [0,1], and ω(g) is the inertia weight, which changes dynamically with the generation number. Assuming ωmax = 0.9, ωmin = 0.4, and a maximum number of generations G = 1000, ω changes dynamically with the generation number as shown in Fig. 1, which has two contrasting parts: part (a) shows the change curve of ω based only on the exponential decrease rule, while part (b) shows the change trend of ω according to Eq. (10). In part (a), the value of ω is predetermined in each generation. In part (b), it can be seen that the value of ω changes dynamically between 0 and 0.9; it does not decrease monotonically with the generation because of the stochastic term. This adjustment strategy is beneficial for maintaining the diversity of the inertia weight. Ultimately, the diversity of the inertia weight helps the algorithm to keep the velocity constantly updating.

Fig. 1. Variation of ω versus generation number.

3.3. Local learning mechanism
Table 2
Comparison algorithms.
Table 3
Benchmark functions (f1–f14).

Function | Formula | Range | Optimum
Sphere | f1 = Σ_{i=1}^{D} xi² | [−100, 100]^D | 0
Schwefel 2.22 | f2 = Σ_{i=1}^{D} |xi| + Π_{i=1}^{D} |xi| | [−100, 100]^D | 0
Schwefel 1.2 | f3 = Σ_{i=1}^{D} (Σ_{j=1}^{i} xj)² | [−100, 100]^D | 0
Schwefel 2.21 | f4 = max{|xi|, 1 ≤ i ≤ D} | [−100, 100]^D | 0
Rosenbrock | f5 = Σ_{i=1}^{D−1} [100(x_{i+1} − xi²)² + (xi − 1)²] | [−30, 30]^D | 0
Step | f6 = Σ_{i=1}^{D} (⌊xi + 0.5⌋)² | [−100, 100]^D | 0
Quartic | f7 = Σ_{i=1}^{D} i·xi⁴ + random[0, 1) | [−1.28, 1.28]^D | 0
Schwefel | f8 = −Σ_{i=1}^{D} xi·sin(√|xi|) | [−500, 500]^D | −418.9829·D
Rastrigin | f9 = Σ_{i=1}^{D} [xi² − 10·cos(2πxi) + 10] | [−5.12, 5.12]^D | 0
Ackley | f10 = −20·exp(−0.2·√((1/D)·Σ_{i=1}^{D} xi²)) − exp((1/D)·Σ_{i=1}^{D} cos(2πxi)) + 20 + e | [−32, 32]^D | 0
Griewank | f11 = (1/4000)·Σ_{i=1}^{D} xi² − Π_{i=1}^{D} cos(xi/√i) + 1 | [−600, 600]^D | 0
Penalized | f12 = (π/D)·{10·sin²(πy1) + Σ_{i=1}^{D−1} (yi − 1)²·[1 + 10·sin²(πy_{i+1})] + (yD − 1)²} + Σ_{i=1}^{D} u(xi, 10, 100, 4), with yi = 1 + (xi + 1)/4 | [−50, 50]^D | 0
Penalized 2 | f13 = 0.1·{sin²(3πx1) + Σ_{i=1}^{D−1} (xi − 1)²·[1 + sin²(3πx_{i+1})] + (xD − 1)²·[1 + sin²(2πxD)]} + Σ_{i=1}^{D} u(xi, 5, 100, 4) | [−50, 50]^D | 0
Schaffer f7 | f14 = Σ_{i=1}^{D−1} (xi² + x_{i+1}²)^{0.25}·[sin²(50·(xi² + x_{i+1}²)^{0.1}) + 1] | [−100, 100]^D | 0

where u(xi, a, k, m) = k·(xi − a)^m if xi > a; 0 if −a ≤ xi ≤ a; k·(−xi − a)^m if xi < −a.
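Several entries in Table 3 are simple to implement directly. As an illustrative sketch (the plain-Python style and function names are ours), the Sphere (f1), Rosenbrock (f5) and Rastrigin (f9) functions can be written as:

```python
import math

def sphere(x):
    # f1: sum of squares, global minimum 0 at x = (0, ..., 0)
    return sum(xi ** 2 for xi in x)

def rosenbrock(x):
    # f5: ill-conditioned curved valley, global minimum 0 at x = (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    # f9: highly multimodal, global minimum 0 at x = (0, ..., 0)
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```

Evaluating sphere([0.0] * 30) returns 0, matching the optimum column of the table.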
Table 4
Benchmark functions (f15–f25).

Function | Formula | Range | Optimum
Foxholes | f16 = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (xi − aij)⁶)]^{−1} | [−65.536, 65.536]² | 0.998
Kowalik | f17 = Σ_{i=1}^{11} [ai − x1·(bi² + bi·x2)/(bi² + bi·x3 + x4)]² | [−5, 5]⁴ | 3.08e−04
Six-hump camel back | f18 = 4x1² − 2.1x1⁴ + (1/3)x1⁶ + x1x2 − 4x2² + 4x2⁴ | [−5, 5]² | −1.0316285
Branin | f19 = (x2 − (5.1/4π²)x1² + (5/π)x1 − 6)² + 10·(1 − 1/8π)·cos(x1) + 10 | [−5, 10] × [0, 15] | 0.398
Goldstein–Price | f20 = [1 + (x1 + x2 + 1)²·(19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)]·[30 + (2x1 − 3x2)²·(18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)] | [−5, 5]² | 3
Hartman 3 | f21 = −Σ_{i=1}^{4} ci·exp[−Σ_{j=1}^{3} aij·(xj − pij)²] | [0, 1]³ | −3.86278
Hartman 6 | f22 = −Σ_{i=1}^{4} ci·exp[−Σ_{j=1}^{6} aij·(xj − pij)²] | [0, 1]⁶ | −3.32
Goldstein Price 2 | f23 = exp(0.5·(x1² + x2² − 25)²) + sin⁴(4x1 − 3x2) + 0.5·(2x1 + x2 − 10)² | [−5, 5]² | 1
Schaffer | f24 = (sin²(√(x1² + x2²)) − 0.5)/[1 + 0.001·(x1² + x2²)]² − 0.5 | [−100, 100]² | −1
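As a quick consistency check on Table 4, the fixed-dimension functions can be evaluated at their known minimizers. A sketch for the six-hump camel back function f18 (the function name below is ours) is:

```python
def six_hump_camel_back(x1, x2):
    # f18 from Table 4; two global minima near (0.0898, -0.7126) and
    # (-0.0898, 0.7126), with objective value about -1.0316285
    return (4.0 * x1 ** 2 - 2.1 * x1 ** 4 + x1 ** 6 / 3.0
            + x1 * x2 - 4.0 * x2 ** 2 + 4.0 * x2 ** 4)

print(six_hump_camel_back(0.0898, -0.7126))  # close to -1.0316
```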
max(xj) and min(xj) are the largest and smallest values of the jth variable in the current swarm, respectively. xjL and xjU are the lower and upper bounds of the jth dimension variable. pgbest,j is the jth dimension of the Gbest. Fig. 2 shows the improved opposition-based learning strategy.

According to Eq. (9), we can see that b1 and pgbest,j are symmetric around (xjL + max(xj))/2, and b2 and pgbest,j are symmetric around (xjU + min(xj))/2. In Fig. 2, b1 and pgbest,j are symmetric around the red dotted line, and b2 and pgbest,j are symmetric around the blue dotted line. It can be seen from Fig. 2 that the candidate Gbest ugbest is generated by the cooperation of the two opposition-based learning particles. Thus it enhances the self-learning of the Gbest and extends the exploration.

In order to control the implementation of the two learning operations, a learning control parameter Lc is used and compared with a uniform random number. When the value of Lc is larger than the uniform random number, the two learning operations are carried out in the proposed algorithm. Carrying out the Gbest learning operation with a certain probability aims at updating the Gbest effectively.

3.5. Complete steps of IGPSO algorithm

According to the discussion above, the complete steps of the IGPSO algorithm are summarized as follows:

Step 1. Initialize the algorithm and optimization problem parameters.
The optimization problem is defined as: minimize f(x) subject to xjL ≤ xj ≤ xjU (j = 1, 2, 3, . . ., D), where xjL and xjU are the lower and upper bounds for the decision variables, respectively. D is the maximum number of problem dimensions. The IGPSO algorithm parameters are specified in this step as well. They are the number of particles NP, the acceleration constants c1 and c2, the maximum and minimum of the inertia weight ωmax and ωmin, the scaling factors λ1 and λ2, the learning control parameter Lc, the cooperation factors α and β, the maximum number of function evaluations Maxi FEs, and the current number of function evaluations FEs.

Step 2. Initialize the swarm particles.
Initialize NP particles with random positions and velocities in the search space [xjL, xjU] (j = 1, 2, 3, . . ., D). Calculate the objective function value of each particle and determine the global best particle (Gbest). Meanwhile, build the historical best swarm from the current initialized swarm.

Step 3. Update the current swarm.
According to the improved velocity updating strategy and the original position updating (Eqs. (9) and (10)), generate new particles, update the current swarm, and determine the personal best particles (Pbest) and the Gbest. Note that the Pbest are saved in the historical best swarm.

Step 4. Update the historical best swarm.
Based on the historical best swarm, employ the personal best neighborhood learning (Eq. (11)) to generate the candidate personal best particle yibest and update the corresponding Pbest in the historical best swarm.

Step 5. Update the Gbest independently.
Compare the value of Lc with a uniform random number; if the value of Lc is larger than the uniform random number, apply the stochastic learning and opposition-based learning operations to adjust the Gbest.

Step 6. Check the stopping criterion.
If the maximal function evaluation number (Maxi FEs) is reached, the computation is terminated. Otherwise, repeat Steps 3, 4 and 5.

In order to illustrate the IGPSO algorithm clearly, a flowchart of the proposed algorithm is provided in Fig. 3.
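The steps above can be sketched in code. The following is a simplified, illustrative implementation, not the authors' reference code: it uses the velocity update of Eq. (9) and the inertia weight of Eq. (10), gates a basic opposition-based perturbation of the Gbest with the control parameter Lc, and, since the local learning rule of Eq. (11) is not reproduced in this excerpt, stands in a simple random perturbation for the historical-best update. The parameter names (NP, c1, c2, lam1, lam2, Lc) follow Step 1; everything else is an assumption for illustration.

```python
import math
import random

def igpso_sketch(f, dim, lb, ub, NP=30, max_fes=20000,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4,
                 lam1=0.3, lam2=0.3, Lc=0.9, seed=1):
    # Steps 1-2: initialize parameters, particles, Pbest swarm and Gbest.
    rng = random.Random(seed)
    G = max_fes // NP                                   # generation budget
    b = math.log(w_min / w_max) / max(G * G - 1, 1)     # Eq. (10) coefficients
    a = w_max * math.exp(-b)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(NP)]
    V = [[0.0] * dim for _ in range(NP)]
    fit = [f(x) for x in X]
    P = [x[:] for x in X]                               # historical best swarm
    pfit = fit[:]
    g = min(range(NP), key=lambda i: pfit[i])
    gbest, gfit = P[g][:], pfit[g]

    for gen in range(1, G + 1):
        w = a * math.exp(b * gen * gen) * rng.random()  # Eq. (10)
        for i in range(NP):
            for j in range(dim):
                # Step 3: global neighborhood exploration, Eq. (9)
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] * (1 - lam1 * rng.random()) - X[i][j])
                           + c2 * rng.random() * (gbest[j] * (1 - lam2 * rng.random()) - X[i][j]))
                X[i][j] = min(ub, max(lb, X[i][j] + V[i][j]))
            fit[i] = f(X[i])
            if fit[i] < pfit[i]:                        # save improved Pbest
                P[i], pfit[i] = X[i][:], fit[i]
                if fit[i] < gfit:
                    gbest, gfit = X[i][:], fit[i]
        # Step 4 (placeholder for Eq. (11)): perturb one historical best particle
        i = rng.randrange(NP)
        y = [min(ub, max(lb, pj + 0.01 * (ub - lb) * rng.gauss(0, 1))) for pj in P[i]]
        fy = f(y)
        if fy < pfit[i]:
            P[i], pfit[i] = y, fy
            if fy < gfit:
                gbest, gfit = y[:], fy
        # Step 5: Lc-gated opposition-based perturbation of the Gbest
        if Lc > rng.random():
            u = [lb + ub - gj for gj in gbest]          # simple opposite point
            fu = f(u)
            if fu < gfit:
                gbest, gfit = u, fu
    return gbest, gfit                                  # Step 6: budget exhausted
```

For example, igpso_sketch(lambda x: sum(t * t for t in x), dim=10, lb=-100.0, ub=100.0) minimizes the Sphere function f1 under this simplified scheme.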
The proposed algorithm is compared with other well-known PSO variants and state-of-the-art meta-heuristic algorithms. All the comparison algorithms are listed in Table 2. They include PSO variants such as BBPSO [41], CLPSO [5], APSO [7], and DMS-PSO [20], differential evolution (DE) variants such as DE/best/1 [32] and ODE [39], artificial bee colony (ABC) variants
such as ABC [42] and GABC [36], and HS variants such as IGHS [35] and GDHS [43]. In order to make a fair comparison among the algorithms, the parameter configurations of the different algorithms are set according to those proposed by their respective authors.
In the field of evolutionary computation, it is common to compare different algorithms using a large test set, especially when
the test involves function optimization. However, the differences in test problem sets may induce bias toward particular algorithms.
All experiments were run on a Q9400 dual core 2.66 GHz CPU with 3.50 GB of memory.
In the parameter study, λ1 and λ2 are first set to fixed values (0.0001, 0.005, 0.01, 0.05, 0.1, 0.25, 0.5). Alternatively, the values of λ1 and λ2 are randomly generated from three
intervals: [0.001, 0.005], [0.01, 0.05], and [0.1, 0.5]. Therefore, the values of λ1 and λ2 change from 0.001 to 0.5, while the other parameters remain at their default settings in the experiment. The maximum number of objective function evaluations (OFE) is 4×10⁴. The statistical results are given in Tables 5 and 6 in terms
of the mean and standard deviation values of the solutions found
in 40 independent trials.
From Table 5, it can be seen that the performance of IGPSO improves along with the increase of λ1 in most cases, such as f1–f4, f7, and f9–f11. This is because a small value of λ1 narrows the neighborhood information space of the historical best particle, while a large value
of λ1 widens the neighborhood information space around the historical best particle. However, for the ill-conditioned function f5, the complex multimodal function f8, and the penalty functions f12 and f13, a small value of λ1 can improve the convergence performance of IGPSO. One reason is that a relatively small value of λ1 makes the IGPSO algorithm focus on a minimal neighborhood zone of the historical best particle, which enhances the local exploitation
capability. Additionally, when λ1 is randomly generated in a given interval, we find that the IGPSO algorithm can achieve good performance. An arbitrary value within the range [0.1, 0.5] for λ1 should be acceptable to the IGPSO algorithm for most simple tested functions. Fig. 4 provides the optimization curves of IGPSO with different values of λ1 for two typical functions, f1 and f5.
Similarly, for the complex multimodal function f8 and the penalty functions f12 and f13, a small value of λ2 can improve the convergence performance of IGPSO. Besides, when λ2 is randomly generated in a given interval, the IGPSO algorithm can also achieve good performance on most tested functions.
Fig. 4. Optimization curves of IGPSO with different values of λ1 (f1 and f5).
Fig. 5. Optimization curves of IGPSO with different values of 2 (f4 and f8). Both panels plot the optimal objective function value against the objective function evaluation number (×10^4).
If the two learning operations are carried out in each iteration, the IGPSO algorithm can find better results in most cases, but much more computational effort is needed. For the three given intervals, in contrast, the results demonstrate that [0.75, 1] is a suitable one, so we recommend that the value of Lc be larger than 0.75. Besides, Fig. 6 depicts the convergence process of the IGPSO algorithm with different values of Lc and also confirms that large values of Lc improve the convergence speed of IGPSO.

According to Table 8, changing the two remaining parameters has some influence on the performance of the IGPSO algorithm. There is no indication that one setting is superior to the others; however, a value of 0 or 1 harms the optimization quality of IGPSO in most cases, and will deteriorate the performance of IGPSO as the number of iterations increases. Based on the statistical results and the optimization trend (Fig. 7), an appropriate value is approximately 0.25, which implies a value of 0.75 for the complementary parameter.
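One way to read the interval settings for Lc is that a fresh value is drawn uniformly from the interval at each iteration; under that assumption (ours, the text does not spell it out), the recommended [0.75, 1] setting amounts to:

```python
import random

# Sketch of the recommended Lc setting, assuming the interval variants draw
# a fresh Lc uniformly from [0.75, 1] at each iteration (our reading, not the
# authors' code).
rng = random.Random(0)
lc_values = [rng.uniform(0.75, 1.0) for _ in range(1000)]

assert all(0.75 <= lc <= 1.0 for lc in lc_values)
assert sum(lc_values) / len(lc_values) > 0.75  # stays above the 0.75 floor
```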
Table 9
Results of the 11 compared algorithms for f1–f8.

F D Index BBPSO CLPSO APSO DMS-PSO DE/best/1 ODE ABC GABC IGHS GDHS IGPSO
f1 30 Mean 1.28E-216 8.06E-96 2.30E-12 1.53E-113 5.01E-57 5.01E-57 8.01E-19 8.01E-19 2.17E-11 2.57E-04 0.00E + 00
SD 0.00E + 00 3.53E-95 1.03E-11 5.14E-113 1.57E-56 1.57E-56 2.36E-18 2.36E-18 2.98E-12 3.67E-05 0.00E + 00
50 Mean 1.00E + 03 2.28E-85 9.54E-140 8.62E-103 1.39E-53 1.00E-165 7.97E-25 1.80E + 02 5.96E-11 9.29E-04 4.24E-297
SD 3.08E + 03 3.87E-85 3.61E-139 2.82E-102 2.20E-53 0.00E + 00 1.25E-24 4.05E + 02 5.63E-12 1.37E-04 0.00E + 00
100 Mean 1.95E + 04 4.16E-75 2.50E + 03 4.89E-77 7.68E-37 5.67E-81 1.29E-23 8.58E + 02 2.14E-10 8.07E-03 4.45E-156
SD 1.54E + 04 1.80E-74 4.44E + 03 1.40E-76 2.22E-36 9.55E-81 2.91E-23 1.52E + 03 1.22E-11 8.87E-04 9.29E-156
f2 30 Mean 4.40E + 02 5.50E-57 5.38E-03 2.18E + 02 2.73E-32 4.73E-32 5.18E-10 5.18E-10 1.21E-02 6.76E-02 0.00E + 00
SD 2.23E + 02 1.92E-56 1.01E-02 1.83E + 02 3.54E-32 5.54E-32 7.84E-10 7.84E-10 3.33E-02 4.55E-03 0.00E + 00
50 Mean 9.80E + 02 4.47E-52 2.94E-38 3.09E + 02 2.62E-33 1.03E-42 8.91E-13 4.81E + 01 1.49E-01 1.67E-01 1.47E-146
SD 1.77E + 02 1.22E-51 1.31E-37 2.87E + 01 6.71E-33 3.53E-42 4.85E-13 2.68E + 01 2.52E-01 1.09E-02 4.65E-146
100 Mean 2.30E + 03 3.45E-45 1.50E + 01 5.83E + 02 8.38E-24 3.60E-06 2.34E-12 8.47E + 01 3.96E-01 7.20E-01 0.00E + 00
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 987–1008
Fig. 6. Optimization curves of IGPSO with different values of Lc (f7 and f12).
Fig. 7. Optimization curves of IGPSO with different parameter values (f9 and f13).
evaluations (FEs)), scalability, and nonparametric statistical significance (Mann–Whitney U test).

4.3.1. Optimization accuracy and scalability
In this section, we provide a comparison between the proposed algorithm and other classic algorithms: PSO variants such as BBPSO [45], CLPSO [46], APSO [47] and DMS-PSO [49]; differential evolution (DE) variants such as DE/best/1 [51] and ODE [53]; artificial bee colony (ABC) variants such as ABC [54] and GABC [33]; and HS variants such as IGHS [32] and GDHS [56]. In the IGPSO algorithm, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, 2 = 0.01, Lc = 0.75, and the remaining two parameters are set to 0.25 and 1 − 0.25 = 0.75. The parameter settings for the other comparison algorithms are inherited from the referenced papers. For the high-dimension functions (f1–f15), the maximum number of objective function evaluations (OFEs) is D × 10^4, while for the low-dimensional functions (f16–f25), OFEs is set to 5 × 10^4.

Each algorithm is repeated 50 times independently. The mean and standard deviation (SD) results for the high-dimension functions are provided in Tables 9 and 10, and those for the low-dimension functions are recorded in Table 11. Note that the computer displays zero when the number is smaller than 10^-325. The best results are highlighted in bold face.

From Tables 9–11, it can be seen that the IGPSO algorithm performs much better than all the comparison algorithms for almost all the tested functions except f5, f12 and f13. IGPSO achieved the global optimal value for f1–f4, f6, f8, f9, f11, f14 and f15. Moreover, IGPSO offers near-global-optimum solutions for the other five functions. BBPSO, CLPSO, APSO and DMS-PSO perform better in some cases, but IGPSO beats them in terms of solution quality under the same number of function evaluations. The BBPSO algorithm searches with the help of a Gaussian distribution, but the change of its mean and variance depends on the personal best and global best.
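The evaluation protocol just described (50 independent runs, mean and SD of the final objective values, a fixed OFE budget) can be sketched as follows. `optimize` is a random-search stand-in, not the authors' IGPSO implementation, and the budget is shrunk so the sketch runs quickly:

```python
import random
import statistics

def optimize(f, dim, max_evals, seed):
    """Random-search placeholder that honours a fixed OFE budget."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_evals):
        x = [rng.uniform(-100, 100) for _ in range(dim)]
        best = min(best, f(x))
    return best

sphere = lambda x: sum(v * v for v in x)  # an f1-style test function

D = 2  # the paper uses D = 30/50/100 with D x 10^4 OFEs; shrunk here
results = [optimize(sphere, D, max_evals=200, seed=s) for s in range(50)]
mean, sd = statistics.mean(results), statistics.stdev(results)

assert len(results) == 50
assert mean >= 0.0  # sphere function is non-negative
```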
Table 10
Results of the 11 compared algorithms for f9–f15.
F D Index Algorithms
BBPSO CLPSO APSO DMS-PSO DE/best/1 ODE ABC GABC IGHS GDHS IGPSO
f9 30 Mean 9.97E + 01 1.27E + 01 4.98E-02 1.49E + 01 1.31E + 01 1.31E + 01 3.20E-11 3.20E-11 4.23E-09 3.76E-04 0.00E + 00
SD 3.44E + 01 4.22E + 00 2.22E-01 3.62E + 00 3.91E + 00 3.91E + 00 5.28E-11 5.28E-11 4.68E-10 5.53E-05 0.00E + 00
50 Mean 3.08E + 02 2.42E + 01 1.79E + 02 2.99E + 01 3.26E + 01 1.11E-12 1.27E-06 1.92E + 01 1.16E-08 1.37E-03 0.00E + 00
SD 5.64E + 01 6.40E + 00 5.32E + 01 6.30E + 00 6.49E + 00 4.39E-12 5.52E-06 9.08E + 00 9.64E-10 1.46E-04 0.00E + 00
100 Mean 8.93E + 01 7.02E + 00 5.03E + 01 1.95E + 01 1.08E + 02 4.52E-08 1.08E-02 4.64E + 01 4.41E-08 1.16E-02 0.00E + 00
SD 2.89E + 01 1.00E + 02 2.61E + 01 2.59E + 01 1.53E + 01 1.41E-07 1.81E-02 1.63E + 01 3.47E-09 1.14E-03 0.00E + 00
f10 30 Mean 2.00E + 01 1.42E-14 7.57E-03 6.06E-12 6.04E-15 6.04E-15 1.63E-09 1.63E-09 3.44E-06 4.56E-03 3.55E-15
SD 1.04E + 01 7.46E-15 9.63E-04 3.90E-13 1.67E-15 1.67E-15 1.44E-09 1.44E-09 1.79E-07 3.94E-04 0.00E + 00
Table 11
Comparison results for IGPSO (f16–f25).
BBPSO CLPSO APSO DMS-PSO DE/best/1 ODE ABC GABC IGHS GDHS IGPSO
f16 mean 0.998004 1.244546 0.998004 1.98013 0.998004 0.998004 0.998004 0.998004 0.998004 0.998004 0.998004
SD 7.2E-17 1.10257 0 2.481412 1.61E-16 5.09E-17 2.16E-16 2.79E-13 0 1.44E-16 0
f17 mean 0.004327 0.006048 0.003089 0.002934 0.000386 0.00166 0.001486 0.002752 0.000648 0.000725 0.000353
SD 0.007108 0.008925 0.005924 0.006218 0.000234 0.004421 0.004456 0.006031 0.0001 0.00038 0.000202
f18 mean -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163 -1.03163
SD 2.16E-16 2.28E-16 2.1E-16 2.22E-16 1.02E-16 2.28E-16 1.84E-16 3.63E-10 1.69E-16 1.84E-16 8.82E-17
f19 mean 0.397887 0.397887 0.397887 0.397887 0.517552 0.397887 0.397887 0.397887 0.397887 0.397887 0.397887
SD 0 0 0 0 0.535157 0 0 1.69E-09 0 0 0
f20 mean 3 3 3 8.4 3 3 3 3 3 3 3
SD 9.39E-16 1.19E-15 7.62E-16 18.78801 3.74E-15 2.88E-16 3.46E-15 9.28E-10 5.58E-16 3.55E-15 1.23E-15
f21 mean -3.86081 -3.86278 -3.86278 -3.82413 -3.86278 -3.86278 -3.86278 -3.86278 -3.86278 -3.86278 -3.86278
SD 0.003499 2.5E-16 1.07E-15 0.172852 2.28E-16 1.39E-15 5.94E-16 1.23E-11 2.28E-15 3.22E-16 3.67E-16
f22 mean -3.26434 -3.26177 -3.22773 -3.27408 -3.28633 -3.26849 -3.28633 -3.27711 -3.23877 -3.25066 -3.28038
SD 0.084686 0.124996 0.07564 0.06021 0.055899 0.060685 0.055899 0.064279 0.055899 0.059759 0.058182
f23 mean 1.001001 1.000279 1.000209 1.000372 1.051421 1.000101 1.000028 1.000017 1.00012 1.001187 1.00003
SD 0.000919 0.00066 0.000548 0.000739 0.157936 0.000403 0.000117 1.58E-05 0.000398 0.000837 4.56E-16
f24 mean 1 0.99417 1 0.99417 0.99126 0.99903 0.99854 1 1 1 1
SD 0 0.004883 0 0.004883 0.00299 0.00299 0.003559 6.12E-09 5.09E-17 0 0
f25 mean -181.365 -186.731 -186.731 -182.065 -186.731 -186.731 -186.731 -186.731 -186.731 -186.731 -186.731
SD 23.99748 8.14E-08 9.63E-07 14.36004 2.61E-14 2.44E-14 2.69E-14 8.12E-07 2.69E-10 2.35E-14 1.13E-14
Table 12
Mann–Whitney U test result obtained using IGPSO and the 10 comparison algorithms (D = 50).
Table 13
Mann–Whitney U test result obtained using IGPSO and the 10 comparison algorithms (D = 100).
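The nonparametric comparison reported in Tables 12 and 13 rests on the Mann–Whitney U statistic. A minimal sketch, computed directly from two samples of per-run results (the numbers are made up for illustration; a library routine such as scipy.stats.mannwhitneyu would normally be used instead):

```python
def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b (ties count 0.5)."""
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0   # an a-value ranks below a b-value
            elif x == y:
                u += 0.5
    return u

# 50 illustrative final errors per algorithm (not the paper's data):
igpso = [0.0, 1e-300, 0.0, 2e-299, 0.0] * 10
other = [2.1e-11, 3.0e-11, 1.8e-11, 2.5e-11, 2.2e-11] * 10

u = mann_whitney_u(igpso, other)
# Every IGPSO error is below every competitor error, so U hits its maximum:
assert u == 50 * 50
```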
Table 14
Compared results of welded beam design problem.
As the current personal best and global best solutions easily fall into local optima, BBPSO can hardly generate a better solution; therefore, the performance of BBPSO is not good. As the CLPSO algorithm uses the advantage of different personal best solutions to produce a new solution, it improves the global search ability of the PSO algorithm. However, CLPSO suffers from the fact that the global best solution often does not get updated in the search process, so premature convergence may occur.
Table 15
Compared results of tension/compression spring design problem.
has better performance than the IGHS and GDHS algorithms based on the results of the Mann–Whitney U tests.

From Table 13, it can be seen that the IGPSO algorithm also obtains better results in most cases when the dimension size is increased. The values of U_IGPSO are smaller than those of the considered algorithms for almost all the considered functions except f12 and f13. Therefore, the IGPSO algorithm is superior to the considered algorithms.

where g_i(x) (i = 1, ..., p) and h_i(x) (i = p + 1, ..., M) are the inequality and equality constraints, respectively, and x_j_min and x_j_max are the lower and upper bounds of x_j. Based on the penalty function method, the above constrained optimization problem can be converted into an unconstrained function F(x) described as

F(x) = f(x) + λ·[ Σ_{i=1..p} max(0, g_i(x)) + Σ_{i=p+1..M} max(0, |h_i(x)|) ]    (17)

where λ is the penalty factor.
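The penalty-function conversion of Eq. (17) can be sketched directly; the wrapper below is illustrative (the function names are ours, not the authors' code), with λ = 10^10 as used later in the experiments:

```python
def penalized(f, ineq, eq, lam=1e10):
    """Return an unconstrained objective F from f, g_i(x) <= 0 and h_i(x) = 0."""
    def F(x):
        viol = sum(max(0.0, g(x)) for g in ineq)      # inequality violations
        viol += sum(max(0.0, abs(h(x))) for h in eq)  # equality violations
        return f(x) + lam * viol
    return F

# Toy usage: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
F = penalized(lambda x: x ** 2, ineq=[lambda x: 1.0 - x], eq=[])
assert F(1.0) == 1.0   # feasible point: no penalty
assert F(0.5) > 1e9    # infeasible point: heavily penalized
```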
τ(x) = sqrt( τ'(x)² + 2·τ'(x)·τ''(x)·x2/(2R) + τ''(x)² ),  τ'(x) = P/(√2·x1·x2),  τ''(x) = MR/J,  M = P(L + x2/2),
R = 0.5·sqrt( x2² + (x1 + x3)² ),  J = 2·√2·x1·x2·( x2²/4 + ((x1 + x3)/2)² ),
σ(x) = 6PL/(x4·x3²),  δ(x) = 6PL³/(E·x3²·x4),
Pc(x) = ( 4.013E·sqrt(x3²·x4⁶/36)/L² )·( 1 - (x3/(2L))·sqrt(E/(4G)) ),
P = 6000, L = 14, E = 30 × 10⁶, G = 12 × 10⁶, τ_max = 13600, σ_max = 30000, δ_max = 0.25
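As a numeric sanity check (a sketch of ours, not the authors' code), the bending stress, deflection and buckling load can be evaluated at the best welded-beam design reported in the text; at that design the stress and buckling constraints sit essentially on their bounds:

```python
from math import sqrt

# Constants as listed with the formulas; variable names are ours.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
x1, x2, x3, x4 = (0.205729639285696, 3.470488675359993,
                  9.036623944851453, 0.205729639665801)

sigma = 6 * P * L / (x4 * x3 ** 2)          # bending stress sigma(x)
delta = 6 * P * L ** 3 / (E * x3 ** 2 * x4) # deflection delta(x), as printed
Pc = (4.013 * E * sqrt(x3 ** 2 * x4 ** 6 / 36) / L ** 2) \
     * (1 - x3 / (2 * L) * sqrt(E / (4 * G)))

assert abs(sigma - 30000.0) < 1.0  # sigma(x*) is at sigma_max
assert abs(Pc - P) < 10.0          # buckling load Pc(x*) is at P = 6000
assert delta < 0.25                # deflection stays within delta_max
```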
In this experiment, the IGPSO algorithm combined with the penalty function method is used to solve the welded beam design problem. The parameters of the IGPSO algorithm are fixed as NP = 10, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, 2 = 0.01, Lc = 0.75, and the remaining two parameters set to 0.25 and 0.75. The IGPSO algorithm is coded in MATLAB and all tests were performed on a PC with a 2.93 GHz processor and 4 GB RAM; the number of objective function evaluations (OFEs) is set to 12 × 10^4, the penalty factor λ = 10^10, and 30 runs are performed for each problem. IGPSO obtained an overall best design of 1.724852314184504, corresponding to x = [0.205729639285696, 3.470488675359993, 9.036623944851453, 0.205729639665801]. Table 14 compares the optimization results found by IGPSO with those of other optimization algorithms reported in the literature. From Table 14, it can be seen that the best solution for the welded beam design problem is found by IGPSO. This solution is better than most of the published results and the same as the best solutions reported in the literature so far. Moreover, the mean and standard deviation (SD) results obtained by IGPSO apparently outperform the reported results in the literature, which demonstrates that the IGPSO algorithm is more reliable than the other published algorithms.

4.4.2. Tension/compression spring design
The tension/compression spring design problem is described in [45]. Fig. 9 shows a tension/compression spring with three design variables. The weight of the spring is to be minimized, subject to four constraints on the minimum deflection, shear stress and surge frequency, and limits on the outside diameter [60]. The three design variables are the wire diameter d(x1), the mean coil diameter D(x2), and the number of active coils N(x3). This design problem can be expressed as follows:

min f(x) = (x3 + 2)·x2·x1²

s.t.  g1(x) = 1 - x2³·x3/(71785·x1⁴) ≤ 0,
      g2(x) = (4x2² - x1·x2)/(12566·(x2·x1³ - x1⁴)) + 1/(5108·x1²) - 1 ≤ 0,
      g3(x) = 1 - 140.45·x1/(x2²·x3) ≤ 0,
      g4(x) = (x2 + x1)/1.5 - 1 ≤ 0,
      0 < x1 ≤ 2, 0.25 ≤ x2 ≤ 2, 2 ≤ x3 ≤ 15    (19)

In this experiment, the parameters of the IGPSO algorithm are fixed as NP = 40, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, 2 = 0.01, Lc = 0.75, and the remaining two parameters set to 0.25 and 0.75. The number of objective function evaluations (OFEs) is set to 10 × 10^4, the penalty factor λ = 10^10, and 30 runs are performed for each problem. IGPSO obtained an overall best design of 0.012665249890278, corresponding to x = [0.051670094177424, 0.356261453811766, 11.315774165651165]. Table 15 compares the optimization results found by IGPSO with those of other optimization algorithms reported in the literature. As can be seen from the statistics in Table 15, this problem is not complex enough for IGPSO, as even the worst solution is close to the optimum value. The best result reported for this problem is near 0.012665 [49], and IGPSO also achieved a value close to 0.012665. Especially, the mean and standard deviation (SD) results obtained by IGPSO are apparently better than the reported results in the literature, which demonstrates that the IGPSO algorithm is more robust than the other published algorithms.

4.4.3. Speed reducer design
For the speed reducer design problem (Fig. 10), the weight of the speed reducer is to be minimized subject to constraints on the transverse deflections of the shafts, surface stress, bending stress of the gear teeth, and stresses in the shafts [60]. This problem involves seven design variables: the face width b(x1), module of teeth m(x2), number of teeth in the pinion z(x3), length of the first shaft between bearings l1(x4), length of the second shaft between bearings l2(x5), and the diameters of the first shaft d1(x6) and second shaft d2(x7). Note that the third variable x3 (number of teeth) is integer-valued while all remaining variables are continuous. The speed reducer design problem includes 11 constraints, so it is very hard even to locate a feasible solution [63]. This design problem is described as

min f(x) = 0.7854·x1·x2²·(3.3333·x3² + 14.9334·x3 - 43.0934) - 1.506·x1·(x6² + x7²) + 7.4777·(x6³ + x7³) + 0.7854·(x4·x6² + x5·x7²)

s.t.  g1(x) = 27/(x1·x2²·x3) - 1 ≤ 0,   g2(x) = 397.5/(x1·x2²·x3²) - 1 ≤ 0,
      g3(x) = 1.93·x4³/(x2·x3·x6⁴) - 1 ≤ 0,   g4(x) = 1.93·x5³/(x2·x3·x7⁴) - 1 ≤ 0,
      g5(x) = sqrt( (745·x4/(x2·x3))² + 16.9 × 10⁶ )/(110·x6³) - 1 ≤ 0,
      g6(x) = sqrt( (745·x5/(x2·x3))² + 157.5 × 10⁶ )/(85·x7³) - 1 ≤ 0,
      g7(x) = x2·x3/40 - 1 ≤ 0,   g8(x) = 5·x2/x1 - 1 ≤ 0,   g9(x) = x1/(12·x2) - 1 ≤ 0,
      g10(x) = (1.5·x6 + 1.9)/x4 - 1 ≤ 0,   g11(x) = (1.1·x7 + 1.9)/x5 - 1 ≤ 0,
      2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3,
      7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5    (20)

The IGPSO algorithm is used to solve the speed reducer design problem. In IGPSO, the parameters are fixed as NP = 40, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, 2 = 0.01, Lc = 0.75, and the remaining two parameters set to 0.25 and 0.75. The number of objective function evaluations (OFEs) is set to 10 × 10^4, the penalty factor λ = 10^10, and 30 runs are performed for each problem. IGPSO obtained an overall best design of 2994.380995367928, corresponding to x = [3.499999990375726, 0.70, 17.0, 7.30, 7.715169535948116, 3.350214650206060, 5.286517819343709]. Table 16 compares the optimization results found by IGPSO with those of other optimization algorithms reported in the literature.

As can be seen from Table 16, the best solution obtained by the IGPSO algorithm is superior to the best solutions reported in the literature so far. IGPSO has performed with more robustness in terms of the quality of the results obtained. What's more, even the worst result found by IGPSO is better than the best results found by other methods. The mean and standard deviation (SD) results obtained by IGPSO apparently outperform the reported results in the literature, which demonstrates that the IGPSO algorithm is quite competitive with the other published algorithms.

5. Concluding remarks

An improved global-best-guided particle swarm optimization with learning operation (IGPSO) is presented in this paper. Based on
the division of the particle population, the current swarm, the historical best swarm and the global best swarm are constructed. The global neighborhood exploration strategy, the local learning mechanism, and the stochastic learning and opposition-based learning operations are employed in these three swarms, respectively. A large number of experiments were carried out to test the performance of the IGPSO algorithm. The effects of the relevant parameters on the performance of IGPSO have been analyzed and evaluated. The experimental results reveal that the IGPSO algorithm is superior to the state-of-the-art PSO variants, classic meta-heuristic algorithms and several other nature-inspired algorithms in terms of accuracy, convergence speed, stability and robustness. Besides, IGPSO is used to solve three well-known engineering design problems.

Table 16
Compared results of the speed reducer design problem (x1–x7, Best, SD and OFEs for De Melo et al. [63], Cagnina et al. [52], Aguirre et al. [47], Baykasoglu [60], Wang et al. [66], Akhtar et al. [65], Tomassetti [54], Baykasoglu [64], and IGPSO).

Acknowledgement

The authors are grateful to the editor and the anonymous referees. This work was supported by Grant No. 61403174.

References

[1] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, Proceedings of the Sixth International Symposium on Micro Machine and Human Science (1995) 39–43.
[7] Z.H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization, IEEE Trans. Syst. Man Cybern. Part B 39 (2009) 1362–1381.
[8] Z.H. Zhan, J. Zhang, Y. Li, Y.H. Shi, Orthogonal learning particle swarm optimization, IEEE Trans. Evol. Comput. 15 (2011) 832–847.
[12] C. Li, S. Yang, I. Korejo, An adaptive mutation operator for particle swarm optimization, The 2008 UK Workshop on Computational Intelligence (2008) 165–170.
[13] H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, M. Ventresca, Enhancing particle swarm optimization using generalized opposition-based learning, Info. Sci. 181 (2011) 4699–4714.
[14] W.H. Lim, N.A.M. Isa, An adaptive two-layer particle swarm optimization with elitist learning strategy, Info. Sci. 273 (2014) 49–72.
[15] W.H. Lim, N.A.M. Isa, Bidirectional teaching and peer-learning particle swarm optimization, Info. Sci. 280 (2014) 111–134.
[16] W.H. Lim, N.A.M. Isa, Teaching and peer-learning particle swarm optimization, Appl. Soft Comput. 18 (2014) 39–58.
[18] R. Cheng, Y. Jin, A social learning particle swarm optimization algorithm for scalable optimization, Info. Sci. 291 (2015) 43–60.
[19] P.N. Suganthan, Particle swarm optimiser with neighbourhood operator, Proceedings of the 1999 Congress on Evolutionary Computation 3 (1999) 1958–1962.
[20] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer with local search, Proceedings of the 2005 IEEE Congress on Evolutionary Computation 1 (2005) 522–528.
[21] W. Du, B. Li, Multi-strategy ensemble particle swarm optimization for dynamic optimization, Info. Sci. 178 (2008) 3096–3109.
[22] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204–210.
[23] M. Nasir, S. Das, D. Maity, et al., A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization, Info. Sci. 209 (2012) 16–36.
[24] X. Hu, R. Eberhart, Multiobjective optimization using dynamic neighborhood particle swarm optimization, WCCI, IEEE (2002) 1677–1681.
[25] Y. Tang, Z. Wang, J. Fang, Feedback learning particle swarm optimization, Appl. Soft Comput. 11 (2011) 4713–4725.
[26] H. Geng, Y. Huang, J. Gao, H. Zhu, A self-guided particle swarm optimization with independent dynamic inertia weights setting on each particle, Appl. Math. Info. Sci. 7 (2013) 545–552.
[27] C.J. Lin, M.S. Chern, M. Chih, A binary particle swarm optimization based on the surrogate information with proportional acceleration coefficients for the 0-1 multidimensional knapsack problem, J. Ind. Prod. Eng. 33 (2016) 77–102.
[28] M. Chih, C.J. Lin, M.S. Chern, et al., Particle swarm optimization with time-varying acceleration coefficients for the multidimensional knapsack problem, Appl. Math. Modell. 38 (2014) 1338–1350.
[29] G. Ardizzon, G. Cavazzini, G. Pavesi, Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms, Info. Sci. 299 (2015) 337–378.
[30] M. Chih, Self-adaptive check and repair operator-based particle swarm optimization for the multidimensional knapsack problem, Appl. Soft Comput. 26 (2015) 378–389.
[31] S. Gülcü, H. Kodaz, A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization, Eng. Appl. Artif. Intell. 45 (2015) 33–45.
[32] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341–359.
[33] M.G.H. Omran, M. Mahdavi, Global-best harmony search, Appl. Math. Comput. 198 (2008) 643–656.
[34] D.X. Zou, L.Q. Gao, J.H. Wu, S. Li, Novel global harmony search algorithm for unconstrained problems, Neurocomputing 73 (2010) 3308–3318.
[35] E.A. Mohammed, An improved global-best harmony search algorithm, Appl. Math. Comput. 222 (2013) 94–106.
[36] G.P. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 217 (2010) 3166–3173.
[37] F.V.D. Bergh, A.P. Engelbrecht, A study of particle swarm optimization particle trajectories, Info. Sci. 176 (2006) 937–971.
[38] H.R. Tizhoosh, Opposition-based learning: a new scheme for machine intelligence, CIMCA/IAWTIC, 2005, pp. 695–701.
[39] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolution, IEEE Trans. Evol. Comput. 12 (2008) 64–79.
[40] A. Banerjee, V. Mukherjee, S.P. Ghoshal, An opposition-based harmony search algorithm for engineering optimization problems, Ain Shams Eng. J. 5 (2014) 85–101.
[41] J. Kennedy, Bare bones particle swarms, Proceedings of the 2003 IEEE Swarm Intelligence Symposium (2003) 80–87.
[42] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[43] M. Khalili, R. Kharrat, K. Salahshoor, M.H. Sefat, Global dynamic harmony search algorithm: GDHS, Appl. Math. Comput. 228 (2014) 195–219.
[44] A.D. Belegundu, J.S. Arora, A study of mathematical programming methods for structural optimization. Part I: Theory, Int. J. Numer. Methods Eng. 21 (1985) 1583–1599.
[45] C.A.C. Coello, Self-adaptive penalties for GA-based optimization, Proceedings of the 1999 Congress on Evolutionary Computation 1 (1999) 573–580.
[46] C.A.C. Coello, E.M. Montes, Constraint-handling in genetic algorithms through the use of dominance-based tournament selection, Adv. Eng. Inf. 16 (2002) 193–203.
[47] H. Aguirre, A.M. Zavala, E.V. Diharce, S.B. Rionda, COPSO: Constrained Optimization via PSO Algorithm, Center for Research in Mathematics (CIMAT), 2007 (Technical report No. I-07-04/22-02-2007).
[48] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (2007) 89–99.
[49] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Appl. Math. Comput. 186 (2007) 1407–1422.
[50] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Appl. Math. Comput. 188 (2007) 1567–1579.
[51] M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, Y. Alizadeh, Hybridizing harmony search algorithm with sequential quadratic programming for engineering optimization problems, Comput. Methods Appl. Mech. Eng. 197 (2008) 3080–3091.
[52] L. Cagnina, S. Esquivel, C.C. Coello, Solving engineering optimization problems with the simple constrained particle swarm optimizer, Informatica 32 (2008) 319–326.
[53] G. Tomassetti, A cost-effective algorithm for the solution of engineering problems with particle swarm optimization, Eng. Optim. 42 (2010) 471–495.
[54] I. Maruta, T.H. Kim, T. Sugie, Fixed-structure H∞ controller synthesis: a meta-heuristic approach using simple constrained particle swarm optimization, Automatica 45 (2009) 553–559.
[55] A.H. Gandomi, X.-S. Yang, A.H. Alavi, Mixed variable structural optimization using firefly algorithm, Comput. Struct. 89 (2011) 2325–2336.
[56] B. Akay, D. Karaboga, Artificial bee colony algorithm for large-scale problems and engineering design optimization, J. Intell. Manuf. 23 (2012) 1001–1014.
[57] A.H. Gandomi, X.-S. Yang, A.H. Alavi, S. Talatahari, Bat algorithm for constrained optimization tasks, Neural Comput. Appl. 22 (2013) 1239–1255.
[58] A.R. Yildiz, Comparison of evolutionary-based optimization algorithms for structural design optimization, Eng. Appl. Artif. Intell. 26 (2013) 327–333.
[59] I. Brajevic, M. Tuba, An upgraded artificial bee colony (ABC) algorithm for constrained optimization problems, J. Intell. Manuf. 24 (2013) 729–740.
[60] A. Baykasoglu, F.B. Ozsoydan, Adaptive firefly algorithm with chaos for mechanical design optimization problems, Appl. Soft Comput. 36 (2015) 152–164.
[61] T. Ray, K.M. Liew, Society and civilization: an optimization algorithm based on the simulation of social behavior, IEEE Trans. Evol. Comput. 7 (2003) 386–396.
[62] E. Mezura-Montes, C.A.C. Coello, R. Landa-Becerra, Engineering optimization using a simple evolutionary algorithm, Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence (2003).
[63] V.C.V. De Melo, G.L.C. Carosio, Investigating multi-view differential evolution for solving constrained engineering design problems, Expert Syst. Appl. 40 (2013) 3370–3377.
[64] A. Baykasoglu, Design optimization with chaos embedded great deluge algorithm, Appl. Soft Comput. 12 (2012) 1055–1567.
[65] S. Akhtar, K. Tai, T. Ray, A socio-behavioural simulation model of engineering design optimization, Eng. Optim. 34 (2002) 341–354.
[66] Y. Wang, Z. Cai, Y. Zhou, Z. Fan, Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique, Struct. Multidiscip. Optim. 37 (2009) 395–413.
[67] X.S. Yang, A.H. Gandomi, Bat algorithm: a novel approach for global engineering optimization, Eng. Comput. 29 (2012) 464–483.
[68] M. Chih, L.L. Yeh, F.C. Li, Particle swarm optimization for the economic and economic statistical designs of the control chart, Appl. Soft Comput. 11 (2011) 5053–5067.
[69] R.P. Singh, V. Mukherjee, S.P. Ghoshal, Particle swarm optimization with an aging leader and challengers algorithm for the solution of optimal power flow problem, Appl. Soft Comput. 40 (2016) 161–177.
[70] X. Zhao, M. Turk, W. Li, et al., A multilevel image thresholding segmentation algorithm based on two-dimensional KL divergence and modified particle swarm optimization, Appl. Soft Comput. 48 (2016) 151–159.
[71] L. Xu, F. Qian, Y. Li, et al., Resource allocation based on quantum particle swarm optimization and RBF neural network for overlay cognitive OFDM System, Neurocomputing 173 (2016) 1250–1256.