Sie sind auf Seite 1von 23

See

discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/309002622

Improved Global-Best-Guided Particle Swarm


Optimization with Learning Operation for
Global Optimization Problems

Article October 2016


DOI: 10.1016/j.asoc.2016.09.030

CITATIONS READS

0 28

4 authors, including:

Haibin Ouyang
Guangzhou University
39 PUBLICATIONS 82 CITATIONS

SEE PROFILE

All content following this page was uploaded by Haibin Ouyang on 24 March 2017.

The user has requested enhancement of the downloaded file. All in-text references underlined in blue are added to the original document
and are linked to publications on ResearchGate, letting you access and read them immediately.
Applied Soft Computing 52 (2017) 9871008

Contents lists available at ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

Improved global-best-guided particle swarm optimization with


learning operation for global optimization problems
Hai-bin Ouyang a, , Li-qun Gao b , Steven Li c , Xiang-yong Kong d
a
School of Mechanical and Electric Engineering, Guangzhou University, Guangzhou, 510006, China
b
College of Information & Science, Northeastern University, Shenyang 110819, China
c
Graduate School of Business and Law, RMIT University, Melbourne 3000, Australia
d
School of Electrical Engineering and Automation, Jiangsu Normal University, Xuzhou 221116, China

a r t i c l e i n f o a b s t r a c t

Article history: In this paper, an improved global-best-guided particle swarm optimization with learning operation
Received 22 April 2016 (IGPSO) is proposed for solving global optimization problems. The particle population is divided into cur-
Received in revised form 16 August 2016 rent population, historical best population and global best population, and each population is assigned
Accepted 23 September 2016
a corresponding searching strategy. For the current population, the global neighborhood exploration
Available online 6 October 2016
strategy is employed to enhance the global exploration capability. A local learning mechanism is used
to improve local exploitation ability in the historical best population. Furthermore, stochastic learning
Keywords:
and opposition based learning operations are employed to the global best population for accelerating
Particle swarm optimization
Global exploration capability
convergence speed and improving optimization accuracy. The effects of the relevant parameters on the
Convergence speed performance of IGPSO are assessed. Numerical experiments on some well-known benchmark test func-
Accuracy tions reveal that IGPSO algorithm outperforms other state-of-the-art intelligent algorithms in terms
of accuracy, convergence speed, and nonparametric statistical signicance. Moreover, IGPSO performs
better for engineering design optimization problems.
2016 Elsevier B.V. All rights reserved.

1. Introduction exploring various methods such as mathematical programming,


gradient-based iterative methods, evolutionary strategy, evolu-
Facing the challenge of limited land, labor force, goods and mate- tionary computation and natural-based meta-heuristic algorithms.
rials, how to optimize and allocate these resources is becoming In particular, the natural-based meta-heuristics algorithms which
more and more signicant. A large number of real-world optimiza- combine the classic mathematical rules and the random natural
tion problems involving minimum time, lowest cost, maximum laws have been extensively explored in recent years. For exam-
economy benet etc. are often confronted in science, engineering, ple, particle swarm optimization (PSO) originally developed by
economics, and business elds. All these optimization problems can Kennedy and Eberhart is demonstrated as a simple yet powerful
be transformed into numerical optimization problems with vari- meta-heuristic algorithm [1,2].
ous objective functions. Without loss of generality, optimization The key idea of PSO is to emulate ocking behavior of birds
problems can be formulated as the following minimization model: to solve global optimization problems. PSO has received increas-
ing attention in the optimization research. One reason is that it
has better computational efciency, imposes fewer mathematical
min f (x), x = (x1 , x2 , x3 , , xD ) requirements and can be easily adapted for solving various real-
world engineering problems. However, similar to other population
where D is the maximum dimension number of the variables to be
based algorithms, PSO also experiences premature convergence
optimized.
due to particles getting trapped in local minima [3]. So far, many
Although a lot of studies have been done, some complex
contributions have been made to overcome this weakness. Some
and large scale numerical optimization problems still cannot be
main contributions can be summarized as: (a) parameter adjust-
solved effectively and efciently. Thus researchers are actively
ment such as the change of acceleration coefcients, inertia weight
and constriction factor; (b) addition of new operators such as learn-
ing strategy, elitist learning strategy, and aging mechanism; (c)
Corresponding author.
designing of neighborhood topology, the classic type includes fully
E-mail address: oyhb1987@163.com (H.-b. Ouyang).

http://dx.doi.org/10.1016/j.asoc.2016.09.030
1568-4946/ 2016 Elsevier B.V. All rights reserved.
988 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

connected, wheel and Von Neumann; (d) hybrid algorithm. A sum- Table 1
Pseudo code of PSO.
mary on the improvements, theoretical analysis and applications
of PSO can be found in [4]. Algorithm 1. Particle Swarm Optimization (PSO)
In PSO, all the particles are attracted by the same globally best 1: initialize parameters, and particles
particle (Gbest) and the swarm has the tendency to fast converge 2: While termination criterion not met do
to the current globally best point. Recently many researchers point 3: For i = 1 to PopuSize do
out that the main reason of the premature convergence is PSO does 4: Compute pi and pg
5: Update velocity v
not sufciently utilize its populations search information to guide
6: Update position x
the search direction [3]. Although a lot of research on using the 7: Calculate tness f(x)
local information of the current particle population has been done, 8: End For
little attention has been paid to utilize the historical best popula- 9: End While (until termination condition)
10: Return best solution found
tion and globally best population information independently. As
many modied PSO cannot escape local minimum, and are even
short of the guidance of population history information, studying 2.1. Particle swarm optimization
the adjustment and application of the Gbest and the historical best
particle (Pbest) remains an important and signicant research area PSO is a simple population-based algorithm with stochastic
to be explored. This research aims to make a contribution in this components. It was rst introduced by Kennedy and Eberhart [1,2].
regard. PSO is inspired by the movement of natural swarms and ocks.
In this paper, an improved version of PSO (improved global- It consists of a swarm of particles and each particle represents a
best-guided particle swarm optimization with learning operation, potential solution for a problem. PSO tunes its current position
IGPSO) is proposed. The IGPSO algorithm has the following toward the global optimum according to two steps: velocity updat-
attributes: ing and position updating equations. The two equations are

vi,j = vi,j + c1 r1 (pibest,j xi,j ) + c2 r2 (pgbest,j xi,j ) (1)

(1) Based on pyramid theory and potential equilibrium theory, xi,j = xi,j + vi,j (2)
the particle population is divided into current population, his-
Herr pibest,j is the personal best for particle i in the jth dimension
torical best population and global best population, and each
and pgbest,j is the global best in jth dimension. D is the maximum
population has its own independent searching strategy. In this
dimension; vi,j is the jth dimension of the ith particle velocity and
pyramid, from the top to bottom are: global best, historical best
xi,j is the jth dimension of the ith particle position; c1 and c2 are
and the current population.
the acceleration coefcients, i.e. cognitive and social coefcients,
(2) For the current population, global neighborhood exploration
respectively; r1 and r2 belong to a uniform distribution in the ranges
strategy is presented to enhance the global exploration capa-
[0,1]. The pseudo code of PSO is shown in Table 1.
bility. With the global neighborhood exploration strategy, each
particle updates its velocity and position by taking its historical
best neighborhood potential information and its globally best 2.2. Previous related work
neighborhood potential particle as exemplars.
(3) A local learning mechanism is used to improve local exploita- Since the introduction of PSO, many modications have been
tion ability for the historical best population. Different to the proposed to the PSO for reinforcing its accuracy and convergence
former PSO algorithms, each historical best particle can learn speed. A good literature review on the early works on PSO can be
from better historical best particles independently. found in [3]. Here, we briey summarize the contributions to the
(4) Two unique learning operations such as stochastic perturba- development of PSO with a focus on learning strategy or framework,
tion and opposition-based learning perturbation are built to neighborhood or local search, and history information exploration.
enhance the learning of Gbest and reduce the probability of Learning strategy or operation is widely used to improve the
premature convergence. global searching capability of the PSO algorithm. Liang et al. pro-
posed a comprehensive learning particle swarm optimizer (CLPSO)
for global optimization of multimodal functions [5]. CLPSO used a
novel learning operation to modify the velocity update, which is
All the current particles, history best particles and global best good at discouraging premature. Wu et al. developed a modied
particle are considered in our proposed algorithm. The purpose is to PSO algorithm named adaptive comprehensive learning particle
balance the global search and local search so that trapping into local swarm optimization (A-CLPSO) [6], in which a more efcient learn-
optima can be avoided. Numerical results reveal that our algorithm ing strategy was designed to ameliorate the overall optimization
is superior to all other previously introduced intelligent algorithms performance of PSO. Zhan et al. developed an adaptive particle
when applied to a well-known benchmark library and engineering swarm optimization (APSO) [7]. APSO incorporates an elitist learn-
design optimization problems. ing strategy to improve the global searching capability. In 2011,
The remainder of the paper is organized as follows. Related work an orthogonal learning particle swarm optimization (OLPSO) was
on PSO is summarized in Section (2). The proposed algorithm IGPSO presented by Zhan et al. [8]. In OLPSO, orthogonal experimental
is elaborated in Section 3. Some experimental studies are presented design (OED) is used to form an orthogonal learning (OL) strategy.
in Section 4. Finally, concluding remarks are given in Section 5. Experimental results demonstrate the effectiveness of the orthog-
onal learning strategy and OLPSO algorithm. Wang et al. designed a
self-adaptive learning framework to probabilistically steer four PSO
velocity updating strategies with different features, and presented a
2. Background and related work self-adaptive learning based particle swarm optimization (SLPSO)
[9]. It has been documented that applying multiple strategies or
This section introduces the optimization principle of the canon- methods in one algorithmic framework is useful for algorithm to
ical PSO algorithm, provides a brief overview of the previous work achieve good performance on different kinds of problems. Huang
on PSO improvements. et al. developed an example-based learning particle swarm opti-
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 989

mization for continuous optimization [10]. The idea is highly repair operator-based particle swarm optimization [30], parallel
capable examples can lead others to make progress, as they can comprehensive learning particle swarm optimizer (PCLPSO) [31]
learn from these examples to improve their own capabilities. Li have attracted a lot of attentions.
et al. proposed adaptive learning PSO (ALPSO) and self-learning
particle swarm optimizer (SLPSO) [11,12]. In ALPSO, each particle 3. Improved global-best-guided PSO with learning
can adjust its search strategy according to the selection ratios of operation (IGPSO)
four learning operators in different searching stages. Based on the
ALPSO algorithm, SLPSO algorithm incorporates two new strate- In this section, we will introduce an improved version of PSO
gies and a biased selection method to improve the performance of algorithm, namely improved global-best-guided PSO with learning
PSO algorithm. In 2011, an enhanced PSO algorithm called GOPSO operation (IGPSO). Three aspects of improvement are put forward
was proposed by Wang et al. [13]. GOPSO employs generalized in the IGPSO: global neighborhood exploration strategy, local learn-
opposition-based learning (GOBL) and Cauchy mutation to prevent ing mechanism, and two learning operations. These improvements
PSO algorithm from falling into local optima. Lim et al. developed and our motivation are described in more detail below.
several improved PSO algorithms such as adaptive two-layer par-
ticle swarm optimization algorithm with elitist learning strategy 3.1. Motivation
(ATLPSO-ELS) [14], teaching and peer-learning PSO (TPLPSO) [15],
bidirectional teaching and peer-learning PSO (BTPLPSO) [16]. These In the basic PSO algorithm, all the particles are attracted by the
improved PSO algorithms used different learning strategies to help- same global best particle, and the swarm has tendency to fast con-
ing algorithm escape from premature stagnation. Liu proposed a verge to the current globally best point, so the basic PSO algorithm
PSO based simultaneous learning framework for clustering and is generally regarded as a global version of PSO [3]. The global best
classication (PSOSLCC) [17]. Cheng designed a social learning par- particle plays a vital role in the basic PSO algorithm. It guides the
ticle swarm optimization algorithm [18], in which various learning swarm to move to a new better space in the search process. Many
strategies are employed to enhance the global and local searching researchers have realized that the global best particle serves as
ability of PSO. guidelines for the current swarm and it can accelerate convergence
Neighborhood or local search method is also important and ef- effectively. Based on this observation, the concept of global best
cient for the improving of the PSO algorithm. In the early stage, has been applied to various algorithms. For example, the DE algo-
Suganthan proposed a PSO with neighborhood operator, in which rithm called DE/best/1[32] uses the current global best individual to
the local neighborhood size is gradually increased in the search adjust mutation operation. Another greedy DE variant DE/current-
process [19]. Liang et al. provided a dynamic multi-swarm particle to-best/1 further modies the mutation of DE/best/1 to make better
swarm optimizer with local search [20]. This algorithm used the use of the current global best individual. In 2008, Global-best har-
Quasi-Newton method to enhance the local search ability of PSO. mony search (GHS) algorithm was presented by Omran [33]. The
In 2008, a new multi-strategy ensemble particle swarm optimiza- GHS modies the pitch-adjustment step of the HS by introducing
tion (MEPSO) for dynamic optimization is proposed by Du et al. the global best harmony. After that, some other improved harmony
[21], in which two new strategies, Gaussian local search and differ- search algorithms (NGHS [34] and IGHS [35]) also introduce the
ential mutation, were introduced into the searching mechanism. global best harmony in improvisation process. Additionally, the
Mendes et al. presented a fully informed particle swarm (FIPS) current global best is used in articial bee colony algorithm such as
to solve global optimization problems in 2004 [22]. In the fully Gbest-guided articial bee colony (GABC) algorithm [36].
informed neighborhood, all neighbors inuenced each other. Nasir In this paper, we assess carefully the use of the current global
et al. presented a variant of single-objective PSO called dynamic best and aim to further explore the utilization and adjustment of
neighborhood learning particle swarm optimizer (DNLPSO) [23]. the current global best. After analyzing the swarm characteristic of
In DNLPSO, the exemplar particle is selected from a neighborhood the basic PSO algorithm, and a simple conclusion is that the particle
and the learner particle can learn from the historical information swarm can be regrouped as three types: current swarm, historical
of its neighborhood or sometimes from that of its own. Hu et al. best swarm and global best swarm (only the global best particle).
presented a modied particle swarm optimization algorithm, in PSO algorithm and many other improved PSO variants explore the
which a dynamic neighborhood strategy is employed to improve current swarm only. This exploring strategy hardly prevents the
the performance of PSO [24]. algorithm from getting stuck in the local optima. Therefore, com-
The exploration of history information can help PSO to reduce bining with the framework of the basic particle, we propose three
the blindness of global search and improve the efciency of explor- modications to adjust the three types of swarm independently.
ing. However, there is not much work devoted to discussing or using Our purpose is to improve the optimization potential of PSO algo-
history information for the improvement of PSO algorithm. Tang rithm and accelerate the convergence speed.
et al. presented a feedback learning particle swarm optimization
algorithm with quadratic inertia weight (FLPSO-QIW) to solve opti- 3.2. Global neighborhood exploration strategy
mization problems [25]. In FLPSO-QIW, each particles history best
tness information is used to estimate the current search environ- According to the PSO algorithm, we know that the position of
ment, and the feedback tness information is used to automatically a particle is inuenced by the best position visited by itself (i.e. its
design the learning probabilities for gaining a good convergence own experience) and the position of the best particle in the swarm
speed and search performance. A self-guided PSO with independent (i.e. the experience of swarm). However, it is hard to achieve a sat-
dynamic inertia weights setting on each particle was developed by isfactory solution in a reasonable time for the basic PSO algorithm,
Geng et al. in 2013 [26]. It is self-guided by considering the devi- as it solely depends on the personal best and the global best in the
ation between the objective value of each particle and that of the whole search process. In order to overcome this weakness, some
best particle and combining the difference of the objective value of researchers have studied various neighborhood topologies such as
each particles best position in the two continuous generations. fully connected, wheel and Von Neumann. It has been found that
In recent years, some proposed PSO algorithms such as binary the neighborhood information can efciently enhance PSO perfor-
particle swarm optimization [27], Particle swarm optimization mance. Therefore, how to explore some effective neighborhood
with time-varying acceleration coefcients [28], adaptive search information is the key to improve the optimization capability of
diversication in PSO (ADS-PSO) [29], Self-adaptive check and the PSO algorithm. In fact, the personal best and the global best
990 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

0.9 0.9

0.85 0.8

0.8
0.7
0.75
0.6
0.7
0.5
value

value
0.65
0.4
0.6
0.3
0.55

0.2
0.5

0.45 0.1

0.4 0
0 100 200 300 400 500 600 700 800 900 1000 0 100 200 300 400 500 600 700 800 900 1000
Genearation Genearation

(a) (b)
Fig. 1. Variation of versus generation number.

gather the whole historical experience of particles. It is natural to ber. Assuming max = 0.9, and min = 0.4, the maximum number of
realize that many excellent potential positions may exist around generation G = 1000, changes dynamically with the generation
the personal best and the global best positions. We thus propose number as shown in Fig. 1 which has two contrasting parts: Part
a global neighborhood exploration strategy, which combine with (a) shows the change curve of which is only based on the expo-
the neighborhood information of the personal best and the global nential decrease rule, (b) shows the change trend of according to
best. The global neighborhood exploration strategy is shown as Eq. (10). In Part (a), the value of is predetermined in each gen-
eration. In Part (b), it can be seen that the value of dynamically
vg+1
i,j
g g
= (g) vi,j + c1 r1 [pibest,j (1 1 U(0, 1)) xi,j ] changes between 0 and 0.9, it does not decrease simply with gen-
g eration because the change of stochastic number. This adjustment
+c2 r2 [pgbest,j (1 2 U(0, 1)) xi,j ] (9)
strategy is benecial to maintain the diversity of the inertia weight.
Ultimately, the diversity of inertia weight helps the algorithm to
enhance the constantly updating of velocity.
1
(g) = a exp(b g 2 ) rand, b = ln( min ), a
G2 1 max
= max exp(b) (10) 3.3. Local learning mechanism

In the basic PSO algorithm, each particle has a corresponding


Here g and g + 1 are the current and next iteration, respectively. personal best particle. All the personal best particles are gathered
G is the maximum number of generation. (g) is the value of inertia together into a historical best swarm. Although much work on the
weight in the current iteration. c1 and c2 are the acceleration con- improvements of the PSO algorithm has been done, only a few
improvements are proposed for the independent updating of per-
stants, r1 and r2 are two uniformly distributed random numbers sonal best particle. In this paper, a local learning mechanism is
independently generated within [0,1], pibest,j is the personal best developed to update the personal best particle independently as
for particle i in the jth dimension, j = 1,2,. . .,D. pgbest,j is the global below:
best in jth dimension. vi,j is the jth dimension of the ith particle
velocity, xi,j is the jth dimension of the ith particle position. max  g g g g g
pibest,j + Gauss(0, 1) (pm pm ), f (pm ) f (pm )
1 best,j 2 best,j 1 best 2 best
and min represents the maximum and minimum of the inertia g+1
yibest,j =
g g g g g
(11)
pibest,j + Gauss(0, 1) (pm best,j pm best,j ), f (pm best ) < f (pm best )
weight respectively. 1 and 2 is a scaling factor, U(0,1) represents 2 1 2 1

a uniform distribution random number in the ranges [0,1], and rand


is a stochastic number between 0 and 1.
Here, the dimension value j = 1,2,. . .,D, each new variables gen-
The global neighborhood exploration strategy includes two g+1
erated independently. yibest,j is the jth dimension of the candidate
modications: using the personal best and the global best neigh- g g
bor information and the adjustment of inertia weight. We employ personal best position. pm andpm are randomly selected
1 best,j 2 best,j
the interval ((1-U(0,1)) pibest,j , (1 + U(0,1)) pibest,j ) and ((1- from historical best swarm, m1 , m2 D (1, 2, ..., NP), and> m1 = / m2 ,
U(0,1)) pgbest,j , (1 + U(0,1)) pgbest,j ) to replace the personal best NP is the number of particle population, f(x) means the objective
and the global best in the velocity updating. This aims at explor- function value of x. Gauss(0,1) denotes a Gaussian random num-
ing the potential better solution near the current personal best ber with a mean of 0 and a standard deviation of 1. pibest,j is the
and global best and avoiding the algorithm trapped into the cur- personal best for particle i in the jth dimension. If the objective
g+1
rent global best point. Many researchers have advocated that the function value of the candidate personal best positionyibest is bet-
value of should be large in the exploration state and small in ter than that of the corresponding personal best position pibest ,
g+1
the exploitation state [37]. However, it is not necessarily correct replace the personal best position pibest withyibest . In the personal
to decrease simply with time [9]. Here, the exponential decrease best neighborhood learning strategy, each personal best particle
rule and randomness are integrated into the dynamical control of can learn from a better personal best particle in the historical best
the inertia weight . The value of maintains a decrease trend swarm, which effectively enhances the communication among the
with the generation, and it is also affected by a stochastic num- historical best swarm.
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 991

Table 2
Comparison algorithms.

Algorithm Methods Authors and reference

BBPSO Bare bones particle swarms Kennedy [41]


CLPSO Comprehensive learning particle swarm optimizer Liang et al. [5]
APSO Adaptive particle swarm optimization Zhan et al. [7]
DMS-PSO Dynamic multi-swarm particle swarm optimizer Liang and Suganthan [20]
DE/best/1 Differential evolution Storn and Price [32]
ODE Opposition-Based Differential Evolution Rahnamayan, et al. [39]
ABC Articial bee colony algorithm Karaboga, et al. [42]
GABC Gbest-guided articial bee colony algorithm Zhu and Kwong [36]
IGHS Improved global-best harmony search algorithm Mohammed El-Abd [35]
GDHS Global Dynamic Harmony Search algorithm: GDHS Khalili, et al. [43]
IGPSO Improved Global-best-guided Particle Swarm Optimization with Learning Operation Proposed algorithm

3.4. Stochastic learning and opposition based learning operations


pgbest,j
b1 b2
Generally speaking, the global best particle guides the swarm to
move forward to a satisfactory position. It plays an important role
in the basic PSO algorithm. However, due to the lack of prior knowl-
xjL min(xj) max(xj) xjU
edge and self-regulated learning ability of the global best particle
(Gbest), Gbest cannot obtain signicant improvement depending Fig. 2. Improved opposition-based learning operation.
on itself. So, the whole swarm falling in the domain of local minima
and the problem of early convergence arises in the PSO algorithm. ugbest,j the jth dimension of the candidate Gbest. is a random
If Gbest cannot obtain effective updating, the search ability of the number between 1 and 1. If the objective function value of the
entire swarm will be reduced. Therefore, it is essential to design candidate Gbest position ugbest is better than that of the corre-
an effective operation to improve the self-regulated learning abil- sponding Gbest position pgbest , replace the Gbest position pgbest with
ity of the Gbest. Two simple yet powerful learning operations are ugbest . Stochastic learning enlarger the search space and improve
employed to enhance the quality of Gbest in this paper. The two the extensive learning capability of the Gbest.
learning operations (stochastic perturbation and opposition-based (b) Opposition-based learning
learning perturbation) are described in the following. The concept of opposition-based learning (OBL) was proposed
(a) Stochastic learning by Tizhoosh [38] and has been applied to various stochastic
The Gbest works as a guide for the current swarm. It must optimization algorithms [39,40]. The key idea in OBL is the simulta-
enhance extensive learning capability and absorb the superiority neous consideration of an estimate and its corresponding opposite
of any other particles. Based on this consideration, a stochastic estimate (i.e., guess and opposite guess) to achieve a better approx-
learning is proposed as imation for the current candidate solution, increase the coverage
ugbest,j = pgbest,j +  (pgbest,j xa,j ) (12) of the solution space and reduce exploration time.
To describe the opposition-based learning in detail, we need to
Here, the dimension value j = 1,2,. . .,D. xa,j is the jth dimension introduce the following denitions.
of the randomly produced particle, which is randomly generated in
the original domain [xj L , xj U ]. xj L and xj U are the lower and upper of Denition 1. Let x be a real number dened on a certain interval
the jth dimension variable. pgbest,j is jth dimension of the Gbest, x [a, b]. The opposite number x is dened as

Fig. 3. Flowchart of the IGPSO algorithm.


992 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

Table 3
Benchmark functions (f1 -f14 ).

Function name Function model Initial range fmin


D
D
Sphere f1 = xi2 [100, 100] 0
i=1
 D

D
D
Schwefel 2.22 f2 = xi  + xi  [100, 100] 0

 i i=12
i=1


D

D
Schwefel 1.2 f3 = xj [100, 100] 0
i=1 j=1
D
Schwefel 2.21 f4 = max{|xi |, 1 i D} [100, 100] 0

D1

2 2 D
Rosenbrock f5 = 100(xi+1 xi ) + (xi 1) [30, 30] 0
i=1
 D
2 D
Step f6 = (xi + 0.5) [100, 100] 0
i=1
 D
D
Quartic f7 = ixi4 + random[0, 1) [1.28, 1.28] 0
i=1
 D
D
Schwefel f8 = xi sin xi  [500, 500] 418.9829D
i=1
D
 D
Rastrigin f9 = xi2 10 cos (2xi ) + 10 [5.12, 5.12] 0
i=1

D
1 
D

f10 = 20 exp(0.2
1 D
Ackley xi2 ) exp( cos 2xi ) + 20 + e [32, 32] 0
D D
 D  D i=1 i=1

1
  
Griewank f11 = xi2 cos xi i + 1 [600, 600]
D
0
4000
i=1 i=1


 D1
2 2 2
f12 = {10sin (y1 ) + (yi 1) [1 + 10sin (yi+1 )]
D
i=1 D
Penalized [50, 50] 0

D
1
2
+(yn 1) } + u(xi , 10, 100, 4)yi = 1 + (xi + 1)
4
i=1

D1
2 2 2
f13 = 0.1{sin (3x1 ) + (xi 1) [1 + sin (3xi+1 )]
i=1


D
2 2
+(xn 1) [1 + sin (2xn )]} + u(xi , 5, 100, 4) D
Penalized 2 [50, 50] 0
m
i=1

k(xi a) , xi > a
u(xi , a, k, m) = 0 a xi a
m
k(xi a) , xi < a
n1  
 0.25 0.1 2 D
Schafferf 7 f14 = (xi2 + xi+1
2
) (sin (50(xi2 + xi+1
2
) ) + 1) [100, 100] 0
i=1

x = a + b x (13) (x1 , x 2 , x 3 , , x n ). Thus, we can calculate f (x)andf (x) in each iter-


ation. If f (x) f (x), we employ the vectorx, otherwise x. In this
Similarly, the opposite number in a multidimensional case can
approach, opposition is a way to reach far points in the solution
be dened.
space, which may have a greater tness.
Denition 2. Let z = (x1 , x2 , x3 , , xn ) be a point in a n- In order to enhance the self-study of the Gbest, an improved
dimensional coordinate system with x1 , x2 , x3 , , xn and xi opposition-based learning strategy is introduced as follow:
[ai , bi ]. The opposite pointzis completely dened by its coordinates ugbest,j = b1 + b2
x1 , x2 , x3 , , xn where
b1 = xjL + max(xj ) pgbest,j (15)
x i = ai + bi xi (14)
b2 = xjU + min(xj ) pgbest,j
Opposition-based learning technique can be now explained
as follows. Let x = (x1 , x2 , x3 , , xn ) and f (x)be a proper eval- Here, the dimension value j = 1,2,. . .,D. and are cooperation
uation function. Using (14) we can get the opposite vectorx = factors, +=1, max(xj ) and max(xj ) denote the lowest and the high-
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 993

Table 4
Benchmark functions (f15 -f25 ).

Function name Function model Initial range fmin


 2 2  2 4
 2
 
D
Zakharov f15 = xi2 + 0.5ixi + 0.5ixi [100, 100] 0
i=1 i=1 i=1
1

1

25
1
Foxholes f16 = [ + ] [65.536, 65.536]2 0.998
500 
2
j=1 6
j+ (xi aij )
i=1

11
2
x1 (bi + bi x2 )
2
4
Kowalik f17 = (ai 2
) [5, 5] 3.08e04
bi + bi x3 + x4
i=1
1 6 2
6 Hump camel back f18 = 4x12 2.1x14 + x + x1 x2 4x22 + 4x24 [5, 5] 1.0316285
3 1
2
5.1 2 5 1
Branin f19 = (x2 x1 + x1 6) + 10(1 ) cos x1 + 10 [5,10]2 or [0,15]2 0.398
4 2  8
2
f20 = [1 + (x1 + x2 + 1) (19 14x1 + 3x12 14x2 + 6x1 x2 + 3x22 )]
2
Goldstein Price [5, 5] 3
2
[30 + (2x1 3x2 ) (18 32x1 + 12x12 + 48x2 36x1 x2 + 27x22 )]

4

3
2 3
Hartman 3 f21 = - ci exp[ aij (xj pij ) ] [0, 1] 3.86278
i=1 j=1

 4
 6
2 6
Hartman 6 f22 = - ci exp[ aij (xj pij ) ] [0, 1] 3.32
i=1 j=1
2 4
f23 = exp(0.5(x12 + x22 25) ) + sin (4x1 3x2 )
2
Goldstein Price 2 [5, 5] 1
2

+0.5(2x1 + x2 10)
2
sin x12 + x22 0.5
2
Schaffer f24 = 2
0.5 [100, 100] 1
[1 + 0.001(x12 + x22 )]
2 5

Shubert f25 = j cos((j + 1)xi + j) [10,10]2 186.7309


i=1 j=1

est values of the ith variable in the current swarm, respectively. learning control parameter Lc and cooperation factors and .
xj L and xj U are the lower and upper of the jth dimension variable. The maximum number of function evaluation Maxi FEs, the current
pgbest,j is jth dimension of the Gbest. Fig. 2 shows the improved number of function evaluation FEs.
opposition-based learning strategy. Step 2. Initialize swarm particles.
According to Eq. (9), we can obviously obtain that b1 and pgbest,j Initialize NP particles with random positions and velocities in
are symmetric around (xj L + max(xj ))/2, b2 and pgbest,j are symmet- the search space [xj L , xj U ] (j = 1, 2, 3, . . ., D). Calculate the objec-
ric around (xj U + min(xj ))/2. In Fig. 2, b1 and pgbest,j are symmetric tive function value of each particle and determine the global best
around the red dotted line, and b2 and pgbest,j are symmetric around particle (Gbest). Meanwhile, build the historical best swarm by the
the blue dotted line. It can be seen from Fig. 2 that the candidate current initialized swarm.
Gbest ugbest is generated by the cooperation of the two opposition- Step 3. Update the Current swarm
based learning particles. Thus it enhances the self-study of the According to the improved velocity updating strategy and the
Gbest and extends the exploration. original position updating (Eq. (9) and (10)), generate new particle,
In order to control the implementation of the two learning oper- update the current swarm, determine the personal best particle
ations, a learning control parameter Lc is used and compared with (Pbest) and Gbest. Note that the Pbest are saved in the historical
the Lc and a uniform random number. When the value of Lc is larger best swarm.
than the uniform random number, the two learning operations are Step 4. Update the historical best swarm
carried out in the proposed algorithm. Gbest learning operation Based on the historical best swarm, employing the personal best
with a certain probability aims at updating the Gbest effectively. neighborhood learning (Eq. (11)) to generate the candidate per-
sonal best particle yibest and update the corresponding Pbest in the
3.5. Complete steps of IGPSO algorithm historical best swarm.
Step 5. Update the Gbest independently
According to the discussion above, the complete steps of IGPSO Comparing the value of Lc with the uniform random number,
algorithm are summarized as follows: if the value of Lc larger than the uniform random number, apply
Step 1. Initialize algorithm and optimization problem parame- the stochastic learning and opposition-based learning operation to
ters. adjust the Gbest.
The optimization problem is dened as Minimize f(x) subject Step 6. Check the stopping criterion.
to xj L xj xj U (j = 1, 2, 3, . . ., D), where xj L and xj U are the lower If the maximal function evaluation number (Maxi FEs) is satis-
and upper bounds for decision variables, respectively. D is the ed, computation is terminated. Otherwise, repeat Steps 3, 4 and
maximum number of the problem dimension. The HS algorithm 5.
parameters are specied in this step as well. They are the number In order to illustrate IGPSO algorithm clearly, a owchart of the
of particles NP, the acceleration constants c1 and c2 , the maximum proposed algorithm is provided in Fig. 3.
and minimum of the inertia weight max and min , scaling factor,
994 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

4. Experiments and results

10290.26
3.95E-198
3.27E-201
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.96E + 01

8.18E + 02
9.88E + 00

0.00E + 00
0.00E + 00

1.04E-02
1.48E-97
6.13E-97

2.46E-04
1.93E-04

4.33E-05
1.33E-05
5.70E-03
[0.1, 0.5]

In this section, experimental studies are carried out to assess the


performance of the proposed algorithm in comparison with other
algorithms.

4.1. Algorithms and problems


[0.01, 0.05]

9886.399
4.39E-199

7.79E-197
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
1.48E + 01
1.18E + 01

8.16E + 02
5.53E-96
2.47E-95

1.89E-04
1.63E-04

1.93E-05
6.73E-06
1.44E-02
3.08E-02
The proposed algorithm is compared with other well-known
PSO variants and state-of-the-art meta-heuristic algorithms. All the
comparison algorithms are listed Table 2. The comparison algo-
rithms include PSO variants such as BBPSO [41], CLPSO [5], APSO
[7], and DMS-PSO [20], differential evolution (DE) variants such as
[0.001, 0.005]

11010.92 DE/best/1 [32] and ODE [39], articial bee colony (ABC) variants
7.91E-153

8.63E-164
2.04E-153

0.00E + 00
0.00E + 00
0.00E + 00
9.20E + 00
1.15E + 01
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.42E + 03

0.00E + 00
1.65E-76

3.55E-15
5.70E-76

2.26E-04

2.12E-06
1.77E-06
6.13E-03
1.29E-02
2.00E-04

such as ABC [42] and GABC [36], HS variants such as IGHS [35] and
GDHS [43]. In order to make a fair comparison among the algo-
rithms, the parameter congurations of different algorithms are
set according to that proposed by their authors, respectively.
In the eld of evolutionary computation, it is common to com-
pare different algorithms using a large test set, especially when
9907.295
6.67E-202

2.27E-102
9.70E-102
1.84E-207
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
1.44E + 01

5.75E + 02
1.20E + 01

1.81E-04
1.41E-04

1.98E-05
7.52E-06
2.32E-02
3.90E-02
the test involves function optimization. However, the differences
in test problem sets may induce bias toward particular algo-
0.5

rithms when assessing their performance. In order to determine


whether an algorithm is better than another, a large test set con-
taining 25 benchmark problems with different characteristics is
employed. Among the 25 benchmark problems, Sphere function,
9464.194
1.58E-186

4.81E-190
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.17E + 01
1.19E + 01

1.15E + 03
2.25E-93
8.26E-93

1.95E-15
1.81E-15
2.56E-04

5.19E-03
2.32E-02
1.37E-02
3.12E-02
3.05E-04

Schwefel problem 2.22, Schwefel 1.2, Schwefel 2.21, Rosenbrock,


Step, Quartic functions are unimodal. Rastrigin, Ackley, Griewank,
0.25

Penalized and Penalized 2 are difcult multimodal problems where


the number of local optima increases with the problem dimen-
sion. Foxholes, Kowalik, 6 Hump camel back, Branin, Goldstein
Price, Hartman 3, Hartman 6, Shekel 5, GoldsteinPrice 2, Zakharov,
11367.08
6.95E-135

2.54E-143
3.10E-134

1.09E-142
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.56E + 03
4.62E + 00
9.27E + 00

0.00E + 00
1.71E-67
4.16E-67

3.55E-15
4.44E-04
3.27E-04

5.18E-03
2.32E-02
2.65E-02
5.55E-02

Schaffer are low-dimensional functions where a great majority of


functions have many local optima. Although this test set is not
0.1

exhaustive, it is large enough as it includes many different types


of problems such as unimodal, multimodal, regular, irregular, sep-
arable, non-separable and multidimensional. The 25 benchmark
functions with different traits are described in Tables 34.
12367.46
1.522E-92

3.25E-110
9.40E-110

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.54E + 01
2.15E + 01
4.59E + 00
8.96E + 00

9.03E + 02

0.00E + 00

The experiments are implemented on IntelTM CoreTM 2 Quad


4.31E-92
4.37E-44
1.55E-43

3.55E-15
6.48E-04
3.16E-04

5.18E-03
2.32E-02
1.26E-02
2.62E-02

CPU Q9400 dual core processor 2.66 GHz and 2.66 GHz with 3.50GB
0.05

memory physical address extension. All algorithms are coded and


carried out by using MATLAB 2008a version under the environment
of Microsoft Windows XP Professional.
0.00E + 00
0.00E + 00
3.43E + 01
2.55E + 01

9.18E + 00
5.20E + 00

12569.4
2.36E-36
9.54E-36
6.56E-18
1.39E-17
2.28E-46
8.43E-46

1.81E-13
8.11E-13
5.51E-15
3.22E-03
2.86E-03

3.37E-02

4.07E-15

7.77E-08

8.17E-03
2.16E-02
4.06E-03
8.03E-03

5.70E-08

4.2. Effects of algorithm parameters


0.01

In this section, we discuss and analyze the effect of the parame-


ters (1 and 2 , Lc , and ) on the performance of IGPSO algorithm.
13 classic test functions with various features such as unimodal,
mutilmodal, noise, ill-condition are used to investigate the impact
0.00E + 00
0.00E + 00
4.32E + 01
4.26E + 01
2.99E + 00
7.29E + 00

12569.4
1.43E-32
2.18E-32
5.47E-16
7.63E-16
5.22E-38
1.27E-37

1.25E-13
3.87E-03

1.58E-01
1.86E-08
7.85E-08

4.01E-13

1.26E-02
5.68E-08
1.23E-07
8.19E-03
1.68E-02
3.03E-03

9.10E-03

of these parameters.
0.005

A brief description of the 13 functions is shown in Table 3. The


dimension of function is xed as 30 and the experiments of each
parameter are conducted over 40 independent runs for each func-
Effects of 1 on the performance of IGPSO.

tion. We apply the variable-controlling approach to analyze the


0.00E + 00
0.00E + 00
3.49E + 01
2.88E + 01
6.14E + 00
9.62E + 00

12569.5

inuence of each parameter on the performance of IGPSO algo-


9.68E-03

3.00E-08
1.08E-08
4.94E-03
1.58E-29
4.89E-15
9.96E-15
6.16E-36
1.59E-35

1.13E-13
6.71E-30

2.24E-02

4.03E-13
8.12E-03

1.26E-02
5.60E-03
4.30E-03

1.05E-02
5.00E-03

rithm. The variable-controlling approach means only one factor is


0.001

different in each experiment, so that the effect of that single factor


1

can be determined. Before the experiment, we provided the default


parameter settings of IGPSO algorithm, such as c1 = c2 = 2, max = 0.9
and min = 0.4, 1 = 0.05, and 2 = 0.01, Lc = 0.75, = 0.25 and =1
Index

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

(Table 4).
SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

4.2.1. Scalar parameters 1 and 2


Table 5

We discuss the inuence of different scalar parameter1 and 2


f10

f11

f12

f13
f1

f2

f3

f4

f5

f6

f7

f8

f9
F

on IGPSO algorithm here. The scalar parameter1 and 2 are xed


H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 995

to seven values, such as 1 , 2 = (0.001, 0.005, 0.01, 0.05, 0.1, 0.25,

9678.749
8.47E-214

1.02E-209
6.84E-105
4.28E-104
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00

9.29E + 00
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
1.27E + 03
2.06E + 01
0.5). Or the value of 1 and 2 are randomly generated from three

1.41E-04

7.69E-03
2.17E-04

7.27E-05
2.82E-05

1.68E-02
[0.1, 0.5]
interval, such as [0.001, 0.005], [0.01, 0.05], [0.1, 0.5]. Therefore, the
values of 1 and 2 change from 0.001 to 0.5 and the other parame-
ters remain as in the default parameter settings in the experiment.
The maximum number of objective function evaluations (OFE) is
4104 . The statistic results are given in Table 5 and Table 6 in terms
[0.01, 0.05]

9695.905
3.39E-105
1.91E-104
2.15E-208

1.14E-209
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
0.00E + 00
1.84E + 01

8.97E + 02
1.05E + 01
of the mean and standard deviation values of the solutions found

2.19E-04
1.87E-04

2.48E-05
8.11E-06
9.15E-03
2.33E-02
in 40 independent trials.
From Table 5, it can be seen that the performance of IGPSO
becomes better along with the increase of 1 in most cases such
as f1 -f4 , f7 , f9 -f11 . Because a small value of 1 narrows the neighbor-
hood information space of the historical best particle, a large value
[0.001, 0.005]

of 1 is benecial to using the potential useful information around


10483.39
1.26E-154

5.72E-156
3.62E-155
6.07E-154

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.15E + 01

1.73E + 03
1.04E + 01

0.00E + 00
6.76E-76
3.34E-75

3.55E-15
2.83E-04
1.99E-04

2.33E-06
1.27E-06
1.43E-02
2.55E-02
the historical best particle. However, for the ill-condition function
f5 , complex multimodal function f8 , and penalty functions f12 and
f13 , a small value of 1 can improve the convergence performance
of IGPSO. One reason is that the relative small value of 1 makes
the IGPSO algorithm focus on the minimal neighborhood zone of
the historical best particle, which enhance the local exploitation
9502.234
6.70E-211

1.32E-103
3.23E-207
3.20E-104

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.71E + 01

7.95E + 02
1.09E + 01
0.00E + 00

0.00E + 00

0.00E + 00
0.00E + 00
1.70E-04
1.48E-04

2.68E-05
9.21E-06
1.57E-02
2.93E-02
capability. Additionally, when 1 is randomly generated in a given
interval, we nd that the IGPSO algorithm can achieve good perfor-
0.5

mance. An arbitrary value within the range [0.1, 0.5] for 1 should
be acceptable to the IGPSO algorithm for most simple tested func-
tions. Fig. 4 provided the optimization curve of IGPSO with different
value of 1 for two typical functions f1 and f5 . Obviously, Fig. 4
9632.899
8.82E-185

2.42E-185

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.32E + 01
1.19E + 01

9.78E + 02
0.00E + 00

0.00E + 00

2.58E-15
1.61E-15
1.22E-90
5.91E-90

2.52E-04

1.64E-02
1.62E-02
2.09E-04

2.60E-03

4.40E-02

also indicates that large value of 1 can accelerate the convergence


speed.
0.25

From Table 6, it can be seen that the value of 2 has some


effects on the performance of IGPSO algorithm. When the 2 value
increases, the IGPSO performance is improved for functions f1 -f4 , f7 ,
f9 -f11 . For the ill-condition function f5 , complex multimodal func-
11045.17
3.68E-135
2.33E-134

4.57E-148
1.36E-147
0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.13E + 01

1.71E + 03
9.18E + 00

0.00E + 00
1.63E-67

3.55E-15
8.70E-67

2.67E-04
2.11E-04

7.79E-03
2.77E-02
1.78E-02
3.44E-02

tion f8 , and penalty functions f12 and f13 , a small value of 2 can
improve the convergence performance of IGPSO. Besides, when 2
0.1

is randomly generated in a given interval, we nd that the range


[0.1, 0.5] for 2 is relatively reasonable to the IGPSO algorithm for
most simple test functions. Fig. 5 provides the optimization curve
of IGPSO with different value of 1 and 2 for two typical functions
12203.46
3.75E-121
6.30E-111
3.92E-110

2.35E-120

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
3.48E + 00

6.74E + 00
1.04E + 01

1.02E + 03
9.09E + 00

0.00E + 00

1.67E-02
5.65E-54
1.63E-53

3.55E-15
5.67E-04
3.92E-04

2.59E-03
1.64E-02
7.94E-03

f4 and f8 . The optimization curve of f4 also shows that large value


of 2 can accelerate the convergence speed. But for complex func-
0.05

tions f8 , small value of 2 is benecial to the convergence of IGPSO


algorithm. In sum, there is no single best choice for 1 and 2 . They
should be adjusted according to practical optimization problems.
In our paper, large values of 1 and2 are used in simple unimodal
1.36E-104
3.03E-105

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.34E + 01

7.85E + 02
7.13E + 00
1.04E + 01
8.40E + 00

0.00E + 00
12373.6
1.77E-91
9.25E-91
1.83E-41
9.61E-41

3.55E-15
9.11E-04

1.85E-04
1.17E-03
1.47E-06
6.78E-06
9.28E-03
2.32E-02
6.30E-04

functions, small values of 1 and 2 are used in difcult multimodal


0.01

functions.

4.2.2. Learning control parameter Lc


The effect of the learning control parameter Lc on the perfor-
12534.52

mance of the IGPSO is investigated here. The value of Lc is either


6.71E-105
3.01E-104

5.45E + 00

0.00E + 00
0.00E + 00

0.00E + 00
0.00E + 00
1.55E + 01

1.99E + 02
7.15E + 00

9.53E + 00

0.00E + 00

6.45E-07
1.04E-06
1.33E-92
3.68E-92
2.68E-43

3.55E-15
1.02E-42

7.92E-04
4.75E-04

2.47E-04
1.56E-03

2.39E-02
5.18E-02

xed to be 0, 0.1, 0.25, 0.5, 0.75, 0.9, 1, or taken as a random number


0.005

with a uniform distribution from one of the three intervals ([0,0.25],


[0.25, 0.75], [0.75, 1]). Each function is tested 40 times under vari-
ous learning control parameter Lc , the mean and standard deviation
Effects of 2 on the performance of IGPSO.

value are reported in Table 7.


1.33E-103
8.25E-103

0.00E + 00
0.00E + 00

3.87E + 01
0.00E + 00
0.00E + 00
1.13E + 01
1.83E + 01
5.51E + 00
9.63E + 00

0.00E + 00

Table 7 shows that there are no signicant differences when


12560.9
4.18E-44

3.55E-15
2.90E-91
1.25E-90

1.60E-43

3.52E-04

9.85E-04
5.16E-03
9.24E-07
3.48E-06

2.13E-02
1.07E-02
7.00E-04

different values of Lc are used for most simple unimodal functions


0.001

such as f1 -f4 . However, the value of Lc affects the convergence rate


2

and stability of IGPSO algorithm for complicated unimodal function


such as f5 , multimodal functions such as f8 -f13 . When Lc = 0, which
means the IGPSO excludes the two learning operations (stochas-
Index

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

mean

tic learning and opposition-based learning), the results obtained


SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

SD

by IGPSO are unsatisfactory for complicated unimodal and multi-


modal functions. When Lc > 0 and Lc < 1, the performance of IGPSO
Table 6

becomes better along with the increase of 1 in most case such


f10

f11

f12

f13
f1

f2

f3

f4

f5

f6

f7

f8

f9
F

as f5 , f8 -f13 . when Lc = 1, which means the IGPSO algorithm must


996
Table 7
Effects of Lc on the performance of IGPSO.

F Index Lc

0 0.1 0.25 0.5 0.75 0.9 1 [0,0.25] [0.25, 0.75] [0.75, 1]

H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008


f1 mean 2.25E-91 4.70E-92 1.22E-91 7.77E-90 1.38E-92 2.42E-92 7.56E-90 1.24E-91 4.65E-92 3.16E-93
SD 1.25E-90 2.84E-91 6.12E-91 4.91E-89 8.46E-92 1.48E-91 4.49E-89 7.78E-91 2.49E-91 1.41E-92
f2 mean 2.11E-44 6.55E-44 8.80E-44 1.95E-43 1.63E-44 5.01E-44 2.75E-44 9.01E-43 2.78E-44 9.71E-44
SD 6.10E-44 3.64E-43 3.56E-43 1.16E-42 5.97E-44 1.52E-43 6.54E-44 5.62E-42 9.09E-44 4.58E-43
f3 mean 6.06E-103 2.43E-104 1.78E-103 1.01E-106 1.72E-104 9.96E-106 3.95E-104 1.87E-103 2.08E-104 1.50E-104
SD 3.76E-102 1.36E-103 1.12E-102 4.63E-106 1.08E-103 3.80E-105 2.49E-103 1.18E-102 1.17E-103 8.85E-104
f4 mean 1.26E + 01 1.33E + 01 1.57E + 01 8.60E + 00 1.10E + 01 1.45E + 01 1.25E + 01 1.45E + 01 1.32E + 01 1.15E + 01
SD 2.08E + 01 1.98E + 01 2.24E + 01 1.24E + 01 2.03E + 01 1.87E + 01 1.69E + 01 2.09E + 01 2.05E + 01 1.77E + 01
f5 mean 2.20E + 01 1.86E + 01 1.48E + 01 8.74E + 00 8.78E + 00 1.68E + 00 2.83E + 00 1.64E + 01 1.25E + 01 6.19E + 00
SD 6.47E-01 7.81E + 00 1.04E + 01 1.07E + 01 1.08E + 01 5.86E + 00 7.32E + 00 9.59E + 00 1.09E + 01 9.78E + 00
f6 mean 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
SD 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
f7 mean 7.10E-04 1.00E-03 8.04E-04 9.00E-04 8.19E-04 8.58E-04 7.48E-04 8.31E-04 7.41E-04 6.69E-04
SD 5.40E-04 8.66E-04 7.47E-04 7.78E-04 5.39E-04 5.90E-04 6.00E-04 5.78E-04 5.15E-04 4.78E-04
f8 mean 8537.695 11610.57 12003.72 12566.37 12501.62 12559.12 12563.37 12108.21 12546.04 12569.1
SD 8.94E + 02 1.66E + 03 1.22E + 03 1.88E + 01 3.89E + 02 4.56E + 01 3.77E + 01 1.14E + 03 7.97E + 01 1.18E + 00
f9 mean 4.44E-17 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 6.47E-01 0.00E + 00 0.00E + 00
SD 2.81E-16 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 4.09E + 00 0.00E + 00 0.00E + 00
f10 mean 3.64E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15
SD 5.62E-16 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
f11 mean 2.37E-10 0.00E + 00 0.00E + 00 3.69E-04 3.30E-12 1.30E-16 0.00E + 00 2.47E-04 3.39E-05 0.00E + 00
SD 1.50E-09 0.00E + 00 0.00E + 00 2.34E-03 2.08E-11 8.25E-16 0.00E + 00 1.56E-03 2.14E-04 0.00E + 00
f12 mean 6.22E-02 2.07E-02 1.04E-02 5.19E-03 2.59E-03 3.77E-07 4.07E-07 1.81E-02 2.59E-03 5.15E-07
SD 9.33E-02 7.12E-02 3.93E-02 2.29E-02 1.64E-02 1.43E-07 3.15E-07 5.19E-02 1.64E-02 6.84E-07
f13 mean 3.92E-02 4.67E-02 2.45E-02 2.24E-02 1.15E-02 1.23E-02 8.44E-03 4.27E-02 1.28E-02 6.28E-03
SD 6.49E-02 9.75E-02 4.91E-02 4.68E-02 2.50E-02 2.55E-02 1.96E-02 6.22E-02 2.20E-02 1.42E-02
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 997

Function optimization curve


Function optimization curve 6
10
0 =0.0001
10 1

=0.005
1

=0.01
1
=0.05
1
4
-50
10 1
=0.1
10

Optimal objective function value


=0.25
1
Optimal objective function value

=0.5
1
[0.001,0.005]
1

2 [0.01,0.05]
1
-100 10 [0.1,0.5]
10 =0.0001 1
1

=0.005
1

=0.01
1

=0.05
1

-150 =0.1 0
10 1 10
=0.25
1

=0.5
1

[0.001,0.005]
1

[0.01,0.05]
1

-200 [0.1,0.5] -2
10 1
10

0 0.5 1 1.5 2 2.5 3 3.5 4 0 0.5 1 1.5 2 2.5 3 3.5 4


Objective function evaluation number 4 Objective function evaluation number 4
x 10 x 10

(f1) ( f 5)
Fig. 4. Optimization curve of IGPSO with different value of 1 (f1 and f5 ).

Function optimization curve Function optimization curve


3
-10
5
10

0
10
Optimal objective function value

Optimal objective function value

=0.0001
1

=0.005 4
-5
1
-10
10 1
=0.01
=0.0001
1
=0.05
1
=0.005
1
=0.1
1
=0.01
1
=0.25
1 =0.05
1
=0.5
1 =0.1
-10 [0.001,0.005] 1
10 1
=0.25
1
[0.01,0.05]
1
=0.5
1
[0.1,0.5]
1 [0.001,0.005]
1

[0.01,0.05]
1

[0.1,0.5]
5 1
-10
0 0.5 1 1.5 2 2.5 3 3.5 4 0 0.5 1 1.5 2 2.5 3 3.5 4
Objective function evaluation number 4 Objective function evaluation number 4
x 10 x 10

(f4) ( f 8)
Fig. 5. Optimization curve of IGPSO with different value of 2 (f4 and f8 ).

carry out the two learning operations in each iteration, the IGPSO According to Table 8, the change of and has some inuence on
algorithm can nd better results in most cases but much compu- the performance of IGPSO algorithm. There is no indication that one
tation effort are needed. For the three given intervals, in contrast, setting of is superior to the other settings. However, =0 or =1
the results demonstrate that [0.75, 1] is a suitable one. So we rec- is harmful to the optimization quality of the IGPSO in most cases,
ommend the value of Lc should be large than 0.75. Besides, Fig. 6 and will deteriorate the performance of the IGPSO if the number of
depicts the convergence process of IGPSO algorithm with different iteration is increased. Based on the statistic results and optimiza-
values of Lc and also conrms that large values of Lc improve the tion trend (Fig. 7), an appropriate value of is approximately 0.25,
convergence speed of IGPSO. which implies that the value of is 0.75.

4.2.3. Cooperation factors ,


In order to assess the sensitively of the cooperation factors 4.3. Comparison of the proposed algorithm with classic
and , we use ten functions to test the performance of the IGPSO algorithms
algorithm with different values of and . Note that +=1, so we
require to study one of the two parameters only. Here, we x the To assess the performance of an algorithm, a comparison study is
value of to be 0, 0.1, 0.25, 0.35, 0.5, 0.75, 0.8, 0.9, 0.95, 1. Each test presented on a large number of benchmark functions focusing on
is implemented 40 independently. All the test results are shown in the following performance metrics: optimization accuracy (com-
Table 8. parison under fair condition such as the same number of function
998
Table 8
Effects of on the performance of IGPSO.

F Index

0 0.1 0.25 0.35 0.5 0.75 0.8 0.9 0.95 1

H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008


f1 mean 2.62E-91 6.41E-92 3.27E-92 4.34E-91 4.01E-92 1.29E-90 2.05E-91 1.88E-91 1.33E-93 1.25E-92
SD 1.65E-90 2.65E-91 1.37E-91 2.61E-90 2.26E-91 8.13E-90 8.99E-91 1.14E-90 3.87E-93 7.41E-92
f2 mean 3.47E-44 9.66E-43 2.00E-44 9.06E-44 1.63E-43 3.35E-44 6.59E-44 1.64E-44 5.18E-44 5.60E-43
SD 1.07E-43 5.84E-42 6.32E-44 3.46E-43 5.01E-43 1.20E-43 2.11E-43 4.78E-44 1.55E-43 2.37E-42
f3 mean 9.08E-106 3.15E-106 8.55E-106 2.67E-106 2.85E-106 2.18E-108 6.12E-104 9.73E-106 3.44E-104 3.01E-104
SD 3.39E-105 1.97E-105 5.37E-105 1.64E-105 1.56E-105 4.50E-108 3.52E-103 5.91E-105 2.16E-103 1.90E-103
f4 mean 1.40E + 01 1.23E + 01 1.08E + 01 7.81E + 00 1.43E + 01 1.17E + 01 9.76E + 00 1.08E + 01 1.40E + 01 1.23E + 01
SD 1.82E + 01 2.05E + 01 1.67E + 01 1.17E + 01 2.01E + 01 1.84E + 01 1.73E + 01 1.89E + 01 2.45E + 01 2.44E + 01
f5 mean 6.07E + 00 8.23E + 00 4.93E + 00 6.41E + 00 3.79E + 00 4.40E + 00 6.58E + 00 4.67E + 00 4.93E + 00 5.58E + 00
SD 9.94E + 00 1.07E + 01 9.18E + 00 9.79E + 00 8.31E + 00 8.78E + 00 1.02E + 01 8.86E + 00 9.20E + 00 9.76E + 00
f6 mean 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
SD 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
f7 mean 7.98E-04 8.23E-04 7.59E-04 8.63E-04 8.79E-04 6.83E-04 7.36E-04 8.61E-04 7.59E-04 7.77E-04
SD 6.39E-04 5.64E-04 6.00E-04 6.85E-04 5.43E-04 4.31E-04 4.74E-04 7.79E-04 5.20E-04 5.23E-04
f8 mean 12569.38 12569.24 12563.35 12361.29 12465.4 12563.39 12364.32 12558.01 12563.38 12442.01
SD 8.84E-02 7.87E-01 2.62E + 01 8.38E + 02 5.45E + 02 2.63E + 01 8.69E + 02 5.02E + 01 3.77E + 01 7.30E + 02
f9 mean 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
SD 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
f10 mean 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15 3.55E-15
SD 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
f11 mean 2.47E-04 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 2.76E-05 0.00E + 00 0.00E + 00 0.00E + 00
SD 1.56E-03 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 1.75E-04 0.00E + 00 0.00E + 00 0.00E + 00
f12 mean 1.22E-06 6.45E-07 2.59E-03 2.59E-03 7.31E-07 2.59E-03 5.18E-03 5.18E-03 2.59E-03 2.59E-03
SD 4.81E-06 1.48E-06 1.64E-02 1.64E-02 1.09E-06 1.64E-02 2.29E-02 2.29E-02 1.64E-02 1.64E-02
f13 mean 5.50E-03 2.17E-02 1.11E-02 1.36E-02 2.31E-02 1.45E-02 1.01E-02 1.92E-02 7.99E-03 1.77E-02
SD 1.03E-02 4.18E-02 2.78E-02 3.24E-02 5.18E-02 3.45E-02 2.70E-02 3.89E-02 1.94E-02 3.17E-02
Table 9
Results of the 11 comparison algorithms for f1 f8.

F D Index Algorithms

BBPSO CLPSO APSO DMS-PSO DE/best/1 ODE ABC GABC IGHS GDHS IGPSO

f1 30 Mean 1.28E-216 8.06E-96 2.30E-12 1.53E-113 5.01E-57 5.01E-57 8.01E-19 8.01E-19 2.17E-11 2.57E-04 0.00E + 00
SD 0.00E + 00 3.53E-95 1.03E-11 5.14E-113 1.57E-56 1.57E-56 2.36E-18 2.36E-18 2.98E-12 3.67E-05 0.00E + 00
50 Mean 1.00E + 03 2.28E-85 9.54E-140 8.62E-103 1.39E-53 1.00E-165 7.97E-25 1.80E + 02 5.96E-11 9.29E-04 4.24E-297
SD 3.08E + 03 3.87E-85 3.61E-139 2.82E-102 2.20E-53 0.00E + 00 1.25E-24 4.05E + 02 5.63E-12 1.37E-04 0.00E + 00
100 Mean 1.95E + 04 4.16E-75 2.50E + 03 4.89E-77 7.68E-37 5.67E-81 1.29E-23 8.58E + 02 2.14E-10 8.07E-03 4.45E-156
SD 1.54E + 04 1.80E-74 4.44E + 03 1.40E-76 2.22E-36 9.55E-81 2.91E-23 1.52E + 03 1.22E-11 8.87E-04 9.29E-156
f2 30 Mean 4.40E + 02 5.50E-57 5.38E-03 2.18E + 02 2.73E-32 4.73E-32 5.18E-10 5.18E-10 1.21E-02 6.76E-02 0.00E + 00
SD 2.23E + 02 1.92E-56 1.01E-02 1.83E + 02 3.54E-32 5.54E-32 7.84E-10 7.84E-10 3.33E-02 4.55E-03 0.00E + 00
50 Mean 9.80E + 02 4.47E-52 2.94E-38 3.09E + 02 2.62E-33 1.03E-42 8.91E-13 4.81E + 01 1.49E-01 1.67E-01 1.47E-146
SD 1.77E + 02 1.22E-51 1.31E-37 2.87E + 01 6.71E-33 3.53E-42 4.85E-13 2.68E + 01 2.52E-01 1.09E-02 4.65E-146
100 Mean 2.30E + 03 3.45E-45 1.50E + 01 5.83E + 02 8.38E-24 3.60E-06 2.34E-12 8.47E + 01 3.96E-01 7.20E-01 0.00E + 00

H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008


SD 5.04E + 02 1.52E-44 3.66E + 01 3.54E + 01 2.27E-23 1.13E-05 6.21E-13 3.06E + 01 3.07E-01 4.16E-02 0.00E + 00
f3 30 Mean 1.20E + 04 9.32E-03 7.70E-08 2.95E-04 3.59E-06 3.59E-06 7.80E + 03 7.80E + 03 1.07E-10 1.54E-03 0.00E + 00
SD 7.04E + 03 1.93E-02 2.44E-07 5.80E-04 5.24E-06 5.24E-06 2.58E + 03 2.58E + 03 2.73E-11 3.62E-04 0.00E + 00
50 Mean 4.04E + 04 5.43E + 01 8.83E + 03 2.54E + 00 6.76E-02 6.64E-22 2.32E + 04 1.98E + 04 1.24E-09 3.24E-02 0.00E + 00
SD 1.97E + 04 3.14E + 01 9.32E + 03 2.79E + 00 4.23E-02 1.13E-21 9.00E + 03 9.77E + 03 2.76E-10 7.71E-03 0.00E + 00
100 Mean 1.27E + 05 7.83E + 03 3.31E + 04 1.27E + 03 9.50E + 01 1.81E-05 8.81E + 04 8.94E + 04 2.72E-06 6.17E + 00 0.00E + 00
SD 4.29E + 04 1.68E + 03 1.89E + 04 4.94E + 02 2.77E + 01 4.69E-05 3.47E + 04 3.51E + 04 9.49E-07 7.70E-01 0.00E + 00
f4 30 Mean 1.85E-04 6.53E-02 3.02E-05 7.94E-03 1.76E + 01 1.76E + 01 1.32E + 01 1.32E + 01 2.04E-06 7.50E-03 0.00E + 00
SD 4.22E-04 3.79E-02 5.85E-05 7.62E-03 4.86E + 00 4.86E + 00 4.76E + 00 4.76E + 00 1.98E-07 7.67E-04 0.00E + 00
50 Mean 6.22E + 01 1.49E + 00 1.31E-02 9.95E-01 2.86E + 01 6.12E-02 3.70E + 01 3.47E + 01 3.19E-06 1.45E-02 0.00E + 00
SD 2.15E + 01 5.47E-01 1.50E-02 5.23E-01 5.23E + 00 3.36E-02 2.09E + 01 1.58E + 01 2.32E-07 1.69E-03 0.00E + 00
100 Mean 9.59E + 01 9.02E + 00 3.52E + 01 1.32E + 01 4.35E + 01 9.27E-02 5.51E + 01 5.34E + 01 3.53E + 00 6.71E-02 0.00E + 00
SD 1.16E + 00 1.70E + 00 6.41E + 00 2.01E + 00 5.54E + 00 2.33E-02 2.10E + 01 1.93E + 01 2.67E-01 1.05E-02 0.00E + 00
f5 30 Mean 1.81E + 04 4.18E + 01 7.00E + 01 3.49E + 01 2.03E + 01 2.03E + 01 4.83E-01 4.83E-01 2.42E + 01 2.65E + 01 5.37E-05
SD 3.69E + 04 3.35E + 01 1.25E + 02 2.76E + 01 1.76E + 01 1.76E + 01 4.43E-01 4.43E-01 2.68E + 01 1.79E + 01 6.18E-05
50 Mean 2.27E + 04 6.97E + 01 4.52E + 03 7.03E + 01 6.43E + 01 7.16E-25 2.92E-01 2.39E + 04 5.97E + 01 5.03E + 01 3.45E-03
SD 3.99E + 04 4.45E + 01 2.01E + 04 4.49E + 01 3.83E + 01 1.56E-24 2.55E-01 7.77E + 04 3.29E + 01 2.13E + 01 2.96E-03
100 Mean 3.18E + 07 1.46E + 02 4.86E + 03 1.20E + 02 1.98E + 02 9.69E-12 3.42E-01 6.07E + 05 1.32E + 02 1.07E + 02 4.18E-02
SD 4.72E + 07 4.78E + 01 2.01E + 04 3.62E + 01 8.39E + 01 3.06E-11 2.23E-01 8.23E + 05 4.49E + 01 2.39E + 01 2.86E-02
f6 30 Mean 1.55E + 00 0.00E + 00 0.00E + 00 0.00E + 00 5.00E-02 5.00E-02 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
SD 3.12E + 00 0.00E + 00 0.00E + 00 0.00E + 00 2.24E-01 2.24E-01 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00 0.00E + 00
50 Mean 3.01E + 03 0.00E + 00 2.50E + 00 0.00E + 00 5.65E + 00 0.00E + 00 0.00E + 00 5.92E + 02 0.00E + 00 0.00E + 00 0.00E + 00
SD 5.71E + 03 0.00E + 00 8.24E + 00 0.00E + 00 6.43E + 00 0.00E + 00 0.00E + 00 7.23E + 02 0.00E + 00 0.00E + 00 0.00E + 00
100 Mean 1.83E + 04 1.00E-01 1.01E + 03 3.50E-01 2.60E + 02 0.00E + 00 0.00E + 00 2.35E + 03 0.00E + 00 0.00E + 00 0.00E + 00
SD 1.60E + 04 3.08E-01 3.08E + 03 7.45E-01 2.48E + 02 0.00E + 00 0.00E + 00 2.90E + 03 0.00E + 00 0.00E + 00 0.00E + 00
f7 30 Mean 4.10E-01 1.74E-03 2.86E-03 6.09E-04 4.65E-03 4.65E-03 1.01E-01 1.01E-01 4.22E-03 7.32E-04 1.94E-04
SD 9.81E-01 7.83E-04 9.23E-04 4.18E-04 2.72E-03 2.72E-03 3.30E-02 3.30E-02 1.95E-03 2.50E-04 7.68E-05
50 Mean 3.64E + 00 3.31E-03 4.08E-01 1.15E-03 1.64E-02 1.38E-03 3.03E-01 8.61E-01 6.04E-03 1.21E-03 3.16E-04
SD 7.50E + 00 9.05E-04 1.31E + 00 3.92E-04 8.47E-03 1.17E-03 1.50E-01 5.81E-01 1.86E-03 4.61E-04 1.75E-04
100 Mean 1.02E + 02 7.00E-03 1.35E + 01 6.40E-03 7.12E-02 1.83E-03 8.06E-01 8.57E-01 1.80E-02 2.87E-03 1.40E-04
SD 1.00E + 02 1.53E-03 3.00E + 01 2.56E-03 1.97E-02 2.02E-03 6.81E-01 6.27E-01 2.17E-03 5.74E-04 1.45E-04
f8 30 Mean 10042.54 12569.5 12569.5 12569.5 12123.35 12569.5 12569.5 12569.5 12569.5 12234.5 12569.5
SD 5.61E-02 3.50E-03 1.09E-04 8.36E-02 2.86E-02 7.80E-03 7.87E-02 2.98E-02 9.12E-04 7.67E-04 7.54E-05
50 Mean 17765.67 20949.12 20949.12 20949.12 20685.13 20949.13 20949.13 20949.14 20949.13 20048.98 20949.14
SD 1.03E + 02 3.97E-02 1.29E-03 1.05E-02 6.55E-01 4.98E-01 9.98E-03 6.45E-02 7.98E-03 6.99E-01 6.55E-03
100 Mean 38965.43 38995.04 39087.11 41898.27 41675.73 41709.27 41898.27 41898.29 39898.27 41875.27 41898.29
SD 4.66E + 03 4.51E + 02 2.36E + 03 2.07E-03 1.83E-01 4.35E-02 1.98E-02 2.13E-02 9.76E-01 4.57E-02 1.33E-02

999
1000 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

Function optimization curve


6
Function optimization curve 10
6
10 L =0
c
L =0
c L =0.1
c
L =0.1 4
c L =0.25
L =0.25 10 c

c L =0.5
4 c
10 L =0.5
c L =0.75
c
L =0.75
c L =0.9

Optimal objective function value


L =0.9 2 c
10
Optimal objective function value

c L =1
c
L =1
c L [0,0.25]
2 L [0,0.25] c
10 c
L [0.25,0.75]
c
L [0.25,0.75]
c 0 L [0.75,1]
L [0.75,1] 10 c
c

0
10
-2
10

-2
10 -4
10

-4 -6
10 10

0 0.5 1 1.5 2 2.5 3 3.5 4 0 0.5 1 1.5 2 2.5 3 3.5 4


Objective function evaluation number 4 Objective function evaluation number 4
x 10 x 10

(f7) ( f12)
Fig. 6. Optimization curve of IGPSO with different value of Lc (f7 and f12 ).

Function optimization curve Function optimization curve


6
10
5
10 =0 =0
=0.1 =0.1
=0.25 =0.25
=0.35 4 =0.35
=0.5
10 =0.5
=0.75 =0.75
0 =0.8 =0.8
Optimal objective function value

Optimal objective function value

10 =0.9 =0.9
2
=0.95 10 =0.95
=1 =1

-5 0
10 10

-2
10
-10
10
-4
10

0 0.5 1 1.5 2 2.5 3 3.5 4 0 0.5 1 1.5 2 2.5 3 3.5 4


Objective function evaluation number 4 Objective function evaluation number 4
x 10 x 10

(f9) ( f13)
Fig. 7. Optimization curve of IGPSO with different value of (f9 and f13 ).

evolutions (FEs)), scalability, non-parameter statistic signicant Each algorithm repeated 50 times independently. The mean and
(MannWhitney U test). standard deviation (SD) results for the high-dimension functions
are provided in Tables 9 and 10. The mean and standard devia-
tion (SD) results for the low-dimension functions are recorded in
4.3.1. Optimization accuracy and scalability
Table 11. Note that the computer displays zero when the number
In this section, we provide a comparison between the pro-
is smaller than 10325 . The best results are highlighted in bold face.
posed algorithm and other classic algorithms such as PSO variants
From Tables 911, it can be seen that IGPSO algorithm performs
such as BBPSO [45], CLPSO [46], APSO [47], and DMS-PSO [49],
much better than all the comparison algorithms for almost all the
differential evolution (DE) variants such as DE/best/1 [51] and
tested functions except f5 , f12 and f13 . IGPSO algorithm achieved
ODE [53], articial bee colony (ABC) variants such as ABC [54]
the global optimal value for f1 -f4 , f6 , f8 , f9 , f11 , f14 , f15 . Moreover,
and GABC [33], HS variants such as IGHS [32] and GDHS [56]. In
IGPSO offers near-global optimum solution for the other 5 func-
IGPSO algorithm, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, and
tions. BBPSO, CLPSO, APSO, and DMS-PSO perform better in some
2 = 0.01, Lc = 0.75, = 0.25 and = 1-. The parameter settings for
cases, but IGPSO algorithm beats them in terms of the quality of
the other comparison algorithms are inherited from the referenced
solution under the same number of function evaluations. BBPSO
papers. For the high-dimension functions (f1 -f15 ), the maximum
algorithm with the help of Gaussian distribution to search a good
number of objective function evaluations (OFEs) is D 104 , while
result, but the change of mean and variance depend on the per-
for the low-dimensional functions (f16 -f25 ), OFEs is set to 5 104 .
Table 10
Results of the 11 compared algorithms for f9 f15.

F D Index Algorithms

BBPSO CLPSO APSO DMS-PSO DE/best/1 ODE ABC GABC IGHS GDHS IGPSO

f9 30 Mean 9.97E + 01 1.27E + 01 4.98E-02 1.49E + 01 1.31E + 01 1.31E + 01 3.20E-11 3.20E-11 4.23E-09 3.76E-04 0.00E + 00
SD 3.44E + 01 4.22E + 00 2.22E-01 3.62E + 00 3.91E + 00 3.91E + 00 5.28E-11 5.28E-11 4.68E-10 5.53E-05 0.00E + 00
50 Mean 3.08E + 02 2.42E + 01 1.79E + 02 2.99E + 01 3.26E + 01 1.11E-12 1.27E-06 1.92E + 01 1.16E-08 1.37E-03 0.00E + 00
SD 5.64E + 01 6.40E + 00 5.32E + 01 6.30E + 00 6.49E + 00 4.39E-12 5.52E-06 9.08E + 00 9.64E-10 1.46E-04 0.00E + 00
100 Mean 8.93E + 01 7.02E + 00 5.03E + 01 1.95E + 01 1.08E + 02 4.52E-08 1.08E-02 4.64E + 01 4.41E-08 1.16E-02 0.00E + 00
SD 2.89E + 01 1.00E + 02 2.61E + 01 2.59E + 01 1.53E + 01 1.41E-07 1.81E-02 1.63E + 01 3.47E-09 1.14E-03 0.00E + 00
f10 30 Mean 2.00E + 01 1.42E-14 7.57E-03 6.06E-12 6.04E-15 6.04E-15 1.63E-09 1.63E-09 3.44E-06 4.56E-03 3.55E-15
SD 1.04E + 01 7.46E-15 9.63E-04 3.90E-13 1.67E-15 1.67E-15 1.44E-09 1.44E-09 1.79E-07 3.94E-04 0.00E + 00

H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008


50 Mean 1.80E + 01 1.01E-14 1.84E + 00 7.75E-08 3.85E-01 7.32E-13 1.01E-11 3.24E + 00 4.24E-06 6.87E-03 3.55E-15
SD 4.19E + 00 3.32E-15 8.85E-01 2.79E-07 5.64E-01 2.13E-12 1.60E-11 3.05E + 00 1.98E-07 3.35E-04 0.00E + 00
100 Mean 1.97E + 01 2.74E-14 6.60E + 00 9.91E-01 7.31E + 00 1.72E-04 3.96E-11 7.03E + 00 5.91E-06 1.44E-02 3.55E-15
SD 3.09E-01 5.17E-15 3.72E + 00 4.43E + 00 7.16E + 00 5.44E-04 3.22E-11 2.47E + 00 1.63E-07 6.71E-04 0.00E + 00
f11 30 Mean 1.08E-02 3.20E-03 2.87E-02 1.85E-03 2.83E-03 2.83E-03 7.48E-10 7.48E-10 1.08E-02 4.14E-03 0.00E + 00
SD 1.32E-02 4.93E-03 2.58E-02 4.07E-03 6.60E-03 6.60E-03 2.40E-09 2.40E-09 9.74E-03 4.51E-03 0.00E + 00
50 Mean 1.81E + 01 4.93E-04 4.56E + 00 7.40E-04 6.27E-03 1.74E-02 6.29E-13 3.07E + 00 2.22E-03 8.06E-04 0.00E + 00
SD 4.72E + 01 2.20E-03 2.03E + 01 2.28E-03 1.10E-02 3.18E-02 1.15E-12 5.24E + 00 4.66E-03 7.48E-05 0.00E + 00
100 Mean 1.76E + 02 3.33E-17 9.05E + 00 7.40E-04 3.71E-02 7.40E-04 1.05E-12 2.14E + 01 2.28E-03 4.50E-03 0.00E + 00
SD 1.29E + 02 7.29E-17 2.78E + 01 2.28E-03 7.31E-02 2.34E-03 2.05E-12 3.00E + 01 4.27E-03 5.18E-04 0.00E + 00
f12 30 Mean 2.28E-01 1.36E-33 2.65E-06 0.00E + 00 5.18E-03 5.18E-03 3.66E-21 3.66E-21 2.04E-13 7.43E-07 2.84E-10
SD 3.79E-01 2.82E-33 7.17E-06 0.00E + 00 2.32E-02 2.32E-02 1.02E-20 1.02E-20 2.44E-14 1.37E-07 9.24E-11
50 Mean 1.28E + 07 3.11E-03 3.06E-01 3.11E-03 3.72E + 02 5.01E-27 3.67E-25 5.59E-01 2.86E-13 1.60E-06 3.99E-08
SD 5.72E + 07 1.39E-02 3.70E-01 1.39E-02 1.15E + 03 1.61E-26 1.33E-24 1.28E + 00 3.57E-14 2.28E-07 1.75E-08
100 Mean 8.96E + 07 9.33E-03 2.98E-01 1.56E-02 3.65E + 05 3.49E-25 2.53E-23 2.93E-01 5.26E-13 6.83E-06 1.50E-07
SD 1.50E + 08 2.87E-02 4.10E-01 4.23E-02 5.46E + 05 8.66E-25 7.70E-23 9.27E-01 4.25E-14 6.46E-07 3.70E-08
f13 30 Mean 7.69E-03 1.65E-33 4.48E-06 6.16E-35 1.04E + 03 1.04E + 03 1.47E-19 1.47E-19 3.03E-12 1.20E-05 5.49E-04
SD 1.34E-02 4.03E-33 1.14E-05 2.76E-34 3.46E + 03 3.46E + 03 4.04E-19 4.04E-19 4.49E-13 1.66E-06 2.46E-03
50 Mean 6.59E-03 5.49E-04 4.39E-03 8.63E-34 2.97E + 04 2.20E-03 7.12E-25 3.11E-01 7.64E-12 4.17E-05 1.55E-06
SD 5.52E-03 2.46E-03 5.52E-03 2.30E-33 1.08E + 05 9.83E-03 1.14E-24 1.39E + 00 7.94E-13 7.30E-06 2.02E-06
100 Mean 1.03E + 08 1.10E-03 6.04E-03 5.49E-03 2.10E + 05 1.07E-15 5.35E-24 2.04E + 05 1.86E-06 3.33E-04 1.75E-05
SD 1.82E + 08 3.38E-03 1.04E-02 1.26E-02 3.27E + 05 2.23E-15 7.50E-24 4.44E + 05 5.89E-06 4.66E-05 5.60E-06
f14 30 Mean 9.82E + 01 4.30E-01 3.02E + 00 8.53E-02 2.12E-01 2.12E-01 1.45E + 00 1.45E + 00 1.21E + 01 2.52E + 00 0.00E + 00
SD 2.86E + 01 3.70E-01 2.07E + 00 7.87E-02 3.62E-01 3.62E-01 1.09E + 00 1.09E + 00 3.49E + 00 1.58E-01 0.00E + 00
50 Mean 2.85E + 02 1.28E + 00 1.57E + 02 1.26E + 00 6.87E + 00 8.18E + 00 6.25E-01 9.88E + 00 1.92E + 01 5.54E + 00 0.00E + 00
SD 5.72E + 01 9.71E-01 4.78E + 01 2.40E + 00 3.78E + 00 4.67E + 00 5.83E-01 4.49E + 00 3.23E + 00 2.36E-01 0.00E + 00
100 Mean 7.47E + 02 3.01E + 00 5.49E + 02 1.16E + 01 1.36E + 02 1.23E + 01 6.94E + 00 3.17E + 01 4.06E + 01 1.94E + 01 0.00E + 00
SD 4.94E + 01 1.71E + 00 5.36E + 01 5.19E + 00 3.16E + 01 7.70E + 00 7.84E + 00 5.15E + 00 7.70E + 00 1.20E + 00 0.00E + 00
f15 30 Mean 2.00E + 04 1.85E-02 8.78E + 01 9.69E-01 4.33E-01 4.33E-01 2.93E + 04 2.93E + 04 5.19E-11 7.32E-04 0.00E + 00
SD 1.42E + 04 4.47E-02 2.80E + 02 2.35E + 00 1.38E + 00 1.38E + 00 1.94E + 04 1.94E + 04 7.37E-12 1.12E-04 0.00E + 00
50 Mean 5.10E + 04 3.55E + 02 5.17E + 02 7.94E + 02 8.76E + 03 6.97E-15 5.59E + 04 5.89E + 04 1.72E-10 4.53E-03 0.00E + 00
SD 1.96E + 04 2.59E + 02 2.31E + 03 3.62E + 02 3.64E + 03 2.91E-14 3.89E + 04 3.76E + 04 3.14E-11 8.09E-04 0.00E + 00
100 Mean 2.73E + 05 2.89E + 04 1.77E + 04 3.21E + 04 1.31E + 05 4.29E + 02 2.11E + 05 2.18E + 05 3.15E + 01 1.33E-01 0.00E + 00
SD 4.54E + 04 6.68E + 03 1.04E + 04 5.69E + 03 1.28E + 04 8.19E + 02 4.72E + 04 2.42E + 04 6.83E + 01 1.61E-02 0.00E + 00

1001
1002
Table 11
Comparison results for IGPSO (f16 f25 ).

H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008


F Index Algorithm

BBPSO CLPSO APSO DMS-PSO DE/best/1 ODE ABC GABC IGHS GDHS IGPSO

f16 mean 0.998004 1.244546 0.998004 1.98013 0.998004 0.998004 0.998004 0.998004 0.998004 0.998004 0.998004
SD 7.2E-17 1.10257 0 2.481412 1.61E-16 5.09E-17 2.16E-16 2.79E-13 0 1.44E-16 0
f17 mean 0.004327 0.006048 0.003089 0.002934 0.000386 0.00166 0.001486 0.002752 0.000648 0.000725 0.000353
SD 0.007108 0.008925 0.005924 0.006218 0.000234 0.004421 0.004456 0.006031 0.0001 0.00038 0.000202
f18 mean 1.03163 1.03163 1.03163 1.03163 1.03163 1.03163 1.03163 1.03163 1.03163 1.03163 1.03163
SD 2.16E-16 2.28E-16 2.1E-16 2.22E-16 1.02E-16 2.28E-16 1.84E-16 3.63E-10 1.69E-16 1.84E-16 8.82E-17
f19 mean 0.397887 0.397887 0.397887 0.397887 0.517552 0.397887 0.397887 0.397887 0.397887 0.397887 0.397887
SD 0 0 0 0 0.535157 0 0 1.69E-09 0 0 0
f20 mean 3 3 3 8.4 3 3 3 3 3 3 3
SD 9.39E-16 1.19E-15 7.62E-16 18.78801 3.74E-15 2.88E-16 3.46E-15 9.28E-10 5.58E-16 3.55E-15 1.23E-15
f21 mean 3.86081 3.86278 3.86278 3.82413 3.86278 3.86278 3.86278 3.86278 3.86278 3.86278 3.86278
SD 0.003499 2.5E-16 1.07E-15 0.172852 2.28E-16 1.39E-15 5.94E-16 1.23E-11 2.28E-15 3.22E-16 3.67E-16
f22 mean 3.26434 3.26177 3.22773 3.27408 3.28633 3.26849 3.28633 3.27711 3.23877 3.25066 3.28038
SD 0.084686 0.124996 0.07564 0.06021 0.055899 0.060685 0.055899 0.064279 0.055899 0.059759 0.058182
f23 mean 1.001001 1.000279 1.000209 1.000372 1.051421 1.000101 1.000028 1.000017 1.00012 1.001187 1.00003
SD 0.000919 0.00066 0.000548 0.000739 0.157936 0.000403 0.000117 1.58E-05 0.000398 0.000837 4.56E-16
f24 mean 1 0.99417 1 0.99417 0.99126 0.99903 0.99854 1 1 1 1
SD 0 0.004883 0 0.004883 0.00299 0.00299 0.003559 6.12E-09 5.09E-17 0 0
f25 mean 181.365 186.731 186.731 182.065 186.731 186.731 186.731 186.731 186.731 186.731 186.731
SD 23.99748 8.14E-08 9.63E-07 14.36004 2.61E-14 2.44E-14 2.69E-14 8.12E-07 2.69E-10 2.35E-14 1.13E-14
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 1003

Table 12
MannWhitney U test result obtained using IGPSO and the 10 comparison algorithms (D = 50).

Group Algorithm f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15

1 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 152 132 0 0


UBBPSO 900 900 900 900 900 900 900 900 900 900 900 748 768 900 900
2 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 454 465 0 0
UCLPSO 900 900 900 900 900 900 900 900 900 900 900 456 435 900 900
3 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 446 435 0 0
UAPSO 900 900 900 900 900 900 900 900 900 900 900 454 465 900 900
4 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 545 424 0 0
UDMS-PSO 900 900 900 900 900 900 900 900 900 900 900 355 476 900 900
5 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
UDE/best/1 900 900 900 900 900 900 900 900 900 900 900 900 900 900 900
6 UIGPSO 0 0 0 0 432 425 0 0 0 0 0 721 135 0 0
UODE 900 900 900 900 468 475 900 900 900 900 900 179 765 900 900
7 UIGPSO 0 0 0 0 0 387 0 274 0 0 0 900 894 0 0
UABC 900 900 900 900 900 513 900 626 900 900 900 0 6 900 900
8 UIGPSO 0 0 0 0 0 443 0 438 0 0 0 753 897 0 0
UGABC 900 900 900 900 900 457 900 462 900 900 900 147 3 900 900
9 UIGPSO 0 0 0 0 0 435 0 429 0 0 0 448 453 0 0
UIGHS 900 900 900 900 900 465 900 471 900 900 900 452 447 900 900
10 UIGPSO 0 0 0 0 0 447 0 445 0 0 0 900 900 0 0
UGDHS 900 900 900 900 900 453 900 455 900 900 900 0 0 900 900

Table 13
MannWhitney U test result obtained using IGPSO and the 10 PSO comparison algorithms (D = 100).

Group Algorithm f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15

1 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 295 186 0 0


UBBPSO 900 900 900 900 900 900 900 900 900 900 900 605 714 900 900
2 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 452 463 0 0
UCLPSO 900 900 900 900 900 900 900 900 900 900 900 448 437 900 900
3 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 463 478 0 0
UAPSO 900 900 900 900 900 900 900 900 900 900 900 437 422 900 900
4 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 432 507 0 0
UDMS-PSO 900 900 900 900 900 900 900 900 900 900 900 468 393 900 900
5 UIGPSO 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
UDE/best/1 900 900 900 900 900 900 900 900 900 900 900 900 900 900 900
6 UIGPSO 0 0 0 0 410 404 0 0 0 0 0 829 785 0 0
UODE 900 900 900 900 490 496 900 900 900 900 900 71 115 900 900
7 UIGPSO 0 0 0 0 0 402 0 368 0 0 0 653 798 0 0
UABC 900 900 900 900 900 498 900 532 900 900 900 246 102 900 900
8 UIGPSO 0 0 0 0 0 443 0 438 0 0 0 453 397 0 0
UGABC 900 900 900 900 900 457 900 462 900 900 900 447 503 900 900
9 UIGPSO 0 0 0 0 0 419 0 429 0 0 0 497 356 0 0
UIGHS 900 900 900 900 900 481 900 471 900 900 900 413 544 900 900
10 UIGPSO 0 0 0 0 0 317 0 376 0 0 0 358 435 0 0
UGDHS 900 900 900 900 900 583 900 524 900 900 900 542 465 900 900

Table 14
Compared results of welded beam design problem.

Ref. x1 x2 x3 x4 Best Worst Mean SD OFEs

Belegundu[44] 0.208800 3.420500 8.997500 0.210000 1.748309 1.785835 1.771973 0.011220


Coello[45] 0.202369 3.544214 9.048210 0.205723 1.728024 1.782143 1.748831 0.012926 900,000
Coello and Montes[46] 0.205986 3.471328 9.020224 0.206480 1.728226 80,000
Aguirre et al. [47] 0.205730 3.470489 9.036624 0.205730 1.724852 1.724881 0.000012 350,000
He and Wang[48] 0.202369 3.544214 9.048210 0.205723 1.728024 1.782143 1.748831 0.012926 200,000
He and Wang[49] 0.205730 3.470489 9.036624 0.205730 1.724852 1.814295 1.749040 0.040049 81,000
Mahdavi et al. [50] 0.20573 3.47049 9.03662 0.20573 1.7248 200,000
Fesanghary et al. [51] 0.20572 3.47060 9.03682 0.20572 1.7248 100,000
Cagnina et al. [52] 0.205729 3.470488 9.036624 0.205729 1.724852 2.0574 0.2154 32,000
Tomassetti[53] 0.205729 3.470489 9.036624 0.205730 1.724852 200,000
Maruta et al. [54] 0.205730 3.470489 9.036624 0.20573 1.724852 1.813471 1.728471 0.0136371 40,000
Gandomi et al. [55] 1.7312065 2.3455793 1.8786560 0.2677989 50,000
Akay and Karaboga[56] 1.724852 1.741913 0.031 30,000
Gandomi, et al. [57] 0.2015 3.562 9.0414 0.2057 1.7312 2.3455793 1.8786560 0.2677989 20,000
Yidiz[58] 0.205730 3.470489 9.036624 0.205730 1.7248 1.75322 1.73418 0.00510 20,000
Brajevic and Tuba[59] 0.205730 3.470489 9.036624 0.205730 1.724852 1.724853 0.0000017 15,000
IGPSO 0.205730 3.470489 9.036624 0.205730 1.724852 1.724852 1.724852 4.76378E-09 120,000

sonal best and global best. As the current personal best and global personal best solutions to produce a new solution, it improves on
best solution easily fall into local optima, BBPSO hardly can gen- the global search ability of PSO algorithm. However, CLPSO algo-
erate a better solution. Therefore, the performance of BBPSO is rithm suffers from that fact that the global best solution often does
not good. As the CLPSO algorithm uses the advantage of different not get updated in search process. So the premature convergence
1004 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

Fig. 9. Tension/compression spring.

Fig. 8. Welded beam structure.


rithm and the 10 comparison algorithms. The detailed explanation
of MannWhitney U test can be found in [31]. Before using
remains a concern for the CLPSO algorithm. APSO algorithm uses MannWhitney U test, it is necessary to set a rule to determine
the evolutionary state estimation method to improve algorithm who the winner in a comparison is. Suppose xi (i = 1, y, n1 ) is a solu-
performance, but it lacks of historical information to estimate cor- tion obtained by algorithm A for a problem, and yj (j = 1, y, n2 ) is a
rect search state, sometimes the estimated state can be blind or solution obtained by algorithm B for the same problem. If xi (i = 1,
incorrect. Consequently, APSO algorithm performs worse in some y, n1 ) is better than yj (j = 1, y, n2 ), then algorithm A is the winner,
cases. otherwise, algorithm B is the winner. If xi (i = 1, . . .., n1 ) is the same
Compared to DE/best/1 and ODE algorithm, IGPSO performs bet- as yj (j = 1, y, n2 ), then we should consider the optimization times
ter in most cases. DE/best/1 is excessively dependent on the global of two algorithms. If the optimization time of algorithm A is less
best solution in the search process so that it can be trapped into local than that of algorithm B, then algorithm A is the winner, otherwise,
minimum quickly. ODE algorithm introduces the opposition-based algorithm B is the winner [31].
learning technique to enhance the performance of differential evo- Following the above rule, ten groups of MannWhitney U
lution algorithm. Opposition-based learning technique can reduce tests are carried out: (UIGPSO , UBBPSO ), (UIGPSO , UCLPSO ), (UIGPSO ,
the probability of algorithm falling in local optima, but it has some UAPSO ), (UIGPSO , UDMS-PSO ) (UIGPSO , UDE/best/1 ), (UIGPSO , UODE ), (UIGPSO ,
blindness in the search process. UABC ), (UIGPSO , UGABC ), (UIGPSO , UIGHS )and (UIGPSO , UGDHS ). The 15
Compared to ABC and GABC algorithm, it can be obviously seen benchmark functions (f1 f15 ) are selected in this experiment. 30
that IGPSO algorithm has better performance. ABC and GABC algo- independent tests are carried out in each case. The dimension size
rithms are also difcult to escape from local optima. GDHS and is xed as 50 and 100. The ten groups of MannWhitney U tests are
IGHS algorithms are two newly proposed harmony search vari- reported in Tables 1213.
ants, which perform much better in most cases. However, IGPSO From Table 12, it can be observed that the values of UIGPSO are
algorithm still has some advantages than the two harmony search smaller than those of UBBPSO for all the tested functions. Especially,
variants for most test functions. for f1 , f2 , f3 , f4 , f9 , f10 , f11 , f14 and f15 , the values of UIGPSO and
The IGPSO algorithm reinforces the utilization of the current UBBPSO are 0 and 900, respectively. In other words, the solutions
swarm, historical best swarm and global best swarm in the search of IGPSO beaten by BBPSO solutions are 0 for all the 15 functions.
process. Therefore, IGPSO algorithm is superior to the ten com- This indicates that the performance of IGPSO is better than the
parison algorithms. According to the above observations, it can be BBPSO algorithm. Compared to CLPSO, APSO, and DMS-PSO algo-
concluded that the performance of the IGPSO is better than that of rithms, the values of UIGPSO are smaller than those of UCLPSO , UAPSO ,
the other PSO methods. UDMS-PSO for almost all the considered functions except f12 and f13 ,
thus the IGPSO algorithm is superior to the CLPSO, APSO, and DMS-
4.3.2. MannWhitney U test PSO algorithms. Compared to DE/best/1 and ODE algorithms, IGPSO
MannWhitney U test is employed in [31] to conrm if a algorithm has better results in most cases. ABC and GABC algorithm
statistically signicant difference in performance between NGHS perform better than IGPSO algorithm in some cases such as f12 and
algorithm and other comparion algorithms (HS, IHS and SGHS). f13 . However, the values of UIGPSO are smaller than those of UABC
In this paper, the MannWhitney U test is also used to test if and UAPSO for the other 13 functions. It shows that IGPSO algorithm
the statistically signicant difference exists between IGPSO algo- is superior to ABC and GABC algorithms. Besides, IGPSO algorithm

Table 15
Compared results of tension/compression spring design problem.

Ref. x1 x2 x3 Best Worst Mean SD OFEs

Coello [45] 0.051480 0.351661 11.632201 0.0127047834 0.01282208 0.01276920 900,000


Ray et al. [61] 0.0521602170 0.3681586950 10.6484422590 0.0126692493 0.01671727 0.01292267 0.000592 30,000
Mezura[62] 0.012688 0.017037 0.013014 0.000801 36,000
He and Wang[48] 0.051728 0.357644 11.244543 0.0126747 0.012924 0.012730 0.000051985 200,000
He and Wang[49] 0.051706 0.357126 11.265083 0.0126652 0.0127191 0.0127072 0.000015824 81,000
Mahdavi et al. [50] 0.05115438 0.34987116 12.0764321 0.0126706 30,000
Cagnina et al. [52] 0.051583 0.354190 11.438675 0.012665 0.0131 0.00041 24,000
Maruta et al. [53] 0.0516885495 0.3567054307 11.2896874780 0.0126652329 0.01461170 0.01275760 0.000269863 40,000
Tomassetti [54] 0.051644 0.355632 11.35304 0.012665 200,000
Gandomi et al. [57] 0.05169 0.35673 11.2885 0.01266522 0.0168954 0.01350052 0.001420272 20,000
Brajevic [59] 0.051691 0.356769 11.285988 0.012665 0.012683 0.00000331 15,000
Baykasoglu [60] 0.0516674837 0.3561976945 11.3195613646 0.0126653049 0.0128058 0.0126770446 0.0127116883 50,000
IGPSO 0.051670094177424 0.356261453811766 11.315774165651165 0.0126652498902 0.012665843 0.012665618 9.69175E- 06 100,000
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 1005

Fig. 10. Speed reducer.

has better performance than IGHS and GDHS algorithm based on ity constraints, respectively. xjmin and xj max are the lower and upper
the result of MannWhitney U tests. bounds of xj , respectively. Based on the penalty function method,
From Table 13, it can be seen that the IGPSO algorithm has also the above constrained optimization problem can be converted into
obtained better results in most cases when the dimension size is an unconstrained function F(x) and described as
increased. The values of UIGPSO are smaller than those of considered
algorithms for almost all the considered functions except f12 and 
p

M

f13 . Therefore, the IGPSO algorithm is superior to the considered F(x) = f (x) +  { max(0, gi (x)) + max(0, |hi (x)| )} (17)
algorithms. i=1 i=p+1

here, is the penalty factor and is the tolerance value of the


4.4. Engineering design optimization
equality constraints.

It is generally known that real-world engineering design opti-


mization problems are non-linear and involve with many complex 4.4.1. Welded beam design
geometrical and mechanical constraints. To further assess the opti- The welded beam design problem is introduced in [44], which
mization performance of the IGPSO algorithm, some well-known is aim to minimum overall cost of fabrication (see Fig. 8). In this
engineering design problems including welded beam design, ten- problem, there exist four continuous design variables such as the
sion/compression spring design and speed reducer design are thickness of the weld, h(x1 ), the length of the welded joint, l(x2 ),
considered in this experiment. the width of the beam, t(x3 ) and the thickness of the beam, b(x4 ).
Engineering design optimization problem is a complex con- This problem is subject to ve constraints including shear stress
strained problem. So, we need to introduce a constraint-handling (), bending stress (s), buckling load (Pc), deection () and geo-
technique to help IGPSO algorithm to nd global feasible solu- metric constraints. The mathematical formulation of the problem
tion in a reasonable time. Currently, many constraint-handling is expressed as follows
techniques are presented and applied. These constraint-handling
f (x) = 1.1047x1 2 x2 + 0.04811x3 x4 (14.0 + x2 )
techniques are classied as penalty functions, special represen-
tations and operators, repair algorithms, separate objective and g1 (x) = (x) max 0


constraints, and hybrid methods. The penalty function method, due


g2 (x) = (x) max 0
to its simplicity to implementation, has been most widely stud-

ied and used so far. Generally, constrained optimization problem is
g (x) = x1 x4 0

3
transformed into series of unconstrained ones by introduction of a
penalty term. Without loss of generality, a constrained optimiza- s.t. g4 (x) = 0.1047x1 2 x2 + 0.04811x3 x4 (14.0 + x2 ) 5

(18)
tion problem with D variables and M constraints may be described
g5 (x) = 0.125 x1


as


g6 (x) = (x) max 0
min f (x), x = [x1 , x2 , , xD ]


g7 (x) = P Pc(x) 0
s.t. gi (x) 0, i = 1, 2, , p
(16) 0.125 x1 10, 0.1 x2 10,
hi (x) = 0, i = p + 1, p + 2, , M
0.1 x3 10, 0.1 x4 5
xj min < xj < xj max , j = 1, 2, , D
where E is youngs modulus of bar stock, G is shear modulus of
where x = (x1 , x2 ,. ., xD ) is the solution vector, f(x) is the objective bar stock, P is the loading condition, L is overhang length of the
function, gi (x) and hi (x), which represent the inequality and equal- beam.

x2
+ 

, 
= P/( 2x1 x2 ), 

= MR/J, M = P(L + x2 /2),


2
(x) = 
2 + 2


2R   
x2 2 x1 + x3 2
R = 0.5 x2 2 + (x1 + x3 )2 , J = 2 2x1 x2 +( ) , (x) = 6PL/(x4 x3 2 ),
4 2
  
4.013E x3 2 x4 6 /36 x3 E
(x) = 6PL3 /(Ex3 2 x4 ), Pc(x) = 1 ,
L 2L 4G
P = 6000.L = 14, E = 30 106 , G = 12 106 , max = 30600, max = 30000, max = 0.25
1006 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

In this experiment, the IGPSO algorithm combine with the 4.4.3. Speed reducer design
penalty function method is used to solve welded beam design For speed reducer design problem (Fig. 10), the weight of speed
problem. The parameters of the IGPSO algorithm are xed reducer is to be minimized subject to constraints on transverse
as NP = 10, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, and deections of the shafts, surface stress, bending stress of the gear
2 = 0.01, Lc = 0.75, = 0.25 and = 1-. The IGPSO algorithm are teeth and stresses in the shafts [60]. This problem involves seven
coded in MATLAB and all tests were performed on a PC with design variables such as the face width b (x1 ), module of teeth m(x2 ),
2.93 GHz, 2.94 GHz processor and 4 GB RAM, number of objec- number of teeth in the pinion z(x3 ), length of the rst shaft between
tive function evaluations (OFEs) is determined as 12 104 , the bearings l1 (x4 ), length of the second shaft between bearings l2 (x5 )
penalty factor = 1010 , 30 runs are performed for each problem. and the diameter of the rst shaft d1 (x6 ) and second shaft d2 (x7 ).
The IGPSO obtained the best design overall of 1.724852314184504 Note that the third variable x3 (number of teeth) is of integer value
corresponding to x = [0.205729639285696 3.470488675359993 while all left variables are continuous. The speed reducer design
9.036623944851453 0.205729639665801]. Table 14 compares the problem includes 11 constraints, so it is very hard even to locate a
optimization results found by IGPSO with those of other optimiza- feasible solution [63]. This design problem is described as
tion algorithms reported in the literature. From Table 14, it can
be seen that the best solution for welded beam design problem f (x) = 0.7854x1 x2 2 (3.3333x3 2 + 14.9334x3 43.0934)
is found by IGPSO. This solution is better than most of the pub- 1.506x1 (x6 2 + x7 2 ) + 7.4777(x6 3 + x7 3 ) + 0.7854(x4 x6 2 + x5 x7 2 )
lished results and it is the same as the best solutions reported
27 397.5
in the literature so far. Moreover, the mean and standard devia-
g1 (x) = 1 0, g2 (x) = 10
tion (SD) results are obtained by IGPSO are apparently outperform
x1 x2 2 x3 x1 x2 2 x3 2


the reported results in the literature, which demonstrated that the
1.93x4 3 1.93x5 3

g (x) = 1 0, g4 (x) = 10
IGPSO algorithm is more reliable than the other published algo-
3


x2 x3 x6

4 x2 x3 x7 4
rithms.


745x4 2

( ) + 16.9 106

x2 x3

g5 (x) = 10

110x6 3
4.4.2. Tension/compression spring design

Tension/compression spring design problem is described in [45].


745x4 2
Fig. 9 shows a tension/compression spring with three design vari- ( ) + 157.5 106
x2 x3
ables. The weight of the spring is to be minimized, subject to four s.t g6 (x) = 10

85x7 3
constraints on the minimum deection, shear, and surge frequency,

and limits on the outside diameter [60]. This problem include three
g7 (x) = x2 x3 40 0, g8 (x) =
x1
+50


design variables such as the wire diameter d(x1 ), the mean coil

x2


diameter D(x2), and the number of active coils N(x3 ). This design g9 (x) = x1 12 0, g10 (x) = 1.5x6 + 1.9 1 0



problem can be expressed as follows:

x2 x4

+

g11 (x) =
1.5x 7 1.9
10

x5
min f (x) = (x3 + 2)x2 x1 2



x2 3 x3

2.6 x1 3.6, 0.7 x2 0.8, 17 x3 28, 7.3 x4 8.3

g1 (x) = 1


71785x1 4
7.3 x5 8.3, 2.9 x6 3.9, 0.5 x6 5.5





g (x) = 4x2 2 x1 x2 1 (20)
2 + 10
s.t. 12566(x2 x1 x1
3 4) 5108x1 2 (19)
The IGPSO algorithm is used to solve speed reducer design prob-

140.45x1

g3 (x) = 1 0, lem. In IGPSO, the parameters are xed as NP = 40, c1 = c2 = 2,

x2 2 x3 max = 0.9 and min = 0.4, 1 = 0.05, and 2 = 0.01, Lc = 0.75, = 0.25


g (x) = x2 + x1 1 0 and = 1-. The number of objective function evaluations
4
1.5 (OFEs) is determined as 10 104 , the penalty factor =1010 ,
0 < x1 2, 0.25 x2 2, 2 x3 15
30 runs are performed for each problem. The IGPSO obtained
the best design overall of 2994.380995367928 corresponding
In this experiment, the parameters of the IGPSO algorithm to x = [3.499999990375726 0.70 17.0 7.30 7.715169535948116
are xed as NP = 40, c1 = c2 = 2, max = 0.9 and min = 0.4, 1 = 0.05, 3.350214650206060 5.286517819343709]. Table 16 compares the
and 2 = 0.01, Lc = 0.75, = 0.25 and = 1-. The number of objec- optimization results found by IGPSO with those of other optimiza-
tive function evaluations (OFEs) is determined as 10 104 , the tion algorithms reported in the literature.
penalty factor = 1010 , 30 runs are performed for each problem. As it can be seen from Table 16, the best solution obtained by
The IGPSO obtained the best design overall of 0.012665249890278 IGPSO algorithm is superior to the best solutions reported in the
corresponding to x = [0.051670094177424 0.356261453811766 literature so far. IGPSO has performed with more robustness in
11.315774165651165]. Table 15 compares the optimization results terms of the quality of results obtained. Whats more, even the
found by IGPSO with those of other optimization algorithms worst result found by IGSPO is better than the best results found by
reported in the literature. As can be seen in the statistics of Table 15, other methods. The mean and standard deviation (SD) results are
this problem is not complex enough to IGPSO as the worst solu- obtained by IGPSO are apparently outperform the reported results
tion is close to the optimum value. The best result reported for in the literature, which demonstrated that the IGPSO algorithm is
this problem is near to 0.012665 [49]. The IGPSO also achieved a quite competitive on the other published algorithms.
better value close to 0.012665. Specially, the mean and standard
deviation (SD) results are obtained by IGPSO are apparently bet- 5. Concluding remarks
ter than the reported results in the literature, which demonstrated
that the IGPSO algorithm is more robust than the other published An improved global-best-guided particle swarm optimization
algorithms. with learning operation (IGPSO) is presented in this paper. Based on
H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008 1007

the characteristics of the particles, current swarm, historical best

35,0000

200,000

100,000
24,000

15,000
15,000
20,000
20,000
40,000

30,000

50,000
swarm and global best swarm are constructed. The global neigh-

OFEs
borhood exploration strategy, local learning mechanism, stochastic


learning and opposition based learning operations are employed in
these three swarms, respectively. A large number of experiments
are carried out to test the performance of the IGPSO algorithm.

0.00000.59
721.51803
0.028671
7.0E-02
The effects of the relevant parameters on the performance of

2.81e-7
0.0000
24.48

0.09
IGPSO have been analyzed and evaluated. The experimental results
SD

0
reveal that the IGPSO algorithm is superior to the state-of-the-
art PSO variants, classic meta-heuristic algorithms and several
other nature-inspired algorithms in terms of accuracy, convergence

2997.058412
2994.61336

2996.51487
2996.40852

2994.47107
speed, stability and robustness. Besides, The IGPSO is used to solve
2996.3482

2994.4671
3016.4926

2994.4710

2994.381
3012.12

several engineering design optimization problems, results demon-


Mean

strated that the performance of the IGPSO algorithm is superior to


and competitive with other recent and well-known published algo-


rithms. Therefore, the IGPSO algorithm offers a good alternative
for tackling some complex global optimization problems. Further
2994.75231

2994.47106

4973.8644
3094.5568

2996.6690

practical engineering application of these proposed algorithms in


2994.381
3028.28
Worst

areas of economic and economic statistical design [68], power sys-


tem [69], image processing [70], and network optimization [71] can



shed further light and is left for future research.


2996.348165
2996.348165
2996.348165

Acknowledgement
2996.37269
2994.49910

2994.47106
2994.47106
2994.4671
2996.3480

2997.0584

2994.381
3008.08

The authors are grateful to the editor and the anonymous ref-
Best

erees for their constructive comments and suggestions, which


have helped to improve this paper signicantly. This work is sup-
ported by National Natural Science Foundation of China (Grant No.
5.2866636
5.2866832

61403174).
5.286683
5.289773

5.286683
5.286683

5.286654

5.286683
5.286517
5.287800

5.2875

References
x7

[1] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory,
Proceedings of the Sixth International Symposium on Micro Machine and
3.3502146
3.3502309
3.365576
3.350215

3.350214
3.350215
3.350215

3.350215

3.350219
3.350214

Human Science 1 (1995) 3943.


3.35021

[2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, Proceedings of IEEE


International Conference on Neural Networks 4 (1995) 19421948.
x6

[3] M.R. Tanweer, S. Suresh, N. Sundararajan, Self regulating particle swarm


optimization algorithm, Info. Sci. 294 (2015) 182202.
[4] M. Eslami, H. Shareef, M. Khajehzadeh, M. Khajehzadeh, A. Mohamed, A
survey of the state of the art in particle swarm optimization, Res. J. Appl. Sci.
7.715377
7.859330
7.800000

7.71532
7.71532

7.71516
7.80006

Eng. Technol. 4 (2012) 11811197.


[5] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle
7.8
7.8
7.8
7.8
x5

swarm optimizer for global optimization of multimodal functions, IEEE Trans.


Evol. Comput. 10 (2006) 281295.


[6] H. Wu, J. Geng, R. Jin, et al., An improved comprehensive learning particle
swarm optimization and its application to the semiautomatic design of
7.300427

7.30248
7.30001

antennas, IEEE Trans. Antennas Propag. 57 (2009) 30183028.


7.5491
7.3000

[7] Z.H. Zhan, J. Zhang, Y. Li, H.S.H. Chung, Adaptive particle swarm optimization,
7.30
7.3
7.3
7.3
7.3

7.3

IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 39 (2009) 13621381.


x4

[8] Z.H. Zhan, J. Zhang, Y. Li, Y.H. Shi, Orthogonal learning particle swarm
Compared results of tension/compression spring design problem.

optimization, IEEE Trans. Evol. Comput. 15 (2011) 832847.


[9] Y. Wang, B. Li, T. Weise, et al., Self-adaptive learning based particle swarm
17.0

17.0

17.0

optimization, Info. Sci. 181 (2011) 45154538.


17
17

17
17
17
17

17

17
x3

[10] H. Huang, H. Qin, Z. Hao, A. Lim, Example-based learning particle swarm


optimization for continuous optimization, Info. Sci. 182 (2012) 125138.
[11] C. Li, S. Yang, T.T. Nguyen, A self-learning particle swarm optimizer for global
optimization problems, IEEE Trans. Syst. Man Cybern. Part B Cybern. 42
0.700
0.70
0.70

0.70
0.70
0.70

(2012) 627646.
0.7
0.7
0.7
0.7

0.7
x2

[12] C. Li, S. Yang, I. Korejo, An adaptive mutation operator for particle swarm
optimization, The 2008 UK Workshop on Computational Intelligence (2008)
165170.
[13] H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, M. Ventresca, Enhancing particle
3.49999

3.49999
3.50612
3.50002

3.50000
3.50000
3.5000

3.5000

swarm optimization using generalized opposition-based learning, Info. Sci.


181 (2011) 46994714.
3.5
3.5
3.5
x1

[14] W.H. Lim, N.A.M. Isa, An adaptive two-layer particle swarm optimization with
elitist learning strategy, Info. Sci. 273 (2014) 4972.
[15] W.H. Lim, N.A.M. Isa, Bidirectional teaching and peer-learning particle swarm
optimization, Info. Sci. 280 (2014) 111134.
De Melo et al. [63]
Cagnina et al. [52]
Aguirre et al. [47]

Baykasoglu [60]
Wang et al. [66]

[16] W.H. Lim, N.A.M. Isa, Teaching and peer-learning particle swarm
Akhtaretal. [65]

Tomassetti [54]
Baykasoglu[64]

Yang et al. [67]


Akay et al. [56]

optimization, Appl. Soft Comput. 18 (2014) 3958.


Brajevic [59]

[17] R. Liu, Y. Chen, L. Jiao, et al., A particle swarm optimization based


simultaneous learning framework for clustering and classication, Pattern
Table 16

IGPSO

Recogn. 47 (2014) 21432152.


Ref.

[18] R. Cheng, Y. Jin, A social learning particle swarm optimization algorithm for
scalable optimization, Info. Sci. 291 (2015) 4360.
1008 H.-b. Ouyang et al. / Applied Soft Computing 52 (2017) 9871008

[19] P.N. Suganthan, Particle swarm optimiser with neighbourhood operator, [47] H. Aguirre, A.M. Zavala, E.V. Diharce, S.B. Rionda, COPSO: Constrained
Proceedings of the 1999 Congress on Evolutionary Computation 3 (1999) Optimization via PSO Algorithm, Center for Research in Mathematics
19581962. (CIMAT), 2007 (Technical report No. I-07-04/22-02-2007).
[20] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer [48] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for
with local search, Proceedings of The 2005 IEEE Congress on Evolutionary constrained engineering design problems, Eng. Appl. Artif. Intell. 20 (2007)
Computation 1 (2005) 522528. 8999.
[21] W. Du, B. Li, Multi-strategy ensemble particle swarm optimization for [49] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based
dynamic optimization, Info. Sci. 178 (2008) 30963109. rule for constrained optimization, Appl. Math. Comput. 186 (2007)
[22] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler 14071422.
maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204210. [50] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search
[23] M. Nasir, S. Das, D. Maity, et al., A dynamic neighborhood learning based algorithm for solving optimization problems, Appl. Math. Comput. 188 (2007)
particle swarm optimizer for global numerical optimization, Info. Sci. 209 15671579.
(2012) 1636. [51] M. Fesanghary, M. Mahdavi, M. Minary-Jolandan, Y. Alizadeh, Hybridizing
[24] X. Hu, R. Eberhart, Multiobjective optimization using dynamic neighborhood harmony search algorithm with sequential quadratic programming for
particle swarm optimization, WCCI, IEEE (2002) 16771681. engineering optimization problems, Comput. Methods Appl. Mech. Eng. 197
[25] Y. Tang, Z. Wang, J. Fang, Feedback learning particle swarm optimization, (2008) 30803091.
Appl. Soft Comput. 11 (2011) 47134725. [52] L. Cagnina, S. Esquivel, C.C. Coello, Solving engineering optimization problems
[26] H. Geng, Y. Huang, J. Gao, H. Zhu, A self-guided particle swarm optimization with the simple constrained particle swarm optimizer, Informatica 32 (2008)
with independent dynamic inertia weights setting on each particle, Appl. 319326.
Math. Info. Sci. 7 (2013) 545552. [53] G. Tomassetti, A cost-effective algorithm for the solution of engineering
[27] C.J. Lin, M.S. Chern, M. Chih, A binary particle swarm optimization based on problems with particle swarm optimization, Eng. Optim. 42 (2010) 471495.
the surrogate information with proportional acceleration coefcients for the [54] I. Maruta, T.H. Kim, T. Sugie, Fixed-structure H controller synthesis: a
01 multidimensional knapsack problem, J. Ind. Prod. Eng. 33 (2016) 77102. metaheuristic approach using simple constrained particle swarm
[28] M. Chih, C.J. Lin, M.S. Chern, et al., Particle swarm optimization with optimization, Automatica 45 (2009) 553559.
time-varying acceleration coefcients for the multidimensional knapsack [55] A.H. Gandomi, X.-S. Yang, A.H. Alavi, Mixed variable structural optimization
problem, Appl. Math. Modell. 38 (2014) 13381350. using rey algorithm, Comput. Struct. 89 (2011) 23252336.
[29] G. Ardizzon, G. Cavazzini, G. Pavesi, Adaptive acceleration coefcients for a [56] B. Akay, D. Karaboga, Articial bee colony algorithm for large-scale problems
new search diversication strategy in particle swarm optimization and engineering design optimization, J. Intell. Manuf. 23 (2012) 10011014.
algorithms, Info. Sci. 299 (2015) 337378. [57] A.H. Gandomi, X.-S. Yang, A.H. Alavi, S. Talatahari, Bat algorithm for
[30] M. Chih, Self-adaptive check and repair operator-based particle swarm constrained optimization tasks, Neural Comput. Appl. 22 (2013) 12391255.
optimization for the multidimensional knapsack problem, Appl. Soft Comput. [58] A.R. Yildiz, Comparison of evolutionary-based optimization algorithms for
26 (2015) 378389. structural design optimization, Eng. Appl. Artif. Intell. 26 (2013) 327333.
[31] S. Glc, H. Kodaz, A novel parallel multi-swarm algorithm based on [59] I. Brajevic, M. Tuba, An upgraded articial bee colony (ABC) algorithm for
comprehensive learning particle swarm optimization, Eng. Appl. Artif. Intell. constrained optimization problems, J. Intell. Manuf. 24 (2013) 729740.
45 (2015) 3345. [60] A. Baykasoglu, F.B. Ozsoydan, Adaptive rey algorithm with chaos for
[32] R. Storn, K. Price, Differential evolution?a simple and efcient heuristic for mechanical design optimization problems, Appl. Soft Comput. 36 (2015)
global optimization over continuous spaces, J. Global Optim. 11 (1997) 152164.
341359. [61] T. Ray, K.M. Liew, Society and civilization: an optimization algorithm based on
[33] M.G.H. Omran, M. Mahdavi, Global-best harmony search, Appl. Math. Comput. the simulation of social behavior, IEEE Trans. Evol. Comput. 7 (2003) 386396.
198 (2008) 643656. [62] E. Mezura-Montes, C.A.C. Coello, R. Landa-Becerra, Engineering optimization
[34] D.X. Zou, L.Q. Gao, J.H. Wu, S. Li, Novel global harmony search algorithm for using a simple evolutionary algorithm, Proceedings of the 15th IEEE
unconstrained problems, Neurocomputing 73 (2010) 33083318. International Conference on Tools with Articial Intelligence (2003).
[35] E.A. Mohammed, An improved global-best harmony search algorithm, Appl. [63] V.C.V. De Melo, G.L.C. Carosio, Investigating multi-view differential evolution
Math. Comput. 222 (2013) 94106. for solving constrained engineering design problems, Expert Syst. Appl. 40
[36] G.P. Zhu, S. Kwong, Gbest-guided articial bee colony algorithm for numerical (2013) 33703377.
function optimization, Appl. Math. Comput. 217 (2010) 31663173. [64] A. Baykasoglu, Design optimization with chaos embedded great deluge
[37] F.V.D. Bergh, A.P. Engelbrecht, A study of particle swarm optimization particle algorithm, Appl. Soft Comput. 12 (2012) 10551567.
trajectories, Info. Sci. 176 (2006) 937971. [65] S. Akhtar, K. Tai, T. Ray, A socio-behavioural simulation model of engineering
[38] H.R. Tizhoosh, Opposition-Based learning a new scheme for machine design optimization, Eng. Optim. 34 (2002) 341354.
intelligence, CIMCA/IAWTIC, 2005, pp. 695701. [66] Y. Wang, Z. Cai, Y. Zhou, Z. Fan, Constrained optimization based on hybrid
[39] S. Rahnamayan, H.R. Tizhoosh, M.M.A. Salama, Opposition-based differential evolutionary algorithm and adaptive constraint-handling technique, Struct.
evolution, IEEE Trans. Evol. Comput. 12 (2008) 6479. Multidiscip. Optim. 37 (2009) 395413.
[40] A. Banerjee, V. Mukherjee, S.P. Ghoshal, An opposition-based harmony search [67] X.S. Yang, A.H. Gandomi, Bat algorithm: a novel approach for global
algorithm for engineering optimization problems, Ain Shams Eng. J. 5 (2014) engineering optimization, Eng. Comput. 29 (2012) 464483.
85101. [68] M. Chih, L.L. Yeh, F.C. Li, Particle swarm optimization for the economic and
[41] J. Kennedy, Bare bones particle swarms, Proceedings of the 2003 IEEE Swarm economic statistical designs of the control chart, Appl. Soft Comput. 11 (2011)
Intelligence Symposium (2003) 8087. 50535067.
[42] D. Karaboga, B. Basturk, On the performance of articial bee colony (ABC) [69] R.P. Singh, V. Mukherjee, S.P. Ghoshal, Particle swarm optimization with an
algorithm, Appl. Soft. Comput. 8 (2008) 687697. aging leader and challengers algorithm for the solution of optimal power ow
[43] M. Khalili, R. Kharrat, K. Salahshoor, M.H. Sefat, Global dynamic harmony problem, Appl. Soft Comput. 40 (2016) 161177.
search algorithm: GDHS, Appl. Math. Comput. 228 (2014) 195219. [70] X. Zhao, M. Turk, W. Li, et al., A multilevel image thresholding segmentation
[44] A.D. Belegundu, J.S. Arora, A study of mathematical programming methods for algorithm based on two-dimensional KL divergence and modied particle
structural optimization. Part I: Theory, Int. J. Numer. Methods Eng. 21 (1985) swarm optimization, Appl. Soft Comput. 48 (2016) 151159.
15831599. [71] L. Xu, F. Qian, Y. Li, et al., Resource allocation based on quantum particle
[45] C.A.C. Coello, Self-adaptive penalties for GA-based optimization, Proceedings swarm optimization and RBF neural network for overlay cognitive OFDM
of the 1999 Congress on Evolutionary Computation 1 (1999) 573580. System, Neurocomputing 173 (2016) 12501256.
[46] C.A.C. Coello, E.M. Montes, Constraint-handling in genetic algorithms through
the use of dominance-based tournament selection, Adv. Eng. Inf. 16 (2002)
193203.

View publication stats

Das könnte Ihnen auch gefallen