2008 IEEE Swarm Intelligence Symposium

St. Louis, MO, USA, September 21-23, 2008


Velocity Self-Adaptation Made Particle Swarm Optimization Faster
Guangming Lin, Lishan Kang, Yongsheng Liang, Yuping Chen
Abstract-The lognormal self-adaptation has been used extensively in evolutionary programming (EP) and evolution strategies (ES) to adjust the search step size for each objective variable. Particle Swarm Optimization (PSO) relies on two kinds of factors, the velocity and position of particles, to generate better particles. In this paper, we propose Self-Adaptive Velocity PSO (SAVPSO), in which we introduce, for the first time, lognormal self-adaptation strategies to efficiently control the velocity of PSO. Extensive empirical studies have been carried out to evaluate the performance of SAVPSO, standard PSO and some other improved versions of PSO. The experimental results on 7 widely used test functions show that SAVPSO outperforms standard PSO.
I. INTRODUCTION
Evolutionary algorithms (EAs) have been applied to many optimization problems successfully in recent years. They are population-based search algorithms with the generate-and-test feature [1, 2]. New offspring are generated by perturbations and tested to determine the acceptable individuals for the next generation. For large search spaces, EAs are more efficient than classical exhaustive methods; they are stochastic algorithms whose search methods model natural phenomena: genetic inheritance and Darwinian strife for survival. The best known techniques in the class of EAs are Genetic Algorithms (GA), Evolution Strategies (ES), Evolutionary Programming (EP), and Genetic Programming (GP). Particle Swarm Optimization (PSO) is also a stochastic search algorithm, first proposed by Kennedy and Eberhart [8, 9], which developed out of work simulating the movement of flocks of birds. PSO shares many features with EAs. It has been shown to be an efficient, robust and simple optimization algorithm. PSO has been applied successfully to many different kinds of problems [18, 19]. Optimization using EAs and PSO can be explained by two major steps:
1. Generate the solutions in the current population, and
Manuscript received June 16, 2008. This work was supported in part by the
National Natural Science Foundation of China (No. 60473081).
Guangming Lin is with Shenzhen Institute of Information Technology, No. 1068 West Niguang Road, Shenzhen 518029, China (corresponding author; phone: 86-755-25859105; e-mail: lingm@sziit.com.cn).
Lishan Kang is with School of Computer Science, China University of
Geoscience, Wuhan, China
Yongsheng Liang is with Shenzhen Institute of Information Technology.
Yuping Chen is with School of Computer Science, China University of
Geoscience, Wuhan, China
2. Select the next generation from the generated and the
current solutions.
These two steps can be regarded as a population-based version of the classical generate-and-test method, where mutation (or velocity and position updates in PSO) is used to generate new solutions and selection is used to test which of the newly generated solutions should survive to the next generation. Formulating EAs as a special case of the generate-and-test method establishes a bridge between PSO and other search algorithms, such as EP, ES, GA, simulated annealing (SA), tabu search (TS), and others, and thus facilitates cross-fertilization amongst different research areas.
Standard PSO performs well in the early iterations, but has problems reaching a near-optimal solution in some multi-modal optimization problems [8]. PSO can easily fall into local optima, because the particles quickly get close to the best particle. Both Eberhart [8] and Angeline [10] conclude that hybrid models of EAs and PSO could lead to further advances. Some research has been done to tackle this problem [18, 19]. In [18, 19], Fast EP is hybridized with PSO to form a Fast PSO (FPSO), which uses a Cauchy mutation operator to mutate the best position of the particles, gbest, in the hope that the long jumps of Cauchy mutation can pull the best position out of the local optimum into which it has fallen. FPSO thus focuses on the best position gbest. However, in the PSO procedure there is another important factor: the velocity of the particles. During PSO search, the global best position gbest and the current best positions of the particles pbest indicate the search direction, while the velocity of a particle is its search step size. In [2] we analyzed how the step size affects the performance of EAs. In this paper we focus on the velocity, i.e., the search step size, of PSO. We first introduce the lognormal self-adaptation strategy to control the velocity of PSO. According to the global optimization search strategy, in the early stages we should increase the step size to enhance the global search ability, and in the final stages we should decrease the step size to enhance the local search ability. The characteristics of the lognormal function fit this search strategy very well. We propose a new self-adaptive velocity PSO (SAVPSO) algorithm to efficiently control the global and local search of PSO. We use a suite of 7 functions to test PSO and SAVPSO; SAVPSO significantly outperforms PSO on all the test functions.
The rest of the paper is organized as follows: Section 2 formulates the global optimization problem considered in this paper and describes the implementation of EP, FEP and PSO. Section 3 describes the implementation of the new SAVPSO
978-1-4244-2705-5/08/$25.00 2008 IEEE
Authorized licensed use limited to: Bankura Unnayani Institute of Engineering. Downloaded on January 16, 2010 at 00:43 from IEEE Xplore. Restrictions apply.
algorithm. Section 4 lists the benchmark functions used in the
experiments, and gives the experimental settings. Section 5
presents and discusses the experimental results. Finally,
Section 6 concludes with a summary and a few remarks.
II. OPTIMIZATION USING EP, FEP AND PSO
A global minimization problem can be represented as a pair (S, f), where S ⊆ R^n is a bounded set on R^n and f: S → R is an n-dimensional real-valued function. The problem is to find a point x_min ∈ S such that f(x_min) is a global minimum on S. More specifically, it is required to find x_min ∈ S such that

    f(x_min) ≤ f(x), ∀x ∈ S,

where f does not need to be continuous but must be bounded. This paper considers only unconstrained optimization functions.
A. Classical Evolutionary Programming
Fogel [4] and Back and Schwefel [6] have indicated that CEP with self-adaptive mutation usually performs better than CEP without it on the functions they tested. For this reason, CEP with self-adaptive mutation is investigated in this paper. As described by Back and Schwefel [6], the CEP implemented in this study works as follows:
1. Generate the initial population of μ individuals, and set k = 1. Each individual is taken as a pair of real-valued vectors, (x_i, η_i), ∀i ∈ {1, 2, ..., μ}, where the x_i's are objective variables and the η_i's are standard deviations for Gaussian mutations (also known as strategy parameters in self-adaptive evolutionary algorithms).
2. Evaluate the fitness score for each individual (x_i, η_i), ∀i ∈ {1, 2, ..., μ}, of the population based on the objective function f(x_i).
3. Each parent (x_i, η_i), ∀i ∈ {1, 2, ..., μ}, creates a single offspring (x_i', η_i') by: for j = 1, 2, ..., n,

    x_i'(j) = x_i(j) + η_i(j) N_j(0, 1)                    (2.1)
    η_i'(j) = η_i(j) exp(τ' N(0, 1) + τ N_j(0, 1))         (2.2)

   where x_i(j), x_i'(j), η_i(j) and η_i'(j) denote the j-th component of the vectors x_i, x_i', η_i and η_i', respectively. N(0, 1) denotes a normally distributed one-dimensional random number with mean 0 and standard deviation 1. N_j(0, 1) indicates that a new random number is generated for each value of j. τ and τ' are commonly set to (√(2√n))⁻¹ and (√(2n))⁻¹ [6].
4. Calculate the fitness of each offspring (x_i', η_i'), ∀i ∈ {1, 2, ..., μ}.
5. Conduct pairwise comparison over the union of parents (x_i, η_i) and offspring (x_i', η_i'), ∀i ∈ {1, 2, ..., μ}. For each individual, q opponents are chosen uniformly at random from all the parents and offspring. For each comparison, if an individual's fitness is better than its opponent's, it is the winner.
6. Select the μ individuals, out of (x_i, η_i) and (x_i', η_i'), ∀i ∈ {1, 2, ..., μ}, with the most wins to be parents of the next generation.
7. Stop if the halting criterion is satisfied; otherwise, set k = k + 1 and go to Step 3.
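As a concrete illustration, the mutation step (Eqs. 2.1-2.2) at the heart of CEP can be sketched in NumPy. This is a minimal sketch; the function name and array layout are our own choices, not from the paper:

```python
import numpy as np

def cep_mutate(x, eta, rng):
    """Create one CEP offspring from parent (x, eta), per Eqs. (2.1)-(2.2):
    Gaussian mutation of x, with lognormal self-adaptation of eta."""
    n = x.size
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))    # tau  = (sqrt(2*sqrt(n)))^-1
    tau_prime = 1.0 / np.sqrt(2.0 * n)       # tau' = (sqrt(2n))^-1
    # Eq. (2.1): offspring variables use the parent's step sizes eta.
    x_child = x + eta * rng.standard_normal(n)
    # Eq. (2.2): one shared N(0,1) draw plus a fresh N_j(0,1) per component j.
    eta_child = eta * np.exp(tau_prime * rng.standard_normal()
                             + tau * rng.standard_normal(n))
    return x_child, eta_child
```

Because the adaptation is multiplicative through exp(·), the step sizes eta remain strictly positive throughout the run.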
B. The Standard Particle Swarm Optimization
The particle swarm optimization (PSO) algorithm is a recent addition to the list of global search methods. It is a population-based stochastic optimization technique developed by Kennedy and Eberhart [8] in 1995, inspired by the social behavior of organisms such as fish schooling and bird flocking, and by swarm intelligence theory. PSO has been found to be robust in solving continuous nonlinear optimization problems, and has recently been successfully employed to solve non-smooth complex optimization problems. In the past several years, PSO has been widely applied in many research and application areas.
The Particle Swarm Optimization simulates social behavior such as a flock of birds searching for food. The behavior of each individual is influenced by the behaviors of its neighborhood and of the swarm.
PSO is initialized with a population of random solutions
of the objective function. It uses a population of individuals,
called particles, with an initial population distributed randomly
over the search space. It searches for the optimal value of a
function by updating the population through a number of
generations. Each new population is generated from the old
population with a set of simple rules that have stochastic
elements.
Each particle searches for the optimum position, much like a bird searching for food, by "flying" through the problem space following the current optimal particles. The position of each particle is updated using a new velocity calculated through equations (2.3) and (2.4), which is based on its previous velocity, the position at which the best solution so far has been achieved by the particle (pbest or pb), and the position at which the best solution so far has been achieved by the global population (gbest or gb).
    v(i+1) = ω × v(i) + c1 × r1 × (pb − x(i)) + c2 × r2 × (gb − x(i))   (2.3)
    x(i+1) = x(i) + v(i+1)                                              (2.4)

In equation (2.3), 0 < ω < 1 is a weight determining the proportion of the particle's previous velocity preserved, c1 and c2 are two positive acceleration constants, and r1 and r2 are two uniform random sequences produced from U(0,1).
Fitness values established from the objective function are used to determine which positions in the search space are better than others. This fitness drives the particles to "fly" through the search space, attracted both to their personal best positions and to the best position found by the global population so far.
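One iteration of updates (2.3)-(2.4) can be sketched in NumPy as follows. The function name and default parameter values here are illustrative choices of ours, not prescribed by the paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, omega=0.7, c1=1.5, c2=1.5, rng=None):
    """One standard PSO update, Eqs. (2.3)-(2.4), for a single particle
    (or, by broadcasting, a whole swarm stored row-wise)."""
    rng = rng if rng is not None else np.random.default_rng()
    r1 = rng.random(x.shape)   # fresh U(0,1) sequences every iteration
    r2 = rng.random(x.shape)
    v_new = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # (2.3)
    x_new = x + v_new                                                  # (2.4)
    return x_new, v_new
```

A driver loop would call this once per particle per generation, updating pbest and gbest whenever a better fitness value is found.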
III. SELF-ADAPTIVE VELOCITY PSO (SAVPSO) ALGORITHM
Based on optimization theory, two important factors affect the performance of search algorithms: search direction and search step size. When the search points are far away from the global optimum in the initial search stages, increasing the search step size will increase the probability of escaping from a local optimum and, if the search direction is correct, also the probability of reaching the global optimum. On the other hand, as the search progresses, the current search points are likely to move closer and closer towards a global optimum, so it is necessary to restrict the step size in the later stages. However, it is hard to know in advance whether the search points are far from the global optimum, and the probability that a randomly generated initial population is very close to a global optimum is quite small in practice. It is therefore certainly worth enlarging the search step size in the early stages when we use EAs. In the final stages, the population of EAs tends to converge, and the step size should be reduced.
There exist many factors that influence the convergence property and performance of PSO [8], including the selection of ω, c1 and c2; velocity clamping; position clamping; the topology of the neighborhood; etc. Holland discussed the balance between exploration and exploitation that an algorithm must maintain [5]. Exploration is related to the algorithm's tendency to explore new regions of the search space; in this stage we should increase the search step size along the search direction. Exploitation is the tendency to search a smaller region more thoroughly; in this stage we should reduce the search step size. Researchers in the PSO community used to believe that the inertia weight balances exploration and exploitation in the PSO algorithm. In our view, the inertia weight alone cannot balance exploration and exploitation; the factor that balances them should be the velocity of the particles. In this paper we focus on how to control the search step size of PSO: we use the lognormal self-adaptive strategy to control the velocity in the PSO algorithm.
The main steps of the SAVPSO algorithm are as follows:
Self-Adaptive Velocity Particle Swarm Optimizer
Begin
    v(i+1) = ω × v(i) + η(i) × (c1 × r1 × (pb − x(i)) + c2 × r2 × (gb − x(i)));
    η(i+1) = η(i) × exp(τ × h1 + τ' × h2);
    x(i+1) = x(i) + v(i+1);
End
where 0 < ω < 1 is a weight determining the proportion of the particle's previous velocity preserved, c1 and c2 are two positive acceleration constants, r1 and r2 are two uniform random sequences produced from U(0,1), and h1 and h2 are Gaussian random numbers drawn from the Gaussian density function f_G with expectation 0 and variance σ²:

    f_G(x) = (1 / (σ√(2π))) exp(−x² / (2σ²)),  −∞ < x < +∞.

τ and τ' are commonly set to (√(2√n))⁻¹ and (√(2n))⁻¹ [6].
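The loop body above can be sketched in NumPy as follows. This is our own reading of the pseudocode, keeping η as one scalar step size per particle; all names and defaults are illustrative:

```python
import numpy as np

def savpso_step(x, v, eta, pbest, gbest,
                omega=0.7, c1=1.5, c2=1.5, rng=None):
    """One SAVPSO update: the attraction terms are scaled by eta, which
    then self-adapts lognormally as in Section III."""
    rng = rng if rng is not None else np.random.default_rng()
    n = x.size
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))    # tau  = (sqrt(2*sqrt(n)))^-1
    tau_prime = 1.0 / np.sqrt(2.0 * n)       # tau' = (sqrt(2n))^-1
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = omega * v + eta * (c1 * r1 * (pbest - x)
                               + c2 * r2 * (gbest - x))
    h1, h2 = rng.standard_normal(), rng.standard_normal()
    eta_new = eta * np.exp(tau * h1 + tau_prime * h2)  # eta stays positive
    return x + v_new, v_new, eta_new
```

Compared with the standard update (2.3), the only change is the multiplicative factor η(i) on the attraction terms and its lognormal update, so existing PSO code needs only a minimal modification.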
IV. BENCHMARK FUNCTIONS
The availability of an appropriate and standardized set of benchmark functions is very important for assessing evolutionary algorithms with respect to their effectiveness and efficiency [11].
Seven benchmark functions, from different sources [1, 11, 13], were used for our experimental studies. These functions were carefully chosen because the aim of this research is not simply to show that SAVPSO performs better (or worse) than PSO, but to find out when and why SAVPSO is better (or worse) than PSO. Wolpert and Macready [14] have shown that under certain assumptions no single search algorithm is, on average, better for all problems. If the number of test problems is small, it is very difficult to generalize any claim. Using too few test problems carries the risk that the algorithm becomes biased (optimized) towards the chosen problems, while such bias might not be useful for other problems of interest.
We summarize the benchmark functions which have been used to investigate the behavior of evolutionary algorithms in the continuous parameter optimization domain. The benchmark problems contain functions ranging from simple uni-modal ones to multi-modal ones with few to many local optima, and from low- to high-dimensional problems. A regular arrangement of local optima, separability of the objective function, decreasing difficulty of the problem with increasing dimensionality, and the potential bias introduced by locating the global optimum at the origin of the coordinate system are identified as properties of multi-modal functions which are neither representative of arbitrary problems nor well suited for assessing the global optimization qualities of evolutionary algorithms. The 7 benchmark functions are given in Table 4.1.
Functions f1 and f2 are high-dimensional problems which are uni-modal. Functions f3 and f4 are multi-modal functions where the number of local minima increases exponentially with the problem dimension [12, 13]. These classes of functions appear to be among the most difficult problems for many optimization algorithms (including EP). Functions f5 to f7 are low-dimensional functions which have only a few local minima [13].
Table 4.1. The seven benchmark functions used in our experimental study, where n is the dimension of the function, f_min is the minimum value of the function, and S ⊆ R^n.

    Test Function                                                      n    S              f_min
    f1(x) = Σ_{i=1..n} x_i²                                            30   [-100,100]^n   0
    f2(x) = Σ_{i=1..n} |x_i| + Π_{i=1..n} |x_i|                        30   [-10,10]^n     0
    f3(x) = −20 exp(−0.2 √((1/n) Σ_{i=1..n} x_i²))
            − exp((1/n) Σ_{i=1..n} cos 2πx_i) + 20 + e                 30   [-32,32]^n     0
    f4(x) = (1/4000) Σ_{i=1..n} x_i² − Π_{i=1..n} cos(x_i/√i) + 1      30   [-600,600]^n   0
    f5(x) = −Σ_{i=1..5}  [(x − a_i)(x − a_i)^T + c_i]⁻¹                4    [0,10]^n       −10
    f6(x) = −Σ_{i=1..7}  [(x − a_i)(x − a_i)^T + c_i]⁻¹                4    [0,10]^n       −10
    f7(x) = −Σ_{i=1..10} [(x − a_i)(x − a_i)^T + c_i]⁻¹                4    [0,10]^n       −10
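For concreteness, the four high-dimensional benchmarks of Table 4.1 can be written in NumPy as follows. This is a sketch of the standard definitions; each function attains its minimum 0 at the origin:

```python
import numpy as np

def f1(x):   # Sphere model
    return np.sum(x ** 2)

def f2(x):   # Schwefel problem 2.22
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def f3(x):   # Ackley function
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f4(x):   # Generalized Griewank function
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

The Shekel functions f5-f7 additionally require the standard 4-column matrix a and vector c of Shekel parameters, which are tabulated in [13].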
V. EXPERIMENTAL STUDIES AND DISCUSSIONS
A. Experimental Studies
In order to compare SAVPSO and PSO fairly, SAVPSO was tested using the same experimental setup as PSO. In all experiments the parameters and operators were the same: the population size is 100, c1 = c2 = 1.5, the initial η = 3.0, and the initial populations used for SAVPSO and PSO were identical. These parameter settings follow the suggestions of Back and Schwefel [6] and Fogel [4]. The average results of 50 independent runs are summarized in Table 5.1.
Table 5.1: Comparison between SAVPSO and PSO on functions f1 to f7. All results have been averaged over 50 runs.

    F    # of Gen.   SAVPSO Mean Best   PSO Mean Best
    f1   100         3.57×10⁻⁷          1.88×10²
    f2   100         9.59×10⁻¹⁶         9.03×10³
    f3   100         4.18×10⁻¹²         1.05×10¹
    f4   100         4.44×10⁻¹⁷         1.94×10¹
    f5   100         −10.15             −5.01
    f6   100         −10.40             −4.95
    f7   100         −10.54             −4.90

where "Mean Best" indicates the mean best function values found in the last generation.
B. Discussions
It is very clear from Table 5.1 that SAVPSO has improved PSO's performance significantly on all test functions. Even for the two uni-modal functions, SAVPSO outperforms PSO significantly.
It is very encouraging that SAVPSO is capable of performing much better than PSO for all the test functions. This is achieved through a minimal change to the existing PSO: no prior knowledge or complicated operators were used, and no additional parameter was introduced either. The superiority of SAVPSO also demonstrates the importance of self-adapting the velocities of particles (i.e., the search "step sizes") in a robust search algorithm.
In fact, the large velocity of particles plays a major role in the early stages of evolution: since the distance between the current search points and the global optimum is relatively large on average in the early stages, a large search step size performs better. However, as the evolution progresses, the distances to the global optimum become smaller and smaller, so we should reduce the search step size. The lognormal self-adaptive strategy fits this requirement very well during the whole evolution process. That is why SAVPSO performs much better than standard PSO. The rapid convergence of SAVPSO shown in Fig. 1 and Fig. 2 supports our explanation.
[Convergence curves over 100 generations; panels: f1 (Sphere Model), f2 (Schwefel Problem 2.22), f4 (Generalized Griewank Function), and f5-f7 (Shekel functions, m = 5, 7, 10).]
Fig. 2. Comparison between PSO and SAVPSO on f5-f7.
VI. CONCLUSIONS AND FUTURE WORK
SAVPSO uses lognormal self-adaptation of the velocity of particles in PSO. Unlike some switching algorithms, which have to decide the timing of switching between different particle velocities, SAVPSO does not require any switching decision or parameters related to such switching. SAVPSO is robust, assumes no prior knowledge of the problem to be solved, and performs much better than PSO for most benchmark problems. Future work on SAVPSO includes the comparison of SAVPSO with other self-adaptive algorithms such as Born's algorithm [15] and
Fig. 1. Comparison between PSO and SAVPSO on f1-f4.
other evolutionary algorithms using the lognormal self-adaptive strategy [16].
The idea of SAVPSO can also be applied to other algorithms to design faster optimization algorithms [20]. SAVPSO is particularly attractive since it uses self-adaptation to generate different search step sizes for the particles in PSO. It may be beneficial if different individuals are generated with different search step sizes [20].
ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (No. 60473081).
REFERENCES
[1]. Yao, X., Liu, Y., Lin, G.: Evolutionary programming
made faster. IEEE Trans. Evolutionary Computation,
1999, 3(2):82-102.
[2]. Yao, X., Lin, G. and Liu, Y.: An Analysis of Evolutionary Algorithms Based on Neighborhood and Step Sizes. In Angeline, P. J., Reynolds, R. G., McDonnell, J. R. and Eberhart, R., editors, Evolutionary Programming VI: Proc. of the Sixth Annual Conference on Evolutionary Programming, Volume 1213 of Lecture Notes in Computer Science, pages 297-307, Berlin, 1997. Springer-Verlag.
[3]. Fogel, L. J., Owens, A. J. and Walsh, M. J.: Artificial Intelligence Through Simulated Evolution. John Wiley & Sons, New York, NY, 1966.
[4]. Fogel, D. B.: Evolving Artificial Intelligence. PhD thesis, University of California, San Diego, CA, 1992.
[5]. Holland, J. H.: Adaptation in Natural and Artificial Systems. Ann Arbor: The University of Michigan Press, 1975.
[6]. Back, T. and Schwefel, H.-P.: An overview of evolutionary algorithms for parameter optimization. Evolutionary Computation, 1(1): 1-23, 1993.
[7]. Fogel, D. B.: An Introduction to Simulated Evolutionary Optimization. IEEE Trans. on Neural Networks, 5(1): 3-14, 1994.
[8]. Eberhart, R. C. and Kennedy, J.: A new optimizer using particle swarm theory. Proceedings of the Sixth International Symposium on Micro-machine and Human Science, Nagoya, Japan, pages 39-43, 1995.
[9]. Kennedy, J. and Eberhart, R. C.: Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, pages 1942-1948, 1995.
[10]. Angeline, P. J.: Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, 1998.
[11]. Back, T., Fogel, D. B. and Michalewicz, Z.: Handbook of Evolutionary Computation. IOP Publishing and Oxford University Press, 1997.
[12]. Schwefel, H.-P. : Evolution and Optimum Seeking.
John Wiley & Sons, New York, 1995.
[13]. Törn, A. and Zilinskas, A.: Global Optimization. Springer-Verlag, Berlin, 1989. Lecture Notes in Computer Science, Vol. 350.
[14]. Wolpert, D. H. and Macready, W. G.: No free lunch theorems for search. IEEE Transactions on Evolutionary Computation, 1(1): 67-82, 1997.
[15]. Born, J.: An evolution strategy with adaptation of the step sizes by a variance function. In Voigt, H.-M., Ebeling, W., Rechenberg, I. and Schwefel, H.-P., editors, Parallel Problem Solving from Nature (PPSN) IV, volume 1141 of Lecture Notes in Computer Science, pages 388-397, Berlin, 1996. Springer-Verlag.
[16]. Kappler, C.: Are evolutionary algorithms improved by large mutations? In Voigt, H.-M., Ebeling, W., Rechenberg, I. and Schwefel, H.-P., editors, Parallel Problem Solving from Nature (PPSN) IV, volume 1141 of Lecture Notes in Computer Science, pages 346-355, Berlin, 1996. Springer-Verlag.
[17]. Duan, M. and Povinelli, R.: Nonlinear Modeling: Genetic Programming vs. Fast Evolutionary Programming. Intelligent Engineering Systems Through Artificial Neural Networks (ANNIE 2001), pages 171-176, 2001.
[18]. Wang, H., Liu, Y., Li, C. and Zeng, S.: A Hybrid Particle Swarm Algorithm with Cauchy Mutation. Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pages 356-365, USA.
[19]. Li, C., Liu, Y., Zhao, A., Kang, L. and Huang, H.: A Fast Particle Swarm Optimization Algorithm with Cauchy Mutation and Natural Selection Strategy. Lecture Notes in Computer Science 4683, pages 157-168, Springer-Verlag Berlin Heidelberg, 2007.
[20]. Lin, G. et al : Self-Adaptive Search Step Size made
Differential Evolution Faster. To be published. 2008.