
Particle Swarm Optimization

Millie Pant
Associate Professor
Department of Applied Science and
Engineering,
Saharanpur Campus of IIT Roorkee
millifpt@iitr.ac.in

CONTENTS
Optimization
Optimization Models
Particle Swarm Optimization
Variants of PSO
Some test problems
Applications of PSO
References
Introduction to optimization

Optimization
The process of finding the optimal value of a function is called optimization.
The function to be optimized is called the objective function.
The domain over which it is optimized is generally specified by a set of equalities or inequalities called constraints.
Optimization therefore seeks those values of the decision variables that do not violate the constraints and, at the same time, optimize the objective function.

Most General Optimization Problem

Unconstrained Optimization Problem:

Minimize (or Maximize) f(x), x = (x1, x2, ..., xD)
f: R^D → R, subject to a_i ≤ x_i ≤ b_i

Constrained Optimization Problem:

Minimize (or Maximize) f(x), x = (x1, x2, ..., xD)
Subject to:
g_i(x) ≤ 0, i = 1, 2, ..., m
h_j(x) = 0, j = m+1, m+2, ..., p
where f, g1, g2, ..., gm, h_{m+1}, h_{m+2}, ..., h_p are real-valued functions defined on R^D.
An example of an unconstrained optimization problem

Fitness function:
Min f(x) = Σ_{i=1}^{n} x_i²
Subject to −5 ≤ x_i ≤ 5 (box constraints)

x is the decision variable; the solution space ranges from −5 to 5 in each dimension and forms a rectangular box.
The optimal solution is given by x_i = 0 with fmin = 0.
Local and Global Optimal Solution

[Figure: a function with local and global maxima and local and global minima marked.]
LOCAL AND GLOBAL OPTIMAL SOLUTION

Let S be the set of all feasible solutions. Then, for X* ∈ S (for a minimization problem):

Local Optimal Solution: f(X*) ≤ f(X) for all X ∈ S ∩ N(X*), where N(X*) is an ε-neighbourhood of X*.

Global Optimal Solution: f(X*) ≤ f(X) for all X ∈ S.

[Figure: global and local optima.]
CLASSIFICATION
On the basis of:
(i) existence of constraints
(ii) nature of the design variables
(iii) deterministic nature of the variables
(iv) nature of the equations involved
(v) number of objective functions
(vi) convexity of the functions and decision variables
BASED ON EXISTENCE OF CONSTRAINTS

Unconstrained Optimization Problem:
Min (Max) f(X)
where X = (x1, x2, x3, ..., xn), f: R^n → R

Constrained Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (x1, x2, x3, ..., xn), f: R^n → R
BASED ON NATURE OF EQUATIONS INVOLVED

Linear Optimization Problem:
Min (Max) f(X) = c^T x
s.t.
Ax ≤ B
x ≥ 0
where X = (x1, x2, x3, ..., xn), A is m×n, B is m×1, c^T = (c1, c2, ..., cn), f: R^n → R

Non-linear Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (x1, x2, x3, ..., xn) and f, g, h are non-linear, f: R^n → R
BASED ON NUMBER OF OBJECTIVE FUNCTIONS

Single-Objective Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (x1, x2, x3, ..., xn), f: R^n → R

Multi-Objective Optimization Problem:
Min (Max) f1(x), f2(x), ..., fl(x)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (x1, x2, x3, ..., xn) and each fk: R^n → R
BASED ON CONVEXITY OF FUNCTIONS

Convex Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (X1, X2, X3, ..., Xn) and f, g, h are convex, f: R^n → R

Non-Convex Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (X1, X2, X3, ..., Xn) and any of f, g and h is non-convex, f: R^n → R
BASED ON NATURE OF DESIGN VARIABLES

Discrete Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (X1, X2, X3, ..., Xn), all Xi ∈ S, where S is a discrete set, f: R^n → R

Continuous Optimization Problem:
Min (Max) f(X)
s.t.
gi(x) ≤ 0, i = 1, 2, ..., m
hj(x) = 0, j = 1, 2, ..., p
where X = (X1, X2, X3, ..., Xn), each Xi belongs to a continuous set, f: R^n → R
Taxonomy of optimization problems

Constrained vs. Unconstrained Optimization Problems
Linear vs. Non-linear Optimization Problems
Single-objective vs. Multi-objective Optimization Problems
Convex vs. Non-convex Optimization Problems
Discrete vs. Continuous Optimization Problems
Solution Methodologies
SOLVING METHODS (some classical methods)

Simplex Method: for Linear Programming Problems

Direct search methods (unconstrained):
Golden Section Method
Fibonacci Search Method
Bisecting Search Method

Gradient-based methods (unconstrained):
Newton-Raphson Method

Methods for multidimensional search (unconstrained):
Cyclic Co-ordinate Method
Hooke and Jeeves Method
Steepest Descent Method (gradient based)
Conjugate Direction Method (gradient based)
SOLVING METHODS

For non-linear constrained problems:
Quadratic Programming
Geometric Programming
Dynamic Programming
But what about more complex problems?
Optimization Methods: Deterministic and Probabilistic
COMPARISON BETWEEN PROBABILISTIC AND DETERMINISTIC OPTIMIZATION TECHNIQUES
Property | Probabilistic | Deterministic
Search space | Population of potential solutions | Trajectory by a single point
Motivation | Nature-inspired selection and social adaptation | Mathematical properties (gradient, Hessian, etc.)
Applicability | Domain independent, applicable to a variety of problems | Applicable to a specific problem domain
Point transition | Probabilistic | Deterministic
Prerequisites | An objective function to be optimized | Auxiliary knowledge such as gradient vectors
Initial guess | Automatically generated by the algorithm | Provided by the user
Flow of control | Mostly parallel | Mostly serial
Results | Global optimum more probable | Local optimum, dependent on the initial guess
Advantages | Global search, parallel, speed | Convergence proof
Drawbacks | No general convergence proof | Locality, computational cost
NATURE INSPIRED ALGORITHMS

Based on the theory of evolution (Evolutionary Algorithms):
Genetic Algorithms
Evolutionary Programming
Differential Evolution

Based on the behaviour of species (Swarm Intelligence (SI) Algorithms):
Particle Swarm Optimization
Artificial Bee Colony
Ant Colony Optimization
Glowworm Swarm
Firefly

Based on other natural phenomena:
Artificial Neural Networks
Artificial Immune Systems
Intelligent Water Drop Algorithms
Weed Algorithm
Gravitational Search Algorithm
SOME PROBABILISTIC METHODS

Genetic Algorithm (GA) (1975)


Artificial Immune Systems (AIS) (1976)
Geometric Programming (GP) (1980)
Simulated Annealing (SA) (1983)
Ant Colony Optimization (ACO) (1992)
Tabu Search (TS) (1995)
Scatter Search (SS) (1995)
Differential Evolution (DE) (1995)
Particle Swarm Optimization (PSO) (1995)
Memetic Algorithms (MAs) (1999)
Self Organizing Migrating Algorithms (SOMA) (2000)
Artificial Bee Colony (ABC) (2005)
Bio-Geography based optimization (BGO) (2007)
Bacterial Foraging Optimization (2007)
Cuckoo Search (2009)

Nature Inspired Algorithm (NIA)
Nature Inspired Algorithms (NIA) are a relatively new addition to the class of population-based stochastic search techniques, based on the self-organising collective processes found in nature and in human artefacts.

All NIA are based on a natural or social metaphor.

More importantly, nature has been very successful in solving highly complex problems.

At a very basic level, living organisms have an urge to survive: they have to search for food, hide from predators and harsh weather, mate, organize their homes, and so on.
Nature Inspired Algorithm (NIA) Cont.
The motivation of NIA is to provide solutions to problems that cannot be resolved by more traditional techniques, such as linear, non-linear, and dynamic programming.
Among all natural computing approaches, computational algorithms and systems inspired by nature are the oldest and the most popular ones.


Particle Swarm Optimization

As described by the inventors, James Kennedy and Russell Eberhart,

the particle swarm algorithm imitates human (or insect) social behaviour. Individuals interact with one another while learning from their own experience, and gradually the population members move into better regions of the problem space.

Mechanism of PSO
Social and cooperative behaviour displayed by various natural species.
Swarm Behaviour

Intelligent, collective and synchronized behaviour without any leader.

Mutual interaction among individuals following simple rules such as:
Stay near to neighbours
Avoid collision with neighbours
Match their velocity to that of their neighbours
PSO applications
Problems with continuous, discrete, or mixed search spaces, with multiple local minima; problems with constraints; multiobjective and dynamic optimization.

Evolving neural networks:
Human tumor analysis
Computer numerically controlled milling optimization
Battery pack state-of-charge estimation
Real-time training of neural networks (diabetes among Pima Indians)
Pressure vessel (design a container of compressed air, with many constraints)
Compression spring (cylindrical compression spring with certain mechanical characteristics)
Moving Peaks (dynamic environment with multiple peaks)
Particle Swarm Optimization (PSO)
PSO was developed in 1995 by James Kennedy (social-
psychologist) and Russell Eberhart (electrical engineer).
PSO is a robust stochastic optimization technique based on
the movement and intelligence of swarms.
It applies the concept of social interaction to problem solving.
It uses a number of agents (particles) that constitute a
swarm moving around in the search space looking for the
best solution.
Each particle is treated as a point in an N-dimensional space which adjusts its flying according to its own flying experience as well as the flying experience of other particles.
Particle Swarm Optimization (PSO)

Each particle keeps track of its coordinates in the solution space which are associated with the best solution (fitness) that it has achieved so far. This value is called the personal best, pbest.
Another best value tracked by PSO is the best value obtained so far by any particle in the neighborhood of that particle. This value is called gbest.
The basic concept of PSO lies in accelerating each particle toward its pbest and the gbest locations, with a random weighted acceleration at each time step.
PSO terminology
Particles/ Swarm Size
Velocity Vector
Position vector
pbest (pb)
gbest (gb)
Inertia weight (w)
Acceleration constants (c1, c2)

BASIC EQUATIONS GOVERNING THE WORKING OF PSO

Velocity update:
v_id = v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)    (1)

Position update:
x_id = x_id + v_id    (2)
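As a minimal sketch of how equations (1) and (2) are applied, the following Python fragment updates a single component of one particle; the names mirror the slide's notation, and c1 = c2 = 2 is only an illustrative choice (the value also used in the worked example later), not a requirement of the method.

```python
import random

def update_component(x, v, pbest, gbest, c1=2.0, c2=2.0):
    r1, r2 = random.random(), random.random()                # r1, r2 uniform in [0, 1)
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)    # eq. (1): velocity update
    x = x + v                                                # eq. (2): position update
    return x, v
```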
A closer look

v_id = v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)

v_id (first term): inertia of the previous velocity
c1 r1 (p_id − x_id): cognitive component, the personal thinking of the particle
c2 r2 (p_gd − x_id): social component, cooperation among particles
c1, c2: acceleration constants
r1, r2: uniformly generated random numbers in the range [0, 1]
Particle Swarm Optimization (PSO)

Each particle tries to modify its position using the following information:
the current positions,
the current velocities,
the distance between the current position and pbest,
the distance between the current position and gbest.
Understanding PSO: step-by-step procedure

Minimize f(x) = x1² + x2²
subject to −5 ≤ x1, x2 ≤ 5

fmin = 0
Computational Steps

1) Initialize the swarm by assigning a random position in


the problem hyperspace to each particle.
2) Evaluate the fitness function for each particle.
3) For each individual particle, calculate the personal best
position.
4) Identify the particle that has the best fitness value. This
particle will be called the global best particle.
5) Update the velocities and positions of all the particles
using (1) and (2).
6) Repeat steps 2-5 until a stopping criterion is met (e.g.,
maximum number of iterations or a sufficiently good fitness
value).
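A compact sketch of steps 1-6 in Python for the 2-D sphere example above, assuming NumPy is available. The swarm size, iteration count and the coefficients c1 = c2 = 2 are illustrative choices; it reproduces the procedure, not the exact numbers of the hand-worked example below (the random numbers differ), and it omits the inertia weight, matching equations (1) and (2) as given.

```python
import numpy as np

def pso_sphere(n_particles=5, dim=2, iters=100, c1=2.0, c2=2.0,
               xmin=-5.0, xmax=5.0, seed=0):
    rng = np.random.default_rng(seed)

    def f(P):
        return np.sum(P ** 2, axis=1)                       # sphere fitness, one value per particle

    X = xmin + rng.random((n_particles, dim)) * (xmax - xmin)   # step 1: random positions in the box
    V = rng.random((n_particles, dim))                       # initial velocities in (0, 1)
    pbest, pbest_val = X.copy(), f(X)                        # steps 2-3: personal bests
    g = np.argmin(pbest_val)                                 # step 4: global best particle
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    for _ in range(iters):                                   # step 6: repeat steps 2-5
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # step 5: eq. (1)
        X = np.clip(X + V, xmin, xmax)                           # eq. (2), truncated to the bounds
        vals = f(X)
        improved = vals < pbest_val                          # update personal bests
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        g = np.argmin(pbest_val)                             # update the global best
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

print(pso_sphere())   # best position and fitness found; results vary with the seed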
Generate the initial population by scaling
the random numbers within the given
range
xi,j= xmin,j + rand(0, 1)(xmax,j-xmin,j)

where xmin,j and xmax,j are lower and upper bounds for jth
component respectively, rand(0,1) is a uniform random
number between 0 and 1.

In the present numerical example


xmin,j = -5 and xmax,j = 5

Population Initialization

Generate a swarm S of size 5 in the feasible domain


Let S = {X1, X2, X3, X4, X5}
X1 = (2.7045 4.8030)
X2 = (4.5974 2.8793)
X3 = (1.8710 4.0528)
X4 = (1.6400 1.3202)
X5 = (3.3392 0.9963)

Points are generated randomly using


uniform distribution in the range (-5, 5).

Evaluation of Fitness Function Value
Fitness function is determined by evaluating the function
value for each particle
f(X1) = 2.7045² + 4.8030² = 30.3831
Similarly
f(X2) = 29.4265
f(X3)= 19.9258
f(X4 )=4.4325
f(X5)= 12.1429

Best value obtained at X4 with fmin = 4.4325

Generate an initial velocity for all the
particles
Generate velocity vector uniformly in the range
(0, 1).
V1 = (0.4752 0.6987)
V2 = (0.4141 0.4020)
V3 = (0.7797 0.9433)
V4 = (0.6183 0.4749)
V5 = (0.2530 0.9398)

The minimum is 4.4325, i.e. X4 is the best solution of this swarm.
Call it gbest.
For the initial population, gbest = X4.
Now we proceed to the next iteration using the PSO update equations.

For the first particle

For the first component

Velocity update:
v11 = v11 + c1 r1 (pbest11 − x11) + c2 r2 (gbest1 − x11)

v11 = 0.4752 + 2*0.34*(2.7045 − 2.7045) + 2*0.86*(1.6400 − 2.7045)
    = −1.35574

Position update:
x11 = x11 + v11
x11 = 2.7045 + (−1.35574)
    = 1.34876

(Out-of-bound values may be truncated to bring them within range.)
Since 1.34876 lies in the range (−5, 5), we accept this value.

All calculations are carried out component-wise.
For the second component

v12 = v12 + c1 r1 (pbest12 − x12) + c2 r2 (gbest2 − x12)

v12 = 0.6987 + 2*0.47*(4.8030 − 4.8030) + 2*0.91*(1.3202 − 4.8030)
    = −4.368362

x12 = x12 + v12
x12 = 4.8030 + (−4.368362)
    = 0.434638

Since 0.434638 also lies in the range (−5, 5), we accept it. Thus the first particle after the PSO update equations becomes
New X1 = (1.34876, 0.434638)
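For reference, the hand calculation for the first component can be reproduced in a few lines of Python, using the r1 and r2 values quoted on the slide; this is only a quick check of the arithmetic.

```python
c1 = c2 = 2.0
x11, v11 = 2.7045, 0.4752           # current position and velocity (first component)
pbest11, gbest1 = 2.7045, 1.6400    # personal best and global best (first component)
r1, r2 = 0.34, 0.86                 # random numbers quoted on the slide

v11 = v11 + c1 * r1 * (pbest11 - x11) + c2 * r2 * (gbest1 - x11)
x11 = x11 + v11
print(v11, x11)                     # approximately -1.35574 and 1.34876
```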

Update all the particles using the same procedure.
Second Particle:
v21 = 0.4141+2*0.34*(4.5974 -4.5974)+ 2*0.86*(1.6400 -4.5974)
= -4.672628
x21 = 4.5974 +(- 4.672628)
= -0.075228

v22 = 0.4020+2*0.12*(2.8793-2.8793)+ 2*0.06*(1.3202-2.8793)


= 0.214908
x22 = 2.8793 + 0.214908
= 3.094208
Thus the second particle after PSO update equations becomes:
X2 = (- 0.075228, 3.094208)

Third Particle:
v31 = 0.7797+2*0.98*(1.8710 -1.8710)+ 2*0.86*(1.6400 -1.8710)
= 0.38238
x31 = 1.8710 + 0.38238
= 2.25338
v32 = 0.9433+2*0.69*(4.0528-4.0528)+ 2*0.34*(1.3202-4.0528)
= -0.914868
x32 = 4.0528+(-0.914868)
= 3.137932
Thus the third particle after PSO update equations becomes:
X3 = (2.25338, 3.137932)

Fourth Particle:
v41 = 0.6183+2*0.18*(1.6400 -1.6400)+ 2*0.23*(1.6400 -1.6400)
= 0.6183
x41 = 1.6400+(0.6183)
= 2.2583
v42 = 0.4749+2*0.61*(1.3202-1.3202)+ 2*0.04*(1.3202-1.3202)
= 0.4749
x42 = 1.3202+(0.4749)
= 1.7951
Thus the fourth particle after PSO update equations becomes:
X4 = (2.2583, 1.7951)

Fifth Particle:
v51 = 0.2530+2*0.09*(3.3392 -3.3392)+ 2*0.39*(1.6400 -3.3392)
= -1.072376
x51 = 3.3392+ (-1.072376)
= 2.266824
v52 = 0.9398+2*0.65*(0.9963-0.9963)+ 2*0.10*(1.3202-0.9963)
= 1.00458
x52 = 0.9963+(1.00458)
= 2.00088
Thus the fifth particle after PSO update equations becomes:
X5 = (2.266824, 2.00088)

Finally the updated swarm is
X1 = (1.34876, 0.434638)
X2 = (-0.075228, 3.094208)
X3 = (2.25338, 3.137932)
X4 = (2.2583, 1.7951)
X5 = (2.266824, 2.00088)
And the Corresponding fitness value is
f(X1) = 2.00806
f(X2) = 9.57978
f(X3) = 14.9243
f(X4) = 8.32266
f(X5) = 9.14201
Initial Swarm                  Updated Swarm
X1 = (2.7045, 4.8030)          X1 = (1.34876, 0.434638)
X2 = (4.5974, 2.8793)          X2 = (-0.075228, 3.094208)
X3 = (1.8710, 4.0528)          X3 = (2.25338, 3.137932)
X4 = (1.6400, 1.3202)          X4 = (2.2583, 1.7951)
X5 = (3.3392, 0.9963)          X5 = (2.266824, 2.00088)

Corresponding fitness values:
f(X1) = 30.3831                f(X1) = 2.00806
f(X2) = 29.4265                f(X2) = 9.57978
f(X3) = 19.9258                f(X3) = 14.9243
f(X4) = 4.4325                 f(X4) = 8.32266
f(X5) = 12.1429                f(X5) = 9.14201

Clearly, the least fitness in the updated swarm is 2.00806 and it corresponds to the first particle.
Therefore, gbest for the updated swarm is X1.
Now compare this gbest with the previous gbest; the updated gbest is obviously better, so we replace the old gbest with the new one.
If the updated gbest were not better, the old gbest would be carried forward.
UPDATE MECHANISM OF PBEST

Note that gbest is for the whole swarm, while pbest is for a particular particle.

For the first particle:
Fitness in the previous swarm = 30.3831
Fitness in the current swarm = 2.00806

The current fitness is less than the previous one, so we set pbest1 = (1.34876, 0.434638).
In the opposite case, i.e. if the current fitness were not less than the previous one, the old pbest would be retained.
Similarly,

For second particle: pbest2 =(-0.075228, 3.094208)


For third particle: pbest3 =(2.25338, 3.137932)
For fourth particle: pbest4 =(1.6400, 1.3202)
For fifth particle: pbest5 = (2.266824, 2.00088)

The gbest will now be X1

The same procedure is continued until some termination criterion is met.
[Figure: scatter plots of the initial swarm and the updated swarm positions in the search space.]
[Figure: the initial search space and the contracted search space after the update.]
Graphical representation of working of PSO

Basic Parameters of PSO

Swarm Size (initial population)

Inertia Weight

Acceleration Constants

Recall of the basic velocity equation

v_id = v_id + c1 r1 (p_id − x_id) + c2 r2 (p_gd − x_id)

v_id (first term): inertia of the previous velocity
c1 r1 (p_id − x_id): personal thinking of the particle
c2 r2 (p_gd − x_id): cooperation among particles
c1, c2: acceleration constants
r1, r2: uniformly generated random numbers in the range [0, 1]
Inertia weight

The inertia weight can be used to control exploration and exploitation:

For w ≥ 1: velocities increase over time and the swarm diverges.
For 0 < w < 1: particles decelerate; convergence depends on the values of c1 and c2.
For w < 0: velocities decrease over time, eventually reaching 0; convergent behaviour.

Empirical results suggest that a constant inertia weight w = 0.7298 with c1 = c2 = 1.49618 provides good convergence behaviour.
Eberhart and Shi also suggested the use of a linearly decreasing inertia weight, typically from 0.9 to 0.4.
It narrows the search, gradually changing it from an exploratory to an exploitative mode.
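A minimal sketch of the linearly decreasing inertia weight described above (0.9 down to 0.4 over the run); the function and argument names are illustrative, not part of the original slides.

```python
def linear_inertia(t, max_iter, w_start=0.9, w_end=0.4):
    """Inertia weight at iteration t, decreasing linearly from w_start to w_end."""
    return w_start - (w_start - w_end) * t / max_iter

# With an inertia weight, the velocity update of eq. (1) becomes:
# v = w * v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
```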
Analysis of Acceleration coefficients
c1>0, c2=0: independent hill-climbers; local search by each
particle.
c1=0, c2>0: swarm is one stochastic hill-climber.
c1=c2>0: particles are attracted towards the average of pi
and pg.
c2>c1: more beneficial for unimodal problems.
c1>c2: more beneficial for multimodal problems.
low c1 and c2: smooth particle trajectories.
high c1 and c2: more acceleration, abrupt movements.
Adaptive acceleration coefficients have also been proposed.
c1: importance of the personal best
c2: importance of the neighborhood best
For example, c1 and c2 can be decreased over time.

Cognition-only model: v_id = v_id + c1 r1 (p_id − x_id)
Social-only model: v_id = v_id + c2 r2 (p_gd − x_id)
Algorithmic Aspects

Number of times an algorithm is


executed
Average fitness function value and
standard deviation
Number of function evaluations
Average CPU time
Success Rate

When to stop the algorithm
Maximum number of generations
Maximum number of function evaluations
Specifying an accuracy criterion, e.g. |f_max − f_min| ≤ 10^-5
Validating the
algorithm
Efficiency
Reliability
Robustness

Sufficient number of test problems with


varying complexity
Comparison with other techniques
Graphical representation

Comparison with other evolutionary computation techniques
Unlike genetic algorithms, evolutionary programming and evolution strategies, PSO has no selection operation.
All particles in PSO are kept as members of the population throughout the course of the run.
PSO is the only one of these algorithms that does not implement survival of the fittest.
There is no crossover operation in PSO.
The position update (eq. 2) resembles mutation in EP.
In EP the balance between global and local search can be adjusted through the strategy parameter, while in PSO the balance is achieved through the inertia weight factor w of the velocity update (eq. 1).
Benchmark Problems

Rastrigin Function

f1(x) = Σ_{i=1}^{n} (x_i² − 10 cos(2π x_i) + 10)

A highly multimodal function where the degree of multimodality increases with the dimension of the problem.
Spherical function

f2(x) = Σ_{i=1}^{n} x_i²

A continuous, strictly convex and unimodal function that usually does not pose much difficulty for an optimization algorithm. It can be used to test convergence speed.
Griewank Function

f3(x) = (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i / √i) + 1

A highly multimodal function with many local minima.
Rosenbrock Function

f4(x) = Σ_{i=1}^{n-1} [100 (x_{i+1} − x_i²)² + (x_i − 1)²]

The search space is dominated by a large, gradual slope. Despite the apparent simplicity of the problem, it is considered difficult for search algorithms because of its extremely large search space combined with a relatively small region around the global optimum.
But PSO is no philosopher's stone

Shortcomings of PSO
Important features of an algorithm that influence its performance

Exploration vs Exploitation

Exploration is the ability of the algorithm to search for new individuals far from the current individual (current solution) in the search space.

Exploitation is the search of the area surrounding the current solution, something like a local search.

Initial phase: exploration. Later phase: exploitation.
Diversity
The manner in which the solutions are dispersed in the entire search space.
Mathematically:

Diversity(S(t)) = (1/n_s) Σ_{i=1}^{n_s} √( Σ_{j=1}^{n_x} (x_ij(t) − x̄_j(t))² )
S is the swarm
n_s = |S| is the swarm size
n_x is the dimensionality of the problem
x_ij is the j-th component of the i-th particle
x̄_j(t) is the average of the j-th dimension over all particles:

x̄_j(t) = ( Σ_{i=1}^{n_s} x_ij(t) ) / n_s
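A short sketch of this diversity measure in Python, assuming NumPy and a swarm stored as an (n_s x n_x) array of positions; it computes the average distance of the particles from the swarm's mean point.

```python
import numpy as np

def diversity(X):
    """Average distance of the particles from the swarm's mean point."""
    x_bar = X.mean(axis=0)                                   # average of each dimension (x̄_j)
    return np.mean(np.sqrt(np.sum((X - x_bar) ** 2, axis=1)))
```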
Mechanisms to improve the
performance of PSO

Use of a suitable initialization technique

Tuning of Parameters

Hybridized Algorithms

Inclusion of additional operators

Inclusion of local search methods

Initiating the swarm
Instead of computer-generated pseudo-random numbers, more sophisticated sequences such as the Sobol and Faure sequences can be used. These sequences produce quasi-random numbers that cover the search space more evenly than a simple uniform distribution.
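A sketch of quasi-random swarm initialization using SciPy's Sobol sampler (scipy.stats.qmc, available in SciPy 1.7 and later); the swarm size and the (-5, 5) bounds follow the running example and are only illustrative.

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=True)          # 2-dimensional Sobol sequence
unit_points = sampler.random(n=8)                # 8 points in the unit square [0, 1)^2
swarm = qmc.scale(unit_points, l_bounds=[-5, -5], u_bounds=[5, 5])
print(swarm)
```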

[Figure: swarm initialized with computer-generated pseudo-random numbers vs. a Sobol sequence.]
Tuning of parameters
Dynamic adjustment of the inertia weight:
1. Random adjustments
A different inertia weight is randomly selected at each iteration.
Eg 1: use of a probability distribution to generate the inertia weight.
Eg 2: w = (c1 r1 + c2 r2)
2. Linearly decreasing inertia weight
An initially large inertia weight is decreased linearly to a smaller inertia weight (0.9 to 0.4).
3. Non-linearly decreasing inertia weight
An initially large value decreases non-linearly to a smaller value. Non-linear decreasing methods allow a shorter exploration time than linear methods and are more appropriate for smoother search spaces.
4. Fuzzy adaptive inertia weight
The inertia weight is adjusted dynamically on the basis of fuzzy sets and rules.
Variations in Acceleration Coefficients
Adjust the coefficients dynamically, giving importance to pbest in the beginning and to gbest in later stages, i.e. decrease c1 linearly with time while increasing c2 linearly with time (a sketch follows below).
Use of fuzzy adaptive rules.
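A minimal sketch of the linear schedule just described, with c1 decreasing and c2 increasing over the run; the start and end values (2.5 and 0.5) are illustrative only, not prescribed by the slides.

```python
def acceleration_coefficients(t, max_iter, c_max=2.5, c_min=0.5):
    c1 = c_max - (c_max - c_min) * t / max_iter   # cognitive coefficient: large early, small late
    c2 = c_min + (c_max - c_min) * t / max_iter   # social coefficient: small early, large late
    return c1, c2
```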
Hybridization

Use of additional operators borrowed
from other EA
Mutation : Mutate the gbest position or
mutate the whole swarm
Reproduction : pbest and gbest
positions can be combined suitably to
generate a new particle

The operators can be applied periodically


or can be applied according to some
criterion

Mutation PSO
Changing the position vector using
various probability distributions like
Uniform, Gaussian, Cauchy etc.

NUMERICAL RESULTS SHOW THAT


CAUCHY DISTRIBUTION IS BETTER
THAN UNIFORM AND GAUSSIAN
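A sketch of a Cauchy mutation applied to the gbest position, assuming NumPy; the scale parameter and the (-5, 5) clipping bounds are illustrative choices, not values given on the slides.

```python
import numpy as np

def cauchy_mutate(gbest, scale=0.1, xmin=-5.0, xmax=5.0, rng=None):
    """Return a mutated copy of gbest with heavy-tailed Cauchy noise."""
    rng = rng or np.random.default_rng()
    trial = gbest + scale * rng.standard_cauchy(size=gbest.shape)
    return np.clip(trial, xmin, xmax)     # keep the mutant inside the box constraints
```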

Recombination in PSO

Addition of a suitable recombination operator can improve the performance of the algorithm.
Some popular recombination operators in GA:
Some popular recombination operators in
GA
Arithmetic operator
PCX operator
SBX operator

Hybrid Approach
Hybridizing PSO with a Local Search Method
Hybridizing PSO with some other
Evolutionary Algorithm

well known local search methods like:


steepest descent,
golden section technique,
Powell method
etc can be included in later iterations for thorough
search of the region
Variants of PSO

Discrete PSO to handle discrete binary variables.


Self Adaptive PSO to overcome the drawback of parameter
tuning.
Quantum PSO adding concepts of quantum mechanics
Chaotic PSO using chaos theory.
Multiobjective PSO for solving multiobjective optimization
problems
Constrained Optimization Problems

Minimize / Maximize f(x)
subject to:
g_j(x) ≤ 0, j = 1, ..., p
h_k(x) = 0, k = 1, ..., q
x_i,min ≤ x_i ≤ x_i,max, i = 1, ..., n
Constraint handling approaches:
Reject infeasible solutions
Penalty function methods
Preserving feasibility methods

Comparison rules for particles (see the sketch below):
If the new particle and the previous particle are both feasible, select the better one.
If both particles are infeasible, select the one having the smaller constraint violation.
If one is feasible and the other is infeasible, select the feasible one.
Also at the end of every iteration, the particles are sorted
by using the three criteria:
Sort feasible solutions before infeasible solutions
Sort feasible solutions according to their fitness
function values
Sort infeasible solutions according to their constraint
violations.
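A minimal sketch of the pairwise comparison rules listed above; f is the objective and violation(x) is a stand-in for the total constraint violation (zero when x is feasible). Both names are hypothetical placeholders, not part of the original slides.

```python
def select(x_new, x_old, f, violation):
    """Feasibility-based comparison of a new particle with its previous position."""
    v_new, v_old = violation(x_new), violation(x_old)
    if v_new == 0 and v_old == 0:                 # both feasible: keep the better fitness
        return x_new if f(x_new) <= f(x_old) else x_old
    if v_new > 0 and v_old > 0:                   # both infeasible: keep the smaller violation
        return x_new if v_new < v_old else x_old
    return x_new if v_new == 0 else x_old         # otherwise the feasible one wins
```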

Some PSO variants (selected papers, 2002-2006)
ARPSO (Riget and Vesterstrom, 2002) uses a diversity measure to alternate between two phases;
Dissipative PSO (Xie, et al., 2002) increasing randomness;
PSO with self-organized criticality (Lovbjerg and Krink, 2002) aims to improve diversity;
FDR-PSO (Veeramachaneni, et al., 2003) using nearest neighbour interactions;
PSO with mutation (Higashi and Iba, 2003; Stacey, et al., 2004);
DEPSO (Zhang and Xie, 2003) aims to combine DE with PSO;
Self-organizing Hierarchical PSO (Ratnaweera, et al., 2004);
Cooperative PSO (van den Bergh and Engelbrecht, 2005) a cooperative approach;
CLPSO (Liang, et al., 2006) incorporates learning from more previous best particles;
Tribes (Clerc, 2006) aims to adapt the population size so that it does not have to be set by the user; Tribes has also been used for discrete or mixed (discrete/continuous) problems.
Selected PSO variants (2014-2015)

Bare-bones particle swarm optimization
with disruption operator
Hao Liu, Guiyan Ding, Bing Wang, Applied Mathematics and Computation, 2014

To enhance population diversity and


speed up convergence rate of BPSO, this
paper proposes a novel disruption strategy,
originating from astrophysics, to shift the
abilities between exploration and
exploitation during the search process.

HEPSO: High exploration particle swarm
optimization
M.J. Mahmoodabadi, Z. Salahshoor Mottaghi, A. Bagheri, Information Sciences, 2014
In this paper, a new optimization method based on
the combination of PSO and two novel operators is
introduced in order to increase the exploration
capability of the PSO algorithm (HEPSO). The first
operator is inspired by the multi-crossover mechanism
of the genetic algorithm, and the second operator
uses the bee colony mechanism to update the
position of the particles.

A diversity-guided hybrid particle swarm
optimization based on gradient search
Fei Han, Qing Liu, Neurocomputing, 2014

In this paper, a diversity-guided hybrid PSO based on


gradient search is proposed to improve the search ability of
the swarm. The adaptive PSO is first used to search the
solution till the swarm loses its diversity. Then, the search
process turns to a new PSO (DGPSOGS), and the particles update their velocities with their gradient directions as well as repel each other to improve the swarm diversity.
Depending on the diversity value of the swarm, the proposed hybrid method switches alternately between the two PSOs.

Adaptive acceleration coefficients for a new
search diversification strategy in particle
swarm optimization algorithms
Guido Ardizzon, Giovanna Cavazzini, Giorgio Pavesi, Information Sciences, 2015
The paper presents a novel paradigm of the original particle
swarm concept, based on the idea of having two types of
agents in the swarm; the explorers and the settlers, that
could dynamically exchange their role in the search process.
The explorers' task is to continuously explore the search domain, while the settlers refine the search in the promising region currently found by the swarm.

Competitive and cooperative particle swarm
optimization with information sharing mechanism for
global optimization
problems
Yuhua Li, Zhi-Hui Zhan, Shujin Lin, Jun Zhang, Xiaonan Luo, Information Sciences, 2015
This paper proposes an information sharing mechanism (ISM) to
improve the performance of particle swarm optimization (PSO).
The ISM allows each particle to share its best search
information, so that all the other particles can take advantage of
the shared information by communicating with it. In this way,
the particles could enhance the mutual interaction with the
others sufficiently and heighten their search ability greatly by
using the search information of the whole swarm. Also, a
competitive and cooperative (CC) operator is designed for a
particle to utilize the shared information in a proper and efficient
way.
Topics of research
How to make the search more
systematic ?
How to make the search more
controllable ?
How to make the performance scalable?
Constraint handling
Multi objective optimization problems
Discrete/integer/binary/combinatorial
optimization
Adaptive parameters
Textbooks
David Corne, Marco Dorigo and Fred Glover, New Ideas in Optimization, McGraw-Hill, 1999.
Fred Glover and Gary A. Kochenberger, Handbook of Metaheuristics, Kluwer Academic Publishers, 2003.
New Achievements in Evolutionary Computation (edited by Peter Korošec), InTech, 2010.
http://www.intechopen.com/books/newachievements-in-evolutionary-computation
Swarm Intelligence by Kennedy and Eberhart (2001)
Bibliography: www.computelligence.org/pso/bibliography.htm

Survey paper
Christian Blum and Andrea Roli, Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison, ACM Computing Surveys, Vol. 35, No. 3, September 2003, pp. 268-308.
References
1. Osman, I.H., and Laporte, G. Metaheuristics: A bibliography. Ann. Oper. Res. 63, 513-623, 1996.
2. Blum, C., and Roli, A. Metaheuristics in Combinatorial Optimization: Overview and Conceptual Comparison. ACM Computing Surveys, 35(3), 268-308, 2003.
3. Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. Optimization by simulated annealing. Science, 220(4598), 671-680, 1983.
4. Glover, F. Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13, 533-549, 1986.
5. Hansen, P. and Mladenović, N. An introduction to variable neighborhood search. In Metaheuristics: Advances and Trends in Local Search Paradigms for Optimization, S. Voß, S. Martello, I. Osman, and C. Roucairol, Eds. Kluwer Academic Publishers, Chapter 30, 433-458, 1999.
6. Holland, J. H. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, MI, 1975.
7. Dorigo, M. Optimization, learning and natural algorithms (in Italian). Ph.D. thesis, DEI, Politecnico di Milano, Italy, 140 pp., 1992.
8. Kennedy, J. and Eberhart, R. Particle Swarm Optimization. Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE Press, 1995.
References
9. Lin-Yu Tseng and Chun Chen (2008). Multiple Trajectory Search for Large Scale Global Optimization. Proceedings of the 2008 IEEE Congress on Evolutionary Computation, June 2-6, 2008, Hong Kong.
10. Lin-Yu Tseng and Chun Chen (2009). Multiple Trajectory Search for Unconstrained/Constrained Multi-Objective Optimization. Proceedings of the 2009 IEEE Congress on Evolutionary Computation, May 18-21, Trondheim, Norway.
Some useful websites

Particle Swarm Optimization http://www.swarmintelligence.org


Ant Colony Optimization
http://iridia.ulb.ac.be/~mdorigo/ACO/ACO.html
Cellular Automata http://cell-auto.com/
Memetic Algorithms
http://www.ing.unlp.edu.ar/cetad/mos/memetic_home.html
Evolutionary Algorithms
http://www.cs.sandia.gov/opt/survey/ea.html
Tabu Search http://www.cs.sandia.gov/opt/survey/ts.html
Simulated Annealing
http://esa.ackleyshack.com/thesis/esthesis7/node15.html
Greedy Randomized Adaptive Search Procedure
http://www.optimization-online.org/DB_FILE/2001/09/371.pdf
Variable Neighborhood Search
http://citeseer.nj.nec.com/mladenovic97variable.html
Scatter Search http://hces.bus.olemiss.edu/reports/hces0199.pdf
Guided Local Search & Fast Local Search
http://citeseer.nj.nec.com/voudouris95function.html
Immune Systems http://www.artificial-immune-systems.org/
My Research Group
Algorithms
Differential Evolution
Particle Swarm Optimization
Artificial Bee Colony
Applications
Multiobjective Optimization Problems
Constrained Optimization
Discrete Optimization
Image Processing
Thresholding (B/W and Colored Images)
Watermarking
Industrial Optimization
Green Supply Chain
Sustainable supplier selection
Some useful journals
IEEE Transactions on Evolutionary Computation,
IEEE Press
IEEE Transactions on Systems, Man and
Cybernetics, IEEE Press
Applied Soft Computing, Elsevier
Soft Computing, Springer
Information Sciences, Elsevier

Important Conferences
CEC (Congress on Evolutionary Computation)
SIS (Swarm Intelligence Symposium)
GECCO (Genetic and Evolutionary
Computational Conference)
SOCPROS (Soft Computing for Problem
Solving)

Important e-newsletters
EC Digest: http://ec-
digest.research.ucf.edu/

SocProS 2015
Saharanpur Campus of IIT Roorkee
December 18-20, 2015
Want to know more?
A simple mantra
Read Read Read and Read
some more

WIKIPEDIA

