Millie Pant
Associate Professor
Department of Applied Science and
Engineering,
Saharanpur Campus of IIT Roorkee
millifpt@iitr.ac.in
CONTENTS
Optimization
Optimization Models
Particle Swarm Optimization
Variants of PSO
Some test problems
Applications of PSO
References
Introduction to optimization
Optimization
The process of finding the optimal value of a function is called optimization.
The function which has to be optimized is called the objective function.
The domain in which it has to be optimized is generally specified by a set of equalities or inequalities called constraints.
The process of optimization seeks those values of the decision variables which do not violate the constraints and, at the same time, optimize the objective function.
Most General Optimization Problem
Min $f(x)$, where $f: \mathbb{R}^D \to \mathbb{R}$ and $a_i \le x_i \le b_i$
Subject to:
$g_i(x) \le 0, \quad i = 1, 2, \dots, m$
$h_j(x) = 0, \quad j = m+1, m+2, \dots, p$
where $f, g_1, g_2, \dots, g_m, h_{m+1}, h_{m+2}, \dots, h_p$ are real-valued functions defined on $\mathbb{R}^D$.
An example of unconstrained optimization
problem
Fitness function:
Min $f(x) = \sum_{i=1}^{n} x_i^2$
Subject to $-5 < x_i < 5$ (box constraints).
Here x is the decision variable; the solution space ranges from -5 to 5 in each dimension and represents a rectangular region (a box).
The optimal solution is given by $x_i = 0$ with $f_{\min} = 0$.
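As a quick illustration, a minimal Python sketch of this fitness function and its box constraints (the function name and bound constants are ours, not from the slides):

```python
import numpy as np

def sphere(x):
    """Fitness function of the example: f(x) = sum of squared components."""
    return float(np.sum(np.asarray(x) ** 2))

# Box constraints: every component must stay in [-5, 5].
LOWER, UPPER = -5.0, 5.0

print(sphere([2.7045, 4.8030]))  # ~30.3831, the value computed for X1 later
```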
Local and Global Optimal Solution
[Figure slides: illustrations of local and global optimal solutions]
CLASSIFICATION
On the basis of:
(i) existence of constraints
(ii) nature of design variables
(iii) deterministic nature of variables
(iv) nature of equations involved
(v) number of objective functions
(vi) convexity of functions and decision variables
BASED ON EXISTENCE OF CONSTRAINTS

Unconstrained Optimization Problem:
Min (Max) $f(X)$
where $X = (x_1, x_2, x_3, \dots, x_n)$ and $f(x): \mathbb{R}^n \to \mathbb{R}$

Constrained Optimization Problem:
Min (Max) $f(X)$
subject to
$g_i(x) \le 0, \quad i = 1, 2, \dots, m$
$h_j(x) = 0, \quad j = 1, 2, \dots, p$
where $X = (x_1, x_2, x_3, \dots, x_n)$ and $f(x): \mathbb{R}^n \to \mathbb{R}$
BASED ON NATURE OF EQUATIONS INVOLVED
Linear Optimization Problem / Non-linear Optimization Problem, with $f(x): \mathbb{R}^n \to \mathbb{R}$

BASED ON NUMBER OF OBJECTIVE FUNCTIONS
Single-objective / Multi-objective optimization problems

BASED ON CONVEXITY OF FUNCTIONS AND DECISION VARIABLES
Convex Optimization Problem / Non-convex Optimization Problem

BASED ON NATURE OF DESIGN VARIABLES
Discrete Optimization Problem / Continuous Optimization Problem
Solution
Methodologies
SOLVING METHODS
Some classical methods:
Simplex Method (for Linear Programming Problems)
Golden Section Method, Fibonacci Search Method, Bisecting Search Method (Direct Search Methods, unconstrained)
Newton-Raphson Method (Gradient-based methods, unconstrained)
Cyclic Co-ordinate Method
Dynamic Programming
But what about more complex problems?
Optimization Methods
Deterministic / Probabilistic
COMPARISON BETWEEN PROBABILISTIC AND DETERMINISTIC OPTIMIZATION TECHNIQUES
Property | Probabilistic | Deterministic
Search space | Population of potential solutions | Trajectory of a single point
Motivation | Nature-inspired selection and social adaptation | Mathematical properties (gradient, Hessian, etc.)
Applicability | Domain independent, applicable to a variety of problems | Applicable to a specific problem domain
Point transition | Probabilistic | Deterministic
Prerequisites | An objective function to be optimized | Auxiliary knowledge such as gradient vectors
Initial guess | Automatically generated by the algorithm | Provided by the user
Flow of control | Mostly parallel | Mostly serial
Results | Global optimum more probable | Local optimum, dependent on initial guess
Advantages | Global search, parallel | Convergence proof, speed
Drawbacks | No general convergence proof | Locality, computational cost
NATURE INSPIRED ALGORITHMS
Nature Inspired Algorithm (NIA)
Nature Inspired Algorithms (NIA) are a relatively new addition to the class of population-based stochastic search techniques, based on the self-organising collective processes found in nature and in human artefacts.
Mechanism of PSO
As described by its inventors, James Kennedy and Russell Eberhart, PSO is based on the social and cooperative behaviour displayed by various natural species.
Swarm Behaviour
Particle Swarm Optimization (PSO)
PSO was developed in 1995 by James Kennedy (social-
psychologist) and Russell Eberhart (electrical engineer).
PSO is a robust stochastic optimization technique based on
the movement and intelligence of swarms.
It applies the concept of social interaction to problem solving.
It uses a number of agents (particles) that constitute a
swarm moving around in the search space looking for the
best solution.
Each particle is treated as a point in an N-dimensional space which adjusts its flying according to its own flying experience as well as the flying experience of other particles.
BASIC EQUATIONS GOVERNING THE
WORKING OF PSO
velocity vector:
$v_{id} = v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id})$   (1a)
position vector:
$x_{id} = x_{id} + v_{id}$   (1b)
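As an illustrative sketch (not the slides' own code; assumes NumPy arrays and the values c1 = c2 = 2 used in the worked example below), equations (1a) and (1b) for one particle look like:

```python
import numpy as np

def pso_update(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """Apply the velocity update (1a) and position update (1b) to one particle."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)      # fresh U(0,1) numbers
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # (1a)
    x = x + v                                              # (1b)
    return x, v
```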
A closer look
Cognitive component: $c_1 r_1 (p_{id} - x_{id})$, the personal thinking of the particle.
Social component: $c_2 r_2 (p_{gd} - x_{id})$, the cooperation among particles.
Particle Swarm Optimization (PSO): a worked example
Min $f(x) = x_1^2 + x_2^2$, $-5 < x_i < 5$, for which $f_{\min} = 0$.
Computational Steps
Each particle is initialized as
$x_{ij} = x_{\min,j} + \text{rand}(0,1)\,(x_{\max,j} - x_{\min,j})$
where $x_{\min,j}$ and $x_{\max,j}$ are the lower and upper bounds for the jth component respectively, and rand(0,1) is a uniform random number between 0 and 1.
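A minimal sketch of this initialization step (the helper name is ours):

```python
import numpy as np

def init_swarm(n_particles, x_min, x_max, rng=None):
    """x_ij = x_min_j + rand(0,1) * (x_max_j - x_min_j), for each particle i."""
    rng = np.random.default_rng() if rng is None else rng
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    return x_min + rng.random((n_particles, x_min.size)) * (x_max - x_min)

swarm = init_swarm(5, [-5, -5], [5, 5])  # five 2-D particles in the example's box
```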
Population Initialization
The initial swarm of five particles used in the worked example:
X1 = (2.7045, 4.8030)
X2 = (4.5974, 2.8793)
X3 = (1.8710, 4.0528)
X4 = (1.6400, 1.3202)
X5 = (3.3392, 0.9963)
Evaluation of Fitness Function Value
The fitness of each particle is determined by evaluating the function value at its position:
f(X1) = 2.7045^2 + 4.8030^2 = 30.3831
Similarly,
f(X2) = 29.4265
f(X3) = 19.9258
f(X4) = 4.4325
f(X5) = 12.1429
Generate an initial velocity for all the
particles
Generate the velocity vectors uniformly in the range (0, 1):
V1 = (0.4752, 0.6987)
V2 = (0.4141, 0.4020)
V3 = (0.7797, 0.9433)
V4 = (0.6183, 0.4749)
V5 = (0.2530, 0.9398)
The minimum fitness is 4.4325, i.e. X4 is the best solution of this swarm.
Call it gbest: for the initial population, gbest = X4 (see the sketch below).
Now we proceed to the next iteration using the PSO update equations.
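In code, this gbest selection is a one-liner over the fitness values (a sketch using the slide's initial swarm and the sphere fitness from earlier):

```python
import numpy as np

swarm = np.array([[2.7045, 4.8030], [4.5974, 2.8793], [1.8710, 4.0528],
                  [1.6400, 1.3202], [3.3392, 0.9963]])
fitness = np.array([np.sum(p ** 2) for p in swarm])  # 30.38, 29.43, 19.93, 4.43, 12.14
gbest = swarm[np.argmin(fitness)]                    # -> X4 = (1.6400, 1.3202)
```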
For the first particle:
v11 = 0.4752 + 2*r1*(2.7045 - 2.7045) + 2*0.86*(1.6400 - 2.7045)
    = -1.35574
x11 = 2.7045 + (-1.35574)
    = 1.34876
(The cognitive term vanishes because the particle's pbest equals its current position.)
Since 1.34876 lies in the range (-5, 5), we accept this solution. The second component is updated in the same way, giving X1 = (1.34876, 0.43638).
Update all the particles using the same procedure.
Second particle:
v21 = 0.4141 + 2*0.34*(4.5974 - 4.5974) + 2*0.86*(1.6400 - 4.5974)
    = -4.672628
x21 = 4.5974 + (-4.672628)
    = -0.075228
The second component is updated similarly, so the second particle becomes:
X2 = (-0.075228, 3.094208)
Third particle:
v31 = 0.7797 + 2*0.98*(1.8710 - 1.8710) + 2*0.86*(1.6400 - 1.8710)
    = 0.38238
x31 = 1.8710 + 0.38238
    = 2.25338
v32 = 0.9433 + 2*0.69*(4.0528 - 4.0528) + 2*0.34*(1.3202 - 4.0528)
    = -0.914868
x32 = 4.0528 + (-0.914868)
    = 3.137932
Thus the third particle after the PSO update becomes:
X3 = (2.25338, 3.137932)
Fourth particle (here pbest = gbest = X4, so both difference terms vanish and the velocity is unchanged):
v41 = 0.6183 + 2*0.18*(1.6400 - 1.6400) + 2*0.23*(1.6400 - 1.6400)
    = 0.6183
x41 = 1.6400 + 0.6183
    = 2.2583
v42 = 0.4749 + 2*0.61*(1.3202 - 1.3202) + 2*0.04*(1.3202 - 1.3202)
    = 0.4749
x42 = 1.3202 + 0.4749
    = 1.7951
Thus the fourth particle after the PSO update becomes:
X4 = (2.2583, 1.7951)
Fifth particle:
v51 = 0.2530 + 2*0.09*(3.3392 - 3.3392) + 2*0.39*(1.6400 - 3.3392)
    = -1.072376
x51 = 3.3392 + (-1.072376)
    = 2.266824
v52 = 0.9398 + 2*0.65*(0.9963 - 0.9963) + 2*0.10*(1.3202 - 0.9963)
    = 1.00458
x52 = 0.9963 + 1.00458
    = 2.00088
Thus the fifth particle after the PSO update becomes:
X5 = (2.266824, 2.00088)
Finally, the updated swarm is:
X1 = (1.34876, 0.43638)
X2 = (-0.075228, 3.094208)
X3 = (2.25338, 3.137932)
X4 = (2.2583, 1.7951)
X5 = (2.266824, 2.00088)
and the corresponding fitness values are:
f(X1) = 2.00806
f(X2) = 9.57978
f(X3) = 14.9243
f(X4) = 8.32266
f(X5) = 9.12401
Initial swarm | Updated swarm
X1 = (2.7045, 4.8030) | X1 = (1.34876, 0.43638)
X2 = (4.5974, 2.8793) | X2 = (-0.075228, 3.094208)
X3 = (1.8710, 4.0528) | X3 = (2.25338, 3.137932)
X4 = (1.6400, 1.3202) | X4 = (2.2583, 1.7951)
X5 = (3.3392, 0.9963) | X5 = (2.266824, 2.00088)
f(X1) = 30.3831 | f(X1) = 2.00806
f(X2) = 29.4265 | f(X2) = 9.57978
f(X3) = 19.9258 | f(X3) = 14.9243
f(X4) = 4.4325 | f(X4) = 8.32266
f(X5) = 12.1429 | f(X5) = 9.12401
Clearly, the least fitness is 2.00806 and it corresponds to the first particle.
Therefore, gbest for the updated swarm is X1.
Now compare this gbest with the previous gbest; since the updated gbest is better, we replace the old gbest with the new one.
If the updated gbest were not better, the old gbest would be carried forward.
UPDATE MECHANISM OF PBEST
Similarly, each particle's pbest is updated: if a particle's new position has a better fitness value than its current pbest, the pbest is replaced by the new position; otherwise the old pbest is carried forward.
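Putting the pieces together, a minimal complete PSO loop with the pbest/gbest update mechanism just described might look like the sketch below (our own illustration: it clips positions to the box rather than rejecting them, and includes the inertia weight w introduced in the slides that follow):

```python
import numpy as np

def pso(f, x_min, x_max, n_particles=5, iters=100, c1=2.0, c2=2.0, w=0.7, seed=0):
    """Minimize f over the box [x_min, x_max] with a basic PSO."""
    rng = np.random.default_rng(seed)
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    dim = x_min.size
    x = x_min + rng.random((n_particles, dim)) * (x_max - x_min)
    v = rng.random((n_particles, dim))                  # initial velocities in (0, 1)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, x_min, x_max)                # keep particles in the box
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f                           # pbest update mechanism
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()        # gbest update mechanism
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda p: np.sum(p ** 2), [-5, -5], [5, 5])
```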
[Figure: initial position of points (left) and updated swarm (right), plotted on the range 0 to 5]
[Figure: two further plots of the swarm, on the range -2 to 2]
Graphical representation of working of PSO
Basic Parameters of PSO
Inertia Weight
Acceleration Constants
Recall the basic velocity equation:
$v_{id} = w\,v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id})$
where w is the inertia weight and c1, c2 are the acceleration constants.
Inertia weight
The inertia weight w controls how much of the particle's previous velocity is retained, and thereby the balance between global and local search.
When to stop the algorithm
Maximum number of generations
Maximum number of function evaluations
Specifying an accuracy criteria
Validating the algorithm
Efficiency
Reliability
Robustness
Comparison with other evolutionary computation techniques
Unlike genetic algorithms, evolutionary programming and evolutionary strategies, PSO has no selection operation: all particles are kept as members of the population through the course of the run.
PSO is the only algorithm that does not implement survival of the fittest.
There is no crossover operation in PSO.
Eq. (1b) resembles mutation in EP.
In EP the balance between global and local search can be adjusted through the strategy parameter, while in PSO the balance is achieved through the inertia weight factor (w) of eq. (1a).
Benchmark Problems
Rastrigin Function
$f_1(x) = \sum_{i=1}^{n} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right)$
Spherical Function
$f_2(x) = \sum_{i=1}^{n} x_i^2$
Griewank Function
$f_3(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$
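For reference, Python sketches of the Rastrigin and Griewank benchmarks following their standard forms (the spherical function was sketched earlier):

```python
import numpy as np

def rastrigin(x):
    """f1: global minimum 0 at the origin; many local minima."""
    x = np.asarray(x)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    """f3: global minimum 0 at the origin."""
    x = np.asarray(x)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```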
Shortcomings of PSO
Important features of an algorithm that influence its performance
Exploration vs Exploitation
Exploration is the ability of the algorithm to search for new individuals far from the current individual (current solution) in the search space.
Exploitation is the search of the area surrounding the current solution, something like a local search.
Initial phase: exploration. Later phase: exploitation.
Diversity
The manner in which the solutions are
dispersed in the entire search space.
Mathematically,
$\text{Diversity}(S(t)) = \frac{1}{n_s}\sum_{i=1}^{n_s}\sqrt{\sum_{j=1}^{n_x}\left(x_{ij}(t) - \bar{x}_j(t)\right)^2}$
where S is the swarm, $n_s = |S|$ is the swarm size, $n_x$ is the dimensionality of the problem, $x_{ij}$ is the jth component of the ith particle, and $\bar{x}_j(t)$ is the swarm mean of the jth component.
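A direct transcription of this measure (a sketch; assumes the swarm is an array of shape (ns, nx)):

```python
import numpy as np

def diversity(swarm):
    """Mean Euclidean distance of the particles from the swarm centre."""
    swarm = np.asarray(swarm, float)       # shape (ns, nx)
    centre = swarm.mean(axis=0)            # x_bar_j for each component j
    return float(np.mean(np.sqrt(np.sum((swarm - centre) ** 2, axis=1))))
```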
Tuning of Parameters
Hybridized Algorithms
Initiating the swarm
Instead of computer-generated pseudo-random numbers, more sophisticated sequences such as the Sobol and Faure sequences can be used to initialize the swarm. These sequences produce quasi-random (low-discrepancy) numbers rather than draws from a simple uniform distribution.
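For instance, with SciPy (version 1.7 or later provides the scipy.stats.qmc module; this is an illustrative sketch, not the slides' own code), a Sobol-initialized swarm could be generated as:

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=True, seed=0)   # 2-dimensional Sobol sequence
points = sampler.random(n=8)                      # quasi-random points in [0, 1)^2
swarm = qmc.scale(points, [-5, -5], [5, 5])       # map them onto the search box
```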
[Figure: points from computer-generated random numbers (left) vs. points from a Sobol sequence (right)]
Tuning of parameters
Dynamic adjustment of the inertia weight:
1. Random adjustments
A different inertia weight is randomly selected at each iteration, e.g. using a probability distribution to generate the inertia weight.
2. Linear decreasing
An initially large inertia weight is decreased linearly to a smaller inertia weight (e.g. from 0.9 to 0.4); see the sketch after this list.
3. Non-linear decreasing
An initially large value decreases non-linearly to a smaller value. Non-linear decreasing methods allow a shorter exploration time than linear methods and are more appropriate for smoother search spaces.
4. Fuzzy adaptive inertia weight
The inertia weight is adjusted dynamically on the basis of fuzzy sets and rules.
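Sketches of the linear and non-linear decreasing strategies (the 0.9-to-0.4 range follows the slide; the non-linear exponent is our illustrative choice):

```python
def linear_w(t, t_max, w_start=0.9, w_end=0.4):
    """Strategy 2: inertia weight decreases linearly from w_start to w_end."""
    return w_start - (w_start - w_end) * t / t_max

def nonlinear_w(t, t_max, w_start=0.9, w_end=0.4, power=2):
    """Strategy 3: decays faster early on, shortening the exploration phase."""
    return w_end + (w_start - w_end) * (1.0 - t / t_max) ** power
```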
Variations in Acceleration Coefficients
Adjust the coefficients dynamically, giving importance to pbest in the beginning and to gbest in later stages, i.e. decrease c1 linearly with time while increasing c2 linearly with time (a sketch follows below).
Fuzzy adaptive rules can also be used.
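A sketch of such time-varying coefficients (the start/end values are illustrative assumptions, not from the slides):

```python
def time_varying_coeffs(t, t_max, c1_start=2.5, c1_end=0.5,
                        c2_start=0.5, c2_end=2.5):
    """Decrease c1 (pbest pull) and increase c2 (gbest pull) linearly with time."""
    c1 = c1_start - (c1_start - c1_end) * t / t_max
    c2 = c2_start + (c2_end - c2_start) * t / t_max
    return c1, c2
```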
Hybridization
Use of additional operators borrowed from other EAs:
Mutation: mutate the gbest position or mutate the whole swarm.
Reproduction: pbest and gbest positions can be combined suitably to generate a new particle.
Mutation PSO
Changing the position vector using various probability distributions such as Uniform, Gaussian, Cauchy, etc. (see the sketch below).
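A Gaussian variant of this idea, as a minimal sketch (the function name and sigma are ours):

```python
import numpy as np

def gaussian_mutate(position, sigma=0.1, rng=None):
    """Perturb a position vector (e.g. gbest, or every particle of the swarm)."""
    rng = np.random.default_rng() if rng is None else rng
    return position + rng.normal(0.0, sigma, size=np.shape(position))
```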
Recombination in PSO
Hybrid Approach
Hybridizing PSO with a local search method
Hybridizing PSO with some other evolutionary algorithm
For a constrained problem:
Minimize / Maximize $f(x)$
subject to
$g_j(x) \le 0, \quad j = 1, \dots, p$
$h_k(x) = 0, \quad k = 1, \dots, q$
$x_{i,\min} \le x_i \le x_{i,\max} \quad (i = 1, \dots, n)$
A simple constraint-handling strategy is to reject infeasible solutions.
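One sketch of the "reject infeasible solutions" strategy (helper names are ours; equality constraints are checked to a tolerance):

```python
def is_feasible(x, gs=(), hs=(), tol=1e-6):
    """True if all g_j(x) <= 0 and all |h_k(x)| <= tol."""
    return all(g(x) <= 0 for g in gs) and all(abs(h(x)) <= tol for h in hs)

def accept(x_new, x_old, gs=(), hs=()):
    """Keep the old position whenever the new one violates a constraint."""
    return x_new if is_feasible(x_new, gs, hs) else x_old
```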
Some PSO variants (selected papers 2002-2006)
ARPSO (Riget and Vesterstorm, 2002) uses a diversity measure to alternate between two phases;
Dissipative PSO (Xie, et al., 2002) increases randomness;
PSO with self-organized criticality (Lovbjerg and Krink, 2002) aims to improve diversity;
FDR-PSO (Veeramachaneni, et al., 2003) uses nearest neighbour interactions;
PSO with mutation (Higashi and Iba, 2003; Stacey, et al., 2004);
DEPSO (Zhang and Xie, 2003) aims to combine DE with PSO;
Self-organizing Hierarchical PSO (Ratnaweera, et al., 2004);
Cooperative PSO (van den Bergh and Engelbrecht, 2005), a cooperative approach;
CLPSO (Liang, et al., 2006) incorporates learning from more previous best particles;
Tribes (Clerc, 2006) aims to adapt the population size so that it does not have to be set by the user; Tribes has also been used for discrete or mixed (discrete/continuous) problems.
Selected PSO variants (2014-2015)
Bare-bones particle swarm optimization with disruption operator
Hao Liu, Guiyan Ding, Bing Wang, Applied Mathematics and Computation, 2014
HEPSO: High exploration particle swarm optimization
M.J. Mahmoodabadi, Z. Salahshoor Mottaghi, A. Bagheri, Information Sciences, 2014
In this paper, a new optimization method (HEPSO) based on the combination of PSO and two novel operators is introduced in order to increase the exploration capability of the PSO algorithm. The first operator is inspired by the multi-crossover mechanism of the genetic algorithm, and the second operator uses the bee colony mechanism to update the positions of the particles.
A diversity-guided hybrid particle swarm
optimization based on gradient search
Fei Han, Qing Liu, Neurocomputing, 2014
Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms
Guido Ardizzon, Giovanna Cavazzini, Giorgio Pavesi, Information Sciences, 2015
The paper presents a novel paradigm of the original particle swarm concept, based on the idea of having two types of agents in the swarm, the explorers and the settlers, which can dynamically exchange their roles in the search process. The explorers' task is to continuously explore the search domain, while the settlers refine the search in a promising region currently found by the swarm.
Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems
Yuhua Li, Zhi-Hui Zhan, Shujin Lin, Jun Zhang, Xiaonan Luo, Information Sciences, 2015
This paper proposes an information sharing mechanism (ISM) to
improve the performance of particle swarm optimization (PSO).
The ISM allows each particle to share its best search
information, so that all the other particles can take advantage of
the shared information by communicating with it. In this way,
the particles could enhance the mutual interaction with the
others sufficiently and heighten their search ability greatly by
using the search information of the whole swarm. Also, a
competitive and cooperative (CC) operator is designed for a
particle to utilize the shared information in a proper and efficient
way.
Topics of research
How to make the search more systematic?
How to make the search more controllable?
How to make the performance scalable?
Constraint handling
Multi-objective optimization problems
Discrete/integer/binary/combinatorial optimization
Adaptive parameters
Textbook: David Corne, Marco Dorigo and Fred Glover, New Ideas in Optimization, McGraw-Hill, 1999.
Important e-newsletters:
EC Digest: http://ec-digest.research.ucf.edu/
Want to know more?
A simple mantra: Read, Read, Read, and Read some more.
WIKIPEDIA