
PSO and its variants

Swarm Intelligence Group


Peking University

Classical and standard PSO

Swarm is better than personal

Classical and standard PSO

Russ Eberhart

James Kennedy

Classical

V_{id} = w V_{id} + c_1 \mathrm{Rand}() (p_{id} - x_{id}) + c_2 \mathrm{Rand}() (g_d - x_{id})    (1)

x_{id} = x_{id} + V_{id}    (2)

V_{id} : velocity of each particle in each dimension
i : particle
D : dimension
w : inertia weight
c_1, c_2 : constants
Rand() : random number, uniform in [0, 1]
p_{id} : best position of each particle
g_d : best position of the swarm
x_{id} : current position of each particle in each dimension

Classical and standard PSO


V_{id} = w V_{id} + c_1 \mathrm{Rand}() (p_{id} - x_{id}) + c_2 \mathrm{Rand}() (g_d - x_{id})    (1)

x_{id} = x_{id} + V_{id}    (2)

[Figure: one update step in vector form, combining V_{id}(t), p_{id}(t), g_d(t), and x_{id}(t) into V_{id}(t+1) and x_{id}(t+1)]

Particle Swarm Optimization (PSO)


PSO is a robust stochastic optimization technique based on
the movement and intelligence of swarms.
PSO applies the concept of social interaction to problem
solving.

It was developed in 1995 by James Kennedy (social psychologist) and Russell Eberhart (electrical engineer).
It uses a number of agents (particles) that constitute a swarm moving around in the search space looking for the best solution.
Each particle is treated as a point in an N-dimensional space which adjusts its "flying" according to its own flying experience as well as the flying experience of other particles.

Particle Swarm Optimization (PSO)

Each particle keeps track of its coordinates in the solution space, which are associated with the best solution (fitness) that it has achieved so far. This value is called the personal best, pbest.
Another best value tracked by PSO is the best value obtained so far by any particle in the neighborhood of that particle. This value is called gbest.
The basic concept of PSO lies in accelerating each particle toward its pbest and the gbest locations, with a random weighted acceleration at each time step, as shown in Fig. 1.

Particle Swarm Optimization (PSO)

[Fig. 1 sketch: a searching point s_k with velocity v_k moves to s_{k+1} in the x-y plane by combining v_k, v_pbest, and v_gbest]

Fig.1 Concept of modification of a searching point by PSO


sk : current searching point.
sk+1: modified searching point.
vk: current velocity.
vk+1: modified velocity.
vpbest : velocity based on pbest.
vgbest : velocity based on gbest.

Particle Swarm Optimization (PSO)

Each particle tries to modify its position. The modification of the particle's position can be mathematically modeled according to the following equation:

V_i^{k+1} = w V_i^k + c_1 \mathrm{rand}_1() (\mathrm{pbest}_i - s_i^k) + c_2 \mathrm{rand}_2() (\mathrm{gbest} - s_i^k)    (1)


where,

v_i^k : velocity of agent i at iteration k,
w : weighting function,
c_j : weighting factors,
rand() : uniformly distributed random number between 0 and 1,
pbest_i : pbest of agent i,
s_i^k : current position of agent i at iteration k,
gbest : gbest of the group.

Particle Swarm Optimization (PSO)
The following weighting function is usually utilized in (1):

w = w_{\max} - \frac{(w_{\max} - w_{\min}) \cdot \mathrm{iter}}{\mathrm{maxIter}}    (2)

where w_{\max} = initial weight,
w_{\min} = final weight,
maxIter = maximum iteration number,
iter = current iteration number.

The position is then updated by

s_i^{k+1} = s_i^k + V_i^{k+1}    (3)

Particle Swarm Optimization (PSO)
Comments on the inertia weight factor:

A large inertia weight (w) facilitates a global search, while a small inertia weight facilitates a local search.
Linearly decreasing the inertia weight from a relatively large value to a small value over the course of the PSO run gives better performance than fixed inertia weight settings.

Larger w: greater global search ability.
Smaller w: greater local search ability.

Particle Swarm Optimization (PSO)

Flow chart depicting the general PSO algorithm:

Start: initialize particles with random position and velocity vectors.
Loop over all particles: for each particle's position (p), evaluate fitness; if fitness(p) is better than fitness(pbest), then pbest = p.
Set the best of the pbests as gbest.
Update each particle's velocity (eq. 1) and position (eq. 3).
Loop until max iterations; then stop, giving gbest as the solution.

A runnable sketch of this loop follows.
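This is a compact end-to-end loop matching the flow chart, reusing the pso_step and inertia sketches above; the swarm size, iteration budget, and velocity initialization scheme are illustrative assumptions.

def pso(fitness, bounds, n_particles=50, max_iter=1000):
    # Minimize `fitness` over the box `bounds = (lo, hi)`,
    # where lo and hi are NumPy arrays of shape (dim,).
    lo, hi = bounds
    d = len(lo)
    # Initialize particles with random position and velocity vectors.
    pos = lo + (hi - lo) * np.random.rand(n_particles, d)
    vel = (hi - lo) * (np.random.rand(n_particles, d) - 0.5)
    pbest_pos = pos.copy()
    pbest_val = np.apply_along_axis(fitness, 1, pos)
    for it in range(max_iter):
        # Evaluate fitness; update pbest where improved.
        val = np.apply_along_axis(fitness, 1, pos)
        better = val < pbest_val
        pbest_val[better] = val[better]
        pbest_pos[better] = pos[better]
        # Set best of pbests as gbest.
        gbest_pos = pbest_pos[np.argmin(pbest_val)]
        # Update particles' velocity (eq. 1) and position (eq. 3).
        pos, vel = pso_step(pos, vel, pbest_pos, gbest_pos,
                            w=inertia(it, max_iter))
    return pbest_pos[np.argmin(pbest_val)], pbest_val.min()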

Comparison with other evolutionary computation techniques

Unlike genetic algorithms, evolutionary programming, and evolution strategies, PSO has no selection operation: all particles are kept as members of the population through the course of the run.
PSO is the only one of these algorithms that does not implement survival of the fittest.
There is no crossover operation in PSO.
Eq. 1(b) resembles mutation in EP.
In EP the balance between global and local search can be adjusted through the strategy parameter, while in PSO the balance is achieved through the inertia weight factor (w) of eq. 1(a).

Simulation

[Figures: eight successive snapshots of the swarm on a one-dimensional fitness landscape (fitness versus x over the search space, with max and min marked), showing how the particle distribution evolves toward the optimum]

Schwefel's function

f(x) = \sum_{i=1}^{n} x_i \sin(\sqrt{|x_i|})

where -500 <= x_i <= 500

global optimum: f(x) = n \times 418.9829 at x_i = 420.9687, i = 1, ..., n
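As a sketch in the same NumPy style; the \sqrt{|x_i|} inside the sine is the standard Schwefel form and is assumed here, since the extraction dropped those symbols.

def schwefel(x):
    # f(x) = sum_i x_i * sin(sqrt(|x_i|)), with -500 <= x_i <= 500;
    # per dimension the optimum 418.9829 sits at x_i = 420.9687.
    x = np.asarray(x)
    return np.sum(x * np.sin(np.sqrt(np.abs(x))))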

Evolution of the swarm: snapshots at initialization and after 5, 10, 15, 20, 25, 100, and 500 iterations. [Figures omitted]

Search result

Iteration    Swarm best
0            416.245599
5            515.748796
10           759.404006
15           793.732019
20           834.813763
100          837.911535
5000         837.965771
Global       837.9658

Standard benchmark functions

1. Sphere function
f(x) = \sum_{i=1}^{n} x_i^2, \quad x \in [-5, 5]^n

2. Rosenbrock function
f(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], \quad x \in [-10, 10]^n

3. Rastrigin function
f(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]

4. Ackley function
f(x) = 20 + e - 20 \exp\left(-0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right), \quad x \in [-32, 32]^n

5. Composition function

(Minimal code sketches of functions 1-4 follow.)
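Minimal NumPy sketches of functions 1-4 (the composition function is benchmark-suite specific and omitted); the 2\pi factors are assumed where the extraction dropped \pi, and Rastrigin's customary domain is noted as an assumption.

def sphere(x):        # x in [-5, 5]^n
    return np.sum(x ** 2)

def rosenbrock(x):    # x in [-10, 10]^n
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def rastrigin(x):     # domain not given above; commonly [-5.12, 5.12]^D
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):        # x in [-32, 32]^n
    n = x.size
    return (20 + np.e
            - 20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n))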

Particle Swarm Optimization


Swarm Search

In PSO, particles never die!
Particles can be seen as simple agents that fly through the search space and record (and possibly communicate) the best solution that they have discovered.
Initially the values of the velocity vectors are randomly generated within the range [-Vmax, Vmax], where Vmax is the maximum value that can be assigned to any v_id.
Once a particle computes its new position X_i, it evaluates the new location: if x-fitness is better than p-fitness, then P_i = X_i and p-fitness = x-fitness.

Particle Swarm Optimization


The algorithm
1. Initialise particles in the search space at random.
2. Assign random initial velocities to each particle.
3. Evaluate the fitness of each particle according to a user-defined objective function.
4. Calculate the new velocity for each particle.
5. Move the particles.
6. Repeat steps 3 to 5 until a predefined stopping criterion is satisfied.
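For instance, steps 1-6 correspond to one call of the pso sketch given earlier, here run on the sphere function (all settings illustrative):

lo, hi = -5 * np.ones(30), 5 * np.ones(30)
best_x, best_f = pso(sphere, (lo, hi), n_particles=50, max_iter=1000)
print(best_f)  # should approach 0 for the sphere function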

Particle Swarm:
Controlling Velocities

When using PSO, it is possible for the magnitude of the velocities to become very large.
Performance can suffer if Vmax is inappropriately set.
Several methods have been developed for controlling the growth of velocities:

A dynamically adjusted inertia factor,
Dynamically adjusted acceleration coefficients,
Re-initialisation of stagnated particles.
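Clamping velocities to [-Vmax, Vmax] is the usual baseline behind these options; a two-line sketch, where setting Vmax to a fraction of the search range is an illustrative convention rather than a rule from these slides:

v_max = 0.2 * (hi - lo)            # illustrative choice of Vmax per dimension
vel = np.clip(vel, -v_max, v_max)  # applied right after the eq. (1) update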

Particle Swarm Optimization: Related Issues

There are a number of related issues concerning PSO:

Controlling velocities (determining the best value for Vmax),
Swarm size,
Inertia weight factor,
Robust settings for c1 and c2.

Analysis of PSO: state of the art

Stagnation and convergence:

Clerc & Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space", 2002.
Kennedy, "Dynamic-Probabilistic Particle Swarms", 2005.
Poli, "Exact Analysis of the Sampling Distribution for the Canonical Particle Swarm Optimiser and its Convergence during Stagnation", 2007.
Poli, "On the Moments of the Sampling Distribution of Particle Swarm Optimisers", 2007.
Poli, "Markov Chain Models of Bare-Bones Particle Swarm Optimizers", 2007.

Standard PSO:

Bratton & Kennedy, "Defining a Standard for Particle Swarm Optimization", 2007.

Analysis of PSO: state of the art

Standard PSO: constriction factor and convergence

Update formula:

V_{id} = \chi \left( V_{id} + c_1 \mathrm{Rand}() (p_{id} - x_{id}) + c_2 \mathrm{Rand}() (g_d - x_{id}) \right)
x_{id} = x_{id} + V_{id}

Equivalent to the inertia-weight form:

V_{id} = w V_{id} + c_1 \mathrm{Rand}() (p_{id} - x_{id}) + c_2 \mathrm{Rand}() (g_d - x_{id})
x_{id} = x_{id} + V_{id}
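In the constriction form, \chi is computed from \phi = c_1 + c_2 > 4; a sketch using Clerc's standard values (c1 = c2 = 2.05, giving \chi of about 0.7298), noting that this equals the inertia form with w = \chi and the acceleration terms scaled by \chi:

import math

def constriction(c1=2.05, c2=2.05):
    # Clerc's constriction coefficient; requires phi = c1 + c2 > 4.
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi ** 2 - 4.0 * phi))

chi = constriction()  # approx. 0.7298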

Analysis of PSO: state of the art

Standard PSO:

50 particles
Non-uniform initialization
No evaluation when a particle is out of the boundary

Analysis of PSO: state of the art

Standard PSO: a local ring topology
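Under the ring topology, each particle's gbest in eq. (1) is replaced by the best pbest among itself and its two ring neighbours; a sketch (wrap-around indexing, all names illustrative):

def ring_lbest(pbest_pos, pbest_val):
    # Local best for each particle under a ring of radius 1.
    n = len(pbest_val)
    lbest = np.empty_like(pbest_pos)
    for i in range(n):
        nbrs = [(i - 1) % n, i, (i + 1) % n]  # left neighbour, self, right neighbour
        best = min(nbrs, key=lambda j: pbest_val[j])
        lbest[i] = pbest_pos[best]
    return lbest  # row i replaces gbest in eq. (1) for particle i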

Analysis of PSO: state of the art

How does PSO work?

Stagnation versus objective function
Classical PSO versus standard PSO
Search strategy versus performance

Classical PSO

Main idea (from Kennedy & Eberhart, "Particle swarm optimization", 1995):

Exploit the current best positions (pbest, gbest).
Explore the unknown space.

Classical PSO

Implementation

[Figure: the new position combines the inertia part wV with attractions toward pbest and gbest]

V_{id} = w V_{id} + c_1 \mathrm{Rand}() (p_{id} - x_{id}) + c_2 \mathrm{Rand}() (g_d - x_{id})    (1)
x_{id} = x_{id} + V_{id}    (2)

Analysis of PSO: our idea

Search strategy of PSO:

Exploitation
Exploration

Analysis of PSO: our idea

Hybrid uniform distribution

[Figure: the sampling region formed by the pbest and gbest attraction terms, shifted by the inertia term wV; the parts are labeled Exploitation and Exploration]
Analysis of PSO: our idea

The sampling probability density is computable:

x(t+1) = x(t) + w V(t) + Z
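The density is computable because, for fixed x(t) and V(t), Z is the sum of the two independent uniformly weighted attraction terms from eq. (1); a one-dimensional Monte Carlo sketch (all numeric values illustrative):

# x(t+1) = x(t) + w*V(t) + Z, with
# Z = c1*U1*(p - x) + c2*U2*(g - x), U1, U2 ~ U(0, 1).
x, V, p, g = 0.0, 0.5, 1.0, 2.0
w, c1, c2 = 0.7, 2.0, 2.0
u1, u2 = np.random.rand(100_000), np.random.rand(100_000)
samples = x + w * V + c1 * u1 * (p - x) + c2 * u2 * (g - x)
# A histogram of `samples` traces the (trapezoidal) sampling density.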

Analysis of PSO: our idea

Sampling probability

[Figures: empirical sampling probability densities, shown with and without the inertia part (wV)]

Analysis of PSO: our idea

Differences among variants of PSO:

Exploitation
Exploration
Probability
Balance

Analysis of PSO: our idea

What is the property of the iteration?

Analysis of PSO_our idea

Whether the search strategy is the same or whether


the PSO is adaptive when

Same parameter(during the convergent process)


Different parameter
Different dimensions
Different number of particles
Different topology
Different objective functions
In different search phase(when slow or sharp
slope,stagnation,etc)

Whats the change pattern of the search strategy?

Analysis of PSO: our idea

What would a better PSO search strategy look like?

Simpler implementation:
use one parameter as a tuning knob instead of the two in standard PSO;
prove the two are equivalent for some parameter setting.
Effective on most objective functions.
Adaptive.

Analysis of PSO: our idea

Analytical tools:

Markov chain: state transition matrix.
Random process: covariance matrix.
Gaussian process: kernel mapping, kernel function.

[Diagram: linking the search strategy, via the kernel function's mapping ability, to the objective problem; is the mapping effective?]

Analysis of PSO_our idea

the object of our analysis

search strategy of PSO

Different parameter sets


In different dimensions
Using different number of particles
On different objective functions
Fitness evaluation
Different topology
Markov or gauss process and kernel function

Direction for PSO: Knob PSO

Analysis of PSO: our idea

[Diagram: the Knob-PSO factors w, c, dim, Num, Top, FEs, and Fun are encoded as variables x1, ..., x7 (numbered 1-7), and the probability of exploitation is modeled as P(Exploitation) = f(\sum_i \lambda_i x_i)]

Current results

Variance with convergence

func_num=1;
fes_num=5000;
run_num=10;
particles_num=50;
dims_num=30;

Current results

Variance with dimensions

func_num=1;
fes_num=3000;
run_num=10;
particles_num=50;

Current results

Variance with number of particles

func_num=1;
fes_num=3000;
run_num=10;
dims_num=30;

Current results

Variance with topology

Current results

Variance with inertia weight

Current results

1. Shifted Sphere Function
2. Shifted Schwefel's Problem 1.2

[Figures: 3-D surface plots of the two functions over [-100, 100] in each coordinate]

PSO on Benchmark Function

3. Shifted Rotated High Conditioned Elliptic Function
4. Shifted Schwefel's Problem 1.2 with Noise in Fitness

[Figures: 3-D surface plots of the two functions over [-100, 100] in each coordinate]

Current results

Variance with objective functions


Unimodal Functions
Multimodal Functions
Expanded Multimodal Functions
Hybrid Composition Functions

Current results

Variance with objective functions

func_num=1,2,3,4;
fes_num=3000;
run_num=5;
particles_num=50;
dims_num=30;

Variants of PSO: state of the art

Traditional strategies

Adopted from other fields:
Clonal operation
Mutation operation

Heuristic methods:
Simulated annealing
Tabu strategy
Gradient methods
Advance and retreat

Structure topology:
Full connection
Ring topology

Our variants of PSO

CPSO
AR-CPSO
MPSO
RBH-PSO
FPSO

Our variants of PSO

CPSO


Our variants of PSO

MPSO

Our variants of PSO

AR-CPSO

Our variants of PSO

FPSO

Applications of PSO

[Figures omitted]
