
Applied Soft Computing 13 (2013) 2997–3006

Contents lists available at SciVerse ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

Review article

A review of particle swarm optimization and its applications in Solar Photovoltaic system
Anula Khare , Saroj Rangnekar
Department of Energy, Maulana Azad National Institute of Technology, Bhopal 462051, India

Article info

Article history:
Received 14 October 2011
Received in revised form 25 September 2012
Accepted 24 November 2012
Available online 22 December 2012

Keywords:
Particle swarm optimization
PSO parameters & control
Linearly decreasing inertia weight
Time varying acceleration coefficients
Solar Photovoltaics

Abstract

Particle swarm optimization is a stochastic, evolutionary, simulation-based optimization algorithm derived from both human and animal behaviour. A special property of particle swarm optimization is that it operates directly in continuous real number space and, unlike many other algorithms, does not use the gradient of an objective function. Particle swarm optimization has few parameters to adjust, is easy to implement and has the special characteristic of memory. This paper presents an extensive review of the literature available on the concept, development and modification of particle swarm optimization. The paper first discusses the concept and development of PSO, then its modification with inertia weight and constriction factor. Issues related to parameter tuning, dynamic environments, stagnation and hybridization are also discussed, including a brief review of selected works on particle swarm optimization, followed by applications of PSO in Solar Photovoltaics.
© 2012 Elsevier B.V. All rights reserved.

Contents
1. Introduction .... 2997
2. Concept of particle swarm optimization .... 2998
3. Modified equation with addition of new operators: inertia weight and constriction factor .... 2998
4. Effect of value of parameters for solving problem through PSO .... 2998
   4.1. Average and maximum velocity .... 2998
   4.2. Inertia weight and constriction factor .... 2998
   4.3. Acceleration factors .... 2999
5. PSO algorithm combined with other intelligent algorithms .... 3000
6. Solving constrained optimization problems .... 3002
7. Discrete space optimization .... 3002
8. Applications of PSO .... 3003
   8.1. Application in Solar PhotoVoltaics (SPV) .... 3003
      8.1.1. Sizing and allocation .... 3003
      8.1.2. Maximum power point tracking (MPPT) .... 3004
9. Conclusion .... 3004
References .... 3005

1. Introduction

Particle Swarm Optimization (PSO) is an evolutionary computation technique developed for the optimization of continuous, nonlinear, constrained and unconstrained, non-differentiable multimodal functions [1].

Corresponding author. Tel.: +91 9826218764.
E-mail addresses: anulakhare03@gmail.com (A. Khare), rangnekars@manit.ac.in (S. Rangnekar).
1568-4946/$ – see front matter © 2012 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.asoc.2012.11.033

PSO is inspired firstly by general artificial life, such as bird flocking, fish schooling and the social interaction behaviour of humans, and secondly by the random search methods of evolutionary algorithms [2]. Animals, especially birds, fishes etc., always travel in a group without colliding; each member follows its group, adjusting its position and velocity using the group information, because this reduces the individual's effort in the search for food, shelter etc.
Particle swarm optimization is an evolutionary technique similar to the genetic algorithm, because both are population based and are equally effective. Particle swarm optimization has better

computational efficiency, i.e. it requires less memory space and lesser CPU speed, and it has a smaller number of parameters to adjust. The genetic algorithm and other similar techniques (e.g. simulated annealing) work for discrete design variables, whereas particle swarm optimization works for discrete as well as analogue systems, because it is inherently continuous and does not need D/A or A/D conversion, although for handling discrete design variables some modification of the particle swarm optimization method is needed. In this paper, the position of PSO in the larger field of computing, including the development of the algorithm, is described in Section 2; Section 3 briefly reviews the standard formulae of PSO that are usually associated with engineering problems; Section 4 reviews parameter tuning and its insights; Section 5 highlights some recent research into hybrid algorithms, many of which involve evolutionary techniques; Section 6 considers the use of PSO for multicriteria and constrained optimization; Section 7 then considers a range of discrete optimization problems; Section 8 gives an idea of the applications of PSO in Solar Photovoltaics; and Section 9 concludes overall.
2. Concept of particle swarm optimization

Human beings have their own previous experiences, set beliefs and set rules for doing some work, based on which they take their actions (individual best position); humans also follow the path set by society or a group, this path being supposed to be the best according to the whole group (global best position) [1].
The essence of the development of particle swarm optimization was the assumption that a potential solution to an optimization problem is treated as a point flying like a bird in multi-dimensional space, adjusting its position in the search space according to its own previous experience and that of its neighbours [3]. This point has no mass or volume and is called a particle, as it has a velocity vector. As per Shi and Eberhart [10], to bring the particle to the best position, the initial velocity is incremented in either the positive or negative direction depending on the current value: if the present position is less than the best position then velocity is increased, and vice versa [1].
Vx = Vx + C1 × rand() × (P bestx − presentx) + C2 × rand() × (G bestx − presentx)    (1)

Vx and the present value are both matrices indexed by particle number and dimension. In the absence of a previous velocity, particles would have no momentum and would get trapped in a still position. C1 and C2 represent the weighting of the stochastic acceleration terms that pull a particle towards the P best and G best positions. The above equation has both a cognitive part (the particle's own previous best value, derived from the human behaviour of memory and experience) and a social part (derived from human as well as animal behaviour of following the path set by society or group). Population size is generally problem dependent and is usually kept between 20 and 50 [4].
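The velocity update of Eq. (1) can be sketched in Python as follows; the function name and list-based particle representation are illustrative choices, not taken from the paper.

```python
import random

def velocity_update(v, x, p_best, g_best, c1=2.0, c2=2.0):
    """One step of the Eq. (1) velocity update for a single particle.

    The old velocity provides momentum; the two stochastic terms pull
    the particle towards its own best position (cognitive part) and
    towards the swarm's best position (social part)."""
    return [v[d]
            + c1 * random.random() * (p_best[d] - x[d])
            + c2 * random.random() * (g_best[d] - x[d])
            for d in range(len(x))]
```

If the particle already sits at both best positions, the update leaves the velocity unchanged, which is the stagnation behaviour discussed in Section 4.2.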
3. Modified equation with addition of new operators: inertia weight and constriction factor

To reach the target, a population of particles is generated with random positions and random velocities. Their velocity vectors and positions are updated over a number of iterations, and their fitness is calculated according to the objective function [5]. The simulation proceeds through the simple Eqs. (3.1) and (3.2), which are used as the equations of standard particle swarm optimization (SPSO).
The particle moves towards an optimum solution through its present velocity, the individual best solution obtained by itself in each iteration, and the global best solution obtained by all particles. In a d-dimensional search space, the updated velocity and position of the ith particle are represented as

Vik+1 = K(ω Vik + C1 R1 (P best(i) − Xik) + C2 R2 (G best − Xik))    (3.1)

Xik+1 = Xik + Vik+1    (3.2)

where Xi = [Xi1, Xi2, ..., Xid] and Vi = [Vi1, Vi2, ..., Vid]; here 1, 2, ..., d are the possible dimensions for particles i = 1, 2, ..., i with position X and velocity V.
P best(i) = [Xi1P best, Xi2P best, ..., XidP best] represents the individual best position of particle i.
G best = [X1G best, X2G best, ..., XnG best] represents the global best position. k is the iteration number out of a total of n iterations.
ω is the inertia weight, K is the constriction factor, C1 and C2 are non-negative coefficients called acceleration factors, and R1 and R2 are two random numbers, different from each other and generally distributed in the range [0, 1].
In a simplified model taking into account only one dimension of a particle, P best and G best are combined into one single term, making the convergence and divergence of a particle easy to analyse for some of the benchmark functions [6].
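A minimal SPSO loop implementing Eqs. (3.1) and (3.2), including Clerc's constriction factor K discussed in Section 4.2, might look as follows; the sphere objective, the [-5, 5] bounds and the parameter defaults are illustrative assumptions, not taken from the paper.

```python
import math
import random

def clerc_k(c1=2.05, c2=2.05):
    """Clerc's constriction factor; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def spso(f, dim, n=30, iters=200, w=1.0, c1=2.05, c2=2.05, seed=1):
    """Minimize f over [-5, 5]^dim with Eqs. (3.1)-(3.2).

    w is the inertia weight and K the constriction factor; with the
    constriction form, w is conventionally left at 1."""
    rng = random.Random(seed)
    k = clerc_k(c1, c2)
    X = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]               # individual best positions
    g = min(P, key=f)[:]                # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = k * (w * V[i][d]
                               + c1 * rng.random() * (P[i][d] - X[i][d])
                               + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]      # Eq. (3.2)
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g
```

On a 2-D sphere function, `lambda x: sum(t * t for t in x)`, this settles close to the origin within a few hundred iterations.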
4. Effect of value of parameters for solving problem through PSO

The success of the search, i.e. optimization, depends on the selection of appropriate parameters and their tuning through the search process. What follows is a brief discussion of the adjustment of the parameters of particle swarm optimization, as summed up from the literature review.
4.1. Average and maximum velocity

To converge to the optimum point, a craziness or neighbour-matching velocity concept was initially proposed; it was then suggested to make an increment in the desired direction by some random number, say 2 (Fig. 1).
The average velocity gradually decreases as a good solution tends to be obtained. The average velocity of a large scale problem is larger than that of a small scale problem under the same parameter settings. The success or failure of the search is related to the average of the absolute value of velocity [6].
A particle's velocity in each direction is clamped to some maximum value Vmax, so that particles do not overshoot the search space. Vmax is an important parameter in determining resolution; too high a value may make the particle fly past the good solution, and too low a value will cause particles to be trapped in local minima, not allowing them to travel to far places in the search space in search of a good solution [7]. In early experiments Vmax was set at 10–20% of the dynamic range of the variable in each dimension.
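The Vmax clamp described above can be written as a one-line guard applied after each velocity update; the 15% default stands in for the 10–20% range quoted, and the function name is illustrative.

```python
def clamp_velocity(v, x_range, fraction=0.15):
    """Clamp each velocity component to +/-Vmax, with Vmax taken as a
    fraction of the variable's dynamic range in that dimension."""
    vmax = fraction * x_range
    return [max(-vmax, min(vmax, vd)) for vd in v]
```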
4.2. Inertia weight and constriction factor

If the velocity term is excluded from the equation, then the particle will know only about the current and best positions. If the best position is
[Fig. 1. Decrease in velocity with iterations.]


found at the start, then the velocity will become zero. Only when the global optimum is in the initial search space will particle swarm optimization find the solution, implying that the final solution is heavily dependent on the initial kernel. If an initial velocity is added, particles have the ability to explore new areas. A larger Vmax facilitates global exploration, while a smaller Vmax encourages local exploitation. This control was taken over by the inertia weight, suggested in 1998 by Shi and Eberhart [10]. The use of the inertia weight has provided improved performance in a number of applications. ω can be a positive constant or a positive, linear or nonlinear function of time [8].
Suitable selection of ω provides a balance between global exploration and local exploitation, eliminates the need for Vmax, and also reduces the total number of iterations. Although Vmax cannot be eliminated completely, particle swarm optimization works well if Vmax is kept at the value of the dynamic range of each variable, Xmax [1].
For a particle swarm optimization problem we need a better global search in the starting phase, to help the algorithm converge to an area quickly, and then a stronger local search to get a high precision value. It is therefore necessary to keep ω as a variable value rather than a constant. Hence a linearly decreasing inertia weight is used, given by

ω = (ω1 − ω2) × (MAXITER − iter) / MAXITER + ω2    (4.2.1)

where ω1 and ω2 are the initial and final values of the inertia weight, iter is the current iteration number and MAXITER is the maximum number of allowable iterations.
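Eq. (4.2.1) reduces to a few lines of code; the 0.9 → 0.4 defaults follow the range quoted later in this section for TVIW and are an assumption here.

```python
def ldiw(iteration, max_iter, w1=0.9, w2=0.4):
    """Linearly decreasing inertia weight of Eq. (4.2.1): starts at w1
    and falls linearly to w2 as iteration approaches max_iter."""
    return (w1 - w2) * (max_iter - iteration) / max_iter + w2
```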
Clerc [9] showed that use of a constriction factor K may be necessary to ensure convergence of particle swarm optimization:

K = 2 / |2 − φ − sqrt(φ² − 4φ)|,  where φ = C1 + C2, φ > 4    (4.2.2)

Typically φ = 4.1, so that C1 = C2 = 2.05 and K becomes 0.729; this way (P − X) is multiplied by
0.729 × 2.05 = 1.49445

when Vmax is kept at Xmax.
Therefore, comparing particle swarm optimization with inertia weight and with constriction factor, the two are equivalent if K is put equal to ω and C1 and C2 meet the condition φ = C1 + C2 with φ > 4.
For optimizing dynamic systems, whose state changes continuously or very frequently, some modifications have been proposed to the value of the inertia weight. Many experiments have proved that a changing ω can enhance global search ability.
Shi and Eberhart [10] observed the effect of varying inertia weight and maximum velocity on the performance of particle swarm optimization and suggested that when Vmax is small (less than or equal to two) the inertia weight should be kept at a value equal to one; when Vmax is greater than or equal to three the inertia weight should be selected as 0.8; and when Vmax is not known the inertia weight is taken as equal to the dynamic range of the particles, Xmax.
According to Hou [11], with reference to the Wiener model, a time varying inertia weight (TVIW) can be calculated from the following formula:

ω = a / (b + [lg N]²)    (4.2.3)

where a and b are two constants. The TVIW concept suggests that the inertia weight is decreased linearly from 0.9 to 0.4 during one run. This was tested on the benchmark Schaffer's F6 function. TVIW is not very effective for tracking dynamic complex systems, and it also suffers from lack of diversity at the end (fine tuning); hence a random inertia weight is chosen according to the variation of population fitness, as per the following formula:

ω = 0.5 + rand(·)/2    (4.2.4)

rand(·) is a uniformly distributed random number between 0 and 1, so here ω is a random number and does not decrease linearly [12].
One more formula is suggested in Chao and Duo [13], where ω does not decrease linearly:

ω(i) = ω0 exp(−(i / max_i)^n)    (4.2.5)

here ω0 is the initial inertia weight, i is the iteration number, max_i is the maximum iteration number and n is a curve shape control power.
Hu et al. [14] have proposed a new sensation model for PSO for making efficient utilization of each particle's information. This method of changing the inertia weight works by enhancing the individual particle's sensation ability. The duration of the trajectory is computed from the distribution of a random variable with an exponential probability distribution function. This makes performance more predictable and removes inconsistency.
Tang and Zhao [15] suggested a strategy of self-adaptive dynamic inertia weight, as per the expression

ω = (ω1 − ω2) (f − f2) / (f̄ − f2)    (4.2.6)

here f is the current objective function value, f̄ is the current particles' average objective function value and f2 is the minimum objective function value. This method is effective for global and local exploration capability, dealing with multi-peak functions.
The inertia weight approach is less effective for a large scale problem than for a small scale problem, because if applied to a large scale problem the search comes to an end before the phase of searching shifts from diversification to intensification. In different cases the choice of formula for the inertia weight may be problem dependent.

4.3. Acceleration factors

Whether the increment in the cognitive value or the social value should be larger, there was no good way to figure out. It is desired that neither the social nor the cognitive part should be comparatively higher; hence in earlier experiments the acceleration factors were given a constant value of 2, giving an average of one, so that particles overfly the target only about half the time of the search.
Instead of having a fixed value of the acceleration factors, Suganthan [2] suggested through empirical studies that the acceleration coefficients should not always be equal to 2 for obtaining better solutions.
A time varying value of the acceleration coefficients (TVAC) proved to give improved performance of particle swarm optimization after a certain number of initial iterations, in order to have fine tuning and problem-based tuning of particle swarm optimization. Allowing a large cognitive component and small social component in the beginning, and a small cognitive and large social component in the later part of optimization, enhances global search in the beginning and promotes convergence of particles to the global optimum point at the end of the search. The adjustment of the values of C1, C2 is as follows:
C1 = (C1f − C1i) (iter / MAXITER) + C1i    (4.3.1)

C2 = (C2f − C2i) (iter / MAXITER) + C2i    (4.3.2)


where C1f, C1i, C2f and C2i are constants, iter is the current iteration number and MAXITER is the maximum number of allowable iterations. An improved optimum solution was observed when C1 was decreased from 2.5 to 0.5 and C2 increased from 0.5 to 2.5 over the entire search range [16].
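The TVAC schedules of Eqs. (4.3.1) and (4.3.2) can be sketched as below; the defaults encode the 2.5 → 0.5 and 0.5 → 2.5 ranges reported in [16], and the function name is illustrative.

```python
def tvac(iteration, max_iter, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time varying acceleration coefficients, Eqs. (4.3.1)-(4.3.2):
    the cognitive coefficient C1 shrinks while the social coefficient
    C2 grows over the run."""
    frac = iteration / max_iter
    return (c1f - c1i) * frac + c1i, (c2f - c2i) * frac + c2i
```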
Stacey et al. [17] suggested that these acceleration factors can be related to the mutation function in evolutionary programming, thereby giving birth to mutation particle swarm optimization (MPSO), keeping the mutation step size equal to the maximum allowable velocity. A considerable improvement was observed with the MPSO-TVAC method; MPSO with fixed acceleration constants does not give good results.
For complex multimodal functions the inertia weight and constriction methods both proved to be ineffective, and elimination of the velocity term from the equation also led to a local convergence tendency. Addressing these factors for complex multimodal functions, a self-organizing hierarchical particle swarm optimization (HPSO) along with TVAC proved to perform better. In HPSO the initial velocity is kept at zero, and to prevent stagnation of particles in the search space, a time varying reinitialization velocity proportional to the maximum allowable velocity is allotted.
To improve optimizing efficiency and stability, new metropolis coefficients were given by Jie et al. [2], representing a fusion of simulated annealing and particle swarm optimization. The metropolis coefficients Cm1 and Cm2 are multiplied by C1 and C2 respectively in the SPSO equation, and vary according to the distance between the current and best positions and according to the generation of particles. This method gives better results in a smaller number of iteration steps and less time. These coefficients are given as

Cm1 = exp(−(Pi − Xi)² / t)    (4.3.3)

and

Cm2 = exp(−(Pg − Xi)² / t)    (4.3.4)

5. PSO algorithm combined with other intelligent algorithms

Like other evolutionary computation techniques, particle swarm optimization also has the drawback of premature convergence. To reach an optimum value, particle swarm optimization depends on the interaction between particles. If this interaction is restrained, the algorithm's searching capacity will be limited, thereby requiring a long time to come out of local optimums. Many developments, including combinations of PSO with other intelligent algorithms, have been made to solve this problem.
Elite particle swarm optimization with mutation (EPSOM) was suggested by [3]. According to the author, after some initial iterations each particle is ranked according to its fitness value. Particles of higher fitness value are sorted out to form a different swarm and give a better convergence. A mutation operator, as given below, is introduced to avoid decreasing diversity and an increasing chance of being trapped in local minima:

Pg′ = Pg (1 + 0.5 η)    (5.1)

where η is a random number usually distributed between 0 and 1. EPSOM gives better results than random inertia weight and linearly decreasing inertia weight.
Multi swarm and multi best particle swarm optimization was proposed by [18]; according to the author, advantage should be taken of the information at every position of a particle, unlike SPSO in which use is made of the information at the best positions only (P best and G best). For this, the author suggests new values of C1, C2, instead of a constant value of 2, to be substituted into the SPSO equation, as

C1it = 2 (1/Ptit) / Σt (1/Ptit)    (5.2)

C2it = 2 (1/Gtit) / Σt (1/Gtit)    (5.3)

Pt is the fitness value of P best and Gt is the fitness value of G best.
Another approach is multiswarm particle swarm optimization, in which, after being initialized, the population of particles is divided into n groups randomly; every group is regarded as a new population, and these update their velocities and positions synchronously. Thus n G bests are obtained, and then these groups are again combined into one population. It now becomes easy to calculate the real optimum value out of the n G bests [18].
A particle's quality is always estimated based on fitness value and not on dimensional behaviour, but some particles may have different dimensional behaviour in spite of the same fitness value. Also, in SPSO each update of velocity is considered an overall improvement, while it is possible that some particle may move away from the solution with this update. To beat these problems a new parameter called the particle distribution degree, dis(s), was introduced [19]:

dis(s) = (1/dim) Σi=0..dim Σl=1..n (ail − PNum/N)²    (5.4)

here s is the swarm, dim is the dimensionality of the problem, N is the equal separation size of the particle swarm, i the dimension and l the separation area. Particles crowd together more centrally the bigger dis(s) is.
Liu [20] suggests two improvement strategies. In the first, at the start the solution is initialized in a limited range; then, according to the limit of the searching range, certain steps of size proportional to the population size are selected to distribute the points uniformly. This combination of uniform and random solutions in the initial iterations enhances diversity. In the second approach the author suggests that speed and location updating is not necessary after each iteration: coordinates can be assigned to each optimal fitness value, i.e. a certain incremental or decremental value is intercepted from every dimension of each optimal solution, which is then compared with the right fitness value.
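The multiswarm splitting described in [18] above (n random groups, each yielding its own G best) can be sketched as follows; the helper name and the striding split are illustrative choices, assuming a minimization fitness.

```python
import random

def multiswarm_gbests(particles, fitness, n_groups, seed=0):
    """Shuffle the population, split it into n_groups random groups and
    return each group's best particle; the real optimum is then the
    best among these n G bests."""
    rng = random.Random(seed)
    pool = [p[:] for p in particles]
    rng.shuffle(pool)
    groups = [pool[i::n_groups] for i in range(n_groups)]
    return [min(g, key=fitness) for g in groups]
```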
To exploit the quality of sensitivity to initial solutions present in the Nelder–Mead Simplex Method (NMSM) and the good global search ability of particle swarm optimization, the two can be combined. This gives a hybrid particle swarm optimization (hPSO). The choice of initial points in the simplex search method is predetermined, but PSO has random initial points (particles). Also, PSO proceeds towards the best by decreasing the difference between the current and best positions, whereas the simplex search method evolves by moving away from the point which has the worst performance. In hPSO the composition of the function is of little concern. In hPSO each particle is regarded as a point of the simplex. On each iteration the worst particle is replaced by a new particle generated by one iteration of NMSM; then all particles are again updated by PSO. The PSO and NMSM steps are performed iteratively [21].
For effectively solving multi-dimensional problems, Hsu and Gao [22] further suggested a hybrid approach incorporating NMSM along with the existence of a centre particle in PSO. With the exploitation property of PSO and the exploration property of NMSM, aided by the centre particle which dwells near optima and attracts many particles, the convergence and accuracy of PSO are further improved. The authors performed experiments on 18 benchmark functions to show its superiority.
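The NMSM step inside hPSO, regenerating the worst particle from the simplex of all particles, might be sketched as below; a single reflection through the centroid stands in for a full Nelder–Mead iteration, and all names are illustrative.

```python
def nm_replace_worst(points, f, alpha=1.0):
    """One Nelder-Mead-style reflection: the worst point is reflected
    through the centroid of the remaining points and replaced if the
    reflected point is better, mirroring the hPSO step where the worst
    particle is regenerated by NMSM."""
    pts = sorted(points, key=f)
    worst, rest = pts[-1], pts[:-1]
    centroid = [sum(p[d] for p in rest) / len(rest)
                for d in range(len(worst))]
    reflected = [centroid[d] + alpha * (centroid[d] - worst[d])
                 for d in range(len(worst))]
    if f(reflected) < f(worst):
        pts[-1] = reflected
    return pts
```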


Another approach, given by Wang et al. [23] for avoiding local convergence in the case of multimodal and multidimensional problems, is called group decision particle swarm optimization (GDPSO). It takes into account every particle's information for making a group decision in the early stages (mirroring human intelligence, in which everybody's individual talent and intelligence makes up for lack of experience); then at later stages the original decision making of particle swarm optimization is used. Thus in GDPSO the search space is enlarged and diversity is increased to solve high dimension functions.
According to Maeda et al. [24], one of the reasons for the premature convergence of PSO is its constant intensity of search throughout the process, the process being one whole without being divided into exclusive segments. Based on the hierarchical control concept of control theory, a two layer particle swarm optimization was introduced, with one swarm at the top and L swarms at the bottom layer in which parallel computation takes place, thereby increasing the number of particles to L multiplied by the number of particles in each bottom swarm; thus diversity is increased. When the velocity of particles on the bottom layer is less than a marginal value and the position of particles cannot be updated with that velocity, the particle velocity needs to be re-initialized without considering the former strategy, to avoid premature convergence.
Voss [25] introduced a new principal component particle swarm optimization (PCPSO), which could be an economic alternative for a large number of engineering problems, as it reduces the time complexity of high dimensional problems. In PCPSO particles fly in two separate axial spaces at the same time, one being the original space and the other a rotated space. The new z locations are mapped into x space using a parameter which defines the fraction of rotated-space flight considered, and their weighted average is found. P best and G best are found and updated using covariance.
In SPSO all particles converge to one point at the final step. Seo et al. [26] suggested a multi-grouped particle swarm optimization (MGPSO) for multimodal functions. Here a repulsive velocity is introduced into Eq. (1) of SPSO to encourage individual particles to escape efficiently from the G best areas of the other groups. It gives N peaks for N groups. When the number of groups is considerably more than needed to obtain equal peaks, the concept of a time varying territory is used, whose radius decreases from some fixed initial value to zero. Tested on a permanent magnet synchronous motor optimization problem, this method gives good results.
Pan et al. [27] have combined PSO with simulated annealing and with swarm core evolutionary particle swarm optimization to improve the local search ability of PSO. In PSO with simulated annealing, a particle moves to its next position not directly by a comparison criterion with the best position but with some probability function controlled by temperature. In swarm core evolutionary particle swarm optimization, the particle swarm is divided into three sub swarms according to distance, as core, near and far, and assigned different tasks. This process works better in cases where the optima change frequently.
Kiranyaz et al. [28] have proposed two new methods. The first method is multi dimensional PSO (MDPSO), which removes the necessity of a fixed dimension as a preset condition and allows particles to pass through multiple dimensions, while the rest of the update process remains the same. The other method is the fractional global best formation technique (FGBF), in which an artificial global best particle is created by deriving information from the best dimensional components. Improvement in speed of convergence and better accuracy resulted from the experiments performed by the authors.
Wangi et al. [29] proposed a two stage composite particle swarm optimization as an improved PSO. In this approach, first a suitable solution group is searched, giving a provisional result; secondly, based on the above solution group, the range of optimization is narrowed under a certain strategy. It is advantageous in giving high precision solutions for large scale high precision problems.
Wang et al. [30] presented a modified particle swarm optimization based on chaotic neighbourhood search. It is based on the idea that if G best does not change for long, it may be a sign of local trapping; then a random particle is generated and taken to chaotic search, and the same is repeated for all particles. The fitness of the chaotic search is calculated and compared with G best, and the one having the better value is kept. This method has faster convergence capability and global search capability, and provides a solution to overcome premature convergence.
Tang and Zhao [15] proposed a modified PSO with a fine tuning operator, with which the performance of PSO is much improved. It combines the effectiveness of three fine tuning parameters, namely mutation, crossover and RMS variants. Mutation is designed for high tuning capability aimed at high precision; crossover is controlled by a crossover probability determining between which pairs it occurs, for fine tuning; and RMS variants are used to fine tune the PSO algorithm for precision and enhanced convergence.
Zhao et al. [31] divided the particle swarm into two alike sub swarms, with the first following SPSO and the second following cognition-only PSO. The particle with the worst fitness value of the first sub swarm is exchanged with the particle with the best fitness value of the second sub swarm after each iteration, thereby reducing the possibility of stagnation of particles. When the steady state is reached, a few particles of worst fitness from the first sub swarm are taken out and exchanged with a few randomly drawn particles of the second sub swarm. This allows a larger search space when reaching towards the end. This algorithm improves global convergence and works against premature convergence as well.
Zhao et al. [32] slightly modified the two sub-swarm strategy above. When the steady state is reached, each sub-swarm exchanges particles individually, with a different number of exchanged particles in different search phases and a sum that decreases progressively with time. Experiments show that this method has better convergence rate and global optimization properties.
According to Chen [33], a new method is introduced based on particle classification, modelling particles by fitness value as best, average and worst. The author suggests that the velocity term can be removed from the first equation of SPSO, so that Eqs. (1) and (2) combine into a single equation:
Xid = Xid + c1 r1 (Pid − Xid) + c2 r2 (Pgd − Xid)  (5.5)
The process becomes simple and convenient with this. To ensure the balance between global and local search for different particles, TVAC and TVIW are used. This simplified PSO gives comparatively better results.
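The velocity-free update of Eq. (5.5), combined with time-varying acceleration coefficients, can be sketched as follows. The swarm size, bounds and TVAC schedule (cognitive c1 decreasing from 2.5 to 0.5, social c2 increasing from 0.5 to 2.5, as commonly used following [16]) are illustrative assumptions:

```python
import random

def simplified_pso(fitness, dim, n=20, iters=100, lo=-5.0, hi=5.0):
    """Velocity-free PSO after Eq. (5.5), with TVAC; minimization sketch."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    pf = [fitness(x) for x in X]
    g = min(range(n), key=lambda i: pf[i])     # index of global best
    for t in range(iters):
        c1 = 2.5 - 2.0 * t / iters             # cognitive coefficient: 2.5 -> 0.5
        c2 = 0.5 + 2.0 * t / iters             # social coefficient:    0.5 -> 2.5
        for i in range(n):
            for d in range(dim):
                # Eq. (5.5): position moves directly toward P best and G best
                X[i][d] += (c1 * random.random() * (P[i][d] - X[i][d])
                            + c2 * random.random() * (P[g][d] - X[i][d]))
            f = fitness(X[i])
            if f < pf[i]:                      # update personal best
                P[i], pf[i] = X[i][:], f
                if f < pf[g]:                  # and global best
                    g = i
    return P[g], pf[g]

best, val = simplified_pso(lambda x: sum(v * v for v in x), dim=3)
```

Early iterations emphasize each particle's own history (large c1) while later iterations pull the swarm toward the global best (large c2), balancing exploration and exploitation without a velocity term.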
Epitropakis et al. [34] suggested a hybrid of PSO with the differential evolution (DE) algorithm. At each evolution step the three main DE operators (mutation, recombination and selection) are applied to the P bests. This method gives a good balance between exploitation and exploration for multidimensional functions.
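The DE step of such a hybrid can be sketched as one DE/rand/1/bin generation applied to the personal-best archive. The parameter values F = 0.5 and CR = 0.9 and the greedy replacement rule are standard DE choices assumed here, not necessarily the exact settings of [34]:

```python
import random

def de_step_on_pbests(P, pf, fitness, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation on the PSO personal-best archive P
    (list of vectors) with fitnesses pf. Greedy selection: a trial vector
    replaces P[i] only if it is fitter (minimization)."""
    n, dim = len(P), len(P[0])
    for i in range(n):
        # three distinct donors, none equal to the target i
        a, b, c = random.sample([j for j in range(n) if j != i], 3)
        jrand = random.randrange(dim)          # guarantees one mutated gene
        trial = []
        for d in range(dim):
            if random.random() < CR or d == jrand:
                trial.append(P[a][d] + F * (P[b][d] - P[c][d]))  # mutation
            else:
                trial.append(P[i][d])          # inherit from the target
        f = fitness(trial)
        if f < pf[i]:                          # DE selection step
            P[i], pf[i] = trial, f
    return P, pf
```

Because selection is greedy, the personal-best archive can only improve, so the DE step never degrades the experience the PSO particles learn from.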
Ji et al. [35] proposed a bi-swarm PSO with cooperative co-evolution. The swarm consists of two parts: the first swarm is generated randomly in the whole search space, and the second swarm is generated periodically, centred between the largest and smallest bounds of the best and worst particles of the first swarm in all dimensions. The two swarms share information with each other during each generation, with velocity and position updates following SPSO equations (1) and (2). The best particles of the first swarm are then compared with those of the second, and in this way the overall best particle is found. Experiments show that this method outperforms SPSO in convergence, speed and precision.


6. Solving constrained optimization problems


PSO is a better option than the genetic algorithm (GA) for solving constrained optimization problems: GA, which has mostly been used for such problems, suffers from slow convergence because its mutation operator can destroy good genes. For constrained optimization problems, a few modifications need to be made to SPSO.
Parsopoulos and Vrahatis [36] proposed a dynamic multi-stage assignment penalty function for converting a constrained optimization problem into an unconstrained one, to which PSO is then applied. However, an unsuitable penalty function again makes convergence slow.
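The conversion itself can be sketched with a simple static quadratic penalty; the multi-stage, dynamically weighted scheme of [36] is more elaborate, and the weight w below is an illustrative assumption:

```python
def penalized(fitness, constraints, w=1e4):
    """Wrap a constrained problem (min f(x) s.t. g_i(x) <= 0) into an
    unconstrained one via a quadratic penalty; a generic sketch only."""
    def fp(x):
        # sum of squared constraint violations; zero at feasible points
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return fitness(x) + w * violation
    return fp

# Minimize x^2 subject to x >= 1 (written as 1 - x <= 0); optimum at x = 1
f = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
assert f([2.0]) == 4.0      # feasible point: no penalty added
assert f([0.0]) > f([1.0])  # infeasible point is heavily penalized
```

Any unconstrained PSO can then minimize `f` directly; the risk noted above is that a poorly scaled `w` either lets infeasible particles win or flattens the landscape and slows convergence.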
To handle constrained optimization problems, quantum-behaved particle swarm optimization (QPSO) has been suggested, in which the state of a particle is given by a wave function ψ(x, t) instead of position and velocity. QPSO has fewer parameters to control and better search ability than SPSO. Liu et al. [37] further modified QPSO: a double fitness value is given, the need for a penalty factor is eliminated, and a new comparison criterion is introduced.
According to Huang et al. [38], the speed of convergence in QPSO depends on the best position history of a particle; that position depends on the fitness of the particle itself and has nothing to do with the best positions of other particles. Through experiments the authors attempt to find instructive rules for the choice of parameters. The best position of each particle is substituted with the public best position of all particles of the swarm, and the improved QPSO is suggested to give better convergence as well as global search ability.
Ray and Liew [39] proposed a scheme based on the multiobjective optimization concept. A Pareto ranking scheme generates a better-performer list based on a constraint matrix. To become a member of the list, a particle has to improve its performance by taking information from its nearest neighbour in the list.
Jian et al. [40] combined genetic particle swarm optimization (GPSO) with SPSO. GPSO generates binary positions based on the original positions of particles; these are converted to real-valued positions to calculate the objective function value and constraint violation. SPSO also generates real-valued positions with its own equations, and the better of the two is adopted as the new position of the particle. A stochastic ranking algorithm is adopted to compare rankings.
Hu and Eberhart [41] suggested an approach in which feasible solutions are preferred over infeasible ones. All particles update their positions and velocities referring only to feasible solutions, and during initialization all particles are started from feasible solutions.
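The preference rule can be sketched as a comparator used when updating P best and G best. Treating "both infeasible" as keeping the incumbent is an assumption consistent with [41], where particles start feasible and bests are only ever replaced by feasible solutions:

```python
def better(candidate, incumbent, fitness, feasible):
    """Feasibility-preference comparator (minimization sketch):
    a feasible solution always beats an infeasible one; between two
    feasible solutions the lower fitness wins; between two infeasible
    solutions the incumbent is kept."""
    fc, fi = feasible(candidate), feasible(incumbent)
    if fc and not fi:
        return candidate
    if fi and not fc:
        return incumbent
    if fc and fi:
        return candidate if fitness(candidate) <= fitness(incumbent) else incumbent
    return incumbent                 # both infeasible: keep the incumbent

# Toy usage: minimize x^2 subject to x >= 1
feas = lambda x: x >= 1.0
fit = lambda x: x * x
assert better(2.0, 0.5, fit, feas) == 2.0   # feasible beats infeasible
assert better(1.5, 3.0, fit, feas) == 1.5   # lower fitness among feasible
```

With this rule, the G best is guaranteed to be feasible whenever at least one feasible solution has been visited, which is exactly what the feasible-initialization requirement ensures from the first iteration.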
Yang et al. [42] introduced two swarms to handle constraints, a master and a slave. The master swarm finds the optimum value by flying towards the current better particles in the feasible region, while the slave swarm searches for feasible particles by flying in the infeasible regions. Nascent feasible particles replace worse feasible particles in the master swarm, and nascent infeasible particles replace worse infeasible particles in the slave swarm. Tested on 11 benchmark problems, the method improves global exploration capability.
7. Discrete space optimization
PSO works well with continuous functions. To make it applicable to problems with discrete variables as well, Kennedy and Eberhart [43] developed binary particle swarm optimization (BPSO). With binary coding of discrete variables, Xi^k should take the value 0 or 1; but, as is obvious from Eq. (1), Vi^(k+1) will not be integral, and hence from Eq. (2) Xi^(k+1) may acquire values other than 0 and 1 after an iteration. To adjust Eqs. (1) and (2) so that iterations yield only 0 and 1, the authors brought in the sigmoid function Sig(x):
Sig(x) = 1 / (1 + exp(−x))  (7.1)
so that Eq. (2) becomes

Xi^(k+1) = 0 if rand() ≥ Sig(Vi^(k+1)), and Xi^(k+1) = 1 otherwise.

So in binary PSO, Vi^k represents only a probability, unlike in Eq. (1) of SPSO where it represents a velocity continuously updating the direction and position of Xi^k.
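The sigmoid rule of Eq. (7.1) and the modified Eq. (2) reduce to a few lines of code; the injectable `rand` argument below is only for illustration:

```python
import math, random

def sigmoid(v):
    # Eq. (7.1): squashes a velocity component into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def bpso_position_update(V, rand=random.random):
    """BPSO position rule: each bit is set to 1 with probability
    Sig(V[d]), and to 0 otherwise (equivalent to the rule above)."""
    return [1 if rand() < sigmoid(v) else 0 for v in V]

# Strongly negative velocity -> bit almost surely 0;
# strongly positive velocity -> bit almost surely 1.
bits = bpso_position_update([-10.0, 10.0])
```

The velocity update of Eq. (1) stays unchanged; only the position update is replaced, which is why V stops being a velocity and becomes a per-bit probability.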
The above binary PSO approach proved unsatisfactory for certain discrete optimization problems, and different authors have suggested modifications and hybrids with other algorithms. Liu and Fan [44] noted that BPSO has a commanding capability for global search but requires improvement in its exploration capability; after analyzing BPSO, they suggested improvements to the probability mapping formula and the bit-value formula.
Jun and Chang [45] suggested that BPSO can be integrated with genetic crossover and mutation concepts along with simulated annealing. Taking advantage of a linear clawback strategy, the dimensions of crossover and mutation are kept large in initial iterations, so that stagnation is prevented and diversity increased, and smaller in later iterations to enhance convergence. To further enhance search capability and convergence, a simulated annealing step is applied to individual evolution, accepting a solution within a certain probability according to an acceptance criterion. The combination of the three algorithms works better than any of them alone.
Zhen et al. [46] presented a new PSO-inspired binary optimization method for discrete binary optimization problems, based on the PSO information-sharing mechanism and the concept of probability optimization. The value of each bit is determined by its probability of being 1, and this probability is itself updated according to the information-sharing mechanism of PSO. Tested on benchmark functions, the proposed algorithm proves better than the original BPSO.
Cervantes et al. [47] evaluated the capacity of BPSO by testing two classical approaches taken from the GA community: the Pittsburgh approach, in which each particle represents a full solution of the problem, and the Michigan approach, in which a particle represents a partial solution. Both approaches are tested on a reference set of problems, the Monks set. To enforce competition among particles, the G best part in Eq. (1) is replaced by a repulsive term, which ensures that a particle is repelled by the best particle in its neighbourhood when both are at the same position moving in the same direction; if the particle itself is the best, the term becomes zero. With the insertion of this repulsive factor the neighbourhood is no longer fixed or static as in SPSO: for each iteration a dynamic neighbourhood is calculated from the proximity of particles. The modification improves BPSO and can be generalized to the continuous version of PSO as well.
Chen [48] proposed a new second-generation particle swarm optimization (SGPSO), based on the observation that the best solution may be the geometric centre of the optimum swarm or nearby. A component with a tendency to move the particle towards the centre of the optimum swarm is added to the velocity part of the SPSO equation, and the fitness of the geometric centre is compared with the global fitness of the swarm. Experiments show that SGPSO is better than SPSO and PSO-TVAC.
Zhao et al. [49] proposed an improved particle swarm optimization (IPSO) in which a particle moves with an optimized time step to find the best position along the direction of its velocity. The authors also developed a parallel hybrid particle swarm optimization (HPSO) combining Powell search, pattern search and the improved PSO above. IPSO improves basic stability and convergence; HPSO, combining the features of all three, is good at solving complex global optimization problems.
8. Applications of PSO
PSO is an important optimization algorithm and, owing to its high adaptability, has applications in diverse fields such as medicine, finance, economics, security and military, biology, and system identification. More and more new methods originating from PSO are being developed, widening its scope in these areas. Research on PSO applications in some fields, such as electrical engineering and mathematics, is extensive, while in others, for example chemical and civil engineering, it is exceptional. In mathematics, PSO finds application in multimodal function, multiobjective and constrained optimization, the travelling salesman problem, data mining, modelling, etc.
To quote examples from engineering, PSO can be used in materials engineering; in electronics (antennas, image and sound analysis, sensors and communication); in computer science and engineering (visuals, graphics, games, music, animation); in mechanical engineering (robotics, dynamics, fluids); in industrial engineering (job and resource allocation, forecasting, planning, scheduling, sequencing, maintenance, supply chain management); in traffic management in civil engineering; and in chemical processes in chemical engineering. In electrical engineering, PSO finds uses in generation, transmission, state estimation, unit commitment, fault detection and recovery, economic load dispatch, control applications, optimal use of electrical motors, structuring and restructuring of networks, neural network and fuzzy systems, and Renewable Energy Systems (RES). As a method for finding the optima of complex search processes through iteration over each particle of a population, PSO can provide answers for the planning, design and control of RES [50,51].
8.1. Application in Solar Photovoltaics (SPV)
Achieving consistent and economic power solutions for the global deployment of renewable energy in urban, rural and remote regions is a difficult problem. PSO, a stochastic optimizer with faster speed and simpler realization than genetic algorithms and ant colony optimization, has been effectively applied to a large range of RES problems. SPV is one of the main technologies by which a large share of our energy supply could be replaced by renewables, and it is forecast to become a popular alternative electricity source in the future, either alone or as part of hybrid RES. RES help restructure energy systems and protect the environment; however, these nonlinear, multiobjective and multi-peak problems are difficult to model with traditional methods. Moreover, not all RES are financially viable and reliable compared with conventional centralized generation: the outputs of wind turbine generators and PV panels are significantly influenced by weather conditions such as wind speed and insolation, respectively, and the initial and maintenance costs of renewable generation facilities are high. It is therefore desirable to use RES in a manner that yields a cost-effective and reliable autonomous power generation system. Investigations in recent years have shown that PSO can cope with diverse loads and solar irradiation to maximize the power output and optimize the number of devices used, thereby enhancing the economy and security of the system [52,53].


8.1.1. Sizing and allocation


PSO is used for optimal sizing of the system, satisfying all constraints. The aim of a sizing methodology is to determine the optimal number and types of devices being used. The cost function includes investment, operation and maintenance costs, along with the cost of losses and the revenue from selling energy to the grid in a grid-connected system [54–56]. Simulations can be carried out in any language, e.g. Matlab, C, C++ [57]. Comparative analysis shows that PSO performs better than GA when applied to the sizing problem, in terms of number of iterations and CPU time [58]. Most methodologies use a reliability index. With increasing installation capacity, traditional capacity design by experience cannot meet the required accuracy in design and operation, so a comprehensive objective function can be built to include both reliability and cost criteria [56,57,59]. Sometimes another algorithm, for example Harmony Search (HS), is also applied to the above case in search of a still better solution [60]. To improve global search capability and to avoid trapping in local minima, meta-PSO is used for hybrid wind-SPV systems, making use of long-term data of ambient temperature, wind speed and solar irradiance. The tilt angle of the SPV (a function of latitude, wind speed, load and the capacity of the devices used) affects cost and performance, so the optimal tilt angle can also be set as a decision variable to be found with PSO [61].
Navaeefard et al. [62] presented optimal sizing of the components of a hybrid system (wind, SPV, battery, fuel cell and hydrogen tank) in a microgrid; reliability is assessed in two ways, first considering wind uncertainties and second ignoring them. Sulaiman [63] used an Artificial Immune System (AIS) for sizing a grid-connected SPV system and selecting the optimal number of devices; AIS gave better optima than GA and PSO, but took much longer than PSO. Akshat Kumar et al. [56] proposed an optimal sizing method for a hybrid wind/SPV/fuel cell system considering cost and reliability criteria, using reliability constraints such as a penalty factor to control parameters like battery state of charge and the start-stop cycle of the fuel cell. An appropriate combination of system reliability index level and capital investment cost is therefore vital, and the major advantage of PSO is its effectiveness in manipulating new particles to give optimal solutions that fulfil the constraints. Wang and Singh [53,64] designed a hybrid power generation system including wind and solar power on the basis of cost, reliability and emission criteria. Wind turbine generators, photovoltaic panels and storage batteries are used to build a grid-linked generation system that is optimal with respect to these multiple criteria, and the set of trade-off solutions obtained with the multiobjective approach offers many design alternatives to the decision maker. A customized particle swarm optimization algorithm is developed to derive these non-dominated solutions, and a grid-linked hybrid power system is designed based on the proposed approach. Due to the unpredictability of wind speed and solar insolation, AutoRegressive Moving Average (ARMA) models are adopted to reflect their stochastic characteristics, and sensitivity studies examine the impacts of different weather conditions and economic rates.
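A toy version of such a combined cost-and-reliability objective, suitable as a PSO fitness function, is sketched below. All unit costs, energy yields and the loss-of-power-supply-probability (LPSP) limit are invented for illustration and are not taken from the cited studies:

```python
def sizing_cost(n_pv, n_wt, n_bat, load_kwh, pv_kwh, wt_kwh,
                cost=(550.0, 1200.0, 150.0), lpsp_max=0.05):
    """Toy annualized-cost objective for hybrid PV/wind/battery sizing
    with an LPSP reliability penalty (all numbers illustrative)."""
    capital = n_pv * cost[0] + n_wt * cost[1] + n_bat * cost[2]
    # Batteries shift energy in time rather than adding supply
    supplied = n_pv * pv_kwh + n_wt * wt_kwh
    deficit = max(0.0, load_kwh - supplied)
    lpsp = deficit / load_kwh                  # fraction of unserved load
    penalty = 1e6 * max(0.0, lpsp - lpsp_max)  # infeasible designs dominated
    return capital + penalty

# A design meeting the load is far cheaper than an undersized one
ok = sizing_cost(10, 2, 4, load_kwh=6000, pv_kwh=400, wt_kwh=1200)
short = sizing_cost(2, 0, 4, load_kwh=6000, pv_kwh=400, wt_kwh=1200)
assert ok < short
```

A PSO particle would encode the integer tuple (n_pv, n_wt, n_bat), rounded from continuous positions, and minimize this function; the penalty term plays the same role as the reliability constraints discussed above.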
In simulation studies a model of the SPV is used to validate the system design. With an accurate SPV model in circuit-oriented software, the behaviour of an SPV array under different environmental conditions can be observed. Soon et al. [65] identified the unknown parameters of a single-diode PV model using the PSO approach with a log-barrier constraint. The method has been applied to a PV module and shows accurate modelling results. It eliminates the need to omit any terms to reduce the complexity of the I-V output equation during formulation, and it includes the effect of temperature in identifying the unknown model parameters.
8.1.2. Maximum power point tracking (MPPT)
MPPT is an electronic system that operates the SPV modules at the point where they produce the maximum possible power output. It is not a mechanical tracking system that physically moves the modules to follow the sun (although MPPT can be used in conjunction with one, which is a different matter); it is a fully electronic system that varies the operating point of the module. Several MPPT methods have been developed, which can be categorized into the following groups: perturb and observe (P&O), incremental conductance, fractional open-circuit voltage, and fractional short-circuit current. PSO-based tracking systems do not require any derivative calculations, and are therefore robust and noise-resistant [66].
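A PSO-based tracker of this kind can be sketched with converter duty cycles as particle positions and measured power as fitness. The converter and panel are replaced here by an analytic two-peak power curve imitating partial shading; the swarm size, coefficients and curve are illustrative assumptions, and a real tracker would re-initialize the swarm when irradiance changes:

```python
import math, random

def pso_mppt(measure_power, n=5, iters=30, w=0.4, c1=1.5, c2=1.5):
    """PSO over duty cycles in [0, 1]; fitness is the measured panel
    power at each operating point (maximization sketch)."""
    d = [random.random() for _ in range(n)]        # duty-cycle particles
    v = [0.0] * n
    pbest, pbest_p = d[:], [measure_power(x) for x in d]
    g = max(range(n), key=lambda i: pbest_p[i])
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            v[i] = (w * v[i] + c1 * r1 * (pbest[i] - d[i])
                    + c2 * r2 * (pbest[g] - d[i]))
            d[i] = min(1.0, max(0.0, d[i] + v[i])) # clamp to a valid duty cycle
            p = measure_power(d[i])
            if p > pbest_p[i]:
                pbest[i], pbest_p[i] = d[i], p
                if p > pbest_p[g]:
                    g = i
    return pbest[g], pbest_p[g]

# Two-peak power-vs-duty curve mimicking partial shading; global peak near d = 0.7
curve = lambda d: (60 * math.exp(-80 * (d - 0.3) ** 2)
                   + 100 * math.exp(-80 * (d - 0.7) ** 2))
duty, power = pso_mppt(curve)
```

Because the particles sample the whole duty-cycle range, the swarm can land on the global peak of a multi-peak curve, which is exactly the failure case of hill-climbing P&O under partial shading.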
Alam et al. [67] note that, compared with a single-junction solar cell, a multijunction (MJ) solar cell can extract more energy from the sun by dividing the solar spectrum. Based on the spectrum-dividing method, two structures of MJ solar cells are possible, the vertical multijunction (VMJ) and the lateral multijunction (LMJ) cell, each with its own advantages and drawbacks. The LMJ cell has the capacity to emerge as an efficient solution for solar energy conversion, although research material on LMJ is scarce compared with VMJ. The authors therefore constructed a complete SPV power system from lateral multijunction cells, along with a new interconnection technique; the I-V characteristics of the cells are matched in the proposed interconnection using a multi-input dc-dc converter. To ensure maximum power point (MPP) operation, a particle swarm optimization algorithm is applied that requires only one MPP controller for four solar modules, reducing cost and complexity.
A linear current controller-based state vector pulse width modulation scheme has been proposed for a three-phase grid-connected SPV with a voltage source inverter. A phase-locked loop is used as the grid phase detector, and PSO implements real-time self-tuning of the current control parameters. A high dynamic response is achieved for the inverter output current with a tolerable harmonic level [68].
Kashif et al. [69,70] used PSO with direct duty-cycle control to track the MPP of the system, eliminating the PI control loop normally used to manipulate the duty cycle. An SPV system designed in the Matlab Simulink environment was used for validation. Results reveal that the proposed method outperforms the traditional hill-climbing method in tracking speed and steady-state oscillations, and the algorithm can easily be run on low-cost microcontrollers. Hanny et al. [71] proposed a three-phase, four-wire current-controlled voltage source inverter for both power quality improvement and SPV energy extraction; the MPPT controller employs the PSO technique. Computer simulation results show that the grid currents are sinusoidal and in phase with the grid voltages, delivering maximum power to the loads.
Fu et al. [72] proposed a new PSO algorithm based on adaptive grouping for photovoltaic MPP prediction. After a locally optimal area is obtained, only part of the particles are left to find the local optimum, while the other particles undergo a catastrophe operation and are constrained to the remaining regions for a new search. In this way the particle swarm not only improves convergence rate and precision but also effectively enhances global optimization ability, so the algorithm can be applied to predict the MPP of a photovoltaic cell. Its effectiveness is demonstrated in experimental findings.

In the general case a simple control based on the P&O method is sufficient, but under partial shading the output power is highly affected. Partially shaded modules develop hot spots, which can even damage the SPV cells, and partial shading causes multiple peaks in the SPV characteristic curve [73–75]. To deal with the reduced output of partially shaded modules, Kondo et al. [76,77] used a new MPPT control method, initialization-and-repulsion PSO. The method has two features: when two or more local maximum points exist on the SPV characteristic under partial shading, it can reach the global maximum; and, simulated on an SPV controlled by initialization-and-repulsion PSO, it applies to multidimensional MPPT in order to increase output power. Miyatake et al. [78] took the case of two series-connected modules, one partially shaded, whose individual voltages have to be controlled simultaneously, so that tracking the global maximum becomes a multidimensional MPPT problem. PSO took 1 or 2 s to find the global optimum, and the response time was independent of search-space dimension and partial shading. For the case where multiple SPV modules are used and some are partially shaded, Elanchezhian et al. [79] proposed an interleaved soft-switching boost converter. The topology increases the efficiency of the power conditioning system, minimizing cost and losses by adopting a soft-switching method, and the proposed algorithm uses only one pair of sensors to control multiple arrays. Boutasseta [74] proposed a new control scheme combining PSO with a classical PI controller to extract maximum power from an SPV panel subject to partial shading. Ngan and Tan [75] presented a hybrid algorithm of PSO and an artificial neural network (ANN) for an SPV system with a resistive load: when the solar irradiance changes, the ANN generates suitable values of P and an initial value of SPV current for the PSO algorithm, which then generates the corresponding SPV current at the MPP.
9. Conclusion
Like other evolutionary algorithms, PSO has become an important tool for optimization and other complex problem solving. It is an interesting and intelligent computational technique for finding global minima and maxima, with high capability on multimodal functions and in practical applications. Particle swarm optimization works on the principles of cooperation and competition between particles, and because its principles are simple and general it can be used in a large range of fields. Many applications of PSO are reported in the literature, such as neuro-fuzzy networks, optimization of artificial neural networks, computational biology, image processing and medical imaging, optimization of electricity generation and network routing, and financial forecasting. SPV systems not only supply power; they are also a promising means of providing dynamic stability to the system even when no sunlight is available, and PSO provides optimization of the controller gains to maximize the efficiency of such systems.
A few challenges remain to be overcome, such as dynamic problems, avoiding stagnation, and handling constraints and multiple objectives; these are important research points apparent from the literature. The drawbacks still to be worked on are the tendency of particles to converge to local optima, slow convergence, and very large search spaces, in which much time is wasted visiting states of poor fitness. PSO sometimes cannot effectively and accurately solve nonlinear equations, and hybridizing it with other algorithms generally demands a higher number of function evaluations. Most of the proposed approaches, such as inertia weight methods, adaptive variation and hybrid PSO, do solve the premature convergence problem but suffer from low convergence speed, while those suggested for increased speed based on compression show premature or no convergence. In the improved algorithms with mutation, the problem is selecting the mutation step size, and the distance of the best position from the local optimum is not known. Therefore no generalized solution applicable to all types of problems can be given. Yet PSO is a promising method for the simulation and optimization of difficult engineering and other problems. To overcome stagnation of particles in the search space, to improve efficiency, and to achieve better adjustability, adaptability and robustness of parameters, researchers are taking it up as an active research topic and coming up with new ideas for different problems. Further analysis of the comparative strength of PSO, and of the problems in using a PSO-based system, is needed.
References
[1] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[2] X. Jie, X. Deyun, New metropolis coefficients of particle swarm optimization, in: IEEE, 2008.
[3] J. Wei, L. Guangbin, L. Dong, Elite particle swarm optimization with mutation, in: Asia Simulation Conference, 7th Intl. Conf. on Sys. Simulation and Scientific Computing, IEEE, 2008, pp. 800–803.
[4] R. Eberhart, A new optimizer using particle swarm theory, in: Sixth International Symposium on Micro Machine and Human Science, IEEE, 1995, pp. 39–44.
[5] R. Hassan, B. Cohanim, O. de Weck, A Comparison of Particle Swarm Optimization and Genetic Algorithm, 2004.
[6] K. Yasada, N. Iwasaki, Adaptive particle swarm optimization using velocity information of swarm, in: IEEE International Conference on Systems, Man and Cybernetics, 2004, pp. 3475–3481.
[7] R.C. Eberhart, Y. Shi, Particle swarm optimization: developments, applications and resources, in: Proceedings of IEEE International Conference on Systems, 2001, pp. 68–73.
[8] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: IEEE, 1998, pp. 69–73.
[9] M. Clerc, The swarm and the queen: towards a deterministic and adaptive particle swarm optimization, in: Proc. 1999 ICEC, Washington, DC, 1999, pp. 1951–1957.
[10] Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, in: Proceedings of the 7th International Conference on Evolutionary Programming, 1998.
[11] Z.X. Hou, Wiener model identification based on adaptive particle swarm optimization, in: IEEE Proceedings of Seventh International Conference on Machine Learning and Cybernetics, Kunming, 12–15th July, 2008, pp. 1041–1045.
[12] X. Zhang, S. Wen, H. Li, A novel particle swarm optimization with self-adaptive inertia weight, in: Proceedings of 24th Chinese Control Conference, Guangzhou, P.R. China, 2005, pp. 1373–1376.
[13] X. Chao, Z. Duo, An adaptive particle swarm optimization algorithm with dynamic nonlinear inertia weight variation, in: The 1st International Conference on Enhancement and Promotion of Computational Methods in Engineering Science and Mechanics, Changchun, P.R. China, 2006, pp. 672–676.
[14] J. Hu, L. Yu, K. Zou, Enhanced Self Adaptive Search Capability Particle Swarm Optimization, 2008.
[15] J. Tang, X. Zhao, A fine-tuning hybrid particle swarm optimization algorithm, in: International Conference on Future Biomedical Information Engineering, 2009.
[16] A. Ratanweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (June (3)) (2004) 240–252.
[17] A. Stacey, M. Jancic, I. Grundy, Particle swarm optimization with mutation, in: The 2003 IEEE Congress on Evolutionary Computation, CEC'03, 8–12 December 2003, vol. 2, pp. 1425–1430.
[18] J. Li, X. Xiao, Multi swarm and multi best particle swarm optimization algorithm, in: IEEE, 2008.
[19] W. Zu, Y.l. Hao, H.t. Zeng, W.Z. Tang, Enhancing the particle swarm optimization based on equilibrium of distribution, in: Control and Decision Conference, China, 2008, pp. 285–289.
[20] E. Liu, Y. Dong, J. Song, X. Hou, N. Li, A modified particle swarm optimization algorithm, in: International Workshop on Geosciences and Remote Sensing, 2008, pp. 666–669.
[21] A. Ouyang, Y. Zhou, Q. Luo, Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations, in: IEEE International Conference on Granular Computing, 2009, pp. 460–465.
[22] C.-C. Hsu, C.H. Gao, Particle swarm optimization incorporating simplex search and center particle for global optimization, in: Conference on Soft Computing in Industrial Applications, Muroran, Japan, 2008.
[23] L. Wang, Z. Cui, J. Zeng, Particle swarm optimization with group decision making, in: Ninth International Conference on Hybrid Intelligent Systems, 2009.
[24] Y. Maeda, N. Matsushita, S. Miyoshi, H. Hikawa, On simultaneous perturbation particle swarm optimization, in: Proceedings of the Eleventh IEEE Congress on Evolutionary Computation, CEC 2009, 2009.
[25] M.S. Voss, Principal component particle swarm optimization, in: IEEE Congress on Evolutionary Computation, vol. 1, 2005, pp. 298–305.


[26] J.-H. Seo, C.H. Im, C.G. Heo, J.K. Kim, H.K. Jung, C.G. Lee, Multimodal function
optimization based on particle swarm optimization, in: IEEE Transaction on
Magnetics, vol. 2, April, 2006.
[27] G. Pan, Q. Dou, X. Liu, Performance of two improved particle swarm optimization in dynamic optimization environments, in: Proceedings of the Sixth
International Conference on Intelligent Systems Design and Applications, 2006.
[28] S. Kiranyaz, J. Pulkkinen, M. Gabbouj, Multi dimensional particle swarm optimization for dynamic environments, in: IEEE, 2008.
[29] R.-J. Wangi, R.Y. Hongi, X.-X. Zhu, K. Zheng, Study of two stage composite
particle swarm optimization, in: IEEE Proceedings of the Eighth International
Conference on Machine Learning and Cybernetics, Baoding, July, 2009.
[30] W. Wang, J.M. Wu, J.H. Liu, A particle swarm optimization based on chaotic
neighbourhood search to avoid premature convergence, in: Third International
Conference on Genetic and Evolutionary Computing, IEEE, 2009.
[31] J. Zhao, L. Lu, H. Sun, A modied two sub swarms exchange particle swarm
optimization, in: IEEE International Conference on Intelligent Computation
Technology and Automation, 2010.
[32] J. Zhao, L. Lu, H. Sun, X.-w. Zhang, A novel two sub swarm exchange particle
swarm optimization based on multi phases, in: IEEE International Conference
on Granular Computing, 2010.
[33] G. Chen, Simplified particle swarm optimization algorithm based on particle
classication, in: Sixth International Conference on Natural Computation, 2010.
[34] M.G. Epitropakis, V.P. Plagianakos, M.N. Vrahatis, Evolving cognitive and social
experience in particle swarm optimization through differential evolution, in:
IEEE, 2010.
[35] H. Ji, J. Jie, J. Li, Y. Tan, A bi-swarm particle optimization with cooperative co-evolution, in: International Conference on Computational Aspects of Social
Networks, IEEE, 2010.
[36] K.E. Parsopoulos, M.N. Vrahatis, Particle swarm optimization method for constrained optimization problems, in: Proceedings of the Euro-International
Symposium on Computational Intelligence, 2002.
[37] H. Liu, S. Xu, X. Liang, A modied quantum behaved particle swarm optimization for constrained optimization, in: IEEE International Symposium on
Intelligent Information Technology Application Workshop, 2008.
[38] Z. Huang, Y. Wang, C. Yang, C. Wu, A new improved quantum behaved particle
swarm optimization model, in: IEEE, 2009.
[39] T. Ray, K.M. Liew, A swarm with an effective information sharing mechanism
for unconstrained and constrained single objective optimization problems,
in: Proceedings of IEEE Congress on Evolutionary Computation, Seoul,
Korea, 2001, pp. 75–80.
[40] L. Jian, L. Zhiming, C. Peng, Solving constrained optimization via dual particle
swarm optimization with stochastic ranking, in: IEEE International Conference
on Computer Science & Engineering, 2008, pp. 1215–1218.
[41] X. Hu, R.C. Eberhart, Solving constrained nonlinear optimization problems with
particle swarm optimization, in: Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics, USA, 2002.
[42] B. Yang, Y. Chen, Z. Zhao, Q. Han, A master slave particle swarm optimization
algorithm for solving constrained optimization problems, in: Proceedings of
6th World Conference on Intelligent Control and Automation, Dalian, China,
2006.
[43] J. Kennedy, R. Eberhart, A discrete binary version of particle swarm optimization, in: Proceeding of the Conference on System, Man and Cybernetics, IEEE
Service Center, NJ, USA, 1997, pp. 4104–4109.
[44] J. Liu, X. Fan, The analysis and improvement of binary particle swarm optimization, in: International Conference on Computational Intelligence and Security,
2009.
[45] X. Jun, H. Chang, The discrete binary version of the improved particle swarm
optimization algorithm, in: IEEE, 2009.
[46] L. Zhen, L. Wang, X. Wang, L. Zhen, Z. Huang, A novel PSO inspired
probability-based binary optimization algorithm, in: International Symposium
on Information Science and Engineering, 2008.
[47] A. Cervantes, I. Galvan, P. Isasi, A comparison between Pittsburgh and Michigan
approaches for the binary PSO algorithm, in: IEEE, 2005.
[48] M.Q. Chen, Second generation particle swarm optimization, in: International
Conference on Intelligent Computation Technology and Automation, 2008.
[49] Y. Zhao, X. An, W. Luo, Hybrid particle swarm optimization based on parallel
collaboration.
[50] B. Yang, Y. Chen, Z. Zhao, Survey on applications of particle swarm optimization in electric power systems, in: IEEE International Conference, May 30–
June 1, 2007.
[51] R.-J. Wai, S. Cheng, Y.-C. Chen, in: 6th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2011.
[52] B. Zhang, Y. Yang, L. Gan, Dynamic control of wind/photovoltaic hybrid power
systems, in: IEEE International Conference on Industrial Technology (IEEE ICIT
2008), 2008.
[53] L. Wang, C. Singh, Compromise between cost and reliability in optimum design
of an autonomous hybrid power system using mixed-integer PSO algorithm,
in: IEEE, 2007.
[54] R. Belfkira, O. Hajji, C. Nichita, G. Barakat, Optimal sizing of stand-alone hybrid
wind/PV system with battery storage, 12 February 2012.
[55] S. Abedi, H.G. Ahangar, M. Nick, S.H. Hosseinian, Economic and reliable design
of a hybrid PV-wind-fuel cell energy system using differential evolutionary
algorithm, in: 19th Iranian Conference on Electrical Engineering (ICEE), 2011.
[56] S. Akshat Kumar, B. Prabodh, Swarm intelligence based optimal sizing of solar
PV, fuel cell and battery hybrid system, in: International Conference on Power
and Energy Systems, 2012.
[57] M. Bashir, J. Sadeh, Size optimization of new hybrid stand-alone renewable
energy system considering a reliability index, in: 11th International Conference
on Environment and Electrical Engineering (EEEIC), 2012.
[58] B. Tudu, S. Majumder, K.K. Mandal, N. Chakraborty, Comparative performance
study of genetic algorithm and particle swarm optimization applied on off-grid renewable hybrid energy system, Swagatam Das, 2012.
[59] J.-H. Koh, H. Song, B. Choi, Optimal allocation problem of PV systems using
discrete particle swarm optimization with a hybrid discretization scheme,
ISIS, 2011.
[60] Y.S. Zhao, J. Zhan, Y. Zhang, D.P. Wang, B.G. Zou, The optimal capacity configuration of an independent wind/PV hybrid power supply system based on
improved PSO algorithm, in: 8th International Conference on Advances in Power System Control, Operation and Management (APSCOM 2009),
2009.
[61] B. Ajay Kumar, R.A. Gupta, Rajesh Kumar, Optimization of hybrid PV/wind
energy system using Meta Particle Swarm Optimization (MPSO).
[62] A. Navaeefard, S.M.M. Tafreshi, M. Barzegari, A.J. Shahrood, Optimal sizing of
distributed energy resources in microgrid considering wind energy uncertainty
with respect to reliability, in: IEEE International Energy Conference, 2010.
[63] S.I. Sulaiman, T.K.A. Rahman, I. Musirin, Artificial immune system for sizing
grid-connected photovoltaic system, in: 5th International Power
Engineering and Optimization Conference (PEOCO), 2011.
[64] L. Wang, C. Singh, PSO-based multidisciplinary design of a hybrid power generation system with statistical models of wind speed and solar insolation, in:
IEEE, 2006.
[65] J.J. Soon, K.-S. Low, Optimizing photovoltaic model parameters for simulation,
in: IEEE International Symposium on Industrial Electronics (ISIE), 2012.
[66] M. Azab, Optimal power point tracking for stand-alone PV system using particle
swarm optimization, International Journal of Renewable Energy Technology
(2012).
[67] M.K. Alam, F.H. Khan, A.S. Imtiaz, An efficient power electronics solution for
lateral multi-junction solar cell systems, in: IEEE Industrial Electronics Society
Annual Conference, 2011.
[68] W. Al-Saedi, S.W. Lachowicz, D. Habibi, An optimal current control strategy
for a three-phase grid-connected photovoltaic system using particle swarm
optimization, in: IEEE, 2011.

[69] K. Ishaque, Z. Salam, A. Shamsudin, Application of particle swarm optimization for maximum power point tracking of PV system with direct control
method, in: 37th Annual Conference on IEEE Industrial Electronics Society,
2011.
[70] K. Ishaque, Z. Salam, M. Amjad, S. Mekhilef, An improved particle swarm optimization (PSO) based MPPT for PV with reduced steady-state oscillation, IEEE
Transactions on Power Electronics 27 (August (8)) (2012).
[71] H.H. Tumbelaka, M. Miyatake, A grid current-controlled inverter with particle swarm optimization MPPT for PV generators, World Academy of Science,
Engineering and Technology 43 (2010).
[72] Q. Fu, N. Tong, A new PSO algorithm based on adaptive grouping for photovoltaic MPP prediction, in: International Workshop on Intelligent Systems and
Applications, China, 2010.
[73] S. Dehghan, B. Kiani, A. Kazemi, A. Parizad, Optimal sizing of a hybrid wind/PV
plant considering reliability indices, World Academy of Science, Engineering
and Technology 32 (2009).
[74] N. Boutasseta, PSO-PI based control of photovoltaic arrays, International Journal
of Computer Applications (2012).
[75] M.S. Ngan, C.W. Tan, Multiple peaks tracking algorithm using particle swarm
optimization incorporated with artificial neural network, World Academy of
Science, Engineering and Technology 58 (2011).
[76] Y. Kondo, V. Phimmasone, Y. Ono, M. Miyatake, Verification of efficacy of PSO-based MPPT for photovoltaics, International Conferences, 2010.
[77] V. Phimmasone, Y. Kondo, T. Kamejima, M. Miyatake, Verification of efficacy
of the improved PSO-based MPPT controlling multiple photovoltaic arrays,
in: 9th IEEE International Conference on Power Electronics and Drive Systems
(IEEE PEDS 2011), vol. 343, 2011, pp. 881–883.
[78] M. Miyatake, M. Veerachary, F. Toriumi, N. Fujii, H. Ko, Maximum power point
tracking of multiple photovoltaic arrays: a PSO approach, IEEE Transactions
on Aerospace and Electronic Systems, 2011.
[79] P. Elanchezhian, Soft-switching boost converter for photovoltaic power generation system with PSO based MPPT, International Journal of Communications and Engineering 6 (March (1)) (2012).
