
Electric Power Systems Research 70 (2004) 203–210

Application of particle swarm optimization technique and its variants to generation expansion planning problem
S. Kannan a,∗ , S. Mary Raja Slochanal b , P. Subbaraj a , Narayana Prasad Padhy c
a Electrical Engineering Department, A.K. College of Engineering, Anand Nagar, Krishnankoil 626190, India
b Thiagarajar College of Engineering, Madurai, India
c Indian Institute of Technology, Roorkee, India
∗ Corresponding author. E-mail address: kannaneee@rediffmail.com (S. Kannan).

Received 2 May 2003; received in revised form 20 November 2003; accepted 9 December 2003

Abstract

This paper presents the application of the particle swarm optimization (PSO) technique and its variants to the least-cost generation expansion planning (GEP) problem. The GEP problem is a highly constrained, combinatorial optimization problem whose solution can, in principle, be obtained by complete enumeration. PSO is a swarm intelligence (SI) technique that uses group intelligence along with individual intelligence to solve combinatorial optimization problems. A novel 'virtual mapping procedure' (VMP) is introduced to enhance the effectiveness of the PSO approaches, and a penalty function approach (PFA) is used to reduce the number of infeasible solutions in subsequent iterations. In addition to simple PSO, variants such as the constriction factor approach (CFA), the Lbest model, hybrid PSO (HPSO), stretched PSO (SPSO) and composite PSO (C-PSO) are applied to test systems. The differential evolution (DE) technique is used for the parameter setting of C-PSO. PSO and its variants are applied to a synthetic test system with five types of candidate units over 6- and 14-year planning horizons. The results obtained are compared with dynamic programming (DP) in terms of speed and efficiency.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Combinatorial optimization; Composite PSO; Constriction factor approach; Differential evolution; Generation expansion planning; Particle swarm
optimization; Penalty function approach; Stretched PSO; Swarm intelligence; Virtual mapping procedure

1. Introduction

The generation expansion planning (GEP) problem is to determine WHAT types of generating units should be commissioned and WHEN they should be brought online over a long-range planning horizon [1,2]. The main objective of GEP is to minimize the total investment and operating costs associated with the addition of new units while satisfying the reliability, fuel mix, and demand criteria. As GEP is a highly constrained, nonlinear, discrete optimization problem, it is highly challenging for decision makers. The solution of the GEP problem may be obtained by complete enumeration of every possible combination over the entire planning horizon. Since the 1950s, many optimization techniques such as linear programming, integer programming and dynamic programming (DP) have been applied to solve this combinatorial optimization problem [3,4].

Nowadays, evolutionary computation (EC) techniques are used to solve combinatorial optimization problems. The genetic algorithm (GA) is a search algorithm based on the mechanism of natural selection and genetics, and uses the concept of 'survival of the fittest' to find the optimal solution [5]. An improved genetic algorithm (IGA) with a stochastic crossover technique and elitism has been applied to solve the GEP problem [6]. Evolutionary programming (EP), another EC technique, is emerging as an efficient approach for various search, classification, and optimization problems [7]. Both GA and EP use genetic operators; EP does not perform the crossover operation of GA, but obtains the solution by selection, mutation, and competition. EP with a domain mapping procedure has been applied to the GEP problem [8].

Recently, swarm intelligence (SI) techniques such as particle swarm optimization (PSO) and ant colony optimization (ACO) have been gaining importance. They are used to solve combinatorial optimization problems due to their simplicity of coding and consistency in performance. These techniques use swarm behavior to solve the problem, i.e. they use the concept of group intelligence along with individual intelligence. The PSO technique has been used to solve continuous combinatorial optimization problems [9,10].

In this paper, five variants of PSO [9] are applied to solve the GEP problem. A virtual mapping procedure (VMP) and a penalty function approach (PFA) are introduced to reduce the number of infeasible solutions that appear in subsequent iterations. The PSO variants incorporating VMP and PFA are applied to a test system with 15 existing units and five types of candidate units for planning horizons of 6 and 14 years. The test results are compared with DP and the performances are evaluated.

2. Formulation of the least-cost GEP problem

2.1. Objective function

The GEP problem is equivalent to finding a set of best decision vectors over a planning horizon that minimizes the investment and operating costs subject to several constraints. The cost (objective) function is represented by the following expression:

\min C = \sum_{t=1}^{T} [ I(U_t) + M(X_t) + O(X_t) - S(U_t) ]  (1)

where

X_t = X_{t-1} + U_t, \quad t = 1, \ldots, T  (2)

I(U_t) = (1+d)^{-2t} \sum_{i=1}^{N} ( CI_i \times U_{t,i} )  (3)

S(U_t) = (1+d)^{-T'} \sum_{i=1}^{N} ( CI_i \times \delta_i \times U_{t,i} )  (4)

M(X_t) = \sum_{s'=0}^{1} (1+d)^{-(1.5 + t' + s')} [ (X_t \times FC) + MC ]  (5)

O(X_t) = \sum_{s'=0}^{1} (1+d)^{-(1.5 + t' + s')} \times OC  (6)

t' = 2(t-1), \quad T' = 2T - t'  (7)

Here C is the total cost; U_t the vector of newly introduced units in stage t (1 stage = 2 years); X_{t-1} the cumulative capacity vector of existing units in stage (t - 1); X_t the cumulative capacity vector of existing units in stage t; I(U_t) the investment cost of the newly added units at the tth stage; M(X_t) the operation and maintenance cost of the existing and the newly introduced units; s' the variable used to indicate that the maintenance cost is calculated at the middle of each year; O(X_t) the outage cost of the existing and the newly introduced units; S(U_t) the salvage value of the newly added units at the tth interval; d the discount rate; CI_i the capital investment cost of the ith unit; \delta_i the salvage factor of the ith unit; T the length of the planning horizon (in stages); N the total number of different types of units; FC the fixed cost of the existing as well as the newly added units; MC the maintenance cost of the units; and OC the outage cost of the units.

2.2. Constraints

2.2.1. Upper construction limit
Let U_t be the units to be committed in the expansion plan at stage t; they must satisfy the maximum construction capacity of the units to be committed:

0 \le U_t \le U_{\max,t}  (8)

where U_{\max,t} is the maximum construction capacity of the units in stage t.

2.2.2. Reserve margin
The selected units must satisfy the minimum and maximum reserve margins:

(1 + R_{\min}) \times D_t \le \sum_{i=1}^{N} X_{t,i} \le (1 + R_{\max}) \times D_t  (9)

where R_{\min} is the minimum reserve margin, R_{\max} the maximum reserve margin, D_t the demand at the tth stage in MW, and X_{t,i} the cumulative capacity of the ith unit in stage t.

2.2.3. Fuel mix ratio
The GEP has different types of generating units, such as coal, liquefied natural gas (LNG), oil, and nuclear. The selected units along with the existing units of each type must satisfy the fuel mix ratio:

M_{\min}^{j} \le \frac{X_{t,j}}{\sum_{i=1}^{N} X_{t,i}} \le M_{\max}^{j}, \quad j = 1, 2, \ldots, N  (10)

where M_{\min}^{j} is the minimum fuel mix ratio of the jth type, M_{\max}^{j} the maximum fuel mix ratio of the jth type, and j the type of unit (type of fuel used: oil, LNG, coal, nuclear).

2.2.4. Reliability criterion
The selected units along with the existing units must satisfy the reliability criterion, the loss of load probability (LOLP):

\mathrm{LOLP}(X_t) \le \varepsilon  (11)

where \varepsilon is the reliability criterion expressed in LOLP.

In addition, the energy produced by each unit and the expected energy not served (EENS) are calculated using the 'equivalent energy function method'. The EENS indices are used for the outage cost calculation [2].
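As an illustration only, the following Python sketch shows one way the discounted plan cost of expressions (1)-(7) could be evaluated; the unit data, the discount rate, and the lumped treatment of the fixed, maintenance and outage costs are assumptions made here for the example and are not values from the paper.

```python
# Illustrative sketch of the cost function (1)-(7); unit data and the
# lumped fixed/maintenance/outage costs below are assumed, not from the paper.
import numpy as np

d = 0.08                            # assumed discount rate
T = 3                               # planning horizon in stages (1 stage = 2 years)
CI = np.array([500.0, 300.0])       # assumed capital cost per unit of each type
delta = np.array([0.1, 0.1])        # assumed salvage factors
FC = np.array([20.0, 15.0])         # assumed fixed cost per cumulative unit
MC, OC = 5.0, 2.0                   # assumed lumped maintenance / outage costs

def plan_cost(U, X0):
    """Total discounted cost of a plan U (T x N units added per stage)."""
    X = X0.copy()
    total = 0.0
    for t in range(1, T + 1):
        Ut = U[t - 1]
        X = X + Ut                           # Eq. (2): cumulative capacity vector
        tp = 2 * (t - 1)                     # Eq. (7): t'
        Tp = 2 * T - tp                      # Eq. (7): T'
        I = (1 + d) ** (-2 * t) * np.sum(CI * Ut)           # Eq. (3)
        S = (1 + d) ** (-Tp) * np.sum(CI * delta * Ut)      # Eq. (4)
        M = sum((1 + d) ** (-(1.5 + tp + sp)) * (np.sum(X * FC) + MC)
                for sp in (0, 1))                           # Eq. (5)
        O = sum((1 + d) ** (-(1.5 + tp + sp)) * OC
                for sp in (0, 1))                           # Eq. (6)
        total += I + M + O - S                              # Eq. (1)
    return total

U = np.array([[1, 0], [0, 2], [1, 1]])   # units added in each of the 3 stages
print(plan_cost(U, X0=np.zeros(2)))
```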
3. Overview of PSO and its variants

The basic idea of the PSO technique is the inherent rule adhered to by the members of a swarm of birds or fish, which enables them to move and synchronize without colliding, resulting in an amazing choreography [11,12]. PSO is similar to EC techniques in that a population of potential solutions to the problem under consideration is used to probe the search space. The major difference is that EC techniques use genetic operators, whereas SI techniques use the physical movements of the individuals in the swarm.

3.1. Parameters of PSO

PSO was developed through simulation of bird flocking in two-dimensional space. The position of each agent is represented in the X–Y plane by the position (s_x, s_y), the velocity v_x along the X-axis, and the velocity v_y along the Y-axis. Modification of the agent position is realized using the position and velocity information.

Bird flocking optimizes a certain objective function. Each agent knows its best value so far, called 'Pbest', which contains the information on position and velocities. This information is the analogy of the personal experience of each agent. Moreover, each agent knows the best value so far in the group, 'Gbest', among the Pbests. This information is the analogy of the knowledge of how the other neighboring agents have performed. Each agent tries to modify its position by considering the current positions (s_x, s_y), the current velocities (v_x, v_y), the individual intelligence (Pbest), and the group intelligence (Gbest).

The following equations are utilized in computing the position and velocities in the X–Y plane:

v_i^{k+1} = w \times v_i^{k} + C_1 \times rand_1 \times (Pbest_i - s_i^{k}) + C_2 \times rand_2 \times (Gbest - s_i^{k})  (12)

s_i^{k+1} = s_i^{k} + v_i^{k+1}  (13)

where v_i^{k+1} is the velocity of the ith individual at the (k + 1)th iteration, v_i^{k} the velocity of the ith individual at the kth iteration, w the inertia weight, C_1, C_2 positive constants with values in [0, 2], rand_1, rand_2 random numbers selected between 0 and 1, Pbest_i the best position of the ith individual, Gbest the best position among the individuals (group best), and s_i^{k} the position of the ith individual at the kth iteration.

The velocity of each agent is modified according to (12) and the position is modified according to (13). The weighting factor 'w' is modified using (14) to enable quick convergence:

w = w_{\max} - \frac{w_{\max} - w_{\min}}{iter_{\max}} \times iter  (14)

where w_{\max} is the initial weight, w_{\min} the final weight, iter the current iteration number, and iter_{\max} the maximum iteration number.
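A minimal Python sketch of the update rules (12)-(14) is given below; the fitness function, search bounds, and random seed are placeholders chosen for illustration and are not tied to the GEP formulation.

```python
# Minimal sketch of the simple PSO update (12)-(14) on an assumed test function.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_dims, max_iter = 20, 2, 200
w_max, w_min, c1, c2 = 0.8, 0.2, 2.2, 2.2
fitness = lambda s: np.sum(s ** 2, axis=1)       # assumed minimization target

s = rng.uniform(-5, 5, (n_particles, n_dims))    # positions
v = rng.uniform(-1, 1, (n_particles, n_dims))    # velocities
pbest, pbest_val = s.copy(), fitness(s)
gbest = pbest[np.argmin(pbest_val)]

for it in range(max_iter):
    w = w_max - (w_max - w_min) / max_iter * it              # Eq. (14)
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = w * v + c1 * r1 * (pbest - s) + c2 * r2 * (gbest - s)  # Eq. (12)
    s = s + v                                                  # Eq. (13)
    val = fitness(s)
    better = val < pbest_val
    pbest[better], pbest_val[better] = s[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best position:", gbest, "best value:", pbest_val.min())
```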
3.2. Variants of PSO

A few modifications are incorporated into simple PSO, and the modified techniques are called (1) the constriction factor approach (CFA), (2) the Lbest model, (3) hybrid PSO (HPSO), (4) stretched PSO (SPSO), and (5) composite PSO (C-PSO).

3.2.1. Constriction factor approach (CFA)
Both in simple PSO and in CFA, the maximum and minimum velocities are set to a priori values to avoid infeasible combinations. In PSO, these values are kept constant. However, in CFA, the velocity v_{i+1} is modified by a factor known as the constriction factor (\chi) such that v_{i+1} = \chi v_i. This modification increases the performance of the modified PSO. The constriction factor (\chi) is selected between (0, 1). By properly selecting the constriction factor, the velocities can be maintained in a constant interval without exceeding the set velocities.

The constriction factor value can be either fixed or varied randomly. In fixed CFA, a fixed value (say 0.78) is chosen. To improve the effectiveness of the approach, the value of \chi may be selected inversely proportional to w. The modified velocity equation is given below:

v_i^{k+1} = \chi [ v_i^{k} + C_1 \times rand_1 \times (Pbest_i - s_i^{k}) + C_2 \times rand_2 \times (Gbest - s_i^{k}) ]  (15)

3.2.2. Lbest model
In simple PSO, the velocities of the individuals are modified based on the personal experience and the group behavior. However, in the Lbest model, the velocities of the individuals are modified by considering the personal experience of the individual as well as the experience of the neighboring individuals (not the whole group behavior as with simple PSO). The velocity of an individual is modified by the following equation:

v_i^{k+1} = w \times v_i^{k} + C_1 \times rand_1 \times (Pbest_i - s_i^{k}) + C_2 \times rand_2 \times (Nbest - s_i^{k})  (16)

where 'Nbest' is the best value of the neighboring individuals. The number of neighboring individuals selected may be based on the size of the problem. The convergence and execution time depend upon the number of neighbors: the greater the number of neighbors, the closer to the optimum value and the higher the execution time, and vice versa.
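The sketch below contrasts the velocity updates (15) and (16); the acceleration constants follow the earlier sketch, while the ring-shaped neighborhood used to pick Nbest is an assumption introduced here for illustration (the paper only states that a few nearest neighbors are consulted).

```python
# Velocity-update variants: constriction factor (15) and Lbest (16).
import numpy as np

rng = np.random.default_rng(1)
c1 = c2 = 2.2

def cfa_velocity(v, s, pbest, gbest, chi=0.8):
    """Eq. (15): the whole update is scaled by the constriction factor chi."""
    r1, r2 = rng.random((len(s), 1)), rng.random((len(s), 1))
    return chi * (v + c1 * r1 * (pbest - s) + c2 * r2 * (gbest - s))

def lbest_velocity(v, s, pbest, pbest_val, w=0.5, k=3):
    """Eq. (16): Gbest is replaced by the best Pbest among k ring neighbors
    (ring topology is an assumption made for this sketch)."""
    n = len(s)
    nbest = np.empty_like(s)
    for i in range(n):
        neigh = [(i + j) % n for j in range(-(k // 2), k // 2 + 1)]
        nbest[i] = pbest[neigh[np.argmin(pbest_val[neigh])]]
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    return w * v + c1 * r1 * (pbest - s) + c2 * r2 * (nbest - s)
```

In an actual run, either function would simply replace the velocity line of the simple-PSO loop sketched above.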
3.2.3. Hybrid PSO (HPSO)
The modified method is called hybrid PSO (HPSO), since it uses the basic concepts of PSO, such as velocity updating and position modification, together with the selection mechanism of EC techniques:

basic concepts of PSO ⊕ selection process of EC techniques ⇒ hybrid PSO

The solution obtained by simple PSO depends on the individual best (Pbest) and the group best (Gbest). Hence, these Pbest and Gbest parameters limit the search space. The search space can be broadened by introducing the selection mechanism into simple PSO. A competitive selection process is used in HPSO.

In the competitive selection process, the individuals with the worst fitness function values are replaced by those with better fitness function values. Due to this, two or more particles move from the same position with different velocities.
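A small Python sketch of this competitive selection step is given below; the fraction of particles exchanged follows the 40% exchange rate of Table 4, while the array layout is assumed.

```python
# Sketch of HPSO's competitive selection: the worst particles are moved to the
# positions of the best ones while keeping their own velocities.
import numpy as np

def competitive_selection(s, v, fitness_vals, exchange_rate=0.4):
    n = len(s)
    k = max(1, int(exchange_rate * n))    # number of particles to replace (Table 4: 40%)
    order = np.argsort(fitness_vals)      # ascending: best first (minimization)
    best, worst = order[:k], order[-k:]
    s[worst] = s[best]                    # worst positions replaced by best positions
    # velocities are unchanged, so duplicated particles leave the same point
    # with different velocities, as described in the text
    return s, v
```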
3.2.4. Stretched PSO (SPSO)
The main drawback of many global optimization techniques is the problem of converging to local minima. The GEP problem has numerous local minima. When the search begins, the solution may fall into and stagnate in a local minimum. In such a situation, a technique called 'stretching' (SPSO) is employed to escape from the local minimum.

The equations used are two-stage transformation equations. In the first stage, FC_i (the original fitness function) is transformed into G_i using (17). This G_i eliminates all the local minima located above \bar{i}. The second transformation stretches the neighborhood of FC_i upward without affecting the global minimum, using (18). The two-stage transformation equations are given below:

G_i = FC_i + \gamma_1 \, \| i - \bar{i} \| \, ( \mathrm{sgn}(FC_i - FC_{\bar{i}}) + 1 )  (17)

H_i = G_i + \gamma_2 \, \frac{\mathrm{sgn}(FC_i - FC_{\bar{i}})}{\tanh( \mu ( G_i - G_{\bar{i}} ) )}  (18)

where \bar{i} represents one of the local minima, G_i is the first transformation function, which eliminates all the local minima located above \bar{i}, FC_i the fitness function value of the ith individual, H_i the second transformation function, which stretches the whole neighborhood of \bar{i} upwards, \gamma_1, \gamma_2, \mu are positive constants, and \mathrm{sgn}(\cdot) is the signum function, which is triple valued (-1, 0, 1).

Neither stage alters the local minima located below \bar{i}. Thus, the location of the global minimum is left unaltered.
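A compact Python sketch of the two-stage stretching transformation (17)-(18) follows; the constants gamma1, gamma2 and mu and the Euclidean distance used for the stretching term are illustrative assumptions, and G at the local minimum is taken equal to FC there since the sign term vanishes.

```python
# Sketch of the two-stage stretching transformation (17)-(18) applied around a
# detected local minimum; gamma1, gamma2, mu are assumed illustrative values.
import numpy as np

gamma1, gamma2, mu = 1e3, 1.0, 1e-6   # assumed positive constants

def stretch(fc, s, s_bar, fc_bar):
    """Return H_i for a particle with fitness fc at position s, given the
    detected local-minimum position s_bar with fitness fc_bar."""
    if fc <= fc_bar:
        return fc                      # points at/below the local minimum unaltered
    dist = np.linalg.norm(np.asarray(s) - np.asarray(s_bar))
    g = fc + gamma1 * dist * (np.sign(fc - fc_bar) + 1)          # Eq. (17)
    g_bar = fc_bar                                               # G at the minimum
    h = g + gamma2 * np.sign(fc - fc_bar) / np.tanh(mu * (g - g_bar))  # Eq. (18)
    return h
```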
3.2.5. Composite PSO (C-PSO)
The PSO parameters (w, C_1, C_2) in the previous sections are selected by trial and error. Selecting the PSO parameters by some other algorithm, such as GA, EP, or differential evolution (DE), may produce a better result. The DE algorithm is an EC technique developed by Storn and Price [13]. Here, the DE algorithm is used for the parameter selection of PSO. The C-PSO algorithm employing the DE algorithm is explained below.

3.2.6. C-PSO algorithm

Step 1. Initialize 'i' to zero and set the maximum number of iterations to I. Generate the initial positions of the particles (s_i), the initial velocities (v_i), and the initial PSO parameters (X_i = [w, C_1, C_2]) randomly. The size of s, v and X is equal to N_p, where N_p is the size of the population and 'i' is the current iteration number.

Step 2. For each X_i, calculate v_i and s_i using (12) and (13). Calculate the fitness function value using (19).

Step 3. Apply the mutation, crossover, and selection operators of the DE algorithm on X_i. Let X* be the best individual produced by this process. Replace X_i by X* and repeat the process until an a priori set number of DE iterations is reached.

Step 4. The process continues from Step 2 until the stopping criterion (maximum number of iterations 'I') is met.
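The following Python sketch illustrates the kind of DE generation described in Step 3 for tuning the parameter vector X = [w, C1, C2]; the DE/rand/1/bin scheme, the parameter bounds, and the run_pso evaluation hook are assumptions made for illustration rather than the authors' exact implementation.

```python
# Sketch of one DE generation over PSO parameter vectors X = [w, C1, C2].
# run_pso(params) is a hypothetical hook that runs a short PSO with the given
# parameters and returns the best fitness found (lower is better).
import numpy as np

rng = np.random.default_rng(2)
lower, upper = np.array([0.2, 0.0, 0.0]), np.array([0.9, 2.0, 2.0])  # assumed bounds
F, CR = 0.5, 0.9                                  # assumed DE control parameters

def de_generation(pop, scores, run_pso):
    """DE/rand/1/bin step: mutate, crossover, and greedily select."""
    n, dim = pop.shape
    new_pop, new_scores = pop.copy(), scores.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lower, upper)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True            # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        trial_score = run_pso(trial)
        if trial_score < scores[i]:                # greedy selection
            new_pop[i], new_scores[i] = trial, trial_score
    return new_pop, new_scores
```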
4. Implementation of PSO and its variants for the GEP problem

The additional concepts introduced to improve the effectiveness of the proposed techniques are VMP and PFA.

4.1. Virtual mapping procedure (VMP)

To improve the effectiveness of the proposed approach, a novel mapping procedure called the 'virtual mapping procedure' is used to solve the least-cost GEP problem. This mapping procedure transforms the numbers of candidate units for each year into a dummy variable, i.e. it maps the yearly cumulative capacity numbers into one dummy variable. This dummy variable specifies the position of each agent in the swarm. The position is modified by adding the velocity to this dummy variable. This improves the performance of PSO.

The main advantage of using VMP for this GEP problem is that it avoids the dimensionality problem, since it handles a single dummy variable. In addition, it needs less memory space. Further, because the mapped variable takes part in all the PSO-related operations, a small change in the mapped variable reduces the number of infeasible solutions.

The steps involved in VMP are as follows (a code sketch is given after the tables below):

• Form all the possible combinations of the candidate units.
• Multiply the number of units by the corresponding capacities and add them to get the total capacity (see Table 1).
• Arrange the total capacities in ascending order (see Table 2).

Thus, a multivariable problem can be mapped to a single-variable problem.

To illustrate, the VMP is explained with two 1000 MW units and two 700 MW units in Table 1. The capacities and the corresponding variables are arranged in ascending order, as shown in Table 2. If the dummy variable value is 4, it indicates one 1000 MW unit and one 700 MW unit.

Table 1
Virtual mapping procedure—combination generation

X1 × (1000 MW)   X2 × (700 MW)   Total capacity (MW)
0                0               0
0                1               700
0                2               1400
1                0               1000
1                1               1700
1                2               2400
2                0               2000
2                1               2700
2                2               3400

Table 2
Virtual mapping procedure—mapping with dummy variable

X1 × (1000 MW)   X2 × (700 MW)   Total capacity (MW)   Y (dummy variable)
0                0               0                     0
0                1               700                   1
1                0               1000                  2
0                2               1400                  3
1                1               1700                  4
2                0               2000                  5
1                2               2400                  6
2                1               2700                  7
2                2               3400                  8
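A short Python sketch of the mapping in Tables 1 and 2 is given below; the two unit types and their construction limits are the illustrative values of the tables, and the function name is ours.

```python
# Sketch of the virtual mapping procedure: enumerate candidate-unit
# combinations, sort them by total capacity, and index them by a single
# dummy variable (as in Tables 1 and 2).
from itertools import product

unit_sizes = [1000, 700]   # MW, from the illustration in Table 1
max_units = [2, 2]         # at most two units of each type

def build_mapping():
    combos = list(product(*(range(m + 1) for m in max_units)))
    total = lambda c: sum(n * size for n, size in zip(c, unit_sizes))
    ordered = sorted(combos, key=total)            # ascending total capacity
    return {y: c for y, c in enumerate(ordered)}   # dummy variable -> combination

mapping = build_mapping()
print(mapping[4])   # -> (1, 1): one 1000 MW unit and one 700 MW unit
```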
4.2. Initialization of the agents and their velocities in the swarm

The initial agents are selected randomly from the mapped variable. The velocities of each agent are also selected randomly between 0 and 1. The size of the swarm is (N_p × n), where N_p is the total number of agents in the swarm and 'n' is the number of stages. Here, a 2-year planning period is assumed as one stage.

4.3. Updating the velocity

The velocity is updated by considering the current velocity of the agent, the best fitness function value of that agent, and the best fitness function value among the agents in the swarm. The velocity of each agent is modified by (12). The value of the weighting factor 'w' in (12) is decremented by (14) to enable quicker convergence.

4.4. Updating the position

The position of each agent is updated by adding the updated velocity to the current position of the individual in the swarm. The position of the ith individual at the (k + 1)th iteration is found by (13).

4.5. Fitness function evaluation (penalty function approach)

The fitness function of the agents is evaluated using (1). As the individuals are selected randomly, there is a possibility of obtaining infeasible solutions. The infeasible solutions are avoided in the subsequent iterations by using the PFA. In this approach, the objective function is evaluated and the reserve margin, LOLP and fuel mix ratio are checked for constraint violations. If the constraints are violated, then a proportional penalty value is added to the objective function. The objective function with the penalty function approach is given as the fitness function cost (FC):

FC_i = C_i + \alpha_1 \psi_1 + \alpha_2 \psi_2 + \alpha_3 \psi_3  (19)

where FC_i is the fitness function value of the ith individual, C_i the objective function value of the ith individual, \alpha_1 the penalty factor for the reserve margin constraint, \psi_1 the violation amount of the reserve margin constraint, \alpha_2 the penalty factor for the fuel mix ratio constraint, \psi_2 the violation amount of the fuel mix ratio constraint, \alpha_3 the penalty factor for the LOLP constraint, and \psi_3 the violation amount of the LOLP constraint.
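A hedged sketch of the penalized fitness (19) is shown below; the reserve-margin bounds come from Section 5.2, while the penalty factors, the LOLP threshold, and the simple violation measures are assumptions introduced for illustration.

```python
# Sketch of the penalty function approach (19): the raw plan cost plus
# penalty-weighted constraint violations for one stage. Penalty factors and
# the simple violation measures below are illustrative assumptions.
R_MIN, R_MAX = 0.2, 0.4                     # reserve margin bounds (Section 5.2)
alpha1, alpha2, alpha3 = 1e6, 1e6, 1e8      # assumed penalty factors

def reserve_violation(total_capacity, demand):
    """Amount by which the reserve-margin constraint (9) is violated."""
    low, high = (1 + R_MIN) * demand, (1 + R_MAX) * demand
    return max(0.0, low - total_capacity) + max(0.0, total_capacity - high)

def lolp_violation(lolp, eps=0.01):
    """Amount by which the LOLP criterion (11) is violated (eps is assumed)."""
    return max(0.0, lolp - eps)

def fitness(cost, total_capacity, demand, fuel_mix_viol, lolp):
    """Eq. (19): objective value plus penalty-weighted violation amounts."""
    return (cost
            + alpha1 * reserve_violation(total_capacity, demand)
            + alpha2 * fuel_mix_viol      # violation of (10), computed elsewhere
            + alpha3 * lolp_violation(lolp))
```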
4.6. Termination criteria

The process continues until the maximum number of iterations, set a priori, is reached.

Most of the variant techniques differ in the way the velocity is updated. For CFA and the Lbest model, the velocity is updated by (15) and (16), respectively. The HPSO algorithm is similar to simple PSO, but an additional selection mechanism is introduced to avoid converging to a local minimum: in HPSO, the positions of the worst individuals are replaced by the positions of the best individuals through the competitive selection process.

In SPSO, simple PSO is applied as a starter. When the individuals stagnate in a local minimum, stretching is applied to the original objective function value by the two-stage transformations given by (17) and (18). Because of these transformations, the global minimum is not altered.

The C-PSO procedure starts with the initial solutions taken from the mapped variable.
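Putting the pieces together, the following Python outline shows one way the solution loop of Sections 4.1-4.6 could be organized for simple PSO; the penalized_fitness hook stands in for decoding the dummy variables via the VMP, costing the plan with (1)-(7) and adding the penalties of (19), and the rounding of positions to valid dummy values is an assumption made for this sketch.

```python
# High-level outline of simple PSO applied to GEP with VMP and PFA.
import numpy as np

def penalized_fitness(position):
    """Hypothetical hook: decode dummy variables into a plan (VMP), cost it
    with (1)-(7) and add the penalties of (19). A toy surrogate is used here."""
    return float(np.sum(position))

def solve_gep(n_stages, n_particles=20, max_iter=200,
              w_max=0.8, w_min=0.2, c1=2.2, c2=2.2, y_max=8):
    rng = np.random.default_rng(3)
    s = rng.integers(0, y_max + 1, (n_particles, n_stages)).astype(float)
    v = rng.random((n_particles, n_stages))
    evaluate = lambda pos: np.array([penalized_fitness(p) for p in pos])
    pbest, pbest_val = s.copy(), evaluate(s)
    gbest = pbest[np.argmin(pbest_val)]
    for it in range(max_iter):                        # termination criterion (4.6)
        w = w_max - (w_max - w_min) / max_iter * it   # Eq. (14)
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (pbest - s) + c2 * r2 * (gbest - s)   # Eq. (12)
        s = np.clip(np.rint(s + v), 0, y_max)         # Eq. (13) on the dummy variable
        val = evaluate(s)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = s[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

print(solve_gep(n_stages=3))
```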
5. Test results

All the PSO techniques and DP for the GEP problem were implemented in MATLAB on a PC with a Pentium 500 MHz processor.

5.1. Test system description

The forecasted load demand is given in Table 3. The technical data for the existing units and the candidate units are taken from [6]. The test system consists of 15 existing units and five different fuel types of new candidate units, with maximum construction capacities of 5, 4, 3, 3, and 3 units assumed to be available in every stage. Three test cases are considered.

Table 3
Forecasted load demand

Stage       0     1     2     3      4      5      6      7
Year        2003  2005  2007  2009   2011   2013   2015   2017
Peak (MW)   5000  7000  9000  10000  12000  13000  14000  15000

In test case-1, a 6-year planning horizon with a total of 18 candidate units is considered. Test case-2 is the same as test case-1, but the maximum construction capacity of each fuel type is doubled, giving a total of 36 units. Test case-3 is the same as test case-1, but the planning horizon is increased to 14 years.

5.2. Parameters for GEP

The parameters pertaining to the GEP problem are taken from [6]. Practically, the lower and upper bounds for the reserve margin are set to 20 and 40% to cover failures of generating units and to allow maintenance activities. The salvage factor is included to account for the depreciation of the units when calculating the salvage value of the newly added units. In addition, the following parameters are assumed and used in solving the GEP problem. The unserved energy (EENS) cost is set at 0.05 Rs./kWh. The initial period is 2 years. The investment cost, maintenance cost, and salvage cost are assumed to occur at the beginning of the year, the middle of the year, and at the end of the planning period, respectively. The outage cost is determined using the EENS indices.

5.3. Parameters for PSO and its variants

The parameters of PSO and its variants, selected through trial and error, are given in Table 4.
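The parameter choices of Table 4 and Section 5.2 could be collected in a configuration structure such as the Python sketch below; the dictionary layout and key names are ours, while the values are those reported in the paper.

```python
# Illustrative configuration mirroring Section 5.2 and Table 4.
GEP_PARAMS = {
    "reserve_margin": (0.20, 0.40),     # lower/upper bounds
    "eens_cost_rs_per_kwh": 0.05,
    "stage_years": 2,
}

PSO_PARAMS = {
    "simple_pso": {"pop": 20, "iters": 200, "w": (0.8, 0.2), "c1": 2.2, "c2": 2.2, "chi": 1.0},
    "cfa":        {"pop": 20, "iters": 200, "w": (0.8, 0.2), "c1": 2.2, "c2": 2.2, "chi": 0.8},
    "lbest":      {"pop": 20, "iters": 200, "w": (0.8, 0.2), "c1": 2.2, "c2": 2.2, "neighbors": 3},
    "hpso":       {"pop": 50, "iters": 200, "w": (0.8, 0.2), "c1": 2.2, "c2": 2.2, "exchange_rate": 0.4},
    "spso":       {"pop": 20, "iters": 200, "w": (0.8, 0.2), "c1": 2.2, "c2": 2.2},
    "c_pso":      {"pop": 20, "iters": 100, "params_from": "DE"},
}
```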
5.4. Results and discussion

The results for the three test cases, shown in Tables 5-7, give the best and worst values and the success rate, with the minimum and maximum error percentage, of each algorithm. The success rate is the ratio of the number of times the best solution is obtained to the number of test runs. The success rate of each algorithm differs.

5.4.1. Test case-1
This test case consists of five different fuel types of 18 new candidate units with a planning horizon of 6 years. The results of test case-1 for 100 test simulation runs of each algorithm are given in Table 5. Simple PSO is the basis for all PSO techniques. It has a success rate of 68% for this simple test case, and the best result was obtained at the 180th iteration with an execution time of 285 s.

In simple PSO, as the velocity is modified based on the Gbest and Pbest of the current individuals, the velocity exceeds the predefined value under some circumstances. To avoid this difficulty, the velocity is restricted to some maximum set value. This is achieved in CFA by multiplying the velocity term by the constriction factor, as explained in Section 3.2.1. The CFA took 130 iterations to converge to the best solution, whereas simple PSO took 180 iterations. The success rate of the CFA is 83%. Hence, the performance of the CFA is better than that of simple PSO.

In simple PSO, the particles move towards their previous best locations and the best position among all particles, whereas in the Lbest model the particles choose some 'n' nearest neighbors and communicate with them. Because of this neighborhood information, the particles moving in the search space have more information regarding the solution space when compared to the other variants of PSO. Hence, the success rate for finding the best result in the Lbest model is higher than that of simple PSO. The best solution was obtained at the 120th iteration. However, the execution time of the Lbest model is more than twice that of simple PSO. This is due to the fact that the individual particles of simple PSO move with the single Gbest, whereas in the Lbest model the particles move with the intelligence of two or three neighboring individuals along with their own personal experience, and hence the execution time increases.

Table 4
Parameters of PSO and its variants
Parameters Simple PSO CFA Lbest model HPSO SPSO C-PSO

Population size 20 20 20 50 20 20
Max. no. of iteration 200 200 200 200 200 100
wmax , wmin (0.8, 0.2) (0.8, 0.2) (0.8,0.2) (0.8,0.2) (0.8,0.2) Parameter selection by DE algorithm
C1 , C2 2.2 2.2 2.2 2.2 2.2
χ 1 0.8 1 1 1 1
Exchange rate – – – 40% – –
No. of neighbors selected – – 3 – – –

Table 5
Results obtained by PSO and its variants (test case-1: 18 candidate units with 6-year planning horizon)

PSO and its variants   Total cost (Rs. × 10^10)          Success rate (%)   Mean execution   Error (min–max) (%)
                       Best result    Worst result       (for 100 runs)     time (s)
Simple PSO             1.2009         1.2059             68                 285              0–0.42
CFA                    1.2009         1.2071             83                 198              0–0.52
Lbest model            1.2009         1.2024             91                 655              0–0.12
HPSO                   1.2009         1.2050             84                 415              0–0.34
SPSO                   1.2009         1.2012             90                 291              0–0.02
C-PSO                  1.2009         1.2044             80                 2223             0–0.30
DP                     1.2009         –                  100                320              0

Table 6
Results obtained by PSO and its variants (test case-2: 36 candidate units with 6-year planning horizon)

PSO and its variants   Total cost (Rs. × 10^10)          Success rate (%)   Mean execution   Error (min–max) (%)
                       Best result    Worst result       (for 100 runs)     time (s)
Simple PSO             1.1984         1.2224             11                 368              0–2.00
CFA                    1.1984         1.2019             54                 388              0–0.30
Lbest model            1.1984         1.2040             46                 1124             0–0.47
HPSO                   1.1984         1.2045             12                 430              0–0.51
SPSO                   1.1984         1.2047             61                 863              0–0.53
C-PSO                  1.1984         1.2068             52                 2369             0–0.70
DP                     1.1984         –                  100                4411             0

In SPSO, the optimal (best) result was obtained at the 110th iteration and the success rate was found to be 90%. When the algorithm stumbles upon a local minimum, or when there is no further improvement, the stretching technique is employed to escape from the stagnation. The execution time of this approach is also low when compared to the other variants.

If the worst individuals take part in the group, these individuals may not contribute much to finding the best solution. Hence, HPSO is used to improve the performance by omitting the worst individuals and introducing the best individuals through the selection mechanism. In HPSO, the positions of the worst individuals are replaced by the positions of the best individuals. From the new positions, the particles move with their own velocities. This approach produces the best result with a success rate of 84%, with a higher execution time than simple PSO, CFA, and SPSO.

The C-PSO technique has a moderate success rate of 80%, and its execution time is higher than that of the other variant techniques. This is due to the fact that C-PSO performs additional computations such as mutation, crossover, and selection.

Therefore, it may be concluded that CFA performs better in terms of execution time and that, if some sacrifice is made in execution time, the Lbest model and SPSO perform better than the other methods in terms of high success rate and lower error.

5.4.2. Test case-2
This test case consists of five different fuel types of 36 new candidate units with a planning horizon of 6 years. The results obtained for the 100 test simulation runs, with the best and worst solutions, the error percentage, and the mean execution time, are given in Table 6.

For this test case of 36 units, the Lbest model, CFA, and SPSO have higher success rates compared to HPSO and C-PSO. CFA is superior to all the other variants in terms of success rate and error, which is evident from Table 6.

Table 7
Results obtained by PSO and its variants (test case-3: 18 candidate units with 14-year planning horizon)

PSO and its variants   Total cost (Rs. × 10^10)          Success rate (%)   Mean execution   Error (min–max) (%)
                       Best result    Worst result       (for 25 runs)      time (min)
Simple PSO             2.1865         2.2049             0                  20               0.31–1.16
CFA                    2.1797         2.1926             51                 21               0–0.59
Lbest model            2.1797         2.1941             40                 67               0–0.66
HPSO                   2.1852         2.2067             0                  52               0.25–1.24
SPSO                   2.1797         2.1988             40                 29               0–0.88
C-PSO                  2.1797         2.2176             33                 98               0–1.73
DP                     2.1797         –                  100                2700             0
5.4.3. Test case-3
It can be seen that, barring simple PSO and HPSO, all the variants are able to obtain the best solution, which is evident from Table 7. From Table 7, it is easy to conclude that CFA performs better than all the other algorithms in terms of success rate, execution time, and error. The Lbest model also performs well with a lower error, but it takes a higher execution time than CFA.

It may be concluded that simple PSO may not give satisfactory results for large-dimension problems, which is evident from Tables 5-7. Modifications are necessary to improve the performance of simple PSO for large-dimension problems. SPSO is suitable for problems with numerous local minima. The Lbest model also performs well for this type of GEP problem. Overall, it is concluded that the performance of CFA is better than that of the other PSO algorithms in terms of success rate, execution time, and tolerance/error limit.

6. Conclusion

In this paper, an attempt is made to solve three different test cases of the GEP problem using PSO and its variants. A new technique called VMP is introduced in this paper. This procedure is used in all the PSO algorithms, and it is found that the VMP reduces the execution time and the chances of infeasible solutions while increasing the efficiency. The simulation results are used to compare the different algorithms with respect to speed, success rate, and error percentage of the objective function value. The results are also compared with the solution obtained by DP. The consistency of all the algorithms is verified with 100 test simulation runs for test case-1 and test case-2, and with 25 test simulation runs for test case-3. The PFA starts with infeasible solutions, progressively leads to feasible solutions in the subsequent iterations, and converges to the best solution. The results obtained for all three test cases ensure that the solutions obtained were within the tolerable error limit (2%).

For the GEP problem with a 6-year planning horizon, the execution time of the PSO techniques is found to be greater than that of the DP technique. For the 14-year planning horizon, DP takes much more time (45 h) to find the optimal solution. If the planning horizon and/or the number of units is increased, the execution time of DP increases; the time taken by DP increases exponentially as the dimension increases. However, the PSO techniques produced the best result in much less time when compared with DP.

Among the variants of PSO, it is found that the constriction factor approach performs comparatively better than the other algorithms in terms of success rate, execution time, and error limit.

Acknowledgements

The authors wish to thank their respective institutions for the necessary help and support.

References

[1] Introduction to the WASP-IV Model, User's Manual, International Atomic Energy Agency, Vienna, Austria, November 2001.
[2] X. Wang, J.R. McDonald, Modern Power System Planning, McGraw-Hill International, Singapore, 1994.
[3] J.S. Khokhar, Programming Models for the Electricity Industry, Commonwealth Publishers, New Delhi, 1997.
[4] A.K. David, R. Zhao, Integrating expert systems with dynamic programming in generation expansion planning, IEEE Trans. Power Syst. 4 (3) (1989) 1095–1101.
[5] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, MA, 1989.
[6] J.-B. Park, Y.-M. Park, J.-R. Won, K.Y. Lee, An improved genetic algorithm for generation expansion planning, IEEE Trans. Power Syst. 15 (3) (2000) 916–922.
[7] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, 1996.
[8] Y.-M. Park, J.-R. Won, J.-B. Park, D. Kim, Generation expansion planning based on an advanced evolutionary programming, IEEE Trans. Power Syst. 14 (1) (1999).
[9] K.E. Parsopoulos, M.N. Vrahatis, Recent approaches to global optimization problems through particle swarm optimization, Natural Computing 1 (2002) 235–306.
[10] H. Yoshida, K. Kawata, Y. Fukuyama, S. Takayama, Y. Nakanishi, A particle swarm optimization for reactive power and voltage control considering voltage security assessment, IEEE Trans. Power Syst. 15 (4) (2000) 1232–1239.
[11] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks (ICNN'95), vol. IV, Perth, Australia, 1995, pp. 1942–1948.
[12] Y. Fukuyama, Fundamentals of particle swarm optimization techniques, in: IEEE-PES Tutorial on Modern Heuristic Optimization Techniques with Application to Power Systems (Chapter 5).
[13] R. Storn, K. Price, Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341–359.
