
Journal of Applied Computer Science & Mathematics, no. 12 (6)/2012, Suceava

Personal Best Position Particle Swarm Optimization

1 Narinder SINGH, 2 S.B. SINGH
Department of Mathematics, Punjabi University, Patiala, Punjab, India-147002
1 narindersinghgoria@ymail.com, 2 sbsingh69@yahoo.com
Abstract: In this paper a new particle swarm optimization method is proposed. The approach rests on a novel philosophy of modifying the velocity update equation of the Standard Particle Swarm Optimization (SPSO): the gbest term is removed from the velocity update equation of SPSO. The performance of the proposed algorithm, Personal Best Position Particle Swarm Optimization (PBPPSO), has been tested on several benchmark problems. It is concluded that PBPPSO performs better than SPSO in terms of accuracy and quality of solution.

Keywords: Standard Particle Swarm Optimization (SPSO), PBPPSO (Personal Best Position Particle Swarm Optimization), gbest (Global Best Position), pbest (Personal Best Position), Current Position.

NOMENCLATURE

C1 : Self-confidence factor
C2 : Swarm-confidence factor. (The parameters C1 and C2 in equation (2) are not critical for PSO's convergence and the alleviation of local minima; a cognitive parameter C1 larger than the social parameter C2 may be used, but with C1 + C2 = 4.)
f : Fitness function (objective function)
y_ij : Personal best position of the i-th particle in the j-th dimension
ŷ_j : Global best position in the j-th dimension
v_ij(t) : Old velocity of the i-th particle in the j-th dimension
v_ij(t+1) : New (updated) velocity of the i-th particle in the j-th dimension
x_ij(t) : Old position of the i-th particle in the j-th dimension
x_ij(t+1) : New (updated) position of the i-th particle in the j-th dimension
Vmax : Maximum velocity parameter; it limits the maximum jump that a particle can make in one step
R : Real number
R^n : Set of real n-tuples
Swarm size : Number of particles in the swarm. (It affects the run-time significantly, so a balance between variety (more particles) and speed (fewer particles) must be sought.)
t : Time
Δt : Time increment
U(0,1) : Uniform distribution between 0 and 1
w : Inertia weight. (Its role in equation (2) is considered critical for the PSO's convergence behavior; it is employed to control the impact of the previous history of velocities on the current one.)
χ : Constriction coefficient
r1, r2 : Random numbers between 0 and 1. (They are used to maintain the diversity of the population and are uniformly distributed in the range [0,1].)

I. INTRODUCTION

Optimization is the art of selecting the best alternative(s) amongst a given set of options, or the process of finding the largest or smallest value that a given function can attain in its domain of definition. The function to be optimized may be linear or nonlinear. Many real-life problems cannot be solved without the help of robust optimization techniques, which are designed from the mathematical theory of optimization. Particle Swarm Optimization is one such important robust optimization technique.

Particle swarm optimization (PSO) is a global optimization method based on a metaphor of social interaction [1], [2]. Since its inception, PSO has found applications in all areas of science and engineering [3]. Over the past several years it has been successfully applied in many research and application areas, and it has been demonstrated that PSO can obtain better results faster and more cheaply than other optimization methods.

A PSO algorithm maintains a swarm of individuals (called particles), where each particle represents a candidate solution. Particles follow a very simple behavior: emulate the success of neighboring particles and their own past successes. The position of a particle is therefore influenced by the best particle in its neighborhood as well as the best solution found by the particle itself. The position x_i is adjusted using

    x_ij(t+1) = x_ij(t) + v_ij(t+1)                (1)

where the velocity component v_ij(t+1) represents the step size.

Shi and Eberhart [11] proposed to use an inertia weight parameter:

    v_ij(t+1) = w v_ij(t) + c1 r1j (y_ij − x_ij) + c2 r2j (ŷ_j − x_ij)                (2)

Eberhart and Shi suggested using an inertia weight that decreases over time, typically from 0.9 to 0.4. This has the effect of narrowing the search, gradually changing from an exploratory to an exploitative mode.
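
To make the update concrete, here is a minimal sketch of one SPSO iteration in Python. It is an illustration only, not the authors' implementation (which, per Section VI, was written in C); the values w = 0.7, c1 = c2 = 1.5 and Vmax = 100 are taken from the settings listed in Section V.

    import numpy as np

    def spso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=100.0):
        # x, v, pbest: arrays of shape (n_particles, dim); gbest: shape (dim,)
        r1 = np.random.rand(*x.shape)  # fresh U(0,1) draws per particle and dimension
        r2 = np.random.rand(*x.shape)
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # equation (2)
        v_new = np.clip(v_new, -vmax, vmax)  # Vmax limits the jump made in one step
        return x + v_new, v_new              # new position by equation (1), new velocity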
II. REVIEW ON PARTICLE SWARM OPTIMIZATION

PSO variants are continually being devised in an attempt to overcome the deficiencies of the basic method; see, e.g., [13]-[21] for a few recent additions. These PSO variants greatly increase the complexity of the original method, and it has previously been demonstrated that satisfactory performance can be achieved with the basic PSO if only its parameters are properly tuned [22], [23].

A. Immanuel Selvakumar [24] proposed a new version of the classical Particle Swarm Optimization, namely New PSO, to solve non-convex economic dispatch problems.

Mehdi Neshat and Shima Farshchian Yazdi [25] have applied PSO to multiple fields such as machine learning, data mining, wireless sensor networks and pattern recognition. One of the most famous clustering approaches is K-means, which has been used effectively in many clustering problems; this algorithm, however, has drawbacks such as convergence to local optima and sensitivity to initial points.

V. Krishna Reddy and L.S.S. Reddy [26] have implemented parallel asynchronous and synchronous versions of PSO on the GPU and compared their performance, in terms of execution time and speedup, with the sequential versions.

Feng Luan, Jong-Ho Choi and Hyun Kyo Jung [27] have proposed an improved Particle Swarm Optimization algorithm for robust optimization problems. The efficiency and advantages of the proposed algorithm have been verified by application to a mathematical function and a practical electromagnetic problem.

Hemlata S. Urade and Rahila Patel [28] have developed the concept of dynamic Particle Swarm Optimization; their results show that dynamic PSO performs better than SPSO.

Bahman Bahmanifirouzi, Mehdi Nafar and Masoud Jabbari [29] presented a Modified Particle Swarm Optimization based algorithm for economic dispatch. The performance of the proposed method has been demonstrated on one test case with three generating units.

L.M. Palanivelu and P. Vijayakumar [30] have investigated the problem of optimizing space for multi-application smart cards using compression techniques.

Narinder Singh and S.B. Singh [33] have developed OHGBPPSO, a novel philosophy obtained by modifying the velocity update equation. The performance of this approach has been tested through numerical and graphical results, and the results obtained have been compared with SPSO for scalable and non-scalable problems.

R. Mendes, J. Kennedy and J. Neves [31] have proposed the fully informed PSO. In the standard version of PSO, the effective sources of influence are in fact only two: self and best neighbor; information from the remaining neighbors is unused. Mendes has revised the way particles interact with their neighbors. Whereas in the traditional algorithm each particle is affected by its own previous performance and the single best success found in its neighborhood, in Mendes' fully informed particle swarm (FIPS) the particle is affected by all its neighbors, sometimes with no influence from its own previous success. FIPS can be depicted as follows:

    v_i(t+1) = χ ( v_i(t) + Σ_{m=1}^{n_Ni} r(t) (y_m(t) − x_i(t)) / n_Ni )                (3)

where n_Ni = |N_i|, N_i is the set of particles in the neighborhood of particle i, and r(t) ~ U(0, c1 + c2).

D. Bratton and J. Kennedy [32] developed a Standard Particle Swarm Optimization that is an easy extension of the first PSO algorithm.
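
A rough sketch of the fully informed update in equation (3) follows, assuming a fixed neighbor list per particle and the commonly used constriction value χ ≈ 0.7298 with φ = c1 + c2 = 4.1 (both values are assumptions; the paper does not fix them):

    import numpy as np

    def fips_step(x, v, pbest, neighbors, chi=0.7298, phi=4.1):
        # neighbors[i] holds the indices of the particles in N_i
        new_v = np.empty_like(v)
        for i in range(x.shape[0]):
            nb = np.asarray(neighbors[i])
            r = np.random.uniform(0.0, phi, size=(nb.size, x.shape[1]))  # r(t) ~ U(0, c1+c2)
            pull = (r * (pbest[nb] - x[i])).sum(axis=0) / nb.size  # averaged over neighbors
            new_v[i] = chi * (v[i] + pull)                         # equation (3)
        return x + new_v, new_v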

III. THE PROPOSED ALGORITHM

The motivation behind introducing PBPPSO is that, in the velocity update equation, the particle's current position is no longer compared with the global best position: the update retains only the cognitive component c1 r1j (pbest_ij − x_ij) and a social component c2 r2j (−x_ij), both built from pbest and the current position alone. Thus, we introduce a new velocity update equation as follows:

    v_ij(t+1) = w v_ij(t) + c1 r1j (y_ij − x_ij) + c2 r2j (−x_ij)                (4)

where the three terms are, in order, the current motion, the cognitive component and the social component.

In the velocity update equation of this new PSO, the first term represents the current velocity of the particle and can be thought of as a momentum term. The second term, proportional to the vector c1 r1j (pbest_ij − x_ij), is responsible for attracting the particle's current position in the positive direction of its own best position (pbest). The third term, proportional to the vector c2 r2j (−x_ij), acts as an attractor on the particle's current position.

The pseudo code of PBPPSO is shown below:

ALGORITHM - PBPPSO

- Randomly initialize particle positions and velocities.
- While the termination criterion is not met:
  - For each particle:
    - Evaluate the fitness (objective function value) at the current position x_i.
    - If the objective function value is better than that of the personal best position y_ij, then update y_ij.
    - Update the velocity v_ij(t+1) and the position x_ij(t+1):
        v_ij(t+1) = w v_ij(t) + c1 r1j (y_ij − x_ij) + c2 r2j (−x_ij)
        x_ij(t+1) = x_ij(t) + v_ij(t+1)

END ALGORITHM
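
A compact Python sketch of this pseudo code is given below. It is a reading of the method as described above, not the authors' C implementation; the parameter values follow Section V, and sphere() is only a stand-in objective.

    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))

    def pbppso(f, dim=30, n_particles=20, w=0.7, c1=1.5, c2=1.5,
               xmax=100.0, max_evals=30000):
        rng = np.random.default_rng()
        x = rng.uniform(-xmax, xmax, (n_particles, dim))  # random initial positions
        v = rng.uniform(-xmax, xmax, (n_particles, dim))  # random initial velocities
        pbest = x.copy()
        pbest_val = np.array([f(p) for p in x])
        evals = n_particles
        while evals < max_evals:                  # terminate on the evaluation budget
            for i in range(n_particles):
                r1, r2 = rng.random(dim), rng.random(dim)
                # Equation (4): the gbest term of SPSO is absent.
                v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (-x[i])
                x[i] = np.clip(x[i] + v[i], -xmax, xmax)  # stay inside (-100, 100)
                val = f(x[i])
                evals += 1
                if val < pbest_val[i]:            # update the personal best position
                    pbest_val[i] = val
                    pbest[i] = x[i].copy()
        best = int(np.argmin(pbest_val))
        return pbest[best], pbest_val[best]

Note that with the gbest term removed, the update uses no information from other particles; each particle is driven by its own memory alone, which is exactly the design choice PBPPSO makes.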

IV. THE TEST PROBLEMS

Problem Set 1 consists of 15 scalable problems, i.e., problems in which the dimension can be increased or decreased at will. In general, the complexity of a problem increases as the problem size is increased. Problem Set 2 consists of problems in which the problem size is fixed, but which have many local as well as global optima.

Detail of 15 Scalable Problems, SET-I (the dimension of these problems can be increased or decreased; it is not fixed):

Problem I (Ackley): Min f(x) = −20 exp(−0.02 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e. The search space is −30 ≤ x_i ≤ 30 and the minimum objective function value is 0.

Problem II (Cosine Mixture): Min f(x) = Σ_{i=1}^{n} x_i² − 0.1 Σ_{i=1}^{n} cos(5π x_i). The search space is −1 ≤ x_i ≤ 1 and the minimum objective function value is −0.1 n.

Problem III (Exponential): Min f(x) = −exp(−0.5 Σ_{i=1}^{n} x_i²). The search space is −1 ≤ x_i ≤ 1 and the minimum objective function value is −1.

Problem IV (Griewank): Min f(x) = 1 + (1/4000) Σ_{i=1}^{n} x_i² − Π_{i=1}^{n} cos(x_i/√i). The search space is −600 ≤ x_i ≤ 600 and the minimum objective function value is 0.

Problem V (Rastrigin): Min f(x) = 10 n + Σ_{i=1}^{n} [x_i² − 10 cos(2π x_i)]. The search space is −5.12 ≤ x_i ≤ 5.12 and the minimum objective function value is 0.

Problem VI (Function 6): Min f(x) = Σ_{i=1}^{n−1} [100 (x_{i+1} − x_i²)² + (x_i − 1)²]. The search space is −30 ≤ x_i ≤ 30 and the minimum objective function value is 0.

Problem VII (Zakharov): Min f(x) = Σ_{i=1}^{n} x_i² + (Σ_{i=1}^{n} (i/2) x_i)² + (Σ_{i=1}^{n} (i/2) x_i)⁴. The search space is −5.12 ≤ x_i ≤ 5.12 and the minimum objective function value is 0.

Problem VIII (Sphere): Min f(x) = Σ_{i=1}^{n} x_i². The search space is −5.12 ≤ x_i ≤ 5.12 and the minimum objective function value is 0.

Problem IX (Axis parallel hyper-ellipsoid): Min f(x) = Σ_{i=1}^{n} i x_i². The search space is −5.12 ≤ x_i ≤ 5.12 and the minimum objective function value is 0.

Problem X (Schwefel 3): Min f(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|. The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.

Problem XI (Dejong): Min f(x) = Σ_{i=1}^{n} (x_i⁴ + rand(0,1)). The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.

Problem XII (Schwefel 4): Min f(x) = max{ |x_i|, 1 ≤ i ≤ n }. The search space is −100 ≤ x_i ≤ 100 and the minimum objective function value is 0.

Problem XIII (Cigar): Min f(x) = x_1² + 100000 Σ_{i=2}^{n} x_i². The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.

Problem XIV (Brown 3): Min f(x) = Σ_{i=1}^{n−1} [ (x_i²)^(x_{i+1}² + 1) + (x_{i+1}²)^(x_i² + 1) ]. The search space is −1 ≤ x_i ≤ 4 and the minimum objective function value is 0.

Problem XV (Function 15): Min f(x) = Σ_{i=1}^{n} i x_i². The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.
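
For illustration, a few of the Set-I functions above can be written directly in Python (a sketch; the remaining benchmarks follow the same pattern):

    import numpy as np

    def sphere(x):        # Problem VIII
        return np.sum(x ** 2)

    def rastrigin(x):     # Problem V
        return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

    def griewank(x):      # Problem IV
        i = np.arange(1, len(x) + 1)
        return 1 + np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))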

Detail of 13 Non-Scalable Problems, SET-II (the dimension of these problems is fixed; it cannot be increased or decreased):

Problem I (Becker and Lago): Min f(x) = (|x_1| − 5)² + (|x_2| − 5)². The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.

Problem II (Bohachevsky 1): Min f(x) = x_1² + 2 x_2² − 0.3 cos(3π x_1) − 0.4 cos(4π x_2) + 0.7. The search space is −50 ≤ x_i ≤ 50 and the minimum objective function value is 0.

Problem III (Bohachevsky 2): Min f(x) = x_1² + 2 x_2² − 0.3 cos(3π x_1) cos(4π x_2) + 0.3. The search space is −50 ≤ x_i ≤ 50 and the minimum objective function value is 0.

Problem IV (Branin): Min f(x) = a (x_2 − b x_1² + c x_1 − d)² + g (1 − h) cos(x_1) + g, with a = 1, b = 5.1/(4π²), c = 5/π, d = 6, g = 10, h = 1/(8π). The search space is −5 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 15 and the minimum objective function value is 0.398.

Problem V (Eggcrate): Min f(x) = x_1² + x_2² + 25 (sin² x_1 + sin² x_2). The search space is −2π ≤ x_i ≤ 2π and the minimum objective function value is 0.

Problem VI (Miele and Cantrell): Min f(x) = (exp(x_1) − x_2)⁴ + 100 (x_2 − x_3)⁶ + tan⁴(x_3 − x_4) + x_1⁸. The search space is −1 ≤ x_i ≤ 1 and the minimum objective function value is 0.

Problem VII (Modified Rosenbrock): Min f(x) = 100 (x_2 − x_1²)² + [6.4 (x_2 − 0.5)² − x_1 − 0.6]². The search space is −5 ≤ x_1, x_2 ≤ 5 and the minimum objective function value is 0.

Problem VIII (Easom): Min f(x) = −cos(x_1) cos(x_2) exp(−((x_1 − π)² + (x_2 − π)²)). The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is −1.

Problem IX (Periodic): Min f(x) = 1 + sin²(x_1) + sin²(x_2) − 0.1 exp(−x_1² − x_2²). The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.9.

Problem X (Powell): Min f(x) = (x_1 + 10 x_2)² + 5 (x_3 − x_4)² + (x_2 − 2 x_3)⁴ + 10 (x_1 − x_4)⁴. The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is 0.

Problem XI (Camel back-3): Min f(x) = 2 x_1² − 1.05 x_1⁴ + (1/6) x_1⁶ + x_1 x_2 + x_2². The search space is −5 ≤ x_1, x_2 ≤ 5 and the minimum objective function value is 0.

Problem XII (Camel back-6): Min f(x) = 4 x_1² − 2.1 x_1⁴ + (1/3) x_1⁶ + x_1 x_2 − 4 x_2² + 4 x_2⁴. The search space is −5 ≤ x_1, x_2 ≤ 5 and the minimum objective function value is −1.0316.

Problem XIII (Aluffi-Pentini): Min f(x) = 0.25 x_1⁴ − 0.5 x_1² + 0.1 x_1 + 0.5 x_2². The search space is −10 ≤ x_i ≤ 10 and the minimum objective function value is −0.352.

V. PARAMETER SETTING

Apart from making the contribution of gbest equal to zero, the following parameters have been changed from SPSO. The SPSO parameters given in the literature are:
(i) Swarm size varies from 20 to 30.
(ii) Inertia weight varies from 0.4 to 0.9.
(iii) Acceleration coefficients vary from 1.5 to 2.0.
(iv) The maximum number of function evaluations is fixed at 30,000.
(v) The dynamic range of each element of a particle is defined as (−100, 100); that is, a particle cannot move out of this range in any dimension, so Xmax = 100.

In the proposed method we have set the following parameters:
(i) Swarm size = 20.
(ii) Inertia weight = 0.7.
(iii) Acceleration coefficient = 1.5.
(iv) The maximum number of function evaluations is fixed at 30,000.
(v) The dynamic range of each element of a particle is defined as (−100, 100); that is, a particle cannot move out of this range in any dimension, so Xmax = 100.

VI. ANALYSIS OF RESULTS

The performance of SPSO and the newly proposed algorithm, PBPPSO, has been tested on a set of 28 benchmark problems (15 scalable and 13 non-scalable). The Standard Particle Swarm Optimization implementation was written in C and compiled using the Borland C++ Version 4.5 compiler. For the purpose of comparison, all simulations use the parameter settings of the SPSO implementation except the inertia weight, acceleration coefficient, swarm size and maximum velocity allowed.

If the SPSO or PBPPSO implementation cannot find an acceptable solution within 30,000 iterations, it is ruled that it fails to find the global optimum in that run.

Observing Table 3, it can be seen that PBPPSO gives better-quality solutions than SPSO. Thus, for the scalable problems, PBPPSO outperforms SPSO with respect to efficiency, reliability, cost and robustness. It is also observed in Table 3 that SPSO could not solve two problems with 100% success, whereas PBPPSO solved all the problems with 100% success.

Observing Table 4, it can be seen that PBPPSO likewise gives better-quality solutions than SPSO. Thus, for the non-scalable problems, PBPPSO outperforms SPSO with respect to efficiency, reliability, cost and robustness.
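
The statistics reported in Tables 3 and 4 (minimum, mean, standard deviation and rate of success over 50 runs) can be gathered with a helper of the following shape. This is a sketch; success_tol, the threshold that decides whether a run counts as successful, is an assumption, since the paper does not state its exact acceptance criterion.

    import numpy as np

    def summarize(run_values, success_tol=1e-3):
        vals = np.asarray(run_values)  # best objective value found in each run
        return {
            "min": vals.min(),
            "mean": vals.mean(),
            "std": vals.std(),
            "success_rate": 100.0 * np.mean(vals <= success_tol),  # percent of runs
        }

    # e.g. summarize([pbppso(sphere)[1] for _ in range(50)])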

TABLE-3 COMPARATIVE OBJECTIVE FUNCTION VALUES OBTAINED IN 50 RUNS BY SPSO AND PBPPSO FOR PROBLEM SET-I

No. | Min (SPSO) | Min (PBPPSO) | Mean (SPSO) | Mean (PBPPSO) | Std. Dev. (SPSO) | Std. Dev. (PBPPSO) | Success (SPSO) | Success (PBPPSO)
1  | 0.667619 | 0.399862 | 16485.6000 | 553.600000 | 0.142795  | 0.158865 | 98.00% | 100%
2  | 0.571815 | 0.277552 | 1581.60000 | 183.000000 | 0.072987  | 0.158519 | 100%   | 100%
3  | 0.000000 | 0.000000 | 60.000000  | 60.000000  | 0.000207  | 0.000153 | 100%   | 100%
4  | 0.777974 | 0.290225 | 14364.6000 | 1120.20000 | 0.026005  | 0.122853 | 100%   | 100%
5  | 27.12781 | 0.171168 | 30000.0000 | 1088.40000 | 29.809592 | 0.218556 | 0.00%  | 100%
6  | 0.000061 | 0.000021 | 166.200000 | 172.200000 | 0.200616  | 0.227167 | 100%   | 100%
7  | 0.000274 | 0.000000 | 72.000000  | 72.600000  | 0.229660  | 0.233730 | 100%   | 100%
8  | 0.685057 | 0.209639 | 6096.00000 | 520.200000 | 0.054336  | 0.176575 | 100%   | 100%
9  | 0.000002 | n/a      | 60.600000  | 60.600000  | 0.179978  | 0.200105 | 100%   | 100%
10 | 0.001109 | 0.000599 | 60.600000  | 60.000000  | 0.161759  | 0.179291 | 100%   | 100%
11 | 0.601870 | 0.064401 | 11341.8000 | 777.000000 | 0.067786  | 0.217221 | 100%   | 100%
12 | 0.022248 | 0.000349 | 78.000000  | 97.800000  | 0.243564  | 0.252906 | 100%   | 100%
13 | 0.001848 | 0.001403 | 1767.00000 | 1126.00000 | 0.253535  | 0.244377 | 100%   | 100%
14 | 0.000126 | 0.000126 | 60.000000  | 60.000000  | 0.048579  | 0.050731 | 100%   | 100%
15 | 0.000009 | 0.000000 | 60.000000  | 60.000000  | 0.005729  | 0.005125 | 100%   | 100%

TABLE-4 COMPARATIVE OBJECTIVE FUNCTION VALUES OBTAINED IN 50 RUNS BY SPSO AND PBPPSO FOR PROBLEM SET-II

No. | Min (SPSO) | Min (PBPPSO) | Mean (SPSO) | Mean (PBPPSO) | Std. Dev. (SPSO) | Std. Dev. (PBPPSO) | Success (SPSO) | Success (PBPPSO)
1  | 0.500008 | 0.500003 | 60.000000  | 60.600000  | 0.042453 | 0.062716 | 100% | 100%
2  | 0.017193 | 0.008802 | 64.200000  | 66.600000  | 0.258362 | 0.223069 | 100% | 100%
3  | 0.012194 | 0.010299 | 54.800000  | 0.209085   | 0.225253 | 0.209085 | 100% | 100%
4  | 0.399105 | 0.394891 | 189.600000 | 241.200000 | 0.142762 | 0.151097 | 100% | 100%
5  | 0.018613 | 0.013457 | 72.000000  | 71.400000  | 0.240972 | 0.224596 | 100% | 100%
6  | 0.002489 | 0.000124 | 42.000000  | 42.000000  | 0.028132 | 0.030601 | 100% | 100%
7  | 0.007356 | 0.000202 | 42.000000  | 42.000000  | 0.126423 | 0.119506 | 100% | 100%
8  | 0.000000 | 0.000000 | 60.000000  | 60.000000  | 0.000000 | 0.000000 | 100% | 100%
9  | 0.480507 | 0.480466 | 60.000000  | 60.000000  | 0.026709 | 0.024677 | 100% | 100%
10 | 0.067997 | 0.021487 | 840.600000 | 401.400000 | 0.215576 | 0.220904 | 100% | 100%
11 | 0.003378 | 0.000466 | 60.600000  | 61.200000  | 0.207517 | 0.178353 | 100% | 100%
12 | 0.031064 | 0.030937 | 49.200000  | 48.000000  | 0.258470 | 0.271096 | 100% | 100%
13 | 0.019435 | 0.004263 | 44.800000  | 42.400000  | 0.179291 | 0.255534 | 100% | 100%

FIGURE A: COMPARING THE SPSO AND PBPPSO WITH THE HELP OF THE 15 SCALABLE PROBLEMS.

FIGURE B: COMPARING THE SPSO AND PBPPSO WITH THE HELP OF THE 13 NON-SCALABLE PROBLEMS.

Note: in Figures A and B, the x-axis represents the scalable and non-scalable problems and the y-axis denotes the minimum objective function values.

VII. CONCLUSIONS

In SPSO, the movement of a particle is governed by three behaviors, namely inertial, cognitive and social. The cognitive behavior helps the particle to remember its previously visited best position. The newly proposed PBPPSO approach assumes that gbest contributes nothing to the velocity update equation; in this approach, the position of each particle depends on its personal best position only. After testing the proposed algorithm on several benchmark problems, it is concluded that the new algorithm performs better in solving real-world problems.

REFERENCES

[1] R.C. Eberhart and J. Kennedy, "A New Optimizer using Particle Swarm Theory," in Proceedings of the Sixth International Symposium on Micromachine and Human Science, pp. 39-43, 1995.
[2] J. Kennedy and R.C. Eberhart, "Particle Swarm Optimization," in Proceedings of the IEEE International Joint Conference on Neural Networks, pp. 1942-1948, IEEE Press, 1995.
[3] J. Kennedy, "Small Worlds and Mega-Minds: Effects of Neighborhood Topology on Particle Swarm Performance," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, pp. 1931-1938, July 1999.
[4] J. Kennedy and R. Mendes, "Population Structure and Particle Performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671-1676, IEEE Press, 2002.
[5] E.S. Peer, F. van den Bergh, and A.P. Engelbrecht, "Using Neighborhoods with the Guaranteed Convergence PSO," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 235-242, IEEE Press, 2003.
[6] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, Wiley & Sons, 2005.
[7] J. Kennedy, R.C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, 2001.
[8] F. van den Bergh, "An Analysis of Particle Swarm Optimizers," PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002.
[9] F. van den Bergh and A.P. Engelbrecht, "A Study of Particle Swarm Optimization Particle Trajectories," Information Sciences, vol. 176, no. 8, pp. 937-971, 2006.
[10] J. Kennedy, "Bare Bones Particle Swarms," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 80-87, April 2003.
[11] Y. Shi and R.C. Eberhart, "A Modified Particle Swarm Optimizer," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 69-73, May 1998.
[12] P.J. Angeline, "Evolutionary optimization versus particle swarm optimization: philosophy and performance differences," Lecture Notes in Computer Science, vol. 1447, pp. 601-610, Springer, Berlin, 1998.
[13] Z.-H. Zhan, J. Zhang, Y. Li, and H.S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, pp. 1362-1381, 2009.
[14] Z. Xinchao, "A perturbed particle swarm algorithm for numerical optimization," Applied Soft Computing, pp. 119-124, 2010.
[15] T. Niknam and B. Amiri, "An efficient hybrid approach based on PSO, ACO and k-means for cluster analysis," Applied Soft Computing, pp. 183-197, 2010.
[16] M. El-Abd, H. Hassan, M. Anis, M.S. Kamel, and M. Elmasry, "Discrete cooperative particle swarm optimization for FPGA placement," Applied Soft Computing, pp. 284-295, 2010.
[17] M.-R. Chen, X. Li, X. Zhang, and Y.-Z. Lu, "A novel particle swarm optimizer hybridized with extremal optimization," Applied Soft Computing, pp. 367-373, 2010.
[18] P.W.M. Tsang, T.Y.F. Yuen, and W.C. Situ, "Enhanced affine invariant matching of broken boundaries based on particle swarm optimization and the dynamic migrant principle," Applied Soft Computing, pp. 432-438, 2010.
[19] C.-C. Hsu, W.-Y. Shieh, and C.-H. Gao, "Digital redesign of uncertain interval systems based on extremal gain/phase margins via a hybrid particle swarm optimizer," Applied Soft Computing, pp. 606-612, 2010.
[20] H. Liu, Z. Cai, and Y. Wang, "Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization," Applied Soft Computing, pp. 629-640, 2010.
[21] K. Mahadevan and P.S. Kannan, "Comprehensive learning particle swarm optimization for reactive power dispatch," Applied Soft Computing, pp. 641-652, 2010.
[22] M.E.H. Pedersen, "Tuning & Simplifying Heuristical Optimization," PhD thesis, School of Engineering Sciences, University of Southampton, England, 2010.
[23] M.E.H. Pedersen and A.J. Chipperfield, "Simplifying particle swarm optimization," Applied Soft Computing, pp. 618-628, 2010.
[24] A. Immanuel Selvakumar and K. Thanushkodi, "A New Particle Swarm Optimization Solution to Nonconvex Economic Dispatch Problems," IEEE Transactions on Power Systems, vol. 22, no. 1, 2007.
[25] Mehdi Neshat and Shima Farshchian Yazdi, "A New Cooperative Algorithm Based on PSO and K-Means for Data Clustering," Journal of Computer Science, vol. 8, no. 2, pp. 188-194, 2012.
[26] V. Krishna Reddy and L.S.S. Reddy, "Performance Evaluation of Particle Swarm Optimization Algorithms on GPU using CUDA," International Journal of Engineering Science & Advanced Technology, vol. 2, no. 1, pp. 92-100, 2012.
[27] Feng Luan, Jong-Ho Choi and Hyun Kyo Jung, "A Particle Swarm Optimization Algorithm With Novel Expected Fitness Evaluation for Robust Optimization Problems," IEEE Transactions on Magnetics, vol. 48, no. 2, 2012.
[28] Hemlata S. Urade and Rahila Patel, "Performance Evaluation of Dynamic Particle Swarm Optimization," International Journal of Computer Science and Network, vol. 1, no. 1, 2012.
[29] Bahman Bahmanifirouzi, Mehdi Nafar and Masoud Jabbari, "Modified Particle Swarm Optimization for Economic Dispatch of Generating Units," J. Basic Applied Sci., vol. 2, no. 1, pp. 138-140, 2012.
[30] L.M. Palanivelu and P. Vijayakumar, "A Particle Swarm Optimization for Image Segmentation in Multi Application Smart Cards," European Journal of Scientific Research, ISSN 1450-216X, vol. 70, no. 3, 2012.
[31] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: Simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204-210, June 2004.
[32] D. Bratton and J. Kennedy, "Defining a Standard for Particle Swarm Optimization," in Proceedings of the IEEE Swarm Intelligence Symposium, pp. 120-127, 2007.
[33] Narinder Singh and S.B. Singh, "One Half Global Best Position Particle Swarm Optimization Algorithm," International Journal of Scientific & Engineering Research, vol. 2, no. 8, August 2011.

Prof. S.B. Singh, Department of Mathematics, Punjabi University, Patiala, INDIA, Punjab, Pin No. 147002, e-mail ID: sbsingh69@yahoo.com. The author is Head of the Department of Mathematics, Punjabi University, Patiala. His research interests are mathematical modeling and optimization techniques. He has more than 50 research papers in journals and conferences, and holds a number of national and international awards.

Mr. Narinder Singh, Department of Mathematics, Punjabi University, Patiala, INDIA, Punjab, Pin No. 147002, e-mail ID: narindersinghgoria@Yahoo.com. He is currently a Ph.D. student at the Department of Mathematics, Punjabi University, Patiala-147002, Punjab, INDIA. His area of research interest is Particle Swarm Optimization.
