
Available online at www.sciencedirect.com

Computers & Industrial Engineering 54 (2008) 960-971

www.elsevier.com/locate/caie

A Pareto archive particle swarm optimization for multi-objective job shop scheduling

Deming Lei

School of Automation, Wuhan University of Technology, 122 Luoshi Road, Wuhan, Hubei Province, People's Republic of China

Received 7 November 2006; received in revised form 5 November 2007; accepted 5 November 2007
Available online 22 November 2007

Abstract
In this paper, we present a particle swarm optimization for the multi-objective job shop scheduling problem. The objective is to simultaneously minimize makespan and total tardiness of jobs. By constructing the corresponding relation between a real vector and the chromosome obtained by using the priority rule-based representation method, job shop scheduling is converted into a continuous optimization problem. We then design a Pareto archive particle swarm optimization, in which global best position selection is combined with crowding measure-based archive maintenance. The proposed algorithm is evaluated on a set of benchmark problems, and the computational results show that it is capable of producing a number of high-quality Pareto optimal scheduling plans.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Particle swarm optimization; Pareto optimal; Archive maintenance; Global best position; Multi-objective job shop scheduling

1. Introduction
The multi-objective job shop scheduling problem (MOJSSP) is one with multiple conflicting objectives, which mainly presents difficulties related to the objectives. If the objectives are combined into a scalar function by using weights, the difficulty is to assign a weight to each objective; if all objectives are optimized concurrently, the problem is to design an effective search algorithm that copes with some extra steps and a considerable increase in time complexity.
In the past decades, the literature on MOJSSP has been notably sparser than that on the single-objective job shop scheduling problem (JSSP). Sakawa and Kubota (2000) presented a genetic algorithm incorporating the concept of similarity among individuals by using Gantt charts for JSSP with fuzzy processing times and fuzzy due dates. The objective is to maximize the minimum agreement index, to maximize the average agreement index and to minimize the maximum fuzzy completion time. Ponnambalam, Ramkumar, and Jawahar (2001) proposed a multi-objective genetic algorithm to derive the optimal machine-wise priority dispatching

Tel.: +86 2786534910.
E-mail address: deminglei11@163.com

0360-8352/$ - see front matter © 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.cie.2007.11.007


rules to resolve the conflict among the contending jobs in the Giffler and Thompson procedure (Giffler & Thompson, 1960) applied in job shop scheduling. The objective is to minimize the weighted sum of makespan, the total idle time of machines and the total tardiness. The weights assigned for combining the objectives into a scalar fitness are not constant. Esquivel et al. (2002) proposed an enhanced evolutionary algorithm with new multi-recombinative operators and an incest prevention strategy for single and multi-objective job shop scheduling problems. Kacem, Hammadi, and Borne (2002) presented a combination approach based on the fusion of fuzzy logic and a multi-objective evolutionary algorithm for flexible job shop scheduling problems. Xia and Wu (2005) proposed a hybrid algorithm of particle swarm optimization (PSO) and simulated annealing for multi-objective flexible job shop scheduling problems. The hybrid algorithm makes use of PSO to assign operations to machines and of simulated annealing to arrange the operations on each machine.
The above-mentioned approaches optimize the weighted sum of objective functions and can produce one or several optimal solutions. Some studies have attempted to simultaneously optimize all objectives and obtain a group of Pareto optimal solutions. Arroyo and Armentano (2005) presented a genetic local search algorithm with features such as preservation of dispersion in the population and elitism for flow shop scheduling problems, in such a way as to provide the decision maker with approximate Pareto optimal solutions. Lei and Wu (2006) developed a crowding measure-based multi-objective evolutionary algorithm for job shop scheduling problems. The proposed algorithm makes use of the crowding measure to adjust the external population and to assign different fitness to individuals, and is applied to MOJSSP to minimize makespan and the total tardiness of jobs.
PSO has seldom been applied to JSSP since its introduction in 1995, and the optimization capacity and advantages of PSO on JSSP have not been intensively considered. Compared with evolutionary algorithms, PSO has its own superiorities for scheduling. For instance, it is unnecessary for PSO to design special crossover and mutation operators to avoid the occurrence of illegal individuals. The main obstacle to directly applying PSO to combinatorial optimization problems such as JSSP is its continuous nature.
To remedy the above drawback, this paper presents an effective approach to convert JSSP into a continuous optimization problem. Once JSSP is converted into a continuous problem, the direct application of PSO becomes possible. In addition, an effective multi-objective particle swarm optimization (MOPSO) is proposed for solving MOJSSP. The proposed algorithm combines global best position selection with external archive maintenance, whereas these two steps are considered separately in most MOPSO variants.
The remainder of the paper is organized as follows. The basic concepts of multi-objective optimization are introduced in Section 2. Section 3 formulates JSSP with multiple objectives. In Section 4, we introduce standard PSO and describe the method which converts scheduling problems into continuous optimization ones. In Section 5, we describe a Pareto archive particle swarm optimization (PAPSO) for MOJSSP. The proposed algorithm is applied to a set of benchmark problems and the performances of three algorithms are compared in Section 6. Conclusions and remarks on future work are given in Section 7.
2. Basic concepts
The general multi-objective optimization problem is of the form
~ f1 X
~; f2 X
~;    fM X
~
max y f X
~ 6 0 i 1; 2;    D
Subject to gi X

~ 2 H 2 Rn , H is search space. y 2 Y is objective vector and


~ x1 ; x2    xn T is called decision vector, X
where X
Y is objective space. gi is a constraint. For simplicity, we suppose that all fk(k = 1, 2,    , M) is greater than
zero in this paper. If fk 6 0, we replace fk with fk + s. Where s is a big enough positive number.
The following basic concepts are often used in the multi-objective optimization case.

Definition 1. Let X^0 ∈ H be a decision vector.

1. X^0 is said to dominate a decision vector X^1 ∈ H (X^0 ≻ X^1) if and only if f_i(X^0) >= f_i(X^1) for all i = 1, 2, ..., M and f_i(X^0) > f_i(X^1) for at least one i ∈ {1, 2, ..., M}.
2. X^0 is said to be Pareto optimal if and only if there exists no X^1 ∈ H such that X^1 ≻ X^0.
3. The Pareto optimal set P_S is the set of all Pareto optimal decision vectors: P_S = {X^0 ∈ H | ¬∃ X^1 ∈ H: X^1 ≻ X^0}.
4. The Pareto optimal front P_F is the set of all objective vectors corresponding to the decision vectors in P_S: P_F = {f(X) = (f_1(X), f_2(X), ..., f_M(X)) | X ∈ P_S}.
5. X^0 is said to be non-dominated regarding a given set if X^0 is not dominated by any decision vector in the set.

A Pareto optimal decision vector cannot be improved in any objective without causing degradation in at least one other objective. When a decision vector is non-dominated on the whole search space, it is Pareto optimal.
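The dominance relation of Definition 1 can be sketched as follows; this is a minimal illustration (not part of the paper), with objective vectors maximized as in Eq. (1):

```python
from typing import Sequence

def dominates(y0: Sequence[float], y1: Sequence[float]) -> bool:
    """True if objective vector y0 dominates y1 (all objectives maximized)."""
    return all(a >= b for a, b in zip(y0, y1)) and any(a > b for a, b in zip(y0, y1))

def non_dominated(points: list) -> list:
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]
```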
3. Problem formulation
Determining an efficient schedule for the general shop problem has been the subject of research for more than 50 years. The elements of JSSP are a set of machines and a collection of jobs to be scheduled. The processing of a job on a certain machine is referred to as an operation. The processing time of each operation is fixed and known in advance. Each job passes each machine exactly once in a prescribed sequence and should be delivered before its due date.
The minimization of cost and the maximization of customer satisfaction are two main foci of practical applications. To reflect real-world situations adequately, we formulate multi-objective job shop scheduling problems as two-objective ones which simultaneously minimize makespan and total tardiness. Makespan is the most frequently considered objective, and many efficient heuristics, including tabu search and simulated annealing, have been developed for minimum makespan. For tardiness objectives, several local search approaches and genetic algorithms have also been introduced. In this paper, we propose a PSO-based heuristic to minimize these objectives.
(1) Makespan: C_max = max{C_i}, where C_i is the completion time of job i.
(2) Total tardiness of jobs: T_tot = Σ_{i=1}^{n} max{0, L_i}, where L_i is the lateness of job i.
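Both objectives follow directly from the job completion times; a minimal sketch (the function names are illustrative):

```python
def makespan(completion_times):
    """C_max: the largest job completion time."""
    return max(completion_times)

def total_tardiness(completion_times, due_dates):
    """T_tot: sum of max(0, C_i - d_i) over all jobs (lateness clipped at zero)."""
    return sum(max(0, c - d) for c, d in zip(completion_times, due_dates))
```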
The assumptions considered in this paper are as follows:

- Setting up times of machines and move times between operations are negligible;
- Machines are independent of each other;
- Jobs are independent of each other;
- A machine cannot process more than one job at a time, and no job may be processed on more than one machine at a time;
- There are no precedence constraints among the operations of different jobs.

4. Particle swarm optimization


PSO is a population-based stochastic optimization technique inspired by the choreography of a bird flock. This technique has good performance, low computational cost and easy implementation. Due to these advantages, PSO has attracted significant attention from researchers around the world since its introduction in 1995.
4.1. The standard PSO
The standard algorithm is given by:

v_{t+1} = w·v_t + r_1·c_1·(P_t − X_t) + r_2·c_2·(G_t − X_t)    (2)
X_{t+1} = X_t + v_{t+1}    (3)
where w is an inertia weight, r_1 and r_2 are two learning factors, and c_1 and c_2 are random numbers following the uniform distribution on [0, 1]. v_t and X_t represent the velocity and position of a particle, respectively, at time t. P_t and G_t, respectively, denote the personal best position of a particle and the global best position at time t.
The procedure of PSO is as follows:

(1) Initialize a population of particles with random positions and velocities in the search space.
(2) Update the velocity and the position of each particle according to Eqs. (2) and (3).
(3) Map the position of each particle into solution space and evaluate its fitness value according to the desired optimization fitness function. At the same time, update P_t and G_t.
(4) If the stopping criterion is met, terminate the search; else go to (2).
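The velocity and position updates of Eqs. (2) and (3) can be sketched for a single particle; a minimal illustration assuming list-valued positions, with w = 0.5 and r_1 = r_2 = 2 as chosen later in the paper:

```python
import random

def pso_step(X, V, P, G, w=0.5, r1=2.0, r2=2.0):
    """One update per Eqs. (2) and (3). Following the paper's notation,
    r1 and r2 are learning factors while c1 and c2 are uniform random
    numbers in [0, 1]."""
    c1, c2 = random.random(), random.random()
    V = [w * v + r1 * c1 * (p - x) + r2 * c2 * (g - x)
         for v, x, p, g in zip(V, X, P, G)]
    X = [x + v for x, v in zip(X, V)]
    return X, V
```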
The original PSO design is suited to a continuous solution space. To better solve combinatorial optimization problems, we propose an effective approach to handle JSSP by using PSO.

4.2. Handling JSSP using PSO


Two strategies can be used to apply PSO to JSSP. The first strategy is to deal with scheduling problems by using discrete PSO (Hu, Eberhart, & Shi, 2003). The second is to transform JSSP into a continuous problem and solve the latter by using PSO.
Generally, it is difficult to design a discrete PSO. The main difficulty is the redefinition of the position or velocity update method for JSSP. The redefinition is essential for the application of PSO; on the other hand, the low performance of discrete PSO mainly results from this redefinition. This is a paradox.
Compared with the first strategy, the second strategy has the following advantages: JSSP is easily converted into a continuous optimization problem; moreover, once the conversion is implemented, high-quality solutions can be obtained by using the various improvement methods of PSO. In this study, we adopt the second strategy and convert scheduling problems into continuous ones.
The Giffler and Thompson procedure is first described before introducing the conversion.

(1) Let t = 1, PS_t = ∅, determine S_t.
(2) Calculate C* = min{C(o_ij) | o_ij ∈ S_t} and record the corresponding machine j*.
(3) Define the conflict set c_t = {o_ij* ∈ S_t | r_ij* < C*}. Choose o_uj* ∈ c_t, let PS_{t+1} = PS_t ∪ {o_uj*}, delete o_uj* from S_t, add the next operation of job u into S_t and form S_{t+1}.
(4) t = t + 1; go to (2) until S_t = ∅.

where ∅ is an empty set, S_t is the set of operations which can be scheduled in the t-th iteration and PS_t is the set of operations which have been scheduled in the t-th iteration. o_ij denotes the operation of job i processed on machine j. C(o_ij) and r_ij, respectively, indicate the earliest completion time and the earliest beginning time of operation o_ij. c_t is the conflict set, which consists of all operations competing for the same machine.
There are many genetic representation methods for JSSP, such as operation-based, priority rule-based and job-based representations. The following conversion procedure between chromosome and real vector shows that the priority rule-based representation is one of the most suitable methods for handling job shop scheduling by using PSO.
With respect to the priority rule-based representation, for an n × m JSSP the chromosome of a feasible scheduling plan is a string (p_1, p_2, ..., p_{n×m}) with n × m genes, in which each gene corresponds to a priority rule. In the Giffler and Thompson procedure, a conflict occurs when more than one job contends for one machine. Gene p_i means that its corresponding priority rule is used to eliminate the conflict occurring in the i-th iteration by choosing a job from the contending jobs.
Five priority rules, FCFS, LPT, SPT, CR and S/OPN, are chosen in terms of the selected objectives. FCFS, LPT and SPT are processing time-based rules. CR and S/OPN are related to due dates and are efficient in delivering on due dates. The combination of these rules may produce an effective scheduling plan and concurrently minimize the above-mentioned objectives.


Table 1
Value of gene and priority rule

Value of gene   Priority rule
0               FCFS (first come first served)
1               SPT (shortest processing time)
2               LPT (longest processing time)
3               CR (smallest critical rate)
4               S/OPN (smallest slack per number of operations remaining)

Table 1 shows the relation between each priority rule and its gene value.
The above chromosome is converted into a real vector in the following way: the whole string is first divided into a number of substrings. The lengths of the substrings can be identical to or different from each other. Then each substring corresponds to a real variable.
An instance is used to illustrate the above procedure. For an 8 × 2 JSSP, the chromosome has 16 genes and is divided into four substrings. The length of each substring is 4:

3 3 4 0 | 4 1 2 3 | 3 1 2 4 | 1 2 3 4

Each substring has 625 possible combinations, since each gene p ∈ {0, 1, 2, 3, 4}. For substring h_1 h_2 h_3 h_4, the value of the corresponding real variable x is obtained from the following formula:

x = 0.01 × (5^{4−1}·h_1 + 5^{4−2}·h_2 + 5^{4−3}·h_3 + 5^{4−4}·h_4).    (4)

So we can specify the domain [0, 6.24] for each real variable x_i, i = 1, 2, 3, 4. For substring 3 3 4 0, x_1 = 4.70; for substring 4 1 2 3, x_2 = 5.38; for substring 3 1 2 4, x_3 = 4.14; and for substring 1 2 3 4, x_4 = 1.94.
When x_i, i = 1, 2, 3, 4, is assigned a new value, the corresponding substring can be obtained in the following way. If x_1 = 4.753:

(1) x_1 = ⌊x_1 × 100⌋ = 475.
(2) For j = 1 to 4: h_j = ⌊x_1 / 5^{4−j}⌋; x_1 = x_1 − h_j × 5^{4−j}.

Finally, x_1 = 4.753 corresponds to substring 3 4 0 0, where ⌊x⌋ indicates the largest integer not exceeding x. When all substrings of the corresponding variables are constructed, a new chromosome is obtained.
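Both directions of the conversion can be sketched as follows; the function names are illustrative, and the decoding follows the floor-based procedure above:

```python
import math

def substring_to_real(h):
    """Eq. (4): map a substring of base-5 gene digits to a real value."""
    n = sum(d * 5 ** (len(h) - 1 - k) for k, d in enumerate(h))
    return n / 100.0

def real_to_substring(x, length=4):
    """Inverse decoding: recover the gene substring from a real value
    via n = floor(100x) followed by repeated division by powers of 5."""
    n = math.floor(x * 100)
    digits = []
    for j in range(length):
        w = 5 ** (length - 1 - j)
        digits.append(n // w)
        n -= digits[-1] * w
    return digits
```

For example, substring 3 3 4 0 maps to 4.70, and 4.753 decodes to 3 4 0 0, matching the worked instance in the text.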
5. A Pareto archive particle swarm optimization
Archive maintenance and G_t selection are two main steps in MOPSO. These two steps are independently implemented in most cases. This paper combines these two steps and constructs a new MOPSO.
5.1. Main algorithm
PAPSO is outlined as follows.
1. t = 0. Initialize swarm S_t, calculate the objective vector of each particle and include the non-dominated solutions in archive A_t.
2. Determine the initial G_t and P_t for each particle.
3. Under the condition that particles fly within the search space, determine the new position and velocity of all particles and form S_{t+1}. Update P_t for each particle.
4. Implement the hybrid procedure of archive maintenance and G_t selection and produce A_{t+1}.
5. Perform mutation on some chosen archive members and maintain archive A_{t+1} again.


6. t = t + 1. If the termination condition is met, then terminate the search; else go to 3.

where S_t and A_t are, respectively, the particle swarm and the external archive at time t.
We adopt the method presented by Coello, Pulido, and Lechuga (2004) to determine P_t for each particle and propose the following approach to ensure that particles fly within the search space. Suppose a particle goes beyond the boundaries of a decision variable, that is, x_tj + v_tj > b_j or x_tj + v_tj < a_j:

If x_tj + v_tj > b_j, then v_{t+1,j} = h·(b_j − x_tj), x_{t+1,j} = x_tj + v_{t+1,j};
If x_tj + v_tj < a_j, then v_{t+1,j} = h·(x_tj − a_j), x_{t+1,j} = x_tj − v_{t+1,j};

where 0 < h < 1 is a constant, X_t = (x_t1, x_t2, ..., x_tn), v_t = (v_t1, v_t2, ..., v_tn), x_tj ∈ [a_j, b_j], and a_j and b_j are the lower and upper boundaries of the j-th decision variable, j = 1, 2, ..., n.
In the next three subsections, the archive maintenance, the G_t selection procedure and the mutation in PAPSO are described in detail.
5.2. Archive maintenance
The external archive is used to store some of the non-dominated solutions produced in the search process of a multi-objective evolutionary algorithm (MOEA) or MOPSO. In most cases, a maximum size is specified for the external archive. When the archive size reaches its predetermined maximum value, the external archive must be maintained to decide which solutions can be inserted into the archive. Pareto archived evolution strategy (PAES) (Knowles & Corne, 2000) and strength Pareto evolutionary algorithm 2 (SPEA2) (Zitzler, Laumanns, & Thiele, 2001) update the archive in terms of the following principle: when one of the following conditions is met, the new non-dominated solution becomes a member of the archive.

(1) The new solution dominates some members of the archive.
(2) The archive size is less than its maximum value.
(3) The archive size is equal to its maximum value and the new solution has a higher density value than at least one member of the archive.
The density-estimation metric is vital to the above approach and directly influences the diversity of the algorithm. Several density metrics have been introduced in MOEAs. In PAES, the whole objective space is divided into a number of grids. The number of solutions in a grid is the density of the grid. When the archive becomes full, a solution in a less crowded grid always preferably becomes a member of the archive. In SPEA2, the density of an individual is computed from the distance between the individual and its k-th nearest individual, where k = √(N + N′), N is the population scale and N′ indicates the maximum size of the archive. In the non-dominated sorting genetic algorithm II (Deb, Pratap, Agarwal, & Meyarivan, 2002), crowding distance is defined as the density-estimation metric. In this study, the density-estimation metric of Lei and Wu (2006) is introduced.
The distance d_ij between solutions X^i and X^j is defined as

d_ij = √( Σ_{k=1}^{M} (u_k f_k(X^i) − u_k f_k(X^j))² )

where u_k is a constant chosen to make all u_k f_k close to each other. To decide the appropriate value of u_k, first we let all u_k = 1. Then the problem is optimized and some near-optimal solutions are obtained; max f_k for each objective is also obtained by using these near-optimal solutions. Finally, u_k is decided in the following way: u_1 = a > 0, u_k = u_{k−1} × (max f_{k−1} / max f_k) for k > 1.
Definition 2. Q is a set of some solutions. For X^i ∈ Q, let d_i¹ = min{d_ij | X^j ∈ Q, j ≠ i} and d_i² = min{d_ij | d_ij > d_i¹, X^j ∈ Q, j ≠ i}; then the crowding measure of X^i in Q is

c_iQ = (d_i¹ + d_i²) / 2.
If there exist only two points in Q, then c_iQ is defined as the distance between these points. The crowding measure depicts the density of other solutions surrounding a solution. A point in a crowded region of the objective space has a low crowding measure, while a solution in a sparse region is assigned a high crowding measure.
The density metric of PAES just describes whether some solutions are located in the same region. The density metric in SPEA2 is defined based on the distance between two points in objective space. Crowding distance and


crowding measure are both defined on the basis of distances among three solutions. However, for a boundary solution with the biggest or smallest value of at least one objective function, its crowding distance is infinite while the corresponding crowding measure is finite.
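Definition 2 can be sketched as follows; an illustrative implementation in which ties among equidistant neighbours are handled by simply taking the two smallest distances:

```python
import math

def crowding_measure(i, Q, u=None):
    """Crowding measure of Definition 2: the mean of the distances from
    solution i to its two nearest neighbours in the set Q of objective
    vectors. u holds the scaling constants u_k (all 1 by default)."""
    M = len(Q[0])
    u = u or [1.0] * M
    def dist(p, q):
        return math.sqrt(sum((u[k] * (p[k] - q[k])) ** 2 for k in range(M)))
    ds = sorted(dist(Q[i], Q[j]) for j in range(len(Q)) if j != i)
    if len(ds) == 1:          # only two points in Q
        return ds[0]
    return (ds[0] + ds[1]) / 2
```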
5.3. The combination procedure of archive maintenance and G_t selection

A hybrid procedure to maintain the archive and select G_t is described in the following way.
(1) Assign all members of archive A_t into A_{t+1}, find all non-dominated solutions in S_{t+1} and store them in set ϑ.
(2) For each solution X^i_{t+1} ∈ ϑ: if X^i_{t+1} dominates some members of A_{t+1}, then substitute X^i_{t+1} for those dominated members and for the global best position of all particles in the set {j ∈ S_{t+1} | G^j_{t+1} = X^k_{t+1}, X^i_{t+1} ≻ X^k_{t+1} ∈ A_{t+1}}.
(3) For each solution X^i_{t+1} ∈ ϑ with X^i_{t+1} ∉ A_{t+1}, first insert it into archive A_{t+1} and then:
(3.1) If N_arc = N′, remove a member X^l_{t+1} with minimum crowding measure from A_{t+1}; if X^l_{t+1} ≠ X^i_{t+1}, substitute X^i_{t+1} for the global best position of all particles in {j ∈ S_{t+1} | G^j_{t+1} = X^l_{t+1}}.
(3.2) If N_arc < N′:
(1) F = {X^k_{t+1} ∈ A_{t+1} | np(X^k_{t+1}) > g}.
(2) Select a solution X^k_{t+1} ∈ F with the minimum d_ik, replace the global best position of all particles in {j ∈ S_{t+1} | G^j_{t+1} = X^k_{t+1}} with X^i_{t+1}, and remove X^k_{t+1} from F; repeat the above procedure until np(X^i_{t+1}) ≥ g or F = ∅. If F = ∅ and np(X^i_{t+1}) < g, go to (1).

where N is the population scale, N′ is the maximum size and N_arc is the actual size of the archive. G^j_{t+1} = X means that the global best position of particle j is X. np(X_t) denotes the number of particles whose global best position is X_t. g (∈ [0.025N, 0.05N]) is a constant. The cardinality of set F is denoted by |F|. ∅ is an empty set.
In single-objective PSO, particles always select the solution with the optimal fitness value as G_t. In PAPSO, if X^i_{t+1} dominates some archive members, it will replace those members and the global best position of some particles. This is the generalization of the single-objective method to the multi-objective case and is helpful for making PSO quickly approximate the true Pareto optimal front.
The above procedure also ensures that each archive member acts as the global best position of at least one particle. For MOPSO with a small archive and a big population, if some archive members do not act as G_t for any particle, these archive members cannot participate in the new search of MOPSO and cannot guide particles to fly towards new regions. Moreover, if the particle swarm just follows part of the archive members, MOPSO can hardly approximate the whole Pareto optimal front. Thus, the G_t selection approach of PAPSO is helpful for obtaining high diversity performance.
5.4. Inclusion of mutation

The external archive greatly influences the performance of MOPSO. If the archive cannot be updated incessantly or the archive members are located in a narrow region, the search may stagnate or converge to only part of the Pareto optimal front. This motivates the introduction of a mutation in PAPSO to produce new non-dominated solutions and lead the evolution of the whole swarm.
In PAPSO, the mutation process is described as follows:

(1) Select some archive members, perform mutation on copies of these chosen members and store all new non-dominated solutions in set X.
(2) Implement the hybrid procedure of archive maintenance and G_t selection again by using the solutions in set X. The new solutions excluded from the archive are thrown away.


In sub-step (2), the hybrid procedure shown in Section 5.3 starts from step 2.
Mutation operators have been used in MOPSO before. Coello et al. (2004) proposed a method of merging mutation and MOPSO. Their mutation was applied not only to the particles of the swarm, but also to the range of each decision variable. At the beginning, all particles are affected by mutation, and then the number of particles affected by mutation diminishes over time in terms of a nonlinear function. Our mutation is applied to the archive members and exists in the whole search process of PAPSO.
6. Simulation results
In this section, we first describe SPEA2, a MOPSO (Coello et al., 2004) called MOPSO for simplicity, and the test problems. Then we perform sensitivity analyses and finally compare the computational results of the three algorithms.
6.1. Algorithm description and test problems
In SPEA2, each individual in the main population and the external archive is assigned a strength value, which incorporates both dominance and density information. On the basis of the strength value, the final rank value is determined by the summation of the strengths of the points that dominate the current individual. Meanwhile, a k-th nearest neighbor density estimation method is applied to obtain the density value of each individual. The final fitness value is the sum of the rank and density values. In addition, a truncation method is used in the elitist archive in order to maintain a constant number of individuals in the archive.
In MOPSO, a secondary repository of particles is used by other particles to guide their own flight, and the adaptive grid method of PAES is used to maintain the external repository. Meanwhile, a roulette-wheel method is used to select the global best position for each particle.
Eighteen JSSPs are used to test the performance of PAPSO, SPEA2 and MOPSO. For problems with 10 jobs, the due dates of jobs 2 and 3 are 1.5 times the total processing time of the corresponding job, the deadline of job 10 is equal to its total processing time and the due date of every other job is twice the corresponding total processing time. For the problem with 20 jobs, the due dates of jobs 2, 3 and 11 are 1.5 times the corresponding total processing time, the deadline of job 20 is equal to its total processing time and the due date of every other job is twice the corresponding total processing time.
6.2. Sensitivity analyses
In this section, the impact of the inertia weight and the two learning factors on the performance of PAPSO is discussed; the computational results are shown in Tables 2 and 3. The other parameters of PAPSO are shown in Table 4.
A metric is defined to test the performance of PAPSO with different parameters: first, for each algorithm, the non-dominated solutions are chosen from all external archives obtained by the algorithm in all runs and stored in a set H; then the non-dominated solutions are chosen from the set H. Suppose that n_tot is the total number of non-dominated solutions in H; if algorithm Y_i produces n_{Y_i} of these n_tot solutions, the metric q_{Y_i} of algorithm Y_i is the ratio of n_{Y_i} to n_tot:

q_{Y_i} = n_{Y_i} / n_tot.    (5)
The inertia weight is first tested between 0.4 and 0.9 in increments of 0.1. When the inertia weight is tested, the two learning factors are set to 2.0. As shown in Table 2, the settings w = 0.4 and 0.5 are superior to the other settings. Especially for w = 0.5, PAPSO with this setting produces more non-dominated solutions than PAPSO with any other setting for 9 of 17 instances. The settings w = 0.6 and 0.7 are notably worse than the other settings. Then the two learning factors are tested between 1.5 and 2.0 in increments of 0.1; the two factors are always assigned the same value and the inertia weight is 0.5 in the test procedure. From Table 3, when the two learning factors are


Table 2
The computational results of PAPSO with different inertia weights

Problem   w = 0.4   w = 0.5   w = 0.6   w = 0.7   w = 0.8   w = 0.9
FT10      0.333     0.000     0.133     0.133     0.133     0.268
FT20      0.900     0.100     0.000     0.000     0.000     0.000
ABZ5      0.167     0.067     0.000     0.333     0.333     0.100
ABZ6      0.154     0.231     0.154     0.154     0.0776    0.231
ABZ7      0.285     0.000     0.000     0.143     0.571     0.000
ABZ8      0.084     0.416     0.500     0.100     0.134     0.166
ORB1      0.25      0.417     0.1665    0.1665    0.000     0.000
ORB2      0.25      0.25      0.125     0.125     0.125     0.125
ORB3      0.166     0.25      0.000     0.000     0.25      0.334
ORB4      0.292     0.333     0.000     0.167     0.167     0.041
ORB5      0.143     0.571     0.000     0.143     0.143     0.000
LA26      0.231     0.538     0.000     0.0777    0.154     0.000
LA27      0.000     0.000     0.000     1.000     0.000     0.000
LA28      0.143     0.571     0.000     0.000     0.000     0.286
LA14      0.182     0.000     0.000     0.000     0.818     0.000
LA15      0.388     0.388     0.056     0.112     0.000     0.056
LA16      0.125     0.1875    0.0625    0.125     0.375     0.125

Table 3
The computational results of PAPSO with different learning factors

Problem   1.5       1.6       1.7       1.8       1.9       2.0
FT10      0.000     0.000     0.417     0.333     0.25      0.000
FT20      0.000     0.308     0.000     0.308     0.231     0.153
ABZ5      0.167     0.000     0.000     0.333     0.000     0.500
ABZ6      0.200     0.133     0.133     0.266     0.068     0.200
ABZ7      0.500     0.125     0.125     0.000     0.125     0.125
ABZ8      0.133     0.534     0.200     0.000     0.000     0.133
ORB1      0.182     0.364     0.000     0.182     0.000     0.232
ORB2      0.000     0.000     0.143     0.143     0.286     0.428
ORB3      0.118     0.118     0.000     0.000     0.646     0.118
ORB4      0.071     0.143     0.071     0.215     0.357     0.143
ORB5      0.25      0.000     0.25      0.000     0.000     0.500
LA26      0.152     0.000     0.30      0.000     0.000     0.538
LA27      0.000     0.000     0.333     0.333     0.334     0.000
LA28      0.20      0.000     0.300     0.000     0.100     0.400
LA14      0.000     0.25      0.000     0.125     0.625     0.000
LA15      0.222     0.389     0.000     0.389     0.000     0.000
LA16      0.066     0.268     0.133     0.467     0.000     0.066

Table 4
Parameter settings of the three algorithms

PAPSO: N = 80, N′ = 20, w = 0.5, r1 = r2 = 2, h = 0.8, u1 = 1
MOPSO: N = 80, N′ = 20, d = 30
SPEA2: N = 80, N′ = 20, pc = 0.9, pm = 0.1

pc, crossover probability; pm, mutation probability; d, the number of subdivisions of the range of each objective.

equal to 1.8, 1.9 and 2.0, the performance of PAPSO does not significantly vary with these factors and is better than that of PAPSO with learning factors of 1.5, 1.6 or 1.7. Thus, we choose an inertia weight of 0.5 and two learning factors of 2.0.


6.3. Results and discussions


Metric C (Zitzler & Thiele, 1999) is used to compare the approximate Pareto optimal sets obtained by the three algorithms. C(L, B) measures the fraction of members of B that are dominated by members of L:

C(L, B) = |{b ∈ B : ∃ h ∈ L, h ≻ b}| / |B|.    (6)
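Metric C can be sketched in a few lines (a minimal illustration; dominance as in Definition 1, objectives maximized):

```python
def coverage(L, B):
    """Metric C(L, B): the fraction of members of B that are dominated
    by at least one member of L."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return sum(1 for b in B if any(dominates(h, b) for h in L)) / len(B)
```

Note that C(L, B) and C(B, L) are independent quantities, which is why Table 5 reports both directions for each pair of algorithms.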

For a multi-objective optimization algorithm with elitism, the ratio of the population size to the maximum size of the archive is frequently set to 4 to 1 to maintain an adequate selection pressure for elite solutions, so we choose a population scale of 80 and a maximum archive size of 20. We use d = 30 following the recommendation of Coello et al. (2004). We found that SPEA2 with a crossover probability of 0.9 and a mutation probability of 0.1 yielded the best results. So we choose the parameters of Table 4 for the three algorithms.
All algorithms use the priority rule-based representation. The two kinds of MOPSO adopt the approach in Section 4 to make each chromosome correspond to a real vector. We use two-point crossover and two-point mutation in SPEA2. Two-point crossover is described below: first randomly select two genes from the chromosome and then exchange the genes between the chosen genes. Two-point mutation also has two steps: first stochastically choose two genes and then alter the values of the chosen genes. When the number of objective function evaluations reaches 20,000, the algorithm terminates the search. Each algorithm is randomly run 20 times for each instance.
In this study, polynomial mutation (Deb & Agrawal, 1995) with ηm = 20 is applied to one real variable corresponding to a sub-string of the chromosome of each archive member. Polynomial mutation is described below. If polynomial mutation is performed on some gene xi of individual X = (x1, x2, ..., xn), a new gene is produced in the following way:

xi' = xi + (bi − ai)δ(u),    (7)

where

δ(u) = (2u)^(1/(ηm+1)) − 1                  if u < 0.5,
δ(u) = 1 − (2(1 − u))^(1/(ηm+1))            otherwise,

u is a random number following the uniform distribution on [0, 1], ηm is a constant often set to 20, and [ai, bi] is the domain of the ith decision variable of the problem. For the JSSP, ai and bi are determined by the converting procedure shown in Section 4.2 and [ai, bi] is the domain of the variable xi; for the instance in Section 4.2, ai = 0, bi = 6.24, i = 1, 2, 3, 4.
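The mutation above can be sketched as follows (the optional u argument and the clipping to [a, b] are our additions for reproducibility and safety; they are not stated in the operator as published):

```python
import random

def polynomial_mutation(x, a, b, eta_m=20.0, u=None):
    """Polynomial mutation of one real variable x on [a, b] (Deb & Agrawal, 1995)."""
    if u is None:
        u = random.random()          # uniform random number on [0, 1]
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0          # delta in [-1, 0)
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))  # delta in [0, 1)
    return min(max(x + (b - a) * delta, a), b)                    # clip to the domain

# Domain from the Section 4.2 instance: a_i = 0, b_i = 6.24.
print(polynomial_mutation(3.0, 0.0, 6.24, u=0.5))  # delta = 0, so x is unchanged: 3.0
```

With ηm = 20, |δ| is strongly biased toward 0, so most mutations are small local perturbations of the archived real vector.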
If Y1, Y2 and Y3 are used to denote PAPSO, MOPSO and SPEA2, respectively, then C(Yi, Yj) indicates the fraction of all non-dominated solutions stored in the archive of Yj in 20 runs that are dominated by the non-dominated ones obtained by Yi in all runs. Table 5 shows the computational results. In Table 5, the data in all columns except

Table 5
The comparative results of the three algorithms

Problem   C(Y1, Y2)    C(Y2, Y1)    C(Y3, Y2)    C(Y2, Y3)    C(Y1, Y3)    C(Y3, Y1)
FT06      0.000, 5     0.000, 5     0.000, 5     0.000, 5     0.000, 5     0.000, 5
FT10      0.785, 2     0.000, 10    1.000, 0     0.000, 6     0.000, 6     0.400, 6
FT20      1.000, 0     0.000, 6     1.000, 0     0.333, 6     0.667, 3     0.000, 8
ABZ5      1.000, 0     0.000, 6     0.800, 2     0.250, 3     0.750, 1     0.166, 5
ABZ6      0.910, 1     0.273, 8     0.818, 2     0.230, 10    0.615, 5     0.353, 7
ABZ7      0.555, 4     0.000, 15    1.000, 0     0.000, 6     0.000, 6     0.665, 5
ABZ8      0.800, 2     0.200, 10    0.900, 1     0.166, 10    0.500, 6     0.250, 9
ORB1      0.916, 1     0.210, 14    0.833, 2     0.200, 8     0.400, 6     0.235, 13
ORB2      1.000, 0     0.000, 6     0.750, 1     0.200, 3     0.800, 1     0.000, 6
ORB3      0.818, 2     0.143, 10    1.000, 0     0.230, 10    0.380, 8     0.531, 6
ORB4      1.000, 0     0.060, 14    1.000, 0     0.150, 17    0.650, 7     0.200, 12
ORB5      1.000, 0     0.000, 8     0.909, 1     0.400, 3     0.800, 1     0.000, 8
LA26      0.667, 3     0.000, 8     0.445, 5     0.181, 9     0.363, 7     0.000, 8
LA27      1.000, 0     0.000, 7     0.665, 3     0.143, 6     0.571, 3     0.000, 7
LA28      1.000, 0     0.000, 6     0.500, 4     0.166, 10    1.000, 0     0.000, 6
LA14      0.888, 1     0.200, 8     1.000, 0     0.000, 10    0.000, 10    0.900, 1
LA15      0.909, 1     0.000, 11    0.727, 3     0.230, 10    0.384, 8     0.181, 9
LA16      1.000, 0     0.071, 13    0.777, 2     0.076, 12    0.615, 5     0.143, 12

the first column is related to C(Yi, Yj) and consists of two parts: the first is the value of C(Yi, Yj) and the second is the number of non-dominated solutions finally obtained by Yj after the archive members of Yi have been compared with those of Yj.
As shown in Table 5, PAPSO produces more non-dominated solutions than MOPSO for all problems except FT06, and most of the archive members obtained by MOPSO are dominated by archive members of PAPSO. PAPSO has notably better performance than MOPSO. MOPSO is also inferior to SPEA2. Compared with SPEA2, PAPSO obtains better computational results for 13 of the 18 problems; in particular, for nine problems (FT20, ABZ5, LA26–LA28, ORB2, ORB4, ORB5 and LA16), C(Y1, Y3) − C(Y3, Y1) > 0.35. On the other hand, SPEA2 only performs better than PAPSO for FT10, ABZ7, ORB3 and LA14. Thus, PAPSO can obtain better solutions than the other algorithms.
The density-estimation metric in MOPSO only shows that a group of solutions is located in the same grid, and archive maintenance based on this metric makes the archive members distribute over a narrow region of the Pareto optimal front. Moreover, the G̃t selection also makes some of the particles in the repository unable to guide the flight of the particles in the population, and MOPSO does not approximate some parts of the Pareto front. Thus, MOPSO has low performance in job shop scheduling.
The mutation of PAPSO is performed on archive members, while the mutation of SPEA2 is performed on individuals of the population; as a result, the former more easily generates non-dominated solutions than the latter. This is the main reason that PAPSO outperforms SPEA2 for most of the problems. On the other hand, SPEA2 has a complicated procedure of archive maintenance and fitness assignment, while the structure of PAPSO is simple. As a result, PAPSO is more attractive than SPEA2 in the job shop scheduling case.
7. Conclusions
We have proposed an effective method to convert the JSSP into a continuous optimization problem. This is a new path for applying PSO to the scheduling problem. We also presented a PAPSO for the MOJSSP. Unlike previous works, PAPSO combines the global best position selection with archive maintenance. The effectiveness of PAPSO was tested on 18 benchmark problems. The computational results show that PAPSO performs better than MOPSO and obtains better computational results than SPEA2 for most of the problems.
In previous research, the application of PSO to combinatorial optimization problems such as the JSSP has seldom been investigated. The main contribution of this study is to provide an effective path for applying PSO to the JSSP. Unlike the discrete PSO, the proposed path is simple and easily implemented. Moreover, the existing improvement strategies of PSO can be directly applied to obtain high-quality solutions. In the near future, we will further investigate this new path to handle multi-objective scheduling with PSO and carry out research into the application of MOPSO to other scheduling problems such as flexible job shop scheduling.
Acknowledgements
This research is supported by the Hubei Provincial Science and Technology Department of China under Science Foundation Project 2007ABA332. The author also wants to express his deepest gratitude to the anonymous reviewers for their incisive and seasoned suggestions.
References
Arroyo, J. E. C., & Armentano, V. A. (2005). Genetic local search for multi-objective flow shop scheduling problems. European Journal of Operational Research, 167, 717–738.
Coello, C. A. C., Pulido, G. T., & Lechuga, M. S. (2004). Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 256–279.
Deb, K., & Agrawal, R. B. (1995). Simulated binary crossover for continuous search space. Complex Systems, 9, 115–148.
Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182–197.
Esquivel, S., Ferrero, S., Gallard, R., Salto, C., Alfonso, H., & Schutz, M. (2002). Enhanced evolutionary algorithms for single and multi-objective optimization in the job shop scheduling problem. Knowledge-Based Systems, 15, 13–25.
Giffler, B., & Thompson, G. L. (1960). Algorithms for solving production scheduling problems. Operations Research, 8, 487–503.

Hu, X., Eberhart, R. C., & Shi, Y. (2003). Swarm intelligence for permutation optimization: A case study of N-queens problem. Proceedings of the IEEE Swarm Intelligence Symposium, 243–246.
Kacem, I., Hammadi, S., & Borne, P. (2002). Approach by localization and multi-objective evolutionary optimization for flexible job shop scheduling problems. IEEE Transactions on Systems, Man and Cybernetics, Part C, 32(1), 1–13.
Knowles, J. D., & Corne, D. W. (2000). Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2), 149–172.
Lei, D., & Wu, Z. (2006). Crowding-measure-based multi-objective evolutionary algorithm for job shop scheduling. International Journal of Advanced Manufacturing Technology, 30(1–2), 112–117.
Ponnambalam, S. G., Ramkumar, V., & Jawahar, N. (2001). A multi-objective genetic algorithm for job shop scheduling. Production Planning and Control, 12(8), 764–774.
Sakawa, M., & Kubota, R. (2000). Fuzzy programming for multi-objective job shop scheduling with fuzzy processing time and fuzzy due date through genetic algorithm. European Journal of Operational Research, 120, 393–407.
Xia, W., & Wu, Z. (2005). An effective hybridization approach for multi-objective flexible job-shop scheduling. Computers and Industrial Engineering, 48(2), 409–425.
Zitzler, E., Laumanns, M., & Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm. Swiss Federal Institute of Technology, Lausanne, Switzerland, Tech. Rep. TIK-Rep. 103.
Zitzler, E., & Thiele, L. (1999). Multi-objective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4), 257–271.
