

2nd Conference on Swarm Intelligence and Evolutionary Computation (CSIEC2017), Shahid Bahonar University of Kerman, Iran, 2017

MOCSA: A Multi-Objective Crow Search Algorithm for Multi-Objective Optimization

Hadi Nobahari, Ariyan Bighashdel


Department of Aerospace Engineering
Sharif University of Technology
Tehran, Iran
nobahari@sharif.edu

Abstract-In this paper, an extension of the recently developed Crow Search Algorithm (CSA) to multi-objective optimization problems is presented. The proposed algorithm, called Multi-Objective Crow Search Algorithm (MOCSA), defines the fitness function using a set of determined weight vectors, employing the max-min strategy. In order to improve the efficiency of the search, the performance space is regionalized using specific control points. A new chasing operator is also employed in order to improve the convergence process. Numerical results show that MOCSA is closely comparable to well-known multi-objective algorithms.

Keywords-Crow Search Algorithm; multi-objective optimization; weight vectors; max-min strategy

I. INTRODUCTION

Over the past decades, multi-objective optimization problems (MOOPs) have attracted much attention, since real-world problems often involve several conflicting objectives. Unlike a single-objective optimization problem (SOOP), in which there is only one optimal point, a typical MOOP has a set of solutions which are superior to the rest of the solutions in the search space when all objectives are considered, but are inferior to other solutions in one or more objectives [1]. These solutions are known as Pareto-optimal solutions or non-dominated solutions [2]. Generating Pareto-optimal solutions plays an important role in multi-objective optimization, and mathematically the problem is considered to be solved when the Pareto-optimal set is found [3]. In the development of multi-objective algorithms, there has been an intrinsic interest in employing population-based algorithms, such as evolutionary algorithms (EAs), for MOOPs. The primary reason is their ability to find multiple Pareto-optimal solutions in a single run [4]. Therefore, many multi-objective variants of population-based metaheuristic algorithms have been proposed, including VEGA [5], NSGA [1], PAIA [6], MOPSOID [7], NSGSA [8] and so on.

Crow Search Algorithm (CSA) is a new nature-inspired metaheuristic optimizer based on the intelligent behavior of crows [9]. CSA is a population-based technique built on the crows' behavior of storing their excess food in hiding places and retrieving it when it is needed. Due to their greedy characteristics, crows follow each other to obtain better food sources, but their intelligence makes this procedure much harder: they fool each other to protect their caches [9]. The main problem in proposing a multi-objective variant of CSA is the updating of the crows' caches based on the values of multiple objectives, and also the chasing procedure. In other words, the problem of fitness assignment arises, which is one of the crucial issues in dealing with MOOPs.

In general, the two common categories of strategies to assign fitness in a MOOP are aggregation-based and Pareto/dominance-based [10]. After Goldberg's suggestion about the concept of domination [11], many researchers proposed multi-objective optimization algorithms using the same concept. Fonseca and Fleming [12] described a rank-based fitness assignment method for Multiple Objective Genetic Algorithms (MOGAs). Horn et al. [13] modified the GA to deal with multiple objectives by incorporating the concept of Pareto domination in its selection operator and applying a niching pressure to spread its population out along the Pareto-optimal tradeoff surface, introducing the Niched Pareto GA (NPGA). Srinivas and Deb [1] presented another algorithm, named the Non-dominated Sorting Genetic Algorithm (NSGA), which was different from the two previous implementations and was closer to Goldberg's suggestion. Until then, all proposed domination-based multi-objective algorithms were criticized for (i) their computational complexity, (ii) their non-elitist approach, and (iii) the need for specifying a sharing parameter [4]. Accordingly, Deb et al. [4] suggested a non-dominated sorting based multi-objective evolutionary algorithm (NSGA-II). It was an improved version of NSGA which alleviated all the mentioned difficulties. With the emergence of NSGA-II and its impressive success, more and more attention was attracted to this type of fitness assignment. Zeng et al. [14], Reddy and Kumar [15], Al Jadaan et al. [16], Ghiasi et al. [17], Nobahari et al. [8], Costa et al. [10] and Salcedo-Sanz et al. [18] all proposed multi-objective algorithms using the same concepts, and their results indicated more or less improvement in the convergence and distribution of the Pareto front. However, the steps of finding the non-dominated solutions in all these algorithms are complicated and time consuming [19].

In addition to dominance-based approaches, a MOOP can also be solved by scalarization. Scalarization means converting the MOOP into a single or a family of single-objective optimization problems with a real-valued objective function, termed the scalarizing function [3]. A Pareto-optimal solution of a MOOP can be an optimal solution of a scalar optimization problem in which the objective is an aggregation function of all individual objectives. Thus, a MOOP can be decomposed into a number of single-objective optimization sub-problems. This is the basic idea of the aggregating or decomposition approach [7].


Schaffer was probably the first who employed scalarization in multi-objective optimization [20]. He proposed the Vector Evaluated Genetic Algorithm (VEGA) to find the Pareto-optimal solutions of MOOPs. In VEGA, the population is divided into disjoint sub-populations, each of which is governed by one objective function. Despite the relatively good reported results, VEGA seemed to be able to find only extreme solutions on the Pareto front [21]. In order to employ various search directions to find all non-dominated solutions of a MOOP, Murata and Ishibuchi [21] employed a novel method and proposed a Multi-Objective Genetic Algorithm (MOGA). MOGA used a weighted sum of the multiple objective functions to combine them into a scalar fitness function, but its characteristic feature was that the weights attached to the multiple objective functions were generated randomly. The method of random weights was later used by Leung and Wang [22], Zhang and Li [23] and Yang et al. [24]. All of these algorithms benefit from many advantages, e.g. simplicity and ease of implementation. Nevertheless, they were unable to find the solutions on the non-convex parts of the Pareto frontier [25]. In order to overcome this drawback while maintaining the advantages, Liu and Li [26] proposed a new fitness function scheme in which each individual fitness is defined as the maximum value of the weighted normalized objectives, and the weights are properly constructed via the sphere coordinate transformation and uniform design. The proposed method, called the max-min strategy, can gradually find approximately uniformly distributed solutions on the Pareto frontier regardless of whether the Pareto front is convex or non-convex [25].

In this paper, an extension of the CSA to multi-objective optimization problems is presented. The proposed algorithm, called MOCSA, defines the fitness function using a set of predetermined weight vectors, employing the max-min strategy and the spherical coordinate transformation, similar to the one in [26]. In order to improve the efficiency of the search, the performance space is divided into sub-regions using specific control points. A new chasing mechanism is also employed in order to improve the convergence process. The remainder of the paper is organized as follows: Section II describes the structure of the CSA. In Section III, the details of MOCSA are presented. Section IV presents the numerical results and, finally, the conclusions are drawn in Section V.

II. CROW SEARCH ALGORITHM

In this section, CSA is briefly described; more details on this algorithm can be found in [9]. CSA works based on four principles, listed below:

Crows live in the form of a flock
Crows memorize the positions of their hiding places
Crows follow each other to do thievery
Crows protect their caches from being pilfered by a probability

Assume a d-dimensional environment including a flock of n crows. The position of each crow (crow i) at any time (iteration iter) in the search space is specified by a vector $x^{i,iter} = (x_1, x_2, \ldots, x_d)$. Each crow has a memory in which the position of its hiding place is memorized, $m^{i,iter} = (m_1, m_2, \ldots, m_d)$. This is the best position that crow i has obtained so far. Assume that at iteration iter, crow j wants to visit its hiding place and, at the same time, crow i decides to follow crow j. In this situation, two states may happen:

State A: Crow j does not realize that it has been followed by another crow. In this case, the new position of crow i is obtained as follows:

$x^{i,iter+1} = x^{i,iter} + r_i \cdot FL^{i,iter} \cdot (m^{j,iter} - x^{i,iter})$   (1)

where $r_i$ is a random number with uniform distribution between 0 and 1 and $FL^{i,iter}$ denotes the flight length of crow i at iteration iter.

State B: Crow j realizes that it has been followed by another crow. As a result, in order to protect its cache from being pilfered, crow j fools crow i by going to another position in the search space.

In general, the two states can be expressed as follows:

$x^{i,iter+1} = \begin{cases} x^{i,iter} + r_i \cdot FL^{i,iter} \cdot (m^{j,iter} - x^{i,iter}) & \text{if } r_j \ge AP^{j,iter} \\ \text{a random position} & \text{otherwise} \end{cases}$   (2)

where $r_j$ is a random number with uniform distribution between 0 and 1 and $AP^{j,iter}$ denotes the Awareness Probability of crow j at iteration iter.
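As an illustration of the update rule above, the following minimal NumPy sketch (not part of the original CSA description; the search bounds and parameter values are illustrative assumptions) applies Eqs. (1)-(2) to a single crow:

import numpy as np

rng = np.random.default_rng(0)

def csa_update(x_i, m_j, fl, ap, lower, upper):
    # One CSA position update for crow i following crow j, Eqs. (1)-(2).
    if rng.random() >= ap:                               # State A: crow j is unaware
        return x_i + rng.random() * fl * (m_j - x_i)     # Eq. (1)
    # State B: crow j notices the pursuit, so crow i ends up at a random position
    return rng.uniform(lower, upper, size=x_i.shape)

# toy usage in a 5-dimensional search space
x_i = rng.uniform(-5.0, 5.0, size=5)
m_j = rng.uniform(-5.0, 5.0, size=5)
print(csa_update(x_i, m_j, fl=2.0, ap=0.3, lower=-5.0, upper=5.0))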
III. MULTI-OBJECTIVE CROW SEARCH ALGORITHM

In this section, a multi-objective variant of CSA, called MOCSA, is introduced. The fitness assignment during memory updating and the chasing procedure is handled based on the aggregating approach. As mentioned before, in the aggregating approach the MOOP is converted into a family of single-objective problems in which the transformed objectives are aggregation functions of all individual objectives. The conversion is performed using a set of weight vectors, which can be generated either randomly or systematically. In this study, similar to the method of Liu and Li [26], the weight vectors are generated via the sphere coordinate transformation and uniform design. The transformed fitness functions are obtained using these predetermined weights. After the construction of the transformed fitness functions, the updating and chasing processes take place. The details of MOCSA are described in the following subsections.

A. Construction of the transformed fitness function

A typical weight vector (the ith weight vector) is defined as follows [26]:


$w^i = \left( \frac{1}{\cos\theta_1^i},\ \frac{1}{\sin\theta_1^i \cos\theta_2^i},\ \ldots,\ \frac{1}{\sin\theta_1^i \sin\theta_2^i \cdots \sin\theta_{m-2}^i \cos\theta_{m-1}^i},\ \frac{1}{\sin\theta_1^i \sin\theta_2^i \cdots \sin\theta_{m-2}^i \sin\theta_{m-1}^i} \right)$   (3)

where $\theta^i = (\theta_1^i, \theta_2^i, \ldots, \theta_{m-1}^i)$ are the generalized spherical coordinates in the performance space and m is the number of objectives. With a flock of size n, we can generate p (n is a multiple of p) evenly distributed Memorized Points (MPs), $a_1, a_2, \ldots, a_p$, on the unit hypersphere in the first quadrant by appropriately selecting $\theta^i$ (i = 1, 2, ..., p), leading to p Memorized Weight Vectors (MWVs), $w^1, w^2, \ldots, w^p$. The fitness vector of crow c in the performance space is denoted by $f^c = (f_1^c, f_2^c, \ldots, f_m^c)$. Let

$h_j^c = \log_2\left(1 + f_j^c - f_j^*\right), \quad j = 1, 2, \ldots, m$   (4)

where $f_j^*$ is the instantaneous minimum (in the case of minimization) of the fitness values in dimension j. The transformed fitness function of crow c with respect to the ith MP is defined as

$F_i^c(x) = \max_{1 \le j \le m} \left\{ w_j^i\, h_j^c \right\}$   (5)
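To make Eqs. (3)-(5) concrete, a minimal sketch is given below; the helper names spherical_weights and transformed_fitness are introduced here only for illustration, and the uniform-design selection of the angles (as in [26]) is not reproduced:

import numpy as np

def spherical_weights(theta):
    # Weight vector of Eq. (3); theta holds the m-1 generalized spherical angles.
    theta = np.asarray(theta, dtype=float)
    m = theta.size + 1
    w = np.empty(m)
    for j in range(m - 1):
        w[j] = 1.0 / (np.prod(np.sin(theta[:j])) * np.cos(theta[j]))
    w[m - 1] = 1.0 / np.prod(np.sin(theta))
    return w

def transformed_fitness(f, f_min, w):
    # Eqs. (4)-(5): max over j of w_j * log2(1 + f_j - f_j*).
    h = np.log2(1.0 + np.asarray(f, dtype=float) - np.asarray(f_min, dtype=float))
    return float(np.max(w * h))

# toy usage: one angle of pi/4 corresponds to m = 2 objectives
w = spherical_weights([np.pi / 4.0])
print(transformed_fitness(f=[0.8, 0.3], f_min=[0.1, 0.05], w=w))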
B. Classification of MPs, MWVs and crows

In order to improve the efficiency of the search, the MPs and MWVs are divided into q (q < p) classes. First, we generate q evenly distributed Control Points (CPs), $c_1, c_2, \ldots, c_q$, on the unit hypersphere in the first quadrant. Then we calculate the Euclidean distance between each MP and the CPs and put the nearest MPs of each CP in a specific class (see Fig. 1a). In this way, the MPs and the corresponding MWVs are classified into q classes; obviously, the total number of classes is q (note that the class lengths are not necessarily equal).

The crows are also grouped. First, all crows' fitness vectors are mapped onto the unit hypersphere. The mapped fitness vector of crow c, $u^c$, is calculated as

$u_j^c = \frac{h_j^c}{\sqrt{\sum_{k=1}^{m} (h_k^c)^2}}, \quad j = 1, 2, \ldots, m$   (6)

Suppose that the bth class has length $l_b$. We calculate the Euclidean distance between each mapped fitness vector and the bth CP and put the $(n/p) \cdot l_b$ crows with the least distance from the bth CP in the bth group of crows (see Fig. 1b). Each crow-group corresponds to the class of the same CP. Obviously, the total number of groups is q.

Figure 1. The definition of Class (a) and Crow-Group (b) in the performance space
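The classification and grouping described above amount to a nearest-neighbour assignment on the unit hypersphere. The sketch below illustrates this idea under the assumption that the mapping of Eq. (6) is a Euclidean normalization; the balancing rule that caps each group at (n/p)*l_b members is omitted for brevity:

import numpy as np

def map_to_hypersphere(h):
    # One reading of Eq. (6): normalize transformed fitness vectors onto the unit hypersphere.
    h = np.asarray(h, dtype=float)
    return h / np.linalg.norm(h, axis=-1, keepdims=True)

def nearest_control_point(points, cps):
    # Index of the closest CP (Euclidean distance) for each mapped point.
    d = np.linalg.norm(points[:, None, :] - cps[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

# toy usage: 6 mapped fitness vectors and 2 CPs for a 2-objective problem
rng = np.random.default_rng(1)
pts = map_to_hypersphere(rng.random((6, 2)) + 0.1)
cps = map_to_hypersphere(np.array([[1.0, 0.2], [0.2, 1.0]]))
print(nearest_control_point(pts, cps))   # class/group index of each mapped point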
C. Chasing process

There is a small difference between the chasing process of MOCSA and that of CSA. As mentioned in Section II, two states may happen when crow j wants to visit its hiding place and crow i decides to chase it. When state A happens, i.e. crow j does not realize the chasing, there is no difference and the same process is performed. But when the latter happens and crow j realizes the chasing, crow i moves towards the target but is misled in some directions. For this purpose, we introduce a parameter called MdR, the misdirection rate. Thus, crow i is misdirected in each dimension with the probability MdR.
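The paper does not detail the misdirection mechanism beyond the per-dimension probability MdR, so the following sketch is only one plausible reading (an assumption, not the authors' specification): each misdirected coordinate of the chasing step is replaced by a random value within the search bounds.

import numpy as np

rng = np.random.default_rng(2)

def mocsa_chase(x_i, target, fl, mdr, lower, upper):
    # Chasing step toward an S-group member with per-dimension misdirection (assumed form).
    step = x_i + rng.random() * fl * (target - x_i)
    misled = rng.random(x_i.shape) < mdr              # dimensions misdirected with probability MdR
    step[misled] = rng.uniform(lower, upper, size=int(misled.sum()))
    return step

# toy usage
x_i = rng.uniform(-5.0, 5.0, size=5)
target = rng.uniform(-5.0, 5.0, size=5)
print(mocsa_chase(x_i, target, fl=2.0, mdr=0.02, lower=-5.0, upper=5.0))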
D. Main framework

Fig. 2 shows the general iterative steps of MOCSA in the form of a pseudo code, which are discussed in the following subsections.

Initialization: In this stage, all MPs, MWVs and CPs are generated and classified. The crows are spread randomly within the search space and are classified. In each group of crows, the transformed fitness functions of the members are calculated and sorted l times (l is the length of the corresponding class), and the best one is selected each time based on one MP and MWV. The best crows of each group form a new group, called an S-group (note that each S-group corresponds to one group of crows and one class). Each S-group consists of l crows, each of which is related to one specific MP. Since at the initialization step the members of the S-groups have no experience, it is assumed that they have hidden their food at their initial positions. At each iteration, the positions of the S-groups form the Pareto set and their fitnesses form the Pareto front.


Main Loop: At each iteration, the crows of each group only chase members of the corresponding S-group. Suppose a typical class has a length of l. First, l greedy crows from the group are selected and then each greedy crow chases one member of the corresponding S-group. Therefore, at the end of each iteration there are p (the number of MPs) greedy crows, each of which has a new position and fitness vector and needs to be classified. For this purpose, we first map them onto the unit hypersphere and calculate their Euclidean distance from each CP. Then, the l nearest greedy crows form a group called an N-group. Note that each N-group has l members and corresponds to one S-group, one class and one group of crows. In each N-group, the transformed fitness functions of the members are calculated and sorted l times (each time based on one MP). After each sorting, the best crow is compared with the hiding place position of the corresponding member of the S-group and replaces it if it possesses a better hiding place. When the iteration count reaches its maximum value, the algorithm stops and the final Pareto front is given by the final fitnesses of all S-groups.

Pseudo Code
Initialization
1. Generate and classify MPs, MWVs and CPs
2. Initialize the positions of the crows and classify them
3. Form S-groups
Main Loop
While iter < max iteration
4. Select greedy crows
5. Perform the chasing process
6. Form N-groups
7. Update S-groups
End

Figure 2. MOCSA Pseudo Code

IV. RESULTS

To evaluate the performance of MOCSA, we simulate 13 unconstrained multi-objective test problems from the literature [27] and compare the results with 10 well-known algorithms, namely ClusteringMOEA [28], DECMOSA-SQP [29], DMOEADD [30], GDE3 [31], LiuLi [26], MOEAD [32], MOEP [19], MTS [33], OWMOSaDE [34], and NSGAIILS [35]. The parameters of the algorithm are set as follows. The dimensions of all problems are 30. The flock size is 600 for the two-objective problems, 900 for the three-objective problems and 4290 for the five-objective problems. The number of MPs (p) and CPs (q) is also defined based on the optimization problem. All parameters are listed in Table I.

TABLE I. PARAMETERS OF THE ALGORITHM

Parameters        Two-Objective   Three-Objective   Five-Objective
Dimension         30              30                30
Flock Size        600             900               4290
FL                2               2                 2
AP                0.3             0.22              0.2
MdR               0.02            0.015             0.1
Number of MPs     100             150               715
Number of CPs     15              36                126

In order to measure the quality of the solutions, the IGD metric has been used [27]. IGD is defined as follows. Let P* be a set of uniformly distributed points along the Pareto front (in the performance space) and let A be an approximation set of the Pareto front. The average distance from P* to A is defined as

$IGD(A, P^*) = \frac{\sum_{v \in P^*} d(v, A)}{|P^*|}$   (7)

where d(v, A) is the minimum Euclidean distance between v and the points in A.
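A minimal implementation of Eq. (7), given here only to fix the convention (smaller IGD is better), can be sketched as follows:

import numpy as np

def igd(reference_front, approximation):
    # Inverted Generational Distance, Eq. (7).
    P = np.asarray(reference_front, dtype=float)   # P*: uniformly sampled Pareto-front points
    A = np.asarray(approximation, dtype=float)     # A : obtained approximation set
    d = np.linalg.norm(P[:, None, :] - A[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())             # mean nearest-neighbour distance

# toy usage with a 2-objective front
P_star = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
A = np.array([[0.1, 0.9], [0.9, 0.1]])
print(igd(P_star, A))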
For each problem, 30 random simulations are performed and the mean and standard deviation (Std) of the IGD metric are reported. The respective results for the 10 mentioned algorithms, as well as MOCSA, are shown in Tables II and III. In Tables IV and V, the results of MOCSA are compared with the mean results of the other algorithms. It is observed that the proposed algorithm obtains performance comparable to the other algorithms.

TABLE II. MEAN AND STANDARD DEVIATION OF IGD FOR PROBLEMS 1 TO 7

                   Problem
Algorithm          1          2          3          4          5          6          7
MOEAD              0.0043     0.0068     0.0074     0.0638     0.1807     0.0059     0.0044
                   (0.0003)   (0.0018)   (0.0059)   (0.0053)   (0.0681)   (0.0017)   (0.0012)
GDE3               0.0053     0.0119     0.1064     0.0265     0.0393     0.2509     0.0252
                   (0.0003)   (0.0015)   (0.0130)   (0.0003)   (0.0039)   (0.0196)   (0.0089)
MTS                0.0065     0.0062     0.0531     0.0236     0.0149     0.0592     0.0408
                   (0.0003)   (0.0005)   (0.0117)   (0.0007)   (0.0033)   (0.0106)   (0.0144)
LiuLi              0.0078     0.0123     0.0150     0.0435     0.1619     0.1755     0.0073
                   (0.0021)   (0.0033)   (0.0240)   (0.0006)   (0.0282)   (0.0830)   (0.0009)
DMOEADD            0.0103     0.0068     0.0334     0.0427     0.3145     0.0667     0.0103
                   (0.0023)   (0.0020)   (0.0057)   (0.0014)   (0.0466)   (0.0238)   (0.0022)
NSGAIILS           0.0115     0.0124     0.0106     0.0584     0.5657     0.3103     0.0213
                   (0.0073)   (0.0091)   (0.0686)   (0.0051)   (0.1827)   (0.1933)   (0.0194)
OWMOSaDE           0.0122     0.0081     0.1030     0.0513     0.4303     0.1918     0.0585
                   (0.0012)   (0.0023)   (0.0190)   (0.0019)   (0.0174)   (0.0290)   (0.0291)
ClusteringMOEA     0.0299     0.0228     0.0549     0.0585     0.2473     0.0871     0.0233
                   (0.0033)   (0.0023)   (0.0147)   (0.0147)   (0.0384)   (0.0057)   (0.0020)
DECMOSA-SQP        0.0770     0.0283     0.0935     0.0340     0.1671     0.1260     0.0242
                   (0.0393)   (0.0313)   (0.1980)   (0.0054)   (0.0895)   (0.5617)   (0.0223)
MOEP               0.0596     0.0189     0.0990     0.0427     0.2245     0.1031     0.0197
                   (0.0128)   (0.0038)   (0.0132)   (0.0008)   (0.0344)   (0.0345)   (0.0008)
MOCSA              0.0061     0.0125     0.0242     0.0333     0.1157     0.2390     0.0239
                   (0.0006)   (0.0037)   (0.0103)   (0.0010)   (0.0210)   (0.0891)   (0.0005)


TABLE III. MEAN AND STANDARD DEVIATION OF IGD FOR PROBLEMS 8 TO 13

                   Problem
Algorithm          8          9          10         11         12         13
MOEAD              0.0584     0.0790     0.4741     0.1103     146.781    1.8489
                   (0.0032)   (0.0531)   (0.0736)   (0.0023)   (41.828)   (0.0198)
GDE3               0.2485     0.0824     0.4333     0.2342     202.126    3.2057
                   (0.0355)   (0.0225)   (0.0123)   (0.0100)   (37.705)   (0.0821)
MTS                0.1125     0.1144     0.1531     0.4550     350.206    1.9090
                   (0.0129)   (0.0255)   (0.0158)   (0.0372)   (45.119)   (0.0253)
LiuLi              0.0824     0.0939     0.4469     0.1325     444.825    2.2885
                   (0.0073)   (0.0471)   (0.1296)   (0.0036)   (83.775)   (0.0872)
DMOEADD            0.0684     0.0490     0.3221     1.2032     477.656    1.997
                   (0.0094)   (0.0091)   (0.0222)   (0.0713)   (93.471)   (0.0115)
NSGAIILS           0.0863     0.0719     0.8447     0.1752     158.05     3.2323
                   (0.0124)   (0.0450)   (0.1626)   (0.0071)   (40.437)   (0.2273)
OWMOSaDE           0.0945     0.0983     0.7430     0.3951     734.568    3.2573
                   (0.0119)   (0.0244)   (0.0885)   (0.0384)   (65.065)   (0.3521)
ClusteringMOEA     0.2383     0.2934     0.4111     1.2401     1039.4     3.4043
                   (0.0230)   (0.0781)   (0.0501)   (0.0954)   (290.03)   (0.1363)
DECMOSA-SQP        0.2158     0.1411     0.3699     0.3830     943.352    1.9178
                   (0.1215)   (0.3453)   (0.6532)   (0.1170)   (551.65)   (1.4243)
MOEP               0.4230     0.3420     0.3621     0.4337     885.89     2.0145
                   (0.0565)   (0.1584)   (0.0444)   (0.0354)   (77.658)   (0.0137)
MOCSA              0.0837     0.1046     0.4038     0.1276     421.164    2.5424
                   (0.0062)   (0.0461)   (0.1059)   (0.0026)   (114.95)   (0.1265)

TABLE IV. IGD RESULTS COMPARISONS FOR PROBLEMS 1 TO 7

                      Problem
                      1          2          3          4          5          6          7
MOCSA IGD Results     0.0061     0.0125     0.0242     0.0333     0.1157     0.2390     0.0239
Mean IGD Results      0.0224     0.0135     0.0576     0.0445     0.2346     0.1376     0.0235

TABLE V. IGD RESULTS COMPARISONS FOR PROBLEMS 8 TO 13

                      Problem
                      8          9          10         11         12         13
MOCSA IGD Results     0.0837     0.1046     0.4038     0.1276     421.164    2.5424
Mean IGD Results      0.1628     0.1365     0.4560     0.4762     538.282    2.5075
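For reference, the Mean IGD Results rows appear to be the average of the ten comparison algorithms' mean IGD values (MOCSA excluded); a quick check for problem 1 using the values of Table II:

# mean IGD of the ten comparison algorithms on problem 1 (values from Table II)
vals = [0.0043, 0.0053, 0.0065, 0.0078, 0.0103, 0.0115, 0.0122, 0.0299, 0.0770, 0.0596]
print(round(sum(vals) / len(vals), 4))   # 0.0224, matching the Mean IGD row of Table IV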

V. CONCLUSIONS

In this study, the single-objective Crow Search Algorithm was successfully extended to solve multi-objective optimization problems. In the proposed algorithm, the fitness functions were defined using a set of determined weight vectors and the max-min strategy. The weight vectors of the fitness function are designed in such a way that an evenly distributed Pareto front is achieved. In order to improve the efficiency of the search, the performance space was divided into sub-regions. A new chasing operator was also developed in order to improve the convergence process. The performance of MOCSA was compared with that of 10 well-known multi-objective optimization algorithms. Numerical results show that the performance of MOCSA is closely comparable to these well-known multi-objective optimization algorithms.

REFERENCES

[1] N. Srinivas and K. Deb, "Multi-objective function optimization using non-dominated sorting genetic algorithms," Evolutionary Computation, vol. 2, pp. 221-248, 1994.
[2] C. Vira and Y. Y. Haimes, Multiobjective Decision Making: Theory and Methodology. North-Holland, 1983.
[3] K. Miettinen, Nonlinear Multiobjective Optimization, vol. 12. Springer Science & Business Media, 2012.
[4] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II," in International Conference on Parallel Problem Solving From Nature, 2000, pp. 849-858.


[5] E. Zitzler and L. Thiele, "Multiobjective optimization using evolutionary algorithms - a comparative case study," in Parallel Problem Solving from Nature. Springer Berlin Heidelberg, 1998, pp. 292-301.
[6] J. Chen and M. Mahfouf, "A population adaptive based immune algorithm for solving multi-objective optimization problems," in International Conference on Artificial Immune Systems, 2006, pp. 280-293.
[7] W. Peng and Q. Zhang, "A decomposition-based multi-objective particle swarm optimization algorithm for continuous optimization problems," in IEEE International Conference, 2008, pp. 534-537.
[8] H. Nobahari, M. Nikusokhan, and P. Siarry, "A multi-objective gravitational search algorithm based on non-dominated sorting," International Journal of Swarm Intelligence Research (IJSIR), vol. 3, pp. 32-49, 2012.
[9] A. Askarzadeh, "A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm," Computers & Structures, vol. 169, pp. 1-12, 2016.
[10] M. F. P. Costa, A. M. A. Rocha, and E. M. Fernandes, "Combining non-dominance, objective-order and spread metric to extend firefly algorithm to multi-objective optimization," in International Conference on Evolutionary Multi-Criterion Optimization, 2015, pp. 292-306.
[11] J. Holland and D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[12] C. M. Fonseca and P. J. Fleming, "Genetic algorithms for multiobjective optimization: formulation, discussion and generalization," in ICGA, 1993, pp. 416-423.
[13] J. Horn, N. Nafpliotis, and D. E. Goldberg, "A niched Pareto genetic algorithm for multiobjective optimization," in IEEE World Congress on Computational Intelligence, 1994, pp. 82-87.
[14] S. Y. Zeng, L. S. Kang, and L. X. Ding, "An orthogonal multi-objective evolutionary algorithm for multi-objective optimization problems with constraints," Evolutionary Computation, vol. 12, pp. 77-98, 2004.
[15] M. Janga Reddy and D. Nagesh Kumar, "An efficient multi-objective optimization algorithm based on swarm intelligence for engineering design," Engineering Optimization, vol. 39, pp. 49-68, 2007.
[16] O. Al Jadaan, L. Rajamani, and C. Rao, "Non-dominated ranked genetic algorithm for solving multi-objective optimization problems: NRGA," Journal of Theoretical & Applied Information Technology, vol. 4, 2008.
[17] H. Ghiasi, D. Pasini, and L. Lessard, "A non-dominated sorting hybrid algorithm for multi-objective optimization of engineering problems," Engineering Optimization, vol. 43, pp. 39-59, 2011.
[18] S. Salcedo-Sanz, A. Pastor-Sanchez, J. Portilla-Figueras, and L. Prieto, "Effective multi-objective optimization with the coral reefs optimization algorithm," Engineering Optimization, vol. 48, pp. 966-984, 2016.
[19] B.-Y. Qu and P. N. Suganthan, "Multi-objective evolutionary programming without non-domination sorting is up to twenty times faster," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 2934-2939.
[20] J. D. Schaffer, "Some experiments in machine learning using vector evaluated genetic algorithms," Vanderbilt Univ., Nashville, TN (USA), 1985.
[21] T. Murata and H. Ishibuchi, "MOGA: multi-objective genetic algorithms," in IEEE International Conference on Evolutionary Computation, 1995, p. 289.
[22] Y.-W. Leung and Y. Wang, "Multiobjective programming using uniform design and genetic algorithm," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 30, pp. 293-304, 2000.
[23] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, pp. 712-731, 2007.
[24] X.-S. Yang, M. Karamanoglu, and X. He, "Multi-objective flower algorithm for optimization," Procedia Computer Science, vol. 18, pp. 861-868, 2013.
[25] H.-L. Liu, Y. Wang, and Y.-M. Cheung, "A multi-objective evolutionary algorithm using min-max strategy and sphere coordinate transformation," Intelligent Automation & Soft Computing, vol. 15, pp. 361-384, 2009.
[26] H.-L. Liu and X. Li, "The multiobjective evolutionary algorithm based on determined weight and sub-regional search," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 1928-1934.
[27] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, "Multiobjective optimization test instances for the CEC 2009 special session and competition," University of Essex, Colchester, UK and Nanyang Technological University, Singapore, Special Session on Performance Assessment of Multi-Objective Optimization Algorithms, Technical Report, vol. 264, 2008.
[28] Y. Wang, C. Dang, H. Li, L. Han, and J. Wei, "A clustering multi-objective evolutionary algorithm based on orthogonal and uniform design," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 2927-2933.
[29] A. Zamuda, J. Brest, B. Boskovic, and V. Zumer, "Differential evolution with self-adaptation and local search for constrained multiobjective optimization," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 195-202.
[30] M. Liu, X. Zou, Y. Chen, and Z. Wu, "Performance assessment of DMOEA-DD with CEC 2009 MOEA competition test instances," in IEEE Congress on Evolutionary Computation, 2009, pp. 2913-2918.
[31] S. Kukkonen and J. Lampinen, "Performance assessment of generalized differential evolution 3 with a given set of constrained multi-objective test problems," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 1943-1950.
[32] Q. Zhang, W. Liu, and H. Li, "The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances," in IEEE Congress on Evolutionary Computation, 2009, pp. 203-208.
[33] L.-Y. Tseng and C. Chen, "Multiple trajectory search for unconstrained/constrained multi-objective optimization," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 1951-1958.
[34] V. L. Huang, S. Z. Zhao, R. Mallipeddi, and P. N. Suganthan, "Multi-objective optimization using self-adaptive differential evolution algorithm," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 190-194.
[35] K. Sindhya, A. Sinha, K. Deb, and K. Miettinen, "Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems," in 2009 IEEE Congress on Evolutionary Computation, 2009, pp. 2919-2926.
