Memetic Computing (2009) 1:153–171
DOI 10.1007/s12293-009-0008-9
Received: 10 September 2008 / Accepted: 17 December 2008 / Published online: 10 February 2009
© Springer-Verlag 2009
1 Introduction
Since its earliest definition by Moscato and Norman [1], a
memetic algorithm (MA) has been perceived as a structure which
combines stand-alone elements that are supposed to interact constructively. This view, also referred to in [2], sees the
algorithmic structure as a composition of algorithmic components which compete and cooperate in
order to improve upon solutions and thus generate an efficient
optimization process.
The concept of cooperation and competition in memetic
computing was subsequently developed and generalized
when MAs containing many local searchers (multimeme
algorithms) were proposed [3,4]. In this context, the
term meta-Lamarckian learning was coined in [5] to
indicate the adaptive coordination of the various local searchers, which are supposed to compete and cooperate harmoniously with each other. In addition, as explained in [6], a robust
MA contains a list of local search algorithms with
various working mechanisms, in order to allow exploration
from complementary perspectives. A classification of algorithmic solutions for adaptively coordinating
the local search algorithms in a MA is given in [7].
Another topic strictly related to MA design is the preservation of diversity in the population of solutions. In fact,
the necessity of designing a MA arises from the poor performance, in some cases, of traditional metaheuristics. This poor
performance is often due to a premature loss of genotypic
diversity, and thus premature convergence, or to a steady behavior
of the diversity over many consecutive generations, and thus
stagnation. It is therefore fundamental to promote variation
in the diversity values by applying various algorithmic components. Some techniques aiming at controlling the diversity
in MAs have been proposed [8–12]. Also in the context of
parallel MAs [13], diversity seems to play a determinant
role, as explained in [14]. Similarly, the successful functioning of a MA can be seen as a proper balance between global
and local search, as expressed in [15,16]. Finally, in [17] the
use of both empirical experience and theoretical reasoning is
recommended in order to design an efficient MA.
In summary, a successful MA is an algorithm composed
of various components which are harmoniously coordinated by means of a proper balance between global and local
search, thus ensuring a diversity dynamic which guarantees
fast and efficient improvements during the search, until a solution with high performance is detected.
Although these recommendations are undoubtedly useful, in practice they might be very hard to follow when the
algorithm is actually implemented. For example, the concept of a proper balance between global and local search can
vary significantly in its practical meaning, depending on the
problem under analysis. In addition, the search for complex
adaptive systems which perform the coordination of algorithmic components, e.g. in [10], can lead to algorithms which
are extremely domain-specific and thus very hard to export
to other fields. This fact can be seen as a practical
consequence of the No Free Lunch Theorem [18]: since the
average performance of all possible optimization methods is
the same over all possible optimization problems, a tailored
algorithmic design must be performed in order to obtain high-performance results in a specific application domain.
Despite the validity and importance of the No Free Lunch
Theorem, in this paper we propose an algorithm which aims
at pushing towards the outer limit of this theorem. In other
words, we propose an algorithm which combines robustness
and high performance over a varied set of optimization problems. In a sense, the algorithm proposed in this paper can be
seen as a continuation of the work described in [19], which
was already proven to have a high performance for diverse
applications and test problems.
In order to pursue this aim, this article proposes an algorithm based on fairly simple working principles. We propose
an evolutionary framework in the fashion of differential
evolution (DE), with some randomization in the parameter
setting inspired by the approach described in [20]. The choice
of DE is due to its reliability and versatility in continuous
optimization [21]. In addition, the proposed algorithm makes use
of two simple local search algorithms, a golden section search and
a hill-climber, activated by a randomized and a deterministic
criterion, respectively. One novelty of the proposed approach
is that the local search is integrated within the move operators,
in particular within the choice of the scale factor during the
mutation move. This choice can be seen as a natural countermeasure to the stagnation problems to which differential
evolution is subject [22].
This paper is organized in the following way: Sect. 2 gives
an extensive description of DE and the enhancements
recently proposed in the literature. This section takes on the role
of a background section.
DE/cur-to-best/1 : x_off = x_i + F(x_best - x_i) + F(x_s - x_t),
DE/best/2 : x_off = x_best + F(x_s - x_t) + F(x_u - x_v),        (1)
DE/rand/2 : x_off = x_r + F(x_s - x_t) + F(x_u - x_v),
Fig. 1 DE pseudocode
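As a plain illustration of the mutation strategies listed above, the following sketch builds a mutant vector for each variant. This is an illustrative rendering, not the authors' implementation: the choice of the current individual and the random sampling are deliberately simplified.

```python
import random

def mutate(pop, best, F, strategy="rand/2"):
    """Generate a mutant vector with one of the listed DE mutation
    strategies. `pop` is a list of equal-length vectors (lists of
    floats), `best` is the best individual found so far."""
    xi = pop[0]  # current individual (illustrative choice)
    # Draw five distinct individuals from the population.
    xr, xs, xt, xu, xv = random.sample(pop, 5)
    if strategy == "cur-to-best/1":
        return [i + F * (b - i) + F * (s - t)
                for i, b, s, t in zip(xi, best, xs, xt)]
    if strategy == "best/2":
        return [b + F * (s - t) + F * (u - v)
                for b, s, t, u, v in zip(best, xs, xt, xu, xv)]
    if strategy == "rand/2":
        return [r + F * (s - t) + F * (u - v)
                for r, s, t, u, v in zip(xr, xs, xt, xu, xv)]
    raise ValueError(strategy)
```

Note that with F = 0 the cur-to-best/1 mutant degenerates to the current individual and best/2 degenerates to the best individual, which makes the role of the scale factor explicit.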
F = max(l_min, 1 - |f_max / f_min|)   if |f_max / f_min| < 1
    max(l_min, 1 - |f_min / f_max|)   otherwise              (3)
where l_min = 0.4 is the lower bound of F, and f_min and f_max are
the minimum and maximum fitness values over the individuals
of the population.
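A minimal sketch of this fitness-based choice of F (assuming nonzero f_min and f_max) could read:

```python
def adaptive_f(f_min, f_max, l_min=0.4):
    """Fitness-based scale factor rule of Eq. (3): F never falls below
    l_min and grows when the population fitness values are spread out.
    Assumes f_min != 0 and f_max != 0."""
    if abs(f_max / f_min) < 1:
        return max(l_min, 1.0 - abs(f_max / f_min))
    return max(l_min, 1.0 - abs(f_min / f_max))
```

When the population has converged (f_min = f_max) the rule returns the lower bound l_min, while a large fitness spread pushes F towards 1.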
2.3 Self-adapting control parameters in differential evolution
In order to avoid manually setting the parameters F and CR, a simple
and effective strategy has been proposed in [20]. This strategy is named self-adapting control parameters in differential
evolution. The DE algorithm employing this strategy is here
called self-adaptive control parameter differential evolution
(SACPDE) and consists of the following.
With reference to Fig. 1, when the initial population is generated, two extra values between 0 and 1, F_i and CR_i, are also generated for each candidate solution.
In accordance with a self-adaptive logic, e.g. [33], the variation operations are preceded by the parameter update. More
specifically, when at each generation the ith individual x_i is
taken into account and three other individuals are extracted
pseudo-randomly, its parameters F_i and CR_i are updated
according to the following scheme:
F_i = F_l + F_u · rand_1,   if rand_2 < τ_1
      F_i,                  otherwise              (5)

CR_i = rand_3,   if rand_4 < τ_2
       CR_i,     otherwise                         (6)
where rand_j, j ∈ {1, 2, 3, 4}, are uniform pseudo-random
values between 0 and 1; τ_1 and τ_2 are constant values which
represent the probabilities that the parameters are updated; F_l
and F_u are constant values which represent the minimum
value that F_i can take and the maximum variable contribution to F_i, respectively. The newly calculated values of F_i
and CR_i are then used for generating the offspring. The variation operators and the selection scheme are identical to those of
a standard DE (see Sect. 2.1).
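The parameter update of Eqs. (5) and (6) can be sketched as follows; the default values for F_l, F_u, τ_1 and τ_2 are the ones commonly reported for this strategy and should be treated as illustrative.

```python
import random

def update_parameters(F_i, CR_i, F_l=0.1, F_u=0.9, tau1=0.1, tau2=0.1):
    """Self-adaptive update of the control parameters (Eqs. 5 and 6).
    With probability tau1 a new scale factor is sampled in [F_l, F_l + F_u);
    with probability tau2 a new crossover rate is sampled in [0, 1);
    otherwise the parameters are inherited unchanged."""
    if random.random() < tau1:
        F_i = F_l + F_u * random.random()
    if random.random() < tau2:
        CR_i = random.random()
    return F_i, CR_i
```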
For the sake of clarity, the pseudo-code highlighting the working principles of the SACPDE is given in Fig. 2.
2.4 Opposition based differential evolution
The opposition based differential evolution (OBDE), proposed in [34], employs the logic of opposition points in
order to enhance the exploration properties of the DE and test a
wide portion of the decision space.
For a given point x_i = (x_i,1, x_i,2, . . . , x_i,j, . . . , x_i,n)
belonging to a set D = [a_1, b_1] × [a_2, b_2] × · · · × [a_j, b_j] × · · · × [a_n, b_n],
its opposition point is defined as x̃_i = (a_1 + b_1 - x_i,1, a_2 + b_2 - x_i,2, . . . , a_j + b_j - x_i,j, . . . , a_n + b_n - x_i,n).
The OBDE consists of a DE framework and two opposition-based components: the first is applied after the initial sampling and the
second after the survivor selection scheme. While the first
opposition-based component is always applied after initialization, the second is activated with probability
j_r (jump rate). These opposition-based components process
a set of candidate solutions and generate their opposition
points. They then merge the two sets of points (original and
opposition) and select the points with the best performance (as many as the candidate solutions in the
original set).
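A minimal sketch of such an opposition-based component, for a minimization problem, could read as follows; `bounds` holds the pairs [a_j, b_j] of the decision space.

```python
def opposition_step(points, bounds, fitness):
    """Opposition-based component: generate the opposition point of every
    candidate, merge the two sets and keep the best half (minimization)."""
    opposites = [[a + b - x_j for x_j, (a, b) in zip(x, bounds)]
                 for x in points]
    merged = points + opposites
    merged.sort(key=fitness)          # best (lowest) fitness first
    return merged[:len(points)]       # same cardinality as the input set
```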
More specifically, when the initial sampling has been
pseudo-randomly performed, the opposition points of the initial
population are also computed, and the best points among the merged set are retained.
If the SPX succeeds in improving upon the starting solution, a replacement occurs according to a meta-Lamarckian
logic [5].
It should be remarked that the control parameter of the SPX appearing in Fig. 4 has been set equal to 1 in [35]. Finally, each candidate solution is encoded together with its control parameters as:

x_i = (x_i,1, x_i,2, . . . , x_i,n, F_i, CR_i)        (7)
The control parameters are then updated according to:

F_i = scale factor golden section search,  if rand_5 < τ_3
      scale factor hill-climb,             if τ_3 ≤ rand_5 < τ_4
      F_l + F_u · rand_1,                  if rand_2 < τ_1 and rand_5 ≥ τ_4      (8)
      F_i,                                 otherwise

CR_i = rand_3,   if rand_4 < τ_2
       CR_i,     otherwise                                                       (9)
where rand_j, j ∈ {1, 2, 3, 4, 5}, are uniform pseudo-random
values between 0 and 1, and τ_k, k ∈ {1, 2, 3, 4}, are constant
threshold values. The values τ_1 and τ_2 represent the probabilities
that the parameters are updated, while τ_3 and τ_4 are related to
the activation of the local search. The values F_l and F_u, analogous to the case of the SACPDE described in Sect. 2.3,
are constant values which represent the minimum value that
F_i can take and the maximum variable contribution to F_i,
respectively. For all the other individuals the update occurs as
with the SACPDE, see Eqs. (5) and (6) in Sect. 2.3.
When the values F_i and CR_i have been calculated, for each
individual x_i of the S_pop individuals available, three individuals x_r, x_s
and x_t are pseudo-randomly extracted from the population,
and the provisional offspring x_off is generated in the DE fashion
by mutation as:

x_off = x_t + F_i (x_r - x_s).        (10)
For the sake of clarity, the procedure describing the fitness function is shown in Fig. 5.
It must be remarked that each scale factor F_i corresponds
to an offspring solution x_off during the local search, and thus
the proposed local search can be seen as an instrument for
detecting high-quality solutions along the directions
suggested by a DE scheme. At the end of the local search
process, the newly generated design variables x_i,j, with the corresponding scale factor F_i within the candidate solution x_i
(see Eq. (7)), compose the offspring solution. In addition, it is
fundamental to observe that negative values of F_i are admitted, down to -1. The meaning of a negative scale factor is,
in this context, obviously an inversion of the search direction.
If the search in the negative direction succeeds, the corresponding positive value (the absolute value |F_i|) is assigned
to the offspring solution generated by the local
search. In order to perform this minimization, the following
algorithms have been included.
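The objective seen by these scale factor local searches can be sketched as a one-dimensional function of F along the direction of Eq. (10); the helper below is illustrative, not the authors' code.

```python
def scale_factor_fitness(F, x_t, x_r, x_s, fitness):
    """Objective of the scale factor local search: the fitness of the
    provisional offspring x_t + F * (x_r - x_s), cf. Eq. (10).
    A negative F simply inverts the search direction."""
    x_off = [t + F * (r - s) for t, r, s in zip(x_t, x_r, x_s)]
    return fitness(x_off)
```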
F_i^1 = b - (b - a)/φ,   F_i^2 = a + (b - a)/φ        (13)

where φ = (1 + √5)/2 is the golden ratio and [a, b] is the current search interval for the scale factor.
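A sketch of the scale factor golden section search (SFGSS) built on Eq. (13); the interval [-1, 2] and the budget handling (each loop iteration costs two objective evaluations) are illustrative assumptions.

```python
def sfgss(obj, a=-1.0, b=2.0, budget=8):
    """Golden section search on the scale factor (SFGSS sketch):
    repeatedly shrink [a, b], keeping the interior point of Eq. (13)
    with the better objective value (minimization)."""
    phi = (1 + 5 ** 0.5) / 2  # golden ratio
    for _ in range(budget):
        f1 = b - (b - a) / phi
        f2 = a + (b - a) / phi
        if obj(f1) < obj(f2):
            b = f2  # minimum lies in [a, f2]
        else:
            a = f1  # minimum lies in [f1, b]
    return (a + b) / 2
```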
The uni-dimensional hill-climb local search is one of the simplest and most popular optimization algorithms, present in any
optimization book, e.g. [44]. The algorithm uses the current
value of F_i as a starting point and is composed of an exploratory move and a decisional move. The exploratory move
samples F_i - h and F_i + h, where h is a step size. The decisional move computes min{ f (F_i - h), f (F_i), f (F_i + h)}
and selects the corresponding point as the center of the next
exploratory move. If the center of the new exploratory move
is still F_i, the step size h is halved. The local search is stopped
when a budget condition is exceeded. For the sake of completeness, the pseudo-code of the scale factor hill-climb (SFHC)
is shown in Fig. 7.
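The SFHC described above can be sketched as follows; the initial step size and the tie handling are illustrative choices.

```python
def sfhc(obj, F, h=0.5, budget=20):
    """Scale factor hill-climb (SFHC sketch): sample F - h and F + h,
    move to the best of the three points, and halve h whenever the
    centre of the exploratory move does not change."""
    evals = 0
    while evals < budget:
        candidates = [F - h, F, F + h]
        best = min(candidates, key=obj)  # decisional move
        evals += 2  # two new points are evaluated per exploratory move
        if best == F:
            h /= 2.0  # centre unchanged: refine the step size
        else:
            F = best
    return F
```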
It should be remarked that the SFHC is a local search algorithm characterized by a steepest descent pivot rule, see [45],
i.e. an algorithm which explores the whole neighborhood of
the candidate solution before making a decision on the search
direction.
The proposed SFLSDE is thus composed of the aforementioned evolutionary framework and local search algorithms.

Table 1 Test problems (function and decision space)

Ackley: 20 + e - 20 exp(-0.2 sqrt((1/n) Σ_{i=1..n} x_i^2)) - exp((1/n) Σ_{i=1..n} cos(2π x_i)); decision space [-1, 1]^n
Alpine: Σ_{i=1..n} |x_i sin x_i + 0.1 x_i|; decision space [-10, 10]^n
Camelback: 4x_1^2 - 2.1x_1^4 + x_1^6/3 + x_1 x_2 - 4x_2^2 + 4x_2^4; decision space [-3, 3] × [-2, 2]
De Jong: ||x||^2; decision space [-5.12, 5.12]^n
DropWave: -(1 + cos(12 sqrt(||x||^2))) / ((1/2)||x||^2 + 2); decision space [-5.12, 5.12]^n
Easom: -cos(x_1) cos(x_2) exp(-((x_1 - π)^2 + (x_2 - π)^2)); decision space [-100, 100]^2
Griewangk: ||x||^2/4000 - Π_{i=1..n} cos(x_i/sqrt(i)) + 1; decision space [-600, 600]^n
Michalewicz: -Σ_{i=1..n} sin(x_i) (sin(i x_i^2/π))^20; decision space [0, π]^n
Pathological: Σ_{i=1..n-1} (0.5 + (sin^2(sqrt(100x_i^2 + x_{i+1}^2)) - 0.5) / (1 + 0.001(x_i^2 - 2x_i x_{i+1} + x_{i+1}^2)^2)); decision space [-100, 100]^n
Rastrigin: 10n + Σ_{i=1..n} (x_i^2 - 10 cos(2π x_i)); decision space [-5.12, 5.12]^n
Rosenbrock valley: Σ_{i=1..n-1} (100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2); decision space [-2.048, 2.048]^n
Schwefel: -Σ_{i=1..n} x_i sin(sqrt(|x_i|)); decision space [-500, 500]^n
Sum of powers: Σ_{i=1..n} |x_i|^(i+1); decision space [-1, 1]^n
Tirronen: 3 exp(-||x||^2/(10n)) - 10 exp(-8||x||^2) + (2.5/n) Σ_{i=1..n} cos(5 x_i (1 + i mod 2)); decision space [-10, 5]^n
Whitley: Σ_{i=1..n} Σ_{j=1..n} (y_{i,j}^2/4000 - cos(y_{i,j}) + 1), where y_{i,j} = 100(x_j - x_i^2)^2 + (1 - x_i)^2; decision space [-100, 100]^n
Zakharov: ||x||^2 + (Σ_{i=1..n} i x_i/2)^2 + (Σ_{i=1..n} i x_i/2)^4; decision space [-5, 10]^n
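For concreteness, two of the benchmark functions of Table 1 can be written directly from their formulas:

```python
import math

def alpine(x):
    """Alpine function from Table 1: sum of |x_i sin(x_i) + 0.1 x_i|."""
    return sum(abs(v * math.sin(v) + 0.1 * v) for v in x)

def rastrigin(x):
    """Rastrigin function from Table 1:
    10n + sum of (x_i^2 - 10 cos(2 pi x_i))."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v)
                             for v in x)
```

Both functions have their global minimum, of value 0, at the origin.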
Test problem | EF+SFGSS | EF+SFHC | SFLSDE
Ackley | 4.44e-16 ± 0.00e+00 | 4.00e-15 ± 0.00e+00 | 4.44e-16 ± 0.00e+00
Rotated Ackley | 7.44e-03 ± 1.05e-02 | 3.11e-15 ± 8.88e-16 | 1.54e-15 ± 2.46e-15
Alpine | 0.00e+00 ± 0.00e+00 | 7.52e-18 ± 1.54e-16 | 1.95e-42 ± 4.46e-42
Rotated alpine | 1.30e-01 ± 0.00e+00 | 1.69e-23 ± 1.73e-23 | 1.94e-36 ± 1.67e-36
Camelback | 1.03e+00 ± 0.00e+00 | 1.03e+00 ± 0.00e+00 | 1.03e+00 ± 0.00e+00
De Jong | 0.00e+00 ± 0.00e+00 | 9.14e-74 ± 0.00e+00 | 3.06e-77 ± 7.89e-79
Dropwave | 9.61e-01 ± 2.46e-02 | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00
Easom | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00
Griewangk | 0.00e+00 ± 0.00e+00 | 0.00e+00 ± 0.00e+00 | 0.00e+00 ± 0.00e+00
Rotated Griewangk | 0.00e+00 ± 0.00e+00 | 3.29e-04 ± 6.75e-03 | 0.00e+00 ± 0.00e+00
Michalewicz | 2.96e+01 ± 9.06e-04 | 2.88e+01 ± 0.00e+00 | 2.92e+01 ± 1.44e-01
Rotated Michalewicz | 1.08e+01 ± 1.10e+00 | 1.56e+01 ± 1.38e+00 | 1.60e+01 ± 1.37e+00
Pathological | 1.45e+01 ± 0.00e+00 | 1.45e+01 ± 7.17e-08 | 1.45e+01 ± 0.00e+00
Rotated pathological | 1.45e+01 ± 7.35e-06 | 1.45e+01 ± 1.44e-05 | 1.45e+01 ± 3.30e-06
Rastrigin | 0.00e+00 ± 0.00e+00 | 0.00e+00 ± 0.00e+00 | 0.00e+00 ± 0.00e+00
Rotated Rastrigin | 2.22e-01 ± 2.22e-01 | 0.00e+00 ± 0.00e+00 | 0.00e+00 ± 0.00e+00
Rosenbrock valley | 1.57e-05 ± 0.00e+00 | 1.67e-07 ± 3.60e-07 | 5.16e-08 ± 6.83e-09
Rotated Rosenbrock valley | 5.47e-01 ± 0.00e+00 | 5.47e-01 ± 1.74e-06 | 5.47e-01 ± 0.00e+00
Schwefel | 1.26e+04 ± 3.64e-12 | 1.14e+04 ± 4.27e+02 | 1.17e+04 ± 0.00e+00
Rotated Schwefel | 8.33e+03 ± 1.97e+02 | 7.52e+03 ± 2.43e+01 | 7.34e+03 ± 1.43e+02
Sum of powers | 8.07e-131 ± 0.00e+00 | 2.18e-177 ± 0.00e+00 | 1.82e-203 ± 0.00e+00
Rotated sum of powers | 1.23e-08 ± 0.00e+00 | 6.11e-10 ± 0.00e+00 | 0.00e+00 ± 0.00e+00
Tirronen | 7.30e+00 ± 0.00e+00 | 4.64e+00 ± 2.20e+00 | 4.53e+00 ± 2.12e+00
Rotated Tirronen | 7.64e+00 ± 0.00e+00 | 2.69e+00 ± 1.55e+00 | 5.27e+00 ± 3.00e+00
Whitley | 2.06e+02 ± 3.05e+01 | 2.14e+02 ± 1.05e+02 | 1.60e+02 ± 1.39e+02
Rotated Whitley | 4.08e+02 ± 0.00e+00 | 4.06e+02 ± 0.00e+00 | 4.06e+02 ± 0.00e+00
Zakharov | 8.41e-02 ± 8.82e-02 | 6.01e-22 ± 5.94e-22 | 9.36e-64 ± 0.00e+00
Rotated Zakharov | 2.42e-03 ± 4.03e-03 | 2.85e-21 ± 0.00e+00 | 4.81e-53 ± 6.96e-52
offspring with high performance which can promote a successful evolution. The local search activation probability, determined by τ_3 and τ_4, is set to be rather low (see the parameter
setting in Sect. 4), since an excessive use of local search would lead
to an unnecessary increase of the computational cost. The
application of the local search is carried out, in any case,
with a relatively low depth, i.e. with a limited computational
budget. Since the local search is performed in one dimension, a relatively low number of fitness evaluations (especially
for the SFGSS) is sufficient to obtain a substantial improvement.
For the sake of clarity, Fig. 9 shows the complete pseudocode
of the SFLSDE.
4 Numerical results
This section shows the numerical results which prove the viability of the proposed approach. The results are divided into three groups, presented in the following subsections.
Test problem | DE | SACPDE | OBDE | DEahcSPX | SFLSDE
Ackley | 6.72e-15 ± 1.50e-15 | 5.36e-14 ± 9.41e-14 | 1.61e-02 ± 1.38e-03 | 6.60e-02 ± 4.02e-02 | 1.46e-15 ± 1.93e-15
Rotated Ackley | 6.37e-15 ± 1.67e-15 | 8.13e-14 ± 2.03e-13 | 1.45e-03 ± 0.00e+00 | 1.37e-01 ± 7.85e-02 | 4.44e-16 ± 0.00e+00
Alpine | 9.97e-06 ± 2.80e-05 | 4.41e-16 ± 5.27e-16 | 1.85e-02 ± 1.99e-03 | 2.65e-02 ± 7.76e-03 | 1.09e-36 ± 1.08e-35
Rotated alpine | 1.68e+01 ± 4.81e+00 | 2.10e-01 ± 2.71e-01 | 1.36e-02 ± 0.00e+00 | 4.58e-01 ± 5.37e-01 | 2.08e-35 ± 2.08e-35
Camelback | 1.03e+00 ± 6.66e-16 | 1.03e+00 ± 0.00e+00 | 1.03e+00 ± 6.66e-16 | 1.03e+00 ± 6.66e-16 | 1.03e+00 ± 0.00e+00
De Jong | 4.51e-36 ± 4.04e-36 | 7.61e-32 ± 4.08e-31 |  |  | 1.42e-79 ± 9.26e-80
Dropwave | 7.86e-01 ± 6.95e-09 |  | 1.56e-02 ± 0.00e+00 | 7.22e-01 ± 8.62e-01 |  
Easom | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00 | 1.00e+00 ± 0.00e+00
Griewangk | 0.00e+00 ± 0.00e+00 | 5.49e-02 ± 1.13e-01 | 1.17e-01 ± 0.00e+00 | 2.95e+00 ± 4.12e+00 | 0.00e+00 ± 0.00e+00
Rotated Griewangk | 2.95e-04 ± 1.59e-03 | 6.23e-03 ± 1.09e-02 | 8.43e-01 ± 4.00e-01 | 1.75e+00 ± 8.42e-01 | 0.00e+00 ± 0.00e+00
Michalewicz | 2.88e+01 ± 2.59e-01 | 2.88e+01 ± 3.71e-01 | 2.74e+01 ± 1.50e+00 | 1.85e+01 ± 1.24e+00 | 2.89e+01 ± 6.83e-02
Rotated Michalewicz | 9.17e+00 ± 5.03e-01 | 1.53e+01 ± 1.43e+00 | 1.03e+01 ± 7.74e-01 | 8.21e+00 ± 5.15e-01 | 1.58e+01 ± 1.05e+00
Pathological | 1.45e+01 ± 4.06e-07 | 1.45e+01 ± 3.60e-07 | 1.45e+01 ± 3.21e-09 | 1.45e+01 ± 1.57e-05 | 1.45e+01 ± 1.45e-08
Rotated pathological | 1.45e+01 ± 7.12e-06 | 1.45e+01 ± 4.97e-05 | 1.45e+01 ± 2.50e-06 | 1.45e+01 ± 6.41e-05 | 1.45e+01 ± 2.77e-06
Rastrigin | 8.20e+00 ± 2.27e+00 | 1.21e+01 ± 4.66e+00 | 1.05e+01 ± 0.00e+00 | 7.89e+01 ± 1.80e+01 | 0.00e+00 ± 0.00e+00
Rotated Rastrigin | 2.01e+02 ± 1.00e+01 | 4.65e+01 ± 1.56e+01 | 1.05e+02 ± 4.45e+01 | 1.87e+02 ± 1.54e+01 | 0.00e+00 ± 0.00e+00
Rosenbrock valley | 5.75e-21 ± 7.47e-21 | 1.03e-05 ± 5.54e-05 | 9.28e-01 ± 0.00e+00 | 3.78e+00 ± 1.35e+00 | 5.95e-10 ± 0.00e+00
Rotated Rosenbrock valley | 5.47e-01 ± 5.66e-15 | 5.47e-01 ± 1.50e-05 | 1.84e+00 ± 0.00e+00 | 6.70e+00 ± 2.77e+00 | 5.47e-01 ± 2.29e-06
Schwefel | 1.20e+04 ± 2.60e+02 | 1.14e+04 ± 2.76e+02 | 1.13e+04 ± 4.92e+02 | 7.27e+03 ± 7.84e+02 | 1.19e+04 ± 1.18e+02
Rotated Schwefel | 6.21e+03 ± 2.60e+02 | 7.10e+03 ± 5.31e+02 | 7.64e+03 ± 7.04e+02 | 4.59e+03 ± 3.55e+02 | 7.45e+03 ± 0.00e+00
Sum of powers | 7.92e-103 ± 1.93e-102 | 6.92e-09 ± 1.95e-08 | 1.23e-12 ± 8.74e-13 | 1.13e-05 ± 3.70e-05 | 6.04e-206 ± 0.00e+00
Rotated sum of powers | 3.12e-05 ± 1.80e-05 | 1.03e-09 ± 7.31e-10 | 1.32e-07 ± 4.81e-08 | 1.19e-07 ± 1.85e-08 | 3.93e-62 ± 0.00e+00
Tirronen | 2.25e+00 ± 4.90e-02 | 2.39e+00 ± 7.37e-02 | 2.26e+00 ± 9.75e-02 | 1.41e+00 ± 9.06e-02 | 7.70e+00 ± 0.00e+00
Rotated Tirronen | 1.31e+00 ± 7.80e-02 | 1.97e+00 ± 4.66e-01 | 1.29e+00 ± 1.15e-01 | 1.06e+00 ± 1.07e-01 | 5.14e+00 ± 2.87e+00
Whitley | 1.67e+02 ± 1.11e+02 | 7.04e+02 ± 1.04e+03 | 4.71e+09 ± 7.20e+09 | 8.54e+12 ± 1.60e+13 | 2.01e+02 ± 0.00e+00
Rotated Whitley | 6.87e+02 ± 2.43e+01 | 5.66e+02 ± 9.53e+01 | 5.07e+06 ± 1.17e+08 | 1.43e+08 ± 2.18e+08 | 4.06e+02 ± 5.68e-14
Zakharov | 4.52e+01 ± 1.00e+01 | 1.44e-04 ± 2.17e-04 | 2.57e+00 ± 3.01e-01 | 7.25e+00 ± 3.06e+00 | 2.04e-60 ± 3.46e-60
Rotated Zakharov | 4.83e+01 ± 1.08e+01 | 1.41e-04 ± 2.39e-04 | 2.04e+00 ± 1.14e+00 | 5.29e+00 ± 7.27e-01 | 6.32e-58 ± 2.23e-57
the SFHC, and a 93% chance to undergo the normal self-adaptive strategy. The choice of τ_3 and τ_4 has been made empirically,
on the basis of a parameter tuning. As a general guideline, it has been observed that the local search pressure
due to the proposed approach should be limited compared
to the global pressure due to the SACPDE framework, in
order to obtain a successful algorithmic behavior. In addition, according to our results, the local search should not
be run with a high depth value; applying local search to relatively
many points with a limited depth (limited computational budget) is preferable to applying it to few
points with a high depth. More specifically, each SFGSS is
run for 8 fitness evaluations and each SFHC for 20. The population size, instead, has been set for all the algorithms
under analysis on the basis of the dimensionality of the
problem.
In the three following subsections, the test functions listed in Table 1 are considered. It should be remarked that some
rotated problems have been added to our benchmark set. The
rotated problems are obtained by multiplying the vector of design
variables by a randomly generated orthogonal rotation matrix.
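The rotation described above can be sketched as follows; the Gram-Schmidt construction of the orthogonal matrix is an illustrative choice, since the paper does not specify how the matrix is generated.

```python
import math
import random

def random_orthogonal(n, seed=42):
    """Build a random orthogonal matrix (rows are orthonormal) by
    Gram-Schmidt orthonormalization of Gaussian random vectors."""
    rng = random.Random(seed)
    basis = []
    while len(basis) < n:
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for u in basis:  # subtract projections on accepted directions
            d = sum(a * b for a, b in zip(v, u))
            v = [a - d * b for a, b in zip(v, u)]
        norm = math.sqrt(sum(a * a for a in v))
        if norm > 1e-12:  # redraw if the vector was numerically dependent
            basis.append([a / norm for a in v])
    return basis

def rotate(matrix, x):
    """Rotated problem variables: the matrix-vector product R x."""
    return [sum(r * v for r, v in zip(row, x)) for row in matrix]
```

Because the matrix is orthogonal, the rotation preserves the norm of the variable vector while mixing the coordinates, which breaks the separability of the original test functions.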
Table 5 Results of the Q test in 30 dimensions

Test problem | DE | SACPDE | OBDE | DEahcSPX | SFLSDE
Ackley | 6.1e+01 | 2.7e+01 | 2.3e+01 | 4.3e+01 | 2.7e+01
Rotated Ackley | 9.7e+01 | 4.0e+01 | 2.5e+01 | 3.2e+02 | 3.2e+01
Alpine | 1.4e+02 | 3.5e+01 | 3.4e+01 | 1.5e+02 | 3.3e+01
Rotated alpine | inf | 6.9e+01 | 4.6e+01 | 3.4e+02 | 5.3e+01
Camelback | 2.8e+00 | 2.2e+00 | 2.6e+00 | 2.0e+01 | 2.4e+00
De Jong | 3.5e+01 | 1.6e+01 | 1.1e+01 | 1.7e+01 | 1.7e+01
Dropwave | inf | inf | inf | inf | 2.1e+02
Easom | 1.7e+01 | 9.8e+00 | 9.2e+00 | 7.9e+01 | 1.0e+01
Griewangk | 3.4e+01 | 1.6e+01 | 1.2e+01 | 1.9e+01 | 1.6e+01
Rotated Griewangk | 3.5e+01 | 1.6e+01 | 1.2e+01 | 1.7e+01 | 1.6e+01
Michalewicz | 3.0e+02 | 1.1e+02 | 1.3e+03 | inf | 1.2e+02
Rotated Michalewicz | inf | 9.9e+02 | inf | inf | 1.0e+03
Pathological | 7.4e+00 | 2.8e+01 | 8.2e+00 | 1.8e+02 | 2.9e+01
Rotated pathological | 2.1e+01 | 9.5e+01 | 1.3e+01 | 3.0e+02 | 8.9e+01
Rastrigin | 3.2e+02 | 7.2e+01 | 3.3e+02 | inf | 5.4e+01
Rotated Rastrigin | inf | inf | 1.1e+04 | inf | 1.3e+02
Rosenbrock valley | 5.1e+01 | 2.6e+01 | 2.3e+01 | 1.3e+02 | 2.8e+01
Rotated Rosenbrock valley | 6.2e+01 | 3.3e+01 | 2.9e+01 | 3.0e+02 | 3.6e+01
Rotated Schwefel | inf | 7.3e+02 | 5.9e+02 | inf | 6.9e+02
Schwefel | 2.2e+02 | 1.6e+02 | 4.7e+02 | inf | 1.1e+02
Sum of powers | 1.4e+01 | 5.9e+00 | 4.0e+00 | 5.2e+00 | 6.1e+00
Rotated sum of powers | 2.3e+01 | 6.0e+00 | 3.4e+00 | 4.2e+00 | 6.0e+00
Tirronen | inf | inf | inf | inf | 2.7e+03
Rotated Tirronen | inf | inf | inf | inf | 8.6e+02
Whitley | 1.3e+01 | 5.8e+00 | 3.9e+00 | 4.1e+00 | 6.0e+00
Rotated Whitley | 1.8e+01 | 5.3e+00 | 3.4e+00 | 3.8e+00 | 5.6e+00
Zakharov | 8.0e-01 | 2.9e+00 | 7.9e-01 | 8.2e+00 | 3.0e+00
Rotated Zakharov | 4.9e-01 | 9.2e-01 | 2.8e-01 | 5.2e-01 | 6.1e-01
described in [47] has been applied. For each test problem and
each algorithm, the Q measure is computed as:

Q = n̄_e / R        (14)

where n̄_e is the average number of fitness evaluations required by the successful runs and R is the rate of successful runs.
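Assuming the usual reading of the Q measure (average number of fitness evaluations of the successful runs divided by the success rate), it can be computed as in the sketch below; a problem on which an algorithm never succeeds gets Q = inf, matching the inf entries of the tables.

```python
def q_measure(evals_successful, total_runs):
    """Q measure sketch: mean fitness evaluations of the successful
    runs divided by the success rate; inf when no run succeeded."""
    if not evals_successful:
        return float("inf")
    n_e = sum(evals_successful) / len(evals_successful)
    success_rate = len(evals_successful) / total_runs
    return n_e / success_rate
```

A lower Q value therefore rewards algorithms that succeed both often and cheaply.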
Test problem | DE | SACPDE | OBDE | DEahcSPX | SFLSDE
Ackley | 5.16e-01 ± 3.55e-02 | 8.35e-05 ± 1.99e-04 | 9.02e-04 ± 2.04e-03 | 1.19e-03 ± 2.76e-03 | 9.85e-07 ± 5.30e-07
Rotated Ackley | 1.96e+00 ± 1.31e-01 | 1.46e-04 ± 2.96e-04 | 3.20e-03 ± 7.06e-03 | 1.84e-03 ± 4.55e-03 | 9.61e-07 ± 8.19e-07
Alpine | 8.08e+01 ± 3.41e+00 | 4.88e-02 ± 4.91e-02 | 8.53e-02 ± 7.82e-03 | 9.64e-02 ± 1.39e-02 | 1.65e-03 ± 2.11e-03
Rotated alpine | 1.92e+02 ± 8.13e+00 | 3.96e+00 ± 1.88e+00 | 1.14e-01 ± 4.55e-02 | 2.09e-01 ± 5.85e-02 | 1.58e-03 ± 1.60e-03
De Jong | 1.24e+01 ± 1.49e+00 | 9.88e-06 ± 2.75e-05 | 7.58e-04 ± 2.07e-03 | 1.60e-02 ± 4.68e-02 | 3.15e-10 ± 2.58e-10
Dropwave | 1.50e-02 ± 1.01e-03 | 2.61e-01 ± 5.83e-02 | 5.84e-01 ± 5.83e-02 | 4.86e-01 ± 5.97e-02 | 8.52e-01 ± 1.23e-01
Griewangk | 4.49e+01 ± 5.73e+00 | 5.13e-02 ± 7.80e-02 | 6.07e-02 ± 1.20e-01 | 2.74e-02 ± 5.89e-02 | 7.28e-08 ± 9.96e-08
Rotated Griewangk | 4.23e+01 ± 5.47e+00 | 6.23e-02 ± 7.33e-02 | 1.50e-01 ± 3.09e-01 | 1.23e-01 ± 3.77e-01 | 3.00e-06 ± 5.26e-06
Michalewicz | 4.33e+01 ± 1.40e+00 | 8.63e+01 ± 3.71e+00 | 4.36e+01 ± 1.61e+00 | 3.05e+01 ± 1.23e+00 | 8.63e+01 ± 3.73e+00
Rotated Michalewicz | 1.72e+01 ± 8.68e-01 | 1.94e+01 ± 1.71e+00 | 1.97e+01 ± 9.92e-01 | 1.61e+01 ± 8.04e-01 | 1.88e+01 ± 1.90e+00
Pathological | 4.95e+01 ± 8.70e-04 | 4.95e+01 ± 3.14e-04 | 4.95e+01 ± 3.84e-04 | 4.95e+01 ± 4.33e-03 | 4.95e+01 ± 3.44e-04
Rotated pathological | 4.95e+01 ± 4.55e-03 | 4.95e+01 ± 6.67e-03 | 4.95e+01 ± 5.25e-03 | 4.95e+01 ± 4.66e-04 |  
Rastrigin | 7.58e+02 ± 2.82e+01 | 4.24e+01 ± 6.74e+00 | 5.48e+02 ± 3.66e+01 | 7.43e+02 ± 3.09e+01 | 6.81e-06 ± 1.54e-05
Rotated Rastrigin | 1.17e+03 ± 3.74e+01 | 1.72e+02 ± 5.29e+01 | 7.92e+02 ± 3.01e+01 | 8.61e+02 ± 2.18e+01 | 7.95e-02 ± 3.67e-01
Rosenbrock valley | 3.41e+01 ± 2.30e+00 | 3.21e-02 ± 2.22e-02 | 3.31e-02 ± 3.83e-02 | 6.59e-02 ± 1.22e-01 | 3.67e-02 ± 3.59e-02
Rotated Rosenbrock valley | 5.13e+01 ± 3.27e+00 | 3.48e+00 ± 5.38e-02 | 3.68e+00 ± 6.82e-01 | 3.48e+00 ± 1.74e-01 | 3.48e+00 ± 1.38e-01
Schwefel | 1.12e+04 ± 8.68e+02 | 1.69e+04 ± 2.43e+03 | 9.89e+03 ± 6.31e+02 | 8.88e+03 ± 5.68e+02 | 1.69e+04 ± 2.43e+03
Rotated Schwefel | 3.74e+04 ± 8.70e+02 |  |  |  |  
Sum of powers | 7.24e-06 ± 4.78e-06 | 1.56e-15 ± 4.32e-15 | 1.81e-25 ± 9.73e-25 | 8.36e-10 ± 4.50e-09 | 5.33e-34 ± 2.81e-33
Rotated sum of powers | 3.45e+00 ± 2.56e+00 | 6.66e-10 ± 1.32e-09 | 1.74e-05 ± 1.30e-05 | 2.12e-05 ± 1.21e-05 | 1.63e-10 ± 6.82e-10
Tirronen | 7.04e-01 ± 5.03e-02 | 3.48e-01 ± 6.31e-02 | 7.04e-01 ± 8.34e-02 | 5.88e-01 ± 5.58e-02 | 5.48e-01 ± 3.86e-02
Rotated Tirronen | 1.35e+00 ± 1.18e+00 |  |  |  |  
Whitley | 1.00e+13 ± 4.13e+15 | 1.45e+12 ± 2.38e+12 | 4.93e+08 ± 1.86e+09 | 2.09e+12 ± 4.70e+12 | 3.70e+03 ± 7.20e+02
Rotated Whitley | 1.00e+13 ± 5.72e+17 | 9.59e+04 ± 2.18e+05 | 9.01e+03 ± 3.92e+02 | 2.21e+04 ± 6.20e+04 | 4.53e+03 ± 3.74e+01
Zakharov | 2.01e+03 ± 1.37e+02 | 9.83e+01 ± 2.51e+01 | 1.23e+03 ± 9.33e+01 | 1.10e+03 ± 1.07e+02 | 1.49e-07 ± 6.78e-07
Rotated Zakharov | 2.30e+03 ± 9.87e+01 | 2.41e+02 ± 4.43e+01 | 1.75e+03 ± 1.24e+02 | 1.54e+03 ± 1.23e+02 | 2.35e-04 ± 1.12e-03
Table 8 Results of the Q test in 100 dimensions

Test problem | DE | SACPDE | DEahcSPX | OBDE | SFLSDE
Ackley | inf | 2.1e+02 | 1.6e+02 | 1.8e+02 | 2.1e+02
Rotated Ackley | 2.4e+02 | inf | 1.9e+02 | 1.7e+02 | 2.2e+02
Alpine | inf | 2.5e+02 | 2.2e+02 | 2.5e+02 | 2.3e+02
Rotated alpine | inf | 4.6e+02 | 3.1e+02 | 2.6e+02 | 3.0e+02
De Jong | 7.4e+02 | 1.3e+02 | 9.0e+01 | 7.9e+01 | 1.3e+02
Dropwave | inf | inf | inf | inf | 1.2e+03
Griewangk | 7.6e+02 | 1.3e+02 | 9.0e+01 | 8.0e+01 | 1.3e+02
Rotated Griewangk | 7.4e+02 | 1.3e+02 | 8.9e+01 | 8.0e+01 | 1.4e+02
Michalewicz | inf | 1.0e+03 | inf | inf | 1.0e+03
Rotated Michalewicz | inf | 1.6e+03 | 2.6e+04 | 8.4e+02 | 2.1e+03
Pathological | 2.6e+02 | 3.6e+02 | 5.4e+03 | 1.2e+02 | 3.2e+02
Rotated pathological | 1.4e+03 | 4.4e+02 | 2.0e+04 | 1.5e+02 | 4.1e+03
Rastrigin | inf | 4.1e+03 | inf | inf | 3.6e+02
Rotated Rastrigin | inf | inf | inf | inf | 5.0e+02
Rosenbrock valley | inf | 2.0e+02 | 1.4e+02 | 1.8e+02 | 2.0e+02
Rotated Rosenbrock valley | inf | 2.0e+02 | 1.4e+02 | 1.5e+02 | 2.0e+02
Schwefel | inf | 4.4e+02 | inf | inf | 4.3e+02
Rotated Schwefel | inf | 1.8e+03 | inf | inf | 1.6e+03
Sum of powers | 2.7e+02 | 5.0e+01 | 3.4e+01 | 1.5e+01 | 5.1e+01
Rotated sum of powers | 7.2e+01 | 6.9e+00 | 3.5e+00 | 3.4e+00 | 5.8e+00
Tirronen | 1.1e+04 | inf | inf | 6.4e+02 | 2.7e+04
Rotated Tirronen | 6.6e+02 | 4.4e+04 | 1.7e+03 | 1.0e+03 | 2.7e+04
Whitley | 3.4e+02 | 6.8e+01 | 3.6e+01 | 2.1e+01 | 6.6e+01
Rotated Whitley | 6.9e+02 | 6.4e+01 | 3.1e+01 | 1.6e+01 | 6.2e+01
Zakharov | 4.4e+00 | 5.9e+01 | 2.6e+01 | 5.6e+00 | 5.6e+01
Rotated Zakharov | 1.1e+00 | 7.6e+01 | 2.1e+00 | 2.0e+00 | 1.5e+01
5 Conclusion
This paper proposes the scale factor local search differential
evolution (SFLSDE). The SFLSDE is a memetic algorithm
composed of a differential evolution based evolutionary
framework and two simple local search algorithms.
It should be remarked that the proposed memetic algorithm performs the local search on the scale factor, and thus on
a single parameter regardless of the dimensionality of the problem. This kind of hybridization seems to be very efficient
in enhancing the offspring generation and
to have a dramatic impact on stagnation prevention within the
differential evolution framework. More specifically, the
improved solutions seem to be beneficial in refreshing the
genotypes and assisting the global search in the optimization
process.
A future development of this work will aim at further
investigating and modifying the employment of scale factor
local search focused on large scale problems.
References
1. Moscato P, Norman M (1989) A competitive and cooperative approach to complex combinatorial search. Technical Report 790
2. Moscato P (1989) On evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms. Technical Report 826
3. Krasnogor N, Blackburne B, Burke E, Hirst J (2002) Multimeme algorithms for protein structure prediction. In: Proceedings of parallel problem solving from nature VII. Lecture notes in computer science. Springer, Berlin
4. Krasnogor N (2002) Studies in the theory and design space of memetic algorithms. Ph.D. thesis, University of the West of England
5. Ong YS, Keane AJ (2004) Meta-Lamarckian learning in memetic algorithms. IEEE Trans Evol Comput 8(2):99–110
6. Krasnogor N (2004) Toward robust memetic algorithms. In: Hart WE, Krasnogor N, Smith JE (eds) Recent advances in memetic algorithms. Studies in fuzziness and soft computing. Springer, Berlin, pp 185–207
7. Ong YS, Lim MH, Zhu N, Wong KW (2006) Classification of adaptive memetic algorithms: a comparative study. IEEE Trans Syst Man Cybern B 36(1):141–152
8. Caponio A, Cascella GL, Neri F, Salvatore N, Sumner M (2007) A fast adaptive memetic algorithm for on-line and off-line control design of PMSM drives. IEEE Trans Syst Man Cybern B 37(1):28–41
9. Neri F, Toivanen J, Mäkinen RAE (2007) An adaptive evolutionary algorithm with intelligent mutation local searchers for designing multidrug therapies for HIV. Appl Intell 27:219–235
10. Neri F, Toivanen J, Cascella GL, Ong YS (2007) An adaptive multimeme algorithm for designing HIV multidrug therapies. IEEE/ACM Trans Comput Biol Bioinform 4(2):264–278
11. Tirronen V, Neri F, Kärkkäinen T, Majava K, Rossi T (2007) A memetic differential evolution in filter design for defect detection in paper production. In: Applications of evolutionary computing. Lecture notes in computer science, vol 4448. Springer, Berlin, pp 320–329
12. Tirronen V, Neri F, Kärkkäinen T, Majava K, Rossi T (2008) An enhanced memetic differential evolution in filter design for defect detection in paper production. Evol Comput 16:529–555
13. Tang J, Lim MH, Ong YS (2006) Parallel memetic algorithm with selective local search for large scale quadratic assignment problems. Int J Innov Comput Inf Control 2(6):1399–1416
14. Tang J, Lim MH, Ong YS (2007) Diversity-adaptive parallel memetic algorithm for solving large scale combinatorial optimization problems. Soft Comput Fusion Found Methodol Appl 11(9):873–888
41. Brest J, Žumer V, Maučec M (2006) Self-adaptive differential evolution algorithm in constrained real-parameter optimization. In: Proceedings of the IEEE congress on evolutionary computation, pp 215–222
42. Lozano M, Herrera F, Krasnogor N, Molina D (2004) Real-coded memetic algorithms with crossover hill-climbing. Evol Comput 12(3):273–302
43. Kiefer J (1953) Sequential minimax search for a maximum. Proc Am Math Soc 4:502–506
44. Russell SJ, Norvig P (2003) Artificial intelligence: a modern approach, 2nd edn. Prentice-Hall, Englewood Cliffs, pp 111–114
45. Hart WE, Krasnogor N, Smith JE (2004) Memetic evolutionary algorithms. In: Hart WE, Krasnogor N, Smith JE (eds) Recent advances in memetic algorithms. Springer, Berlin, pp 3–27
46. NIST/SEMATECH e-handbook of statistical methods. http://www.itl.nist.gov/div898/handbook/
47. Feoktistov V (2006) Differential evolution in search of solutions. Springer, Berlin, pp 83–86