
Memetic Comp. (2009) 1:153–171
DOI 10.1007/s12293-009-0008-9

REGULAR RESEARCH PAPER

Scale factor local search in differential evolution


Ferrante Neri · Ville Tirronen

Received: 10 September 2008 / Accepted: 17 December 2008 / Published online: 10 February 2009
© Springer-Verlag 2009

Abstract This paper proposes the scale factor local search differential evolution (SFLSDE). The SFLSDE is a differential evolution (DE) based memetic algorithm which employs, within a self-adaptive scheme, two local search algorithms. These local search algorithms aim at detecting a value of the scale factor corresponding to an offspring with a high performance, while the generation is executed. The local search algorithms thus assist in the global search and generate offspring with high performance which are subsequently supposed to promote the generation of enhanced solutions within the evolutionary framework. Despite its simplicity, the proposed algorithm seems to have very good performance on various test problems. Numerical results are shown in order to justify the use of a double local search instead of a single search. In addition, the SFLSDE has been compared with a standard DE and three other modern DE based metaheuristics on a large and varied set of test problems. Numerical results are given for relatively low and high dimensional cases. A statistical analysis of the optimization results has been included in order to compare the results in terms of final solution detected and convergence speed. The efficiency of the proposed algorithm seems to be very high, especially for large scale problems and complex fitness landscapes.
Keywords Differential evolution · Adaptive memetic algorithms · Golden section search · Multimeme algorithms · Large scale optimization

F. Neri (B) · V. Tirronen
Department of Mathematical Information Technology,
Agora, University of Jyväskylä, 40014 Jyväskylä, Finland
e-mail: neferran@jyu.fi

V. Tirronen
e-mail: aleator@jyu.fi

1 Introduction
Since its earliest definition by Moscato and Norman [1], a
memetic algorithm (MA) is perceived as a structure which
combines stand alone elements that are supposed to constructively interact. This ideology, also referred to in [2], views the
algorithmic structure as a composition of algorithmic components which are supposed to compete and cooperate in
order to improve upon solutions and thus generate an efficient
optimization process.
The concept of cooperation and competition in Memetic
Computing has subsequently been developed and generalized
when MAs containing many local searchers (multimeme
algorithms) were proposed [3,4]. In this context, in [5] the
term Meta-Lamarckian learning has been coined in order to
indicate the adaptive coordination of the various local searchers which are supposed to harmonically compete and cooperate with each other. In addition, as explained in [6], a robust
MA contains a list of local search algorithms which have
various working mechanisms in order to allow exploration
under complementary perspectives. A classification of algorithmic solutions to adaptively perform the coordination of
the local search algorithms in a MA is given in [7].
Another topic strictly related to MA design is the preservation of diversity in the population of the solutions. In fact,
the necessity of designing a MA comes from the poor performance, in some cases, of traditional metaheuristics. This poor
performance is often due to a premature loss of the genotypic
diversity and thus premature convergence or steady behavior
of the diversity for many consecutive generations and thus
stagnation. It is therefore fundamental to promote variation
in the diversity values by applying various algorithmic components. Some techniques aiming at control of the diversity
in MAs have been proposed [8–12]. Also in the context of
parallel MAs [13], the diversity seems to play a determinant


role, as explained in [14]. Similarly, the successful functioning of a MA can be seen as a proper balance between global
and local search as expressed in [15,16]. Finally in [17], the
use of both empirical experience and theoretical reasoning is
recommended in order to design an efficient MA.
In summary, a successful MA is an algorithm composed
of many various components which are harmonically coordinated by means of a proper balance between global and local
search, thus ensuring a diversity dynamic which guarantees
fast and efficient improvements in the search until detection
of a solution with high performance.
Although these recommendations are undoubtedly useful, in practice they might be very hard to follow when the
algorithmic implementation occurs. For example, the concept of proper balance between global and local search can
significantly vary in its practical meaning dependent on the
problem under analysis. In addition, the search for complex
adaptive systems which perform the coordination of algorithmic components, e.g. in [10], can lead to algorithms which
are extremely domain specific and thus very hard to export
to other fields. This fact can obviously be seen as a practical
consequence of the No Free Lunch Theorem [18]: since the
average performance of all possible optimization methods is
the same over all possible optimization problems, a tailored
algorithmic design must be performed in order to obtain high
performance results in a specific application domain.
Despite the validity and importance of the No Free Lunch
Theorem, in this paper we propose an algorithm which aims
at pushing towards the outer limit of this theorem. In other
words, we propose an algorithm which combines robustness
and high performance on a various set of optimization problems. In a sense, the algorithm proposed in this paper can be
seen as a continuation of the work described in [19] which
was already proven to have a high performance for diverse
applications and test problems.
In order to pursue this aim, this article proposes an algorithm based on fairly simple working principles. We propose
an evolutionary framework in the fashion of the differential
evolution (DE) with some randomization in the parameter
setting inspired by the approach described in [20]. The choice
of the DE is due to its reliability and versatility in continuous
space [21]. In addition, the proposed algorithm makes use
of two simple local search algorithms, a golden search and
a hill-climber, activated by a randomized and deterministic
criterion respectively. One novelty in the proposed approach
is that local search is integrated within the move operators
and in particular within choice of the scale factor during the
mutation move. This choice can be seen as a natural countermeasure to the stagnation problems which the Differential
Evolution is subject to [22].
This paper is organized in the following way: Sect. 2 gives
an extensive description of the DE and the enhancement
recently proposed in literature. This section takes on the role


of offering a compact survey on the topic and, at the same time


introduces algorithms which will be used as a benchmark
for the proposed algorithm. Section 3 gives a description of
the proposed algorithms highlighting the working principles
of each component and justifies the choices made regarding their coordination. Section 4 shows numerical results of
the experiments carried out. A preliminary numerical test
is given in order to justify the employment of two local
search algorithms within the evolutionary framework. Subsequently, an in depth analysis of the algorithmic performance
is carried out in order to compare the behavior of the proposed algorithm with four other differential evolution based
metaheuristics. The numerical experiments have been carried out in both low and high dimensions. Section 5 gives
the conclusion of the work carried out and mentions possible
future developments.

2 Recent advances in differential evolution


In order to clarify notation used throughout this article we
refer to the minimization problem of an objective function
f (x), where x is a vector of n design variables in a decision
space D.
2.1 Differential evolution
According to its original definition given in [23], the DE
consists of the following steps. An initial sampling of Spop
individuals is executed pseudo-randomly with a uniform
distribution function within the decision space D. At each
generation, for each individual xi of the Spop available, three
individuals xr , xs and xt are pseudo-randomly extracted from
the population. According to DE logic, a provisional off is generated by mutation as:
spring xoff

= xt + F(xr xs )
xoff

(1)

where F ∈ [0, 1+[ is a scale factor which controls the length of the exploration vector (xr − xs) and thus determines how far from point xi the offspring should be generated. With F ∈ [0, 1+[, it is here meant that the scale factor should be a positive value which cannot be much greater than 1, see [21]. While there is no theoretical upper limit for F, effective values are rarely greater than 1.0. The mutation scheme shown in Eq. (1) is also known as DE/rand/1. Other variants of the mutation rule have subsequently been proposed in literature [24]:

DE/best/1:        x′off = xbest + F(xs − xt)
DE/cur-to-best/1: x′off = xi + F(xbest − xi) + F(xs − xt)
DE/best/2:        x′off = xbest + F(xs − xt) + F(xu − xv)
DE/rand/2:        x′off = xr + F(xs − xt) + F(xu − xv)


Fig. 1 DE pseudocode

where xbest is the solution with the best performance among


the individuals of the population, xu and xv are two additional
pseudo-randomly selected individuals.
Then, to increase exploration, each gene of the new individual x′off is switched with the corresponding gene of xi with a uniform probability CR ∈ [0, 1] and the final offspring xoff is generated:

xoff,j = { xi,j     if rand(0, 1) < CR
           x′off,j  otherwise              (2)

where rand(0, 1) is a random number between 0 and 1; j is the index of the gene under examination, from 1 to n, n being the length of each individual.
The resulting offspring xoff is evaluated and, according to
a steady-state strategy, it replaces xi if and only if f (xoff ) <
f (xi ); otherwise no replacement occurs. For the sake of clarity, the pseudo-code highlighting working principles of the
DE is shown in Fig. 1.
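The generation scheme above (mutation, crossover, one-to-one replacement) can be sketched in a few lines of Python. The function names, parameter values and the toy sphere objective below are illustrative, not from the paper; note that, following Eq. (2) as written here, the parent gene is kept where rand(0, 1) < CR.

```python
import numpy as np

def de_generation(pop, f, F=0.7, CR=0.3, rng=None):
    """One DE/rand/1 generation: mutation (Eq. 1), crossover (Eq. 2),
    and replacement of x_i only when the offspring improves upon it."""
    rng = np.random.default_rng() if rng is None else rng
    S, n = pop.shape
    new_pop = pop.copy()
    for i in range(S):
        # extract r, s, t pseudo-randomly, all distinct from i
        r, s, t = rng.choice([k for k in range(S) if k != i], size=3, replace=False)
        mutant = pop[t] + F * (pop[r] - pop[s])        # Eq. (1), DE/rand/1
        keep_parent = rng.random(n) < CR               # Eq. (2): keep x_i,j with prob. CR
        offspring = np.where(keep_parent, pop[i], mutant)
        if f(offspring) < f(pop[i]):                   # replacement only on improvement
            new_pop[i] = offspring
    return new_pop

# toy run on the De Jong (sphere) function
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
pop = rng.uniform(-5.12, 5.12, size=(20, 5))
start = min(sphere(x) for x in pop)
for _ in range(150):
    pop = de_generation(pop, sphere, rng=rng)
best = min(sphere(x) for x in pop)
```

Since replacement happens only on improvement, the best fitness in the population is monotonically non-increasing over generations.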
2.2 Parameter setting in differential evolution
As highlighted in [22], due to its inner structure, the DE is
subject to stagnation problems. Stagnation is that undesired
effect which occurs when a population based algorithm does
not converge to a solution (even suboptimal) and the population diversity is still high. In the case of the DE, stagnation
occurs when the algorithm does not manage to improve upon
any solution of its population for a prolonged amount of generations.
In order to avoid this undesired effect, the moving operators of the DE must be properly set. In other words, for
successful functioning of the DE, a proper setting of the population size and parameters F and CR (see Eqs. 1, 2) must


be performed. The population size, analogous to the other


evolutionary algorithms (EAs), if too small could cause premature convergence and if too large could cause stagnation
(see [25]). A good value can be found by considering the
dimensionality of the problem similar to what is commonly
performed for the other EAs. A guideline is given in [26]
where a setting of Spop equal to ten times the dimensionality
of the problem is proposed.
On the other hand, the setting of F and CR is neither an
intuitive nor a straightforward task but is unfortunately crucial for guaranteeing algorithmic functioning. Several studies
have thus been proposed in literature. The study reported in
[22] arrives at the conclusion, after an empirical analysis, that
usage of F = 1 is not recommended, since according to a
conjecture of the authors it leads to a significant decrease in
explorative power. Analogously, the setting CR = 1 is also
discouraged since it would dramatically decrease the amount
of possible offspring solutions. In [26] and [27] the settings
F [0.5, 1] and CR [0.8, 1] are recommended. In [27] the
setting F = CR = 0.9 is chosen on the basis of discussion
in [28]. The empirical analysis reported in [29] shows that in
many cases the setting of F ≈ 0.6 and CR ≈ 0.6 leads to
results having better performance.
Several studies, e.g. [30,31], highlight that an efficient
parameter setting is heavily problem-dependent (e.g. F = 0.2
could be a very efficient setting for a certain fitness landscape
and completely inadequate for another problem). This result
can be seen as an application to the DE scheme of the No
Free Lunch Theorem [18]. In [32], a modified version of DE
has been proposed. A system with two evolving populations
has been presented. The crossover rate CR has been set equal
to 0.5 after an empirical study. Unlike CR, the value of F is
adaptively updated at each generation by means of the following scheme:

F = { max(lmin, 1 − |fmax/fmin|)   if |fmax/fmin| < 1
      max(lmin, 1 − |fmin/fmax|)   otherwise            (3)

where lmin = 0.4 is the lower bound of F, and fmin and fmax are the minimum and maximum fitness values over the individuals of the populations.
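The scheme of Eq. (3) can be sketched as follows; this is a minimal illustration with lmin = 0.4 as in the text, leaving out a guard against division by zero when a fitness value vanishes.

```python
def adaptive_f(f_min, f_max, l_min=0.4):
    """Scale factor update of Eq. (3): F shrinks towards the lower bound
    l_min as the population fitness values f_min and f_max get close,
    and grows when they are spread apart."""
    if abs(f_max / f_min) < 1:
        return max(l_min, 1 - abs(f_max / f_min))
    return max(l_min, 1 - abs(f_min / f_max))

# spread-out fitness values -> large F; nearly equal values -> lower bound
print(adaptive_f(1.0, 100.0))   # close to 0.99
print(adaptive_f(99.0, 100.0))  # 0.4
```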
2.3 Self-adapting control parameters in differential
evolution
In order to avoid the parameter setting of F and CR, a simple
and effective strategy has been proposed in [20]. This strategy is named self-adapting control parameters in differential
evolution. The DE algorithm employing this strategy here is
called self-adaptive control parameter differential evolution
(SACPDE) and consists of the following.
With reference to Fig. 1, when the initial population is generated, two extra values between 0 and 1 are also generated


per each individual. These values represent F and CR related


to the individual under analysis. Each individual is thus composed (in a self-adaptive logic) of its genotype and its control
parameters:
xi = ⟨xi,1, xi,2, . . . , xi,j, . . . , xi,n, Fi, CRi⟩.    (4)

In accordance with a self-adaptive logic, e.g. [33], the variation operations are preceded by the parameter update. More
specifically when, at each generation, the ith individual xi is
taken into account and three other individuals are extracted
pseudo-randomly, its parameters Fi and CRi are updated
according to the following scheme:

Fi = { Fl + Fu · rand1   if rand2 < τ1
       Fi                otherwise          (5)

CRi = { rand3   if rand4 < τ2
        CRi     otherwise                   (6)

where randj, j ∈ {1, 2, 3, 4}, are uniform pseudo-random values between 0 and 1; τ1 and τ2 are constant values which represent the probabilities that the parameters are updated; Fl and Fu are constant values which represent the minimum value that Fi could take and the maximum variable contribution to Fi, respectively. The newly calculated values of Fi and CRi are then used for generating the offspring. The variation operators and selection scheme are identical to those of a standard DE (see Sect. 2.1).
For sake of clarity, the pseudo-code highlighting the working principles of the SACPDE is given in Fig. 2.
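The update rules of Eqs. (5) and (6) can be sketched as follows. The values Fl = 0.1, Fu = 0.9 and τ1 = τ2 = 0.1 are those commonly reported for this scheme in the literature around [20]; treat them as an assumption here rather than as this paper's setting.

```python
import random

def sacpde_update(F_i, CR_i, F_l=0.1, F_u=0.9, tau1=0.1, tau2=0.1, rng=random):
    """Self-adaptive update of Eqs. (5)-(6): with probability tau1 the scale
    factor is resampled in [F_l, F_l + F_u]; with probability tau2 the
    crossover rate is resampled uniformly in [0, 1]."""
    if rng.random() < tau1:            # Eq. (5)
        F_i = F_l + F_u * rng.random()
    if rng.random() < tau2:            # Eq. (6)
        CR_i = rng.random()
    return F_i, CR_i

# the per-individual parameters only occasionally refresh
random.seed(42)
F, CR = 0.5, 0.9
for _ in range(50):
    F, CR = sacpde_update(F, CR)
```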
2.4 Opposition based differential evolution
The opposition based differential evolution (OBDE), proposed in [34], employs the logic of the opposition points in
order to enhance exploration properties of the DE and test a
wide portion of the decision space.
For a given point xi = (xi,1, xi,2, . . . , xi,j, . . . , xi,n) belonging to a set D = [a1, b1] × [a2, b2] × · · · × [aj, bj] × · · · × [an, bn], its opposition point is defined as x̆i = (a1 + b1 − xi,1, a2 + b2 − xi,2, . . . , aj + bj − xi,j, . . . , an + bn − xi,n).
The OBDE consists of a DE framework and two opposition
based components: the first after the initial sampling and the
second after the survivor selection scheme. While the first
opposition based component is always applied after initialization, the second is activated by means of the probability
jr (jump rate). These opposition based components process
a set of candidate solutions and generate their opposition
points. They then merge the two sets of points (original and
opposition) and select those points which have the best performance (equal to the amount of candidate solutions in the
original set).
More specifically, when the initial sampling is
pseudo-randomly performed, opposition points of the initial


Fig. 2 SACPDE pseudocode

population are calculated, then half of these points (having


the best fitness values) are selected to begin the optimization
process. Analogously, at the end of each DE generation, when
the population has been selected for subsequent generation,
the opposition based component is applied.
For sake of clarity, the pseudo-code describing functioning of the OBDE is shown in Fig. 3.
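A minimal sketch of one opposition based component follows; the bounds, population size and toy objective are illustrative, not from the paper.

```python
import numpy as np

def opposition_step(pop, f, a, b):
    """Opposition based component of the OBDE: build the opposition point
    a + b - x of every individual, merge the two sets, and keep the best
    Spop points (a and b are the per-coordinate bounds of D)."""
    opposition = a + b - pop                   # opposition points, coordinate-wise
    merged = np.vstack([pop, opposition])
    fitness = np.array([f(x) for x in merged])
    keep = np.argsort(fitness)[: len(pop)]     # best half survives
    return merged[keep]

sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
a, b = np.full(3, -5.12), np.full(3, 5.12)
pop = rng.uniform(-5.12, 5.12, size=(10, 3))
pop = opposition_step(pop, sphere, a, b)
```

Because the merged set contains the original individuals, the best fitness after an opposition step can never be worse than before it.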
2.5 Differential evolution with adaptive crossover local
search
In order to enhance performance of the DE, in [35] a memetic approach, called differential evolution with adaptive
hill climbing simplex crossover (DEahcSPX), has been proposed. The main idea is that a proper balance of the exploration abilities of the DE and exploitation abilities of a local
searcher (LS) can lead to an algorithm with high performance. The proposed algorithm hybridizes the DE described
in Sect. 2.1 as an evolutionary framework and a LS deterministically applied to that individual of the DE population
with the best performance (in terms of fitness value).
The LS proposed for this hybridization is simplex crossover (SPX) [36]. More specifically, at each generation, that
individual having the best fitness value, indicated here with
xb , is extracted and the LS described in Fig. 4 is applied.


the DE framework employed is the standard DE described


in Fig. 1.

Fig. 3 OBDE pseudocode

3 Scale factor local search differential evolution

Despite its simplicity and the lack of theoretical justification, the SACPDE proposed in [20] seems to have good performance on a varied set of test problems and applications,
as confirmed in the study reported in [19]. According to our
interpretation, the SACPDE demonstrates successful behavior since it tends to correct a structural weak point in the
traditional DE. More specifically, the search logic of the DE
is too strongly deterministic and a unique value of scale factor F and crossover rate CR for all the individuals over the
entire evolution can lead to some difficulties in generating
new promising genotypes. This limitation in the DE search
structure seems to be more evident when the fitness landscape is more complex and when the dimensionality is high,
as often shown in literature, e.g. [37–39]. In other words,
the assignment of different scale factors and crossover rates
to each population individual seems to be beneficial to the
algorithmic performance [40]. Analogously, the probabilistic update of the aforementioned parameters also seems to
increase the chance of detecting novel promising solutions.
This paper aims at considering the study in [20,41] and
enhancing the results by means of the integration of two
cooperative/competitive local searchers. These local searchers attempt to enhance the quality of newly generated
offspring by operating directly on the scale factor. This algorithmic choice allows the application of a simple local search
which is independent of the dimensionality of the problem
and thus also computationally viable for large scale problems.
The proposed algorithm, namely scale factor local search
differential evolution (SFLSDE), is explained in detail in the
following subsections.
3.1 Evolutionary framework
At the beginning of the optimization process an initial population of Spop solutions xi is pseudo-randomly generated.
Similar to the SACPDE described in Sect. 2.3, each ith individual is structured in the following way:

Fig. 4 SPX pseudocode

If the SPX succeeds in improving upon the starting solution, a replacement occurs according to a meta-Lamarckian
logic [5].
It should be remarked that ε in Fig. 4 is a control parameter of the SPX which has been set equal to 1 in [35]. Finally,

xi = ⟨xi,1, xi,2, . . . , xi,j, . . . , xi,n, Fi, CRi⟩.    (7)

where the values of scale factor Fi and crossover rate CRi


are initially pseudo-randomly generated between 0 and 1.
At each generation, the individual with the best performance is taken into account; scale factor and crossover rate
are updated according to the following rules:


Fi = { golden section search (SFGSS)   if rand5 < τ3
       hill-climb (SFHC)               if τ3 ≤ rand5 < τ4
       Fl + Fu · rand1                 if rand2 < τ1 and rand5 > τ4
       Fi                              otherwise                    (8)

CRi = { rand3   if rand4 < τ2
        CRi     otherwise                                           (9)

where randj, j ∈ {1, 2, 3, 4, 5}, are uniform pseudo-random values between 0 and 1; τk, k ∈ {1, 2, 3, 4}, are constant threshold values. Values τ1 and τ2 represent the probabilities that the parameters are updated, while τ3 and τ4 are related to activation of the local search. The values Fl and Fu, analogous to the case of the SACPDE described in Sect. 2.3, are constant values which represent the minimum value that Fi could take and the maximum variable contribution to Fi, respectively. For all other individuals the update occurs as with the SACPDE, see Eqs. (5) and (6) in Sect. 2.3.
When the values Fi and CRi have been calculated, for each individual xi of the Spop available, three individuals xr, xs and xt are pseudo-randomly extracted from the population, in the DE fashion. The provisional offspring x′off is generated by mutation as:

x′off = xt + Fi(xr − xs).    (10)

 is switched with the corEach gene of the new individual xoff


responding gene of xi with a uniform probability CRi and
the final offspring xoff is generated:

if rand(0, 1) < CR
xi, j
xoff, j =
(11)

otherwise
xoff,
j

where rand(0, 1) is a random number between 0 and 1; j is


the index of the gene under examination. The resulting offspring xoff is evaluated and, according to a steady-state strategy, it replaces xi if and only if f (xoff ) < f (xi ); otherwise
no replacement occurs.

3.2 Local search algorithms

This update scheme corrects the SACPDE by including two local search applications as possibilities for choosing Fi. The main idea is that the update of the scale factor, and thus generation of the offspring, is, with a certain probability, controlled in order to guarantee a high quality solution which can take on a key role in subsequent generations, see also [42]. Two different local search algorithms have been chosen in accordance with the principle of conducting a search under complementary perspectives [6].

Local search in the scale factor space can be seen as the minimization over the variable Fi of the fitness function f in the direction given by xr and xs and modified by the crossover. More specifically, at first the scale factor local search determines those genes which are undergoing crossover by means of the standard criterion explained in Eq. (11); then it attempts to find the scale factor value which guarantees an offspring with the best performance. Thus, for given values of xt, xr, xs, and the set of design variables to be swapped during the crossover operation, the scale factor local search attempts to solve the following minimization problem:

min over Fi of f(Fi), Fi ∈ [−1, 1].    (12)

For sake of clarity, the procedure describing the fitness function is shown in Fig. 5.

Fig. 5 Local search fitness function f(Fi) pseudocode

It must be remarked that each scale factor Fi corresponds to an offspring solution xoff during the local search and thus the proposed local search can be seen as an instrument for detecting solutions with a high quality over the directions suggested by a DE scheme. At the end of the local search process, the newly generated design variables xi,j with corresponding scale factor Fi within the candidate solution xi, see Eq. (7), compose the offspring solution. In addition, it is fundamental to observe that negative values of Fi are admitted down to −1. The meaning of the negative scale factor is, obviously, in this context, inversion of the search direction. If this search in the negative direction succeeds, the corresponding positive value (the absolute value |Fi|) is assigned to the offspring solution which has been generated by the local search. In order to perform this minimization, the following algorithms have been included.

3.2.1 Scale factor golden section search


Golden section search is a classical local search algorithm
for non-differentiable fitness functions introduced in [43].
The scale factor golden section search (SFGSS) applies the
golden section search to the scale factor in order to generate
a high quality offspring. The SFGSS processes the interval
[a = 0.1, b = 1] and generates two intermediate points:
Fi1 = b − (b − a)/φ
Fi2 = a + (b − a)/φ    (13)


Fig. 6 SFGSS pseudocode

Fig. 7 SFHC pseudocode


Fig. 9 SFLSDE pseudocode

Fig. 8 Graphical representation of the SFHC

where φ = (1 + √5)/2 is the golden ratio. The values of f(Fi1) and f(Fi2) are then computed and compared. If f(Fi1) < f(Fi2), then Fi2 replaces b and the procedure is repeated in the new smaller [a, b] interval. If f(Fi1) ≥ f(Fi2), Fi1 replaces a and the procedure is repeated in the new interval. The SFGSS is interrupted when a computational budget is exceeded. Figure 6 shows the SFGSS pseudocode.
In the context of the DE multidimensional search, the SFGSS performs control of the offspring generation by at first individuating a rough estimate of a promising scale factor value (i.e. it detects whether a high or a low scale factor value is preferable) and then progressively refining the search.
Two issues related to the SFGSS must be clarified in depth.
The first issue is that the operator is focused on uni-modal

search. This is apparently a limitation of the reliability of


the SFGSS. On the other hand, the multi-modality of the
function f (Fi ) is subject to features of the fitness landscape
and the distance between xr and xs . If the points xr and xs are
close enough, the aforementioned function is likely to be unimodal and the SFGSS is fully reliable. If the function is multimodal, the SFGSS tends to improve upon the initial scale
factor Fi anyway, thus generating an offspring with a fairly
high performance. The second issue is that the SFGSS, due
to its structure, can return satisfactory results with very low
computational cost. Since the golden section search structure locates a promising area and performs fine tuning in the
subsequent iterations, further iterations beyond the first few are likely to yield only negligible improvements; thus a few iterations can already be of great assistance towards the global DE search.
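A simplified sketch of the SFGSS follows. The quadratic profile stands in for the true fitness function f(Fi) of Fig. 5, which would evaluate the offspring generated with each candidate scale factor; the budget is an illustrative choice.

```python
def sfgss(f, a=0.1, b=1.0, budget=10):
    """Scale factor golden section search (simplified sketch): shrink the
    interval [a, b] around the minimizer of f, the offspring fitness seen
    as a function of the scale factor F."""
    phi = (1 + 5 ** 0.5) / 2
    for _ in range(budget):
        F1 = b - (b - a) / phi            # intermediate points of Eq. (13)
        F2 = a + (b - a) / phi
        if f(F1) < f(F2):
            b = F2                        # minimum lies in [a, F2]
        else:
            a = F1                        # minimum lies in [F1, b]
    return (a + b) / 2

# toy f(F): a uni-modal offspring-fitness profile with its minimum near F = 0.37
F_star = sfgss(lambda F: (F - 0.37) ** 2)
```

Each iteration shrinks the interval by the factor 1/φ ≈ 0.618, so ten iterations already narrow [0.1, 1] to a width below 0.01.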


3.2.2 Scale factor hill-climb

The uni-dimensional hill-climb local search is one of the simplest and most popular optimization algorithms, present in any optimization book, e.g. [44]. The algorithm uses the current value of Fi as a starting point and is composed of an exploratory move and a decisional move. The exploratory move samples Fi − h and Fi + h, where h is a step size. The decisional move computes min{f(Fi − h), f(Fi), f(Fi + h)} and selects the corresponding point as the center of the next exploratory move. If the center of the new exploratory move is still Fi, the step size h is halved. The local search is stopped when a budget condition is exceeded. For sake of completeness, the pseudo-code of the scale factor hill-climb (SFHC) is shown in Fig. 7.

It should be remarked that the SFHC is a local search algorithm characterized by a steepest descent pivot rule, see [45], i.e. an algorithm which explores the whole neighborhood of the candidate solution before making a decision on the search direction. This property makes the local search, in general, accurate and thus relatively computationally expensive. The computational cost in one dimension cannot, in any case, be dramatically high.

It is interesting to visualize the functioning of this local searcher in terms of generation of an offspring within a DE for a multi-dimensional problem. Since the scale factor is related to the modulus of a moving vector (xr − xs) in the generation of the preliminary offspring, the SFHC in operation can be seen as a pulsing vector in a multi-dimensional space which tunes to the best offspring and then generates this offspring. Figure 8 gives a graphical representation of the SFHC.
3.3 Considerations about the hybridization

The proposed SFLSDE is thus composed of the aforementioned evolutionary framework and local search algorithms.

Table 1 Test problems

Test problem         Function                                                                          Decision space
Ackley               20 + e − 20 exp(−0.2 √((1/n) Σi xi²)) − exp((1/n) Σi cos(2π xi))                  [−1, 1]^n
Alpine               Σi |xi sin xi + 0.1 xi|                                                           [−10, 10]^n
Camelback            4x1² − 2.1x1⁴ + x1⁶/3 + x1 x2 − 4x2² + 4x2⁴                                       [−3, 3] × [−2, 2]
DeJong               ||x||²                                                                            [−5.12, 5.12]^n
DropWave             −(1 + cos(12 √(||x||²))) / ((1/2)||x||² + 2)                                      [−5.12, 5.12]^n
Easom                −cos(x1) cos(x2) exp(−(x1 − π)² − (x2 − π)²)                                      [−100, 100]²
Griewangk            ||x||²/4000 − Πi cos(xi/√(i + 1)) + 1                                             [−600, 600]^n
Michalewicz          −Σi sin(xi) (sin(i xi²/π))^20                                                     [0, π]^n
Pathological         Σi (0.5 + (sin²(√(100xi² + xi+1²)) − 0.5) / (1 + 0.001(xi² − 2xi xi+1 + xi+1²)²)) [−100, 100]^n
Rastrigin            10n + Σi (xi² − 10 cos(2π xi))                                                    [−5.12, 5.12]^n
Rosenbrock valley    Σi (100(xi+1 − xi²)² + (1 − xi)²)                                                 [−2.048, 2.048]^n
Schwefel             −Σi xi sin(√|xi|)                                                                 [−500, 500]^n
Sum of powers        Σi |xi|^(i+1)                                                                     [−1, 1]^n
Tirronen             3 exp(−||x||²/(10n)) − 10 exp(−8||x||²) + (2.5/n) Σi cos(5xi (1 + i mod 2))       [−10, 5]^n
Whitley              Σi Σj (yi,j²/4000 − cos(yi,j) + 1), where yi,j = 100(xj − xi²)² + (1 − xi)²       [−100, 100]^n
Zakharov             ||x||² + (Σi i xi/2)² + (Σi i xi/2)⁴                                              [−5, 10]^n


Table 2 Single versus double local search schemes: optimization results

Test problem                EF+SFGSS                EF+SFHC                 SFLSDE
Ackley                      4.44e-16 ± 0.00e+00     4.00e-15 ± 0.00e+00     4.44e-16 ± 0.00e+00
Rotated Ackley              7.44e-03 ± 1.05e-02     3.11e-15 ± 8.88e-16     1.54e-15 ± 2.46e-15
Alpine                      0.00e+00 ± 0.00e+00     7.52e-18 ± 1.54e-16     1.95e-42 ± 4.46e-42
Rotated Alpine              1.30e-01 ± 0.00e+00     1.69e-23 ± 1.73e-23     1.94e-36 ± 1.67e-36
Camelback                   1.03e+00 ± 0.00e+00     1.03e+00 ± 0.00e+00     1.03e+00 ± 0.00e+00
De Jong                     0.00e+00 ± 0.00e+00     9.14e-74 ± 0.00e+00     3.06e-77 ± 7.89e-79
Dropwave                    9.61e-01 ± 2.46e-02     1.00e+00 ± 0.00e+00     1.00e+00 ± 0.00e+00
Easom                       1.00e+00 ± 0.00e+00     1.00e+00 ± 0.00e+00     1.00e+00 ± 0.00e+00
Griewangk                   0.00e+00 ± 0.00e+00     0.00e+00 ± 0.00e+00     0.00e+00 ± 0.00e+00
Rotated Griewangk           0.00e+00 ± 0.00e+00     3.29e-04 ± 6.75e-03     0.00e+00 ± 0.00e+00
Michalewicz                 2.96e+01 ± 9.06e-04     2.88e+01 ± 0.00e+00     2.92e+01 ± 1.44e-01
Rotated Michalewicz         1.08e+01 ± 1.10e+00     1.56e+01 ± 1.38e+00     1.60e+01 ± 1.37e+00
Pathological                1.45e+01 ± 0.00e+00     1.45e+01 ± 7.17e-08     1.45e+01 ± 0.00e+00
Rotated pathological        1.45e+01 ± 7.35e-06     1.45e+01 ± 1.44e-05     1.45e+01 ± 3.30e-06
Rastrigin                   0.00e+00 ± 0.00e+00     0.00e+00 ± 0.00e+00     0.00e+00 ± 0.00e+00
Rotated Rastrigin           2.22e-01 ± 2.22e-01     0.00e+00 ± 0.00e+00     0.00e+00 ± 0.00e+00
Rosenbrock valley           1.57e-05 ± 0.00e+00     1.67e-07 ± 3.60e-07     5.16e-08 ± 6.83e-09
Rotated Rosenbrock valley   5.47e-01 ± 0.00e+00     5.47e-01 ± 1.74e-06     5.47e-01 ± 0.00e+00
Schwefel                    1.26e+04 ± 3.64e-12     1.14e+04 ± 4.27e+02     1.17e+04 ± 0.00e+00
Rotated Schwefel            8.33e+03 ± 1.97e+02     7.52e+03 ± 2.43e+01     7.34e+03 ± 1.43e+02
Sum of powers               8.07e-131 ± 0.00e+00    2.18e-177 ± 0.00e+00    1.82e-203 ± 0.00e+00
Rotated sum of powers       1.23e-08 ± 0.00e+00     6.11e-10 ± 0.00e+00     0.00e+00 ± 0.00e+00
Tirronen                    7.30e+00 ± 0.00e+00     4.64e+00 ± 2.20e+00     4.53e+00 ± 2.12e+00
Rotated Tirronen            7.64e+00 ± 0.00e+00     2.69e+00 ± 1.55e+00     5.27e+00 ± 3.00e+00
Whitley                     2.06e+02 ± 3.05e+01     2.14e+02 ± 1.05e+02     1.60e+02 ± 1.39e+02
Rotated Whitley             4.08e+02 ± 0.00e+00     4.06e+02 ± 0.00e+00     4.06e+02 ± 0.00e+00
Zakharov                    8.41e-02 ± 8.82e-02     6.01e-22 ± 5.94e-22     9.36e-64 ± 0.00e+00
Rotated Zakharov            2.42e-03 ± 4.03e-03     2.85e-21 ± 0.00e+00     4.81e-53 ± 6.96e-52

One important feature of the proposed algorithm is the


structure of the solutions in a self-adaptive fashion as shown
in Eq. (7). As stated before, assignment of local parameters
related to the variation operators (Fi and CRi) to each solution seems to be a crucial factor for enhancing the exploration
features in a DE scheme. Also the occasional refreshment of
the parameters seems to provide a better chance for the detection of promising genotypes.
Nevertheless, the performance of the evolutionary framework can be significantly improved, especially for complex fitness landscapes and high dimensional cases. For this purpose, two local search algorithms with different search logics, acting on the scale factor, have been integrated into the evolutionary framework. Activation of the local search is performed by a simple probabilistic criterion, as shown in Eq. (8). This hybridization has been designed following the same logic as that of the SACPDE, i.e. as an additional set of update rules for the scale factor. The role of these local searchers is to assist the evolution by generating offspring with high performance which can promote a successful evolution. The activation probability, determined by τ3 and τ4, is set to be rather low (see the parameter setting in Sect. 4), since an excessive local search would lead to an unnecessary increase in the computational cost. The application of the local search is carried out, in any case, with a relatively low depth, i.e. with a limited computational budget. Since the local search is performed in one dimension, a relatively low number of fitness evaluations (especially for the SFGSS) is sufficient to obtain a substantial improvement.
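The SFGSS is essentially a golden section search restricted to one variable, the scale factor. A minimal sketch of such a one-dimensional search under a fixed evaluation budget (the function and interval below are illustrative, not the paper's notation) is:

```python
def golden_section_search(f, a, b, budget=8):
    """Minimize a one-dimensional function f over [a, b] using
    golden section search with a fixed budget of evaluations of f."""
    phi = (1 + 5 ** 0.5) / 2  # golden ratio
    c = b - (b - a) / phi
    d = a + (b - a) / phi
    fc, fd = f(c), f(d)
    evals = 2
    while evals < budget:
        if fc < fd:            # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - (b - a) / phi
            fc = f(c)
        else:                  # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + (b - a) / phi
            fd = f(d)
        evals += 1
    return c if fc < fd else d

# e.g. minimizing an (illustrative) unimodal function of the scale
# factor over the range [0.1, 1.0]
best_F = golden_section_search(lambda F: (F - 0.37) ** 2, 0.1, 1.0)
```

With a budget of 8 evaluations the bracketing interval shrinks by a factor of roughly 1/phi^6, about 0.06, which illustrates why so few evaluations can already yield a useful scale factor value.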
For the sake of clarity, Fig. 9 shows the complete pseudocode of the SFLSDE.
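A simplified reading of the resulting update rule for the scale factor can be sketched as the following dispatch logic. The routine names sfgss and sfhc are placeholders for the two local searchers, and the jDE-style random refresh with probability τ1 = 0.1 follows the SACPDE rule; the exact form of Eq. (8) is given in the paper, so this is only an illustrative sketch:

```python
import random

def update_scale_factor(F_i, sfgss, sfhc, Fl=0.1, Fu=0.9,
                        tau3=0.03, tau4=0.07, tau1=0.1):
    """Return the scale factor to use for the current offspring."""
    r = random.random()
    if r < tau3:                  # ~3% of offspring: golden section search
        return sfgss()
    if r < tau4:                  # next ~4%: hill-climb started from F_i
        return sfhc(F_i)
    if random.random() < tau1:    # jDE-style occasional random refresh
        return Fl + Fu * random.random()
    return F_i                    # otherwise keep the inherited value
```

With these thresholds an offspring has a 3% chance of a SFGSS update, a 4% chance of a SFHC update, and a 93% chance of undergoing the plain self-adaptive strategy, matching the setting described in Sect. 4.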

4 Numerical results
This section shows numerical results which prove the viability of the proposed approach. The results are divided into


Table 3 Final values, average ± standard deviation in 30 dimensions

Test problem              | DE                     | SACPDE                | OBDE                  | DEahcSPX              | SFLSDE
Ackley                    | 6.72e-15 ± 1.50e-15    | 5.36e-14 ± 9.41e-14   | 1.61e-02 ± 1.38e-03   | 6.60e-02 ± 4.02e-02   | 1.46e-15 ± 1.93e-15
Rotated Ackley            | 6.37e-15 ± 1.67e-15    | 8.13e-14 ± 2.03e-13   | 1.45e-03 ± 0.00e+00   | 1.37e-01 ± 7.85e-02   | 4.44e-16 ± 0.00e+00
Alpine                    | 9.97e-06 ± 2.80e-05    | 4.41e-16 ± 5.27e-16   | 1.85e-02 ± 1.99e-03   | 2.65e-02 ± 7.76e-03   | 1.09e-36 ± 1.08e-35
Rotated alpine            | 1.68e+01 ± 4.81e+00    | 2.10e-01 ± 2.71e-01   | 1.36e-02 ± 0.00e+00   | 4.58e-01 ± 5.37e-01   | 2.08e-35 ± 2.08e-35
Camelback                 | 1.03e+00 ± 6.66e-16    | 1.03e+00 ± 0.00e+00   | 1.03e+00 ± 6.66e-16   | 1.03e+00 ± 6.66e-16   | 1.03e+00 ± 0.00e+00
De Jong                   | 4.51e-36 ± 4.04e-36    | 7.61e-32 ± 4.08e-31   | 1.56e-02 ± 0.00e+00   | 7.22e-01 ± 8.62e-01   | 1.42e-79 ± 9.26e-80
Dropwave                  | 7.86e-01 ± 6.95e-09    | 5.36e-01 ± 1.23e-01   | 7.86e-01 ± 0.00e+00   | 5.10e-01 ± 1.39e-01   | 1.00e+00 ± 0.00e+00
Easom                     | 1.00e+00 ± 0.00e+00    | 1.00e+00 ± 0.00e+00   | 1.00e+00 ± 0.00e+00   | 1.00e+00 ± 0.00e+00   | 1.00e+00 ± 0.00e+00
Griewangk                 | 0.00e+00 ± 0.00e+00    | 5.49e-02 ± 1.13e-01   | 1.17e-01 ± 0.00e+00   | 2.95e+00 ± 4.12e+00   | 0.00e+00 ± 0.00e+00
Rotated Griewangk         | 2.95e-04 ± 1.59e-03    | 6.23e-03 ± 1.09e-02   | 8.43e-01 ± 4.00e-01   | 1.75e+00 ± 8.42e-01   | 0.00e+00 ± 0.00e+00
Michalewicz               | 2.88e+01 ± 2.59e-01    | 2.88e+01 ± 3.71e-01   | 2.74e+01 ± 1.50e+00   | 1.85e+01 ± 1.24e+00   | 2.89e+01 ± 6.83e-02
Rotated Michalewicz       | 9.17e+00 ± 5.03e-01    | 1.53e+01 ± 1.43e+00   | 1.03e+01 ± 7.74e-01   | 8.21e+00 ± 5.15e-01   | 1.58e+01 ± 1.05e+00
Pathological              | 1.45e+01 ± 4.06e-07    | 1.45e+01 ± 3.60e-07   | 1.45e+01 ± 3.21e-09   | 1.45e+01 ± 1.57e-05   | 1.45e+01 ± 1.45e-08
Rotated pathological      | 1.45e+01 ± 7.12e-06    | 1.45e+01 ± 4.97e-05   | 1.45e+01 ± 2.50e-06   | 1.45e+01 ± 6.41e-05   | 1.45e+01 ± 2.77e-06
Rastrigin                 | 8.20e+00 ± 2.27e+00    | 1.21e+01 ± 4.66e+00   | 1.05e+01 ± 0.00e+00   | 7.89e+01 ± 1.80e+01   | 0.00e+00 ± 0.00e+00
Rotated Rastrigin         | 2.01e+02 ± 1.00e+01    | 4.65e+01 ± 1.56e+01   | 1.05e+02 ± 4.45e+01   | 1.87e+02 ± 1.54e+01   | 0.00e+00 ± 0.00e+00
Rosenbrock valley         | 5.75e-21 ± 7.47e-21    | 1.03e-05 ± 5.54e-05   | 9.28e-01 ± 0.00e+00   | 3.78e+00 ± 1.35e+00   | 5.95e-10 ± 0.00e+00
Rotated Rosenbrock valley | 5.47e-01 ± 5.66e-15    | 5.47e-01 ± 1.50e-05   | 1.84e+00 ± 0.00e+00   | 6.70e+00 ± 2.77e+00   | 5.47e-01 ± 2.29e-06
Schwefel                  | 1.20e+04 ± 2.60e+02    | 1.14e+04 ± 2.76e+02   | 1.13e+04 ± 4.92e+02   | 7.27e+03 ± 7.84e+02   | 1.19e+04 ± 1.18e+02
Rotated Schwefel          | 6.21e+03 ± 2.60e+02    | 7.10e+03 ± 5.31e+02   | 7.64e+03 ± 7.04e+02   | 4.59e+03 ± 3.55e+02   | 7.45e+03 ± 0.00e+00
Sum of powers             | 7.92e-103 ± 1.93e-102  | 6.92e-09 ± 1.95e-08   | 1.23e-12 ± 8.74e-13   | 1.13e-05 ± 3.70e-05   | 6.04e-206 ± 0.00e+00
Rotated sum of powers     | 3.12e-05 ± 1.80e-05    | 1.03e-09 ± 7.31e-10   | 1.32e-07 ± 4.81e-08   | 1.19e-07 ± 1.85e-08   | 3.93e-62 ± 0.00e+00
Tirronen                  | 2.25e+00 ± 4.90e-02    | 2.39e+00 ± 7.37e-02   | 2.26e+00 ± 9.75e-02   | 1.41e+00 ± 9.06e-02   | 7.70e+00 ± 0.00e+00
Rotated Tirronen          | 1.31e+00 ± 7.80e-02    | 1.97e+00 ± 4.66e-01   | 1.29e+00 ± 1.15e-01   | 1.06e+00 ± 1.07e-01   | 5.14e+00 ± 2.87e+00
Whitley                   | 1.67e+02 ± 1.11e+02    | 7.04e+02 ± 1.04e+03   | 4.71e+09 ± 7.20e+09   | 8.54e+12 ± 1.60e+13   | 2.01e+02 ± 0.00e+00
Rotated Whitley           | 6.87e+02 ± 2.43e+01    | 5.66e+02 ± 9.53e+01   | 5.07e+06 ± 1.17e+08   | 1.43e+08 ± 2.18e+08   | 4.06e+02 ± 5.68e-14
Zakharov                  | 4.52e+01 ± 1.00e+01    | 1.44e-04 ± 2.17e-04   | 2.57e+00 ± 3.01e-01   | 7.25e+00 ± 3.06e+00   | 2.04e-60 ± 3.46e-60
Rotated Zakharov          | 4.83e+01 ± 1.08e+01    | 1.41e-04 ± 2.39e-04   | 2.04e+00 ± 1.14e+00   | 5.29e+00 ± 7.27e-01   | 6.32e-58 ± 2.23e-57

three subsections. Section 4.1 compares the performance of the SFLSDE with that of the evolutionary framework combined with a single local search and empirically shows that the combined action of two local search algorithms is beneficial. Section 4.2 compares the performance of the SFLSDE with a standard DE and three modern DE based metaheuristics. Finally, Sect. 4.3 repeats the comparison carried out in Sect. 4.2 for the high dimensional case.
Throughout the entire numerical result section a constant parameter setting has been used for the SFLSDE in order to prove the robustness of the proposed algorithm over various optimization problems. More specifically, Fl = 0.1 and Fu = 0.9, as suggested in [20]. Regarding the local search activation, τ1 = τ2 = 0.1, τ3 = 0.03, and τ4 = 0.07. This setting means that each generated offspring has a 3% chance of being biased by the SFGSS, a 4% chance of being biased by the SFHC, and a 93% chance to undergo the normal self-adaptive strategy. The choice of τ3 and τ4 has been made empirically on the basis of parameter tuning. As a general guideline, it has been observed that the local search pressure due to the proposed approach should be limited compared to the global pressure, due to the SACPDE framework, in order to obtain a successful algorithmic behavior. In addition, according to our results, the local search should not be run with a high depth value; an application to relatively many points with a limited depth (limited computational budget) is preferable to an application of the local search to few points with a high depth. More specifically, each SFGSS is run for 8 fitness evaluations and each SFHC for 20. On the contrary, the population size has been set, for all the algorithms under analysis, on the basis of the dimensionality of the problem.


Table 4 Results of the Student's t-test in 30 dimensions (pairwise +/=/− outcomes of the SFLSDE against DE, SACPDE, OBDE, and DEahcSPX for each of the 28 test problems)

In the three following subsections the test functions listed in Table 1 are considered. It should be remarked that some rotated problems have been added to our benchmark set. The rotated problems are obtained by multiplying the vector of variables by a randomly generated orthogonal rotation matrix.
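This rotation procedure can be sketched as follows: a random orthogonal matrix is obtained from the QR decomposition of a Gaussian matrix, and the benchmark is evaluated on the rotated variable vector. The helper name and the seed are illustrative, not taken from the paper:

```python
import numpy as np

def make_rotated(f, n, seed=42):
    """Return a rotated variant of benchmark f in n dimensions."""
    rng = np.random.default_rng(seed)
    # QR decomposition of a Gaussian matrix yields an orthogonal Q
    R, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return lambda x: f(R @ np.asarray(x, dtype=float))

# e.g. a rotated sphere (De Jong) function; the sphere is invariant
# under rotation because an orthogonal R preserves Euclidean norms
sphere = lambda x: float(np.sum(np.asarray(x, dtype=float) ** 2))
rotated_sphere = make_rotated(sphere, 5)
```

For separable functions such as Rastrigin, the rotation removes separability and typically makes the problem considerably harder, which is the point of including the rotated variants in the benchmark.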

4.1 Empirical justification of the use of double local search

This section aims at empirically justifying the combined employment of two local search algorithms. To this end, the SFLSDE has been run on the test problems listed in Table 1 (n = 25 for all the test problems except the Camelback and Easom functions, which are two-dimensional) and compared to two algorithms composed of the same evolutionary framework (EF) and only one local search algorithm at a time. More specifically, the EF+SFGSS is composed of the evolutionary framework and the scale factor golden section search, while the EF+SFHC is composed of the evolutionary framework and the scale factor hill-climb. The parameter setting has been chosen so as to keep the balance between global and local search constant. Thus, the SFGSS is activated in the EF+SFGSS with a probability of 0.13, while the SFHC is activated in the EF+SFHC with a probability of 0.05. Each algorithm has been run for 50,000 fitness evaluations with a population size Spop = 30 over 30 independent runs. The optimization results (average ± standard deviation error) are shown in Table 2. The best results are highlighted in bold face.

Although, as expected, the results in Table 2 are very similar to each other, this preliminary test shows that the combined use of two local searches is more beneficial than the use of a single local search integrated into the evolutionary framework. This finding is in accordance with the design strategy described in [6] and the Meta-Lamarckian learning approach shown in [5].

4.2 Numerical results for (relatively) low dimensional problems

This subsection compares the performance of the SFLSDE with the following algorithms: a plain DE [23], the SACPDE [20], the OBDE [34], and the DEahcSPX [35].

The DE has been run with a population size Spop = 30, F = 0.7, and CR = 0.7, in accordance with the suggestions given in [29].

The SACPDE has been run with a population size Spop = 30; the constant values in Eqs. (5) and (6) have been set, respectively, to Fl = 0.1, Fu = 0.9, τ1 = 0.1, and τ2 = 0.1, as shown in [20].

The OBDE has been run with a population size Spop = 30, F = 0.7, and CR = 0.7 (in accordance with [29]) and, with reference to Fig. 3, jr = 0.3.

The DEahcSPX has been run with a population size Spop = 30, F = 0.7, and CR = 0.7 (in accordance with [29]) and, with reference to Fig. 4, np = 10.

Each algorithm has been run for 30 independent runs, with 100,000 fitness evaluations per run. In this section the algorithmic performance in a low dimensional case is experimentally studied. More specifically, with reference to Table 1, all the functions under analysis (except the Camelback and Easom) have been run with n = 30 variables. The population size of the SFLSDE has been set to Spop = 30 as well, as for the other algorithms.

Table 3 shows the final values (± the standard deviation error) obtained by each algorithm for each test problem. The best results are highlighted in bold face.

Table 5 Results of the Q test in 30 dimensions

Test problem              | DE      | SACPDE  | OBDE    | DEahcSPX | SFLSDE
Ackley                    | 6.1e+01 | 2.7e+01 | 2.3e+01 | 4.3e+01  | 2.7e+01
Rotated Ackley            | 9.7e+01 | 4.0e+01 | 2.5e+01 | 3.2e+02  | 3.2e+01
Alpine                    | 1.4e+02 | 3.5e+01 | 3.4e+01 | 1.5e+02  | 3.3e+01
Rotated alpine            | inf     | 6.9e+01 | 4.6e+01 | 3.4e+02  | 5.3e+01
Camelback                 | 2.8e+00 | 2.2e+00 | 2.6e+00 | 2.0e+01  | 2.4e+00
De Jong                   | 3.5e+01 | 1.6e+01 | 1.1e+01 | 1.7e+01  | 1.7e+01
Dropwave                  | inf     | inf     | inf     | inf      | 2.1e+02
Easom                     | 1.7e+01 | 9.8e+00 | 9.2e+00 | 7.9e+01  | 1.0e+01
Griewangk                 | 3.4e+01 | 1.6e+01 | 1.2e+01 | 1.9e+01  | 1.6e+01
Rotated Griewangk         | 3.5e+01 | 1.6e+01 | 1.2e+01 | 1.7e+01  | 1.6e+01
Michalewicz               | 3.0e+02 | 1.1e+02 | 1.3e+03 | inf      | 1.2e+02
Rotated Michalewicz       | inf     | 9.9e+02 | inf     | inf      | 1.0e+03
Pathological              | 7.4e+00 | 2.8e+01 | 8.2e+00 | 1.8e+02  | 2.9e+01
Rotated pathological      | 2.1e+01 | 9.5e+01 | 1.3e+01 | 3.0e+02  | 8.9e+01
Rastrigin                 | 3.2e+02 | 7.2e+01 | 3.3e+02 | inf      | 5.4e+01
Rotated Rastrigin         | inf     | inf     | 1.1e+04 | inf      | 1.3e+02
Rosenbrock valley         | 5.1e+01 | 2.6e+01 | 2.3e+01 | 1.3e+02  | 2.8e+01
Rotated Rosenbrock valley | 6.2e+01 | 3.3e+01 | 2.9e+01 | 3.0e+02  | 3.6e+01
Rotated Schwefel          | inf     | 7.3e+02 | 5.9e+02 | inf      | 6.9e+02
Schwefel                  | 2.2e+02 | 1.6e+02 | 4.7e+02 | inf      | 1.1e+02
Sum of powers             | 1.4e+01 | 5.9e+00 | 4.0e+00 | 5.2e+00  | 6.1e+00
Rotated sum of powers     | 2.3e+01 | 6.0e+00 | 3.4e+00 | 4.2e+00  | 6.0e+00
Tirronen                  | inf     | inf     | inf     | inf      | 2.7e+03
Rotated Tirronen          | inf     | inf     | inf     | inf      | 8.6e+02
Whitley                   | 1.3e+01 | 5.8e+00 | 3.9e+00 | 4.1e+00  | 6.0e+00
Rotated Whitley           | 1.8e+01 | 5.3e+00 | 3.4e+00 | 3.8e+00  | 5.6e+00
Zakharov                  | 8.0e-01 | 2.9e+00 | 7.9e-01 | 8.2e+00  | 3.0e+00
Rotated Zakharov          | 4.9e-01 | 9.2e-01 | 2.8e-01 | 5.2e-01  | 6.1e-01

Results in Table 3 show that the SFLSDE converges to the best results in 23 cases out of the 28 under analysis. In the remaining five test problems the SFLSDE, in any case, reaches the second best results and is still competitive with the best performing algorithm. For example, on the Schwefel function the DE obtains the best values, but the proposed SFLSDE reaches satisfactory results anyway.
In order to prove the statistical significance of the results, the Student's t-test has been applied according to the description given in [46] for a confidence level of 0.95. The final values obtained by the SFLSDE have been compared to the final values returned by each algorithm used as a benchmark. Table 4 shows the results of the test. A "+" indicates the case when the SFLSDE statistically outperforms, for the corresponding test problem, the algorithm mentioned in the column; an "=" indicates the case when the pairwise comparison leads to success of the t-test, i.e. the two algorithms have the same performance; a "−" indicates the case when the SFLSDE is outperformed.
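As an illustration of this protocol, the pairwise comparison could be implemented as follows: a minimal equal-variance two-sample t statistic over the final values of two algorithms. The critical value of about 2.0 (roughly correct for 58 degrees of freedom at the 0.95 level) and the sign convention, which assumes minimization, are our assumptions rather than details stated in the paper:

```python
from statistics import mean, variance
from math import sqrt

def t_statistic(sample_a, sample_b):
    """Two-sample Student's t statistic with a pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    pooled = ((na - 1) * variance(sample_a) +
              (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / sqrt(pooled * (1 / na + 1 / nb))

def compare(sample_a, sample_b, critical=2.0):
    """Classify a pairwise comparison as '+', '=', or '-' for algorithm a."""
    t = t_statistic(sample_a, sample_b)
    if t < -critical:   # significantly lower final value: a outperforms b
        return "+"
    if t > critical:    # a is outperformed
        return "-"
    return "="          # no statistically significant difference
```

For maximization-style benchmarks the sign test would be reversed, so in practice the comparison direction has to follow each problem's convention.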


The t-test in Table 4 shows that the SFLSDE loses the comparison in only 5 cases out of the 112 comparisons carried out, i.e. the SFLSDE loses in only 4.4% of the pairwise comparisons. These results confirm the high performance, in terms of robustness, of the proposed algorithm over a wide spectrum of problems.
In order to execute a numerical comparison of the convergence speed performance, for each test problem the average final fitness value returned by the best performing algorithm, G, has been considered. Subsequently, the average fitness value at the beginning of the optimization process, J, has also been computed. The threshold value THR = J + 0.95(G − J) has then been calculated. The value THR represents 95% of the decay in the fitness value of the algorithm with the best performance. If an algorithm succeeds during a certain run to reach the value THR, the run is said to be successful. For each test problem, the average number of fitness evaluations n̄e required by each algorithm to reach THR has been computed. Subsequently, the Q-test (Q stands for Quality)


Fig. 10 Performance trends in 30 dimensions (panels a–g)

described in [47] has been applied. For each test problem and each algorithm, the Q measure is computed as

Q = n̄e / R,   (14)

where the robustness R is the percentage of successful runs. It is clear that, for each test problem, the smallest value corresponds to the best performance in terms of convergence speed. The value inf means that R = 0, i.e. the algorithm never reached the THR.
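A sketch of how the Q measure in Eq. (14) could be computed from per-run evaluation counts follows; the representation of an unsuccessful run as None is our choice, not the paper's:

```python
from statistics import mean

def q_measure(evals_to_thr):
    """Q measure of Eq. (14): average evaluations needed to reach THR
    over the successful runs, divided by the robustness R.

    evals_to_thr: one entry per run; None if THR was never reached."""
    successes = [e for e in evals_to_thr if e is not None]
    R = len(successes) / len(evals_to_thr)  # fraction of successful runs
    if R == 0:
        return float("inf")  # the algorithm never reached the THR
    return mean(successes) / R
```

Dividing by R penalizes algorithms that reach the threshold quickly but only in a fraction of the runs, which is why the measure captures both convergence speed and robustness.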


Table 6 Final values, average ± standard deviation in 100 dimensions

Test problem              | DE                     | SACPDE                | OBDE                  | DEahcSPX              | SFLSDE
Ackley                    | 5.16e-01 ± 3.55e-02    | 8.35e-05 ± 1.99e-04   | 9.02e-04 ± 2.04e-03   | 1.19e-03 ± 2.76e-03   | 9.85e-07 ± 5.30e-07
Rotated Ackley            | 1.96e+00 ± 1.31e-01    | 1.46e-04 ± 2.96e-04   | 3.20e-03 ± 7.06e-03   | 1.84e-03 ± 4.55e-03   | 9.61e-07 ± 8.19e-07
Alpine                    | 8.08e+01 ± 3.41e+00    | 4.88e-02 ± 4.91e-02   | 8.53e-02 ± 7.82e-03   | 9.64e-02 ± 1.39e-02   | 1.65e-03 ± 2.11e-03
Rotated alpine            | 1.92e+02 ± 8.13e+00    | 3.96e+00 ± 1.88e+00   | 1.14e-01 ± 4.55e-02   | 2.09e-01 ± 5.85e-02   | 1.58e-03 ± 1.60e-03
De Jong                   | 1.24e+01 ± 1.49e+00    | 9.88e-06 ± 2.75e-05   | 7.58e-04 ± 2.07e-03   | 1.60e-02 ± 4.68e-02   | 3.15e-10 ± 2.58e-10
Dropwave                  | 1.50e-02 ± 1.01e-03    | 2.61e-01 ± 5.83e-02   | 5.84e-01 ± 5.83e-02   | 4.86e-01 ± 5.97e-02   | 8.52e-01 ± 1.23e-01
Griewangk                 | 4.49e+01 ± 5.73e+00    | 5.13e-02 ± 7.80e-02   | 6.07e-02 ± 1.20e-01   | 2.74e-02 ± 5.89e-02   | 7.28e-08 ± 9.96e-08
Rotated Griewangk         | 4.23e+01 ± 5.47e+00    | 6.23e-02 ± 7.33e-02   | 1.50e-01 ± 3.09e-01   | 1.23e-01 ± 3.77e-01   | 3.00e-06 ± 5.26e-06
Michalewicz               | 4.33e+01 ± 1.40e+00    | 8.63e+01 ± 3.71e+00   | 4.36e+01 ± 1.61e+00   | 3.05e+01 ± 1.23e+00   | 8.63e+01 ± 3.73e+00
Rotated Michalewicz       | 1.72e+01 ± 8.68e-01    | 1.94e+01 ± 1.71e+00   | 1.97e+01 ± 9.92e-01   | 1.61e+01 ± 8.04e-01   | 1.88e+01 ± 1.90e+00
Pathological              | 4.95e+01 ± 8.70e-04    | 4.95e+01 ± 3.14e-04   | 4.95e+01 ± 3.84e-04   | 4.95e+01 ± 4.33e-03   | 4.95e+01 ± 3.44e-04
Rotated pathological      | 4.95e+01 ± 1.85e-03    | 4.95e+01 ± 4.55e-03   | 4.95e+01 ± 6.67e-03   | 4.95e+01 ± 5.25e-03   | 4.95e+01 ± 4.66e-04
Rastrigin                 | 7.58e+02 ± 2.82e+01    | 4.24e+01 ± 6.74e+00   | 5.48e+02 ± 3.66e+01   | 7.43e+02 ± 3.09e+01   | 6.81e-06 ± 1.54e-05
Rotated Rastrigin         | 1.17e+03 ± 3.74e+01    | 1.72e+02 ± 5.29e+01   | 7.92e+02 ± 3.01e+01   | 8.61e+02 ± 2.18e+01   | 7.95e-02 ± 3.67e-01
Rosenbrock valley         | 3.41e+01 ± 2.30e+00    | 3.21e-02 ± 2.22e-02   | 3.31e-02 ± 3.83e-02   | 6.59e-02 ± 1.22e-01   | 3.67e-02 ± 3.59e-02
Rotated Rosenbrock valley | 5.13e+01 ± 3.27e+00    | 3.48e+00 ± 5.38e-02   | 3.68e+00 ± 6.82e-01   | 3.48e+00 ± 1.74e-01   | 3.48e+00 ± 1.38e-01
Schwefel                  | 2.01e+04 ± 5.99e+02    | 3.74e+04 ± 8.82e+02   | 1.67e+04 ± 1.04e+03   | 1.04e+04 ± 6.08e+02   | 3.74e+04 ± 8.70e+02
Rotated Schwefel          | 1.12e+04 ± 8.68e+02    | 1.69e+04 ± 2.43e+03   | 9.89e+03 ± 6.31e+02   | 8.88e+03 ± 5.68e+02   | 1.69e+04 ± 2.43e+03
Sum of powers             | 7.24e-06 ± 4.78e-06    | 1.56e-15 ± 4.32e-15   | 1.81e-25 ± 9.73e-25   | 8.36e-10 ± 4.50e-09   | 5.33e-34 ± 2.81e-33
Rotated sum of powers     | 3.45e+00 ± 2.56e+00    | 6.66e-10 ± 1.32e-09   | 1.74e-05 ± 1.30e-05   | 2.12e-05 ± 1.21e-05   | 1.63e-10 ± 6.82e-10
Tirronen                  | 1.44e+00 ± 4.59e-02    | 1.13e+00 ± 1.42e-01   | 1.59e+00 ± 5.70e-02   | 9.43e-01 ± 5.35e-02   | 1.35e+00 ± 1.18e+00
Rotated Tirronen          | 7.04e-01 ± 5.03e-02    | 3.48e-01 ± 6.31e-02   | 7.04e-01 ± 8.34e-02   | 5.88e-01 ± 5.58e-02   | 5.48e-01 ± 3.86e-02
Whitley                   | 1.00e+13 ± 4.13e+15    | 1.45e+12 ± 2.38e+12   | 4.93e+08 ± 1.86e+09   | 2.09e+12 ± 4.70e+12   | 3.70e+03 ± 7.20e+02
Rotated Whitley           | 1.00e+13 ± 5.72e+17    | 9.59e+04 ± 2.18e+05   | 9.01e+03 ± 3.92e+02   | 2.21e+04 ± 6.20e+04   | 4.53e+03 ± 3.74e+01
Zakharov                  | 2.01e+03 ± 1.37e+02    | 9.83e+01 ± 2.51e+01   | 1.23e+03 ± 9.33e+01   | 1.10e+03 ± 1.07e+02   | 1.49e-07 ± 6.78e-07
Rotated Zakharov          | 2.30e+03 ± 9.87e+01    | 2.41e+02 ± 4.43e+01   | 1.75e+03 ± 1.24e+02   | 1.54e+03 ± 1.23e+02   | 2.35e-04 ± 1.12e-03

Table 5 shows the Q values in 30 dimensions. The best results are highlighted in bold face.
Results in Table 5 show that the OBDE has the best performance in terms of convergence speed in 30 dimensions. Nevertheless, the SFLSDE offers a competitive convergence speed for all the test problems under examination. If we take into consideration the results in Tables 3 and 5, we can conclude that on low dimensional problems the SFLSDE is either the best algorithm in terms of both convergence speed and final detected result, or else can be slightly slower in early generations but eventually capable of outperforming the other algorithms. In addition, it is fundamental to observe that the SFLSDE is the only algorithm which never takes the value inf in the Q measure. This means that the proposed algorithm is never characterized by a null robustness (R = 0), i.e. for all the 28 test problems the algorithm never prematurely converges to solutions with a poor performance. In this sense, the SFLSDE offers the best performance in terms of robustness and, in our opinion, the best compromise between the capacity to attain high quality results and the convergence speed demand.
In order to graphically show the behavior of the algorithms, some examples of the average performance, dependent on the number of fitness evaluations (for some of the test problems listed in Table 1), are represented in Fig. 10.
4.3 Numerical results for (relatively) high dimensional problems

The same numerical experiments shown in Sect. 4.2 (except for the Camelback and Easom functions, which are explicitly defined in two variables) have been repeated for n = 100 dimensions in order to test the performance of the SFLSDE on large scale problems. All algorithms have been run with the same parameter setting except for the population size, which has been set to Spop = 100 for all of them. Table 6 shows



Table 7 Results of the Student's t-test in 100 dimensions (pairwise +/=/− outcomes of the SFLSDE against DE, SACPDE, OBDE, and DEahcSPX for each of the 26 test problems)

the final values (± the standard deviation error) obtained by each algorithm for each test problem. The best results are highlighted in bold face.
Numerical results in Table 6 show that the SFLSDE converges to the best results in 22 test problems out of the 26 considered here; in the other 4 cases, again, the SFLSDE offers a good performance in terms of capacity to detect good solutions. Thus the SFLSDE seems to be very promising in the high dimensional case as well.
Table 7 shows the results of the t-test for a confidence level of 0.95.
Numerical results in Table 7 confirm that the proposed SFLSDE is very promising in the high dimensional case as well. As shown, the proposed algorithm loses only 3 pairwise comparisons out of the 104 carried out.
In order to estimate both robustness and convergence speed, the Q test has been repeated in the 100-dimension case. Table 8 shows the Q measures for the high dimensional problems.
Results in Table 8 show that the higher dimensionality makes the problems more difficult; thus the algorithms tend
to prematurely converge to suboptimal solutions. It can be noticed that the DE is rather inefficient compared to the other algorithms, as proven by the massive presence of inf values in the table. Thus, the standard DE seems to be incapable of solving complex large scale problems (e.g. Rotated Schwefel). The SACPDE seems to have a rather good performance in terms of the Q measure, since it fails at detecting solutions with a competitive performance only a limited number of times (4 times). In addition, a pairwise comparison between the SACPDE and the SFLSDE shows that the proposed scale factor local search seems to be rather efficient, since it systematically tends either to improve (e.g. Dropwave) or to leave unaltered (e.g. Michalewicz) the performance of the SACPDE. The DEahcSPX and OBDE seem to be able to solve most of the problems (each fails at detecting values with a high performance on 6 test problems). The OBDE in particular has a high performance in terms of convergence speed. Most importantly, the SFLSDE is the only algorithm which never takes an inf value and is thus the only algorithm (among those under analysis) which never prematurely converges to a solution with a poor performance. This fact proves that the
Table 8 Results of the Q test in 100 dimensions

Test problem              | DE      | SACPDE  | DEahcSPX | OBDE    | SFLSDE
Ackley                    | inf     | 2.1e+02 | 1.6e+02  | 1.8e+02 | 2.1e+02
Rotated Ackley            | 2.4e+02 | inf     | 1.9e+02  | 1.7e+02 | 2.2e+02
Alpine                    | inf     | 2.5e+02 | 2.2e+02  | 2.5e+02 | 2.3e+02
Rotated alpine            | inf     | 4.6e+02 | 3.1e+02  | 2.6e+02 | 3.0e+02
De Jong                   | 7.4e+02 | 1.3e+02 | 9.0e+01  | 7.9e+01 | 1.3e+02
Dropwave                  | inf     | inf     | inf      | inf     | 1.2e+03
Griewangk                 | 7.6e+02 | 1.3e+02 | 9.0e+01  | 8.0e+01 | 1.3e+02
Rotated Griewangk         | 7.4e+02 | 1.3e+02 | 8.9e+01  | 8.0e+01 | 1.4e+02
Michalewicz               | inf     | 1.0e+03 | inf      | inf     | 1.0e+03
Rotated Michalewicz       | inf     | 1.6e+03 | 2.6e+04  | 8.4e+02 | 2.1e+03
Pathological              | 2.6e+02 | 3.6e+02 | 5.4e+03  | 1.2e+02 | 3.2e+02
Rotated pathological      | 1.4e+03 | 4.4e+02 | 2.0e+04  | 1.5e+02 | 4.1e+03
Rastrigin                 | inf     | 4.1e+03 | inf      | inf     | 3.6e+02
Rotated Rastrigin         | inf     | inf     | inf      | inf     | 5.0e+02
Rosenbrock valley         | inf     | 2.0e+02 | 1.4e+02  | 1.8e+02 | 2.0e+02
Rotated Rosenbrock valley | inf     | 2.0e+02 | 1.4e+02  | 1.5e+02 | 2.0e+02
Schwefel                  | inf     | 4.4e+02 | inf      | inf     | 4.3e+02
Rotated Schwefel          | inf     | 1.8e+03 | inf      | inf     | 1.6e+03
Sum of powers             | 2.7e+02 | 5.0e+01 | 3.4e+01  | 1.5e+01 | 5.1e+01
Rotated sum of powers     | 7.2e+01 | 6.9e+00 | 3.5e+00  | 3.4e+00 | 5.8e+00
Tirronen                  | 1.1e+04 | inf     | inf      | 6.4e+02 | 2.7e+04
Rotated Tirronen          | 6.6e+02 | 4.4e+04 | 1.7e+03  | 1.0e+03 | 2.7e+04
Whitley                   | 3.4e+02 | 6.8e+01 | 3.6e+01  | 2.1e+01 | 6.6e+01
Rotated Whitley           | 6.9e+02 | 6.4e+01 | 3.1e+01  | 1.6e+01 | 6.2e+01
Zakharov                  | 4.4e+00 | 5.9e+01 | 2.6e+01  | 5.6e+00 | 5.6e+01
Rotated Zakharov          | 1.1e+00 | 7.6e+01 | 2.1e+00  | 2.0e+00 | 1.5e+01

SFLSDE has a very high performance in terms of robustness in the 100-dimension case as well.
Figure 11 shows the average performance of the algorithms under investigation for some of the test problems in Table 1.
In summary, the numerical results show that the SFLSDE is a very robust algorithm which seems to lead to high quality results in both low and high dimensional cases and for a varied set of test problems. The performance behavior of the SFLSDE can be classified into three groups according to the test problems under analysis. In some cases the algorithm significantly outperforms the other algorithms, obtaining excellent results (see e.g. Fig. 11a, e). For other test problems, the SFLSDE has a slightly worse performance in terms of convergence speed compared to other algorithms (usually the OBDE) but eventually outperforms all the other algorithms in the detection of the final solutions. Finally, in a few cases (5 for the 30 dimensional problems and 5 for the 100 dimensional problems) the SFLSDE has a slightly worse performance than one or two other algorithms but outperforms the remaining ones. It should be remarked that when the SFLSDE is


outperformed, the difference in performance with respect to the best algorithm is always very limited (see Fig. 11d). As a matter of fact, the SFLSDE is outperformed in a statistically relevant way in an extremely limited number of cases, as shown in Tables 4 and 7. The pairwise analysis between the SACPDE and the SFLSDE highlights that the proposed local search either greatly improves the performance of the SACPDE framework or does not have a relevant effect (see e.g. Schwefel or Rotated Schwefel in 100 dimensions, where the performance trends almost completely coincide). The scale factor local search seems, for the 54 test problems considered in this paper, to never jeopardize the performance of the evolutionary framework.

5 Conclusion
This paper proposes the scale factor local search differential
evolution (SFLSDE). The SFLSDE is a memetic algorithm
composed of a differential evolution based evolutionary
framework and two simple local search algorithms. These


Fig. 11 Performance trends in 100 dimensions (panels a–g)

local search algorithms operate on the scale factor in order to generate offspring with a high performance during the mutation process.
Despite its simplicity, the proposed algorithm shows a remarkable performance over a wide and varied set of test problems. This robustness seems to hold in both low and high dimensional cases, even where DE based algorithms are likely to fail. The SFLSDE has a rather good performance in terms of convergence speed and an excellent performance in terms of detection of high quality solutions. For some test problems the SFLSDE significantly outperforms modern and efficient metaheuristics.


It should be remarked that the proposed memetic algorithm performs the local search on the scale factor, and thus on one parameter, regardless of the dimensionality of the problem. This kind of hybridization seems to be very efficient at enhancing the offspring generation and to have a dramatic impact on stagnation prevention in the differential evolution framework. More specifically, the improved solutions seem to be beneficial in refreshing the genotypes and assisting the global search during the optimization process.
A future development of this work will aim at further investigating and modifying the employment of the scale factor local search, with a focus on large scale problems.

References
1. Moscato P, Norman M (1989) A competitive and cooperative approach to complex combinatorial search. Technical Report 790
2. Moscato P (1989) On evolution, search, optimization, genetic algorithms and martial arts: towards memetic algorithms. Technical Report 826
3. Krasnogor N, Blackburne B, Burke E, Hirst J (2002) Multimeme algorithms for protein structure prediction. In: Proceedings of parallel problem solving from nature VII. Lecture notes in computer science. Springer, Berlin
4. Krasnogor N (2002) Studies in the theory and design space of memetic algorithms. Ph.D. thesis, University of West England
5. Ong YS, Keane AJ (2004) Meta-Lamarckian learning in memetic algorithms. IEEE Trans Evol Comput 8(2):99–110
6. Krasnogor N (2004) Toward robust memetic algorithms. In: Hart WE, Krasnogor N, Smith JE (eds) Recent advances in memetic algorithms. Studies in fuzziness and soft computing. Springer, Berlin, pp 185–207
7. Ong YS, Lim MH, Zhu N, Wong KW (2006) Classification of adaptive memetic algorithms: a comparative study. IEEE Trans Syst Man Cybern B 36(1):141–152
8. Caponio A, Cascella GL, Neri F, Salvatore N, Sumner M (2007) A fast adaptive memetic algorithm for on-line and off-line control design of PMSM drives. IEEE Trans Syst Man Cybern B 37(1):28–41
9. Neri F, Toivanen J, Mäkinen RAE (2007) An adaptive evolutionary algorithm with intelligent mutation local searchers for designing multidrug therapies for HIV. Appl Intell 27:219–235
10. Neri F, Toivanen J, Cascella GL, Ong YS (2007) An adaptive multimeme algorithm for designing HIV multidrug therapies. IEEE/ACM Trans Comput Biol Bioinform 4(2):264–278
11. Tirronen V, Neri F, Kärkkäinen T, Majava K, Rossi T (2007) A memetic differential evolution in filter design for defect detection in paper production. In: Applications of evolutionary computing. Lecture notes in computer science, vol 4448. Springer, Berlin, pp 320–329
12. Tirronen V, Neri F, Kärkkäinen T, Majava K, Rossi T (2008) An enhanced memetic differential evolution in filter design for defect detection in paper production. Evol Comput 16:529–555
13. Tang J, Lim MH, Ong YS (2006) Parallel memetic algorithm with selective local search for large scale quadratic assignment problems. Int J Innov Comput Inf Control 2(6):1399–1416
14. Tang J, Lim MH, Ong YS (2007) Diversity-adaptive parallel memetic algorithm for solving large scale combinatorial optimization problems. Soft Comput 11(9):873–888



15. Ishibuchi H, Yoshida T, Murata T (2003) Balance between genetic search and local search in memetic algorithms for multiobjective permutation flow shop scheduling. IEEE Trans Evol Comput 7:204–223
16. Ishibuchi H, Hitotsuyanagi Y, Nojima Y (2007) An empirical study on the specification of the local search application probability in multiobjective memetic algorithms. In: Proceedings of the IEEE congress on evolutionary computation, September 2007, pp 2788–2795
17. Krasnogor N, Smith J (2005) A tutorial for competent memetic algorithms: model, taxonomy, and design issues. IEEE Trans Evol Comput 9:474–488
18. Wolpert D, Macready W (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
19. Caponio A, Neri F, Tirronen V (2008) Super-fit control adaptation in memetic differential evolution frameworks. Soft Comput (in press)
20. Brest J, Greiner S, Bošković B, Mernik M, Žumer V (2006) Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Trans Evol Comput 10(6):646–657
21. Price KV, Storn R, Lampinen J (2005) Differential evolution: a practical approach to global optimization. Springer, Berlin
22. Lampinen J, Zelinka I (2000) On stagnation of the differential evolution algorithm. In: Ošmera P (ed) Proceedings of the 6th international Mendel conference on soft computing, pp 76–83
23. Storn R, Price K (1995) Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, ICSI
24. Qin AK, Suganthan PN (2005) Self-adaptive differential evolution algorithm for numerical optimization. In: Proceedings of the IEEE congress on evolutionary computation, vol 2, pp 1785–1791
25. Eiben AE, Smith JE (2003) Introduction to evolutionary computing. Springer, Berlin
26. Storn R, Price K (1997) Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359
27. Liu J, Lampinen J (2002) A fuzzy adaptive differential evolution algorithm. In: Proceedings of the 17th IEEE region 10 international conference on computer, communications, control and power engineering, vol I, pp 606–611
28. Price K, Storn R (1997) Differential evolution: a simple evolution strategy for fast optimization. Dr Dobb's J Softw Tools 22(4):18–24
29. Zielinski K, Weitkemper P, Laur R, Kammeyer K-D (2006) Parameter study for differential evolution using a power allocation problem including interference cancellation. In: Proceedings of the IEEE congress on evolutionary computation, pp 1857–1864
30. Gämperle R, Müller SD, Koumoutsakos P (2002) A parameter study for differential evolution. In: Proceedings of the conference on neural networks and applications (NNA), fuzzy sets and fuzzy systems (FSFS) and evolutionary computation (EC), WSEAS, pp 293–298
31. Liu J, Lampinen J (2002) On setting the control parameter of the differential evolution algorithm. In: Proceedings of the 8th international Mendel conference on soft computing, pp 11–18
32. Ali MM, Törn A (2004) Population set based global optimization algorithms: some modifications and numerical studies. Comput Oper Res 31:1703–1725
33. Rechenberg I (1973) Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog Verlag
34. Rahnamayan S, Tizhoosh HR, Salama MM (2008) Opposition-based differential evolution. IEEE Trans Evol Comput 12(1):64–79
35. Noman N, Iba H (2008) Accelerating differential evolution using an adaptive local search. IEEE Trans Evol Comput 12(1):107–125
