
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 40, NO. 6, DECEMBER 2010 1555

SamACO: Variable Sampling Ant Colony Optimization Algorithm for Continuous Optimization

Xiao-Min Hu, Student Member, IEEE, Jun Zhang, Senior Member, IEEE, Henry Shu-Hung Chung, Senior Member, IEEE, Yun Li, Member, IEEE, and Ou Liu

Abstract—An ant colony optimization (ACO) algorithm offers algorithmic techniques for optimization by simulating the foraging behavior of a group of ants to perform incremental solution construction and to realize a pheromone laying-and-following mechanism. Although ACO was first designed for solving discrete (combinatorial) optimization problems, the ACO procedure is also applicable to continuous optimization. This paper presents a new way of extending ACO to continuous optimization problems by focusing on continuous variable sampling as the key to transforming ACO from discrete optimization to continuous optimization. The proposed SamACO algorithm consists of three major steps: the generation of candidate variable values for selection, the ants' solution construction, and the pheromone update process. The distinct characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient incremental solution construction method based on the sampled values. The performance of SamACO is tested using continuous numerical functions with unimodal and multimodal features. Compared with some state-of-the-art algorithms, including traditional ant-based algorithms and representative computational intelligence algorithms for continuous optimization, the performance of SamACO is competitive and promising.

Index Terms—Ant algorithm, ant colony optimization (ACO), ant colony system (ACS), continuous optimization, function optimization, local search, numerical optimization.

I. INTRODUCTION

SIMULATING the foraging behavior of ants in nature, ant colony optimization (ACO) algorithms [1], [2] are a class of swarm intelligence algorithms originally developed to solve discrete (combinatorial) optimization problems, such as traveling salesman problems [3], [4], multiple knapsack problems [5], network routing problems [6], [7], scheduling problems [8]–[10], and circuit design problems [11]. When solving these problems, pheromones are deposited by ants on nodes or links connecting the nodes in a construction graph [2]. Here, the ants in the algorithm represent stochastic constructive procedures for building solutions. The pheromone, which is used as a metaphor for an odorous chemical substance that real ants deposit and smell while walking, has similar effects on biasing the ants' selection of nodes in the algorithm. Each node represents a candidate component value, which belongs to a finite set of discrete decision variables. Based on the pheromone values, the ants in the algorithm probabilistically select the component values to construct solutions.

Manuscript received February 20, 2009; revised July 5, 2009, November 2, 2009, and January 29, 2010; accepted February 3, 2010. Date of publication April 5, 2010; date of current version November 17, 2010. This work was supported in part by the National Natural Science Foundation of China Joint Fund with Guangdong under Key Project U0835002, by the National High-Technology Research and Development Program ("863" Program) of China 2009AA01Z208, and by the Sun Yat-Sen Innovative Talents Cultivation Program for Excellent Tutors 35000-3126202. This paper was recommended by Associate Editor H. Ishibuchi.

X.-M. Hu and J. Zhang are with the Department of Computer Science, Sun Yat-Sen University, Guangzhou 510275, China, and also with the Key Laboratory of Digital Life (Sun Yat-Sen University), Ministry of Education, Guangzhou 510275, China (e-mail: junzhang@ieee.org).

H. S.-H. Chung is with the Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong.

Y. Li is with the Department of Electronics and Electrical Engineering, University of Glasgow, G12 8LT Glasgow, U.K.

O. Liu is with the School of Accounting and Finance, Hong Kong Polytechnic University, Kowloon, Hong Kong.

Digital Object Identifier 10.1109/TSMCB.2010.2043094

For continuous optimization, however, decision variables are defined in the continuous domain, and, hence, the number of possible candidate values would be infinite for ACO. Therefore, how to utilize pheromones in the continuous domain for guiding the ants' solution construction is an important problem to solve in extending ACO to continuous optimization. According to the uses of the pheromones, there are three types of ant-based algorithms for solving continuous optimization problems in the literature.

The first type does not use pheromones but uses other forms of implicit or explicit cooperation. For example, API (named after Pachycondyla APIcalis) [12] simulates the foraging behavior of Pachycondyla apicalis ants, which use visual landmarks but not pheromones to memorize the positions and search the neighborhood of the hunting sites.

The second type of ant-based algorithms places pheromones on points in the search space. Each point is, in effect, a complete solution, indicating a region for the ants to perform local neighborhood search. This type of ant-based algorithm generally hybridizes with other algorithms for maintaining diversity. The continuous ACO (CACO) [13]–[15] is a combination of the ants' pheromone mechanism and a genetic algorithm. The continuous interacting ant colony algorithm proposed by Dréo and Siarry [16] uses both pheromone information and the ants' direct communications to accelerate the diffusion of information. The continuous orthogonal ant colony (COAC) algorithm proposed by Hu et al. [17] adopts an orthogonal design method and a global pheromone modulation strategy to enhance the search accuracy and efficiency. Other methods, such as hybridizing a Nelder–Mead simplex algorithm [18] and using a discrete encoding [19], have also been proposed. Since pheromones are associated with entire solutions instead of components in this type of algorithm, no incremental solution construction is performed during the optimization process.

The third type of algorithms follows the ACO framework, i.e., the ants in the algorithms construct solutions incrementally biased by pheromones on components. Socha [20] extended


the traditional ACO for solving both continuous and mixed discrete–continuous optimization problems. Socha and Dorigo [21] later improved their algorithm and referred to the resultant algorithm as ACO_R, where an archive was used to preserve the k best solutions found thus far. Each solution variable value in the archive is considered as the center of a Gaussian probability density function (PDF). Pheromones in ACO_R are implicit in the generation of Gaussian PDFs. When constructing a solution, a new variable value is generated according to the Gaussian distribution with a selected center and a computed variance. The fundamental idea of ACO_R is the shift from using a discrete probability distribution to using a continuous PDF. The sampling behavior of ACO_R is a kind of probabilistic sampling, which samples a PDF [21]. Similar realizations of this type are also reported in [22]–[28].

Different from sampling a PDF, the SamACO algorithm proposed in this paper is based on the idea of sampling candidate values for each variable and selecting the values to form solutions. The motivation for this paper is that a solution of a continuous optimization problem is, in effect, a combination of feasible variable values, which can be regarded as a solution "path" walked by an ant. The traditional ACO is good at selecting promising candidate component values to form high-quality solutions. Without losing this advantage of the traditional ACO, SamACO develops a means to sample promising variable values from the continuous domain and to use pheromones on the candidate variable values to guide the ants' solution construction.

Distinctive characteristics of SamACO are the cooperation of a novel sampling method for discretizing the continuous search space and an efficient method for incremental solution construction based on the sampled variable values. In SamACO, the sampling method balances memory, exploration, and exploitation. By preserving variable values from the best solutions constructed by the previous ants, promising variable values are inherited from the last iteration. Diversity of the variable values is maintained by exploring a small number of random variable values. High solution accuracy is achieved by exploiting new variable values surrounding the best-so-far solution through a dynamic exploitation process. If a high-quality solution is constructed by the ants, the corresponding variable values receive additional pheromones, so that later ants are attracted to select these values again.

Differences between the framework of ACO in solving discrete optimization problems (DOPs) and the proposed SamACO in solving continuous optimization problems will be detailed in Section II. The performance of SamACO in solving continuous optimization problems will be validated by testing benchmark numerical functions with unimodal and multimodal features. The results are compared not only with the aforementioned ant-based algorithms but also with some representative computational intelligence algorithms, e.g., comprehensive learning particle swarm optimization (CLPSO) [29], fast evolutionary programming (FEP) [30], and evolution strategy with covariance matrix adaptation (CMA-ES) [31].

The rest of this paper is organized as follows. Section II first presents the traditional ACO framework for discrete optimization. The SamACO framework is then introduced in a more general way. Section III describes the implementation of the proposed SamACO algorithm for continuous optimization. Parameter analysis of the proposed algorithm is made in Section IV. Numerical experimental results are presented in Section V for analyzing the performance of the proposed algorithm. Conclusions are drawn in Section VI.

Fig. 1. Framework of the traditional ACO for discrete optimization.

II. ACO FRAMEWORK FOR DISCRETE AND CONTINUOUS OPTIMIZATION

A. Traditional ACO Framework for Discrete Optimization

Before introducing the traditional ACO framework, a discrete minimization problem is first defined as in [2] and [21].

Definition 1: A discrete minimization problem Π is denoted as (S, f, Ω), where

• S is the search space defined over a finite set of discrete decision variables (solution components) X_i with values x_i^(j) ∈ D_i = {x_i^(1), x_i^(2), ..., x_i^(|D_i|)}, i = 1, 2, ..., n, with n being the number of decision variables.

• f : S → ℝ is the objective function. Each candidate solution x ∈ S has an objective function value f(x). The minimization problem is to search for a solution x* ∈ S that satisfies f(x*) ≤ f(x), ∀x ∈ S.

• Ω is the set of constraints that the solutions in S must satisfy.

The most significant characteristic of ACO is the use of pheromones and the incremental solution construction [2]. The basic framework of the traditional ACO for discrete optimization is shown in Fig. 1. Besides the initialization step, the traditional ACO is composed of the ants' solution construction, an optional local search, and the pheromone update. The three processes iterate until the termination condition is satisfied.

When solving DOPs, solution component values or the links between the values are associated with pheromones. If component values are associated with pheromones, a component-pheromone matrix M, given by (1), can be generated. The pheromone value τ_i^(j) reflects the desirability of adding the component value x_i^(j) to the solution, j = 1, 2, ..., |D_i|, i = 1, 2, ..., n.
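In code, such a component-pheromone matrix can be carried as one (value, pheromone) table per decision variable. The sketch below is illustrative only; the variable names and the initial pheromone level T0 = 0.1 are assumptions for the example, not values prescribed at this point in the paper:

```python
# One (value, pheromone) table per decision variable, mirroring matrix (1).
# The domain sizes |D_i| may differ per variable, so ragged lists are used.
T0 = 0.1  # assumed initial pheromone level for this illustration

domains = [[0.0, 0.5, 1.0], [2.0, 4.0]]       # D_1 and D_2: candidate values
pheromone = [[T0] * len(d) for d in domains]  # tau_i^(j), one per value

# An ant considering value j of variable i reads pheromone[i][j]:
desirability = pheromone[0][2]  # tau_1^(3), paired with x_1^(3) = 1.0
```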

If the links between the values are associated with pheromones, each ⟨x_i^(j), x_u^(l)⟩ tuple will be assigned a pheromone value τ_(i,u)^(j,l), j = 1, 2, ..., |D_i|, l = 1, 2, ..., |D_u|, i, u = 1, 2, ..., n, i ≠ u. Since the treatment of placing pheromones on links or components is similar, the rest of this paper will focus on the case where pheromones are on components.

As the number of component values is finite in DOPs, the pheromone update can be directly applied to the τ_i^(j) in (1) for increasing or reducing the attractions of the corresponding component values to the ants. The realization of the pheromone update process is the main distinction among different ACO variants in the literature, such as the ant system (AS) [3], the rank-based AS (AS_rank) [32], the Max-Min AS [33], and the ant colony system (ACS) [4].

B. Extended ACO Framework for Continuous Optimization

Different from DOPs, a continuous optimization problem is defined as follows [2], [21].

Definition 2: A continuous minimization problem Π is denoted as (S, f, Ω), where

• S is the search space defined over a finite set of continuous decision variables (solution components) X_i, i = 1, 2, ..., n, with values x_i ∈ [l_i, u_i], where l_i and u_i represent the lower and upper bounds of the decision variable X_i, respectively.

• the definitions of f and Ω are the same as in Definition 1.

The difference from the DOP is that, in continuous optimization for ACO, the decision variables are defined in the continuous domain. Therefore, the traditional ACO framework needs to be duly modified.

In the literature, researchers such as Socha and Dorigo [21] proposed a method termed ACO_R to shift the discrete probability distribution in discrete optimization to a continuous probability distribution for solving the continuous optimization problem, using probabilistic sampling. When an ant in ACO_R constructs a solution, a Gram–Schmidt process is used for each variable to handle variable correlation. However, the calculation of the Gram–Schmidt process for each variable leads to a significantly higher computational demand. When the dimension of the objective function increases, the time used by ACO_R also increases rapidly. Moreover, although the correlation between different decision variables is handled, the algorithm may still converge to local optima, particularly when the values in the archive are very close to each other.

However, if a finite number of variable values are sampled from the continuous domain, the traditional ACO algorithms for discrete optimization can be used. This forms the basic idea of the SamACO framework proposed in this paper. The key to successful optimization now becomes how to sample promising variable values and use them to construct high-quality solutions.

Fig. 2. Framework of the proposed SamACO for continuous optimization.

III. PROPOSED SamACO ALGORITHM FOR CONTINUOUS OPTIMIZATION

Here, the detailed implementation of the proposed SamACO algorithm for continuous optimization is presented. The success of SamACO in solving continuous optimization problems depends on an effective variable sampling method and an efficient solution construction process. The variable sampling method in SamACO maintains promising variable values for the ants to select, including variable values selected by the previous ants, diverse variable values for avoiding trapping, and variable values with high accuracy. The ants' construction process selects promising variable values to form high-quality solutions by taking advantage of the traditional ACO method in selecting discrete components.

The SamACO framework for continuous optimization is shown in Fig. 2. Each decision variable X_i has k_i sampled values x_i^(1), x_i^(2), ..., x_i^(k_i) from the continuous domain [l_i, u_i], i = 1, 2, ..., n. The sampled discrete variable values are then used for optimization by a traditional ACO process as in solving DOPs. Fig. 3 illustrates the flowchart of the algorithm. The overall pseudocode of the proposed SamACO is shown in Appendix I.

A. Generation of the Candidate Variable Values

Fig. 3. Flowchart of the SamACO algorithm for continuous optimization.

M = ⎡ x_1^(1), τ_1^(1)            x_2^(1), τ_2^(1)            ···    x_n^(1), τ_n^(1)
    ⎢ x_1^(2), τ_1^(2)            x_2^(2), τ_2^(2)            ···    x_n^(2), τ_n^(2)
    ⎢       ⋮                            ⋮                     ⋱           ⋮
    ⎣ x_1^(|D_1|), τ_1^(|D_1|)    x_2^(|D_2|), τ_2^(|D_2|)    ···    x_n^(|D_n|), τ_n^(|D_n|) ⎦    (1)

The generation processes of the candidate variable values in the initialization step and in the optimization iterations are different. Initially, the candidate variable values are randomly sampled in the feasible domain as

    x_i^(j) = l_i + ((u_i − l_i)/(m + ϑ)) · (j − 1 + rand_i^(j))    (2)

where (m + ϑ) is the initial number of candidate values for each variable i, and rand_i^(j) is a uniform random number in [0, 1], i = 1, 2, ..., n, j = 1, 2, ..., m + ϑ.
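The stratified initialization in (2) can be sketched as follows. This is a minimal Python illustration, not the authors' implementation; the function name `init_candidates` and its signature are invented for the example:

```python
import random

def init_candidates(lower, upper, m, theta):
    """Stratified initial sampling of candidate variable values, as in (2).

    Each feasible interval [l_i, u_i] is split into m + theta equal strata,
    and one uniform random value is drawn inside each stratum, so the
    initial candidates spread over the whole domain of every variable.
    """
    k = m + theta
    candidates = []
    for l, u in zip(lower, upper):
        width = (u - l) / k
        # (j - 1 + rand) in (2), with j running from 1 to m + theta
        candidates.append([l + width * (j + random.random()) for j in range(k)])
    return candidates

# Two variables on [0, 10] and [-5, 5]; m = 20 ants, theta = 20 exploitations
values = init_candidates([0.0, -5.0], [10.0, 5.0], m=20, theta=20)
```

Because one value falls in each stratum, the candidates of every variable come out already sorted across the domain.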

During the optimization iterations, the candidate variable values have four sources, i.e., the variable values selected by the ants in the previous iteration, a dynamic exploitation, a random exploration, and the best-so-far solution. In each iteration, m ants construct m solutions, resulting in m candidate values for each variable for the next iteration. The best-so-far solution is then updated, representing the best solution that has ever been found. The dynamic exploitation is applied to the best-so-far solution, resulting in g_i new candidate variable values near the corresponding variable values of the best-so-far solution for each variable X_i. Furthermore, a random exploration process generates Θ new values for each variable by discarding the worst Θ solutions that were constructed by the ants in the previous iteration. Suppose that the worst Θ solutions are denoted by x^(m−Θ+1), x^(m−Θ+2), ..., x^(m). The new variable values for the solution x^(j) are randomly generated as

    x_i^(j) = l_i + (u_i − l_i) · rand_i^(j)    (3)

where i = 1, 2, ..., n and j = m − Θ + 1, ..., m. New values can thus be imported into the value group. To summarize, the composition of the candidate variable values for the ants to select is illustrated in Fig. 4. There are a total of (m + g_i + 1) candidate values for each variable X_i.

Fig. 4. Composition of the candidate variable values for the ants to select.

B. Dynamic Exploitation Process

The dynamic exploitation proposed in this paper is effective for introducing fine-tuned variable values into the variable value group. We use a radius r_i to confine the neighborhood exploitation of the variable value x_i, i = 1, 2, ..., n.

The dynamic exploitation is applied to the best-so-far solution x^(0) = (x_1^(0), x_2^(0), ..., x_n^(0)), aiming at searching the vicinity of the variable value x_i^(0) in the interval [x_i^(0) − r_i, x_i^(0) + r_i], i = 1, 2, ..., n. The values of the variables in the best-so-far solution are randomly selected to be increased, unchanged, or reduced as

    x̂_i = ⎧ min(x_i^(0) + r_i σ_i, u_i),   0 ≤ q < 1/3
          ⎨ x_i^(0),                       1/3 ≤ q < 2/3
          ⎩ max(x_i^(0) − r_i σ_i, l_i),   2/3 ≤ q < 1     (4)

where σ_i ∈ (0, 1] and q ∈ [0, 1) are uniform random values, i = 1, 2, ..., n. Then, the resulting solution x̂ = (x̂_1, x̂_2, ..., x̂_n) is evaluated. If the new solution is no worse than the recorded best-so-far solution, we replace the best-so-far solution with the new solution. The above exploitation repeats ϑ times. The new variable values that are generated by increasing or reducing a random step length are recorded as x_i^(j), j = m + 1, m + 2, ..., m + g_i, i = 1, 2, ..., n, where g_i counts the number of new variable values in the dynamic exploitation process.

In each iteration, the radiuses adaptively change based on the exploitation result. If the best exploitation solution is no worse than the original best-so-far solution (case 1), the radiuses will be extended. Otherwise (case 2), the radiuses will be reduced, i.e.,

    r_i ← ⎧ r_i · v_e,   case 1
          ⎩ r_i · v_r,   case 2     (5)

where v_e (v_e ≥ 1) is the radius extension rate, and v_r (0 < v_r ≤ 1) is the radius reduction rate. The initial radius value is set as r_i = (u_i − l_i)/(2m), i = 1, 2, ..., n. The extension of the radiuses can import values from farther away from the original value, whereas the reduction of the radiuses can generate values with high accuracy near the original value. The extension and the reduction of radiuses aim at introducing values with different accuracy levels according to the optimization situation.

Fig. 5. Illustration of two solutions constructed by ant a and ant b.
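Steps (4) and (5) together might look like the following hedged Python sketch. The function `dynamic_exploitation` and its signature are invented, and the per-variable recording of the g_i new values is simplified relative to the paper's bookkeeping:

```python
import random

def dynamic_exploitation(best, f, lower, upper, radii, theta, v_e, v_r):
    """Neighborhood exploitation of the best-so-far solution, as in (4)-(5).

    Repeats theta times: each variable is randomly increased, left unchanged,
    or decreased within its radius r_i (clipped to the bounds); a trial that
    is no worse replaces the best-so-far solution. Afterwards every radius is
    extended by v_e if a no-worse trial was found, else reduced by v_r.
    """
    best = list(best)
    best_f = f(best)
    found_no_worse = False
    new_values = [[] for _ in best]  # recorded exploitation values per variable
    for _ in range(theta):
        trial = list(best)
        for i in range(len(best)):
            q = random.random()
            sigma = 1.0 - random.random()  # uniform in (0, 1]
            if q < 1.0 / 3.0:
                trial[i] = min(best[i] + radii[i] * sigma, upper[i])
                new_values[i].append(trial[i])
            elif q >= 2.0 / 3.0:
                trial[i] = max(best[i] - radii[i] * sigma, lower[i])
                new_values[i].append(trial[i])
        if f(trial) <= best_f:
            best, best_f, found_no_worse = trial, f(trial), True
    factor = v_e if found_no_worse else v_r
    return best, [r * factor for r in radii], new_values
```

With v_e = 1/v_r (as set later in Section IV), extension exactly reverses one reduction step.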

C. Ants' Solution Construction

After the candidate variable values are determined, m ants are dispatched to construct solutions. Each candidate variable value is associated with pheromones, which bias the ants' selection for solution construction. The index l_i^(k) of the variable value selected by ant k for the ith variable is

    l_i^(k) = ⎧ arg max_j {τ_i^(1), τ_i^(2), ..., τ_i^(m)},   if q < q_0
              ⎩ L_i^(k),                                      otherwise     (6)

where q is a uniform random value in [0, 1), i = 1, 2, ..., n, k = 1, 2, ..., m. The parameter q_0 ∈ [0, 1) controls whether an ant will choose the variable value with the highest pheromone from the m solutions generated in the previous iteration, or randomly choose an index L_i^(k) ∈ {0, 1, ..., m + g_i} according to the probability distribution given by

    p_i^(j) = τ_i^(j) / Σ_{u=0}^{m+g_i} τ_i^(u),   j = 0, 1, ..., m + g_i.    (7)

The constructed solution of ant k is denoted x^(k) = (x_1^(l_1^(k)), x_2^(l_2^(k)), ..., x_n^(l_n^(k))). An illustration of two ants constructing solutions is shown in Fig. 5. The variable values that are selected by each ant form a solution "path." Pheromones will then be adjusted according to the quality of these solution paths.
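The pseudo-random proportional rule in (6) and (7) can be sketched for a single variable as follows (an illustrative snippet; `select_index` is an invented name, and the indexing convention 0..m+g_i matches the text, with indices 1..m holding the previously selected values):

```python
import random

def select_index(tau, m, q0):
    """Pseudo-random proportional value selection for one variable, as in (6)-(7).

    tau: pheromones on the candidate values, indexed 0..m+g_i.
    With probability q0 the ant greedily takes the highest-pheromone value
    among the m previously selected ones; otherwise it draws an index from
    the whole pool with probability proportional to pheromone, as in (7).
    """
    if random.random() < q0:
        return max(range(1, m + 1), key=lambda j: tau[j])  # greedy branch of (6)
    threshold = random.random() * sum(tau)  # roulette-wheel sampling of (7)
    acc = 0.0
    for j, t in enumerate(tau):
        acc += t
        if acc > threshold:
            return j
    return len(tau) - 1
```

Calling this once per variable i yields the index tuple (l_1^(k), ..., l_n^(k)) that forms ant k's solution path.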

D. Pheromone Update

Initially, each candidate variable value is assigned an initial pheromone value T_0. After evaluating the m solutions constructed by ants, the solutions are sorted by their objective function values from the best to the worst. Suppose that the sorted solutions are arranged as x^(1), x^(2), ..., x^(m). The pheromones on the selected variable values are evaporated as

    τ_i^(j) ← (1 − ρ) τ_i^(j) + ρ T_min    (8)

where 0 < ρ < 1 is the pheromone evaporation rate, and T_min is the predefined minimum pheromone value, i = 1, 2, ..., n, j = 1, 2, ..., m.

The variable values in the best Ψ solutions have their pheromones reinforced as

    τ_i^(j) ← (1 − α) τ_i^(j) + α T_max    (9)

where 0 < α < 1 is the pheromone reinforcement rate, and T_max is the predefined maximum pheromone value, i = 1, 2, ..., n, j = 1, 2, ..., Ψ, with Ψ being the number of high-quality solutions that receive additional pheromones on the variable values. The pheromones on the variable values that are generated by the exploration process and the dynamic exploitation process are assigned as T_0. In each iteration, pheromone values on the variable values of the best-so-far solution are set equal to the pheromone values on the iteration-best solution.
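The evaporation and reinforcement steps (8) and (9) might be coded as below. This is a sketch covering only those two updates (the T_0 assignment for fresh values and the best-so-far copy are omitted); `update_pheromones` and its argument layout are invented:

```python
def update_pheromones(tau, selected, ranked_ants, rho, alpha, t_min, t_max, psi):
    """Pheromone update on the selected candidate values, as in (8) and (9).

    tau[i][j]: pheromone on variable i's j-th candidate value.
    selected[k][i]: the index ant k chose for variable i.
    ranked_ants: ant indices ordered from best to worst objective value.
    Every selected value is evaporated toward t_min by (8); values in the
    best psi solutions are then reinforced toward t_max by (9).
    """
    n = len(tau)
    for k in ranked_ants:            # evaporation, eq. (8)
        for i in range(n):
            j = selected[k][i]
            tau[i][j] = (1.0 - rho) * tau[i][j] + rho * t_min
    for k in ranked_ants[:psi]:      # elitist reinforcement, eq. (9)
        for i in range(n):
            j = selected[k][i]
            tau[i][j] = (1.0 - alpha) * tau[i][j] + alpha * t_max
    return tau

# One variable, two candidate values, two ants; ant 0 is the better one.
tau = update_pheromones([[0.1, 0.1]], [[0], [1]], [0, 1],
                        rho=0.5, alpha=0.3, t_min=0.1, t_max=1.0, psi=1)
```

Both rules are convex combinations, so every pheromone stays inside [T_min, T_max] once it starts there.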

IV. PARAMETER STUDY OF SamACO

The proposed SamACO involves several parameters. Here, we investigate the effects of these parameters on the proposed algorithm.

A. Effects of the Parameters in SamACO

1) Discarding Number Θ: The discarding number Θ ≥ 1 is used for maintaining diversity in the candidate variable values. In each iteration, the random exploration process replaces the Θ worst solutions constructed by ants. The more solutions are replaced, the higher the diversity in the candidate variable values. Diversity is important for avoiding stagnation, but a high discarding rate can slow down the convergence speed.

2) Elitist Number Ψ: Different from the discarding number Θ, the elitist number Ψ determines how many of the best solutions constructed by ants receive additional pheromones. The variable values with extra pheromone deposits have higher chances of being selected by ants. Therefore, the elitist number Ψ helps preserve promising variable values and accelerates the convergence speed.

Neither Θ nor Ψ should be set too large. In fact, a small discarding number and a small elitist number are sufficient because their effects accumulate over iterations.

3) Traditional Parameters in ACO: Similar to the ACS [4], the parameters of SamACO that simulate the ants' construction behavior include the number of ants m, the parameter q_0, the pheromone evaporation rate ρ, the pheromone reinforcement rate α, the initial pheromone value T_0, and the lower and upper limits of pheromone values T_min and T_max. In the literature, a great deal of work has been carried out on the parametric study for ACO, such as [4] and [34]–[37]. In Section IV-B, we will use experiments to find suitable settings for these parameters.

4) Parameters in Dynamic Exploitation: The parameters in the dynamic exploitation include the exploitation frequency ϑ and the radius reduction and extension rates v_r and v_e, respectively. The exploitation frequency ϑ controls the number of values to be sampled in the neighborhood of the best-so-far solution per iteration. Since each candidate variable value group is mainly composed of the m variable values that are selected by the previous ants and the variable values from the local exploitation, the value of ϑ influences the selection probabilities of the m variable values. Furthermore, a large ϑ provides more candidate variable values surrounding the neighborhood of the best-so-far solution, but it may induce premature convergence. On the contrary, a small ϑ may not sample enough local variable values, and, thus, the speed of approaching the local optimum is slow. On the other hand, the radius reduction and extension rates v_r and v_e adapt the exploitation neighborhood range according to the solution quality. Because the radius extension is a reversal operation of the radius reduction, we set v_e = 1/v_r in this paper.

Fig. 6. Contour illustration of using different values of m and ϑ on f_6. The contour represents the average number of FEs used for achieving an accuracy level smaller than ε = 1, with the other parameters set as default.

B. Numerical Analysis on Parameter Settings

Some of the SamACO parameters have default settings. They are set as T_min = T_0 = 0.1, T_max = 1.0, Θ = 1, and Ψ = 1. There is generally no need to change these values. However, the performance of the proposed algorithm is more sensitive to the other parameters.

According to our prior experiments on a set of unimodal and multimodal functions (listed in Appendix II) with n = 30, the parameters m = 20, ϑ = 20, ρ = 0.5, α = 0.3, q_0 = 0.1, and v_r = 0.7 are used for comparing the performance of SamACO with other algorithms. In the remainder of this section, we undertake a numerical experiment on f_6 (the shifted Schwefel's problem 1.2) to discuss the influence of these parameters. The algorithm terminates when the number of function evaluations (FEs) reaches 300 000. Each parameter setting is independently tested 25 times.

1) Correlations Between Ant Number m and Exploitation Frequency ϑ: The correlations between m = 1, 4, 8, 12, ..., 40 and ϑ = 0, 1, 2, 3, 4, 8, 12, ..., 40 have been tested by fixing the values of the other parameters. Fig. 6 shows a contour graph representing the average number of FEs used for achieving an accuracy level smaller than ε = f(x) − f(x*) = 1, where f(x) and f(x*) denote the solution value and the optimal solution value, respectively. The smaller the number of FEs used, the better the performance of the proposed algorithm. It can be observed in Fig. 6 that SamACO is more sensitive to m than to ϑ. When the value of ϑ is fixed, a larger m consumes more FEs. Furthermore, when ϑ = 0 or 1 (not shown in Fig. 6), SamACO cannot achieve any satisfactory results within the maximum number of FEs (300 000) using any value of m. Therefore, a sufficient amount of dynamic exploitation is crucial to achieving solutions with high accuracy.

2) Radius Reduction Rate v_r: The values v_r = 0, 0.1, 0.2, ..., 1 have been tested, with the other parameter values fixed. Fig. 7(a) illustrates a curve representing the average number of FEs used with different values of v_r on f_6. The curve shows a significant decreasing trend as v_r increases from 0 to 1. When v_r = 0.7, the average number of FEs used is minimized. With a larger value of v_r, the reduction of the radiuses is slower, so that promising variable values are less likely to be omitted when exploiting the neighborhood. When v_r = 0, the neighborhood radius shrinks to 0. When v_r = v_e = 1, the neighborhood radius is fixed at its initial value, which is too large to obtain solutions within the predefined accuracy level. In both of these cases, SamACO cannot find satisfactory solutions.

3) Traditional ACO Parameters q_0, ρ, and α: The values 0, 0.1, 0.2, . . . , 1 have been assigned to q_0, ρ, and α, respectively, to analyze their influence on SamACO. The curves showing the average number of FEs used with different values of q_0, ρ, and α are given in Fig. 7(b)-(d). It can be observed in Fig. 7(b) that an inclination toward selecting the variable value with the largest pheromone among the m variable values [as in (6)] is beneficial, because SamACO uses fewer FEs when q_0 = 0.2 and 0.1 than when q_0 = 0. However, the value of q_0 cannot be too large; otherwise, the algorithm is trapped in local optima and needs a long time to escape. From Fig. 7(c) and (d), the curves of ρ and α show similar trends. The algorithm performs unsatisfactorily when ρ = 0 or α = 0, in which case the local or global pheromone update has no effect. Therefore, pheromone updates are quite useful for the optimization process. To facilitate practical applications of SamACO to various problems, Table I lists the domains and default settings (denoted by DS) of the parameters (P) and summarizes the sensitivity of SamACO based on the default settings.
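To make the roles of q_0, ρ, and α concrete, the following sketch shows the ACS-style selection and update rules that (6)-(9) refer to. The exact equations appear earlier in the paper; what follows is only the standard shape of such rules, with our own function names, so details such as the reinforcement increment delta are assumptions.

```python
import random

def select_index(pheromone, q0):
    # With probability q0, greedily take the candidate value with the
    # largest pheromone [cf. (6)]; otherwise sample an index with
    # probability proportional to pheromone [cf. (7)].
    if random.random() < q0:
        return max(range(len(pheromone)), key=lambda j: pheromone[j])
    r = random.uniform(0.0, sum(pheromone))
    acc = 0.0
    for j, tau in enumerate(pheromone):
        acc += tau
        if acc >= r:
            return j
    return len(pheromone) - 1

def evaporate(pheromone, rho):
    # Pheromone evaporation [cf. (8)]; rho = 0 disables it entirely.
    return [(1.0 - rho) * tau for tau in pheromone]

def reinforce(pheromone, used, alpha, delta=1.0):
    # Reinforce the candidate values used by the best solutions
    # [cf. (9)]; alpha = 0 disables the update entirely.
    out = list(pheromone)
    for j in used:
        out[j] = (1.0 - alpha) * out[j] + alpha * delta
    return out
```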

V. NUMERICAL EXPERIMENTS

A. Experimental Setup

Sixteen benchmark numerical functions have been tested in this paper to validate the performance of the proposed SamACO. Appendix II lists these functions, chosen from the literature [30], [40]. Among them, functions f_1 to f_9 are unimodal, whereas f_10 to f_16 are multimodal.

Three types of algorithms are used for comparing the performance of SamACO. The first type includes ant-based algorithms such as CACO [15], COAC [17], and ACO_R [21]. The second type is another well-known swarm intelligence paradigm besides ACO, namely, particle swarm optimization (PSO) [29], [38], [39]. Different from ACO, PSO was originally developed for continuous optimization. An improved variant of PSO, the CLPSO [29], is applied. The third type consists of representative algorithms that use an explicit probability-learning notion. In general, most computational intelligence algorithms have implicit or explicit probabilistic models that guide the optimization process. ACO uses pheromones as explicit guidance. Moreover, algorithms such as evolutionary programming and evolution strategy also learn from explicit probabilistic models for generating new solutions. Here, we use FEP [30] and CMA-ES [31] as representatives of the probability-learning algorithms.

HU et al.: SamACO: VARIABLE SAMPLING ACO ALGORITHM FOR CONTINUOUS OPTIMIZATION 1561

Fig. 7. Illustration of using different values of v_r, q_0, ρ, and α on f_6. The y-axis represents the average number of FEs used for achieving an accuracy level smaller than ε = 1, with the other parameters set as default.

TABLE I
SUMMARY OF PARAMETER SETTINGS IN SamACO

Except for CMA-ES, the algorithms compared with SamACO are programmed in C, according to the source code provided by the authors and the descriptions given in the references. CMA-ES is programmed in MATLAB.¹ The parameters of the algorithms are set to the recommended values according to the references. Notice that all of the algorithms terminate when the number of FEs reaches 300 000. Each function is independently tested 25 times.

B. Results and Comparisons

1) Comparison With Other Ant-Based Algorithms: Table II tabulates the mean error values f(x) − f(x*) and the corresponding standard deviations (St. dev) on f_1−f_16 for SamACO, CACO, COAC, and ACO_R when n = 30. A t-test is made based on the error values of 25 independent runs to show whether SamACO is significantly better or worse than the other algorithms. If the same zero error values are obtained for the compared algorithms (i.e., on f_1), the numbers of FEs required for achieving the zero error values are used by the t-test. A score of 1, 0, or −1 is added when SamACO is significantly better than (b†), indifferent to (i), or significantly worse than (w‡) the compared algorithm for each function, respectively. In the table, the total scores corresponding to the other ant-based algorithms are all positive, which means that the performance of SamACO is generally better than those algorithms.

Table III shows the average number of FEs required to achieve accuracy levels smaller than ε_1 = 1 and ε_2 = 10^−6. The symbol denotes that the results cannot reach the predefined accuracy level ε within the maximum of 300 000 FEs, whereas "%ok" stands for the success percentage over 25 independent runs. Only successful runs are used for calculating the average number of FEs, and only the functions that can be successfully solved by SamACO are presented in the table. It can be observed that the proposed SamACO performs the best among the four algorithms. SamACO not only has higher success percentages than the other ant-based algorithms but also uses a relatively small number of FEs to achieve solutions within the predefined accuracy levels. For example, when solving f_2 (the shifted Schwefel's P2.21), SamACO can find solutions within accuracy levels 1 and 10^−6 in all runs, whereas CACO, COAC, and ACO_R can only obtain solutions within the accuracy level 1 or 10^−6 in some runs.

¹The MATLAB code used, cmaes.m, Version 2.35, is available at http://www.bionik.tu-berlin.de/user/niko/formersoftwareversions.html.

1562 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 40, NO. 6, DECEMBER 2010

TABLE II
AVERAGE ERROR VALUES AT n = 30, AFTER 300 000 FEs BY SamACO, CACO, COAC, ACO_R, CLPSO, FEP, AND CMA-ES

TABLE III
AVERAGE NUMBER OF FEs REQUIRED TO ACHIEVE ACCURACY LEVELS SMALLER THAN ε_1 = 1 AND ε_2 = 10^−6 WHEN n = 30 FOR SamACO, CACO, COAC, ACO_R, CLPSO, FEP, AND CMA-ES

Fig. 8. Convergence graphs of the algorithms. The vertical axis is the average error value, and the horizontal axis is the number of FEs. (a) f_5. (b) f_10. (c) f_14. (d) f_15.
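The per-function scoring behind Table II can be sketched as follows. The paper's exact t-test settings (variant and significance level) are not restated in this excerpt, so the Welch statistic and the 5% critical value below are our assumptions:

```python
from statistics import mean, stdev

def t_test_score(errors_a, errors_b, t_crit=2.01):
    """Score +1 if algorithm A (e.g., SamACO) is significantly better
    than B on one function, -1 if significantly worse, and 0 if the
    difference is not significant. Uses Welch's t statistic on the
    per-run error values; t_crit ~ 2.01 approximates the 5% two-sided
    threshold for two samples of 25 runs."""
    na, nb = len(errors_a), len(errors_b)
    se = (stdev(errors_a) ** 2 / na + stdev(errors_b) ** 2 / nb) ** 0.5
    t = (mean(errors_a) - mean(errors_b)) / se
    if t < -t_crit:
        return 1    # significantly better (b†)
    if t > t_crit:
        return -1   # significantly worse (w‡)
    return 0        # indifferent (i)
```

The total score reported for each compared algorithm is then the sum of these per-function scores over the 16 functions.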

2) Comparison With CLPSO and Probability-Learning Algorithms: The scores in Table II show that SamACO generally performs better than CLPSO on the test functions. According to Table III, the average numbers of FEs used by SamACO are much smaller than those used by CLPSO. Furthermore, SamACO can find satisfactory solutions on more test functions than CLPSO.

On the other hand, CMA-ES achieves the smallest t-test total score among the compared algorithms presented in Table II. However, CMA-ES has unsatisfactory performance on functions such as f_10, f_11, f_15, and f_16. SamACO takes the second place, because the other algorithms all have positive total scores, which means that they are not better than SamACO. Table III shows that FEP generally needs more FEs to obtain satisfactory results. CMA-ES can obtain high-quality solutions very fast for most of the test functions.

Fig. 8 illustrates the convergence curves of SamACO and the compared algorithms. It can be seen that SamACO finds solutions with high accuracy very fast for these functions.
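For reference, the per-function statistics reported in Table III ("%ok" and the mean FEs over successful runs only) can be computed from the 25 per-run FE counts as follows; the convention of marking a failed run with None is ours:

```python
def fe_statistics(fes_per_run, max_fes=300_000):
    """Return (%ok, mean FEs over successful runs). A run counts as
    successful if it reached the accuracy level within the FE budget;
    failed runs are recorded as None. Failed runs are excluded from
    the mean, matching how Table III is described."""
    ok = [fe for fe in fes_per_run if fe is not None and fe <= max_fes]
    pct_ok = 100.0 * len(ok) / len(fes_per_run)
    mean_fes = sum(ok) / len(ok) if ok else None
    return pct_ok, mean_fes

runs = [50_000] * 20 + [None] * 5   # 20 of 25 runs succeeded
pct, avg = fe_statistics(runs)      # pct = 80.0, avg = 50000.0
```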

3) Analysis on the Computational Complexity of Algorithms: Following the methods in [40], the computational complexity of the algorithms discussed in this paper is computed as in Table IV. The computations of T0, T1, and T̂2 are described in [40]. The values of T1 and T̂2 are obtained on f_7 at n = 30 after 200 000 FEs. All of the time values are measured in CPU seconds. The results of SamACO, CACO, COAC, ACO_R, CLPSO, and FEP are obtained on a PC with a dual-core 2.0-GHz CPU and 1 GB of RAM, using Visual C++ 6.0. The results of CMA-ES are obtained on a dual 2.0-GHz CPU with 1 GB of RAM, using MATLAB 7.1.0.

TABLE IV
COMPUTATIONAL COMPLEXITY OF ALGORITHMS

Because it uses the Gram-Schmidt process, which is a computationally heavy operation, ACO_R has a much larger computational complexity than the other algorithms. In addition, algorithms that use complex distribution models (such as FEP and CMA-ES) generally require a longer computation time. The computational complexity of SamACO is modest, with a complexity value smaller than 3.
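The complexity value in Table IV follows the CEC 2005 measure from [40], which normalizes the algorithm's overhead by the machine's basic computing speed. A sketch of the final formula (the timing harness itself is specified in [40]):

```python
def cec2005_complexity(t0, t1, t2_hat):
    """Complexity measure (T^2 - T1) / T0 from [40]: t0 times a fixed
    arithmetic test loop, t1 times 200 000 bare function evaluations
    (here of f_7 at n = 30), and t2_hat is the mean runtime of the
    complete algorithm for the same number of FEs. All times are in
    CPU seconds, so the result is a dimensionless overhead ratio."""
    return (t2_hat - t1) / t0
```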

VI. CONCLUSION

An algorithm termed SamACO has been proposed for

solving continuous optimization problems in this paper. It demonstrates that, after determining candidate variable values,

the optimization procedure of ACO in solving DOPs is, in

effect, a subprocess in solving continuous optimization prob-

lems. Therefore, a way to sample promising variable values

in the continuous domain and to use pheromones to guide

ants’ construction behavior has been developed in SamACO

to extend ACO to continuous optimization. It has been shown

that SamACO can also be applied to both DOPs and mixed

discrete–continuous optimization problems. The novelties and

characteristics of SamACO are as follows.

• A new framework for ACO. The SamACO framework ex-

tends ACO to continuous optimization, using an effective

sampling method to discretize the continuous space, so

that the traditional ACO operations for selecting discrete

components can be used compatibly.

• A new sampling method for discretizing the continuous

search space. The sampling method aims at balancing

memory of previous ants, exploration for new variable

values, and exploitation for variable values with high

accuracy.

• Efﬁcient incremental solution construction based on the

sampled variable values. Each ant in SamACO constructs

solutions incrementally by selecting values from the sam-

pled variable values.

Some state-of-the-art algorithms have been used for comparison with the SamACO algorithm. They include ant-based algorithms for continuous optimization, swarm

intelligence algorithms, and representative probability-learning

algorithms. The experimental results show that the SamACO

algorithm can deliver good performance for both unimodal and

multimodal problems.

APPENDIX I
SamACO ALGORITHM

1) /* Initialization */
   Initialize parameter values as in Table I.
   For i := 1 to n do /* For each variable */
      r_i := (u_i − l_i)/(2m) /* Set the initial radius value */
      g_i := m + ϑ /* Set the number of candidate variable values for selection */
      For j := 1 to (m + ϑ) do
         Randomly sample variable values according to (2)
         τ_i^(j) := T_0 /* Assign the initial pheromone value */
      End-for
   End-for
   For j := 1 to (m + ϑ) do
      Evaluate the jth solution according to the objective function
   End-for
   Sort the m solutions from the best to the worst
   Record the best-so-far solution {x_1^(0), x_2^(0), . . . , x_n^(0)} and assign each variable i with τ_i^(0) := T_0
   Perform ants' solution construction as in the following Step 3)
   Perform pheromone update as in the following Step 4)

2) /* Generation of the candidate variable values */
   /* Perform a dynamic exploitation process on the best-so-far solution */
   For i := 1 to n do g_i := 1 End-for
   flag := False /* Indicates whether exploitation can find a better solution */
   For k := 1 to ϑ do /* Repeat ϑ times */
      For i := 1 to n do /* For each variable */
         Generate a value x̂_i according to (4)
         If (0 ≤ q < 1/3 or 2/3 ≤ q < 1)
            x_i^(m+g_i) := x̂_i /* Record the new variable value */
            τ_i^(m+g_i) := T_0 /* Assign the initial pheromone value */
            g_i := g_i + 1 /* Increase the counter by 1 */
         End-if
      End-for
      Evaluate the solution {x̂_1, x̂_2, . . . , x̂_n} according to the objective function
      If the resulting solution is no worse than the best-so-far solution
         For i := 1 to n do x_i^(0) := x̂_i End-for /* Update the best-so-far solution */
         flag := True
      End-if
   End-for
   If flag = True
      For i := 1 to n do r_i := r_i · v_e End-for
   Else
      For i := 1 to n do r_i := r_i · v_r End-for
   End-if
   /* Perform a random exploration process on the worst solutions */
   For j := m − Θ + 1 to m do
      For i := 1 to n do
         x_i^(j) := l_i + (u_i − l_i) · rand_i^(j)
         τ_i^(j) := T_0
      End-for
   End-for

3) /* Ants' solution construction */
   For i := 1 to n do
      For k := 1 to m do
         Ant k selects the index l_i^(k) according to (6) and (7)
         x̃_i^(k) := x_i^(l_i^(k)) /* Record the selected variable value */
         τ̃_i^(k) := τ_i^(l_i^(k)) /* Record the corresponding pheromone value */
      End-for
   End-for
   For k := 1 to m do
      For i := 1 to n do
         x_i^(k) := x̃_i^(k) /* Update the constructed solutions */
         τ_i^(k) := τ̃_i^(k)
      End-for
      Evaluate the kth solution according to the objective function
   End-for

4) /* Pheromone update */
   Sort the m solutions from the best to the worst
   Update the best-so-far solution
   Perform pheromone evaporation on the m solutions according to (8)
   Perform pheromone reinforcement on the best Ψ solutions according to (9)
   For i := 1 to n do τ_i^(0) := τ_i^(1) End-for

5) If (Termination_condition = True)
      Output the best solution
   Else
      Goto Step 2)
   End-if
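A condensed executable sketch of the above pseudocode is given below for orientation. It is not the reference implementation: uniform sampling stands in for (2) and (4), standard ACS-style selection and update rules stand in for (6)-(9), the random exploration of the Θ worst solutions is folded into a simple candidate-pruning step, and all parameter defaults are illustrative rather than the paper's tuned values.

```python
import random

def samaco_sketch(f, bounds, m=10, theta=5, psi=3, q0=0.2,
                  rho=0.1, alpha=0.3, t0=1.0, v_e=0.5, v_r=0.7,
                  iters=100):
    """Minimization sketch of the SamACO loop (Appendix I)."""
    n = len(bounds)
    radius = [(u - l) / (2 * m) for l, u in bounds]
    # per-variable candidate value lists and their pheromones
    cand = [[random.uniform(l, u) for _ in range(m)] for l, u in bounds]
    tau = [[t0] * m for _ in range(n)]
    best = [c[0] for c in cand]
    best_f = f(best)

    def select(i):
        if random.random() < q0:              # greedy choice, cf. (6)
            return max(range(len(tau[i])), key=lambda j: tau[i][j])
        r = random.uniform(0.0, sum(tau[i]))  # proportional, cf. (7)
        acc = 0.0
        for j, t in enumerate(tau[i]):
            acc += t
            if acc >= r:
                return j
        return len(tau[i]) - 1

    for _ in range(iters):
        # dynamic exploitation around the best-so-far solution
        improved = False
        for _ in range(theta):
            trial = [min(max(best[i] + random.uniform(-radius[i], radius[i]),
                             bounds[i][0]), bounds[i][1]) for i in range(n)]
            tf = f(trial)
            if tf <= best_f:
                best, best_f, improved = trial, tf, True
            for i in range(n):                # keep new values as candidates
                cand[i].append(trial[i])
                tau[i].append(t0)
        shrink = v_e if improved else v_r
        radius = [r_i * shrink for r_i in radius]
        # ants' solution construction from the sampled candidate values
        ants = sorted((f([cand[i][select(i)] for i in range(n)]),) + ()
                      for _ in range(0))      # placeholder removed below
        ants = []
        for _ in range(m):
            x = [cand[i][select(i)] for i in range(n)]
            ants.append((f(x), x))
        ants.sort(key=lambda p: p[0])
        if ants[0][0] <= best_f:
            best_f, best = ants[0]
        # pheromone evaporation, then reinforcement of the best psi ants
        for i in range(n):
            tau[i] = [(1.0 - rho) * t for t in tau[i]]
        for _, x in ants[:psi]:
            for i in range(n):
                j = cand[i].index(x[i])
                tau[i][j] = (1.0 - alpha) * tau[i][j] + alpha * t0
        # prune back to the m highest-pheromone candidates per variable
        for i in range(n):
            order = sorted(range(len(cand[i])), key=lambda j: -tau[i][j])[:m]
            cand[i] = [cand[i][j] for j in order]
            tau[i] = [tau[i][j] for j in order]
    return best, best_f
```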

APPENDIX II
BENCHMARK FUNCTIONS

The test functions used for the experiments include the first ten functions of the CEC 2005 Special Session on real-parameter optimization [40], denoted f_5−f_9 and f_12−f_16 in this paper, and functions from [30] with a shifted offset o = (0, 1, 2, . . . , n − 1) added, denoted f_1−f_4 and f_10−f_11. The shifted variable vectors and bounds for f_1−f_4 and f_10−f_11 are defined as x_new = x − o, l_new = l + o, and u_new = u + o, where x, l, and u are the variable vectors and the lower and upper bounds defined in [30], and x_new, l_new, and u_new are the corresponding shifted vectors. The test functions are as follows:

f_1: Shifted step.
f_2: Shifted Schwefel's P2.21.
f_3: Shifted Schwefel's P2.22.
f_4: Shifted quartic with noise.
f_5: Shifted sphere.
f_6: Shifted Schwefel's P1.2.
f_7: Shifted rotated high-conditioned elliptic.
f_8: Shifted Schwefel's P1.2 with noise.
f_9: Schwefel's P2.6 with global optimum on bounds.
f_10: Shifted Schwefel's P2.26.
f_11: Shifted Ackley.
f_12: Shifted Rosenbrock.
f_13: Shifted rotated Griewank without bounds.
f_14: Shifted rotated Ackley with global optimum on bounds.
f_15: Shifted Rastrigin.
f_16: Shifted rotated Rastrigin.
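The shift construction above amounts to evaluating the base function at x − o, which moves the optimum by +o while the bounds move with it. A minimal sketch, with sphere as a stand-in base function (the helper name is ours):

```python
def make_shifted(base_f, offset):
    # f_new(x) = base_f(x - o): the optimum of base_f at x* moves to
    # x* + o, matching x_new = x - o with l_new = l + o, u_new = u + o.
    def shifted(x):
        return base_f([xi - oi for xi, oi in zip(x, offset)])
    return shifted

def sphere(x):
    return sum(xi * xi for xi in x)

o = list(range(4))                  # o = (0, 1, 2, ..., n - 1), n = 4
shifted_sphere = make_shifted(sphere, o)  # optimum moves to x = o
```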

ACKNOWLEDGMENT

The authors would like to thank Dr. M. Dorigo, Dr. K. Socha,

Dr. P. N. Suganthan, Dr. X. Yao, and Dr. N. Hansen for letting us

use their source codes for the benchmark functions. The authors

would also like to thank the anonymous associate editor and the

referees for their valuable comments to improve the quality of

this paper.

REFERENCES

[1] M. Dorigo, G. D. Caro, and L. M. Gambardella, “Ant algorithms for

discrete optimization,” Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999.

[2] M. Dorigo and T. Stützle, Ant Colony Optimization. Cambridge, MA:

MIT Press, 2004.

[3] M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: Optimization by

a colony of cooperating agents,” IEEE Trans. Syst., Man, Cybern. B,

Cybern., vol. 26, no. 1, pp. 29–41, Feb. 1996.

[4] M. Dorigo and L. M. Gambardella, “Ant colony system: A cooperative

learning approach to the traveling salesman problem,” IEEE Trans. Evol.

Comput., vol. 1, no. 1, pp. 53–66, Apr. 1997.

[5] G. Leguizamón and Z. Michalewicz, “A new version of ant system

for subset problems,” in Proc. IEEE CEC, Piscataway, NJ, 1999,

pp. 1459–1464.

[6] K. M. Sim and W. H. Sun, “Ant colony optimization for routing and load-

balancing: Survey and new directions,” IEEE Trans. Syst., Man, Cybern.

A, Syst., Humans, vol. 33, no. 5, pp. 560–572, Sep. 2003.

[7] S. S. Iyengar, H.-C. Wu, N. Balakrishnan, and S. Y. Chang, “Biologically

inspired cooperative routing for wireless mobile sensor networks,” IEEE

Syst. J., vol. 1, no. 1, pp. 29–37, Sep. 2007.

[8] W.-N. Chen and J. Zhang, “An ant colony optimization approach to a

grid workﬂow scheduling problem with various QoS requirements,” IEEE

Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 39, no. 1, pp. 29–43,

Jan. 2009.

[9] D. Merkle, M. Middendorf, and H. Schmeck, “Ant colony optimization

for resource-constrained project scheduling,” IEEE Trans. Evol. Comput.,

vol. 6, no. 4, pp. 333–346, Aug. 2002.

[10] G. Wang, W. Gong, B. DeRenzi, and R. Kastner, “Ant colony opti-

mizations for resource- and timing-constrained operation scheduling,”

IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 26, no. 6,

pp. 1010–1029, Jun. 2007.

[11] J. Zhang, H. S.-H. Chung, A. W.-L. Lo, and T. Huang, “Ex-

tended ant colony optimization algorithm for power electronic circuit

design,” IEEE Trans. Power Electron., vol. 24, no. 1, pp. 147–162,

Jan. 2009.

[12] N. Monmarché, G. Venturini, and M. Slimane, “On how Pachycondyla

apicalis ants suggest a new search algorithm,” Future Gener. Comput.

Syst., vol. 16, no. 9, pp. 937–946, Jun. 2000.

[13] G. Bilchev and I. C. Parmee, “The ant colony metaphor for searching con-

tinuous design spaces,” in Proc. AISB Workshop Evol. Comput., vol. 933,

LNCS, 1995, pp. 25–39.

[14] M. Wodrich and G. Bilchev, “Cooperative distributed search: The ants’

way,” Control Cybern., vol. 26, no. 3, pp. 413–446, 1997.

[15] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni,

“Ant colony approach to continuous function optimization,” Ind. Eng.

Chem. Res., vol. 39, no. 10, pp. 3814–3822, 2000.

[16] J. Dréo and P. Siarry, “Continuous interacting ant colony algorithm

based on dense heterarchy,” Future Gener. Comput. Syst., vol. 20, no. 5,

pp. 841–856, Jun. 2004.

[17] X.-M. Hu, J. Zhang, and Y. Li, “Orthogonal methods based ant colony

search for solving continuous optimization problems,” J. Comput. Sci.

Technol., vol. 23, no. 1, pp. 2–18, Jan. 2008.

[18] J. Dréo and P. Siarry, “An ant colony algorithm aimed at dynamic contin-

uous optimization,” Appl. Math. Comput., vol. 181, no. 1, pp. 457–467,

Oct. 2006.

[19] H. Huang and Z. Hao, “ACO for continuous optimization based on

discrete encoding,” in Proc. ANTS, vol. 4150, LNCS, M. Dorigo,

L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli, and

T. Stützle, Eds., 2006, pp. 504–505.

[20] K. Socha, “ACO for continuous and mixed-variable optimization,” in

Proc. ANTS, vol. 3172, LNCS, M. Dorigo, M. Birattari, C. Blum,

L. M. Gambardella, F. Mondata, and T. Stützle, Eds., 2004, pp. 25–36.

[21] K. Socha and M. Dorigo, “Ant colony optimization for continuous

domains,” Eur. J. Oper. Res., vol. 185, no. 3, pp. 1155–1173, Mar. 2008.

[22] P. Korošec and J. Šilc, “High-dimensional real-parameter optimization

using the differential ant-stigmergy algorithm,” Int. J. Intell. Comput.

Cybern., vol. 2, no. 1, pp. 34–51, 2009.

[23] M. Kong and P. Tian, “A direct application of ant colony optimiza-

tion to function optimization problem in continuous domain,” in ANTS

2006, vol. 4150, LNCS, M. Dorigo, L. M. Gambardella, M. Birattari,

A. Martinoli, R. Poli, and T. Stützle, Eds., 2006, pp. 324–331.

[24] P. Korošec, J. Šilc, K. Oblak, and F. Kosel, “The differential ant-stigmergy

algorithm: An experimental evaluation and a real-world application,” in

Proc. IEEE CEC, Singapore, 2007, pp. 157–164.

[25] P. Korošec and J. Šilc, “The differential ant-stigmergy algorithm for large

scale real-parameter optimization,” in ANTS 2008, vol. 5217, LNCS,

M. Dorigo, M. Birattari, C. Blum, M. Clerc, T. Stützle, and A. F. T.

Winﬁeld, Eds., 2008, pp. 413–414.

[26] S. Tsutsui, “An enhanced aggregation pheromone system for real-

parameter optimization in the ACO metaphor,” in Proc. ANTS, vol. 4150,

LNCS, M. Dorigo, L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli,

and T. Stützle, Eds., 2006, pp. 60–71.

[27] F. O. de França, G. P. Coelho, F. J. V. Zuben, and R. R. de F. Attux,

“Multivariate ant colony optimization in continuous search spaces,” in

Proc. GECCO, Atlanta, GA, 2008, pp. 9–16.

[28] S. H. Pourtakdoust and H. Nobahari, “An extension of ant colony system

to continuous optimization problems,” in Proc. ANTS, vol. 3172, LNCS,

M. Dorigo, M. Birattari, C. Blum, L. M. Gambardella, F. Mondata, and

T. Stützle, Eds., 2004, pp. 294–301.

[29] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive

learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281–295,

Jun. 2006.

[30] X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,”

IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 82–102, Jul. 1999.

[31] A. Auger and N. Hansen, “Performance evaluation of an advanced

local search evolutionary algorithm,” in Proc. IEEE CEC, 2005, vol. 2,

pp. 1777–1784.

[32] B. Bullnheimer, R. F. Hartl, and C. Strauss, “A new rank based version of

the ant system—A computational study,” Central Eur. J. Oper. Res. Econ.,

vol. 7, no. 1, pp. 25–38, 1997.

[33] T. Stützle and H. H. Hoos, “MAX-MIN ant system,” Future Gener.

Comput. Syst., vol. 16, no. 8, pp. 889–914, Jun. 2000.

[34] A. C. Zecchin, A. R. Simpson, H. R. Maier, and J. B. Nixon, “Parametric

study for an ant algorithm applied to water distribution system optimiza-

tion,” IEEE Trans. Evol. Comput., vol. 9, no. 2, pp. 175–191, Apr. 2005.

[35] M. Guntsch and M. Middendorf, “Pheromone modiﬁcation strategies for

ant algorithms applied to dynamic TSP,” in Proc. EvoWorkshop, vol. 2037,

LNCS, E. J. W. Boers, J. Gottlieb, P. L. Lanzi, R. E. Smith, S. Cagnoni,

E. Hart, G. R. Raidl, and H. Tijink, Eds., 2001, pp. 213–222.

[36] P. Pellegrini, D. Favaretto, and E. Moretti, “On MAX-MIN ant sys-

tem’s parameters,” in Proc. ANTS, vol. 4150, LNCS, M. Dorigo,

L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli, and T. Stützle,

Eds., 2006, pp. 203–214.

[37] M. L. Pilat and T. White, “Using genetic algorithms to optimize ACS-

TSP,” in Proc. ANTS, vol. 2463, LNCS, M. Dorigo, G. D. Caro, and

M. Sampels, Eds., 2002, pp. 282–283.

[38] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence. San Mateo,

CA: Morgan Kaufmann, 2001.

[39] M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and

convergence in a multidimensional complex space,” IEEE Trans. Evol.

Comput., vol. 6, no. 1, pp. 58–73, Feb. 2002.

[40] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger,

and S. Tiwari, “Problem Deﬁnitions and Evaluation Criteria for the CEC

2005 Special Session on Real-Parameter Optimization,” Nanyang Tech-

nol. Univ., Singapore, KanGAL Rep. 2005005, May 2005.

Xiao-Min Hu (S’06) received the B.S. degree in

network engineering in 2006 from Sun Yat-Sen Uni-

versity, Guangzhou, China, where she is currently

working toward the Ph.D. degree in computer ap-

plication and technology in the Department of Com-

puter Science.

Her current research interests include artiﬁcial

intelligence, evolutionary computation, swarm in-

telligence, and their applications in design and

optimization on wireless sensor networks, net-

work routing technologies, and other intelligence

applications.

Jun Zhang (M’02–SM’08) received the Ph.D. de-

gree in electrical engineering from the City Univer-

sity of Hong Kong, Kowloon, Hong Kong, in 2002.

From 2003 to 2004, he was a Brain Korean 21

Postdoctoral Fellow with the Department of Elec-

trical Engineering and Computer Science, Korea

Advanced Institute of Science and Technology,

Daejeon, South Korea. Since 2004, he has been

with the Sun Yat-Sen University, Guangzhou, China,

where he is currently a Professor and the Ph.D. Su-

pervisor with the Department of Computer Science.

He has authored seven research books and book chapters, and over 80 technical

papers in his research areas. His research interests include computational

intelligence, data mining, wireless sensor networks, operations research, and

power electronic circuits.

Dr. Zhang is currently an Associate Editor of the IEEE TRANSACTIONS ON

INDUSTRIAL ELECTRONICS. He is currently the Chair of the IEEE Beijing

(Guangzhou) Section Computational Intelligence Society Chapter.

Henry Shu-Hung Chung (M’95–SM’03) received

the B.Eng. and Ph.D. degrees in electrical en-

gineering from Hong Kong Polytechnic Univer-

sity, Kowloon, Hong Kong, in 1991 and 1994,

respectively.

Since 1995, he has been with the City University

of Hong Kong (CityU), Hong Kong. He is currently

a Professor with the Department of Electronic Engi-

neering and the Chief Technical Ofﬁcer of e.Energy

Technology Ltd.—an associated company of CityU.

He has authored four research book chapters and

over 250 technical papers, including 110 refereed journal papers in his research

areas. He is also the holder of ten patents. His research interests include

time- and frequency-domain analyses of power electronic circuits, switched-

capacitor-based converters, random-switching techniques, control methods,

digital audio ampliﬁers, soft-switching converters, and electronic ballast design.

Dr. Chung was a recipient of the 2001 Grand Applied Research Excel-

lence Award from the CityU. He was the IEEE Student Branch Counselor

and was the Track Chair of the Technical Committee on power electronics

circuits and power systems of the IEEE Circuits and Systems Society from

1997 to 1998. He was an Associate Editor and a Guest Editor of the IEEE

TRANSACTIONS ON CIRCUITS AND SYSTEMS, PART I: FUNDAMENTAL

THEORY AND APPLICATIONS from 1999 to 2003. He is currently an Associate

Editor of the IEEE TRANSACTIONS ON POWER ELECTRONICS and the IEEE

TRANSACTIONS ON CIRCUITS AND SYSTEMS, PART I: FUNDAMENTAL

THEORY AND APPLICATIONS.

Yun Li (S’87–M’90) received the B.Sc. degree in

radio electronics science from Sichuan University,

Chengdu, China, in 1984, the M.Eng. degree in

electronic engineering from the University of Elec-

tronic Science and Technology of China (UESTC),

Chengdu, in 1987, and the Ph.D. degree in comput-

ing and control engineering from the University of

Strathclyde, Glasgow, U.K., in 1990.

From 1989 to 1990, he was with the U.K. National

Engineering Laboratory and the Industrial Systems

and Control Limited, Glasgow. In 1991, he was a

Lecturer with the University of Glasgow, Glasgow. In 2002, he served as

a Visiting Professor with Kumamoto University, Kumamoto, Japan. He is

currently a Senior Lecturer with the University of Glasgow and a Visiting

Professor with the UESTC. In 1996, he independently invented the “indef-

inite scattering matrix” theory, which opened up a groundbreaking way for

microwave feedback circuit design. From 1987 to 1991, he carried out leading

work in parallel processing for recursive ﬁltering and feedback control. In

1992, he achieved ﬁrst symbolic computing for power electronic circuit design,

without needing to invert any matrix, complex-numbered or not. Since 1992,

he has pioneered into design automation of control systems and discovery of

novel systems using evolutionary learning and intelligent search techniques.

He established the IEEE Computer-Aided Control System Design Evolutionary

Computation Working Group and the European Network of Excellence in

Evolutionary Computing (EvoNet) Workgroup on Systems, Control, and Drives

in 1998. He has supervised 15 Ph.D.s in this area and has over 140 publications.

Dr. Li is a Chartered Engineer and a member of the Institution of Engineering

and Technology.

Ou Liu received the Ph.D. degree in information

systems from the City University of Hong Kong,

Kowloon, Hong Kong, in 2006.

Since 2006, he has been with the Hong Kong Poly-

technic University, Hong Kong, where he is currently

an Assistant Professor with the School of Accounting

and Finance. His research interests include business

intelligence, decision support systems, ontology en-

gineering, genetic algorithms, ant colony systems,

fuzzy logic, and neural networks.

j = 1. In SamACO. so that the latter ants can be attracted to select the values again. i = 1. . which samples a PDF [21]. the traditional ACO is composed of the ant’s solution construction. each xi . Each solution variable value in the archive is considered as the center of a Gaussian probability density function (PDF). Each candidate solution x ∈ S has an objective function value f (x). The three processes iterate until the termination condition is satisﬁed. j = 1. . Numerical experimental results are presented in Section V for analyzing the performance of the proposed algorithm. given by (1). • f : S → is the objective function. The results are compared not only with the aforementioned ant-based algorithms but also with some representative computational intelligence algorithms. which can be regarded as a solution “path” walked by an ant. DECEMBER 2010 the traditional ACO for solving both continuous and mixed discrete–continuous optimization problems. ACO F RAMEWORK FOR D ISCRETE AND C ONTINUOUS O PTIMIZATION A. Framework of the traditional ACO for discrete optimization. in effect. xi . The traditional ACO is good at selecting promising candidate component values to form high-quality solutions. The most signiﬁcant characteristic of ACO is the use of pheromones and the incremental solution construction [2]. If the links between the values are associated with phero(j) (l) mones. If a high-quality solution is constructed by the ants. . Parameter analysis of the proposed algorithm is made in Section IV. n. Different from sampling a PDF. Besides the initialization step. Conclusions are drawn in Section VI. By preserving variable values from the best solutions constructed by the previous ants. Deﬁnition 1: A discrete minimization problem Π is denoted as (S. . . where an archive was used to preserve the k best solutions found thus far. The sampling behavior of ACOR is a kind of probabilistic sampling. 
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 40, NO. 6, DECEMBER 2010

The performance of SamACO in solving continuous optimization problems is validated by testing benchmark numerical functions with unimodal and multimodal features, and is compared with that of other ant-based algorithms, comprehensive learning particle swarm optimization (CLPSO) [29], fast evolutionary programming (FEP) [30], and evolution strategy with covariance matrix adaptation (CMA-ES) [31].

The rest of this paper is organized as follows. Section II first presents the traditional ACO framework for discrete optimization and details the differences between the framework of ACO in solving discrete optimization problems (DOPs) and that of the proposed SamACO in solving continuous optimization problems. Section III describes the implementation of the proposed SamACO algorithm for continuous optimization.

II. ACO FRAMEWORKS FOR DISCRETE AND CONTINUOUS OPTIMIZATION

A. Traditional ACO Framework for Discrete Optimization

Before introducing the traditional ACO framework, a discrete minimization problem is first defined as in [2] and [21].

Definition 1: A discrete minimization problem Π is denoted as (S, f, Ω), where
• S is the search space defined over a finite set of discrete decision variables (solution components) Xi with values xi(j) ∈ Di = {xi(1), xi(2), ..., xi(|Di|)}, i = 1, 2, ..., n, with n being the number of decision variables;
• Ω is the set of constraints that the solutions in S must satisfy;
• f is the objective function to be minimized.
The minimization problem is to search for a solution x* ∈ S that satisfies f(x*) ≤ f(x), ∀x ∈ S.

The basic framework of the traditional ACO for discrete optimization is shown in Fig. 1 [Fig. 1. Basic framework of the traditional ACO for discrete optimization]. It is composed of the ants' solution construction, an optional local search, and the pheromone update. When solving DOPs, solution component values or the links between the values are associated with pheromones. Since the treatment of placing pheromones on links or on components is similar, the rest of this paper focuses on the case where pheromones are on components. If component values are associated with pheromones, each value–pheromone pair is recorded in a component-pheromone matrix

    M = [ x1(1), τ1(1)   x1(2), τ1(2)   ···   x1(|D1|), τ1(|D1|)
          x2(1), τ2(1)   x2(2), τ2(2)   ···   x2(|D2|), τ2(|D2|)
          ···
          xn(1), τn(1)   xn(2), τn(2)   ···   xn(|Dn|), τn(|Dn|) ]                (1)

where the pheromone value τi(j) reflects the desirability of adding the component value xi(j) to the solution. The realization of the pheromone update process is the main distinction among different ACO variants in the literature, such as the ant system (AS) [3], the ant colony system (ACS) [4], the rank-based AS (ASrank) [32], and the Max-Min AS [33].

B. Extended ACO Framework for Continuous Optimization

Different from DOPs, the decision variables of a continuous optimization problem are defined in the continuous domain. Such a problem is defined as follows [2], [21].

Definition 2: A continuous minimization problem Π is denoted as (S, f, Ω), where
• S is the search space defined over a finite set of continuous decision variables (solution components) Xi, with values xi ∈ [li, ui], i = 1, 2, ..., n, where li and ui represent the lower and upper bounds of the decision variable Xi, respectively;
• the definitions of f and Ω are the same as in Definition 1.

Because the number of component values is finite in DOPs but not in the continuous domain, the traditional ACO framework needs to be duly modified. Socha and Dorigo [21] proposed a method for this purpose and later improved it, referring to the resultant algorithm as ACOR. The fundamental idea of ACOR is the shift from using a discrete probability distribution to using a continuous probability density function (PDF). Pheromones in ACOR are implicit in the generation of Gaussian PDFs. When an ant in ACOR constructs a solution, a new variable value is generated according to the Gaussian distribution with a selected center and a computed variance. Moreover, a Gram–Schmidt process is used for each variable to handle variable correlation. However, although the correlation between different decision variables is handled in this way, the calculation of the Gram–Schmidt process for each variable leads to a significantly higher computational demand, and the time used by ACOR increases rapidly when the dimension of the objective function increases. The algorithm may also still converge to local optima, particularly when the values in the archive are very close to each other. Similar realizations of this type are reported in [22]–[28].

The motivation for this paper is that a solution of a continuous optimization problem is, in effect, a combination of feasible variable values. If a finite number of variable values are sampled from the continuous domain, e.g., each decision variable Xi has ki sampled values xi(1), xi(2), ..., xi(ki) from the continuous domain [li, ui], i = 1, 2, ..., n, then the traditional ACO algorithms for discrete optimization can be used, and the sampled discrete variable values can be optimized by a traditional ACO process as in solving DOPs. The key for successful optimization now becomes how to sample promising variable values and how to use them to construct high-quality solutions. This forms the basic idea of the SamACO framework proposed in this paper, which is shown in Fig. 2 [Fig. 2. Framework of the proposed SamACO for continuous optimization].

The success of SamACO in solving continuous optimization problems depends on an effective variable sampling method and an efficient solution construction process. Without loss of the advantage of the traditional ACO, a means to sample promising variable values from the continuous domain and to use pheromones on the candidate variable values to guide the ants' solution construction is developed in SamACO. The sampling method possesses the feature of balancing memory, exploration, and exploitation: promising variable values are inherited from the last iteration (memory); diversity of the variable values is maintained by exploring a small number of random variable values (exploration); and high solution accuracy is achieved by exploiting new variable values surrounding the best-so-far solution in a dynamic exploitation process (exploitation). The ants' construction process then selects promising variable values to form high-quality solutions by taking advantage of the traditional ACO method in selecting discrete components.
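Concretely, the component-pheromone matrix M in (1) can be stored as two parallel arrays, one holding candidate values and one holding pheromones, with an ant picking one candidate per variable with probability proportional to pheromone. The following Python sketch illustrates this roulette-wheel selection; the array sizes, bounds, and initial pheromone value are arbitrary assumptions for illustration, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical component-pheromone matrix for n = 3 variables, each with
# k = 4 candidate values (row i pairs values[i, j] with tau[i, j]).
values = rng.uniform(-5.0, 5.0, size=(3, 4))   # candidate values x_i^(j)
tau = np.full((3, 4), 0.5)                     # pheromones tau_i^(j), all T0

def construct_solution(values, tau, rng):
    """Pick one candidate value per variable with probability
    proportional to its pheromone (roulette-wheel selection)."""
    n, k = values.shape
    solution = np.empty(n)
    for i in range(n):
        p = tau[i] / tau[i].sum()      # selection probabilities for row i
        j = rng.choice(k, p=p)
        solution[i] = values[i, j]
    return solution

x = construct_solution(values, tau, rng)
```

Because all pheromones start equal, the selection above is initially uniform; once some entries of tau are reinforced, the corresponding candidates are picked more often.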
III. PROPOSED SamACO ALGORITHM FOR CONTINUOUS OPTIMIZATION

Here, the detailed implementation of the proposed SamACO algorithm for continuous optimization is presented. The overall pseudocode of SamACO is given in Appendix I, and Fig. 3 illustrates the flowchart of the algorithm [Fig. 3. Flowchart of the SamACO algorithm for continuous optimization]. Because the algorithm keeps a finite set of candidate values for each variable, the pheromone update can be directly applied to the τi(j) in (1) for increasing or reducing the attractions of the corresponding component values to the ants.

A. Generation of the Candidate Variable Values

The generation processes of the candidate variable values in the initialization step and in the optimization iterations are different. Initially, the candidate variable values are randomly sampled in the feasible domain as

    xi(j) = li + (ui − li) · (j − 1 + randi(j))/(m + ϑ),   j = 1, 2, ..., m + ϑ        (2)

where (m + ϑ) is the initial number of candidate values for each variable i, and randi(j) is a uniform random number in [0, 1].

During the optimization iterations, the candidate variable values have four sources, i.e., the variable values selected by the ants in the previous iteration, a dynamic exploitation process, a random exploration process, and the best-so-far solution, which represents the best solution that has ever been found. In each iteration, m ants construct m solutions, resulting in m candidate values for each variable for the next iteration. The dynamic exploitation is applied to the best-so-far solution, resulting in gi new candidate variable values near the corresponding variable values of the best-so-far solution for each variable Xi. The random exploration process generates Θ new values for each variable by discarding the worst Θ solutions that are constructed by the ants in the previous iteration, so that new, diverse variable values can be imported into the value group. Suppose that the worst Θ solutions are denoted by x(m−Θ+1), x(m−Θ+2), ..., x(m). The new variable values for the solution x(j) are randomly generated as

    xi(j) = li + (ui − li) · randi(j)                                                  (3)

where i = 1, 2, ..., n and j = m − Θ + 1, m − Θ + 2, ..., m. In total, there are (m + gi + 1) candidate values for each variable Xi. The composition of the candidate variable values for the ants to select is illustrated in Fig. 4 [Fig. 4. Composition of the candidate variable values for the ants to select].

B. Dynamic Exploitation Process

The dynamic exploitation proposed in this paper is effective for introducing fine-tuned variable values into the variable value group. It is applied to the best-so-far solution x(0) = (x1(0), x2(0), ..., xn(0)), aiming at searching the vicinity of each variable value xi(0) in the interval [xi(0) − ri, xi(0) + ri]. We use a radius ri to confine the neighborhood exploitation of the variable value xi(0). The initial radius value is set as ri = (ui − li)/(2m), i = 1, 2, ..., n.

The values of the variables in the best-so-far solution are randomly selected to be increased, kept unchanged, or reduced as

    x̂i = min(xi(0) + ri · σi, ui),   0 ≤ q < 1/3
    x̂i = xi(0),                      1/3 ≤ q < 2/3                                    (4)
    x̂i = max(xi(0) − ri · σi, li),   2/3 ≤ q < 1

where σi ∈ (0, 1] and q ∈ [0, 1) are uniform random values, i = 1, 2, ..., n. The resulting solution x̂ = (x̂1, x̂2, ..., x̂n) is then evaluated. If the new solution is no worse than the recorded best-so-far solution, we replace the best-so-far solution with the new solution. The new variable values that are generated by increasing or reducing a random step length are recorded as xi(m+gi), where gi counts the number of new variable values in the dynamic exploitation process. The above exploitation repeats ϑ times, resulting in gi new candidate variable values near the corresponding variable values of the best-so-far solution for each variable Xi.

Furthermore, the radiuses adaptively change based on the exploitation result. If the best exploitation solution is no worse than the original best-so-far solution (case 1), the radiuses are extended; otherwise (case 2), the radiuses are reduced:

    ri ← ri · ve,   case 1
    ri ← ri · vr,   case 2                                                             (5)

for i = 1, 2, ..., n, where ve (ve ≥ 1) is the radius extension rate and vr (0 < vr ≤ 1) is the radius reduction rate. The extension of the radiuses can import values at a distance further away from the original value, whereas the reduction of the radiuses can generate values with high accuracy near the original value.
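The sampling and exploitation steps above can be sketched in Python as follows. This is a simplified illustration, not the paper's implementation; the bounds and the values of m, ϑ, Θ, and vr are arbitrary assumptions, and only a single exploitation trial is shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def stratified_sample(lo, hi, count, rng):
    """Initial candidate values, one per equal-width stratum, cf. eq. (2)."""
    j = np.arange(count)
    return lo + (hi - lo) * (j + rng.uniform(size=count)) / count

def explore(lo, hi, rng):
    """Uniform random exploration value, cf. eq. (3)."""
    return lo + (hi - lo) * rng.uniform()

def exploit(best, radius, lo, hi, rng):
    """One dynamic-exploitation trial around a best-so-far component,
    cf. eq. (4): increase, keep, or decrease with equal probability."""
    q, sigma = rng.uniform(), rng.uniform()
    if q < 1.0 / 3.0:
        return min(best + radius * sigma, hi)
    if q < 2.0 / 3.0:
        return best
    return max(best - radius * sigma, lo)

lo, hi, m, theta = -10.0, 10.0, 20, 10       # assumed toy settings
cand = stratified_sample(lo, hi, m + theta, rng)
x_rand = explore(lo, hi, rng)                # replacement for a worst solution
r = (hi - lo) / (2 * m)                      # initial radius r_i
x_new = exploit(cand[0], r, lo, hi, rng)
# Radius adaptation, cf. eq. (5): extend on success, reduce otherwise.
vr = 0.5                                     # assumed reduction rate
ve = 1.0 / vr                                # ve = 1/vr, as in Section IV
improved = False                             # outcome of the exploitation trial
r = r * ve if improved else r * vr
```

Note that the stratified form of (2) guarantees one initial candidate inside each of the (m + ϑ) equal-width subintervals, which spreads the initial samples more evenly than plain uniform sampling.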

C. Ants' Solution Construction

After the candidate variable values are determined, m ants are dispatched to construct solutions. Each candidate variable value is associated with pheromones, which bias the ants' selection for solution construction. The index li(k) of the variable value selected by ant k for the ith variable is

    li(k) = arg max_j τi(j),   if q < q0
    li(k) = Li(k),             otherwise                                               (6)

where q is a uniform random value in [0, 1), and the parameter q0 ∈ [0, 1) controls whether the ant chooses the variable value with the highest pheromone or randomly chooses an index Li(k) ∈ {0, 1, ..., m + gi} according to the probability distribution given by

    pi(j) = τi(j) / Σ_{u=0}^{m+gi} τi(u),   j = 0, 1, ..., m + gi.                     (7)

The constructed solution of ant k is denoted x(k) = (x1(l1(k)), x2(l2(k)), ..., xn(ln(k))), k = 1, 2, ..., m. An illustration of two ants constructing solutions is shown in Fig. 5 [Fig. 5. Illustration of two solutions constructed by ant a and ant b]. The variable values that are selected by each ant form a solution "path," and pheromones are then adjusted according to the quality of these solution paths.

D. Pheromone Update

Initially, each candidate variable value is assigned an initial pheromone value T0. In each iteration, after evaluating the m solutions constructed by the ants, the solutions are sorted by their objective function values in an order from the best to the worst. Suppose that the sorted solutions are arranged as x(1), x(2), ..., x(m). The best-so-far solution is then updated. The pheromones on the selected variable values are evaporated as

    τi(j) ← (1 − ρ) · τi(j) + ρ · Tmin                                                 (8)

where 0 < ρ < 1 is the pheromone evaporation rate and Tmin is the predefined minimum pheromone value. The variable values in the best Ψ solutions have their pheromones reinforced as

    τi(j) ← (1 − α) · τi(j) + α · Tmax                                                 (9)

where 0 < α < 1 is the pheromone reinforcement rate, Ψ is the number of high-quality solutions that receive additional pheromones on their variable values, and Tmax is the predefined maximum pheromone value. The variable values with extra pheromone deposits have higher chances to be selected by the ants. The pheromones on the variable values that are generated by the exploration process and the dynamic exploitation process are assigned as T0, and the pheromone values on the variable values of the best-so-far solution are assigned equal to the pheromone values on the iteration-best solution.

IV. PARAMETER STUDY OF SamACO

It can be seen that the proposed SamACO involves several parameters. In the literature, a great deal of work has been carried out on the parametric study for ACO, such as [4] and [34]–[37]. Here, we investigate the effects of these parameters on the proposed algorithm, and in Section IV-B, we use experiments to find suitable settings for them.

A. Effects of the Parameters in SamACO

1) Discarding Number Θ: The discarding number Θ ≥ 1 is used for maintaining diversity in the candidate variable values. In each iteration, the random exploration process replaces the Θ worst solutions constructed by the ants. The more solutions are replaced, the higher the diversity in the candidate variable values, but a high discarding rate can slow down the convergence speed. Diversity is important for avoiding stagnation.

2) Elitist Number Ψ: Different from the discarding number Θ, the elitist number Ψ determines the best solutions constructed by the ants to receive additional pheromones. The elitist number Ψ helps preserve promising variable values and accelerate the convergence speed, but it may induce premature convergence. Both parameters Θ and Ψ cannot be set too large. In fact, a small discarding number and a small elitist number are sufficient because their effects accumulate over iterations.

3) Traditional Parameters in ACO: Similar to the ACS [4], the parameters of SamACO that simulate the ants' construction behavior include the number of ants m, the parameter q0, the initial pheromone value T0, the pheromone evaporation rate ρ, the pheromone reinforcement rate α, and the lower and upper limits of pheromone values Tmin and Tmax.

4) Parameters in Dynamic Exploitation: The parameters in the dynamic exploitation include the exploitation frequency ϑ and the radius reduction and extension rates vr and ve. The exploitation frequency ϑ controls the number of values sampled in the neighborhood of the best-so-far solution per iteration. Since each candidate variable value group is mainly composed of the m variable values selected by the previous ants and the variable values from the local exploitation, the value of ϑ influences the selection probabilities of the m variable values. The extension and the reduction of the radiuses aim at introducing values with different accuracy levels according to the optimization situation.
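The selection rule (6)-(7) and the pheromone updates (8)-(9) can be sketched compactly in Python. The parameter values below are arbitrary assumptions chosen so that the example is deterministic, not the paper's defaults.

```python
import numpy as np

rng = np.random.default_rng(2)

def select_index(tau_i, q0, rng):
    """Pseudo-random-proportional rule, cf. eqs. (6)-(7): greedy pick
    with probability q0, roulette-wheel selection otherwise."""
    if rng.uniform() < q0:
        return int(np.argmax(tau_i))
    p = tau_i / tau_i.sum()
    return int(rng.choice(len(tau_i), p=p))

def evaporate(tau, idx, rho, t_min):
    """Evaporate pheromone on a selected value, cf. eq. (8)."""
    tau[idx] = (1 - rho) * tau[idx] + rho * t_min

def reinforce(tau, idx, alpha, t_max):
    """Deposit pheromone on a value used by an elite solution, cf. eq. (9)."""
    tau[idx] = (1 - alpha) * tau[idx] + alpha * t_max

tau = np.array([0.5, 0.5, 0.9, 0.5])       # pheromones of one variable's candidates
j = select_index(tau, q0=1.0, rng=rng)     # q0 = 1 forces the greedy branch
evaporate(tau, j, rho=0.1, t_min=0.1)      # 0.9 -> 0.9*0.9 + 0.1*0.1 = 0.82
reinforce(tau, j, alpha=0.3, t_max=1.0)    # 0.82 -> 0.7*0.82 + 0.3*1.0 = 0.874
```

Both updates are convex combinations with Tmin and Tmax, so every pheromone value stays inside [Tmin, Tmax], which is what keeps the selection probabilities in (7) from collapsing to a single candidate.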

B. Numerical Analysis on Parameter Settings

Some of the SamACO parameters have their default settings, and there is generally no need to change these values; they are set as Tmin = T0 = 0.1 and Tmax = 1. According to our prior experiments on a set of unimodal and multimodal functions (listed in Appendix II) with n = 30, the parameter values m = 20, ϑ = 20, Θ = 1, Ψ = 1, and vr = 0.7, together with the default values of q0, ρ, and α, are used for comparing the performance of SamACO with other algorithms. Table I lists the domains and default settings (denoted by DS) of the parameters (P) and summarizes the sensitivity of SamACO based on the default settings. In the remainder of this section, we undertake a numerical experiment on f6 (the shifted Schwefel's problem 1.2) to discuss the influence of these parameters. Each parameter setting is independently tested 25 times, with the other parameter values fixed, and the algorithm terminates when the number of function evaluations (FEs) reaches 300 000. The smaller the number of FEs used, the better the performance of the proposed algorithm.

1) Correlations Between the Ant Number m and the Exploitation Frequency ϑ: The correlations between m = 1, 2, ..., 40 and ϑ = 0, 1, ..., 40 have been tested by fixing the values of the other parameters. Fig. 6 shows a contour graph representing the average number of FEs used for achieving an accuracy level smaller than ε = f(x) − f(x*) = 1, where f(x) and f(x*) denote the solution value and the optimal solution value, respectively [Fig. 6. Contour illustration of using different values of m and ϑ on f6. The contour represents the average number of FEs used for achieving an accuracy level smaller than ε = 1]. It can be observed in Fig. 6 that SamACO is more sensitive to m than to ϑ. When the value of ϑ is fixed, a larger m consumes more FEs, which can be too large to obtain solutions within the predefined accuracy level. When ϑ = 0 or 1 (not shown in Fig. 6), SamACO cannot achieve any satisfactory results within the maximum number of FEs (300 000) using any value of m, because a small ϑ may not sample enough local variable values, and thus the speed to approach the local optimum is slow. On the other hand, a large ϑ provides more candidate variable values surrounding the neighborhood of the best-so-far solution, and a sufficient amount of dynamic exploitation is crucial to achieving solutions with high accuracy.

2) Radius Reduction Rate vr: The values vr = 0, 0.1, ..., 1 have been tested, with the other parameters set as default. Because the radius extension is a reversal operation of the radius reduction, we set ve = 1/vr in this paper. In general, the extension and reduction of the radiuses adapt the exploitation neighborhood range according to the solution quality. Fig. 7(a) illustrates a curve representing the average number of FEs used with different values of vr on f6. When vr = 0, the neighborhood radius becomes 0 once it is reduced, so that no fine-tuned values can be sampled; when vr = ve = 1, the neighborhood radius is fixed as the initial value. In the above two cases, SamACO cannot find satisfactory solutions. With a larger value of vr, the reduction speed of the radiuses is slower, so that promising variable values are not so easy to be omitted when exploiting the neighborhood; accordingly, the curve shows a significant decreasing trend when vr increases from 0 toward 1, and the average number of FEs used is minimized at a large, but not unit, value of vr.

3) Traditional ACO Parameters q0, ρ, and α: The values 0, 0.1, ..., 1 have been tested for q0, ρ, and α, and the curves showing the average number of FEs used are given in Fig. 7(b)-(d). It can be observed in Fig. 7(b) that an inclination for selecting the variable value having the largest pheromone from the m variable values [as in (6)] is beneficial, because SamACO uses smaller numbers of FEs when q0 = 0.1 than when q0 = 0. However, the value of q0 cannot be too large; otherwise, the algorithm traps in local optima and needs a long time to jump out. From Fig. 7(c) and (d), the curves of ρ and α show similar trends. The algorithm has unsatisfactory performance when ρ = 0 or α = 0, in which case the local or global pheromone update does not take any effect. Therefore, pheromone updates are quite useful for the optimization process.

V. NUMERICAL EXPERIMENTS

A. Experimental Setup

Sixteen benchmark numerical functions have been tested in this paper to validate the performance of the proposed SamACO. Appendix II lists these functions, which are chosen from the literature [30], [38]-[40]. Among them, functions f1 to f9 are unimodal, whereas f10 to f16 are multimodal. Each function is independently tested 25 times, and each run terminates when the number of FEs reaches 300 000.

Three types of algorithms are used for comparing the performance of SamACO. The first type includes ant-based algorithms such as CACO [15], COAC [17], and ACOR [21]. The second type is the other well-known swarm intelligence algorithm besides ACO, i.e., particle swarm optimization (PSO), which was originally developed for continuous optimization; an improved variant of PSO, the CLPSO [29], is applied. The third type consists of representative algorithms that use an explicit probability-learning notion. Different from ACO, which uses pheromones as explicit guidance, algorithms such as evolutionary programming and evolution strategy learn from explicit probabilistic models for generating new solutions; in fact, most computational intelligence algorithms have implicit or explicit probabilistic models that guide the optimization process.

Here, we use FEP [30] and CMA-ES [31] as representatives of the probability-learning algorithms. Except for CMA-ES, the algorithms compared with SamACO are programmed in C according to the source code provided by the authors and the descriptions given in the references, and their parameters are set to the recommended values in those references. CMA-ES is programmed in MATLAB; the MATLAB code used, cmaes.m, Version 2.35, is available at http://www.bionik.tu-berlin.de/user/niko/formersoftwareversions.html.

Fig. 7. Illustration of using different values of vr, q0, ρ, and α on f6. The y-axis represents the average number of FEs used for achieving an accuracy level smaller than ε = 1.

TABLE I
SUMMARY OF PARAMETER SETTINGS IN SamACO

B. Results and Comparisons

1) Comparison With Other Ant-Based Algorithms: Table II tabulates the mean error values f(x) − f(x*) and the corresponding standard deviations (St. dev) on f1-f16 for SamACO, CACO, COAC, and ACOR when n = 30. A t-test is made based on the error values of 25 independent runs to show whether SamACO is significantly better or worse than each compared algorithm. If the same zero error values are obtained for the compared algorithms (e.g., on f1), the numbers of FEs that are required for achieving the zero error values are used by the t-test. A score of 1, 0, or −1 is added when SamACO is significantly better than (b†), indifferent to (i), or significantly worse than (w‡) the compared algorithm for each function. The total scores corresponding to the other ant-based algorithms are all positive, which means that the performance of SamACO is generally better than that of those algorithms.

Table III shows the average number of FEs required to achieve accuracy levels smaller than ε1 = 1 and ε2 = 10−6. In the table, "%ok" stands for the successful percentage over 25 independent runs, and the symbol × denotes that the results cannot reach the predefined accuracy level ε within the maximum 300 000 FEs. Notice that all of the algorithms terminate when the number of FEs reaches 300 000. Only successful runs are used for calculating the average number of FEs, and only the functions that can be successfully solved by SamACO are presented in the table. It can be observed that the proposed SamACO performs the best among the four ant-based algorithms.

TABLE II
AVERAGE ERROR VALUES AT n = 30 AFTER 300 000 FEs BY SamACO, CACO, COAC, ACOR, CLPSO, FEP, AND CMA-ES

TABLE III
AVERAGE NUMBER OF FEs REQUIRED TO ACHIEVE ACCURACY LEVELS SMALLER THAN ε1 = 1 AND ε2 = 10−6 WHEN n = 30 FOR SamACO, CACO, COAC, ACOR, CLPSO, FEP, AND CMA-ES

SamACO not only has higher successful percentages than the other ant-based algorithms but also uses a relatively small number of FEs to achieve solutions within the predefined accuracy levels. SamACO can find solutions within accuracy levels 1 and 10−6 in all runs, whereas CACO, COAC, and ACOR can only obtain solutions within the accuracy level 1 or 10−6 successfully in some runs.

2) Comparison With CLPSO and Probability-Learning Algorithms: The scores in Table II show that SamACO generally performs better than CLPSO on the test functions. According to Table III, the average numbers of FEs used by SamACO are much smaller than those used by CLPSO, and SamACO can find satisfactory solutions in more test functions than CLPSO. Table III also shows that FEP generally needs more FEs to obtain satisfactory results. CMA-ES achieves the smallest t-test total scores among the compared algorithms presented in Table II, and the second place is SamACO, because the other algorithms have positive total scores, which means that they are not better than SamACO. CMA-ES can obtain high-quality solutions very fast for most of the test functions. However, CMA-ES has unsatisfactory performance on functions such as f10, f11, f15, and f16.

Fig. 8 illustrates the convergence curves of SamACO and the compared algorithms [Fig. 8. Convergence graphs of the algorithms. (a) f5. (b) f10. (c) f14. (d) f15. The vertical axis is the average error value, and the horizontal axis is the number of FEs]. It can be seen that SamACO finds solutions with high accuracy very fast for these functions.

3) Analysis on the Computational Complexity of Algorithms: Following the methods in [40], the computational complexity of the algorithms discussed in this paper is computed as in Table IV.

TABLE IV
COMPUTATIONAL COMPLEXITY OF ALGORITHMS

All of the time values are measured in CPU seconds. The computations of T0, T1, and T̂2 can be referred to in [40]; the values of T1 and T̂2 are obtained on f7 at n = 30, with T̂2 measured after 200 000 FEs. The results of SamACO, CACO, COAC, ACOR, CLPSO, and FEP are obtained on a PC with a Dual-Core 2.0-GHz CPU and 1G RAM (platform: Visual C++ 6.0). The results of CMA-ES are obtained on a Dual 2.0-GHz CPU with 1G RAM (platform: MATLAB 7.1). The computational complexity of SamACO is modest, with a complexity value smaller than 3. By using the Gram–Schmidt process, which is a computationally heavy operation, the computational complexity of ACOR is much larger than that of the other algorithms. In addition, algorithms that use complex distribution models (such as FEP and CMA-ES) generally require a longer computation time.

VI. CONCLUSION

An algorithm termed SamACO has been proposed for solving continuous optimization problems in this paper.

The algorithm demonstrates that, after determining candidate variable values, the optimization procedure of ACO in solving DOPs is, in effect, a subprocess in solving continuous optimization problems. Therefore, a way to sample promising variable values in the continuous domain and to use pheromones to guide the ants' construction behavior has been developed in SamACO, using an effective sampling method to discretize the continuous space and thereby extending ACO to continuous optimization. Each ant in SamACO constructs solutions incrementally by selecting values from the sampled variable values.

The novelties and characteristics of SamACO are as follows.
• A new framework for ACO. The SamACO framework extends ACO to continuous optimization.
• A new sampling method for discretizing the continuous search space. The sampling method aims at balancing the memory of previous ants, the exploration for new variable values, and the exploitation for variable values with high accuracy.
• Efficient incremental solution construction based on the sampled variable values.

Some state-of-the-art algorithms have been used for comparing performance with the SamACO algorithm, including ant-based algorithms for continuous optimization, swarm intelligence algorithms, and representative probability-learning algorithms. The experimental results show that the SamACO algorithm delivers good performance for both unimodal and multimodal problems. It has been shown that SamACO can also be applied to both DOPs and mixed discrete–continuous optimization problems.

APPENDIX I
SamACO ALGORITHM

1) /* Initialization */
Initialize the parameter values as in Table I
For i := 1 to n do /* For each variable */
    ri := (ui − li)/(2m) /* Set the initial radius value */
    gi := m + ϑ /* Set the number of candidate variable values for selection */
    For j := 1 to (m + ϑ) do
        Randomly sample a variable value according to (2)
        τi(j) := T0 /* Assign the initial pheromone value */
    End-for
End-for
For j := 1 to (m + ϑ) do
    Evaluate the jth solution according to the objective function
End-for
Sort the m solutions from the best to the worst
Record the best-so-far solution {x1(0), x2(0), ..., xn(0)} and assign each variable i with τi(0) := T0
Perform the ants' solution construction as in Step 3)
Perform the pheromone update as in Step 4)

2) /* Generation of the candidate variable values */
/* Perform a dynamic exploitation process on the best-so-far solution */
For i := 1 to n do gi := 1 End-for
flag := False /* Indicates whether the exploitation finds a better solution */
For k := 1 to ϑ do /* Repeat ϑ times */
    For i := 1 to n do /* For each variable */
        Generate a value x̂i according to (4)
        If (0 ≤ q < 1/3 or 2/3 ≤ q < 1)
            xi(m+gi) := x̂i /* Record the new variable value */
            τi(m+gi) := T0 /* Assign the initial pheromone value */
            gi := gi + 1 /* The counter adds 1 */
        End-if
    End-for
    Evaluate the solution {x̂1, x̂2, ..., x̂n} according to the objective function
    If the resulting solution is no worse than the best-so-far solution
        For i := 1 to n do xi(0) := x̂i End-for /* Update the best-so-far solution */
        flag := True
    End-if
End-for
If flag = True
    For i := 1 to n do ri := ri · ve End-for
Else
    For i := 1 to n do ri := ri · vr End-for
End-if
/* Perform a random exploration process on the worst solutions */
For j := m − Θ + 1 to m do
    For i := 1 to n do
        xi(j) := li + (ui − li) · randi(j)
        τi(j) := T0
    End-for
End-for

3) /* Ants' solution construction */
For i := 1 to n do
    For k := 1 to m do
        Ant k selects the index li(k) according to (6) and (7)
        x̃i(k) := xi(li(k)) /* Record the selected variable value */
        τ̃i(k) := τi(li(k)) /* Record the corresponding pheromone value */
    End-for
End-for
For k := 1 to m do
    For i := 1 to n do
        xi(k) := x̃i(k) /* Update the constructed solutions */
        τi(k) := τ̃i(k)
    End-for
    Evaluate the kth solution according to the objective function
End-for

4) /* Pheromone update */
Sort the m solutions from the best to the worst
Update the best-so-far solution
Perform pheromone evaporation on the m solutions according to (8)
Perform pheromone reinforcement on the best Ψ solutions according to (9)
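For readers who prefer an executable form, the pseudocode can be condensed into the following Python sketch. It is a simplified approximation of the procedure, not the paper's exact implementation: the per-variable exploitation counters gi are dropped, the three-way move rule of (4) is replaced by a single uniform perturbation, the best-so-far pheromone is simply reset to T0, and all parameter values (function name `samaco_sketch` included) are illustrative assumptions.

```python
import numpy as np

def samaco_sketch(f, n, lo, hi, m=10, iters=60, theta=2, psi=2, q0=0.7,
                  rho=0.2, alpha=0.3, t0=0.5, t_min=0.1, t_max=1.0,
                  vr=0.5, exploit_k=5, seed=0):
    """Condensed SamACO-style loop (a sketch under simplifying assumptions)."""
    rng = np.random.default_rng(seed)
    # Stratified initial candidates; each column is one solution, cf. eq. (2).
    strata = np.arange(m)
    X = lo + (hi - lo) * (strata[None, :] + rng.uniform(size=(n, m))) / m
    tau = np.full((n, m), t0)
    fit = np.array([f(X[:, k]) for k in range(m)])
    order = np.argsort(fit)
    X, tau, fit = X[:, order], tau[:, order], fit[order]
    best_x, best_f = X[:, 0].copy(), fit[0]
    r = np.full(n, (hi - lo) / (2 * m))        # initial radii
    for _ in range(iters):
        # 1) Dynamic exploitation around the best-so-far solution, cf. eq. (4).
        E = np.clip(best_x[:, None]
                    + r[:, None] * rng.uniform(-1.0, 1.0, size=(n, exploit_k)),
                    lo, hi)
        e_fit = np.array([f(E[:, k]) for k in range(exploit_k)])
        if e_fit.min() <= best_f:
            k = int(np.argmin(e_fit))
            best_x, best_f = E[:, k].copy(), e_fit[k]
            r = r / vr                         # extend radii on success, eq. (5)
        else:
            r = r * vr                         # reduce radii otherwise
        # 2) Random exploration replaces the worst theta solutions, cf. eq. (3).
        X[:, m - theta:] = rng.uniform(lo, hi, size=(n, theta))
        tau[:, m - theta:] = t0
        # 3) Candidate pool: best-so-far value + previous + exploited values.
        C = np.concatenate([best_x[:, None], X, E], axis=1)
        tC = np.concatenate([np.full((n, 1), t0), tau,
                             np.full((n, exploit_k), t0)], axis=1)
        # 4) Ant construction with the pseudo-random-proportional rule of
        #    eqs. (6)-(7), evaporating pheromone on each selected value (eq. (8)).
        newX, newT = np.empty((n, m)), np.empty((n, m))
        for k in range(m):
            for i in range(n):
                if rng.uniform() < q0:
                    idx = int(np.argmax(tC[i]))
                else:
                    idx = int(rng.choice(tC.shape[1], p=tC[i] / tC[i].sum()))
                newX[i, k] = C[i, idx]
                tC[i, idx] = (1 - rho) * tC[i, idx] + rho * t_min
                newT[i, k] = tC[i, idx]
        fit = np.array([f(newX[:, k]) for k in range(m)])
        order = np.argsort(fit)
        X, tau, fit = newX[:, order], newT[:, order], fit[order]
        # 5) Reinforce the best psi solutions (eq. (9)); update the best-so-far.
        tau[:, :psi] = (1 - alpha) * tau[:, :psi] + alpha * t_max
        if fit[0] <= best_f:
            best_x, best_f = X[:, 0].copy(), fit[0]
    return best_x, best_f

sphere = lambda x: float(np.sum(x * x))        # toy unimodal objective
x_star, f_star = samaco_sketch(sphere, n=5, lo=-5.0, hi=5.0)
```

Because the best-so-far record is only ever replaced by a solution that is no worse, the returned error is monotonically nonincreasing over iterations, mirroring the behavior of the full algorithm.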

For i := 1 to n do τi(0) := τi(1) End-for

5) If (Termination_condition = True)
    Output the best solution
Else goto Step 2)
End-if

APPENDIX II
BENCHMARK FUNCTIONS

The test functions used for the experiments include the first ten functions for the CEC 2005 Special Session on real-parameter optimization [40], used as f5-f9 and f12-f16 in this paper, and functions from [30] modified by adding a shifted offset o, used as f1-f4 and f10-f11. The shifted variable vectors and bounds for f1-f4 and f10-f11 are defined as xnew = x − o, lnew = l + o, and unew = u + o, where x, l, and u are the variable vectors and the lower and upper bounds defined in [30], and xnew, lnew, and unew are the corresponding shifted vectors.

The test functions are as follows:
f1 Shifted step;
f2 Shifted Schwefel's P2.21;
f3 Shifted Schwefel's P2.22;
f4 Shifted quartic with noise;
f5 Shifted sphere;
f6 Shifted Schwefel's P1.2;
f7 Shifted rotated high-conditioned elliptic;
f8 Shifted Schwefel's P1.2 with noise;
f9 Schwefel's P2.6 with global optimum on bounds;
f10 Shifted Schwefel's P2.26;
f11 Shifted Ackley;
f12 Shifted Rosenbrock;
f13 Shifted rotated Griewank without bounds;
f14 Shifted rotated Ackley with global optimum on bounds;
f15 Shifted Rastrigin;
f16 Shifted rotated Rastrigin.

ACKNOWLEDGMENT

The authors would like to thank Dr. Hansen, among others, for letting us use their source codes for the benchmark functions. The authors would also like to thank the anonymous associate editor and the referees for their valuable comments to improve the quality of this paper.

REFERENCES

[1] M. Dorigo and T. Stützle, Ant Colony Optimization. Cambridge, MA: MIT Press, 2004.
[2] M. Dorigo, G. Di Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artif. Life, vol. 5, no. 2, pp. 137–172, 1999.
[3] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 26, no. 1, pp. 29–41, Feb. 1996.
[4] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 53–66, Apr. 1997.
[5] G. Leguizamón and Z. Michalewicz, "A new version of ant system for subset problems," in Proc. IEEE CEC, 1999, pp. 1459–1464.
[6] K. M. Sim and W. H. Sun, "Ant colony optimization for routing and load-balancing: Survey and new directions," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 33, no. 5, pp. 560–572, Sep. 2003.
[7] S. S. Iyengar, H.-C. Wu, N. Balakrishnan, and S. Y. Chang, "Biologically inspired cooperative routing for wireless mobile sensor networks," IEEE Syst. J., vol. 1, no. 1, pp. 29–37, Sep. 2007.
[8] W.-N. Chen and J. Zhang, "An ant colony optimization approach to a grid workflow scheduling problem with various QoS requirements," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 39, no. 1, pp. 29–43, Jan. 2009.
[9] D. Merkle, M. Middendorf, and H. Schmeck, "Ant colony optimization for resource-constrained project scheduling," IEEE Trans. Evol. Comput., vol. 6, no. 4, pp. 333–346, Aug. 2002.
[10] G. Wang, W. Gong, B. DeRenzi, and R. Kastner, "Ant colony optimizations for resource- and timing-constrained operation scheduling," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 26, no. 6, pp. 1010–1029, Jun. 2007.
[11] J. Zhang, H. S.-H. Chung, W.-L. Lo, and T. Huang, "Extended ant colony optimization algorithm for power electronic circuit design," IEEE Trans. Power Electron., vol. 24, no. 1, pp. 147–162, Jan. 2009.
[12] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Gener. Comput. Syst., vol. 16, pp. 937–946, 2000.
[13] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," in Proc. AISB Workshop Evol. Comput., 1995, vol. 993, LNCS, pp. 25–39.
[14] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Ind. Eng. Chem. Res., vol. 39, no. 10, pp. 3814–3822, 2000.
[15] M. Wodrich and G. Bilchev, "Cooperative distributed search: The ants' way," Control Cybern., vol. 26, no. 3, pp. 413–446, 1997.
[16] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Gener. Comput. Syst., vol. 20, no. 5, pp. 841–856, 2004.
[17] X.-M. Hu, J. Zhang, and Y. Li, "Orthogonal methods based ant colony search for solving continuous optimization problems," J. Comput. Sci. Technol., vol. 23, no. 1, pp. 2–18, Jan. 2008.
[18] J. Dréo and P. Siarry, "An ant colony algorithm aimed at dynamic continuous optimization," Appl. Math. Comput., vol. 181, no. 1, pp. 457–467, 2006.
[19] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proc. ANTS, 2006, vol. 4150, LNCS, pp. 504–505.
[20] K. Socha, "ACO for continuous and mixed-variable optimization," in Proc. ANTS, 2004, vol. 3172, LNCS, pp. 25–36.
[21] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," Eur. J. Oper. Res., vol. 185, no. 3, pp. 1155–1173, 2008.
[22] P. Korošec, J. Šilc, K. Oblak, and F. Kosel, "The differential ant-stigmergy algorithm: An experimental evaluation and a real-world application," in Proc. IEEE CEC, 2007, pp. 157–164.
[23] M. Kong and P. Tian, "A direct application of ant colony optimization to function optimization problem in continuous domain," in Proc. ANTS, 2006, vol. 4150, LNCS, pp. 324–331.
[24] P. Korošec and J. Šilc, "High-dimensional real-parameter optimization using the differential ant-stigmergy algorithm," Int. J., pp. 34–51, 2008.
[25] P. Korošec and J. Šilc, "The differential ant-stigmergy algorithm for large scale real-parameter optimization," in Proc. ANTS, 2008, vol. 5217, LNCS, pp. 413–414.
[26] S. Tsutsui, "An enhanced aggregation pheromone system for real-parameter optimization in the ACO metaphor," in Proc. ANTS, 2006, vol. 4150, LNCS, pp. 60–71.
[27] F. O. de França, G. P. Coelho, F. J. Von Zuben, and R. R. de F. Attux, "Multivariate ant colony optimization in continuous search spaces," in Proc. GECCO, Atlanta, GA, 2008, pp. 9–16.
[28] S. H. Pourtakdoust and H. Nobahari, "An extension of ant colony system to continuous optimization problems," in Proc. ANTS, 2004, vol. 3172, LNCS, pp. 294–301.
[29] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281–295, Jun. 2006.

Auger and N. Hong Kong. wireless sensor networks.” in Proc. Jun. 2002. in 1984. Evol. 8. Jun. N. H. 889–914. respectively. C. no. degree in electronic engineering from the University of Electronic Science and Technology of China (UESTC). He has authored seven research books and book chapters. 175–191. 2. in 1987.. Daejeon. LNCS. K. 2001. Her current research interests include artiﬁcial intelligence. Hart. S. and M. P. where he is currently a Professor and the Ph. Favaretto. control methods. Chen. Comput. random-switching techniques.” Future Gener. He established the IEEE Computer-Aided Control System Design Evolutionary Computation Working Group and the European Network of Excellence in Evolutionary Computing (EvoNet) Workgroup on Systems. E. pp. 2000. Y.. Zhang is currently an Associate Editor of the IEEE T RANSACTIONS ON I NDUSTRIAL E LECTRONICS. degree in computing and control engineering from the University of Strathclyde. Hoos. 4150. R. Hong Kong.D. From 2003 to 2004. vol. He is currently a Professor with the Department of Electronic Engineering and the Chief Technical Ofﬁcer of e. degree in network engineering in 2006 from Sun Yat-Sen University. He has supervised 15 Ph. San Mateo. Univ. Stützle and H. in 2006.K. MAN. Yun Li (S’87–M’90) received the B. 281–295.D. Liang. he was a Lecturer with the University of Glasgow. no. 2001. 58–73. and H. he has been with the City University of Hong Kong (CityU). 25–38. 1.. Birattari. Henry Shu-Hung Chung (M’95–SM’03) received the B. Strauss. Kennedy. Feb. From 1987 to 1991. J. He is currently the Chair of the IEEE Beijing (Guangzhou) Section Computational Intelligence Society Chapter. pp. Evol.D. Tijink. 2037. A. PART I: F UNDAMENTAL T HEORY AND A PPLICATIONS.” in Proc. Dr. Kennedy.K. Hong Kong. switchedcapacitor-based converters. ANTS. 9. and power electronic circuits. 1777–1784. data mining. in 1991 and 1994. Boers. vol. Lanzi. C. 1997. 
he carried out leading work in parallel processing for recursive ﬁltering and feedback control. network routing technologies. G. degree in information systems from the City University of Hong Kong. pp. CA: Morgan Kaufmann. U. Raidl. St¨tzle. L. Kowloon. Kumamoto. 40. Oper. LNCS. vol. W. Xiao-Min Hu (S’06) received the B.1566 IEEE TRANSACTIONS ON SYSTEMS. DECEMBER 2010 [30] [31] [32] [33] [34] [35] [36] [37] [38] [39] [40] functions. no. complex-numbered or not. swarm intelligence. J. B. Control. he served as a Visiting Professor with Kumamoto University. 2005. X. Kowloon. Evol. M. Shi. and the Ph. M.s in this area and has over 140 publications. genetic algorithms. IEEE CEC. ontology engineering. and electronic ballast design. He was an Associate Editor and a Guest Editor of the IEEE T RANSACTIONS ON C IRCUITS AND S YSTEMS . decision support systems. he achieved ﬁrst symbolic computing for power electronic circuit design. soft-switching converters. He is currently an Associate Editor of the IEEE T RANSACTIONS ON P OWER E LECTRONICS and the IEEE T RANSACTIONS ON C IRCUITS AND S YSTEMS . Chengdu. R. Comput. D. pp. Apr. “Using genetic algorithms to optimize ACSTSP.. “Problem Deﬁnitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. 2463.. Clerc and J. 1. digital audio ampliﬁers. F. L. Hansen. Pellegrini.” Nanyang Technol.. Li is a Chartered Engineer and a member of the Institution of Engineering and Technology. which opened up a groundbreaking way for microwave feedback circuit design. 203–214. R. pp. South Korea. 82–102. Hartl. he independently invented the “indefinite scattering matrix” theory. fuzzy logic.. R. pp. Jul. Ou Liu received the Ph. L. VOL.Energy Technology Ltd. 213–222. and Y. Comput. and over 80 technical papers in his research areas.. Pilat and T. “Pheromone modiﬁcation strategies for ant algorithms applied to dynamic TSP. Suganthan. Caro. including 110 refereed journal papers in his research areas. vol. Comput. 
G. His research interests include computational intelligence. AND CYBERNETICS—PART B: CYBERNETICS. He is also the holder of ten patents. R.Eng. A. J.S. 3. Yao. Since 1992. Guangzhou. Dorigo. Bullnheimer. Since 1995. Jun Zhang (M’02–SM’08) received the Ph. the M. Nixon. P. Singapore. E.—an associated company of CityU. and Drives in 1998. Hong Kong. Simpson. “A new rank based version of the ant system—A computational study. and E. no. Res. Eds. “Evolutionary programming made faster. Gottlieb. In 2002. vol. he has pioneered into design automation of control systems and discovery of novel systems using evolutionary learning and intelligent search techniques. in 2002. “On MAX-MIN ant system’s parameters. Y. 10.. vol. Hong Kong.D. pp. His research interests include time. Guntsch and M. Gambardella. u Eds. and other intelligence applications. in 1990. Korea Advanced Institute of Science and Technology.D. P. 2005005. no. Glasgow. operations research. Guangzhou. Auger. He is currently a Senior Lecturer with the University of Glasgow and a Visiting Professor with the UESTC. Japan. 282–283. Glasgow. Smith. 7. He was the IEEE Student Branch Counselor and was the Track Chair of the Technical Committee on power electronics circuits and power systems of the IEEE Circuits and Systems Society from 1997 to 1998. R. 2. he was with the U. Supervisor with the Department of Computer Science. “The particle swarm-explosion. A. J.. 2002. and T. vol. degree in computer application and technology in the Department of Computer Science. vol. Comput. N. “MAX-MIN ant system. and Ph. KanGAL Rep. without needing to invert any matrix. White.. Martinoli. ANTS.” IEEE Trans. and G. where he is currently an Assistant Professor with the School of Accounting and Finance. In 1991. J. “Performance evaluation of an advanced local search evolutionary algorithm. and neural networks. where she is currently working toward the Ph.” in Proc. May 2005. pp. 
and their applications in design and optimization on wireless sensor networks.D. Lin. “Parametric study for an ant algorithm applied to water distribution system optimization. Tiwari. 2005.D. E. Maier. In 1996. Middendorf. M. D. Eds. 2006. H. Eberhart. stability. 6. 2006. and C. China. 6. Deb. M. Chung was a recipient of the 2001 Grand Applied Research Excellence Award from the CityU. Chengdu. PART I: F UNDAMENTAL T HEORY AND A PPLICATIONS from 1999 to 2003. vol. A. He has authored four research book chapters and over 250 technical papers. M. Syst.” Central Eur. Since 2004. pp. he has been with the Sun Yat-Sen University. degree in radio electronics science from Sichuan University. Zecchin. China. LNCS. Poli. 1999. 3. A. T. NO. M. Econ. .Sc. Moretti. no. J. Liu. he has been with the Hong Kong Polytechnic University. From 1989 to 1990. Glasgow. and convergence in a multidimensional complex space. vol. pp. In 1992. Evol. 16. EvoWorkshop. Since 2006. ant colony systems. evolutionary computation.” IEEE Trans.-P. Dr.” in Proc. Hansen. degrees in electrical engineering from Hong Kong Polytechnic University. Cagnoni. Sampels. B. and J. Dr. R. M. degree in electrical engineering from the City University of Hong Kong. Kowloon.Eng.and frequency-domain analyses of power electronic circuits.” IEEE Trans. China. Dorigo. Swarm Intelligence.” IEEE Trans. and S. His research interests include business intelligence. he was a Brain Korean 21 Postdoctoral Fellow with the Department of Electrical Engineering and Computer Science. National Engineering Laboratory and the Industrial Systems and Control Limited. 2.
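The shifted-benchmark construction described for f1−f4 and f10−f11 (evaluating at x computes the base function at xnew = x − o, with bounds moved to lnew = l + o and unew = u + o) can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' benchmark code; the sphere base function and the offset values below are chosen purely as an example.

```python
# Sketch of the shift transformation: the wrapped function evaluates the
# base function at x - o, so the optimum x* of the base function moves
# to x* + o in the shifted landscape.
def make_shifted(f, o):
    return lambda x: f([xi - oi for xi, oi in zip(x, o)])

# The search bounds move with the landscape: l_new = l + o, u_new = u + o.
def shift_bounds(l, u, o):
    return ([li + oi for li, oi in zip(l, o)],
            [ui + oi for ui, oi in zip(u, o)])

# Illustration with the sphere function (f5 is the shifted sphere);
# the offset values here are arbitrary, not those used in the paper.
sphere = lambda x: sum(xi * xi for xi in x)
o = [1.0, -2.0, 0.5]
f5 = make_shifted(sphere, o)
print(f5(o))  # 0.0: the minimum of the sphere has been translated to x = o
```

Because only the wrapper subtracts the offset, the same helper turns any base function into its shifted counterpart, which is why a single offset vector o suffices to define all of f1−f4 and f10−f11 from their unshifted originals.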
