x* ∈ S that satisfies f(x*) ≤ f(x), ∀x ∈ S;
- Ω is the set of constraints that the solutions in S must satisfy.
The most significant characteristic of ACO is the use of pheromones and the incremental solution construction [2]. The basic framework of the traditional ACO for discrete optimization is shown in Fig. 1. Besides the initialization step, the traditional ACO is composed of the ants' solution construction, an optional local search, and the pheromone update. The three processes iterate until the termination condition is satisfied.
When solving DOPs, solution component values or the links between the values are associated with pheromones. If component values are associated with pheromones, a component-pheromone matrix M, given by (1), shown at the bottom of the next page, can be generated. The pheromone value τ_i^(j) reflects the desirability of adding the component value x_i^(j) to the solution, j = 1, 2, ..., |D_i|, i = 1, 2, ..., n.
If the links between the values are associated with pheromones, each (x_i^(j), x_u^(l))-tuple will be assigned a pheromone value τ_{i,u}^{(j,l)}, j = 1, 2, ..., |D_i|, l = 1, 2, ..., |D_u|, i, u = 1, 2, ..., n, i ≠ u. Since the treatment of placing pheromones on links or on components is similar, the rest of this paper will focus on the case where pheromones are on components.

HU et al.: SamACO: VARIABLE SAMPLING ACO ALGORITHM FOR CONTINUOUS OPTIMIZATION 1557
As the number of component values is finite in DOPs, the pheromone update can be directly applied to the τ_i^(j) in (1) for increasing or reducing the attraction of the corresponding component values to the ants. The realization of the pheromone update process is the main distinction among different ACO variants in the literature, such as the ant system (AS) [3], the rank-based AS (AS_rank) [32], the MAX-MIN AS [33], and the ant colony system (ACS) [4].
B. Extended ACO Framework for Continuous Optimization

Different from DOPs, a continuous optimization problem is defined as follows [2], [21].

Definition 2: A continuous minimization problem is denoted as (S, f, Ω), where
- S is the search space defined over a finite set of continuous decision variables (solution components) X_i, i = 1, 2, ..., n, with values x_i ∈ [l_i, u_i], l_i and u_i representing the lower and upper bounds of the decision variable X_i, respectively;
- the definitions of f and Ω are the same as in Definition 1.
The difference from the DOP is that, in continuous optimization for ACO, the decision variables are defined in the continuous domain. Therefore, the traditional ACO framework needs to be duly modified.

In the literature, researchers such as Socha and Dorigo [21] proposed a method termed ACO_R to shift the discrete probability distribution in discrete optimization to a continuous probability distribution for solving the continuous optimization problem, using probabilistic sampling. When an ant in ACO_R constructs a solution, a Gram-Schmidt process is used for each variable to handle variable correlation. However, the calculation of the Gram-Schmidt process for each variable leads to a significantly higher computational demand. When the dimension of the objective function increases, the time used by ACO_R also increases rapidly. Moreover, although the correlation between different decision variables is handled, the algorithm may still converge to local optima, particularly when the values in the archive are very close to each other.
However, if a finite number of variable values are sampled from the continuous domain, the traditional ACO algorithms for discrete optimization can be used. This forms the basic idea of the SamACO framework proposed in this paper. The key to successful optimization now becomes how to sample promising variable values and use them to construct high-quality solutions.
Fig. 2. Framework of the proposed SamACO for continuous optimization.
III. PROPOSED SamACO ALGORITHM FOR CONTINUOUS OPTIMIZATION

Here, the detailed implementation of the proposed SamACO algorithm for continuous optimization is presented. The success of SamACO in solving continuous optimization problems depends on an effective variable sampling method and an efficient solution construction process. The variable sampling method in SamACO maintains promising variable values for the ants to select, including variable values selected by the previous ants, diverse variable values for avoiding trapping, and variable values with high accuracy. The ants' construction process selects promising variable values to form high-quality solutions by taking advantage of the traditional ACO method in selecting discrete components.
The SamACO framework for continuous optimization is shown in Fig. 2. Each decision variable X_i has k_i sampled values x_i^(1), x_i^(2), ..., x_i^(k_i) from the continuous domain [l_i, u_i], i = 1, 2, ..., n. The sampled discrete variable values are then used for optimization by a traditional ACO process as in solving DOPs. Fig. 3 illustrates the flowchart of the algorithm. The overall pseudocode of the proposed SamACO is shown in Appendix I.
A. Generation of the Candidate Variable Values
The generation processes of the candidate variable values
in the initialization step and in the optimization iterations are
M = \begin{pmatrix}
\langle x_1^{(1)}, \tau_1^{(1)} \rangle & \langle x_2^{(1)}, \tau_2^{(1)} \rangle & \cdots & \langle x_n^{(1)}, \tau_n^{(1)} \rangle \\
\langle x_1^{(2)}, \tau_1^{(2)} \rangle & \langle x_2^{(2)}, \tau_2^{(2)} \rangle & \cdots & \langle x_n^{(2)}, \tau_n^{(2)} \rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle x_1^{(|D_1|)}, \tau_1^{(|D_1|)} \rangle & \langle x_2^{(|D_2|)}, \tau_2^{(|D_2|)} \rangle & \cdots & \langle x_n^{(|D_n|)}, \tau_n^{(|D_n|)} \rangle
\end{pmatrix}    (1)
1558 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICSPART B: CYBERNETICS, VOL. 40, NO. 6, DECEMBER 2010
Fig. 3. Flowchart of the SamACO algorithm for continuous optimization.
different. Initially, the candidate variable values are randomly sampled in the feasible domain as

x_i^{(j)} = l_i + \frac{u_i - l_i}{m + \psi} \left( j - 1 + \mathrm{rand}_i^{(j)} \right)    (2)

where (m + ψ) is the initial number of candidate values for each variable i, and rand_i^(j) is a uniform random number in [0, 1], i = 1, 2, ..., n, j = 1, 2, ..., m + ψ.
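For concreteness, the stratified initialization in (2) can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the function name and the concrete values of m and ψ (written `psi`) are chosen for the example only.

```python
import random

def initial_candidates(l, u, m, psi, rng=random.Random(42)):
    """Stratified sampling per (2): divide [l, u] into (m + psi) equal
    strata and draw one uniform value inside each stratum."""
    n_values = m + psi
    width = (u - l) / n_values
    # j runs from 1 to m + psi; rand in [0, 1) places the value in stratum j
    return [l + width * (j - 1 + rng.random()) for j in range(1, n_values + 1)]

values = initial_candidates(l=-5.0, u=5.0, m=4, psi=1)
# One value per stratum, so the list is sorted and spread over [l, u)
assert values == sorted(values)
assert all(-5.0 <= v < 5.0 for v in values)
```

Because each stratum contributes exactly one value, the initial candidates cover the whole feasible interval instead of clustering, which is the point of sampling by (2) rather than by plain uniform draws.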
During the optimization iterations, candidate variable values have four sources, i.e., the variable values selected by ants in the previous iteration, a dynamic exploitation, a random exploration, and a best-so-far solution. In each iteration, m ants construct m solutions, resulting in m candidate values for each variable for the next iteration. The best-so-far solution is then updated, representing the best solution that has ever been found. The dynamic exploitation is applied to the best-so-far solution, resulting in g_i new candidate variable values near the corresponding variable values of the best-so-far solution for each variable X_i. Furthermore, a random exploration process generates new values for each variable by discarding the ψ worst solutions that are constructed by the ants in the previous iterations. Suppose that the worst solutions are denoted by x^(m-ψ+1), x^(m-ψ+2), ..., x^(m). The new variable values for the solution x^(j) are randomly generated as

x_i^{(j)} = l_i + (u_i - l_i) \cdot \mathrm{rand}_i^{(j)}    (3)

where i = 1, 2, ..., n and j = m - ψ + 1, ..., m. New values, thus, can be imported into the value group. To summarize, the composition of the candidate variable values for the ants to select is illustrated in Fig. 4. There are a total of (m + g_i + 1) candidate values for each variable X_i.
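The composition illustrated in Fig. 4 can be sketched as a small helper that assembles the candidate list for one variable. The helper name `candidate_pool` and the sample numbers are illustrative, not the authors' code; the discarding count is written `psi`.

```python
import random

def candidate_pool(best_value, ant_values, exploit_values, l, u, psi,
                   rng=random.Random(0)):
    """Assemble the (m + g_i + 1) candidate values of one variable for the
    next iteration (cf. Fig. 4). `ant_values` must be sorted from the best
    solution to the worst, so its last psi entries belong to the discarded
    worst solutions and are replaced by uniform random values as in (3)."""
    kept = ant_values[:len(ant_values) - psi]
    explored = [l + (u - l) * rng.random() for _ in range(psi)]
    # Index 0 holds the best-so-far value, matching j = 0 in (7)
    return [best_value] + kept + explored + exploit_values

# m = 4 ant values, psi = 1 discarded, g_i = 2 exploitation values
pool = candidate_pool(0.5, [0.4, 0.6, 1.2, 3.0], [0.45, 0.55],
                      l=-5.0, u=5.0, psi=1)
assert len(pool) == 4 + 2 + 1   # m + g_i + 1
```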
Fig. 4. Composition of the candidate variable values for the ants to select.
B. Dynamic Exploitation Process

The dynamic exploitation proposed in this paper is effective for introducing fine-tuned variable values into the variable value group. We use a radius r_i to confine the neighborhood exploitation of the variable value x_i, i = 1, 2, ..., n.
The dynamic exploitation is applied to the best-so-far solution x^(0) = (x_1^(0), x_2^(0), ..., x_n^(0)), aiming at searching the vicinity of the variable value x_i^(0) in the interval [x_i^(0) - r_i, x_i^(0) + r_i], i = 1, 2, ..., n. The values of the variables in the best-so-far solution are randomly selected to be increased, unchanged, or reduced as

\tilde{x}_i = \begin{cases}
\min\left(x_i^{(0)} + r_i \lambda_i,\; u_i\right), & 0 \le q < 1/3 \\
x_i^{(0)}, & 1/3 \le q < 2/3 \\
\max\left(x_i^{(0)} - r_i \lambda_i,\; l_i\right), & 2/3 \le q < 1
\end{cases}    (4)

where λ_i ∈ (0, 1] and q ∈ [0, 1) are uniform random values, i = 1, 2, ..., n. Then, the resulting solution \tilde{x} = (\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_n) is evaluated. If the new solution is no worse than the recorded best-so-far solution, we replace the best-so-far solution with the new solution. The above exploitation repeats for φ times. The new variable values that are generated by increasing or reducing a random step length are recorded as x_i^(j), j = m + 1, m + 2, ..., m + g_i, i = 1, 2, ..., n, where g_i counts the number of new variable values in the dynamic exploitation process.
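One exploitation step per (4) can be sketched as follows. This is an illustrative sketch, not the authors' code; the draws `q` and `lam` stand in for the uniform random values q and λ_i of (4).

```python
import random

def exploit(best, r, l, u, rng=random.Random(1)):
    """One neighborhood move per (4): each variable of the best-so-far
    solution is increased, kept, or decreased with probability 1/3 each,
    with the step scaled by the radius r_i and clamped to the bounds."""
    new = []
    for i in range(len(best)):
        q = rng.random()
        lam = 1.0 - rng.random()          # uniform in (0, 1]
        if q < 1 / 3:
            new.append(min(best[i] + r[i] * lam, u[i]))
        elif q < 2 / 3:
            new.append(best[i])
        else:
            new.append(max(best[i] - r[i] * lam, l[i]))
    return new

x = exploit([0.0, 0.0], r=[0.5, 0.5], l=[-5.0, -5.0], u=[5.0, 5.0])
assert all(abs(v) <= 0.5 for v in x)   # every move stays within radius r_i
```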
In each iteration, the radii adaptively change based on the exploitation result. If the best exploitation solution is no worse than the original best-so-far solution (case 1), the radii will be extended. Otherwise (case 2), the radii will be reduced, i.e.,

r_i \leftarrow \begin{cases}
r_i \cdot v_e, & \text{case 1} \\
r_i \cdot v_r, & \text{case 2}
\end{cases}    (5)

where v_e (v_e ≥ 1) is the radius extension rate, and v_r (0 < v_r ≤ 1) is the radius reduction rate. The initial radius value is set as r_i = (u_i - l_i)/(2m), i = 1, 2, ..., n.

Fig. 5. Illustration of two solutions constructed by ant a and ant b.

The extension of the radii can import values at a distance further away from the original value, whereas the reduction of the radii can generate values with high accuracy near the original value. The extension and the reduction of the radii aim at introducing values with different accuracy levels according to the optimization situation.
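The radius adaptation in (5), together with the choice v_e = 1/v_r used later in this paper, can be sketched as follows (the function name and the default v_r = 0.7 are illustrative):

```python
def adapt_radius(r, improved, v_r=0.7):
    """Radius update per (5), with v_e = 1 / v_r: extend the radii after a
    successful exploitation (case 1), shrink them otherwise (case 2)."""
    v_e = 1.0 / v_r
    return [ri * (v_e if improved else v_r) for ri in r]

r = adapt_radius([1.0, 1.0], improved=False)   # shrink toward finer values
assert r == [0.7, 0.7]
```

Repeated failures therefore shrink the neighborhood geometrically, refining the sampled values, while a single success restores a wider search range.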
C. Ants' Solution Construction

After the candidate variable values are determined, m ants are dispatched to construct solutions. Each candidate variable value is associated with pheromones, which bias the ants' selection for solution construction. The index l_i^(k) of the variable value selected by ant k for the ith variable is

l_i^{(k)} = \begin{cases}
\arg\max_{j \in \{1, 2, \ldots, m\}} \left\{ \tau_i^{(j)} \right\}, & \text{if } q < q_0 \\
L_i^{(k)}, & \text{otherwise}
\end{cases}    (6)

where q is a uniform random value in [0, 1), i = 1, 2, ..., n, k = 1, 2, ..., m. The parameter q_0 ∈ [0, 1) controls whether an ant will choose the variable value with the highest pheromone from the m solutions generated in the previous iteration, or randomly choose an index L_i^(k) ∈ {0, 1, ..., m + g_i} according to the probability distribution given by

p_i^{(j)} = \frac{\tau_i^{(j)}}{\sum_{u=0}^{m+g_i} \tau_i^{(u)}}, \quad j = 0, 1, \ldots, m + g_i.    (7)
The constructed solution of ant k is denoted as x^(k) = (x_1^(l_1^(k)), x_2^(l_2^(k)), ..., x_n^(l_n^(k))). An illustration of two ants constructing solutions is shown in Fig. 5. The variable values that are selected by each ant form a solution path. Pheromones will then be adjusted according to the quality of these solution paths.
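The selection rule in (6) and (7) is essentially the pseudo-random-proportional rule of ACS applied per variable. A sketch, with illustrative names, where the pheromones of one variable are held in a flat list whose index 0 is the best-so-far value:

```python
import random

def select_index(tau, m, q0, rng=random.Random(3)):
    """Index selection per (6)-(7) for one variable. With probability q0 the
    ant greedily picks the highest-pheromone value among the m values from
    the previous iteration; otherwise it samples over all m + g_i + 1
    candidates with probability proportional to pheromone."""
    if rng.random() < q0:
        # greedy choice over tau_i^(1) ... tau_i^(m), as in (6)
        return max(range(1, m + 1), key=lambda j: tau[j])
    # roulette-wheel sampling over indices 0 ... m + g_i, as in (7)
    pick, acc = rng.random() * sum(tau), 0.0
    for j, t in enumerate(tau):
        acc += t
        if pick <= acc:
            return j
    return len(tau) - 1

tau = [0.1, 0.37, 0.2, 0.1, 0.15, 0.1]      # m = 3 ant values, g_i = 2
idx = select_index(tau, m=3, q0=1.0)        # forced greedy branch
assert idx == 1                             # highest pheromone among 1..3
idx2 = select_index(tau, m=3, q0=0.0)       # forced probabilistic branch
assert 0 <= idx2 <= 5
```

Calling `select_index` once per variable i and collecting the chosen candidate values yields one ant's solution path.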
D. Pheromone Update

Initially, each candidate variable value is assigned an initial pheromone value T_0. After evaluating the m solutions constructed by ants, the solutions are sorted by their objective function values in an order from the best to the worst. Suppose that the sorted solutions are arranged as x^(1), x^(2), ..., x^(m). The pheromones on the selected variable values are evaporated as

\tau_i^{(j)} \leftarrow (1 - \rho)\, \tau_i^{(j)} + \rho\, T_{\min}    (8)

where 0 < ρ < 1 is the pheromone evaporation rate, and T_min is the predefined minimum pheromone value, i = 1, 2, ..., n, j = 1, 2, ..., m.
The variable values in the ε best solutions have their pheromones reinforced as

\tau_i^{(j)} \leftarrow (1 - \alpha)\, \tau_i^{(j)} + \alpha\, T_{\max}    (9)

where 0 < α < 1 is the pheromone reinforcement rate, and T_max is the predefined maximum pheromone value, i = 1, 2, ..., n, j = 1, 2, ..., ε, with ε being the number of the high-quality solutions that receive additional pheromones on the variable values. The pheromones on the variable values that are generated by the exploration process and the dynamic exploitation process are assigned as T_0. In each iteration, the pheromone values on the variable values of the best-so-far solution are set equal to the pheromone values on the iteration-best solution.
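A compact sketch of the update in (8) and (9), for the pheromone list of one variable; the function name and the parameter values are illustrative, taken from the defaults discussed in Section IV:

```python
def update_pheromones(tau, selected, elite, rho=0.5, alpha=0.3,
                      t_min=0.1, t_max=1.0):
    """Pheromone update per (8)-(9): evaporate toward T_min on the indices
    selected by the m ants, then reinforce toward T_max on the indices
    belonging to the epsilon best solutions."""
    for j in selected:                       # evaporation, (8)
        tau[j] = (1 - rho) * tau[j] + rho * t_min
    for j in elite:                          # reinforcement, (9)
        tau[j] = (1 - alpha) * tau[j] + alpha * t_max
    return tau

tau = update_pheromones([0.1] * 5, selected=[1, 2, 3, 4], elite=[1])
assert max(tau) == tau[1]   # the elite value ends up with the most pheromone
```

Both rules are convex combinations, so every pheromone value stays inside [T_min, T_max] without the explicit clamping that MAX-MIN AS uses.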
IV. PARAMETER STUDY OF SamACO

It can be seen that the proposed SamACO involves several parameters. Here, we will investigate the effects of these parameters on the proposed algorithm.
A. Effects of the Parameters in SamACO

1) Discarding Number ψ: The discarding number ψ ≥ 1 is used for maintaining diversity in the candidate variable values. In each iteration, the random exploration process replaces the ψ worst solutions constructed by ants. The more solutions are replaced, the higher the diversity in the candidate variable values. Diversity is important for avoiding stagnation, but a high discarding rate can slow down the convergence speed.

2) Elitist Number ε: Different from the discarding number ψ, the elitist number ε determines the ε best solutions constructed by ants that receive additional pheromones. The variable values with extra pheromone deposits have higher chances of being selected by ants. Therefore, the elitist number helps preserve promising variable values and accelerate the convergence speed.

Neither ψ nor ε should be set too large. In fact, a small discarding number and a small elitist number are sufficient because their effects accumulate over the iterations.
3) Traditional Parameters in ACO: Similar to the ACS [4], the parameters of SamACO that simulate the ants' construction behavior include the number of ants m, the parameter q_0, the pheromone evaporation rate ρ, the pheromone reinforcement rate α, the initial pheromone value T_0, and the lower and upper limits of pheromone values T_min and T_max. In the literature, a great deal of work has been carried out on the parametric study of ACO, such as [4] and [34]–[37]. In Section IV-B, we will use experiments to find suitable settings for these parameters.
4) Parameters in Dynamic Exploitation: The parameters in the dynamic exploitation include the exploitation frequency φ and the radius reduction and extension rates v_r and v_e, respectively. The exploitation frequency φ controls the number of values to be sampled in the neighborhood of the best-so-far solution per iteration. Since each candidate variable value group is mainly composed of the m variable values that are selected by the previous ants and the variable values from the local exploitation, the value of φ influences the selection probabilities of the m variable values. Furthermore, a large φ provides more candidate variable values surrounding the neighborhood of the best-so-far solution, but it may induce premature convergence. On the contrary, a small φ may not sample enough local variable values, and, thus, the speed of approaching the local optimum is slow. On the other hand, the radius reduction and extension rates v_r and v_e, respectively, adapt the exploitation neighborhood range according to the solution quality. Because the radius extension is a reversal operation of the radius reduction, we set v_e = 1/v_r in this paper.

Fig. 6. Contour illustration of using different values of m and φ on f_6. The contour represents the average number of FEs used for achieving an accuracy level smaller than δ = 1, with the other parameters set as default.
B. Numerical Analysis on Parameter Settings

Some of the SamACO parameters have their default settings. They are set as T_min = T_0 = 0.1, T_max = 1.0, ψ = 1, and ε = 1. There is generally no need to change these values. However, the performance of the proposed algorithm is more sensitive to the other parameters.

According to our prior experiments on a set of unimodal and multimodal functions (listed in Appendix II) with n = 30, the parameters m = 20, φ = 20, ρ = 0.5, α = 0.3, q_0 = 0.1, and v_r = 0.7 are used for comparing the performance of SamACO with other algorithms. In the remainder of this section, we undertake a numerical experiment on f_6 (the shifted Schwefel's problem 1.2) to discuss the influence of these parameters. The algorithm terminates when the number of function evaluations (FEs) reaches 300 000. Each parameter setting is independently tested 25 times.
1) Correlations Between Ant Number m and Exploitation Frequency φ: The correlations between m = 1, 4, 8, 12, ..., 40 and φ = 0, 1, 2, 3, 4, 8, 12, ..., 40 have been tested by fixing the values of the other parameters. Fig. 6 shows a contour graph representing the average number of FEs used for achieving an accuracy level smaller than δ = f(x) - f(x*) = 1, where f(x) and f(x*) denote the objective function values of the best solution found and the global optimum, respectively.
APPENDIX I
PSEUDOCODE OF THE PROPOSED SamACO

1) /* Initialization */
   Initialize parameter values as in Table I
   For i := 1 to n do
       Sample the candidate values x_i^(j) according to (2), j = 1, 2, ..., m + ψ
       τ_i^(j) := T_0, j = 1, 2, ..., m + ψ
   End-for
   Evaluate the initial solutions and record the best-so-far solution x^(0)
2) /* Dynamic exploitation of the best-so-far solution */
   flag := False
   Repeat φ times
       For i := 1 to n do
           Generate a value x̃_i in [x_i^(0) − r_i, x_i^(0) + r_i] according to (4)
           If x̃_i is an increased or reduced value
               g_i := g_i + 1 /* Counter adds 1 */
               x_i^(m+g_i) := x̃_i
               τ_i^(m+g_i) := T_0
           End-if
       End-for
       Evaluate the solution (x̃_1, x̃_2, ..., x̃_n) according to the objective function
       If the resulting solution is no worse than the best-so-far solution
           For i := 1 to n do x_i^(0) := x̃_i End-for /* Update the best-so-far solution */
           flag := True
       End-if
   End-repeat
   If flag = True
       For i := 1 to n do r_i := r_i · v_e End-for
   Else
       For i := 1 to n do r_i := r_i · v_r End-for
   End-if
   /* Random exploration */
   For j := m − ψ + 1 to m do
       For i := 1 to n do
           x_i^(j) := l_i + (u_i − l_i) · rand_i^(j)
           τ_i^(j) := T_0
       End-for
   End-for
3) /* Ants' solution construction */
   For k := 1 to m do
       For i := 1 to n do
           Select an index l_i^(k) according to (6) and (7)
           x_i^(k) := x_i^(l_i^(k))
       End-for
       Evaluate the kth solution according to the objective function
   End-for
4) /* Pheromone update */
   Sort the m solutions from the best to the worst
   Update the best-so-far solution
   Perform pheromone evaporation to the m solutions according to (8)
   Perform pheromone reinforcement to the ε best solutions according to (9)
   For i := 1 to n do τ_i^(0) := τ_i^(1) End-for
5) If (Termination_condition = True)
       Output the best solution
   Else goto Step 2)
   End-if
APPENDIX II
BENCHMARK FUNCTIONS
The test functions used for experiments include the first ten functions for the CEC 2005 Special Session on real-parameter optimization [40], as f_5–f_9 and f_12–f_16 in this paper, and functions from [30] by adding a shifted offset o = (0, 1, 2, ..., n − 1), as f_1–f_4 and f_10–f_11. The shifted variable vectors and bounds for f_1–f_4 and f_10–f_11 are defined as x_new = x − o, l_new = l + o, and u_new = u + o, where x, l, and u are the variable vectors, lower bound, and upper bound defined in [30], and x_new, l_new, and u_new are the corresponding shifted vectors. The test functions are as follows:
f_1: Shifted step.
f_2: Shifted Schwefel's P2.21.
f_3: Shifted Schwefel's P2.22.
f_4: Shifted quartic with noise.
f_5: Shifted sphere.
f_6: Shifted Schwefel's P1.2.
f_7: Shifted rotated high-conditioned elliptic.
f_8: Shifted Schwefel's P1.2 with noise.
f_9: Schwefel's P2.6 with global optimum on bounds.
f_10: Shifted Schwefel's P2.26.
f_11: Shifted Ackley.
f_12: Shifted Rosenbrock.
f_13: Shifted rotated Griewank without bounds.
f_14: Shifted rotated Ackley with global optimum on bounds.
f_15: Shifted Rastrigin.
f_16: Shifted rotated Rastrigin.
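The shifting scheme above (x_new = x − o, with the bounds moved by o) can be illustrated with a short sketch; the helper name `shift` and the sphere example are illustrative and not part of the benchmark suite definition.

```python
def shift(f, o, l, u):
    """Shifted variant of a benchmark: evaluating g(x) = f(x - o) moves the
    optimum from x* to x* + o, and the box constraints move with it."""
    l_new = [li + oi for li, oi in zip(l, o)]
    u_new = [ui + oi for ui, oi in zip(u, o)]
    g = lambda x: f([xi - oi for xi, oi in zip(x, o)])
    return g, l_new, u_new

sphere = lambda x: sum(v * v for v in x)     # optimum 0 at the origin
o = [0.0, 1.0, 2.0]                          # o = (0, 1, ..., n - 1)
g, l_new, u_new = shift(sphere, o, l=[-100] * 3, u=[100] * 3)
assert g(o) == 0.0                           # the shifted optimum sits at x = o
```

Shifting prevents an algorithm from scoring well merely by being biased toward the origin or the center of the search range.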
ACKNOWLEDGMENT
The authors would like to thank Dr. M. Dorigo, Dr. K. Socha,
Dr. P. N. Suganthan, Dr. X. Yao, and Dr. N. Hansen for letting us
use their source codes for the benchmark functions. The authors
would also like to thank the anonymous associate editor and the
referees for their valuable comments to improve the quality of
this paper.
REFERENCES
[1] M. Dorigo, G. D. Caro, and L. M. Gambardella, "Ant algorithms for discrete optimization," Artif. Life, vol. 5, no. 2, pp. 137–172, Apr. 1999.
[2] M. Dorigo and T. Stützle, Ant Colony Optimization. Cambridge, MA: MIT Press, 2004.
[3] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant system: Optimization by a colony of cooperating agents," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 26, no. 1, pp. 29–41, Feb. 1996.
[4] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 53–66, Apr. 1997.
[5] G. Leguizamón and Z. Michalewicz, "A new version of ant system for subset problems," in Proc. IEEE CEC, Piscataway, NJ, 1999, pp. 1459–1464.
[6] K. M. Sim and W. H. Sun, "Ant colony optimization for routing and load-balancing: Survey and new directions," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 33, no. 5, pp. 560–572, Sep. 2003.
[7] S. S. Iyengar, H.-C. Wu, N. Balakrishnan, and S. Y. Chang, "Biologically inspired cooperative routing for wireless mobile sensor networks," IEEE Syst. J., vol. 1, no. 1, pp. 29–37, Sep. 2007.
[8] W.-N. Chen and J. Zhang, "An ant colony optimization approach to a grid workflow scheduling problem with various QoS requirements," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 39, no. 1, pp. 29–43, Jan. 2009.
[9] D. Merkle, M. Middendorf, and H. Schmeck, "Ant colony optimization for resource-constrained project scheduling," IEEE Trans. Evol. Comput., vol. 6, no. 4, pp. 333–346, Aug. 2002.
[10] G. Wang, W. Gong, B. DeRenzi, and R. Kastner, "Ant colony optimizations for resource- and timing-constrained operation scheduling," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 26, no. 6, pp. 1010–1029, Jun. 2007.
[11] J. Zhang, H. S.-H. Chung, A. W.-L. Lo, and T. Huang, "Extended ant colony optimization algorithm for power electronic circuit design," IEEE Trans. Power Electron., vol. 24, no. 1, pp. 147–162, Jan. 2009.
[12] N. Monmarché, G. Venturini, and M. Slimane, "On how Pachycondyla apicalis ants suggest a new search algorithm," Future Gener. Comput. Syst., vol. 16, no. 9, pp. 937–946, Jun. 2000.
[13] G. Bilchev and I. C. Parmee, "The ant colony metaphor for searching continuous design spaces," in Proc. AISB Workshop Evol. Comput., vol. 933, LNCS, 1995, pp. 25–39.
[14] M. Wodrich and G. Bilchev, "Cooperative distributed search: The ants' way," Control Cybern., vol. 26, no. 3, pp. 413–446, 1997.
[15] M. Mathur, S. B. Karale, S. Priye, V. K. Jayaraman, and B. D. Kulkarni, "Ant colony approach to continuous function optimization," Ind. Eng. Chem. Res., vol. 39, no. 10, pp. 3814–3822, 2000.
[16] J. Dréo and P. Siarry, "Continuous interacting ant colony algorithm based on dense heterarchy," Future Gener. Comput. Syst., vol. 20, no. 5, pp. 841–856, Jun. 2004.
[17] X.-M. Hu, J. Zhang, and Y. Li, "Orthogonal methods based ant colony search for solving continuous optimization problems," J. Comput. Sci. Technol., vol. 23, no. 1, pp. 2–18, Jan. 2008.
[18] J. Dréo and P. Siarry, "An ant colony algorithm aimed at dynamic continuous optimization," Appl. Math. Comput., vol. 181, no. 1, pp. 457–467, Oct. 2006.
[19] H. Huang and Z. Hao, "ACO for continuous optimization based on discrete encoding," in Proc. ANTS, vol. 4150, LNCS, M. Dorigo, L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli, and T. Stützle, Eds., 2006, pp. 504–505.
[20] K. Socha, "ACO for continuous and mixed-variable optimization," in Proc. ANTS, vol. 3172, LNCS, M. Dorigo, M. Birattari, C. Blum, L. M. Gambardella, F. Mondata, and T. Stützle, Eds., 2004, pp. 25–36.
[21] K. Socha and M. Dorigo, "Ant colony optimization for continuous domains," Eur. J. Oper. Res., vol. 185, no. 3, pp. 1155–1173, Mar. 2008.
[22] P. Korošec and J. Šilc, "High-dimensional real-parameter optimization using the differential ant-stigmergy algorithm," Int. J. Intell. Comput. Cybern., vol. 2, no. 1, pp. 34–51, 2009.
[23] M. Kong and P. Tian, "A direct application of ant colony optimization to function optimization problem in continuous domain," in ANTS 2006, vol. 4150, LNCS, M. Dorigo, L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli, and T. Stützle, Eds., 2006, pp. 324–331.
[24] P. Korošec, J. Šilc, K. Oblak, and F. Kosel, "The differential ant-stigmergy algorithm: An experimental evaluation and a real-world application," in Proc. IEEE CEC, Singapore, 2007, pp. 157–164.
[25] P. Korošec and J. Šilc, "The differential ant-stigmergy algorithm for large scale real-parameter optimization," in ANTS 2008, vol. 5217, LNCS, M. Dorigo, M. Birattari, C. Blum, M. Clerc, T. Stützle, and A. F. T. Winfield, Eds., 2008, pp. 413–414.
[26] S. Tsutsui, "An enhanced aggregation pheromone system for real-parameter optimization in the ACO metaphor," in Proc. ANTS, vol. 4150, LNCS, M. Dorigo, L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli, and T. Stützle, Eds., 2006, pp. 60–71.
[27] F. O. de França, G. P. Coelho, F. J. V. Zuben, and R. R. de F. Attux, "Multivariate ant colony optimization in continuous search spaces," in Proc. GECCO, Atlanta, GA, 2008, pp. 9–16.
[28] S. H. Pourtakdoust and H. Nobahari, "An extension of ant colony system to continuous optimization problems," in Proc. ANTS, vol. 3172, LNCS, M. Dorigo, M. Birattari, C. Blum, L. M. Gambardella, F. Mondata, and T. Stützle, Eds., 2004, pp. 294–301.
[29] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281–295, Jun. 2006.
[30] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 82–102, Jul. 1999.
[31] A. Auger and N. Hansen, "Performance evaluation of an advanced local search evolutionary algorithm," in Proc. IEEE CEC, 2005, vol. 2, pp. 1777–1784.
[32] B. Bullnheimer, R. F. Hartl, and C. Strauss, "A new rank based version of the ant system: A computational study," Central Eur. J. Oper. Res. Econ., vol. 7, no. 1, pp. 25–38, 1997.
[33] T. Stützle and H. H. Hoos, "MAX-MIN ant system," Future Gener. Comput. Syst., vol. 16, no. 8, pp. 889–914, Jun. 2000.
[34] A. C. Zecchin, A. R. Simpson, H. R. Maier, and J. B. Nixon, "Parametric study for an ant algorithm applied to water distribution system optimization," IEEE Trans. Evol. Comput., vol. 9, no. 2, pp. 175–191, Apr. 2005.
[35] M. Guntsch and M. Middendorf, "Pheromone modification strategies for ant algorithms applied to dynamic TSP," in Proc. EvoWorkshop, vol. 2037, LNCS, E. J. W. Boers, J. Gottlieb, P. L. Lanzi, R. E. Smith, S. Cagnoni, E. Hart, G. R. Raidl, and H. Tijink, Eds., 2001, pp. 213–222.
[36] P. Pellegrini, D. Favaretto, and E. Moretti, "On MAX-MIN ant system's parameters," in Proc. ANTS, vol. 4150, LNCS, M. Dorigo, L. M. Gambardella, M. Birattari, A. Martinoli, R. Poli, and T. Stützle, Eds., 2006, pp. 203–214.
[37] M. L. Pilat and T. White, "Using genetic algorithms to optimize ACS-TSP," in Proc. ANTS, vol. 2463, LNCS, M. Dorigo, G. D. Caro, and M. Sampels, Eds., 2002, pp. 282–283.
[38] J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence. San Mateo, CA: Morgan Kaufmann, 2001.
[39] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. Evol. Comput., vol. 6, no. 1, pp. 58–73, Feb. 2002.
[40] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, and S. Tiwari, "Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization," Nanyang Technol. Univ., Singapore, KanGAL Rep. 2005005, May 2005.
Xiao-Min Hu (S'06) received the B.S. degree in network engineering in 2006 from Sun Yat-Sen University, Guangzhou, China, where she is currently working toward the Ph.D. degree in computer application and technology in the Department of Computer Science.
Her current research interests include artificial intelligence, evolutionary computation, swarm intelligence, and their applications in design and optimization on wireless sensor networks, network routing technologies, and other intelligence applications.
Jun Zhang (M'02–SM'08) received the Ph.D. degree in electrical engineering from the City University of Hong Kong, Kowloon, Hong Kong, in 2002.
From 2003 to 2004, he was a Brain Korea 21 Postdoctoral Fellow with the Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, Daejeon, South Korea. Since 2004, he has been with Sun Yat-Sen University, Guangzhou, China, where he is currently a Professor and Ph.D. Supervisor with the Department of Computer Science. He has authored seven research books and book chapters, and over 80 technical papers in his research areas. His research interests include computational intelligence, data mining, wireless sensor networks, operations research, and power electronic circuits.
Dr. Zhang is currently an Associate Editor of the IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS. He is currently the Chair of the IEEE Beijing (Guangzhou) Section Computational Intelligence Society Chapter.
Henry Shu-Hung Chung (M'95–SM'03) received the B.Eng. and Ph.D. degrees in electrical engineering from Hong Kong Polytechnic University, Kowloon, Hong Kong, in 1991 and 1994, respectively.
Since 1995, he has been with the City University of Hong Kong (CityU), Hong Kong. He is currently a Professor with the Department of Electronic Engineering and the Chief Technical Officer of e.Energy Technology Ltd., an associated company of CityU. He has authored four research book chapters and over 250 technical papers, including 110 refereed journal papers in his research areas. He is also the holder of ten patents. His research interests include time- and frequency-domain analyses of power electronic circuits, switched-capacitor-based converters, random-switching techniques, control methods, digital audio amplifiers, soft-switching converters, and electronic ballast design.
Dr. Chung was a recipient of the 2001 Grand Applied Research Excellence Award from the CityU. He was the IEEE Student Branch Counselor and was the Track Chair of the Technical Committee on power electronics circuits and power systems of the IEEE Circuits and Systems Society from 1997 to 1998. He was an Associate Editor and a Guest Editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS, PART I: FUNDAMENTAL THEORY AND APPLICATIONS from 1999 to 2003. He is currently an Associate Editor of the IEEE TRANSACTIONS ON POWER ELECTRONICS and the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS, PART I: FUNDAMENTAL THEORY AND APPLICATIONS.
Yun Li (S'87–M'90) received the B.Sc. degree in radio electronics science from Sichuan University, Chengdu, China, in 1984, the M.Eng. degree in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, in 1987, and the Ph.D. degree in computing and control engineering from the University of Strathclyde, Glasgow, U.K., in 1990.
From 1989 to 1990, he was with the U.K. National Engineering Laboratory and Industrial Systems and Control Limited, Glasgow. In 1991, he became a Lecturer with the University of Glasgow, Glasgow. In 2002, he served as a Visiting Professor with Kumamoto University, Kumamoto, Japan. He is currently a Senior Lecturer with the University of Glasgow and a Visiting Professor with the UESTC. In 1996, he independently invented the indefinite scattering matrix theory, which opened up a groundbreaking way for microwave feedback circuit design. From 1987 to 1991, he carried out leading work in parallel processing for recursive filtering and feedback control. In 1992, he achieved the first symbolic computing for power electronic circuit design, without needing to invert any matrix, complex-numbered or not. Since 1992, he has pioneered the design automation of control systems and the discovery of novel systems using evolutionary learning and intelligent search techniques. He established the IEEE Computer-Aided Control System Design Evolutionary Computation Working Group and the European Network of Excellence in Evolutionary Computing (EvoNet) Workgroup on Systems, Control, and Drives in 1998. He has supervised 15 Ph.D. students in this area and has over 140 publications.
Dr. Li is a Chartered Engineer and a member of the Institution of Engineering and Technology.
Ou Liu received the Ph.D. degree in information systems from the City University of Hong Kong, Kowloon, Hong Kong, in 2006.
Since 2006, he has been with the Hong Kong Polytechnic University, Hong Kong, where he is currently an Assistant Professor with the School of Accounting and Finance. His research interests include business intelligence, decision support systems, ontology engineering, genetic algorithms, ant colony systems, fuzzy logic, and neural networks.