
Computers and Chemical Engineering 28 (2004) 1325–1336

A new robust technique for optimal control


of chemical engineering processes
Simant R. Upreti
Department of Chemical Engineering, Ryerson University, Toronto, Ont., Canada M5B 2K3
Received 1 August 2002; received in revised form 1 September 2003; accepted 3 September 2003

Abstract
A new optimal control technique is presented to provide good-quality, robust solutions for chemical engineering problems, which are generally non-linear and highly constrained. The technique requires neither an input of a feasible control solution nor any auxiliary condition. The technique generates optimal control by applying the genetic operations of selection, crossover, and mutation to an initial population of random, binary-coded deviation vectors. Each element of a deviation vector corresponds to a control stage, and is a deviation from some mean control value randomized initially for that stage. The deviation and the mean control value map on to the actual discrete step value of the control at that stage. The mapping is logarithmic in the beginning, but is later allowed to alternate with a linear one. The genetic operations are periodically followed by the replacement of the mean control values by a newly available optimal control solution, and by the size-variation of the control domain between its limits. The optimal control technique is successfully tested on four challenging problems of chemical engineering.
© 2003 Elsevier Ltd. All rights reserved.
Keywords: Non-linear systems; Optimal control; Genetic algorithms

1. Introduction
The optimal control of chemical engineering processes offers the realization of high standards of product purity, operational safety, and environmental regulations, in addition to reduction in production costs. Given a wide spectrum of optimal control applications (Biegler, Cervantes, & Wächter, 2002), there has been a vigorous effort in the last 10 years to develop efficient optimal control techniques. These techniques are based on variational calculus using Pontryagin's maximum principle (Pontryagin, Boltyanskii, Gamkrelidge, & Mishchenko, 1962), dynamic programming (first applied by Luus, 1990), non-linear programming (summarized by Biegler et al., 2002), and search. While the variational techniques require gradient information, the programming techniques rely on either gradient information or enumeration (direct or stochastic). The search techniques use direct search (Luus & Hennessy, 1999), semi-exhaustive search (Gupta, 1995), or evolutionary search (Lee, Han, & Chang, 1997; Lee, Han, & Chang, 1999; Wang & Chiou, 1997).

Tel.: +1-416-979-5000x6344; fax: +1-416-979-5044.


E-mail address: supreti@ryerson.ca (S.R. Upreti).

0098-1354/$ – see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compchemeng.2003.09.003

The determination of optimal control can be very difficult and open-ended due to the frequent presence of non-linearity in process models, inequality constraints on process variables, and implicit process discontinuities (Barton, Allgor, Feehery, & Galán, 1998). This presence gives rise to a multimodal, non-continuous relation, or functional, between a performance index and a control function. The gradient-based techniques are limited to unimodal and continuous functionals. For multimodal and non-continuous functionals, the results of these techniques are sensitive to starting points. The enumeration-based techniques, on the other hand, require an unreasonably large number of performance index evaluations, even for modestly sized optimal control problems. The search techniques mentioned above carry out a reasonable number of performance index evaluations with different sizes of the control domain, and combinations of several control functions. With this approach, these techniques try to increase the probability of generating optimal solutions.
A good optimal control technique, especially when applied for the first time on a particular process, should
1. provide consistent, good quality results regardless of
starting points;


2. use a reasonable number of performance index evaluations.


The results of such a technique will provide better starting
points for problem-specific techniques to fine tune the results, and check their globality. Chemical engineers further
desire the ease of implementation and programming, minimum restrictions and auxiliary conditions, and freedom from
analytical and error-prone derivatives.
In this work, a new optimal control technique is presented
to provide robust, good quality solutions independent of
starting points, and auxiliary conditions. The technique is
based on Genetic Algorithms (Holland, 1975), which simulate the evolution of living beings. These algorithms generate
robust optimal solutions (Goldberg, 1989a) by stochastically
emphasizing (selecting) optimally better variable-values, recombining (crossing over) them, and changing them slightly
(mutating) in a randomly generated collection (population).
The previous applications of Genetic Algorithms on optimal control problems include the works of Michalewicz,
Janikow, and Krawczyk (1992), Seywald, Kumar, and
Deshpande (1995), and Lee et al. (1997). In particular,
Lee et al. (1997) applied Genetic Algorithms for the optimal control of continuous polymerization reactors. They
obtained better results in comparison to iterative dynamic
programming as well as sequential quadratic programming.
Explained later in Section 3, the presented optimal control technique applies genetic operations on a population of random, binary-coded deviation vectors. An element of a deviation vector carries the value of the deviation of control from its mean value. A control vector is mapped from each deviation vector together with a vector of randomly initialized mean control values. Each element of these vectors corresponds to a control stage. After a few repetitions of genetic operations, a newly generated optimal control vector is used to update the vector of mean control values, and the size of the control domain is varied within its limits. The genetic operations are then applied again. For the size-variation of the control domain, its successive contraction is alternated with successive expansion. The mapping of control vectors is kept logarithmic in the beginning, but is later alternated with a linear one. The salient features of this technique, namely,
1. the update of mean control values;
2. the alternation of the size-variation of the control domain;
3. the alternation of the control mapping;
distinguish it from the previous applications of Genetic Algorithms. These features are intended to generate desirable solutions with a small population size, or equivalently, reduced performance index evaluations and computation times. This outcome is promoted by the above features, which increase the diversity of the population under genetic operations, and avoid premature convergence.

2. Problem formulation
The following process model is considered for optimal control:

dx/dt = f(x, u),    0 ≤ t ≤ tf    (1)

In Eq. (1), x is a vector of state variables, and u is a control function within some specified bounds. Both x and u are functions of time, t, over a given process operation time tf. The state vector x is known at t = 0. Eq. (1) is subject to the satisfaction of g, a vector of constraints on x and u.

The objective is to obtain the optimal control function, which would optimize a given performance index J(x). The discrete step values of u, equispaced over the process operation time, are considered as the optimization variables. These step values form a control vector u.
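To make this parameterization concrete, the following minimal C++ sketch shows how such a control vector of Nu equispaced step values can be evaluated as a piecewise-constant function of time during the integration of Eq. (1); the function and variable names are illustrative, not taken from the paper.

#include <vector>

// Piecewise-constant control: the Nu step values in uStep are held
// constant over equal sub-intervals of the process operation time tf.
double controlAt(const std::vector<double>& uStep, double tf, double t)
{
    const int Nu = static_cast<int>(uStep.size());
    int i = static_cast<int>(t / tf * Nu);   // index of the active stage
    if (i < 0)   i = 0;                      // guard the interval end points
    if (i >= Nu) i = Nu - 1;
    return uStep[i];
}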

3. The optimal control technique


Given Nu stages of step values for control function u, the presented optimal control technique is applied to a problem by randomly initializing a mean control value ūi for each stage, i = 0, 1, . . ., Nu − 1. At any ith stage, the step value of control, ui, is calculable from ūi and a binary-coded deviation Δui,2 by means of some mapping. Between the control limits of umin and umax, Fig. 1 shows a snapshot of ui, ūi, and Δui,2, the Nu values of each of which form the vectors u, ū and Δu2, respectively. In addition to ū, a population of Δu2 is also randomly generated. The mapping to calculate control u from ū and any Δu2 in its population is described in the next section. Logarithmic mapping is used in the beginning to emphasize relative precision within the elements of u (Coley, 1999a).

To generate an optimal control vector û, the genetic operations of selection, crossover, and mutation are successively applied to the population of binary-coded deviation vectors Δu2. A value of the performance index is associated with each Δu2 by using its corresponding control u (as calculated from the mapping) to solve the process model of the problem.

Fig. 1. A snapshot of the ith stage mean (ūi), deviation (Δui,2), and control (ui = ui(ūi, Δui,2)) values in a control domain between the limits umin and umax.


These performance index evaluations are done before selection. The value of each performance index is scaled up by raising it to a specified power, n > 1, to favor the optimally better members of the population during selection (Goldberg, 1989b; Coley, 1999b). If any process constraint is violated for any Δu2, its performance index is set to zero so that the infeasible Δu2 is eliminated in the next round of selection after participating in crossover and mutation.

After a specified number of generations, Ngen, of genetic operations, the domain of u is contracted, and ū is replaced by û following the approach of Luus and Hennessy (1999). This completes one iteration of the optimal control technique. The control domain is contracted in successive iterations until it reaches its minimum size, after which it is expanded successively. This expansion helps in searching for a better optimal control vector in a bigger control domain. When the maximum size of the control domain is reached, its successive contraction is resumed to allow the refinement of a hopefully new optimal control vector. In this way, successive contraction is alternated with successive expansion for the size-variation of the control domain.

When the fractional improvement of the optimal performance index falls below a specified level, the alternation of the logarithmic mapping with a linear mapping is enabled between the iterations. The three operations of (i) replacing ū by û, (ii) alternating the size-variation of the control domain, and (iii) alternating the mapping avoid premature stagnation of the population under genetic operations, and perpetually promote the search and refinement of an optimal control vector. The application of the optimal control technique is terminated after a specified number of iterations.
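A minimal sketch of the fitness scaling and constraint handling just described is given below; the names are illustrative and the bit-level encoding and selection machinery themselves are not shown. The performance index of a feasible member is raised to the power n, while an infeasible member receives zero fitness so that selection eliminates it.

#include <cmath>

// Scaled fitness used during selection: J^n (n > 1) favors the better
// members of the population; infeasible members get zero fitness.
double scaledFitness(double J, double n, bool feasible)
{
    if (!feasible || J <= 0.0) return 0.0;   // constraint violation or failed run
    return std::pow(J, n);
}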
3.1. Mapping
For any ith control stage, a mapping relates a binary-coded deviation Δui,2 (positive or negative) and a mean control value ūi to the decimal value of a control ui. Thus, a mapping provides a control vector u corresponding to each binary-coded deviation vector Δu2 in its population. The presented optimal control technique uses the following logarithmic and linear mappings.

Logarithmic mapping: For any ith stage of control, the logarithmic mapping provides the step value, ui = b^yi, where

b = umax − umin    (2)

yi = logb ūi + [Δui,2/(2^Nbit − 1)] logb D    (3)

In Eq. (2), b is the logarithmic base, and umax and umin are the maximum and minimum values of control function u. In Eq. (3), D is the variable size of the control domain between the limits of Dmin > 0 and b, and Nbit is the number of bits specified to represent any ith element of Δu2, i.e. Δui,2.

Linear mapping: The linear mapping is straightforward, and is given by

ui = ūi + [Δui,2/(2^Nbit − 1)] D    (4)

Logarithmic mapping emphasizes the relative order of magnitudes of control values at different stages. This property
leads to an efficient search of feasible control solutions in a
large control domain with a low value of Nbit . This search is
especially important during the initial iterations of the presented optimal control technique, which later on alternates
the logarithmic mapping with the linear one to refine an optimal control solution.
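The two mappings can be sketched in C++ as follows. Here dev stands for the decimal value of the binary-coded deviation Δui,2 (which may be negative); the appearance of the domain size D in the reconstructed linear mapping of Eq. (4) is an assumption made for consistency with Eq. (3), and the bit-level encoding of the deviation is not reproduced.

#include <cmath>

// Logarithmic mapping, Eqs. (2) and (3): u_i = b^{y_i} with
// y_i = log_b(uMean) + dev/(2^Nbit - 1) * log_b(D) and b = uMax - uMin.
double mapLogarithmic(double uMean, double dev, int Nbit,
                      double D, double uMin, double uMax)
{
    const double b = uMax - uMin;
    const double scale = dev / (std::pow(2.0, Nbit) - 1.0);
    const double y = std::log(uMean) / std::log(b)
                   + scale * std::log(D) / std::log(b);
    return std::pow(b, y);
}

// Linear mapping, Eq. (4); the domain size D is assumed to scale the deviation.
double mapLinear(double uMean, double dev, int Nbit, double D)
{
    const double scale = dev / (std::pow(2.0, Nbit) - 1.0);
    return uMean + scale * D;
}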
3.2. Inputs
The presented optimal control technique needs the following inputs:
1. The number of state variables, their initial values, and constraints.
2. The range of integration, its accuracy, the minimum step of integration (hmin), and the initial step of integration (hini).
3. The number of step changes or stages (Nu) for control function u.
4. The minimum value (Dmin) of the control domain, its maximum value (Dmax), and a factor (C) to vary the size of the control domain.
5. A seed number to generate pseudo-random numbers.
6. The number of inactive iterations (No) needed to start the alternation of the logarithmic mapping with the linear mapping.
7. The number of iterations (Nitr) of the optimal control technique.
8. The following parameters for the genetic operations of selection, crossover, and mutation:
(a) The number of bits (Nbit) to represent Δui,2.
(b) The number of crossover sites (Nxsites) for any Δui,2.
(c) The probability of crossover (pc).
(d) The probability of mutation (pm).
(e) The power index (n) to scale the performance index.
(f) The number of genetic generations (Ngen) per iteration.
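For convenience, the input values that are common to all four problems (listed later in Table 1) can be gathered into a small structure; the following sketch uses illustrative field names.

// Algorithm inputs that are common to all four problems (Table 1).
struct GAControlInputs {
    int    Nbit    = 2;       // bits per binary-coded deviation element
    int    Nxsites = 1;       // number of crossover sites
    double pc      = 0.6;     // crossover probability
    double pm      = 0.2;     // mutation probability
    double n       = 2.0;     // power used to scale the performance index
    int    Ngen    = 10;      // genetic generations per iteration
    int    No      = 200;     // inactive iterations before mapping alternation
    int    Nitr    = 3000;    // total iterations of the technique
    double C       = 0.75;    // size-variation factor for the control domain
    double Dmin    = 1.0e-4;  // minimum size of the control domain
};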

3.3. Algorithm

Following is the algorithm of the presented optimal control technique:

1. Initialize:
(a) ū, the vector of mean values of the control function for all Nu stages, using

ūi = umin + Ri (umax − umin),   0 ≤ Ri ≤ 1,   i = 0, 1, . . ., Nu − 1    (5)


where Ri is the ith pseudo-random number obtained from a pseudo-random number generator.
(b) A population of Npop binary-coded deviation vectors Δu2 using the pseudo-random number generator, where Npop = Nu × Nbit.
(c) The variable control domain size, D = (umax − umin)/2.
(d) A Boolean variable (needed to enable the alternation of the logarithmic mapping with the linear mapping), ALTERNATE = FALSE.
2. Set logarithmic mapping for the genetic operations of
selection, crossover, and mutation.
3. Carry out the following operations on the population of Δu2 for Ngen generations:
(a) Performance index evaluation for each Δu2.
(b) Selection based on the scaled performance index.
(c) Crossover with probability pc.
(d) Mutation with probability pm.
4. Store the resulting optimal value of the performance index (Ĵ) and the corresponding optimal control vector (û).
5. Replace ū by û.
6. If ALTERNATE is TRUE, repeat Steps 3–5 once with linear mapping.
7. If ALTERNATE is FALSE, then if for No consecutive iterations the fractional change in Ĵ is less than 1%, set ALTERNATE = TRUE. (This step executes only once.)
8. If D is equal to either Dmin or Dmax, set the size-variation factor for the control domain, C = C^−1. (This step allows the alternation of the successive contraction of D with its successive expansion.)
9. Set D = CD. If D < Dmin, set D = Dmin. If D > Dmax, set D = Dmax. (This step keeps D within its limits.)
10. Go to Step 2 until the specified number of iterations, Nitr, are done.
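The iteration-level bookkeeping of Steps 7–9 can be sketched in C++ as follows; this is only an illustration of the domain size-variation and mapping-alternation logic, with illustrative names, not a complete implementation of the technique.

#include <algorithm>
#include <cmath>

// Quantities carried between iterations for Steps 7-9 of the algorithm.
struct IterationState {
    double D;                  // current size of the control domain
    double C;                  // size-variation factor, e.g. 0.75
    bool   alternate = false;  // the ALTERNATE flag of Step 1(d)
    int    inactive  = 0;      // consecutive iterations with < 1% change in J
};

void endOfIteration(IterationState& s, double Jold, double Jnew,
                    double Dmin, double Dmax, int No)
{
    // Step 7: enable the mapping alternation after No inactive iterations.
    if (!s.alternate) {
        const double change =
            (Jold != 0.0) ? std::fabs(Jnew - Jold) / std::fabs(Jold) : 1.0;
        s.inactive = (change < 0.01) ? s.inactive + 1 : 0;
        if (s.inactive >= No) s.alternate = true;
    }
    // Step 8: reverse the direction of the size-variation at either limit.
    if (s.D <= Dmin || s.D >= Dmax) s.C = 1.0 / s.C;
    // Step 9: vary D and keep it within its limits.
    s.D = std::clamp(s.C * s.D, Dmin, Dmax);
}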

4. Application and results


The presented optimal control technique was applied to
the four problems of (i) ethanol fermentation, (ii) protein
production, (iii) penicillin production, and (iv) methylcyclopentane hydroisomerization. The optimal control of these
processes has been attempted by several researchers, and
reported to be very challenging. The parameters used by
the technique, both common and specific to the four problems, are listed in Tables 1 and 2, respectively. A minimum of two bits was taken to represent a deviation value for the control at any stage. This approach reduces the population size for the deviation vectors (in Step 1(b) of the algorithm), and consequently the overhead of performance index evaluations. To increase the diversity of the resulting small population, a high value of 0.2 was chosen for the mutation probability. For the sake of comparison, the number of control stages and the accuracy of integration were taken to be the same as in previous studies.

Table 1
The parameters (as described in Section 3.2) used by the presented optimal control technique, common to all four problems

Nbit      2
Nxsites   1
pc        0.6
pm        0.2
n         2
Ngen      10
No        200
Nitr      3000
C         0.75
Dmin      1 × 10^−4

To statistically examine the robustness of the optimal control solutions, the technique was applied 90 times to each problem. For each problem, every application was initialized with a unique random seed to generate pseudo-random numbers, which were also used to carry out the stochastic genetic operations. The subtractive method of Knuth (1973) was employed to generate these numbers. The 90 random seeds had a varying number of digits, up to nine.
It may be noted that due to the use of stochastic genetic operations by the presented optimal control technique, there is always a possibility of a sudden, increased rate of improvement of an optimal performance index after any specified number of iterations. This phenomenon was witnessed many times during the 360 applications of the presented technique, and is different from that experienced with gradient-based techniques, whose rate of improvement decreases progressively. Since the application of the presented technique cannot be allowed to run forever, a deterministic termination criterion of Nitr = 3000 was used. With this criterion, the technique took reasonable computation times for all 360 applications, and generated results in good agreement with those previously reported. When the applications terminated, the fractional improvement of the optimal performance index per iteration was of O(10^−4) or less.
To solve the differential equations of the process models, the fifth-order Runge-Kutta-Fehlberg method with Cash-Karp parameters and adaptive step-size control (Press, Teukolsky, Vetterling, & Flannery, 2002) was used. The method was programmed to exit immediately with zero performance index in case of integration failure due to an infeasible control function, or any violation of constraints. For each problem, the resulting 90 values of the optimal performance index and the corresponding control solutions were analyzed.
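The integration policy can be illustrated with the following C++ sketch. For brevity it uses a fixed-step fourth-order Runge-Kutta step in place of the adaptive Runge-Kutta-Fehlberg/Cash-Karp scheme actually used in this work, and it assumes the control has already been bound into the right-hand side f through a closure; the names are illustrative.

#include <functional>
#include <vector>

using Rhs   = std::function<std::vector<double>(const std::vector<double>&, double)>;
using Check = std::function<bool(const std::vector<double>&)>;

// Fixed-step RK4 driver with the early-exit policy described above: it
// returns false as soon as the state becomes infeasible, so that the
// caller can assign a zero performance index to that control function.
bool integrate(const Rhs& f, const Check& feasible,
               std::vector<double>& x, double t0, double tf, double h)
{
    const std::size_t n = x.size();
    for (double t = t0; t < tf; t += h) {
        const std::vector<double> k1 = f(x, t);
        std::vector<double> xa(n), xb(n), xc(n);
        for (std::size_t i = 0; i < n; ++i) xa[i] = x[i] + 0.5 * h * k1[i];
        const std::vector<double> k2 = f(xa, t + 0.5 * h);
        for (std::size_t i = 0; i < n; ++i) xb[i] = x[i] + 0.5 * h * k2[i];
        const std::vector<double> k3 = f(xb, t + 0.5 * h);
        for (std::size_t i = 0; i < n; ++i) xc[i] = x[i] + h * k3[i];
        const std::vector<double> k4 = f(xc, t + h);
        for (std::size_t i = 0; i < n; ++i)
            x[i] += h / 6.0 * (k1[i] + 2.0 * k2[i] + 2.0 * k3[i] + k4[i]);
        if (!feasible(x)) return false;   // abandon the infeasible member
    }
    return true;
}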
Table 2
The parameters (as described in Section 3.2) used by the presented optimal control technique, specific to each of the four problems

Process model | Accuracy | hmin | hini | Nu
Ethanol fermentation | 1 × 10^−6 | 1 × 10^−2 | 0.2 | 20
Protein production | 1 × 10^−7 | 1 × 10^−2 | 0.2 | 45
Penicillin production | 1 × 10^−6 | 5 × 10^−3 | 0.2 | 20
Methylcyclopentane hydroisomerization | 1 × 10^−6 | 1 × 10^−2 | 0.2 | 10

The technique was coded in C++ language. The reproducibility of the results of the technique was established by
running it on three different computers, and two different
compilers (Microsoft Visual C++ 6.0, and GNU C++, gcc
2.95.3-5). The reported results of this study were obtained
using an IBM computer (Pentium III with 192 MB RAM).

4.1. Fermentation of ethanol

The fermentation of ethanol in a fed-batch reactor has been used for optimal control by Hong (1986), Chen and Hwang (1990), Luus (1993a), Hartig, Keil, and Luus (1995), Bojkov and Luus (1996), Luus and Hennessy (1999), and Gupta (1995). The following model has been used for this process:

dx1/dt = A x1 − u x1/x4    (6)

dx2/dt = −10 A x1 + u (150 − x2)/x4    (7)

dx3/dt = B x1 − u x3/x4    (8)

dx4/dt = u    (9)

where

A = [0.408/(1 + x3/16)] [x2/(0.22 + x2)]    (10)

B = [1/(1 + x3/71.5)] [x2/(0.44 + x2)]    (11)

In the above model, x1 is the cell mass concentration, x2 is the substrate concentration, x3 is the product concentration, and x4 is the liquid volume inside the reactor. The initial conditions are:

x1(0) = 1,   x2(0) = 150,   x3(0) = 0,   x4(0) = 10    (12)

The control function u is the feed rate, which is constrained as follows:

0 ≤ u ≤ 12    (13)

There is an additional inequality constraint on the liquid volume,

x4(tf) ≤ 200    (14)

at the final time, tf = 63 h. The objective is to find the optimal control function, which would maximize the following performance index:

J = x3(tf) x4(tf)    (15)
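As an example of how such a model enters the objective function evaluation, the right-hand side of Eqs. (6)-(11), with the signs as reconstructed above, can be written as the following C++ sketch (variable names are illustrative). The performance index of Eq. (15) is then simply x[2]·x[3] evaluated at tf = 63 h, with the run rejected if the volume constraint of Eq. (14) is violated.

#include <vector>

// Right-hand side of the ethanol fermentation model, Eqs. (6)-(11):
// x[0] cell mass, x[1] substrate, x[2] product, x[3] liquid volume.
std::vector<double> ethanolRhs(const std::vector<double>& x, double u)
{
    const double A = (0.408 / (1.0 + x[2] / 16.0)) * x[1] / (0.22 + x[1]);
    const double B = (1.0   / (1.0 + x[2] / 71.5)) * x[1] / (0.44 + x[1]);
    return {
        A * x[0] - u * x[0] / x[3],                    // dx1/dt, Eq. (6)
        -10.0 * A * x[0] + u * (150.0 - x[1]) / x[3],  // dx2/dt, Eq. (7)
        B * x[0] - u * x[2] / x[3],                    // dx3/dt, Eq. (8)
        u                                              // dx4/dt, Eq. (9)
    };
}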

Fig. 2. The fractional difference of Ĵ from its best reported value vs. randomly seeded application of the presented optimal control technique to ethanol fermentation.

4.1.1. Results
The results of the 90 different applications of the optimal control technique are plotted in Fig. 2 as the fractional differences of the optimal performance index (Ĵ) from its best reported optimal value (Jbest = 2.08411 × 10^4, Luus & Hennessy, 1999) versus seed numbers. The fractional difference is given by f = 1 − Ĵ/Jbest. Thus, in the figure, a plot-point closer to the abscissa denotes a more accurate result. The overall accuracy of the 90 f-values is quantified by their low average of 1.7%, while their precision is quantified by their low standard deviation of 1.1% in the interval 0.44–5.4%.

The maximum optimal value of Jmax = 2.074969 × 10^4 was obtained from the 90 applications. As shown in Table 3, this value, which has f = 0.44%, agrees well with those obtained earlier through four different optimal control techniques: semi-exhaustive search (Gupta, 1995), direct search (Luus & Hennessy, 1999), sequential quadratic programming (Hartig et al., 1995), and iterative dynamic programming (Hartig et al., 1995). The values of the optimal control vector corresponding to Jmax are listed in Table 4.

To generate Jmax, the presented optimal control technique took a reasonable computation time of 458 s (on the IBM computer), during which 1,200,000 objective function calls were made to integrate Eqs. (6)–(9) from 0 to 63 h. This number of calls is greater than the 184,000–890,000 such calls used by the direct search technique (Luus & Hennessy, 1999), which generated Jbest. However, as shown in Table 3, the presented technique used random initialization as against the specific initial control value used by the direct search technique.


Table 3
The comparison of results for ethanol fermentation with those reported earlier using different optimal control techniques

Technique | Starting point | Jmax (× 10^4) | Computation time (s)
Present | Random | 2.074969 | 458 (on the IBM computer)
Semi-exhaustive search (Gupta, 1995) | Specific u given by Luus (1991) | 2.0830 | 3600 (on 486/33 personal computer)
Direct search (Luus & Hennessy, 1999) | u = 3 for all stages | 2.08411 | 3453–3710 (on Pentium/120)
Sequential quadratic programming (Hartig et al., 1995) | Random | 2.0805 | 87,840 (for 400 starting points on 486/66 personal computer)
Iterative dynamic programming (Hartig et al., 1995) | u = 3 for all stages | 2.0836 | 8640 (on 486/66 personal computer)

For the presented technique, the objective function calls reported in this work include successful as well as failed integration attempts. Some attempts fail due to infeasible control functions, or any violation of constraints.

Table 4
The optimal control values, corresponding to Jmax = 2.074969 × 10^4, for ethanol fermentation

Stage (i)   ui
0    1.175828 × 10^−5
1    4.453167 × 10^−5
2    3.170494 × 10^−4
3    2.111250 × 10^−1
4    9.937975 × 10^−1
5    1.325410
6    1.344505
7    1.754847
8    1.876797
9    2.731385
10   2.481849
11   3.966079
12   3.826159
13   4.685252
14   6.381866
15   6.549184
16   1.018872 × 10^1
17   1.199999 × 10^1
18   9.340255 × 10^−5
19   4.797458 × 10^−6

4.2. Production of secreted protein

The production of secreted protein in a fed-batch bioreactor has been used for optimal control by Park and Ramirez (1988), Luus (1992), and Gupta (1995). The following model has been used for this process:

dx1/dt = g1 (x2 − x1) − u x1/x5    (16)

dx2/dt = g2 x3 − u x2/x5    (17)

dx3/dt = g3 x3 − u x3/x5    (18)

dx4/dt = −7.3 g3 x3 + u (20 − x4)/x5    (19)

dx5/dt = u    (20)

where

g3 = 21.87 x4/[(x4 + 0.4)(x4 + 62.5)]    (21)

g2 = x4 exp(−5 x4)/(0.1 + x4)    (22)

g1 = 4.75 g3/(0.12 + g3)    (23)

In the above model, x1 and x2 relate to the concentrations of secreted and total protein, respectively, x3 and x4 denote the concentrations of cells and glucose, respectively, and x5 is the volume of the reactor. The initial conditions are:

x1(0) = x2(0) = 0,   x3(0) = 1,   x4(0) = 5,   x5(0) = 1    (24)

The control function u is the feed rate, which is constrained as follows:

0 ≤ u ≤ 2    (25)

The objective is to find the optimal control function, which would maximize the following performance index:

J = x1(tf) x5(tf)    (26)

at the final time, tf = 15 h.

4.2.1. Results
The results of the 90 different applications of the optimal control technique are plotted in Fig. 3 as the fractional differences of the optimal performance index from its best reported optimal value (Jbest = 3.2686867 × 10^−1, Luus & Hennessy, 1999) versus seed numbers. The overall accuracy of the 90 f-values is quantified by their low average of 0.77%, while their precision is quantified by their low standard deviation of 0.27% in the interval 0.29–2.0%.

The maximum optimal value of Jmax = 3.259277 × 10^−1 was obtained from the 90 applications. As shown in Table 5, this value, which has f = 0.29%, agrees well with those obtained earlier through three different optimal control techniques: variational calculus (Park & Ramirez, 1988), direct search (Luus & Hennessy, 1999), and iterative dynamic programming (Luus & Hennessy, 1999).
Fig. 3. The fractional difference of Ĵ from its best reported value vs. randomly seeded application of the presented optimal control technique to protein production.

The values of the optimal control vector corresponding to Jmax are listed in Table 6.

To generate Jmax, the presented optimal control technique took a reasonable computation time of 1244 s (on the IBM computer), during which 2,700,000 objective function calls were made to integrate Eqs. (16)–(20) from 0 to 15 h. This number of calls is higher than the 1,600,000–2,100,000 such calls used by the direct search technique (Luus & Hennessy, 1999), which generated Jbest. However, as shown in Table 5, the presented technique used random initialization as against the specific initial control value used by the direct search technique.

Table 6
The optimal control values, corresponding to Jmax = 3.259277 × 10^−1, for protein production

Stage (i) | ui | Stage (i) | ui
0  | 1.946846 × 10^−1 | 23 | 1.354798 × 10^−1
1  | 1.982789         | 24 | 1.445002 × 10^−1
2  | 1.99383          | 25 | 2.301419 × 10^−1
3  | 1.993075         | 26 | 2.092167 × 10^−1
4  | 3.628645 × 10^−1 | 27 | 3.634367 × 10^−1
5  | 1.903873         | 28 | 2.581563 × 10^−1
6  | 1.70462          | 29 | 2.062290 × 10^−1
7  | 1.078404         | 30 | 3.093587 × 10^−1
8  | 6.814616 × 10^−5 | 31 | 3.128208 × 10^−1
9  | 9.726911 × 10^−2 | 32 | 7.903737 × 10^−1
10 | 7.885631 × 10^−1 | 33 | 3.282011 × 10^−1
11 | 7.933671 × 10^−1 | 34 | 6.231075 × 10^−1
12 | 7.990748 × 10^−1 | 35 | 2.174090 × 10^−1
13 | 7.958244 × 10^−1 | 36 | 1.455103 × 10^−1
14 | 8.060720 × 10^−1 | 37 | 1.810653
15 | 8.040727 × 10^−1 | 38 | 5.237942 × 10^−1
16 | 8.068483 × 10^−1 | 39 | 4.205993 × 10^−1
17 | 8.239965 × 10^−1 | 40 | 1.976336
18 | 8.338405 × 10^−1 | 41 | 7.359788 × 10^−1
19 | 8.590953 × 10^−1 | 42 | 5.723070 × 10^−1
20 | 9.138066 × 10^−1 | 43 | 4.989463 × 10^−1
21 | 1.047172         | 44 | 1.994245
22 | 1.999053         |    |

4.3. Production of penicillin

The production of penicillin in a fed-batch reactor has been used for optimal control by Lim, Tayeb, Modak, and Bonte (1986), Cuthrell and Biegler (1989), Luus (1993b), Mekarapiruk and Luus (1997), Luus and Hennessy (1999), Dadebo and McAuley (1995), and Gupta (1995). The following model has been used for this process:
dx1/dt = h1 x1 − u x1/(500 x4)    (27)

dx2/dt = h2 x1 − 0.01 x2 − u x2/(500 x4)    (28)

dx3/dt = −h1 x1/0.47 − h2 x1/1.2 − x1 [2.9 × 10^−2 x3/(10^−4 + x3)] + (u/x4)(1 − x3/500)    (29)

dx4/dt = u/500    (30)

where

h1 = 0.11 x3/(6 × 10^−3 x1 + x3)    (31)

h2 = 5.5 × 10^−3 x3/[10^−4 + x3 (1 + 10 x3)]    (32)

Table 5
The comparison of results for protein production with those reported earlier using different optimal control techniques

Technique | Starting point | Jmax (× 10^−1) | Computation time (s)
Present | Random | 3.259277 | 1244 (on the IBM computer)
Variational calculus (Park & Ramirez, 1988) | Specific u (not reported) | 3.25 (a) | Not reported
Direct search (Luus & Hennessy, 1999) | u = 1 for all stages | 3.2686867 | less than 36,968 (on Pentium/120)
Iterative dynamic programming (Luus & Hennessy, 1999) | Random | 3.2686867 | 3584–7391 (on Pentium/120)

(a) As read from graph.


In the above model, x1, x2 and x3 denote the concentrations of biomass, product and substrate, respectively, and x4 is the reactor volume. The initial conditions are:

x1(0) = 1.5,   x2(0) = x3(0) = 0,   x4(0) = 7    (33)

The control function u is the feed rate, which is constrained as follows:

0 ≤ u ≤ 50    (34)

In addition, the following inequalities must not be violated:

0 ≤ x1 ≤ 40    (35)

0 ≤ x3 ≤ 25    (36)

0 ≤ x4 ≤ 10    (37)

The objective is to find the optimal control function, which would maximize the following performance index:

J = x2(tf) x4(tf)    (38)

at the final time, tf = 132 h.


For this problem, several applications of the optimal control technique did not generate any feasible control vector
(uf ) with 20 elements or stages, for a large number of iterations. This situation was handled by starting the technique
with the following inputs:
1. One stage of control function u, i.e. Nu = 1, which corresponds to a population of two deviation vectors,
u2 .
2. Two generations of genetic operations on the population
every iteration, i.e. Ngen = 2.
As soon as any uf was found, Nu was incremented by one.
The size of vectors u, u and
u2 , and of the population of

u2 was updated accordingly. A new population of


u2
of the updated size was randomly generated. The vector uf
was used to initialize a new vector of mean control values,
The value for its new incremental element or stage
i.e. u.
was set equal to the last stage value of uf . The application
of the technique was resumed with these changes, which
continued until Nu became equal to 20. At that point, Nitr
was set to zero, Ngen was set to 10, and the technique was
applied to generate an optimal control solution.
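A minimal C++ sketch of this stage-growing step is given below; the names are illustrative, and only the re-initialization of the mean-control vector is shown, not the surrounding iteration logic.

#include <vector>

// Start-up strategy used for the penicillin problem: begin with a single
// control stage and, whenever a feasible control vector uFeasible is found,
// add one stage initialized from the last feasible stage value, until the
// full number of stages (here 20) is reached.
void growStages(std::vector<double>& uMean,
                const std::vector<double>& uFeasible, int targetNu)
{
    if (static_cast<int>(uFeasible.size()) >= targetNu) return;
    uMean = uFeasible;                    // re-initialize the means from uf
    uMean.push_back(uFeasible.back());    // new stage takes the last stage value
}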

Fig. 4. The fractional difference of Ĵ from its best reported value vs. randomly seeded application of the presented optimal control technique to penicillin production.

4.3.1. Results
The results of the 90 different applications of the optimal control technique are plotted in Fig. 4 as the fractional differences of the optimal performance index from its best reported optimal value (Jbest = 8.8 × 10^1, Gupta, 1995) versus seed numbers. The overall accuracy of the 90 f-values is quantified by their low average of 1.2%, while their precision is quantified by their low standard deviation of 1.3% in the interval 0.14–7.7%.

The maximum optimal value of Jmax = 8.787298 × 10^1 was obtained from the 90 applications. As shown in Table 7, this value, which has f = 0.14%, agrees well with those obtained earlier through three different optimal control techniques: semi-exhaustive search (Gupta, 1995), direct search (Luus & Hennessy, 1999), and iterative dynamic programming (Luus, 1993b; Mekarapiruk & Luus, 1997). The values of the optimal control vector corresponding to Jmax are listed in Table 8.

To generate Jmax, the presented optimal control technique took a reasonable computation time of 2932 s (on the IBM computer), during which 1,866,400 objective function calls were made to integrate Eqs. (27)–(30) from 0 to 132 h.

Table 7
The comparison of results for penicillin production with those reported earlier using different optimal control techniques

Technique | Starting point | Jmax (× 10^1) | Computation time (s)
Present | Random | 8.787298 | 2932 (on the IBM computer)
Semi-exhaustive search (Gupta, 1995) | Specific u cited by Gupta (1995) | 8.8 | 2820 (on 486/33 PC)
Direct search (Luus & Hennessy, 1999) | u = 11.9 for all stages | 8.79964 | 17,726 (on Pentium/166 digital computer)
Iterative dynamic programming (Luus, 1993b) | u = 11.9 for all stages | 8.7948 | 9000 (on 486/33 personal computer)
Iterative dynamic programming (Mekarapiruk & Luus, 1997) | Specific u given by Gupta (1995), or Luus (1993b) | 8.7959 | Not reported

Table 8
The optimal control values, corresponding to Jmax = 8.787298 × 10^1, for penicillin production

Stage (i)   ui
0    4.706504
1    5.537037
2    6.637024
3    3.759060 × 10^1
4    3.334574 × 10^1
5    8.365168
6    8.921780
7    9.190752
8    9.286282
9    9.069012
10   9.406337
11   9.094980
12   9.264788
13   9.414326
14   9.497962
15   9.735480
16   9.512968
17   9.490603
18   9.581659
19   9.623707

This number of calls is higher than the 81,000 such calls used by the direct search technique (Luus & Hennessy, 1999). However, as shown in Table 7, the presented technique used random initialization as against the specific initial control value used by the direct search technique.
4.4. Hydroisomerization of methylcyclopentane

The hydroisomerization of methylcyclopentane to benzene in the presence of a bifunctional catalyst in a tubular reactor has been used for optimal control by Luus, Dittrich, and Keil (1992). The following model has been used for this process:

dx1/dt = −k1 x1    (39)

dx2/dt = k1 x1 − (k2 + k3) x2 + k4 x5    (40)

dx3/dt = k2 x2    (41)

dx4/dt = −k6 x4 + k5 x5    (42)

dx5/dt = k3 x2 + k6 x4 − (k4 + k5 + k8 + k9) x5 + k7 x6 + k10 x7    (43)

dx6/dt = k8 x5 − k7 x6    (44)

dx7/dt = k9 x5 − k10 x7    (45)

where

ki = Σ(j = 1 to 4) ci,j u^(j−1),   i = 1, 2, . . ., 10    (46)

In the above model, t is the characteristic time defined as the ratio of the catalyst mass up to a given reactor section to the methylcyclopentane feed rate, x1 is the mole fraction of methylcyclopentane, x2–x6 are the mole fractions of five intermediate species, and x7 is the mole fraction of the product benzene. The ki's are rate constants, which depend on the coefficients ci,j listed in Table 9. The initial conditions are:

x1(0) = 1,   x2(0) = x3(0) = · · · = x7(0) = 0    (47)

The control function u is the catalyst blend, the ratio of the mass of hydrogenating catalyst to the total catalyst mass, as a function of the characteristic time. u is constrained as follows:

0.6 ≤ u ≤ 0.9    (48)

The objective is to find the optimal control, which would maximize the following performance index:

J = x7(tf)    (49)

or the benzene concentration at the characteristic final time, tf = 2000 g h/mol, corresponding to the exit of the reactor.

Table 9
The coefficients of the rate constants for the hydroisomerization of methylcyclopentane

i  | ci,1 | ci,2 | ci,3 | ci,4
1  | 2.918487 × 10^−3 | −8.045787 × 10^−3 | 6.749947 × 10^−3 | −1.416647 × 10^−3
2  | 9.509977 | −3.500994 × 10^1 | 4.283329 × 10^1 | −1.733333 × 10^1
3  | 2.682093 × 10^1 | −9.556079 × 10^1 | 1.130398 × 10^2 | −4.429997 × 10^1
4  | 2.087241 × 10^2 | −7.198052 × 10^2 | 8.277466 × 10^2 | −3.166655 × 10^2
5  | 1.350005 | −6.850027 | 1.216671 × 10^1 | −6.666689
6  | 1.921995 × 10^−2 | −7.945320 × 10^−2 | 1.105666 × 10^−1 | −5.033333 × 10^−2
7  | 1.323596 × 10^−1 | −4.696255 × 10^−1 | 5.539323 × 10^−1 | −2.166664 × 10^−1
8  | 7.339981 | −2.527328 × 10^1 | 2.993329 × 10^1 | −1.199999 × 10^1
9  | −3.950534 × 10^−1 | 1.679353 | −1.777829 | 4.974987 × 10^−1
10 | −2.504665 × 10^−5 | 1.005854 × 10^−2 | −1.986696 × 10^−2 | 9.833470 × 10^−3
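The rate-constant evaluation of Eq. (46), with the Table 9 coefficients and signs as reconstructed above, can be sketched in C++ as follows (illustrative names):

#include <array>

// Rate constants k_i(u) of Eq. (46): k_i = sum_{j=1..4} c_{i,j} u^{j-1},
// using the coefficients of Table 9.
std::array<double, 10> rateConstants(double u)
{
    static const double c[10][4] = {
        { 2.918487e-3, -8.045787e-3,  6.749947e-3, -1.416647e-3},
        { 9.509977e+0, -3.500994e+1,  4.283329e+1, -1.733333e+1},
        { 2.682093e+1, -9.556079e+1,  1.130398e+2, -4.429997e+1},
        { 2.087241e+2, -7.198052e+2,  8.277466e+2, -3.166655e+2},
        { 1.350005e+0, -6.850027e+0,  1.216671e+1, -6.666689e+0},
        { 1.921995e-2, -7.945320e-2,  1.105666e-1, -5.033333e-2},
        { 1.323596e-1, -4.696255e-1,  5.539323e-1, -2.166664e-1},
        { 7.339981e+0, -2.527328e+1,  2.993329e+1, -1.199999e+1},
        {-3.950534e-1,  1.679353e+0, -1.777829e+0,  4.974987e-1},
        {-2.504665e-5,  1.005854e-2, -1.986696e-2,  9.833470e-3}
    };
    std::array<double, 10> k{};
    for (int i = 0; i < 10; ++i)
        for (int j = 0; j < 4; ++j)
            k[i] = k[i] * u + c[i][3 - j];   // Horner evaluation in u
    return k;
}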


Table 10
The comparison of results for methylcyclopentane hydroisomerization with those reported earlier using different optimal control techniques

Technique | Starting point | Jmax (× 10^−2) | Computation time (s)
Present | Random | 1.00937 | 166 (on the IBM computer)
Sequential quadratic programming (Luus et al., 1992) | Random | 1.00527 | 185 (for 100 starting points on CRAY XMP/24 digital computer)
Iterative dynamic programming (Luus et al., 1992) | u = 0.75 for all stages | 1.00942 | 360–960 (on CRAY XMP/24 digital computer)
Iterative dynamic programming (Luus et al., 1992) | u = 0.75 for all stages | 1.00942 | 10,800 (on computer with 386/33 processor and mathematical coprocessor)

4.4.1. Results
The results of the 90 different applications of the optimal control technique are plotted in Fig. 5 as the fractional differences of the optimal performance index from its best reported optimal value (Jbest = 1.00942 × 10^−2, Luus et al., 1992) versus seed numbers. The overall accuracy of the 90 f-values is quantified by their very low average of 0.0094%, while their precision is quantified by their very low standard deviation of 0.0024% in the interval 0.005–0.02%.

The maximum optimal value of Jmax = 1.00937 × 10^−2 was obtained from the 90 applications. As shown in Table 10, this value, which has f = 0.005%, agrees very well with those obtained earlier through two different optimal control techniques: sequential quadratic programming (Luus et al., 1992) and iterative dynamic programming (Luus et al., 1992). The values of the optimal control vector corresponding to Jmax are listed in Table 11.

To generate Jmax, the presented optimal control technique took a reasonable computation time of 166 s (on the IBM computer), during which 173,200 objective function calls were made to integrate Eqs. (39)–(45) from 0 to 2000 units of the characteristic time. For this problem, the number of objective function calls used by any direct search technique is not available in the literature.

Table 11
The optimal control values, corresponding to Jmax = 1.00937 × 10^−2, for methylcyclopentane hydroisomerization

Stage (i)   ui × 10
0    6.660811
1    6.736933
2    6.763065
3    8.999429
4    8.999889
5    8.999994
6    8.999466
7    8.999979
8    8.999946
9    8.999914

As shown in Table 10, the computation time taken by the presented technique to generate Jmax is less than that taken by sequential quadratic programming as well as by iterative dynamic programming on a much faster CRAY supercomputer.


Fig. 5. The fractional difference of Ĵ from its best reported value vs. randomly seeded application of the presented optimal control technique to methylcyclopentane hydroisomerization.

5. Discussion and conclusion

A new technique was presented to provide robust solutions for the optimal control of non-linear, multimodal, and non-continuous processes of chemical engineering. The technique was tested on four challenging, well-studied optimal control problems of (i) ethanol fermentation, (ii) protein production, (iii) penicillin production, and (iv) methylcyclopentane hydroisomerization. The technique demonstrated its robustness by generating results with (i) low values of the average fractional difference of the performance index (from its best reported value) in the range 0.0094–1.7%, and (ii) low values of the standard deviation (of the fractional difference) in the range 0.0024–1.3%. These statistical results are based on the 360 applications (90 different, randomly initiated applications for each of the four problems) of the presented technique. The computation time required for the application of the technique was quite reasonable, in the range 166–2932 s on a 750 MHz personal computer. It is worth noting that the technique did not utilize any feasible control input, or any other helpful, a priori information to generate the optimal solutions.


For the first three problems, the number of objective function evaluations of the presented technique was compared with that of a direct search technique (Luus & Hennessy, 1999), which, like the presented technique, has no major computational overhead other than the objective function evaluation requiring the complete integration of ordinary differential equations. It was found that the presented technique required greater numbers of objective function evaluations. However, the technique showed an economy of computation for the last problem, and generated the optimal solution in less time than that taken by sequential quadratic programming as well as by iterative dynamic programming on a much faster CRAY supercomputer. It is noteworthy that the technique always used random initialization, unlike the very specific initial control values used by the direct search technique for all problems. Random initialization broadens the scope of the presented technique to solve optimal control problems with minimum or no helpful, a priori initialization. These solutions may be efficiently utilized by techniques allowing specific initializations to generate new or improved solutions.
The third problem of penicillin production was the most difficult one due to the presence of three additional inequality constraints on the state variables along with a large control domain. This situation made it impossible to randomly generate a population with at least one eligible member (related to a feasible control function) and implement the presented optimal control technique as usual. In fact, as seen in Table 7, all previous techniques solved this problem with a very specific starting point. The presented technique took care of this situation by slowly increasing the number of control stages from one to the given number of 20. It is not unlikely that problems with a larger number of inequality constraints, and larger control domains, could pose such difficulty. Another difficulty that could be envisaged is the possible limitation of the presented technique in handling a large number of control stages (or variables) with modest computational overhead. The technique is based on Genetic Algorithms, which for a larger number of variables require a larger population size for adequate diversity, and consequently an equally larger number of objective function evaluations. In such a scenario, the computational overhead could be an issue on slower computers, thereby yielding premature solutions in limited time frames. The conventional gradient-based techniques of variational calculus, or sequential quadratic programming, do not have this limitation, and can therefore be used to improve the premature solutions. These techniques offer the benefit of progressively converging to a local optimum. This benefit can be exploited to assure the globality of an optimal solution by improving different premature solutions. The development of such a hybrid approach forms the focus of future research.
Based on the comparative numerical evidence in this study, the presented technique shows promise in solving non-linear, discontinuous optimal control problems, generating alternative control solutions, furnishing start-up guesses for other techniques, and validating their results. The presented technique does not depend on any auxiliary condition, or on any information on derivatives or feasible control inputs. As such, the technique is easily programmable, and readily applicable for the optimal control of a variety of challenging chemical engineering processes.

Acknowledgements
The financial support of Ryerson University Faculty Seed
Grant, and the Natural Sciences and Engineering Research
Council of Canada is gratefully acknowledged.
References
Barton, P. I., Allgor, R. J., Feehery, W. F., & Galán, S. (1998). Dynamic optimization in a discontinuous world. Industrial and Engineering Chemistry Research, 37, 966–981.
Biegler, L. T., Cervantes, A. M., & Wächter, A. (2002). Advances in simultaneous strategies for dynamic process optimization. Chemical Engineering Science, 57, 575–593.
Bojkov, B., & Luus, R. (1996). Optimal control of non-linear systems with unspecified final times. Chemical Engineering Science, 51(6), 905–919.
Chen, C. T., & Hwang, C. (1990). Optimal control computation for differential-algebraic process systems with general constraints. Chemical Engineering Communications, 97, 9–26.
Coley, D. A. (1999a). An introduction to genetic algorithms for scientists and engineers (2nd ed., p. 87). New Jersey: World Scientific.
Coley, D. A. (1999b). An introduction to genetic algorithms for scientists and engineers (2nd ed., p. 78). New Jersey: World Scientific.
Cuthrell, J. E., & Biegler, L. T. (1989). Simultaneous optimization and solution methods for batch reactor control profiles. Computers and Chemical Engineering, 13, 49–62.
Dadebo, S. A., & McAuley, K. B. (1995). Dynamic optimization of constrained chemical engineering problems using dynamic programming. Computers and Chemical Engineering, 19, 513–525.
Goldberg, D. E. (1989a). Genetic algorithms in search, optimization & machine learning. New York: Addison-Wesley.
Goldberg, D. E. (1989b). Genetic algorithms in search, optimization & machine learning (p. 124). New York: Addison-Wesley.
Gupta, Y. P. (1995). Semiexhaustive search for solving nonlinear optimal control problems. Industrial and Engineering Chemistry Research, 34(11), 3878–3884.
Hartig, F., Keil, F. J., & Luus, R. (1995). Comparison of optimization methods for a fed-batch reactor. Hungarian Journal of Industrial Chemistry, 23, 141–148.
Holland, J. H. (1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.
Hong, J. (1986). Optimal substrate feeding policy for fed-batch fermentation with substrate and product inhibition kinetics. Biotechnology and Bioengineering, 27, 1421–1431.
Knuth, D. E. (1973). The art of computer programming (Vol. 2). Reading, Massachusetts: Addison-Wesley.
Lee, M. H., Han, C., & Chang, K. S. (1997). Hierarchical time-optimal control of a continuous co-polymerization reactor during start-up or grade change operation using genetic algorithms. Computers and Chemical Engineering, 21, S1037–S1042.
Lee, M. H., Han, C., & Chang, K. S. (1999). Dynamic optimization of a continuous polymer reactor using a modified differential evolution algorithm. Industrial and Engineering Chemistry Research, 38(12), 4825–4831.
Lim, H. C., Tayeb, Y. J., Modak, J. M., & Bonte, P. (1986). Computational algorithms for optimal feed rates for a class of fed-batch fermentation: Numerical results for penicillin and cell mass production. Biotechnology and Bioengineering, 28, 1408–1420.
Luus, R. (1990). Application of dynamic programming to high-dimensional nonlinear optimal control problems. International Journal of Control, 52, 239–250.
Luus, R. (1991). Application of iterative dynamic programming to state constrained optimal control problems. Hungarian Journal of Industrial Chemistry, 19(4), 245–254.
Luus, R. (1992). On the application of iterative dynamic programming to singular optimal control problems. IEEE Transactions on Automatic Control, 37, 1802–1806.
Luus, R. (1993a). Application of dynamic programming to differential-algebraic process systems. Computers and Chemical Engineering, 17(4), 373–377.
Luus, R. (1993b). Optimization of fed-batch fermentors by iterative dynamic programming. Biotechnology and Bioengineering, 41, 599–602.
Luus, R., Dittrich, J., & Keil, F. J. (1992). Multiplicity of solutions in the optimization of a bifunctional catalyst blend in a tubular reactor. Canadian Journal of Chemical Engineering, 70(4), 780–785.
Luus, R., & Hennessy, D. (1999). Optimization of fed-batch reactors by the Luus-Jakola optimization procedure. Industrial and Engineering Chemistry Research, 38(5), 1948–1955.
Mekarapiruk, W., & Luus, R. (1997). Optimal control of inequality state constrained systems. Industrial and Engineering Chemistry Research, 36(5), 1686–1694.
Michalewicz, Z., Janikow, C. Z., & Krawczyk, J. B. (1992). A modified genetic algorithm for optimal control problem. Computers and Mathematics with Applications, 23(12), 83–94.
Park, S., & Ramirez, W. F. (1988). Optimal production of secreted protein in fed-batch reactors. AIChE Journal, 34, 1550–1558.
Pontryagin, L. S., Boltyanskii, V. G., Gamkrelidge, R., & Mishchenko, E. (1962). The mathematical theory of optimal processes. New York: Interscience.
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (2002). Numerical recipes in C++. The art of scientific computing (2nd ed., pp. 719–727). New York: Cambridge University Press.
Seywald, H. R., Kumar, R. R., & Deshpande, S. M. (1995). Genetic algorithm approach for optimal control problems with linearly appearing controls. Journal of Guidance, Control and Dynamics, 18(1), 177–182.
Wang, F., & Chiou, J. (1997). Optimal control and optimal time location problems of differential-algebraic systems by differential evolution. Industrial and Engineering Chemistry Research, 36(12), 5348–5357.
