
Journal of Biotechnology 117 (2005) 407–419

Dynamic optimization of bioprocesses: Efficient and robust numerical strategies


Julio R. Banga a,*, Eva Balsa-Canto b, Carmen G. Moles c, Antonio A. Alonso a
a Process Engineering Group, IIM-CSIC, C/Eduardo Cabello 6, 36208 Vigo, Spain
b Department of Applied Mathematics II, Universidad de Vigo, Campus Marcosende, 36280 Vigo, Spain
c Unilever Research Vlaardingen, P.O. Box 114, NL-3130 Vlaardingen, The Netherlands

* Corresponding author. Tel.: +34 986 214473; fax: +34 986 292762. E-mail address: julio@iim.csic.es (J.R. Banga).

Received 1 September 2004; received in revised form 9 February 2005; accepted 26 February 2005

Abstract

The dynamic optimization (open loop optimal control) of non-linear bioprocesses is considered in this contribution. These processes can be described by sets of non-linear differential and algebraic equations (DAEs), usually subject to constraints on the state and control variables. A review of the available solution techniques for this class of problems is presented, highlighting the numerical difficulties arising from the non-linear, constrained and often discontinuous nature of these systems. In order to surmount these difficulties, we present several alternative stochastic and hybrid techniques based on the control vector parameterization (CVP) approach. The CVP approach is a direct method which transforms the original problem into a non-linear programming (NLP) problem, which must be solved by a suitable (efficient and robust) solver. In particular, a hybrid technique uses a first global optimization phase followed by a fast second phase based on a local deterministic method, so it can handle the non-convexity of many of these NLPs. The efficiency and robustness of these techniques are illustrated by solving several challenging case studies regarding the optimal control of fed-batch bioreactors and other bioprocesses. In order to fairly evaluate their advantages, a careful and critical comparison with several other direct approaches is provided. The results indicate that the two-phase hybrid approach presents the best compromise between robustness and efficiency.

© 2005 Elsevier B.V. All rights reserved.
Keywords: Optimal control; Dynamic optimization; Non-linear bioprocesses; Global optimization

1. Introduction

In recent years, many efforts have been devoted to the model-based optimization of processes in biotechnology and bioengineering. An example of a problem which has received major attention is the dynamic optimization (open loop optimal control) of fed-batch bioreactors, as reviewed by Johnson (1987), Rani and Rao (1999) and, more recently, by Banga et al. (2003).


Dynamic optimization allows the computation of the optimal operating policies for these units, e.g. the best time-varying feed rate(s) which ensure the maximization of a pre-defined performance index (usually a productivity, or an economic index derived from the operation profile and the final concentrations). Once computed in a reliable way, these operating policies can be implemented using different control strategies, such as adaptive control (Smets et al., 2004) or model predictive control (Mahadevan and Doyle, 2003).

Most bioprocesses have highly non-linear dynamics, and constraints are also frequently present on both the state and the control variables. Thus, efficient and robust dynamic optimization methods are needed in order to successfully obtain their optimal operating policies. In this work, the general problem of dynamic optimization of non-linear bioprocesses with unspecified final time is considered. Several solution strategies, both deterministic and stochastic, are compared based on their results for three challenging case studies: the optimal operation of two fed-batch bioreactors and the optimal drug scheduling for cancer chemotherapy. A hybrid (stochastic-deterministic) approach is also presented and evaluated, showing significant advantages over the other methods in terms of robustness and computational effort.

2. Problem statement

The general dynamic optimization (optimal control) problem of a bioprocess, considering a free terminal time, can be stated as finding the control vector u(t) and the final time tf to minimize (or maximize) a performance index J[x, u]:

$$ J[x, u] = \Phi[x(t_f)] + \int_{t_0}^{t_f} L[x(t), u(t), t]\, dt \qquad (1) $$

subject to a set of ordinary differential equality constraints, Eq. (2):

$$ \frac{dx}{dt} = f[x(t), u(t), t] \qquad (2) $$

where x is the vector of state variables, with initial conditions x(t0) = x0, and also subject to sets of algebraic equality and inequality constraints, Eqs. (3) and (4):

$$ h[x(t), u(t)] = 0 \qquad (3) $$
$$ g[x(t), u(t)] \le 0 \qquad (4) $$

An additional set of algebraic inequality constraints are the upper and lower bounds on the state and control variables, Eqs. (5) and (6):

$$ x^{L} \le x(t) \le x^{U} \qquad (5) $$
$$ u^{L} \le u(t) \le u^{U} \qquad (6) $$

The above formulation assumes that the process is modelled as a lumped system (i.e. described by ordinary differential equations). If the process is modelled as a distributed system (e.g. a bioreactor where the state variables are functions of both time and spatial position), the corresponding governing partial differential equations (PDEs) are introduced as an additional set of equality constraints:

$$ p(x_{\xi\xi}, x_{\xi}, x, \ldots, \xi) = 0 \qquad (7) $$
$$ a(x_{\xi}, x, \ldots, \xi) = 0 \qquad (8) $$

where ξ are the independent variables (time and spatial position), Eq. (7) is the system of governing PDEs within the domain Ω, with x_ξ = ∂x/∂ξ, and Eq. (8) collects the auxiliary conditions of the PDEs (boundary and initial conditions) on the boundary ∂Ω of the domain Ω.

3. Review of solution methods

The dynamic optimization of fed-batch bioreactors is a very challenging problem for several reasons:

- First, the control variable (e.g. the feed rate) often appears linearly in the system differential equations, so the problem is singular, creating additional difficulties for its solution (especially using indirect methods, as discussed below). For this type of problem, the optimal operating policy will be either bang-bang, or singular, or a combination of both.
- Second, most bioprocesses have highly non-linear dynamics, and constraints are also frequently present on both the state and the control variables. These characteristics introduce new challenges to the existing solution techniques.

Therefore, efficient and robust methods are needed in order to obtain the optimal operating policies.


Numerical methods for the solution of optimal control problems are usually classified under three categories: dynamic programming, indirect and direct approaches. We will concentrate on the last two types, since they are the most promising strategies for the problems considered here.

3.1. Indirect methods

Indirect (classical) approaches are based on the transformation of the original optimal control problem into a two-point boundary value problem (BVP) using the necessary conditions of Pontryagin (Bryson and Ho, 1975). Although many researchers have followed this approach for the optimization of fed-batch reactors (San and Stephanopoulos, 1984, 1989; Hong, 1986; Lim et al., 1986; Modak et al., 1986; Park and Ramirez, 1988; Van Impe et al., 1992; Lee and Ramirez, 1994), the resulting boundary value problems can be very difficult to solve, especially when state constraints are present. The use of transformations has been suggested in order to facilitate the numerical solution of those BVPs (Jayant and Pushpavanam, 1998; Lee et al., 1999; Oberle and Sothmann, 1999).

3.2. Deterministic direct methods

Alternatively, direct approaches transform the original optimal control problem (which is infinite dimensional) into a non-linear programming (NLP) problem, either using control vector parameterization (CVP) (Vassiliadis et al., 1994a,b) or complete (control and state) parameterization, CP (Cuthrell and Biegler, 1989).

3.2.1. Fundamentals of CVP methods

In order to illustrate the basis of the CVP approach, let us consider that the dynamics of the system can be described as:

$$ f(\dot{x}, x, u, v) = 0, \qquad t \in [t_0, t_f] \qquad (9) $$

with suitable initial conditions:

$$ x(t_0) = x_0(v) \qquad (10) $$

where x and ẋ are the state (output) variables and their time derivatives, respectively; u is the control variable vector and v the time-independent decision parameters. In CVP, the time horizon is divided into a number of elements and the control is approximated using predefined basis functions. For example, the control variables are usually approximated by some type of low-order polynomial (e.g. piecewise-constant, piecewise-linear) whose coefficients may involve some (or all) of the parameters in the set v:

$$ u = u(v) \qquad (11) $$

More details on this may be found in, e.g., Vassiliadis et al. (1994a). For example, if a piecewise-constant parameterization (i.e. a sequence of steps) is considered, assuming the more general case of steps of different duration, then the elements of the vector of parameters v will be the durations and control values of these steps. This parameterization transforms the original (infinite dimensional) dynamic optimization problem into a non-linear programming problem where v is the finite dimensional vector of decision variables, and where the system dynamics (differential equality constraints) must be integrated for each evaluation of the performance index. Thus, the direct CVP method has transformed the original problem into a master (outer) NLP with an inner initial value problem (IVP). For this reason, this approach is also sometimes called the sequential direct strategy.
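As a concrete illustration of this sequential scheme, the sketch below implements a piecewise-constant CVP loop in Python with NumPy/SciPy (the choice of language and libraries is ours; the paper does not prescribe any implementation). The decision vector v holds the control levels on a uniform mesh plus the free final time (a simplification of the variable-duration case described above); the inner IVP is integrated for every evaluation of J, and the outer NLP is handled by SciPy's SLSQP, an SQP-type solver. The toy dynamics and all names (rhs, simulate, etc.) are illustrative placeholders, not the models used later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def rhs(t, x, u):
    # Illustrative two-state dynamics dx/dt = f(x, u); replace with the real model.
    x1, x2 = x
    return [u - 0.1 * x1, x1 - 0.05 * u * x2]

def simulate(v, n_steps, x0):
    # Inner IVP: integrate the dynamics under a piecewise-constant control.
    # v = [u_1, ..., u_n, tf]: control levels on a uniform mesh plus the free final time.
    u_levels, tf = v[:-1], v[-1]
    t_grid = np.linspace(0.0, tf, n_steps + 1)
    x = np.asarray(x0, dtype=float)
    for k in range(n_steps):
        sol = solve_ivp(rhs, (t_grid[k], t_grid[k + 1]), x,
                        args=(u_levels[k],), rtol=1e-8, atol=1e-8)
        x = sol.y[:, -1]
    return x

def neg_performance_index(v, n_steps, x0):
    # Outer NLP objective: -J, since SciPy minimizes and we want to maximize J = x2(tf).
    return -simulate(v, n_steps, x0)[1]

n_steps, x0 = 10, [1.0, 0.0]
v0 = np.concatenate([np.full(n_steps, 1.0), [10.0]])     # initial guess for [u, tf]
bounds = [(0.0, 5.0)] * n_steps + [(1.0, 20.0)]          # control bounds and tf bounds
res = minimize(neg_performance_index, v0, args=(n_steps, x0),
               method="SLSQP", bounds=bounds)
print("best J =", -res.fun, " tf =", res.x[-1])
```

In this form the gradient is estimated by finite differences inside SLSQP (the CVP-fd variant discussed below); the sensitivity-based gradients described next replace that step with a more accurate computation.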

State-of-the-art methods for solving NLPs, like sequential quadratic programming (SQP), usually require very good estimates of the gradient of the performance index with respect to the decision variables. A key question in the CVP framework is how to efficiently compute gradient estimates of high quality. The use of the first-order sensitivity equations has been presented (Vassiliadis et al., 1994a,b) as the best approach for obtaining gradients with respect to the optimization parameters in the framework of the CVP approach. These sensitivities can be obtained by a chain-rule differentiation applied to the system in Eqs. (9)–(11) with respect to the time-invariant parameters v:

$$ \frac{\partial f}{\partial \dot{x}}\frac{\partial \dot{x}}{\partial v} + \frac{\partial f}{\partial x}\frac{\partial x}{\partial v} + \frac{\partial f}{\partial u}\frac{\partial u}{\partial v} + \frac{\partial f}{\partial v} = 0, \qquad t \in [t_0, t_f] \qquad (12) $$

with the initial conditions:

$$ \frac{\partial x}{\partial v}(t_0) = \frac{\partial x_0(v)}{\partial v} \qquad (13) $$

The system Jacobian in Eq. (12) is the partitioned matrix

$$ \left[ \frac{\partial f}{\partial \dot{x}}, \; \frac{\partial f}{\partial x} \right] \qquad (14) $$

which is the same as the one appearing in the equations used for the integration of Eq. (9). This allows efficient integration of Eq. (12) in a decoupled direct way, simultaneously with that of Eq. (9). Recently, Balsa-Canto et al. (2000) have presented a CVP method which makes use of restricted second-order information and a mesh refinement procedure in order to solve these problems in a very efficient way, even for high levels of control discretization.
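The following minimal sketch (our own illustration, not code from the paper) shows the idea behind Eqs. (12) and (13) in the simplest possible setting: a scalar ODE dx/dt = -v x with a single time-invariant parameter v. The sensitivity s = ∂x/∂v is integrated together with the state, and the gradient of a terminal performance index J = x(tf) is then simply s(tf), which can be checked against the analytic solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, y, v):
    # State x and its first-order sensitivity s = dx/dv integrated together.
    # Dynamics: dx/dt = f(x, v) = -v*x, hence df/dx = -v and df/dv = -x,
    # so the sensitivity obeys ds/dt = (df/dx)*s + df/dv = -v*s - x  (cf. Eq. (12)).
    x, s = y
    return [-v * x, -v * s - x]

def cost_and_gradient(v, tf=2.0):
    # x(0) = 1 does not depend on v, so s(0) = 0 (cf. Eq. (13)).
    sol = solve_ivp(augmented_rhs, (0.0, tf), [1.0, 0.0], args=(v,),
                    rtol=1e-10, atol=1e-10)
    x_tf, s_tf = sol.y[:, -1]
    return x_tf, s_tf                  # J = x(tf) and dJ/dv = s(tf)

J, dJdv = cost_and_gradient(v=0.5)
print("J     =", J,    "  analytic:", np.exp(-1.0))
print("dJ/dv =", dJdv, "  analytic:", -2.0 * np.exp(-1.0))
```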


3.2.2. Fundamentals of CP methods

In complete parameterization (CP) methods, also named the simultaneous direct strategy, both the controls and the states are parameterized (usually by means of collocation on finite elements). This full discretization obviously results in a larger NLP than in CVP methods, but it has the advantage of not requiring the solution of any inner initial value problem, as CVP does. The application of this method to the optimization of a fed-batch bioreactor was illustrated by Cuthrell and Biegler (1989). The state of the art of CP has been recently reviewed by Biegler et al. (2002).
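To show what a simultaneous (full discretization) formulation looks like in practice, here is a minimal sketch assuming a simple trapezoidal direct transcription rather than the orthogonal collocation on finite elements used by Cuthrell and Biegler (1989): states and controls on a uniform grid are all decision variables, and the discretized dynamics enter the NLP as equality (defect) constraints. The example problem (steering dx/dt = u from x(0) = 1 to x(1) = 0 while minimizing the integral of u^2, whose optimum is u = -1 with J = 1) is ours and purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

N = 20                                   # number of intervals on t in [0, 1]
h = 1.0 / N

def unpack(z):
    return z[:N + 1], z[N + 1:]          # states x_0..x_N, controls u_0..u_N

def objective(z):
    _, u = unpack(z)
    return h * np.sum(0.5 * (u[:-1] ** 2 + u[1:] ** 2))    # trapezoidal quadrature of u^2

def defects(z):
    x, u = unpack(z)
    ode = x[1:] - x[:-1] - 0.5 * h * (u[:-1] + u[1:])      # trapezoidal ODE defects
    return np.concatenate([ode, [x[0] - 1.0, x[-1]]])      # plus the boundary conditions

z0 = np.zeros(2 * (N + 1))                                 # (infeasible) initial guess
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x_opt, u_opt = unpack(res.x)
print("J =", res.fun, " (analytic optimum: 1.0 with u = -1)")
```

The NLP is clearly larger than in the CVP sketch above (all state values are unknowns), but no inner integration is needed; the dynamics are enforced only at the solution, which is the trade-off discussed in the text.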

3.3. Stochastic direct methods

It should be noted that the NLPs arising from the application of direct approaches (such as CVP) are frequently multimodal. Therefore, deterministic (gradient-based) local optimization techniques may converge to local optima, especially if they are started far away from the global solution. Adaptive stochastic methods have been suggested as robust alternatives to surmount these difficulties (Banga et al., 1994; Banga and Seider, 1996; Banga et al., 1997). Other types of stochastic algorithms have also been used, including different random search algorithms (Simutis and Lübbert, 1997; Luus and Hennessy, 1999) and genetic algorithms (Yamashita and Shima, 1997; Balsa-Canto et al., 1998; Roubos et al., 1999; Zuo and Wu, 2000).

De Tremblay et al. (1992) and Luus (1992) have proposed the use of iterative dynamic programming (IDP), but this method seems to be computationally too expensive (Roubos et al., 1997; Lee et al., 1999), especially for systems involving a large number of differential and algebraic equations (DAEs). Further, several search parameters of IDP must be adjusted by the user in order to ensure suitable convergence, so a number of exploratory runs are necessary. Several studies have dealt with the enhancement of the convergence of IDP (Lin and Hwang, 1998; Tholudur and Ramirez, 1997), but other methods based on the CVP scheme seem to be much more efficient (Balsa-Canto et al., 2000). In any case, refined solutions are usually obtained at a large computational cost with the above-mentioned stochastic methods. Although there is always a trade-off between convergence speed and robustness in both stochastic and deterministic methods, the latter usually have the opposite behavior, i.e. they converge very fast if they are started close to the global solution. Clearly, a convenient approach would be to combine both methodologies in order to compensate for their weaknesses while enhancing their strengths. A hybrid (stochastic-deterministic) approach was suggested by Banga and Seider (1996), and later considered by Balsa-Canto et al. (1998) and Carrasco and Banga (1998) with good results. This approach was developed by adequately combining the key elements of a stochastic and a deterministic method, taking advantage of their complementary features.

4. Methods considered

This contribution has two main objectives. First, to present a careful comparison of available recent approaches for the dynamic optimization of non-linear bioprocesses. The purpose of this comparison is to present a critical review which can serve as a guideline for the selection of suitable solvers for other similar problems. Second, to illustrate how a suitable hybrid method presents the best efficiency and robustness for the solution of these problems.

The following deterministic (local) methods have been considered (see also the previous discussion about types of methods):

- CVP-fd: an implementation of the control vector parameterization approach where the outer non-linear programming problem is solved using a sequential quadratic programming code, with the gradient estimated using finite differences.


- CVP-sg: another CVP implementation using SQP to solve the outer NLP, but with a more refined computation of the gradient (via first-order sensitivities).
- CP-col: an implementation of the complete parameterization approach, using collocation, and solving the resulting NLP using SQP.

The following stochastic methods were also considered, all of them following the CVP approach:

- ICRS/DS: the adaptive stochastic method for dynamic optimization as presented by Banga et al. (1994), with a mesh refinement procedure (Banga et al., 1995, 1998).
- GA: a genetic algorithm, based on GENOCOP III (Michalewicz, 1996), adapted by our group for dynamic optimization using CVP.
- DE: an adaptation of the Differential Evolution method (Storn and Price, 1997). Like GA, DE can also be classified as an evolutionary computation method.
- DHC: the Dynamic Hill Climbing method, as presented by de la Maza and Yuret (1994).

Finally, we have also considered the following hybrid method, which has been designed based on our previous preliminary results (Banga and Seider, 1996; Carrasco and Banga, 1998; Balsa-Canto et al., 1998):

- TPH: a two-phase (stochastic-deterministic) hybrid approach, with adaptive mesh refinement for the control parameterization in the stochastic phase and a moving-elements approach in the deterministic (gradient-based) final phase, with gradients estimated via first-order sensitivities.

4.1. TPH details

The main idea in the two-phase hybrid method is the combination of a global stochastic method, ICRS/DS, with a local deterministic method, CVP-sg, in order to take advantage of their complementary strengths: some global convergence properties, in the case of the stochastic strategy, and fast convergence if started close to the global solution, in the case of the deterministic approach. A key issue in TPH is to decide the amount of search to be performed by each method, i.e. the tuning of the hybrid.

Fig. 1. CPU time of the two phases of the TPH method as a function of the value of the performance index at the end of the stochastic phase.

Fig. 2. Total CPU time and final error (with respect to the global optimum) as a function of the value of the performance index at the switching point.

In TPH, the user must pre-specify the stopping criteria for the stochastic and the local deterministic methods, ε1 and ε2. The value assigned to ε1 largely controls the robustness of the hybrid, i.e. the probability of convergence to the vicinity of the global solution. Therefore, it must be chosen so as to ensure that the stochastic method will arrive at a point inside the radius of convergence of the deterministic method to the global optimum. On the other hand, the value selected for ε2 will be crucial to minimize the final computation time while ensuring a solution very close to the global one. Therefore, one must find the best tuning, i.e. a compromise between robustness and efficiency. Choosing adequate criteria ε1 and ε2 is problem dependent, so, in general, the tuning of the hybrid method should be done for each specific class of problems. However, some general guidelines (i.e. a set of recommended default values) can be extracted from the solution of case studies. The typical behavior of the hybrid method vs. the switching point is outlined in Figs. 1 and 2, where a minimization process is assumed. Our experience indicates that a good value for ε1 is 0.05 (relative tolerance), whereas ε2 is more problem dependent, but values between 10⁻⁴ and 10⁻⁶ are usually fine for bioprocess engineering purposes.
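A minimal sketch of the two-phase idea is given below. It is not the ICRS/DS + CVP-sg implementation described above: as stand-ins we use SciPy's differential evolution for the global phase (stopped early with a loose tolerance, playing the role of ε1) and SLSQP for the local phase (with a tight tolerance, playing the role of ε2), applied to a generic multimodal test function.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(v):
    # Placeholder multimodal function standing in for -J(v) from a CVP parameterization.
    return np.sum(v ** 2) + 2.0 * np.sum(np.cos(3.0 * v))

bounds = [(-5.0, 5.0)] * 6

# Phase 1: stochastic global search stopped early; the loose tol plays the role of epsilon_1.
phase1 = differential_evolution(objective, bounds, tol=5e-2, maxiter=200,
                                polish=False, seed=0)

# Phase 2: local SQP-type refinement started from the stochastic solution;
# the tight ftol plays the role of epsilon_2.
phase2 = minimize(objective, phase1.x, method="SLSQP", bounds=bounds,
                  options={"ftol": 1e-8})

print("after phase 1:", phase1.fun, "  after phase 2:", phase2.fun)
```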

5. Case studies

Three challenging case studies are examined here. For the sake of brevity, only a brief description of each case study is given; the detailed statements of the dynamic optimization problems are given in the cited references.


5.1. Case study I: optimal control of a fed-batch fermentor for penicillin production

This problem considers a fed-batch reactor for the production of penicillin, as studied by Cuthrell and Biegler (1989). This problem has also been studied by many other authors (Dadebo and McAuley, 1995; Banga and Seider, 1996; Banga et al., 1997). We consider here the free terminal time version, where the objective is to maximize the amount of penicillin using the feed rate as the control variable. It should be noted that the resulting NLP problem (after using CVP) does not seem to be multimodal, but it has been reported that local gradient methods do experience convergence problems if initialized with far-from-optimum profiles, or when a very refined solution is sought. Thus, this example is excellent in order to illustrate the better robustness and efficiency of the alternative stochastic and hybrid approaches.

The mathematical statement of the free terminal time problem is: Find u(t) and tf over t ∈ [t0, tf] to maximize

$$ J = x_2(t_f)\, x_4(t_f) \qquad (15) $$

subject to:

$$ \frac{dx_1}{dt} = h_1 x_1 - u\,\frac{x_1}{500\, x_4} \qquad (16) $$

$$ \frac{dx_2}{dt} = h_2 x_1 - 0.01\, x_2 - u\,\frac{x_2}{500\, x_4} \qquad (17) $$

$$ \frac{dx_3}{dt} = -\frac{h_1 x_1}{0.47} - \frac{h_2 x_1}{1.2} - x_1\,\frac{0.029\, x_3}{0.0001 + x_3} + \frac{u}{x_4}\left(1 - \frac{x_3}{500}\right) \qquad (18) $$

$$ \frac{dx_4}{dt} = \frac{u}{500} \qquad (19) $$

$$ h_1 = \frac{0.11\, x_3}{0.006\, x_1 + x_3} \qquad (20) $$

$$ h_2 = \frac{0.0055\, x_3}{0.0001 + x_3 (1 + 10\, x_3)} \qquad (21) $$

where x1, x2 and x3 are the biomass, penicillin and substrate concentrations (g/L), and x4 is the volume (L). The initial conditions are:

$$ x(t_0) = [1.5 \;\; 0 \;\; 0 \;\; 7]^{T} \qquad (22) $$

There are several path constraints (upper and lower bounds) on the state variables (case III of Cuthrell and Biegler, 1989):

$$ 0 \le x_1 \le 40 \qquad (23) $$
$$ 0 \le x_3 \le 25 \qquad (24) $$
$$ 0 \le x_4 \le 10 \qquad (25) $$

The upper and lower bounds on the only control variable (the feed rate of substrate) are:

$$ 0 \le u \le 50 \qquad (26) $$
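The following sketch (ours, for illustration) encodes the model of Eqs. (16)-(21) and evaluates the performance index (15) for the constant starting profile u0(t) = 11.25 used in Section 6; this is exactly the kind of inner IVP that the CVP-based solvers evaluate repeatedly. The value printed corresponds to this constant feed, not to the optimum.

```python
import numpy as np
from scipy.integrate import solve_ivp

def penicillin_rhs(t, x, u):
    # Fed-batch penicillin model, Eqs. (16)-(21); u is the substrate feed rate.
    x1, x2, x3, x4 = x
    h1 = 0.11 * x3 / (0.006 * x1 + x3)
    h2 = 0.0055 * x3 / (0.0001 + x3 * (1.0 + 10.0 * x3))
    return [h1 * x1 - u * x1 / (500.0 * x4),
            h2 * x1 - 0.01 * x2 - u * x2 / (500.0 * x4),
            -h1 * x1 / 0.47 - h2 * x1 / 1.2
            - x1 * 0.029 * x3 / (0.0001 + x3) + (u / x4) * (1.0 - x3 / 500.0),
            u / 500.0]

x0 = [1.5, 0.0, 0.0, 7.0]                     # initial conditions, Eq. (22)
tf, u_const = 132.0, 11.25                    # constant starting profile, not the optimum
sol = solve_ivp(penicillin_rhs, (0.0, tf), x0, args=(u_const,),
                method="LSODA", rtol=1e-8, atol=1e-8)
print("J = x2(tf) * x4(tf) =", sol.y[1, -1] * sol.y[3, -1])
```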


5.2. Case study II: optimal control of a fed-batch reactor for ethanol production

This case study considers a fed-batch reactor for the production of ethanol, as studied by Cheng and Hwang (1990a) and others (Bojkov and Luus, 1996; Banga et al., 1997). The (free terminal time) optimal control problem is to maximize the yield of ethanol using the feed rate as the control variable. As in the previous case, this problem has been solved using CVP and gradient-based methods, but convergence problems have been frequently reported, something which has been confirmed by our own experience.

The mathematical statement of the free terminal time problem is: Find the feed flowrate u(t) and the final time tf over t ∈ [t0, tf] to maximize

$$ J = x_3(t_f)\, x_4(t_f) \qquad (27) $$

subject to:

$$ \frac{dx_1}{dt} = g_1 x_1 - u\,\frac{x_1}{x_4} \qquad (28) $$

$$ \frac{dx_2}{dt} = -10\, g_1 x_1 + u\,\frac{150 - x_2}{x_4} \qquad (29) $$

$$ \frac{dx_3}{dt} = g_2 x_1 - u\,\frac{x_3}{x_4} \qquad (30) $$

$$ \frac{dx_4}{dt} = u \qquad (31) $$

$$ g_1 = \frac{0.408}{1 + x_3/16}\,\frac{x_2}{0.22 + x_2} \qquad (32) $$

$$ g_2 = \frac{1}{1 + x_3/71.5}\,\frac{x_2}{0.44 + x_2} \qquad (33) $$

where x1, x2 and x3 are the cell mass, substrate and product concentrations (g/L), and x4 is the volume (L). The initial conditions are:

$$ x(0) = [1 \;\; 150 \;\; 0 \;\; 10]^{T} \qquad (34) $$

The constraints (upper and lower bounds) on the control variable (feed rate, L/h) are:

$$ 0 \le u \le 12 \qquad (35) $$

and there is an end-point constraint on the volume:

$$ 0 \le x_4(t_f) \le 200 \qquad (36) $$
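The ethanol model (28)-(33) can be encoded in the same way and plugged into the CVP machinery sketched in Section 3.2.1; the short illustration below (ours) simulates the constant policy u(t) = 3.0 mentioned in Section 6.2 over roughly the reported horizon and evaluates J = x3(tf) x4(tf). Again, this is only an inner IVP evaluation, not an optimal solution.

```python
from scipy.integrate import solve_ivp

def ethanol_rhs(t, x, u):
    # Fed-batch ethanol model, Eqs. (28)-(33); u is the feed flow rate (L/h).
    x1, x2, x3, x4 = x
    g1 = (0.408 / (1.0 + x3 / 16.0)) * x2 / (0.22 + x2)
    g2 = (1.0 / (1.0 + x3 / 71.5)) * x2 / (0.44 + x2)
    return [g1 * x1 - u * x1 / x4,
            -10.0 * g1 * x1 + u * (150.0 - x2) / x4,
            g2 * x1 - u * x3 / x4,
            u]

x0 = [1.0, 150.0, 0.0, 10.0]                  # initial conditions, Eq. (34)
sol = solve_ivp(ethanol_rhs, (0.0, 61.0), x0, args=(3.0,),   # constant u = 3.0, tf = 61 h
                method="LSODA", rtol=1e-8, atol=1e-8)
print("J = x3(tf) * x4(tf) =", sol.y[2, -1] * sol.y[3, -1])
```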

5.3. Case study III: optimal drug scheduling for cancer chemotherapy

Many researchers have devoted their efforts to determining whether current methods for drug administration during cancer chemotherapy are optimal, and whether alternative regimens should be considered. Martin (1992) considered the interesting problem of determining the optimal cancer drug schedule to decrease the size of a malignant tumor as measured at some particular time in the future. The drug concentration must be kept below some level throughout the treatment period, and the cumulative (toxic) effect of the drug must be kept below the ultimate tolerance level. Bojkov et al. (1993) and Luus et al. (1995) also studied this problem using direct search optimization. More recently, Carrasco and Banga (1997) have applied stochastic techniques to solve this problem, obtaining better results (Carrasco and Banga, 1998).

The mathematical statement of this dynamic optimization problem is: Find u(t) over t ∈ [t0, tf] to maximize

$$ J = x_1(t_f) \qquad (37) $$

subject to:

$$ \frac{dx_1}{dt} = -k_1 x_1 + k_2 (x_2 - k_3)\, H\{x_2 - k_3\} \qquad (38) $$

$$ \frac{dx_2}{dt} = u - k_4 x_2 \qquad (39) $$

$$ \frac{dx_3}{dt} = x_2 \qquad (40) $$

where the tumor mass is given by N = 10¹² exp(−x1) cells; x2 is the drug concentration in the body in drug units [D], and x3 the cumulative effect of the drug. The parameters are taken as k1 = 9.9 × 10⁻⁴ day⁻¹, k2 = 8.4 × 10⁻³ day⁻¹ [D]⁻¹, k3 = 10 [D], and k4 = 0.27 day⁻¹. The initial state considered is:

$$ x(0) = [\ln(100) \;\; 0 \;\; 0]^{T} \qquad (41) $$

where

$$ H\{x_2 - k_3\} = \begin{cases} 1 & \text{if } x_2 \ge k_3 \\ 0 & \text{if } x_2 < k_3 \end{cases} \qquad (42) $$

and the final time is tf = 84 days. The optimization is subject to the following constraint on the drug delivery (control variable):

$$ u \ge 0 \qquad (43) $$

There are also the following path constraints on the state variables:

$$ x_2(t) \le 50 \qquad (44) $$
$$ x_3(t) \le 2.1 \times 10^{3} \qquad (45) $$


Also, there should be at least a 50% reduction in the size of the tumor every three weeks, so that the following point constraints must be considered:

$$ x_1(21) \ge \ln(200) \qquad (46) $$
$$ x_1(42) \ge \ln(400) \qquad (47) $$
$$ x_1(63) \ge \ln(800) \qquad (48) $$
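As a final illustration (again our own sketch, not the authors' code), the chemotherapy model (38)-(42) is simulated below for an arbitrary constant infusion rate, and the path constraints (44)-(45) and point constraints (46)-(48) are checked a posteriori; this is the feasibility test that any candidate control profile must pass.

```python
import numpy as np
from scipy.integrate import solve_ivp

K1, K2, K3, K4 = 9.9e-4, 8.4e-3, 10.0, 0.27    # model parameters of Eqs. (38)-(40)

def chemo_rhs(t, x, u_rate):
    # Drug scheduling model, Eqs. (38)-(40), with the Heaviside kill term of Eq. (42).
    x1, x2, x3 = x
    kill = K2 * (x2 - K3) if x2 >= K3 else 0.0
    return [-K1 * x1 + kill, u_rate - K4 * x2, x2]

u_const = 7.0                                   # illustrative constant dose rate, not the optimum
sol = solve_ivp(chemo_rhs, (0.0, 84.0), [np.log(100.0), 0.0, 0.0],
                args=(u_const,), dense_output=True, max_step=0.5,
                rtol=1e-8, atol=1e-8)

x1_21, x1_42, x1_63 = (sol.sol(d)[0] for d in (21.0, 42.0, 63.0))
print("J = x1(84) =", sol.y[0, -1])
print("max x2 =", sol.y[1].max(), " (path limit 50)")
print("x3(84) =", sol.y[2, -1], " (path limit 2100)")
print("point constraints satisfied:",
      x1_21 >= np.log(200.0), x1_42 >= np.log(400.0), x1_63 >= np.log(800.0))
```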

6. Results and discussion

In order to perform a proper comparative evaluation of the methods considered, the quality of the solution and the computational cost (i.e. performance index values and computation times) will be considered. Further, these results will also be compared with the best found in the literature when possible. In order to ensure high accuracy and consistent results, in all the CVP-based methods the relative and absolute error tolerances for the integration of the system dynamics were set to tight values ranging from 10⁻⁶ to 10⁻⁹. All computations were done in double precision. In order to make meaningful comparisons, computation times in the figures are reported as relative numbers with respect to the fastest method which arrived to within 0.5% of the best solution. All the absolute CPU times reported in the text correspond to a PC Pentium II 450 MHz.

It should be noted that the relative performance of the various methods could also depend on the tuning parameters used for each algorithm. In the case of the deterministic (local) methods (i.e. CVP-fd, CVP-sg, and CP-col), no special tuning of the SQP method was needed, and the same settings for similar parameters (e.g. the number of elements in the control discretization) were used in order to ensure a fair comparison. Regarding the stochastic methods, it is well known that the tuning of their internal parameters can have a significant effect on the results. Rather than performing a time-consuming preliminary study (tuning each of these methods for each of the problems), we preferred to follow the recommended guidelines given by the original authors of each method (as cited above). In this way, we avoided the overhead of preliminary runs for such tuning, thus making a fairer comparison with the deterministic methods.

Besides, most prospective users of these methods are interested in knowing about their performance when used with default recommended values, i.e. without any expensive preliminary tuning. Therefore, we concluded that this approach would offer a more realistic and useful evaluation of the different algorithms. Finally, the TPH hybrid method was tuned following the guidelines stated above.

6.1. Case I

The same (constant) starting profile u0(t) = 11.25 was considered for all methods. In the CVP methods, a value of 10⁻⁶ was set for the relative and absolute error tolerances for the solution of the inner initial value problem. The best solution was found by the TPH method, which arrived at a performance index of J = 87.99 with a fed-batch process time of tf = 132 h. The fastest method to arrive within 0.5% of this best solution was CVP-sg, with a computation time of 25 s (PC PII). It should be noted that the computation time of TPH was only two times larger, a very reasonable amount of extra work. The only other methods which were able to produce solutions within 0.5% of the best were ICRS/DS and DE, both of stochastic nature and with a computational effort slightly smaller than that of TPH. The GA approach arrived at the worst result in the largest CPU time, while DHC arrived at a similar value but much faster. Finally, the CVP-fd and CP-col deterministic methods converged to solutions more than 1% away from the best, with computational efforts similar to that of CVP-sg. The detailed results (relative CPU time and % error with respect to the best solution) for all methods are compared in Fig. 3. The best optimal control, found by TPH, is presented in Fig. 4, with the corresponding profiles for the states shown in Fig. 5.

6.2. Case II

This case study was first solved, as a free final time optimal control problem, by Bojkov and Luus (1996). Using iterative dynamic programming, they reported a performance index value of J = 20838 and tf = 61.3 h. Note that this was a typical result starting from the initial control policy u0(t) = 3.0. Further refinements were achieved by these authors, arriving at J = 20842 after a large number of runs (changing search parameters and initial control profiles), each one taking 4 h of CPU time (Pentium/66 MHz).


Fig. 3. Relative CPU time and final error with respect to the best solution (J = 87.99, tf = 132 h) for case study I.

Fig. 5. State profiles for the best optimal control, case study I.

In our runs, the best results were again obtained by the TPH method, which arrived at J = 20839 and tf = 61.17 h in only 35 s of computation (PC PII), which is orders of magnitude faster than IDP (after taking into account the different machine performances). We also found that no other method was faster in arriving to within 0.5% of this J value, confirming the good efficiency of TPH. CP-col arrived at essentially the same performance index but with a CPU time three times larger.

Only CVP-sg and DE were able to reach within 0.5%, the latter with a rather large computational effort. The CVP-fd and GA methods failed quite dramatically, the latter at a huge computational cost (50 times that of TPH). The detailed results (relative CPU time and % error with respect to the best solution) for all methods are compared in Fig. 6; note the log scale on the relative CPU time axis. The best optimal control, again found by TPH, is presented in Fig. 7, with the corresponding profiles for the states shown in Fig. 8.

Fig. 4. Best optimal control (J = 87.99, tf = 132 h) for case study I.

Fig. 6. Relative CPU time (log scale) and final error with respect to the best solution (J = 20839, tf = 61.17 h) for case study II.


Fig. 7. Best optimal control (J = 20839, tf = 61.17 h) for case study II.

Fig. 9. Relative CPU time and final error with respect to the best solution (J = 17.4811) for case study III.

6.3. Case III

Since this is a fixed terminal time problem with only a few states, it could appear simple to solve at first glance. However, the path and point constraints on the state variables make this problem especially challenging. In fact, the optimal control of Luus et al. (1995), who reported a value of J = 17.4760, was found to be infeasible (the path constraint on x2 was slightly violated). These discrepancies are probably due to the different integrators, tolerance values and ways of checking the path constraints, as explained by Carrasco and Banga (1997). These authors obtained feasible solutions using two stochastic algorithms, but these were later found to be sub-optimal due to the tolerance values used for the optimization convergence criterion. However, if tighter values are used with those methods, the CPU time becomes excessive. Therefore, this is another good case study to demonstrate the advantages of the hybrid approach.

In agreement with the previous results, TPH arrived at the best result (J = 17.4811, which is better than any other published solution), and no other method arrived faster to within 0.5% of this value, so the CPU time of TPH (just 40 s on a PC PII) was taken as the reference for obtaining the relative computation times. Once again, this computational effort was orders of magnitude smaller than in previously published results. Interestingly, DE was the only method to arrive close (J = 17.4772) to the result of TPH, but the computational effort was rather significant. The CVP-sg method converged to J = 17.38, and all the other methods produced solutions with errors larger than 2%. The detailed results (relative CPU time and % error with respect to the best solution) for all methods are presented in Fig. 9. The best optimal control, found by TPH as in the other cases, is presented in Fig. 10, with the states shown in Fig. 11.

Fig. 8. State profiles for the best optimal control, case study II.


Fig. 10. Best optimal control (J = 17.4811) for case study III.

Fig. 11. State profiles for the best optimal control, case study III.

7. Conclusions

In this work, we have compared several direct numerical methods for the optimal control of non-linear bioprocesses, a problem of major importance in bioprocess engineering. The methods considered here essentially cover most of the currently known direct approaches, including control vector parameterization (CVP) and complete parameterization, and deterministic and stochastic solvers for the resulting non-linear programming problems.

Considering three challenging case studies, many of the methods, both deterministic and stochastic, failed to solve some or all of the problems, converging to local solutions and illustrating the motivation of this work. In contrast, the two-phase hybrid (TPH) method was able to solve the case studies better than any other method, and with the minimum (or close to the minimum) computational effort. Thus, this hybrid approach presented the best efficiency and robustness. The DE stochastic method also presented good robustness, although with larger computation times. These results confirm the usefulness of stochastic methods for ensuring robustness, and the superiority of the hybrid approach in obtaining the best trade-off between such robustness and efficiency. However, it is also important to note that, for case studies I and II, the CVP-sg method was able to find near-optimal solutions in less computation time than the TPH method. These good-enough results can be interesting for applications constrained by CPU time, such as model predictive control, which is usually implemented following a receding horizon dynamic re-optimization scheme.

Finally, it should be recognized that the problems presented here, although non-trivial, are small in size, i.e. the dynamics of the systems are described by a small number of differential equations. Many applications, like the dynamic optimization of distributed bioprocesses, or of metabolic networks, involve systems which can be several orders of magnitude larger. Although the deterministic local approaches have been applied successfully to rather large problems, the scalability of the stochastic and hybrid approaches remains a research issue which is the subject of ongoing work.

Acknowledgements

The authors thank the Spanish Ministry of Science and Technology (MCyT project AGL2001-2610-C02-02) and Xunta de Galicia (grant PGIDIT02PXIC40211PN) for financial support.

References

Balsa-Canto, E., Alonso, A.A., Banga, J.R., 1998. Dynamic optimization of bioprocesses: deterministic and stochastic strategies. Presented at ACoFoP IV (Automatic Control of Food and Biological Processes), Göteborg, Sweden.
Balsa-Canto, E., Banga, J.R., Alonso, A.A., Vassiliadis, V.S., 2000. Efficient optimal control of bioprocesses using second-order information. Ind. Eng. Chem. Res. 39, 4287–4295.

Banga, J.R., Alonso, A.A., Singh, R.P., 1994. Stochastic optimal control of fed-batch bioreactors. Presented at the AIChE Annual Meeting 1994, San Francisco.
Banga, J.R., Alonso, A.A., Singh, R.P., 1997. Stochastic dynamic optimization of batch and semicontinuous bioprocesses. Biotechnol. Prog. 13, 326–335.
Banga, J.R., Balsa-Canto, E., Moles, C.G., Alonso, A.A., 2003. Dynamic optimization of bioreactors: a review. Proc. Ind. Natl. Sci. Acad. 69A, 257–265.
Banga, J.R., Irizarry, R., Seider, W.D., 1998. Stochastic optimization for optimal and model-predictive control. Comput. Chem. Eng. 22 (4–5), 603–612.
Banga, J.R., Irizarry-Rivera, R., Seider, W.D., 1995. Stochastic optimization for optimal and model-predictive control. Presented at the AIChE Annual Meeting 1995, Miami.
Banga, J.R., Seider, W.D., 1996. Global optimization of chemical processes using stochastic algorithms. In: Floudas, C.A., Pardalos, P.M. (Eds.), State of the Art in Global Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands, pp. 563–583.
Biegler, L.T., Cervantes, A.M., Wachter, A., 2002. Advances in simultaneous strategies for dynamic process optimization. Chem. Eng. Sci. 57, 575–593.
Bojkov, B., Hansel, R., Luus, R., 1993. Application of direct search optimization to optimal control problems. Hungarian J. Ind. Chem. 21, 177–185.
Bojkov, B., Luus, R., 1996. Optimal control of nonlinear systems with unspecified final times. Chem. Eng. Sci. 51 (6), 905–919.
Bryson, A.E., Ho, Y.C., 1975. Applied Optimal Control. Hemisphere Pub. Corp., New York.
Carrasco, E., Banga, J.R., 1997. Dynamic optimization of batch reactors using adaptive stochastic algorithms. Ind. Eng. Chem. Res. 36 (6), 2252–2261.
Carrasco, E., Banga, J.R., 1998. A hybrid method for the optimal control of chemical processes. In: Proceedings of the International Conference on CONTROL 98, UKACC.
Cheng, C., Hwang, C., 1990a. Optimal control computation for differential-algebraic process systems with general constraints. Chem. Eng. Commun. 97, 9–26.
Cuthrell, J., Biegler, L., 1989. Simultaneous optimization and solution methods for batch reactor control profiles. Comput. Chem. Eng. 13, 49.
Dadebo, S.A., McAuley, K.B., 1995. Dynamic optimization of constrained chemical engineering problems using dynamic programming. Comput. Chem. Eng. 19 (5), 513–525.
de la Maza, M., Yuret, D., 1994. Dynamic hill climbing. AI Expert 9 (3), 26–31.
De Tremblay, M., Perrier, M., Chavarie, C., Archambault, J., 1992. Optimization of fed-batch culture of hybridoma cells using dynamic programming: single and multi feed cases. Bioprocess Eng. 7, 229–234.
Hong, J., 1986. Optimal substrate feeding policy for a fed batch fermentation with substrate and product inhibition kinetics. Biotechnol. Bioeng. 28, 1421–1431.
Jayant, A., Pushpavanam, S., 1998. Optimization of a biochemical fed-batch reactor: transition from a nonsingular to a singular problem. Ind. Eng. Chem. Res. 37, 4314–4321.
Johnson, A., 1987. The control of fed-batch fermentation processes: a survey. Automatica 23, 691–705.
Lee, J.-H., Lim, H., Yoo, Y., Park, Y., 1999. Optimization of feed rate profile for the monoclonal antibody production. Bioprocess Eng. 20, 137–146.
Lee, J., Ramirez, W., 1994. Optimal fed-batch control of induced foreign protein production by recombinant bacteria. AIChE J. 40 (5), 899.
Lim, H.C., Tayeb, Y., Modak, J.M., Bonte, P., 1986. Computational algorithms for optimal feed rates for a class of fed-batch fermentation: numerical results for penicillin and cell mass production. Biotechnol. Bioeng. 28, 1408.
Lin, J.-S., Hwang, C., 1998. Enhancement of the global convergence of using iterative dynamic programming to solve optimal control problems. Ind. Eng. Chem. Res. 37, 2469–2478.
Luus, R., 1992. On the application of iterative dynamic programming to singular optimal control problems. IEEE Trans. Autom. Control 37 (11), 1802.
Luus, R., Hartig, F., Keil, F., 1995. Optimal drug scheduling of cancer chemotherapy by direct search optimization. Hungarian J. Ind. Chem. 23, 55–58.
Luus, R., Hennessy, D., 1999. Optimization of fed-batch reactors by the Luus–Jakola optimization procedure. Ind. Eng. Chem. Res. 38 (5), 1948–1955.
Mahadevan, R., Doyle, F.J., 2003. On-line optimization of recombinant product in a fed-batch bioreactor. Biotechnol. Prog. 19, 639–646.
Martin, R.B., 1992. Optimal control drug scheduling of cancer chemotherapy. Automatica 28, 1113–1123.
Michalewicz, Z., 1996. Genetic Algorithms + Data Structures = Evolution Programs, third ed. Springer-Verlag.
Modak, J., Lim, H., Tayeb, Y., 1986. General characteristics of optimal feed rate profiles for various fed-batch fermentation processes. Biotechnol. Bioeng. 28, 1396–1407.
Oberle, H.J., Sothmann, B., 1999. Numerical computation of optimal feed rates for a fed-batch fermentation model. JOTA 100, 1–13.
Park, S., Ramirez, W.F., 1988. Optimal production of secreted protein in fed-batch reactors. AIChE J. 34 (9), 1550.
Rani, K.Y., Rao, V.S.R., 1999. Control of fermenters: a review. Bioprocess Eng. 21, 77–88.
Roubos, J., de Gooijer, C., Straten, G.V., Boxtel, A.V., 1997. Comparison of optimization methods for fed-batch cultures of hybridoma cells. Bioprocess Eng. 17, 99–102.
Roubos, J., Straten, G.V., Boxtel, A.V., 1999. An evolutionary strategy for fed-batch bioreactor optimization: concepts and performance. J. Biotechnol. 67, 173–187.
San, K., Stephanopoulos, G., 1984. A note on the optimality criteria for maximum biomass production in a fed-batch fermentor. Biotechnol. Bioeng. 26, 1261–1264.
San, K., Stephanopoulos, G., 1989. Optimization of a fed-batch penicillin fermentation: a case of singular optimal control with state constraints. Biotechnol. Bioeng. 34, 72–78.
Simutis, R., Lübbert, A., 1997. A comparative study on random search algorithms for biotechnical process optimization. J. Biotechnol. 52, 245–256.

Smets, I.Y., Claes, J.E., November, E.J., Bastin, G.P., Van Impe, J.F., 2004. Optimal adaptive control of (bio)chemical reactors: past, present and future. J. Process Control 14, 795–805.
Storn, R., Price, K., 1997. Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 11, 341–359.
Tholudur, A., Ramirez, W.F., 1997. Obtaining smoother singular arc policies using a modified iterative dynamic programming algorithm. Int. J. Control 68 (5), 1115–1128.
Van Impe, J.F., Nicolaï, B.M., Vanrolleghem, P.A., Spriet, J., Moor, B., Vandewalle, J., 1992. Optimal control of the penicillin G fed-batch fermentation: an analysis of a modified unstructured model. Chem. Eng. Commun. 117, 337–353.


Vassiliadis, V.S., Pantelides, C.C., Sargent, R.W.H., 1994a. Solution of a class of multistage dynamic optimization problems. 1. Problems without path constraints. Ind. Eng. Chem. Res. 33 (9), 2111–2122.
Vassiliadis, V.S., Pantelides, C.C., Sargent, R.W.H., 1994b. Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Ind. Eng. Chem. Res. 33 (9), 2123–2133.
Yamashita, Y., Shima, M., 1997. Numerical computational method using genetic algorithm for the optimal control problem with terminal constraints and free parameters. Nonlin. Anal. Theory Methods Appl. 30 (4), 2285–2290.
Zuo, K., Wu, W., 2000. Semi-realtime optimization and control of a fed-batch fermentation system. Comput. Chem. Eng. 24, 1105–1109.
