Beruflich Dokumente
Kultur Dokumente
Sr No.
Topic
1
Optimisation & Scheduling of Batch
Process Plants
2
Introduction to Batch Scheduling - 1
3
4
Faculty
Dr. M. S. Rao, Professor, Department
of Chemical Engineering, DDU
Dr. M. S. Rao, Professor, Department
of Chemical Engineering, DDU
Introduction to Batch Scheduling - 2 Dr. M. S. Rao, Professor, Department
of Chemical Engineering, DDU
Overview of Planning and Scheduling Dr. Munawar A. Shaik, Assistant
: Short term Scheduling for Batch
Professor Department of Chemical
Plants Discrete Time Model
Engineering, IIT, Delhi
Short term Scheduling for Batch
Dr. Munawar A. Shaik, Assistant
Plants : Slot based and Global -event Professor Department of Chemical
based Continuous time Models
Engineering, IIT, Delhi
Short term Scheduling for Batch
Dr. Munawar A. Shaik, Assistant
Plants : Unit-Specific Event-based
Professor Department of Chemical
Continuous time Models
Engineering, IIT, Delhi
Short term Scheduling of Continuous Dr. Munawar A. Shaik, Assistant
Plants : Industrial Case Study of
Professor Department of Chemical
FMCG.
Engineering, IIT, Delhi
Cyclic Scheduling of Continuous
Dr. Munawar A. Shaik, Assistant
Plants
Professor Department of Chemical
Engineering, IIT, Delhi
Advance Scheduling of Pulp and
Dr. Munawar A. Shaik, Assistant
Paper Plant
Professor Department of Chemical
Engineering, IIT, Delhi
This is page i
Printer: Opaque this
ii
ABSTRACT This book contains information necessary to introduce concept of optimisation to the biginers. Advanced optimisation techniques necessary for the practicing engineering with a special emphasis of MINLP is
discussed. Discussion on schduling of batch plants and recent advences in
the area of schduling of batch plants are also presented. Dynamic optimisation and global optimisation techniques are also introduced in this book.
Contents
1 Introduction
1.1 Applications of Optimisation problems .
1.2 Types of Optimization and Optimisation
1.2.1 Static Optimization . . . . . . .
1.2.2 Dynamic Optimization . . . . . .
.
.
.
.
.
.
.
.
1
1
3
3
4
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
7
7
7
7
9
9
11
12
12
13
15
16
17
17
18
19
. . . . . .
Problems
. . . . . .
. . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
iv
Contents
3 Linear Programming
3.1 The Simplex Method . . . . . . . . . . . . . . . . .
3.2 Infeasible Solution . . . . . . . . . . . . . . . . . .
3.3 Unbounded Solution . . . . . . . . . . . . . . . . .
3.4 Multiple Solutions . . . . . . . . . . . . . . . . . .
3.4.1 Matlab code for Linear Programming (LP)
.
.
.
.
.
21
23
27
29
30
30
4 Nonlinear Programming
4.1 Convex and Concave Functions . . . . . . . . . . . . . . . .
33
36
5 Discrete Optimization
5.1 Tree and Network Representation . . . . . . . . . . . . . . .
5.2 Branch-and-Bound for IP . . . . . . . . . . . . . . . . . . .
39
41
42
47
47
47
50
51
56
58
6.6.1
6.6.2
6.6.3
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
60
62
7 Dynamic Optimization
7.1 Dynamic programming . . . . . . . . . . . . . . . . . . . . .
71
73
75
75
6.6.4
6.6.5
6.6.6
8.2
8.3
Simulated Annealing . .
8.2.1 Introduction . .
GA . . . . . . . . . . . .
8.3.1 Introduction . .
8.3.2 Denition . . . .
8.3.3 Coding . . . . .
8.3.4 Fitness . . . . . .
8.3.5 Operators in GA
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
63
65
66
68
75
75
77
77
78
78
79
79
Contents
8.4
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
81
81
82
85
86
86
87
87
90
91
95
95
9 A GAMS Tutorial
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . .
9.2 Structure of a GAMS Model . . . . . . . . . . . . .
9.3 Sets . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4 Data . . . . . . . . . . . . . . . . . . . . . . . . . .
9.4.1 Data Entry by Lists . . . . . . . . . . . . .
9.4.2 Data Entry by Tables . . . . . . . . . . . .
9.4.3 Data Entry by Direct Assignment . . . . .
9.5 Variables . . . . . . . . . . . . . . . . . . . . . . .
9.6 Equations . . . . . . . . . . . . . . . . . . . . . . .
9.6.1 Equation Declaration . . . . . . . . . . . .
9.6.2 GAMS Summation (and Product) Notation
9.6.3 Equation Denition . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
99
99
102
103
105
105
106
107
108
108
109
109
109
Objective Function . . . . . . . . . . . . . . . . . . . . . . .
Model and Solve Statements . . . . . . . . . . . . . . . . . .
Display Statements . . . . . . . . . . . . . . . . . . . . . . .
9.9.1 The .lo, .l, .up, .mDatabase . . . . . . . . . . . . .
9.9.2 Assignment of Variable Bounds and/or Initial Values
9.9.3 Transformation and Display of Optimal Values . . .
9.10 GAMS Output . . . . . . . . . . . . . . . . . . . . . . . . .
9.10.1 Echo Prints . . . . . . . . . . . . . . . . . . . . . . .
9.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . .
111
111
112
112
113
113
114
115
116
8.5
Dierential Evolution . . . . . . . . . .
8.4.1 Introduction . . . . . . . . . . .
8.4.2 DE at a Glance . . . . . . . . . .
8.4.3 Applications of DE . . . . . . . .
Interval Mathematics . . . . . . . . . . .
8.5.1 Introduction . . . . . . . . . . .
8.5.2 Interval Analysis . . . . . . . . .
8.5.3 Real examples . . . . . . . . . .
8.5.4 Interval numbers and arithmetic
8.5.5 Global optimization techniques .
8.5.6 Constrained optimization . . . .
8.5.7 References . . . . . . . . . . . . .
9.7
9.8
9.9
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
119
vi
Contents
Preface
Optimization has pervaded all spheres of human endeavor and process industris are not an expection. Impact of optimisation has increased in last
ve decades. Modern society lives not only in an environment of intense
competition but is also constrained to plan its growth in a sustainable
manner with due concern for conservation of resources. Thus, it has become imperative to plan, design, operate, and manage resources and assets
in an optimal manner. Early approaches have been to optimize individual
activities in a standalone manner, however, the current trend is towards an
integrated approach: integrating synthesis and design, design and control,
production planning, scheduling, and control. The functioning of a system
may be governed by multiple performance objectives. Optimization of such
systems will call for special strategies for handling the multiple objectives
to provide solutions closer to the systems requirement.
Optimization theory had evolved initially to provide generic solutions to
optimization problems in linear, nonlinear, unconstrained, and constrained
domains. These optimization problems were often called mathematical programming problems with two distinctive classications, namely linear and
nonlinear programming problems. Although the early generation of programming problems were based on continuous variables, various classes
of assignment and design problems required handling of both integer and
continuous variables leading to mixed integer linear and nonlinear programming problems (MILP and MINLP). The quest to seek global optima
has prompted researchers to develop new optimization approaches which
do not get stuck at a local optimum, a failing of many of the mathematical programming methods. Genetic algorithms derived from biology and
viii
Preface
Preface
ix
Preface
This is page 1
Printer: Opaque this
1
Introduction
Optimization involves nding the minimum/maximum of an objective function f (x) subject to some constraint x b S. If there is no constraint for
x to satisfy or, equivalently, S is the universe then it is called an unconstrained optimization; otherwise, it is a constrained optimization. In this
chapter, we will cover several unconstrained optimization techniques such
as the golden search method, the quadratic approximation method, the
NelderMead method, the steepest descent method, the Newton method,
the simulated-annealing (SA) method, and the genetic algorithm (GA). As
for constrained optimization, we will only introduce the MATLAB built-in
routines together with the routines for unconstrained optimization. Note
that we dont have to distinguish maximization and minimization because
maximizing f (x) is equivalent to minimizing -f (x) and so, without loss of
generality, we deal only with the minimization problems.
1. Introduction
that variables cannot take arbitrary values. For example, while designing a bridge, an engineer will be interested in minimizing the cost, while
maintaining a certain minimum strength for the structure. Optimizing the
surface area for a given volume of a reactor is another example of constrained optimization. While most formulations of optimization problems
require the global minimum to be found, mosi of the methods are only able
to nd a local minimum. A function has a local minimum, at a point where
it assumes the lowest value in a small neighbourhood of the point, which
is not at the boundary of that neighbourhood.
To nd a global minimum we normally try a heuristic approach where
several local minima are found by repeated trials with dierent starting
values or by using dierent techniques. The dierent starting values may
be obtained by perturbing the local minimizers by appropriate amounts.
The smallest of all known local minima is then assumed to be the global
minimum. This procedure is obviously unreliable, since it is impossible to
ensure that all local minima have been found. There is always the possibility that at some unknown local minimum, the function assumes an even
smaller value. Further, there is no way of verifying that the point so obtained is indeed a global minimum, unless the value of the function at the
global minimum is known independently. On the other hand, if a point i
s claimed to be the solution of a system of non-linear equations, then it
can, in principle, be veried by substituting in equations to check whether
all the equations are satised or not. Of course, in practice, the round-o
error introduces some uncertainty, but that can be overcome.
Owing to these reasons, minimization techniques are inherently unreliable and should be avoided if the problem can be reformulated to avoid
optimization. However, there are problems for which no alternative solution method is known and we have to use these techniques. The following
are some examples.
1. Not much can be said about the existence and uniqueness of either
the
2. It is possible that no minimum of either type exists, when the function
is
3. Even if the function is bounded from below, the minimum may not
exist
4. Even if a minimum exists, it may not be unique; for exarnple,Xx) =
sin x global or the local minimum of a function of several variables.
not bounded from below [e.g.,Ax) = XI. [e.g.,Ax) = e"]. has an innite
number of both local and global minima.
5. Further, innite number of local minimum may exist, even when there
is no global minimum [e.g.,Ax) = x + 2 sin x].
6. If the function or its derivative is not continuous, then the situation
could be even more complicated. For example,Ax) = & has a global minimum at x = 0, which is not a local minimum [i.e.,Ax) = 01.
Optimization in chemical process industries infers the selection of equipment and operating conditions for the production of a given material so
that the prot will be maximum. This could be interpreted as meaning
the maximum output of a particular substance for a given capital outlay,
or the minimum investment for a specied production rate. The former
is a mathematical problem of evaluating the appropriate values of a set
of variables to maximize a dependent variable, whereas the latter may be
considered to be one of locating a minimum value. However, in terms of
prot, both types of problems are maximization problems, and the solution of both is generally accomplished by means of an economic balance
(trade-@ between the capital and operating costs. Such a balance can be
represented as shown in Fig.(??), in which the capital, the operating cost,
and the total cost are plotted againstf, which is some function of the size
of the equipment. It could be the actual size of the equipment; the number of pieces of equipment, such as the number of stirred tank reactors in
a reactor battery; the frames in a lter press; some parameter related to
the size of the equipment, such as the reux ratio in a distillation unit; or
the solvent-to-feed ratio in a solvent extraction process. Husain and Gangiah (1976) reported some of the optimization techniques that are used for
chemical engineering applications.
1. Introduction
1. Introduction
Multiobjective optimization: Problems involving more than one objective. Often involves the above categories as subcategories.
This is page 7
Printer: Opaque this
2
Conventional optimisation techniques
2.1 Introduction
In this chapter, we will introduce you some of the very well known conventional optimisation techniques. A very brief review of search methods
followed by gradient based methods are presented. The compilation is no
way exhaustive. Matlab programs for some of the optimisation techniques
are also presented in this chapter.
Optimum
Is in this
area
0
10
The most obvious way is to place the four experiments equidistance over
the interval that is at 2,4,6 and 8. We can see from the gure that the value
at 4 is higher than the value of y at 2. Since we are dealing with a unimodel
function the optimal value can not lie between x=0 and x=2. By similar
reasoning the area between x=8 and 10 can be eleminated as well as that
between 6 and 8. the area remaining is area between 2 and 6.
If we take the original interval as L and the F as fraction of original
interval left after performing N experiments then N experiments devide
the interval into N+1 intervals. Width of each interval is NL+1 OPtimum
can be specied in two of these intervals.That leaves 40% area in the given
example
F
=
=
2L
1
2
=
N +1 L
N +1
2
= 0:4
4+1
=
=
L
1
2
=
N +2
+1 L
2
= 0:33
4+2
N
2
10
Optimum
Is in this
area
0
10
10
Optimum
Is in this
area
0
=
=
11
1
N
22
1
= 0:25
22
xn = xn
n
2
+ xn
FN
FN
2
10 = 4
5
The area remaining is between 0 and 6 the new length will be 6 and new
value of d2 is obtained by substituting N-1 for N
d2 =
FN
FN
3
1
L=
F1
1
L= 6=2
F3
3
The next pair of experiments are performed around 2 and the experiment at 4 need not be performed as we have allready done it. This is the
advantage of Fibonacci search the remaining experiment can be performed
as dicocomos search to identify the optimal reagion around 4. This terns
out to be the region between 4 and 6. The fraction left out is
F =
1
1
1
=
= = 0:2
FN
F4
5
12
4)2
(x2
8
(2.1)
13
c=b
(a + (1
r)(b
a)) = rhord
a = a + rh
a = rh
r; r2 + r 1 = 0;
p
p
1+ 1+4
1+ 5
=
2
2
(2.2)
(2.3)
fo x21
2 [o (x1
x22 + f1 x22
x2 ) + f1 (x2
x20 + f2 x20
x0 ) + f2 (x0
x21
x1 )]
(2.4)
x = x3 =
fo x21
2 [o (x1
x22 + f1 x22
x2 ) + f1 (x2
x20 + f2 x20
x0 ) + f2 (x0
x21
j x1=x+h
x1 )] x2=x1+h
(2.5)
We keep updating the three points this way until jx2 - x0j
0 and/or
jf (x2) - f (x0)j
0, when we stop the iteration and declare x3 as the
minimum point. The rule for updating the three points is as follows.
1. In case x0 < x3 < x1, we take {x0, x3, x1} or {x3, x1, x2} as the new
set of three points depending on whether f (x3) < f(x1) or not.
2. In case x1 < x3 < x2, we take {x1, x3, x2} or {x0, x1, x3} as the new
set of three points depending on whether f (x3) f (x1) or not.
This procedure, called the quadratic approximation method, is cast into
the MATLAB routine opt_quad(), which has the nested (recursive call)
structure. We made the MATLAB program nm712.m, which uses this
14
15
rf (x) =
(2.6)
with the step-size k (at iteration k) adjusted so that the function value
is minimized along the direction by a (one-dimensional) line search technique like the quadratic approximation method. The algorithm of the steepest descent method is summarized in the following box and cast into the
MATLAB routine opt_steep().
We made the MATLAB program nm714.m to minimize the objective
function (7.1.6) by using the steepest descent method.
STEEPEST DESCENT ALGORITHM
Step 0. With the iteration number k = 0, nd the function value f0 = f
(x0) for the initial point x0.
Step 1. Increment the iteration number k by one, nd the step-size k 1
along the direction of the negative gradient -gk-1 by a (one-dimensional)
line search like the quadratic approximation method.
k 1
= ArgM in f (xk
gk 1
)
jjgk 1jj
(2.7)
Step 2. Move the approximate minimum by the step-size k-1 along the
direction of the negative gradient -gk-1 to get the next point
xk = xk
1gk
1=jjgk
1jj
Step 3. If xk
xk-1 and f (xk) f (xk-1), then declare xk to be the
minimum and terminate the procedure. Otherwise, go back to step 1.
function [xo,fo] = opt_steep(f,x0,TolX,TolFun,alpha0,MaxIter)
% minimize the ftn f by the steepest descent method.
%input: f = ftn to be given as a string f
% x0 = the initial guess of the solution
%output: x0 = the minimum point reached
% f0 = f(x(0))
if nargin < 6, MaxIter = 100; end %maximum # of iteration
if nargin < 5, alpha0 = 10; end %initial step size
if nargin < 4, TolFun = 1e-8; end %jf(x)j < TolFun wanted
if nargin < 3, TolX = 1e-6; end %jx(k)- x(k - 1)j<TolX wanted
x = x0; fx0 = feval(f,x0); fx = fx0;
alpha = alpha0; kmax1 = 25;
warning = 0; %the # of vain wanderings to nd the optimum
step size
16
for k = 1: MaxIter
g = grad(f,x); g = g/norm(g); %gradient as a row vector
alpha = alpha*2; %for trial move in negative gradient direction
fx1 = feval(f,x - alpha*2*g);
for k1 = 1:kmax1 %nd the optimum step size(alpha) by line
search
fx2 = fx1; fx1 = feval(f,x-alpha*g);
if fx0 > fx1+TolFun & fx1 < fx2 - TolFun %fx0 > fx1 < fx2
den = 4*fx1 - 2*fx0 - 2*fx2; num = den - fx0 + fx2; %Eq.(7.1.5)
alpha = alpha*num/den;
x = x - alpha*g; fx = feval(f,x); %Eq.(7.1.9)
break;
else alpha = alpha/2;
end
end
if k1 >= kmax1, warning = warning + 1; %failed to nd optimum step size
else warning = 0;
end
if warning >= 2j(norm(x - x0) < TolX&abs(fx - fx0) < TolFun),
break; end
x0 = x; fx0 = fx;
end
xo = x; fo = fx;
if k == MaxIter, fprintf(Just best in %d iterations,MaxIter),
end
%nm714
f713 = inline(x(1)*(x(1) - 4 - x(2)) + x(2)*(x(2)- 1),x);
x0 = [0 0], TolX = 1e-4; TolFun = 1e-9; alpha0 = 1; MaxIter
= 100;
[xo,fo] = opt_steep(f713,x0,TolX,TolFun,alpha0,MaxIter)
17
(2.8)
h1 (x)
h2 (x)
h(x) = h3 (x) = 0
:
h4 (x)
According to the Lagrange multiplier method, this problem can be converted to the following unconstrained optimization problem:
M inl(x; ) = f (x) +
h(x) = f (x) +
M
X
m=1
m hjm (x)
18
(2.9)
19
20
g = inline(g0,x);
x0 = [0 0.5] %initial guess
[xon,fon] = opt_Nelder(f,x0) %min point, its ftn value by opt_Nelder
[xos,fos] = fminsearch(f,x0) %min point, its ftn value by fminsearch()
[xost,fost] = opt_steep(f,x0) %min point, its ftn value by opt_steep()
TolX = 1e-4; MaxIter = 100;
xont = Newtons(g,x0,TolX,MaxIter);
xont,f(xont) %minimum point and its function value by Newtons()
[xocg,focg] = opt_conjg(f,x0) %min point, its ftn value by
opt_conjg()
[xou,fou] = fminunc(f,x0) %min point, its ftn value by fminunc()
For constraint optimisation
%nm732_1 to solve a constrained optimization problem by
fmincon()
clear, clf
ftn=((x(1) + 1.5)^2 + 5*(x(2) - 1.7)^2)*((x(1)-1.4)^2 + .6*(x(2).5)^2);
f722o = inline(ftn,x);
x0 = [0 0.5] %initial guess
A = []; B = []; Aeq = []; Beq = []; %no linear constraints
l = -inf*ones(size(x0)); u = inf*ones(size(x0)); % no lower/upperbound
options = optimset(LargeScale,o); %just [] is OK.
[xo_con,fo_con] = fmincon(f722o,x0,A,B,Aeq,Beq,l,u,f722c,options)
[co,ceqo] = f722c(xo_con) % to see how constraints are.
This is page 21
Printer: Opaque this
3
Linear Programming
22
3. Linear Programming
min 4x1
x1 ;x2
x2
(3.1)
Subject to
2x1 + x2
x2
x1 x2
x1
x2
8 storage constraint
5Availability constraint
4 Safety constraint
0
0
(3.2)
(3.3)
(3.4)
(3.5)
(3.6)
23
n
X
Ci xi
(3.7)
i=1
Subject to
n
X
Ci xi
bj
(3.8)
i=1
j
xj
= 1; 2; 3; :::::; m
2 R
(3.9)
(3.10)
24
3. Linear Programming
Convert all inequalities into equalities by adding slack variables (nonnegative) for less than or equal to constraints ( ) and by
subtracting surplus variables for greater than or equal to constraints ( ).
The objective function must be minimization or maximization.
The standard LP involving m equations and n unknowns has m basic
variables and n-m nonbasic or zero variables. This is explained below using
Example
Consider Example in the standard LP form with slack variables, as given
below.
Standard LP:
M T inimize
(3.11)
Subject to
Z + 4x1 x2
2x1 + x2 + s1
x2 + s2
x1 x2 + s3
x1
s1
=
=
=
=
0
8
5
4
0; x2
0; s2
0
0; s3
(3.12)
(3.13)
(3.14)
(3.15)
(3.16)
(3.17)
The feasible region for this problem is represented by the region ABCD
in Figure . Table shows all the vertices of this region and the corresponding
slack variables calculated using the constraints given by Equations (note
that the nonnegativity constraint on the variables is not included).
It can be seen from Table that at each extreme point of the feasible
region, there are n - m = 2 variables that are zero and m = 3 variables that
are nonnegative. An extreme point of the linear program is characterized
by these m basic variables.
In simplex the feasible region shown in Table gets transformed into a
tableau
25
26
3. Linear Programming
function. Furthermore, nonnegative slack variables s1, s2, and s3 are added
to each constraint.
M T inimize
(3.18)
Subject to
Z + 4x1 x2
2x1 + x2 + s1
x2 + s2
x1 x2 + s3
x1
s1
=
=
=
=
0
8
5
4
0; x2
0; s2
0
0; s3
(3.19)
(3.20)
(3.21)
(3.22)
(3.23)
(3.24)
The standard LP is shown in Table 2.3 below where x1 and x2 are nonbasic or zero variables and s1, s2, and s3 are the basic variables. The starting
solution is x1 = 0; x2 = 0; s1 = 8; s2 = 5; s3 = 4 obtained from the RHS
column.
Determine the entering and leaving variables. Is the starting solution
optimum? No, because Row 0 representing the objective function equation
contains nonbasic variables with negative coe cients.
This can also be seen from Figure. In this gure, the current basic solution is shown to be increasing in the direction of the arrow.
Entering Variable: The most negative coe cient in Row 0 is x2. Therefore, the entering variable is x2. This variable must now increase in the
direction of the arrow. How far can this increase the objective function?
Remember
that the solution has to be in the feasible region. Figure shows that the
maximum increase in x2 in the feasible region is given by point D, which
is on constraint (2.3). This is also the intercept of this constraint with the
y-axis, representing x2. Algebraically, these intercepts are the ratios of the
right-hand side of the equations to the corresponding constraint coe cient
of x2. We are interested only in the nonnegative ratios, as they represent
27
the direction of increase in x2. This concept is used to decide the leaving
variable.
Leaving Variable: The variable corresponding to the smallest nonnegative
ratio (5 here) is s2. Hence, the leaving variable is s2.
So, the Pivot Row is Row 2 and Pivot Column is x2.
The two steps of the GaussJordon Row Operation are given below.
The pivot element is underlined in the Table and is 1.
Row Operation:
Pivot: (0, 0, 1, 0, 1, 0, 5)
Row 0: (1, 4,-1, 0, 0, 0, 0)- (-1)(0, 0, 1, 0, 1, 0, 5) = (1, 4, 0, 0, 1, 0, 5)
Row 1: (0, 2, 1, 1, 0, 0, 8)- (1)(0, 0, 1, 0, 1, 0, 5) = (0, 2, 0, 1,-1, 0, 3)
Row 3: (0, 1,-1, 0, 0, 1, 4)- (-1)(0, 0, 1, 0, 1, 0, 5) = (0, 1, 0, 0, 1, 1, 9)
These steps result in the following table (Table).
There is no new entering variable because there are no nonbasic variables
with a negative coe cient in row 0. Therefore, we can assume that the
solution is reached, which is given by (from the RHS of each row) x1 = 0;
x2 = 5; s1 = 3; s2 = 0; s3 = 9; Z = -5.
Note that at an optimum, all basic variables (x2, s1, s3) have a zero
coe cient in Row 0.
28
3. Linear Programming
From Figure 2.3, it is seen that the solution is infeasible for this problem.
Applying the simplex Method results in Table 2.5 for the rst step.
Z + 4x1 x2 = 0
2x1 + x2 + s1 =
8 Sorage Constraint
x2 + s2 = 5 Availability Constraint
x1 x2 + s3 = 4 Safety Constraint
x1
0; x2 0
Standard LP
(3.25)
(3.26)
(3.27)
(3.28)
(3.29)
29
Z + 4x1 x2 = 0
2x1 + x2 + s1 =
8 Sorage Constraint
x2 + s2 = 5 Availability Constraint
x1 x2 + s3 = 4 Safety Constraint
(3.30)
(3.31)
(3.32)
(3.33)
M inimizex1;x2 Z = 4x1 x2
x1 x2 + s3 = 4 Safety Constraint
x1
0; x2 0
(3.34)
(3.35)
(3.36)
30
3. Linear Programming
can take as high a value as possible. This is also apparent in the graphical
solution shown in Figure The LP is unbounded when (for a maximization
problem) a nonbasic variable with a negative coe cient in row 0 has a
nonpositive coe cient in each constraint, as shown in the table
(3.37)
31
subject to
Ax
(3.38)
32
3. Linear Programming
This is page 33
Printer: Opaque this
4
Nonlinear Programming
34
4. Nonlinear Programming
The above example demonstrates that NLP problems are dierent from
LP problems because:
An NLP solution need not be a corner point.
An NLP solution need not be on the boundary (although in this example it is on the boundary) of the feasible region. It is obvious that one
cannot use the simplex for solving an NLP. For an NLP solution, it is necessary to look at the relationship of the objective function to each decision
variable. Consider the previous example. Let us convert the problem into a
onedimensional problem by assuming constraint (isoperimetric constraint)
as an equality. One can eliminate x2 by substituting the value of x2 in
terms of x1 using constraint.
min Z = 8x1
x1;x2
x1
x21
(4.1)
(4.2)
Figure shows the graph of the objective function versus the single decision variable x1. In Figure , the objective function has the highest value
(maximum) at x1 = 4. At this point in the gure, the x-axis is tangent to
the objective
function curve, and the slope dZ/dx1 is zero. This is the rst condition
that is used in deciding the extremum point of a function in an NLP setting.
Is this a minimum or a maximum? Let us see what happens if we convert
this maximization problem into a minimization problem with -Z as the
objective function.
min Z = 8x1
x1
x21
(4.3)
4. Nonlinear Programming
35
x1
(4.4)
Figure shows that -Z has the lowest value at the same point, x1 = 4. At
this point in both gures, the x-axis is tangent to the objective function
curve, and slope dZ/dx1 is zero. It is obvious that for both the maximum
and minimum points, the necessary condition is the same. What dierentiates a minimum from a maximum is whether the slope is increasing or
decreasing around the extremum point. In Figure 3.2, the slope is decreasing as you move away from x1 = 4, showing that the solution is a maximum.
On the other hand, in Figure 3.3 the slope is increasing, resulting in a minimum.
Whether the slope is increasing or decreasing (sign of the second derivative) provides a su cient condition for the optimal solution to an NLP.
Many times there will be more than one minia existing. For the case
shown in the gure the number of minim are two more over one being
better than the other.This is another case in which an NLP diers from an
LP, as
In LP, a local optimum (the point is better than any adjacentpoint)
is a global (best of all the feasible points) optimum. With NLP, a solution
can be a local minimum.
For some problems, one can obtain a global optimum. For example,
Figure shows a global maximum of a concave function.
Figure presents a global minimum of a convex function. What is the
relation between the convexity or concavity of a function and
its optimum point? The following section describes convex and concave
functions and their relation to the NLP solution.
36
4. Nonlinear Programming
37
38
4. Nonlinear Programming
This is page 39
Printer: Opaque this
5
Discrete Optimization
40
5. Discrete Optimization
41
42
5. Discrete Optimization
43
44
5. Discrete Optimization
45
46
5. Discrete Optimization
This is page 47
Printer: Opaque this
6
Integrated Planning and Scheduling
of processes
6.1 Introduction
In this chapter, we address each part of the manufacturing business hierarchy, and explain how optimization and modeling are key tools that help
link the components together. Also introduce the concept of scheduling and
resent developments related to schduling of batch process.
48
over relatively long time frames and tend to be loosely coupled to the information ow and analysis that occur at lower levels in the hierarchy. The
time scale for decision making at the highest level (planning) may be on
the order of months, whereas at the lowest level (e.g., process monitoring)
the interaction with the process may be in fractions of a second.
Plantwide management and optimization at level 3 coordinates the network of process units and provides cost-eective setpoints via real-time optimization. The unit management and control level includes process control
[e.g., optimal tuning of proportional-integral-derivative (PID) controllers],
emergency response, and diagnosis, whereas level 5 (process monitoring
and analysis) provides data acquisition and online angysis and reconciliation functions as well as fault detection. Ideally, bidirectional communication occurs between levels, with higher levels setting goals for lower levels
and the lower levels communicating constraints and performance information to the higher levels. Data are collected directly at all levels in the
enterprise. In practice the decision ow tends to be top down, invariably
resulting in mismatches between goals and their realization and the consequent accumulation of inventory. Other more deleterious eects include
reduction of processing capacity, o-specication products, and failure to
meet scheduled deliveries.
Over the past 30 years, business automation systems and plant automation systems have developed along dierent paths, particularly in the way
data are acquired, managed, and stored. Process management and control systems normally use the same databases obtained from various online
measurements of the state of the plant. Each level in Figure (??p1) may
have its own manually entered database, however, some of which are very
large, but web-based data interchange will facilitate standard practices in
the future.
Table (??p1) lists the kinds of models and objective functions used in
the computer-integrated manufacturing (CIM) hierarchy. These models are
used to make decisions that reduce product costs, improve product quality,
or reduce time to market (or cycle time). Note that models employed can
be classied as steady state or dynamic, discrete or continuous, physical
or empirical, linear or nonlinear, and with single or multiple periods. The
models used at dierent levels are not normally derived from a single model
source, and as a result inconsistencies in the model can arise. The chemical processing industry is, however, moving in the direction of unifying the
modeling approaches so that the models employed are consistent and robust, as implied in Figure (??p1). Objective functions can be economically
based or noneconomic, such as least squares.
Planning and Scheduling
Bryant (1993) states that planning is concerned with broad classes of
products and the provision of adequate manufacturing capacity. In contrast, scheduling focuses on details of material ow, manufacturing, and
production, but still may be concerned with o- ine planning. Reactive
49
50
Managing the supply chain eectively involves not only the manufacturers, but also their trading partners: customers, suppliers, warehousers,
terminal operators, and transportation carriers (air, rail, water, land).
In most supply chains each warehouse is typically controlled according to
some local law such as a safety stock level or replenishment rule. This local
control can cause buildup of inventory at a specic point in the system and
thus propagate disturbances over the time frame of days to months (which
is analogous to disturbances in the range of minutes or hours that occur at
the production control level). Short-term changes that can upset the system
include those that are "selnicted" (price changes, promotions, etc.) or
eects of weather or other cyclical consumer patterns. Accurate demand
forecasting is critical to keeping the supply chain network functioning close
to its optimum when the produce-to-inventory approach is used.
6.3 Planning
A simplied and idealized version of the components involved in the planning step, that is, the components of the supply chain incorporates. S possible suppliers provide raw materials to each of the M manufacturing plants.
These plants manufacture a given product that may be stored or warehoused in W facilities (or may not be stored at all), and these in turn are
delivered to C dierent customers. The nature of the problem depends on
whether the products are made to order or made to inventory; made to order fullls a specic customer order, whereas made to inventory is oriented
to the requirements of the general market demand., with material balance
conditions satised between suppliers, factories, warehouses, and customers
(equality constraints). Inequality constraints would include individual line
capacities in each manufacturing plant, total factory capacity, warehouse
storage limits, supplier limits, and customer demand. Cost factors include
variable manufacturing costs, cost of warehousing, supplier prices, transportation costs (between each sector), and variable customer pricing, which
may be volume and quality-dependent. A practical problem may involve
as many as 100,000 variables and can be solved using mixed-integer linear
programming (MILP).
Most international oil companies that operate multiple reneries analyze
the renery optimization problem over several time periods (e.g., 3 months).
This is because many crudes must be purchased at least 3 months in advance due to transportation requirements (e.g., the need to use tankers
to transport oil from the Middle East). These crudes also have dierent
grades and properties, which must be factored into the product slate for
the renery. So the multitime period consideration is driven more by supply and demand than by inventory limits (which are typically less than 5
days). The LP models may be run on a weekly basis to handle such items
6.4 Scheduling
51
as equipment changes and maintenance, short-term supply issues (and delays in shipments due to weather problems or unloading di culties), and
changes in demand (4 weeks within a 1-month period). Product properties such as the Reid vapor pressure must be changed between summer
and winter months to meet environmental restrictions on gasoline properties. See Pike (1986) for a detailed LP renery example that treats quality
specications and physical properties by using product blending, a dimension that is relevant for companies with varied crude supplies and product
requirements.
6.4 Scheduling
Information processing in production scheduling is essentially the same
as in planning. Both plants and individual process equipment take orders
and make products. or a plant, the customer is usually external, but for
a process (or "work cell" in discrete manufacturing parlance), the order
comes from inside the plant or factory. In a plant, the nal product can
be sold to an external customer; for a process, the product delivered is an
intermediate or partially nished product that goes on to the next stage of
processing (internal customer).
Two philosophies are used to solve production scheduling problems (Puigjaner and Espura, 1998):
1. The top-down approach, which denes appropriate hierarchical coordination mechanisms between thedierent decision levels and decision
structures at each level. These structures force constraints on lower operating levels and require heuristic decision rules for each task. Although
this approach reduces the size and complexity of scheduling problems, it
potentially introduces coordination problems.
2. The bottom-up approach, which develops detailed plant simulation
and optimization models, optimizes them, and translates the results from
the simulations and optimization into practical operating heuristics. This
approach often leads to large models with many variables and equations
that are di cult to solve quickly using rigorous optimization algorithms.
Table (??t3) categorizes the typical problem statement for the manufacturing scheduling and planning problem. In a batch campaign or run,
comprising smaller runs called lots, several batches of product may be produced using the same recipe. To optimize the production process, you need
to determine
1. The recipe that satises product quality requirements.
2. The production rates needed to fulll the timing requirements, including any precedence constraints.
3. Operating variables for plant equipment that are subject to constraints.
52
4.
5.
6.
7.
6.4 Scheduling
53
jth product in the sequence leaves unit k after completion of its processing,
and let rLkb e the time required to process the jth product in the sequence
on unit k (See Table E16.2). The rst product goes into unit 1 at time
zero, so ClYo= 0. The index j in T ~an,d~C jrk denotes the position of a
product in the sequence. Hence CN is the time at which the last product
leaves the last unit and is the makespan to be minimized. Next, we derive
the set of constraints (Ku and Karimi, 1988; 1990) that interrelate the
Cj,k. First, the jth product in the sequence cannot leave unit k until it is
processed, and in order to be processed on unit k, it must have left unit k
- 1. Therefore the clock time at which it leaves unit k (i.e., q,+) m ust be
equal to or after the time at which it leaves unit k - 1 plus the processing
time in k. Thus the rst set of constraints in the formulation is Similarly,
the jth product cannot leave unit k until product ( j - 1) has been processed
and transferred: Set C= 0. Finally the jth product in the sequence cannot
leave unit k until the downsbeam unit k + 1 is free [i.e., product ( j - 1)
has left]. Therefore Although Equations (a)-(c) represent the complete set
of constraints, some of them are redundant. From Equation (a) Cjr Cj,k+ T ~fo,r~k 1 2 . But from Equation (c), Cj,k-l 2 Cj- l,k, hence Cj,k
1 Cj- l,k + ri,k for k = 2, M. In essence, Equations (a) and (c) imply
Equations (b) for k = 2, M, so Equations (b) for k = 2, M are redundant.
Having derived the constraints for completion times, we next determine the
sequence of operations. In contrast to the CjPkt, he decision variables here
are discrete (binary). Dene Xij as follows. Xi,. = 1 if product i (product
with label pi) is in slot j of the sequence, otherwise it is zero. So X3 = 1
means that product p3 is second in the production sequence, and X3, = 0
means that it is not in the second position. The overall integer constraint
is Similarly every product should occupy only one slot in the sequence: The
Xij that satisfy Equations (d) and (e) always give a meaningful sequence.
Now we must determine the clock times ti for any given set of Xi,j. If
product pi is in slot j, then tj,km~~tbe7i,kandX= i1,j a ndXi,l = Xi = . .
. = Xi ,.~- 1 = X~.. +=l . . . = xi,N= 0, therefore we can use XiFj to pick
the right processing time representing $ by imposing the constraint. To
reduce the number of constraints, we substitute rjtkfr om Equation (f ) into
Equations(a) and (b) to obtain the following formulation (Ku and Karimi,
1988).
Minimize: CNM
Subject to: Equations (c), (d), (e) and
C, r 0 and Xibinary Because the preceding formulation involves binary
(XiJ) as well as continuous variables(Ci) and has no nonlinear functions,
it is a mixed-integer linear programming(MILP) problem and can be solved
using the GAMS MIP solver. Solving for the optimal sequence using Table
E16.2, we obtain XI= X2,4 = X3= X, = 1. This means that pl is in the
rst position in the optimal production sequence, p2 in the fourth, p3 in the
second, and p4 in the third. In other words, the optimal sequence is in the
order pl-p3-p4-p2. In contrast to the XiJ, we must be careful in interpreting
54
FIGURE 6.1. Gantt chart for the optimal multiproduct plant schedule
the Ci,f,ro m the GAMS output, because C, really means the time at which
the jth product in the sequence (and not product pi) leaves unit k. Therefore
C2 = 23.3 means that the second product (i.e., p3) leaves unit 3 at 23.3
h. Interpreting the others in this way, the schedule corresponding to this
production sequence is conveniently displayed in form of a Gantt chart in
Figure E16.2b, which shows the status of the units at di erent times. For
instance, unit 1 is processing pl during [0, 3.51 h. When pl leaves unit 1
at t = 3.5 h, it starts processing p3. It processes p3 during [3.5,7] h. But
as seen from the chart, it is unable to dischargep3 to unit 2, because unit
2 is still processing pl. So unit 1 holds p3 during [7,7.8] h. When unit 2
discharges p3
to unit 3 at 16.5 h, unit 1 is still processingp4, therefore unit 2 remains
idle during [16.5, 19.81 h. It is common in batch plants to have units blocked
due to busy downstream units or units waiting for upstream units to nish.
This happens because the processing times vary from unit to unit and from
product to product, reducing the time utilization of units in a batch plant.
The nished batches of pl, p3, p4, and p2 are completed at times 16.5 h,
23.3 h, 3 1.3 h, and 34.8 h. The minimum makespan in 34.8 h.
This problem can also be solved by a search method . Because the order of products cannot be changed once they start through the sequence of
units, we need only determine the order in which the products are processed.
Let be a permutation or sequence in which to process the jobs, where p(j)
is the index of the product in position j of the sequence. To evaluate the
makespan of a sequence, we proceed as in Equations (a)-(c) of the mixedinteger programming version of the problem. Let Ci, be the completion time
of product p(j) on unit k. If product p(j) does not have to wat for product
p(j - 1) to nish its processing on unit k, then If it does have to wait, then
Hence Cj,k is the larger of these two values: This equation is solved rst
6.4 Scheduling
55
56
57
tures, pressures, and ow rates that are the best in some sense. For example,
the selection of the percentage
of excess air in a process heater is quite critical and involves a balance
on the fuel-air ratio to ensure complete combustion and at the same time
maximize use of the heating potential of the fuel. Examples of periodic optimization in a plant are minimizing steam consumption or cooling water
consumption, optimizing the reux ratio in a distillation column, blending
of renery products to achieve desirable physical properties, or economically allocating raw materials. Many plant maintenance systems have links
to plant databases to enable them to track the operating status of the
production equipment and to schedule calibration and maintenance. Realtime data from the plant also may be collected by management information
systems for various business functions.
The objective function in an economic model in RTQ involves the costs
of raw materials, values of products, and costs of production as functions of
operating conditions, projected sales or interdepartmental transfer prices,
and so on. Both the operating and economic models typically include constraints on
(a) Operating Conditions: Temperatures and pressures must be within
certain limits.
(b) Feed and Production Rates: A feed pump has a xed capacity; sales
are limited by market projections.
(c) Storage and Warehousing Capacities: Storage tanks cannot be overlled during periods of low demand.
(d) Product Impurities: A product may contain no more than the maximum amount of some contaminant or impurity.
In addition, safety or environmental constraints might be added, such
as a temperature limit or an upper limit on a toxic species. Several steps
are necessary for implementation of RTO, including determining the plant
steady-state operating conditions, gathering and validating data, updating
of model parameters (if necessary) to match current operations, calculating
the new (optimized) setpoints, and implementing these setpoints. An RTO
system completes all data transfer, optimization calculations, and setpoint
implementations before unit conditions change and require a new optimum
to be calculated.
Some of the RTO problems characteristic of level 3 are
1. Reux ratio in distillation .
2. Olen manufacture .
3. Ammonia synthesis .
4. Hydrocarbon refrigeration .
The last example is particularly noteworthy because it represents the
current state of the art in utilizing fundamental process models in RTO.
Another activity in RTO is determining the values of certain empirical parameters in process models from the process data after ensuring that the
process is at steady state. Measured variables including ow rates, temper-
58
atures, compositions, and pressures can be used to estimate model parameters such as heat transfer coe cients, reaction rate coe cients, catalyst
activity, and heat exchanger fouling factors.
Usually only a few such parameters are estimated online, and then optimization is carried out using the updated parameters in the model. Marlin
and Hrymak (1997) and Forbes et al. (1994) recommend that the updated
parameters be observable, represent actual changes in the plant, and signicantly inuence the location of the optimum; also the optimum of the
model should be coincident with that of the true process. One factor in
modeling that requires close attention is the accurate representation of the
process constraints, because the optimum operating conditions usually lie
at the intersection of several constraints. When RTO is combined with
model predictive regulatory control (see Section 16.4), then correct (optimal) moves of the manipulated variables can be determined using models
with accurate constraints.
Marlin and Hrymak (1997) reviewed a number of industrial applications
of RTO, mostly in the petrochemical area. They reported that in practice a maximum change in plant operating variables is allowable with each
RTO step. If the computed optimum falls outside these limits, you must
implement any changes over several steps, each one using an RTO cycle.
Typically, more manipulated variables than controlled variables exist, so
some degrees of freedom exist to carry out both economic optimization as
well as establish priorities in adjusting manipulated variables while simultaneously carrying out feedback control.
59
Detailed production recipes; e.g., stoichiometric coe cients, processing times, processing rates, utility requirements.
Production costs; e.g., raw materials, utilities, cleaning, etc.
Production targets or orders with due dates.
The goal of scheduling is to determine
o
The allocation of resources to processing tasks.
o
The sequencing and timing of tasks on processing units.
The objectives functions include the minimization of make span, lateness
and earliness, as well as the minimization of total cost. Scheduling formulations can be broadly classied in to discrete-time models and continuous
time models 16.
Early attempts in modeling the process scheduling problems relied
on the discrete- time approach, in which the time horizon is divided in
to a number of time intervals of uniform durations and events such as
the beginning and ending of a task are associated with the boundaries
of these time intervals. To achieve a suitable approximation of the original
problem, it is usually needed to use a time interval that is su ciently small,
for example, the greatest common factor (GCF) of the processing times.
This usually leads to very large combinatorial problems of intractable size,
especially for real-world problems, and hence limits its applications. The
basic concept of the discrete-time approach is illustrated in Fig (2.1).
Continuous-time models events are potentially allowed to take place at
any point in the continuous domain of time. Modeling of this exibility is
accomplished by introducing the concepts of variable event times, which
can be dened globally or for each unit. Variables are required to determine the timings of events. The basic idea of the continuous-time approach
is also illustrated in Fig (2.2). Because of the possibility of eliminating
a major fraction of the inactive event-time interval assignments with the
continuous-time approach, the resulting mathematical programming problems are usually of much smaller sizes and require less computational eorts
for their solution. However, due to the variable nature of the timings of the
events, it becomes more challenging to model the scheduling process and
the continuous-time approach may lead to mathematical models with more
complicated structures compared to their discrete-time counterparts.
Network represented processes are involve in most scheduling problems.
When the production recipes become more complex and/or dierent products have low recipe similarities, processing networks are used to represent
the production sequences. This corresponds to a more general case in which
batches can merge and/or split and material balances are required to be
60
61
whether it has only one type of product which is then shared between 2 and
3. Similarly, it is also impossible to determine from Fig (2.3) whether task 4
requires two dierent types of feed stocks, respectively produced by tasks 2
and 5, or whether it only needs one type of feedstock which can be produced
by either 2 or 5. Both interpretations are equally plausible. The former
could be the case if, say, task 4 is a catalytic reaction requiring a main
feedstock produced by task 2, and a catalyst which is then recovered from
the reaction products by the separation task 5. The latter case could arise
if task 4 were an ordinary reaction task with a single feedstock produced by
2, with task 5 separating the product from the unreacted material which
is then recycled to 4.
The distinctive characteristic of the STN is that it has two types of
nodes; namely, the state nodes, representing the feeds, intermediate and
nal products and the task nodes, representing the processing operations
which transform material from one or more input states to one or more
output states. State and task nodes are denoted by circles and rectangles,
respectively.
State-task networks are free from the ambiguities associated with recipe
networks. Figure (2.4) shows two dierent STNs, both of which correspond
to the recipe network of Fig (2.3). In the process represented by the STN
of Fig (2.4a), task 1 has only one product which is then shared by tasks
2 and 3. Also, task 4 only requires one feedstock, which is produced by
both 2 and 5. On the other hand, in the process shown in Fig (2.4b), task 1
has two dierent products forming the inputs to tasks 2 and 3, respectively.
62
63
separate tasks, thus leading to ve separate tasks (i=1, . . ., 5), each suitable
in one unit 3.
Shaik et.al. (2008) propose a new model to investigate the RTN representation for unit-specic event-based models. For handling dedicated nite
storage, a novel formulation is proposed without the need for considering
storage as a separate task. The performance of the proposed model is evaluated along with several other continuous-time models from the literature
based on the STN and RTN process representations 3.
Grossmann et.al. (2009) consider the solution methods for mixed-integer
linear fractional programming (MILFP) models, which arise in cyclic process
scheduling problems. Dinkelbachs algorithm is introduced as an e cient
method for solving large-scale MILFP problems for which its optimality
and convergence properties are established. Extensive computational examples are presented to compare Dinkelbachs algorithm with various MINLP
methods. To illustrate the applications of this algorithm, we consider industrial cyclic scheduling problems for a reactionseparation network and a
tissue paper mill with byproduct recycling. These problems are formulated
as MILFP models based on the continuous time Resource-Task Network
(RTN) 21.
64
Mixed Integer Linear Programming (MILP) model and solve the one-dimensional
scheduling problems, two-dimensional cutting problems, as well as plant
layout problems and three-dimensional packing problems. Additionally,
some problems in four dimensions are solved using the considered model
23.
Magatao et.al. (2004), present the problem of developing an optimisation
structure to aid the operational decision-making of scheduling activities in
a real-world pipeline scenario. The pipeline connects an inland renery to
a harbour, conveying dierent types of oil derivatives. The optimisation
structure is developed based on mixed integer linear programming (MILP)
with uniform time discretisation, but the MILP well-known computational
burden is avoided by the proposed decomposition strategy, which relies on
an auxiliary routine to determine temporal constraints, two MILP models,
and a database 24.
A new mixed-integer programming (MIP) formulation is presented for
the production planning of single-stage multi-product processes. The problem is formulated as a multi-item capacitated lot-sizing problem in which
(a) multiple items can be produced in each planning period, (b) sequenceindependent set-ups can carry over from previous periods, (c) set-ups can
cross over planning period boundaries, and (d) set-ups can be longer than
one period. The formulation is extended to model time periods of nonuniform length, idle time, parallel units, families of products, backlogged
demand, and lost sales 25.
Bedenik et.al. (2004) describes an integrated strategy for a hierarchical multilevel mixed-integer nonlinear programming (MINLP) synthesis of
overall process schemes using a combined synthesis/analysis approach. The
synthesis is carried out by multilevel-hierarchical MINLP optimization of
the exible superstructure, whilst the analysis is performed in the economic
attainable region (EAR). The role of the MINLP synthesis step is to obtain
a feasible and optimal solution of the multi-D process problem, and the role
of the subsequent EAR analysis step is to verify the MINLP solution and
in the feedback loop to propose any protable superstructure modications
for the next MINLP. The main objective of the integrated synthesis is to
exploit the interactions between the reactor network, separator network
and the remaining part of the heat/energy integrated process scheme 26.
Grossmann et.al. (2009) consider the solution methods for mixed-integer
linear fractional programming (MILFP) models, which arise in cyclic process
scheduling problems. They rst discuss convexity properties of MILFP
problems, and then investigate the capability of solving MILFP problems
with MINLP methods. Dinkelbachs algorithm is introduced as an e cient
method for solving large-scale MILFP problems for which its optimality
and convergence properties are established. Extensive computational examples are presented to compare Dinkelbachs algorithm with various MINLP
methods 21.
65
66
67
68
69
70
This is page 71
Printer: Opaque this
7
Dynamic Optimization
72
7. Dynamic Optimization
u(t);tf
h
y(t)
: s:t:x = f (x; u)
= l(x; u)
x(to) : xo
u min
ymin
u(t)
y(t)
umax
(7.2)
(7.3)
(7.4)
ymax
(7.5)
(7.6)
where t0 and tf denote the initial and nal transition times and vectors x(t), y(t) and u(t) represent the state, output and input trajectories.
Vector functions h and g are used to denote all equality and inequality constraints, respectively. Equations 1b,c represent the process model, whereas
equation 1d represents the process and safety constraints. The solution of
the above problem yields dynamic proles of manipulated variables u(t) as
well as the grade changeover time, tf t0. Equation 1 may be solved with
a standard NLP solver through use of CVP, where manipulated variables
are parameterized and approximated by a series of trial functions . Thus,
for the ith manipulated variable,
ui (t) =
na
X
aij
ij (t
tij )
(7.7)
j=1
where tij is the jth switching time of the ith manipulated variable, na is
the number of switching intervals, and aij represents the amplitude of the
ith manipulated variable at the switching time tij.
necessary steps that integrate CVP with the model equations and the
NLP solver are summarized below,
Step 1. Discretize manipulated variables by selecting an appropriate trial
function (see equation 2)
Step 2. Integrate the process model (Equation 1b,c) and the ODE sensitivities (Equations 3) if using a gradient-based solver and the manipulated
variables as inputs
Step 3. Compute the objective function (Equation 1a) and gradients if
necessary
73
Step 4. Provide this information to the NLP solver and iterate starting
with step 2 until an optimal solution is found
Step 5. Construct the optimal recipes using the optimal amplitude and
switching intervals obtained by the NLP solver.
An alternative strategy eliminates Step 2 by discretizing the continuous
process model by simultaneously treating both, the state variables and the
parameterized manipulated variables as decision variables.
In this workshop, we will focus on problem formulation, solution approach, and demonstrating its application to a polymerization reactor for
optimal product grade transition. We shall use MATLAB tool for solving
a dynamic optimization problem solution.
74
7. Dynamic Optimization
world case study where we show that a problem solution can be simplied
when one uses a combination of these methods.
This is page 75
Printer: Opaque this
8
Global Optimisation Techniques
8.1 Introduction
Tradictional optimisation techniques have the limitation of getting traped
in the local optimum. To over come this problem some optimisation techniques based on the natural phenomena are proposed. Some stocastic optimisation techniques which are popular are presented in the following sections
76
8.3 GA
77
8.3 GA
8.3.1 Introduction
Genetic algorithms (GAS) are used primarily for optimization purposes.
They belong to the goup of optimization methods known as non-traditional
optimization methods.
Non-traditional optimization methods also include optimization techniques such as neural networks (simulated annealing, in particular). Simu-
78
8.3.2 Denition
Genetic algorithms are computerized search and optimization algorithms
based on the mechanics of natural genetics and natural selection. Consider
a maximization problem Maximize f ( x ) x;l) 5 xi 5 xi(u) i = 1, 2, ..., N
This is an unconstrained optimization problem. Our aim is to nd the
maximum of this function by using GAS. For the implementation of GAS,
there are certain well-dened steps.
8.3.3 Coding
Coding is the method by which the variables xi are coded into string structures. This is necessary because we need to translate the range of the function in a way that is understood by a computer. Therefore, binary coding
is used. This essentially means that a certain number of initial guesses are
made within the range of the function and these are transformed into a binary format. It is evident that for each guess there might be some required
accuracy. The length of the binary coding is generally chosen with respect
to the required accuracy.
For example, if we use an eight-digit binary code. (0000 0000) would
represent the minimum value possible and (1 11 1 11 11) would represent
the maximum value possible.
A linear mapping rule is used for the purpose of coding:(/) x y - x!/) xi
= xi + (decoded value) 2Il - 1
The decoded value (of si, the binary digit in the coding) is given by the
rule / - I .I ~ Decoded value = C2si i=O
where
s; E (0, 1) (10.4)
For example, for a binary code (01 1 1), the decoded value will be (0111)
= (112~+ (112~+ (1p2+ (012~= 7
It is also evident that for a code of length 1, there are 2combinations or
codes possible. As already mentioned, the length of the code depends upon
the required accuracy for that variable. So, it is imperative that there be a
8.3 GA
79
method to calculate the required string length for the given problem. So,
in general, Eq. is used for this purpose.
So, adhering to the above-mentioned rules, it is possible to generate a
umber of guesses or, in other words, an initial population of coded points
that lie in the given range of the function. Then, we move on to the next
step, that is, calculation of the tness.
8.3.4 Fitness
As has already been mentioned, GAS work on the principle of survival
of the ttest. This in eect means that the good points or the points
that yield maximum values for the function are allowed to continue in
the next generation, while the less protable points are discarded from our
calculations. GAS maximize a given function, so it is necessary to transform
a minimization problem to a maximization problem before we can proceed
with our computations.
Depending upon whether the initial objective function needs to be maximized or minimized, the tness function is hence dened in the following
ways: for a minimization problem It should be noted that this transformation does not alter the location of the minimum value. The tness function
value for a particular coded string is known as the strings tness. This
tness value is used to decide whether a particular string carries on to the
next generation or not.
GA operation begins with a population of random strings. These strings
are selected from a given range of the function and represent the design or
decision variables. To implement our optimization routine, three operations
are carried out:
F(x) =Ax) for a maximization problem - 1 1 + f(x>
1. Reproduction
2. Crossover
3. Mutation
8.3.5 Operators in GA
Reproduction
The reproduction operator is also called the selection operator. This is
because it is this operator that decides the strings to be selected for the next
generation. This is the rst operator to be applied in genetic algorithms.
The end result of this operation is the formation of a mating pool, where
the above average strings are copied in a probabilistic manner. The rule can
be represented as a (tness of string) Probability of selection into mating
pool The probability of selection of the ith string into the mating pool is
given by where Fi is the tness of the ith string. Fj is the tness of thejth
string, and n is the population size.
80
The average tness of all the strings is calculated by summing the tness
of individual strings and dividing by the population size, and is represented
by the symbol F: - i=l F = - n
It is obvious that the string with the maximum tness will have the most
number of copies in the mating pool. This is implemented using roulette
wheel selection. The algorithm of this procedure is as follows. Roulette
wheel selection
Step I Using Fi calculate pi.
Step 2 Calculate the cumulative probability Pi.
Step 3 Generate n random numbers (between 0 and 1).
Step 4 Copy the string that represents the chosen random number in the
cumulative probability range into the mating pool. A string with higher
tness will have a larger range in the cumulative probability and so has
more probability of getting copied into the mating pool.
At the end of this implementation, all the strings that are t enough
would have been copied into the mating pool and this marks the end of the
reproduction operation.
Crossover
After the selection operator has been implemented, there is a need to introduce some amount of randomness into the population in order to avoid
getting trapped in local searches. To achieve this, we perform the crossover
operation. In the crossover operation, new strings are formed by exchange
of information among strings of the mating pool. For example,
oo/ooo - 00~111
Crossover point
lljlll
Parents Children
Strings are chosen at random and a random crossover point is decided,
and crossover is performed in the method shown above. It is evident that,
using this method, better strings or worse strings may be formed. If worse
strings are formed, then they will not survive for long, since reproduction
will eliminate them. But what if majority of the new strings formed are
worse? This undermines the purpose of reproduction. To avoid this situation, we do not select all the strings in a population for crossover. We
introduce a crossover probability (p,). Therefore, (loop,)% of the strings
are used in crossover. ( 1 - p,)% of the strings is not used in crossover.
Through this we have ensured that some of the good strings from the
mating pool remain unchanged. The procedure can be summarized as follows: Step I Select (loop,)% of the strings out of the mating pool at random.
Step 2 Select pairs of strings at random (generate random numbers that
map the string numbers and select accordingly).
Step 3 Decide a crossover point in each pair of strings (again, this is
done by a random number generation over the length of the string and
81
82
8.4.2 DE at a Glance
As already stated, DE in principle is similar to GA. So, as in GA, we use a
population of points in our search for the optimum. The population size is
denoted by NP. The dimension of each vector is denoted by D. The main
operation is the NP number of competitions that are to be carried out to
decide the next generation.
To start with, we have a population of NP vectors within the range of
the objective function. We select one of these NP vectors as our target
vector. We then randomly select two vectors from the population and nd
the dierence between them (vector subtraction). This dierence is multiplied by a factor F (specied at the start) and added to a third randomly
selected vector. The result is called the noisy random vector. Subsequently,
crossover is performed between the target vector and the noisy random
vector to produce the trial vector. Then, a competition between the trial
vector and the target vector is performed and the winner is replaced into
the population. The same procedure is carried out NP times to decide the
next generation of vectors. This sequence is continued till some convergence
criterion is met.
This summarizes the basic procedure carried out in dierential evolution.
The details of this procedure are described below.Assume that the objective
function is of D dimensions and that it has to be minimized. The weighting
constant F and the crossover constant CR are specied. Refer to Fig. for
the schematic diagram of dierential evolution.
Step 1 Generate NP random vectors as the initial population: Generate
(NP x D) random numbers and linearize the range between 0 and 1 to cover
the entire range of the function. From these (NP x 0)ra ndom numbers,
generate NP random vectors, each of dimension D, by mapping the random
numbers over the range of the function.
Step 2 Choose a target vector from the population of size NP: First
generate a random number between 0 and 1. From the value of the random
83
84
85
The notations that have been used in the above strategies have distinct
meanings.The notation after the rst /, i.e., best, rand-to-best, and rand,
denotes the choice of the vector X,. Bestmeans that X, corresponds to the
vector in the population that has the minimum cost or maximum prot.
Rand-to-bestmeans that the weighted dierence between the best vector
and another randomly chosen vector is taken as
X,. Randmeans X, corresponds to a randomly chosen vector from the
population. The notation after the second /, i.e., 1 or 2, denotes the
number of sets of random vectors that are chosen from the population
for the computation of the weighted dierence. Thus 2 means the value
of X; is computed using the following expression:
where Xal, xbl, Xa2, and xb2 are random vectors. The notation after the
third /, i.e., exp or bin, denotes the methodology of crossover that has been
used. Exp denotes that the selection technique used is exponential and
bindenotes that the method used is binomial. What has been described
in the preceding discussion is the binomial technique where the random
number for each dimension is compared with CR to decide from where
the value should be copied into the trial vector. But, in the exponential
method, the instant a random number becomes greater than CR, the rest
of the values are copied from the target vector.
Apart from the above-mentioned strategies, there are some more innovative strategies that are being worked upon by the authors group and
implemented on some application.
x: = xc + F(Xal - x b l ) + F(Xa2 - xb2,)
Innovations on DE
The following additional strategies are proposed by the author and his associates, in addition to the ten strategies listed before: (k) DE/best/3/exp,
(1) DE/best/3/bin, (m) DE/rand/3/exp, and (n) DE/rand/3/bin. Very recently, a new concept of nested DEhas been successfully implemented for
the optimal design of an auto-thermal ammonia synthesis reactor (Babu
et al. 2002). This concept uses DE within DE wherein an outer loop takes
care of optimization of key parameters (NP is the population size, CR is
the crossover constant, and F is the scaling factor) with the objective of
minimizing the number of generations required, while an inner loop takes
care of optimizing the problem variables. Yet, the complex objective is
the one that takes care of minimizing the number of generations/function
evaluations and the standard deviation in the set of solutions at the last
generatiodfunction evaluation, and trying to maximize the robustness of
the algorithm.
8.4.3 Applications of DE
DE is catching up fast and is being applied to a wide range of complex
problems. Some of the applications of DE include the following: digital
lter design (Storn 1995), fuzzy-decision-making problems of fuel ethanol
86
87
88
89
algorithms often ask what is the relative speed of interval and point operations and intrinsic function evaluations. Aside from the fact that relatively
little time and eort have been spent on interval system software implementation and almost no time and eort implementing interval-specic hardware, there is another reason why a dierent question is more appropriate
to ask. Interval and point algorithms solve dierent problems. Comparing
how long it takes to compute guaranteed bounds on the set of solutions
to a given problem, as compared to providing an approximate solution of
unknown accuracy, is not a reasonable way to compare the speed of interval and point algorithms. For many even problems, the operation counts
in interval algorithms are similar to those in non-interval algorithms. For
example, the number of iterations to bound a polynomial root to a given
(guaranteed) accuracy using an interval Newton method is about the same
as the number of iterations of a real Newton method to obtain the same
(not guaranteed) accuracy.
Virtues of Interval Analysis
The admirable quality of interval analysis is that it enables the solution of
certain problems that cannot be solved by non-interval methods. The primary example is the global optimization problem, which is the major topic
of this talk. Prior to the use of interval methods, it was impossible to solve
the nonlinear global optimization problem except in special cases. In fact,
various authors have written that in general it is impossible in principle to
numerically solve such problems. Their argument is that by sampling values
of a function and some of its derivatives at isolated points, it is impossible to determine whether a function dips to a global minimum (say) between the sampled points. Such a dip can occur between adjacent machinerepresentable oating point values. Interval methods avoid this di culty
by computing information about a function over continua of points even if
interval endpoints are constrained to be machine-representable. Therefore,
it is not only possible but relatively straightforward to solve the global
optimization problem using interval methods.
The obvious comment regarding the apparent slowness of interval methods for some problems (especially if they lack the structure often found
in real world problems) is that a price must be paid to have a reliable algorithm with guaranteed error bounds that non-interval methods do not
provide. For some problems, the price is somewhat high; for others it is
negligible or nonexistent. For still others, interval methods are more e cient.
There are several other virtues of interval methods that make them well
worth paying even a real performance price. In general, interval methods
are more reliable. As we shall see, some interval iterative methods always
converge, while their non-interval counterparts do not. An example is the
Newton method for solving for the zeros of a nonlinear equation.
90
bg
(8.1)
A degenerate interval is an interval with zero width, i.e. . The endpoints and of a given interval might not be representable on a given computer. Such an interval might be a datum or the result of a computation
on the computer. In such a case, we round down to the largest machinerepresentable number that is less than and round up to the smallest machinerepresentable number that is greater than . Thus, the retained interval
contains . This process is called outward rounding.
An under-bar indicates the lower endpoint of an interval; and an overbar indicates the upper endpoint. For example, if , then and . Similarly, we
write .
Finite interval arithmetic
Let +, -, ; and
(8.2)
Thus the interval X Y resulting from the operation contains every possible number that can be formed as x y for each x X, and each y Y.
This denition produces the following rules for generating the endpoints
of X
91
X +Y
X Y
= [a + c b + d]
= [a d b c]
(8.3)
(8.4)
92
Interval analysis makes it possible to solve the global optimization problem, to guarantee that the global optimum is found, and to bound its value
and location. Secondarily, perhaps, it provides a means for dening and
performing rigorous sensitivity analyses.
Until fairly recently, it was thought that no numerical algorithm can
guarantee having found the global solution of a general nonlinear optimization problem. Various authors have atly stated that such a guarantee
is impossible. Their argument was as follows: Optimization algorithms can
sample the objective function and perhaps some of its derivatives only at a
nite number of distinct points. Hence, there is no way of knowing whether
the function to be minimized dips to some unexpectedly small value between
sample points. In fact the dip can be between closest possible points in
a given oating point number system. This is a very reasonable argument;
and it is probably true that no algorithm using standard arithmetic will ever
provide the desired guarantee. However, interval methods do not sample
at points. They compute bounds for functions over a continuum of points,
including ones that are not nitely representable.
Example 4 Example 1
Consider the following simple optimization problem
We know that the global minimum will be at +/- 2
evaluate f at x=1 and over the interval . We obtain,
. Suppose we
Thus, we know that f (x) 0 for all x [3, 4.5] including such transcendental points as = 3.14.... Since f (1) = 3, the minimum value of f is
no larger than 3. Therefore, the minimum value of f cannot occur in the
interval [3, 4.5]. We have proved this fact using only two evaluations of f.
Rounding and dependence have not prevented us from infallibly drawing
this conclusion. By eliminating subintervals that are proved to not contain the global minimum, we eventually isolate the minimum point. In a
nutshell, an algorithm for global optimization generally provides various
(systematic) ways to do the elimination.
Global optimization algorithm
The algorithm proceeds as follows:
i.
Begin with a box x(0) in which the global minimum is sought.
Because we restrict our search to this particular box, our problem is really
constrained. We will discuss this aspect in later.
ii.
Delete subboxes of x(0) that cannot contain a solution point. Use
fail-safe procedures so that, despite rounding errors, the point(s) of global
minimum are never deleted.
The methods for Step 2 are as follows:
93
94
Use of gradient
We assume that f is continuously dierentiable. Therefore, the gradient g of
f is zero at the global minimum. Of course, g is also zero at local minima, at
maxima and at saddle points. Our goal is to nd the zero(s) of g at which f
is a global minimum. As we search for zeros, we attempt to discard any that
are not a global minimum of f. Generally, we discard boxes containing nonglobal minimum points before we spend the eort to bound such minima
very accurately.
When we evaluate gradient g over sub-boxes XI, if then there is no stationary point of f in x. therefore x cannot contain the global minimum and
can be deleted. We can apply hull consistency to further expedite convergence of the algorithm and sometimes reduce the search box.
An upper bound on the minimum
We use various procedures to nd the zeros of the gradient of the objective
function. Please note that these zeros can be stationary points that are not
the global minimum. Therefore, we want to avoid spending the eort to
closely bound such points when they are not the desired solution. In this
section we consider procedures that help in this regard.
Suppose we evaluate f at a point x. Because of rounding, the result is
generally an interval . Despite rounding errors in the evaluation, we know
without question that f I(x) is an upper bound for and hence for f*. Let
denote the smallest such upper bound obtained for various points sampled
at a given stage of the overall algorithm. This upper bound plays an important part in our algorithm. Since , we can delete any subbox of X for
which . This might serve to delete a subbox that bounds a non-optimal
stationary point of f. There are various methods that can be applied to the
upper bound on the minimum.
Termination
The optimization algorithm splits and subdivides a box into sub-boxes,
at the same time sub-boxes that are guaranteed to not contain a global
optimum are deleted. Eventually we will have a few small boxes remaining.
For a box to be considered as a solution box, two conditions must be
satised. One,
Where, is specied by user. This condition means that width of a solution
box must be less than a specied threshold. Second, we require
Where, is specied by user. This condition guarantees that the globally minimum value f* of the objective function is bounded to within the
tolerance .
95
8.5.7 References
Alefeld, G. (1984). On the convergence of some interval arithmetic modications of Newtons method, SIAM J. Numer. Anal. 21, 363372.
Alefeld, G. and Herzberger, J. (1983). Introduction to Interval Computations, Academic Press, Orlando, Florida.
Bazaraa, M. S. and Shetty, C. M. (1979). Nonlinear Programming. Theory and Practice,Wiley, NewYork.
96
97
Walster, G.W., Hansen, E. R., and Sengupta, S. (1985). Test results for
a global optimization algorithm, in Boggs, Byrd, and Schnabel (1985), pp.
272287.
6.1.
Web References
Fortran 95 Interval Arithmetic Programming Reference
http://docs.sun.com/app/docs/doc/816-2462
INTerval LABoratory (INTLAB) for MATLAB
http://www.ti3.tu-harburg.de/~rump/intlab/
Research group in Systems and Control Engineering, IIT Bombay
http://www.sc.iitb.ac.in/~nataraj/publications/publications.htm
Global Optimization using Interval Analysis
http://books.google.com/books?id=tY2wAkb-zLcC&printsec
=frontcover&dq=interval+global+optimization&hl=en&ei=m8QYTLuhO5mMNbuzoccE&sa=
X&oi=book_result&ct=result&resnum=1&ved=0CCwQ6AEwAA#v=onepage&q&f=false
Global Optimization techniques (including non-inverval)
http://www.mat.univie.ac.at/~neum/glopt/techniques.html
List of Interval related softwares
http://cs.utep.edu/interval-comp/intsoft.html
Interval Global optimization
http://www.cs.sandia.gov/opt/survey/interval.html
Interval Methods
http://www.mat.univie.ac.at/~neum/interval.html
List of publications on Interval analysis in early days
http://interval.louisiana.edu/Moores_early_papers/bibliography.html.
98
This is page 99
Printer: Opaque this
9
A GAMS Tutorial
9.1 Introduction
The introductory part of this book ends with a detailed example of the use
of GAMS for formulating, solving, and analyzing a small and simple optimization problem. Richard E. Rosenthal of the Naval Postgraduate School
in Monterey, California wrote it. The example is a quick but complete
overview of GAMS and its features. Many references are made to other
parts of the original user manual , but they are only to tell you where
to look for more details; the material here can be read protably without
reference to the rest of the user manual.
The example is an instance of the transportation problem of linear programming, which has historically served as a laboratory animalin the development of optimization technology. [See, for example, Dantzig (1963).]
It is good choice for illustrating the power of algebraic modeling languages
like GAMS because the transportation problem, no matter how large the
instance at hand, possesses a simple, exploitable algebraic structure. You
willsee that almost all of the statements in the GAMS input le we are
about to present would remain unchanged if a much larger transportation
problem were considered.
In the familiar transportation problem, we are given the supplies at several plants and the demands at several markets for a single commodity,
and we are given the unit costs of shipping the commodity from plants
to markets. The economic question is: how much shipment should there
100
9. A GAMS Tutorial
9.1 Introduction
101
102
9. A GAMS Tutorial
if all has gone well the optimal shipments will be displayed at the bottom
as follows.
new-york
chicago
topeka
seattle
50.000
300.000
san-diego 275.000
275.000
You will also receive the marginal costs (simplex multipliers) below.
chicago
topeka
seattle
0.036
san-diego 0.009
These results indicate, for example, that it is optimal to send nothing
from Seattle to Topeka, but if you insist on sending one case it will add
.036 $K (or $36.00) to the optimal cost. (Can you prove that this gure is
correct from the optimal shipments and the given data?)
9.3 Sets
103
be inserted within specic GAMS statements. All the lowercase words in the
transportation model are examples of the second form of documentation.
5. As you can see from the list of input components above, the creation
of GAMS entities involves two steps: a declaration and an assignment or
denition. Declaration means declaring the existence of something and giving it a name. Assignment or denition means giving something a specic
value or form. In the case of equations, you must make the declaration
and denition in separate GAMS statements. For all other GAMS entities,
however, you have the option of making declarations and assignments in
the same statement or separately.
6. The names given to the entities of the model must start with a letter
and can be followed by up to thirty more letters or digits
Sets
Sets are the basic building blocks of a GAMS model, corresponding exactly to the indices in the algebraic
representations of models. The Transportation example above contains
just one Set statement:
9.3 Sets
i canning plants / seattle, san-diego /
j markets / new-york, chicago, topeka / ;
The eect of this statement is probably self-evident. We declared two
sets and gave them the names i and j. We also assigned members to the
sets as follows:
104
9. A GAMS Tutorial
9.4 Data
105
9.4 Data
The GAMS model of the transportation problem demonstrates all of the
three fundamentally dierent formats that are allowable for entering data.
The three formats are:
Lists
Tables
Direct assignments
The next three sub-sections will discuss each of these formats in turn.
106
9. A GAMS Tutorial
(An element-value list by itself is not interpretable by GAMS and will result
in an error message.)
3. The GAMS compiler has an unusual feature called domain checking,
which veries that each domain element in the list is in fact a member of
the appropriate set. For example, if you were to spell Seattle correctly
in the statement declaring Set i but misspell it as Seatlein a subsequent
element-value list, the GAMS compiler would give you an error message
that the element Seatledoes not belong to the set i.
4. Zero is the default value for all parameters. Therefore, you only need
to include the nonzero entries in the element-value list, and these can be
entered in any order .
5. A scalar is regarded as a parameter that has no domain. It can be
declared and assigned with a Scalar statement containing a degenerate list
of only one value, as in the following statement from the transportation
model.
Scalar f freight in dollars per case per thousand miles /90/;
If a parameters domain has two or more dimensions, it can still have
its values entered by the list format. This is very useful for entering arrays
that are sparse (having few non-zeros) and super-sparse (having few distinct
non-zeros).
9.4 Data
107
108
9. A GAMS Tutorial
Variables
The decision variables (or endogenous variables ) of a GAMS-expressed
model must be declared with a Variables
statement. Each variable is given a name, a domain if appropriate, and
(optionally) text. The transportation
model contains the following example of a Variables statement.
9.5 Variables
x(i,j) shipment quantities in cases
z total transportation costs in thousands of dollars ;
This statement results in the declaration of a shipment variable for each
(i,j) pair. (You will see in Chapter 8, page 65, how GAMS can handle
the typical real-world situation in which only a subset of the (i,j) pairs is
allowable for shipment.)
The z variable is declared without a domain because it is a scalar quantity. Every GAMS optimization model must contain one such variable to
serve as the quantity to be minimized or maximized.
Once declared, every variable must be assigned a type. The permissible
types are given below. Variable Type Allowed Range of Variable
free(default) -1 to +1
positive 0 to +1
negative -1 to 0
binary 0 or 1
integer 0, 1, . . . , 100 (default)
The variable that serves as the quantity to be optimized must be a scalar
and must be of the free type. In our transportation example, z is kept
free by default, but x(i,j) is constrained to non-negativity by the following
statement.
Positive variable x ;
Note that the domain of x should not be repeated in the type assignment.
All entries in the domain automatically have the same variable type.
Section 2.10 of user manual describes how to assign lower bounds, upper
bounds, and initial values to variables.
9.6 Equations
The power of algebraic modeling languages like GAMS is most apparent
in the creation of the equations and inequalities that comprise the model
under construction. This is because whenever a group of equations or inequalities has the same algebraic structure, all the members of the group
are created simultaneously, not individually.
9.6 Equations
109
110
9. A GAMS Tutorial
111
112
9. A GAMS Tutorial
qcp
for quadratic constraint programming
nlp
for nonlinear programming
dnlp
for nonlinear programming with discontinuous derivatives
mip
for mixed integer programming
rmip
for relaxed mixed integer programming
miqcp
for mixed integer quadratic constraint programming
minlp
for mixed integer nonlinear programming
rmiqcp
for relaxed mixed integer quadratic constraint programming
rminlp
for relaxed mixed integer nonlinear programming
mcp
for mixed complementarity problems
mpec
for mathematical programs with equilibrium constraints
cns
for constrained nonlinear systems
5. The keyword minimizingor maximizing
6. The name of the variable to be optimized
113
114
9. A GAMS Tutorial
new-york
chicago
topeka
seattle
15.385
100.000
san-diego 84.615
100.000
For an example involving marginal, we briey consider the ratio constraints that commonly appear in blending and rening problems. These
linear programming models are concerned with determining the optimal
amount of each of several available raw materials to put into each of several desired nished products. Let y(i,j) be the variable for the number
of tons of raw material i put into nished product j. Suppose the ratio
constraint is that
no product can consist of more than 25 percent of one ingredient, that
is,
y(i,j)/q(j) =l= .25 ;
for all i, j. To keep the model linear, the constraint is written as
ratio(i,j).. y(i,j) - .25*q(j) =l= 0.0 ;
rather than explicitly as a ratio.
The problem here is that ratio.m(i,j), the marginal value associated with
the linear form of the constraint, has no intrinsic meaning. At optimality,
it tells us by at most how much we can benet from relaxing the linear
constraint to
y(i,j) - .25*q(j) =l= 1.0 ;
Unfortunately, this relaxed constraint has no realistic signicance. The
constraint we are interested in relaxing (or tightening) is the nonlinear form
of the ration constraint. For example, we would like to know the marginal
benet arising from changing the ratio constraint to
y(i,j)/q(j) =l= .26 ;
We can in fact obtain the desired marginals by entering the following
transformation on the undesired marginals: parameter amr(i,j) appropriate
marginal for ratio constraint ;
amr(i,j) = ratio.m(i,j)*0.01*q.l(j) ;
display amr ;
Notice that the assignment statement for amr accesses both .m and .l
records from the database. The idea behind the transformation is to notice
that
y(i,j)/q(j) =l= .26 ;
is equivalent to
y(i,j) - .25*q(j) =l= 0.01*q(j) ;
115
Echo Print Reference Maps Status Reports. Error Messages Model Statistics Solution Reports. A great deal of unnecessary anxiety has been caused
by textbooks and usersmanuals that give the reader the false impression
that awless use of advanced software should be easy for anyone with a
positive pulse rate. GAMS. is designed with the understanding that even
the most experienced users will make errors. GAMS attempts to catch the
errors as soon as possible and to minimize their consequences.
116
9. A GAMS Tutorial
32
33 Positive Variable x ;
34
35 Equations
36 cost dene objective function
37 supply(i) observe supply limit at plant i
38 demand(j) satisfy demand at market j ;
39
40 cost .. z =e= sum((i,j), c(i,j)*x(i,j)) ;
41
42 supply(i) .. sum(j, x(i,j)) =l= a(i) ;
43
44 demand(j) .. sum(i, x(i,j)) =g= b(j) ;
45
46 Model transport /all/ ;
47
48 Solve transport using lp minimizing z ;
49
50 Display x.l, x.m ;
51
The reason this echo print starts with line number 3 rather than line
number 1 is because the input le contains two dollar-print-control statements. This type of instruction controls the output printing, but since it
has nothing to do with dening the optimization model, it is omitted from
the echo. The dollar print controls must start in column 1.
$title a transportation model
$ouppper
The $title statement causes the subsequent text to be printed at the top
of each page of output. The $oupper statement is needed for the echo to
contain mixed upper- and lowercase. Other available instructions are given
in Appendix D, page 193.
9.11 Summary
This tutorial has demonstrated several of the design features of GAMS that
enable you to build practical optimization models quickly and eectively.
The following discussion summarizes the advantages of using an algebraic
modeling language such as GAMS versus a matrix generator or conversational solver.
1) By using an algebra-based notation, you can describe an optimization
model to a computer nearly as easily as you can describe it to another
mathematically trained person.
9.11 Summary
117
118
9. A GAMS Tutorial
Appendix A
The First Appendix
Example1.1.
ShorttermscchedulingbasedonDisccretetim
merepre
esentatio
on
Timehorizon:H=9hr
Model:
6/18/2010
CyclicSchedulingofContinuousPlants
Time slot
k=1
stage 1
k=2
k=1
stage 2
k = NK
...
k=2
...
Transition
processing
i
k = NK
.
k=1
stage M
k=2
...
k = NK
time
WpMi
Im im
Zijkm
Cinvim
Ctrij
Tc
Tc
Tc
i
m
i
j
k m
1
TppikM
Cinvfi iM RpiM 1
TppikM
2 i k
Tc
subject to
ikm
=1
ik
ikm
=1
(1)
(2a)
(2b)
ijkm
= yikm
(3a)
ijkm
= y jk 1m
(3b)
6/18/2010
Tspikm U im yikm 0
k m
(4a)
Tepikm U im yikm 0
k m
(4b)
T ikm U im yikm 0
Tpp
k m
(4 )
(4c)
k m
(4d)
Tsp
i1 m
Tsp
i ( k + 1) m
ijm
z ij 1 m
(5a)
Tep
i
ikm
+ ijm z ij ( k
i
+ 1) m
k < NK m
(5c)
Tppikm
Tspikm
Tppikm
stage m
Tepikm
Tspikm
Tepikm
Tppik(m+1)
Tppik(m+1)
stage m+1
Tspik(m+1) Tepik(m+1)
Tspik(m+1) Tepik(m+1)
Inventory
level
(5b)
Inventory
I1im
I0im
level
I2im
I1im
I3im
time
I2im
I3im
I0im
time
6/18/2010
0 I 1im Im im
0 I 2 im Im im
0 I 3im Im im
Im im U im
i
I
i m
I 3im = I 0 im
(6)
Wp Mi = iM Rp iM Tpp ikM
(7a)
Wp Mi DiTc
(7b)
Variables:
y ikm {0 ,1}
0 z ijkm 1
Tsp ikm , Tep ikm , Tpp ikm , Wp Mi , T c , Im im , I 0 im , I 1im , I 2 im , I 3 im 0
6/18/2010
stage 1
Product
stage 2
sale pprice ($
($/ton))
A
B
C
150
400
650
50
100
250
stage 1
product
A
B
C
processing
rate
(kg/h)
800
1200
1000
demand ((kg/h)
g )
stage 2
intermediate
storage
($/ton)
processing
rate
(kg/h)
140.6
140.66
140
140.6
900
600
1100
final
inventory
($/ton.h)
4.06
44.06
06
4.06
stage 2
3
8
10
3
3
6
-
A
3
4
B
7
0
C
3
10
-
6/18/2010
stage 1
0
10.84
94.05
19.72
3
stage 2
23 9
23.9
15 68
15.68
91 75
91.75
Inv
( )
(ton)
Time (hr)
demand (kg/hr)
50
100
250
production (kg/h)
50
100
758
Where?
How much?
Scheduling: How?
Applications:
Operations Research Community:
Flowshops & Jobshops
Scheduling of Personnel, Vehicles etc.
Chemical Engineering:
Batch plants: Food, Pharmaceuticals, Paper products &
Specialty Chemicals
Continuous/Semi-continuous: Petrochemical & Refinery
operations
Grossmann et al
CMU
Floudas et al
Princeton Univ.
Karimi
m ett a
al
Kar
N
NUS
S
Reklaitis, Pekny et al
Purdue
Pinto et al
RPI
Ierapetritou et al
Rutgers Univ.
Integration
Challenges:
Different time scales
Consistent Decisions
Infrequent Revision
Large-size Problems
Planning
Forecasting
Scheduling
Market
Operations
Management
Demand/Supply
Plant
RTO
MPC
site 1
Time Horizon is
longer
period T
site 2
site N
R l i l Short
Relatively
Sh
Ti
Time
Horizons
3 months to 1 year
hour or day
site n
Problem Statement
Given:
Set of products along with their demands and due dates
Set of manufacturing locations
Process flow sheet at each plant
Equipment and storage capacities
Batch processing time for each product in all stages
Transition times (sequence dependent)
Production, transportation, inventory or earliness, and tardiness costs
Medium-term Scheduling
Mixed
d production
d
lines
Short-term scheduling
Serial units
Cyclic scheduling
Parallel lines
Robust scheduling
Hybrid flowshops
Reactive scheduling
Discrete time
Continuous-time
formulation
Slot-based
Global-event based
Unit-specific-event based
Precedence based
Max Profit
Min Make Span
Min Tardiness/Earliness
Multi-product plants
(job shops)
Multi-purpose plants
(flow shops)
Without Resources
Unlimited Storage
No Storage / Zero-wait
Finite-Intermediate
Storage
With Resources
(Utilities)
Dedicated Storage
Flexible Storage
A
B
C
A
1
B
fC
Process Representation
STN Representation:
S8
S1
Heating
i=1
F dA
Feed
S4
Hot A
0.4
IntAB
0.4 Reaction 2 0.6
S5
i=4, 5
06
0.6
S6
Product 1
S2
Feed B
IntBC
Impure E
S7
Reaction 1
i=2, 3
0.5
Reaction 3
i=6, 7
0.2
0.1
Separation
i=8
0.9
S9
Product 2
S3
Feed C
STN Representation:
1 heater, 2 reactors, 1 still
Heating
i=1
S1
9 states, 8 tasks
S4
Feed A
Hot A
Product 1
0.4
IntAB
0.4 Reaction 2 0.6
S5
i=4, 5
0.6
S6
IntBC
S8
Reaction 2
i=4
R ti 2
Reaction
i=5
Hot A
S4 0.4
Heating
i=1
1
0.6
Reactor1
J2
0.5
Feed C
IntBC
Reaction 1
i=2
Reaction 1
i=3
0.5
S9
Product 2
Product 1
Product 2
Feed A
S2
i=8
09
0.9
Feed C
Heater
J1
Feed B
Reaction 3
i=6, 7
0.2
0.1
Separation
S3
RTN Representation:
S1
S7
Reaction 1
i=2, 3
0.5
S2
Feed B
Impure E
0.6
IntAB
S5
0.8
S6
Reactor2
J3
S9
0.4
0.9
0.1 Separation
i=8
Reaction 3
i=6
Reaction 3
i=7
Separator
J4
S7
13 resources, 8 tasks
Impure E
0.2
S3
U1
UN
UN
5 slots or 6 events
H-1
0 1 2
U1
U2
UN
4 events
3
H-1
H-1
0 12
U2
U2
U1
n
U2
n+1
UN
0 12
Only 2 events
4
H-1
Scheduling Characteristics
Profit maximization
Performance
criteria
Make-span minimization
Mean-flow time minimization
Average tardiness minimization
UIS (Unlimited Intermediate Storage)
Transfer
policies
Mathematical Model
Max Profit or Min Makespan
s.t.
Allocation constraints
Material balance constraints
Capacity constraints
Storage constraints
Duration constraints
Sequence constraints
Demand constraints
Due date constraint
Time horizon constraints
Solvers
GAMS
LP/MILP CPLEX
MINLP
SBB,
SBB DICOPT
DICOPT, BARON
NLP
DAEs
MINOPT
yp
g individual batches of
Events of anyy type-such
as the start or end of p
processing
individual tasks, changes in the availability of processing equipment and other
resources etc.-are only allowed at the interval boundaries.
The main advantage of this type of time representation is that it facilitates the
formulation by providing a reference grid against which all operations competing
for shared resources are positioned.
Variables
Allocation Constraints
Capacity Limitations
Material Balance
Other aspects
Examples 1 & 2
STN Representation
S8
S1
F dA
Feed
Heating
i=1
S4
Hot A
Product 1
0.4
IntAB
0.4 Reaction 2 0.6
S5
i=4, 5
06
0.6
S6
S2
Feed B
IntBC
Impure E
S7
Reaction 1
i=2, 3
0.5
Reaction 3
i=6, 7
0.2
0.1
Separation
i=8
0.9
S9
Product 2
S3
Feed C
10
Example 3
STN Representation
A1
F dA
Feed
A2
Prod A
Int A
B1
Feed B
B2
Prod B
Int B
C1
Feed C
C2
Prod C
Int C
Data
Data of coefficients of processing times of tasks, limits on batch sizes of units
Task
i
Examples 1&2
Example 3
Unit
j
Processing
times (h)
Bijmin
Bijmax
(mu)
(mu)
Heating
(i=1)
Reaction1 (i=2)
(i=3)
Reaction2 (i=4)
(i=5)
Reaction3 (i=6)
(i=7)
Separation (i=8)
Heater
Reactor1
Reactor2
Reactor1
Reactor2
Reactor1
Reactor2
Separator
1
2
2
2
2
1
1
2
-----------------
100
80
50
80
50
80
50
200
A1
A2
B1
B2
C1
C2
Unit 1
Unit 2
Unit 3
Unit 4
Unit 2
Unit 3
2
1
1
1
1
2
-------------
2029
1691
720
929
1691
720
11
Data
Data of storage capacities, initial stock levels and prices of various states
Example 1
State
Feed A
Feed B
Feed C
Hot A
Int AB
Int BC
Imp E
P1
P2
Storage Initial
capacity stock
(mu)
(mu)
UL
UL
UL
100
200
150
200
UL
UL
Example 2
Price
($/mu)
Storage
capacity
(mu)
Initial
stock
(mu)
0
0
0
0
0
0
0
1
1
UL
UL
UL
1000
1000
0
1000
UL
UL
AA
AA
AA
AA
AA
AA
Example 3
Price
($/mu)
0
0
0
0
0
0
0
10
10
State
Storage Initial
capacity stock
(mu)
(mu)
Feed A UL
Feed B UL
Feed C UL
Int A
100000
Int B
0
Int C
0
Prod A UL
Prod B UL
Prod C UL
AA
AA
AA
Price
($/mu)
0
0
0
0
0
0
1
1
2.5
Computational Results
Example
Time Points
CPU
time (s)
Nodes
RMILP
($)
MILP
($)
Binary
variables
Continuous
variables
Constraints Nonzeros
Example 1 (H=9)
17
18
20
0.015
0.015
0.031
4
17
91
232.75
204.99
257.20
214.75
191.75
241
136
144
192
0.016
0.031
0.032
11
28
138
2138.01
2426
2437
2053.33
2268.33
2296.25
200
208
216
0.016
0.015
0
0.015
015
0
21
0
15085
14935
16591.27 15000
17501 35 16756
17501.35
48
120
126
299
316
418
358
379
505
1263
1466
2065
435
458
469
526
547
568
2155
2245
2335
130
310
325
321
801
841
559
1746
1835
Example 2 (H=10)
25
26
27
Example 3 (H=10)
19
20
21
12
R1
R2
R1
R2
R1
R3
R2
R2
R3
R2
R3
H Heater
R1 Reaction 1
R2 Reaction 2
R3 Reaction 3
D Distillation
Time (hrs)
R1
R1
R2
R3
R3
R2
R2
R2
Time (hrs)
13
Time (hrs)
14
Introduction
Short-term scheduling has received increasing attention from
both academia and industry in the past two decades
Floudas & Lin, C&ChE (2004); Floudas & Lin, Annals of OR (2005)
- Discrete-time models
- Continuous-time models
Slot based
Global-event based
Unit-specific-event based
Both Slot based and Global event based models use a set of
events that are common for all tasks and all units while Unit
Unitspecific event based models define events on a unit basis
U1
U1
U2
U2
UN
UN
5 slots or 6 events
0
H-1
0 1 2
H-1
Global event
based
U1
U2
UN
H-1
U2
n+1
UN
4 events
0 12
Unit Specific
p
event based
U1
Only 2 events
0 12
H-1
Literature Review
Review: Floudas and Lin (2004, 2005), Mendez et al. (2006)
Slot-based models
Pinto & Grossmann (1994,1995,1996,1997), Pinto et al. (1998,2000),
Alle & Pinto (2002
(2002, 2004)
2004), Karimi & McDonald (1997)
(1997), Lamba & Karimi
(2002)
Sundaramoorthy & Karimi (2005)
Unit-specific
p
event-based models
Ierapetritou & Floudas (1998a,b), Ierapetritou et al (1999), Lin & Floudas
(2001), Lin et al (2003,2004), Janak et al (2004,2005,2006,2007), Shaik and
Floudas (2007)
Giannelos & Georgiadis (2002, 2003, 2004)
Allocation constraints
Material balance constraints
Capacity constraints
Storage constraints
Duration constraints
Sequence constraints
Demand constraints
Due date constraint
Time horizon constraints
No big-M constraints
Tasks can occur over multiple events
Tasks allowed to end before the end time of the event
resulting in inexact process time representations with
additional wait periods
price R(r, t = T )
r
i, t t < T
t '> t
i, t t < T
t '>t
i, t t < T
t =1
Rrmin R ( r , t ) Rrmax
T (t ) H
T (t ) = 0
t '< t
r , t
r , t
t = 1
t ' = 1 or t ' t
t = T
T (t ) MS
t=T
t=T
price R(r , t = T )
r
R (r , t ) = Rr0
t =1
+ R (r , t 1) t >1 +
i
t <t 't + t
+
i
t t t '<t
ri
r , t
r , t
t
t = 1
t ' = 1 or t ' t or t = T
t=T
T (t ) MS
t=T
price ST (s, N )
Ws(i, n) 1
j , n
isuitij
Wf (i, n) 1
j , n
isuitij
j , n
isuitij n ' n
i, n
i , n
i, n
i , n > 1
(The finish time of a task i remains unchanged until the next occurance of task i)
i , n > 1
i, n
i , n > 1
i I ZW , n > 1
i, n
Batch-size constraints
i, n
Bimin Ws (i, n ') Wf (i, n ') Bp(i, n) Bimax Ws (i, n ') Wf (i, n ') i, n
n ' n
n ' n
n '< n
n '< n
B I (i, s, n) = si Bs (i, n)
Constraint to ensure
Bs(i,n)=Bf(i,n)=Bp(i,n)
i, n, s SI (i )
iO ( s )
i, n, s SI (i )
i, n, s SO (i )
i , n > 1
i, n, s SO (i )
B O (i, s, n)
B (i , s , n )
I
s, n > 1
iI ( s )
n < N
Tightening constraints
D(i, n) H
isuitij
j , n
isuitij n ' n
isuitij n ' n
Ts (i, n) H
i , n
Tf (i, n) H
i , n
ST ( s, n) STsmax
j , n
s FIS , n
n = N
T (N ) = H
T ( N ) = MS
Tightening constraints
D(i, n) MS
isuitij
j , n
isuitij n ' n
When utility requirements are considered, the following constraints are added:
i, r , n
i , r , n
R (r , n) Rrmax
r , n
r , n
10
price ST (s, K )
s
SL(k ) H
k
Z ( j, k ) =
Y (i, j, k )
j , 0 k < K
isuitij
Z ( j, k ) =
YE (i, j, k )
j , 0 < k < K
isuitij
( Y (i, j, k ) +
isuitij
t ( j, k )
isuitij
ij
ij
y (i, j, k ) + ij b(i, j, k )
ST ( s, k ) = ST ( s, k 1) +
j isuitij ,i 0, si > 0
ST ( s, k ) STsmax
ij
i > 0 , j suitij ,k
, >0
B(i, j , k ) ) SL( k + 1)
j , k < K
j, 0 < k < K
si BE (i, j , k ) +
j isuitij ,i 0, si < 0
si B (i, j , k )
s, k
s FIS , k
11
t ( j , k ) max( ij + ij Bijmax )
isuitij
k > 0
j , k
Y ( i , j , k ) = y (i , j , k ) = b ( i , j , k ) = B ( i , j , k ) = 0
i, j suitij = 0 or k = K
i, j suitij = 0 or k = 0
j , k = K
t ( j , k ) = 0; SL ( k ) = 0
k = 0
0 y (i, j , k ), YE (i, j , k ), Z ( j , k ) 1
Min MS=
SL(k )
k =1
ST ( s, K ) Demand s
References
Floudas, C. A.; Lin, X. Continuous-time versus discrete-time approaches for scheduling of chemical
processes: A review. Comput. Chem. Eng. 2004, 28, 2109.
Floudas, C. A.; Lin, X. Mixed integer linear programming in process scheduling: Modeling,
algorithms, and applications. Ann. Oper. Res. 2005, 139, 131.
Mendez, C. A.; Cerda, J.; Grossmann, I. E.; Harjunkoski, I.; Fahl, M. State-of-the-art review of
optimization methods for short-term scheduling of batch processes. Comput. Chem. Eng. 2006, 30, 913
Maravelias, C. T.; Grossmann, I. E. New general continuous-time state-task network formulation for
short-term scheduling of multipurpose batch plants. Ind. Eng. Chem. Res. 2003, 42, 3056.
Pinto, J. M.; Grossmann, I. E. Optimal cyclic scheduling of multistage continuous multiproduct plants.
Comput. Chem. Eng. 1994, 18, 797.
Sundaramoorthy, A
Sundaramoorthy
A.;; Karimi
Karimi, II. A
A. A simpler better slot-based continuous-time formulation for shortterm scheduling in multipurpose batch plants. Chem. Eng. Sci. 2005, 60, 2679.
Ierapetritou, M. G.; Floudas, C. A. Effective continuous-time formulation for short-term
scheduling: 1. Multipurpose batch processes. Ind. Eng. Chem. Res. 1998, 37, 4341.
Ierapetritou, M.G.; Floudas, C.A. Effective continuous-time formulation for short-term scheduling: 2.
Continuous and semi-continuous processes. Ind. Eng. Chem. Res. 1998, 37, 4360.
12
References
Janak, S. L.; Lin, X.; Floudas, C. A. Enhanced continuous-time unit-specific event-based formulation for
short-term scheduling of multipurpose batch processes: Resource constraints and mixed storage policies.
Ind. Eng. Chem. Res. 2004, 42, 2516.
Shaik, M. A.; Janak, S. L.; Floudas, C. A. Continuous-time models for short-term scheduling of
multipurpose batch plants: A comparative study. Ind. Eng. Chem. Res. 2006, 45, 6190.
Shaik, M. A.; Floudas, C. A. Improved unit-specific event-based model continuous-time model for
short-term scheduling of continuous processes: Rigorous treatment of storage requirements. Ind. Eng.
Chem. Res. 2007, 46, 1764.
Shaik, M. A.; Floudas, C. A. Unit-specific event-based continuous-time approach for short-term
scheduling of batch plants using RTN framework. Comput. Chem. Eng. 2008, 32, 260.
Shaik, M. A.; Floudas, C. A.; Kallrath, J.; Pitz, H. -J. Production scheduling of a large-scale industrial
continuous plant: Short
Short-term
term and medium
medium-term
term scheduling
scheduling , Comput.
Comput Chem
Chem. Eng
Eng. 2009,
2009 32,
32 670
670-686
686.
Shaik, M. A.; Bhushan, M.; Gudi, R. D.; Belliappa, A. M. Cyclic scheduling of continuous multiproduct plants in a hybrid flowshop facility. Ind. Eng. Chem. Res. 2003, 42, 5861.
Munawar, S.A. and Gudi, R.D.A Multi-level, Control-theoretic Framework for Integration of Planning,
Scheduling and Rescheduling, Ind. Eng. Chem. Res., 2005, 44, 4001-4021.
Shaik, M. A.; Floudas, C. A. Unit-specific event-based continuous-time approach for short-term
scheduling of batch plants using RTN framework. Comput. Chem. Eng. 2008, 32, 260.
13
s
i si > 0
jsuitij
w(i, j, n) 1
j,n
isuit ij
si
i si > 0
b(i, j , n 1) +
jsuitij
i,j suitij ,n
si
i si < 0
b(i, j , n)
s,n
jsuitij
ts (i, j , n + 1) ts (i ', j ', n) + i ' j ' w(i ', j ', n) + i ' j ' b(i ', j ', n) H (1 w(i ', j ', n))
s , i , i ', j , j ' suitij , suiti ' j ' , i i ', j j ', si < 0, si ' > 0, n < N
i, j suitij , n
i, j suitijj
i, j suitij = 0
ST (s, N ) +
si > 0
si
b(i, j , N ) Demand s
jsuitij
i, j suitij
tasks
tasks related to resource r
resources
equipment resources
material resources
material resources with finite dedicated storage
event points within the time horizon
Parameters
H
Pr
Dr
r
ii '
Ermin
Ermax
rip , ric
rip , ric
scheduling horizon
price of resource r
demand for resource r
sequence independent clean up time
sequence-dependent clean up time required between tasks i and i
lower bound on the availability of resource r
upper bound on the availability of resource r
c
proportion of equipment resource produced, consumed in task i, rip 0, ri 0
p
c
proportion of material resource produced, consumed in task i, ri 0, ri 0
Positive variables
b(i,n))
b(i
E0(r)
E(r,n)
Ts(i,n)
A
Amount
t off material
t i l processed
d by
b ttaskk i in
i eventt n
initial amount of resource r available or required from external sources
excess amount of resource r available at event n
time at which task i starts at event n
Capacity Constraints
w(i, n) Bimin b(i, n) w(i, n) Bimax
Ermin E ( r , n) Ermax
i I , n N
(1)
r R , n N
(2)
ri w(i, n) + ri b(i, n)
iI r
r R , n N , n > 1
r R , n N , n = 1
(3a)
(3b)
iI r
iI r
E (r , n) = E0 (r ) + ric w(i, n)
r R J , n N , n = 1
iI r
A separate task is assumed for each task suitable in multiple equipment resources
Implicitly represents the allocation constraint (No need to write separately)
i I , n N , n < N
(4)
r R J , i I r , i ' I r , n N , n < N
(5a)
(5c)
The last constraint is novel and more generic than the earlier version
shown below
Only the last constraint has big-M terms unlike in the earlier works
(6)
i I , n N
T s ( i , N ) + i w( i , N ) + i b ( i , N ) H
(7a)
i I
(7b)
Ti ht i Constraint
Tightening
C
t i t
nN iI r
r R J
(8)
Objective Function
Maximization of Profit
P E (r , N ) + (
Max Profit =
rR S
iI r
p
ri
(9)
Demand Constraints
E (r , N ) + ( rip w(i, N ) + rip b(i, N ) ) Dr
r R S
(10)
iI r
i I
(11)
nN iI r
r R J
(12)
E (r , n) Ermax
r R s , n N
Example:
N2
U2
U1
100
E(r,N1)=200 ?
N1
N2
300
(6a)
Constraint (6a) along with Eq (6) for different tasks in different units will enforce the start
time of next event to be equal to the end of previous event
T s (i, n) T s (i ', n) + i ' w(i ', n) + i 'b(i ', n) + H (1 w(i ', n))
r R FIS , i ' I r , i I r , i i ', rip' > 0, ric < 0, n N
(6b)
Constraint (6b) enforces that the start time of the consuming task at event n should be
before the end of the active producing task (producing FIS) at event n
There is no need to write these constraints at the end of the horizon time, as the excess
amount produced can stay in the batch unit itself
Flexible FIS: storage needs to be considered as a separate task (Janak et al., 2004)
No big-M constraints
Although based on unit specific events, due to special
sequencing constraints on tasks producing/consuming the
same state
state, it is similar to global events
They enforce the start and end times of all tasks producing or
consuming the same state to be same
Benchmark Examples
Problem Statement
Given:
Determine:
P d ti recipe
Production
i iin tterms off ttaskk sequences
Pieces of equipment and their ranges of capacities
Material storage policy (Unlimited)
Production requirement
Utility requirements and their maximum availabilities
Time horizon under consideration
Optimal sequence of tasks taking place in each unit
Amount of material processed at each time in each unit
Processing time of each task in each unit
Utility usage profiles
Example 1
STN Representation
S1
Task 1
i=1, 2
S2
Task 2
i=3
S3
Task 3
i=4, 5
S4
Problem involves 5 units, 3 processing tasks, and 4 states (1 feed, 2 int, 1 product)
Variable batch sizes and processing times
Unlimited intermediate storage (UIS) for all states
Consider two objective functions:
Maximization of Profit
for 3 cases of different time horizons:
Case 1a: H=8 hr
Case 1b: H=12 hr
Case 1c: H=16 hr
Minimization of Makespan
for 2 cases of different demands:
Case 1a: D4 =2000 mu
Case 1b: D4 =4000 mu
Example 2
STN Representation
S8
S1
Heating
i=1
F dA
Feed
S4
Hot A
Product 1
0.4
IntAB
0.4 Reaction 2 0.6
S5
i=4, 5
06
0.6
S6
S2
Feed B
Impure E
IntBC
S7
Reaction 1
i=2, 3
0.5
0.1
Separation
i=8
0.9
Reaction 3
i=6, 7
0.2
S9
Product 2
S3
Feed C
Maximization of Profit
for 3 cases of different time horizons:
Case 2a: H=8 hr
Case 2b: H=12 hr
D8 =200 mu
D9 =200 mu
Example 3
STN Representation
Feed 3
S8
0.75
Heating 2
i=2
Int 1
S1
Heating 1
S3
0.5
Reaction 2
i=5, 6
0.5
S2
S6
i=1
Feed 1
Reaction 1
i=3, 4
Feed 2
S5
Int 3
S4
S9
Int 6
0.25
Int 4
Reaction 3 0.4
S13
i=7, 8
Product 2
0.6
Int 7 S10
Int 5
0.4
0.4
Separation 0.5 S7 0.4 Mixing
i=9
i=10, 11
0.1
0.2
Feed 4
S12
Product 1
S11
Int 2
Minimization of Makespan
for 2 cases of different demands:
Case 3a: D12 =100 mu, D13 =200 mu
Case 3b: D12 = D13 =250 mu
Computational Results
Maximization of Profit
Example 1
Model
Events
Example 1a (H=8)
S&K
M&G
CBM
CBMN(t=1)
(t=2)
G&G
I&F
5
5
5
5
5
4
4
5
Example 1b (H=12)
S&K
9
M&G
9
CBM
9
CBMN(t=1)
9
(t=2)
9
G&G
6
I&F
6
7
Example 1c (H=16)
S&K
12
13
M&G
12
13
CBM
12
13
CBMN(t=2)
12
(t=3)
12
(t=3)
13
G&G
11
I&F
9
10
a Suboptimal
CPU
time (s)
Nodes
RMILP
($)
0.05
0.03
0.04
0.01
0.02
0.01
0.01
0.05
13
2
0
0
7
0
1
160
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
2000.0
2804.6
1840.2
1840.2
1840.2
1840.2
1840.2
1840.2
1840.2
1840.2
40
40
70
20
35
20
10
15
215
195
115
70
85
76
48
62
192
520
201
86
116
122
69
92
642
1425
655
274
414
355
176
245
26.83
29.52
26.93
0.23
10.32
0.03
0.03
0.19
27176
26514
23485
606
21874
22
24
589
4481.0
4563.8
5237.6
4419.9
5237.6
3890.0
4000.0
4857.6
3463.6
3463.6
3463.6
3301.6 a
3463.6
3301.6a
3463.6
3463.6
80
80
220
40
75
30
20
25
415
375
301
130
165
114
76
90
408
1000
553
162
232
182
115
138
1358
3415
2099
546
886
541
314
383
5328 22
5328.22
>670001
37675.13
>670002
32456.61
>670003
1086.08
3911.14
40466.83
3.40
1.76
20.60
3408476
36297619
17465450
20693001
14385711
22948021
1642027
4087336
44252075
9533
6596
89748
6312.6
6312
6
6381.9
6332.8
6391.4
7737.6
8237.6
7737.6
7737.6
8237.6
6236.0
6601.5
6601.5
5038
5038.1
1
5038.1
5038.1
5038.1
5038.1
5038.1
5000.0 a
5038.1
5038.1
4840.9a
5038.1
5038.1
110
120
110
120
385
450
105
150
165
55
35
40
565
615
510
555
493
567
225
270
295
209
118
132
570
624
1360
1480
922
1065
319
409
448
332
184
207
1895
2074
5275
5965
3707
4343
1240
1680
1848
1006
521
590
Example 1c
Maximization of Profit
9 events, Bin = 35
CPU time = 1.76 sec
Nodes=6596
Castro et al (2004)
Computational Results
Example 2
Model
Events
Example 2a (H=8)
S&K
M&G
CBM
CBMN(t=1)
G&G
I&F
5
5
5
5
4
4
5
Example 2b (H=12)
S&K
7
8
9
10
11
M&G
7
8
9
10
11
CBM
7
8
9
10
11
CBMN(t=2)
7
8
9
10
11
G&G
6(to 11)
I&F
7
8
a
CPU
time (s)
0 07
0.07
0.16
0.07
0.01
0.03
0.03
0.28
RMILP
($)
4
26
8
4
14
13
883
1730.9
1730
9
1730.9
1812.1
1730.9
1812.1
1812.1
2305.3
1498
1498.6
6
1498.6
1498.6
1498.6
1498.6
1498.6
1498.6
48
64
112
32
32
18
26
235
360
184
104
142
90
115
249
826
322
114
234
165
216
859
2457
1105
439
820
485
672
3002.5
3167.8
3265.2
3315.8
3343.4
3002.5
3167.8
3265.2
3315.8
3343
3343.4
4
3190.5
3788.3
4297.9
4770.8
5228.7
3045.0
3391.0
3730.5
4070.0
4409.5
3190.5
3788.3
4297.9
2610.1
2610.3
2646.8
2646.8
2646.8a
2610.1
2610.3
2646.8
2646.8a
2658
2658.5
5
2610.1
2610.3
2646.8
2646.8
2627.9a
2610.1
2610.3
2646.8
2646.8
2646.8a
2564.6a
2658.5
2658.5
72
84
96
108
120
96
112
128
144
160
216
280
352
432
520
88
104
120
136
152
48
42
50
367
433
499
565
631
526
609
692
775
858
316
394
480
574
676
188
218
248
278
308
208
165
190
387
456
525
594
663
1210
1402
1594
1786
1978
572
721
886
1067
1264
224
261
298
335
372
348
318
369
1363
1615
1867
2119
2371
4019
4884
5805
6782
7815
2146
2791
3519
4330
5224
1050
1238
1426
1614
1802
1238
1046
1233
1.93
1234
29.63
16678
561.58
288574
10889.61
3438353
>670001
17270000
2.15
814
58.31
17679
2317.38
611206
>670002
10737753
>670003
9060850
1.38
1421
35.81
30202
1090.53
680222
40355.97 19225950
>670004
13393455
0.63
1039
14.39
32463
331.72
593182
4366.09
6018234
>670005
80602289
0.33
701
6.19
14962
105.64
211617
MILP
($)
Maximization of Profit
Nodes
Binary
variables
Continuous
variables
Constraints
Nonzeros
Suboptimal solution; Relative Gap: 1.59 %1, 3.16%2, 5.12%3 , 28.16%4 , 2.58%5
Computational Results
Example 3
Model
Events
Example 3a (H=8)
S&K
M&G
CBM
CBMN(t=2)
G&G
I&F
7
7
7
7
5
5
6
Example 3b (H=12)
S&K
9
10
M&G
9
10
CBM
9
10
CBMN(t=2)
9
10
G&G
6
I&F
7
8
a
CPU
time (s)
RMILP
($)
MILP
($)
145888
429949
13130
10361
807
1176
57346
2513.8
2560.6
2809.4
2606.5
2100.0
2100.0
2847.8
1583.4
1583.4
1583.4
1583.4
1274.5a
1583.4
1583.4
102
132
297
121
55
30
41
597
717
433
264
244
155
190
584
1667
841
343
392
303
377
2061
5601
3049
1495
1335
875
1139
372.92
94640
>710001 12781125
19708.33 2254227
2
>71000
5857914
290.84
80123
16416.31
3194816
107.97
47798
1173.82
751686
1.18
2750
18.33
15871
50.48
41925
3867.3
4029.7
3867.3
4029.7
4059.4
4615.6
3864.3
4189.8
2871.9
3465.6
4059.4
3041.3
3041.3
3041.3
2981.7a
3041.3
3041.3
3041.3
3041.3
2443.2a
3041.3
3041.3
136
153
176
198
484
594
165
187
66
52
63
783
876
943
1056
658
787
348
390
290
225
260
792
896
2195
2459
1307
1576
457
514
469
451
525
2789
3153
8114
9492
5001
6154
2031
2299
1608
1403
1667
184.46
1012.68
19.82
6.90
0.35
0.38
25.92
Nodes
Maximization of Profit
Binary
Continuous Constraints Nonzeros
variables variables
Minimization of Makespan
Example 1
Model
Events
H
(hr)
CPU
time (s)
Nodes
RMILP
(hr)
MILP
(hr)
27.126
25.702
25.142
24.871
24.716
27.126
25.335
25.024
24.834
23.313
21.049
19.049
17.049
27.126
25.824
25.358
19.049
17.049
15.049
27.126
25.064
24.236
24.236
24.236
24.236
29.772
29.772
29.439
29.106
28.773a
29.772
29.772
29.439
29.106a
29.772
29.772
29.439
29.106a
29.772
29.772
29.772
29.439
29.106
28.773a
29.772
29.772a
28.439
27.903
27.881
27.881
120
130
140
150
160
120
130
140
150
450
520
595
675
60
65
70
135
145
155
60
75
50
55
60
65
615
665
715
765
815
556
601
646
691
568
647
731
820
191
206
221
286
306
326
228
285
160
174
188
202
624
678
732
786
840
1485
1605
1725
1845
1066
1219
1382
1555
239
258
277
407
436
465
367
457
253
276
299
322
2074
2253
2432
2611
2790
6056
6786
7551
8351
4404
5095
5836
6627
824
892
960
1605
1723
1841
1108
1387
732
801
870
939
-101.03
34598 51.362
-- 15814.03 4164921 49.939
100 21974.42 2525960 51.362
>900004 5129168 49.572
-6.09
185 43.313
-2016.50
136348 41.049
-0.05
0 51.362
-0.20
72 50.061
->800005 34358380 39.049
-0.07
0 51.362
-1.53
1707 49.594
100
6.48
19019 48.473
384.12
832372 48.473
56.432
56.432a
56.432
57.765a
56.432
56.432a
56.432
56.432
56.099a
56.432
56.432a
52.433
52.072
220
230
220
230
1375
1495
110
115
235
110
120
100
105
1115
1165
1006
1051
1583
1712
341
356
486
418
456
300
314
1164
1218
2685
2805
3046
3299
429
448
697
667
727
483
506
3864
4043
14931
16011
13564
14755
1504
1572
2785
2038
2224
1422
1491
-1.18
362
-31.54
15622
-728.05
400789
-- 37877.69 12064418
-- >800001 17279722
50
2.19
394
645.06
139488
25253.81 5273904
>900002 11258561
-0.50
6
-14.90
4262
-2163.55
454549
-- 64850.69
9852772
-0.02
0
-0.11
65
-0.28
417
-235.90
236250
-- 27994.64 23426601
-- >800003
80105289
-0.03
0
-1.87
3529
50
0.12
208
2.26
7863
41.89
134961
950.64
2693556
Binary
Continuous Constraints Nonzeros
variables variables
Suboptimal solution; Relative Gap: 4.22 %1, 7.38 %2, 0.12%3, 11.01%4, 2.18 %5
Computational Results
Minimization of Makespan
Example 2
Model
Events
H
(hr)
CPU
time (s)
---50
10.98
519.35
11853.03
66.55
5693.53
>800001
---------50
7.75
727.23
32258.74
0.71
50.49
56.11
5535.27
1.97
1614.35
0
0.78
78
74.26
1672.11
Nodes
RMILP
(hr)
5378 18.685
142108 18.685
2840768 18.685
15674 18.685
1066939 18.685
5019315
6426
441130
13776145
1809
134189
109917
8389012
3804
1182082
1008
111907
2079454
12.555
9.889
7.223
18.685
18.685
15.654
12.988
12.555
10.475
12
12.738
738
12.477
12.435
96
556
108
622
120
688
128
693
144
776
19.340
160
528
597
666
1598
1790
859
1936
2188
2440
5869
6850
1982
19.789
19.340
19.340
19.789
19.789
19.340
19.340
19.789
19.789a
19
19.764
764
19.340
19.340
352
432
520
64
72
136
152
64
80
45
53
61
888
1069
1266
216
241
337
374
475
589
367
418
469
3584
4403
5305
872
979
1623
1811
1675
2093
1211
1398
1585
481
575
677
193
215
279
309
274
340
190
215
240
Computational Results
Minimization of Makespan
Example 3
Model
Events
H
(hr)
CPU
Nodes
time (s)
RMILP
(hr)
36
5156
53789
821194
11858901
316
21366
605450
7481387
68
2762
22452
673460
20380858
67
2566
17426
326752
16842943
5054232
338
3960
458
3506
12.317
11.621
11.417
11.335
11.295
12.317
11.621
11.417
226238
5802875
9627
28079
84970
2668
424617
MILP
(hr)
Binary
Continuous Constraints Nonzeros
variables variables
10.941
8.941
6.941
4.941
3.825
12.192
10.192
8.192
6.192
6.192
4.635
11.066
10.167
11.066
10.000
14.366
13.589
13.532
13.367
13.367
14.366
13.589
13.532
11.335
14.366
13.589
13.532
13.367
13.367
14.366
13.589
13.532
13.532
13.367
13.367
14.616
14.616a
13.367
13.367
119
690
136
783
153
876
170
969
187
1062
154
831
176
944
198
1057
13.367
220
385
541
484
659
594
788
715
928
847
1079
143
307
165
349
187
391
209
433
297
521
330
574
77
336
99
428
52
225
63
260
689
793
897
1001
1105
1937
2201
2465
1170
1064
1309
1578
1871
2188
402
459
516
573
725
801
558
712
452
526
14.535
14.535
10.722
12.494
12.763
12.500
12.500
17.357a
17.357a
17.357a
17.357a
18.978a
17.025
17.025
170
220
715
209
110
85
96
1001
2729
1871
573
789
674
748
969
1170
928
433
474
330
365
2425
2789
3153
3517
3881
6905
8208
9592
2729
4044
5090
6254
7536
8936
1776
2044
2312
2580
3494
3877
1902
2448
1413
1677
11057
3517
11057
7536
2580
2721
2205
2469
Conclusions
Long-term Scheduling
Medium-term scheduling
Short-term scheduling
Discrete time
Slot-based
Continuous-time
formulation
Global event-based
Unit-specific event-based
Positive variables
B(i,n)
ST(s,n)
ST0(s)
Ts(i,n)
Tf(i,n)
Storage
policies
w(i, n) 1
j J , n N
iI j
Rimin
(T f (i p , n) T s (i p , n)) b(i p , n) Rimax
(T f (i p , n) T s (i p , n))
p
p
b(i p , n) = Ri p (T (i p , n) T (i p , n))
f
min
p
max
Demand constraints: Ds si b(i p , n) Ds
nN i pI s
i p I p , n N
i p I p , n N
s S P
T f (i p , n) T s (i p , n) Hw(i p , n)
Extra tightening constraint:
(T
n N iI j
i p I p , n N
j J
i p I s
ST ( s, n) = ST0 ( s ) +
i p I s
c
si p
c
si p
s S R , n N , n > 1
b(i p , n)
s S R , n = 1
b(i p , n)
Intermediates:
ST ( s, n) = ST ( s, n 1) +
i p I s
ST ( s, n) = ST0 ( s ) +
i p I s
p
si p
p
si p
b(i p , n) +
b(i p , n) +
i p I s
i p I s
c
si p
c
si p
b(i p , n) s S IN , n N , n > 1
b(i p , n) s S IN , n = 1
i I , n N , n N
T s (i, n + 1) T f (i ' , n)
j J , i I j , i ' I j , n N , n N
j J , i, i ' I j , i i' , n N , n N
Giannelos and Georgiadis (2002) enforced the zw condition for batch plants
as well leading to suboptimal solutions
(Sundaramoorthy & Karimi, 2005; Shaik, Janak, & Floudas, 2006)
ST ( s, n) STsmax
s S FIS , n N
i p I s
p
si p
b(i p , n) +
i p I s
c
si p
or eliminate ST(s,n)
b(i p , n) = 0
s S IN , n N
ist I st , n N
ist I st , n N
i p I s
i p I s
p
si p
p
si p
b(i p , n) +
ist I s
p
sist
ist I st , n N
or eliminate ST(s,n)
b(ist , n 1) + b(i, n) = 0
iI s
c
si
s S FIS , n N , n > 1
s S FIS , n = 1
iI s
ist I st , n N
(c) End time of storage task end time of both producing and consuming tasks
(e) When z(ist,n)=1 then ist should go till start time of next consuming/producing task:
T f (ist , n) T s (i p , n + 1) H (2 w(ist , n) w(i p , n + 1)) H (1 z (ist ,n))
s S IN , ist I s , i p I s , n N , n N
T f (ist , n) T s (i p , n + 1) + H (2 w(ist , n) w(i p , n + 1)) + H (1 z (ist ,n))
s S IN , ist I s , i p I s , n N , n N
ST0 ( s ) +
i p I s
i p I s
p
si p
p
si p
s S FIS , n N , n > 1
b(i p , n) STsmax
s S FIS , n N , n = 1
b(i p , n) STsmax
w(i
st
, n) w(i p , n)
s S FIS , i p sipp , n N
i st sic st
ist I s
p
sist
b(ist , n 1) +
ST0 ( s ) +
i p I s
p
si p
i p I s
p
si p
b(i p , n)
b(i p , n)
ist I s
max
isi
ist I s
max
isi
w(ist , n)
s S FIS , n = 1
No need to enforce the zero wait condition on start time of processing tasks
(c) End time of consuming task End time of producing task:
Objective Function
Max Profit
C1
price
sS P n N
i I s
p
si
b(i, n) C2 w(i, n) C3
iI n N
z(i
i st I st nN
st
, n)
Min MS
T f (i, n) MS
i I , n N
j J
nN iI j
Munawar A. Shaik
and Christodoulos
Department of Chemical Engineering
Princeton University
A. Floudas
State-Task-Network
Pack-P1
P1
(Line 3)
Pack-P2
P2
(Line 1)
Pack-P3
P3
(Line 2)
Pack-P4
P4
(Line 1)
Pack-P5
P5
(Line 2)
Pack-P6
P6
Pack-P7
P7
(Line 4)
Pack-P8
P8
(Line 1)
Pack-P9
P9
(Line 2)
Pack-P10
P10
(Li 5)
(Line
Pack-P11
P11
(Line 5)
Pack-P12
P12
(Line 4)
Pack-P13
P13
(Line 4)
Pack-P14
P14
(Line 2)
Pack-P15
P15
(Line 4)
(Mixer A)
Make-Int1
Int1
Base A
(Mixer A)
Make-Int2
Int2
(Mixer B,C)
Make-Int3
Int3
Base B
(Line 3)
(Mixer B C)
Make-Int4
(Mixer B,C)
Make-Int5
Base C
(Mixer B,C)
Make-Int6
(Mixer B,C)
Make-Int7
Int4
Int5
Int6
Int7
Different Cases
a) Unlimited intermediate storage with Dmin
b) No intermediate storage with Dmin
c) Flexible finite intermediate storage: 3 tanks available with
maximum
i
capacity
it off 60 tton each,
h and
d any iintermediate
t
di t can
be stored in any of the 3 tanks. With Dmin and Dmax
d) Same as (c) but without Dmax
e) Restricted finite intermediate storage: With Dmin and Dmax
f) Same as (e) but without Dmax
Restricted Storage Suitability
State(s)
tank (Jst)
Int1
Int2
Int3
Int4
Int5
Int6
Int7
Tank A
Tank B
Tank C
Tank A
Tank B
Tank A
Tank C
Other models
Ierapetritou, M. G.; Floudas, C. A. Effective continuous-time formulation for
short-term scheduling: 2. Continuous and semi-continuous processes. Ind.
g Chem. Res. 1998,, 37,, 4360.
Eng.
Giannelos, N. F.; Georgiadis, M. C. A novel event-driven formulation for shortterm scheduling of multipurpose continuous processes. Ind. Eng. Chem.
Res., 2002, 41, 2431.
Mendez, C. A.; Cerda, J. An efficient MILP continuous-time formulation for
short-term scheduling of multiproduct continuous facilities. Comput. Chem.
Eng. 2002, 26, 687.
Castro, P
Castro
P. M
M.;; Barbosa-Povoa
Barbosa Povoa, A
A. P
P.;; Matos
Matos, H
H. A
A.;; Novais
Novais, A
A. Q
Q. Simple
continuous-time formulation for short-term scheduling of batch and
continuous processes. Ind. Eng. Chem. Res. 2004, 43, 105.
Castro, P. M.; Barbosa-Povoa, A. P.; Novais, A. Q. A divide-and-conquer
strategy for the scheduling of process plants subject to changeovers using
continuous-time formulations. Ind. Eng. Chem. Res. 2004, 43, 7939.
Castro et al
4
108
356
1040
4867
2695.32
2695.32
2695.32
0
1.03
69
Giannelos &
Georgiadis
9
4
236
108
762
628
894
1637
5354
2695.32
2695.32
2695.32
2695.32
2695.32
2695.32
0
0
58.5a
186.44
1041
15310
Mendez &
Cerda
38
44
140
2695.32
2695.32
14a
10
Castro et al
9
236
762
894
2695.32
2672.50
2672.50
0.85
200000a
8007335
11
Giannelos &
Georgiadis
4
220
712
2113
6884
1388
1388
1388
0
0.81
30
Giannelos &
Georgiadisa
4
292
712
2113
7176
13876.67a
13857a
1388
0.01
3.92
50
Max Profit
C1
price
sS P n N
i I s
p
si
b(i, n) C2 w(i, n) C3
iI n N
z(i
i st I st nN
st
, n)
12
13
14
Giannelos &
Georgiadis
Events
4
4
Binary variables
164
164
Continuous variables 412
656
Constraints
2065
1833
Nonzeros
9346
5904
RMILP
26946.72
2695.32
MILP
26914.18
2689.39
Profit
2695.32 2689.39
Integrality gap (%)
0.093
0.22
CPU time (s)
662.58 60000
Nodes
16524 3550540
15
Conclusions
Proposed an Improved Unit-Specific Event-based Continuous-Time
formulation for Short-Term Scheduling of Continuous Plants
Rigorous
Ri
T
Treatment
t
t off St
Storage Requirements
R
i
t compared
d tto th
the M
Model
d l
of Ierapetritou and Floudas (1998)
Improved Sequencing Constraints
Finds the global optimal solution for all variants of an Industrial Case
Study from the Literature
16
Problem Definition
Given:
.
.
products
. ...
.
.
.
stage 1
.
.
.
.
stage 2
1
2
N
stage M
M lti t
Multistage
Multiproduct
M lti
d tC
Continuous
ti
Pl
Plantt
(Pinto & Grossmann, 1994)
Mathematical Formulation
Time slot
stage 1
stage 2
k=1
k=2
k=1
k = NK
...
k=2
Transition
k = NK
...
processing
i
.
stage M
k=1
k=2
...
k = NK
time
WpMi
Im im
Zijkm
Cinvim
Ctrij
Tc
Tc
Tc
i
m
i
j
k m
1
TppikM
Cinvfi iM RpiM 1
TppikM
2 i k
Tc
subject to
ikm
=1
ik
ikm
=1
(1)
(2a)
(2b)
ijkm
= yikm
(3a)
ijkm
= y jk 1m
(3b)
k m
(4a)
Tepikm U im yikm 0
k m
(4b)
Tppikm U im yikm 0
k m
(4c)
k m
(4d)
Tsp
i1 m
Tsp
i ( k + 1) m
ijm
z ij 1 m
(5a)
Tep
i
ikm
+ ijm z ij ( k
i
+ 1) m
k < NK m
(5b)
(5c)
Inventory Breakpoints
Tppikm
stage m
Tspikm Tepikm
Tppik(m+1)
ik( 1)
Tspik(m+1) Tepik(m+1)
Tppikm
Tspikm
Tpp
ppik(m+1)
stage m+1
Inventory
Tepikm
Tspik(m+1) Tepik(m+1)
Inventory
level
I1im
level
I2im
I0im
I3im
I1im
I2im
I3im
I0im
time
time
Inventory Breakpoints
I 2 im
i = I 1im
i + ( im Rp im Rp i ( m +1) ) max 0, Tep ikm
ik Tsp ik ( m + 1)
k
0 I 1im Im im
0 I 2 im Im im
0 I 3im Im im
Im im U im
I
i m
I 3im = I 0 im
(6)
Wp Mi = iM Rp iM Tpp ikM
(7a)
Wp Mi DiTc
(7b)
Mathematical model
Variables:
y ikm {0 ,1}
0 z ijkm 1
Tsp ikm , Tep ikm , Tpp ikm , Wp Mi , T c , Im im , I 0 im , I 1im , I 2 im , I 3 im 0
Most of the Scheduling problems in Chemical
Engineering result in MILP/MINLP models with
large number of binary and continuous variables.
fB
fC
stage 1
P d t
Product
A
B
C
ssale
l p
price
i ($/t
($/ton)
n)
150
400
650
stage 2
demand
d m nd (kg/h)
(k /h)
50
100
250
product
processing
rate
(kg/h)
A
B
C
stage 2
intermediate
stora e
storage
($/ton)
800
1200
1000
processing
rate
(kg/h)
140.6
140.6
140.6
final
inventory
($/ton.h)
900
600
1100
4.06
4.06
4.06
stage 2
3
8
10
3
3
6
-
3
4
7
0
3
10
-
3P2S Solution
stage 1
3
0
B
10.84
15.68
C
94.05
19.72
3
stage 2
23.9
C
91.75
Inv
(ton)
Time (hr)
3P2S Solution
Profit
= $ 442.53 / hr
Cycle time = 94.05 hr
Variables = 146 (18 bin)
Constraints = 162
CPU time = 12.43 sec
P d t
Product
d
demand
d (kg/hr)
(k /h )
A
B
C
production
d ti (kg/h)
(k /h)
50
100
250
50
100
758
tanks
L41
L11
S1
S2
S3
L12
stage 1
tanks
S4
LM-2,1
S5
L42
stage 2
stage 3
stage 4
SM-2
SM-1
SM
LM-2,2
stage 5
stage M
Effect of Sloping
Rp Tpp
Rp Tpp
Tank
Tank
Stage
Rp Trpp (sloping)
Rp Trpp (sloping)
Tank
Tank
Trsp
Tpp
Tsp
Tep
Trep
Trpp
Tep
Tsp
sloping
Model Constraints
Sequencing constraints
Mass balance constraints
Inventory balance constraints
Objective function
MINLP f
formulation
l ti
ikm
=1
ikm
=1
ijkm
= yikm
j k m
= y jk 1m
i k m
ijkm
Tepikm U im yikm 0
Tppikm U im yikm 0
Tsp
= (1 / 2 )
i1 m
Tsp ik
+ 1m
Tep ikm +
z ij 1 m
ijm
Trsp
ikm
Tsp
ikm
Tep
ijm z ijk
(1 / 2 )
ikm
i
ikm
k'
+ 1m
+ (1 / 2 )
ijm
k"
z ijkm
ijm
z ijk
i"
Tc Trppikm
i'
Trep
+ +1m
i '1
i ' k '1
k'
im =
imRpim Tppikm
k
k
Rpim Trppikm
im Tppikm
k
k
Trpp
ikm
k"
i 'l =
' i ' lRp' i ' l Trpp ' i ' k ' l = Rpi ' m Trppi ' km
k'
k'
' i ' lRp ' i ' l Trpp ' i ' k ' l + " i" lRp" i" l Trpp" i" k " l = Rpim Trppikm
k'
k"
10
tanks
tanks
L41
L11
S3
S2
S1
LM-2,1
S5
S4
L12
SM-2
L42
stage 1
stage 2
stage 3
SM-1
SM
LM-2,2
stage 4
stage 5
stage M
k'
Sfi '1i ' mRpi ' m Trppi ' km = Rp ' i '1 Trpp ' i ' k '1
k
k'
k"
k"
11
iMRpiM TrppikM
k
Tc
k"
k"
k'
Inventory breakpoints
Tppikm
Tspikm
Trppikm+1
Trspikm+1
I0im
Tspikm
Tepikm
Trppikm+1
stage m+1
Trspikm+1
Trepikm+1
Inventory
level
Tppikm
stage m
Tepikm
Trepikm+1
Inventory
I1im
level
I2im
I3im
time
I1im
I0im
I2im
I3im
time
12
Example: two-stage
Stage 1
Stage 2
I0 (B)
I2 (B)
I0 (C)= I1(C)
I3 (B)
I3 (C)
Inventory
I3 (A)
I0 (A)
I1 (A)
I2 (C)
I1 (B)
I1 (A)
Time
Inventory constraints
Five categories
Feed inventory of grades charged into only
L11 or L12
Feed inventory of common grades charged
into both L11 and L12
Inventory
y between any
y two stages
g
Inventory between a stage and a parallel line
Inventory between a parallel line and a stage
13
tanks
tanks
L41
L11
S3
S2
S1
LM-2,1
S5
S4
L12
L42
stage 1
stage 2
stage 3
SM-1
SM-2
SM
LM-2,2
stage 4
stage 5
stage M
Inventory breakpoints
Tppikm
Tspikm
stage m
Tepikm
Trppikm+1
Trspikm+1
Tepikm
Trppikm+1
stage m+1
Trspikm+1
Trepikm+1
Inventory
Trepikm+1
Inventory
level
level
I1im
I2im
I1im
I0im
Tppikm
Tspikm
time
I3im
I0im
I2im
time
I3im
14
+ (imRp
pim Rp
pim + 1) max 0, Treppikm + 1 Tsppikm (1 qqsim) (1 qqeim)
k
Trpp
Tpp
qeim
ikm + 1
ikm
(1 qeim)
Inventory
level
I1 (B)
I3 ((B))
I0 (B)
I2 (B)
Time
15
Trsp'
i ' k '1
k'
I 2i ' = I1i '+( Fi ' Rp' i '1 ) Trpp 'i ' k '1
k'
Trsp"
i " k "1
k"
Trppik
Line 1
Trepik
Trppik
Trspik
Line 2
Trepik
Trspik
Trepik
Trppik
Trspik
Trepik
Inventory
level
I1im
I3im
I0im
I2im
I4im
I5im
time
16
I1i = I 0i + Fi qi
Trsp '
i ' k '1
k'
I 2i = I1i + qi (Fi -Rp' i '1) Trpp ' i ' k '1 + (1 qi )(Fi -Rp" i"1) Trpp" i" k "1
k'
k"
I 3i = I 2i + qiFi Trsp" i" k "1 Trep' i ' k '1 + (1 qi ) Fi Trsp ' i ' k '1 Trep" i" k "1
k'
k"
k'
k"
I 4i = I 3i + (1-qi )(Fi -Rp' i '1) Trpp ' i ' k '1 + qi (Fi -Rp" i"1) Trpp" i " k "1
k'
k"
WpiM Td Qd i
zijkm
Tc
j'
k'
Ctr"i" j"l
i"
j" k "
z"i" j"k"l
Tc
Im axil + Im axim
Cinvi
Tc
i
l m
17
Fb =
Td = 1000 hrs
5237
Qdi = 8414
975
975
21
3
28 m /hr
33
m3
350
500 $/m3
250
250
Ctr = $3500
Cinv = 5 $/m3
UiI = 800 m3
18
19
Qi =
6719.5
9888.2 m3
1488.6
2186.6
profit
Tc
SfB1
B1
= 7296 $/hr
= 6.1
= 61.37 hrs
= 0.939
= 1.0
20
10.8
Fb = 19.67 m3/hr
8.64
21
Results
With and without sloping (3p_2s problem)
Without sloping
With sloping
Profit ($/hr)
448
402
Cycle time(hr)
96
122
CPU time(sec)
1.5
8.5
4P3S
5P3S
6P3S
8P3S
10P3S
48
66
98
154
252
Continuous variables
476
717
1112
1848
3436
No. of equations
606
855
1231
1621
2609
13.2
19.4
68.3
372.3
648.1
1135
1609
2388
3667
6251
88669
Code length
15397
Derivative pool
73
124
232
77
115
Constant pool
22
27
28
26
33
22
6/14/2010
Pulp
P
Paper
machine
Wi d
Winder
Jumbo reels
Sh t
Sheeter
Cut
rolls
Sheets
Major Decisions:
Order allocation to various mills
Sequencing at each mill
Trimming
Dispatch Schedule
6/14/2010
G1
G2
sheets
Winder 1
Paper
Machine 1
Wrapper 1
rolls
G3
Palletizer
Offline
coater
Rewinder
G2
Wrapper 2
G3
Paper
Machine 2
rolls
Winder 2
G4
Sheeter 2
Simplifying assumptions:
Paper machine slowest
Capacity constraints for
shared resources
Level-1
Level-2
Simultaneous order
assignment and sequencing
Simultaneous trim loss
minimization and
optimal sequencing of
cutting patterns
Level-1b
Level-2a
Level-2b
optimal sequencing of
cutting patterns
6/14/2010
Proactive measures
High trim losses
Level-1
Time slots
Level-2
Higher tardiness
compared to earliness
Conservative
production rates
m=1
G3
J6 J7
J11 J12
k=1
k=2
k=3
G1
m=2
G2
J1 J3 J5
G2
G3
k=1
k=2
k=3
k=4
G1
J16 J17
Changeover
time
Production time
k-1
k=4
time
Iteration on maximum number of time slots (NK)
6/14/2010
Qpr Cpr
mk
gm
z gmk +
Q Ctr
j
+ E j Q j Ce m y jmk +
j
Tard
s.t.
y
m
g'
jmk
=1
y jmk
j k
Q j Ctard j
j
jm
z
m k >1 g
gmk
g'
j where OMj,m=1
g, m, k
z gm ( k 1)
gmk
m, k >1
j
Qprmk
Tpp mk = z gmk
m, k
Rp gm
g
Qprmk Qm gm z gmk m,
m k
g
m, k=1
m, k >1
g'
g'
Ts j = Td j Trans jm y jmk
m
E j Ts j Tmk y jmk
m
6/14/2010
G2
J3
J7
J13
G3
G4
J1 J2
J8
J10
J4
J18
G5
J9
J12
J15
J20
4 slots
5 slots 6 slots
72,26,088 69,64,937 122,48,810
2,04,170 4,71,820
8,80,620
2,10,455 9,30,072 14,49,588
1,24,180 8,77,822 158,60,900
24,400
31,200
28,000
77,89,293 92,75,852 304,67,920
6/14/2010
G3
G4
J4
1
10
Mumbai
(M2)
G5
12
G2
J13
2
G1
G3
30
G3
G4
G1
J11 J19
9
J5 J6
25
14.5
J10 J18
0
22
J9 J15 J20
J7
0
Chennai
(M4)
16
G2
Mumbai
(M3)
G2
J3
J1 J2
24.5
J12
18
29.5
G5
25
time (days)
G5
J9 J23 J27
J12
J15
J20 J34
6/14/2010
production cost
transportation cost
earliness cost
tardiness cost
gradechange cost
total cost
G4
G1
J4 J25 J1 J2
1
G2
J24
0
G2
J7
6.7
27.6
G5
J9 J15 J20
12.1
G3
25.1
33.4
G5
G3
30.6
J8J22J26J31
G4
J3 J13 J21
2
G2
J28
J6 J32 J33
14
G2
Mumbai
(M3)
Chennai
(M4)
G3
G1
25.1
32
G2
G5
20
29.1 32.3
time (days)
6/14/2010
Level-2 Results
Orders width quantity due date transport tardiness
time
penalty
i
bi (m) Qi(ton) Tdi(days) tri(days) Cti
I1
I2
I3
I4
I5
I6
I7
0.90
0.80
0.70
1.15
1.05
1.20
1.00
140
130
125
100
95
80
50
7
8
10
10
3
3
4
2
1
1
3
2
0
3
8
4
10
4
10
6
2
Sequential approach
% Trimloss
Trim cost
Under production cost
Knife change cost
Tardiness cost
The total cost
1.031174
$9899.2
$9905.6
$700
$2438.4
$22943.2
22 Feasible Cutting
g
Patterns
Simultaneous
1.03134
$9900.8
$9907.2
$1000
$1015.5
$21823.5