
1

PLANTWIDE CONTROL
How to design the control system for a
complete plant in a systematic manner

Sigurd Skogestad

Department of Chemical Engineering
Norwegian University of Science and Technology (NTNU)
Trondheim, Norway



Petrobras, March 2010

2





Summary:
Systematic procedure for plantwide control
1. Start top-down with economics:
Optimize steady-state operation.
Identify active constraints (these should normally be tightly controlled to maximize profit).
For the remaining degrees of freedom: select controlled variables c based on self-optimizing control.
2. Regulatory control I: Decide how to move mass through the plant:
Where to set the throughput (usually: at the feed).
Propose a local-consistent inventory (level) control structure.
3. Regulatory control II: Bottom-up stabilization of the plant:
Control variables to stop drift (sensitive temperatures, pressures, ...).
Pair variables to avoid interaction and saturation.
4. Finally: Make the link between top-down and bottom-up.
Advanced control system (MPC):
CVs: active constraints and self-optimizing economic variables + look after variables in the layer below (e.g., avoid saturation).
MVs: setpoints to the regulatory control layer.
Coordinates within units and possibly between units.



3





Summary and references
The following paper summarizes the procedure:
S. Skogestad, ``Control structure design for complete chemical plants'',
Computers and Chemical Engineering, 28 (1-2), 219-234 (2004).
There are many approaches to plantwide control as discussed in the
following review paper:
T. Larsson and S. Skogestad, ``Plantwide control: A review and a new
design procedure'', Modeling, Identification and Control, 21, 209-240
(2000).

4





S. Skogestad ``Plantwide control: the search for the self-optimizing control structure'', J. Proc. Control, 10, 487-507 (2000).
S. Skogestad, ``Self-optimizing control: the missing link between steady-state optimization and control'', Comp.Chem.Engng., 24, 569-575 (2000).
I.J. Halvorsen, M. Serra and S. Skogestad, ``Evaluation of self-optimising control structures for an integrated Petlyuk distillation column'', Hung. J.
of Ind.Chem., 28, 11-15 (2000).
T. Larsson, K. Hestetun, E. Hovland, and S. Skogestad, ``Self-Optimizing Control of a Large-Scale Plant: The Tennessee Eastman Process'', Ind.
Eng. Chem. Res., 40 (22), 4889-4901 (2001).
K.L. Wu, C.C. Yu, W.L. Luyben and S. Skogestad, ``Reactor/separator processes with recycles-2. Design for composition control'', Comp. Chem.
Engng., 27 (3), 401-421 (2003).
T. Larsson, M.S. Govatsmark, S. Skogestad, and C.C. Yu, ``Control structure selection for reactor, separator and recycle processes'', Ind. Eng.
Chem. Res., 42 (6), 1225-1234 (2003).
A. Faanes and S. Skogestad, ``Buffer Tank Design for Acceptable Control Performance'', Ind. Eng. Chem. Res., 42 (10), 2198-2208 (2003).
I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, ``Optimal selection of controlled variables'', Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
A. Faanes and S. Skogestad, ``pH-neutralization: integrated process and control design'', Computers and Chemical Engineering, 28 (8), 1475-1487
(2004).
S. Skogestad, ``Near-optimal operation by self-optimizing control: From process control to marathon running and business systems'', Computers and
Chemical Engineering, 29 (1), 127-137 (2004).
E.S. Hori, S. Skogestad and V. Alstad, ``Perfect steady-state indirect control'', Ind.Eng.Chem.Res, 44 (4), 863-867 (2005).
M.S. Govatsmark and S. Skogestad, ``Selection of controlled variables and robust setpoints'', Ind.Eng.Chem.Res, 44 (7), 2207-2217 (2005).
V. Alstad and S. Skogestad, ``Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables'', Ind.Eng.Chem.Res,
46 (3), 846-853 (2007).
S. Skogestad, ``The dos and don'ts of distillation columns control'', Chemical Engineering Research and Design (Trans IChemE, Part A), 85 (A1), 13-
23 (2007).
E.S. Hori and S. Skogestad, ``Selection of control structure and temperature location for two-product distillation columns'', Chemical Engineering
Research and Design (Trans IChemE, Part A), 85 (A3), 293-306 (2007).
A.C.B. Araujo, M. Govatsmark and S. Skogestad, ``Application of plantwide control to the HDA process. I Steady-state and self-optimizing
control'', Control Engineering Practice, 15, 1222-1237 (2007).
A.C.B. Araujo, E.S. Hori and S. Skogestad, ``Application of plantwide control to the HDA process. Part II Regulatory control'', Ind.Eng.Chem.Res,
46 (15), 5159-5174 (2007).
V. Kariwala, S. Skogestad and J.F. Forbes, ``Reply to ``Further Theoretical Results on Relative Gain Array for Norm-Bounded Uncertain Systems'''',
Ind.Eng.Chem.Res, 46 (24), 8290 (2007).
V. Lersbamrungsuk, T. Srinophakun, S. Narasimhan and S. Skogestad, ``Control structure design for optimal operation of heat exchanger
networks'', AIChE J., 54 (1), 150-162 (2008). DOI 10.1002/aic.11366
T. Lid and S. Skogestad, ``Scaled steady state models for effective on-line applications'', Computers and Chemical Engineering, 32, 990-999 (2008).
T. Lid and S. Skogestad, ``Data reconciliation and optimal operation of a catalytic naphtha reformer'', Journal of Process Control, 18, 320-331 (2008).
E.M.B. Aske, S. Strand and S. Skogestad, ``Coordinator MPC for maximizing plant throughput'', Computers and Chemical Engineering, 32, 195-204
(2008).
A. Araujo and S. Skogestad, ``Control structure design for the ammonia synthesis process'', Computers and Chemical Engineering, 32 (12), 2920-
2932 (2008).
E.S. Hori and S. Skogestad, ``Selection of controlled variables: Maximum gain rule and combination of measurements'', Ind.Eng.Chem.Res, 47 (23),
9465-9471 (2008).
V. Alstad, S. Skogestad and E.S. Hori, ``Optimal measurement combinations as controlled variables'', Journal of Process Control, 19, 138-148 (2009)
E.M.B. Aske and S. Skogestad, ``Consistent inventory control'', Submitted (2009).

5





Plantwide control intro course: Contents
Overview of plantwide control
Selection of primary controlled variables based on economics: The link
between the optimization (RTO) and the control (MPC; PID) layers
- Degrees of freedom
- Optimization
- Self-optimizing control
- Applications
- Many examples
Where to set the production rate and bottleneck
Design of the regulatory control layer ("what more should we
control")
- stabilization
- secondary controlled variables (measurements)
- pairing with inputs
- controllability analysis
- cascade control and time scale separation.
Design of supervisory control layer
- Decentralized versus centralized (MPC)
- Design of decentralized controllers: Sequential and independent design
- Pairing and RGA-analysis
Summary and case studies

6





Outline
Control structure design (plantwide control)
A procedure for control structure design
I Top Down
Step 1: Degrees of freedom
Step 2: Operational objectives (optimal operation)
Step 3: What to control ? (primary CVs) (self-optimizing control)
Step 4: Where to set the production rate? (Inventory control)
II Bottom Up
Step 5: Regulatory control: What more to control (secondary CVs) ?
Step 6: Supervisory control
Step 7: Real-time optimization
Case studies

7





Main message
1. Control for economics (Top-down steady-state arguments)
Primary controlled variables c = y_1:
Control active constraints
For remaining unconstrained degrees of freedom: Look for self-optimizing variables

2. Control for stabilization (Bottom-up; regulatory PID control)
Secondary controlled variables y_2 (inner cascade loops)
Control variables which otherwise may drift

In both cases: Control sensitive variables (with a large gain)!



8





Idealized view of control
(Ph.D. control)

9





Practice: Tennessee Eastman challenge
problem (Downs, 1991)
(PID control)

10





How do we design a control system for a
complete chemical plant?
Where do we start?
What should we control? and why?
etc.
etc.

11





Alan Foss (Critique of chemical process control theory, AIChE
Journal, 1973):

The central issue to be resolved ... is the determination of control system
structure. Which variables should be measured, which inputs should be
manipulated and which links should be made between the two sets?
There is more than a suspicion that the work of a genius is needed here,
for without it the control configuration problem will likely remain in a
primitive, hazily stated and wholly unmanageable form. The gap is
present indeed, but contrary to the views of many, it is the theoretician
who must close it.

Carl Nett (1989):
Minimize control system complexity subject to the achievement of accuracy
specifications in the face of uncertainty.

12





Control structure design
Not the tuning and behavior of each control loop,
But rather the control philosophy of the overall plant with emphasis on
the structural decisions:
Selection of controlled variables (outputs)
Selection of manipulated variables (inputs)
Selection of (extra) measurements
Selection of control configuration (structure of overall controller that
interconnects the controlled, manipulated and measured variables)
Selection of controller type (LQG, H-infinity, PID, decoupler, MPC etc.).
That is: Control structure design includes all the decisions we need to
make to get from ``PID control'' to ``Ph.D. control''.

13





Process control:
Plantwide control = Control structure
design for complete chemical plant
Large systems
Each plant is usually different -> modeling is expensive
Slow processes -> no problem with computation time
Structural issues important
What to control? Extra measurements, pairing of loops

Previous work on plantwide control:
Page Buckley (1964) - Chapter on overall process control (still industrial practice)
Greg Shinskey (1967) - process control systems
Alan Foss (1973) - control system structure
Bill Luyben et al. (1975- ) - case studies; snowball effect
George Stephanopoulos and Manfred Morari (1980) - synthesis of control structures for chemical processes
Ruel Shinnar (1981- ) - dominant variables
Jim Downs (1991) - Tennessee Eastman challenge problem
Larsson and Skogestad (2000): Review of plantwide control


14





Control structure selection issues are identified as important also in
other industries.

Professor Gary Balas (Minnesota) at ECC03 about flight control at Boeing:

The most important control issue has always been to select the right
controlled variables --- no systematic tools used!

15





Main objectives of the control system
1. Stabilization
2. Implementation of acceptable (near-optimal) operation
ARE THESE OBJECTIVES CONFLICTING?

Usually NOT
Different time scales
Stabilization: fast time scale
Stabilization doesn't use up any degrees of freedom:
Reference value (setpoint) available for layer above
But it uses up part of the time window (frequency range)


16





Dealing with complexity: Process control
Main simplification: Hierarchical decomposition

[Hierarchy diagram, top to bottom:]
RTO: objective Min J (economics); MV = y_1s (passes c_s = y_1s down to MPC)
MPC: follow path (+ look after); CV = y_1 (+ u); MV = y_2s (passes y_2s down to PID)
PID: stabilize + avoid drift; CV = y_2; MV = u (valves)

The controlled variables (CVs) interconnect the layers.

17





Example: Bicycle riding
Note: design starts from the bottom.

Regulatory control:
First need to learn to stabilize the bicycle
CV = y_2 = tilt of bike
MV = body position

Supervisory control:
Then need to follow the road.
CV = y_1 = distance from right-hand side
MV = y_2s
Usually a constant setpoint policy is OK, e.g. y_1s = 0.5 m

Optimization:
Which road should you follow?
Temporary (discrete) changes in y_1s

Hierarchical decomposition

21





Summary: The three layers
Optimization layer (RTO; steady-state nonlinear model):
Identifies active constraints and computes optimal setpoints for the primary controlled variables (y_1).
Supervisory control (MPC; linear model with constraints):
Follows the setpoints for y_1 (usually constant) by adjusting the setpoints for the secondary variables (MV = y_2s).
Looks after other variables (e.g., avoids saturation of the inputs u used in the regulatory layer).
Regulatory control (PID):
Stabilizes the plant and avoids drift, in addition to following the setpoints for y_2. MV = valves (u).
(A minimal code sketch of this layered cascade is given below.)

Problem definition and overall control objectives (y_1, y_2) start from the top.
Design starts from the bottom.

A good example is bicycle riding:

Regulatory control:
First you need to learn how to stabilize the bicycle (y_2).
Supervisory control:
Then you need to follow the road. Usually a constant setpoint policy is OK, for example, stay
y_1s = 0.5 m from the right-hand side of the road (in this case the "magic" self-optimizing
variable is y_1 = distance to the right-hand side of the road).
Optimization:
Which road (route) should you follow?
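To make the time-scale separation between the layers concrete, here is a minimal simulation sketch (my own illustration, not from the slides): a slow outer PI loop plays the supervisory role and keeps the primary CV y_1 at its setpoint by adjusting the setpoint y_2s of a fast inner PI loop, which plays the regulatory role and moves the valve u. The two first-order "process" lags and all tuning values are made-up placeholders.

```python
# Cascade sketch: supervisory (outer) loop adjusts the setpoint of the
# regulatory (inner) loop, which moves the valve.  All numbers are invented.
def pi_controller(Kc, tauI, dt):
    integral = 0.0
    def step(error):
        nonlocal integral
        integral += error * dt
        return Kc * (error + integral / tauI)   # PI law: u = Kc*(e + (1/tauI)*int e dt)
    return step

dt = 0.1
y1, y2 = 0.0, 0.0
outer = pi_controller(Kc=0.5, tauI=20.0, dt=dt)   # supervisory layer: MV = y_2s
inner = pi_controller(Kc=2.0, tauI=2.0,  dt=dt)   # regulatory layer:  MV = u (valve)
y1_sp = 1.0                                       # setpoint from the layer above

for _ in range(3000):
    y2_sp = outer(y1_sp - y1)        # outer loop sets the inner setpoint y_2s
    u = inner(y2_sp - y2)            # inner loop moves the valve u
    y2 += dt / 1.0 * (u - y2)        # fast 'secondary' dynamics (tau = 1)
    y1 += dt / 10.0 * (y2 - y1)      # slow 'primary' dynamics  (tau = 10)

print(round(y1, 3), round(y2, 3))    # y1 has settled close to its setpoint of 1.0
```

Because the inner loop is roughly ten times faster than the outer one, each layer can be tuned almost independently, which is exactly the simplification the hierarchical decomposition aims for.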


22





Stepwise procedure plantwide control
I. TOP-DOWN
Step 1. DEGREES OF FREEDOM
Step 2. OPERATIONAL OBJECTIVES
Step 3. WHAT TO CONTROL? (primary CVs c = y_1)
Step 4. PRODUCTION RATE

II. BOTTOM-UP (structure control system):
Step 5. REGULATORY CONTROL LAYER (PID)
Stabilization
What more to control? (secondary CVs y_2)
Step 6. SUPERVISORY CONTROL LAYER (MPC)
Decentralization
Step 7. OPTIMIZATION LAYER (RTO)
Can we do without it?



23





Control structure design procedure
I Top Down
Step 1: Identify degrees of freedom (MVs)
Step 2: Define operational objectives (optimal operation)
Cost function J (to be minimized)
Operational constraints
Step 3: Select primary controlled variables c = y_1 (CVs)
Step 4: Where to set the production rate? (Inventory control)
II Bottom Up
Step 5: Regulatory / stabilizing control (PID layer)
What more to control (y_2; local CVs)?
Pairing of inputs and outputs
Step 6: Supervisory control (MPC layer)
Step 7: Real-time optimization

Understanding and using this procedure is the most important part of this course!
[Block diagram: MVs -> process -> outputs y_1, y_2]

24





Step 1. Degrees of freedom (DOFs) for operation (N_valves):

To find all operational (dynamic) degrees of freedom:
Count valves! (N_valves)
"Valves" also includes adjustable compressor power, etc.
Anything we can manipulate!


25





Steady-state degrees of freedom (DOFs)
IMPORTANT! No. of steady-state CVs = No. of steady-state DOFs

Three methods to obtain the no. of steady-state degrees of freedom (N_ss):

1. Equation-counting
N_ss = no. of variables - no. of equations/specifications
Very difficult in practice (not covered here)
2. Valve-counting (easier!)
N_ss = N_valves - N_0ss - N_specs
N_0ss = variables with no steady-state effect
3. Typical number for some units (useful for checking!)

CV = controlled variable (c)

26





Steady-state degrees of freedom (N_ss):
2. Valve-counting

N_valves = no. of dynamic (control) DOFs (valves)
N_ss = N_valves - N_0ss - N_specs: no. of steady-state DOFs
N_0ss = N_0y + N_0,valves: no. of variables with no steady-state effect
N_0,valves: no. of purely dynamic control DOFs
N_0y: no. of controlled variables (liquid levels) with no steady-state effect
N_specs: no. of equality specifications (e.g., given pressure)
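The valve-counting rule is simple enough to encode directly. Here is a tiny helper (a sketch; the function and argument names are my own) that reproduces the worked examples on the following slides.

```python
def steady_state_dofs(n_valves, n0_levels=0, n0_dyn_valves=0, n_specs=0):
    """Valve-counting rule: N_ss = N_valves - N_0ss - N_specs,
    with N_0ss = N_0y (levels with no steady-state effect) + N_0,valves."""
    n0_ss = n0_levels + n0_dyn_valves
    return n_valves - n0_ss - n_specs

# Distillation column with given feed and pressure: 6 valves, 2 levels, 2 specs
assert steady_state_dofs(n_valves=6, n0_levels=2, n_specs=2) == 2
# Heat-integrated distillation process: 11 valves (incl. feed), 4 levels
assert steady_state_dofs(n_valves=11, n0_levels=4) == 7
# Heat exchanger with bypasses: 3 valves, 2 of them purely dynamic
assert steady_state_dofs(n_valves=3, n0_dyn_valves=2) == 1
```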



27






Distillation column with given feed and pressure:
N_valves = 6, N_0y = 2, N_specs = 2, N_ss = 6 - 2 - 2 = 2

28





Heat-integrated distillation process

N_valves = 11 (incl. feed), N_0y = 4 (levels), N_ss = 11 - 4 = 7


30





Heat exchanger with bypasses (CW)

N_valves = 3, N_0,valves = 2 (of 3), N_ss = 3 - 2 = 1


32





Steady-state degrees of freedom (N_ss):
3. Typical number for some process units
each external feedstream: 1 (feed rate)
splitter: n-1 (split fractions), where n is the number of exit streams
mixer: 0
compressor, turbine, pump: 1 (work/speed)
adiabatic flash tank: 0*
liquid-phase reactor: 1 (holdup/volume of reactant)
gas-phase reactor: 0*
heat exchanger: 1 (duty or net area)
column (e.g. distillation), excluding heat exchangers: 0* + no. of sidestreams
pressure*: add 1 DOF at each extra place you set the pressure (using an extra valve, compressor or pump), e.g. in an adiabatic flash tank, gas-phase reactor or column
(a small code sketch applying these numbers follows after this list)

* Pressure is normally assumed to be given by the surrounding process and is then not a degree of freedom.
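As a check, the typical numbers can be tabulated and summed unit by unit. The sketch below (my own encoding of the list above; the dictionary keys are my own names) reproduces the count of 7 for the heat-integrated distillation process that appears a few slides later.

```python
# Typical steady-state DOFs per unit (from the list above).
TYPICAL_NSS = {
    "external_feed": 1,          # feed rate
    "splitter_exit_stream": 1,   # n-1 split fractions, i.e. 1 per extra exit stream
    "mixer": 0,
    "compressor_turbine_pump": 1,
    "adiabatic_flash": 0,
    "liquid_phase_reactor": 1,
    "gas_phase_reactor": 0,
    "heat_exchanger": 1,
    "column_excl_hex": 0,        # plus 1 per sidestream
    "sidestream": 1,
    "extra_pressure": 1,         # each extra place where the pressure is set
}

# Heat-integrated distillation process:
# 1 feed + 2 columns + 2 column pressures + 1 sidestream + 3 heat exchangers = 7
units = (["external_feed"] + 2 * ["column_excl_hex"] + 2 * ["extra_pressure"]
         + ["sidestream"] + 3 * ["heat_exchanger"])
assert sum(TYPICAL_NSS[u] for u in units) == 7
```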

33





Heat exchanger with bypasses (CW)
Typical number: heat exchanger N_ss = 1

34






Distillation column with given feed and pressure:
Typical number: N_ss = 0 (distillation) + 2*1 (heat exchangers) = 2

35





Heat-integrated distillation process

Typical number: N_ss = 1 (feed) + 2*0 (columns) + 2*1 (column pressures) + 1 (sidestream) + 3 (hex) = 7

36





HDA process
[Flowsheet: mixer, FEHE, furnace, PFR, quench, separator, compressor, cooler, stabilizer, benzene column, toluene column.
Streams: H2 + CH4 feed, toluene feed, toluene recycle, benzene product, CH4, diphenyl, purge (H2 + CH4).]

37





HDA process: steady-state degrees of freedom
Assume given column pressures.
Valves (numbered 1-14 on the flowsheet):
feed: 1, 2
heat exchangers (hex): 3, 4, 6
splitters: 5, 7
compressor: 8
distillation columns: 2 each (valves 9-14)
Conclusion: 14 steady-state DOFs

38









Check that there are enough manipulated variables (DOFs) - both
dynamically and at steady-state (step 2)
Otherwise: Need to add equipment
extra heat exchanger
bypass
surge tank


39





Outline
Introduction
Control structure design (plantwide control)
A procedure for control structure design
I Top Down
Step 1: Degrees of freedom
Step 2: Operational objectives (optimal operation)
Step 3: What to control ? (self-optimizing control)
Step 4: Where to set the production rate? (inventory control)
II Bottom Up
Step 5: Regulatory control: What more to control ?
Step 6: Supervisory control
Step 7: Real-time optimization
Case studies

40





Step 2. Define optimal operation (economics)
What are we going to use our degrees of freedom for?
Define a scalar cost function J(u_0, x, d)
u_0: degrees of freedom
d: disturbances
x: states (internal variables)
Typical cost function:
J = cost feed + cost energy - value of products

Optimal operation for given d:
min_u_ss J(u_ss, x, d)
subject to:
Model equations: f(u_ss, x, d) = 0
Operational constraints: g(u_ss, x, d) <= 0
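To make Step 2 concrete, here is a minimal numerical sketch of the steady-state optimization using scipy. The two degrees of freedom, the toy "model" hidden inside the cost function, the prices and the capacity limit are all invented placeholders, not a real process model.

```python
import numpy as np
from scipy.optimize import minimize

p_F, p_V, p_P = 10.0, 2.0, 30.0   # assumed prices: feed, energy (boilup), product

def cost(u, d):
    """J = cost feed + cost energy - value of products (to be minimized)."""
    F, V = u
    recovery = 1.0 - np.exp(-V / F)      # toy model: more boilup -> better recovery
    return p_F * F + p_V * V - p_P * d * F * recovery

def capacity(u, d):
    """Operational constraint g(u, d) <= 0: boilup (flooding) limit V <= V_max."""
    F, V = u
    return V - 4.0

d = 1.0                                   # nominal disturbance (e.g. feed quality)
res = minimize(cost, x0=[1.0, 1.0], args=(d,), method="SLSQP",
               bounds=[(0.1, 2.0), (0.0, 6.0)],
               constraints=[{"type": "ineq", "fun": lambda u: -capacity(u, d)}])
print("u_opt(d) =", res.x, "  J_opt(d) =", res.fun)
```

For these made-up numbers the optimum lies at active constraints (feed at its maximum and boilup at the flooding limit), which anticipates the discussion of active-constraint control in Step 3.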

41





Optimal operation distillation column
Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V
Cost to be minimized (economics):

J = -P, where P = p_D*D + p_B*B - p_F*F - p_V*V
(value of products - cost of feed - cost of energy (heating + cooling))

Constraints:
Purity D: for example, x_D,impurity <= max
Purity B: for example, x_B,impurity <= max
Flow constraints: min <= D, B, L, etc. <= max
Column capacity (flooding): V <= V_max, etc.
Pressure: 1) p given; 2) p free: p_min <= p <= p_max
Feed: 1) F given; 2) F free: F <= F_max

Optimal operation: Minimize J with respect to the steady-state DOFs.
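Collecting the cost and constraints above into one statement (for the case of given p and F, and writing x_D,imp and x_B,imp for the impurity fractions), the optimization problem reads:

```latex
\min_{L,\;V}\; J = -\left(p_D D + p_B B - p_F F - p_V V\right)
\quad\text{s.t.}\quad
x_{D,\mathrm{imp}} \le x_{D,\mathrm{imp}}^{\max},\;\;
x_{B,\mathrm{imp}} \le x_{B,\mathrm{imp}}^{\max},\;\;
V \le V_{\max},\;\;
\min \le D, B, L \le \max ,
```

together with the steady-state column model equations; when p or F are free, the bounds p_min <= p <= p_max and F <= F_max are added and the corresponding variable becomes an extra degree of freedom.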

42





Optimal operation
Minimize J = cost feed + cost energy - value of products
Two main cases (modes), depending on market conditions:

1. Given feed
The amount of products is then usually indirectly given, and J = cost energy.
Optimal operation is then usually unconstrained:
"maximize efficiency (energy)"
Control: Operate at the optimal trade-off (not obvious how to do this and what to control)

2. Feed free
Products are usually much more valuable than the feed, and energy costs are small.
Optimal operation is then usually constrained:
"maximize production"
Control: Operate at the bottleneck (obvious)

43





Comments on optimal operation

Do not forget to include the feed rate as a degree of freedom!
For an LNG plant it may be optimal to run at max. compressor power or max. compressor speed, and adjust the feed rate of LNG.
For a paper machine it may be optimal to run at max. drying and adjust the feed rate of paper (the speed of the paper machine) to meet the spec!

Control at the bottleneck:
see later: Where to set the production rate



44





QUIZ
1. Degrees of freedom (dynamic, steady-state)?
2. Expected active constraints?
3. Proposed control structure?

Gas-phase process (e.g. ammonia, methanol):
Feed (ca. 51% A, 49% B) -> heating -> reactor (A + B -> C) -> cooling -> flash separator
Liquid product (C) from the separator; gas recycle via compressor; purge (mostly A, some B, trace C)

45





Implementation of optimal operation
Optimal operation for given d*:
min_u J(u, x, d)
subject to:
Model equations: f(u, x, d) = 0
Operational constraints: g(u, x, d) <= 0

-> u_opt(d*)

Problem: Usually we cannot keep u_opt constant, because the disturbances d change.
How should we adjust the degrees of freedom (u)?

46





Solution I: Optimal feedforward
With a perfect model and measurements of all disturbances d, the degrees of freedom u can be continuously updated using an online optimizer.
Problem: UNREALISTIC!
Feedforward problems:
1. Lack of measurements of d
2. Sensitive to model error

47





Solution II: Optimizing feedback control
Estimate d from the measurements y and recompute u_opt(d).
Problem: TOO COMPLICATED!
Requires a detailed model and a description of the uncertainty.

48





Solution III (practical!): Hierarchical decomposition with separate layers
When a disturbance d occurs, the degrees of freedom (u) are updated indirectly to keep the controlled variables c at their setpoints.
The controlled variables c link the optimization and control layers.

49





Self-Optimizing Control
= Solution III with constant setpoints
(when constant setpoints are OK)

50





Formal definition
Self-optimizing control is said to occur when we can achieve an acceptable loss (in comparison with
truly optimal operation) with constant setpoint values for the controlled variables, without the need
to reoptimize when disturbances occur.
Reference: S. Skogestad, ``Plantwide control: The search for the self-optimizing control structure'',
Journal of Process Control, 10, 487-507 (2000).

Acceptable loss => self-optimizing control

[Block diagram: the setpoint c_s is compared with the measured c_m to give the error e; the controller adjusts u(d), which acts on the process (with disturbance d); c = f(y), and c_m = c + n, where n is the implementation/measurement error.]

51





How does self-optimizing control (Solution III) work?
When disturbances d occur, the controlled variable c deviates from its setpoint c_s.
The feedback controller changes the degree of freedom u to u_FB(d) to keep c at c_s.
Near-optimal operation / acceptable loss (self-optimizing control) is achieved if
u_FB(d) ≈ u_opt(d)
or, more generally, J(u_FB(d)) ≈ J(u_opt(d)).
Of course, the variation of u_FB(d) is different for different CVs c.
We need to look for variables for which J(u_FB(d)) ≈ J(u_opt(d)), i.e. for which the loss
Loss = J(u_FB(d)) - J(u_opt(d)) is small.
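Written out in the slide's notation, the quantity to keep small for the expected disturbances is

```latex
\mathrm{Loss}(d) \;=\; J\!\left(u_{\mathrm{FB}}(d),\, d\right) \;-\; J\!\left(u_{\mathrm{opt}}(d),\, d\right) \;\ge\; 0 ,
```

and a controlled variable c gives self-optimizing control when this loss (including the effect of the implementation error n) stays acceptable for all expected disturbances d, with the setpoint c_s held constant.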

52





Remarks
Self-optimizing control provides a trade-off between complexity of control system
and optimality.
Old idea (Morari et al., 1980):
We want to find a function c of the process variables which when held constant,
leads automatically to the optimal adjustments of the manipulated variables, and
with it, the optimal operating conditions.

The term self-optimizing control is similar to self-regulation, where acceptable
dynamic behavior is attained by keeping manipulated variables constant.

53





Relation with Similar Techniques
Broadly, self-optimizing control can be seen as a measurement-based (or feedback-based)
optimization technique.

Some related ideas are:
Control of necessary conditions of optimality (Profs. Dominique Bonvin, B. Srinivasan and co-
workers)
Extremum seeking control (Profs. Miroslav Krstic, Martin Guay and co-workers)

In these techniques, u is updated to drive the gradient of the Lagrange function (obtained
analytically or estimated) to zero.

The gradient of the Lagrange function is a possible self-optimizing variable!

54





Step 3. What should we control (c)?
(primary controlled variables y_1 = c)

Introductory example: Marathon runner
What should we control?

55





Optimal operation - Runner
Cost: J = T (time to finish)
One degree of freedom (u = power)
Optimal operation?


56





Solution 1: Optimizing control
Even getting a reasonable model
requires > 10 PhDs and
the model has to be fitted to each
individual.

Clearly impractical!

Optimal operation - Runner

57





Solution 2: Feedback
(Self-optimizing control)
What should we control?

Optimal operation - Runner

58





Self-optimizing control: Sprinter (100m)
1. Optimal operation of Sprinter, J=T
Active constraint control:
Maximum speed (no thinking required)


Optimal operation - Runner

59





Self-optimizing control: Marathon (40 km)
Optimal operation of marathon runner, J = T
Any self-optimizing variable c (to control at constant setpoint)?
c_1 = distance to leader of race
c_2 = speed
c_3 = heart rate
c_4 = level of lactate in muscles

Optimal operation - Runner

60






Conclusion: Marathon runner


c = heart rate
select one measurement
Simple and robust implementation
Disturbances are indirectly handled by keeping a constant heart rate
May have infrequent adjustment of setpoint (heart rate)
Optimal operation - Runner

61





Example: Cake baking
Objective: Nice-tasting cake with good texture

Degrees of freedom:
u_1 = heat input
u_2 = final time

Disturbances:
d_1 = oven specifications
d_2 = oven door opening
d_3 = ambient temperature
d_4 = initial temperature

Measurements:
y_1 = oven temperature
y_2 = cake temperature
y_3 = cake color

62





Further examples of self-optimizing control
Marathon runner
Central bank
Cake baking
Business systems (KPIs)
Investment portfolio
Biology
Chemical process plants: optimal blending of gasoline

Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives an acceptable loss (self-optimizing control).

63





More on further examples
Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%)
Cake baking. J = nice taste. u = heat input. c = temperature (200 °C)
Business. J = profit. c = key performance indicator (KPI), e.g.
Response time to order
Energy consumption per kg or unit
Number of employees
Research spending
Optimal values obtained by benchmarking
Investment (portfolio management). J = profit. c = fraction of investment in shares (50%)
Biological systems:
Self-optimizing controlled variables c have been found by natural selection
Need to do "reverse engineering":
Find the controlled variables used in nature
From this, possibly identify what overall objective J the biological system has been attempting to optimize


64





Step 3. What should we control (c)?
(primary controlled variables y_1 = c)

Selection of controlled variables c:
1. Control active constraints!
2. Unconstrained variables: Control "magic" self-optimizing variables!

65





1. CONTROL ACTIVE CONSTRAINTS!
The optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy active constraints, g(u, d) = 0 -> c = c_constraint.

1. If c = u = manipulated input (MV):
Implementation is trivial: keep u at the constraint (u_min or u_max).

2. If c = y = output variable (CV):
Use u to control y at the constraint (feedback).
BUT: Need to introduce a back-off (safety margin) from c_constraint.

[Sketch: cost J versus c, with the optimum J_opt at c_opt = c_constraint.]

66





Back-off for active output constraints
a) If the constraint can be violated dynamically (only the average matters):
Required back-off = bias (steady-state measurement error for c)
b) If the constraint cannot be violated dynamically (hard constraint):
Required back-off = bias + maximum dynamic control error

[Sketch: cost J versus c; moving the setpoint a back-off away from c_constraint gives a loss relative to J_opt at c_opt.]
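A tiny sketch (with illustrative numbers only) of how the two back-off cases translate into a setpoint for a constraint of the form c <= c_constraint:

```python
def constraint_setpoint(c_max, bias, max_dyn_error, hard):
    """c_s = c_constraint - back-off for a constraint c <= c_max.
    Soft constraint (only the average matters): back-off = steady-state bias.
    Hard constraint (never violate):            back-off = bias + max dynamic error."""
    backoff = bias + (max_dyn_error if hard else 0.0)
    return c_max - backoff

# 'Squeeze and shift': tighter regulatory control (smaller max_dyn_error)
# lets the setpoint be shifted closer to the constraint, reducing the loss.
print(constraint_setpoint(100.0, bias=1.0, max_dyn_error=5.0, hard=True))   # 94.0
print(constraint_setpoint(100.0, bias=1.0, max_dyn_error=2.0, hard=True))   # 97.0
```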


67





Example: Optimal operation = maximize throughput.
Want tight bottleneck control to reduce the back-off!
[Sketch: flow versus time; the back-off below the bottleneck capacity is lost production.]
Rule for control of hard output constraints: Squeeze and shift!
Reduce the variance (squeeze) and shift the setpoint c_s closer to the constraint to reduce the back-off.

68





Hard constraints: SQUEEZE AND SHIFT
[Figure (after Richalet): histogram of product quality against an off-spec limit. Squeezing the variance (sigma_1 -> sigma_2) allows the operating point to be shifted (Q1 -> Q2) toward the limit; the cost function shows the resulting benefit (W1 -> W2).]

69





SUMMARY: ACTIVE CONSTRAINTS

c_constraint = value of the active constraint
Implementation of active constraints is usually obvious, but may need a back-off (safety limit) for hard output constraints:
c_s = c_constraint - back-off
Want tight control of hard output constraints to reduce the back-off:
Squeeze and shift



70







Optimal operation distillation
Cost to be minimized (economics):
J = -P, where P = p_D*D + p_B*B - p_F*F - p_V*V
(value of products - cost of feed - cost of energy (heating + cooling))

Constraints:
Purity D: for example, x_D,impurity <= max
Purity B: for example, x_B,impurity <= max
Flow constraints: 0 <= D, B, L, etc. <= max
Column capacity (flooding): V <= V_max, etc.

71





Expected active constraints distillation
Valuable product: Purity spec. always active.
Reason: The amount of valuable product (D or B) should always be maximized.
Avoid product give-away ("Sell water as methanol").
Also saves energy.

Control implication for the valuable product: Control purity at spec.

[Column sketch: feed = methanol + water; valuable product = methanol + max. 0.5% water; cheap product (byproduct) = water + max. 0.1% methanol.]

72





Expected active constraints distillation: Cheap product
Over-fractionate the cheap product? Trade-off:
Yes: increased recovery of valuable product (less loss)
No: costs energy

Control implications for the cheap product:
1. Energy expensive: Purity spec. active -> control purity at spec.
2. Energy cheap: Overpurify. Two cases:
(a) Unconstrained optimum given by the trade-off between energy and recovery.
In this case it is likely that composition is a self-optimizing variable -> possibly control purity at its optimum value (overpurify).
(b) Constrained optimum given by the column reaching a capacity constraint -> control the active capacity constraint (e.g. V = V_max).

Methanol + water example: Since the methanol loss is anyhow low (0.1% of the water), there is not much to gain by overpurifying. Nevertheless, with very cheap energy, it is probably optimal to operate at V = V_max.

73





Summary: Optimal operation distillation
Cost to be minimized: J = -P, where P = p_D*D + p_B*B - p_F*F - p_V*V
N = 2 steady-state degrees of freedom
Active constraints distillation:
Purity spec. for the valuable product is always active (avoid give-away of valuable product).
Purity spec. for the cheap product may not be active (may want to overpurify to avoid loss of valuable product, but this costs energy).
Three cases:
1. N_active = 2: Two active constraints (for example, x_D,impurity = max and x_B,impurity = max: TWO-POINT COMPOSITION CONTROL)
2. N_active = 1: One constraint active (1 unconstrained DOF)
3. N_active = 0: No constraints active (2 unconstrained DOFs)
Can happen if there are no purity specifications (e.g. byproducts or recycle)

WHAT SHOULD WE CONTROL (TO SATISFY THE UNCONSTRAINED DOFs)?
Solution: Often compositions, but not always!

74





QUIZ (again)
1. Degrees of freedom (dynamic, steady-state)?
2. Expected active constraints? (1. Feed given; 2. Feed free)
3. Proposed control structure?

Gas-phase process (e.g. ammonia, methanol):
Feed (ca. 51% A, 49% B) -> heating -> reactor (A + B -> C) -> cooling -> flash separator
Liquid product (C) from the separator; gas recycle via compressor; purge (mostly A, some B, trace C)

75





2. UNCONSTRAINED VARIABLES:
WHAT MORE SHOULD WE CONTROL?

Intuition: Dominant variables (Shinnar)

Is there any systematic procedure?

A. Sensitive variables: Maximum gain rule (gain = minimum singular value)
B. Brute-force loss evaluation
C. Optimal linear combination of measurements, c = Hy
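For option A, a quick numerical screen is to compute the minimum singular value of the scaled steady-state gain from the unconstrained inputs to each candidate CV set and prefer the largest. The sketch below uses placeholder gains and spans (not from any real column), and for brevity only the output scaling is shown.

```python
import numpy as np

def scaled_min_singular_value(G, span_c):
    """Maximum (scaled) gain rule: sigma_min of diag(1/span(c)) @ G.
    (Input scaling by the allowed input moves is omitted for brevity.)"""
    S1 = np.diag(1.0 / np.asarray(span_c, dtype=float))
    return np.linalg.svd(S1 @ G, compute_uv=False).min()

# Candidate CV sets for 2 unconstrained inputs (placeholder numbers):
candidates = {
    "c = (x_D, x_B)":  (np.array([[0.9, -0.8], [0.3, -1.1]]), [0.01, 0.01]),
    "c = (T_tray, V)": (np.array([[2.0, -1.5], [0.0,  1.0]]), [1.0,  0.5]),
}
for name, (G, span) in candidates.items():
    print(f"{name}: sigma_min(scaled G) = {scaled_min_singular_value(G, span):.2f}")
# Prefer the candidate with the largest scaled minimum singular value.
# A full brute-force loss evaluation (option B) would instead re-solve the
# optimization for each disturbance and compare J under each fixed-c policy.
```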
