
Tuning of

Industrial
Control Systems
Second Edition

by Armando B. Corripio, Ph.D., P.E.


Louisiana State University
Notice
The information presented in this publication is for the general education of the reader. Because
neither the author nor the publisher has any control over the use of the information by the reader,
both the author and the publisher disclaim any and all liability of any kind arising out of such use.
The reader is expected to exercise sound professional judgment in using any of the information
presented in a particular application.

Additionally, neither the author nor the publisher has investigated or considered the effect of any
patents on the ability of the reader to use any of the information in a particular application. The
reader is responsible for reviewing any possible patents that may affect any particular use of the
information presented.

Any references to commercial products in the work are cited as examples only. Neither the author
nor the publisher endorses any referenced commercial product. Any trademarks or tradenames
referenced belong to the respective owner of the mark or name. Neither the author nor the publisher
makes any representation regarding the availability of any referenced commercial product at any
time. The manufacturer’s instructions on use of any commercial product must be followed at all
times, even if in conflict with the information in this publication.

Copyright © 2001 ISA—The Instrumentation, Systems, and Automation Society.

All rights reserved.

Printed in the United States of America.

No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying,
recording or otherwise, without the prior written permission of the publisher.

ISA
67 Alexander Drive
P.O. Box 12277
Research Triangle Park
North Carolina 27709

Library of Congress Cataloging-in-Publication Data

Corripio, Armando B.
Tuning of industrial control systems / Armando B. Corripio.-- 2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 1-55617-713-5
1. Process control--Automation. 2. Feedback control systems. I. Title.
TS156.8. C678 2000
670.42’75--dc21
00-010127
TABLE OF CONTENTS

Unit 1: Introduction and Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1-1. Course Coverage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1-2. Purpose. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-3. Audience and Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-4. Study Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-5. Organization and Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-6. Course Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1-7. Course Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Unit 2: Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7


2-1. The Feedback Control Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2-2. Proportional, Integral, and Derivative Modes . . . . . . . . . . . . . . . . 13
2-3. Typical Industrial Feedback Controllers. . . . . . . . . . . . . . . . . . . . . 19
2-4. Stability of the Feedback Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2-5. Determining the Ultimate Gain and Period . . . . . . . . . . . . . . . . . . 24
2-6. Tuning for Quarter-decay Response . . . . . . . . . . . . . . . . . . . . . . . . 25
2-7. Need for Alternatives to Ultimate Gain Tuning . . . . . . . . . . . . . . 31
2-8. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Unit 3: Open-Loop Characterization of Process Dynamics . . . . . . . . . . . . . . . . . 35


3-1. Open-Loop Testing: Why and How. . . . . . . . . . . . . . . . . . . . . . . . . 37
3-2. Process Parameters from Step Test . . . . . . . . . . . . . . . . . . . . . . . . . 39
3-3. Estimating Time Constant and Dead Time. . . . . . . . . . . . . . . . . . . 41
3-4. Physical Significance of the Time Constant . . . . . . . . . . . . . . . . . . 45
3-5. Physical Significance of the Dead Time. . . . . . . . . . . . . . . . . . . . . . 49
3-6. Effect of Process Nonlinearities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3-7. Testing Batch Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3-8. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Unit 4: How to Tune Feedback Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59


4-1. Tuning for Quarter-decay Ratio Response . . . . . . . . . . . . . . . . . . . 61
4-2. A Simple Method for Tuning Feedback Controllers . . . . . . . . . . . 64
4-3. Comparative Examples of Controller Tuning . . . . . . . . . . . . . . . . 65
4-4. Practical Controller Tuning Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4-5. Reset Windup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4-6. Processes with Inverse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4-7. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Unit 5: Mode Selection and Tuning Common Feedback Loops . . . . . . . . . . . . . 83
5-1. Deciding on the Control Objective. . . . . . . . . . . . . . . . . . . . . . . . . . 85
5-2. Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5-3. Level and Pressure Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5-4. Temperature Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5-5. Analyzer Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5-6. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Unit 6: Computer Feedback Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99


6-1. The PID Control Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6-2. Tuning Computer Feedback Controllers . . . . . . . . . . . . . . . . . . . 108
6-3. Selecting the Controller Processing Frequency . . . . . . . . . . . . . . 115
6-4. Compensating for Dead Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6-5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121


Unit 7: Tuning Cascade Control Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125


7-1. When to Apply Cascade Control . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7-2. Selecting Controller Modes for Cascade Control. . . . . . . . . . . . . 130
7-3. Tuning Cascade Control Systems. . . . . . . . . . . . . . . . . . . . . . . . . . 131
7-4. Reset Windup in Cascade Control Systems . . . . . . . . . . . . . . . . . 139
7-5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

Unit 8: Feedforward and Ratio Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143


8-1. Why Feedforward Control? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
8-2. The Design of Linear Feedforward Controllers . . . . . . . . . . . . . . 150
8-3. Tuning Linear Feedforward Controllers . . . . . . . . . . . . . . . . . . . . 152
8-4. Nonlinear Feedforward Compensation . . . . . . . . . . . . . . . . . . . . 157
8-5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

Unit 9: Multivariable Control Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167


9-1. What Is Loop Interaction? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9-2. Pairing Controlled and Manipulated Variables. . . . . . . . . . . . . . 173
9-3. Design and Tuning of Decouplers . . . . . . . . . . . . . . . . . . . . . . . . . 183
9-4. Tuning Multivariable Control Systems . . . . . . . . . . . . . . . . . . . . . 188
9-5. Model Reference Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
9-6. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

Unit 10: Adaptive and Self-tuning Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197


10-1. When Is Adaptive Control Needed? . . . . . . . . . . . . . . . . . . . . . . . 199
10-2. Adaptive Control by Preset Compensation . . . . . . . . . . . . . . . . . 202
10-3. Adaptive Control by Pattern Recognition . . . . . . . . . . . . . . . . . . 209
10-4. Adaptive Control by Discrete Parameter Estimation . . . . . . . . . 212
10-5. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220

Appendix A: Suggested Reading and Study Materials. . . . . . . . . . . . . . . . . . . . 223

Appendix B: Solutions to All Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
UNIT 1

Introduction and Overview


Welcome to Tuning of Industrial Control Systems. The first unit of this self-study
program provides the information you will need to take the course.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Understand the general organization of the course.

B. Know the course objectives.

C. Know how to proceed through the course.

1-1. Course Coverage

This book focuses on the fundamental techniques for tuning industrial
control systems. It covers the following topics:

A. The common techniques for representing and measuring the dynamic
characteristics of the controlled process.

B. The selection and tuning of the various modes of feedback control,
including those of computer- and microprocessor-based controllers.

C. The selection and tuning of advanced control techniques, such as
cascade, feedforward, multivariable, and adaptive control.

When you finish this course you will understand how the methods for
tuning industrial control systems relate to the dynamic characteristics of
the controlled process. By approaching the subject in this way you will
gain insight into the tuning procedures rather than simply memorizing a
series of recipes.

Because microprocessor- and computer-based controllers are now widely
used in industry, this book will extend the techniques originally
developed for analog instruments to digital controllers. We will examine
tuning techniques that have been specifically developed for digital
controllers as well as those for adaptive and auto-tuning controllers.

No attempt is made in this book to provide an exhaustive presentation of
tuning techniques. In fact, we have specifically omitted techniques based
on frequency response, root locus, and state space analysis because they
are more applicable to electrical and aerospace systems than to industrial
processes. Such techniques are unsuitable for tuning industrial control
systems because of the nonlinear nature of industrial systems and the
presence of transportation lag (dead time or time delay).

1-2. Purpose

The purpose of this book is to present, in easily understood terms, the
principles and practice of industrial controller tuning. Although this
course cannot replace actual field experience, it is designed to give you
insights into the tuning problem that will speed up your learning during
field training.

1-3. Audience and Prerequisites

The material covered will be useful to engineers, first-line supervisors,
and senior technicians who are concerned with the design, installation,
and operation of process control systems. The course will also be helpful
to students in technical schools, colleges, or universities who wish to gain
some insight into the practical aspects of automatic controller tuning.

There are no specific prerequisites for taking this course. However, you
will find it helpful to have some familiarity with the basic concepts of
automatic process control, whether acquired through practical experience
or academic study. In terms of mathematical skills, you do not need to be
intimately familiar with some of the mathematics used in the text in order
to understand the fundamentals of tuning. This book has been designed to
minimize the barrier that mathematics usually presents to students’
understanding of automatic control concepts.

1-4. Study Materials

This textbook is the only study material required in this course. It is an
independent, stand-alone textbook that is uniquely and specifically
designed for self-study.

Appendix A contains a list of suggested readings to provide you with
additional reference and study materials.

1-5. Organization and Sequence

This book is organized into ten separate units. The next three units (Units
2-4) are designed to teach you the fundamental concepts of tuning,
namely, the modes of feedback control, the characterization and
measurement of process dynamic response, the selection of controller
performance, and the adjustment of the tuning parameters. Unit 5 tells
you how to select controller modes and tuning parameters for some
typical control loops. An entire unit, Unit 6, is devoted to the specific
problem of tuning computer- and microprocessor-based controllers. The
last four units, Units 7 through 10, demonstrate how to tune the more
advanced industrial control strategies, namely, cascade, feedforward,
multivariable, and adaptive control systems.

As mentioned, the method of instruction used is self-study: you select the
pace at which you learn best. You may browse through or completely skip
some units if you feel you are intimately familiar with their subject matter
and devote more time to other units that contain material new to you.

Each unit is designed in a consistent format with a set of specific learning
objectives stated at the very beginning of the unit. Note these learning
objectives carefully; the material in the unit will teach to these objectives.
Each unit also contains examples to illustrate specific concepts and
exercises to test your understanding of these concepts. The solutions for
all of these exercises are contained in Appendix B, so you can check your
own solutions against them.

You are encouraged to make notes in this textbook. Ample white space has
been provided on every page for this specific purpose.

1-6. Course Objectives

When you have completed this entire book, you should:

• Know how to characterize the dynamic response of an industrial
process.

• Know how to measure the dynamic parameters of a process.

• Know how to select performance criteria and tune feedback
controllers.

• Know how to pick the right controller modes and tuning parame-
ters to match the objectives of the control system.

• Understand the effect of sampling frequency on the performance of
computer-based controllers.

• Know when to apply and how to tune cascade, feedforward, ratio,
and multivariable control systems.

• Know how to apply adaptive and auto-tuning control techniques to
compensate for process nonlinearities.

Besides these overall course objectives, each individual unit contains its
own set of learning objectives, which will help you direct your study.

1-7. Course Length

The basic premise of self-study is that students learn best when they
proceed at their own pace. As a result, the amount of time individual
students require for completion will vary substantially. Most students will
complete this course in thirty to forty hours, but your actual time will
depend on your experience and personal aptitude.
UNIT 2

Feedback Controllers
This unit introduces the basic modes of feedback control, the important
concept of control loop stability, and the ultimate gain or closed-loop
method for tuning controllers.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Understand the concept of feedback control.

B. Describe the three basic controller modes.

C. Define stability, ultimate loop gain, and ultimate period.

D. Tune simple feedback control by the ultimate gain or closed-loop
method.

2-1. The Feedback Control Loop

The earliest known industrial application of automatic control was the
flywheel governor. This was a simple feedback controller, introduced by
James Watt (1736-1819) in 1775, for controlling the speed of the steam
engine in the presence of varying loads. The concept had been used earlier
to control the speed of windmills. To better understand the concept of
feedback control, consider, as an example, the steam heater sketched in
Figure 2-1.

[Figure: schematic of a steam heater. Steam at rate Fs condenses on the
shell side; process fluid at flow F and inlet temperature Ti flows through
the tubes and leaves at outlet temperature C; condensate drains through a
steam trap.]

Figure 2-1. Example of a Controlled Process: A Steam Heater

The process fluid flows inside the tubes of the heater and is heated by
steam condensing on the outside of the tubes. The objective is to control
the outlet temperature, C, of the process fluid in the presence of variations
in process fluid flow (throughput or load), F, and in its inlet temperature,
Ti. This is accomplished by manipulating or adjusting the steam rate to the
heater, Fs, and with it the rate at which heat is transferred into the process
fluid, thus affecting its outlet temperature.

In the example in Figure 2-1, the outlet temperature is the controlled,
measured, or output variable; the steam flow is the manipulated variable; and
the process fluid flow and inlet temperature are the disturbances. These
terms refer to the variables in a control system. They will be used
throughout this book.

Now that we have defined the important variables of the control system,
the next step is to decide how to accomplish the objective of controlling
the temperature. In Figure 2-1, the approach is to set up a feedback control
loop, which is the most common industrial control technique—in fact, it is
the “bread and butter” of industrial automatic control. The following
procedure illustrates the concept of feedback control:

Measure the controlled variable, compare it with its desired value,
and adjust the manipulated variable based on the difference
between the two.

The desired value of the controlled variable is the set point, and the
difference between the controlled variable and the set point is the error.
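The measure-compare-adjust cycle can be sketched in a few lines of code. This is an illustrative sketch only: the heater model and all numbers are hypothetical, and incrementing the valve signal in proportion to the error is just one simple way to act on the error.

```python
def feedback_step(setpoint, measurement, valve_signal, gain=0.2):
    """One cycle around the loop: measure, compare, adjust."""
    error = setpoint - measurement       # compare with the desired value (set point)
    valve_signal += gain * error         # adjust the manipulated variable
    return valve_signal, error

# Hypothetical heater: the outlet temperature follows the valve signal
# with a first-order lag (units and numbers are illustrative only).
temperature = 50.0   # controlled variable C
valve = 50.0         # signal to the steam valve
for _ in range(100):
    valve, error = feedback_step(100.0, temperature, valve)
    temperature += 0.2 * (valve - temperature)
print(round(temperature, 1))
```

Repeating the cycle drives the measurement toward the set point; how quickly and how smoothly it gets there is what the rest of this book is about.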

Figure 2-2 shows the three pieces of instrumentation that are required to
implement the feedback control scheme:
1. A control valve for manipulating the steam flow.
2. A feedback controller, TC, for comparing the controlled variable
with the set point and calculating the signal to the control valve.
3. A sensor/transmitter, TT, for measuring the controlled variable
and transmitting its value to the controller.

The controller and the sensor/transmitter are typically electronic or
pneumatic. In the former case, the signals are electric currents in the range
of 4-20 mA (milliamperes), while in the latter they are air pressure signals
in the range of 3-15 psig (pounds per square inch gauge). The control valve
is usually pneumatically operated, which means that the electric current
signal from the controller must be converted to an air pressure signal. This
is done by a current-to-pressure transducer.
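Both standard signal ranges are linear, so converting a signal expressed as a percentage of range is a one-line calculation. The helper names below are our own:

```python
def percent_to_ma(percent):
    """Map 0-100% of range onto the 4-20 mA electronic signal range."""
    return 4.0 + (percent / 100.0) * (20.0 - 4.0)

def percent_to_psig(percent):
    """Map 0-100% of range onto the 3-15 psig pneumatic signal range."""
    return 3.0 + (percent / 100.0) * (15.0 - 3.0)

# Mid-scale (50%) corresponds to 12 mA and 9 psig.
print(percent_to_ma(50.0), percent_to_psig(50.0))
```

A current-to-pressure transducer performs, in effect, the composition of these two maps: a 12 mA signal (50 percent of range) becomes a 9 psig signal.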
[Figure: the steam heater of Figure 2-1 with the feedback loop installed.
A temperature transmitter TT measures the outlet temperature C and
sends it to the temperature controller TC, which compares it with the set
point r and sends its output m to the steam control valve, adjusting the
steam rate Fs.]

Figure 2-2. Feedback Control Loop for Heater Outlet Temperature

Modern control systems also use digital controllers. There are three basic
types of digital controllers: distributed control systems (DCS), computer
controllers, and programmable logic controllers (PLC). Some of the more
modern installations use the “fieldbus” concept, in which the signals are
transmitted digitally, that is, in the form of zeros and ones.

This book is in accordance with standard ANSI/ISA S5.1-1984 (R1992),
Instrumentation Symbols and Identification. Further, the degree of detail
is per Section 6.12, Example 2, “Typical Symbolism for Conceptual
Diagrams,” that is, diagrams that convey the basic control concepts
without regard to the specific implementation hardware. The diagram in
Figure 2-2 is an example of a conceptual diagram.

Figure 2-2 shows that the feedback control scheme creates a loop around
which signals travel. A change in outlet temperature, C, causes a
proportional change in the signal to the controller, b, and therefore an
error, e. The controller acts on this error by changing the signal to the
control valve, m, causing a change in steam flow to the heater, Fs. This
causes a change in the outlet temperature, C, which then starts a new cycle
of changes around the loop.

The control loop and its various components are easier to recognize when
they are represented as a block diagram, as shown in Figure 2-3. Block
diagrams were introduced by James Watt, who recognized that the
complex workings of the linkages and levers in the flywheel governor are
easier to explain and understand if they are considered as signal
processing blocks and comparators.

[Figure: block diagram of the feedback control loop, with blocks for the
controller, valve, heater, and sensor, and comparators where signals are
added or subtracted.]

Figure 2-3. Block Diagram of Feedback Control Loop

The basic elements of a block diagram
are arrows, blocks (rectangles), and comparators (circles). The arrows
represent the instrument signals and process variables, for example,
transmitter and controller output signals, steam flow, outlet temperature,
and so on. The blocks (rectangles) represent the processing of the signals
by the instruments as well as the lags, delays, and magnitude changes of
the variables caused by the process and other pieces of equipment. For
example, the blocks in Figure 2-3 represent the control valve, the sensor/
transmitter, the controller, and the heater. Finally, the comparators (circles)
represent the addition and/or subtraction of signals, for example, the
calculation of the error signal by the controller.

The signs in the diagram in Figure 2-3 represent the action of the various
input signals on the output signal. That is, a positive sign means that an
increase in input causes an increase in output or direct action, while a
negative sign means that an increase in input causes a decrease in output
or reverse action. For example, the negative sign by the process flow into
the heater means that an increase in flow results in a decrease in outlet
temperature. By following the signals around the loop you will notice that
there is a net reverse action in the loop. This property is known as negative
feedback and, as we will show shortly, it is required if the loop is to be
stable.

The most important component of a feedback control loop is the feedback
controller. It will be the subject of the next two sections.

2-2. Proportional, Integral, and Derivative Modes

The previous section showed that the purpose of the feedback controller is
twofold. First, it computes the error as the difference between the
controlled variable and the set point, and, second, it computes the signal
to the control valve based on the error. This section presents the three basic
modes the controller uses to perform the second of these two functions.
The next section (2-3) discusses how these modes are combined to form
the feedback controllers most commonly used in industry.

The three basic modes of feedback control are proportional, integral or reset,
and derivative or rate. Each of these modes introduces an adjustable or
tuning parameter into the operation of the feedback controller. The
controller can consist of a single mode, a combination of two modes, or all
three.

Proportional Mode

The purpose of the proportional mode is to cause an instantaneous
response in the controller output to changes in the error. The formula for
the proportional mode is the following:

Kc e (2-1)

where Kc is the controller gain and e is the error. The significance of the
controller gain is that as it increases so does the change in the controller
output caused by a given error. This is illustrated in Figure 2-4, where the
response in the controller output that is due to the proportional mode is
shown for an instantaneous or step change in error, at various values of
the gain.

Another way of looking at the gain is that as it increases the change in
error that causes a full-scale change in the controller output signal
decreases. The gain is therefore sometimes expressed as the proportional
band (PB) or the change in the transmitter signal (expressed as a
percentage of its range) that is required to cause a 100 percent change in
controller output. The relationship between the controller gain and its
proportional band is then given by the following formula:

PB = 100/Kc (2-2)

Some instrument manufacturers calibrate the controller gain as
proportional band, while others calibrate it as the gain. It is very important
to realize that increasing the gain reduces the proportional band and vice
versa.
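Eq. 2-2 is easy to apply in both directions; the function names here are ours:

```python
def proportional_band(kc):
    """Eq. 2-2: PB = 100 / Kc (percent of transmitter range)."""
    return 100.0 / kc

def controller_gain(pb):
    """Inverse of Eq. 2-2: Kc = 100 / PB."""
    return 100.0 / pb

# Doubling the gain halves the proportional band, and vice versa.
print(proportional_band(2.0), controller_gain(200.0))   # 50.0 0.5
```

So a controller calibrated with a 50 percent proportional band has a gain of 2, and one with a 200 percent band has a gain of 0.5.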

Figure 2-4. Response of Proportional Controller to Constant Error

Offset

The proportional mode cannot by itself eliminate the error at steady state
in the presence of disturbances and changes in set point. The
unavoidability of this permanent error or offset can best be understood by
imagining that the steam heater control loop of Figure 2-2 has a controller
that has proportional mode only. The formula for such a controller is as
follows:

m = m0 + Kc e (2-3)

where m is the controller output signal and m0 is its bias or base value.
This base value is usually adjusted at calibration time to be about 50
percent of the controller output range so as to give the controller room to
move in each direction. However, assume that the bias on the temperature
controller of the steam heater has been adjusted so as to produce zero error
at the normal operating conditions, that is, to position the steam control
valve so that the steam flow is that flow required to produce the desired
outlet temperature at the normal process flow and inlet temperature. In
this manner the initial error of the controller is zero and the controller
output is equal to the bias term.
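Eq. 2-3 translates directly into code. This sketch assumes all signals are in percent of range and takes the error as set point minus measurement; the sign convention in practice depends on the controller's direct or reverse action.

```python
def proportional_controller(setpoint, measurement, kc, bias=50.0):
    """Eq. 2-3: m = m0 + Kc*e, with all signals in percent of range."""
    error = setpoint - measurement
    return bias + kc * error

# With zero error the output sits at the bias, as described above.
print(proportional_controller(50.0, 50.0, kc=2.0))   # 50.0
print(proportional_controller(50.0, 45.0, kc=2.0))   # 60.0
```

Note that the output moves away from the bias only while an error exists; this is exactly why a sustained disturbance leaves a permanent offset.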

Figure 2-5. Response of Heater Temperature to Step Change in Process Flow Using a
Proportional Controller

Figure 2-5 shows the response of the outlet temperature and of the
controller output to a step change in process flow for the case of no control
and for the case of two different values of the proportional gain. For the
case of no control, the steam rate remains the same, which causes the
temperature to drop because there is more fluid to heat with the same
amount of heat. The proportional controller can reduce this error by
opening the steam valve, as shown in Figure 2-5. However, it cannot
eliminate it completely because, as Eq. 2-3 shows, zero error results in the
original steam valve position, which is not enough steam rate to bring the
temperature back up to its desired value. Although an increased controller
gain results in a smaller steady-state error or offset, it also causes, as
shown in Figure 2-5, oscillations in the response. These oscillations are
caused by the time delays on the signals as they travel around the loop
and by overcorrection on the part of the controller as the gain is increased.
To eliminate the offset a control mode other than proportional is required,
namely, the integral mode.

Integral Mode

The purpose of the integral or reset mode is to eliminate the offset or
steady-state error. It does this by integrating or accumulating the error
over time. The formula for the integral mode is the following:

(Kc / TI) ∫ e dt (2-4)

where TI is the integral or reset time, and t is time. The calculus operation
of integration is somewhat difficult to visualize, and perhaps it is best
understood by using a physical analogy. Consider the tank shown in
Figure 2-6. Assume that the liquid level in the tank represents the output
of the integral action, while the difference between the inlet and outlet
flow rates represents the error e. When the inlet flow rate is higher than
the outlet flow rate, the error is positive, and the level rises with time at a
rate that is proportional to the error. Conversely, if the outlet flow rate is
higher than the inlet, the level drops at a rate proportional to the negative
error. Finally, the only way for the level to remain stationary is for the inlet
and outlet flows to be equal, in which case the error is zero. The integral
mode of the feedback controller acts exactly in this manner, thus fulfilling
its purpose of forcing the error to zero at steady state.

The integral time TI is the tuning parameter of the integral mode. In the
analogous tank in Figure 2-6, the cross-sectional area of the tank represents
the integral time. The smaller the integral time (area), the faster the
controller output (level) will change for a given error (difference in flows).
As the proportional gain is part of the integral mode, integral time means
the time it takes for the integral mode to match the instantaneous change
caused by the proportional mode on a step change in error. This concept is
illustrated in Figure 2-7.

Figure 2-6. Tank Analog of Integral Controller

Figure 2-7. Response of PI Controller to a Constant Error



Some instrument manufacturers calibrate the integral mode parameter as
the reset rate, which is simply the reciprocal of the integral time. Again, it is
important to realize that increasing the integral time results in a decrease
in the reset rate and vice versa.
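A discrete sketch of a PI calculation makes the "repeat" interpretation of the integral time concrete. This is the illustrative textbook form of Eqs. 2-3 and 2-4, not any particular vendor's algorithm; the sampling period dt and all numbers are hypothetical.

```python
def pi_output(error, accumulated, kc, ti, dt, bias=50.0):
    """One sample of m = m0 + Kc*e + (Kc/TI) * integral(e dt)."""
    accumulated += error * dt   # accumulate the error, as in the tank analogy
    m = bias + kc * error + (kc / ti) * accumulated
    return m, accumulated

# Hold a constant error of 1% with Kc = 1 and TI = 10 time units.
accumulated = 0.0
for step in range(10):
    m, accumulated = pi_output(error=1.0, accumulated=accumulated,
                               kc=1.0, ti=10.0, dt=1.0)
# After one integral time the integral contribution (1%) has grown to
# match the instantaneous proportional contribution (1%), as in Figure 2-7.
print(m)   # 52.0: bias 50 + proportional 1 + integral 1
```

Unlike the proportional term, the accumulated term keeps growing as long as any error remains, which is how the integral mode forces the error to zero at steady state.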

Although the integral mode is effective in eliminating offset, it is slower
than the proportional mode in that it must act over a period of time. A
faster mode than the proportional is the derivative mode, which we
discuss next.

Derivative Mode

The derivative or rate mode responds to the rate of change of the error
over time. This speeds up the controller action, compensating for some of
the delays in the feedback loop. The formula for the derivative action is as
follows:

Kc TD (de/dt) (2-5)

where TD is the derivative or rate time. The derivative time is the time it
takes the proportional mode to match the instantaneous action of the
derivative mode on an error that changes linearly with time (a ramp). This
is illustrated in Figure 2-8. Notice that the derivative mode acts only when
the error is changing with time.
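The meaning of the derivative time can be checked numerically. For a ramp error e = r·t, the derivative contribution Kc·TD·r is constant, and the proportional contribution Kc·r·t catches up with it exactly at t = TD, as in Figure 2-8. The function below is an illustrative sketch with hypothetical numbers.

```python
def pd_contributions(error, error_rate, kc, td):
    """Proportional (Eq. 2-1) and derivative (Eq. 2-5) contributions."""
    return kc * error, kc * td * error_rate

# Ramp error e = r*t with slope r = 1 %/min, Kc = 2, TD = 5 min.
kc, td, r = 2.0, 5.0, 1.0
t = td                          # evaluate at one derivative time
p, d = pd_contributions(error=r * t, error_rate=r, kc=kc, td=td)
print(p, d)   # 10.0 10.0: the two contributions match at t = TD
```

Before t = TD the derivative term leads the proportional term, which is precisely how it anticipates and speeds up the controller action.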

Figure 2-8. Response of PD Controller to an Error Ramp



On-Off Control

The three basic modes of feedback control presented in this section are all
proportional to the error in their action. That is, a doubling in the
magnitude of the error causes a doubling in the magnitude of the change
in controller output. By contrast, on-off control operates by switching the
controller output from one end of its range to the other based only on the
sign of the error, not on its magnitude. On-off controllers are not generally
used in process control, and when they are it is very simple to tune them.
Their only adjustment is the magnitude of a dead band around the set
point.
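An on-off controller with a dead band takes only a few lines. This sketch is ours; a heating application is assumed, so the output switches full on when the measurement falls below the band:

```python
def on_off(setpoint, measurement, last_output, deadband=1.0):
    """Switch full on/off only when the measurement leaves the dead band."""
    if measurement < setpoint - deadband / 2.0:
        return 100.0            # full on
    if measurement > setpoint + deadband / 2.0:
        return 0.0              # full off
    return last_output          # inside the band: hold the last state

print(on_off(100.0, 98.0, last_output=0.0))     # 100.0 (below the band)
print(on_off(100.0, 102.0, last_output=100.0))  # 0.0 (above the band)
print(on_off(100.0, 100.2, last_output=100.0))  # 100.0 (holds inside band)
```

The dead band is the controller's only adjustment: widening it reduces switching frequency at the cost of a wider oscillation around the set point.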

The next section, 2-3, discusses the procedures for combining the three
basic control modes to produce industrial process controllers. However,
before doing this we need to simplify the notation for the integral and
derivative modes; a simple look at Eqs. 2-4 and 2-5 makes it clear why. A
simpler notation is achieved by introducing the Heaviside operator “s.”

Oliver Heaviside (1850-1925) was a British physicist who baffled
mathematicians by noting, without proof, that the differentiation operator
d/dt could be treated as an algebraic quantity, a quantity we will represent
by the symbol “s” here. Heaviside’s concept makes it easy to simplify our
notation as follows:

• se will denote the rate of change of the error

• e/s will denote the integral of the error

Integration is the reciprocal operation because the rate of change of an
integrator's output is proportional to its input. This allows us to write the formulas
for the integral and derivative modes as follows:
Integral mode: (Kc / TI s) e (2-6)

Derivative mode: Kc TD s e (2-7)

These expressions are easier to manipulate than Eqs. 2-4 and 2-5. For those
readers who are not comfortable with the mathematics, be assured that we
will use these expressions only to simplify the presentation of the material.
Nevertheless, it is important to associate the s operator with rate of change
and its reciprocal with integration. It is also important to realize that since
s is associated with rate of change, it takes on a value of zero (that is, it
disappears) at steady state, when variables do not change with time.

2-3. Typical Industrial Feedback Controllers

Most industrial feedback controllers, about 75 percent, are
proportional-integral (PI) or two-mode controllers, and most of the rest are
proportional-integral-derivative (PID) or three-mode controllers. As Unit
6 will show, there are a few applications for which single-mode
controllers, either proportional or integral, are indicated, but not many. It
is also rather easy to tune a single-mode controller, as only one tuning
parameter needs to be adjusted. In this section, we will look at PI and PID
controllers in terms of how the modes are combined and implemented.

The formula for the PI controller is produced by simply adding the
proportional and integral modes:

m = Kc e + (Kc / TI s) e = Kc [1 + (1/TI s)] e (2-8)

Eq. 2-8 shows that the PI controller has two adjustable parameters, the
gain Kc and the integral or reset time TI. Figure 2-9 presents a block
diagram representation of the PI controller.
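Eq. 2-8 is written with the Heaviside operator; in a digital controller the integral is accumulated sample by sample. The sketch below is my own minimal discretization, not the algorithm of any particular vendor (the 50%C.O. bias is an assumed initial output):

```python
class PIController:
    """Discrete approximation of Eq. 2-8: m = Kc*e + (Kc/TI)*integral(e dt)."""

    def __init__(self, kc, ti, dt, bias=50.0):
        self.kc = kc          # controller gain, %C.O./%T.O.
        self.ti = ti          # integral (reset) time, minutes
        self.dt = dt          # sample interval, minutes
        self.bias = bias      # assumed output when the error is zero, %C.O.
        self.integral = 0.0   # running time-integral of the error

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        return self.bias + self.kc * (e + self.integral / self.ti)

pi = PIController(kc=0.6, ti=2.0, dt=0.1)
# Hold a constant 5%T.O. error for 2 minutes (20 samples):
outputs = [pi.update(setpoint=55.0, measurement=50.0) for _ in range(20)]
print(round(outputs[0], 2), round(outputs[-1], 2))   # → 53.15 56.0
```

With these assumed settings, a sustained 5%T.O. error produces an immediate 3%C.O. proportional step, and after one integral time (2 minutes) the integral term has repeated that step once, which is why the reset rate is quoted in repeats per minute.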

The simplest formula for the PID or three-mode controller is the addition
of the proportional, integral, and derivative modes, as follows:

m = Kc e + (Kc / TI s) e + Kc TD s e = Kc [1 + (1/TI s) + TD s] e (2-9)

This equation shows that the PID controller has three adjustable or tuning
parameters, the gain Kc, the integral or reset time TI, and the derivative or
rate time TD.

Figure 2-9. Block Diagram of PI Controller

The block diagram implementation of Eq. 2-9 is sketched in
Figure 2-10. The figure also shows an alternative form that is more
commonly used because it avoids taking the rate of change of the set point
input to the controller. This prevents derivative kick, an undesirable pulse of
short duration on the controller output that would take place when the
process operator changes the set point.
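The derivative-kick problem is easy to demonstrate numerically. In this sketch (my own, with arbitrary values), a set point step makes the rate of change of the error spike for one sample, while the rate of change of the measurement stays at zero:

```python
def rate_terms(signal, kc, td, dt):
    """Kc*TD times the finite-difference rate of change of a signal."""
    return [kc * td * (signal[i] - signal[i - 1]) / dt
            for i in range(1, len(signal))]

dt = 0.1                                  # sample interval, minutes
setpoint = [50.0] * 10 + [60.0] * 10      # operator steps the set point
measurement = [50.0] * 20                 # process has not yet responded
error = [r - b for r, b in zip(setpoint, measurement)]

kick = rate_terms(error, kc=1.0, td=0.05, dt=dt)       # derivative on error
no_kick = [-x for x in rate_terms(measurement, kc=1.0, td=0.05, dt=dt)]

print(max(kick), all(x == 0.0 for x in no_kick))   # → 5.0 True
```

Taking the derivative of the error produces a one-sample 5%C.O. pulse at the set point change; taking the negative derivative of the measurement produces none, which is why the second form in Figure 2-10 is preferred.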

The formula of Eq. 2-9 is commonly used in computer-based controllers,
as Unit 6 will show. This form is sometimes called the “parallel” PID
controller because, as Figure 2-10 shows, the three modes are in parallel.
All analog and most microprocessor (distributed) controllers use a
“series” PID controller, which is given by the following formula:

m = Kc' [1 + (1/TI' s)] [(1 + TD' s) / (1 + αTD' s)] e (2-10)

The last term in brackets in Eq. 2-10 is a derivative unit and is attached to
the standard PI controller of Figure 2-9 to create the PID controller, as
shown in Figure 2-11. It contains a filter (lag) to prevent the derivative
mode from amplifying noise. The derivative unit is installed on the
controlled variable input to the controller to avoid the derivative kick, just
as in Figure 2-10. The value of the filter parameter α in Eq. 2-10 is not
adjustable; it is built into the design of the controller. It is usually of the
order of 0.05 to 0.1.

Figure 2-10. Block Diagram of Parallel PID Controller with Derivative on the Error Signal, and
with Derivative on the Measurement

The noise filter can and should be added to the derivative term
of the parallel version of the PID controller. Its effect on the response of the
controller is usually negligible because the lag time constant, αTD, is small
relative to the response time of the loop.

The three formulas in Eq. 2-11 convert the parameters of the series PID
controller to those of the parallel version:

Kc = Kc'Fsp TI = TI'Fsp TD = TD'/Fsp (2-11)

where

Fsp = 1 + (TD'/TI')

The formulas for converting the parallel PID parameters to the series are
as follows:

Kc' = KcFps TI' = TIFps TD' = TD/Fps (2-12)

where

Fps = 0.5 + [0.25 - (TD/TI)]^0.5

Because of this difference between the parameters of the series and
parallel versions of the PID controller, it will be indicated explicitly
whether the tuning parameters are for one version or the other. It follows
that in tuning a controller you must determine whether it is the series or
parallel form by consulting the manuals for the specific controller. Notice that
there is no difference when the derivative time is zero (PI controller).
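Eqs. 2-11 and 2-12 are easy to put into code and check against each other (the function names are mine; Eq. 2-12 requires TD/TI ≤ 0.25 or the square root becomes imaginary):

```python
import math

def series_to_parallel(kc_s, ti_s, td_s):
    """Eq. 2-11: series PID parameters to parallel parameters."""
    f = 1.0 + td_s / ti_s
    return kc_s * f, ti_s * f, td_s / f

def parallel_to_series(kc_p, ti_p, td_p):
    """Eq. 2-12: parallel PID parameters to series; needs TD/TI <= 0.25."""
    f = 0.5 + math.sqrt(0.25 - td_p / ti_p)
    return kc_p * f, ti_p * f, td_p / f

print(series_to_parallel(9.0, 0.25, 0.0625))   # → (11.25, 0.3125, 0.05)
```

The round trip series → parallel → series recovers the original settings, which is a quick sanity check on the two sets of conversion formulas.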

Figure 2-11. Block Diagram of Series PID Controller with Derivative on the Measurement

All industrial feedback controllers, whether they are electronic,
pneumatic, or computer-based, have the following features:

Features intended for the plant operator—

• Controlled variable display

• Set point display

• Controller output signal display

• Set point adjustment

• Manual output adjustment

• Remote/local set point switch (cascade systems only)

• Auto/manual switch

Features intended for the instrument or control engineer—

• Proportional gain, integral time, and derivative time adjustments

• Direct/reverse action switch

The operator features are on the front of panel-mounted controllers or in
the “menu” of the computer control video display screens. The
instrument/control engineer features are on the side of panel-mounted
controllers; in computer control systems they are in separate computer
video screens that can be accessed only by a key or separate password.

Now that we have described the most common forms of feedback
controllers, we will turn in the next section to the concept of loop stability,
that is, the interaction between the controller and the process.

2-4. Stability of the Feedback Loop

One of the characteristics of feedback control loops is that they may
become unstable. The loop is said to be unstable when a small change in
disturbance or set point causes the system to deviate widely from its
normal operating point. The two possible causes of instability are that the
controller has the incorrect action or it is tuned too tightly, that is, the gain
is too high, the integral time is too small, the derivative time is too high, or
a combination of these. Another possible cause is that the process is
inherently unstable, but this is rare.

When the controller has the incorrect action, you can recognize instability
by the controller output “running away” to either its upper or its lower

limit. For example, suppose the temperature controller on the steam heater
of Figure 2-2 was set so that an increasing temperature increases its
output. In this case, a small increase in temperature would result in an
opening of the steam valve, which in turn would increase the temperature
further, and the cycle would continue until the controller output reached
its maximum with the steam valve fully opened. On the other hand, a
small decrease in temperature would result in a closing of the steam valve,
which would further reduce the temperature, and the cycle would
continue until the controller output is at its minimum point with the steam
valve fully closed. Thus, for the temperature control loop of Figure 2-2 to
be stable, the controller action must be “increasing measurement decreases
output.” This is known as reverse action.

When the controller is tuned too tightly, you can recognize instability by
observing that the signals in the loop oscillate and the amplitude of the
oscillations increases with time, as in Figure 2-12. The reason for this type
of instability is that the tightly tuned controller overcorrects for the error
and, because of the delays and lags around the loop, the overcorrections
are not detected by the controller until some time later. This causes a larger
error in the opposite direction and further overcorrection. If this is allowed
to continue the controller output will end up oscillating between its upper
and lower limits.

As pointed out earlier, the oscillatory type of instability is caused by the
controller having too high a gain, too short an integral time, too high a
derivative time, or a combination of these. This is a good point to
introduce the simplest method for characterizing the process in order to
tune the controller: determining the ultimate gain and period of oscillation
of the loop.

Figure 2-12. Response of Unstable Feedback Control Loop



2-5. Determining the Ultimate Gain and Period

The earliest published method for characterizing the process for controller
tuning was proposed by J. G. Ziegler and N. B. Nichols.1 This method
consists of determining the ultimate gain and period of oscillation of the
loop. The ultimate gain is the gain of a proportional controller at which the
loop oscillates with constant amplitude, and the ultimate period is the
period of the oscillations. The ultimate gain is thus a measure of the
controllability of the loop; that is, the higher the ultimate gain, the easier it
is to control the loop. The ultimate period is in turn a measure of the speed
of response of the loop; that is, the longer the period, the slower the loop.
Because this method of characterizing a process must be performed with
the feedback loop closed, that is, with the controller in “Automatic
Output,” it is also known as the “closed-loop method.”

It follows from the definition of the ultimate gain that it is the gain at
which the loop is at the threshold of instability. At gains just below the
ultimate the loop signals will oscillate with decreasing amplitude, as in
Figure 2-5, while at gains above the ultimate the amplitude of the
oscillations will increase with time, as in Figure 2-12. When determining
the ultimate gain of an actual feedback control loop, it is therefore very
important to ensure that it is not exceeded by much, or the system will
become violently unstable.

The procedure for determining the ultimate gain and period is carried out
with the controller in “Auto” and with the integral and derivative modes
removed. It is as follows:
1. Remove the integral mode by setting the integral time to its
highest value (or the reset rate to its lowest value). Alternatively,
if the controller model or program allows the integral mode to be
switched off, then do so.
2. Switch off the derivative mode or set the derivative time to its
lowest value, usually zero.
3. Carefully increase the proportional gain in steps. After each
increase, disturb the loop by introducing a small step change in
the set point, and observe the response of the controlled and
manipulated variables, preferably on a trend recorder. The
variables should start oscillating as the gain is increased, as in
Figure 2-5.
4. When the amplitude of the oscillations remains constant (or
approximately constant) from one oscillation to the next, the
ultimate controller gain has been reached. Record it as Kcu.

5. Measure the period of the oscillations using the trend recordings,
as in Figure 2-13, or a stopwatch. For better accuracy, time several
oscillations and calculate the average period. In Figure 2-13, for
example, the time required for five oscillations is measured and
then divided by five.

6. Stop the oscillations by reducing the gain to about half of the
ultimate.

The procedure just outlined is simple and requires only a minimum upset
to the process, just enough to be able to observe the oscillations.
Nevertheless, the prospect of taking a process control loop to the verge of
instability is not an attractive one from a process operation standpoint.
However, it is not absolutely necessary in practice to obtain sustained
oscillations. It is also important to realize that some simple loops cannot be
made to oscillate with constant amplitude using just a proportional
controller. Fortunately, these are usually the simplest loops to control and
tune.
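Before attempting the test on a real process, the procedure can be rehearsed on a simulated loop. The sketch below is entirely my own construction: it assumes a first-order-plus-dead-time process (gain 1, time constant 1, dead time 0.5, for which the ultimate gain is roughly 4) under proportional-only control, and checks whether the error peaks grow or decay as the gain is set above or below the ultimate:

```python
def simulate_p_only(kc, gain=1.0, tau=1.0, dead_time=0.5,
                    dt=0.001, t_end=20.0):
    """Proportional-only loop around a first-order-plus-dead-time
    process, started by a unit set point step (deviation variables)."""
    n = int(t_end / dt)
    delay = int(dead_time / dt)
    u = [0.0] * (n + delay)      # history of delayed controller outputs
    y, errors = 0.0, []
    for i in range(n):
        e = 1.0 - y              # set point = 1
        errors.append(e)
        u[i + delay] = kc * e    # the controller acts now ...
        y += dt * (gain * u[i] - y) / tau   # ... the process sees it later
    return errors

def peaks(errors, dt=0.001, skip=2.0):
    """Local maxima of the error after the initial transient."""
    s = int(skip / dt)
    return [errors[i] for i in range(s + 1, len(errors) - 1)
            if errors[i - 1] < errors[i] >= errors[i + 1]]

low = peaks(simulate_p_only(kc=2.0))    # below the ultimate gain
high = peaks(simulate_p_only(kc=6.0))   # above the ultimate gain
print(low[-1] < low[0], high[-1] > high[0])   # decaying vs. growing peaks
```

Below the ultimate gain the peak amplitudes shrink from one oscillation to the next, as in Figure 2-5; above it they grow, as in Figure 2-12.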

The next section, 2-6, shows how to use the ultimate gain and period to
tune the feedback controller.

2-6. Tuning for Quarter-decay Response

The preceding section outlined Ziegler and Nichols’ method for
determining the ultimate gain and period of a feedback control loop.
However, Ziegler and Nichols also proposed that the ultimate gain and
period be used to tune the controller for a specific response, that is, the
quarter-decay ratio response, or QDR, for short.

Figure 2-13. Determination of Ultimate Period

Figure 2-14 illustrates the
QDR response for a step change in disturbance and for a step change in set
point. Its characteristic is that each oscillation has an amplitude that is one
fourth that of the previous oscillation. Table 2-1 summarizes the formulas
proposed by Ziegler and Nichols for calculating the QDR tuning
parameters of P, PI, and PID controllers from the ultimate gain Kcu and
period Tu.2

It is intuitively obvious that for the proportional (P) controller the gain for
QDR response should be half of the ultimate gain, as Table 2-1 shows. At
the ultimate gain, the maximum error in each direction causes an identical
maximum error in the opposite direction. At half the ultimate gain, the
maximum error in each direction is exactly half the preceding maximum
error in the opposite direction and one fourth the previous maximum
error in the same direction. This is the quarter-decay response.

Figure 2-14. Quarter Decay Responses to Disturbance and Set Point

Table 2-1. Quarter-Decay Ratio Tuning Formulas

Controller      Gain             Integral Time    Derivative Time
P               Kc = 0.5 Kcu     —                —
PI              Kc = 0.45 Kcu    TI = Tu/1.2      —
PID, series     Kc' = 0.6 Kcu    TI' = Tu/2       TD' = Tu/8
PID, parallel   Kc = 0.75 Kcu    TI = Tu/1.6      TD = Tu/10

Notice that the addition of integral mode results in a reduction of 10


percent in the QDR gain between the P and the PI controller tuning
formulas. This is due to the additional lag introduced by the integral
mode. On the other hand, the addition of the derivative mode allows the
controller gain to increase by 20 percent over the proportional controller.
Therein lies the justification for the derivative mode: the increase in the
controllability of the loop. Finally, the derivative and integral times in the
series PID controller formulas show a ratio of 1:4. This is a useful
relationship to keep in mind when tuning PID controllers by trial and
error, that is, in those cases when the ultimate gain and period cannot be
determined.
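The formulas in Table 2-1 can be collected into a small helper (this wrapper is my own; it simply transcribes the table):

```python
def qdr_tuning(kcu, tu):
    """QDR settings from the ultimate gain and period (Table 2-1).
    Modes a controller type does not use are simply omitted."""
    return {
        "P": {"Kc": 0.5 * kcu},
        "PI": {"Kc": 0.45 * kcu, "TI": tu / 1.2},
        "PID series": {"Kc": 0.6 * kcu, "TI": tu / 2.0, "TD": tu / 8.0},
        "PID parallel": {"Kc": 0.75 * kcu, "TI": tu / 1.6, "TD": tu / 10.0},
    }

s = qdr_tuning(kcu=15.0, tu=0.50)   # an example loop
print(s["P"]["Kc"], s["PI"]["Kc"], round(s["PI"]["TI"], 2))   # → 7.5 6.75 0.42
```

The gain is in %C.O./%T.O. and the times are in the same units as the ultimate period.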

Example 2-1. Ultimate Gain Tuning of Steam Heater. Determine the
ultimate gain and period for the temperature control loop of Figure 2-2,
and determine the quarter-decay tuning parameters for a P, a PI, and a PID
controller.

Figure 2-15 shows the determination of the ultimate gain for the
temperature control loop. A 1°C change in set point is used to start the
oscillations. The figure shows responses for the proportional controller
with gains of 8 and 15%C.O./%T.O. (Note: %C.O. = percent of controller
output range, and %T.O. = percent transmitter output range). Since the
gain of 15%C.O./%T.O. causes sustained oscillations, it is the ultimate
gain, and the period of the oscillations is the ultimate period.

Ultimate gain: 15%C.O./%T.O. (= 100/15 = 6.7%PB)
Ultimate period: 0.50 minute (determined in Figure 2-15)

Using the formulas in Table 2-1, the QDR tuning parameters are as
follows:

P controller: Gain = 0.5 (15) = 7.5%C.O./%T.O. (or 13%PB)

PI controller: Gain = 0.45 (15) = 6.75%C.O./%T.O. (or 15%PB)
TI = 0.50/1.2 = 0.42 min

Parallel PID controller:

Gain = 0.75(15) = 11.25%C.O./%T.O. (8.9%PB)
TI = 0.50/1.6 = 0.31 min
TD = 0.50/10 = 0.05 min

Figure 2-15. Determination of Ultimate Gain and Period for Temperature Control Loop on
Steam Heater

Figure 2-16 shows the response of the controller output and of the outlet
process temperature to an increase in process flow for the proportional
controller with the QDR gain of 7.5%C.O./%T.O. and with a gain of
4.0%C.O./%T.O. Similarly, Figs. 2-17 and 2-18 show the responses of the PI
and parallel PID controllers, respectively. In each case, the smaller
proportional gain results in less oscillatory behavior and less initial
movement of the controller output, at the expense of a larger initial
deviation and slower return to the set point. This shows that the desired
response can be obtained by varying the values for the tuning parameters,
particularly the gain, given by the formulas.

Notice the offset in Figure 2-16 and the significant improvement that the
derivative mode produces in the responses of Figure 2-18 over those of
Figure 2-17.

Practical Ultimate Gain Tuning Tips

1. In determining the ultimate gain and period, it is not absolutely
necessary to force the loop to oscillate with constant amplitude.
This is because the ultimate period is not sensitive to the gain as
the loop approaches the ultimate gain. Any oscillation that would
allow you to make a rough estimate of the ultimate period gives
good enough values for the integral and derivative times. You
can then adjust the proportional gain to obtain an acceptable
response. For example, notice in Figure 2-15 that, for the case of a
gain of 8%C.O./%T.O., the period of oscillation is 0.7 minute,
which is only about 40 percent off the actual ultimate period.
Figure 2-16. Proportional Controller Response to an Increase in Process Flow

2. The performance of the feedback controller is not usually very
sensitive to the exact values of the tuning parameters. Thus, when
you adjust the parameters from the values given by the formulas, it
is seldom worth changing them by less than 50 percent.

3. The recommended parameter adjustment policy is to leave the
integral and derivative times fixed at the values you calculated
from the tuning formulas but adjust the gain, up or down, to
obtain the desired response.
Figure 2-17. Proportional-Integral Controller Response to an Increase in Process Flow
(KC = 6.75 and 3.5 %C.O./%T.O.; TI = 0.42 min)

The QDR tuning formulas allow you to tune controllers for a specific
response when the ultimate gain and period of the loop can be
determined. The units that follow present alternative methods for
characterizing the dynamic response of the loop (Unit 3) and for tuning
feedback controllers (Units 4, 5, and 6). Section 2-7 discusses the need for
such alternative methods.
Figure 2-18. Parallel PID Controller Response to an Increase in Process Flow
(KC = 11.25 and 6.0 %C.O./%T.O.; TI = 0.31 min; TD = 0.05 min)

2-7. Need for Alternatives to Ultimate Gain Tuning

Although the ultimate gain tuning method is simple and fast, other
methods for characterizing the dynamic response of feedback control
loops have been developed over the years. These alternative methods are
needed because it is not always possible to determine the ultimate gain
and period of a loop. As pointed out earlier, some simple loops would not
exhibit constant amplitude oscillations with a proportional controller.

The ultimate gain and period, although sufficient to tune most loops, do
not provide insight into which process or control system characteristics
could be modified to improve the feedback controller performance. A
more fundamental method of characterizing process dynamics is needed
to guide such modifications.

There is also a need to develop tuning formulas for responses other than
the quarter-decay ratio response. This is because the set of PI and PID
tuning parameters that produce quarter-decay response are not unique. It
is easy to see that for each setting of the integral and derivative time, there
will usually be a setting of the controller gain that will produce quarter-
decay response. This means there are an infinite number of combinations
of the tuning parameters that satisfy the quarter-decay ratio specification.

The next unit introduces an open-loop method for characterizing the
dynamic response of the process in the loop, while Units 4, 5, and 6 present
tuning formulas that are based on the parameters of the open-loop model.

2-8. Summary

This unit has introduced the concepts behind feedback control, controller
modes, and stability of control loops. The ultimate gain or closed-loop
method of tuning feedback controllers for quarter-decay ratio response
was described and found to be simple and fast, but limited in the
fundamental insight it can provide into the performance of the feedback
controller. Alternative process characterization and tuning methods will
be presented in the units that follow.

EXERCISES

2-1. Imagine that Watt's steam engine, controlled by a flywheel governor, is
being used to drive the main shaft in a nineteenth-century machine shop.
The shop’s various lathes, drills, and other machines are driven by belts that
are connected to the main shaft through manually operated clutches. In this
scenario, identify the controlled variable, the manipulated variable, and the
disturbances for the engine speed controller. Also identify the sensor, and
draw a block diagram for the feedback loop in which you identify each block.

2-2. Repeat Exercise 2-1 for a conventional house oven. What variable does the
cook vary when he or she adjusts the temperature dial?

2-3. How much does the output of a proportional controller change when the
error changes by 5 percent if its gain is:

a. 20% PB?

b. 50% PB?

c. 250% PB?

2-4. A proportional controller with a PB of 20 percent is used to control the
temperature of the steam heater of Figure 2-2. After an increase in process
fluid flow, the heater reaches a new steady state in which the steam valve
position has changed by 8 percent. What is the offset in the outlet
temperature? To eliminate the offset, must the steam valve open or close?
What would the offset be if the controller PB were 10 percent and all other
conditions were the same?

2-5. In testing a PI controller, the proportional gain is set to 0.6%C.O./%T.O.
and the reset time to two minutes. Then a sustained error of 5%T.O. is
applied, and the controller is switched to automatic. Describe
quantitatively how the controller output responds over time, and sketch the
time response.

2-6. Repeat Exercise 2-5 but with a PID controller that has a gain of 1.0%C.O./
%T.O., a reset rate of 0 repeats per minute, and a derivative time of 2.0
minutes. In this case, the error signal applied to the controller is as shown
below, that is, a ramp of 5%T.O. per minute is applied for five minutes.

2-7. A test is made on the temperature control loop for a fired heater. It is
determined that the controller gain required to cause sustained oscillations
is 1.2%C.O./%T.O., and the period of the oscillations is 4.5 min.
Determine the QDR tuning parameters for a PI controller. Report the
controller gain as a proportional band and the reset rate in repeats per
minute.

2-8. Repeat Exercise 2-7 for a PID controller, both series and parallel.

REFERENCES

1. J. G. Ziegler and N. B. Nichols, “Optimum Settings for Automatic
Controllers,” Transactions of the ASME, vol. 64 (Nov. 1942), p. 759.
2. Ibid.
UNIT 3

Open-Loop Characterization of Process Dynamics


This unit shows how to characterize the dynamic response of a process
from open-loop step tests, and how to determine the process gain, time
constant, and dead time from the results of those step tests. These are the
parameters that you will need to tune feedback and feedforward
controllers in the units to follow.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Perform open-loop step tests and analyze their results.

B. Define process gain, time constant, and dead time.

C. Understand process nonlinearity.

D. Determine dynamic parameters for continuous and batch processes.

3-1. Open-Loop Testing: Why and How

Unit 2 showed you how to determine the ultimate gain and period of a
feedback control loop by performing a test with the loop closed, that is,
with the controller on “automatic output.” By contrast, this unit shows
you how to determine the process dynamic parameters by performing a
test with the controller on “manual output,” that is, an open-loop test.
Such tests present you with a more fundamental model of the process than
the ultimate gain and period.

The purpose of an open-loop test is to determine the transfer function of
the process, that is, the relationship between the process output variables
and its input variables. In the case of a feedback control loop the
relationship of most interest is that between the controlled or measured
variable and the manipulated variable. However, the relationship between
the controlled variable and a disturbance can also be determined,
provided that the disturbance variable can be changed and measured. This
unit considers only the manipulated/controlled variable pair, as the
principles of the testing procedure and analysis are the same for any pair
of variables.

To better understand the open-loop test concept, consider the temperature
feedback control loop in the heater sketched in Figure 3-1. When the


Figure 3-1. Sketch of Temperature Control of Steam Heater

controller is switched to “manual output” the loop is interrupted at the
controller, which makes possible the direct manipulation of the controller
output signal or manipulated variable, m. Under these conditions, the
block diagram of Figure 3-2(a) shows the relationship between the
manipulated and measured variables. It is convenient to combine the
blocks that represent the valve, the heater, and the sensor in Figure 3-2(a)
into the single block of Figure 3-2(b) because this emphasizes the two
signals of interest in an open-loop test: the controller output variable, m,
and the transmitter output signal, b.

Notice that the controlled variable C does not appear in the diagram of
Figure 3-2(b). This is because, in practice, the true process variable is not
accessible; what is accessible is the measurement of that variable, that is,
the transmitter output signal b. Similarly, the flow through the control
valve, Fs, does not appear in Figure 3-2(b) because, even if it were
measured, the variable of interest is the controller output signal, m, that is,
the variable that is directly manipulated by the controller.

The procedure for performing an open-loop test is simply to cause a step
change in the process input, m, and record the resulting response of the
transmitter signal, b. The only equipment required to cause the change is
simply the controller itself since its output can be directly manipulated
when it is in the manual state. To record the transmitter signal you will
need a trend recording device with variable chart speed and sensitivity.
The standard trend recorders found in most control rooms are not
appropriate for this purpose because they are usually too slow and not

Figure 3-2. Block Diagram of Feedback Control Loop with Controller on Manual. (a) Showing
the Separate Process Blocks. (b) With all the Field Equipment Combined in a Single Block.

sensitive enough to provide the precision required for analyzing the test
results. Computer and microprocessor-based controllers are ideal for
open-loop testing because they are capable of a more precise change in
their output than are their analog counterparts. They also provide trend
recordings that have adjustable ranges on the measurement and time
scales.

The simplest type of open-loop test is a step test, that is, a sudden and
sustained change in the process input signal m. Figure 3-3 shows a typical
step test. You can obtain more accurate results with pulse testing but at the
expense of considerably more involved analysis. Pulse testing is outside
the scope of this book. The interested reader can find excellent discussions
of pulse testing in the books listed in Appendix A, specifically the texts by
Luyben1 and by Smith and Corripio.2 Sinusoidal testing is not at all
appropriate for most industrial processes because such processes are
usually too slow.

3-2. Process Parameters from Step Test

This section shows you how to extract the process characteristic
parameters from the results of a step test using the step test of Figure 3-3
as an example. The parameters to be estimated from the results of a step
test are the process gain, the time constant, and the dead time. Most

Figure 3-3. Step Response of Steam Heater

controller tuning methods require these three parameters for estimating
the controller parameters, as the remaining units in this book will show.
For a given process, the gain indicates how much the controlled variable
changes for a given change in controller output; the time constant indicates
how fast the controlled variable changes, and the dead time indicates how
long it takes for the controller to detect the onset of change in transmitter
output.

Process Gain

The steady-state gain, or simply the gain, is one of the most important
parameters of a process. It is a measure of the sensitivity of the process
output to changes in its input. The gain is defined as the steady-state
change in output divided by the change in input that caused it:

K = (Change in output) / (Change in input) (3-1)

where K is the process gain.

The change in output is measured after the process reaches a new steady
state (see Figure 3-3), assuming that the process is self-regulating. A self-
regulating process is one that reaches a new steady state when it is driven
by a steady change in input. There are two types of processes that are not
self-regulating: imbalanced or integrating processes and open-loop
unstable processes. A typical example of an imbalanced process is the
liquid level in a tank, and an example of an unstable process is an
exothermic chemical reactor. It is obviously impractical to perform step
tests on processes that are not self-regulating. Fortunately, most processes
are self-regulating.
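The difference between the two behaviors can be seen in a toy simulation (entirely my own sketch, with deliberately simple unit-parameter models):

```python
def step_response(integrating, dt=0.01, t_end=10.0):
    """Final value of a unit-gain process after a sustained unit step.
    Self-regulating: first-order lag; integrating: pure accumulator."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        y += dt * (1.0 if integrating else (1.0 - y))
    return y

# The self-regulating process settles near 1.0; the integrating one
# (think of a tank level with mismatched flows) just keeps ramping.
print(round(step_response(False), 2), round(step_response(True), 2))
# → 1.0 10.0
```

The integrating process never reaches a new steady state, so a sustained step test gives it no final value from which to compute a gain by Eq. 3-1.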

The units of process gain are transmitter output divided by controller
output. For a given process, the numerical value of the gain is the same
whether it is expressed in mA/mA (electronic controller), psi/psi

(pneumatic controller), or percent transmitter output per percent


controller output (%T.O./%C.O.). The units most commonly used with
modern digital controllers are %T.O./%C.O, and they will be used
throughout this book. Because the controller gain is dimensionless, the
process gain to be used in the tuning formulas must also be dimensionless.

The gain defined by Eq. 3-1 includes the gains of the transmitter, the
process, and the control valve. This is because, as illustrated in Figure
3-2(b), these three blocks are essentially combined into one. It is common
practice, however, to express the transmitter signal in the engineering
units of the measured variable, in which case it is necessary to convert the
value of the gain to dimensionless units. This is illustrated in Example 3-1.

Example 3-1. Estimation of the Gain from the Step Response. The
step test of Figure 3-3 shows that a 5 percent change in controller output
causes a steady-state change in temperature from 90°C to 95°C. First, the
change in temperature must be converted to a percentage of transmitter
output range. Assume the transmitter range for the steam heater is 50°C to
150°C. Thus, the change in transmitter output signal is as follows:

(95 - 90)°C × [(100 - 0)%T.O. / (150 - 50)°C] = 5 %T.O.

Thus, the dimensionless process gain is as follows:

K = 5%T.O./5%C.O. = 1.0 %T.O./%C.O.

By using percent of range as the units of the signals, the value of the gain is
equally valid for electronic, pneumatic, and computer-based controllers.

Example 3-1 illustrates that it is important to keep track of the units of the
gain when tuning controllers.
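The unit conversion of Example 3-1 is easy to automate. The following Python sketch is illustrative only (the function and argument names are not from this book); it converts a steady-state change measured in engineering units to the dimensionless gain in %T.O./%C.O.:

```python
def dimensionless_gain(delta_output_eng, delta_co_pct, tx_lo, tx_hi):
    """Convert a steady-state output change in engineering units to a
    dimensionless process gain in %T.O./%C.O.

    delta_output_eng -- change in the measured variable (engineering units)
    delta_co_pct     -- controller-output step that caused it, %C.O.
    tx_lo, tx_hi     -- transmitter range, same units as the output change
    """
    # Express the output change as percent of transmitter range (%T.O.)
    delta_to_pct = 100.0 * delta_output_eng / (tx_hi - tx_lo)
    return delta_to_pct / delta_co_pct

# Steam heater of Example 3-1: 90 to 95 deg C for a 5 %C.O. step,
# transmitter range 50-150 deg C
K = dimensionless_gain(95.0 - 90.0, 5.0, 50.0, 150.0)
print(K)  # 1.0 %T.O./%C.O.
```

The same function applies to any self-regulating loop once the transmitter range and the size of the controller-output step are known.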

3-3. Estimating Time Constant and Dead Time

Just as the gain is a measure of the steady-state sensitivity of the controlled
process, the time constant and the dead time are measures of its dynamic
response. The time constant is a measure of how long it takes the process
to reach a new steady state after the initial change in output is detected.
The dead time is a measure of how long it takes for the initial change in
output to be detected after the occurrence of the input change. As shall be
seen later in Unit 4, the ratio of the process dead time to its time constant is
a measure of the controllability of a feedback control loop.

There are several methods for estimating the process time constant and
dead time from the step response. The first of these methods was
originally proposed by Ziegler and Nichols.3 Let’s call this method the
“tangent” method. The other two methods, the “tangent-and-point”
method and the “two-point” method, give more reproducible results than
the tangent method. The constructions that are required to estimate the
time constant and the dead time are shown in Figure 3-4, which is
basically a reproduction of the step response of Figure 3-3 but showing the
constructions needed to analyze it.

Tangent Method

The tangent method requires you to draw the tangent to the response line
at the point of maximum rate of change or “inflection point,” as shown in
Figure 3-4. The time constant is then defined as the distance in the time
axis between the point where the tangent crosses the initial steady state of
the output variable and the point where it crosses the new steady-state
value. The dead time is the distance in the time axis between the
occurrence of the input step change and the point where the tangent line
crosses the initial steady state. These estimates are indicated in Figure 3-4.
The basic problem with the tangent method is that the drawing of the
tangent is not very reproducible, which creates significant variance in the
estimates of the process time constant and dead time. Another problem
with the tangent method is that its estimate of the process time constant is
too long, and thus it results in tighter controller tuning than the
tangent-and-point and two-point methods.

Figure 3-4. Graphical Determination of Time Constant and Dead Time from Step Response

Tangent-and-Point Method

The tangent-and-point method differs from the tangent method in the
estimate it provides of the time constant, but it estimates the dead time in
exactly the same way. In this method, it is necessary to determine the point
at which the step response reaches 63.2 percent of its total steady-state
change. This point is marked as t1 in Figure 3-4. The time constant is then
the period of time between the point where the tangent line crosses the
initial steady state and the point where the response reaches 63.2 percent
of the total change. Thus, the time constant is calculated by the following:
τ = t1 - t0    (3-2)
where τ is the process time constant and t0 is the dead time.

The tangent-and-point method results in a shorter estimate of the time
constant and thus results in more conservative controller tuning than the
tangent method. However, notice that both estimates of the dead time and
the time constant are dependent on how the tangent line is drawn. This is
because the 63.2 percent point fixes only the sum of the dead time and the
time constant, making each individual estimate dependent on the location
of the tangent line, which is the least reproducible step of the procedure.
Because of this, Dr. Cecil Smith proposed the two-point method, which
does not require the tangent line to be drawn.4

Two-Point Method

The two-point method makes use of the 63.2 percent point defined in the
tangent-and-point method as well as one other point: where the step
response reaches 28.3 percent of its total steady-state change. This point is
marked in Figure 3-4 as t2. Actually, any two points in the region of
maximum rate of change of the response would do, but the two points
Smith chose result in the following simple estimation formulas for the
time constant and the dead time:
τ = 1.5 (t1 - t2) (3-3)
t0 = t1 - τ (3-4)

The reason the two points should be in the region of maximum rate of
change is that otherwise small errors in the ordinate would cause large
errors in the estimates of t1 and t2. Compared to the tangent-and-point
method, the two-point method results in longer estimates of the dead time
and shorter estimates of the time constant, but it is more reproducible
because it does not require the tangent line to be drawn. This feature is
particularly useful when the response takes the form of sampled values
stored in a computer. In this case, the values of t1 and t2 can be determined
by interpolation, and it is not even necessary to plot the response. In fact,
the computer could easily be programmed to compute the estimates of the
time constant and the dead time from the recorded step response data.
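Such a program can be sketched in a few lines of Python. This is an illustrative implementation, not one from the book; it assumes the response is monotonic and that the initial and final steady-state values are known:

```python
def two_point_fopdt(t, y, y0, yss):
    """Smith's two-point method: estimate the FOPDT time constant and
    dead time from a sampled step response.

    t, y -- lists of sample times and controlled-variable values
    y0   -- initial steady-state value
    yss  -- final steady-state value
    """
    def crossing_time(frac):
        # Time at which the response reaches frac of its total change,
        # found by linear interpolation between adjacent samples.
        target = y0 + frac * (yss - y0)
        for i in range(1, len(t)):
            lo, hi = y[i - 1], y[i]
            if (lo - target) * (hi - target) <= 0.0 and lo != hi:
                return t[i - 1] + (target - lo) * (t[i] - t[i - 1]) / (hi - lo)
        raise ValueError("response never reaches the target fraction")

    t2 = crossing_time(0.283)   # 28.3 percent point
    t1 = crossing_time(0.632)   # 63.2 percent point
    tau = 1.5 * (t1 - t2)       # Eq. 3-3
    t0 = t1 - tau               # Eq. 3-4
    return tau, t0
```

Applied to sampled values of a step response such as the one in Figure 3-4, this function reproduces the two-point estimates without any graphical construction.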

Example 3-2 illustrates the three methods for determining the dynamic
parameters of the process from the step response.

Example 3-2. Gain and Time Constant of Steam Heater. The step
response of Figure 3-4 is for a step change of 5 percent in the output of the
temperature controller of the steam heater shown in Figure 3-1. This
response is an expanded version of the response of Figure 3-3, which was
used in Example 3-1 to determine the process gain. As in that example, the
steady-state change in temperature is 5°C, or 5 percent of the transmitter
range of 50°C to 150°C. In Example 3-1, the process gain was determined
to be 1.0 %T.O./%C.O. In this example, the process time constant and
dead time are estimated by each of the three methods just discussed.

Tangent Method. Figure 3-4 shows the necessary construction of the tangent
to the response at the point of maximum rate of change (inflection point).
The values of the dead time and time constant are then determined from
the intersection of the tangent line with the initial and final steady-state
lines. From Figure 3-4, we get:
Dead time plus time constant: 0.98 min
Dead time: t0 = 0.12 min
Time constant: τ = 0.98 - 0.12 = 0.86 min
Tangent-and-Point Method. The estimate of the dead time is the same as for
the tangent method. To estimate the time constant, first determine point t1
at which the response reaches 63.2 percent of the total steady-state change:
T = 90.0 + 0.632(5.0) = 93.2°C
From Figure 3-4, we get:
t1 = 0.73 min
Time constant: τ = 0.73 - 0.12 = 0.61 min
Two-Point Method. In addition to the 63.2 percent point, which was
determined in the previous method, now determine the 28.3 percent point:
T = 90.0 + 0.283(5.0) = 91.4°C
From Figure 3-4, we get:
t2 = 0.36 min
Time constant, from Eq. 3-3: τ = 1.5(0.73 - 0.36) = 0.56 min
Dead time, from Eq. 3-4: t0 = 0.73 - 0.56 = 0.17 min

As mentioned, the two-point method results in a higher estimate of the
dead time and a lower estimate of the time constant than the other two
methods. The tangent method is at the other extreme. Of the three
methods, the two-point method is the easiest to use because it only
requires you to read two points from the response curve.

3-4. Physical Significance of the Time Constant

Although, as Section 3-3 showed, the process time constant and dead time
can be estimated from an open-loop step test, it is important to examine
the physical significance of these two dynamic measures of the process.
Doing so will enable us to estimate the process time constant and dead
time from physical process characteristics (e.g., volumes, flow rates, valve
sizes) when it is not convenient to perform the step test. This section
discusses the time constant, and Section 3-5 explores the dead time.

To understand the physical significance of the time constant, consider
some of the physical systems whose dynamic response can be
characterized by a single time constant and no dead time. Such systems
consist of a single capacitance to store mass, energy, momentum or
electricity and a conductance to the flow of these quantities. Such single
capacitance/conductance systems are called first-order systems or
first-order lags. Figure 3-5 presents several examples of first-order
systems.

The time constant of a first-order system is defined as the ratio of its
capacitance to its conductance or the product of the capacitance times the
resistance (the resistance is the reciprocal of the conductance):

τ = Capacitance/Conductance = Capacitance × Resistance    (3-5)

The concepts of capacitance, resistance, and conductance are best
understood by analyzing the physical systems of Figure 3-5. In each of
them there is a physical quantity that is conserved, a rate of flow of that
quantity, and a potential that drives the flow. The capacitance is defined by
the amount of quantity conserved per unit of potential:

Capacitance = (Amount of quantity conserved)/Potential    (3-6)

The conductance is the ratio of the flow to the potential that drives it:

Conductance = (Flow of quantity conserved)/Potential    (3-7)

Figure 3-5. Typical Physical Systems with First-Order Dynamic Response. (a) Electrical R-C
Circuit. (b) Liquid Storage Tank. (c) Gas Surge Tank. (d) Blending Tank.

To obtain more physical meanings for the terms capacitance, resistance,
and conductance, consider each of the four physical systems of Figure 3-5:
the electrical circuit, the liquid storage tank, the gas surge tank, and the
blending tank. These are discussed in the next four subsections.

Electrical System

For this system, the quantity conserved is electric charge, the potential is
electric voltage, and the flow is the electric current. The capacitance is
provided by the ability of the capacitor to store electric charge, and the
conductance is the reciprocal of the resistance of the electrical resistor. The
time constant is then given by:
τ = RC (3-8)

where

R = the resistance of the electrical resistor, ohms

C = the capacitance of the electrical capacitor, farads

and the time constant is in seconds.

Liquid Storage Tank

In this common process system, the quantity conserved is the volume of
liquid (assuming constant density), the capacitance is provided by the
ability of the tank to store liquid, and the potential for flow through the
valve is provided by the level of liquid in the tank. The capacitance is the
volume of liquid per unit level, that is, the cross-sectional area of the tank,
and the conductance is the change in flow through the valve per unit
change in level. The time constant can then be estimated by:

τ = A/Kv (3-9)

where

A = the cross-sectional area of the tank, ft2

Kv = the conductance of the valve, (ft3/min)/ft

The conductance of the valve depends on the valve size and the
percentage of lift. It is usually referred to in terms of flow per unit pressure
drop. Notice that the change in pressure drop across the valve per unit
change in level can be calculated by multiplying the density of the liquid
by the local acceleration of gravity.

Gas Surge Tank

This system is analogous to the liquid storage tank. The quantity
conserved is the mass of gas, the potential that drives the flow through the
valve is the pressure in the tank, and the capacitance is provided by the
ability of the tank to store gas as it is compressed. The capacitance can be
calculated by the formula MV/zRT lb/psi, where V is the volume of the
tank, R is the ideal gas constant (10.73 psi-ft3/lbmole-°R), z is the
compressibility of the gas, M is its molecular weight, and T is its absolute
temperature. The conductance of the valve is expressed in change of mass
flow per unit change in pressure drop across the valve. The time constant
of the tank can be estimated by the formula:
τ = (MV/zRT)/Kv (3-10)
where
Kv = the conductance of the valve, (lb/min)/psi

Blending Tank

The change of temperature and composition in a blending tank is
governed by the phenomenon of convection transfer of energy and mass,
respectively. Assuming that the tank is perfectly mixed, the capacitance is
provided by the ability of the material in the tank (usually a liquid) to
store the energy and mass of the various components of the mixture
entering the tank. The conductance is the total flow through the tank. The
potential for energy transfer is the temperature, and for mass transfer the
potential is the concentration of each component. In the absence of
chemical reactions and heat transfer through the walls of the blender, the
time constant for both temperature and composition is given by the
following:

τ = V/F (3-11)

where

V = the volume of the tank, ft3

F = the total flow through the tank, ft3/min

If there is a chemical reaction, the time constant for the concentration of
reactants is decreased. This is because the conductance is increased to the
sum (F + kV) where k is the reaction coefficient, which is defined here as
the change in reaction rate divided by the change in the reactant
concentration. The conductances are added because the processes of
reaction and convection occur in parallel.

Similarly, if there is heat transfer to the surroundings, or to a coil or jacket,
the time constant for temperature changes is reduced. This is because the
conductance is increased to the sum [F + (UA/ρCp)], where U is the
coefficient of heat transfer (Btu/min- ft2-°F), A is the heat transfer area
(ft2), ρ is the density of the fluid (lb/ft3), and Cp is the heat capacity of the
fluid (Btu/lb-°F). In this case, the conductances are additive because the
processes of conduction and convection occur in parallel.
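These three cases (plain blending, reaction, and heat transfer) can be collected into one short Python sketch. The function name and the sample numbers are illustrative, not from the book; the parallel conductances simply add in the denominator:

```python
def blending_tau(V, F, k=0.0, UA=0.0, rho_cp=1.0):
    """Time constant of a perfectly mixed tank, per Eq. 3-11 and the
    parallel-conductance corrections discussed above.

    V      -- tank volume, ft3
    F      -- total flow through the tank, ft3/min
    k      -- reaction coefficient, 1/min (composition only; 0 for none)
    UA     -- heat-transfer conductance U*A, Btu/min-degF (temperature only)
    rho_cp -- fluid density times heat capacity, Btu/ft3-degF

    Use k for a composition time constant or UA for a temperature time
    constant, not both in the same call.
    """
    # Conductances in parallel add: flow-through plus reaction plus heat loss
    return V / (F + k * V + UA / rho_cp)

tau_mix = blending_tau(V=100.0, F=20.0)           # plain blending: 5.0 min
tau_rxn = blending_tau(V=100.0, F=20.0, k=0.05)   # with reaction: 4.0 min
```

The example values show the effect described in the text: adding a parallel conductance (reaction or heat transfer) always shortens the time constant.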

For the preceding examples of first-order processes the time constant is
estimated from process parameters, and thus a dynamic test on the process
is not needed. For more complex processes such as distillation columns
and heat exchangers, the time constant cannot be estimated because these
processes represent higher-order systems. That is, they are made up of
many resistance/capacitance combinations in series and in parallel. For
these systems, the only recourse is to perform a dynamic test such as the
one presented earlier in this unit.

Example 3-3. Estimation of the Time Constant of a Surge Tank. The
surge tank of Figure 3-5c is for an air compressor. It runs at a temperature
of 150°F and has a volume of 10 ft3. The valve can pass a flow of 100 lb/hr
at a pressure drop of 5 psi when the pressure in the tank is 30 psig.
Estimate the time constant of the tank.

The capacitance of the tank is its ability to store air as its density changes
with pressure, which is the potential for flow. Assuming that air at 30 psig
behaves as an ideal gas (z = 1) and using the fact that its molecular weight,
M, is 29, the capacitance is as follows:

Capacitance = Vρ/P = VM/RT = (10)(29)/[(10.73)(150 + 460)] = 0.0443 lb/psi

You can estimate the conductance of the valve using the formulas given by
valve manufacturers for sizing the valves. Because the pressure drop
through the valve is small compared with the pressure in the tank, the
flow is “subcritical,” and the conductance is given by the following
formula:

Kv = W(1 + ∆Pv/P)/(2∆Pv) = (100/60)[1 + 5/(30 + 14.7)]/[(2)(5)] = 0.1853 (lb/min)/psi

The time constant is then:

τ = 0.0443/0.1853 = 0.24 min (14.3 s)

The conductance calculated for the valve is the change in gas flow per unit
change in tank pressure, P. It takes into account the variation in gas
density with pressure and the variation in flow with the square root of the
product of density times the pressure drop across the valve, ∆Pv . For
critical flow, when the pressure drop across the valve is more than one half
the upstream absolute pressure, the conductance can be calculated by the
following formula:

Kv = W/P
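The arithmetic of Example 3-3 fits in a short Python sketch. The function and variable names are illustrative only, and the subcritical/critical switch follows the two conductance formulas given above:

```python
R_GAS = 10.73  # ideal gas constant, psi-ft3/lbmole-degR

def gas_tank_tau(V, M, T_R, P_psig, W_lb_per_hr, dPv, z=1.0):
    """Estimate the time constant of a gas surge tank (Eq. 3-10).

    V -- tank volume, ft3; M -- gas molecular weight
    T_R -- absolute temperature, degR; P_psig -- tank pressure, psig
    W_lb_per_hr -- flow through the valve, lb/hr
    dPv -- pressure drop across the valve, psi; z -- compressibility
    """
    P_abs = P_psig + 14.7
    capacitance = V * M / (z * R_GAS * T_R)   # lb/psi
    W = W_lb_per_hr / 60.0                    # lb/min
    if dPv < 0.5 * P_abs:
        # Subcritical flow conductance
        Kv = W * (1.0 + dPv / P_abs) / (2.0 * dPv)
    else:
        # Critical flow conductance
        Kv = W / P_abs
    return capacitance / Kv                   # minutes

tau = gas_tank_tau(V=10.0, M=29.0, T_R=150.0 + 460.0,
                   P_psig=30.0, W_lb_per_hr=100.0, dPv=5.0)
print(round(tau, 2))  # 0.24 min
```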

3-5. Physical Significance of the Dead Time

Pure dead time, also known as transportation lag or time delay, occurs
when the process variable is transported from one point to another, hence
the term transportation lag. At any point in time, the variable downstream
is what the variable upstream was one dead time before, hence the term
time delay. When the variable first starts changing at the upstream point, it
takes one dead time before the downstream variable starts changing,
hence the term dead time. These concepts are all illustrated in Figure 3-6.
The dead time can be estimated using the following formula:

t0 = Distance/Velocity    (3-12)

Figure 3-6. Transportation Lag (Dead Time or Time Delay). Physical Occurrence and Time
Response.

Different physical variables travel at different velocities, as follows:

• Electric voltage and current travel at the velocity of light,
300,000 km/s or 984,000,000 ft/s.

• Pressure and flow travel at the velocity of sound in the fluid, e.g.,
340 m/s or 1,100 ft/s for air at ambient temperature.

• Temperature, composition, and other fluid properties travel at the
velocity of the fluid, up to about 5 m/s (15 ft/s) for liquids and up
to about 60 m/s (200 ft/s) for gases.

• Solid properties travel at the velocity of the solid, e.g., paper in a
paper machine, coal in a conveyor.

These numbers show that, for the reasonable distances that are typical of
process control systems, pure dead time is only significant for
temperature, composition, and other fluid and solid properties. The
velocity of the fluid in a pipe can be calculated using the following
formula:

v = F/Ap (3-13)

where

v = the average velocity, ft/s

F = the volumetric flow, ft3/s

Ap = the cross-sectional area of the pipe, ft2

Given that, as we shall see shortly, the dead time makes a feedback loop
less controllable, most process control loops are designed to reduce the
dead time as much as possible. Dead time can be reduced by installing the
sensor as close to the equipment as possible, by using electronic instead of
pneumatic instrumentation, and by other means of reducing the distance
or increasing the speed of transmission.

Pure dead time is usually not significant for most processes. The process
dead time that is estimated from the response to the step test arises from a
phenomenon that is not necessarily transportation lag, but rather from the
presence of two or more first-order processes in series (e.g., the trays in a
distillation column). When you model these processes with a first-order
model, you need the dead time to represent the delay caused by the
multiple lags in series. As an example, Figure 3-7 shows the response of
the composition in a blending train when it consists of one, two, five, and
nine tanks in series. It assumes that the total blending volume is the same,
for example, each of the five tanks has one-fifth the volume of the single
tank. In the limit, an infinite number of infinitesimal tanks in series results
in a pure dead time that is equal to the time constant of the single tank,
that is, the total volume divided by the volumetric flow.

Most real processes fall somewhere between the two extremes of first-
order (perfectly mixed) processes and transportation (unmixed) processes.
The first-order-plus-dead-time (FOPDT) model is the simplest model that
can be used to characterize such processes.
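The trend of Figure 3-7 can be reproduced with the standard analytical step response of n identical perfectly mixed tanks in series, holding the combined residence time constant. This Python sketch is illustrative (the function name is not from the book):

```python
from math import exp, factorial

def tanks_in_series_response(t, n, tau_total):
    """Fraction of the total steady-state change reached at time t by a
    train of n identical perfectly mixed tanks in series whose combined
    residence time is tau_total (each tank has time constant tau_total/n).
    """
    a = n * t / tau_total
    # Step response of n equal first-order lags in series
    return 1.0 - exp(-a) * sum(a**k / factorial(k) for k in range(n))

# One tank starts responding immediately; nine tanks show an apparent
# dead time, approaching a pure delay as n grows.
for n in (1, 2, 5, 9):
    print(n, round(tanks_in_series_response(0.2, n, 1.0), 3))
```

Running the loop shows the response at an early time shrinking toward zero as n increases, which is exactly the apparent dead time discussed above.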

Figure 3-7. Response of Composition Out of a Train of Blending Tanks in Series. Curves are for
One, Two, Five, and Nine Tanks in Series, Keeping in Each Case the Total Volume of All the
Tanks the Same.

Example 3-4. Estimation of Dead Time. Estimate the dead time of the
temperature of a liquid flowing through a one-inch standard pipe at 10
gpm (gpm = gallons per minute). The distance that the fluid must travel is
100 feet. A pipe manual or engineering handbook gives the cross-sectional
area of the one-inch standard pipe: Ap = 0.00600 ft2. The velocity of the
fluid in the pipe is then the following:

v = (10 gpm)/[(7.48 gal/ft3)(60 s/min)(0.00600 ft2)]

= 3.71 ft/s

Dead time: t0 = (100 ft)/(3.71 ft/s) = 26.9 s (0.45 min)
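Eqs. 3-12 and 3-13, with the unit conversions of Example 3-4, fit in a few lines of Python (the names are illustrative, not from the book):

```python
GAL_PER_FT3 = 7.48  # gallons per cubic foot

def pipe_dead_time(flow_gpm, pipe_area_ft2, distance_ft):
    """Transportation lag of a fluid property carried down a pipe:
    dead time = distance / velocity (Eqs. 3-12 and 3-13)."""
    flow_ft3_per_s = flow_gpm / (GAL_PER_FT3 * 60.0)
    velocity = flow_ft3_per_s / pipe_area_ft2   # ft/s, Eq. 3-13
    return distance_ft / velocity               # seconds, Eq. 3-12

t0 = pipe_dead_time(flow_gpm=10.0, pipe_area_ft2=0.00600, distance_ft=100.0)
print(round(t0, 1))  # 26.9 s
```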

3-6. Effect of Process Nonlinearities

A common characteristic of most chemical processes is that they are
nonlinear. There are in general two types of nonlinearities: those that arise
from the variation of dynamic parameters with different operating
conditions and those that result from saturation of the final control
elements, for example, control valves driven against their upper or lower
operating limits.

As process operating conditions change, the resulting variation in the
process gain, time constant, and dead time causes the controller
performance to vary as well. Because of this, a controller is usually tuned
so its performance is best at the design operating point and acceptable
over the expected range of operating conditions.

The formulas provided in the preceding sections of this unit show that, for
concentration and temperature, the time constant and the dead time vary
with process throughput. Eqs. 3-11, 3-12, and 3-13 show that the time
constant and the dead time are inversely proportional to the flow and thus
to the throughput. Eqs. 3-9 and 3-10 also show that, for liquid level and
gas pressure, the time constant varies with the valve conductance, Kv ,
which usually varies since it is a function of the valve characteristics and
of the pressure drop across the valve. Control valve characteristics are
usually selected to maintain the process gain constant, which, for liquid
level and gas pressure, is equivalent to keeping the valve conductance
constant (the valve gain is the reciprocal of the valve conductance).

Of the three parameters of a process, the gain has the greatest influence on
the performance of the control system. Such devices as equal-percentage
control valve characteristics are used to ensure that the process gain is as
constant as possible. The equal-percentage characteristic, shown in
Figure 3-8, is particularly useful for this purpose because the gain of most
rate processes (e.g., fluid flow, heat transfer, mass transfer) decreases as
the flow increases, that is, as the valve opens. As Figure 3-8 shows, the
gain or sensitivity of an equal-percentage valve increases as the valve is
opened, which compensates for the decrease in the process gain.

Reset Windup

The second type of process nonlinearity is caused by saturation of the
controller output and of the final control element, not necessarily at the
same points. To varying degrees, saturation gives rise to the problem
known as reset windup, which occurs when the reset or integral mode
drives the controller output against one of its limits. Reset windup is
worse when the controller output limit is different from the corresponding
limit of its destination, for example, the position of the control valve. As an
example, in a pneumatic control installation control valves operate in the
range of 3 to 15 psig air pressure, but if the controllers are not properly
protected against windup they can operate between 0 and 20 psig.

Reset windup is more common in batch processes and during the start-up
and shutdown of continuous processes, but when you are tuning
controllers you should always keep the possibility of windup in mind.

Some problems that are apparently tuning problems are really caused by
unexpected reset windup. Unit 4 looks at reset windup in more detail.

Example 3-5 illustrates the variation of the process gain in a steam heater.
It takes advantage of the fact that for the heater the gain can be calculated
from a simple steady-state energy balance on the heater.

Figure 3-8. Equal-Percentage Characteristics of a Control Valve



Example 3-5. Variation in Steam Heater Gain with Process Flow. At
design conditions, the process flow through the heater of Figure 3-1 is
F=12 kg/s, its inlet temperature is Ti=50°C, and the desired temperature at
which it is to be heated is C=90°C. The process fluid has a specific heat of
Cp=3.75 kJ/kg-°C, and the steam supplies Hv=2250 kJ/kg upon
condensing. Heat losses to the surroundings can be ignored. The
temperature transmitter range is 50°C to 150°C, and the control valve is
linear, with constant pressure drop, and delivers 2.0 kg/s of steam when
fully opened. Calculate the gain of the heater in terms of the sensitivity of
the outlet temperature to changes in steam flow.

Based on the response to a step test, Example 3-1 determined that the gain
of the heater is 1.0%T.O./%C.O. at the design conditions. In this example
we will verify this value from a steady-state energy balance on the heater
and study its dependence on process flow.

An energy balance on the heater, ignoring heat losses, yields the following
formula:
FCp(T - Ti) = FsHv
where Fs is the steam flow, and the other terms have been defined in our
initial statement of the problem. The desired gain is the steady-state
change in outlet temperature per unit change in steam flow:

K = (Change in outlet temperature)/(Change in steam flow) = Hv/(FCp)

Notice that the gain is inversely proportional to the process flow F. From
this formula, we know that the units of the gain are °C/(kg/s). To convert
them to %T.O./%C.O. (dimensionless), multiply this number by the range
of the valve (2.0 kg/s) and divide the result by the span of the transmitter
(100°C). This results in the following table:

F, kg/s    K, °C/(kg/s)    K, %T.O./%C.O.
 3.0          200.0              4.0
 6.0          100.0              2.0
12.0           50.0              1.0
18.0           33.3              0.67
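The table comes straight from the energy balance, so it can be reproduced with a short Python sketch. The function name is illustrative; the default parameter values are the design data of Example 3-5:

```python
def heater_gain(F, Cp=3.75, Hv=2250.0, valve_range=2.0, tx_span=100.0):
    """Steady-state gain of the steam heater from the energy balance.

    F -- process flow, kg/s; Cp -- specific heat, kJ/kg-degC
    Hv -- latent heat of the steam, kJ/kg
    valve_range -- steam flow at full valve opening, kg/s
    tx_span -- transmitter span, degC
    Returns (K in degC/(kg/s), K in %T.O./%C.O.).
    """
    K_eng = Hv / (F * Cp)                       # degC per (kg/s) of steam
    K_dimless = K_eng * valve_range / tx_span   # dimensionless gain
    return K_eng, K_dimless

# Reproduce the table: gain falls as process flow rises
for F in (3.0, 6.0, 12.0, 18.0):
    K_eng, K_dimless = heater_gain(F)
    print(F, round(K_eng, 1), round(K_dimless, 2))
```

The 1/F dependence printed by the loop is the gain variation that an equal-percentage valve characteristic is chosen to compensate.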

Example 3-5 shows the variation of the process gain, which indicates that
the steam heater is nonlinear. As mentioned earlier, the decrease in process
gain with an increase in flow is characteristic of many process control
systems. This explains the popularity of equal-percentage control valves,
which compensate exactly for this gain variation.

3-7. Testing Batch Processes

Dynamic testing of batch processes differs from continuous process testing
in that the base conditions around which the process is disturbed are not
constant with time. The step-testing procedure for a continuous process
assumes that the reference for the test is constant, but this is not
necessarily true when you are testing batch processes. This section
demonstrates that the step test can still be performed on a batch process as
long as the process parameters are estimated by taking the difference
between the response to the test and a nonconstant base response.

The base response is the controlled variable profile for the batch when the
manipulated variable is maintained at the base or design conditions. Then,
when you apply the step change to the manipulated variable, you obtain a
different profile for the controlled variable. You must then estimate the
process parameters from the difference between the two profiles. This
procedure is demonstrated in Example 3-6.

Example 3-6. Step Testing of a Vacuum Pan. A key step in the
production of cane sugar is the separation of the sugar from impurities.
This is done by batch crystallization in a vacuum pan. A sketch of the
vacuum pan is shown in Figure 3-9. To produce sugar crystals of uniform
size in a reasonable time, it is important to control the supersaturation of
sugar in the massecuite (mother liquor) as well as its mobility (viscosity).
The manipulated variables are the syrup feed rate and the steam rate. The
syrup is fed continuously during the batch to replenish the sugar in the
massecuite, and the steam condenses in a calandria (donut-shaped basket
of heat exchange tubes) so as to evaporate the water fed with the syrup.

Figure 3-9. Sketch of Vacuum Pan Used for Batch Crystallization of Sugar

Figure 3-10 shows the base profiles of the supersaturation and viscosity.
The figure also shows the corresponding profiles after a step change in
steam rate is applied. The step response is then given by the difference
between the two curves. As demonstrated by a computer simulation of a
vacuum pan reported by Qi Liwu and Corripio, these curves would be
difficult to obtain on an actual pan because they involve running two
batches with the steam valve held constant.5

Figure 3-10. Base Profile and Profile After a Step Change in the Steam Valve of Vacuum Pan.
The Step Response is the Difference Between the Two Profiles.

3-8. Summary

This unit showed you how to perform and analyze a process step test to
determine the parameters of a first-order-plus-dead-time (FOPDT) model
of the process. These parameters are the gain, the time constant, and the
dead time. It also discussed the physical significance of these parameters
and showed how to estimate them from process design parameters for
some simple process loops. The units to follow will use these estimated
dynamic parameters to design and tune feedback, feedforward, and
multivariable controllers.

Regardless of the method you use to measure the dynamic characteristics
of a process, it is important to realize that even a rough estimate of the
process dynamic parameters can be quite helpful in tuning and
troubleshooting process control systems.

EXERCISES

3-1. Summarize the procedure for performing a step test on a process.

3-2. What are the parameters of a first-order-plus-dead-time (FOPDT) model of
the process? Briefly describe each one.

3-3. A change of 100 lb/hr in the set point of a steam flow controller for the
reboiler of a distillation column results in a change in the bottoms
temperature of 2°F. The steam flow transmitter has a range of 0 to
5,000 lb/hr, and the temperature transmitter has a calibrated range of
200°F to 250°F. Calculate the process gain for the temperature loop in
°F/(lb/hr) and in %T.O./%C.O.

3-4. When tuning feedforward control systems you need the FOPDT
parameters of the process for step changes both in the disturbance and in
the manipulated variable. The figure below shows the response of the steam
heater outlet temperature of Figure 3-1 to a step change of 2 kg/s in process
flow. Determine the gain, time constant, and dead time for this response
using the slope method and the slope-and-point method.

Response of Heater Outlet Temperature to a Change in Process Flow



3-5. Do Exercise 3-4 using the two-point method.

3-6. A passive low-pass filter can be built with a resistor and capacitor. The
maximum sizes of these two components for use in printed circuit boards
are, respectively, 10 megohms (million ohms) and 100 microfarads
(millionth of farad). What then would be the maximum time constant of a
filter built with these components?

3-7. The surge tank of Figure 3-5b has an area of 50 ft2, and the valve has a
conductance of 50 gpm/ft of level change (1 ft3 = 7.48 gallons). Estimate the
time constant of the response of the level.

3-8. The blender of Figure 3-5d has a volume of 2,000 gallons. Calculate the
time constant of the composition response for product flows of (a) 50 gpm,
(b) 500 gpm, and (c) 5,000 gpm.

3-9. The blender of Figure 3-5d mixes 100 gpm of concentrated solution at
20 lb/gallon with 400 gpm of dilute solution at 2 lb/gallon. Calculate the
steady-state product concentration in lb/gallon. How much would the
outlet concentration change if the concentrated solution rate were to change
to 110 gpm, all other conditions remaining the same? Calculate the process
gain for the suggested change.

3-10. Repeat Exercise 3-9 assuming that the initial rates are 10 gpm of
concentrated solution and 40 gpm of dilute solution and that to do the test
the concentrated solution is changed to 11 gpm.

REFERENCES

1. W. L. Luyben, Process Modeling, Simulation and Control for Chemical
Engineers, 2d ed. (New York: McGraw-Hill, 1990).
2. C. A. Smith and A. B. Corripio, Principles and Practice of Automatic
Process Control, 2d ed. (New York: Wiley, 1997).
3. J. G. Ziegler and N. B. Nichols, “Optimum Settings for Automatic
Controllers,” Transactions of the ASME, vol. 64 (Nov. 1942), p. 759.
4. C. L. Smith, Digital Computer Process Control (Scranton, PA:
International Textbook, 1972).
5. Qi Liwu and A. B. Corripio, “Dynamic Matrix Control of Sugar
Crystallization in a Vacuum pan,” Proceedings of ISA/85 (Research
Triangle Park, NC: ISA, 1985).
UNIT 4

How to Tune Feedback Controllers


In Unit 3 we introduced methods for estimating the three fundamental
process parameters from open-loop step tests: the gain, time constant, and
dead time. In this unit, we will introduce formulas for tuning controllers
based on these three parameters.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Tune feedback controllers based on estimates of the process gain,
time constant, and dead time.

B. Compare controller tuning methods.

C. Identify factors that affect controller performance.

D. Recognize reset windup and know how to avoid it.

4-1. Tuning for Quarter-decay Ratio Response

As we learned in Unit 2, Ziegler and Nichols developed the formulas for
quarter-decay ratio (QDR) response tuning that are based on the ultimate
gain and period of the loop (see Table 2-1 in Unit 2). However, they also
developed formulas for tuning feedback controllers for QDR response that
are based on the process gain, K; time constant, τ; and dead time, to.1
These formulas are given in Table 4-1.

Table 4-1. Tuning Formulas for Quarter-decay Ratio Response


                 Gain              Integral Time   Derivative Time
P                Kc = τ/Kto        —               —
PI               Kc = 0.9τ/Kto     TI = 3.33to     —
PID, series      Kc' = 1.2τ/Kto    TI' = 2.0to     TD' = 0.5to
PID, parallel    Kc = 1.5τ/Kto     TI = 2.5to      TD = 0.4to
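The formulas of Table 4-1 are simple enough to capture in a few lines of code. The sketch below (the function name and the dictionary layout are my own choices) computes the QDR settings from the three FOPDT parameters:

```python
def qdr_tuning(K, tau, t0, controller="PI"):
    """Quarter-decay-ratio (Ziegler-Nichols) settings from Table 4-1.
    K is the process gain in %T.O./%C.O.; tau (time constant) and
    t0 (dead time) must be in the same time units."""
    if controller == "P":
        return {"Kc": tau / (K * t0)}
    if controller == "PI":
        return {"Kc": 0.9 * tau / (K * t0), "TI": 3.33 * t0}
    if controller == "PID-series":
        return {"Kc": 1.2 * tau / (K * t0), "TI": 2.0 * t0, "TD": 0.5 * t0}
    if controller == "PID-parallel":
        return {"Kc": 1.5 * tau / (K * t0), "TI": 2.5 * t0, "TD": 0.4 * t0}
    raise ValueError("unknown controller type: " + controller)
```

For the heat exchanger model used later in this unit (K = 1, τ = 0.86 min, t0 = 0.12 min), `qdr_tuning(1.0, 0.86, 0.12, "PI")` gives Kc of about 6.5 %C.O./%T.O. and TI of about 0.40 min.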

The formulas of Table 4-1 are very similar to those of Table 2-1. Notice, for
example, that in both sets of formulas the proportional gain of the PI
controller is 10 percent lower and the series PID gain 20 percent higher
than that of the P controller. Note also that the derivative or rate time is
one-fourth the integral or reset time for the series PID controller. The ratio
of the integral time of the PI controller to that of the series PID controller is
also the same for both sets of formulas. In other words, the reset action is
about 1.7 times faster when derivative is used than when it is not.

The formulas of Table 4-1, however, provide important insights into the
effect that the parameters of the process have on the tuning of the
controller and thus on the performance of the loop. In particular, they
allow us to draw the following three conclusions:
1. The controller gain is inversely proportional to the process gain
K. Since the process gain represents the product of all the
elements in the loop other than the controller (control valve,
process equipment, and sensor/transmitter), this means that the
loop response depends on the loop gain, that is, the product of all
of the elements in the loop. It also means that if the gain of any of
the elements were to change because of recalibration, resizing, or
nonlinearity (see Section 3-6), the response of the feedback loop
would change unless the controller gain is readjusted.
2. The controller gain must be reduced when the ratio of the process
dead time to its time constant increases. This means that the
controllability of the loop decreases when the ratio of the process
dead time to its time constant increases. It also allows us to define
the ratio of dead time to time constant as the uncontrollability
parameter of the loop:

Pu = t0/τ    (4-1)

where

to = the process dead time

τ = the process time constant

Notice that it is the ratio of the dead time to the time constant that
determines the controllability of the loop. In other words, a
process with a long dead time is not uncontrollable if its time
constant is much longer than the dead time.

3. The speed of response of the controller, which is determined by
the integral and derivative times, must match the speed of
response of the process. The QDR formulas match these response
speeds by relating the controller time parameters to the process
dead time.

These three conclusions can be very helpful as guidelines for the tuning of
feedback controllers, even in cases where the tuning formulas cannot be
used directly because the process parameters cannot be accurately
estimated. For example, if the performance of a well-tuned controller were
to deteriorate under operation, look for either a change in the process gain,
in its uncontrollability parameter, or in its speed of response. In other
instances the controller performance is poor because the reset time is
much shorter than the process response time, so the process cannot
respond as fast as the controller wants it to.

The three conclusions we have just drawn from the tuning formulas can
also guide the design of the process and its instrumentation when they are
coupled with the methods for estimating time constants and dead times
given in Sections 3-4 and 3-5 of Unit 3. For example, loop controllability
can be improved by reducing the dead time between the manipulated
variable and the sensor or by increasing the process time constant.
Moreover, it is possible to quantitatively estimate the effect of process,
control valve, and sensor nonlinearities on the variability of the loop gain
and thus determine whether there is any need to readjust the controller
gain when process conditions change.

Applying the QDR Tuning Formulas

The formulas of Table 4-1 were developed empirically for the most
common range of the process uncontrollability parameter, which is
between 0.1 and 0.3. This assumes that the process does not exhibit
significant transportation lag, but rather that the dead time is the result of
several time lags in series (e.g., trays in a distillation column).

The QDR formulas were developed for continuous analog controllers and
thus must be adjusted for the sampling frequency of digital controllers—
that is, computer control algorithms, distributed controllers, or
microprocessor-based controllers. Moore and his co-workers proposed
that the process dead time be increased by one half the sampling period to
account for the fact that the controller output is held constant for one
sampling period, where the sampling period is the time between updates
of the controller output.2 Following this procedure, the uncontrollability
parameter for digital controllers is as follows:

Pu = (t0 + T/2)/τ    (4-2)

where T is the sampling period. Notice that increasing the sampling
period reduces the controllability of the loop. In other words, the slower
the control algorithm processing frequency, the worse the performance of
the loop. This does not necessarily mean you should process every loop as
fast as possible because there is a point of diminishing returns, that is, a
sampling frequency above which the computer load is increased without
any significant improvement in control performance. For most loops,
control performance does not improve much when the sample time is
reduced beyond one-tenth the time constant.
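Since Eqs. 4-1 and 4-2 differ only in the half-sample-period correction, one small helper (a sketch; the function name is my own) covers both the analog and the digital case:

```python
def uncontrollability(t0, tau, T=0.0):
    """Uncontrollability parameter, Eqs. 4-1 and 4-2.
    t0: process dead time; tau: process time constant;
    T: sampling period (leave at zero for a continuous analog
    controller, so the expression reduces to Eq. 4-1)."""
    return (t0 + T / 2.0) / tau
```

For the heater model (t0 = 0.12 min, τ = 0.86 min) an analog controller gives Pu of about 0.14; sampling every 0.1 min raises it to about 0.20, a modest but real loss of controllability.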

4-2. A Simple Method for Tuning Feedback Controllers


Many sets of tuning formulas and methods have been proposed in the
literature since Ziegler and Nichols introduced their pioneering formulas.
The methods vary in how they define “good controller performance” and
in the formulas they use to calculate the tuning parameters. Some require
process models that are more complex than the first-order-plus-dead-time
(FOPDT) model described in Unit 3. However, since this book is not
meant to be an encyclopedia of tuning methods, but rather a guide for the
quick and simple tuning of industrial control systems, in this section we
will present one of the simplest and most effective methods proposed for
tuning feedback controllers. This method has come to be known as the
IMC (for Internal Model Control) tuning rules.3 However, it was originally
introduced under the name “controller synthesis” by Martin in 1975 and
then further developed by Smith and Corripio.4

For the first-order-plus-dead-time process model, the IMC tuning rules
consist of setting the integral time equal to the process time constant and
the derivative time equal to one half the process dead time. The controller
gain is then adjusted to obtain the desired response of the loop. The
following formulas are to be used with the series PID controller:
TI' = τ

TD' = t0/2 (4-3)

When the process dead time is very small compared with the process time
constant, the effect of the derivative time is minor, and a PI controller can
be used in which the integral time is equal to the process time constant.
For computer (discrete) controllers with a uniform sampling interval, one
half the sample time must be added to the dead time, as in Eq. 4-2.

Gain Adjustment

Although the gain is adjustable, the following formulas are proposed here:

• For good response to disturbances, when Pu is between 0.1 and 0.5,
use the formula:
Kc' = 2 τ/Kt0 (4-4)

When Pu is less than 0.1 or greater than 0.5, you should use one half
this gain as the starting value.

• For optimum response to changes in set point, when Pu is in the
range 0.1 to 0.5 and when using a PI controller, the following
formula is appropriate:

Kc = 0.6 τ/Kt0 (4-5)

• For optimum response to changes in set point, when Pu is in the
range 0.1 to 0.5 and using a PID controller, use this formula:

Kc' = 0.83 τ/Kt0 (4-6)

• For 5 percent overshoot on set point changes, use the following
formula:

Kc = 0.5 τ/Kt0 (4-7)

The four preceding formulas convey the idea that the controller gain can
be adjusted to obtain a variety of responses. Once you have set the time
parameters using the best estimates of the process time parameters, the
tuning procedure is reduced to adjusting a single parameter: the controller
gain.
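The IMC rules above reduce to a few lines of code. In this sketch (the function name and the table of gain factors are my own; the factors come from Eqs. 4-3 through 4-7), the time settings are fixed and only the gain factor changes with the tuning objective:

```python
def imc_tuning(K, tau, t0, objective="disturbance"):
    """IMC (controller synthesis) settings for a series PID controller.
    TI' = tau and TD' = t0/2 per Eq. 4-3; the gain factor depends on
    the tuning objective (Eqs. 4-4 through 4-7)."""
    factor = {"disturbance": 2.0,       # Eq. 4-4
              "setpoint-PI": 0.6,       # Eq. 4-5
              "setpoint-PID": 0.83,     # Eq. 4-6
              "overshoot-5pct": 0.5}[objective]
    Kc = factor * tau / (K * t0)
    Pu = t0 / tau
    if objective == "disturbance" and not (0.1 <= Pu <= 0.5):
        Kc /= 2.0  # start at half the gain outside the 0.1-0.5 range
    return {"Kc": Kc, "TI": tau, "TD": t0 / 2.0}
```

For a PI controller, simply ignore the returned derivative time. With the two-point estimates used in Example 4-2 (K = 1, τ = 0.56 min, t0 = 0.17 min), `imc_tuning(1.0, 0.56, 0.17)` returns a gain of about 6.6 %C.O./%T.O.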

One advantage these simple formulas have over the Ziegler-Nichols
formulas presented in Section 4-1 is that they apply over a wider range of
the uncontrollability parameter. The following section compares the
controller performance using both sets of formulas.

4-3. Comparative Examples of Controller Tuning

This section compares the two methods presented in the first two sections
of this unit by tuning the temperature controller of the steam heater in
Figure 3-1 as well as two other hypothetical processes: one that is
controllable and one that is difficult to control.

For the heat exchanger of Figure 3-1, recall that the first-order-plus-dead-
time model parameters (which we determined in Example 3-2) are as
follows:

K = 1.0 %T.O./%C.O.

                     Tangent Method   Tangent-and-Point Method   Two-Point Method
Time constant, min   0.86             0.61                       0.56
Dead time, min       0.12             0.12                       0.17

Notice that which tuning parameters you use will depend on which
method you use to determine the time constant and dead time. Ziegler
and Nichols used the tangent method to develop their empirical formulas,
working with actual processes and physical simulations. Thus, you should
use the tangent method when tuning for quarter-decay-ratio (QDR)
response. The IMC tuning rules were developed for first-order-plus-dead-
time models, so any of the three methods can be used to determine which
dead time and time constant to use with the IMC formulas. Since the
tangent method gives the smallest value for the uncontrollability
parameter—that is, shortest dead time and longest time constant—it
results in the tightest tuning, while the two-point method produces the
highest value for the uncontrollability parameter and thus the most
conservative tuning.

The following example compares PI versus PID temperature control of the
heat exchanger in Figure 3-1 using QDR tuning.

Example 4-1. Heat Exchanger Temperature Control—PI versus PID
Performance with QDR Tuning. To tune the proportional-integral (PI)
and the proportional-integral-derivative (PID) controllers for QDR
response on the temperature controller, use the process parameters
estimated by the tangent method:

K = 1%T.O./%C.O. τ = 0.86 min t0 = 0.12 min

The formulas in Table 4-1 produce the following tuning parameters:

            Kc, %C.O./%T.O.   TI, min   TD, min
PI          6.5               0.40      —
PID series  8.6               0.24      0.06

Using these tuning parameters, Figure 4-1 compares the responses of the
temperature transmitter output and of the controller output to a step
increase in process flow to the heater. The advantage of the derivative
mode is obvious: it produces a smaller initial deviation and maintains the
temperature closer to the set point for the entire response, with fewer
oscillations.

[Figure 4-1, panels (a) and (b): controlled variable and controller output M, %C.O. (50 to 60), versus time (0 to 2.5 minutes) for the PI and PID controllers]

Figure 4-1. Responses of PI and PID Controllers to Disturbance Input on the Heat Exchanger
with QDR Tuning

The next example compares QDR versus IMC tuning of the temperature
controller of the heat exchanger in Figure 3-1.

Example 4-2. Heat Exchanger Temperature Control—QDR versus IMC
Tuning of PID Controller. This example compares the tuning of a PID
controller using QDR tuning versus IMC tuning. The QDR tuning
parameters of the PID controller are the same as in Example 4-1. Recall
that these parameters were obtained from the process parameters
estimated by the tangent method. By contrast, the process parameters
estimated by the two-point method are used to tune the IMC controller.
This is because the two-point method is simpler and more reproducible
than the other two methods. The process parameters are as follows:

K = 1.0 %T.O./%C.O. τ = 0.56 min. t0 = 0.17 min



The IMC tuning rules presented in Section 4-2 give the integral and
derivative times:

TI' = τ = 0.56 min

TD' = t0/2 = 0.17/2 = 0.09 min

The gain, as mentioned earlier, is adjustable. To test the response to a
disturbance, from Eq. 4-4, use the following:

Kc' = 2 τ/Kt0 = 2(0.56)/(1.0)(0.17) = 6.6%C.O./%T.O.

A comparison of the tuning parameters shows that the QDR formulas call
for a 30 percent higher gain, for an integral mode over twice as fast, and a
33 percent shorter derivative time than IMC. The difference in derivative
time is caused only by the difference in the method used for estimating the
dead time, as the formulas are identical.
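The numerical comparison in this example is easy to reproduce (the variable names are assumed; the QDR values are copied from Example 4-1):

```python
# Two-point FOPDT estimates used for IMC tuning (Example 4-2)
K, tau, t0 = 1.0, 0.56, 0.17

# IMC settings for disturbance response
Kc_imc = 2 * tau / (K * t0)   # Eq. 4-4: about 6.6 %C.O./%T.O.
TI_imc = tau                  # 0.56 min
TD_imc = t0 / 2               # 0.085, reported as 0.09 min

# QDR settings from Example 4-1 (tangent-method estimates)
Kc_qdr, TI_qdr, TD_qdr = 8.6, 0.24, 0.06

gain_ratio = Kc_qdr / Kc_imc   # about 1.3: QDR gain roughly 30% higher
reset_ratio = TI_imc / TI_qdr  # about 2.3: QDR integral over twice as fast
```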

Figure 4-2 compares the QDR and IMC responses of the PID temperature
controller to a step increase in process flow to the heater. Both controllers
perform well, reducing the initial deviation in outlet temperature to about
one-tenth what it would be without control. QDR tuning results in a
slightly smaller initial deviation and brings the temperature back to the set
point of 90°C quicker than does IMC tuning. To achieve this good
performance the QDR-tuned controller causes a 50 percent overcorrection
in controller output, while the IMC-tuned controller smoothly moves the
controller output from its initial to its final position.

To compare the performance to set point changes, the IMC gain must be
adjusted to the value recommended for set point changes which is given
by Eq. 4-6:

Kc' = 0.83 τ/Kt0 = 0.83(0.56)/(1.0)(0.17) = 2.7%C.O./%T.O.

The QDR parameters for the PID controller are as shown in Example 4-1.
Figure 4-3 compares the responses of the PID controller to a 5°C set point
change. As expected, QDR tuning results in a large overshoot, while IMC
tuning smoothly moves the variable from its original to its final set point.

QDR tuning also causes a much larger initial change in the controller
output.

Example 4-2 highlights an apparent dilemma between tuning for good
performance on disturbance inputs and tuning for good performance on
set point changes. However, there are several ways to tune for good
performance on disturbance inputs and still prevent or diminish poor

[Figure 4-2, panels (a) and (b): controlled variable and controller output M, %C.O. (50 to 60), versus time (0 to 2.5 minutes) for QDR and IMC tuning]

Figure 4-2. Comparison of PID Responses to Disturbance Input on Heat Exchanger with QDR
and IMC Tuning

performance on set point changes. Most industrial controllers are
designed to compensate for disturbances with few if any changes in set
point. So, one way to prevent detrimental overshoot and excessive
correction on set point changes is to advise the operator against making
large and sudden changes in set point. Set point changes can be ramped or
divided into a successive series of small changes.

However, this solution does not address cases where set point changes are
common, such as batch processes and on-line optimization. One recent
development in industrial operations is to incorporate on-line
optimization programs that automatically change controller set points as
the optimum conditions change. Most of these programs have limits on
the sizes of the set point changes they can make. At any rate, one sure way
to prevent large changes in controller output on set point changes is to

[Figure 4-3, panels (a) and (b): controlled variable and controller output M, %C.O. (20 to 100), versus time (0 to 2.5 minutes) for QDR and IMC tuning]

Figure 4-3. Responses to Set Point Change on Heat Exchanger PID Controller with QDR and
IMC Tuning

have the proportional mode act on the process measurement or variable
instead of on the error. As long as there is integral mode, this option does
not affect the performance of the controller in response to disturbance
inputs. The “proportional-on-measurement” option is available on most
modern distributed control systems and other computer-based controllers.
When this option is chosen, the controller can be safely tuned for
disturbance inputs without danger of causing large changes in controller
output when the set point is changed.
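The proportional-on-measurement option can be sketched as a discrete PI algorithm. This is a minimal illustration, not any vendor's implementation; the class and variable names are assumptions:

```python
class PIOnMeasurement:
    """Discrete PI (velocity form) in which the proportional term acts
    on the measurement (PV) rather than on the error, so a set point
    step produces no proportional kick; the integral term still acts
    on the error and removes offset."""
    def __init__(self, Kc, TI, T):
        self.Kc, self.TI, self.T = Kc, TI, T   # gain, reset time, sample time
        self.pv_last = None
        self.out = 0.0
    def update(self, sp, pv):
        if self.pv_last is None:
            self.pv_last = pv
        # integral increment uses the error
        self.out += self.Kc * (self.T / self.TI) * (sp - pv)
        # proportional action responds only to changes in the measurement
        self.out -= self.Kc * (pv - self.pv_last)
        self.pv_last = pv
        return self.out
```

After a set point step the output moves only by the small integral increment each sample, while a change in the measurement still draws the full proportional correction.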

The following example illustrates tuning feedback controllers for very
controllable processes, that is, processes with a dead time to time constant
ratio of less than 0.1.

Example 4-3. PI Control of a Controllable Process. Some processes
consist of a single first-order lag with little or no dead time. This makes
them very controllable and the controllers easy to tune, provided the
tuning formulas are not blindly followed. For example, if the process dead
time is zero, the QDR formulas recommend an infinite controller gain and
zero integral time. Both of these would result in a very sensitive controller
if they were jointly approached in the actual tuning.

Consider a controllable process with the following parameters:

K = 2.0 %T.O./%C.O. τ = 5 min t0 = 0.25 min

The uncontrollability parameter for this process is Pu = 0.25/5 = 0.05, which
is below the limits of most tuning correlations. Since the dead time is
small, a PI controller is appropriate for controlling the process. The tuning
parameters for QDR (from Table 4-1) and for IMC (from Section 4-2) are as
follows:

      Kc, %C.O./%T.O.   TI, min
QDR   9.0               0.83
IMC   10.0              5.0

The gain of the IMC controller has been taken as one half the gain
given by Eq. 4-4 for disturbance inputs because the uncontrollability
parameter is less than 0.1. Notice that the gains are rather high, which
indicates very tight control.
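The gain-halving rule for Pu outside the 0.1 to 0.5 range works out as follows for this example (plain arithmetic; the variable names are my own):

```python
K, tau, t0 = 2.0, 5.0, 0.25   # Example 4-3 process parameters
Pu = t0 / tau                 # 0.05, below the 0.1 lower limit
Kc = 2 * tau / (K * t0)       # Eq. 4-4 alone would give 20 %C.O./%T.O.
if not (0.1 <= Pu <= 0.5):
    Kc = Kc / 2               # take half as the starting value: 10
TI = tau                      # IMC integral time: 5 min
```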

Figure 4-4 compares the responses of the PI controller for a 10 percent
disturbance input. Again, both controllers perform well, each reducing
the initial deviation to a little over 5 percent of what it would be without
control. And again, QDR tuning brings the controlled variable back to set
point faster than IMC tuning because of the faster integral mode. Notice
that both controllers cause large initial changes in the controller output, of
over 15 percent! This is due to the high gains.

Example 4-3 shows that good performance on the controlled variable must
be balanced against too much action on the controller output. This is
because the controller output usually causes disturbances to other
controllers and in some cases manipulates safety-sensitive variables. For
example, in a furnace temperature controller the controller output could
be manipulating the fuel flow to the furnace. A large drop in fuel flow
could cause the flame in the firing box to go out.

[Figure 4-4, panels (a) and (b): transmitter output, %T.O. (50 to 51), and controller output, %C.O. (30 to 50), versus time (0 to 10 minutes) for QDR and IMC tuning]

Figure 4-4. Responses to Disturbance Input of a Controllable Process with Pu = 0.05 for PI
Controller with QDR and IMC Tuning

Processes with uncontrollability parameters of the order of unity or
greater are difficult to control by feedback control alone. The following
example compares QDR versus IMC tuning for such a process.

Example 4-4. PID Control of a Process with Low Controllability.
Processes that consist of many lags in series, or that exhibit true dead time,
are difficult to control by feedback control alone. Consider a process with
the following parameters:

K = 2.0 %T.O./%C.O. τ = 5 min t0 = 5 min



The uncontrollability parameter is Pu = 5/5 = 1.0, which is high. Because
of this, a PID controller is appropriate. From the QDR tuning formulas of
Table 4-1 and the IMC formulas of Section 4-2, the controller parameters
for this process are as follows:

      Kc', %C.O./%T.O.   TI', min   TD', min
QDR   0.6                10.0       2.5
IMC   0.5                5.0        2.5

Notice that in this case the IMC formulas call for a faster integral time than
do the QDR formulas. The IMC gain is half the one predicted by Eq. 4-4 for
disturbance inputs because the uncontrollability parameter is greater than
0.5.

Figure 4-5 compares the responses of the PID controllers tuned using the
QDR and IMC formulas to a 10 percent change in disturbance. Notice that
the initial deviation in the controlled variable for both controllers is about
65 percent of what it would be if there were no control (13%T.O. versus
20%T.O.). This is because the high uncontrollability parameter requires
low controller gains. Because of its faster integral mode, the IMC-tuned
controller brings the controlled variable back to set point of 50%T.O.
slightly faster than the QDR-tuned controller. The variation in the
controller output is about the same for both controllers. It is high because
of the large deviation of the controlled variable from the set point.

The four examples in this section have compared the tuning parameters
obtained from the two tuning methods presented in this unit, as well as
the performance of the controller when tuned by each of these methods.
To summarize our findings:

• Derivative mode provides superior performance for processes with
a high dead-time-to-time-constant ratio.

• Except for controllers that must constantly respond to set point
changes (e.g., slaves in cascade loops; see Unit 7), the controller
should be tuned for good performance on disturbance inputs, and
sudden set point changes should be limited in magnitude.

• For very controllable processes, high controller gains are possible,
but they should be avoided when large variations in the controller
output may upset the process.

• For very uncontrollable processes, even the best attainable response
produced by the tuning formulas is not good. For these processes,

[Figure 4-5, panels (a) and (b): transmitter output, %T.O. (50 to 60), and controller output, %C.O. (30 to 50), versus time (0 to 50 minutes) for QDR and IMC tuning]

Figure 4-5. Responses of PID Controller to Disturbance Input on an Uncontrollable Process
with Pu = 1.0 for QDR and IMC Tuning

alternatives to simple feedback control should be explored (see Section
6-4 and Units 7 and 8).

4-4. Practical Controller Tuning Tips

This section presents seven tips that I hope will help you make your
controller tuning task more efficient and satisfying.
1. Tune coarse, not fine.

Realizing that the performance of a feedback controller is not
sensitive to the precise adjustment of its tuning parameters
significantly simplifies the tuning task. Faced with the infinite
possible combinations of precise tuning parameter values, you
might give up the task of tuning before you even get started. But
once you realize that the controller performance does not require
tuning parameters to be set precisely, you reduce the number of
significantly different combinations to a workable number.
Moreover, you will be satisfied by the large improvements in
performance that can be achieved by coarse tuning—in sharp
contrast to the frustration you will feel in the small incremental
improvements achieved through fine tuning. How coarse is coarse
tuning? When tuning a controller, I seldom change a parameter by
less than half its current value.
2. Tune with confidence.

One of the reasons controller performance is not sensitive to
precise tuning parameter settings is that any of the parameters
may be adjusted to make up for nonoptimal values in the other
parameters. One effective approach is to select the integral time
first and set the derivative time to about one-fourth of the integral
time, or, if the dead time is known, to one half the dead time. Then
adjust the proportional gain to obtain tight control of the controlled
variable without undue variations in the manipulated variable. If
the response is still too oscillatory, double the integral and
derivative times, or, if the response approaches the set point too
slowly, cut the integral and derivative times in half; then readjust
the gain. When you obtain satisfactory performance, leave it alone.
Do not try to fine-tune it further. If you try to fine-tune it you will be
disappointed by the insignificant incremental improvement.
3. Use all of the available information.

You may be able to gather enough information about the process
equipment to estimate the gain, time constant, and dead time of
the process without having to resort to the open-loop step test (see
Sections 3-4 and 3-5). You can also gather information during
trial-and-error tuning, which allows you to estimate the integral
and derivative times from the period of oscillation of the loop or
from the total delay around the loop (dead time plus time
constant). The total delay around the loop can be estimated as the
time difference between peaks in the controller output and the
corresponding peaks in the transmitter signal.
4. Try a longer integral time.

Many times, poor loop response can be the result of trying to bring
the controlled variable back to its set point faster than the process
can respond. In such cases, increasing the integral time allows an
increase in the process gain and an improvement in the response.

5. Tuning very controllable processes.

Processes with uncontrollability parameters less than 0.1 have very
large ultimate gains, which are difficult to determine using the
closed-loop method introduced in Unit 2. When the
uncontrollability parameter is less than 0.1, most tuning formulas
result in very high gains and very fast integral times, both of which
should be used only rarely. What the tuning formulas are
indicating is that it is possible to use higher gains and faster
integral times than would normally be reasonable. In other words,
it is a good idea to let your judgment override the tuning formulas.
6. Tuning very uncontrollable processes.

For processes with uncontrollability parameters of 1 and higher, it
is important to recognize that even the optimally tuned feedback
controller will result in poor performance, that is, large initial
deviations on disturbance inputs and slow return to set point. In
such cases, you can achieve improved performance by using
feedforward control (see Unit 8) and by using dead-time
compensation in the feedback controller (see Section 6-4).

7. Beware of problems that are not related to tuning.

The following problems interfere with the normal operation of a
controller, and although they may appear to be tuning problems
they are not:

• Reset windup, which is caused by saturation of the controller
output (see Section 4-5).

• Interaction between loops (see Unit 9).

• Processes with inverse or overshoot response, which is caused
by the presence of parallel effects of opposite direction between
a process input and the controlled variable (see Section 4-6).

• Changes in process parameters because of nonlinearities,
which must be handled by adaptive control methods (see
Unit 10).

• Control valve hysteresis. That is, the valve stops at a different
position than the one desired, and the difference changes its
direction depending on the direction of motion of the valve.
This is caused by dry friction on the valve packing. Control
valve hysteresis causes the controller output to oscillate around
the desired position of the valve.

All of these problems cause loss of feedback controller
performance, which must be handled by means other than
controller tuning, as, for example, by decoupling (Unit 9), by
feedforward control (Unit 8), by adaptive control (Unit 10), or by
using valve positioners (Unit 5). The units that follow will discuss
each of these techniques.

4-5. Reset Windup

Reset windup or saturation of the controller output may often be assumed
to be a tuning problem when in reality it cannot be resolved by tuning the
controller. It is therefore important that you be able to recognize the
symptoms of reset windup and know how to resolve them.

A properly tuned controller will behave well as long as its output remains
in a range where it can change the manipulated flow. However, it will
behave poorly if, for any reason, the effect of the controller output on the
manipulated flow is lost. A gap between the limit on the controller output
and the operational limit of the control valve is the most common cause of
reset windup. The symptom is a large overshoot of the controlled variable
while the integral mode in the controller is crossing the gap. Reset windup
occurs most commonly during start-up and shutdown, but it can also
occur during product grade switches and large disturbances during
continuous operation. Momentary loss of a pump may also cause reset
windup.

To illustrate a typical occurrence of reset windup, consider a large reactor
where the temperature is controlled by manipulating steam flow to the
jacket, as sketched in Figure 4-6. Suppose that the reactor is poorly
insulated and operating close to full capacity, with the steam valve at 95
percent open. At point “a” in the trend recording of Figure 4-6, a sudden
thunderstorm causes a sharp drop in the reactor temperature, which
causes the steam valve to open fully. However, because the controller
output is not properly limited, it continues to increase beyond the 100
percent valve position (20 mA output) to the full supply current of 125
percent (24 mA). The gap between the limit on the controller output and
the operational limit of the control valve is between 100 percent and 125
percent. The valve does not move over this gap because it is held against
its fully opened position of 100 percent. At point “b” in the trend the
thunderstorm subsides, and the reactor temperature starts to increase back
to its set point. However, when it reaches its set point, at point “c”, the
controller output is still at 125 percent of range, and the valve is fully
opened. At this point, the integral mode starts to reduce the controller
output, but because it is in the gap, the control valve continues to be fully
opened until the controller output reaches 100 percent at point “d”.

Figure 4-6. Reset Windup in Reactor Temperature Control

Meanwhile, the reactor temperature has continued to increase, and its
response shows the large overshoot that is symptomatic of reset windup.
At point “d” the steam valve finally begins to close, and the reactor
temperature starts to decrease back to its set point.

Reset windup can be prevented by eliminating any possible gaps over
which the controller output has no effect on the manipulated flow. In this
case, limits on the controller output would have to be set that correspond
to the limits on the control valve operation. These limits are not always 0
percent and 100 percent of range. For example, some control valves are
poorly designed, and their installed characteristics may show little change
in flow for valve positions above 80 percent or 90 percent open. In such
cases, the controller output limit should be set at the point where there is
little increase in flow for an increase in controller output. Modern
microprocessor-based controllers are equipped with adjustable limits on
the output as well as on the set point, so there is no expense incurred in
setting the limits. The job is just to determine what the limits should be.
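The output-limiting idea described above is usually combined with anti-reset-windup logic inside the controller itself. The following is a minimal sketch, not an implementation from the text: a discrete PI controller whose integral term is back-calculated whenever the output hits its limits, so the output stops at the valve's working range instead of winding up past it. The function names, sampling period, and 0-100 percent limits are my own assumptions.

```python
# Sketch of a discrete PI controller with anti-reset windup by clamping.
# All names and numbers are illustrative, not from the text.

def make_pi(kc, ti, dt, out_lo=0.0, out_hi=100.0):
    """Return a PI step function whose integral term is clamped so the
    output never winds up beyond [out_lo, out_hi]."""
    state = {"integral": 0.0}

    def step(sp, pv):
        e = sp - pv                       # error, %T.O.
        state["integral"] += kc * dt / ti * e
        out = kc * e + state["integral"]
        if out > out_hi:                  # clamp the output AND back-
            state["integral"] -= out - out_hi   # calculate the integral so
            out = out_hi                  # it stops accumulating
        elif out < out_lo:
            state["integral"] -= out - out_lo
            out = out_lo
        return out

    return step

pi = make_pi(kc=1.4, ti=4.3, dt=0.1)
# Drive the loop with a large sustained error: the output saturates at
# 100 percent instead of winding up toward 125 percent.
for _ in range(500):
    out = pi(sp=50.0, pv=30.0)
assert out == 100.0
# When the error reverses, the clamped controller comes off the limit
# immediately instead of crossing a windup gap first.
assert pi(sp=50.0, pv=60.0) < 100.0
```

With windup, the integral term would keep growing during saturation and the valve would stay fully open long after the error changed sign, producing the large overshoot seen in Figure 4-6.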

4-6. Processes with Inverse Response

Some processes exhibit what is known as inverse response, that is, an initial
move in the direction opposite to the final steady-state change when the
input is a step change. A typical example of a process with inverse
response is an exothermic reactor where the feed is colder than the reactor.
An increase in the feed rate to the reactor causes the temperature to drop
Unit 4: How to Tune Feedback Controllers 79

initially due to the larger rate in cold feed. However, eventually, the
increase in reactants flow increases the rate of the reaction and with it the
rate of the heat generated by the reaction. This causes the temperature in
the reactor to end up higher than it was initially. Another typical inverse
response is the level in the steam drum of a water tube boiler when the
steam demand changes. The inverse response is caused when the
phenomena of “swell” and “shrink” affect the steam bubbles in the boiler
tubes.

As might be expected, the inverse response makes a process more
uncontrollable than dead time when the dead time is equal to the duration
of the inverse move. This is because the controller is fooled by the move in
the wrong direction and starts taking action in the wrong direction. The
best way to compensate for inverse response is with feedforward control
(see Unit 8). The three-element controller for boiler drum level is a
combination of feedforward and cascade control.

One approach to tuning a feedback controller for a process that has inverse
response is to consider the period of the inverse move as dead time. This is
demonstrated in Example 4-5.

Example 4-5. Control of Process with Inverse Response. Figure 4-7
shows the uncontrolled (open-loop) response of an inverse response
process to a unit step in disturbance and the control of the same process
with a PI controller tuned for quarter-decay ratio response. The controller
tuning parameters are determined as follows:

Gain: K = (51 - 50)%T.O./1%C.O. = 1.0%T.O./%C.O.

From Figure 4-7, we know the duration of the inverse response is 1.3
minutes. This is taken as the process dead time. The time required to reach
the 63.2 percent point of the response (50.63%T.O.) is shown in the figure
to be 3.3 minutes. Therefore:

Dead time: t0 = 1.3 min

Time constant: τ = 3.3 - 1.3 = 2.0 min

The tuning parameters for a PI controller are calculated with the formulas
from Table 4-1:

Kc = 0.9(2.0)/(1.0)(1.3) = 1.4%C.O./%T.O.

TI = 3.33(1.3) = 4.3 min
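The calculation above is mechanical enough to script. The sketch below applies the quarter-decay-ratio PI formulas from Table 4-1 (Kc = 0.9τ/(K·t0), TI = 3.33·t0) to the fitted model of Example 4-5; the function name is my own.

```python
# Quarter-decay-ratio PI tuning (open-loop formulas of Table 4-1),
# treating the duration of the inverse move as dead time.

def qdr_pi(K, tau, t0):
    """Return (Kc, TI) for a PI controller: Kc = 0.9*tau/(K*t0),
    TI = 3.33*t0."""
    return 0.9 * tau / (K * t0), 3.33 * t0

# Example 4-5: K = 1.0 %T.O./%C.O., tau = 2.0 min, t0 = 1.3 min
Kc, TI = qdr_pi(K=1.0, tau=2.0, t0=1.3)
print(round(Kc, 1), round(TI, 1))   # 1.4 4.3
```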

With these tuning parameters we obtain the responses labeled “PI” in
Figure 4-7. The disturbance input is a unit step. Notice that the initial
deviation in the opposite direction is higher than for the uncontrolled
response, and also the first deviation in the positive direction is higher
than the uncontrolled steady-state error. This is because the feedback
controller is fooled by the inverse response.

Although the standard tuning formulas provided a reasonable
response in Example 4-5, they should be used with caution when
they are applied to processes that do not conform to the first-order-plus-
dead-time model. For example, for the process in Figure 4-7, the tuning
formulas for a PID controller resulted in an unstable response.

Figure 4-7. Control of a Process with Inverse Response



4-7. Summary

In this unit we looked at controller tuning methods based on the gain, time
constant, and dead time of the process in the feedback control loop, where
the process represents all of the elements between the controller output
and its input. We then compared the tuning methods with each other. We
demonstrated the effect of derivative mode, as well as the question of
when to tune for disturbance inputs or for set point changes. Tuning for
very controllable and very uncontrollable processes was discussed and
illustrated, and some practical tuning tips were presented. The
phenomena of reset windup and inverse response were also discussed.

EXERCISES

4-1. Based on the tuning formulas given in this unit, how must you change the
controller gain if, after the controller is tuned, the process gain were to
double because of its nonlinear behavior?

4-2. How is the controllability of a feedback loop measured?

4-3. Assuming that the quarter-decay ratio formulas of Table 4-1 give the same
tuning parameters as those of Table 2-1, what relationship can be
established between the controller ultimate gain and the gain and
uncontrollability parameter of the process in the loop? What is the
relationship between the ultimate period and the process dead time?

4-4. Compare the following processes as to controllability, sensitivity, and speed
of response:

                     Process A   Process B   Process C
Gain, %T.O./%C.O.       0.5         2.0         2.0
Time constant, min      0.2         3.0        10.0
Dead time, min          0.1         1.5         2.0

4-5. Calculate the quarter-decay ratio tuning parameters of a series PID
controller for the three processes of Exercise 4-4.

4-6. Readjust the tuning parameters of Exercise 4-5 to reflect that the PID
controller is to be carried out with a processing period of 8 s on a computer
control installation.

4-7. Repeat Exercise 4-5 for a series PID controller tuned by the IMC tuning
rules for disturbance inputs.

4-8. Repeat Exercise 4-5 for a series PID controller tuned by the IMC tuning
rules for set point changes.

4-9. Which method would you use to tune the slave controller in a cascade
control system? In such a system the output of the master controller takes
action by changing the set point of the slave controller.

4-10. What is the typical symptom of reset windup? What causes it? How can it
be prevented?

REFERENCES

1. J. G. Ziegler and N. B. Nichols, “Optimum Settings for Automatic
Controllers,” Transactions of the ASME, vol. 64 (Nov. 1942), p. 759.
2. C. F. Moore, C. L. Smith, and P. W. Murrill, “Simplifying Digital
Control Dynamics for Controller Tuning and Hardware Lag
Effects,” Instrument Practice, vol. 23 (Jan. 1969), p. 45.
3. D. E. Rivera, M. Morari, and S. Skogestad, “Internal Model
Control. 4. PID Controller Design,” Industrial and Engineering
Chemistry Process Design and Development, vol. 25 (1986), p. 252.
4. J. Martin Jr., A. B. Corripio, and C. L. Smith, “How to Select
Controller Modes and Tuning Parameters from Simple Process
Models,” ISA Transactions, vol. 15 (Apr. 1976), pp. 314-19.
5. C. A. Smith and A. B. Corripio, Principles and Practice of Automatic
Process Control, 2d ed. (New York: Wiley, 1997), Chapter 7.
UNIT 5

Mode Selection and Tuning Common Feedback Loops


The preceding units dealt with the tuning of feedback controllers for
general processes that can be represented by a first-order-plus-dead-time
model. This unit presents tuning guidelines for the most typical process
control loops, specifically, flow, level, pressure, temperature, and
composition control loops.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Decide on the appropriate control objective for a loop.

B. Select proportional, integral, and derivative modes for specific
control loops.

C. Design and tune simple feedback controllers for flow, level,
pressure, temperature, and composition.

D. Differentiate between averaging and tight level control.

5-1. Deciding on the Control Objective

The most common objective for feedback control is to maintain the
controlled variable at its set point. However, there are some control
situations, often involving the control of level or pressure, when it is
acceptable to just maintain the controlled variable in an acceptable range.
Differentiating between these two objectives is important because, as
Unit 2 showed, the purpose of the integral mode is to eliminate the offset
or steady-state error, that is, to maintain the controlled variable at the set
point. Consequently, integral mode is not required when it is acceptable to
allow the controlled variable to vary in a range. One advantage of
eliminating the integral mode is that it permits higher proportional gain,
thus reducing the initial deviation of the controlled variable caused by
disturbances.

There are two situations when the controlled variable can be allowed to
vary in a range:

• When the process is so controllable—a single long time constant
with insignificant dead time—that the proportional gain can be set
high and maintain the controlled variable in a very narrow range.


• When it is desirable to allow the controlled variable to vary over a
wide range so the control loop attenuates the oscillations caused by
recurring disturbances.

The first of these situations calls for proportional (P) and
proportional-derivative (PD) controllers with very high gains, as well as
for on-off controllers. We find this situation in the control of level in
evaporators and reboilers as well as in the control of temperature in
refrigeration systems, ovens, constant-temperature baths, and air
conditioning/heating systems. On-off controllers can be used when the
time constant is long enough such that the cycling it necessarily causes is
of a very slow frequency. Otherwise, proportional controllers are used to
modulate the operation of the manipulated variable. In either case, the
dead band of the on-off controller or the proportional band of the
proportional controllers can be set very narrow. Derivative mode can be
added to compensate for the lag in the sensor or final control element and
thus can improve stability.
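The on-off controller with a dead band mentioned above can be sketched in a few lines. This is an illustration only; the function name, set point, and band width are my own assumptions.

```python
# Minimal on-off (two-position) controller with a dead band, for slow,
# highly controllable loops such as ovens or constant-temperature baths.
# All values are illustrative.

def on_off(pv, sp, dead_band, last_output):
    """Heating service: full output below the band, off above it, and
    unchanged inside the band (the dead band prevents chattering)."""
    if pv < sp - dead_band / 2:
        return 100.0
    if pv > sp + dead_band / 2:
        return 0.0
    return last_output

out = 0.0
out = on_off(pv=48.0, sp=50.0, dead_band=2.0, last_output=out)  # below band
assert out == 100.0
out = on_off(pv=50.5, sp=50.0, dead_band=2.0, last_output=out)  # inside band
assert out == 100.0   # output held, no chatter
out = on_off(pv=51.5, sp=50.0, dead_band=2.0, last_output=out)  # above band
assert out == 0.0
```

The cycling period this controller produces depends on the process time constant, which is why on-off control is only acceptable when that time constant is long.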

The second situation when the controlled variable can be allowed to vary
in a range calls for proportional controllers with as wide a proportional
band as possible. This situation is found in the control of level in
intermediate storage tanks and condenser accumulators, as well as in the
control of pressure in gas surge tanks.

5-2. Flow Control

Flow control is the simplest and most common of the feedback control
loops. The schematic diagram of a flow control loop in Figure 5-1 shows
that there are no lags between the control valve that causes the flow to
change and the flow sensor/transmitter (FT) that measures the flow. Since
most types of flow sensors (orifice, venturi, flow tubes, magnetic
flowmeters, turbine meters, Coriolis meters, etc.) are very fast, the only significant
lag in the flow loop is the control valve actuator. Most actuators have time
constants on the order of a few seconds.

Several controller synthesis theories (Internal Model Control, controller
synthesis, optimal control, etc.) suggest that the controller for a very fast
loop should contain only integral mode. In practice, flow controllers have
traditionally been PI controllers tuned with low proportional gains and
very fast integral times, on the order of a few seconds, which are
essentially pure integral controllers.

This traditional approach is acceptable when flow is controlled so as to
maintain a constant rate with few manual changes in flow set point.
However, when the flow controller is the slave in a cascade control
scheme, it is important for the flow to respond quickly to set point
changes.

Figure 5-1. Typical Flow Control Loop

This requires a proportional-integral controller that has a gain
near unity. To maintain stability, this controller may require an increase in
the integral time from the few seconds normally used in flow controllers.
The IMC tuning rules (see Section 4-2) suggest that the integral time be set
equal to the time constant of the loop, usually that of the control valve
actuator. They also suggest that the gain be adjusted for the desired
tightness of control. In cascade situations, tight flow control is indicated.

The proportional gain should also be increased when hysteresis of the
control valve causes variations in the flow around its set point. As
mentioned in Unit 4, hysteresis is caused by static friction in the valve
packing, which creates a difference between the actual valve position and
the corresponding controller output. The error changes direction
according to the direction in which the stem must move, and this causes a
dead band around the desired valve position. Increasing the flow
controller gain reduces the amplitude of the flow variations caused by
hysteresis. Valve positioners also reduce hysteresis and speed up the
valve, but they are usually difficult to justify for flow control loops.

The following example illustrates the effect of valve hysteresis on the
performance of a flow controller.

Example 5-1. Flow Control with Valve Hysteresis. Figure 5-2 shows
the responses of a flow control loop to small variations in pressure drop
across the valve for two different tunings of the controller. The control
valve is assumed to have a hysteresis band of 0.1 percent of the range of
the valve position and a time constant of 0.1 minutes. The curve labeled (a)
is for the traditional tuning of low gain and fast integral, while curve (b) is
for a more aggressive tuning of a gain near unity and slower integral. As
Figure 5-2 shows, the more aggressive tuning reduces the variation in
flow, which in this case is caused by the hysteresis in the valve.

5-3. Level and Pressure Control

There are two reasons for controlling level and pressure: to keep them
constant because of their effect on process or equipment operation or to
smooth out variations in flow while satisfying the material balance. The
former case calls for “tight” control while the latter is usually known as
“averaging” control. Pressure is to gas systems what level is to liquid
systems, although liquid pressure is sometimes controlled.

Figure 5-2. Flow Control Responses: (a) Kc = 0.4%C.O./%T.O., TI = 0.05 min; (b) Kc = 0.9%C.O./%T.O., TI = 0.10 min

Tight Control

Two examples of tight liquid level control and one example of tight
pressure control are shown in Figure 5-3. It is important to control level in
natural-circulation evaporators and reboilers because a level that is too
low causes deposits on the bare hot tubes, while a level that is too high
causes elevation of the boiling point, which reduces the heat transfer rate
and prevents the formation of bubbles that enhance heat transfer by
promoting turbulence. A good example of tight pressure control or
pressure regulation is the control of the pressure in a liquid or gas supply
header. It is important to maintain the pressure in the supply header
constant to prevent disturbances to the users when there is a sudden
change in the demand of one or more of the users.

To design tight level and pressure control systems, one must have a
fast-acting control valve, with a positioner if necessary, so as to avoid
secondary time lags, which would cause oscillatory behavior at high
controller gains. If the level or pressure controller is cascaded to a flow
controller, the latter must be tuned as tight as possible, as mentioned in the
preceding section.

Normally, only proportional mode is needed for tight level or pressure
control. The proportional gain must be set high, from 10 to over 100
(proportional band of 1 percent to 10 percent of range). If the lag of the

Figure 5-3. Examples of Tight Control: (a) Calandria Type Evaporator, (b) Thermosyphon Reboiler, (c) Header Pressure Regulation

level or pressure sensor were significant, derivative mode could be added
to compensate for it, making a higher gain possible. The derivative time
should be set approximately equal to the time constant of the sensor.
Integral mode should not be used, as it would require that proportional
gain be reduced.

Averaging Level Control

Two examples of averaging level control are shown in Figure 5-4: the
control of level in a surge tank and in a condenser accumulator drum. Both
the surge tank and the accumulator drum are intermediate process storage
tanks. The liquid level in these tanks has absolutely no effect on the
operation of the process. It is important to realize that the purpose of an
averaging level controller is to smooth out flow variations while keeping
the tank from overflowing or running empty. If the level were to be
controlled tightly in such a situation, the outlet flow would vary just as
much as the inlet flow(s), and it would be as if the tank (or accumulator)
were not there.

The averaging level controller should be proportional only with a set point
of 50 percent of range, a gain of 1.0 (proportional band of 100 percent), and
an output bias of 50 percent. This configuration causes the outlet valve to
be fully opened when the level is at 100 percent of range and fully closed
when the level is at 0 percent of range, using the full capacity of the valve
and of the tank. A higher gain would reduce the effective capacity of the
tank for smoothing variations in flow, while a lower gain would reduce
the effective capacity of the control valve and create the possibility that the
tank would overflow or run dry. With this proposed design, the tank
behaves as a low-pass filter to flow variations. The time constant of such a
filter is as follows:

τf = A(hmax - hmin)/(Kc Fmax)   (5-1)

where

A = the cross-sectional area of the tank, ft2

hmin and hmax = the low and high points of the range of the level transmitter,
respectively, ft

Fmax = the maximum flow through the control valve when fully
opened (100 percent controller output), ft3/min

Kc = the controller gain, %C.O./%T.O.



Figure 5-4. Averaging Level Control: (a) Surge Tank, (b) Condenser Accumulator Drum

The controller gain is assumed to be 1.0 in this design. When the level
controller is cascaded to a flow controller, Fmax is the upper limit of the
range of flow transmitter in the flow control loop. Notice that an increase
in gain results in a reduction of the filter time constant and therefore less
smoothing of the variations in flow. A good way to visualize this is to
notice that doubling the gain would be equivalent to reducing either the
tank area or the transmitter range by a factor of two, thus reducing the
effective capacity of the tank. On the other hand, reducing the controller
gain to half would be equivalent to reducing the capacity of the valve by
half, thus increasing the possibility that the tank would overflow.
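Equation 5-1 and the gain trade-off just described can be checked numerically. The tank dimensions below are hypothetical, chosen only to show the units working out; the function name is my own.

```python
# Filter time constant of an averaging level loop per Eq. 5-1:
# tau_f = A*(hmax - hmin)/(Kc*Fmax). Tank dimensions are illustrative.

def averaging_tau(A_ft2, hmax_ft, hmin_ft, Fmax_ft3_min, Kc=1.0):
    """Time constant, minutes, of the tank acting as a low-pass filter."""
    return A_ft2 * (hmax_ft - hmin_ft) / (Kc * Fmax_ft3_min)

# 80-ft2 tank, 10-ft transmitter span, 200-ft3/min valve, unity gain:
tau = averaging_tau(A_ft2=80.0, hmax_ft=12.0, hmin_ft=2.0,
                    Fmax_ft3_min=200.0)
print(tau)   # 4.0 minutes

# Doubling the gain halves the filter time constant, i.e., less smoothing,
# exactly as if the tank area or transmitter span had been halved:
assert averaging_tau(80.0, 12.0, 2.0, 200.0, Kc=2.0) == tau / 2
```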

Although you can accomplish averaging level control with a simple
proportional controller, most level control applications use PI controllers.

This is because control room operators have an aversion to variables that
are not at their set points. The process in a level control loop is unlike most
other loops in that it does not self-regulate; that is, the level tends to
continuously rise or fall when the feedback controller is not in automatic.
This means that a time constant cannot usually be determined for level
control loops. Even when there is some degree of self-regulation, the
process time constant is very long, on the order of one hour or longer.
Because of this, PI controllers in level loops have the following
characteristics:

• The level and the flow that is manipulated to control the level
oscillate with a long period of oscillation. Sometimes the period is so
long that the oscillation is imperceptible, unless it is trended over a
very long time.

• The period of oscillation becomes shorter as the integral time shortens.

• The level loop is unstable when the integral time is equal to or
shorter than the time constant of the control valve.

• Unlike most other loops, there is a range of controller gains over
which the oscillations increase as the controller gain is decreased.

These characteristics lead to the following general rules for tuning PI
controllers for averaging level control:

• Set the integral time to sixty minutes or longer.

• Set the proportional gain to at least 1.0%C.O./%T.O.

Averaging pressure control is not as common as averaging level control
because, in the case of gas systems, a simple fixed resistance on the outlet
of the surge tank is all that is required to smooth out variations in flow.

Intermediate Level Control

There are intermediate situations that do not require a very tight level
control but where it is nevertheless important to ensure that the level does
not swing through the full range of the transmitter as in averaging level
control. A typical example would be a blending tank, where the level
controls the tank volume and therefore the residence time for blending. If
a ±5 percent variation in residence time is acceptable, a proportional
controller with a gain of 5 to 10, or even lower, could be used, as the flow
would not be expected to vary over the full range of the control valve
capacity.

The following example compares tight and averaging level control.

Example 5-2. Tight and Averaging Level Control. Figure 5-5 shows the
responses of the control of the level in a tank where the level controller is
tuned for averaging and for tight level. The inlet flow into the tank, shown
by the step changes in the figure, increases by 200 gpm, then by an
additional 200 gpm five minutes later. It then decreases by 200 gpm five
minutes after that and returns to its original value five minutes later. This
simulates the dumping of the contents of two batch reactors into the tank,
each at the rate of 200 gpm for ten minutes, with the second reactor
starting halfway through the dumping of the first one. The integral time of
the level controller is set to twenty minutes, and the tank has a total
capacity of 10,000 gallons, while the valve has a flow capacity of 1,000 gpm
when fully opened.

Figure 5-5. Level Control Responses: (a) Averaging Control, Kc = 1%C.O./%T.O.; (b) Tight Control, Kc = 10%C.O./%T.O. (Inlet flow is represented by the step changes.)

As Figure 5-5 shows, the averaging level control reduces the variation of
the outlet flow to about half the variation of the inlet flow, and it causes
the changes in the outlet flow to be gradual. On the other hand, tight level
control maintains the level within 5 percent of the set point. Such tight
control of level requires that the outlet flow essentially follow the variation
of the inlet flow.
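Example 5-2 can be reproduced with a crude Euler simulation of the material balance. The batch-dump inlet profile follows the example; the base inlet flow of 500 gpm (which holds the level at 50 percent with the 50 percent output bias) and the integration step are my own assumptions.

```python
# Euler simulation of Example 5-2's tank: 10,000 gal capacity, 1,000 gpm
# valve, proportional-only level controller with a 50% output bias.

def simulate(kc, dt=0.01, t_end=60.0):
    """Return (max level %, max outlet flow gpm) for a P-only controller."""
    volume = 5000.0                      # gal, i.e., level = 50% of range
    max_level = max_fout = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        # first reactor dumps 200 gpm over 0-10 min, second over 5-15 min
        fin = 500.0 + (200.0 if 0.0 <= t < 10.0 else 0.0) \
                    + (200.0 if 5.0 <= t < 15.0 else 0.0)
        level = volume / 10000.0 * 100.0                      # %T.O.
        # direct-acting P controller: level above 50% opens the valve more
        m = min(100.0, max(0.0, 50.0 + kc * (level - 50.0)))  # %C.O.
        fout = m / 100.0 * 1000.0                             # gpm
        volume += (fin - fout) * dt
        max_level = max(max_level, level)
        max_fout = max(max_fout, fout)
    return max_level, max_fout

lvl_avg, out_avg = simulate(kc=1.0)       # averaging control
lvl_tight, out_tight = simulate(kc=10.0)  # tight control
# Averaging lets the level swing but smooths the outlet flow; tight
# control holds the level but passes the inlet variation through.
assert lvl_avg > lvl_tight
assert out_avg < out_tight
```

With the unity gain, the simulated level swings up toward 70 percent while the outlet flow peaks well below the 900-gpm inlet peak, consistent with the curves of Figure 5-5.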

5-4. Temperature Control

Temperature controllers are usually proportional-integral-derivative
(PID). The derivative mode is required to compensate for the lag of the
temperature sensor, which is usually significant. The sensor time constant
can often be estimated by the following formula:

τs = MCp/(hA)   (5-2)

where

M = the mass of the sensor, including the thermowell, kg

Cp = the average specific heat of the sensor, kJ/kg-°C

h = the film coefficient of heat transfer, kW/m2-°C

A = the area of contact of the thermowell, m2

When these units are used, the time constant is calculated in seconds.

Temperature is the variable most often controlled in chemical reactors,
furnaces, and heat exchangers. When the temperature controller
manipulates the flow of steam (see Figure 3-1) or fuel to a heater or
furnace (see Figure 5-6), the rate of heat is proportional to the flow of
steam or fuel. This is because the heat of condensation of the steam and
the heating value of the fuel remain approximately constant with load.
However, when the manipulated variable is cooling water or hot oil, the
heat rate is very nonlinear with water or oil flow. This is because, as the
heat transfer rate increases, the outlet utility temperature approaches its
inlet temperature, so larger increments in flow are required for equal
increments in heat rate as the load increases. To reduce the nonlinear nature of the loop, the
temperature controller is sometimes cascaded to a heat rate controller, as in
Figure 5-7. The process variable for the heat rate controller (QC) is the rate
of heat transfer in the exchanger, which is proportional to the flow and to
the change in temperature of the utility: Q = FCp(Tin - Tout).
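The heat-rate signal for the QC controller can be computed directly from the three measurements shown in Figure 5-7. The stream values in the sketch below are illustrative only; the point is that doubling the utility flow while its temperature change halves delivers the same heat, which is the nonlinearity the cascade removes from the temperature controller's view.

```python
# Heat-rate measurement for the QC controller: Q = F*Cp*(Tin - Tout).
# Stream values are illustrative, not from the text.

def heat_rate(f, cp, t_in, t_out):
    """Heat delivered by the hot-oil stream, kW, for f in kg/s and
    cp in kJ/kg-degC."""
    return f * cp * (t_in - t_out)

q1 = heat_rate(2.0, 2.1, 250.0, 180.0)   # 70 degC drop at 2 kg/s
q2 = heat_rate(4.0, 2.1, 250.0, 215.0)   # doubled flow, halved drop
assert abs(q1 - q2) < 1e-9               # same heat rate either way
assert abs(q1 - 294.0) < 1e-9
```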
Figure 5-6. Temperature Control of a Process Furnace

Figure 5-7. Temperature Control of Hot-Oil Heater by Manipulation of Heat Rate

Example 5-3. Estimate of Temperature Sensor Time Constant.
Estimate the time constant of an RTD (resistance temperature device)
weighing 0.22 kg and having a specific heat of 0.15 kJ/kg-°C. The
thermowell is cylindrical with an outside diameter of 12.5 mm and a
length of 125 mm. The film coefficient of heat transfer between the fluid
and the thermowell is 0.5 kW/m2-°C.

The area of the thermowell is as follows:

A = πDL = 3.1416(0.0125)(0.125) = 0.0049 m2.

The time constant, from Eq. 5-2, is estimated as follows:

τs = (0.22)(0.15)/(0.5)(0.0049) = 13.5 s (0.22 min.)
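The example calculation can be checked in a couple of lines. The sketch below uses the exact thermowell area rather than the rounded 0.0049 m², so it gives about 13.4 s against the text's 13.5 s.

```python
# Eq. 5-2 applied to the RTD of Example 5-3: tau_s = M*Cp/(h*A).

import math

M, Cp = 0.22, 0.15              # kg, kJ/kg-degC
h = 0.5                         # kW/m2-degC
A = math.pi * 0.0125 * 0.125    # thermowell lateral area, m2 (~0.0049)

tau_s = M * Cp / (h * A)        # seconds, since kJ/kW = s
assert 13.0 < tau_s < 14.0      # ~13.4 s (text rounds A and gets 13.5 s)
print(round(tau_s, 1))
```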

Most industrial temperature controllers can be tuned following
the methods outlined in Units 2, 3, and 4. There are a few exceptions:

• The control of the outlet temperature from reformer furnaces by
manipulating the fuel flow involves using very fast loops similar to
flow control loops. The controllers can be tuned as flow controllers
(see Section 5-2).

• The control of laboratory constant temperature baths by manipulating
power to electric heaters is usually done with on-off controllers
or high-gain proportional controllers.

5-5. Analyzer Control

The major problem with the control of composition is usually associated
with the sensor/transmitter. The sampling of process streams introduces
significant dead time into the loop, as well as some measurement noise if
the sample is not representative because of poor mixing. Sensors are often
slow, and their measurements are sensitive to temperature and other
process variables. Analyses of hydrocarbon mixtures are done by
chromatographic separation, which is discontinuous in time. These
analyzers also involve a time delay in the measurement of the order of the
analysis cycle, which compounds the control problem.

It is the ratio of the dead time to the process time constant that determines
the controllability of the loop (see Unit 4). Thus, in spite of all the sources
for time delays in the sampling and analysis, if the combination of the
analysis sample time and time delay is less than the process time constant
a proportional-integral-derivative controller is indicated. Any of the
tuning methods of Units 2 and 4 can be used, but the IMC tuning rules
have an advantage: they can be extrapolated to any value of the
dead-time-to-time-constant ratio. On the other hand, if the total dead time is on
the order of several process time constants, the theory calls for a pure
integral controller. This is because the process responds quickly relative to
the time frame in which the analysis is done. Unit 6 discusses the tuning of
controllers that involve sampled measurements.

5-6. Summary

This unit presented some guidelines for selecting and tuning feedback
controllers for several common process variables. While flow control calls
for fast PI controllers with low gains, level and pressure control can be
achieved with simple proportional controllers with high or low gains,
depending on whether the objective is tight control or the smoothing of
flow disturbances. When PI controllers are used for level control, the
integral time should be long, on the order of one hour or longer. PID
controllers are commonly used for temperature and analyzer control.

EXERCISES

5-1. Briefly state the difference between tight level control and averaging level
control. In which of the two is it important to maintain the level at the set
point? Give an example of each.

5-2. What type of controller is recommended for flow control loops? Indicate
typical ranges for the gain and integral times.

5-3. What type of controller is indicated for tight level control? Indicate typical
gains for the controller.

5-4. What type of controller is indicated for averaging level control? Indicate
typical gains for the controller.

5-5. When a PI controller is used for averaging level control, what should the
integral time be? Would an increase in gain increase or decrease
oscillations?

5-6. Estimate the time constant of a temperature sensor weighing 0.03 kg, with
a specific heat of 23 kJ/kg-°C. The thermowell has a contact area of 0.012
m2, and the heat transfer coefficient is 0.6 kW/m2-°C.

5-7. Why are PID controllers commonly used for controlling temperature?

5-8. What is the major difficulty with the control of composition?


UNIT 6

Computer Feedback Control


This unit deals with tuning methods for discrete feedback controllers, that
is, controllers that sample the process variables and update their outputs
at discrete and regular time intervals.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Recognize the parallel and series forms of discrete controllers.


B. Correct the controller tuning parameters for the effect of sampling.
C. Select the sampling time or processing frequency for discrete
control loops.
D. Tune computer and microprocessor-based feedback controllers.
E. Apply feedback controllers with dead-time compensation.

6-1. The PID Control Algorithm

Most of the process industries today use computers and microprocessors
to carry out the basic feedback control calculations. Microprocessors
perform the control calculations in distributed control systems (DCS),
programmable logic controllers (PLC), and single-loop controllers, while
larger computers perform higher-level control functions, many of which
include feedback control. Unlike analog instruments, digital devices must
sample the controlled variable and compute and update the controller
output at discrete time intervals. The formulas that are programmed into
the computer to calculate the controller output are discrete versions of the
feedback controllers presented in Unit 2. A particular way of arranging a
formula for these calculations is called an algorithm.

This section introduces the PID (proportional-integral-derivative)
algorithm. As there is no extra cost in programming all three modes of
control, most algorithms contain all three and then use flags and logic to
allow the control engineer to specify any single mode, combination of two
modes, or all three modes.

Because the feedback control calculation is made at regular intervals, the
controlled variable or process variable (PV) is sampled only when the
controller output is calculated and updated, as illustrated in Figure 6-1.
Notice that the controller output is updated at the sampling instants and


Figure 6-1. Block Diagram of a Computer Feedback Control Loop Showing the Sampled Nature
of the Signals

held constant for one sampling interval T. The sampling of the process
variable is done by the analog-to-digital converter (A/D) and multiplexer
(MUX), while the digital-to-analog converter (D/A) updates and holds the
controller output.

To calculate the error for a reverse-acting controller, subtract the process
variable from its set point:

Ek = Rk - Ck (6-1)

where

Ek = the error, %T.O.

Rk = the set point, %T.O.

Ck = the process or controlled variable, %T.O.

and the subscript “k” stands for the kth sample or calculation of the
controller. The signs of the process variable and the set point are reversed
for a direct-acting controller. Alternatively, the controller gain is set to a
negative value.

Unit 2 established that there are two forms of the PID controller: the
parallel form, Eq. 2-9, and the series form, Eq. 2-10. Table 6-1 presents the
two corresponding forms of the discrete PID controller. Although the

series version is the one used in analog controllers, many computer
controllers use the parallel version, and some computer control systems
allow the option of using either version. The formulas for converting the
controller parameters from one form to the other are given in Unit 2,
Eqs. 2-11 and 2-12.

Table 6-1. Discrete PID Controllers

Parallel:

    ΔMk = Kc[(Ek - Ek-1) + (T/TI)Ek + Bk]

where

    Bk = [αTD/(T + αTD)] Bk-1 - [TD/(T + αTD)] (Ck - 2Ck-1 + Ck-2)

Series:

    ΔMk = Kc'[(Ek - Ek-1) + (T/TI')Ek]

where

    Ek = Rk - Yk

    Yk = [αTD'/(T + αTD')] Yk-1 + [T/(T + αTD')] Ck + [(α + 1)TD'/(T + αTD')] (Ck - Ck-1)

Controller Output:

    Mk = Mk-1 + ΔMk

where
    Rk = set point, %T.O.
    Ck = process variable (measurement), %T.O.
    Mk = controller output, %C.O.
    Ek = error or set point deviation, %T.O.
    α = derivative filter parameter
    T = sampling interval, min

The PID controller formulas of Table 6-1 are designed to avoid undesirable
pulses on set point changes by having the derivative mode work on the
process variable Ck instead of on the error. The formulas also contain a
derivative filter, with time constant αTD (or αTD'), which is intended to
limit the magnitude of pulses on the controller output when the process
variable changes suddenly.

It is seldom desirable for the derivative mode of the controller to respond
to set point changes because such changes cause large changes in the error,
which last for just one sample. If the derivative mode were to act on the
error, undesirable pulses, known as “derivative kicks,” would occur on

the controller output right after the set point is changed. These pulses are
completely avoided by the controller of Table 6-1 since the derivative
mode, acting on the process variable, does not “see” changes in set point.
The minus sign in the formula for the parallel form is used on the
assumption that the error is calculated as in Eq. 6-1 so that, for a direct-
acting controller, the proportional gain would be set to a negative number.
Most modern computer and microprocessor-based controllers provide the
option of having the derivative mode act on the error or on the process
variable. Breaking the “never say never” rule, I can say with confidence
that there is never a good reason for having the derivative act on the error.

In the formulas of Table 6-1 the filter parameter α has a very special
meaning. Its reciprocal, 1/α, is the amplification factor on the change of
the error at each sampling instant, and is also called the “dynamic gain
limit.” Notice that, if α were set to zero, the amplification factor on the
change in error would have no limit. For example, if the sampling interval
is one second (1/60 min) and the derivative time is one minute, the change
in error at each sample with α=0 would be multiplied by a factor of 60
(TD/T = 60). By setting the nonadjustable parameter α to a reasonable
value, say 0.1, the algorithm designer can assure that the change in error
cannot be amplified by a factor greater than 10, independent of the
sampling interval and the derivative time. The dynamic limit is also an
advantage for the control engineer because it allows him or her to set the
derivative time to any desired value without the danger of introducing
large undesirable pulses on the controller output.
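The parallel algorithm of Table 6-1 can be sketched in a few lines of Python. This is an illustrative implementation, not the code of any particular control system; the function and variable names are mine.

```python
def parallel_pid(Kc, TI, TD, T, alpha=0.1, M0=50.0):
    """Discrete parallel PID of Table 6-1: velocity form, derivative on
    the measurement, first-order derivative filter with time constant
    alpha*TD."""
    s = {"E1": 0.0, "B1": 0.0, "C1": None, "C2": None, "M": M0}

    def step(R, C):
        if s["C1"] is None:              # initialize history at first call
            s["C1"] = s["C2"] = C
        E = R - C                        # Eq. 6-1 (reverse acting)
        # filtered derivative term Bk (zero when TD = 0)
        B = (alpha * TD * s["B1"]
             - TD * (C - 2.0 * s["C1"] + s["C2"])) / (T + alpha * TD)
        dM = Kc * ((E - s["E1"]) + (T / TI) * E + B)
        s.update(E1=E, B1=B, C2=s["C1"], C1=C, M=s["M"] + dM)
        return s["M"]                    # Mk = Mk-1 + dMk

    return step
```

With TD set to zero the filter term vanishes and the routine reduces to a discrete PI controller in velocity form.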

The following example illustrates the response of the derivative unit with
and without the filter term.

Example 6-1. Response of the Derivative Unit to a Ramp. Calculate the
output of the derivative unit of the series PID controller in response to a
ramp that starts at zero and increases by 1 percent each sample. Use a
sample time of 1 s and a derivative time of 0.5 min. The derivative filter
parameter is α = 0.1.

Directly substituting both the values given and the process variable at
each sample into the series controller of Table 6-1 produces the results
summarized in the following table. The results for the “ideal” derivative
unit are calculated using a filter parameter of zero.

Sample, s    0     1     2     3     4     5     10    20    40
Ck           0.    1.    2.    3.    4.    5.    10.   20.   40.
Yk           0.    8.5   15.2  20.3  24.5  27.9  38.3  49.9  70.0
Ideal        30.   31.0  32.0  33.0  34.0  35.0  40.0  50.0  70.0


Figure 6-2. Response of Derivative Unit (P+D), with and without Filter, to a Ramp Input.

Notice that the unfiltered (ideal) derivative unit jumps to 30 at time 0 and
increments by 1 each sample. Both these responses are shown graphically
in Figure 6-2. The unfiltered derivative unit leads the input by one
derivative time (30 s), while the derivative unit with the filter, after a brief
lag, also leads the input by one derivative time. In practice, the lag is too
small to significantly affect the performance of the controller.
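The recursion for Yk in Table 6-1 can be checked numerically. The sketch below (names are mine) reproduces the filtered response of Example 6-1 to within rounding:

```python
def derivative_unit(TD, T, alpha=0.1):
    """Series-form derivative unit of Table 6-1, acting on the
    measurement Ck; returns Yk at each call."""
    den = T + alpha * TD
    state = {"Y": 0.0, "C": 0.0}

    def step(C):
        Y = (alpha * TD * state["Y"] + T * C
             + (alpha + 1.0) * TD * (C - state["C"])) / den
        state["Y"], state["C"] = Y, C
        return Y

    return step

# Example 6-1: T = 1 s = 1/60 min, TD' = 0.5 min, ramp of 1 % per sample
unit = derivative_unit(TD=0.5, T=1.0 / 60.0)
Y = [unit(float(k)) for k in range(5)]
```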

Eliminating Proportional Kick on Set Point Changes

Similar to the derivative kick, the sudden change in controller output
caused by the proportional mode right after a change in set point is known
as “proportional kick,” although it is not a pulse. It too can be eliminated
by replacing the error with the negative of the process variable in the
proportional term of the parallel controller of Table 6-1, or with the output
of the derivative unit Yk in the series controller. Once again, modern
computer and microprocessor-based controllers offer the option of having
the proportional mode act on either the error or on the process variable.
The option must be selected on the following basis:

• If the controller is a main controller with infrequent changes in set
point, the proportional mode should act on the process variable.
This allows the controller to be tuned for disturbance inputs (higher
gain) without the danger of large overshoots on sudden set point
changes (see Section 4-3).

• If the controller is the slave of a cascade control scheme (see Unit 7),
the proportional mode must act on the error. Otherwise, when the
main controller changes the set point of the slave, the slave would

not respond immediately, as it must if the cascade scheme is to work.

It is important to realize that the reason the proportional-on-measurement
option is selected is to allow the operator to make changes in set point
without fear of causing a sudden change in the controller output. As
would be expected, the resulting approach to the new set point will be
slower than if the proportional term acted on the error. The rate of
approach to set point is controlled by the reset or integral time when the
proportional-on-measurement option is selected.

As in the case of the derivative-on-measurement option, the performance
of the controller on disturbance inputs is the same whether the
proportional mode acts on the error or on the measurement. This is
because in both cases the set point does not change.

Nonlinear Proportional Gain

Practically all modern computer and microprocessor-based controllers
offer the option of a nonlinear gain parameter. The purpose of this feature
is to have the proportional gain increase as the error increases:

Kc = KL(1 + KNL|Ek|) (6-2)


where

KL = the gain at zero error, %C.O./%T.O.

KNL = the increase in gain per unit increase in error

and the bars around the error indicate the absolute value or magnitude of
the error. By using the absolute value of the error the gain increases when
the error increases in either the positive or the negative directions.

The nonlinear gain is normally used with averaging level controllers (see
Section 5-3) because it allows a wider variation of the level near the set
point while still preventing the tank from overflowing or running dry, as
illustrated in Figure 6-3. The nonlinear gain allows greater smoothing of
flow variations with a given tank, that is, makes the tank look bigger than
it is, as long as the flow varies near the middle of its range. Some computer
controllers provide the option of having a zero gain at zero error, a feature
that is desirable in some pH control schemes.

The following example illustrates how to determine the nonlinear gain
parameter for an averaging level controller.


Figure 6-3. Controller Output versus Process Variable for an Averaging Level Controller with
Nonlinear Gain

Example 6-2. Adjusting the Nonlinear Gain. An averaging level
controller is proportional only with a gain of 1%C.O./%T.O., a set point of
50 percent, and an output bias of 50 percent. Determine the value of the
nonlinear gain that would be required to reduce the gain at zero error to
0.5%C.O./%T.O. while still keeping the tank from overflowing or running
dry.

To prevent the tank from overflowing or running dry, the valve must be
fully opened when the level is at 100 percent of range and closed when the
level is at 0 percent. Since the set point is 50 percent, either of these
requirements takes place when the magnitude of the error is 50 percent.
With the output bias of 50 percent, using the upper limit requirement in
Eq. 6-2, we get:

100% = 50% + Kc(100% - 50%)

= 50% + 0.5[1 + KNL(50%)](50%)

KNL = [(100 - 50)/((0.5)(50)) - 1]/50 = 0.02

or a 2 percent increase in gain per percentage point increase in error. The
proportional gain then increases from 0.5 at zero error to 1.0 at 50 percent
error. Recall from Eq. 5-1 that the time constant of the tank is inversely
proportional to the controller gain. Thus, for smoothing flow variations,
the effective capacity of the tank increases from its real value at full and
zero flow to twice that value at half flow.
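The arithmetic of Example 6-2 can be verified with a short script (a sketch; the names are mine):

```python
def nonlinear_gain(KL, KNL, error):
    """Eq. 6-2: proportional gain that grows with the error magnitude."""
    return KL * (1.0 + KNL * abs(error))

# Example 6-2: bias and set point of 50%, gain at zero error KL = 0.5.
# Solving 100% = 50% + KL*(1 + KNL*50%)*(50%) for KNL:
KL, bias, e_max = 0.5, 50.0, 50.0
KNL = ((100.0 - bias) / (KL * e_max) - 1.0) / e_max      # = 0.02
# Check: the output reaches 100% when the error magnitude is 50%
output = bias + nonlinear_gain(KL, KNL, e_max) * e_max   # = 100.0
```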

This section introduced the most common discrete controllers and the
options that their configurable nature makes possible. The next section
concerns the tuning of these controllers.

6-2. Tuning Computer Feedback Controllers


Although the tuning formulas of Units 2 and 4 are intended for
continuous controllers, they can be applied to computer controllers as
long as you take the effect of sampling into consideration. This section
presents a simple correction that can be made to the tuning formulas to
compensate for the effect of sampling. It also introduces formulas that are
specifically applicable to discrete controllers.

Tuning by Ultimate Gain and Period

Unit 2 presented the formulas for quarter-decay ratio response based on
the ultimate gain and period of the loop. They can be applied directly to
computer controllers because the effect of sampling is accounted for in the
experimentally determined ultimate gain and period. Increasing the
sampling interval decreases the ultimate gain and increases the ultimate
period because slower sampling makes the feedback control loop less
controllable and slower.

Tuning by First-Order-Plus-Dead-Time Parameters

When the controller is tuned using the process parameters of gain, time
constant, and dead time that were estimated by the methods presented in
Unit 3, the effect of sampling is not included in the process model. This is
because the process model is obtained from a step test in controller output
(as we learned in Unit 3), and such a step will always take place at a
sampling instant and remains constant after that.

Moore and his coworkers developed a simple correction for the controller
tuning parameters to account for the effect of sampling.1 They pointed out
that when a continuous signal is sampled at regular intervals of time and
then reconstructed by holding the sampled values constant for each
sampling period, the reconstructed signal is effectively delayed by
approximately one half the sampling interval (as shown in Figure 6-4).
Now, as Figure 6-1 shows, the digital-to-analog converter holds the output
of the digital controller constant between updates, thus adding one half
the sampling time to the dead time of the process components. To correct
for sampling, one half the sampling time is simply added to the dead time
obtained from the step response. The uncontrollability parameter is then
given by the following:
    Pu = (t0 + T/2)/τ                                    (6-3)


Figure 6-4. Effective Delay of the Sample and Hold (DAC) Unit

where

Pu = the uncontrollability parameter

t0 = the process dead time, min

τ = the process time constant, min

T = the sample interval, min

This equation was presented without justification in Unit 4 as Eq. 4-2. It
was presented there to ensure that you don’t overlook this important
correction when you tune digital controllers.
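In code, the correction of Eq. 6-3 is one line. A sketch using the heater model parameters of Unit 3 (the function name is mine):

```python
def uncontrollability(t0, tau, T):
    """Eq. 6-3: uncontrollability parameter with the sampling
    correction, adding half the sample time to the step-test dead time."""
    return (t0 + T / 2.0) / tau

# Heater model (tau = 0.56 min, t0 = 0.17 min) sampled every 4 s:
Pu = uncontrollability(t0=0.17, tau=0.56, T=4.0 / 60.0)   # about 0.36
```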

Tuning Formulas for Discrete Controllers

Dahlin introduced a procedure for synthesizing computer-based
controllers in the late 1960s.2 This synthesis procedure can be used to
develop tuning formulas for discrete controllers. The advantage of these
tuning formulas, shown in Table 6-2, is that they account exactly for the
effect of sampling, so they apply over any set of values of the process
parameters and the sampling time. For details on the derivation of these
formulas, see Smith and Corripio.3

Table 6-2. Tuning Formulas for Discrete PID Controller

Given the process parameters:
    K  = process gain, %T.O./%C.O.
    τ1 = process time constant, min
    τ2 = second process time constant (zero if unknown), min
    t0 = process dead time, min
    T  = sampling interval, min
    q  = an adjustable parameter, in the range of 0 to 1

Let N = t0/T,  a1 = e^(-T/τ1),  a2 = e^(-T/τ2)

Tuning Formulas for the Parallel Controller:

    Kc = (1 - q)(a1 - 2a1a2 + a2) / { K(1 - a1)(1 - a2)[1 + N(1 - q)] }

    TI = T(a1 - 2a1a2 + a2) / [(1 - a1)(1 - a2)]

    TD = T a1 a2 / (a1 - 2a1a2 + a2)

Tuning Formulas for the Series Controller:

    Kc' = (1 - q) a1 / { K(1 - a1)[1 + N(1 - q)] }

    TI' = T a1 / (1 - a1)

    TD' = T a2 / (1 - a2)

The formulas of Table 6-2 contain an adjustable parameter q that affects
only the controller gain. This parameter is adjusted in the range of 0 to 1 to
shape the tightness of the closed-loop response. If the model parameters
were an exact fit of the process response, the value of q would be the
fraction of the error at any one sample that will remain after one dead time
plus one sample. For example, setting q = 0 specifies that the process
variable should match the set point after N + 1 samples, where N is the
number of samples of dead time. This would result in the highest gain and
therefore in the tightest control. However, for any value of q the tightness
of the closed-loop response depends on the ratio of the sample time to the
dominant process time constant, T/τ1. A more fundamental adjustable
parameter is the closed-loop time constant τc which can be related to the
time parameters of the process—short for fast processes and long for slow

processes. If τc is specified, the value of q can be computed by the
following:

    q = e^(-T/τc)                                        (6-4)

Setting q = 0 results in an upper limit for the controller gain. This value can
be used as a guide for the initial tuning of the controller. As is the case
with the tuning formulas presented in Unit 4, the upper limit of the
controller gain decreases with increasing process dead time, parameter N.

To tune the controller, the formulas of Table 6-2 require two process time
constants, τ1 and τ2. When only one time constant is available, the second
time constant τ2 is set to zero. This results in a PI controller because both a2
and the derivative time are zero.
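The series formulas of Table 6-2, together with Eq. 6-4, reduce to a few lines of Python. This is a sketch (the function names are mine, and rounding N to a whole number of samples is my assumption):

```python
import math

def series_tuning(K, tau1, t0, T, tau2=0.0, q=0.0):
    """Series PID tuning formulas of Table 6-2. With tau2 = 0 the
    derivative time is zero and the result is a PI controller."""
    N = round(t0 / T)                       # dead time in whole samples
    a1 = math.exp(-T / tau1)
    a2 = math.exp(-T / tau2) if tau2 > 0.0 else 0.0
    Kc = (1.0 - q) * a1 / (K * (1.0 - a1) * (1.0 + N * (1.0 - q)))
    TI = T * a1 / (1.0 - a1)
    TD = T * a2 / (1.0 - a2) if a2 > 0.0 else 0.0
    return Kc, TI, TD

def q_from_tauc(T, tauc):
    """Eq. 6-4: q for a specified closed-loop time constant tauc."""
    return math.exp(-T / tauc)
```

For the heater model of Example 3-2 (K = 1, τ = 0.56 min, t0 = 0.17 min) and T = 1 s, the function returns Kc ≈ 3.0 and TI ≈ 0.55 min.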

As mentioned earlier, the formulas of Table 6-2 are applicable to any value
of the process parameters and the sample time. In addition, with these
formulas the controller gain can be adjusted to obtain fast response with
reasonable variation in the controller output. The formulas are highly
recommended because they relate the integral and derivative times to the
process time constants, thus reducing the tuning procedure to the
adjustment of the controller gain. The following example illustrates the
use of the formulas of Table 6-2 for the temperature control of the steam
heater.

Example 6-3. Computer Control of Temperature in Steam Heater. Use
the tuning formulas of Table 6-2 to tune the temperature controller for the
heater of Figure 3-1. Use sample times of 1, 2, 4, 8, and 16 s and the series
PID controller. The process parameters for the heater were determined in
Example 3-2. Using the two-point method, they are as follows:

K = 1 %T.O./%C.O.    τ = 0.56 min    t0 = 0.17 min

As the model has only one time constant, the derivative time resulting
from Table 6-2 is zero. That means that the controller becomes a PI
controller. The calculation of the tuning parameters is outlined in the
following table:

Sample time, s                     1      2      4      8      16
Dead time, N                       10     5      3      1      0
Maximum Kc (q=0), %C.O./%T.O.      3.0    2.7    2.0    1.9    1.6
Integral time, min                 0.55   0.54   0.53   0.50   0.44

Notice that the maximum gain is lower and the integral time shorter as the
sampling interval is increased. This means that the loop is less controllable
at the longer sample times. On the other hand, it is not accurate to say that
the sampling interval should always be as short as possible. Recall that for
a sample time of one second the controller must be processed four times
more often than for a sample time of four seconds. This increases the
workload of the computer or microprocessor and thus reduces the number
of loops it can process.

Figure 6-5 shows that a point of diminishing returns can be reached when
selecting the sample time. The figure shows the heater temperature control
responses for a PI controller using the tuning parameters presented in the
preceding table for a step increase in process flow to the heater and
sampling intervals of 1, 2, and 4 seconds. It is evident that the reduction in
sampling interval from two seconds to one does not significantly improve
the response.

Fast Process/Slow Sampling

When the sample time is more than three or four times the dominant
process time constant, the process reaches steady state after each controller
output move before it is sampled again. This may happen because the
process is very fast or because the sensor is an analyzer with a long cycle
time. For such situations, the formulas of Table 6-2 result in a pure integral
controller:

Mk = Mk-1 + KIEk (6-5)

where

    KI = (1 - q) / { K[1 + N(1 - q)] }

Notice that for the case N = 0 and q = 0, the controller gain is the reciprocal
of the process gain. This result makes sense since a loop gain of 1.0 is what
is needed to reduce the error to zero in one sample if the process reaches
steady state during that interval. An interesting application of this is a
chromatographic analyzer sampling a fast process. Because it is in the
nature of such analyzers that a full cycle is required to separate the
mixture and analyze it, the composition is not available to the controller
until the end of the analysis cycle. This means that the process dead time is
approximately one sample, or N = 1. For q = 0, Eq. 6-5 gives a gain of
KI = 1/[K(1 + 1)] = 1/(2K), or one half the reciprocal of the process gain. This also
makes sense because when the controller takes action, it takes two
sampling periods to see the result of that action, so the formula says to


Figure 6-5. Response of Heater Temperature with PI Controller Sampled at 1, 2, and 4 Second
Intervals

spread the corrective action equally over two samples. The following
example illustrates what happens when the steam heater is controlled
with a slow-sampling controller.
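A minimal simulation (the numeric values are arbitrary) shows the deadbeat behavior for N = 0: with KI = 1/K and a process that settles within one sample, the error is eliminated one sample after the first control move takes effect.

```python
K = 2.0                       # process steady-state gain, %T.O./%C.O.
KI = 1.0 / K                  # Eq. 6-5 gain for N = 0, q = 0
R = 50.0                      # set point, %T.O.
M, history = 0.0, []
for k in range(5):
    C = K * M                 # process settles within the sample
    M = M + KI * (R - C)      # Eq. 6-5
    history.append(C)
# history -> [0.0, 50.0, 50.0, 50.0, 50.0]
```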

Example 6-4. Slow Sampling of Steam Heater Outlet Temperature. For
the steam heater of Figure 3-1, calculate the maximum gain for the PI
controller using the formulas of Table 6-2 and sampling times of 32, 64,
128, and 256 seconds. Also calculate the gain of the pure integral controller,
given by KcT/TI (this is the same as the KI of Eq. 6-5).

This problem is just a continuation of the progression of the sample time in
Example 6-3. The results are summarized in the following table:

Sample time, s                     32     64     128     256
Dead time, N                       0      0      0       0
Maximum gain (q=0), %C.O./%T.O.    0.63   0.17   0.02    0.0005
Integral time, min                 0.34   0.19   0.048   0.0021
Integral gain, KcT/TI              1.0    1.0    1.0     1.0

As the sample time is increased, the proportional term disappears, while
the gain of the pure integral controller remains constant. Figure 6-6
compares the temperature control responses for a PI controller with the
tuning parameters for sample times of 4 and 32 seconds and q=0. The
disturbance is a step increase in process flow to the heater. Although the
slow sampling allows a larger initial deviation in temperature, the time to
return to set point is about the same, as is the overshoot in controller
output. This shows that the tuning formulas of Table 6-2 can be applied to
a wide range of the sample-time-to-time-constant ratio.


Figure 6-6. Response of Heater Temperature with PI Controller Sampled at 4 and 32 Second
Intervals

To summarize, the formulas presented in Table 6-2 can be used with a
first-order-plus-dead time process model, resulting in a PI controller. They
can also be used with a second-order-plus-dead time process model,
resulting in a PID controller. They are applicable over a wide range of
sample times and dead time-to-time-constant ratios.

6-3. Selecting the Controller Processing Frequency

Most microprocessor-based controllers (e.g., DCS) have a fixed processing
frequency of about one to ten output updates per second. For most
feedback control loops such a short sample time has no effect on controller
performance, and the controller can be considered to be continuous. On
the other hand, computer control systems, and higher-level DCS
functions, allow the control engineer to select the sampling interval of
each controller. In theory, the minimum sampling interval results in
maximum loop performance. However, there is a point of diminishing
returns where further reduction in the loop sample time results in minor
improvement in loop performance but at the expense of overloading the
process control system and limiting the number of loops it can process.

The relationship between sample time and controller performance is a
function of the time constant and dead time of the process. In fact, a good
way to analyze the selection problem is to look at the ratio of sample time
to process time constant versus the ratio of process dead time to time
constant, or process uncontrollability parameter.

It makes sense to ratio the sample time to the process time constant
because the relative change in the process output from one sample to the
next depends only on this ratio. That is, the relative change will be the
same for a process with a one-minute time constant sampled once every
five seconds as it is for a process with a ten-minute time constant sampled
every fifty seconds.

It also makes sense to relate the sample time to the uncontrollability
parameter because the dead time imposes a limit on controller
performance. The higher the dead time-to-time-constant ratio of the
process, the higher the sample-time-to-time-constant ratio at which this
limit is reached.

By definition, the loop gain is the product of the gains around the feedback
loop, KKc. You may use the value of this gain recommended by any of the
tuning methods, or alternatively the ultimate loop gain, to test the
sensitivity of controller performance to some parameter such as the
sampling frequency. This is because as the loop gain increases the effect of
disturbances on the process variable decreases. In Figure 6-7, the
maximum loop gain, which is calculated using the tuning formula from
Table 6-2, is plotted against the sample-time-to-time-constant ratio for

different dead time-to-time-constant ratios. The process is modeled by a
first-order-plus-dead time model, and the calculations are similar to those
of Example 6-3.

The graphs of Figure 6-7 show that as the sample-time-to-time-constant
ratio is decreased, the maximum loop gain approaches an upper limit. The
sample time at which this happens depends on the dead time. However,
observe that the loop gain does not increase much as the sample time is
decreased beyond a value of roughly one-tenth the time constant, except
for very low dead time-to-time-constant ratios. Even for that exception the
loop gain is very high at a sample time of one-tenth the time constant.
Thus, a rule of thumb for selecting the sample time could be as follows:
set the sample time to about one-tenth the dominant time constant of
the loop. There are two exceptions to this rule:

• When the dead time is greater than the time constant, longer sam-
ple times may be used because the performance of the loop is lim-
ited by the dead time and not the sample time. This can be verified
by observing the curve for t0/τ = 1 in Figure 6-7; the loop gain is
low and essentially independent of the sample time.

• When the dead time is less than one-tenth the time constant, and a
high gain is desired for the loop, a shorter sample time should be used.

By selecting the proper sample time for each loop, the control engineer can
increase the number of loops the process control system can handle
without experiencing deterioration of performance.
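These guidelines can be condensed into a small helper. The specific factors used for the two exceptions are illustrative choices of mine, not values from the text:

```python
def suggest_sample_time(tau, t0):
    """Rule-of-thumb sample time selection: about one-tenth the
    dominant time constant, with the two exceptions noted above."""
    if t0 > tau:              # dead-time-limited loop: sampling not critical
        return t0 / 2.0       # a longer sample time is acceptable
    if t0 < tau / 10.0:       # very controllable loop, high gain desired
        return tau / 20.0     # use a shorter sample time
    return tau / 10.0         # the general rule of thumb
```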

Figure 6-7. Effect of Sample Time on Maximum Proportional Gain (q = 0)



Optimizing Feedback Loops

Many modern computer control installations use feedback controllers to
minimize the consumption of energy and to maximize production rate. A
very common example of such control loops is the technique of “valve
position control,” in which a controller looks at the output of another
controller or valve position and keeps it close to fully opened or fully
closed. Such controllers are designed to drive the process toward its
constraints over a very long time period, and their sample times should be
much longer than the sample time of the controller whose output they
control, maybe thirty times or longer. This is to prevent the valve position
controller from continuously introducing disturbances into the control
system.

Sometimes the valve position controller is designed with a “gap” or dead
band around its set point so it only takes action when the controlled valve
position is outside that dead band. Once again, the purpose of the gap is to
prevent the valve position controller from introducing disturbances and
interaction into the control system.
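The gap action reduces to a simple conditional. A sketch of one slow execution of an integral-only valve position controller (the names, the 5 percent gap, and the gain are assumptions for illustration):

```python
def vpc_step(M, valve_position, target=90.0, gap=5.0, KI=0.05):
    """One execution of a valve position controller with a dead band:
    leave the output alone while the valve is inside the gap."""
    error = target - valve_position
    if abs(error) <= gap:
        return M                   # inside the dead band: no action
    return M + KI * error          # slow integral action toward target
```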

6-4. Compensating for Dead Time

To this point, this module has clearly established that feedback controllers
cannot perform well when the process has a high ratio of dead time to
time constant. The total loop gain must be low for such processes, which
means that the deviations of the controlled variable from its set point
cannot be kept low in the presence of disturbances. One way to improve
the performance of the feedback controller for low controllability loops is
to design a controller that compensates explicitly for the process dead
time. This section presents two controllers that have been proposed to
compensate for dead time, the Smith Predictor and the Dahlin Controller.

Dead time compensation requires you to store and play back past values
of the controller output. Not until the advent of computer-based
controllers was the storage and playback of control signals possible.
Computer memory makes possible the storage and retrieval of past
sampled values.

The Smith Predictor

Smith proposed a dead time compensator that consisted of an internal
model of the process, which was to be driven on line by the controller
output and continuously compared with the controlled variable to correct
for model errors and disturbances.4 A block diagram of the scheme,
known as the “Smith Predictor,” is shown in Figure 6-8.

Figure 6-8. Block Diagram of Smith Predictor

Notice that in the process model the dead time term is separated from the
rest of the model transfer function. This is done so the model output, after
being corrected
for model error and disturbance effects, can be fed to the feedback
controller in such a way that the process dead time is bypassed, hence
compensating for dead time.
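The scheme can be sketched in a few lines of discrete-time Python. The first-order-plus-dead-time model, the PI tuning, and all numerical values below are illustrative assumptions, not values from the text:

```python
# Minimal discrete-time sketch of a Smith Predictor. The first-order-plus-
# dead-time model and the PI tuning below are illustrative assumptions.
import math
from collections import deque

K, tau, N = 1.0, 10.0, 5           # model: gain, time constant, dead time (samples)
T = 1.0                            # sample time
a = math.exp(-T / tau)             # discrete pole of the first-order model

Kc, Ti = 2.0, 10.0                 # assumed PI tuning
y_model = 0.0                      # model output WITHOUT the dead time
delayed = deque([0.0] * N, maxlen=N)  # stores the last N model outputs
m, integral = 0.0, 0.0

def smith_pi_step(y_process, setpoint):
    """One controller execution: the PI acts on the undelayed model output,
    corrected by the mismatch between the process and the delayed model."""
    global y_model, m, integral
    # advance the dead-time-free model with the previous controller output
    y_model = a * y_model + K * (1.0 - a) * m
    y_delayed = delayed[0]         # model output from N samples ago
    delayed.append(y_model)
    # feedback signal that bypasses the process dead time
    feedback = y_model + (y_process - y_delayed)
    error = setpoint - feedback
    integral += (Kc * T / Ti) * error
    m = Kc * error + integral      # PI output to the process
    return m
```

Because the PI controller sees the undelayed model output, it reacts as if the process had no dead time; the correction term (y_process minus the delayed model output) brings in model errors and disturbances, as described above.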

A disadvantage of the Smith Predictor is that, although it requires a model
of the process, it does not use it to design or tune the feedback controller.
As a result, it ends up with too many adjustable parameters: the model
parameters plus the controller tuning parameters. Because there are so
many parameters to adjust, there is no convenient way to adjust the
closed-loop response when the model does not properly fit the process.
Given the nonlinear nature of process dynamics, any technique that
depends heavily on exact process modeling is doomed to fail.

The Dahlin Controller

The controller synthesis procedure introduced by Dahlin produces a
feedback controller that is exactly equivalent to the Smith Predictor, but
with the advantage that the controller tuning parameters are obtained
directly from the model parameters.2 Those interested in the details of the
derivation can refer to Smith and Corripio.3

The Dahlin dead time compensation controller can be reduced to a PID
controller with an extra term. The only modification to the controllers of
Table 6-1 is in the calculation of the controller output:

M(k) = M(k-1) + ∆M(k) + (1 - q)[M(k-N-1) - M(k-1)]     (6-6)

where ∆M(k) can be computed by either the series form or the parallel
controller of Table 6-1. The last term in the calculation of the output
provides the dead time compensation. Notice that the term vanishes when
there is no dead time, N = 0. The actual controller is tuned with the
formulas from Table 6-2, except for the controller gain, which is given by
the following:

Parallel:

Kc = (1 - q)(a1 - 2 a1 a2 + a2) / [K (1 - a1)(1 - a2)]     (6-7)

Series:

Kc' = (1 - q) a1 / [K (1 - a1)]     (6-8)

Comparing these formulas with the corresponding formulas in Table 6-2
shows that these lack the term [1 + N(1 - q)] in the denominator. Recall that
this term decreases the controller gain to account for dead time. Since the
controller of Eq. 6-6 explicitly compensates for dead time, its gain can be
higher.
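The output calculation of Eq. 6-6 can be sketched in Python as follows. The incremental (velocity-form) PI term used here for ∆M(k) and all tuning numbers are illustrative placeholders, not values from the text:

```python
# Sketch of the Dahlin dead-time-compensated output calculation, Eq. 6-6:
#   M(k) = M(k-1) + dM(k) + (1 - q)[M(k-N-1) - M(k-1)]
# dM(k) below is a simple incremental (velocity-form) PI term; all tuning
# values are illustrative placeholders.
q = 0.5                       # tuning parameter (0 <= q < 1)
N = 3                         # samples of dead time
Kc, Ti, T = 5.4, 0.54, 0.05   # gain, integral time, sample time

M_hist = [50.0] * (N + 1)     # past outputs M(k-N-1), ..., M(k-1), in %C.O.
e_prev = 0.0                  # previous error

def dahlin_step(error):
    """One controller execution of Eq. 6-6 with an incremental PI dM(k)."""
    global e_prev
    dM = Kc * ((error - e_prev) + (T / Ti) * error)    # incremental PI term
    M_prev, M_delayed = M_hist[-1], M_hist[0]          # M(k-1), M(k-N-1)
    M = M_prev + dM + (1.0 - q) * (M_delayed - M_prev) # Eq. 6-6
    e_prev = error
    M_hist.append(M)           # shift the stored history of outputs
    del M_hist[0]
    return M
```

Note that the compensation term vanishes whenever the stored output M(k-N-1) equals M(k-1), which is always the case when N = 0, just as the text states.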

The Dahlin Controller is used extensively to control processes with long
dead times. A common application is the control of paper machines,
where the properties of the paper can only be measured after it has gone
through the drying process, which introduces significant dead time. One
characteristic of this application is that the dead time is relatively constant
and can be determined precisely. Dead time compensation presents
problems in other processes in which the dead time depends on flow and
other process variables (see Section 3-5).

Example 6-5. Dead Time Compensation Control of Steam Heater.


Compare the response of the temperature controller for the steam heater
of Figure 3-1 with and without dead time compensation. Use a series PI
controller with a sample time of 0.05 min, which is approximately one-
tenth of the time constant (0.56 min). The dead time compensation term
requires three samples of dead time:
N = int(t0/T) = int(0.17/0.05) = 3

Using the formulas of Table 6-2 for the series controller, the tuning
parameters are as follows:
a1 = e^(-0.05/0.56) = 0.915    a2 = 0

Without dead time compensation and with q = 0.5, we get:

Kc = (1-0.5)(0.915/0.085)/(1)[1+3(1-0.5)] = 2.2%C.O./%T.O.

TI = (0.915/0.085)*0.05 = 0.54 min

With dead time compensation, we get:

Kc = (1-0.5)(0.915/0.085)/1 = 5.4%C.O./%T.O.

TI = 0.54 min
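The arithmetic above can be reproduced in a few lines (Python is used here only to verify the numbers; the slightly different results come from carrying full precision in a1 rather than rounding to 0.915/0.085 first, as the text does):

```python
# Reproducing the tuning arithmetic of Example 6-5 (series PI, steam heater).
import math

K, tau, t0, T, q = 1.0, 0.56, 0.17, 0.05, 0.5

N = int(t0 / T)                  # samples of dead time
a1 = math.exp(-T / tau)          # ~0.915

# Series gain without dead time compensation (Table 6-2 form):
Kc_plain = (1 - q) * a1 / (K * (1 - a1) * (1 + N * (1 - q)))

# Series gain with dead time compensation (Eq. 6-8):
Kc_comp = (1 - q) * a1 / (K * (1 - a1))

# Integral time:
TI = (a1 / (1 - a1)) * T         # ~0.54 min
```

With full precision the gains come out near 2.1 and 5.4 %C.O./%T.O.; the text's 2.2 and 5.4 follow from rounding a1/(1 - a1) to 0.915/0.085 before dividing.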

Figure 6-9 compares the responses of the controllers to a step increase in
process flow to the heater. The dead time compensation controller results
in a smaller deviation from set point and less oscillation than the regular
PI controller. In this case, dead time compensation also results in a smaller
overshoot in the controller output. The value of q = 0.5 was selected to
prevent excessive variability in the controller output.

More sophisticated dynamic compensation controllers have been
proposed in the past few years, for example, the Vogel-Edgar controller5
and Internal Model Control.6 These controllers can incorporate a more
precise compensator than the Dahlin Controller, provided that a precise
model of the process is available. Nevertheless, the Dahlin Controller has
been applied successfully to the control of paper machines and other
processes with high dead time-to-time-constant ratios.

Figure 6-9. Response of Temperature Controller for Steam Heater (a) without Dead Time
Compensation, and (b) with Dead Time Compensation (controller output M, %C.O., versus
time in minutes)

6-5. Summary

This unit introduced computer feedback controllers and described how to
tune them and select the sample time for them. It is strongly
recommended that you use the controllers of Table 6-1 with the tuning
formulas of Table 6-2; they are the ones most commonly used in computer
control applications. For processes with high dead time-to-time-constant
ratios, the Dahlin dead time compensation controller, Eq. 6-6, is commonly
used in industry and also recommended here.

EXERCISES

6-1. How do computer controllers differ from analog controllers?

6-2. What is “derivative kick”? How is it prevented? Why is a “dynamic gain
limit” needed in the derivative term of the PID controller?

6-3. How and why would you eliminate “proportional kick” on set point
changes? Will the process variable approach its set point faster or slower
when proportional kick is avoided? When must proportional kick be
allowed?

6-4. Why is it important to differentiate between series and parallel versions of
the PID controller? When doesn't it matter?

6-5. What is the advantage of the nonlinear proportional gain in averaging level
control situations? In such a case, what must the nonlinear gain be for the
gain to be 0.25%C.O./%T.O. at zero error and still have the controller
output reach its limits when the level reaches its limits (0 and 100%)?
Assume a level set point of 50%T.O. and an output bias of 50%C.O.

6-6. A process has a gain of 1.6%T.O./%C.O., a time constant of 10 min, and a
dead time of 2.5 min. Calculate the tuning parameters for a discrete PID
controller if the sample time is (a) 4 s, (b) 1 min, (c) 10 min, and (d) 50 min.

6-7. Repeat exercise 6-6, but for a PID controller with dead time compensation.
Specify also how many samples of dead time compensation, N, must be used
in each case.

6-8. What is the basic idea behind the Smith Predictor? What is its major
disadvantage? How does the Dahlin Controller with dead time
compensation overcome the disadvantage of the Smith Predictor?

REFERENCES

1. C. F. Moore, C. L. Smith, and P. W. Murrill, “Simplifying Digital
Control Dynamics for Controller Tuning and Hardware Lag
Effects,” Instrument Practice, vol. 23 (Jan. 1969), p. 45.
2. E. B. Dahlin, “Designing and Tuning Digital Controllers,”
Instruments and Control Systems, vol. 41 (June 1968), p. 77.
3. C. A. Smith and A. B. Corripio, Principles and Practice of Automatic
Process Control, 2d ed. (New York: Wiley, 1997), Chapter 15.
4. O. J. M. Smith, “Closer Control of Loops with Dead Time,”
Chemical Engineering Progress, vol. 53 (May 1957), pp. 217-19.

5. E. F. Vogel and T. F. Edgar, “A New Dead Time Compensator for
Digital Control,” Proceedings ISA/80 (Research Triangle Park, NC:
ISA, 1980).
6. C. E. Garcia and M. Morari, “Internal Model Control. 1. A Unifying
Review and Some Results,” Industrial and Engineering Chemistry
Process Design and Development, vol. 21 (1982), pp. 308-23.
UNIT 7

Tuning Cascade Control Systems


Cascade control is a common strategy for improving the performance of
process control loops. In its simplest form it consists of closing a feedback
loop inside the primary control loop by measuring an intermediate
process variable. This unit presents an overview of cascade control and the
tuning of cascade control systems.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Know when to apply cascade control and why.

B. Select the control modes and tune the controllers in a cascade
control system.

C. Recognize reset windup in cascade control systems and know how
to prevent it.

7-1. When to Apply Cascade Control

Figure 7-1 shows a typical cascade control system for controlling the
temperature in a jacketed exothermic chemical reactor. The control
objective is to control the temperature in the reactor, but instead of having
the reactor temperature controller, TC 1, directly manipulate the jacket
coolant valve, the jacket temperature is measured and controlled by a
different controller, TC 2, which is the one that manipulates the valve. The
output of the reactor temperature controller, TC 1 (or “master” controller)
is connected or cascaded to the set point of the jacket temperature
controller, TC 2 (or “slave” controller). Notice that only the reactor
temperature set point is maintained at the operator set value. The jacket
temperature set point changes to whatever value is required to maintain
the reactor temperature at its set point. A block diagram of the reactor
cascade control strategy, shown in Figure 7-2, clearly shows that the slave
control loop is inside the master control loop.

There are three major advantages to using cascade control:

• Any disturbances that affect the slave variable are detected and
compensated for by the slave controller before they have time to
affect the primary control variable. Examples of such disturbances
for the reactor of Figure 7-1 are the coolant inlet temperature and
header pressure.


Figure 7-1. Cascade Temperature Control on a Jacketed Exothermic Chemical Reactor

Figure 7-2. Block Diagram of Cascade Control System

• The controllability of the outside loop is improved because the
inside loop speeds up the response of the process dynamic elements
between the control valve and the slave variable. In the reactor
example, the speed of response of the jacket is increased, which
results in a more controllable loop for the reactor temperature.

• Nonlinearities of the process in the inner loop are handled by that
loop and removed from the more important outer loop. In the
reactor example, the cascade arrangement makes the nonlinear
relationship between temperature and coolant flow a part of the
inner loop, while the outer loop enjoys the linear relationship
between reactor and jacket temperatures. As the inner loop should
be more controllable than the overall loop, variations in the process
gain are less likely to cause instability when they are isolated in
the inner loop.

In comparison with simple feedback control, cascade control requires you
to invest in an additional sensor (TT) and controller (TC 2). It is therefore
important that the three advantages just described result in significant
improvement in control performance. Such improvement depends on the
inner loop responding faster than the outer loop because all three
advantages depend on it. If the inner loop is not faster than the outer loop
three problems will arise. First, disturbances into the inner loop will not be
eliminated fast enough to prevent the primary control variable from being
affected. Second, instead of speeding up the inner loop, the cascade would
decrease the controllability of the overall loop because its
dead-time-to-time-constant ratio would increase. Third, nonlinearities would become a
part of the slower and possibly less controllable inner loop, thus affecting
the stability of the control system.

The success of cascade control requires one other condition besides the
inner loop being faster than the outer loop: the sensor of the inner loop
must be fast and reliable. One would not consider, for example, cascading
a temperature controller to a chromatographic analyzer controller. On the
other hand, the sensor for the inner loop does not have to be accurate, only
repeatable, because the integral mode in the master controller
compensates for errors in the measurement of the slave variable. In other
words, it is acceptable for the inner loop sensor to be wrong as long as it is
consistently wrong.

Finally, it should be pointed out that cascade control would not be able to
improve the performance of loops that are already very controllable, as,
for example, liquid level and gas pressure control loops. Similarly, cascade
control cannot improve the performance of loops when the controlled
variable does not need to be tightly maintained around its set point, for
example, in averaging level control. When a level controller is cascaded to
a flow controller it is usually justified on the grounds that it provides
greater flexibility in the operation of the process, not because of improved
control performance.

Now that we have looked at the reasons and requirements for using
cascade control, the following sections will consider how to select the
controller modes for cascade control systems and how to tune them.

7-2. Selecting Controller Modes for Cascade Control

In a cascade control system the master controller has the same function as
the controller in a single feedback control loop: to maintain the primary
control variable at its set point. It follows that the selection of controller
modes for the master controller should follow the same guidelines
presented for a single controller in Unit 5. On the other hand, because the
function of the slave controller is not the same as that of the master or
single controller, it requires different design guidelines.

Unlike the master or single feedback controller, the slave controller must
constantly respond to changes in set point, which it must follow as quickly
as possible with a small overshoot and decay ratio. It is also desirable that
the slave controller transmit changes in its set point to its output as
quickly as possible and, if possible, to amplify them because the output of
the slave controller is the one that manipulates the final control element. If
the slave controller is to speed up the response of the master controller, it
must transmit changes in the master controller output (slave set point) to
the final control element at least as fast as if it were not there. It is evident
then that the slave controller must have the following characteristics:

• It must have proportional mode.

• The proportional mode must act on the error signal.

• The slave controller should have a proportional gain of one or
greater if stability permits it.

If the gain of the slave controller is greater than one, changes in the master
controller output result in higher immediate changes in the final control
element than is the case when a single feedback loop is used. This
amplification results in the master loop having a faster response.

Integral Mode in the Slave Controller

Whether you should use integral and derivative modes on the slave
controller will depend on the application. Recall from previous units that
adding integral mode results in a reduction of the proportional gain, while
adding derivative mode results in an increase in the proportional gain.
This may suggest that all slave controllers should be proportional-
derivative (PD) controllers, but this is generally not the case.

As mentioned earlier, you do not need integral mode in the slave
controller to eliminate the offset because the integral mode of the master
controller can adjust the set point of the slave controller to compensate for
the offset. However, if the slave loop is fast and subject to large
disturbances, for example, a flow loop, the offset in the slave controller
would require the master controller to take corrective action and therefore
introduce a deviation of the primary controlled variable from its set point.
The use of a fast-acting integral mode on the slave controller would
eliminate both the need for corrective action on the part of the master
controller and the deviation in the primary controlled variable.

The integral mode should not be used in those slave loops in which the
gain is limited by stability. It should also be avoided in those slave loops in
which the disturbances into the inner loop do not cause large offsets in the
slave controller. The jacket temperature controller of the reactor in Figure
7-1 is a typical example of a slave loop that does not require integral mode.

Derivative Mode in the Slave Controller

A common rule states that derivative mode should not be used in both the
slave and master controllers. Moreover, since derivative mode would do
the most good on the less controllable loop, which is the outer loop, this
rule essentially comes down to stating that derivative mode should never
be used in the slave controller. There are two reasons for this rule. First,
having all three modes in both the master and slave controller results in
six tuning parameters, which, without the proper guidelines, makes the
tuning task more difficult. Second, it is undesirable to put two derivative
units in series in the loop. However, both of these reasons can be argued
away as follows:

• Guidelines, such as those presented in Units 2, 4, and 6, simplify the
task of tuning. For example, keeping the derivative time to about
one-fourth the integral time, or to one half the dead time when it is
known, reduces the number of parameters in the cascade loop to
four (two gains and two integral times).

• If you have the derivative of the slave controller act on the process
variable instead of on the error, it will not be in series with the
derivative unit in the master controller.

The purpose of the derivative unit in the slave controller is to compensate
for the sensor lag or loop dead time and to allow for a higher slave
controller gain with less overshoot and a low decay ratio. When the inner
loop is fast and very controllable, as for example in flow loops, the slave
controller does not require derivative mode.

7-3. Tuning Cascade Control Systems

The controllers in a cascade control system must be tuned from the inside
out. That is, the innermost loop must be tuned first, then the loop around
it, and so on. The block diagram of Figure 7-2 shows why this is so: the
inner loop is part of the process of the outer loop.

Each loop in a cascade system must be tuned tighter and faster than the
loop around it. Otherwise, the set point of the slave loop would vary more
than its measured variable, which would result in poorer control of the
master variable. Ideally, the slave variable should follow its set point as
quickly as possible, but with little overshoot and few oscillations. Quarter-
decay ratio response is not recommended for the slave controller because
it overshoots set point changes by 50 percent. The ideal overshoot for the
slave variable to a set point change is 5 percent.

After the inner loop is tuned, the master loop can be tuned to follow any
desired performance criteria by any of the methods discussed in Units 2, 4,
5, and 6. Since what is special on cascade systems is the tuning of the slave
loop, the next three sections will discuss some typical slave loops, namely,
flow, temperature, and pressure loops. Keep in mind, however, that any
variable, including composition, can be used as a slave variable provided
it can be measured fast and reliably.

Slave Flow Loop

In modern computer control systems flow is the innermost loop in most
cascade control schemes because it allows the operator to intervene in the
control scheme by taking direct control of the manipulated flow. Figure 7-3
shows a typical temperature-to-flow-control scheme. The flow transmitter
compensates for variations in the pressure drop across the control valve
and absorbs any nonlinearities of the valve. If the square root of the
differential pressure is extracted, the slave’s measured variable, and thus
the output of the master controller, becomes linear with the flow.
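The square-root extraction described above can be sketched as follows. The percent-of-range signal convention and the helper name are illustrative assumptions:

```python
# Sketch of square-root extraction for an orifice-type flow measurement.
# Flow is proportional to the square root of the differential pressure, so
# taking the square root of the d/p signal makes the slave measurement
# (and thus the master controller output) linear in flow. Percent-of-range
# signals are an illustrative convention.
import math

def flow_percent(dp_percent):
    """Convert a 0-100% differential-pressure signal to 0-100% flow."""
    dp_percent = min(max(dp_percent, 0.0), 100.0)   # clamp to signal range
    return 100.0 * math.sqrt(dp_percent / 100.0)
```

For example, a differential-pressure signal at 25% of range corresponds to 50% of full-scale flow, which is why the raw d/p signal by itself would make the slave loop gain vary with operating point.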

In a cascade scheme, the flow controller must be tuned tight. To
accomplish this, a proportional-integral (PI) controller can be used in
which the integral time is set equal to the time constant of the valve
actuator (see Section 5-2), and a gain is set near 1%C.O./%T.O. If
hysteresis or dead band in the valve position is a problem, the higher gain
of the flow controller helps reduce the variations in flow that are required
to overcome the hysteresis.

Slave Temperature Loop

Using temperature as the slave measured variable entails two difficulties:
the sensor lag and the possibility of reset windup. However, both of these
can be handled. Section 7-4 discusses the reset windup problem. The
sensor lag can be compensated for by using derivative mode in the slave
controller with the derivative time set equal to the sensor time constant.

Figure 7-3. Flow as the Slave Variable in a Cascade Control Scheme (Distillation Column
Reflux)

The derivative unit must act on the slave’s measured variable only, not on
the error, in order to prevent the connection of two derivative units in
series in the loop.

The reactor temperature control scheme of Figure 7-1 is a typical example
of a slave temperature controller. In this application, the temperature has
an advantage over the coolant flow: it compensates for changes in both
coolant header pressure and temperature, while coolant flow compensates
only for variations in coolant header pressure. The temperature controller
also closes a loop around the jacket, reducing its effective time constant
and thus making the reactor temperature control loop more controllable.

Slave Pressure Loop

Pressure is a good slave variable to use because it can be measured easily
and reliably with negligible time lag. Figure 7-4 shows a temperature-to-
pressure cascade system. The pressure in the steam chest in the reboiler
directly determines the heat transfer rate because it controls the steam
condensing temperature and therefore the difference in temperature
across the heat transfer area. Like temperature, using pressure involves the
difficulty of reset windup, which is discussed in Section 7-4.

Another difficulty with pressure as a slave variable is that it can move out
of the transmitter range and thus get out of control. For example, in the
scheme of Figure 7-4, if at low production rate the reboiler temperature
drops below 100°C (212°F), the pressure in the steam chest will drop below
atmospheric pressure, moving out of the transmitter range, unless the
pressure transmitter is calibrated to read negative pressures (vacuum).

Figure 7-4. Pressure as the Slave Variable in a Cascade Control Scheme (Distillation Column
Reboiler)

Computer Cascade Control

When both the master and the slave controllers are carried out on the
computer, the inner loop is usually processed at a higher frequency than
the outer loop. This is so the slave controller has time to respond to a set
point change from the master controller before the next change takes
place. Recall that the inner loop should respond faster than the outer loop.

One important consideration when cascading digital feedback algorithms
is ensuring bumpless transfer from manual to automatic. This is done by
initializing the output of the master controller to the measured (process)
variable of the slave controller when the loops are switched to automatic
control. This will make for a smooth transition to automatic in most cases.
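The initialization step can be sketched as follows; the Loop class and attribute names are illustrative, not from the text:

```python
# Sketch of bumpless manual-to-automatic initialization for a cascade
# pair. The Loop class and its attribute names are illustrative.
class Loop:
    def __init__(self, setpoint=0.0, measurement=0.0, output=0.0):
        self.setpoint = setpoint        # set point, % of range
        self.measurement = measurement  # process variable, % of range
        self.output = output            # controller output, %C.O.

def switch_to_automatic(master, slave):
    """Initialize the master output to the slave's measured variable so
    the slave set point does not step when the loops go to automatic."""
    master.output = slave.measurement
    slave.setpoint = master.output      # cascade connection: SP tracks master
```

Because the master output starts exactly at the slave's current measurement, the slave controller sees zero error at the instant of transfer and no bump is introduced.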

The following example of cascade control of the temperature in a jacketed
reactor illustrates how a properly tuned cascade control system can
improve control performance.

Example 7-1. Cascade Control of Jacketed Chemical Reactor. This
example shows how to tune the cascade control system for the jacketed
chemical reactor of Figure 7-1. For comparison purposes, the response of a
single reactor temperature controller is compared to the response of the
cascade control scheme. The single reactor temperature controller, TC-1,
manipulates the coolant valve directly. Meanwhile, in the cascade scheme
the reactor temperature controller, TC-1, sets the set point of a jacket
temperature controller, TC-2, which in turn manipulates the coolant valve,
as in Figure 7-1. For the purposes of this example the manual steam valve
is always closed.

To obtain the process parameters, perform a step test in coolant flow with
the controllers on manual, and record both the reactor temperature and
the jacket temperature. The following results are obtained from the
response of the reactor temperature:

K = 0.55%T.O./%C.O. τ = 8.7 min t0 = 4.4 min

The following results are obtained from the response of the jacket
temperature:

K = 0.82%T.O./%C.O. τ = 6.2 min t0 = 0 min

Although an increase in coolant flow results in a decrease in both the
reactor and jacket temperatures, the signs on the process gains are positive
because, for safety, the coolant valve fails open. As a result, an increase
in controller output results in a decrease in coolant flow and,
consequently, an increase in the temperatures, hence the positive gains.

Use the Ziegler-Nichols QDR tuning formulas in Table 4-1 to tune the
single reactor temperature series PID controller:

Kc' = 1.2(8.7)/(0.55)(4.4) = 4.3%C.O./%T.O.

TI' = 2.0(4.4) = 8.8 min TD' = 0.5(4.4) = 2.2 min

Use the parameters from the response of the jacket temperature to tune the
jacket temperature controller in the cascade scheme, TC-2. To gain good
response to set point changes from the master controller, use the IMC rules
presented in Section 4-2. Since the dead time is zero, a PI controller is
indicated, and its gain can be as high as is desired. To keep it reasonable,
use the following parameters:

Kc = 5%C.O./%T.O. TI = τ = 6.2 min TD = 0

Once you have tuned the jacket temperature controller TC-2, switch it to
automatic, and apply a step test in its set point with the reactor
temperature in manual. Record the response of the reactor temperature to
obtain the following results:

K = 0.62%T.O./%C.O. τ = 4.7 min t0 = 3.1 min



Comparing these results with those from the step test in coolant flow, you
see that the reactor temperature loop has both a shorter time constant
and a shorter dead time when the jacket temperature controller is used.
Recall, however, that these parameters depend on the tuning of the jacket
temperature controller. For example, if you used a higher gain for TC-2,
the time parameters would be shorter still.

The reactor temperature controller, TC-1, is now tuned for the preceding
parameters:

Kc' = 1.2(4.7)/(0.62)(3.1) = 2.9%C.O./%T.O.

TI' = 2.0(3.1) = 6.2 min TD' = 0.5(3.1) = 1.5 min
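The Ziegler-Nichols QDR series PID formulas applied twice in this example (Kc' = 1.2τ/(K t0), TI' = 2 t0, TD' = 0.5 t0, from Table 4-1) can be wrapped in a small helper to check both sets of numbers:

```python
# Ziegler-Nichols QDR tuning of a series PID from a first-order-plus-
# dead-time fit (K, tau, t0), as used twice in Example 7-1.
def zn_qdr_series_pid(K, tau, t0):
    """Return (Kc', TI', TD') for the series PID, QDR response."""
    Kc = 1.2 * tau / (K * t0)   # %C.O./%T.O.
    Ti = 2.0 * t0               # min
    Td = 0.5 * t0               # min
    return Kc, Ti, Td

# Single-loop fit (step test in coolant flow):
Kc1, Ti1, Td1 = zn_qdr_series_pid(0.55, 8.7, 4.4)   # ~4.3, 8.8, 2.2
# Fit with the jacket (slave) loop closed:
Kc2, Ti2, Td2 = zn_qdr_series_pid(0.62, 4.7, 3.1)   # ~2.9, 6.2, 1.55
```

The second set reproduces the values above (the text rounds TD' = 1.55 min to 1.5 min), and the lower gain with the cascade in service reflects the shorter dead time obtained there.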

Figure 7-5 compares the responses of the single reactor temperature
controller and the cascade control scheme to a 10°F step increase in coolant
temperature. As this disturbance is immediately detected and corrected
for by the slave controller, the reactor temperature in the cascade scheme
hardly deviates from its set point. The cascade scheme immediately
increases the coolant flow to compensate for the increase in coolant
temperature.

Figure 7-5. Reactor Temperature Response to Step Increase of 10°F in Coolant Inlet
Temperature. (a) Single Temperature Controller. (b) Reactor Temperature Cascaded to Jacket
Temperature

Figure 7-6 shows that the cascade control scheme also improves the
response of the reactor temperature for a step increase in feed flow to the
reactor. However, the improvement in performance is not as dramatic
because the feed flow has a direct effect on the reactor temperature, and
the jacket temperature controller cannot correct it in time. The
improvement in control is due to the faster response of the reactor
temperature to controller output in the cascade scheme. Notice the inverse
response of the temperature to the feed flow. This is because, as the
reactants are colder than the reactor, the increase in reactants flow causes
an immediate drop in temperature. At the same time the increase in flow
causes the reactants’ concentration to increase, which eventually results in
an increase in reaction rate and consequently in temperature.

Example 7-2 shows a very successful industrial application of cascade
control. It is an example of composition-to-composition cascade, which is
not very common. It also shows a three-level cascade control system,
where the flow controller is the lowest level.

Figure 7-6. Reactor Temperature Response to Step Increase of 10% in Feed Flow. (a) Single
Temperature Controller. (b) Reactor Temperature Cascaded to Jacket Temperature

Example 7-2. Control of Hydrogen/Nitrogen Ratio in an Ammonia
Synthesis Loop. Figure 7-7 shows a simplified diagram of the synthetic
ammonia process. Air, natural gas (N.G.), and steam are mixed in the
reforming furnace, and after the carbon dioxide (CO2) is removed, a
mixture of hydrogen and nitrogen is obtained and fed to the synthesis
loop compressor. The flow in the synthesis loop is about six to seven times
the flow of fresh feed because the synthesis reactor converts only about
15 percent of the hydrogen-nitrogen mixture to ammonia (NH3) in each
pass. Compared to the short time constant for the reforming process, this
high recycle-to-fresh-feed ratio results in a long time constant for the
synthesis loop. This is an ideal situation for cascade control.

The objective is to control the hydrogen-to-nitrogen ratio (H/N) of the
mixture entering the synthesis reactor at its optimum value (about 2.85 for
a slight excess of nitrogen). The master controller (AC 10) receives the
measurement of the composition at the reactor inlet from a very accurate
analyzer (AT). The output of the master controller adjusts the set point on
the slave controller (AC 11). The slave controller receives the measurement
of the composition of the fresh feed from a fast and inexpensive analyzer
(AT, usually a simple thermal conductivity detector), and its output
adjusts the ratio of air to natural gas. The ratio controller, in turn, adjusts
the set point of the process air flow controller (FC 2).

Figure 7-7. Cascade Control of Reactor Inlet Composition and Pressure in the Ammonia
Synthesis Loop

Example 7-2 illustrates our earlier point that the slave measurement does
not have to be accurate but does have to be fast. Errors in the slave
measurement are corrected by the integral mode of the master controller.
On the other hand, the measurement of the master controller can be slow,
but it must be accurate. Disturbances in the reforming process are handled
quickly by the slave controller before they have a chance to affect the
primary controlled variable.

Figure 7-7 also shows a pressure-to-flow cascade loop for controlling the
pressure in the synthesis loop. In this cascade, the master controller is the
pressure controller (PC 4), and the slave controller is the purge flow
controller (FC 4). The purge is a small stream removed from the loop to
prevent the accumulation of inert gases (argon and methane) and the
excess nitrogen.

Although analog controllers could carry out both cascade control loops of
Figure 7-7, computer control offers this scheme an unexpected virtue:
patience. For example, in one actual installation where the pressure
control scheme was carried out with analog controllers, the master
controller was operated on manual because it was swinging the purge
flow all over its range. This is because the process for this loop has a time
constant of about one hour. On the same installation a digital controller
with a sample time of five minutes and an integral time of forty-five
minutes was able to maintain the pressure at its optimum set point.

7-4. Reset Windup in Cascade Control Systems

Unit 4 showed that a discrepancy between the operating range of the
single feedback controller output and the control valve causes the
undesirable overshoot of the controlled variable when the control valve is
recovering from a period of saturation. Such range discrepancies are more
common in cascade control systems because the range of the transmitter
on the slave loop is usually wider than the operating range of the variable,
particularly when the slave variable is temperature or pressure.

To illustrate the problem of cascade windup, consider the start-up of the
jacketed reactor shown in Figure 7-1. Initially, both controllers are on
manual, with the cooling water valve closed and the steam valve
manually opened to bring the reactor up to the operating temperature, say
55°C. The jacket temperature transmitter, TT 2, has a range of 0°C to 120°C,
and the steam condenses at 110°C, which is the value of the jacket
temperature when the steam valve is closed and the cascade control
system is initialized and switched to automatic. To prevent overheating,
this is done before the reactor temperature reaches its set point, say when
it reaches 50°C.

Following the bumpless transfer procedure of the control program, the
control system initializes the output of the master controller to the
measured temperature of the slave controller, 110°C. At this time the jacket
temperature begins to drop because the steam has been turned off and the
reactor is at the lower temperature of 50°C, while the reactor temperature
slowly increases because of the heat of the reaction. During the time that
the reactor temperature is between 50°C and 55°C (its set point), the
control situation is as follows:

• The slave controller sees a jacket temperature below its set point
(110°C) and calls for the cooling water valve to remain closed.

• The master controller also sees its temperature below set point and
calls for an increase in the jacket temperature set point above the
current 110°C value.

Most computer and DCS controllers detect that the slave controller output
is limited or “clamped” at the closed position. They then prevent the
master controller from increasing its output because this would only result
in a call to close the coolant valve, which is already closed. Does this logic
prevent the cascade control system from winding up? Let us see what
happens next.

Notice that a gap has been created between the set point of the slave
controller, frozen at 110°C, and its measured temperature. As the reactor
temperature crosses its set point of 55°C, the master controller starts
decreasing the set point of the slave controller to bring the temperature
down. However, the coolant valve will not open until the set point of the
slave controller drops below its measured temperature, that is, until the
gap between the slave controller’s set point and its measured temperature
is overcome. Since the set point of the slave controller will change at a rate
controlled by the integral time of the master controller, it takes a long time
for the coolant valve to start to open. As a result, the reactor temperature
overshoots its set point badly, which is the most common symptom of
reset windup. By the time the coolant valve starts to open, the reactor
temperature has reached its trip point of 60°C, and the entire system must
be shut down by dumping the reactor contents into a pool of water below.
As you can see, in this case the saturation or “clamp limit” detection
system could not avoid reset windup.

One solution to this problem is to reinitialize the output of the master
controller to the measured value of the jacket temperature. This can be
done as long as the slave controller output is clamped. In this solution, the
gap that causes the windup is eliminated, and the coolant valve opens the
moment the reactor temperature crosses its set point because, at that point,
the master controller calls for a lower jacket temperature than its current
value, and the slave controller responds by opening the coolant valve.
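The reinitialization logic can be sketched in a few lines, a minimal illustration in Python assuming all signals are in percent of range (the function name and clamp limits are illustrative, not from the book):

```python
def reinitialize_master(master_out, slave_out, slave_pv,
                        clamp_lo=0.0, clamp_hi=100.0):
    """While the slave output is clamped (valve fully closed or open),
    track the master output (the slave set point) to the slave's
    measured variable so no set-point/measurement gap can build up."""
    if slave_out <= clamp_lo or slave_out >= clamp_hi:
        return slave_pv   # reinitialize: eliminate the gap that causes windup
    return master_out     # slave not saturated: leave the output alone
```

In the jacketed reactor example, while the coolant valve is closed the master output follows the falling jacket temperature, so the valve can open the moment the reactor temperature crosses its set point.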

Reset Feedback

A more elegant cascade windup protection method, one that does not
require any logic, is to use a “reset feedback” signal on the control
algorithm. In the cascade scheme, the reset feedback signal is the
measured variable of the slave loop, expressed in percentage of
transmitter range. The reset feedback signal is used in the calculation of
the controller output by the velocity algorithm as follows:

Mk = bk + ∆Mk (7-1)

where

Mk = the output of the master controller and set point of the slave
controller

bk = the reset feedback variable, in this case the measured variable
of the slave loop

∆Mk = the incremental output of the master controller, which is
calculated as shown in Table 6-1

By using this formula to update the set point of the slave controller every
time the master controller is processed, there will be no possibility of
windup because the master controller will call for an increase or decrease
of the slave variable from its current value, not from the previous set
point. To use the reset feedback approach the slave loop must be processed
more frequently than the master loop, and the slave controller must have
integral mode. Otherwise, any offset in the slave controller would cause
an offset in the master controller, even if the master controller has integral
mode.
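A minimal sketch of the Eq. 7-1 update, assuming both signals are expressed in percent of the slave transmitter range (the function name is illustrative):

```python
def master_output(reset_feedback_pct, delta_m_pct):
    """Eq. 7-1: Mk = bk + dMk. The increment dMk (from the velocity
    algorithm of Table 6-1) is applied to the slave loop's measured
    variable bk, not to the previous set point, so the set point
    can never run away from the measurement."""
    m = reset_feedback_pct + delta_m_pct
    return max(0.0, min(100.0, m))  # keep within the transmitter range
```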

A third approach for protecting against cascade windup is to set clamp
limits on the set point of the slave controller that correspond to the slave
controller’s actual operating limits. For our earlier jacketed reactor
example these limits would be the coolant inlet temperature and the
reactor set point. However, notice that these limits change during normal
operation. It would be tedious to have to constantly change them to match
operating conditions.

7-5. Summary

This unit discussed the reasons for using cascade control, how to select
modes for the slave controller, and the procedure for tuning cascade
control systems. It also looked at cascade windup and ways to protect
against it. Cascade control has proliferated in computer control
installations because there is essentially no cost for the additional slave
controllers. One transmitter and one multiplexer input channel for each
slave loop represent the only additional cost in a computer control system.

EXERCISES

7-1. What are the three major advantages of cascade control?

7-2. What is the main requirement if a cascade control system is to result in
improved control performance? What is required of the sensor for the slave
loop?

7-3. Are the tuning and selection of modes different for the master controller in a
cascade control system than for the controller in a simple feedback control
loop? Explain.

7-4. What is different about the tuning of the slave controller in a cascade
control system? When should it not have integral mode? If the slave is to
have derivative mode, should it operate on the process variable or the error?

7-5. In what order must the controllers in a cascade control system be tuned?
Why?

7-6. What are the two major difficulties entailed in using temperature as the
process variable of the slave controller in a cascade control system? How
can they be handled?

7-7. Why is pressure a good variable to use as the slave variable in cascade
control? What are the two major difficulties encountered when using
pressure as the slave variable?

7-8. What is the relationship between the processing frequencies of the master
and slave controllers in a computer cascade control system?

7-9. How can reset windup occur in a cascade control system? How can it be
avoided?
UNIT 8

Feedforward and Ratio Control


This unit focuses on the design and tuning methods of feedforward and
ratio control strategies. As with cascade control, these strategies can be
classified as multiple-input single-output (MISO) because they require
more than one process measurement but only one final control element
(valve) as there is only one control objective.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Understand when to apply feedforward and ratio control.

B. Know when to use and how to tune a static feedforward
compensator.

C. Know how to tune dynamic feedforward compensators.

8-1. Why Feedforward Control?

Unit 4 showed that some feedback loops are more controllable than others
and that the parameter that measures the uncontrollability of a feedback
loop is the ratio of the dead time to the time constant of the process in the
loop. When this ratio is high, on the order of one or greater, feedback
control cannot prevent disturbances from causing the controlled variable
to deviate substantially from its set point. This is when the strategies of
feedforward and ratio control can have the greatest impact on improving
control performance.

The strategy of feedforward control consists of measuring the major
disturbances to a control objective and calculating the change in the
manipulated variable that is required to compensate for them. The
following are characteristics of feedforward control:

• It is in theory possible to have perfect control, that is, zero error at
all times (this is not so for feedback control, which must operate on
an error).

• To design the feedforward controller you need an accurate model of
the process. The model must include the effects of both the
disturbances and the manipulated variable on the controlled variable.


• All disturbances must be measured and compensated.
Alternatively, feedback trim can be added to compensate for disturbances
that have a minor effect on the controlled variable or that vary too
slowly to merit measurement, for example, ambient conditions or
exchanger scaling.

Feedforward compensation can be a simple proportionality between two
signals, or it can require complex material and energy balance calculations
involving the measured disturbances and the manipulated variable. No
matter how simple or complex the steady-state compensation,
compensation for process dynamics is usually accomplished with a simple
linear lead-lag unit, which we will introduce later in this unit.

The advantages to using feedforward control are best presented by
comparing it to feedback control. Figure 8-1 shows a block diagram of the
typical feedback control loop. The characteristics of feedback control that
make it so convenient are as follows:
1. The controller is a standard off-the-shelf item or software
algorithm.
2. The feedback controller can be tuned on line, by trial and error, so
you do not need a detailed model of the process to implement it.
3. The integral mode of the controller computes the value of its
output, M, that is required to keep the controlled variable, C, at
its set point, R.

Figure 8-1. Block Diagram of Feedback Control Loop



In addition to these very desirable characteristics are two undesirable
ones:

1. When a disturbance, U, enters the system the controlled variable
must deviate from its set point before the controller can take
action.

2. Overcorrections occur because of delays in the process and
sensor that can cause the controlled variable to oscillate around
its set point.

These problems are significant in process systems because of the long time
delays involved, sometimes hours in length. The remedy to these
problems is feedforward control.

Pure Feedforward Control

Figure 8-2 shows the block diagram for pure feedforward control. This
technique consists of measuring the disturbance U instead of the
controlled variable. Corrective action begins as soon as the disturbance
enters the system and can, in theory, prevent any deviation of the
controlled variable from its set point. However, pure feedforward control
requires that you have an exact model of the process and its dynamics as
well as exact compensation for all possible disturbances. The “set point
element” of Figure 8-2 provides for calibrated adjustment of the set point
and seldom includes any dynamic compensation.

Figure 8-2. Block Diagram for Pure Feedforward Control



The “feedforward element” of Figure 8-2 simulates the effect of the
disturbance on the controlled variable (block G2) and compensates for the
lags and delays on the manipulated variable (block G1). Notice that the
signals always travel forward; that is, there is no loop in the diagram, so
the feedforward controller cannot introduce or prevent instability in the
process response.

Feedforward-Feedback Control

It is seldom practical to measure all the disturbances that affect the
controlled variable. A more reasonable approach is to measure only those
disturbances that are expected to cause the greatest deviations in the
controlled variable and handle the so-called minor disturbances by adding
“feedback trim” to the feedforward controller. Figure 8-3 shows the block
diagram for a feedforward-feedback control system. Notice that the
feedback controller takes the place of the set point element of Figure 8-2,
and only the feedforward element is necessary in the combined control
scheme. A feedforward element is required for each disturbance that is
measured.

When the outputs of the feedforward and feedback controllers are
summed, as in Figure 8-3, the presence of the feedforward controller does
not affect the response of the loop to inputs other than the measured
disturbance. In fact, you do not need to adjust the feedback controller just
because you have installed the feedforward controller.

Figure 8-3. Block Diagram of Feedforward-Feedback Control Scheme



Economic considerations dictate that you should use a feedforward
controller to measure and compensate for only those disturbances that are
frequent enough and important enough to affect product quality, safety, or
similar considerations.

The advantages of the feedforward-feedback scheme are as follows:


1. The feedback controller takes care of those disturbances that are
not important enough to be measured and compensated for.

2. The feedforward controller does not have to compensate exactly
for the measured disturbances since any minor errors in the
model are trimmed off by the feedback loop, hence, the term
feedback trim.

Because of these advantages, feedback trim is part of almost every
feedforward control scheme.

Ratio Control

The simplest form of feedforward control is ratio control. It consists
simply of establishing a ratio between two flows. Figure 8-4 shows an
example of ratio control between the steam and process flows of a steam
heater. In this example, the process flow is the disturbance or “wild” flow,
and the steam is the manipulated flow. The steam flow controller takes
care of the control valve’s nonlinearity as well as variations in the pressure
drop across the control valve. By maintaining a constant ratio when the
operator or another controller changes the process flow, the outlet process
temperature is kept constant, as long as the steam latent heat and process
inlet temperature remain constant.

Figure 8-4. Ratio Control of Heat Exchanger

Some control engineers prefer to calculate the ratio by dividing the
manipulated flow by the wild flow and then controlling the ratio with a
feedback controller, as in Figure 8-5. This alternative has the advantage of
displaying the ratio directly but at the expense of creating a very nonlinear
feedback control loop. Notice that the gain of the feedback loop in Figure
8-5 is inversely proportional to the wild flow, which is the major
disturbance. The ratio controllers in some computer and distributed
control systems (DCS) display the calculated ratio but do not use it for
control. Instead, the output is calculated by multiplying the input or wild
flow by the ratio set point, as in Figure 8-4.
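The two alternatives can be contrasted in a few lines of Python (a sketch; the function names are illustrative, not from the book):

```python
def ratio_station_output(wild_flow, ratio_sp):
    """Figure 8-4 approach: the manipulated-flow set point is the
    measured wild flow multiplied by the ratio set point."""
    return ratio_sp * wild_flow

def displayed_ratio(manipulated_flow, wild_flow):
    """Figure 8-5 approach, display only: dividing by the wild flow
    makes the feedback loop gain inversely proportional to the wild
    flow, which is why the division is not used for control."""
    return manipulated_flow / wild_flow
```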

8-2. The Design of Linear Feedforward Controllers

As shown in the block diagram of Figure 8-2, the feedforward controller
and the process constitute two parallel paths between the disturbance U
and the controlled variable C. The response of the controlled variable is
the sum of its responses to the manipulated variable and to the
disturbance:

C = G1M + G2U (8-1)

Figure 8-5. Ratio Control by Feedback Control of the Calculated Ratio

where M is the manipulated variable, U is the disturbance, and G1 and G2
represent the effects of the manipulated variable and the disturbance,
respectively, on the controlled variable C.

The value of M that is required to keep C equal to the set point R is given
by the following:

M = (1/G1)R – (G2/G1)U (8-2)

This is the design equation for the feedforward controller that has the set
point R and disturbance U as inputs and the manipulated variable M as
output. Eq. 8-2 provides the design formulas for both the set point and
feedforward elements of Figure 8-2. The design formula for the set point
element is as follows:

Gs = 1/G1 (8-3)

The design formula for the feedforward element is as follows:

GF = –G2/G1 (8-4)

When feedback trim is used, as in Figure 8-3, only the feedforward
element is needed because the feedback controller takes the place of the set
point element.

Simple Linear Models for Feedforward Control

When the process elements G1 and G2 are modeled with simple
first-order-plus-dead-time models, you can build the feedforward
controller out of standard algorithms available in most commercial
process control programs. The feedforward controller then consists of
three elements:

GF = (Gain)(Lead-Lag)(Dead time Compensator) (8-5)

with

Gain = –K2/K1 (8-6)

Lead-Lag = lead of τ1, lag of τ2 (8-7)

Dead time Compensator = t02 – t01 (8-8)

where

K1 = the gain of the manipulated variable on the controlled variable
(gain of G1), %T.O./%C.O.

K2 = the gain of the disturbance on the controlled variable (gain of
G2), %T.O./%T.O.

τ1, τ2 = the time constants of G1 and G2, respectively, min

t01, t02 = the dead times of G1 and G2, respectively, min
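Eqs. 8-5 through 8-8 can be collected into a short helper that computes the compensator settings from the two model fits. This is a sketch, assuming the FOPDT parameters have already been identified; the class and function names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class FOPDT:
    gain: float       # K
    tau: float        # time constant, min
    dead_time: float  # t0, min

def feedforward_settings(g1: FOPDT, g2: FOPDT):
    """Return (gain, lead, lag, delay) for the feedforward controller:
    gain = -K2/K1 (Eq. 8-6), lead = tau1 and lag = tau2 (Eq. 8-7),
    delay = t02 - t01 (Eq. 8-8)."""
    gain = -g2.gain / g1.gain
    lead, lag = g1.tau, g2.tau
    delay = g2.dead_time - g1.dead_time
    if delay < 0.0:
        # Not realizable: the correction would have to start before the
        # disturbance occurs, so drop the dead-time compensator.
        delay = 0.0
    return gain, lead, lag, delay
```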

Although the feedforward controller of Eq. 8-5 results from simple
first-order process models, there is no incentive to use dynamic
compensation terms that have a higher order than the simple lead-lag
unit. For example, using a second-order model would require a
compensator with two more parameters than the lead-lag unit. This would
make it harder to tune while offering little improvement in performance
over a well-tuned lead-lag unit.

The dead time compensator of Eq. 8-8 can only be realized when the dead
time between the disturbance and the controlled variable is longer than
the dead time between the manipulated variable and the controlled
variable. Otherwise, the dead time compensator would call for the
feedforward correction to start before the disturbance takes place, which is
obviously not possible.

To be implemented, the dead time compensator needs the memory of
digital devices (computers and microprocessors). Often, the dead time
compensator can be left out because the lead-lag unit can be tuned to
provide all of the required dynamic compensation, thus simplifying the
tuning task. In general, the dead time compensator should only be used
when the lead-lag unit cannot do the job by itself.

8-3. Tuning Linear Feedforward Controllers

Of the three terms of the feedforward controller shown in Eq. 8-5 the gain
is always required, and the dynamic compensators are optional. When
only the gain is used, the feedforward controller is called a “static”
compensator.

Gain Adjustment

You can adjust the feedforward gain with the feedback controller on
manual or automatic. If you do it with the feedback controller on manual,
then when the gain is incorrect the controlled variable will deviate from
its set point after a sustained disturbance input. You can then adjust the
gain until the controlled variable is at the set point again. Because of
process nonlinearities, the required feedforward gain may change with
variations in operating conditions. Thus, it may not be possible to achieve
exact compensation with a simple linear controller.

If you adjust the feedforward gain when the feedback controller is in
automatic, the variable you want to observe is the output of the feedback
controller. If the feedback controller has integral mode, the controlled
variable will always return to its set point after a disturbance. However, if
the feedforward gain is incorrect, the output of the feedback controller will
be changed to compensate for the error in the feedforward controller. You
must then adjust the feedforward gain until the feedback controller output
returns to its initial value. As before, process nonlinearities will prevent a
single value of the gain from working at all process conditions.

The one thing to remember when tuning the feedforward gain is that you
will have to wait until the system reaches steady state before making the
next adjustment.

Tuning the Lead-lag Unit

The most commonly used feedforward dynamic compensator is the
lead-lag unit, which is available both as an analog off-the-shelf device and
as a control block in computer control programs. To understand how to
tune a lead-lag unit you need to know how it responds to step and ramp
signals. Keep in mind that both the lead and the lag time constants are
adjustable, and that either one can be longer than the other.

Figure 8-6 shows the response of the lead-lag unit to a step change in its
input for two scenarios: the lead being longer than the lag and the lag
being longer than the lead, assuming in each case that the gain is unity.
The initial change in the output of the lead-lag unit is always equal to the
ratio of the lead to the lag. As a result, there is an initial overcorrection
when the lead is longer than the lag, and a partial correction when the lag
is longer than the lead. In either case, the output approaches the steady-
state correction exponentially, at a rate determined by the lag time
constant.

Figure 8-6. Step Response of Lead-Lag Unit

Figure 8-7. Response of a Lead-Lag Unit to a Ramp

Figure 8-7 shows the response of the lead-lag unit to a ramp input, both
for the lead-longer-than-the-lag scenario and for the lag-longer-than-the-lead
scenario, assuming unity gain. The figure shows where the names lead and
lag come from: After a transient period, the output of the lead-lag unit
either leads the input ramp by the difference between the lead and the lag
or lags it by the difference between the lag and the lead. The ramp
response is more typical than the step response of the type of inputs
provided by the disturbances in a real process. The ramp can also
approximate the rising and dropping portions of slow sinusoidal
disturbances.

When you keep the responses to step and ramp inputs in mind, tuning the
lead-lag unit becomes a simple procedure. First, decide by how much you
should lead or lag the feedforward correction to the disturbance; this fixes
the difference between the lead and the lag. Then select the ratio of the
lead to the lag based on how much you want to amplify or attenuate
sudden changes in the disturbance inputs. For example, suppose you
want to lead the disturbance by one minute. A lead of 1.1 minutes and a
lag of 0.1 minutes gives an amplification factor of 1.1/0.1=11, while a lead
of 3 minutes and a lag of 2 minutes gives an amplification factor of only
3/2=1.5. If the disturbance is noisy, for example, a flow, the second choice
is preferable because it results in less amplification of the noise.
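The arithmetic of this trade-off can be checked directly (an illustrative sketch; the function name is not from the book):

```python
def leadlag_tradeoff(lead, lag):
    """For a unity-gain lead-lag unit: the output leads a ramp input by
    (lead - lag), while a step input is initially amplified by lead/lag."""
    return lead - lag, lead / lag
```

Both (1.1, 0.1) and (3, 2) give a one-minute net lead, but amplification factors of 11 and 1.5 respectively.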

Although it is possible to have a lag with zero lead, it is not possible to
have a lead without a lag. The ratio of the lead to the lag should not be
greater than ten. When a net lag is required, the lead can usually be set to
zero, which simplifies the tuning task.

Computer Lead-lag Algorithm

A common computer formula for implementing a lead-lag unit is given by
the following:

Yk = Yk–1 + (1 – a)(Xk–1 – Yk–1) + (τLD/τLG)(Xk – Xk–1) (8-9)

where

Xk = the input at the kth sample

Yk = the output at the kth sample

τLD, τLG = the lead and lag constants, respectively, min

a = τLG/(T + τLG) = filter parameter

T = the sample time, min

The actual algorithms employed in commercial computer control
programs use various approximations for the filter parameter “a”, but it is
always a function of the sample time and the lag time constant. Notice that
the effect of the lead is just to multiply the change in input at each sample
by the ratio of the lead to the lag. In other words, for the computer lead-lag
algorithm the input change at each sample is a step change.

Eq. 8-9 is for unity gain. If the gain is different than unity, it can be applied
to the signal before or after the lead-lag calculation.
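Eq. 8-9 translates almost directly into code. The sketch below uses a = τLG/(T + τLG) for the filter parameter and assumes unity gain; the class and variable names are illustrative:

```python
class LeadLag:
    """Discrete lead-lag unit per Eq. 8-9 (unity gain assumed)."""
    def __init__(self, lead, lag, sample_time):
        self.lead, self.lag = lead, lag
        self.a = lag / (sample_time + lag)  # filter parameter
        self.x_prev = self.y_prev = None

    def update(self, x):
        if self.x_prev is None:
            self.x_prev = self.y_prev = x   # start at steady state
        y = (self.y_prev
             + (1.0 - self.a) * (self.x_prev - self.y_prev)
             + (self.lead / self.lag) * (x - self.x_prev))
        self.x_prev, self.y_prev = x, y
        return y
```

A unit step with a lead of 3 min, a lag of 2 min, and T = 0.5 min jumps initially to lead/lag = 1.5 and then decays exponentially toward 1.0, matching the behavior described for Figure 8-6.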

Tuning the Dead time Compensation Term

Besides lead-lag dynamic compensation, dead time can be compensated
for by taking advantage of the computer’s ability to store information in
its memory. Dead time compensation should be used only when the dead
time is much longer than the time lag in the lead-lag unit. It is
accomplished by storing the feedforward corrective action at each control
update in a memory stack and then retrieving it several sample times later
to be output to the process. The output of the dead time compensator is
equal to its input N samples earlier:

Yk = Xk-N (8-10)

where N is the number of samples of dead time, and unity gain is
assumed.
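A memory stack of N samples is all the implementation requires, sketched here with a Python deque (names are illustrative):

```python
from collections import deque

class DeadTimeCompensator:
    """Eq. 8-10: Yk = X(k-N), unity gain. The buffer plays back each
    input value exactly N samples after it was stored."""
    def __init__(self, n_samples, initial=0.0):
        assert n_samples >= 1
        self.buf = deque([initial] * n_samples, maxlen=n_samples)

    def update(self, x):
        y = self.buf[0]     # the input from N samples ago
        self.buf.append(x)  # maxlen drops the oldest value automatically
        return y
```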

Figure 8-8 shows a plot of the responses of dead time compensation to a
step and to a ramp. Notice that the dead time compensator does not start
responding until one dead time after the change in the input; then, the
output reproduces the input exactly. Dead time compensation should be
used only when even a lag without lead would cause the feedforward
correction to take place too soon.

Figure 8-8. Response of Dead Time Compensator: (a) to a Step, (b) to a Ramp

The dead time compensator is easy to tune because it only has one
dynamic parameter, the number of samples of delay N.

Before you apply dead time compensation you must ensure that the dead
time does not delay the action in a feedback control loop. Recall that dead
time always makes a feedback control loop less controllable. The reason it
can be used in feedforward control is that the corrective action always
goes forward; that is, no loop is involved.

8-4. Nonlinear Feedforward Compensation

Although linear feedforward compensation can significantly improve
control performance, process nonlinearities cause the performance of the
compensator to deteriorate when process conditions change. Based on
your knowledge of the process you can use simple nonlinear models to
design feedforward compensators that perform well over a wide range of
operating conditions. The idea is to use the basic principles of physics to
replace the steady-state gain of the linear feedforward controller with
more precise calculations that reflect the full nonlinear interaction between
the process variables. You can keep the control calculations simple by
designing the controller from steady-state relationships and then using
lead-lag and dead time compensators to compensate for process
dynamics.

The outline of the design procedure is as follows:


1. State the control objective, that is, define which variable needs to
be controlled and what its set point is. It is useful to write the
objective in the form:

variable = set point

The set point should be adjustable by the operator and not a
constant.
2. Enumerate the possible measured disturbances. Which
disturbances can be easily measured? How much and how fast
do you expect each to vary? How much would it cost to measure
each of them? It is not really necessary to make a precise cost
estimate or get a price bid from a vendor. Just be aware that, for
example, a composition sensor may be more expensive to buy
and maintain than a flow or temperature sensor.

3. State which variable the feedforward controller is going to
manipulate. When the feedforward controller is cascaded to a
slave controller, the manipulated variable should be defined as
the set point of the slave controller (for example, the flow of the
manipulated stream) instead of the valve position.

4. Using basic principles, usually material and energy balances,
write the formulas that relate all the variables defined in the first
three steps. Keep them as simple as possible. Solve for the
manipulated variable so it can be calculated from the measured
disturbances and the control objective. The resulting formula or
formulas constitute the design equation(s) to be programmed
into the computer for on-line execution. Caution: the formula
must use the set point of the controlled variable and not its
measured value.
5. Reevaluate the list of measured disturbances. You can calculate
the effect of the expected variation of each disturbance on the
controlled variable from the basic formulas. If it is small the
disturbance need not be measured. On the other hand, there may
be a disturbance that was not on the original list that the formulas
tell you will have a significant effect on the controlled variable. In
deciding whether or not to measure you must weigh the effect of
the disturbance, its expected magnitude, the speed and frequency
of variation, and the cost of measuring it. Unmeasured
disturbances are treated as constants that are equal to their design
or average expected values. Alternatively, if a disturbance is
difficult to measure but you still expect it to vary, you may
correct for it by using feedback trim.
6. Introduce the feedback trim, if any, into the design equation. This
is done by grouping unknown terms and unmeasured
disturbances as much as possible and letting the output of the
feedback controller adjust the group of terms that is expected to
vary the most. A simple and effective approach is to have the
output of the feedback controller adjust the set point of the
feedforward controller.
7. Decide whether dynamic compensation is needed and how it is
to be introduced into the design. Simple lead-lag or dead time
compensators are commonly used. You should install a separate
dynamic compensator on each measured disturbance. It is not
good practice to install the dynamic compensator in such a way
that it becomes part of the feedback trim loop, especially if it
contains dead time compensation.
8. Draw the instrumentation diagram for the feedforward
controller. This is a diagram showing the various computations
and relationships between the signals. It is good practice to draw
it so that all the input signals enter from the top (or left) and the
output signals exit at the bottom (or right). It is at this point that
you must decide on implementation details. These will largely
depend on the equipment used. A good design should be able to
continue to operate safely when some of its input measurements
fail, a characteristic of the design known as “graceful
degradation.”

The feedforward controller can then be programmed on the control
computer or configured on the distributed control system. Example 8-1
illustrates this design procedure. For other good examples, see the texts by
Luyben1 and by Smith and Corripio.2
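As a preview of what such a program might look like, the following Python sketch shows one way to provide the "graceful degradation" called for in step 8: each measured disturbance falls back to its design value when its measurement is flagged bad. All tag names, numbers, and the design equation here are hypothetical, not taken from any particular control system.

```python
# Hypothetical static feedforward calculation with graceful degradation:
# a disturbance measurement that fails (None) is replaced by its design
# (average expected) value so the controller keeps operating safely.

DESIGN_VALUES = {"W": 50000.0, "Ti": 70.0}  # assumed design-basis values

def read_or_fallback(measurements, tag):
    """Return the live measurement, or the design value if the sensor failed."""
    value = measurements.get(tag)           # None models a failed transmitter
    return value if value is not None else DESIGN_VALUES[tag]

def feedforward_output(set_point, measurements, gain):
    """Illustrative design equation: manipulated-variable set point from
    the control objective (set point) and the measured disturbances."""
    w = read_or_fallback(measurements, "W")    # process flow, lb/h
    ti = read_or_fallback(measurements, "Ti")  # inlet temperature, deg F
    return gain * (set_point - ti) * w

# Normal operation versus a failed inlet-temperature transmitter:
normal = feedforward_output(150.0, {"W": 52000.0, "Ti": 75.0}, 0.0011)
degraded = feedforward_output(150.0, {"W": 52000.0, "Ti": None}, 0.0011)
```

With a failed transmitter, the output shifts to the design basis instead of collapsing toward zero or full scale.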

Example 8-1. Feedforward Temperature Control of a Steam Heater.
An example of a nonlinear model for feedforward control is given by the
heat exchanger application described by Shinskey.3 Figure 8-9 shows a
sketch of the steam heater and feedforward controller. The design
procedure is as follows:
1. Control objective: To = Toset (8-11)
2. Measured disturbances:
W, the flow through the exchanger, lb/h
Ti, the inlet temperature, °F
3. Manipulated variable:
F, steam flow controller set point, lb/h


Figure 8-9. Feedforward Control of Steam Heater


4. A steady-state energy balance on the exchanger yields the
following equation for the static feedforward controller:

FHv = WC(To - Ti) + QL (8-12)

where

C = the specific heat of fluid, Btu/lb°F

Hv = the heat of vaporization of the steam, Btu/lb

QL = the heat loss rate, Btu/h


5. At this point, it is possible to evaluate the effect of the possible
disturbances on the outlet temperature. Such analysis may
determine that the heat loss rate is as important as the two
measured disturbances but difficult to measure. If so, the heat
loss rate is a candidate for feedback trim adjustment. Conversely,
you may find that the inlet temperature does not have enough
effect to merit the cost of measuring it, in which case the
feedforward controller becomes a simple steam-to-process-flow
ratio controller.
6. You can determine the need for feedback trim by considering
how much the unknown terms in the design formula are
expected to vary. Here again, you must consider the cost of the
feedback sensor. The three unknown terms are the physical
properties, C and Hv , and the heat loss rate, QL. The three can be
lumped together by assuming that the heat loss rate is
proportional to the heat transfer rate:

QL = (1 - η)FHv (8-13)

where η is the heater efficiency, or the fraction of the energy input that
is transferred to the process fluid. Substituting Eq. 8-13 into Eq.
8-12 and solving for the manipulated variable yields the design
formula:

Fset = (C/Hvη)(Toset – Ti)W (8-14)

Notice that the outlet temperature in the formula has been
replaced by its set point. That is, the control objective given in
Eq. 8-11 has been substituted into the design formula to assure
that it is enforced by the feedforward controller. In modern
computer control systems it is possible to retrieve the set point
from the feedback controller and use it in the feedforward
calculation, so the operator only has to enter one set point. This is
an important design requirement.

All of the unknowns of the model have been lumped into a single
coefficient, C/Hvη, and it would seem natural for the feedback
trim controller to adjust this coefficient to correct for variations in
the physical properties and heater efficiency. However, these
parameters are not expected to vary much, and it would not be
desirable for the feedback trim controller to control by adjusting a
term that is not expected to vary. You can create a better control
system structure if you make the feedback controller output
adjust the set point of the feedforward controller or, equivalently,
the product of the unknown coefficient and the set point. This is
done as follows:
Fset = [m – (C/Hvη)Ti]W (8-15)

where
m = CToset/Hvη = output of feedback controller

The coefficient C/Hvη becomes the tunable gain of the inlet
temperature correction. Notice that this term can be calculated
from measured values of the temperatures and flows, averaged
over long enough periods of time. From Eq. 8-14, we get:

C/Hvη = F/[W(To – Ti)] (8-16)

7. The feedforward formula was derived from a steady-state energy
balance on the heater. Dynamic compensation will probably be
required because changes in steam flow, the manipulated
variable, are delayed by the lags of the control valve and steam
chest, while the process flow will have a faster effect on the outlet
temperature. On the other hand, the effect of changes in inlet
temperature will be delayed by the transportation lag in the
heater. To compensate for these dynamic imbalances, you can
insert lead-lag units on the two measured disturbances before
they are used in the computation.
8. Figure 8-10 shows the instrumentation diagram for the
feedforward controller. In some computer control systems, the
multiplier may be carried out as a ratio controller. In these
systems, the ratio is set by the summer that combines the
feedback controller output and the inlet temperature correction.

Figure 8-10. Diagram of Feedforward Controller for Steam Heater
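The arithmetic of Eqs. 8-14 through 8-16 can be verified with a short Python calculation. The property values and operating conditions below are illustrative assumptions, not data from the example:

```python
# Assumed steam-heater data (illustrative values only)
C = 1.0       # specific heat of process fluid, Btu/(lb*degF)
Hv = 950.0    # heat of vaporization of steam, Btu/lb
eta = 0.95    # heater efficiency (fraction of energy input transferred)

coeff = C / (Hv * eta)   # lumped unknown coefficient C/Hv*eta

To_set = 150.0   # outlet temperature set point, degF
Ti = 70.0        # measured inlet temperature, degF
W = 50000.0      # measured process flow, lb/h

# Eq. 8-14: static feedforward design formula
F_set = coeff * (To_set - Ti) * W

# Eq. 8-15: same result written with the feedback-trim structure,
# where m (the feedback controller output) stands for coeff*To_set
m = coeff * To_set
F_set_trim = (m - coeff * Ti) * W

# Eq. 8-16: back-calculate the lumped coefficient from (averaged) plant data
coeff_est = F_set / (W * (To_set - Ti))
```

Because Eq. 8-15 lumps all the unknowns into m, the feedback trim has only one term to correct, which is the structure recommended in step 6 of the procedure.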

The following example illustrates how to tune the lead-lag unit for the
feedforward controller we have just designed.

Example 8-2. Tuning of Lead-lag Units. Tune the lead-lag units for the
steam heater feedforward controller of the preceding example. Figure 8-11
compares the responses of the outlet temperature to a change in process
flow with (a) a well-tuned feedback controller, (b) a static feedforward
controller, and (c) a feedforward controller with lead-lag compensation.
Notice that with static compensation the temperature drops even though
the steam flow is immediately increased in proportion to the process flow.
It is evident from the graph in Figure 8-11 that the steam needs to lead the
process flow because the simultaneous action still allows the variable to
deviate in the same direction as when feedforward control is not used.
Curve (c) in Figure 8-11 uses a lead of two minutes and a lag of one minute
for a net lead of one minute. As the process flow is expected to be a noisy
signal, these values limit the amplification of the noise to a factor of two.
With this tuning, the lead-lag unit reduces the deviation of the
temperature to about one half of that of the static compensator.
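A lead-lag unit is simple to implement digitally. The sketch below uses a backward-difference discretization (one common choice, not necessarily the book's); it reproduces the behavior used in the tuning above: the initial response to a step is roughly lead/lag times the final response, which is why a lead of two minutes over a lag of one minute amplifies noise by about a factor of two.

```python
class LeadLag:
    """Discrete lead-lag unit: lag*dy/dt + y = lead*dx/dt + x,
    discretized by backward differences with sample time dt."""

    def __init__(self, lead, lag, dt, x0=0.0):
        self.lead, self.lag, self.dt = lead, lag, dt
        self.x_prev = x0
        self.y_prev = x0  # start at steady state, where output = input

    def update(self, x):
        y = (self.lag * self.y_prev
             + (self.lead + self.dt) * x
             - self.lead * self.x_prev) / (self.lag + self.dt)
        self.x_prev, self.y_prev = x, y
        return y

# Unit step into a lead of 2 min and a lag of 1 min, sampled every 0.01 min:
unit = LeadLag(lead=2.0, lag=1.0, dt=0.01)
first = unit.update(1.0)       # initial jump: close to lead/lag = 2
for _ in range(2000):          # let the response settle for 20 minutes
    last = unit.update(1.0)    # final value: returns to the input
```

At steady state the unit passes its input through unchanged; only during transients does the lead/lag ratio amplify (or attenuate) the signal.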

Figure 8-12 compares the responses of the outlet temperature to a 10°C
increase in inlet temperature with (a) a well-tuned feedback controller, (b)
a static feedforward controller, and (c) a feedforward controller with a
lead-lag unit. Because the temperature changes in the opposite direction as
when feedforward is not used, the correction in steam flow is too fast, and
thus the inlet temperature signal needs a lag. Curve (c) of Figure 8-12
shows the response when a lag of one minute and zero lead are installed
on the inlet temperature signal. In this case, you could also have tried
dead time compensation since the dead time to the inlet temperature—the
disturbance—is longer than the dead time to the steam flow—the
manipulated variable.
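Dead time compensation of a disturbance signal amounts to delaying the sampled measurement before it enters the feedforward calculation. One way to sketch it (an illustrative structure, not taken from the text) is a fixed-length sample buffer:

```python
from collections import deque

class DeadTime:
    """Delay a sampled signal by an integer number of sample times."""

    def __init__(self, dead_time, dt, x0=0.0):
        n = max(1, round(dead_time / dt))   # delay expressed in samples
        self.buf = deque([x0] * n, maxlen=n)

    def update(self, x):
        y = self.buf[0]       # oldest stored sample becomes the output
        self.buf.append(x)    # newest sample enters; oldest is evicted
        return y

# A 0.5-minute dead time sampled every 0.1 minute (5 samples of delay):
delay = DeadTime(dead_time=0.5, dt=0.1)
outputs = [delay.update(v) for v in [1, 2, 3, 4, 5, 6, 7]]
# each input value reappears at the output five samples later
```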

Example 8-2 has a characteristic that is typical of many successful
feedforward control applications: the formulas used in the compensation
are simple steady-state relationships. If you need dynamic compensation,
you add lead-lag and dead time compensation to the nonlinear
steady-state compensator. The moral is keep your design super simple.

Figure 8-11. Responses to Step Change in Process Flow to Steam Heater. (a) Feedback
Control, (b) Static Feedforward Control, and (c) Feedforward Control with Lead-Lag
Compensation

Figure 8-12. Responses to Step Change in Inlet Temperature to Steam Heater. (a) Feedback
Control, (b) Static Feedforward Control, and (c) Feedforward Control with Lead-Lag
Compensation

8-5. Summary

Ratio and feedforward control complement feedback control by
preventing deviations of the controlled variable caused by disturbances.
The feedforward controller is free of stability considerations but requires a
model of the process to be controlled. The best approach is a combination
of feedforward and feedback control. Ratio control is the simplest form of
feedforward control; it establishes a simple proportionality between two
flows.

EXERCISES

8-1. Why isn't it possible to have perfect control—that is, the controlled variable
always equal to the set point—using feedback control alone? Is perfect
control possible with feedforward control?

8-2. What are the main requirements of feedforward control? What are the
advantages of feedforward control with feedback trim over pure feedforward
control?

8-3. What is ratio control? What is the control objective of the air-to-natural gas
ratio controller in the control system sketched in Figure 7-7 for the
ammonia process? Which are the measured disturbance and the
manipulated variable for that ratio controller?

8-4. What is a lead-lag unit? How is it used in a feedforward control scheme?
Describe the step and ramp responses of a lead-lag unit.

8-5. Suppose you want to lead a disturbance in a feedforward controller by 1.5
minutes. If the amplification factor for the noise in the disturbance
measurement must not exceed two, what must the lead and the lag be?

8-6. What is dead time compensation in a feedforward controller? When can it
be used? When should it be used?

8-7. Refer to the furnace shown in the following figure. Design a feedforward
controller to compensate for changes in process flow, inlet temperature, and
supplementary fuel flow in the furnace’s outlet temperature control.
Explicitly discuss each of the eight steps of the procedure for designing
nonlinear feedforward compensation outlined in Section 8-4.

Feedforward Control of Furnace Coil Outlet Temperature

REFERENCES

1. W. L. Luyben, Process Modeling, Simulation, and Control for
Chemical Engineers, 2d ed. (New York: McGraw-Hill, 1990),
Sections 8.7 and 11.
2. C. A. Smith and A. B. Corripio, Principles and Practice of Automatic
Process Control, 2d ed. (New York: Wiley, 1997), Chapter 12.
3. F. G. Shinskey, “Feedforward Control Applied,” ISA Journal
(November 1963), p. 61.
UNIT 9

Multivariable Control Systems

In the previous units of this module we have looked at the tuning of
feedback controllers from the point of view of a single loop. That is, we
considered a single control objective and a single manipulated variable at
a time. In this unit, we’ll consider the effect of interaction between
multiple control objectives and the tuning of multivariable control
systems.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Understand how interaction with other loops affects the
performance of a feedback control loop.
B. Estimate the extent of interaction between loops.
C. Pair controlled and manipulated variables to minimize the effect of
interaction.
D. Adjust the tuning of feedback controllers to account for interaction.
E. Design decouplers for multivariable control systems.
F. Recognize advanced multivariable control systems.

9-1. What Is Loop Interaction?

When two or more feedback loops are installed on a process or unit
operation (e.g., distillation column, evaporator, etc.), there is a possibility
that the loops will interact. This means that each controlled variable is
affected by more than one manipulated variable. As shown in Figure 9-1,
in controlling the total flow and concentration out of a blender, both
controlled variables are affected by each of the two manipulated variables: the
flows of the concentrated and dilute inlet streams. The problem that arises
in this scenario is known as loop interaction. Since multiple control
objectives are involved, solving the problem of loop interaction can be
viewed as the design of a multivariable control system.

Effect of Loop Interaction

Figure 9-1. Multivariable Control of a Blender

Figure 9-2. Block Diagram for a 2x2 Interacting Control System

Consider the block diagram representation of the 2x2 multivariable
control system shown in Figure 9-2. The terms G11 and G21 represent the
effect of manipulated variable M1 on the two controlled variables, C1 and
C2, while G12 and G22 are the corresponding effects of manipulated
variable M2. The two controllers, GC1 and GC2, act on their respective
errors, E1 and E2, to produce the two manipulated variables. Signals R1
and R2 represent the set points of the loops. In the diagram of Figure 9-2
each of the four process blocks includes the gains and dynamics of the
final control elements (valves), the process, and the sensor/transmitters.
For simplicity, the disturbances are not shown.

To look at the effect of interaction, assume that the gains of all four process
blocks are positive. That is, an increase in each manipulated variable
results in an increase in each of the controlled variables. Suppose then that
at a certain point a step change in manipulated variable M1 takes place
with both loops on “manual” (opened). Figure 9-3 shows the responses of
both controlled variables, C1 and C2, where the time of the step change is
marked as point “a”. Now suppose that at time “b” control loop 2 is closed
(switched to “automatic”) and that it has integral or reset mode.
Manipulated variable M2 will decrease until controlled variable C2 comes
back down to its original value, which is assumed to be its set point.
Through block G12, the decrease in M2 also causes a decrease in controlled
variable C1, so that the net change in C1 is smaller than the initial change.
Notice that this initial change is the only change that would take place if
there were no interaction, or if controller 2 were kept on manual. The
difference between the initial change and the net change in C1 is the effect
of interaction. It depends on the effect that M1 has on C2 (G21), the effect
that M2 has on C2 (G22, which determines the necessary corrective action
on M2), and the effect that M2 has on C1 (G12). Notice also that, provided
controller 2 has integral mode, the steady-state effect of interaction
depends only on the process gains, not on the controller tuning.


Figure 9-3. Effect of Interaction on the Response of the Controlled and Manipulated Variables

I invite you to verify that a step in M2, followed by closing control loop 1,
has the same effect on C2, at least qualitatively, as the effect just observed
on C1. It will be shown shortly that the relative effect of interaction for
control loop 2 and control loop 1 is quantitatively the same.

In the case just analyzed, all four process gains were assumed to be
positive (direct actions). The effect of interaction was in the direction
opposite the direct (initial) effect of the step change, which resulted in a
net change smaller than the initial change. This situation, in which the two
loops “fight each other,” is known as “negative” interaction. You can easily
verify that the interaction would also be negative if any two of the process
transfer functions had positive gains and the other two had negative
gains. Notice that it is possible for the effect of interaction to be greater
than the initial effect, in which case the direction of the net change will be
opposite that of the initial change. Here we could say that “the wrong loop
wins the fight,” a situation that, we will soon see, is caused by incorrect
pairing of the loops.

If one of the four process gains had a sign opposite that of the other three,
the net change would be greater than the initial change, as you can also
verify. This is the case of “positive” interaction, when the two loops “help
each other.” Positive interaction is usually easier to handle than negative
interaction because the possibility that inverse response (i.e., the
controlled variable moving in the wrong direction right after a change) or
open-loop overshoot will occur exists only when the process exhibits
negative interaction.

Both positive and negative interaction can be very detrimental to the
performance of the control system. This is because the response of each
loop is affected when the other loop is switched into and out of automatic,
or when its output saturates. In summary, loop interaction has the
following characteristics:
1. For interaction to affect the performance of the control system, it
must work both ways. That is, each manipulated variable must
affect both controlled variables through the process. Notice that if
either G12 or G21 is absent from the diagram of Figure 9-2, there is
no interaction effect.

2. Because of interaction, a set point change to either loop produces
at least a transitory change in both controlled variables.
3. The interaction effect on one loop can be eliminated by
interrupting the other loop. That is, if one of the two controllers is
switched to “manual,” the remaining loop is no longer affected
by interaction.

In the following two sections, 9-2 and 9-3, we look at two ways to
approach the problem of loop interaction:
1. By pairing the controlled and manipulated variables so as to
minimize the effect of interaction between the loops.
2. By combining the controller output signals through decouplers
so as to eliminate the interaction between the loops.

More advanced multivariable control design techniques will be briefly
introduced in Section 9-4.

9-2. Pairing Controlled and Manipulated Variables

Usually, the first step in the design of a control system for a process is
selecting the control loops, that is, selecting those variables that must be
controlled and those that are to be manipulated to control them. This
pairing task has been traditionally performed by the process engineer
using mostly his or her intuition and knowledge of the process.
Fortunately, for a good number of loops, intuition is all that is necessary.
However, when the interactions involved in a system are not clearly
understood and the “intuitive” approach produces the wrong pairing,
control performance will be poor. The expedient solution is to switch the
troublesome controllers to “manual,” which, as mentioned in the
preceding section, eliminates the effect of interaction. The many
controllers operating in manual in control rooms throughout the process
industries are visible reminders of the importance of correctly pairing the
variables in the system. Each one represents a failed attempt to apply
automatic control.

In the mid-1960s, Bristol published a method for quantitatively determining
the correct pairing of controlled and manipulated variables in a
multivariable system.1 It is popularly known as the Relative Gain Matrix or
variable system.1 It is popularly known as the Relative Gain Matrix or
Interaction Measure, and it requires only steady-state information that is
easy to obtain off line. The fact that the method does not include dynamic
information, on the other hand, is the one objection that has kept it from
gaining wider acceptance.

Open-Loop Gains

Consider the 2x2 system of Figure 9-2. The following open-loop gains can be
calculated if a change is applied to manipulated variable M1, while the
other manipulated variable is kept constant, and the changes in controlled
variables C1 and C2 are measured:

K11 = (Change in C1)/(Change in M1) (9-1)

K21 = (Change in C2)/(Change in M1)

Similarly, when a change is applied to M2, keeping M1 constant, the other
two open-loop gains can be calculated:

K12 = (Change in C1)/(Change in M2) (9-2)

K22 = (Change in C2)/(Change in M2)

The open-loop gains can also be obtained from the steady-state equations
or computer simulation programs that were used to design the plant.

There is a natural tendency to try to use the open-loop gains to pair the
variables. However, it is immediately apparent that C1 and C2 and M1 and
M2 do not necessarily have the same dimensions. Thus, attempting to
compare open-loop gains would be similar to trying to decide between
buying a new sofa or a new house. To overcome this problem, Bristol
proposed computing relative gains that are independent of dimensions.

Closed-Loop Gains

As discussed in Section 9-1, because of interaction, the effect of M1 on C1
when the other loop is closed differs from its effect when that loop is
open. For this reason, we must define the closed-loop gains K11', K21', K12',
and K22'. They are defined exactly as shown in Eqs. 9-1 and 9-2, but the
changes in C1 are determined with C2 kept constant, and the changes in C2
are determined with C1 kept constant. For example, to determine K11', a
change is made in M1, and the change in C1 is measured while a feedback
controller with integral mode controls C2 by manipulating M2.

However, closed-loop tests are not needed because you can compute the
closed-loop gains from the open-loop gains previously defined. For
example, when both M1 and M2 change, the total change in C1 can be
estimated by the sum of the two changes:

Change in C1 = K11(Change in M1) + K12(Change in M2)


The same holds true for the total change in C2. Now, if C2 is kept constant,
its change is zero:

Change in C2 = K21(Change in M1) + K22(Change in M2) = 0

Solving for the change in M2 required for C2 to remain constant, we get:

Change in M2 = –(K21/K22)(Change in M1)

Substitute to obtain the total change in C1:

Change in C1 = [K11 – (K12K21/K22)](Change in M1)

The bracketed expression is then the closed-loop gain K11′. The
closed-loop gains for each of the other three pairings can be derived in the
same way.

Relative Gains (Interaction Measure)

To obtain Bristol's relative gains, or measures of interaction, divide each
open-loop gain by the corresponding closed-loop gain:

µij = Kij/Kij′ (9-3)

where µij is the relative gain for the pairing of controlled variable Ci with
manipulated variable Mj.

The following formulas can be used to compute the relative gains for any
2x2 system:

µ11 = µ22 = K11K22/(K11K22 – K12K21) (9-4)

µ12 = µ21 = K12K21/(K12K21 – K11K22)

It makes sense that the interaction measure for the C1-M1 pair be the same
as for the C2-M2 pair because they represent a single option in the 2x2
system. The other option is C1-M2 and C2-M1.

The relative gains are dimensionless and can therefore be compared to one
another. To minimize the effect of interaction, the controlled and
manipulated variables are paired so the relative gain for the pair is closest
to unity. This results in the smallest change in loop gain when the other loop of the
pair is closed. Notice that in cases where there is no interaction, the
open-loop gain is equal to the closed-loop gain, and the relative gains are
1.0 for one pairing and 0.0 for the other.

The following example illustrates how to calculate the relative gains for a
blending process, and how to interpret the resulting values of the relative
gains.

Example 9-1. Calculating Relative Gains of Blender. In the blender
shown in Figure 9-1, a change of 5 lb/h in F1, the dilute inlet stream, results
in a steady-state increase of 5 lb/h in F, the outlet flow, and a decrease of
0.5 percent in x, the outlet concentration. A change of 2 lb/h in F2, the
concentrated inlet stream, results in a steady-state increase of 2 lb/h in F
and an increase of 0.8 percent in x. Determine the relative gains, and pair
the flow and concentration controllers so as to minimize interaction.

From the change in F1, the open-loop gains are as follows:

KF1 = (5 lb/h)/(5 lb/h) = 1.0

Kx1 = (-0.5%)/(5 lb/h) = -0.1%/(lb/h)

From the change in F2, the open-loop gains are:

KF2 = (2 lb/h)/(2 lb/h) = 1.0

Kx2 = (0.8%)/(2 lb/h) = 0.4%/(lb/h)

From Eq. 9-4, the relative gains are as follows:

µF1 = µx2 = (1.0)(0.4)/[(1.0)(0.4) - (-0.1)(1.0)] = 0.8

µF2 = µx1 = (-0.1)(1.0)/[(-0.1)(1.0) - (1.0)(0.4)] = 0.2

This means that for the pair F1 with F and F2 with x, the steady-state gain
of each loop increases to 1/0.8 = 1.25 (a 25% change) when the other loop
is closed. Conversely, for the pair F1 with x and F2 with F, the gain of each
loop increases by a factor of 1/0.2 = 5 (a 400% change) when the other loop
is closed! Obviously, the first pairing is significantly less sensitive to
interaction than the second.
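Eq. 9-4 and the numbers of Example 9-1 can be reproduced with a few lines of Python (a sketch; the function and variable names are mine, not from the text):

```python
def relative_gains_2x2(k11, k12, k21, k22):
    """Relative gains of a 2x2 system from its open-loop gains (Eq. 9-4)."""
    mu11 = (k11 * k22) / (k11 * k22 - k12 * k21)  # C1-M1 with C2-M2 pairing
    mu12 = (k12 * k21) / (k12 * k21 - k11 * k22)  # C1-M2 with C2-M1 pairing
    return mu11, mu12

# Open-loop gains from Example 9-1, in the order KF1, KF2, Kx1, Kx2:
mu_f1, mu_f2 = relative_gains_2x2(1.0, 1.0, -0.1, 0.4)
```

Here mu_f1 is the relative gain for pairing the flow controller with F1 (and the composition controller with F2); the two values sum to one, as they must.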

Extending Relative Gains to Systems with More Than Two Control
Objectives

Eq. 9-4 can be used to compute the relative gains for any control system
with two objectives. For systems with more than two controlled and
manipulated variables, the open-loop gain of each loop is determined with
all the other loops open, and the closed-loop gain implies that all the
other loops are closed. The relative gain for each controlled/manipulated
variable pair is still defined as the ratio of the open-loop gain to the closed-
loop gain for that pair.

Calculating the relative gains involves inverting the matrix of open-loop
gains. You will therefore find it helpful to use a computer and canned
programs to perform the following matrix operations:
1. Compute the inverse of the matrix of open-loop gains.
2. Transpose the inverse matrix.
3. Multiply each term of the open-loop gain matrix by the
corresponding term of the transposed inverse matrix to obtain
the corresponding term of the relative gain matrix.
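The three matrix operations can be sketched in plain Python. A small Gauss-Jordan inverse is written out here only to keep the example self-contained; in practice you would use a canned linear-algebra routine, as the text suggests:

```python
def inverse(a):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def relative_gain_matrix(k):
    """Steps 1-3: invert K, transpose the inverse, and multiply element
    by element with K to obtain the relative gain matrix."""
    kinv = inverse(k)
    n = len(k)
    return [[k[i][j] * kinv[j][i] for j in range(n)] for i in range(n)]

# Check against the 2x2 blender gains of Example 9-1:
rga = relative_gain_matrix([[1.0, 1.0], [-0.1, 0.4]])
```

For the blender gains this reproduces the relative gains of Example 9-1, and every row and column of the result sums to one.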

Properties of the Relative Gains

The following properties of the relative gains are useful for interpreting
them:
1. Normalization. The relative gains are not only
nondimensional. They are also normalized in the sense that the
sum of the gains of any row or column of the matrix is unity. You
can verify this fact for the 2x2 by adding the relative gain
formulas for each pairing, that is, µ11 + µ12 = 1. This property also
applies to systems with more than two controlled and
manipulated variables.
2. Positive and Negative Interaction. For the 2x2 system, when the two
loops help each other (positive interaction), the relative gains are
between zero and one. When the two loops fight each other
(negative interaction), one set of relative gains is greater than
unity, and the other set is negative. Notice that a negative relative
gain means that the net action of the loop reverses when the other
loop is opened or closed—a very undesirable situation.

For a system with more than two control objectives, the concept
of positive and negative interaction must be applied on a pair-by-
pair basis. In other words, if the relative gain for a pair of
controlled and manipulated variables is positive and less than
unity, the interaction is positive. That is, that pair is “helped” by
the interaction of all the other loops. On the other hand, if the
relative gain for a pair is greater than unity or negative, the
interaction is negative. That is, the combined action of all other
loops causes a change in the controlled variable that is in the
direction opposite the direct change caused by the manipulated
variable in the pair.

The following example shows that when the steady-state relationships are
simple enough, as they are for the blender, the relative gains can be
expressed as formulas in terms of the process variables.

Example 9-2. Controlling Composition and Flow in a Catalyst
Blender. Consider the blender of Figure 9-1. The objectives are to control
the composition (x) and flow (F) of the product stream by manipulating
the positions of the control valves on the two feed streams. Which of the
two controllers should be paired to which valve to minimize the effect of
interaction? The relative gains can be used to determine this. (Note:
Although ratio control should be used here, doing so would still leave the
question of which flow should be ratioed to which, and the answer to our
original question will also answer this one. In fact, the ratio controller is
really a form of decoupling here.)

In Example 9-1 we developed a specific numerical solution for the blender;
here, we will develop a general solution. To do this, we use the
conservation of mass and solute to develop formulas for the open-loop
gains.

Conservation of mass: F = F1 + F2

Conservation of solute: x = (F1x1 + F2x2)/(F1 + F2)

Using differential calculus, the steady-state gains are as follows:

KF1 = Kv1    KF2 = Kv2

Kx1 = [F2(x1 – x2)/(F1 + F2)²]Kv1    Kx2 = [F1(x2 – x1)/(F1 + F2)²]Kv2

where Kv1 and Kv2 are the valve gains, in (lb/h)/fraction valve position.

Next, substitute the open-loop gains into the formulas for the relative
gains given in Eq. 9-4. A little algebraic manipulation produces the
following general expressions for the relative gains:

µF1 = µx2 = F1/(F1 + F2)    µF2 = µx1 = F2/(F1 + F2)

In words, the pairing that minimizes interaction has the flow controller
manipulating the larger of the two flows and the composition controller
manipulating the smaller of the two flows. If a ratio controller were used,
the smaller flow should be ratioed to the larger flow, with the flow
controller manipulating the larger flow and the composition controller
manipulating the ratio. It could easily be shown that the ratio controller
decouples the two loops so that a change in flow does not affect the
composition. Notice that the valve gains Kv1 and Kv2 do not affect the
relative gains. This is why they were not considered in Example 9-1.
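These general expressions are simple enough to check numerically. The sketch below (Python; the function name and the flow values are illustrative assumptions, not from the text) evaluates the relative gains µF1 and µF2 from the conservation-balance result:

```python
# Sketch: relative gains for the blender pairing, from the general result
# mu_F1 = mu_x2 = F1/(F1 + F2) and mu_F2 = mu_x1 = F2/(F1 + F2).
# The flow values below are illustrative, not taken from the text.

def blender_relative_gains(F1, F2):
    """Return (mu_F1, mu_F2): relative gains for pairing the flow
    controller with valve 1 or with valve 2, respectively."""
    total = F1 + F2
    return F1 / total, F2 / total

mu_F1, mu_F2 = blender_relative_gains(F1=1500.0, F2=500.0)
print(mu_F1, mu_F2)  # 0.75 0.25 -> pair the flow controller with the larger stream F1
```

With F1 the larger flow, µF1 is closest to unity, which reproduces the pairing rule stated above.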

For most processes the relative gains tell all that needs to be known about
interaction. They are determined from the open-loop, steady-state gains,
which can easily be determined by either on-line or off-line methods.
However, in systems with negative interaction, the pairing recommended
by relative gain analysis may not result in the best control performance
because it does not consider the dynamic response. This is illustrated in
the following example.

Example 9-3. Two-Point Composition Control of a Distillation Column. Figure 9-4 shows a sketch of a distillation column with five
manipulated and controlled variables. The column separates a 50 percent
mixture of benzene and toluene into a distillate product with 95 percent
benzene and a bottoms product with 5 percent benzene. The objective is to
maintain the compositions of the distillate and bottom products at their set
points. In a distillation column, temperature can provide an indirect
measurement of composition, so the two temperature controllers (TC 1
and TC 2) control the composition of the two products by inference.
Secondary objectives are to maintain the vapor balance by controlling the
column pressure (PC) and the liquid balances by controlling the levels in
the accumulator drum (LC 1) and the column bottom (LC 2). The five
manipulated variables are the flow rates of the two products, the reflux
flow, the steam flow to the reboiler, and the cooling rate of the condenser.

The two level variables do not affect the operation of the column directly;
thus, they cannot be made a part of the interaction analysis. However, the
decision regarding which streams control the levels has an effect on the
interaction between the other control loops. Two arrangements or schemes
Figure 9-4. Multivariable Control of a Distillation Column

are considered. To reduce the problem to a 2x2, assume that the column
pressure controller (PC) manipulates the condenser cooling rate.

Scheme 1. Level control by product stream manipulation.

In this scheme, commonly known as “Energy Balance Control,” the distillate rate is manipulated to control the level in the condenser
accumulator (LC 1), and the bottoms rate is manipulated to control the
bottom level (LC 2), as in Figure 9-5. This leaves two unpaired control
loops: the two temperature controllers to manipulate the steam and reflux
rates.

Sensitivity tests performed on a simulation of the column yield the following open-loop gains:

Reflux Steam

TC-1 -2.85 1.16

TC-2 -0.438 2.53

The relative gains are as follows:

Reflux Steam

TC-1 3.38 -2.38

TC-2 -2.38 3.38


Figure 9-5. Energy Balance Control Scheme for Distillation Column

Notice that the obvious pairing—top temperature with reflux and bottom
temperature with steam—results in less interaction than the other one.
However, even then there is much interaction between the two loops: the
gain of each loop decreases by a factor of 3.38 when the other loop is
switched to automatic, which indicates that the two temperature loops
fight each other. This result indicates that this scheme suffers from negative
interaction.

Scheme 2. Bottom level by steam manipulation.

In this scheme, known as “Direct Material Balance Control,” the bottom level controller manipulates the steam rate, and the bottom temperature
controller manipulates the bottoms product rate, as in Figure 9-6. The top
of the column remains the same as before. The problem is then to pair the
two temperature controllers with the bottoms rate and reflux flow.

The sensitivity study on the simulated column gives the following open-loop gains:

Reflux Bottoms
TC-1 -0.35 -1.05

TC-2 0.07 -1.93


Figure 9-6. Direct Material Balance Control of Bottoms Product

The relative gains are as follows:

Reflux Bottoms
TC-1 0.90 0.1

TC-2 0.10 0.90

The pairing for this scheme is also the obvious one, top temperature with
reflux and bottom temperature with bottoms product flow. However, the
relative gains show only about 10 percent positive interaction; that is, the
two loops help each other, which is indicated by the relative gains being
positive and less than unity.
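For any 2x2 system, the relative gains follow directly from the four open-loop gains: µ11 = 1/(1 − K12K21/(K11K22)), and the rows and columns of the relative gain matrix each sum to one. The sketch below (Python; the helper name is ours) reproduces the Scheme 2 relative gains from the open-loop gains given above:

```python
# Sketch: relative gains of a 2x2 system from its open-loop gains,
# using mu11 = 1/(1 - K12*K21/(K11*K22)); rows and columns of the
# relative gain matrix each sum to one.

def relative_gains_2x2(K11, K12, K21, K22):
    """Return the 2x2 relative gain matrix for open-loop gains Kij."""
    mu11 = 1.0 / (1.0 - (K12 * K21) / (K11 * K22))
    return [[mu11, 1.0 - mu11],
            [1.0 - mu11, mu11]]

# Scheme 2 open-loop gains (TC-1 and TC-2 versus reflux and bottoms):
rga = relative_gains_2x2(K11=-0.35, K12=-1.05, K21=0.07, K22=-1.93)
print(rga)  # approximately [[0.90, 0.10], [0.10, 0.90]]
```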

From steady-state relative gain analysis, it would appear then that Direct
Material Balance Control results in significantly less interaction than
Energy Balance Control. Unfortunately, the Energy Balance Control
scheme, which relative gain analysis showed to have more steady-state
interaction, was found to perform better in this particular case than the
Direct Material Balance Control scheme. The reason for this is dynamic
interaction, which goes undetected by the relative gain matrix. For the first
scheme, the open-loop responses are monotonic, that is, the temperature
stays between its initial value and its final value during the entire
response. On the other hand, for the second scheme the open-loop
responses exhibit inverse response, that is, the temperature moves in one
direction at the beginning of the response and then moves back to a final
value on the opposite side of its initial value. This causes the feedback
controller to initially take action in the wrong direction, degrading the
performance of the control system.

Although in this particular example relative gain analysis fails to properly predict which of the two control schemes performs better, it is still useful
for verifying that the intuitive pairing is the correct one for each scheme.
Relative gain analysis would also have evaluated the interaction for each
scheme correctly had all the responses been monotonic. This example also
shows that the arrangement of the level controllers affects the interaction
between the other loops in the column.

9-3. Design and Tuning of Decouplers

Although relative gain analysis usually results in the pairing of variables that minimizes the effect of loop interaction, it does not eliminate it.
When the relative gains approach 0.5 the effect of interaction is the same
regardless of the pairing. In the case of negative interaction, when one set
of relative gains is negative and the other much greater than unity, the
proper pairing still produces a great deal of interaction. The only solution
to this problem is to compensate for interaction by designing a decoupler.

A decoupler is a signal processor that combines the controller outputs so as to produce the signals to the control valves or slave controller set points.
Its operation can best be understood by considering the block diagram of a
decoupled 2x2 system shown in Figure 9-7.

Figure 9-7. Block Diagram of Decoupled 2x2 Control System (set points R1, R2; errors E1, E2; controllers Gc1, Gc2; controller outputs U1, U2; decouplers D1, D2; valve signals M1, M2; direct process paths G11, G22; interaction paths G12, G21; controlled variables C1, C2)



Each of the two decoupler terms, D1 and D2, can be considered to be feedforward controllers for which the “disturbances” are the controller
output signals U1 and U2. The design of the decouplers is therefore
identical to the design of a feedforward controller presented in Unit 8.

Decoupler Design Formulas

The objective of decoupler term D1 is to compensate for the effect of U2 on C1, that is, to prevent changes in the output of the second controller from
affecting the controlled variable of the first loop. The goal of decoupler
design is to make the total change in C1, which is the sum of the changes
caused by the two paths from U2 to C1, equal to zero:

Change in C1 = D1G11(Change in U2) + G12(Change in U2) = 0

Solving for the decoupler term D1, we get the following:

D1 = -G12/G11        (9-5)

Similarly, decoupler term D2 is designed to compensate for the effect of U1 on C2, and from the block diagram of Figure 9-7 we get:

D2 = -G21/G22        (9-6)

Decoupling, like feedforward, can be designed to varying degrees of complexity. The simplest design is given by linear static compensation (i.e., forgoing the dynamic compensation), which can be accomplished in
practice by a simple summer with adjustable gains. The next degree of
complexity is to add dynamic compensation in the form of lead-lag units
(see Unit 8). Ultimately, nonlinear models of the process could be used to
design nonlinear decouplers, following the procedure outlined in Unit 8.
Eqs. 9-5 and 9-6 assume linear models.
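As a rough illustration of the simplest case, the sketch below implements linear static decoupling as a summer with fixed gains computed from Eqs. 9-5 and 9-6, using steady-state gains only. The function name and all numerical values in the example call are illustrative assumptions:

```python
# Sketch of the simplest (static) decoupler of Eqs. 9-5 and 9-6: a
# summer with fixed gains D1 = -K12/K11 and D2 = -K21/K22 applied to
# the changes in the controller outputs. All values are illustrative.

def static_decoupler(U1, U2, U1o, U2o, K11, K12, K21, K22):
    """Combine controller outputs U1, U2 into valve signals M1, M2.
    U1o and U2o are the controller outputs at initialization; Kij are
    the steady-state open-loop gains."""
    D1 = -K12 / K11
    D2 = -K21 / K22
    M1 = U1 + D1 * (U2 - U2o)  # cancels the effect of U2 on C1
    M2 = U2 + D2 * (U1 - U1o)  # cancels the effect of U1 on C2
    return M1, M2

M1, M2 = static_decoupler(U1=60.0, U2=50.0, U1o=50.0, U2o=50.0,
                          K11=2.0, K12=1.0, K21=0.5, K22=1.0)
print(M1, M2)  # 60.0 45.0
```

In practice this is just the adjustable-gain summer mentioned above; lead-lag units would be added in series with D1 and D2 for dynamic compensation.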

Decoupling and Control Loop Performance

Unlike feedforward controllers, the decoupler forms a part of the feedback loop and can as such introduce instability into the system. Consider the
total effects that U1 has on C1 and that U2 has on C2:

Change in C1 = [G11 + D2G12](Change in U1) (9-7)

Change in C2 = [G22 + D1G21](Change in U2) (9-8)



It is possible for dynamic compensation to call for unstable terms in D1 and D2. Therefore, these terms must obviously be left out of the
decouplers to maintain stability.

As Eqs. 9-7 and 9-8 show, another aspect of decoupling is that two parallel
paths exist between each controller output and its controlled variable. For
processes with negative interaction these two parallel paths have opposite
signs, which creates either an inverse response or an overshoot in the
open-loop step response of each decoupled loop. It is important to realize,
however, that the parallel paths are not created by the decouplers in that
they were already present in the “un-decoupled” system (the interaction
and direct effects).

As the design of the decoupler makes clear, the steady-state effect of the
decoupler on any one loop is the same effect the integral mode of the other
loops would have if the decoupler were not used. What then does the
decoupler achieve? Basically, through decoupling, the effect of interaction
is made independent of whether the other loops are opened or closed.
However, problems may still arise in one loop if the manipulated variable
of another loop is driven to the limits of its range. This is because the
decoupling action is then blocked by the saturation of the valve. It is
therefore important to select the correct pairing of manipulated and
controlled variables even when decoupling is used, so saturation of one of
the manipulated variables in the multivariable system does not drastically
affect the performance of the other loops.

Half Decoupling

As discussed earlier in this unit, the interaction effect depends on both manipulated variables affecting both controlled variables. Thus,
interaction can be eliminated by decoupling one loop and letting the other
loop be affected, that is, implementing either D1 or D2 but not both. When
you are deciding which decoupler to select, your first consideration may
be which of the controlled variables it is more important to keep at its set
point. A secondary consideration may be the ease with which the dynamic
terms of the decouplers can be implemented.

In summary, decoupling is a viable strategy for multivariable control systems. Its design is similar to feedforward control, although it is simpler
in that it does not require additional measurements of process variables.
Unlike feedforward, the decoupler forms part of the loop response and
affects its stability. Applications of decoupling are usually restricted to 2x2
systems. For systems involving more than two control objectives more
sophisticated control strategies are used.

The following example illustrates the design of a simple linear decoupler for a blending process.

Example 9-4. Designing a Decoupler for the Catalyst Blender. The two objectives of the control system for the catalyst blender of Figure 9-1
are the control of the product composition and the control of the flow.
Because the blender is full of liquid, the response of the total flow to
changes in each of the input flows is instantaneous, thus the decoupler for
the total flow should not require dynamic compensation. The response of
the product composition should be that of a first-order lag with a time
constant equal to the residence time of the tank—volume divided by the
total flow. Since this time constant is the same for the composition
response to either input flow, the composition decoupler should not
require dynamic compensation either.

The application of the linear decoupler design formulas, given in Eqs. 9-5 and 9-6, results in the following two formulas for the signals to the control valves. These formulas assume that F1 is the larger of the two flows, and, for minimum interaction, it is used to control the total flow.
This pairing is the one we determined by relative gain analysis in
Example 9-2:

M1 = U1 - (Kv2/Kv1)(U2 - U2o)

M2 = U2 + [F2Kv1/(F1Kv2)](U1 - U1o)

where U1o and U2o are the controller outputs at initialization.

The coefficients correct for the sizes of the two valves. And, in the second
formula, the coefficients correct for the ratio between the two inlet flows
that is required to maintain the composition constant. This ratio is a
function of the two inlet stream compositions and the product
composition set point. If any of these compositions were to vary, you
would have to readjust the gain of the decoupler. There is, however,
another way to design the decoupler that does not require you to readjust
the parameters when process conditions change. It consists of using
simple process models to set up the structure of the control system, as
shown in the next section.

Decoupler Design from Process Models

The models needed to design the decouplers are based on the conservation of total mass and of solute mass, from Example 9-2. Conservation of total mass stipulates that the output of the product flow controller should manipulate the sum of the two inlet flows.
Therefore, the output of the flow controller is assumed to be the total inlet
flow, and the smaller flow is subtracted from it to determine the larger
flow:

F1set = U1 - F2

The smaller flow must be measured and the larger flow must be controlled
for this formula.

The conservation of solute mass shows that the product composition depends on the ratio of the flows rather than on any one of the inlet flows.
It is then assumed that the output of the composition controller is the ratio
of the smaller flow to the larger flow. The smaller flow is then calculated as
follows:

F2set = U2F1

This formula requires that the smaller flow also be controlled. Figure 9-8
shows the diagram of the resulting control system. In this scheme, the
ratio controller keeps the product composition from changing when the
total flow is changed, and the summer keeps the total flow from changing
when the composition controller takes action. The multivariable control
system is therefore fully decoupled.
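The arithmetic of this model-based scheme is minimal: a summer and a ratio computation. The sketch below (Python; the function and variable names are ours, and the transmitter scale factors are omitted) computes the two flow controller set points from the controller outputs and the flow measurements, all of which are illustrative values:

```python
# Sketch of the model-based decoupling of Figure 9-8: the flow
# controller output U1 is interpreted as the total inlet flow, and the
# composition controller output U2 as the ratio F2/F1. Transmitter
# scale factors are omitted; all numbers are illustrative.

def decoupled_setpoints(U1_total_flow, U2_ratio, F1_meas, F2_meas):
    """Return (F1_set, F2_set) for the two flow controllers."""
    F1_set = U1_total_flow - F2_meas  # summer: F1set = U1 - F2
    F2_set = U2_ratio * F1_meas       # ratio:  F2set = U2 * F1
    return F1_set, F2_set

F1_set, F2_set = decoupled_setpoints(U1_total_flow=2000.0, U2_ratio=0.5,
                                     F1_meas=1000.0, F2_meas=500.0)
print(F1_set, F2_set)  # 1500.0 500.0
```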

Figure 9-8. Decoupled Control System for Catalyst Blender



The last two design formulas do not show the scale factors that you may
need to convert the flow signals into the percentage scales of the flow
controllers. The scale factors depend on the spans of the two flow
transmitters rather than on the sizes of the control valves. The flow
controllers allow the signals to be linear with flow. In addition, they take
care of changes in pressure drop across the control valves.

9-4. Tuning Multivariable Control Systems

From the preceding analysis of interacting loops it is obvious that the interaction is going to affect the response of each loop. That is, the tuning
parameters and manual/automatic state of each loop affect how the other
loops respond. This section shows you how to account for the effect of
interaction when tuning each loop in a multivariable control system.

The first step when tuning interacting loops is to prioritize the control
objectives, in other words, to rank the controlled variables in the order in
which it is important to maintain them at their set points. The second step is
to check the relative gain for the most important variable and decide if it is
necessary to detune the other loops. The principle behind this approach is
that a loosely tuned feedback control loop—low gain and slow integral—
behaves as if it were opened or, rather, it will make changes in its
manipulated variable slowly enough to allow the controller of the
important variable to correct for the effect of interaction. The decision as to
how loosely to tune the less important loops is based on how different
from unity is the relative gain for the most important loop. It is
understood that the manipulated variable for the most important variable
has been selected to make the relative gain for that loop as close to unity as
possible. When there are more than two interacting loops, the tightness of
tuning for each loop will decrease with its rank.

An alternative approach to detuning the less important loops is to install decouplers that compensate for the effect of the action of the less important
loops on the most important loop. You should not install the decouplers
that compensate for the action of the most important loop on the other
loops, especially if the relative gain for that loop is greater than unity. This
is because the action of the decoupler affects the loop whose action is
compensated for (as we discussed in Section 9-3 regarding the decoupled
block diagram of Figure 9-7). If the relative gain for a loop is greater than
unity or negative (negative interaction), the decoupler action will be in the
direction opposite the direct action of the manipulated variable. This
causes inverse response or overshoot, which makes the loop less
controllable. Notice that, for loops with negative interaction, detuning the
other loops slows down the parallel effect in the opposite direction if the
decoupler is not used. Thus, for example, if the top loop in Figure 9-7 were
the most important of the two, use decoupler D1 but not decoupler D2.

If at least two of the control objectives in a multivariable control system are of equal importance, you must tune them as tightly as possible. In such
cases, they should be tuned in the order of decreasing speed of response. If
one of the important control loops can be tuned to respond much faster
than the others, it should be tuned first and kept in automatic while you
tune the other loops. In this way, the response used for tuning the slower
loops will include the interaction effect of the faster loop. For example, in
the control system for the blender of Figure 9-1, the flow controller should
be faster than the composition controller because the flow responds almost
instantaneously while the composition is lagged by the time constant of
the tank. The flow controller must then be tuned first and kept in
automatic while the composition controller is tuned.

If all of the loops are of equal importance and speed of response, they
must each be tuned while the other loops are in manual. Then, the
controller gain of each loop must be adjusted by multiplying the controller
gain obtained when all other loops were opened by the relative gain for
the loop:

Kcij' = Kcijµij (9-9)

where

Kcij'= the adjusted controller gain, %C.O./%T.O.

Kcij = the controller gain tuned with all the other loops opened,
%C.O./%T.O.

µij = the relative gain for the loop

This adjustment accounts for the change in steady-state gain when the
other loops are closed, but it does not account for dynamic effects. If some
of the loops are slower than the others or can be detuned, you must
recalculate the relative gains for the remaining loops as if those were the
only interacting loops, that is, as if the slower or detuned loops were
always opened.
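The adjustment of Eq. 9-9 is a single multiplication per loop. The sketch below applies it with values that echo this unit's examples (a relative gain of 0.5 as in the blender, and 3.38 as in the Energy Balance column scheme); the function name and the starting gain of 5.0 are illustrative assumptions:

```python
# Sketch of the gain adjustment of Eq. 9-9: multiply each controller
# gain, tuned with all the other loops open, by the relative gain of
# its pairing. The starting gain below is illustrative.

def adjust_gain(Kc_open, mu):
    """Eq. 9-9: Kc' = Kc * mu, both gains in %C.O./%T.O."""
    return Kc_open * mu

print(adjust_gain(5.0, 0.5))   # 2.5: positive interaction (mu < 1) reduces the gain
print(adjust_gain(5.0, 3.38))  # negative interaction (mu > 1) increases it
```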

The gain adjustment suggested by Eq. 9-9 should be sufficient for those
loops with positive interaction since their response remains monotonic
when the other loops are closed. However, the loops with negative
interaction may need to be retuned after the other loops are closed. This is
because the other loops will cause either inverse or overshoot response,
which normally requires lower gains and slower integral than monotonic
(minimum phase) loops. Notice that the formula results in a gain
reduction for the loops with positive interaction and a gain increase for the
loops with negative interaction (assuming the pairing with the positive
relative gain is always used).

When decouplers are used, they must be tuned first and then kept active
while the feedback controllers are tuned. Recall that perfect decoupling
has the same effect on a loop as if the other loops were very tightly tuned.
For example, for the blender control system of Figure 9-8, the ratio and
mass balance controllers must be tuned first and kept active while the flow
and composition controllers are tuned.

The following example shows how interaction affects the tuning of the
controllers.

Example 9-5. Response of Catalyst Blender Control. The catalyst blender control system of Figure 9-1 consists of an analyzer controller AC
with a sample time of one minute that is manipulating the dilute stream F2
and a continuous flow controller FC that is manipulating the concentrated
stream F1. A dead time of one sample time (1 min) is introduced by the
analyzer AT. The analyzer controller is a parallel discrete PID controller
tuned as follows:

Kc = 5%C.O./%T.O. TI = 2 min TD = 0.25 min

A PI controller for the flow controller is tuned as follows:

Kc = 0.9%C.O./%T.O. TI = 0.5 min

Initially, the inlet flows are each 1,000 kg/h, and the product concentration
is 50 percent catalyst. Figure 9-9 shows the responses of the product
composition and flow, as well as the inlet flows, for a step decrease of 10
percent in the dilute stream composition. The curves marked (a) are the
responses when the product flow controller is kept in manual, and the
curves marked (b) are for the flow controller in automatic. Notice that the
response of the analyzer controller is more oscillatory when the flow
controller is in automatic. This is because the interaction is positive, with a
relative gain of 0.5 (from the result of Example 9-2). Thus, the gain of the
analyzer controller doubles when the flow controller is switched to
automatic. If the gain of the analyzer controller were to be reduced by one
half—to 2.5%C.O./%T.O.—the response would match the response
obtained when the flow controller is in manual.
Figure 9-9. Control of Catalyst Blender. (a) With Product Flow Controller on Manual. (b) With Product Flow Controller on Automatic. (Traces of composition in %, product flow in kg/h, and concentrated and dilute inlet flows in kg/h versus time in minutes.)

9-5. Model Reference Control

A number of multivariable control schemes that are currently widely used in the process industries can be classified under the general umbrella of
Model Predictive Control (MPC). Many such commercial schemes for
multivariable control and optimization are available on the market. Some,
like Dynamic Matrix Control (DMC)2 use multivariable linear models,
while others, like The Process Perfecter (PP),3 use artificial neural-
network-based nonlinear models. Although the technical aspects of these
advanced techniques are outside the scope of this book, this section briefly
discusses the MPC technique and presents an example. For a simple
introduction to the mathematics of the DMC technique see Chapter 15 of
Smith and Corripio.4

A model predictive controller uses an on-line process model and feedback from process measurement to correct for unmeasured disturbances and
model error. The more successful methods are not restricted to specific
model structures such as the first-order-plus-dead-time model described
in Unit 3. Rather, the models are developed from process data. For
example, DMC models consist of the unit step responses of each
controlled variable C to each manipulated variable M and measured
disturbance D. On the other hand, the Process Perfecter develops models
by training neural networks using plant data.

One common characteristic of the successful model predictive controllers is that they use the models to predict the future response of the controlled or dependent variables. Then, they correct the predicted values by comparing current process measurements with the values predicted by the
model for the current time. The corrected predicted values from the model
are then used to determine the changes in the manipulated or independent
variables that would minimize a function of the deviations of the
dependent variables from their set points. Because the different controlled
variables have different units of measure—temperatures, flows,
compositions, and so on—their deviations must be weighted in the
function that is to be minimized. One way to do this is to define an “equal-
concern error” for each variable. For example, equal-concern errors in a
given application may be 5°F, 200 kg/h, 2 weight%, and so on. Weighting
the deviations by the reciprocals of the equal-concern errors normalizes
them into deviations of equivalent magnitude.
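The equal-concern weighting amounts to dividing each deviation by its equal-concern error before squaring. The sketch below (Python; the function name and the example deviations are illustrative assumptions) shows how a 5°F temperature error and a 200 kg/h flow error end up contributing equally to the objective:

```python
# Sketch of equal-concern-error weighting: each deviation is divided
# by its equal-concern error, so deviations of "equal concern" carry
# equal weight in the minimized function. Values are illustrative.

def weighted_sq_deviation(deviations, equal_concern_errors):
    """Sum of squared deviations, each normalized by its equal-concern
    error. With equal-concern errors of 5 F and 200 kg/h, a 5 F
    temperature error counts the same as a 200 kg/h flow error."""
    return sum((d / e) ** 2
               for d, e in zip(deviations, equal_concern_errors))

print(weighted_sq_deviation([5.0, 200.0], [5.0, 200.0]))  # 2.0
```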

Model predictive controllers share another common characteristic: they impose penalties in the function to be minimized for excessive movements
in the manipulated variables. In fact, the penalty factors for movements of
the manipulated variable, which are known as “move-suppression
parameters,” are among the parameters used to tune the controller.

Another characteristic of model predictive controllers is that they provide for optimization of the set points. In a linear scheme like DMC a linear
program (LP) is used to do the optimization. This means that the system is
driven to its constraints since linear systems cannot have optimums inside
the range of operating conditions. Because there are constraints in both the
set points and the manipulated variables and the number of degrees of
freedom is equal to the number of manipulated (independent) variables,
the optimum operating conditions occur when the sum of the number of
variables constrained is equal to the number of manipulated variables.

Finally, model predictive control systems are designed to handle constraints in both the dependent and independent variables. The main
concern addressed by these techniques is that when one or more variables
are driven against a constraint, the optimum values of the remaining
variables are not the same as when all the variables can be set to their
optimum values.

Example 9-6 illustrates how one of the popular model reference control
schemes, Dynamic Matrix Control, controls a process.

Example 9-6. Dynamic Matrix Control of Jacketed Chemical Reactor. In the jacketed reactor of Figure 7-1 both composition and temperature must be controlled by directly manipulating the cooling water and reactants flows. The first step is to determine experimentally the unit step responses of both dependent variables (composition C and temperature T) to the two independent variables (coolant flow Fc and reactants flow F).

Figure 9-10 shows the step responses of composition and temperature to a 1%C.O. change in coolant flow (1 lb/min) and a 1 percent change in reactants flow (0.02 ft3/min). Two experiments are needed to obtain these responses. Each consists of making a step in an independent variable while keeping the other one constant.

Figure 9-10 shows that first-order-plus-dead-time models could not be used to represent the responses to a change in reactants flow because each
response exhibits inverse response. However, the Dynamic Matrix Control
scheme does not require the use of a model to represent the process
responses. Rather, it uses a vector of sampled values of each unit step
response to represent the process response. In this case, it uses four vectors
of forty sampled values each, sampled once per minute.
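The way such step-response vectors predict future outputs is by superposition: the predicted output deviation is the sum of each past input move's contribution, read off the sampled unit step response. The sketch below (Python; the function name and the short five-sample coefficients are illustrative, not the reactor's forty-sample vectors) shows the idea:

```python
# Sketch: predicting an output from a vector of unit-step-response
# samples a[0..N-1], as DMC does, by superposing the responses to the
# past input moves du[k]. Coefficients below are illustrative.

def predict_from_step_response(a, du):
    """Predicted output deviation at each sample k:
    y[k] = sum over past moves j <= k of a[k - j] * du[j]."""
    n = len(a)
    y = [0.0] * n
    for k in range(n):
        for j in range(k + 1):
            y[k] += a[k - j] * du[j]
    return y

# A single unit move at time zero reproduces the step response itself:
a = [0.2, 0.5, 0.8, 1.0, 1.0]
print(predict_from_step_response(a, [1.0, 0.0, 0.0, 0.0, 0.0]))
# -> [0.2, 0.5, 0.8, 1.0, 1.0]
```

Because the prediction is built directly from the sampled response, the inverse responses in Figure 9-10 pose no difficulty: the sampled values capture the initial wrong-way movement without needing a model structure to represent it.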

The tuning of the Dynamic Matrix Controller requires that three parameters be specified: the number of moves of the independent variables over which the minimization of the squared deviations is to be carried out (called the output horizon), the vector of equal-concern errors for the dependent variables, and the vector of move suppression parameters for the independent variables. Figure 9-11 shows the responses of the dependent and independent variables for a 5°F change in temperature set point. The response is for an output horizon of ten moves, equal-concern errors of 1 percent composition and 1°F, and move suppression parameters of 0.15 for the coolant flow and 0.05 for the reactants flow.

Figure 9-10. Unit Step Responses of Composition and Temperature of Jacketed Chemical Reactor to Coolant and Reactants Flow

Figure 9-11. Dynamic Matrix Control of Composition and Temperature of Jacketed Chemical Reactor (traces of composition C in %, temperature T in °F, coolant flow Fc in lb/min, and reactants flow F in ft3/min versus time in minutes)

Figure 9-11 shows that the dynamic matrix controller is able to change the
temperature while maintaining the composition relatively constant. When
the move suppression parameters were reduced to 0.01 each, the response
was unstable, and when the move suppression of the coolant flow was
0.05, the controller drove the coolant flow to zero. It is important to realize
that if only two simple feedback controllers, with or without a decoupler,
were used, this would be a very difficult control problem because of the
inverse responses to changes in reactants flow (shown in Figure 9-10).

9-6. Summary

This unit dealt with multivariable control systems and how to tune them.
It showed the effect that loop interaction has on the response of feedback
control systems, and it presented two methods for dealing with that effect.
The first is Bristol's relative gains, which minimizes the effect of
interaction by quantitatively determining the amount of interaction and
by selecting the pairing of controlled and manipulated variables. The
second is loop decoupling. In one example, the distillation column
showed that you must also consider dynamic interaction, undetected by
the relative gains, when pairing controlled and manipulated variables.

EXERCISES

9-1. Under what conditions does loop interaction take place? What are its
effects? What two things can be done about it?

9-2. For any given loop in a multivariable (interacting) system, define the open-
loop gain, the closed-loop gain, and the relative gain (interaction measure).

9-3. How are the relative gains used to pair controlled and manipulated
variables in an interacting control system? What makes it easy to
determine the relative gains? What is the major shortcoming of the relative
gain approach?

9-4. In a 2x2 control system the four relative gains are 0.5. Is there a best way to
pair the variables to minimize the effect of interaction? By how much does
the gain of a loop change when the other loop is closed? Is the interaction
positive or negative?

9-5. Define positive and negative interaction. What is the range of values of the
relative gain for each type of interaction?

9-6. The open-loop gains for the top and bottom compositions of a distillation
column are the following:

Reflux Steam

Distillate Compn. 0.05 -0.02

Bottoms Compn. -0.02 0.05

Calculate the relative gains and pair the compositions of the distillate and
bottoms to the reflux and steam rates so that the effect of interaction is
minimized.

9-7. The automated showers in the house of the future will manipulate the hot
and cold water flows to maintain constant water temperature and flow. In a
typical design the system is to deliver three gallons per minute (gpm) of
water at 110°F by mixing water at 170°F with water at 80°F. Determine
the open-loop gains, the relative gains, and the preferred pairing for the two
control loops. Hint: the solution to this problem is identical to that of
Example 9-2.

9-8. Design a decoupler to maintain the temperature constant when the flow is
changed in the shower control system of Exercise 9-7. Dynamic effects can
be ignored.

REFERENCES

1. E. H. Bristol, “On a Measure of Interaction for Multivariable
Process Control,” IEEE Transactions on Automatic Control, vol.
AC-11 (Jan. 1966), pp. 133-134.
2. C. R. Cutler and B. L. Ramaker, “DMC - A Computer Control
Algorithm,” AIChE 1979 Houston Meeting, Paper #516 (New
York: AIChE, 1979).
3. Process Perfecter®, Pavilion Technologies, 11100 Metric
Boulevard, Austin, Texas.
4. C. A. Smith and A. B. Corripio, Principles and Practice of Automatic
Process Control, 2d ed. (New York: Wiley, 1997).
UNIT 10

Adaptive and Self-tuning Control


One common characteristic of most process control systems is that they
have to deal with process characteristics that vary with process conditions
and time because the processes under control are nonlinear or time-
varying or both. This unit presents some techniques for adapting the
controller to the changing characteristics of the process.

Learning Objectives — When you have completed this unit, you should be
able to:

A. Know when to apply adaptive and self-tuning control.

B. Understand the use of preset compensation.

C. Be able to apply adaptive and self-tuning controllers based on
pattern recognition and discrete model parameter estimation.

10-1. When Is Adaptive Control Needed?

Adaptive control is needed whenever process nonlinearities or time-varying
characteristics cause a significant change in the process dynamic
parameters. Unit 3 showed that the dynamic behavior of a process can be
characterized by the three parameters of a first-order-plus-dead-time
(FOPDT) model: the gain, the time constant, and the dead time
(transportation lag or time delay). It also showed that these parameters are
usually functions of process operating conditions. Unit 4 showed that the
controllability of a feedback loop decreases with the ratio of the dead time
to time constant of the process.

Because most feedback controllers are linear, once they are tuned at a
given process operating condition their performance will vary when the
process operating conditions change. However, since feedback control is
usually a very robust strategy, small variations in process operating
conditions would normally not change the process dynamic behavior
enough to justify adaptive control techniques. Because of this robustness,
we can say that although most processes are nonlinear, very few processes
require adaptive control.

Feedforward control would be more sensitive to changing process dynamic
behavior were it not for the fact that feedback trim is used on essentially
all installations of feedforward and ratio control strategies (see Unit 8).
The presence of feedback trim makes these installations less sensitive to
changing process operating conditions.

Though we have said that most process control applications do not require
adaptive control, the following sections will discuss two examples—
process nonlinearities and process time dependence—where it may be
needed.

Process Nonlinearities

Of the three process model parameters, the one most likely to affect the
performance of the loop is the gain, because the loop gain is
directly proportional to the process gain. Moreover, for the variables for
which good control is important, temperature and composition, the loop
gain is usually inversely proportional to process throughput (see
Section 3-6). Figure 10-1 shows a typical plot of process gain versus
throughput. This plot applies to the control of composition in a blender or
of outlet temperature in a steam heater or furnace. The gain variation is
even more pronounced in a heat exchanger where the manipulated
variable is the flow of a hot oil or coolant. This very common nonlinearity
can be summarized by the following statement:

For most temperature and composition control loops, the process
gain decreases as the throughput—and therefore the position of
the control valve—increases.

Many control schemes are expected to perform well at several throughput
rates, as when a portion of the process is fed by two or more parallel
trains, any number of which can be operating at any given time. This
means that the throughput for the common portion of the process, and
consequently its gain, can vary significantly. Another common
nonlinearity is the exponential dependence of reaction rates on
temperature, which becomes important in batch reactors that are operated
at different temperatures during the batch. The dependence of reaction
rate on composition also affects the process gain in batch reactors,
especially if the reaction is carried to a high conversion. The higher the
order of the reaction, the greater the effect.

Figure 10-1. Variation of Process Gain with Throughput for a Blender, Furnace, or Steam
Heater

Finally, pH control loops present a high degree of nonlinearity, as shown
in the plot of pH versus the flow of the control stream in Figure 10-2.
Because pH is a logarithmic function of the hydrogen ion concentration,
when the pH is away from the neutral value of 7 the flow of the control
stream must change by a factor of ten to change the pH by each successive
unit. This means that the controller must be able to change the flow by
very small amounts when the pH is near 7 but by very large amounts
when it is away from 7. Notice that in the pH loop, as with the previous
examples cited in this section, the nonlinear behavior of the process results
in a control loop with variable gain.
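The gain variation implied by this logarithmic behavior can be illustrated with a minimal sketch. The strong-acid/strong-base mixing model below, and all of its flows and concentrations, are made-up illustrative values, not taken from the text:

```python
import math

def mixed_ph(acid_flow, base_flow=100.0, acid_conc=0.01, base_conc=0.01):
    """pH of the stream formed by mixing a strong acid into a strong
    base stream (flows and concentrations are illustrative values)."""
    # Net excess strong-acid concentration (negative means excess base)
    excess = (acid_flow * acid_conc - base_flow * base_conc) / (acid_flow + base_flow)
    if abs(excess) < 1e-7:
        return 7.0                        # essentially neutral
    if excess > 0:
        return -math.log10(excess)        # excess hydrogen ion
    return 14.0 + math.log10(-excess)     # excess hydroxide ion

def ph_gain(acid_flow, d=0.01):
    """Process gain: change in pH per unit change in acid flow."""
    return (mixed_ph(acid_flow + d) - mixed_ph(acid_flow - d)) / (2 * d)

print(abs(ph_gain(100.0)))   # near pH 7: very large gain
print(abs(ph_gain(150.0)))   # far from pH 7: very small gain
```

Running the sketch shows the gain near neutrality is several orders of magnitude larger than the gain away from it, which is why a fixed-gain controller cannot serve both regions.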

Figure 10-2. Nonlinear Behavior of pH with Flow of Acid Control Stream

Process nonlinearities also affect the process time constants and dead time,
but usually to a lesser extent than they affect the gain. In particular, if the
time constants and dead time were to remain proportional to each other as
they vary—as, for example, they would remain in a blender when the
throughput varies—the controllability of the feedback loop would remain
constant since it is defined by the ratio of the effective dead time to the
effective time constant of the loop. This means that, although the
controller integral and derivative times no longer match the speed of
response of the process when the time parameters vary, for most loops the
loop stability and damping of the response are not affected as much by the
time parameters as they are by the variation of the gain.

Process Time Dependence

Besides nonlinear behavior, many process characteristics vary with time
because of catalyst deactivation in reactors, fouling of heat exchanger
tubes, coking of furnace tubes, and the like. In continuous processes, these
variations occur over long periods of time such as days or weeks, which
are outside the time scale of the process response time. Nevertheless, these
variations may require you to retune the controller during the cleanup or
catalyst replacement cycles.

Other processes are sensitive to ambient conditions such as temperature or
humidity, as, for example, large process air compressors and air-cooled
condensers and exchangers. In such cases, the cycles have periods of one
day, which ride on the annual cycle of the seasons. Yet another set of
processes are affected by changes in product grade.

If the process characteristics change significantly with time, adaptive and
self-tuning techniques are in order. The next three sections (10-2, 10-3, and
10-4) present three approaches for carrying out adaptive control strategies:
preset compensation, pattern recognition, and discrete parameter
estimation.

10-2. Adaptive Control by Preset Compensation

A common technique for maintaining control loop performance in the face
of changing process dynamics is to compensate for the variation of process
parameters in a preset manner, based on knowledge of the process. The
name gain scheduling has been applied to these techniques, reflecting the
fact that the gain is the most important parameter to compensate for.
Indeed, the most common preset compensation practices involve
compensating for variations in process gain. These practices are the focus
of this section.

In the previous section, we learned that the inverse proportionality
between process gain and throughput is the most common nonlinearity
encountered in process control. For that reason, preset compensation
practice deals mostly with the variation of throughput. The three
techniques to be discussed here are the use of control valve characteristics,
the use of cascade to a ratio controller, and the use of gap or dead-band
controllers.

Valve Characteristics

The control valve position is an indication of process throughput in most
loops. For this reason, designing the characteristic curve of the valve so
that the increment in flow per unit change in valve position is
proportional to the flow through the valve would compensate exactly for
the decrease in process gain with throughput. Such a valve characteristic is
the popular “equal-percentage” characteristic, so called because the
percentage increments in flow per unit change in valve position are equal,
that is, the increments in flow are proportional to the current flow.
Figure 10-3 shows a plot of the equal-percentage valve characteristics.

The following restrictions apply when using equal-percentage valve
characteristics to compensate for the decrease in process gain with
throughput:
1. The valve must be designed so that the pressure drop across the
valve remains constant over its range of operation. Otherwise,
the actual installed characteristics would deviate from the equal
percentage and aggravate the process gain variation problem
when the valve is almost fully opened. This phenomenon is
indicated by the line marked “(b)” in Figure 10-3. For the valve to
retain its equal-percentage characteristics it must take up about
60 percent of the total flow-dependent pressure drop at the base
capacity flow. For example, if the rest of the line in series with the
valve takes up 5 psi of friction loss at design flow, the valve must
take up 5(0.6/0.4) = 7.5 psi at that flow.

Figure 10-3. Equal Percentage Valve Characteristic Compensates for the Decrease in Gain with
Throughput when Pressure Drop Across the Valve is Constant (a), but not when Pressure Drop
Varies with Throughput (b).
2. If the temperature or composition controller is cascaded to a flow
controller, then the benefits of the equal-percentage
characteristics in the valve are lost to the temperature or
composition loop. Furthermore, if the flow controller receives a
differential pressure signal that is proportional to the square of
the flow, and it does not extract the square root of this signal, the
gain variation of the master controller would be aggravated by
the square function. Notice that if the flow controller receives a
signal that is proportional to the square of the flow, as the output
of the master controller increases it calls for smaller increments in
flow for the same increments in output. That is, the loop gain will
decrease as the flow (throughput) increases.
3. The equal-percentage characteristic curve does not produce zero
flow at zero valve position. Therefore, the actual valve
characteristic curve must deviate from the equal-percentage
characteristic curve in a region near the closed position. This is
illustrated in Figure 10-3 by the short straight lines near the zero
valve position.

Keeping these three restrictions in mind, equal-percentage valves perform
a natural compensation for the inverse proportionality between process
gain and throughput.
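The equal-percentage curve is commonly written f = R^(x-1), where x is the fractional valve position and R is the valve rangeability. The sketch below, which assumes an illustrative rangeability of 50, shows that the slope of the curve is proportional to the current flow, which is exactly what cancels the inverse gain-throughput relationship:

```python
import math

def equal_percentage_flow(x, rangeability=50.0):
    """Fraction of maximum flow at fractional valve position x (0 to 1),
    assuming constant pressure drop across the valve."""
    return rangeability ** (x - 1.0)

# The valve gain df/dx equals f * ln(R), so gain/flow is constant:
# equal percentage increments in flow per unit change in position.
for x in (0.25, 0.50, 0.75):
    f = equal_percentage_flow(x)
    gain = f * math.log(50.0)            # analytical derivative of R**(x-1)
    print(f"x={x:.2f}  flow={f:.4f}  gain/flow={gain / f:.4f}")
```

The printed gain-to-flow ratio is the same at every position, ln(50), so the percentage change in flow per unit valve travel is constant over the whole range.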

Cascade to a Ratio Controller

Another way to compensate for the inverse proportionality between
process gain and throughput is to ratio the manipulated flow to the
throughput flow and have the temperature or composition controller set
the ratio—in other words, cascade the feedback loop to a ratio controller.
Figure 10-4 shows an example of such a scheme for the control of
composition out of a blender. The multiplication of the feedback controller
output, the ratio, by the throughput flow makes the change in
manipulated flow proportional to the throughput flow. Thus, the feedback
loop gain remains constant when the throughput flow changes.
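The constant-gain argument can be checked numerically. The energy balance below is a simplified steady-state model of a steam heater; the inlet temperature, latent heat, and flow numbers are assumed values for illustration only:

```python
def outlet_temp(steam_flow, process_flow, t_in=25.0, latent=950.0, cp=1.0):
    """Steady-state outlet temperature of a steam heater: the steam's
    latent heat warms the process stream (illustrative units)."""
    return t_in + steam_flow * latent / (process_flow * cp)

def gain_wrt_steam(process_flow, steam_flow, d=1e-4):
    return (outlet_temp(steam_flow + d, process_flow)
            - outlet_temp(steam_flow - d, process_flow)) / (2 * d)

def gain_wrt_ratio(process_flow, ratio, d=1e-6):
    # With the ratio station, steam flow = ratio * process flow
    return (outlet_temp((ratio + d) * process_flow, process_flow)
            - outlet_temp((ratio - d) * process_flow, process_flow)) / (2 * d)

# Gain seen by a controller that sets the steam flow directly doubles
# when the throughput is cut in half...
print(gain_wrt_steam(100.0, 8.0), gain_wrt_steam(50.0, 4.0))
# ...but the gain seen by a controller that sets the ratio is constant:
print(gain_wrt_ratio(100.0, 0.08), gain_wrt_ratio(50.0, 0.08))
```

Because the multiplier converts a ratio change into a flow change proportional to the throughput, the 1/F dependence of the process gain cancels out of the feedback loop.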

Figure 10-5 shows three examples of temperature control using this simple
gain compensation scheme. In the three cases, the fuel flow to a furnace,
the steam flow to a heater, and the hot oil heat rate to an exchanger,
respectively, are ratioed to the throughput flow. In this last example, the
heat rate computation also provides compensation for the temperature
change of the hot oil.
Figure 10-4. Cascade to Ratio Controller Makes the Loop Gain Constant with Throughput

Figure 10-5a. Temperature Control of a Furnace with Cascade to Ratio Controller

Figure 10-5b. Temperature Control of a Steam Heater with Cascade to Ratio Controller

Figure 10-5c. Temperature Control of a Hot Oil Exchanger with Cascade to Ratio Controller

The following example shows how cascading the temperature controller
to a steam-to-process-flow ratio controller keeps the gain of the
temperature control loop constant as the process flow changes.

Example 10-1. Cascade-to-Ratio Control of Steam Heater. In Example
8-1 a nonlinear feedforward controller was designed for a steam heater,
which included a ratio controller or multiplier (see Figs. 8-9 and 8-10). An
alternative design is a linear feedforward controller in which the signals
from the controller (TC) and the process flow feedforward compensator
(FT) are added instead of multiplied (see Sections 8-3 and 8-4). Figure 10-6
shows the response of the outlet temperature to a 50 percent pulse change
in process flow to the exchanger, that is, a 50 percent decrease
followed some time later by a 50 percent increase back to the initial flow.
The series PID feedback controller (TC) is tuned at the initial process flow
with Kc′ = 2 %C.O./%T.O., TI′ = 1.62 min, and TD′ = 0.42 min. The curve
marked “(a)” in Figure 10-6 shows the response when the output of the
feedback controller is added to the feedforward compensation signal, and
curve (b) shows the response when the temperature controller adjusts the
ratio of the steam flow to the process flow.

Figure 10-6 shows that the response of the additive controller is more
oscillatory when the process flow is reduced by half, while the response of
the cascade-to-ratio control scheme is almost the same at half flow as at
full flow. The initial deviation in temperature is higher at half flow because
the gain at that flow is twice that at full flow (see Example 3-5). The
responses of the two schemes are identical when the process flow is
restored to the original flow.

Figure 10-6. Response of Heat Exchanger Temperature to a 50% Pulse Change in Load. (a)
With Additive Feedforward Control, (b) With Temperature Controller Cascaded to the Steam-to-
Process Flow Ratio Controller.

Gap or Dead-band Controller

Special nonlinearities, such as the wide gain variation in the pH control
system cited in Section 10-1, require special compensation strategies. One
of the simplest and most commonly used pH control schemes is the one
proposed in an early edition of the excellent book on process control
systems by Shinskey.1 The scheme uses two control valves in parallel as
well as a gap controller. Figure 10-7 shows a schematic of the control
scheme. In the scheme, the pH controller (AC-pH) is proportional only
and directly manipulates a small control valve to adjust the flow of control
stream (acid or base) to the neutralization tank. The output of the pH
controller is fed to a valve-position controller (ZC) with a set point of 50
percent of range. This valve position controller manipulates the position of
a large valve in parallel with the small valve and with about twenty times
larger capacity. A gap or dead band on the valve position controller keeps
the large valve from moving while the small valve is making small
adjustments in flow. When a large change in flow is required, the position
of the small valve moves outside the dead band, and the valve position
controller takes action to bring it back inside the dead band.

The valve position controller is proportional-integral and should operate
so that the proportional part of the output does not jump when the valve
position gets outside the band. The proper way to program it is to let it
calculate its output all of the time but change it only if its input is outside
the dead band.
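The programming rule just described can be sketched as follows. This is a minimal illustration of the gap logic, not the vendor algorithm, and the gain, integral time, and band width are arbitrary example values:

```python
def make_gap_controller(kc, ti, dt, setpoint=50.0, band=10.0):
    """PI valve-position controller with a dead band (gap). The PI
    output is computed at every sample, but the large valve is moved
    only while the small valve's position is outside setpoint +/- band."""
    state = {"integral": 0.0, "output": 50.0}

    def update(small_valve_position):
        error = setpoint - small_valve_position
        # Integrate at every sample so the proportional contribution
        # does not jump when the input leaves the dead band.
        state["integral"] += (kc / ti) * error * dt
        candidate = 50.0 + kc * error + state["integral"]
        if abs(error) > band:                       # outside the gap
            state["output"] = min(max(candidate, 0.0), 100.0)
        return state["output"]                      # else hold last value

    return update

zc = make_gap_controller(kc=0.5, ti=5.0, dt=1.0)
print(zc(52.0))   # inside the band: large valve position is held
print(zc(75.0))   # outside the band: large valve moves
```

The key detail is that the PI calculation never stops; only the transfer of its result to the large valve is gated by the dead band.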

Figure 10-7. pH Control Scheme Uses Two Control Valves and a Gap Controller

The three techniques for compensating for process nonlinearity discussed
in this section are based on knowledge of the process and its behavior.
They are only examples of what can be accomplished if you design the
structure of the control system properly. Recall that Section 8-4 presented a
general procedure for nonlinear feedforward controller design. One of the
steps of that procedure was selecting how the feedback trim was to enter
into the feedforward compensation scheme. The cascade-to-ratio scheme
just discussed is a special case of that general procedure, probably one of
the simplest. By adjusting the ratio, the feedback controller compensates
for the effect of throughput rate on the process gain. Similar compensation
schemes can be created for any nonlinear feedforward control system; the
key step is selecting the function of the feedback trim in the feedforward
controller.

The following two sections (10-3 and 10-4) look at self-tuning and
adaptive control schemes that can be applied to any process. They
essentially view the process as a black box.

10-3. Adaptive Control by Pattern Recognition

Incorporating “expert systems” to auto-tune and adapt the controller
parameters to changing process characteristics is a natural development of
the widespread use of microprocessors to carry out the PID feedback
control functions. This section will briefly describe the pattern recognition
controller marketed by the Foxboro Company as the EXACT controller
because this was the first controller in this class. The controller is based on
an idea of Bristol, whose article on the subject should be consulted for
additional details.2 The overview presented here is based on a paper by
Kraus and Myron.3

Auto-tuning by pattern recognition basically involves programming an
expert system to automatically carry out the steps followed by an
experienced control engineer or technician when tuning the controller. The
principles behind this expert system are the same as those used by Ziegler
and Nichols in developing the quarter-decay ratio response formulas
presented in Unit 2 of this book. The technique consists of recognizing a
pattern in the closed-loop response of the loop, measuring its overshoot
(or damping) and period of oscillation, and adjusting the controller
parameters to match a specified response.

Recognizing the Response Pattern

The pattern recognition phase in the auto-tuning sequence starts when the
error (difference between set point and controlled variable) exceeds a
prespecified noise threshold. Such an error may be caused by a
disturbance or by a set point change. The program then searches for three
peaks in the response, measures their amplitude, records the time of
occurrence, and calculates the overshoot, the damping (which is not
independent of the overshoot), and the period of oscillation. Figure 10-8
illustrates a typical response. The definitions of overshoot and damping
are as follows:

Overshoot = -E2/E1 (10-1)

Damping = (E3 - E2)/(E1 - E2) (10-2)

where E1, E2, and E3 are the measured amplitudes of the error at each of the
three peaks. Notice that the error at the second peak is assumed to have a
sign opposite that of the other two, and therefore the differences indicated
in the definition of the damping are actually sums.
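Eqs. 10-1 and 10-2 are easy to apply once the three peak amplitudes are known. As a quick check, a response with a 50 percent overshoot whose third peak is one quarter of the first gives a damping parameter of 0.5, the value the text associates with quarter-decay response:

```python
def response_pattern(e1, e2, e3):
    """Overshoot (Eq. 10-1) and damping (Eq. 10-2) from the amplitudes
    of the first three error peaks. E2 is assumed to have the opposite
    sign of E1 and E3, so the differences below are effectively sums."""
    overshoot = -e2 / e1
    damping = (e3 - e2) / (e1 - e2)
    return overshoot, damping

# Quarter-decay-like response: E2 = -E1/2, E3 = E1/4
print(response_pattern(1.0, -0.5, 0.25))   # overshoot 0.5, damping 0.5
```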

When the response is not oscillatory, peaks 2 and 3 cannot be detected by
the pattern recognition program. In such cases, they must be estimated
before they can be used in the tuning formulas.

Auto-Tuning Formulas

The damping parameter and period of oscillation, coupled with the
current controller parameters (gain, integral, and derivative times), define
the tuning state of the closed loop much as the ultimate gain and period
do for the Ziegler-Nichols tuning formulas for quarter-decay ratio
response (see Section 2-6). In fact, the period determined by the pattern
recognition program is in the same ballpark as the ultimate period of the
loop, and the damping parameter defined by Eq. 10-2 is closely related to
the quarter-decay response specification: the quarter-decay response
produces a damping parameter of 0.5.

Figure 10-8. Closed Loop Response Showing the Peaks which are Used by the Pattern
Recognition Adaptive Technique

Formulas similar to the Ziegler-Nichols formulas are used to determine
the integral and derivative times of the loop. However, they are subject to
a user-specified “derivative factor” that adjusts the derivative relative to
the value calculated by the tuning formulas. The gain is then adjusted to
vary the damping in the desired direction, either to match a user-specified
damping parameter or a predetermined default. An increase in gain
increases the damping parameter, while a decrease in gain decreases it.
The controller parameters are calculated and reset to their new values only
after the response has settled within the noise threshold band.

Auto-Tuning Parameter Specifications

The auto-tuning algorithm is easy to use because it requires few user
specifications. It is also flexible because it allows additional optional
specifications. The required specifications are as follows:

• Initial controller gain, integral time, and derivative time.

• Noise band: the minimum magnitude of the error that triggers the
pattern recognition program. This parameter depends on the
expected amplitude of the noise in the measured variable.

• Maximum wait time: the maximum time the algorithm will wait for
the second peak in the response after detecting the first one. This
parameter depends on the time scale of the process response.

Optional specifications include the maximum allowed damping and
overshoot parameters, the derivative factor (which can be set to zero if a
proportional-integral controller is desired), and the parameter change limit
factor. This last parameter imposes a limit on the factor by which the
algorithm can change any of the controller parameters relative to their
initial values.

Pretuning

The Foxboro EXACT controller can automatically execute a pretuning
procedure to determine the initial controller parameter values. The
procedure is carried out with the controller in manual and consists of
obtaining a step response of the process and estimating
first-order-plus-dead-time (FOPDT) model parameters similar to those
presented in Unit 3. The pretune algorithm automatically applies the step
test on the controller output (of a magnitude specified by the user), waits
for steady state, estimates the process parameters, calculates the initial
controller parameters, and returns the controller output to its initial value.
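A pretune-style step test can be reduced to a few lines of code. The sketch below uses a two-point fit of the kind presented in Unit 3 (the 28.3 and 63.2 percent response times); the process used to generate the test data, with K = 2, tau = 5, and one unit of dead time, is an assumed example, not the EXACT algorithm itself:

```python
import math

def fopdt_from_step(t, y, m_change, y0=0.0):
    """Estimate FOPDT gain, time constant, and dead time from an
    open-loop step response using the two-point (28.3%/63.2%) method."""
    dy = y[-1] - y0
    gain = dy / m_change

    def time_at(fraction):
        target = y0 + fraction * dy
        for i in range(1, len(t)):               # find the crossing
            if (y[i - 1] - target) * (y[i] - target) <= 0:
                frac = (target - y[i - 1]) / (y[i] - y[i - 1])
                return t[i - 1] + frac * (t[i] - t[i - 1])
        return t[-1]

    t1, t2 = time_at(0.283), time_at(0.632)
    tau = 1.5 * (t2 - t1)
    return gain, tau, max(t2 - tau, 0.0)         # gain, tau, dead time

# Synthetic response of a true FOPDT process: K = 2, tau = 5, t0 = 1
t = [0.1 * i for i in range(400)]
y = [0.0 if ti < 1.0 else 2.0 * (1.0 - math.exp(-(ti - 1.0) / 5.0)) for ti in t]
print(fopdt_from_step(t, y, m_change=1.0))   # approximately (2.0, 5.0, 1.0)
```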

Restrictions

The EXACT controller is a rule-based expert system with over two hundred
rules, most of which involve keeping the pattern recognition algorithm
from being confused by peaks that are not caused by the controller tuning
parameters. Nevertheless, the pattern recognition algorithm must be
applied with much care because situations will still arise where it can be
fooled. For example, oscillatory disturbances with a period of the same
order of magnitude as that of the loop will tend to detune the controller
because the auto-tuning algorithm will think the oscillations are caused by
a controller tuning that is too tight. Other situations, such as loop
interaction, may also throw the auto-tuning off if they are not properly
taken into account.

In summary, the EXACT controller shows the practicality of pattern
recognition for auto-tuning feedback controllers. Several vendors of
control systems offer equivalent schemes for auto-tuning their controllers.
Other vendors provide software products for automatically tuning
controllers off line using test data taken from the process. When selecting
one of these products, new users should contact current users and obtain
the appropriate technical information from vendor representatives.

10-4. Adaptive Control by Discrete Parameter Estimation

As with the pattern recognition adaptive controller, the emergence of
discrete-model parameter estimation for adaptive control and auto-tuning
of controllers naturally follows from the increasing use of microprocessors
for feedback control. Many examples of auto-tuning controllers based on
the parameter estimation concept are available commercially from several
manufacturers.

Basically, the idea of these controllers is to use linear recursive regression
to estimate the parameters of a discrete linear model of the process from
the sampled values of the controller output and controlled variable taken
on line. The discrete process parameters are then used in an adapter to
calculate the controller parameters using formulas similar to those
described in Unit 6. Åström and Wittenmark4 offer an excellent discussion
of this approach, and Goodwin and Payne present all the mathematical
details for those interested in them.5 The technique we will describe in this
section was originally developed by Touchstone and Corripio6 and
applied by Tompkins and Corripio to auto-tune the temperature
controllers on an industrial furnace using a process computer.7

Discrete Process Model

It can be shown that a discrete second-order model of the process can be
used to calculate the parameters of computer- and microprocessor-based
PID control algorithms. As in Section 6-2, if the model is reduced to
first-order, the tuned algorithm reduces to a proportional-integral (PI)
controller; that is, the resulting derivative time is zero. The basic idea
behind the adaptive technique is to estimate the parameters of the discrete
model from process data and then use these parameters to tune the
controller.

The discrete model is given by the following formula:

Cn+1 = -A1Cn - A2Cn-1 + B0Mn-N + B1Mn-N-1 + B2Mn-N-2 (10-3)

where Cn and Mn are, respectively, the values of the controlled and
manipulated variables at the nth sample time; N is the number of
complete samples in the process dead time; and A1, A2, B0, B1, and B2 are
the parameters of the model.
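A short simulation makes Eq. 10-3 concrete. The parameter values below are arbitrary first-order-like numbers (A2 = B1 = B2 = 0) chosen only to show the delayed, monotonic response and the steady-state gain:

```python
def simulate_discrete_model(m, a1, a2, b0, b1, b2, n_dead):
    """Response of the discrete model of Eq. 10-3 to a sequence of
    controller-output samples m (deviation variables, so the initial
    steady state is zero)."""
    def m_at(k):                  # M is zero before the test begins
        return m[k] if k >= 0 else 0.0

    c = [0.0, 0.0]                # C at samples 0 and 1
    for n in range(1, len(m) - 1):
        c.append(-a1 * c[n] - a2 * c[n - 1]
                 + b0 * m_at(n - n_dead)
                 + b1 * m_at(n - n_dead - 1)
                 + b2 * m_at(n - n_dead - 2))
    return c

# Unit step in M; two samples of dead time
c = simulate_discrete_model([1.0] * 30, a1=-0.8, a2=0.0,
                            b0=0.2, b1=0.0, b2=0.0, n_dead=2)
print(c[3], c[-1])   # first movement 0.2; settles near gain 0.2/(1-0.8) = 1.0
```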

The discrete model of Eq. 10-3 has four very desirable properties:
1. This model can fit the response of most processes, both
monotonic and oscillatory, with and without inverse response,
and with any ratio of dead time to sample time.
2. The parameters of the model can be estimated by linear multiple
regression in a computer control installation because the model
equation is linear in the parameters and their coefficients are the
known sampled values of the controlled variable and the
controller output. Only the dead-time parameter N must be
estimated separately.
3. For a first-order process, the parameters A2 and B2 become zero.
If the dead time is an exact number of samples, parameter B2 is
zero for the second-order process and B1 is also zero for the
first-order process.

4. Designing a controller for the model results in a PID algorithm
with dead-time compensation. The derivative time becomes zero
for the first-order process, and a gain adjustment factor can be
applied if dead-time compensation is not used (see Section 6-2).

In summary, the discrete model fits the response of most processes, has
parameters that can be estimated by using a straightforward procedure,
and results in the controller most commonly used in industry.

Parameter Estimation

You can estimate the parameters of the discrete model using
straightforward multiple regression calculations. The calculations can be
simple least-squares regression if the measured process output (controlled
variable C) is free of correlated measurement noise, but otherwise slightly
more sophisticated calculations are required. The calculations can be
carried out off line, after all the sampled values have been collected, or
recursively on line, that is, by updating the parameter estimates after each
sample of the process variable and controller output.

Off-Line Least-squares Regression

The parameters of the discrete process model are estimated off line by
collecting enough samples of the process output variable C and of the
controller output M. These data are then fed to a least-squares program,
which is readily available in the form of a numerical methods package or
in a spreadsheet program (Lotus 1-2-3, Microsoft Excel, Corel Quattro Pro,
etc.). One particular package that is specific to process identification, the
MATLAB System Identification Toolbox, was developed by Ljung as a
toolbox for the popular MATLAB software package.8

For the estimates of the process parameters to be good approximations of
the actual process parameters they must satisfy four important
requirements:
1. During the data collection period the process variable C must be
changing as a result of changes in the controller output M. The
variations caused by M should be of greater magnitude than
those caused by disturbances and measurement noise.

The required variations in controller output can be applied
directly to the controller output or to its set point, as Figure 10-9a
suggests. A simple symmetric pulse (Figure 10-9b) or a
pseudo-random binary sequence (Figure 10-9c) can be used for
the forcing signal. The latter signal “excites” the process over a
wider frequency range than the former. In either case, the data
collection period should extend beyond the excitation period to
allow the parameters to settle to average values.

2. The values of C and M used in the regression must be the
differences of their sampled values from their corresponding
initial steady state values. To ensure that they are, subtract the
initial values of C and M from their respective sampled values.
Alternatively, the values of C and M can be “differenced,” that is,
they can be entered as differences of each sample minus the
preceding sample.
Figure 10-9. Block Diagram for Parameter Estimation and Input Signals: Symmetric Pulse,
Pseudo-Random Binary Signal (PRBS)

3. The final values of C and M must match their initial values. This
does not happen when a nonzero mean disturbance upsets the
system during the data collection period. Differencing gets
around this requirement. Another way to handle it is to add a
constant term to Eq. 10-3, which is then estimated and becomes
an estimate of the mean value of the disturbance.

4. Disturbances and measurement noise in the process variable C
must not be autocorrelated because this would cause the
estimates of the parameters to be biased. This problem can be
avoided by estimating a disturbance model. The method, known
as maximum likelihood regression, is described in detail in the
book by Goodwin and Payne listed in this unit’s references
section.5 Another method, instrumental variable regression, is
more applicable to the recursive or on-line parameter estimation
method.
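Under these four requirements, off-line estimation reduces to ordinary least squares on the deviation data. The sketch below is illustrative only (the noise level and PRBS settings are assumptions, and the plant is borrowed from Example 10-2 later in this unit): it simulates the discrete model, excites it with a PRBS, and recovers the parameters with NumPy. The constant regression column implements the disturbance-mean term of requirement 3.

```python
# Off-line least-squares identification of the discrete model
#   C[n+1] = -A1*C[n] - A2*C[n-1] + B0*M[n] + B1*M[n-1] + D
# (dead time N = 0 and B2 = 0 for brevity).  The noise level and
# PRBS settings below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# "True" plant parameters (those of Example 10-2), used only to
# generate the data; any recorded C and M deviations could be used.
A1t, A2t, B0t, B1t = -1.48905, 0.54881, 0.065717, 0.0538046

n = 140
# PRBS forcing signal: +/-1 %C.O., allowed to switch every 4 samples.
m = np.repeat(rng.choice([-1.0, 1.0], size=n // 4 + 1), 4)[:n]

# Simulate the plant in deviation variables with equation noise.
c = np.zeros(n)
for k in range(1, n - 1):
    c[k + 1] = (-A1t * c[k] - A2t * c[k - 1] + B0t * m[k] + B1t * m[k - 1]
                + rng.uniform(-0.05, 0.05))

# Regression matrix: rows [-C[n], -C[n-1], M[n], M[n-1], 1]; the constant
# column absorbs any nonzero-mean disturbance (requirement 3).
X = np.column_stack([-c[1:-1], -c[:-2], m[1:-1], m[:-2], np.ones(n - 2)])
y = c[2:]
A1, A2, B0, B1, D = np.linalg.lstsq(X, y, rcond=None)[0]
```

With a well-excited input and zero-mean noise, the estimates land within a few standard deviations of the true values, as in Example 10-2.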

All four of these restrictions also apply to the on-line or recursive
parameter estimation method, to be outlined next.

Recursive Parameter Estimation

The recursive method for estimating the process model parameters is
applied on line, with the calculations repeated each time a sample of the
process variable is taken. The least-squares estimate is improved
incrementally after each sample. A parameter in the calculations allows
the algorithm “memory” to be adjusted. When the parameter is set to 1,
the weight of all the past samples is retained in the estimation so new
samples have little effect on the estimates. As the parameter is decreased
in value from 1, the effect of past samples is reduced exponentially with
the age of the samples. This keeps the estimator “alive,” with new samples
showing the effect of process changes in the parameter estimates. The
smaller the value of the parameter, the shorter the “memory” of the
estimator in terms of number of past samples.

The estimator is initialized with estimates of the model parameters and
with the initial values of an estimator matrix, called the
variance-covariance matrix P. The nondiagonal elements of the matrix are set to
zero. As the initial values of the diagonal elements of the matrix (one
element for each parameter) increase, the confidence in the initial value of
the corresponding parameter diminishes. This increases the effect that the
first few samples have on the parameter estimate. If any of the diagonal
elements is set to zero, the corresponding parameter remains constant at
its initial estimate.
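A minimal version of this recursion can be sketched as follows (a generic textbook recursive least-squares update, not any particular vendor's algorithm); lam is the memory parameter and P the variance-covariance matrix:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step.
    x: regressor vector, y: new sample, lam: memory (forgetting)
    factor; lam = 1 retains all past samples, lam < 1 discounts
    them exponentially with age."""
    x = np.asarray(x, dtype=float)
    k = P @ x / (lam + x @ P @ x)        # estimator gain
    theta = theta + k * (y - x @ theta)  # correct with prediction error
    P = (P - np.outer(k, x @ P)) / lam   # update variance-covariance
    return theta, P

# Demonstration on a hypothetical two-parameter model y = x @ [2, -0.5].
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = 1000.0 * np.eye(2)   # large diagonal: little faith in initial guesses
for _ in range(300):
    x = rng.normal(size=2)
    y = x @ np.array([2.0, -0.5]) + rng.normal(scale=0.01)
    theta, P = rls_update(theta, P, x, y, lam=0.98)
```

Because the gain is proportional to P, a large diagonal entry (low confidence) makes the first samples move that parameter strongly, while a zero diagonal entry freezes the parameter at its initial estimate, as described above.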

Instrumental Variable Regression

To guard against biased estimates caused by correlated noise in the
process variable, you should use instrumental variable (IV) regression
instead of simple least squares. The idea behind the instrumental variable
approach is that the output of the model should be well correlated with
the true plant output but uncorrelated with the noise in the measurement
of the process variable. This removal of the correlation with noise is
accomplished by replacing the plant measurement with the values
predicted by the model when you calculate the estimator gain.
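The bias-removal idea is easiest to see in batch form with a hypothetical scalar example (all values below are assumed). Noise on the regressor biases the ordinary least-squares estimate toward zero; an instrument correlated with the true signal but not with the noise (idealized here as the noise-free signal itself, the role played in practice by the model-predicted output) restores a consistent estimate. In the recursive form, the instrument replaces the measurement only in the gain calculation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
x_true = rng.normal(size=n)                      # true (unmeasurable) signal
x_meas = x_true + rng.normal(scale=0.5, size=n)  # noisy measurement
y = 1.5 * x_true + rng.normal(scale=0.1, size=n)

# Ordinary least squares on the noisy regressor: biased toward zero.
a_ls = (x_meas @ y) / (x_meas @ x_meas)

# Instrumental variable estimate: the instrument stands in for the
# regressor on one side of the normal equation.
z = x_true        # idealized instrument (model prediction in practice)
a_iv = (z @ y) / (z @ x_meas)
```

Here a_ls settles near 1.2 rather than the true 1.5, while a_iv recovers the true gain.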

Variance of the Estimates

The diagonal elements of the variance-covariance matrix P, multiplied by
the variance of the noise, yield the variances of the corresponding
parameter estimates. It is difficult to calculate the variance of the parameter
estimates in the recursive mode because it is not possible to calculate the
variance of the noise. In the off-line method, you can estimate the variance

of the noise as the variance of the residuals. At any rate, the “trace” of
matrix P, that is, the sum of its diagonal elements, can serve as a measure
of the goodness of the fit.

Adapter

Table 6-2 presented formulas for tuning PID controllers from continuous
model parameters (process gain, time constants, and dead time) for the
PID controllers of Table 6-1. For auto-tuning and adapter
controllers, similar formulas can be developed using the same methods
from the discrete model parameters A1, A2, B0, B1, and B2, which are
calculated by the estimator. Table 10-1 presents these formulas, which can
be used to calculate the controller parameters from the estimated discrete
model parameters. Parameter q is the control performance parameter,
which can be adjusted to obtain tighter (q → 0) or looser (q → 1) control.

Auto-Tuning versus Adaptive Control

The discrete-model parameter estimation and adapter formulas can be
used in both the auto-tuning and adaptive control modes. In the auto-
tuning mode the program is started at the desired time with the memory
parameter set to 1. To avoid a bump or sudden change in the tuning
parameters, you must calculate the initial values of the model parameters
from the current controller parameters by using the inverse of the
formulas of Table 10-1. You then set the initial values of the diagonal
elements of matrix P and apply the appropriate signal to the controller

Table 10-1. Tuning Formulas for Adapter


Discrete Second-order Model:

Cn+1 = -A1Cn - A2Cn-1 + B0Mn-N + B1Mn-N-1 + B2Mn-N-2

Parallel PID Tuning Formulas:

Kc = -(1 - q)(2A2 + A1) / [(B0 + B1 + B2)(1 + N(1 - q))]

TI = -T(2A2 + A1) / (1 + A1 + A2)

TD = -T A2 / (2A2 + A1)

For use with the parallel PID controller of Table 6-1.
When the dead-time compensation PID controller is used (see Section 6-4), the gain
changes to: Kc = -(1 - q)(2A2 + A1)/(B0 + B1 + B2).
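The Table 10-1 formulas transcribe directly into code; a sketch (the function name and argument order are mine), checked against the true model parameters of Example 10-2 with T = 0.2 min:

```python
def adapter_tuning(A1, A2, B0, B1, B2, N, T, q=0.0):
    """Parallel PID tuning from discrete model parameters (Table 10-1).
    T is the sample time; q the performance parameter (0 = tightest)."""
    s = 2.0 * A2 + A1
    Kc = -(1.0 - q) * s / ((B0 + B1 + B2) * (1.0 + N * (1.0 - q)))
    TI = -T * s / (1.0 + A1 + A2)
    TD = -T * A2 / s
    return Kc, TI, TD

# True parameters of Example 10-2 (T = 0.2 min, N = 0, q = 0):
Kc, TI, TD = adapter_tuning(-1.48905, 0.54881, 0.065717, 0.0538046, 0.0,
                            N=0, T=0.2)
# Kc = 3.27 %C.O./%T.O., TI = 1.31 min, TD = 0.28 min (rounded)
```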

output or to its set point. The estimator/adapter will then adjust the
controller parameters until the parameter estimation gain dies out, at
which time the auto-tuning procedure is stopped. You could then repeat
the auto-tuning procedure until the controller parameters do not change
appreciably from the beginning to the end of a run.

In the adaptive mode, the auto-tuning program is allowed to run all the
time, which takes advantage of process disturbances and normal set point
changes. You need to perform the initialization only once, and to keep the
estimator alive the memory parameter would be set to a value less than
unity.

Tompkins and Corripio reported a successful application of the
instrumental variable auto-tuning method to a set of steam cracking
furnaces.7 The following example illustrates the use of the procedure and
provides you with a benchmark against which to test auto-tuning
programs.

Example 10-2. Process Identification by Least-squares Regression.


To test the efficiency of the least-squares parameter estimation technique,
it is applied to a known linear second-order discrete model with the
following parameters:

Gain: 2%T.O./%C.O. Time constants: 1.0 and 0.5 min

Sample time: 0.2 min

The dead time and lead term are zero. The discrete second-order model for
the parameters just given is as follows:

Cn+1 = 1.48905 Cn - 0.54881 Cn-1 + 0.065717 Mn + 0.0538046 Mn-1 + Un+1

where U is a random signal varying between -0.05 percent and +0.05
percent, which is added to simulate measurement noise. The mean value
of this noise signal is approximately zero.

A pseudo-random binary signal (PRBS) with amplitude of -1%C.O. to
+1%C.O. is applied to the input Mn. The signal can vary every 4 samples
(0.8 min) and is run for 120 samples. Data on Cn and Mn are collected for
20 more samples (a total of 141 samples, including the initial value).
Figure 10-10 shows a plot of the input and output data.

The analysis of the response data is performed by an off-line least-squares
identification program as well as by a recursive estimator. For the latter,
the initial variance-covariance matrix (P) is a diagonal matrix with all the
diagonal terms set to 1,000, and the memory parameter is set to 1.0 (i.e., no

Figure 10-10. Input (M) and Output (C) Data Used for Parameter Estimation in Example 10-2

forgetting of past samples). The parameters of the following second-order
discrete model are estimated:

Cn+1 = -A1Cn - A2Cn-1 + B0Mn-N + B1Mn-N-1 + B2Mn-N-2 + D

With N=0, the results are summarized in the following table:

Parameter    True Value    Recursive Estimate    Off-line Estimate    Standard Deviation
A1           -1.48905      -1.50320              -1.50672             0.01929
A2            0.54881       0.56066               0.56401             0.01844
B0            0.06571       0.06263               0.06270             0.00201
B1            0.05380       0.05467               0.05446             0.00275
B2            0            -0.00124              -0.00177             0.00352
D             0             0.00049               0.00055             0.00152

Parameter D is added to the model so as to account for the mean value of
the noise or for a sustained disturbance. All of the parameter estimates
were within two standard deviations (95% confidence limits) of the true
values, and the standard deviations of the estimates were within 10
percent of their estimated values. More importantly, the tuning
parameters calculated from the estimated model parameters are
practically the same as those calculated from the true parameters, as
shown in the following table:
Parallel PID Tuning Parameters

Parameter               From True Parameters    From Recursive Estimates    From Off-line Estimates
Gain, %C.O./%T.O.       3.27                    3.29                        3.28
Integral time, min      1.31                    1.33                        1.32
Derivative time, min    0.28                    0.29                        0.30

where the proportional gains are calculated with q = 0. There is practically no
difference between the tuning parameters derived from the estimated
parameters and the real parameters, so an auto-tuner based on the least-
squares estimates would produce excellent results.

Figure 10-11 shows a plot of the response of the second-order process
(with the real parameters) to a set point change. The controller tuning
parameters were those derived from the recursive least-squares estimates
of the model parameters. As would be expected, the response is excellent.

Figure 10-11. Response of Auto-Tuned Controller of Example 10-2 to a Change in Set Point

Example 10-2 demonstrates how successful least-squares regression can be
for auto-tuning controllers for a simple process. If the process had dead
time, the results would be just as good if the true value of the dead-time
parameter, N, were used in the estimation. The presence of a lead term in
the process transfer function (inverse or overshoot response) does not
present any obstacle to the performance of the estimator or to its accuracy.
However, if the noise term (U) were autocorrelated, you would want to
use the instrumental variable estimation procedure.

10-5. Summary

This unit focused on techniques for adaptive and auto-tuning control.
Although most process controllers do not require adaptive control, it is
important to recognize those situations where process nonlinearities may
adversely affect the performance of the control system. In many cases,
these nonlinearities can be compensated for by properly selecting the
control valve characteristics or by properly designing the feedforward
control system. Pattern recognition and discrete-model regression are
excellent techniques for adaptive and auto-tuning control.

EXERCISES

10-1. What characteristic of a process will make it worthwhile to apply adaptive
control? Do most control loops require adaptive control?

10-2. Which of the process parameters is most likely to vary and thus affect the
performance of the control loop? Give an example.

10-3. How can the control valve characteristic be selected to compensate for
process gain variations? Cite the requirements that must be met in order
for the valve characteristic to properly compensate for gain variations.

10-4. How does the cascade of a feedback controller to a ratio controller
compensate for process gain variation?

10-5. Why is a gap controller useful for controlling pH?

10-6. Briefly describe the adaptive and auto-tuning technique based on pattern
recognition.

10-7. Why is a second-order discrete model useful for identifying the dynamic
response of most processes? Why is it easy to estimate its parameters?

10-8. Cite the requirements for using the least-squares estimation of the
parameters of the discrete process model.

10-9. Why is it desirable to estimate the process parameters recursively on line?
Describe how such a technique can be used for both adaptive and auto-
tuning control.

10-10. What is the meaning of the diagonal elements of the variance-covariance
matrix P? How can they be initialized to keep a parameter from varying
during the estimation calculations?

REFERENCES

1. F. G. Shinskey, Process Control Systems, 3d ed. (New York:
McGraw-Hill, 1988).
2. E. H. Bristol, “Pattern Recognition: An Alternative to Parameter
Estimation in Adaptive Control,” Automatica, vol. 13 (Mar. 1977),
pp. 197-202.
3. T. W. Kraus and T. J. Myron, “Self-Tuning PID Controller Uses
Pattern Recognition Approach,” Control Engineering (June 1984),
pp. 106-11.
4. K. J. Åström and B. Wittenmark, Computer Controlled Systems
(Englewood Cliffs, NJ: Prentice-Hall, 1984), Chapters 13 and 14.

5. G. C. Goodwin and R. Payne, Dynamic System Identification:
Experiment Design and Data Analysis (New York: Academic Press,
1977).
6. A. T. Touchstone and A. B. Corripio, “Adaptive Control through
Instrumental Variable Estimation of Discrete Model Parameters,”
Proceedings of ISA/77 (Research Triangle Park, NC: ISA, 1977), pp.
57-64.
7. P. M. Tompkins and A. B. Corripio, “Industrial Application of a
Self-tuning Feedback Control Algorithm,” ISA Transactions, vol.
20, no. 2 (1981), pp. 3-10.
8. L. Ljung, MATLAB System Identification Toolbox (Natick, MA: The
MathWorks, Inc., 1991).
Acknowledgements
There are several people responsible for the production of this book. First
of all, Paul W. Murrill, who was not only the first to teach me automatic
process control and the one who got me into teaching and research but,
also as the original Consulting Editor of the ILM series, inspired me to
write the first edition of this book. Secondly, Carlos A. Smith of the
University of South Florida, who got me into teaching short courses and
writing books. Also the many students who, through the years, attended
my ISA short courses and the many students at Louisiana State University,
graduate and undergraduate, who helped me learn along with them about
process dynamics and control. In particular, Jacob Martin, Jr., A. Terrel
Touchstone, Richard Balhoff, Dan Logue, Shaoyu Lin, Carl Thomas, Steve
Hunter, Gene Daniel, Samuel Peebles, Umesh Chitnis, and Olufemi
Adebiyi. Many of the practical tips I have included are drawn from my
experience at Exxon Chemical’s Baton Rouge Chemical Plant, working
with my friends Doug White, Raju Hajare, and Jack Nylin.

Finally, I would like to thank the people at ISA’s Publications Department,
Joice Blackson in particular, for inspiring me to write this second edition.

This book is dedicated to my parents, who inspired me with their example
of dedication, perseverance, and hard work.
APPENDIX A

Suggested Reading and Study Materials

Corripio, A. B., Design and Application of Process Control Systems (Research
Triangle Park, NC: ISA, 1998).

Hang, C. C., Lee, T. H., and Ho, W. K., Adaptive Control (Research Triangle
Park, NC: ISA, 1993).

McMillan, G. K., and Toarmina, C. M., Advanced Temperature Control
(Research Triangle Park, NC: ISA, 1995).

McMillan, G. K., pH Measurement and Control, 2nd ed. (Research Triangle
Park, NC: ISA, 1994).

Murrill, P. W., Fundamentals of Process Control Theory, 3rd ed. (Research
Triangle Park, NC: ISA, 2000).

Textbooks (selected titles)

Åström, K. J., and Hagglund, T., PID Controllers: Theory, Design, and
Tuning, 2nd ed. (Research Triangle Park, NC: ISA, 1995).

Seborg, D. E., Edgar, T. F., and Mellichamp, D. A., Process Dynamics and
Control (New York, NY: Wiley, 1989).

Smith, C. A., and Corripio, A. B., Principles and Practice of Automatic Process
Control, 2nd ed. (New York, NY: Wiley, 1997).

Technical Magazines and Journals (selected titles)

AIChE Journal, published by the American Institute of Chemical
Engineers, New York.
Automatica, published by Pergamon Press, New York.
Control Engineering, published by Dun-Donnelly Pub. Corp., New York.
Industrial and Engineering Chemistry Research, published by the American
Chemical Society, Washington, DC.
ISA Transactions, published by the ISA, Research Triangle Park, NC.
InTech, published by the ISA, Research Triangle Park, NC.
Instruments and Control Systems, published by Chilton, Philadelphia.

Software (selected titles)

MATLAB (Natick, MA: The MathWorks, Inc., 1998).

PC-ControLAB2 for Windows, Wade Associates, Inc. (Research Triangle
Park, NC: ISA, 1998).

VisSim (Westford, MA: Visual Solutions, Inc., 1995).

APPENDIX B

Solutions to All Exercises

UNIT 2
Exercise 2-1.
Controlled variable—the speed of the engine.

Manipulated variable—the flow of steam to the engine.

Disturbances—the load (torque) on the main shaft, varying as the various
shop machines are started by engaging the clutches.

Sensor—the flywheel governor is the speed sensor.

Block diagram:

Exercise 2-2.
Controlled variable—the temperature in the oven.

Manipulated variable—electric power to the heating element or gas flow
to the burner (operated on/off).

Disturbances—Losses to surroundings, opening the oven door, heat
consumed by the cooking process.

Sensor—usually a gas-filled bulb connected to the operating switch
through a capillary.


What is varied when the temperature dial is adjusted is the set point.

Block diagram: the gas-bulb sensor measures the oven temperature for the
thermostat, whose relay output drives the heating element; heat losses
enter the oven as a disturbance.

Exercise 2-3.
(a) Change in controller output: 5% x 100/20 = 25%

(b) Change in controller output: 5% x 100/50 = 10%

(c) Change in controller output: 5% x 100/250 = 2%

Exercise 2-4.
Offset in outlet temperature: 8%C.O./(100/20) = 1.6%T.O.

In order to eliminate the offset the steam valve must open.

Offset for 10% PB: 8%C.O./(100/10) = 0.8%T.O.

Exercise 2-5.
For a 5%T.O. sustained error, the output of the PI controller will suddenly
change by:

5%T.O. x 0.6%C.O./%T.O. = 3%C.O.

Then it will increase continuously with time at the rate of:

(5%T.O.)(0.6%C.O./%T.O.)/2 min = 1.5%C.O./min



The controller output thus steps by 3%C.O. when the error appears and
then ramps at 1.5%C.O./min for as long as the error persists.
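The controller-output deviation can also be written directly from the PI law; a brief sketch (the function name is mine):

```python
def pi_output(t, Kc=0.6, TI=2.0, e=5.0):
    """PI output deviation (%C.O.) for a constant error e (%T.O.):
    an immediate proportional step Kc*e plus the integral ramp."""
    return Kc * e + (Kc / TI) * e * t

# 3%C.O. step at t = 0, then a 1.5%C.O./min ramp: 6%C.O. at t = 2 min.
```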

Exercise 2-6.
The output of the PID controller will suddenly change by:

(5%T.O./min)(1.0%C.O./%T.O.)(2.0 min) = 10.0%C.O.

Then it will ramp for five minutes at the rate of:

(5%T.O./min)(1.0%C.O./%T.O.) = 5%C.O./min

After five minutes, the output will suddenly drop by 10.0%C.O., as the
error ramp stops. The output will then remain constant at:

(5%T.O./min)(5 min)(1.0%C.O./%T.O.) = 25%C.O.


Exercise 2-7.
QDR proportional gain:
0.45(1.2%C.O./%T.O.) = 0.54%C.O./%T.O. or 185% PB
QDR integral rate:
1/(4.5 min/1.2) = 0.266 repeats/min
The tuning formulas are from Table 2-1 for PI controllers.

Exercise 2-8.
Series PID controller:
QDR proportional gain:
0.6(1.2%C.O./%T.O.) = 0.72%C.O./%T.O. or 139% PB
QDR integral rate: 1/(4.5 min/2) = 0.44 repeats/min
QDR derivative time: 4.5 min/8 = 0.56 min
Parallel PID controller:
QDR proportional gain:
0.75(1.2%C.O./%T.O.) = 0.90%C.O./%T.O. or 110% PB
QDR integral rate: 1/(4.5 min/1.6) = 0.36 repeats/min
QDR derivative time: 4.5 min/10 = 0.45 min

The tuning formulas are from Table 2-1 for PID controllers.
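The factors quoted in Exercises 2-7 and 2-8 can be collected into one helper; a sketch (function and argument names mine):

```python
def qdr_tuning(Ku, Tu, kind="series"):
    """Quarter-decay-ratio settings from the ultimate gain Ku and
    ultimate period Tu (Table 2-1 factors).
    Returns (gain, integral time, derivative time)."""
    if kind == "PI":
        return 0.45 * Ku, Tu / 1.2, 0.0
    if kind == "series":
        return 0.6 * Ku, Tu / 2.0, Tu / 8.0
    if kind == "parallel":
        return 0.75 * Ku, Tu / 1.6, Tu / 10.0
    raise ValueError("kind must be 'PI', 'series', or 'parallel'")

Kc, TI, TD = qdr_tuning(1.2, 4.5, "series")
# Kc = 0.72 %C.O./%T.O.; integral rate 1/TI = 0.44 repeats/min; TD = 0.56 min
```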

UNIT 3
Exercise 3-1.
a. Put the controller on manual.

b. Change the controller output by a small amount; record the size
of the step change and the time at which it is performed.

c. Obtain a recording of the controlled variable versus time.

d. Determine the gain, time constant, and dead time from the
response recorded in step c.

Exercise 3-2.
Gain: The sensitivity of the process output to its input, measured by the
steady-state change in output divided by the change in input.

Time Constant: The response time of the process; it determines how long it
takes to reach steady state after a disturbance.

Dead Time: The time it takes for the output to start changing after a
disturbance.

Exercise 3-3.
Gain: K = (2°F)/(100 lb/h) = 0.02°F/(lb/h)

In percent units:

K = [2°F x (100 – 0)%T.O./(250 – 200)°F] / [100 lb/h x (100 – 0)%C.O./(5000 – 0) lb/h]
  = 2.0 %T.O./%C.O.

Notice that, as the controller output sets the set point of the steam flow
controller, the percent of controller output corresponds to the percent of
steam flow transmitter output.
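The unit conversion generalizes; a sketch (the function name is mine) that multiplies the engineering-units gain by the transmitter sensitivity and divides by the sensitivity of the variable set by the controller output:

```python
def gain_to_percent(K_eng, out_span, in_span):
    """Convert a gain in engineering units to %T.O./%C.O.
    out_span: transmitter span (output units); in_span: span of the
    variable set by the controller output (input units)."""
    return K_eng * (100.0 / out_span) / (100.0 / in_span)

# 0.02 F/(lb/h) with a 50 F transmitter span and 5000 lb/h flow span:
K = gain_to_percent(0.02, out_span=50.0, in_span=5000.0)
# K = 2.0 %T.O./%C.O., as computed above
```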

Exercise 3-4.

Gain: K = (84.0 - 90.0)°C/(2 kg/s) = -3.0 °C/(kg/s)
Slope method (from figure):
Time constant: 1.03 - 0.11 = 0.92 min
Dead time: 0.11 min

Slope and point method:

63.2% point: T = 90.0 + 0.632(84.0 - 90.0) = 86.2°C


t1 = 0.73 min (from figure)

Time constant: 0.73 - 0.11 = 0.62 min

Dead time: 0.11 min (same as before)

Exercise 3-5.
Two-point method:

63.2% point is the same as before: t1 = 0.73 min

28.3% point: T = 90.0 + 0.283(84.0 - 90.0) = 88.3°C


t2 = 0.36 min (from figure)
Time constant: 1.5(0.73 - 0.36) = 0.56 min
Dead time: 0.73 - 0.56 = 0.17 min
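The two-point calculation can be packaged for any step-response record; a sketch (the function name is mine):

```python
def two_point_fopdt(t1, t2):
    """Fit a first-order-plus-dead-time model from a step response.
    t1: time to reach 63.2% of the total change; t2: time to 28.3%.
    Returns (time constant, dead time)."""
    tau = 1.5 * (t1 - t2)
    t0 = t1 - tau
    return tau, t0

tau, t0 = two_point_fopdt(0.73, 0.36)
# tau = 0.555 min and t0 = 0.175 min, i.e., 0.56 and 0.17 min as above
```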

Exercise 3-6.
Maximum time constant: τ = RC = (10 x 10^6)(100 x 10^-6) = 1,000 s

Exercise 3-7.
Time constant: τ = A/Kv = (50 ft^2)/[(50 gpm/ft)/(7.48 gal/ft^3)] = 7.5 min

Exercise 3-8.
Product flow: Time constant:
F = 50 gpm V/F = 2000/50 = 40.0 min
F = 500 gpm V/F = 2000/500 = 4.0 min
F = 5000 gpm V/F = 2000/5000 = 0.4 min

Exercise 3-9.
Steady-state product concentration:

[(100)(20) + (400)(2)]/(100 + 400) = 5.60 lb/gal

Product concentration for 10 gpm increase in concentrated solution:

[(110)(20) + (400)(2)]/(110 + 400) = 5.88 lb/gal

Change in product concentration:

5.88 - 5.60 = 0.28 lb/gal

Process gain: (5.88 - 5.60 lb/gal)/(110 - 100 gpm)

= 0.028 (lb/gal)/gpm

Exercise 3-10.
Product concentration:

Initial: [(10)(20) + (40)(2)]/(10 + 40) = 5.60 lb/gal

Final: [(11)(20) + (40)(2)]/(11 + 40) = 5.88 lb/gal

Gain: (5.88 - 5.60)/(11 - 10) = 0.28 (lb/gal)/gpm

Thus, the gain at one tenth throughput is ten times the gain at full
throughput.
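The inverse scaling of gain with throughput is easy to check numerically; a sketch using the flows and concentrations of Exercises 3-9 and 3-10 (the function name is mine):

```python
def blending_gain(f1, f2, c1=20.0, c2=2.0, df=1.0):
    """Gain of product concentration (lb/gal) with respect to the
    concentrated-solution flow f1 (gpm); f2 is the dilute stream."""
    c_before = (f1 * c1 + f2 * c2) / (f1 + f2)
    c_after = ((f1 + df) * c1 + f2 * c2) / (f1 + df + f2)
    return (c_after - c_before) / df

g_full = blending_gain(100.0, 400.0, df=10.0)   # about 0.028 (lb/gal)/gpm
g_tenth = blending_gain(10.0, 40.0, df=1.0)     # about 0.28 (lb/gal)/gpm
# The gain at one-tenth throughput is ten times the full-throughput gain.
```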

UNIT 4
Exercise 4-1.
If the process gain were to double, the controller gain must be reduced to
half its original value to keep the total loop gain constant.

Exercise 4-2.
The loop is less controllable (has a smaller ultimate gain) as the ratio of the
process dead time to its time constant increases. The process gain does not
affect the controllability of the loop, since the controller gain can be
adjusted to maintain a given loop gain.

Exercise 4-3.
The required relationships are:

Kcu = 2τ/(K t0)    Tu = 4 t0

Exercise 4-4.
Process A is less sensitive to changes in controller output than processes B
and C, which have equal sensitivity.

Process C is more controllable than processes A and B, which are equally
controllable.

Process A has the fastest response of the three, and process C the slowest.

Exercise 4-5.
Quarter-decay tuning formulas for series PID controller, from the
formulas on Table 4-1:

                        Process A   Process B   Process C
Uncontrollability       0.5         0.5         0.2
Gain, %C.O./%T.O.       4.8         1.2         3.0
Integral time, min      0.20        3.0         4.0
Derivative, min         0.05        0.75        1.0

Exercise 4-6.
To adjust for 8 s sample time we must add 8/2 = 4 s (0.067 min) to the
process dead time. Once more, from the formulas of Table 4-1:

                        Process A   Process B   Process C
Uncontrollability       0.9         0.52        0.21
Gain, %C.O./%T.O.       2.9         1.15        2.90
Integral time, min      0.33        3.13        4.13
Derivative, min         0.08        0.78        1.03

Comparison with the results of Exercise 4-5 shows that the sample time
has a greater effect on the tuning parameters for process A because it is the
fastest of the three.

Exercise 4-7.
The tuning parameters using the IMC rules for disturbance inputs, from
Eqs. 4-3 and 4-4, for a series PID controller with τc= 0:

                        Process A   Process B   Process C
Gain, %C.O./%T.O.       8.0         2.0         5.0
Integral time, min      0.2         3.0         10.0
Derivative, min         0.05        0.75        1.0

Exercise 4-8.
The tuning parameters by the IMC for set point changes, from the Eqs. 4-3
and 4-6 for a series PID controller with τc = 0:

                        Process A   Process B   Process C
Gain, %C.O./%T.O.       4.0         0.83        2.1
Integral time, min      0.20        3.0         10.0
Derivative, min         0.05        0.75        1.0

Exercise 4-9.
The IMC tuning rule for set point changes is the preferred method for the
slave controller in a cascade system because it produces fast response with
about 5% overshoot. The disturbance and quarter-decay ratio formulas are
too oscillatory on set point changes for a slave controller.

Exercise 4-10.
The typical symptom of integral windup is excessive overshoot of the
controlled variable; it is caused by saturation of the controller output
beyond the limits of the manipulated variable. Integral windup can be
prevented in simple feedback loops by limiting the controller output at
points that coincide with the limits of the manipulated variable.

UNIT 5
Exercise 5-1.
Tight level control is indicated when the level has significant effect on the
process operation, as in a natural-circulation evaporator or reboiler.
Averaging level control is to be used when it is necessary to smooth out
sudden variations in flow, as in a surge tank receiving discharge from
batch operations to feed a continuous process. The tight level control is the
one that requires the level to be kept at or very near its set point.

Exercise 5-2.
For flow control loops a proportional-integral (PI) controller is
recommended with a gain near but less than 1.0%C.O./%T.O. The integral
time is usually small, of the order of 0.05 to 0.1 minutes.

Exercise 5-3.
For tight level control a proportional controller with a high gain, usually
greater than 10%C.O./%T.O. should be used. When the lag of the control
valve is significant, a proportional-derivative controller could be used.
When a proportional-integral controller is used, the integral time should
be long, of the order of one hour or longer.

Exercise 5-4.
For averaging level control a proportional controller with a gain of
1.0%C.O./%T.O. should be used, because this provides maximum
smoothing of variations in flow while still preventing the level from
overflowing or running dry.

Exercise 5-5.
When a PI controller is used for averaging level control, the integral time
should be long, of the order of one hour or longer. At some values of the
gain, an increase in gain would decrease oscillations in the flow and the
level.

Exercise 5-6.
Time constant, from Eq. 5-2:

τ = (0.03 kg)(23 kJ/kg-°C)/[(0.012 m^2)(0.6 kW/m^2-°C)]

= 96 s (1.6 min)

Exercise 5-7.
PID controllers are commonly used for temperature control so that the
derivative mode compensates for the lag of the temperature sensor which
is usually significant.

Exercise 5-8.
The major difficulty with the control of composition is the dead time
introduced by sampling and by the analysis.

UNIT 6
Exercise 6-1.
Computer controllers perform the control calculations at discrete intervals
of time, with the process variable being sampled and the controller output
updated only at the sampling instants, while analog controllers calculate
their outputs continuously with time.

Exercise 6-2.
The “derivative kick” is a pulse on the controller output that takes place at
the next sample after the set point is changed and lasts for one sample. It
can be prevented by having the derivative term act on the process variable
instead of on the error.

The derivative filter or “dynamic gain limit” is needed to prevent large


amplification of changes in the process variable when the derivative time
is much longer than the algorithm sample time.

Exercise 6-3.
The “proportional kick” is a large step change in controller output right
after a set point change; it can be eliminated by having the proportional
term act on the process variable instead of on the error, so that the operator
can apply large changes in set point without danger of upsetting the
process. When the proportional kick is avoided, the process variable
approaches the set point slowly after it is changed, at a rate determined by
the integral time. The proportional kick must not be avoided whenever it
is necessary to have the process variable follow set point changes fast, as
in the slave controller of a cascade system.

Exercise 6-4.
All three tuning parameters of the parallel version of the PID algorithm
are different from the parameters for the series version. The difference is
minor if the derivative time is much smaller than the integral time.

Exercise 6-5.
The nonlinear gain allows the proportional band to be wider than 100%
when the error is near zero, which is equivalent to having a larger tank in
an averaging level control situation. To have a gain of 0.25%C.O./%T.O.
(400% PB) at zero error, the nonlinear gain must be:

[(1/0.25) - 1]/50 = 0.06 (%C.O./%T.O.)/%T.O.

This calculation assumes a proportional-only controller with a bias term of
50%C.O. and a set point of 50%T.O.

Exercise 6-6.
Using the formulas of Table 6-2, with q = 0 (for maximum gain) and the
following parameters:

K = 1.6%T.O./%C.O. τ1 = 10 min τ2 = 0 t0 = 2.5 min

Sample time, min        0.067    1      10     50
a1 = exp(-T/τ1)         0.9934   0.905  0.368  0.0067
a2 = exp(-T/τ2)         0        0      0      0
N = t0/T                37       2      0      0
Gain, %C.O./%T.O.       2.5      2.0    0.4    0.004
Integral time, min      10.0     9.5    5.8    0.34
Derivative time, min    0        0      0      0

Exercise 6-7.
If the algorithm has dead time compensation, the gain can be higher
because it does not have to be adjusted for dead time. This only affects the
first two cases, because the dead time is less than one sample for cases (c)
and (d), and, therefore, no dead time compensation is necessary. From Eq.
6-7 and Table 6-2:

Sample time, min                     0.067   1
Samples of dead time compensation    37      2
Gain, %C.O./%T.O.                    93      5.9
Integral time, min                   10      9.5

Exercise 6-8.
The basic idea of the Smith Predictor is to bypass the process dead time to
make the loop more controllable. This is accomplished with an internal
model of the process responding to the manipulated variable in parallel
with the process. The basic disadvantage is that a complete process model
is required, but it is not used to tune the controller, creating too many
adjustable parameters.

The Dahlin Algorithm produces the same dead time compensation as the
Smith Predictor, but it uses the model to tune the controller, reducing the
number of adjustable parameters to one, q.

UNIT 7
Exercise 7-1.
Cascade control (1) takes care of disturbances into the slave loop reducing
their effect on the controlled variable; (2) makes the master loop more
controllable by speeding up the inner part of the process; and (3) handles
the nonlinearities in the inner loop where they have less effect on
controllability.

Exercise 7-2.
For cascade control to improve the control performance, the inner loop
must be faster than the outer loop. The sensor of the slave loop must be
reliable and fast, although it does not have to be accurate.

Exercise 7-3.
The master controller in a cascade control system has the same requirements
as the controller in a simple feedback control loop; thus, the tuning and
mode selection of the master controller are no different from those for a
single controller.

Exercise 7-4.
The tuning of the slave controller is different because it has to respond to
set point changes, which it must follow quickly without too much
oscillation. The slave controller should not have integral mode when it can
be tuned with a high enough proportional gain to keep the offset
small. If the slave is to have derivative mode, it must act on the process
variable so that it is not in series with the derivative mode of the master
controller.

Exercise 7-5.
The controllers in a cascade system must be tuned from the inside out,
because each slave controller forms part of the process controlled by the
master around it.

Exercise 7-6.
Temperature as the slave variable (1) introduces a lag because of the
sensor lag, and (2) may cause integral windup because its range of
operation is narrower than the transmitter range. These difficulties can be
handled by (1) using derivative on the process variable to compensate for
the sensor lag, and (2) having the slave measurement fed to the master
controller as its reset feedback variable.

Exercise 7-7.
Pressure is a good slave variable because its measurement is fast and
reliable. The major difficulties are (1) that the operating range may be
narrower than the transmitter range, and (2) that part of the operating
range may be outside the transmitter range, e.g., vacuum when the
transmitter range includes only positive gage pressures.

Exercise 7-8.
In a computer cascade control system the slave controller must be
processed more frequently than the master controller.

Exercise 7-9.
Reset windup can occur in cascade control when the operating range of
the slave variable is wider than the transmitter range. To prevent it, the
slave measurement can be passed to the reset feedback of the master; in
such a scheme the master always takes action based on the current
measurement, not on its set point.

Unit 8
Exercise 8-1.
A feedback controller acts on the error. Thus, if there were no error, there
would be no control action. In theory, perfect control is possible with
feedforward control, but it requires perfect process modeling and
compensation.

Exercise 8-2.
To be used by itself, feedforward control requires that all the disturbances
be measured and that accurate models be available of how the disturbances
and the manipulated variable affect the controlled variable.

Feedforward with feedback trim has the advantages that only the major
disturbances have to be measured and compensation does not have to be

exact, because the integral action of the feedback controller takes care of
the minor disturbance and the model error.

Exercise 8-3.
Ratio control consists of maintaining constant the ratio of two process
flows by manipulating one of them. It is the simplest form of feedforward
control.

For the air-to-natural gas ratio controller of Figure 7-5:

Control objective: Maintain constant the nitrogen-to-hydrogen ratio of the
fresh synthesis gas.

Measured disturbance: Natural gas flow (production rate).

Manipulated variable: The set point of the air flow controller.

Exercise 8-4.
A lead-lag unit is a linear dynamic compensator consisting of a lead (a
proportional plus derivative term) and a lag (a low-pass filter), each
having an adjustable time constant. It is used in feedforward control to
advance or delay the compensation so as to dynamically match the effect
of the disturbance.

The step response of a lead-lag unit is an immediate step of amplitude
proportional to the lead-to-lag ratio, followed by an exponential approach
to the steady-state compensation at a rate controlled by the lag time
constant.

The response of a lead-lag unit to a ramp is a ramp that leads the input
ramp by the difference between the lead and the lag time constants, or lags
it by the difference between the lag and the lead time constants.

Exercise 8-5.
To lead by 1.5 minutes with amplification of 2:

1.5 min = lead - lag = 2(lag) - lag = lag

Therefore, a lag of 1.5 minutes and a lead of 3.0 minutes.
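The lead and lag just computed can be checked with a discrete simulation of the lead-lag unit. The backward-difference discretization used here is one simple implementation choice, a sketch rather than the book's algorithm:

```python
def lead_lag(x_seq, t_lead, t_lag, T):
    """Backward-difference discretization of the lead-lag unit
    (t_lead*s + 1)/(t_lag*s + 1) with sample time T (an illustrative
    sketch; exact discretizations are also common in practice)."""
    y = x_prev = 0.0
    out = []
    for x in x_seq:
        y = (t_lag * y + t_lead * (x - x_prev) + T * x) / (t_lag + T)
        x_prev = x
        out.append(y)
    return out

T = 0.01                        # min; small relative to the time constants
resp = lead_lag([1.0] * 2000, t_lead=3.0, t_lag=1.5, T=T)
print(resp[0], resp[-1])        # initial jump near 2 (lead/lag), final near 1
```

With T much smaller than the time constants, the first sample approaches the lead-to-lag amplification of 2, and the response decays to the steady-state compensation of 1 at a rate set by the 1.5-minute lag.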

Exercise 8-6.
Dead time compensation consists of storing the feedforward
compensation and playing it back some time later. The time delay is the
adjustable dead time parameter.

Dead time compensation can be used only when the feedforward action is
to be delayed and a computer or microprocessor device is available to
implement it. It should be used only when the delay time is long relative
to the process time constant.

Exercise 8-7.
Design of feedforward controller for process furnace:

1. Control objective: To = Toset

2. Measured disturbances: W, process flow, lb/h
   Fs, supplementary fuel flow, scfh
   Ti, inlet process temperature, °F

3. Manipulated variable: Fset, main fuel flow, gph
4. Steady-state energy balance on furnace:

(F∆Hm + Fs∆Hs)η = WC(To - Ti)

where ∆Hm is the heating value of the main fuel in Btu/gal, ∆Hs
is that of the supplementary fuel gas in Btu/scf, η is the efficiency
of the furnace, and C is the specific heat of the process fluid in
Btu/lb-°F.

Solve for the manipulated variable and substitute the control
objective:

Fset = (C/η∆Hm)(Toset - Ti)W - (∆Hs/∆Hm)Fs

5. Numerical values are needed to evaluate the importance of each
disturbance. The change in each disturbance required to cause a
given change in main fuel flow would be calculated.
6. Feedback trim can be added as in Example 8-1:

Feedback output: m = CToset/(η∆Hm)

Design formula: Fset = [m - (C/η∆Hm)Ti]W - (∆Hs/∆Hm)Fs

7. Lead-lag units must be installed on the process flow and inlet
temperatures, but not on the supplementary fuel gas flow,
because its dynamic effect should match that of the main fuel gas
flow.

8. Instrumentation diagram: (diagram not reproduced in this text version)
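The design formula of step 6 can be evaluated directly. All numeric parameter values in this sketch are hypothetical, chosen only to show that the computed main fuel flow closes the steady-state energy balance at the target outlet temperature:

```python
# Hypothetical parameter values, for illustration only
dHm = 140_000.0   # heating value of main fuel, Btu/gal
dHs = 1_000.0     # heating value of supplementary fuel, Btu/scf
eta = 0.75        # furnace efficiency
C = 0.6           # specific heat of process fluid, Btu/lb-degF

def fset(To_set, Ti, W, Fs):
    """Feedforward design formula:
    Fset = (C/(eta*dHm))*(To_set - Ti)*W - (dHs/dHm)*Fs, gal/h."""
    return (C / (eta * dHm)) * (To_set - Ti) * W - (dHs / dHm) * Fs

F = fset(To_set=700.0, Ti=450.0, W=50_000.0, Fs=2_000.0)
# check: (F*dHm + Fs*dHs)*eta = W*C*(To - Ti) gives back To = To_set
To = 450.0 + (F * dHm + 2_000.0 * dHs) * eta / (50_000.0 * C)
print(F, To)
```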

Unit 9
Exercise 9-1.
Loop interaction takes place when the manipulated variable of each loop
affects the controlled variable of the other loop. The effect is that the gain
and the dynamic response of each loop change when the auto/manual
state or tuning of the other loops changes.

When loop interaction is present, we can (1) pair the loops in the way that
minimizes the effect of interaction and (2) design a control scheme that
decouples the loops.

Exercise 9-2.
Open-loop gain of a loop is the change in its controlled variable divided
by the change in its manipulated variable when all other loops are open
(in manual).

Closed-loop gain is the gain of a loop when all other loops are closed (auto
state) and have integral mode.

Relative gain (interaction measure) for a loop is the ratio of its open-loop
gain to its closed-loop gain.

Exercise 9-3.
To minimize interaction for a loop, the relative gain for that loop must be
as close to unity as possible. Thus, the loops must be paired to keep the
relative gains close to unity, which, in a system with more than two control
objectives, may require ranking the objectives.

The relative gains are easy to determine because they involve only a
steady-state model of the process, which is usually available at design
time.

The main drawback of the relative gain is that it does not take into account
the dynamic response of the loops.

Exercise 9-4.
When all four relative gains are 0.5, the effect of interaction is the same for
both pairing options. The gain of each loop will double when the other
loop is switched to automatic. The interaction is positive; that is, the loops
help each other.

Exercise 9-5.
When the effect of interaction with other loops is in the same direction as
the direct effect for that loop, the interaction is positive; if the interaction
and direct effects are in opposite direction, the interaction is negative. For
positive interaction, the relative gain is positive and less than unity, while
for negative interaction the relative gain is either negative or greater than
unity.

Exercise 9-6.
Interaction for top composition to reflux and bottom composition to
steam:

(0.05)(0.05)/[(0.05)(0.05) - (-0.02)(-0.02)] = 1.19

Relative gains:

Reflux Steam
Yd 1.19 -0.19
Xb -0.19 1.19

The top composition must be paired to the reflux and the bottom
composition to the steam to minimize the effect of interaction.
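The value above follows the standard relative gain formula for a 2x2 system, λ11 = K11·K22/(K11·K22 - K12·K21), with the open-loop gains given in the exercise:

```python
def relative_gain_2x2(K11, K12, K21, K22):
    """Relative gain of the (1,1) pairing for a 2x2 system:
    lambda11 = K11*K22 / (K11*K22 - K12*K21)."""
    return (K11 * K22) / (K11 * K22 - K12 * K21)

# top/bottom composition gains to reflux and steam, from the exercise
lam = relative_gain_2x2(0.05, -0.02, -0.02, 0.05)
print(round(lam, 2))   # 1.19; the off-diagonal relative gains are 1 - 1.19 = -0.19
```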

Exercise 9-7.
Let H be the flow of the hot water in gpm, C the flow of the cold water in
gpm, F the total flow in gpm, and T the shower temperature in °F. The
mass and energy balances on the shower, neglecting variations in density
and specific heat, give the following formulas:

F=H+C T = (170H + 80C)/(H + C)

These are the same formulas as for the blender of Example 9-2. So, the
relative gains are:

Hot Cold
F H/F C/F
T C/F H/F

For the numbers in the problem:

H = (3 gpm)(110 - 80)/(170 - 80) = 1 gpm C = 2 gpm

So, as the cold water flow is the larger, use it to control the flow, and use
the hot water flow to control the temperature. The relative gain for this
pairing is:

C/F = 2/3 = 0.67

The gain of each loop increases by 50% when the other loop is closed.
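The flow split and the relative gain for this pairing can be verified numerically from the mass and energy balances above:

```python
def shower_split(F, T, Th=170.0, Tc=80.0):
    """Hot/cold flow split for total flow F (gpm) and mixed temperature
    T (degF), from F = H + C and T = (Th*H + Tc*C)/(H + C)."""
    H = F * (T - Tc) / (Th - Tc)
    return H, F - H

H, C = shower_split(3.0, 110.0)
rel_gain = C / (H + C)    # relative gain for the cold-water-to-flow pairing
print(H, C, round(rel_gain, 2))   # 1.0 2.0 0.67
```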

Exercise 9-8.
As in the second part of Example 9-4, we can use a ratio controller to
maintain a constant temperature when the flow changes. We would then
ratio the hot water flow (smaller) to the cold water flow (larger) and
manipulate the cold water flow to control the total flow. The design ratio is
0.5 gpm of hot water per gpm of cold water.

Unit 10
Exercise 10-1.
When the process dynamic characteristics (gain, time constant, and dead
time) are expected to change significantly over the region of operation,
adaptive control is worthwhile to maintain the control loop performance.

Most loops can be controlled satisfactorily without adaptive control,
because either their characteristics do not vary much or their
controllability is high and insensitive to variation in the process dynamic
parameters.

Exercise 10-2.
The process parameter most likely to change and affect the control loop
performance is the process gain. An example of extreme variation in
process gain is the control of pH in the water neutralization process.

Exercise 10-3.
The equal percentage valve characteristic compensates for the decrease in
process gain with increasing throughput, typical of many blending, heat
transfer, and separation processes. For the equal percentage characteristic
to properly compensate for gain variations: (1) the pressure drop across
the valve must remain constant, (2) the controller output must actuate the
valve (it must not be cascaded to a flow controller), (3) the valve must not
operate in the lower 5% of its range, where the characteristic deviates from
equal percentage.
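The compensation argument can be illustrated numerically: for the equal-percentage characteristic f(x) = α^(x-1), the valve gain df/dx = ln(α)·f(x) is proportional to the flow, so it rises with throughput and offsets a process gain that falls with throughput. The rangeability α = 50 below is a typical illustrative value, not one from the text:

```python
import math

def eq_pct_flow(x, alpha=50.0):
    """Equal-percentage inherent characteristic: fraction of maximum
    flow at fractional valve position x (alpha = rangeability,
    an illustrative value)."""
    return alpha ** (x - 1.0)

def valve_gain(x, alpha=50.0):
    """d(flow)/d(position) = ln(alpha) * flow: gain proportional to flow."""
    return math.log(alpha) * eq_pct_flow(x, alpha)

# the valve gain tracks the flow: whenever the flow doubles, so does the gain
for x in (0.4, 0.6, 0.8):
    print(f"x={x}: flow={eq_pct_flow(x):.3f}, gain={valve_gain(x):.3f}")
```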

Exercise 10-4.
When a feedback controller adjusts the ratio of a ratio controller, its output
is multiplied by the process flow, directly compensating for the gain
decrease with throughput.

Exercise 10-5.
A gap or dead band controller can be used for a “valve position controller”
that adjusts a large reagent valve in parallel with a small valve to maintain
the small valve position near half open. This way the large valve makes
rough adjustments in flow but does not move when the small valve is
doing fine adjustments near neutrality, where the process gain is highest.

Exercise 10-6.
A pattern recognition controller matches an underdamped response curve
to the response of the error by detecting the peaks of the response. The
decay ratio is then controlled by adjusting the controller gain, and the
oscillation period is used to adjust the integral and derivative times.

Exercise 10-7.
The second-order discrete model matches the sampled response of most
processes, because its form is the same for monotonic, oscillatory, inverse
response, integrating, and unstable responses.

The parameters of a discrete model, except for the dead time, can be
estimated using least squares regression techniques. The second-order
model requires only six parameters, including a bias term to account for
disturbances.

Exercise 10-8.
For least squares regression to successfully estimate the dynamic process
model parameters, (1) the process variable must be changing due to
changes in the controller output, (2) the input/output data must be
differenced or at least entered as differences from their initial steady-state
values, and (3) the noise on the process variable must not be
autocorrelated.
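These conditions can be demonstrated with a minimal sketch: data are generated from a known discrete model driven by a varying (PRBS-like) input, entered as deviations from the initial steady state, and fitted by ordinary least squares via the normal equations. A first-order model is used here for brevity; the text's second-order model extends the same idea:

```python
import random

def fit_first_order(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1] via the 2x2
    normal equations (u, y assumed to be deviation variables)."""
    Syy = Syu = Suu = Sy1 = Su1 = 0.0
    for k in range(1, len(y)):
        Syy += y[k-1] * y[k-1]
        Syu += y[k-1] * u[k-1]
        Suu += u[k-1] * u[k-1]
        Sy1 += y[k-1] * y[k]
        Su1 += u[k-1] * y[k]
    det = Syy * Suu - Syu * Syu
    a = (Sy1 * Suu - Syu * Su1) / det
    b = (Syy * Su1 - Syu * Sy1) / det
    return a, b

random.seed(1)
a_true, b_true = 0.8, 0.5
u = [random.choice((-1.0, 1.0)) for _ in range(200)]   # PRBS-like input
y = [0.0]
for k in range(1, 200):
    y.append(a_true * y[k-1] + b_true * u[k-1])
print(fit_first_order(u, y))   # recovers (0.8, 0.5) with noise-free data
```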

Exercise 10-9.
Recursive estimation provides estimates of the parameters that improve
with each sample of the process input and output. It is convenient for
on-line autotuning and is the only way to do adaptive control. To use
recursive regression for autotuning, the process driving function and
initial covariance matrix are set, and an estimation run is made with the
forgetting factor set to unity. In adaptive control the estimator is kept
running with the forgetting factor set at a value less than unity.
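The recursive scheme can be sketched as standard recursive least squares with a forgetting factor: a factor of 1 corresponds to the autotuning run described above (the covariance shrinks and the estimates settle), while a factor less than 1 keeps the estimator responsive for adaptive control. The first-order model and numeric values below are illustrative:

```python
import random

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update for y = phi . theta, with
    forgetting factor lam (2-parameter case, pure Python)."""
    Pp = [P[0][0]*phi[0] + P[0][1]*phi[1],
          P[1][0]*phi[0] + P[1][1]*phi[1]]        # P*phi
    denom = lam + phi[0]*Pp[0] + phi[1]*Pp[1]
    K = [Pp[0]/denom, Pp[1]/denom]                 # estimator gain vector
    err = y - (phi[0]*theta[0] + phi[1]*theta[1])  # prediction error
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
    P = [[(P[0][0] - K[0]*Pp[0]) / lam, (P[0][1] - K[0]*Pp[1]) / lam],
         [(P[1][0] - K[1]*Pp[0]) / lam, (P[1][1] - K[1]*Pp[1]) / lam]]
    return theta, P

random.seed(2)
theta = [0.0, 0.0]                   # initial guesses for (a, b)
P = [[100.0, 0.0], [0.0, 100.0]]     # large initial covariance
y_prev = 0.0
for _ in range(300):
    u = random.choice((-1.0, 1.0))   # persistently exciting input
    y = 0.8 * y_prev + 0.5 * u       # "process": y[k] = a*y[k-1] + b*u[k]
    theta, P = rls_step(theta, P, [y_prev, u], y, lam=1.0)
    y_prev = y
print(theta)   # converges toward [0.8, 0.5]
```

Setting lam below unity in the loop above would discount old data exponentially, which is what keeps the estimates tracking a process whose parameters drift.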

Exercise 10-10.
The diagonal terms of the variance-covariance matrix are the multipliers
of the variance of the noise to obtain the variance of the corresponding
estimated parameters. To keep a parameter from changing during
estimation, the corresponding initial diagonal value of the
variance-covariance matrix is set to zero.
INDEX

Index Terms Links

adapter 212 217 218


adaptive control 76 77 199 200 202
209 212 217 220
algorithm 101 104
ammonia synthesis 138
analog 16 20
analog-to-digital converter (ADC) 102
analysis cycle 96
analyzer control 96 97
arrow 12
auto-tuning 217
averaging level control 90 91 92 94 97
106 107 122

batch process 69
bias 14
blending process 176 185
blending tank 46 47 51
block diagram 11 12 19 20 32

This page has been reformatted by Knovel to provide easier navigation.



capacitance 45 46 47 48
cascade control 86 127 129 132 134
135 137 138 139 140
142
cascade windup 139 141
cascade-to-ratio control 202 207 209
characteristics, of valve 52 53 202 220
closed-loop gain 174 175 177
closed-loop time constant 110
coarse tuning 75
comparator 12
compensation for dead time 117 119 120 121
composition control 85
computer cascade control 134 142
computer-based controller 20
conductance 45 46 47 48 49
52 58
conductance, valve 47
control objective 85 145 157 159 160
control valve 10 11 12 13 14
controllability 24 27
controllable process 70 71 72 76 85
controlled variable 10 13 20 22 32
controller 13 14 20 22
action 17 23
computer-based 20
gain 13 15 24 27 32
33
panel-mounted 22
proportional-integral (PI) 19


controller (Cont.)
proportional-integral-derivative (PID) 19
proportional-only 14 19
single-mode 13 19
synthesis 86 118
three-mode 20
two-mode 13
correction for sample time 108 109
covariance matrix 216 218
current-to-pressure transducer 10

Dahlin controller 117 118 119


damping parameter 210
dead band 117
dead time 37 39 40 41 42
43 45 49 50 52
57 85 96
dead time compensation 117 119 120 121 122
156 157 163
dead time compensator 151 152 156 157 158
dead-band controller 202 208
decoupler 173 183 184 185 186
188 190 195
derivative 17 19 20 21
action 17
factor 211
filter 103 104
kick 20 103 105 122
mode 13 17 18 19 20
24 27 28 85 86
90 103 131 132


derivative (Cont.)
time 17 21 23 26 28
33 104 111
unit 20 104 105
differencing 215
digital controller 11
digital-to-analog converter (DAC) 102
direct action 12
direct material balance control 181 182
discrete model 213 214 217 218 219
distillation column 169 179 194 195
distributed control systems (DCS) 11 101
distributed controller 20
disturbance 10 14 22 26 32
85 86 89 97
dynamic compensation 147 152 156 158 161
163
dynamic gain limit 104 122
dynamic interaction 182 194

efficiency 160 161


electrical system 46
electronic 10 22
energy balance control 180 182
equal percentage 203
error 10 11 12 13 14
15 16 18 20 23
26 32 70
steady-state 80
estimation 217
of parameters 212 214 218


estimation
off line 214
recursive 216
EXACT controller 209 211 212
expert system 209 212

fast process 110 112


feedback control 9 11 31
feedback control loop 10 11 12
feedback controller 13 16 19 22 29
146 150 158 161 162
feedback trim 146 148 149 151 158
160 161 164
feedforward control 145 146 147 148 149
151 152 157 161 162
163
feedforward tuning 152
feedforward-feedback control 148 149
filter parameter 103 104
fine tuning 75
first-order-plus-dead-time (FOPDT) 51 56 85
flow control 86 87 89 96 97
129 132 137 138 139
flow control response 87
FOPDT model 51 56 85
frequency 134


gain 37 40 41 44 54
56
closed-loop 174 175 176 177
nonlinear 106 107
open-loop 173 174 177 180 181
relative 176 177 179 182 183
188 190
scheduling 202
steady-state 178 189
variation 54 200 203 208
gap 117
gap controller 208
gas surge tank 46 47
graceful degradation 159

half decoupling 185


heat exchanger 94
heat transfer 89 94 96
heater 149 159 161
efficiency 160 161
example 9 27
feedforward control 162
Heaviside operator 18
higher-order system 48
hydrogen/nitrogen ratio 138
hysteresis 87


IMC 86
instrumental variable (IV) regression 216
integral controller 16 86 97
integral mode 15 16 18 19 24
27 129 130 141
integral time 16 22 23 24 26
27 61 64 71 73
75
integrating process 40
interaction 76 169 171 172 173
174 176 177 179 181
183 185 188 189 190
194
interaction measure 175
intermediate level control 92
internal model control (IMC) 86 120
inverse response 78 79 80 81 172
182 185 188 193 194

jacketed reactor 134 139 141

lead-lag compensation 162


lead-lag unit 152 162
least-squares regression 214 218
level control 89 90 91 92 93
94 97
limiting controller output 115
linear feedforward controllers 150 152

liquid storage tank 46 47


loop interaction 169 172 173 183 194

manipulated variable 10 24 32 145 146


148 150 151 152 157
160 161 163 164
master controller 129 130 131 132 134
138 139 140 141 142
material balance control 181 182
maximum likelihood regression 215
measured disturbance 146 148 149 157 158
159 160 161 164
microprocessor controller 20
mode 9
modes for the master controller 130
multiplexer 102
multivariable control 169 170 173 180 185
187 188 189 191 194

negative feedback 12
negative interaction 172 177 179 181 183
185 188 189 190
noise band 211
nonlinear controller gain 106
nonlinear feedforward compensation 157 164
nonlinearity 62 200 201 202 209


off-line estimation 214


offset 14 15 17 28 33
on-line estimation 215
on-off control 18
open-loop gain 176 177 178 180
open-loop test 37 38 39
optimizing feedback loops 117
output 87 90
output pulse 103 105
output variable 10
overshoot 76 189 209 211 220

pairing 172 173 176 177 179


182 183 186 190 194
panel-mounted controller 22
parallel paths 185
parallel PID controller 20 27 28 61 105
110 119
parameter estimation 199 202 212 215 217
218
pattern recognition 199 202 209 210 212
220
PD controller 17 86
percent controller output (%C.O.) 41
percent transmitter output (%T.O.) 41
perfect control 145 164
performance 87
pH control 201 208


PI controller 16 19 21 86 91
92
PID algorithm 101
PID controller 19 20
pneumatic 10 22
positive interaction 172 177 182 189 190
practical tips 28 74
preset compensation 202
pressure control 89 92 97 129 139
process dead time 41 51
process gain 37 39 40 44 52
53 54 57 110
process nonlinearity 37 53 209
process time constant 42 44 45 92 96
process variable 101 102 103 104 110
119 122
processing frequency 101 115
programmable logic controllers (PLC) 101
proportional band 13 33
proportional controller 15
proportional kick 105
proportional mode 13 14 16 17
proportional-derivative (PD) controller 86
proportional-integral (PI) controller 86 91
proportional-integral controller 87
proportional-only controller 14 19
proportional-on-measurement 70 106
pseudo-random binary signal (PRBS) 215 218
pulse
symmetric 214 215
PV 101


QDR response 26 66 68
QDR tuning 26 27 30 33 61
66 67 68 71 73
quarter-decay ratio (QDR) response 26

rate mode 17
rate time 17 20
ratio control 145 149 164
reactor 40 127 129 131 133
134 135 137 138 141
recursive 218
recursive estimation 212 215 216 218 220
regression 214 220
instrumental variable (IV) 216
maximum likelihood 215
relative gain 174 175 176 177 178
179 180 182 183 188
189 194
relative gain matrix 173 177 182
reset feedback 141
reset mode 15
reset rate 17 24 33
reset time 15 19 33
reset windup 53 61 76 77 78
81 127 132 133 140
142
resistance 45 46 48
reverse action 12 22 23


sample time 64 96 104 110 111


112 113 115 116 117
selection 115
sampling frequency 63 64 115
sampling period 63 108 112
saturation 52 53
self-regulating 40
sensor time constant 94
sensor/transmitter 10 12
series PI controller 120
series PID controller 20 21 27 61 64
66 81 104 111
set point 10 13 14 18 20
22 24 26 27 65
66 68 71 73 75
78 81
single-mode controller 13 19
slave controller 82 127 130 131 132
134 136 138 139 140
141
slave controller modes 142
slave flow loop 132
slave pressure loop 133
slave temperature loop 132
slow sampling 112 113 114
Smith Predictor 117 118 122
stability 9 22 32
static compensation 162
static friction 87


steady-state 42 44
compensation 146
steam heater 9 14 23 27 28
32 149 159 162 163
step test 37 39 41 45 51
54 55 56
symmetric pulse 214 215

tangent method 42 43 44 45
tangent-and-point method 43
temperature control 94 95 96 127 128
129 131 133 136
three-mode controller 20
tight control 89 93 94 97
tight tuning 22 23
Time 120
time constant 37 39 40 41 43
44 45 46 47 48
52 85 86 87 90
92 94 96
time delay 49 50
trace of matrix 217
transducer 10
transfer function 37
transportation lag 49 50 51
tuning parameter 13 16 19 21 26
27 28 29 32
two-mode controller 13
two-point method 42


ultimate gain 9 23 24 25 26
27 28 31 108
ultimate period 9 24 25 27 28
uncontrollability 62 63 65 66 71
72 73 76 108 109
115
uncontrollable process 74 76 81
unstable 22 24 40

vacuum pan 55
valve
characteristics 52 53 202 203 220
conductance 52
gain 52
hysteresis 87
position control 117 208
valve position control 117
variance of the estimates 216
variance-covariance matrix 216 218

wait time 211


Watt, James 9 11
windup 53 140 141
windup, of cascade system 139 142


Ziegler and Nichols 24 25 42 61 64


65 66

