
Numerical Algorithms for Quadratic Programming

in Model Predictive Control - An Overview


Thuy V. Dang, Tri Tran, and K-V. Ling
Abstract— Dynamic modeling of complex systems is an essential part of multi-variable control schemes. Among these schemes, model predictive control (MPC) is the most successful optimization-based strategy for multi-variable constrained systems in industry, and in MPC problem formulations a quadratic objective function and linear inequality constraints are pervasively used. This paper provides an overview of the numerical algorithms that are in widespread use for solving the quadratic programming (QP) problem in MPC. Specifically, the algorithms for interior point methods (IPMs), first-order methods, and Alternating Direction Methods of Multipliers (ADMMs) are addressed herein. Discussions on the conditions under which a particular algorithm should be chosen, and on its advantages, are also given.

I. MODEL PREDICTIVE CONTROL


A. General
Model Predictive Control (MPC), also known as moving or receding horizon control, originated in the refining industry as a practical control algorithm for multivariable systems that have constraints [1], [2], [3]. It has been attracting growing research interest from various industries since the 1990s [4], [5], [6]. The applications of MPC in the process industries dominate the field of advanced process control due to their unique proven-in-use advantages. In the last decade, they have shown tremendous growth. While only around 300 systems had been recorded at the beginning of the 1990s [7], more than 2,200 installed systems were found a few years later [8], and the number keeps increasing (e.g., at an approximately 18% annual rate [9] as of the year 2000). More than 4,000 systems had been implemented as of 2004, and the number exceeded 10,000 in 2007 [10]. Other surveys on control techniques for industrial applications in Japan [11], [12] have reported similar trends at the beginning of the 21st century. MPC is currently among the main research topics within the control community, and is attracting more applications from different industries [6].
This work was supported by the National Research Foundation (NRF) of Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) programme, and the Cambridge Centre for Advanced Research in Energy Efficiency in Singapore (Cambridge CARES).
Thuy V. Dang and K-V. Ling are with the School of Electrical and Electronic Engineering, Nanyang Technological University, Block S2, 50 Nanyang Ave, Singapore. Email: dang0028@e.ntu.edu.sg (Dang), ekvling@ntu.edu.sg (Ling)
Tri Tran is currently with Cambridge CARES, Nanyang Technological University, Singapore. Email: tctran@ntu.edu.sg

Several industrial MPC packages from different vendors have been successfully implemented in the oil refining industry, where linear models (e.g., for separation and heat transfer processes) are sufficiently good for the early
versions of MPCs. For linear systems having unmeasured disturbances, a simple Kalman filter [13] is suitable for these MPCs [14], [15]. The situation has not been as easy for severely nonlinear processes, however. Developing adequate nonlinear empirical models is not always trivial, as there is no universal model representing different nonlinear processes [16], [17], [18]. While step and impulse responses or low-order transfer functions are comfortably obtained for linear systems, they pose substantial challenges for nonlinear systems due to the lack of the superposition principle. First-principle and grey-box models (a combination of a first-principle model and an empirical model) are promising, yet to be exploited, for severely nonlinear systems [18], [19].
The explicit MPC paradigm, which combines the offline computation of multi-parametric programming and an efficient online search, was introduced in [20]. MPC strategies with mixed-integer programming have also been developed for hybrid systems [21]. Robust NMPC [22], with the remarkable tube-based approach [23], has been attracting more research recently. Similar progress for stochastic MPC [24] has also been envisaged. Distributed MPC algorithms for large-scale systems are quite mature in their own right [25], [26].
For slow processes, MPC with directly economic-related cost functions, called economic MPC (eMPC), has lately proved a numerically efficient approach for managing the portfolio of power generation and consumption in deregulated markets; see, e.g., [27] and references therein. eMPCs optimize the process operations in a time-varying fashion, rather than maintaining the variables around a few desired steady states; the process may persistently operate in transient states with eMPCs. For fast real-time processes, research efforts have been focused on numerical methods and algorithms that allow the optimizations to be implemented on embedded platforms. This research strand is referred to as embedded convex optimization.
B. Industrial Model Predictive Control

While three-term PID controllers traditionally dominate the process control industry, it is Model (Based) Predictive Control (MPC/MBPC) that represents the advanced control techniques for the industry. It is worth noting that only the process control industry is considered here. While applications of artificial intelligence in control engineering, such as neural control and fuzzy control, show some advantages in the robotics industry, they are not always applicable in industrial process control, except for system identification purposes. This is partly explained by the fact that the computational capacity and data storage of industrial computerized systems are limited [28].
The use of process models appeared early in Coales and Noton's pioneering approach of on-line identification in 1956 [29], and in an internal model controller introduced by Smith in 1957 [30]. But Richalet was actually the first to introduce MPC to industry, with the receding (predictive) horizon idea, in 1972 [31]. The technique has evolved a long way since. Almost all process control vendors nowadays offer MPC to their customers, either as a functional block for unit operation in a distributed control system (DCS), or as a separate software package for plant-wide control. It is notable that DCS is a specific name designating the proprietary computer systems tailor-made for large industrial plants, similarly to PLC (programmable logic controller) for discrete logic control applications.
Predictive control, as a matter of fact, assimilates basic human activities [32]. A particular human action is usually undertaken based on assumptions and expectations about the future outcome. Model predictive control uses the past information of states and the process model to predict the future behavior and compute the control actions. The applications of MPC techniques are apparently more popular in refineries and petrochemical plants than in other industries, because linearised system models are adequate for processes in those industries. The typical installations of MPC in industry are hierarchical: the MPCs provide set points to the local control loops, whose traditional PI/PID controllers are finely tuned. If a predictive controller starts to mislead the system behavior, the operator simply isolates it and lets the local loops hold the plant using the last received set points [2]. The process plant should be stable in this condition, so in the worst case it is safe to run the plant as it is, without the predictive control layer.
It is interesting to refer back to the 1960s debate on whether optimal control and optimisation were necessary to engineers [33]. While optimal control was found to be very successful in the aerospace industry (there were more than 300 references in a survey about the theory and practice of optimal control at that time, as quoted in [33]), the optimal control techniques initially failed to provide a convincing solution for the process industries, due to inherent constraints, model uncertainty, unmeasured disturbances, as well as the computer capability of that time [34], [32]. There was later a resurgence of the method, which became widespread in industry thanks to a strategic change. The revitalized strategy was as follows: "... for large complex problems, it may be better to encode optimality criteria in more vague but more realistic terms, than to force unwilling problems into an ill-fitting straitjacket to allow rigorous optimisation" [32]. MPC is among the successful stories of this strategy.
There are many good references on industrial MPCs. The original work on DMC (Dynamic Matrix Control) is found in [35], [36]. The original IDCOM (Identification and Command) is found in [31]. Other well-known commercialized packages, such as QDMC (Quadratic Dynamic Matrix Control) [37], QDMPC Plus from Aspentech, RMPCT (Robust Model Predictive Control Technology) inside the Profit Controller from Honeywell, PFC (Predictive Functional Control), HIECON (Hierarchical Constraint Control), ExaSMOC from Shell and Yokogawa, ConnoisseurTM from Invensys, and some others, dominate the industrial MPC market.
For NMPC (MPC for severely nonlinear processes using nonlinear models and programming), Sequential Quadratic Programming (SQP) [38] is the well-known numerical method, wherein the objective function is quadratically approximated and the nonlinear constraints are linearized before solving at each iteration step. SQP methods are an extension of Newton-type methods for converging to the solution of the Karush-Kuhn-Tucker (KKT) conditions of the constrained optimization. The solutions do not, however, guarantee the system stability. In this field, the stability of the closed-loop system is usually achieved by adding suitable equality or inequality terminal constraints to the setup, or by enforced stability constraints. The nominal stability of infinite horizon problems is described in [39]. The formal proof of closed-loop stability for the receding-horizon control (finite horizon) of Lipschitz continuous nonlinear systems using equality terminal constraints was provided in [40].
Adding the terminal constraints may, nonetheless, cause extra computational cost or even infeasibility of the optimisation algorithm. Moreover, if the chosen prediction horizon is short, the problem may not have any solution. The region of attraction corresponding to such terminal constraints is normally conservative [18]. The dual-mode control approach, which defines a neighborhood around the desired steady state into which the system can be steered, and within which a linear state feedback controller takes over, is presented in [41], [42], [43], [44]. Several research papers have since proposed methods for enlarging the region of attraction of NMPC algorithms. Other approaches for guaranteeing stability are the contractive state MPC [45], the Control Lyapunov Function (CLF) MPC [46], [47] and the converse Lyapunov-based MPC [48]. The perturbed state-feedback [49] with the control invariant set approach [50], [51] has proved to be effective for robust NMPCs [52], [53], [54], [55], [56], [57]. The tube-based approach has lately become more and more widely used within the MPC research community [23]. For output feedback, the most common approach is to estimate the states using an observer. The closed-loop stability is not guaranteed, nonetheless, even when both the state-based NMPC and the nonlinear observer are stable. Findeisen et al. [16] showed that an additional condition must be considered to guarantee the stability of the augmented system.
The early NMPC software packages available to industrial users include NOVA-NLC from Pavilion Technologies (using first-principles models), Process Perfecter (using input-output Hammerstein models) and some other partial solutions for nonlinear systems, such as Aspen Target, using artificial intelligence techniques [17]. There are two types of state constraints, hard and soft, in these industrial MPCs. Soft constraints allow temporary and short-term violations of the respective state bounds. Hard constraints imply strict containment of the state trajectories within the given bounds. The soft constraints in Honeywell RMPCT within the Profit Controller package are satisfied by using a frustum that artificially permits a larger control error at the beginning of the horizon than at the end [4].

Research in both academic and industrial environments, improving existing algorithms and inventing new algorithms for MPC, has been progressing and targeting applications beyond process control.
II. NUMERICAL METHODS AND ALGORITHMS
In this section, the convex Quadratic Programming (QP) problems arising from the dense and sparse MPC formulations are addressed, together with the numerical algorithms for IPMs, first-order methods, and ADMMs. The reader may refer to [58] for the well-known textbook on convex optimization, or to [38] for the numerical methods.
A. Convex Quadratic Programming

A sub-class of convex optimization problems has the following general form, defined as a QP problem:

  minimize_x   (1/2) x^T Q x + q^T x          (1)
  s.t.         Gx ≤ g                          (2)
               Fx = f,                         (3)

where Q ∈ R^{n×n}, Q ⪰ 0, F ∈ R^{p×n}, G ∈ R^{q×n}, x ∈ R^n, q ∈ R^n, f ∈ R^p, and g ∈ R^q.
The existing numerical methods for QP solvers, including interior-point methods, first-order methods and the Alternating Direction Method of Multipliers, are outlined in the next subsections.
B. Interior-point Methods

The Interior Point Method (IPM) [59] and the Active Set Method (ASM) [60] were initially the two commonly employed approaches for QP in MPC. The computational complexity of ASM grows exponentially with the prediction horizon length in the worst-case scenario, whereas it is polynomial for IPM. Therefore, IPM is suitable for medium and large QP problems. In addition, the solution and timing of IPM are better predictable. Detailed comparisons of these two methods can be found in [61]. The main principle of an IPM is to keep the iterates strictly inside the region defined by the inequality constraints while a sequence of smooth approximate problems, parameterized by a barrier or centering parameter, is solved with Newton-type steps; as this parameter is driven towards zero, the iterates follow the so-called central path towards the solution of the original problem.
The primal-dual [62] and primal-barrier [63] interior point methods have been found effective for use as QP solvers. The primal-dual IPM (PD-IPM) uses a Newton-like method to solve the equations of the KKT optimality conditions, with a line search for the step size such that the Lagrange multipliers remain positive; the complementarity conditions are perturbed by a centering parameter that is gradually reduced to zero, so that the primal and dual iterates converge to the solution together. In the primal-barrier IPM (PB-IPM), the inequality constraints are replaced by a logarithmic barrier function added to the objective function. A PB-IPM algorithm tailored to the QP problems of MPC can be found in [63]. The main computational burden of an IPM is caused by the task of solving a system of linear equations at each iteration. Solutions that improve this time-consuming task are crucial for any IPM algorithm.
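As an illustration of the barrier idea, the following sketch (a one-dimensional toy example of our own construction, not the algorithm of [63]) minimizes a scalar QP with the bound x ≤ b by taking damped Newton steps on the log-barrier function φ_t(x) = t(0.5Qx² + qx) − log(b − x) and tightening the barrier parameter t:

```python
# Primal-barrier sketch for the scalar QP  min 0.5*Q*x^2 + q*x  s.t.  x <= b.
# The inequality is replaced by the log-barrier -log(b - x); Newton's method
# re-centers for each value of t, and t is increased until 1/t (a bound on
# the duality gap for a single constraint) is below the tolerance.
def barrier_qp(Q, q, b, t=1.0, mu=10.0, tol=1e-8):
    x = b - 1.0                          # strictly feasible starting point
    while 1.0 / t > tol:
        for _ in range(50):              # centering: damped Newton on phi_t
            g = t * (Q * x + q) + 1.0 / (b - x)       # phi_t'(x)
            h = t * Q + 1.0 / (b - x) ** 2            # phi_t''(x)
            step = g / h
            while x - step >= b:         # halve the step to stay strictly feasible
                step *= 0.5
            x -= step
            if abs(step) < 1e-12:
                break
        t *= mu                          # tighten the barrier and re-center
    return x

# Unconstrained minimizer of x^2 - 8x is x = 4; with x <= 1 the bound is active.
x_star = barrier_qp(Q=2.0, q=-8.0, b=1.0)
print(round(x_star, 4))   # -> 1.0 (approaches the active bound from the interior)
```

The iterates approach the active constraint from inside the feasible region, which is exactly the behavior that distinguishes interior-point from active-set methods.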
C. First-order QP Solvers

This class of QP solvers uses first-order gradient information to solve the QP problem. At each iteration, it does not require the solution of a system of linear equations, so the iteration is cheap, in contrast to the expensive iteration of an IPM. However, while IPMs require fewer iterations to converge, the first-order methods, with their linear convergence rate, require more iterations. In addition, these methods are sensitive to the conditioning of the problem. Typical advantages of the first-order methods include the capability of deriving a certificate on the number of iterations needed to reach a given suboptimality gap, and the suitability for fixed-point arithmetic thanks to their division-free characteristic. The main principle of these gradient-based methods is to alternate a gradient step on the (primal or dual) objective with a cheap projection onto the feasible set, possibly adding an extrapolation (momentum) step to accelerate convergence. We list the existing gradient-based methods for QP solvers that have been used for MPC below. Detailed descriptions can be found in the associated references.
1) Fast Gradient Method [64]
2) Dual Fast Gradient Method (DFGM) [65] [66]
3) Generalized DFGM [67]
4) Proximal Newton Methods [68].
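As a sketch of the first method in the list, the snippet below applies a projected fast gradient iteration (Nesterov acceleration for strongly convex problems) to a small box-constrained QP; the data, and the eigenvalue bounds L and mu of Q, are illustrative choices of ours, not taken from the cited works:

```python
# Projected fast gradient method for  min 0.5*x'Qx + q'x,  lb <= x <= ub.
# The projection onto the box is a cheap clipping operation, and the only
# division (by L) could be precomputed, in line with the division-free property.
def fast_gradient_qp(Q, q, lb, ub, L, mu, iters=200):
    n = len(q)
    beta = (L ** 0.5 - mu ** 0.5) / (L ** 0.5 + mu ** 0.5)   # momentum factor
    x = [0.0] * n
    y = x[:]
    for _ in range(iters):
        # gradient of the objective at the extrapolated point y
        grad = [sum(Q[i][j] * y[j] for j in range(n)) + q[i] for i in range(n)]
        # gradient step followed by projection onto the box
        x_new = [min(max(y[i] - grad[i] / L, lb[i]), ub[i]) for i in range(n)]
        # extrapolation (momentum) step
        y = [x_new[i] + beta * (x_new[i] - x[i]) for i in range(n)]
        x = x_new
    return x

Q = [[4.0, 1.0], [1.0, 3.0]]          # eigenvalues are roughly 2.38 and 4.62
q = [-8.0, -6.0]
x = fast_gradient_qp(Q, q, lb=[0.0, 0.0], ub=[1.0, 1.0], L=4.62, mu=2.38)
print([round(v, 3) for v in x])       # -> [1.0, 1.0] (both bounds active)
```

Every iteration costs only a matrix-vector product and a clipping, which is why these solvers map so well onto fixed-point embedded hardware.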
D. Alternating Direction Method of Multipliers

The Alternating Direction Method of Multipliers (ADMM) [69], also known as Douglas-Rachford splitting applied to the dual problem, was proposed to solve optimization problems of the following form:

  min_{x,z}  f(x) + g(z)                       (4)
  s.t.       Ax + Bz = c,                      (5)

where f, g are convex functions, x ∈ R^n, z ∈ R^m, A ∈ R^{p×n}, and B ∈ R^{p×m}. The augmented Lagrangian for the ADMM iteration is defined as

  L_ρ(x, z, y) = f(x) + g(z) + y^T (Ax + Bz - c) + (ρ/2) ‖Ax + Bz - c‖²₂,

where y is the Lagrange multiplier (dual) vector and ρ > 0 is the step size.
The ADMM iterations for problem (4), derived from the KKT optimality conditions, are as follows:

  x_{k+1} ← argmin_x L_ρ(x, z_k, y_k),                (6)
  z_{k+1} ← argmin_z L_ρ(x_{k+1}, z, y_k),            (7)
  y_{k+1} ← y_k + ρ (A x_{k+1} + B z_{k+1} - c).      (8)

In an ADMM algorithm, x and z are updated sequentially, or alternately. If we then define u := (1/ρ) y, the scaled ADMM iterations become

  x_{k+1} = argmin_x { f(x) + (ρ/2) ‖Ax + B z_k - c + u_k‖²₂ },
  z_{k+1} = argmin_z { g(z) + (ρ/2) ‖A x_{k+1} + Bz - c + u_k‖²₂ },
  u_{k+1} = u_k + (A x_{k+1} + B z_{k+1} - c).

The convergence of the algorithm to the optimal solution is monitored through the primal and dual residuals, defined as r_k := A x_k + B z_k - c and s_k := ρ A^T B (z_{k+1} - z_k), respectively. The convergence conditions are given in Theorems 1 and 2 in [69].
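For a concrete feel of these scaled iterations, the sketch below (a toy scalar problem of our own construction) applies them to the QP min 0.5Qx² + qx subject to Fx ≤ f, using the splitting f(x) = quadratic objective, g(z) = indicator of z ≥ 0, and the coupling constraint Fx - f + z = 0, so that A = F, B = I, c = f in (4)-(5):

```python
# Scaled ADMM for the toy QP  min 0.5*Q*x^2 + q*x  s.t.  F*x <= f  (scalars).
def admm_qp(Q, q, F, f, rho=1.0, iters=100):
    x, z, u = 0.0, 0.0, 0.0
    for _ in range(iters):
        # x-update: smooth objective plus quadratic penalty has a closed form
        x = (-q + rho * F * (f - z - u)) / (Q + rho * F * F)
        # z-update: projection of the penalized slack onto z >= 0
        z = max(0.0, f - F * x - u)
        # scaled dual update: running sum of the residual F*x - f + z
        u = u + F * x + z - f
    return x, z, u

# min x^2 - 8x s.t. x <= 1: the bound is active, so x* = 1 and u -> lambda/rho = 6.
x, z, u = admm_qp(Q=2.0, q=-8.0, F=1.0, f=1.0)
print(round(x, 6), round(u, 6))   # -> 1.0 6.0
```

Note that each update is elementary (one scalar division, one clipping, one addition); in the matrix case the x-update becomes a linear solve whose factorization can be cached across iterations.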
E. ADMM for QP problems formulated from MPC

There are two forms of QP problems arising from the MPC formulations: the dense and the sparse form.

1) Dense QP form having inequality constraints only:

  minimize_x  (1/2) x^T Q x + q^T x           (9)
  s.t.        Fx ≤ f,                          (10)

where Q is positive definite. (9) can be recast in the following form by introducing a slack vector z and the indicator function I(z), with I(z) := 0 if z ≥ 0 and +∞ otherwise:

  minimize_{x,z}  (1/2) x^T Q x + q^T x + I(z)
  s.t.            Fx - f + z = 0.

This results in the following ADMM iterations [70]:

  x_{k+1} := -(Q + ρ F^T F)^{-1} [ q + ρ F^T (z_k + u_k - f) ],   (11)
  z_{k+1} := max{0, -F x_{k+1} - u_k + f},                         (12)
  u_{k+1} := u_k + F x_{k+1} + z_{k+1} - f.

The step size selection for this dense QP problem is derived in [70] as follows:

  ρ = [ λ₁(F Q^{-1} F^T) λ_n(F Q^{-1} F^T) ]^{-1/2},               (13)

where λ₁ and λ_n denote the largest and smallest eigenvalues, respectively.

Another approach for solving (9) is presented in [71], wherein the inequality constraints are imposed on a copy variable w:

  minimize  (1/2) x^T Q x + q^T x              (14)
  s.t.      x = w                               (15)
            Fw ≤ f.                             (16)

Using this setup, the second step, which requires a projection onto the set represented by the inequality constraints, may become a complicated sub-problem.

2) Sparse QP form having equality and inequality constraints:

  minimize_x  (1/2) x^T Q x + q^T x            (17)
  s.t.        Fx ≤ f                            (18)
              Gx = g.                           (19)

Jerez [72] cast the problem as follows:

  minimize  (1/2) x^T Q x + q^T x + I_A(x) + I_K(z) + (ρ/2) ‖z - x‖²₂
  s.t.      x = z,

where ρ > 0 is the step size. This results in the calculations below:

  x_{k+1} := M₁₁ (-q + ρ (z_k - u_k)) + M₁₂ g,          (20)
  z_{k+1} := Π_K (x_{k+1} + u_k),                        (21)
  u_{k+1} := u_k + x_{k+1} - z_{k+1},                    (22)

where M₁₁ and M₁₂ are obtained off-line from solving

  [ M₁₁    M₁₂ ]     [ Q + ρI   G^T ]^{-1}
  [ M₁₂^T  M₂₂ ]  =  [ G        0   ]     .              (23)

The step size selection (ρ) has not been discussed, nonetheless. The first author (Thuy V. Dang) has cast the sparse problem differently and derived another set of ADMM iterations, in which the step size selection has been thoroughly addressed with formal formulas. Details are given in [73].

3) MPC formulation with sparse form: Consider a discrete-time state space model of the following form:

  x_{k+1} = A x_k + B u_k                      (24)
  y_k = C x_k.                                 (25)

The MPC optimization is then as follows:

  minimize  Σ_{k=0}^{N-1} ( x_k^T Q_x x_k + u_k^T R u_k ) + x_N^T Q_N x_N   (26)
  s.t.      x_k ∈ X ⊂ R^{n_x},  y_k ∈ Y ⊂ R^{n_y},  u_k ∈ U ⊂ R^{n_u}.     (27)

By using the sparse formulation, which keeps the states and controls in one common vector,

  x = (u_0  x_1  u_1  x_2  ...  u_{N-1}  x_N)^T,          (28)

and considering affine linear constraints X, Y, U, (26) can be written as follows:

  minimize_x  (1/2) x^T Q x + q^T x            (29)
  s.t.        Gx ≤ g                            (30)
              Fx = f,                           (31)

where Q ∈ R^{N_d×N_d}, Q ⪰ 0, F ∈ R^{p×N_d}, G ∈ R^{q×N_d}, x ∈ R^{N_d}, q ∈ R^{N_d}, f ∈ R^p, and g ∈ R^q, in which N_d := N(n_x + n_u) is the number of decision variables and p := (N - 1) n_x + n_u is the number of equality constraints. The matrices have the structure

  Q = blkdiag(R, Q_x, R, Q_x, ..., R, Q_N),    f = (-A x_0  0  ...  0)^T,

  F = [ B  -I   0   0  ...   0   0
        0   A   B  -I  ...   0   0
        :                        :
        0   0  ...   0   A   B  -I ].

One can also formulate the optimization in dense QP form, where the decision variables are only the control vector [74]. In this dense formulation, equality constraints do not exist. The computational complexity of IPM for this dense QP grows cubically with the horizon length [75], whereas the complexity grows linearly with the horizon length in the sparse formulation [76], [63]. The computational advantages of the sparse QP come from the special properties of the banded Hessian matrix and the banded, repeated pattern of the inequality constraints. A comparison of such formulations in numbers of flops (floating point operations) can be found in [75].
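To make the sparse-form structure concrete, the following sketch (scalar dynamics with n_x = n_u = 1, and data of our own choosing) assembles the block-diagonal Hessian Q, the banded equality-constraint matrix F and the right-hand side f of (29)-(31):

```python
# Assemble the sparse-form QP data for the stacked decision vector
# x = (u_0, x_1, u_1, x_2, ..., u_{N-1}, x_N) with scalar dynamics
# x_{k+1} = A x_k + B u_k, so N_d = 2N and each "block" is a single number.
def sparse_mpc_qp(A, B, Qx, QN, R, N, x0):
    Nd = 2 * N                                   # number of decision variables
    diag = []
    for k in range(N):
        diag += [R, Qx if k < N - 1 else QN]     # cost weights stage by stage
    Q = [[diag[i] if i == j else 0.0 for j in range(Nd)] for i in range(Nd)]
    F = [[0.0] * Nd for _ in range(N)]
    f = [0.0] * N
    for k in range(N):                           # row k: B u_k + A x_k - x_{k+1} = 0
        if k == 0:
            f[0] = -A * x0                       # known initial state moves to the RHS
        else:
            F[k][2 * k - 1] = A
        F[k][2 * k] = B
        F[k][2 * k + 1] = -1.0
    return Q, F, f

Q, F, f = sparse_mpc_qp(A=0.9, B=0.5, Qx=1.0, QN=10.0, R=0.1, N=3, x0=2.0)
print(F[1])   # -> [0.0, 0.9, 0.5, -1.0, 0.0, 0.0]
```

Each row of F touches at most three entries, which is the banded, repeated pattern that sparse IPM and ADMM solvers exploit for linear-in-horizon complexity.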
III. EMBEDDED MPC ON FPGA

For deploying MPC in embedded real-time applications where the computational resources are limited, the choice of QP algorithm and embedded platform is crucial for a successful implementation. The Field Programmable Gate Array (FPGA) has emerged as a computationally powerful embedded platform. The interior point and active set methods have previously been applied in algorithms with floating-point arithmetic implementations on FPGA for MPC [77], [61]. A mixed fixed-point and floating-point arithmetic implementation has also been presented in [78]. Algorithms for the IPM receive strong interest owing to their stable number of iterations and their lower sensitivity to ill-conditioned problems. In addition, IPMs are suited to medium and large QP problems. Current research on IPMs mainly focuses on solving the system of linear equations of the form Ax = b at each iteration. Novel solvers as well as dedicated hardware architectures have been proposed to efficiently solve this problem. Details of such implementations can be found in [79], [80], [81].
The first-order QP solvers have attracted significant interest recently, since they have a simpler structure, a lower-cost implementation, more efficient parallelism, and allow fixed-point arithmetic. Gradient-based QP solvers for embedded MPC on low-cost micro-controllers and DSPs have been presented in [82], [83], [84], [85]. The combination of the efficient parallelism of first-order QP solvers and the high parallel processing capacity of FPGAs is very promising, being able to deal with very fast dynamics at MHz frequencies [72]. The first-order methods typically have cheap iterations, which are favourable for a low-cost embedded implementation. However, they require a higher number of iterations than the IPM, and are sensitive to the conditioning of the problem. Research on embedded MPC based on first-order methods continues to focus on preconditioning and accelerating techniques, which include inexact schemes, early termination, fixed-point arithmetic aspects and custom hardware architectures; see, e.g., [67], [86], [87], [73], [66], and references therein.

The distributed MPC formulations for use with ADMM have been presented in [26], [88]. Discussions on distributed optimization via ADMM can be found in [69], [89], [90]. The scaling technique for distributed QP based on ADMM can be found in [91]. A communication-efficient ADMM-based distributed optimization is presented in [92].
IV. CONCLUSION

An overview of recently developed algorithms for the numerical optimization of QP problems arising from MPC formulations has been presented. IPM has been found effective for medium and large QP problems, with a predictable number of iterations and insensitivity to ill-conditioned problems. First-order QP solvers, which typically have cheap iterations, allow fixed-point arithmetic computations that suit low-cost platforms; preconditioning and accelerating techniques will help improve these algorithms further. The advantages of ADMM lie in its cheap, easily parallelized iterations and in its natural fit to distributed MPC formulations for large-scale systems.
R EFERENCES
[1] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert,
Constraint predictive control: Stability and optimality, Automatica,
vol. 36, pp. 789814, 2000.
[2] J. M. Maciejowski, Predictive Control With Constraints. Prentice Hall,
2002.
[3] E. F. Camacho and C. Bordons, Model Predictive Control. Springer:
Springer-Verlag, 2004.
[4] J. S. Qin and T. A. Badgwell, A survey of industrial model predictive
control technology, Control Engineering Practice, vol. 11, pp. 733
764, 2003.
[5] B. Kouvaritakis and M. Cannon, Nonlinear Predictive Control.
Springer: IEE Control Engineering Series 61, 2001.
[6] D. Q. Mayne, Model predictive control: Recent developments and
future promise, Automatica, vol. 50, no. 12, pp. 29672986, 2014.
[7] W. Levines, The Control Handbook. CRC Press, 1996.
[8] J. S. Qin and T. A. Badgwell, An overview of industrial model
predictive control technology, in AIChE Symposium Series: Fifth
International Conference on Chemical Process Control (K. J. C.,
G. C. E., and B. Carnahan, eds.), vol. 316, pp. 232256, AIChE
press ed., 1997.
[9] L. Desborough and R. Miller, Increasing customer value of industrial
control performance monitoring - Honeywells experience, Honeywell
Hi-Spec Solution White Paper, 2001.
[10] J. Richalet, Industrial application of predictive functional control,
Workshop on Nonlinear Model Based Control - Software and Applications (NMPC-SOFAP07), Loughborough UK, 2007.
[11] H. Takatsu, T. Itoh, and M. Araki, Future needs for the control theory
in industries - Report and topics of the control technology survey in
Japanese industry, Journal of Process Control, vol. 8, pp. 369374,
1997.
[12] M. Kano and M. Ogawa, The state of the art in chemical process
control in Japan: Good practice and questionnaire survey, Journal of
Process Control, vol. 20, no. 9, pp. 969982, 2010.
[13] R. E. Kalman, Contributions to the theory of optimal control,
Bulletin Soc. Math. Mex., vol. 5, pp. 102119, 1960.
[14] K. R. Muske and J. B. Rawlings, Model predictive control with linear
models, AIChE Journal, vol. 39, no. 2, pp. 262287, 1993.
[15] A. Zheng and M. Morari, Robust stability of constrained model predictive control, Proceedings of IEEE American Control Conference
ACC93, pp. 379383, June 1993.
[16] R. Findeisen, L. Imsland, F. Allgower, and B. A. Foss, State and
output nonlinear model predictive control: An overview, European
Journal of Control, vol. 9, pp. 190206, 2003.
[17] J. S. Qin and T. A. Badgwell, An overview of nonlinear model predictive control applications, in: Nonlinear Model Predictive Control,
Birkhauser, 2000.
[18] F. Allgower and A. Zheng, Nonlinear Model Predictive Control.
Birkhauser, 2000.
[19] E. F. Camacho and C. Bordons, Nonlinear model prediction control:
an introductory survey, Proceedings of International Workshop on
Assessment and Future Directions of Nonlinear Predictive Control
NMPC05, pp. 1530, 2005.
[20] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, The explicit linear quadratic regulator for constrained systems, Automatica,
vol. 38, pp. 320, 2002.
[21] A. Bemporad and M. Morari, Control of systems integrating logic,
dynamics, and constraints, Automatica, vol. 35, pp. 407427, 1999.
[22] B. Kouvaritakis, M. Cannon, S. V. Rakovic, and Q. Cheng, Explicit
use of probabilistic distributions in linear predictive control, Automatica, vol. 46, no. 10, pp. 17191724, 2010.
[23] J. Rawlings and D. Mayne, Model Predictive Control: Theory and
Design. Nob Hill Publishing, Wisconsin, 2009.

[24] M. Canon, Q. Cheng, B. Kouvaritakis, and S. V. Rakovic, Stochastic


tube MPC with state estimation, Automatica, vol. 48, no. 3, pp. 536
541, 2012.
[25] R. Scattolini, Architectures for distributed and hierarchical model
predictive control - A Review, Journal of Process Control, vol. 19,
pp. 723731, 2009.
[26] F. Farokhi, I. Shames, and K. H. Johansson, Distributed MPC via
dual decomposition and alternative direction method of multipliers,
in Distributed Model Predictive Control Made Easy, pp. 115131,
Springer, 2014.
[27] Tri Tran, K.-V. Ling, and J. M. Maciejowki, Economic model
predictive control: A review, in Proceedings of ISARC14, Sydney,
Australia, 2014.
[28] J. P. Gerry, Control loop optimization, In: Process Software and
Digital Networks, ISA Instrument Engineers Handbook, CRC Press,
2002.
[29] J. F. Coales and A. R. M. Noton, An on-off servomechanism with
predicted changeover, Proceedings of the IEEE, vol. 128, pp. 227
232, 1956.
[30] O. J. M. Smith, A controller to overcome dead time, ISA Journal,
vol. 6, pp. 2833, 1959.
[31] J. Richalet, A. Rault, J. L. Testud, and J. Papon, Model predictive
heuristic control: Application to industrial processes, Automatica,
vol. 14, pp. 413428, 1978.
[32] J. R. Leigh, Control Theory, 2nd Edition. IEE Control Engineering
Series 64, 2004.
[33] F. I. Lotz, The importance of optimal control for practical engineer,
Automatica, vol. 6, pp. 749753, 1970.
[34] F. L.Lewis, Applied Optimal Control and Estimation - Digital Design
and Implementation. Prentice Hall, 1992.
[35] C. R. Cutler and B. L. Ramaker, Dynamic matrix control - a computer
control algorithm, In Proceedings of the Joint Automatic Control
Conference, 1980.
[36] D. M. Prett and C. E. Garcia, Fundamental Process Control. Butterworths, 1988.
[37] C. E. Garca and A. M. Morshedi, Quadratic programming solution
of dynamic matrix control (QDMC), Chemical Engineering Communications, vol. 46, pp. 7387, 1986.
[38] J. Nocedal and S. J. Wright, Numerical Optimization, 2nd Edition.
Springer Verlag, 2006.
[39] E. S. Meadows, M. A. Henson, J. W. Eaton, and J. B. Rawlings, Receding horizon control and discontinuous state feedback stabilization,
Internationl Journal of Control, vol. 62, no. 5, pp. 12171229, 1995.
[40] S. S. Keerthi and E. Gilbert, Optimal infinite-horizon feedback laws
for a general class of constrained discrete-time systems: Stability and
moving horizon approximations, Journal of Optimization Theory and
Applications, vol. 57, no. 2, pp. 26529, 1988.
[41] D. Q. Mayne and H. Michalska, Receding horizon control of nonlinear system, IEEE Transactions on Automatic Control, vol. 35,
pp. 814824, July 1990.
[42] H. Michalska and D. Q. Mayne, Robust receding horizon control
of constrained nonlinear systems, IEEE Transactions on Automatic
Control, vol. 38, no. 11, pp. 16231633, 1993.
[43] H. Chen and F. Allgower, A quasi-infinite horizon nonlinear model
predictive control scheme with guaranteed stability, Automatica,
vol. 34, no. 10, pp. 12051218, 1998.
[44] P. O. M. Scokaert, D. Q. Mayne, and J. B. Rawlings, Suboptimal
model predictive control, IEEE Transactions on Automatic Control,
vol. 44, no. 3, pp. 648654, 1999.
[45] S. L. O. Kothare and M. Morari, Contractive model predictive control
for constrained nonlinear systems, IEEE Transactions on Automatic
Control, vol. 45, no. 6, pp. 10531071, 2000.
[46] J. A. Primbs, V. Nevistic, and J. C. Doyle, On receding horizon
extensions and control Lyapunov functions, Proceedings of IEEE
American Control Conference ACC98, Philadelphia, pp. 32763280,
1998.
[47] P. Mhaskar, N. H. El-Farra, and P. D. Christofides, Stabilization of
nonlinear systems with state and control constraints using Lyapunovbased predictive control, Systems and Control Letters, vol. 55,
pp. 650659, 2006.
[48] N. H. El-Farra, P. Mhaskar, and P. D. Christofides, Uniting bounded
control and MPC for stabilization of constrained linear systems,
Automatica, vol. 40, no. 1, pp. 101110, 2004.

[49] J. R. Gossner, B. Kouvaritakis, and J. A. Rossiter, Stable generalized


predictive control with constraints and bounded disturbances, Automatica, vol. 33, no. 4, pp. 551568, 1997.
[50] I. Kolmanovsky and E. Gilbert, Theory and computation of disturbance invariant sets for discrete-time linear systems, Mathematical
Problems in Engineering: Theory - Methods and Applications, vol. 4,
pp. 317367, 1998.
[51] F. Blanchini, Set invariant in control, Automatica, vol. 35, pp. 1747
1767, 1999.
[52] L. Chisci, J. A. Rossiter, and G. Zappa, Systems with persistent disturbances: predictive control with restrictive constraints, Automatica,
vol. 37, pp. 10191028, 2001.
[53] E. C. Kerrigan and J. M. Maciejowski, Invariant sets for constrained
nonlinear discrete-time systems with application to feasibility in model
predictive control, Proceedings of the 39th IEEE Conference on
Decision and Control, Sydney, pp. 4951–4956, 2000.
[54] M. Cannon, B. Kouvaritakis, and V. Deshmukh, Enlargement of
polytopic terminal region in NMPC by interpolation and partial
invariance, Automatica, vol. 40, pp. 311–317, 2004.
[55] P. J. Goulart, E. C. Kerrigan, and J. M. Maciejowski, Optimization
over state feedback policies for robust control with constraints,
Automatica, vol. 42, pp. 523–533, 2006.
[56] C. J. Ong and E. G. Gilbert, The minimal disturbance invariant
set: Outer approximations via its partial sums, Automatica, vol. 42,
pp. 1563–1568, 2006.
[57] L. Imsland, J. A. Rossiter, B. Pluymers, and J. Suykens, Robust triple
mode MPC, International Journal of Control, vol. 81, no. 4,
pp. 679–689, 2008.
[58] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge
University Press, 2004.
[59] S. J. Wright, Interior point methods for optimal control of discrete-time
systems, Journal of Optimization Theory and Applications,
vol. 77, no. 1, pp. 161–187, 1993.
[60] R. Fletcher, Practical Methods of Optimization. John Wiley & Sons,
New York, 1987.
[61] M. S. Lau, S. Yue, K. Ling, and J. Maciejowski, A comparison
of interior point and active set methods for FPGA implementation
of model predictive control, in Proc. European Control Conference,
pp. 156–160, 2009.
[62] S. J. Wright, Primal-Dual Interior-Point Methods, vol. 54. SIAM, 1997.
[63] Y. Wang and S. Boyd, Fast model predictive control using online
optimization, IEEE Transactions on Control Systems Technology,
vol. 18, no. 2, pp. 267–278, 2010.
[64] S. Richter, C. N. Jones, and M. Morari, Real-time input-constrained
MPC using fast gradient methods, in Proceedings of the 48th IEEE
Conference on Decision and Control, held jointly with the 28th
Chinese Control Conference (CDC/CCC 2009), pp. 7387–7393,
IEEE, 2009.
[65] S. Richter, C. N. Jones, and M. Morari, Computational complexity
certification for real-time MPC with input constraints based on the fast
gradient method, IEEE Transactions on Automatic Control, vol. 57,
no. 6, pp. 1391–1403, 2012.
[66] P. Patrinos and A. Bemporad, An accelerated dual gradient-projection
algorithm for linear model predictive control, in Decision and Control
(CDC), 2012 IEEE 51st Annual Conference on, pp. 662–667, IEEE,
2012.
[67] P. Giselsson, Improving fast dual ascent for MPC part II: The
embedded case, arXiv preprint arXiv:1312.3013, 2013.
[68] P. Patrinos and A. Bemporad, Proximal Newton methods for convex
composite optimization, in Decision and Control (CDC), 2013 IEEE
52nd Annual Conference on, pp. 2358–2363, IEEE, 2013.
[69] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed
optimization and statistical learning via the alternating direction
method of multipliers, Foundations and Trends in Machine
Learning, vol. 3, no. 1, pp. 1–122, 2011.
[70] E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson, Optimal
parameter selection for the alternating direction method of multipliers
(ADMM): quadratic problems, arXiv:1306.2454 [math.OC], 2013.
[71] A. U. Raghunathan and S. Di Cairano, Alternating direction method
of multipliers for strictly convex quadratic programs: Optimal
parameter selection, in American Control Conference (ACC), 2014,
pp. 4324–4329, IEEE, 2014.
[72] J. L. Jerez, P. J. Goulart, S. Richter, G. A. Constantinides, E. C.
Kerrigan, and M. Morari, Embedded online optimization for model
predictive control at megahertz rates, arXiv:1303.1090 [cs.SY], 2013.
[73] T. V. Dang, K. V. Ling, and J. M. Maciejowski, Embedded ADMM-based
QP solver for MPC with polytopic constraints, submitted to the
European Control Conference (ECC), 2015.
[74] J. M. Maciejowski, Predictive Control: with Constraints. Pearson
Education, 2002.
[75] J. L. Jerez, E. C. Kerrigan, and G. A. Constantinides, A sparse
and condensed QP formulation for predictive control of LTI systems,
Automatica, vol. 48, no. 5, pp. 999–1002, 2012.
[76] S. J. Wright, Applying new optimization algorithms to model
predictive control, in AIChE Symposium Series, vol. 93, pp. 147–155,
1997.
[77] S. Yue, K. V. Ling, and J. M. Maciejowski, A FPGA implementation
of model predictive control, in Proc. American Control Conference
(ACC), 2006, pp. 1930–1935, IEEE, 2006.
[78] J. L. Jerez, G. A. Constantinides, and E. C. Kerrigan, Towards a
fixed-point QP solver for predictive control, in Proc. IEEE Conf.
on Decision and Control, 2012.
[79] A. Shahzad, E. C. Kerrigan, and G. A. Constantinides, A fast
well-conditioned interior point method for predictive control, in Decision
and Control (CDC), 2010 49th IEEE Conference on, pp. 508–513,
IEEE, 2010.
[80] J. L. Jerez, G. A. Constantinides, and E. C. Kerrigan, FPGA
implementation of an interior point solver for linear model predictive
control, in Field-Programmable Technology (FPT), 2010 International
Conference on, pp. 316–319, IEEE, 2010.
[81] J. Liu, H. Peyrl, A. Burg, and G. A. Constantinides, FPGA
implementation of an interior point method for high-speed model
predictive control, in Field Programmable Logic and Applications
(FPL), 2014 24th International Conference on, pp. 1–8, IEEE, 2014.
[82] P. Zometa, M. Kögel, T. Faulwasser, and R. Findeisen, Implementation
aspects of model predictive control for embedded systems, in
American Control Conference (ACC), 2012, pp. 1205–1210, IEEE,
2012.
[83] P. Patrinos, A. Guiggiani, and A. Bemporad, Fixed-point dual gradient
projection for embedded model predictive control, in Control
Conference (ECC), 2013 European, pp. 3602–3607, IEEE, 2013.
[84] S. Richter, S. Mariéthoz, and M. Morari, High-speed online MPC
based on a fast gradient method applied to power converter control,
in American Control Conference (ACC), 2010, pp. 4737–4743, IEEE,
2010.
[85] A. Guiggiani, P. Patrinos, and A. Bemporad, Fixed-point
implementation of a proximal Newton method for embedded model
predictive control (i), IFAC, 2014.
[86] T. V. Dang and K. V. Ling, Moving horizon estimation on a chip, in
The 13th International Conference on Control, Automation, Robotics
and Vision, ICARCV, Singapore, 2014.
[87] P. Giselsson and S. Boyd, Preconditioning in fast dual gradient
methods, in Proceedings of the 53rd Conference on Decision and
Control, 2014.
[88] J. F. Mota, J. M. Xavier, P. M. Aguiar, and M. Püschel, Distributed
optimization with local domains: applications in MPC and network
flows, arXiv preprint arXiv:1305.1885, 2013.
[89] F. Iutzeler, P. Bianchi, P. Ciblat, and W. Hachem, Linear convergence
rate for distributed optimization with the alternating direction method
of multipliers, in Decision and Control (CDC), 2014 53rd IEEE
Conference on, pp. 5046–5051, IEEE, 2014.
[90] E. Wei and A. E. Ozdaglar, Distributed alternating direction method
of multipliers, in CDC, pp. 5445–5450, 2012.
[91] A. Teixeira, E. Ghadimi, I. Shames, H. Sandberg, and M. Johansson,
Optimal scaling of the ADMM algorithm for distributed quadratic
programming, in Decision and Control (CDC), 2013 IEEE 52nd
Annual Conference on, pp. 6868–6873, IEEE, 2013.
[92] J. F. Mota, J. M. Xavier, P. M. Aguiar, and M. Püschel, D-ADMM:
A communication-efficient distributed algorithm for separable
optimization, IEEE Transactions on Signal Processing, vol. 61, no. 10,
pp. 2718–2723, 2013.