
Proceedings of the 2010 International Conference on Modelling, Identification and Control, Okayama, Japan, July 17-19, 2010

Model Predictive Tracking Control Using a State-dependent Gain-scheduled Feedback

Nobutaka Wada†, Hiroyuki Tomosugi‡, Masami Saeki‡ and Masaharu Nishimura†

† Department of Mechanical and Aerospace Engineering, Tottori University, 4-101 Koyama-Minami, Tottori, 680-8552 JAPAN (nwada@mech.tottori-u.ac.jp)
‡ Department of Mechanical System Engineering, Hiroshima University, 1-4-1 Higashi-Hiroshima, Hiroshima, 739-8527 JAPAN

Abstract— In this paper, we propose a method of synthesizing a model predictive control (MPC) law for linear dynamical systems with input constraints. The proposed control law is composed of a finite horizon open-loop optimal control law and a state-dependent gain-scheduled feedback control law. By using the proposed MPC, both high control performance and a large region of attraction can be achieved. We show that, by using the control law, closed-loop stability can be guaranteed and the tracking error converges to zero in the case where the reference signal to be tracked is generated by a certain linear dynamics. The control algorithm is reduced to a convex optimization problem.

I. INTRODUCTION

Model predictive control has been widely adopted in industry as an effective method for dealing with multivariable constrained control problems. This control scheme relies on on-line optimization: at each time instant a new value of the control signal is calculated on the basis of the current measurement and the prediction of the future states. In the last decades, the issues of feasibility of the on-line optimization and stability of the closed-loop system have been extensively studied. The dual-mode MPC [7] is widely accepted as one of the most systematic approaches to designing an MPC that guarantees feasibility and stability. In the dual-mode approach, the open-loop optimal control is applied initially, and a stabilizing feedback control law is utilized after the state variable reaches a positively invariant set. In the standard dual-mode MPC approach, a fixed state feedback control law designed off-line is utilized as the stabilizing feedback controller. In this case, to expand the region of attraction, we need to use a low-gain control law as the stabilizing controller, which results in poor performance. To overcome this problem, a novel dual-mode MPC approach is proposed in [2]. In that approach, the stabilizing control law is also recomputed at each sampling time. This enables us to obtain an MPC that achieves both a large region of attraction and high control performance. However, the number of decision variables becomes quite large when the control law is applied to large scale systems and/or multi-input systems. This causes large computation time.

In this paper, we show a novel design method of a dual-mode MPC that achieves both high control performance and a large region of attraction. In the proposed approach, the state-dependent gain-scheduled (SDGS) control law [9] is utilized as the stabilizing control law. The SDGS control law has a structure in which a high-gain control law and a low-gain control law are interpolated by a single scheduling parameter. By using this control law, the number of decision variables can be reduced as compared with [2]. On the other hand, the usual MPC treats a regulation problem and cannot be directly applied to a tracking control problem. Hence, we show a design method of a control law that guarantees asymptotic convergence of the tracking error to zero in the case where the reference signal to be tracked is generated by a certain linear dynamics.

Notations: For vectors x, y ∈ R^n, x ≤ y implies x_i ≤ y_i for i = 1, ..., n. For a matrix M, M_l denotes the l-th row of M. For a vector x, ||x|| denotes its Euclidean norm. For a vector x and a positive definite matrix P, we define ||x||_P^2 := x^T P x. For a positive definite matrix P and a positive scalar ρ, we define the set E(P, ρ) := { x : x^T P x ≤ ρ }.

II. PROBLEM FORMULATION

Let us consider the system described by

  x(k+1) = A x(k) + B u(k)    (1)
  e(k) = C x(k) - r(k)    (2)

where x(k) ∈ R^n is the state, u(k) ∈ R^m is the control input, r(k) ∈ R^p is the reference signal, and e(k) ∈ R^p is the tracking error. Moreover, x(k+i|k) denotes the value of x(k+i) predicted at time k. We assume that there exists a magnitude limitation on the control input as follows:

  |u_l(k)| ≤ ū_l,  l = 1, ..., m,  ∀k ≥ 0    (3)

where ū_l > 0. Moreover, we assume that the reference signal is generated by

  r(k+1) = A_r r(k)    (4)

We assume that the matrix A_r has all its eigenvalues on the unit circle in the complex plane. It should be noted that, by suitably choosing the matrix A_r, the dynamics (4) can generate a step signal or a periodic signal. We make the following assumption.

Assumption 1: There exist matrices Π ∈ R^{n×p} and Γ ∈ R^{m×p} that satisfy

  Π A_r = A Π + B Γ    (5)
  C Π = I    (6)

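The matrices Π and Γ in Assumption 1 can be computed off-line from (5) and (6). As a rough illustration (not taken from the paper; the function name and the least-squares formulation are assumptions of this sketch), the two linear equations can be stacked into a single linear system via Kronecker-product vectorization:

```python
import numpy as np

def solve_regulator_equations(A, B, C, Ar):
    """Solve Pi*Ar = A*Pi + B*Gamma, C*Pi = I for (Pi, Gamma).

    A (n x n), B (n x m), C (p x n), Ar (p x p); the unknowns are
    Pi (n x p) and Gamma (m x p).  Uses vec(X*Y*Z) = (Z^T kron X) vec(Y)
    with column-major vec to stack both equations into one system.
    """
    n, m = B.shape
    p = Ar.shape[0]
    Ip, In = np.eye(p), np.eye(n)

    # Unknown vector: [vec(Pi); vec(Gamma)].
    # Eq. (5): (Ar^T kron I_n - I_p kron A) vec(Pi) - (I_p kron B) vec(Gamma) = 0
    row1 = np.hstack([np.kron(Ar.T, In) - np.kron(Ip, A), -np.kron(Ip, B)])
    # Eq. (6): (I_p kron C) vec(Pi) = vec(I_p)
    row2 = np.hstack([np.kron(Ip, C), np.zeros((p * p, m * p))])

    M = np.vstack([row1, row2])
    rhs = np.concatenate([np.zeros(n * p), np.eye(p).flatten(order="F")])

    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    Pi = sol[: n * p].reshape((n, p), order="F")
    Gamma = sol[n * p:].reshape((m, p), order="F")
    return Pi, Gamma
```

Under Assumption 1 the stacked system is solvable, and the least-squares solve then returns an exact solution.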


Further, we make the following assumption.

Assumption 2: The following inequalities hold:

  |Γ_l r(k)| < ū_l,  l = 1, ..., m,  ∀k ≥ 0    (7)

The above assumption guarantees that the control signal satisfies the constraint (3) in the steady state.

In the following, we derive an error system. By multiplying (5) from the right by r(k), we obtain

  Π r(k+1) = A Π r(k) + B Γ r(k)    (8)

Then we introduce

  ξ(k) := x(k) - Π r(k)    (9)
  v(k) := u(k) - Γ r(k)    (10)

From (1), (4), (8), (9) and (10), we obtain

  ξ(k+1) = A ξ(k) + B v(k)    (11)

Similarly, from (2), (6) and (9), the signal e(k) can be rewritten as

  e(k) = C ξ(k)    (12)

Therefore, the error system can be represented as (11) and (12).

In this paper, we consider the following problem.

Problem 1: Consider the system (11) and (12). Design an MPC algorithm that minimizes the following cost function at each time k and achieves e(k) → 0 as k → ∞ under the input constraint (3):

  J(k) = Σ_{i=0}^{∞} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 )    (13)

To achieve the control objective in Problem 1, we adopt the so-called dual-mode MPC law. We use the open-loop optimal control law during the time interval i = 0, ..., N-1, where N is the switching horizon, and switch to the following feedback control law for i ≥ N:

  v(k+i|k) = F(k+i|k) ξ(k+i|k)    (14)

Here F(k+i|k) is a time-varying feedback gain. In the next section, we derive several conditions to realize this control algorithm.

III. CONSTRAINTS ON OPEN-LOOP OPTIMAL CONTROL

A. Upper Bound of the Cost Function during Open-loop Optimal Control

In the following, we derive an upper bound of the cost function during i = 0, ..., N-1. From (11), we obtain

  Ξ(k) = Φ ξ(k|k) + S V(k)    (15)

where

  Ξ(k) := [ ξ(k+1|k)^T, ..., ξ(k+N|k)^T ]^T,
  V(k) := [ v(k|k)^T, ..., v(k+N-1|k)^T ]^T,
  Φ := [ A^T, (A^2)^T, ..., (A^N)^T ]^T,

and S is the block lower-triangular matrix whose (i, j) block (i ≥ j) is A^{i-j} B. Moreover, by using Q̃ := diag(Q, ..., Q) and R̃ := diag(R, ..., R), the cost accumulated over the open-loop interval can be written as

  J_N(k) := Σ_{i=0}^{N-1} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 ) = ||Ẽ(k)||_{Q̃}^2 + ||V(k)||_{R̃}^2    (16)

where Ẽ(k) := [ e(k|k)^T, ..., e(k+N-1|k)^T ]^T is, by (11) and (12), an affine function of ξ(k|k) and V(k). We introduce a scalar γ(k) that satisfies

  J_N(k) ≤ γ(k)    (17)

By substituting (16) into (17), applying the Schur complement [3], and substituting (15) into the resulting inequality, we obtain the matrix inequality (18), in which * stands for a symmetric block. Eq. (18) is an LMI [3] with respect to the variables γ(k) and V(k).

B. Input Constraints on Open-loop Optimal Control

In this section, we show that the input constraints (3) during i = 0, ..., N-1 can be reduced to LMI constraints. From (3) and (10), the input constraint at each time can be rewritten as

  -ū - Γ r(k+i) ≤ v(k+i|k) ≤ ū - Γ r(k+i)

Moreover, from (4), we have r(k+i) = A_r^i r(k). Therefore, the input constraints (3) during i = 0, ..., N-1 can be rewritten as

  -Ū - Γ̃ r(k) ≤ V(k) ≤ Ū - Γ̃ r(k)    (19)
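To make the stacked quantities in (15) and (19) concrete, the following sketch (illustrative only; the function names and argument conventions are assumptions) builds the prediction matrices Φ and S and the stacked input bounds with NumPy.

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build Phi and S such that Xi = Phi @ xi0 + S @ V, where Xi stacks
    xi(k+1|k), ..., xi(k+N|k) and V stacks v(k|k), ..., v(k+N-1|k)."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i) for i in range(1, N + 1)])
    S = np.zeros((N * n, N * m))
    for i in range(N):          # block row i corresponds to xi(k+i+1|k)
        for j in range(i + 1):  # influenced by v(k+j|k), j <= i
            S[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, S

def stacked_input_bounds(Gamma, Ar, r, u_bar, N):
    """Stacked bounds of (19): -U_bar - G@r(k) <= V(k) <= U_bar - G@r(k)."""
    G = np.vstack([Gamma @ np.linalg.matrix_power(Ar, i) for i in range(N)])
    U_bar = np.tile(u_bar, N)
    lower = -U_bar - G @ r
    upper = U_bar - G @ r
    return lower, upper
```

With these matrices, the open-loop cost (16) becomes an explicit quadratic form in ξ(k|k) and V(k), which is the quantity bounded by γ(k) through (18).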

where Ū := [ ū^T, ..., ū^T ]^T and Γ̃ := [ Γ^T, (Γ A_r)^T, ..., (Γ A_r^{N-1})^T ]^T. The constraints (19) are linear, and hence LMI, constraints with respect to the variable V(k).

IV. CONSTRAINTS ON CLOSED-LOOP CONTROL

In this paper, we use the gain-scheduled feedback control law (14) for i ≥ N. In the following, we show a design method of this control law. We introduce the following theorem.

Theorem 1: Consider the error system (11) and (12). For given positive scalars ρ and λ with 0 < λ < 1, suppose there exist matrices Y_1, Y_2 and positive definite matrices Q_1, Q_2 that satisfy the LMI (20), which guarantees, with decay rate λ, invariance of the associated ellipsoids and a decrease of the cost along the closed-loop trajectories, and the LMI (21), which guarantees satisfaction of the input constraints on those ellipsoids. Moreover, assume that there exists a scalar α(k) ∈ [0, 1] such that ξ(k+N|k) ∈ E(P(α(k)), ρ α(k)), where

  Q(α) := α Q_1 + (1-α) Q_2,  P(α) := ρ Q(α)^{-1}

Then, by applying the following control law to the error system (11), (12):

  v(k+i|k) = F(α(k+i|k)) ξ(k+i|k),  i ≥ N    (22)

where F(α) := Y(α) Q(α)^{-1} and Y(α) := α Y_1 + (1-α) Y_2, the following relations hold:

  ξ(k+i|k) ∈ E(P(α(k)), ρ α(k)),  ∀i ≥ N
  |u_l(k+i|k)| ≤ ū_l,  l = 1, ..., m,  ∀i ≥ N
  Σ_{i=N}^{∞} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 ) ≤ ρ α(k)

Proof) From eq. (20), we obtain (23). By substituting Y(α(k)) = α(k) Y_1 + (1-α(k)) Y_2 into (23), performing a congruence transformation with diag(Q(α(k))^{-1}, I), and applying the Schur complement to the resulting inequality, we obtain (24), where P(α(k)) = ρ Q(α(k))^{-1}. By multiplying (24) from the left by ξ(k+i|k)^T and from the right by ξ(k+i|k), and substituting (11) and (22) into the resulting inequality, we obtain

  ||ξ(k+i+1|k)||_{P(α(k))}^2 - ||ξ(k+i|k)||_{P(α(k))}^2 ≤ -( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 )    (25)

From (25) and ξ(k+N|k) ∈ E(P(α(k)), ρ α(k)), the relations ξ(k+i|k) ∈ E(P(α(k)), ρ α(k)), ∀i ≥ N, and ξ(k+i|k) → 0 as i → ∞ hold. Moreover, by summing (25) from i = N to ∞, and using lim_{i→∞} ξ(k+i|k) = 0, we obtain

  Σ_{i=N}^{∞} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 ) ≤ ||ξ(k+N|k)||_{P(α(k))}^2

Furthermore, from ξ(k+N|k) ∈ E(P(α(k)), ρ α(k)), the relation Σ_{i=N}^{∞} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 ) ≤ ρ α(k) holds.

From (21), we obtain (26). By performing a congruence transformation of (26) with diag(Q(α(k))^{-1}, I) and applying the Schur complement to the resulting inequality, we obtain (27). Then, by multiplying (27) from the left by ξ(k+i|k)^T and from the right by ξ(k+i|k), and using (14) and (10), we obtain (28), which bounds each component of the control input u(k+i|k) on the ellipsoid E(P(α(k)), ρ α(k)).
Therefore, from (7) and ξ(k+i|k) ∈ E(P(α(k)), ρ α(k)), the relations |u_l(k+i|k)| ≤ ū_l, l = 1, ..., m, hold for all i ≥ N. (Q.E.D.)

By applying the Schur complement to the inequality ξ(k+N|k)^T P(α(k)) ξ(k+N|k) ≤ ρ α(k) and substituting P(α(k)) = ρ Q(α(k))^{-1} into the resulting inequality, we obtain

  [ α(k)        ξ(k+N|k)^T ]
  [ ξ(k+N|k)    Q(α(k))    ]  ⪰ 0    (29)

From (15), ξ(k+N|k) can be represented as ξ(k+N|k) = A^N ξ(k|k) + S_N V(k), where S_N is the last block row of S. Hence, (29) is an LMI with respect to the variables α(k) and V(k).

Remark 1: In this paper, based on Theorem 1, we design a gain F_1 = Y_1 Q_1^{-1} which enlarges the region of attraction and a gain F_2 = Y_2 Q_2^{-1} which achieves fast convergence of the state variables.

Remark 2: It is clear that the following inequality holds:

  J(k) = Σ_{i=0}^{N-1} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 ) + Σ_{i=N}^{∞} ( ||e(k+i|k)||_Q^2 + ||v(k+i|k)||_R^2 ) ≤ γ(k) + ρ α(k)

where the first sum is bounded by γ(k) through (17) and the second sum is bounded by ρ α(k) through Theorem 1.

V. MODEL PREDICTIVE CONTROL ALGORITHM

A. Control Algorithm

We propose the following MPC algorithm.

Algorithm 1:
  Step 1: Set k = 0.
  Step 2: Measure x(k) and r(k), and compute ξ(k) = x(k) - Π r(k).
  Step 3: If ξ(k) ∈ E(P(α(k)), ρ α(k)) for some α(k) ∈ [0, 1], set v(k|k) = F(α(k)) ξ(k) and go to Step 4. Otherwise, solve

    min_{γ(k), V(k), α(k)}  γ(k) + ρ α(k)   subject to (18), (19), (29)

  Step 4: Apply u(k) = v(k|k) + Γ r(k) to the plant (1), (2).
  Step 5: Set k = k + 1 and go to Step 2.

Remark 3: The optimization problem in Step 3 is an LMI optimization problem with respect to the variables α(k), γ(k) and V(k). Therefore, it can be solved efficiently by an interior point algorithm.
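Remark 1's two gains are combined through the scheduling parameter α, and Step 3 tests whether the current error state already lies in one of the scheduled ellipsoids. The following sketch (an illustration only; the membership test used here is one plausible reading of the ellipsoid condition in Theorem 1 and (29), and the function names and tolerance are assumptions) interpolates the gains and searches for the smallest feasible α by bisection.

```python
import numpy as np

def sdgs_gain(alpha, Y1, Q1, Y2, Q2):
    """Interpolated SDGS gain F(alpha) = Y(alpha) Q(alpha)^{-1}, cf. (22)."""
    Y = alpha * Y1 + (1.0 - alpha) * Y2
    Q = alpha * Q1 + (1.0 - alpha) * Q2
    return Y @ np.linalg.inv(Q)

def schedule_alpha(xi, Q1, Q2, tol=1e-6):
    """Smallest alpha in [0, 1] with xi^T Q(alpha)^{-1} xi <= alpha, i.e.
    xi in E(P(alpha), rho*alpha) with P(alpha) = rho*Q(alpha)^{-1}
    (the test is independent of rho).  Returns None if even alpha = 1
    does not contain xi."""
    def inside(alpha):
        if alpha <= 0.0:
            return bool(np.allclose(xi, 0.0))
        Q = alpha * Q1 + (1.0 - alpha) * Q2
        return bool(xi @ np.linalg.solve(Q, xi) <= alpha)

    if not inside(1.0):
        return None
    lo, hi = 0.0, 1.0
    while hi - lo > tol:        # bisection on the scalar membership test
        mid = 0.5 * (lo + hi)
        if inside(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

The bisection assumes the ellipsoids E(P(α), ρα) are nested in α; if they are not, a grid search over α can be used instead.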
B. Feasibility and Stability

The following theorem can be stated.

Theorem 2: Consider the system (11), (12). Assume that Algorithm 1 is feasible at time k. Then, Algorithm 1 is feasible for all times k' ≥ k, and e(k) → 0 as k → ∞ holds.

Proof) We first consider the feasibility of Algorithm 1. By assumption, there exists a feasible solution at sampling time k. Then, at the next time k+1, a feasible solution is guaranteed to exist, since the solution calculated at time k, shifted by one step, is feasible. The shifted candidate (30)-(32) is obtained by dropping the first element of V*(k), appending the feedback input F(α*(k)) ξ*(k+N|k) as the last element, and keeping the scheduling parameter α*(k), where the asterisk denotes a solution obtained from the optimization problem at time k. This argument can be continued for times k+2, k+3, .... Therefore, we can conclude that Algorithm 1 is feasible for all times k' ≥ k.

Next, we show that e(k) → 0 as k → ∞. From Sections III and IV, the following relation holds:

  J(k) ≤ J̄(k) := γ(k) + ρ α(k)    (33)

From (30)-(32), the value of J̄ attained by the shifted candidate at time k+1 can be bounded as in (34). From (25), we have (35). Combining (34) and (35) yields (36). Hence, without recomputing an input sequence, there exist γ and α such that

  J̄(k+1) ≤ J̄(k) - ( ||e(k|k)||_Q^2 + ||v(k|k)||_R^2 )    (37)

In Algorithm 1, J̄(k+1) is minimized at time k+1. This convex optimization always yields a global optimum, so the optimal value satisfies J̄*(k+1) ≤ J̄(k+1). Hence, J̄(k) decreases monotonically, by at least the stage cost at each step. As a result, the relations ξ(k) → 0 and e(k) → 0 as k → ∞ hold. (Q.E.D.)

VI. WHOLE OPTIMIZATION OF A CLOSED-LOOP CONTROLLER

In the preceding MPC law, the structure of the control law for i ≥ N is confined to (22), and α(k) is the only tuning parameter of the stabilizing controller. In contrast, we can construct an MPC law in which all the elements of the feedback gain F are optimized at each sampling time.
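As a rough sketch of how the per-sample optimization in Step 3 of Algorithm 1 could be posed with an off-the-shelf convex solver, the following CVXPY code minimizes γ(k) + ρ α(k) subject to the reconstructed forms of (19) and (29) (this is an illustration, not the authors' implementation; the epigraph variable of (18) is replaced by the open-loop cost expression itself, and all names are assumptions).

```python
import cvxpy as cp
import numpy as np

def mpc_step3(xi0, A, B, C, Q, R, Q1, Q2, rho, N, v_lo, v_hi):
    """Sketch of Step 3 of Algorithm 1: minimize gamma(k) + rho*alpha(k)
    over the open-loop inputs V(k) and the scheduling parameter alpha(k).
    v_lo, v_hi are (N, m) arrays holding the stacked bounds of (19)."""
    n, m = B.shape
    v = [cp.Variable(m) for _ in range(N)]        # v(k|k), ..., v(k+N-1|k)
    alpha = cp.Variable(nonneg=True)

    constraints = [alpha <= 1]
    gamma = float((C @ xi0) @ Q @ (C @ xi0))      # e(k|k) term of (16)
    xi = xi0                                      # xi(k|k)
    for i in range(N):
        constraints += [v[i] >= v_lo[i], v[i] <= v_hi[i]]     # (19)
        gamma = gamma + cp.quad_form(v[i], R)
        xi = A @ xi + B @ v[i]                    # (11): xi(k+i+1|k)
        if i < N - 1:
            gamma = gamma + cp.quad_form(C @ xi, Q)           # e(k+i+1|k) term

    # Terminal condition (29): [[alpha, xi_N^T], [xi_N, Q(alpha)]] >= 0,
    # with Q(alpha) = alpha*Q1 + (1 - alpha)*Q2 affine in alpha.
    xi_N = cp.reshape(xi, (n, 1))
    Qa = alpha * Q1 + (1 - alpha) * Q2
    lmi = cp.bmat([[cp.reshape(alpha, (1, 1)), xi_N.T],
                   [xi_N, Qa]])
    constraints.append(lmi >> 0)

    prob = cp.Problem(cp.Minimize(gamma + rho * alpha), constraints)
    prob.solve(solver=cp.SCS)
    return np.asarray(v[0].value), float(alpha.value), prob.value
```

Setting Y_1 = Y_2 = Y and Q_1 = Q_2 = Q and adding (20), (21) as constraints on those variables gives the corresponding Step 3 of Algorithm 2 below.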
Fig. 1. Tracking error (solid: Algorithm 1, dashed: Algorithm 2)

Fig. 2. Cost function J (solid: Algorithm 1, dashed: Algorithm 2)

Fig. 3. Control input u1 (solid: Algorithm 1, dashed: Algorithm 2)

Fig. 4. Control input u2 (solid: Algorithm 1, dashed: Algorithm 2)

   
Algorithm 2:
  Step 1: Set k = 0.
  Step 2: Measure x(k) and r(k).
  Step 3: Solve

    min_{γ(k), V(k), α(k), Y, Q}  γ(k) + ρ α(k)   subject to (18)-(21), (29) with Y_1 = Y_2 = Y and Q_1 = Q_2 = Q

  Step 4: Apply u(k) = v(k|k) + Γ r(k) to the plant (1), (2).
  Step 5: Set k = k + 1 and go to Step 2.

The convergence property of Algorithm 2 is immediately proved by replacing F(α(k)) with Y Q^{-1} and P(α(k)) with ρ Q^{-1} in the proof of Theorem 2. This control algorithm can be considered as an extension of the control law of [2] so that the tracking control problem can be handled. In this control algorithm, the number of decision variables increases when the control algorithm is applied to large scale systems and/or multi-input systems. This may result in large computation time.

VII. NUMERICAL EXAMPLE

Consider the system (1), (2) with

  A = …,  B = …,  C = …

All the eigenvalues of the matrix A are located at … in the complex plane. We assume that the coefficient matrix of the dynamics (4) is A_r = …. Moreover, we assume that ū = …. The solutions of the linear equations (5) and (6) are Π = … and Γ = …. We solve (20), (21) with Q = …, R = …, ρ = … and λ = …, and obtain the gains F_1 = Y_1 Q_1^{-1} and F_2 = Y_2 Q_2^{-1}.

In Figs. 1-6, we show the responses of the system for r(k) = … with x(0) = …. In these figures, the solid lines show the responses with Algorithm 1 and the dashed lines show the responses with Algorithm 2. As shown in Fig. 1, in both cases, the tracking error converges to zero. Fig. 2 shows that the cost function J converges to zero. Fig. 6 shows that the scheduling parameter α converges to zero. As shown in Figs. 3 and 4, the control inputs fulfill the input constraints. Fig. 5 shows the computation time at each sampling time. In Table I, we show the average computation time needed to solve the optimization problem at each sampling time. The average computation time T_avg is calculated as T_avg = (1/M) Σ_{k=0}^{M-1} T(k), where T(k) is the computation time at time k and M is the number of samples. From Fig. 5 and Table I, we can confirm that the computation time with Algorithm 1 is reduced as compared to that with Algorithm 2.
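As a small self-contained check of the error-coordinate construction (9)-(12) used in the example, the following sketch (with hypothetical stand-in matrices, since the example's numerical data are not reproduced above) verifies that ξ(k) = x(k) - Π r(k) follows (11) and that e(k) = C ξ(k):

```python
import numpy as np

# Hypothetical stand-in data (not the paper's example plant).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Ar = np.array([[1.0]])                 # step reference, eq. (4)
# Solution of the regulator equations (5)-(6) for this data:
Pi = np.array([[1.0], [1.0]])          # Pi*Ar = A*Pi + B*Gamma, C*Pi = I
Gamma = np.array([[2.0]])

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
r = np.array([1.0])
xi = x - Pi @ r                        # eq. (9)

for k in range(50):
    v = rng.standard_normal(1)         # arbitrary input in error coordinates
    u = v + Gamma @ r                  # eq. (10)
    x = A @ x + B @ u                  # eq. (1)
    r = Ar @ r                         # eq. (4)
    xi = A @ xi + B @ v                # eq. (11)
    assert np.allclose(xi, x - Pi @ r)        # consistency with eq. (9)
    assert np.allclose(C @ xi, C @ x - r)     # eq. (12) equals eq. (2)
```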
Fig. 5. CPU time [s/sample] (solid: Algorithm 1, dashed: Algorithm 2)

Fig. 6. Scheduling parameter α(k)

The numerical simulations are performed on a personal computer (Windows XP SP3, Intel Core2, 2.33 GHz, 1 GB RAM) using MATLAB and the Robust Control Toolbox.

TABLE I
AVERAGE COMPUTATION TIME

              time [s]
Algorithm 1   0.0352
Algorithm 2   0.1235
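The per-sample and average computation times reported above can be collected by wrapping the on-line optimization call with a wall-clock timer, e.g. (a sketch; `step_fn` stands for the Step 3 optimization of either algorithm):

```python
import time
import numpy as np

def run_with_timing(step_fn, n_steps):
    """Call the per-sample optimization step_fn(k) n_steps times and return
    the per-sample times and their average (cf. Fig. 5 and Table I)."""
    times = np.empty(n_steps)
    for k in range(n_steps):
        t0 = time.perf_counter()
        step_fn(k)                      # e.g. one Step 3 optimization
        times[k] = time.perf_counter() - t0
    return times, float(times.mean())
```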

VIII. CONCLUSIONS
We have proposed a dual-mode MPC algorithm for linear dynamical systems with input constraints. The proposed control algorithm is reduced to a convex optimization problem. The MPC algorithm utilizes the state-dependent gain-scheduled feedback control law as the stabilizing controller. As a result, the number of decision variables can be reduced as compared with the method in [2]. Moreover, we have shown that, by using the proposed control algorithm, feasibility of the algorithm is guaranteed for all time, and the tracking error converges to zero in the case where the reference signal is generated by a certain linear dynamics.

REFERENCES

[1] A. Bemporad and M. Morari: Robust model predictive control: A survey, Lecture Notes in Control and Information Sciences, No. 245, pp. 207-226, Springer-Verlag (1999)
[2] H. H. Bloemen, T. J. J. van den Boom and H. B. Verbruggen: Optimizing the end-point state-weighting matrix in model-based predictive control, Automatica, Vol. 38, pp. 1061-1068 (2002)
[3] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan: Linear Matrix Inequalities in System and Control Theory, SIAM (1994)
[4] L. Magni, D. M. Raimondo and F. Allgöwer: Nonlinear Model Predictive Control, Lecture Notes in Control and Information Sciences, No. 384, Springer-Verlag (2009)
[5] J. M. Maciejowski: Predictive Control with Constraints, Prentice Hall (2002)
[6] D. Q. Mayne, J. B. Rawlings, C. V. Rao and P. O. M. Scokaert: Constrained model predictive control: Stability and optimality, Automatica, Vol. 36, pp. 789-814 (2000)
[7] H. Michalska and D. Q. Mayne: Robust receding horizon control of constrained nonlinear systems, IEEE Trans. Automatic Control, Vol. 38, No. 11, pp. 1623-1633 (1993)
[8] N. Wada, K. Saito and M. Saeki: Model Predictive Control for Linear Parameter Varying Systems using Parameter Dependent Lyapunov Function, IEEE Transactions on Circuits and Systems II, Vol. 53, No. 12 (2006)
[9] N. Wada and M. Saeki: An LMI Based Scheduling Algorithm for Constrained Stabilization Problems, Systems & Control Letters, Vol. 57, pp. 255-261 (2008)

