

Robust Online Model Predictive Control for a Constrained Image-Based Visual Servoing

Amir Hajiloo, Student Member, IEEE, Mohammad Keshmiri, Student Member, IEEE, Wen-Fang Xie, Senior Member, IEEE, and Ting-Ting Wang

Abstract—This paper presents an online image-based visual servoing (IBVS) controller for a 6-degrees-of-freedom (DOF) robotic system based on the robust model predictive control (RMPC) method. The controller is designed considering the input and output constraints of the robotic visual servoing system, such as robot physical limitations and visibility constraints. The proposed IBVS controller avoids the inverse of the image Jacobian matrix and hence can solve problems that are intractable for the classical IBVS controller, such as large displacements between the initial and the desired positions of the camera. To verify the effectiveness of the proposed algorithm, real-time experimental results on a 6-DOF robot manipulator with eye-in-hand configuration are presented and discussed.

Index Terms—Image-based visual servoing (IBVS), model predictive controller (MPC), robotics, visual servoing.

Manuscript received March 31, 2015; revised July 28, 2015; accepted October 31, 2015. Date of publication December 22, 2015; date of current version March 8, 2016. This work was supported in part by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), and in part by the National Natural Science Foundation of China under Grant 61403122.
A. Hajiloo and W.-F. Xie are with the Department of Mechanical and Industrial Engineering, Concordia University, Montréal, QC H3G 1M8, Canada (e-mail: ahajiloo@gmail.com).
M. Keshmiri is with the Department of Computer Science, McGill University, Montréal, QC H3A 0G4, Canada.
T.-T. Wang is with the Department of Mechanical and Electrical Engineering, Hohai University, Changzhou 213022, China.
Digital Object Identifier 10.1109/TIE.2015.2510505

I. INTRODUCTION

Visual servoing has been used extensively in robotics as a solution to make machines faster and more dexterous [1]. It refers to the use of computer vision data to control the motion of a robot in many applications, such as robotic assembly [2], unmanned aerial vehicles [3], robotized endoscope tracking [4], and ultrasound probe guidance [5]. Typical visual servoing controls can be classified into three categories: 1) position-based visual servoing (PBVS); 2) image-based visual servoing (IBVS); and 3) hybrid visual servoing [1], [6]. The main idea of a visual servoing system is illustrated in Fig. 1. The system consists of two separate controllers: the visual servoing controller and the robot controller. The visual servoing block uses a control command to generate a velocity screw as the control input for the robotic system, which leads to the desired joint velocity. The robot controller takes the signal produced by the visual servoing block as its desired path and drives the robot to follow that path [7], [8].

Fig. 1. General visual servoing approach [7].

The classical IBVS uses a set of geometric features, such as points, segments, or straight lines in the image plane, as image features [9]. The controller is designed using the inverse (or pseudo-inverse) of the image Jacobian matrix to obtain the control errors in Cartesian space, and normally a proportional control law is applied to achieve local convergence to the desired visual features. This proportional controller is very easy to implement; however, it may behave unsatisfactorily because of the difficulty of handling constraints. Also, the stability of the system is only guaranteed in the region around the desired position, and there may exist image singularities and image local minima, leading to IBVS performance degradation. Moreover, if the errors between the initial position and the desired one are large, the camera motion may, under the visibility constraint, suffer loss of features, conflict with the robot physical limitations, or even lead to servoing failure. Hence, numerous published studies focus on improving the control performance and overcoming the visibility problem [10], [11].

To solve the problem of image singularities, finding suitable visual features, such as polar [12], cylindrical [13], and spherical [14] coordinate systems, and moment features [15], is a good solution. In [16], the authors used the Takagi–Sugeno fuzzy framework to model IBVS and could handle singularity. However, these methods still do not address the constraints explicitly, which is crucial for the control design of real systems [17]. Also, some advanced control techniques have been applied to visual servoing controller design [18]–[21]. A switching controller is proposed in [18] to realize a large-displacement grasping task. In [19], a robust fuzzy gain-scheduled visual servoing with sampling time uncertainties is reported. A model predictive control (MPC) method based on the discrete-time visual servoing model is introduced in [20] to obtain the convergence of the robot motion by nonlinear constrained optimization. Another predictive control for constrained IBVS is proposed in [21]; the authors solve a finite-horizon open-loop constrained optimization problem and demonstrate the results in simulation. A robust model predictive control (RMPC) based on the polytopic model of IBVS with a fixed depth parameter has been proposed in [22]. In [22], the optimization time for calculating the control signal exceeds the real system sampling time; hence, that controller is not implemented online and cannot be applied to the real system. To the best of our knowledge, very few experimental results have been obtained on this topic. The major motivation of this paper is to apply real-time MPC to IBVS, which is a fast dynamic system.

The main application of MPC is in industrial processes, where its practical application has been very successful [23]. However, the major drawback of MPC is the long computational time required to solve the optimization problem, which often exceeds the sampling interval in real-time situations [24]. To make MPC implementable in practice for fast dynamic systems, the optimization problem must be solved within the time dictated by the sampling period of the system. The underlying goal of this work is to design an online RMPC which allows explicit incorporation of plant uncertainties and constraints when system parameters vary over a given range. In this paper, based on the chosen image features, image points, and the discretized model of the image Jacobian matrix, an RMPC law is formulated for IBVS. Using the discretized relationship between the time derivative of the image features and the camera velocity, a discrete-time model of the visual servoing system is obtained. In the whole working space, the image Jacobian matrix varies with the bounded parameters of the image point coordinates and the object depth; therefore, it is considered a linear parameter-varying (LPV) model. A polytopic model of the discrete-time visual servoing system is obtained from the LPV model using the tensor product (TP) model transformation described in [25] and [26]. Hence, the robust control signal can be calculated at every sampling instant by performing convex optimization involving linear matrix inequalities (LMIs) in MPC [27].

Using the RMPC law, robot workspace limitations, visibility constraints, parametric uncertainties, and actuator limitations can easily be posed as inequality constraints associated with the output and the input of the visual servoing model. Since the proposed IBVS controller avoids the direct inverse of the image Jacobian matrix, some problems that are intractable for the classical IBVS controller, such as a large displacement from the initial pose to the desired pose of the camera, can be solved by this method. At the same time, the visual features are kept in the image plane even when both the initial and the desired image features are close to the field-of-view (FOV) boundary. The real-time experimental results demonstrate the effectiveness of this method.

The paper is organized as follows. In Section II, the IBVS model is established to predict the future behavior of the system. Then, an online MPC algorithm for IBVS is given in Section III. In Section IV, experiments on an eye-in-hand 6-degrees-of-freedom (DOF) robot illustrate the effectiveness of the proposed approach. Finally, conclusion and future work are given in Section V.

II. LPV VISUAL SERVOING MODEL

In this work, MPC is used to control the IBVS system for a robotic system consisting of a 6-DOF manipulator with a camera installed on its end-effector. The target object is assumed to be stationary with respect to the robot's reference frame. The constrained infinite-horizon optimal controller design is based on the optimization technique in which the current control action is obtained by minimizing the cost function online [28]. The cost function includes the current measurement, the prediction of the future image states, and the current and future control signals based on a discrete-time model of the system [29]. The purpose of measuring the states and considering them at each time step is to compensate for unmeasured disturbances and model uncertainty [30].

To control the system using MPC, we need a model whereby the future behavior of the image feature vector can be predicted. The relationship between the time variation of the image feature vector of the predictor $\dot{s}_m$ and the camera velocity screw $V_c$ can be written as [1]

$$\dot{s}_m = L_s V_c \qquad (1)$$

in which $V_c$ is the control signal, i.e., the camera velocity screw written as $V_c = [v_{cx}, v_{cy}, v_{cz}, \omega_{cx}, \omega_{cy}, \omega_{cz}]^T$. Also, $L_s \in \mathbb{R}^{\kappa \times 6}$ is called the image Jacobian or the interaction matrix. According to [1], this matrix is written as follows:

$$L_s = \begin{bmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^2) & y \\ 0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix} \qquad (2)$$

where $x$ and $y$ are the projected coordinates of the feature position on the camera frame and $Z$ is the depth of the feature with respect to the camera frame. Usually, the depth parameter $Z$ in the image Jacobian matrix is assumed to be known [31]. However, in the monocular eye-in-hand configuration, it is difficult to measure the depth online. Thus, this parameter can be considered an uncertain variable that varies over a given range. Therefore, all the parameters in the image Jacobian matrix are time-varying, and $L_s$ is a function of the vector of time-varying parameters defined as $q(t) = \{x, y, Z\}$. Here, $q(t) \in \Omega$ is an element of the closed hypercube $\Omega = [x_m, x_M] \times [y_m, y_M] \times [Z_m, Z_M] \subset \mathbb{R}^3$, in which $x_m$, $x_M$, $y_m$, and $y_M$ are the minimum and maximum ranges of the image point coordinates, and $Z_m$ and $Z_M$ are the minimum and maximum depths between the object and the camera, respectively.
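To make the mapping concrete, the following sketch evaluates the interaction matrix (2) for one normalized image point and applies (1). It is illustrative only; the function name, the Python/numpy setting, and the numerical values are ours, not from the paper:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix L_s of (2) for one normalized image point
    (x, y) at depth Z, mapping the camera velocity screw
    Vc = [vx, vy, vz, wx, wy, wz] to the feature velocity [xdot, ydot]."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,      -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y**2, -x * y,        -x],
    ])

# Example: feature velocity induced by a small translation along the
# optical axis, for a point at (x, y) = (0.1, -0.2) and Z = 0.4 m.
Ls = interaction_matrix(0.1, -0.2, 0.4)
Vc = np.array([0.0, 0.0, 0.05, 0.0, 0.0, 0.0])  # camera velocity screw
s_dot = Ls @ Vc                                  # eq. (1)
```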
In order to apply the controller in real time, we need to sacrifice some accuracy to reduce the computational load at each sampling time. Therefore, instead of using the given LPV model directly, its TP model is used for control design [26]. For the considered time-varying parameter vector $q(t)$, a convex combination of the polytopic vertex form of the image Jacobian matrix can be obtained for the LMI-based RMPC controller design.

The first step of obtaining the TP model is to discretize the given LPV model over the transformation space $\Omega$; consequently, the resulting TP model is only applicable in $\Omega$ [26]. To apply the TP transformation to the LPV model, an $N$-dimensional hyper-rectangular equidistant grid-by-grid net over the closed hypercube $\Omega$

is generated as follows:

$$g_{n,m_n} = a_n + \frac{b_n - a_n}{M_n - 1}(m_n - 1), \qquad n = 1, \ldots, N \qquad (3)$$

where $N$ is the total number of time-varying parameters in the image Jacobian matrix, i.e., the dimension of $\Omega$, which is equal to 3. Also, $a_n$ and $b_n$ are the minimum and maximum of the closed hypercube elements on each dimension, respectively, and are given as follows:

$$a_1 = x_m, \quad a_2 = y_m, \quad a_3 = Z_m$$
$$b_1 = x_M, \quad b_2 = y_M, \quad b_3 = Z_M.$$

Also, $a_n \le g_{n,m_n} \le b_n$, $m_n = 1, \ldots, M_n$ stands for the corresponding grid line locations, and $M_n$ is the number of grid lines on the $n$th dimension [26].

Then, the discretization of the LPV model, $B_d(q(t))$, is obtained by sampling over the grid points in $\Omega$ as follows:

$$L_{m_1,m_2,m_3} = L_s(g_{m_1,m_2,m_3}) \qquad (4)$$

so the size of the discretized model $L^D$ (superscript "D" denotes "discretized") is $M_1 \times M_2 \times M_3 \times 2 \times 6$, i.e., $M_1 \times M_2 \times M_3$ different image Jacobian matrices are obtained within the domain of $\Omega$, each of which represents an image Jacobian matrix at a specific time.

The corresponding image Jacobian matrix becomes

$$L^D = \sum_{m_1=1}^{M_1} \sum_{m_2=1}^{M_2} \sum_{m_3=1}^{M_3} \prod_{n=1}^{3} w_{n,m_n}(q_n)\, L_{m_1,m_2,m_3} \qquad (5)$$

where $w_{n,m_n}(q_n)$ is the weighting function value evaluated at the discretized values $q_n = g_{n,m_n}$ over the $n$-dimension interval $[a_n, b_n]$. Based on (5), the $(N+2)$-dimensional coefficient tensor $L^D \in \mathbb{R}^{M_1 \times M_2 \times M_3 \times 2 \times 6}$ is constructed from the linear time-invariant (LTI) vertex systems $L_{m_1,m_2,m_3} \in \mathbb{R}^{2 \times 6}$.
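A minimal sketch of the grid sampling in (3) and (4), reusing the hypothetical `interaction_matrix` helper from the earlier sketch. The bounds and the 100 grid lines per dimension are the values reported later in Section IV-A:

```python
import numpy as np
from itertools import product

# Hypercube bounds for q = (x, y, Z) and grid counts M_n (Section IV-A).
bounds = [(-0.4, 0.4), (-0.4, 0.4), (0.2, 0.6)]
M = [100, 100, 100]
# Equidistant grid lines of (3); np.linspace includes both endpoints.
grids = [np.linspace(a, b, m) for (a, b), m in zip(bounds, M)]

# Sampled tensor L^D of (4), size M1 x M2 x M3 x 2 x 6 (10^6 samples,
# so a vectorized construction would be preferable in practice).
LD = np.empty((M[0], M[1], M[2], 2, 6))
for (i, x), (j, y), (k, Z) in product(*(enumerate(g) for g in grids)):
    LD[i, j, k] = interaction_matrix(x, y, Z)  # helper from the sketch above
```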
In order to have a convex TP model, the weighting functions for all $q \in \Omega$ should satisfy the following conditions:

$$w_{n,m}(q_n) \ge 0 \quad \forall n, m, q_n \qquad (6)$$

$$\sum_{m_n=1}^{M_n} w_{n,m_n}(q_n) = 1 \quad \forall n, q_n. \qquad (7)$$

Conditions (6) and (7) imply that the TP model type is non-negativeness (NN) and sum-normalization (SN), respectively.

The next step of TP transformation modeling is to execute the higher-order singular value decomposition (HOSVD) for each $n$-mode of $L^D$, $n = 1, \ldots, N$. To reduce the computational load of the control design, we make a trade-off between complexity and accuracy, so we can discard some nonzero singular values in the HOSVD (so-called reduced HOSVD). The error between the exact tensor $L^D$ and the reduced one $\hat{L}^D$ can be approximated as follows [32]:

$$\epsilon = \|L^D - \hat{L}^D\|_2 = \sum_{i=1}^{R} \sigma_i \qquad (8)$$

where $\sigma_i$ are the discarded singular values and $R$ is the number of discarded singular values.
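The reduced HOSVD can be sketched with plain numpy SVDs on the mode-n unfoldings (a tensor library such as tensorly offers the same operation). This is an illustration of the truncation step only, under our own naming; the SN/NN normalization of the weighting functions required by (6) and (7) is omitted:

```python
import numpy as np

def mode_n_product(T, M, n):
    """n-mode product T x_n M: contracts mode n of tensor T with M."""
    Tn = np.moveaxis(T, n, 0)
    return np.moveaxis(np.tensordot(M, Tn, axes=(1, 0)), 0, n)

def reduced_hosvd(T, ranks):
    """Reduced HOSVD: truncate the left singular vectors of each mode-n
    unfolding (parameter dimensions only) to the given ranks."""
    factors = []
    for n, r in enumerate(ranks):
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)  # mode-n unfolding
        U = np.linalg.svd(Tn, full_matrices=False)[0]
        factors.append(U[:, :r])          # keep the r dominant directions
    core = T
    for n, U in enumerate(factors):
        core = mode_n_product(core, U.T, n)
    return core, factors

# Keeping 2 singular values per parameter dimension of LD yields a
# 2 x 2 x 2 x 2 x 6 core whose slices play the role of the eight LTI
# vertex systems once the factors are normalized into SN/NN weights.
core, factors = reduced_hosvd(LD, ranks=[2, 2, 2])
```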
Now the extracted reduced TP model can be used to design the MPC.

III. CONTROLLER DESIGN

To design the MPC, we use a discrete-time model instead of the continuous-time dynamic model given in (1). The discrete-time state-space model of each feature point can be expressed as

$$s_m(t+1) = I s_m(t) + L V_c(t) \qquad (9)$$

where $s_m = [x_m, y_m]^T$ is the projected position of each feature point on the camera frame. Also, $L = h L_s(t)$ is the discrete-time image Jacobian matrix and $h$ is the sampling time. In (9), $I$ is the $2 \times 2$ identity matrix, which is fixed; only the matrix $L$ is a function of the time-varying parameters.

It is well known that a unique camera pose can theoretically be obtained by using at least four image points. Hence, we choose $m$ features where $m = 1, \ldots, 4$.

To design the MPC, we define the image feature error at sample time $k$ as $e(k) = s(k) - s_d$, in which $s = [s_1, s_2, s_3, s_4]^T$. Also, $s_d$ is the desired feature vector acquired from the image of the stationary object taken in the robot target pose.

The underlying goal of designing the RMPC is to find a control law for the system input $V_c$ so that each image feature error $e(k)$ defined at sampling time $k$ can be steered to zero within a desirable time period.
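Before turning to the control law, here is a minimal sketch of the prediction model (9) for the four stacked feature points; the function name, the explicit loop, and the default sampling time (the 0.04 s quoted in Section IV-B) are our illustrative choices:

```python
import numpy as np

def step_features(s, Vc, depths, h=0.04):
    """One step of the prediction model (9) for four stacked points:
    s(t+1) = s(t) + h * Ls(t) * Vc(t), with s = [x1, y1, ..., x4, y4]."""
    s_next = s.copy()
    for m in range(4):
        x, y = s[2 * m], s[2 * m + 1]
        Ls = interaction_matrix(x, y, depths[m])  # sketch from Section II
        s_next[2 * m:2 * m + 2] += h * (Ls @ Vc)  # L = h * Ls in (9)
    return s_next

# Image-feature error that the controller steers to zero: e(k) = s(k) - s_d.
```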


The control signal is defined as a linear state feedback $V_{c,k} = F_k e_k$ that minimizes an upper bound on the worst-case infinite-horizon quadratic cost at sampling time $k$

$$J_k = \max_{Z \in \mathcal{Z}} \sum_{i=0}^{\infty} e_{k+i|k}^T Q\, e_{k+i|k} + V_{c,k+i|k}^T R\, V_{c,k+i|k} \qquad (10)$$

where $Q \ge 0$ and $R > 0$ are weighting matrices that let the designer trade off between a small control signal (large $R$) and a fast response (large $Q$). The Lyapunov function $V(e) = e_k^T P_k e_k$ with $P_k > 0$ defined at sampling time $k$ is an upper bound on the worst-case cost if the following inequality holds for all vertices [33]:

$$e_{k+1|k}^T P e_{k+1|k} - e_{k|k}^T P e_{k|k} \le -e_{k|k}^T Q e_{k|k} - V_{c,k|k}^T R V_{c,k|k}. \qquad (11)$$

By summing both sides of the above inequality from 0 to $\infty$ and inserting a linear feedback $u_k = F e_k$, a matrix inequality can be obtained as follows:

$$(I + L_k F)^T P (I + L_k F) - P \le -Q - F^T R F \qquad (12)$$

where $L_k$ is the discrete-time Jacobian matrix at sampling time $k$. Following Boyd et al. [34], by applying a congruence transformation $P = X^{-1}$, defining $Y = F X$, and using a Schur complement, we can find an LMI in $X$ and $Y$ as follows:

$$\begin{bmatrix} X & * & * & * \\ X + L_k Y & X & * & * \\ X & 0 & Q^{-1} & * \\ Y & 0 & 0 & R^{-1} \end{bmatrix} \ge 0 \qquad (13)$$

where the symbol $*$ represents a symmetric structure in the LMI. This LMI is only valid for one model; for our system, which is a TP model, the LMI should hold for all possible models, i.e., all vertices.

In order to solve the LMI online and consider all of the possible vertices, we define the control signal at time $k$ as $u_{k+i|k} = F_k e_{k+i|k}$, which is used for the future as well. The control signal is obtained by minimizing the upper bound on the worst-case value of the quadratic objective function, considered as $\gamma_k = e_{k|k}^T P_k e_{k|k}$. The state feedback gain matrix $F_k$ is obtained at each sampling time $k$ by

$$F_k = Y_k X_k^{-1} \qquad (14)$$

where $X_k = \gamma_k P_k^{-1} > 0$ and $Y_k$ are obtained from the solution of the following semidefinite program [35]:

$$\min_{\gamma_k, X_k, Y_k} \gamma_k \qquad (15)$$

subject to

$$\begin{bmatrix} X_k & * & * & * \\ X_k + L_d^j Y_k & X_k & * & * \\ X_k & 0 & \gamma_k Q^{-1} & * \\ Y_k & 0 & 0 & \gamma_k R^{-1} \end{bmatrix} \ge 0 \qquad (16)$$

$$\begin{bmatrix} 1 & x_{k|k}^T \\ x_{k|k} & X_k \end{bmatrix} \ge 0 \qquad (17)$$

where $j = 1, 2, \ldots, L$ ($L$ is the number of vertices). To ensure that the constraints are satisfied, we can define constraints on the input and the output as follows [36]:

$$\begin{bmatrix} v_{\max}^2 I & Y_k \\ Y_k^T & X_k \end{bmatrix} \ge 0 \qquad (18)$$

$$\begin{bmatrix} y_{\max}^2 I & X_k + L_d^j Y_k \\ (X_k + L_d^j Y_k)^T & X_k \end{bmatrix} \ge 0 \qquad (19)$$

where $v_{\max}$ and $y_{\max}$ are the upper bounds for the input and the output, respectively.

Under the above closed-loop feedback law, the solution of the optimization (15) can be obtained using the LMI technique, which stabilizes the LPV system and steers the state variables to zero. At each sampling time, an optimal upper bound on the worst-case performance cost over the infinite horizon is obtained by forcing a quadratic function of the state to decrease by at least the amount of the worst-case performance cost at each prediction time. Such online step-by-step optimization can lead to an asymptotically stable evolution.

In order to implement the MPC, we use the YALMIP toolbox, which is used for modeling and solving convex and nonconvex optimization problems [27]. Using this toolbox, we are able to accomplish the online optimization and obtain the control signal at each sampling time.
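The semidefinite program (15)-(19) maps almost directly onto a modeling toolbox. The paper solves it with YALMIP in MATLAB; the sketch below is our illustrative cvxpy transcription (function name, variable names, and the solver choice are assumptions, not the authors' code):

```python
import numpy as np
import cvxpy as cp

def rmpc_gain(e, vertices, Q, R, v_max, y_max):
    """Solve the SDP (15)-(19) for one sampling instant and return the
    state-feedback gain F_k = Y_k X_k^{-1} of (14).
    e        : current stacked feature error (the state x_{k|k} in (17))
    vertices : list of discrete-time vertex Jacobians L_d^j, each 8 x 6."""
    ns, nu = Q.shape[0], R.shape[0]           # 8 and 6 for four points
    Qinv, Rinv = np.linalg.inv(Q), np.linalg.inv(R)
    gamma = cp.Variable(nonneg=True)
    X = cp.Variable((ns, ns), symmetric=True)
    Y = cp.Variable((nu, ns))
    e = e.reshape(-1, 1)

    cons = [cp.bmat([[np.eye(1), e.T], [e, X]]) >> 0,         # (17)
            cp.bmat([[v_max**2 * np.eye(nu), Y],
                     [Y.T, X]]) >> 0]                          # (18)
    for Ld in vertices:                                        # j = 1, ..., L
        M = X + Ld @ Y
        Z1, Z2 = np.zeros((ns, ns)), np.zeros((ns, nu))
        cons.append(cp.bmat([[X, M.T, X, Y.T],
                             [M, X, Z1, Z2],
                             [X, Z1, gamma * Qinv, Z2],
                             [Y, Z2.T, Z2.T, gamma * Rinv]]) >> 0)  # (16)
        cons.append(cp.bmat([[y_max**2 * np.eye(ns), M],
                             [M.T, X]]) >> 0)                  # (19)

    cp.Problem(cp.Minimize(gamma), cons).solve(solver=cp.SCS)  # (15)
    return Y.value @ np.linalg.inv(X.value)                    # (14)
```

At each frame, the error and the vertex Jacobians would be refreshed and the program re-solved, giving $V_{c,k} = F_k e_k$; with $Q = 10 I_{8\times8}$, $R = I_{6\times6}$ as in (22) and $v_{\max} = 0.25$, this mirrors the setup reported in Section IV-B.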

IV. EXPERIMENTAL RESULTS

In this section, the proposed controller is tested on a 6-DOF Denso robot. The experimental setup consists of a VS-6556G Denso robot and a camera mounted on the end-effector (Fig. 2).

Fig. 2. Experimental setup.

The robot communicates with its controller at a frequency of 1 kHz. A CCD camera is used as the vision system and is mounted on the robot's end-effector. The camera characteristics are given in Table I.

TABLE I. Camera parameters.

TABLE II. Initial (I) and desired (D) locations of the feature points in pixels.


The camera capture rate is 30 frames/second. The object is stationary in the working space. The visual servoing task is completed when the image features match the desired features. In this work, four tests with different strategies have been performed to validate the algorithms.

A. TP Model Transformation

In order to find the discretized model, we generate the hyper-rectangular N-dimensional space grid and use the TP model transformation. One hundred equidistant grid lines are considered on each dimension for discretization; therefore, a $100 \times 100 \times 100 \times 2 \times 6$ tensor of the system is obtained. The values of the parameters in $\Omega$ are considered as $x_m = -0.4$ m, $x_M = 0.4$ m, $y_m = -0.4$ m, $y_M = 0.4$ m, $Z_m = 0.2$ m, and $Z_M = 0.6$ m. After applying the HOSVD on each of the three parameter dimensions of the system tensor, the nonzero singular values in each dimension are obtained as follows:

$$\sigma_{1,1} = 51.15, \quad \sigma_{2,1} = 51.15, \quad \sigma_{3,1} = 53.02$$
$$\sigma_{1,2} = 7.23, \quad \sigma_{2,2} = 7.23, \quad \sigma_{3,2} = 4.00$$
$$\sigma_{1,3} = 0.13, \quad \sigma_{2,3} = 0.13.$$

To reduce the computational load of the control design, we make a trade-off between complexity and accuracy, so we keep the first two singular values of the first and second dimensions and all the nonzero singular values of the third dimension.
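These per-mode singular values would be obtained from the unfoldings of the discretized tensor; as a hedged usage note, reusing the hypothetical `LD` built in the Section II sketch:

```python
import numpy as np

# Singular values of each mode-n unfolding of LD; the three largest per
# mode correspond to the sigma_{n,i} listed above.
for n in range(3):
    Tn = np.moveaxis(LD, n, 0).reshape(LD.shape[n], -1)
    print(f"mode {n + 1}:", np.linalg.svd(Tn, compute_uv=False)[:3])
```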

The error between the full-rank tensor $L^D$ and the reduced one $\hat{L}^D$ obtained by discarding the singular values $\sigma_{1,3}$ and $\sigma_{2,3}$ can be approximated by

$$\|L^D - \hat{L}^D\|_2 \le \sigma_{1,3}^2 + \sigma_{2,3}^2 \approx 0.034. \qquad (20)$$

Therefore, the results show that the system in (9) can be approximately given in the HOSVD-based canonical polytopic model form with a minimum of $2 \times 2 \times 2 = 8$ LTI vertex models. In order to have a convex TP model that satisfies the LMI control design conditions, we use SN- and NN-type weighting functions [26]. The weighting functions are illustrated in Fig. 3.

Fig. 3. SNNN-type weighting functions of the reduced TP model of 8 LTI systems. (a) $w_1$. (b) $w_2$. (c) $w_3$.

The LTI system matrices of the polytopic TP model are

$$L_{1,1,1} = \begin{bmatrix} 0.08 & 0 & 0.03 & 0.01 & 0.59 & -0.01 \\ 0 & 0.08 & 0.03 & -0.59 & -0.01 & 0.01 \end{bmatrix}$$

$$L_{1,1,2} = \begin{bmatrix} -0.13 & 0 & -0.05 & 0.01 & 0.59 & -0.01 \\ 0 & -0.13 & -0.05 & -0.59 & -0.01 & 0.01 \end{bmatrix}$$

$$L_{1,2,1} = \begin{bmatrix} 0.08 & 0 & 0.03 & -0.04 & 0.59 & 0.09 \\ 0 & 0.08 & -0.20 & 0.40 & 0.04 & 0.01 \end{bmatrix}$$

$$L_{1,2,2} = \begin{bmatrix} -0.13 & 0 & -0.05 & -0.04 & 0.59 & 0.09 \\ 0 & -0.13 & 0.33 & 0.40 & 0.04 & 0.01 \end{bmatrix}$$

$$L_{2,1,1} = \begin{bmatrix} 0.08 & 0 & -0.20 & -0.04 & -0.40 & -0.01 \\ 0 & 0.08 & 0.03 & -0.59 & 0.04 & -0.09 \end{bmatrix}$$

$$L_{2,1,2} = \begin{bmatrix} -0.13 & 0 & 0.33 & -0.04 & -0.40 & -0.01 \\ 0 & -0.13 & -0.05 & -0.59 & 0.04 & -0.09 \end{bmatrix}$$

$$L_{2,2,1} = \begin{bmatrix} 0.08 & 0 & -0.20 & 0.24 & -0.40 & 0.09 \\ 0 & 0.08 & -0.20 & 0.40 & -0.24 & -0.09 \end{bmatrix}$$

$$L_{2,2,2} = \begin{bmatrix} -0.13 & 0 & 0.33 & 0.24 & -0.40 & 0.09 \\ 0 & -0.13 & 0.33 & 0.40 & -0.24 & -0.09 \end{bmatrix}.$$

The image Jacobian (interaction) matrix $L$ at each sampling time can be written as follows:

$$L(t) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} w_{1,i}(x(t))\, w_{2,j}(y(t))\, w_{3,k}(Z(t))\, L^d_{i,j,k} \qquad (21)$$

where $w_{1,i}$, $i = 1, 2$, $w_{2,j}$, $j = 1, 2$, and $w_{3,k}$, $k = 1, 2$ are the weighting functions shown in Fig. 3. The interaction matrix (21) varies in a polytope (convex hull) of vertices that satisfies the convexity conditions given in (6) and (7).
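A sketch of evaluating (21) from the eight vertex matrices. For illustration we use normalized linear interpolation weights, which satisfy (6) and (7); the paper's actual weighting functions are the SNNN-type functions produced by the TP transformation (Fig. 3), so this is only an approximation under our own naming:

```python
import numpy as np

def lin_weights(q, a, b):
    """Two weights for one parameter that satisfy (6) and (7):
    nonnegative and summing to one (plain linear interpolation)."""
    t = (q - a) / (b - a)
    return np.array([1.0 - t, t])

def interaction_from_vertices(x, y, Z, L_vertices):
    """Evaluate (21): convex combination of the 2 x 2 x 2 = 8 vertex
    matrices L^d_{i,j,k}, stored as an array of shape (2, 2, 2, 2, 6)."""
    w1 = lin_weights(x, -0.4, 0.4)   # parameter bounds from Section IV-A
    w2 = lin_weights(y, -0.4, 0.4)
    w3 = lin_weights(Z, 0.2, 0.6)
    L = np.zeros((2, 6))
    for i in range(2):
        for j in range(2):
            for k in range(2):
                L += w1[i] * w2[j] * w3[k] * L_vertices[i, j, k]
    return L
```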
B. Results and Analysis

The maximum control input of the camera velocity screw in (18), $v_{\max}$, is limited to 0.25 m/s for the translational speed and 0.25 rad/s for the rotational speed.

The YALMIP toolbox is used to solve the optimization problem in (10). Using the proposed method and this toolbox, the experiments are done in real time, and the computational time of the optimization problem is less than the sampling time (0.04 s). There is scant research on developing real-time MPC techniques for IBVS control; for example, both [22] and [37] only present simulation results. In this work, however, all experiments have been carried out in real time.

In order to have a fast convergence response, we choose the elements of the matrix $Q$ to be larger than those of the matrix $R$ as follows:

$$Q = 10 \times I_{8 \times 8}, \qquad R = I_{6 \times 6} \qquad (22)$$

where $I_{n \times n}$ is the identity matrix. The experimental results of the four different cases are given as follows.

Test 1

In the first test, the initial and desired features are chosen in a way that a 90° rotation about the camera's center is required to complete the task. The initial and desired locations of the features are given in Table II. The results of this test are given in Fig. 4.

Fig. 4. Results for Test 1. (a) Feature trajectory in the image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.

Fig. 4(a) shows the trajectory of the features in the image plane. The trajectory starts from the initial positions indicated by the triangle signs and ends at the positions indicated by the circle signs. This figure shows how the controller takes the features to their desired values without any unnecessary motions. A similar test was performed in [7]; the comparison between the two results shows that considering the constraints in the controller improves the trajectory of the features in the image plane. The changes of the joint angles during the visual servoing task are shown in Fig. 4(b). Finally, the 3-D trajectory of the robot end-effector in space is shown in Fig. 4(c).

Test 2

In the second test, a long-distance visual servoing task is performed. The initial and the desired locations of the features are relatively far from each other, as shown in Table II. The results of this test are given in Fig. 5.

Fig. 5. Results for Test 2. (a) Feature trajectory in the image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.

One of the drawbacks of the IBVS controller is that it cannot keep the end-effector inside the workspace when a long-distance task is required. The MPC controller provides better results for such tasks because of its prediction algorithm; thus, it prevents reaching the limits of the workspace during the operation. The results of Test 2 show how the MPC controller succeeds in completing the task. The sequence of the result figures is the same as in Test 1.

Test 3

In Test 3, another long-distance task is tested in which the features move close to the FOV limit. The initial and desired locations of the features are given in Table II. Performing the same test using an IBVS controller causes the features to leave the FOV [7]. The IBVS controller rotates the features while taking them to the desired positions. However, the proposed MPC controller avoids rotating the features in order to respect the system constraints; the rotation of the features happens only when the features are close to the desired features. The results of this test are given in Fig. 6.

Fig. 6. Results for Test 3. (a) Feature trajectory in the image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.

Test 4

In Test 4, a visual servoing task is prepared which requires a complicated motion in 3-D space and involves all six DOF. The initial and desired locations of the features are given in Table II. The results for this test are given in Fig. 7.

Fig. 7. Results for Test 4. (a) Feature trajectory in the image plane. (b) Robot joint angles. (c) Camera 3-D trajectory.

The results show how the proposed MPC controller manages to reach the desired target while keeping the image features and the robot within limits.

According to the obtained results, it is evident that in the different tests the camera undergoes different translational movements in the Z direction from the initial pose to the desired (final) pose. Fig. 7(c) shows that the camera moves from Z = 0.25 m to Z = 0.4 m, which is a long vertical translation. In most studies, such as [7], a constant depth value is considered as the depth of the object with respect to the camera for simplification purposes. This assumption can affect the performance of the controller unless the controller is designed to be robust against the uncertainty. Therefore, we design a robust MPC using a robust optimization method in which Z is considered an uncertain bounded variable. The designed robust MPC can deal with the time-varying depth of the object. The results of Test 4 demonstrate that it is far better to consider a variable depth instead of a fixed one.

V. CONCLUSION

In this paper, an online MPC-based IBVS controller was developed based on the discretized model of the image Jacobian matrix. The control signal is obtained by minimizing a cost function based on the error in the image plane, and it provides the stability and convergence of the robot motion. The constraints due to actuator limitations and visibility can be taken into account using the MPC strategy. The experimental results on a 6-DOF eye-in-hand visual servoing system have demonstrated the effectiveness of the proposed method. The experiments have been carried out in a true real-time fashion. The ability of MPC to keep the system within the desired limits increases the success chance of visual servoing tasks compared to basic visual servoing controllers.

ACKNOWLEDGMENT

The authors would like to thank the Quanser Company for support with the experimental setup.

REFERENCES

[1] F. Chaumette and S. Hutchinson, "Visual servo control. I. Basic approaches," IEEE Robot. Autom. Mag., vol. 13, no. 4, pp. 82–90, Dec. 2006.
[2] Y. Wang, H. Lang, and C. de Silva, "Visual servo control and parameter calibration for mobile multi-robot cooperative assembly tasks," in Proc. IEEE Int. Conf. Autom. Logist. (ICAL'08), Sep. 2008, pp. 635–639.
[3] N. Guenard, T. Hamel, and R. Mahony, "A practical visual servo control for an unmanned aerial vehicle," IEEE Trans. Robot., vol. 24, no. 2, pp. 331–340, Apr. 2008.
[4] L. Ott, F. Nageotte, P. Zanne, and M. de Mathelin, "Robotic assistance to flexible endoscopy by physiological-motion tracking," IEEE Trans. Robot., vol. 27, no. 2, pp. 346–359, Apr. 2011.
[5] R. Mebarki, A. Krupa, and F. Chaumette, "2-D ultrasound probe complete guidance by visual servoing using image moments," IEEE Trans. Robot., vol. 26, no. 2, pp. 296–306, Apr. 2010.
[6] F. Chaumette and S. Hutchinson, "Visual servo control. II. Advanced approaches [tutorial]," IEEE Robot. Autom. Mag., vol. 14, no. 1, pp. 109–118, Mar. 2007.
[7] M. Keshmiri, W.-F. Xie, and A. Mohebbi, "Augmented image-based visual servoing of a manipulator using acceleration command," IEEE Trans. Ind. Electron., vol. 61, no. 10, pp. 5444–5452, Oct. 2014.
[8] D.-H. Park, J.-H. Kwon, and I.-J. Ha, "Novel position-based visual servoing approach to robust global stability under field-of-view constraint," IEEE Trans. Ind. Electron., vol. 59, no. 12, pp. 4735–4752, Dec. 2012.
[9] Y. Fang, X. Liu, and X. Zhang, "Adaptive active visual servoing of nonholonomic mobile robots," IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 486–497, Jan. 2012.
[10] N. Garcia-Aracil, E. Malis, R. Aracil-Santonja, and C. Perez-Vidal, "Continuous visual servoing despite the changes of visibility in image features," IEEE Trans. Robot., vol. 21, no. 6, pp. 1214–1220, Dec. 2005.
[11] A. Remazeilles, N. Mansard, and F. Chaumette, "A qualitative visual servoing to ensure the visibility constraint," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2006, pp. 4297–4303.
[12] P. Corke, F. Spindler, and F. Chaumette, "Combining cartesian and polar coordinates in IBVS," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS'09), Oct. 2009, pp. 5962–5967.
[13] M. Iwatsuki and N. Okiyama, "A new formulation of visual servoing based on cylindrical coordinate system with shiftable origin," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2002, vol. 1, pp. 354–359.
[14] R. Fomena and F. Chaumette, "Improvements on visual servoing from spherical targets using a spherical projection model," IEEE Trans. Robot., vol. 25, no. 4, pp. 874–886, Aug. 2009.
[15] F. Chaumette, "Image moments: A general and useful set of features for visual servoing," IEEE Trans. Robot., vol. 20, no. 4, pp. 713–723, Aug. 2004.
[16] I. Siradjuddin, L. Behera, T. McGinnity, and S. Coleman, "Image-based visual servoing of a 7-DOF robot manipulator using an adaptive distributed fuzzy PD controller," IEEE/ASME Trans. Mechatronics, vol. 19, no. 2, pp. 512–523, Apr. 2014.
[17] W. Sun, H. Gao, and O. Kaynak, "Adaptive backstepping control for active suspension systems with hard constraints," IEEE/ASME Trans. Mechatronics, vol. 18, no. 3, pp. 1072–1079, Jun. 2013.
[18] W.-F. Xie, Z. Li, X.-W. Tu, and C. Perron, "Switching control of image-based visual servoing with laser pointer in robotic manufacturing systems," IEEE Trans. Ind. Electron., vol. 56, no. 2, pp. 520–529, Feb. 2009.
[19] B. Kadmiry and P. Bergsten, "Robust fuzzy gain scheduled visual-servoing with sampling time uncertainties," in Proc. IEEE Int. Symp. Intell. Control, Sep. 2004, pp. 239–245.
[20] C. Lazar and A. Burlacu, "Visual servoing of robot manipulators using model-based predictive control," in Proc. 7th IEEE Int. Conf. Ind. Informat. (INDIN'09), Jun. 2009, pp. 690–695.
[21] G. Allibert, E. Courtial, and F. Chaumette, "Predictive control for constrained image-based visual servoing," IEEE Trans. Robot., vol. 26, no. 5, pp. 933–939, Oct. 2010.
[22] T. T. Wang, W. F. Xie, G. D. Liu, and Y. M. Zhao, "Quasi-min-max model predictive control for image-based visual servoing with tensor product model transformation," Asian J. Control, vol. 17, no. 2, pp. 402–416, 2015.
[23] S. Qin and T. A. Badgwell, "A survey of industrial model predictive control technology," Control Eng. Pract., vol. 11, no. 7, pp. 733–764, 2003.
[24] R. Milman and E. Davison, "Evaluation of a new algorithm for model predictive control based on non-feasible search directions using premature termination," in Proc. 42nd IEEE Conf. Decis. Control, Dec. 2003, vol. 3, pp. 2216–2221.
[25] P. Baranyi, "TP model transformation as a way to LMI-based controller design," IEEE Trans. Ind. Electron., vol. 51, no. 2, pp. 387–400, Apr. 2004.
[26] P. Baranyi, Y. Yam, and P. Várlaki, Tensor Product Model Transformation in Polytopic Model-Based Control. New York, NY, USA: Taylor & Francis, 2014.
[27] J. Lofberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," in Proc. 2004 IEEE Int. Symp. Comput. Aided Control Syst. Des., Sep. 2004, pp. 284–289.
[28] T. Besselmann, J. Lofberg, and M. Morari, "Explicit model predictive control for linear parameter-varying systems," in Proc. 47th IEEE Conf. Decis. Control (CDC'08), Dec. 2008, pp. 3848–3853.

[29] T. Besselmann, J. Lofberg, and M. Morari, "Explicit MPC for LPV systems: Stability and optimality," IEEE Trans. Autom. Control, vol. 57, no. 9, pp. 2322–2332, Sep. 2012.
[30] L. Grune and J. Pannek, Nonlinear Model Predictive Control: Theory and Algorithms. New York, NY, USA: Springer, 2011.
[31] M. Keshmiri and W. F. Xie, "Augmented imaged based visual servoing controller for a 6 DOF manipulator using acceleration command," in Proc. 51st IEEE Conf. Decis. Control (CDC), Dec. 2012, pp. 556–561.
[32] P. Baranyi, D. Tikk, Y. Yam, and R. J. Patton, "From differential equations to PDC controller design via numerical transformation," Comput. Ind., vol. 51, no. 3, pp. 281–297, Aug. 2003.
[33] D. Mayne and H. Michalska, "Receding horizon control of nonlinear systems," IEEE Trans. Autom. Control, vol. 35, no. 7, pp. 814–824, Jul. 1990.
[34] S. P. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory. Philadelphia, PA, USA: SIAM, 1994, vol. 15.
[35] A. Hajiloo and W. F. Xie, "The stochastic robust model predictive control of shimmy vibration in aircraft landing gears," Asian J. Control, vol. 17, no. 2, pp. 476–485, Mar. 2015.
[36] W. Sun, Y. Zhao, J. Li, L. Zhang, and H. Gao, "Active suspension control with frequency band constraints and actuator input delay," IEEE Trans. Ind. Electron., vol. 59, no. 1, pp. 530–537, Jan. 2012.
[37] G. Allibert, E. Courtial, and F. Chaumette, "Predictive control for constrained image-based visual servoing," IEEE Trans. Robot., vol. 26, no. 5, pp. 933–939, Oct. 2010.

Amir Hajiloo (S'12) received the M.Sc. and the Ph.D. (first class) degrees in mechanical engineering from Guilan University, Rasht, Iran, in 2006 and 2012, respectively. Currently, he is working toward the Ph.D. degree in control and robotics in the Department of Mechanical and Industrial Engineering, Concordia University, Montreal, QC, Canada. His research interests include nonlinear control, robotics, real-time model predictive control, optimal robust control, complex dynamic systems modeling, and multiobjective controller design.

Mohammad Keshmiri (S'12) received the B.Sc. and M.Sc. degrees in mechanical engineering from Isfahan University of Technology (IUT), Isfahan, Iran, in 2006 and 2009, respectively, and the Ph.D. degree in mechatronics from Concordia University, Montreal, QC, Canada, in 2014. He was an active member of the Dynamic and Robotic Research Group, IUT, and was involved in several projects of the group. He is currently a Postdoctoral Fellow with the Department of Computer Science, McGill University, Montreal, QC, Canada. He is also working as a Robotic Researcher with Robotmaster. His research interests include robotics and control, computer vision, nonlinear systems, visual servoing, system identification, and artificial intelligence.

Wen-Fang Xie (SM'12) received the Master's degree in flight control from Beihang University, Beijing, China, and the Ph.D. degree in intelligent process control from Hong Kong Polytechnic University, Hung Hom, Hong Kong, in 1991 and 1999, respectively. She became an Assistant Professor with Concordia University, Montreal, QC, Canada, in 2003, and was promoted to Associate Professor in 2008. She has been a Professor with the Department of Mechanical and Industrial Engineering, Concordia University, since 2014. Her research interests include nonlinear control and identification in mechatronics, visual servoing, model predictive control, neural networks, and advanced process control and system identification.

Ting-Ting Wang received the Ph.D. degree in control theory and control engineering from Jiangnan University, Wuxi, China, in 2012. She was a Visiting Researcher with Concordia University, Montreal, QC, Canada, in 2010 and 2011. She is currently a Lecturer with the Department of Mechanical and Electrical Engineering, Hohai University, Changzhou, China. Her research interests include visual servoing, image processing, artificial intelligent control, and mobile robotics.