
27 Linear Flight Control Techniques for Unmanned Aerial Vehicles


Jonathan P. How, Emilio Frazzoli, and Girish Vinayak Chowdhary

Contents
27.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
27.2 Equations of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
27.2.1 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
27.2.2 Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
27.2.3 Forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
27.2.4 Linearization of the Equations of Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
27.2.5 Wind Disturbances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
27.2.6 Simplified Aircraft Equations of Motion for Path Planning . . . . . . . . . . . . . . . . . . . . . 547
27.3 Flight Vehicle Control Using Linear Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
27.3.1 Standard Performance Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
27.3.2 Classical Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
27.3.3 Successive Loop-Closure Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
27.3.4 Multi-input–Multi-output Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
27.4 Path Following . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
27.5 Conclusion and Future Directions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574

J.P. How (✉)
Aerospace Controls Laboratory, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA
e-mail: jhow@mit.edu

E. Frazzoli
Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA
e-mail: frazzoli@mit.edu

G.V. Chowdhary
Aerospace Controls Laboratory, Department of Aeronautics and Astronautics, and Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
e-mail: girishc@MIT.EDU

K.P. Valavanis, G.J. Vachtsevanos (eds.), Handbook of Unmanned Aerial Vehicles, 529
DOI 10.1007/978-90-481-9707-1_49,
© Springer Science+Business Media Dordrecht 2014

Abstract
This chapter presents an overview of linear flight control and guidance methods
for unmanned aerial vehicles (UAVs). The chapter begins with a discussion of
rotation matrices and the kinematic equations of a UAV. The six-degree-of-freedom
UAV equations of motion are also derived using rigid-body dynamics principles.
The equations of motion are then linearized, and several linear multi-loop closure
techniques for UAV guidance and control are discussed.

27.1 Introduction

A flight control system is required to ensure that the UAV exhibits stable behavior
and delivers desired performance such as following a desired trajectory with suffi-
cient accuracy in the presence of external disturbances. Thus, a UAV flight control
system is typically safety/mission critical, as an incorrect design, lack of robustness
to model variations, or a component failure could result in poor performance
(impacting, e.g., payload pointing capability) or even loss of the vehicle.
Figure 27.1 depicts a typical UAV control system. The goal of the control system
is to track the desired reference command in the presence of external disturbances. The
control system achieves this by attempting to eliminate the tracking error between
the reference commands and the measured response of the UAV. The control is
typically achieved by using estimates of the UAV states, which include position,
velocity, angular rate, and attitude of the UAV. These estimates are obtained by the
navigation system that fuses noisy measurements from several sensors. The purpose
of this chapter, and its companion chapter on nonlinear control (chapter Nonlinear
Flight Control Techniques for Unmanned Aerial Vehicles), is to outline the various
methods for designing the control system Gc (and some aspects of the navigation
system) and in the process highlight the benefits and challenges of using each
approach.

[Block diagram: reference commands and the tracking error feed the control system Gc(s); its control inputs drive the UAV dynamics G(s), subject to disturbances; the sensors and navigation system, subject to sensor noise, feed state estimates back to the controller.]

Fig. 27.1 A typical UAV control system architecture. A control system attempts to minimize the tracking error between the desired and estimated states in the presence of disturbances. A navigation system provides estimates of the states by fusing together measurements from several complementary sensors

This chapter presents a detailed overview of linear techniques for UAV control.
Linear control techniques for manned and unmanned aircraft are well understood
and are widely utilized due to their simplicity, ease of implementation, and asso-
ciated metrics of stability and performance. The chapter begins with a description
of UAV dynamics in Sect. 27.2. Linear control techniques are then developed in
Sect. 27.3. A brief discussion of path following guidance controllers is presented in
Sect. 27.4.

27.2 Equations of Motion

For most problems of interest to UAVs, the Earth can be taken to be an inertial
reference frame, and the aircraft can be modeled as a rigid body. In order to write
equations of motion for an aircraft, it is customary to define a reference frame
(often called a body frame) that is rigidly attached to it (see Fig. 27.2).
A standard choice (often referred to as NED) for the inertial reference frame
$(X, Y, Z)$ is to choose the $X$-axis pointing north, the $Y$-axis pointing east, and the
$Z$-axis pointing down. Several choices of body frames are common, depending on
the task at hand; examples include principal axes and stability axes, and will be
discussed in the following. There are advantages to each choice, but the stability
axes are very commonly used. In general though, the appropriate body frame
$(x, y, z)$ is chosen in such a way that the $y$-axis is orthogonal to the aircraft's plane
of symmetry, pointing to the right. The $x$-axis is chosen pointing forward (the
exact direction depends on the definition of the body frame), and the $z$-axis points
down, to complete the right-handed triad (see Fig. 27.3).

[Figure: body-frame $X$-, $Y$-, and $Z$-axes with the associated body rates $P$, $Q$, $R$.]

Fig. 27.2 Aircraft body frame



[Figure: body, stability, and wind $x$-axes shown relative to the relative wind.]

Fig. 27.3 Various body frames used for aircraft analysis

Note that for this set, in different flight equilibrium conditions, the axes will be
oriented differently with respect to the aircraft principal axes, so one must transform
(rotate) the principal inertia components between the frames. However, when the
vehicle undergoes motion with respect to the equilibrium, the stability axes remain
fixed to the airplane as if painted on (see Fig. 27.3).

27.2.1 Dynamics

Newton's laws of mechanics imply that in a body frame whose origin coincides with
the aircraft's center of mass,
$$F_B = m\dot v_B + \omega_B\times m v_B,\qquad(27.1)$$
$$T_B = J_B\dot\omega_B + \omega_B\times J_B\omega_B.\qquad(27.2)$$

In the above equations, the subscript $(\cdot)_B$ indicates vectors or tensors expressed
in the body frame, the dot indicates differentiation with respect to time, $v_B$ is
the velocity of the aircraft's center of mass with respect to the inertial frame,
$\omega_B$ is the angular velocity of the body frame with respect to the inertial frame,
$m$ is the aircraft's mass, and $J_B$ is its inertia tensor, which is constant in the
body frame. The operator $\times$ denotes the cross product that arises due to the
vector derivative transport theorem (see, e.g., Etkin 1982; Stevens and Lewis
2003; Nelson and Smith 1989). On the left-hand side, $F_B$ denotes the sum of the
forces acting on the vehicle (including aerodynamic, gravity, thrust, and buoyancy),
and $T_B$ denotes the sum of the moments of these forces about its center of mass.
In coordinates, Eqs. (27.1) and (27.2) are written as follows. Let $F_B = (X, Y, Z)$,
$T_B = (L, M, N)$, $v_B = (U, V, W)$, and $\omega_B = (P, Q, R)$. The inertia tensor is
defined as

$$J_B = \int_{\text{Aircraft}} \rho(s)\left[(s\cdot s)I - s\otimes s\right]ds,$$
where $\rho(s)$ indicates the density of the aircraft at the point $s$ in the body frame, $I$ is
the identity matrix, $\cdot$ indicates the dot product, and $\otimes$ is the Kronecker product, that
is, if $u\in\mathbb{R}^m$, $v\in\mathbb{R}^n$, then $w = u\otimes v$ is an $m\times n$ matrix with entries $w_{ij} = u_i v_j$.
In most cases of interest, the aircraft has a plane of symmetry, typically separating the
left and right halves. Assuming that the $y$-axis is orthogonal to the aircraft's plane
of symmetry, the inertia tensor will have the following structure:
$$J_B = \begin{bmatrix}J_{xx} & 0 & -J_{xz}\\ 0 & J_{yy} & 0\\ -J_{xz} & 0 & J_{zz}\end{bmatrix}.$$
(The cross moment of inertia $J_{xz} = \int_{\text{Aircraft}}\rho(x,y,z)\,xz\,dx\,dy\,dz$ will in general
depend on the choice of the body axes and will be zero in case principal axes
are chosen.) Substituting into Eqs. (27.1) and (27.2) (see Etkin and Reid 1996;
Blakelock 1965; Etkin 1982; Stevens and Lewis 2003; Nelson and Smith 1989;
Stengel 2004 for details),
$$\frac{1}{m}F_B \equiv \frac{1}{m}\begin{bmatrix}X\\ Y\\ Z\end{bmatrix} = \begin{bmatrix}\dot U + QW - RV\\ \dot V + RU - PW\\ \dot W + PV - QU\end{bmatrix},\qquad(27.3)$$
$$T_B \equiv \begin{bmatrix}\bar L\\ M\\ N\end{bmatrix} = \begin{bmatrix}J_{xx}\dot P - J_{xz}\dot R + QR(J_{zz} - J_{yy}) - PQ\,J_{xz}\\ J_{yy}\dot Q + PR(J_{xx} - J_{zz}) + (P^2 - R^2)J_{xz}\\ J_{zz}\dot R - J_{xz}\dot P + PQ(J_{yy} - J_{xx}) + QR\,J_{xz}\end{bmatrix}.\qquad(27.4)$$
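As a rough illustration of Eqs. (27.1)–(27.2), the following minimal numpy sketch evaluates the body-frame translational and rotational accelerations for given forces, moments, and states. It is not from the chapter; the mass, inertia, and flight-condition numbers are placeholders chosen only to make the example runnable.

```python
import numpy as np

def rigid_body_derivatives(F_B, T_B, v_B, w_B, m, J_B):
    """Body-frame accelerations implied by Eqs. (27.1)-(27.2).

    F_B, T_B : total force and moment in the body frame (3-vectors)
    v_B, w_B : body-frame velocity and angular velocity (3-vectors)
    m, J_B   : mass (scalar) and inertia tensor (3x3), constant in the body frame
    """
    v_dot = F_B / m - np.cross(w_B, v_B)                          # from F = m(v_dot + w x v)
    w_dot = np.linalg.solve(J_B, T_B - np.cross(w_B, J_B @ w_B))  # from T = J w_dot + w x (J w)
    return v_dot, w_dot

# Illustrative (made-up) values: a 10 kg vehicle in steady flight with a small pitch moment
J_B = np.diag([0.8, 1.1, 1.6])      # kg m^2, principal axes assumed for simplicity
v_dot, w_dot = rigid_body_derivatives(
    F_B=np.zeros(3), T_B=np.array([0.0, 0.2, 0.0]),
    v_B=np.array([20.0, 0.0, 0.0]), w_B=np.zeros(3), m=10.0, J_B=J_B)
print(v_dot, w_dot)
```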

27.2.2 Kinematics

Equations (27.1) and (27.2) give the evolution over time of the velocity of the
aircraft's center of mass and of the aircraft's angular velocity with respect to the
inertial frame (expressed in the body frame). In order to reconstruct the position and
attitude (i.e., orientation) of the aircraft as a function of time, it is necessary to write
equations for the aircraft's kinematics.
Let $R_{IB}$ be the rotation matrix that maps vectors in the body frame to vectors
in the inertial frame. Given a vector $u\in\mathbb{R}^3$, define a hat operator such that the
matrix $(u)^\wedge = \hat u$ is the unique matrix satisfying $\hat u v = u\times v$, $\forall v\in\mathbb{R}^3$. In coordinates,
given $u = (u_1, u_2, u_3)$, one gets
$$\hat u = \begin{bmatrix}0 & -u_3 & u_2\\ u_3 & 0 & -u_1\\ -u_2 & u_1 & 0\end{bmatrix}.$$

The aircraft kinematics can be written as
$$\dot p = R_{IB}\,v_B,\qquad(27.5)$$
$$\dot R_{IB} = R_{IB}\,\hat\omega_B,\qquad(27.6)$$
where $p$ is the position of the aircraft's center of mass. Even though the position and
attitude of the aircraft can be completely specified by (27.5) and (27.6), maintaining
all components of the rotation matrix $R_{IB}$ may be expensive from the computational
point of view, since rotation matrices are not a minimal representation of aircraft
attitude. In practice, several other methods to represent an aircraft's attitude are
commonly used and will be discussed in the following sections. However, some
additional properties of rotation matrices are discussed first.
A first point to note is that the spectrum of any matrix in $SO(3)$ has the form
$\mathrm{eig}(R) = \{1,\ \cos\bar\theta \pm i\sin\bar\theta\}$. Hence, any matrix $R$ admits a unit eigenvector $v$
such that $Rv = v$. Such an eigenvector identifies a fixed axis for the rotation $R$.
The amplitude of the rotation is given by the angle $\bar\theta$. Since the trace of a matrix
is the sum of its eigenvalues, it is the case that $\mathrm{Tr}(R) = 1 + 2\cos\bar\theta$. Conversely,
given a rotation angle $\bar\theta$ and a fixed axis unit vector $v$, the corresponding rotation is
computed through Rodrigues' formula:
$$\mathrm{Rot}(\bar\theta, v) = I + \sin\bar\theta\,\hat v + (1 - \cos\bar\theta)\,\hat v^2.\qquad(27.7)$$
The pair $(\bar\theta, v)$ is also referred to as the exponential coordinates of a rotation matrix
$R = \mathrm{Rot}(\bar\theta, v)$, since $\mathrm{Rot}(\bar\theta, v) = \exp(\bar\theta\,\hat v)$.

27.2.2.1 Quaternions
A widely used parametrization of rotations that requires fewer parameters than
rotation matrices is based on unit quaternions, that is, unit vectors in $\mathbb{R}^4$ (Kuipers
2002). A quaternion $q = (\bar q, \vec q)$ is typically written in terms of a scalar part $\bar q\in\mathbb{R}$
and a vector part $\vec q\in\mathbb{R}^3$. A unit quaternion is such that
$$q\cdot q = \bar q^2 + \vec q\cdot\vec q = 1.$$
The set of unit vectors in $\mathbb{R}^4$ is denoted as the sphere $\mathbb{S}^3$. A unit quaternion
$q = (\bar q, \vec q)\in\mathbb{S}^3$ defines a rotation
$$R = \mathrm{Rot}\left(2\arccos\bar q,\ \vec q/\|\vec q\|\right).\qquad(27.8)$$
In other words, the vector part of a unit quaternion is parallel to the fixed axis of
the rotation, and the scalar part determines the rotation angle. Conversely, a rotation
matrix $R = \mathrm{Rot}(\bar\theta, v)$ can be represented by the unit quaternion
$$q = \left(\cos(\bar\theta/2),\ \sin(\bar\theta/2)\,v\right).\qquad(27.9)$$

Note that this representation is not unique, since for all $q\in\mathbb{S}^3$, $q$ and $-q$ map to the
same rotation matrix. (In other words, $\mathbb{S}^3$ is a double covering of $SO(3)$.)
Quaternions are composed through a multiplication operation, indicated with $\otimes$,
defined as follows:
$$q_1\otimes q_2 = \left(\bar q_1\bar q_2 - \vec q_1\cdot\vec q_2,\ \ \bar q_1\vec q_2 + \bar q_2\vec q_1 + \vec q_1\times\vec q_2\right),$$
or in matrix form
$$q_1\otimes q_2 = \begin{bmatrix}\bar q_1 & -\vec q_1^{\,T}\\ \vec q_1 & \bar q_1 I + \hat{\vec q}_1\end{bmatrix}q_2.$$
It can be verified that quaternion multiplication as defined above is consistent
with matrix multiplication, in the sense that if $R_1$ and $R_2$ are rotation matrices
corresponding (according to (27.8)) to the unit quaternions $q_1$ and $q_2$, respectively,
then $R_1R_2$ corresponds to $q_1\otimes q_2$. Also, for any unit quaternion $q = (\bar q, \vec q)$, its
conjugate quaternion $q^* = (\bar q, -\vec q)$ corresponds to the inverse rotation, in the
sense that $q\otimes q^* = q^*\otimes q = (1, \mathbf{0})$.
Finally, quaternion kinematics take the form
$$\dot q_{IB} = \frac{1}{2}(0, \omega_I)\otimes q_{IB} = \frac{1}{2}q_{IB}\otimes(0, \omega_B).\qquad(27.10)$$
This equation can be written in matrix form as follows:
$$\dot q = \frac{1}{2}\Omega(\omega_B)\,q,\qquad(27.11)$$
where
$$\Omega(\omega) \equiv \begin{bmatrix}0 & -P & -Q & -R\\ P & 0 & R & -Q\\ Q & -R & 0 & P\\ R & Q & -P & 0\end{bmatrix}.\qquad(27.12)$$
Quaternions are often the preferred method to model aircraft kinematics, since
storing and multiplying quaternions require less memory and computation time
than storing and multiplying rotation matrices. On the other hand, the fact that
the set of unit quaternions is a double covering of the set of rotation matrices
can cause stability and robustness issues (Chaturvedi et al. 2011). Numerical
errors in quaternion propagation can result in the quaternion losing its normal-
ization constraint. It is common practice therefore to normalize a quaternion
to ensure kqk2 D 1 if it is numerically integrated for control or navigation
purposes.
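The renormalization practice mentioned above is easy to illustrate. The following is a minimal numpy sketch (not from the chapter) that takes one explicit-Euler step of Eq. (27.11) with the $\Omega(\omega)$ matrix of Eq. (27.12) and then enforces the unit-norm constraint; a scalar-first quaternion convention and the integration step size are assumptions of the example.

```python
import numpy as np

def omega_matrix(w):
    """Omega(w) from Eq. (27.12), with w = (P, Q, R)."""
    P, Q, R = w
    return np.array([[0.0, -P,  -Q,  -R],
                     [P,    0.0,  R,  -Q],
                     [Q,   -R,   0.0,  P],
                     [R,    Q,  -P,   0.0]])

def propagate_quaternion(q, w, dt):
    """One explicit-Euler step of q_dot = 0.5 * Omega(w) * q, then renormalize."""
    q = q + 0.5 * dt * omega_matrix(w) @ q
    return q / np.linalg.norm(q)          # enforce ||q|| = 1 after numerical integration

q = np.array([1.0, 0.0, 0.0, 0.0])        # scalar-first quaternion, zero attitude
for _ in range(100):                       # constant roll rate of 0.1 rad/s for 1 s
    q = propagate_quaternion(q, np.array([0.1, 0.0, 0.0]), dt=0.01)
print(q)
```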

27.2.2.2 Euler Angles


A rotation can also be parameterized through a sequence of three elementary
rotations about coordinate axes. While many choices are possible, in the field of
aircraft flight dynamics and control, it is customary to use the roll ($\phi$), pitch ($\theta$), and
yaw ($\psi$) Euler angles, defined as follows.

Fig. 27.4 Definition of the three Euler angles used and the associated sequence of rotations

Let $(X, Y, Z)$ be unit vectors defining the inertial frame, and let $e_1, e_2, e_3$ be
the columns of the identity matrix. Starting with a body frame coinciding with the
inertial frame, rotate it by an angle $\psi$ (yaw) about the $Z$-axis (coinciding with
the third body axis), obtaining a first intermediate body frame given by $R_{IB'} =
\mathrm{Rot}(\psi, e_3) = [x', y', z']$ (with $z' = Z$, see Fig. 27.4). Then, rotate this body frame
by an angle $\theta$ (pitch) about the $y'$-axis, obtaining a second intermediate body frame
given by $R_{IB''} = R_{IB'}\mathrm{Rot}(\theta, e_2) = [x'', y'', z'']$ (with $y'' = y'$). Finally, rotate this
body frame by an angle $\phi$ (roll) about the $x''$-axis, obtaining the final body frame
$R_{IB} = R_{IB''}\mathrm{Rot}(\phi, e_1) = [x, y, z]$, with $x = x''$. In summary, the rotation matrix
$R_{IB}$ is computed as a function of the roll, pitch, and yaw Euler angles $(\phi, \theta, \psi)$ as

$$R_{IB} = \mathrm{Rot}(\psi, e_3)\,\mathrm{Rot}(\theta, e_2)\,\mathrm{Rot}(\phi, e_1)$$
$$= \begin{bmatrix}\cos\psi\cos\theta & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi\\ \sin\psi\cos\theta & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi\\ -\sin\theta & \cos\theta\sin\phi & \cos\theta\cos\phi\end{bmatrix}.\qquad(27.13)$$

Conversely, given a rotation matrix $R$, the Euler angles can be obtained as
$$\theta = -\arcsin R_{31},\qquad \phi = \arctan\frac{R_{32}}{R_{33}},\qquad \psi = \arctan\frac{R_{21}}{R_{11}},$$
where the 4-quadrant arctangent should be taken for $\phi$ and $\psi$.


A direct relationship can also be given between Euler angles and quaternions.
Let the components of a quaternion be given by $q = [\bar q, q_x, q_y, q_z]$; then
$$\theta = -\arcsin\left[2(q_xq_z - \bar q q_y)\right],$$
$$\phi = \arctan2\left[2(q_yq_z + \bar q q_x),\ 1 - 2(q_x^2 + q_y^2)\right],\qquad(27.14)$$
$$\psi = \arctan2\left[2(q_xq_y + \bar q q_z),\ 1 - 2(q_y^2 + q_z^2)\right],$$
and
$$\bar q = \pm\left(\cos(\phi/2)\cos(\theta/2)\cos(\psi/2) + \sin(\phi/2)\sin(\theta/2)\sin(\psi/2)\right),\qquad(27.15)$$
$$q_x = \pm\left(\sin(\phi/2)\cos(\theta/2)\cos(\psi/2) - \cos(\phi/2)\sin(\theta/2)\sin(\psi/2)\right),\qquad(27.16)$$
$$q_y = \pm\left(\cos(\phi/2)\sin(\theta/2)\cos(\psi/2) + \sin(\phi/2)\cos(\theta/2)\sin(\psi/2)\right),\qquad(27.17)$$
$$q_z = \pm\left(\cos(\phi/2)\cos(\theta/2)\sin(\psi/2) - \sin(\phi/2)\sin(\theta/2)\cos(\psi/2)\right),\qquad(27.18)$$
where the sign is arbitrary but must be consistent.
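The conversions of Eqs. (27.14)–(27.18) are straightforward to code. The sketch below is a minimal Python rendering (scalar-first quaternion convention assumed, positive sign chosen in Eqs. (27.15)–(27.18)); the round-trip check at the end is only an illustration.

```python
import numpy as np

def euler_to_quaternion(phi, theta, psi):
    """Roll/pitch/yaw to a scalar-first unit quaternion, Eqs. (27.15)-(27.18)."""
    cr, sr = np.cos(phi / 2), np.sin(phi / 2)
    cp, sp = np.cos(theta / 2), np.sin(theta / 2)
    cy, sy = np.cos(psi / 2), np.sin(psi / 2)
    return np.array([cr * cp * cy + sr * sp * sy,
                     sr * cp * cy - cr * sp * sy,
                     cr * sp * cy + sr * cp * sy,
                     cr * cp * sy - sr * sp * cy])

def quaternion_to_euler(q):
    """Scalar-first unit quaternion to roll/pitch/yaw, Eq. (27.14)."""
    qs, qx, qy, qz = q
    phi = np.arctan2(2 * (qy * qz + qs * qx), 1 - 2 * (qx**2 + qy**2))
    theta = -np.arcsin(2 * (qx * qz - qs * qy))
    psi = np.arctan2(2 * (qx * qy + qs * qz), 1 - 2 * (qy**2 + qz**2))
    return phi, theta, psi

# Round-trip check on an arbitrary attitude
print(quaternion_to_euler(euler_to_quaternion(0.1, -0.2, 0.3)))
```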


The kinematics of the Euler angles can be obtained by differentiating (27.13)
with respect to time and recalling (27.6). After simplification, one obtains
$$\begin{bmatrix}\dot\phi\\ \dot\theta\\ \dot\psi\end{bmatrix} = \begin{bmatrix}1 & \sin\phi\tan\theta & \cos\phi\tan\theta\\ 0 & \cos\phi & -\sin\phi\\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta\end{bmatrix}\begin{bmatrix}P\\ Q\\ R\end{bmatrix}.\qquad(27.19)$$

Singularities are a potential problem for an Euler angle representation of the
attitude kinematics. In the case of roll, pitch, and yaw Euler angles, the singularities
at $\theta = \pm 90^\circ$ can cause computational difficulties. Hence, the recommended
practice, especially for agile vehicles, such as fighter aircraft or small UAVs, that
perform large-angle attitude maneuvers, is to use quaternions (or rotation matrices)
to model the aircraft's kinematics. On the other hand, since Euler angles may be

more intuitive to work with, it is common practice to use them as a front end for
humans, for example, to specify initial conditions and to present simulation or flight
test results.
Equations (27.3)–(27.5) and (27.11)–(27.14) can be combined to form the
nonlinear rigid-body equations of motion for a UAV. The next logical step is
to discuss how the forces and moments acting on the UAV are generated. The
full equations of motion will then be linearized to develop models for the linear
control design.

27.2.3 Forces

Forces and torques acting on a UAV arise from a variety of sources, including
aerodynamic, gravity, and thrust. In general, the forces and torques will depend on
the aircraft's position and attitude, on its linear and angular velocity with respect to the
surrounding air, and on the control settings.
The aerodynamic forces are typically resolved into two components, lift (L) and
drag (D) (see Fig. 27.5). The aerodynamic lift force is perpendicular to the relative
wind vector, while the drag force resists the vehicle motion along the relative wind.
The direction of the aircraft's velocity relative to the wind with respect to the body
frame is expressed by two angles, which can be thought of as spherical coordinates.
The sideslip angle $\beta$ is the angle formed by the aircraft's velocity with the aircraft's
symmetry plane. The angle of attack $\alpha$ is the angle formed by the projection of the
aircraft velocity on the aircraft's symmetry plane with the $x$ body axis.
Aerodynamic forces primarily depend on the angle of attack $\alpha$, on the sideslip $\beta$,
and on the dynamic pressure $\bar Q = \frac{1}{2}\rho V_T^2$, where $\rho$ is the air density, which depends
on the altitude, and $V_T$ is the free-stream airspeed of the aircraft.

Fig. 27.5 Lift and drag acting on the aircraft

The lift and drag forces on the body will be in the plane of the $x$- and $z$-axes of the
wind axis system shown in Fig. 27.5. These can be rotated into the body frame using
a rotation matrix that is a function of the angles $\alpha$ and $\beta$:
$$R_{BW} = \begin{bmatrix}\cos\alpha\cos\beta & -\cos\alpha\sin\beta & -\sin\alpha\\ \sin\beta & \cos\beta & 0\\ \sin\alpha\cos\beta & -\sin\alpha\sin\beta & \cos\alpha\end{bmatrix}.\qquad(27.20)$$
In this case, the aerodynamic forces in the body frame can be written in terms of lift
and drag (assuming no side forces):
$$(F_{\text{aero}})_B = \begin{bmatrix}X_a\\ Y_a\\ Z_a\end{bmatrix} = R_{BW}\begin{bmatrix}-D\\ 0\\ -L\end{bmatrix}.\qquad(27.21)$$
Propulsive forces are typically contained within the aircraft's symmetry plane,
possibly with a small angular offset $\epsilon_T$ with respect to the $x$ body axis, yielding
$$(F_{\text{prop}})_B = \begin{bmatrix}X_p\\ Y_p\\ Z_p\end{bmatrix} = \|F_{\text{prop}}\|\begin{bmatrix}\cos\epsilon_T\\ 0\\ -\sin\epsilon_T\end{bmatrix}.\qquad(27.22)$$

Given the usual (NED) choice of inertial axes, wherein the $z$ inertial axis points
down, the forces due to gravity can be written as
$$(F_{\text{gravity}})_B = \begin{bmatrix}X_g\\ Y_g\\ Z_g\end{bmatrix} = mg\,R_{IB}^T\begin{bmatrix}0\\ 0\\ 1\end{bmatrix} = mg\begin{bmatrix}-\sin\theta\\ \cos\theta\sin\phi\\ \cos\theta\cos\phi\end{bmatrix},\qquad(27.23)$$

where g is the gravity acceleration.
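To make the wind-to-body rotation of Eqs. (27.20)–(27.21) and the gravity resolution of Eq. (27.23) concrete, here is a short numpy sketch. It is only an illustration of those equations; the angle-of-attack, lift, drag, and mass values are made-up placeholders.

```python
import numpy as np

def aero_forces_body(alpha, beta, L, D):
    """Rotate lift and drag from the wind frame into the body frame, Eqs. (27.20)-(27.21)."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    R_BW = np.array([[ca * cb, -ca * sb, -sa],
                     [sb,       cb,       0.0],
                     [sa * cb, -sa * sb,  ca]])
    return R_BW @ np.array([-D, 0.0, -L])

def gravity_forces_body(phi, theta, m, g=9.81):
    """Gravity resolved into the body frame, Eq. (27.23)."""
    return m * g * np.array([-np.sin(theta),
                             np.cos(theta) * np.sin(phi),
                             np.cos(theta) * np.cos(phi)])

# Illustrative numbers (not from the chapter): 5 deg angle of attack, wings level
F_aero = aero_forces_body(np.radians(5), 0.0, L=120.0, D=10.0)
F_grav = gravity_forces_body(0.0, np.radians(5), m=12.0)
print(F_aero, F_grav)
```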

27.2.4 Linearization of the Equations of Motion

The right-hand sides of Eqs. (27.3) and (27.4) are nonlinear and typically far more
complicated than can be addressed by standard control techniques, and are more
complicated than necessary for most flight regimes. Furthermore, the left-hand sides,
which specify the total forces and moments acting on the vehicle, are even more
complicated. Thus, numerous simplifications have been developed.

27.2.4.1 Relative Equilibria


The standard approach is to assume that the vehicle is flying in a relative equi-
librium condition and then linearize the equations of motion about this nominal

flight condition. In the case of aircraft dynamics, a relative equilibrium is defined as
a steady-state trajectory along which $\dot\omega_B = 0$ and $\dot v_B = 0$, and the control inputs
are maintained fixed or trimmed to some value $\delta_0$. Clearly, such a trajectory will
be such that the linear and angular velocities in the body frame are constant. These
nominal values of the linear and angular velocities will be denoted by $v_0$ and $\omega_0$,
respectively.
At equilibrium conditions, after writing out explicitly the dependency of the
forces and moments, Eqs. (27.1) and (27.2) take the form
$$F_B(p, R_{IB}, v_0, \omega_0, \delta_0) = \omega_0\times m v_0,\qquad(27.24)$$
$$T_B(p, R_{IB}, v_0, \omega_0, \delta_0) = \omega_0\times J_B\omega_0.\qquad(27.25)$$

Note that the position $p$ and attitude $R_{IB}$ change along the trajectory, according to
the kinematics equations $\dot p = R_{IB}v_0$ and $\dot R_{IB} = R_{IB}\hat\omega_0$. For Eqs. (27.24) and
(27.25) to hold over time, it is necessary that the left-hand sides do not change
along the trajectory, even though the aircraft's position and attitude change. In the
case of aircraft dynamics, and over small vertical excursions (i.e., neglecting the
dependency of the air density on altitude), it can be safely assumed that the body
forces and moments are invariant to translations and to rotations about a vertical
(inertial) axis, that is,
$$F_B(p, R_{IB}, v_0, \omega_0, \delta_0) = F_B(p + \Delta p,\ \mathrm{Rot}(\Delta\psi, e_3)R_{IB},\ v_0, \omega_0, \delta_0),$$
$$T_B(p, R_{IB}, v_0, \omega_0, \delta_0) = T_B(p + \Delta p,\ \mathrm{Rot}(\Delta\psi, e_3)R_{IB},\ v_0, \omega_0, \delta_0),$$
for any translation $\Delta p$ and for any heading rotation $\Delta\psi$.
In order for Eqs. (27.24) and (27.25) to hold throughout the trajectory, and hence
for the trim condition $(v_0, \omega_0, \delta_0)$ to be a valid relative equilibrium, it is necessary
that $R_{IB}\,\omega_0 = [0, 0, \dot\psi_0]^T$ for some constant heading rate $\dot\psi_0$. In other words, in
the inertial frame, trajectories corresponding to relative equilibria take the form of
circular helices with a vertical axis, flown at constant speed, angle of attack, and
sideslip angle. In the special case in which $\dot\psi_0 = 0$, that is, on relative equilibria
corresponding to straight-line motion, the total forces and moments are zero, but
this is not true for the general case.
When analyzing the dynamics of an aircraft near a relative equilibrium, it is
convenient to choose the body frame in such a way that the angle of attack is zero
at equilibrium. In other words, the x body axis is chosen in such a way that the
relative wind direction is in the .x; y/ body plane at equilibrium. Such a body frame
is called the stability frame, due to its importance in the aircraft stability analysis.
Notice that the stability frame depends on the particular relative equilibrium under
consideration; for different flight conditions, stability frames will in general be
different.
In the following, a class of symmetric relative equilibria is considered, along
which the sideslip, roll, and yaw angles are zero. To proceed with the linearization
of the dynamics for various flight conditions, define the perturbed velocities and
accelerations in terms of the nominal values, as given in Table 27.1.

Table 27.1 Definition of the perturbation variables about the equilibrium condition

                  Nominal         Perturbed                 Perturbed acceleration
Velocities        $U_0$           $U = U_0 + u$             $\dot U = \dot u$
                  $W_0 = 0$       $W = w$                   $\dot W = \dot w$
                  $V_0 = 0$       $V = v$                   $\dot V = \dot v$
Angular rates     $P_0 = 0$       $P = p$                   $\dot P = \dot p$
                  $Q_0 = 0$       $Q = q$                   $\dot Q = \dot q$
                  $R_0 = 0$       $R = r$                   $\dot R = \dot r$
Angles            $\Theta_0$      $\Theta = \Theta_0 + \theta$   $\dot\Theta = \dot\theta$
                  $\Phi_0 = 0$    $\Phi = \phi$             $\dot\Phi = \dot\phi$
                  $\Psi_0 = 0$    $\Psi = \psi$             $\dot\Psi = \dot\psi$

The assumption herein is that the perturbations to the nominal are much smaller in
magnitude than the nominal (i.e., $|u| \ll |U_0|$).
A key element of the complexity here is that typically the aerodynamic, thrust,
and gravity forces and moments will also be perturbed by the perturbed motion of
the vehicle; these perturbations are denoted in the following as $\Delta X, \ldots, \Delta N$:
$$\begin{bmatrix}\Delta X\\ \Delta Y\\ \Delta Z\end{bmatrix} = m\begin{bmatrix}\dot u\\ \dot v + rU_0\\ \dot w - qU_0\end{bmatrix}\qquad(27.26)$$
$$\begin{bmatrix}\Delta \bar L\\ \Delta M\\ \Delta N\end{bmatrix} = \begin{bmatrix}J_{xx}\dot p - J_{xz}\dot r\\ J_{yy}\dot q\\ J_{zz}\dot r - J_{xz}\dot p\end{bmatrix}\qquad(27.27)$$
where the overbar $\bar{(\cdot)}$ is used to distinguish the rolling moment $\bar L$ from the lift $L$. The key
aerodynamic parameters are also perturbed:
$$\text{Total velocity:}\quad V_T = \left((U_0 + u)^2 + v^2 + w^2\right)^{1/2} \approx U_0 + u\qquad(27.28)$$
$$\text{Perturbed sideslip angle:}\quad \beta = \sin^{-1}(v/V_T) \approx v/U_0\qquad(27.29)$$
$$\text{Perturbed angle of attack:}\quad \alpha_x = \tan^{-1}(w/U) \approx w/U_0\qquad(27.30)$$

27.2.4.2 Stability Derivatives


To develop the equations of motion further, the terms $\Delta X, \ldots, \Delta N$ must be investigated.
Recall that at equilibrium, the net forces and moments must be zero. But since
the aerodynamic and gravity forces are a function of the equilibrium condition and the
perturbations about this equilibrium, in general, it is very difficult to determine the
exact nature of these aerodynamic perturbations. Thus, the standard approach (see
detailed discussions in Etkin (1982), Etkin and Reid (1996), and Nelson and Smith
(1989)) is to try to predict the changes in the aerodynamic forces and moments using
a first-order expansion in the key flight parameters:

$$\Delta X = \frac{\partial X}{\partial U}\Delta U + \frac{\partial X}{\partial W}\Delta W + \frac{\partial X}{\partial \dot W}\Delta\dot W + \frac{\partial X}{\partial\Theta}\Delta\Theta + \cdots + \frac{\partial X_g}{\partial\Theta}\Delta\Theta + \Delta X^c\qquad(27.31)$$
$$\phantom{\Delta X} = \frac{\partial X}{\partial U}u + \frac{\partial X}{\partial W}w + \frac{\partial X}{\partial\dot W}\dot w + \frac{\partial X}{\partial\Theta}\theta + \cdots + \frac{\partial X_g}{\partial\Theta}\theta + \Delta X^c,\qquad(27.32)$$
where $\partial X/\partial U$ is called a stability derivative, which is evaluated at the equilibrium
condition. Note that both dimensional and nondimensional forms are used. Clearly,
this is an approximation since it ignores any time lags in the aerodynamic forces
(it assumes that the forces are only functions of the instantaneous values).
As before, $X_g$ in Eq. (27.32) corresponds to the $X$ body force component due
to gravity, and similarly for the other perturbation gravity and thrust forces and
moments. Assuming $\Phi_0 = 0$, then
$$\frac{\partial X_g}{\partial\Theta}\bigg|_0 = -mg\cos\Theta_0,\qquad \frac{\partial Z_g}{\partial\Theta}\bigg|_0 = -mg\sin\Theta_0.$$
Also, $\Delta X^c$ denotes the perturbations due to the control actuators (e.g., rudder,
ailerons, elevators, thrust).
While Eq. (27.32) leads to simplified force (and moment) perturbations, it is still
clear that the linearized expansion can involve many terms $u, \dot u, \ddot u, \ldots, w, \dot w, \ddot w, \ldots$.
Thus, it is typical to only retain a few terms to capture the dominant effects. For
symmetric aircraft, this dominant behavior is most easily discussed in terms of the
symmetric variables $U$, $W$, $Q$ and forces/torques $X$, $Z$, and $M$, and the asymmetric
variables $V$, $P$, $R$, and forces/torques $Y$, $\bar L$, and $N$ (Table 27.2).
Furthermore, for most flight conditions, further simplifications can often be
made. For example, for truly symmetric flight, $Y$, $\bar L$, and $N$ will be exactly zero for
any value of $U$, $W$, $Q$. So the derivatives of the asymmetric forces/torques with respect
to the symmetric motion variables are zero. Also, the derivatives of the symmetric
forces/torques with respect to the asymmetric motion variables are small and can
often be neglected. Often derivatives with respect to the derivatives of the motion
variables can also be neglected, but $Z_{\dot w} \equiv \partial Z/\partial\dot w$ and $M_{\dot w} \equiv \partial M/\partial\dot w$ (which capture
the aerodynamic lag involved in forming a new pressure distribution on the wing in
response to the perturbed angle of attack) should not be neglected. Also, $\partial X/\partial q$ is often
negligibly small. A summary of the effects of the aerodynamic perturbations is as
follows:
(1) $\Delta X = \left(\frac{\partial X}{\partial U}\right)_0 u + \left(\frac{\partial X}{\partial W}\right)_0 w \;\Rightarrow\; \Delta X \sim u,\ \alpha_x \approx w/U_0$
(2) $\Delta Y \sim \beta \approx v/U_0,\ p,\ r$
(3) $\Delta Z \sim u,\ \alpha_x \approx w/U_0,\ \dot\alpha_x \approx \dot w/U_0,\ q$
(4) $\Delta \bar L \sim \beta \approx v/U_0,\ p,\ r$
(5) $\Delta M \sim u,\ \alpha_x \approx w/U_0,\ \dot\alpha_x \approx \dot w/U_0,\ q$
(6) $\Delta N \sim \beta \approx v/U_0,\ p,\ r$
The result is that, with these force and torque approximations, Eqs. (27.1), (27.3),
and (27.5) decouple from (27.2), (27.4), and (27.6).

Table 27.2 Standard stability derivatives for a typical aircraft planform, showing that numerous
terms are 0 or small. The other nonzero terms, denoted as $\ast$, must be computed using the methods
described in Etkin and Reid (1996) and Nelson and Smith (1989)

$\partial(\cdot)/\partial(\cdot)$     X       Y       Z       $\bar L$     M       N
u                                      $\ast$  0       $\ast$  0            $\ast$  0
v                                      0       $\ast$  0       $\ast$       0       $\ast$
w                                      $\ast$  0       $\ast$  0            $\ast$  0
p                                      0       $\ast$  0       $\ast$       0       $\ast$
q                                      0       0       $\ast$  0            $\ast$  0
r                                      0       $\ast$  0       $\ast$       0       $\ast$

In particular, Eqs. (27.1), (27.3), and (27.5) are the longitudinal dynamics in $u$, $w$, and $q$:
$$\begin{bmatrix}\Delta X\\ \Delta Z\\ \Delta M\end{bmatrix} = \begin{bmatrix}m\dot u\\ m(\dot w - qU_0)\\ J_{yy}\dot q\end{bmatrix}\qquad(27.33)$$
$$\approx \begin{bmatrix}\left(\frac{\partial X}{\partial U}\right)_0 u + \left(\frac{\partial X}{\partial W}\right)_0 w + \left(\frac{\partial X_g}{\partial\Theta}\right)_0\theta + \Delta X^c\\[4pt] \left(\frac{\partial Z}{\partial U}\right)_0 u + \left(\frac{\partial Z}{\partial W}\right)_0 w + \left(\frac{\partial Z}{\partial\dot W}\right)_0\dot w + \left(\frac{\partial Z}{\partial Q}\right)_0 q + \left(\frac{\partial Z_g}{\partial\Theta}\right)_0\theta + \Delta Z^c\\[4pt] \left(\frac{\partial M}{\partial U}\right)_0 u + \left(\frac{\partial M}{\partial W}\right)_0 w + \left(\frac{\partial M}{\partial\dot W}\right)_0\dot w + \left(\frac{\partial M}{\partial Q}\right)_0 q + \Delta M^c\end{bmatrix},\qquad(27.34)$$

and Eqs. (27.2), (27.4), and (27.6) are the lateral dynamics in v, p, and r:
$$\begin{bmatrix}\Delta Y\\ \Delta\bar L\\ \Delta N\end{bmatrix} = \begin{bmatrix}m(\dot v + rU_0)\\ J_{xx}\dot p - J_{xz}\dot r\\ J_{zz}\dot r - J_{xz}\dot p\end{bmatrix}\qquad(27.35)$$
$$\approx \begin{bmatrix}\left(\frac{\partial Y}{\partial V}\right)_0 v + \left(\frac{\partial Y}{\partial P}\right)_0 p + \left(\frac{\partial Y}{\partial R}\right)_0 r + \Delta Y^c\\[4pt] \left(\frac{\partial\bar L}{\partial V}\right)_0 v + \left(\frac{\partial\bar L}{\partial P}\right)_0 p + \left(\frac{\partial\bar L}{\partial R}\right)_0 r + \Delta\bar L^c\\[4pt] \left(\frac{\partial N}{\partial V}\right)_0 v + \left(\frac{\partial N}{\partial P}\right)_0 p + \left(\frac{\partial N}{\partial R}\right)_0 r + \Delta N^c\end{bmatrix}.\qquad(27.36)$$

27.2.4.3 Actuators
The primary actuators in the longitudinal direction are the elevators and thrust.
Clearly the thrusters and elevators play a key role in defining the steady-state
and equilibrium flight condition. The focus now is on determining how they also
influence the aircraft motion about this equilibrium condition. For example, if the
elevator is deflected, a perturbation in the vehicle motion would be expected, as
captured by $u(t)$, $w(t)$, $q(t)$.
Recall that $\Delta X^c$ is the perturbation in the total force in the $X$ direction as a
result of the actuator commands, that is, a force change due to an actuator deflection
from trim. As before, one can approximate these aerodynamic terms using the same
perturbation approach:
$$\Delta X^c = X_{\delta_{th}}\delta_{th} + X_{\delta_e}\delta_e,\qquad(27.37)$$
where $\delta_e$ is the deflection of the elevator from trim (down positive), $\delta_{th}$ is the change
in thrust, and $X_{\delta_e}$ and $X_{\delta_{th}}$ are the control stability derivatives. This results in
$$\begin{bmatrix}\Delta X^c\\ \Delta Z^c\\ \Delta M^c\end{bmatrix} = \begin{bmatrix}X_{\delta_{th}} & X_{\delta_e}\\ Z_{\delta_{th}} & Z_{\delta_e}\\ M_{\delta_{th}} & M_{\delta_e}\end{bmatrix}\begin{bmatrix}\delta_{th}\\ \delta_e\end{bmatrix}.\qquad(27.38)$$

A similar process can be performed in the lateral direction for the rudder and
ailerons.

Equations of Motion
Combining the results of the previous sections for the longitudinal dynamics yields
the equations of motion:

$$m\dot u = X_u u + X_w w - mg\cos\Theta_0\,\theta + \Delta X^c\qquad(27.39)$$
$$m(\dot w - qU_0) = Z_u u + Z_w w + Z_{\dot w}\dot w + Z_q q - mg\sin\Theta_0\,\theta + \Delta Z^c\qquad(27.40)$$
$$J_{yy}\dot q = M_u u + M_w w + M_{\dot w}\dot w + M_q q + \Delta M^c.\qquad(27.41)$$

If there is no roll/yaw motion, so that $q = \dot\theta$, and $M_{\dot w}$ and $Z_{\dot w}$ are assumed to be
small, these can be rewritten in state-space form using $\alpha = w/U_0$ as
$$\begin{bmatrix}m\dot u\\ m\dot\alpha\\ J_{yy}\dot q\\ \dot\theta\end{bmatrix} = \begin{bmatrix}X_u & X_\alpha & 0 & -mg\cos\Theta_0\\ Z_u/U_0 & Z_\alpha/U_0 & Z_q/U_0 + m & -(mg/U_0)\sin\Theta_0\\ M_u & M_\alpha & M_q & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}u\\ \alpha\\ q\\ \theta\end{bmatrix} + \begin{bmatrix}\Delta X^c\\ \Delta Z^c\\ \Delta M^c\\ 0\end{bmatrix}\qquad(27.42)$$
or equivalently as $\dot x = Ax + Bu$ where
$$A = \begin{bmatrix}\frac{X_u}{m} & \frac{X_\alpha}{m} & 0 & -g\cos\Theta_0\\[4pt] \frac{Z_u}{mU_0} & \frac{Z_\alpha}{mU_0} & 1 + \frac{Z_q}{mU_0} & -\frac{g}{U_0}\sin\Theta_0\\[4pt] \frac{M_u}{J_{yy}} & \frac{M_\alpha}{J_{yy}} & \frac{M_q}{J_{yy}} & 0\\[4pt] 0 & 0 & 1 & 0\end{bmatrix}\quad\text{and}\quad B = \begin{bmatrix}\frac{X_{\delta_{th}}}{m} & \frac{X_{\delta_e}}{m}\\[4pt] \frac{Z_{\delta_{th}}}{mU_0} & \frac{Z_{\delta_e}}{mU_0}\\[4pt] \frac{M_{\delta_{th}}}{J_{yy}} & \frac{M_{\delta_e}}{J_{yy}}\\[4pt] 0 & 0\end{bmatrix}.\qquad(27.43)$$

Note there are significant notational simplifications if the stability derivatives are
defined to include m and Jyy .
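As a numerical illustration of the structure in Eq. (27.43), the sketch below assembles $A$ for a small UAV and inspects its eigenvalues; for a conventional airframe these typically split into a fast, well-damped pair (short period) and a slow, lightly damped pair (phugoid). All stability-derivative values are made-up placeholders, not data from the chapter.

```python
import numpy as np

# Illustrative (placeholder) dimensional stability derivatives at trim
m, Jyy, U0, g, Theta0 = 10.0, 1.1, 20.0, 9.81, 0.0
Xu, Xa = -0.6, 2.0
Zu, Za, Zq = -4.0, -120.0, -8.0
Mu, Ma, Mq = 0.0, -20.0, -3.0

# Longitudinal A matrix following the layout of Eq. (27.43), states [u, alpha, q, theta]
A = np.array([
    [Xu / m,        Xa / m,        0.0,                -g * np.cos(Theta0)],
    [Zu / (m * U0), Za / (m * U0), 1 + Zq / (m * U0),  -(g / U0) * np.sin(Theta0)],
    [Mu / Jyy,      Ma / Jyy,      Mq / Jyy,            0.0],
    [0.0,           0.0,           1.0,                 0.0]])

print(np.linalg.eigvals(A))   # inspect the longitudinal modes
```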

Using a procedure similar to the longitudinal case, the equations of motion for
the lateral dynamics can be developed using the state vector:
$$x = \begin{bmatrix}\phi\\ \beta\\ p\\ r\end{bmatrix}\qquad\text{with inputs}\qquad u = \begin{bmatrix}\delta_a\\ \delta_r\end{bmatrix},$$
where $\dot\psi = r\sec\Theta_0$,
$$A = \begin{bmatrix}0 & 0 & 1 & \tan\Theta_0\\[2pt] \frac{g}{U_0}\cos\Theta_0 & \frac{Y_\beta}{mU_0} & \frac{Y_p}{mU_0} & \frac{Y_r}{mU_0} - 1\\[2pt] 0 & \frac{\bar L_\beta}{J'_{xx}} + J'_{zx}N_\beta & \frac{\bar L_p}{J'_{xx}} + J'_{zx}N_p & \frac{\bar L_r}{J'_{xx}} + J'_{zx}N_r\\[2pt] 0 & J'_{zx}\bar L_\beta + \frac{N_\beta}{J'_{zz}} & J'_{zx}\bar L_p + \frac{N_p}{J'_{zz}} & J'_{zx}\bar L_r + \frac{N_r}{J'_{zz}}\end{bmatrix}\qquad(27.44)$$
$$B = \begin{bmatrix}0 & 0 & 0\\ (mU_0)^{-1} & 0 & 0\\ 0 & (J'_{xx})^{-1} & J'_{zx}\\ 0 & J'_{zx} & (J'_{zz})^{-1}\end{bmatrix}\begin{bmatrix}Y_{\delta_a} & Y_{\delta_r}\\ \bar L_{\delta_a} & \bar L_{\delta_r}\\ N_{\delta_a} & N_{\delta_r}\end{bmatrix}\qquad(27.45)$$
and
$$J'_{xx} = (J_{xx}J_{zz} - J_{zx}^2)/J_{zz},\qquad J'_{zz} = (J_{xx}J_{zz} - J_{zx}^2)/J_{xx},\qquad J'_{zx} = J_{zx}/(J_{xx}J_{zz} - J_{zx}^2).\qquad(27.46)$$

Thus, in either case, the aircraft equations of motion can be written as
$$\dot x = Ax + Bu\qquad(27.47)$$
$$y = Cx + Du,\qquad(27.48)$$
where typically $D = 0$, but this depends on the sensor and actuator selection. One
can also convert the dynamics into transfer function form (for each sensor/actuator
pair) $y_i = G_{ij}(s)u_j$, with $G_{ij}(s) = C_i(sI - A)^{-1}B_j + D_{ij}$, that can be used in the
classical control design process.
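The state-space-to-transfer-function conversion mentioned above is routine with scipy. The following minimal sketch uses a two-state short-period approximation (states $\alpha$ and $q$) with placeholder numbers, purely to show the mechanics of computing $G_{ij}(s) = C_i(sI - A)^{-1}B_j + D_{ij}$.

```python
import numpy as np
from scipy import signal

# Short-period approximation with placeholder derivative values (not chapter data)
A = np.array([[-0.6, 0.96],
              [-18.2, -2.7]])
B = np.array([[-0.06],
              [-22.7]])          # single input: elevator deflection
C = np.array([[0.0, 1.0]])       # single output: pitch rate q
D = np.zeros((1, 1))

num, den = signal.ss2tf(A, B, C, D)
print("elevator-to-pitch-rate G(s):", num[0], "/", den)
```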

27.2.5 Wind Disturbances

In the presence of wind, the atmosphere is moving relative to the Earth frame (Nelson
and Smith 1989; Stevens and Lewis 2003). Therefore, the equations of motion
must be modified to account for the fact that the aerodynamic forces and moments
are functions of the relative motion between the aircraft and the atmosphere and not
of the inertial velocities. Thus, one must differentiate between the inertial velocity of
the vehicle $v_i$ and the velocity of the vehicle with respect to the air mass, $v_a$, resulting
from the wind $W_g$:
$$v_a = v_i - R_{BI}W_g.\qquad(27.49)$$
In this case, va should be used to calculate the aerodynamic forces and moments
rather than vi , and this quantity can be reliably estimated using an airspeed
measurement device. More specifically, in the $X$ direction, let $u$ be the aircraft
perturbation speed and $u_g$ be the gust speed in that direction; then the aircraft speed
with respect to the atmosphere is $u_a = u - u_g$. In this case, the linearization of the
aerodynamic forces and moments in Eq. (27.32) should be rewritten in terms of the
perturbations relative to the atmosphere:

$$\Delta X = \frac{\partial X}{\partial U}(u - u_g) + \frac{\partial X}{\partial W}(w - w_g) + \frac{\partial X}{\partial\dot W}(\dot w - \dot w_g) + \frac{\partial X}{\partial Q}(q - q_g) + \cdots + \frac{\partial X_g}{\partial\Theta}\theta + \cdots + \Delta X^c,\qquad(27.50)$$
but note that the gravity and control input terms remain the same. The rotational
aspects of the gusts are caused by spatial variations in the gust components, so that
$p_g = \frac{\partial w_g}{\partial y}$ and $q_g = \frac{\partial w_g}{\partial x}$. Repeating the linearization process outlined before leads
to a new input term

$$\dot x = Ax + Bu + B_w w,\qquad(27.51)$$

where, for example, in the longitudinal case,
$$B_w = \begin{bmatrix}-\frac{X_u}{m} & -\frac{X_\alpha}{m} & 0\\[4pt] -\frac{Z_u}{mU_0} & -\frac{Z_\alpha}{mU_0} & -\frac{Z_q}{mU_0}\\[4pt] -\frac{M_u}{J_{yy}} & -\frac{M_\alpha}{J_{yy}} & -\frac{M_q}{J_{yy}}\\[4pt] 0 & 0 & 0\end{bmatrix}\quad\text{and}\quad w = \begin{bmatrix}u_g\\ \alpha_g\\ q_g\end{bmatrix},\qquad(27.52)$$

and a similar operation can be performed for the lateral dynamics in terms of the
disturbance inputs $\beta_g$, $p_g$, and $r_g$.

The input w can now be used to model the effects of various steady and
stochastic wind models, as well as wind shear. Numerous stochastic models have
been developed for studying the effect of winds on the aircraft dynamics and
performance (Moorhouse and Woodcock 1982; Tatom and Smith 1981; Stevens and
Lewis 2003), but these must be used with care when applied to the low altitude
operations expected for most small-scale UAV flights.
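As a toy illustration of the gust input in Eq. (27.51), the sketch below simulates a two-state short-period model with a sinusoidal vertical gust. It uses the common simplification that a vertical gust $w_g$ changes the apparent angle of attack by $w_g/U_0$ and therefore enters through the aerodynamic $\alpha$-column of $A$ with the sign flipped; all numbers are placeholders, not chapter data.

```python
import numpy as np

U0 = 20.0
A = np.array([[-0.6, 0.96],      # states: [alpha, q], placeholder derivatives
              [-18.2, -2.7]])
Bw = -A[:, 0] / U0               # gust input column: alpha-column of A, scaled by 1/U0, sign flipped

dt, x = 0.01, np.zeros(2)
for k in range(1000):
    wg = 2.0 * np.sin(2 * np.pi * 0.5 * k * dt)   # 2 m/s sinusoidal vertical gust at 0.5 Hz
    x = x + dt * (A @ x + Bw * wg)                # explicit Euler step of x_dot = A x + Bw wg
print(x)                                          # perturbed alpha [rad] and q [rad/s] at t = 10 s
```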

27.2.6 Simplified Aircraft Equations of Motion for Path Planning

A simplified set of aircraft equations of motion can be derived for path planning
purposes. In deriving these equations, the aircraft roll rate, pitch rate, and yaw
rate dynamics are ignored and are replaced using kinematic approximations. The
resulting model describes the motion of a rigid point mass with kinematic path
constraints. For the purpose of this model, the position of the aircraft in an inertial
frame whose $x$-axis is parallel to the local horizon is denoted by $(x_e, y_e, z_e)$.
The flight path angle $\gamma$ denotes the angle between the local horizon and the
velocity vector $V_T$ of the aircraft (see Fig. 27.6). The heading angle $\psi$ is the angle
between the projection of $V_T$ onto the local horizontal plane and the $x$-axis of the
local inertial frame. The bank angle $\mu$ is the angle that the aircraft is banked about
the velocity vector $V_T$. The forces acting on the aircraft consist of the weight $mg$,
thrust $T$, lift $L$, and drag $D$. The equations for a point-mass model of a fixed-wing
aircraft can then be formulated by assuming small $\alpha$ and $\beta$ (Zhao and Tsiotras 2010;
Etkin and Reid 1996):

xP e D VT cos  cos ; (27.53)


yPe D VT cos  sin ; (27.54)
zPe D VT sin ; (27.55)
1
VPT D T  D  mg sin   ; (27.56)
m
1
P D L cos   mg cos   ; (27.57)
mVT

Fig. 27.6 Definition of the flight path angle



$$\dot\psi = \frac{L\sin\mu}{mV_T\cos\gamma},\qquad(27.58)$$

and the lift and drag of the aircraft can be approximated as

$$L = \bar Q S C_L\quad\text{and}\quad D = \bar Q S\left(C_{D_0} + KC_L^2\right),\qquad(27.59)$$
where $S$ is the wing surface area, $C_L$ is the lift coefficient, $C_{D_0}$ is the parasitic drag
coefficient, and $K$ is a constant dependent on the aircraft wing geometry.
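A minimal Python sketch of the point-mass model in Eqs. (27.53)–(27.59) is shown below, integrated with explicit Euler for a shallow banked turn. The mass, wing area, drag-polar coefficients, and commanded thrust/lift/bank values are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def point_mass_derivatives(state, T, L, mu, m=10.0, g=9.81):
    """State derivatives of Eqs. (27.53)-(27.58); drag from the polar of Eq. (27.59).

    state = (xe, ye, ze, VT, gamma, psi); T = thrust, L = lift, mu = bank angle.
    """
    xe, ye, ze, VT, gamma, psi = state
    rho, S, CD0, K = 1.225, 0.5, 0.03, 0.05          # placeholder air/vehicle parameters
    CL = 2 * L / (rho * VT**2 * S)                    # invert L = Qbar S CL
    D = 0.5 * rho * VT**2 * S * (CD0 + K * CL**2)     # Eq. (27.59)
    return np.array([
        VT * np.cos(gamma) * np.cos(psi),             # (27.53)
        VT * np.cos(gamma) * np.sin(psi),             # (27.54)
        -VT * np.sin(gamma),                          # (27.55)
        (T - D - m * g * np.sin(gamma)) / m,          # (27.56)
        (L * np.cos(mu) - m * g * np.cos(gamma)) / (m * VT),   # (27.57)
        L * np.sin(mu) / (m * VT * np.cos(gamma)),             # (27.58)
    ])

# Shallow banked turn at 20 m/s, 100 m altitude (z down), integrated for 10 s
state = np.array([0.0, 0.0, -100.0, 20.0, 0.0, 0.0])
for _ in range(1000):
    state = state + 0.01 * point_mass_derivatives(
        state, T=12.0, L=10.0 * 9.81 / np.cos(0.3), mu=0.3)
print(state)
```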

27.3 Flight Vehicle Control Using Linear Techniques

As presented above, even in the simplest case of decoupled linearized dynamics,


the equations of motion of an aircraft are quite complicated, with both the
longitudinal and lateral dynamics being described by a fourth-order system with
multiple actuator inputs and sensor outputs y. Furthermore, the speed of the natural
response of the system associated with the pole locations of the open-loop dynamics
is typically inconsistent with fast and/or stable response to commands, such
as waypoint tracking, trajectory tracking, and agile maneuvers. Finally, the dynamics of
several UAVs can be inherently unstable; typical examples include rotorcraft such
as helicopters, quadrotors, and ducted fans (see, e.g., Mettler 2003; Chowdhary and
Jategaonkar 2009). Thus, feedback control is required to stabilize and/or speed up
the response.
Given the complexity of the dynamics, there are two basic strategies for the
control design. The first is to continue the decomposition in the previous section
to identify components of the dynamics that are well controlled by specific choices
of the actuators and then perform successive loop closure (see, e.g., Lawrence et al.
2008; Johnson and Kannan 2005). In this case, the loops are nested by arranging
that the reference commands for the inner loop are provided by the outer-loop
controller. An example of this is shown in Fig. 27.13 in which the outermost position
control loop provides desired velocity commands using path following guidance
discussed in Sect. 27.4. The outer velocity control loop provides a reference (in
this case, a desired quaternion value) for the inner attitude control loop. One key
advantage of this approach is that it leads to a natural mechanism for handling limits
on flight variables (e.g., bank or pitch angles) and actuator inputs because
the reference commands can be saturated before being passed to the inner loop.
Each step of the control design process is simpler, but the nesting of the control
loops leads to some challenges. The general rule of thumb is to ensure that the
inner control loops result in fast dynamics, and then each successive loop added
is slower than the previous one. The primary difficulties here are to determine
what is meant by fast and slow and how to determine if there is too much interaction
between the inner/outer loops being closed (e.g., closing the outer loop might reduce
the performance of the inner loop, requiring a redesign).

The second approach is to design a controller for the full dynamics either
linear or nonlinear. The advantage of this approach is that it employs the power
of state space control approaches to handle the fully coupled dynamics. However, it
is difficult to handle actuator saturation and very hard to include state constraints.
Furthermore, unless done with extreme care, these controllers, especially in high
performance flight, can be very sensitive to modeling errors and omissions.
Having determined the architecture, the next step in any control design is to
determine the dynamics of the system of interest (i.e., the full set of dynamics
or the approximate inner loop dynamics). Having identified this, one should then
determine the requirements and the extent to which the dynamics meet these
goals. For example, there may be requirements on certain frequency (to ensure
the dynamics are fast) and damping (to ensure that the oscillations die out
quickly) specifications on the pole locations. There may also be requirements on
the maximum steady tracking error to a step command input. Since the open-loop
dynamics of the vehicle rarely satisfy these requirements, the typical approach is to
use linear feedback control to modify the pole locations and loop gains.

27.3.1 Standard Performance Specifications

Figure 27.7 shows a standard feedback control loop, where a typical performance
metric, the tracking error, can be written as
$$e = r - (y + v) = S(r - d_o - v) - SGd_i,$$
yielding $y = T(r - v) + Sd_o + SGd_i$, with $L = GG_c$, $S = (I + L)^{-1}$, and
$T = L(I + L)^{-1}$. So good tracking performance of $r(t)$ (a signal that typically has
low-frequency content) requires that $e$ be small, which translates to $\|S(j\omega)\|$ being
small at low frequencies, that is, $0 \le \omega \le \omega_l$. Furthermore, reducing the impact
of the sensor noise $v$ (which typically has high-frequency content) requires that $\|T(j\omega)\|$
be small for all $\omega \ge \omega_h$.

[Block diagram: r(t) → e → Gc(s) → u → G(s) → y, with disturbances di and do entering at the plant input and output and sensor noise entering the feedback path.]

Fig. 27.7 Classical control feedback loop



[Plot of |L(jω)| versus frequency, showing the high-gain region (A) below ω_l, the crossover frequency ω_c, and the low-gain region (B) above ω_h.]

Fig. 27.8 Standard loop shaping goals for the indirect design approach

Since $T(s) + S(s) = I\ \forall s$, one cannot make both
$\|S(j\omega)\|$ and $\|T(j\omega)\|$ small at the same frequencies, which represents a fundamental
design constraint.
There are two basic approaches to the design. The first is the indirect approach, which works
on $L$, the loop transfer function, rather than on $S$ and $T$. Much of classical control design
takes this approach. The direct approach, which works with $S$ and $T$, is discussed
in Sect. 27.3.4.4.
For the indirect approach, note that if $|L(j\omega)| \gg 1$, then $S = (1 + L)^{-1} \approx L^{-1}$,
so $|S| \ll 1$ and $|T| \approx 1$. Furthermore, if $|L(j\omega)| \ll 1$, then $S = (1 + L)^{-1} \approx 1$
and $T \approx L$, so $|T| \ll 1$. So this converts the performance requirements on $S$ and
$T$ into specifications on $L$, with the two regions in Fig. 27.8: (A) high loop gain,
which leads to good command following and disturbance rejection, and (B) low
loop gain, which leads to attenuation of any sensor noise. One must be careful at crossover,
when $|L(j\omega_c)| \approx 1$, which requires that $\arg L(j\omega_c) > -180^\circ$ to maintain stability
(Levine 1996; Franklin et al. 1994; Kuo 1991; Dorf and Bishop 1995; Skogestad
and Postlethwaite 2005). In summary, the typical control design challenges are to
achieve high enough gain at low frequency to obtain good performance, low enough
gain at high frequency to attenuate sensor noise effects, and sufficient stability
margins in the transition range.
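The bookkeeping above is easy to check numerically. The sketch below evaluates $L$, $S = 1/(1+L)$, and $T = L/(1+L)$ on a frequency grid for an illustrative loop transfer function (an integrator-plus-lag, chosen only as an example and not a model from the chapter), and reports the crossover, the phase margin, and the low- and high-frequency magnitudes of $S$ and $T$.

```python
import numpy as np

w = np.logspace(-2, 3, 400)          # frequency grid [rad/s]
s = 1j * w
L = 50.0 / (s * (s + 10.0))          # example loop transfer function L(s) = Gc(s) G(s)
S = 1.0 / (1.0 + L)                  # sensitivity
T = L / (1.0 + L)                    # complementary sensitivity

i_c = np.argmin(np.abs(np.abs(L) - 1.0))             # index closest to |L| = 1
crossover = w[i_c]
pm = 180.0 + np.degrees(np.angle(L[i_c]))             # phase margin in degrees
print(f"crossover ~ {crossover:.2f} rad/s, phase margin ~ {pm:.1f} deg")
print(f"|S| at 0.1 rad/s: {np.abs(S[np.argmin(np.abs(w - 0.1))]):.3f}")
print(f"|T| at 100 rad/s: {np.abs(T[np.argmin(np.abs(w - 100.0))]):.3f}")
```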

27.3.2 Classical Techniques

27.3.2.1 Classical and PID Controllers


There are three standard forms of the classical controller, which in the form
of Fig. 27.7 can be written as $u = G_c(s)e = \frac{N_c(s)}{D_c(s)}e$, where $e$ is the error
signal, typically the difference between the actual variable, for example, $\theta$, and the
commanded value $\theta_c$ ($e = \theta_c - \theta$).
The simplest is proportional feedback, which uses $G_c \equiv K_g$, a gain, so that
$N_c = D_c = 1$. This is adequate for some systems but is typically constrained
in what gains can be applied before at least some of the dynamics of the vehicle
are destabilized. Integral feedback uses $u(t) = K_i\int_0^t e(\tau)\,d\tau$, which means that
$G_c(s) = \frac{K_i}{s}$. This is typically used to reduce or eliminate steady-state error since,
if $e(\tau)$ is approximately constant, then the magnitude of $u(t)$ will grow and thus
hopefully correct the error.

Example 1. Consider the error response of $G_p(s) = \frac{1}{(s+a)(s+b)}$
($a > 0$, $b > 0$) to a step $r(t) = 1(t) \rightarrow r(s) = 1/s$, where
$$\frac{e}{r} = \frac{1}{1 + G_cG_p} = S(s) \;\rightarrow\; e(s) = \frac{r(s)}{1 + G_cG_p}$$
and $S(s)$ is the sensitivity transfer function for the closed-loop system. The final
value theorem ($\lim_{t\to\infty}e(t) = \lim_{s\to 0}se(s)$) can be used to analyze the error. So
with proportional control,
$$e_{ss} = \lim_{t\to\infty}e(t) = \lim_{s\to 0}\,s\,\frac{1}{s}\,\frac{1}{1 + K_gG_p(s)} = \frac{1}{1 + \frac{K_g}{ab}},$$
indicating that $e_{ss}$ can be made small, but only with a very large $K_g$. Note, however,
that with integral control, $\lim_{s\to 0}G_c(s) = \infty$, so $e_{ss} \to 0$. To summarize,
integral control improves the steady state, but this is at the expense of the
transient response, which typically gets worse because the system is less well
damped.
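The steady-state error prediction in Example 1 is easy to verify numerically. The sketch below uses placeholder values $a = 2$, $b = 3$, and $K_g = 30$; the closed loop under proportional feedback is $K_g/((s+a)(s+b) + K_g)$, and the final output of the step response should match $1 - e_{ss}$ with $e_{ss} = 1/(1 + K_g/(ab))$.

```python
from scipy import signal

a, b, Kg = 2.0, 3.0, 30.0
T_cl = signal.TransferFunction([Kg], [1.0, a + b, a * b + Kg])   # closed-loop transfer function
t, y = signal.step(T_cl)
print("final output:", y[-1], "   predicted ess:", 1.0 / (1.0 + Kg / (a * b)))
```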

Example 2. For $G_p(s) = \frac{1}{(s+a)(s+b)}$, add integral feedback to improve the steady-state
response. Increasing $K_i$ to increase the speed of the response pushes the poles
toward the imaginary axis, leading to a more oscillatory response (compare the
possible locus of roots of the closed-loop system shown in Figs. 27.9 and 27.10
using the two control approaches).

If the proportional and integral (PI) feedback are combined, then $G_c(s) = K_1 +
\frac{K_2}{s} = \frac{K_1 s + K_2}{s}$, which introduces a pole at the origin and a zero at $s = -K_2/K_1$. The
zero has the effect of reducing the tendency of the closed-loop poles to move into the
right-half plane as the gain is increased, and thus this PI combination solves some
of the problems with using just integral control (compare Figs. 27.10 and 27.11).

Fig. 27.9 Root locus for $G(s)$ in Example 2 using proportional feedback

The third approach is derivative feedback, which uses $u = K_d\dot e$ so that $G_c(s) =
K_d s$. Note that this will not help with the steady-state response since, if stable, at
steady state one should have $\dot e \approx 0$. However, it does provide feedback on the rate
of change of $e(t)$ so that the control can anticipate future errors.

Example 3. $G(s) = \frac{1}{(s-a)(s-b)}$ ($a > 0$, $b > 0$) with $G_c(s) = K_d s$. As shown
in Fig. 27.12, derivative feedback is very useful for pulling the root locus into the
left-hand plane or increasing the damping, leading to a more stable response. It is
typically used in combination with proportional feedback to form proportional-derivative
(PD) feedback $G_c(s) = K_1 + K_2 s$, which moves the zero in Fig. 27.12
away from the origin.

The three approaches discussed above can be combined into the standard PID
controller:
$$G_c(s) = K_p + \frac{K_I}{s} + K_D s.\qquad(27.60)$$
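A minimal discrete-time rendering of Eq. (27.60) is sketched below; the gains, sample time, and example error are placeholders. As discussed in the following paragraph, a practical implementation would also band-limit the derivative and guard the integrator, which are omitted here for brevity.

```python
class PID:
    """Minimal discrete implementation of the PID law of Eq. (27.60)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                      # integral term (no anti-windup shown)
        derivative = (error - self.prev_error) / self.dt      # finite-difference derivative (unfiltered)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g., a pitch-attitude loop running at 100 Hz with placeholder gains
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(error=0.05)
print(u)
```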

Fig. 27.10 Root locus for $G(s)$ in Example 2 with integral feedback

Typical difficulties in this case are that numerical differentiation of a noisy measured
signal can result in noisy or lagged estimates in practice. Furthermore, long-term
integration of a signal that has an unknown bias can lead to large control commands.
Thus, one would typically use band-limited differentiation/integration instead, by
rolling off the PD control with a high-frequency pole (or two) and only integrating
signals above a certain frequency (high-pass filter) in the PI controller. This leads to
a slightly different primary building block for the controller components that is of
the form
$$G_B(s) = K_c\frac{(s + z)}{(s + p)},\qquad(27.61)$$
which, depending on how the gains are selected, can be morphed into various
types of controllers. For example, if one picks $z > p$, with $p$ small, then $G_B(s) \approx
K_c\frac{(s+z)}{s}$, which is essentially a PI compensator. In this form, the compensator is
known as a lag. If instead $p$ is chosen such that $p \gg z$, then at low frequency
the impact of $\frac{p}{(s+p)}$ is small, so $G_B(s) = \frac{K_c}{p}(s + z)\frac{p}{(s+p)} \approx K_c'(s + z)$, which is
essentially a PD compensator. In this form, the compensator is called a lead.
Various algorithms exist for selecting the components of $G_B(s)$. One basic
procedure is to first use a lead controller to augment the phase margin of the
loop transfer function $L = G_c(s)G(s)$ at the crossover frequency $\omega_c$ to meet
the requirements (e.g., a desired phase margin).

Fig. 27.11 Root locus for $G(s)$ using proportional and integral feedback

The phase margin is a standard
stability margin for a system that measures the amount that the phase of the loop
transfer function $L(s)$ differs from $-180^\circ$ when $|L(j\omega)| = 1$. Typically one wants a
phase margin of greater than $30^\circ$, but other important performance metrics include
the gain margin, the peak of the allowable transient, rise and settling times, and the
time-delay margin (Dorf and Bishop 1995; Ozbay 2000).
1. Find the required phase lead $\phi_{\text{required}}$ from the performance specifications. Note that
$\phi_{\text{required}} = \text{PM} - (180^\circ + \angle G(j\omega_c))$.
2. Arrange to put $\phi_{\max}$ at the crossover frequency. Note that the maximum phase
added by a lead is $\sin\phi_{\max} = \frac{1 - \alpha}{1 + \alpha}$, where $\alpha = \frac{|z|}{|p|}$, which implies that $\alpha =
\frac{1 - \sin\phi_{\max}}{1 + \sin\phi_{\max}}$. Further, since the frequency of the maximum phase addition is $\omega_{\max} =
\sqrt{|z|\,|p|}$, set $\omega_c^2 = |p|\,|z|$.
3. Select $K_c$ so that the crossover frequency is at $\omega_c$.
To increase the low-frequency gain to improve the steady-state response, one could
use an integrator or use a lag to increase the loop gain. This is done by picking
$k_c$ to yield the desired low-frequency gain increase for the loop transfer function
(which now includes the vehicle dynamics and the lead controller if added). It is
also important to include the constraint that $\lim_{|s|\to\infty}|G_{\text{lag}}(s)| = 1$, so $k_c = |z|/|p|$,
to avoid impacting the higher-frequency dynamics.

Fig. 27.12 Root locus for $G(s)$ using derivative feedback

A rule of thumb is to limit the
frequency of the zero of the lag compensator so that there is a minimal impact of
the phase lag at $\omega_c$. Thus, with $z \approx \omega_c/10$ and $k_c$ both specified, one can solve for
the pole location using $|p| = |z|/k_c$.
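The lead-design steps 1–3 above map directly into a few lines of code. The sketch below applies them to an illustrative plant $G(s) = 1/(s(s+1))$ with a placeholder target of $\omega_c = 3$ rad/s and a $50^\circ$ phase margin; the plant and targets are assumptions made only for the example.

```python
import numpy as np

wc, pm_desired = 3.0, 50.0
G = lambda s: 1.0 / (s * (s + 1.0))                         # illustrative plant

phase_G = np.degrees(np.angle(G(1j * wc)))                  # plant phase at wc
phi_req = np.radians(pm_desired - (180.0 + phase_G))        # step 1: required phase lead
alpha = (1 - np.sin(phi_req)) / (1 + np.sin(phi_req))       # step 2: alpha = |z|/|p|
z = wc * np.sqrt(alpha)                                     # place wc at the geometric mean
p = wc / np.sqrt(alpha)
Gc_unit = lambda s: (s + z) / (s + p)
Kc = 1.0 / abs(Gc_unit(1j * wc) * G(1j * wc))               # step 3: set |L(j wc)| = 1

L = lambda s: Kc * Gc_unit(s) * G(s)
print("lead: z =", z, " p =", p, " Kc =", Kc)
print("achieved phase margin [deg]:", 180.0 + np.degrees(np.angle(L(1j * wc))))
```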

27.3.3 Successive Loop-Closure Examples

As an example of the successive loop-closure process in Fig. 27.13, the following


considers the combination of the velocity and attitude control loops, wherein the
attitude reference command is created by the velocity controller. Note that for small
angles, the attitude controller could be based on independent PD control of each
Euler angle, but the following presents a more general attitude control approach that
is based on the quaternion representation of the vehicle attitude. A second example
of the successive loop-closure approach is provided in Sect. 27.4.

27.3.3.1 Quaternion-Based Attitude Control


The sequential rotation properties of quaternions can be used to devise a gener-
alized attitude controller for flight vehicles (Wie and Barba 1985; Knoebel 2007;
Hall et al. 2008; Michini 2009).


[Block diagram: x_ref → Position Controller → v_ref → Velocity Controller → q_d → Attitude Controller → u → Aircraft Dynamics, with the Navigation System feeding back x_a, v_a, q_a, ω_a.]

Fig. 27.13 Example of a possible successive loop-closure control architecture where the measured
quantities ($x_a, v_a, q_a, \omega_a$) are fed back to specific control loops. Each outer loop generates a
reference command that is used to create error signals for the inner loop. Saturation on these
reference commands can easily be incorporated (not shown). As shown in the text, the particular
architecture required depends on the vehicle and control objectives

Consider some desired aircraft attitude that is specified as a quaternion rotation from
the global frame, $q_d$. This desired rotation can be constructed intuitively with an axis
and an angle as in Eq. (27.9). The actual measured attitude of the aircraft can also be
represented as a quaternion rotation from the global frame, $q_a$. Now consider that the
desired attitude $q_d$ can be constructed as the measured attitude $q_a$ sequentially rotated
by some error quaternion $q_e$. Since quaternion rotations are sequential, the error
quaternion represents a rotation from the body frame:
$$\underbrace{q_d}_{\text{global frame}} = \underbrace{q_e}_{\text{body frame}}\otimes\underbrace{q_a}_{\text{global frame}}\qquad(27.62)$$
The error quaternion can then be solved for explicitly using the conjugate of the
measured attitude quaternion:
$$\underbrace{q_e}_{\text{body frame}} = \underbrace{q_d}_{\text{global frame}}\otimes\underbrace{q_a^*}_{\text{global frame}}\qquad(27.63)$$

The error quaternion $q_e$ represents the rotation required to get from the measured
attitude to the desired attitude. Since the rotation is from the body frame, the $x$,
$y$, and $z$ components of $q_e$ correspond to the rotations needed about the $x$, $y$, and
$z$ body axes of the aircraft. Thus, the three components correspond directly to the
required aileron, elevator, and rudder commands (or roll cyclic, pitch cyclic, and
pedal commands for a helicopter UAV) without any transformation. The scalar part
of $q_e$, $\bar q_e$, represents the angle through which the aircraft must be rotated and is thus
proportional to the amount of control effort required. As in Sect. 27.2.2.1, let $\vec q_e$
denote the vector part of $q_e$.
The quaternion representation suggests a simple control law of the form
$u = K_p\vec q_e - K_d\omega$, where $u = [\delta_a, \delta_e, \delta_r]^T$ is the vector containing the aileron,
elevator, and rudder deflections, and $K_p$ and $K_d$ are positive definite proportional
and derivative gain matrices. However, this control law suffers from the unwinding
phenomenon (see, e.g., Mayhew et al. 2011), characterized by the aircraft undergoing

a larger full rotation in order to achieve the desired attitude instead of a more
efficient smaller rotation. One reason this happens is because the unit quaternions
$[1, 0, 0, 0]$ and $[-1, 0, 0, 0]$ both represent the unique zero attitude and are the
equilibria of the quaternion dynamics given in Eq. (27.10). Hence, if a naive control
law was used for regulation to the zero attitude represented by $q_d = [1, 0, 0, 0]$, an
undesirable rotation, possibly leading to instability, could occur. In particular,
the control law $u = K_p\vec q_e - K_d\omega$ would stabilize $q_d = [1, 0, 0, 0]$ but result in
an undesirable rotation if $q_d = [-1, 0, 0, 0]$. This indicates that the sign of the error
quaternion determines the sign of the gain $K_p$. Therefore, a consistent control law
that satisfies $u(\bar q_e, \vec q_e, \omega) = u(-\bar q_e, -\vec q_e, \omega)$ is desirable (Mayhew et al. 2011). Several
such consistent asymptotically stabilizing quaternion control laws can be formulated
(see, e.g., Mayhew et al. 2011; Wie and Barba 1985), with two examples being
$$u = \mathrm{sgn}(\bar q_e)\,K_p\vec q_e - K_d\omega,\qquad(27.64)$$
or
$$u = \frac{\bar q_e}{2}\,K_p\vec q_e - K_d\omega.\qquad(27.65)$$

Another way to implement a consistent control law is to simply monitor the sign of
$\bar q_e$ online and enforce that the right error quaternion representation is chosen. With
this constraint, the following control law with time-varying gains has yielded good
attitude control performance (Sobolic 2009; Michini 2009; Cutler and How 2012):
$$u = \frac{\bar\theta_e}{\sin(\bar\theta_e/2)}\,K_p\vec q_e - K_d\omega,\qquad(27.66)$$
where $\bar\theta_e = 2\arccos(\bar q_e)$.


While denoted above as aileron, elevator, and rudder, the control commands
are given in the body frame and thus easily generalizable to any actuator set
(e.g., a three-wing tailsitter or quadrotor helicopter) as long as the commands can
be mapped to the appropriate actuators. Another key advantage of this attitude
controller over Euler angle methods is the lack of singularities, that is, the output
of the controller is the direct rotation needed (from the body frame) to get from
the measured attitude to the desired attitude (Hall et al. 2008; Knoebel 2007;
Michini 2009; Cutler and How 2012). Computationally, no trigonometric functions
or rotation matrices are needed, making the controller amenable to onboard or
embedded implementation.
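A compact sketch of the error-quaternion computation of Eq. (27.63) and the consistent control law of Eq. (27.64) is given below. A scalar-first Hamilton quaternion convention is assumed, and the gain matrices and example attitudes are placeholders, not values from the chapter.

```python
import numpy as np

def quat_mult(q1, q2):
    """Hamilton product of scalar-first quaternions (the quaternion composition of Sect. 27.2.2.1)."""
    s1, v1 = q1[0], q1[1:]
    s2, v2 = q2[0], q2[1:]
    return np.hstack([s1 * s2 - v1 @ v2, s1 * v2 + s2 * v1 + np.cross(v1, v2)])

def quat_conj(q):
    """Conjugate quaternion (inverse rotation for a unit quaternion)."""
    return np.hstack([q[0], -q[1:]])

def attitude_control(q_d, q_a, omega, Kp, Kd):
    """Consistent quaternion attitude law of Eq. (27.64)."""
    q_e = quat_mult(q_d, quat_conj(q_a))                  # error quaternion, Eq. (27.63)
    return np.sign(q_e[0]) * Kp @ q_e[1:] - Kd @ omega    # u = sgn(qbar_e) Kp qvec_e - Kd omega

# Example: regulate to level attitude from a ~11.5 deg roll offset (placeholder gains)
q_d = np.array([1.0, 0.0, 0.0, 0.0])
q_a = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])
u = attitude_control(q_d, q_a, omega=np.zeros(3), Kp=2.0 * np.eye(3), Kd=0.5 * np.eye(3))
print(u)
```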

27.3.3.2 Outer-Loop Horizontal Velocity Controller


The outer-loop velocity controller can be designed to command a desired attitude
based on the desired velocity. For airplanes, this means that in order to achieve
a desired velocity in the body y direction, the aircraft must rotate (bank) about

its body x-axis and/or rotate (yaw) about its body z-axis. Furthermore, in order to
achieve a desired flight path angle $\gamma$, it must rotate (pitch) about its y-axis.
Velocity error and desired attitudes are also tightly coupled for rotorcraft UAVs,
where horizontal velocity is achieved by tilting the thrust vector. For a rotorcraft
with non-hinged blades such as typical quadrotors, this is often achieved by tilting
the body itself. Tilting the vehicle in any direction will generate a horizontal
component of velocity.
With the attitude controller in place, a simple outer-loop horizontal velocity
controller can be added to generate the desired attitude qd . To calculate the desired
quaternion given the desired x, y, and z velocities $[v_{d_x}, v_{d_y}, v_{d_z}]$ and the actual
measured x, y, and z velocities $[v_{a_x}, v_{a_y}, v_{a_z}]$, the velocity errors are calculated:

$$v_{e_x} = v_{d_x} - v_{a_x}, \qquad v_{e_y} = v_{d_y} - v_{a_y}, \qquad v_{e_z} = v_{d_z} - v_{a_z}. \qquad (27.67)$$

The errors are then used to determine the desired quaternion rotation $q_d$ in the form

$$q_d = \left[1.0,\ q_{d_x}(\vec{v}_e),\ q_{d_y}(\vec{v}_e),\ q_{d_z}(\vec{v}_e)\right]. \qquad (27.68)$$

A simple control law that works particularly well for rotorcraft by generating the
elements of the desired rotation in Eq. (27.68) based on the velocity error is
$$q_{d_x} = K_{p_{v_y}} v_{e_y} + K_{i_{v_y}} \int v_{e_y}\, dt,$$
$$q_{d_y} = -K_{p_{v_x}} v_{e_x} - K_{i_{v_x}} \int v_{e_x}\, dt, \qquad (27.69)$$
$$q_{d_z} = 0,$$

where the $K_{p_{(\cdot)}}$ are proportional gains and the $K_{i_{(\cdot)}}$ are integral gains. In this control
law, lateral motion is achieved by a rotation about the x-axis, and longitudinal
motion is achieved by a rotation about the y-axis. For a rotorcraft, such a controller
is based around the hover configuration, where thrust is being generated vertically
to counteract the acceleration of gravity.
For airplanes, an outer-loop controller that commands the desired attitude is
based around the level forward flight configuration. For airplanes, it is also desirable
to command a yaw rotation in the direction that reduces lateral velocity error and a
pitch rotation to help achieve the desired flight path angle. This can be achieved by
the following control law:
$$q_{d_x} = K_{p_{v_y}} v_{e_y} + K_{i_{v_y}} \int v_{e_y}\, dt,$$
$$q_{d_y} = -K_{p_{v_z}} v_{e_z} - K_{i_{v_z}} \int v_{e_z}\, dt, \qquad (27.70)$$
$$q_{d_z} = K_{p_{v_r}} v_{e_y} + K_{i_{v_r}} \int v_{e_y}\, dt.$$
Finally, the desired quaternion is normalized to ensure that $\|q_d\|_2 = 1$. The result is
a combined proportional-integral (PI) horizontal velocity controller and quaternion
attitude controller.
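A minimal sketch of how the airplane form of this PI mapping, Eqs. (27.67)-(27.70), might be implemented is shown below. The class and gain names, the simple Euler integration of the error, and the omission of the forward-speed channel (handled by a separate airspeed controller, as noted below) are illustrative assumptions rather than details from the text.

```python
import numpy as np

class VelocityToAttitude:
    """PI mapping from velocity error to a desired quaternion, Eqs. (27.67)-(27.70)."""

    def __init__(self, kp_vy, ki_vy, kp_vz, ki_vz, kp_vr, ki_vr, dt):
        self.k = (kp_vy, ki_vy, kp_vz, ki_vz, kp_vr, ki_vr)
        self.dt = dt
        self.int_ey = 0.0   # integral of lateral velocity error
        self.int_ez = 0.0   # integral of vertical velocity error

    def update(self, v_des, v_act):
        kp_vy, ki_vy, kp_vz, ki_vz, kp_vr, ki_vr = self.k
        _, ey, ez = np.asarray(v_des) - np.asarray(v_act)  # Eq. (27.67); v_ex handled elsewhere
        self.int_ey += ey * self.dt
        self.int_ez += ez * self.dt
        qdx = kp_vy * ey + ki_vy * self.int_ey              # roll toward lateral error
        qdy = -kp_vz * ez - ki_vz * self.int_ez             # pitch toward vertical-speed error
        qdz = kp_vr * ey + ki_vr * self.int_ey              # yaw to reduce lateral error
        q_d = np.array([1.0, qdx, qdy, qdz])
        return q_d / np.linalg.norm(q_d)                    # enforce ||q_d||_2 = 1
```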
Care must be taken in commanding $q_{d_y}$ for airplanes in order to avoid stall due to
high angles of attack. If airspeed measurements are available, this can be achieved
by limiting the commanded pitch attitude to keep the estimated angle of attack within
limits (Chowdhary et al. 2011). Care must also be taken to ensure that the
commanded attitude does not result in an undesired loss of altitude. For airplanes,
this can be partially achieved by ensuring that the airplane pitches to maintain
vertical speed as in Eq. (27.70). More generally, this can be achieved by limiting
the magnitude of the desired attitudes in Eq. (27.68). To do so, define
the inertial altitude error as $z_e = z_d - z_a$, where $z_d$ is the desired altitude and $z_a$ is the
actual altitude, and redefine the authority-limited quaternion attitude controller as


$$q_d = \left[1.0,\ q_{d_x}(\vec{v}_e) - \eta_x,\ q_{d_y}(\vec{v}_e) - \eta_y,\ q_{d_z}(\vec{v}_e)\right]. \qquad (27.71)$$

An effective pair of control-authority-limiting terms $\eta_x$, $\eta_y$ is given by (Sobolic 2009)

$$\eta_x = K_{p_z}\,\operatorname{sgn}(v_{a_y})\, z_e + K_{v_z}\,\operatorname{sgn}(v_{a_y})\, v_{d_z}, \qquad (27.72)$$
$$\eta_y = K_{p_z}\,\operatorname{sgn}(v_{a_x})\, z_e + K_{v_z}\,\operatorname{sgn}(v_{a_x})\, v_{d_z}.$$

This controller is typically implemented in combination with a forward speed (or airspeed) controller and an altitude controller.

27.3.4 Multi-input Multi-output Control

While PID and classical control are very effective ways of closing single-input
single-output (SISO) loops, it is often challenging to use those techniques to design
controllers for multi-input multi-output (MIMO) systems with multiple sensors and actuators.
State space techniques can handle that additional complexity very easily.

27.3.4.1 Full-State Feedback Controller


The decoupled system dynamics were given in state space form in Eqs. (27.47)
and (27.48), where it is assumed that $D = 0$. The time response of this linear
state space system is determined by the eigenvalues of the matrix $A$, which are the
same as the poles of a transfer function representation (Chen 1984; Franklin et al.
1994; Stengel 1994). As before, if the eigenvalues (poles) are unstable or in the
wrong location for the desired response, the goal is to use the inputs $u(t)$ to modify
the eigenvalues of A to change the system dynamics. The first approach is to assume
a full-state feedback controller of the form

$$u(t) = r(t) - K x(t), \qquad (27.73)$$

where $r(t) \in \mathbb{R}^m$ is a reference input and $K \in \mathbb{R}^{m \times n}$ is the controller gain.

(Block diagram: the reference $r(t)$ and the state feedback $-Kx(t)$ are summed to form $u(t)$, which drives the plant $(A, B, C)$ to produce the output $y(t)$; the state $x(t)$ is fed back through $K$.)

This gives the closed-loop dynamics:

$$\dot{x}(t) = A x(t) + B\left(r(t) - K x(t)\right) = (A - BK)x(t) + B r(t) = A_{cl}\, x(t) + B r(t), \qquad (27.74)$$
$$y(t) = C x(t). \qquad (27.75)$$

Note that for a system with one input, there are $n$ parameters in $K$ that have to
be selected to attempt to place the $n$ closed-loop eigenvalues. For a system with $n$ states,
a necessary and sufficient condition for controllability (see, e.g., Dorf and Bishop
1995) is that

$$\operatorname{rank} \mathcal{M}_c \triangleq \operatorname{rank}\left[\,B \;\; AB \;\; A^2B \;\; \cdots \;\; A^{n-1}B\,\right] = n.$$

Ackermann's Formula
This provides a method of doing the entire design process in one step. For a single-input system, the gains are

$$K = \left[\,0 \;\; \ldots \;\; 0 \;\; 1\,\right] \mathcal{M}_c^{-1}\, \Phi_c(A),$$

where the roots of the polynomial $\Phi_c(s)$ give the desired pole locations.
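The sketch below implements Ackermann's formula directly and cross-checks it against scipy.signal.place_poles, which is the numerically preferred route; the second-order model values are placeholders rather than numbers from the chapter.

```python
import numpy as np
from scipy.signal import place_poles

def ackermann(A, B, poles):
    """Single-input pole placement: K = [0 ... 0 1] Mc^{-1} Phi_c(A)."""
    n = A.shape[0]
    Mc = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    coeffs = np.poly(poles)      # coefficients of the desired polynomial Phi_c(s)
    Phi_c = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
    return e_n @ np.linalg.solve(Mc, Phi_c)

# Illustrative second-order model (placeholder values).
A = np.array([[0.0, 1.0], [-4.0, -1.2]])
B = np.array([[0.0], [1.0]])
poles = [-3.0 + 3.0j, -3.0 - 3.0j]

K = ackermann(A, B, poles)
print(np.linalg.eigvals(A - B @ K))              # matches the requested poles
K_check = place_poles(A, B, poles).gain_matrix   # same gains, better conditioning
```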

Linear Quadratic Regulator (LQR)


It may be difficult to relate the desired performance criteria to desired pole locations.
In such cases, an alternative approach is to automatically select pole locations so that
the closed-loop system optimizes the cost function
$$J_{LQR} = \int_0^{\infty} \left[\, x(t)^T R_{xx}\, x(t) + u(t)^T R_{uu}\, u(t)\,\right] dt, \qquad (27.76)$$
where $x^T R_{xx} x$ is the state cost with weight $R_{xx} = R_{xx}^T \geq 0$ and $u^T R_{uu} u$ is called
the control cost with weight $R_{uu} = R_{uu}^T > 0$. It is well known that the solution to
this infinite-horizon optimal control problem is a linear time-invariant state feedback
of the form (see, e.g., Bryson and Ho 1969)

$$u(t) = -K_{lqr}\, x(t), \qquad (27.77)$$

where $K_{lqr} = R_{uu}^{-1} B^T P_r$ and $P_r$ is found by solving the algebraic Riccati equation
(ARE) for the positive semidefinite symmetric solution $P_r$:

$$0 = A^T P_r + P_r A + R_{xx} - P_r B R_{uu}^{-1} B^T P_r. \qquad (27.78)$$

Equation (27.78) can be easily solved numerically, for example, by using the
commands care or lqr in MATLAB.
The design freedom in this case is the selection of the $R_{xx}$ and $R_{uu}$ matrices. A good
rule of thumb, typically called Bryson's rule (Bryson and Ho 1969), when
selecting the weighting matrices $R_{xx}$ and $R_{uu}$ is to normalize the signals:

$$R_{xx} = \rho\, \operatorname{diag}\!\left(\frac{\alpha_1^2}{(x_1)_{max}^2}, \ldots, \frac{\alpha_n^2}{(x_n)_{max}^2}\right), \qquad R_{uu} = \operatorname{diag}\!\left(\frac{\beta_1^2}{(u_1)_{max}^2}, \ldots, \frac{\beta_m^2}{(u_m)_{max}^2}\right).$$

The $(x_i)_{max}$ and $(u_i)_{max}$ represent the largest desired response or control input for that
component of the state/actuator signal. The weights, with $\sum_i \alpha_i^2 = 1$ and $\sum_i \beta_i^2 = 1$, are used to add an
additional relative weighting on the various components of the state/control. The
scalar $\rho$ is used as the last relative weighting between the control and state penalties;
it effectively sets the controller bandwidth. This gives a relatively concrete way to
discuss the relative sizes of $R_{xx}$ and $R_{uu}$ and their ratio $R_{xx}/R_{uu}$.
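A sketch of the resulting design procedure, using SciPy to solve the ARE in Eq. (27.78) with a Bryson's-rule weighting; the model, the assumed state and actuator limits, and the value of $\rho$ are illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Rxx, Ruu):
    """Solve Eq. (27.78) for Pr and return Klqr = Ruu^{-1} B^T Pr."""
    Pr = solve_continuous_are(A, B, Rxx, Ruu)
    return np.linalg.solve(Ruu, B.T @ Pr)

# Placeholder 2-state, 1-input model.
A = np.array([[0.0, 1.0], [-4.0, -1.2]])
B = np.array([[0.0], [1.0]])

# Bryson's rule: normalize by the largest acceptable excursions, then trade off with rho.
x_max = np.array([0.1, 0.5])   # assumed maximum state excursions
u_max = np.array([0.3])        # assumed maximum control deflection (rad)
rho = 1.0                      # state-vs-control weighting; effectively sets bandwidth
Rxx = rho * np.diag(1.0 / x_max**2)
Ruu = np.diag(1.0 / u_max**2)

K = lqr(A, B, Rxx, Ruu)
print(np.linalg.eigvals(A - B @ K))   # closed-loop poles produced by the LQR design
```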
LQR is a very attractive control approach because it easily handles multiple
actuators and complex system dynamics. Furthermore, it offers very large stability
margins to errors in the loop gain (an upward gain margin of infinity, a gain reduction margin of
1/2, and a minimum phase margin of 60° in each control input channel) (Stengel
1994). However, LQR assumes access to the full state, which is rarely available.
This assumption is removed in Sect. 27.3.4.2.

Reference Inputs
The preceding discussion showed how to pick $K$ to change the dynamics to achieve some
desired properties (i.e., stabilize $A$). The question remains as to how well this
controller allows the system to track a reference command, which is a performance
issue rather than just stability. For good tracking performance, the goal is to
have $y(t) \approx r(t)$ as $t \to \infty$. To consider this performance issue in the
frequency domain, the final value theorem can be used. This indicates that, for good
performance, $sY(s) \approx sR(s)$ as $s \to 0$, which gives $\left.\frac{Y(s)}{R(s)}\right|_{s=0} = 1$. So, for good
performance, the transfer function from $R(s)$ to $Y(s)$ should be approximately 1 at
low frequency (DC).
One approach is to scale a constant reference input $r$ so that $u = \bar{N} r - K x(t)$, where $\bar{N}$ is an extra gain used to scale the closed-loop transfer function.
The closed-loop equations are now $y = C x(t)$ and

$$\dot{x}(t) = (A - BK)x(t) + B\bar{N} r, \qquad (27.79)$$

so that
$$\frac{Y(s)}{R(s)} = C\left(sI - (A - BK)\right)^{-1} B \bar{N} = G_{cl}(s)\,\bar{N}.$$

Given the goal at DC, one can compute

$$\bar{N} = G_{cl}(0)^{-1} = -\left[C(A - BK)^{-1}B\right]^{-1}. \qquad (27.80)$$

Note that this development assumed that $r$ was constant, but it could also be used if
$r$ is a slowly time-varying command. This approach is simple to implement but is
typically not robust to errors in the choice of $\bar{N}$.
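For completeness, the DC-gain scaling of Eq. (27.80) can be computed as in the sketch below; it assumes the closed-loop DC gain matrix is square and invertible (as many tracked outputs as inputs).

```python
import numpy as np

def reference_gain(A, B, C, K):
    """N_bar = Gcl(0)^{-1} = -[C (A - BK)^{-1} B]^{-1}, Eq. (27.80)."""
    Acl = A - B @ K
    dc_gain = -C @ np.linalg.solve(Acl, B)   # Gcl(0)
    return np.linalg.inv(dc_gain)

# Usage: with u = N_bar r - K x, the closed-loop DC gain from r to y is the identity.
```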
An alternative strategy within the LQR framework is to embed an integrator
into the feedback control to obtain an LQR servo. If the relevant system output
is $y(t) = C_y x(t)$ and the reference is $r(t)$, add extra states $x_I(t)$, where $\dot{x}_I(t) = e(t) = r(t) - y(t)$. In
this case, if the state of the original system is $x(t)$, then the dynamics are modified
to be

$$\begin{bmatrix} \dot{x}(t) \\ \dot{x}_I(t) \end{bmatrix} = \begin{bmatrix} A & 0 \\ -C_y & 0 \end{bmatrix} \begin{bmatrix} x(t) \\ x_I(t) \end{bmatrix} + \begin{bmatrix} B_u \\ 0 \end{bmatrix} u(t) + \begin{bmatrix} 0 \\ I \end{bmatrix} r(t) \qquad (27.81)$$

or
$$\dot{\bar{x}}(t) = \bar{A}\,\bar{x}(t) + \bar{B}\,u(t) + \bar{B}_r\, r(t), \qquad (27.82)$$

where $\bar{x}(t) = \left[\,x^T(t) \;\; x_I^T(t)\,\right]^T$. The modified optimal control problem is to obtain
the feedback controller that minimizes the cost

$$J_{LQR\text{-}servo} = \int_0^{\infty} \left[\, \bar{x}^T \bar{R}_{xx}\, \bar{x} + u^T R_{uu}\, u \,\right] dt,$$

which is found by solving the Riccati equation (27.78) using the augmented
dynamics $\bar{A}$ and $\bar{B}$ to obtain

$$u(t) = -\bar{K}\bar{x}(t) = -\left[\,K \;\; K_I\,\right]\begin{bmatrix} x \\ x_I \end{bmatrix}.$$

Note that the implementation of this LQR servo requires that the integrator states
$x_I$ be implemented in software. While slightly more complicated to implement, the
advantage of this approach is the additional robustness of ensuring a sufficiently
high low-frequency gain in the loop transfer function so that the steady-state tracking
error to a step reference input will be zero. As such, the LQR-servo form is highly
recommended.
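One way to set up this augmented design numerically is sketched below; the function name, the block structure, and the choice of the augmented weight $\bar{R}_{xx}$ are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_servo(A, B, Cy, Rxx_aug, Ruu):
    """LQR with integral action on e = r - Cy x, Eqs. (27.81)-(27.82).

    Returns (K, KI); the integrator states x_I must be propagated in software
    as xdot_I = r - Cy x, and the control is u = -K x - KI x_I.
    """
    n, m = A.shape[0], B.shape[1]
    p = Cy.shape[0]
    A_bar = np.block([[A, np.zeros((n, p))],
                      [-Cy, np.zeros((p, p))]])
    B_bar = np.vstack([B, np.zeros((p, m))])
    P = solve_continuous_are(A_bar, B_bar, Rxx_aug, Ruu)
    K_bar = np.linalg.solve(Ruu, B_bar.T @ P)
    return K_bar[:, :n], K_bar[:, n:]
```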
27.3.4.2 Estimation
The preceding assumed access to the full state for the feedback control. If that
information is available, the control given is very effective, but typically only
the outputs of a few sensors are available. In this case, consider the $n$-th order
system model of the form in Eqs. (27.47) and (27.48) (again with $D = 0$), with
$x(0)$ unknown and $C$ such that $x(t)$ cannot be directly determined. One could
try direct (static) output feedback of the form $u(t) = -K_y y(t)$, but finding $K_y$
is very difficult, and this approach will be fundamentally limited in what it can
achieve.
The alternative typically used is to create a dynamic output feedback (DOFB)
controller, which uses an observer or estimator to estimate the system state as
$\hat{x}(t)$ and then employs full-state feedback on the estimated state (so $u(t) = -K\hat{x}(t)$)
to control the system. This approach, called the separation principle,
separates the design of the DOFB compensator into the regulator problem
(picking $K$) and the design of the estimator (Stengel 1994; Franklin et al. 1994;
Bryson and Ho 1969).
The estimator design starts with a simulation of the system dynamics, which
are assumed known, but then adds a feedback term on the measurement error, that is,
the difference between what was measured and what was expected based on the
estimate, $\tilde{y}(t) = y(t) - C\hat{x}(t)$. The feedback gain is called $L_e$, and it yields the
closed-loop estimator:

$$\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L_e \tilde{y}(t) = (A - L_e C)\hat{x}(t) + Bu(t) + L_e y(t), \qquad (27.83)$$

which is a dynamic system with poles given by $\lambda_i(A - L_e C)$ and which takes the
measured plant outputs as an input and generates an estimate of $x(t)$. Note that if the
pair $[A, C]$ is observable, then the eigenvalues of $A - L_e C$ can be placed arbitrarily,
leading to a similar (actually the dual) design problem as the regulator. A necessary
and sufficient condition for observability (see, e.g., Dorf and Bishop 1995; Ozbay
2000) is that

$$\operatorname{rank} \mathcal{M}_o \triangleq \operatorname{rank}\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix} = n. \qquad (27.84)$$

$L_e$ can be determined using pole placement, with the typical strategy of choosing
the real parts of the estimator poles to be approximately a factor of 2 larger in magnitude than
those of the associated regulator poles. Given the desired estimator pole locations specified
as the roots of the polynomial $\Phi_e(s)$, the estimator equivalent of Ackermann's
formula is

$$L_e = \Phi_e(A)\, \mathcal{M}_o^{-1} \left[\,0 \;\; \ldots \;\; 0 \;\; 1\,\right]^T. \qquad (27.85)$$
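Because the estimator problem is the dual of the regulator problem, the same pole-placement machinery can be reused on $(A^T, C^T)$, as in the sketch below (placeholder model values, not numbers from the text).

```python
import numpy as np
from scipy.signal import place_poles

def estimator_gain(A, C, est_poles):
    """Observer gain by duality: place the poles of (A^T, C^T) and transpose."""
    K = place_poles(A.T, C.T, est_poles).gain_matrix
    return K.T

A = np.array([[0.0, 1.0], [-4.0, -1.2]])
C = np.array([[1.0, 0.0]])
# Rule of thumb from the text: estimator poles roughly twice as fast as the regulator poles.
Le = estimator_gain(A, C, [-6.0 + 6.0j, -6.0 - 6.0j])
print(np.linalg.eigvals(A - Le @ C))   # matches the requested estimator poles
```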
Optimal Estimation
Similar to Eq. (27.52), noise in the system is typically modeled as

$$\dot{x}(t) = Ax(t) + Bu(t) + B_w w(t), \qquad (27.86)$$
$$y(t) = C_y x(t) + v(t), \qquad (27.87)$$

where $w$ is the process noise that models the uncertainty in the system model and
$v$ is the sensor noise that models uncertainty in the measurements. It is typically
assumed that $w(t)$ and $v(t)$ are zero-mean ($E[w(t)] = 0$, $E[v(t)] = 0$),
uncorrelated, Gaussian white random noises, so that

$$E\!\left[w(t_1)w(t_2)^T\right] = R_{ww}\,\delta(t_1 - t_2) \;\Rightarrow\; w(t) \sim \mathcal{N}(0, R_{ww}),$$
$$E\!\left[v(t_1)v(t_2)^T\right] = R_{vv}\,\delta(t_1 - t_2) \;\Rightarrow\; v(t) \sim \mathcal{N}(0, R_{vv}),$$
$$E\!\left[w(t_1)v(t_2)^T\right] = 0.$$

Given this noise, one can develop an optimal estimator, where the key step in the
design is to balance the effect of the various types of random noise in the system on
the estimation error. With noise in the system, the dynamics of the estimation error
$\tilde{x}(t)$ for the closed-loop estimator are

$$\dot{\tilde{x}} = \dot{x} - \dot{\hat{x}} = \left[Ax + Bu + B_w w\right] - \left[A\hat{x} + Bu + L_e(y - \hat{y})\right] = (A - L_e C_y)\tilde{x} + B_w w - L_e v. \qquad (27.88)$$

Equation (27.88) explicitly shows the conflict in the estimator design process
because one must achieve a balance between the speed of the estimator decay
rate, which is governed by $\Re\!\left[\lambda_i(A - L_e C_y)\right]$, and the impact of the sensing noise
$v$ through the gain $L_e$. Fast state reconstruction requires a rapid decay rate, which
typically requires a large $L_e$, but that tends to magnify the effect of $v$ on the
estimation process. The effect of the process noise cannot be modified, but the
choice of $L_e$ will tend to mitigate or accentuate the effect of $v$ on $\tilde{x}(t)$.
An optimal filter can be designed that minimizes $J = \operatorname{trace}\!\left(Q_e(t)\right)$, where
(see Bryson and Ho 1969; Kwakernaak and Sivan 1972; Lewis 1986; Gelb 1974;
Crassidis and Junkins 2004; Stengel 1994; Simon 2006 for details)

$$Q_e(t) = E\!\left[\{x(t) - \hat{x}(t)\}\{x(t) - \hat{x}(t)\}^T\right]$$

is the covariance matrix of the estimation error. The result is a closed-loop
estimator with $L_e(t) = Q_e(t)\, C_y^T R_{vv}^{-1}$, where $Q_e(t) \geq 0$ solves

$$\dot{Q}_e(t) = A Q_e(t) + Q_e(t) A^T + B_w R_{ww} B_w^T - Q_e(t)\, C_y^T R_{vv}^{-1} C_y\, Q_e(t),$$

where $\hat{x}(0)$ and $Q_e(0)$ are known. It is standard to assume that $R_{vv} > 0$, $R_{ww} > 0$, all
plant dynamics are constant in time, $[A, C_y]$ is detectable, and $[A, B_w]$ is stabilizable.
A system is said to be stabilizable if all unstable modes are controllable and is
detectable if all unstable modes are observable. In this case, the forward propagation
of the covariance $Q_e(t)$ settles down to a constant $Q_{ss}$, independent of $Q_e(0)$, as
$t \to \infty$, where

$$A Q_{ss} + Q_{ss} A^T + B_w R_{ww} B_w^T - Q_{ss}\, C_y^T R_{vv}^{-1} C_y\, Q_{ss} = 0$$

and $L_{ss} = Q_{ss}\, C_y^T R_{vv}^{-1}$. Note that the stabilizable/detectable assumptions give a
unique $Q_{ss} \geq 0$. Also, if $Q_{ss}$ exists, the steady-state filter

$$\dot{\hat{x}}(t) = (A - L_{ss} C_y)\hat{x}(t) + L_{ss}\, y(t) \qquad (27.89)$$

is asymptotically stable given these assumptions, and thus, $L_{ss}$ can be used as the
estimator gain $L_e$ for the system.
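Numerically, $Q_{ss}$ and $L_{ss}$ can be obtained by solving the filter algebraic Riccati equation, which is the dual of the control ARE; a minimal sketch:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def steady_state_kalman_gain(A, Bw, Cy, Rww, Rvv):
    """Solve the dual ARE on (A^T, Cy^T) for Qss and return Lss = Qss Cy^T Rvv^{-1}."""
    Qss = solve_continuous_are(A.T, Cy.T, Bw @ Rww @ Bw.T, Rvv)
    return Qss @ Cy.T @ np.linalg.inv(Rvv)
```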

27.3.4.3 Dynamic Output Feedback Control System

Given a closed-loop estimator with gain $L_e$,

$$\dot{\hat{x}}(t) = (A - L_e C)\hat{x}(t) + Bu(t) + L_e y(t). \qquad (27.90)$$

Then if $L_e$ is chosen as outlined in the previous sections, it is known that $\hat{x}(t) \to x(t)$ as $t \to \infty$, but in general, $\hat{x}(t) \neq x(t)$ for all $t$. Even so, the approach taken in
the output feedback control case is to implement $u(t) = -K\hat{x}(t)$, leading to the
closed-loop system dynamics

$$\begin{bmatrix} \dot{x}(t) \\ \dot{\hat{x}}(t) \end{bmatrix} = \begin{bmatrix} A & -BK \\ L_e C & A - BK - L_e C \end{bmatrix} \begin{bmatrix} x(t) \\ \hat{x}(t) \end{bmatrix} \qquad (27.91)$$

or
$$\dot{x}_{cl}(t) = A_{cl}\, x_{cl}(t). \qquad (27.92)$$

The stability of this closed-loop system can be analyzed using the similarity
transformation matrix $T = \begin{bmatrix} I & 0 \\ I & -I \end{bmatrix}$, which preserves the location of the eigenvalues.
In fact, it is easy to show that the dynamics matrix of the transformed system is given
by

$$\bar{A}_{cl} \triangleq T A_{cl} T^{-1} = \begin{bmatrix} A - BK & BK \\ 0 & A - L_e C \end{bmatrix}.$$

Thus, the closed-loop pole locations are given by the roots of the polynomial

$$\det\!\left(sI - \bar{A}_{cl}\right) = \det\!\left(sI - (A - BK)\right)\,\det\!\left(sI - (A - L_e C)\right) = 0,$$
which consist of the union of the regulator poles and estimator poles. The signifi-
cance of this result is that it implies that one can design the estimator and regulator
separately and combine them at the end, and the closed-loop system will be stable.
This is called the separation principle, and when it holds, it greatly simplifies the
control design process.
The resulting dynamic output feedback compensator is the combination of the
regulator and closed-loop estimator, written using the new state $x_c(t) \equiv \hat{x}(t)$ as

$$\dot{x}_c(t) = A_c x_c(t) + B_c y(t), \qquad (27.93)$$
$$u(t) = -C_c x_c(t), \qquad (27.94)$$

where the compensator dynamics are $A_c \triangleq A - BK - L_e C$, $B_c \triangleq L_e$, and $C_c \triangleq K$. Note that, as expected, the compensator maps sensor measurements to actuator
commands.
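A minimal sketch of assembling the compensator of Eqs. (27.93)-(27.94) and the closed-loop matrix of Eq. (27.91) from gains $K$ and $L_e$ computed as above:

```python
import numpy as np

def dofb_compensator(A, B, C, K, Le):
    """Dynamic output feedback: xdot_c = Ac xc + Bc y,  u = -Cc xc."""
    Ac = A - B @ K - Le @ C
    return Ac, Le, K          # (Ac, Bc, Cc)

def closed_loop_matrix(A, B, C, K, Le):
    """Closed-loop dynamics of Eq. (27.91); its eigenvalues are the union of
    the regulator poles eig(A - BK) and the estimator poles eig(A - Le C)."""
    return np.block([[A, -B @ K],
                     [Le @ C, A - B @ K - Le @ C]])
```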
The DOFB solution provided in this section is called the (infinite-horizon) linear
quadratic Gaussian (LQG) controller, and it is the controller that optimizes the expected time-averaged value of the cost in Eq. (27.76) for the stochastic system in Eqs. (27.86)
and (27.87) (Bryson and Ho 1969; Stengel 1994; Franklin et al. 1994; Skogestad and
Postlethwaite 2005). While LQG has proven to be a very effective design algorithm
for many MIMO systems, it does not have the same robustness properties guaranteed
for the LQR design (Doyle 1978). Furthermore, LQG designs can exhibit high
sensitivity to modeling errors. Thus, it is prudent to be cautious and check the peak
magnitude of the sensitivity $S(s)$, which provides a measure of the closeness of
the loop transfer function to the critical point at $-1$. High sensitivities (above
about 2.0 to 5.0) are indicative of designs that could be susceptible to modeling errors, so
further testing and simulation using perturbed dynamics should be considered.

27.3.4.4 Robust Control Techniques


The alternative approach to control design mentioned in Sect. 27.3.1 works with the
$S(s)$ and $T(s)$ functions to shape them directly (McFarlane and Glover 1992). For
example, consider the design problem in Fig. 27.14, where

$$G(s) = \frac{200}{(0.05s + 1)^2 (10s + 1)}.$$

(Fig. 27.14 Direct design problem: the error $e = r - y$ drives the controller $G_c(s)$, the control $u$ drives the plant $G(s)$, and the weighted outputs are $z_1 = W_s(s)\,e$ and $z_2 = W_u(s)\,u$.)
Note that there is one input ($r$) and two performance outputs: one that penalizes
the sensitivity $S(s)$ of the system and the other that penalizes the control effort used,
with the weight functions $W_s(s)$ and $W_u(s)$ to be defined later. With

$$z_1 = W_s(s)(r - Gu), \qquad (27.95)$$
$$z_2 = W_u(s)\,u, \qquad (27.96)$$
$$e = r - Gu, \qquad (27.97)$$

then closing the loop with $u = G_c(s)e$ yields

$$P_{CL}(s) = \begin{bmatrix} W_s \\ 0 \end{bmatrix} + \begin{bmatrix} -W_s G \\ W_u \end{bmatrix} G_c\, (I + G G_c)^{-1} \qquad (27.98)$$
$$= \begin{bmatrix} W_s - W_s G G_c S \\ W_u G_c S \end{bmatrix} = \begin{bmatrix} W_s S \\ W_u G_c S \end{bmatrix}, \qquad (27.99)$$

so that
$$\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} W_s S \\ W_u G_c S \end{bmatrix} r, \qquad (27.100)$$
and thus, the closed-loop transfer function from the reference input to the perfor-
mance variable z1 is the weighted sensitivity function.
This setup gives us direct access to the sensitivity function in the analysis. It is
now possible to embed tests such as whether $|S(s)|$ satisfies a desired bound such as

$$|S(j\omega)| < \frac{1}{|W_s(j\omega)|} \quad \forall\,\omega,$$

or equivalently, whether $|W_s(j\omega)S(j\omega)| < 1$, $\forall\,\omega$. To make this bound useful, one
can select the weight $W_s(s)$ to emphasize different frequency regions. For example,
to achieve good low-frequency tracking and a crossover frequency of about 10 rad/s,
pick

$$W_s = \frac{s/1.5 + 10}{s + (10)(0.0001)}, \qquad W_u = 1.$$
Once the architecture of the problem is determined, the structure of $P_{CL}(s)$ is set.
The design choice is in the selection of the weights $W_s(s)$ and $W_u(s)$, which then
shape the closed-loop $S(s)$. The output $z_2$ participates by providing access to a
measure of the amount of control authority being used.
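Once a candidate controller is available, the bound $|W_s(j\omega)S(j\omega)| < 1$ can be checked numerically. The sketch below uses the example plant and weight above; the PI controller $G_c(s)$ is purely hypothetical and is included only so the test is runnable.

```python
import numpy as np
from scipy import signal

den_G = np.polymul(np.polymul([0.05, 1.0], [0.05, 1.0]), [10.0, 1.0])
G  = signal.TransferFunction([200.0], den_G)                          # example plant
Ws = signal.TransferFunction([1.0 / 1.5, 10.0], [1.0, 10.0 * 1e-4])   # sensitivity weight
Gc = signal.TransferFunction([0.5, 1.0], [1.0, 0.0])                  # hypothetical PI controller

w = np.logspace(-3, 3, 500)
_, Gjw  = signal.freqresp(G, w)
_, Gcjw = signal.freqresp(Gc, w)
_, Wsjw = signal.freqresp(Ws, w)

S = 1.0 / (1.0 + Gjw * Gcjw)                 # sensitivity S(jw)
print("bound |Ws S| < 1 satisfied:", np.all(np.abs(Wsjw * S) < 1.0))
```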
The overall objective then is to bound the size of the closed-loop transfer function
$P_{CL}(s)$, which in most cases will be a matrix. The bound used is based on the
singular values of the transfer function matrix evaluated over all frequencies. For a
complex transfer function matrix $G(j\omega)$, the singular values are the positive square
roots of the eigenvalues of $G^H G$. The largest of these is called the maximum
singular value, so that $\bar{\sigma}\!\left[G(j\omega)\right] = \max_i \sqrt{\lambda_i(G^H G)}$. Thus, for a stable system,
define

$$\|P_{CL}(s)\|_{\infty} = \sup_{\omega}\, \bar{\sigma}\!\left[P_{CL}(j\omega)\right]. \qquad (27.101)$$

One interpretation of this $\mathcal{H}_\infty$ norm, $\|P_{CL}(s)\|_\infty$, of the system is that it is the
energy gain from the input to the output. This maximum gain is achieved using a worst-case input signal that is essentially a sinusoid at frequency $\omega^\star$ (with the appropriate
input direction) that yields $\bar{\sigma}\!\left[P_{CL}(j\omega^\star)\right]$ as the amplification. This interpretation
is consistent with the graphical test, which finds the peak of $\bar{\sigma}\!\left[P_{CL}(j\omega)\right]$ plotted versus
frequency. Note that the difference here is that the performance test in the $\mathcal{H}_\infty$
case is concerned with the maximum amplification of the input signals at any one
frequency (worst case). This is in contrast to the LQG design, which is concerned with
minimizing the sum of the squares of all singular values over all frequencies (average
case).
The $\mathcal{H}_\infty$ norm can also be evaluated using a state space test based on the
Hamiltonian matrix $H$ (Zhou et al. 1996) and the following result, which states
that the gain of a generic stable system $G(s)$ with state space representation

$$\dot{x}_n(t) = A_n x_n(t) + B_n u(t), \qquad (27.102)$$
$$y(t) = C_n x_n(t) + D_n u(t), \qquad (27.103)$$

is bounded in the sense that $G_{max} = \|G(s)\|_\infty < \gamma$ if and only if:

1. $\bar{\sigma}(D_n) < \gamma$.
2. The Hamiltonian matrix

$$H = \begin{bmatrix} A_n + B_n(\gamma^2 I - D_n^T D_n)^{-1} D_n^T C_n & B_n(\gamma^2 I - D_n^T D_n)^{-1} B_n^T \\ -C_n^T\!\left(I + D_n(\gamma^2 I - D_n^T D_n)^{-1} D_n^T\right) C_n & -A_n^T - C_n^T D_n(\gamma^2 I - D_n^T D_n)^{-1} B_n^T \end{bmatrix}$$

has no eigenvalues on the imaginary axis.

Thus, finding the $\mathcal{H}_\infty$ norm of a system requires finding the smallest $\gamma$ for which no
imaginary-axis eigenvalues of $H$ exist. This is an iterative process, typically done using
a bisection search called the $\gamma$-iteration (Skogestad and Postlethwaite 2005; Zhou
et al. 1996). Given these analysis tools, the synthesis problem can now be defined.
Robust Control Synthesis


The synthesis problem using the analysis techniques in the previous section is
to design a controller that, to the extent possible, ensures that the loop-shaping
objectives are satisfied. The approach works with the generalized plant that has
all weighting functions embedded. As depicted in Fig. 27.15, the input/output
signals of the generalized plant $P(s)$ are defined as $z$ (performance outputs), $w$
(disturbance/reference inputs), $y$ (sensor outputs), and $u$ (actuator inputs). With the loop
closed ($u = G_c(s)y$), one can show that

$$P_{CL}(s) \equiv F_l(P, G_c) = P_{zw} + P_{zu}\, G_c\, (I - P_{yu} G_c)^{-1} P_{yw}, \qquad (27.104)$$


(Fig. 27.15 Generalized plant for $\mathcal{H}_\infty$ synthesis: $P(s)$ with partitions $P_{zw}(s)$, $P_{zu}(s)$, $P_{yw}(s)$, $P_{yu}(s)$ maps the inputs $w$, $u$ to the outputs $z$, $y$, and the controller $G_c(s)$ closes the loop from $y$ to $u$.)

which is called a (lower) linear fractional transformation (LFT). The design objective
is to find $G_c(s)$ to stabilize the closed-loop system and minimize $\|F_l(P, G_c)\|_\infty$.
As noted previously, even finding the $\mathcal{H}_\infty$ norm is challenging, so the approach
taken to this synthesis problem is to consider a suboptimal problem:
1. Find $G_c(s)$ to satisfy $\|F_l(P, G_c)\|_\infty < \gamma$.
2. Then use the $\gamma$-iteration to find the smallest value $\gamma_{opt}$ for which $\|F_l(P, G_c)\|_\infty < \gamma_{opt}$.
Thus, the suboptimal $\mathcal{H}_\infty$ synthesis problem (Maciejowski 1989; Skogestad and
Postlethwaite 2005; Zhou et al. 1996) is to find $G_c(s)$ to satisfy $\|F_l(P, G_c)\|_\infty < \gamma$:

$$P(s) = \begin{bmatrix} P_{zw}(s) & P_{zu}(s) \\ P_{yw}(s) & P_{yu}(s) \end{bmatrix} := \left[\begin{array}{c|cc} A & B_w & B_u \\ \hline C_z & 0 & D_{zu} \\ C_y & D_{yw} & 0 \end{array}\right]$$

where it is assumed that:
1. $(A, B_u, C_y)$ are stabilizable/detectable (essential).
2. $(A, B_w, C_z)$ are stabilizable/detectable (essential).
3. $D_{zu}^T\left[\,C_z \;\; D_{zu}\,\right] = \left[\,0 \;\; I\,\right]$ (simplifying, essential).
4. $\begin{bmatrix} B_w \\ D_{yw} \end{bmatrix} D_{yw}^T = \begin{bmatrix} 0 \\ I \end{bmatrix}$ (simplifying, essential).
Skogestad and Postlethwaite (2005) and Zhou et al. (1996) derive the controller
for this case; to summarize, there exists a stabilizing $G_c(s)$ such that
$\|F_l(P, G_c)\|_\infty < \gamma$ if and only if

(1) $\exists\, X \geq 0$ that solves
$$A^T X + X A + C_z^T C_z + X\left(\gamma^{-2} B_w B_w^T - B_u B_u^T\right)X = 0 \qquad (27.105)$$
and $\Re\!\left[\lambda_i\!\left(A + (\gamma^{-2} B_w B_w^T - B_u B_u^T)X\right)\right] < 0 \;\; \forall\, i$;

(2) $\exists\, Y \geq 0$ that solves
$$A Y + Y A^T + B_w B_w^T + Y\left(\gamma^{-2} C_z^T C_z - C_y^T C_y\right)Y = 0 \qquad (27.106)$$
and $\Re\!\left[\lambda_i\!\left(A + Y(\gamma^{-2} C_z^T C_z - C_y^T C_y)\right)\right] < 0 \;\; \forall\, i$;

(3)
$$\rho(X Y) < \gamma^2, \qquad (27.107)$$
where $\rho$ is the spectral radius ($\rho(A) = \max_i |\lambda_i(A)|$). Given these solutions, the
central $\mathcal{H}_\infty$ controller is written as

$$G_c(s) := \left[\begin{array}{c|c} A + (\gamma^{-2} B_w B_w^T - B_u B_u^T)X - Z Y C_y^T C_y & Z Y C_y^T \\ \hline -B_u^T X & 0 \end{array}\right],$$

where $Z = (I - \gamma^{-2} Y X)^{-1}$. Note that the central controller has as many states
as the generalized plant. Also, this design does not decouple as well as the
regulator/estimator structure of LQG, though it can be shown that a separation principle
does exist (Green and Limbeer 1995; Zhou et al. 1996). More general forms of
the $\mathcal{H}_\infty$ synthesis problem are available that relax some of the assumptions given
above (Zhou et al. 1996; Skogestad and Postlethwaite 2005).
The standard design approach is to define an architecture such as Fig. 27.14 that
captures the key transfer functions that must be shaped in $P_{CL}(s)$. The functional
form of the weighting functions $W_s(s)$ and $W_u(s)$ must then be selected, written in
state space form, and then embedded into the generalized plant, leading to a state
space representation of $P(s)$. One can then use the various MATLAB toolboxes
(e.g., μ-tools or the Robust Control Toolbox (Balas et al. 1995; Chiang and Safonov
1997; Gahinet et al. 1994)) to solve for the controller $G_c(s)$. The parameters of the
weighting functions can then be adjusted to tune the resulting closed-loop transfer
functions. Numerous examples of using these approaches for aircraft flight control
exist in the literature; see, for example, Hyde (1995), Hu et al. (2000), Sadraey and
Colgren (2005), Balas et al. (1998), Reiner et al. (1993), Gadewadikar et al. (2009),
and Chao et al. (2007).
One appealing aspect of this design process is that, in contrast to the LQG
controller framework, there are direct extensions of the $\mathcal{H}_\infty$ control design process
to account for model uncertainty (Balas et al. 1995; Chiang and Safonov 1997;
Gahinet et al. 1994). However, a challenge in this case is finding an appropriate
error model for the model uncertainty that fits into the required assumptions. Also
note that the dimension of the resulting controller can increase dramatically as the
various weighting functions required to capture this model uncertainty are included.
Controller order reduction is possible, but it is often unclear what impact this has on
the performance and robustness of the closed-loop system.

27.4 Path Following

Once the controllers for the inner-loop dynamics (e.g., attitude and velocity) have
been designed, it is then possible to consider tracking a desired flight path. This
flight path might be in the form of interpolated waypoints or a continuous smooth
curve. In either case, this path represents the desired location of the vehicle in 3D
(possibly 4D if it is time-parametrized), and the goal of the outer-loop controller is
to generate reference commands for the inner loops which ensure that the path is
tracked.
(Fig. 27.16 Pure pursuit guidance logic (Park et al. 2004, 2007): the reference point lies on the desired path a distance $L_1$ ahead of the UAV; $\eta$ is the angle between the velocity vector $V$ and the $L_1$ segment, and the commanded lateral acceleration $a_{s_{cmd}}$ corresponds to a circular arc of radius $R$ subtending $2\eta$.)

One way to track the trajectory is a UAV implementation of the pure pursuit algorithm
(Amidi and Thorpe 1990; Ollero and Heredia 1995). The resulting guidance logic in
Park et al. (2004, 2007) selects a reference point on the desired trajectory and uses
it to generate a lateral acceleration command. This reference point is on the desired
path at a distance $L_1$ forward of the vehicle, as shown in Fig. 27.16. The algorithm
then generates a lateral acceleration command that is determined by

$$a_{s_{cmd}} = 2\frac{V^2}{L_1}\sin\eta. \qquad (27.108)$$

The direction of acceleration depends on the sign of the angle $\eta$ between the $L_1$ line
segment and the vehicle velocity vector. If the selected reference point is to the right
of the vehicle velocity vector, then the vehicle will be commanded to accelerate to
the right (see Fig. 27.16), which means that the vehicle will tend to align its velocity
direction with the direction of the L1 line segment.
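A minimal planar sketch of Eq. (27.108) is given below, with the sign of the command taken from the signed angle $\eta$ between the velocity vector and the $L_1$ segment; the frame conventions and variable names are assumptions.

```python
import numpy as np

def l1_accel_cmd(pos, vel, ref_point):
    """Lateral acceleration command a_s = 2 V^2 / L1 * sin(eta), Eq. (27.108).

    pos, vel, ref_point are 2-D vectors in a local horizontal frame; ref_point
    is the point on the desired path a distance L1 ahead of the vehicle.
    """
    L1_vec = np.asarray(ref_point) - np.asarray(pos)
    L1 = np.linalg.norm(L1_vec)
    V = np.linalg.norm(vel)
    # Signed angle from the velocity vector to the L1 segment.
    eta = np.arctan2(vel[0] * L1_vec[1] - vel[1] * L1_vec[0],
                     vel[0] * L1_vec[0] + vel[1] * L1_vec[1])
    return 2.0 * V**2 / L1 * np.sin(eta)
```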
Asymptotic Lyapunov stability of the nonlinear guidance method is demonstrated
when the UAV is following circular paths (for details, see Park et al. 2004, 2007).
This stability analysis is also extended to show robust stability of the guidance logic
in the presence of saturated lateral acceleration, which typically occurs when there
are limitations on the vehicle bank angle.
Further insight on the approach can be obtained through linearization. If $d$ is
the cross-track error and $V$ is the vehicle nominal speed, then, assuming $\eta$ is small,

$$a_{s_{cmd}} = 2\frac{V^2}{L_1}\sin\eta \approx 2\frac{V}{L_1}\left(\dot{d} + \frac{V}{L_1}d\right). \qquad (27.109)$$
Thus, linearization yields a PD controller for the cross-track error. The result
also shows that the vehicle speed V and look-ahead distance L1 determine the
proportional and derivative gains. Note that the nonlinear form in Eq. (27.108) works
for all $\eta$ and thus is more powerful than a simple PD implementation.
Several other nonlinear approaches have been proposed for UAV trajectory
tracking. For example, Nelson et al. (2006) describes an approach using vector
fields to represent desired ground track headings so as to direct the vehicle onto
the desired path. In Lawrence (2003), a Lyapunov approach is used to control
the vehicle velocity vector to ensure convergence to a limit cycle. The method is
well suited to guiding a vehicle from any initial position to a circular orbit above
a target location. In Niculesu (2001), an approach that can accommodate large
cross-track or heading deviations from a straight-line path between waypoints is
presented.
In contrast to the complexity of these approaches, the guidance law in Eq. 27.108
is very simple and easy to implement. In particular, some key properties of this
algorithm are as follows (Park et al. 2007). It can naturally follow any circular
path of radius greater than a specific limit imposed by the dynamics of the
vehicle. It has an element of anticipation of the desired flight path, enabling
tight tracking of curved flight trajectories. By incorporating the instantaneous
vehicle speed, it adds an adaptive feature with respect to changes in vehicle
ground speed caused by external disturbances, such as wind. The approach is
proven to be asymptotically stable, for all velocities, in the entire state space
of useful initial conditions and in the presence of saturation limits on lateral
acceleration.
The implementation typically follows the successive loop-closure ideas discussed in Sect. 27.3. For example, one can implement this command by assuming
that the vehicle maintains sufficient lift to balance weight, even though banked at angle
$\phi$. This requires that the vehicle speed up and/or trim to a larger angle of attack $\alpha$. Assume that
$L\sin\phi = m a_s$ to achieve the desired lateral acceleration and that $L\cos\phi = mg$;
then $\tan\phi = a_s/g$. The requisite lift increment is

$$\Delta C_L = \frac{L - mg}{QS} \equiv (n - 1)\frac{mg}{QS}, \qquad (27.110)$$

where $n \equiv L/(mg)$ is the load factor. The expression $\tan\phi = a_s/g$ can be used
to develop the required bank angle command $\phi_d$ that must be applied to the roll
controller in the successive loop approach (similar to the architecture in Fig. 27.13).
Depending on the vehicle, the command $\phi_d$ is typically saturated at $\pm 15^\circ$.
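A short sketch of converting the lateral acceleration command into a saturated bank angle command, under the level-flight assumption $\tan\phi = a_s/g$ used above; the saturation limit follows the text, while the value of $g$ and the clipping approach are implementation choices.

```python
import numpy as np

def bank_angle_cmd(a_s, phi_max_deg=15.0, g=9.81):
    """Bank angle command phi_d = atan(a_s / g), saturated at +/- phi_max."""
    phi_d = np.arctan2(a_s, g)
    phi_max = np.radians(phi_max_deg)
    return np.clip(phi_d, -phi_max, phi_max)
```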
Note that this algorithm is designed to follow a given path. It can be used to fly
between waypoints simply by connecting the waypoints with either straight lines or
arcs, but additional algorithms are needed to generate the paths and/or waypoints.
For example, Richards and How (2002) and Richards and How (2004) develop
a waypoint optimization approach using mixed-integer linear programming that
was demonstrated using UAVs (King et al. 2006). Cowlagi and Tsiotras (2009),
Zhao and Tsiotras (2010), and Cowlagi and Tsiotras (2012) develop an alternative
approach based on optimal control theory. In Zhao and Tsiotras (2010) the authors
use the kinematic model described in Sect. 27.2.6 along with state and control
constraints to formulate and solve an optimal control problem to create time-optimal
trajectories.

27.5 Conclusion and Future Directions

Linear control provides a very powerful set of procedures for controlling a vehicle
near trim conditions. As shown in this chapter, there are numerous choices to be
made, the primary one being classical/PID versus state space. Classical techniques
are useful for relatively simple systems (low state dimension). This is often the case
for most airplanes in that the full dynamics decouple into the longitudinal and lateral
dynamics and then can be further decomposed into even simpler models associated
with the primary dynamic modes (e.g., phugoid, short period, spiral, Dutch roll
(Nelson and Smith 1989)). UAVs with rotorcraft or other less traditional planforms
might not exhibit equivalent decompositions, but this option should be considered.
If there remains significant coupling in the dynamics, then the large number of
modes (poles) can greatly complicate a classical design, and a state space approach
should be considered. Two state space options were presented: LQG and $\mathcal{H}_\infty$, both
of which consider different performance objectives. The particular challenge with
the LQG (and to some extent LQR) and $\mathcal{H}_\infty$ control systems is determining the
system dynamics with sufficient accuracy to ensure that the controllers are robust
and work well in practice. This can often require extensive modeling, wind tunnel
testing, and system identification. Robust control approaches exist if necessary, but
the design procedures are quite complicated compared to those shown here.
With all of the design approaches, the fundamental challenge is to determine
what the design objectives are (i.e., the pole locations in classical design, the $R_{xx}$
and $R_{uu}$ matrices in LQR, and the $W_s(s)$ and $W_u(s)$ weights in $\mathcal{H}_\infty$ control). It should
be noted that well-studied guidance and control strategies for manned aircraft
may be overly conservative for unmanned vehicles. Choosing these performance
specifications can be an iterative process requiring many tests to achieve good/robust
overall vehicle performance, especially for a controller with many nested loops.
Typically, the tendency with many control designs is to push the performance as
high as possible. This is certainly true for the inner loops of the successive loop
architecture in Fig. 27.13. Actuator magnitude (and rate) saturations are then natural
limits on the achievable performance and should be modeled accurately to test the
stability of the system. Excessive feedback of sensor noise and bias can then often be
a problem, as can fatigue of the vehicle actuators if the control gains are very high.
As seen in Sect. 27.2, UAV dynamics are inherently nonlinear. Therefore, any
linear control approach can only be guaranteed to be locally stable. Hence, it
may be difficult to extract the desired performance or even guarantee stability
when agile maneuvers are performed. For traditional fixed wing configurations
executing way-point flight, this is usually not a significant problem due to vehicle
symmetry and inherent stability. However, for rotorcraft platforms and other non-traditional configurations, it may be difficult to guarantee satisfactory performance
with a single linear controller. Nonlinear and adaptive control techniques must
also be invoked when the linear models of the UAV are not exactly known or
are time varying. The problem of design and analysis of nonlinear controllers is
studied in the companion chapter on nonlinear flight control techniques for UAVs
(chapter Nonlinear Flight Control Techniques for Unmanned Aerial Vehicles).

References
O. Amidi, C. Thorpe, Integrated mobile robot control. Proc SPIE 1388, 505523 (1990)
G.J. Balas, J.C. Doyle, K. Glover, A. Packard, R. Smith, μ-Analysis and Synthesis Toolbox (The
MathWorks, Natick, 1995)
G.J. Balas, A.K. Packard, J. Renfrow, C. Mullaney, R.T. MCloskey, Control of the f-14 aircraft
lateral-directional axis during powered approach. J. Guid. Control Dyn. 21(6), 899908 (1998)
J.H. Blakelock, Automatic Control of Aircraft and Missiles (Wiley, New York, 1965)
A.E. Bryson, Y-C. Ho, Applied Optimal Control (Blaisdell Publishing Company, Waltham, 1969)
H. Chao, Y. Cao, Y.Q. Chen, Autopilots for small fixed-wing unmanned air vehicles: a survey,
in International Conference on Mechatronics and Automation, 2007. ICMA 2007 (IEEE,
Piscataway, 2007), pp. 31443149
N.A. Chaturvedi, A.K. Sanyal, N.H. McClamroch, Rigid-body attitude control. IEEE Control
Syst. 31(3), 3051 (2011)
C.T. Chen, Linear System Theory and Design (Saunders College Publishing, Fort Worth, 1984)
R.Y. Chiang, M.G. Safonov, Robust Control Toolbox for Use with MATLAB: User's Guide
(The MathWorks, Inc., Natick, 1997)
G. Chowdhary, R. Jategaonkar, Parameter estimation from flight data applying extended and
unscented kalman filter. J. Aerosp. Sci. Technol. (2009). doi:10.1016/j.ast.2009.10.003
G. Chowdhary, E.N. Johnson, R. Chandramohan, M.S. Kimbrell, A. Calise, H. Jeong, Autonomous
guidance and control of an airplane under severe damage, in AIAA Infotech@Aerospace, 2011.
AIAA-2011-1428
R.V. Cowlagi, P. Tsiotras, Shortest distance problems in graphs using history-dependent transition
costs with application to kinodynamic path planning, in Proceedings of the American Control
Conference, St. Louis, 2009, pp. 414419
R.V. Cowlagi, P. Tsiotras, Hierarchical motion planning with dynamical feasibility guarantees for
mobile robotic vehicles. IEEE Trans. Robot. 28(2), 379395 (2012)
J.L. Crassidis, J.L. Junkins, Optimal Estimation of Dynamic Systems (Chapman & Hall/CRC,
Boca Raton, 2004)
M. Cutler, J.P. How, Actuator constrained trajectory generation and control for variable-pitch
quadrotors, in AIAA Guidance, Navigation, and Control Conference (GNC), Minneapolis, Aug
2012. (submitted)
R.C. Dorf, R.H. Bishop, Modern Control Systems. (Addison-Wesley, Reading, 1995)
J.C. Doyle, Guaranteed margins for LQG regulators. IEEE Trans. Autom. Control AC-23(4),
756757 (1978)
B. Etkin, Dynamics of Flight: Stability and Control, 2nd edn. (Wiley, New York, 1982)
B. Etkin, L.D. Reid, Dynamics of Flight, Stability and Control (Wiley, New York, 1996)
G.F. Franklin, J.D. Powell, A. Emami-Naeini, Feedback Control of Dynamic Systems (Addison-
Wesley, Reading, 1994)
J. Gadewadikar, F.L. Lewis, K. Subbarao, K. Peng, B.M. Chen, H-infinity static output-feedback
control for rotorcraft. J. Intell. Robot. Syst. 54(4), 629646 (2009)
P. Gahinet, A. Nemirovskii, A.J. Laub, M. Chilali, The lmi control toolbox, in IEEE Conference
on Decision and Control (CDC), vol. 3 (IEEE, New York, 1994), pp. 20382041
A. Gelb, Applied Optimal Estimation (MIT (QA402.A5), Cambridge, 1974)
M. Green, D.J. Limbeer, Linear Robust Control (Prentice Hall, Englewood Cliffs, 1995)
J.K. Hall, N.B. Knoebel, T.W. McLain, Quaternion attitude estimation for miniature air vehicles
using a multiplicative extended kalman filter, in IEEE/ION Position, Location and Navigation
Symposium, Monterey, May 2008, pp. 12301237
J. Hu, C. Bohn, H.R. Wu, Systematic H-infinity weighting function selection and its application
to the real-time control of a vertical take-off aircraft. Control Eng. Pract. 8(3), 241–252 (2000)
R.A. Hyde, H-Infinity Aerospace Control Design: A VSTOL Flight Application (Springer,
Berlin/New York, 1995)
E. Johnson, S. Kannan, Adaptive trajectory control for autonomous helicopters. J. Guid. Control
Dyn. 28(3), 524538 (2005)
E. King, Y. Kuwata, J.P. How, Experimental demonstration of coordinated control for multi-vehicle
teams. Int. J. Syst. Sci. 37(6), 385398 (2006)
N. Knoebel, Adaptive control of a miniature tailsitter UAV. Master's thesis, Brigham Young
University, Dec 2007
J.B. Kuipers, Quaternions and Rotation Sequences: A Primer with Applications to Orbits,
Aerospace, and Virtual Reality (Princeton University Press, Princeton, 2002)
B.C. Kuo, Automatic Control Systems (Prentice-Hall, Englewood Cliffs, 1991)
H. Kwakernaak, R. Sivan, Linear Optimal Control Systems (Wiley, New York, 1972)
D.A. Lawrence, Lyapunov vector fields for UAV flock coordination, in 2nd AIAA Unmanned
Unlimited Systems, Technologies, and Operations Aerospace, Land, and Sea Conference,
Workshop and Exhibition, San Diego, 2003
D. Lawrence, E.W. Frew, J.W. Pisano, Vector fields for autonomous unmanned aircraft flight
control. J. Guid. Control 31(5), 12201229 (2008)
W. Levine (ed.), The Control Handbook (CRC, Boca Raton, 1996)
F.L. Lewis, Optimal Estimation (Wiley, New York, 1986)
J.M. Maciejowski, Multivariable Feedback Design (Addison-Wesley, Reading, 1989)
C.G. Mayhew, R.G. Sanfelice, A.R. Teel, On quaternion-based attitude control and the unwinding
phenomenon, in American Control Conference (ACC), San Francisco, 29 June–1 July 2011,
pp. 299–304
D. McFarlane, K. Glover, A loop shaping design procedure using H-infinity synthesis. IEEE Trans.
Autom. Control 37(6), 759–769 (1992)
B. Mettler, Modeling Identification and Characteristics of Miniature Rotorcrafts (Kluwer, Boston,
2003)
B. Michini, Modeling and adaptive control of indoor unmanned aerial vehicles. Master's thesis,
Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, Cam-
bridge, Sept 2009
D.J. Moorhouse, R.J. Woodcock, Background Information and User Guide for MIL-F-8785C,
Military Specification-Flying Qualities of Piloted Airplanes. Technical report, Wright-
Patterson AFB, OH, 1982
R.C. Nelson, S.E. Smith, Flight Stability and Automatic Control (McGraw-Hill, New York, 1989)
D.R. Nelson, D.B. Barber, T.W. McLain, R.W. Beard, Vector field path following for small
unmanned air vehicles, in American Control Conference (ACC), Minneapolis, June 2006
M. Niculesu, Lateral track control law for aerosonde UAV, in Proceedings of the AIAA Aerospace
Sciences Meeting and Exhibit, Reno, Jan 2001
A. Ollero, G. Heredia, Stability analysis of mobile robot path tracking, in Proceedings of
the IEEE/RSJ International Conference on Intelligent Robots and Systems, Pittsburgh, 1995,
pp. 461466
H. Ozbay, Introduction to Feedback Control Theory (CRC, Boca Raton, 2000)
S. Park, J. Deyst, J. P. How, A new nonlinear guidance logic for trajectory tracking, in AIAA
Guidance, Navigation, and Control Conference (GNC), Providence, Aug 2004 (AIAA 2004-
4900)
S. Park, J. Deyst, J.P. How, Performance and lyapunov stability of a nonlinear path-following
guidance method. AIAA J. Guid. Control Dyn. 30(6), 17181728 (2007)
J. Reiner, G. Balas, W. Garrard, Design of a flight control system for a highly maneuverable aircraft
using mu synthesis, in AIAA Guidance, Navigation and Control Conference, Monterey, 1993,
pp. 710719
A. Richards, J.P. How, Aircraft trajectory planning with collision avoidance using mixed integer
linear programming, in American Control Conference (ACC), Anchorage, vol. 3, 2002, pp.
19361941
A. Richards, J.P. How, Decentralized model predictive control of cooperating UAVs, in IEEE
Conference on Decision and Control (CDC), Paradise Island, Dec 2004, pp. 42864291
M. Sadraey, R. Colgren, Two dof robust nonlinear autopilot design for a small uav using a
combination of dynamic inversion and loop shaping, in AIAA Guidance Navigation and Control
Conference, San Francisco, vol. 2, 2005, pp. 55185537
D. Simon, Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches (Wiley-
Interscience, Hoboken, 2006)
S. Skogestad, I. Postlethwaite, Multivariable Feedback Control Analysis and Design (Wiley,
Hoboken, 2005)
F. Sobolic, Agile flight control techniques for a fixed-wing aircraft. Master's thesis, MIT
Department of Aeronautics and Astronautics, 2009
R.F. Stengel, Optimal Control and Estimation (Dover, New York, 1994)
R.F. Stengel, Flight Dynamics (Princeton University Press, Princeton, 2004)
B.L. Stevens, F.L. Lewis, Aircraft Control and Simulation, 2 edn. (Wiley, Hoboken, 2003)
F.B. Tatom, S.R. Smith, Simulation of atmospheric turbulent gusts and gust gradients. J. Aircr.
19(04), 264271 (1981)
B. Wie, P.M. Barba, Quaternion feedback for spacecraft large angle maneuvers. AIAA J. Guid.
Control Dyn. 8, 360365 (1985)
Y. Zhao, P. Tsiotras, Time-optimal parameterization of geometric path for fixed-wing aircraft, in
AIAA@Infotech conference, Atlanta (AIAA, 2010). AIAA-2010-3352
K. Zhou, J.C. Doyle, K. Glover, Robust and Optimal Control (Prentice-Hall, Englewood Cliffs,
1996)
