
Real-Time Indoor Autonomous Vehicle Test Environment

JONATHAN P. HOW, BRETT BETHKE, ADRIAN FRANK, DANIEL DALE, and JOHN VIAN

A TESTBED FOR THE RAPID PROTOTYPING OF UNMANNED VEHICLE TECHNOLOGIES

Digital Object Identifier 10.1109/MCS.2007.914691
Unmanned aerial vehicles (UAVs) are becoming vital warfare and
homeland-security platforms because they significantly reduce
costs and the risk to human life while amplifying warfighter and
first-responder capabilities. These vehicles have been used in Iraq
and during Hurricane Katrina rescue efforts with some success,
but there remains a formidable barrier to achieving the vision of multiple
UAVs operating cooperatively. Numerous researchers are investigating
systems that use multiple autonomous agents to cooperatively execute
these missions [1]-[4]. However, little has been said to date about how to
perform multiday autonomous system operations. Autonomous mission
systems must balance vehicle capability, reliability, and robustness
issues with task and mission goals when creating an effective strategy.
In addition, these systems have the added responsibility of interacting
with numerous human operators while managing both high-level mission goals and individual tasks.
To investigate and develop unmanned vehicle systems technologies for autonomous multiagent mission platforms, we
are using an indoor multivehicle testbed called Real-time
indoor Autonomous Vehicle test ENvironment (RAVEN)
to study long-duration multivehicle missions in a controlled environment. Normally, demonstrations of
multivehicle coordination and control technologies
require that multiple human operators simultaneously manage flight hardware, navigation, control,
and vehicle tasking. However, RAVEN simplifies all
of these issues to allow researchers to focus, if desired, on
the algorithms associated with high-level tasks. Alternatively, RAVEN provides a facility for testing low-level control algorithms on both fixed- and rotary-wing aerial platforms. RAVEN is also

being used to analyze and implement techniques for
embedding the fleet and vehicle health state (for instance,
vehicle failures, refueling, and maintenance) into UAV
mission planning. These characteristics facilitate the rapid
prototyping of new vehicle configurations and algorithms
without requiring a redesign of the vehicle hardware. This
article describes the main components and architecture of
RAVEN and presents recent flight test results illustrating
the applications discussed above.

BACKGROUND
Various research platforms have been developed to study
UAV concepts [5]-[23]. The BErkeley AeRobot (BEAR)
project features a fleet of commercially available rotary-wing and fixed-wing UAVs retrofitted with special electronics. These vehicles have been used in applications such
as autonomous exploration in unknown urban environments and probabilistic pursuit-evasion games [20], [22].
In the Aerospace Controls Laboratory at the Massachusetts
Institute of Technology (MIT), an outdoor testbed consisting of a fleet of eight fixed-wing autonomous UAVs provides a platform for evaluating coordination and control
algorithms [12], [17]. Similarly, researchers in the Multiple
AGent Intelligent Coordination and Control (MAGICC)
Lab at Brigham Young University have built and flown
small fixed-wing UAVs to perform multivehicle experiments outdoors [19]. These planes are launched by hand
and track waypoints autonomously.
Likewise, the DragonFly project at Stanford University's Hybrid Systems Laboratory uses a heavily modified
fixed-wing model aircraft [7], [21]. The objective of this
platform is to provide an inexpensive capability for conducting UAV experiments, ranging from low-level flight
control to high-level multiple aircraft coordination. Similarly, to demonstrate new concepts in multiagent control
on a real-world platform, the Hybrid Systems Laboratory
developed the Stanford Testbed of Autonomous Rotorcraft
for Multi-Agent Control (STARMAC). STARMAC is a
multivehicle testbed consisting of two quadrotor UAVs
that autonomously track a given waypoint trajectory [10].
Quadrotors are used in STARMAC due to their convenient
handling characteristics, low cost, and simplicity.
In addition, several indoor multivehicle testbeds have
been constructed to study multiagent activities. For example, the HOvercraft Testbed for DEcentralized Control
(HOTDEC) Platform at the University of Illinois at Urbana-Champaign is a ground-vehicle testbed used for multivehicle control and networking research [23]. At Vanderbilt

University, researchers have built the Vanderbilt Embedded Computing Platform for Autonomous Vehicles, which
supports two flight vehicles in an indoor environment [18].
Also, the UltraSwarm Project at the University of Essex is
designed to use indoor aerial vehicles to examine questions related to flocking and wireless cluster computing
[11]. Likewise, researchers at Oklahoma State University
use the COMET (COoperative MultivehiclE Testbed)
ground-vehicle testbed to implement and test cooperative
control techniques both indoors and outdoors [6].
Some of these testbeds have limitations that inhibit their
utility for investigating health management questions
related to multiday, multiagent mission operations. For
example, most outdoor UAV test platforms can be flown
safely only during daylight hours. In addition, many outdoor UAV test platforms must be flown during good
weather conditions to avoid damage to vehicle hardware.
These outdoor UAVs also typically require a large support
team, which makes long-term testing logistically difficult
and expensive.
In contrast, RAVEN is designed to examine a wide variety of multivehicle missions using both autonomous
ground and air vehicles. The facility uses small electric helicopters and airplanes and can accommodate up to ten air
vehicles at the same time. At the heart of the testbed is a
global metrology motion-capture system, which yields
accurate, high-bandwidth position and attitude data for all
of the vehicles.
Although RAVEN's indoor global metrology system is
expensive, there are significant advantages to operating
indoors. Since the position markers are lightweight, the
motion-capture system can sense vehicle position and attitude without adding substantial payload. Thus, RAVEN
does not require significant modifications to off-the-shelf,
low-cost, radio-controlled (R/C) vehicle hardware. This
setup allows researchers to conduct experiments with
large teams (ten units) of inexpensive flying vehicles
(US$1000 each). In addition, one operator can set up and
flight test multiple UAVs in under 20 min. RAVEN thus
facilitates rapid prototyping of multivehicle mission-level
algorithms at a fraction of the cost needed to support multiday, multivehicle external flight demonstrations.
The motion-capture metrology system can be used to
obtain navigation and sensing data for many independent
vehicles in both ideal and degraded form to simulate GPS
obstructions. The motion-capture metrology data can then
be augmented with onboard sensing to extend the scope of
the flight experiments being performed.
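As a concrete illustration of how the metrology data might be degraded for such experiments, the sketch below corrupts an ideal position sample with noise, quantization, and dropouts to emulate a GPS-like sensor. This is only a minimal sketch; the function name, parameter values, and error model are assumptions and are not taken from RAVEN's software.

import numpy as np

def degrade_position(pos_true, rng, sigma=0.5, dropout_prob=0.05, quant=0.1):
    # pos_true : (3,) ideal position in meters from the motion-capture system
    # sigma    : standard deviation of additive Gaussian noise, meters (assumed)
    # dropout_prob : probability that this sample is lost (assumed)
    # quant    : quantization step of the reported position, meters (assumed)
    # Returns the degraded measurement, or None if the sample is dropped.
    if rng.random() < dropout_prob:
        return None                      # simulated obstruction or outage
    noisy = pos_true + rng.normal(0.0, sigma, size=3)
    return np.round(noisy / quant) * quant

# Example: degrade a short sequence of ideal samples.
rng = np.random.default_rng(0)
for k in range(5):
    ideal = np.array([0.1 * k, 0.0, 1.0])
    print(degrade_position(ideal, rng))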

FIGURE 1 (a) Multivehicle command and control architecture block diagram and (b), (c) RAVEN's integrated vehicle system block diagrams. This system architecture is designed to separate the mission- and task-related components of the architecture from RAVEN's vehicle infrastructure. Therefore, adjustments in the mission, task, and trajectory planning components of the system do not require changes to the testbed's core software infrastructure and can be made in real time.


SYSTEM ARCHITECTURE AND COMPONENTS
One objective of RAVEN is to enable researchers to test a variety of algorithms and technologies applicable to multi-UAV mission problems in real time. Therefore, the architecture must facilitate adding and removing hardware and software components as needed. Another design objective is to provide a single operator with the capability to simultaneously command several autonomous vehicles. Consequently, the architecture must include components that provide the mission operator with sufficient situational awareness to verify or issue changes to a vehicle's plan in real time.

To meet these requirements, RAVEN's architecture has the hierarchical design shown in Figure 1(a). This architecture separates the mission and task components from the testbed's vehicle infrastructure. Thus, changes in the mission and task components of the system can be made in real time without changing the testbed's core software infrastructure. This approach builds on the system architecture used in the DARPA-sponsored Software Enabled Control capstone flight demonstration by the MIT team [24], [25].

As shown in Figure 1(b), the planning part of the system architecture has four major components: 1) a mission-planning level that sets the system goals and monitors system progress, 2) a task-assignment level that assigns tasks to each vehicle in support of the overall mission goals, 3) a trajectory-design level that directs each vehicle on how to best perform the tasks issued by the task-processing level, and 4) a control-processing level that carries out the activities set by higher levels in the system. In addition, health information about each system component is used to make informed decisions on the capabilities of each subsystem in the architecture. As a result, system components are designed to support and promote strategies that are in the system's best interests for each task.

FIGURE 2 Scatter plot of (x, y) vehicle position. Part (a) shows a scatter plot of the measured (x, y) position (in meters) of a quadrotor sitting on the floor at position (0, 0); (b) and (c) show histograms of the percentage of time at the vehicle's measured (b) x- and (c) y-positions. Note that the scale in these plots is meters × 10^-4. With the rotors not turning, the maximum x-position error measured by the system in this test is 0.325 mm, while the maximum y-position error measured by the system in this test is 0.199 mm.

The primary flight test range, which is located on the MIT campus, is approximately 10 m by 8 m by 4 m. The environmental conditions of the indoor facility can be carefully controlled for flight testing. Conditions can range from ideal to wind-induced, with disturbances generated, for example, by blowers or fans. The controlled environment is available 24/7 since it is not dictated by weather, day/night cycles, or other external factors such as poor visibility. In addition, flight experiments using small-scale, electric-powered vehicles in an indoor range can be carefully monitored and set up in a safe way by establishing flight procedures and installing safety features such as nets.

The architecture used in RAVEN allows researchers to rapidly interchange mission-system components for the purpose of testing various algorithms in a real-time environment. For example, to allow users to rapidly prototype mission, task, and other vehicle-planning algorithms with RAVEN's vehicle hardware, each vehicle must be able to accept command inputs, such as waypoints, from a high-level planning system. Although low-level commands such as "fly at a set speed" can be issued to any vehicle in the platform, a waypoint interface to the vehicles allows users to substitute mission-, task-, and path-planning components into the architecture without changing the vehicles' base capabilities. This interface allows users to add, remove, and test algorithms and other related components as needed. In fact, the waypoint interface to the vehicles allows users to develop and implement code on the platform in real time to test centralized and distributed planning algorithms using computers from other locations on campus. As a result, researchers can implement and test control, navigation, vehicle-tasking, health, and mission-management algorithms on the system [26]-[28]. Likewise, users can incorporate additional vehicles into the testbed architecture since vehicle controllers and base capabilities can be added, removed, and tested in the architecture without affecting the remaining system components.
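To make the waypoint interface concrete, the sketch below shows how a planner running anywhere on the network could send a waypoint command to a vehicle's ground computer. The message fields, the use of JSON over UDP, and the host/port values are illustrative assumptions, not RAVEN's actual interface.

import json
import socket

def send_waypoint(host, port, vehicle_id, x, y, z, heading, speed):
    # Hypothetical waypoint command; field names are assumptions.
    msg = {
        "vehicle": vehicle_id,     # which UAV the command is for
        "waypoint": [x, y, z],     # target location in meters
        "heading": heading,        # desired heading in radians
        "speed": speed,            # commanded speed in m/s
    }
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(json.dumps(msg).encode("utf-8"), (host, port))
    finally:
        sock.close()

# A remote planner could then issue, for example:
# send_waypoint("ground-pc-uav1.example.edu", 5005, "uav1", 1.5, 0.0, 1.0, 0.0, 0.5)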


MAIN TESTBED HARDWARE


Figure 1(c) shows the system setup, with perception, planning, and control processing performed in linked ground computers. This system configuration can emulate distributed planning (using network models of the environment to govern vehicle-to-vehicle communication) as if the planning were being done on board. Note that the motion-capture system, which provides vehicle position and attitude data, is the only centralized component; however, each vehicle's position and attitude data are transmitted to that vehicle's ground-based computing resources. Therefore, each vehicle is unaware of the other UAVs during an operation unless a vehicle-to-vehicle communication link is established.

Currently, the control processing and command data for each vehicle are handled by a dedicated ground computer and sent over a USB connection to the trainer port interface on the vehicle's R/C transmitter. Each dedicated ground computer has two AMD 64-bit Opteron processors and 2 GB of memory and runs Gentoo Linux.

A Vicon MX motion-capture system provides position and attitude data for each vehicle in the testbed [29]. By attaching lightweight reflective markers to the vehicle's structure, the motion-capture system tracks and computes each vehicle's position and attitude at 100 Hz. The accuracy of the motion-capture system's position and attitude estimates is difficult to confirm during flight operations. To assess the static accuracy, Figure 2 shows a scatter plot of the measured (x, y) position (in meters) of a quadrotor sitting on the floor at position (0, 0). With the rotors not turning, the maximum x-position error measured by the system is 0.325 mm, while the maximum y-position error measured by the system is 0.199 mm. Tracking multiple reflectors in a unique orientation on each vehicle enables the motion-capture system to calculate the position of the center of mass and the attitude of each air/ground vehicle within range. For example, an 18-camera configuration can track five air vehicles and multiple ground vehicles in an 8-m-by-5-m-by-3-m flight volume.
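One way to compute a vehicle's position and attitude from a set of tracked markers is to fit a rigid-body transform between the markers' known body-frame locations and their measured room-frame locations. The sketch below uses the standard SVD-based (Kabsch) fit; it is a generic illustration of the idea, not Vicon's algorithm, and the marker values in the example are made up.

import numpy as np

def fit_pose(body_pts, room_pts):
    # Least-squares rigid transform so that room_pts ~ R @ body_pts + t.
    # body_pts, room_pts : (N, 3) arrays of matching marker positions in the
    # vehicle body frame and the room (inertial) frame, respectively.
    cb = body_pts.mean(axis=0)            # marker centroid in the body frame
    cr = room_pts.mean(axis=0)            # marker centroid in the room frame
    H = (body_pts - cb).T @ (room_pts - cr)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cr - R @ cb
    return R, t

# Example with three synthetic markers and a known 90-degree yaw plus offset.
body = np.array([[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.05]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
room = body @ Rz.T + np.array([1.0, 2.0, 0.5])
R, t = fit_pose(body, room)
print(np.round(t, 3))   # recovered vehicle position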

FIGURE 3 (a) Multivehicle search and track experiment and (b) operator interface visualization. The sensing system records the ground vehicle locations in real time. When the vehicles are being tracked by the UAVs, the vehicles' locations are displayed to the operator. In (b), the cylinders below the flying vehicles show the vehicles' (x, y) locations in the flight space.

Currently, RAVEN comprises various rotary-wing, fixed-wing, and ground-based R/C vehicle types, as shown in Figures 3, 4, and 5. However, most testbed flight experiments are performed using the Draganflyer V Ti Pro quadrotor [30]. This quadrotor is small (approximately 0.7 m from blade tip to blade tip) and lightweight (under 500 g), with a payload capacity of about 100 g and the ability to fly for 13-17 min on one battery charge (using a 2000-mAh lithium-ion battery from Thunder Power [31]) while carrying a small camera. The four-propeller design simplifies the dynamics and control, and the vehicle's airframe is robust and easy to repair in the event of a crash. The rotor blades are designed to fracture or bend when they hit a solid object. The Draganflyer V Ti Pro quadrotor is thus durable and safe for indoor flight.
A separate landing and ground-maintenance system is used to support the quadrotor vehicle hardware in an around-the-clock environment. More specifically, the landing hardware and its associated real-time processing aid the vehicle's guidance and control logic during takeoff and landing. In addition, a maintenance module evaluates whether the vehicles are due for maintenance and monitors the recharging of the batteries prior to flight [32], [33].
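The sketch below illustrates, in the spirit of the maintenance and health checks described in [32], [33], how such a monitor might decide what a vehicle should do next. The data fields, thresholds, and returned actions are assumptions made for illustration and are not RAVEN's implementation.

from dataclasses import dataclass

@dataclass
class VehicleHealth:
    battery_voltage: float              # volts
    flight_time_since_service: float    # minutes
    motor_fault: bool = False

def next_action(h: VehicleHealth,
                low_voltage=10.5,           # assumed return-to-base threshold
                service_interval=120.0):    # assumed maintenance interval, min
    if h.motor_fault:
        return "land immediately"
    if h.battery_voltage < low_voltage:
        return "return to base and recharge"
    if h.flight_time_since_service > service_interval:
        return "schedule maintenance"
    return "continue mission"

print(next_action(VehicleHealth(battery_voltage=10.2, flight_time_since_service=30.0)))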

Likewise, modified R/C trucks made by DuraTrax are used as ground vehicles in the platform [34]. The modifications consist of replacing the stock onboard motor controller and R/C receiver with a custom motor driver circuit, a Robostix microcontroller [35], and an RF communication module with 802.15.4 wireless connectivity. These modifications improve the vehicles' precision-driving capabilities while making it possible to autonomously command and control multiple ground vehicles by means of one ground computer in mission scenarios such as airborne search and track missions, search and recovery operations, networking experiments, and air and ground cooperative mission scenarios.

Task Processing and Operator Interface Components


The control system for each vehicle in the testbed can process and implement tasks defined by a system component or user. For example, each vehicle has a vehicle manager module that is executed on the dedicated ground computer and designed to handle the task processing, trajectory generation, and control processing.
FIGURE 4 Airplane model in (a) autonomous hover and (b) the airplane axis setup for hover experiments. The lightweight reflective spheres glued to the vehicle's structure, shown on the airplane in (a) and (b), are used by the motion-capture system to track and compute the airplane's position and attitude.


FIGURE 5 Fully autonomous flight test with (a) five UAVs and (b) a
closeup of five UAVs in flight. In this flight, the autonomous tasking
system commands all five vehicles to take off and hover 0.5 m
above the ground for two minutes. In (b) the five vehicles are shown
as they hover during the test. In (a) the five transmitters for the vehicles are shown. The transmitter for each vehicle is connected directly to a dedicated ground computer, which monitors and commands
the vehicle during flight.

This module is designed to allow an external system or user to communicate with the vehicle using task-level commands, such as "fly to waypoint A," "hover/loiter over area B," "search region C," "classify/assess object D," "track/follow object E," and "take off/land at location F." These commands can be sent to the vehicle at any time during vehicle operations. Each agent's vehicle manager processes these tasks as they arrive and responds to the sender, acknowledging the task request.
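A minimal sketch of such a dispatch-and-acknowledge loop is shown below. The command names echo those listed above, but the handler structure, message fields, and return values are assumptions, not RAVEN's implementation.

class VehicleManager:
    def __init__(self, vehicle_id):
        self.vehicle_id = vehicle_id
        self.handlers = {
            "fly_to_waypoint": self.fly_to_waypoint,
            "hover_over_area": self.hover_over_area,
            "takeoff": self.takeoff,
            "land": self.land,
        }

    def handle(self, task):
        # Process one task request and return an acknowledgment to the sender.
        handler = self.handlers.get(task.get("type"))
        if handler is None:
            return {"vehicle": self.vehicle_id, "accepted": False,
                    "reason": "unknown task type"}
        handler(task)
        return {"vehicle": self.vehicle_id, "accepted": True,
                "task": task.get("type")}

    def fly_to_waypoint(self, task):
        print(f"{self.vehicle_id}: flying to {task['waypoint']}")

    def hover_over_area(self, task):
        print(f"{self.vehicle_id}: hovering over {task['area']}")

    def takeoff(self, task):
        print(f"{self.vehicle_id}: taking off")

    def land(self, task):
        print(f"{self.vehicle_id}: landing at {task.get('location', 'current position')}")

vm = VehicleManager("uav1")
print(vm.handle({"type": "fly_to_waypoint", "waypoint": [1.5, 0.0, 1.0]}))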
RAVEN is also designed with an automated task manager that manages the testbed's air and ground vehicles
using task-level commands. As a result, multivehicle mission scenarios, such as search, persistent surveillance, and
area denial, can be organized and implemented by the task
manager without human intervention. Figure 5 shows a
mission in which the automated task manager controls a
group of five UAVs.
Although coordinated multivehicle flight tasks can also
be managed autonomously by RAVEN, the system has an
operator interface with vehicle tasking capability. The task
manager system is designed to allow an operator to issue a
command to any vehicle at any time. Currently, the operator interface includes a three-dimensional display of the
objects in the testing area (as shown in Figure 3) and a
command and control user interface, which displays vehicle health and state data, task information, and other mission-relevant data.
Each vehicle's trajectory is specified as a sequence of waypoints consisting of a location x_i = (x, y, z), vehicle heading ψ_i, and speed v_i. Given these waypoints, several options are available for selecting the vehicle's reference inputs. Perhaps the simplest path is to follow a linear interpolation of the points defined by

    x_{ref}(t) = x_i + \frac{x_{i+1} - x_i}{|x_{i+1} - x_i|} v_i t,    (1)

where the choice of v_i can be used to move between waypoints at varying speeds. This approach is used to automate the takeoff and landing procedure for the quadrotor vehicles. As a result, the quadrotor vehicles are fully autonomous from takeoff to landing during all flight operations.
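A direct implementation of (1) might look like the following sketch; the waypoint coordinates and speed in the example are illustrative only.

import numpy as np

def reference_position(x_i, x_ip1, v_i, t):
    # Linear interpolation between waypoints x_i and x_{i+1}, as in (1).
    # The reference moves from x_i toward x_{i+1} at speed v_i; t is the time
    # (in seconds) since the segment started.
    direction = (x_ip1 - x_i) / np.linalg.norm(x_ip1 - x_i)
    return x_i + direction * v_i * t

# Example segment: fly from (1.5, 0, 1) m toward (1.5, 5.5, 1) m at 0.25 m/s.
x_i = np.array([1.5, 0.0, 1.0])
x_ip1 = np.array([1.5, 5.5, 1.0])
for t in (0.0, 2.0, 4.0):
    print(t, reference_position(x_i, x_ip1, 0.25, t))

In practice the reference would be clamped, or the planner would switch to the next segment, once the reference reaches x_{i+1}.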

FIGURE 6 Quadrotor model axis definition. The inertial frame (x_E, y_E, z_E) is used to specify the vehicle's position. The vehicle's body frame (x_B, y_B, z_B) and the roll, pitch, and yaw Euler angles φ, θ, and ψ are used to specify the vehicle's orientation.

QUADROTOR CONTROL DESIGN MODEL
A model of the quadrotor vehicle dynamics is needed to design a hover controller. Figure 6 defines an inertial frame (x_E, y_E, z_E), which is used to specify the vehicle's position, and the roll, pitch, and yaw Euler angles φ, θ, and ψ, which are used to specify the vehicle's orientation. The body frame is specified by the x_B-, y_B-, and z_B-axes; the body moments of inertia are I_x, I_y, and I_z; and p, q, and r denote the body angular rates in the body frame. Additional parameters include the distance L from the quadrotor's center of mass to its motors, the mass m of the vehicle, the moment of inertia J_R of a rotor blade, and a disturbance ω_d generated by differences in rotor speed. The inputs to the system are δ_collective, δ_roll, δ_pitch, and δ_yaw, which are the collective, roll, pitch, and yaw input commands, respectively. Starting with the flat-Earth, body-axis six-degree-of-freedom (6DOF) equations [36], the kinematic and moment equations for the nonlinear quadrotor model can be written as

    \dot{p} = qr \frac{I_y - I_z}{I_x} - \frac{J_R}{I_x} q \omega_d + \frac{L}{I_x} \delta_{roll},    (2)

    \dot{q} = pr \frac{I_z - I_x}{I_y} + \frac{J_R}{I_y} r \omega_d + \frac{L}{I_y} \delta_{pitch},    (3)

    \dot{r} = pq \frac{I_x - I_y}{I_z} + \frac{1}{I_z} \delta_{yaw},    (4)

    \dot{\phi} = p + \tan\theta (q \sin\phi + r \cos\phi),    (5)

    \dot{\theta} = q \cos\phi - r \sin\phi,    (6)

    \dot{\psi} = (q \sin\phi + r \cos\phi) \sec\theta,    (7)

where (J_R/I_x) q ω_d and (J_R/I_y) r ω_d represent the disturbances caused by changing rotor speeds (as discussed in [37]), although the gyroscopic effects due to these disturbances are small for the Draganflyer [30] model. Also, the cross-product moments of inertia are omitted since I_xz is considerably smaller than I_x and I_z due to the shape of the quadrotor, while I_xy and I_yz are zero due to the vehicle symmetry. Since the force applied to the vehicle as a result of the collective command can be represented as a thrust vector along the negative z_B body axis, the nonlinear navigation equations for the quadrotor in the reference frame defined in Figure 6 are given by

    \ddot{x}_E = -(\cos\psi \sin\theta \cos\phi + \sin\psi \sin\phi) \frac{1}{m} u,    (8)

    \ddot{y}_E = -(\sin\psi \sin\theta \cos\phi - \cos\psi \sin\phi) \frac{1}{m} u,    (9)

    \ddot{z}_E = g - (\cos\theta \cos\phi) \frac{1}{m} u,    (10)

where u = δ_collective.
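To make the attitude portion of this model concrete, the sketch below integrates (2)-(7) numerically for a small constant roll command, with the rotor-speed disturbance ignored. The inertia and geometry values are placeholders chosen for illustration, not the Draganflyer's measured parameters.

import numpy as np
from scipy.integrate import solve_ivp

Ix, Iy, Iz, L = 6.2e-3, 6.2e-3, 1.1e-2, 0.2   # placeholder parameter values

def attitude_dynamics(t, s, d_roll, d_pitch, d_yaw):
    # Right-hand side of (2)-(7) with the rotor-speed disturbance term dropped.
    p, q, r, phi, theta, psi = s
    pdot = q * r * (Iy - Iz) / Ix + (L / Ix) * d_roll
    qdot = p * r * (Iz - Ix) / Iy + (L / Iy) * d_pitch
    rdot = p * q * (Ix - Iy) / Iz + (1.0 / Iz) * d_yaw
    phidot = p + np.tan(theta) * (q * np.sin(phi) + r * np.cos(phi))
    thetadot = q * np.cos(phi) - r * np.sin(phi)
    psidot = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
    return [pdot, qdot, rdot, phidot, thetadot, psidot]

# Response to a small constant roll command starting from rest.
sol = solve_ivp(attitude_dynamics, (0.0, 1.0), [0.0] * 6,
                args=(1e-3, 0.0, 0.0), max_step=0.01)
print(sol.y[3, -1])   # roll angle (rad) after 1 s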

FIGURE 7 Single-vehicle hover experiment. In this test flight, a quadrotor UAV is commanded to hover at (x, y, z) = (0, 0, 0.7) m for 10 min. Part (a) shows the x-y plot of vehicle position, (b)-(d) show histograms with the percentage of time at location for the x, y, and z positions, and (e)-(g) show histograms with the percentage of time at each flight condition for the x, y, and z velocities. These results demonstrate that the vehicle can hold its position during flight. In this test, the vehicle remains inside the 20-cm box throughout the 10-min hover test flight.

After linearizing this model around the hover flight condition with φ, θ, and ψ ≈ 0, setting δ_collective = mg + \bar{δ}_collective, and dropping small terms in the x_E and y_E dynamics yields

    \ddot{x}_E = -g \theta,    (11)

    \ddot{y}_E = g \phi,    (12)

    \ddot{z}_E = -\frac{1}{m} \bar{\delta}_{collective},    (13)

    \dot{\phi} = p,    (14)

    \dot{\theta} = q,    (15)

    \dot{\psi} = r,    (16)

    \dot{p} = \frac{L}{I_x} \delta_{roll},    (17)

    \dot{q} = \frac{L}{I_y} \delta_{pitch},    (18)

    \dot{r} = \frac{1}{I_z} \delta_{yaw}.    (19)

FIGURE 8 Single-vehicle waypoint tracking experiment. In this test, a quadrotor vehicle is commanded to fly from (1.5, 0, 1) m through the waypoints (1.5, -0.5, 1) m, (-1.5, -0.5, 1) m, (-1.5, 5.5, 1) m, and (1.5, 5.5, 1) m and then back to (1.5, 0, 1) m. These results show that the vehicle can track a path at low speeds. In this test, the vehicle is commanded to fly and hover at waypoints in a rectangular flight pattern. The vehicle does not drift away from the path by more than 15 cm at any point during the flight.

The values I_x, I_y, I_z, L, m, and J_R are measured or determined experimentally for the Draganflyer models used in RAVEN.

An integrator of the position and heading error is included in each loop so that the controller can remove steady-state position and heading errors in hover. Since the motion-capture system accurately and directly measures the system's state, four linear-quadratic-regulator (LQR) controllers acting on four combinations of vehicle states, namely, θ and x_E, φ and y_E, ψ, and z_E, are used to stabilize and control the quadrotor. The regulators are designed to optimize the vehicle's capabilities in hover while ensuring that the vehicle can respond quickly to position errors.

The vehicle's controllers are optimized for surveillance experiments. In these experiments, a camera is mounted on the quadrotor vehicles facing toward the ground. Thus, large changes in pitch, roll, and yaw affect the vehicle's ability to focus on a ground target. In addition, an anti-windup bumpless-transfer scheme (similar to those described in [38]) with adjustable saturation bounds is used to prevent the integrators from winding up while the vehicle is on the ground before, during, and after takeoff and landing.
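A minimal sketch of one such LQR loop, for the pitch/x_E subsystem implied by (11), (15), and (18), is shown below. The numeric parameter values and the weighting matrices are assumptions chosen for illustration; they are not the gains used on RAVEN.

import numpy as np
from scipy.linalg import solve_continuous_are

g, L, Iy = 9.81, 0.2, 6.2e-3          # placeholder parameters

# Linearized pitch/x_E subsystem from (11), (15), (18): state [x, xdot, theta, q].
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0,  -g, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [0.0], [L / Iy]])

Q = np.diag([10.0, 1.0, 1.0, 0.1])    # assumed state weights
R = np.array([[1.0]])                 # assumed input weight

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # LQR gain: delta_pitch = -K @ state
print(K)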

HOVERING AIRPLANE CONTROL DESIGN MODEL
In addition to quadrotors, a foam R/C aircraft [shown in Figure 4(a)] is used to explore the properties of an aircraft flying in a prop-hang (that is, nose-up) orientation for the purpose of landing vertically and performing other complex maneuvers, such as perching. To avoid Euler-angle singularities, the airplane's body-fixed reference frame is defined such that the positive x-axis points down from the airplane's undercarriage, the positive y-axis points out along the right wing, and the positive z-axis points out the tail of the aircraft, as shown in Figure 4(b). Using this reference frame, the vehicle's nominal attitude reference in hover corresponds to φ = θ = ψ = 0.

Assuming that the Earth is an inertial reference frame and the aircraft is a rigid body, the aircraft equations of motion [36], [39] can be written as (5)-(7), (8)-(10) with u = δ_throttle, and

    \dot{p} = qr \frac{I_y - I_z}{I_x} + \frac{A_{rudder} L_{rudder}}{I_x} \delta_{rudder},    (20)

    \dot{q} = pr \frac{I_z - I_x}{I_y} + \frac{A_{elevator} L_{elevator}}{I_y} \delta_{elevator},    (21)

    \dot{r} = pq \frac{I_x - I_y}{I_z} - \frac{\dot{\omega}_{prop} I_{prop}}{I_z} + C_{l,prop} \frac{\omega_{prop}^2}{2} \frac{d_{prop}^5}{I_z} + \frac{A_{aileron} L_{aileron}}{I_z} \delta_{aileron},    (22)

where d_prop is the diameter of the propeller.


Near hover, the influence of external forces and moments on the aircraft, other than gravity, is negligible, and the translational and rotational velocities are small. Using small-angle approximations, the equations of motion (5)-(7), (8)-(10), and (20)-(22) can be considerably simplified. Since I_prop/I_z ≈ 0.04 ≪ 1, the torque due to a change in the rotational speed of the motor can also be disregarded. Next, define δ_throttle = mg + \bar{δ}_throttle and

    \bar{\delta}_{aileron} = C_{l,prop} \frac{\omega_{prop,0}^2}{2} \frac{d_{prop}^5}{I_z} + \delta_{aileron},    (23)

where ω_prop,0 is the average rotational speed of the propeller needed to keep the airplane in hover [40]. Next, since the vehicle's reflective markers are mounted on the top surface of both wings and the fuselage, they are visible only to cameras facing the top of the vehicle when the airplane is in hover. Therefore, the hover heading reference is set to π/2 to ensure that at least three cameras are facing the reflective markers. Linearizing the equations of motion for the airplane in hover yields
markers. Linearizing the equations of motion for the airplane in hover yield
x E = g,

(24)

y E = g,
1
z E = throttle ,
m
= q,
= p,

(25)

= r,
Arudder Lrudder
p =
rudder ,
Ix
Aelevator Lelevator
elevator ,
q =
Iy
Aaileron Laileron
r =
aileron .
Iz

(29)

(26)
(27)
(28)

(30)
(31)
(32)

FIGURE 9 Multivehicle coordinated flight experiment. In this test, two vehicles are commanded to fly at a constant speed in a circular pattern with changes in altitude. The vehicles are commanded by the system's task advisor to take off and fly a circular trajectory maintaining a constant speed of 0.25 m/s and 180° of angular separation. In particular, the vehicles fly in a circle (as projected in the x-y coordinate frame shown in (a) and (b)) while changing altitude (from 0.5 m to 1.5 m) as they move around the flight pattern, as shown in (c). This test is repeated multiple times, and the vehicles fly similar flight paths in five consecutive tests, as shown in (b). Notice that the vehicle trajectories in the lower right corner of the plot appear noisier. This disruption is partially caused by the quadrotors flying through the rotor downwash from another vehicle. These results show that the tasking system can command and coordinate the actions of multiple vehicles.

Here, I_x, I_y, and I_z correspond to the body moment of inertia terms, while A_elevator, A_rudder, and A_aileron correspond to the deflected area of each control surface that is subject to propeller flow while the airplane is in hover. In addition, L_elevator, L_rudder, and L_aileron are the lengths of the control-surface moment arms. These terms are measured or determined experimentally for this model.

Four control schemes are applied to combinations of vehicle states, namely, θ and x_E, φ and y_E, ψ, and z_E, to stabilize and control the airplane in hover. Each loop uses proportional plus derivative (PD) and proportional plus integral plus derivative (PID) controllers to maintain the vehicle's position during the hover condition. In particular, the controllers use large gains on the state derivative errors to prevent the vehicle from moving too quickly when trying to maintain its position.

Several issues arise in trying to control an airplane in hover. For example, propeller drag torque changes with motor speed, causing the vehicle to rotate about its z_B-axis shown in Figure 4(b). This rotation is mitigated by adding an aileron deflection proportional to the motor-speed error around the equilibrium speed. The varying speed of the propeller also affects the speed of the airflow over the control surfaces, hence affecting control-surface actuation. This issue is most prominent in roll control. To overcome this effect, the ailerons are deflected additionally at low throttle settings. Unfortunately, as aileron deflection is increased, the ailerons block a portion of the propeller wash that would otherwise reach the elevator. To resolve this issue, a gain proportional to aileron deflection is added in the elevator control path to compensate for reduced airflow over the elevator and improve the vehicle's pitch response when the ailerons are deflected.
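The sketch below shows one way the two compensations just described might be combined: an aileron trim term proportional to the motor-speed error, and an elevator gain that grows with aileron deflection. The linear forms and gain values are assumptions for illustration only, not the compensation actually implemented on the airplane.

def compensated_surfaces(aileron_cmd, elevator_cmd, motor_speed, motor_speed_eq,
                         k_torque=0.02, k_blockage=0.5):
    # 1) Add aileron deflection proportional to the motor-speed error to
    #    counteract the change in propeller drag torque.
    # 2) Scale the elevator command up with aileron deflection to make up for
    #    the propeller wash blocked by the deflected ailerons.
    aileron = aileron_cmd + k_torque * (motor_speed - motor_speed_eq)
    elevator = elevator_cmd * (1.0 + k_blockage * abs(aileron))
    return aileron, elevator

print(compensated_surfaces(0.05, 0.10, motor_speed=820.0, motor_speed_eq=800.0))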
Finally, fast control-surface motions can cause the vehicle's airframe to deform during flight. This deformation causes the reflective markers used by the motion-capture system to shift in position, thereby giving an incorrect estimate of the airplane's momentary location. Consequently, the controller gains are sized to minimize rapid changes in control-surface position. Flight results for the airplane in hover are given in the next section.

Testbed History
Various multivehicle tests and mission scenarios have been flown using RAVEN. In particular, since January 2006, more than 2500 vehicle experiments have been performed, including approximately 60 flight demonstrations during a 16-h period at the Boeing Technology Exposition at Hanscom Air Force Base near Lexington, Massachusetts, on May 3-4, 2006. Each test performed at the event involved two vehicles. One test involved two air vehicles flying a three-dimensional coordinated pattern (see Figure 9), while another involved an air vehicle following a ground vehicle. These demonstrations show that the platform can perform multiple UAV missions repeatedly, on demand, and with minimal setup.

RESULTS
An overview of RAVEN testbed operations is provided in "Testbed History."

FIGURE 10 Airplane hover experiment. In this flight test, the airplane is commanded to hover at (x, y, z) = (0, 0, 0.7) m for 5 min. Part (a) shows the x-y plot of vehicle position, while (b)-(d) show histograms with the percentage of time at location for the x, y, and z positions. These results demonstrate that the vehicle can hold its position reliably during flight. In this test, the vehicle remains inside the 20-cm box over 63% of the time during the 5-min hover test flight and stays within 0.8 m of the desired location throughout the entire test.

Quadrotor
Typical results from a 10-min hover test are shown in Figure 7. In this test a single quadrotor is commanded to hold its position at (x, y, z) = (0, 0, 0.7) m for a 10-min period. Figure 7 shows several plots, including a plot of the vehicle's x and y positions. The dashed red box in the figure is 10 cm from the center point. As shown in Figure 7, the vehicle maintains its position inside this 20-cm box during the entire flight.
FIGURE 11 Hovering airplane waypoint tracking experiments. In these flight tests, the vehicle flies between points (1.5, 0, 1), (-1.5, 0, 1), and (-1.5, 5.5, 1) m. These results show that the hovering airplane can fly between waypoints. In both (a) and (b), the vehicle stays within one meter of the desired trajectory line during the hover test flights.

The remaining plots in the figure are the histograms of the vehicle's x, y, and z positions during these tests. This test shows that a quadrotor can maintain both its altitude (staying between 0.65-0.75 m) and position (staying mostly within 0.05 m of the center) in hover over the full charge cycle of a battery.

The results of a single-vehicle waypoint tracking experiment are shown in Figure 8. In this test the vehicle is commanded to hold its altitude at 1 m while flying at a velocity of 0.05 m/s between each of the following waypoints: (1.5, 0, 1) m, (1.5, -0.5, 1) m, (-1.5, -0.5, 1) m, (-1.5, 5.5, 1) m, and (1.5, 5.5, 1) m, and then back to (1.5, 0, 1) m. In addition, the vehicle hovered at each waypoint for 10 s before flying to the next waypoint. This test demonstrates that the vehicle can follow a set trajectory around the indoor laboratory flight space. The plots in Figure 8 show that the vehicle follows the trajectory around a 3-m-by-6-m rectangular box as specified. The cross-track error is less than 15 cm from the specified trajectory at any given time during the flight.
In addition to these single-vehicle experiments, several
multivehicle experiments and test scenarios have been conducted, as shown in Figure 5. These tests include, but are
not limited to, formation flight tests, coordinated vehicle
tests involving three air vehicles, and multivehicle search
and track scenarios. In particular, Figure 9 shows the
results from a two-vehicle coordinated flight experiment. In
this experiment, the vehicles are commanded by the systems task advisor to take off and fly a circular trajectory
maintaining a constant speed of 0.25 m/s and 180 of angular separation. In particular, the vehicles are flying in a circle (as projected in the x y coordinate frame) while they
change altitude (flying from an altitude of 0.5 m to 1.5 m) as
they move around the flight pattern. Figure 9(a) shows the
x y projection of one of the five circle test flights completed as part of this experiment. Notice that the vehicle
trajectories in Figure 9(a) appear to be more noisy. This
disruption is partially caused by the quadrotors flying
through the rotor downwash from another vehicle. Flight
testing shows that the downwash from these quadrotor
vehicles is substantial, thus making it difficult to fly one
quadrotor below a second quadrotor without significant
altitude separation. Figure 9(c) shows a three-dimensional
view of the trajectory, making it clear that the vehicles are
also changing altitude during the flight. Figure 9(b) shows
the results of five consecutive two-vehicle circle test flights
performed over a 20-min time span. These results demonstrate that experiments run on RAVEN are repeatable and
that the platform can be used to perform multiple test
flights over a short period of time.

Hovering Airplane
Just as with the quadrotor, hover tests are performed with the foam airplane shown in Figure 4. In Figure 10 the vehicle is commanded to hold its position at (x_E, y_E, z_E) = (0, 0, 0.7) m for 5 min. Figure 10 shows four plots, including a plot of the vehicle's x-y location while it maintains its position and attitude. The dashed red box in Figure 10 is 0.5 m from the center point. As shown in Figure 10, the vehicle maintains its position inside this 1-m box for most of the 5-min test period. The remaining plots give histograms of the vehicle's x, y, and z positions. These plots confirm that the vehicle is within a 50-cm box around the target point more than 63% of the time and stays within 0.8 m of the desired location throughout the entire test. These plots also show that the vehicle maintains its altitude by staying between 0.65-0.75 m during the entire hover test. A video of the aircraft in the hover flight condition can be found online at [41].
Figure 11 shows two waypoint tracking tests for the airplane starting from hover and then moving at a horizontal rate of 0.3 m/s between a set of waypoints while remaining nose up. In (a) the vehicle starts from (-1.5, 5.5, 1) m, while in (b) the vehicle starts from (1.5, 0, 1) m. In both tests, the airplane flies along the desired flight path as it hovers between each waypoint, despite the fact that the vehicle has less control authority in its yaw axis, making it difficult to maintain the vehicle's heading during flight. The reduced control authority of the vehicle's yaw axis in hover is due to the fact that propeller wash covers less than 10% of the ailerons in the hover flight condition, thus reducing the vehicle's ability to counteract disturbances in propeller torque while maintaining the vehicle's heading. As a result, external disturbances cause the vehicle to deviate from a straight flight path between waypoints. However, the vehicle stays within 0.5 m of the intended flight path throughout most of the tests.

"Rapid Prototyping" discusses the short timelines for completing these aircraft flight results, which validate RAVEN's rapid prototyping capabilities.

CONCLUSIONS
This article describes an indoor testbed developed at MIT
for studying long-duration multivehicle missions in a controlled environment. While other testbeds have been
developed to investigate the command and control of
multiple UAVs, RAVEN is designed to explore long-duration autonomous air operations using multiple UAVs with
virtually no restrictions on flight operations. Using the
motion-capture system capability, RAVEN enables the
rapid prototyping of coordination and control algorithms
for different types of UAVs, such as fixed- and rotary-wing vehicles. Furthermore, RAVEN enables one operator
to manage up to ten UAVs simultaneously for multivehicle missions, which meets the goal of reducing the cost
and logistic support needed to operate these systems.

Rapid Prototyping
The aircraft in Figure 4 was incorporated into RAVEN within three
weeks of acquiring the vehicle, including one week for construction. In addition, three days after making its first human-controlled
flight, the airplane made its first autonomous hover flight. This
activity was performed from the end of September 2006 to the
middle of October 2006, validating the platform's rapid prototyping
capability for new vehicle hardware. A video of the aircraft in hover
flight can be found online at [41].

Finally, several mission-management, task-allocation,


and path-planning algorithms have been successfully implemented and tested on RAVEN [32], [33]. For example, the
UAVs use software monitors that determine when they
must return to base for recharging. These monitors, in conjunction with other system health-management information,
enable the strategic- and tactical-level mission planning
modules to make informed decisions about the best way to
allocate resources given the impact of likely failures. Future
work designed to develop and integrate alternative health-management and multi-UAV mission technologies into the
platform for real-time testing is ongoing.

ACKNOWLEDGMENTS
The authors would like to thank James McGrew and Spencer
Ahrens for their assistance in the project. This research has
been supported by the Boeing Company, Phantom Works,
Seattle, and AFOSR grant FA9550-04-1-0458. Brett Bethke
would like to thank the Hertz Foundation and the American
Society for Engineering Education for their support.

REFERENCES
[1] P.R. Chandler, M. Pachter, D. Swaroop, J.M. Fowler, J.K. Howlett, S.
Rasmussen, C. Schumacher, and K. Nygard, Complexity in UAV cooperative control, in Proc. 2002 American Control Conf., Anchorage, AK, 2002,
pp. 18311836.
[2] P. Gaudiano, B. Shargel, E. Bonabeau, and B. Clough, Control of UAV
SWARMS: What the bugs can teach us, in Proc. 2nd AIAA Unmanned Unlimited Systems, Technologies, and Operations Aerospace Conf., San Diego, CA,
2003, AIAA-2003-6624.
[3] R. Olfati-Saber, W.B. Dunbar, and R.M. Murray, Cooperative control of
multi-vehicle systems using cost graphs and optimization, in Proc. 2003
American Control Conf., Denver, CO, 2003, pp. 22172222.
[4] H. Paruanak, S. Brueckner, and J. Odell, Swarming coordination of
multiple UAVs for collaborative sensing, in Proc. 2nd AIAA Unmanned
Unlimited Systems, Technologies, and Operations Aerospace Conf., San Diego,
CA, 2003, AIAA-2003-6525.
[5] O. Amedi, T. Kanade, and K. Fujita, A visual odometer for autonomous
helicopter flight, Robot. Autonom. Syst., vol. 28, pp. 185193, 1999.
[6] D. Cruz, J. McClintock, B. Perteet, O. Orqueda, Y. Cao and R. Fierro,
Decentralized cooperative control: A multivehicle platform for research in
networked embedded systems, IEEE Contr. Syst. Mag., vol. 27, no. 3, pp.
5878, 2007.
[7] J. Evans, G. Inalhan, J.S. Jang, R. Teo, and C. Tomlin, DragonFly: A
versatile UAV platform for the advancement of aircraft navigation and
control, in Proc. 20th Digital Avionics Systems Conf., Daytona Beach, FL,
2001, pp. 1.C.311.C.312.
[8] R. Franz, M. Milam, and J. Hauser, Applied receding horizon control of
the Caltech ducted fan, in Proc. 2002 American Control Conf., Anchorage,
MA, 2002, pp. 37353740.

APRIL 2008

IEEE CONTROL SYSTEMS MAGAZINE 63

[9] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron, Flight test and simulation results for an autonomous acrobatic helicopter, in Proc. 21st Digital
Avionics Systems Conf., Irvine, CA, 2002, pp. 8.C.318.C.36.
[10] G. Hoffmann, D.G. Rajnarayan, S.L. Waslander, D. Dostal, J.S. Jang, and
C. Tomlin, The Stanford testbed of autonomous rotorcraft for multi agent
control (STARMAC), in Proc. 23rd Digital Avionics Systems Conf., Salt Lake
City, UT, 2004, pp. 12.E.4112.E.410.
[11] O. Holland, J. Woods, R. De Nardi, and A. Clark, Beyond swarm intelligence: The UltraSwarm, in Proc. 2005 IEEE Swarm Intelligence Symp.,
Pasadena, CA, June 2005, pp. 217224.
[12] J.P. How, E. King, and Y. Kuwata, Flight demonstrations of cooperative
control for UAV teams, AIAA 3rd Unmanned Unlimited Technical Conf.,
Workshop and Exhibit, Chicago, IL, 2004, AIAA-2004-6490.
[13] E.N. Johnson and D.P. Schrage, System integration and operation of a
research unmanned aerial vehicle, AIAA J. Aerosp. Computing, Inform., Commun., vol. 1, pp. 518, 2004.
[14] Z. Jin, S. Waydo, E.B. Wildanger, M. Lammers, H. Scholze, P. Foley, D.
Held, and R.M. Murray, MVWT-11: The second generation Caltech multivehicle wireless testbed, in Proc. 2004 American Control Conf., Boston, MA,
2004, pp. 53215326.
[15] E. Kadioglu and N. Papanikolopoulos, A method for transporting a
team of miniature robots, in Proc. 2003 IEEE/RSJ Int. Conf. Intelligent
Robots Systems, Las Vegas, NV, 2003, pp. 22972302.
[16] T. Kalmar-Nagy, R. DAndrea, and P. Ganguly, Near-optimal dynamic
trajectory generation and control of an omnidirectional vehicle, Robot.
Autonom. Syst., vol. 46, pp. 4764, 2004.
[17] E. King, M. Alighanbari, Y. Kuwata, and J.P. How, Coordination and
control experiments on a multi-vehicle testbed, in Proc. 2004 American Control Conf., Boston, MA, 2004, pp. 53155320.
[18] T.J. Koo, Vanderbilt Embedded Computing Platform for Autonomous
Vehicles (VECPAV), July 2006 [Online]. Available: http://www.vuse.vanderbilt.edu/~kootj/Projects/VECPAV/
[19] D.R. Nelson, D.B. Barber, T.W. McLain, and R.W. Beard, Vector field
path following for small unmanned air vehicles, in Proc. 2006 American Control Conf., Minneapolis, MN, 2006, pp. 57885794.
[20] D. Shim, H. Chung, H.J. Kim, and S. Sastry, Autonomous exploration
in unknown urban environments for unmanned aerial vehicles, in Proc.
AIAA Guidance, Navigation, Control Conf. Exhibit, San Francisco, CA, 2005,
AIAA-2005-6478.
[21] R. Teo, J.S. Jang, and C.J. Tomlin, Automated multiple UAV flight the
Stanford DragonFly UAV program, in Proc 43rd IEEE Conf. Decision
Control, Paradise Island, Bahamas, 2004, pp. 42684273.
[22] R. Vidal, O. Shakernia, H.J. Kim, H. Shim, and S. Sastry, Multi-agent
probabilistic pursuit-evasion games with unmanned ground and aerial vehicles, IEEE Trans. Robot. Autom., vol. 18, no. 5, pp. 662669, 2002.
[23] V. Vladimerouy, A. Stubbs, J. Rubel, A. Fulford, J. Strick, and G. Dullerud,
A hovercraft testbed for decentralized and cooperative control, in Proc. 2004
American Control Conf., Boston, MA, 2004, pp. 53325337.
[24] M. Valenti, T. Schouwenaars, Y. Kuwata, E. Feron, J. How, and J.
Paunicka, Implementation of a manned vehicle-UAV mission system,
in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Providence, RI,
AIAA-2004-5142, 2004.
[25] T. Schouwenaars, M. Valenti, E. Feron, J. How, and E. Roche, Linear
programming and language processing for human-unmanned aerialvehicle team missions, J. Guid., Contr., Dyn., vol. 29, no. 2, pp. 303313,
Mar. 2006.
[26] K. Culligan, M. Valenti, Y. Kuwata, and J.P. How, Three-dimensional
flight experiments using on-line mixed-integer linear programming trajectory optimization, in Proc. 2007 American Control Conf., New York, NY,
2007, pp. 53225327.
[27] G. Tournier, M. Valenti, J.P. How, and E. Feron, Estimation and control of a quadrotor vehicle using monocular vision and moire patterns, in
Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Keystone, CO, Aug.
2006, AIAA-2006-6711.
[28] M. Valenti, B. Bethke, G. Fiore, J. How, and E. Feron, Indoor multivehicle flight testbed for fault detection, isolation, and recovery, in Proc.
AIAA Guidance, Navigation, Control Conf. Exhibit, Keystone, CO, Aug. 2006,
AIAA-2006-6200.
[29] Vicon, Vicon MX Systems, June 2006 [Online]. Available:
http://www.vicon.com/products/viconmx.html
[30] Draganfly Innovations Inc., Draganfly V Ti Pro Web site, June 2006
[Online]. Available: http://www.rctoys.com/draganflyer5tipro.php
[31] Thunder Power Batteries, Thunder Power Batteries Web site, Sept.
2006 [Online]. Available: http://www.thunderpower-batteries.com/

64 IEEE CONTROL SYSTEMS MAGAZINE

APRIL 2008

[32] M. Valenti, B. Bethke, J. How, D.P. de Farias, and J. Vian, Embedding


health management into mission tasking for UAV teams, in Proc. 2007
American Control Conf., New York, June 2007, pp. 57775783.
[33] M. Valenti, D. Dale, J. How, and J. Vian, Mission health management
for 24/7 persistent surveillance operations, in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Myrtle Beach, SC, Aug. 2007, AIAA-2007-6508.
[34] Hobbico, Inc., DuraTrax Mini Quake Web site, Dec. 2006 [Online].
Available: http://www.duratrax.com/cars/dtxd11.html
[35] Gumstix, Inc., Gumstix.com Web site, Dec. 2006 [Online]. Available:
http://www.gumstix.com
[36] B.L. Stevens and F.L. Lewis, Aircraft Control and Simulation, 2nd ed.
Hoboken, NJ: Wiley, 2003.
[37] S. Bouabdallah, P. Murrieri,and R. Siegwart, Design and control of an
indoor micro quadrotor, in Proc. 2004 IEEE Int. Conf. Robotics Automation,
New Orleans, LA, Apr. 2004, pp. 43934398.
[38] C. Edwards and I. Postlethwaite, Anti-windup and bumpless transfer
schemes, in Proc. UKACC Int. Conf. Control 1996 (Conf. Publ. No. 427), Exeter,
England, Sept. 1996, pp. 394399.
[39] J. How, Lecture Notes: Aircraft Stability and Control (16.333): Lectures
3 and 4 Sept. 2004 [Online]. Available: http://ocw.mit.edu/OcwWeb/
Aeronautics-and-Astronautics/16-333Fall-2004/LectureNotes/index.htm
[40] N. Knoebel, S. Osborne, D. Snyder, T. Mclain, R. Beard, and A.
Eldredge, Preliminary modeling, control, and trajectory design for miniature autonomous tailsitters, in Proc. AIAA Guidance, Navigation, Control
Conf. Exhibit, Keystone, CO, 2006, AIAA-2006-6713.
[41] Aerospace Controls Laboratory MIT, UAV SWARM Health Management Project Web site, June 2006 [Online]. Available: http://vertol.mit.edu

AUTHOR INFORMATION
Jonathan P. How (jhow@mit.edu) is a professor in the
Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT), where he leads the
Aerospace Controls Laboratory. He graduated from the
University of Toronto with a B.Sc. in aerospace engineering and received the M.Sc. and Ph.D. degrees in aeronautics and astronautics from MIT in 1989 and 1993,
respectively. He is a Senior Member of the IEEE. He can be
contacted at Massachusetts Institute of Technology, Aerospace Controls Laboratory, 77 Massachusetts Ave., Rm 33-326, Cambridge, MA 02139 USA.
Brett Bethke is a research assistant in the Aerospace
Controls Laboratory at MIT. He has S.B. degrees in physics
and aerospace engineering, an S.M. in aeronautics and
astronautics, and is pursuing his Ph.D. degree in the Aeronautics and Astronautics Department at MIT. He currently
has a fellowship from the Hertz Foundation and the American Society for Engineering Education.
Adrian Frank is a visiting student in the Aerospace
Controls Laboratory at MIT. He is currently pursuing his
M.Sc. degree in aeronautical engineering at the Royal Institute of Technology in Stockholm, Sweden.
Daniel Dale received an M.Eng. degree from the Electrical Engineering and Computer Science Department at
MIT in 2007. He has S.B. degrees in physics and electrical
engineering from MIT.
John Vian is a technical fellow with The Boeing Company,
where he is responsible for leading Phantom Works' self-aware adaptive systems strategic technology area. He holds a
B.Sc. in mechanical engineering from Purdue University and
an M.Sc. in aeronautical engineering and Ph.D. in electrical
engineering, both from Wichita State University.
