
Santa Clara University

DEPARTMENT of MECHANICAL ENGINEERING

Date: January 15, 2014

I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY

SUPERVISION BY

Jackson Arcade

Phased Object Transport Using a Multi-Robot System

BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE

DEGREE OF

MASTER OF SCIENCE IN MECHANICAL ENGINEERING


Phased Object Transport Using a Multi-Robot System

by

Jackson Arcade

GRADUATE MASTER THESIS

Submitted in partial fulfillment of the requirements


for the degree of
Master of Science in Mechanical Engineering
School of Engineering
Santa Clara University

Santa Clara, California


January 14, 2014

Phased Object Transport Using a Multi-Robot System

Jackson Arcade

Department of Mechanical Engineering


Santa Clara University
2014

ABSTRACT

Multi-robot systems offer several benefits that can improve the performance of a robotic system. Features such as redundancy, increased coverage and throughput, diverse functionality, and flexible re-configurability allow them to perform tasks such as cutting, pushing, and grasping more effectively than a single robot. The purpose of this research was to implement force sensing on an existing robotic testbed and to integrate it with a cluster space controller. Using this system, a sequential position and force controller was used to move an object. The sequencing was arranged so that the controller switches from position-based trajectory control of the cluster, to force-controlled object repositioning, and then to timed velocity control of the cluster.
Experiments using this sequential controller were run in order to verify it. These experiments resulted in the successful transportation of the object from one location to another.

Keywords: Cluster Space Control, Force Control, Finite-State Machine


Acknowledgements

First and foremost, I would like to thank my advisor, Dr. Christopher Kitts, for his guidance, motivation, patience and thoughtful input during this research. I really appreciate the encouragement he showed every time I had a question, throughout my graduate studies as well as this research.
I would also like to thank everyone in the SCU Robotics Systems Lab for helping me out during this research. The experiments were hard at times, and I would like to thank my friend and colleague Matthew Chin for helping me during the tests and making them easier to deal with. He has worked alongside me on this project and has made significant contributions; this would not have been possible without his assistance. A mention needs to go out to the researchers Anne Mahacek and Jasmine Cashbaugh, working on quadrotors, for sharing the same UWB testbed and helping me with all the doubts I had. I would also like to thank Thomas Adamek for answering my questions about the robot hardware and Michael Neumann for guiding me and being there to help from the commencement of this project.
Finally, I would like to thank my family for their support and encouragement throughout my graduate studies. Without their constant support and advice this would have been difficult to complete. I would also like to thank my friends for keeping my social life active, for believing in me, for understanding my work, and for encouraging me along the way.


Table of Contents

Abstract.............................................................................................................................. iii
Acknowledgments.......................................................................................................................... iv
Table of Contents............................................................................................................................. v
List of Figures.................................................................................................................................vii
Chapter 1: Introduction...................................................................................................................1
1.1 Object Manipulation............................................................................................................4
1.2 Object Manipulation Types...................................................................................................5
1.2.1 Object Manipulation without force sensing..................................................................... 5
1.2.2 Object Manipulation with force sensing ......................................................................... 6
1.3 Project Statement............................................................................................................... 8
1.4 Reader's Guide.................................................................................................................... 9
Chapter 2: Cluster Space Control.................................................................................................. 10
2.1 Introduction to Cluster Space Control................................................................................ 10
2.2 Description of a Two Robot Cluster................................................................................... 12
2.3 Force Control...................................................................................................................... 16
2.4 Use of State Machine.......................................................................................................... 16
Chapter 3: Experimental Testbed................................................................................................... 21
3.1 Robots................................................................................................................................. 22
3.2 Navigation-UWB Tracking................................................................................................. 25
3.3 Data Handling and Control................................................................................................. 26
Chapter 4: Experimental Results................................................................................................... 28
4.1 Force Control Characteristics............................................................................................. 28
4.2 Sequential Control.............................................................................................................. 30
4.2.1 Position based Trajectory Control (Engage Mode)......................................................... 32
4.2.2 Force Control (Maneuver Mode)..................................................................................... 34
4.2.3 Velocity Control (Disengage Mode)................................................................................ 35

4.3 State Machine...................................................................................................................... 36


4.4 Testing Summary................................................................................................................ 38
Chapter 5: Conclusion.................................................................................................................... 39
5.1 Future Work............................................................................................................................. 39
References...................................................................................................................................... 41
Appendices
Appendix A: Simulink Model of the overall system................................................................ 45
Appendix B: m files for trajectory generation.......................................................................... 46
Appendix C: Specifications of Vernier force plate................................................................... 48
Appendix D: Stateflow block details.. ......................................................................................49


List of Figures

Figure 1.1 Object Transport............................................................................................................. 2


Figure 1.2 Overhead snapshot of the long loosely coupled scenario............................................... 3
Figure 1.3 Types of closures............................................................................................................ 5
Figure 1.4 Mobile Manipulator Setup.............................................................................................. 7
Figure 2.1 Robot pose using conventional versus cluster representations..................................... 10
Figure 2.2 Cluster space control architecture for a mobile multi-robot system............................. 13
Figure 2.3 Pose reference frames for the planar two-robot system............................................... 14
Figure 2.4 Object repositioning force controller............................................................................ 17
Figure 2.5 Basic State Transition Diagram.................................................................................... 18
Figure 2.6 Layout of Finite State Machine in Matlab.................................................................... 19
Figure 2.7 State Machine details in Matlab................................................................................... 20
Figure 3.1 Testbed layout.............................................................................................................. 21
Figure 3.2 Pioneer P3-AT robots................................................................................................... 22
Figure 3.3 Force Sensor................................................................................................................. 23
Figure 3.4 Force Sensor Connections............................................................................................ 23
Figure 3.5 Calibration curves of the force sensors......................................................................... 24
Figure 3.6 Object to be manipulated.............................................................................................. 24
Figure 3.7 Components.................................................................................................................. 26
Figure 3.8 Communication in the system...................................................................................... 27
Figure 4.1 Force plots while applying constant velocities............................................................. 29
Figure 4.2 Closed loop force control of a robot pushing the box.................................................. 30
Figure 4.3 Overall cluster position showing change of state......................................................... 31
Figure 4.4 Measured and the desired Forces of force plate1 and force plate2.............................. 32
Figure 4.5 Measured and desired cluster trajectories..................................................................... 33
Figure 4.6 Data error plot over time.............................................................................................. 33
Figure 4.7 Size of cluster............................................................................................................... 34


Figure 4.8 Force1 Control, Force2 Control and Object position control....................................... 34
Figure 4.9 Velocity Control for disengage mode........................................................................... 35
Figure 4.10 Change of state and object position control............................................................... 36
Figure 4.11 Maneuver mode in active condition........................................................................... 37


1.0 Introduction

"I just want the future to happen faster. I can't imagine the future without robots."
-Nolan Bushnell

Over the past 50 years, there has been significant progress toward automating day-to-day tasks in society, be it in industry, space exploration, or in security and military environments. The development of robots is one of the greatest contributions in this area. Fields in which robots can be used include medicine, psychology, agriculture, space, mining and so on. A robot mainly consists of a manipulator, end effector, actuators, sensors, and a controller. Mobile robots have now become a part of our daily life. Their ability to sense and then act in a complex and changing environment makes them suitable for performing many different tasks.
While most robots perform tasks in an isolated manner, there is a growing interest in the
use of collaborative multi-robot systems to enhance performance in current applications
and to develop new capabilities. The study of multi-robot systems has received increased
attention in recent years. Potential advantages of multi-robot systems include redundancy,
increased coverage and throughput, flexible re-configurability, spatially diverse
functionality, and the fusing of physically distributed sensors and actuators [1]. Other
potential advantages over single robots include a possible reduction in total system cost, since several simple, inexpensive robots may replace a single complex and expensive one. Furthermore, some tasks and environments are so complex that they require multi-robot systems to complete their mission requirements. It would therefore be of great value to bring these advantages to bear on day-to-day tasks.
The motivation for cooperative manipulation comes from biological precedents such as ant colonies, bird flocks, and fish schools. The "two hands are better than one" analogy applies here: a single ant lacks the collective strength and manipulation capability needed to move a large piece of food, while a group can transport it together.

Figure 1.1 Object Transport: a) multiple ants transporting an object, b) distributed manipulation demonstration [31]

Similar to tasks performed by ants, such as transporting an object to a location (figure 1.1a), mobile robots can be used to perform meaningful tasks in human environments such as offices, homes or restaurants. The task of transporting an object with mobile robots [12] is shown in figure 1.1b. Researchers are taking interest in manipulative tasks, such as cutting, pushing and grasping, all of which could be accomplished by multi-robot systems.
One such application of a multi-robot system was demonstrated at the Interaction Lab at USC [16], where an auction-based task allocation system called MURDOCH was implemented and tested. The team deployed multiple robots in two task domains: loosely coupled task allocation and box pushing. Various tasks such as object tracking, sentry-duty, cleanup, and monitor-object were performed under the loosely coupled task allocation framework, and the system was tested in both domains. Figure 1.2 shows the loosely coupled long term scenario. Though there are numerous potential advantages to using a multi-robot system, there are challenges too. A poorly designed multi-robot system can be less effective than a well-designed single robot system.

Figure 1.2 - Overhead snapshot of the long term loosely coupled scenario. Center robot is
performing the cleanup task while the robot to its immediate left is performing the object
tracking task [16]

There may be interference between the robots, which can result in collisions or occlusions. More robots mean more sensors, and hence more sources of error. Other challenges include inter-robot communication and coordinated control of the robots.
A wide variety of techniques have been explored to coordinate the motions of individual
robots in a multi-robot system. While Khatib [28] used a decentralized control structure
for cooperative tasks with mobile manipulation systems with holonomic bases, others
have developed a framework known as behavior-based control [7] where robots are
controlled through the integration of a set of interacting behaviors in order to achieve

desired system-level behavior. Another strategy uses the master-slave approach [10]
where the slave robot controls its position relative to a master robot.
Over the past several years, students in Santa Clara University's Robotics Systems Lab
have focused on a new formation control approach termed Cluster Space Control. This
strategy conceptualizes the n-robot system as a single entity, a cluster [1], and the desired
motions are specified as cluster attributes, such as position, orientation, and geometry.
The cluster variables are related to the robot specific variables through kinematic
transforms. Tasks such as entrapping/escorting and patrolling around an autonomous
target have demonstrated use of this cluster space approach [17]. Experiments using a UWB-based multi-robot testbed have demonstrated the functionality of the proposed
obstacle avoidance approach [22]. Other applications include object tracking [35] and
adaptive navigation [36].

1.1 Object Manipulation


Transporting an object with mobile robots is a common research area in the field of
cooperative mobile robotics. This can be done by pushing or grasping the object. The
research on box pushing by a manipulator was initiated by Mason [2]. Later work developed rule-based control for the pushing operation, in which a box was pushed from one place to another by a single robot [8]. While these works describe object pushing by a single robot, researchers have also shown interest in multi-robot box pushing [3]. Pushing behaviors, as seen in figure 1.3, can be divided into four categories according to the pushing method and structure: force closure, form closure, conditional closure and object closure. Pusher-watcher [9], pusher-puller [18] and master-slave approaches [10] are some of the recent strategies that have
been used to push a box to a desired location. In contrast to a few robots doing the
pushing operation, an object can also be manipulated and acted upon by a swarm of
locally controlled robots [19].

Figure 1.3 Types of closures [18]: a) force closure, b) form closure, c) conditional closure, d) object closure

1.2 Object Manipulation Types


Overall, one factor that significantly distinguishes one strategy from another is whether
the robots make use of interaction force feedback information during their operation.

1.2.1 Object Manipulation without force sensing


Many researchers have successfully demonstrated object transport and/or manipulation
without force sensing. The risk of doing so, however, is in taking control actions without
regard for the magnitudes of interaction forces or torques. This can lead to unsafe
operation for both the robot and the object of interest. That said, researchers at USC have
used multiple robots via a pusher-watcher approach in which position information is
perceived by the watcher robot. This was accomplished with a group of Pioneer mobile robots using the auction-based task allocation facilities provided by the MURDOCH
control software [9].
Significant work has also been performed using marine robots. For example, an underwater box-pushing scenario was presented in which three autonomous robotic fish sense, plan and act on their own to move an elongated box. With one robot observing the box at the goal location and the other two pushing, the box was moved gradually towards the goal [24]. In comparison to the previously described pusher-watcher approach, which was heterogeneous in terms of sensing capability, the robotic fish are homogeneous. Other researchers have developed a coordination method for multiple biomimetic robotic fish to perform an underwater transport task. A limit cycle approach was applied to control the posture of the fish and to realize collision avoidance. In addition, a fuzzy logic method was adopted to control the transport orientation in the particular underwater environment [25].
Manipulation and transport of an object without any force control has also been shown in [32]. The cluster space control approach was implemented for a group of mobile robots working in a cooperative fashion. Experiments involving four non-holonomic Boe-Bot robots were conducted under a vision-based tracking system, and a rectangular object was transported and rotated.

1.2.2 Object Manipulation with Force sensing


Explicit force control allows for safer and more precise interaction between the robot and
its environment. Initial work on manipulator endpoint compliance was shown in [20], and
numerous control schemes such as hybrid force/position control, impedance force
control, and explicit force control have been demonstrated. Recent advances have been
made in humanoid robots developed at CMU [5,6] where grasping was the initial
motivation. In addition, Tan and Xi [21] presented an integrated sensing and control
framework for autonomous mobile manipulators pushing a nonholonomic cart. This can
be seen in Figure 1.4.

Figure 1.4 Mobile Manipulator Setup [21]

In [27], the control of trajectory and grasp for multiple robots was shown. The research focused on tasks requiring grasping, manipulating, and transporting large objects using mobile robots with manipulators. Using a stiffness controller, robust performance in carrying an object over a distance of 6 m was demonstrated, and a force controller was used to regulate the internal forces and moments on the box being manipulated. The stiffness control law was used to determine the forces and moments applied to the object and to achieve control with the desired diagonal stiffness matrix. Each robot was controlled independently while picking up a large object by grasping and transporting it cooperatively through the environment. The presence of orientation errors resulted in poor performance of the system.
A recent advance in object manipulation was shown by [23] where a strategy that allows
swarms of autonomous tugboats to cooperatively move a large object on water was
demonstrated. A tracking controller and force allocation strategy was presented.
Optimization based force/torque allocation was employed and compared against a
commutation based force/torque allocation strategy. The tugboats used to move the object
had actuators which were unidirectional and experienced saturation. Grasping was
dependent on the point of contact and the push directions.

The Jet Propulsion Laboratory (JPL) has demonstrated its work [26] in the Robotic Construction project, in which an object was transported using the grasping ability of multiple robots. The project showed multi-robot construction and assembly capabilities in simulated terrain. Force-torque feedback for velocity control was used to maintain the formations required for successful cooperative transport. CAMPOUT, a behavior-based control architecture, was used, and the experiments conducted demonstrated tasks such as Acquire Beam, Align at Structure, Place Beam, and End to End. Limited sensing due to power and mass constraints resulted in noisy data and inaccurate results. Force sensing was performed using a 3-axis force-torque sensor mounted on the base of the gripper.

1.3 Project Statement


The purpose of this research was to demonstrate sequential position and force control modes when using a multi-robot system to move an object. A primary task required to achieve this goal was the development of a sequencing operation through the use of a finite state machine. This resulted in the implementation of 'Engage', 'Maneuver' and 'Disengage' states in the controller to switch from position-based trajectory control of the cluster, to force-controlled object positioning, and then to timed velocity control of the cluster in order to move the robots away from the object. To simplify this work, the cluster was limited to two robots, and force-controlled object positioning was limited to a single degree of freedom.
Additional tasks accomplished included co-development of the force controller with
another research student, creation and integration of the force sensing capabilities with
the robotic test-bed, and the execution of experiments to verify the designed controller.
This work resulted in the successful demonstration of sequential force and position
control of Pioneer P3AT land robots in order to push an object from one location to
another. In doing so, it laid the groundwork for several more advanced multi-robot manipulation initiatives currently ongoing in the Robotics Systems Lab.

1.4 Reader's Guide


This thesis is divided into five chapters. The first chapter provides an introduction to
multi-robot systems, object manipulation and force sensing. The chapter discusses the
motivation for this research and provides the objective of the thesis. The second chapter
reviews the implementation of finite state machine capability within the cluster control
approach. The cluster space control is briefly mentioned here. The third chapter reviews
the test bed used to complete the project and how the tests were conducted. It describes
the hardware and provides a description of the multi-robot system. The fourth chapter
shows the results obtained in the tests performed by the pioneer robots. The fifth chapter
reviews the results of the thesis and provides suggestions for future work.

2.0 Cluster Space Control


As the name suggests, Cluster Space Control treats a group of mobile robots as a single cluster whose motion is directed by variables such as aggregate position, orientation and shape. Commands given to the cluster are automatically converted to individual robot commands through the use of kinematic transforms. This provides a simple interface for controlling the cluster.
This chapter reviews the cluster space control technique and discusses the closed loop
control architecture. Later parts of this chapter provide an introduction to 1-D force
control of the cluster and to the implementation of a finite state machine that switches
between control modes based on whether the robots are in contact with an object or not.

2.1 Introduction to Cluster Space Control


The cluster space layout consists of a global frame {G}, a cluster frame {C} and individual robot frames {1}, {2}, ..., {n}, as shown in figure 2.1.

Figure 2.1 Robot pose using conventional versus cluster space representations

A conventional "robot space" description of the system's pose states the position and orientation of each robot frame, {1}, {2}, ..., {n}, in the global frame. This leads to the robot space position vector r = [x1 y1 θ1 x2 y2 θ2 ... xn yn θn]^T.

To represent the system from the cluster perspective, {C} is defined with respect to the robots. Given this, the cluster pose is defined by the position and orientation of {C}, by a set of shape variables that capture the geometry of the group, and by the rotation of each individual robot with respect to {C}. This results in the cluster space pose vector c = [xc yc θc s1 ... sm φ1 ... φn]^T, where there are m shape variables and n robots.


Cluster space position variables are related to robot space variables through a set of kinematic transforms [30].


Similarly, the cluster space velocities are related to the robot space velocities through a set of Jacobian matrix transforms, which are derived by taking the partial derivatives of the cluster space pose variables with respect to the robot space pose variables [30].
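In generic form, these relationships can be summarized as follows; this is a brief restatement consistent with the cluster space formulation of [1] and [30], and the exact notation used there may differ:

c = KIN(r)                      (forward position kinematics)
r = KIN^-1(c)                   (inverse position kinematics)
c_dot = J(r) * r_dot,  where J(r) is the matrix of partial derivatives of the cluster variables with respect to the robot variables
r_dot = J^-1(c) * c_dot         (inverse velocity kinematics)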

With the transforms defined, the cluster-level velocity commands are transformed to
robot commands through the inverse Jacobian transform, and the robot-level velocity
commands are transformed to actuator commands through an inverse Jacobian transform.
Sensed robot positions and velocities are converted to cluster space positions and
velocities through the forward transforms. The sensed cluster space parameters are then
compared to the desired parameters allowing the controller to compute cluster velocity
commands. Figure 2.2 shows the basic cluster space control architecture for a mobile
multirobot system.

2.2 Description of a Two-robot Cluster


A cluster can be a group of two or more robots, but this thesis limits the cluster to two Pioneer P3AT robots. Figure 2.3 depicts the relevant robot and cluster frames for the planar two-robot cluster.
The robot space description of the two-robot system is r = [x1 y1 θ1 x2 y2 θ2]^T. Each robot has three degrees of freedom: two translations and one rotation. Thus, the system has a total of six DOF and therefore six variables to specify the position and orientation of the robots.


Figure 2.2 Cluster space control architecture for a mobile multirobot system

For this example, the cluster frame is attached at the midpoint of the cluster, with one of its unit vectors pointed towards robot 1. Cluster size is denoted by d, which is half the distance between the robots. As we have six variables that describe the robots in the system, we require six parameters to describe the configuration of the cluster from the cluster space perspective. The cluster space pose is represented by c = [xc yc θc d φ1 φ2]^T, where (xc, yc, θc) describes the position and orientation of {C}, d represents the size of the cluster, and φ1 and φ2 express the relative rotation of each robot with respect to the cluster frame.



Figure 2.3 Pose reference frames for the planar two-robot system

The forward position kinematic relationships for the 2 robot cluster are as follows [30]:
(2.1)
(2.2)
(2.3)
(2.4)
(2.5)


(2.6)
The inverse position kinematics for the 2 robot cluster are as follows [30]:
(2.7)
(2.8)
(2.9)
(2.10)
(2.11)
(2.12)

The Jacobian matrix for the cluster [30] is

where,


Similarly, the inverse Jacobian matrix is derived in [30].
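To make the two-robot case concrete, the sketch below implements one plausible version of the forward kinematics and the inverse-Jacobian velocity mapping in MATLAB. It assumes that the cluster heading θc is measured along the line from robot 2 to robot 1 and that φi = θi - θc; the conventions actually used in [30] and in equations (2.1) through (2.12) may differ, so this is an illustrative sketch with hypothetical function names rather than the thesis implementation.

function demo_cluster_kinematics
% Illustrative two-robot cluster kinematics (assumed conventions; see text).
% Robot space pose:   r = [x1 y1 th1 x2 y2 th2]'
% Cluster space pose: c = [xc yc thc d phi1 phi2]'
r = [1.0; 2.0; pi/2; -1.0; 0.0; pi/2];       % example robot poses
c = fwd_kin(r);                              % sensed cluster space pose
J = num_jacobian(@fwd_kin, r);               % Jacobian dc/dr, computed numerically
c_dot_cmd = [0.1; 0; 0; 0; 0; 0];            % command: translate the cluster in x
r_dot_cmd = J \ c_dot_cmd;                   % robot velocity commands via the inverse Jacobian
disp(c'); disp(r_dot_cmd');
end

function c = fwd_kin(r)
x1 = r(1); y1 = r(2); th1 = r(3);
x2 = r(4); y2 = r(5); th2 = r(6);
xc   = (x1 + x2)/2;                          % cluster position at the midpoint of the robots
yc   = (y1 + y2)/2;
thc  = atan2(y1 - y2, x1 - x2);              % heading along the robot2-to-robot1 line (assumed)
d    = 0.5*sqrt((x1 - x2)^2 + (y1 - y2)^2);  % half the distance between the robots
phi1 = th1 - thc;                            % robot headings relative to the cluster frame
phi2 = th2 - thc;
c = [xc; yc; thc; d; phi1; phi2];
end

function J = num_jacobian(f, r)
% Forward-difference Jacobian of f evaluated at r.
f0 = f(r); n = numel(r); J = zeros(numel(f0), n); h = 1e-6;
for k = 1:n
    rk = r; rk(k) = rk(k) + h;
    J(:, k) = (f(rk) - f0)/h;
end
end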

2.3 Force Control


A single degree of freedom controller is used for this project. An outer loop executes
position control, which computes a control force as a function of the position error of the
object. An inner loop for each robot attempts to control the interaction force between the
robot and the object in order to implement the object loop's commanded force. Given the
use of two robots and one degree of freedom control, each robot ideally exerts half the
control force on the object. For the experiments conducted in this research program, both
the object controller and individual robot force controller were implemented as simple
proportional controllers. The architecture is displayed in figure 2.4.
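A minimal sketch of this nested structure is shown below in MATLAB. The half-split of the commanded force and the proportional form of both loops follow the description above, but the gain values, the example measurements, and the direct mapping from force error to a velocity command along the push axis are illustrative assumptions, not the settings used in the thesis experiments.

% One iteration of the nested position/force controller (illustrative sketch).
Kp_obj   = 40;        % N per m of object position error (assumed value)
Kp_force = 0.005;     % m/s per N of force error (assumed value)

x_box_des  = 4.0;     % desired box position along the push axis [m]
x_box_meas = 3.2;     % measured box position from the UWB tags [m]
F1_meas    = 12.0;    % measured force on plate 1 [N]
F2_meas    = 14.0;    % measured force on plate 2 [N]

F_cmd  = Kp_obj * (x_box_des - x_box_meas);   % total force commanded by the outer object loop
F_half = F_cmd / 2;                           % each robot ideally exerts half the control force

v1_cmd = Kp_force * (F_half - F1_meas);       % robot 1 velocity command along the push axis
v2_cmd = Kp_force * (F_half - F2_meas);       % robot 2 velocity command along the push axis

fprintf('F_cmd = %.1f N, v1 = %.3f m/s, v2 = %.3f m/s\n', F_cmd, v1_cmd, v2_cmd);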

2.4 Use of State Machine


A finite-state machine is a model of how the state of a system evolves over time, given the current state and external inputs. The machine has a finite number of states, with only one state active at a time. The active state changes when a triggering event or condition, known as a transition, occurs.
In this thesis, we use a state machine to transition between states that invoke different robot controllers: a position-based trajectory controller, a force-controlled object repositioning controller, and an open-loop velocity controller.


(Block diagram: the desired and measured box positions feed an object position controller whose output force is halved into desired forces F/2 for force controller 1 and force controller 2; each force controller compares its desired force with the measured force from its sensor and issues velocity commands to robot 1 or robot 2, which apply forces F1 and F2 to the object.)

Figure 2.4 Object repositioning force controller

Figure 2.5 shows a state transition diagram depicting the switching of the control states. Figure 2.6 shows the implementation of the state machine in Matlab, and the detail of a typical Stateflow block is shown in figure 2.7. For object manipulation, a
typical control sequence consists of first using a cluster space trajectory controller to
move a two-robot cluster from an arbitrary start position to an end position where the box
is located.
As soon as the box is contacted and pushed by the two robots, a state transition takes place and force-controlled object repositioning is initiated. This state remains active until the box reaches the desired position, after which cluster space velocity control takes over, backing the robots away from the box.

(State transition diagram: Start -> Cluster Space Trajectory Control; when F1 > 20 N and F2 > 20 N -> Force Control; when the position error Ex falls to zero -> Cluster Space Velocity Control; after 100 seconds -> End.)

Figure 2.5 Basic State Transition Diagram
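The logic of this diagram can be sketched as a simple switch statement, as below. The 20 N contact threshold and the 100 second backup timer are taken from the diagram; the simulated telemetry and the position-error tolerance are invented purely so the script runs stand-alone, and the actual implementation used the Matlab Stateflow blocks shown in figures 2.6 and 2.7.

% Illustrative Engage / Maneuver / Disengage state machine (sketch only).
ENGAGE = 1; MANEUVER = 2; DISENGAGE = 3; DONE = 0;   % state codes match "status" in chapter 4
state = ENGAGE;
dt = 1; t = 0; t_disengage = NaN;
F1 = 0; F2 = 0; ex = 4;              % fake telemetry: plate forces [N], box position error [m]

while state ~= DONE
    % --- fake telemetry update so the script runs stand-alone ---
    if state == ENGAGE,   F1 = F1 + 2; F2 = F2 + 2; end   % contact forces build as robots engage
    if state == MANEUVER, ex = ex - 0.2;            end   % box approaches its goal while pushed
    t = t + dt;

    % --- state transitions, mirroring the state transition diagram ---
    switch state
        case ENGAGE                      % position-based trajectory control
            if F1 > 20 && F2 > 20
                state = MANEUVER;
            end
        case MANEUVER                    % force-controlled object repositioning
            if abs(ex) < 0.05            % position-error tolerance (assumed value)
                state = DISENGAGE; t_disengage = t;
            end
        case DISENGAGE                   % timed open-loop velocity control (back up)
            if t - t_disengage >= 100    % 100 s backup window per figure 2.5
                state = DONE;
            end
    end
end
fprintf('Sequence finished at t = %d s\n', t);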


Figure 2.6 Layout of Finite State Machine in Matlab


Figure 2.7 State Machine details in Matlab


3.0 Experimental Testbed


In order to conduct the experiments described in this thesis, two Pioneer P3-AT robots were used to manipulate a loaded cardboard box weighing approximately 27 pounds, as shown in figure 3.1. The robots were configured with force sensors, tracked through the use of an ultra-wideband (UWB) tracking system, and wirelessly controlled by a central control computer running control software written in Matlab/Simulink. The testbed uses part of an electronics and software architecture previously developed by researchers and students of the Robotics Systems Laboratory at Santa Clara University.

(Diagram: UWB receivers and a reference tag arranged around the work area, connected through a switch/network hub to the base station.)

Figure 3.1 Testbed Layout


3.1 Robots
In order to conduct the experiments described in this thesis, Pioneer P3-ATs from MobileRobots, shown in figure 3.2, were operated using an ultra-wideband (UWB) tracking system. The testbed shares a common electronics architecture that includes all
on-board microcontroller and communication components for each robot in the cluster.
The electronics hardware present on the robots consists of a pair of BasicX
microcontroller boards which accept drive commands in order to run the robots. Drive
commands are wirelessly received from the off-board control computer using a Ricochet
modem.

Figure 3.2 Pioneer P3-AT robots

Forces are sensed using a Vernier Force Plate, shown in figure 3.3. These plates have a range of -200 to +850 N (where a positive value indicates compression) and are used to measure the force that the robots exert on the object being manipulated. The output signals from the sensors are read by an ATmega328-based Arduino UNO microcontroller, which processes and filters the data. This data is transmitted wirelessly using a 1 mW Digi International XBee Series 1 radio (802.15.4 protocol).


Figure 3.3 Force sensor

(Wiring diagram: 5 VDC supply to the force plate, sensor output to an Arduino analog pin, wireless transmission of the readings, and reception of the signals at the far end.)

Figure 3.4 Force sensor connections

The force sensor requires calibration, and so the relationship between the analog read
counts and generated forces sensed by the force plate is determined experimentally.


Figure 3.5 shows the relationship between the force applied to the force plate, as measured by a force gauge, and the corresponding analog read counts.

(Plot: number of analog read counts, approximately 150 to 350, versus applied force, approximately 1 to 12 N, for force plate 1 and force plate 2.)

Figure 3.5 Calibration curves of the force sensors

Using this data, the following relationships were formulated:

Force_1 = totAverage*11.04 - 192.07
Force_2 = totAverage*11.0 - 251.2
where totAverage is the average of 250 data points.
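To illustrate how such a linear calibration can be obtained and applied, the MATLAB sketch below fits force as a linear function of averaged counts using polyfit. The calibration pairs and the example reading in the script are hypothetical placeholders rather than the measured data behind figure 3.5, so the resulting coefficients will not match the relationships above.

% Obtain and apply a linear counts-to-force calibration (hypothetical data only).
counts_avg = [190 212 233 255 276 298 320];   % hypothetical averaged analog counts
force_N    = [  1   3   5   7   9  11  13];   % hypothetical reference forces from a force gauge [N]

p = polyfit(counts_avg, force_N, 1);          % linear fit: force = p(1)*counts + p(2)
fprintf('force = %.4f*counts + %.2f (N)\n', p(1), p(2));

totAverage = 260;                             % hypothetical average of 250 raw samples
force_est  = polyval(p, totAverage);          % estimated force for that averaged reading
fprintf('estimated force = %.1f N\n', force_est);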

Figure 3.6 Object to be manipulated


3.2 Navigation-UWB Tracking


The Sapphire DART Ultra Wideband Digital Active Real Time Tracking system is used
to determine the position and heading of the two robots and the box. The system includes
a processing hub, four or more UWB receivers, two reference tags, and multiple tags for
individual assets. Each Sapphire DART tag repeatedly sends out a packet burst consisting
of a short UWB pulsetrain, and these pulsetrains are received by the Sapphire DART
UWB receivers located at the periphery of the work area. Two such tags are placed on each robot and on the box. The location of each tag is determined by an algorithm executed in the processing hub and is filtered to obtain (x, y) locations within the workspace. The heading of each body is determined by calculating the slope of the line between its two tags.
All six tags operate at 25 Hz. The components used in this system are shown in figure
3.7.
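As an illustration of the heading calculation, the short MATLAB sketch below computes a heading angle from the (x, y) positions of a body's two tags using atan2, which avoids the divide-by-zero problem of a raw slope. The tag coordinates and the assumption that the front tag is listed first are placeholders for illustration, not values from the testbed.

% Heading of one robot from its two UWB tag positions (illustrative sketch).
tag_front = [3.20, 1.45];        % hypothetical (x, y) of the front tag [m]
tag_rear  = [2.90, 1.10];        % hypothetical (x, y) of the rear tag  [m]

dx = tag_front(1) - tag_rear(1);
dy = tag_front(2) - tag_rear(2);
heading = atan2(dy, dx);         % heading in radians, measured from the global x-axis

fprintf('Heading = %.1f deg\n', rad2deg(heading));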

Figure 3.7 Components: a) Sapphire DART T651 1x1 UWB tag, b) Sapphire DART receiver, c) Sapphire DART (Model H651) processing hub

3.3 Data Handling and Control


The off-board control system consists of a network of four computers that 1) receives force data from each robot via the wireless XBee link, 2) processes UWB position data, 3) computes control directives using Matlab, and 4) sends drive commands to each robot via the wireless Ricochet links. The architecture is shown in figure 3.8.
Data flows between applications on each computer using the RBNB Data Turbine, a
robust real-time streaming data server that handles all the data management operations
between data source and sink applications.
In total there are four computers used as the base station for command handling and
telemetry which are connected to the Switch or the network HUB as shown in figure 3.8.
Computer 1 receives position data of the tags from the UWB processing hub and posts
the data to the Data Turbine. The computer also has the ability to display the object/tag locations using the native UWB system software.
Computer 2 receives the force data from the receiving XBee through the Remote Node
Server application and posts it to the Data Turbine. This requires that a valid COM port
be recognized by the system. The Remote Node Server is an application that provides
data flow to and from a robotic system through a computer serial port which makes use of
the DataTurbine as an interface to the hosts.


(Block diagram: RFID tags, RFID receivers, UWB hub, BasicX stack with XBee force-data link, Remote Node Server, Simulink model, Ricochet modem, and Computers 1-4 connected through a switch/network hub.)

Figure 3.8 Communication in the system

Computer 3 runs the Simulink-based controller, integrating the force and position data from the Data Turbine and posting drive commands back to a Data Turbine channel.
Computer 4 is connected to the Ricochet Modem and handles the sending of drive
commands to the pioneer robots.


4.0 Experimental Results


The performance of the sequential position/force controller was characterized by running
a set of experiments. These tests involved the use of an initial position based trajectory
controller to reach the box, the force controlled one degree of freedom repositioning of
the box, and the backing up of the robots. This chapter reviews the data collected from
performing a set of experiments.

4.1 Force Control Characteristics


Prior to designing the controllers, testing was performed to characterize the friction present when moving the target object (a box) across the floor. The friction environment was generally assumed to be of the form:
F_friction = -F_coulomb - b_viscous * V
After considerable testing with a variety of boxes of varying mass and across different surfaces, it was seen that the friction force was largely independent of speed; Coulomb friction dominated the system. Figure 4.1 shows a series of force plots generated when pushing the object across the floor at different speeds. As can be seen, for this particular case with a box weighing 27 pounds, the frictional force was approximately 45 N.
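A simple way to check this conclusion is to fit the measured push force against speed, as sketched below in MATLAB: a near-zero slope with an intercept near 45 N indicates Coulomb-dominated friction. The speeds come from the figure 4.1 captions, but the force values are hypothetical stand-ins for the recorded data.

% Check whether friction is Coulomb-dominated: F_push = F_coulomb + b*V (approximately).
V = [0.45 1.5 2.2];          % commanded speeds [m/s], per the figure 4.1 captions
F = [44.8 45.3 45.1];        % hypothetical steady-state push forces [N]

p = polyfit(V, F, 1);        % p(1) = viscous coefficient b, p(2) = Coulomb force estimate
fprintf('b = %.2f N*s/m, F_coulomb = %.1f N\n', p(1), p(2));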

Figure 4.1 Force plots while applying constant velocity commands to the robots: a) velocity = 2.2 m/s, b) velocity = 1.5 m/s, c) velocity = 0.45 m/s

Figure 4.2 shows the behavior of the force plate of a single robot pushing the cardboard box in a single dimension, where the blue signal is the desired force and the green signal is the measured force. This behavior demonstrates closed-loop force control.


4.2 Sequential Control


Figure 4.3 shows an overhead view of the cluster's motion for all three states over the course of an experiment. The green line shows the actual trajectory of the cluster during state 1.

Figure 4.2 Closed loop Force Control of a robot pushing a box

This changes to a red line during state 2, when the robots push the box until it reaches the desired location. After the robots successfully transport the box, the cluster backs up, as depicted by the cyan line.
Figure 4.4 shows the force readings from both force plates for all three states over the course of an experiment. The green lines show the measured forces and the blue lines show the desired forces.


Figure 4.3 Overall cluster position showing change of state


Figure 4.4 Measured and desired forces of force plate 1 and force plate 2

4.2.1 Position based Trajectory control (Engage Mode)


This state of the experiment involved position-based trajectory control of the cluster reaching the box. The control is performed in such a way that the robots approach the side of the box to be pushed in a perpendicular direction. The cluster follows a trajectory generated from via points, which are in turn computed from the initial cluster position/orientation and the box position/orientation. Figure 4.5 shows the desired (green) and measured (blue) trajectories of the cluster.
The trajectory error over time is plotted in figure 4.6. The cluster size d over the course of the experiment is shown in figure 4.7.


Figure 4.5 Measured and the desired cluster trajectories

Figure 4.6 Data error plot over time


Figure 4.7 Size of cluster

4.2.2 Force control (Maneuver Mode)


Force-controlled object repositioning is shown in figure 4.8, where (a) and (b) show the force control of the robots pushing the box in one dimension; each plot comprises the desired and measured force values. Figure 4.8(c) shows the object position control, where the plot comprises the desired and measured position of the box being pushed by the robots. The desired position of the box in figure 4.8 is 4 m.

Figure 4.8 a) Force 1 control, b) Force 2 control, c) Object position control



4.2.3 Velocity control (Disengage Mode)


The Disengage mode is shown in figure 4.9. The green plot shows the velocity commands applied to the cluster, and the blue plot shows the y coordinate of the cluster during the disengage state of an experiment. Open-loop velocity control was used for this mode, with a ramp input from 100 to 0 PWM counts applied to the cluster.

Figure 4.9: Velocity Control for Disengage mode


4.3 State Machine


The switching capability of the controllers is demonstrated in figure 4.10, which shows the status of the controllers. A value of status = 1 indicates the system is in the initial engage state, in which the robots reach the box under position-based trajectory control. Status = 2 indicates the maneuver state, in which the robots push and transport the box to the desired position in 1-D using force-controlled object repositioning. Status = 3 indicates the disengage state, in which the robots back up under velocity control after successfully transporting the box to the desired location.

Figure 4.10 Change of state and object position control

In figure 4.10(a) we see that state 1 (the engage mode) is active from time = 0 to 10500, when the robots approach the box. Then state 2 (maneuver mode) is active from time = 10501 to 16000, while the robots are pushing the box. Finally, state 3 (disengage mode) is active after time = 16000, when the robots simply back up after transporting the box to the desired location. During the engage state, the box remains unmoved and stays at its initial position until the robots begin pushing it. During the maneuver state, the robots push the box and transport it to the desired location. Finally, during the disengage state, the box stays at the desired location as the robots back away from it.

Figure 4.11 Maneuver mode in active condition

Figure 4.11 shows the state machine in operation, with state 2 (the maneuver mode) active while the other two states are inactive.


4.4 Testing Summary


The tests were performed successfully and demonstrated the switching capability between the different controllers, i.e. the position, force and velocity controllers. The position-based trajectory controller brought the robot cluster along a desired trajectory to a position perpendicular to the box, prepared to push it. The force-controlled object repositioning controller demonstrated the successful transport of the box to a desired location in 1-D while controlling the force applied to the cardboard box through each robot's force plate. Finally, the robots demonstrated their ability to back away after transporting the box to the desired location.


5.0 Conclusion
The objective of this research project was to implement and demonstrate a sequential position and force controller using land-based multi-robot systems to transport an object from one location to another. This included the development of a single degree of freedom force controller and a finite state machine mechanism for switching between the controllers. These were integrated into the existing cluster space controller being developed by researchers at the RSL.
The research successfully demonstrated the sequencing operation through the use of a finite state machine, implemented as 'Engage', 'Maneuver' and 'Disengage' states in the controller. This resulted in switching from position-based trajectory control, to one degree of freedom force-controlled repositioning of the object, and then to velocity-controlled backing up of the cluster. Using this configuration, force sensing capabilities were successfully created and integrated with the robotic testbed, and force and position manipulation with the Pioneer P3AT land robots was successfully performed.

5.1 Future Work


Future plans for the object manipulation work include making the system more robust. This can be done by improving the system model for the Pioneer land robots pushing the box; with an improved model, the control process can better handle Coulomb and viscous frictional disturbances. Other improvements could involve exploring different mechanical configurations, such as varying the objects to be manipulated or trying other sensors. More thought could also be put into configuring the force plate sensor so that it provides more accurate results.
Force-controlled object repositioning, which focuses on the manipulation of an object, has a great deal of potential. Future work might include force manipulation in multiple degrees of freedom, a challenge currently being studied by RSL student M. Chin. A hybrid force and position controller for a multi-robot system could also be implemented,

another challenge currently being studied by M. Neumann. Other work may include modifications to the existing cluster definition and its extension to include more robots, creating a more robust system.
Currently, the research testbed is implemented using multiple computers. Multiple computers were used in order to provide modular functionality within the testbed, given that the system interfaces with six ultra-wideband tags, force data from two sensors, and velocity commands to two robots. To make the system simpler for a user, a single computer could be used instead.


References

[1] C. Kitts, I. Mas, "Cluster Space Specification and Control of Mobile Multirobot
System", IEEE/ASME Transactions on Mechatronics, Vol. 14 No. 2, Nov. 2009, P. 207218
[2] M.T. Mason, "Mechanics and Planning of Manipulator Pushing Operations", Int. J. of
Robotics Research, Vol. 5, No. 3, 1986, P. 53-71
[3] G.A.S. Pereira, V. Kumar, M.F.M. Campos, "Decentralized Algorithms for Multi-Robot Manipulation via Caging", The International Journal of Robotics Research, 2004,
P. 783-795
[5] M. Kazemi, J.S. Valois, J.A. Bagnell, N. Pollard, "Robust Object Grasping using
Force Compliant Motion Primitives", Proceedings of Robotics: Science and Systems,
July 2012
[6] B.J. Stephens, C.G. Atkeson, "Dynamic Balance Force Control for Compliant Humanoid Robots", IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2010, P. 1248-1255
[7] M. J Mataric, "Behavior based control: Examples from navigation, learning and group
behavior," Journal of Experimental & Theoretical Artificial Intelligence, Vol. 9, Issue 23, 1997
[8] S. Takagi, Y. Okawa, "Rule Based Control of Mobile Robot for the Push-a-Box
Operation", IEEE/RSJ International Workshop on Intelligent Robots and Systems, Nov.
1991, P. 1338-1343
[9] B.P. Gerkey, M.J. Mataric, "Pusher-watcher: An approach to fault-tolerant tightly-coupled robot coordination", International Conference on Robotics and Automation, Vol.
1, 2002, P. 464-469


[10] M. Nemrava, P. Cermak, "Solving the Box-Pushing Problem by Master Slave

Robots", Journal of Automation, April 2008, P. 32-37


[12] J. Spletzer, A. K. Das, R. Fierro, C. J. Taylor, V. Kumar, and J. P. Ostrowski,
"Cooperative localization and control for multi-robot manipulation," in Proc. 2001
IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2, Oct. 29 - Nov. 3, 2001, P. 631-636

[16] B.P. Gerkey, M. J. Mataric, "Sold! Auction Methods for Multirobot Coordination",
IEEE Transactions on Robotics and Automation, Vol. 18, No. 5, October 2002, P. 758-768
[17] I. Mas, S. Li, J. Acain and C. Kitts, "Entrapment/Escorting and Patrolling Missions
in Multi-Robot Cluster space control", IEEE/RSJ International Conference on Intelligent
Robots and Systems, October 2009, P. 5855-5861
[18] G. Eoh, J.D. Jeon, J. S. Choi, B. H. Lee, "Multi-robot Cooperative Formation for
Overweight object Transportation", IEEE/SICE International Symposium, 2011, P. 726-731
[19] C. R Kube and H. Zhang, "The use of perceptual cues in multi-robot box pushing",
Proc.1996 IEEE International Conference on Robotics and Automation, Vol. 3, 1996, P.
2085-2090
[20] D.E Whitney, "Historical Perspective and State of the Art in Robot Force Control",
International Journal of Robotics Research, Vol. 6, No. 1, 1987, P. 3-14
[21] J. Tan, N. Xi, "Integrated Sensing and Control of Mobile Manipulator", Proceedings
of the 2001 IEEE/RSJ, International Conference on Intelligent Robots and Systems, Vol.
2, 2001, P. 865-870
[22] C. Kitts, K. Stanhouse, P. Chindaphorn, "Cluster space collision avoidance for
mobile two-robot systems", IEEE/RSJ International Conference on Intelligent Robots and
Systems, October 2009, P. 1941-1948


[23] J. Esposito, M. Feemster, E. Smith "Cooperative manipulation on the water using a


swarm of autonomous tugboats", IEEE International Conference on Robotics and
Automation, May 2008, P. 1501-1506
[24] Y. Hu, L. Wang, J. Liang, T. Wang, "Cooperative box-pushing with multiple
autonomous robotic fish in underwater environment", IET Control Theory and
Applications, Vol. 5, No. 17, 2011, P. 2015-2022
[25] D. Zhang, L. Wang, J. Yu, M. Tan, "Coordinated Transport by Multiple Biomimetic Robotic Fish in Underwater Environment", IEEE Transactions on
Control Systems Technology, Vol. 15, No. 4, July 2007, P. 658-671
[26] A. Stroupe, A. Okon, M. Robinson, T. Huntsberger, H. Aghazarian, E. Baumgartner, "Sustainable cooperative robotic technologies for human outpost infrastructure
construction and maintenance", Springer Science + Business Media, LLC, April 2006, P.
113-123
[27] T. G. Sugar, V. Kumar, "Control of cooperative mobile manipulators", IEEE Transactions on Robotics and Automation, Vol. 18, No. 1, Feb. 2002, P. 94-103
[28] O. Khatib, K. Yokoi, K. Chang, D. Ruspini, R. Holmberg and A. Casal,
"Coordination and Decentralized Cooperation of Multiple Mobile Manipulators", Journal
of Robotics Systems, Vol. 13, Issue 11, 1996, P. 755-764
[30] R. Ishizu, "The design, simulation and implementation of multi-robot collaborative
control from the cluster perspective," M.S. thesis, Dept. Electr. Eng., Santa Clara Univ.,
Santa Clara, CA, Dec. 2005
[31] A. K. Das, R. Fierro, V. Kumar, J. P. Ostrowski, J. Spletzer, C. J. Taylor, " A
Vision-Based Formation Control Framework", IEEE Transactions on Robotics and
Automation, Vol. 18, No. 5, 2002, P. 813-825
[32] I. Mas, C. Kitts, "Object Manipulation Using Cooperative Mobile Multi-Robot
System", Proceedings of the World Congress on Engineering and Computer Science
2012 Vol I WCECS 2012, October 24-26, 2012

[35] S. Dayanidhi, "Target Tracking Using Mobile Robotic Stations", M.S. Thesis (Adv. C. Kitts), Dept. of Mechanical Engineering, Santa Clara University, Santa Clara, CA, 2013
[36] T. Adamek, C. Kitts, I. Mas, "Cluster Space Gradient Based Navigation for Mobile Multi-robot Systems", IEEE/ASME Transactions on Mechatronics, 2014, in press


Appendix A: Simulink Model for the overall system


Appendix B: m file for trajectory generation


% traj_v4: takes via points as desired positions and orientations and forms a
% piecewise-constant cluster space trajectory.
% Input:  u = [time, x_f, y_f, thetaC_f, d, phi1, phi2]
% Output: y = cluster space setpoint [x, y, d, theta_c, phi1, phi2]
function y = traj_v4(u)
time=u(1);      % elapsed time
x_f=u(2);       % final cluster x position
y_f=u(3);       % final cluster y position
thetaC_f=u(4);  % final cluster orientation
d=u(5);         % cluster size
phi1=u(6);      % robot 1 heading relative to the cluster
phi2=u(7);      % robot 2 heading relative to the cluster
% % initial position
x_0=-2.5;
y_0=-8;
pos_0=[x_0 y_0 d pi/2 0 0];
% % % % final position
final_pos=[x_f,y_f,d,thetaC_f,0,0];
% % via point 1
x_1=(3*x_0 + x_f)/4;
y_1=(3*y_0 + y_f)/4;
theta_1=3.14-atan((y_1-y_0)/(x_1-x_0));
pos_1=[x_1 y_1 d theta_1 phi1 phi2];
% % via point 2
x_2=(3*x_0 + x_f)/4;
y_2=(y_0+y_f)/2;
theta_2=3.14-atan((y_2-y_1)/(x_2-x_1));
pos_2=[x_2 y_2 d theta_2 phi1 phi2];
% % via point 3
x_3=(x_0+x_f)/2;
y_3=(y_0+y_f)/2;
theta_3=3.14-atan((y_3-y_2)/(x_3-x_2));
pos_3=[x_3 y_3 d theta_3 phi1 phi2];
% % via point 4
x_4=(x_0+3*x_f)/4;
y_4=(y_0+y_f)/2;
% theta_4=pi;
theta_4=3.14-atan((y_4-y_3)/(x_4-x_3));


pos_4=[x_4 y_4 d theta_4 phi1 phi2];


%%
if(time>=0 && time<400)
y=pos_0;

elseif(time>=400 && time<800)


y=pos_1;
%%
%%
elseif(time>=800 && time<1200)
y=pos_2;
%%
elseif(time>=1200 && time<1600)
y=pos_3;
%%
elseif(time>=1600 && time<2000)
y=pos_4;

else
y=[x_f+2,y_f,d,thetaC_f,phi1,phi2];
end
end


Appendix C: Specifications of Vernier Force plate


Appendix D: Stateflow block details


State block

Simulink Function block

