Robotic Symbionts
Interweaving Human and Machine Actions
Close integration of humans and machines has been investigated by researchers and artists for
generations. One of the earliest studies was by the English monk and scientist Roger Bacon, who
in the 13th century imagined various technologies later realized by inventors such as Leonardo
da Vinci: “flying machines may be constructed so that a man may sit in the midst of the ma-
chine”; “a machine of small size may be made for raising and lowering weights of almost infinite
amount”; and “machines may also be made for going in sea or river down to the bed without
bodily danger.”
This vision has been realized in numerous artworks and engineering research projects over the past decades.
Robotic prostheses and exoskeletons are widely used to overcome disability or enhance physical
performance by supporting existing human limbs or replacing lost ones. An alternative and more
progressive view of human–machine integration was presented by the artist Stelarc in his work
Third Hand (1980), a body-worn robotic arm giving the artist an additional limb controlled by
muscles in his abdomen and legs.1 The idea has recently gained attention in academia, with research exploring supernumerary robotic (SR) limbs.2–5
Figure 1. Examples of supernumerary robotic (SR) limbs: (a) Stelarc’s Third Hand, (b) SR arms on
the shoulder, (c) SR fingers, (d) SR drumming system.
This progression toward a more heterogeneous human–machine hybrid makes the coupling, or
interaction, between humans and robots increasingly dynamic. In his visionary paper “Man–
Computer Symbiosis,” J.C.R. Licklider explained its difference from “a mechanical extension”
in which human operators are there “more to help than to be helped.”6 He argued that computers
can take a larger role, participating in real-time action planning while humans and computers
take care of separate functions in a symbiotic collaboration. A relevant inspiration can be found
in science fiction. The hyper-intelligent race car Asurada in the series Cyber Formula (1991–
2000) autonomously executes a range of driving actions. One such maneuver, the “lifting turn,” was discovered through an accident: the driver’s reckless cornering attempt made the car hit a roadblock, lifting it off the ground, and the car autonomously decided to rebalance itself using fans, ultimately achieving an extremely sharp turn. In the maneuver, the driver handles the wheel and pedals while the car autonomously controls propellers and spoilers to steer through seemingly impossible turns.
The story depicts an instance of exemplary and fluid cooperation between human and robot. In practice, such a system must be designed in an application-specific manner, which requires understanding both the possible ways a robot could support manipulation tasks and how control is shared between humans and machines. This article addresses how, in such dyadic
configurations, humans and robots can coordinate their actions in terms of two aspects: type of
support and degree of control.7 We introduce two case studies with working prototypes that in-
vestigate these aspects, and derive a framework to better understand this relatively new space of
research. We also survey related works in both research and art, and situate those works in our
framework.
These systems are designed to augment manipulation capabilities by enabling users to maneuver
in a higher-dimensional action space.
These developments have led to human–machine hybrid systems with increasing complexity and
corresponding design problems that blend robotic control and human–robot teaming. From a
control point of view, such robotic systems necessitate a means to carry out higher-dimensional
control with lower-dimensional control input.15 From a teaming point of view, for humans and
robots to act independently to achieve a set of related goals,16 the coordination and role division between the human and robotic actors need to be worked out. We showcase two of our prototypes
as case studies and dissect the aforementioned related works in order to identify a framework to
address the challenge. In the course of the discussion, we also introduce strategies used in robotic telepresence and smart hand-tool research that hint at their potential use in the systems we discuss in this article.
Figure 2. Robotic symbiont prototypes. (a) Shape-changing SR wearable device. (b) Modular
robotic platform. Different modules can be daisy chained, including a variety of fingertip sensors or
end effectors. (c) Soft SR fingers fabricated using a standard molding-casting process.
Later iterations of the project employed different mechanical designs to further study applications that require different robotic structures, motions, or end effectors. The second version of the
system18 consists of robotic/sensor modules that can be plugged into each other to create a wide
range of robotic augmentation circuitries (Figure 2b). In comparison with the initial version that
had a fixed number of motors and fingers, this version aims to provide an engineering solution
that further accommodates different shapes or end-effector functionalities. Another variant19 that
was built later consists of soft SR fingers (Figure 2c). Using a standard molding-casting process,
we aimed to demonstrate how we can standardize the fabrication process for potentially more
nuanced morphologies. Soft robots also have several benefits over rigid mechanisms: they are lightweight and compliant, and a single cast shape can undergo higher-dimensional actuation than the motor modules used in the previous versions. Thanks to the smaller form factor and range of motion, detailed actions within the hand become possible, such as interacting with a smartphone touchscreen.
Robotic applications derived from this project fall into three main categories (Figure 3), which
are adapted from the classification of bimanual tasks. It is known that we engage our hands in a
task through either a symmetric or an asymmetric role division,20 where, in the asymmetric case, the dominant hand adopts the more explorative or manipulative role. Similarly, robots can either take a homologous role to the
human hand or an asymmetric role that complements the human hand. One new insight is that
unlike the clear preference for the dominant hand in bimanual tasks, robots can have specialized
action or sensing abilities. Therefore, robots can take more independent and active roles. Table 1 summarizes these three categories of support, giving examples from the research literature, as well as an additional fourth category in which robots are possessed and controlled directly by their human counterparts.
Table 1. Types of support by robotic augmentations with example works for each category.
The first category, namely synchronous action, is well illustrated by SR fingers that are con-
trolled by a mapping between robotic motions and human finger motions.4 Faye Wu and Harry
Asada developed a synergy-based control scheme to allow both human and robotic fingers to fol-
low the same motion paths (synchronous action). Their later iteration used elbow gestures to
lock the robotic fingers to hold an object5 so that the robot supports a user performing dexterous
actions on the object. Shoulder-mounted SR arms2 are designed to offer secondary support to
their wearer, enabling the user to accomplish assembly jobs that would normally require two
people (passive assistance).
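A minimal sketch of the kind of posture mapping behind such synchronous action, with hypothetical hard-coded synergy matrices standing in for ones that would in practice be learned from recorded grasp postures; this illustrates the idea of a shared low-dimensional synergy space, not Wu and Asada's actual controller:

```python
# Hypothetical synergy matrices; in a real system these would be
# learned from recorded grasp postures (e.g., via PCA), not hard-coded.
HUMAN_TO_SYNERGY = [[0.5, 0.5, 0.0],
                    [0.0, 0.5, 0.5]]   # 3 human joint angles -> 2 synergies
SYNERGY_TO_ROBOT = [[1.0, 0.0],
                    [0.0, 1.0],
                    [0.5, 0.5]]        # 2 synergies -> 3 robot joint angles

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def robot_posture(human_joints):
    """Map human finger joint angles to robotic finger joint angles
    through a shared low-dimensional synergy space, so both sets of
    fingers follow coordinated motion paths."""
    synergies = matvec(HUMAN_TO_SYNERGY, human_joints)
    return matvec(SYNERGY_TO_ROBOT, synergies)
```

The appeal of the synergy space is that a few coefficients drive many robotic joints, keeping the user's control burden low.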
There are limited studies of the third category, in which the robot takes the initiative to act, but one example is the three-arm drumming system.13 In this project, the robot’s role belongs to different categories of the framework of Table 1 depending on how it fits into the music. Among the many ways the robot could play the drum, the researchers showcased generating polyrhythm patterns instead of safely blending into the music. The robot comes to the foreground and actively contributes to the music in a way a human drummer might struggle to do. The last category describes a distinct case of using a robot to carry out a human limb’s actions at a different displacement, scale, or complexity. MetaLimbs12 presents SR arms controlled by the feet, transposing lower-body actions to a more relevant, upper-body task space. This category is closely related to teleoperation, but with a research focus on amplifying actions in the user’s space rather than in remote locations.
Figure 4. A Flying Pantograph system: (a) drawings on a tabletop canvas being transposed to a
vertical wall, (b) using other media such as spray paint, and (c) experimenting with algorithmic
constraints of the quadrotor’s movement.
Artist Sougwen Chung, after trying out the system, observed that the patterns of lines drawn re-
semble ones that her drawing machine D.O.U.G.21 creates. We also experimented with the spray-
paint version (Figure 4b) of our system with artist Rochelle Haley. She observed that the inherent movement of our system creates a pattern she also sees in natural movements, such as those of dancers or animals. Her usual work transposes the unique patterns of dance choreographies
onto canvas, whereas in this collaboration the robotic system becomes a critical stylistic compo-
nent in her expression. There is a degree to which an artist has to experiment and learn the be-
havior of the quadrotor, which can sometimes be suggestive or dismissive to the artist. In other
words, instead of mechanically extending a human artist and trivializing the task, the system’s
motion dynamics and control algorithms form a dialogue with the artist, resulting in a distinct
style. This discovery led to experiments on how we can employ the noise (or variations) that comes from the quadrotor in different ways (Figure 5). By guiding the quadrotor at different speeds and rates of turn, a user can range from strokes whose wiggles are fully suppressed by fast, sharp turns to strokes that deliberately incorporate the drone’s noise through slow and steady maneuvers.
Figure 5. The degree to which the motion of the drone in A Flying Pantograph contributes to the
final art can be controlled. Faster strokes with quick turns will fully suppress the wiggly lines made
by the drone, while slower maneuvers can be used to fully incorporate them.
A Flying Pantograph focused on how to utilize a creative medium that has a programmatic buffer or noise, but it also explored control decisions in a robotic augmentation system. In our installations, gross movements were made by a user, while parts of the quadrotor’s movement were generated as a result of the combination of the user’s intention and external factors. The system also offers autonomous stabilization and hazard avoidance during a drawing maneuver, which is critical since the quadrotor must continuously apply pressure to the canvas without crashing. Closed-loop control of the quadrotor’s tilt angle helps maintain constant pressure, and the system automatically retreats and reorients if the angle accidentally overshoots.
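A minimal sketch of such a closed-loop tilt controller; the proportional gain, retreat threshold, and angle convention are illustrative assumptions rather than parameters of the actual system:

```python
def tilt_controller(target_angle, measured_angle, kp=0.8,
                    retreat_threshold=0.3):
    """One step of a proportional controller on the quadrotor's tilt
    angle toward the canvas. Returns (command, retreat), where
    retreat signals the drone to back off and reorient after the
    angle overshoots. Gain and threshold are illustrative values,
    not from the actual system."""
    error = target_angle - measured_angle
    if -error > retreat_threshold:   # tilted too far into the canvas
        return 0.0, True             # retreat and reorient
    return kp * error, False
```

The retreat branch is what keeps an accidental overshoot from becoming a crash into the canvas: control authority is dropped and the drone reorients before resuming closed-loop tracking.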
The topic of shared control has been explored in automation research,7,26 and efforts have been
made to define levels of autonomy in master–slave systems. Adapting the framework of Jenay
Beer and her colleagues,7 we define four categories of controlling robotic augmentations with
varying degrees of autonomy (Table 2). We exclude certain categories from her team’s classifi-
cation, namely those where a human operator only takes a planning or intervention role. Full au-
tonomy is also not listed because in the context of human–robot co-action there will always be
some type of interaction between a user and a robot.
Table 2. Types of control methods for robotic augmentations, with the level of autonomy increasing
from left (fully volitional control) to right (partial automation).
Direct control and pseudo-mapping are control methods without robotic autonomy. The former
method directly translates command signals from a human operator to robotic motions. For ex-
ample, Robotic Symbionts17 uses electromyography (EMG) signals from the user’s forearm for
controlling the robot, while MetaLimbs12 uses the position of a human operator’s foot to directly
drive the robotic arm. Pseudo-mapping is similar to direct control, except that the robot’s motion is generated through an algorithmic mapping between human actions and robotic actions. The SR
finger robot described earlier4 utilizes this control scheme: robotic finger movements are gener-
ated in a one-to-one mapping from human finger positions. Direct control and pseudo-mapping
offer different tradeoffs between the independence of robotic movement and the control burden
imposed on users.
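The contrast between the two methods can be sketched as follows; the signal units, gains, and offsets are hypothetical, chosen only to make the distinction concrete:

```python
def direct_control(emg_amplitude, gain=1.5):
    """Direct control: the operator's command signal (here a rectified
    EMG amplitude in hypothetical units) is translated straight into
    a motor command. The robot has no autonomy; deliberate effort
    maps to motion."""
    return gain * abs(emg_amplitude)

def pseudo_mapping(human_finger_angle, offset=0.2, scale=0.9):
    """Pseudo-mapping: the robot's motion is generated algorithmically
    from sensed human posture (a one-to-one mapping, as in the SR
    fingers), so the user drives the robot implicitly by moving
    naturally rather than by issuing explicit commands."""
    return offset + scale * human_finger_angle
```

The tradeoff noted above shows up directly: direct control demands a dedicated command channel from the user, while pseudo-mapping rides on existing motions but ties the robot's movement to them.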
Assisted control describes a control paradigm in which a robot’s gross movements are guided by a human operator, with the robot making slight adjustments. This category lacks example systems
relevant to this article’s main topic, but control strategies utilized in prostheses and teleoperation
research hint at its applicability. One example in the case of prosthesis control for object-grasp-
ing tasks is to continuously switch the controller between the intracortical brain–computer inter-
face and a computer according to the phase of the robotic motion.27 Another example is the
computer accurately aligning the hand to an optimal grasp position.28 Teleoperation systems of-
ten utilize assistance by automation, as a control interface might lack the degrees of freedom re-
quired for full control of a robot. Özkan Bebek and M. Cenk Çavuşoğlu presented a surgical
operation system that automatically cancels motion artifacts caused by a beating heart.22 Smart
hand-tools research29,30 demonstrates computationally driven tools that prevent mistakes by auto-
matically controlling the tools.
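One way assisted control could be realized is a phase-dependent blend of the operator's gross command with an automated correction, loosely in the spirit of the phase-switched controllers cited above; the phase names and authority weights here are purely illustrative, not taken from any cited system:

```python
def assisted_command(human_cmd, auto_correction, phase):
    """Blend the operator's gross command with a small automated
    adjustment. The automation's authority grows toward the final
    grasp phase; the weights are illustrative assumptions."""
    authority = {"transport": 0.1, "align": 0.4, "grasp": 0.8}[phase]
    return (1 - authority) * human_cmd + authority * auto_correction
```

During transport the human dominates; as the motion approaches the grasp, the automated alignment takes over most of the fine adjustment, mirroring how a prosthesis controller might hand off to the computer late in the reach.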
Shared control describes systems in which the robot takes a larger decision-making role. The
robotic decisions might require significant processing and offload a batch of control maneuvers
from users. The shoulder-mounted SR arms discussed earlier2 use a Petri net to recognize when a
user switches from one task to another and preemptively move to a position that supports
the upcoming assembly task. In the three-arm drumming system13 the generation of rhythm pat-
terns is entirely offloaded to the robot, while the human musician makes the higher-level deci-
sion about the target drum for the robot.
These four categories are not mutually exclusive, and the level of autonomy is more of a continuum than a set of discrete choices. For example, systems can be implemented to incorporate multiple
control schemes according to application context or phases of a task, thereby switching between
manual and autonomous control.15,27
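The switching described above can be made concrete as a small dispatcher that selects a control scheme per task context; the context and scheme names are hypothetical placeholders, chosen per application in a real system:

```python
# Hypothetical mapping from task context to control scheme. A real
# system would choose these per application, and the boundaries
# between schemes form a continuum rather than discrete choices.
SCHEME_BY_CONTEXT = {
    "free_motion": "direct_control",
    "fine_positioning": "assisted_control",
    "repetitive_subtask": "shared_control",
}

def select_scheme(context):
    """Pick a control scheme for the current task context, falling
    back to fully volitional control when the context is unknown."""
    return SCHEME_BY_CONTEXT.get(context, "direct_control")
```

Falling back to direct control on an unrecognized context keeps the human operator in charge whenever the system cannot confidently classify the phase of the task.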
HUMAN–ROBOT INTERACTION
This article addresses the role division between humans and robots in terms of actual actions and
their control. An additional critical research issue is human–robot interaction. The physical environment imposes a dynamically changing context and, over the course of collaboration, humans and robots must each adapt to those changes. To form fluid and continuous collaboration
between humans and machines, a user would need to comprehend and properly respond to a ro-
bot’s changing behaviors.
We have anecdotal observations from our research on Robotic Symbionts17 that users adapt to suboptimal movements by the robot. When we tested our simulation system, which automatically finds control parameters for an object-handling task, some of the simulation results contained errors due to differences between the simulated and real environments. However, users were able to adjust their hand movements to successfully utilize the suboptimal configurations of the
robotic extension. Smart hand-tools research demonstrates another way a user can respond to ro-
botic decisions. FreeD29 is a smart milling tool that has the ability to adjust the angle and spindle
speed of a milling bit autonomously. When a user is about to make a wrong cut (with respect to
the 3D model that the user is trying to create), the tool intervenes and informs the user of the potential mistake. The user can then decide either to comply or to nudge the machine to carry out the “wrong” action anyway.
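The intervene-then-override interaction can be sketched as a guard around each tool action; the scalar deviation metric and tolerance are hypothetical stand-ins for the geometric checks an actual tool like FreeD would perform against the target 3D model:

```python
def next_action(planned_cut, model_surface, tolerance=0.5,
                user_override=False):
    """Guard a tool action against the target model: if the planned
    cut deviates beyond the tolerance, intervene and defer to the
    user, who may still nudge the machine to proceed anyway.
    The deviation metric and tolerance are illustrative."""
    deviation = abs(planned_cut - model_surface)
    if deviation > tolerance and not user_override:
        return "intervene"   # pause and inform the user
    return "cut"
```

The key design choice is that the machine's judgment is advisory: it pauses and communicates rather than silently refusing, leaving the final decision, including the "wrong" one, with the human.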
This feedback loop in human–machine systems is a critical research topic in the design of usable
robotic augmentation systems (Figure 6). It includes important questions such as how machine intention can be communicated clearly to the user, and how users familiarize themselves with the robot’s behavior over time and develop efficient communication with the machine.
Figure 6. Flow chart describing a human–robot integration system for co-actions. In addition to
coordinating control and action flow, the feedback from robots to human operators is critical for
effective collaboration.
CONCLUSION
This article addressed a category of robotic augmentations that deeply engage with human ma-
nipulation tasks. We introduced two of our projects as case studies and examined related works
in the field, with a focus on two aspects: the type of support provided by robotic augmentation
systems, and how the systems are controlled. Those aspects are critical in designing such sys-
tems, as the dyadic configuration between humans and robots makes their coordination a com-
plex problem blending robotic control and human–robot teamwork. The proposed frameworks
describe different ways a human operator and a robot take actions together, and ways the robotic
control is shared. We also briefly discussed a future challenge regarding the interaction and feedback between humans and robots in such systems. We expect that a systematic
investigation into this human–robot interaction aspect will result in more fluid coordination in
future systems.
REFERENCES
1. Third Hand, Stelarc; http://stelarc.org/?catID=20265.
2. B.L. Bonilla and H.H. Asada, “A Robot on the Shoulder: Coordinated Human-
Wearable Robot Control Using Coloured Petri Nets and Partial Least Squares
Predictions,” Proc. 2014 IEEE Int’l Conf. Robotics and Automation (ICRA 14), 2014,
pp. 119–125.
3. F. Parietti, K. Chan, and H.H. Asada, “Bracing the Human Body with Supernumerary
Robotic Limbs for Physical Assistance and Load Reduction,” Proc. 2014 IEEE Int’l Conf. Robotics and Automation (ICRA 14), 2014, pp. 141–148.
4. F.Y. Wu and H. Asada, “Bio-Artificial Synergies for Grasp Posture Control of
Supernumerary Robotic Fingers,” Proc. Robotics: Science and Systems X, University
of California, Berkeley, CA, 2014; https://dspace.mit.edu/handle/1721.1/88457.
5. F.Y. Wu and H.H. Asada, “‘Hold-and-Manipulate’ with a Single Hand Being Assisted
by Wearable Extra Fingers,” Proc. 2015 IEEE Int’l Conf. Robotics and Automation
(ICRA 15), 2015, pp. 6205–6212.
6. J.C.R. Licklider, “Man-Computer Symbiosis,” IRE Trans. Human Factors in
Electronics, March 1960, pp. 4–11.
7. J.M. Beer, A.D. Fisk, and W.A. Rogers, “Toward a Framework for Levels of Robot
Autonomy in Human-Robot Interaction,” J. Human-Robot Interaction, vol. 3, no. 2,
2014, pp. 74–99.
8. M.C. Carrozza et al., “The Development of a Novel Prosthetic Hand—Ongoing
Research and Preliminary Results,” IEEE/ASME Trans. Mechatronics, vol. 7, no. 2,
2002, pp. 108–114.
9. C.A. Torres, “IKO Creative Prosthetic System,” 2014; https://vimeo.com/97877783.
10. The Alternative Limb Project; http://www.thealternativelimbproject.com.
11. M. Smith, ed., Stelarc: The Monograph, MIT Press, 2007.
Leigh currently focuses on robotic interfaces that expand our hands’ expressivity—numeri-
cally, spatially, or qualitatively—and enable novel ways to carry out physical manipulation
and creative expression. Contact him at sangwon@media.mit.edu.
Harshit Agrawal is a former master’s student and research assistant in the Fluid Interfaces
group at the MIT Media Lab. He builds tools to study how technology can blend with and
enhance human creative expression and in the process create experiences that invite us to
reflect upon and reevaluate our relationship with technology. He currently focuses on the
interplay between human and machine imaginations and intentions, spanning across virtual
and physical embodiments. Contact him at harshit@alum.mit.edu.
Pattie Maes is a professor in MIT’s Program in Media Arts and Sciences as well as aca-
demic head of the program. She also runs the MIT Media Lab’s Fluid Interfaces group,
which aims to radically reinvent the human–machine experience. Coming from a background in artificial intelligence and human–computer interaction, she is particularly interested in the topic of cognitive augmentation, or how immersive and wearable systems can
actively assist people with memory, learning, decision making, communication, and well-
being. Maes received a PhD in artificial intelligence from Vrije Universiteit Brussel. Con-
tact her at pattie@media.mit.edu.