
THEME ARTICLE: AUGMENTING HUMANS

Robotic Symbionts
Interweaving Human and Machine Actions

Sang-won Leigh, MIT Media Lab
Harshit Agrawal, MIT Media Lab
Pattie Maes, MIT Media Lab

This article defines a category of human–robot interaction in which human and robotic actors work as a single unified system. We survey work from various fields, including human augmentation systems such as extra fingers and arms, and other robots that operate in close proximity to the user. The discussed works highlight a close interplay between human and robotic actions in which control decisions are made by both actors. Such a dyadic configuration can yield a synergistic outcome but requires that close attention be paid to the coordination between the two. Using case studies from our own work, we discuss two main questions that must be addressed when designing such closely collaborative human–robot integrations: type of support and degree of control. The different choices that can be adopted for each of these design questions define a framework, or classification, that is useful for surveying existing and future research.

Close integration of humans and machines has been investigated by researchers and artists for generations. One of the earliest visions was put forward by the English friar and scientist Roger Bacon, who in the 13th century imagined various technologies later realized by inventors such as Leonardo da Vinci: “flying machines may be constructed so that a man may sit in the midst of the machine”; “a machine of small size may be made for raising and lowering weights of almost infinite amount”; and “machines may also be made for going in sea or river down to the bed without bodily danger.”
This vision has been realized in numerous artworks and engineering projects over the past decades. Robotic prostheses and exoskeletons are widely used to overcome disability or enhance physical performance by supporting existing human limbs or replacing lost ones. An alternative and more progressive view of human–machine integration was presented by the artist Stelarc in his work Third Hand (1980), a body-worn robotic arm giving the artist an additional limb controlled by muscles in his abdomen and legs.1 The idea has recently gained attention in academia, with research exploring supernumerary robotic (SR) limbs.2–5


Figure 1. Examples of supernumerary robotic (SR) limbs: (a) Stelarc’s Third Hand, (b) SR arms on
the shoulder, (c) SR fingers, (d) SR drumming system.

This progression toward a more heterogeneous human–machine hybrid makes the coupling, or interaction, between humans and robots increasingly dynamic. In his visionary paper “Man–Computer Symbiosis,” J.C.R. Licklider explained its difference from “a mechanical extension,” in which human operators are there “more to help than to be helped.”6 He argued that computers can take a larger role, participating in real-time action planning while humans and computers take care of separate functions in a symbiotic collaboration. A relevant inspiration can be found in science fiction. The hyper-intelligent race car Asurada in the series Cyber Formula (1991–2000) autonomously executes a range of driving actions. The car makes a “lifting turn”: while the driver handles the wheel and pedals, the car autonomously controls propellers and spoilers to maneuver through seemingly impossible turns. The maneuver is discovered through an accident in which the driver’s reckless cornering attempt makes the car hit a roadblock, lifting it off the ground; the car autonomously decides to rebalance itself using its fans, ultimately achieving an extremely sharp turn.
The story depicts an instance of exemplary and fluid cooperation between human and robot. In practice, the design of such a system must be done in an application-specific manner and requires an understanding of the possible ways a robot could support manipulation tasks, as well as of how control is shared between humans and machines. This article addresses how, in such dyadic configurations, humans and robots can coordinate their actions in terms of two aspects: type of support and degree of control.7 We introduce two case studies with working prototypes that investigate these aspects, and derive a framework to better understand this relatively new space of research. We also survey related works in both research and art, and situate those works in our framework.

ROBOTIC EXTENSIONS OF THE BODY


Historically, people with amputations have used sockets into which different tool tips can be plugged based on need; one of the oldest examples stems from the Victorian era. These sockets accommodated only passive objects such as hooks or utensils, as neuro-prostheses and robotic extensions were not available at that time. Recently, neuro-interfaces have started showing promise, with clinical studies reporting successful use of neuro-controlled, dexterous prosthetic arms.8 While these systems typically aim to replicate the form and function of a lost human arm, prosthetic systems do exist that are designed to go beyond replication. The IKO Creative Prosthetic System,9 the Alternative Limb Project,10 and works by artist Viktoria Modesta suggest progressive designs of prostheses with unique functions or aesthetics. They offer an alternative perspective on prosthetic devices as tools for curating human identity and capability, beyond a means of restoring a “normal” state.
Stelarc’s works, such as Third Hand and Exoskeleton, aggressively put forward the vision of people evolving through “technology, symbiotically attached and implanted into the body.”11 His creations are not limited to commonly imagined human–machine hybrids, and include visceral variations such as a contraption that gives a person six legs or one that gives the artist a longer arm. This vision of out-of-the-ordinary extensions of the human body is not restricted to the realm of art. SR limbs, pioneered by Harry Asada’s group at MIT, are being actively investigated. Upper-body SR systems made by Asada’s team and other robotics researchers present a range of configurations, such as an extra pair of robotic arms2,3,12 or additional robotic fingers.4,5,13,14


These systems are designed to augment manipulation capabilities by enabling users to maneuver
in a higher-dimensional action space.
These developments have led to human–machine hybrid systems with increasing complexity and corresponding design problems that blend robotic control and human–robot teaming. From a control point of view, such robotic systems necessitate a means to carry out higher-dimensional control with lower-dimensional control input.15 From a teaming point of view, for humans and robots to act independently to achieve a set of related goals,16 coordination of, and role division between, the human and robotic actors need to be worked out. We showcase two of our prototypes as case studies and dissect the aforementioned related works in order to identify a framework to address the challenge. In the course of the discussion, we also introduce strategies from robotic telepresence and smart hand-tool research that hint at their potential use in the systems we discuss in this article.
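To make the dimensionality problem concrete, consider a synergy-style mapping in which a fixed matrix projects a low-dimensional user command onto a higher-dimensional joint space. The following Python sketch illustrates the idea; the dimensions, matrix values, and names are illustrative assumptions rather than details of any system cited here.

import numpy as np

NUM_INPUTS = 2   # low-dimensional command, e.g., two input channels (assumed)
NUM_JOINTS = 7   # joints of a hypothetical SR limb (assumed)

# Each column is one "synergy": a coordinated whole-limb motion pattern.
# Real systems derive these from data; random values here are placeholders.
SYNERGIES = np.random.default_rng(0).uniform(-1.0, 1.0, (NUM_JOINTS, NUM_INPUTS))

def joint_targets(user_input: np.ndarray) -> np.ndarray:
    """Map a low-dimensional command onto joint-angle targets."""
    assert user_input.shape == (NUM_INPUTS,)
    return SYNERGIES @ user_input

# A single two-dimensional input commands all seven joints at once.
print(joint_targets(np.array([0.5, -0.2])))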

THE ROLE OF ROBOTIC AUGMENTATION


The first series of prototypes investigates the types of support a robotic actor can provide in the
course of co-action. The Body Integrated Programmable Joints Interface17 is a shape-changing
robotic device worn on the body, where the hand and the robotic fingers complement each other
(Figure 2a). We explored a range of interaction scenarios with the device through its ability to
reconfigure for various functions.

Figure 2. Robotic symbiont prototypes. (a) Shape-changing SR wearable device. (b) Modular
robotic platform. Different modules can be daisy chained, including a variety of fingertip sensors or
end effectors. (c) Soft SR fingers fabricated using a standard molding-casting process.

Later iterations of the project employed different mechanical designs to further study applications that require different robotic structures, motions, or end effectors. The second version of the system18 consists of robotic/sensor modules that can be plugged into each other to create a wide range of robotic augmentation circuitries (Figure 2b). In comparison with the initial version, which had a fixed number of motors and fingers, this version aims to provide an engineering solution that further accommodates different shapes or end-effector functionalities. Another variant19 that was built later consists of soft SR fingers (Figure 2c). Using a standard molding-casting process, we aimed to demonstrate how the fabrication process can be standardized for potentially more nuanced morphologies. Soft robots also have several benefits over rigid mechanisms: they are lightweight and compliant, and a single cast shape can undergo higher-dimensional actuation than the motor modules used in the previous versions. Thanks to the smaller form factor and range of motion, detailed actions within the hand become possible, such as interacting with a smartphone touchscreen.

Figure 3. Robotic augmentation applications, categorized according to type of support. Human fingers and the robot can act together in a synchronous or symmetric manner, the robot can take a secondary role and support the main task done by the human hand, or the robot can take initiative and a more dynamic role.

Robotic applications derived from this project fall into three main categories (Figure 3), which are adapted from the classification of bimanual tasks. It is known that we engage our hands in a task through either a symmetric or asymmetric role division,20 where the dominant hand adopts a more explorative or manipulative role. Similarly, robots can either take a homologous role to the human hand or an asymmetric role that complements it. One new insight is that, unlike the clear preference for the dominant hand in bimanual tasks, robots can have specialized action or sensing abilities; therefore, more independent and active roles can be taken by the robots. Table 1 summarizes and gives examples from the research literature of these three categories of support, as well as an additional fourth category in which robots are possessed and controlled directly by their human counterparts.

Table 1. Types of support by robotic augmentations, with example works for each category.

Synchronous action. Robotic actors act along with a human operator in a homologous or symmetric manner. The robot and the human operator take equal roles. Example work: Wu and Asada4; Hu, Leigh, and Maes19.

Asymmetric (passive). Robotic actors take a secondary role supporting the user, who takes the main role. Main roles include initiation of task and exploration or dexterous manipulation. Example work: Bonilla and Asada2; Parietti, Chan, and Asada3; Wu and Asada5.

Asymmetric (dynamic). Robotic actions take initiative and come to the foreground. Robots can make actions of high complexity or independently of a human operator’s intention. Example work: Bretan et al.13; Chung21.

Robot possessed. Robotic actions are possessed by a user, replicating or carrying out a human limb’s actions at a different scale, displacement, or complexity. Example work: Bebek and Çavuşoğlu22; Bejczy23; Fernando et al.24.

The first category, namely synchronous action, is well illustrated by SR fingers that are controlled by a mapping between robotic motions and human finger motions.4 Faye Wu and Harry Asada developed a synergy-based control scheme to allow both human and robotic fingers to follow the same motion paths (synchronous action). Their later iteration used elbow gestures to lock the robotic fingers to hold an object,5 so that the robot supports a user performing dexterous actions on the object. Shoulder-mounted SR arms2 are designed to offer secondary support to their wearer, enabling the user to accomplish assembly jobs that would normally require two workers (passive assistance).
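As a rough illustration of this follow-then-lock behavior, the Python sketch below toggles a robotic finger between a synchronous FOLLOW mode and a locked HOLD mode on a trigger gesture. The gesture detection and sensor interface are hypothetical simplifications, not Wu and Asada’s implementation.

from enum import Enum, auto

class Mode(Enum):
    FOLLOW = auto()  # robotic fingers mirror the human fingers (synchronous)
    HOLD = auto()    # robotic fingers lock in place to keep holding an object

def control_tick(mode, locked_pose, human_finger_angles, elbow_gesture):
    """One control step: returns the new mode, locked pose, and command."""
    if elbow_gesture:  # a gesture toggles between following and holding
        if mode is Mode.FOLLOW:
            mode, locked_pose = Mode.HOLD, list(human_finger_angles)
        else:
            mode = Mode.FOLLOW
    # FOLLOW sends the mapped human pose; HOLD keeps the frozen pose so the
    # human hand is free to perform dexterous actions on the held object.
    command = list(human_finger_angles) if mode is Mode.FOLLOW else locked_pose
    return mode, locked_pose, command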


There are limited studies of the third category, where the robot takes the initiative in acting, but one example is the three-arm drumming system.13 In this project, the robot’s role belongs to different categories of the framework of Table 1 depending on how it fits into the music. Among the many ways the robot can play the drum, the researchers showcased generating polyrhythm patterns instead of safely blending into the music; the robot comes to the foreground and actively contributes to the music in a way a human drummer might struggle to do. The last category describes a distinct case of using a robot to carry out the actions of human limbs at a different displacement, scale, or complexity. MetaLimbs12 presents SR arms controlled by the feet, transposing lower-body actions to a more relevant, upper-body task space. This category is closely related to teleoperation, but with a research focus on amplifying actions in the user’s space instead of at remote locations.

DUAL-MINDEDNESS OF ROBOTIC AUGMENTATION


Here we further examine the control problem regarding the robotic augmentation of manipulation capability. We introduce a previous prototype as a case study, and analyze the related works in terms of robotic control. A Flying Pantograph is an art installation that uses a quadrotor as a gravity-free drawing agent possessed by the hand (Figure 4).25 The system transposes a human drawing to an output canvas at a different scale and with altered aesthetics. The software system provides computing modules that assist the drawing with functions such as smoothing, scaling, mirroring, and delaying, which a user can choose for desired effects. Another aspect of the system, which is the key statement behind the work, is the new visual language added to the resulting artwork by the system’s aerodynamics.

Figure 4. A Flying Pantograph system: (a) drawings on a tabletop canvas being transposed to a vertical wall, (b) using other media such as spray paint, and (c) experimenting with algorithmic constraints of the quadrotor’s movement.

Artist Sougwen Chung, after trying out the system, observed that the patterns of the lines drawn resemble those that her drawing machine D.O.U.G.21 creates. We also experimented with the spray-paint version of our system (Figure 4b) with artist Rochelle Haley. She observed that the inherent movement of our system creates a pattern that she also observes in natural movements, such as those of dancers or animals. Her usual work transposes the unique patterns of dance choreographies onto canvas, whereas in this collaboration the robotic system becomes a critical stylistic component in her expression. There is a degree to which an artist has to experiment with and learn the behavior of the quadrotor, which can sometimes be suggestive or dismissive to the artist. In other words, instead of mechanically extending a human artist and trivializing the task, the system’s motion dynamics and control algorithms form a dialogue with the artist, resulting in a distinct style. This discovery led to experiments on how we can differently employ the noise (or variations) that comes from the quadrotor (Figure 5). By guiding the quadrotor at different speeds or rates of turns, a user can range from abstracted strokes, in which quick turns suppress the quadrotor’s wiggle, to strokes that deliberately incorporate the drone’s noise during slow and steady maneuvers.


Figure 5. The degree to which the motion of the drone in A Flying Pantograph contributes to the
final art can be controlled. Faster strokes with quick turns will fully suppress the wiggly lines made
by the drone, while slower maneuvers can be used to fully incorporate them.

A Flying Pantograph focused on how to utilize a creative medium that has a programmatic buffer, or noise, but it also explored control decisions in a robotic augmentation system. In our installations, gross movements were made by a user, while parts of the quadrotor’s movement were generated as a result of the combination of the user’s intention and external factors. The system also offers autonomous stabilization and hazard avoidance during a drawing maneuver, which is critical since the quadrotor must continuously apply pressure to the canvas without crashing. Closed-loop control of the quadrotor’s tilt angle helps maintain constant pressure, and the system automatically retreats and reorients if the angle accidentally overshoots.
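A minimal sketch of this constant-pressure behavior, assuming a proportional controller on the measured tilt angle and a retreat routine on overshoot, might look as follows in Python. The gains, limits, and drone interface are placeholders for illustration, not the installation’s actual code.

TARGET_TILT = 0.15    # rad; tilt that presses the pen onto the canvas (assumed)
MAX_SAFE_TILT = 0.35  # rad; beyond this, back off and reorient (assumed)
KP = 2.0              # proportional gain (assumed)

def control_step(measured_tilt, drone):
    """One closed-loop step that keeps pen pressure roughly constant."""
    if measured_tilt > MAX_SAFE_TILT:
        # Accidental overshoot: retreat from the canvas, then reorient.
        drone.retreat()
        drone.reorient()
    else:
        # Proportional correction toward the pressure-holding tilt angle.
        drone.set_pitch_rate(KP * (TARGET_TILT - measured_tilt))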
The topic of shared control has been explored in automation research,7,26 and efforts have been made to define levels of autonomy in master–slave systems. Adapting the framework of Jenay Beer and her colleagues,7 we define four categories of controlling robotic augmentations with varying degrees of autonomy (Table 2). We exclude certain categories from her team’s classification, namely those where a human operator only takes a planning or intervention role. Full autonomy is also not listed because in the context of human–robot co-action there will always be some type of interaction between a user and a robot.

Table 2. Types of control methods for robotic augmentations, with the level of autonomy increasing from the first category (fully volitional control) to the last (partial automation).

Direct control. A robot’s action is entirely controlled by human operators through input such as gestures, electromyography, electroencephalography, or user interfaces. Example work: Leigh et al.18; Hu, Leigh, and Maes19; Sasaki et al.12.

Pseudo-mapping. A robot’s action is generated by mapping a human operator’s actions. It can be a one-to-one mapping between limb positions or preset actions triggered by the operator’s behavior. Example work: Wu and Asada4; Chung21.

Assisted control. Gross movements of a robot are controlled by a human operator, where adjustments can be made by the robot for enhanced precision, stabilization, or error prevention. Example work: Katyal et al.27; Downey et al.28; Bebek and Çavuşoğlu22.

Shared control. A robot takes a significant role in the control, which involves managing a portion of robotic actions pertaining to the main part of the task. Example work: Bonilla and Asada2; Bretan et al.13.


Direct control and pseudo-mapping are control methods without robotic autonomy. The former directly translates command signals from a human operator into robotic motions. For example, Robotic Symbionts17 uses electromyography (EMG) signals from the user’s forearm to control the robot, while MetaLimbs12 uses the position of a human operator’s foot to directly drive the robotic arm. Pseudo-mapping is similar to direct control in that control of the robot is generated through an algorithmic mapping between human actions and robotic actions. The SR finger robot described earlier4 utilizes this control scheme: robotic finger movements are generated in a one-to-one mapping from human finger positions. Direct control and pseudo-mapping offer different tradeoffs between the independence of robotic movement and the control burden imposed on users.
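The contrast between the two schemes can be sketched as follows. The thresholds and single-line signal processing are made-up simplifications; real EMG pipelines involve filtering and per-user calibration.

def direct_control(emg_rms):
    """Direct control: a command signal translates straight into a robot action."""
    CLOSE_THRESHOLD, OPEN_THRESHOLD = 0.6, 0.3  # assumed values
    if emg_rms > CLOSE_THRESHOLD:
        return "close_grip"
    if emg_rms > OPEN_THRESHOLD:
        return "open_grip"
    return "idle"

def pseudo_mapping(human_finger_angle):
    """Pseudo-mapping: robotic motion generated from human motion, here a
    simple one-to-one mapping of a finger joint angle."""
    GAIN = 1.0  # one-to-one; other mappings could scale or reshape the motion
    return GAIN * human_finger_angle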
Assisted control describes a control paradigm in which a robot’s gross movements are guided by a human operator, with the robot making slight adjustments. This category lacks example systems relevant to this article’s main topic, but control strategies utilized in prosthesis and teleoperation research hint at its applicability. One example, in the case of prosthesis control for object-grasping tasks, is to continuously switch the controller between the intracortical brain–computer interface and a computer according to the phase of the robotic motion.27 Another example is the computer accurately aligning the hand to an optimal grasp position.28 Teleoperation systems often utilize assistance by automation, as a control interface might lack the degrees of freedom required for full control of a robot. Özkan Bebek and M. Cenk Çavuşoğlu presented a surgical operation system that automatically cancels motion artifacts caused by a beating heart.22 Smart hand-tools research29,30 demonstrates computationally driven tools that prevent mistakes by automatically controlling the tools.
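In the spirit of these examples, assisted control can be sketched as the operator’s gross command plus a small automatic correction, here canceling an assumed periodic disturbance such as heartbeat motion. The frequency, amplitude, and disturbance model are illustrative assumptions only.

import math

HEART_RATE_HZ = 1.2      # assumed disturbance frequency
DISTURBANCE_AMP = 0.004  # meters; assumed amplitude of the surface motion

def assisted_command(human_command, t):
    """Superimpose an automatic correction on the operator's gross command."""
    # Feed-forward cancellation of the estimated periodic surface motion.
    correction = -DISTURBANCE_AMP * math.sin(2 * math.pi * HEART_RATE_HZ * t)
    return human_command + correction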
Shared control describes systems in which the robot takes a larger decision-making role. The robotic decisions might require significant processing and offload a batch of control maneuvers from users. The shoulder-mounted SR arms discussed earlier2 use a Petri net to recognize when a user switches from one task to another and preemptively switch to another position to support the upcoming assembly task. In the three-arm drumming system,13 the generation of rhythm patterns is entirely offloaded to the robot, while the human musician makes the higher-level decision about the target drum for the robot.
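This role split can be sketched as follows: the human supplies only the high-level decision (the target drum), while the robot autonomously generates the low-level action sequence. The pattern generator below is a stand-in, not the cited system’s algorithm.

import random

def generate_pattern(hits, subdivisions, seed=7):
    """Robot's share: autonomously produce a rhythm pattern for one bar."""
    rng = random.Random(seed)
    onsets = rng.sample(range(subdivisions), hits)
    return [1 if i in onsets else 0 for i in range(subdivisions)]

def play_bar(target_drum):
    """Human's share: only the target drum is specified."""
    for step, hit in enumerate(generate_pattern(hits=5, subdivisions=16)):
        if hit:
            print(f"step {step}: strike {target_drum}")

play_bar("floor tom")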
These four categories are not mutually exclusive, and the level of autonomy is more of a continuum than a set of discrete choices. For example, systems can be implemented to incorporate multiple control schemes according to application context or phases of a task, thereby switching between manual and autonomous control.15,27

HUMAN–ROBOT INTERACTION
This article addresses the role division between humans and robots in terms of actual actions and their control. An additional critical research issue is human–robot interaction. The physical environment imposes a dynamically changing context and, over the course of collaboration, humans and robots each must adapt to the change. To form fluid and continuous collaboration between humans and machines, a user needs to comprehend and properly respond to a robot’s changing behaviors.
We have anecdotal observations from our research on Robotic Symbionts17 that the user adapts to suboptimal movements by the robot. While testing our simulation system, which automatically finds control parameters for an object-handling task, some of the simulation results had errors due to differences between the simulation and the real environment. However, users were able to adjust their hand movements to successfully utilize the suboptimal configurations of the robotic extension. Smart hand-tools research demonstrates another way a user can respond to robotic decisions. FreeD29 is a smart milling tool that can autonomously adjust the angle and spindle speed of a milling bit. When a user is about to make a wrong cut (with respect to the 3D model that the user is trying to create), the tool intervenes and informs the user of the potential mistake. The user can then decide either to conform or to nudge the machine to carry out the “wrong” action anyway.


This feedback loop in human–machine systems is a critical research topic in the design of usable robotic augmentation systems (Figure 6). It includes important questions such as how machine intention can be communicated clearly to the user, and how users familiarize themselves with the robot’s behavior over time and develop efficient communication with the machine.

Figure 6. Flow chart describing a human–robot integration system for co-action. In addition to coordinating control and action flow, feedback from robots to human operators is critical for effective collaboration.

CONCLUSION
This article addressed a category of robotic augmentations that deeply engage with human manipulation tasks. We introduced two of our projects as case studies and examined related works in the field, with a focus on two aspects: the type of support provided by robotic augmentation systems, and how the systems are controlled. These aspects are critical in designing such systems, as the dyadic configuration between humans and robots makes their coordination a complex problem blending robotic control and human–robot teamwork. The proposed frameworks describe different ways a human operator and a robot take actions together, and ways the robotic control is shared. We also briefly discussed a future challenge regarding the interaction and feedback between humans and robots in the discussed type of systems. We expect that a systematic investigation into this human–robot interaction aspect will result in more fluid coordination in future systems.

REFERENCES
1. Third Hand, Stelarc; http://stelarc.org/?catID=20265.
2. B.L. Bonilla and H.H. Asada, “A Robot on the Shoulder: Coordinated Human-
Wearable Robot Control Using Coloured Petri Nets and Partial Least Squares
Predictions,” Proc. 2014 IEEE Int’l Conf. Robotics and Automation (ICRA 14), 2014,
pp. 119–125.
3. F. Parietti, K. Chan, and H.H. Asada, “Bracing the Human Body with Supernumerary Robotic Limbs for Physical Assistance and Load Reduction,” Proc. 2014 IEEE Int’l Conf. Robotics and Automation (ICRA 14), 2014, pp. 141–148.
4. F.Y. Wu and H. Asada, “Bio-Artificial Synergies for Grasp Posture Control of Supernumerary Robotic Fingers,” Proc. Robotics: Science and Systems X, Berkeley, CA, 2014; https://dspace.mit.edu/handle/1721.1/88457.
5. F.Y. Wu and H.H. Asada, “‘Hold-and-Manipulate’ with a Single Hand Being Assisted by Wearable Extra Fingers,” Proc. 2015 IEEE Int’l Conf. Robotics and Automation (ICRA 15), 2015, pp. 6205–6212.
6. J.C.R. Licklider, “Man-Computer Symbiosis,” IRE Trans. Human Factors in
Electronics, March 1960, pp. 4–11.
7. J.M. Beer, A.D. Fisk, and W.A. Rogers, “Toward a Framework for Levels of Robot
Autonomy in Human-Robot Interaction,” J. Human-Robot Interaction, vol. 3, no. 2,
2014, pp. 74–99.
8. M.C. Carrozza et al., “The Development of a Novel Prosthetic Hand—Ongoing
Research and Preliminary Results,” IEEE/ASME Trans. Mechatronics, vol. 7, no. 2,
2002, pp. 108–114.
9. C.A. Torres, “IKO Creative Prosthetic System,” 2014; https://vimeo.com/97877783.
10. The Alternative Limb Project; http://www.thealternativelimbproject.com.
11. M. Smith, ed., Stelarc: The Monograph, MIT Press, 2007.

April–June 2018 41 www.computer.org/pervasive


AUGMENTING HUMANS

12. T. Sasaki et al., “MetaLimbs: Multiple Arms Interaction Metamorphism,” ACM SIGGRAPH 2017 Emerging Technologies (SIGGRAPH 17), 2017; doi.org/10.1145/3084822.3084837.
13. M. Bretan et al., “A Robotic Prosthesis for an Amputee Drummer,” arXiv preprint, 2016; arxiv.org/abs/1612.04391v1.
14. I. Hussain, G. Salvetti, and D. Prattichizzo, “On Control Interfaces for the Robotic Sixth Finger,” Proc. 7th Augmented Human Int’l Conf. (AH 16), 2016; doi.org/10.1145/2875194.2875243.
15. L.V. Herlant, R.M. Holladay, and S.S. Srinivasa, “Assistive Teleoperation of Robot
Arms via Automatic Time-Optimal Mode Switching,” Proc. 11th ACM/IEEE Int’l
Conf. Human-Robot Interaction (HRI 16), 2016, pp. 35–42.
16. S. Javdani et al., “Shared Autonomy via Hindsight Optimization for Teleoperation and Teaming,” arXiv preprint, 2017; arxiv.org/abs/1706.00155.
17. S. Leigh and P. Maes, “Body Integrated Programmable Joints Interface,” Proc. 2016
CHI Conf. Human Factors in Computing Systems (CHI 16), 2016, pp. 6053–6057.
18. S. Leigh et al., “Morphology Extension Kit: A Modular Robotic Platform for
Physically Reconfigurable Wearables,” Proc. 12th Int’l Conf. Tangible, Embedded,
and Embodied Interaction (TEI 18), 2018, pp. 11–18.
19. Y. Hu, S. Leigh, and P. Maes, “Hand Development Kit: Soft Robotic Fingers as Prosthetic Augmentation of the Hand,” Adjunct Proc. 30th Ann. ACM Symp. User Interface Software and Technology (UIST 17), 2017, pp. 27–29.
20. K.D. Stone, D.C. Bryant, and C.L.R. Gonzalez, “Hand Use for Grasping in a Bimanual Task: Evidence for Different Roles?,” Experimental Brain Research, vol. 224, no. 3, 2013, pp. 455–467.
21. S. Chung, Drawing Operations, 2015; https://vimeo.com/138487938.
22. O. Bebek and M.C. Çavuşoğlu, “Intelligent Control Algorithms for Robotic-Assisted
Beating Heart Surgery,” IEEE Trans. Robotics, vol. 23, no. 3, 2007, pp. 468–480.
23. A.K. Bejczy, “Toward Advanced Teleoperation in Space,” Teleoperation and Robotics
in Space, American Inst. Aeronautics and Astronautics, 1994.
24. C.L. Fernando et al., “TELESAR V: TELExistence Surrogate Anthropomorphic
Robot,” Proc. ACM SIGGRAPH 2012 Emerging Technologies (SIGGRAPH 12),
2012; doi.org/10.1145/2343456.2343479.
25. S. Leigh, H. Agrawal, and P.A. Maes, “A Flying Pantograph: Interleaving Expressivity
of Human and Machine,” Proc. 10th Int’l Conf. Tangible, Embedded, and Embodied
Interaction (TEI 16), 2016, pp. 653–657.
26. R. Parasuraman, T.B. Sheridan, and C.D. Wickens, “A Model for Types and Levels of
Human Interaction with Automation,” IEEE Trans. Systems, Man, and Cybernetics—
Part A: Systems and Humans, vol. 30, no. 3, 2000, pp. 286–297.
27. K.D. Katyal et al., “Collaborative BCI Approach to Autonomous Control of a
Prosthetic Limb System,” Proc. 2014 IEEE Int’l Conf. Systems, Man, and Cybernetics
(SMC 14), 2014, pp. 1479–1482.
28. J.E. Downey et al., “Blending of Brain-Machine Interface and Vision-Guided
Autonomous Robotics Improves Neuroprosthetic Arm Performance during Grasping,”
J. NeuroEngineering and Rehabilitation, vol. 13, no. 1, 2016; doi.org/10.1186/s12984-
016-0134-9.
29. A. Zoran, R. Shilkrot, and J. Paradiso, “Human-Computer Interaction for Hybrid
Carving,” Proc. 26th Ann. ACM Symp. User Interface Software and Technology (UIST
13), 2013, pp. 433–440.
30. R. Shilkrot et al., “Augmented Airbrush for Computer Aided Painting (CAP),” ACM
Trans. Graphics, vol. 24, no. 2, 2015; doi.org/10.1145/2699649.

ABOUT THE AUTHORS


Sang-won Leigh is a PhD candidate and research assistant in the Fluid Interfaces group at the MIT Media Lab. His work investigates the chasm between the immutable nature of the Newtonian universe and the malleability of digital computing. Seeing the human body and objects as “interfaces,” he pushes the envelope of user interfaces and augmented reality. Leigh currently focuses on robotic interfaces that expand our hands’ expressivity—numerically, spatially, or qualitatively—and enable novel ways to carry out physical manipulation and creative expression. Contact him at sangwon@media.mit.edu.
Harshit Agrawal is a former master’s student and research assistant in the Fluid Interfaces group at the MIT Media Lab. He builds tools to study how technology can blend with and enhance human creative expression and, in the process, create experiences that invite us to reflect upon and reevaluate our relationship with technology. He currently focuses on the interplay between human and machine imaginations and intentions, spanning virtual and physical embodiments. Contact him at harshit@alum.mit.edu.
Pattie Maes is a professor in MIT’s Program in Media Arts and Sciences as well as academic head of the program. She also runs the MIT Media Lab’s Fluid Interfaces group, which aims to radically reinvent the human–machine experience. Coming from a background in artificial intelligence and human–computer interaction, she is particularly interested in the topic of cognitive augmentation, or how immersive and wearable systems can actively assist people with memory, learning, decision making, communication, and well-being. Maes received a PhD in artificial intelligence from Vrije Universiteit Brussel. Contact her at pattie@media.mit.edu.

