
2018 IEEE International Symposium on Technology and Society (ISTAS)

“Technology, Ethics, and Policy”


November 13 and 14, 2018
Washington, DC U.S.A.

The IEEE Society for Social Implications of Technology (SSIT) invites you to contribute to the IEEE International Symposium on
Technology and Society (ISTAS) 2018, hosted by the George Washington University School of Engineering and Applied Science.

ISTAS is a multi-disciplinary and interdisciplinary forum for engineers, policy makers, entrepreneurs, philosophers, researchers, social
scientists, technologists, and polymaths to collaborate, exchange experiences, and discuss social implications of technology.

We welcome proposals for papers and practitioner presentations, panels, and workshop sessions focused on technology’s
relationship to social issues ranging from the economic and ethical to the cultural and environmental; in particular, we seek
submissions engaging with the following topics:

■ The social implications of technology as they relate to SSIT’s Five Pillars: Sustainable Development and Humanitarian Technology, Ethics/Human Values, Universal Access to Technology, Societal Impacts, and Protecting the Planet.

■ Intersections between the social implications of technology and:
   – Privacy and Security of information (vs. National & Homeland Security);
   – Net Neutrality (fast access Internet legislation);
   – Big Data and Decision Making;
   – Human Genome Editing (e.g., CRISPR);
   – Ethics: Neurotechnology/Big Brain;
   – Internet of Things (IoT);
   – Blockchain Everything – what does it mean?;
   – Open Data and Open Government.

Papers accepted for the Conference Proceedings will be published on IEEE Xplore, with some papers selected for publication in a special issue of IEEE Technology and Society Magazine and potentially other journals.

Submission Deadline (for paper drafts and panels or workshop proposals): May 15, 2018

For more information about ISTAS, submission guidelines, and updates, visit our website:

http://technologyandsociety.org/event/2018-ieee-international-symposium-technology-society-istas/

Digital Object Identifier 10.1109/MTS.2018.2807846


PRESIDENT’S MESSAGE

Paul M. Cunningham

SSIT and Sustainable Development

Considering the IEEE tagline “Advancing Technology for Humanity,” it is hardly surprising that many IEEE members are actively engaged in different forms of volunteerism addressing social challenges at community level, at home and abroad.

The IEEE Humanitarian and Philanthropic Opportunities (H&P) Initiative was launched at Sections Congress 2017 in Sydney by the IEEE Foundation and IEEE Humanitarian Activities Committee (HAC). The objective of H&P is to help IEEE members identify opportunities to volunteer their “time, talent or treasure” in the sustainable development and humanitarian technology space based on their interests, expertise, and availability.

Currently there are 12 groups across IEEE involved in H&P, offering team and individual volunteer opportunities of different durations for members at different stages of their careers, ranging from young professional or student, mid-career, pre-retirement, or retirement. Participating groups include the IEEE Foundation, IEEE Humanitarian Activities Committee (HAC), IEEE SIGHT (Special Interest Group on Humanitarian Technology), IEEE Eta Kappa Nu, EPICS in IEEE, IEEE Smart Village, IEEE Life Members Committee, IEEE History Center, IEEE Power & Energy Society Scholarship Plus Initiative, IEEE Internet Initiative (3i), IEEE Empower a Billion Lives, and the IEEE-USA Community Outreach Initiative (Move Project).

As an IEEE technical Society whose focus on all aspects of societal implications of technology complements the technical activities of all other IEEE Societies, SSIT members have a proud history of contributions to sustainable development and humanitarian technology. We have long focused on addressing ethical implications, interdependencies, context, and socio-cultural norms that are essential to avoid unintended and unanticipated consequences. One of our core strengths as a community has been our collaborative, partnership-based approach.

SSIT IST-Africa SIGHT members from the IST-Africa Institute, University of Gondar, Strathmore University, Chancellor College, and Nelson Mandela University have successfully built trust-based relationships with healthcare clinics in resource constrained environments in Ethiopia, Kenya, Malawi, and South Africa. They are providing digital literacy training and supporting infrastructure development with the objective of supporting technology adoption to strengthen primary healthcare delivery.

The contribution of our active and committed volunteers is widely recognized. The sheer scale of opportunities to address the U.N. Sustainable Development Goals (SDGs) at home and abroad requires us to accelerate expanding our programs and continue growing our global footprint. The IEEE Society on Social Implications of Technology (SSIT) is truly demonstrating leadership in supporting operationalization of the IEEE tagline, “Advancing Technology for Humanity.”

Call for Volunteers
I invite you to help SSIT continue to make a difference, particularly in the areas of sustainable development and humanitarian technology. Volunteer opportunities include:
■ Serving your local community through an existing or new SSIT Chapter.
■ Contributing to the work of SSIT’s committees (including our Standards committee).
■ Volunteering to host SSIT Distinguished Lecturers.
■ Submitting articles or review submissions to IEEE Technology and Society Magazine.

(continued on page 21)

Digital Object Identifier 10.1109/MTS.2018.2804961
Date of publication: 2 March 2018



Volume 37, Number 1, March 2018

On the cover: Consignment robot picking pharmacy packages from storage shelf; hospital pharmacy of Universitätsklinik Münster. Wikimedia/UKM Elisabeth Deiters-Keul.

FEATURES
SPECIAL ISSUE ON SOCIAL IMPLICATIONS OF ROBOTICS AND AI

19 Guest Editorial: Robots and Socio-Ethical Implications
   Katina Michael, Diana Bowman, Meg Leta Jones, and Ramona Pringle

22 Humanizing Human-Robot Interaction*
   Alessandra Sciutti, Martina Mara, Vincenzo Tagliasco, and Giulio Sandini

30 Robot-Enhanced Therapy for Children with Autism (DREAM): A Social Model of Autism*
   Kathleen Richardson, Mark Coeckelbergh, Kutoma Wakunuma, Erik Billing, Tom Ziemke, Pablo Gómez, Bram Vanderborght, and Tony Belpaeme

40 Automating Sciences: Philosophical and Social Dimensions*
   Ross D. King, Vlad Schuler Costa, Chris Mellingwood, and Larisa N. Soldatova

47 The Technological Fix as Social Cure-All: Origins and Implications*
   Sean F. Johnston

55 Autonomous Weapons Systems: Failing the Principle of Discrimination*
   Ariel Guersenzvaig

62 The Safety of Autonomous Vehicles: Lessons from Philosophy of Science*
   Daniel J. Hicks

70 Socio-Economic and Legal Impact of Autonomous Robotics and AI Entities: The RAiLE© Project*
   Morgan M. Broman and Pamela Finckenberg-Broman

80 A Drone by Any Other Name*
   Lisa M. PytlikZillig, Brittany Duncan, Sebastian Elbaum, and Carrick Detweiler

* Refereed articles.

Digital Object Identifier 10.1109/MTS.2018.2795088


Date of publication: 2 March 2018



Departments

President’s Message
1 SSIT and Sustainable Development
  Paul M. Cunningham

Editorial
5 One at a Time, and All at Once
  Jeremy Pitt

Book Reviews
8 Drone Warfare
  Jacob Ossar
11 Drowning in Information, Starving for Knowledge
  Abdullah Shahid and Ningzi Li

Industry View
13 The Internet of Moving Things
  Rui Costa

Commentary
15 Connected Vehicle Security Vulnerabilities
  Yoshiyasu Takefuji

Last Word
92 500 Years Later: Doors and Disputations
  Christine Perakslis

This March 2018 special issue of IEEE Technology and Society Magazine (T&S), focused on “Social Implications of Robotics and AI,” is published in cooperation with IEEE Robotics and Automation Magazine (RAM). RAM is also publishing a special issue in March 2018 focused on socio-ethical approaches to robotics development. Members of the IEEE Society on Social Implications of Technology (SSIT) will receive complimentary electronic access to the March 2018 issue of RAM, and members of the IEEE Society on Robotics and Automation will receive complimentary access to this March 2018 issue of T&S Magazine. Subscribers will receive notification via email. We hope you enjoy these special issues and your complimentary access!




ieeessit.org

EDITOR
Jeremy Pitt
Department of Electrical and Electronic Engineering
Imperial College London
London SW7 2BT, U.K.
+44 20 7594 6316
j.pitt@imperial.ac.uk

ASSOCIATE EDITORS
Roba Abbas, University of Wollongong
Diana Bowman, Arizona State University
Ada Diaconescu, Telecom ParisTech
Khanjan Mehta, Penn State University
Jennifer Trelewicz, Deutsche Bank Technology Center
Agnieszka Rychwalska, Warsaw University

BOOK REVIEW EDITOR
A. David Wunsch
Dept. of Electrical and Computer Engineering
University of Massachusetts Lowell
Lowell, MA 01854
david_wunsch@uml.edu

MANAGING EDITOR
Terri Bookman
P.O. Box 7465
Princeton, NJ 08543-7465
+1 609 462 0642
t.bookman@ieee.org

COVER DESIGN
David Beverage Graphic Design

SSIT E-NEWSLETTER
Heather Love (Editor), Heather.Love@usd.edu

IEEE MAGAZINES
Jessica Welsh, Managing Editor
Janet Dudar, Senior Art Director
Gail Schnitzer, Associate Art Director

T&S EDITORIAL BOARD
John Impagliazzo (Publications Chair), Professor Emeritus, Hofstra University, John.Impagliazzo@hofstra.edu
Jeremy Pitt (Editor)
Ronald Kline, Cornell University, rrk1@cornell.edu
Keith W. Miller, University of Missouri, St. Louis, millerkei@umsl.edu
Katina Michael, University of Wollongong, katina@uow.au.edu

SSIT BOARD OF GOVERNORS (partial listing)

PRESIDENT
Paul Cunningham, International Information Management Corporation, pcunningham@ieee.org

SECRETARY
Lew Terman, South Salem, NY, l.terman@ieee.org

TREASURER
Howard Wolfman, Lumispec Consulting, h.wolfman@ieee.org

PAST PRESIDENT
Greg Adamson, University of Melbourne, +61 423 783 527, g.adamson@ieee.org

ELECTED MEMBERS-AT-LARGE
2018: P. Cunningham, HK Anasuya Devi, B. Pasik-Duncan
2019: M. Cardinale, R. Marimuthu, H. Wolfman
2020: C. Hughes, H. Love, J. Pearlman

DIRECTOR, IEEE DIVISION VI
John Y. Hung, j.y.hung@ieee.org

For the full current SSIT roster listing, visit: http://rosters.ieee.org/home.html

IEEE Technology and Society Magazine, published by the IEEE Society on Social Implications of Technology, serves to facilitate understanding of the complex interaction between technology, science, and society; its impact on individuals and society in general; professional and social responsibility in the practice of engineering, science, and technology; and open discussion on the resulting issues.

A PUBLICATION OF THE IEEE SOCIETY ON SOCIAL IMPLICATIONS OF TECHNOLOGY

IEEE Technology and Society Magazine (ISSN 0278-0097) (ITSMDC) is published quarterly by The Institute of Electrical and Electronics Engineers, Inc. IEEE Headquarters: 3 Park Avenue, 17th Floor, New York, NY 10016-5997. IEEE Service Center: 445 Hoes Lane, Piscataway, NJ 08855-1331 U.S.A. $33.00 per year (digital copy included in Society fee) for each member of the IEEE Society on Social Implications of Technology. Print subscriptions: additional $120.00 per year for each member of the IEEE Society on Social Implications of Technology (SIT). Non-member subscription prices available on request. Single copy prices: members $30.00; nonmembers $109.00. Copyright and reprint permissions: abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. Copyright law for private use of patrons: (1) those post-1977 articles that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 U.S.A.; (2) pre-1978 articles without fee. For other copying, reprint, or republication permission, write to the IEEE Copyright and Permissions Dept., IEEE Service Center. All rights reserved. Copyright ©2018 by the Institute of Electrical and Electronics Engineers, Inc. Printed in U.S.A. Periodicals postage paid at New York, NY and at additional mailing offices. Postmaster: Send address changes to IEEE Technology and Society Magazine, IEEE, 445 Hoes Lane, Piscataway, NJ 08854-4150 U.S.A.

Promoting Sustainable Forestry: SFI-01681

Digital Object Identifier 10.1109/MTS.2018.2795090
EDITORIAL

Jeremy Pitt

One at a Time, and All at Once

The Benefits of Collective Action Can Begin with a Single Deed

Wicked Problems and Collective Action
A journey of a thousand miles begins with a single step — yes, yes, we know; we’ve all had the inspirational-saying desk calendar with that particular aphorism included on some auspicious day, National Clothes Peg Day, or some such. But while familiarity might have drained from the saying some of its ability to inspire, that does not make it any less true. And its truth becomes increasingly apparent when there are so-called wicked problems — a social problem whose complexity and continually changing requirements are such that there is not necessarily an obvious terminating condition, or even a consistent set of criteria by which to evaluate such a condition if it even existed [1]. These problems tend to have significant economic, environmental, or political dimensions; require the coordinated and concentrated effort of a “large” number of people; and may require significant efforts of persuasion to convince those people that: firstly, a problem even exists, and secondly, that they can contribute meaningfully to its “solution.” Such efforts are also likely to encounter a number of strong cognitive biases and economic (dis)incentives. These range from confirmation bias, where people are more likely to reject data that conflicts with their axioms than rethink their axioms [2], to transaction costs and future discounting, where the costs incurred by some individuals in the present are greater than their short-term benefits, and indeed long-term benefits when most benefits are accrued by others in the future [3].
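To see why the discounting argument bites, consider one standard formalization (our illustrative gloss on the family of models surveyed in [3], not a formula from the editorial). The net present value to individual $i$ of acting now is

$$V_i = -c_i + \sum_{t=1}^{T} \delta^{t}\, b_t^{(i)}, \qquad 0 < \delta < 1,$$

where $c_i$ is the cost $i$ bears today, $b_t^{(i)}$ is the share of the benefit that returns to $i$ at time $t$, and $\delta$ is the discount factor. If most benefits accrue to others, or only at large $t$ where $\delta^t$ is small, then $V_i < 0$ for every potential first mover even when the undiscounted collective benefit comfortably exceeds the total cost, which is precisely why nobody takes the first step.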
But just as the thousand-mile journey seems daunting at the outset, it is still necessary to take that first step. Then so it is with wicked problems. It is necessary to start somehow, even if, to push the analogy a bit further, you’ve got no map, no knowledge of the destination, and no milometer to measure the distance that has been covered. But this can sometimes be the essence of collective action for addressing wicked problems: sometimes human behavior defies top-down direction and even nudge [4], and begins instead with a single initiating event and “snowballs” from there.

Parkrun
A fascinating example of this is parkrun (www.parkrun.com). As its name suggests, parkrun is a weekly timed 5k run in a park, starting at 9:00 a.m. every Saturday (or 9:30 in Scotland), open to all, of whatever age or ability, and free at the “point of access,” that access point usually being the local municipal park. Parkrun was a U.K. initiative originally called Bushy Park Time Trial in 2004, but at the time of writing (December 2017) it has morphed into a worldwide organization, with events in more than 1300 parks in over a dozen countries, with 2.5 million registered runners worldwide. The total distance covered by all the runners is reckoned to be about 150 million km (basically a gentle jog from the earth to the sun). The founder, Paul Sinton-Hewitt, was awarded a U.K. honor for his services to public health, the benefits of even a once weekly cardiovascular workout being well-known.

The transformation from single initiating event — Bushy Park Time Trial — to global collective action addressing, perhaps indirectly, a wicked problem (public health, by addressing risks associated with physical inactivity such as obesity, high blood pressure, heart disease, and diabetes) can reasonably be explained by theories of dynamic social psychology [5]. Dynamic social psychology proposes that in order to understand how people organize themselves in groups and communities, and in particular how to bring about behavioral change, it is necessary to think in terms of complex systems. This requires more than an understanding of the individual mind, on which psychology had tended to focus. Instead, it is necessary to focus on two aspects: the social and the dynamical; i.e., from the social aspect, considering the collective beliefs, micro-level behaviors and interactions between people; and from the dynamic aspect, considering the system not as an object with state transitions, but as a process, or set of interacting processes, like an ecosystem. For example, the Bubble Theory of Social Change [6] showed how social change can be brought about by concentrating on changing fragments of social networks (bubbles) rather than isolated individuals. In this theory, an initial bubble forms or is initiated, which others join, learn from and then leave to initiate other bubbles, each one largely autonomous but still associated with the original through the existing social network. Consequently, information and innovation continues to spread throughout the entire “bubble system.”
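The bubble dynamic is easy to caricature in code. The following toy simulation is a minimal sketch of our own (the network shape, probabilities, and sizes are made up, and it is not taken from [6]); it shows how a single initiating event can snowball through network fragments while spawning new, largely autonomous bubbles:

```python
# Toy simulation of the Bubble Theory dynamic: change spreads through
# fragments ("bubbles") of a social network, not isolated individuals.
# All parameters below are illustrative only.
import random

random.seed(42)
N = 200
# A toy social network: each person knows six random others.
neighbors = {i: random.sample(range(N), 6) for i in range(N)}

active = {0}        # a single initiating event: person 0 acts
bubbles = [{0}]     # the first bubble

for step in range(10):
    joined = set()
    for person in active:
        for friend in neighbors[person]:
            if friend not in active and random.random() < 0.3:
                joined.add(friend)       # a friend joins an existing bubble
    for person in joined:
        if random.random() < 0.1:        # some joiners seed new bubbles
            bubbles.append({person})
    active |= joined
    print(f"step {step}: {len(active)} active, {len(bubbles)} bubbles")
```

Even with these arbitrary numbers, the qualitative behavior matches the theory: participation grows in bursts as bubbles form, rather than by uniform, top-down diffusion.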
The history of parkrun (as documented in [7]) conforms to this theory. As a result, every parkrun is not only technically accessible to any registered runner, but the common “heritage” means that the experience of any parkrun in whatever location is identifiably the same (temporally, structurally, procedurally), although with minor variation (and not just the course and the terrain: for example, at the Dunedin parkrun in New Zealand, if the run falls on the 22nd of any month, the tradition is for the male runners to turn up in a ballet skirt, or tutu).

However, the success and the sustainability of each individual parkrun is arguably due to three factors: firstly, the correlation with another social science theory, that of the self-governing institution proposed by Nobel Laureate Elinor Ostrom [8]; secondly, leadership, teamwork, and volunteering in the collective interest; and thirdly, the judicious use of technology. There is not space here to explore the first factor, except to observe that although not a common-pool resource problem, some of Ostrom’s institutional design principles for sustainable common-pool resource management can be identified in parkrun’s structures and procedures.

For the second factor, each parkrun needs a run director prepared to take responsibility for the event, a strong support network, and volunteers to perform certain jobs each week, such as barcode scanning. This is where the third factor comes into play: once registered, each runner receives an athlete ID and a barcode, which needs to be printed out and brought to each run. As each runner crosses the finish line, the time is taken and s/he is given a token that has its own barcode. Another volunteer then scans both the athlete ID and the token ID, and at the end of the event all the data is uploaded to a central database. The runners can then receive a text or email informing them of their “official” time for the run, and can also go online to view all the results of the parkrun (indeed any parkrun) and get statistics of their own performance over time. (This can sometimes be unexpectedly helpful. One friend, whose illness persisted for months even after several supposed treatments, took her series of deteriorating times to her doctor, which led to an in-depth examination, correct diagnosis, and curative treatment.)
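The timing workflow just described is essentially a join of two scan streams, and a minimal sketch makes the data model plain (the barcodes and data shapes below are our assumptions for illustration; parkrun’s actual systems are not documented here):

```python
# Sketch of the parkrun results pipeline: finish times are recorded in
# finishing order, position tokens are handed out in the same order, and
# volunteers scan (athlete barcode, token) pairs; joining the two streams
# yields each runner's official time. IDs below are made up.
from datetime import timedelta

# Stopwatch times at the finish line, in finishing order.
finish_times = [timedelta(minutes=17, seconds=2),
                timedelta(minutes=19, seconds=45),
                timedelta(minutes=22, seconds=8)]

# Token n is handed to the n-th finisher.
token_to_time = {pos + 1: t for pos, t in enumerate(finish_times)}

# Volunteer scans: (athlete barcode, token number).
scans = [("A123456", 2), ("A654321", 1), ("A111111", 3)]

# Join scans with times: this is the table uploaded to the central
# database and mailed back to runners as their "official" time.
results = {athlete: token_to_time[token] for athlete, token in scans}

for athlete, t in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{athlete}: official time {t}")
```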
Technology and Society
It is worth noting that there is a fairly minimal obligation on technological availability in order to participate in parkrun: access to a computer to register, receive email, and browse the results; and access to a printer to output the barcode for the athlete ID. This is in stark contrast to, say, virtual payment systems. The difference between cash and electronic money is that the former has zero cost for participants to enter the market, whereas the latter requires ownership of a device, which also requires a contract with a service provider, and may also entail purchasing insurance and regular upgrades. It is also worth noting that parkrun has accumulated a vast amount of data: access to the anonymized data is granted to medical researchers, but it is not used in the same way that some large IT companies use the data that they collect through use of their platforms and services.

It follows that more profound questions beyond the functional and non-functional requirements of systems need to be addressed. In particular, there are fundamental questions such as: how do we achieve pro-social outcomes through collective action that reflects our shared values? But it also raises a number of secondary issues, such as: to what extent must citizens pay to participate in the digital transformation? To what extent should they be obligated (by the state) to participate in the digital transformation? For example, there is already discussion of the “digital divide,” whereby poorer areas and older people are excluded by the shift from an analogue world to a digital one. This could be extended to effective disenfranchisement if electronic voting were to be the only means by which to cast a vote.

In the U.K., entitlement to benefits (for disability, unemployment, or housing) is predicated on access to a computer, which for the less well off without a computer at home can entail a trip to a library (yet libraries themselves are being closed due to lack of funding), and an ability to fill in and return long and complicated online forms.

Therefore this discussion of parkrun and its underlying technology, as an exemplar of collective action in order to address a wicked (social) problem, while uncovering several secondary issues, is intended to highlight the distinctive qualities of Technology and Society Magazine, and the importance of the magazine as a focal point for commenting on, analyzing, and understanding such phenomena as the digital transformation, its technology, and its impact on society. These qualities include (although not necessarily exhaustively) the following:
■ Interleaving: T&S studies not just the impact of technology on society, nor just the need of society for a particular technological solution, but also the critical interleaving, in particular where technology interleaves with ethics, morality, and qualitative human values.
■ Responsibility: too often engineers and scientists plead the “Oppenheimer defense” — they are only developing technology, the use to which it is put is not their concern. The magazine places an emphasis on researchers and innovators asking themselves not only “can we do this?” but also “should we do this?” Educators and professionals alike should be well aware of the precautionary principle.
■ Advocacy and scholarship: the magazine offers a unique opportunity for researchers to advocate original positions or opinions, analysis, or argument of a particular social phenomenon caused or affected by technology, demonstrating scholarship in the form of evidence to support the argumentation, e.g., surveys, technological review, interviews, numerical data, etc.
■ Inter-disciplinarity: almost explicitly in its title, this magazine provides a forum for analysis of issues that by their very nature are interdisciplinary, trans-disciplinary, and cross-cutting.
■ Positive thinking: the magazine plays a critical role in contributing to the discussion, with articles highlighting innovative approaches to technological solutions, for example by design guidelines, policy recommendations, and conceptual or theoretical frameworks.

Concluding (Personal) Remarks
It is a great honor and privilege to be the successor as Editor in Chief to the awesome and wonderful Professor Katina Michael. I am extremely grateful to the IEEE search team, and to Katina, John Impagliazzo, and Keith Miller for influential discussions. I very much look forward to carrying on the work of many talented and dedicated people that has built T&S into what it has become today, and the opportunity “to stand on the shoulders of giants.”

I am also grateful to, and extremely pleased to welcome, the new (or renewed) members of the Associate Editor Board: Roba Abbas (Wollongong University), Diana Bowman (Arizona State University), Ada Diaconescu (Telecom ParisTech), Khanjan Mehta (Lehigh University), Jennifer Trelewicz (Deutsche Bank), and Agnieszka Rychwalska (Warsaw University). Between them they cover a great deal of interdisciplinary ground, including IT, law, health, design, psychology, complex systems, and engineering, with backgrounds in academia, entrepreneurialism, and industry. I will also get my thanks in advance to people in various important editorial roles (especially Terri Bookman) and the “back office” team at Imperial College (Joan O’Brien, Kristina Milanovic, and David Burth Kurka), and similar advanced thanks to all those who will act as reviewers.

It is also a pleasure to welcome and directly address the readership of T&S Magazine. I have given seminars where I have said, indeed I slipped it into a paper once, that “if your only tool is an Ostrom-shaped hammer, then every problem is a collective action shaped nail.” On that basis, T&S Magazine is also a collective action problem, and we are trying to have transformative impact by debating and promoting pro-social and beneficial outcomes from the interleaving of technology and society. So, the benefits of collective action really can begin with a single deed… your deed to be precise, and whatever contribution you are able to make to the magazine, it would be very well received.

References
[1] H. Rittel and M. Webber, “Dilemmas in a general theory of planning,” Policy Sciences, vol. 4, pp. 155–169, 1973.
[2] C. Tavris and E. Aronson, Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts. Harcourt, 2007.
[3] J. Doyle, “Survey of time preference, delay discounting models,” Judgment and Decision Making, vol. 8, no. 2, pp. 116–135, 2013.
[4] R. Thaler and C. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale Univ. Press, 2008.
[5] R. Vallacher and A. Nowak, Dynamical Systems in Social Psychology. Elsevier, 1994.
[6] R. Praszkier and A. Nowak, Social Entrepreneurship: Theory and Practice. Cambridge, U.K.: Cambridge Univ. Press, 2011.
[7] D. Bourne, parkrun: much more than just a run in the park. Chequered Flag Publishing, 2014.
[8] E. Ostrom, Governing the Commons. Cambridge, U.K.: Cambridge Univ. Press, 1990.

Digital Object Identifier 10.1109/MTS.2018.2795091
Date of publication: 2 March 2018


Book Reviews
Jacob Ossar

Drone Warfare
Drone Warfare (War and Conflict in the Modern World)
By John Kaag and Sarah Kreps. Malden, MA: Polity Press, 2014, 188 pages.

Ever since the discovery of the rock, military technology has made it possible to kill from a distance. While rocks remain in use, millennia of design iterations have brought us the laser-guided AGM-114 Hellfire missile, which can be fired by remote control from an MQ-9 Reaper drone by an operator as far away as 1150 miles. Already, thousands of people have been killed in this way. Drone Warfare, by philosopher John Kaag and political scientist Sarah Kreps, invites us to consider the implications of this for politics and domestic accountability, for international law, and for ethics.

Despite its title, Drone Warfare is not a comprehensive treatment of drone warfare. There is very little discussion of the technical aspects of drones beyond what is necessary in order to understand their uses. With that information briefly established, the book focuses squarely on normative questions about drone warfare. Moreover, the book’s discussion addresses these questions almost exclusively with regard to the United States, the largest and most prolific user of drones. The U.S. is also, the authors hope, in the best position to establish norms and best practices regarding drones by virtue of its military might and international stature.

While drones are still a developing technology, the drone arsenal possessed by the United States already encompasses a wide variety of drone types with a correspondingly extensive range of potential missions. The authors have wisely narrowed their focus to perhaps the most troubling mission, the one most fraught with legal, political, and moral implications: the use of drones to kill suspected terrorists. These attacks can be “targeted killings,” aimed at a known individual, or “signature strikes,” aimed at someone whose behavior fits a “signature” or profile of suspected terrorists.

A recurring frustration I encountered with the book (one no doubt shared by its authors!) was the paucity of detailed, accurate data about drone use. While much of the book’s discussion is therefore unavoidably speculative, the authors do their best to lay out what facts are available about drone strikes on suspected terrorists. The U.S. Department of Defense carries out drone strikes in combat zones (Afghanistan, Iraq, and Libya). Outside of combat zones (chiefly in Pakistan, Yemen, and Somalia) these take place under the aegis of the CIA. The U.S. began using drones to kill suspected terrorists in the aftermath of the September 11th attacks when Congress enacted the Authorization for the Use of Military Force (AUMF), handing the George W. Bush administration extremely open-ended authority to target those who “planned, authorized, committed, or aided the terrorist attacks.” The Obama administration continued and expanded upon the drone policies of its predecessor, conducting more than one hundred drone strikes in Afghanistan alone in 2008 and more still in the following years. Drone strikes outside of combat zones also increased markedly under Obama. Some of those killed have been American citizens, most famously Anwar Al-Awlaki. The AUMF remains in effect, and is now being used to justify operations against groups that did not even exist when it was passed.

With some basic facts briefly established, the authors turn to normative matters. First up for consideration is how drones, which enable governments to engage in covert warfare with minimal risk of loss of life on the part of their service members, undermine political accountability. According to one influential strand of democratic theory (which the authors trace back to Kant’s essay on perpetual peace), the high cost of war


in blood and treasure should make leaders who must answer to their citizenry wary of military adventurism. When, thanks to drone use, soldiers rarely come home in body bags, members of the public are not often prompted to care about or even notice military activity half a world away, even if it is ostensibly carried out in their name. Meanwhile, voting against counter-terrorism measures exposes legislators to blame if a terrorist attack happens, but if attacks fail to happen they are rarely in a position to take credit. Thus, neither the public at large nor their elected representatives have much incentive to rein in drone use. Things are no better in the case of the judicial branch, as cases brought against the government after a drone strike has already taken place are unlikely to succeed given the deference courts have traditionally given to national security concerns. Pre-approving strikes would also be problematic. Careful legal deliberations do not mesh well with targeted killings where the window of opportunity to make the attack may be open for only a few minutes. Special drone courts to pre-approve strikes, modeled on Foreign Intelligence Surveillance Act (FISA) courts, are another possibility, but these run the risk of becoming little more than a rubber stamp. (Courts approve virtually all FISA requests.)

The most tightly-argued section of the book concerns international law. One key issue is whether, under the United Nations Charter, U.S. commitments specifying when a recourse to military force is justified (jus ad bellum) are consistent with conducting targeted killings in places where the U.S. is not officially at war. Attempts to justify targeted killings under international law revolve around the idea that they constitute anticipatory self-defense against an imminent threat of terrorist attacks. The AUMF provides some additional support, serving as an analogue to a declaration of war against terrorist groups. The authors reject this line of argument, pointing out, first of all, that terrorist groups like Al-Qaeda are non-state actors, and hence not the sort of entities upon which international law sanctions waging war. Second, even in the case of state-sponsored terror groups, the U.S. would still need to be continuously at war with the sponsor to justify military action. Finally, very few actions against terrorists can plausibly be described as anticipatory self-defense against an imminent threat.

The second main question is whether drone use violates the international laws of war concerning the conduct of war once initiated (jus in bello). The two key principles here are distinction and proportionality. “Distinction” is a matter of distinguishing between civilians and military combatants. The signal advantage of drones from the U.S. point of view is that, unlike the indiscriminate destruction wreaked by strategic bombing or artillery barrages, drones are technically capable of targeting even a single individual with a fair degree of precision, thus helping to avoid civilian casualties. The authors argue forcefully, though, that there is a vast gulf between this technical capacity and actually choosing legitimate targets. It is the nature of asymmetrical warfare that potential targets are not always actively engaged in warlike acts, which in any case exist along a continuum from minimal involvement (passing information to militants, say) to wearing a combatant’s uniform and firing on U.S. troops. No amount of precision shooting can take the place of human judgment in selecting legitimate targets in the first place. Even if, as the authors note, more precise strikes are preferable to indiscriminate strategic bombing, that still does not in itself mean the more narrowly-targeted strikes are zeroing in on a target they are justified in hitting.

The other aspect of jus in bello is “proportionality”: whether expected military gains are excessive in light of incidental but foreseeable harm to civilians that may result. The authors helpfully frame their discussion here by looking at the question of whether targeted killings are, in the long run, strategically useful in combatting terrorism. The goal of counter-insurgency operations is ultimately to win hearts and minds. As we saw in Vietnam, body counts are not a good metric for this. Furthermore, the very effort to minimize U.S. casualties by using drones instead of putting troops in harm’s way may cause the U.S. to lose touch with civilian populations and be seen as an oppressive occupier rather than a potential partner in rooting out militants. Thus, while targeted killings are undeniably effective tactically, the authors argue, it is likely that they are worse than useless strategically.

As the authors themselves occasionally note, many of the issues they raise about domestic politics and international law are not uniquely


attributable to drones. Their critique broadly implicates U.S. counter-terrorism policy in the 21st century more broadly. Nevertheless, even if drones are more a symptom than a cause, they are a symptom that aggravates the severity of the underlying disease and weakens the body politic’s resistance to it.

The final chapter before the book’s conclusion concerns the ethical implications of targeted killings and signature strikes. While the chapter opens with the claim that “even in a world fraught with ambiguity, there are certain acts . . . that should be prohibited,” the authors self-consciously avoid arguing about which uses of drones, specifically, should be morally prohibited. They stress that evaluating such questions requires the use of human judgment, and that therefore “we did not write this section on the ethics of war and peace and the moral hazard of military technologies in a manner appropriate to robots: input decision procedure, output decision, and correlated action.” This is fair enough, but while the authors are at pains to point out an urgent need for ethical reflection, they do not do much to identify even which qualitative factors would have salience in making such determinations. Instead, the discussion centers largely on more general issues about modern technology than on matters specific to drones and targeted killings.

For example, the authors claim that drone operators have an unprecedented amount of “leisure” in Thomas Hobbes’ rather unusual sense of “freedom from constant threat of death.” They argue that operators should use this psychological space to reflect on the moral hazards involved in what they do, as they can kill without suffering the usual consequences that attend doing so. But a few pages later the authors mention how nearly 30% of drone operators experience burnout, which the military defines as “an existential crisis.” Killing a named individual whom you can observe in real time is much more intimate and fraught than firing an artillery shell at or dropping a bomb on anonymous grid coordinates. It seems likely that, if anything, operating a drone engenders more stress and less opportunity for calm reflection than many other well-established military technologies.

This broadening of the argument to issues that are not exclusive to drones recurs in other parts of the chapter, despite an ostensibly narrow focus on targeted killings. The authors, invoking Marcuse’s critique of technological rationality, assert that before we can grapple with specific issues about drones, we must take account of the wide-ranging effects of modern technology on the way we think about, among other things, warfare. Another section draws on Hannah Arendt’s concept of the “banality of evil” to argue that citizens who allow undeclared warfare undertaken in their name to become normalized are complicit in it in ways that they rarely acknowledge. Topics like these are important, but doing more than gesturing at them seems beyond the ambit the authors set in the rest of the book.

We do get a relatively detailed examination of the somewhat outré question of whether using drones might, in some circumstances, be morally obligatory (the authors, unsurprisingly, make a case for “no”), but almost no examination of the far more pressing question of the circumstances, if any, in which targeted killings and/or signature strikes might be morally permissible. A blanket prohibition on assassinations would not be a morally eccentric view, but the authors stress that they are pragmatists and not absolutists, which at least strongly suggests there are at least some cases in which they would countenance the kind of drone strikes they take as their subject in this book. With so much about drone warfare shrouded in secrecy, a real-life case is probably too much to ask for, but I would like to have seen if there was even an idealized scenario in which the authors would be willing to countenance a targeted killing. (Perhaps on a known terrorist about to trigger a nuclear bomb?)

The book concludes with a slate of sensible recommendations: re-evaluating the AUMF, working towards international agreements about best practices for using drones, and limiting the proliferation of armed drones. Unfortunately, U.S. leadership in these areas seems less likely than it did when the book was published in 2014. This does not, however, diminish the timeliness and urgency of this stimulating book. Kreps and Kaag do a good job of highlighting key issues, even if they are sometimes less forthcoming with firm conclusions than one might like.

Reviewer Information
Jacob Ossar received the Ph.D. degree in philosophy from Johns Hopkins University and has taught ethics to engineering/science students at the Stevens Institute of Technology. Email: jacob.ossar@gmail.com.

Digital Object Identifier 10.1109/MTS.2018.2804962
Date of publication: 2 March 2018


Abdullah Shahid and Ningzi Li

Drowning in Information,
Starving for Knowledge
Information Overload Paradox: Drowning in Information, Starving for Knowledge.
By L. V. Orman. Seattle, WA: Create Space Independent Publishing, 2016, 190 pages.

Information overload is not a new phenomenon, but a part and parcel of modern life. In this vein, Georg Simmel earlier suggested in The Metropolis and Mental Life [1] that overwhelming stimuli transform the psyche of urban individuals and help them develop a blasé attitude. Social scientists have also sought to understand “information overload,” its determinants, consequences, and remedies ever since. An “information overload” keyword search in Google Books today yields 300 000+ hits! So is there anything new in Orman’s book?

Orman poses “information overload” as a paradox and carries out the daunting task of drawing from a large body of literature to give us three mechanisms through which such paradox arises. The paradox is that technologies help us know more, but in the process, we know less. In a Simmelian world, this is not an entirely novel proposition. However, Orman’s simplification of the problem along with pertinent evidence makes the mechanisms a compelling narrative. As we are moving fast towards “ubiquitous computing,” Orman’s effort is timely. In summary, Orman’s mechanisms are as follows:
1) Substitution: As people substitute “cheap for expensive,” “complex for simple,” and “formal for informal,” a large quantity of information drives out high-quality information.
2) Obsolescence: Changes in technologies often require organizational adaptations and specialization. Consequently, old but useful information, methods, and practices get lost.
3) Competition: Information overload makes information a competitive weapon. Social actors competing for limited resources might mislead each other through deliberate misinformation.

Although it is not fully clear how the three mechanisms fit together and where the boundary of their explanatory power is drawn, Orman does a great job in illustrating them individually. He also makes the problem of information overload appear manageable and solvable. However, when it comes to solutions, Orman leaves us with some contradictions, and sidesteps some existing solutions as well. We will examine these solutions and propose to consider the interaction of the three mechanisms.

First, he prescribes “liberalism” and “protectionism” at the same time without addressing why apparently inefficient organizations may monopolize social lives. According to Orman, we need to allow institutions to compete so we do not end up with quick, irreversible substitutions; whereas, to prevent obsolescence, we need to practice “cultural protectionism.” He argues that organizations such as state, family, and church are monopolies with breeding grounds for irreversible substitutions. Even if these organizations adopt inefficient practices, the practices become almost impossible to change. So, we need competition, such that practices come about through small-scale experimentations. While Orman’s illustrations are appealing, he does not note why such organizations survive despite being inefficient monopolies. The causes are, of course, multifaceted, and have been subject to debate. For instance, in a recent provocative manuscript, “Why Nations Fail,” Acemoglu and Robinson show that states are inefficient because extractive political institutions in them allow some people an unequal opportunity to usurp resources and power [2]. So, if Orman is to prevent states or nations from adopting inefficient, irreversible practices, he should address the causes (e.g., Acemoglu and Robinson’s extractive political institutions).

Second, Orman suggests that use of “trust partners” can save people from misinformation, but does not elaborate on the downside of such


intermediaries. Here, trust partners are various information intermediaries that are rated and trusted by their users on an ongoing basis. We have many trust partners around us. For instance, websites such as TripAdvisor and Expedia are partners for hotel information; credit rating agencies are partners for credit information; auditors are partners for reliable financial statements. In a simple version of Orman’s framework, any important information will flow through such trust partners and, in turn, the users will rate the partners. Thus, individuals will eventually have trust partners in every walk of their lives for reliable information. Even though such social design may have apparent merits, it is not without adverse consequences. One simple reason is that trust partners compete among themselves. Given that information quality is difficult to ascertain, trust partners may adopt misinformation to remain competitive. Indeed, the financial crisis of 2008 shows us that such competition among credit rating agencies facilitated large-scale commercialization of sub-prime mortgages [3]. Moreover, given that information products have large economies of scale, what will prevent the trust partners from becoming monopolies?

Finally, Orman underemphasizes the role of existing social relations and institutions (e.g., families, friends, group memberships, kinships, and status) in addressing issues of information quality and trust. In his pioneering work on the role of religion in capitalism, Max Weber shows that sect memberships, a form of voluntary relation, became a stamp of trust for borrowing and lending transactions in the U.S. financial markets (see “Churches and Sects” in [4]). Over the last five decades there has been a multitude of evidence to prove various aspects of this point. Thus, a pertinent follow-up question to Orman would be to find out whether the “information overload paradox” leads to changes in the nature of these relations and institutions. For example, “has information overload changed the way individuals form and trust family, friends, and groups?”

As discussed above, Orman lays out a simple, elegant framework to understand the paradox; however, interested research communities may find it useful to examine the ways in which the mechanisms (“substitution,” “obsolescence,” and “competition”) interact. We use an example to illustrate this point. Assume an individual has developed a trust relationship with an online, interactive news community as a reliable source for interpreting social events. This apparently saves the individual from using low quality information instead of high quality information, i.e., the “substitution” problem. But the danger lies in the potential that repeated interaction among the community members would give rise to a clan-like culture, leading the whole community to think and act alike. Such a phenomenon has often been described as the “echo-chamber effect” [5]. Moreover, there is ample evidence that human social lives (ideologies, views, living conditions) correlate with similar others, fortifying the “echo-chamber” effect [6]–[9]. Formed for a narrow issue or purpose, trust partners or communities could thus develop a tunnel vision among their members, leading to loss of knowledge and practices in various other areas of life. As a result, trust communities, those with the echo-chamber effect, may become a source of “obsolescence” as well. Furthermore, echo chambers can develop an identity of their own and become a source of fierce “competition” and misinformation. Based on the reasoning above, we suggest a simultaneous consideration of Orman’s three mechanisms to understand the implications of the information overload paradox and its full consequences.

Reviewer Information
Abdullah Shahid is a Ph.D. student in the Department of Sociology at Cornell University, Ithaca, NY. Email: ais58@cornell.edu.
Ningzi Li is a Ph.D. student in the Department of Sociology at Cornell University, Ithaca, NY. Email: nl323@cornell.edu.

References
[1] G. Simmel, The Metropolis and Mental Life. Blackwell, 1903; http://www.blackwellpublishing.com/content/BPL_Images/Content_store/Sample_chapter/0631225137/Bridge.pdf.
[2] D. Acemoglu and J.A. Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Crown, 2012.
[3] B. Becker and T. Milbourn, “How did increased competition affect credit ratings?,” Harvard Bus. School, Working Pap., 2010; http://www.hbs.edu/faculty/Publication%20Files/09-051_13e0275c-a3a4-48bd-a86a-2324d5d70b57.pdf.
[4] M. Weber, The Protestant Ethic and the “Spirit” of Capitalism. New York, NY: Penguin, [1905] 2002.
[5] N. DiFonzo, “The echo-chamber effect,” New York Times, Apr. 22, 2011; https://www.nytimes.com/roomfordebate/2011/04/21/barack-obama-and-the-psychology-of-the-birther-myth/the-echo-chamber-effect.
[6] B. Bishop, The Big Sort: Why the Clustering of Like-Minded America is Tearing Us Apart. First Mariner, 2009.
[7] T.C. Schelling, Micromotives and Macrobehavior. Norton, 2006.
[8] F.B. Shi, Y. Shi, F. Dokshin, J. Evans, and M. Macy, “Millions of online book co-purchases reveal the politicization and polarization of U.S. sciences,” Nature Human Behaviour, Apr. 3, 2017; https://www.nature.com/articles/s41562-017-0079.
[9] K. Hosanagar, “Blame the echo chamber on Facebook. But blame yourself, too,” Wired, Nov. 25, 2016; https://www.wired.com/2016/11/facebook-echo-chamber.

Digital Object Identifier 10.1109/MTS.2018.2804981
Date of publication: 2 March 2018


INDUSTRY VIEW

Rui Costa

The Internet of Moving Things

Mobile technology isn’t just in your pocket 24/7. It’s everywhere around us today, with its continual byproduct — data — trailing us everywhere we go. The great nexus of this 21st-century trend isn’t really your smartphone — it’s the city where you live, work, and play. Over half of the world’s population lives in urban areas, which are expected to grow to accommodate an additional 2.5 billion people [1] over the next three decades.

While critics argue that 24/7 devotion to our devices can drive us apart, others point to how myriad streams of emerging data culled from every moving (and connected) thing pulsating through our cities — cars, buses, bicycles and more — can ultimately reshape and optimize our urban areas to make them better places to live.

That presents urban leaders today with enormous challenges, but also big opportunities to tap into the mobile-technology boom to improve everything from city services to air quality — and offer new insights into public safety and long-term urban planning.

A handful of critical technologies — specifically, the ability to move massive amounts of real-time mobile data between the dynamic urban world and the cloud — are key building blocks of that future. Veniam’s unique mobile-data platform [2], built for a future based on autonomous vehicles (AVs) and an emerging “Internet of Moving Things,” brings three essential enablers to this vision of truly smart cities (the second enabler is illustrated in the sketch after this list):
■ Multi-Network Mesh Networking: vehicle-to-everything (V2X) capabilities using dedicated short-range communications (DSRC), 4G/LTE, and Wi-Fi with multi-hop, mobility, and bandwidth aggregation for improved connectivity experience;
■ Local Data Management: the platform provides real-time, delay-tolerant data management for optimized cost, prioritization, and context-aware data transmission;
■ Low Latency: fast connection to cloud and edge services for critical real-time and event-driven applications.
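As a hedged illustration of what “delay-tolerant, prioritized data management” can mean in practice, here is a minimal sketch of our own (it is not Veniam’s actual API; the link states and message classes are hypothetical): urgent messages go out over whatever link is up now, while bulk data waits for a cheap link such as Wi-Fi.

```python
# Minimal sketch of delay-tolerant, prioritized transmission scheduling.
import heapq
import itertools

_seq = itertools.count()   # tie-breaker so equal priorities stay FIFO
queue = []                 # heap of (priority, seq, payload, delay_tolerant)

def enqueue(payload, priority, delay_tolerant):
    heapq.heappush(queue, (priority, next(_seq), payload, delay_tolerant))

def available_links():
    # A real platform would probe DSRC, 4G/LTE, and Wi-Fi here.
    return {"wifi": False, "lte": True}

def flush():
    links = available_links()
    deferred = []
    while queue:
        item = heapq.heappop(queue)
        _, _, payload, delay_tolerant = item
        if links["wifi"]:
            print(f"send via wifi (low cost): {payload}")
        elif links["lte"] and not delay_tolerant:
            print(f"send via lte (urgent, higher cost): {payload}")
        else:
            deferred.append(item)   # hold bulk data until a cheap link appears
    for item in deferred:
        heapq.heappush(queue, item)

enqueue("collision warning", priority=0, delay_tolerant=False)
enqueue("bulk sensor log", priority=5, delay_tolerant=True)
flush()   # the warning goes out over LTE; the log waits for Wi-Fi
```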


What are some of the blue-sky possibilities with new mobile infrastructure in place? Here’s a look at some of the most promising:

Safer Streets
Experts speculate that early iterations of autonomous cars will hit the road by 2020 [3]. The impact on safety is one of the great promises of autonomous technology: a recent study by the Eno Center for Transportation predicts that when we reach the point where 90 percent of cars in the U.S. are autonomous, the number of accidents will fall from 6 million a year to about 1.3 million [4]. The enabler of this long-term vision: data. Fully-realized AVs will rely on a variety of cameras, sensors, and radar to operate safely and will generate roughly 4 terabytes of data in a typical day [5] of driving. Storage plays a critical role in the cars of the future to manage data systems such as AV mapping and navigation, infotainment, telematics, accident/drive recorders, digital clusters, vehicle-to-vehicle (V2V)/vehicle-to-infrastructure (V2I), advanced driver-assisted systems (ADAS), and more.

Reduced Traffic
The rise of autonomous vehicles is also expected to lead to a dramatically lighter load on city streets. Ridesharing is already an innovative and competitive multi-billion-dollar industry, but experts predict that the rise of ridesharing with AVs will mark another step away from automobile ownership in dense metropolitan areas, and fewer cars on the road. The AVs that do brave the commute, meanwhile, will all be gathering real-time data about traffic and congestion and cutting down on human habits that lead to stop-and-go traffic jams. A recent study [6] from the University of Illinois at Urbana-Champaign, for example, showed that the presence of just one autonomous car reduces the standard deviation in speed of all the cars in a traffic jam by nearly 50 percent, and dramatically cuts down on the number of sharp hits to the brakes.

More Open Space
Allocating space for parking is an essential but expensive part of doing business in big cities. Roughly 31 percent of the space in the central business districts of 41 major cities is currently dedicated to parking, and a study by the Sightline Institute [7] estimated that developers’ costs for parking increase average rent prices by over $225/month nationwide in the U.S. But data-driven autonomous vehicles will shift those numbers dramatically. A professor of transportation engineering at the University of Texas-Austin recently theorized that if you shifted an entire city to autonomous cars, it would need 90 percent less parking space [8] than it needs today, and global consulting firm McKinsey and Company has speculated that more AVs could open up 61 billion square feet of urban space [9] we’re currently using for parking spaces. Now imagine all that room opened up for developments like libraries, museums, or even green space.

Cleaner Air
Studies have already shown that autonomous vehicles and fewer cars on the road will cut down on CO2 emissions, but there are other ways cities can take advantage of mobile data to clear up the air. Some cities have already begun to implement mobile sensors attached to devices or objects that move throughout a city, and the information gleaned has been stunning. The Environmental Defense Fund has used data from vehicles to create methane maps and identify more than 5500 leaks in U.S. cities, while a leading Internet mapping company has equipped its mapping and imaging vehicles with an environmental intelligence platform with sensors monitoring particulate matter, NO2, CO2, black carbon, and more.

We already know the benefits of a society increasingly logged on and driven by data. It shapes how we work, who we meet, and how we build our communities. But as our dependence on mobile data increases, how we push our cities forward into the future and make them better places to live will hinge on how we harness the information available all around us.

Author Information
Rui Costa is the CTO of Veniam, www.veniam.com. Email: rcosta@veniam.com.

References
[1] M. Donath, “World cities, home to most people, to add 2.5 billion more by 2050: U.N.,” Reuters.com, Jul. 10, 2014; https://www.reuters.com/article/us-un-population-cities-report/world-cities-home-to-most-people-to-add-2-5-billion-more-by-2050-u-n-idUSKBN0FF1QV20140710.
[2] Veniam – The Internet of Moving Things, “Platform,” Veniam.com, 2018; https://veniam.com/.
[3] BI Intelligence, “10 million self-driving cars will be on the road by 2020,” Business Insider, Jun. 15, 2016; http://www.businessinsider.com/report-10-million-self-driving-cars-will-be-on-the-road-by-2020-2015-5-6.
[4] R. Wibberley, “Uber shows that the future is driverless, but why?,” Forbes, Nov. 15, 2016; https://www.forbes.com/sites/ryanwibberley/2016/11/15/uber-is-showing-us-that-the-future-is-driverless-but-why/#43a650fb5520.
[5] P. Nelson, “Just one autonomous car will use 4,000 GB of data/day,” NetworkWorld.com, Dec. 7, 2016; https://www.networkworld.com/article/3147892/internet/one-autonomous-car-will-use-4000-gb-of-dataday.html.
[6] C. Arbogast, “Experiments show that a few self-driving cars can dramatically improve traffic flow,” Engineering at Illinois, May 9, 2017; https://engineering.illinois.edu/news/article/21938.
[7] Sightline Institute, “Who pays for parking? The hidden costs of housing,” sightline.org, Dec. 12, 2013; http://www.sightline.org/research_item/who-pays-for-parking/.
[8] C. Thompson, “Driverless cars and the future of parking,” Newsweek, Jan. 24, 2016; http://www.newsweek.com/driverless-cars-and-future-parking-418943.
[9] J. Edgerton, “How many industries will self-driving cars disrupt?,” Moneywatch, Mar. 5, 2015; https://www.cbsnews.com/news/self-driving-cars-may-change-lives-disrupt-industries-mckinsey/.

Digital Object Identifier 10.1109/MTS.2018.2795092
Date of publication: 2 March 2018


COMMENTARY

Connected Vehicle Security Vulnerabilities

Yoshiyasu Takefuji

Digital Object Identifier 10.1109/MTS.2018.2795093
Date of publication: 2 March 2018

In the history of mandatory regulation of computerized vehicles, an E-Letter entitled "Black box is not safe at all" was published in Science [1] in 2017. It mentioned that on-board diagnostics (OBD-II) specifications were made mandatory for all cars sold in the United States in 1996. The European Union made European OBD (EOBD) mandatory for all gasoline (petrol) vehicles sold in the European Union starting in 2001.

The problem is that the OBD-II and EOBD specifications contain "black boxes" that cannot be fully tested by car manufacturers. There is also no security provided in the OBD-II and EOBD specifications. In other words, for more than fifteen years we have been neglecting the security problems of these naked (unsecured) cars [1].

Before considering autonomous vehicles [2], we must understand such unsecure mandatory specifications. Why have we been forced to live with black-box testing without understanding the details of the black box? We all know that black-box testing is not suitable for identifying defects in hardware or software inside the black box.

However, open source is not automatically more secure than closed source [3]. The difference is that with open source code you can verify for yourself (or pay someone to verify for you) whether the code is secure [3]. With closed source programs it needs to be taken on faith that a piece of code works properly. Open source allows the code to be tested and verified to work properly [3]. Open source also allows anyone to fix broken code, while closed source can only be fixed by the vendor [3]. The open source hardware/software movement has been navigating in a good direction to remove all black boxes and to enhance security and incremental innovations [1].

However, cyber-security expert Gene Spafford has a slightly different view of the open/closed issues on security: "I agree that we should be concerned about having unknown components in our systems. We (historically) had some vendors who
did extensive and formal testing of their software for high-assurance applications. The current market doesn't support that kind of examination, and few vendors know how to do it, but that doesn't mean it can't be done. Vendors might test better if we had a legal or economic means of holding them liable for defects. Right now, if they do a poor job verifying security, they simply release a patch and do it again!"

A second serious security problem is with vehicle electronics and engine control units (ECUs). ECUs include the electronic/engine control module (ECM), powertrain control module (PCM), transmission control module (TCM), brake control module (BCM or EBCM), central control module (CCM), central timing module (CTM), general electronic module (GEM), body control module (BCM), suspension control module (SCM), and others. Some modern vehicles have up to 80 ECUs to which new features are added. More new features are then patched into the existing systems, making the systems more vulnerable to attack.

An in-vehicle/external network makes vehicles more vulnerable still. An in-vehicle infotainment (IVI) system often uses Bluetooth technology and/or smart phones to help drivers control the system with voice commands, touch screen input, or physical controls [4]. In addition to IVI systems, smart phone links, vehicle telematics, diagnostics, and autonomous vehicles make the system more vulnerable through external applications.

We must understand and define vehicle security buzzwords including "maps," "ECU-remapping," and "re-flashing." Engine ECUs contain "maps," which are basically multi-dimensional lookup tables of minimum, maximum, and average values for various engine sensors [5]. The software on an engine ECU interprets the information from those tables and sends an appropriate signal to the relevant engine sensors so that the appropriate performance is delivered during the drive [5]. The practice of downloading a different map into the vehicle's ECU is often called "re-flashing" [5]. A process to refine the vehicle's engine map is called "ECU-remapping."
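To make the "map" concept concrete, the following Python sketch shows how ECU-style firmware might interpolate a fuel value from a two-dimensional lookup table. The axes and values are illustrative, not taken from any real ECU; re-flashing amounts to replacing the table's contents.

# Sketch of an engine "map": a 2D lookup table indexed by engine state,
# interpolated bilinearly. Axes and values are illustrative, not from a real ECU.
import numpy as np

rpm_axis = np.array([1000, 2000, 3000, 4000, 5000])   # engine speed (rpm)
load_axis = np.array([0.2, 0.4, 0.6, 0.8, 1.0])       # relative engine load
fuel_map = np.array([                                  # injector pulse width (ms)
    [1.0, 1.2, 1.5, 1.9, 2.4],
    [1.1, 1.4, 1.8, 2.3, 2.9],
    [1.2, 1.6, 2.1, 2.7, 3.4],
    [1.4, 1.9, 2.5, 3.2, 4.0],
    [1.6, 2.2, 2.9, 3.7, 4.6],
])  # rows follow rpm_axis, columns follow load_axis

def fuel_pulse(rpm, load):
    """Bilinear interpolation into the map, as ECU firmware might do it."""
    i = int(np.clip(np.searchsorted(rpm_axis, rpm) - 1, 0, len(rpm_axis) - 2))
    j = int(np.clip(np.searchsorted(load_axis, load) - 1, 0, len(load_axis) - 2))
    tr = (rpm - rpm_axis[i]) / (rpm_axis[i + 1] - rpm_axis[i])
    tl = (load - load_axis[j]) / (load_axis[j + 1] - load_axis[j])
    top = fuel_map[i, j] * (1 - tl) + fuel_map[i, j + 1] * tl
    bottom = fuel_map[i + 1, j] * (1 - tl) + fuel_map[i + 1, j + 1] * tl
    return top * (1 - tr) + bottom * tr

print(fuel_pulse(2500, 0.5))  # "re-flashing" simply replaces fuel_map's contents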
According to Dave Blundell's "ECU hacking" [6], ECU attacks are classified into front door attacks, back door attacks, and exploits, respectively:
Front door attacks: Commandeering the access mechanism of the original equipment manufacturer (OEM).
Back door attacks: Applying more traditional hardware hacking approaches.
Exploits: Discovering unintentional access mechanisms.
Hackers or crackers can use inexpensive commercial tools for ECU attacks.

In this article, potential hackings are classified into "vehicle sensor attacking" and "vehicle access attacking." We must protect our autonomous vehicles against these potential hackings, detailed in the following sections.

We are not prepared for potential vehicle sensor attacks. Vehicle sensor attacks can include global positioning system (GPS) jamming/spoofing attacks, millimeter wave radar jamming/spoofing attacks, light detection and ranging (LiDAR) sensor relay/spoofing attacks, ultrasonic sensor jamming/spoofing attacks, and camera sensor blinding attacks.

Vehicle access attacking affects not only autonomous vehicles but also conventional vehicles. Vehicle access attacking includes key fob cloning and telematics service attacking.

Vehicle Sensor Attacking
Autonomous vehicles use the following sensors: GPS, millimeter wave (MMW) radar, LiDAR sensors, ultrasonic sensors, and camera sensors. We must protect current and future autonomous vehicles against these types of sensor attacks. Vulnerabilities and attack methods are briefly described below. Potential countermeasures are also noted where possible.

GPS Jamming and Spoofing
GPS spoofing became very popular after Pokémon GO hacks. GPS signal spoofing must be mentioned first. Protecting GPS from spoofers is critical to autonomous vehicle navigation. Conventional GPS systems are vulnerable to spoofing attacks. Using an inexpensive software defined radio (SDR), GPS signal spoofing can be easily achieved [7], [8]. Advanced spoofing technology might pose defense challenges even to very sophisticated victim receivers. There is a need for more research and development in the area of spoofing defenses, especially concerning the question of how to recover accurate navigation after the detection of an attack. More importantly, however, there is a need for receiver manufacturers to start implementing and embedding spoofing defenses [9]. In other words, current GPS is vulnerable to GPS signal spoofing.

Psiaki's team has found that combining strategies can provide a
reasonably secure countermeasure that could be commercially deployed [9]. As far as we know, there is no commercial anti-spoofing GPS system available in the market.
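One family of defenses surveyed in [9] is consistency checking against signals a spoofer cannot reach. The Python sketch below illustrates the idea with an assumed wheel-odometry cross-check; the threshold and the simplified distance formula are illustrative only, not a deployable detector.

# Sketch of a spoofing plausibility check: compare the displacement GPS
# reports against dead reckoning from wheel odometry. The threshold and the
# simplified distance formula are illustrative assumptions.
import math

def spoofing_suspected(gps_prev, gps_now, odometry_m, tolerance_m=15.0):
    """gps_* are (lat, lon) in degrees; odometry_m is the distance the
    wheel sensors report for the same interval."""
    lat1, lon1 = map(math.radians, gps_prev)
    lat2, lon2 = map(math.radians, gps_now)
    # Equirectangular approximation: adequate over a few seconds of travel.
    x = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2))
    y = lat2 - lat1
    gps_dist = 6371000.0 * math.hypot(x, y)
    return abs(gps_dist - odometry_m) > tolerance_m

# GPS claims ~222 m of travel while the wheels report 30 m: flag it.
print(spoofing_suspected((40.0000, -75.0000), (40.0020, -75.0000), odometry_m=30.0))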
MMW Radar Attacking
Millimeter wave (MMW) radar uses the following frequency bands: 24.0–24.25 GHz, 76–77 GHz, 77–81 GHz, and a UWB band of 21.65–26.65 GHz. The 76.5 GHz band is exclusively for automotive radar worldwide. There are jamming and spoofing attacks against MMW radars. MMW radar jamming and spoofing attacks were demonstrated at Defcon24 in 2016 [10]. Using off-the-shelf hardware, the presenters were able to perform jamming and spoofing attacks, which caused blinding and malfunction of a Tesla, and which could potentially lead to crashes and impair the safety of self-driving cars [10].

As far as we know, there is no commercial anti-jamming/spoofing MMW radar available in the market.

LiDAR Sensor Attacking
Petit et al. have demonstrated the effectiveness of relay attacks and spoofing attacks on LiDAR (ibeo LUX 3) [11]. A cheap transceiver was able to inject fake objects that were successfully detected and tracked by the ibeo LUX 3. These attacks prove that additional techniques are needed to make the sensor more robust and to ensure appropriate sensor data quality [11]. However, combining multiple-wavelength LiDAR makes it harder for the attacker to attack both signals at the same time [11].

Ultrasonic Sensor Attacking
Liu et al. have tested Tesla, Audi, Volkswagen, and Ford vehicles using ultrasonic sensor attacks: jamming and spoofing attacks. They showed that all tested vehicles were able to be jammed and spoofed [10].

As far as we know, there is no commercial anti-jamming/spoofing ultrasonic sensor available in the market.

Camera Sensor Attacking
Petit et al. have tested a camera sensor (MobilEye C2-270) by blinding the camera with a laser and an LED matrix. The attacks confused the auto controls [11]. For the MobilEye C2-270, a simple laser pointer was sufficient to blind the camera and prevent detection of the vehicle ahead [11].

As far as we know, there is no commercial anti-blinding camera sensor available in the market.

Vehicle Access Attacking

Key Fob Clone
In order to gain access to a vehicle, a key fob clone technique can be used. Two distinct vulnerabilities were reported in existing keyless entry systems that could affect 100 million vehicles [12]. Affected vehicle keyless entry systems included VW group remote controls, Alfa Romeo, Chevrolet, Peugeot, Lancia, Opel, Renault, Ford, and others [12]. By eavesdropping a single signal sent by the original remote, an adversary is able to clone a remote control and gain unauthorized access to a vehicle [12]. A correlation-based attack on Hitag2 allows an adversary to clone the remote control within a few minutes using a laptop computer [12]. The wireless carrier frequency is currently 315 MHz in the U.S./Japan and 433.92 MHz (ISM band) in Europe. In Japan the modulation is frequency-shift keying (FSK), but in most other parts of the world, amplitude-shift keying (ASK) is used. Since the publication of the key fob clone paper [12], there has been no solution provided by manufacturers. We should immediately prepare for this key fob clone problem.
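The underlying weakness is that a signal which never changes can be replayed verbatim. The Python sketch below contrasts this with a rolling-code scheme built on a message authentication code and a monotonic counter; it is a toy protocol for illustration, not Hitag2 or any production system.

# Toy contrast between a replayable fixed code and a rolling code built on
# a MAC and a monotonic counter. An illustration, not a production scheme.
import hmac, hashlib, os

SECRET = os.urandom(16)  # shared between fob and car at pairing time

def fob_press(counter):
    """The fob transmits its counter plus a MAC over it."""
    tag = hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).digest()
    return counter, tag

class Car:
    def __init__(self):
        self.last_counter = -1
    def unlock(self, counter, tag):
        expected = hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        # A replayed capture fails here: its counter is no longer fresh.
        if hmac.compare_digest(tag, expected) and counter > self.last_counter:
            self.last_counter = counter
            return True
        return False

car = Car()
c, t = fob_press(42)
print(car.unlock(c, t))   # True: a legitimate press works once
print(car.unlock(c, t))   # False: the eavesdropped copy is rejected

The attacks in [12] succeeded because the deployed schemes used weak or shared keys, so even rolling codes protect nothing if the cryptography behind them is broken.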
Telematics Service Attacking
Burakova et al. have found that the SAE J1939 Standard, with Bluetooth, cellular, and WiFi exposed through the telematics services used in trucks, can allow easy access for safety-critical attacks [13]. In other words, an adversary with network access can control the safety-critical systems of heavy vehicles using the SAE J1939 protocol.

Tesla has talked publicly about implementing a code-signing feature where only trusted code signed with a certain cryptographic key works [14]. Cars' internal networks will need better internal segmentation and authentication, so that critical components don't blindly follow commands from the OBD2 port [14]. They need intrusion detection systems that can alert the driver — or rider — when something anomalous happens on the cars' internal networks [14].
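A minimal intrusion detection heuristic exploits the fact that legitimate ECU traffic is highly periodic, so injected frames show up as abnormally short inter-arrival times. The sketch below assumes the python-can library and illustrative baseline periods; a deployable IDS would learn per-vehicle baselines rather than hard-code them.

# Sketch of a frequency-based CAN intrusion detector, assuming the
# python-can library. Baseline periods and IDs are illustrative.
from collections import defaultdict
import can  # pip install python-can

EXPECTED_PERIOD_S = {0x02E4: 0.010, 0x0081: 0.020}  # assumed baseline intervals

def monitor(channel="can0"):
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    last_seen = defaultdict(lambda: None)
    for msg in bus:  # a python-can Bus yields received messages
        baseline = EXPECTED_PERIOD_S.get(msg.arbitration_id)
        previous = last_seen[msg.arbitration_id]
        last_seen[msg.arbitration_id] = msg.timestamp
        # Injected frames ride alongside the legitimate periodic ones, so
        # they appear as abnormally short inter-arrival times.
        if baseline and previous is not None and msg.timestamp - previous < 0.5 * baseline:
            print(f"ALERT: 0x{msg.arbitration_id:04X} arriving faster than baseline")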
All of these security problems arise because vehicle designers are not expert enough in network security. They have not paid attention to the security problem. We must embed security protection to guard against such attacks.
Exploitation Case Studies

Wireless Carjacking
Wireless penetration using a cellular connection, Bluetooth bugs, a rogue Android app, and a malicious audio file on a CD was reported in 2010 [15]. White hat hackers revealed nasty new car attacks [16]. White hat hackers killed a Jeep on the highway in 2015 [17]–[19]. Because of the simple authentication of ECUs, hackers can control ECUs. For example, the steering wheel of the 2010 Ford Escape's parking assist module can be controlled by CAN command 0x0081 [17]–[19]. The power steering of the 2010 Toyota Prius with lane keep assist (LKA) can be controlled by controller area network (CAN) command 0x02E4 [17]–[19]. By plugging an Internet-connected gadget into a car's OBD2 port, researchers could take control of a Corvette's brakes in 2015 [20]. Because of vulnerabilities, Fiat Chrysler recalled 1.4 million vehicles amid concerns over remote hack attacks [21]. High-tech thieves could steal Hyundai cars via the company's mobile app in 2017 [22]. Cyber security expert Kevin Mahaffey said: "Automakers that transform themselves into software companies will win. Others will get left behind" [23].
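The root cause these demonstrations exploit is that classic CAN frames carry no sender authentication: any node on the bus, including a gadget plugged into the OBD2 port, may transmit any arbitration ID. The Python sketch below, assuming the python-can library and a placeholder payload, shows how little is required; it is an illustration of the missing authentication, not a working exploit.

# Illustration of the root problem: classic CAN frames carry no sender
# authentication, so any node on the bus may transmit any arbitration ID.
# Assumes the python-can library; the payload is a placeholder.
import can  # pip install python-can

bus = can.interface.Bus(channel="can0", bustype="socketcan")
frame = can.Message(arbitration_id=0x02E4,   # the Prius LKA ID noted above
                    data=[0x00] * 8,         # placeholder payload
                    is_extended_id=False)
bus.send(frame)  # nothing on a classic CAN bus verifies who sent this

This is why the segmentation, authentication, and intrusion detection measures discussed above matter: the bus itself provides none of them.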
Known and Unknown Vulnerabilities
There are security problems of connected vehicles that we must solve immediately. Jamming/spoofing problems in vehicle sensor attacking should be resolved. An immediate solution is needed for vehicle access attacking, including key fob cloning and telematics service attacks.

There are many known and unknown vulnerabilities in current connected vehicles. Connected vehicles must also be protected against wireless carjacking. Otherwise, connected vehicles, and self-driving cars, will become the next crime frontier.

Author Information
Yoshiyasu Takefuji is with the Faculty of Environment and Information Studies, Keio University, 5322 Endo, Fujisawa 2520882 Japan. Email: takefuji@sfc.keio.ac.jp.

References
[1] Y. Takefuji, "Black box is not safe at all," E-Letters of Science, 2017; http://science.sciencemag.org/content/352/6293/1573/tab-e-letters.
[2] J.-F. Bonnefon et al., "The social dilemma of autonomous vehicles," Science, vol. 352, no. 6293, pp. 1573-1576, Jun. 24, 2016.
[3] J. Lynch, "Why is open source software more secure?," Infoworld, Sept. 22, 2015; http://www.infoworld.com/article/2985242/linux/why-is-open-source-software-more-secure.html.
[4] V. Beal, "In-Vehicle Infotainment (IVI)," Webopedia, 2018; http://www.webopedia.com/TERM/I/in-vehicle-infotainment-ivi.html.
[5] C. Smith, The Car Hacker's Handbook: A Guide for the Penetration Tester. No Starch Press, 2016.
[6] D. Blundell, "ECU hacking," in The Car Hacker's Handbook: A Guide for the Penetration Tester, 2016, ch. 6; http://publicism.info/engineering/penetration/7.html.
[7] Software-Defined GPS Signal Simulator, Github; https://github.com/osqzss/gps-sdr-sim, accessed Dec. 15, 2017.
[8] S. Kiese, "Gotta Catch 'Em All! – WORLDWIDE! (or how to spoof GPS to cheat at Pokémon GO)," Insinuator, 2016; https://insinuator.net/2016/07/gotta-catch-em-all-worldwide-or-how-to-spoof-gps-to-cheat-at-pokemon-go/.
[9] M.L. Psiaki and T.E. Humphreys, "GNSS spoofing and detection," Proc. IEEE, vol. 104, no. 6, pp. 1258-1270, Jun. 2016.
[10] C. Yan et al., "Can you trust autonomous vehicles: Contactless attacks against sensors of self-driving vehicles," presented at DEFCON24, 2016; https://assets.documentcloud.org/documents/3004659/DEF-CON-whitepaper-on-Tesla-sensor-jamming-and.pdf, https://media.defcon.org/DEF%20CON%2024/DEF%20CON%2024%20presentations/DEFCON-24-Liu-Yan-Xu-Can-You-Trust-Autonomous-Vehicles.pdf.
[11] J. Petit et al., "Remote attacks on automated vehicles sensors: Experiments on camera and LiDAR," in Proc. BlackhatEU15, 2015; https://www.blackhat.com/docs/eu-15/materials/eu-15-Petit-Self-Driving-And-Connected-Cars-Fooling-Sensors-And-Tracking-Drivers-wp1.pdf.
[12] F.D. Garcia et al., "Lock it and still lose it — On the (in)security of automotive remote keyless entry systems," in Proc. USENIX, 2016; https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_garcia.pdf.
[13] Y. Burakova et al., "Truck hacking: An experimental analysis of the SAE J1939 Standard," in Proc. USENIX, 2016; https://www.usenix.org/system/files/conference/woot16/woot16-paper-burakova.pdf.
[14] A. Greenberg, "Securing driverless cars from hackers is hard. Ask the ex-Uber guy who protects them," Wired, 2017; https://www.wired.com/2017/04/ubers-former-top-hacker-securing-autonomous-cars-really-hard-problem/.
[15] J. Markoff, "Researchers show how a car's electronics can be taken over remotely," New York Times, Mar. 10, 2011; http://www.nytimes.com/2011/03/10/business/10hack.html?_r=0.
[16] A. Greenberg, "Hackers reveal nasty new car attacks – With me behind the wheel" (Video), Forbes, Jul. 24, 2013; https://www.forbes.com/sites/andygreenberg/2013/07/24/hackers-reveal-nasty-new-car-attacks-with-me-behind-the-wheel-video/#6677f02228c7.
[17] A. Greenberg, "Hackers remotely kill a Jeep on the highway — With me in it," Wired, Jul. 2015; https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/.
[18] C. Miller and C. Valasek, "Remote exploitation of an unaltered passenger vehicle," Aug. 10, 2015; http://illmatics.com/Remote%20Car%20Hacking.pdf.
[19] C. Valasek and C. Miller, "Adventures in automotive networks and control units," IOActive, 2014; https://www.ioactive.com/pdfs/IOActive_Adventures_in_Automotive_Networks_and_Control_Units.pdf.
[20] A. Greenberg, "Hackers cut a Corvette's brakes via a common car gadget," Wired, Aug. 2015; https://www.wired.com/2015/08/hackers-cut-corvettes-brakes-via-common-car-gadget/.
[21] M.B. Quirk, "Fiat Chrysler recalling 1.4M vehicles amid concerns over remote hack attacks," Consumerist, Jul. 24, 2015; https://consumerist.com/2015/07/24/fiat-chrysler-recalling-1-4m-vehicles-amid-concern-over-remote-hack-attacks/.
[22] Reuters, "High-tech thieves could steal Hyundai cars via its mobile app: Researchers," Hindustan Times, May 17, 2017; http://www.hindustantimes.com/autos/high-tech-thieves-could-steal-hyundai-cars-via-its-mobile-app-researchers/story-zQ6R1Vouy5bAH1I72shu6I.html.
[23] N. Perlroth, "Why car companies are hiring computer security experts," New York Times, May 7, 2017; https://www.nytimes.com/2017/06/07/technology/why-car-companies-are-hiring-computer-security-experts.html?mcubz=1&_r=0.
GUEST EDITORIAL

Robots and Socio-Ethical Implications

Katina Michael, Diana Bowman, Meg Leta Jones, and Ramona Pringle

Digital Object Identifier 10.1109/MTS.2018.2795094
Date of publication: 2 March 2018

Berlin (March 1930). Original caption (translated into English): The first artificial machine man in Berlin! "Robot," the first artificial machine man, invented by the English engineer Captain W.H. Richards, is presented for the first time to the public in the conservatory in Berlin. The artificial machine man "Robot" can speak, turn his head, hold objects, and bow. He is made entirely of steel. "Robot," the artificial machine man, "has breakfast" in the streets of Berlin with his inventor, the English engineer Captain W.H. Richards. (Wikimedia Commons/German Federal Archive.)

From lifelike androids to virtual assistants, industrial machines to drones, today's robot creations, including those in automation, are used to perform any number of specific tasks. These undertakings are repetitive in nature and suggest that we are still a long way from manufacturing "all-purpose" utility robots.

History teaches us much about technological innovation and the perils of over-promising. We are, for example, still a far cry away from the headlines that greeted the first computers, which promised so much but delivered only computational trajectories for the military. In fact, seventy years later, we still see headlines similar to this one from 1946 describing the power of the Electronic Numerical Integrator and Computer: "It Won't Mind the Baby — Yet; But Little Else Stops 'ENIAC'" [1].

It is difficult not to be mesmerized by robotic instrumentation demonstrating beyond-human-like capabilities, including superior powers to lift, carry, and run in all-terrain landscapes. But while the likes of Boston Dynamics' creations fill us with awe, these physically impressive machines still lack common sense, basic communication skills, and emotional intelligence [2]. Is it unreasonable for us to want more from the AI-inspired — something more than, for example, a robot that can get up off the ground and recover from being hit with a club?

Robots encapsulated and embodied in tin metal, and adorned with sensors, silicon, batteries, and actuators are, however, just one form of robotics. Against this backdrop we are starting to see the rise of software-based robots, or "bots" for short. These types of bots have been beating humans at chess since 1997
[3], and more recently at Go and Jeopardy as well, providing companionship, and organizing our lives and homes. Other bots, when let loose over interconnected networks, have dramatically impacted political attitudes [4], marketing strategies, and official public records. Bots have the ability to amass and distribute considerable power, especially when teamed with artificial intelligence and humans.

Law firms are increasingly using bots to troll through large troves of documents during the discovery phase, for faster and arguably more reliable searching, while medicine is increasingly turning to sophisticated algorithms to comb through medical records and make more precise diagnoses. How do we study, make sense of, and impact or direct not only these tangled and entrenched socio-ethical implications, but also our societal expectations and demands?

Most recently it has been "killer robots" that made the headlines, as leading roboticists have increasingly called for governments and others to prohibit the production of military drones, especially those that can conduct highly targeted signature strikes based on pattern-of-life data [5]. Equally, as self-driving vehicles move beyond real-world testing and into the commercial market, millions of transport workers worldwide are thinking carefully about the future of work and their fear of so-called "technological unemployment." And then there are those campaigners who are calling for a ban on sex robots [6]. A myriad of profound and long-lasting social and policy implications remain. It is incumbent on researchers and readers of this joint special issue from various disciplines, backgrounds, and roles to help policy makers and others grapple with effective responses to this new era.

John C. Havens elegantly reminded us recently, "it's easy to get caught up in the idea that Artificial Intelligence (AI) [or robotics] will be one of two things: our destroyer or our savior. It's time to move beyond this dualistic narrative. The 'either or' comparisons create fear or unrealistic expectations, neither of which pragmatically move society forward" [7]. For Havens the emphasis should be on "Extended Intelligence." Perhaps analogously for our special issue the emphasis should be on "Human-Enhanced Robotics."

Havens cites Joi Ito heavily in his piece for IEEE-USA and appears to be similarly wary of reductionist approaches that usually polarize society into an "either/or" debate [8]. As illustrated by the pieces that make up this special issue, it simply isn't that easy. We have accepted papers that demonstrate both positive and negative social implications of robotics; papers that are in the nascent stages of proving the value of robotics with respect to specific contexts; and papers that show that we have major challenges ahead with respect to privacy, security, trust, and robotics. We have a range of empirically based papers, and papers that are more philosophically based in argumentation. Only through the inclusion of all, sitting side by side, are readers able to understand the true breadth and complexities presented by today's and tomorrow's robotics.

We hope that this special issue will appeal to all of our IEEE Technology and Society Magazine readership. As it is a joint special issue with IEEE Robotics and Automation Magazine, we hope it may also demonstrate to engineers of all kinds the importance of thinking about the potential socio-ethical implications of their inventions – all the way from robots in health, to robots for the military, from robots that serve a specific function in the workplace, to bot software that might well be used to misinform or manipulate the masses. There are unintended consequences to all innovations, but perhaps what is of more interest in this special issue are those intended consequences and how they play out [9], [10].

Guest Editor Information
Katina Michael is a professor at the Faculty of Engineering and Information Sciences at the University of Wollongong, Australia. Email: Katina@uow.edu.au.
Diana Bowman is an associate professor in the Sandra Day O'Connor College of Law, Arizona State University, Tempe, AZ.
Meg Leta Jones is an assistant professor at Georgetown University, Washington, DC.
Ramona Pringle is an assistant professor in the RTA School of Media at Ryerson University, Toronto, Canada, and Creative Director of the Transmedia Zone, an incubator for the future of media.

References
[1] C.D. Martin, "ENIAC: Press conference that shook the world," IEEE Technology & Society Mag., vol. 14, no. 4, pp. 3-10, 1995/1996.

[2] K. Michael, "Meet Boston Dynamics' LS3 – The latest robotic war machine," The Conversation, 2012; https://theconversation.com/meet-boston-dynamics-ls3-the-latest-robotic-war-machine-9754, accessed Dec. 9, 2017.
[3] G. Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. London, U.K.: Hodder & Stoughton, 2017.
[4] K. Michael, "Bots trending now: Disinformation and calculated manipulation of the masses," IEEE Technology & Society Mag., vol. 36, no. 2, pp. 6-11, 2017.
[5] CSKR, "Campaign to Stop Killer Robots," 2017; https://www.stopkillerrobots.org/, accessed Dec. 9, 2017.
[6] CAS, "Campaign Against Sex Robots," https://campaignagainstsexrobots.org/, accessed Dec. 4, 2017.
[7] J.C. Havens, "'AI' to 'EI' – Moving from fear to flourishing in the age of the algorithm," IEEEUSA, Nov. 29, 2017; https://insight.ieeeusa.org/articles/ai-ei-moving-past-fear/.
[8] J. Ito, "Designing our complex future with machines," Resisting Reduction: A Manifesto, Nov. 15, 2017; https://pubpub.ito.com/pub/resisting-reduction/?platform=hootsuite.
[9] R. Pringle, K. Michael, and M.G. Michael, "Unintended consequences: The paradox of technological potential," IEEE Potentials, vol. 35, no. 5, pp. 7-10, 2016.
[10] R. Pringle, K. Michael, and M.G. Michael, "Unintended consequences of living with AI: The paradox of technological potential?," IEEE Technology & Society Mag., vol. 35, no. 4, pp. 17-21, 2016.

PRESIDENT'S MESSAGE (continued from page 1)

■ Reviewing submissions to IEEE ISTAS, Norbert Wiener, IEEE Ethics, IST-Africa Week, and other SSIT-supported conferences.
■ Supporting activities of the IEEE SSIT IST-Africa SIGHT in IST-Africa Partner Countries.
■ Representing SSIT on IEEE committees (TAB, BoD, Standards, Future Directions Initiative).
■ Serving on the SSIT Board of Governors.
If any of these opportunities are of potential interest or if you would like to recommend someone, please contact me (Subject: Volunteer for IEEE SSIT — <name>) and I will direct you to the responsible team. If you have not received a response to a previous offer to volunteer, please accept my sincere apologies and contact me again so I can assist you.

Call for Donations, Gifts, and Bequests
SSIT is launching a fundraising campaign focused on securing the level of resources required to scale activities over the coming years. Funds will be invested in further strengthening and expanding volunteer activities. Options to financially support SSIT volunteer activities include:
■ Donate to SSIT online: https://ieeefoundation.org/ieee_ssit.
■ Mail a check payable to the "IEEE Foundation — SSIT Fund" to: IEEE Foundation, 445 Hoes Lane, Piscataway, NJ 08854, U.S.A.
■ Ask your employer to match your personal donation.
■ Donate in honor or memory of someone who has touched your life or the lives of others.
■ Direct a gift to the "IEEE Foundation — SSIT Fund" from your donor advised fund, foundation, or family office.
■ Remember SSIT in your will.

Thanks and Welcome
I would like to acknowledge the enormous contributions made by Subrata Saha of SUNY Downstate Medical Center, Brooklyn, NY, and John Villasenor, Professor of Electrical Engineering at UCLA, who have finished their terms of service on the SSIT BoG. I welcome Charmayne Hughes of the Health Equity Institute and an Associate Professor at San Francisco State University; Heather Love, Assistant Professor of English at the University of South Dakota; and Jay Pearlman, currently adjunct Professor at the University of Colorado, who have all been elected to serve three-year terms on the SSIT Board of Governors beginning in 2018.

Author Information
Paul M. Cunningham, 2017–2018 IEEE-SSIT President, is President & CEO, IIMC (Ireland); Director, IST-Africa Institute (www.IST-Africa.org); Adjunct/Visiting Professor, International University of Management (Namibia); and Visiting Senior Fellow, Wrexham Glyndŵr University (Wales). Paul is 2018 Chair, IEEE Humanitarian Activities Committee, and serves on the IEEE Global Public Policy Committee. Email: pcunningham@ieee.org.

FIGURE 1. Humanizing the interaction with robots implies a mutual understanding between the two agents, with the effort of adapting to the partner not falling on the human shoulders alone, but rather being shared between the two. Photo by Laura Taverna — Istituto Italiano di Tecnologia.

Humanizing Human-Robot Interaction

On the Importance of Mutual Understanding

Alessandra Sciutti, Martina Mara, Vincenzo Tagliasco, and Giulio Sandini

Digital Object Identifier 10.1109/MTS.2018.2795095
Date of publication: 2 March 2018

In conjunction with what is often called Industry 4.0, the new machine age, or the rise of the robots, the authors of this paper have each experienced the following phenomenon. At public events and round-table discussions, among our circles of friends, or during interviews with the media, we are asked on a surprisingly regular basis: "How must humankind adapt to the imminent process of technological change? What do we have to learn in order to keep pace with the smart new
machines? What new skills do we need to understand the robots?"

We think that these questions are being posed from the wrong point of view. It is not that we, the ever growing number of robot users, should be the ones who need to acquire new competencies. On the contrary, we want to ask how the robots that will soon be popping up all over the place can adjust to their human interaction partners in better ways. What do the robots have to learn to be considerate of people and, no less important, to be perceived as considerate by people? Which skills do they need, what do they have to learn, to make cooperation with humans possible and comfortable?

Coming from various disciplinary backgrounds rooted in robotics, cognitive science, psychology, and communication, these are the shared questions on which we have based our approach to humanize human-robot interaction (HRI). It is an approach that ultimately leads us to the necessity of mutual understanding between humans and machines — and therefore to a new design paradigm in which collaborative machines not only must be able to anticipate their human partner's goals but at the same time enable the human partner to anticipate their own goals as well.

We will be elaborating on several important design factors in each respective area. Even if they don't constitute an all-encompassing concept, we are convinced that they build a solid basis and an effective strategy for the development of humane robots. (We adopt here the Cambridge Dictionary definition of humane: "showing kindness, care, and sympathy towards others, especially those who are suffering.") Moreover, we think that robots that are designed for mutual understanding can also make a positive impact on the subjective psychological experience of human-robot interactions and enhance public acceptance of robotic technologies in general.

People's Fears of Robots
At present there is still much skepticism on the part of some groups of potential users towards the increasing deployment of robots in domestic environments and, increasingly, in workplaces. According to a recent large-scale survey in the European Union, approximately 70 percent of people think that robots will steal people's jobs and around 90 percent say that the implementation of robots in society needs careful management [1]. In relation to these numbers, a much smaller but still sizeable population could be called "technophobes" or "robophobes" [2], defined as individuals who are anxious towards smart machines on a personal level. The Chapman University Survey of American Fears [3] revealed in this regard that 29 percent of U.S. residents reported being very afraid or afraid of robots replacing the workforce, a number comparable to the prevalence of the fear of public speaking in the U.S. population. Furthermore, 22 percent of participants indicated being very afraid or afraid of artificial intelligence, and 19 percent of "technology I don't understand" [3]. The imagined substitution of human beings by intelligent artificial agents has been repeatedly described as a strong fear, ranging from the fear of job loss relevant to everyday life [2], [4] to much vaguer fears of an artificial "superintelligence" [38] that develops doubtful intentions on its own and, ultimately, brings a "robocalyptic" end of humankind [5]. Science fiction, of course, plays a role here. While some fictional stories have been shown to generate meaning and thereby increase recipients' acceptance of robotic technology [35], [37], many highly popular movies such as The Terminator, Blade Runner, or Ex Machina [38] circulate dystopian outlooks and frequently encourage the audience to envision a militarized future of human-robot relations [39].

Coming back to more contemporary, non-fictional developments in robotics, various fears and ethical concerns have been raised in view of so-called social or "emotional" robots, meant to be used, e.g., for the care of children or the elderly. A number of scholars and study participants have expressed their worries about such robotic companions, as they might contribute to social isolation by reducing the amount of time spent with other humans, or lead to a loss of privacy and liberty or to emotional manipulation of lonely, sensitive persons [38], [40], [41], [43].

Besides being afraid of robotic surrogates or caregiver robots, however, there are other types of fears about robots that have been noted as relevant in the literature.

These include anxieties about the communication capabilities of robots (e.g., a robot may be unable to understand a complex conversation), anxieties about behavioral characteristics (e.g., what speed a robot will move at or what it will do next), and anxieties about discourse with robots (e.g., being unsure how to talk to a robot or whether the robot has understood one's utterance). Many of these concerns are summarized in the Robot Anxiety Scale [6]. In addition, Ray and colleagues [4] have identified technical dysfunctions and a felt loss of control as particularly strong fears towards robots — issues that recently have even been picked up by the European Parliament's draft for a union-wide robot law [36]. In this draft, the implementation of a mandatory "kill switch" in every robot is requested to protect people from malfunctioning machines.

Reality of Robots
While some fears about robots are provoked by social or "emotional" machines made for their users' personal environments, many of people's fears described above also derive from the machines currently deployed in the automotive and electronics industries. Those robotic platforms are usually bulky and frightful, positioned in cages where no human is allowed during operation and requiring an expert user perfectly trained to operate them. Although we can imagine that robots envisioned for diffusion in society will be different in shape and applications, the idea that robots are built to work efficiently with no need of humans lingers in most people's minds. This kind of thinking leads to an assumption that robots might replace humans or that, to interact with a robot, it will be the responsibility of the human user to learn a potentially complex set of instructions, with no adaptation from the robotic side.

However, this conception of robots has deeply changed, even from the industrial perspective. Current industries require flexibility and versatility, the capacity to change activity and learn new processes in relatively short time, something that humans are much better at than robots. Moreover, many human skills — such as the ability to deal with unforeseen issues, or some manual and creative skills — are still far from being matched by robotic devices. In other words, roboticists have realized that robots need humans! Leveraging human-robot collaboration, rather than minimizing it, could be a possible solution to approach the desired level of adaptability and proficiency in dealing with complex tasks. As a result, robots now more often aim at exploiting their interaction with humans (be it physical or cognitive) to learn such complex concepts as human common sense, and to complement their behavioral efficiency with the adaptability, intuitiveness, and creativity proper to human–human interaction (e.g., see the Robotics 2020 Strategic Research Agenda for Robotics in Europe, https://www.eu-robotics.net/cms/upload/topic_groups/SRA2020_SPARC.pdf).

Humans in these instances therefore become partners, not just "users," and the relationship between human and robot is not unidirectional (or absent) anymore, but depends on both interacting agents. We posit that for this dynamic equilibrium to work and for it to bring the expected benefits, robots will have to become more humane, so as to establish an effective mutual understanding with their partners and carry part of the effort needed to maintain the interaction.

From Humanlike Robots to Humanized Interaction: A New Design Paradigm of Mutual Understanding

The Human in "Humanized"
Humanizing human-robot interaction means that the machines will need to become considerate of humans. Much research has already been devoted to making computers and technology more respectful of human necessities, including from a socio-ethical perspective [44], [45]. Now, robots will have to base their behavior on human needs by anticipating and understanding them, and they will have to communicate in ways understandable to humans (see Figure 1). To achieve this goal, robotic research needs to move beyond the tradition of seeking more powerful and efficient systems, and to focus instead on such novel concepts as robot transparency, legibility, and predictability, and on skills entailing understanding and anticipating humans and their needs.

It is important to stress that humanizing does not necessarily imply choosing an anthropomorphic appearance for a robot, and does not require that robots replicate exactly all possible human activities or simulate human emotion. Rather, it suggests that robots need to "be considerate of people," i.e., maintain a model of humans in order to understand and predict human needs, intentions, and limitations. It suggests robots need to use ways of communicating and cooperating that are intuitive for the human partner. This interactive

model needs to work for robots with very different embodiments, ranging from humanoids to robot cars and quadrotors, to name a few.

The approach we suggest to increase the fluidity of human-robot interaction is to leverage the interactive models humans have naturally developed to interact with other humans. When working together, how a certain action is performed allows the human partner to intuitively understand several unspoken properties of the ongoing interaction and to make it more efficient and synchronized. For instance, from a human's motion or movements it is possible to infer how confident that person is in what he is doing [7], how heavy or fragile the object being manipulated is [8], and also what the person intends to do with the same object [9]. We posit that robots should be enabled to tap into such a flow of information by both reading and sending these covert signals within the interaction.

Importantly, we need robots that can understand us but that at the same time can be easily understood and anticipated by us. Only through such a bidirectional, mutual understanding can the interaction evolve in a safe, natural, seamless way, similarly to what happens in human-human exchanges.

Designing Robots to Predict Human Needs
A key ability in humans is the capability to anticipate what others intend to do or might need. The formation of expectations about others' actions and intentions increases the efficiency of the interaction by limiting the need for elaborate verbal exchanges and cutting delays drastically. To form expectations the robot needs to assess the internal, hidden status of its partners, in particular what their intended goal is and, to some extent, what their motivations and feelings are. Between humans this is achieved through a continuous exchange of tacit, covert signals, subtly communicated in the way we behave. For instance, the direction of the human gaze correlates with the position of the focus of attention, and is exploited for understanding the role of each participant in an interaction and to pace turn taking [10], whereas the velocity with which an action is performed can reveal the actor's emotional status or intentions [9], [11].

Some of these signals are physiologically embedded in human behavior and do not need to be added voluntarily for the sake of communication. Hence they do not even require a sender's awareness. Others, still based on the way humans move, have an explicit communicative intent (such as waving the hand to say goodbye, or pointing to indicate something relevant), but they are intrinsic to human culture and do not entail any conscious effort to be interpreted. A robot reading similar signals could decide when to act and what to do in an interaction without requiring any learning or adaptation from the human side, promoting a natural and intuitive (i.e., a more humanized) collaboration. There is already evidence that sensitivity to these signals facilitates human-robot interaction (and makes it more pleasant and acceptable). For instance, it has been shown that a robot monitoring head orientation can disambiguate verbal expression on the basis of the participants' gaze direction, making the interaction more natural, pleasant, and efficient [12]. The ability to read eye motion can inform the robot of which object the person might need in a collaborative joint task [10], with no need to process explicit verbal or gestural instructions [13]. Beyond gaze, the properties of body motion can also help the robot act as a more intuitive collaborator. The ability to detect regularities of biological motion in a scene enables a robot to detect human activities even when no human shape is in sight (e.g., when only the tool being used is visible [14]), and subtle variations in action kinematics can inform the robot of the human's intention [15]. It has also been demonstrated that the combination of anticipation of human motion trajectories and modeling of the potential uses of common objects allows a robot to predict the next human actions with sufficient detail to perform anticipatory planning of its reactive responses [16].
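As a toy illustration of gaze-based anticipation in the spirit of [10] and [13], the following Python sketch predicts the object a partner needs from a sliding window of fixation labels. The window size and confidence threshold are assumptions, and a real system would sit behind a calibrated eye or head tracker rather than receive labels directly.

# Toy gaze-based intention estimator: predict the object a partner needs
# from a sliding window of fixation labels.
from collections import Counter, deque

class GazeIntentEstimator:
    def __init__(self, window=30, confidence=0.6):
        self.fixations = deque(maxlen=window)   # most recent fixation targets
        self.confidence = confidence
    def observe(self, target):
        self.fixations.append(target)
    def predicted_object(self):
        if not self.fixations:
            return None
        label, count = Counter(self.fixations).most_common(1)[0]
        share = count / len(self.fixations)
        return label if share >= self.confidence else None  # else: keep observing

estimator = GazeIntentEstimator(window=10)
for target in ["cup"] * 7 + ["screwdriver"] * 3:
    estimator.observe(target)
print(estimator.predicted_object())  # "cup": hand it over proactively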
A robot able to "read" body motion will also be able to detect the affective state of the interacting partner, in order to use this information as well to adapt its behavior accordingly (for a recent review on automatic recognition of body movements for affective expression, see [11]).

In summary, the first step to making future robots considerate of humans will be to enable them to sense and understand the subtle signals that humans naturally exchange during everyday interactions. This ability is at the basis of the process required to develop robots gifted with the kind of intuition found in our best human collaborators. Conversely, as is explained in the next section, in order for humans to be considerate of robots, it is necessary to embed in robot motion the same implicit messages used by humans.
Designing Robots to be Predictable for Humans
One might easily assume that for a smooth human-robot interaction it is sufficient to have a smart robot, programmed to understand and react to the needs of its human partner in real time. This is certainly true in a situation in which all actions are defined a priori, for example when a swivel-arm robot on an assembly line perpetually repeats the very same movement. This is not the case when robots interact with humans in more dynamic, unconstrained situations that demand a mutual exchange of information and meaning-making, something that has long been investigated for interpersonal communication in the field of Language and Social Interaction [42]. For example: if a pedestrian wants to cross the road, a self-driving robot car must be capable of inferring the person's intent by analyzing body direction or gestures, and thus stop automatically. But beyond that, the car also has to send some message to inform the pedestrian of its intention to stop, or to continue if stopping is considered more dangerous. As a result of this implicit dialogue, the person can cross the road safely — or wait.
safely — or wait. comfortable they felt while interacting with the robot.
The alternative to this situation is to adopt an ultra- This is very much in line with other research that
safe strategy such as stopping whenever there is a revealed a positive relation between legibility of robot
pedestrian in view, with disadvantageous outcomes in motion and people’s perceptions of safety [29] as well
terms of efficiency. In general situations however, espe- as people’s trust in autonomous agents [30].
cially with humans naïve to robot’s functions or whenev- Yet, a highly legible and predictable motion design
er there are more than only two potential outcomes of seems not only to influence how positively people evalu-
an HRI (contrary to the binary decision stop/move on), it ate an HRI, but also at the same time how efficiently a
is crucial for a robot to proactively communicate its task can be carried out by a human-robot team. People
imminent actions in order to reduce uncertainty on the and robots have been shown to need significantly less
part of its human. Designing robots to be considerate of time for joint task completion when the human partner
people therefore also means designing robots to satisfy was able to predict the robot’s imminent action early.
the basic human longing for clarity, control, and predict- This held true even if the legible motion design actually
able events. To reach this goal, robots not only need to resulted in the robot taking longer to execute its part of
be able to anticipate human intent, but at the same the task [26]. This somewhat counterintuitive efficiency
time need to give people a chance to anticipate the effect can be explained by the test persons’ increased
robot’s behavior as well. ability to coordinate their own actions with the robot at
In the humanities and social sciences it is long an earlier stage. In practice, designing a robot’s motion
known that — with a few exceptions — events, tasks, for optimal legibility means that the robot doesn’t follow
and agents that are characterized by high ambiguity and the most direct path, e.g. for grasping an object or
unpredictability often are evaluated as particularly approaching a person, but favors a curved trajectory by
uncomfortable and even may cause anxiety [17], [18], which the direction of the target location in many cases

26 IEEE TECHNOLOGY AND SOCIETY MAGAZINE ∕ MARCH 2018


can be predicted implicitly by human observers [26], [27]. Interestingly, it has been shown that a self-learning swivel-arm robot that is only programmed to reward joint task completion efficiency with human partners ultimately ends up performing more legible motion [25].
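A toy version of the legibility idea formalized by Dragan and colleagues [26], [27] can be written in a few lines of Python: an observer scores each candidate goal by how little the motion seen so far deviates from the optimal path to that goal. The exponential cost model and the constant beta below are simplifications for illustration, not the published formulation.

# Toy legibility scoring: an observer weighs each goal by how little the
# motion so far deviates from the optimal path to that goal.
import numpy as np

def goal_probabilities(path, goals, beta=5.0):
    start, current = np.asarray(path[0], float), np.asarray(path[-1], float)
    travelled = sum(np.linalg.norm(np.subtract(b, a))
                    for a, b in zip(path[:-1], path[1:]))
    scores = []
    for goal in np.asarray(goals, float):
        optimal = np.linalg.norm(goal - start)             # straight-line cost
        via_current = travelled + np.linalg.norm(goal - current)
        scores.append(np.exp(-beta * (via_current - optimal)))
    scores = np.asarray(scores)
    return scores / scores.sum()

goals = [(1.0, 0.0), (1.0, 1.0)]
direct = [(0.0, 0.0), (0.5, 0.25)]    # efficient start, but ambiguous
legible = [(0.0, 0.0), (0.4, 0.55)]   # exaggerated arc toward the second goal
print(goal_probabilities(direct, goals))   # roughly 40/60: observer unsure
print(goal_probabilities(legible, goals))  # observer now confident in goal 2

The exaggerated arc is longer, i.e., less efficient for the robot, yet it lets the observer commit to the right expectation earlier, which is exactly the counterintuitive trade-off reported in [26].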
human behavior requires a very deep
A highly predictable robot behavior, however, can be knowledge of the motor and cognitive
established by various means. Aside from a robot’s
motion design, other channels of human-robot commu- bases of human-human interaction.
nication can be exploited as well. Light signals have
been used in a flight path crossing task with a drone, for
example, to proactively express the drone’s intent to emerging in our quotidian environments. Accordingly
brake for a human pedestrian. With light signals given, they will be getting closer and closer to us human beings
people walked significantly faster and displayed fewer in a physical and psychological sense as well.
nonverbal cues of insecurity in comparison to the use If robots would move amongst their own “species,”
of an unpredictable control condition [31]. their behavioral design could be purely functional.
All in all, it is noted that certainty and transparency There would not be any need for communicative signals
in general are variables of high relevance for the per- meant to be detectable by the human senses. However,
ceived comfort of an HRI. This applies not only to the robots will be characterized by their co-existence and
next actions of a robot will take, but also to the broader collaboration with human partners much more than has
context of what can be expected from a robot, and what been trumpeted for a long time. With this article, we
a robot is able to sense, to decide, and to do autono- therefore called for a humane vision of human-robot
mously. The more familiar people are with a robot, the interaction, fostered by a new design paradigm of bidi-
more they are willing to accept it as a partner, in either a rectional — mutual — understanding between humans
personal or work environment [2]. and machines.
It is worth stressing, however, the relevance of mutual understanding in the sense that, even if we can think that humans could easily learn to predict non-humane robot behaviors (within the limits of our perceptual abilities), the ability of the robot to anticipate human behavior requires a very deep knowledge of the motor and cognitive bases of human-human interaction. It is easier to implement a robot moving in a human way than it is to implement a robot able to interpret human movement. However, only the effective combination of understanding and being understood will allow establishing a balanced interaction between the two agents, human and robot, making their collaboration seamless and intuitive.
Dawn of a New Epoch
In the perception of many people, we are facing the dawn of a new epoch. This is an epoch in which the images with which we have long been familiar from science fiction films start to correspond to the realities of everyday life. An epoch in which the robots — finally, really — arrive at our workplaces and in our households, at hospitals and entertainment parks, on the roads and in the sky above our heads. Regardless of how realistic or starry-eyed many of these prospects of our impending high-tech future might seem, it is clear that autonomous machines will not be keeping to themselves for much longer. They will walk among us. And as they proceed to carry out their respective missions, they will no longer be segregated in machine-only realms; rather, they will be emerging in our quotidian environments. Accordingly, they will be getting closer and closer to us human beings in a physical and psychological sense as well.

If robots were to move only amongst their own "species," their behavioral design could be purely functional. There would not be any need for communicative signals meant to be detectable by the human senses. However, robots will be characterized by their co-existence and collaboration with human partners much more than has been trumpeted for a long time. With this article, we therefore called for a humane vision of human-robot interaction, fostered by a new design paradigm of bidirectional — mutual — understanding between humans and machines.

Designing robots to be considerate of human interaction partners implies that they should be able to infer what a human intends to do, what he or she needs, and whether it is the right moment to intervene or whether it would be better to wait a little. However, this won't be enough: robots will need to be considerate also in their actions, selecting behaviors that maximize human comfort. This implies not only aiming for safety and ergonomics, but also at an increase in intention expression and action understandability. It is worth stressing here that, in general, being considerate of humans requires the robot to be able to understand and adapt to a human's skill at the individual/personal level in order to interpret and use a shared vocabulary as people do, for example, when exaggerating our movements in interacting with children or slowing down our movements when interacting with the elderly. In this respect, roboticists will have to take into consideration not only behaviors that are common among all humans [46], such as the way eye and hand motion are coordinated in a reaching action, but also signals that vary widely between cultural or ethnic groups. Therefore, even if the direction of a gaze can be universally used by the robot to predict which object a person is going to take [13], the interpretation of hand gestures or of the amount of eye contact will need to be informed by the cultural context.
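As an illustration of how such a universal cue can be operationalized, here is a minimal sketch of gaze-based target prediction. The function name and interface are ours and hypothetical; this is not the implementation of [13]. Each candidate object is scored by the angle between the estimated gaze ray and the eye-to-object direction, and the best-aligned object is returned.

    import numpy as np

    def predict_gaze_target(eye_pos, gaze_dir, objects):
        """Return the object best aligned with the gaze ray.

        eye_pos: (3,) eye position; gaze_dir: (3,) gaze direction;
        objects: dict mapping object name -> (3,) position.
        Illustrative sketch only.
        """
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        best_name, best_angle = None, float("inf")
        for name, pos in objects.items():
            to_obj = np.asarray(pos, dtype=float) - eye_pos
            to_obj /= np.linalg.norm(to_obj)
            # Angle between gaze ray and eye-to-object direction.
            angle = np.arccos(np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0))
            if angle < best_angle:
                best_name, best_angle = name, angle
        return best_name, best_angle

    # Example: gaze pointing roughly at the cup.
    target, _ = predict_gaze_target(
        np.array([0.0, 0.0, 1.6]),
        np.array([0.5, 0.0, -0.5]),
        {"cup": (1.0, 0.0, 0.6), "ball": (-1.0, 0.5, 0.6)},
    )
    print(target)  # -> "cup"

In a real system the gaze estimate would come from eye or head tracking, and the scoring would need to handle noise and occlusion; the point here is only that the geometric cue itself is culture-independent.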

All of this underlines the importance of empirical research into cultural differences influencing how people interact with robots. Only in this way will the effort of adapting to the partner in human-robot interaction fall also onto the (powerful) robot's shoulders, and not only on those of the human. As a result, experiencing mutual adaptation during the interaction will make the robot behavior much more predictable and acceptable, addressing many of the fears caused by current uncertainties about these machines.

If more humane interactions are established, it will become more and more evident that robots, rather than replacing us, might support humans, performing tasks we don't like. Beyond replacing our household appliances, as some robotic vacuum cleaners or lawn mowers already do, robots might be assigned to progressively more complex and relevant duties, such as providing support to the elderly, in order to allow them a longer period of autonomous living in their home. A humane robot won't replace human contact, but will provide concrete support in coping with physically demanding tasks that a person cannot perform alone anymore.

At the same time, this can facilitate interaction with peers. For instance, use of robots may be able to help mediate the access of seniors to novel digital communication channels, making the interaction with the devices intuitive. Already, current robotic platforms presented as "personal robots" promise to move in this direction, by autonomously dealing with the technical aspects of a video call and making the call process transparent to the users.

Robots might also provide support to human therapists, since there is evidence suggesting that use of robots can bring social benefits to clinical populations. For instance, in the case of autism or dementia, it has been shown that robots can facilitate group dynamics, by increasing the occasions of interaction between patients and leading to an increment in social exchanges between patients and the therapists [32], [33].

The task of humanizing human and robot interactions is challenging, however, because robots are currently not as good as humans at adapting to their partner's needs. There are various examples of humans learning to predict non-humane machines, although with some effort, e.g., think of workers dealing with complex technical devices. To provide robots with a comparable ability to anticipate human behavior, roboticists will need a profound understanding of the basic mechanisms of human-human interaction. Robots, and in particular humanoid robots, can play an important role in this effort, as they are a valuable tool for investigating controllable, repetitive dynamics of human interactions, to derive and validate models of human social behavior [34].

We posit that the design of humane robots will bring concrete advantages to society, and that they will change the common perceptions of robots. The more people know about robots, the less they fear them [2], and mutual understanding between humans and robots increases the predictability and legibility of the machines, fostering a more relaxed and natural coexistence.

Therefore, humanizing human-robot interactions will be decisive in determining whether people will accept robots in their societies, and how close we will be to a future in which humankind and robot-kind can co-exist in safe and peaceful ways.
Acknowledgment
This work was written in the framework of the European Project CODEFROR (FP7-PIRSES-2013-612555).
Author Information
Alessandra Sciutti is the head of the Cognitive Robotics and Interaction Laboratory of the Robotics, Brain and Cognitive Sciences Department of the Italian Institute of Technology (IIT) in Genoa, Italy. Email: alessandra.sciutti@iit.it.
Martina Mara is the head of the RoboPsychology research department at the Ars Electronica Futurelab in Linz, Austria. She is also a member of the Austrian Council for Robotics and a newspaper columnist.
Vincenzo Tagliasco is founder of the Bioengineering Group at the "Istituto di Elettrotecnica" of the University of Genova. He is the initiator of the anthropomorphic robotics activities described in this article. He was the first Director of the Department of Communication, Computer and System's Science of the University of Genova.
Giulio Sandini is full professor of Bioengineering at the University of Genoa and Director of Research at the Italian Institute of Technology, where he leads the Robotics, Brain and Cognitive Sciences Department.
References
[1] European Commission, "Special Eurobarometer 427: Autonomous Systems," 2015. [Online]. Available: http://ec.europa.eu.
[2] P.K. McClure, "'You're Fired,' Says the Robot," Soc. Sci. Comput. Rev., 2017.
[3] Chapman University, "America's Top Fears," 2015. [Online]. Available: https://blogs.chapman.edu/wilkinson/2015/10/13/americas-top-fears-2015/.
[4] C. Ray, F. Mondada, and R. Siegwart, "What do people expect from robots?," in Proc. 2008 IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2008, pp. 3816–3821.
[5] K. Richardson, An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Routledge, 2015.
[6] T. Nomura, T. Suzuki, T. Kanda, and K. Kato, "Measurement of anxiety toward robots," in Proc. IEEE Int. Workshop on Robot and Human Interactive Communication, 2006, pp. 372–377.
[7] D. Patel, S.M. Fleming, and J.M. Kilner, "Inferring subjective states through the observation of actions," Proc. R. Soc. B Biol. Sci., vol. 279, no. 1748, pp. 4853–4860, 2012.
[8] A. Sciutti, L. Patanè, F. Nori, and G. Sandini, "Understanding object weight from human and humanoid lifting actions," IEEE Trans. Auton. Ment. Dev., vol. 6, no. 2, pp. 80–92, 2014.
[9] L. Sartori, C. Becchio, and U. Castiello, "Cues to intention: The role of movement information," Cognition, vol. 119, no. 2, pp. 242–252, May 2011.
[10] A. Frischen, A.P. Bayliss, and S.P. Tipper, "Gaze cueing of attention: Visual attention, social cognition, and individual differences," Psych. Bull., vol. 133, no. 4, pp. 694–724, Jul. 2007.
[11] M. Karg, A.A. Samadani, R. Gorbet, K. Kuhnlenz, J. Hoey, and D. Kulic, "Body movements for affective expression: A survey of automatic recognition and generation," IEEE Trans. Affect. Comput., vol. 4, no. 4, pp. 341–359, 2013.
[12] S. Ivaldi, S.M. Anzalone, W. Rousseau, O. Sigaud, and M. Chetouani, "Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement," Front. Neurorobot., vol. 8, pp. 1–16, Feb. 2014.
[13] O. Palinko, F. Rea, G. Sandini, and A. Sciutti, "Robot reading human gaze: Why eye tracking is better than head tracking for human-robot collaboration," in Proc. 2016 IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2016, pp. 5048–5054.
[14] A. Vignolo, N. Noceti, F. Rea, A. Sciutti, F. Odone, and G. Sandini, "Detecting biological motion for human-robot interaction: A link between perception and action," Front. Robot. AI, vol. 4, no. 14, 2017.
[15] A. Sciutti, C. Ansuini, C. Becchio, and G. Sandini, "Investigating the ability to read others' intentions using humanoid robots," Front. Psychol., vol. 6, 2015.
[16] H.S. Koppula and A. Saxena, "Anticipating human activities using object affordances for reactive robotic response," IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 1, pp. 14–29, 2016.
[17] S. Fisher, Stress and Strategy. London, U.K.: Routledge, 1986.
[18] M.E.P. Seligman, Helplessness: On Depression, Development, and Death. Freeman, 1975.
[19] R.S. Lazarus, "Cognition and motivation in emotion," Am. Psychol., vol. 46, no. 4, pp. 352–367, 1991.
[20] L.A. Pervin, "The need to predict and control under conditions of threat," J. Pers., vol. 31, no. 4, pp. 570–587, 1963.
[21] J.M. Armfield, "Manipulating perceptions of spider characteristics and predicted spider fear: Evidence for the cognitive vulnerability model of the etiology of fear," J. Anxiety Disord., vol. 21, no. 5, pp. 691–703, 2007.
[22] J.M. Armfield and J.K. Mattiske, "Vulnerability representation: The role of perceived dangerousness, uncontrollability, unpredictability and disgustingness in spider fear," Behav. Res. Ther., vol. 34, no. 11–12, pp. 899–909, 1996.
[23] M. Mori, "The uncanny valley," Energy, vol. 7, no. 4, pp. 33–35, 1970.
[24] C. Misselhorn, "Empathy and dyspathy with androids: Philosophical, fictional, and (neuro) psychological perspectives," Konturen, vol. 2, pp. 101–123, 2010.
[25] B. Busch, J. Grizou, M. Lopes, and F. Stulp, "Learning legible motion from human-robot interactions," Int. J. Soc. Robot., 2017.
[26] A.D. Dragan, S. Bauman, J. Forlizzi, and S.S. Srinivasa, "Effects of robot motion on human-robot collaboration," in Proc. HRI '15, 2015, vol. 2, pp. 1921–1930.
[27] A.D. Dragan, K.C.T. Lee, and S.S. Srinivasa, "Legibility and predictability of robot motion," in Proc. ACM/IEEE Int. Conf. Human-Robot Interact., pp. 301–308, Mar. 2013.
[28] C. Lichtenthäler and A. Kirsch, "Goal-predictability vs. trajectory-predictability," in Proc. 2014 ACM/IEEE Int. Conf. Human-Robot Interaction - HRI '14, 2014, pp. 228–229.
[29] C. Lichtenthäler, T. Lorenz, and A. Kirsch, "Influence of legibility on perceived safety in a virtual human-robot path crossing task," in Proc. 2012 IEEE RO-MAN: 21st IEEE Int. Symp. Robot and Human Interactive Communication, 2012, pp. 676–681.
[30] J.K. Choi and Y.G. Ji, "Investigating the importance of trust on adopting an autonomous vehicle," Int. J. Hum. Comput. Interact., vol. 31, no. 10, pp. 692–702, 2015.
[31] M. Mara, C. Lindinger, R. Haring, M. Moerth, and A. Mankowsky, "When humans and robots share the road: On the relevance of predictable robot behavior," not yet published.
[32] B. Scassellati, H. Admoni, and M. Mataric, "Robots for use in autism research," Annu. Rev. Biomed. Eng., vol. 14, pp. 275–294, Jan. 2012.
[33] K. Wada and T. Shibata, "Living with seal robots - Its socio-psychological and physiological influences on the elderly at a care house," IEEE Trans. Robotics, vol. 23, no. 5, pp. 972–980, 2007.
[34] A. Sciutti and G. Sandini, "Interacting with robots to investigate the bases of social interaction," IEEE Trans. Neural Syst. Rehabil. Eng., not yet published.
[35] M. Appel, S. Krause, U. Gleich, and M. Mara, "Meaning through fiction: Science fiction and innovative technologies," Psychology of Aesthetics, Creativity, and the Arts, vol. 10, no. 4, p. 472.
[36] European Parliament, "Civil Law Rules on Robotics: European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics," retrieved from http://www.europarl.europa.eu, 2017.
[37] M. Mara and M. Appel, "Science fiction reduces the eeriness of android robots: A field experiment," Comput. Hum. Behav., vol. 48, pp. 156–162, 2015.
[38] I. Pedersen, "Home is where the AI heart is," IEEE Technology and Society Mag., vol. 35, no. 4, pp. 50–51, 2016.
[39] I. Pedersen and T. Mirrlees, "Exoskeletons, transhumanism, and culture: Performing superhuman feats," IEEE Technology and Society Mag., vol. 36, no. 1, pp. 37–45, 2017.
[40] N. Sharkey and A. Sharkey, "The crying shame of robot nannies: An ethical appraisal," Interaction Studies, vol. 11, no. 2, pp. 161–190, 2010.
[41] N. Sharkey and A. Sharkey, "The rights and wrongs of robot care," in Robot Ethics: The Ethical and Social Implications of Robotics, P. Lin, G. Bekey, and K. Abney, Eds. Cambridge, MA: M.I.T. Press, 2012, pp. 267–282.
[42] K. Tracy, C. Ilie, and T. Sandel, Eds., The International Encyclopedia of Language and Social Interaction. Wiley, 2015.
[43] K. Trynacity, "Imagining a 'bot'aful life – Robots as caregivers, humans as clients," IEEE Technology and Society Mag., vol. 36, no. 2, pp. 60–66, 2017.
[44] R. Abbas, K. Michael, and M.G. Michael, "Using a social-ethical framework to evaluate location-based services in an internet of things world," Int. Rev. Information Ethics, vol. 22, pp. 42–73, 2014.
[45] M.G. Michael, S.J. Fusco, and K. Michael, "A research note on ethics in the emerging age of überveillance," Computer Communications, vol. 31, no. 6, pp. 1192–1199, 2008.
[46] A. Sciutti and G. Sandini, "Interacting with robots to investigate the bases of social interaction," IEEE Trans. Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2295–2304, 2017; DOI 10.1109/TNSRE.2017.2753879; http://ieeexplore.ieee.org/document/8068256/.
Robot Enhanced Therapy for Children with Autism (DREAM)
A Social Model of Autism
Kathleen Richardson, Mark Coeckelbergh, Kutoma Wakunuma, Erik Billing, Tom Ziemke, Pablo Gómez, Bram Vanderborght, and Tony Belpaeme

Digital Object Identifier 10.1109/MTS.2018.2795096
Date of publication: 2 March 2018

The development of social robots for children with autism has been a growth field for the past 15 years. This article reviews studies in robots and autism as a neurodevelopmental disorder that impacts social-communication development, and the ways social robots could help children with autism develop social skills. Drawing on ethics research from the EU-funded Development of Robot-Enhanced Therapy for Children with Autism (DREAM) project (Framework 7), this paper explores how ethics evolved and developed in this European project.

The ethics research is based on the incorporation of multiple stakeholders' perspectives, including autism advocacy; parents of children with autism; medical practitioners in the field; and adults with Asperger's disorder. Ethically, we propose that we start from the position that the child with autism is a social being with difficulties in expressing this sociality. Following from this core assumption, we explore how social robots can help children with autism develop social skills. We challenge the view that children with autism prefer technologies over other kinds of activities (exploring nature or the arts), engagement with other living beings (animals),
or that they lack interest in human relationships (particularly with close caregivers).

Autism Spectrum Disorder
According to biomedical science, Autism Spectrum Disorder (ASD) is characterized by widespread abnormalities in social interactions and communication, as well as severely restricted interests and highly repetitive behavior [41]. The diagnostic criteria for ASD included in the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5) [41], refer to ASD as a single diagnosis category that includes autistic disorder (autism), Asperger's disorder, childhood disintegrative disorder, and pervasive developmental disorder not otherwise specified [41]. Autism is a very specific difference in the ability to read social cues, understand social interaction, and respond appropriately. In general terms, the level of cognitive ability, intelligence, perception, use of language, degree of withdrawal, excitability, self-injury, and physical appearance will vary greatly in autistic persons [43].

According to the Centers for Disease Control and Prevention (CDC), ASD occurs in 1 in 68 children and is almost five times more common among boys than girls: 1 in 42 boys versus 1 in 189 girls. While autism affects more males than females, new research has begun to look at the gender bias in the testing procedures for autism, such as the Autism Diagnostic Observation Schedule (ADOS), and to highlight different ways that autism can be overlooked in females, for instance through "camouflaging" techniques. Females with autism, for instance, use gestures more frequently than males with autism [44]. ASD behaviors include compulsions, echolalia, and motor mannerisms such as hand flapping and body rocking [45].

DREAM is a consortium made up of engineers, computer scientists, psychotherapists and psychologists, and ethicists. The robotics research is driven by the clinical team of psychologists and psychotherapists at the Universitatea Babeş-Bolyai in Romania. The members of the clinical team are schooled in Applied Behavioral Analysis (ABA), a learning theory based on behavioral repetition and cognitive association. The DREAM project uses a well-defined clinical psychotherapeutic method. As children with autism have a deficit in social behaviors, three tasks have been identified as crucial to social interaction, communication, and learning: turn-taking, joint attention, and imitation. Turn-taking involves reciprocal interaction with others and is necessary for collaborative learning [21]. Imitation is a vital human
skill for social cognition, and helps support interactions with others, speech and language, and cognitive development [22], [39]. Joint attention is the ability to attend to objects in the same space and is enacted through pointing or gaze gestures [10].
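To make the first of these behaviors concrete, here is a minimal sketch of how a turn-taking trial might be scripted in a robot-assisted session. All names and the robot interface are hypothetical; this is our illustration, not the DREAM software. The robot models the action, hands the turn to the child, waits for a response within a timeout, and gives a social reward on success.

    import time

    def turn_taking_trial(robot, child_responded, timeout_s=10.0):
        """Run one scripted turn-taking trial.

        robot: object exposing say()/perform_turn()/prompt_again();
        child_responded: callable returning True once the child acts
        (from sensors, or a therapist keypress in a Wizard of Oz setup).
        Hypothetical interface, for illustration only.
        """
        robot.perform_turn()             # robot demonstrates its turn
        robot.say("Your turn!")
        start = time.monotonic()
        while time.monotonic() - start < timeout_s:
            if child_responded():
                robot.say("Well done!")  # social reward on success
                return True
            time.sleep(0.1)
        robot.prompt_again()             # gentle re-prompt on timeout
        return False

Joint attention and imitation trials can be scripted analogously, with the success condition replaced by, e.g., the child following a pointing gesture or reproducing a demonstrated movement.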
In traditional ABA therapy, the psychotherapist works with the child to develop these skills. In Robot Enhanced Therapy (RET), the robot is used as a tool by the therapist to help the child embed these social behaviors into their learning repertoire. The technical team provides support to the clinical team, who establish the challenges the technical team must resolve if the robot is to carry out any useful ABA therapy.

Research into the therapeutic development of robots for children with autism has relied strongly on biomedical perspectives of autism as a deficit in social-communication and interaction.

We explore the ethics of robots for children with autism as part of research conducted on the Development of Robot-Enhanced Therapy (DREAM) project. The DREAM project was funded by the European Commission Framework 7 science program. The project runs for five years from 2014, and will conclude in September 2018. In the DREAM description of work (DOW), the project objectives are described as follows:

  The scientific and technological goals of the DREAM project are the study and development of artificial cognitive robotic systems to support psychotherapy for children with mental disorders, in particular children with Autism Spectrum Disorders (ASD). Although some research projects focus on improving efficiency in robot assisted therapies (RATs), they mainly consider only relatively passive or remote controlled tele-operated robots. In the long term, however, therapy robots need to become more autonomous in order to reduce the burden on human therapists, giving them a powerful tool for clinical interventions and diagnostic analysis, and providing ASD children consistent therapeutic experience (DREAM Description of Work FP7-ICT-2013-10-611391).

According to DREAM goals, children with ASD exhibit a preference for interacting with non-human agents (empathizing-systematizing theory), a theory developed by autism expert Simon Baron-Cohen [2]. This has meant that researchers in the robotics of autism have come to the issue with the premise (taken from a strand of autism studies inspired largely by Baron-Cohen) that children with autism are deficient in sociality (the ability to relate to others and themselves), and from this assumption relate to the child with autism as though the robot will be a preferred alternative to a human being.

The scientific team at the very onset decided to include ethics in the project, and this article contains the findings of the ethics team, in light of the assumptions, goals, and practices of the robotic scientists and the clinical psychotherapists.

As an ethics team, we have tried to broaden the discussion about what autism is, and show that it is neither a thing nor a fact, fixed in space or time, but undergoes transformations as a concept and set of practices. Moreover, as the ethics team, we stress how important it is to show that children with autism have a different sociality, rather than an absent one. As the ethics team, we believe in starting from the premise that children with autism have strong emotional attachments to their primary caregivers, and express interest in other activities besides engaging with technological items. They also enjoy relationships with animals, physical activity, and artistic play.

This article is informed by research conducted as part of the ethics studies of the DREAM project, and the data are drawn from qualitative research.

In our ethical study we collected data using qualitative data collection techniques including: drawing on technical studies of robot therapy for children with autism; participant observation of DREAM experiments; interviews with parents of children receiving the therapy, and parents of children not receiving the therapy; interviews with autism specialists and educationalists; attendance at playgroups and community groups; attendance at workshops on robot ethics; and meetings with autism specialist scholars and healthcare practitioners.

FIGURE 1. Autism diversity poster.

Additionally, we organized a mini public (meeting) in early 2017 to elicit general views on the development of
robots in healthcare, and to engage with the public's concerns and hopes about these issues. A mini public is an event that brings together different stakeholders to deliberate on a topic of personal or political importance. The mini public is a form of "deliberative democracy" [23] where "experts" deliver information to the public for their consideration. The political sciences developed mini public methodologies to encourage public engagement and to help develop policy. The DREAM mini public explored stakeholders' perceptions on healthcare and robots. We invited experts in the field of medical robots to give presentations of current and predicted ways in which robots will be used in healthcare. The mini public attendees deliberated on these issues and were invited to give opinions on some of their concerns and hopes about the future of healthcare and robots for children with autism.

As LaFont [23] explains, "deliberation" follows information. Ordinary members of the public are not "experts" in fields (in our case, none of the participants were experts in robotics), and to compensate for an information deficit, the invited experts must impart useful knowledge to the attendees to help them in their deliberation process. Participants raised concerns about the future of healthcare, namely that the National Health Service (a medical service free to all British and European citizens) would be less supported financially. The attendees also expressed concern that medical professionals might be replaced by technologies to save costs at the expense of patient care.

Qualitative research methods allow for personalized experiences to be called forth and provide autobiographical and contextual information. Moreover, as robot therapy becomes mainstream in autism circles, addressing the normative models and frameworks that underlie the use, development, and potential of robots to assist children with autism is crucial.

We carried out in-depth interviews lasting from 30 minutes to two hours. Our paper is informed by the following sources:
■ Interviewed 4 parents receiving robot therapy in Romania.
■ Interviewed 8 parents of children with ASD in England.
■ Interviewed 1 deputy head of an autism specialist school.
■ Interviewed 1 professional practitioner of the Horseboy method in Texas. (The Horseboy method, an equine therapy approach, is a therapeutic method using horses to help support neuro-psychiatric conditions. It was developed by Rupert Isaacson with his son Rowan, who has autism.)
■ Interviewed 4 associated professionals (technologist, building designer interested in autism).
■ Interviewed 2 children with autism (full transcripts in the appendices).
■ Met with six autism academics and established a working network.
■ Attended regular meetings of a social group for adults with Asperger's.
■ Attended over 20 workshops and meetings related to robot ethics.
■ Developed a partnership with the Critical Autism Network (an international research collaboration on autism that includes partners from Sweden, U.K., Brazil, and Italy).

The research we present here will open the debate to the ways in which children with autism are sometimes presented in the robotics literature as beings detached from intimate relationships and preferring instead mechanical systems. This is an important debate to hold in the community. The consequences of avoiding this conversation in the community could have serious implications. If children with autism are presented as preferring objects (particularly robots and other technological items) over their interactions with other humans, we must ask 1) Is this true, and 2) What are the consequences of this approach?

In regard to the former issue, parents of children with autism and autism educational providers stress the role of intimate relationships for children with autism, and the importance of an empathetic relationship with educational or healthcare providers who come into regular contact with the child.

In regard to the second issue, could the framing of autism as an asocial condition to promote robotics studies in this area impact negatively on children with autism? The children are already singled out as having specific kinds of qualities, rather than being seen, as many parents and teachers explained, as children who build affective relationships with people and animals they care about. Could children with autism be othered by the framing of autism in the robotics literature — led by researchers who have a stake in emphasizing (or over-emphasizing) the benefits of robots? Othering creates a hierarchical order for sorting human beings, with white heterosexual wealthy and able-bodied males at one extreme, and people of color, women, children, or people with disabilities spread through the hierarchy.
Othering works to the detriment of humanity, as it can create practices that are based on stereotypes. In the history of humanity, racial prejudice, sexism, and anti-disability prejudice are all ways in which people have been othered on the basis of their racial origin, sex, or abilities.

In the field of autism, the scientific community's attempts to produce robots that could help children with autism and contribute to well-established therapeutic goals will not be helped by making analogies between children with autism and robots. If we chart the rise in using humanoid robots for children with autism, we find analogies between children with autism and robots present in the earliest works. Take, for instance, the pioneering work of Brian Scassellati, whose early papers include Theory of Mind for a Humanoid Robot [34] and Implementing Models of Autism with a Humanoid Robot [35]. A more recent example is by a cognitive scientist who in a recent paper made the claim

"Almost all robots are autistic; very few humans are" [46].

The paper is also titled "Curing Robot Autism: A Challenge." The author goes on to write "Robots and other synthetic agents (e.g., virtual humans) are generally Autistic" [46]. If robots are autists, then are autists robots? What exactly is this language implying about human beings with autism? The analogy between an autistic mind or state and a machine is taken into robotics from the field of developmental psychology and autism studies, particularly the model of autism developed by Simon Baron-Cohen. Baron-Cohen also coined the term "mindblind" in his book Mindblindness: An Essay on Autism and Theory of Mind [3]. If an autistic child is mindblind, so figured Scassellati, a robot, which has no mind, is also mindblind. This particular way of understanding autism has been criticized by many researchers, including Runswick-Cole, Mallett, and Timimi [37], Timimi and McCabe [24], [25], and Collins [9], who argue against biomedical models (or mental disorder models) and who set up the Critical Autism Network. These researchers argue that these deficit models fail to take into account the varied complexity and real, lived life experiences of people with autism, and the importance of their social relationships. This theme was confirmed in our interviews and during our meetings with adults with Asperger's. Adults with Asperger's described their hurt at being socially excluded from peer networks during their school years. Rather than preferring objects to people, many had little support, and autism awareness was often lacking in their schools. Autism awareness is important as it can help people around those with autism to be sensitive to the behaviors of autistic people. Specialist help and support for the autistic person, combined with more autism awareness in the school or work environment, reduces feelings of social isolation or distress [33].

Including Stakeholder Perspectives in Ethics
Ethics is a school of philosophy devoted to exploring what is right or wrong and developing reasons for judgments informed by ideas of what it means to be human and what it means to be part of a social community. The ethics approach we use in the DREAM project problematizes the "top-down" model of the "expert" (philosopher, psychiatrist, etc.) who knows the "truth" about the world, and comes to reason about the truth outside of relations with others. DREAM ethics is built around the involvement of multiple stakeholders who hold different amounts of power, and are embedded in different knowledge systems and practices [8], [38].

We refer to the social model of disability and the difference model, which explore how bio-medical critiques and practices, and social norms about "ability" and "disability," impact on the life experiences of children and adults with autism [15], [27]. In its most extreme form, the social model of disability suggests that all disability is a social construct: there is no ability or disability, but normative models that privilege certain abilities over others organize society and normal functioning. We use a developmental biopsychosocial model (SOCIAL) which "incorporates the biological underpinnings and socio-cognitive skills that underlie social function (attention/executive function, communication, socio-emotional skills), as well as the internal and external (environmental) factors that mediate these skills" [5], recognizing the real difficulties children and adults with autism experience. We believe that autism spectrum conditions awareness can positively promote understanding of the difficulties experienced by a child or adult with autism, and the family of the person. However, in our ethics we include the multiplicity of perspectives to give a fuller picture of what it might be like to have autism, to be a parent of a child with autism, or to be someone in the robotics field wanting to develop socially beneficial robotic systems.

The ethics we employ in DREAM has to take into account the multiple perspectives of the consortium team, as well as parents of children with autism, adults with Asperger's, government and trusted healthcare providers, healthcare specialists, politicians, educationalists, and members of the general public. By taking the views of different stakeholders into account, we dispense with the top-down model and instead give credence and value to the experiences of all actors. This is pertinent because all lived experiences need to be taken into account and given some value in order to understand people's lived
realities, including the solutions they may attach to the challenges associated with their beliefs and value systems.

Autism Models and Change
Autism is a complex congenital condition involving severe delays and deficits in speech, language, communication, and social interaction skills. The use of robots as therapeutic tools for children with autism is inspired by a number of factors summarized here:

  The clinical use of interactive robots is a promising development in light of research showing that individuals with ASD: a) exhibit strengths in understanding the physical (object-related) world and relative weaknesses in understanding the social world… b) are more responsive to feedback, even social feedback, when administered via technology rather than a human,…and c) are more intrinsically interested in treatment when it involves electronic or robotic components (cited in [47, p. 2]).

In the field of robot therapy for children with autism, the theories of autism specialist Simon Baron-Cohen, particularly the Empathizing-Systemizing (E-S) theory of autism and the Theory of Mind Mechanism (ToMM), continue to impact the underlying theory of the potential benefits of robot therapy for children with autism spectrum conditions [8], [20], [48]. Recent studies have ranged from the development of a multilayer reactive system for robots "creating an illusion of being alive" [13] to exploring how robots could engage in "synchrony and reciprocity" in social encounters between therapy robots and children with autism [26]. The push to enhance the technology to explore more possible therapy scenarios is technically demanding, with real-time reciprocal social interaction still problematic. Moreover, many researchers work within the confines of existing robotic technology, virtual reality, and computer technologies developed for other purposes and studied in relation to an autism-focused requirement, e.g., turn-taking, joint attention, or imitation. DREAM's robot-enhanced technological software and hardware designed specifically for autism therapy has the potential to move the research forward. Much of the literature on robot therapy for autism rarely accounts for the changing meaning of autism over time. Autism, as a category, is not fixed in time and space, and its diagnosis and relevance to medicine and society is constantly shifting. For example, in the 1980s, only twenty percent of persons diagnosed with autism had an I.Q. above 80, whereas today this figure is radically different, as with the 1994 DSM-IV autism began to include persons with Asperger's, who typically had a higher I.Q. [19]. Furthermore, the "deficit model" of autism by Baron-Cohen et al. is challenged in some quarters by disability and difference advocates and new empirical studies [7].

Using particular types of language and premises to describe what a person with autism is like might be helpful to roboticists, but is it useful for children and adults with autism? Robots are not autistic, as machines cannot be autistic, and the analogy or metaphor of people with autism to machines and robots is highly problematic. If robots are autists, then are autists robots? What exactly is this language implying about human beings with autism? Mechanistic descriptions of autism have been used in robotics because they are drawn from the Baron-Cohen model, without taking into account the varied complexity and real lived life experiences of people with autism.

For example, Baron-Cohen's emphasis on a lack of empathy in individuals with autism has provoked criticism from some researchers, adults with Asperger's, and parents [33].

The use of particular kinds of language can impact the acceptance or rejection of autism-focused technology or medicine. One unsuccessful campaign was launched by Autism Speaks in 2014, titled MSSNG. The MSSNG campaign referred to a genome sequencing project, but individuals with autism took issue with the explicit "neuro-typical" language in the public launch. This led to a backlash from the autism community, particularly adults with Asperger's and parents of children with autism. Also, there are some adults with autism who reject a biomedical approach that aims to "cure" autism. Autism advocates see autism as part of their identity. Bagatell [1], for example, describes attending an Asperger's group with a member wearing a T-shirt reading "eye contact is overrated," as group members subvert normative assumptions about what is socially normal. In some cultures it is considered disrespectful for a young person to maintain eye contact with an older person, or for a female person to maintain eye contact with a male, so eye-contact norms can vary from culture to culture [28].

It is important in the DREAM project that language used to describe children or adults with autism is carefully considered, as such language can lead to negative impacts on persons with autism and their families. As Richardson [31] has explained, the use of mechanical metaphors can be taken to extremes, and persons with autism are often described as occupying a state between a typical person and a machine.

Themes from the Interviews
The research identified a number of themes relevant for discussion. These are as follows:

No One Autism for All (Nor One Robot for All).
We found that among the cohort of our interviewees,
children had a wide range of behavioral, social, learning, affective, and cognitive difficulties. When developing a robot therapy it is vital that the diversity of children be taken into account, because at present it feels as if one size (one type of robot) fits all scenarios for children with autism, despite their varied challenges.

Humanistic Impulses Behind Robot Therapy Might Be Driven by Resource Issues and Not the Best Interests of the Children.
When we asked parents about any concerns about robot therapy, some pointed to concerns that technologies were favored over other therapeutic forms because they are less labor intensive, such as Speech and Language Therapy or traditional Applied Behavioral Analysis therapy. As an ethics team we anticipate it might be more expensive at present to deliver ABA robot therapy than typical ABA therapy, as there are multiple technological devices involved (robot, computer, hard drive, Kinect system, etc.), as well as the use of an extra person at the keyboard controlling the Wizard of Oz system.
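For readers unfamiliar with the term, a Wizard of Oz setup routes the robot's apparently autonomous behavior through a hidden human operator. The following is a minimal sketch of such an operator console, with hypothetical behavior names; it is our illustration, not the DREAM implementation.

    # Minimal Wizard of Oz operator console: a hidden operator picks the
    # robot's next pre-scripted behavior. Hypothetical sketch only.
    BEHAVIORS = {
        "1": "wave",
        "2": "point_to_object",
        "3": "praise",        # e.g., robot says "Well done!"
        "4": "prompt_turn",   # e.g., robot says "Your turn!"
    }

    def operator_loop(execute_behavior):
        """Read operator keypresses and trigger robot behaviors."""
        print("WoZ console:", BEHAVIORS, "(q quits)")
        while True:
            key = input("behavior> ").strip()
            if key == "q":
                break
            name = BEHAVIORS.get(key)
            if name is None:
                print("unknown key")
                continue
            execute_behavior(name)  # forward to the robot middleware

    if __name__ == "__main__":
        # Stand-in for the robot side: print the selected behavior.
        operator_loop(lambda name: print("[robot] executing:", name))

The design point is that the child experiences an apparently autonomous robot while a trained adult retains full control, which is part of why such setups add staffing cost.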
Parents Wondered to What Extent Introducing a Robot into a Child's Life at an Early Age Could Impact on Their Learning.
As the child receiving robot therapy interacts with the robot for short periods of time, we do not envisage this to be a problem for now. In the longer term, if robots become more sophisticated, then perhaps more ethical study needs to be done on the impacts of longer-term exposure to a robot on the child's development. However, if demonstrable effects are noticed during the DREAM project, it will be important to highlight and discuss these.

Parents of children with autism were often in receipt of several therapies. The main therapy of U.K. parents was Speech and Language Therapy, which is provided by the National Health Service. Most parents are offered and receive only a few sessions. Other therapies parents cited included music therapy, horse therapy, sensory diets,1 and the movement method (a parent-inspired therapy focusing on learning and movement).

1. A sensory diet, developed by Patricia Wilbarger, gives a child sensory experiences designed to provide enough sensory stimulation to assist the child in emotional, cognitive, and motor self-regulation. Sensory diets are provided by teachers at schools for children with autism and other learning disabilities.

As researchers developing the technology and therapy of ABA, it is important to know that parental views on ABA as a therapy were mixed. Some parents identified it as an expensive and time-consuming therapy. Some even referred to it as "robotic," as it relies on repeating the same behaviors over and over again and rewarding positive behaviors.

Reactions to Baron-Cohen's Perspectives on Autism Spectrum Disorders
Central to the DREAM theoretical starting place is the importance of Simon Baron-Cohen's particular perspectives on autism:

  The rationale of using robots for ASD therapies is based on the systemizing theory of Baron-Cohen: children with ASD prefer the interaction with a robot over humans because, in contrast to the human social world, robots are highly lawful systems. Being simpler and more predictable than humans, robots have the potential to become interactive partners for ASD children and can serve as an intermediate step for developing better social interaction with humans. The working assumption is that, based on the positive responses of children with ASD towards robots, the child will be more motivated and engaged in learning activities, so the abilities will be mastered earlier with less time and human resources [42].

In our interviews, parents and academics challenged Baron-Cohen's perspectives on autism as typifying an autistic person as lacking in empathy [33], lacking in theory of mind [7], and disinterested in social and communicative relationships. Baron-Cohen's "deficit" model of autism, or describing children with autism as lacking empathy, is now challenged in many quarters of the autism community who advocate the social model of disability: "The central tenet of the social model of disability is therefore its rejection of the conception of disability as an individual problem, and instead seeing disability as a social construction" [6].

All the parents interviewed in the DREAM study agreed that their children enjoy interacting with computers (iPads, PCs, video games), but they also encouraged and supported their children's experiences with nature and animals. During our participant observation of experiments in Romania, and in the U.K. (the Explorers workshop), children preferred their primary caregivers and voluntarily spent more time close to their caregiver (or requested to be close to their caregivers) than any game or activity. These findings suggest that, as with typically developing children, autistic children may initially get excited when first using a new technology, but may lose interest and
revert to the one person or people they are closest to. Therefore, relating to the children in ways that expect them to prefer robots may be a disservice to them, which may lead to a lack of investment in helping them develop their social skills through more human than robot interaction. During our participant observation at the Leicestershire Asperger's group, members expressed a strong interest in engaging in social activities even though they struggled with social understanding. Here are some responses from the parents:

  "We always joke he's a lover not a fighter. He's really affectionate. His hormones haven't kicked in yet, so he doesn't hit people, the only sort of challenges we have — he runs off, he's a runner."

  "Nobody has actually thought about the issue is, it is that on a Tuesday morning, his taxi is different. He didn't like the taxi driver so he gets in the taxi, taxi driver winds him up, he gets out the car, doesn't really know what to do, somebody said hello to him and actually he wants to go — that taxi driver is an idiot. So he hits the person who has targeted him, we end up restraining him. So what we brought in with the calm model is that we actually become, 'what is the function of the behavior that the child or young person is displaying? What are they trying to communicate with that? What state are they in?'"

  "Because we are devaluing the relationship, for a person with autism, they need things acknowledging us so they can deal with them, know them and shape them and move on. And if we are not doing that, then we perpetuating the cycle actually and they become less empathetic. Because we are not supporting empathy and we are not supporting those kind of things."

Moreover, adults at the Asperger's group can choose voluntarily which activities to participate in, and during our observations of the group many chose to attend activities that explored social relating. Adults in the group ranged in age from 16 to 65 years old, and there was a mixture of male and female attendees. Adults in the group were asked questions such as "what should you do if you go to a party?" or "What are the qualities of people we like or don't like?"

We believe that the ethics of child-robot interaction, for helping children with autism develop social behaviors, should value and emphasize the importance of affective attachments for the child. These important relationships in the child's life include the mother, father, siblings, or other significant caregiving others. This approach does not discount or exclude the real ontological and neurodevelopmental difficulties experienced by a child with autism. Rather than reject opportunities to socially interact with others, the adults in this group actively participated in developing their social skills by engaging in regular workshops to improve their social skills. Adults were given specific scenarios and asked to make choices about appropriate social behaviors. Questions included "what makes a good friend" or "what makes a good work colleague." (See Figure 2.)

FIGURE 2. A social skills workshop at a group for Adults with Asperger's, Leicester, U.K.

In conversations with the young adults, they reported that bullying had been a problem for them, and although they wanted to make friends at school, they had not been accepted by their classmates. Perhaps this may go some way to explain why children with autism might seem to prefer using robots or technological tools. Also, this shows that, as with any child facing bullying or rejection, they eventually turn to things that they see as more accepting of them. This is different from lacking social skills and preferring robots.

In Romania, observing the experiments, it was clear there were strong bonds between the children with autism and their parents. The children actively tried to keep their parents close by during the experiments.

Reflections
If we are to build better technologies for the benefit of humanity, we must ensure that we start from the accurate premise that all humans share a common identity as a species. As a species, social attachment to others is crucial to each human being's survival. Children come into the world without the cognitive, motor, social, and emotional skills necessary to survive. These developmental aspects of the human being develop over time with help and support from adult caregivers. Moreover, as a society we have established ethical principles about the way we treat each other as human
beings, and enshrined in law is a respect for the dignity of human beings regardless of race, class, sex, or ability. It has been necessary to enshrine these rights in juridical-legal systems because, unfortunately, the history of humanity is littered with abuse and exclusion, including slavery, racism, and genocide.

On our research team there were a variety of perspectives about what makes us human, and what the similarities and differences between humans and machines are. As a research collective we approach these issues in different ways. It is fortunate to work in a scientific community that allows this diversity of perspective. In this paper, however, we have tried to consider the consequences of using particular kinds of models of autism that go on to inform the premises and consequential practice of the research and development into robots for children with autism. Robotic science, clinical psychotherapy (influenced by the biomedical model of disability and difference), and the ethics of autism and robotics construct, investigate, and problematize issues in very different ways. As an interdisciplinary team we have approached the issue of autism and robots from different perspectives: experimental, clinical, engineering, philosophical, and anthropological. Synthesizing these approaches can be challenging, as each discipline has its own vocabulary, history, methodologies, and unique data preferences.

Future Steps
During our first wave of ethics studies [8], built around a quantitative survey, we found that robot therapy for children with autism was viewed positively by our interview cohort, including parents of children with autism. Our target population was parents and therapists in Romania, Belgium, the Netherlands, and England. Participants were recruited based on databases of persons involved in our past research, and messages were posted on relevant blogs, Facebook, newsletters, and websites of autism organizations. A total of 416 subjects participated in the study. Data from 22 participants were excluded from the analysis since their responses were incomplete.
In this study conducted in 2014, 23% of participants were parents of children with ASD and 17% of the participants were therapists or teachers of children with ASD. The analysis of the distribution of responses to the first two questions, "It is ethically acceptable that social robots are used in therapy for children with autism" (85% agree) and "It is ethically acceptable that social robots are used in healthcare" (85% agree), indicates that a great majority of the respondents agree with using robots in the healthcare system, including in robot-assisted therapy for ASD children. This is somewhat surprising, given that according to the Eurobarometer study many people in Europe do not accept the use of robots in healthcare. Note the difference with the Eurobarometer results about care mentioned above; apparently the autism community is far more positive about using robots in healthcare, including in autism therapy. The results from this research study can be found at [8].
In order to build on this trust offered by parents, it is proposed to embed humanistic ethics in any study of robots and autism, and always situate the person with autism as a social human being with important intimate attachments. A child with autism does not have an absent sociality, but a different sociality. The burden on people with autism can be eased when people around the autistic person gain more awareness of the difficulties of social communication. Anthropologists Ochs and Solomon [29] referred to this as an "autistic sociality," rather than an absent or deficient sociality.

The autism community is not a homogenous community, but is made up of medical experts, educationalists, children and adults with autism, and autism advocates. Any research into the development of robots for children with autism must consider that autism narratives are influenced by a heterogeneous set of voices, often contradictory and conflictual. From the ethics perspective, it is important that this heterogeneity is taken into account when developing robots for children with autism.

Finally, as a generation of children with autism is exposed to robots in experimental settings in research labs, or in therapeutic settings in clinics, what will be the long-term consequence of these therapeutic interventions for the children? What will be the child's memories of their own experiences? Will children as they become adults reflect on these encounters positively? These are questions we are not able to answer, but they are important to consider in the here and now.

Acknowledgment
This paper was funded as part of the Seventh Framework Programme, grant agreement number 611391, Development of Robot-Enhanced Therapy for children with Autism spectrum disorders (DREAM).

Author Information
Kathleen Richardson, Mark Coeckelbergh, and Kutoma Wakunuma are with De Montfort University, Leicester, U.K.
Erik Billing and Tom Ziemke are with the University of Skövde, Sweden.
Pablo Gómez and Bram Vanderborght are with Vrije Universiteit Brussel, Belgium.
Tony Belpaeme is with the University of Plymouth, Plymouth, U.K.
References
[1] N. Bagatell, "From cure to community: Transforming notions of autism," Ethos, vol. 38, no. 1, pp. 33-55, 2010.
[2] S. Baron-Cohen et al., "Transported to a world of emotion," McGill J. Medicine, vol. 12, no. 2, 2009.
[3] S. Baron-Cohen, Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: M.I.T. Press, 1997.
[4] S. Baron-Cohen, The Essential Difference. U.K.: Penguin UK, 2004.
[5] M.H. Beauchamp and V. Anderson, "SOCIAL: An integrative framework for the development of social skills," Psychological Bull., vol. 136, no. 1, p. 39, 2010.
[6] C. Brownlow, "Presenting the self: Negotiating a label of autism," J. Intellectual and Developmental Disability, vol. 35, no. 1, pp. 14-21, 2010.
[7] C. Brownlow and L. O'Dell, "Challenging understandings of 'Theory of Mind': A brief report," Intellectual and Developmental Disabilities, vol. 47, no. 6, pp. 473-478, 2009.
[8] M. Coeckelbergh, C. Pop, R. Simut, A. Peca, S. Pintea, D. David, and B. Vanderborght, "A survey of expectations about the role of robots in robot-assisted therapy for children with ASD: Ethical acceptability, trust, sociability, appearance, and attachment," Science and Engineering Ethics, vol. 22, no. 1, pp. 47-65, 2016.
[9] G. Collins, "Does a diagnosis of ASD help us to help a person with intellectual disabilities?," in Re-Thinking Autism: Diagnosis, Identity and Equality, K. Runswick-Cole, R. Mallett, and S. Timimi, Eds. London, U.K. and Philadelphia, PA: Jessica Kingsley, 2016.
[10] T. Charman, "Why is joint attention a pivotal skill in autism?," Philosophical Trans. Royal Society of London B: Biological Sciences, vol. 358, no. 1430, pp. 315-324, 2003.
[11] C.A. Costescu and D.O. David, "Attitudes toward using social robots in psychotherapy," Erdelyi Pszichologiai Szemle = Transylvanian J. Psychology, vol. 15, no. 1, p. 3, 2014.
[12] D.R. Dixon, T. Vogel, and J. Tarbox, "A brief history of functional analysis and applied behavior analysis," in Functional Assessment for Challenging Behaviors. Springer, 2012, pp. 3-24.
[13] P.G. Esteban et al., "A multilayer reactive system for robots interacting with children with autism," arXiv preprint arXiv:1606.03875, 2016.
[14] K. Gillespie-Lynch et al., "Selecting computer-mediated interventions to support the social and emotional development of individuals with Autism Spectrum Disorder," in Special and Gifted Education: Concepts, Methodologies, Tools, and Applications, 2016, p. 32.
[15] T. Grandin, "An inside view of autism," in High-Functioning Individuals with Autism. Springer, 1992, pp. 105-126.
[16] R.R. Grinker, "Commentary: On being autistic, and social," Ethos, vol. 38, no. 1, pp. 172-178, 2010.
[17] R.R. Grinker, Unstrange Minds: Remapping the World of Autism. Da Capo, 2008.
[18] A. Hiniker, S.Y. Schoenebeck, and J.A. Kientz, "Not at the dinner table: Parents' and children's perspectives on family technology rules," in Proc. 19th ACM Conf. Computer-Supported Cooperative Work & Social Computing, 2016, pp. 1376-1389.
[19] G. Hollin, "Autism, sociality, and human nature," Autism, 2014.
[20] C.A. Huijnen et al., "Mapping robots to therapy and educational objectives for children with autism spectrum disorder," J. Autism
[26] T. Lorenz, A. Weiss, and S. Hirche, "Synchrony and reciprocity: Key mechanisms for social companion robots in therapy and care," Int. J. Social Robotics, vol. 8, no. 1, pp. 125-143, 2016.
[27] R. Mallett and K. Runswick-Cole, "The commodification of autism," in Re-Thinking Autism: Diagnosis, Identity and Equality, K. Runswick-Cole, R. Mallett, and S. Timimi, Eds. London, U.K. and Philadelphia, PA: Jessica Kingsley, 2016, p. 110.
[28] A. McCarthy et al., "Cultural display rules drive eye gaze during thinking," J. Cross-Cultural Psychology, vol. 37, no. 6, pp. 717-722, 2006.
[29] E. Ochs and O. Solomon, "Autistic sociality," Ethos, vol. 38, no. 1, pp. 69-92, 2010.
[30] B. Robins, K. Dautenhahn, and J. Dubowski, "Does appearance matter in the interaction of children with autism with a humanoid robot?," Interaction Studies, vol. 7, no. 3, pp. 509-542, 2006.
[31] K. Richardson, "The robot intermediary: Mechanical analogies and autism," Anthropology Today, vol. 32, no. 5, pp. 18-20, 2016.
[32] B. Robins et al., "Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills?," Universal Access in the Information Society, vol. 4, no. 2, pp. 105-120, 2005.
[33] J. Robinson, "Participatory research with adults with Asperger's syndrome: Using spatial analysis to explore how they make sense of their experience," 2014; https://www.dora.dmu.ac.uk/bitstream/handle/2086/11040/PhD%20final%20version%205%203%2015.pdf?sequence=1&isAllowed=y.
[34] B. Scassellati, "Theory of mind for a humanoid robot," Autonomous Robots, vol. 12, no. 1, pp. 13-24, 2002.
[35] B. Scassellati, "Implementing models of autism with a humanoid robot," 1999, unpublished; http://www.cs.yale.edu/homes/scaz/abstracts/1999/scaz3.pdf.
[36] B. Scassellati, H. Admoni, and M. Mataric, "Robots for use in autism research," Annual Review of Biomedical Engineering, vol. 14, pp. 275-294, 2012.
[37] K. Runswick-Cole, R. Mallett, and S. Timimi, "Future directions," in Re-Thinking Autism: Diagnosis, Identity and Equality, K. Runswick-Cole, R. Mallett, and S. Timimi, Eds. London, U.K. and Philadelphia, PA: Jessica Kingsley, 2016.
[38] B.C. Stahl and M. Coeckelbergh, "Ethics of healthcare robotics: Towards responsible research and innovation," Robotics and Autonomous Systems, 2016.
[39] A. Tapus et al., "Children with autism social engagement in interaction with Nao, an imitative robot – A series of single case experiments," Interaction Studies, vol. 13, no. 3, pp. 315-347, 2012.
[40] B. Vanderborght et al., "Using the social robot Probo as a social story telling agent for children with ASD," Interaction Studies, vol. 13, no. 3, pp. 348-372, 2012.
[41] American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders, 5th ed. Arlington, VA: American Psychiatric Assoc., 2013.
[42] DREAM, Description of Work (unpublished internal document), Annex 1, Part B, p. 106.
[43] C. Trevarthen, K. Aitken, D. Papoudi, and J. Robarts, Children with Autism: Diagnosis and Intervention to Meet Their Needs. London, U.K.: Jessica Kingsley, 1998.
[44] A. Rynkiewicz, B. Schuller, E. Marchi, S. Piana, A. Camurri, A.
and Developmental Disorders, vol. 46, no. 6, pp. 2100-2114, 2016.
Lassalle, and S. Baron-Cohen, “An investigation of the ‘female
[21] T. Ikegami and H. Iizuka, “Turn-taking interaction as a coopera-
camouflage effect’ in autism using a computerized ADOS-2 and
tive and co-creative process,” Infant Behavior and Development,
a test of sex/gender differences,” Molecular Autism, vol. 7, no. 1,
vol. 30, no. 2, pp. 278-288, 2007.
2016.
[22] B. Ingersoll, “The social role of imitation in autism: Implica-
[45] J.L. Matson and T.T. Rivet, “Characteristics of challenging behav-
tions for the treatment of imitation deficits,” Infants and Young
iours in adults with autistic disorder, PDD-NOS, and intellectual dis-
Children, vol. 21, no. 2, pp. 107-119, 2008.
ability,” J. Intellectual and Developmental Disability, vol. 33, no. 4,
[23] C. Lafont, “Deliberation, participation, and democratic legiti-
pp. 323-329, 2008.
macy: Should deliberative mini‐publics shape public policy?,” J.
[46] G.A. Kaminka, “Curing robot autism: A challenge,” in Proc. 2013
Political Philosophy, vol. 23, no. 1, pp. 40-63, 2015.
Int. Conf. Autonomous Agents and Multi-Agent Systems. Internation-
[24] S. Timimi and B. McCabe, “What have we learned from the sci-
al Foundation for Autonomous Agents and Multi-agent Systems, May 2013,
ence of autism?,” in Re-Thinking Autism: Diagnosis, Identity and
pp. 801-804.
Equality, K. Runswick-Cole, R. Mallett, and S. Timimi, Eds. London
[47] J.J. Diehl, L.M. Schmitt, M. Villano, and C.R. Crowell, “The clini-
and Philadelphia, Jessica Kingsley, 2016.
cal use of robots for individuals with autism spectrum disorders: A
[25] S. Timimi S and B. McCabe, “Autism screening and diagnostic
critical review,” Res. Autism Spectrum Disorders, vol. 6, no. 1, pp.
tools,” in Re-Thinking Autism: Diagnosis, Identity and Equality,
249-262, 2012.
K. Runswick-Cole, R. Mallett, and S. Timimi, Eds. London and Phila-
delphia, Jessica Kingsley, 2016.

MARCH 2018 ∕ IEEE TECHNOLOGY AND SOCIETY MAGAZINE 39


Automating Sciences: Philosophical and Social Dimensions

Ross D. King, Vlad Schuler Costa, Chris Mellingwood, and Larisa N. Soldatova

Digital Object Identifier 10.1109/MTS.2018.2795097
Date of publication: 2 March 2018

FIGURE 1. The Robot Scientist "Eve".

Clark Glymour argued in 2004 that "despite a lack of public fanfare, there is mounting evidence that we are in the midst of … a revolution — premised on the automation of scientific discovery" [1]. This paper highlights some of the philosophical and sociological dimensions that have been found empirically in work conducted with robot scientists — that is, with autonomous robotic systems for scientific discovery. Robot scientists do not supply definite answers to the discussed questions, but rather provide "proofs of concept" for various ideas. For example, it is not that robot scientists solve the realist/antirealist philosophical debate, but that when working with robot scientists one has to make a philosophical choice — in this case, to assume a realist view of science. There are still few systems for autonomous
scientific discovery in existence, and it is too early to generalize and propose new theories. However, being "in the midst of … a revolution," it is important for the research community to re-examine views pertinent to scientific discovery. This paper highlights how experience with robot scientists could inform discussions in other disciplines, from philosophy of science to computer creativity research.

Scientific Discovery and Robot Scientists
The branch of Artificial Intelligence (AI) devoted to developing algorithms for acquiring scientific knowledge is known as "scientific discovery." The pioneering work in scientific discovery was the development of learning algorithms for analysis of mass-spectrometric data [2]. In the subsequent 50 years, much has been achieved, and there are now convincing examples in which computer programs have made explicit contributions to scientific knowledge (e.g., [3]–[6]). However, the general impact of such programs on science has been limited. This is slowly changing as the expansion of automation in science is making it increasingly possible to couple scientific discovery software to laboratory instrumentation [6]–[9].
Science is an excellent testbed for the development of AI discovery systems:
■ Scientific problems are abstract, but also involve real-world knowledge.
■ Scientific problems are restricted in scope — no need to know about "Cabbages and Kings" — and are also extensible.
■ Science assumes that Nature is not trying to deceive us, so there is no need to consider malicious agents.
■ Scientific knowledge is a public good when it is openly available.
■ Science is a worthy object of our study.
A robot scientist is an example of such an AI discovery system. The robot scientist is a physically implemented laboratory automation system that exploits AI techniques to execute cycles of scientific experimentation [8], [9], [11]. A robot scientist automatically originates hypotheses to explain observations, devises experiments to test these hypotheses, physically runs the experiments by using laboratory robotics, interprets the results, and then repeats the cycle. The advent of robot scientists is of significant philosophical and social interest. They are also of practical interest, as they have the potential to increase the productivity of science: they can work more cheaply, faster, more accurately, and longer than humans, and can also be more easily multiplied.
The robot scientist "Adam" was the first machine to autonomously discover scientific knowledge, i.e., to autonomously both form and experimentally confirm novel hypotheses [8]. Adam worked in the domain of yeast functional genomics, and autonomously both generated functional genomics hypotheses about the yeast S. cerevisiae and experimentally tested these hypotheses by using laboratory automation. Adam's conclusions have been manually confirmed using gold-standard experiments.
The robot scientist "Eve" (Figure 1) was designed to make drug discovery more economical, specifically for neglected tropical diseases [9]. Eve integrates and automates library screening, hit confirmation, and lead generation through cycles of quantitative structure-activity relationship learning and testing. Using econometric modeling, Eve was shown to economically outperform standard drug screening. Eve has repositioned several drugs against specific targets in parasites that cause tropical diseases. One validated discovery is that the anti-cancer compound TNP-470 is a potent inhibitor of dihydrofolate reductase from the malaria-causing parasite P. vivax.

The Metaphysics of Robot Scientists
A major motivation for the automation of science is philosophical: if a mechanism can be built that is judged to have discovered some novel scientific knowledge, then this will shed light on the nature of science. To quote Richard Feynman: "What I cannot create, I do not understand" (on his blackboard at the time of his death). The advantage of this approach to the philosophy of science, compared to traditional ones, is that it is constructive and objective. In building robot scientists one is confronted with the need to make concrete engineering decisions that relate to a number of important problems in the philosophy of science: the relation between abstract and physical objects, the nature of truth, the relation between observed and theoretical entities, the origin of hypotheses, the problem of induction, etc. This approach to science is analogous to the AI approach to understanding the human mind through the creation of artifacts that can be empirically shown to have some of the attributes of human minds [12].
We argue that the software/hardware isomorphism is the key to bridging the physical/abstract dichotomy in the metaphysics of science (Figure 2). The key to the power of a computer is that computers implement abstract programs in physical devices. This is the insight that distinguished Turing from the other great logicians of his time. Although the idea that a physical object can be isomorphic with an abstract system is at least as old as the abacus, a Universal Turing Machine is a uniquely powerful physical/abstract device.

FIGURE 2. The Robot Scientist Universe (a fragment). [Ontology diagram: abstract entities (Model of Yeast Metabolism, Nutrient, Gene, Model of Robot Architecture, Incubator, Plate, Computer Software) denote the corresponding physical entities (Yeast Strain, Biochemical Entity, Gene, Robotic Equipment, Incubator, Plate, Computer Hardware), linked by is-a, part-of, and manipulates relations.]

In a scientific investigation, to relate corresponding abstract and physical entities requires the concept of "truth." Within philosophy there are a number of competing theories of truth, including correspondence, pragmatism, verification, and coherence. These theories are
associated with different ontologies: correspondence theories with realism, and pragmatism, verification, and coherence with idealism, anti-realism, or relativism [14].
A robot scientist's physical effectors (laboratory robots) can test the truth or falsehood of an abstract scientific proposition by specific physical experiments: an Abstract entity of type Proposition is assigned a truth value by a Physical entity that participates in a specified Process. This is achieved through the designed isomorphism between an abstract Denotation rule and a physical Denotation process (Figure 2). This operational approach to truth does not discriminate between correspondence, pragmatism, verification, or coherence theories of truth. For a human scientist these different approaches may possibly inspire different ways of doing science, but given the current state of development of robot scientists it is unclear to us whether there is any operational difference between these approaches.
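As a concrete illustration of this operational approach, the following minimal Python sketch is our own rendering of the idea, not code from any actual robot scientist: the class names, the stubbed assay, and the threshold rule are all hypothetical. The design point is that an abstract Proposition carries no truth value of its own; only a physical process, via a denotation rule, can assign one.

```python
# Illustrative sketch only (hypothetical names, not Adam/Eve code):
# an abstract Proposition is assigned a truth value by a physical
# Process, mirroring the designed denotation isomorphism of Figure 2.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Proposition:
    """Abstract entity, e.g., 'strain X grows on medium M'."""
    text: str
    truth: Optional[bool] = None  # unknown until physically tested

@dataclass
class DenotationProcess:
    """Physical entity: a laboratory assay plus a denotation rule
    mapping its reading onto a truth value."""
    run_assay: Callable[[], float]  # e.g., an optical-density reading
    threshold: float                # denotation rule: reading -> truth

    def assign_truth(self, p: Proposition) -> Proposition:
        # Only the physical process may set the abstract truth value.
        p.truth = self.run_assay() >= self.threshold
        return p

# A stubbed assay standing in for the robot's instrumentation.
growth_assay = DenotationProcess(run_assay=lambda: 0.42, threshold=0.1)
claim = Proposition("Yeast strain X grows on galactose medium")
print(growth_assay.assign_truth(claim).truth)  # True
```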
One of the most debated questions in the philosophy of science is that between realism and anti-realism. Realism is "the viewpoint that accords to the objects of knowledge an existence that is independent of whether anyone is perceiving or thinking about them" [14]. The alternative position regards the existence of the real world as a metaphysical question that cannot be answered, and regards scientific theories as instruments of prediction [15], [16]. As physical devices, robot scientists necessarily adopt a realist position as defined above. However, their approach to determining the truth of propositions is also consistent with that of anti-realism. Therefore, with robot scientists there would seem to be no difference in regarding scientific theories as descriptions of reality or as tools for prediction. This approach is related to quietism [17].
The realism/anti-realism debate is closely connected to another area of interest in the philosophy of science that is important in the design of robot scientists: the relationship between observed and theoretical entities. This subject has long been a matter of debate in the philosophy of science, with some philosophers claiming that the distinction is not real and/or important [16]. We argue that the distinction clarifies the robot scientist's reasoning, and that which entities count as observed and which as theoretical is relative to the defined instrumentation.
A common view in the philosophy of science is that hypothesis formation necessarily requires human creativity [10]. This view has long been challenged by AI [e.g., 19]. Most work within scientific discovery has
focused on automated hypothesis formation [e.g., 3]. Within the philosophy of science, hypothesis formation has been closely associated with induction. This focus may be due to much philosophy of science being physics-focused [16]. In modern biology most hypothesis formation is abductive [11]. What are hypothesized are factual relationships between entities, e.g., that the gene named YBR060c encodes the enzyme with the function chorismate mutase, that the gene named YPR060c encodes a protein with a four-helix bundle topology, etc. Such relationships are factual rather than general laws.
Robot scientists follow a hypothetico-deductive methodology. Hypotheses are formed using either abduction or induction. The experimental consequences of these hypotheses are then deductively inferred, and physical experiments are conducted to observe what causally happens in the real world. Adam used abduction to form hypotheses. A set of models is generated, each with different abduced propositions. With the model (T), these propositions (H) enable the deduction of whether growth is predicted (O) for a particular experiment, i.e., T ∧ H ⊨ O. These deductions are monitored by a meta-logical program that determines the truth or falsehood of the abstract theoretical growth proposition in the various models [19]. This is then integrated with physical effectors to physically execute an experiment and thereby determine whether growth actually occurs or not, which can be mapped to the robot scientist's abstract model of reality.
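The toy Python sketch below is our rough illustration of this cycle, not Adam's actual meta-logical program: candidate abduced propositions are combined with the background model, growth predictions are deduced (the entailment test here is a deliberately crude stub), and hypotheses whose predictions disagree with the physical experiment are discarded.

```python
# A toy rendering of the abductive cycle sketched above; the names and
# the 'entailment' stub are ours, standing in for Adam's meta-logical
# program and laboratory robotics.

def entails(theory: frozenset, hypothesis: frozenset, medium: frozenset) -> bool:
    """Deduction step, T ∧ H ⊨ O: growth is predicted when the medium
    supplies every compound the combined theory says is required."""
    return (theory | hypothesis) <= medium

def physical_experiment(medium: frozenset) -> bool:
    """Stand-in for the laboratory robotics: here, the ground truth is
    that growth needs the base medium plus nutrient_A."""
    return frozenset({"base", "nutrient_A"}) <= medium

theory = frozenset({"base"})
candidates = [frozenset({"nutrient_A"}), frozenset({"nutrient_B"})]
medium = frozenset({"base", "nutrient_A"})

# Keep only hypotheses whose deduced prediction matches the observation.
surviving = [h for h in candidates
             if entails(theory, h, medium) == physical_experiment(medium)]
print(surviving)  # [frozenset({'nutrient_A'})]: the nutrient_B model is refuted
```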
Eve uses induction to form hypotheses. To select compounds to test its hypotheses, Eve uses active learning [9], [20]. The active learning task is comparable to that in many other areas of science and engineering: identify or design artifacts that have optimal performance. However, it has an extra ingredient reminiscent of reinforcement learning: balancing the exploration of compound space with the exploitation of regions of highly active compounds.
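One simple way to picture this balance is an epsilon-greedy selection rule, sketched below. This is our illustrative stand-in only; Eve's actual compound selection uses the quantitative structure-activity relationship models and active learning strategies described in [9] and [20].

```python
# Epsilon-greedy sketch of the exploration/exploitation trade-off
# (toy scoring of ours; not Eve's actual learner).
import random

def select_compound(candidates, predicted_activity, epsilon=0.2):
    """Usually exploit the compound predicted most active, but
    occasionally explore an untested region of compound space."""
    if random.random() < epsilon:
        return random.choice(candidates)                         # explore
    return max(candidates, key=lambda c: predicted_activity[c])  # exploit

candidates = ["cpd_1", "cpd_2", "cpd_3"]          # hypothetical library
predicted_activity = {"cpd_1": 0.9, "cpd_2": 0.4, "cpd_3": 0.1}
print(select_compound(candidates, predicted_activity))
```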
Robot and Human Scientists

Is Science Solely a Human Activity?
It is easy to find evidence that science should be viewed as solely a human endeavor [e.g., 21]. Within the philosophy of science there are many advocates of a humanistic understanding of science. However, developments in AI question the centrality of human creativity in the creation of scientific knowledge. Although most philosophers of science seem not to have engaged with the possibility of automating science, the views of certain philosophers would appear to imply that science, as a set of practices, would be very difficult to automate. Among these are probably the two best-known post-war philosophers of science: Karl Popper and Thomas Kuhn (of course, there are many more authors who have relevant writings on the topic, e.g., [22]).
We are unaware if Popper ever directly tackled the question of whether a machine could be engineered to do science. However, it would seem reasonable to infer from his other clearly expressed views — that hypothesis formation requires human creativity, and that induction is a myth — that he would have denied the possibility of mechanizing scientific discovery [10].
In his postscript to The Structure of Scientific Revolutions (1970), Thomas Kuhn responds to critics who viewed his work as relativistic [23]. Kuhn uses his account of scientific theories to argue that science is a special case. This is because viewing proponents of competing scientific theories as simply akin to members of different language-communities does not account for scientists as fundamentally puzzle-solvers [23, p. 205]. Kuhn was an advocate of scientific progress and believed that theories and paradigms each build upon that which has come before. Similarly, Kuhn was open to the use of computer programs in scientific knowledge making, and used this fact to display his willingness to accept the importance of rule-following in science [23, p. 191]. However, Kuhn's larger point is that it is not quite right to say that scientists do not follow any fixed rules. He argued that scientists follow rules based on previous exemplars from their field (see the "A Sociological Perspective" section below). Rules cannot be abstracted from exemplars and take their place [23, p. 192], and it is prior experience and training that shapes how scientists judge when rules are being followed or not [23, p. 198]. Applied to robot scientists, the question remains open as to how prior experience and training can become embedded in AI programs.
If one accepts that robot scientists can automate many of the steps in the generation of scientific knowledge, then there would appear to be two main "get-outs" that would still enable philosophers to maintain that science cannot be automated. One get-out is that a current robot scientist is not aware that it is doing science, and therefore the robots are not really doing science [24]. This type of argument would also apply to chess computers [25] — yet we are unaware of any philosopher who has argued that chess computers do not really play chess.
The other get-out is to deny that what was "discovered" was really novel science. For example, it could be argued that the new scientific knowledge was implicit in the formulation of the problem, and is therefore not novel. The argument that computers cannot originate anything and can only do what they are programmed to do is known as "Lady Lovelace's objection" [26]: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to
perform” (Lady Lovelace’s italics). This argument has strengths of human and robot scientists, and to better
real teeth as the robot scientists Adam and Eve are very understand future working relationships between
far from being autonomous agents seeking out their own humans and automation. These relationships occur at
scientific problems. One counter-argument is that robot many levels: from the most profound (deciding on what
scientists are programmed to learn novel scientific to investigate, structuring a problem for computational
knowledge, and as they learn from observations of the analysis, interpreting unusual experimental results,
physical world the conclusions are not purely deductive. etc.), to the most mundane (cleaning, replacing con-
A variant of the lack of novelty argument is the argu- sumables, etc.).
ment that computers will only ever be able to do “nor- One particularly interesting relationship between
mal science,” i.e., within a paradigm, and will never be human and robot scientists relates to the replication of
able to do “revolutionary science” [23], [27]. Certainly experiments. It has been proposed by Latour [31] that
existing robot scientists are not capable of doing revo- there is a necessary trade-off between the communica-
lutionary science. However, very few human scientists tion of conceptual information and contextual detail:
are either. scientists need to report their findings in the most
There is also an argument that such systems as objective and abstract forms possible, so as to create
robot scientists are not truly autonomous, rather that generalizable statements. However, the decision of
the systems merely are tools of scientists, and that there which detail is conceptual and which is contextual is left
is always a human-in-the-loop [28]. The existing robot to the individual scientist. Anthropological and ethno-
scientists are compliant with the definition of autono- methodological research indicates that this is a trait of
mous robots by IEEE P1872TM/D3 standard “a robot the human mind: scientists first decide on the experi-
performing a given task in which the robot solves the mental design in very broad terms, then later infer how
task without human intervention, while adapting to to empirically conduct the experiment, and lastly report
operational and environmental conditions.” While one their results in mostly the same terms as the experi-
may argue that there are certain shortcomings in Eve’s ment was designed [33]. Robot scientists, on the other
autonomy, the concept of a robot scientist, as originally hand, require a set of definite contextual elements to
introduced [11], implies a complete autonomy in scien- work properly, and they always log these conditions of
tific discovery. the experiments they conduct.

An Anthropological Perspective A Sociological Perspective


Compared to human scientists, robot scientists have an We argued above that there are convincing examples
fascinating mixture of super- and sub-human abilities. now of scientists using computer programs to contrib-
Laboratory robots have traditionally been used to auto- ute to scientific knowledge. We are particularly interested
mate low-level repetitive tasks. Robot scientists inherit in the nature of that contribution and how AI-informed
this ability and have the super-human capacity to work scientific knowledge may differ from knowledge gath-
flawlessly on extremely repetitive tasks for days at a ered without the assistance of computer programs. The
time. In comparison humans perform badly at repetitive earlier description of what a robot scientist does con-
tasks, especially those carried out during extended tains two key processes that are critical to understand-
periods [29]. We have confirmed this during our obser- ing scientific knowledge from a sociological perspective:
vational studies of human scientists, who routinely observation and interpretation. We will take each of
make mistakes, particularly when subject to hindranc- these concepts in turn. The aim is to offer a sociological
es like stress, time pressure, or distractions. Robot sci- perspective on laboratory automation that is informed
entists inherit from AI abilities that have traditionally by some key ideas introduced from sociology of scientif-
been regarded as high-level for humans, such as a ic knowledge literature.
super-human ability to do logical and probabilistic When scientists use the term observation they are
reasoning. However, robot scientists are sub-human in clearly making a connection between a sensory input
their adaptability and understanding, and human scien- (e.g., “seeing”) and the material world around them.
tists are still unequalled in conditions that require flexi- However, a great deal of phenomena in the sciences
bility and dealing with unexpected situations, especially cannot be seen in any literal sense. For example, when
those intuitive functions that might have otherwise been a physicist says there are many ways to “observe” the
considered low level [30]. recoil of an atom, they are not referring to a process
Given the mixture of super- and sub-human abilities that can be inspected in the same way that the color red
of robot scientists, it is informative to investigate how can be seen when a chemistry teacher asks students to
human scientists cooperate with their robot counter- observe red fumes in a gas jar [34, pp. 1–2]. It is impor-
parts, both to improve the technology by playing to the tant therefore to understand exactly what is happening

when a robot scientist is said to observe some aspect of the material world.
A system designed to record data must be programmed within a set of parameters, and these parameters need to be established and agreed upon by a group to be validated as correct indicators of the phenomena of interest. For example, a case-based learning algorithm designed to perform class discrimination is normally first programmed by a group (of humans) who have decided which objects should be placed into each classification, based on examples taken from previous experience of those objects. The objects that generate the most agreement as to the correct class in which they belong can be labelled "exemplars" of that classification. Thomas Kuhn's The Structure of Scientific Revolutions applied the theory of exemplars to scientific knowledge claims, arguing that major advances in scientific fields often amount to a recognition that entities previously thought to be the same actually contain important differences [34].
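The point can be made concrete with a deliberately toy nearest-exemplar classifier, sketched below. Note that everything the program "knows" (the features, the class labels, and the exemplars themselves) is fixed in advance by human agreement; the names and numbers here are hypothetical.

```python
# Toy illustration of exemplar-based class discrimination as described
# above: the labels come from human-agreed exemplars, not the machine.

def classify(sample, exemplars):
    """Nearest-exemplar rule over simple numeric features."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    label, _ = min(((lbl, distance(sample, ex)) for lbl, ex in exemplars),
                   key=lambda t: t[1])
    return label

# Exemplars a human group agreed on beforehand (hypothetical features).
exemplars = [("growth", (0.9, 0.8)), ("no_growth", (0.1, 0.2))]
print(classify((0.7, 0.9), exemplars))  # 'growth'
```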
The language of observation, therefore, is a means by which scientists can make distinctions between what is known and what is not known. There are two important implications of this argument for the use of robot scientists. First, the strength of any knowledge claim is linked to how far observations are agreed upon and shared among authoritative members of a knowledge community. The observations made by the robot scientist were embedded within a well-established and respected research program that employed skilled and knowledgeable scientists. It was those (human) scientists who were recognizably authoritative and credible in their academic discipline. It was mainly because of that credibility that the research team was able to publish the AI-derived data as a contribution to knowledge in their field.
Arguably then, even if robot scientists can be judged to observe phenomena, the validity of the observations takes shape when those observations are discussed and defined through interaction within a social group. Indeed, this process is the very definition of the second concept of interest in this section: "the process… in which individual responses are taken up into patterns of social interaction … interpretation" [34, p. 17]. Can a robot scientist be judged to interpret results according to this definition? Here the sociological perspective becomes useful.
Clearly, robot scientists are becoming acceptable as tools for contributing to scientific knowledge among elements of the academic community. The observations made by robot scientists are being taken up into patterns of social interaction via the human designers and operators of those systems, and by the acceptance of AI-derived methods by the wider academic community. Therefore the sociological investigations of robot scientists and their human designers offer a rich seam of possible empirical research: questions that speak to fundamental issues about the role of computer programs for scientific knowledge making in contemporary social life.

Formalization and Reproducibility
One important social and technological aspect of scientific discovery where a robot scientist can offer benefit is reporting and documenting experimental results. This is of particular importance in the context of "the reproducibility crisis": "The ability to reproduce experiments is at the heart of science, yet failure to do so is a routine part of research" [35]; "More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments" [36]. There are many reasons for the non-reproducibility of experimental results: the complexity of experimental and statistical methods, "poor experimental design," the non-availability of raw data, methods, and code, etc. [35], [36]. The use of natural languages to document experimental procedures makes this worse. For example, consider this routine natural-language instruction: "inoculate 4 mL of liquid YPAD and incubate with shaking overnight at 30 °C." This is ambiguous. It does not specify the speed and mode of the shaking — is it 200 rpm or 600 rpm, and orbital or reciprocal? What is the duration of the incubation — is it 8 h, 12 h, or does it not matter? Such information is vital for reproducibility. The reproducibility of experimental results can be improved through the recording and execution of experiments using robot scientists [37].
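A sketch of how a machine-readable record can remove exactly the ambiguities just listed is shown below. The field names are our own illustrations, not the actual schema of Adam or Eve; formal experiment ontologies such as the one described in [37] are built for this purpose.

```python
# Hypothetical structured log entry for the incubation step quoted
# above; field names are illustrative, not an actual robot-scientist
# schema. Every ambiguous detail is pinned to an explicit value.
incubation_step = {
    "action": "incubate",
    "culture": {"medium": "liquid YPAD", "volume_mL": 4.0},
    "temperature_C": 30.0,
    "duration_h": 12.0,                          # "overnight" made explicit
    "shaking": {"mode": "orbital", "speed_rpm": 200},
    "instrument_id": "shaker-07",                # which device ran the step
}
print(incubation_step["shaking"]["speed_rpm"])   # 200
```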
Humans are reluctant to record every detail of their cycles of hypotheses, experimentation, and interpretation, because doing so is time-consuming and also requires knowledge of reporting standards. Humans are prone to make errors in recording information, and they are biased. Robot scientists are free from such limitations, and can record all the information required for reproducibility at almost no additional cost, and in accordance with best practices following the recommended standards.

Conclusions and Discussions
Going back to the Glymour paper that opens this article: "Kuhn said that scientific revolutions generally meet fierce resistance — and the automation of discovery in science is no exception" [1]. However, it is to be hoped that the collaboration between human and robot scientists will produce better science than either can alone — human/computer teams still play better chess than either alone. To understand how best to synergize the strengths and weaknesses of human and robot scientists, we need to better understand the
anthropological and sociological issues involved in human/machine collaboration. It is also reasonable to hope that developments in robot scientists will contribute to the philosophy of science: compared to traditional approaches, the development of robot scientists is constructive and objective.
In chess there is a continuum of ability from novices up to Grandmasters. We argue that this is also true in science, from the simple research of Adam/Eve, through what most human scientists can achieve, up to the ability of a Newton or Einstein. If one accepts this, then just as in chess, it is likely that advances in technology and in our understanding of science will drive the development of ever-smarter robot scientists. To encourage research in this area, Hiroaki Kitano has called for a new grand challenge for AI: to develop an AI system that can make major scientific discoveries in the biomedical sciences worthy of a Nobel Prize [6]. This may sound fantastical, yet the physics Nobel laureate Frank Wilczek is on record as saying that in 100 years' time the best physicist will be a machine [38]. If this comes to pass, it will not only transform technology, but also our understanding of science and the Universe.

Acknowledgment
The authors of this manuscript have been partially supported by the AdaLab project funded by EPSRC UK (EP/M015661/1).

Author Information
Ross D. King is with the Manchester Institute of Biotechnology, University of Manchester, Manchester, U.K.
Vlad Schuler Costa is with the School of Social Sciences, University of Manchester, Manchester, U.K.
Chris Mellingwood is with the School of Social and Political Sciences, University of Edinburgh, Edinburgh, Scotland.
Larisa N. Soldatova is with the Computing Department, Goldsmiths, University of London, London, U.K.

References
[1] C. Glymour, "The automation of discovery," Daedalus, pp. 69-77, Wint. 2004.
[2] B.G. Buchanan, G.L. Sutherland, and E.A. Feigenbaum, "Heuristic DENDRAL: A program for generating explanatory hypotheses in organic chemistry," in Machine Intelligence 4, B. Meltzer and D. Michie, Eds. Edinburgh, U.K.: Edinburgh Univ. Press, 1969.
[3] P. Langley et al., Scientific Discovery: Computational Explorations of the Creative Process. Cambridge, MA: M.I.T. Press, 1987.
[4] J.M. Zytkow, J. Zhu, and A. Hussam, "Automated discovery in a chemistry laboratory," in Proc. 8th National Conf. Artificial Intelligence (AAAI-1990), 1990, pp. 889-894.
[5] R.E. Valdes-Perez, "Principles of human-computer collaboration for knowledge discovery in science," Artificial Intelligence, vol. 107, pp. 335-346, Feb. 1999.
[6] H. Kitano, "Artificial intelligence to win the Nobel Prize and beyond: Creating the engine for scientific discovery," AI Magazine, vol. 16, pp. 39-49, Spr. 2016.
[7] P. Langley, "The computational support of scientific discovery," Int. J. Human-Comp. Stud., vol. 53, pp. 393-410, Sep. 2000.
[8] R.D. King et al., "The automation of science," Science, vol. 324, pp. 85-89, Apr. 2009.
[9] K. Williams et al., "Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases," J. R. Soc. Interface, vol. 12, pp. 1-9, Feb. 2015.
[10] K. Popper, The Logic of Scientific Discovery. London, U.K.: Hutchinson, 1972.
[11] R.D. King et al., "Functional genomic hypothesis generation and experimentation by a robot scientist," Nature, vol. 427, pp. 247-252, Jan. 2004.
[12] A. Sloman, The Computer Revolution in Philosophy. Sussex, U.K.: Harvester, 1978.
[13] M. David, "The correspondence theory of truth," in Stanford Encyclopaedia of Philosophy, E.N. Zalta, Ed., Fall 2016.
[14] J. Owens, "Realism," in Encyclopaedia Britannica Deluxe Edition 2004 CD. London, U.K.: Encyclopaedia Britannica UK, 2004.
[15] R. Carnap, "Intellectual autobiography," in Philosophy of Rudolph Carnap, P. Schilpp, Ed. La Salle, IL: Open Court, 1963.
[16] M. Curd and J.A. Cover, Philosophy of Science: The Central Issues. New York, NY: Norton, 1998.
[17] A. Miller, "Realism," in Stanford Encyclopaedia of Philosophy, E.N. Zalta, Ed., Wint. 2016.
[18] D. Gillies, Artificial Intelligence and Scientific Method. Oxford, U.K.: Oxford Univ. Press, 1996.
[19] R. Carnap, Introduction to Symbolic Logic and its Applications. Mineola, NY: Dover, 1958.
[20] D.A. Cohn, Z. Ghahramani, and M.I. Jordan, "Active learning with statistical models," J. Artif. Intell. Res., vol. 4, pp. 129-145, Mar. 1996.
[21] I. Kasavin, T. Rockmore, and E. Blinov, "Social epistemology, interdisciplinarity and context," Epistemology and Philosophy of Science, vol. 37, pp. 57-75, Jan. 2013.
[22] N. Cartwright, "Relativism in the philosophy of science," in Relativism: A Contemporary Anthology, M. Krausz, Ed. New York, NY: Columbia Univ. Press, 2010, pp. 86-99.
[23] T.S. Kuhn, The Structure of Scientific Revolutions. Chicago, IL: Univ. of Chicago Press, 1962.
[24] J. Searle, Minds, Brains and Science. Cambridge, MA: Harvard Univ. Press, 1986.
[25] M. Davies, Engines of Logic: Mathematicians and the Origin of the Computer. New York, NY: Norton, 2000.
[26] A. Turing, "Computing machinery and intelligence," Mind, vol. 59, pp. 433-460, Oct. 1950.
[27] J. Preston, "Review: 'Artificial intelligence and scientific method' and 'AI: Essays at the interface'," British J. Philosophy Science, vol. 48, pp. 610-612, Dec. 1997.
[28] D.A. Mindell, Our Robots, Ourselves: Robotics and the Myths of Autonomy. Viking, 2015.
[29] J.F. O'Hanlon, "Boredom: Practical consequences and a theory," Acta Psychologica, vol. 49, pp. 53-82, Oct. 1981.
[30] D.B. Kronenfeld et al., Eds., A Companion to Cognitive Anthropology. London, U.K.: Wiley-Blackwell, 2011.
[31] B. Latour, "Circulating reference," in Pandora's Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard Univ. Press, 1999.
[32] M. Lynch, Art and Artifact in Laboratory Science: A Study of Shop Work and Shop Talk in a Research Laboratory. London, U.K.: Routledge, 1985.
[33] S. Schaffer and S. Shapin, Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton, NJ: Princeton Univ. Press, 1985.
[34] B. Barnes, D. Bloor, and J. Henry, Scientific Knowledge: A Sociological Analysis. London, U.K.: Athlone, 1996.
[35] Nature Editorial, "Reality check on reproducibility," Nature, vol. 533, pp. 437-438, 2016.
[36] M. Baker and D. Penny, "Is there a reproducibility crisis?," Nature, vol. 533, pp. 452-454, 2016.
[37] L.N. Soldatova et al., "An ontology for a robot scientist," Bioinformatics, vol. 22, pp. e464-e471, 2006.
[38] F. Wilczek, Fantastic Realities: 49 Mind Journeys and a Trip to Stockholm. Singapore: World Scientific, 2006.

The Technological Fix as Social Cure-All: Origins and Implications

Sean F. Johnston

Digital Object Identifier 10.1109/MTS.2018.2795118
Date of publication: 2 March 2018

FIGURE 1. Engineers and scientists as social problem-solvers [source: New York Herald Tribune, 7 Aug. 1945 (the day after Hiroshima), p. 22].

In 1966, a well-connected engineer posed a provocative question: will technology solve all our social problems? He seemed to imply that it would, and soon. Even more contentiously, he hinted that engineers could eventually supplant social scientists — and perhaps even policy-makers, lawmakers, and religious leaders — as the best trouble-shooters and problem-solvers for society [1].1

1 Weinberg's second speech on the topic was more cautiously titled, and was reprinted in numerous journals and magazines and widely anthologized in university texts [2].

The engineer was the Director of Tennessee's Oak Ridge National Laboratory, Dr. Alvin Weinberg. As an active networker, essayist, and contributor to government committees on science
and technology, he reached wide audiences over the following four decades.
Weinberg did not invent the idea of technology as a cure-all, but he gave it a memorable name: the "technological fix." This article unwraps his package, identifies the origins of its claims and assumptions, and explores the implications for present-day technologists and society. I will argue that, despite its radical tone, Weinberg's message echoed and clarified the views of predecessors and contemporaries, and the expectations of growing audiences. His proselytizing embedded the idea in modern culture as an enduring and seldom-questioned article of faith: technological innovation could confidently resolve any social issue.
Weinberg's rhetorical question was a call-to-arms for engineers, technologists, and designers, particularly those who saw themselves as having a responsibility to improve society and human welfare. It was also aimed at institutions, offering goals and methods for government think-tanks and motivating corporate mission-statements (e.g., [3]).
The notion of the technological fix also proved to be a good fit to consumer culture. Our attraction to technological solutions to improve daily life is a key feature of contemporary lifestyles. This allure carries with it a constellation of other beliefs and values, such as confidence in reliable innovation and progress, trust in the impact and effectiveness of new technologies, and reliance on technical experts as general problem-solvers. This faith can nevertheless be myopic. It may, for example, discourage adequate assessment of side-effects — both technical and social — and close examination of political and ethical implications of engineering solutions. Societal confidence in technological problem-solving consequently deserves critical and balanced attention.

Faith in Fixes
Adoption of technological approaches to solve social, political, and cultural problems has been a longstanding human strategy, but is a particular feature of modern culture. The context of rapid innovation has generated widespread appreciation of the potential of technologies to improve modern life and society. The resonances in modern culture can be discerned in the ways that popular media depicted the future, and in how contemporary problems have increasingly been framed and addressed in narrow technological terms.
While the notion of the technological fix is straightforward to explain, tracing its circulation in culture is more difficult. One way to track the currency of a concept is via phrase-usage statistics. The invention and popularity of new terms can reveal new topics and discourse. The Google N-Gram Viewer is a useful tool that analyzes a large range of published texts to determine frequency of usage over time for several languages and dialects [4], [5].
In American English, the phrase technological fix emerges during the 1960s and proves more enduring and popular than the less precise term technical fix (Figure 2).

FIGURE 2. Modern problem-solving rhetoric: usage of the terms A — "technological solution," B — "technological fix," and C — "technical fix," according to Google n-gram analysis. [Line graph; x-axis spans 1920–2000.]
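Readers who want to reproduce a comparison like Figure 2 can query the Ngram Viewer programmatically. The Python sketch below uses the viewer's unofficial JSON endpoint; the URL, the corpus tag, and the response format are undocumented assumptions that may change, so treat this as illustrative only.

```python
# Illustrative only: the books.google.com/ngrams/json endpoint is
# unofficial and undocumented, so the URL, the corpus tag, and the
# response shape are assumptions that may break without notice.
import requests

params = {
    "content": "technological fix,technical fix,technological solution",
    "year_start": 1920,
    "year_end": 2008,
    "corpus": "en-US-2019",  # assumed tag for the American English corpus
    "smoothing": 3,
}
resp = requests.get("https://books.google.com/ngrams/json", params=params)
for series in resp.json():  # one dict per phrase, with a "timeseries" list
    print(series["ngram"], "peak relative frequency:",
          f'{max(series["timeseries"]):.2e}')
```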
We can track this across languages. In German, the term technological fix has had limited usage as an untranslated English import, and is much less common than the generic phrase technische Lösung ("technical solution"), which gained ground from the 1840s. In French, too, there is no direct equivalent, but the phrase solution technique broadly parallels German and English usage over a similar time period. And in British English, the terms technological fix and technical fix appear at about the same time as in American usage, but grow more slowly in popularity. Usage thus hints that there are distinct cultural contexts and meanings for these seemingly similar terms. Its varying currency suggests that the term technological fix became a cultural export popularized by Alvin Weinberg's writings on the topic, but related to earlier discourse about technology-inspired solutions to human problems.
Such data suggest rising precision in writing about technology as a generic solution-provider, particularly after the Second World War. But while the modern popularization and consolidation of the more specific notion of the "technological fix" can be traced substantially to the writings of Alvin Weinberg, the idea was promoted earlier in more radical form.

The Voices of Technocracy
Journalists after the First World War christened modern culture "the Machine Age," a period that vaunted the mechanization of cities and agriculture, industrial efficiency, "scientific management," and most of all, engineering solutions to modern problems [6], [7]. Social progress became associated with applied
said, “The engineers solved it easily. They built cars
that didn’t have platforms” [10].
The tale communicated Scott’s common-sense con-
viction that social measures could be rendered unneces-
sary by wise engineering. Streetcars with retracting steps
and closing doors ensured that passengers could not
harm themselves. The anecdote was so effective in
describing the essence of technological fixes that it
became a feature of Scott’s speeches for the successor
organization, Technocracy Inc. and was reproduced as a
graphic (Figure 3) on postcards and placards over the fol-
lowing eight decades [11]. His second-in-command, oil
geologist Marion King Hubbert, featured similar exam-
ples in their Technocracy Study Course, which the orga-
FIGURE 3. Graphic displayed at Technocracy Inc meeting halls
nization updated into the twenty-first century [12].
and public exhibits from the 1930s [source: Technocracy Inc,
courtesy of George Wright].
Postwar Recovery and Optimism
Though the technocrats were most prominent during
science. Electric appliances, for example, extended the 1930s, they also found fresh audiences after the
productivity and leisure pursuits; radio entertained, Second World War. Rallies and long-distance road caval-
educated, and united the nation; motor vehicles and cades across North America carried their message
aircraft provided a new mobility for at least a privi- about the power of technologies to transform society.
leged few. Engineers and scientists comprised a significant frac-
But praise of technological change was accom- tion of their membership and audiences, including
panied by criticisms of the imperfections of modern those who had worked on the Manhattan Project during
society, often by the same analysts. The longest- the war and were now imagining applications of nuclear
lived voices were members of a group initially called energy. Their inspiration was to apply rapid innovation
the Technical Alliance, and later Technocracy Inc. to recalcitrant human problems that had outlasted the
Although having no verifiable engineering training, war (Figure 1).
Howard Scott became the Chief Engineer and persua- Among them was Richard L. Meier (1920–2007, Fig-
sive spokesperson for the Alliance, which included ure 4), a wartime research chemist who turned to inves-
General Electric engineer Charles Steinmetz, social tigating technological solutions for postwar urban
philosopher Thorstein Veblen, and economist Stuart problems. He was a technological optimist who con-
Chase. The group railed against the problems of ceived socio-technological systems to reduce inequity
waste, inefficiency, and incompetence and yield wider societal benefits.
of industrialists and government lead- At least one contemporary reviewer
ers, and called for the application of identified “naïve rationalism” and “the
“the achievements of science to soci- spirit of technocratic speculation” in
etal and industrial affairs” ([8], see Meier’s enthusiasms [13]. His work over
also [9]). They sought to collect reli- subsequent decades was, however, the
a ble fact s a nd to apply rationa l antithesis of the technocrats’ casual
eng i neering principles to modern claims as it carefully explored the poli-
problems of all kinds. tical, economic, social, and cultural
The group is noteworthy in the way dimensions of complex technological
it boiled down popular ideas circulat- systems affecting urban and regional
ing among engineers for wider pub- development (see for example, [14]–[16]).
lics. Scott first reached audiences Other contemporary scientists sup-
through a newspaper interview. He ported similar views, some of whom —
described how streetcar design had like Meier and Weinberg — joined the
been improved to safeguard passen- Federation of Atomic Scientists, a new
gers, who often suffered injuries by organization seeking to guide benefi-
falling from crowded running boards. FIGURE 4. Richard L. Meier c1965 cial applications of nuclear energy
Instead of relying on ineffective laws, [source: University of California, [17]. A sounding board for Weinberg’s
policing, and public education, Scott courtesy of Meier family]. ideas was Harvey Brooks, Dean of

Engineering and Applied Physics at Harvard. Brooks, too, had participated in nuclear reactor design and had an interest in applying scientific expertise for societal benefit [18]. In an era of growing technological confidence, these hopeful analysts and their peers offered a rational route for societal improvement.

FIGURE 5. Alvin Weinberg teaching at the Oak Ridge Institute for Nuclear Studies, 1946. Courtesy of Oak Ridge National Laboratory (ORNL).

Weinberg's Formulation: National Labs for Societal Problems
Alvin Weinberg's optimism identified rational analysis and technological innovation as the key drivers of societal progress. He argued that it was "the brilliant advances in the technology of energy, of mass production, and of automation," not social systems or ideologies, that "created the affluent society" [19].
Weinberg (1915–2006, Figures 5 and 6) focused his postwar career on the design, applications, and wider implications of nuclear reactors, becoming Director of the Oak Ridge National Laboratory (ORNL) in 1955. His high-profile position allowed Weinberg to represent not just the nascent field of nuclear engineering, but also the closer integration of technological innovation with the goals of modern American society [20]. His networking provided him with experience as a senior administrator in the new environment of publicly funded engineering in the national interest, and insights about the new scale and societal implications of "big science," a term he popularized [21].
As Weinberg later recalled,

I began to look upon nuclear energy as a symbol of a new technologically oriented civilization — the ultimate "technological fix" that would forever eliminate quarrels over scarce raw materials. I coined the phrase "technological fix" to connote technical inventions that could help resolve predominantly social problems….

So closely was he identified with the concept that Weinberg later characterized his career as that of a "technological fixer" [22]. (On the gestation of his ideas, see [23].)
Weinberg's cogent articles did not present the polemics of an interwar technocrat. He was cautious not to reveal his own political views, and avoided blaming politicians and economists for societal imperfections. Instead, Weinberg packaged the concept of the technological fix in a form that invited responses from policy-makers.
Weinberg's examples of technological fixes ranged from common-sense solutions to provocative examples that seemed to lie on an ethically slippery slope. His easy-to-accept cases included consumer campaigner Ralph Nader's contention that engineering safer cars might provide a quicker reduction of traffic deaths than trying to change driving behaviors. Similarly, he argued that cigarette filters were obviously better than legislation or health education campaigns to convince smokers to give up cigarettes. But Weinberg also offered more uncomfortable illustrations, for example the notion of providing free air conditioners to literally cool down urban tensions in American cities of the late 1960s, or the benefits of intrauterine devices (IUDs) to limit family size and economic deprivation [24].
As a member of government policy panels during the Eisenhower, Kennedy, and Johnson administrations, Weinberg gained the ears of legislators. Besides the air-conditioning of slums, he lobbied for a wall between North and South Vietnam to limit enemy incursions and thus scale down the war, although he quickly labeled it an "amateurish notion" after feedback from his peers [25], [26].2

2 As Weinberg realized, his Vietnam wall — like Hadrian's Wall across northern Britain, the Great Wall of China, the Berlin Wall, and Donald Trump's proposed Mexican wall — is a technological fix for controlling population movements.

Weinberg disclaimed other ideas — notably the general provision of soma pills to relieve unhappiness, as portrayed in Aldous Huxley's Brave New World — to suggest there were limits to how far technological fixes should go. He adapted to
his audiences, being circumspect about the feasibility of technological fixes when writing for experts in the social sciences, but optimistic when preaching to classes of engineering graduates.
For legislators and the 1968 Presidential candidates, Weinberg proposed a national strategy founded on technological fixes. He argued that the expertise in physical science and engineering marshalled at the National Labs since the war could be reoriented to solve predominantly social problems. The "neat trick," he confided to Harvey Brooks, was that "social problems could be converted into technological problems" [27], [28]. With national oversight, he suggested, technological analysis and problem-solving could trump traditional social, political, economic, educational, and moral approaches.

FIGURE 6. Alvin Weinberg in Washington, late 1960s. Courtesy of ORNL and the Howard H. Baker Jr. Center for Public Policy, University of Tennessee.

Influenced by campaigners such as Scott, Meier, and Weinberg, popular support for technological solutions was particularly strong in the decades after the war.
For Weinberg the Manhattan Project represented the paradigm technological fix, in which a powerful technology neutralized enemy aggression and bypassed diplomatic negotiation and political alliances. Similarly, he credited the H-bomb as a technological solution to the problem of war that did not require changing human nature.
For Meier and Weinberg, postwar planning had provided evidence that rationalized housing, transport, and communication networks could quickly improve the quality of life in cities under any political system. Nascent nuclear energy projects also channeled the promise of new technology to transform societies. During the Atoms for Peace initiative of the mid-1950s, for example, atomic energy was forecast as a means of irradiating food to avoid spoilage, desalinating seawater to irrigate deserts and increase food production, and supplying low-cost electrical power to boost economies [29].
Over the following decade, the successes of major technological projects provided confidence in engineering ingenuity to achieve ambitious goals. The space race addressed seemingly insoluble technical challenges and, as trumpeted by NASA, its contractors, and media sources, spun off associated technologies for consumer benefit.3 Urban planners supported regeneration projects in which reconfigured infrastructure would transform social life, such as implementing expressway networks in lock-step with urban renewal. Supporting these enthusiastic forecasts was a widespread but seldom interrogated popular faith in the link between technological and social progress, as well as an underlying belief in technological determinism and the inevitability of social adaptation to innovation.
Even more widely accepted examples of technological fixes were to be found in technologies applied to health and well-being. In a period of unprecedented access to inexpensive food, scientific nutrition was popularized via over-the-counter vitamin supplements and diet aids.4 Such fixes, argued supporters, could correct for unbalanced dietary regimes, hectic lifestyles, inexpert cooking, lack of will power, or low income.5 Perhaps the most dramatic of technological fixes for lifestyle- and diet-induced illness was the heart transplant, first trialed to public acclaim during the late 1960s, together with hopes for artificial hearts [34].6
More recently, software technologies have been embraced by consumers as even more seductive ways to supplement personal skills, improve efficiency, and empower lifestyles — a marketing philosophy dubbed "solutionism." By sidestepping traditional forms of education, self-motivation, skills development, or political action, such software solutions are technological fixes in precisely the form defined by Weinberg.7

3 For a nuanced account of the socio-political context of spaceflight, see [30].
4 On the enrichment of staple foods with vitamins, see [31]. A more recent example is "golden rice," bioengineered to produce beta-carotene as a technological fix for malnutrition from vitamin deficiency.
5 The socio-technical system of preserving, transporting, and consuming frozen foods, for example, was largely a post-Second World War development involving new technologies (notably refrigeration and microwave cooking) co-evolving with social and cultural changes (e.g., the declining proportion of primary homemakers and the rise of convenience foods) [32]. Dietary aids included a rapidly expanding variety of over-the-counter products to increase metabolism, reduce appetite or fat absorption, and exercise machines to burn calories [33].
6 Other technological fixes for health include gastric bands and liposuction.
7 E.g., "technology-enhanced learning" and "technology-mediated communication" are growing industries, and the Apple slogan "There's an App for that" offers software solutions for human needs.

Institutional Confidence in Fixes
Technological fixes also remain popular for organizations and government as solutions to novel and acute problems today. A couple of broad issues can suggest prevalent attitudes.
A first domain is the resolution of environmental problems. As environmental concerns rose in the late 1960s, with growing attention to air and river pollution, oil-tanker spills, and fears about nuclear waste,
MARCH 2018 ∕ IEEE TECHNOLOGY AND SOCIETY MAGAZINE 51


Cultural Losses of Faith in Technology
Like expressions of technological faith, critiques of tech-
Framing by elites may disempower nology have grown around particular examples. As early

communities that opt for as the 1960s, opponents of the Vietnam War cited the
impotence of high-technology military systems against
technological fixes. the guerilla methods of a resourceful enemy [40]. If high
technology can be negated by such social and political
opposition, this seemed to suggest, why should techno-
logical fixes be trusted as a panacea for social and polit-
technological quick fixes were proposed as timely and ical problems?
reassuring solutions. Current options include oil-digest- For urban audiences over the same period, nuclear
ing microbes to deal with spills and industrial waste, technologies were increasingly cited as inherently dan-
biodegradable packaging, biotechnologies for fuel pro- gerous. For growing numbers, the field represented a
duction, and schemes for addressing anthropogenic cli- failure of government-managed safety certification pro-
mate change via geo-engineering [35]–[37]. cedures and a secretive industry. Similarly the che-
A second domain of problems attracting technology- mical industry, which had once been praised for
dominated responses is terrorism. As airplane hijack- technological fixes such as DDT to kill agricultural
ings proliferated during the early 1970s, and more pests and assure high crop yields, was now criticized
varied threats were identified after 2000, technologists as the source of widespread ecological damage [41].
responded with imaginative solutions ranging from low- Such technological criticism in America was pointed to
tech lockable cockpit doors, to technologies monitoring catastrophes such as super-tanker spills9 as represen-
Internet communications, to materials-detecting and tative of decision-making that prioritized the global
body-scanning systems. In the tradition of technological petrochemical economy. And while human health
fixes, these hardware solutions are rapid responses to remained the domain of technological fixes evincing
events that have relatively complex social, political, or the most widespread optimism, some topics raised
economic roots.8 growing disquiet among consumers. Among them was
an entirely new field for technological fixes: genetic
Quandaries and Implications engineering to design foods that could be longer-last-
of Technological Fixes ing or more nutritious (but not necessarily tastier), or
Such examples suggest support for the notion of tech- to cure inherited illnesses or extend human choices
nological fixes by large companies, governments and (but also introducing myriad moral questions alongside
the general population, as much as by engineers them- these new powers). Such cases were cited to argue that
selves [39]. But alongside unreflective acceptance of technological solutions streamlined analysis, priori-
clever technological solutions for urgent problems, tized economic, corporate, or consumer interests rath-
there is evidence of growing societal concerns about er than wider benefits, and under-estimated societal
some aspects of technological fixes. Such concerns side-effects.
deserve to refocus the discussion begun by Weinberg
fifty years ago. Ethical Implications
Critical assessments of technological fixes have vari- Early scholarly criticisms of Alvin Weinberg’s notions
ously identified reliance on technological solutions as criticized them as naively confident about the outcomes
evidence for inadequate engineering practice, failures of of science (“scientistic”) and tending to narrowly define
government policy, or outcomes of modern consumer- the complexity of problems (“reductionistic”) [42]. Be -
ism. These concerns suggest that technological fixes cause of its exaggerated attention to measurable out-
have important implications for shared social values, comes, rational decision-making carries additional
the wellbeing of wider publics, and the social role of philosophical and ethical dimensions. This confidence
engineers. In short, technological fixes have cultural, in positivism prioritizes confidence in quantitative evi-
ethical and political dimensions. dence, and necessarily devotes less consideration to
aspects of human values that cannot be counted.

8
Engineering disciplines have adapted to the contemporary environment
of terrorist threats by creating special-interest groups to promote secu-
9
rity technologies and funding for technological fixes. Among them is the International incidents included spillages from the oil tankers Amoco Cadiz
Homeland Security group of SPIE, the optical engineering society, which (1978) and Atlantic Empress (1979). Later incidents, such as the Exxon Val-
aims to “stimulate and focus the optics and photonics technology commu- dez (1989) and Deep Water Horizon (2010), fueled public debate about soci-
nity’s contributions to enhance the safety, counter homeland threats, and etal reliance on large-scale technological systems, ironically while promoting
improve the sense of well being” [38]. technological fixes for avoiding or cleaning up after such accidents.



The focus on outcomes also identifies the link between technological fixes and utilitarian ethics, in which the goal is to maximize positive consequences ("the greatest good"). This ethical framework works well for purely engineering problems, but can disfavor groups or environments that are not identified as the intended beneficiaries ("the greatest number"). There are other ethical alternatives for judging responsible innovation: notably duty-based ethics (deontology) and virtue ethics, which instead focus on rights and on personal behaviors, respectively.

Alongside unreflective acceptance of clever technological solutions for urgent problems, there is evidence of growing societal concerns about some aspects of technological fixes.

The narrowing of analytical dimensions (reductionism) is particularly dangerous when problem-solving relies on technological fixes: how can we adequately assess whether a solution satisfies the unvoiced or inexpressible wishes of all those affected? The problem becomes acute when we consider communities, species, and environments without a voice.

Philosopher Arne Naess criticized such ethical implications of relying on technological solutions. He argued that popular enthusiasm for such fixes tended to prioritize the status quo, i.e., the interests of current ways of life, and particularly current socio-economic conditions and interests. Naess argued that technological fixes carried cultural presuppositions about what was "reasonable," and consequently framed problems narrowly. They generally underestimated the scale and nature of socio-technical problems and the potency and side effects of the engineering solutions on offer. Naess called such short-term environmental attention and technologically-oriented solutions shallow ecology, and offered his own deep ecology approach in its place. Naess's alternative analysis sought to consider social, cultural, and technological solutions in tandem, and identified technological fixes as simplistic and inadequate [43].

Along the same lines, economist Ernst Schumacher defined appropriate technology as morally responsible innovation that takes equal account of local social needs, resources, labor, and skills in ways that most technological fixes do not. He argued that popular engineering criteria such as efficiency, elegance, and versatility could work against creating a genuinely sustainable sociotechnical system. Schumacher sometimes referred to his approach as "Buddhist economics," in the sense of incorporating moral and social values into modern systematic problem-solving in much the way that some eastern theologies did [44].

For an even wider range of theorists, the technological fix was portrayed as hubris, or excessive confidence, regarding human abilities to adequately understand and manage society and nature through rational means. As a "band-aid" solution to problems involving sophisticated systems, technological fixes were argued to both underestimate and inadequately solve complex problems. Philosopher Alan Drengson, for example, explored the moral values and religious underpinnings of these wider critical perspectives [45], [46]. He argued that technological fixes were too often short-term and incomplete, and consequently could camouflage the ultimate sources of larger problems and the nature of genuinely satisfactory solutions.

The Role of Engineers in Democratic Society
The faint voices of the beneficiaries — and potential victims — of technological fixes are of some concern. For Howard Scott's technocrats, engineers were expected to replace inexpert policy-makers, politicians, and economists with a "technate," or technological government. For Weinberg, government-assigned teams of engineers would assume responsibility for addressing social problems for the national good. For Meier, the process of directing technical solutions was envisaged as cooperation between engineers and communities, but ultimately guided by those with expert knowledge.

Such management by elites might be assessed and even voted upon by wider audiences, but this consultative process to some extent undermines the special role of technological competence in such a rational society. The effects of public participation in engineering solutions raised mixed feelings for Alvin Weinberg, who observed that some of his technological solutions were unlikely to succeed in a liberal democracy, and that "nuclear energy seems to do best where the underlying political structure is elitist" [47].

The same issues may disempower communities or individual consumers who opt for technological fixes. They may fail to identify how the "problem" and "solution" have been framed by the designers, companies, governments, or media sources who promote them. As a result, the "solutions" they are offered may be shallow or off-target, and reproduce undiscerning cultural values.



Engineers consequently have important responsibilities regarding technological fixes. Designers need to pay close attention to the scope of their analysis and the longevity of their solutions. They must consider not just the intended beneficiaries (e.g., customers, clients, funders) but also non-beneficiaries and "externalities" (e.g., marginal social groups, future generations, other species, and distant environments). Most importantly, they should recognize that complex modern societies incorporate multiple values and forms of expertise. Modern problems cannot be reduced to mere engineering solutions over the long term; human goals are diverse and constantly changing.

Author Information
Sean F. Johnston is Professor of Science, Technology, and Society at the University of Glasgow, Glasgow, U.K. Email: sean.johnston@glasgow.ac.uk.

References
[1] A.M. Weinberg, "Will technology replace social engineering?," presented at the Fifteenth Annual Alfred Korzybski Memorial Lecture (Harvard Club of New York: Institute of General Semantics), Apr. 29, 1966.
[2] A.M. Weinberg, "Can technology replace social engineering?," presented at the University of Chicago Alumni Award speech, Jun. 11, 1966.
[3] S. Brand, The Media Lab: Inventing the Future at M.I.T. New York, NY: Viking Penguin, 1987.
[4] "NGram viewer," Google Books; https://books.google.com/ngrams, accessed Jun. 26, 2017.
[5] J.-B. Michel et al., "Quantitative analysis of culture using millions of digitized books," Science, vol. 331, no. 6014, 2011.
[6] R.G. Wilson, D.H. Pilgrim, and D. Tashjian, The Machine Age in America 1918-1941. New York, NY: Brooklyn Museum/Abrams, 1986.
[7] R. Banham, Theory and Design in the First Machine Age. Cambridge, MA: M.I.T. Press, 1980.
[8] Technical Alliance, The Technical Alliance: What It Is, and What It Proposes. New York, NY, 1918.
[9] W.H.G. Armytage, The Rise of the Technocrats: A Social History, M. Keynes, Ed. Routledge, 1965.
[10] C.H. Wood, "The birth of the technical alliance," New York World, Feb. 20, 1921.
[11] S.F. Johnston, "Technological parables and iconic imagery: American technocracy and the rhetoric of the technological fix," History and Technology, vol. 33, no. 2, pp. 196-219, 2017.
[12] M. King Hubbert, "Lesson 22: Industrial design and operating characteristics," in Technocracy Study Course. New York, NY: Technocracy, 1945, pp. 242-268.
[13] P.A. Baran, "Review of Meier, Richard L., Science and Economic Development: New Patterns of Living," American Economic Rev., vol. 6, no. 47, pp. 1019-1021, 1956.
[14] R.L. Meier, Modern Science and the Human Fertility Problem. New York, NY: Wiley, 1959.
[15] R.L. Meier, Planning for an Urban World: The Design of Resource-Conserving Cities. Cambridge, MA: M.I.T. Press, 1974.
[16] R.L. Meier, "Late-blooming societies can be stimulated by information technology," Futures, vol. 32, no. 2, 2000.
[17] D.A. Strickland, Scientists in Politics: The Atomic Scientists Movement, 1945-46. Lafayette, IN: Purdue, 1968.
[18] H. Brooks, "The evolution of U.S. science policy," in Technology, R&D, and the Economy, L.R. Bruce and C.E. Barfield, Eds. Washington, DC: Brookings, 1996.
[19] A.M. Weinberg, "Can technology replace social engineering?," Bull. Atomic Scientists, vol. 22, no. 10, pp. 4-7, 1966.
[20] S.F. Johnston, The Neutron's Children: Nuclear Engineers and the Shaping of Identity. Oxford, U.K.: Oxford Univ. Press, 2012.
[21] A.M. Weinberg, Reflections on Big Science. Cambridge, MA: M.I.T. Press, 1967.
[22] A.M. Weinberg, The First Nuclear Era: The Life and Times of a Technological Fixer. New York, NY: AIP, 1994, p. 150.
[23] S.F. Johnston, "Alvin Weinberg and the promotion of the technological fix," Technology and Culture, vol. 59, no. 2, 2018, to be published.
[24] A.M. Weinberg, "Can technology replace social engineering?," Bull. Atomic Scientists, vol. 22, no. 10, pp. 4-7, 1966.
[25] A.M. Weinberg and J.C. Bresee, "On the air-conditioning of low-cost housing," Weinberg archives, Children's Museum of Oak Ridge (CMOR), Cab 5, Drawer 4, Chron 1968-1, 1968.
[26] A.M. Weinberg, "Letter to J.S. Foster, Jr.," Mar. 7, 1967, CMOR, Cab 5, Drawer 4, Chron 1967-1.
[27] A.M. Weinberg, "Social problems and national socio-technical institutes," in Applied Science and Technological Progress: A Report to the Committee on Science and Astronautics, U.S. House of Representatives, by the National Academy of Sciences, 1967, pp. 415-434.
[28] A.M. Weinberg, "Letter to H. Brooks," Jun. 17, 1966, CMOR, Cab 5, Drawer 4, Chron 1966-2.
[29] S.L. Del Sesto, "Wasn't the future of nuclear energy wonderful?," in Imagining Tomorrow: History, Technology, and the American Future, J.J. Corn, Ed. Cambridge, MA: M.I.T. Press, 1986, pp. 58-76.
[30] W.A. McDougall, The Heavens and the Earth: A Political History of the Space Age. Baltimore, MD: Johns Hopkins Univ. Press, 1985.
[31] M. Ackerman, "The nutritional enrichment of flour and bread: Technological fix or half-baked solution," in The Technological Fix: How People Use Technology to Create and Solve Problems, L. Rosner, Ed. New York, NY: Routledge, 2004, pp. 75-92.
[32] C.P. Mallet, Ed., Frozen Food Technology. London, U.K., 1993.
[33] T. Maguire and D. Haslam, The Obesity Epidemic and Its Management. London, U.K.: Pharmaceutical, 2009.
[34] S. McKellar, "Artificial hearts: A technological fix more monstrous than miraculous?," in The Technological Fix: How People Use Technology to Create and Solve Problems, L. Rosner, Ed. New York, NY: Routledge, 2004, pp. 13-30.
[35] F.H. Chapelle, "Bioremediation of petroleum hydrocarbon-contaminated ground water: The perspectives of history and hydrology," Groundwater, vol. 37, no. 1, pp. 122-132, 2005.
[36] J. Gabrys, "Plastic and the work of the biodegradable," in Accumulation: The Material Politics of Plastic, J. Gabrys, G. Hawkins, and M. Michael, Eds. New York, NY: Routledge, 2013, pp. 208-227.
[37] Royal Society, Geoengineering the Climate: Science, Governance and Uncertainty. London, U.K.: Royal Society, 2009.
[38] SPIE, 2003, /Announcements/index.html#homeland, accessed May 14, 2003.
[39] M. Oelschlaeger, "The myth of the technological fix," Southwestern J. Philosophy, vol. 10, no. 1, pp. 43-53, 1979.
[40] S.W. Leslie, The Cold War and American Science. New York, NY: Columbia Univ. Press, 1993.
[41] R. Carson, Silent Spring. New York, NY: Houghton Mifflin, 1962.
[42] E.M. Burns and K.E. Studer, "Reply to Alvin M. Weinberg," Res. Policy, vol. 5, pp. 201-202, 1976.
[43] A. Naess, "The shallow and the deep, long-range ecology movement: A summary," Inquiry, vol. 16, pp. 95-100, 1973.
[44] E.F. Schumacher, Small Is Beautiful: A Study of Economics as If People Mattered. London, U.K.: Blond & Briggs, 1973.
[45] A.R. Drengson, "The sacred and the limits of the technological fix," Zygon, vol. 19, no. 3, 1984.
[46] A.R. Drengson, The Practice of Technology: Exploring Technology, Ecophilosophy, and Spiritual Disciplines for Vital Links. Albany, NY: State Univ. of New York Press, 1995.
[47] A.M. Weinberg, "Nuclear power and public perception," in Nuclear Reactions: Science and Trans-Science. Washington, DC: American Institute of Physics, 1992, pp. 273-289.



Autonomous Weapon Systems
Failing the Principle of Discrimination

Ariel Guersenzvaig

Digital Object Identifier 10.1109/MTS.2018.2795119
Date of publication: 2 March 2018

In this article, I explore the ethical permissibility of autonomous weapon systems (AWSs), also colloquially known as killer robots: robotic weapons systems that are able to identify and engage a target without human intervention. I introduce the subject, highlight key technical issues, and provide necessary definitions and clarifications in order to limit the scope of the discussion. I argue for a (preemptive) ban on AWSs anchored in just war theory and International Humanitarian Law (IHL), which are both briefly introduced below.

To make my case, I examine and juxtapose a series of arguments and counterarguments in favor of and against AWSs made by several authors from the literature, especially Sharkey [1], and Schmitt and Thurnher [2]. I will address Sharkey's technical arguments concerning the shortcomings or possibilities of robots and computing systems, critically examine his concerns, and build upon them from both a rule-utilitarian and a precautionary approach to argue that AWSs should be pre-emptively banned precisely because they are currently technically incapable of complying with IHL and the discrimination principle: if deployed now, they would not maximize the reduction of harm to non-combatants, violating one of the main tenets of just war theory and IHL.

Increased Attention on AWSs
It can be expected that research into AWSs will increase, and that autonomous systems will become more ubiquitous in military activities [3]. Many governments have prioritized the use of autonomous systems, and military authorities in countries such as the U.S. and The Netherlands have expressed favorable views towards autonomous weapons [4]. These weapons are less expensive than manned systems, entail fewer apparent risks to military personnel, and could give nations a qualitative edge. Another appealing notion is that autonomous weapons could be programmed to comply with ethical and legal norms [5], [6].

Over the last several years, the legal, ethical, and military implications of AWSs have been repeatedly brought to the attention of the international community by many humanitarian organizations, such as Human Rights Watch and the International Committee of the Red Cross (ICRC). The issue was discussed in several meetings of experts, most notably in those held within the framework of the Convention on Certain Conventional Weapons under the auspices of the UN Office at Geneva.

Advocacy and criticism against AWSs has also gained ground; two important initiatives from civil society are the Campaign to Stop Killer Robots,1 an international coalition that works to preemptively ban fully autonomous weapons, and a letter initially signed by 1500 prominent artificial intelligence (AI) and robotics researchers calling for a ban on offensive autonomous weapons [7].

Why Now?
Automatic weapons have been around for decades, and robotic weapon systems are anything but a novelty, as new teleoperated and telerobotic systems are constantly being developed and deployed in many areas of conflict [8].

So, why is this type of weapon receiving specific attention now? One possible answer is that until recently even sophisticated weapon systems retained a degree of human supervision over life-or-death decisions: drones are piloted from afar and have a high degree of autonomous behavior, but there is still a human person making the decision to fire a missile and engage an enemy target. An automatic shotgun will automatically load a new round and fire repeatedly for as long as the shooter doesn't release the trigger. In both these cases, a human intervention is necessary.

The current state of the art in AI and robotics has reached a point where, according to many experts, it is already technically feasible — or it will be feasible within years — to deploy a killer robot that autonomously hunts and kills enemy combatants without necessary human intervention. Precisely by making human judgement unnecessary in the initiation of lethal force, AWSs signal a paradigm shift in military technology. The signatories of the aforementioned letter commented: "autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."

Behind the opposition against killer robots, there is the conviction that these autonomous systems "would not be consistent with international humanitarian law and would increase the risk of death or injury to civilians during armed conflict" [9]. If AWSs are not capable of complying with international humanitarian law — whose principles regarding the conduct of warfare are rooted in Just War Theory — these weapons are both illegal and ethically impermissible.

Between the Terminator and the Reaper
The scope of this paper stretches between two extremes that mark what the discussion on AWSs is not about. On the one extreme, there is the mythical Terminator, an anthropomorphic robot soldier equipped with an intelligence that is near or above human capacities. I will not be discussing this type of AI, also known as strong AI (equal to a human's capabilities) or superintelligence (above human capabilities). I will stick to what is widely regarded in the robotics and AI community as technically feasible in the coming years. Perhaps the current feasible limit on this extreme is marked by what Bostrom [10] calls a "domain-specific 'superintelligence,'" i.e., one that is only smart within one narrow domain. An existing example of a domain-specific intelligence is Google's AlphaGo, a computer program developed to play the board game Go that in March 2016 defeated Go master Lee Se-dol, one of the greatest modern players. Autonomous cars could be seen as another type of domain-specific intelligence, albeit one that needs to be much improved upon. In the case of an AWS, the domain for this AI would be war, a domain whose complexity is far greater than that of the game Go or city traffic, itself already a very heterogeneous and complex domain.

1 Campaign to Stop Killer Robots: http://www.stopkillerrobots.org/



On the other extreme, we find existing semi-autonomous weapons such as unmanned aerial vehicles like the Reaper drones. Although these aircraft are highly automated, "they are not considered to be autonomous because they are still operated under human supervision and direct control" [11]. They are automated and not fully autonomous because, without human intervention, they can only perform highly structured tasks, and they require human intervention for tasks that are more dynamic and less well-defined.

AI and robotics have reached a point where it is technically feasible to deploy a killer robot that hunts and kills enemy combatants without human intervention.

However, it would only take a small engineering tweak to transform a drone and take the human out of the loop, replacing it with a computer and thus enabling the autonomous side of the technology. One such system might already exist: the Samsung SGR-A1, a sentry gun used to monitor the Korean demilitarized zone. This gun — using technology like that used in videogames — recognizes human shapes and orders them to stop and surrender. Currently, the gun is only used in autonomous surveillance mode, but several reports confirm that the gun is equipped to deliver lethal or non-lethal force without human intervention [12], [13].

In short, neither the Reaper drones nor the Terminator are the killer robots I am discussing in this paper. The locus of this discussion is systems such as the SGR-A1, but only when their autonomous capacity is fully functional, i.e., when they can be considered a fully autonomous weapon system.

The Human and the Loop
Evidently, all robotic weapons have some degree of autonomy based on how their software is programmed. A further clarification might be required regarding the level of human control in the targeting process, i.e., the decision-making process. This human involvement can be of three types [9]:
■ Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;
■ Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots' actions; and
■ Human-Out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.
Strictly speaking, AWSs fall into the third type (human-out-of-the-loop). However, in some cases, weapons of the second type (human-on-the-loop) could be considered de-facto out-of-the-loop weapons because of a lack of adequate or sufficient human supervision: "[t]he ability of a single operator to have effective oversight of dozens or even hundreds of aircrafts seems implausible to many experts" [9].

Definitions
There is a wide spectrum of current conceptions of AWSs. According to Lewis et al. [4], "on one end of the spectrum, [an] AWS is an automated component of an existing weapon. On the other, it is a platform that is itself capable of sensing, learning, and launching resulting attacks." Asaro [11] defines an AWS as "any system that is capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision-making."2 A similar definition is offered by the International Committee of the Red Cross [14]. I will use this broad definition of an AWS.

A further distinction can be drawn between defensive and offensive systems. Defensive systems are used to protect facilities or areas from incoming attacks. Many defensive systems already exist: one of them is the Phalanx CIWS, which offers defense against anti-ship missiles. These systems can be highly automated, but their autonomy is restricted by severe operational constraints, such as a limited envelope of operations (e.g., the area around a ship) or restrictions on the task they carry out (e.g., they offer specific protection against incoming supersonic missiles). Although arguments against these weapons could be made, for example, on absolute pacifist grounds, I will not argue against these types of systems because of their limited autonomous behavior and their acceptable degree of predictability — even without human intervention after the system has been activated.

The arguments I will present below apply to offensive weapon systems, whose behavior might become less and less predictable because of a series of developments related to their autonomous behavior: increased mobility over greater periods of time, adaptability in functioning and goal setting, self-learning, and increased interaction of multiple weapon systems in self-organizing formations [14].

2 Targeting and initiating the use of force are part of the so-called kill chain, "defined in the US Air Force as containing six steps: find, fix, track, target, engage and assess" [11].



Theoretical Foundations for the Discussion
The arguments I will present against AWSs rely on these weapons not being currently capable of complying with IHL. However, my interest in IHL is not qua law, but due to its narrow relation to just war theory — a normative framework often used by ethicists to discern the permissibility and the justification of the use of force.

Just War Theory
Just War Theory regulates both the reasons for going to war (jus ad bellum) and the conduct of war (jus in bello). I introduce only the latter because the former is less relevant to the discussion.

According to the traditionalist view of Just War Theory, there are three core principles that regulate conduct in war: discrimination, proportionality, and military necessity. These principles aim to limit violence and minimize harm towards non-combatants; and where harm does occur, even if it is foreseeable, it must be a collateral effect, i.e., unintended.3 Widdows [15] describes these three key principles as follows:
1) Discrimination. To be just, combatants must distinguish between enemy combatants and non-combatants.
2) Proportionality. To be just, the harms of any action must be proportional to the gains.
3) Military Necessity. To be just, any action must be militarily necessary to achieve the end with the minimum harm.

International Humanitarian Law
International Humanitarian Law can be seen as an operationalization in treaties and conventions of the abstract principles comprised in jus in bello, which are specially addressed in the four Geneva Conventions of 1949 and the so-called additional protocols of 1977.

The ICRC [16] defines international humanitarian law as:

a set of rules which seek, for humanitarian reasons, to limit the effects of armed conflict. It protects persons who are not or are no longer participating in the hostilities and restricts the means and methods of warfare.

The aim of IHL is to protect and minimize the harm inflicted on unlawful targets (those who are not taking part in hostilities), and to make sure military force is only used against legitimate targets. The principles of distinction, proportionality, and precaution are codified in IHL, allowing for the killing of some people (combatants) under specific circumstances, but prohibiting the killing of others (innocent civilians or wounded soldiers), and specifying lawful objects (e.g., a military factory) and unlawful objects (e.g., a school).

IHL also stipulates restrictions on the means and methods of warfare: "[a]s a general rule, IHL prohibits means and methods of warfare that cause superfluous injury or unnecessary suffering" [16].

IHL offers, thus, general rules and principles to govern the conduct of hostilities, irrespective of the weapon used. Nonetheless, specific treaties have banned or restricted the use of many weapons, such as chemical and biological weapons, blinding lasers, or anti-personnel mines. These weapons are considered illegal per se, i.e., weapons that violate IHL irrespective of their use because they can't comply with its key principles. Even though a biological weapon could be used, in theory, to target a legitimate military target, in practice the infected target could spread the biological agent — i.e., the disease — to innocent civilians in an unpredictable way, which would violate the principles of IHL. Weapons that are illegal per se are different from weapons that are used in an illegal manner: an AK-47 assault rifle is not illegal per se because it could be used in a way that complies with IHL. However, it would of course be illegal to use it to coerce civilians or to execute surrendering enemy combatants.

Killer Robots' Failure to Discriminate: Arguments and Counterarguments
In this section I present a series of arguments in favor of a ban based on AWSs' failure to comply with the principle of discrimination, which is a necessary condition for permissible war. After presenting arguments in favor of a ban, I will discuss a series of counterarguments offered by opponents of the ban.

A key argument against AWSs is that autonomous systems can't properly discriminate between combatants and non-combatants in a way that satisfies the requirements of IHL. Sharkey [1], himself a robotics expert, argues that robots do not currently have the capacity to make this distinction. His objections are: 1) Computers do not have adequate sensory processing systems for separating combatants from civilians; they do have cameras and other sensors, and might thus be able to tell whether something is human or not, but not more than that. 2) A computer requires that every element be specified in sufficient detail in a software program so that it can operate on it, but there is no adequate definition of a civilian that can be translated into computer code. 3) Even if computers had adequate visual sensing mechanisms to distinguish civilians from combatants, machines will still lack the contextual awareness necessary for discrimination decisions.

3 This is based on the doctrine of double effect. This doctrine accounts for the permissibility of an action that causes a serious harm as an unintended effect of achieving some good end.



Battlefield awareness may be computationally intractable, and there is no evidence that AWSs could operate on the principle of distinction as well as human soldiers can.

IHL prohibits weapon systems that cannot be properly aimed; these weapons are unlawful per se in that they fail to comply with the principle of discrimination. On this basis, Sharkey [1] argues that the morally correct course of action is to ban AWSs.

To counterargue, Schmitt and Thurnher [2] pose the scenario of an autonomous system for an attack on a tank formation in a remote area of a desert:

not all battlespaces contain civilians or civilian objects. When they do not, a system devoid of an ability to distinguish protected persons and objects from lawful military targets can be used without endangering the former.

The inability of the weapon systems to distinguish bears on the legality of their use in particular circumstances, such as along a roadway on which military and civilian traffic travels — but not on their lawfulness per se.

Since increased mobility seems to be one of the trends for AWSs, a killer robot will not necessarily be active only in particular circumstances or specific battlespaces that can be cleared of civilians in advance. Autonomy is a game-changer precisely because offensive autonomous weapons have an autonomous capacity for roaming in order to select and engage a target, which is their intrinsic characteristic. The very notion of battlespaces that can be known to be totally safe beforehand becomes nonsensical: the location of the theatre of warfare is dynamic and complex and cannot be confined to "particular circumstances" in which an autonomous system would have permission to safely fail to make a distinction between civilians and military targets.

Schmitt and Thurnher [2] also commend the present pattern-recognition capabilities of military technology:

"[it] has advanced well beyond simply being able to spot an individual or object. Modern sensors can ... assess the shape and size of objects, determine their speed, identify the type of propulsion being used ..."

It is true that recognition technology is advancing rapidly. Automated traffic-control systems equipped with road sensors can detect if a car is speeding, take a picture of the car, read its license plate number, and send the car owner a fine. However, it is one thing to detect and recognize well-defined variables such as the numbers on a license plate or the shape of a ballistic missile, but another very different thing to determine whether a person is a civilian or a non-uniform-wearing combatant.

On the other hand, there are pattern-recognition technologies available today that are indeed able to discriminate between a civilian and a uniform-wearing soldier based on images.4 It could thus be argued that Sharkey's objections 1) and 2) above could be worked out for uniform-wearing combatants. Objections 1) and 2) would still stand in situations where combatants are not wearing uniforms. It doesn't require a leap of faith to imagine that once effective uniform-seeking AWSs are operational, the enemy would cease wearing uniforms altogether in order to avoid being detected. Since an important discrimination variable — the uniform — is eliminated, it's easy to see how this might in turn heighten the risks to civilians.

According to Mike Schroepfer, Facebook's Chief Technology Officer, "in recent years, the state of AI research has advanced very quickly — but there's still a long way to go before computers can learn, plan, and reason like humans" [17]. Thanks to state-of-the-art technology, a leading company such as Facebook has the actual ability to segment visual objects and label them with information. The technology is already being used to describe photos to visually impaired people, but it is far from error-proof, as the examples in [17] reveal. Even if these recognition systems are rapidly improving, current AI systems such as Facebook's still fail at understanding the picture as a whole, including the context surrounding the objects they recognize. Schroepfer exemplifies this issue with a picture of a sausage pizza. A human has no trouble visually discriminating whether a sausage pizza is vegetarian or not.5 A computer, however, can't answer that question because it lacks the necessary contextual understanding. Facebook's researchers are currently trying to give computers contextual understanding, one case at a time. Schroepfer [17] explains:

we need to give them a model by which they can understand the world — a set of facts and concepts they can draw from in order to answer questions like the one about the pizza. [...] Despite [...] progress, there's a lot more work required to make computer systems truly intelligent.

4 In November 2016, Facebook published an example of their software for understanding a visual scene that identifies the objects in it. A picture of a woman and two policemen was correctly recognized and labeled by the software. The picture is no longer on-line, but it can be accessed at the Internet Archive: http://web.archive.org/web/20161205202820/http://newsroom.fb.com/news/2016/11/accelerating-innovation-and-powering-new-experiences-with-ai/.
5 Humans are able to do this thanks to representational structures often known as mental models, schemata, frames, etc., which enable humans to interpret the world they see.

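The gap between labeling and understanding is easy to demonstrate for oneself. The following Python sketch is a minimal illustration, not Facebook's system: it uses an off-the-shelf torchvision classifier, and the file name scene.jpg is a hypothetical stand-in. What such a model returns is a ranked list of object labels, with no representation of the context around them:

```python
# Minimal sketch: what an off-the-shelf image classifier actually outputs.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2   # pretrained ImageNet weights
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # matching input preprocessing

img = Image.open("scene.jpg").convert("RGB")      # hypothetical input photo
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))  # add batch dimension

# Top-5 labels with confidences: a bag of nouns, not an interpretation.
probs = logits.softmax(dim=1)[0]
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2f}")
# The model may report "pizza" with high confidence, but whether the pizza
# is vegetarian, or whether a person is a combatant, is a contextual
# question that no list of labels answers.
```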


Opponents of the ban [2], [5], [18] argue that we can't predict what future autonomous systems may be capable of. They might even outperform humans in many tasks related to discrimination. Nevertheless, with present-day technology, even when considering state-of-the-art systems, it seems Sharkey's third objection — machines lacking the contextual awareness necessary for discrimination decisions — is a difficult one to solve.

Many weapons were banned because they failed the test of the principle of discrimination at the time they were banned. Arguing that autonomous weapons should not be banned today because they might get better in the future requires a leap of faith, and it is akin to arguing that the ban on antipersonnel landmines should be lifted because in the future they might be equipped with specific sensors that will discriminate whether the person stepping on them is a lawful target or not. The ban on antipersonnel landmines considers them indiscriminate weapons given their current technology; it doesn't take into account hypothetical improvements that might or might not arrive.

Until there is serious evidence or research suggesting that an autonomous system has the capacities to make discriminating decisions in complex environments such as an (urban) battlefield, AWSs must be assumed to fail to comply with the criterion of discrimination, a crucial principle in both just war theory and IHL, and should therefore be banned.

Schmitt and Thurnher [2] state that "more may not be asked of autonomous weapon systems than of human-operated systems," referring to the fact that humans can also fail to pass the test, and that "enemies have been feigning civilian or other protected status [...] for centuries." Although this is a strong point, it remains unconvincing as an argument against a ban, as it only shows that humans do not have a 100% success rate at passing the test. The question remains whether autonomous systems equipped with current technology are at least as good as humans at discriminating civilians from combatants. This issue could be settled empirically, but meanwhile we could safely guess that humans are much better than today's systems, which, as we have seen, have trouble determining if a sausage pizza is vegetarian or not.

There are, of course, many other consequentialist and deontological arguments against and in favor of AWSs; see [2], [3], [11], [19] for more of these.

A Note on the Accountability of Killer Robots
Considering that a ban — for political, military, commercial, or other reasons — might be difficult to achieve [4], it is perhaps inevitable that deploying AWSs equipped with today's technology could result in many unjust deaths.

Defenders of autonomous weapons argue that humans too cause unjust deaths. It would be counterfactual to deny that humans do indeed commit atrocities during war, either by mistake or with intent. So what is different with AWSs? Atrocities that are committed with intent are called war crimes, and the responsible individuals are meant to be held accountable under IHL. Accountability, i.e., allocating individual responsibility for war crimes, serves as a deterrent of future harm and as an exercise in retribution for victims, and it is a necessary condition for fighting a just war [3], [19].

However, many questions arise in regard to AWSs and accountability: what happens when an autonomous weapon commits a war crime? Who should be held responsible for an unjust action carried out by an AWS? These and other questions were explored elsewhere [4], [9], [19] in more detail, but I wish to join the conversation in this last section.

The whole notion of accountability becomes fuzzy with an AWS, as probably neither the system's designers or developers nor the maintainers or the military who deployed the system wilfully acted with the intention of causing that crime, and they are even bound to be unaware of the concrete particular situation where the unjust action took place. Sparrow [19] has argued that a programmer or commanding officer is not a satisfactory locus of responsibility because it would be unfair to hold someone "entirely responsible for actions over which they had no control." We should keep in mind that we are not discussing weapons but weapon systems: the focus does not lie on the weapon qua instrument, but on "the process by which the use of force is initiated" [11].

Without intent there is no crime, so who should be held responsible? The question is very difficult to answer, but the consequences of leaving it unanswered can be disastrous. A blurring of accountability "would fail to deter violations of international humanitarian law and to provide victims meaningful retributive justice" [9].

Although jus in bello requires a clear accountability chain [3], Anderson and Waxman [5] do not consider the accountability gap to be sufficient reason to block the development of AWSs, as their use could end up saving lives. Sharkey [1], on the contrary, argues that without clear accountability to enforce compliance, many more civilian lives could be lost. Others argue that no such gap would ever exist, as there will always be a human involved, at least at the point of deployment, who could be held responsible [14].

I shall not elucidate this conundrum here. It is clear, however, that the issue of the accountability of autonomous weapons, in situations when no person involved is acting with the intent of committing a war crime, should be formally addressed assuming that AWSs might be deployed sometime in the future. Crootof [20] provides a promising avenue for discussion by signaling the need for war torts to deal with the accountability gap. A tort is an act or omission that causes injury or harm to a person or an organization, i.e., a wrong for which courts can impose legal liability on the party that commits the tortious act. Crootof elaborates:

Just as the Industrial Revolution fostered the development of modern tort law, autonomous weapon systems highlight the need for "war torts": serious violations of international humanitarian law that give rise to state responsibility.

Due to space limitations, I will not discuss this issue in further detail here and refer the interested reader to the aforementioned source.

Current Technology Does Not Meet Conditions Under IHL
The argument I have tried to make here in favor of a preemptive ban is that today's technology does not meet the necessary conditions for permissible killing under IHL. Since today's AWSs would likely fail to discriminate, they would not comply with international humanitarian law; hence they should be considered both illegal and ethically impermissible. However, meeting the key criterion of discrimination is not in itself a sufficient condition for ethical killing. Even if an autonomous system could outperform humans regarding discrimination, this would not entail that the use of force is morally justified per se. According to the tiered nature of just war theory, in order for the killing of an enemy combatant to be permissible, the combatant has to, for example, constitute a real threat, or the killing has to yield a proportional strategic advantage. Also, even in the case of AWSs becoming statistically better than humans at discriminating, many persuasive deontological arguments could be made for a principled bound on autonomous machine killing: the authority to initiate lethal force should not be delegated to an unsupervised automated process, as an AWS is unable to establish a relevant interpersonal relationship of respect [11], [21], [22].

If my argument so far has been sound, I have at least shown that the technical incapacity of current AWSs at discriminating is sufficient reason to question the use of AWSs and to call for a ban — or at least a moratorium on their use — that would minimize the harm inflicted on unlawful targets, and give researchers, civil society, and policy makers more time to engage in a discussion around the deeper philosophical issues at stake: e.g., whether AWSs are compatible with the requirement of respect for the humanity of combatants and non-combatants [22].

Author Information
The author is a senior lecturer and researcher at ELISAVA Barcelona School of Design and Engineering (Pompeu Fabra University), Spain. Email: aguersenzvaig@elisava.net.

References
[1] N. Sharkey, "The evitability of autonomous warfare," Int. Rev. Red Cross, vol. 94, pp. 787-799, 2012.
[2] M.N. Schmitt and J.S. Thurnher, "'Out of the loop': Autonomous weapon systems and the law of armed conflict," Harvard Nat. Security J., vol. 4, pp. 231-281, 2013.
[3] J. Sullins, "An ethical analysis of the case for robotic weapons arms control," in Proc. 2013 5th Int. Conf. Cyber Conflict, K. Podins, J. Stinissen, and M. Maybaum, Eds. Tallinn, Estonia: NATO CCD COE, 2013, pp. 487-506.
[4] D. Lewis, G. Blum, and N. Modirzadeh, "War-algorithm accountability," Harvard Law School, Program on International Law and Armed Conflict (PILAC), Harvard Univ., research briefing, 2016.
[5] K. Anderson and M. Waxman, "Law and ethics for autonomous weapon systems: Why a ban won't work and how the laws of war can," The Hoover Inst., Stanford Univ., Jean Perkins Task Force on National Security and Law essay, 2013.
[6] R. Arkin, "The case for ethical autonomy in unmanned systems," J. Military Ethics, vol. 9, pp. 332-341, 2010.
[7] Future of Life Institute, Autonomous Weapons: An Open Letter from AI & Robotics Researchers, 2015. [Online]. Available: http://futureoflife.org/open-letter-autonomous-weapons/, accessed Apr. 10, 2017.
[8] P.W. Singer, Wired for War. New York, NY: Penguin, 2009.
[9] Human Rights Watch, "Losing Humanity," Human Rights Watch, New York, NY, 2012.
[10] N. Bostrom, Ethical Issues in Advanced Artificial Intelligence, 2003. [Online]. Available: http://www.nickbostrom.com/ethics/ai.html, accessed Apr. 10, 2017.
[11] P. Asaro, "On banning autonomous systems: Human rights, automation, and the dehumanization of lethal decision-making," Int. Rev. Red Cross, vol. 94, pp. 687-709, 2012.
[12] A. Velez-Green, The South Korean Sentry — A "Killer Robot" to Prevent War, 2015. [Online]. Available: https://www.lawfareblog.com/foreign-policy-essay-south-korean-sentry%E2%80%94-killer-robot-prevent-war, accessed Apr. 10, 2017.
[13] L. Hin-Yan, "Categorization and legality of autonomous and remote weapons systems," Int. Rev. Red Cross, vol. 94, pp. 627-652, 2012.
[14] ICRC, "Views of the International Committee of the Red Cross on autonomous weapons systems," International Committee of the Red Cross, Geneva, Switzerland, 2016.
[15] H. Widdows, Global Ethics. Durham, NC: Acumen, 2011.
[16] ICRC, "What is International Humanitarian Law?," International Committee of the Red Cross, Geneva, Switzerland, 2014.
[17] M. Schroepfer, Accelerating Innovation and Powering New Experiences with AI, 2016. [Online]. Available: http://newsroom.fb.com/news/2016/11/accelerating-innovation-and-powering-new-experiences-with-ai/, accessed Apr. 10, 2017.
[18] R. Arkin, Governing Lethal Behaviour in Autonomous Systems. Boca Raton, FL: CRC, 2009.
[19] R. Sparrow, "Killer robots," J. Applied Philosophy, vol. 24, pp. 62-77, 2007.
[20] R. Crootof, "War torts: Accountability for autonomous weapons," Univ. Penn. Law Rev., vol. 164, pp. 1347-1402, 2016.
[21] P. Asaro, "Military robots and just war theory," in Ethical and Legal Aspects of Unmanned Systems, G. Dabringer, Ed. Vienna, Austria: Institut für Religion und Frieden, 2011, pp. 103-119.
[22] R. Sparrow, "Robots and respect: Assessing the case against autonomous weapon systems," Ethics and Int. Affairs, vol. 30, pp. 93-116, 2016.



The Safety of Autonomous Vehicles
Lessons from Philosophy of Science

Daniel J. Hicks

Digital Object Identifier 10.1109/MTS.2018.2795123
Date of publication: 2 March 2018



The safety argument is perhaps the most widely cited argument in favor of the rapid development and widespread adoption of automated vehicles (AVs). Versions of this argument promote the development and use of AVs on the grounds that these vehicles will have much lower crash, injury, or fatality rates than human drivers. (Throughout this essay, I assume that AVs are designed to respond to malfunctions and hazards without human intervention, corresponding to level 4 or 5 automation [1].) For example, the U.S. National Highway Traffic Safety Administration's (NHTSA) Federal Automated Vehicles Policy, announced in September 2016, gives the safety argument as its first, and most well-developed, argument in support of AVs:

    For [the U.S. Department of Transportation (DOT)], the excitement around highly automated vehicles (HAVs) starts with safety. Two numbers exemplify the need. First, 35 092 people died on U.S. roadways in 2015 alone. Second, 94 percent of crashes can be tied to a human choice or error. An important promise of HAVs is to address and mitigate that overwhelming majority of crashes [2, p. 5].

Safety is prominent throughout the document, especially when compared to economic considerations. In 116 pages, the framework uses the word "safe" or its derivatives (e.g., "safety") 467 times; but terms such as "unemployment" or "unemployed" never occur. Potential economic concerns about AVs are raised exactly once, in the introduction [2, p. 3]. In short, while the framework momentarily nods in the direction of economic and ethical concerns with AVs, the substance of the framework gives no reason to think that NHTSA will regulate this technology on any basis other than safety.

Yet the economic impacts of AVs are likely to be enormous. A recent Los Angeles Times story reports that 3.4 million people in the U.S., or about 2% of the U.S. labor force, are professional drivers — including drivers of trucks, taxis, buses, and delivery vehicles [3]. Automation will dramatically reduce the need for these professional drivers, especially for long-distance highway trucking. In October 2016, Otto (a subsidiary of Uber) announced that they had delivered a shipment of beer from Fort Collins to Colorado Springs using an automated truck: "our professional driver was out of the driver's seat for the entire 120-mile journey down I-25, monitoring the self-driving system from the sleeper berth in the back" [4]. Even if NHTSA regulations required long-distance automated trucks to have a human monitor, it is plausible that a monitor would need much less training and experience than a professional driver, and thus would likely receive much lower pay.

In this respect, AVs are likely to be an important test case for the next wave of automation. For obvious reasons, it is basically impossible to produce high-level projections of the economic impact of near-future automation without relying on questionable assumptions. Thus estimates of these impacts vary dramatically. Frey and Osborne [5] estimate that 47% of U.S. jobs have "high probability of computerization," meaning that they could be automated in the near future or with relatively minor improvements in existing technology. Arntz et al. [6] offer a relatively sanguine estimate that "only 9% of all individuals in the U.S. face a high automatibility, i.e., an automatibility of at least 70%" [15]. However, even a 9% structural shift in employment is still massive; for comparison, the Great Recession of 2008–2010 caused a 6% increase in unemployment in the U.S. Given that the political and economic effects of automation are both highly uncertain and likely to be large, analysis of the philosophical issues raised by AVs can help us prepare for automation in other areas of the economy.

The philosophical issues raised by automation include ethical issues, such as the intrinsic ethics of AV systems (that is, the capacity of AVs to make moral decisions) and the social ethics of technology adoption (such as questions of justice as AVs replace professional drivers). However, AVs also raise epistemological issues; that is, questions about our knowledge and understanding of these systems. And political and epistemological issues can be entangled, as stakeholders with different interests in AVs may be motivated to make or critique different scientific or technical claims. In other words, AVs have high political and economic stakes, and these could lead to a heated scientific controversy, somewhat like the controversy over climate change. Philosophical analyses of heated scientific controversies might help prevent AVs — and automation more generally — from becoming "the next climate change."

Proving the Safety of Self-Driving Cars
Kalra and Paddock [7] provide an a priori, statistical analysis of the number of miles that AVs must travel in order to produce precise estimates of the crash rate for AVs based on observational data (that is, merely recording the number of crashes as they happen). Car crashes are relatively rare, even for human drivers — Kalra and Paddock [7] report that "The 32 719 fatalities [from car crashes] in 2013 correspond to a failure rate of 1.09 fatalities per 100 million [vehicle] miles [traveled]." Precise estimates of such rare events require large samples. In the case of human-driven vehicles, these large samples come from trillions of vehicle miles traveled per year, distributed over tens of millions of cars.



Kalra and Paddock [7] calculate the number of AV miles traveled required to produce statistically precise estimates of AV crash rates across several different crash types (from any reported crash to fatalities), degrees of statistical stringency or precision, and statistical tasks (estimating the maximum crash rate, demonstrating that the crash rate is lower than a threshold, and demonstrating that the crash rate is statistically significantly lower than the human crash rate). Their calculations range across 5 orders of magnitude, from 1.6 million miles (95% confidence of a maximum crash rate of 190 reported crashes per 100 million miles traveled) to 11 billion miles (95% confidence and 80% power to detect a 20% improvement over the human fatality rate of 1.09 fatalities per 100 million miles). A fleet of 1000 AVs, driving an average of 6 hours per day at an average of 60 mph, would require 84 years to travel 11 billion miles.
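Where do figures like these come from? Under a simple Poisson model (our simplification, not the full analysis in [7]), driving n crash-free miles supports the claim, at confidence level 1 - alpha, that the true crash rate is below -ln(alpha)/n. A minimal sketch in Python, using only the numbers quoted above:

    import math

    def miles_for_max_rate(max_rate_per_100m, confidence=0.95):
        """Crash-free miles needed to claim, at the given confidence,
        that the true crash rate is below max_rate_per_100m
        (crashes per 100 million miles), under a Poisson model."""
        rate_per_mile = max_rate_per_100m / 100e6
        return -math.log(1 - confidence) / rate_per_mile

    print(f"{miles_for_max_rate(190):,.0f} miles")  # ~1.6 million miles

    # Fleet time to accumulate 11 billion miles of on-road driving:
    fleet_size, hours_per_day, mph = 1000, 6, 60
    miles_per_year = fleet_size * hours_per_day * mph * 365
    print(f"{11e9 / miles_per_year:.0f} years")     # ~84 years

The first print reproduces the 1.6-million-mile figure quoted above; the second reproduces the 84-year fleet estimate.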
Uncertainty due to the large amount of data required is reflected in other analyses. A Google-sponsored report used data from a National Academies study and Google to compare human-driven and automated vehicle crash rates [8]. The 95% confidence intervals for the AV crash rate estimates are several times wider than those for human-driven vehicles, and for the two highest-severity crash levels, the AV confidence intervals fully contain the human-driven confidence intervals [8, fig. 3, p. 23]. There is simply insufficient observational evidence to conclude that AVs are statistically significantly safer than human-driven cars at conventional levels of statistical stringency. And, per [7], this evidence will remain insufficient for decades to come.

FIGURE 1. Crash rate estimates for human and AV system drivers (crash rate per million miles driven, by crash severity level; series: SHRP 2 overall, SHRP 2 PR, self-driving car, and age-adjusted variants). Bars give crash rate estimates with whiskers indicating 95% confidence intervals. Blue and orange bars are different classes of estimates for human drivers; green bars are estimates for AVs. Level 1 crash severity is the most severe and level 3 is the least severe. Note that, for levels 1 and 2, the confidence intervals for humans are entirely contained within the confidence intervals for AVs. This is due to limited data for AVs [8, Fig. 3, p. 23].

Alternatives to Observational Data
If we can't use observational data, then simulations, physical test systems, and other broadly "laboratory-based" or "experimental" methods seem to offer a way forward. For example, self-driving software might be run offline several orders of magnitude faster than real-time driving, and thereby generate billions of "virtual miles traveled" on a scale of years — or perhaps even days — rather than decades.
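The arithmetic behind that claim is straightforward. The speedup factor and the number of parallel simulator instances below are illustrative assumptions of ours, not figures from any actual AV program:

    # Virtual-mile throughput under assumed (hypothetical) parameters.
    simulated_speed_mph = 60      # road speed being simulated
    speedup = 1_000               # offline simulation vs. real time
    instances = 10_000            # simulator processes in parallel

    miles_per_hour = simulated_speed_mph * speedup * instances
    hours = 11e9 / miles_per_hour # the 11-billion-mile benchmark from [7]
    print(f"{hours:.0f} hours, i.e., under a day")   # ~18 hours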
An announcement by Tesla in September 2016 suggests another testing strategy. In May 2016, a Tesla driver was killed when his car, operating in "Autopilot" mode, failed to recognize a truck. In response, Tesla proposed a "fleet learning" strategy: a Tesla car's AV system will track its human drivers' behavior while the AV system is "off," comparing the human behavior to the behavior that it would take based on its sensor data. Pooling these data from all Tesla vehicles will enable the identification of locations and situations where the AV system tends to be inaccurate and needs improvement [9].

This fleet learning proposal could also be used for safety studies. For example, fleet learning data could be filtered for three kinds of scenarios: a crash occurred under AV system control; a crash occurred under human control; and a human reclaimed control from the AV system and immediately began rapidly braking or changing direction. The first kind of scenario gives us direct data on AV crash rates. The second and third scenarios let us compare human behavior with counterfactual AV system behavior.



In the third scenario, it may be reasonable to assume that the human acted to prevent a crash; these data can be examined to determine whether the AV system also anticipated the crash and whether, if the human hadn't intervened, the system also would have attempted to avoid the crash. Overall, this strategy allows the collection of AV-relevant data even when the AV system is not actually controlling the vehicle.
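As a concrete sketch of that filtering step, each fleet-learning record might be classified as follows. The record fields, the braking threshold, and the labels are all hypothetical, invented to illustrate the proposal rather than drawn from any actual fleet-learning pipeline:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DriveRecord:
        controller: str            # "av" or "human" at the time of the event
        crashed: bool
        human_took_over: bool      # human reclaimed control from the AV system
        takeover_decel_g: float = 0.0  # peak braking just after a takeover

    def classify(rec: DriveRecord) -> Optional[str]:
        """Sort a record into one of the three safety-relevant scenarios."""
        if rec.crashed and rec.controller == "av":
            return "crash_under_av"      # direct data on AV crash rates
        if rec.crashed and rec.controller == "human":
            return "crash_under_human"   # human baseline
        if rec.human_took_over and rec.takeover_decel_g > 0.5:
            return "evasive_takeover"    # human vs. counterfactual AV behavior
        return None                      # uninformative for this analysis

    records = [
        DriveRecord("av", crashed=True, human_took_over=False),
        DriveRecord("av", crashed=False, human_took_over=True,
                    takeover_decel_g=0.7),
    ]
    print([classify(r) for r in records])
    # ['crash_under_av', 'evasive_takeover']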
However, experiment- or simulation-based methods have their own limitations. Insofar as an experimental situation is different from real-world conditions, it's not obvious that we can extrapolate from the experiment to the real world. For instance, a comparison to counterfactual AV system behavior using fleet learning data could tell us whether the AV system would have attempted to avoid an anticipated crash, but it does not tell us whether, say, icy road conditions would have led to a crash anyway. A crash (or avoiding a crash) is the result of a complex interaction between the driver, the vehicle, and the environment. Substituting a human driver for an AI system in simulation can tell us what a different driver might have done differently, but not how the vehicle and environment would have responded differently.

Another approach, inspired by methods used in software validation and control theory, would be to develop formal, mathematical proofs that a given system is safe. For example, Claire Tomlin and her coauthors derive safety criteria for mathematical representations of mechanical systems [10] such as traveling distances between vehicles [11]. Tomlin's methods identify "reachable states" of the given system, then develop control laws — the rules that the AV system uses to determine how to accelerate or brake or steer, for example — that guarantee that the reachable states are a subset of the "safe" states. Consider a platoon of automated trucks — several automated trucks traveling, one immediately behind the other, to allow drafting for fuel efficiency. Tomlin's approach might consider AV system reaction times and cargo mass to calculate maximum deceleration rates for a following truck. These maximum deceleration rates could be used to determine the minimum distance between platoon trucks such that a following truck can safely stop if the lead truck abruptly stops. Importantly, the "can" here is guaranteed by a mathematical proof.
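To give a feel for the quantities involved, here is a toy constant-deceleration version of the platoon-spacing calculation. It is far cruder than the reachable-set computations in [10], [11], and every parameter value is made up for illustration:

    def min_gap_m(v_mps, reaction_s, lead_decel, follow_decel):
        """Worst-case gap (m) letting the follower stop without contact:
        the lead truck brakes at lead_decel while the follower reacts
        after reaction_s and then brakes at follow_decel (m/s^2, constant)."""
        reaction_travel = v_mps * reaction_s
        stopping_difference = (v_mps**2 / (2 * follow_decel)
                               - v_mps**2 / (2 * lead_decel))
        return reaction_travel + max(stopping_difference, 0.0)

    # 25 m/s (~56 mph), 0.2 s sensing/actuation delay; the lead truck
    # brakes at 6 m/s^2 but the heavily loaded follower manages only 4:
    print(f"{min_gap_m(25, 0.2, 6.0, 4.0):.1f} m")  # ~31 m

A genuine reachability analysis would replace these point estimates with bounds guaranteed over all admissible behaviors of the system, which is what licenses the word "can" above.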
However, these formal methods depend on mathematical assumptions, and generally tell us nothing about what happens when the assumptions are not satisfied. Again, we may not be able to extrapolate from a mathematical model to a real-world situation. To understand the challenges of extrapolation, let us consider a high-profile case where extrapolation has failed: the replication crisis.

Replication and Extrapolation
The replication crisis has caused a great deal of consternation across many fields of scientific research. Most prominently this has occurred in psychology, but also in neuroscience, econometrics, and cancer research [12]–[15]. In these fields, experimental research findings have been much more difficult to reproduce than expected, both methodologically — due to a lack of details about study or analysis methods, for example — and statistically — after re-running the experiment and crunching the numbers, the previously-reported effects often do not appear, or are much more ambiguous.

Replication — and replication failure — can be understood in terms of prediction and extrapolation. The data gathered in a given experiment directly tell us what happened only in that particular experimental run. Researchers routinely make an extrapolative prediction: that the result observed in this particular experimental run will also appear in other, similar experimental runs. A replication can then be understood as an attempt to check this prediction, that is, investigating whether or not the result observed in experiment A will indeed appear in experiment B. Suppose a psychologist observes in an experiment that a group of engineering students pick chocolate chip cookies much more often than gingersnaps. She attempts to replicate this finding, making the prediction that the same result will be observed in a new experiment. With this prediction, she extrapolates from the first experiment to the second.

Sometimes, purely because of chance, the prediction will turn out to be false. Suppose it just so happens that, on the second experiment, the psychologist got a sample of engineers who love gingersnaps. This kind of replication failure is not worrisome; it can be avoided with larger sample sizes and detected statistically with enough replications of the experiment.
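A short simulation makes the point. The preference probability, sample sizes, and significance test below are toy choices of ours, not drawn from any actual study:

    import random

    def miss_rate(n, p_true=0.7, trials=20_000):
        """How often a real preference (p_true > 0.5) fails to reach
        significance in a sample of n students (one-sided z-test
        against p = 0.5, alpha = 0.05). Monte Carlo estimate."""
        misses = 0
        for _ in range(trials):
            chocolate = sum(random.random() < p_true for _ in range(n))
            z = (chocolate - 0.5 * n) / (0.25 * n) ** 0.5
            if z < 1.645:        # not significant: a "failed" replication
                misses += 1
        return misses / trials

    random.seed(0)
    for n in (10, 25, 100):
        print(n, f"{miss_rate(n):.0%}")
    # Small samples miss a genuine effect far more often than large ones.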
By contrast, the replication crisis is that the prediction has turned out to be false much more frequently than expected. Results observed in one experiment run often do not appear in later ones, especially when the later ones have greater statistical power and are carefully designed to test the prediction (rather than discover new phenomena).



Norton [16] and Feest [17] offer independent but similar discussions of replication disputes, that is, cases in which there is a scientific dispute over whether a result has been replicated. Both Norton and Feest show that such disputes frequently turn on whether (or which) follow-up experiments are epistemologically relevant to the original experiment, that is, whether there is reason to think that both experiments produce evidence concerning the same results. If the follow-up experiment is relevant, and the phenomenon fails to appear, then it is at least uncertain whether or not the original result was correct; thus, proponents of the original experiment (often the researchers who conducted it) will often argue that the follow-up experiment had design errors that made it irrelevant to the original experiment. In terms of extrapolation, there is no reason to expect a phenomenon in situation A to extrapolate to situation B if the two situations are substantially different.

To return to the cookie example, suppose a rival psychologist attempts to replicate the original experiment using humanities students, rather than engineers, and chocolate and ginger candy rather than cookies. The rival psychologist finds no evidence that students prefer chocolate. The first psychologist might claim that this experiment was not relevant to her original version: different majors have different cultures, so students in different majors may not have the same preferences; and candy preferences are not necessarily the same as cookie preferences. So, the first psychologist concludes, the "replication" is not actually a replication of her original experiment at all, and her result still stands. But her rival maintains that these are not substantial differences, and concludes that the original result has been refuted.

To see this in a real-world dispute, consider the scientific response to [12], which reported that an attempt to replicate 100 high-profile psychological studies was largely unsuccessful: depending on exactly how successes were counted, as few as 36% of the studies were successfully replicated. In reply, [18] argued that "many of OSC's replication studies used procedures that differed from the original study's procedures in substantial ways." For example, "an original study that gave younger children the difficult task of locating targets on a large screen was replicated by giving older children the easier task of locating targets on a small screen." Different researchers might disagree about whether age makes a difference. One group of psychologists might argue that the experimental task is well within the abilities of children at both ages; these researchers would presumably regard the second experiment as substantially the same as the first, at least in terms of the research participants. Another group of psychologists might argue that older children will be much better at the task than younger children; these researchers would presumably regard the second experiment as substantially different, and therefore irrelevant as a replication attempt.

More generally, both [16] and [17] point out that debates over the relevance of follow-up experiments — whether or not the two situations are substantially similar or different — involve substantial controversies over how the result is supposed to work. What counts as a substantial difference depends on the causal story that's behind the purported result. In other words, the "design errors" alleged in follow-up experiments are often not methodological or formal statistical problems — such as contaminated experimental materials or incorrect data analysis — but instead depend on substantive views about the phenomenon in question, such as substantive disagreements about the age at which children are capable of performing certain kinds of perceptual tasks. The controversy may not be settled by simply appealing to "the evidence" because the two sides may not be able to agree on which experiments count as evidence. One side can always dismiss the other side's replication attempt by finding a subtle difference between the two experiments.

What does all this have to do with AVs? Above, we saw that it is practically impossible to demonstrate the safety of AVs based on real-world crash rates until long after they are in widespread use. This led to the suggestion of using experiments, simulations, or formal proofs. However, the challenges involved in extrapolating from one experiment to another also appear in extrapolating from experimental results to real-world driving conditions. Is a test track experiment, offline simulation, or "fleet learning" human-AV comparison analysis relevant to real-world driving conditions? This depends on substantive views about which factors are causally important in crashes.



Different social groups, with different interests in AV success or failure, might have different substantive views about these factors, and thus might disagree about different test methods or particular studies. Again, the controversy may not be settled by simply appealing to "the evidence" because different sides might disagree about what counts as evidence. To understand the political implications of this epistemological conclusion, let us turn now to the controversy over climate change.

Controversy and Policy
Based on work by historians and social scientists [19]–[21], the climate controversy can be understood as a scientific proxy for climate policy specifically and environmental policy more generally [22]. Climate science and policy are tightly linked in the way we think about climate issues: the fact of anthropogenic climate change is generally regarded as a sufficient, and indeed urgent, reason to adopt cap-and-trade policies, promote the development of sustainable energy sources, and generally reduce the use of fossil fuels. In [23], Howe calls this science communication-policy strategy "the forcing function of knowledge," and argues that it has not just failed, but indeed backfired. The tight link between science and policy gives opponents of climate policies a strong incentive to oppose the science as well. Denying the science knocks out the single largest pillar supporting the policy. It's therefore unsurprising that prominent climate skeptics have received substantial support from the fossil fuels industry [24], [25].

When it comes to AVs, my concern is that we are careening towards a similar scientific proxy dispute. As we saw above, U.S. DOT's regulatory framework barely even mentions the potential economic fallout from AVs, and instead focuses entirely on safety. Without complementary policies that would, for example, facilitate retraining and early retirement, professional drivers — especially through organizations such as the Teamsters union — have a strong economic incentive to delay or prevent the adoption of AVs. Then, insofar as federal and state regulators view AVs only in terms of safety, AV opponents will have strong incentives to criticize safety studies. The tight link between climate science and climate policy motivates criticisms of climate science; in a similar way, the safety argument creates a tight link between AV safety and rapid AV adoption; and so we might expect similar motivated criticisms of AV safety studies. And the safety studies, as we saw above, are unlikely to be completely immune to criticism.

At the same time, AV manufacturers have a strong economic incentive to promote the rapid adoption of AVs, and thus a strong incentive to minimize regulation. In a policy context where regulation depends only on safety, this gives AV manufacturers a strong incentive to manufacture certainty about AV safety studies, that is, to create the appearance of expert consensus and overwhelming scientific evidence. In other words, the safety argument's tight link between safety and policy leads us to expect AV manufacturers to fund expert panels of roboticists and mechanical engineers who will endorse their preferred safety studies and criticize research that indicates that AVs are less safe. The safety of AVs may become a scientific proxy for an economic conflict between professional drivers and robotics manufacturers who are trying to replace them.

At an extreme, AVs could be caught in an intractable proxy war, with one side producing ever "more and better" safety studies, the other side perpetually finding subtle "mistakes" that "invalidate" the studies, and the technology itself caught in regulatory limbo until it is simply abandoned or one side forces through a major regulatory change. Whatever the actual safety of AVs — and whatever one thinks about whether they should be widely adopted, and how quickly — a protracted proxy war is likely to consume substantial legal, regulatory, and scientific resources that could be more useful elsewhere.

At this point, it might be objected that there is a major disanalogy between climate change, on the one hand, and AV safety, on the other. There is a near consensus among climate scientists that climate change is happening, that it is caused primarily by anthropogenic greenhouse gas emissions, and that it is already having serious negative impacts on human societies and the natural environment [26]. While there are uncertainties about many important and technical details of climate change — such as the precise amount of warming expected from a given increase in atmospheric carbon dioxide concentration — there is essentially no serious scientific objection to the three claims made in the previous sentence. Today, "sophisticated" criticisms of climate science either reiterate arguments that have already been answered or exaggerate the importance of the detail-level technical uncertainties [19]. By contrast, as I discussed above, there is substantial, reasonable, scientific and engineering uncertainty surrounding the safety of AVs. Indeed, we are still in the process of developing the various candidate approaches that might be used to assess AV safety.

But this disanalogy between climate change and AVs actually supports my point. The case of climate change shows us that an overwhelming expert consensus is politically insufficient to drive policy. Even if — though this seems unlikely, short of manufacturing certainty — there were overwhelming expert agreement on the safety of AVs, sophisticated critics could still be put forward to manufacture the appearance of controversy in order to delay AV adoption.



The political controversy can't be settled by simply appealing to "what science tells us," even when science speaks with a single clear voice.

To be clear, I am not claiming here that AVs will necessarily become the kind of highly partisan issue that climate change is. Both Democrat and Republican policymakers have supported AV development and adoption. At the same time, there are populist currents in both parties that might be more favorable towards professional drivers. For example, in fall 2017, the U.S. House and Senate both worked on bills that would loosen regulations for automated cars (H.R. 3388 was passed by the House and sent to the Senate; S. 1885 was passed out of committee but has not been voted on by the full Senate as of early November 2017). These bills had substantial bipartisan support. Major lobbying support for these bills came from the "Self Driving Coalition for Safer Streets" — note the use of the safety argument in the name — which was established by Ford, Lyft, Volvo, and Waymo (the AV arm of Google) (http://selfdrivingcoalition.org/). However, automated trucks were excluded from these bills; according to some reports, this was because of opposition from organized labor [27].

Not all scientific-political disagreements are partisan issues in the U.S. But, at the same time, we should not rule out the possibility of AVs becoming a partisan issue over the next few decades. Recall that climate change was a bipartisan issue 30 years ago [28].

Preventing a Proxy War with Policy, Design, and Science
The worst-case scenario of a protracted proxy war over AV safety between manufacturers and professional drivers might be avoided with a proactive combination of policy, design, and science. First, policies developed to safely promote the adoption of AVs should have complementary policies to facilitate alternative training, retraining, and early retirement for professional drivers, smoothing the economic transition from human-driven to autonomous vehicles. By "complementary policies" I mean that while AVs might be directly regulated only in terms of safety, these regulations could be extremely stringent and costly unless and until retraining and early retirement policies are in place.

Professional drivers will have less reason to resist the adoption of AVs if they are not worried about their own economic security. Since AV systems are expected to be less expensive than human drivers — this is exactly why AVs pose a threat to professional drivers, after all — a specific transitional tax on AVs could be used to (partially) offset the cost of training and early retirement plans. While this is likely to slow the adoption of AVs, this can be seen as a feature rather than a bug: a more gradual transition gives drivers and potential drivers more time to plan their individual transitions, whether to another career or to retirement.

Second, for the sake of simplicity, this essay has generally assumed that AV technology development and adoption will be market-driven, perhaps with the pace moderated by policy and politics. However, policy also plays a role in how technology is designed and adopted [29], [30]. Specifically, there is a crucial difference between, on the one hand, robotic systems designed to supplement human capabilities in order to improve safety, and, on the other hand, robotic systems designed to be inexpensive replacements for expensive human labor that also happen to be safer than human drivers [31, p. 118]. While both approaches to AV system design might improve safety, the former approach would be less controversial. Policy should be used to encourage researchers, designers, and manufacturers to take the former approach rather than the latter one.

Finally, both safety researchers and AV systems designers should collaborate with professional drivers and their trade organizations, such as the Teamsters union, to design experimental studies that are broadly recognized as relevant. While this kind of ante hoc acceptance of a body of research cannot totally prevent post hoc criticism — a critic could claim that unforeseen developments rendered the study irrelevant — it does rhetorically constrain such criticism. Further, professional drivers are likely to have experience-based knowledge about road conditions, vehicle handling, and the causes of crashes that would complement the expertise of engineers, roboticists, and transportation regulators [32]. Including professional drivers in the process of developing safety studies could lead to studies that even technical experts would regard as more realistic, relevant, and informative.



Altogether, these strategies treat the transition from human drivers to AVs as one of mutual benefit rather than antagonism. Rather than market forces abruptly turning human drivers out into the cold, AV design and testing would build on the experience-based knowledge of professional drivers, and in turn drivers would receive transitional economic support, either to a new career or to retirement. Put more generally, better politics — developing policy strategies that recognize and attempt to reconcile the conflicting economic interests between stakeholder groups — could help prevent the development of a proxy war over scientific research.

Acknowledgment
Views stated in this paper are those of the author, and do not necessarily reflect the views and policies of any other organization or entity. For feedback on earlier versions of this paper, thanks to Lynne Parker and participants at the 2017 Values in Science, Technology, and Medicine Conference at UT Dallas.

Author Information
Daniel J. Hicks is an AAAS Science and Technology Policy Fellow, hosted at the National Science Foundation, Washington, DC. Email: hicks.daniel.j@gmail.com.

References
[1] "Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles," SAE International, J3016_201609, Sep. 2016.
[2] National Highway Transportation Safety Administration, "Accelerating the next revolution in roadway safety," Sep. 2016. [Online]. Available: http://www.nhtsa.gov/nhtsa/av/index.html. [Accessed: 13-Oct-2016].
[3] N. Kitroeff, "Robots could replace 1.7 million American truckers in the next decade," Los Angeles Times, Sep. 2016.
[4] "Proudly brewed. Self-driven," 25-Oct-2016. [Online]. Available: https://blog.ot.to/proudly-brewed-self-driven-95268c520ba4. [Accessed: 13-Nov-2017].
[5] C. B. Frey and M. Osborne, "The future of employment: How susceptible are jobs to computerisation?" Oxford Martin Programme on the Impacts of Future Technology, 2013.
[6] M. Arntz, T. Gregory, and U. Zierahn, "The risk of automation for jobs in OECD countries," OECD Social, Employment, and Migration Working Papers, 2016.
[7] N. Kalra and S. Paddock, "Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?" RAND Corporation, 2016.
[8] M. Blanco, J. Atwood, S. Russell, T. Trimble, J. McClafferty, and M. Perez, "Automated vehicle crash rate comparison using naturalistic data," Virginia Tech Transportation Institute, Jan. 2016.
[9] W. Oremus, "How Tesla fixed a deadly flaw in its autopilot," Slate, Sep. 2016.
[10] C. Tomlin, J. Lygeros, and S. S. Sastry, "A game theoretic approach to controller design for hybrid systems," Proc. IEEE, vol. 88, no. 7, pp. 949–970, Jul. 2000.
[11] A. Alam, A. Gattami, K. H. Johansson, and C. J. Tomlin, "Guaranteeing safety for heavy duty vehicle platooning: Safe set computations and experimental evaluations," Control Engineering Practice, vol. 24, pp. 33–41, Mar. 2014.
[12] Open Science Collaboration, "Estimating the reproducibility of psychological science," Science, vol. 349, no. 6251, p. aac4716, Aug. 2015.
[13] K. S. Button, "Power failure: Why small sample size undermines the reliability of neuroscience," Nature Reviews Neuroscience, vol. 14, no. 5, pp. 365–376, May 2013.
[14] A. Chang and P. Li, "Is economics research replicable? Sixty published papers from thirteen journals say 'usually not'," Board of Governors of the Federal Reserve System, Finance and Economics Discussion Series 2015-083, Washington, DC, 2015.
[15] E. Yong, "How reliable are cancer studies?" The Atlantic, 18-Jan-2017. [Online]. Available: https://www.theatlantic.com/science/archive/2017/01/what-proportion-of-cancer-studies-are-reliable/513485/. [Accessed: 05-Jun-2017].
[16] J. Norton, "Replicability of experiment," Theoria, vol. 30, no. 2, pp. 229–248, Jun. 2015.
[17] U. Feest, "The experimenters' regress reconsidered: Replication, tacit knowledge, and the dynamics of knowledge generation," Studies in History and Philosophy of Science, vol. 58, pp. 34–45, Aug. 2016.
[18] D. Gilbert, G. King, S. Pettigrew, and T. Wilson, "Comment on 'Estimating the reproducibility of psychological science'," Science, vol. 351, no. 6277, p. 1037, Mar. 2016.
[19] N. Oreskes and E. M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York, NY: Bloomsbury Press, 2010.
[20] D. Sarewitz, "Curing climate backlash," Nature, vol. 464, no. 4, p. 28, Mar. 4, 2010.
[21] A. M. McCright, S. T. Marquart-Pyatt, R. L. Shwom, S. R. Brechin, and S. Allen, "Ideology, capitalism, and climate: Explaining public views about climate change in the United States," Energy Research & Social Science, vol. 21, pp. 180–189, Nov. 2016.
[22] D. Hicks, "Scientific controversies as proxy politics," Issues in Science and Technology, vol. 33, no. 2, Winter 2017.
[23] J. Howe, Behind the Curve: Science and the Politics of Global Warming. Seattle, WA: Univ. of Washington Press, 2014.
[24] J. Farrell, "Corporate funding and ideological polarization about climate change," Proc. National Academy of Sciences, vol. 113, no. 1, pp. 92–97, Nov. 2015.
[25] C. Boussalis and T. Coan, "Text-mining the signals of climate change doubt," Global Environmental Change, vol. 36, pp. 89–100, Jan. 2016.
[26] G. Supran and N. Oreskes, "Assessing ExxonMobil's climate change communications (1977–2014)," Environmental Research Letters, vol. 12, no. 8, Aug. 2017.
[27] E. Graham, "Automated trucks left out of self-driving car bill but regulatory relief still possible," Morning Consult, 02-Oct-2017. [Online]. Available: https://morningconsult.com/2017/10/02/senate-self-driving-car-legislation-does-not-address-automated-trucks/. [Accessed: 13-Nov-2017].
[28] J. Worland, "Climate change used to be a bipartisan issue. Here's what changed," Time, 27-Jul-2017. [Online]. Available: http://www.time.com/4874888/climate-change-politics-history/. [Accessed: 13-Nov-2017].
[29] T. Pinch and W. Bijker, "The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other," Social Studies of Science, vol. 14, no. 3, pp. 399–441, 1984.
[30] S. Jasanoff, "The idiom of co-production," in States of Knowledge, S. Jasanoff, Ed. London; New York: Routledge, 2004, pp. 1–12.
[31] S. Vallor, "Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character," Philosophy and Technology, vol. 28, no. 1, pp. 107–124, Feb. 2014.
[32] B. Wynne, "Sheepfarming after Chernobyl: A case study in communicating scientific information," Environment: Science and Policy for Sustainable Development, vol. 31, no. 2, pp. 10–39, 1989.



Socio-Economic and Legal Impact of Autonomous Robotics and AI Entities
The RAiLE© Project

Morgan M. Broman and Pamela Finckenberg-Broman

Digital Object Identifier 10.1109/MTS.2018.2795120
Date of publication: 2 March 2018


"The ability to speak does not make you intelligent. Now get out of here."
— Qui-Gon Jinn, from Star Wars: Episode I — The Phantom Menace (1999)

Singularity, or as some may call it — the specific point in time when advanced technological development, for instance artificial intelligence (AI), leads to the creation of machines that are smarter than human beings — will arrive roughly around the year 2045 according to author, computer scientist, inventor, and futurist Ray Kurzweil [1]. The timeline presented by Kurzweil can be argued, but not the fact that we are dealing with rapid advances in the fields of robotics and AI.

This article, as part of our early research, is a first step towards establishing an internationally viable, neutral, consistent legal nomenclature for a specific type of technology and its use. Due to the complexity of interaction between different legal systems and cultures, our early research will focus on soft law. Soft law includes non-enforceable guidelines, policy declarations, or codes of conduct that set standards, often established by treaty. Many of these are Free Trade Agreements, where a lack of technical or legal definitions is a barrier to international trade and other global interaction. The purpose behind this article is not to answer all the questions in this matter, but to initiate a debate about the long-term implications of the current and future merging of robotics and Artificial Intelligence (AI) into an integrated, autonomous entity.¹ It is also to address the legal aspects of daily, continuous interaction between humans and this new entity.

¹ This is not necessarily limited to a 1-on-1 relationship between AI and robotics, but could apply to 1-M, M-1, or limited M-M. Some form of limitation on the extent of the new legal entity will have to be defined, or any legislation may become applicable to the whole network of units connected.

We propose the basic questions underpinning this article to be part of a platform for further discussions on the future legal aspects of human-Robotics/AI Legal Entity (RAiLE©) interaction in key areas of our lives. Initially we look into aspects affecting the workplace and family.

Today we see increased coverage in the global press about the fascinating developments in the areas of robotics and AI, not least in the areas of autonomous vehicles and military applications. An important component of the concern presented is the very important, and to some quite frightening, situation when robotics and AI are combined into one entity. Most of these articles, however, examine specific academic, ethical, technical, economic, political, social, or legal impacts, focusing only on one or two areas of perceived technological disruption [2] brought into human society through advanced development of robotics, with or without AI. We strongly argue that it is, in particular, the more or less autonomous, combined robotics/AI entity that will cause legal issues in the future.

But what about legal disruption when legislation is not yet enacted to address the societal impact of new technology? Facing this issue, we combine our global business and legal experience with an experience-based understanding of information technology (IT) development, and see a growing need to look at current legislation governing the future interaction between humans and both current and future autonomous robotics/AI entities. We see today a lack of consistent legal definitions and related legislation to adequately handle these future entities — entities that are not created to fulfil only one specific role but are capable of multiple different, more integrated, roles in our lives and society on a daily basis.

Our question is: "How can we create the necessary definitions and parameters for a future, global, legislative framework on human-RAiLE interaction?"

Helpful Definitions
We begin by defining some important and relevant terminology.
Human-Robot Interaction (HRI): This is the relationship between human and robot (machine). The term "robot" is a moving target as a definition, just like most terminology in other fast-paced technologies.



While at times the word "robot" lacks the necessary inclusion of a wider span of mechanical and/or hybrid forms of artificial life that humans may interact with, it provides a solid baseline. In this article, HRI encompasses the human being's attitude and behavior towards all types of artificial, technology-based lifeforms, and their attributes, such as their physiology, integrated technology, and forms of interaction [3].

Embodiment: This refers to experiences arising from a living body based on interactions with the environment. It also refers to autonomous agents' understanding of these experiences and activities, based on their sensory input [4]. For our purpose with this article, we also look at this as a description of the personification of the technical construct forming the basis for the autonomous "Robotics/AI Legal Entity" (RAiLE).

Embodied Interaction: This is the concept of the experience of or about an object. It refers to the natural way of involving the user's physical body in interaction with any form of technology, for instance the use of gestures, expressions, and/or verbal signals [5].

Socially Interactive Robots: We define this as an artificial lifeform or robot where social interaction is a key component of and reason for its existence [6].

The Role of Human-Robot Interaction
Research into HRI often centers on what we consider to be a robot and how we as humans can make this artificial embodiment of technology function to serve or assist us. This conception correlates closely to our human perspective on day-to-day human-computer interaction (HCI) as a basis for the development of graphical user interfaces (GUI). We look for efficiency, depending on the envisaged user area, but also for acceptability of the technological embodiment from a human point of view. The baseline here is that we as humans emphasize development of technological embodiments that are acceptable to our human senses, and that also meet and/or fulfil the needs of the humans they interact with [3].

In studies on HRI there are multidisciplinary contributions from the areas of HCI, AI, robotics, understanding of languages, and the social sciences. (See also [7].) A cross-disciplinary approach that goes beyond the starting point of computer science and engineering via HCI development, and into areas of more complex interaction between the human and the robot as a purely mechanical entity, largely due to our expanding areas of utilization, leads us to "social robotics" and HRI. With the additional integration into this area of HRI of the complex cognitive sciences (i.e., the scientific study of the mind and its processes), we move into the area of what has been termed socially interactive robots [6], [8].

Socially Interactive Robots
Social interaction is a cornerstone for the development of human society, and as we can see from recent political upheavals it can be a major driving force behind changes in our society and in our daily social life. With the development of the Cloud and new interactive, technological tools used for encountering, communicating, and collaborating with others, our social interaction is being transformed [39]. This creates a different relationship to, and a certain dependency on, the technology used, on a daily basis, for instant social interaction via, for example, MySpace, Facebook, Skype, and Twitter, using connected, online, portable devices such as mobile phones, tablets, etc. The individual person's standard for and expertise in social interaction, particularly within younger generations, is morphing from traditional, physical face-to-face contact into remote "face-to-face" contact using technology [40].

As part of this transition our dependence on social interaction becomes related to the tool used as much as to the individual. This technological dependency for social interaction causes us to develop a special, and at times very emotional, relationship with our technological devices [9]. An interesting example of these new emotional bonds based on HRI: in 2015, a Sony-produced "Aibo" robot dog was the recipient of funeral rites at a temple near Tokyo, Japan [10].

Some of the key aspects of this bond are based on traditional HCI parameters for technology interaction, such as ways to make the interaction effortless (e.g., a touch screen), fun (emojis), and enjoyable (e.g., interacting with anyone you choose at any time of your choice). All this leads to a positive experience and a feeling of accomplishment via the use of technology. These parameters, established to create a seemingly effortless opportunity for social-related interaction between a human and technology, are a key aspect of current and future interaction of humans with social robots [4].

What, then, is our definition of a socially interactive robot? The most basic description is the one provided by Fong et al., stating that these are robots whose prime function and role is social interaction [11]. The key is that these robots use human-based communication modes like speech and body language and even facial expressions [12], thereby creating the capability and opportunity for an effortless mode of HRI.

Defining the Robotics/AI Legal Entity (RAiLE)
The very rapid development of AI and related areas of research such as cognitive science has major implications for our perceptions of current and future HRI [4]. A key component to achieve the purpose of our continued research, as well as of this article, is for us to initially establish a relevant definition for some form of feasible, consistent, and usable legal entity. Our current solution to this is the autonomous "Robotics/AI Legal Entity" — the RAiLE.



This definition is necessary for us to establish a legal position for the more complex forms of future HRI that we are looking at. We also have to establish a line between an advanced artificial, non-biological lifeform as a good [13], [14] (something to be owned) and a legal entity with rights and obligations of its own, in particular from the perspective of a framework for future legislation around the world.

A key aspect of the new challenges related to the RAiLE, and to social interaction when compared to other interactive products, is the way they are embodied. What will the effect of the user experience be when it is fundamentally based on emotions [4]? The nature of embodiment for socially interactive robots raises the issue of to what degree the HRI can become "natural." We already have examples of situations where humans are looking to take the next step in their interaction with a rudimentary form of a RAiLE [15].

There is already a "robosexual" woman who has designed herself a robot, the "Inmmovator," to be her partner and future husband. After robot-human marriage is legalized she plans to marry her technological creation [16].

So, if the RAiLE constitutes an emotionally attached social interactor, a "natural" partner in a human's life with an evolved AI reaching, or close to, the point of Singularity, i.e., a computer with consciousness or awareness — what exactly is the RAiLE from a legal standpoint? To ascertain if it is possible for a future RAiLE to exist as a legal entity we should establish, for the purpose of our future research and this article, a general definition of what a legal entity is:

    A lawful or legally standing association, corporation, partnership, proprietorship, trust, or individual. Has legal capacity to 1) enter into agreements or contracts, 2) assume obligations, 3) incur and pay debts, 4) sue and be sued in its own right, and 5) be accountable for illegal activities [17].

While this establishes a baseline for our use of a definition of legal entity, it is important to know that this is not an on/off switch. There are many different clusters of rights and obligations pertaining to different forms of legal entities. There are, however, also other forms of legal entities to consider that do not fit within the parameters of this definition.

Does a Legal Entity Have to be Human?
No, we have a series of forms and structures that are defined as legal entities with their own legal personality using the above definition. Corporations and different forms of associations are examples of non-human legal entities with their own cluster of rights and obligations, but there are also legal rights defined, for instance, for animals and birds.

Can Current Legislation be Applied or Adapted to Provide for the RAiLE?
Yes, in most countries legislation is constantly subject to change, including changes to who and/or what legal entity falls within the boundaries of the relevant legal cluster. A typical analogue, used as an example related to the current discussion, is when women were allowed the right to vote. All it took was a redefinition of to whom the current cluster of legislation on voting rights was applicable. Another key part of relevant legislation to take into consideration is the relatively common adjustment of the rights and obligations of an individual when the person reaches the age of majority. This change in legal rights and obligations is based on the concept of maturity as a measured and age-based capacity to make choices regarding a cluster of different obligations. It also carries with it some implied rights, such as a right to vote and/or buy alcohol, and in some places to drive a car on one's own.

This all works under the legal presumption that the legal entity is competent to make the necessary decision(s) within the boundaries of the relevant legal cluster. The same applicable clusters of laws also often contain provisions for removing rights to make decisions, by taking away the "autonomy" or self-determination of the legal entity, usually via a court order. An example of this is the possibility under the law for authorities to withdraw an individual's driver's licence, which in practice is a way to remove a legal right (driving a vehicle) from the legal entity (the driver).

In comparison with the legal decision on women's right to vote, the decision on the legal competency of the RAiLE would have to rely on the current legislators at the time of such a decision. The basis for such a decision would have to be the legal definitions utilized, put in relationship to and compared with other similar legal entities.

Human-Robot Interaction Hierarchy
The human perception and understanding of embodied interaction will be key to how HRI will develop with increasingly advanced RAiLE. For future legislation, we see it as vital to discuss the RAiLE in three different scenarios — fundamentally different, but each having a vast impact on the life of the individual human as well as on organizations interacting and associating with the RAiLE. To assist in sorting out and evaluating the legal implications of how this may occur we have developed three scenarios for how to define the RAiLE:



■ Scenario 1 (S1) discusses the RAiLE as an object, explicitly, a good.
■ Scenario 2 (S2) considers the RAiLE as a self-standing non-human legal entity.
■ Scenario 3 (S3) deliberates on the RAiLE either as a subject with limited rights, a "Third Existence" [18] (referring to [19], Japanese text only), or as an object with enhanced legal obligations attached, an "electronic person" [20], [21], the term used by the EU.
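Purely as an illustration of how such clusters of rights and obligations could be made explicit, the three scenarios might be sketched as a small data structure. Every capacity assignment below is our hypothetical reading of the scenarios, not settled law in any jurisdiction:

    from dataclasses import dataclass
    from enum import Enum

    class Scenario(Enum):
        S1 = "object/good"
        S2 = "self-standing non-human legal entity"
        S3 = "third existence / electronic person"

    @dataclass(frozen=True)
    class LegalCluster:
        """One cluster of capacities a legislator might attach to a RAiLE."""
        may_contract: bool
        may_marry_adopt_inherit: bool
        liable_for_own_acts: bool
        needs_guardian: bool

    CLUSTERS = {
        Scenario.S1: LegalCluster(False, False, False, True),
        Scenario.S2: LegalCluster(True, True, True, False),
        Scenario.S3: LegalCluster(False, False, True, True),
    }

    print(CLUSTERS[Scenario.S2])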
Cultural differences manifest in different treatment of the RAiLE. In particular, Scenario 3 (S3) can manifest substantial variations depending on what is of key importance in the eye of the beholder. For instance, in Europe, "electronic persons" receive no legal protection, only liabilities. Asian nations appear more RAiLE-friendly; the South Korean draft of Robot Law and the Chinese-Japanese "Third Existence" proposal incorporate rights for robots [22].

Regarding families, formalized relationships today must be between natural persons. Future social reality may differ, with the RAiLE being an integrated family member. In the family context only (S2) and (S3) are deliberated on distinctively, as (S1) as a legal entity, taken from the workplace, applies analogically for families, with certain exceptions. In (S1) and (S3) above, unlike in (S2), the human "outranks" the RAiLE in the legal hierarchy. In (S2), more formalized human-RAiLE relationships are a possibility; they may marry, adopt, inherit, etc.

The level and scope of subordination depends on the cultural models of legislators in (S3). For example, a "Third Existence" defined RAiLE would have a higher rank in the hierarchy than one defined as an "electronic person." The entity's legal capacity also varies in accordance with the chosen scenario. Both the (S1) and (S3) RAiLE can, analogically, receive legal protection like animals. Hence, they may inherit property, be subject to taxation, etc. Under the same two scenarios, lacking legal capacity, the RAiLE would need a legal guardian with a power of attorney bestowed by the donor to administer the property.

Today — Robotics/AI as "Servant"
RAiLE are per se the servants of humanity unless they achieve singularity and potentially full (S2) rights and obligations. In (S2), the RAiLE chooses whether to be a servant or not. For (S1) and (S3) the owner of the RAiLE decides its tasks, for example as a homemaker or breadwinner.

Tomorrow — Robotics/AI as Equal Partner
Only the (S2) RAiLE is to a large extent of equal legal standing to a human as a partner. Nancy Fraser's Universal Caregiver Model [23] could apply analogically, meaning that the RAiLE and human could freely vary which role in the relationship they will assume. If this situation occurs, will the RAiLE be covered by discrimination legislation?

Which Social Aspects of HRI will Impact Legal Change?
"Social reality is lived social relations" [24]. Thus, as our relations with the RAiLE evolve, so does the social reality we live in. We have identified two specific areas of future social interaction between "man and machine" where we see that legal issues will occur sooner rather than later:
1) Workplace.
2) Family unit.
In the workplace, we are already aware of the issue of automatization taking over jobs, but there are also areas and aspects around future situations, including effects on the family unit, that should be seriously considered. The current and future legal status of the RAiLE affects the legal parameters of social interaction. Civil law and common law today (both private and public sphere) cover legal subjects and objects. Unlike objects, subjects have rights and obligations. In the workplace, the RAiLE could be:
■ Co-worker — How can the definition of the worker's obligations/rights under international labor law apply to the RAiLE?
■ Workforce solutions — How can legal obligations/rights be maintained when the RAiLE is on temporary employment?
■ Boss — What obligations/rights apply when the RAiLE runs the workplace with humans and other types of co-workers (see above) under them?

Workplace
The workplace is the place where the "job" is done. This article adopts a broad meaning for the term, covering everything that affects people at work, i.e., both the physical geographical location, the physical attributes of the workplace (e.g., the quality of air, cognitive systems, machines, and robots) [27], and the workplace relationships and conditions of employment. The physical and mental aspects of the workplace are highly relevant for situations of a hybrid workforce [27], utilizing both RAiLE and human workers. For the workforce, a workplace populated purely by RAiLE is an intellectual black hole.

In their definition of labor standards, the International Labour Organization (ILO), a specialized agency of the United Nations (UN), states that for the global economy to benefit all, an international framework for labor standards is essential [25]. These standards are a response to the new challenges faced by workers and employers in the globalized economy [25]. Their applicability to the RAiLE is currently limited for our purpose, as they clarify that the rights in their "Declaration on Fundamental Principles and Rights at Work" [26] apply to all people in all States [25].

This implies that the ILO legislative framework is only concerned with humans. But as the RAiLE continues to develop, and with the high potential for near-future changes in the legal status of the RAiLE, this is highly likely to change drastically.



develop, and with the high potential for near-future changes in the legal status of the RAiLE, this is highly likely to change drastically. Legal dilemmas relating to the workplace could potentially be dealt with by using the definition presented above in all scenarios presented. To be able to establish a legal framework forming the basis for the workplace, in our future research as well as for this article, we must clarify some of the terminology used.

The Worker
From a global perspective, the International Labor Organization (ILO), established in 1919, is the primary agency for international working standards aimed at the eradication of labor conditions involving "injustice, hardship and privation." The goal of these internationally agreed-upon standards is the establishment of a minimum protection from inhumane labor practices. It covers workers' rights, job security, and employment terms [25]. This applies to, for instance:

■ hiring and firing,
■ rights to negotiate (contracts),
■ wages and vacations.

In international treaties, there is no distinct definition of "worker." As an example, the Organization for Economic Cooperation and Development (OECD) uses the ILO's definition as a basis. The term used is employee, as in "Employees are all those workers who hold the type of job defined as paid employment jobs" [30]. All types of jobs, including employees, the self-employed, contributing family workers, and those of non-classifiable status, are covered by the International Classification of Status in Employment 1993 (ICSE-93) [25].

These two treaties are, however, only binding on their signatories in a limited fashion, based on their national interpretation. The European Union (EU) treaties define "worker" via caselaw as "…persons who pursue or are desirous of pursuing an economic activity" [41]. Unlike other international treaties, this makes the definition legally binding for all Member States of the EU.

The practical application of a robot tax, proposed by Bill Gates [31] and the European Parliament [21], to cover for jobs lost, could be partially fitted under the "worker" category. Relevant legislation on liability for damages committed by the workforce at the place of work, which already exists under workplace law, could also be applied.

The ongoing discussion on robot and/or AI as a "worker" is often misleading, as in current literature on the subject it is most often described as:

■ "goods" owned by the companies they work for/in, and
■ sophisticated machines.

This provides a very black and white picture, compared to a potential future RAiLE reality. Focusing on this closed setting, where owners may "treat" their goods any way they wish, distracts from the urgency of creating a globally acceptable set of legal definitions and a global legal framework for "robot workers." Our suggestion is that this, preferably, should become an ongoing cooperative effort between international legal and engineering scholars, representatives from the business and military communities, and political representatives.

This legislative framework concerns, particularly, those legal entities that are more than just a basic mechanical machine, but can and will interact in a more advanced fashion with their human counterparts. Corporate capitalization on the RAiLE, as a good for production of other goods and services, will likely be through several different business models. For companies engaged in financing, leasing, and renting, or hiring out resources, it already is, and will for the time being continue to be, business as usual, merely tweaking their product range.²

Individually, facing an increasing loss of wage opportunities, i.e., jobs, to machines, humans are bound to attempt alternative sources of income. Think tanks globally offer STEM jobs and "still unidentified" means as alternatives [28], [29]. Logically, individuals would attempt to extract revenue from personal assets, like a RAiLE bought or created by the individual. In the future, a RAiLE defined as a "worker," just like a human, may be "employed" on contracts of indeterminate length or on a temporary basis. The RAiLE, at this stage, has the potential to become either self-employed or to work for staffing companies, just like a human.

For (S1), RAiLE would logically be covered by workplace laws, international and national, excluding labor law. Thus, any liabilities would be sorted accordingly. Workplace law, in correlation with other related legislation depending on the situation, could be easily adapted to cover several issues arising from workplaces with a pure RAiLE and/or hybrid workforce [27].

In (S2), when the RAiLE fits the classification of a worker, the full spectrum of international and national workplace laws applies, in line with their human counterparts.

For (S3), it depends on the model chosen. "Third Existence" may include limited application of international and national labor law, or perhaps full rights if deemed a proxy by the law of agency, while "electronic persons" fall under (S1) with boosted liabilities, for instance extra insurance and a robot tax for the owners and/or employers of robots.

² One of the pioneers in the area of leasing is Japan Robot Leasing Co. Ltd., founded in April 1980 [42]. For robot rental and leasing see, for example, U.S.-based SMP Robotics Systems Corp. at http://smprobotics.com/robotics_business/robots-rental-and-leasing/ and RobotLAB Inc., which leases the educational program Engage! K12 and sells/leases educational robots connected to the program; the company also offers financing for their products at http://www.robotlab.com/leasing-nao-robot. In Singapore, Okagi — Robotics SG are advertising their service robots for lease. Susan, Nance, Lisa, and Judy are presented as an effective way to solve manpower shortage and rising costs.



A future legal framework should clarify if the RAiLE is part of the workplace as a good (S1), rather than a worker (S2) or (S3).

Wages
"Most people work in order to earn money" [25]. The remuneration for the work done is usually set according to the type of contract and on an hourly, daily, monthly, or piecework basis, relative to the skills and attributes of the individual relevant to the performance of the job.

In (S1), work done by RAiLE, unless owned by the employer, would be remunerated to its owner, or to the intermediator holding a contract allowing her/him to collect this revenue. Inherently, other legally related rights and obligations covering RAiLE follow, as they will affect, both directly and indirectly, the rights and obligations of the owners/employers of the RAiLE, e.g., directly by damage of property. This will lead to a reduction of the RAiLE's role to simply performing its tasks, which has an indirect effect. A remuneration model from leasing or rental contracts could be an appropriate method.

Under (S2), with the RAiLE being treated as an individual legal entity, the same remuneration standards as for a human would be appropriate. Levies related to wages would also apply (taxes, social costs, etc.). For both of the (S3) models, the "Third Existence" [18] (referring to [19], Japanese text only) and objects with enhanced legal obligations attached ("electronic persons" [20], [21], the term used by the EU), the situation equals (S1), alternatively combined with the law of agency, with the RAiLE acting as a proxy.

Worker's Rights and Obligations
In the absence of employment, worker's rights and obligations, for (S1), where the RAiLE is leased, hired, or rented, are replaced by the terms of the relevant contract and governed by laws applicable to these types of commercial transactions and related liabilities. In contrast, in (S2) the same rights and obligations as for a human should apply, as the RAiLE has integrated with the society and the welfare regime. In (S3), rights and obligations for "Third Existence" RAiLE may apply restrictively, whereas "electronic persons" only have obligations.

The Boss
In the workplace there is also the future possibility that current Business Intelligence (BI) systems, combined with robotics and utilizing machine learning (ML), will engage in different aspects of the corporate decision-making processes. This may include becoming corporate managers at different levels in the organization (see [43]). This area will be included in later research on the legal implications of the RAiLE in the workplace.

The Family Unit
In the "Family unit" we look at ways in which humans and Robot/AI may enter different kinds of relationships. We also look at gender effects under the law, depending on the definition of a Robot/AI's role, based on "Wollstonecraft's dilemma," built on two of three policy models for gender equality.³ Note the "Universal Breadwinner," "Homemaker," and/or "Universal Caregiver" models, based on Nancy Fraser's model on welfare regimes [23]. Nancy Fraser's Universal Caregiver model [45] breaks the gendered, hierarchical division between "production" and "reproduction" by rejecting both traditional models. The idea is to encourage men and women to both share in paid and unpaid work.

The "Universal Breadwinner"
The Universal Breadwinner model, whereby conditional income support is tied to paid work [23], focuses on masculine life-patterns, requiring women to abide by men's standards to be considered equal. Accordingly, only work outside of the home is recognized as productive, inherently generating social recognition [46]. The same would apply for RAiLE. The RAiLE would be an effective breadwinner, as it can outperform humans in stamina and strength. It works faster than a human and would need no breaks. Income security being conditional on the capacity to earn will likely raise several questions: how much should a RAiLE be remunerated, and what social benefits, including pension, should the RAiLE, or directly or indirectly its family, receive?

An important factor would be the amount of time the RAiLE spends at a workplace, i.e., being productive, versus participating in family life.

The "Homemaker"
The "homemaker" is typically a woman in the traditional role of a caregiver, enabling the other partner to work outside the home. Homemakers tend to be in a subordinate relationship,⁴ exposing them to exploitation and possible violence (see, e.g., http://ec.europa.eu/justice/gender-equality/gender-violence/index_en.htm). Therein lies the assumption that all income is evenly distributed, willingly and unconditionally, by the breadwinner to all family members [48]–[50].

Provision of an alternative income security for homemakers is built on the caregiver parity model [23]. A caregiver parity model recognizes a homemaker's wage [34], aiming to recognize and remunerate currently unpaid caring work done at home by women/RAiLE. The notion was developed in 1972 by the International Feminist Collective, which included Selma James, Mariarosa Dalla Costa, Brigitte Galtier, and Silvia Federici.

³ See [44, p. 29] for "Wollstonecraft's dilemma," dating to the late 18th century and the advocacy of women's rights by Mary Wollstonecraft.
⁴ See Goodin's four conditions for being highly vulnerable [47].



The RAiLE, not being organically reproductive in and of itself, is not inclined towards caring unless it is programmed to be so. Yet the RAiLE easily fits the concept of a homemaker. There are already plenty of robotic household machines, e.g., Roombas, robotic lawn mowers, and express clothing care systems like the Swash. The RAiLE is only the next logical, more sophisticated variation on the same theme. Like humans, the RAiLE also has economic (and perhaps social) needs. A human homemaker's vulnerability might even be enhanced where the homemaker is a RAiLE. The need for maintenance and spare parts, for instance, an analogue of humans' need for medical service, could lead to a potential need for legal protection from violence and (exploitation) in human-robot subordinate relationships. (On subordinate relationships, see Goodin's four conditions for high vulnerability [48]. See also [51], [52].)

The Universal Caregiver
RAiLE, being produced rather than "organically reproduced," are inherently outside of the binary hierarchy of a nuclear family. However, the RAiLE is gender neutral in and of itself: its appearance is only a layer, and its programming, if gendered, is merely superficially coded according to "expected responses." It has no dilemma in varying between both roles. We already have robots capable of doing multiple chores, like the Atlas (developed by Boston Dynamics and designed and operated by the Institute for Human & Machine Cognition (IHMC), https://www.ihmc.us/research/biologically-inspired-robots/) and the Sanbot (a multipurpose service robot developed by Qihan Technology Co. Ltd., China, http://en.sanbot.com/index.html).

Thus, the RAiLE is well suited to the Universal Caregiver model, especially in comparison to a human, as the RAiLE's energy levels are not affected by the amount of work and housework it has to perform the way a human's would be. The RAiLE does not need to rest, except perhaps some "time off" for recharging.

RAiLE as Part of the Family Unit
The RAiLE as part of a family unit raises new types of legal challenges, depending on the role the RAiLE has in the family. As long as the RAiLE is considered a good, relevant laws already exist. The framework is the same no matter what type of good is dealt with. In situations where the good is a living being, protection applicable to the good itself, or the treatment of it, may apply, e.g., under animal welfare laws, etc. This would protect the RAiLE from unnecessary cruelty by other family members, both deliberate and negligent. The standard of legal protection will vary from country to country, from basically non-existent to a high level of protection. In countries like Australia, the maximum penalties include fines and jail time of up to two years (Animal Welfare Act 1992, ACT, Australia, and Animal Care and Protection Act 2001, QLD, Australia). In the EU, the Lisbon Treaty recognizes animals as sentient beings (Article 13 of Title II TFEU). This means there is international legislation covering 28 countries for minimum standards of animal welfare (European Commission Animal Welfare, https://ec.europa.eu/food/animals/welfare_en). EU animal welfare policies are even exported to non-EU countries [53]. Notably, under these rules, while animals have rights, they do not have obligations (see the European Convention for the Protection of Animals kept for Farming Purposes).

But what if the RAiLE is more than that? Formalized relationships, by way of (S2), isolated from ethical considerations, between RAiLE and a human would be dealt with on an equal legal basis. A complex situation arises from (S3). For a "Third Existence" RAiLE, it would depend on the scope of the civil rights given by law. For "electronic persons," the RAiLE equals goods in a family context.

Drawing the line for when the RAiLE crosses over from (S1) to (S2) might be challenging. It could be argued that once the RAiLE is individually subject to obligations, it has surpassed being categorized as goods. That is to say that if the RAiLE is deemed to have the capacity to judge the consequences of its own actions, it has developed a higher level of sentiency than animals. Legally this is of major importance, as a shift from an object to a legal subject has occurred.

A byproduct of social robots replicating human traits is the legacy of instinctually gendered archetypes [32], [33]. Consequently, on one hand, female attributes are fabricated for machines that undertake traditionally female jobs. On the other hand, machines assigned to do labor historically associated as male are assigned male attributes [9], [32]. This inherited gender stigma is also likely to be reflected in the treatment of the RAiLE, socially and occupationally. A RAiLE's ability to generate income could depend on its perceived gender and social recognition, and it might be restricted to work outside the home. Income securities earned by the RAiLE would also automatically affect the humans of the family.

Current Attempts at RAiLE Legislation
Regular, daily social and other human-RAiLE interactions require suitable and relevant regulation. A Sino-Japanese proposal, founded on a case study on legal impacts to humanoid robots [35], proposed a hierarchical robot law as an appropriate means. Establishing a legal status named "Third Existence" [18], [19] for intelligent robots, the legal hierarchy exists on three levels. Initiated by rules setting central norms for regular interactions



between humans and robots, the midlevel covers amending existing laws incompatible with advanced robotics. The footing updates current safety regulations and programming ethics as a viable framework for robots, to avoid damage caused by human-robot coexistence.

In 2016, the EU Parliament wrote the following in a policy document: "Moreover, accepting that a machine can be a conscious being would oblige humankind to respect a robot's basic rights" [20]. This EU document takes an interesting turn in describing how extended autonomy of robots makes them either owners or users. Accordingly, this creates an issue regarding liability, making current legislation in the area insufficient and calling for new rules on the possibility of holding the autonomous "machine" partly or fully responsible for its own acts. Therefore, the legal status of the robot becomes an issue [20]. This is a relatively limited view of the legal aspects of autonomous mechanical beings.

This EU document does, however, approach the very core of this issue based on current legal categories, such as natural person or juridical person, goods, and animals. It postulates the possibility of the need for a new legal entity category with its own specific features and implications, leading to a specific cluster of rights and obligations related to autonomous robots [20]. Unfortunately, the document arbitrarily dismisses the whole issue with the following statement: "When considering civil law in robotics, we should disregard the idea of autonomous robots having a legal personality, for the idea is as unhelpful as it is inappropriate" [20].

Unfortunately, while bringing up several highly relevant and hot topics around RAiLE, the document generally reflects a rather Luddite [36] approach to the need for legal solutions. It seems to us that the responses given are rooted, to a large extent, in a Western-world-culture-based human fear of robots taking over the world [20]. Despite the somewhat odd refusal to acknowledge the need for legal change due to potential RAiLE in the EU policy document, an EU Parliamentary press release in January 2017 asked the EU Commission to create a legislative proposal for a framework covering current and future robotics-related issues [37].

First Step for Global Legal Nomenclature
This research article is a first step towards establishing an international and globally consistent legal nomenclature. Ethical dilemmas based on emotional responses to RAiLE affecting future laws are outside the scope of this article. Practical consequences for legislators depend on the model chosen. For the South Korean model, the existing laws may be "tweaked" to incorporate RAiLE. If the EU model is applied, there is a need to create entirely new laws.

With this article, we want to initiate and stimulate a serious debate about the future of human-RAiLE interaction, with a focus on the legal context, for possible future relationships between the involved entities. In order to emphasize impacts, we have intentionally put this initial research into two key areas of our daily lives — the workplace and our homes. At a later stage, we plan to expand our research into other legal clusters as they are identified as being relevant. We have also intentionally left out areas of legislation that we currently do not see as directly applicable to the RAiLE itself, for instance ethics rules and/or regulations for the engineers designing and creating the new legal entities.

There are other legal aspects that will surface for future legislation due to the different roles that a RAiLE may take on in the future, for instance in the area of military applications for autonomous RAiLE units. These legal aspects should, to ensure legal certainty, be based on coherent legal definitions and scenarios, envisioned and established in advance of the legislation being finalized, so as to make the immersive societal transition into the next phase of global human-RAiLE interaction as smooth as possible.

So, what does this mean for the RAiLE as a legal entity with its own legal personality? First, it is really nothing new — laws have changed and will continue to change and adapt to new circumstances. Second, it is established here that current and future robots use increasingly sophisticated AI systems for decision-making, enabling them to learn and function in an autonomous fashion within a socially interactive environment. Third, rights and obligations under the law work on the assumption that the legal entity is competent to make the necessary decisions within the boundaries of the law. These three points taken together point to the RAiLE becoming a legal entity with its own legal personality. It is just a question of time, and it will force changes to the legal system.

Author Information
Morgan M. Broman is Adjunct Research Fellow at the Law Futures Centre, Griffith University, Gold Coast, QLD, Australia.
Pamela Finckenberg-Broman is a Ph.D. candidate at the Law School of Griffith University, Gold Coast, QLD, Australia.



References
[1] R. Kurzweil, The Singularity is Near: When Humans Transcend Biology. New York, NY: Viking, 2005.
[2] J.L. Bower and C.M. Christensen, "Disruptive technologies: Catching the wave," Harvard Business Rev., vol. 73, no. 1, pp. 43-53, Jan.-Feb. 1995.
[3] K. Dautenhahn, "Human-robot interaction," in The Encyclopedia of Human-Computer Interaction, 2nd ed. Aarhus: Interaction Design Foundation, 2013.
[4] J. Alenljung and B. Lindblom, "Socially embodied human-robot interaction: Addressing human emotions with theories of embodied cognition," in Handbook of Research on Synthesizing Human Emotion in Intelligent Systems and Robotics. Barcelona, Spain: IGI Global, 2015, pp. 169-190.
[5] R. Hartson and P.S. Pyla, Process and Guidelines for Ensuring a Quality User Experience. Morgan Kaufmann, 2012.
[6] T. Fong, I. Nourbakhsh, and K. Dautenhahn, "A survey of socially interactive robots: Concepts, design, and applications," Technical rep., The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, 2002.
[7] C. Bartneck and M.J. Lyons, "Facial expression analysis, modeling and synthesis: Overcoming the limitations of artificial intelligence with the art of the soluble," in Handbook of Research on Synthetic Emotions and Sociable Robotics: New Applications in Affective Computing and Artificial Intelligence. Hershey, PA: IGI Global, 2009, pp. 34-55.
[8] K.M. Lee, W. Peng, and S.-A. Jin, "Can robots manifest personality?: An empirical test of personality recognition, social responses, and social presence in human-robot interaction," J. Communication, pp. 754-772, 2006.
[9] M. Siegel, C. Breazeal, and M.I. Norton, "Persuasive robotics: The influence of robot gender on human behavior," in Proc. IROS 2009 IEEE/RSJ Int. Conf., St. Louis, MO, 2009.
[10] M. Suzuki, "A funeral for 'Aibo' robot dogs at a temple near Tokyo," May 25, 2015. [Online]. Available: https://phys.org/news/2015-02-japan-robot-dogs-life-.html, accessed May 24, 2017.
[11] T. Fong, I. Nourbakhsh, and K. Dautenhahn, "A survey of socially interactive robots," Robotics and Autonomous Systems, pp. 143-166, 2003.
[12] D. McColl and G. Nejat, "A human affect recognition system for socially interactive robots," in Handbook of Research on Technoself: Identity in a Technological Society. IGI Global, 2012, pp. 554-573.
[13] "US — Softwood Lumber IV, 2004," WTO Dispute Settlement: One-Page Case Summaries, pp. 103-104, 2017.
[14] "Commission vs. Italy, 1968," InfoCuria - Case-law of the Court of Justice, 7-68, Judgment of 10.12.1968.
[15] D. Levy, Love and Sex with Robots: The Evolution of Human-Robot Relationships. New York, NY: Harper, 2009.
[16] V. Craw, "French woman wants to marry a robot as expert predicts sex robots to become preferable to humans," News.com.au, Dec. 23, 2016.
[17] Black's Law Dictionary, "What is LEGAL ENTITY?," Black's Law Dictionary Free Online Legal Dictionary, 2nd ed., May 24, 2017. [Online]. Available: http://thelawdictionary.org/legal-entity/.
[18] Y.-H. Weng, C.-H. Chen, and C.-T. Sun, "Toward the human-robot co-existence society: On safety intelligence for next generation robots," Int. J. Social Robotics, pp. 267-284, 2009.
[19] S. Hashimoto, "The robot generating an environment and autonomous," in The Book of Wabot 2. Tokyo, Japan: Chuko, 2003, p. 2.
[20] N. Nevejans, "European civil law rules in robotics," DG for Internal Policies, Brussels, Belgium, 2016.
[21] European Parliament Committee on Legal Affairs, "Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))," European Parliament, Brussels, Belgium, 2017.
[22] S. Lovgren, "National Geographic News," Mar. 16, 2007. [Online]. Available: http://news.nationalgeographic.com/news/2007/03/070316-robot-ethics.html.
[23] N. Fraser, "Gender equity and the welfare state: A post-industrial thought experiment," in Democracy and Difference: Contesting the Boundaries of the Political. Princeton, NJ: Princeton Univ. Press, 1996, pp. 218-242.
[24] D. Haraway, "A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century," in Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge, 2013, pp. 291-218.
[25] International Labour Organization (ILO), "Labour Standards," Jun. 1, 2017. [Online]. Available: http://ilo.org/global/lang–en/index.htm.
[26] International Labour Organization (ILO), "ILO Declaration on Fundamental Principles and Rights at Work," Jun. 15, 2010. [Online]. Available: http://www.ilo.org/declaration/lang–en/index.htm.
[27] L.B. Stone and J. Stone, "People and robotics – the hybrid workforce," KPMG Australia, 2013.
[28] World Economic Forum (WEF), "The Future of Jobs – Employment, Skills and Workforce Strategy for the Fourth Industrial Revolution," World Economic Forum, Davos, Switzerland, 2016.
[29] World Economic Forum (WEF), "The Global Risks Report 2017 - 12th Edition," World Economic Forum, Cologne/Geneva, 2017.
[30] OECD, "EMPLOYEES – ILO," Mar. 4, 2003. [Online]. Available: https://stats.oecd.org/glossary/detail.asp?ID=766.
[31] S. French, "Bill Gates says robots should pay taxes if they take your job," Market Watch, Feb. 20, 2017.
[32] A. Krebs, "Love at work," Acta Analytica, 1998.
[33] G. Voss, "The Second Shift in the Second Machine Age: Automation, gender, and the future of work," in Our Work Here is Done: Visions of a Robot Economy. London, U.K.: Nesta, 2014, pp. 83-93.
[34] C. Nass, Y. Moon, and N. Green, "Are machines gender neutral? Gender-stereotypic responses to computers with voices," J. Applied Social Psychology, vol. 27, no. 10, pp. 864-876, 1997.
[35] Y.H. Weng et al., "Intersection of 'Tokku' Special Zone, Robots, and the Law: A case study on legal impacts to humanoid robots," Int. J. Social Robotics, p. 841, 2015.
[36] "Luddite Meaning in the Cambridge English Dictionary," Cambridge Univ. Press, May 25, 2017. [Online]. Available: http://dictionary.cambridge.org/dictionary/english/luddite.
[37] EU Legal Affairs Committee, "Robots: Legal Affairs Committee calls for EU wide rules," European Parliament, Brussels, Belgium, 2017.
[38] J. Cameron, C. Morris, and V. Paxton, "Preparing for life with robots: How will they be regulated in Australia?," Jun. 3, 2017. [Online]. Available: http://www.corrs.com.au/thinking/insights/preparing-for-life-with-robots-how-will-they-be-regulated-in-australia/.
[39] G. Zhang and T. Shih, "Peer-to-peer based social interaction tools in ubiquitous learning environment," in Proc. 2005 11th Int. Conf. Parallel and Distributed Systems (ICPADS'05); and Human Kinetics News and Excerpts, "Technology can have positive and negative impact on social interactions," 2010; http://www.humankinetics.com/excerpts/excerpts/technology-can-have-positive-and-negative-impact-on-social-interactions.
[40] C. Murnan, "Expanding communication mechanisms: They're not just e-mailing anymore," in Proc. SIGUCCS '06 34th Ann. ACM SIGUCCS Conf.: Expanding the Boundaries, 2006, pp. 267-272.
[41] D.M. Levin v. Staatssecretaris van Justitie, C-53/81, Decision para. 17, European Court Reports, Mar. 23, 1982.
[42] "Small firms give robot leasing a quiet send-off," Industrial Robot, vol. 7, no. 4, pp. 237-242, 1980; doi.org/10.1108/eb004781.
[43] R. Wile, "A venture capital firm just named an algorithm to its Board of Directors – Here's what it actually does," Business Insider Australia, May 14, 2014; https://www.businessinsider.com.au/vital-named-to-board-2014-2015.
[44] C. Pateman, "The patriarchal welfare state," in The Welfare State Reader, pp. 134-151, 1988.
[45] N. Fraser, "After the family wage: Gender equity and the welfare state," Political Theory, vol. 22, no. 4, pp. 591-618, 1994.
[46] J.L.R. Pérez, "A New Gender Perspective for Basic Income?," presented at the 10th BIEN Congress, Barcelona, Spain, 2004, p. 2.
[47] T. Mautner and R. Goodin, "Protecting the Vulnerable: A Reanalysis of Our Social Responsibilities," JSTOR, 1988.
[48] "Universal basic income and women's liberation," Ursula Huws Blog, Jan. 22, 2017; https://ursulahuws.wordpress.com/2017/01/22/universal-basic-income-and-womens-liberation/.
[49] S.M. Okin, Justice, Gender, and the Family. Basic Books, 1989.
[50] N.J. Hirschmann, Gender, Class, and Freedom in Modern Political Theory. Princeton, NJ: Princeton Univ. Press, 2009.
[51] K. Richardson, An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Routledge, 2015.
[52] K. Richardson, "The human relationship in the ethics of robotics: A call to Martin Buber's I and Thou," AI and Society, pp. 1-8, 2017.
[53] European Commission, Milestones in Improving Animal Welfare; https://ec.europa.eu/food/sites/food/files/animals/docs/aw_infograph_milestones_en.pdf; accessed Jan. 2018.



A Drone by Any Other Name
Purposes, End-User Trustworthiness, and Framing, but not Terminology, Affect Public Support for Drones

Lisa M. PytlikZillig, Brittany Duncan, Sebastian Elbaum, and Carrick Detweiler

FIGURE 1. Photograph of the water sampling UAV in the field (not shown to participants).

Projections indicate that, as an industry, unmanned aerial vehicles (UAVs, commonly known as drones) could bring more than 100,000 jobs and $80 billion in economic growth to the U.S. by 2025 [1]. However, these promising projections do not account for how various publics may perceive such technologies. Understanding public perceptions is important because the attitudes of different groups can have large effects on the trajectory of a technology, strongly facilitating or hindering technology acceptance and uptake [2].

To advance understanding of U.S. public perceptions of UAV technologies, we conducted a nationwide survey of a convenience sample of 877 Americans recruited from Amazon's pool of Mechanical Turk (MTurk) workers. In our surveys, we used short scenarios to experimentally vary UAV characteristics, the end-users of the technology, and certain communication factors (terminology and framing). This allowed us to investigate the impacts of these factors alone and in combination.

Digital Object Identifier 10.1109/MTS.2018.2795121
Date of publication: 2 March 2018



In addition, given the conflicts that sometimes arise around scientific findings and technologies (e.g., climate change, vaccines [3], [4]), we also gave explicit attention to whether and how public support for UAVs varied by self-reported political ideology, issue attitudes, and perceptions of end-user trustworthiness. Finally, because UAVs for civilian purposes represented relatively new technologies at the time of the first survey, we examined whether public opinion is changing over time, as more people become aware of UAVs. We thus administered the same survey twice, separated by one year, in the fall of 2014 and 2015.

The results of our experimental manipulations revealed a surprising lack of impact of terminology and UAV autonomy, a small impact of message framing and UAV end-user, and a relatively large impact of UAV purpose. We did not find that public attitudes changed much over the year between samples, and perceptions of end-user trustworthiness were strong predictors of public support. Still, our regression models only accounted for about 40% of the variance in public support, suggesting that additional variables should be studied in future work to gain a more complete understanding of public support for UAVs. We also found evidence of a small amount of political polarization of public opinion related to who was using the UAVs for what purpose, and this polarization appeared to be changing over time.

Taken together, our results — which may be especially useful to UAV designers, marketers, and policy makers — suggest there is a need to establish that the UAVs are used for valued purposes and by users that publics find to be trustworthy. However, public judgments might be significantly impacted by personal or local ideologies rather than national priorities. In the next section, we describe in more detail prior research on public support for UAVs and how we formulated our research questions and hypotheses. We then describe our methods, results, and findings in greater detail.

Background and Research Questions
In this research, our primary interest was to advance understanding of factors that impact public support — potentially including politically polarized support — for UAVs in the U.S. Some of our research questions were derived from questions facing professionals designing UAVs, such as: does it matter what the technologies are called? Other questions were inspired by prior social science findings, for instance: how easy is it to evoke political polarization in response to UAVs?

Communicating about UAVs: Terminology and Framing
Some UAV industry professionals, concerned about how terminology might impact public perceptions, have tried to persuade the U.S. media to stop calling the technology "drones." Their concern is that the word "drone" evokes negative visions — perhaps of large predatory war instruments unthinkingly and unapologetically completing their missions without regard for collateral damage [5].

Social science theory and research also suggest that the terminology used to describe objects can influence people's perceptions. Euphemistic labeling refers to how different terms (e.g., military force vs. war; collateral damage vs. innocent victims) impact people's responses to verbal descriptions of events and objects [6], [7]. In general, language does matter, and different terms and phrases often are associated with different emotions and cognitive associations [8]. Although a prior study found terminology had little or no effect on public attitudes toward drones [9], that study was conducted in Australia and it is unclear whether the findings will generalize to the U.S. On the other hand, we might expect terminology to have no effect in the U.S. because the term "drones" has been commonly used by the media and others when referring to commercial applications of the technologies, thereby potentially reducing the association between the term "drone" and military activity.

Even if terminology does not affect public perceptions, other communicative factors may. A particularly common finding in the social sciences is that humans tend to be more strongly motivated to avoid losses or prevent harm than to approach gains or promote benefits [10], [11]. For example, very different levels of support are offered when inquiring about "saving lives" (a benefit) versus "preventing deaths" (a harm), with people generally more supportive of efforts to prevent deaths than to save lives [10]. If this framing effect applies to the current context, framing UAVs in terms of harms they can prevent should result in more support than framing them in terms of benefits they might promote. Some research, however, suggests that opinions are less susceptible to framing effects when people have thought more deeply and analytically about the issues [12], [13]. Thus, if framing effects are not found, or are found but appear to be decreasing over time, this could indicate that the public is forming more robust opinions that are less influenced by communication factors. In the absence of any prior evidence or significant event occurring between surveys, we had no reason to expect that the framing effects would not occur or would change over time. Thus, our first research question (RQ) and hypothesis (H) are as follows:

■ RQ1: Is public support impacted by communicative factors such as terminology describing the technology and/or promotion and prevention framing, and do these impacts change over time?
■ H1: Consistent with prior research, terminology will have no impact on public support, but framing will have a significant impact favoring prevention framing that is consistent over time.



UAV Autonomy
We also examined the effect of UAV autonomy on U.S. public support. The effect of the autonomy of UAVs on public support has been studied in the context of military drones, where the primary ethical concerns are related to the risk of collateral damage and the use of conscience-free weapons making life-and-death decisions [14], [15]. The autonomy of other technologies has also raised public concern — for example, the public has been leery of self-driving cars [16]. UAV autonomy has been less studied in the context of civilian uses. We sought to fill that gap by examining autonomy as it relates to different UAV purposes. We hypothesized that autonomy will matter less in domains where automation increases efficiency and reliability without raising clear ethical concerns. For example, many have suggested that economic topics like budgeting and taxes are less relevant to people's moral and ethical concerns than life-and-death topics like abortion and the death penalty (see review in [17]). Wariness of self-driving cars could also be related to fears about deaths and injuries. In the context of security and defense, respondents might imagine the UAVs autonomously doing harm. However, UAV autonomy may be less related to public support when UAVs are used for economic or environmental goals. Thus, our second research question and hypothesis are:

■ RQ2: Does autonomy of the UAVs affect public support (and/or does it depend on the purpose of the UAVs)?
■ H2: Autonomy will have a negative influence on support in the context of using UAVs for security, but will have less or no influence in the context of using UAVs for economic or environmental purposes.

UAV Purpose, Perceptions of the UAV End-User, and Political Polarization
Prior studies suggest UAV purposes influence attitudes towards UAVs, in both the U.S. and elsewhere [18]. U.S. public opinion is rather positive in the context of military [19], [20] and security [21], [22] uses. Further, polls suggest the U.S. public supports UAV use for search and rescue and scientific research purposes, while being less favorable toward everyday commercial uses such as package delivery or local law enforcement (e.g., crowd monitoring and crime detection) [23].

However, the extent to which public attitudes toward UAVs are or may be becoming politically polarized has been given only scant attention to date [23]. Such investigations are important given the manner in which public polarization has impacted progress related to other science and technologies. For example, public responses to genetically modified foods were quite different in Europe versus the U.S. [24].

We explored the possible provocation of political polarization around UAVs in two ways: First, we explicitly tied certain UAV capabilities to commonly studied political issues (e.g., water sampling in the service of environmental conservation, or using commercial UAVs to build a strong economy). Second, we varied the actor using the UAV (i.e., public or private domain). There are political differences in attitudes toward business versus governmental actors in different situations. For example, self-identified conservatives generally favor free market forces over governmental involvement, and [25] found that conservatives report less trust in government than liberals. Yet greater trust in government has been found among conservatives when considering foreign rather than domestic policy [26], and conservatives may support government involvement when it comes to security and defense [27]. These results suggest that UAV purpose and end user need to be considered in combination.

Related to the experimental variation of UAV purposes and end users, we also assessed issue attitudes (toward environmental, security, and economic issues) and perceptions of end-user trustworthiness. Miethe et al. [23] suggest that, among other factors, distrust of actors using the UAVs may have resulted in low support for use of UAVs for crowd monitoring or use near one's home. Including measures of issue attitudes and actor trustworthiness allowed us to test whether any differences related to actor, or actor and purpose, were due to differences in such attitudes. Our final research questions and hypotheses were as follows:

■ RQ3: What are the most important factors affecting public support for UAV development and use (and did these factors change from 2014 to 2015)?
■ H3: Consistent with prior research, the purpose of UAVs will be among the most important predictors across both 2014 and 2015.
■ RQ4: What conditions, if any, appear to elicit politically polarized responses (and did these conditions change from 2014 to 2015)?
■ H4: Polarized support will be dependent upon both the purpose of UAVs and the actor using them, with heightened polarization apparent when UAVs are used to address polarizing issues (e.g., the environment) and used by end users that are differentially trusted dependent upon ideology (e.g., the government).

Methodology

Study Design
To investigate our research questions, we used a "vignette survey experiment" [28] administered to a convenience



(i.e., not representative) sample comprised of U.S. Amazon MTurk workers. After asking whether participants had heard of the technology, the survey presented each participant with a brief definition of the technology and then provided a short scenario depicting an agency investigating possible future uses of UAVs. Features of the scenario were manipulated in a fully-crossed design. The first row of Table 1 depicts the template used to develop the scenarios. Instances of each item (italicized within brackets) were selected, dependent upon randomly assigned condition, from the corresponding categories in the table.

Measures
Immediately following the experimentally manipulated scenario, we assessed our primary dependent measure, support for the UAVs, and measured perceptions of the trustworthiness, competence, and distrustworthiness of the end user. Our three trust-relevant variables (hereafter referred to collectively as measures of trustworthiness) were determined based on theory and preliminary factor analyses [29]. As shown in Table 2, we averaged across multiple items to create internally reliable scales with Cronbach alpha values greater than or equal to 0.7, as is commonly recommended.

Near the end of the survey we assessed demographics such as age, gender, and ethnicity. We also assessed scenario-specific issue attitude (e.g., I believe [the protection of our environmental resources; U.S. national security; a strong U.S. economy] should be the nation's top priority) with response options ranging from strongly disagree (1) to strongly agree (7).

TABLE 1. Manipulations of conditions within the vignette survey experiment.

Scenario template ("Imagine that…"): "For the next questions, imagine that [Public/private] has established [Agencies] that is investigating the use of [Technology] to [Purpose1]. For example, the [Public/private] might [Purpose2]. The [Technology] they are using are [Autonomy]."

Public/Private: The U.S. government | A private U.S. company

Agencies (Public/Government | Private/Business):
 Economic: A new Institute of Economic Development | An Economic Development Research Unit
 Environment: A new Institute of Environmental Enhancement | An Environmental Enhancement Research Unit
 Security: A new Institute of Public Safety and Security | A Public Safety and Security Research Unit

Technology: Drone(s) | Aerial robot(s) | Unmanned aerial vehicle(s) — UAV(s) | Unmanned aerial system(s) — UAS(s)

Purposes:
 Economic, Promotion Focus: 1) [Technology] used to promote economic growth; 2) [for example] to be used to make tasks such as package delivery more efficient, possibly allowing business owners to expand their businesses and profits and become more competitive, thereby improving the U.S. economy.
 Economic, Prevention Focus: 1) [Technology] used to prevent economic decline; 2) [for example] to be used to make tasks such as package delivery more efficient, possibly allowing business owners to cut losses and costs and avoid business closures, thereby helping the U.S. economy to remain stable.
 Environment, Promotion Focus: 1) [Technology] used to discover or create additional natural resources in our country; 2) [for example] to be used to gather water samples in order to discover and document clean water sources, or other sources of valuable natural resources.
 Environment, Prevention Focus: 1) [Technology] used to monitor and protect natural resources in our country; 2) [for example] be used to gather water samples in order to detect water quality problems, or other threats to valuable natural resources.
 Security, Promotion Focus: 1) [Technology] used to promote public confidence in everyday security; 2) [for example] to be used to actively seek out illegal activities, potentially allowing for the prosecution and punishment of a greater number of crimes happening on U.S. soil, resulting in increases in public safety.
 Security, Prevention Focus: 1) [Technology] used to prevent public concerns about everyday security; 2) [for example] to help monitor and prevent harm from illegal activities, potentially allowing the prevention of increases in crimes happening on U.S. soil.

Autonomy:
 Fully autonomous — meaning that they are entirely controlled by computers that have been programmed to guide their actions. Human manual control is not used.
 Partially autonomous — meaning that they are controlled both by computers that have been programmed to guide their actions and manually by humans trained to control them remotely.
 Not autonomous — meaning that they are entirely manually controlled by people with remote controls who have been trained to guide the [Technology]'s actions. Computer automated controls are not used.
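To make the fully-crossed design concrete, the sketch below assembles a scenario in the manner Table 1 describes, using hypothetical Python names and abridged condition wordings; it illustrates the random assignment procedure, not the authors' actual survey software.

```python
import itertools
import random

# Abridged condition levels from Table 1 (full wordings appear in the table above).
ACTORS = ["The U.S. government", "A private U.S. company"]
PURPOSES = ["economic", "environmental", "security"]
TERMS = ["drone(s)", "aerial robot(s)",
         "unmanned aerial vehicle(s) (UAV(s))",
         "unmanned aerial system(s) (UAS(s))"]
FRAMES = ["promotion", "prevention"]
AUTONOMY = ["fully autonomous", "partially autonomous", "not autonomous"]

# Fully crossed design: every combination of levels is a possible condition.
CONDITIONS = list(itertools.product(ACTORS, PURPOSES, TERMS, FRAMES, AUTONOMY))
assert len(CONDITIONS) == 2 * 3 * 4 * 2 * 3  # 144 cells in the crossed design

def assign_scenario(rng: random.Random) -> str:
    """Randomly assign one condition and fill in an abridged scenario template."""
    actor, purpose, term, frame, autonomy = rng.choice(CONDITIONS)
    return (f"Imagine that {actor} has established an agency that is "
            f"investigating the use of {term} for {frame}-framed "
            f"{purpose} purposes. The {term} they are using are {autonomy}.")

print(assign_scenario(random.Random(42)))
```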



TABLE 2. Measures of support for UAVs and end-user trustworthiness. Cronbach's alpha values indicate internal consistency.

Support for UAV Use (alpha = 0.93):
 "To what extent do you approve of [Actor] using [Technology] for the purposes described above?" (Strongly Disapprove (1) to Strongly Approve (7))
 "To what extent would you support or resist [Actor] use of [Technology] for the purposes described above? For example, how willing would you be to vote to allow such uses or have public funds promote such uses?" (Strongly Resist (1) to Strongly Support (7))

For the following subscales, participants were instructed: "Below, indicate your opinions about how [Actor] would behave when using [Technology] for the purposes described above." All items were rated Never (1) to Always (6).

Trustworthiness (alpha = 0.87): Only use the [Technology] to benefit the public at large; Be honest with the public about anything they find or do using the [Technology]; Be transparent (open) about how, when, and why they are using the [Technology]; Use the [Technology] to achieve values important to you.

Distrustworthiness (alpha = 0.85): Use the [Technology] for their own selfish benefit; Be dishonest about anything they find or do using the [Technology]; Hide information about how, when and why they are using the [Technology]; Use [Technology] to support values that you disagree with.

Competence (alpha = 0.70): Be competent in their use of [Technology]; Be incompetent in their use of [Technology] (reversed).
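As a sketch of the scale construction just described, the following computes Cronbach's alpha for a set of items and averages them into a scale score. The ratings are hypothetical, and the function implements the standard alpha formula rather than code from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical example: four trustworthiness items rated 1-6 by five respondents.
ratings = np.array([[5, 6, 5, 6],
                    [2, 3, 2, 2],
                    [4, 4, 5, 4],
                    [6, 5, 6, 6],
                    [3, 3, 2, 3]])
alpha = cronbach_alpha(ratings)   # internal-consistency check (>= 0.7 recommended)
scale = ratings.mean(axis=1)      # scale score = mean across the items
print(round(alpha, 2), scale)
```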

Political ideological identity was assessed as the average across three items asking participants to rate the extent to which they were strongly liberal (1) to strongly conservative (7) on economic issues, social issues, and overall (Cronbach's alpha = 0.91).

Participants
A total of 576 participants were recruited in late 2014 and 301 in late 2015 via MTurk. All participants were American citizens and were paid 25 cents (0.25 USD) in 2014 or 50 cents (0.5 USD) in 2015. As is common with MTurk, both samples, on average, self-reported they were somewhat more liberal than conservative (see Figure 2). In 2014, there were slightly more men than women (299 versus 277), and slightly more women than men in 2015 (167 versus 134). The samples were of a similar age distribution, with 2014 having a mean age of 36.2 years (SD: 12.8) and 2015 having a mean age of 36.6 years (SD: 12.3).

FIGURE 2. Illustration of distribution of sample responses (numbers indicate percent of sample) to survey questions concerning self-reported ideology: Liberal (1-3) / Centrist (4) / Conservative (5-7).
 2014: Social 54.8 / 18.9 / 26.3; Economic 38.4 / 22.9 / 38.7; Overall 44.8 / 27.4 / 27.5.
 2015: Social 60.1 / 15.0 / 24.9; Economic 43.6 / 16.4 / 40.2; Overall 48.8 / 24.3 / 26.9.

Results
The data were analyzed using the Statistical Package for the Social Sciences (IBM SPSS Statistics 23). To answer our research questions and test our hypotheses, we conducted correlation and multiple regression analyses.

Correlation analyses allow us to examine the strength of the relationship between any two variables. As shown in Table 3, among the experimentally varied factors, the highest correlations, and thus the strongest relationships with support, involved UAV purpose.



TABLE 3. Pearson correlations between UAV support and each of our manipulated and measured variables.

                                 2014       2015       Both Years   %Var
Manipulated Variables
 Purposes
  Security use                  -0.30***   -0.27***   -0.29***      8.41%
  Environmental use              0.24***    0.31***    0.27***      7.29%
  Economic use                   0.06      -0.04       0.03         0.09%
 End-User
  Business (vs. Government)     -0.11**    -0.03      -0.08*        0.64%
 Autonomy
  Autonomous                    -0.03       0.09       0.01         0.01%
  Manual                        -0.04      -0.07      -0.05         0.25%
  Partially Autonomous           0.07      -0.02       0.04         0.16%
 Terminology
  UAS term                      -0.03      -0.09      -0.05         0.25%
  UAV term                       0.00       0.02       0.01         0.01%
  Aerial robot term              0.05       0.01       0.03         0.09%
  Drone term                    -0.02       0.06       0.01         0.01%
 Framing
  Promotion (vs. prevention)    -0.12**    -0.03      -0.09*        0.81%
Measured Variables
  Female                        -0.03      -0.02      -0.02         0.04%
  Age                           -0.01      -0.08      -0.03         0.09%
  Ideology (Conservativism)     -0.05       0.07      -0.01         0.01%
  Issue attitude                 0.17***    0.24***    0.20***      4.00%
 Perceptions of end-users
  Trustworthiness                0.50***    0.55***    0.52***     27.04%
  Dis-trustworthiness           -0.42***   -0.48***   -0.44***     19.36%
  Competence                     0.39***    0.47***    0.42***     17.64%

Notes: 2014 N = 576, 2015 N = 301. +p<0.10, *p<0.05, **p<0.01, ***p<0.001. %Var is the square of the Pearson correlation across both years and estimates the variance shared by the predictor and UAV support.
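Since the table note defines %Var as the squared Pearson correlation, the entries can be checked directly; for example, for security use across both years:

$\%\mathrm{Var} = r^2 \times 100 = (-0.29)^2 \times 100 = 8.41\%$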

The negative correlation between support and security purposes, and the positive correlation between support and environmental uses, indicate that support was lowest for security purposes and highest for environmental purposes.

Figure 3 illustrates the distribution of responses to questions assessing public support or resistance by UAV purpose and year. There was a relatively bi-modal distribution of support ratings for security purposes in both 2014 and 2015, indicating public polarization. Ratings of support for economic purposes were negatively skewed, resulting in more support on average than for security purposes. However, there appeared to be increasing polarization of responses in 2015 relative to 2014. That is, the percentage of those strongly resisting use of UAVs for economic purposes was greater than the percentage of those expressing more moderate resistance in 2015. Finally, support ratings for use of UAVs for environmental purposes were negatively skewed in 2014, and even more negatively skewed in 2015, resulting in the highest average levels of support for environmental purposes.

Among the other variables listed in Table 3, there were weaker but significant relationships with UAV support, favoring prevention-focused framing and end use by the government over private business. Among the non-experimentally varied variables, as expected, there were relatively strong effects of end-user trustworthiness and of issue attitudes relevant to the scenario assigned to the participant.

Multiple regression procedures provide a different measure of the importance of variables for predicting public support, by identifying variables that account for independent variance above and beyond other variables. We first tested whether the effects of each of our variables depended on time (this is done by testing for statistical interactions with time).



FIGURE 3. Distribution of rated support or resistance for the development and use of UAVs by purpose and year (histograms of average support and approval for the technology, 1.00 to 7.00, with frequency in percent).
 Security purpose: 2014: Resist 48%, Support 39%. 2015: Resist 51%, Support 39%.
 Economic purpose: 2014: Resist 28%, Support 60%. 2015: Resist 33%, Support 54%.
 Environmental purpose: 2014: Resist 17%, Support 68%. 2015: Resist 16%, Support 71%.
Note: Support was assessed by averaging two items (see Table 2), resulting in a mean between 1 and 7. Percentages of resistors and supporters sum to less than 100 because a small percentage of persons' mean scores were at exactly "4" (neutral) and thus were not counted as resistors or supporters.
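A minimal sketch of the binning described in the figure note, assuming hypothetical arrays of the two 1-7 support items; the cut-offs follow the note (means below 4 count as resistance, above 4 as support, exactly 4 as neutral):

```python
import numpy as np

def classify_support(item1: np.ndarray, item2: np.ndarray) -> dict:
    """Average the two 1-7 support items, then bin respondents as in Figure 3."""
    mean_score = (item1 + item2) / 2
    return {
        "resist_pct": 100 * np.mean(mean_score < 4),
        "neutral_pct": 100 * np.mean(mean_score == 4),
        "support_pct": 100 * np.mean(mean_score > 4),
    }

# Hypothetical ratings for six respondents.
approve = np.array([1, 2, 4, 5, 7, 4])
support = np.array([2, 2, 4, 6, 7, 5])
print(classify_support(approve, support))
```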

These analyses indicated that the overall main effects did not change between 2014 and 2015. We therefore ignore the effect of time in most of our remaining analyses. Next, we examined a regression model in which the experimentally varied factors were entered on Step 1 and the measured variables were entered on Step 2. This allows us to see how important each variable is when it is competing with different combinations of other variables. Note that we standardized the measured predictor variables so that they would have a mean of zero (representing the average response) and a standard deviation of 1, in order to make results easier to interpret. Table 4 shows the Step 1 and 2 models' effects, which we next discuss in relation to our research questions and hypotheses.
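A hedged sketch of this two-step procedure using Python's statsmodels, with a hypothetical file and hypothetical column names; the published analyses were run in SPSS, so this only illustrates the logic of entering manipulated factors on Step 1 and z-scored measured variables on Step 2.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("uav_survey.csv")  # hypothetical file of survey responses

# Step 2 predictors are z-scored so B reflects a 1-SD change (see Table 4 note).
for col in ["trust", "distrust", "competence", "issue_attitude", "ideology"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

# Step 1: experimentally manipulated factors only, dummy-coded via C(),
# with reference levels matching Table 4 (e.g., environmental purpose).
step1_formula = ("support ~ C(purpose, Treatment('environmental'))"
                 " + C(end_user) + C(autonomy) + C(terminology) + C(framing)")
step1 = smf.ols(step1_formula, data=df).fit()

# Step 2: add the measured (z-scored) variables to the same model.
step2 = smf.ols(step1_formula + " + trust_z + distrust_z + competence_z"
                " + issue_attitude_z + ideology_z", data=df).fit()

# Compare variance explained across steps (0.13 vs. 0.40 in Table 4).
print(step1.rsquared, step2.rsquared)
```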



TABLE 4. Results of hierarchical regression analyses predicting support with experimentally varied and measured variables.

                                Step 1                               Step 2
                                B        SE     Indep. Variance      B        SE     Indep. Variance
Step 1
(Constant)                      5.26***  0.165                       5.15***  0.138
Security use (vs. envir.)      –1.36***  0.133  10.52%              –1.15***  0.112   7.46%
Economic use (vs. envir.)      –0.58***  0.129   2.08%              –0.58***  0.108   2.05%
Business (vs. Government)      –0.26*    0.106   0.59%              –0.30**   0.088   0.80%
Autonomous (vs. partially)     –0.04     0.129   0.01%               0.02     0.108   0.00%
Manual (vs. partially)         –0.22+    0.129   0.29%              –0.08     0.108   0.04%
UAS term (vs. drone)           –0.18     0.149   0.15%              –0.09     0.124   0.04%
UAV term (vs. drone)            0.04     0.149   0.01%              –0.04     0.124   0.01%
Aerial robot (vs. drone)        0.09     0.147   0.04%               0.12     0.123   0.06%
Promotion (vs. prevention)     –0.23*    0.106   0.48%              –0.24**   0.088   0.52%
Step 2 (a)
Trustworthiness                                                      0.52***  0.063   4.69%
Distrustworthiness                                                  –0.16*    0.064   0.45%
Competence                                                           0.27***  0.059   1.53%
Issue attitude                                                       0.12*    0.046   0.46%
Ideology                                                            –0.01     0.044   0.00%
F                               F(9, 867) = 13.79, p < 0.001         F(14, 862) = 40.12, p < 0.001
R²                              0.13                                 0.40

+p < 0.10; *p < 0.05; **p < 0.01; ***p < 0.001. B indicates the expected change in support relating to a 1-unit change in the predictor variable. Indep. Variance indicates the non-overlapping variance accounted for by the predictor above and beyond the variance accounted for by the other variables.
(a) Step 2 variables were transformed to z-scores such that 0 equals the mean response and the B value refers to the change in support for a 1 standard deviation increase in the predictor.
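
The "Indep. Variance" column corresponds to a squared semipartial correlation: the drop in model R² when a single predictor is removed from the full model. A sketch of that computation, reusing the hypothetical names from the earlier sketch:

    import statsmodels.api as sm

    def independent_variance(df, outcome, predictors, target):
        """R^2 of the full model minus R^2 of the model refit without
        `target`: the variance `target` accounts for beyond all others."""
        full = sm.OLS(df[outcome], sm.add_constant(df[predictors])).fit()
        rest = [p for p in predictors if p != target]
        reduced = sm.OLS(df[outcome], sm.add_constant(df[rest])).fit()
        return full.rsquared - reduced.rsquared

    # e.g., independent_variance(df, "support", step1, "security_use")
    # should land near the 10.52% reported in the Step 1 column.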

Response to RQ1: U.S. Public Support is Impacted (Slightly) by Framing but not by Terminology
Table 4 provides evidence supporting our hypothesis (H1) that terminology will have no impact on public support in the U.S., but framing will have a significant impact favoring prevention framing. Consistent with prior research in social psychology, prevention framing in terms of protecting people from harm was associated with slightly more support (predicting a 0.23-point increase in support on the 7-point scale) compared to promotion framing. However, the overall variance accounted for by prevention or promotion framing (beyond that accounted for by other variables) was small (independently only accounting for less than one-half percent of the total variance in support for UAVs).

Note that, although terminology did not impact support for the technology, it did impact familiarity. A total of 92% of respondents across both years indicated "yes" they had heard of drones. Only 59% indicated they had heard of UAVs, 37% had heard of UASs, and 33% had heard of aerial robots. These results were similar across both years of the survey.

Response to RQ2: UAV Autonomy did not Impact UAV Support, Regardless of Purpose
Table 4 results indicate that, as a main effect, autonomy of the UAVs does not appear to affect public support, although there was slightly less support for fully manual UAVs than partially autonomous UAVs in Step 1 of the model. To examine whether the effect of autonomy depends on the purpose of the UAVs, we conducted another regression analysis (not shown in Table 4) that tested for the interaction between the purpose and autonomy variables. The interaction was not statistically significant, which indicates that our hypothesis (H2), that autonomy will have different effects on support depending on purpose, was not supported. Autonomy did not affect our respondents' reported levels of support for UAVs, regardless of the purpose of the UAVs.

Response to RQ3: Purpose and End-User Trustworthiness are the Most Important Predictors of Support
The results presented in Table 4 further confirm the importance of UAV purpose for impacting support, as purpose accounts for about 13% (11% + 2%) of the independent variance in the Step 1 model, while end user and framing each independently account for vastly less — only about one-half of one percent of the variance. Even when perceptions of actor trustworthiness, issue attitude, and ideology are included in the model in Step 2, UAV purposes continue to predict the most variance in public support, followed by perceptions of end-user trustworthiness.

Taken together, our results support H3 that UAV purpose is among the most important predictors of UAV support. The B statistics in Table 4 indicate, for example, controlling for all other variables (Step 2), use of UAVs for security purposes reduces support by 1.15 points on the 7-point scale, compared to environmental purposes. Using UAVs for economic (rather than environmental) purposes reduces support by 0.58 points. Additionally, our analyses find that our sample supported use by government more than by businesses (Table 4). Further, issue attitude, trustworthiness perceptions, and ideology do not completely explain the effects of purpose and end user. We know this because purpose and end user remain significant predictors of support even when those other variables are included in the model.

Response to RQ4: Political Polarization can be Evoked, but is Small and the Pattern Changes over Time
To examine whether certain conditions appeared to elicit politically polarized responses, and whether those conditions changed from 2014 to 2015, we tested for differences in the effect of ideology dependent on UAV purpose, actor, and year. Specifically, we examined the four-way interaction in a multiple regression while including all subsumed interactions and main effects also in the model. Due to space constraints, the full model results are not presented but are available from the first author. However, the four-way interaction was statistically significant. This indicates that political polarization of support (i.e., the relationship between ideology and support) changed dependent upon UAV purpose, end user, and year.

To clarify how political polarization varied, Table 5 shows the strength of the ideology-support relationship under different conditions and Figure 4 illustrates the predicted mean support for UAVs dependent on the different factors. In Figure 4, the conditions under which significant ideology-support relationships were apparent are highlighted by the labeled bars. As shown in Table 5, our hypothesis (H4) was partially supported: polarization of support depended both on UAV purpose and end user. Specifically, ideology correlated with use of UAVs for environmental purposes by government in both 2014 and 2015 (Table 5, respective rs = –0.27 and –0.32, ps < 0.05), with the negative correlations indicating that conservatives were less supportive of UAVs than liberals under those conditions. In addition, Table 5 and Figure 4 show that use of UAVs for security resulted in polarization such that conservatives were more supportive of that use in both 2014 and 2015 — but the polarization was associated with different end users in 2014 (business) and 2015 (government).

Finally, although the results are partially supportive of our hypothesis, we note that the effect of ideology (and thus political polarization) is generally small, only accounting for a fraction of a percent of the variance in public support for UAVs. This amount is not always statistically significant (see Table 5). In addition, the four-way and all three-way interactions became non-significant when issue attitudes and trustworthiness variables were included in the model. This suggests that changes in polarization over time are due to the differences in issue attitudes and trustworthiness ratings between our 2014 and 2015 samples.
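
The R²-change test behind the four-way interaction reported above can be sketched as follows, again with hypothetical column names (the authors' exact model specification is not reproduced in the article):

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("uav_survey.csv")  # hypothetical data file

    # All main effects and subsumed interactions, minus the four-way term...
    base = smf.ols(
        "support ~ ideology * C(purpose) * C(end_user) * C(year)"
        " - ideology:C(purpose):C(end_user):C(year)",
        data=df,
    ).fit()

    # ...versus the full model including the four-way interaction.
    full = smf.ols(
        "support ~ ideology * C(purpose) * C(end_user) * C(year)",
        data=df,
    ).fit()

    # R^2-change F-test; cf. F-change(2, 853) = 3.09 in the Table 5 footnote.
    f_change, p_value, df_diff = full.compare_f_test(base)
    print(f"F-change({int(df_diff)}, {int(full.df_resid)}) = "
          f"{f_change:.2f}, p = {p_value:.3f}")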

[Figure residue: grouped bar charts for 2014 and 2015 showing predicted support values (ranging from 2.90 to 5.70, with two bars marked (a)) across Security, Economic, and Environment purposes; legend: Liberal-Govt, Liberal-Business, Conservative-Govt, Conservative-Business.]

Notes: Bars representing the conditions under which there occurred significant relationships between ideology and support are labeled. (a) The ideology-support correlation, but not the ideology regression coefficient, was significant in 2015.

FIGURE 4. Predicted UAV support by year, UAV purpose, UAV end-user, and ideology (computed at –1 and +1 standard deviation from the sample mean ideology).



TABLE 5. Effects of self-reported ideology on support under different experimentally varied conditions, as estimated by subsample correlations and regression coefficients computed under different conditions.

                        2014                                 2015
                        Corr      %Var     B       %Var      Corr      %Var     B       %Var
Government use for…
  Economic             –0.09      0.86%   –0.14    0.08%      0.12     1.44%    0.18    0.08%
  Security             –0.02      0.05%   –0.04    0.00%      0.29*    8.18%    0.48*   0.55%
  Environment          –0.27**    7.02%   –0.35*   0.48%     –0.32*   10.43%   –0.35    0.26%
Business use for…
  Economic             –0.09      0.77%   –0.14    0.08%      0.15     2.37%    0.25    0.16%
  Security              0.24*     5.71%    0.40*   0.64%     –0.00     0.00%   –0.00    0.00%
  Environment          –0.18      3.28%   –0.29    0.25%      0.11     1.25%    0.15    0.05%

+p < 0.10; *p < 0.05; **p < 0.01; ***p < 0.001. These follow-ups were justified by a significant four-way interaction (R²-change = 0.006, F-change(2, 853) = 3.09, p = 0.046) indicating that the strength of the ideology-support relationship varied by end user, purpose, and year of the survey. Corr = Pearson correlation. %Var for Corr indicates the total variance shared by self-reported ideology and support. %Var for B indicates the independent variance accounted for based on regression results.
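
The Table 5 follow-ups are, in outline, Pearson correlations between ideology and support computed within each year x end-user x purpose subsample, with %Var for Corr equal to the squared correlation. A sketch with the same hypothetical column names:

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("uav_survey.csv")  # hypothetical data file

    for (year, user, purpose), sub in df.groupby(["year", "end_user", "purpose"]):
        r, p = stats.pearsonr(sub["ideology"], sub["support"])
        # %Var for Corr = r^2, the variance shared by ideology and support.
        print(f"{year} {user} {purpose}: r = {r:+.2f} "
              f"({r**2:.2%} shared), p = {p:.3f}")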

Discussion and Recommendations
As stated by [30], "it is important to note that on one hand a new technology may bring about radical changes in society, while on the other hand the fate of that technology rests with the society in which it is being applied" [30, p. 783]. The present work investigating U.S. public responses to UAV technologies is important, as it confirms and extends prior findings, thereby advancing understanding of U.S. public resistance to and support of UAVs, while also suggesting recommendations for UAV developers, end users, and policy makers.

For example, terminology had no effect on public support, consistent with previous findings [9]. This may suggest UAV developers, policy makers, and users should not waste energy fighting for specific terminology. Instead they should focus on the factors more important to the public, such as how, why, and by whom the UAVs will be used. It appears from these findings that members of the public translate between "drone" and other similar, but much less familiar, terms without a measurable change in attitude.

Second, the finding that prevention framing enhances support suggests that members of the public are more moved by appeals related to reducing threats than enhancing benefits. This strategy could emphasize how UAVs can be used to prevent or decrease risks rather than increasing convenience. For example, if designing UAVs for prescribed fires, more public support may be garnered by emphasizing how use of the UAVs protects workers and the public from dangerous situations, rather than emphasizing efficiency or even safety gains. However, the weakness of the effect suggests framing may not be a very powerful strategy for impacting public support.

Instead, focusing on valued purposes may be more powerful. Consistent with prior findings, support for UAVs varied significantly by purpose. In our samples, the greatest support was found for UAVs used for environmental purposes and the least support was found for use for security purposes. Because we used a convenience sample, this specific pattern may or may not generalize to the U.S. as a whole. Nonetheless, the results indicate that various U.S. publics are likely paying much more attention to UAV purpose — rather than to factors like terminology or framing — when deciding how much they support UAVs.

The public is also attending to who is using the UAVs and how much they trust them. Trustworthiness variables accounted for the second largest amount of variance after UAV purposes. Of the trustworthiness variables, positive trustworthiness perception was the strongest predictor of support, followed by perceived competence, and then perceived distrustworthiness (see Table 4). This indicates a relatively positive public context for UAVs (i.e., support is driven by reasons to trust users rather than by reasons to distrust) and could allow broader support for trusted entities, such as fire rescue personnel, to use UAVs. It also suggests that efforts to impact factors increasing perceived end-user trustworthiness, such as training and licensing, may result in greater support for their UAV uses. Regulatory or punitive responses aimed at reducing distrust in end users may be less effective.

It is worth highlighting that drone autonomy was mostly unrelated to public support in our two samples. Furthermore, the direction of one observed marginal effect indicated a preference for autonomy. This finding contrasts with the evidence of public concern when it comes to autonomous cars [31] or autonomous military drones [32] and suggests the public is not uniformly against all autonomous technologies. While further study is needed, it is possible that the public will even favor autonomous (or at least partially autonomous) over manually controlled UAVs under certain conditions.

Finally, political polarization was found to account for only a small amount of the variance in our data, even though we chose scenarios in which political polarization might be especially likely to occur. This is encouraging given the negative impact political polarization has sometimes had on the trajectories of other technologies, as previously noted. The polarization that we did find provided evidence that the purposes and actors supported by individuals were affected by attitudes such as ideology, and changed over time. Conservatives supported security purposes more than liberals under some conditions, and liberals supported environmental uses more than conservatives. Our analyses further indicate, to some extent, this polarization could be attributed to differences in trust in different end users and in issue attitudes.

Taken together, our results suggest the importance of being responsive to the values of specific target audiences while developing UAVs and while communicating to different publics about UAV development and usage. Such practices may reduce public resistance and, potentially, reactive legislation. One example of local ideologies impacting public acceptance and legislation is in the conservative, business-friendly state of Texas. There, a law was adopted to prevent the use of UAVs by the public for observing business practices after a meat processing plant was caught dumping blood into a river by a private environmental activist [33]. As another example, in California, use of drones over private property has been limited in response to privacy concerns over paparazzi flights near celebrity homes [34]. In both cases, locally important concerns have driven statewide legislation which may have unintended consequences on other UAV uses. By taking a proactive approach and responsively designing and communicating about UAVs in locally acceptable ways, researchers and industry may be able to gain support prior to being threatened with legislative action.

It is also important to consider public perceptions of end-user trustworthiness. Understanding which publics trust which end users for different UAV uses is important, but may change over time and thus needs to be monitored and studied in greater detail. For example, there was a shift of politically polarized support for security purposes from the context of business users to government users from 2014 to 2015. End user also had an overall impact on public support. Our participants supported use by government over businesses, and this effect remained even when controlling for ideology and perceived trustworthiness of the end user (Table 4). This result, although deserving further investigation, is consistent with a recent survey of the Canadian public which found that more of their survey respondents supported rather than opposed use of UAVs for data collection by government groups, but the opposite was true for use of UAVs by private industry [18].

Limitations and Future Directions
While there are limitations associated with the use of a convenience sample such as ours, other social science research comparing results from MTurk samples with national samples, especially related to effects of political ideology, suggests similar results are found in both types of samples [35]. Nonetheless, future work should involve representative national samples to increase generalizability to the U.S. as a whole.

We have focused on predictors of public support and resistance to UAV technologies. Given the importance of predictors such as purpose and end-user trustworthiness, future work should focus on gaining a better understanding of the causes of those factors. Also, it was a bit surprising that issue attitude did not reduce the impacts of UAV purpose when entered into the regression equation, and user trustworthiness perceptions did not reduce the effect of end user (Table 4). This suggests that issue attitudes are not the reason (or at least not the sole reason) for purpose effects, and trustworthiness perceptions were not the reason for end-user effects. Future research is needed to understand what other reasons may account for the differences.

Although UAV autonomy did not appear to affect public support for UAVs under the studied conditions, we anticipate that other information about UAV characteristics may be important and worthy of further study. People may be more sensitive to whether video is being recorded or streamed to another device, what information is being collected and how it will be stored, and whether they trust that the end user will limit its use and distribution. Indeed, in one notable case illustrating such concerns, a father shot a drone that he felt was spying on his daughter. A judge dismissed the charges against him. Subsequently, a bill was proposed to criminalize drone harassment in Kentucky [36].

Future research should also investigate additional factors that may impact UAV support. Our models only accounted for 40% of the total variance. Other factors that may be important include trust in the technology itself. Studies such as [37] have indicated inappropriate comfort with UAVs at close distances (<1.5 m) for interaction, which could indicate over-trust in technologies under some conditions. Incorporating perceptions of specific risks and benefits of the technology, as well as specific characteristics of the platform, would likely increase variance accounted for and allow a broader understanding of public support across contexts.



Author Information
Lisa M. PytlikZillig is a research associate professor at the University of Nebraska Public Policy Center and University of Nebraska-Lincoln Social and Behavioral Sciences Research Consortium, Lincoln, NE, U.S.A. Email: lpytlikz@nebraska.edu.

Brittany A. Duncan is an assistant professor in the Computer Science and Engineering Department at the University of Nebraska-Lincoln, Lincoln, NE, U.S.A. Email: bduncan@cse.unl.edu.

Sebastian Elbaum is the Bessey Professor in the Computer Science and Engineering Department at the University of Nebraska-Lincoln, Lincoln, NE, U.S.A. Email: elbaum@cse.unl.edu.

Carrick Detweiler is an associate professor in the Computer Science and Engineering Department at the University of Nebraska-Lincoln, Lincoln, NE, U.S.A. Email: carrick@cse.unl.edu.

References
[1] D. Jenkins and B. Vasigh, "The Economic Impact of Unmanned Aircraft Systems Integration in the United States," Association for Unmanned Vehicle Systems International, Arlington, VA, 2013.
[2] J. Stilgoe, R. Owen, and P. Macnaghten, "Developing a framework for responsible innovation," Research Policy, vol. 42, pp. 1568-1580, 2013.
[3] D.M. Kahan, D. Braman, G.L. Cohen, J. Gastil, and P. Slovic, "Who fears the HPV vaccine, who doesn't, and why? An experimental study of the mechanisms of cultural cognition," Law and Human Behavior, vol. 34, pp. 501-516, 2010.
[4] D.M. Kahan, E. Peters, M. Wittlin, P. Slovic, L.L. Ouellette, D. Braman et al., "The polarizing impact of science literacy and numeracy on perceived climate change risks," Nature Climate Change, vol. 2, pp. 732-735, 2012.
[5] B. Wolfgang, "Drone industry gives journalists not-so-subtle hint: Don't use the word 'drones'," The Washington Times, 2013.
[6] G.A. Gladney and T.L. Rittenburg, "Euphemistic text affects attitudes, behavior," Newspaper Research J., vol. 26, pp. 28-41, 2005.
[7] A. Bandura, C. Barbaranelli, G.V. Caprara, and C. Pastorelli, "Mechanisms of moral disengagement in the exercise of moral agency," J. Personality and Social Psychology, vol. 71, pp. 364-374, 1996.
[8] S.M. Mohammad and P.D. Turney, "Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon," in Proc. NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, 2010, pp. 26-34.
[9] R.A. Clothier, D.A. Greer, D.G. Greer, and A.M. Mehta, "Risk perception and the public acceptance of drones," Risk Analysis, vol. 35, pp. 1167-1183, 2015.
[10] A. Tversky and D. Kahneman, "The framing of decisions and the psychology of choice," Science, vol. 211, pp. 453-458, 1981.
[11] E.T. Higgins, "Promotion and prevention: Regulatory focus as a motivational principle," Advances in Experimental Social Psychology, vol. 30, pp. 1-46, 1998.
[12] E.R. Igou and H. Bless, "On undesirable consequences of thinking: Framing effects as a function of substantive processing," J. Behavioral Decision Making, vol. 20, pp. 125-142, 2007.
[13] A.F. Simon, N.S. Fagley, and J.G. Halleran, "Decision framing: Moderating effects of individual differences and cognitive processing," J. Behavioral Decision Making, vol. 17, pp. 77-93, 2004.
[14] M.C. Horowitz, "Public opinion and the politics of the killer robots debate," Research & Politics, vol. 3, 2016.
[15] P. Lin, G. Bekey, and K. Abney, "Autonomous military robotics: Risk, ethics, and design," California Polytechnic State Univ., San Luis Obispo, CA, 2008.
[16] Morning Report, "National tracking poll #170904: Crosstabulation results," Sept. 7-11, 2017.
[17] T.J. Ryan, "Reconsidering moral issues in politics," J. Politics, vol. 76, pp. 380-397, 2014.
[18] S. Thompson and C. Bracken-Roche, "Understanding public opinion of UAVs in Canada: A 2014 analysis of survey data and its policy implications," J. Unmanned Vehicle Systems, vol. 3, pp. 156-175, 2015.
[19] S. Kreps, "Flying under the radar: A study of public attitudes towards unmanned aerial vehicles," Research & Politics, vol. 1, pp. 1-7, 2014.
[20] J.I. Walsh, "Precision weapons, civilian casualties, and support for the use of force," Political Psychology, vol. 36, no. 5, pp. 507-523, 2015.
[21] J. Eyerman, C. Letterman, W. Pitts, J. Holloway, K. Hinkle, D. Schanzer et al., "Unmanned aircraft and the human element: Public perceptions and first responder concerns: Research brief," Institute for Homeland Security Solutions, Research Triangle Park, NC, 2013.
[22] M. Sakiyama, D.T. Miethe, D.J. Lieberman, J.M.S. Heen, and O. Tuttle, "Big hover or big brother? Public attitudes about drone usage in domestic policing activities," Security J., 2016.
[23] T.D. Miethe, J.D. Lieberman, M. Sakiyama, and E.I. Troshynski, "Public attitudes about aerial drone activities: Results of a national survey," Center for Crime and Justice Policy, Las Vegas, NV, 2014.
[24] G. Gaskell, M.W. Bauer, J. Durant, and N.C. Allum, "Worlds apart? The reception of genetically modified foods in Europe and the US," Science, vol. 285, pp. 384-387, 1999.
[25] T.E. Cook and P. Gronke, "The skeptical American: Revisiting the meanings of trust in government and confidence in institutions," J. Politics, vol. 67, pp. 784-803, 2005.
[26] M.J. Hetherington and T.J. Rudolph, "Priming, performance, and the dynamics of political trust," J. Politics, vol. 70, pp. 498-512, 2008.
[27] A. Malka and C.J. Soto, "Rigidity of the economic right? Menu-independent and menu-dependent influences of psychological dispositions on political attitudes," Current Directions in Psychological Science, vol. 24, pp. 137-142, 2015.
[28] C. Atzmüller and P.M. Steiner, "Experimental vignette studies in survey research," Methodology: European J. Research Methods for the Behavioral and Social Sciences, vol. 6, pp. 128-138, 2010.
[29] L.M. PytlikZillig, J.A. Hamm, E. Shockley, M.N. Herian, T.M. Neal, C.D. Kimbrough et al., "The dimensionality of trust-relevant constructs in four institutional domains: Results from confirmatory factor analyses," J. Trust Research, 2016.
[30] N. Gupta, A.R. Fischer, and L.J. Frewer, "Socio-psychological determinants of public acceptance of technologies: A review," Public Understanding of Science, vol. 21, pp. 782-795, 2012.
[31] D. Howard and D. Dai, "Public perceptions of self-driving cars: The case of Berkeley, California," in Proc. Transportation Research Board 93rd Annual Meeting, 2013.
[32] C. Carpenter, "How do Americans feel about fully autonomous weapons?," DuckofMinerva.com, 2013.
[33] M. Schroyer, "Doing drone journalism in Texas? You could be fined $10,000 or more," Professional Society of Drone Journalists, Sept. 15, 2015.
[34] "Invasion of privacy," in An act to amend Section 1708.8 of the Civil Code, 2015-2016, ch. 521.
[35] S. Clifford, R.M. Jewell, and P.D. Waggoner, "Are samples drawn from Mechanical Turk valid for research on political ideology?," Research & Politics, vol. 2, 2015.
[36] "Judge dismisses charges for man who shot down drone," WDRB Louisville News, Oct. 26, 2015.
[37] U. Acharya, A. Bevins, and B. Duncan, "Investigation of human-robot comfort with a small unmanned aerial vehicle compared to a ground robot," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2017.



Last Word
Christine Perakslis

500 Years Later: Doors and Disputations

Five hundred years ago, Luther nailed ninety-five theses to the door of Castle Church in Wittenberg. His scholarly objection to certain practices of the church incited profound and persisting societal change. In the 16th century, church doors were a mode of publication where academics posted propositions in Latin, thus inviting debate. Eventually, postings were no longer written in Latin, but rather in the vernacular of people to better reach society [1], [2].

Similar to this custom of long ago, our authors nail rich scholarship to our portal, thus inviting healthy disputation. In this issue, we considered the value of a mesh of connective vehicles used to overcome the digital divide [3], yet also recognize the dangers of subscribing to technological fix as a social cure-all. We recognized the benefits of UAVs and military robotics, yet also wrestled with tensions between autonomous weapon systems and jus in bello, thereby questioning war practices when weighed upon the scales of just and fair conduct.

Our authors presented methodologies to reform established practices. We applaud our colleagues as robots are designed to better simulate the biological and cognitive processes of humans [4]. We are inspired as robots better infer the psychological disposition of a child with autism, resulting in richer interactions that improve lives [5].

Doors are not only physical portals, but also symbols of transitions. Our community experiences transition as the torch of editorship is passed. We are so grateful for the commendable work of our past "keeper of the threshold." She gave voice to diverse stakeholders. She steered us across the thresholds of a plethora of industries to mine out intended and unintended consequences of current and emerging technologies. She guided our foci as we trekked through time to learn from the past, and to conjecture the future. Our publication has also become more applicable and accessible to a general audience. These rich efforts resulted in meaningful debate and deeper understanding of the complex interactions between technology and society across the globe.

As the torch is passed, our new "keeper of the threshold" leads us onward. Under his gifted leadership, we will continue to post rich scholarly propositions, and derive great benefit from healthy disputation.

Author Information
Christine Perakslis is Associate Professor in the MBA Program, College of Management, Johnson & Wales University, Providence, RI. Email: christine.perakslis@jwu.edu.

References
[1] E. Metaxas, Martin Luther: The Man Who Rediscovered God and Changed the World. New York, NY: Penguin, 2017.
[2] D. Jütte, The Strait Gate: Thresholds and Power in Western History. New Haven, CT: Yale Univ. Press, 2015.
[3] "A mobile network to ease your commute: Portugal's roving hotspots," Jul. 12, 2017; https://www.wired.com/brandlab/2017/07/rovinghotspots/.
[4] A. Sciutti et al., "Measuring human-robot interaction through motor resonance," Int. J. Social Robotics, vol. 4, no. 3, pp. 223-234, 2012.
[5] P. Esteban et al., "How to build a supervised autonomous system for robot-enhanced therapy for children with autism spectrum disorder," J. Behavioral Robotics, Apr. 9, 2017; http://www.dream2020.eu/wpcontent/uploads/2017/05/Paladyn-Journal-of-Behavioral-Robotics-How-to-Build-a-SupervisedAutonomous-System-for-Robot-Enhanced-Therapy-for-Children-with-Autism-Spectrum-Disorder.pdf.

Digital Object Identifier 10.1109/MTS.2018.2795122
Date of publication: 2 March 2018



SSIT Fundraising Campaign
Influence the Direction
of Technology!
• SSIT brings together interdisciplinary communities to
explore the evolution of, and inform our understanding of
• Sustainable Development & Humanitarian Technology
• Ethics and Human Values
• Technology Access
• Societal Impact of Technological Innovation
• Protecting the Planet

• We work within the technology community, but draw our ideas from practitioners, researchers, inventors, policy makers, professionals and students across many fields
• We influence the direction of technology adoption and development, from awareness and concepts, to standards and professional practice

To seize opportunities* in 2018 we need $50,000.

With your help, these programs will make a difference!

*Funds will be invested in further strengthening and expanding:
• Chapter, Young Professional and Student Activities
• Conference, Distinguished Lecturer (DL) and Standards Programs
• Online Publishing and Education Programs
Contact us for details.

Ways to Contribute
• Donate to SSIT online at https://ieeefoundation.org/ieee_ssit
• You can make a gift to SSIT in honor or memory of someone who has touched
your life
Donations to SSIT are managed by the IEEE Foundation, the philanthropic arm of IEEE. IEEE and the IEEE Founda-
tion are U.S. 501(c)3 non-profit organizations. For more information contact: donate@ieee.org or +1 732 465 5871.

www.TechnologyandSociety.org
