Advance Praise for

The Artificial Intelligence Imperative

“Like it or not, artificial intelligence is here to stay. This book is timely and provides readers with
the knowledge needed to understand the concepts behind AI—and also with the critical tools
needed to use it successfully in their own companies.”
—Dr. Jürgen Meffert, McKinsey & Company, Inc., Germany

“The business world is going through an unprecedented level of change that is both fast and
fundamental. Change always brings confusion. To this brave new AI-driven world, Anastassia
Lauterbach and Andrea Bonime-Blanc bring much-needed clarity and structure with a laser-like
precision. It is this precision that strategists need to design the next generation of successful
processes and business models.”
—Clara Durodié, Founder and CEO, Cognitive Finance Group, London

“For anyone wishing to embark on their own deep learning, this book offers a sober and
comprehensive look at the frontier of AI, and a roadmap for navigating its impact. Recommended
reading whether you’re into supervised, or unsupervised, learning.”
—Scott Hartley, New York City-based venture capitalist and author of The Fuzzy and the
Techie: Why the Liberal Arts Will Rule the Digital World

“The printing press, the Internet, and now AI. This book is a remarkable organization and analysis
of this vast, dynamic, and absolutely critical issue, replete with keen observations and practical
advice spanning basic research, board strategy, and cyber security. Organizations that hope to thrive
even in the relatively near future will find this book to be an invaluable tool as they plan and
implement their strategy.”
—Larry Clinton, President and CEO,
Internet Security Alliance, Washington, DC

“This work is an extremely impressive compendium of the history of artificial intelligence, the
nuance of various methods, and the implications of evolutions in the technology to various
applications in modern business. The breadth of research and depth of understanding conveyed are
valuable to students, practitioners, and consumers alike.”
—Anthony J. Scriffignano, PhD, SVP/Chief Data Scientist,
Dun & Bradstreet, New Jersey

“Anastassia Lauterbach and Andrea Bonime-Blanc have masterfully captured the opportunities
presented by AI in the business world. More importantly they have provided a unique glimpse into
the future by identifying governance and compliance considerations that boards and executive
management will need to tackle in a responsible manner. This rare insight can and should help
shape the strategic direction of AI implementation and is a must-read for companies planning to
leverage AI.”

Estadísticos e-Books & Papers


—Bryan Weisbard, Head of Security Analysis, Investigations,
and Business Continuity, Twitter, Inc., San Francisco

“Everyone has the right to be scared, or even critical, of artificial intelligence. But this book allows
the reader to get these attitudes under control. Its authors take you on a journey through the AI
world. They show how AI is already a part of our lives and will be even more so in the future. The
book presents practical recommendations for how to prepare yourself and your business for this
technological revolution. It leaves the reader convinced that AI will change the way we do business
and will become an integral part of our future. Therefore the question is not if, but how we can
prepare ourselves and our businesses to be integrated players in this new world. To get started the
authors offer us a roadmap to become AI natives.”
—Annette Heuser, Executive Director, Beisheim Stiftung, Germany

“This book is simply excellent! I really enjoyed it. If you are a beginner, this book takes you
through the basics of machine learning and artificial intelligence and does a great job de-mystifying
much of the jargon that is thrown around all too casually. If you are not a novice, you will still get a
great deal out of this book as the authors clearly articulate the social, ethical, and governance issues
that we are all going to be faced with. I highly recommend this book—it is one of the must-read
books of our time.”
—Anand Chandrasekher, Senior Vice President, Qualcomm Inc., and
President, Qualcomm Datacenter Technologies, Inc., Germany

“AI has now reached a level of maturity to impact each of our businesses and personal lives.
Understanding AI is now a business imperative for every executive and director. Read this book to
separate all the hype from facts and learn practical ways to improve your business performance.
Board members need to understand the content of this book to adequately execute their governance
and fiduciary responsibilities.”
—Martin M. Coyne II, board director and senior advisor to CEOs, USA

“Artificial intelligence is poised to completely upend the way we do virtually everything. The
consequences—including unintended consequences—are massive and the stakes for getting it right
are high. Understanding this phenomenon is critical for every business leader. Lauterbach and
Bonime-Blanc have done the seemingly impossible—made a complex and wide-ranging subject
digestible, understandable, and actionable. I’ve already applied the learnings from this book to my
work and personal life. This is a must read for every board member and leaders across institutions
and industries. I can’t recommend it highly enough.”
—Erin Essenmacher, Chief Programming Officer, National
Association of Corporate Directors, Washington, DC

“AI impacts are expanding at an increasing pace into all facets of business and society. The authors
have put together rigorous research, insightful points of view, and a prescriptive structure for
leaders to create their own AI path. This ambitious work gives the reader a broad overview of AI
which would be a time-consuming and costly endeavor to do on one’s own.”
—Tom Easthope, Director, ERM, Microsoft, Seattle

“This has become my favorite go-to book for artificial intelligence that I have started to share with
my business network. Being an area of hyper growth and fluidity, it is extremely difficult to provide
clarity, focus, and organization to interested readers. Anastassia Lauterbach and Andrea Bonime-
Blanc have just done that by making this book not only relevant to first-time AI readers but also a
great source of reference for everyday needs.”
—Andreas Roell, Managing Partner, Analytics Ventures, Germany

“If you want to learn all about AI—the conceptual, the practical, the applications, all in the context
of governance—then you must read this book. Written with great clarity for everyone from board
members and CEOs to start-up entrepreneurs and students.”
—Hans W. Decker, CEO and Vice Chair, Siemens USA (retired), USA

“Lauterbach and Bonime-Blanc’s book The Artificial Intelligence Imperative is a must read for
management and boards of all companies. For corporate directors who may not be savvy in this
space, it’s a good tutorial but more importantly, I like that it outlines the questions directors should
be asking and provides a successful governance approach to AI. You want to be an effective board?
. . . Then understand how artificial intelligence can keep companies competitive.”
—T.K. Kerstetter
Host of Inside America’s Boardrooms, CEO of Boardroom
Resources, and co-founder/editor-at-large of Corporate Board Member

The Artificial Intelligence
Imperative

The Artificial Intelligence
Imperative

A Practical Roadmap for Business

Anastassia Lauterbach and Andrea Bonime-Blanc
Foreword by Ian Bremmer

Copyright © 2018 by Anastassia Lauterbach and Andrea Bonime-Blanc

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or
transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or
otherwise, except for the inclusion of brief quotations in a review, without prior permission in
writing from the publisher.

Library of Congress Cataloging-in-Publication Data

Names: Lauterbach, Anastassia, author. | Bonime-Blanc, Andrea, author.


Title: The artificial intelligence imperative : a practical roadmap for business / Anastassia
Lauterbach and Andrea Bonime-Blanc.
Description: Santa Barbara : Praeger, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2017054628 (print) | LCCN 2018003752 (ebook) | ISBN 9781440859953
(ebook) | ISBN 9781440859946 (alk. paper)
Subjects: LCSH: Business—Data processing. | Artificial intelligence.
Classification: LCC HF5548.2 (ebook) | LCC HF5548.2 .L294 2018 (print) | DDC 650.0285/63—dc23
LC record available at https://lccn.loc.gov/2017054628

ISBN: 978-1-4408-5994-6 (print)


978-1-4408-5995-3 (ebook)

22 21 20 19 18 1 2 3 4 5

This book is also available as an eBook.

Praeger
An Imprint of ABC-CLIO, LLC

ABC-CLIO, LLC
130 Cremona Drive, P.O. Box 1911
Santa Barbara, California 93116-1911
www.abc-clio.com

This book is printed on acid-free paper


Manufactured in the United States of America

We dedicate this book to our families—our parents, husbands, and
children—from the past and the present into the future. We call for
wisdom and knowledge, education and leadership, inclusion and
hope as we enter this brave new world of AI.

“People can’t understand new ideas if their livelihood depends on the old ones.”
—Upton Sinclair

Contents

Foreword
by Ian Bremmer

Acknowledgments

Introduction

PART ONE THE PAST, PRESENT, AND FUTURE OF AI

Chapter 1 A Brief History of AI
Chapter 2 AI Social-Science Concepts
Chapter 3 Key Cognitive and Machine Learning Approaches
Chapter 4 AI Technologies: From Computer Vision to Drones
Chapter 5 AI and Blockchain, IoT, Cybersecurity, and Quantum Computing

PART TWO ENABLERS AND PRECONDITIONS TO ACHIEVE AI

Chapter 6 Data
Chapter 7 Algorithms
Chapter 8 Hardware
Chapter 9 Openness

PART THREE THE RACE FOR AI DOMINANCE—FROM FULL-STACK COMPANIES TO “AI-AS-A-SERVICE”

Chapter 10 AI Crossplay—Academia and the Private Sector
Chapter 11 The Geopolitics of AI—China vs. USA
Chapter 12 Full-Stack Companies—The Battle of the Giants
Chapter 13 Product- and Domain-Focused ML Companies
Chapter 14 Traditional and New Enterprise Companies Focused on ML
Chapter 15 Introducing AI into a Traditional Business

PART FOUR AI IN DIFFERENT SECTORS—FROM CARS TO MILITARY ROBOTS

Chapter 16 AI and the Automotive Industry
Chapter 17 AI in Health Care
Chapter 18 AI and the Financial Sector
Chapter 19 AI in Natural Resources and Utilities
Chapter 20 AI in Other Business Sectors
Chapter 21 AI in Government and the Military

PART FIVE THE SOCIO-POLITICS OF AI—CRITICAL ISSUES

Chapter 22 Data Control, AI, and Societies
Chapter 23 The Future of Employment and the Pursuit of Life-Long Learning
Chapter 24 Ethical Design and Social Governance

PART SIX THE GOVERNANCE OF AI—A PRACTICAL ROADMAP FOR BUSINESS

Chapter 25 Traditional Boards’ Readiness to Address AI—Transforming Fear into Value
Chapter 26 Discussing AI in the Boardroom

Notes

Index

About the Authors

Estadísticos e-Books & Papers


Foreword

In 2018, artificial intelligence (AI) becomes the topic of holiday gatherings
and water cooler banter. Just like every company has become an Internet
company in some form or another, all companies will eventually become
AI companies. The use of AI across sectors to automate or augment a
range of jobs is one of the most pressing issues facing business leaders
today.
But the potential impact of AI on nation states, companies, and
individuals is only now beginning to be studied in an organized manner.
The risks AI brings, among them job displacement, algorithmic
accountability, cybersecurity, and the potential abuse of AI for offensive
cyber operations, need careful consideration and effective responses by
policy makers. The same is true of human-level AI or artificial general
intelligence, even if the prospective adoption of those advanced
technologies remains a ways off.
There is inarguably a tremendous upside for businesses and
governments in using AI. But as deployment of AI and automation
accelerates, the downsides become more apparent. Jobs are the immediate
worry—whole industry sectors could face substantial disruption soon, and
with unknown consequences. In response to these challenges, Anastassia
Lauterbach and Andrea Bonime-Blanc have taken a holistic look at the AI
sector, minus the hype and dread that often surround the issue.

Geopolitical Implications
As a political scientist, my primary interest is in how AI affects
geopolitics—how it shifts the balance of power between different
countries, cultures, sectors, and interest groups. Over the past two years, as
the private sector has pushed ahead with AI innovations, governments
around the world have begun to pay greater attention to the field,
particularly in the U.S., the European Union, and China. Lauterbach and
Bonime-Blanc rightly focus attention on the U.S. and China, which
currently lead the world in AI, and are likely to remain at the forefront of
capabilities and influence for years to come. They then dive further,
exploring AI’s implications for . . .

Global Considerations and Security


Global cooperation and policy implications in international relations,
cybersecurity, and defense are key aspects of AI that need greater
understanding. The interconnected nature of these issues, which affect
disaster preparedness, climate change, wildlife trafficking, the digital
divide, jobs, and smart cities, is complex and far-ranging. Understanding
international leadership and economic competitiveness is a priority.

Automation and the Economy


Over the short term, AI’s most prominent economic effects will
involve the automation of tasks that could not be automated before. Like
prior waves of automation, AI will spur productivity growth. However,
certain jobs and sectors will suffer, particularly low- and medium-skilled
jobs—again, like in prior waves of automation. Without effective policies
to counter these trends, income inequality will worsen.

Fairness, Safety, and Governance


AI is rife with the potential for unintended consequences. Replacing
human judgment and intervention has the potential to exacerbate problems
of bias in hiring processes and any activity involving the use of big data,
such as tools used by judges in criminal sentencing, bail hearings, or
parole decisions. Safety and control will grow more important as AI makes
its way from labs and into the real world, where the unpredictable will
certainly happen. Assessing where those risks exist and devising
processes/systems for limiting them will be key moving forward.
Looking ahead, this book provides a foundation for thinking broadly
about not just AI but also other enabling technologies that will
transform business, from blockchain and the Internet of Things in the near
term to quantum computing over the long run. AI-based software agents
coupled with suites of sensors, connectivity, and cloud-based services are
already being used to develop smart systems, including autonomous or
connected vehicle networks and smart road infrastructure. These
applications already require some level of regulatory oversight,
particularly for safety and data privacy; but the smart applications also
raise a host of other issues that will require regulatory consideration,
including ethical and transparency issues around AI algorithm decision-
making and cybersecurity.
As an important contribution to the growing literature on the upsides
and downsides of the AI revolution, this book will be a valuable guide for
any corporate leader seeking a better understanding of how AI will affect
their business, and just as importantly, the broader world in which they
operate.

Ian Bremmer
President and Founder, Eurasia Group
Editor-at-Large, Time Magazine
Author of Us vs. Them: The Failure of Globalism

Acknowledgments

From Anastassia
Abraham Lincoln once famously said something like this: “You cannot
escape the responsibility of tomorrow by evading it today.” The
companies, researchers, and investors involved in the development of
smart machines should never forget about the billions of people new
technologies will impact in the very near future. Millions of educated
readers should start asking questions about AI in their children’s schools,
as well as in their workplaces, universities, and meetings with elected
representatives.
AI apocalypse scenarios like Bostrom’s AI being a “paper-clip
maximizer” or Musk’s AI implementing “strawberry fields forever”
represent colorful warnings. In November, I was listening to Stephen
Hawking address the 2017 Web Summit in Portugal. While
acknowledging that AI has tremendous positive potential, Hawking said
that “we cannot know if we will be infinitely helped by AI, or ignored by it
and side-lined, or conceivably destroyed by it.” He advocated developing
best practices and rules to regulate AI and robotics. These words echoed
something I had heard a month earlier from my good friend Trent
McConaghy: “Lack of governance does not mean ‘no governance.’ It
means ‘bad governance.’” If this book offers practical advice on how to
think about AI in your business, policy development, or research; how to
bridge technology with corporate and societal governance; and how to
think about ethics and consequences, I will consider ten months of writing
a very good investment. Rather than throwing your hands up in despair,
or thinking that someone with greater authority or deeper pockets might
take care of all AI-related design and implementation problems, we hope
that our educated readers will dedicate some time to addressing these
important and impactful technology issues. Beneficial AI will always be a
multifactor and multidisciplinary field, ranging from technical to ethical,
regulatory, administrative, educational, and social.

In the coming months, I expect several areas to evolve further:
• AI will be the technology to invest in for major traditional companies and Internet players, and we will witness more AI-related M&As.
• The machine learning (ML) technology stack will continue to diverge from traditional software.
• AI innovation will occur beyond Silicon Valley.
• Blockchain technologies will more prominently address AI and will enable the creation of multiple markets for data.
• Traditional companies will increasingly consider ML capabilities to apply to cybersecurity.
• Ethics will become a prominent factor in consumer choice and investment strategies.
• More national governments will come out with AI strategies and plans for employment in AI, as well as regarding AI’s impact on warfare and cyberdefense, and independent research organizations.
In working on this book, I am very grateful to Andrea Bonime-Blanc
for her immense contribution to positioning AI on the agenda of
international governance and ethics professionals. I could not wish for a
more dedicated co-author.
I was lucky to receive the time and attention of 87 executives of
companies shaping today’s economy, including Accenture, Alibaba,
Allianz, Alphabet, AMEX, AXA, Barclays, BCG, BMW, CB Insights,
Cisco, Daimler, Deutsche Bank, Deutsche Post, Deutsche Telekom,
Digital Jersey, Dolby, Ergo, Facebook, Ford, General Motors, ING, Intel,
J&J, Korn & Ferry, KPMG, Huawei, Lufthansa, McKinsey, Nokia,
Nvidia, Oracle, PWC, Qualcomm, Russell Reynolds, Samsung, Salesforce,
Sanofi, SAP, Siemens, Sky, Softbank, Tencent, Telefonica, Twitter, Uber,
and Verizon. I was very privileged to talk to top AI innovators at Analytics
Ventures, AVA, Cogitanda Data Project, CognitiveScale, DFLabs,
Evolution Partners, Flatiron, Fortscale, Graphcore, Intuition Machine,
Numerai, Onfido, Orbital Insight, Sentient, Text IQ, Pypestream, and
Vicarious. I count myself even luckier in that some of these conversations
resulted in new friendships and collaborations.
I would like to particularly thank Anthony Scriffignano and his Dun &
Bradstreet team for sharing insights, and challenging my views on
technology and its implementation. I will always remember how Amy of
x.ai guided me to the wrong location, in spite of the flawless scheduling of
a meeting; how agitated a celebrated oncologist became when asked about
Watson; and how my corporate friends and clients rolled their eyes while
remembering countless pitches of “me-too” startups, which were almost
literally “washing” a very simple process with an AI “detergent.”
As a technology strategist, investor, and non-executive director, I
would like to see deep learning’s transition from research to real-world
applications. I am very concerned about the shortage of diverse talent in
AI research and practice, and will dedicate some time to get more young
students interested in AI and robotics. No place and no audience are too
small for the task.
Last but not least, I owe an immense debt of gratitude to Ralph and
Catherine Schuppenhauer, Eckart Reinke, Chris and Mardia Niehaus,
Karen Eden, Thilo Kush, Jürgen Meffert, Thomas Schönauer, Daniel
Obodovsky, Eva Wimmers, Tatiana Tennikova, and Larissa and Victor
Schembakov. Without your love, generosity, friendship, and support I
wouldn’t be able to dare and create.

Bonn, Germany, December 31, 2017

From Andrea
Working on this book has been a journey of discovery and learning,
and I am very grateful to my co-author (and lead author of this book),
Anastassia, for her vast subject matter expertise, incredible work ethic, and
passion for this subject.
This book is the culmination of a story that began for me in 1999 when
I was general counsel of an international electric power company, and
then-CEO Michael J. Thomson asked me to figure out the impact of Y2K
on our global electric operations. He pushed me to think practically about
the interconnection between governance and technology and the
importance of a multidisciplinary approach to new technology risks and
opportunities.
Ten years later, I was the head of risk, compliance, audit, and corporate
responsibility at a midsize global technology company when another tech
challenge fell into my lap. Though I had spent many years developing
governance, risk, and ethics solutions to InfoSec challenges, I had never
supervised that function. And then I was suddenly asked to directly
manage InfoSec. This proved an immensely educational, frequently painful,
and mind-blowing experience that pushed me once again to try to think
ahead of the curve.

This forward outlook and breaking down of barriers and silos is now a
core part of the work I do as a strategic governance, risk, cyber, and ethics
advisor and in my written work, including several cyber-risk governance
research reports for The Conference Board. When I read about Bill Gates,
Stephen Hawking, and Elon Musk’s early alarms about AI’s possible
negative and even existential threats to society, business, politics, and
humanity, the topic struck a nerve with me.
I have many folks to thank for their continuing advice, support, and
wisdom in helping me to develop my approach to governance, risk, and
ethics, especially as regards emerging challenges like AI. First and
foremost, a great big special shout-out to my friend, colleague, and partner
in crime, and an all-around wise counselor and fun person, Jacqueline
(Jacquie) E. Brevard, and deep thanks to Hans W. Decker, Erin M.
Essenmacher, Judy Warner, Larry Clinton, Martin M. Coyne, Reatha Clark
King, Philipp Aeby, Alexandra Mihailescu Cichon, Simon M. Franco, Jay
Rosenzweig, Ken Frankel, Marcel Bucsescu, Karen Poniachik, Michael
Marquardt, Peter Dehnen, Annette Heuser, Jeff Pratt, Tom Easthope, Jose
Antonio Herce, Angel Alloza, Saida Garcia, Pat Harned, Moira McGinty
Klos, Matt Pachman, Emmanuel Lulin, S. Prakash Sethi, Dante Disparte,
Matthias Kleinhempel, Bryan Weisbard, Cindy Fornelli, Erin Dwyer,
Rodrigo Cuellar, Pablo Cousteau, Michael Malone, Marjorie Tsang, Adam
Epstein, Maureen Conners, Barbara Adey, Christopher Clark, Jonathan
Chou, Arne Schoenboehm, Roland Hartmann, Azish Filabi, Scott Hartley,
Gordon Goldstein, Adam Segal, Jeff Hoffman, Mark Brzezinski, Kerry
Sulkowicz, Rhonda Brauer, Stephane Martin, Hilary Claggett, Robert
Clougherty, and Cara Smyth. And a very special thanks to Ian Bremmer
for writing the foreword to our book!
Finally, to the amazing men in my life, Roger and Roger, and the
memory of my parents and in-laws—all giants in my life for so many
different reasons, I owe you my eternal love and gratitude.
To the future of AI and to our collective AI future—may we build it
together for the common good!

New York City, January 1, 2018

Introduction
The Dawn of Artificial Intelligence

In October 2016, the world expected Hillary Clinton to win the U.S.
presidential election in November, but MogAI, an artificial intelligence
(AI) system developed by Sanjiv Rai, founder of the Indian healthcare
startup Genic.ai, suggested otherwise. MogAI had predicted the results of
the last three presidential elections, as well as those of the Democratic and
Republican primaries. Its technology utilized information from public
platforms like Facebook, Twitter, and Google to model voting behavior,
and it favored Trump based on data on online user engagement with his
candidacy.1
In December 2016, Alexa—Amazon’s smart speaker, broadly launched in June
2015—was being used in 4 percent of U.S. households. After a very
short engagement period with consumers, Alexa had received 250,000
marriage proposals.2 Alexa was even wanted by police as a “witness” to a
murder in Arkansas. The victim was found floating in the hot tub of the
prime suspect, James Bates. Amazon claimed that records from the device
should receive special protection under the First Amendment since what
owners say and what Alexa answers are forms of free expression.3
When the automobile was first introduced, its market-growth
predictions were based on the number of professional chauffeurs. Many
believed that cars would never become a consumer product. The noise of
their engines was frightening to farm animals and carriage horses. Are we
trapped in a similar bias when we think about AI? We live in a mixed
world of science, technology, marketing, traditional pre-Internet behaviors,
and ambitions inspired by advances in a number of disciplines, from
genetics and semiconductors to physics and neuroscience. This period has
created confusion, conflicts, differing objectives, and a desire for safety
and predictability on the one hand, and an unrelenting drive to invest in the
big unknown on the other. Only a few business leaders outside of the
Internet industry, however, have a clear view of how to benefit from these
new technologies, recognize their long-term value, and understand the
bottlenecks of immediately leveraging them into current products and
services.
Business should be asking: Are we going to sell to enterprises and
consumers, or do we need to pitch to bots? Can we apply machine learning
(ML) in a legacy IT environment? Will markets punish our stock if we
don’t have a comprehensive vision on data and AI for our industry? And
the answer to each of these questions is “yes.” So what is the timeline?
Formulating realistic expectations, linking them to appropriate decisions,
having scenarios to address shortcomings, and having the courage to
navigate legal and ethical white spaces have never been either more
challenging or more exciting. And let’s not forget that in the case of AI,
decision-makers are often influenced by the public’s attitude toward
unknown or largely untested and “buggy” applications.
Microsoft’s Tay chatbot represents a good example of the perils of
such new products. Tay was a Twitter-based system designed to learn how
to interact using examples of what people said to it. In a very public, very
dramatic failure, Tay was taught to be a “racist, misogynistic jerk” in less
than 24 hours by a focused group of users who were just looking to have
some fun. Tay learned offensive phrases with no understanding of their
meaning.
Tay was not a technological failure—in fact, it was an incredibly
effective call-and-response engine. Tay’s failure was rather the result of
the community that raised it—good technology compromised by a flawed
crowd, stimulated by the anonymity of the Internet, free of empathy and
predefined ethical norms. If we think of technology as a mirror of human
progress and failure, what does the Tay incident teach us? As we look at
different AI applications, our own bias and misconceptions influence how
we understand the capabilities of various systems. We look at Watson, and
because it provides answers in the form of text, we think it must
understand what the text means. Because deep learning models cluster
images of cats together, we assume that they must also understand what cats
are. Our children believe that Siri must actually understand what they are
saying because it is difficult to imagine word recognition without ideas
being attached to words. We appreciate Alexa’s precision in finding our
favorite music, but wonder why she cannot follow our instructions if we
switch from English to German. We know that AI-driven face recognition
is here and can only wonder about the ethical and privacy implications.4
As we look at intelligent systems, we need to understand that we are
looking at a piece of a puzzle that is not yet complete. Although Cortana,
Siri, and the robot Pepper are each genuinely brilliant technologies, none
of them are even close to representing the complete story of intelligence.
Many AI services and products will not be possible in the future if we
don’t find ways of making them more transparent, understandable, and
accountable. To do so, ethical testing during product design and
development will be necessary, including filtering for bias, identifying
previously unknown testing categories, and undertaking predictability
engineering.5 These additions will be necessary as the big technology
companies become increasingly subject to regulatory oversight across the
globe.

AI: Hype or Reality?


Historically, AI development has been the purview of universities.
Inspired by the Human Genome Project, a new field of computer science
seeks to map the brains of humans, cats, and birds. The European Union is
investing €1 billion to build an exact model of a human brain. America’s
BRAIN initiative, with US$100 million in funding in 2014 alone, has similar
goals.
Simultaneously, AI has become the next big tech trend of Silicon
Valley. In 2013, Facebook opened an AI lab and recruited Yann LeCun, a
leading AI mind, to be its director. Facebook founder Mark Zuckerberg
has also invested in Vicarious FPC, a promising AI startup, along with
other leaders in the technology field, including Elon Musk. In 2014,
Google acquired DeepMind, a London-based AI company.
This trend reflects a growing recognition of the impact of so-called
“narrow AI.” Companies are beginning to understand that AI is not just
about big, comprehensive solutions like robots that can think, feel, and
perceive. A very significant business and investment opportunity also lies
in narrower AI solutions like optimizing search results or scheduling
meetings without human help.

There has also been a major tech movement toward cloud computing
and big data. Cloud computing creates a more efficient use of computing
resources and centralizes the storage of a tremendous amount of
information and data. Smartphones have fueled the rise of cloud
computing, acting as a delivery mechanism for services in the cloud. AI
can and will play a pivotal role in organizing, synthesizing, and sorting
through the mass of information available through the cloud. Several
major companies and leading minds have recognized this:
• Diane Greene, Google’s senior vice president for Cloud Computing, recently said, “just
teaching companies to use AI will be a big business.”
• Microsoft currently operates more than 20 “cognitive services” in areas like image recognition
and language formation.
• Amazon Web Services has made ML algorithms publicly available.
• According to the chairman of Alphabet Inc. (Google’s parent company), AI will be the
“next layer of programming” in the cloud.
• Andrew Ng, founder of the Google Brain project and former chief scientist at Baidu, declared AI
to be “the new electricity.”
• At Davos 2016, AI was heralded as “the fourth industrial revolution.”

Major technology companies are currently engaged in an AI “platform
war.” A platform war is a race to design the next dominant platform that
other companies work to improve and consumers cannot live without. The
winner will be able to control the direction of the industry for many years
to come.
In 2016, venture capitalists invested US$5 billion in 658 companies, a
61 percent increase over the prior year.6 Big technology players acquired
40 AI startups, a trend that is expected to continue in the next few years.
“We’re definitely AI-first,” says Don Harrison, head of corporate
development at Google. “We pay attention to [valuation] but don’t
necessarily worry about it.”7 It is not surprising that they don’t worry
about valuation, given that AI might disrupt the whole attention economy
that Google is built on. Who will bother clicking through several links
when a question is answered immediately, even one that was not clearly
formulated?
“[AI companies] are not businesses,” echoes John Somorjai, executive
vice president of corporate development at Salesforce, which has acquired
a handful of AI companies. “These [deals] are about technology and
talent.”8
These statements remind us of the development of mobile technologies
20 years ago. Google acquired Android in 2005 for around US$50 million.
Back then, many questioned their monetization model. But things have
changed. Today we expect every company to have a mobile-first (rather
than TV- or desktop-first) strategy. Android is the most popular mobile
operating system in the world and a wonderful Trojan horse allowing
Google to monetize advertising across all screens.
Will narrow AI be as unspectacularly mundane 20 years from now? Is
AI a revolution like mobile technology was? If traditional companies and
investors believe the latter, they should spend more time proactively
evaluating and investing. In five years, AI could exist as a layer of
capability atop every business process, from customer service and
marketing to product development and sales, across every industry sector.
At that point it will become clear that the lucky early movers were not just
fortunate. They were more strategic.

AI as the Most Important Issue of Our Time


The quest for AI is not limited to forays into science and business.
Prominent thinkers such as Yuval Noah Harari and Sam Harris have
described the rise of a new “techno religion.” As with every faith, there is
a Messiah, in this case Ray Kurzweil; a central text, The Singularity Is
Near: When Humans Transcend Biology (2005); an institution, The
Singularity University; and a crowd of true believers to spread the word
throughout the world. In 2017, Anthony Levandowski, famous because of
Waymo’s lawsuit over self-driving technology, started a new AI religion
called “Way of the Future” to worship a godhead based on AI and to fund
AI research. Only time will determine whether “religion” or “faith” is an
accurate way to conceptualize the new movement. But whether it is or not,
Kurzweil’s views must be taken seriously. He invented a number of highly
influential products, including the first flatbed scanner, the first machine
to convert printed text to speech, a pioneering music synthesizer, and the
first commercially marketed large-vocabulary speech recognition system. Kurzweil
predicted that technology like Deep Blue would beat a chess grandmaster
by 1998. He was also convinced in the late 1980s that by the early 2000s,
the Internet would become a global phenomenon—a view the scientific
and industrial establishment disagreed with at the time. Kurzweil’s
influence has been so extensive that in 2012, Google’s co-founder, Larry
Page, asked him to become Google’s director of engineering. His book
How to Create a Mind (2012) contains a theory of the computing of the
neocortex, the outer part of the human brain and one of the major centers
of intelligence.
Kurzweil thinks computers will reach artificial general intelligence
(AGI) by 2029, and that by 2045 we will not only have AGI, but also a
new world he calls the “singularity.” And, in his opinion, AI will have a
mostly positive impact on human life, even prolonging it.
Many technologists and thinkers disagree with Kurzweil’s optimism.
Nick Bostrom, philosopher and director of the Oxford Future of Humanity
Institute, calls for greater caution in thinking about potential outcomes of
AGI, even if it “could give us indefinite lifespan, either by stopping and
reversing the aging process through the use of nano medicine, or by
offering us the option to upload ourselves.”9 Stephen Hawking says the
development of AGI “could spell the end of the human race.”10 Bill Gates
doesn’t “understand why some people are not concerned.”11 And Elon
Musk—who was on the board of DeepMind before Google acquired the
company—fears that without work on AI safety, we’re “summoning the
demon.” “When it comes to developing super smart AI,” he has said,
“we’re creating something that will probably change everything, but in
totally uncharted territory, and we have no idea what will happen when we
get there.”12 Scientist Danny Hillis compares the situation to “when single-
celled organisms were turning into multicelled organisms. We are amoebas
and we can’t figure out what the hell this thing is that we’re creating.”13
Along similar lines, Bostrom warns, “Before the prospect of an
intelligence explosion, we humans are like small children playing with a
bomb. Such is the mismatch between the power of our plaything and the
immaturity of our conduct.”14 He goes on to say that it’s very likely that
ASI—“artificial super intelligence,” or AI that achieves a level of
intelligence smarter than all of humanity combined—will be something
“entirely different than any form of intelligence” we know today. “On our
little island of human psychology,” Bostrom continues, “we divide
everything into moral or immoral. But both of those only exist within the
small range of human behavioral possibility. Outside our island of moral
and immoral is a vast sea of amoral, and anything that’s not human,
especially something non-biological, would be amoral, by default.”15
Bostrom adds that “[t]o understand ASI, we have to wrap our heads
around the concept of something both smart and totally alien. . . .
Anthropomorphizing AI (projecting human values on a non-human entity)
will only become more tempting as AI systems get smarter and better at
seeming human.” He notes, “Humans feel high-level emotions like
empathy because we have evolved to feel them—i.e., we’ve been
programmed to feel them by evolution—but empathy is not inherently a
characteristic of ‘anything with high intelligence.’”16
Even in the field of narrow AI, there are serious discussions around
accountability in AI, focusing “not on rogue machines going amok, but on
understanding how algorithms perpetuate and amplify existing social
biases and doing something to change that.”17 This call to identify biases
underscores the need for interdisciplinary research done by diverse teams
with different professional backgrounds, genders and ethnicities, from
different schools of thought and with a broad range of emotions including
self-awareness, objectivity, and skepticism. The practical benefits of
interdisciplinarity and diversity are already evident. David Heckerman, a
Bayesian researcher and medical doctor, was among the first to filter
spam, treating this cybersecurity phenomenon as a disease
whose symptoms are the words in the e-mail, e.g., combining the words
“free” and “Viagra,” or “easy” and “moneymaking.” Similarly, it took an
African-American researcher to point out that the entire facial recognition
database at the MIT Media Lab had no people of color in it. It is not a
principal aim of this book to make predictions about the short- or
long-term implications of AI. Our core
message is that the design and implementation of beneficial new
technologies has always been a marathon, not a sprint. Executives at large
enterprises are only just beginning to realize the financial potential of AI.
Most companies have controllers for every product and geography, but
they do not have even one data scientist. Accountants can neither write
code nor manage an ML engineering group. AI is beginning to descend on
companies—and their executives and boards—as the newest piece of the
digital agenda puzzle, and yet technology literacy is still not a “must have”
requirement of either best practice or regulators in most jurisdictions.
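Heckerman’s “words as symptoms” framing above maps naturally onto a naive Bayes classifier, in which each word in a message shifts the probability that the message is spam. The following Python sketch is purely illustrative—the toy messages, counts, and function names are our own invention, not Heckerman’s actual filter:

```python
from math import exp, log

# Toy training data: each message is a bag of words.
# These examples are invented for illustration only.
spam_msgs = [["free", "viagra", "now"],
             ["easy", "moneymaking", "offer"],
             ["free", "easy", "money"]]
ham_msgs = [["meeting", "agenda", "attached"],
            ["free", "time", "tomorrow"],
            ["project", "update"]]

VOCAB = {w for m in spam_msgs + ham_msgs for w in m}

def word_prob(word, messages):
    """P(word | class), with add-one (Laplace) smoothing so that
    unseen words never zero out the whole product."""
    count = sum(m.count(word) for m in messages)
    total = sum(len(m) for m in messages)
    return (count + 1) / (total + len(VOCAB))

def p_spam(message):
    """Posterior probability of spam, assuming equal priors."""
    # Each word is a "symptom": its evidence is accumulated in log space.
    log_spam = sum(log(word_prob(w, spam_msgs)) for w in message)
    log_ham = sum(log(word_prob(w, ham_msgs)) for w in message)
    return 1.0 / (1.0 + exp(log_ham - log_spam))

print(p_spam(["free", "easy", "moneymaking"]))  # high: well above 0.5
print(p_spam(["meeting", "tomorrow"]))          # low: well below 0.5
```

The “naive” assumption—that words occur independently given the class—is clearly false for natural language, yet filters of this kind proved remarkably effective in practice.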
The United Nations has also recognized the importance of monitoring
AI development. It opened the Centre for Artificial Intelligence and
Robotics in The Hague to focus on key risks such as mass unemployment and the
deployment of autonomous robotics by criminal organizations or rogue
states.18
We are convinced that the future is more important than the past,
because you can still do something about it today. AI is an integral part of
our future, and there is no greater leadership challenge today than to define
how best to handle it, influence it, and transition smoothly toward it.
Through this book, we are calling for collective leadership in designing
safe and beneficial AI, which will require the involvement of traditional
companies, municipalities, communities, and scientists all working
together to create governance frameworks and forward-looking strategies
that embrace AI in life and in business. We are convinced that positive
progress in this age of the machine will not be possible without the
business community consciously prioritizing the solid funding of
fundamental AI research, the rethinking of public education, and the
restructuring of employee training and development in its strategy. We
totally embrace Max Tegmark’s view that AI is the most important
discussion of our time. Starting right now, at all levels of society and
business, we must position ourselves to dream and to implement a future
full of prospects and progress.

Book Structure
This book will provide the AI beginner as well as knowledgeable
readers with a broad cross-section of the most important developments,
issues, and information needed to understand AI in both a business and
a socio-political context. We focus predominantly on ML, which
represents just a small part of the AI field, and aim to provide boards,
executives, and students with a series of tools and perspectives to address
AI in their own work and in business in general. To achieve this, we have
organized the book into six parts:
1. The Past, Present, and Future of AI
2. Enablers and Preconditions to Achieve AI
3. The Race for AI Dominance—From Full-Stack Companies to “AI-as-a-Service”
4. AI in Different Sectors—From Cars to Military Robots
5. The Socio-Politics of AI—Critical Issues
6. The Governance of AI—A Practical Roadmap for Business

Each part can be read as a standalone. We purposefully include
information on the history of AI as a discipline because the need to invest
in fundamental research continues to be critically important—without it,
we will not be able to design safe and beneficial AI. In the end, it takes
time to develop scientific theory that can provide the basis for applications.
Google itself was based on math from the 1930s, and AI is based on math
from the 1940s through the 1960s.

We have also included basic machine and deep learning terminology to
highlight how major AI applications might evolve over the next few years.
Executives and board members alike will benefit greatly from learning
these concepts, which until recently have been relegated to the technology
department dungeon. Every executive and board director must understand
financial statements without being a certified accountant. The same must
happen with information technology (IT) because every single business is
becoming a technology business. Without basic tech literacy within top
leadership teams, it will be difficult to make decisions to ensure the long-
term sustainability of any business.
Included throughout the book are “AI/ESG” boxes, our shorthand way
to identify key AI Environmental, Social, and Governance (ESG) issues.
In a world in which ESG and AI risks and opportunities are scaling
rapidly and often interconnecting, it is critically important to identify
points of interconnection and to think about the implications of
AI/ESG at the entity level as well as at the governmental and policy
levels.19 Included in our ESG considerations is the topic of reputation risk
as well, as it can be pervasive, and can amplify and accelerate other risks
associated with AI in our age of hyper-transparency.20 We urge leaders
across different groups in society to think about who their key stakeholders
are and to understand their expectations clearly as they pursue AI solutions
because, at the end of the day, the design of safe and beneficial AI will
depend heavily on social governance imposed by these stakeholders.
Figure I.1 provides an array of key stakeholders to consider.

Figure I.1 (© Anastassia Lauterbach 2018. All Rights Reserved.)

Below is a sampling of the types of issues that fall under the three ESG
buckets we will be considering:

Table I.A A Sample of Environmental, Social, and Governance (ESG) Issues

Environmental
• Climate change
• Sustainability
• Environmental laws and regulations
• Toxic waste laws and regulations

Social
• Human rights
• Labor rights
• Child labor
• Discrimination, harassment, bullying
• Inclusion and diversity

Governance
• Corporate governance
• Anti-corruption
• Anti-fraud
• Anti-money laundering
• Conflicts of interest

Finally, apologies to anyone who does not find their favorite startup,
application, or other AI reference mentioned in our pages. Likewise, we
bow to anyone who feels that we either oversimplified or underestimated
an AI-related concept or trend.

Who Is This Book For?


We hope that this book will appeal to a broad cross-section of readers.
If there ever was a concept that requires cross-disciplinary treatment and
attention, it is AI. If your main interest is in business development
enabled by AI, this book can help you:
• navigate the jungle of AI definitions and identify promising technologies to fit your business
model;
• anticipate some of the new technology that might apply to your business;
• prepare for discussions with your technology and legal functions, if you are considering an AI
startup acquisition or solution;
• evaluate pitches by prospective partners, especially if they are asking you to share company
data or venture capital; and
• understand the importance of data governance to introduce or improve data-centric decision-
making at your company.

If you are an executive or policy maker, this book will provide an
overview of the state of technology, its cornerstones, and where it is
taking companies and society. Privacy, robotics, open research, and
competition are critical issues. Legal frameworks will impact how fast
technologies penetrate specific sectors and are embraced by consumers.
We hope that this book will enable you to address AI more strategically,
looking beyond fragmented use cases and thinking about governance to
deal with data, new business models, and partners.
AI researchers and engineers might be inspired by this book to
delve more deeply into a favorite aspect of AI, to debate what companies
could and should do, or to argue what type of AI governance would be the
safest and most beneficial. You will also learn about the business of AI,
including what the potential bottlenecks are for introducing new
applications and services into traditional industries. One of the top reasons
we have written this book is to help forge a stronger connection between
technologists and business executives, between the propagators of
openness and believers in controlled environments, between policy makers
and researchers.
Non-executive board directors will become informed beyond what is
usually discussed at Davos or in the Financial Times or Wall Street
Journal. Since information is power, you might ask for an annual board
session to discuss the value of your company’s data, your data governance
framework (do you have one?), and what the current and targeted
applications are to drive your bottom and top line. You might encourage
your company to review how they benchmark their performance against
the competition, and who the newcomers are. You might ask whether your
company should prepare for things like “Universal Basic Income” and
lifelong education. You might review your company’s sustainability
framework and include in it a roadmap for automation with new education
options for your workforce, programs to organize ML centers in your
communities, or funding of fundamental research and applied sciences to
achieve safe and beneficial AI. Moreover, you might wish to add a non-
executive director with IT, data, or even ML experience, and/or someone
who worked in both tech and traditional companies.
And, finally, if you are a student of any age or just a curious mind, this
book will give you a bird’s-eye overview of the state of AI, including
where it is going from a business and technological standpoint and some of
its fundamental issues and dilemmas. You will also gain a deeper
understanding of the variety of AI, including the different industry sector
approaches that are developing and who the leaders and followers are.

Final Words
This is a book written by business, governance, and ethics practitioners
for practitioners. We hope that the book helps our readers make sense of
the multiple factors surrounding AI today. If we had to sum up this book in
a single phrase, it would be: AI is complicated. If we had a single call to
action, it would be: Become educated to actively participate in the AI
discussion. Making AI safe and beneficial—what we call the “AI
Imperative”—requires collective leadership and forward-looking
governance. Though they are very important, small organizations like The
Future of Life Institute or OpenAI will not be able to solve all the
problems or answer all the questions. The fact that you are reading these
lines means that you are already better positioned to make a difference.
We urge you to become a leader.

PART 1

The Past, Present,


and Future of AI

AI is an interdisciplinary topic, taking inspiration from biology,
neuroscience, physics, mathematics, computer science, and the
engineering of artificial neural systems. Once introduced into a business or
community, AI requires the inputs of a wide variety of disciplines,
including education, governance, risk, regulatory, and ethics. According to
Fei-Fei Li, chief scientist of AI at Google Cloud and associate professor at
Stanford, the first 50 years of AI were “in vitro”; we have now moved into
the “in vivo” phase.1
Over the last five years, AI development was marked by many
victories, ranging from progress in self-driving cars to speech recognition.
According to Google Trends, today people are searching for “machine
learning” nearly five times as often as they were five years ago.2 AI is
increasingly talked about at companies and in households that have come
to see AI as a technology that isn’t “far away,” but rather something that is
already impacting them daily.
The media reports on AI every day. Most tech companies are publicly
articulating their long-term AI strategies. Meanwhile, some researchers are
talking about the “misinformation epidemic” that is already happening
with AI, caused by the “pairing of interest with ignorance.”3 At the same
time, investors and businesses are eager to figure out what immediate and
midterm impact AI will have on their status quo. Responsible governments
and social institutions are commissioning research on the implications of
automation on labor and society. President Obama explicitly talked about
AI in his farewell address.
There is an inherent paradox in what non-technologists perceive AI to
be. Once AI becomes embedded in their daily lives, people get used to it
and stop thinking of it as AI. Though consumers’ traffic or weather
updates show up on their calendars in real time, they don’t think of a
Google search as a giant “artificial brain,” or of their smartphone updates
as “pocket” AI. They don’t think twice about how their Spotify, Pandora,
YouTube, or Netflix app recommendation feedback loops have penetrated
their vastly improved and “stickier” entertainment experiences. In a very
subtle way, AI has become “shorthand for products that use data to
provide a service or insight to a user . . . AI is whatever computers cannot
do until they can.”4 In this book, we will mostly focus on ML and how it
becomes embedded in a vast array of services and products in everyday
life. We will parse through today’s research, which offers a jungle of
terminology, ranging from “cognitive computing” (CC) to “neuromorphic
engineering” (NE), from “artificial neural networks” (ANN) to “deep
learning” (DL), which recently has reached high prominence.

Key Points of Part 1


• AI refers to a multitude of technologies that aim to perform tasks that normally require human
intelligence.
• AI is a relatively new field of scientific discovery; the term “artificial intelligence” was not
coined until 1956. Its underlying math is from the mid 1940s and 1950s, with important
additions since then, including heuristic models, ML, natural language recognition, computer
vision, and others.
• Back when AI was considered a futuristic technology without immediate implications, social
science journalists and researchers differentiated between three broad categories of AI: narrow
AI (or “weak” AI able to do one or a select set of tasks), general AI (human-level or “strong”
AI), and super intelligent AI (beyond human intelligence).
• Most researchers, practitioners, and investors are talking today about narrow AI applications,
and refer to ML, DL, cognitive computing, neural networks, natural language processing,
robotics, and other technologies.
• Making predictions about the future of AI is intimately intertwined with questions about the
concept of intelligence and its evolution, including the IQ of machines, as well as ethics and
the limits of today’s technology.
• AI and blockchain are major technologies that will change the way businesses operate, just like
mobile, social, and cloud before them. As Andrew Ng, a leading AI researcher and investor,
suggests, AI is the next electricity and will transform every industry.
• Both AI and blockchain could help traditional businesses close the digital divide with the large
tech brands, but only if they embrace these technologies with skill, patience, and investment.
Some of the challenges that the Internet of Things (IoT) and cybersecurity present cannot be
solved without AI. Moreover, quantum computing will present solutions such as parallel
computing at scale within seconds, probabilistic scenarios like those found in nature, and a
new cybersecurity paradigm. Recent advances in blockchain also demonstrate its capability to
deliver global infrastructure for decentralized (proprietary and open) data markets to feed
narrow AI applications, which can be accessed and used by free ML developers, thereby
substantially reducing the dominance of full-stack AI players such as Alphabet, Facebook, and
Microsoft.

CHAPTER ONE

A Brief History of AI

AI as an academic field started at the 1956 Dartmouth conference
organized by John McCarthy (later of Stanford University) and Marvin Minsky
of MIT. Conference participants, including Arthur Samuel, Oliver
Selfridge, Ray Solomonoff, Allen Newell, and Herbert Simon, became the
leaders of AI research for decades.

Table 1.1 A Recent Chronology of Key AI Events1

May 1997 was a landmark date in AI, as IBM’s Deep Blue defeated the reigning World Chess
champion Garry Kasparov. Since 2009, programs capable of grandmaster-level play have run on
smartphones.
In 2004, the U.S. government’s Defense Advanced Research Projects Agency (DARPA) organized
the first contest for what we today call self-driving vehicles. The contest involved an emerging
technology called LIDAR (Light Detection and Ranging), used mainly for military mapping
and targeting, and focused on sensing the environment and responding to it. Additional contests
happened in 2005 and 2007. Sebastian Thrun, leader of the Stanford AI Lab, joined Google after
2007 to work on autonomous driving. LIDAR technology became prominent when in early 2017
Alphabet’s Waymo unit sued Uber over the use of allegedly stolen technology to develop its autonomous-driving
capabilities. This dispute originated when Uber acquired Otto, a self-driving truck firm co-founded
by Anthony Levandowski, who had worked at Alphabet for ten years and who allegedly took
Google proprietary information when he left that firm.
In February 2011, IBM’s Watson defeated two Jeopardy champions. Immediately after collecting
the US$1 million prize, IBM set off to apply Watson to real-world scenarios. At the 2011 Jeopardy
event, Watson used a database of 200 million facts and figures. By 2015, every consumer could
have bought a hard drive to hold such information for US$120.
In 2012, a ML algorithm built by engineers without any medical knowledge helped a team of four
expert pathologists to more accurately diagnose breast cancer with the input of thousands of
screening images.

In 2013, Vicarious, a startup, generated a buzz when it announced that its AI could solve
CAPTCHAs, the ubiquitous website puzzles that require visitors to transcribe a string of distorted
letters—a basic test used to verify that the visitor is actually human—to proceed. Now, not only can
AI recognize objects and shapes, it can “imagine” them, too, like clouds in the shape of an animal.
In 2014, the Hong Kong venture capital firm Deep Knowledge “appointed” an AI tool called
VITAL to its board of directors to help find promising investments, including scanning financing,
intellectual property (IP), and clinical trials from prospective investment candidates, and then
sharing that information with the board and even casting a vote.
In 2015, Baidu beat humans in recognizing two languages. Microsoft and the University of Science
and Technology of China taught a computer network how to take an IQ test, and it scored better than
a college post-graduate. Google’s autonomous vehicles became common “guests” on Silicon Valley
highways and engaged in their first “dialogue” with California police.
In March 2016, Google’s AlphaGo computer system sealed a 4-1 victory over South Korean Go
grandmaster Lee Se-dol, arguably the best player of the past decade. Go’s complexity ruled out the
brute force approach of IBM’s Deep Blue chess computer, which beat Kasparov by evaluating 200
million positions per second. Instead, AlphaGo learned to recognize promising moves by playing
large numbers of Go matches against itself, generating new datasets and becoming a “value”
network itself.
In April 2016, Alibaba’s AI predicted all finalists and the grand winner of the popular Chinese
reality show I’m a Singer. The year 2015 and into early 2016 demonstrated the acceleration from
so-called “narrow” AI use cases (e.g., autonomous vehicles, semantic processing of legal texts to
prepare law cases, image recognition) to “deep AI,” which many scientists consider to be “humans’
last invention.”
In early 2017, researchers at the University of Alberta released a paper based on a contest in which
their own AI, named DeepStack, beat 11 professional poker players. In a 20-day tournament at
Rivers in January, humans lost US$1.8 million to an AI bot called Libratus from Carnegie Mellon
University. There has been a debate since then as to which AI—DeepStack or Libratus—is better at
defeating professional players.

Table 1.2 Artificial Intelligence: Some of the Big ESG Issues

Environmental
• Designing “safe” AI into robotic and IoT products
• Understanding AI impact on sustainability issues: ensuring low impact on air, water, land
• Designing AI products that facilitate conservation
• Creating intersections between AI products and services and electrification

Social
• Data and information privacy in the age of AI
• Impact of AI on labor and employment
• Need for lifelong-learning education policy shift

Governance
• AI requires new governance parameters from inception in products and services
• Boards and C-suites must learn about AI both generally and as it applies to their sectors
• Public/private AI collaboration is imperative
• The paradox of designing “ethical” governance into military AI must be grappled with and regulated

By the late 1960s, the following important developments had occurred:
• Research in natural language processing had started.
• Mobile robotics was initiated with the invention of “Shakey,” a wheeled robot built at SRI
International.
• Samuel’s checkers-playing program became one of the first ML systems. It improved itself
through self-play.
• Rosenblatt’s Perceptron, a computational model based on biological neurons, gave rise to the
area of artificial neural networks.
• Feigenbaum and others started building expert systems for specialized domains such as
chemistry and medical diagnosis. In this manner, research on symbolic systems for representing
and reasoning about knowledge was born.
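Rosenblatt’s Perceptron, mentioned above, is simple enough to sketch in full: compute a thresholded weighted sum of the inputs, then nudge the weights toward the correct answer whenever the output is wrong. The Python sketch below is our own illustration (all names and the AND-learning example are invented), not Rosenblatt’s original formulation:

```python
# Minimal Perceptron sketch, for illustration only.
# A perceptron fires (outputs 1) when the weighted sum of its
# inputs plus a bias exceeds zero; learning shifts the weights
# in the direction of the error (Rosenblatt's rule).

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with 0/1 targets."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Threshold activation.
            pred = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - pred
            # Nudge weights toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Learn logical AND, a linearly separable task a perceptron can solve.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only separate classes with a straight line—its famous failure on XOR, highlighted by Minsky and Papert, contributed to the field’s first slowdown—but stacking many such units is the seed of today’s artificial neural networks.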

Despite the promising progress made on various aspects of AI during the
preceding decades, by the 1980s, the field could not boast significant
practical successes.2 This was partly because there was as yet little
understanding of how to store knowledge in digital form. Research
focused mostly on methods of reasoning and logic. Even investments in
so-called expert or knowledge systems in the 1980s did not bring much
progress, as the tools and frameworks used lacked power to achieve
expected performance. In subsequent years, AI technologies disappeared
from the public eye but continued to evolve in research labs.

CHAPTER TWO

AI Social-Science Concepts

As AI developed as a technology, the social sciences started to talk about
AI as well, using more general concepts to position it vis-à-vis the human
experience, namely human intelligence. Definitions such as “artificial
narrow intelligence,” “artificial general intelligence,” and “artificial super
intelligence” emerged.

Artificial Narrow Intelligence


Artificial narrow intelligence (ANI) is an AI specializing in just one
area, e.g., playing chess, recognizing patterns on radiological images,
powering Facebook’s newsfeed, filtering spam in e-mail accounts, or
predicting when a rail track requires maintenance. ANI is considered to be
a great productivity factor and is generally perceived to be a harmless
enabler. However, we would caution against such "perceived safety." In the
financial market event known as the "Flash Crash of 2010," an ML program
got "confused" by an unexpected situation and caused the stock market to
briefly plummet, erasing US$1 trillion of market value within 36
minutes. Only part of the loss was recovered after humans intervened and
corrected the mistake. Most companies today develop ANI to fulfill a
specific task.

Artificial General Intelligence


Artificial general intelligence (AGI) should be able to "authentically
mimic" the behavior and capabilities of a human to solve problems,
comprehend abstraction and complexity, learn from experience, and find
the best way to cope with a new situation. As AGI researchers mostly try
to emulate the human brain, their endeavors can be envisioned in three
stages: mapping, simulation (through software), and embodiment (by
introducing new form factors into devices or machines).2 Robotics,
covering the embodiment of AI, is an important field. Recent advances in
emotion capturing and measurement will contribute to the further
evolution of robotics. Facebook’s Yann LeCun, DL researcher Yoshua
Bengio, quantum physicist David Deutsch, and Microsoft co-founder Paul
Allen have “reflected the general sentiment that we’re yet to fully
understand the human brain and cognition—let alone translate it into
algorithms,” and we haven’t yet approached “machines that are even as
intelligent as rats."3 Even Google's first version of AlphaGo—in spite of
unseating the Go world champion—scored an IQ of just 47.28, below that
of a six-year-old child, though twice as high as Apple's Siri, which
scored 23.94 in a study conducted by Chinese researchers in the fall of
2017.4

Table 2.1 Examples of Domain-Focused ANI Companies1

Legal
• Casetext, with ML legal research assistant CARA
• LawGeex, with ML for automatically reviewing business contracts

Education and Research
• Meta uses ML to consolidate information from scientific papers
• Osmo uses computer vision to engage children through iPad games

Physical Security
• Dextro, backed by Taser International (smart weapon and bodycam manufacturer), uses DL-based video analytics solutions
• Digital Signal Corporation focuses on biometric imaging, facial analytics, and 3-D identification
• Shield.ai is developing software for in-field robots and drones

Travel
• Dorchester Collection, a luxury hotel operator, uses ML to modify their breakfast offerings based on analysis of guest reviews
• OLSET uses natural language processing to develop APIs for personalized travel recommendations
• Holdigu uses image recognition to find similar vacation rentals on multiple sites

News, Media, and Entertainment
• China-based Toutiao uses ML to personalize news recommendations
• Amper uses ML-powered music composition
• Mobalytics is doing performance analytics for gamers

Human Resources
• Onfido uses computer vision to aid in identity verification for background checks of applicants for jobs or bank services
• hiQ Labs uses ML with humans in the loop to predict churn and improve employee retention

Real Estate
• Zillow uses ML to estimate the price of homes

Table 2.2 CB Insights' AGI Startups and Their IP Applications
(Total funding as of end 2016)

Vicarious Systems (USA), US$68 million
• Investors: Marc Benioff, Elon Musk, Mark Zuckerberg, Sam Altman, Jeff Bezos, Samsung Ventures, Wipro Ventures, ABB Technology Ventures, Data Collective, Felicis Ventures, Formation 8, Good Ventures, Khosla Ventures
• Technology: Cortical networks
• Patents: Applied for appx. 6 patents since 2013 on topics such as object recognition using recursive cortical networks

Kindred AI, US$13 million
• Investors: 11.2 Capital, AME Cloud Ventures, Bloomberg Beta, Bold Capital Partners, Data Collective, Eclipse Ventures, First Round Capital, Google Ventures, Innovation Endeavors
• Technology: Robotics
• Patents: Patent activity started 2016; two patents titled "facilitating device control" and "methods of self-preservation of robotic apparatus"

Numenta, US$23.6 million
• Investors: Ed Colligan, SkyMoon Ventures
• Technology: Reverse engineering of neocortex
• Patents: Patent activity since 2005; over 40 patents

Nnaisence (Switzerland), seed funding
• Investors: Alma Mundi Ventures
• Technology: DL team (incl. Jürgen Schmidhuber) aiming to build large-scale neural network solutions for AGI and intelligent automation
• Patents: No U.S. patents



According to Gary Marcus, who sold his startup Geometric
Intelligence to Uber in December 2016, AGI is not possible without taking
the cognitive sciences (such as developmental psychology) more seriously.
Children, for instance, are capable of doing many things that machines
have yet to mimic. Kids draw inferences from small amounts of data and
are able to learn a very complex language. The human child apparently has
the innate ability or cognitive structure to learn and improve. The
dominant approach in ML right now, by contrast, is to find statistical
approximations of the child's abilities while presuming little prior knowledge.5 Recent
discussions around backpropagation and unsupervised learning contribute
to building DL models less reliant on mountains of labeled data, which can
be seen as a simplified equivalent of such knowledge.
Daniel Dennett, a celebrated philosopher and cognitive scientist who
researches consciousness and the mind, suggests in From Bacteria to Bach and Back
that “a natural part of the evolution of intelligence itself is the creation of
systems capable of performing tasks their creators do not know how to
do.” “I think by all means if we’re going to use these things and rely on
them, then let’s get as firm a grip on how and why they’re giving us the
answers as possible,” he says. “But since there may be no perfect answer,
we should be as cautious of AI explanations as we are of each other’s—no
matter how clever a machine seems.”6
Outsourcing a design task to computers is another way to leapfrog
difficulties of brain emulation. Our experience with building complex
machines, e.g., airplanes, has taught us that using a machine-oriented
approach pays off better than trying to emulate biology. Airplanes don’t
copy flapping bird-wing motions. Why should AI be an exact copy of a
human brain?
In any case, we believe that AGI can be achieved only when quantum
computing at scale has been implemented and has affected and changed
the way in which we define and write software. Practitioners agree that
once AGI is achieved, it will have a seismic impact on every domain of
human life.

Artificial Super Intelligence


University of Oxford philosopher Nick Bostrom defines artificial super
intelligence (ASI) as "an intellect that is much smarter than the best human
brains in practically every field, including scientific creativity, general
wisdom and social skills."7 Researchers disagree about how likely it is that
current human intelligence can be surpassed. Some believe that further
progress in AI will lead to the creation of general reasoning systems that
lack human cognitive limitations. Others assume that humans will evolve
or directly modify their biology and in that way achieve greater
intelligence. Currently, we can only speculate about the implications of
ASI. Some suspect that ASI will be unstoppable by humans and might
even eliminate humankind. Such elimination, the speculation goes, could
occur either as a byproduct of a specific goal or as a result of ASI
determining that humans are an inferior, resource-depleting species not
worthy of protection or preservation. Max Tegmark writes that AI can lead
to virtually any goal, “but almost any sufficiently ambitious goal can lead
to subgoals of self-preservation, resource acquisition and curiosity to
understand the world better.”8 Tegmark believes that aligning machine
goals with our own involves three unsolved problems: “making machines
learn them, adopt them and retain them.”9

Timeline for Artificial General Intelligence


Most AI researchers and practitioners expect that computers will
achieve and even surpass human intelligence. There is, however, little
consensus on the timeline. At the 2006 AI@50 conference, 18 percent of
attendees said that machines will fully achieve human intelligence by
2056, 41 percent guessed that this would occur after 2056, and 41 percent
opined that this would never come to pass.10 At Ben Goertzel’s annual
AGI Conference, James Barrat asked when participants thought AGI
would be achieved—by 2030, 2050, 2100, after 2100, or never; 42 percent
of respondents selected the year 2030, 24 percent chose 2050, 20 percent
chose 2100, and 10 percent selected after 2100, with only 2 percent assuming that
it would never happen.
Speculative as these projections might be, they are representative of
the median opinion of the AI expert community. Most specialists would
agree that 2060 is a sensible estimate for the arrival of powerful ASI. If we
think about this concept using Tegmark’s development phases of life, with
this form of ASI, Life 3.0 will finally take over. At such a point in time, we
will no longer be able to adequately prepare for the multifaceted and
overwhelming changes that will impact humanity as we know it today.



Social Science Controversies Around AI

One could view AI as second only to Charles Darwin's theory of
biological evolution by natural selection in its capacity to cause emotional
discussion and controversy. This is partly because evolutionary theory
challenges many philosophical and religious doctrines about the
uniqueness of humans and our role in the history of the universe. On the
one hand, AI pessimists
express the fear that AI may threaten not only our way of life, but
humanity as a whole. On the other hand, scholars like Yuval Harari,
author of the bestselling books Sapiens and Homo Deus, argue that
in the 21st century, human exploration will turn from the outer to the inner
world, thus focusing primarily on the body, the brain, and the mind.
Humans may even create a new kind of life and/or merge with machines.11
Current discussions are dominated by questions about whether a
computer might one day develop a free will, what a free will means in
humans, and how decisions are made. Sam Harris, an influential thinker
and researcher, argues in his book Free Will (2012) that a choice
independent of any outside or prior influences does not make any sense.
Free will as an intentional choice is only an illusion. Parallels can be found
in how computers or people make a choice or, as Jerry Kaplan states,
“[E]ither both people and computers can have free will, or neither can—at
least until we discover some evidence to the contrary.”12 Tegmark talks
about consciousness instead of free will, and defines it as “subjective
experience.” There is no meaning without consciousness, and humans
should be prepared to be “humbled by ever smarter machines” and take
comfort mainly in "being Homo sentiens, not Homo sapiens."13
The emergence of AI as a scientific field has rightly given rise to a
discussion of the ethics of AI focused on whether computer consciousness
might imply feeling joy or pain, respect or abuse. The HBO science fiction
thriller Westworld features an entertainment park populated by humanoid
hosts and tells a story of the blurring lines between robot hosts and human
guests. The humans are invited to do whatever they wish within the park
without fear of retaliation from the hosts. Their actions tell more about
their humanity than anything else. Some argue that raping a female robot
host is nothing more than abusing a toaster. Others develop a bond with
the machines and treat them as equals. Both behaviors can be expected in
real life.



There are many ways to anthropomorphize technology. In time, when
robots increasingly care for the elderly and children, when kids grow up
with machines assisting them with their studies and in play, human
consciousness and empathy might embrace robots as valuable members of
families and communities. In this context, the call for greater involvement
by social science in AI computation has never been louder. David Deutsch,
a physicist at the University of Oxford and author of The Beginning of
Infinity (2011), firmly believes that developing AGI is a matter of
“philosophy, not computer science or neurophysiology.” He writes that
what distinguishes human brains from all other physical systems is
qualitatively different and cannot be specified in the
same way as other computer programs. The "core functionality in
question” is creativity. Humans do not get all their theories by induction.
Knowledge does not come from extrapolating repeated observations. AGI
is qualitatively, not quantitatively, different from all other computer
programs; its attributes “do not consist only of the relationships between
its inputs and outputs.”14

Intelligent Machines vs. Intelligent People


Machines capable of performing tasks and cognitive functions that
reflect and replicate human intelligence must combine perception,
reasoning, learning, communication, and action in complex environments.
In order to reach this level, computers must be able to learn skills
automatically, recognize and understand human languages, and solve
problems similarly to or even better than humans. In a broad sense,
according to Nils J. Nilsson, “AI is that activity devoted to making
machines intelligent, and intelligence is that quality that enables an entity
to function appropriately and with foresight in its environment.”15
The question of what constitutes intelligence has historically been
problematic for scientists, philosophers, artists, and even Silicon Valley’s
most admired technologists. As of today, we do not have a comprehensive
theory that tells us what intelligence is, what it takes to build it, and what
the limitations are. As we look at our place in the world, we want to
believe that human intelligence is unique and separates us from other
animals. But the moment we try to define what makes humans unique, we
confront examples of birds using tools to catch food and plan for the
future, dogs finding toys by reasoning from exclusion, and apes using sign
language to communicate with humans.16 It is very hard to find one single
feature setting humans truly apart. As Robert Sapolsky clearly
demonstrated in Behave: The Biology of Humans at Our Best and Worst
(2017), even humans’ ability to know right from wrong and make moral
and ethical decisions has varied throughout history, shows behavioral
parallels with many mammals, and can be largely explained using
genetics.
The philosopher John Searle was among the first to describe a hypothetical
technology that would work in the same way as biological intelligence.
Searle did not indicate the amount of intelligence a computer should
display, but rather focused on how intelligence works. He formulated a
“Chinese Room” experiment to show that looking into a database and
computing outputs based on that database was not the equivalent of human
thought. Only biological systems can consciously understand what they are
doing; computers cannot. Machines create intelligent behavior based on
something like “if X occurs, do Y.” This is fundamentally different from
what humans do. Searle thus distinguished syntax (how machines work)
from semantics (how biological systems work). Today's AI just learns
stimulus-response associations from large numbers of images (pure syntax, in Searle's terms).
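Searle's "if X occurs, do Y" mechanic can be caricatured in a few lines of code. The rulebook entries below are invented for illustration, and that is precisely the point: the program needs no understanding of the symbols it shuffles.

```python
# A caricature of Searle's Chinese Room: the program matches input symbols
# to canned responses ("if X occurs, do Y") with no grasp of their meaning.
# The rulebook entries here are invented for illustration.

rulebook = {
    "你好": "你好！",           # greeting in -> greeting out
    "你懂中文吗？": "当然懂。",  # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: look up the input, emit the paired output.
    return rulebook.get(symbols, "请再说一遍。")  # default: "Please repeat."

print(chinese_room("你懂中文吗？"))  # → 当然懂。
```

To an outside interrogator the replies may look competent; inside, there is only table lookup, which is Searle's distinction between syntax and semantics in miniature.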
Searle was not alone in calling for better approaches to understanding
intelligence in machines vs. in humans. In his 1972 book What Computers
Can’t Do, as well as 20 years later in What Computers Still Can’t Do
(1992), Hubert Dreyfus presents a popular list of the limitations of the
dominant approach to AI.
Most apocalyptic scenarios about the future of AI tend to use the
concept of intelligence the way Bostrom described it in his book
Superintelligence. His view is based on the concept that natural
intelligence exists in a single dimension. Imagine a graph of increasing
amplitude, starting with simple single-cell animals and continuing on to
mammals and eventually to Einstein. This step-by-step progress in
intelligence is aligned with the post-Darwin concept of evolution, where
inferior organisms evolve into something more sophisticated. As Kevin
Kelly writes in The AI Cargo Cult, such a ladder or escalation in
intelligence is a myth. David Hillis of the University of Texas has
developed a more precise view of the natural evolution of species that is
based on DNA research. To showcase his idea, Hills uses a disk radiating
outward, which shows that every species alive today is equally evolved
without such an escalatory ladder-type evolutionary superiority.
Intelligence is similar. There are many types and modes of intelligence,
each one on a continuum. There is no one uniform metric for intelligence.
Instead “we have many different metrics for many different types of
cognition.”17
In Life 3.0, Tegmark, an MIT physicist, founder of The Future of Life
Institute and passionate AI researcher, introduces a brilliant concept of the
three stages of life: biological evolution, cultural evolution, and
technological evolution.18
• Life 1.0 is unable to redesign either its hardware or its software during its lifetime. Both are
determined by DNA and change only through evolution over multiple generations. Bacteria
and primitive organisms are examples of this life form.
• Life 2.0 can redesign much of its software. For example, humans can learn complex new skills
such as languages, sports, and professions, and thereby fundamentally update their worldviews and
goals. People like readers of this book are examples of this life form.
• Life 3.0, which does not yet exist on Earth, can dramatically redesign not only its software but
its hardware as well, rather than having to wait for it to gradually evolve over generations. This
life form could arrive during the 21st century. The first signs of Life 3.0 can be seen in narrow
AI applications; the greater upgrade will happen, however, once AGI takes further concrete
shape.

For Tegmark, life means an information-processing entity that can retain
its complexity and replicate. Most "limited" narrow AI applications today
outsmart humans in their areas of focus and imply that Life 3.0 has already
been initiated. Google’s memory is beyond ours in many ways. Trading
systems perform operations in seconds, while humans need hours. Many
AI applications help us solve problems where we would fail on our own.
One of the biggest proponents of combining machine and human
intelligence is none other than Garry Kasparov, who in 1997 lost to
IBM's Deep Blue, which exploited human missteps with the
"speed and depth of brute force search." The chess grandmaster used his
own experience to develop an approach to chess playing with the help of
technology. Observing that humans will be as disconcerted by driverless
cars as he was by chess-playing machines, Kasparov urges us to take
advantage of the rise of computers as they take over many of the roles that
lawyers, bankers, doctors, and other professionals have traditionally
performed. He writes, “It’s remarkable how quickly we go from being
skeptics to taking a new technology for granted.” Overreliance on
machines may be dangerous if you want to innovate rather than imitate,
but listening to them allows you to overcome your emotional biases. Given
honest data, machines can “make us into better decision makers.”



Prominent voices in ML recognize that DL is one of the best ML
technologies to achieve both narrow and general types of intelligence.19
This is partly because DL models do not just generalize, they memorize. In
biological systems, learning happens through learning rules embedded in
our genes. For humans, “learning” is an application of these genetically
encoded learning rules to create our long-term memories in the form of more
specialized learning rules. Biological intelligence has working memory.20
Adding more memory to DL is an important step in the evolution of this
technology. It is not too early to consider supporting broad-based research
on the safe and beneficial design of AGI.

The IQ of Computers
In 1950, Alan Turing published a paper in the journal Mind entitled
“Computing Machinery and Intelligence,” which was the first-ever
introduction of an AI concept. He described machines able to pass what
has become known as the Turing Test, which gauges "machine IQ" through a
communication game between a human and a machine.
McCarthy first used the term “AI” in the proposal he co-authored with
Marvin Minsky, Nathaniel Rochester, and Claude Shannon for the 1955
Dartmouth workshop.21 In the original Turing Test, an interrogator asks
two participants a series of increasingly difficult questions aimed at
determining which of them is the human and which is the computer. If, at
the end of the test, the interrogator cannot distinguish between these two,
then the computer has passed the test.
The Turing Test is just one of several methods to evaluate machine
intelligence. Hector Levesque has also proposed the Winograd Schema
Challenge, which aims to evaluate whether a system can apply general
knowledge of the world, coupled with common sense, to tasks such as
language understanding.
Gary Marcus has proposed a different approach, called the Marcus
Test, with a similar theoretical thrust. His take on intelligence is that it
requires the ability to take in information and synthesize it, and then use
the resulting knowledge. He envisions a test in which systems watch
television and then answer questions based on what they saw and
understood. As with the Winograd Challenge, the Marcus Test assumes
that common-sense reasoning is a core requirement of intelligence and has
to be part of any evaluation.



There are other methods aimed at specific aspects of intelligence. The
Visual Turing Test and the Construction Challenge are designed around
integration of visual recognition and spatial reasoning with interpretation
and plan execution. The Lovelace Test is designed to test creativity. In
2016, an “MIT algorithm managed to produce sounds that were able to
fool human listeners and beat Turing’s Sound Test.”22 This happened
within a larger study aimed at improving a robot’s ability to interact with
its surroundings and environment as a whole, which included sensing
sound.
Each of these tests has its own focus and its own perspective on what
constitutes intelligence. Like specialized human doctors, different
AI tests diagnose different functions and pursue different goals. Visual
acuity and spatial reasoning are important, but not to a system that is
providing you with financial advice. Speech recognition is necessary for
conversational systems unless they are designed around written input. It
makes sense that we apply different tests and metrics to different types of
capabilities.

Table 2.3 AI ESG Considerations: Social Science Issues

Social
• The future of work: Replacing human labor with AI
• Blurring lines between human and tech support: The role of empathy
• Humans humbled by machines but comforted that we are "Homo sentiens" vs. "Homo sapiens"

Governance
• The "perceived safety" of ANI: The need for governance guardrails heightens
• The future of AI: Governing "Artificial Super Intelligence" and "Life 3.0"

Even when it comes to currently non-existent AGI systems that aim to
be a full replica of all human capabilities, it is important that we apply
different tests, thinking about how the individual components work
together to create that complete system, along with how they support and
augment each other.
Japan’s National Institute of Informatics is developing one of the most
promising recent tests that will further address AGI. They want to create
an AI program capable of passing the entrance exam at the University of
Tokyo, the nation’s most difficult college entrance exam, arguing, “If an
AI program can trick college admissions officers into thinking that it's a
human, it would mark one of the greatest accomplishments yet for AI."23
They plan to have built a working prototype by 2021.



CHAPTER THREE

Key Cognitive and Machine


Learning Approaches

In chapters 3 and 4 we build a practical foundation of ML concepts for our
readers. We reviewed thousands of pages of computer science and AI
publications and talked to around 40 startups to refine our terminology for
this book. We decided to incorporate the most common terms that
businesspeople hear in pitches from AI companies or read in popular
literature. We have also observed great confusion in the past couple of
years at technology conferences and in major consulting firm publications.
Statements such as “ML makes AI possible” and “Phase of AI: ML, DL,
Reinforcement Learning” are unfortunately too common to ignore, and
they fail to contribute to closing the knowledge gap between Internet and
traditional businesses.
We hope to give our readers some basic understanding of what
constitutes mimicking human decision-making with algorithms. While
addressing limitations in ML, we hope our reader will understand why AI
luminaries such as LeCun believe that, in spite of the great progress
we have recently made, we are still just talking about 5 percent of
what AI can do.
In Figure 3.1, we offer a simple graphic to illustrate the
interrelationships among the concepts of AI, ML, and DL.
Table 3.1 provides another simplified but more detailed view of
general social AI concepts vs. the current technology stack to help our
readers follow what we cover in these two chapters.



Figure 3.1 The Big Picture



Figure 3.2 AI Technology Enablers.

Table 3.1

General Social Science Concepts
• Artificial Narrow Intelligence (e.g., ML and DL Applications and Services)
• Artificial General Intelligence
• Artificial Super Intelligence

Current Technology Stack
• Agent Enablers
• Data Science
• ML (including DL)
• Natural Language
• Computer Vision
• Data Capture and Preparation
• Open Source Frameworks
• Hardware
• Fundamental AI Research

Concepts for Building AI


In this book, we focus mainly on statistical approaches to AI, which
help to mimic the human ability to guess. Simple ML models should be
used unless a lot of data is available. DL is a revolutionary ML
advance in which the system automates some of the feature engineering
traditionally performed by specialists, for example on image and video data. DL alone
will not bring about AI. AI as a discipline, however, is much broader.



Knowledge representation and knowledge engineering are essential in
AI research. If computers are expected to solve problems, they need
knowledge about the world. This means that machines must be able to
represent objects, properties, categories, and relations between objects,
situations, events, and time. A symbolic representation of whatever exists
is called an ontology.
Automated planning is a branch of AI that thinks about strategies or
action sequences, typically for execution by intelligent software agents,
robots, and unmanned vehicles. Planning is related to decision theory. A
technique called a heuristic is designed to solve a problem more quickly
when traditional methods are too slow, or for finding an approximate
solution when traditional methods fail within a reasonable time frame.
When there is more than one agent, AI crosses into the field of game
theory, dealing with decentralized AI actors with different functions. Go,
Atari, and Poker are demonstrations of this type of algorithmic game
theory.
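As a sketch of how a heuristic speeds up planning, the toy example below runs greedy best-first search on a made-up 5×5 grid, always expanding the cell that Manhattan distance judges closest to the goal. The grid, obstacles, and goal are our own invented illustration; a production planner (such as A*) would also track accumulated path cost.

```python
# Sketch of heuristic-guided planning: greedy best-first search on a toy
# grid, using Manhattan distance as the heuristic. Grid and obstacles are
# invented for illustration; A* would additionally track path cost.
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_best_first(start, goal, blocked, size=5):
    """Return a path from start to goal, expanding first the node the
    heuristic judges closest to the goal."""
    frontier = [(manhattan(start, goal), start)]
    came_from = {start: None}           # also serves as the visited set
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:             # reconstruct the path backward
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = current
                heapq.heappush(frontier, (manhattan(nxt, goal), nxt))
    return None  # no path exists

path = greedy_best_first((0, 0), (4, 4), blocked={(1, 1), (2, 2)})
print(path)
```

The heuristic makes the search fast by ignoring most of the grid, at the price of sometimes returning a path that is not the shortest one, exactly the trade-off described above.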
AI is unthinkable without simulation, which is the capability to create
data. We will discuss simulation later in the book. In the following pages,
we will mostly discuss ML, which is not a homogeneous field, but rather
one in constant tribal warfare.1 Some approaches are in opposition to each
other, while others can be used together to build better solutions.

Cognitive Computing
IBM and several other companies are using the term “cognitive
computing” to refer to an approach of curating massive amounts of
information that can be ingested into a “cognitive stack,” while at the same
time creating connections within this information so a particular problem
can be discovered by the user, or a particular new and unanticipated
question can be explored. Watson, an example of cognitive computing, is
not a single tool, but a mix of models and APIs.
Critics of this approach, however, believe that ingesting a ton of data
and then serving it up through a “search” and “retrieve” function doesn’t
constitute or create domain knowledge. In most cases, Watson can’t
answer the “why” question, i.e., explain why the outcome or a decision is
materially significant. Thus, while Watson adds value from a data-
structuring standpoint, it really doesn’t add anything from a new
technology and interface perspective.2



ML “Tribes”
ML is almost the opposite of cognitive computing. In ML, users are
focused on a very specific goal (a goal function) and are looking for
something very specific in the data. ML models will go through a lot of
disparate data and try to create proximity to this goal function by either
training the system or by observing its behavior.3 Neural networks, DL, and
Bayesian networks, all of which are discussed in this book, are types of
ML. They are all methods of creating generalized systems that can perform
analyses on new data. In The Master Algorithm (2015), Pedro Domingos
discusses five basic “tribes” of researchers in ML, each one building its
own "master algorithm" from a different base.4 Each tribe approaches the
problem from a different perspective:
• The symbolists focus more on philosophy, logic, and psychology and view learning as the
inverse of deduction (e.g., John Haugeland, Tom Mitchell, Sebastian Thrun, Oren Etzioni,
Stephen Muggleton, Ross Quinlan).
• The connectionists focus on physics and neurosciences and believe in the reverse engineering
of the brain (Geoff Hinton, Navdeep Jaitly, Yann LeCun, Yoshua Bengio, Ian Goodfellow,
Aaron Courville).
• The evolutionaries draw their conclusions on the basis of genetics and evolutionary biology
(John Holland, Serafim Batzoglou).
• The Bayesians focus on statistics and probabilistic inference (David Heckerman, Michael
Jordan).
• The analogizers depend on extrapolating similarity judgments by focusing more on
psychology and mathematical optimization (Vladimir Vapnik, Peter Hart, Douglas
Hofstadter).5
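Whatever the tribe, the shared mechanic described above, training that nudges a model ever closer to a goal function, can be sketched with a one-parameter model and gradient descent. The data points and learning rate below are invented for illustration.

```python
# Minimal sketch of ML as optimizing a goal (loss) function:
# fit y ≈ w * x by gradient descent on the mean squared error.
# The data points and learning rate are illustrative choices.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x

def loss(w):
    """Goal function: how far the model's predictions are from the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0     # start with an uninformed model
lr = 0.05   # learning rate
for _ in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill on the loss surface

print(round(w, 2))  # → 2.04, the slope that minimizes the loss
```

Each tribe differs in what the model looks like and how the optimization is carried out, but the pattern of a goal function plus a procedure for approaching it recurs throughout.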

While Domingos limits himself to five master algorithms, Carlos E. Perez
of Intuition Machine distinguishes 17 tribes. The 2006 IEEE Conference
on Data Mining identified the top 10 ML algorithms. Y Combinator talks
about 21 different AI cultures. On the following pages we will discuss
neural nets, Bayesian networks, and DL as ML approaches currently
making headlines in the M&A, product design, and research strategy of the
Internet giants.

Neural Nets and Neural Networks


Neural nets were inspired by work on decoding a brain’s visual system
and measuring how neurons spike while receiving a signal. A human brain
contains more than 80 billion neurons and tens of trillions of synapses.
Most AI work to reconstruct the brain focuses on the neocortex,
which is chiefly responsible for cognition and is organized into columns
with six distinct layers, feedback loops running to another brain structure
called the thalamus, and a recurring pattern of short-range inhibitory
connections and longer-range excitatory ones. The brain processes
information in a clearly defined way: neurons fire electrically in patterns
described as spiking. Precise descriptions of the spiking process have
been around since the 1950s, when Alan Hodgkin and Andrew Huxley
formalized the mathematical model that best describes it, for which they
received the Nobel Prize in Medicine.
Memories are formed through the strengthening of the connections
between neurons that fire together, using a biochemical process called
long-term potentiation. Mathematical models exist to mirror these
processes, too.
Still, a biological brain offers more mysteries than answers. We do not
know, for instance, why neurons spike when, e.g., we judge the relative
arrival time of a sound at each ear to determine its direction. We do not fully
understand adaptation of synapses. At present, in most artificial neural
nets, researchers just have a timescale for adaptation of the synapses and
then a timescale for activation of the neurons. Intermediate timescales of
synaptic adaptation, which might be very important for short-term
memory, are still missing.6
Moreover, it is unclear how the whole “intelligence engine,” which
includes more than just the brain, is involved. Neurons are not confined to
the central nervous system, the brain and spinal cord. The peripheral nervous
system also consists of neurons, which carry sensory signals to the
brain from the body: the skin, the eyes, the stomach, etc. And the
behavior of a neuron is modulated by the presence of chemical
neurotransmitters such as dopamine and serotonin. An important feature of
the brain is its plasticity. During infant to adult development, connections
in a brain undergo reconfigurations, as axons and dendrites grow like the
roots of a plant, abandoning redundant links and establishing new ones.7
If what we are trying to do is replicate the human brain with today’s
technology, our lack of understanding of the biological prototype
represents a perfect engineering challenge. Building a bridge between
computer science and biology will require time, bold experiments with
modeling with less data available, and maybe additional optimization of
the existing algorithms. The difficulty of trying to replicate the
neocortex is obvious if we think about the work of Geoffrey Hinton and
his students, one of the most influential groups in DL.



Neural networks are made of big flat layers. In 1986, Hinton co-developed
“backpropagation,” a technique used in artificial neural networks
to calculate the error contribution of each neuron after a piece of a dataset
is processed. The technique is employed by embedding an optimization
algorithm into a model to adjust the weight of each neuron, thereby
mimicking how humans learn. Backpropagation came into wide use after
2012, benefiting from powerful GPU-based computing systems. In 2014,
backpropagation was used to train a deep neural network for speech
recognition.
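To make the mechanics concrete, here is a minimal sketch in pure Python of backpropagation in a tiny two-layer network learning the logical AND function. The network size, learning rate, and data are illustrative, not taken from any production system.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set for logical AND: inputs -> target output.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]

# A 2-2-1 network with randomly initialized weights.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden layer
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]                      # output layer
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, y))

initial = loss()
for _ in range(5000):
    for x, t in zip(X, y):
        h, o = forward(x)
        # Backward pass: propagate the output error back through each layer.
        delta_o = (o - t) * o * (1 - o)                  # output-layer error
        delta_h = [delta_o * W2[j] * h[j] * (1 - h[j])   # hidden-layer errors
                   for j in range(2)]
        for j in range(2):                               # update output weights
            W2[j] -= lr * delta_o * h[j]
        b2 -= lr * delta_o
        for j in range(2):                               # update hidden weights
            W1[j][0] -= lr * delta_h[j] * x[0]
            W1[j][1] -= lr * delta_h[j] * x[1]
            b1[j] -= lr * delta_h[j]

final = loss()
```

Each weight update is the "error contribution" calculation the text describes: the output error is split across the neurons that caused it, layer by layer.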
In 2017, Hinton raised doubts over the effectiveness of the method,
claiming that this technology was still very different from how the human
brain processes information. He suggested that backpropagation will not
enable neural networks to become intelligent on their own. The human
brain, at the end of the day, does not possess labels for all the data it
processes. Further evolution of unsupervised learning might be a better
path.8 Hinton recognized that in the human neocortex, biological
neurons are arranged not just horizontally but also vertically, as columns.
These columns seem to be indispensable to recognizing objects, even if
there are changes in our viewpoint or in their arrangement within a picture.
Hinton is currently building an artificial version of this system, called
“capsules.”9 This approach departs from traditional DL as it engineers the
networks with as little prior information or structure as possible and allows
them to “just” learn from data in image recognition and visual systems
design. Practitioners and researchers need to observe how such systems
perform, as they lend themselves to trivial adversarial attacks or might be
unable to properly function with a small change of context or
perspective.10
Additional key concepts are the Convolutional Neural Network
(ConvNet or CNN) and the Recurrent Neural Network (RNN), DL techniques
that have been successfully applied to analyzing images and natural
language processing. CNNs were inspired by biological processes, with the
connectivity pattern between neurons modeled on the visual cortex. CNNs
and RNNs are independent of prior knowledge and require little pre-processing;
CNNs are feed-forward deep neural networks, while RNNs contain feedback
connections that give them a memory of earlier inputs.11

Bayesian Models



Bayesian researchers find that a statistics-influenced approach is a
good way to build reliable ML models for many cases. Bayes’s theorem
enables updating a belief whenever there is new evidence. In this context,
Bayesians start their modeling with a hypothesis and then update it
depending on the data. They do not rely on the data to drive the
conclusions, as neural networks do. The Bayesians look for ways of
dealing with uncertainty, of feeding new evidence into existing models. As
Domingos writes, “if there’s a limit to what Bayes can learn, we haven’t
found it yet.”12
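The updating mechanism itself is simple. A minimal sketch in Python, with made-up numbers for a spam-style signal, shows a belief being revised three times as new pieces of evidence arrive:

```python
def bayes_update(prior, likelihood, likelihood_if_false):
    """Return P(H | E) given P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: start with a 1% prior that a message is spam.
belief = 0.01

# Each observed signal is 90% likely in spam and 10% likely otherwise.
for _ in range(3):              # three pieces of new evidence arrive
    belief = bayes_update(belief, 0.9, 0.1)
```

After three updates the belief rises from 1 percent to roughly 88 percent, which is exactly the "update a belief whenever there is new evidence" pattern the Bayesians exploit.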
Chris Williams, a University of Edinburgh AI researcher, co-wrote a
book on Gaussian processes and ML. The Gaussian process (GP) is a
particular kind of statistical model that finds practical applications in
different domains and provides good methodology for identifying
uncertainty. Uber has been hiring academics specializing in Gaussian
processes to improve its ride-hailing services. Gary Marcus sold his 15-
person startup Geometric Intelligence to Uber in December 2016. His co-
founder, the University of Cambridge professor of Information
Engineering Zoubin Ghahramani, is a Bayesian who specializes in the
Gaussian process. Further examples include:
• At Google, GPs help control the company’s high-altitude Internet balloons.
• At Whetlab, a startup acquired by Twitter in 2015, their GP technique provides a better way of
designing neural networks. GPs and Bayesian optimization can help automate the tasks. As
Whetlab founder and Harvard computer scientist Ryan Adams has said, the startup used “ML
to improve ML.” Adams has since left Twitter for Google Brain, the search giant’s central AI
team.
• Ben Vigoda founded the AI startup Gamalon utilizing Bayesian methods through a technique
called probabilistic programming.

Some researchers also believe that the small data powers of GPs can play a
vital role in the push toward autonomous AI. Vishal Chatrath, CEO of the
AI startup Prowler, who has worked with Ghahramani, believes that
building autonomous agents requires rapid adaptability to the environment.
Besides, GPs are easy to interpret. By contrast, the results of neural
networks and DL models are often perceived as coming out of a black box.
Current regulatory frameworks, demanding transparency on how
technologies work, might imply a broader adoption of Bayesian or GP
frameworks instead of DL.
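For readers who want to see the mechanics, here is a minimal Gaussian-process regression sketch in pure Python. The RBF kernel, noise level, and data points are all illustrative. Note how the posterior variance quantifies uncertainty: it collapses toward zero at observed points and grows back to the prior far away from them.

```python
import math

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [m - f * c for m, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ys, x_star, noise=1e-9):
    """Posterior mean and variance of a GP at x_star given training data."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    k_star = [rbf(x, x_star) for x in xs]
    alpha = solve(K, ys)        # K^{-1} y
    v = solve(K, k_star)        # K^{-1} k*
    mean = sum(a * k for a, k in zip(alpha, k_star))
    var = rbf(x_star, x_star) - sum(vi * k for vi, k in zip(v, k_star))
    return mean, var

# Three illustrative observations; query a point between them.
mean, var = gp_predict([0.0, 1.0, 2.0], [0.0, 1.0, 0.0], x_star=1.5)
```

The variance output is what makes GPs attractive for the use cases above: the model reports not just a prediction but how much it should be trusted.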

“Deep Learning Is Eating Machine Learning”



DL systems represent a dynamic computing architecture that learns
very well, but without much transparency about how it learns.13 As we
have seen in the history of DL, this technology is historically connected to
the use of ANNs. It has been around since the 1940s, but has only recently
begun to fulfill its promise thanks to today’s exponential proliferation of
data, along with the availability of cheap processing power.14

Table 3.2 A Brief History of DL15

DL started with the work of Warren McCulloch and Walter Pitts. In 1943, they published “A
Logical Calculus of the Ideas Immanent in Nervous Activity,” in which they outlined the first
computational model of a neural network. This paper served as the blueprint for the first ANNs.
In 1949, Donald Hebb published “The Organization of Behavior,” which argued that the
connections between neurons strengthened with use. This concept proved fundamental to
understanding human learning and how to train ANNs.
In 1954, Belmont Farley and Wesley Clark, using the research done by McCulloch and Pitts, ran the
first computer simulations of an artificial neural network. These networks of up to 128 neurons
were trained to recognize simple patterns.
In the summer of 1956, computer scientists met “to act on the conjecture that every aspect of
learning or any other feature of intelligence can in principle be so precisely described that a
machine can be made to simulate it.” This event, known as the Dartmouth Conference, is
considered the birthplace of AI.
In 1975, Ukrainian Soviet scientist Alexey G. Ivakhnenko was able to train an eight-layer neural
network with limited computation power.
In 1997, Jürgen Schmidhuber and his student Sepp Hochreiter published a paper that proposed a
method for how artificial neural networks—computer systems that mimic the human brain—could
be boosted with a memory function, by adding loops that interpreted patterns of words or images in
light of previously obtained information. They called it long short-term memory (LSTM). For
instance, while a recurrent neural network (RNN) could recognize verbs or adjectives in a sentence
and whether they are used correctly, an LSTM might be able to remember the plot of a book.
In 2006, Geoffrey Hinton introduced an algorithm that could fine-tune the learning procedure used
to train ANNs with multiple hidden layers. Hinton and his team reportedly coined the term “deep
learning” to rebrand ANNs. Today, Hinton believes that unsupervised learning, not
backpropagation, is the best way to implement DL.
DL finally took off with the dataset ImageNet (which started in 2006 and obtained its first results in
2009), which Fei-Fei Li, a professor at Stanford and also currently a scientist at Google, started with
her students. It consists of one million images in 1000 classes. The size of the dataset made training
of DL models possible.
In 2015, Google announced that it had managed to improve the error rate of its voice recognition
software by almost 50 percent using LSTM. This is the type of system that powers Amazon’s
Alexa. In 2016, Apple announced that it was using LSTM to improve the iPhone.

In March 2016, Google’s AlphaGo computer system sealed a 4-1 victory over South Korean Go
grandmaster Lee Sedol using DL. AlphaGo also scored a victory against Chinese Grandmaster Ke



Jie in May 2017. This time, AlphaGo was even more efficient, requiring ten times less
computing power than in 2016. It used TPUs, the Google chipset.

Carlos E. Perez, AI strategist and practitioner at Intuition Machine,
believes that DL will most probably diverge into two types. One will be
about building powerful narrow intelligence. This branch will continue to
specialize in tools and machines solving specific problems, e.g.,
DeepMind’s AlphaGo, which has combined DL, Monte-Carlo Tree
Search, and Reinforcement Learning to solve the game of Go. This narrow
type of DL will ultimately be well defined and controllable.
The second type will focus more on adaptable and biologically inspired
automation. It will be driven by mechanisms found in biological systems.
Robotic applications will require adaptability to an environment to survive.
This intelligence does not require replicating human
intelligence. Many biological life forms we know possess incredible
intelligence, and yet they are nothing like humans. We believe Carlos E.
Perez is right to say that DL is a correct starting point to achieve such
biologically inspired machines. These machines will not provide full
transparency in terms of their design, goals, and adaptability mechanisms.
This branch of DL will be less controllable, and thus the possible
harbinger of a more dangerous kind of AI.16
Apple’s director of AI, Ruslan Salakhutdinov, believes that the DL
neural networks will be further supercharged in coming years by the
addition of more memory, attention, and general knowledge. To
paraphrase Andreessen’s comment “Software is eating the world,” “DL is
eating ML.” In the near future, we expect that DL will be dominated by
further progress in CNN for image recognition and classification (e.g., in
robotics and autonomous driving),17 meta-learning,18 and reinforcement
learning.
There are regulatory drawbacks to keep in mind when using DL. When
a neural net determines that certain features are important and makes
decisions based on them, we do not know why. This means that if the
system contains corrupted data or human bias, we will not be able to
pinpoint that these exist, which can be dangerous for specific uses that
could impact individuals and society, such as in health care, finance, and
law enforcement. As MIT Technology Review’s Will Knight puts it, wide
implementation “won’t happen—or shouldn’t happen—unless we find



ways of making techniques like DL more understandable to their creators
and accountable to their users.”19

Table 3.3 Applications Using DL

Classification: Real-time Conversational Assistance; Brain Tumor Detection; 2D Object Classification; Gesture Recognition; Detecting Spam E-Mail; Automated E-Mail Replies; Face Identification; Infrared Colorization
Translation: Real-time Speech Translation; Translation of Images of Text; Speech Synthesis; Conversational Interfaces
Planning and Prediction: Self-Driving Cars
Generation and Optimization: Reverse Engineering Biological Processes; Fluid Simulation; Reducing Traffic; Face Tracking in Augmented Reality

Starting in May 2018, the European Union will implement the General
Data Protection Regulation (GDPR), which may require businesses to
explain decisions made by algorithms. Models defining mortgages, credits,
investment, or any other financial tool may become subject to such
regulation. Recommendation engines programmed automatically might hit
regulatory limits as well—even the engineers building these algorithms
cannot fully understand how they function.20
Companies and researchers are working to overcome this problem.
MIT Technology Review reports that the neural network architecture
developed by Nvidia’s researchers is designed to stress the parts of a video
image that influence the behavior of a car’s deep neural network most
strongly. Interestingly, such networks are focused mostly on the edges of
roads, lane markings, and parked cars—just the sort of things that a human
driver would pay attention to. According to Urs Muller, Nvidia’s chief



architect for self-driving cars, “what’s revolutionary about this is that we
never directly told the network to care about these things.”
Several scientists are exploring the issue as well. Jeff Clune at the
University of Wyoming and Carlos Guestrin at the University of
Washington (and Apple) have identified how to highlight the areas of
images that classification systems are picking up on. Tommi Jaakola and
Regina Barzilay at MIT are working on providing snippets of text that
support the generation of conclusions from large quantities of written data.
DARPA is financing a number of projects within its Explainable AI (XAI)
program.21
Explanations about how DL works might come from researchers
dealing with swarm AI, studying the collective behavior of decentralized,
self-organized systems. In such constructs, interactions between individual
agents lead to the emergence of an intelligent “global” behavior unknown
to the individual agents, similar to natural systems like ant and bee
colonies.
Traditional companies would be well advised to support research in
DL. Not only is such an effort educational, it also provides access to
superior talent and a better sense of the timeline in which certain
technologies might go mainstream.

Approaches to Learning: Supervised, Unsupervised, and Reinforcement Learning
ML means learning from data. There is a vast array of problems that
can be solved by providing the right training data to the right learning
algorithm. ML techniques are not mutually exclusive and can be leveraged
in different combinations, depending on the task and available dataset, and
can even be tried on the same dataset to determine which one works best.

Supervised
Supervised learning algorithms (or classification) make predictions
based on a set of examples. With this method there is an input variable that
consists of labeled training data and a desired output variable. The
algorithm is used to analyze the training data to learn the function that
maps the input to the output.



When the data are used to predict a categorical variable, supervised
learning is also called classification. This happens when assigning a label
or indicator, e.g., cat or dog, to an image. When there are only two labels,
we talk about binary classification. When there are more than two classes,
we refer to multi-class classification.
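A toy nearest-neighbor classifier illustrates the idea of learning a mapping from labeled examples to outputs. The animal measurements below are invented for illustration; real systems use far richer models.

```python
def nearest_neighbor_classify(train, point):
    """Predict a label for `point` from labeled examples:
    supervised learning reduced to its simplest form (1-nearest neighbor)."""
    features, label = min(
        train,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], point)),
    )
    return label

# Labeled training data (invented): (weight_kg, ear_length_cm) -> species.
train = [((4.0, 7.0), "cat"), ((30.0, 12.0), "dog"),
         ((3.5, 6.5), "cat"), ((25.0, 11.0), "dog")]

prediction = nearest_neighbor_classify(train, (5.0, 7.5))  # -> "cat"
```

The labels in `train` are the "desired output variable" of the definition above: the algorithm's only job is to map new inputs onto them.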

Table 3.4 Types of ML Depending on Data Availability and Their Characteristics

Supervised: Identifies patterns in data when given predetermined features and labeled data. Example: Insurance Underwriting
Unsupervised: Identifies patterns in data, which is particularly helpful for unlabeled and unstructured data. Example: Spam Filtering in Gmail
Semi-Supervised: Blend of supervised and unsupervised learning; best in situations where there are some labeled data, but not much. Example: Customer Segmentation
Reinforcement: Provides feedback to the algorithm as it trains; experience-driven decision making. Example: Playing Chess

Semi-Supervised
The major challenge with supervised learning is that labeling (naming,
defining) data is expensive and time consuming. If labels are limited, a
company can use unlabeled examples to enhance supervised learning.
Because a computer is not fully supervised in solving such a task, we talk
about semi-supervised learning, in which a large pool of unlabeled examples
is combined with a small amount of labeled data, which can lead to better
learning accuracy.
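A minimal self-training sketch in Python illustrates the idea: points that can be pseudo-labeled with confidence are promoted into the labeled set, which in turn lets further unlabeled points be labeled. The 1-D data and confidence threshold are invented.

```python
def nearest_label(labeled, x):
    """Distance-based label lookup used as the base supervised learner."""
    xs, y = min(labeled, key=lambda ex: abs(ex[0] - x))
    return y, abs(xs - x)

def self_train(labeled, unlabeled, confidence=1.0):
    """Iteratively pseudo-label unlabeled points close to labeled ones."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    changed = True
    while changed and unlabeled:
        changed = False
        for x in list(unlabeled):
            y, dist = nearest_label(labeled, x)
            if dist <= confidence:      # only adopt confident pseudo-labels
                labeled.append((x, y))
                unlabeled.remove(x)
                changed = True
    return labeled

# Two labeled points and a chain of unlabeled ones (illustrative 1-D data).
result = self_train([(0.0, "a"), (10.0, "b")], [1.0, 2.0, 3.0, 8.5, 9.2])
```

Only two examples were labeled by hand; the other five labels were propagated from them, which is exactly the economics that makes semi-supervised learning attractive.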

Unsupervised or Predictive
When performing unsupervised (or predictive) learning, the computer
is presented with completely unlabeled data. It is asked to discover the
intrinsic patterns that underlie the data, e.g., their clustering structure,
sparse tree, graph, or the like. In unsupervised ML, there is no
predetermined correct answer that the algorithm needs to learn to predict.



In clustering, data examples that are more similar to one another than to
examples in other groups are grouped together. This can be used to segment
the whole dataset into several groups. Analysis can be performed in each group to
identify intrinsic patterns.
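A minimal 1-D k-means sketch in Python shows clustering in action; the data values are invented and clearly fall into two groups, which the algorithm discovers without any labels.

```python
import random

def kmeans_1d(data, k, iters=20, seed=0):
    """Minimal 1-D k-means: group unlabeled values around k centroids."""
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                       # assign each point to nearest center
            i = min(range(k), key=lambda c: abs(x - centers[c]))
            clusters[i].append(x)
        for i, cl in enumerate(clusters):    # move each center to its cluster mean
            if cl:
                centers[i] = sum(cl) / len(cl)
    return sorted(centers)

# Unlabeled data with two obvious groups (made-up values).
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]
centers = kmeans_1d(data, 2)
```

There is no "correct answer" supplied anywhere in the code; the structure emerges from the data alone.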
In dimension reduction, the number of variables under consideration
will be reduced. In many applications, the raw data have many
dimensional features, and some features are redundant or irrelevant to the
task. Reducing the dimensionality helps to find the true, latent relationship.
Overall, unsupervised learning is similar to the ways humans solve
problems and learn. Researchers around the world are very focused on this
methodology, as it seems the most promising to implement DL at scale.

Reinforcement
In reinforcement learning (RL), the behavior of an agent is optimized
based on feedback from the environment.
Reinforcement learning is one of the areas that is particularly
noteworthy in terms of its potential impact on the future of digital products
and services and the democratization of AI.22 This framework drives ML
attention from pattern recognition to sequential decision-making that is
experience driven. RL learns by trial and error, inspired by the way
humans learn new tasks. In a typical RL setup, an agent is tasked with
observing its current state in a digital environment and with taking actions
that maximize the accrual of a long-term reward for which it has been
incentivized. The agent receives feedback from the environment as a result
of each action so that it knows whether the action promoted or hindered its
progress.
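A toy Q-learning sketch in Python captures this loop: an agent on a five-cell track learns, purely from reward feedback, that moving right reaches the goal. All hyperparameters and the environment are illustrative.

```python
import random

random.seed(0)

# Toy environment: states 0..4 on a track; reward only at the right end.
N_STATES, ACTIONS = 5, (-1, +1)             # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2       # illustrative hyperparameters

for _ in range(200):                        # episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0      # environment feedback
        # Q-learning update: nudge the estimate toward reward plus lookahead.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The learned policy: the best action in each non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

No state is ever labeled "good" or "bad" up front; the agent discovers the winning behavior entirely from the reward signal, the defining trait of RL.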
Google’s DeepMind popularized this approach in Atari video games and
the game of Go. An important advantage of using RL agents in environments that can
be simulated (e.g., video games) is that training data can be generated in
droves and at very low cost. This is in marked contrast to supervised DL
tasks, which often require training data that are expensive and difficult to
procure in the real world. AlphaGo Zero (AGZ), the newest DeepMind
technology, relies on self-play reinforcement learning, starting from
random play without any supervision or use of human input. It uses a
single neural network, rather than separate policy and value networks.
Within three hours, AGZ played as well as a human beginner. Within a
couple of days of self-play, AGZ became the world’s best Go player.



Additionally, it learned to play chess without any human in the loop. The
biggest achievement in AGZ is its “artificial intuition,”23 which didn’t
exist in previous technologies. The self-play implies millions of small
adjustments, allowing for an evolution of something as close to human
intuition as possible to recognize good patterns and learn from them to
develop winning strategies.24
Yann LeCun has memorably highlighted the enormous importance of
unsupervised learning with a cake metaphor. He said, “If intelligence was
a cake, unsupervised learning would be the cake, supervised learning
would be the icing on the cake, and reinforcement learning would be the
cherry on the cake. We know how to make the icing and the cherry, but we
don’t know how to make the cake.”25 Geoffrey Hinton believes that neural
networks can become intelligent with unsupervised learning, which
implies getting rid of many previously celebrated techniques, e.g.,
backpropagation.26

Further Approaches to Learning: Solving Small Datasets and Privacy Problems
Transfer Learning
Many brilliant minds in the AI industry and AI research are working
on making neural networks function with smaller amounts of data.
Transfer learning is a promising approach to solving this problem.
Through this process, a network is first trained on a large dataset to
solve one problem; the knowledge it acquires is then reused to help solve
a second, related problem for which less data is available.
This mimics much of human learning. We solve artificial problems to
develop underlying insights that help us solve real problems.27
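A deliberately simple sketch of the idea in Python: a line-fitting "network" learns its slope on a large source dataset, and only the remaining parameter is fine-tuned on two points from the target task. The datasets and the parameter-freezing scheme are invented for illustration and are far simpler than real transfer learning.

```python
def fit_line(points, slope=None, iters=2000, lr=0.01):
    """Fit y = w*x + b by batch gradient descent; freeze w if `slope` given."""
    w = slope if slope is not None else 0.0
    b = 0.0
    for _ in range(iters):
        gw = gb = 0.0
        for x, y in points:
            err = (w * x + b) - y
            gw += err * x
            gb += err
        if slope is None:               # only learn the slope when not frozen
            w -= lr * gw / len(points)
        b -= lr * gb / len(points)
    return w, b

# "Source" task: plenty of data from y = 2x + 1 (synthetic, illustrative).
big_task = [(x, 2 * x + 1) for x in range(-5, 6)]
w_pre, _ = fit_line(big_task)

# "Target" task: only two points from y = 2x + 5; transfer the learned slope.
small_task = [(0, 5), (1, 7)]
w, b = fit_line(small_task, slope=w_pre)
```

Two data points would never be enough to learn both parameters reliably; reusing the slope learned on the large task makes the small task solvable.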

Federated Learning
Internet giants like Netflix and startups like x.ai both benefit from
network effects—the more data they receive, the better their algorithms
work and the more effectively they attract new customers. In a B2B
context, it is more difficult to unleash data network effects because
enterprises are protective of their data for both regulatory and competitive
reasons. Some interesting solutions are appearing, however. Google



Research published a paper on federated learning, in which the idea is to
enable collaborative ML without actually pooling the data. This addresses
privacy concerns and opens the door to allowing data network effects.28
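The core idea of federated averaging can be sketched in a few lines of Python: each client computes an update on data that never leaves it, and the server averages only the resulting model weights. The single-weight model, clients, and data below are invented.

```python
def local_update(weights, data, lr=0.1):
    """One gradient step on a client's private data (the data never leaves)."""
    w = weights
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, clients):
    """The server averages the clients' updated weights, not their data."""
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)

# Each client holds private samples of the same relationship y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)], [(0.5, 1.5), (4.0, 12.0)]]
w = 0.0
for _ in range(100):
    w = federated_round(w, clients)
```

Only the scalar `w` crosses the network in each round, which is what lets the participants pool learning without pooling data.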

Generative Adversarial Network


Generative adversarial networks (GANs) are used in cases in which
there is a need to produce synthetic data, thereby increasing the size and
quality of a dataset. A GAN is a system in which two neural
networks act as adversaries. One generates an output and the other
checks the quality of that output, determining whether it is true and looks
the way it should. For example, when trying to generate a picture of a cat,
the generator (the first neural network) will make an image, and then the
discriminator (the second neural network) will force the generator to try
again if it cannot recognize a cat in the first image.29
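A toy 1-D sketch in Python shows the adversarial loop: a one-parameter generator learns to place its samples where the discriminator judges them "real." Real GANs use deep networks on images; every number and the model forms here are illustrative.

```python
import math
import random

random.seed(1)

def sigmoid(u):
    u = max(-60.0, min(60.0, u))        # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-u))

# Real data clusters around 4.0 (invented). Generator: x = b + 0.1*z.
# Discriminator: D(x) = sigmoid(w*x + c), its guess that x is real.
w, c, b = 0.0, 0.0, 0.0
lr_d, lr_g = 0.05, 0.01

for _ in range(5000):
    x_real = 4.0 + 0.1 * random.gauss(0, 1)    # a "real" sample
    x_fake = b + 0.1 * random.gauss(0, 1)      # a synthetic sample
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    w += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr_d * ((1 - d_real) - d_fake)
    # Generator step: ascend log D(fake), moving b to fool the discriminator.
    b += lr_g * (1 - sigmoid(w * x_fake + c)) * w
```

The generator never sees the real data directly; it improves only through the discriminator's feedback, which is the defining adversarial mechanic.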

Table 3.5 AI/ESG Considerations: Machine Learning

Environmental: The opacity of AI in robots: possible environmental impact
Social: Social and labor dangers and downsides of AGI; algorithms that make individual and social decisions
Governance: The possible negative impact of GDPR on AI innovation development in the EU and globally; the need to develop AI data privacy governance; engineers’ inability to fully understand governance aspects of DL in products; AI products and cybersecurity governance



CHAPTER FOUR

AI Technologies: From
Computer Vision to Drones

In this chapter, we discuss prominent AI fields that make applications such
as autonomous driving, Alexa or Siri, or a synthesized voice of President
Obama possible. These areas are computer vision, language processing
(including bots and chatbots), robotics (including drones), and emotion
measurement technology, a new field that will one day contribute to
achieving AGI.

Computer Vision
Computer vision has been the sub-area of AI most transformed by the
rise of DL. Superior progress has been achieved because of large-scale
computing, especially on GPUs; the availability of vast datasets, including
via the Internet (e.g., classification on ImageNet);1 and optimized neural
network algorithms. As a result, some computers are now capable of
performing certain vision tasks better than people.
Although machine vision is already very good at collecting and
tagging data, e.g., recognizing faces, the development of prediction,
assessment, and analytics to detect temporal aspects is still in its infancy.
Taking an unstructured, real-time data stream to understand more about
the signal itself and the content will be a necessity when humans apply
machine vision to robots that are interacting with objects, for example.
This will be crucial for manufacturing applications and for service robots
like those that will take care of the elderly, robot cooks, etc.



Behavioral imaging is another growing field. The ability of machines
to watch and analyze videos of moving people, and then be able to predict
what might happen next, could be used in many industry sectors, including
health care, education of children with autism, and similar applications.
We are still not close to building a system that would give a robot the
vision to react and interpret in an AGI-type of scenario. However,
according to Irfan Essa, a professor at Georgia Tech and a former MIT
Media Lab researcher, there are more limited forms of embodiment that
will have practical use, e.g., machines asking a question at the right time.2
Greater strides have been made through machines that are able to
identify objects in more dynamic scenes. This is especially relevant to the
aspects of the automotive and tech industry focused on the development of
self-driving cars. Vision systems require sensors and cameras to detect and
identify the complex forms and movements of pedestrians, animals,
landmarks, and other real-world phenomena.
Big Internet companies have been investing a lot of effort in
optimizing computer vision systems. For example, Facebook’s Lumos
computer vision platform, developed to improve the experience of
Facebook for visually impaired users, is now optimizing an image content
search for all members of the Facebook community. Now it is possible to
search for images through keywords that describe the contents of a
photo, rather than being limited to tags and captions. To introduce the
feature, Facebook trained a deep neural network on tens of millions of
photos.
Pinterest’s visual search feature has been continuously improved to let
users search images by the objects within them. This makes photos
interactive and commercializable.3 Meanwhile, Shutterstock has applied
ML to reimagine and rebuild the visual search process. Its technology
relies on data within images, instead of on pixels. It has studied over 70
million images and 4 million video clips; broken them down into their
principal features; and recognized what is inside each and every image,
including shapes, colors, and even the smallest of details.4
Cortica is basing its technology on neurology. The company builds its
models from the bottom up, first using unstructured random data to train
the machine to recognize and cluster them, and then labeling or naming
what actually happens. Applications of this technology include organizing
and putting context around mobile consumer image data. Other
applications are in e-commerce and gaming.5



Language Processing
Language is tremendously complex. Even DL approaches have not
been able to achieve much success on language in realistic situations
involving lengthy sentences, grammatical nuances, and the expression of
sentiment, accentuation, and accents.6

Table 4.1 Examples of NLP Applications

Health Care: Summarizing doctors’ notes for billing; moving differently formatted medical records across medical and administration providers
Law: Research for legal documents
Financial Services: Gathering insights based on sentiments in world news or social media

• Natural language processing (NLP) is what happens when computers read language. NLP
processes turn text into structured data.
• Natural language generation (NLG) is what happens when computers write language. NLG
processes turn structured data into text.7
• Natural language understanding (NLU) is what happens when computers understand language.
This implies sensing the semantics of language, including sarcasm, grammatical errors made
by mistake or by choice, and differences in accentuation and voice modulation that occur
through pronouncing whole sentences and emphasizing one word over another. This is the
most difficult part of language processing, and it requires a lot of R&D.
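As a toy illustration of "text in, structured data out," a single pattern can pull fields from a sentence. The pattern and field names are invented for illustration; real NLP pipelines are far more robust than a regular expression.

```python
import re

def parse_order(text):
    """Pull structured fields out of free text with a simple pattern:
    a toy stand-in for NLP information extraction."""
    m = re.search(r"(\d+)\s+(\w+)\s+(?:to|for)\s+([A-Z]\w+)", text)
    if not m:
        return None
    return {"quantity": int(m.group(1)),
            "item": m.group(2),
            "recipient": m.group(3)}

record = parse_order("Please ship 3 laptops to Berlin by Friday.")
```

Reversing the arrow, NLG would render such a record back into a sentence; NLU is what remains hard, because no fixed pattern captures sarcasm, emphasis, or intent.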

The global NLP market was valued at US$277.2 million in 2015, and is
expected to reach US$2.1 billion by 2024, according to a study by
Tractica, a market intelligence company. The e-commerce sector was the
second-largest end-user segment, accounting for 12 percent of the market
in 2014.8
Rob May of Talla, which builds intelligent conversational enterprise
applications, believes that in five years the natural language stack could
have the following components:9
• Voice input at the top. Voice to text is already relatively well solved for most common cases.
• Next is natural language processing (NLP), which consists of parsing text (from voice or from
direct input) and pulling out the key components. At this point NLP is good, moving toward
great, with parsing at near-human-level success rates.
• Finally, further down, is an understanding of inference and reasoning in language. AI with
bodies (e.g., robots) may be required to help ground language in some other format, or perhaps
new visualization techniques might help, but these are all still very nascent. Rob May expects
that the intersection of DL and visual models could become a very helpful way to solve this
problem.



When it comes to different types of natural language goals, such as text
summarization vs. answering questions, it seems likely that a single
platform will be able to solve them all in the coming years. In other words,
we won’t see dramatically different technologies for each type of problem.
In fact, several experts believe that many natural language problems could
be reframed as machine translation issues, and similar approaches could be
used to solve them all.10
Today, there are multiple large companies offering cloud services for
AI and NLP:11
• Google has its Cloud ML Engine.
• Microsoft has a series of Cognitive services APIs through Azure.
• IBM’s Watson Ecosystem has spread out into various sectors, and its natural language
processing services have been at the forefront.
• Amazon, the biggest cloud player, has many ML tools offered through Amazon Web Services
(AWS), such as Rekognition, Polly, and Lex.

There are several services available to engineers, depending on the level
of customization they are interested in. On the one hand, there are
open-source frameworks and application programming interfaces (APIs)
like TensorFlow, Stanford's CoreNLP suite, Caffe, Theano, Torch, and
CNTK. These technologies are helping to democratize ML, as their users
do not need a PhD to implement them and can focus instead on training
models on their own datasets.
On the other hand, for faster hypothesis testing, a team in a traditional
business or in a startup can purchase services like API.ai, Watson
Conversation, Amazon Lex, or MS Cognitive Services. They are all
comparable in pricing, and some of them offer free credit for startups,
developers, and entrepreneurs, e.g., IBM Global Entrepreneurship and
AWS Activate Program.
If machines are to use language like humans, we need to learn how to
embed a synthetic voice into audio. Technology developed by Princeton
University computer scientists called “VoCo” makes it simple to add or
replace a word in an audio recording of a human voice by easily editing a
transcript of the recording. New words are automatically synthesized into
the speaker’s voice, even if they don’t appear anywhere else in the audio
file. The software is still in the research stage. VoCo is based on an
algorithm that searches the voice recording and chooses the best possible
combination of phonemes (partial word sounds) to build new words into
the speaker’s voice. If a synthesized word doesn’t appear to be authentic
enough, VoCo offers users several versions of the word to choose from.
Tests have been very encouraging.12
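The core selection step can be sketched as a tiny unit-selection routine. This is our own simplification for illustration, not Princeton's actual VoCo algorithm: for each target phoneme, pick the recorded snippet with the lowest quality cost, charging a fixed penalty for every join between units.

```python
# Hypothetical unit-selection sketch (not the actual VoCo code): choose,
# for each target phoneme, the recorded snippet with the lowest quality
# cost, adding a fixed join cost between consecutive units.
def select_units(target_phonemes, inventory, join_cost=0.1):
    """inventory maps phoneme -> list of (snippet_id, quality_cost)."""
    chosen, total = [], 0.0
    for i, phoneme in enumerate(target_phonemes):
        snippet, cost = min(inventory[phoneme], key=lambda c: c[1])
        total += cost + (join_cost if i > 0 else 0.0)
        chosen.append(snippet)
    return chosen, round(total, 3)
```

A real synthesizer would weigh acoustic continuity between neighboring units rather than a flat join cost, but the structure of the search is the same.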

Bots and Chatbots


Bots are software applications designed to perform specific automated
tasks. Bots are ubiquitous in today’s technology, ranging from malicious
bots that carry viruses to Internet bots such as “web spiders.” These are
automated scripts that read vast quantities of website information and
itemize the results to create entries for search engines much more quickly
than a human can.13 Chatbots, or computer programs that mimic
conversation, are becoming increasingly popular as well.
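At their simplest, chatbots are keyword matchers. The sketch below is a hypothetical example of ours, not any vendor's product; commercial bots layer NLP, context tracking, and ML ranking on top of this bare pattern.

```python
# Minimal keyword-matching chatbot -- the simplest pattern behind
# "computer programs that mimic conversation."
RULES = [
    (("hello", "hi"), "Hello! How can I help you?"),
    (("refund", "return"), "I can start a return for you."),
    (("hours", "open"), "We are open 9am to 6pm, Monday through Friday."),
]

def reply(message):
    """Answer with the first rule whose keywords overlap the message."""
    words = set(message.lower().split())
    for keywords, answer in RULES:
        if words & set(keywords):
            return answer
    return "Sorry, let me connect you to a human agent."
```

Note the final fallback line: handing off gracefully to a human is exactly the bot-to-human switching problem discussed later in this section.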
Bots will change how customer service is delivered. It will just take a
couple more years to get to a level of accuracy that works.14 The use of
chatbots by companies in the financial and healthcare industries will save a
collective US$22 billion in time and salaries by 2022, according to Juniper
Research.15
Popular examples of chatbots include Apple’s Siri and Microsoft’s
Cortana, the personal assistant for Windows 10. Twitter is working on features that
make it easier for people to know whether they are talking to bots or
humans.

Table 4.2 From Apps to Chatbots in the Big Technology Companies

Company: Specifics
Facebook: Announced the business on Messenger platform in 2016, and by summer 2017 had over 100,000 chatbots on the platform.
Microsoft: Implemented Cortana in nearly all its products, launched chatbots for Skype, and provided developers with development kits for chatbots.
Amazon: In summer 2017, led the market with the home speaker Echo and the assistant Alexa, and allowed developers to create app-like skills for that new platform.
Google: Launched an assistant in 2017, including a home speaker, and offered the developer tool API.ai, among other features.
Apple: Launched Siri with the iPhone 4S in 2011 with modest quality. However, by 2017, it had enabled businesses to communicate directly with over 700 million iPhone users through iMessage. Apple Pay is natively integrated to enable transactional use cases. Introduced HomePod in 2017 and started building a developer ecosystem around all messaging and speaker services.

Puneet Mehta, CEO of the bot startup msg.ai, has said that it will be up
to platforms like Twitter and Facebook to set expectations and rules for
users as interactions with bots become mainstream. When do you switch
from bot to human? What happens when it switches back to AI? The lack
of a friction-free experience is currently holding back large companies
from fully embracing messaging apps as a means of customer service and
e-commerce. We believe that a lot of domain expertise is required to make
a bot “right.” However, training bots can be helped by crowdsourcing.
No one has yet delivered a platform or framework that has made bots
centered on natural language interactions easy for developers to actually
build. Among many others, Facebook’s Wit.ai, Amazon’s NLP for Echo,
and even Google’s speech APIs are all far from being sufficiently evolved
to be useful for any developer.

Robotics
Robotics is a branch of technology dealing with robots. Robots are
programmable machines used to carry out a series of actions autonomously
or semi-autonomously. These actions often require extreme precision.
These robots may take on work that humans find tedious or hazardous.
The autonomous robotics approach suggests that machines could be
controlled by AI software. These robots should be able to function in
uncertain and constantly changing environments, and therefore be powered
by models with superior learning and adapting capabilities: “Scientific
American reported that by 2050, a single robot would be able to perform
100 trillion instructions per second. It took 50 years for the world to install
the first million industrial robots. The next million will take only eight,
according to Macquarie.”16 Manufacturing robots capable of performing
repetitive tasks are about one-tenth the price they were ten years ago.
Today, China is installing more industrial robots than any other country in
the world.17 Gill Pratt, former head of DARPA’s Robotics Challenge,
assumed in 2016 that robotics is facing a “Cambrian explosion”—a period
of rapid diversification: “Although a single robot cannot yet match the
learning ability of a toddler, Pratt has pointed out that robots have one
huge advantage: humans can communicate with each other at only 10 bits
per second—whereas robots can communicate through the Internet at
speeds 100 million times faster. This could result in multitudes of robots
building on each other’s learning experiences at lightning speed.”18 The
key drivers behind the robotic Cambrian explosion include data,
algorithms, networks, the cloud, and exponential improvements in
hardware. Robots will be increasingly used wherever work is dull, dirty,
dangerous, or expensive.

Table 4.3 Robot Examples

Industrial Vertical: Robots
Personal and Family Companions: Kirobo, Toyota; Buddy, bluefrogrobotics.com
Assistive Robots: Romeo, Aldebaran Robotics; Nao, Aldebaran Robotics; Pepper, Aldebaran Robotics
Patient Care and Therapies: Paro, parorobots.com; Huggable, MIT Media Lab; Robear, Mukai; RIBA (Robot for Interactive Body Assistance), RTC (RIKEN-TRI [Tokai Rubber Industries] Collaboration Center for Human-Interactive Robot Research)

Current robotic projects mostly focus on three major families of tasks:
locomotion (moving from point A to point B), navigation (moving in a
complex environment, e.g., inside a building), and manipulation (grasping
an object, placing objects in various locations).19
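The navigation family, for instance, reduces at its core to path search. A minimal sketch of our own uses breadth-first search to find the shortest route on a grid map with obstacles; real robots add localization, uncertainty, and continuous motion planning on top.

```python
from collections import deque

# Breadth-first search for a shortest path on a grid map, a toy
# version of the robot "navigation" task (1 = obstacle, 0 = free).
def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # goal unreachable
```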

Drones
Chris Anderson, CEO of drone maker 3D Robotics, calls personal
drones the “peace dividend of the smartphone wars, which is to say that
the components in a smartphone—the sensors, the GPS, the camera, the
ARM core processors, the wireless, the memory, the battery—all that stuff,
which is being driven by the incredible economies of scale and innovation
machines at Apple, Google, and others, is available for a few dollars. They
were essentially ‘unobtainium’ 10 years ago. This is stuff that used to be
military industrial technology; you can buy it at RadioShack now.”20
Kespry, a commercial drone company, is implementing Nvidia’s Jetson
TX1 AI technology to offer the construction industry a way to keep track
of their expensive equipment and allow them to remotely manage multiple
work sites at any given time. Drones fly over farming fields, scanning
them for crop health. Drone data enable farmers to do so-called precision
agriculture. Property insurers use drones to assess buildings damaged by
storms. Sky Futures, a UK company, specializes in flying drones around
oil rigs in the North Sea to inspect equipment.21 A collaborative European
research project has developed software enabling drones to autonomously
detect and follow forest paths to speed up the search and rescue of missing
people in forested and mountain areas. In the U.S., Federal Aviation
Administration (FAA) regulations have made drone test flights more
complicated. To overcome this development hurdle, drone startups like
Zipline International have tested in less-regulated areas such as parts of
Africa, while others focus on terrestrial drones. It is unclear how large
companies like Alphabet or Amazon will approach this challenge.

Emotion Measurement Technology


Emotion measurement technology is a newly emerging field of AI. The
idea is that devices should be able to sense and adapt to emotions like
humans do. Theoretically, this can be done through an understanding of
changes in facial expressions, gestures, and speech. The ability to read and
react to emotions can be applied to bots; social robots; personal assistants
on a phone, like Siri; and even to similar applications in a car.
Social scientists who study emotions have found that only up to 10
percent of the emotional meaning of a message is conveyed through
words. Between 35 and 40 percent is conveyed in the tone of voice, and
the remaining roughly 50 percent through facial expressions and bodily
gestures.
Affectiva, a spin-off of the MIT Media Lab, collected an emotion
database with nearly 6 million faces from 75 countries, including face
videos. Researchers analyzed 38,944 hours of data representing nearly 2
billion facial frames. The data are global, spanning ages and ethnicities, in
a variety of contexts and situations (e.g., people sitting on their couches or
people driving cars). Affectiva applies ML and DL to analyze these data.
The company partners with another firm called Brain Power to bring use
cases to market. As an example, Brain Power uses Affectiva technology
and Google Glass “to help autistic children with emotion recognition and
understanding.”22

Meanwhile, Berlin-based Musimap SA uses AI to capture human
emotions and to build user profiles depending on their music taste. The
company has built a large proprietary database for their computation.
New York–based Emoshape introduced a chip called EPU II (Emotion
Processing Unit), the industry’s first emotion synthesis engine. It delivers
computer emotion awareness. The company’s founder, Patrick Levy-
Rosenthal, extended Paul Ekman’s theory by using not only 12 primary
emotions identified in psycho-evolutionary theory but also pain, pleasure,
frustration, and satisfaction.23 The EPU models enable computers to
respond to stimuli in line with one of the 12 primary emotions, such as
anger, fear, sadness, disgust, indifference, regret, surprise, anticipation,
trust, confidence, desire, and joy. It achieves up to 94 percent accuracy on
conversation. Relatedly, the London-based Being More Digital works to
capture real-time human emotions on video and develop predictive models
to better understand customer loyalty, satisfaction, or location profile (e.g.,
retail).

Table 4.4 AI ESG Considerations: AI Technologies

Environmental:
• The possible rise of autonomous robots, including military ones
• Drones—autonomous or guided—with both negative and positive impacts
Social:
• Privacy and human rights considerations of “behavioral imaging” and predictive analytics, potentially enabling governments, e.g., to predict who might be a criminal
• Impact of NLP on labor markets: diminishing certain kinds of jobs
• Transformation of customer service labor markets via bots and chatbots
Governance:
• Governments’ ability to manage the technology “Cambrian Explosion”
• Building governance guardrails into robotics, AI, and related products and services
• Privacy and other implications of “emotion measurement technology”

CHAPTER FIVE

AI and Blockchain,
IoT, Cybersecurity, and
Quantum Computing

Key Issues
• There are two major trends no business leader can afford to miss in the world of technology
today. Both trends have their roots in data. The first is the subject of this book—that is, AI.
• The second is the emergence of advanced cryptographic tools and distributed ledgers, also
known as blockchain or distributed ledger technology. Popularized among the general public by the
cryptocurrency bitcoin, blockchains are currently being hailed by some as the future
facilitators of radically different societal systems of trust, identity, and economic exchange.
• Blockchain and AI go hand in hand. Both technologies are deeply horizontal. While ML helps
us to find opportunity and improve decision-making, smart contracts and blockchains can
automate verification of the transactional parts of the process. In essence, ML solves the data
efficiency, discovery, and interpretation problem. Blockchain improves the overall architecture
problem in data verification and organization, exchange and storage, and finally, consumer-
centric monetization. Blockchain has the power to reshape the business of current Internet
giants, empower consumers, and enable strong traditional brands to reshape interactions with
enterprises and consumers. Both technologies reshuffle the concept of identity resolution,
which is a strong driver in monetizing relationships with customers in B2C and B2B.
Blockchain has a great chance to deliver global distributed infrastructure for datasets, which
can be utilized to build narrow AI applications.
• Two other critical technologies—the Internet of Things (IoT) and cybersecurity—cannot be
solved without the involvement of AI and blockchain.
• In the past, we needed humans to identify and mitigate security gaps in companies’
infrastructure and datasets. But increasingly, security researchers are building automated
systems that can work alongside these human agents.
• As more and more IoT devices and services move into our daily lives, we require this kind of
automated and secure deployment.
• The growth of data is driving traditional computing to the edge of its possible performance.
Once quantum computing and quantum algorithms arrive at scale, a period of transition in how
data are processed, stored, and trained will create new technology champions.

Blockchain
A blockchain is a cryptographically secure, decentralized, distributed
database of information. A blockchain includes an immutable record of
events that can be made auditable by every participant in the network.
Public blockchains are theoretically available for anyone to participate in
(given a certain level of IT literacy) and are not administered or controlled
by any central party, such as a government or commercial organization
like Google or Facebook. They can be used, among other things, for the
transfer of economic value between network participants, as in the case of
the world’s first cryptocurrency, bitcoin. ML technologies are used in
architecting blockchain, providing permissions, securing data and
operations, and organizing token distribution among participants.
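The "immutable, auditable record" idea reduces to hash chaining. Below is a minimal sketch of our own; real blockchains add consensus, digital signatures, and peer-to-peer distribution on top of this core.

```python
import hashlib
import json

# Minimal hash-chained ledger: each block commits to its predecessor's
# hash, so tampering with any past record becomes detectable.
def make_block(index, data, prev_hash):
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def build_chain(events):
    chain, prev = [], "0" * 64
    for i, data in enumerate(events):
        block = make_block(i, data, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def is_valid(chain):
    """Recompute every hash and check each back-link."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "data", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Changing the data in any block invalidates its hash, and thereby every later block's back-link, which is what makes the record auditable by any participant.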
Blockchains can be seen as large opaque data vaults, with federated
permission-giving architectures providing some of the most advanced data
encryption and security to individuals who would otherwise have their
personal information exposed to a variety of parties that they don’t (or
shouldn’t) trust within cyberspace. In fact, a public blockchain of personal
information could be designed as a decentralized bank, with billions of
personal vaults containing the keys to who each of us is and the manner in
which we can be engaged by organizations and individuals alike on fair
terms. Some of today’s prototype blockchain-based identity systems will
most probably serve as the future gateways into the digital world.
The first principle of digital engagement would be to provide users
with the infrastructure to secure their personal information with a set of
private keys, as well as a platform with which to share this information
with third parties if and when they desire (and perhaps with an explicit
economic value and rules attached to the usage of these data through
vehicles such as smart contracts). This kind of setup would create a fairer
and more dynamic equilibrium between those who produce information in
cyberspace and those who exploit it.
Currently, centralized companies hold data in their centralized silos.
They control and generate insights from these data, even though the
information does not legally belong to them since there are so many grey
areas in terms of data ownership. For example, Facebook holds the
personal data of over 1 billion users.

With blockchain, by contrast, consumers and businesses can have
“self-sovereign data,” and be in full control over how these data are stored,
encrypted, and used by third parties. This user-centric governance—the
ultimate democratization and decentralization of governance—is nothing
like what we have in the current economy, where many feed the engine but
only a few monetize attention and “eyeballs.”
Blockchain might show a path to tamper-proof ML, minimizing or
even removing adversarial attacks on data and models due to its
unbreachable architecture, strong governance (which allows control over
data access and use), and exposure of the reputation of any AI entity using
it. Enforced identification and reporting of data holders and users might
help to create a set of rules over how AI is built and used, and deliver
sanctions if certain requirements are not met. Blockchain may also change
the nature of software development, as it enables a new technology stack
with both security and collaboration at its core, and transparent governance
on top of it.
Traditional companies can embrace blockchain to liberate their
customers from the grips of advertising-based business models, such as
those imposed by Facebook, Google, and their peers. Besides, blockchain
can be used to reform organizational structures in traditional businesses,
which were shaped before businesses experienced competition by new
companies, leveraging real-time data to deliver products and services.
Any traditional organization can think about its business units as
members in a blockchain community. Trent McConaghy, the founder and
CTO of BigchainDB, a European blockchain company based in Berlin,
believes decentralization unlocks many potential business models. Most
notably, we are currently facing an emerging “tokenized ecosystem,”
where developers create a tokenized public network that acts like a utility,
e.g., a utility for sequential business logic (like Ethereum smart contracts).
The creators keep some of the initial tokens, and as the network grows in
value, they get rewarded. In general, the network rewards those adding
value according to network guidelines or protocols. This business model
has been already tested in open-source software, and can be implemented
in any type of utility.

General Data Protection Regulation (GDPR) and Blockchain

According to the European Commission, “personal data is any
information relating to an individual, whether it relates to his or her
private, professional, or public life. It can be anything from a name, a
home address, a photo, an email address, bank details, posts on social
networking websites, medical information, or a computer’s IP address.”1
GDPR directly touches every consumer in Europe who has a bank
account. Blockchain technology can help all companies get GDPR
compliant more quickly and cheaply, with less liability and better long-
term benefits to users. There are several organizations, from large
enterprises to small startups, that are building products using BigchainDB
to address GDPR regulations and other identity-related issues. PSD2, the
EU’s second revised Payment Services Directive to be implemented in
2018, will catalyze this further. It mandates that European banks offer
application programming interfaces (APIs) by summer 2017. Since banks serve up
customer account data, anything built for PSD2 must also comply with GDPR.
At BigchainDB, there is a “tokenized” read-permission functionality.
Users insert data into blockchain, and these data are immediately
encrypted. A user can then send anyone a token to read the data. It is like
bitcoin (a token), except now the token is used to grant access to data
instead of to pay for a certain product or service. The data could be stored
and used in many ways, for example through being:
• directly in encrypted form on a public blockchain database;
• placed on a separate decentralized file system, e.g., a symmetric key to decrypt data on IPFS
(the InterPlanetary File System); or
• placed on a separate centralized file system, e.g., the data provide an “authorization” access
token to a third-party server that stores the data on the premises of that third party (this is
helpful to meet new data regulation standards like GDPR).

Distributed Infrastructure for Datasets


Blockchain has great capabilities to provide a global infrastructure for
distributed datasets, which can be utilized to build AI applications and
services. In October 2017, BigchainDB announced a new protocol to
enable AI developers to access proprietary and open datasets on a global
scale. This Ocean protocol was designed to enable “1000 data markets to
bloom,” thereby liberating AI from the dominance of full-stack AI giants
such as Alphabet, Facebook, or Microsoft. The first industrial partnerships
were announced to support the effort, among them AVDEX (Autonomous
Vehicles Data Exchange) and DEX (Data Exchange Singapore).
Interestingly, most coding work is done in Europe under the heavy
influence of GDPR.

Internet of Things (IoT)


IoT is the inter-networking of physical “end-points” or
connected/smart devices (be they a vehicle, a phone, a thermostat, a
building, a wearable health device, or a machine in a factory) that have
embedded electronics, software, sensors, and network connectivity that
enable these devices to collect and exchange data.
In 2016, Andy Rubin, the creator of Android, announced his effort to
launch an operating system powered by ML to achieve horizontal
compatibility, so that developers can easily write applications and run
them on multiple devices. So far major technology players have wanted
everyone to comply with their own connected home platforms, which are
all about siloed ecosystems. Google introduced the Thread and Weave
protocols, which are integrated with its Nest thermostat. Apple’s HomeKit
is the only iPhone-friendly way to control your house. Samsung spent
more than US$200 million to acquire connected home gadget maker
SmartThings and enter the market. These ecosystems are competitive.
According to Wired,2 Rubin has introduced Ambient OS, an evolved
Android combining all the major smart home products and platforms into a
single elegant system and interface. Its code is open source. In fact, the
business idea is to repeat the earlier success of Android, whose first
breakthroughs included the Droid, Motorola’s keyboarded handset.
Rubin’s team is working on a smart assistant and several smart devices
with a voice interface and a phone to control everything. Data processing
happens on the device, not in the cloud. As Rubin does not intend to
implement an advertising business model, he is currently not worried
about data privacy.
ML is inextricably linked with IoT—it is needed not only to organize
and manage data collection and processing on IoT devices but also for the
securing of IoT systems.

Cybersecurity
Cybercriminals are innovative, but most frequently they get into
computer systems because of common programming errors in software
programs. In today’s world, cyberattacks do not even require human
involvement.3 DARPA has been focusing on accelerating automated bug
discovery. The agency spent about US$55 million on a contest called the
“Cyber Grand Challenge,” with an additional $3.75 million destined to
become prize money. DARPA designed and built the event’s heterogeneous
playing environment—a network of seven supercomputers and software
that contestants competed to break. The bots were programmed to both
attack and defend, mending security gaps in their own hosts while
exploiting vulnerabilities in the machines of others. Their success
impressed some seasoned cybersecurity experts. The bots demonstrated
speed, finding bugs much faster than a hacker ever could. Simultaneously
they proved that automated security is still very unstable. A similar
experiment was run in Las Vegas at the DEF CON security conference. It
included a hacking contest called Capture the Flag. In 2017, the
contestants were machines, not people. Scientists used ML to create a
program that, combined with existing tools, figured out more than 25
percent of the passwords from a set of more than 43 million LinkedIn
profiles. And, of course, the researchers say that criminals can also deploy
this technology.4
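The underlying idea, learning character patterns from leaked password sets, can be shown with a toy bigram model. This is our own defensive illustration, far simpler than the neural networks used in such research.

```python
from collections import defaultdict

# Toy character-bigram model: count which character tends to follow
# which in a set of example passwords ("^" marks the start, "$" the
# end). This is the simplest form of the pattern learning behind
# ML-based password guessing, shown here for defensive awareness.
def train_bigrams(passwords):
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        for a, b in zip("^" + pw, pw + "$"):
            counts[a][b] += 1
    return counts

def most_likely_next(counts, ch):
    """Most frequent successor of ch in the training data."""
    return max(counts[ch], key=counts[ch].get)
```

Even this crude model captures why human-chosen passwords are guessable: their character transitions are far from uniform, and a generator that samples from the learned transitions searches the likely candidates first.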
The biggest problem in cybersecurity isn’t advances made by
criminals. It is our mentality. On November 11, 2017, reacting to a major
security flaw story, AI safety researcher Roman Yampolskiy tweeted: “But
I am sure we can make safe AI. . . .” Humans have long struggled to
master technology security. AI brings a new level of complexity and
requires a substantial shift in mindset, even at the expense of speed. For
this reason, we believe cybersecurity should become a compulsory subject
in schools, not just a part of the coding curriculum.

Quantum Computing
The future of AI may be shaped by very different technologies, with
three possible directions: high-performance computing (HPC),
neuromorphic computing (NC), and quantum computing (QC). HPC is the
major focus of what is happening today in the semiconductor industry—
something discussed later in this book. We will touch on NC as well, as it
has already demonstrated improvements over today’s deep learning neural
networks. We believe, however, that QC excels at all types of problem
solving. If it overcomes the speed and cost issues in its design and
development, it will reduce time-to-solution in AI from months to seconds.

What Is a “Qubit”?
A “bit” in classical computing can have one of two states: 0 or 1. A bit
can be represented by a transistor switch set to “off” and “on,” or
abstractly by an arrow pointing up or down.
A qubit, the quantum version of a bit, has many more possible states. If
we have a sphere, the North Pole would be equivalent to 1, and the South
Pole to 0. All other locations on the sphere would be quantum
superpositions of 0 and 1. In this way, a qubit “contains an infinite amount
of information. Its coordinates on the sphere encode an infinite sequence
of digits. The information in a qubit, however, has to be extracted by
measurement. When the qubit is measured, quantum mechanics requires
that the result is always an ordinary bit (a 0 or a 1). The probability of each
outcome depends on the qubit’s ‘latitude.’”5 What will this mean for
computing? Let’s dig a little deeper.
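This measurement rule is easy to simulate classically for a single qubit. In the sketch below, which is our own illustration, a state is given by real amplitudes a and b with a² + b² = 1; measuring yields 0 with probability a² and 1 otherwise.

```python
import random

# Simulate measuring a qubit in state a|0> + b|1> (real amplitudes,
# a**2 + b**2 = 1): the result collapses to an ordinary bit, 0 with
# probability a**2 and 1 with probability b**2.
def measure(a, b, rng=random.random):
    assert abs(a * a + b * b - 1.0) < 1e-9, "state must be normalized"
    return 0 if rng() < a * a else 1
```

Measuring the "North Pole" state (a = 1, b = 0) always yields 0, while the equal superposition (a = b = 1/√2) yields each bit about half the time; the quantum speedups come from interference between many such amplitudes, which this one-qubit toy cannot show.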

Quantum Computing: Current Status


Nature is inherently quantum mechanical. This implies that it is
probabilistic. Traditional computing, however, is deterministic. To
introduce quantum computing at scale, we need to rethink the way we
define a problem and how we are looking for an answer. We need to
review our understanding of statistical inference with many shorter
trajectories. Therefore, a quantum approach requires rethinking current
software design.
Quantum processes will give us samples of different answers, which
can improve current ML applications.
The cloud will allow quantum machines to be coupled with traditional
computers. This is very important, as right now we cannot store data on
quantum computers. In principle, these machines today are “just” co-
processors. Computation will need to go back and forth from quantum to
traditional components.
Last but not least, quantum computers need new delivery mechanisms.
They require cryogenic cooling systems and a special thermal and
vibration stabilization platform. Such systems cannot be simply delivered
for on-premise installations.
We believe that the current stage of quantum computing is something
similar to the historical mainframes of IBM. It took humans many years to
get from these machines to an AWS kind of environment. In this case,
however, it might take less time to achieve quantum computing. There are
already quantum simulation machines, and people can access them to
practice coding. Much has to be done to rethink algorithms. Quantum
silicon qubit development follows Moore’s law, but software has to evolve
too, and vice versa, in an iterative way.
Once quantum computing arrives at scale, it will serve many industries
and use cases, such as the improvement of “solar panels, drug discovery or
even fertilizer development. Right now, the only algorithms that run on
them are good for chemistry simulations,”6 according to Robin Blume-
Kohout from Sandia National Laboratories, which evaluates quantum
hardware.
Given the progress in research and the existence of companies such as
D-Wave, Rigetti Computing, and Cambridge Quantum Computing, and
giants such as Google, IBM, and Microsoft, we believe quantum
computing might arrive in 20 years. In May 2017, IBM announced that it
had built two new quantum models with 16 and 17 qubits, respectively.
Very soon, IBM will release access to a 20-qubit machine via the cloud. Over the
next few years, IBM wants to increase the quantum volume of its qubits to
50 and more.7 Google has disclosed that it is making a machine with 49
qubits. With such quantum capabilities, the applications space for AI,
business optimization, energy research, cybersecurity, and the financial
services industry will be filled with opportunity.
According to experts in the field, there is still a long way to go before
the range of quantum computing tools and applications can grow. Useful
applications will arrive when a system has over 100 qubits, according to
Seth Lloyd, a professor at MIT.8

Table 5.1 AI ESG Considerations: Related Technologies

Environmental:
• Transparency in financial and other areas via the AI/blockchain marriage?
Social:
• IoT and other technology data privacy protections via AI and blockchain
Governance:
• Governance and privacy promise regarding AI/blockchain intersections
• Promise/peril of deploying AI for cybersecurity
• Cyberwarfare AI weapons race
• Governance implications for AI/quantum computing cross-dependencies

Quantum machines would trim computing times drastically, providing
a huge cut in the cost of cloud services. New pricing mechanisms might
emerge. For instance, right now Google rents storage by the minute,
currently trailing the cloud offerings of Amazon and Microsoft. Quantum
capabilities could change that.
With a quantum environment, traditional encryption mechanisms will
require an immediate update and transition to a new paradigm. We
recommend that traditional companies watch emerging quantum
computing research and practice, as it will greatly influence the
development of AI. To paraphrase Salman Rushdie in his comments about
Brexit, neglect of AI and quantum computing is like a company “having a
picnic on a railway line and paying no attention to the hooting noise in the
distance.”9



PART 2

Enablers and Preconditions to Achieve AI

Key Points in Part 2


• There are several factors enabling AI progress today, such as universal growth in data,
evolution of algorithms, advances in semiconductors, cloud technologies, smart networks, and
openness in libraries, frameworks, APIs, and entire AI platforms to empower developers to
build AI-native applications.
• Data are the most important AI ingredient. We expect that new businesses around data
preparation and collection will emerge dedicated to transforming data from a liability into a
strategic asset.
• Data-specific domain expertise is crucial to avoid disappointments while piloting AI in a
business.
• Today there are open-source tools such as TensorFlow, Torch, and Spark, coupled with the
availability of massive amounts of computation power through AWS, Google Cloud, and other
providers. Democratization of AI means that traditional companies don’t need ML PhDs to
build and train models. True democratization of ML, however, will still take a long time:
ML-focused companies and teams cannot be lean startups, as the real commoditization
of ML frameworks is still in its infancy, and building successful ML applications remains
difficult.
• Companies will succeed if they see AI/ML as a long-term investment, and invest in data, tools,
and talent to train systems, launch beta-products, and learn from the experience.
• A solid data governance framework will support data sharing and collection despite the
existence of organizational silos and bottlenecks.
• Every modeling effort should start with the proper problem formulation.
• Diversity in teams is crucial to asking the right questions and avoiding bias in datasets and
algorithms.
• Traditional companies must understand trends in silicon. Roadmaps of semiconductor
companies provide the best insights into what ML can do within the next two to three years.



CHAPTER SIX

Data

What Is This Chapter About?


ML is about data and algorithms, but mostly about data. This chapter
builds on what we learned in Part 1 about ML, DL, and other technologies,
such as computer vision and language processing. There is much
excitement about advances in these areas. Data, however, is the key
ingredient. Most of the hard work for ML is about data transformation or
preparation, which means transforming raw features into features that can
represent signals in your data and are thereby able to generate insights.
Traditional companies often take a passive-aggressive approach to
Internet-age companies that are deeply skilled in data. Lobbying against or
even “just” complaining about data collection and insight generation is not
a healthy or competitive strategy in the age of data.
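The step described above, transforming raw fields into signal-bearing features, can be sketched in a few lines of Python using pandas. The column names and values below are invented purely for illustration:

```python
import pandas as pd

# Hypothetical raw customer records, straight from an operational system.
raw = pd.DataFrame({
    "signup_date": ["2021-01-04", "2021-03-15", "2021-03-20"],
    "country": ["DE", "US", "DE"],
    "monthly_spend": [120.0, None, 80.0],
})

features = pd.DataFrame()
# Derive a numeric signal from a raw date field.
features["tenure_days"] = (
    pd.Timestamp("2021-06-01") - pd.to_datetime(raw["signup_date"])
).dt.days
# Impute the missing value so algorithms can consume the column.
features["monthly_spend"] = raw["monthly_spend"].fillna(
    raw["monthly_spend"].median()
)
# One-hot encode the categorical field into model-ready columns.
features = features.join(pd.get_dummies(raw["country"], prefix="country"))
```

Each derived column is a candidate "signal"; most of the hard work in practice lies in choosing and validating such transformations, not in the modeling step that follows.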
This chapter is one of the most important ones in this book. We will
describe several key concepts:
• steps to access, capture, and process data;
• the underlying IT engines to analyze data, companies, and tools that might be helpful to
support data access, preparation, and processing;
• a data governance framework from which a traditional company might benefit;
• practices for business developers and executives to deploy in interacting with data scientists;
and
• examples of some of the better practices that data-centric businesses use to operate in real time.

Can Traditional Companies Become New “Data Businesses”?



From the 1970s to 2000, data was largely the subject of accounting and
regulatory compliance and reporting. Some IT engineers and financial
professionals would look at the data in their warehouses periodically but
not systematically. However, attitudes toward data changed radically when
Google went public. At that point, analysts started to talk about data-
driven organizations. Venture capitalists began to fund businesses that did
not have much revenue, but that had captured users and gathered
information about them and about the services they wanted and bought.
Executives at companies transforming consumer insights into advertising
dollars demanded more value from the data. For the first time, regulators
started asking questions about data ownership.
Data-driven businesses are more successful and profitable than
businesses that do not base their decision-making on data and do not put
data in the hands of employees at all levels of the organization. Data
provide value to any company either by influencing people’s decisions or
by inspiring changes in processes. Data also feed automated systems, e.g.,
in the case of the recommendation engines of Amazon and Netflix.
In principle, every single company has already accumulated enough
data to start experimenting with ML. There are, however, challenges in
companies’ mindsets regarding readiness to apply analytics and AI,
valuation of data, and introduction of data governance frameworks.
According to a Harvard Business Review article, a company cannot be
ready for AI if it is not good at analytics.1 A study by Forbes Insights and
Dun & Bradstreet, based on a survey of 300 executives in North America,
Britain, and Ireland, shows the following somewhat alarming statistics:
• 59 percent of companies are not using predictive models or advanced analytics, not to mention
ML;
• 23 percent of those surveyed are still using spreadsheets as their only tool;
• 17 percent use dashboards;
• 41 percent use predictive models and forecasting techniques; and
• 19 percent use nothing more complicated than basic data models and regressions.2

Our interviews with companies reveal much confusion and fear around
reforming the IT infrastructure to allow for more advanced data analyses.
We also see a serious absence of internal skills in change management and
data science. Many executives single out deep cultural hurdles to changing
existing IT infrastructure. “You will not get fired for doing nothing, but
you will get into trouble for failures and budgeting issues,” as one CIO of
a major telecommunication company told us.



In sum, some of the potential bottlenecks in obtaining data for ML
applications include:
• the absence of a governance framework for potential sharing and processing of data;
• lack of expertise in preparing existing data for processing and insight generation, including
removing unnecessary records, adding information, aggregating data, and pivoting datasets;
• a capacity mismatch that arises when a large pool of analysts rely on a small pool of IT
professionals to prepare “refined” data for them. Removing this bottleneck is an organizational
challenge and involves expanding the range of users who have access to raw data and
providing them with the requisite training and skills;3
• regulatory bottlenecks to collecting data, e.g., addressing privacy concerns;
• risks around bias in data; and
• a lack of investment in modern data infrastructure.

In sum, any traditional business has the potential to become data driven
and thereby improve its overall competitiveness. Leadership, IT literacy in
top management teams, and competence in digital transformation will
separate successful companies from stagnating ones.

Characteristics of Big Data


When most executives talk about data, they address three important
“Vs”—volume, variety, and velocity. Universal connectivity and the
proliferation of data-enabling devices increase the three Vs
simultaneously.
• Volume: By 2020, 44 zettabytes of data will have been created, according to the International
Data Corporation (IDC).
• Variety: Data are not always structured, ranging from semi-structured application logs,
machine-generated logs, and sensor data to unstructured content such as social media
comments, videos, and pictures.
• Velocity: The velocity at which new data are created is increasing. Half a billion tweets are
sent every day, and 300 hours of video are uploaded every minute to YouTube.

Peter Norvig, director of research at Google and councilor of the
Association for the Advancement of AI, speaks about the transformation
from quantitative data into qualitative data, or data that are naturally able
to deal with uncertainty and incomplete or partial information, allowing
real data processing and a high level of abstraction for decision-making
purposes.4 The benefit of big data is that we can unleash algorithms on
them to discover patterns.
ML adds an additional layer to big data to tackle complex analytical
tasks much faster than humans could ever hope to. The McKinsey Global
Institute predicted a shortage of just this kind of analytics talent by 2018.



Rob May, CEO of Talla and an active investor in AI companies,
applies two concepts from economics to the world of data and ML:
economies of scale and economies of scope. Let’s now turn to that valuable
analysis.

Economies of Scale
Training ML models is similar to having economies of scale. One must
have enough prepared and cleaned data to train a model to be successful.
Competitive advantage is equivalent to the size of a dataset and its
readiness for computing.
In 2009, three computer scientists from Google wrote a paper entitled
“The Unreasonable Effectiveness of Data.” They demonstrated that even
when using a messy dataset with a trillion items, ML performs much more
effectively in tasks (e.g., machine translation) than ML operating on a
clean dataset with a mere million items. Since then, it has been common
understanding that the size of available datasets represents a competitive
advantage for businesses, especially when they have a refined architecture
that enables them to make the most of it.
The quality of datasets also matters if the best results are to be
achieved. Quality means that the data should reflect the real world. The
best datasets take a lot of manual effort, as the ImageNet story illustrates.
Originally announced in 2009 at a Miami Beach conference center, the
dataset quickly evolved into an annual competition to screen which
algorithms could identify objects in data with the lowest error rate. Fei Fei
Li, who started working on it in 2006, led the project. The key to the
success of ImageNet was a solution by Amazon called Mechanical Turk.
This is a crowdsourcing Internet marketplace enabling individuals and
businesses to coordinate the use of human intelligence to perform tasks
that computers haven’t been able to do yet. After the team discovered
Mechanical Turk, the data set took two and a half years to complete. It
consisted of 3.4 million labeled images, separated into 5,247 categories.
Today the dataset is 13 million images strong, precise, and used by many
companies.
Google, Microsoft, and the Canadian Institute for Advanced Research
have introduced several additional high-profile datasets since 2010.
Startups have begun assembling proprietary datasets. For example,
TwentyBN, a ML company focused on video, collected videos of people



performing simple hand gestures and actions. It used Amazon Mechanical
Turk. The results were releases of two free datasets for research purposes,
each containing over 100,000 videos.
According to Rob May, however, there are several trends in ML
training that are beginning to question the competitive value of large
datasets. These trends are as follows:
• one-shot learning, the ability to train a model on smaller sample sizes by first training it on a
similar but less specific dataset;
• new small-data techniques like probabilistic programming; and
• the ability to use GANs (generative adversarial networks) to generate synthetic datasets when
there are insufficient primary data.

As already discussed in our chapter on ML, the goal of GANs is to
generate data points that are magically similar to some of the data points in
the training set. Yann LeCun, one of the fathers of DL and the head of the
Facebook AI lab, says that it is one of the most powerful ideas in ML in
the last 20 years. Currently GANs are used to generate realistic images
(putting glasses on people or removing them, changing faces, etc.), 3D-
models, and videos. GANs are still unstable. Many companies, such as
Sentient, experiment with them in labs, but do not deploy them with
clients.5

Economies of Scope
Economies of scope arise when several types of datasets, e.g., text
data and images, become available. Training on multimodal datasets
can make a model more valuable. Types of data (scope) will be more
powerful than the amount of data in any given set (scale).
The competitive advantage will be with companies that are capable of
generating and/or acquiring the new datasets needed to train systems.
Barriers to training will be inversely proportional to barriers to implement
AI.6

Table 6.1 Questions Business Development Teams Should Ask Their Data Scientists

1. Do you have training data? If not, how do you plan to get it?
2. Do you apply one big dataset, or several smaller sets with different data to start training?
3. Do you have an evaluation procedure built into your application development process to assess
what works best?



4. If you use pre-packaged AI components, do you have a clear plan for how you will go from
using those components to having meaningful application output?

Preparing Data for Modeling and Processing


A report by the advisory company Ovum predicts that the big data
market—currently a US$1.7 billion market—will swell to US$9.4 billion
by 2020.7 Such a market would include data-as-a-service, technologies to
clean and prepare data for modeling and processing, and similar tools to
make use of data in any company and industry.
There is no way around the unappealing and time-consuming task of
data preparation, nor can it be completely outsourced to third-party vendors.
It is also important to understand that good performance on one dataset
doesn’t guarantee good performance from an ML system in real product scenarios. Many
companies overlook the fact that the most challenging task of building a
new ML model or application is not the algorithm. It’s the data collection,
cleaning, preparation, and labeling (tagging).
Most datasets are proprietary, but research and the evolution of
algorithms would not have happened without the availability of public
data. Such data can be used to baseline a ML case, or obtain a benchmark;
they can be used for purposes of sanity check or for algorithm validation.
“Standard datasets can be used as a good starting point for building a more
tailored solution,”8 as Luke de Oliveira points out in an article for Medium.
The company Data.world empowers data scientists to solve big
problems. It has built a platform where people can contribute, find, and use
a vast array of high-quality open data. It initiated a project with the Anti-
Defamation League to launch an open-data workspace for analyzing hate
crime trends. As we write these lines, new use cases and partnerships are
being forged around this model.

Table 6.2 Examples of Open Datasets9

Computer Vision: MNIST, ImageNet, PascalVoc
Natural Language Recognition: Text Classification Datasets, Wiki Text, bAbi
Speech Recognition: TED-LIUM, VoxForge
Recommendation and Ranking Systems: Netflix Challenge, Movie Lens
Networks & Graphs: Amazon Co-Purchasing
Geospatial Data: Open Street Map, NexRad



Generating training data for ML systems is challenging but very
important. Developing digital environments that simulate the physics and
behavior of the real world provides companies and researchers with test
cases to measure and train an AI’s general intelligence. Training in these
simulated environments can not only help us understand how ML systems
learn and how to improve them, but also provide us with models that can
potentially transfer to real-world applications.10
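A simulated training environment can be sketched in miniature in Python. The toy class and reward scheme below are invented for illustration and are not any particular vendor’s API; real simulators model physics, vision, and driving dynamics, but the interaction loop looks much the same:

```python
import random

class Corridor:
    """Tiny deterministic world: positions 0..10, goal at position 10."""

    def __init__(self):
        self.pos = 0

    def reset(self):
        # Start every episode at the left end of the corridor.
        self.pos = 0
        return self.pos

    def step(self, action):
        # action is -1 (move left) or +1 (move right); walls clamp the position.
        self.pos = max(0, min(10, self.pos + action))
        done = self.pos == 10
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

# An untrained, random policy interacting with the environment.
random.seed(0)
env = Corridor()
state, done, steps = env.reset(), False, 0
while not done and steps < 1000:
    action = random.choice([-1, 1])  # training would replace this with a learned policy
    state, reward, done = env.step(action)
    steps += 1
```

Measuring how quickly a learning algorithm improves on the random policy in such a world is exactly the kind of test case the simulated environments above provide, at far lower cost than real-world trials.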
There are companies specialized in providing environments and
frameworks for data training, e.g., teaching a car to drive (comma.ai), or
developing games (unity3d.com, unrealengine.com, deepmind.com in
collaboration with Blizzard Entertainment).
Data Training-as-a-Service is an interesting emerging business with
big monetization potential, as it targets three groups of users:
• AI experts;
• domain experts (who lack expertise in ML and DL but are capable of writing basic Python or
similar code); and
• experts without any coding experience.

Christopher Ré of Stanford is working on a system called “Snorkel” for
rapidly creating, modeling, and managing training data, and is currently
focused on accelerating the development of structured or “dark” data
extraction applications for domains in which large labeled training sets are
not available or easy to obtain.11 He has already recruited a number of AI
experts and is quickly adding domain experts with some coding skills.12
Sometimes data collection requires additional effort and investment,
e.g., driving down every single street with a specially equipped car, or
moving floor by floor in a building with a specially designed drone to map
interiors manually.
Sometimes, data cannot be easily pooled within an enterprise across
users or customers or complemented with data from the web due to legal,
contractual, or other reasons.

Table 6.3 Examples of Companies Supporting Data Collection and Capture

Diffbot Crowdflower Work Fusion


Kimono Labs (acquired by Palantir in 2016) CrowdAI Datalogue
Data Sift Enigma ParseHub



Gartner estimates that 80 percent of an enterprise’s data are
unstructured.13 Unstructured data represent information that either is not
based on a predefined data model or is not structured according to a set of
rules.14 Good examples of such data are customer forms, chat sessions
with sales agents, and compliance forms. We expect more innovation to
happen in the design of automated tools that can do this structuring without
much human intervention.
Writing good interfaces that allow people to describe what they need to
do at an abstract level is a hard problem to solve.15 Data Preparation-as-a-
Service may become a high-margin business.

Table 6.4 Examples of Companies Using ML to Enable Data Preparation

Trifacta (www.trifacta.com)
Industries: Banking and Insurance, High Tech, Consumer Goods, Pharmaceutical, Telecommunication
Use Cases: Risk Compliance and Security, Fraud Detection, Customer Behavior Analytics, Churn, Customer Segmentation and Personalization

Paxata (www.paxata.com)
Industries: Financial Services, Automotive, Health Care, ICT, Public Sector
Use Cases: Risk Compliance, Insight Discovery, Customer Behavior Analytics, Customer Lifetime Value, Supply Chain Optimization

Alation (www.alation.com)
Industries: Internet Payments, e-Commerce, Enterprise Applications
Use Cases: ML to enable data inventory and establish a searchable catalog for insights and decision support

Tamr (www.tamr.com)
Industries: Financial Services, Automotive, Consumer Electronics, Enterprise Tech, Pharmaceutical, High Tech and Manufacturing
Use Cases: ML to clean up and integrate data to be used for analytics, Customer 360 and Lifetime Value

“Machine Teaching-as-a-Job”: Service, Skill, and Impact on Organizational Design
There are a number of common job titles and roles for people who
work with data, e.g., data engineer, data architect, data scientist, or data
analyst. In 2017, Microsoft Research published a paper called “Machine
Teaching: A New Paradigm for Building Machine Learning Systems,”
which explores the eventual evolution of ML. The paper makes a clear
distinction between ML and machine teaching. The authors explain that
ML is what is practiced in a research organization, and machine teaching



is what will eventually be practiced by engineering organizations. The
paper concludes with four key developments outlined by Carlos E. Perez
in “Deep Teaching: The Sexiest Job of the Future.”16 They are the
following:
• models closer to general-purpose computer programs, built on top of far richer antecedents
than current differentiable layers;
• models that require less involvement from human engineers—“it shouldn’t be your job to tune
knobs endlessly”;
• greater, systematic reuse of previously learned features and architectures; and
• meta-learning systems based on reusable and modular program subroutines.

Rather than having a library of modular programs that researchers put


together, we might be better off with a library of teaching programs to
train a new system. Machine teaching becomes a new service, and
simultaneously, a new job category.
The most active projects in this space are Google TensorFlow’s XLA
and Intel Nervana’s NNVM projects. Perez expects many teaching
programs to emerge from DL hardware, e.g., at GraphCore, Wave
Computing, Groq, and Microsoft. Silicon providers will need to invest in
transforming existing DL frameworks to support their hardware.17
Clickworker GmbH is one of several companies feeding the need for
training data as ML enters into more business processes. Mighty AI and
Understand.ai focus on annotating images for autonomous driving.
DefinedCrowd addresses natural language processing, so workers record
or transcribe speech samples. Microwork photographs and tags brand
logos. Other players are generalists, tagging vehicle damage, categorizing
media or handwritten notes, or assessing product reviews. Today there are
over 1 million people helping to train AI software for free. Beyond
recruiting people and sorting data, AI training companies create the
software interfaces for workers to label data.18
Andrej Karpathy addresses how ML and neural networks will change
the ways we build “software 2.0”: “Software 2.0 is written in neural
network weights. No human is involved in writing this code. A large
portion of real-world problems have the property that it is . . . easier to
collect the data than to explicitly write the program. A large portion of
programmers of tomorrow do not maintain complex software repositories,
write . . . programs, or analyze their running time. They collect, clean,
manipulate, label, analyze and visualize data that feeds neural networks.”19



Karpathy’s ideas can be applied to how future organizations might be
designed. Today’s schools teach problem-solving by breaking issues down
into successively smaller problems. Traditional silos such as marketing,
sales, and product development work in the same way. When given a lot of
data, ML computers figure out on their own what steps are needed. Instead
of following clearly defined steps, collaborative teams might first need to curate data.
Instead of taking requests and turning them into specifications, which then
go into product development, the emphasis will be on understanding and
gathering the right data, training and adjusting a machine, and deciding
what can be simulated artificially, if data are not available.

Regulatory Considerations
Although data scientists can gain great insights from large datasets,
many such efforts are compromised from the start. Privacy concerns make
it difficult for researchers and practitioners to access the data they need to
work with.
Systems with a voice interface, for example, are most effective when
personalized and connected to sources of personal data such as calendars
or e-mails, which raises privacy and security challenges. To add to
this challenge, many voice-enabled devices are always in a listening mode.
MIT researchers Kalyan Veeramachaneni, Neha Patki, and Roy Wedge
have presented a good solution to overcome privacy constraints. They
suggested using synthetic instead of authentic data, which can be used to
develop and test data-science algorithms and models without raising
privacy concerns: “Once we model an entire database, we can sample and
recreate a synthetic version of the data that very much looks like the
original database, statistically speaking. If the original database has some
missing values and some noise in it, we also embed that noise in the
synthetic version. . . . In a way, we are using machine learning to enable
machine learning.”20 This innovation can be easily scaled and used in any
industry or for educational purposes.
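A toy version of the “model, then sample” idea can be sketched in Python. The MIT work models entire relational databases; the sketch below uses only two invented numeric columns, but the principle of fitting a statistical model to sensitive data and sampling a look-alike dataset is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is sensitive customer data: two correlated numeric columns.
real = rng.multivariate_normal(mean=[50.0, 3.0],
                               cov=[[25.0, 4.0], [4.0, 1.0]],
                               size=5_000)

# "Model the database": estimate the mean vector and covariance matrix.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)

# Sample a synthetic version that is statistically similar to the original
# but contains no actual customer records.
synthetic = rng.multivariate_normal(mu, sigma, size=5_000)
```

Algorithms developed and tested against `synthetic` behave, statistically speaking, much as they would on `real`, without the privacy exposure of handing the original records to every analyst.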
Algorithms have impacts beyond products and services, or single
companies. According to research published in Virtual Competition
(2016), a book by Ariel Ezrachi of the University of Oxford and Maurice
Stucke of the University of Tennessee,21 companies with very large data
and algorithmic capabilities can manipulate markets by letting their



algorithms quickly react so that competitors have no chance of gaining
customers by lowering prices. This could have an impact on antitrust laws.

Biases in Datasets and Algorithms


Cathy O’Neil, author of Weapons of Math Destruction (2016), points
to implicit biases in the values that determine which datasets we use to
train a computer. For example, if an ML human resources application for
finding the best person to fill a job includes a feature that it is “someone
who stays for years and gets promotions,” this will almost always yield
male candidates.
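A small simulation makes the mechanism visible. The promotion rates below are hypothetical, chosen only to show how a skewed history propagates through an apparently neutral screening rule:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
is_male = rng.random(n) < 0.5

# Hypothetical biased history: men were promoted more often for equal performance.
promoted = rng.random(n) < np.where(is_male, 0.30, 0.10)

# A "neutral" shortlist rule that screens on past promotion...
shortlisted = promoted

# ...reproduces the historical skew: roughly 75 percent of the shortlist is male.
share_male = is_male[shortlisted].mean()
```

The rule never looks at gender, yet because the label it screens on encodes past discrimination, the output does; this is exactly the trap of training on features like “stays for years and gets promotions.”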
Using ML to identify “bad teachers” in public school systems would
be highly problematic as well. The problem is that there is no
comprehensive definition of what constitutes a “bad teacher” that can be
embedded into a set of rules for computers. Do we refer to the class
average score on tests? How many students attend college and make
money after school? Live happy lives? Humans might work this out, but
ML algorithms may institute biases implicit in the data humans have
chosen to give them.22
A Carnegie Mellon research team discovered algorithmic bias in online
ads. When they simulated people searching for employment, Google ads
showed listings for high-income jobs to men nearly six times as often as to
equally qualified women. The Carnegie Mellon team has said it believes
internal auditing would be helpful in moderating such effects and improving
companies’ ability to reduce bias. In practice, how many corporate audit leaders are there who speak
and write about data analytics and bias reduction, not to mention ML? We
still have a long way to go.
The AI press talks a lot about bias in algorithms and the non-inspectability
of some algorithms. There are, however, some first attempts by humans
to influence algorithms. The Center for Civic Media at the MIT Media
Lab has a research task around safety and fairness in online
communities. J. Nathan Matias, a PhD candidate there, believes that trying
to define how online products and platforms affect people, including their
civil liberties, is an obligation. He created software called CivilServant,
which works very well, for example, at identifying and influencing
distribution and consumption of so-called fake news, a phenomenon that
has recently reached great prominence under the Trump campaign and
presidency. Social media became a new target for manipulation, attracting



all sorts of people, from marketers to state actors and companies to
adversarial actors. For as little as US$5, anyone could buy followers, likes,
and comments on every major site, including Google News, Twitter, and
Facebook. We were not prepared to detect the difference between real and
fake news. “If we believe that data can and should be used to inform
people and fuel technology, we need to start building the infrastructure
necessary to limit the corruption and abuse of that data—and grapple with
how biased and problematic data might work its way into technology and,
through that, into the foundations of our society,”23 said danah boyd, a
Microsoft researcher.
Matias reports on a successful experiment at Reddit in which readers
were asked to fact-check each story, which shifted the behavior of Reddit’s
algorithms. Even more, readers could vote the unreliable news down.
Matias named this approach the “AI nudge.” The name was inspired by
Richard Thaler and Cass Sunstein, whose 2008 book Nudge described how
subtle behavioral “nudges” could influence people to make better
decisions.24 Reddit’s “nudge” encouraged users to think critically and
document their behavior, which persuaded the algorithm and,
consequently, benefited other users. We believe that there could be a
whole new business built around the idea of algorithm influencing.
Additionally, integrating adversarial thinking into the design,
development, and testing of AI products will become indispensable.
According to Tim Hwang, Global Public Policy Lead on AI and ML at
Google, fairness in ML systems and datasets, including de-biasing, is a
prominent research task at his company. For example, in order to de-bias a
system, more diverse data are needed. To get the necessary information,
companies collect data on minorities, which in turn raises questions around
privacy. Solving the problem implies working with experts on minorities,
not just working on the technical side of the data collection problem. This
example leads to a broader thought about potential failures in ML: data
engineers don’t always think adequately about their data. As Tim Hwang
has said, “And so the machine does what the machine does, which is trying
to optimize against your objective function that you gave it. And it’ll often
maximize in ways that you don’t expect.”25
The issue of fairness is not always as straightforward as it might first
appear. The technical consequences of implementing a particular
definition of fairness in the context of ML can end up creating unfairness.
Michael Kearns of the Warren Center for Network and Data Sciences and



the Penn Program in Networked and Social System Engineering refers to
the Uber model as such an example. It is difficult to determine who the
best driver on the Uber network is, even when ranking is based on average
individual scores built with the same criteria. The indication of the “best”
driver will change, however, as the number of total Uber rides goes up.
Profit maximization might lead to giving rides to the better driver based on
the numbered ratings (“exploitation” metric), but such decision-making
ignores other important factors (“exploration” metrics). An algorithm can
be fair by the numbers, but potentially unfair when applying other
behavioral standards.26
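The exploration-versus-exploitation tension Kearns describes is the classic multi-armed-bandit problem. A short Python sketch with invented driver ratings shows why a platform that only “exploits” its current rankings can lock in an early, noisy verdict, and why reserving some rides for exploration fixes it:

```python
import random

random.seed(1)
true_quality = [0.70, 0.75, 0.90]    # driver 2 is actually best
ratings = [0.8, 0.6, 0.5]            # but early average ratings suggest driver 0
counts = [1, 1, 1]
EPSILON = 0.2                        # fraction of rides used to explore

for _ in range(5000):
    if random.random() < EPSILON:
        d = random.randrange(3)                       # explore: try any driver
    else:
        d = max(range(3), key=lambda i: ratings[i])   # exploit: best-rated driver
    # A ride succeeds with the driver's true (hidden) quality.
    outcome = 1.0 if random.random() < true_quality[d] else 0.0
    counts[d] += 1
    ratings[d] += (outcome - ratings[d]) / counts[d]  # update running average

best = max(range(3), key=lambda i: ratings[i])        # converges to driver 2
```

With `EPSILON = 0`, the loop would keep sending rides to driver 0 forever and never discover driver 2: fair by the numbers it has, unfair given the numbers it refused to collect.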
Another question relates to currently applied data models and
algorithms and how they may fit into most Internet providers’ existing
business models. Companies like Google, Facebook, and Twitter operate
in the so-called “attention economy,” where brands compete for eyeballs
and allocate their advertising dollars to the most successful “attention”
marketplaces. Product design and lifecycle management are focused on
attention-generation, the so-called “stickiness” of products and services to
keep a user attached to them. In the book Phishing for Phools: The
Economics of Manipulation and Deception, Nobel Prize-winning
economists George Akerlof and Robert Shiller argue that fraud and abuse
are not signs of market failure; rather, there is an efficient market for
everything, including deception that exploits every possible cognitive bias
and human failing.
Tristan Harris wants digital product design to reflect what is truly
important to a user instead of focusing exclusively on how to attract
eyeballs to a company service and making money through advertising
clicks. Harris was called the “closest thing Silicon Valley has to a
conscience” by The Atlantic magazine. He worked as design ethicist at
Google and currently leads Time Well Spent, a non-profit movement to
align technology with our humanity. He believes we have reached a
tipping point in which the interests of large technology corporations such
as Amazon, Google, and Facebook and those of the users of their products
and services are no longer aligned. Harris wants people to be empowered
to make independent choices instead of reacting to the biased suggestions
of Google or Facebook. Currently, the human drive for social approval
maximizes the time spent on Google, Facebook, and Twitter, monetizing
attention with advertising dollars. This focus on users and their unique
choices represents a challenge in the design of AI products and services.
Harris and his partners believe that the big technology brands should have
a responsibility to support services even if they are contrary to profit
maximization. Like a careful city planner in charge of infrastructure, green
landscapes, and zoning for business versus residential areas, these
technology giants have a responsibility to create spaces that enable
consumers to better control their online experience, e.g., setting up
preferences for quality time, rather than always being unconsciously
sucked into the usual “eyeballs” spiral designed to maximize revenues.27
In many cases, entire pricing models are empowered by ML to
estimate customers’ willingness to pay. This is the case with Uber, which
calculates riders’ propensity for paying a higher price for particular routes
at certain times of day. Uber counters such concerns with a feature
called upfront pricing: by guaranteeing customers a certain fare before
they book, the company says it provides more transparency. However,
price discrimination occurs even before these prices are communicated to
users.28
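A toy sketch of the underlying idea (this is not Uber's actual model; the demand curves and every parameter below are invented for illustration): once an ML model has estimated, per rider segment, the probability of accepting a given fare, the revenue-maximizing fare can differ sharply by segment.

```python
import numpy as np

# Toy illustration (not Uber's actual model): suppose an ML model has
# estimated, per rider segment, the probability of accepting a given
# fare. Demand curves and parameters here are invented.

def accept_prob(fare, sensitivity, reference):
    # logistic demand curve: acceptance falls as the fare rises
    return 1.0 / (1.0 + np.exp(sensitivity * (fare - reference)))

fares = np.linspace(5, 40, 400)
segments = [("price-sensitive", 0.5, 12.0), ("price-insensitive", 0.15, 25.0)]

optimal = {}
for name, sens, ref in segments:
    expected_revenue = fares * accept_prob(fares, sens, ref)
    optimal[name] = float(fares[np.argmax(expected_revenue)])
    print(f"{name}: revenue-maximizing fare ~ {optimal[name]:.2f}")
```

The price-insensitive segment ends up quoted a substantially higher optimal fare, which is exactly the discrimination the passage describes.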
If diverse teams do the coding work, bias in data and algorithms can be
mitigated. If AI is to touch all aspects of human life, its designers should
ensure that it represents all kinds of people. The values of the engineers
building AI will be reflected in the solutions they bring to the table,
according to Kate Brodock, Forbes contributor and founder of
@FactorZero, who writes a lot about AI ethics, diversity, and female
technologists. For example, the Google Brain team has 44 men and 3
women, and is over 70 percent white.
Fei-Fei Li, a chief scientist at Google and a Stanford professor, has
worried about diversity for many years: “If we don’t get women and
people of color at the table—real technologists doing the real work—we
will bias systems. Trying to reverse that a decade or two from now will be
so much more difficult, if not close to impossible. This is the time to get
women and diverse voices in so that we build it properly.” Melinda Gates
and Fei-Fei Li created AI4All, a program exposing underrepresented
ninth-grade students to ML.29 Carlos E. Perez argues in “Why Women
Should Lead Our AI Future” that the diversity perspective is not the only
reason to actively include female contributions in the field of AI. Perez,
who studies artificial intuition and DL, believes that women have a better
intuitive understanding of what makes us all human. If we want AI to
improve our life in the long-run, we need at the very least to optimize for
tasks focused on the quality of AI interactions with people, address our
own humanity in relationship with this technology, and improve
governance by data-driven decision-making, rather than “just” pushing for
more automation.30
Traditional companies would be well advised to support diversity and
education initiatives focused on getting minorities and women into ML.
Every small step will contribute to the critically important social benefit of
avoiding bias and designing beneficial and safe AI.
Overall, ML systems will not magically discover how to do the right
thing. They might be very beneficial to society if we can collectively
figure out how to implement the best governance rules and control and
design guidelines for these systems. We need to also find ways to widely
and fairly distribute the benefits of AI. It will be a major challenge and
require rethinking at all levels to try to incorporate social and ethical
impacts into the heart of engineering, but it is something that civil society,
business, and governments must try to do in a coordinated manner.

Data Governance
The best data-driven businesses provide wide access to data and have
data governance based on opening up access and allowing as many people
as possible to find valuable insights. One might call such a framework data
democratization or self-service analytics. Investing in and developing
robust datasets allows for fewer conflicts within a business. Understanding
what kinds of conflicts can happen triggers the building of practices to
address them directly. Such conflicts typically involve differing views on how to
measure or interpret the data, what kind of algorithms to apply, and at
what point in time a company requires outside expertise. It is to the benefit
of any organization to uncover these differing perspectives and to find
constructive ways to coordinate them.
It is critical to find a common data management language for everyone
to use and to support cross-organization collaboration on analyses. Our
interviews in several sectors—telecommunication, automotive, and
consumer goods—reveal that traditional businesses are trying to lock up
analytical skills within financial departments to exercise end-to-end
control over any and every type of analysis. Unfortunately, such practices
are completely counterproductive and the worst fit for a data-driven
company. New strong, visionary leadership is needed in the traditional
financial functions to unlock the value of analytics and open access to
data. Only under this leadership can new value-driven data governance
emerge.
Strong data governance facilitates data exchange, which in turn enables
innovation. To be most powerful, data governance has to be embedded in
an organization’s culture. If this happens, data governance is likely to
influence organizational behavior and bring business value. Data
governance frameworks should be at the top of every corporate board
agenda, as they enable a company to move from piloting data technologies
to mass scale deployment.
At the initial stages, however, expectations need to be carefully
managed, as data technologies are uncharted territory for traditional
businesses. Getting everything right immediately will be impossible.
Persistence and patience will bring better results than finger-pointing at the
corporate IT or legal functions. Unlike other areas, data governance should
be a horizontal task throughout an enterprise to ensure buy-in and healthy
implementation.
In dealing with data governance, it is easy to complain about
technology hurdles, which in turn make data-driven self-service analysis
and decision-making impossible. However, technology only follows
strategy and even the top Internet brands did not start with proper data-
driven decision-making architecture. For example, at Facebook, data teams
initially sat between users and the infrastructure. With its exponential
growth, the absence of self-service tools for data analysis became a
bottleneck. So Hive (an open-source petabyte-scale data warehousing
framework developed at Facebook) and other tools were created to make
the infrastructure self-service while supporting it from behind the scenes.
This allowed the whole company to become data driven. Employees were
empowered to look for data and use the data for decision-making. There is
no doubt that being data driven goes hand-in-glove with being accountable
for how these data are used in products and services.

Table 6.5 Summary of Organizational and Governance Best Practices for Data
Management

• Provide wide access to data within an organization


• Implement mechanisms to track data usage
• Use a common data management language that spans business unit and user roles to avoid
conflicts and find common ground for action
• Maintain a system that allows you to easily transition from development to production
• Consider a rotation program across roles to enable a cleaner hand-off and increase cross-
functional trust and cooperation
• Understand how to anonymize data and/or use synthetic data to ensure privacy protection and
introduce these technologies into a business
• Introduce a supplier management framework for any potential vendor that enables data
collection, preparation, processing, and insight generation
• Use open datasets, and share certain data with outside stakeholders to develop new product and
services, design guidelines to make products AI ready, and provide ethical frameworks to
establish beneficial AI
• Introduce a security framework to manage, exchange, and process data

Data Infrastructure Fit for AI


We expect ML infrastructure to extend companies’ data infrastructure
at scale. In this context, data scientists, data infrastructure experts, and
backend engineers should work together to push an organization toward a
data-driven future.
Several infrastructure-related developments have happened over the
past decade to enable AI applications. Data engineering teams at top
Internet companies like Google, Facebook, Amazon, Airbnb, Pinterest,
and Netflix have set new standards for what software and businesses can
learn from and do with data. Since their products have been massively
adopted globally, these teams have been able to continuously redefine
what it means to do data analyses and analytics at scale.31 Some of these
concepts include the following:
• the “data lake” concept, which was introduced to allow better analysis on all kinds of data
formats;
• improved data infrastructures like Hadoop and the cloud, which made it possible to train on
years of data in a very short period of time; and
• open-source data processing engines, which companies developed to meet requirements for real-
time processing, self-service access, fast scaling, and reliability.

Table 6.6 Examples of How Data-Driven Businesses Operate32

• Netflix captures roughly 500 billion events per day, which translates to roughly 1.3 petabytes
(PB) per day. At peak hours, they’ll record 8 million events per second. Netflix employs over
100 data engineers. Their architecture uses components such as Apache Kafka (an open-source
stream processing platform developed by LinkedIn in 2011); Elastic Search (scalable real-time
search technology supporting so-called multi-tenancy, or multiple groups of users sharing
common access to software); AWS S3 (Simple Storage Service); Spark; Hadoop; and EMR
(Amazon Elastic MapReduce).
• With over 1 billion active users, Facebook stores over 300 petabytes of data. In order to do
interactive querying at scale, Facebook engineering invented Presto, a custom-distributed SQL
query engine optimized for ad hoc analysis. Over one thousand engineers use it.
• Airbnb supports over 100 million users and over 2 million listings and employs over 30 data
engineers, investing over US$5 million in headcount alone.
• Twitter handles 5 billion sessions per day in real time and uses Amazon ELB (Elastic Load
Balancing), Kafka, Storm, Hadoop, and Cassandra storage. It invented Heron, a real-time,
distributed, and fault-tolerant stream-processing engine that has powered Twitter's entire real-time
analysis since 2014. Twitter open-sourced Heron in 2016.
• eBay wants to integrate ML into each and every piece of data in the eBay product infrastructure.
They are working on building infrastructure that they can use to promote self-service ML,
reusable ML, and extensible ML.

Table 6.7 Common Modern Infrastructure Concepts33

Data Access

Augmenting a company's existing data warehouse with a data lake is the path most often
recommended by data scientists exploring AI models.
A data warehouse (DW) is a central repository for all the data collected in an organization's
business systems, e.g., ERP. A DW can process only structured data, so DL applications and
analytics on unstructured information are not feasible. Nor can employees at all levels of an
organization access the DW to run their own analyses. Analyses cannot reach the
original raw data, but certain predefined queries can be answered quickly. Business intelligence tools are
built on top of DW to provide dashboards or other kinds of insights from resident data.
A data lake, in contrast to DW, holds a vast amount of raw data in its native format. The data
scheme is decided upon reading, rather than loading or writing, the data. Data are quickly available
because they do not have to be curated before they can be used. This technology has existed since
2010 and is maturing very quickly. Data lakes allow self-service that is not possible with DW.
Cloudera’s data lake technology has generated a lot of use cases for customer care and marketing
applications (e.g., personalization of a customer instead of working with broad customer segments),
IoT (e.g., telematic devices used for car insurance), or fraud detection and cybersecurity in credit
card operations.
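The schema-on-read idea that distinguishes a data lake can be sketched in plain Python; an in-memory string stands in for the raw files of a real lake, and the records are invented.

```python
import io
import json

# Schema-on-read sketch: heterogeneous raw records land in the "lake"
# as-is; a schema is applied only when an analysis reads them.

raw_lake = io.StringIO("\n".join([
    '{"user": "a", "event": "click", "ts": 1}',
    '{"user": "b", "event": "view", "ts": 2, "device": "mobile"}',
    '{"user": "a", "event": "purchase", "ts": 3, "amount": 9.99}',
]))

def read_with_schema(stream, fields):
    # the schema is decided at read time: keep only the fields this
    # analysis needs, tolerating records that lack some of them
    for line in stream:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}

purchases = [r for r in read_with_schema(raw_lake, ["user", "event", "amount"])
             if r["event"] == "purchase"]
print(purchases)
```

Note that nothing had to be curated or conformed to a fixed table before loading, which is precisely what makes data quickly available in a lake.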

Modern Data Engines and Storage

Hadoop: Google has given the world many tools to handle big data, as the company's technology
evolved while dealing with massive amounts of real-time information. Hadoop is an example: it
grew out of Google's File System paper, published in 2003.
Hadoop is an open-source software framework used for distributed storage and processing of
big datasets using the MapReduce programming model. Its highly parallel storage layer offers a lot
of resilience, as it is designed with a fundamental assumption that hardware failures are common
occurrences and should be automatically handled by the framework. Hadoop runs on commodity
hardware. Unfortunately, Hadoop is not very user friendly and it is hard to find and hire people with
adequate Hadoop skills. This explains why more than 50 percent of businesses do not have plans to
invest in it, according to Gartner.
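The MapReduce programming model at Hadoop's core can be sketched in single-process Python; Hadoop's contribution is running these same phases reliably across many machines and disks.

```python
from collections import defaultdict
from itertools import chain

# Single-process sketch of MapReduce: map emits key/value pairs, a
# shuffle groups them by key, and reduce aggregates each group.

documents = ["big data big models", "big clusters"]

def map_phase(doc):
    # emit (word, 1) for every word in a document
    for word in doc.split():
        yield (word, 1)

def shuffle(pairs):
    # group emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # aggregate each key's values
    return {key: sum(values) for key, values in groups.items()}

pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle(pairs))
print(counts)
```

In a real cluster, each phase runs in parallel across nodes and the framework transparently re-runs work lost to hardware failures, which is the resilience described above.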
Spark: Spark is a cluster-computing framework originally developed at the University of California,
Berkeley, that addresses speed in the handling of data. Its goal is to run queries very fast, thus
providing companies with a competitive advantage and allowing new operating models and use
cases to develop, as it does real-time data processing. Spark has deep ties to ML data structures and
data frames, and supports integration with ML libraries.
Hive: Hive is a DW infrastructure developed at Facebook and built on top of Hadoop to provide
data aggregation, query, and analysis. It is highly suitable for ad hoc analyses, is very fault resistant,
and is especially strong in creating data pipelines for cloud workloads.
Presto: An open-source distributed SQL query engine for running interactive queries against data
sources of all sizes developed at Facebook.
Impala: An open-source, massive parallel-processing SQL query engine developed by Cloudera to
analyze data stored in clusters running Hadoop. Impala is similar to Presto.

Table 6.8 AI ESG Considerations: Data

Environmental • Quality control in data acquisition and development


• Environmental impact of large data processing and storage facilities
Social • Mitigating data bias through diverse teams
• “Machine teaching” as a new job
• Bias in/non-inspectability of algorithms
Governance • Need to develop data governance best-practices framework
• Privacy concerns and multiple, overlapping regulatory regimes and
bottlenecks challenge quality dataset development
• A new role—design ethicist—for the ethical design of new technology
products and services
• Need for boards to have directors who understand IT transformation,
product design, security, and design ethics
CHAPTER SEVEN

Algorithms

Why This Chapter?


This chapter is the logical continuation of what we discussed in Chapter 3.
We brush up on concepts such as ML, DL, cognitive and quantum
algorithms, and take a deep dive into practical challenges in implementing best-in-
class NAI. Among these challenges are bias in datasets and algorithms,
adjustment and tuning techniques of algorithms to fit the problem,
adversarial attacks on ML models, and cyber issues around new interfaces
such as voice.
Today’s algorithms are sequences of instructions telling computers
what to do. “Believe it or not,” writes Domingos, “every algorithm, no
matter how complex, can be reduced to just . . . three operations: AND,
OR, and NOT.”1 Most algorithms are publicly available and have been in
use for decades. There are several providers of algorithms to fit a company
or a product’s needs, e.g., Algorithmia. Websites like Kaggle provide the
massive datasets to further train these algorithms. “Algorithms, by
themselves, are not usually patentable. In the U.S., a claim consisting
solely of simple manipulations of abstract concepts, numbers, or signals”2
(which is exactly what algorithms do) does not constitute a patentable
“process.” However, practical applications of algorithms can be IP-
protected. In 2005, for example, the U.S. Patent and Trademark Office
awarded a patent to a factory optimization system designed by a genetic algorithm.
However, “The patenting of software is very controversial, and there are
patents involving algorithms, e.g. data compression.”3
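Domingos's claim is easy to demonstrate: below, XOR and a half adder (the building block of binary addition) are composed from nothing but AND, OR, and NOT.

```python
# Build richer logic from only AND, OR, and NOT, illustrating that
# more complex operations reduce to these three.

def AND(a, b): return a and b
def OR(a, b): return a or b
def NOT(a): return not a

def XOR(a, b):
    # a XOR b == (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # one-bit addition: sum bit is XOR, carry bit is AND
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

Chaining such adders yields full binary arithmetic, which is the sense in which every algorithm reduces to the three primitive operations.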
Legal scholar Andrew Tutt has argued that algorithms need an FDA-style regulator.4 It is
difficult to hold algorithms accountable under today’s tort and criminal
law. The increased use of algorithms calls for the need to evaluate their
impact on business and society. In the next three years, we expect to see
ML algorithms become a ubiquitous and essential part of business.
Algorithms, trained on faster and better datasets, will be crucial in
identifying customer demand and personalizing it, increasing customer
trust, and delivering better productivity into any kind of an enterprise and
any industry. We expect companies to invest at least as much money and
resources in training machines as they do in training their employees.
A typical question a company should consider when deciding to try AI
for their datasets and being confronted with a wide variety of ML
algorithms is, “Which one should we use? Which are we allowed to use
now and in the future?” The answer to this challenge depends on several
factors, such as:
• the size, quality, and nature of the data;
• the available computational time;
• the urgency of the task; and
• the type of problem the company is trying to address and solve with the data.

What kind of ML models can best address the challenge? If a numeric


prediction is needed, regression trees or linear regression will serve that
purpose. If a hierarchical result is required, hierarchical clustering will be a
good place to start. In any case, a company might have to try a large
number of algorithms to find out which one is the perfect match for a
given set of data.5
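This trial-and-error selection can be sketched in a few lines. Only two toy regressors compete here (a mean baseline and a linear fit, on synthetic data invented for the example); real projects loop over many more candidates in exactly the same way.

```python
import numpy as np

# Minimal sketch of algorithm selection by empirical comparison on
# held-out data. The dataset and both models are toy examples.

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=200)
y = 3.0 * X + 2.0 + rng.normal(0, 1.0, size=200)  # linear ground truth

X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

def fit_mean(Xtr, ytr):
    # baseline: always predict the training mean
    m = ytr.mean()
    return lambda Xq: np.full(len(Xq), m)

def fit_linear(Xtr, ytr):
    # least-squares straight-line fit
    slope, intercept = np.polyfit(Xtr, ytr, 1)
    return lambda Xq: slope * Xq + intercept

def mse(predict, Xq, yq):
    return float(np.mean((predict(Xq) - yq) ** 2))

candidates = {"mean baseline": fit_mean, "linear regression": fit_linear}
scores = {name: mse(fit(X_tr, y_tr), X_te, y_te) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(scores, "-> best:", best)
```

The pattern, fit every candidate and compare them on data the models never saw, is what "try a large number of algorithms" means in practice.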

Adjusting Data and Algorithms


One of the difficulties is that learning algorithms (e.g., decision trees,
random forests, clustering techniques) will require the setting up of
parameters before usage. That said, anyone who is modeling must set up
these parameters (e.g., number of trees in decision trees, number of layers
in DL, time period picked for injecting data into a model, etc.) in an
optimal way to enable effective and efficient learning. Much is being said
about the “tuning” of algorithms or ML techniques. Tuning is a process
that aims at optimizing the parameters.6 Such an engineering feature is
incredibly important and domain specific.7 Some researchers reach for
complex learning algorithms, or exhaustive tuning procedures such as grid
search, before experimenting adequately with simpler, well-tuned alternatives.
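A minimal tuning loop looks like this. The "parameter" tuned in this toy example is the degree of a polynomial fit, chosen by error on held-out validation data; the same grid-search pattern applies to tree counts, layer counts, and the other parameters mentioned above.

```python
import numpy as np

# Toy grid search: score each candidate parameter value on validation
# data and keep the best. Data and candidate grid are invented.

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(-3, 3, 60))
y = np.sin(X) + rng.normal(0, 0.1, 60)   # noisy nonlinear target

X_tr, y_tr = X[::2], y[::2]              # training half
X_val, y_val = X[1::2], y[1::2]          # validation half

def validation_error(degree):
    coeffs = np.polyfit(X_tr, y_tr, degree)
    pred = np.polyval(coeffs, X_val)
    return float(np.mean((pred - y_val) ** 2))

errors = {d: validation_error(d) for d in range(1, 10)}
best_degree = min(errors, key=errors.get)
print("validation-best degree:", best_degree)
```

Scoring on a validation set rather than on the training data is the essential point: it is what keeps tuning from simply rewarding the most flexible setting.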
Tuning also goes hand in hand with another issue. Data are the most
desirable ingredient to build an AI product, but there can be too much of a
good thing: “Overfitting is the central problem in ML. In ML, the
computer’s greatest strength—its ability to process vast amounts of data
and endlessly repeat the same steps without tiring—is also its Achilles’
heel.”8 Overfitting can come from a business developer, researcher, or
engineer formulating a hypothesis poorly, from a lack of communication
between people sitting in data silos, or from the absence of clearly
formulated tasks to solve with modeling. Sometimes, too many
assumptions reflect a lack of sufficient data to cluster, analyze, and use to
reach satisfactory conclusions. “Learning is a race between the amount of
data you have and the number of hypotheses you consider.” Harvard’s
Leslie Valiant received the Turing Award (an equivalent of the Nobel
Prize in computer science) for inventing an analysis dealing with the
balance between data required and hypotheses formulated. This approach
is described in his book Probably Approximately Correct.9
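Overfitting can be made concrete with a small experiment on invented data: a very flexible model memorizes the training noise (near-zero training error) yet predicts unseen data worse than a simpler model that matches the true structure.

```python
import numpy as np

# Toy demonstration of overfitting: compare a simple and a very
# flexible polynomial fit on noisy data whose true structure is linear.

rng = np.random.default_rng(2)
X_train = np.linspace(-1, 1, 15)
y_train = X_train + rng.normal(0, 0.2, 15)  # truth is a straight line
X_test = np.linspace(-1, 1, 100)
y_test = X_test                              # noise-free test targets

def train_test_errors(degree):
    coeffs = np.polyfit(X_train, y_train, degree)
    tr = float(np.mean((np.polyval(coeffs, X_train) - y_train) ** 2))
    te = float(np.mean((np.polyval(coeffs, X_test) - y_test) ** 2))
    return tr, te

simple_tr, simple_te = train_test_errors(1)     # matches true structure
complex_tr, complex_te = train_test_errors(13)  # enough freedom to memorize

print(f"degree 1:  train {simple_tr:.4f}, test {simple_te:.4f}")
print(f"degree 13: train {complex_tr:.4f}, test {complex_te:.4f}")
```

The degree-13 model "wins" on training error precisely because it has fit the noise, the race between data and hypotheses that Valiant's analysis formalizes.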

Cyberattacks on ML Models
An important research field now includes putting together and creating
a powerful, adaptive model of a cyberattacker. Experts are working on so
called adversarial models, which attempt to break other models to show
where they are weak, or where the data assumptions might be wrong.
Adversarial AI as quality control and security might one day become a
precondition for release of any AI product or service. An example would
include an attack to thwart a self-driving car’s AI system, causing it to
mistake a sign, speed limit, or child for something else. In December 2017,
researchers fooled a Google AI into mistaking a rifle for a helicopter.10
Adversarial examples are inputs to ML models that a cyberattacker has
intentionally designed to cause the model to make mistakes. They
represent a new problem in AI (and overall) safety that should be
addressed as soon as possible. Fixing them is very difficult and requires
research and funding. OpenAI is one of the organizations investing in this
research to comply with its overall goal of delivering safe and beneficial
AI.
One of the ways to combat such attacks is by training models not to be
tricked. This is not easy, especially if an attacker has a good strategy for
guessing where defense weaknesses are, and trains his or her model
against potential defense techniques. Defense strategies in this context
can’t be perfect. They might block one kind of attack, but be open to
another vulnerability.11
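The mechanics behind such attacks can be sketched in the style of the fast gradient sign method (FGSM) on a hand-built linear classifier rather than a real vision model; the weights, input, and step size below are all invented for illustration.

```python
import numpy as np

# FGSM-style sketch on a toy logistic classifier (not a real model):
# a small, per-feature bounded perturbation in the direction of the
# loss gradient flips the model's decision.

w = np.array([1.0, -2.0, 0.5])   # "trained" weights (assumed)
b = 0.1

def predict(x):
    # probability of class 1 under a logistic model
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x_clean = np.array([2.0, 0.5, 1.0])   # confidently classified as class 1

# For a linear model the gradient of the score w.r.t. the input is just
# w, so stepping each feature by -epsilon * sign(w) lowers the class-1
# score as much as a bounded per-feature change allows.
epsilon = 0.9
x_adv = x_clean - epsilon * np.sign(w)

print("clean:", round(predict(x_clean), 3),
      "adversarial:", round(predict(x_adv), 3))
```

Real attacks on deep networks follow the same gradient logic but must estimate gradients through many layers, and adversarial training injects such perturbed examples back into the training set as a (partial) defense.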

Cybersecurity/ML Use Cases


In 2017, humans started to consume content synthesized to mimic real
people. Media, technology companies, and citizens already struggle to
understand the difference between facts and lies. The issue is aggravated
by the fact that AI can synthesize voices. This is great for work in
customer care, medical applications, and industrial maintenance, but the
bad guys are innovating as well. We have entered an era where AI can
mimic anything that can be digitized. Researchers from Bar-Ilan
University in Israel and Facebook’s AI team were able to tweak audio
clips that made AI “hear” them incorrectly. AI can accurately identify
objects in an image and spoken words, but its algorithms don’t work the
same way the human brain does, so they can be tricked in ways that
humans can’t.
The team’s efforts can also be applied to other ML algorithms. By
adding noise to an image of a road scene, the team was able to fool an AI
algorithm (usually used in autonomous-car applications for classifying
features like roads and signs) to see something completely different. Of
course, attacking ML models in real life is something different from
experimenting with adversarial cases in a lab.12 We know, however, that
cybercriminals innovate and are intent on exploiting system vulnerabilities
to their advantage.
The Australian digital agency DT prototyped a device worn on the
ear and connected to a neural net that was trained on real and synthetic
voices, called “Anti AI AI.” The device notifies the wearer when a
synthetic voice is detected and cools the skin using a thermoelectric plate
to provide an alert that the voice they are hearing is not human but
synthetic.13 The implications of this research cannot be overstated. We
are at the point at which AI is making it possible for us to create synthetic
data—voices, images, videos—that are indistinguishable from the real
thing. We need defense mechanisms against fraud, be it wearables,
software filters, or other technologies, and we are just at the beginning of
serious research into these matters.
CHAPTER EIGHT

Hardware

Why This Chapter?


This chapter describes the hardware environment needed to implement
narrow AI. Very few traditional businesses use the computational power of
GPUs. In their data-driven competitors, however, such hardware is part of
daily life. Readers from traditional companies will learn about Moore’s
Law, which dictates the path of improvement in semiconductors, and the
role of smart networks and cloud frameworks in democratizing ML. Some
executives might wonder whether this information is needed. Skeptical
executives of traditional businesses would be best advised to communicate
with semiconductor companies and learn about their roadmaps. What is
maturing in silicon R&D might well dictate the technology path of the next
two years. Not every silicon vendor possesses business development skills
to translate their research into a business language of traditional
companies. Reading the following chapter will help those lost in
translation.

Semiconductors
The Development of Silicon: Underlying Drivers and
Their Consequences
In the last three years, AI-enabling infrastructure has become more
easily available, and this trend is expected to accelerate even further. Most
AI researchers and practitioners refer to curve-fitting economic “laws”
while predicting the pace of technology development and adoption.
Moore’s Law, named after Gordon Moore, a founder of Intel, is one of the
best examples. Gordon Moore published a four-page article in the trade
journal Electronics in 1965.1 This article not only articulated the beginning
of a trend, it also described what had to happen in the silicon industry over
the next 50 years. Rodney Brooks dedicates a brilliant piece, “The End of
Moore’s Law,” to explaining how this major discovery shaped competition
in silicon and produced several megatrends.2
According to Moore’s Law, the number of components on an
integrated circuit doubles every one and a half years. Moore's original
formulation counted components, not transistors. Generally, there are many
more components than transistors, though the ratio has dropped over time
as different types of transistors have been adopted.
With time, we have seen variations on Moore’s Law. The most popular
versions involve one of the following:
• two times as many transistors;
• two times as much switching speed in these transistors (so a computer could run twice as fast);
• twice the amount of memory on a single chip; and
• two times as much secondary memory in a computer (first on mechanically spinning disks, but
more recently in solid-state flash memory).

As transistors become smaller, they become faster in switching on and off.
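The compounding arithmetic behind this reading of Moore's Law is easy to check: a doubling every 18 months works out to roughly a hundredfold increase per decade, and billions-fold over the 50 years the trend held.

```python
# Compound growth under Moore's Law: doublings accumulate as
# 2 ** (elapsed years / doubling period).

def growth_factor(years, doubling_period=1.5):
    return 2 ** (years / doubling_period)

for years in (1.5, 10, 50):
    print(f"{years:>4} years -> roughly {growth_factor(years):,.0f}x")
```

It is this relentless compounding, not any single generation's improvement, that made the economics of the digital revolution predictable for market players.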


The memory reference is especially important. The memory chips were
called RAM (random access memory). Moore’s Law made the rules of
competition, amortization of investments, and economics clear to the
market players for many years. Product design went on without big
surprises. This allowed the digital revolution to proceed. In a sense,
Moore’s Law dictated what later became a general-purpose computer
design, which arose by the time central processors could be put on a single
chip. This architecture is known as the von Neumann architecture.
With time, it has become increasingly difficult for semiconductor
players to break out of the economics mold dictated by Moore’s Law. The
architecture of computers built with respect to it has become more or less
inflexible. Though there might have been other ways to organize
computation, companies did not do so. For example, the general-purpose
architecture was nothing like how a human brain works. There were
experiments such as the Lisp Machine or Connection Machine from the
MIT AI Lab, and Japan’s fifth-generation computer project (which played
with the idea of data flow and logical inference). These attempts did not
find a large audience, however, as the conventional organizations were
locked into mainframes, which had been around for decades. Current
chipset families—whether X86, Arm, or PowerPC—all comply with
Moore’s Law. Quantum silicon—as we can see it today—complies with it
too. The only exception is that of graphical processing units (GPUs).
Today’s problem with Moore’s Law is that there are physical limits to
what is theoretically possible and practically achievable. Already Moore’s
Law is becoming less relevant, as major leaps in node size in the
manufacturing process have come to an end, with 14nm as the current
industry standard. With Intel and others pushing toward 10nm and
eventually 5nm, which is very close to the limit, we might assume that around 2020 the
explosive growth in computational power we’ve seen in the past few
decades will be over. Multicore designs have preserved the “number of
operations done per second” version of Moore’s Law, with all the
associated limitations. Chip makers work closely with software architects
to optimize and fine-tune the performance of their silicon vis-à-vis the
demands of software development.
In the end, Moore’s Law has been flipped. The chip industry releases a
new chipset every year and a half, and this product is usually twice as
powerful as the last one. As a result, semiconductor vendors offer superior
products at reduced prices, massively cutting the costs of computation and
boosting innovation in hardware form factors, the number of devices, and the
software to run on these devices. A natural, though uneven, consequence
of this trend may be that every consumer or enterprise product will be
potentially equipped with a chipset that is connected to the Internet.

Fragmentation of Chipset Solutions for AI


Chipsets cannot be improved quickly enough to handle ML and DL,
which not only require vast volumes of data but also consume more
number-crunching power than entire data centers did just a few years ago.
The industry leaders, including Google and Microsoft, are opting for
increasingly specialized processors from other companies and are even
designing their own. When it comes to AI, there is no magical silicon
“One Chip to Rule Them All.” New silicon will address existing
bottlenecks in software, and then there’s the promise of quantum
computing. Startups and traditional chipset companies will also push for
better High Performance and Neuromorphic computing, among other
things.
The implications for traditional semiconductor companies are
profound.3 CPUs (central processing units), like Intel’s Xeon and Xeon
Phi for datacenter and the Qualcomm Snapdragon for mobile devices, do a
great job for relatively simple data like text and jpeg images, but they may
encounter difficulties in handling high-velocity and resolution data coming
from devices like 4K video cameras or radar.
To be able to address more use cases in ML, Intel acquired Nervana
Systems to lay the groundwork for a general DL infrastructure, and then
Movidius to gain high-performance system-on-chip (SoC) platforms
for accelerating computer vision applications. As part of its M&A strategy,
Intel has invested over US$1 billion in companies focused on ML and
robotics, including Mighty AI, Data Robot, Luminata, and AEye.4 To
compete in the automotive sector, the company acquired Mobileye. The
Intel cloud-hardware and Mobileye camera/software combination could be
a serious alternative to Nvidia’s self-driving GPU platform. As Intel has
excellent regional sales and business development resources, the company
can attract traditional automotive OEMs and their suppliers to speed up the
adoption of its autonomous driving technology. Mobileye controls data
generated by its camera in vehicles, which might be a factor to consider if
an automotive OEM cooperates with Intel.
At this point in time, most AI entrepreneurs innovate on GPUs. Driven
by the quest for better performance in video, graphics, and gaming, the GPU
was designed not for general-purpose computation but for additions and
multiplications on streams of data. Unlike CPUs, which compute in a
sequential fashion, GPUs offer a massive parallel architecture that can
handle multiple tasks concurrently. GPUs have hundreds of specialized
“cores” (the “brains” of a processor), all working in parallel, whereas
CPUs have only a few powerful ones that tackle computing tasks
sequentially. As an example, Nvidia’s 2017 processors boast 3,584 cores;
Intel’s servers’ CPUs have a maximum of 28.
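The contrast between sequential CPU-style execution and data-parallel GPU-style execution can be sketched in a few lines of Python. NumPy's vectorized addition stands in here for the massively parallel stream processing a GPU performs; this is an analogy running on ordinary hardware, not actual GPU code:

```python
import numpy as np

# Two ways to add two million-element streams of numbers:
# a CPU-style sequential loop versus a data-parallel (GPU-style) operation.

def sequential_add(a, b):
    """CPU-style: one element at a time, in order."""
    out = [0.0] * len(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

def parallel_add(a, b):
    """GPU-style: one vectorized operation applied to the whole stream.
    NumPy dispatches this to optimized, data-parallel native code."""
    return np.asarray(a) + np.asarray(b)

a = [float(i) for i in range(1_000_000)]
b = [float(i) for i in range(1_000_000)]
assert parallel_add(a, b).tolist() == sequential_add(a, b)
```

On large streams the vectorized version wins by orders of magnitude, for the same reason GPUs outperform CPUs on DL workloads: the same simple operation is applied to many data elements at once rather than one after another.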
In 2017, Nvidia announced the Tesla V100, a new Volta-architecture GPU
that tries to marry ML with high-performance computing.5 In addition, Nvidia has
built a daunting lead in software tools for its graphic chips, supporting
almost every framework for processing neural networks used in DL.
Currently the company is closing partnerships with system integrators and
software developers in DL, e.g., Mobiliya Inc., to scale its reach into
different industrial sectors.
GPUs have high computational precision that is not always needed and
suffer memory bandwidth and data throughput issues. This has opened up
a market for a new breed of startups and projects within large companies
like Google to design and produce silicon specifically for high-
dimensional ML applications. Improvements in new designs of chipsets
include larger memory bandwidth, computation on graphs instead of
vectors (for GPUs) or scalars (for CPUs), higher compute density,
efficiency, and performance per watt. This enables faster training of
models (especially on graphs); energy and data efficiency when making
predictions; running AI systems at the edge (IoT sensors); cloud
infrastructure as a service; and autonomous vehicles, drones, and robotics.6
In this context, the chipset market for AI is becoming fragmented. The
silicon range is expanding as cloud-computing firms mix and match chips
to make their operations more efficient and stay ahead of the competition.
At one end of the range are ASICs, an acronym for “application-specific
integrated circuits.” As the term suggests, they are hardwired for one
purpose and are the fastest as well as the most energy efficient. Dozens of
startups are developing such chips with AI algorithms already built in.
Google has built an ASIC called a “tensor processing unit” (TPU) for
speech recognition, which is currently used in Google cloud and offers
benefits not just for the company, but also for its cloud customer base.
TPUs improve relevance of search results and the accuracy of Google
maps. In typical Google fashion, the secret remains in the software.7
Google’s tensor processing units (TPUs) enable the squeezing of heavy
and highly sophisticated computation into less silicon. AI workloads using
TPUs run 15 to 30 times faster than other processors at Intel and Nvidia,
while efficiency is 30 to 80 times better.8 Some of the key creators of
Google’s TPU silicon recently left to create a startup, Groq Inc.
The other extreme is field-programmable gate arrays (FPGAs). These
can be programmed, which allows for greater flexibility. This feature is the
reason why, even though they are tricky to handle, Microsoft has added
them to many of its servers, for instance those underlying Bing, its online
search service.
GPUs, FPGAs, and ASICs are each unique in cost/benefit for specific
data types and throughput requirements. Some applications (e.g., vision-
guided autonomous systems) require a hybrid hardware approach to
comply with the latency and data processing requirements of the
application environment. Nvidia offers hybrid hardware platforms with an
ARM/GPU combo in its Jetson and Drive PX2 products. Intel and Xilinx offer
SoCs that combine ARM and FPGAs in a single low-power package. It
should be noted that “[a]ll of these products are finding their way into
drones, factory robots and automobiles where the right combination of
speed, flexibility and low power demand innovative approaches.”9
Meanwhile, Qualcomm is improving its Snapdragon processor to
include a variety of accelerator technologies to support ML in mobile and
other edge devices that will comprise the smart IoT. Additionally, Qualcomm
has started building chips specifically for executing neural networks,
according to Facebook’s Yann LeCun, whose company is helping the chip
maker develop technologies related to ML. Jeff Gehlhaar, vice president of
technology at Qualcomm, has confirmed the existence of this project.10
Qualcomm is one of the most active VC investors in AI companies.
Amazon Web Services’ move to FPGAs in the cloud is important for AI
because we do not yet know the final form AI hardware will take. If there
are more than one or two hardware standards, Amazon’s move could put
it at the forefront of AI cloud computing down the road.
The startup Graphcore has another interesting development around a
DL chipset. The company believes it will be capable of using a graph
processor to make an intelligent processing unit (IPU) to perform double
duty in training and inference on the same architecture across multiple
form factors, such as servers and devices. The first chip, named Colossus,
will split up workloads among more than 1,000 cores. Popular tools can
translate applications written in ML frameworks like TensorFlow and
Caffe2 into a form that can be exploited by IPUs. Graphcore’s chips
could shorten the wait for generating ML models to hours and days instead
of weeks and months.11
Lastly, Apple is working on a dedicated chip to power ML on devices.
The chip is known internally as the Apple neural engine and it is being
designed to improve tasks like facial recognition and speech recognition.
Currently, Apple devices handle ML tasks with two different
semiconductors: the main processor and the graphics chip.12

Importance of New Algorithms


New chipsets, whether Nvidia’s GPUs or Google’s TPUs, can fully
exploit AI if a new degree of optimization of a computing principle called
locality takes place. Unfortunately, progress in new algorithms does not
match the speed of silicon development to enable full exploitation of
locality. Parallelism, or the simultaneous performance of many tasks, is no
longer sufficient. Without new algorithms, there will be a widening gap
between computational and memory throughput. The efficiency boost that
would come from locality would allow algorithms to scale the memory
wall. Today’s computer vision algorithms, for example, have a leg up on
locality as they rely heavily on CNNs. At the same time, the RNNs used in
language and voice applications need changes to improve locality,
especially for inference.13
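Locality can be illustrated with a toy example that assumes nothing beyond NumPy's row-major storage: the same sum computed along contiguous memory versus strided memory. The arithmetic is identical; only the memory-access pattern, and therefore the effective use of memory bandwidth, differs:

```python
import numpy as np

# A 2-D NumPy array is stored row-major: elements of a row sit next to
# each other in memory. Traversing rows exploits locality (each memory
# fetch brings in neighbors that are used next); traversing columns
# strides across memory and wastes bandwidth.

x = np.arange(1_000_000, dtype=np.float64).reshape(1000, 1000)

def row_major_sum(m):
    # Good locality: the inner loop walks contiguous memory.
    return sum(float(row.sum()) for row in m)

def col_major_sum(m):
    # Poor locality: each step jumps a full row length in memory.
    return sum(float(col.sum()) for col in m.T)

# Same result either way -- only the access pattern differs.
assert row_major_sum(x) == col_major_sum(x)
```

On large arrays the row-major traversal is measurably faster for exactly the reason the text gives: algorithms that respect locality scale the memory wall, while those that do not leave compute units waiting on data.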

New Chipset Approaches


Earlier, we described general-purpose approaches in computing
hardware and ways to improve existing technologies to serve AI. If,
however, the industry wants to emulate a human brain, its current
capabilities are very limited.
One promising approach is neuromorphic hardware, where the idea is
to construct custom hardware that closely resembles the wetware of the
brain: “Conventional digital hardware performs hundreds of binary
floating point arithmetic operations to simulate a few milliseconds of
change in a single neuron’s membrane potential. This involves thousands
of transistor switching events, each of which consumes power (and
generates heat). The neuromorphic approach does away with all this digital
paraphernalia and uses analogue components that behave like the original
neuron. The result is more efficient in terms of power consumption.”14 The
beauty of the solution is that since not all neurons fire each time,
neuromorphic or “[s]piking NNs could replace hundreds in traditional
deep NN yielding much greater efficiency in both power and size.”15 In
September 2017, Intel announced a neuromorphic prototype called Loihi,
promising to deliver the first full version of the chip with 130,000 neurons
in a couple of months.16 The new chip is supposed to learn from incoming
data, which would make it superior to two generations of the currently
available IBM neuromorphic processor. BrainChip Holdings claims
significant IP patent protection of its Spiking NNs technology.17
Another interesting approach is a new kind of non–von Neumann
processor called a HIVE (hierarchical identify verify exploit), which is
being funded by DARPA with the participation of Qualcomm, Intel,
Georgia Tech, Pacific Northwest National Laboratory, and Northrop
Grumman. The approach targets the creation of a new graph analytic
processor, which would differ from CPUs and GPUs in key ways. The
approach allows one big map that can be accessed by many processors at
the same time, each using its own local scratch-pad memory while
simultaneously performing scatter-and-gather operations across its global
memory. The architecture is highly scalable and a very good fit for big
data problems, which typically involve many-to-many rather than the
many-to-one or one-to-one relationships for which today’s processors are
designed. The military is interested in this technology to fight
cyberattacks. A civilian example would involve mapping all the people
buying from Amazon to all the items each of them bought.18 Today, big
data problems on CPUs and GPUs are mostly tackled with regression
analysis, which is inefficient at revealing relationships between highly
dispersed data points. Graph analytic processors would allow for greater
richness in such results.
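The Amazon example above is a many-to-many graph. A minimal sketch in plain Python (with invented buyers and items) shows the scatter-and-gather pattern a graph analytic processor would accelerate in hardware:

```python
from collections import defaultdict

# Toy many-to-many graph: every buyer can map to many items and every
# item to many buyers -- the kind of relationship graph analytic
# processors target, versus the one-to-one rows regression assumes.

purchases = [
    ("alice", "book"), ("alice", "lamp"),
    ("bob", "book"), ("bob", "kettle"),
    ("carol", "lamp"), ("carol", "kettle"),
]

buyer_to_items = defaultdict(set)
item_to_buyers = defaultdict(set)
for buyer, item in purchases:
    buyer_to_items[buyer].add(item)   # scatter: fan out to items
    item_to_buyers[item].add(buyer)   # gather: fan in from buyers

# "Who else bought something alice bought?" -- a two-hop traversal.
also_bought = {
    other
    for item in buyer_to_items["alice"]
    for other in item_to_buyers[item]
    if other != "alice"
}
assert also_bought == {"bob", "carol"}
```

At the scale of a real retailer, every hop fans out to millions of neighbors scattered across memory, which is why an architecture built around shared global memory and scatter-gather operations fits these problems better than CPUs or GPUs.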

Cloud Technologies and the “AI-as-a-Service” Business Model


In less than a decade, the cloud has gone from being a tool to improve
IT economics and agility in an enterprise to a force transforming all kinds
of industries. It delivers on-demand computing to any consumer and
enterprise user with access to the Internet. AI and ML features are an
important component in platforms that operate on distributed datasets to
develop actionable business intelligence from disparate, asynchronous data
sources.
Over the past four years, there have been a number of examples
supporting the hypothesis that AI will be available via APIs in all kinds of
applications and through all kinds of devices delivered through the cloud,
including the following:19
• Next IT established the Alme platform for virtual healthcare assistants;
• IBM allowed third-party developers to build cognitive apps that leverage cognition hosted in
the cloud on Watson;
• IBM started cooperation with Twilio, a cloud communication platform used by over 1 million
developers; their first products are IBM Watson Message Sentiment and IBM Watson Message
Insights;
• Amazon established a third-party developers program for Alexa;
• Stephen Wolfram, creator of the Wolfram Language, which combines both programs and data
into a so-called “language for the global brain,” started weaving computational knowledge into
everything;
• Bonsai, democratizing development of AI applications and systems for non-AI programmers,
wants to do for AI what databases did for data, while abstracting away the complexity of ML
libraries like TensorFlow, CNTK, or MXNet;
• Google launched Cloud Natural Language API, an ML product that can be used to reveal the
structure and meaning of text in several languages;
• Google partnered with a number of organizations, including data Artisans, Cloudera, and
Talend, to submit the Dataflow Model, SDKs, and runners for popular OSS-distributed
systems to the Apache Incubator to deliver simple data processing in either streaming or batch
mode; and
• in May 2017, Nvidia announced a new GPU cloud service supporting major AI frameworks
such as MXNet, TensorFlow, Torch, Caffe, and CNTK. Part of this cloud is a software stack
that runs on PCs, workstations, and servers, and assigns workloads to local GPUs, connected
DGX-1 boxes, and processors hosted in Nvidia’s cloud as needed.
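A hedged sketch of how such API-delivered AI is typically consumed from client code: the application sends JSON over HTTPS and receives structured predictions back. The endpoint, field names, and response shape below are illustrative (modeled loosely on cloud NLP services) and do not belong to any specific vendor:

```python
import json

# Hypothetical "AI-as-a-service" client. In production the request body
# would be POSTed to the endpoint with an auth key; here we only show
# the round-trip shapes, so no network access is required.

API_URL = "https://api.example-cloud.com/v1/text:analyzeSentiment"  # illustrative

def build_request(text, language="en"):
    """Assemble the JSON body a typical sentiment endpoint expects."""
    return json.dumps({
        "document": {"type": "PLAIN_TEXT", "language": language,
                     "content": text},
        "encodingType": "UTF8",
    })

def parse_response(raw):
    """Extract the overall sentiment score from a typical response."""
    return json.loads(raw)["documentSentiment"]["score"]

body = build_request("The new chipset exceeded expectations.")
fake_reply = '{"documentSentiment": {"score": 0.8, "magnitude": 0.8}}'
assert parse_response(fake_reply) == 0.8
```

The appeal, and the trap, are both visible here: the call is trivial, but training, debugging, and adapting the model behind the endpoint to a specific domain is not, which is the point the following paragraphs make.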

Many analysts expect an upcoming battle between AI-as-a-service
platforms. We believe that, in the short run, vertical services with deep
domain expertise will be successful, while the allure of easy money from
sticking an ML model behind an API function will face difficulties.
Software developers in traditional businesses will find it hard to train and
debug these off-the-shelf technologies properly. We assume as well that
Amazon, Google, and Microsoft, which are all trying to sell an AI-as-a-
service layer as a component of their cloud strategy, will not achieve full
monetization of this service unless they acquire product specialists to
obtain domain knowledge.
Corporations might find it hard enough to bring in the technical
competence to do ML internally. They might find it even harder to bring in
the “data product” talent that can help identify the right problems and the
means to productize ML solutions. In this context, enterprise startups
currently advertising their “as-a-service” capabilities might become
important outsourcing partners.

Deployment of Smart Networks


When the first network technologies came into commercial use,
companies were happy enough to have the necessary pipes to send data.
Today more is required from these pipes: the convergence of applications,
the need for enhanced quality of service, cybersecurity challenges, calls for
ubiquitous communication among people and devices, and the rising
penetration of sensors. Though AI is automating network management
systems, there are
big challenges linked with historically siloed approaches in network
architecture, leading to deficits in security and data mining.20 The rapid
acceleration of AI in the cloud raises important questions about the
readiness of data networks to handle enormous flows of information in real
time. The industry is fast approaching the so-called Shannon Limit, the
theoretical ceiling on the amount of information a channel can carry
without error. Beyond that limit, data would degrade.
In this context, we expect a lot of design discussion to take place
around how much data should stay in the cloud and how much should be
embedded into an end point, including cars, or other devices.21
CHAPTER NINE

Openness

Before 2015–2016, AI breakthroughs happened mostly within a small
number of high-tech giants and some top research schools. In the past
three years, however, the AI community has experienced momentum
toward increased openness. Organizations like OpenAI are trying to make
the case for transparency to develop safe AI while trying to break the data
monopoly and dislodge incumbents. They are doing this by positioning
themselves as a non-profit partner to many (like car companies) with lots
of data but little or no in-house DL expertise.
Open initiatives also help to address the need for talented PhDs in both
the public and private sectors to collaborate and communicate with the
wider community as well as publish their research.
Following are several examples of increased openness in AI:
• Facebook’s research branch FAIR is currently publishing outstanding papers on a regular
basis;
• with several peers, Elon Musk funded OpenAI, which is opening up the simulation
environment universe to the public;
• Google is expending significant resources to boost research on DL, mostly focused within its
DeepMind acquisition, and open-source libraries such as TensorFlow;
• Amazon open-sourced its Deep Scalable Sparse Tensor Network Engine (DSSTNE);
• even Apple announced that it would start releasing its AI research results to the public;
• in April 2016, Facebook’s F8 Conference specifically focused on the future of chatbots as
valuable personal assistants. During the conference, the company announced it would let
developers and other outlets integrate bots capable of responding to inquiries into the
Facebook Messenger service;
• Facebook and Microsoft started a collaboration to enable movement between frameworks, as
well as simplifying conversions from PyTorch to Caffe2. This initiative is called ONNX, the
Open Neural Network Exchange, and it will help developers reduce the lag time between
research and productization;1 and
• “Amazon and Microsoft unveiled ‘Gluon’ neural network technology, teaming up on machine
learning.”2

Software framework and hardware reference designs are not the sole
domain to influence the pace of AI. High-tech giants are releasing large
and well-labeled datasets necessary to train neural networks, such as the
YouTube Video Dataset, with 500,000 hours of video-level labels, and
Yahoo’s 13.5TB of data, which includes 110 billion events describing
anonymized user-news item interactions from 20 million users on various
Yahoo properties.
Similar developments happened before AI. Facebook entered the
Internet world after Google and relied on open development of
its cloud architecture to benefit from crowd-sourced intelligence without
damaging its core strategic advantage in social networks and the monetizing
of social data in the attention economy. Google publicly discussed
DeepMind algorithmic advancements in Go, while some algorithms were
focused on optimizing energy consumption in Google data centers. For
companies such as Google or Amazon, the software and datasets they
open-sourced are a complement to their cloud-computing infrastructure
products, with Google offering a convenient way to run their customers’
systems with TensorFlow in the Google cloud, and Amazon similarly
expanding AWS to make it simple to run DSSTNE. Openness in AI
development is absolutely crucial to preventing the monopolization of AI
research and development within a few commercial companies. It is an
important step in establishing safe and beneficial AI.

Table 9.1 AI/ESG Considerations: Openness

Environmental
• AI openness equates to increased transparency
• AI openness focuses on the development of “safe” AI
Social
• Openness promotes collaboration/sharing of knowledge internally and externally to benefit society
Governance
• Openness non-profits seek to protect society from the ungoverned growth of potentially unsafe AI
• Openness furthers the need for public/private collaboration on governance and safety
PART 3

The Race for AI Dominance—From Full-Stack Companies to “AI-as-a-Service”

Since 2012, two dozen companies have been fiercely competing to
develop the best ML and DL technologies and to recruit the most capable
talent, while the venture capital industry is investing billions, largely
focusing on narrow applications. The largest tech players chose to invest in
full-stack AI capabilities to tightly control productivity costs and user
experience. Smaller companies use ML and DL to improve their domain
leadership. The race for AI dominance is wide open. Traditional
companies can compete, but only if they commit to AI and embrace the
substantial technology, corporate leadership, and organizational changes
needed to become data-driven competitors.

Key Points in Part 3


• AI is a fast-evolving discipline, requiring investment into fundamental research to keep pace.
• Monetizing capabilities of early AI successes and applying cross-subsidies from other fields
helps to maintain a competitive edge, acquire additional domain expertise, and move from one
industry vertical and/or use case to another.
• Full-stack AI companies are employing the most prominent AI scientists, who in turn help
recruit students and attract further AI talent, while continuing to pursue their academic
research.
• Experts believe China is just one step behind the U.S. in the race for the most advanced AI
technologies. The line between the government and the private sector in China is blurred, and
joint government-private research centers are appearing, benefiting greatly from direct
government funding.
• There are several types of competitors in AI at the present time:
o Full-stack AI companies such as Alphabet/Google, Apple, Facebook, Microsoft, and
Amazon. In principle, these are data-focused companies, owing a large part of their
success to the data they hold and acquire. They are using data to sell either advertising or
products/cloud. AI largely enables them to optimize their business models. In 2017, five
of the ten biggest listed global companies by market capitalization were in technology.
All five companies declared their strategies to be “AI first.” This implies development of
their own semiconductors, improvement of IT infrastructure for data modeling purposes,
the introduction of close links between R&D and product development and release, and—
in some cases—offering cloud-based ML services in areas including generalized
computer vision and NLP (speech and text processing). The scope of these services will
increase over time. Apple is focused on delivering AI on devices to ensure privacy
(“differential privacy”). Google is testing this type of technology as well.
o Domain-focused companies providing either general-purpose APIs in certain areas or
vertically integrated product players with deep domain expertise. APIs for specific AI
tasks are delivered by Nuance (in the field of NLP), PredictionIO (acquired by
Salesforce), and Wise.io (acquired by GE to complement their analytical capabilities in
industrial IoT).1 Vertically integrated players include Sentient, Euclid Analytics,
HoneyComb, and Judicata.
o Traditional enterprise vendors (like IBM), which are jumping on the AI bandwagon to
better sell their legacy services.
o New enterprise startups, for example, Skymind (DL, integrated with Hadoop and Spark),
Predii (data preparation and insight generation), and Mobiliya Inc. (customized DL for
large companies, training-as-a-service—the company was acquired in early 2018 by
Quest Global).
• Caution should be exercised regarding Watson technology capabilities, as Watson has been
built on IBM legacy technology and expanded into too many industrial verticals, and does not
always deliver on its marketing promise.
• We expect further consolidation to happen in the industry, with traditional businesses slowly
entering the AI race in the next three to five years. M&A will complement full-stack AI
companies’ capabilities. The performance of low-cost general ML services will commoditize
all but the most sophisticated small companies, focused on verticals and technologies that the
tech giants do not sufficiently care about. At the same time, the best AI talent will continue to
go to Amazon and Alphabet.
• As previously discussed, we believe caution should be exercised regarding “AI-as-a-service”
claims. Implementation success can be achieved only if a company working with such service
providers has sufficient in-house expertise to work with data and problem specifications for
data training and modeling.
• For traditional companies, we recommend a portfolio-based approach to piloting ML.
Businesses without deep experience in analytics across all organizational units will face
difficulties in fully embracing ML as an enabler to increase competitiveness.
• As ML and automation become increasingly common, AI-centric design (instead of just
human-centric design) will be necessary. This will greatly benefit product development, as it
will enable insight generation and agility of products and services.
CHAPTER TEN

AI Crossplay—Academia and
the Private Sector

When Andrew Ng joined Google from Stanford in 2011, he was one of the
first AI experts from academia to take an active role in industry. Ng
moved again in 2014 to become chief scientist at Baidu, building the
company’s research lab in Silicon Valley. He left Baidu in early 2017; he
now teaches DL on Coursera, the online education company he cofounded,
and has raised an AI fund. Google hired Geoffrey Hinton, a deep-learning pioneer
at the University of Toronto, in 2013. DeepMind, which was acquired by
Google, has close links to Oxford, where Alphabet is currently funding
over 250 research projects and dozens of PhD fellowships.1 In 2015, Uber
hired almost 40 of 150 researchers at the U.S. National Robotics
Engineering Center based at Carnegie Mellon University in Pittsburgh,
Pennsylvania, mainly those working on self-driving cars. Uber then
donated US$5.5 million to support student and faculty fellowships at the
Center.
These stories are not unique. Fifteen years ago, academia was able to
retain the brightest minds, especially those who would have otherwise
gone to an investment bank. Today technology giants such as Microsoft,
Google, Facebook, and Amazon provide an intellectually stimulating
environment, superior R&D labs, and a competitive salary. Moreover,
scientists can continue doing their research, occasionally coaching product
teams to improve existing products with new insights. In the 1950s, a
similar trend of job migration was observed in the semiconductor sector.
Carlos Guestrin works at the University of Washington and Apple. Russ
Salakhutdinov, Hinton’s protégé and head of AI at Apple, still spends time
at Carnegie Mellon. Hinton works for both Google and the University of
Toronto. As The Washington Post has reported, “Building ties to academic
superstars not only helps to improve products but also becomes a key
recruiting tool,” said Richard Zemel, director of the Vector Institute for AI
and a professor specializing in ML at the University of Toronto.2
Yoshua Bengio, who advised IBM and is currently supporting
Microsoft AI efforts, believes the concentration of wealth, power, and
capability in top Internet brands is “dangerous for democracy,” and even
that these companies should be broken up. His reasoning is simple. AI
technology naturally lends itself to a winner-take-all scenario: “The
country and company that dominates the technology will gain more power
with time. More data and a larger customer base give you an advantage
that is hard to dislodge. Scientists want to go to the best places. The
company with the best research labs will attract the best talent. It becomes
a concentration of wealth and power.”3
North America is currently leading in AI research and applications,
although China is a strong challenger and is committed to becoming the
global AI leader by 2030. Thriving AI centers exist in Europe, with the
UK, Germany, France, and Switzerland among the most DL-savvy
geographical hubs.
It is important to mention the Swiss lab IDSIA, affiliated with the
Università della Svizzera italiana (the University of Lugano), which has made
research breakthroughs in many areas, including DL and robotics. Jürgen
Schmidhuber, one of the fathers of DL, works there. He is especially
renowned for his invention and subsequent development of long short-
term memory (LSTM), a recurrent network algorithm that helps machines
learn from sequential data in ways feed-forward networks cannot. A demonstration
of LSTM can be found on everyone’s mobile phone. Schmidhuber’s PhD
student Shane Legg later became a co-founder of DeepMind, which was
sold to Google in 2014.
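For readers curious about the mechanism, a single LSTM step can be written in a few lines of NumPy. This is a teaching sketch of the standard formulation, not Schmidhuber's original code; all dimensions and weights below are invented:

```python
import numpy as np

# Minimal single-step LSTM cell. The cell state c carries information
# across time steps, and three gates (forget, input, output) learn what
# to erase, write, and expose -- which is what lets the network remember
# over long sequences where feed-forward networks cannot.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step.
    x: input vector (D,); h_prev, c_prev: previous hidden/cell state (H,);
    W: weights of shape (4*H, D+H); b: bias of shape (4*H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0:H])          # forget gate: what to erase from memory
    i = sigmoid(z[H:2*H])        # input gate: what to write to memory
    o = sigmoid(z[2*H:3*H])      # output gate: what to expose
    g = np.tanh(z[3*H:4*H])      # candidate cell update
    c = f * c_prev + i * g       # new cell state (the "memory")
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):               # run a short random input sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
assert h.shape == (H,)
```

The gating structure is the whole trick: because the cell state is updated additively through the forget and input gates, gradients can flow across many time steps, which is why LSTMs power the speech and keyboard features "on everyone's mobile phone."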
Gary Marcus encourages us to think big for open AI research. We
would love to see European and U.S. governments launch something
equivalent to CERN or the Manhattan Project for AI, with thousands of
scientists sharing their research.4 Such an enormous undertaking would
have huge benefits for all of humanity. In contrast to Alphabet and other
AI giants, even the largest open AI efforts, such as OpenAI, sponsored
partly by Elon Musk, have only about 50 staff members.
When we talked to non-executive directors of nine European and seven
top U.S. companies, not a single one of them was aware of their
companies’ commitment to support AI research. It is critical that a broad-
based international, perhaps transatlantic, public/private collaboration
attract sponsorship and funding from Fortune 2000 companies, as such an
approach would greatly improve fundamental research, which is
profoundly needed to secure next-generation AI applications.
CHAPTER ELEVEN

The Geopolitics of
AI—China vs. USA

The focus in China now is not just on improving existing technologies. On
July 20, 2017, China’s State Council issued its “New Generation AI
Development Plan,” which articulates an ambitious, three-step agenda for
Development Plan,” which articulates an ambitious, three-step agenda for
China to lead the world in AI. The Chinese government recognizes that AI
will be critical to its “comprehensive national power” and competitiveness,
including in the realm of national defense. China is focused on tackling
key problems in AI R&D, pursuing a range of products and applications,
and generally cultivating the AI industry. Ironically, the Chinese plan has
“embraced” and implemented former U.S. president Obama’s strategic
plan for AI, which was released at the end of 2016.1
Obama planned to increase support for AI R&D, since annual federal
funding for all computer science and mathematics research is less than half
of what Alphabet alone spends. Rather than increasing those levels,
however, President Trump’s budget calls for cutting AI R&D at the
National Science Foundation by 10 percent.
China, meanwhile, has demonstrated its willingness to spend to
become the number one AI innovation center by 2030, with an AI industry
worth US$150 billion, according to both the blog Lawfare and the Council on Foreign
Relations. The effort will involve extensive government funding, along
with a policy to attract leading AI talent to China. Chinese companies and
research facilities are being encouraged to pursue M&A targets, equity
investment, and venture capital, and to establish research labs abroad.
China also wants to leverage the “One Belt, One Road” strategy to
establish bases for international scientific and technical cooperation and
joint research centers focused on AI.2 Today, China’s AI industry is
hovering at around US$1.5 billion.3 Kai-Fu Lee and Ian Bremmer argue
that China is poised to dominate multiple sectors of AI (including
consumer applications, computer vision, and self-driving cars) while
lagging behind the U.S. on business applications.4
U.S. leadership in AI may also slow down because there is no planning
for the challenges AI might bring to the economy and to employment. The
U.S. treasury secretary, Steven Mnuchin, has said that he believes “there
[is] no need to worry about AI taking over U.S. jobs anytime soon.”
“When pressed for when exactly he thought concern might be warranted,
Mnuchin offered 50 to 100 more years.”5 At the White House Office of
Science and Technology Policy, 70 out of 100 positions remained unfilled
as of September 2017. This office was instrumental in working on AI
policy during the Obama administration.
Meanwhile, Chinese investors are increasingly interested in and
supporting U.S.-based AI companies, and while some of them work on
peaceful products, others are focused on future weapons systems. Over the
past six years, Chinese investors helped finance 51 American AI
companies, contributing to the US$700 million rise in investment,
according to a Pentagon report.6 There are reports on the use of AI in
Chinese military systems, e.g., in something called LRASM (long-range
anti-ship missile), a semiautonomous weapon that uses human soldiers to
choose targets but in which AI is deployed to avoid defenses and make
final targeting decisions. The focus on civil-military integration in AI is
consistent with a national strategy directed by the Military Civil Fusion
Development Commission, which was established in early 2017 under the
leadership of Xi Jinping.
Below is a review of some of the leading Chinese companies
developing AI.

Iflytek
Iflytek is focused on speech recognition and understanding natural
language. The company has won international competitions both in speech
synthesis and in translating Chinese- and English-language texts.
Iflytek is said to have a close relationship with the Chinese government
for the development of surveillance technology. “Our goal is to send the
machine to attend the college entrance examination, and to be admitted by
key national universities in the near future,” said Qingfeng Liu, Iflytek’s
CEO.7

Alibaba
Alibaba is capable of becoming one of the world’s most powerful
ecosystems, serving 2 billion people and enabling 10 million small
businesses with AI-enabled cloud computing and other services. Its
founder Jack Ma has been cultivating the image of a rebel fighting the
system against state-owned enterprises. He is, however, backed by Beijing,
whether in expanding China’s footprint in Africa, exploring the ocean
frontier in Southeast Asia, revitalizing the once-famous Silk Road, or
supporting China’s talks on global trade at Davos.
Alibaba’s AI priorities are currently in the cloud, with a platform
called DT PAI that allows developers and companies using Alibaba’s e-
commerce sites to analyze massive amounts of data in order to predict user
behavior as well as industry trends. The company chose Singapore as the
site of its new data center, as well as its international cloud division
headquarters.8 In 2017, it started offering cloud services in Europe without
any stated profitability targets.
Alibaba has been publishing impressive research in AI. For example,
they published a frequently quoted paper describing a DL system that
learned to execute a number of strategies employed by high-level StarCraft
players without it having been given any specific instructions on how best
to manage combat.9
In March 2017, Alibaba launched ET Medical Brain, a suite of AI
solutions designed to ease the workload of medical personnel. The suite
uses computers to act as virtual assistants for patients and in medical
imaging, drug development, and hospital management. Another healthcare
project between Alibaba Cloud and Wuhan Landing Medical High-Tech
Co. leverages AI and visual computation technologies to detect early stage
cervical cancer using cell cytology.10

Baidu
Qi Lu, formerly one of the leading AI executives at Microsoft, became
the COO of Baidu, where he leads the company’s efforts to become a
global leader in AI. Baidu is working on driverless cars. It has turned an
app that started as a visual dictionary (take a picture of an object and the
app will identify it) into a site that uses facial recognition to find missing
people. Baidu’s speech recognition software is considered top of its class,
and it masters the difficult task of hearing tonal differences in Chinese
dialects. In 2017, Baidu opened a joint company-government lab partly run
by academics who once worked on research into Chinese military robots
(the Tsinghua Mobile Robot). In summer 2017, Baidu had 60 different
types of AI services in its suite called Baidu Brain.11
Baidu has acquired Raven Tech, a Y Combinator startup developing a
voice-recognition assistant. Raven Tech has largely focused on the
technology underpinning smart-home devices, and has also launched a
mobile voice assistant app named Flow. Baidu will likely search for ways
to integrate Raven Tech’s products into its own digital assistant service,
Dumi. Baidu
also acquired the U.S. computer vision company xPerception, which
makes vision perception software and hardware with applications in
robotics and virtual reality (VR).12 In September 2016, the company
launched a US$200 million VC fund dedicated to AI and augmented
reality (AR).13 In 2017, Fast Company named Baidu among the 10 most
innovative companies in AI and ML for accelerating mobile search with
AI.14 It has a platform called DuerOS for natural language,
conversation-based computing. In China, DuerOS has already accumulated
more conversation-based skills than Google Now or Apple’s Siri. The
platform is in over 100 brands of private home appliances like
refrigerators, air conditioners, TVs, speakers, and the like. The architecture
of the platform is very similar to what Amazon is doing with
Echo/Alexa.15
Baidu has also created a platform for self-driving vehicles called
Apollo, pursuing the same strategy Google followed with its Android
smartphone operating system a decade ago. The platform has
already signed over 50 partners, including Intel, Microsoft, and Ford.16 In
September 2017, Baidu announced that it will invest US$1.5 billion in
start-up autonomous driving companies. In 2016, Baidu Cloud announced
the ABC-Stack, a hybrid cloud platform supporting Baidu customers to
integrate and deploy ML. It includes over 60 AI capabilities, which are
available as a set of open APIs and SaaS products.17
Tencent
In 2016, Tencent, developer of the mobile ecosystem WeChat, which
has 889 million active users in China, created an AI research laboratory in
Seattle and started to invest in U.S.-based AI companies. Additionally, the
company announced a significant new AI hire. Yu Dong, a prominent
expert on speech recognition and DL, is now the deputy director of the lab
in Seattle. He was previously a principal researcher at Microsoft, where he
worked on applying DL to voice recognition, an approach that has
produced dramatic advances in accuracy over the past few years. In
Europe, Tencent owns an AI lab in Barcelona.18

Huawei
In 2016, Huawei announced it would invest US$1 million in new AI
research in partnership with the University of California, Berkeley. In
2017, Huawei launched Kirin 970, a SoC (system on a chip) powered by
HiAI, a new computing architecture for AI acceleration. It includes an NPU
(neural network processing unit), making it up to 25 times faster and 50
times more energy efficient than traditional CPUs.19

SenseTime Group Ltd.


The single largest round of funding in AI history—US$410 million—
went to a Chinese AI startup, SenseTime Group Ltd. Founded in 2014, the
Beijing-based company is focused on DL and machine vision, primarily
around automotive driving. It supplies major firms, including Huawei
Technologies Co., China UnionPay, and Dalian Wanda Group.20
Today governmental policy, technology, and investments have come
together to position China for global AI dominance. With facial
recognition and tight control over the offline and online life of its citizens,
and the fusion of military and civil AI research, there is no clarity about
Chinese efforts to develop AI safety and ethics research. The
consequences could be enormous and well beyond Chinese borders.

Table 11.1 AI ESG Considerations: Geopolitical

Social
• AI impacts sourcing of labor and global labor markets
• Without effective policy, the U.S. falls behind in the global AI race
• AI private/public collaboration should be a focus of every nation as well as an international focus

Governance
• The race for AI dominance is a geopolitical and socio-economic issue requiring government governance attention
• After the 2016 U.S. election and what appears to be cyberwarfare, without a coherent government AI policy, the U.S. (and other countries) will be vulnerable to external interference
CHAPTER TWELVE

Full-Stack Companies—The
Battle of the Giants

Alphabet/Google
AI Approach at Google
Google’s 2016 annual report highlights ML and AI on its first page. There
is no reference to AI in the 2015, 2014, or 2013 annual reports. Google
not only sees AI as a crucial component of its strategy; it is developing
full-stack AI capabilities. It uses its data (or rather the data of its over 1
billion monthly active users) to train its proprietary algorithms deployed in
its own cloud, which runs on its own chipset. By operating a very broad
range of services, Google can access data in various formats, be they text,
image, video, maps, voice, or webpages. It has a powerful platform for ML
and invests in development of TPUs, its silicon chip.
Alphabet is seen as the tech industry’s top destination for AI engineers
and scientists. The company is aggressively trying to corral as much AI
talent as possible. Most of its acquisitions have been of firms specializing
in speech and image recognition, such as API.ai, Moodstocks, Dark Blue
Labs, and Vision Factory. These technologies are vital for hot new
services including self-driving cars and virtual assistants.1 Google also
appears to be trying to attract as many female AI researchers as possible,
as it understands how diversity contributes to better AI products.
In May 2017, Google launched a new venture capital program focused
on AI led by Anna Patterson, a longtime Google VP of engineering
focused on AI.2 The program is called Gradient Ventures. In July 2017,
the company announced Launchpad, a studio program to provide resources
to AI startups. Roy Geva Glasberg leads this effort. The Launchpad Studio
supports companies with specialized datasets, simulation tools, and
prototyping assistance. Launchpad operates in 40 countries and involves
Alphabet luminaries Peter Norvig, Dan Ariely, Yossi Matias, and Chris
DiBona.4

Table 12.1 Google Acquisitions Since 20133

Company, Acquisition Year: Specifics

DNN Research, 2013: DL and neural networks to upgrade Google’s image search
DeepMind, 2014: Beat human champions at Go; the industry’s DL leader
JetPac, 2014: Social travel application for iPad aggregating photos from Facebook and automatically locating where they’re from
Emu, 2014: Messaging product for iPhones using ML and natural language processing to understand messages and add relevant information to manage your scheduling and reservations
Vision Factory, 2014: Object and text recognition based on DL
Dark Blue Labs, 2014: Learning deep structured and unstructured representations of data to make intelligent products, including natural language understanding
Granata Decision Systems, 2015: Software platform providing real-time optimization and scenario analysis capabilities for large-scale, data-driven marketing problems and group/organizational decision-making
Timeful, 2015: Stealth company developing an app using AI, big data, behavioral science, and product design to reinvent the way people manage time
Moodstocks, 2016: Visual search
API.AI, 2016: Bot platform
AIMatter, 2017: Maker of the Fabby computer vision app, processing images like humans do while using both a neural network–based AI platform and an SDK to detect and process images quickly on mobile devices
Halli Labs, 2017: DL and ML system development
Kaggle, 2017: Operator of data science and ML competitions, enabling data scientists to conduct ML contests, host datasets, and write and share code
In December 2017, Google announced a new AI research lab in China
under the leadership of Fei-Fei Li, former director of the Stanford AI Lab,
and Jia Li, who was hired from Snap.5

Alphabet’s AI Research Organizations


Google acquired DeepMind in part because of its Atari game–playing
AI. Before it was sold, DeepMind had a prominent board of directors,
including Elon Musk.
DeepMind has always used a combination of DL and reinforcement
learning in its approach. DeepMind’s research focuses on using a variety
of methods embedded in their models. They also focus on attention
mechanisms and memory augmented networks. Their papers can be
accessed at https://deepmind.com/research/publications/.
In 2017, DeepMind successfully taught machines about relational
reasoning, thus delivering a breakthrough in optimizing computer
“thinking.” Until then, computers could distinguish between cats and dogs,
but could not infer that dogs sometimes chase cats. DeepMind systems
were able to learn about physical relationships between static objects and
about the behavior of moving objects over time. As a second step,
researchers showed how a similarly modified ML system could predict the
behavior of simple objects in two dimensions. This capability is more
powerful than simply recognizing the objects in a scene. Relational
reasoning is one of the important features of human intelligence.6
DeepMind enabled new types of architecture at Google that had
powerful capabilities. These include:
• Inception, a convolutional neural network that is more than twice as accurate as, and 12 times
simpler than, prior models;
• Neural Machine Translation, a DL-based translation system that is 60 percent more accurate
than prior approaches;
• WaveNet, a DL voice engine that generates audio approaching human voices;
• RankBrain, a method to rank web pages using DL; and
• Federated Learning, a distributed DL architecture that performs AI training on smartphones
instead of relying exclusively on the cloud.7

Moreover, in 2016, Alphabet stated that DeepMind’s AI had cut cooling
costs in its data centers by up to 40 percent.8
In addition to its business unit DeepMind, Google itself has an
initiative called Google Brain. It combines traditional algorithms like beam
search, graph traversals, and generalized linear models with DL, and is
focused on scalable solutions, making endless tweaks on the Inception
architecture.9
In May 2017, Google disclosed AutoML, a research project within the
Google.ai initiative. It is a neural network that is capable of selecting the
best from a large group of neural networks that are all being trained for a
specific task. This is an important step to enable machines to build their
own AI models.10 Also in 2017, Google launched an initiative called PAIR
(for People + AI Research Initiative) to build a toolkit of techniques and
ideas about how to design AI systems less prone to disappointing or
surprising humans. Two of Google Brain’s computer vision experts,
Fernanda Viégas and Martin Wattenberg, lead the effort. Google plans to
publish the PAIR results. MIT and Harvard professors Hal Abelson and
Brendan Meade will collaborate with PAIR on how AI can influence
education and science. Like many other R&D initiatives, PAIR is linked to
Google’s product divisions. The results of this research will be
incorporated into applications in industrial verticals like health care, and
will facilitate top-line growth at Google Cloud.11

Google’s AI Technologies
Google’s TensorFlow, an open-source platform for ML, provides
anyone with a computer and Internet connection access to one of the most
powerful ML platforms ever created. TensorFlow is even available on the
Raspberry Pi, which is a small single-board computer priced at US$30 and
developed in the UK to promote the teaching of computer science. Thus
the barriers to entry for building AI products are really low for anyone
willing to try. TensorFlow is also used in the curriculum of Stanford
computer science professor Christopher Manning.
The TensorFlow system uses data flow graphs. In this system,
multidimensional arrays of data are passed along from one mathematical
computation to the next. These complex bundles of data are called tensors,
and the mathematical operations are called nodes; the way the data change
from node to node expresses the relationships of the data within the
overall system. The tensors flow through the graph of nodes, hence the
name TensorFlow.
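The data flow idea can be illustrated with a tiny from-scratch sketch in Python. This is a toy model of the concept only, not the actual TensorFlow API: nodes hold operations, and values flow along the edges between them.

```python
# A toy dataflow graph: a from-scratch illustration of the concept behind
# TensorFlow, not the TensorFlow API itself.

class Node:
    """A node holds a mathematical operation and its upstream inputs."""
    def __init__(self, op, *inputs):
        self.op = op          # the computation performed at this node
        self.inputs = inputs  # upstream nodes whose outputs flow in

    def run(self):
        # Evaluate upstream nodes first; their outputs "flow" into this op.
        return self.op(*(node.run() for node in self.inputs))

def constant(value):
    # A leaf node that simply emits a fixed value (a stand-in for a tensor).
    return Node(lambda: value)

# Build the graph for (2 * 3) + 4. As in TensorFlow's graph model, nothing
# is computed while the graph is being constructed; evaluation is deferred
# until run() is called.
graph = Node(lambda x, y: x + y,
             Node(lambda x, y: x * y, constant(2.0), constant(3.0)),
             constant(4.0))

print(graph.run())  # 10.0
```

In the real framework, the same deferred-execution pattern appears as constructing a graph of TensorFlow operations and then evaluating it in a session.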
Google open-sourced its own image-captioning model around
TensorFlow that can both identify objects and classify actions with an
accuracy of over 90 percent. This activity has helped the TensorFlow
framework gain prominence and become very popular with ML
developers. According to calculations by François Chollet, now a Google
engineer and the creator of Keras, another popular DL framework,
TensorFlow was the fastest-growing DL framework as of September 2016,
with Keras in second place. TensorFlow is free, but it connects easily with
Google’s servers to provide greater data storage and computing power.12
In 2016, Google disclosed that it was using an in-house-developed chip
customized for AI called a tensor processing unit, or TPU. The TPU
provides high performance in DL inference while consuming only a
fraction of the power of other chipsets. In May 2017, Google announced
its second-generation TPU with a higher-precision floating point number
format, allowing it to perform DL training as well as inference. Google
says companies that use its cloud services will receive the benefits of
TPU’s power and energy efficiency.

Examples of Google’s AI Applications


Google applies AI across all of its products—search, ads, YouTube,
and Play. The “Assistant” is a software system to be implemented across
multiple Google platforms, e.g., the Pixel phone and the Google Home
device. It aims at controlling the functions on the phone like Siri does,
performing services as effortlessly as Amazon’s Alexa, and implementing
chatter that puts to shame the business bot in Facebook’s Messenger.
Google sees the Assistant as an evolution of many products, including
Search, Maps, Photos, and Google Now.13
Google is keen to get as much data from users as possible to feed its AI
engines. “Even if Google doesn’t make any money directly from
something that it offers, it’s still gathering data,” says Domingos. An
example is the feature-rich app Google Photos.
According to Victor Luckerson, a writer for The Ringer, in 2009, one
of Google’s annual April Fool’s Day jokes was to deploy an AI program
that scanned e-mails and automatically wrote responses. In 2015, Google
added this concept to its e-mail app, Inbox, and rolled it out to
Gmail in May 2017. When Google was first working on voice recognition,
it felt the need to ask users to donate their Google Voice voicemails for
research purposes. Today the company saves all voice search queries by
default and uses them to train its AI systems. Google argues that these
sorts of use cases don’t pose privacy concerns because machines, not
humans, process the information.14 Google Photos provides information to
train machines to recognize and categorize people, places, and things.
In the Cloud-as-a-Service (CaaS) business, Google still trails Amazon
and Microsoft. However, the company is best in class in cognitive
application programming interfaces, or APIs. Unlike CaaS, which serves
up commodity hardware in the most cost-effective manner, the
performance of cognitive API varies significantly from one vendor to
another due to the combination of algorithms, training data, and underlying
hardware such as TPUs. For example, Google’s translation API supports more than
100 languages in text-to-speech translation—four times that of Amazon
Web Services (AWS). In 2016, Google added support for neural machine
translation for popular languages.15 In September 2017, Google launched
the free Cloud Natural Language API to help media newsrooms and other
businesses sort information so that it is easier to find later.16
Alphabet is also pushing into quantum computing, an exciting
development. Google has already started offering science labs and AI
researchers early free access to its quantum machines over the Internet.
The company wants to speed up the development of tools and applications
for the new technology, and ultimately transform it into a faster, more
powerful cloud computing service. According to Bloomberg, Alphabet has
a lab it calls an “embryonic quantum data center.” There is also ProjectQ,
an open-source effort to support developers in writing code for quantum
computers.17
Evolving cloud capabilities and TensorFlow help Google to grow in
certain industry verticals, such as health care, that are slow to embrace
novelty, and where data are often siloed or under regulatory protection. In
2016, Google demonstrated an AI system that scanned images of eyes to
spot signs of diabetes. In early 2017, similar analyses were used to scan
lymph nodes, and were able to identify breast cancer from a set of 500
images with 89 percent accuracy. In 2016, the University of Colorado,
Denver, moved its health research lab data to Google’s cloud to support
studies on genetics, maternal health, and the effect of legalized marijuana
on the number and severity of injuries to young men. University
researchers hope to halve the cost of processing around 6 million patient
records. German cancer specialist Alacris Theranostics GmbH depends on
Google infrastructure to pair patients with drug therapies.18 Ultimately,
Google hopes to offer a cloud-based diagnostic-as-a-service AI product for
multiple health conditions.
In the past two years, Google has made a lot of progress in natural
language processing. In 2017, the company launched Chatbase, a service
for developers released in Google’s Area 120 internal incubator by Ofer
Ronen and Hari Rajogopalan. API.ai, a bot management startup acquired
by Google, apparently complemented the internal technology. Chatbase has already
been in beta testing at various companies such as Viber, eBay, JustFab,
and UNICEF.19

AI Ethics at Alphabet
According to The Guardian, Google wanted to set up an ethics and
safety board as part of the DeepMind acquisition to ensure that its AI
technology is not abused. Since then, executives at DeepMind have
confirmed the existence of this board, though its work, composition, and
agenda remain unknown.
A second board was created in January 2016 to supervise DeepMind
activities in health care. This board meets four times a year and issues an
annual statement outlining its findings. It includes Richard Horton, the
editor of the medical journal The Lancet; Prof. Donald O’Donaghue, a
kidney expert from the NHS; and Eileen Burbidge, the chair of Tech City
UK.20
In late 2017, DeepMind created the Ethics & Society (DMES) research
group to investigate AI bias, align technology with ethical values, address
lethal autonomous weapons (one of its key areas of study), and research
the economic impact of automation. This new group came
along after the launch of the Partnership on AI, a consortium of technology
companies, activist organizations and academia focused on best practices
for AI safety. Amazon, Microsoft and IBM are members of the
Partnership.21

Facebook
Facebook’s Approach to AI
In Mark Zuckerberg’s manifesto about building communities, the
Facebook founder uses the terms “artificial intelligence” or “AI” seven
times, all in the context of how ML and other techniques will support
keeping communities safe and well informed. In this context, Facebook is
focused on optimizing user experience on the Facebook platform with AI
capabilities. The company launched Facebook at Work as a separate
version of the social network in 2016. We believe that, as a part of
enterprise experience, Facebook will offer bot technologies in a manner
similar to how it has done in its consumer business.
Facebook is a full-stack AI company, as it controls productivity and
experience on its own infrastructure, with its own platform, and provides
AI tools to developers to grow the ecosystem and ultimately benefit from
scale. Facebook runs the computing infrastructure necessary to serve 1.5
billion daily active users. The company invested in open source and open
standards within its Open Compute Project to optimize hardware. It has
already generated more than US$2 billion in savings from this investment.
The company announced next-generation GPU-based systems for training
neural networks, called “Big Sur.” Big Sur is Facebook’s Open Rack–
compatible hardware for AI computing at large scale, developed with
Quanta, a Taiwanese manufacturer, and Nvidia, a chipmaker that
specializes in GPUs. Facebook is apparently also collaborating with
Qualcomm to introduce AI software knowledge into its next generation of
chips.
Open-sourcing Facebook’s AI hardware means that DL has graduated
from the Facebook AI Research (FAIR) lab into Facebook’s mainstream
production systems. FAIR has published a number of good open-source
implementations in Torch, an AI framework used in DL and ML.
Facebook’s company-wide internal platform for ML is called
FBLearner Flow. We understand that this platform will not be open-
sourced to the public, as its value comes from Facebook data. It combines
several ML models to process several billion data points from the activity
of users, and forms predictions about a multitude of behaviors, including
individual users’ interests. FBLearner Flow’s algorithms then choose what
content appears in each user’s news feed and what advertisements a user
sees. ML is therefore augmenting the capabilities of Facebook’s engineers;
1,100 engineers are using FBLearner Flow, and not all of them are ML
experts.

Table 12.2 Facebook Acquisitions Since 201222

Company, Acquisition Year: Specifics

Face.com, 2012: Face recognition software, offering a platform for developers and publishers to automatically detect and recognize faces in photos using a free REST API
JibbiGo, 2013: Speech translation app, based on speech recognition and machine translation
Wit.ai, 2015: API for Siri-like voice interfaces enabling developers to add a voice interface to their device or app in a few minutes
Masquerade Technologies, 2016: Mobile platform using facial recognition to allow users to add various filters to pictures or real-time videos and share them on social networks
Zurich Eye, 2016: Computer vision enabling machines/robots to independently navigate in any space, including indoors, urban areas, etc.
Ozlo, 2017: AI-powered assistant to build compelling experiences with Messenger

CB Insights ranks Facebook third in the global race for AI startups, a
rank the company shares with Microsoft and Intel.

Facebook AI Research
Facebook’s AI groups engage in collaboration to quickly release new
features and products.23 They have two major divisions focused on AI.
Joaquin Quiñonero Candela has headed the new AML team since October
2015. He maintains a close relationship with FAIR (Facebook AI
Research), another AI branch based in New York City, Paris, and Menlo
Park that is headed by Yann LeCun of New York University’s Courant
Institute of Mathematical Sciences. Quiñonero Candela joined Facebook
from Microsoft and applied his experience with the Microsoft organization
in setting up ways to move good ideas quickly between product, R&D, and
other corporate functions.24 Such collaboration was successfully
implemented for a prototype that allows the visually impaired to place
their fingers over an image and their phones to read a description of what’s
happening onscreen.
It has recently become standard practice to train a system to identify
objects in a scene or reach a general conclusion, for example, in what
environment a picture was taken. FAIR’s researchers have found ways to
train neural nets to identify virtually every interesting object in an image
and then figure out from their positions and relationships to other objects
what the photo is about—for example, analyzing people’s positions to
understand what they are actually doing in a photo.
Quiñonero Candela breaks AI applications into four areas: vision,
language, speech, and camera effects. All of those, he says, will lead to a
“content understanding engine.”25 The company is building generalized
systems where work on one project can accrue to the benefit of other
teams working on related projects. Knowledge and algorithms are getting
transferred from one area to another, improving how quickly Facebook
ships products.
The FAIR team is supporting Facebook’s attempts to combat the fake
news problem. It has already produced a model called World2Vec. This
technology adds memory capability to neural nets and enables tagging
every piece of content with information, such as its origin and who has
shared it. With those data points, Facebook can analyze the sharing
patterns that characterize fake news. At the time of this writing, Facebook
finds itself at the vortex of the U.S. special prosecutor’s investigation into
alleged collusion between the Trump campaign and/or administration and
the Russian government. It remains to be seen how far and how public
their anti–fake news activities will become.26
In 2017, Facebook launched an open-source project to improve natural
speech recognition. The company hopes to get researchers from around the
world to share what they learn from their individual experiments with
language recognition and conversation technologies along with the data
they use.
In September 2017, Facebook announced an AI lab in Montreal with a
US$5.7 million investment. Joelle Pineau of the McGill School of
Computer Science will head the effort.

Facebook’s AI Applications
Facebook is convinced that building decent consumer products now
requires the predictive capabilities of AI. AI is now embedded into
Instagram and Messenger.
Facebook has built its neural net to work on smartphones in spite of the
fact that it does not control the hardware like Apple, which has
implemented similar technology. In the short term, this enables quicker
responses in understanding text and interpreting languages. In the longer
term, however, having such control could enable real-time analysis of what
a person sees and says.
Facebook uses ML to translate 2 billion news feed items per day and
has ceased its use of Microsoft’s Bing Translate in favor of its own
technology. Facebook also applies computer vision models to satellite
images to create population density maps to decide where it needs to
deliver broadband. According to Fortune, thanks to AI, Facebook’s video-
captioning efforts have increased engagement by 15 percent and boosted
viewing time by 40 percent.27 This, of course, increases Facebook’s
negotiating power with advertisers.
The launch of the Facebook photo-sharing service Moments in June
2015 showed how far Facebook research in AI has influenced its products.
This service relies on image recognition to let users create private photo
albums with select groups of friends. At the launch, the company said that
its technology was capable of recognizing human faces with 98 percent
accuracy. Regulations on privacy and facial recognition technology
prevented the launch of this feature in Europe.
Its acquisition of Oculus gave Facebook additional technology
capabilities in VR and AR. In 2017, Facebook launched a groundbreaking
platform for AR powered by AI. The platform virtually transforms the
camera on a smartphone into an engine to build digital effects that a user
can layer on top of what he or she sees through the camera. Facebook
allows outside companies and other developers to contribute to the
technology. The platform applies to still images, videos, and even live
videos shot with a phone. It makes it possible to “pin” digital objects to
specific locations and situations in the real world, for example by adding a
cup of coffee on a picture of a kitchen table, or a swimming shark in a
bowl of cereal. Inspired by games like Pokémon Go, this technology is
changing the way people interact with the world around them. In addition
to the Camera Effects Platform, Facebook announced Caffe2, an open-source
DL framework able to capture, examine, and process pixels in real time on
a smartphone.
The Facebook AR application depends on deep neural networks that
run on a phone—a capability for which Apple is famous. The neural
network is used to track people’s movements, so that digital effects
connect to the real world. According to Mike Schroepfer, Facebook’s
CTO, the company is exploring ways of adding effects based not only on
what people are doing, but also on what they are saying. Through these
efforts, Facebook is building a pipeline of core technologies that will
eventually enable all of these common AR effects.28 Cooperation with
chipset makers of wireless devices such as Qualcomm is essential to
ensure the success of such a complex venture. AI-enabled AR will not
happen just on a phone. It will also go into AR glasses, as explained by
Facebook Oculus Chief Scientist Michael Abrash.29 In 2016, Facebook
opened its wit.ai platform to allow developers to build powerful AI for use
as bots.

Apple
Apple’s Approach to AI
Apple has never been open about its technology, and started to talk
about AI only in 2016. However, if you use an iPhone, you are using
Apple’s AI. For example, your phone identifies a caller who isn’t in your
contact list, but who e-mailed you recently, or you get a reminder of an
appointment that you did not put into your calendar. These product
enhancements were supported by DL and neural nets, which Apple has
been developing for quite some time. In 2014, the company moved Siri
voice recognition to a neural-net–based system. Now Siri relies on DNN,
CNN, and LSTM, among others. Apple has never spoken publicly about
this.
Apple has always been a full-stack company, controlling experience on
devices throughout its App Store and developing its own component
capabilities to be independent from chipset providers like Qualcomm.
Indeed, Apple is developing its own chip for AI. This silicon will not just
power existing product lines; Apple also needs an AI chip for two of the
areas it is betting its future on—AR and self-driving cars.30
Apple is ranked second in the race to acquire AI startups, according to
the CB Insights database. The company’s activity on the M&A front
suggests it is trying to catch up to its rivals in AI. Apple has had a harder
time hiring AI talent because of its unwillingness, until recently, to allow
engineers to publish research papers and participate publicly in open-
source projects. Apple has stepped up its focus on ML to close the gap. In
the hopes of attracting researchers, it has undertaken a concerted PR
campaign in the tech press to talk up its gains in AI.
Apple bought Turi, a Seattle company, for US$200 million. Turi built
an ML toolkit that has been compared to Google’s TensorFlow. In 2017,
Apple acquired Lattice Data for US$200 million. Lattice has developed an
AI reference system that turns unstructured data from video, audio, and
other sources into structured data. Mike Cafarella, co-creator of Hadoop,
was Lattice’s co-founder.
Many of the recent Apple technology developments are strongly
connected to the 2016 hiring of Carnegie Mellon professor Ruslan
Salakhutdinov, a big supporter of unsupervised learning and DL.
Among other things, he appears to be behind Apple’s silicon chip efforts.

Table 12.3 Apple AI Acquisitions Since 201431

Company, Acquisition Year: Specifics

Novauris Technologies, 2014: Voice interface company
Perceptio, 2015: DL company to classify photos on smartphones and enable advanced calculations on devices without storing user data in the cloud
VocalIQ, 2016: Spin-off from the Spoken Dialogue Systems Group at the University of Cambridge; builds a platform for voice interfaces, making it easy for everybody to voice-enable their devices and apps
Emotient, 2016: Leading company in facial recognition, translating facial expressions into actionable information, thereby enabling companies to develop emotion-aware technologies and to create new levels of customer engagement
Turi, 2016: Company behind the fastest and most complete platform for building predictive and intelligent applications; started at Carnegie Mellon as an open-source project under the guidance of Carlos Guestrin
TupleJump, 2016: Unified data mining and analysis platform to leverage users' data equity and build on it
Gliimpse, 2016: Personal health-data technology to produce personalized and shareable electronic health records
RealFace, 2017: Cybersecurity and ML company specializing in facial recognition
Lattice Data, 2017: Applies an AI-enabled inference engine to take unstructured, "dark" data and turn them into structured and more usable information
Regaind, 2017: Applies AI and computer vision for analyzing photos
Init.ai, 2017: Acquired to optimize Siri

Apple’s AI Applications
Apple added DL developer tools to iOS in 2016. The new APIs allow
developers to build their own apps in which DL processing takes place on
the phone. That speeds up response time, as requests
do not have to go to the cloud and back, and protects user data by keeping
everything in the phone.32 Apple’s competitors have similar capabilities,
but none of the other companies can do these things while protecting
privacy. Many researchers and practitioners believe that Apple is
constrained by its lack of a search engine and its insistence on protecting
user information. But Apple seems to have figured out how to bypass both
hurdles. In 2017, Apple released a paper on how it implements differential
privacy.33 It involves a mathematically solid definition of privacy, which
allows ML to mask user-identifiable features. While it has trade-offs, we
believe other organizations that take privacy seriously should consider
this approach.
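Apple has not published every detail of its mechanism, but the core idea of local differential privacy can be sketched with the classic randomized-response technique: each device adds noise to its own report before anything leaves the phone, yet accurate population statistics can still be recovered in aggregate. The function names and parameters below are our own illustration, not Apple's code.

```python
import random

def randomized_response(true_bit, p_truth=0.75):
    """Report the true bit with probability p_truth, otherwise a random bit.
    Every individual report is thus plausibly deniable."""
    if random.random() < p_truth:
        return true_bit
    return random.randint(0, 1)

def estimate_true_rate(reports, p_truth=0.75):
    """Invert the noise: E[report] = p_truth * rate + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
true_bits = [1] * 300 + [0] * 700              # 30% of users have the trait
reports = [randomized_response(b) for b in true_bits]
estimate = estimate_true_rate(reports)          # close to 0.30 in aggregate
```

No single report reveals a user's true value, yet the aggregate estimate remains usable for training and analytics—the trade-off the text refers to is that more noise means more privacy but less accuracy.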
Apple uses DL to detect fraud in the Apple store, to extend battery life
between charges on all its devices, and to help the company identify the
most useful feedback from the thousands of reports from its beta testers. It
recognizes faces and locations in the photos. The article “The iBrain is
Here” explains how Apple implements AI.34 Apple executives believe that
it is possible to get all the data required for robust ML without keeping
profiles of users in the cloud or even storing data on their behavior to train
neural nets. This is possible by taking advantage of Apple’s unique control
of both software and hardware, including silicon. The most personal
information stays inside the device. The computing happens right there on
the phone. A neural network–trained system “watches” while a user types,
thereby detecting key items and events, e.g., contacts, flight information,
and appointments. The data itself stays on the phone. Even in backups that
are stored on the company’s cloud, the information is filtered in such a
way that the backup alone can’t “disclose” anything specific on the user.
In 2017, Apple introduced an ML framework API for developers
called Core ML to make AI on mobile devices, including watches, as fast
and powerful as possible. According to Apple, image recognition on the
iPhone will be six times faster than on Google’s Pixel. Core ML will
support all sorts of neural networks (deep, recurrent, and convolutional), as
well as linear models and tree ensembles. Data won’t leave mobile devices
to respect privacy.35 Apple anonymizes data, tagging them with random
identifiers not associated with Apple IDs. Beginning with iOS 10, Apple
has implemented differential privacy, which crowd-sources information in
a way that doesn’t identify users at all. Traditionally, every word and
character that is typed is sent to servers, and then interesting things are
spotted. Apple does not do it this way, as they apply end-to-end encryption
on a massive scale.
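The random-identifier idea described above can be sketched in a few lines (our own illustration, not Apple's actual pipeline): events are tagged with a rotating random token rather than an account ID, so uploads cannot be joined back to a user.

```python
import secrets

class AnonymizingUploader:
    """Tag outgoing events with a random, device-scoped identifier instead of
    the account ID, and rotate it so uploads cannot be linked over time.
    A sketch of the concept only; names and structure are hypothetical."""

    def __init__(self):
        self._anon_id = secrets.token_hex(16)

    def rotate(self):
        # A fresh identifier breaks linkage with earlier uploads.
        self._anon_id = secrets.token_hex(16)

    def package(self, event):
        # The payload carries only the rotating random ID, never the account.
        return {"anon_id": self._anon_id, "event": event}

uploader = AnonymizingUploader()
first = uploader.package("typed:flight")
uploader.rotate()
second = uploader.package("typed:contact")
```

After rotation, the two payloads carry different identifiers, so a server that receives both cannot tell they came from the same device or account.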

Amazon
Amazon’s Approach to AI
According to one July 2017 report, “In its 16 years as a public
company, Amazon has received unique permission from Wall Street to
concentrate on expanding its infrastructure, increasing revenue at the
expense of profit. Stockholders have pushed Amazon shares up to a record
level, even though the company makes only pocket change. Profits were
always promised tomorrow.”36 Amazon has in fact started posting profits
due to the success of Amazon Web Services (AWS). Amazon is pursuing
AI to expand the reach of AWS, embedding more developers and
enterprises into its ecosystem and offering new product categories like
Echo/Alexa.
In almost all “main categories, Amazon’s position as a platform works
as a data feedback loop: Amazon owns the richest dataset on how
consumers consume, how sellers sell, and how developers develop. This
allows Amazon to optimize its user experience in retail, logistics,
developer environment, and now in voice AI, all of which make Amazon’s
offerings even richer.”37 In April 2017, Jeff Bezos, Amazon’s CEO, wrote
extensively about AI and ML in his letter to shareholders. Voice, virtual
assistants, and natural language processing remain his focus areas.
Amazon is also starting an AI-as-a-service offering on AWS and a developer community.
This is how Amazon implements a strategy to become a software platform
leader.
According to Bezos, “ML and AI is a horizontal enabling layer. It will
empower and improve every business, every government organization,
every philanthropy—basically there’s no institution in the world that
cannot be improved with ML. . . . Alexa and Echo . . . [and] Prime Air
delivery drones use a tremendous amount of ML, machine vision, systems,
natural language understanding and a bunch of other techniques. . . . [A]
lot of the value that we are getting from ML is actually happening beneath
the surface.”38
Swami Sivasubramanian, Amazon’s AI vice president, believes that AI
makes “it easier to do things that used to take considerable time like
product fulfillment, logistics, personalization, language understanding, and
computer vision to big forward-looking ideas like self driving cars. At
AWS, the combination of the algorithms, access to cheap ways to store
information, process and query data (to train these algorithms), and access
to specialized computer infrastructure (GPU and custom ASICs)”39 all
accelerate AI. AWS is investing in all layers of the stack, from core DL
frameworks (such as Apache MXNet, Caffe2, and TensorFlow) to ML
platforms and ML applications (e.g., Amazon Lex, Amazon Polly, and
Amazon Rekognition). Amazon works with Nvidia and Intel to optimize
these DL frameworks to hardware on GPUs and CPUs.
Amazon’s support for MXNet has advantages over TensorFlow, Torch,
and other frameworks for ML and DL. It is compact and has cross-
platform portability, both of which Werner Vogels, Amazon's CTO, praised:
“The core library (with all dependencies) fits into a single C++ source file
and can be compiled for both Android and iOS.” Developers can also use a
wide variety of languages with the framework, such as Python, C++, R,
Scala, Julia, Matlab, and JavaScript. The framework is highly scalable.
It is safe to assume that Amazon’s long-term plans for MXNet include
monetizing it by offering it as a cloud service. This does not have to be
through Amazon’s existing ML service; it could come from an officially
supported machine image like the existing DL AMI that Amazon already
sells. The former would be suitable for those who want an easily
consumed product; the latter, for those who want total hands-on control.
Amazon also wants to become a major sponsor of MXNet’s
development. Currently, the framework enjoys support from leading AI
companies and researchers such as Nvidia, Microsoft, Baidu, and Carnegie
Mellon. Vogels has stated that Amazon will "contribute code and improved
documentation as well as invest in the ecosystem around MXNet,” and
“partner with other organizations to further advance MXNet.”
This plan includes another possibility: creating custom hardware
specifically designed to run MXNet at scale to provide a service not found
anywhere else. In theory this could be done without making significant
changes to MXNet, although Amazon could build an in-house version with
enhancements specifically coupled to its own hardware. It is not as if the
publicly available open-source version would magically lose its value if
this happened. But cloud vendors recognize the importance of being able
to provide an at-scale option that is normally out of a regular user’s
practical reach.40 Given the scale of AI efforts at Amazon, it’s interesting
that the company so far has not made that many ML/DL-related
acquisitions.
AWS’s acquisition of the San Diego–based cybersecurity startup
harvest.ai is a very interesting move. Harvest.ai’s product MACIE
Analytics monitors in real time how an enterprise’s intellectual property is
being accessed. Assessments of who is looking at data, who is moving
and/or copying documents, along with the location of these events,
supports identification of suspicious behaviors. This enables prevention of
potential data breaches before they take place. AWS already offers
embedded security features for cloud customers, developed either
internally or with help of partners. In time, Amazon ML can become
Amazon’s next pillar of business, along with Prime and AWS.

Table 12.4 Amazon AI Acquisitions41

Company, Acquisition Year: Specifics

Orbeus, 2015: Computer vision engine that enables machines to perform face detection and recognition, logo and product recognition, optical character recognition, and scene understanding
AngelAI, 2016: Former GoButler, a company that supports businesses in automating conversational commerce using natural language processing; its domain-specific NLP solution is built on an extensive dataset of real conversations
Harvest.ai, 2017: Cybersecurity company using ML to analyze user behavior around a company's key intellectual property to try to identify and stop targeted attacks before valuable customer data can be swiped
Body Labs, 2017: AI-powered 3D human modeling technology, allowing understanding of both 3D human motion and shape from any input

Amazon’s AI Applications
Amazon is using ML to optimize personalized recommendations and
improve inventory management. Since 2016, Amazon has been known for
its voice-enabled device. Amazon is one of the most impressive innovators
in the world as the company has the willingness and skills to go after
multiple bets. The company unsuccessfully tried to get into the smartphone
business with its Fire model. In October 2014, Amazon wrote off this
business at US$170 million. In November of that year, the company
launched the Amazon Echo, which the company had already started
working on in 2011. Amazon Echo created its own market of voice-based
assistants for home, car, and later B2C use cases. The home was one of
few places in the world where phones were not necessarily the most
practical devices. An ecosystem needed to be assembled: more “smart”
products—light bulbs, thermostats, and power switches—were coming
onto the market.42 Amazon is building a new ecosystem around Echo and
Alexa (Echo’s voice assistant), creating a framework of skills that allows
smart devices to connect to Alexa. The payoff was already obvious at the
2016 Consumer Electronics Show (CES): Alexa support was everywhere.
LG, GE, Ford, and many more companies announced gadgets, home
appliances, and even cars that could connect to Alexa.
Alexa’s business model is smart: it can subsidize the device with other
lines of business. Customers will buy multiple Echos. Alexa makes it easy
to order products, supporting Amazon’s strategic goal of becoming the
logistics provider for everything and everyone.
The number of “skills” that Alexa possesses—tasks that it can perform,
such as setting a thermostat or summoning an Uber—had grown from 135
in January 2016 to 15,000 by the summer of 2017. As a comparison, in the
summer of 2017, Google’s Assistant had 378 skills and Microsoft’s
Cortana had 65.
Flash briefings make up 20 percent of U.S. skills for Alexa, as these
are the easiest to develop and implement. Examples include The Wall
Street Journal, Digiday, NPR, and The Washington Post.43
In September 2016, Amazon announced the annual Alexa Prize, worth
US$2.5 million, for which “university students will try to create bots that
can intelligently chat about topical matters for 20 minutes.”44 There is also
the Alexa Fund, Amazon’s captive venture arm, which has further
supported the developer and hardware ecosystem around Alexa. The
platform offers software development kits (SDKs). Alexa Fund
investments are focused on adding new interfaces, e.g., gesture controls by
Thalmic Labs, as well as new form factors and hardware like robotics,
such as those developed by Embodied. In January 2017, Alexa Accelerator,
in partnership with TechStars—one of the most active IoT accelerators
worldwide—was announced.
Amazon has already worked with the pharmaceutical company Merck
& Co. to make an Alexa application for diabetes patients to check glucose
levels. Amazon invested in the genomic startup GRAIL, which
demonstrates the company’s interests in health care.45 The wide adoption
of Alexa in health care requires compliance with data privacy rules. So far,
however, Alexa has not yet complied with HIPAA (Health Insurance
Portability and Accountability Act).46 Amazon also plans to use Alexa in
call centers.
Amazon wants to serve big and small developers through AWS. These
customers want ML without the upfront costs. Amazon unveiled offerings
that will work like an API and allow any developer to access Lex (the NLP
inside Alexa), Amazon Polly (speech synthesis), and Amazon Rekognition
(image analysis). In addition, Amazon is focused on drones involving
vision for obstacle avoidance. It seems Amazon wants to offer more pre-
built algorithms.47 At the launch of that platform, AWS CEO Andy Jassy
said, “We do a lot of AI in our company. We have thousands of people
dedicated to AI in our business."48 Amazon CTO Werner Vogels has
described security as one of the integrated elements of every service the
company builds. For example, in late 2016, the company launched
Amazon Inspector, a service delivering vulnerability reports on security
and compliance based on a customer’s AWS activities.
Amazon also invests in and partners with companies focused on
enterprise cloud services, e.g., the Middle Eastern e-commerce site
Souq.com; the India-based Housejoy, which will help expand its reach in
the region; Do.com, meeting-productivity software with the potential to
roll into Chime, AWS's video-conferencing suite for business; and
Twilio, focused on AI in enterprise and AWS.
We believe Amazon might move into self-driving technologies as well.
Amazon has announced Prime Air, a program in which unmanned aerial
vehicles will deliver products in less than 30 minutes. Unmanned logistics
and delivery are new business areas with an uncertain timeline. We
understand that Amazon is thinking of adding drones to its pool of delivery
vehicles. Right now there are demos, but no actual products. Amazon
patent activities suggest the “company is trying to create a flying
warehouse that would dispatch package-laden drones to the ground. Called
an ‘Airborne Fulfillment Center’ (AFC), the patent describes it as ‘an
airship that remains at high altitude.’ In another patent, Amazon talks
about a drone mesh network that alerts all other drones about their
surroundings.”49 Other patents contain details on how Amazon’s
fulfillment centers would use robotics to assemble orders by tossing items
through the air.50
As the company is changing the way products and services are
delivered to customers, its efforts will potentially influence the
development of autonomous vehicles as well. According to Nvidia CEO
Jen-Hsun Huang, the “Amazon effect” will turn transportation on its head
with autonomous technology playing a big role, especially for point-to-
point movement of products and people.51

Microsoft
Microsoft’s Approach to AI
Like Google, Facebook, and Amazon, Microsoft is another leading
company in AI. Yoshua Bengio, one of the fathers of DL, agreed to be a
strategic advisor to Microsoft in January 2017. This gave the company a
direct line to one of AI’s top resources for ideas, talent, and strategic
direction. Bengio works with Harry Shum, who is in charge of Microsoft
AI research. The research group is positioned horizontally across product categories
such as Windows, Office, and Azure. This development is promising
because it should accelerate product development and get AI’s benefits to
customers faster.52 Microsoft has a dedicated AI business unit with over
8,000 computer scientists and engineers focused on embedding AI into the
company’s products.
Microsoft’s approach is very engineering oriented. The company has
fantastic talent and is credited with inventing residual networks. Among
its frameworks are decision forests. Models such as Random Forests and
Gradient Boosted Decision Trees imply a “tree of logic rules that slice up
the domain recursively to build a classifier. This approach has been
effective in many Kaggle competitions. Microsoft has an approach that
melds the tree-based models with DL. . . . Their Cognitive toolkit . . . is a
high-quality piece of engineering. It is likely one of the better frameworks
out there with respect to learning, using distributed computers.”53
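The recursive slicing that tree-based models perform can be made concrete with a toy example. The following is a minimal decision-tree sketch in plain Python (our own illustration, not Microsoft's Cognitive Toolkit): it picks the threshold on a one-dimensional feature that minimizes misclassification, splits, and recurses.

```python
def majority(points):
    """Most common label among (x, label) pairs."""
    labels = [y for _, y in points]
    return max(set(labels), key=labels.count)

def build_tree(points, depth=0, max_depth=3):
    """Recursively pick the threshold on x that best separates the labels,
    then split and recurse -- the 'logic rules that slice up the domain
    recursively' described above."""
    if depth == max_depth or len({y for _, y in points}) == 1:
        return ("leaf", majority(points))
    best = None
    for t in sorted({x for x, _ in points}):
        left = [p for p in points if p[0] < t]
        right = [p for p in points if p[0] >= t]
        if not left or not right:
            continue
        # Misclassifications if each side predicts its own majority label.
        err = sum(y != majority(side) for side in (left, right) for _, y in side)
        if best is None or err < best[0]:
            best = (err, t, left, right)
    if best is None:
        return ("leaf", majority(points))
    _, t, left, right = best
    return ("split", t,
            build_tree(left, depth + 1, max_depth),
            build_tree(right, depth + 1, max_depth))

def predict(tree, x):
    if tree[0] == "leaf":
        return tree[1]
    _, t, left, right = tree
    return predict(left, x) if x < t else predict(right, x)

# Toy 1-D dataset: small values labeled "low", large ones "high".
data = [(1, "low"), (2, "low"), (3, "low"), (7, "high"), (8, "high"), (9, "high")]
tree = build_tree(data)
```

Random Forests train many such trees on resampled data and average their votes; gradient boosting trains them sequentially, each correcting the residual errors of the last.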
Microsoft is one of the biggest acquirers of AI startups.

Table 12.5 Microsoft’s AI Acquisitions54

Company, Acquisition Year: Technology

Netbreeze, 2013: Spin-off of ETH Zurich, founded in 1998 by the physicists François Rüf and Ales Prochazka; social media monitoring
Equivio, 2015: Provides text analytics solutions for legal and compliance; offers Zoom, a court-approved ML platform for the legal arena that enables the defensible quantification of compliance decisions
SwiftKey, 2016: Former TouchType, maker of a keyboard primarily used on Android devices; its natural-language Fluency engine learns a user's writing style, correcting and predicting text input with accuracy
Genee, 2016: Scheduling system that instantly coordinates the right meeting times in the best interests of everyone, providing the productivity and efficiency of a personal assistant for all; expertise in natural language processing
Maluuba, 2017: Focuses on using deep learning and reinforcement learning to increase the proficiency of computer-based systems in NLP to answer questions and make decisions

Microsoft’s AI Applications
Microsoft is very advanced in natural language recognition and
processing. Cortana is a less popular, more functional version of Siri, with
less visibility than Amazon’s Alexa. Because Cortana comes with every
installation of Windows 10, it has 145 million monthly active users.
Unlike Alexa, it responds not just to voice but to text as well. As Cortana
is linked to Bing, Microsoft’s search engine, which has 30 percent of the
U.S. market, it learns from data signals from many different devices.
Microsoft as a dominant player in business productivity extends its
relationships with companies like Nissan and Volkswagen into new
product categories, e.g., embedding ML into cars. Microsoft also designs
bots to automate one-off tasks a person used to do herself, like making a
dinner reservation or completing a bank transaction. Microsoft launched
the Microsoft Bot Framework in 2016 to enable bots for a variety of
environments, from Skype to Alexa to Facebook Messenger. A suite of
plug-and-play tools like image recognition and ML allows developers to
leverage Microsoft’s research on AI. According to Satya Nadella,
Microsoft’s CEO, bots are the new apps. Consumer-focused examples
include:
• the language-learning bot built at Microsoft’s Skype Bot Lab in Palo Alto;
• the social bot Zo, available on the web;
• Xiao Ice (“princess”), Zo’s more experienced Chinese counterpart, with 40 million users;
• the teen-focused social messaging app Kik; and
• Rinna, for Japanese speakers.

Bots are being integrated into Bing searches of locations and events. All of
Microsoft’s bots are built on top of the Bing knowledge platform. A big
part of training the bots is teaching them what concepts are interrelated.
Enterprise bots include a Calendar.help bot to set up meetings and
coordinate schedules, and Who.bot, which identifies people within
Microsoft’s workforce with desired skills. These developments may also
be extended into the LinkedIn database.55
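At their simplest, such task bots map an utterance to an intent and a handler. The following is a minimal, hypothetical intent-matching sketch in plain Python; production bots such as those built on the Microsoft Bot Framework use trained language-understanding models rather than regular expressions, and all names below are invented.

```python
import re

# Hypothetical intent table: a regex pattern mapped to a handler function.
def book_table(m):
    return f"Booked a table for {m.group('n')} at {m.group('time')}."

def check_balance(m):
    return "Your balance is available after you sign in."

INTENTS = [
    (re.compile(r"table for (?P<n>\d+).* at (?P<time>[\d:]+)", re.I), book_table),
    (re.compile(r"\bbalance\b", re.I), check_balance),
]

def respond(utterance):
    """Route an utterance to the first matching intent handler."""
    for pattern, handler in INTENTS:
        m = pattern.search(utterance)
        if m:
            return handler(m)
    return "Sorry, I did not understand that."

reply = respond("Can I get a table for 2 tonight at 19:30?")
```

A trained model replaces the brittle regex layer with statistical intent classification and slot filling, but the route-to-handler structure stays the same.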
In the automotive sector, Microsoft is teaming up with Baidu to power
self-driving cars. The company will provide its Azure cloud services to
companies using Baidu’s open-source Apollo self-driving platform outside
China. Baidu aims to make this platform open source and follow the steps
Google has already taken with Android on smartphones. Baidu has already
signed on about 50 partners willing to build and improve the platform,
including Chinese OEMs (e.g., Great Wall Motors) and the U.S.
companies Intel and Ford.56
Microsoft Azure has ML tools for developers to exploit big data, GPUs,
data wrangling, and container-based model deployment.

Table 12.6 AI ESG Considerations: For the AI “Giants”

Environmental
• Take the lead in implementing "ethics and safety" policies and programs
Social
• Develop guardrails for specific, sensitive AI applications (e.g., health care)
• Special social responsibility to protect against harmful socio-political impacts (fake news, propaganda)
• Special privacy responsibilities of voice-enabled AI products (mobile/home)
Governance
• Implement AI ethics advisory boards at leading AI companies to oversee ethical design issues
• Anonymization of personal data as structured and unstructured data grow exponentially
• Development of AI to combat fraud, corruption, and similar crimes

CHAPTER THIRTEEN

Product- and Domain-Focused


ML Companies

Product leaders among ML and DL startups focus on integrated solutions
to control experience and solve customer problems from the interface all
the way down the stack to the functionality, models, and data that power
the interface.
These competitors are hard to replicate easily. They are able to
generate proprietary data and models more quickly, and possess domain
expertise needed to guarantee their ML and DL technologies are adding
value. They have teams with a mixed skill set and a balance of
commercial, scientific, and data-modeling expertise to scale fast. Below is
a summary of the key success factors that domain-focused companies
require in today’s marketplace.

Table 13.1 Success Factors in Domain-Focused Companies (Examples)1

Proprietary Data
Merlon Intelligence gathers training data from compliance analyst interactions with a financial
crimes investigation dashboard. Gathering the data requires a full-stack product, where the interface
is designed and instrumented to gather data that feeds into the models. It is a learning-to-rank
setup: the models learn to rank alerts by risk. Banks have many operational risks in deploying new
financial-crimes compliance software, so it is a challenge to penetrate the market. The harder it is to gather
proprietary data and the more the data are interlinked with a company’s go-to-market and product
development strategy, the more stable the business.
Full-Stack Industrial Solution
Blue River builds agriculture equipment that reduces chemicals and saves costs. They “personalize”
treatment of each individual plant, applying herbicides only to the weeds and not to the crop or soil.
They use computer vision to identify each individual plant, ML to decide how to treat each plant,
and robotics to take precise corresponding action for each plant.
Cybersecurity Integrated in Product Design and Implementation Processes
Pypestream offers messaging and bots across all segments of customer value chain in financial
services, automotive, and retail. Product design and implementation/customization are linked with
cybersecurity best practices, and reviewed on a weekly basis from a security perspective.
Team with the Right Balance of Skills
The Zymergen leadership team has capabilities that are important for industrial biology. CEO
Joshua Hoffman is a business executive, CSO Zach Server is a scientist, and CTO Aaron Kimball is
a data expert. The balance of skills makes the business stable.
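The learning-to-rank approach in the Merlon example can be sketched with a tiny pairwise model: given analyst judgments of which of two alerts is riskier, learn a weight vector so that the riskier alert scores higher. The feature names and data below are invented for illustration; this is a sketch of the general technique, not Merlon's system.

```python
def score(w, x):
    """Linear risk score: dot product of weights and alert features."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train(pairs, n_features, lr=0.1, epochs=50):
    """Perceptron-style pairwise updates: for each (riskier, safer) pair,
    nudge the weights until score(riskier) > score(safer)."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for hi, lo in pairs:
            if score(w, hi) <= score(w, lo):
                w = [wi + lr * (h - l) for wi, h, l in zip(w, hi, lo)]
    return w

# Hypothetical features: [amount_zscore, sanctions_hit, prior_alerts]
pairs = [
    ([2.0, 1.0, 3.0], [0.5, 0.0, 1.0]),   # analyst judged the left alert riskier
    ([1.5, 1.0, 0.0], [1.5, 0.0, 0.0]),
    ([0.2, 0.0, 4.0], [0.2, 0.0, 1.0]),
]
w = train(pairs, n_features=3)
ranked_ok = all(score(w, hi) > score(w, lo) for hi, lo in pairs)
```

Each dashboard interaction yields another labeled pair, which is why instrumenting the interface to capture analyst decisions is the heart of the data strategy described above.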

Product-focused companies can become self-sufficient players if they
are not acquired prior to reaching a certain level of profitability. The
bottleneck on the way to profitability occurs mostly in their need to invest
in R&D, e.g., in GANs, adversarial models, and trials of mesh networks to
harvest data from sensors.
There are several key challenges that domain expert and product
companies that are trying to successfully sell AI and ML to the Fortune
2000 need to overcome. These include:
• understanding Fortune 2000 company–specific data and data sources;
• being able to work with existing data systems;
• understanding differences between geographies;
• understanding strengths and weaknesses in solutions delivered by full-stack competitors like
Amazon or Google, so-called “AI-as-a-service” providers, and traditional enterprise vendors
like IBM or SAP, which are adding ML and DL to their product and service portfolios;
• paying attention to the data science learning curve in different Fortune 2000 companies and the
resulting AI and ML skills gap;
• understanding the culture within the Fortune 2000 companies, including gaps between R&D
and operations and organizational silos; and
• understanding Fortune 2000 companies’ hardware infrastructure.

General-purpose ML and DL APIs are an important area. The best startups
in this category are often acquired by larger brands; for example,
Salesforce bought PredictionIO and GE acquired Wise.io.
Nuance is a global leader in natural language processing (NLP) and
voice recognition, using ML to optimize its technology. Forrester ranked
Nina by Nuance the #1 Chatbot Vendor.2 It is a conversational assistant
that is able to recognize intent and learn from historical transcripts. Nina
differentiates itself from other chatbots as it can be implemented into an
omni-channel framework on multiple channels, from desktop to mobile
app and from SMS to voice (IVR), according to the company.3 Nuance is
further expanding into the automotive sector. Until now, built-in voice
recognition has had to use limited vocabularies to increase the probability
of accurate recognition. Now that cars increasingly have data connectivity,
Nuance delivers a new assistant with contextual awareness and
personalization close to natural language recognition.4

Table 13.2 Examples of Domain Specialists

Domain: Companies

Productivity: Clara, X.ai, Branasof, Zomm.ai, Julie Desk
Customer Care and Lifecycle Management: Action IQ, Preact, Pypestream, Interactions, Zendesk
HR and Talent: Unitive, Entelo, Wade&Wendy, Textio, SpringRole, HiQ
Consumer Marketing: Appio, Lexalytics, AirPR, Invoa
Finance and Operations: AppZen, Sapho, Cognitive Scale, Work Fusion
Industrial and Manufacturing: SkyChain, Predix, Pensiamo, Fusion APS
Security and Risk: Vectra, Signal Sense, Zimperium, Graphistry, Dark Trace, Anodot, Sift Science
CHAPTER FOURTEEN

Traditional and New Enterprise


Companies Focused on ML

Large enterprise software vendors like IBM or SAP are retooling their
businesses around the belief that AI will make their products and systems
smarter, more resilient, and more profitable. Each company takes a slightly
different approach. Success depends largely on data, capabilities to
transform legacy platforms and systems, access to AI talent, and a nuanced
balance between marketing and ability to execute. In most cases,
traditional enterprise vendors spend more to deliver on ML and DL than
do domain- and product-focused players, who develop their applications
from scratch.
In this chapter, we do not cover Oracle in detail, though it seems clear
that it is reviewing its strategy and may announce more AI-related updates
in 2018. In September 2017, Oracle unveiled a fully autonomous version
of its database in the cloud. With this, it is clear that it wants to
compete with AWS.1
These are some of the critical questions that the likes of SAP,
Salesforce, and IBM are asking:
• Should we engage in more AI acquisitions?
• Should we develop an internal platform?
• How do we balance external partnerships and acquisitions with internal capabilities and
roadmaps?
• How far do we go in introducing AI into our business?

All three companies struggle with similar challenges, though they look
very different from the outside, and though IBM is in a somewhat more
vulnerable position as doubts were raised in 2015 and 2016 about the
company’s ability to deliver on previous promises. We examine each of
them in this chapter.
Enterprise startups with generic ML/DL offerings are in a difficult
position, as they compete against traditional vendors and product and
domain specialists. If they execute well, they will still have great potential
to help traditional companies in augmenting and automating business
processes and tasks, introducing ML into customer care or HR activities,
supporting data preparation, and delivering on training as a service.

SAP
SAP has been a gatekeeper for enterprise data since its inception.
Today SAP delivers APIs with ML through its enterprise cloud, e.g.,
Concur for travel bookings and Success Factors for HR offerings.
According to ComputerWorld UK, Concur processes “[US$50] billion of
travel transactions every year. Success Factors is installed on 245,000
systems.”2
SAP first created simple invoice- and CV-matching applications in
which computers read and match documents, freeing people up from these
tasks. The system is called SAP Resume Matching. ML is also used in SAP
Clea and in the SAP Cash Application, which automates the process of
capturing and matching payments.
The SAP digital innovation system is called Leonardo, after Leonardo
da Vinci. It comprises applications and microservices around IoT, ML,
blockchain, and big data. The scope is deep, as these technologies are
widely interdependent, and all rely on data. SAP has a number of
accelerator programs to implement Leonardo across industries and across
business functions. The implementation is customized and supported by
consultants from SAP Digital Business Services. They help SAP
customers to clarify innovation priorities and design prototypes. According
to ComputerWoche.de, prototypes can be achieved within eight weeks of
an initial workshop.3
SAP HANA and Google Cloud have announced a partnership, and
developers can now use the SAP HANA express edition to build custom
enterprise applications on the Google Cloud platform with the flexibility of
on-demand deployment. SAP Cloud will soon be available on Google
Cloud. The collaboration involves building ML features into
conversational apps that guide users through workflows and transactions.
There will also be integrations with productivity applications such as
Gmail, Calendar, and Sheets, giving SAP customers greater choice and the
ability to link e-mail conversations; search for duplicate contacts; and
create new leads, tasks, and visits directly from Gmail.4

Salesforce
Salesforce offers cloud tools for managing sales leads and
communication with customers. The company’s efforts in AI became
visible in 2015, when it announced two major acquisitions: MetaMind,
backed by Khosla Ventures, and the open-source ML server PredictionIO.5
Salesforce also acquired MinHash, a big-data and ML startup; the smart
calendar startup Tempo AI; and a customer relationship platform, RelateIQ
—all of which together form the backbone of SalesforceIQ.6
PredictionIO, originally called TappingStone, was founded in 2012 in
London and offered “ML as a service” before pivoting to an open-source
model with the Apache Software Foundation (ASF), sponsored by Apache
Incubator. Together with ASF, the company developed traction with 8,000
developers and 400 applications. MetaMind’s recursive neural networks
power Salesforce’s sentiment analysis and image classification. Image
classification is offered as a drag-and-drop tool: customers upload an
image, and the system then identifies its subjects. The tool can also perform
sentiment analysis on text such as tweets, successfully labeling the
language as positive, negative, or neutral. Richard Socher, chief scientist at
Salesforce and teacher of an AI course at Stanford, is a well-regarded ML
researcher who came from MetaMind. He is enthusiastic about
Salesforce’s huge amount of data. Combined with advances in ML
algorithms, the company will be able to make progress in sales CRM.
MetaMind technology has now been fully integrated into Salesforce
platforms.
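The positive/negative/neutral labeling described above can be illustrated with a toy lexicon scorer. This is a minimal sketch with invented word lists, not MetaMind’s neural-network approach:

```python
# Toy lexicon-based sentiment labeler. A stand-in for the neural
# classifier described above; the word lists are illustrative only.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "hate", "terrible", "awful", "poor"}

def label_sentiment(text):
    words = text.lower().split()
    # Net count of positive minus negative words decides the label.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(label_sentiment("I love this great product"))  # positive
```

A lexicon of a few words fails on negation and sarcasm, which is exactly the gap that trained neural classifiers close.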
Salesforce’s Einstein, introduced in late 2016, is the core AI
technology powering the Salesforce CRM platform by using data mining
and ML algorithms.7 Salesforce has 150,000 customers, most of which
have customized Einstein for their own needs. Salesforce keeps each
customer’s data separately. When a customer adds a custom data field,
Salesforce doesn’t even know the nature of the information, according to
Wired reporting. Within Einstein there is a technology called Optimus
Prime, which automates the creation of ML models for each Salesforce
customer so that data scientists’ model-training tasks become easier.8
Currently, Einstein aims to proactively spot trends across sales,
services, and marketing systems. It aims to forecast behavior that could
reveal sales prospects and opportunities, or identify a
crisis situation in advance. The system has interesting image recognition
analysis tools that, when properly trained, can count objects and even
recognize features like color and size.9 The program studies the history of
the data and figures out for itself which factors best predict the future. It
keeps adjusting the model based on new information over time, and the
more data, the subtler the answers.10 “If we were to make the 150,000
companies that use Salesforce 1 percent more efficient through ML, you
would literally see that in the GDP of the United States,” Socher says.11
An algorithm for summarizing documents produces surprisingly
coherent and accurate snippets of text from longer pieces. Its performance
is still not as good as that of a human. Still, it paves the way toward
condensing text to eventually become increasingly automated.12
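As a simple illustration of one end of the spectrum, extractive summarization copies the highest-scoring sentences verbatim; the sketch below (our own toy example, not the algorithm described above) scores sentences by the frequency of their words:

```python
import re
from collections import Counter

def summarize(text, num_sentences=2):
    """Extractive summary: keep the sentences whose words are most
    frequent in the document, preserving their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z]+", text.lower()))

    def score(sentence):
        words = re.findall(r"[a-z]+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    top = sorted(sentences, key=score, reverse=True)[:num_sentences]
    return " ".join(s for s in sentences if s in top)
```

An abstractive system, by contrast, generates new wording instead of copying sentences, which is why full summarization demands so much more mastery of language.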
Summarizing text fully would require AGI or ASI, as it implies
common-sense knowledge and a mastery of language. Shortcomings might
be resolved over time through the injection of new technologies from third
parties. In 2017, Salesforce launched a US$50 million AI fund.

IBM
Watson and Reputation Risk
Watson was mentioned over 200 times in earnings calls starting in
2013. Over the last five years, IBM issued more than 200 press releases
with Watson in the headline. Over this period, IBM invested US$15 billion
in cognitive assets, not including US$5 billion spent on data acquisitions
such as Truven Health Analytics and the Weather Channel.
Watson became famous in 2011 when it won the game show Jeopardy!
against the show’s two highest-rated players. The Watson business unit
was created in 2014, and now has 7,000 employees.
In 2016 and 2017, IBM suffered a major reputational setback, as its
marketing machine brought Watson into too many verticals too soon. In
July 2017, investment bank Jefferies suggested that IBM would not be able
to return value to shareholders because it cannot compete properly with
other technology giants investing in AI. The company was “outgunned in
the war for AI talent,” a problem that will only worsen.13
We interviewed 12 executives working with Watson. In every single
case, feedback on Watson’s performance was cautious and reserved. In all
cases, IBM customers had to invest vast human and financial resources to
get their models up and running. Everyone agreed that Watson’s technical
capabilities were not near those of Google’s DeepMind or of niche-focused
startups. “IBM is excellent at using their sales and marketing infrastructure
to convince people who have asymmetrically less knowledge to pay for
something,” says Chamath Palihapitiya, founder and CEO of Social
Capital.14
In order to manage reputational risk, IBM had no other way but to
partner with serious AI research brands. In 2017, IBM teamed up with
MIT for a ten-year fundamental AI research project, creating the US$250
million MIT-IBM Watson AI Lab. The focus is on advancing hardware,
software, and algorithms related to DL and other areas; increasing AI’s
impact on industries such as health care and cybersecurity; and exploring
the economic and ethical implications of AI on society.15
Watson comprises dozens of separate cognitive and ML components
including image recognition, a language classifier, and text-to-speech
translation. The current version does not seem to perform the higher-level
cognitive reasoning that surprised the world during the Jeopardy!
contest. IBM legacy technologies are complemented on Watson by
capabilities to use any language (including Scala, Java, and Python); any
popular ML framework (including Apache SparkML, TensorFlow, and
H2O); and any transactional data type, according to the company. IBM,
however, seems not to be on the cutting edge of DL research and practice.
While IBM’s TrueNorth neuromorphic computing program seems
promising, it too currently lacks practical substance.16
Watson has ingested an extraordinary amount of data. Serving up
information through a search and retrieve function cannot be considered
domain knowledge, however. Only the autonomous application of that
information could be characterized as autonomously created knowledge.
It would be one thing if Watson were able to distinguish information
from autonomously created knowledge and thereby provide added value
and quality in decision-making. However, in many cases (including
finance, automotive, and health) Watson is unable to answer “why”; in
other words, it is unable to explain why the decision is a materially
significant one.
It may be that Watson does not have the treasure of metadata that
Salesforce does, for example. This prevents IBM from automating more
aggressively. Throwing armies of experts at a customer’s problem is not a
scalable or sustainable approach.
According to Shaunak Khire, cofounder of Stealth, the Watson model
is built on the concept of adding the value from data, not adding value
from the interface and technology. The whole point of AI is that it learns,
reasons, and provides outputs autonomously. Watson does not appear to be
close to achieving this goal because it deploys the AI piece only in the
context of NLP and not during the decision-making process. As one writer
noted, “There is a video commercial by Watson with Alpha Modus, where
Watson says it can help investors predict markets close before they close.
What does this even mean? How does one even come to define quality in
such scenario? . . . I suppose one could say if it improves trading on
investment performance by x%—then you at least have some empirical
data point. I find this strange and a marketing gimmick at best without any
underlying datapoints. ‘ROI of 900%,’ but what is the underlying
metric?”17
IBM has begun to deploy ML tools on its Z series mainframes so they
can be run on the premises instead of requiring that data be re-hosted in a
cloud like IBM’s Bluemix. This means that companies are able to build
and deploy these applications on their own systems and in their private
cloud. This removes the cost, latency, and risk of moving data off the
premises.18 IBM Z can encrypt up to 13 gigabytes of data per second per
chip, with roughly 24 chips per mainframe depending on the configuration,
thanks to proprietary on-chip processing hardware. This allows for a high
level of security. Data will be kept “encrypted at all times unless it is being
actively processed, and even then it is only briefly decrypted during such
computations.” The system “cuts down on the number of administrators
who can access raw readable data. This means that hackers have fewer
targets” to gain system access.19
IBM is trying to position Watson as a platform others can use to build
applications. Nearly 80,000 developers have worked with it. IBM is also
looking to establish strategic partnerships to boost its cognitive
capabilities. In March 2017, the company announced a strategic deal with
Salesforce to jointly provide ML and data analytics services, which
support enterprises in making faster and smarter decisions. IBM’s Watson
and Salesforce’s Einstein will be integrated for e-commerce, sales, and
marketing.

Table 14.1 IBM AI Acquisitions20

Company, Acquisition Year | Technology
Explorys, 2015 | Secure cloud-computing platform specialized for health care, allowing customers to accelerate research and product development; enables real-time exploration, performance management, and predictive analytics
AlchemyAPI, 2015 | Cloud-based text mining platform that enables customers to perform large-scale social media monitoring and target advertisements, track influencers and sentiment within the media, and automate content aggregation
Cognea, 2014 | Cognitive computing and conversational AI platform; offers virtual assistants that relate to people through personalities
Merge Healthcare, 2015 | Medical image handling and processing

IBM is trying to combine Watson’s capabilities with the work of
conventional consultants such as KPMG. IBM hopes it might be an
important development in consulting and business analytics.21 The
partnership with KPMG seems to be highly promising, as KPMG offers its
own consulting services on AI and cognitive to corporate clients while
simultaneously praising Watson.
From a reputation risk-management standpoint, IBM, like any other
company working in these areas, needs to consider the reputational risk
associated with the development and promotion of its AI-related products.
Below is both a cautionary tale as well as a constructive take on what
could happen to a company that does or does not manage its reputation
risk effectively.

Watson’s Applications
IBM pitches Watson’s diverse range of capabilities to industries like
health care, retail, finance, tax, and architectural drawing, where it claims
to have acquired deep vertical knowledge. Watson grows when industries
it serves grow—it extends the personality of the company that’s using
Watson, so it operates sort of like a white label. When IBM talks about
Watson, it references augmented intelligence more than AI.
IBM has more than 500 industry partners across industry verticals.
Many IBM customers use a Watson-powered virtual assistant to handle
customer support. Its bots have deep domain knowledge about specific
areas. Watson wants to expand into further fields of business, e.g., into
media to scan for potential fake news, according to David Kenny, general
manager of Watson.22 At the same time, Watson’s prices are dropping as
customers can receive similar service and capabilities from competitors,
sometimes for free. As previously discussed, AI- and ML-related APIs are
currently available from Microsoft, Google, and Amazon. Specialized
vendors like Nuance are offering APIs as well. In this context, IBM had to
drop the price of Watson Conversation over 70 percent, from US$0.0089 to
US$0.0025 per API query per month, in October 2016.23
Watson has interesting data assets in the health care and meteorology
industries. In the case of meteorological data, Watson’s capabilities from
its Weather Channel acquisition are not completely unique. CEO Rometty
is also publicly talking about Watson’s automotive capabilities due to its
collaboration with GM OnStar, which will theoretically put Watson into
millions of cars.24 According to our sources at GM, the collaboration was
difficult and GM invested a lot of unplanned money and engineering
resources to make the solution work.

Table 14.2 Reputation Risk and Opportunity25

Reputational Hit Type | Description | Historical Example | Current Example
Deadly Blow | Organization, product, service, and/or leader disappears | Enron; Lehman Bros; WorldCom | Equifax?; Theranos?
Recoverable Hit | Organization, product, service, and/or leader regroups and recovers | BP; JP Morgan; Volkswagen | Equifax?; Theranos?; Wells Fargo?; Uber?; Facebook?
Enhancement Event | Organization, product, service, and/or leader is prepared in advance and builds reputational equity over time | Johnson & Johnson (Tylenol); Siemens; Fonterra | Microsoft; Facebook?; Uber?

Enterprise Startups
There are several interesting newcomers in the B2B market claiming
that they can help companies introduce ML/DL into their product lines.
Some of these started in mobile cybersecurity, IoT, and silicon, and
acquired DL teams to complement their offerings. Others secured VC
funding to go after traditional enterprises. Time will show how successful
these enterprises may become. Generally we are more confident about
domain-focused players, which possess proprietary data and training
expertise to solve problems with a lot of focus. We believe that enterprise
newcomers might bring value when a company has several ML
implementations under way that require orchestration and integration
with legacy systems.
Texas-based Mobiliya Inc. has roots in wireless, cybersecurity, and
silicon, and is a traditional engineering outsourcing partner to companies
such as Microsoft, Google, HP, Samsung, and Nvidia. In 2017, the
company built a DL team to offer customized solutions to traditional
business customers.
Toronto-based DeepLearni.ng works with enterprises to accelerate their
ML deployment. The company worked with Scotiabank to develop a credit
card system that analyzes customer behavior to improve credit card
collections. It also partnered with insurance software developer
Symbility, which is working to introduce ML into its tech stack.26

Table 14.3 AI/ESG Considerations: Reputation Risk27

Governance
• Reputation risk management should be a key component of any AI company governance and enterprise risk management strategy, at all funding or growth stages
• Board directors and executives must understand AI capabilities to ensure no over- or false-marketing to their stakeholders (especially customers and regulators); this is a critical component of proper enterprise risk and reputation risk management

Texas-based Cognitive Scale started in health care and expanded into
consumer goods, retail, and financial services. It provides so-called
“augmented intelligence” products to emulate human cognitive abilities
in software, and thereby automate enterprise processes and job tasks. The
10-10-10 method helps businesses model their first ML-powered cognitive
system in 10 hours, configure it using their own data in 10 days, and go
live within 10 weeks.
CHAPTER FIFTEEN

Introducing AI into a
Traditional Business

Technological progress tests companies, not just in the area of their IT
infrastructure readiness. According to a 2013 McKinsey Global Institute
report, it is the attitude of business leaders and executives that will dictate
their path of AI development and advancement.1
No one is really sure how fast AI advances will reshape businesses and
benefit customers. The CEO of Salesforce, Marc Benioff, has said: “No
one really knows where the value is. . . . [I]t is helping people do the things
that people are good at, and turning more things over to machines.”2 At the
same time, no one is completely sure at what pace certain tasks will be
completely automated, or how and when new jobs will be created and old
ones made redundant by machines.
There are many standard ML technologies that a company can benefit
from. To get implementation right, however, a company needs to focus
time and effort on building a model that understands what is important to
the business and cleans up and deploys the right data for value-added
results. One of Salesforce’s ML experts, Vitaly Gordon, believes that a
business AI problem is ultimately a people problem: “These companies
probably know more about you than Facebook or Google, but they can’t
compete for the data scientists who know how to mine the mountains of
information.”3
We believe that the right combination of ML and blockchain
technologies could make traditional companies thrive. Firms should
neither fear nor be charmed by new technology. The key is to hire the right
people who can translate technologies into realistic strategy and provide a
transformational roadmap. But that also requires removing managers
without the proper skill set or who are resistant to change, especially the
change required at a time of systemic technological change. It is critical
that there be the right combination of knowledgeable and willing human
resources focused on workforce transformation. The work should be
undertaken in small increments, making adjustments along the way. Some
will use blockchain as a vehicle to transform their organizations,
reorganize data analytics, and comply with privacy regulations.

AI Misconceptions
Many traditional companies seem to have a number of misconceptions
around AI and its implications. Some of these are due to lack of
knowledge about ML technologies, especially in procurement and other
business functions, which then result in poor planning and execution of
pilots. Others have to do with the lack of support at the highest levels of
the company—the c-suite and the board of directors, which lack
individuals who understand data, data governance, and IT-infrastructure
readiness.
Many companies would benefit from greater knowledge and insight on
these topics within their business development teams, who often act as
mediators between technology, products, procurement, legal, sales and
marketing. These teams require top talent with strategic, tech-savvy
minds.
Corporations’ inability to work transversally beyond the typical
organizational silos is a common and huge bottleneck. Layer on top of
these issues the fact that some cognitive and ML businesses contribute to
the confusion in their sometimes pushy marketing, often promising more
than they can deliver.
TechEmergence asked 30 AI researchers, “What do you believe to be
the biggest misconception that executives and businesspeople have in
applying ML to business opportunities?”4 Below are some of the results of
this study, complemented by further research from TechEmergence, and
our own findings from extensive interviews with industry leaders. We note
that while they used the term “AI” they really meant ML.

Underestimating Resources and Skill Set


Thirty percent of the respondents to the TechEmergence study noted
that underestimating the resources and staff needed for ML
implementation was the biggest misconception. This idea is mirrored by
Securonix Chief Scientist Igor Baikalov, who notes, “Everybody wants to
do it . . . because there is a serious shortage of talent in this area and
attempts to close [the talent gap] have been somewhat difficult.”5
To generate a competitive advantage from ML applications, companies
need to upgrade their employees’ skills. They need to redesign employees’
accountabilities and incentives so that they are empowered and motivated
to deploy AI machines when they believe that doing so will enhance the
results. We believe that in the majority of cases in the near future, AI will
support people in accomplishing something.6 Computers will handle the
most standard and routine tasks while humans need to address the rest—
with good judgment, problem-solving skills, and the ability to think in an
interdisciplinary and strategic way.
Multidisciplinary and diverse teams working on ML implementation
are a must-have in the adoption of safe and beneficial technology. This is
necessary to work out modeling parameters, limit bias in datasets, and
implement an architecture that is stable enough to defend and combat
attacks from adversaries. Both AI investors and business executives from
traditional industry often overlook this fact.
For the successful implementation of ML, it is critical to have a
balanced and strategic risk culture—both within the analytical teams and
in the feedback loop that needs to exist between R&D and product
development. If teams are too risk averse, they will never become
successful in implementing ML. Many data practitioners shared with us
that they hire people who are comfortable with the idea of failing 50
percent of the time. At the end of the day, AI requires a trial-and-error
approach.
When teams are too focused exclusively on R&D, their company risks
never enjoying the full potential of ML. An organization is best positioned
when it creates a strong feedback loop between research and product
delivery. The best algorithms will not succeed in delivering results if they
do not improve a product or a service experience for a customer.
Companies’ teams should be entirely focused on adding ML/DL to
optimize their existing processes and to solve real business problems
instead of focusing on acquiring the latest and greatest model.
Traditional businesses should embrace operating models of smaller
companies like Flickr, or a cooperative model in AI research and products
teams, as implemented at Facebook. These businesses have demonstrated
that the lines between systems and software, and between operations and
development, have become blurred, with collaboration needed to ensure the
best and speediest product releases. Such releases should be constantly
improving and need to happen on a daily, not weekly or monthly, basis.

Issues with Data


Another common theme for AI executives and researchers is the
absolute need for proper data, both in volume and quality.7 We discussed
data in a previous chapter, but would like to underline certain “must dos”
for a company that wants to leverage AI to its full potential.
When a team lacks data to train its ML model, it should be ready to
work manually to generate the data it will need. For example, if a team
is working on a bot to answer customers’ questions, but does not have a lot
of information to learn from, it should deploy manual labor to collect
questions and answers. Obsessing over data quality, new data sources, and
data preparation is critical and will eventually pay off.

Links between Problem Formulation, Analytical Maturity of an
Organization, and ML Implementation
In best-in-class companies, data are a commonly used source of
knowledge across business units and at all levels of the organization.
Teams should not have to beg the finance function and controllers to give
them data or to do analyses for them. Traditional companies will have a
hard time introducing ML if they have never applied statistics or deployed
data analysts. If a marketing team cannot build a simple rule-based system
(for example, if this person is X years old and lives in this ZIP code, you
might do XYZ), simply throwing ML algorithms at a problem will not be
helpful.
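The rule-based baseline described in the parenthetical above can be written in a few lines; all rules, ZIP-code prefixes, and actions here are hypothetical:

```python
def marketing_action(age, zip_code):
    """Toy rule-based segmentation of the 'if X years old and in this
    ZIP code, do XYZ' kind. Every rule here is a made-up example."""
    if age < 30 and zip_code.startswith("10"):  # hypothetical urban segment
        return "send student-discount offer"
    if age >= 65:
        return "send print catalog"
    return "send standard newsletter"

print(marketing_action(24, "10003"))  # send student-discount offer
```

If a team cannot build and maintain explicit rules like these, an ML model, which must infer far subtler patterns from data, is unlikely to help.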
As with any technology, what matters are the specific business issues
to be solved, not the technology itself. In the case of IoT, for example,
there have been many so-called platform companies that scaled poorly and
did not provide sufficient value, failing to resolve the most challenging
interdisciplinary problems, such as supporting multiple connectivity
standards while maintaining security. While AI will gradually change this part of the
economy that is not yet completely digitized (about 80 percent of the entire
economy), domain expertise and excellence in understanding domain data
nuances will separate the winners from the losers.
There are several aspects to this approach:
• First, a team needs to verbally describe a problem. It needs to be clear about goals and trade-
offs and about which type of ML or DL technique should be applied.
• Second, the statement “No AI is better than bad AI” is correct. If a model does not make sense
or creates a misleading interpretation, it is in many ways worse than no model at all.

Some industries will find it harder to apply AI than others. Business
solutions are often aimed at finding a smart solution to a tough problem.
These problems vary by industry, but the desired outcome of increased
ROI remains. ML is sometimes incorrectly thought of as a “plug and play”
or “black box” solution, when in fact it is not.8 Traditional industries need
to focus on the dual challenge of problem-solving and garnering
customized solutions with the help of ML. Startups are pitching this
approach and organizations must have patience to experiment with ML-led
problem-solving if they are serious about introducing ML into their
business.

Ease of Use and Great User Interfaces (UI)


Technology and its ease of use must go hand in hand in the digital age.
From swiping to voice commands, quick and easy-to-use interfaces are an
integral part of successful technological innovation.
ExClone founder Riza Berkan notes, “Any conversational AI that some
company will adopt will depend on deployment ease and transparency.
Without those two things it would be very difficult for these tools to
flourish.”9

Corporate Acceptance
The “bandwagon effect,” according to Investopedia, is a
“psychological phenomenon in which people do something primarily
because other people are doing it, regardless of their own beliefs, which
they may ignore or override.” AI, according to our interviewed experts,
has created a bandwagon effect in the digital space.
At the same time, and according to some of the executives we
interviewed, there is a myriad of “hoops to jump through” regarding
corporate acceptance for ML and DL solutions. If the executive leadership
team is not ready to redesign business models and end-to-end processes
across the whole organization, a company may never benefit from the full
potential of AI. Scaling ML and embedding it into daily work across
functions and teams should be considered a natural next step in the
evolution of work and business—as natural as forecasting revenue
and profit targets.
AI is poised to transform the technology buying process, creating a
more complex buying process and sales environment. Acceptance of AI is
seriously jeopardized when executives fail to explain its benefits to
employees. Instead of replacing people, AI will augment their jobs and
create new ones. Repetitive tasks will be eliminated and new tasks will
arise that require good human judgment and domain expertise. For
example, fraud detection applications will reduce the time people spend
looking for anomalies yet increase their ability to decide what to do about
deviations.
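The fraud-detection example above, where software surfaces deviations for humans to judge, can be sketched with a crude z-score screen; the function and threshold below are illustrative assumptions, not a production fraud model:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag transactions more than `threshold` standard deviations from
    the mean. A toy screen, not a real fraud detector, which would use
    far richer features than the raw amount."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# The software flags the deviation; a human decides what to do about it.
print(flag_anomalies([20, 22, 19, 21, 20, 23, 18, 500]))  # [500]
```

The division of labor matches the text: the screen does the tedious looking, and the employee spends the saved time judging the flagged cases.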
Likewise, an ML application will allow financial analysts to spend less
time extracting data on financial performance. It will, however, only be
useful if an employee is skilled enough to consider the implications of that
performance. Augmented with ML applications, customer service
employees don’t need to spend a lot of time with routine problems.10 They
can use the time instead to better understand customer requests and needs
and design a better customer experience.
Companies that view AI purely as a cost-cutting opportunity are likely
to deploy ML in all the wrong places, and in a compromised way. These
companies will automate the status quo instead of imagining a better
world. They will cut jobs instead of upgrading roles.11 We wonder how
many traditional businesses will raise human resources to the strategic
prominence it requires to rethink jobs, processes, training, and education,
and the overall organizational talent roadmap that is so critical and
necessary to realizing the full potential of AI.

Having a Silo Strategy or Thinking in Use Cases


From our experience, most traditional companies think about AI in use
cases. Before they have real clarity about the value of their data assets and
the potential changes to their business models due to AI, they hire
companies to implement bots in customer care, or partner with IBM
Watson to reduce costs in their audit function. Thinking about AI purely as
a use case might prevent these companies from fully benefitting from the
technologies they are implementing. A portfolio approach to introducing
AI is a great way to reduce skepticism, learn from mistakes, and
understand how best to partner with third parties. A strategy-free use case
approach is altogether shortsighted. We believe that a use case–oriented
way of thinking about AI is rooted in the way more traditional businesses
are organized. Only a few are currently able to disrupt their internal silos
and establish a cross-company AI vision, transform and adjust IT
infrastructure to fit the demands of AI, and restructure their workforce as a
strategic talent pool instead of as resources that are fragmented across
different functions, business units, and geographies. These are the
requirements of a successful transition to the emerging AI-enabled
business model.

Pitching AI to a Business or within a Business


Companies should consider a “portfolio” approach when first
introducing ML into their organization. Some projects will generate quick
wins while others will focus on transforming end-to-end workflow.12
For a transformational project, a traditional company will need to
consider how the organization can work with gathering, processing, and
labeling its existing big data, and how it will introduce more diversity into
its teams.
Innovation often occurs at the boundaries between different industries,
business functions, or domains. The beauty and challenge of AI often
happens at the intersections, such as:
• removing human involvement in areas previously believed to be hard to automate;
• exploring spaces that are emerging due to a new capability, which was previously considered
not cost effective or even possible; and
• increasing attractiveness of traditional applications by embedding ML techniques in them.

Combining the right business development skills and domain knowledge
with technical expertise enables the successful and safe implementation of
any technology. Business developers in startups and traditional companies
should keep in mind several parameters while pitching especially startup
products to prospective enterprise customers, their venture capital arms,
procurement functions, or business units. Some of the key questions for
ML teams and corporate business developers should be the following:13
• Cost to deploy. What are the full costs of deploying a new technology for a customer, including transition costs? What minimum payback period, in years, can a prospective customer expect on its capital expenditure?
• Added value beyond cost. What is the total value of ownership for the new ML-powered
solution beyond saving on labor costs? Better customer satisfaction, higher net promoter score
(NPS), enhanced quality, fewer errors, better technical performance, or something else?
• Regulatory/compliance issues. What are the current or possible regulatory constraints that
might complicate the adoption of your offering?
• Conflicting goals within potential customers. If ML technology augments and/or eliminates
labor, people to whom your technology is being sold might lose their jobs. What strategy
should be suggested to ensure that there is a positive response to your technology? How do
you overcome this deeply human defensive posture of not wanting to eliminate jobs or billable
hours?
• Industry readiness. Some industries are more mature and willing to adopt new solutions, while others are more risk averse. Risk aversion occurs primarily in industries that are focused on, and incentivized around, time-consuming activities rather than efficiency through new business models and technologies. Traditional utilities are one example.
• Cybersecurity. Security should be an integral component of any ML product and its
operationalization roadmap.
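The payback question in the first bullet above can be sketched with a simple, undiscounted calculation. All figures and names here are hypothetical illustrations, not data from the book:

```python
def payback_period_years(capex, annual_net_savings):
    """Simple (undiscounted) payback period: the number of years of
    net savings needed to recover the upfront capital expenditure."""
    if annual_net_savings <= 0:
        raise ValueError("the deployment never pays back")
    return capex / annual_net_savings

# Hypothetical figures: US$1.2M to deploy an ML solution,
# US$400K per year in combined labor and error-cost savings.
years = payback_period_years(1_200_000, 400_000)
# -> 3.0 years
```

A real business case would discount future savings and include transition costs, but even this back-of-the-envelope figure gives a pitch a concrete anchor.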

The Need for AI-Friendly Design


As ML and automation become increasingly common, AI-centric
designs for any product or service become important factors to consider.
For example, if a company automates medical records, it has to go through
many fields on the medical forms. Certain fields are standard, while others
are unique or customized. Doctors fill them out manually, in different
colors, and there is always an abundance of acronyms. They do this because they never foresaw that record-keeping could become automated to the benefit of patients and the transparency of diagnostics and treatments. If they had, the companies designing these forms, or the hospitals accepting them, might have shaped them differently.
Likewise, in the future, we will have street signs and public billboards
that will interact with sensors in cars. Food companies need to assume that
their products will be picked and packed not by humans but by robots.
Again, proper industrial design can greatly assist in this endeavor.
According to Karthik Mahadevan, who has written about AI-friendly
design on Medium, many programmers write code assuming that the
environment cannot be changed, “[so] they do much of the heavy lifting to
train their systems to access this world, that was never designed to be
accessed this way. However, presumably since someone once designed
everything around us for a purpose, things can be changed and
redesigned.”14
We believe it is not premature to start thinking about this concept. AI, blockchain, and IoT are changing the world at a rapid pace. In this context, the aspiration of “timeless design” should be rethought and infused with new meaning. When a company develops a new product or service, it would be wise to ask first: “Is it possible that what we are designing today might be accessed by AI in the future?” If the answer is yes, we recommend spending substantial time and effort during the development phase on designing AI-friendly new products and services.

Table 15.1 AI/ESG Considerations: For Traditional Companies

Environmental
• Need for AI-friendly design

Social
• Acquire appropriate internal/external talent/skills
• Develop cross-disciplinary team to study/develop and plan
• Team diversity essential
• Understand employment/labor impacts of strategy and data governance
• New skills training for existing employees

Governance
• Boards’ and executives’ recognition that AI will be part of business sooner or later
• Develop appropriate data-governance framework
• Develop strategic scenarios combining ML and blockchain
• Integrate AI into enterprise risk management; evaluate both risk and opportunity
• Understand links between AI and cybersecurity
PART 4

AI in Different Sectors—From Cars to Military Robots

We are still far from solving AGI, but there is much promise and many
developments in the field of narrow AI, from predicting a company’s
earnings to identifying a tumor in a CT scan. In Part 4, we provide an
overview of some of the key industry- and sector-specific AI technologies
being implemented from the automotive to military sectors.
CHAPTER SIXTEEN

AI and the Automotive Industry

Two fundamental technology shifts are reshaping the traditional car industry—electric cars and autonomous driving. The entire global fleet of approximately 1.1 billion vehicles will be transformed over the next 20 years. This not only changes the transportation industry—it will also influence energy delivery, urban planning, entertainment, travel, real estate, retail, and many other sectors.

General Trends
Several trends account for the realization of self-driving:1
• Technology: Computer vision has finally become good enough to distinguish objects on the road and build 3D maps, supported by processors powerful enough to operate within a car. Vision and radar technology now enable pre-collision systems that allow cars to brake autonomously when a risk of collision is detected.
• Ride-sharing: Self-driving technology is still expensive due to its lack of scale. Ride-sharing companies like Uber or Lyft can speed its progress, since the capital cost is amortized over many rides. On-demand transportation has “emerged as another pivotal application of
sensing, connectivity, and AI with algorithms for matching drivers to passengers by location
and suitability (reputation modeling).”2 Partnerships between ride-sharing and autonomous
driving companies (e.g., Lyft and DriveAI) will speed up adoption.
• Electric vehicles: Virtually every fully autonomous car program today is built on an electric vehicle platform. The reason is simple: its operating cost is dramatically lower and municipalities support it.
• “Free entrepreneur cash”: Internet pioneers and entrepreneurs have created a cash-surplus
environment that has been deployed in new business fields, like autonomous driving.
• Ecosystem partnership growth: Chipset manufacturers, automakers, suppliers, data providers,
fleet managers, and platform developers have been testing simultaneously competitive and
cooperative models in this new ecosystem, a key to successful automotive competition.

Today, approximately 500 companies in the ML and mobility ecosystem are working to build technologies to compete in the autonomous value chain. Total disclosed investments of around US$52 billion have been made since 2010, most of them (97 percent) coming from non-automotive OEMs.3

The Enablers
Data
Traditional auto companies believe there are five components to
building self-driving cars:
1. large-scale manufacturing capabilities;
2. a virtual driver platform that routes a travel plan;
3. autonomous technology that allows the car to drive;
4. a team to safely redesign cars to accommodate self-driving technology; and
5. a way to manage vehicle servicing.

Interestingly enough, these companies don’t talk about the need for data to
understand traffic, routes, and human behavior.4 Yet data are critical to
everything involving driving, from traffic patterns and changing lanes to
stopping for a police officer.
An average car moves only 10 percent of the time. The rest of the time,
it is parked somewhere. Traditional car companies currently don’t know
enough about their customers. The power of Uber, Lyft, and similar
platforms is in their data and their understanding of the movement and
patterns of cars and people that are derived from these data.
According to Forbes, data generated by a vehicle can be used to optimize the product; improve safety, design, and training programs; and deliver software updates. This approach is commonly
known as cognitive predictive maintenance, and companies such as
DataRPM have emerged to create AI-driven platforms that make it
possible. “Cognitive predictive maintenance provides exactly what
manufacturers are looking for: actionable insights,” said Sundeep
Sanghavi, DataRPM co-founder and CEO. “By harnessing the powerhouse
of information generated by IoT, manufacturers can develop deep insights
into results that were never possible earlier with human intervention.”5
AI-enabled features in vehicles tied to biometrics can also help with
security, by enabling a car to drive only when it recognizes a certain voice.
Eye tracking is another possibility for AI to help with monitoring driving
and sense whether a driver might be tired or distracted.
No one likes traffic or unexpected road delays, so having a car that can
instantly provide new routes to avoid this irritation makes AI even more
attractive. Autonomous driving must adjust to traffic and the social norms
of any location. In this context, the question is whether the same self-
driving vehicle can perform at its best in San Francisco, Naples, or
Mumbai. Safe and reliable over-the-air software updates that deliver local adjustments may be one way to optimize driving across multiple locations.
An understanding of human social norms is critical to building self-
driving technology. For example, if an autonomous car navigates at a low
speed in a residential area, it needs to understand when to stop and when it
is safe to continue moving. Vehicles circulating in crowded locations have
to react to unwritten norms of human behavior. This requires
understanding of public resources sharing (e.g., sidewalks); rules on where
to make a turn; and signals people give each other to coordinate
movements, for example in crossing a road.
Stanford researchers have created Jackrabbot, a prototype of just such
a self-navigating machine. It is equipped with sensors to screen its
surroundings. Researchers hope that Jackrabbot will collect and analyze
enough data to follow human etiquette. These insights will be used in
designing next-generation robots.
Today, this machine is expensive to implement at scale. However,
within the next five years “social robots like this could be as cheap as
$500, making it possible for companies to release them to the mass
market.”6 Autonomous vehicles and the connected transportation
infrastructure will of course also give hackers a new avenue for exploiting vulnerabilities. Lastly, there is a new opportunity: new data
services are emerging around connected vehicles. The Berlin-based
company AVA, for example, is focused on location safety data.

Algorithms and Processing Data


AI learns human behavior and tries to understand how to react to
driving conditions like a person would. This includes sensing how those in
other cars behave and how to drive according to different weather
conditions, road conditions, and other factors. The problem requires
multidimensional decision-making, which implies software optimized for
DL and for hardware integration. Companies that focus exclusively on
algorithms will not be able to build an end-to-end system.
Autonomous vehicles use machine vision algorithms, with different
ways to capture and prepare data. Each sensor—radar, visible-light
camera, lidar, infrared camera, stereo, GPS/IMU, audio, CAN—is usually
connected to its own algorithms, whose outputs may feed a system that
makes the final decision.
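The per-sensor architecture described above can be sketched as a simple late-fusion step. The sensor names, weights, and threshold below are illustrative assumptions, not any vendor’s actual design:

```python
# Late fusion: each sensor's own detector emits a confidence score for a
# candidate object; a weighted vote produces the final system decision.
SENSOR_WEIGHTS = {"radar": 0.3, "camera": 0.4, "lidar": 0.3}  # hypothetical

def fuse_detections(scores, weights=SENSOR_WEIGHTS, threshold=0.5):
    """Combine per-sensor confidence scores (each 0..1) into one decision."""
    total = sum(weights[s] * scores[s] for s in scores)
    return total >= threshold, total

# Camera and lidar agree there is an obstacle; radar is unsure.
decision, confidence = fuse_detections({"radar": 0.2, "camera": 0.9, "lidar": 0.8})
# decision -> True (0.3*0.2 + 0.4*0.9 + 0.3*0.8 = 0.66 >= 0.5)
```

Production systems fuse far richer outputs (bounding boxes, tracks, occupancy grids) and learn the fusion itself, but the weighted-vote structure conveys why no single sensor’s algorithm makes the final call.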
The most common ML algorithms used in autonomous vehicles are
based on object tracking. These algorithms are aimed at improving the
accuracy of pinpointing and distinguishing between objects.
Distinguishing between pedestrians, animals, bicycles, and other cars is no
easy task. The algorithm modifies internal parameters or parts of its
structure based on its initial errors and tries again.7
Scene labeling—assigning each camera pixel a label corresponding to the object to which it belongs—relies on DL. Cars use convolutional neural networks (CNNs) to recognize objects on the road. Reinforcement learning algorithms are often applied to the traffic problem. U.S. vehicle safety regulators have confirmed that an ML system can be considered the “driver” under U.S. law.
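As a toy illustration of the per-pixel labeling idea only: a real CNN stacks many learned filters with nonlinearities, whereas the single handcrafted kernel per class here is a deliberate simplification for readability.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation (what CNN layers actually compute)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def label_scene(image, class_kernels):
    """Toy scene labeling: one filter per class; each output pixel is
    assigned the class whose filter responds most strongly there."""
    responses = np.stack([conv2d(image, k) for k in class_kernels])
    return responses.argmax(axis=0)

# Hypothetical 2-class example: a "bright region" filter vs. a flat filter.
image = np.ones((5, 5))
kernels = [np.ones((3, 3)), np.zeros((3, 3))]
labels = label_scene(image, kernels)  # every pixel labeled class 0
```

Note the valid-mode convolution shrinks a 5×5 image to a 3×3 label map; real segmentation networks pad or upsample so the label map matches the input resolution.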

Electrification
The price of lithium batteries has dropped dramatically over the past
decades. Batteries that cost US$150,000 five years ago now cost
US$6,000, with further reductions in sight. Companies that have invested
in electric car development are also well positioned to compete in self-
driving. The U.S., China, Japan, South Korea, and Germany are leading
this effort.
If cars eventually have intelligent charging and vehicle to grid (V2G)
and vehicle to home (V2H) capabilities, cars themselves could be used as
energy resources. The grid could tap into this energy when it is needed.
Electricity would be drawn from stationary vehicles. The combined
model that Elon Musk is developing, including solar/renewables, batteries,
and Tesla, makes a lot of sense in the longer term.

Supplier Ecosystem
PwC predicts that electronics will account for 50 percent of automobile
manufacturing costs by 2030, up from one-third today.8 Traditional parts
manufacturers will face competition from high-tech players, like chipset
manufacturers. As cars get more connected, traditional after-sales services
will need to change their business models. For example, Zubie, a startup,
“offers real-time diagnostics to owners of connected cars, allowing people
to know what’s wrong with their engine before they bring their car in for
inspection.”9
Some large conglomerates are entering the automotive market. For
example, Samsung is planning to become a serious player in three
categories: car electrification, infotainment, and autonomous driving. The
company already supplies electric car batteries to Tesla and other
automakers. On the infotainment side, Samsung acquired Harman, which
has 15 percent of the market. The company is working on the advanced
driver assistance system (ADAS) in partnership with other companies. The
system will have technology to remotely update software made by
Harman. In autonomous driving, Samsung is investing in lidar companies
such as Quanergy and TetraVue.10 In September 2017, Samsung
committed US$300 million to a captive investment fund to develop a self-
driving car.11
Mobileye, an Intel company, has about 80 percent of the market for
camera-based driver assistance systems, which rely on Mobileye
microchips and its image recognition algorithms. Mobileye currently
partners with BMW and Delphi Automotive in developing autonomous
driving technology. Other traditional suppliers with camera-based, semi-autonomous systems include Bosch, Autoliv, and Continental.
Nvidia is also focused on cars. In summer 2017, it revealed a
partnership with Toyota. The plan calls for supplying the auto giant with
Nvidia’s Drive PX computing system and working with Toyota engineers
to create autonomous driving software using Nvidia’s AI platform. That
follows other alliances Nvidia has forged with Volvo, Tesla, Audi,
Mercedes-Benz, Germany’s ZF, and Bosch. Given the vast scale of
Toyota, which annually sells more than 10 million cars worldwide, a
successful partnership could make it Nvidia’s most important yet. Nvidia is also funding TuSimple, a self-driving company, in China.12

The Impact
Safety
The most obvious impact of autonomous vehicles is expected to be
accident avoidance. Nearly 37,000 U.S. citizens die each year in car
accidents, and nearly 1.3 million die globally. For every death in the U.S.,
there are over 100 people treated in emergency rooms. In 2012, the costs
of these treatments were estimated at US$33 billion annually. Autonomous
driving will reduce healthcare costs, and traffic congestion due to
accidents will drop significantly.13
There are already a number of AI safety features linked to automatic
braking, alert systems, and collision avoidance systems. Connectivity
systems installed in vehicles enable auto insurance companies to determine
premiums. In time, when car sharing becomes more widespread, such
pricing models can be leveraged on an individual vehicle basis and
mirrored against a fleet.

Traffic and Energy Consumption


MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) found a potential reduction in traffic of up to 75 percent simply through broad adoption of the Lyft Line or Uber Pool concepts. Lawrence Berkeley National Laboratory looked at the concept of autonomous electric taxis, modeling a reduction in urban greenhouse gases of up to 87–94 percent by 2030, driven by the combination of autonomy and electric platforms for cars.14
The Dutch bank ING predicts that increasingly low prices of electric
batteries combined with government support and the ability to scale
electric vehicle production will lead to electric cars being “the rational
choice” for motorists in Europe. Charging infrastructure is also expected to
become more widespread, partly because of government support. The cost
of owning an electric car in Germany around 2024 will be similar to
owning a gasoline-powered vehicle. Some countries in Europe, such as France, have already announced bans on gasoline- and diesel-powered cars, to take effect at a future date.15

Mobility-as-a-Service
Mobility-as-a-Service (MaaS) companies like Lyft, Uber, Didi, Grab,
and Ola share a market valued at roughly US$109 billion today. ARK
thinks that when accounting for the potential cash flow from autonomous
taxi services, the current market should be valued at between US$600
billion and US$3 trillion, depending on an investor’s time horizon. Of
course, companies such as Didi and Uber may not accrue the benefits of
this opportunity. Alphabet has announced plans to partner with original
equipment manufacturers (OEMs); both nuTonomy and Delphi have
launched pilots in Singapore; and several OEMs, notably Tesla, VW,
BMW, and Toyota, have detailed plans for both autonomous vehicles and
shared autonomous services.16
China will become one of the largest markets for autonomous
mobility-as-a-service, reaching US$2.5 trillion by 2030. Contenders to
benefit from this market are:
• Baidu (partnered with BAIC Motor and Nvidia);
• Chongqing Changan, the first OEM in China to complete a 1,200-mile autonomous road trip;
• Volvo/Geely, which launched a China autonomous test driving pilot in 2017; and
• GM, through its joint venture with SAIC Motor and investments in Lyft, together with
ridesharing service Didi.

The Chinese government is likely to support a local front-runner, making it difficult for foreign firms without Chinese partners. In this context, Tencent purchased 5 percent of Tesla, signaling that other companies may be in the race.17
Software and services will likely take a greater share of the profit than
will hardware. ARK believes that the MaaS market will evolve into
regional monopolies because of the deluge of driving data necessary to
train autonomous systems. First-mover advantage in certain geographies
will be important. Among OEMs, Tesla appears to be the leader in
autonomous data collection around the world, as it is alone in selling
vehicles equipped with the necessary hardware sensors to collect
autonomous driving data.
Autonomous vehicles will be used as part of social benefits programs
as well. For example, the city of Columbus, Ohio, has just won the U.S.
Department of Transportation’s $50 million Smart City Challenge. It will
deploy self-driving cars for transportation between a neighborhood where
unemployment is three times the city average and a nearby jobs center.

Urban Planning
Over the past decade, major cities have begun to invest in making
infrastructure smart, while adding sensing capabilities for vehicle and
pedestrian traffic. In 2013, New York started testing a mesh network of
microwave sensors, cameras, and pass readers to profile traffic. Cities
already use ML to build bus and subway schedules, supervise traffic
conditions for dynamic adjustment of speed limits, and implement smart
pricing in public parking spots. However, there are as yet no standards that
are broadly applicable.18
In his article “A New Approach to Designing Smart Cities,” David
Galbraith notes that today’s planners create scenarios for how cities can
emerge organically rather than drawing blueprints that specify exactly how
this should happen. Connected technologies can change this. By merging
social science with architecture, planning becomes something entirely
new. This is about social science becoming more like economics,
generating insights and applying findings to serve urban needs.19
There are about 42 billion square feet in the U.S. dedicated to parking
and an estimated 2 billion parking spaces. There is also metered parking
on most city streets.20 Such space will be re-purposed, and even home
garages might disappear, if citizens move from owning a car to using fleet,
car-share, and other mobility services.
Alphabet plans to build a city from the ground up. Dan Doctoroff, the
head of Sidewalk Labs, an Alphabet subsidiary, is in charge of the project.
The city should emerge in an area of “sufficient size and scale that it can
be a laboratory for innovation on an integrated basis.” This plan is focused
not just on self-driving technology; new approaches to data policies will be
tested as well.21

Interior Redesign
Since humans will no longer need to focus on driving, their traveling
time can be used for other purposes. This shift requires a complete
rethinking of car interiors. BMW addressed this issue in their TED event at
the International Automotive Show in Frankfurt in 2017. Driving hotel
rooms, entertainment centers, and hospitals are possible. Companies like
Panasonic are one step ahead, working on an “autonomous cabin” concept.
They believe that autonomous vehicles will have applications such as
augmented reality and work and entertainment screens, making the interior
of a car more like a living room. Large networks such as Facebook will
focus on how to monetize such interiors.22
Employment
Autonomous vehicles will undoubtedly bring changes to the job
market. In the U.S. in 2015, about 3.8 million people drove taxis, trucks,
ambulances, and other vehicles as their first source of income, and 11.7
million workers drove as part of their job. In all, these numbers represent
approximately 11.3 percent of total U.S. employment. According to The
Wall Street Journal, self-driving cars could transform jobs held by one in
nine U.S. workers.23 Among additional impacts will be the following:
• Police: 42 percent of all police encounters are traffic related, according to the U.S. Bureau of
Justice Statistics (2011). And US$50 million in revenue is generated from parking meters in
the city of San Francisco alone. These funds will disappear with autonomous vehicles.24
• Truck drivers: Truck drivers will be among the first to lose their jobs to automation. Walmart
may be the first company to deploy this technological change, adopting it en masse, according
to Kristin Sharp, the executive director of the New America Foundation and Bloomberg’s Shift
Commission on the Future of Work, Workers and Technology. The Teamsters labor union,
representing almost 100,000 truckers, has pushed the importance of keeping human drivers for
safety reasons.25 There are 3.5 million truck drivers in the U.S. today.
• Employees in traditional auto companies: As the transportation industry becomes more data
driven and digital, its human resources profile will also shift. The current work of most full-
time employees will be automated, with potentially tremendous negative social impacts.
However, new skills that are scarce today—such as data analysis, software development, and
drone programming—will be in high demand, thus underlining the great importance of skills
retraining and restructuring in the labor force.
• Thomas Frey, executive director of the DaVinci Institute think tank, has said that driverless
cars could eliminate jobs in up to 128 industries in the next two decades, including in
agriculture, construction, and public service.26
• As people buy fewer cars, insurance and financing jobs could see a reduction in employment
too. Repair shops could also be affected on a large scale.

The Timeline
IHS Automotive has estimated that unit shipments of AI-based systems in new vehicles will reach 122 million by 2025, as compared to 7 million
today.27 Self-driving cars are being introduced in two ways today. There is
the evolutionary path, in which cars implement self-driving features slowly
but surely. For example, Tesla has already adopted an autopilot feature.
Companies like Peloton and NXP are “platooning” trucks—tying them
together through a vehicle-to-vehicle communication system. A “pilot”
truck drives while those behind can sit and relax.
But there is a revolutionary path as well, like the one Google has been
working on since 2009. Google expects its test vehicles to become
mainstreamed because they will be able to drive in most places. Google is
aiming at embedding insurance into the price of a vehicle starting in 2018.
Ford is testing self-driving cars as well. We believe that both the
evolutionary and revolutionary paths will eventually converge.
Today, it takes approximately 15 years for an auto fleet to turn over.
Experts agree that around 2030, OEMs will only produce fully
autonomous vehicles. By then, 25 percent of all cars will be self-driving.28
Until about 2045, there will be a mix of manual, semi-autonomous, and
fully self-driving cars.
Regulators, insurance companies, traditional car manufacturers,
parking lot owners, and urban planners will need to be fully prepared long
before 2030. Safety standards, accident liability, traffic sign planning, and
infrastructure issues need to be clarified by then, along with
communication connectivity standards.
Traditional car company executives should beware of the risks related
to this new wave of innovation and digitization. Systems must be secured
from cyberattacks, and AI systems should be adequately tested. Such
leaders need to rethink their business models, as their customers will be
better served if deep analytics are part of their offerings. Linking
transportation and logistics with a customer’s value chain, and seamless
product delivery—using data, analytics, and possibly blockchains—may
become the basis for new revenue models and business synergies. For an
industry currently valued at around US$1 trillion, the opportunities for
growth in the era of cognitive technologies are endless. But there will also
be fierce competition from every side.

Competitors in the Autonomous Driving Market


Traditional Car Manufacturers
Car manufacturers have an incentive to develop autonomous driving
technologies in house. We believe traditional OEMs will buy automotive
startups. The question is whether the post-merger integration between the
traditional and the new will be successful.
Ford acquired a majority stake in the self-driving startup Argo AI, and
GM bought Cruise Automation. These moves demonstrate that the plans
for mass-producing self-driving cars were already in place in September
2017.29 There are several interesting acquisition candidates, e.g., Aurora Innovation, founded by executives from Google, Tesla, and Uber.30 Tesla is constantly looking for new technologies and talent, and for changes in the way the automotive value chain works in manufacturing, sales, and after-sales.
Traditional OEMs get two-thirds of their profits from after-sales
services, such as car repair, credit leasing schemes, and insurance. They
will need to change their car maintenance system to support a network of
rented cars rather than individually owned vehicles.
Navigant Research reports that Ford, GM, Renault-Nissan, and
Daimler make up the top four OEM companies in the self-driving
technology sector. The real issue is whether the traditional auto company’s
corporate culture and governance might prevent them from winning the
autonomous driving race.
Speaking of governance, from 2015 through the writing of this book,
the German automotive industry became famous (or perhaps infamous) not
for its innovation but for its cheating scandals. Scandals and fines are one
thing, but complacency and lack of integrity are quite another, something
that more deeply affects the long-term confidence and trust of the most
important stakeholder—the customer.

Tesla
Tesla has 1.3 billion miles of data from its autopilot-equipped vehicles
operating under diverse road and weather conditions around the world.
With over 200,000 Tesla cars in operation in 2017 and sales and
production ramping up, the mileage will grow quickly. Tesla pushes
software updates over the air, and new features often emerge from the
company’s treasure trove of data.
It was originally planned that Tesla cars would feed their autopilot
algorithms with data generated by radar and ultrasonic technologies.
Practitioners believe that a combination of radar and camera can
outperform lidar, especially in adverse weather conditions such as rain
(with lights mirroring in the water), snow, and fog.
Tesla creates detailed 3D maps of the locations its cars drive. As more vehicles with Autopilot 2.0 and the “Vision” system are purchased, better mapping with increasingly finer resolution becomes possible.
In June 2017, Elon Musk hired Andrej Karpathy as AI director at
Tesla. He is a renowned researcher at OpenAI, an expert on reinforcement
learning and computer vision who studied at Stanford with Fei-Fei Li and
interned with DeepMind. In his blog about reinforcement learning,
Karpathy talks about RL in the context of an autopilot, and “notes that
while reinforcement learning does not generally scale well to situations
where experimentation is costly, new approaches, combined with lots of
real-world data (of the kind Tesla is collecting) may help.”31 Tesla is also
working with AMD to develop its own silicon. This demonstrates that the
company is getting closer to becoming a full-stack AI competitor, with
decreased reliance on other companies. Jim Keller is in charge of the
project. He previously worked at Apple, where he designed the A4 and A5
iPhone chips. Currently, Tesla uses Nvidia GPUs as part of the autopilot
self-driving hardware.32 In after-sales, Elon Musk announced that Tesla in
Japan had started to sell vehicles for a single price, including insurance
and maintenance.

Figure 16.1 ESG Reputation Risk Example: Tesla (Source: RepRisk® – www.reprisk.com.)

Although it is an isolated example, as we do not yet have much data on AI and reputation risk (since AI is such a new phenomenon), Figure 16.1 above shows an example of AI-related reputation risk involving Tesla on the heels of their crisis relating to the death of a motorist using their autopilot feature in 2016.33

Waymo (Alphabet/Google)
Waymo, Alphabet’s self-driving subsidiary, doesn’t manufacture cars.
Instead, it aims to reach consumers and businesses through partnerships
with car OEMs. In 2015, Alphabet hired John Krafcik, a former Hyundai
North America executive, as Waymo’s first CEO. In September 2017,
Krafcik gave several interviews to charm OEMs into partnering on
autonomous ride-hailing services. Time will tell how successful this charm
offensive was. Automotive OEMs are used to acquiring companies, not
partnering with them.
A 2017 report from the California DMV showed that Alphabet’s
Waymo was leading the industry in fully autonomous miles driven on
public roads, and Waymo’s technology has become the safest in the field.
This progress is due to Waymo’s ability to harvest data in almost real time.34
The Waymo cars depend on a lidar system, and in 2017, Waymo and
Uber became engaged in litigation when Waymo accused Uber of a
“calculated theft” of its trade secrets, alleging that former Google
employees brought the proprietary lidar technology to Uber.
According to Business Insider, Waymo is now looking to gather data
on human interaction with self-driving cars. To achieve this, the company
created a program in Phoenix, Arizona. Waymo outfitted 100 Fiat Chrysler
Pacificas with its technology, with plans to make 500 more this year.
Phoenix residents will be able to use the service for free, and each car will
have an engineer aboard in case the car requires human intervention. The
program’s goal is to gather data on human reactions to self-driving cars,
from motion sickness to route optimization.35 In addition, Waymo recently
announced a partnership with ride-sharing company Lyft. The
partnership’s goal is to bring self-driving car technology into the
mainstream through joint projects and product development efforts. Some
months later, the parent company Alphabet started talks about investing
US$1 billion in Lyft. In 2013, Google Ventures invested US$258 million
in Uber.
It appears that part of Waymo’s plan is to offer vehicles with integrated
insurance, following the Tesla business model.

Uber
Uber is a fierce competitor in self-driving technology. Its strategic
edge is in data involving types of vehicles; routes; number of driving
hours; distribution of traffic throughout a day, month, and year; and
passengers’ and drivers’ profiles. Through these data, Uber is able to
understand most common patterns needed to develop autonomous
technology. A late entrant into the field, Uber uses
M&A to bridge the competence gap.
In 2016, Uber acquired Otto, Anthony Levandowski’s self-driving
truck outfit, for US$700 million. Levandowski has played a central role as
a pioneer of self-driving technology in the U.S. In February 2017, Waymo
sued Uber for conspiring with its former executive Levandowski to steal
Waymo IP in an effort to benefit Uber’s autonomous driving program. In
February 2018, the parties abruptly settled the lawsuit with a $245 million
payment by Uber to Waymo.
In spite of the notorious lawsuit, Uber is still hiring actively for its
Advanced Technologies Group, or ATG, which is focused on autonomous
vehicles. The unit comprises 6 percent of Uber’s total job listings,
according to CB Insights.36 Even before the Levandowski scandal, Uber
hired a number of researchers from Carnegie Mellon and launched an AI
lab in Canada headed by University of Toronto AI researcher Raquel
Urtasun.

Table 16.1 AI/ESG Considerations: In the Automotive Sector

Environmental • Society needs convincing that autonomous cars will make “ethical”
decisions under duress
• Proliferation of electric cars = proliferation of lithium batteries, raising
environmental concerns
• Improved traffic patterns will lead to improved driving efficiencies and
lower emissions
Social • Autonomous vehicles must incorporate social human norms; AI
decision-making needs to improve on human alternative
• Autonomous vehicles adapt to cultural and regulatory norms in every
location
• Autonomous vehicles contribute to substantial decline in driver
employment
• New jobs for AI trainers to provide human context for autonomous-
driving algorithms
Governance • Data privacy issues proliferate

Uber’s internal AI technology is called Michelangelo, a mix of open-source
components and components built in house. It uses
Spark, Samza, Cassandra, and TensorFlow, among others. The platform
operates on top of Uber’s data and computer infrastructure, providing a
data lake that stores all of Uber’s transactional and logged data. This is a
beautifully designed ML workflow, with data management components
divided between online and offline mode. A library deployment is also
possible. Uber uses DL and a variety of other ML methods, e.g., decision
trees, linear and logistic models, unsupervised models, and time series.37
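The offline-train/online-score split that the Michelangelo description highlights can be illustrated with a minimal sketch. Everything here is hypothetical: the feature names, data, and training routine are invented for the example and are not Michelangelo’s actual APIs. An offline step fits a small logistic model on logged trip data, and an online step scores new trips against the frozen weights.

```python
import math

def train_offline(rows, epochs=500, lr=0.1):
    """Fit a tiny logistic-regression model by gradient descent on logged (features, label) rows."""
    w = [0.0] * len(rows[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in rows:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def score_online(model, x):
    """Score a single live trip with the frozen offline model."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Invented logged data: (trip_distance_km, rush_hour_flag) -> late-arrival label.
logged = [([2.0, 0.0], 0), ([8.0, 1.0], 1), ([1.5, 0.0], 0), ([9.0, 1.0], 1)]
model = train_offline(logged)
print(round(score_online(model, [8.5, 1.0]), 2))  # high probability of delay
```

The point of the split is operational: the expensive fitting runs offline against the data lake, while serving only evaluates a frozen model, which is cheap and fast.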
In 2016, Uber brought the ride-hailing model to trucking and launched
Uber Freight with a pilot in Texas. It gave the company a way into an
industry that touches 70 percent of American goods. Like the core Uber
consumer services, Uber Freight replaces the middleman, matching drivers
with cargo.38
Poor corporate governance has deeply damaged the company’s
competitiveness and brand, however, and it remains to be seen whether
they will be able to recover under the leadership of Dara Khosrowshahi,
Uber’s new CEO, hired on August 30, 2017, on the heels of a series of
protracted leadership, governance, and ethics issues at the company.
CHAPTER SEVENTEEN

AI in Health Care

Doctors vs. Machines: Avoiding Misunderstandings


There isn’t a day that goes by without a new article about how machines
are outperforming human doctors. Consider:1
• “Computer Program Beats Doctors at Distinguishing Brain Tumors from Radiation Changes”
(Neuroscience News, 2016);2
• “Computers Trounce Pathologists in Predicting Lung Cancer Type, Severity” (Stanford
Medicine News Centre, 2016);3 and
• “Artificial Intelligence Reads Mammograms with 99% Accuracy” (Futurism, 2016).4

Many articles get it wrong, however, in up to three ways: They don’t
understand medicine, they don’t understand AI, or they don’t understand
the difference between human doctors and machines and what each is best
at.
One should not diminish the importance of technology in industries
such as banking, agriculture, or automotive. However, in an industry
where technology may mean the difference between life and death or
between health and sickness, practitioners need to be as critical and exact
as possible.
A prudent technologist will work closely with medical professionals to
understand their work, requirements, and data from real patients.
“Overfitting” (a concept we discussed earlier under “data as an AI
enabler”) is easy and unavoidable when working with publicly available
medical datasets. An AI professional will seek larger-scale tests, multiple
unrelated cohorts, and real-world patients to avoid this problem.
On the one hand, there is the widely reported “Autonomous Robot
Surgeon Bests Humans in World First,”5 where a robot “outperformed”
human surgeons at repairing a pig’s intestines. This involved difficult
“open” surgery requiring the opening up of the pig. So even though robots
outperformed human surgeons on this “open” type of surgery, in reality,
doctors use lower-risk laparoscopic surgery more often on humans because
it is much better for the patient if surgeons do not cut a big, dangerous hole
in them and do not put a patient under anesthesia for additional hours.
On the other hand, there are cases where computers can and should
outperform humans. Google trained a DL system to diagnose diabetic
retinopathy (damage to the blood vessels in the eye) from a picture of the
retina. This is a task that ophthalmologists currently perform using the
exact same technique, looking at the retina through a fundoscope.
Google’s system performed on par with the experts in a large clinical
dataset (130,000 patients).6 While this isn’t necessarily outperforming
human doctors, in time there will be efficiency and cost savings from such
data-based DL procedures.7
Another test involving deep convolutional neural networks (CNN)
focused on skin cancer, the most common human malignancy that is
primarily diagnosed visually, followed by dermoscopic analysis, a biopsy,
and other possible procedures. CNNs show potential to support
diagnostics, especially at hospitals or in medical practices, where there
may not be specialized physicians. In tests across two diagnostic tasks, a CNN achieved
performance on par with all tested experts, demonstrating
an AI capable of classifying skin cancer with a level of competence
comparable to that of dermatologists.8
If AI is to augment doctors’ efforts, engineers need to understand what
exactly they are trying to augment. When AI professionals start working
with medical datasets, they need to understand in detail how these were
assembled. For example, a team at University of Pittsburgh Medical
Center used ML to predict whether pneumonia patients might develop
complications,9 with the objective of hospitalizing those with real inpatient
needs. The team used a combination of neural networks and decision trees
to build a set of rules readable by machines. During the study, the
researchers and doctors noticed that one of the rules instructed health
workers to send home pneumonia patients who already had asthma, which
was problematic because asthma sufferers are extremely vulnerable to
complications. However, the model had discovered a true pattern in the
available data: it was hospital policy to send pneumonia patients who also
had asthma straight to intensive care, and that aggressive treatment worked
so well that their recorded outcomes made them look low-risk.10 Once again,
a lack of understanding of what doctors do can impact outcomes even with the best models.
Collaborative leadership is needed to ensure that AI helps and does not
hinder the practice of medicine. While medical professionals need to learn
about AI early, AI generalists need to exercise caution while approaching
their subjects, especially in the absence of (yet to be created) ethical
standards. Moreover, patients should also learn about AI early on, perhaps
with the help of applications like Cognitoys or voice assistants, and/or
from insurers and general medical practitioners.

Market Potential of Healthcare AI


The overall market for AI in health care is expected
to reach almost US$8 billion by 2022, growing at a compound annual growth rate
(CAGR) of 52.68 percent between 2017 and 2022. The factors driving
market growth are increasing focus on precision medicine, cross-industry
partnerships, the need for reducing costs and time, rising complexity and
volume of data, medical workforce overload, and the ability of AI to make
a difference in all these matters. However, it is important to note that cost
reduction in health care is more difficult than in other industries, as
medical inflation is always above regular inflation.
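As a quick sanity check on the cited forecast, the implied starting market size can be back-computed from the CAGR relationship, end value = base × (1 + g)^years:

```python
# Back-of-envelope check of the cited forecast: a 52.68% CAGR over the
# five years from 2017 to 2022, ending at roughly US$8 billion.
end_value_usd_bn = 8.0   # ~US$8 billion by 2022 (cited above)
cagr = 0.5268            # 52.68% (cited above)
years = 5                # 2017 -> 2022

implied_2017_base = end_value_usd_bn / (1 + cagr) ** years
print(round(implied_2017_base, 2))  # 0.96, i.e., a 2017 base of roughly US$1 billion
```

The figures are internally consistent: a market of about US$1 billion in 2017 compounding at 52.68 percent per year lands near US$8 billion in 2022.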
A report by Frost & Sullivan points out that AI could improve
outcomes by 30 to 40 percent while cutting treatment costs by up to 50
percent. The AI market for healthcare applications could generate $6.7
billion in global revenue by 2021, compared with $811 million in 2015.
Moreover, 50 percent of respondents from industry anticipate broad-scale
AI adoption by 2025, and almost 50 percent expect that initial AI
applications will target chronic conditions.11
There are, of course, critical issues that need to be addressed, such as
tackling machine readability problems from a wide variety of unstructured
data (blogs, photos, reviews, social media, mobile apps data, etc.) and
gaining further actionable insights from such data. It is very important that
partnerships between a variety of AI technology providers and healthcare
providers be forged for wide-scale implementation and subsequent broad
adoption and viability of AI services.
Healthcare AI Implementation: Examples
Pharmaceuticals
In 2016, total U.S. prescription drug expenditures were estimated at
US$450 billion. The Centers for Disease Control and Prevention (CDC)
reports that 48.9 percent of the U.S. population has taken at least one
prescription drug in the preceding 30 days. Bringing a new drug to market
takes about 12 years and costs around US$1 billion in R&D. Industry
leaders are always seeking more efficient ways to solve this challenge. AI
might present potential solutions.12
According to MIT Technology Review, Recursion Pharmaceuticals
“uses software to read out the results of high-throughput screening, which
automates drug testing in cells. While this isn’t a new idea, Recursion uses
algorithms that inspect cells at an unusual level of detail. The software
measures a thousand features of a cell, such as the size and shape of its
nucleus or the distance between different internal [components].”13 With this
technology, the company identified 15 potential treatments for rare diseases,
defined as those afflicting fewer than 200,000 people in the
United States. Recursion tests existing drugs to find valuable new uses
instead of developing something completely new. The company partnered
with Sanofi to test drugs that have not yet been fully tested and marketed.
ML reduces drug discovery time. Khosla-backed Atomwise “uses
supercomputers to find relevant therapies from a database of molecular
structures. The company launched a virtual search for safe, existing
medicines that could be redesigned to treat the Ebola virus. . . . This
analysis, which typically would have taken months or years, was
completed in less than one day.”14 The company, which partners with
Merck, used AI to find two drugs to reduce Ebola infectivity.
Other companies using ML for drug discovery include twoXar,
Numerate, and NuMedii.

Data Mining and Insight Generation


Collecting, storing, standardizing, and tracing relationships between
data are very promising areas for AI. Enlitic couples DL with medical data
to advance diagnostics and improve patient outcomes. The company
believes that DL can handle a broad spectrum of diseases across all image
types, e.g., X-rays and CT scans. Berg Health analyzes datasets to
understand why some people survive disease. The company combines ML
with the patients’ own biological data. Other companies to watch in this
field are Ayasdi, H2O.ai, and Digital Reasoning Systems.15

Patient Consultation
Companies working on AI patient consultation applications are not
aiming to replace doctors with AI. Instead, they are partnering with
practicing physicians. Current regulatory regimes around the world will
prevent AI from making official diagnoses. But the systems are designed
to guide patients and suggest whether the symptoms warrant a trip to the
doctor or hospital, and will help doctors get additional information for
their diagnoses. Dr. Clare Aitchison said in an interview with MIT
Technology Review, “While it’s true that computer recall is always going
to be better than that of even the best doctor, what computers can’t do is
communicate with people. . . . People describe symptoms in very different
ways depending on their personalities.”16 These kinds of applications are
likely to become increasingly accepted in everyday life.
Below are several companies that are developing AI-enabled programs
in this sector:
• NextIT’s product, a digital health coach, is similar to a service like Alexa. The assistant can
remind a patient to take medicine and ask about symptoms, and can convey that information to
the doctor.
• California-based Sense.ly developed Molly, a virtual nurse, to help patients with chronic
conditions between doctor visits. Sense.ly claims this technology gives doctors 20 percent of
their time back.
• New York–based AiCure wants to ensure that patients are taking their medications. Their
application is supported by the National Institutes of Health. It uses a smartphone’s webcam
and AI with facial recognition capabilities to autonomously confirm that patients are sticking
to their prescriptions.
• Sentrian analyzes biosensor data and sends individualized patient alerts to doctors.
• UK-based subscription company Babylon provides ML consultation based on personal
medical history and common medical knowledge: “Consumers report the symptoms to the app,
which checks them against a database of diseases using speech recognition. After taking into
account the patient’s history and circumstances, Babylon offers an appropriate course of
action.”17
• Your.MD claims it has already built the largest medical map linking probabilities between
symptoms and conditions. Its chatbot uses ML algorithms and natural language processing to
understand and engage its users. The application comes pre-installed on all Samsung Galaxy
phones.18

Medical Diagnostics and Genomics


Imaging and diagnostics is an especially promising field for AI.
Following are several examples.
• Intelligent Medical Imaging, Inc., integrates neural network technology into blood analysis.
• ATL Ultrasound, Inc., has designed diagnostic ultrasound systems for imaging and monitoring
cardiac tissue structures and activity based on an adaptive intelligence algorithm.
• Neuromedical Systems, Inc. (based in New Jersey) identifies cells for review during cancer
screening with help of neural networks.
• Sparta Science focuses on helping athletes become more resilient by providing personalized
and prescriptive analytics regarding their physical strengths and weaknesses. The company’s
clients include the Atlanta Falcons, the Cleveland Cavaliers, the Colorado Rockies, the
University of Pennsylvania, the University of Kansas, and Australia’s Rugby Union. These
organizations use Sparta Science because the ROI of preventing injuries is immense.19
• Deep Genomics identifies patterns in datasets of genetic information and medical records,
analyzing mutations and linkages to disease. The startup’s technologies can tell clinicians what
will happen within a cell when DNA is altered by genetic variation, whether natural or
therapeutic.

Healthcare Robotics
In 2000, Intuitive Surgical20 launched a technology called da Vinci to
support minimally invasive heart bypass surgery. This system today is in
its fourth generation, providing 3D visualization and wristed instruments
in an ergonomic platform. As a writer for IEEE Spectrum notes, “It is
considered the standard of care in multiple laparoscopic procedures, and
used in nearly three quarters of a million procedures a year, providing not
only a physical platform, but also a new data platform for studying the
process of surgery.”21 The success of the platform triggered competition in
robotic surgery, as the Alphabet spin-off Verb, in collaboration with
J&J/Ethicon, proves.
According to Engadget, researchers from Harvard University and
Boston Children’s Hospital have developed a “soft robot” to improve the
survival rate of heart attack patients. This device is wrapped externally
around the heart instead of being inserted into heart valves to assist
cardiovascular function. It has been successfully tested on pigs but, of
course, successful animal tests do not guarantee the device will be used or
used successfully on humans. However, this type of “mechanotherapy”
holds plenty of therapeutic promise, and not just for the heart.22

Hospital Operations
Intelligent automation in hospital operations would seem to be a no-brainer
at first sight. But in fact, it has not been successful. HelpMate
developed a robot for hospital deliveries, such as medical records and
food, but without success. Another example is Aethon,23 which introduced
TUG Robots for basic deliveries. Only a few medical centers have
invested in this technology. This caution does not mirror what is
happening in other service industries such as hotels and warehouses, where
robots have already demonstrated that they can increase the efficiency and
effectiveness of operations (e.g., Amazon Robotics, formerly Kiva). There
may be a different explanation for the lack of adoption of these AI and
robotic solutions, having more to do with labor protections for unionized
nurses and staff.

Technology Giants Competing in Healthcare AI


Technology companies have a mixed track record on healthcare AI,
depending on the depth of their domain expertise.

IBM
IBM has two healthcare-related products still in research and
development. Privacy concerns in this field are great, and IBM has not yet
figured out how to address them. In countries with less sophisticated
privacy rules, Watson is being used more successfully. Hospitals in China,
India, South Korea, and Slovakia use Watson for Oncology to obtain a
second opinion on treatment options virtually. In the United States, this
practice has been implemented at the Jupiter Medical Center in Florida.
According to Gizmodo, “Using technology to help doctors treat cancer is a
noble effort [and] AI can have a tremendous impact on health care . . .
[b]ut the technology doesn’t seem to be advanced enough to have a
transformational impact just yet. And Watson technology isn’t especially
unique. But IBM’s buzzy marketing is trying to make us believe it is leaps
and bounds ahead of anything else.”24 Former employees who worked as
design researchers at Watson for Oncology felt uncomfortable with how
commercials portrayed the platform, believing that confusing patients
about the platform’s capabilities might be unethical.
The second IBM healthcare platform is Watson Genomics, which
processes massive amounts of genomic sequencing data from patients’
tumors. According to researchers, it is still unknown how good Watson’s
results are. To increase confidence in Watson, IBM requires access to top
research institutions to recruit thousands of patients, generate real test
results, and publish outcomes in respected industry journals, as medical
research and practice rely on publications. Experts would also need to
study how physicians’ treatments changed after receiving and
implementing Watson’s recommendations, including comparing basic
statistics, e.g., patient survival rates.
In 2016, The New York Times noted that “[a]t the University of Texas
MD Anderson Cancer Center in Houston, Watson technology was one
ingredient in an automated expert adviser for cancer care.”25 The
partnership, however, is falling apart. According to Forbes, MD Anderson
is actively requesting bids from other contractors to replace IBM. It has
already started working with some of IBM’s smaller competitors, such as
Cognitive Scale. Auditors at the University of Texas say the project cost
MD Anderson over US$62 million (US$21.2 million of which was paid to
PwC, which was hired to create a business plan around the product) but
did not meet its goals.
IBM has not yet published any scientific papers showing how Watson
affects doctors and patients. Watson for Oncology was in development for
six years and is still considered to be in its infancy even by IBM insiders.
The company’s sales representatives talk about “new approaches” to
cancer care, but specialists believe the system does not create new
knowledge.26

Google
Google’s DeepMind division, DeepMind Health, works with multiple
healthcare organizations, including the UK National Health Service and
the Royal Free Hospital London. DeepMind built Streams in collaboration
with the hospital to support doctors and nurses in diagnosing acute kidney
injury.

Microsoft
In 2017, Microsoft launched Healthcare NExT to “combine work from
existing industry players and Microsoft’s Research & AI units to help
doctors reduce data entry tasks, triage sick patients more efficiently and
ease outpatient care.”27 Their ultimate aim is to replace manual data entry
by doctors.
Microsoft’s HealthVault Insights project works with fitness bands,
Bluetooth scales, and other connected devices to make sure patients stick
to their care plans when they leave the hospital or their doctor’s office.
In September 2017, Microsoft launched a new healthcare division at
their Cambridge research facility to focus on personal health information
systems, health monitoring plans, diseases such as diabetes, and AI to
target interventions. Ian Buchan, formerly a clinical professor in public
health informatics at the University of Manchester, is heading this unit.28

Table 17.1 AI/ESG Considerations: In the Healthcare Sector

Social • AI-produced information presents possible safety issues to human health
• Key is for health and AI professionals to work closely together
• Healthcare regulatory and data protection regimes vary from jurisdiction
to jurisdiction, presenting huge challenges to healthcare companies
Governance • Data privacy and need for strong data governance in healthcare facilities
and activities
• Heightened responsibility for cyber-risk governance in healthcare-
related companies, non-profits, and government agencies
CHAPTER EIGHTEEN

AI and the Financial Sector

In this chapter, we provide an overview of some of the more salient AI
developments in the financial sector, including insurance.

AI and the Insurance Sector


Insurance represents a US$1.2 trillion market, and is an industry that
has been applying big data analytics for underwriting purposes for a while.
Indeed, actuaries have always applied statistical analyses to learn from the
historical data and determine the probabilities of occurrences. As the
availability of data has become increasingly abundant, insurance carriers
have started benefiting from AI in many areas, such as claims handling,
insurance fraud, underwriting, and customer care. AI-capable IT has also
allowed data mining of various formats, including images, text, and audio.
In 2016 alone, tech-focused insurance startups raised US$2.6 billion (as
per Financial Technology Partners). ML solutions in insurance have
primarily focused on efficiency and customer-facing automation
improvements. Over time, ML will be used in risk profiling, underwriting
new risks, and identifying new sources of revenues, e.g., credit insurance,
business interruption, and different lines of liability coverage.
The world of risk is becoming increasingly complex, and AI will help
to connect the dots and forecast probabilities and possibilities of loss
events and risk accumulation, and enable greater clarity in discerning, for
instance, a single loss event from multiple related ones. Insurers in the emerging fields of
cybersecurity, self-driving vehicles, genetics, and nanotechnologies must
have substantial AI expertise. Investment in AI will come first from global
reinsurance companies like Munich Re and Swiss Re, followed by giant
multi-line businesses like Allianz, Generali, and Axa.
Some of the key areas in which AI will play a critical role in insurance
will include the following.

Claims Management and Fraud Detection


Claims management can be augmented using AI in different stages of
the claims handling process. Two such examples include fast-tracking
claims to reduce overall processing time and identifying patterns in data to
recognize fraud. Through new technology, unseen patterns can be
discovered that enhance the human ability to detect anomalies or
correlations. ML models handle data in different formats, which, in the
realms of property and liability insurance, can help in assessing the
severity of damages to predict repair costs from historical data, sensors,
and images.
A number of companies are introducing AI technologies into insurance
claims management, including:
• Shift Technology, which offers a solution for claims management and fraud detection;
• RightIndem, which eliminates friction on claims;
• Motionscloud, which offers a mobile solution for the claims-handling process, including
evidence collection and storage;
• ControlExperts, which handles claims for the auto insurance industry, with AI reducing the
need for a variety of human experts in the long run; and
• Cognotekt, which optimizes business processes in claims management, including fraud
detection and underwriting.

Fraud detection is one of the most attractive fields of AI insurance
innovation. Shift Technology, Motionscloud, Cognotekt, and IBM’s
Counter Fraud Management provide solutions in this field.
New York City–based Lemonade hopes to optimize transparency over
claims handling and customer service via technology and behavioral
science. The company creates models that simplify claims reporting and
settlement, drastically reducing wait time for customers. Lemonade is
currently selling renters’ and homeowners’ insurance. Lemonade makes its
profit by taking a flat fee of 20 percent of a customer’s insurance premium.
Of the premium, another 40 percent is used to place reinsurance, for example, on
the London market, and the remaining 40 percent pays for claims. Whatever is
left of the claims pool goes to a charity of the customer’s choice at the end of the year. This
charity component is an unusual practice for commercial carriers.
However, this legal set-up helps to minimize fraudulent claims.
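The premium split described above can be sketched as follows. This is a simplified illustration of the stated percentages, not Lemonade’s actual accounting; the function name and the treatment of the leftover claims pool are assumptions for the example.

```python
def split_premium(premium, claims_paid):
    """Illustrative split: 20% flat fee, 40% reinsurance, 40% claims pool;
    whatever the claims pool does not pay out goes to the customer's charity."""
    fee = 0.20 * premium
    reinsurance = 0.40 * premium
    claims_pool = 0.40 * premium
    to_charity = max(claims_pool - claims_paid, 0.0)
    return {
        "fee": fee,
        "reinsurance": reinsurance,
        "claims_paid": min(claims_paid, claims_pool),
        "to_charity": to_charity,
    }

# A US$1,000 premium with US$250 of claims leaves US$150 for charity.
print(split_premium(1000.0, 250.0))
```

Because the insurer’s fee is fixed up front, it gains nothing by disputing claims, and the policyholder gains nothing by inflating them beyond hurting a chosen charity, which is the fraud-dampening logic of the setup.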
The company implemented two chatbots mimicking two of its
employees. Algorithms powering these bots will learn from interactions
with customers, automate standard cases, and continuously improve
service. The system will hand over more complex cases to the two
employees, who will follow up with customers by phone.
Big carriers have also been experimenting with AI. In 2017, Japan-based
Fukoku Mutual Life Insurance started to use Watson to replace 34
human insurance claim handlers. The model scans hospital records and
other documents, extracting key data to determine the scope and size of
payouts. Augmenting human tasks in this way will enable remaining
employees to process the final payout faster.
Fukoku Mutual will spend US$1.7 million to install the AI system, and
US$128,000 per year for maintenance, according to Japan’s The Mainichi.
The company saves roughly US$1.1 million per year on employee salaries
by using the IBM software, meaning it hopes to see a return on the
investment in less than two years.1
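The payback arithmetic behind that estimate is straightforward:

```python
# Payback check for the figures cited above: US$1.7M installation,
# US$128K/year maintenance, roughly US$1.1M/year in salary savings.
install_cost = 1_700_000
annual_maintenance = 128_000
annual_savings = 1_100_000

net_annual_saving = annual_savings - annual_maintenance  # 972,000 per year
payback_years = install_cost / net_annual_saving
print(round(payback_years, 2))  # 1.75, i.e., under two years as stated
```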

Underwriting and Loss Prevention


There has been a long debate about prevention vs. engaging after
damage is done in many insurance fields like health, workers’
compensation, and employer liability. However, there are data privacy
considerations when insurers experiment with data-mining social networks
to achieve better underwriting decisions. It doesn’t help that insurance
regulation is highly fragmented, both nationally and internationally, which will
continue to have an impact on ML innovation in insurance.
Several companies are deploying AI solutions of note:
• Atidot is developing a platform for actuarial and risk management using ML techniques,
working with a variety of dynamic data sources: demographics, telematics, wearables, social
media, weather, and news.
• FitSense offers a data analytics platform collecting users’ health data from different devices.
These data are analyzed to build user profiles. The company launched a white-label health
engagement app as a first interface that enables risk carriers to apply their own risk assessment
criteria, health management, and incentive programs. FitSense might become a risk carrier
itself, as it develops underwriting and direct purchase of insurance products based on the data
collected and analyzed on the platform.
• Dreamquark uses DL to analyze medical records and structured and unstructured data, and to
implement prevention-based underwriting.
• Big Cloud Analytics offers a health analytics platform that collects data from wearable devices
and produces analyses, including health scores, to assess and ultimately reduce risk.

Marketing and Customer Experience


There are a number of AI insurance products on the customer
interfacing end, including marketing. Among them are:
• Adtelligence, which analyzes the cross-platform customer usage data and statistics to learn
detailed customer profiles and enable individual content and personalized products; and
• Brolly, a personal insurance concierge for customer interaction and portfolio management that
collects all policies and provides easy access to information. Policies are analyzed to determine
whether the cover is appropriate in terms of over- or under-insurance, and even whether the
cover should be purchased at all.

Chatbots
In previous chapters, we discussed how natural language processing
and sentiment build the base for virtual assistants. There are several
companies developing this form of AI product in the insurance field,
including:
• Cognicor, which offers an intelligent customer care service assistant that can be addressed in a
human-like conversational interface. It answers questions, resolves complaints, and suggests
tailored products;
• Conversica, a virtual sales assistant that leverages AI to automate lead conversations; and
• Babylon, which provides virtual consultations to offer affordable health care.

Beyond chatbots, telematics (long-distance information sharing) is
expected to have a significant impact on the insurance industry. Octo
Telematics, a company in this sector, provides telematics for auto
insurance. Carriers are already offering black box tariffs, giving
discounts based on the frequency and times of driving as well as other
customized factors.

Financial Trading
Traditional hedge funds such as Bridgewater Associates, Point72, and
Renaissance Technologies have been optimizing their IT for a long time.
Some companies are now using ML not just to optimize operations, but to
generate investment ideas. Still, few financial companies are betting on AI
at the core of their trading. There is little performance data available. For
this reason, many funds are expressing caution with respect to new AI and
blockchain trading companies.2
Sentient Technologies is backed by Hong Kong’s richest man, Li Ka-
shing, and India’s biggest conglomerate, Tata Group, investors who
together have given the company US$143 million. The company is
preparing to raise capital again to diversify its product portfolio.
In trading, Sentient’s network of computers algorithmically creates what are
essentially trillions of virtual traders that it calls “genes.” These genes are
tested by giving them hypothetical sums of money to trade in simulated
situations created from historical data. The genes that are unsuccessful die
off, while those that make money are spliced together with others to create
the next generation. Thanks to increases in computing power, Sentient can
squeeze 1,800 simulated trading days into a few minutes. Evolving an
acceptable trading gene takes a few days; the gene is then used in live trading. Employees
set goals such as returns to achieve, risk level and time horizon, and then
let the machines go to work. The AI system evolves autonomously as it
gains more experience. Sentient typically owns a wide-ranging batch of
U.S. stocks, trading hundreds of times per day and holding positions for
days or weeks.3
The company is considering spinning off the fund business into a separate
entity.
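The evolutionary loop described above (spawn candidate genes, score them on historical data, let the losers die off, splice the winners together) can be sketched in miniature. Everything below is an invented toy: the price series is simulated, and the moving-average “gene” and fitness rule are illustrative assumptions, not Sentient’s actual system.

```python
import random

random.seed(0)

# Toy historical price series (simulated; not real market data)
prices = [100.0]
for _ in range(300):
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.01)))

def fitness(gene):
    """Hypothetical P&L of a gene: buy when the short moving average
    crosses above the long one, exit when it falls below."""
    short, long_ = gene
    cash, position = 1000.0, 0.0
    for t in range(long_, len(prices)):
        ma_short = sum(prices[t - short:t]) / short
        ma_long = sum(prices[t - long_:t]) / long_
        if ma_short > ma_long and cash > 0:
            position, cash = cash / prices[t], 0.0        # go long
        elif ma_short < ma_long and position > 0:
            cash, position = position * prices[t], 0.0    # exit
    return cash + position * prices[-1]

def evolve(pop_size=20, generations=10):
    # Each "gene" is a pair of moving-average windows (short, long)
    pop = [(random.randint(2, 20), random.randint(21, 60)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # unsuccessful genes die off
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (a[0], b[1])                 # splice two winners together
            if random.random() < 0.2:            # occasional mutation
                child = (random.randint(2, 20), child[1])
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

The real system differs in scale rather than shape: trillions of candidates, far richer gene encodings, and goals (return, risk level, time horizon) set by employees before the machines take over.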
Numerai, founded by Richard Craib in San Francisco, builds financial
trading models by assembling the best algorithms from a community of
data scientists. The company encrypts its trading data before sharing it
with the data scientists to prevent them from replicating the fund’s
trading strategies themselves. At the same time, Numerai organizes this
information in a way that allows the data scientists to build models leading
to even better trades. The crowd-sourced approach seems to be working
despite the obvious incentive problem. Participants in a trading market are
usually adversaries. Numerai proposes a novel solution in which traders
back their confidence in their algorithms with stakes in a new cryptocurrency.
Numerai has distributed Numeraire—1,000,000 tokens in all—to 12,000
participating scientists. High-performing algorithms lead to payouts, while
poor performers lose their stake. The system encourages data scientists to
build models that work on live trades, not just test data.
So far, the model aligns incentives for cooperation on the platform.
The system has attracted top supporters like Howard Morgan, a founder of
Renaissance Technologies, the highly successful hedge fund that
pioneered an earlier iteration of tech-powered trading. It is still unclear
whether the approach will work in the long run.4 Wall Street’s typical
model is competition; it has never implemented a system in which
everyone can win. Numerai built its new token on top of Ethereum, a
blockchain (a vast online ledger) on which anyone can build a bitcoin-like
token driven by a self-operating software program.5

Financial Statement Auditing


Another rich area for AI is the streamlining of the typical data-
acquisition challenges that auditors face. Time-consuming tasks are
minimized and data are placed into usable formats. This approach leaves
humans to review, analyze, and make audit decisions.4

Identity Verification (IV)


IV is one of the crucial regulatory and compliance risks in any
financial services company. It is commonly framed by the phrase Know
Your Customer (KYC). Banks have entire teams working on this subject to
prevent money laundering, identity theft, and tax crime. ML is used to
automate KYC processes and to make customers’ interactions with
financial service providers simple and enjoyable. Onfido, a London-based
company, offers, via an app, a series of checks to verify people’s identity.
Banks are not the only users of the service. It is also relevant in the
context of the sharing economy, e.g., to vet a customer for Airbnb,
conduct a pre-employment screening, or share a car.

Table 18.1 AI/ESG Considerations: In the Financial Sector

Environmental • Use of predictive analytics may help mitigate environmental crises
Social • AI plus blockchain may increase financial transparency and efficiency
• AI bots and chatbots improve the customer financial services experience
Governance • AI-driven insurance products elevate risk governance
• Greater insurance price and forecast information transparency
• AI-driven audit and predictive tools aid in combating fraud, corruption, money laundering, and similar financial crimes
• Combination of AI in insurance and cybersecurity provides potent tools to combat cyberinsecurity
• Identity verification, properly deployed, will increase data privacy protections
CHAPTER NINETEEN

AI in Natural Resources
and Utilities

The world is beginning to utilize renewable resources, turning away from
fossil fuels. Unlike oil and gas, however, renewables depend heavily on
weather conditions. The energy supply challenge goes hand in hand with
the proliferation of single households, individual businesses, and
municipalities becoming small-scale energy producers themselves
through electric cars, solar panels, and individual storage connected to the
grid. Emerging blockchain technology also supports the trend. In this
context, ML-powered software has unmatched capabilities for deploying
predictive algorithms to balance grids, protect and repair networks from
bugs or malicious hacks, and work with data to assess the reliability and
stability of the energy supply.
Currently there is no industry standard for the integration of single-
household energy storage into the bigger energy grid. The UK’s National
Grid bridges gaps in shortages of renewables with the help of conventional
power stations. For longer gaps, conventional utilities are kept on standby,
which is very costly and is inconsistent with the growing trend of
combating carbon emissions.
There are some examples of standardization, such as the SunSpec
Energy Storage Model Specification, or MESA-Device by the MESA
Standards Alliance in the United States, whose participants (apart from
Alstom) are all U.S. companies. They are developing a standardized
approach to integrating inverters, batteries, and software control systems
within an interface that communicates the data to the outside world.
In 2014, the Swiss-German firm Alpiq started GridSense, which steers
“domestic or commercial energy consumption of the equipment into which
it is integrated (such as boilers, heat pumps, charging stations for electric
vehicles) based on measurements such as grid load, weather forecasts and
electricity charges.”1

Data-as-a-Service in Utilities and Service Companies


An important question for utilities is how they will value, transact, and
ultimately monetize their data in the face of ongoing pressure from
technology disruptors and regulators. According to the Indigo Advisory
Group, a first attempt to provide a common data-value assessment
framework is taking place in New York through the New York Public
Service Commission’s Distributed System Implementation Plan (DSIP)
Guidance Order. The DSIP states, “At the core of the new model is
improved information—improved both in its granularity, temporal and
spatial, and in its accessibility to consumers and market participants.” The
DSIP differentiates between two types of utility data: basic and value
added.2
• Basic data are data that will be freely available at no charge beyond regular costs, including
data that are readily available in the public domain.
• Value-added data are data that will be available for a fee determined through utility-specific
fee structures. Value-added data are not routinely developed or shared, and have been
transformed or analyzed in a customized way (e.g., aggregated customer data).

Value-added data are delivered more frequently than basic data, are
requested and provided on an ad hoc basis, and are more granular than
basic data. Examples of value-added data include system data, such as
forecasted load data, voltage profile and power quality data, and customer
data, such as meter data or aggregations of customer data by ZIP code.
Value-added data are gaining prominence also because of the shift
from cloud to edge computing in areas such as the connected home, smart
city, and electric or connected cars. In combination with traditional
distributed energy resource (DER) opportunities, asset ownership use
cases include device monitoring and control at the meter, demand
response, DER dispatch, and settlement and interfacing with on-premise
devices (e.g., building management systems).
According to the Indigo Advisory Group, 93 percent of energy and
utility companies have increased their number of IoT projects. Google’s
US$3.2 billion acquisition of Nest in 2014 was less about hardware sales
and more about data. The Green Button Initiative, the energy data
standardization effort that was officially launched in the United States in
January 2012, has enabled the launch of 235 applications using data from
over 50 utilities and 60 million homes and businesses. Similarly, in April
2016, the DOE launched Orange Button, with US$4 million for projects to
increase access to solar data, improving solar market transparency and fair
pricing by establishing data standards for the industry.

AI in Energy Service Applications


Utilities and energy companies are increasingly interacting with new
technologies such as AI, blockchain, and robotics. As a result, dramatic
changes are taking place, including the growth of distributed energy and
the proliferation of sensors on infrastructure and behind the meter.
The three largest segments of use cases for AI in energy, according to
Indigo Advisory Group, are renewables management, demand
management, and infrastructure management.3 Below we also consider
geo-location and failure forecasting to offer a more comprehensive picture.

Renewables Management
AI use cases in this area are focused on enhancing short-term
renewable forecasting and improving equipment maintenance, wind and
solar efficiency, and storage analysis. AI is being deployed in various
pilots focused on wind turbine operation data and solar panel sensor data
that gauge sunlight intensity. This is then combined with atmospheric
observations obtained by radar, satellites, and ground-weather stations.
AI is also being applied to energy storage and estimates of the useful
life of a battery pack or unit by applying prognostic algorithms. In
Germany, an ML program named EWeLiNE has operated as an early-warning
system for grid operators to assist them in calculating renewable
energy output over a 48-hour period.
In Japan, GE is using AI to enhance wind turbine efficiencies, raising
power output by 5 percent and lowering maintenance cost by 20 percent. A
Japanese construction company has developed a smart energy system
powered by AI to manage its solar power plant. The system is called
AHSES (Adjusting to Human Smart Energy System) and is used to
anticipate, manage, and display energy needs from the solar power plant. It
supports smart pricing.4

Demand Management
Several companies are working on AI-backed demand management
solutions that focus on the demand response of different devices running in
parallel. Similarly, a series of AI platforms are under development
focusing on energy performance in buildings with solutions that gauge,
learn, and anticipate user behavior to optimize energy consumption. AI
and game theory use cases are also being applied to reward/penalty
mechanisms to ensure that enough customers in a DER pool participate
and are responsive when necessary. We have already cited the use of DL
by Google’s DeepMind to save 40 percent on power consumed for data-
center cooling purposes, leading to an overall reduction of 15 percent in
power consumption costs. The company is negotiating with the UK’s
National Grid to use AI to help balance energy supply and demand in
Britain.5

Infrastructure Management
Power companies have an opportunity to deploy AI in digital asset
management, in which ML algorithms collate, compare, analyze, and
highlight risks and opportunities. In these cases, AI methods are used to
model possible scenarios and to advise on actions and impacts. AI is also
being deployed for the operation and maintenance of generation sources
such as gas turbines to minimize emission of nitrogen oxides. Siemens, for
example, is using a neural model that alters the distribution of fuel in a
turbine’s burners to increase efficiency. Siemens is also deploying
cognitive technology by Watson to deliver predictive, prescriptive, and
cognitive analytics within the industrial cloud platform MindSphere.
Dan Walker, BP’s Group Technology leader, has said that BP will use
ML to “combine datasets (flow rates, pressures, equipment vibration) with
data from the natural environment (seismic information, ocean wave
height) to transform the way they run and optimize their drilling
operations.” In Chicago and New York, BP has begun testing AI
technology with new “personality pumps” that aim to “provide a more
interactive experience at the pumping station.”6 Customers can interact
with a bot called Miles that offers music, video e-cards, and other
interactive options to share with friends.

Geo-Location Discovery
Fossil energy companies use AI to improve location deployment. For
example, Texas-based Pioneer Natural Resources uses AI to ensure
accurate and optimal drilling locales. Nervana Solutions is
commercializing an algorithm that can detect sub-surface faults with
possible oil deposits, a task normally completed by human geologists.7

Failure Forecasting and Infrastructure Updating


HiBot USA uses robotics, big data, and ML to help water utilities
upgrade their aging pipelines. According to Fast Company, “water in the
US is generally handled locally, by 50,000 municipal utilities charged with
bringing water in and moving it around localities through millions of miles
of piping. There are 240,000 watermain breaks a year across the US, and
while those thousands of utilities work hard to replace pipes that can be up
to 160 years old, it’s a task that’s done haphazardly.” Most municipalities
replace their pipes based on age only, a replacement process that goes very
slowly. While doing replacements, municipalities have no way to
determine which part of the existing network can still be saved. This
“means there’s potentially US$400 million or more of savings available, if
replacement could be done more efficiently, and only as needed.”8
Unlike sewage and gas lines—which are highly regulated and have
been using robotics for quite some time—the water market is open to
new companies and services. HiBot designed its “databotics system” to
algorithmically identify the areas where pipes are at greater risk of failure.
This is done with data generated upon the inspection of pipes that have
already been replaced. Many factors go into models, including data on soil
dynamics. HiBot claims its technology is capable of predicting future
failure with 80 to 90 percent accuracy.
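A failure-risk model of this kind can be sketched as a simple classifier over pipe attributes. The features, synthetic records, and logistic-regression fit below are illustrative assumptions, not HiBot’s databotics system.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic pipe records: (age, soil corrosivity, nearby-break history), each
# scaled 0-1. The hidden rule below exists only to label this toy dataset.
def make_record():
    age, soil, breaks = random.random(), random.random(), random.random()
    p_fail = sigmoid(5 * age + 2 * soil + 3 * breaks - 5)
    return (age, soil, breaks), 1 if random.random() < p_fail else 0

data = [make_record() for _ in range(1000)]

# Fit a logistic-regression risk model by stochastic gradient descent
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.05
for _ in range(100):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def failure_risk(age, soil, breaks):
    """Estimated probability that a pipe segment fails."""
    return sigmoid(w[0] * age + w[1] * soil + w[2] * breaks + b)

# An old pipe in corrosive soil ranks far above a new one, so replacement
# budgets can go where the risk actually is, not just where pipes are oldest.
print(round(failure_risk(0.9, 0.9, 0.8), 2), round(failure_risk(0.05, 0.2, 0.0), 2))
```

Ranking segments by a score like this, rather than by age alone, is what lets a utility replace only the pipes most likely to fail.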

Table 19.1 AI/ESG Considerations: In the Utility and Natural Resources Sectors

Environmental • Innovations using AI (data-as-a-service, demand management,
renewables) will improve environmental compliance and performance
• AI-enabled failure forecasting saves lives and property
Social • Communities will experience improved environmental outcomes and services
Governance • Utilities and energy companies deploying AI are more attractive to
stakeholders from an ESG governance and sustainability standpoint
CHAPTER TWENTY

AI in Other Business Sectors

In this chapter, we explore the state of AI as it is being applied to several
other notable industries: software, publishing, media, entertainment, and
retail.

AI in Software Development
In 2015, researchers at MIT wrote code that automatically fixed
software bugs by replacing faulty lines of code with working lines from
other programs. Recently, several groups have made even more progress
on getting learning software to make learning software. These include
researchers at OpenAI, a non-profit research institute co-founded by Elon
Musk; the University of California, Berkeley; and DeepMind.
Automated ML is the trend to watch in AI. As it expands beyond DL
to other types of AI, these systems will mix and match AI approaches,
which should lead to new applications and services. This trend will
accelerate AI progress even further, which, while exciting, is not without
challenges and risks. We
are living at a time when legal frameworks around AI are still in their
infancy, and humans are struggling to fully understand AI and its
outcomes. Moreover, there is inadequate architecture to protect AI models
from adversarial attacks.

Example of Self-Programming Software


Researchers at Microsoft and the University of Cambridge have
recently created an example of self-programming AI. The system is called
DeepCoder and it has solved some basic challenges similar to those
implemented at coding competitions. With this approach, people without
coding skills will be able to build simple programs, boosting
productivity.
DeepCoder uses a so-called “synthesis program.” It creates new
programs by gluing together lines of software drawn from existing code.
DeepCoder receives a list of inputs and outputs for each software fragment
and learns which pieces are needed to achieve the desired result.
DeepCoder uses ML to scour databases of source code and sort the
fragments according to its view of their probable usefulness.
As DeepCoder learns which combinations of source code work and
which ones do not, it is constantly self-improving. At the moment,
DeepCoder is only capable of solving programming challenges that
involve around five lines of code, but a few lines may be all that is needed
for a fairly complex program.1
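The “glue fragments together until the inputs map to the outputs” idea can be illustrated with a brute-force enumerator over a tiny domain-specific language. The operations and search depth below are invented for illustration; DeepCoder’s advance is that it additionally learns to rank fragments by probable usefulness so it searches the likeliest programs first.

```python
from itertools import product

# A tiny DSL of reusable list-transforming fragments (illustrative only)
DSL = {
    "sort":      lambda xs: sorted(xs),
    "reverse":   lambda xs: xs[::-1],
    "double":    lambda xs: [x * 2 for x in xs],
    "drop_neg":  lambda xs: [x for x in xs if x >= 0],
    "sort_desc": lambda xs: sorted(xs, reverse=True),
    "head3":     lambda xs: xs[:3],
}

def run(prog, xs):
    """Execute a program: a sequence of DSL fragments glued together."""
    for op in prog:
        xs = DSL[op](xs)
    return xs

def synthesize(examples, max_len=3):
    """Enumerate fragment sequences until one maps every input to its output."""
    for length in range(1, max_len + 1):
        for prog in product(DSL, repeat=length):
            if all(run(prog, inp) == out for inp, out in examples):
                return prog
    return None

# Desired behavior, specified only by input-output examples:
# keep the non-negatives, double them, and return them in descending order
examples = [
    ([3, -1, 2, 5], [10, 6, 4]),
    ([0, 4, -2, 1], [8, 2, 0]),
]
print(synthesize(examples))
```

Brute force works here because the space is tiny; with a realistic fragment library, a learned ranking over fragments is what keeps the search tractable.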
NEAT, which stands for “NeuroEvolution of Augmenting Topologies,”
describes algorithmic concepts for self-learning machines that are
inspired by genetic variation in the evolutionary process. The self-
learning process has three steps:
• First, it defines all potential actions the “player” is allowed to execute.
• Second, researchers define a goal for the computer. NEAT works with a mathematical function
that rewards success called “Fitness Score.”
• Third, researchers define the rules of evolution. NEAT allows mutation of nodes, inter-
connection between nodes, and the passing along of the fittest neural networks to new
offspring. Existing networks optimize themselves, with NEAT adding an innovation number to
each gene that works as a historic marker.2

While NEAT is still in the research stage, its promise is big.
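The three steps above can be sketched in miniature. The genome encoding, the XOR-style fitness function, and the mutation rates below are simplified assumptions; a full NEAT implementation also handles crossover alignment via the innovation numbers and groups genomes into species.

```python
import math
import random

random.seed(42)

OUT = 2          # node ids: 0 and 1 are inputs, 2 is the output
INNOVATION = 0

def next_innovation():
    """Step 3: every new gene receives a historic marker (innovation number)."""
    global INNOVATION
    INNOVATION += 1
    return INNOVATION

def new_genome():
    """Minimal starting network: both inputs wired straight to the output."""
    return {"n_nodes": 3,
            "conns": [(0, OUT, random.uniform(-1, 1), next_innovation()),
                      (1, OUT, random.uniform(-1, 1), next_innovation())]}

def activate(g, x):
    inputs = {0: x[0], 1: x[1]}
    hidden = {}
    for src, dst, w, _ in g["conns"]:       # hidden nodes are fed by inputs only
        if dst != OUT:
            hidden[dst] = hidden.get(dst, 0.0) + w * inputs[src]
    hidden = {n: math.tanh(v) for n, v in hidden.items()}
    out = sum(w * inputs.get(src, hidden.get(src, 0.0))
              for src, dst, w, _ in g["conns"] if dst == OUT)
    return math.tanh(out)

# Step 2: the "Fitness Score" rewards approximating XOR
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(g):
    return -sum((activate(g, x) - y) ** 2 for x, y in CASES)

# Steps 1 and 3: the allowed "actions" are the mutation operators
def mutate(g):
    g = {"n_nodes": g["n_nodes"], "conns": list(g["conns"])}
    if random.random() < 0.8:               # perturb an existing weight
        i = random.randrange(len(g["conns"]))
        src, dst, w, innov = g["conns"][i]
        g["conns"][i] = (src, dst, w + random.gauss(0, 0.5), innov)
    if random.random() < 0.15:              # add a node by splitting a direct link
        direct = [c for c in g["conns"] if c[1] == OUT and c[0] in (0, 1)]
        if direct:
            src, dst, w, innov = random.choice(direct)
            g["conns"].remove((src, dst, w, innov))
            h = g["n_nodes"]
            g["n_nodes"] += 1
            g["conns"].append((src, h, 1.0, next_innovation()))
            g["conns"].append((h, OUT, w, next_innovation()))
    if random.random() < 0.15:              # add a new input->hidden connection
        pairs = [(i, h) for i in (0, 1) for h in range(3, g["n_nodes"])
                 if not any(c[0] == i and c[1] == h for c in g["conns"])]
        if pairs:
            i, h = random.choice(pairs)
            g["conns"].append((i, h, random.uniform(-1, 1), next_innovation()))
    return g

# Evolution: the fittest genomes survive and pass mutated copies to offspring
pop = [new_genome() for _ in range(50)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(40)]
best = max(pop, key=fitness)
print(round(fitness(best), 3))
```

The innovation numbers matter later: when two parent genomes are crossed over, genes with matching markers can be aligned even though the networks have grown different topologies.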

AI in Publishing, Media, and Entertainment


The traditional publishing, media, and entertainment industry was the
first to experience the kind of transformative and sudden change that
digitization brings. The Internet changed the industry’s distribution and
content-generation models overnight. Companies have transformed their
business models from purely analog to mostly digital, with newcomers
like Flickr, Netflix, and Twitter contributing heavily to the speed of the shift.

Twitter: AI in a Social Media Company


Twitter is a social media company that uses DL to improve the user
experience. Twitter’s ranking algorithm is powered by deep neural
networks and an AI platform built by Cortex, one of the company’s in-
house AI teams. The company is focused on natural
language processing, natural language understanding, and media domains.
When a user opens Twitter and refreshes their timeline, Twitter scores
every tweet from the people the user follows since their last visit to
determine what update to show at the top. This scoring imposes great
computational demands, as Twitter processes thousands of tweets per
second to satisfy all requests. For Twitter, speed is as important as quality.
The company’s model requires meeting a broad diversity of requirements,
including resource utilization, maintainability, prediction quality, and
speed, while simultaneously training, debugging, evaluating, and
deploying. Twitter relies on the modularity of DL, as Nicolas
Koumchatzky and Anton Andryeyev have explained on the Twitter blog.3
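The scoring-and-ranking step can be sketched with a toy linear scorer. The features and weights below are invented for illustration; Twitter’s real model is a deep neural network evaluated under tight latency budgets.

```python
def tweet_score(tweet, weights):
    """Toy relevance score: a weighted sum of engagement features
    (hypothetical features and weights, not Twitter's model)."""
    return sum(weights[f] * tweet.get(f, 0.0) for f in weights)

def rank_timeline(tweets, weights):
    """Score every unseen tweet, then surface the highest-scoring first."""
    return sorted(tweets, key=lambda t: tweet_score(t, weights), reverse=True)

weights = {"likes": 1.0, "retweets": 2.0, "author_affinity": 5.0, "recency": 3.0}
timeline = rank_timeline([
    {"id": 1, "likes": 10, "retweets": 1, "author_affinity": 0.1, "recency": 0.9},
    {"id": 2, "likes": 2,  "retweets": 0, "author_affinity": 0.9, "recency": 0.5},
], weights)
print([t["id"] for t in timeline])   # [1, 2]
```

The engineering challenge Twitter describes is not this ranking logic itself but running a far heavier model over thousands of candidate tweets per second without slowing the refresh.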
Jack Dorsey, the company’s CEO, has insisted that after concentrating
on the user experience in 2016, the company is now going to bring the
same focus to advertising experiences in 2017 and beyond, including
finding ways to apply ML. The objective, which seems to be paying off, is
to use ML to pull away somewhat from the real-time experience and
engage in a more relevant, curated one.4
The company complements its in-house AI capabilities by acquiring
startups. In the summer of 2016, Twitter acquired Magic Pony
Technology, a developer of image-processing technology.

AI in Scientific Publishing
The essence of the challenge for scientific journals is that they publish
a great deal of scientific data on a broad diversity of topics; some
publishers, such as Elsevier, have thousands of publications and
thousands of authors, and must peer-review articles before they are
published. A contest called ScienceIE asked teams to create programs
that could extract the basic facts and findings from sentences in scientific
papers and compare them to the basic facts from sentences in other papers.
Isabelle Augenstein, a post-doctoral AI researcher at University
College London who works with Elsevier, oversaw this challenge. Her
efforts were focused on automatically suggesting the right reviewers for
each manuscript.5
Several other major publishers implement software to support peer
review. However, it is important not to be overly optimistic about AI in
publishing. AI researchers warn about limitations in contemporary text
recognition and language understanding technologies: “Linguists call
anything written by humans, for humans, natural language. Computer
scientists call natural language a hot mess.”6

Digital Humans or Avatars


We all know the work of Mark Sagar because of his Oscar-winning
work on the films Avatar and King Kong. He is also a professor at the
University of Auckland and CEO of Soul Machines, a company that
excels in NLP and neural network models and has created several very
realistic digital humans. We witnessed an amazing live demonstration of
BabyX in Lisbon in 2016, where the girl on screen correctly answered
the researcher’s questions about identifying objects, followed his
movements with her eyes, and behaved “her age” (about two years old).
ObEN is building software to create personalized avatars for celebrities
that can talk and sing. ObEN’s goal is to let fans interact with computer-
generated versions of their favorite stars, using natural language
technology to understand and respond to dialogue. The company has been
working already in South Korea and is moving now to China.7

Is AI Creative?
The question of creativity and AI has been looming large over AI
researchers and practitioners for a while. While some believe that AI is
already “creative,” creating a new kind of aesthetic, others stress that
computers are still derivative rather than truly creative. These discussions
are just emerging, but there are already events dedicated to AI creativity,
among them the International Conference on Computational Creativity that
took place in Atlanta in June 2017.

1. AI as a Writer
A novel called The Day a Computer Writes a Novel almost won a
competition in Japan. Hitoshi Matsubara and his team at Future University
Hakodate in Japan selected words and sentences and set parameters for
construction before they “asked” an ML system to “write” the novel. For
the past few years, the Hoshi Shinichi Literary Award has technically
been open to non-human contenders; 2016 was the first time the award
committee received applications from AI programs. Of 1,450 submissions, 11
were written at least partially by non-humans, though they had their
challenges. As Satoshi Hase, a Japanese science fiction novelist, stated, “I
was surprised at the work because it was a well-structured novel. But there
are still some problems (to overcome) to win the prize, such as character
descriptions.”8

2. AI and Music
Google’s project Magenta is looking into ML in music and whether or
not there are ways to create a better collaboration between humans and
machines on that front.9 Below we explore additional uses of AI in music.

3. AI and the Visual Arts


Since 2015, Google has developed DeepDream, a tool that uses neural
networks to generate hallucinogenic images from existing photography.
For example, by taking a photo of a dog and finding a bit of fur that might
look vaguely like an eyeball, the AI enhances and replicates it to create a
result in which the dog is covered in swirling eyeballs.10 Is this a new
aesthetic, or are we entering new artistic territory? Another project at
Google called Sketch RNN is building neural networks that can draw. By
analyzing digital sketches, neural networks learn to make images of their
own.
The Art and Artificial Intelligence Lab at Rutgers University in the
U.S. has used generative adversarial networks (GAN) to replicate a
number of existing and well-known painting styles. One network
generated the images based on what it had been taught, while the other
network judged the resulting work. The new, modified version, creative
adversarial networks (CAN), was then designed to generate work that does
not fit the known artistic styles, thus “maximizing deviation from
established styles and minimizing deviation from art distribution.” For the
training, they used 81,449 paintings by 1,119 artists in the publicly
available WikiArt dataset. According to Artnet’s reporting, “The images
generated by CAN do not look like traditional art, in terms of standard
genres (portrait, landscapes, religious paintings, still life, etc.).”11
Table 20.1 AI/ESG Considerations: In the Media and Arts Sectors

Social • AI in arts/entertainment displaces traditional artists but also generates
new forms of and opportunities for art-making/creativity
Governance • Media/social media companies have a heightened responsibility to
stakeholders to combat fake news, bots, and propaganda by finding
ways to identify and eliminate them
CHAPTER TWENTY-ONE

AI in Government
and the Military

Government AI Policies
In the United States
In Chapter 11, we explored the current Chinese government focus on
leading in AI to secure Chinese global leadership in the field by 2030, and
how the U.S. and China are in a neck-and-neck competition for such
leadership. For some time, the U.S., especially under President Obama,
was clearly focused on the importance of maintaining AI leadership. It is
unclear at the time of this writing whether the U.S. will be able to maintain
this leadership under the current administration led by President Trump.
It is critical that the U.S. remain focused on how to better understand
the impact of AI on employment, on the transformation of traditional
businesses into digitally savvy ones, and on the necessity of adapting the
educational system so that young people develop the skills that are
necessary to adapt, learn, and compete for the rest of their lives.
We are concerned that if investments in fundamental research slow
down or stop altogether, the U.S. will suffer a serious and damaging loss
of leadership in this absolutely key and pervasive technology. Current
research will continue to feed the development of AI-related
technologies and applications for another 10 to 15 years. Once this period
is over, there will be a know-how gap that may never be closed, as the
example of the Russian space industry amply illustrates.
Human-machine collaboration was a major theme in the Obama
administration’s reports “Preparing for the Future of Artificial
Intelligence” and “National Artificial Intelligence Research and
Development Strategic Plan.” The consensus in the U.S. in 2016 was that
there should not be a heavy push into regulating AI too broadly, though its
use in the automotive, aviation, and finance industries should be held to
certain standards.
These two Obama-era reports offered three key guiding principles:
• AI needs to augment rather than replace humanity;
• AI needs to be ethical; and
• there must be equality of opportunity for everyone to access and develop AI.

The reports suggested that AI also has a pivotal role in cybersecurity and
can be used to detect and counter cyberattacks as they target U.S. citizens
and/or infrastructure. The reports also recommended the deployment of
algorithmic surveillance of individuals and crowds while acknowledging
that more study is needed on the matter,1 especially given current attempts
to implement “predictive policing,” some of which may be racially
biased.2

In the United Kingdom


In the summer of 2017, the UK government launched an investment
fund focused on electric vehicles, aerospace materials, satellites, AI, and
robotics. The AI and robotics funding focuses on two areas:
• £93 million for the development of systems that can be used in extreme environments for
offshore energy, space, and deep mining; and
• £38 million available for AI and control systems for driverless cars.

The majority of these funds were earmarked for fundamental research.3
In July 2017, the House of Lords Select Committee announced that it
would launch its first public inquiry into AI to consider its economic and
social implications.4 The committee wants to better understand how technology
giants use consumer data; examine the issue of user-generated training
data; and consider what practices large companies implement, especially in
“shielding” customer-specific data. In October 2017, two independent
reports came out: Wendy Hall and Jerome Pesenti’s “Growing the
Artificial Intelligence Industry in the UK” and Olly Buston’s report on the
“Regional Impact of AI Across UK.” The Hall report projects that AI
could raise the economy’s growth rate from 2.5 to 3.9 percent within 20
years. It encourages open data across the UK but does not address how the
UK will retain key AI talent. Buston’s study is very granular and points
out that the most at-risk constituencies will see around 40 percent of their
jobs disappear through automation within 15 years, while the least at-risk
constituencies will face the elimination of about one-fifth of their jobs.

In Canada
The Canadian government is funding the Pan-Canadian AI Strategy for
research and talent that will cement Canada’s position as a world leader in
AI. This US$125 million strategy will attract and retain top academic
talent in Canada, increase the number of post-graduate trainees and
researchers studying AI, and promote collaboration between Canada’s
main centers of expertise in Montreal, Toronto-Waterloo, and Edmonton.
The program will be administered through CIFAR, the Canadian Institute
for Advanced Research.
Canada cemented its global lead in AI in large part due to the early
support by CIFAR of a group of international researchers, led by Geoff
Hinton of the University of Toronto, starting over a decade ago. Notable
Canadian researchers work for U.S. full-stack AI companies, with Hinton,
among others, supporting Google and Yoshua Bengio supporting
Microsoft. CIFAR’s Learning in Machines & Brains program is now
co-directed by Yoshua Bengio and Yann LeCun (a professor at New York
University and director of AI research at Facebook).5

In Continental Europe
In early 2016, the European Commission (EC) launched a “Digitizing
European Industry” strategy, and identified robotics and AI as critical
technologies in which to invest. Shortly thereafter, SPARC, a public-
private partnership for robotics in Europe, was launched. With €700
million in public funding and robust private participation bringing the
total to €2.8 billion, SPARC is now one of the largest civilian research
programs in this area in the world.
The EC also recognizes the potential liabilities that come with broad
adoption of AI and robotics—for example, see their publication
“Communication on Building a European Data Economy.” One of their
ideas under development includes the establishment of cross-border
corridors to test new technologies such as automated driving.
From a regulatory perspective, the EC is currently evaluating existing
legislation, such as the Defective Products Liability Directive and the
Machinery Directive, to understand its relevance to the changing
technological landscape. Also, in late 2016, the EC started the “Digital
Jobs and Skills Coalition,” focused on understanding the impact of the
digital age on employment.6
There have also been reports that France and Germany are interested in
working together to speed up AI investments in addition to the work that is
already being driven by the EC.

AI and the Public Sector


As in any traditional industry, the government sector—whether
national or international—is poised to benefit from AI applications. But,
like industry, government also suffers from legacy IT systems, siloed data,
and a general lack of agility in terms of how best to harvest the full
potential of AI.
In 2017, Deloitte issued a report entitled “AI-Augmented Government:
Using Cognitive Technologies to Redesign Public Sector Work.” The
report states that the typical government worker today allocates their labor
across a “basket” of tasks. By dividing jobs into activities and analyzing
whether automation can be applied to one or another task, the company
calculated the number of labor hours that could be freed up or eliminated.
The report found that millions of “working hours each year (out of some
4.3 billion worked total) could be freed up today by automating tasks that
computers already routinely do.” At the low end of the spectrum,
automation has the potential to save 96.7 million federal hours annually,
with potential savings of US$3.3 billion. At the high end, the number of
hours saved is closer to 1.2 billion hours at a potential annual savings of
US$41.1 billion.
At the U.S. federal level, “simply documenting and recording
information consumes a half-a-billion staff hours each year, at a cost of
more than US$16 billion in wages alone. Procuring and processing
information eats up another 280 million person hours, costing the federal
government an additional US$15 billion annually.”
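As a quick sanity check, the low- and high-end estimates above both imply roughly the same average federal labor cost. The sketch below uses the hours and dollar figures quoted from the Deloitte report; the implied hourly cost is our own derived number, not one the report states:

```python
# Back-of-the-envelope check of the Deloitte estimates quoted above.
# Hours and dollar figures come from the report as cited in the text;
# the implied hourly labor cost is a derived number, not a report figure.

def implied_hourly_cost(savings_usd, hours):
    """Average labor cost per hour implied by a savings estimate."""
    return savings_usd / hours

low = implied_hourly_cost(3.3e9, 96.7e6)    # low end: $3.3B over 96.7M hours
high = implied_hourly_cost(41.1e9, 1.2e9)   # high end: $41.1B over 1.2B hours

# Both ends of the range imply roughly the same average hourly cost,
# so the two estimates are internally consistent.
print(f"Low end:  ${low:.2f}/hour")
print(f"High end: ${high:.2f}/hour")
```

Both ends work out to about US$34 per hour, which suggests the report scaled a single average wage figure across both scenarios.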
In this same report, Deloitte outlines several cases of successful trials
of ML in the public sector, including the following.

Chatbots in the Public Sector


Chatbots have a lot of potential for use cases in public services, for
example in call centers. Trained agents in social services are in short
supply, according to Steve Nichols, Georgia’s chief technology officer. He
believes the state’s Department of Human Services will outsource
answering of routine questions to chatbots. This is already happening. For
example, the Department of Homeland Security’s U.S. Citizenship and
Immigration Services has implemented EMMA, a virtual assistant with
reasonably good language understanding. SGT STAR, the U.S. Army’s
virtual assistant, helps prospective recruits understand their options while
answering their questions, checking individual qualifications, and referring
future soldiers to human recruiters; it does the work of about 55
recruiters. As in private industry, complicated responses are left to a
human agent. In time, a chatbot can learn more and more from humans and
develop command over more complex answers. Having the proper domain
expertise is crucial to train such bots, which provides great potential for
government agencies to innovate.

Computer Vision in the Public Sector


In Chicago and Los Angeles, computer vision and ML are supporting
police investigations, for example by matching lists of license
plates and suspects generated in real time. In London, 39 CCTV cameras
tag potential threats, which enables police to track information in real time.
Meanwhile, the Georgia Government Transparency and Campaign Finance
Commission is implementing handwriting recognition software and
crowd-sourced human review to process the workload of 40,000 pages of
campaign finance disclosures per month.

AI and Public Security


The German company AVA uses ML to flag locations with security
alerts, whether the event is a terrorist attack, a public fight, or another
crisis incident. Several municipalities and police organizations across
Europe are trialing AVA for prevention and incident management. The
technology can also be deployed in cars to alert consumers if they happen
to drive into an unsafe area.
Some municipalities have implemented drones for surveillance of
airports, ports, manufacturing sites, and waterways, though this raises a
number of security and privacy concerns.
Several cities in the U.S., UK, China, and India have linked street and
body cameras on police officers to ML software to process information
that might be important to solving and/or preventing crime. In this manner,
AI allows these municipalities to speed up the information-processing of
thousands of hours of video.
The New York Police Department’s CompStat was the first data tool
initiating predictive policing, and many police departments across the U.S.
and Canada are now using it.7
The European project Law Train is aimed at introducing virtual
interview training systems to enable police to combat drug trafficking and
to collaborate effectively on a cross-border basis. ML techniques are used
in simulation training of law enforcement personnel; “[t]he next step will
be to move from simulation to actual investigations by providing tools that
support such collaborations.”8
Dynamic Aviation Risk Management Solution, or DARMS, is a
research project of the University of Southern California aiming to unify,
quantify, and integrate information across the aviation sector. The ultimate
goal is to assess the risk around a specific individual on a per-flight basis.
In 2017, the U.S. Transportation Security Administration (TSA) conducted
the initial proof of concept of the DARMS for passenger screening. Within
the next one to three years, they would like to finalize the design and build
a prototype that “incorporates the complete aviation security ecosystem
and which tests and evaluates the approach at a few select airports.”9 As
with all technologies of this type, debugging bias will be crucial,
especially where individual people are concerned. Such technologies
should be developed only with the participation of interdisciplinary and
diverse teams, and with a solid data governance and ethical
decision-making framework around them.

AI in the Military
The U.S. military has been funding, testing, and deploying various
types of ML and robotics for a long time. In 2001, Congress mandated that
one-third of ground combat vehicles should become unmanned by 2015, a
goal that was not fully met.
AI technology could eventually change the balance of international
power by making it easier for smaller nations to go after the richer G7
countries. On the positive side, Estonia successfully implemented its e-
Residency program, which enables any individual to establish and run a
business within the EU, even without being a citizen of Estonia. The
program is built on a set of technologies utilizing ML and blockchain. On a more
negative note, ML and facial recognition capabilities can be misused.
Recent research demonstrates how voice can be synthesized, or how a
video can be changed, such that a person like President Obama “says”
things that he has never actually said.
At the request of IARPA, the research agency of the Office of the
Director of National Intelligence, Harvard’s Belfer Center for Science and
International Affairs issued a report on the effect of AI on national
security.10 One of this report’s “conclusions is that the impact of
technologies such as autonomous robots on war and international relations
could rival that of nuclear weapons.” The report recommends that the U.S.
include AI in military action planning and in possible future international
treaty negotiations. Indeed, autonomous and semi-autonomous weapons
are already in use without an overarching agreement on how to control
their implementation and diffusion. Autonomous weapons pose several
problems, and the absence of an international treaty is just one of them. A
fundamental problem is that, unlike nuclear and biological weapons,
autonomous weapons are relatively easy to develop and deliver. The hardware
is getting cheaper, with drones already in use by insurgent forces at a price
point of just a few hundred dollars. Software know-how is available.
Humanity might not have much time to negotiate proper and enforceable
regulations.
The report is also concerned with the attack and defense capabilities of
ML, as automation in probing and targeting enemy networks or crafting
fake information can reduce efforts and costs associated with such
operations. The use of cyberattacks has become a major concern, but the
use of technology by the wrong actors is not limited only to these
possibilities. Commoditization of drone delivery and autonomous vehicles
could become powerful weapons of terrorists and criminals. ISIS has
already started using consumer drones to drop explosives in combat.
Today, the CIA has 137 pilot projects directly related to AI. These
experiments include automatic tagging of objects in video (so analysts can
pay attention to what’s important) and better predictions of future events
based on big data and correlational evidence.11 Quite often, research on AI
from intelligence agencies and the military finds its way into civilian
applications. Let’s not forget that the famous touchscreen of a smartphone
was first developed for the military in the 1960s.

Military Robotics and AI


Research and findings on military uses of AI and robots are classified.
We know that in fiscal year 2017 budget proposals, the U.S. military
allocated around US$4.61 billion to drone-related spending. According to
TechEmergence, the global spending on military robots grew from US$2.4
billion in 2000 to US$7.5 billion in 2015. BCG believes it will reach
US$16.5 billion by 2025.
Gill Pratt, a former DARPA program manager, linked the current state
of robots to the “Cambrian Explosion,” a relatively short period that
occurred 541 million years ago and ushered in the appearance of most
animal life forms present in the world today. This analogy is applicable to
military robots. From stationary systems to small spying devices, a
multitude of technologies exist that can at some point replace many of the
tasks a human soldier does.
The U.S. military and DARPA are often at the cutting edge of
technology development in the U.S., focused first on basic research before
applying it to a variety of other fields. For example, TCP/IP, which gives
us Internet connectivity, was first developed by DARPA. The iPhone’s
touchscreen was first developed for the U.S. Air Force. GPS, which makes
half of our smartphone apps work, was a U.S. Navy project.
In South Korea, AI-powered sentry systems equipped with guns can track
humans at 3 km during the day and 2.2 km at night. These devices use
pattern recognition and tracking, can issue verbal warnings, and can open
fire when authorized by a human operator.
DARPA’s Anti-Submarine Warfare Continuous Trail Unmanned
Vessel, or “Sea Hunter,” has been under development since 2010 and will
be delivered to the Navy in 2018. This technology can be used to track
quiet diesel electric submarines, and potentially to engage in direct
combat.

Table 21.1 AI/ESG Considerations: In the Public and Military Sectors

Environmental • AI military robotics race; social fear that out-of-control military robots
make own decisions or are manipulated by bad actors (terrorists, rogue
regimes)
Social • Deployment of government algorithmic surveillance on individuals and
crowds; potential for abuse/violations
• Beneficial AI use in public sector work (e.g., health care)
Governance • Obama administration’s three guiding principles of AI governance:
ethical, equality of opportunity, augment not replace humans
• AI pivotal role in cybersecurity
• AI arms race that may be equivalent to nuclear race
• AI and public security: a double-edged sword
• In an AI-evolved world, asymmetric international power relations may
arise

Military drones have been in use since the late 1960s. Since 2002, the
U.S. has been using Predator drones equipped with Hellfire missiles. This
model was replaced by MQ-9 Reaper, which can fly faster and longer, and
can carry a larger weapons payload. General Atomics is the contractor.
Surveillance drones are getting smaller. There are drones deployed in
swarms to automatically communicate with each other and make
decisions. For example, Perdix drones (named after a character in Greek
mythology) are an experimental project carried out by the Strategic
Capabilities Office of the U.S. Department of Defense, pioneered by MIT
students, and modified for military use around 2013. Some drones are the
size of a hummingbird and can provide information directly to humans. An
example of this kind of system is the Black Hornet, developed by the
Norwegian company Prox Dynamics, later acquired by FLIR.
More than 3,000 researchers, scientists, and executives from
companies including Microsoft and Google signed a 2015 letter to the
Obama administration asking for a ban on autonomous weapons. In 2012,
the U.S. Department of Defense set a temporary policy requiring a human
to be involved in decisions to use lethal force; it was updated to be
permanent in May 2017.
PART 5

The Socio-Politics of AI
—Critical Issues

Key Issues in Part 5


• Data control, AI, and society. Every technology can be used to benefit individuals and
businesses, and at the same time to give the few control over the many; spread false
information and propaganda; and shape popular, community, and business perceptions. AI is
no exception. Fake news is just one negative example of how technology can be misused to
undermine open societies and even democracy.
• The future of employment. AI will change employment across regions, countries, and
industries. We believe that the loss of jobs that will ensue is not because there is too much
employment but because there is too little innovation. Responsible leaders at every level—both
corporate and governmental as well as at the board, executive, and managerial levels—must
start to address shifts in employment at the early stage of technology deployment and not think
of technology as a way to cut costs and eliminate jobs, but rather as a way to innovate and
create relevant new jobs.
• Life-long education. Life-long education is needed to ensure long-term employability, though
it does not guarantee it. Entrepreneurship will gain importance in the age of AI. The fusion of
technology and human-centric sciences will enable the development of new industries and
educational fields.
• Ethical design and social governance. Design of beneficial and safe AI can be achieved only
by embracing a different form of leadership, one that empowers groups to take and implement
collective and cross-disciplinary responsibilities for tackling specific problems. Full-stack AI
companies should be incentivized to create ethics boards for AI governance, and invite into
these boards multidisciplinary representatives from traditional businesses, communities and
policy-making bodies. It will not be easy to create functioning AI governance—there will be
challenges; complaints about bureaucracy; and the need to educate more traditional, less tech-
savvy business people and policy makers. Visionary corporate boards have a key role to play
to set the tone for this essential development, while contributing to the sustainability and social
responsibility of their businesses to their stakeholders, communities, and society.
• The legal, regulatory, and ethical context of AI policy. Industry regulation has a critical role to
play in the adoption and development of new AI technologies. But there is a thin line between
bureaucratic regulations that stifle innovation or make it too costly, and government support
that enables the creation and adoption of new technologies that are not only valuable to the
companies creating them but also to society as a whole.
CHAPTER TWENTY-TWO

Data-Control, AI, and Societies

Surveillance and Influence


The discipline of cybernetics hints, even in its name, at its big regulatory
ambitions. The term originates from a Greek word meaning “the science of
governance” or “the art of steering,” and it relates to the ability of a system
to remain stable by constantly learning and adapting itself to changing
circumstances. Cybernetics has been attached to the world of computing
for quite some time now. It is a discipline focused on describing the
changes in a system caused by algorithms. Wikipedia provides a nice
overview of the significance of the term over time:
Cybernetics is a transdisciplinary approach for exploring regulatory systems—their
structures, constraints, and possibilities. Norbert Wiener defined cybernetics in 1948 as “the
scientific study of control and communication in the animal and the machine.” In the 21st
century, the term is often used in a rather loose way to imply “control of any system using
technology.” In other words, it is the scientific study of how humans, animals and machines
control and communicate with each other.1

Evgeny Morozov, a Belarus-born scholar, was among the first to oppose
the algorithmic governance approach. He started to warn about
the misuse of data five years before AI became a mainstream topic, with
misuse potentially happening regarding access to data or data manipulation
techniques, whether on commercial sites like Facebook and Airbnb or
government sites in a democratic or authoritarian regime anywhere in the
world.
Singapore is an example of a data-controlled society, an approach that
emerged first as an effort to combat terrorism. Today, government control
of data in Singapore influences and pervades economic, immigration, real
estate, and education policy.
According to Scientific American, China is taking a similar route.
Recently, Baidu, the Chinese equivalent of Google, invited the military to
take part in the China Brain Project. It involves running DL algorithms
over the search engine data collected from users. In addition to this, the
government plans certain forms of social control as well. According to
recent reports, “every Chinese citizen will receive a so-called ‘Citizen
Score,’ which will determine under what conditions they may get loans,
jobs, or travel visas to visit other countries. This kind of individual
monitoring would include people’s Internet surfing and the behavior of
their social contacts.”2
The Financial Times reports that Chinese authorities—which have
unchecked access to citizens’ histories—are looking into how AI might be
designed to predict and prevent crime. The goal is to know in advance who
might be a terrorist or do something bad, according to Li Meng, vice-
minister of science and technology. The facial recognition company Cloud
Walk has been trialing a system that uses data on individuals’ movements
and behavior, e.g., someone who walks into a shop to sell weapons. The
police would use the system to rate suspicious individuals and groups. AI
is also being implemented for crowd analysis to discover patterns and
generate insights. Under these circumstances, data privacy is considered
secondary to the “greater good.”3
The Russian government is studying what the Chinese are doing with
the thought of introducing similar monitoring into their country. They have
started to do this by already intervening in the business of search and
social network companies. Yandex, a Russian search engine created even
before Google, still dominates the Russian domestic market. However,
interference by the government in the business of this company has
impacted the trustworthiness of the brand. A similar fate is expected for
the dominant Russian social network, V Kontakte, created by Pavel Durov,
an entrepreneur who has since left Russia. He is also the builder of another
successful brand, Telegram, one of the most secure messenger service
apps. Because this business was set up outside of Russia, government
attempts at intervening in it in 2017 have been fruitless. Russians are
increasingly switching to Google and Facebook to search for information
they want and can trust. But, as we learned in 2016, even these brands are
not immune from malicious intervention.
Because AI technologies allow an efficient filtering of information,
they can be used both to create greater privacy, and to exert greater
authoritarian control. They can enable creation and the sharing of factual
information, yet they can also spread fake identities and sources to
influence, confuse, and toy with human emotion. Malicious and interfering
political bots have already been used in major elections in the UK, U.S.,
and France in 2016 and 2017. And this use or misuse of AI techniques in
politics is not going away anytime soon.
Today’s ML systems can predict which bills are likely to pass a particular
legislature by making algorithmic assessments of the text of the bill and by
adding variables such as how many sponsors it has, and even the time of
year it is being presented.4 Further “applications” will undoubtedly arise,
which only increases the need for collaboration—public, private, and
public/private—to develop strategies and tools to counterbalance them.
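A bill-passage predictor of this kind can be sketched as a simple logistic model over hand-picked features. Everything below—the feature choices, the weights, and the helper name `passage_probability`—is invented for illustration; a real system would learn its weights from historical legislative records rather than use hand-set values:

```python
import math

def passage_probability(num_sponsors, month, text_length_words):
    """Toy logistic score for a bill's chance of passing.

    The weights are hypothetical, hand-set values chosen for
    illustration only; a production system would fit them to
    historical bill outcomes.
    """
    z = (0.15 * num_sponsors           # more sponsors -> more support
         - 0.05 * abs(month - 6)       # assumed mid-session advantage
         - 0.0001 * text_length_words  # assumed penalty for long bills
         - 1.0)                        # intercept
    return 1 / (1 + math.exp(-z))      # logistic link maps score to (0, 1)

# A short bill with many sponsors scores higher than a long, thinly
# sponsored one, mirroring the variables mentioned in the text.
print(passage_probability(num_sponsors=20, month=5, text_length_words=2000))
print(passage_probability(num_sponsors=3, month=12, text_length_words=40000))
```

The point of the sketch is the shape of the model, not the numbers: a handful of cheap metadata features, combined through a logistic function, already yields a usable ranking signal.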
Algorithms are increasingly used to inform decisions about criminal
justice, child welfare, education, and other arenas. It is impossible for
citizens to see how they work or are being applied. In 2016, an
investigation by ProPublica found that a system used in sentencing and
bail decisions was biased against black people.5 The company Taser/Axon
owns 80 percent of the police body camera market in the United States, but
keeps the footage in private databases and now advertises it is developing
technology for predictive policing.6 As a private company, Taser cannot be
scrutinized according to the same public records laws and oversight
regulations.

Fake News
Fake news and the way it spreads virally via social and other media is
emerging as one of the key threats of our time. It is implicated in everything
from the manipulation of stock markets and the purchase of potentially
harmful products to the manipulation of elections, undermining a key
pillar of democracy.
In April 2017, Facebook acknowledged for the first time that
“malicious actors” used their platform during the 2016 presidential
election. The company’s security division produced a paper dividing fake
news production sources into the following four categories (the first three
are quoted directly from the paper):
• Information Operations: Actions taken by governments or organized non-state actors to distort
domestic or foreign political sentiment
• False News: Articles that purport to be factual, but which contain intentional misstatements of
fact with the intention to arouse passions, attract viewership, or deceive
• False Amplifiers: Coordinated activity by inauthentic accounts with the intent of manipulating
political discussion, e.g., by discouraging specific parties from participating in discussion7
• Disinformation: Inaccurate and/or manipulated content that is spread intentionally

Facebook has rightly integrated fake news into its cyberdefense
framework, along with prevention of malware, spam, financial scams, and
hacking. It has yet to deliver a comprehensive plan to effectively combat
it, however. Facebook continues to find itself in the eye of the storm of
fake news, with university researchers and news reports alleging the
formation of “collusion networks” within Facebook designed to create
“reputation manipulation” through the posting of large numbers of “likes”
that can be used to influence a political movement or election, for
example.8 Moreover, Facebook publicly admitted in September 2017 that
it had uncovered political ads aimed at influencing the U.S. presidential
election of 2016 that were paid for by Russian sources to the tune of
US$100,000.9
Given the limitations of how well software can understand language,
the best thing ML can do right now is to help people spot fake news and
rebut it faster. In 2016, Facebook’s director of AI
research, Yann LeCun, told journalists that ML technology that could
squash fake news “either exists or can be developed.”10 The company has
since said it has tweaked its news feed functionality to suppress fake news,
although it’s unclear to what effect.
Not long after LeCun’s comments, a group of academics, tech insiders,
and journalists launched their own project called the Fake News
Challenge, focused on developing fake news–detecting algorithms. The
first results from that effort were released in June 2017. The top three
teams were from Talos Intelligence (Cisco’s cybersecurity division),
Germany’s TU Darmstadt, and University College London. The winning
teams’ solutions may help in reining in online misinformation, but as tools
that help to speed up humans working on the problem, not as autonomous
fake news bots.

Table 22.1 AI/ESG Considerations: Social Data

Social • AI surveillance: negative and positive individual and social consequences possible
• The development of “citizen scores” vs. data privacy; authoritarianism
vs. democracy
Governance • Democratic entities and governments fighting fake news and
cyberattacks that spread misinformation and propaganda, attacking
democracy
• Authoritarian regimes deploying AI tools for control, surveillance,
cyberwarfare
CHAPTER TWENTY-THREE

The Future of Employment and the Pursuit of Life-Long Learning

Innovation vs. Employment: The Need for a Paradigm Shift
Technology visionary and investor Vinod Khosla believes that with AI,
“the vast majority of today’s jobs may be dislocated regardless of skill or
educational level. . . . [E]motional labor will remain the last bastion of
skills that machines cannot replicate at a human level.”1 Khosla argued
that medical schools, for example, should transition to “emphasizing and
teaching interpersonal and emotional skills instead of Hippocratic
reasoning.”
According to researchers from Oxford and Yale, in the next 10 years,
“AI will outperform humans in many activities including in translating
languages (by 2024), writing high-school essays (by 2026), driving a truck
(by 2027), working in retail (by 2031), and working as a surgeon (by
2053). Researchers believe there is a 50% chance of AI outperforming
humans in all tasks in 45 years and of automating all human jobs in 120
years, with Asian experts expecting this to happen much sooner than North
Americans.”2 At the same time, we expect AI capabilities to get embedded
into every business within the next 15 years. In the following section, we
explore how these expectations relate to the overall discussion on
innovation and employment, and the challenges confronting corporate
leaders, boards, and society overall.
There are two extreme narratives about the technology industry that
have dominated corporate and public discussion over the past two decades.
The first talks about “technology as a bubble,” referring to Silicon Valley
as the land of “smoke and mirrors,” venture capital excess, and tech
company post-IPO underperformance. The opposite narrative worries
about the dramatic impact of technology on jobs and society, ever-faster
innovation disrupting the economic establishment, and the facilitation of
the rise of global political populism. There is truth to both narratives,
especially if they are applied to different economic sectors and
geographical areas.
Tim O’Reilly, a leading data evangelist and investor, has shared his
conversation with an Amazon executive who disclosed that robots have
allowed the company to bring more products into its warehouses and to
speed up picking, so that more packages move through fulfillment.
However, and in the face of this increase in the use of robotics and AI at
Amazon, in the summer of 2017, Amazon announced that it expected to
hire another 100,000 workers over the next 18 months, many of them in its
fulfillment centers, and opened a bid process to major cities around the
country for the building of a second major U.S. headquarters.3 This
increased workforce does not include people working in actual delivery.
Amazon Flex, Amazon’s peer-to-peer delivery service, is growing so
rapidly that it may overtake Lyft as the second-largest source of
employment for on-demand drivers. These numbers also do not include
employment at the over 100,000 small companies that use Amazon’s
platform to sell and distribute their goods. This is an excellent example of
how growth in productivity—even when strictly limited to one company—can
create new and previously unheard-of work opportunities. It also
demonstrates that thinking only in use cases limits innovation; a broader
strategic vision is needed to implement beneficial AI.
Technology still represents a small portion of most economies, and yet
productivity has been falling in the most industrialized countries. In 2015,
the UK chancellor, George Osborne, made boosting UK productivity a
priority. In an interview with Gold Investor, Alan Greenspan blamed the
aging of the baby boomers for falling productivity. In 2015, Janet Yellen
explained that lack of wage growth had to do with markedly lower
productivity figures. Many economists claim that declining overall
productivity is directly tied to the decline in manufacturing productivity,
while service productivity is overstated because of off-hours work on
mobile devices. While we do not claim to be economists and the subject is
beyond the scope of this book, we believe that even in view of justified
concerns about the impact of data-driven technologies on labor, we still
lack technology innovation. With more innovation, we expect to see
greater productivity and the creation of new jobs—many of which we
cannot even imagine today—as the simple previous example of Amazon
illustrates.
It is not technology that eliminates jobs. It is the lack of a strategic
vision and incentive plan for innovation that eliminates jobs when
companies that are not prepared for technological change and opportunity
lose to their better-prepared competition. Such uninformed and unprepared
companies look at technology as a narrow and simple short-term cost- and
labor-cutting opportunity. Corporate inflexibility in the face of fast-
moving disruptors is just a symptom of a broader issue: a lack of forward-
looking governance that deprioritizes strategic thinking, diverse
leadership and workforces, and the mechanisms that encourage updating
frameworks and systems to run top-line experiments.
This leads unprepared companies to adopt a zero-sum mentality. It happens
most often when companies in more traditional industries that have not
understood how to incorporate new technologies into their repertoire, and
that are losing market share or posting poor results, blame digital
disruptors or regulators, or both, instead of conducting a clear strategic
self-examination.
The Internet has become the largest distribution mechanism in some of
the faster-moving parts of the economy, like media, retail, consumer
electronics, software, and parts of the financial sector. These sectors are
experiencing declining prices, fierce competition, and the creation of
diversified products and services to win over customers.
In some other, slower-moving economic areas—like health care and
education—prices are rising. Technology has yet to have its full impact on
these sectors. In the U.S., while nearly everyone can afford a phone, not
everyone can afford an Ivy League education or the best surgery. In
Europe, taxpayers fund health care and education, creating a system that is
struggling under the challenges and pressures of large-scale migration and
demographic changes. As The Economist has brilliantly stated, when
education fails to keep pace with technology, the result is inequality.4 This
is true for every country and every economy.
In the long term, social safety structures need to respond to changes,
especially with respect to health care and education, or a guaranteed basic
income, something that is increasingly discussed. Indeed, countries like
Switzerland and Finland (and even India) are actively considering such
measures. Some researchers and business leaders advocate taxing AI
systems as AI becomes more and more involved in wealth creation. It
is not too soon for a social debate on how to share the economic benefits
of AI. In traditional societies, children have supported their aging
parents. We may be heading into a future where AI will support us, as the
creators of this new form of intelligence (and maybe even life).
It isn’t surprising that the concept of a universal basic income has
supporters not only in many countries but also across the political
spectrum, from Silicon Valley capitalists to left-leaning academics, though
the devil may be in the details in terms of future implementation. John
Maynard Keynes famously predicted that the technology trends of his time
would leave humans only 15 hours of work each week. And that was in
1930.5

AI’s Impact on Employment: What Corporate Boards and Executives Should Know
There are several key aspects of implementing AI in a company that
both the top management and the board should be aware of and
understand. We consider them below.

AI Will Replace Tasks, Not Jobs . . . for Now


AI is likely to augment human jobs and replace certain tasks in the
near term. At the same time, it will create new jobs. However, these new
jobs will most probably be very different from those lost to the advance of
these technologies. AI will slowly move into companies and industries,
changing employment patterns gradually in some places and more rapidly
in others.
The impact of these changes will range from limited replacement or
augmentation to the complete disappearance of entire job families.
For example, a machine can easily produce a report on the state of the
economy from Internet sources, thus obviating the work of traditional
entry-level analysts and associates in consulting and legal businesses.
Although most of a lawyer’s job is not yet automated, ML is already
crawling into legal information extraction and modeling. Soon enough,
several occupations, from radiologists to truck drivers, will be affected.

The Most Secure Jobs Will Involve Perpetual Learning
The most secure jobs will probably be those that are linked to
continual learning and self-education. In the past, learning occurred at
corporate training sessions and conferences, and through reading expert literature.
Today and in the future, most advanced companies will incorporate
insights from their data into workforce education.
Openness to using and learning from data will be crucial, as most
workers today have very little flexibility to adjust how they work and what
they do in the face of new insights and information. Once again, a data-
driven culture will be one of the driving factors of not only healthy but
also thriving workplaces.

The Size and Location of Jobs Will Change


AI will impact where human employees are located, as well as the size
of teams. Today, organizations grow organically or via M&A across
geographies, adding more and more employees to fulfill tasks and
management layers to supervise growing workforces. As AI grows,
however, adding more and more people will no longer be required. This has
already been observed in companies implementing so-called over-the-top
business models—Internet players. Compared to traditional businesses,
Internet companies have smaller numbers of employees, since part of the
work is being done by technology. In the future, we may witness the rise
of smaller organizations and more natural team sizes, enabling company
leaders to know every human employee, if not every AI agent supporting
them.
The creation of efficiently outsourced labor markets enabled by AI,
and most probably built on blockchain infrastructure, may change the
requirements for corporate and business leaders. Added to this trend is the
fact that the age of the Internet has enabled small practitioners to be self-
employed or to start small businesses as product and service providers to
larger companies, and we may be witnessing the rise of a more
decentralized but interconnected workforce ecosystem including such
smaller enterprises in the future.

New Jobs Will Emerge


The future of and the path to AI depend on the availability of human
teachers. Smart community leaders and CEOs will start refocusing part of
their workforce to serve the emerging AI economy. The need to train
software creates employment opportunities. Too often, data are unlabeled,
and too many industries have made too little progress in digitization to
benefit from new ML approaches. Categorization of handwritten notes,
assessment of customer complaints to train chatbots, and tagging of
images to optimize software for self-driving cars, for example, do not
require sophisticated education and computer skills. Educated leaders will
embrace the opportunity and even export their knowledge beyond their
businesses and communities. We expect substantial VC funds to enter this
area of “training-as-a-service.” If governments are smart, they too will
support the emerging needs of feeding AI with data and tools.
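The labeling work described above is precisely what supervised ML consumes: a human-assigned category attached to each raw example. As a toy sketch (all data, category names, and function names here are hypothetical, invented for illustration rather than taken from any real system), even a handful of human-labeled complaints lets a simple program start sorting new ones:

```python
from collections import Counter

# Hypothetical hand-labeled training data: the kind of labeling work
# described above (assessing customer complaints to train a chatbot).
labeled_complaints = [
    ("my package never arrived", "shipping"),
    ("the parcel is two weeks late", "shipping"),
    ("I was charged twice for one order", "billing"),
    ("the refund has not reached my card", "billing"),
]

def train(examples):
    """Count how often each word co-occurs with each human-assigned label."""
    word_label_counts = Counter()
    for text, label in examples:
        for word in text.lower().split():
            word_label_counts[(word, label)] += 1
    return word_label_counts

def classify(model, text, labels=("shipping", "billing")):
    """Pick the label whose training vocabulary best overlaps the new text.
    Counter returns 0 for unseen (word, label) pairs, so unknown words
    simply contribute nothing to either score."""
    scores = {
        label: sum(model[(word, label)] for word in text.lower().split())
        for label in labels
    }
    return max(scores, key=scores.get)

model = train(labeled_complaints)
print(classify(model, "my parcel arrived late"))    # → shipping
print(classify(model, "please refund the charge"))  # → billing
```

The point of the sketch is not the (deliberately crude) algorithm but the dependency it makes visible: without the human-supplied labels in `labeled_complaints`, there is nothing for the software to learn from—which is exactly why data labeling creates employment.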

AI Will Also Have a Positive Impact on Living Standards
Today most people work to be able to cover basic life expenses. We
might expect that AI systems will take over many tasks previously
requiring human involvement, resulting in lower costs of products and
services. This might effectively make everyone a little richer. At the same
time, as work has intrinsic value, new questions of self-fulfillment will
arise. For people relying on self-education and life-long learning, this
future will not appear threatening. As long as wealth creation and
distribution remain uneven across geographies and industries, humans
will have plenty of problems to solve.
AI is often framed just as a threat to jobs. It should also be viewed as a
factor in improving living standards and freeing time to solve problems on
which no one was focused before.
Even if AI’s impacts on the workforce do not occur too rapidly,
companies and their leaders must start to incorporate AI into their business
strategies and their sustainability and ethical decision-making frameworks.

AI and Life-Long Education


We do not have a crystal ball to predict when and how AI will have
specific impacts on the labor market and the world of business, although
there are several excellent studies on this topic from leading consulting
firms like McKinsey and BCG.
We do know, however, that schools and universities need to prepare
the next generations of students for a world of cooperation and coexistence
with intelligent machines. Already today, learning how to learn is more
important than mastering particular skills. Being prepared for life-long
education is a necessity, though today’s reality falls far short in addressing
this. The Consortium for Advancing Adult Learning & Development
(CAALD), a group of learning authorities whose members include
researchers, corporate and non-profit leaders, and experts from leading
consultancies, was created to assess potential solutions to the changing
state of the workplace.6 It is clear today that STEM subjects will remain
critical, but creative thinking, resilience, collaboration, and emotional
intelligence will become increasingly important. The number of non-
standard and highly complex jobs will rise. Work will be about adapting to
constantly changing challenges. Robots and AI software will be colleagues
and should be considered as team members instead of threats.
Assessments of students’ performance will change to mirror these
“soft” skills. Handling of data and privacy will go hand in hand with
teaching of human-centric and/or AI-centric product design. Personalized
learning might replace textbooks to a large extent. Implementation of new
education methodologies might lead to new geographical hubs of
excellence but, sadly, also to new forms of inequality. If 21st-century
economies are to prevent the creation of a massive underclass,
policymakers, companies, and communities urgently need to work together
to help their citizens learn while they are employed. Adult education and
new degrees will emerge, supported by virtual reality and AI, and
distributed via online channels.

Table 23.1 AI ESG Considerations: Labor, Employment, and Life-Long Learning

Social
• AI currently replacing tasks; will soon replace jobs
• But new jobs will emerge in which AI will be part of a team
• In the age of AI, societies need a paradigm shift to manage labor
markets and employment
• Deploy life-long learning with changes in educational/labor markets and
public policy

Governance
• Need to deploy a different mix of social safety net policies at the
governmental and business levels

Singapore has individual learning accounts, providing money to
everyone over 25 to spend on any of 500 approved courses. Companies
may want to introduce a similar practice. We are waiting for a day when
all traditional companies change specifications for the role of chief human
resources officer. This position—which could become second to the CEO
position—will focus mostly on strategy, education, and talent
management, along with compensation and labor relations.
Human teachers will always be in high demand to ensure quality
education. At the same time, AI promises to improve how education is
delivered at all levels, and how it is structured, especially by providing
personalization at scale.
CHAPTER TWENTY-FOUR

Ethical Design and Social Governance

In this section, we review some of the critical ethical and social
governance themes that will provide for a beneficial impact of AI on
democracy, governance, and big data (and its use and misuse in
society). We bring the dual perspective of corporate executives and
governance consultants who have worked in and for boards of directors
and c-suites, as well as that of citizens of democratic countries with
direct past experience of living within authoritarian regimes.1

Institutional AI Ethics Committees


The previously discussed full-stack AI companies should voluntarily
decide to create ethics committees for the design of a safe AI, and
implement governance mechanisms to make their work transparent and
accessible. If they do not do so voluntarily, they will and should expect to
be met with regulatory requirements, as the socio-political impacts of AI
go well beyond any company’s current social license to operate. There is a
directly applicable historical analogy to such a voluntary effort. In the
1980s, the top defense contractors in the U.S. at the time, who were under
intense scrutiny for corruption and fraud, voluntarily started to build
internal ethics and compliance programs to prevent government from
adding more regulatory oversight.2 Nomination to these voluntary ethics
committees should undergo scrutiny similar to what corporate boards do
when they hire a new non-executive director. These committees should be
as diverse as possible, and should include seasoned and expert
multidisciplinary executives who do not necessarily have a pure
technology background but bring critically needed perspectives to these
issues.

Scientific Institutions as Data Custodians


Leading scientific institutions could act as trustees of the data and
algorithms that might impact how democracy is being operationalized in a
country. This would impose higher levels of governance within such
institutions, including a high-functioning board of trustees or directors, the
implementation of a code of conduct for anyone with access to (sensitive)
data and algorithms—a kind of Hippocratic Oath for IT professionals—
and a chief technology and risk executive to act as the manager and
facilitator of all things data-governance related. The Algorithmic Justice
League (ajlunited.org), for example, is developing new ways to report bias
in companies’ software, and offers to test technology with diverse
groups of users. Researchers from Data & Society (datasociety.net), the
National Academy of Engineering, several universities, and Microsoft
have also proposed ethical guidelines for organizations doing research
with big data.

Adopting a Digital Agenda


For corporate and social governance of AI to work effectively, there
needs to be a digital agenda that is transversal and collaborative between
governments, the private sector, academia, and NGOs, to lay the
foundation for new jobs and the future of the digital society. The good
news is that such work has started in many countries, including the U.S.
(under the Obama administration), New Zealand, and the UK, among
others. The bad news is that some of these efforts have been stalled (in the
U.S., for example) and need to be accelerated, as the growth in AI and
related technologies is not waiting.

Creating a Participatory Platform


For AI governance to work—both within companies and within
societies—it is essential that there be a participatory platform that
simplifies how people might become self-employed, set up new temporary
projects, find business partners and beta-users, market products and
services worldwide, etc.—in other words, become part of growing the
economy. To complement this, municipalities of all sizes should create
centers focused on the digital capabilities of their communities. Within
these digital centers, diverse industries should find areas of overlapping
issues that are common to all participants of the local ecosystem.

New Educational Concepts


To ensure that the digital society becomes a success, we need
completely new educational concepts. This approach needs to accentuate
thinking, creativity, inventiveness, and entrepreneurship because the
repetitive, standardized tasks of many current jobs will soon all be done
by robots and algorithms. This new education must also provide a keen
exposition of how to use digital technologies in a responsible and critical
manner.

Innovation Incentives
To provide for a more robust social and corporate governance of new
technology, government and business institutions could create more
concrete forms of competition to provide additional incentives for
innovation, help increase public visibility of the socio-political impact of
new technologies, and generate momentum for a more participatory digital
society. For example, city hackathons focusing on quality-of-life
improvements via technology should become a regular practice, sponsored
by a mix of research institutions, corporations, startups, municipalities, and
individual contributors. This could include local open-data access to
ensure a better collaboration between different stakeholders.

Real-Time Data Metrics


Municipalities, schools, and companies should make real-time data
measurements available to all who are capable of contributing to communities,
generating jobs, and complying with larger societal/national goals like
mitigating the negative impacts of climate change, introducing better
schools, and reducing the costs of medical services. At the same time,
municipalities should generate and share data that local companies and
individuals can use—for example, information on the state of the streets
for developing 3D maps for self-driving cars and information on pollution
generated by sensors, which might be placed on the roof of garbage
collecting vehicles or school buses.
We believe that the design of beneficial and safe AI can only be
achieved by embracing a different form of leadership, one that empowers
multidisciplinary groups to take on and implement collective responsibility
for tackling specific problems. We know this may be a tall order in some
cultures (both corporate and national), but we are optimistic.

Table 24.1 AI/ESG Considerations: Ethical Design and Social Governance

Social
• Should scientific institutions become custodians of privacy?
• Cross-functional, interdisciplinary teams develop/assess social impact of
AI products and services
• New educational services to address labor paradigm shift

Governance
• Organizations deploy ethical decision-making governance, including
deploying ethics advisory boards/committees
• Governments incentivize AI innovation
• Organizations deploy AI data metrics
• Every organization must adopt a data governance framework
applicable/customized to its domain and overseen by board/equivalent
governance body

Our optimism is fueled partly by what happened when the Trump
administration pulled out of the Paris Climate Accords. Instead of a
negative or dejected withdrawal from the principles of the accords, a vast
array of climate stakeholders—from corporations to municipalities, from
startups to state governments—pushed back immediately and vowed to
collaborate to achieve these goals even in the absence of the federal
government.
PART 6

The Governance of AI—A Practical Roadmap for Business

In previous chapters, we provided background information on what AI
technology implies, why it has recently come of age, and what the
necessary preconditions are for organizations to start working with AI in
just about any business environment. Throughout the book, we also
identified some of the key ESG issues that AI touches on or integrates
with.
Now in Part 6, we bring all of these issues together in a focused
consideration of what is most important to the board—the governance
authority and oversight unit within an organization—which, at the end of
the day, is responsible for the development of strategy, value, resilience,
and long-term sustainability.
We first turn to what members of the traditional company board should
do to transform their fear of AI into value, and then conclude in our final
chapter with a seven-phase approach to successfully addressing AI in the
boardroom.
CHAPTER TWENTY-FIVE

Traditional Boards’ Readiness to Address AI—Transforming Fear into Value

The Internet has already affected the 20 percent of the economy that is
primarily digital, like entertainment, communications, and finance. We are
now moving rapidly into the era of the Internet of everything and AI where
because of the cloud, new connectivity standards, powerful semiconductor
chips, and advanced algorithms, all industries are becoming
digitized, even the most traditional, such as public services, energy, health
care, and manufacturing.
On the one hand, there’s an enormous potential for innovation. But on
the other, these emerging technologies raise concerns, confusion, and all
kinds of uncertainty around security, data ownership and protection, and
the new competitive landscape, which might completely disrupt
established profit pools.
Investors and boards have traditionally been single-mindedly obsessed
with quarterly results. In spite of this continuing pressure, their
focus is now also shifting to other serious strategic topics, such as the need
to transform their industry from within, to adapt to new business models,
and to protect customer relationships, especially from Internet disruptors.
Business sustainability is no longer about complementing the corporate
roadmap with the skills of a few acquired startups, or assessing
environmental and labor policies at suppliers, or complying with European
data-protection regulations. It is about the profound impact emerging
technologies are having on profits, succession planning, workforce
training, and the possible automation of so many processes.

What It Takes to Transform Successfully


In their study “Turning Around the Successful Company,” the Boston
Consulting Group concludes that there are certain common characteristics
to businesses that are succeeding at transforming themselves
technologically. They are (and we quote):
• External orientation—actively striving to pick up and change signals from the outside
environment
• Healthy paranoia—lack of arrogance, and awareness of competitive vulnerability even when
short-term results are good
• Long-term perspective—focus on sustainable competitiveness
• Resource fluidity—ability and willingness to shift resources as needed.1

Visionary boards adapt their agendas and spend more time on technology,
IT transformation, data security, and digital customer interfaces. There are
still challenges, since governance frameworks and board refreshment do
not change at the same pace as the business environment. Among the most
important challenges to transformation are the following:
• Current board and committee structures have been shaped by regulatory demands. They are
implemented to ensure compliance. This results in practices based on past insights; linked to
established regulations; and not focused on the needs, challenges, and creative possibilities of
the present and future.2
• Regulatory frameworks have always trailed technological change, leaving many grey areas
around labor relationships, product liability, data protection, customer-centricity, and
workplace safety. These grey areas become even more urgent to address with the uncertainty
brought about by the rise of AI and related technologies.
• According to McKinsey, as of 2016, only 5 percent of corporate boards in North America had
a technology committee.3 Our interviews with two institutional shareholders and three large
headhunters showed that there are no company boards with a technology committee in
Switzerland, France, or Germany. In the UK, some boards have established advisory councils
to work on topics such as cybersecurity, big data, e-commerce, social media, and IoT. Only 13
percent of U.S. Fortune 100 boards can be called truly digital in the sense that they have a
resident director with digital expertise, and this includes the boards of “technology first”
companies like Amazon and Microsoft. In other words, even these technologically advanced
companies have room for improvement when it comes to addressing the digital challenge.
• We believe that the pressure for the appointment of digitally literate board members will
increasingly come from institutional shareholders.
• Companies and their boards are still lacking an understanding of the value of their data and the
eventual transformational influence of this data. The only way data make it onto the board
agenda is when a negative event like a cybersecurity breach occurs. Data, and its status and
governance in a given company, should be instead a positive and standing topic on all board
agendas and a cornerstone of competitive analysis.
• Discussions about AI are still mainly taking place at technology companies and research
centers. Google and Microsoft have created ethics advisory boards focused on ensuring the
design of safe and beneficial AI. Several high-tech giants support initiatives such as OpenAI.
There is a lack of involvement by traditional industries in the discussion on AI, which is both
regrettable and complacent, taking into account the unrelenting pace of change and the fact
that they will be disrupted by these technologies sooner or later.
• Successful boards and c-suites will also integrate their consideration of AI into their enterprise
risk management—however, those without a disciplined and forward-looking approach to risk
will be in a poor position to assess the risk and opportunity presented by AI.
• Last but not least, businesses will start to design their products and services with AI in mind.
Boards rarely discuss questions of digital design, real-time data to create feedback loops
between products, customer care, and development functions. We believe this will change
within the next five years.

Five of the top ten Fortune 500 companies have declared their strategy to
be “AI first,” and they have been aggressively investing in AI startups and
R&D, attracting top talent from academia, and changing the way in which
research and development collaborate with product developers. If they
want to survive in the short term and thrive in the long term, traditional
businesses must study the strategies and operational practices of these five
leading companies: Alphabet, Apple, Microsoft, Facebook, and Amazon.
Every traditional and nontraditional company today must understand that it
is also a technology enterprise. The question of what strategic leadership is
in the era of nascent AI has never been more acutely important. Visionary
leaders understand that the future is more important than the past, as they
can still do something about the future today. Knowledge about AI is
crucial, as this time companies and societies cannot rely on simply
learning from past mistakes. It is much better to prepare, do AI safety
research, enable ethical design guardrails, and get partnerships right the
first time around, as there may not be more than a couple of chances to
correct the course.

Table 25.1 Key Elements of an AI Evaluation, Strategy, and Governance Formulation

• Competitive Landscape: Traditional and Disruptive
• Current Products and Services Interface
• Big Data and AI Interface
• Labor Force Impact
• Corporate Culture Impact
• Ethical Analysis
• M&A Opportunities
• R&D Opportunities
• Investor Impact/Needs
• Customer Impact/Needs
• Other Stakeholder Impacts
• Enterprise Risk Management Integration
• Strategic Risk and Opportunity Evaluation
• Reputational (Including Reputation Risk) Impact
• Sustainability, Environmental, Climate Impact
• Legal and Regulatory Assessment
• Supply Chain/Third-Party Implications

So, what do visionary boards do and what should all boards do
regarding AI? First and foremost, every board should be willing, able, and
eager to learn about this topic. There are countless courses at leading
universities, countless resources from leading management consulting
firms, and myriad consultants who can sell AI strategies and the pros and
cons of being a leader or a follower in implementing AI. For any learning
and discussion of AI to be fruitful, it must be accompanied or even
preceded by an in-depth analysis and understanding of the existing data
that lie within a company. Fear and loathing have no place in such
discussion and analysis.
Above, in Table 25.1, we outline some of the key areas in which a
business needs to understand the impact of its AI strategy. When
businesses are first tackling this issue, we recommend that they consider
self-evaluating (if they have internal expertise) or, preferably, doing a
third-party evaluation of these areas. It will only be when such an
evaluation is done that a proper gap analysis can be performed to develop
an appropriate and customized AI strategy for the business.4 We discuss
many of these topics in our last chapter.
CHAPTER TWENTY-SIX

Discussing AI in the Boardroom

AI governance, just like the risk and opportunity governance of any key
strategic issue, requires synchronicity between the board’s strategic
oversight function, the executive team’s responsibility for developing and
implementing strategy, and the implementation of such strategy by key
cross-disciplinary and multifunctional teams of experts.1 This is what we
refer to as the AI governance, risk, and ethics triangle, which is depicted in
Figure 26.1 below.
Figure 26.1 The AI Governance Triangle (© Andrea Bonime-Blanc 2018. All Rights Reserved.)

Our key findings relevant to the work of corporate boards are the
following:
• AI is not going away. Like mobile and cloud, it will become pervasive and omnipresent,
powering applications, services, and operations across industrial verticals and business
functions.
• Traditional businesses should not introduce AI for the sake of AI or because it seems like a
“cool” thing to do. Technology is a tool to power a business, creating operational savings and
business opportunities. A great place to start talking about AI at any industry or business
function is via the formulation of a data value strategy to complement a company’s
competitive strategy.
• Data is one of the most important assets of any business. AI only works well when a company
can clearly (1) state a problem that it wants to solve with data; (2) identify a part of its
operations that it wants to automate for better efficiency and quality; or (3) specify insights it
currently does not have, or struggles to get in a cost-efficient way.
• There are, of course, different AI schools of thought and methodologies. The choice of
technology frameworks a business needs depends heavily on the particulars of that business—
its sector, footprint, strategy, regulatory framework, operational practices, and areas it has
targeted to invest in.
• A company’s and its board’s lack of understanding of the differences between cognitive, ML,
DL, and IT architecture that enable collection and preparation of data stands in the way of the
successful development and implementation of a data value strategy that includes the
introduction of AI into the business.
• Confusion around the AI ecosystem impacts decision-making on why one vendor might be a
better fit than another, what talent or technology should be acquired, and what is a realistic
assessment about the scope of work, cost of delivery, and the efforts that will be required from
the organization to transform existing practices.
• It is becoming an increasingly obvious fact that certain mission-critical business and
compliance problems cannot and will not be properly solved without AI, including notably, for
example, cybersecurity.
• The regulatory environment around AI is in flux. Technology once again is outpacing and
outflanking legal and regulatory frameworks, creating confusion as well as opportunity.
• The technology industry would greatly benefit from including traditional businesses into its
discussions of AI design frameworks, ethics, and compliance. At the same time, traditional
businesses should view AI as an integral part of their sustainability, ethics, and strategy
frameworks.
• Companies must find talent that understands technology, but also has a keen ability to work
cross-functionally. Business development executives likewise can rise to new prominence in
companies that embrace emerging technologies such as AI and blockchain.

What Do Corporate Directors Ask?


We interviewed 60 directors from different private and public boards
in the U.S., Europe, and Asia. People who talked to us were pragmatic,
humble, and candid. In spite of their different backgrounds, industries, and
exposure to technology topics, most of these directors asked five key
questions, which can be generalized as follows:
1. “Can IBM deliver on time? Don’t they over-promise?”
2. “Why should we give Watson our data and teach it about our business? Will it not backfire
one day?”
3. “Our IT is really outdated. How should we start, if we want to benefit from AI in our
company?”
4. “Is there any link between AI and cybersecurity?”
5. “We have always put employees first. How can we adopt something that might put them out of
their jobs?”

Every technology provider should have good answers to these questions
when they are pitching AI products and services to businesses. At the end
of the day, smart board directors understand that AI is not just a part of
their business. It is a business, and often someone else’s business.

Practical Questions for the Board


Following is a phased guide to the key elements of effective AI
governance, with questions that any board should work through to develop
a practical and ultimately fruitful conversation on AI. We have divided
these questions into eight phases or elements through which the board and
management should travel to reach the right answers, strategies, and
tactics on how to deploy AI into their business.
Phase I: Whole Board Strategic Discussion on a Data Value Strategy
We believe that the whole board should participate in a strategic
discussion on the competitiveness of the business, with data value strategy
complementing the existing frameworks. The following are the key
questions to be addressed:

Figure 26.2 Elements of Effective AI Governance.

1. What unique data do we currently hold on our customers, suppliers, and partners? What is their
strategic value? Which company among our current suppliers and competitors would like to
have our data and why?
2. Do we use and should we use real-time (streaming) data to achieve competitive advantage in
strategic positioning, operations, and regulatory reporting?
3. What data does our company use to define a wide variety of activities including strategic
planning, pricing strategies, customer care practices, supply chain management, cybersecurity,
key risks, M&A track record, and business development?
4. What data would the company like to use or have to optimize strategic planning, business
development, and operations?
5. What data assets are most vulnerable from a cybersecurity point of view? What systems
incorporate these data? Where are the cyber-vulnerable “crown jewels”?
6. What regulations are in place in our businesses concerning data collection, insight generation,
pricing based on behavioral insights, and compliance with cybersecurity laws?
7. Who is using data in our company to make decisions?
8. Who is working with data to support decision-making, insight generation, etc.?
9. What should be changed to enable more functions, business teams, and employees to apply
data-driven frameworks for better decision-making and insight generation?
10. If we apply a price tag to our data assets, or part of our data, what would it be?

Phase II: Assess Current Technology Environment


Once the data value strategy is clear, it is appropriate to ask whether
the current technology environment supports strategy implementation.
This high-level board question might be informed by work at the
technology committee level (if one exists), which might address questions
such as the following:
1. What kind of systems do we have in place to hold our most important data assets? Do we have
a complete inventory of systems and databases? Do we have a cybersecurity profile of our
current systems?
2. Do we have frameworks and systems in place supporting streaming of real-time data, and
instant decision-making based on streamed real-time data?
3. Can our CIO and/or CTO explain what environment is desirable to implement our data-value
strategy and what the gaps are between the status quo and the target architecture?
4. Do our CIO and/or CTO have a transformation plan to get to the target IT environment to
implement our data-value strategy?
5. Do business leaders understand the IT transformation plan, and how they might contribute to
ensuring the better use of optimized systems, new governance practices on data sharing and
analyses, and piloting of data-driven projects?
6. Do we have sufficient skills at the company to implement the transformation plan?
7. What partners and vendors might be helpful in supporting our transformational plan? Our new
data-system architecture? What criteria are we applying to identify and hire new vendors
and/or ensure that our traditional vendors have sufficient skills to implement new tasks?
8. What financial reserves do we need to maintain to support our data-value strategy and the IT
transformation to implement it?

Phase III: Interface with Enterprise and Strategic Risk Management


As with any new digital or other technology being considered, boards
and c-suites need to understand how AI creates risk and opportunity
related to the existing risk profile of the organization. It is useful to think
of AI as something that could touch many—maybe every—aspect of a
company’s risk profile (visualized in Figure 26.3). Some of the critical
questions to be addressed would include:
Figure 26.3 AI and Enterprise Risk Management (© Andrea Bonime-Blanc 2018. All Rights
Reserved.)

1. Has the company’s risk-management function interfaced with other relevant functions—like
technology and strategy—and run scenarios to understand the impact of not engaging AI in the
business, as well as scenarios in which AI is incorporated into products and services?
2. What risk-mitigation practices do we want to apply to ensure our transformational path is
reliable and realistic?
3. Has the company engaged in competitive landscape risk and opportunity benchmarking?
4. Has the legal department, together with risk and technology, assessed the regulatory landscape
nationally and internationally, including its costs and benefits?

Phase IV: Creating a Data Governance Framework


Once the strategy and the target vs. the current IT environment to
implement it have become clear, there is a need to define and implement a
data governance framework. This framework should be discussed at a high
level within the board. Parts of it should be linked to existing
cybersecurity risk-management plans, regulatory reporting, and
compliance data, and its operationalization, supported by actionable
plans, should be in the hands of a talented cross-disciplinary team of
company employees, preferably led by one of their own.

Phase V: Overseeing Key Performance Indicators


A good management practice is to agree on a set of key performance
indicators for implementing the data-value strategy. It is the
responsibility of the board of directors to supervise and monitor these
indicators at a strategic level. These KPIs will be individual to every
company, as the level of digitization, the standard of data infrastructure,
analytical capabilities within organizational units, and geographic
footprints vary from business to business.
Figure 26.4 From Sit Back to Lean-In AI Governance (© Andrea Bonime-Blanc 2018. All Rights
Reserved.)

Phase VI: Talent Review and Planning


Again, while this is a management-level task, it is essential that a
board (especially its technology committee, if it exists) has a role in
overseeing and supervising the proper mix of IT and technology talent,
including succession planning.

Phase VII: Types of AI Technology Needed


Only after a board has achieved a certain level of clarity on the role
that data play or should play in a company’s strategic and operational
decision-making can that business—its management and its board—start
asking questions about what kind of AI technologies could support the
plan.
In this context, a board might want to understand and get answers to
the following key questions:
1. Where does our industry stand in the adoption of AI? What are the most common use-cases
and applications to improve the top and bottom line?
2. Who is leading the adoption of AI in our industry? Why?
3. What AI technologies are most relevant to our industry? Why?
4. How soon can an investment in AI pay off? Is it right to ask this question at this point in time,
or should we establish a number of pilots with modest budgets to learn from the data?
5. What AI capabilities should our company own? Where should we partner, and what should we
outsource?
6. What are the possibilities and opportunities for co-opetition scenarios with our partners and
suppliers?

Phase VIII: Formulating Successful Internal Workplace Strategies


AI technologies will bring not only benefits but also challenges to
workplace environments. Businesses need to formulate internal strategies
to adapt to a shifting competitive and workplace environment. Jobs will
need to be redesigned, and there will be a need to invest in the workforce
to leverage new skills. New safety and ethics procedures will need to be
developed. Boards need to understand and supervise these shifts and
encourage the company’s CEO and c-suite to introduce target systems and
compensation plans supporting this transformation.
To ensure that a business is functioning properly, the board has an
obligation to understand whether the company has an inventory of the
most important tasks to be undertaken instead of an inventory of old and
tired job descriptions that may or may not be applicable to the present and
the future. AI will offer real opportunities for more meaningful work for
many employees and it is the responsibility of management and the board
to enable humans to dedicate more time to collaboration, working with
customers, doing research and business development, and supporting
communities, instead of merely undertaking the repetitive tasks in which
they may be engaging today and which will be replaced by AI tomorrow.
And maybe a visionary board will decide to support a grassroots
initiative within their company to enable employees to decide what part of
their work needs to be automated to ensure better workplace safety,
unleash creativity, and reduce failures. If and when top management and
the board collaborate and jointly communicate the advantages that AI
might bring to the workplace, maybe they can be the harbingers of
opportunity, innovation, and creativity instead of purveyors of fear of the
unknown that ignorance of AI might bring.
Notes

Introduction
1. Andrew Buncombe, “AI System That Correctly Predicted Last 3 U.S. Elections Says Donald
Trump Will Win,” The Independent, October 28, 2016.
2. Kate Devlin, “Living with Robots,” The Exponential View Podcast, February 25, 2017.
3. Jeff John Roberts, “Amazon Argues Free Speech in Alexa Murder Case,” Fortune, February
23, 2017.
4. “AI-Driven Facial Recognition Is Coming and Brings Big Ethics and Privacy Concerns,” CB
Insights, September 13, 2017.
5. Pumulo Sikaneta, “AI Won’t Go Anywhere Unless It Has Empathy,” Venturebeat, September
18, 2017.
6. “The 2016 AI Recap: Startups See Record High in Deals and Funding,” CB Insights, January
19, 2017.
7. Erin Griffith, “Google Is on the Prowl for Cloud and AI Deals in 2017,” Fortune, February 8,
2017.
8. Erin Griffith, “It’s Time to Take AI Seriously,” Fortune, February 17, 2017.
9. Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” revised version of a paper
published in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in AI, Vol.
2, ed. 1, Institute of Advanced Studies in Systems Research and Cybernetics, 2003, 12–17.
10. Rory Cellan-Jones, “Stephen Hawking Warns AI Could End Mankind,” bbc.com,
December 2, 2014.
11. Kevin Rawlinson, “Microsoft’s Bill Gates Insists AI Is a Threat,” bbc.com, January 29,
2015.
12. Matt McFarland, “Elon Musk: ‘With AI We Are Summoning the Demon,’” The Washington
Post, October 24, 2014.
13. Danny Hillis, “Back to the Future,” TED, 1994, posted 2012.
14. Tim Adams, “AI: ‘We’re Like Children Playing with a Bomb,’” The Guardian, June 12,
2016.
15. Bostrom, “Ethical Issues in Advanced AI.”
16. Ibid.
17. “Five Distractions in Thinking about AI,” Quamproxime.com, March 25, 2017.
18. Daniel Boffey, “Robots Could Destabilise World Through War and Unemployment, Says
UN,” The Guardian, September 27, 2017.
19. We define “ESG” as those environmental, social, and governance issues that (1) are or
should be part of the portfolio of any organization (whether private, public, non-profit, or
academic); (2) should be considered in relationships with key stakeholders (employees, customers,
regulators, community); and (3) may have negative (risk) or positive (opportunity) financial or
reputational impacts on the organization.
20. Andrea Bonime-Blanc, The Reputation Risk Handbook: Surviving and Thriving in the Age
of Hyper-Transparency, Greenleaf/Routledge, 2014.
Part I: The Past, Present, and Future of AI
1. Fei-Fei Li, Frank Chen, and Sonal Chokshi, “When Humanity Meets AI,” a16z Podcast, June
29, 2016.
2. Zachary C. Lipton, “The AI Misinformation Epidemic,” approximatelycorrect.com, March
28, 2017.
3. Ibid.
4. “Five Distractions in Thinking About AI,” Blog.fastforwardlabs.com, March 22, 2017.

Chapter 1: A Brief History of AI


1. Joshua Brustein, “Inside the 20-Year Quest to Build Computers That Play Poker,”
Bloomberg, January 31, 2017.
2. “Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence,”
Stanford Report, September 2016, 51.

Chapter 2: AI Social-Science Concepts


1. “Emerging AI: 7 Industries Including Law, HR, Travel and Media Where AI Is Making an
Impact,” CB Insights, May 3, 2017.
2. Murray Shanahan, The Technological Singularity, pos. 309 (e-book).
3. “Artificial General Intelligence: What Vicarious, Kindred, and Numenta Are Working On,”
CB Insights, April 19, 2017.
4. Zoey Chong, “Google’s AI Is No Smarter Than a 6-Year-Old, Study Says,” Cnet.com,
October 3, 2017.
5. April Joyner, “NYU’s Gary Marcus Is an Artificial Intelligence Contrarian,” technical.ly,
April 10, 2017.
6. Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11, 2017.
7. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014,
22.
8. Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, Alfred A. Knopf,
2017, pos. 5246 (e-book).
9. Tegmark, Life 3.0, pos. 5232–5246.
10. Meg Houston Maker, “AI@50. First Poll,” July 13, 2006, archived from the original on May
13, 2014.
11. Yuval Harari, “Brains, Bodies, Minds . . . and Techno-Religions,” a16z podcast, February
23, 2017.
12. Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, Oxford University
Press, 2016, pos. 1520 (e-book).
13. Tegmark, Life 3.0, pos. 5291, 5909.
14. David Deutsch, “Creative Blocks,” aeon.co, October 3, 2012.
15. Nils J. Nilsson, The Quest for AI: A History of Ideas and Achievements, Cambridge
University Press, 2010.
16. For details about intelligence and communication, see Robin Dunbar, Grooming, Gossip
and the Evolution of Language, Harvard University Press, 1996.
17. Kevin Kelly, “The AI Cargo Cult: The Myth of Superhuman AI,” backchannel.com, April
25, 2017.
18. Tegmark, Life 3.0, pos. 504.
19. Carlos E. Perez, “Deep Learning Is Splitting into Two Divergent Paths,” Medium, August
15, 2017; interview of Andrej Karpathy with Andrew Ng, http://youtube.com/watch?v=au3yw46lcg&t=10m14s.
20. Danko Nikolic, “Enrolling in Artificial Intelligence Kindergarten,” blogs.dxc.technology,
February 14, 2017.
21. John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon, “A
Proposal for the Dartmouth Summer Research Project on AI,” AI Magazine, August 31, 1955.
22. Charlie Osborne, “MIT’s AI Passes Key Turing Test: Can the New AI System Fool Your
Senses?,” Between the Lines, zdnet.com, June 13, 2016.
23. Dominique Basulto, “AI: From Turing Test to Tokyo Test,” bigthink.com, 2016.

Chapter 3: Key Cognitive and Machine Learning Approaches

1. Carlos E. Perez, “Deep Learning Playbook for Enterprise,” Intuition Machine, 2017, 13.
2. Shaunak Khire in “How IBM Is Competing with Google in AI,” Kevin McLaughlin, The
Information, February 16, 2017.
3. Michael Krigsman, “AI in Business, with Anthony Scriffignano (Dun and Bradstreet),” CXO
Talk Podcast, November 11, 2016.
4. Pedro Domingos, The Master Algorithm, Basic Books, 2015, pos. 305 (e-book).
5. Hassaan Achmed, “Is a Master Algorithm the Solution to Our Machine Learning Problems?,”
Techcrunch, January 30, 2017.
6. Jack Clark, “Adapting Ideas from Neuroscience for AI,” oreilly.com, March 28, 2017.
7. Murray Shanahan, The Technological Singularity, The MIT Press Essential Knowledge
Series, 2015, pos. 295 (e-book).
8. Steve LeVine, “Artificial Intelligence Pioneer Says We Need to Start Over,” Axios,
September 15, 2017; Carlos E. Perez, “Why We Should Be Deeply Suspicious of Back
Propagation,” Medium, September 17, 2017.
9. James Somers, “Is AI Riding a One-Trick Pony?,” MIT Technology Review, September 29,
2017.
10. For more information, see Tom Simonite, “Google’s AI Wizard Unveils a New Twist on
Neural Networks,” Wired, November 1, 2017.
11. Dave Gershgorn, “The Quartz Guide to AI,” Quartz, September 10, 2017.
12. Domingos, The Master Algorithm, pos. 672.
13. Perez, “The Deep Learning Playbook for Enterprises.”
14. Ophir Tanz, “Neural Networks Made Easy,” Techcrunch, April 13, 2017.
15. Gershgorn, “The Quartz Guide to AI.”
16. Perez, “Deep Learning Is Splitting into Two Divergent Paths.”
17. The Data Science Blog, “An Intuitive Explanation of Convolutional Neural Networks,”
ujjwalkarn.me, August 11, 2016.
18. Perez, “The Deep Learning Playbook for Enterprises,” 118–35.
19. Will Knight, “The Dark Secret at the Heart of AI,” MIT Technology Review, April 11, 2017.
20. itnext.io, “Deep Learning, AI & the Blackbox,” April 20, 2017.
21. Will Knight, “Nvidia Lets You Peer Inside the Black Box of Its Self-Driving AI,” MIT
Technology Review, May 3, 2017.
22. Nathan Benaich, “6 Areas of AI and ML to Watch Closely,” Medium, January 16, 2017.
23. Carlos E. Perez, “AlphaZero: How Intuition Demolished Logic,” Medium, December 11,
2017.
24. Jason Roell, “Why AlphaGo Is a Bigger Game Changer for Artificial Intelligence Than
Many Realize,” Medium, September 29, 2017.
25. Andrew McAfee and Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our
Digital Future, W.W. Norton, 2017, 82.
26. Steve LeVine, “AI Pioneer Says We Need to Start Over,” Axios, September 15, 2017.
27. Rob Ennals, “What Can Deep Neural Networks Teach Us About Human Thought?,” Quora,
July 25, 2017.
28. Brendan McMahan and Daniel Ramage, “Federated Learning: Collaborative ML Without
Centralized Training Data,” GoogleBlog, April 6, 2017.
29. Chanchana Sornsoontron, “How Do GANs Intuitively Work?,” hackernoon.com, January
28, 2017.

Chapter 4: AI Technologies: From Computer Vision to Drones

1. ImageNet, Stanford Vision Lab, Stanford University, Princeton University, 2016,
www.image-net.org/.
2. Dan Faggella, “Seeing the World Through Machine Eyes with Dr. Irfan Essa,” AI in Industry
Podcast, January 10, 2016.
3. John Mannes, “Facebook’s AI Unlocks the Ability to Search Photos by What’s in Them,”
Techcrunch, February 2, 2017.
4. Kevin Lester, “Shutterstock Unveils Better, Faster, Stronger Search and Discovery
Technology,” shutterstock.com, March 10, 2016.
5. Dan Faggella, “Future Applications of Machine Vision: An Interview with Cortica’s CEO,”
AI in Industry Podcast, November 20, 2016.
6. For more information, see Yoav Goldberg, “An Adversarial Review of ‘Adversarial
Generation of Natural Language,’” Medium, June 9, 2017. See also the response of Yann LeCun on
Facebook, June 10, 2017.
7. Nate Nichols, “Natural Language Processing and Natural Language Generation: What’s the
Difference?,” narrativescience.com, April 24, 2017.
8. Fung Global Retail and Technology, “10 Emerging Startups in Natural Language Processing
(NLP),” 2017.
9. Rob May, “What Will the Natural Language Technology Stack Look Like in 5 Years?,”
Medium, January 25, 2017.
10. Yuxuan Wang, R.J. Skerry-Ryan, Daisy Stanton, et al., “Tacotron: Towards End-to-End
Speech Synthesis,” Google Research Paper, April 6, 2017.
11. Bhargav Shah, “The Power of Natural Language Processing,” Medium, July 13, 2017.
12. Ray Kurzweil, “Princeton/Adobe Technology Will Let You Edit Voices Like Text,”
kurzweilai.net, May 19, 2017.
13. ScienceDaily, 2016.
14. Sam Lessin, “Bot Check-In: A Year of Disappointment,” The Information, January 3, 2017.
15. Mark Sullivan, “After Lots of Talk, Microsoft’s Bots Show Sign of Life,” Fast Company,
May 16, 2017.
16. The Stanford Report, 22.
17. Chris Bryant and Elaine He, “The Robot Rampage,” Bloomberg, January 8, 2017.
18. Andrew McAfee and Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our
Digital Future, 108.
19. Y Combinator, “An AI Primer with Wojciech Zaremba,” blog.ycombinator.com, May 17,
2017.
20. McAfee and Brynjolfsson, Machine, Platform, Crowd, 98.
21. Ibid., 99–100.
22. Nicole Tache, “How AI Is Used to Infer Human Emotion,” oreilly.com, May 18, 2017.
23. Ekman pioneered a study of emotions and their relation to facial expressions, creating an
atlas of emotions.

Chapter 5: AI and Blockchain, IoT, Cybersecurity, and Quantum Computing

1. “European Commission’s Press Release Announcing the Proposed Comprehensive Reform of
Data Protection Rules,” January 25, 2012; see Wikipedia on GDPR, en.m.wikipedia.org.
2. David Pierce, “Inside Andy Rubin’s Quest to Create an OS for Everything,” Wired, July 25,
2017.
3. Cade Metz, “Hackers Don’t Have to Be Human Anymore: This Bot Battle Proves It,” Wired,
May 8, 2016.
4. Matthew Hutson, “Artificial Intelligence Just Made Guessing Your Password a Whole Lot
Easier,” Science, September 15, 2017.
5. Egor Dezhic, “The Present and Future of Quantum Computing for AI,” Medium, August 10,
2017.
6. Mark Bergen, “Google’s Quantum Computing Push Opens New Front in Cloud Battle,”
Bloomberg, July 17, 2017.
7. Lucian Armasu, “IBM Inches Closer to Quantum Supremacy with 16- and 17-Qubit Quantum
Computers,” tomshardware.com, May 17, 2017.
8. Bergen, “Google’s Quantum Computing Push Opens New Front in Cloud Battle.”
9. Financial Times, September 23, 2017.

Chapter 6: Data
1. Nick Harrison and Deborah O’Neill, “If Your Company Isn’t Good at Analytics, It’s Not
Ready for AI,” Harvard Business Review, June 7, 2017.
2. Forbes Corporate Communications Staff, “Majority of Companies Lack Tools and
Investment Necessary for Analytics Usage in Business,” Forbes, June 7, 2017; “Analytics
Accelerates into the Mainstream: Dun & Bradstreet/Forbes Insights 2017 Enterprise Analytics
Study,” Foreword by Nipa Basu, Chief Analytics Officer, Dun & Bradstreet.
3. Tye Rattenbury, Joe Hellerstein, Jeffrey Heer, Sean Kandel, and Connor Carreras, “Principles
of Data Wrangling: Practical Techniques for Data Preparation,” by O’Reilly Media Podcast, 2017,
19.
4. Teresa Escrig, “Is AI a Real Existential Threat?,” teresaescrig.com, June 9, 2015.
5. Interviews; Adam Gibson, “AI & Robot Show. Episode 5. Adam Gibson (Skymind),” The
Architect Show Podcast, July 14, 2017.
6. Inside AI, “Scope vs. Scale: The Next Wave of AI Strategy,” Inside AI (newsletter), May 21,
2017.
7. Big Data Analytics, “Top 5 Big Data Trends That Will Shape AI in 2017,”
bigdataanalyticsnews.com, January 31, 2017.
8. Luke de Oliveira, “Fueling the Gold Rush: The Greatest Public Datasets for AI,” Medium,
February 11, 2017.
9. Ibid.
10. Nathan Benaich, “Six Areas of AI and ML to Watch Closely,” Medium, January 16, 2017.
11. “Snorkel: A System for Fast Training Data Creation,” with frameworks, tutorials,
references, and contributors, at hazyresearch.github.io.
12. “Creating Large Training Datasets Quickly with Alex Ratner,” O’Reilly Data Show,
O’Reilly Media Podcast, June 8, 2017.
13. Jana Eggers, “AI Building Blocks: The Eggs, the Chicken, and the Bacon,” oreilly.com,
January 26, 2017.
14. “Unstructured Data,” Wikipedia, en.wikipedia.org.
15. Alice Zheng, “We Make the Software, You Make the Robots,” an interview with Andreas
Mueller, radar.oreilly.com.
16. Carlos E. Perez, “Machine Teaching: The Sexiest Job of the Future,” Medium, July 29,
2017.
17. Ibid.
18. Matthew Hutson, “The Future of AI Depends on a Huge Workforce of Human Teachers,”
Bloomberg Businessweek, September 7, 2017.
19. Andrej Karpathy, “Software 2.0,” Medium, November 11, 2017.
20. Stefanie Koperniak, “Artificial Data Give the Same Results as Real Data—Without
Compromising Privacy,” news.mit.edu, March 3, 2017.
21. Ariel Ezrachi and Maurice E. Stucke, Virtual Competition: The Promise and Perils of the
Algorithm-Driven Economy, Harvard University Press, 2016.
22. David Weinberger, “Alien Knowledge: When Machines Justify Knowledge,”
backchannel.com, April 18, 2017.
23. Danah Boyd, “Your Data Is Being Manipulated,” Medium, October 4, 2017.
24. Kelsey Campbell-Dollaghan, “The Art of Manipulating Algorithms,” Fast Company,
January 3, 2017.
25. Y Combinator, “At the Intersection of AI, Governments, and Google,” Y Combinator
Podcast, June 16, 2017.
26. For more information, see Leah Wong, “Experts Weigh in on Fairness and Performance
Trade-Offs in Machine Learning,” The Regulatory Review, October 4, 2017.
27. Sam Harris, “What Is Technology Doing to Us?,” Waking Up Podcast, April 14, 2017.
28. Eric Newcomer, “Uber Starts Charging What It Thinks You’re Willing to Pay,” Bloomberg,
May 19, 2017.
29. Kate Brodock, “Why We Desperately Need Women to Design AI,” Medium, August 6,
2017.
30. Carlos E. Perez, “Why Women Should Lead Our A.I. Future,” Medium, December 4, 2017.
31. Michelle Wetzler, “Architecture of Giants: Data Stacks at Facebook, Netflix, Airbnb, and
Pinterest,” blog.keen.io, April 4, 2017.
32. Ibid.
33. Table 6.7 draws on the following resources: (1) “Creating a Data-Driven Enterprise with
DataOps: Insights from Facebook, Uber, LinkedIn, Twitter, and eBay,” by Ashish Thusoo and
Joydeep Sen Sarma, O’Reilly, April 2017; (2) “How Data Lakes Support ML in Industry—with
Cloudera’s Amr Awadallah,” Artificial Intelligence in Industry podcast with Dan Faggella,
February 26, 2017; (3) www.cloudera.com; (4) http://www.gartner.com/newsroom/id/3051717.

Chapter 7: Algorithms
1. Domingos, The Master Algorithm, pos. 211.
2. Wikipedia article on algorithms.
3. Ibid.
4. Andrew Tutt, “An FDA for Algorithms,” Administrative Law Review, Vol. 67, 2016, posted
March 15, 2016.
5. Hui Li provides a good introduction to the most basic algorithms in “Which ML
Algorithm Should I Use?” In addition, SAS Visual Data Mining and ML gives beginners a good
start to learn ML quickly and apply it to different problems.
6. Nathaniel Payne, “What Is Tuning in ML,” stackoverflow.com, April 7, 2014.
7. Dan Faggella, “Tuning ML Algorithms with Scott Clark,” AI in Industry Podcast, February
12, 2017.
8. Domingos, The Master Algorithm, pos. 1274, 1287, 1301.
9. Ibid., pos. 1327.
10. Louise Matsakis, “Researchers Fooled a Google AI into Thinking a Rifle Was a Helicopter,”
Wired, December 20, 2017.
11. See ML security blog cleverhans.io and Ian Goodfellow, Nicolas Papernot, Sandy Huang,
Yan Duan, Pieter Abbeel, and Jack Clark, openai.com, February 16, 2017.
12. Jamie Condliffe, “AI Shouldn’t Believe Everything It Hears,” MIT Technology Review, July
28, 2017.
13. “Anti AI AI—Wearable AI,” rnd.dt.com.au, May 19, 2017.

Chapter 8: Hardware
1. Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics, Vol.
32, No. 8, April 19, 1965.
2. Rodney Brooks, “The End of Moore’s Law,” rodneybrooks.com, February 4, 2017.
3. “The Rise of AI Is Creating a New Variety in the Chip Market, and Trouble for Intel: The
Success of Nvidia and Its New Computing Chip Signals Rapid Change in IT Architecture,” The
Economist, February 25, 2017.
4. Darrell Etherington, “Intel Capital Has Invested over $1 Billion in Companies Focused on
AI,” Techcrunch, September 18, 2017.
5. Katyanna Quach, “Your 90-Second Guide to New Stuff Nvidia Teased Today: Volta V100
Chips, a GPU Cloud, and More,” theregister.co.uk, May 10, 2017.
6. Benaich, “Six Areas of AI and ML to Watch Closely.”
7. Carlos E. Perez, “Google’s AI Processor’s (TPU) Heart Throbbing Inspiration,” Medium,
April 5, 2017.
8. Ari Levy, “Several Google Engineers Have Left One of Its Most Secretive AI Projects to
Form a Stealth Startup,” cnbc.com, April 20, 2017.
9. Karl Freund, “An ML Landscape: Where AMD, Intel, NVIDIA, Qualcomm and Xilinx AI
Engines Live,” Forbes, March 3, 2017.
10. Cade Metz, “The Race to Build an AI Chip for Everything Just Got Real,” Wired, April 24,
2017.
11. James Morra, “Graphcore Prepares ML Silicon for this Year,” Electronic Design, July 26,
2017.
12. Mark Gurman, “Apple Is Working on a Dedicated Chip to Power AI on Devices,”
Bloomberg Technology, May 26, 2017.
13. Greg Diamos, “We Need Next Generation Algorithms to Harness the Power of Today’s AI
Chips,” Forbes, June 21, 2017.
14. Shanahan, The Technological Singularity, pos. 457.
15. William Vorhies, “The Three Way Race to the Future of AI: Quantum vs. Neuromorphic vs.
High Performance Computing,” datasciencecentral.com, November 14, 2017.
16. Tom Simonite, “Intel’s New Chip Design Takes Pointers from Your Brain,” Wired,
September 25, 2017.
17. Tom Simonite, “Google’s New Chip Is a Stepping Stone to Quantum Computing
Supremacy,” MIT Technology Review, April 21, 2017.
18. R. Colin Johnson, “DARPA Funds Development of New Type of Processor,” eetimes.com,
June 9, 2017.
19. Nova Spivack, “Why Cognition-as-a-Service Is the Next Operating System Battlefield,”
Bottlenose, gigaom.com, December 7, 2013.
20. For example, see Network Management and AI Lab research of Carleton University,
Canada, and Fernando Koch, Carlos Becker Westphall, Marcos Dias de Assuncao, and Edison
Xavier, “Distributed AI for Network Management Systems.”
21. Mike Barlow, “What About the Shannon Limit?” in “Practical AI in the Cloud,” O’Reilly,
2017, 12.

Chapter 9: Openness
1. John Mannes, “Facebook and Microsoft Collaborate to Simplify Conversions from PyTorch
to Caffe2,” September 7, 2017.
2. Tom Krazit, “Amazon and Microsoft Unveil ‘Gluon’ Neural Network Technology, Teaming
up on Machine Learning,” geekwire.com, October 12, 2017.

Part III: The Race for AI Dominance—From Full-Stack Companies to “AI-as-a-Service”

1. Babak Hodjat, “The AI Resurgence: Why Now?,” Wired, March 24, 2015.

Chapter 10: AI Crossplay—Academia and the Private Sector

1. Elizabeth Gibney, “AI Talent Grab Sparks Excitement and Concern,” Nature, April 26, 2016.
2. Elizabeth Dwoskin, “Why Apple Is Struggling to Become an AI Powerhouse,” The
Washington Post, June 5, 2017.
3. Steve LeVine, “AI Pioneer Calls for the Breakup of Big Tech,” Axios, September 21, 2017.
4. Gary Marcus, “AI Is Stuck. Here’s How to Move It Forward,” The New York Times, July 29,
2017.

Chapter 11: The Geopolitics of AI—China vs. USA


1. Gregory Allen and Elsa B. Kania, “China Is Using America’s Own Plan to Dominate the
Future of AI,” foreignpolicy.com, September 8, 2017.
2. Elsa Kania, “The Dual-Use Dilemma in China’s New AI Plan: Leveraging Foreign
Innovation Resources and Military-Civil Fusion,” lawfareblog.com, July 28, 2017.
3. Guest Blogger for Adam Segal, “Beijing’s AI Strategy: Old-School Central Planning with a
Futuristic Twist,” Council on Foreign Relations, cfr.org, August 9, 2017.
4. Sinovation and Eurasia Group Geo-Tech Team, “AI in China: Cutting Through the Hype,”
eurasiagroup.net, December 6, 2017.
5. Gillian B. White, “Steve Mnuchin Is ‘Not Worried at All’ About Machines Displacing
American Workers,” The Atlantic, March 24, 2017.
6. Paul Mozur and John Markoff, “Is China Outsmarting America in AI?,” The New York Times,
May 27, 2017.
7. John Markoff and Matthew Rosenberg, “China’s Intelligent Weaponry Gets Smarter,” The
New York Times, February 3, 2017.
8. Arjun Kharpal, “Alibaba Is Using AI to Take on Amazon, Microsoft,” cnbc.com, August 25,
2015.
9. Ben Popper, “AI Is One Step Closer to Mastering StarCraft,” theverge.com, April 3, 2017.
10. Prableen Bajpai, “How Alibaba Is Using AI in Healthcare,” Nasdaq, July 13, 2017.
11. Jessi Hempel, “How Baidu Will Win China’s AI Race—And Maybe, the World’s,” Wired,
August 9, 2017.
12. Cate Cadell, “China’s Baidu Buys U.S. Computer Vision Startup amid AI Push,”
reuters.com, April 13, 2017.
13. “Baidu Looks Beyond Search with AI Acquisition,” eMarketer.com, February 17, 2017.
14. See Fast Company, March 17, 2017.
15. Hempel, “How Baidu Will Win China’s AI Race—And Maybe, the World’s.”
16. Saheli Roy Choudhury, “Microsoft Is Teaming Up with a Chinese Rival to Power Self-
Driving Cars,” cnbc.com, July 19, 2017.
17. “Baidu Cloud Announces Hybrid Cloud Platform to Accelerate Enterprise Deployment of
AI,” globenewswire.com, September 20, 2017.
18. Will Knight, “Tencent’s New Lab Shows It’s Serious About Mastering AI,” MIT
Technology Review, May 2, 2017.
19. “Huawei Announces Kirin 970: AI in Your Phone,” Synced, www.jiqizhixin.com, Medium,
September 4, 2017.
20. Coco Feng, “Chinese Startup Breaks AI Fundraising Record,” caixinglobal.com, July 13,
2017.

Chapter 12: Full-Stack Companies—The Battle of the Giants


1. Kevin McLaughlin and Mike Sullivan, “Google’s Relentless AI Appetite,” The Information,
January 10, 2017.
2. Dan Primack, “Exclusive: Google Launches AI Investment Platform,” axios.com, May 26,
2017.
3. “The Race for AI: Google, Twitter, Intel, Apple in a Rush to Grab AI Startups,” CB Insights,
March 30, 2017.
4. John Mannes, “Google Launches Its Own AI Studio to Foster Machine Intelligence Startups,”
TechCrunch, July 26, 2017.
5. Jon Russell, “Google Is Opening a China-Based Research Lab Focused on Artificial
Intelligence,” TechCrunch, December 12, 2017.
6. Will Knight, “Forget AlphaGo—DeepMind Has a More Interesting Step Toward General
AI,” MIT Technology Review, June 14, 2017.
7. ARK, “Google: The Full-Stack AI Company,” ARK Investment Management,
seekingalpha.com, May 25, 2017.
8. Edison Investment Research, “Google I/O 2017: Brain Game,” May 22, 2017.
9. Tom Simonite, “A New Google Project Sets Out to Solve Artificial Stupidity,” Wired, July
10, 2017.
10. James Vincent, “Google’s Latest Platform Play Is AI, and It’s Already Winning,”
theverge.com, May 18, 2017.
11. Steven Levy, “The Google Assistant Needs You,” backchannel.com, October 25, 2016.
12. Victor Luckerson, “Why Google Is Suddenly Obsessed with Your Photos,” theringer.com,
May 25, 2017.
13. ARK, “Google: The Full-Stack AI Company.”
14. Sara Fischer, “Scoop: Google Adds New ML Technology to Newsrooms,” Axios,
September 19, 2017.
15. Mark Bergen, “Google’s Quantum Computing Push Opens New Front in Cloud Battle,”
Bloomberg, July 17, 2017.
16. Mark Bergen, “The AI Doctor Orders More Tests,” Bloomberg Businessweek, June 8, 2017.
17. Ingrid Lunden, “Google Quietly Debuts Chatbase, a Chatbot Analytics Platform,”
TechCrunch, May 17, 2017.
18. Alex Hern, “Whatever Happened to the DeepMind AI Ethics Board Google Promised?,”
The Guardian, January 26, 2017.
19. James Temperton, “DeepMind’s New AI Ethics Unit Is the Company’s Next Big Move,”
Wired, October 4, 2017.
20. Steven Levy, “The Applied ML Group Helps Facebook See, Talk, and Understand. It May
Even Root Out Fake News,” backchannel.com, February 23, 2017.
21. Facebook FAIR research can be found at https://research.fb.com/publications/?cat=2.
22. “The Race for AI: Google, Twitter, Intel, Apple in a Rush to Grab AI Startups,” CB Insights,
March 30, 2017.
23. “The Applied ML Group Helps Facebook See, Talk, and Understand.”
24. Jonathan Shieber, “Report: Facebook Gave Special Prosecutor Robert Mueller Detailed Info
on Russian Ad Buys,” TechCrunch, September 16, 2017.
25. Stacey Higginbotham, “Inside Facebook’s Biggest Artificial Intelligence Project Ever,”
Fortune, April 13, 2016.
26. Cade Metz, “Facebook’s Augmented Reality Engine Brings AI Right to Your Phone,”
Wired, April 19, 2017.
27. Master of Code Global, “Latest News from Facebook Developer Conference: Caffe2,
Facebook AI-Powered Glasses,” Medium, April 20, 2017.
28. Kevin McLaughlin, “Slow Uptake for Apple’s Deep Learning Tools,” The Information,
December 7, 2016.
29. Differential Privacy Team, “Learning with Privacy at Scale,” Apple, Machine Learning
Journal, Vol. 1, Issue 8, December 2017.
30. Steven Levy, “The iBrain Is Here,” backchannel.com, August 24, 2016.
31. Mark Hibben, “Apple Uses Its Chip Expertise to Move Ahead in AI,” seekingalpha.com,
May 27, 2017.
32. James Vincent, “Apple Announces New ML API to Make Mobile AI Faster,” theverge.com,
June 5, 2017.
33. CB Insights, “Amazon Strategy Teardown,” Research Report, July 2017.
34. The origins of this strategy are well described in Brad Stone, The Everything Store: Jeff
Bezos and the Age of Amazon, 2013.
35. Todd Bishop, “Jeff Bezos Explains Amazon’s AI and ML Strategy,” geekwire.com, May 6,
2017.
36. Serdar Yegulalp, “Why Amazon Picked MXNet for Deep Learning,” InfoWorld, November
23, 2016.
37. “Alexa: Amazon’s Operating System,” stratechery.com, January 4, 2017.
38. Bret Kinsella, “Amazon Alexa Skill Count Passes 15,000 in the U.S.,” voicebot.ai, July 2,
2017.
39. Harry McCracken, “Echo and Alexa Are Two Years Old. Here’s What Amazon Has
Learned So Far,” Fast Company, July 11, 2016.
40. “Amazon Strategy Teardown,” CB Insights.
41. Tom Krazit, “Amazon Web Services’ Swami Sivasubramanian on the Future of AI in the
Cloud,” geekwire.com, June 5, 2017.
42. Christina Farr, “Amazon Alexa Is Missing One Big Thing Before It Gets into Health Care,”
cnbc.com, September 25, 2017.
43. “Amazon Strategy Teardown,” CB Insights.
44. Christina Farr, “Amazon Alexa Is Missing One Big Thing Before It Gets into Health Care,”
cnbc.com, September 25, 2017.
45. Frederic Lardinois, “Amazon Launches Amazon AI to Bring Its ML Smarts to Developers,”
TechCrunch, November 30, 2016.
46. “Amazon Strategy Teardown,” CB Insights.
47. Ibid.
48. Agam Shah, “The ‘Amazon Effect’ Will Drive Autonomous Vehicles, Nvidia CEO Says,”
pcworld.com, May 9, 2017.
49. Jessi Hempel, “Inside Microsoft’s AI Comeback,” Backchannel, Wired, June 21, 2017.
50. Carlos E. Perez, The Deep Learning Playbook for Enterprises, 15. You can find an overview
of Microsoft AI research at https://www.microsoft.com/en-us/research-area/artificial-intelligence/.
51. “The Race for AI: Google, Twitter, Intel, Apple in a Rush to Grab Artificial Intelligence
Startups,” CB Insights.
52. Mark Sullivan, “After Lots of Talk, Microsoft’s Bots Show Signs of Life,” Fast Company,
May 16, 2017.
53. Saheli Roy Choudhury, “Microsoft Is Teaming Up with a Chinese Rival to Power Self-
Driving Cars.”
54. Carlos E. Perez, The Deep Learning Playbook for Enterprises, 15. You can find an
overview of Microsoft AI research at https://www.microsoft.com/en-us/research-area/artificial-
intelligence/.
55. “The Race for AI: Google, Twitter, Intel, Apple in a Rush to Grab Artificial Intelligence
Startups,” CB Insights.
56. Mark Sullivan, “After Lots of Talk, Microsoft’s Bots Show Signs of Life,” Fast Company,
May 16, 2017; and Saheli Roy Choudhury, “Microsoft Is Teaming Up with a Chinese Rival to Power
Self-Driving Cars.”

Chapter 13: Product- and Domain-Focused ML Companies


1. Bradford Cross, “Vertical AI Startups: Solving Industry-Specific Problems by Combining AI
and Subject Matter Expertise,” www.bradfordcross.com/blog, June 14, 2017; interviews.
2. “The Top 10 Chatbots for Enterprise Customer Service,” forrester.com, June 29, 2017.
3. George Skaff, “In a World Filled with Chatbots, This Virtual Assistant Rises to the Top,”
whatsnext.nuance.com, June 29, 2017.
4. Sam Abuelsamid, “Nuance Communication Aims to Bring AI to the Talking Car,” Forbes,
October 19, 2016.

Chapter 14: Traditional and New Enterprise Companies Focused on ML


1. Larry Dignan, “Oracle Preps Autonomous Database at OpenWorld, Aims to Cut Labor,
Admin Time,” zdnet.com, September 15, 2017.
2. Scott Carey, “AI in the Enterprise: How the Big Enterprise Software Vendors Are Striving to
Make Systems Smarter, from IBM to SAP,” ComputerWorld, June 21, 2016.
3. Andreas Schmitt, “Was ist SAP Leonardo?” [“What Is SAP Leonardo?”], Computerwoche.de, May 11, 2017.
4. Nan Boden, “Google Cloud and SAP Forge Partnership to Develop Enterprise Solutions,”
blog.google, March 7, 2017.
5. Rutberg research, CB Insights, Bloomberg, and TechCrunch.
6. Ingrid Lunden, “Salesforce Acquires PredictionIO to Build Up Its ML Muscle,” TechCrunch,
February 19, 2016.
7. “Salesforce Einstein Delivers AI Across the Salesforce Platform,” TechCrunch, September
18, 2016.
8. Scott Rosenberg, “Inside Salesforce’s Quest to Bring AI to Everyone,” Wired, August 2,
2017.
9. Ron Miller, “Salesforce Introduces Several Einstein AI Tools for Third-Party Developers,”
TechCrunch, June 28, 2017.
10. Rosenberg, “Inside Salesforce’s Quest to Bring AI to Everyone.”
11. Will Knight, “AI Winter Isn’t Coming,” MIT Technology Review, December 7, 2016.
12. Will Knight, “An Algorithm Summarizes Lengthy Text Surprisingly Well,” MIT
Technology Review, May 12, 2017.
13. Tiernan Ray, “IBM’s Watson, Despite Hype, Outgunned in AI, Says Jefferies,” barrons.com,
July 12, 2017.
14. Jason Bloomberg, “Is IBM Watson a Joke?,” Forbes, July 2, 2017.
15. Annalee Newitz, “MIT, IBM Team Up on $240 Million Effort to Rule the AI World,”
Ars Technica, September 7, 2017.
16. Perez, The Deep Learning Playbook for Enterprises. IBM research can be found at
https://arxiv.org/abs/160.08242.
17. Comments on an article by Kevin McLaughlin, “How IBM Is Competing with Google in
AI,” The Information, February 16, 2017.
18. David Cardinal, “IBM Is Bringing ML to a Mainframe Near You,” extremetech.com,
February 16, 2017.
19. Lily Hay Newman, “IBM’s Plan to Encrypt Unthinkable Amounts of Sensitive Data,”
Wired, July 17, 2017.
20. This table is adapted and updated from a similar one in Andrea Bonime-Blanc, The
Reputation Risk Handbook: Surviving and Thriving in the Age of Hyper-Transparency, 29.
21. Will Knight, “IBM’s Watson Is Everywhere—But What Is It?,” MIT Technology Review,
October 27, 2016.
22. Kevin McLaughlin, “How IBM Is Competing with Google in AI,” The Information,
February 16, 2017.
23. Tiernan Ray, “IBM’s Watson, Despite Hype, Outgunned in AI, Says Jefferies,” Barron’s,
July 12, 2017.
24. Don Clark, “IBM: A Billion People to Use Watson by 2018. CEO Ginni Rometty Highlights
Watson’s Recent Pact with GM at WSJ’s Tech Conference,” Morningstar, October 26, 2016.
25. Adapted from Andrea Bonime-Blanc, The Reputation Risk Handbook.
26. Jessica Galang, “DeepLearni.ng Raises $9 Million Series A to Bring AI to Enterprise,”
betakit.com, September 13, 2017.
27. Andrea Bonime-Blanc, The Reputation Risk Handbook.

Chapter 15: Introducing AI into a Traditional Business


1. James Manyika, Michael Chui, Jacques Bughin, Richard Dobbs, et al., “Disruptive
Technologies: Advances That Will Transform Life, Business, and the Global Economy,” McKinsey
Global Institute Report, May 2013.
2. Matthew Johnston, “The Battle for Dominance in AI Is Heating Up (CRM),”
investopedia.com, September 22, 2016.
3. Scott Rosenberg, “Inside Salesforce’s Quest to Bring AI to Everyone,” Wired, August 2,
2017.
4. Dan Faggella, “ML Misconceptions: Infographic from Our Expert Consensus,”
TechEmergence, October 3, 2016.
5. Faggella, “ML Misconceptions.”
6. Jeanne Ross, “The Fatal Flaw of AI Implementation,” MIT Sloan Management Review, July 14, 2017.
7. Dan Faggella, “AI Adoption: What It Takes for Industries to Change,” HuffPost, August 1,
2017.
8. Ibid.
9. Ibid.
10. Ross, “The Fatal Flaw of AI Implementation.”
11. Ibid.
12. Kartik Hosangar and Apoorv Saxena, “The First Wave of Corporate AI Is Doomed to Fail,”
Harvard Business Review, April 18, 2017.
13. Preeti Rathi, “Before You Build Another ML Startup, Read This,” Ignition Partners,
venturebeat.com, February 7, 2017; David Kelnar, “The MMC Ventures AI Investment Framework:
17 Success Factors for the Age of AI,” Medium, May 16, 2017.
14. Karthik Mahadevan, “The Need for AI-Friendly Designs,” Medium, June 14, 2017.

Chapter 16: AI and the Automotive Industry


1. Nabeel Hyatt, “Autonomous Driving Is Here, and It’s Going to Change Everything,” Recode,
April 19, 2017.
2. Alexander Howard, “How Digital Platforms Like LinkedIn, Uber and TaskRabbit Are
Changing the On-Demand Economy,” The Huffington Post, July 14, 2015.
3. Andreas Cornet, Matthias Kässer, and Andreas Tschiesner, “The Road to AI in Mobility:
Smart Moves Required,” McKinsey Quarterly, September 2017.
4. Ruth Reader, “In a Driverless Future, Uber and Lyft Will Sit in the Passenger Seat,” Fast
Company, May 17, 2017.
5. Brian Rashid, “How AI Pioneers Will Affect the Car Industry, and Why It’s a Good Thing,”
Fortune, May 16, 2017.
6. Vignesh Ramachandran, “Stanford’s Social Robot ‘Jackrabbot’ Seeks to Understand
Pedestrian Behavior,” Stanford Press Release, news.stanford.edu, June 1, 2016.
7. Narasimha Kumar, Quora, September 11, 2016.
8. PwC, “Spotlight on Automotive: PwC Semiconductor Report,” September 2013.
9. “24 Industries Other Than Auto That Driverless Cars Could Turn Upside Down,” CB Insights,
July 31, 2017.
10. Amir Efrati, “How Samsung Aims to Catch Up in Car Tech,” The Information, August 10,
2017.
11. Sean O’Kane, “Samsung Makes a $300 Million Push into Self-Driving Cars,” The Verge,
September 14, 2017.
12. Amanda Lee, “Nvidia, Maker of Graphics Chips, Gets into China’s Self-Driving Cars with
Investment in Tusimple,” South China Morning Post, August 4 and 10, 2017.
13. Nabeel Hyatt, “Autonomous Driving Is Here, and It’s Going to Change Everything,”
Recode, April 19, 2017.
14. Hyatt, “Autonomous Driving Is Here, and It’s Going to Change Everything.”
15. Dom Galeon, “Electric Cars Might Dominate European Markets by 2035, According to a
New Report,” businessinsider.de, July 15, 2017.
16. Andrew J. Hawkins, “Why Delphi and Mobileye Think They Have the Secret Sauce for
Self-Driving Cars,” The Verge, December 1, 2016.
17. Tasha Keeney, “China’s Booming Autonomous Car Opportunity,” ARK Investment
Management, July 17, 2017.
18. Stanford Report, 21–22.
19. David Galbraith, “A New Approach to Designing Smart Cities,” Medium, July 23, 2017.
20. Aniket Patel, “Self-Driving Cars Will Affect Everything You Know,” Medium, July 26,
2017.
21. Leslie Hook, “Alphabet Draws Up Plans to Build City of the Future,” FT, September 23,
2017, 11.
22. Patel, “Self-Driving Cars Will Affect Everything You Know.”
23. Ben Leubsdorf, “Self-Driving Cars Could Transform Jobs Held by 1 in 9 US Workers,” The
Wall Street Journal, August 14, 2017.
24. Hyatt, “Autonomous Driving Is Here, and It’s Going to Change Everything.”
25. David McCabe, Gerald Rich, Lazaro Gamio, “Here’s Where Jobs Will Be Lost When
Robots Drive Trucks,” axios.com, February 22, 2017.
26. Inside IoT Premium Newsletter, September 15, 2017.
27. Narasimha Kumar, Quora, September 11, 2016.
28. John Brandon, “Meet the Founder Trying to Start the Self-Driving Car Revolution,” Inc.,
February 2015.
29. Darrell Etherington, “GM and Cruise Announce First Mass-Production Self-Driving Car,”
TechCrunch, September 11, 2017.
30. Kristen Korosec, “Aurora Is Bringing Self-Driving Cars to Public Roads in California,”
Fortune, August 2, 2017.
31. Andrej Karpathy, “Deep Reinforcement Learning: Pong from Pixels,” karpathy blog,
http://karpathy.github.io/2016/05/31/rl/, May 31, 2016.
32. Jordan Novet, “Tesla Is Working with AMD to Develop Its Own AI Chip for Self-Driving
Cars, Says Source,” cnbc.com, September 21, 2017.
33. “Joint Special Report on Tesla Motors Inc,” GEC Risk Advisory and RepRisk AG, 2016.
https://gecrisk.com/wp-content/uploads/2016/08/Joint-Report-RepRisk-and-GEC-Risk-on-Tesla-
Motors-8.17.16.pdf. The RepRisk Index measures reputation risk from vetted media and social
media sources using algorithms and human analysts, with scores of 75 to 100 reflecting relatively
high reputation risk. As evidenced in this profile, Tesla’s index rose to about 50 after the motorist
death incident in 2016; before the incident, its RepRisk Index was relatively low, averaging 30–40
or below.
34. Sam Levin, “The Road Ahead: Self-Driving Cars on the Brink of a Revolution in
California,” The Guardian, March 17, 2017.
35. Billy Duberstein, “Here’s Why Alphabet’s Waymo Is Leading in Self-Driving Cars,”
Business Insider, June 23, 2017.
36. “Uber Strategy Teardown: The Giant Looks to an Autonomous Future, Food Delivery, and
Tighter Financial Discipline,” CB Insights, September 2017.
37. Jeremy Hermann and Mike Del Balso, “Meet Michelangelo: Uber’s ML Platform,”
eng.uber.com, September 5, 2017.
38. Alex Davies, “With Uber Freight, Travis Takes on Trucking,” Wired, May 18, 2017.

Chapter 17: AI in Health Care


1. Luke Oakden-Rayner, “Do Machines Actually Beat Doctors?”
lukeoakdenrayner.wordpress.com, November 27, 2016.
2. P. Tiwari, P. Prasanna, L. Wolansky, et al., neurosciencenews.com, September 16, 2016.
3. Krista Conger, med.stanford.edu, August 16, 2016.
4. Cecille De Jesus, Futurism, September 25, 2016.
5. Eliza Strickland, spectrum.ieee.org, May 4, 2016.
6. Varun Gulshan, Lily Peng, Marc Coram, et al., jamanetwork.com, December 13, 2016.
7. Oakden-Rayner, “Do Machines Actually Beat Doctors?”
8. Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M.
Blau, and Sebastian Thrun, “Dermatologist-level Classification of Skin Cancer with Deep Neural
Networks,” Nature, 542, February 2, 2017.
9. R. Caruana et al., “Intelligible Models for Healthcare: Predicting Pneumonia Risk and Hospital
30-Day Readmission,” Proceedings of the 21st ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, 2015, 1721–1730.
10. Aaron M. Bornstein, “Is Artificial Intelligence Permanently Inscrutable?” Nautilus,
nautil.us, September 1, 2016.
11. Kumba Sennaar, “AI in Pharma and Biomedicine: Analysis of the Top 5 Global Drug
Companies,” TechEmergence, July 5, 2017.
12. Sennaar, “AI in Pharma and Biomedicine.”
13. Tom Simonite, “Machine Vision Helps Spot New Drug Treatments,” MIT Technology
Review, January 24, 2017.
14. “Atomwise Finds First Evidence Towards New Ebola Treatments,” atomwise.com, March
25, 2015.
15. Qu Harrison Terry, “48 Companies Bringing AI to Healthcare,” Redox, April 24, 2017.
16. Dyllan Furness, “The Chatbot Will See You Now: AI May Play Doctor in the Future of
Healthcare,” Digitaltrends.com, October 7, 2016.
17. Simon Parkin, “The Artificially Intelligent Doctor Will Hear You Now,” MIT Technology
Review, March 9, 2016.
18. Dyllan Furness, “The Chatbot Will See You Now.”
19. Philipp Stauffer, “Dawn of the Ultimate Unfair Competitive Advantage,” Medium, June 9,
2017.
20. http://www.intuitivesurgical.com.
21. Evan Ackerman, “Google and Johnson & Johnson Conjugate to Create Verb Surgical,
Promise Fancy Medical Robots,” IEEE Spectrum, December 17, 2015.
22. Steve Dent, “Soft Robot Wraps Around Your Heart to Help It Beat,” Engadget, January 19,
2017.
23. http://www.aethon.com.
24. Jennings Brown, “Why Everyone Is Hating on IBM Watson—Including the People Who
Helped Make It,” Gizmodo, August 10, 2017.
25. Steve Lohr, “The Promise of AI Unfolds in Small Steps,” The New York Times, February
28, 2016.
26. Casey Ross and Ike Swetlitz, “IBM Pitched Its Watson Supercomputer as a Revolution in
Cancer Care. It’s Nowhere Close,” statnews.com, September 5, 2017.
27. Dina Bass, “Microsoft Takes Another Crack at Health Care, This Time with Cloud, AI and
Chatbots,” Bloomberg Technology, February 16, 2017.
28. Cara McGoogan, “Microsoft Launches New Healthcare Division Based on AI Software,”
The Telegraph, September 24, 2017.

Chapter 18: AI and the Financial Sector


1. Dave Gershgorn, “Japanese White-Collar Workers Are Already Being Replaced by AI,”
Quartz, January 2, 2017.
2. Cade Metz, “AI and Bitcoin Are Driving the Next Big Hedge Fund Wave,” Wired, February
13, 2017.
3. Adam Satariano, “Silicon Valley Hedge Fund Takes on Wall Street with AI Trader,”
Bloomberg, February 6, 2017.
4. Richard Craib, Geoffrey Bradway, and Xander Dunn with Joey Krug, “Numeraire: A
Cryptographic Token for Coordinating Machine Intelligence and Preventing Overfitting,”
https://numer.ai, February 20, 2017; Cade Metz, “An AI Hedge Fund Created a New Currency to
Make Wall Street Work Like Open Source,” Wired, February 21, 2017.
5. Robert McMillan, “Project to Turn Bitcoin Into an All-Powerful Programming Language
Raises $15M,” Wired, October 9, 2014.
6. Bill Brennan, Mike Bacala, and Mike Flynn, “AI Comes to Financial Statement Audits,”
CFO, ww2.cfo.com, February 2, 2017.

Chapter 19: AI in Natural Resources and Utilities


1. Osborne Clarke, “AI: The Future of the Electricity Sector,” smartcities.osborneclarke.com,
April 25, 2016.
2. Indigo Advisory Group, “Monetizing Utility Data: The ‘Utility Data as a Service’
Opportunity,” Medium, January 4, 2017.
3. Indigo Advisory Group, “AI in Energy and Utilities,” Medium, April 11, 2017.
4. Rakesh Sharma, “The Role of AI in Smart Utilities,” energycentral.com, December 9, 2016.
5. Madhumita Murgia and Nathalie Thomas, “DeepMind and National Grid in AI Talks to
Balance Energy Supply,” FT, March 12, 2017.
6. Haley Zaremba, “AI Is Crucial for the Energy Industry,” oilprice.com, May 15, 2017.
7. “Background Report on Artificial Intelligence,” Pardee Center for International Futures,
University of Denver, 2017, 55.
8. Daniel Terdiman, “HiBot USA’s Technology Combining Robots, Big Data, and AI Could
Upend the Way Municipalities Replace Aging Water Pipes,” February 24, 2017.

Chapter 20: AI in Other Business Sectors


1. Matt Reynolds, “AI Learns to Write Its Own Code by Stealing from Other Programs,”
newscientist.com, February 22, 2017.
2. Murat Vurucu, “How Do We Teach a Machine to Program Itself? NEAT Learning,” Makea
Industries GmbH Berlin, Medium, July 25, 2017.
3. Nicolas Koumchatzky and Anton Andryeyev, “Using Deep Learning at Scale in Twitter’s
Timelines,” blog.twitter.com, May 9, 2017.
4. Chris O’Brien, “Twitter Hopes Machine Learning Can Save It from Oblivion,”
venturebeat.com, February 10, 2017.
5. Nick Stockton, “If AI Can Fix Peer Review in Science, AI Can Do Anything,” Wired,
February 20, 2017.
6. Stockton, “If AI Can Fix Peer Review in Science, AI Can Do Anything.”
7. Blair Hanley Frank, “ObEN Lands $5 Million from Tencent and Others for AI-powered
Avatars,” Venturebeat, July 18, 2017.
8. Chloe Olewitz, “A Japanese AI Program Just Wrote a Short Novel, and It Almost Won a
Literary Prize,” Digital Trends, March 23, 2016.
9. Y Combinator, “At the Intersection of AI, Governments, and Google,” Y Combinator
Podcast, June 16, 2017.
10. Cade Metz, “How AI Is Creating Building Blocks to Reshape Music and Art,” The New
York Times, August 14, 2017.
11. Sarah Cascone, “AI-Generated Art Now Looks More Convincingly Human Than Work at
Art Basel,” Artnet News, July 11, 2017.
Chapter 21: AI in Government and the Military


1. Dave Gershgorn, “The U.S. Government Has Been Funding AI for 50 Years, and Just Came
Up with a Plan for Its Future,” Quartz, October 12, 2016.
2. Jack Smith, “(Exclusive) Crime-Prediction Tool PredPol Amplifies Racially Biased Policing,
Study Shows,” mic.com, October 9, 2016.
3. Rebecca Hill, “Gov. UK Pops Open Tin of AI and Robotics Research Cash,” The Register,
June 22, 2017.
4. Sputnik News, “UK Government to Launch Its First Public Inquiry into AI,”
sputniknews.com, July 20, 2017.
5. “Canada Funds $125 Million Pan-Canadian Artificial Intelligence Strategy,” Cision, March
22, 2017.
6. European Commission, “The Future of Robotics and Artificial Intelligence in Europe,”
ec.europa.eu, May 10, 2017, updated February 2017.
7. See en.wikipedia.org and https://compstat.nypdonline.org/2e5c3f4b-85c1-4635-83c6-
22b27fe7c75c/view/89.
8. www.law-train.eu.
9. http://teamcore.usc.edu/projects/darms/darms.html.
10. Tom Simonite, “AI Could Revolutionize War as Much as Nukes,” Wired, July 19, 2017.
11. Patrick Tucker, “What the CIA’s Tech Director Wants from AI,” defenseone.com,
September 6, 2017.

Chapter 22: Data Control, AI, and Societies


1. “Cybernetics,” Wikipedia.
2. Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne
Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter, “Will Democracy
Survive Big Data and AI?,” Scientific American, February 25, 2017.
3. Yuan Yang, Yingzhi Yang, and Sherry Fei Ju, “Beijing Seeks to Predict Crime Using AI,”
FT, July 24, 2017, 3.
4. Vyacheslav W. Polonski, “How AI Conquered Democracy,” The Conversation,
techcentral.co.za, August 9, 2017.
5. Tom Simonite, “When Government Rules by Software, Citizens Are Left in the Dark,”
Wired, August 17, 2017.
6. Ava Kofman, “Taser Will Use Police Body Camera Videos to Anticipate Criminal Activity,”
The Intercept, April 30, 2017.
7. Graham Kates, “Facebook, for the First Time, Acknowledges Election Manipulation,”
cbsnews.com, April 28, 2017.
8. Lucinda Shen, “Study Finds ‘Collusion Network’ of Fake Likes on Facebook,” Fortune,
September 9, 2017.
9. Alex Heath, “Facebook Says It’s ‘Unable’ to Reveal the Ads Linked to Russia,” Business
Insider, September 8, 2017.
10. Tom Simonite, “Humans Can’t Expect AI to Just Fight Fake News for Them,” Wired, June
15, 2017.

Chapter 23: The Future of Employment and the Pursuit of Life-Long Learning


1. Vinod Khosla, “AI: Scary for the Right Reasons,” hackernoon.com, September 12, 2017.
2. Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans, “When Will AI
Exceed Human Performance? Evidence from AI Experts,” Future of Humanity Institute, Oxford
University, Department of Political Science, Yale University, arXiv: 1705.08807v1 [cs. AI], May
24, 2017.
3. Nick Wingfield and Patricia Cohen, “Amazon Plans Second Headquarters, Opening a
Bidding War Among Cities,” The New York Times, September 7, 2017.
4. The Economist, January 14–20, 2017, 9.
5. Hyatt, “Autonomous Driving Is Here, and It’s Going to Change Everything.”
6. McKinsey, “Getting Ready for the Future of Work,” McKinsey Quarterly, September 2017.

Chapter 24: Ethical Design and Social Governance


1. This section also incorporates themes discussed in the excellent Scientific American article
“Will Democracy Survive Big Data and Artificial Intelligence?,” by Dirk Helbing, Bruno S. Frey,
Gerd Gigerenzer, Ernst Hafen, Michael Hagner, et al., February 25, 2017.
2. Andrea Bonime-Blanc, “The Defense Industry Initiative,” in Setting Global Standards, edited
by S. Prakash Sethi, 2003.

Chapter 25: Traditional Boards’ Readiness to Address AI—Transforming Fear into Value


1. WCD and KPMG, “Visionary Boards. Seeing Far and Seeing Wide: Moving Toward a
Visionary Board,” WCD Foundation and KPMG, boardleadership.kpmg.us, April 2017.
2. Betsy Atkins, “Why Today’s Boards Need to Be Tech Savvy,” Corporate Board Member,
3rd quarter 2015, 46–47.
3. Heather Clancy, “Could Your Board Use a Digital Overhaul?,” Fortune, July 18, 2016.
4. Anastassia Lauterbach and Andrea Bonime-Blanc, “Artificial Intelligence: A Strategic
Business and Governance Imperative,” NACD Directorship Magazine, September/October 2016.

Chapter 26: Discussing AI in the Boardroom


1. Andrea Bonime-Blanc, “Implementing a Holistic Governance, Risk and Reputation Strategy
for Multinationals: Guidelines for Boards,” Ethical Boardroom,
https://ethicalboardroom.com/implementing-a-holistic-governance-risk-and-reputation-strategy-for-
multinationals-guidelines-for-boards/.
Index

Note: Page numbers followed by t indicate content in a table.

ABC stack, Baidu, 103–104


Abelson, Hal, 110
Abrash, Michael, 117
Action IQ, 131t
Adams, Ryan, 27
Adjusting to Human Smart Energy System (AHSES), 189
Advanced cryptographic tools: AI-friendly design, 150, 151t; blockchain, overview, 2–3, 47–50,
54t; business applications of, 143–144; financial sector, 183, 184, 184t; military sector, 204;
natural resource and utility sector, 187, 189; SAP and, 134
Adversarial, adversary model(s), 81, 130; AI product design and, 70, 193; algorithms, 79, 81, 82;
generative adversarial networks (GAN), 35, 63, 197; visual arts and, 197
Adversarial attack, 27; blockchain and, 49; social media and, 70
Advertising, 49, 51, 60, 71, 91, 96, 195
Aethon, TUG Robot(s), 175
AEye, 86
Affectiva, 44
Agent enablers, 23t
Agent(s): AI building concepts, 23, 28; automated planning, 23; chatbots, 202–203; reinforcement,
33–34; swarm AI, 31
AGI (Artificial General Intelligence), 10–12, 11t; computer vision, 38; downsides of, 35t; Life 3.0,
17; Salesforce and, 136; social science controversies, 15; timeline for, 13
AHSES (Adjusting to Human Smart Energy System), 189, 212
AI (Artificial Intelligence): democratization of, 33, 49, 57, 73; misconceptions about, 144; overview
of, 1–3. See also Employment and AI, future of work; Impact of AI
AI ESG (Environmental, Social and Governance) considerations: AI giants considerations for,
127t–128t; AI technology, 45t; automotive sector, 168t; data considerations, 77t; ethical design
and social governance, 226t; financial sector, 184t–185t; geopolitical considerations, 105t;
healthcare sector, 177t; labor, employment, and life-long learning, 221t;
machine learning, 35t; media and arts sector, 197t; natural resource and utility sector, 191t;
openness, 94; public and military sector, 106t; related technology, 54t; reputation risk, 142t;
social data, 214t; social science, 19t; traditional companies, 151t
AI ethics and governance, 2, 14, 209–210; advisory boards, 231; Alphabet, 113; corporate boards,
role of, 233–240, 233f; data considerations, 77t, 224; digital agenda, 224; innovation incentives,
225; institutional committees, 223–224; new educational concepts, 225; participatory platform
creation, 224–225; real-time data metrics, 225–226, 226t; SenseTime Group, Ltd, 105. See also
AI ESG (Environmental, Social and Governance) considerations
AI4All, 72
AI@50, 13
AI-as-a-service (ML-as-a-service): AI-friendly design, 150–151; Alibaba, 103; Amazon, 120–125;
ANI and AGI applications, 9–12, 10t, 11t; automotive sector, 155–168; Baidu, 103–104; bots
and chatbots, 41–42, 45t; cloud technologies, 89–91; data-as-a-service, 64–66, 65t, 66t;
enterprise startups, 141–142; Facebook, 115–117; financial sector, 179–185; Gaussian
processes, 27; Google, 107–113; healthcare sector, 169–177; IBM Watson, 136–140; machine
teaching, 67–68; Microsoft, 125–128; military sector, 204–207; natural language processing
(NLP), 40; natural resource and utility sector, 187–191; openness, 93–94; product- and domain-
focused companies, 129–132; public sector, 202–204; publishing, media, and entertainment
sector, 194–197; race for dominance, 95–97; reinforcement learning, 33–34; Salesforce, 135;
SAP, 134; software development, 193–194; urban planning, 162
AI-centric design, 97, 150–151
AiCure, 173
AI-friendly (AI-centric) design, 150–151, 151t
AIMatter, Google, 108t
Airbnb, 75, 184, 211
Airborne Fulfillment Center (AFC), Amazon, 125
AirPR, 132t
Aitchison, Clare, 173
Akerlof, George, 71
Alchemy API, IBM, 138t
Alexa, 29t, 41t, 90, 104, 120–125
Alexa Accelerator, 124
Alexa Fund, 124
Alexa Prize, 123–124
Algorithm bias, 58, 61, 69–73, 77t, 79; Algorithmic Justice League, 224; Alphabet and, 113; as
bottleneck to ML, 61; government and military sector, 200, 204, 213; introducing AI into
traditional business, 145
Algorithmia, 79
Algorithmic Justice League, 224
Algorithmic surveillance, 102, 200, 203–204, 206t, 207, 211–213, 214t
Algorithms: adjusting data and algorithms, 80–81; Amazon and, 121, 124; automotive sector, 155,
157–158, 159, 165, 168t; business decisions, 145, 146, 229; chips with built-in algorithms, 87;
computer vision, 37; cybersecurity, 81–82; data governance, 224; deep learning and, 29;
Facebook and, 114, 115; fake news, 214; federated learning, 35; financial sector, 181, 183;
game theory, 23; General Data Protection Regulation, EU (GDPR), 31; Google and, 107, 110,
112; healthcare sector, 172, 173, 174; history of AI, 6t; IBM and, 137; long-short-term memory
(LSTM), 100; machine language tribes, approaches of, 24–25; natural resource and utility
sector, 187, 189, 190, 191; neural networks and, 26; new chipsets, 88; openness, 94; overview
of, 79–80; quantum computing, 48, 54; regulatory concerns, 32–33, 33t, 68–69; Salesforce,
135–136; self-learning machines, 194; social considerations, 35t; supervised, unsupervised, and
semi-supervised learning, 32–33, 33t; surveillance, 200, 206t, 211–213; Twitter and, 195;
validation of, 64; voice applications, 40. See also Algorithm bias
Alibaba, 6, 103
Allen, Paul, 11
Alme platform, 90
Alphabet, 3, 5, 96, 97, 107–113; academia, projects with, 99–100; AI applications, 111–112; AI
technologies, 110–111; automotive sector, 160, 166–167; distributed infrastructure for datasets,
50–51; drones, 44; ethics at, 113; healthcare sector, 174; research organizations, 109–110; urban
planning sector, 162; Waymo, 166–167
AlphaGo (AG), AlphaGo Zero (AGZ), 6t, 11, 29
Alpiq, 188
Amazon: academia and private sector, 99; AI-as-a-service, 90, 96; applications, 123–125; approach
to AI, 120–123; chatbots, 41t, 42; cloud services, 40; data infrastructure, 75, 75t; DeepMind
Ethics & Society (DMES), 113; domain-focused companies, 130; drones, 44; economies of
scale, 62–63; employment, future of, 216; field-programmable gate arrays (FPGAs), 88; history
of AI, 29t; open datasets, 65t; openness, 93, 94
Amazon Inspector, 124
Amazon Lex, 40, 121
Amazon Polly, 121, 124
Amazon Rekognition, 121, 124
Amazon Robotics, Kiva, 175
Amazon Web Services (AWS), 40, 112, 120
Ambient OS, 51
AMD, 165
Amy, x.ai, xvi
Analogizer(s), 25
Anderson, Chris, 43
Android, 51, 104, 121, 126t, 127
AngelAI, Amazon, 122t
ANI (artificial narrow intelligence), 9, 10t, 19
ANN (artificial neural networks), 2
Anodot, 132t
Anonymization of privacy data, 128t
Anthropomorphizing AI, 14–15
Anti-AI AI, 82
Anti-corruption, 185t
Anti-Defamation League, 64
Anti-fraud, 185t
Anti-money laundering, 184, 185t
Apache incubator, 90, 135
Apache Software Foundation, 135
API.ai, 40, 41t, 107, 108t, 112
Apollo, Baidu, 104, 127
Appio, 132t
Apple: academia and private sector, 99; AI applications, 119–120; AI-as-a-service, 96; approach to
AI, 117–119; bots and chatbots, 41–42; chipsets, 88; deep learning, 30, 31; Internet of things,
51; LSTM (long short-term memory), 29t; openness, 93
Apple Neural Engine, 88
Application specific integrated circuits (ASICs), 87, 121
AppZen, 132t
Area 120, Google, 112
Ariely, Dan, 109
ARM, 43, 87
ARM core processor, 85, 87
Artificial General Intelligence (AGI), 10–12, 11t; computer vision, 38; downsides of, 35t; Life 3.0,
17; Salesforce and, 136; social science controversies, 15; timeline for, 13
Artificial intuition, 34, 72
Artificial narrow intelligence (ANI), 9, 10t, 19
Artificial neural networks (ANN), 2
ASICs (application specific integrated circuits), 87, 121
Atari, 23, 34, 109
ATL Ultrasound Inc., 174
Atomwise, 172
Attention economy, 71, 94
Augmented reality (AR), 30t, 104, 162
Augmenting human tasks, jobs. See Employment and AI, future of work
Aurora Innovation, 164
Authoritarian regimes and AI, 211–212, 214t. See also Ethical design
Automated planning, 23
Automated tools, 66
AutoML, Google, 110
Automotive sector: chipset solutions, fragmentation of, 86; computer vision, 38; data governance,
73; data preparation, 66t; general trends, 155–156; IBM Watson, 137, 139; Microsoft, 127;
Nuance, 131; Pypestream, cybersecurity and, 130t; SenseTime Group Ltd., 105
Autonomous vehicles (cars): competitors in market, 164–168; data, algorithms and processing,
157–158; data requirements, 156–157; driverless cars, 17, 103, 163, 200; electrification, 158;
employment, impact on, 162–163; general trends, 155–156; interior redesign, 162; mobility-as-
a-service, impact on, 160–161; safety, impact on, 159–160; supplier ecosystem, 158–159;
timeline for, 163–164; traffic and energy use, impact on, 160; urban planning, impact on, 161–
162
Autonomous vehicles data exchange (AVDEX), 51
Autonomous weapon(s), 102, 113, 205, 207
Autopilot, Tesla, 163, 165–166
AVA, 157, 203
AWS Activate Program, 40
Ayasdi, 172
Azure, Microsoft, 40, 125, 127

bAbI, open data set(s), 65t


Babylon, 173, 182
BabyX, 196
Backpropagation (algorithm), 12, 26, 29, 34
BAIC Motor, 161
Baidu, 103–104; Amazon and, 121; Andrew Ng, 99; data control projects, 212; history of AI
technology, 6; self-driving cars, 127, 161
Baidu Brain, 104
Bandwagon effect, 96, 147
Bar-Ilan University, Israel, 82
Barrat, James, 13
Barzilay, Regina, 31
Basic data, DSIP, 188
Batteries, 43, 119, 189
Bayes, Thomas, 27
Bayesian(s): Bayesian ML models, 27–28; ML tribes, 24–25
Behavioral imaging, 37–38, 45t
Being More Digital, 45
Belfer Center for Science and International Affairs, 205
Bengio, Yoshua, 10, 24, 100, 125, 201
Benioff, Marc, 11t, 143
Berg Health, 172
Berkan, Riza, 147
Bezos, Jeff, 11t, 121
Big Cloud Analytics, 182
Big Data: Azure Cloud, Microsoft, 127; characteristics of, 61–62; data engines and storage, 76;
ethics and, 223–226, 226t; failure forecasting and infrastructure updates, 191; government and
military use of, 205; hardware needs, 89; insurance sector, 179–180; modeling and processing,
preparation for, 64–66; pitching AI to business, 149; Salesforce, 135; SAP, 134
Big Sur, Facebook’s Open Rack, 114
BigchainDB, 49, 50–51
Binary classification, 32
Bitcoin, 47, 48, 50, 184
Blizzard Entertainment, 65
Blockchain: AI-friendly design, 150, 151t; business applications of, 143–144; financial sector, 183,
184, 184t; military sector, 204; natural resource and utility sector, 187, 189; overview, 2–3, 47–
50, 54t; SAP and, 134
Blue River, 130
bluefrogrobotics.com, 43t
Blume-Kohout, Robin, 54
BMW, 159, 160, 162
Body Labs, Amazon, 122t
Bosch, 159
Bostrom, Nick, xxiii, xxiv, 12, 16
Bot(s): Amazon and, 123; cybersecurity, 52; data issues, 146; emotion measurement technology,
44; employment, future of, 45t, 219; Facebook and, 93–94, 113, 117; fake news, 213, 214;
financial sector, 181, 182, 184t; Google and, 108t, 112; governance, 197t; healthcare sector,
173; history of AI, 6t; IBM, 139; infrastructure management, 190; Microsoft, 126–127; Nuance,
131; openness, 93–94; overview, 41–42; public sector, 202–203; Pypestream, 130
boyd, danah, 70
BP, 140t, 190
BRAIN, America’s initiative, xxi
Brain Power, 44
BrainChip Holdings, 89
Bremmer, Ian, xi–xiii, 102
Brodock, Kate, 72
Brooks, Rodney, 84
Buddy, 43t
Bug(s), 52, 90, 187, 193, 195, 204
Business development, business developer(s), xxviii, 64t, 86, 144, 149, 235, 236, 240
Business sustainability, 210, 220, 229, 234
Buston, Olly, 200–201

CAALD (Consortium for Advancing Adult Learning & Development), 220


CaaS, 112
Cafarella, Mike, 118
Caffe, Caffe2, 40, 80, 90, 94, 117, 121
Cambrian explosion in robotics, 42–43, 45t, 206
Cambridge Quantum Computing, 54
Camera Effects Platform, Facebook, 117
Canada: government and military sector, 201; public security sector, 204; Uber, 167
Canadian Institute for Advanced Research (CIFAR), 201
Candela, Joaquin Quiñonero, 115
Capsule(s), 26–27
Capture the Flag contest, 52
CARA, 10t
Carnegie Mellon, 6t, 69, 99, 118, 119t, 121, 167
Cash application, SAP, 134
Cassandra, 76, 168
Center for Artificial Intelligence, UN, xxv
Centers for Disease Control and Prevention (CDC), 171
Central Processing Unit(s) (CPUs), 85–86, 89, 105, 121
CERN, 100
Chatbase, Google, 112
Chatbots: employment, future of, 45t, 219; financial sector, 181, 182, 184t; healthcare sector, 173;
Nuance and, 131; openness, 93–94; overview of, 41–42; public sector, 202–203. See also Bot(s)
Chatrath, Vishal, 28
Chime video-conferencing suite, 124
China: AI technologies, overview, 96, 100, 101–102; Alibaba, 103; automotive sector, 127, 158,
159, 161; Baidu, 103–104, 127; Google and, 109; history of AI, 6t; Huawei, 105; Iflytek, 102;
news, media, and entertainment companies, 10t; ObEN, 196; robotics, 42; SenseTime Group
Ltd., 105; surveillance and influence, 203, 212; Tencent, 104; Xiao Ice, 127
Chinese Room, 15–16
Chips/chipsets: Apple and, 117–118, 120; development of silicon, 83–84; enterprise startups, 141;
fragmentation of chipset solutions, 84–88; Google and, 107; machine teaching-as-a-job, 67; new
algorithms, 88; quantum computing, 54; Tesla and, 165; trends, 58, 83–85
Chollet, François, 111
CIA, 205
CIFAR (Canadian Institute for Advanced Research), 201
Citizen Score, 212, 214t
CivilServant software, 69–70
Clark, Wesley, 28
Classification, binary, multi-class, 32
Clea, SAP, 134
Clickworker GmbH, 67
Clinton, Hillary, xix
Cloud: AI-as-a-service, 89–91; Alibaba, 103; Amazon, 121–123, 124; Apple, 118t, 119–120;
Baidu, 104; boardroom decisions, 234; chipset solutions, 86–88; cloud-as-a-service, 112;
current trends, 57, 96; data infrastructure, 75–76, 77; financial sector, 180, 182; Google, 107,
109, 110, 111, 112, 134; IBM, 138, 138t; large companies with cloud services, 40; Microsoft,
127; natural resource and utility sector, 188, 190; openness, 94; quantum computing, 53–55;
Salesforce, 135; SAP, 134; smart networks, 91; surveillance and influence, 212
Cloud Natural Language API, Google, 90, 112
Cloudera, 76–77, 90
Clune, Jeff, 31
Clustering, 33, 80
CNTK, 40, 90
Cognea, IBM, 138t
Cognicor, 182
Cognitive application programming interfaces, 112
Cognitive computing (CC), 2, 24, 138t
Cognitive Scale, 132t, 141, 176
Cognitive Services, 40
Cognitoys, 171
Cognotekt, 180
Collaborative leadership in AI, 171
Comma.ai, 65
CompStat, 204
Computer Vision, Machine Vision, 37–38; Amazon, 121, 122t; Apple, 119t; automotive sector,
155, 165; Baidu, 104; Blue River, 130t; chipset solutions, 86; development of, 2; domain-
focused ANI companies, 10t; Facebook, 114t, 116; Google, 108t, 110; new algorithms, 88; open
data sets, 65t; public sector, 203; race for AI dominance, 96, 102, 108t
Concur, SAP, 134
Connection Machine, 84
Connectionists, 24
Connectivity standards, 146, 164, 229
Consortium for Advancing Adult Learning & Development (CAALD), 220
Construction Challenge, 18
ControlExperts, 180
Conversational AI platform, 18–19, 30t, 39–40, 122t, 131, 134, 138t, 147, 182
Conversica, 182
Convolutional Neural Network (ConvNet, CNN), 27, 109, 158, 170
Core ML, Apple, 120
Core NLP suite, Stanford, 40
Corporate governance. See Governance
Cortana, 41, 123, 126
Cortex, 195
Cortica, 38
Courant Institute of Mathematical Sciences, 115
Coursera, 99
Courville, Aaron, 24
CPUs (Central Processing Unit(s)), 85–86, 89, 105, 121
Craib, Richard, 183
CRM, 135
CrowdAI, 66t
Crowdflower, 66t
Crowdsourcing data, information, 42, 62–63
Cruise Automation, 164
Cyber Grand Challenge, DARPA, 52
Cybernetics, 211
Cybersecurity: adversarial models, 81; Amazon and, 122, 122t, 123, 124; automotive sector, 157;
blockchain and, 48–50; boardroom decisions, 229, 230–231, 234, 235, 236, 237, 238; data
access, 76t; data management best practices, 74t; data preparation and, 66t; Defcon security
conference, 52; domain specialists, 132t; enterprise startups, 141; facial recognition, 119t; fake
news, 213, 214; governance considerations, 35t, 54t, 77t, 151t, 185t, 206t; government and
military sector, 200, 203–204, 205, 206t; IBM and, 137, 138; insurance sector, 179, 185t;
Internet of things (IoT), 3, 48, 146; ML use cases, 81–82; overview of, 52; pitching AI to
businesses, 150, 151t; Pypestream, 130t; quantum computing and, 54; regulatory considerations,
68; smart networks, 91
Cyberwarfare, 54t, 105t, 214t

Daimler, 165
Dark Blue Labs, Google, 107, 108t
Dark data, 65, 119t
Darktrace, 132t
DARMS (Dynamic Aviation Risk Management Solution), 204
DARPA (Defense Advanced Research Projects Agency), 5t, 31, 42, 52, 89, 206
DARPA Robotics Challenge, 42
Dartmouth Conference (1955), 5, 18, 28t
Darwin, Charles, 13–14, 16
Data analytics, 69, 138, 144, 164, 179, 181
Data Artisans, 90
Data collection: best practices, 74t; bias in datasets and algorithms, 70; boardroom decisions, 236;
Internet of things, 51; preparing for modeling and processing, 64–66. See also Data governance;
Data privacy
Data control, 209; fake news, 213–214; surveillance and influence, 211–213
Data custodians, 224
Data Exchange Singapore (DEX), 51
Data governance, 73–74, 74t, 77t, 151t; AI misconceptions, 144; boardroom discussions about, 238;
data custodians and, 224; ethical design and, 226t; in healthcare sector, 177t; in public security,
204
Data graphs, data flow graphs, 110–111
Data metrics, 225–226, 226t
Data preparation, 57, 64–66, 96, 134, 146
Data privacy, 34–35, 35t, 45t, 127t, 128t; Amazon and, 124; Apple and, 96, 119–120; automotive
sector, 168t; best practices, 74t; bias in datasets and algorithms, 70; blockchain, 144; differential
privacy, 96, 119–120; ethical design, 226t; Facebook and, 116; financial sector, 181, 185t;
Google and, 96, 111; healthcare sector, 175, 177t; IBM Watson, 175; Internet of things and, 51,
54t; life-long learning, 220; public security sector, 203–204; regulatory concerns, 61, 68–69,
77t; surveillance and influence, 212, 214t
Data processing, 51, 62, 75, 77, 77t, 87, 90
Data Sift, 66t
Data & Society, datasociety.net, 224
Data vaults, 48, 176
Datalogue, 66t
Data-preparation-as-a-service, 64–66, 96
DataRPM, 156–157
Data-training-as-a-service, 65, 67, 80, 81, 96, 97, 134, 219
DaVinci Institute, 163
Davos, xxii, 103
Decentralized AI actors, 23, 31, 48, 50, 219
Decision Forest, 125
Decision theory, 23
Deep Blue, 5, 6, 17
Deep Genomics, 174
Deep learning (DL), 2, 28–31, 52, 126t
Deep Neural Network(s) (DNNs), 26, 27, 31, 38, 117, 194–195
Deep Scalable Sparse Tensor Network Engine (DSSTNE), 93
DeepCoder system, Microsoft and University of Cambridge, 194
DeepDream, Google, 197
DeepMind, 108t, 109–110; academia and the private sector, 99, 100; data training, 65; deep
learning, overview of, 29; demand management, 190; ethics and safety board, 113; openness,
93, 94; reinforcement learning, 34; software development, 193
DeepMind Ethics & Society (DMES), 113
DeepMind Health, 176
Defcon Security Conference, 52
Defense Advanced Research Projects Agency (DARPA), 5t, 31, 42, 52, 89, 206
Delphi Automotive, 159, 160
Democratization of AI, 33, 49, 57, 73
Dennett, Daniel, 12
Department of Defense, US, 207. See also Military sector
Deutsch, David, 11, 14
DEX (Data Exchange Singapore), 51
Didi, 160–161
Diffbot, 66t
Differential privacy, Apple, 96, 119–120
Digital agenda, 224
Digital Jobs and Skills Coalition, 202
Digital Reasoning Systems, 172
Digitizing European Industry, 201–202
Dimension reduction, 33
Disinformation, fake news, false news, 213–214
Distributed energy resource (DER), 188, 189
Distributed infrastructure of data sets, 47, 50–51
Distributed ledgers, blockchain, 47, 184
Distributed System Implementation Plan (DSIP), 188
Diversity, 58, 72, 107, 149, 151t, 195
DMES (DeepMind Ethics & Society), 113
DNN Research, Google, 108t
DNNs (Deep Neural Network(s)), 26, 27, 31, 38, 117, 194–195
Do.com, 124
Doctoroff, Dan, 162
Domingos, Pedro, 24–25, 27, 79, 111
Dong, Yu, 104
Dorsey, Jack, 195
Dreamquark, 181
Drive.ai, 155
DrivePX2, 87
Driverless cars. See Autonomous vehicles (cars)
Droid, Motorola, 51
Drones, 43–44, 45t; Amazon and, 121, 124–125; chipset solutions, 87; data collection, 65;
employment opportunities, 163; military uses, 205, 207; public sector uses, 203–204; software
for, 10t
DSIP (Distributed System Implementation Plan), 188
DT, the New Zealand research agency, 82
DuerOS, Baidu, 104
Dumi, 104
Dun & Bradstreet, 60
Durov, Pavel, 212
D-Wave, 54
Dynamic Aviation Risk Management Solution (DARMS), 204

Ease of use, user interface, 147


Echo, Amazon, 41t, 42, 104, 120, 123
Economies of scale, data, 62–63
Economies of scope, data, 63
Education and AI: AI workforce diversity issues, 72; computer vision, 37–38; domain-focused ANI
companies, 10t; Google and, 110; lifelong learning, 7t, 199, 209, 215, 218–221; new
educational services, 225, 226t; online education options, 217; policy decisions and, 213;
regulatory considerations, 68; self-education, 220
Einstein, Salesforce, 135, 138
Ekman, Paul, 45
Electric vehicles, 155, 158, 159, 160, 168t, 200. See also Autonomous vehicles (cars)
Embodiment, 10, 38
EMMA, Department of Homeland Security’s Citizenship and Immigration Services, 203
Emotion measurement technology, 37, 44–45
Employment and AI, future of work: automotive sector, 161, 162–163, 168; boardroom discussions
about, 235, 239–240; China Brain Project, 212; future of employment, 209–210, 218–221, 225;
government and military sectors, 201, 202; job augmentation with AI, 147–148; paradigm shift,
need for, 215–218; pitching AI to businesses, 149–150; workforce trends, 102, 143
Enablers of AI, 22fig, 57–58. See also Algorithms; Data; Hardware; Openness
Energy data standardization, 189
Enigma, 66t
EPU II (Emotion Processing Unit), 44–45
Equivio, Microsoft, 126t
E-residency, Estonia, 204
Essa, Irfan, 38
ET Medical Brain, Alibaba, 103
Ethical design, 14; adopting a digital agenda, 224; AI/ESG considerations, 226t; Alphabet, 113;
Chinese AI development, global impact of, 105; corporate governance, 77t, 128t, 168, 209–210,
231, 233–235, 233f; data custodians, 224; diversity of teams, 72; educational concepts, 225;
future of AI, 2; innovation incentives, 225; institutional ethics committees, 223–224;
participatory platforms, creation of, 224–225; real-time data metrics, 225–226
Ethicon, 174
Etzioni, Oren, 24
Euclid analytics, 96
Evolution: of algorithms, 64; Darwin’s theory of, 13–14; emotion awareness, 45; of intelligence, 12,
16; Life 3.0, 16–17; of machine learning, 67; neural networks, 26, 194; of self-driving cars, 163;
self-learning, NEAT and, 194; self-play and, 34; of work and business, 147
Evolutionaries, 25
EWeLiNE, 189
Exclone, 147
Expert system(s), 7
Explainable AI (XAI), 31
Exploration and exploitation metric(s), 71
Explorys, IBM, 138t
Ezrachi, Ariel, 69

F8, Facebook’s conference, 93–94


FAA (Federal Aviation Administration), 44
Facebook: AI applications, 116–117; approach to AI, 113–115; artificial general intelligence (AGI),
10–11; artificial narrow intelligence (ANI), 9; blockchain, 48, 49; chatbots, 41t, 42; chipsets,
87; cybersecurity, 82; data governance, 74; data infrastructure, 75, 77; FAIR (Facebook AI
Research), 93, 114, 115–116; fake news, dataset and algorithm bias, 70, 71, 213–214; full-stack
AI companies, 96; generative adversarial networks (GANs), 63; Lumos computer vision, 38;
openness, 93–94
Face.com, Facebook, 114t
Facial recognition: Apple, 88, 118t, 119t; Baidu, 103; Facebook, 114t, 116; healthcare sector, 173;
military sector, 204; safety and ethics, 105; surveillance and influence, 212
@Factor zero, 72
FAIR (Facebook AI Research), 93, 114, 115–116
Fake news, false news, and misinformation, 213–214
False amplifiers, 213–214
Farley, Belmont, 28
FBLearner Flow, Facebook, 114
Federal Aviation Administration (FAA), 44
Federated learning, 35, 109
Feedback loop(s), 2, 25, 120, 145, 231
Field-programmable gate arrays (FPGAs), 87–88
Firing of neurons, 25–27, 28t, 89
First Amendment, xix
Flash Crash, 9
Flex, Amazon, 216
Flickr, 145, 194
Ford, 104, 123, 127, 163, 164–165
Forward-looking governance, 217
FPGAs (field-programmable gate arrays), 87–88
Free will, 14
Frey, Thomas, 163
Frost & Sullivan Report, AI in healthcare, 171
Fukoku Mutual Life, 181
Full-stack AI company, companies, 3, 51, 95–97; Alphabet/Google, 107–113; Amazon, 120–125;
Apple, 117–120; ethical design and governance, 209–210, 223–226; Facebook, 113–117;
Microsoft, 125–128; Tesla, 165
Fusion APS, 132t
Future of Life Institute, 16
Future of work. See Employment and AI, future of work

Gamalon, 28
Game theory, 23–24, 190
Games: data training with, 65; DeepMind and, 29, 34, 109; demand management solutions, 190;
domain-focused ANI companies, 10t; Facebook and, 117; IBM Watson, 136; Turing Test, 18–
19
GANs (generative adversarial network(s)), 35, 63, 197
Gates, Bill, xxiii
Gates, Melinda, 72
Gaussian process (GP), 27–28
GDPR (General Data Protection Regulation, EU), 31, 35t, 50, 51
GE (General Electric), 96, 123, 131, 189
Geely, 161
Gehlhaar, Jeff, 87–88
Genee, Microsoft, 126t
General Data Protection Regulation, EU (GDPR), 31, 35t, 50, 51
General Electric (GE), 96, 123, 131, 189
General Motors (GM), 139, 161, 164–165
General-purpose AI, DL, and ML, 67, 84, 86, 96, 131
Generative adversarial network(s) (GANs), 35, 63, 197
Geo-location discovery, 190–191
Geometric Intelligence, 12, 27
Georgia Tech, 38, 89
Ghahramani, Zoubin, 27–28
Glasberg, Roy Geva, 109
Gliimpse, Apple, 119t
Gluon, 94
GM (General Motors), 139, 161, 164–165
Go, 6, 11, 23, 29, 34, 94, 108t
Goertzel, Ben (Annual AGI conference), 13
Google: academia and the private sector, 99, 100; AI applications, 111–112; AI approach, 107–109;
AI ethics, 112, 231; AI research organizations, 109–110; AI technologies, 110–111;
AI-as-a-service model, 90–91; AlphaGo, 11–12; automotive sector, 163–164, 166–167;
bias in datasets and algorithms, 69–73; chatbots, 41t, 42; chipset solutions, 85, 86, 87; CIFAR
program, 201; cloud services, 40; cyberattacks, 81; data, economies of scale, 62–63; data
infrastructure, 75, 76t; deep learning projects, 29; DeepMind, reinforcement learning, 34;
drones, 43; emotion measurement technology, 44; federated learning, 35; full-stack AI
companies, 96; Gaussian process, 27; healthcare sector, 170, 176; Internet of things, 51;
machine teaching, 67; military sector, 207; music, Magenta, 197; natural resources and utility
sector, 188–189, 190; openness, 93, 94; open-source tools, 57; past, present and future of AI, 1–
2, 5–6; quantum computing, 54–55; SAP HANA partnership, 134; startups, 11t; surveillance
and influence, 212; visual arts, DeepDream and Sketch RNN, 197
Google Assistant, 123
Google Brain, 27, 72, 110
Google Cloud, 134
Google Glass, 44
Google Home, 111
Google News, 70
Google.ai, 110
Gordon, Vitaly, 143
Governance: AI giants, 128t; AI misconceptions, 144; AI technologies, 45t; automotive sector, 165,
168, 168t; blockchain, 49, 54t; boardroom discussions, 233–240; current issues in, 7t;
cybersecurity, 54t; data collection and sharing, 57, 58, 60–61, 72–74, 77t, 238; ethical design
and social governance, 209–210, 223–226, 226t; financial sector, 185t; future of employment,
217, 221t; future trends, xxvii, 1; geopolitics of AI, 105t; healthcare sector, 177t; Internet of
Things, 54t; key performance indicators, 238–239; machine learning, 35t; media and arts sector,
197t; military sector, 204, 206t; natural resources and utilities sector, 191t; openness, 94t;
quantum computing, 54t; reputation risk, 141t; social data, 214; social science issues, 19t;
strategic risk management, 237–238; successful business transformation and, 229–232, 232t;
surveillance and influence, 211; technology environment, 237; for traditional companies, 151t
GP (Gaussian process), 27–28
GPUs (graphical processing units): AI-as-a-service, 90; algorithms, 88; Amazon and, 121; chipset
solutions, 86, 87; computer vision, 37–38; Facebook, Big Sur, 113–114; future trends, 89;
Microsoft, Azure Cloud, 127; Moore’s Law, 85; neural networks, 26; Tesla and, 165–166
Gradient Ventures, Google, 107, 109
GRAIL, 124
Granata Decision Systems, Google, 108t
Graph analytic processor(s), 89
Graphcore, 67, 88
Graphical processing units. See GPUs (graphical processing units)
Graphistry, 132t
Great Wall Motors, 127
Green Button Initiative, 189
Greene, Diane, xxi
Grid search, 80
GridSense, Alpiq, 188
Groq, 67
Guestrin, Carlos, 31, 99, 119t

H2O, 137
H2O.ai, 172
Hadoop, 75–77, 96, 118
Hall, Wendy, 200–201
Halli Labs, Google, 108t
HANA, SAP, 134
Harari, Yuval Noah, 14
Hardware: Amazon, 121, 122, 124; Apple, 120; automotive sector, 158, 161, 166; Baidu, 104;
chipset solutions, 85–89; cloud technologies, 89–91; cloud-as-a-service (CaaS), 112; Hadoop,
77; IBM Watson, 137, 138; Life 1.0, 16; machine teaching-as-a-job, 67; military sector, 205;
natural resources and utility sector, 189; new algorithms, 88; Open Compute Project, 113–114;
quantum computing, 54; semiconductors, 83–85; smart networks, 91
Harris, Sam, 14
Harris, Tristan, 71
Harrison, Don, xxii
Hart, Peter, 25
Harvard, Belfer Center for Science and International Affairs, 205
Harvest.ai, Amazon, 122–123
Hawking, Stephen, xv, xxiii
Health Insurance Portability and Accountability Act (HIPAA), 124
Hebb, Donald, 28
Heckerman, David, xxiv
HelpMate robot, 174–175
Heuerman, David, 25
Heuristic, 2, 23
HiBot USA, 191
Hierarchical Identify Verify Exploit (HIVE) processor, 74, 77, 89
High-altitude Internet balloons, 27
High-performance computing (HPC), 52
Hillis, Danny, xxiv
Hinton, Geoff, 24, 26, 29, 34, 99, 201
HIPAA (Health Insurance Portability and Accountability Act), 124
HiQ, 10t, 131t
HIVE (Hierarchical Identify Verify Exploit) processor, 74, 77, 89
Hochreiter, Sepp, 29
Hodgkin, Alan, 25
Hofstadter, Douglas, 25
Holland, John, 25
HomeKit, Apple, 51
Homo sentiens vs. Homo sapiens, 14, 19t
HoneyComb, 96
Hoshi Shinichi Literary Award, 196
Haugeland, John
HPC (high-performance computing), 52
Huang, Jen-Hsun, 125
Huawei, 105
Huggable robot, MIT, 43t
Huxley, Andrew, 25
Hwang, Tim, 70

IBM: AI acquisitions, 138t; AI-as-a-service model, 90, 130; cognitive computing, 24; Deep Blue,
17; financial sector, 180, 181; healthcare sector, 175–176; history of AI, 5–6; legacy services,
96, 137; ML focus, 133–134; natural language processing (NLP), 40; neuromorphic processors,
89; quantum computing, 40–41; Watson and reputation risk, 136–139; Watson applications, 139
IBM Global Entrepreneurship, 40
Identity resolution, 47
Identity verification, Know Your Customer (KYC), 184
IEEE, 174
IEEE Conference on Data Mining, 25
Iflytek, 102
ImageNet, 29, 37, 62, 65t
Impact of AI: algorithms, impact of, 69, 79–80; autonomous vehicles, impact of, 159–163;
boardroom decisions and, 229–230, 232, 232t, 234, 238; in Continental Europe, 202; Deep
Learning (DL), impact of, 31; diversity issues, 72–73; economic impact, 113, 137, 151t, 162–
163, 199; environmental issues, 7t, 35t, 45t, 77t; ethical issues, 137, 223–226, 226t; in financial
sector, 181–182; governance issues, 35t; in government and military sector, 201, 202, 205; in
healthcare sector, 175; on organizational design, 67–68; past, present and future of AI, 1–3;
reinforcement learning, impact of, 33; social issues, 7t, 12, 13, 45t, 105t, 127t, 151t; on
surveillance and influence, 212; in United Kingdom, 201; in utility sector, 190. See also
Employment and AI, future of work
Inception, convolutional neural network, 109–110
ING, 160
Init.ai, Apple, 119t
Innovation: in automotive sector, 164; China’s investment in, 101; data collection and capture, 66;
data governance and, 73; drones, 43–44; employment and, 209, 215–221; in financial sector,
180, 181; in hardware, 85; incentives for, 225; machine learning, 35t; machine-teaching-as-a-
job, 68; pitching AI to a business, 149; regulations, impact on, 210; SAP Leonardo, 134; self-
programming software, 194; user interface and, 147; in utility and natural resource sector, 191t
Intel: Amazon partnership, 121; automotive sector, 127, 159; Baidu partnership, 104; fragmentation
of chipset solutions, 86–87; Loihi, 89; machine teaching-as-a-job, 67; Mobileye, 159; multicore
designs, 85
Intelligence: artificial general intelligence (AGI), 9–12, 13; artificial narrow intelligence (ANI), 9,
10t; artificial super intelligence (ASI), 12–13; of computers, 18–19; machines vs. people, 15–17
Intelligence, human, 2, 13, 15–17, 29, 62, 109
Intelligent machines, smart machines, 15–19, 220
Intelligent Medical Imaging, Inc., 173
Intelligent processing unit (IPU), 88
Interface: Amazon AI applications, 124; application programming interfaces (APIs), 40, 112; boardroom
decisions, 230, 232t, 237–238; chatbots, 182; Deep Learning applications, 30t; domain-focused
companies, 129; ease of use, 147; General Data Protection Regulation (GDPR), 50; Internet of
things, 51; machine teaching-as-a-job, 67; natural resource and utility sector, 187; regulatory
considerations, 68; unstructured data, handling of, 66; voice interfaces, 114t, 118t
Interior redesign, automotive, 162
International Conference on Computational Creativity, 196
Internet of Things (IoT), 3, 48, 51
Intuition, artificial, 34, 72
Intuition Machine, 25, 29
Intuitive Surgical, 174
Invoa, 132t
IoT (Internet of Things), 3, 48, 51
IP, 6t, 11t, 50, 79, 89, 206
iPhone, 29, 41t, 51, 117, 120, 165
IPU (intelligent processing unit), 88
IQ of computers, machines, 2, 12, 18–19
Ivakhnenko, Alexey G., 28t

Jaakkola, Tommi, 31
Jackrabbot, robot, Stanford, 157
Jaitly, Navdeep, 24
Japan’s National Institute of Informatics, 19
Jassy, Andy, 124
Jeopardy!, 6t, 136, 137
JetPac, Google, 108t
Jetson TX1, 43, 87
JibbiGo, Facebook, 114t
J&J, 174
Jobs. See Employment and AI, future of work
Jordan, Michael, 25
Judicata, 96
Julie Desk, 131t

Kaggle, Google, 79, 108t, 125


Karpathy, Andrej, 68, 165
Ka-shing, Li, 183
Kasparov, Garry, 5t, 6t, 17
Keller, Jim, 165
Kelly, Kevin, 16
Kenny, David, 139
Keras, François Chollet, 111
Kespry, 43
Keynes, John Maynard, 218
Khosla, Vinod, 215
Khosla Ventures, 11t, 135, 172
Khosrowshahi, Dara, 168
Kimono Labs, Palantir, 66t
Kirobo robot, 43t
Kirin 970, 105
Knight, Will, 31
Know Your Customer (KYC), 184
Knowledge engineering, 23
Knowledge representation, 23
Krafcik, John, 166
Kurzweil, Ray, xxiii

Labeling, tagging: Cortica, 38; data preparation, 64–65; dataset openness, 94; DL models, 12;
economies of scale, 63; machine teaching-as-a-job, 67–68; neural networks, 26; pitching AI to a
business, 149; Salesforce, 135; scene labeling, 158; semi-supervised learning algorithms, 33;
supervised learning algorithms, 32; unsupervised or predictive learning, 33
Lattice Data, Apple, 118, 119t
Launchpad, Google, 109
Law Train, 204
Lawrence Berkeley National Lab, 160
LeCun, Yann, 10, 21, 24, 34, 63, 87, 115, 201, 214
Lee, Kai-Fu, 102
Legacy IT, systems, 96, 133, 137, 141, 202
Lemonade, 180
Leonardo, SAP, 134
Levandowski, Anthony, 5t, 167
Levesque, Hector, 18
Levy-Rosenthal, Patrick, 45
Lex, 40, 121, 124
Lexalytics, 132t
Li, Fei-Fei, 72, 109
Li, Jia, 109
Lidar/LIDAR (Light/Laser Detection and Ranging), 5t, 158, 159, 165, 167
Life 3.0, 13, 16–17, 19t
Life-long education, 209, 220–221
Lisp machine, 84
Liu, Qingfeng, 102
Lloyd, Seth, 54
Location safety, 157
Loihi, 89
Long short-term memory (LSTM), 29, 100, 117
Long-range anti-ship missile (LRASM), 102
Lovelace Test, 18
LRASM (long-range anti-ship missile), 102
LSTM (long short-term memory), 29, 100, 117
Luckerson, Victor, 111
Luminata, 86
Lumos computer vision platform, 38
Lyft, 155, 156, 160, 161, 167, 216

Machine learning (ML). See AI-as-a-service (ML-as-a-service)


Machine learning tribes, 24–25
Machine teaching, 67–68, 77t
Machine translation, 40, 62, 109, 112, 114t
Magenta, Google AI creativity, 197
Magic Pony Technologies, Twitter, 195
Maluuba, Microsoft, 126t
Manning, Christopher, 110
Marcus, Gary, 12, 18, 27, 100
Marcus Test, 18
Masquerade Technologies, Facebook, 114t
Master algorithm, 24–25
Matias, J. Nathan, 69, 70
Matias, Yossi, 109
May, Rob, 39–40, 62, 63
McAfee, Andrew, 244, 245
McCarthy, John, 5, 18
McConaghy, Trent, 49
McCulloch, Warren, 28
McGill School of Computer Science, 116
McKinsey Global Institute, 62, 143
MD Anderson Cancer Center, 176
Meade, Brendan, 110
Mechanical Turk, Amazon, 62–63
Mehta, Puneet, 42
Memory, computer: Deep Learning and, 17; drones, 43; GPUs, 86; graph analytic processors, 89;
long short-term memory (LSTM), 100; memory augmented networks, 109; neural networks, 26,
29t, 30; new algorithms and, 88; semiconductors, Moore’s Law and, 84; World2Vec,
Facebook’s FAIR team, 115
Meng, Li, 212
Merck, 124, 172
Merge Health, IBM, 138t
Merlon Intelligence, 129
MESA Standard Alliance, 187
MetaMind, 135
Michelangelo, Uber, 168
Microsoft: academia and private sector, 100; AI applications, 126–127; Amazon partnership, 121;
approach to AI, 125, 126t; on autonomous weapons, 207; Baidu partnership, 104, 127; Bing,
116, 127; chatbots, 41t, 126–127; chipset solutions, 85; CIFAR partnership, 201; cloud
technologies, 91; Cortana, 123; datasets, 63; ethical design and governance, 224, 231; field-
programmable gate arrays (FPGAs), 87; full-stack companies, 96; healthcare sector, 176;
history of AI, 6t; machine teaching, 67; Mobiliya Inc. partnership, 141; openness, 94;
Partnership on AI, 113; quantum computing, 54, 55; reputation risk, 140t; self-programming
software, 193–194
Microsoft Bot Framework, 126–127
Microsoft (MS) Cognitive Services, xxii, 40
Microsoft Healthcare NEXT, 176
Microsoft HealthVault Insights, 176
Microsoft’s Skype Bot Lab, 127
Mighty AI, 67, 86
Military-Civil Fusion Development Commission, 102
Military sector: AI ESG considerations, 45t, 206t; Baidu, 104; Chinese military use of AI, 102, 105,
212; cybersecurity, 89; drones, 43–44, 207; governance issues, 7t; history of AI, 5t; robotics and
AI, 205–207; United States use of AI, 204–205
MindSphere cloud program, Siemens, 190
Minsky, Marvin, 5, 18
Misinformation, false/fake news, 213–214
MIT: Connection Machine, 84; on dataset privacy, 68; drones, 207; emotion measurement, 44; IBM
partnership, 136–137; intelligence research, 18; Lisp Machine, 84; machine learning research,
31; Media Lab for Civic Media, 69; PAIR (People + AI Research Initiative), 110; robot, MIT
Media Lab, 43t; software development, 193; Technology Review, 31, 172, 173
MIT IBM Watson AI lab, 137
Mitchell, Tom, 24
ML-as-a-service. See AI-as-a-service (ML-as-a-service)
Mobility-as-a-service, 160–161
Mobiliya Inc., 86, 96, 141
Modularity of DL, 195
MogIA, xix
Molly, Sense.ly virtual assistant (VA), 173
Moments, Facebook, 116
Monte Carlo tree search, 29
Moodstocks, Google, 107, 108t
Moore, Gordon, 83–84
Moore’s Law, 54, 83–85
Motion, 122t, 167
Motionscloud, 180
Movidius, 86
msg.ai, 42
Muggleton, Stephen, 24
Muller, Urs, 31
Multi-class classification, 32
Multicore (chipset) design, 85
Musimap, 44
Musk, Elon, xvii–xviii, xxiv, 11t, 93, 109, 158, 165–166, 193
MXNet, 90, 121–122

Nadella, Satya, 127
Nao robot, 43t
Narrow AI, 2–3, 6, 7, 47, 83
National Academy of Engineering, 224
National Robotics Engineering Center, Carnegie Mellon, 99
Natural language generation (NLG), 39
Natural language processing (NLP), 39–41, 42, 45t, 96, 122, 124, 126t, 131, 137, 196
Natural language stack, 39–40
Natural language understanding (NLU), 39, 108t, 121, 195
Nervana Solutions, 191
Nervana’s NNVM project, 67, 86
Nest, 51, 189
Net promoter score (NPS), 149
Netbreeze, Microsoft, 126t
Netflix, 75
Neural Machine Translation, 109, 112
Neural network processing unit (NPU), 105
Neural network(s) (NN): Apple and, 120; artificial general intelligence (AGI) and, 11; artificial
neural networks (ANN), 2; automotive sector, 158; avatars, 196; chipset solutions, 86, 87;
computer vision, 37–38; Convolutional Neural Network (ConvNet / CNN), 27, 158, 170; deep
learning (DL) and, 30, 31; Facebook and, 114, 117; Gaussian processes, 27–28; generative
adversarial networks (GAN), 35; Google and, 108t, 109, 110,
197; healthcare sector, 170, 173–174; history of AI, 2, 7; machine learning (ML) tribes, 24–25;
NEAT (neural networks through augmented topologies), 194; Open Neural Network Exchange,
94; openness, 94; overview of, 25–27, 28t, 29t; Recurrent Neural Network (RNN), 27;
reinforcement learning (RL) and, 34; Salesforce, 135; self-programming software, 194; software
2.0 and, 68; transfer learning, 34; Twitter, 194–195; visual arts sector, 197
Neural networks through augmented topologies (NEAT), 194
Neocortex, 25–26
Neuromedical Systems Inc., 174
Neuromorphic computing, 2, 52, 85, 89, 137
Neuromorphic engineering (NE), 2
Newell, Allen, 5
News feed, 114, 116, 214
NexRad, open dataset(s), 65t
Next IT, 173
Ng, Andrew, 2, 99
Nilsson, Nils J., 15
Nina, Nuance virtual assistant (VA), 131
NLG (natural language generation), 39
NLP (natural language processing), 39–41, 42, 45t, 96, 122, 124, 126t, 131, 137, 196
NLU (natural language understanding), 39, 108t, 121, 195
Non-Executive Director(s), (NEDs), 100, 223
Northrop Grumman, 89
Norvig, Peter, 62, 109
Novauris Technologies, Apple, 118t
NPU (neural network processing unit), 105
Nuance, 96, 131, 139
Nudge, AI nudge, 70
Numerai (numer.ai), 183
Numerate, 172
nuTonomy, 160
Nvidia: Amazon partnership, 121, 125; automotive sector, 31, 86, 87, 159, 161, 165; chipset
solutions, 86, 87; cloud technologies, 90; drones, 43; Facebook partnership, 114; Mobiliya Inc.
partnership, 141

Obama, Barack, 1, 37, 199, 204, 206t, 207, 224
Obama AI report(s), 101–102, 200, 206t
ObEN, 196
Object recognition, 11t
Oculus, 116
Ocean protocol, 51
Octo Telematics, 182
Office of Science and Technology Policy, White House, 102
Oliveira, Luke de, 64
One Belt, One Road strategy, 101–102
O’Neil, Cathy, 69
Open Compute Project, Facebook, 113–114
Open Neural Network Exchange (ONNX), 94
Open Source Framework, 23t
Open Street Map, 65t
OpenAI, 81, 93, 100, 165, 231
Openness in development of AI, 57, 93–94, 218
Optimization, 25, 26, 27, 30t, 54, 79, 88, 108t, 167
Optimus Prime, Einstein Salesforce, 135
Oracle, 133
Orange Button, solar data, 189
Orbeus, Amazon, 122t
O’Reilly, Tim, 216
Otto, Uber, 5t, 167
Ovum, 64
Oxford Future of Humanity Institute, xxiii
Ozlo, Facebook, 114t

Page, Larry, xxiii
PAIR (People + AI research initiative), Google, 110
Palihapitiya, Chamath, 136
Pan-Canadian AI strategy, 201
Pandora, 2
Paro robot, 43t
ParseHub, 66t
Participatory platform, 224–225
Partnership on AI, DeepMind Ethics & Society (DMES), 113
Pascal VOC, open dataset(s), 65t
Patent and Trademark Office, US, 79
Patentable algorithms, 79
Patki, Neha, 68
Patterson, Anna, 107
Payment Services Directive (PSD2, EU), 50
Penn Program in Networked and Social System Engineering, 70
Pensiamo, 132t
People + AI research initiative (PAIR), Google, 110
Pepper Robot, 43
Perceptio, Apple, 118
Perdix drones, 207
Perez, Carlos E., 25, 29–30, 67, 72
Pineau, Joelle, 116
Pinterest, 75
Pitts, Walter, 28
Pixel phone, Google, 111, 120
Poker, 6t, 23
Politician(s): ethical design and governance, 223–226, 226t; geopolitics of AI, China vs. USA, 101–
105; innovation vs. employment, paradigm shift, 215–218; response to fake news and
propaganda, 127t, 213–214; surveillance and influence, 211–213
Polly, 40, 121, 124
PowerPC, 85
Pratt, Gill, 42, 206
Preact, 131
Pre-built algorithms, 124
PredictionIO, 96, 131, 135
Predictive learning, 33
Predii, 96
Predix, 132t
Preparation of data, 57, 64–66, 96, 134, 146
Prime Air, Amazon, 121, 124
Princeton University, 40
Privacy. See Data privacy
Probabilistic software, 3, 25, 28, 53–54, 63, 173, 179
Problem formulation, 58, 146–147
Profit maximization, 60, 71, 120, 130, 133, 147, 161, 164, 180, 230
ProjectQ, Google, 112
ProPublica, 213
Prowler, 28
Pypestream, 130t, 131t
Python, 65, 121, 137
PyTorch, 94

Qualcomm, 85–88, 89, 114, 117
Quanergy, 159
Quanta, 114
Quantum Computers, 3, 12, 48, 52–55, 85, 112
Qubit, 53, 54
Quinlan, Ross, 24

Rajagopalan, Hari, 112
RAM (Random Access Memory), 84
Random Forests, 125
RankBrain, 109
Raspberry Pi, 110
Raven Tech, 104
Ré, Christopher, 65
RealFace, Apple, 119t
Real-time data metrics, 225–226
Real-time data processing: blockchain, 49; boardroom decisions, 231, 236, 237; computer vision,
37; Explorys, 138t; Granata Decision Systems, 108t; Hadoop, 76–77; Heron, 76; scalable search
technology, 75; Spark, 77
Reasoning: artificial super intelligence, 13; conversational sector, 39–40; DeepMind, 109; history of
AI, 7; intelligent machines, 15, 18
Recurrent Neural Network (RNN), 27, 29
Recursion Pharmaceuticals, 172
Reddit, 70
Regaind, Apple, 119t
Regional impact of AI across UK, Olly Buston’s report, 200–201
Regulation of AI: boardroom decisions, 236; ethical context, 210; Federal Aviation Administration
(FAA), drones and, 44; General Data Protection Regulation (GDPR), 31, 50, 230; insurance
sector, 181; national security concerns, 205; privacy and facial recognition, 116
Reinforcement learning, 21, 29, 30, 33–34, 109, 126t, 158, 165
RelateIQ, 135
Relational reasoning, 109
Renaissance Technologies, 182, 184
Reputational risk, 136–137, 139, 140t
Residual networks, 125
RIBA robot, 43t
Ride-sharing, automotive, 155, 167–168
Rigetti Computing, 54
RightIndem, 180
Rinna, Microsoft Virtual Assistant (VA), 127
Risks of AI: automotive sector, 164, 166; boardroom decisions, 231, 233, 236, 237–238; data bias,
61; data governance, 238; economic and employment risks, 201; reputational risk, 136–139,
140t, 141t, 166; software development, 193
Road accidents, 159–160, 164
Robear robot, 43t
Robots, robotics: academia and private sector, 99, 100; adaptability to environment, 29; AI ESG
considerations, 45t; Amazon, 124, 125, 216; automated planning, 23; Baidu, 104; Blue River,
130t; chipsets, 86, 87; computer vision, 37–38; current trends, 2; domain-focused ANI
companies, 10t; embodiment of AI, 10–11; emotional measurement technology, 44;
employment, impact on, 216, 220, 225; ESG issues, 7t; European Union investment in, 201–
202; Facebook, 114t; healthcare sector, 170, 174, 175; history of, 7; Jackrabbot, 157; machine
learning, 35t; military sector, 204–206; natural language, 39–40; overview of, 42–43, 43t; social
science controversies, 14–15; United Kingdom investment in, 200; utilities sector, 191;
Westworld, 14
Rochester, Nathaniel, 18
Romeo, robot, 43t
Rometty, Virginia (Ginni), 139
Ronen, Ofer, 112
Rosenblatt’s Perceptron, 7
RTC robot, 43t
Rubin, Andy, 51
Russia: fake news, 115, 214; surveillance and influence, 212

Safety of AI: artificial narrow intelligence (ANI), 9; automotive sector, 158, 159–160, 163; bias in
datasets and algorithms, 69–70; boardroom decisions, 231–232, 240; China’s efforts in, 105;
cybersecurity, 52, 81; DeepMind Ethics & Society (DMES), 113; governance considerations,
19t, 94t, 127t, 128t; healthcare sector, 177t; openness, 94t; Partnership on AI, 113
Sagar, Mark, 196
SAIC Motor, 161
Salakhutdinov, Ruslan, 30, 99, 118
Salesforce: benefits of AI, 143; domain-focused companies, 96, 131, 133, 135–136; IBM Watson
and, 137, 138
Samsung, 11t, 51, 141, 159, 173
Samuel, Arthur, 5
Sandia National Laboratories, 54
Sanghavi, Sundeep, 157
SAP, 130, 133, 134
Sapho, 132t
Sapolsky, Robert, 15
Schmidhuber, Jürgen, 11t, 29, 100
Scholes, Robert, 71
Schroepfer, Mike, 117
Searle, John, 15–16
Security. See Cybersecurity
Sedol, Lee, 29
Self-play, learning, 7, 34
Self-programming AI, 193–194
Selfridge, Oliver, 5
Semantic(s), 6, 16, 39
Semiconductors. See Silicon, chip(set), semiconductors
Semi-supervised learning, 32t, 33
Sense.ly, Molly Virtual Assistant (VA), 173
SenseTime Group, 105
Sentient Technologies, 183
SGT STAR Virtual Assistant (VA), 203
Shakey Robot, 7
Shannon, Claude, 18
Shannon limit, 91
Shift Technology, 180
Shum, Harry, 125
Shutterstock, 38
Sidewalk Labs, Google, 162
Sift Science, 132t
Signal Sense, 132t
Silicon, chip(set), semiconductors: Apple and, 117–118, 120; development of silicon, 83–84;
enterprise startups, 141; fragmentation of chipset solutions, 84–88; Google and, 107; machine
teaching-as-a-job, 67; new algorithms, 88; quantum computing, 54; Tesla and, 165; trends, 58,
83–85
Simon, Herbert, 5
Simulation: AI building concepts, 23–24; artificial general intelligence (AGI), 9–12, 10t, 11t; DL
applications, 30t; history of, 28t; Launchpad Studio, 109; law enforcement training, 204;
openness, 98; quantum computing and, 54
Singapore, 51, 103, 160, 211–212, 221
Singularity, xxiii
Siri, 12, 41–42, 41t, 44, 117, 119t
Sivasubramanian, Swami, 121
Sketch RNN, Google, 197
Skillset, talent: Apple and, 118; automotive sector, 164; boardroom decisions, 231, 234, 235, 238,
239; in Canada, 201; in China, 101; DL research, 31; domain specialists, 131t; Google and, 107;
IBM and, 136; long-term investment in, 57, 95, 133, 148; Microsoft and, 125; openness and
talent development, 93; recruitment and development of, 96, 97, 100, 144, 231; shortages of, 62,
91, 145; strategic planning around, 148, 151t, 221; in United Kingdom, 201
Sky Futures, 44
Skype, 126, 127
Smart City Challenge, US Department of Transportation, 161
Smart machines, intelligent machines, 15–17, 220
Smart networks, 57, 91
Smartphones: Amazon and, 123; Apple and, 118t; drone technology and, 43; Facebook and, 116–
117; Federated Learning, 109; healthcare sector, 173; history of AI, 5t; military sector, 205, 206
Snorkel, 65
Social Capital, 136
Social governance, 209–210, 223–226
Social media: big data, 61–62; boardroom decisions, 230; fake news, 69–70; financial and
insurance sector, 181; governance considerations, 197t; healthcare sector, 171; IBM and, 138t;
Microsoft and, 126t; NLP applications, 39t; reputation risk, 142; Twitter, 194–195
Social safety net policies, 221t
Soft robot(s), 174
SoftBank, xvi
Software 2.0, Andrej Karpathy, 68
Software Development Kit(s) (SDKs), 90, 108t, 124
Solomonoff, Ray, 5
Somorjai, John, xxii
Soul Machines, 196
Souq.com, 124
SPARC, public-private partnership for robotics in Europe, 201
Spark, 57, 75t, 77, 96, 168
Sparta Science, 174
Speech APIs, 42
Speech recognition: Apple neural engine, 88; ASIC, tensor processing units (TPU), 87; Baidu, 103;
Facebook, 114t, 116; healthcare sector, 173; Iflytek, 102; IQ of computers, 18–19; neural
networks, 26; open datasets, 65t; Tencent, 104
Speech synthesis, 30t, 102, 124
Spiking NNs technology, 89
Spiking of neurons, 25
Spotify, 2
SpringRole, 131
SRI International, 7
Stanford CoreNLP Suite, 40
Stanford Media Lab, xxv
Statistical approaches for AI, 23
Stealth, 108t, 137
Stickiness of products, 71
Streams, DeepMind, 176
Strong AI, 2
Structured data, 39, 76, 118
Stucke, Maurice, 69
SunSpec Energy Storage Model Specification, 187
Super intelligence, Super General Intelligence (SI, SGI), 2, 9, 12–13, 16, 23t
Supervised learning, 32–33, 32t, 34
Supply chain, 66t, 232t, 236
Surveillance, 102, 200, 203, 206t, 207, 211–213, 214t
Sustainability, business, 210, 220, 229, 234
Swarm AI, 31, 207
SwiftKey, Microsoft, 126t
Switzerland, 11t, 100, 217, 230
Symbolic systems, 7, 23
Symbolists, 24
Synapses, 25–26
Syntax, 16
Systems on the chip (SoC), 86, 105

Tagging. See Labeling, tagging
Talend, 90
Talent. See Skillset, talent
Talla, 39, 62
Talos Intelligence, 214
Taser/Axon, 10t, 213
Tata Group, 183
Taxation, of AI systems, 217–218
Tay chatbot, xx
Technology stack, 21, 23t, 49
TED-LIUM, open dataset(s), 65t
Tegmark, Max, 13, 14, 16–17
Telegraph, 212
Tempo AI, 135
TensorFlow, 40, 110–111
Tensor processing unit(s) (TPUs), 29, 87, 88, 107, 111, 112
Tensor(s), 110
Tesla: autonomous vehicles, 160, 161, 163, 164; electrification, 158, 159; overview of, 165–166,
166t
Tesla Volta V100, 86
TetraVue, 159
Texas, MD Anderson Cancer Center, 176
Text: Cloud Natural Language API, 90; domain-focused ANI companies, 10t; Facebook and, 116;
full-stack AI companies, 96; Google and, 107, 108t, 112; IBM and, 137, 138t; Iflytek, 102;
Microsoft and, 126, 126t; natural language processing (NLP) and, 39–40; open datasets, 65t;
politics and influence, 213; Salesforce, 135, 136; scientific publishing sector, 195; voice to text,
39
Text Classification Datasets, open dataset(s), 65t
Textio, 131t
Thalmic Labs, 124
Theano, 40, 90
Theranostics GmbH, 112
Thread or Weave protocol, 51
3D Robotics, 43
Thrun, Sebastian, 5t, 24
Timeful, Google, 108t
Token, tokenized: Bigchain DB, 50; blockchain, 48, 49; Numeraire, 183–184
Torch, 40, 57, 90, 114, 121
Toyota, 43t, 159, 160
Training data: business development issues, 64t; data preparation, 64–65; Google and, 112;
proprietary data, 129t; reinforcement learning, 34; sources of, 67; supervised learning
algorithms, 32; United Kingdom, regulation in, 200
Transfer learning, 34
TrueNorth, IBM, 137
Trump, Donald, 69, 101, 115, 199, 226
Truven Health Analytics, 136
Tsinghua mobile robot, 104
TUG Robot(s), Aethon, 175
Tuning, 79, 80
TupleJump, Apple, 119t
Turi, 118, 119t
Turing, Alan, 18
Turing Award, 81
Turing test, 18
TuSimple, 159
TwentyBN, 63
Twilio, 90, 124
Twitter: attention economy, 71; bots and chatbots, 41–42; data driven operations, 76t; overview of,
194–195; social media manipulation, 70; Whetlab, 27
twoXar, 172

Uber: academia and private sector, 99; automotive sector, 155, 156, 164; fairness and algorithm
bias, 70–71, 72; Gaussian processes, 27; history of AI, 5t; lidar system, Waymo and, 167;
mobility-as-a-service, 160; overview of, 167–168; reputation risk, 140t; traffic and energy
consumption, 160
Uber Freight, 168
UK National Grid, 187, 190
Understand.ai, 67
United Nations, xxv
Unitive, 131t
unity3d.com, 65
Universal basic income (UBI), 218
Università della Svizzera Italiana, 100
University College London, 195, 214
University of California, Berkeley, 77
University of Cambridge, 27, 118t, 176, 193
University of Edinburgh, 27
University of Lugano, 100
University of Pittsburgh Medical Center, 170
University of Texas, 16, 176
University of Tokyo, 19
University of Toronto, 99–100, 167, 201
University of Washington, 31, 99
University of Wyoming, 31
unrealengine.com, 65
Unstructured data, 32t, 66, 118, 128t, 171, 181
Unsupervised learning, 12, 26, 29, 32t, 33, 34, 118
Urban planning, 155, 161–162, 164
Urtasun, Raquel, 167
US Department of Defense, 207. See also Military sector
US National Robotics Engineering Center, Carnegie Mellon, 99
US Patent and Trademark Office, 79
Use cases: AI-as-a-service, 96; Amazon and, 123; boardroom decisions, 239; chatbots in public
sector, 202–203; chipset solutions, 86; cybersecurity, 81–82; data lakes, 76t; data preparation,
64, 66t; data-as-a-service, 188; DL applications, 30t; emotion measurement technology, 44;
history of AI, 6t; natural resource and utility sector, 188, 189–190; privacy concerns, 111;
quantum computing, 54; silo strategy and, 148; Spark, 77; transactional use cases, 41t
User interface, ease of use, 147
User-centric governance, 49

VKontakte, 212
Valuation of startups, companies, xxii, 60
Value-added data, DSIP, 188
Vapnik, Vladimir, 25
Vector Institute for AI, 100
Vectra, 132t
Veeramachaneni, Kalyan, 68
Venture capital (VC), xxii, 6t, 60, 95, 101, 107, 149, 216
Vicarious, 6t, 11t
Viégas, Fernanda, 110
Vigoda, Ben, 28
Virtual assistant (VA): Alibaba, 103; Amazon, 121; financial and insurance sector, 182; Google,
107; government and military sector, 203; IBM, 138t, 139. See also Chatbots
Virtual Reality (VR), 104, 221
Vision Factory, 107, 108t
Visual search, 38, 108t
Visual Turing Test, 18
VocalIQ, Apple, 118t
VoCo, 40–41
Vogels, Werner, 121, 124
Voice: algorithms and, 79, 88; Amazon, 120–121, 123; Apple, 117, 118t; automotive sector, 157;
Baidu, 104; bots and chatbots, 41–42; cybersecurity, 82; emotion measurement technology, 44;
Facebook, 114t; Google, 29t, 107, 109, 111; healthcare sector, 171; intelligent machines, 17;
interface considerations, 147; Internet of things, 51; Microsoft, 126, 127t; military sector, 204;
natural language processing, 38–41; Nuance, 131; regulatory considerations, 68; Tencent, 104
Volkswagen, 126, 140t
Volume, variety, velocity – three Vs of data, 61–62, 85, 146, 171
von Neumann, 84
von Neumann architecture, 84
von Neumann processor, 84
VoxForge, open dataset(s), 65t

Wade&Wendy, 131t
Warren Center for Network and Data Science, 70
Watson: applications, 139; boardroom decisions, 235; cautions about, 96; cloud services, 40, 90;
cognitive computing, 24; financial and insurance sector, 181; healthcare sector, 175–176;
history of AI, 6t; natural resource and utility sector, 190; reputation risk, 136–139; use cases,
148
Watson for Oncology, 175–176
Watson Genomics, 175–176
Watson Message Insights, 90
Watson Message Sentiment, 90
Wattenberg, Martin, 110
WaveNet, 109
Way of the Future, religion, xxiii
Waymo, 166–167
Weather Channel, IBM, 136, 139
Weave or Thread protocols, 51
WeChat, 104
Westworld, HBO science-fiction thriller, 14
Whetlab, 27
White House Office of Science and Technology Policy, 102
WikiText, open dataset(s), 65t
Williams, Chris, 27
Winograd Schema Challenge, 18
Wise.io, 96
Wolfram, Stephen, 90
Wolfram language, 90
Work Fusion, 66t, 132t
Workplace impacts. See Employment and AI, future of work
Wuhan Landing Medical High-Tech Co., 103

X86, 85
XAI, Explainable AI, 31
x.ai, Amy assistant, xvi, 35, 131t
Xi Jinping, 102
Xiao Ice, 127
Xilinx, 87
xPerception, 104
X-ray, 172

Y Combinator, 25, 104
Yahoo, 94
Yampolskiy, Roman, 52
Your.MD, 173
YouTube, 2, 61, 94, 111

Z Series, IBM, 138
Zemel, Richard, 100
Zendesk, 131t
Zero-sum mentality, 217
Zimperium, 132t
Zipline International, 44
Zoom.ai, 131t
Zuckerberg, Mark, 11t, 113
Zurich Eye, Facebook, 114t
Zymergen, 130
About the Authors

Anastassia Lauterbach, PhD, is an international technology strategist,
advisor, and entrepreneur. She has served as a director of Dun &
Bradstreet since 2013, and is a member of the Nomination and Governance
Committee and chair of the Technology & Innovation Committee. Dr.
Lauterbach served as SVP of Global Business Operations Europe at
Qualcomm, a world leader in wireless technologies, from 2011 to 2013.
Previously, she served at Deutsche Telekom AG, an international
telecommunication provider, as SVP of Business Development and
Investments from 2010 to 2011 and Acting Products and Innovation
Officer from 2009 to 2010. During her time at Deutsche Telekom, she
additionally served as a member of the Executive Operating Board. Prior
to Deutsche Telekom, she served as executive vice president, Group
Strategy, at T-Mobile International AG from 2006 to 2010. Prior to T-
Mobile, she served from 1996 to 2006 in various operational and strategic
roles at Daimler Chrysler Financial Services, McKinsey & Company, and
Munich Reinsurance Company. She is the chief executive officer and
founder of 1AU-Ventures and currently serves on advisory and
supervisory boards of several U.S.- and European-based technology
companies. She trains boards in cybersecurity and cognitive/AI-related
technologies and their links to corporate governance. Dr. Lauterbach
serves on the Advisory Council, Next Generation Board Leaders, at
Nasdaq.

Andrea Bonime-Blanc, JD/PhD, is a global governance, risk, and ethics
strategist and advisor who specializes in transforming risk into resilience
and value. After 20 years as a global corporate executive at Bertelsmann,
Verint, and PSEG, Dr. Bonime-Blanc founded GEC Risk Advisory in
2013 to provide innovative global strategic risk governance advice,
helping clients transform their big risks—culture, ethics, cyber, reputation
and digital—into value. In 2017, she was named the Independent Ethics
Advisor to the Financial Oversight and Management Board for Puerto
Rico, a U.S. Congress–created entity to oversee the restructuring of the
Puerto Rican economy. She serves as a start-up mentor at Plug & Play
Tech Center and is Governance Faculty for the National Association of
Corporate Directors (NACD). Dr. Bonime-Blanc has written several books
including The Reputation Risk Handbook, Emerging Practices in Cyber-
Risk Governance, and the forthcoming Gloom to Boom: How Leaders
Transform Risk into Resilience & Value (Routledge/Greenleaf, 2018). She
was named among the Ethisphere 2014 & 2015 “100 Most Influential
People in Business Ethics,” is a life member of the Council on Foreign
Relations, and holds the Carnegie Mellon/NACD Cyber Oversight
Certification. Dr. Bonime-Blanc is a frequent global keynote speaker and
serves on a number of boards and advisory boards internationally,
including New York’s Epic Theatre Ensemble, NYU’s Ethical Systems
think tank, and Spain’s IE Business School Governance, Sustainability &
Reputation Centre. She was born in Germany and raised there and in Spain
and received her joint JD/PhD from Columbia University.
