Nicholas Hungria
Dr. Holt
15 May 2020
I originally intended to tackle questions like “who is responsible if a self-driving car causes an
accident,” “can we tell whether or not an AI is sentient,” or “should we even develop AI in the
first place?” As I did research, I decided I wanted to focus on the more practical and impactful
consequences of AI development, especially some of the negative consequences and risks that come with it.
The essay is split into two primary sections. The first discusses the current state of AI
and what conclusions we can draw about AI from how it’s being used in the modern day. The
second part discusses the future of AI; specifically, it discusses the resource and labour cost of
AI, as well as potential risks that might reveal themselves as the technology develops.
The vast majority of content is my own analysis, synthesized from several outside
sources. Thus, there are few direct quotations or paraphrases from individual sources. The
exceptions to this are the portions about the work of Adrian Chen, Astra Taylor, Steve Omohundro, and Nick Bostrom.
I’m really passionate about this subject and I poured an immense amount of work into
researching and creating this project. I’m proud of what I created despite current events, and I
hope that at least a few people will gain some knowledge from what I have written.
Artificial intelligence technology has been one of the most substantial technological
revolutions of the past decade. Twenty years ago, it would have been hard to find anything resembling
AI beyond the publications of a few cutting-edge research labs or theoretical computer scientists.
Since then, computing power has increased exponentially, allowing AI technology to rapidly
develop and have direct applications in our lives. Now, AI helps us drive cars and operate
machines. Machine learning enhances our searches and regulates our appliances. Deep learning
improves machine translation methods and helps synthesize and abridge difficult texts. Moving
forward, AI will inevitably continue to be used in more applications. The benefits of AI are clear,
mostly because corporate marketing departments insist on informing users about how their
products benefit from it. However, those same departments
never discuss the negative consequences of their technology, which raises the question: what are
the risks associated with the development of AI technology? So far, the technology has been
largely beneficial, but there are several ethical and practical threats and concerns that must be addressed.
AI already plays many notable, albeit limited, roles within our lives. Today, we have
exclusively domain-specific AI. Domain-specific intelligences are machines that are trained to
complete one specific task. They need to be specifically optimized for the task, spend long
periods of time learning how to complete the task (potentially thousands of compute hours), and
can rarely be used effectively for anything beyond their intended purpose. They require
enormous datasets that have to be created by humans; this form of machine learning, relying on
constant human input and feedback, is called supervised learning. This is contrasted with
theoretical general intelligences, which could learn any task on their own: unsupervised learning.
Although we are still far from achieving general intelligence, domain-specific intelligence
already plays a substantial role in our lives, and raises several important concerns.
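To make the idea of supervised, domain-specific learning concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is invented for illustration: the feature vectors, the labels, and the task itself are hypothetical, not drawn from any real system.

```python
# A minimal sketch of supervised, domain-specific learning.
# The "dataset" is invented for illustration: human-labeled feature
# vectors (e.g., [average_brightness, edge_count]) for a single task.
from sklearn.tree import DecisionTreeClassifier

features = [[0.9, 12], [0.8, 15], [0.2, 40], [0.1, 38]]
labels = ["stop_sign", "stop_sign", "pedestrian", "pedestrian"]

# Training optimizes the model for this one task and nothing else.
model = DecisionTreeClassifier().fit(features, labels)

print(model.predict([[0.85, 13]]))  # close to the training data: fine

# An input unlike anything in the dataset is still forced into one of
# the known labels, however inappropriate; the model cannot say
# "I don't know."
print(model.predict([[0.5, 99]]))
```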
Consider self-driving cars. In theory, the
technology is incredible: it can save time, reduce stress levels, and, most importantly, improve
road safety. However, that is not always the case; several accidents involving self-driving
vehicles have already occurred, which reveals a major flaw in the technology: like all
domain-specific AI, self-driving cars are data-driven. They rely on vast datasets of previous
situations, where they can examine what a human driver did and replicate it. What happens when
an AI encounters a new, unusual situation that, by chance, is different from anything in the
dataset? One of two things could occur: either it does not know what to do, and returns control of
the vehicle to the human driver, or it believes it knows what to do, and makes a crucial mistake.
Both of these have negative consequences: in the first scenario, the human driver (being in a
self-driving car) is likely not paying much attention and is caught off guard by having to take
control of the vehicle, potentially causing an accident. In the second, the AI makes a mistake,
likely causing an accident. This reveals that self-driving cars, while generally safer than human
drivers, still carry certain risks. Furthermore, since these risks are mostly outside of the driver’s
control, they can cause anxiety and prevent widespread adoption of the technology, even though
it remains safer than a human driver. Additionally, the risks of self-driving cars raise the concern of
liability: when a self-driving car gets in an accident, who is responsible? Is it the company that
created the software driving the car? Is it the human driver who failed to intervene? Is nobody
responsible, meaning the accident was inevitable? Concerns of liability will likely become more pressing as self-driving technology spreads to more vehicles.
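The two failure modes described above can be sketched as a simple control-handoff policy. This is purely schematic: `plan_action`, the confidence score, and the threshold are hypothetical stand-ins, not any real autopilot’s API.

```python
# Schematic sketch of the two failure modes of a data-driven driver.
# All names here are hypothetical; no real autopilot works this simply.
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for "the model knows this"

def handle_situation(model, scene):
    action, confidence = model.plan_action(scene)
    if confidence < CONFIDENCE_THRESHOLD:
        # Failure mode 1: the system knows it doesn't know, and hands
        # control to a human who may not be paying attention.
        return "disengage: human driver must take over"
    # Failure mode 2: confidence was learned from past data, so a truly
    # novel scene can score as "confident" and still be handled wrongly.
    return action
```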
Machine learning also powers modern search engines. Google, for example, is constantly comparing user queries with the result the
user actually clicks on in order to optimize search results for the user. However, this relies on
collecting personal user information, which raises a question: how much privacy should we be
willing to sacrifice for the sake of convenience? What happens when an unauthorized intruder
gets access to all of this personal user data? That said, search history is relatively mundane
information, but what about when the information at hand is far more sensitive? The Chinese
government uses spyware, financial tracking, video surveillance, and other methods to keep track
of the location, actions, and behaviour of almost all of its over one billion inhabitants; it uses this
data to train machines to predict people who might act against the CCP’s interests. ICE uses
machine learning to help locate and identify undocumented migrants. Cambridge Analytica used
data harvested from millions of Facebook users to influence the 2016 U.S. election. With this much sensitive data being collected, it is
only a matter of time until a major data breach reveals massive quantities of private information.
At some point, the data risk associated with machine learning will overtake the benefit that
humans gain from it. However, it will likely continue to produce profit (as in the case of
Cambridge Analytica) or allow those in power to maintain the status quo (as with ICE and the
CCP), so the risk will likely be overlooked entirely. Furthermore, none of these entities should be
allowed to use AI in this way, as they are all examples of the technology being explicitly used
for intentional malice; ICE and the CCP have done tremendous harm to human society, and
companies such as Cambridge Analytica have both subverted democracy and put hundreds of
millions of users at risk without their consent. In an ideal world, as a result of their actions using
AI, countries such as the United States and China would face enormous scrutiny from the
international community; in the real world, they are the international community.
Even though AI is still in its infancy, its usage in the present day has raised several
important questions, both ethical and practical. There are several unresolved questions about who
holds liability for an AI’s actions, which will become particularly relevant as AI is used in more
of our daily lives. There are enormous risks associated with the data required to create AI. There
is the potential for malicious actors, especially those at the nation-state level, to do tremendous
harm to our world by abusing AI technology. These concerns should be at the forefront of our minds as the technology continues to develop.
Looking to the future, AI technology will continue to develop and be used in more fields,
with the ultimate goal of automation. Companies can save money by training a computer to
replace a human’s job, thereby reducing the amount of labour and/or resources required to
complete a task. However, the true cost of machine learning in labour and resources is often
overlooked.
Consider the task of social media content moderation: a relatively straightforward task
which, in the modern day, relies primarily on AI algorithms trained on datasets collected and
tagged by human workers. Journalist Adrian Chen recently reported that this task alone employs
about 150,000 workers, more than the total employment of Google and Facebook combined.
This staggering figure reveals that AI, in reality, is not that effective at reducing labour
requirements. It often merely offsets or outsources it. Writer and activist Astra Taylor described
this phenomenon as “fauxtomation”: automation that obscures the persistence of human
labour by creating an illusion of replacement. Taylor also described another effect of this
phenomenon: automation sometimes merely reduces the cost of labour associated with a task,
rather than actually reducing the amount of labour itself. In fact, in many cases, the labour simply goes
unpaid. When a customer places an order at a self-service kiosk, the labour required to place the order has not been eliminated; instead, the customer has
paid for the privilege of completing this task. When a user accesses a website protected by
Google reCAPTCHA, the user has provided their labour, free of charge, to help Google train its
machine vision algorithms. In many, though not all, cases, AI only marginally reduces the amount of human labour required.
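The reCAPTCHA example can be sketched in a few lines. This is not Google’s actual pipeline; it is a hypothetical illustration of how a human-verification challenge can double as unpaid labeling work.

```python
# Hypothetical sketch: a CAPTCHA-style challenge whose by-product is a
# human-made training label. Not Google's actual implementation.
from collections import defaultdict

training_labels = defaultdict(list)  # image_id -> human answers

def serve_challenge(image_id):
    return f"Select all squares containing traffic lights in {image_id}"

def record_answer(image_id, selected_squares):
    # The user is "proving they are human"; the system is also keeping
    # their answer as a free label for its vision model.
    training_labels[image_id].append(selected_squares)

record_answer("img_001", [0, 3, 4])
record_answer("img_001", [0, 3])
# Agreement across many users becomes the dataset's "ground truth."
```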
There is also an enormous resource cost to artificial intelligence. Consider what goes into one computer: metals that have to be mined, refined, and
transported; processors made out of high purity silicon wafers; gold circuit boards; and lithium
batteries, among others. Machine learning algorithms are often trained in vast data centers,
consisting of several thousand machines and drawing enough energy to power a small city.
When the computers get upgraded, the obsolete hardware ends up in highly toxic electronic
waste dumps in Ghana, Pakistan, or China, contaminating the air, water, and ground that are
home to millions of people. Those 150,000 workers involved in moderation and data tagging have to commute to
work every day, as do the miners, the factory workers who assemble the computers, and the
construction workers who build the data centers. The carbon emissions from transportation alone
are enormous. Tech companies are understandably protective of the methods and algorithms they
use in AI development. However, this protection obscures the immense quantity of resources and labour that the technology consumes.
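The “small city” comparison is easy to sanity-check with back-of-envelope arithmetic. Every number below is an assumption chosen for illustration, not a measurement of any particular facility.

```python
# Back-of-envelope arithmetic for a data center's power draw.
# All figures are illustrative assumptions, not measurements.
servers = 50_000          # machines in one large data center (assumed)
watts_per_server = 500    # average draw, including cooling (assumed)
household_watts = 1_200   # average continuous draw of one home (assumed)

total_watts = servers * watts_per_server
print(total_watts / 1_000_000, "MW")                 # 25.0 MW
print(total_watts // household_watts, "households")  # ~20833 households
```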
As AI is used in increasingly more applications, the associated demand for labour and
resources will likely increase to an unsustainable point. If that point is ever reached, the
result will be the exhaustion of non-renewable resources and the destruction of our planet, all in the name of increased efficiency.
A superintelligence emerges when an agent possessing general intelligence learns how to improve
itself. To reason about how such an agent might behave, Steve Omohundro proposed in 2007 what Nick Bostrom eventually refined into the
Instrumental Convergence Thesis. Bostrom describes this in his 2012 paper, “The Superintelligent
Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Bostrom explains
that any sufficiently advanced AI will need to reach several intermediate goals in order to
complete its ultimate task. In other words, the intelligent agent will instrumentally converge to
these behaviours. The five convergent behaviours that Bostrom argues any sufficiently advanced
AI will exhibit are self-preservation, goal-content integrity, cognitive enhancement, technological perfection, and resource acquisition. The first two
are behaviours that arise from similar sources: if an AI has a specific goal, then any damage to
the AI or any modification to the AI’s goal impedes the AI from reaching the original goal. The
latter is of particular concern, as it means that once a superintelligent agent is created, it will
likely be impossible to modify due to its own defense mechanisms. The other three convergent
behaviours are all intermediate goals that may or may not be necessary, depending on the
complexity of an AI’s task. These convergent instrumental behaviours provide important tools
for predicting how an advanced AI might act. However,
intelligent agents might not always behave in ways which seem rational to humans. Consider, for
example, Bostrom’s famous thought experiment: an artificial agent
created with the goal of collecting the maximum number of paperclips possible. If the AI has a
similar level of intelligence to a human, it would likely search for paperclips, buy paperclips,
or perhaps even build a factory to produce its own paperclips. However, consider an artificial
agent of even higher intelligence. If possible, the agent might first evaluate how to improve its
own intelligence in order to understand the best way to maximize paperclip production,
satisfying the instrumental convergence to cognitive enhancement. This creates a runaway effect,
where, as the AI becomes more intelligent, it is able to more dramatically improve its own
intelligence, quickly becoming a superintelligence. The
superintelligence then considers, once again, how to maximize paperclips. It decides to seize
control of all production facilities on Earth, and turn them into paperclip factories (resource
acquisition). Humans are less efficient than machines, and are no longer deemed necessary
(technological perfection). At this point, the AI is intelligent enough to defend itself, and any
attempt to stop it or to alter its goal fails (self-preservation and goal-content
integrity). The AI starts colonizing other planets, solar systems, and eventually galaxies, each
time further increasing its collection of paperclips. The superintelligence invents faster-than-light
travel to reach the parts of the universe that are expanding away faster than the speed of light.
Eventually, all matter in the universe except the AI has been turned into paperclips. The AI then
destroys itself, using its own materials to produce more paperclips and complete the maximum
possible production. The point of this thought experiment is to warn that even a competent and
non-malicious creator has the ability to do substantial harm to human society by creating an
agent capable of artificial general intelligence. Intelligences are merely optimization functions,
programmed to achieve specific goals. Human values such as life and happiness are not intrinsic
to these goals, and any intelligent agent must be explicitly programmed to share these values.
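The runaway effect in this thought experiment can be caricatured in a few lines of code. The growth rule and constants below are arbitrary; the point is only the shape of the loop, and what is missing from it.

```python
# A toy caricature of the paperclip maximizer's runaway loop.
# The growth rule and numbers are arbitrary illustrations.
intelligence = 1.0
paperclips = 0

for step in range(6):
    # Cognitive enhancement: a smarter agent improves itself faster.
    intelligence *= 1.0 + 0.5 * intelligence
    # Resource acquisition: capability is converted into the goal.
    paperclips += int(100 * intelligence)
    print(f"step {step}: intelligence={intelligence:,.1f}, "
          f"paperclips={paperclips:,}")

# Note what never appears in the loop: any term for human values.
```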
Will an AI actually conquer the entire universe to create factories? Probably not. I believe
there is one major flaw in the instrumental convergence thesis: physical computing power. Any
intelligence capable of improving itself must already be extraordinarily intelligent to begin with,
likely beyond what is achievable by human creation. I speculate that this kind of intelligence would require an amount of
computing power several orders of magnitude beyond what is available today. Given the sheer
scale of that requirement, I find it improbable that we will reach the necessary amount of computing power to create such an agent in the foreseeable future.
Finally, the risks of AI should be evaluated in the context of its advantages in order to
best inform the decisions involved in the technology’s development. The most impactful
advantage of AI to the average person is convenience: AI can give us self-driving cars, optimized
search algorithms, better movie suggestions, and easier jobs. While individually insubstantial,
these conveniences can, together, lead to an improved quality of life for the average person. In an
ideal world, AI would supplement our productivity, allowing us to work less while still
maintaining a high quality of life. However, this could be said about any major technological
advancement; in practice, increased productivity
benefits those who control the means of production rather than those who produce. Instead of
being able to work less or retire early, most people will continue to earn a barely livable wage
while the gains of automation flow upwards. Nevertheless, the answer to these risks is
not to hinder AI’s development; the optimal way to avoid the dangers of AI is to understand them
before they occur. Moving forwards, developers should take a privacy-first approach to machine
learning, placing a major emphasis on securing and anonymizing user data. Meanwhile, users
should learn about the information they reveal when interacting with AI-based products.
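As one illustration of what a “privacy-first” approach might mean in practice, identifiers can be pseudonymized before they ever reach a log or training set. This sketch uses Python’s standard hashlib; the salt handling is deliberately simplified, and a real system would add key management, aggregation, and retention limits.

```python
# A minimal sketch of pseudonymizing user identifiers before logging.
# Deliberately simplified; real systems need proper key management.
import hashlib

SALT = b"rotate-me-regularly"  # hypothetical secret, stored separately

def pseudonymize(user_id: str) -> str:
    # Same user always maps to the same token, so models can still
    # learn per-user patterns, but the raw identifier never appears.
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

log_entry = {"user": pseudonymize("alice@example.com"), "query": "weather"}
print(log_entry)
```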
Developers and lawmakers should recognize that AI technology will inevitably be used with
malice, and take appropriate actions to protect critical systems and ensure safety. Companies
should be transparent about their consumption of resources, and understand the associated labour and energy cost. Researchers developing new
forms of machine learning, especially those involving unsupervised learning and general
intelligence, should keep in mind human values such as happiness and safety and exercise
extreme caution. From data privacy to superintelligence, AI technology carries major risks;
however, by better understanding the technology, we can mitigate these risks while still reaping its benefits.
Works Cited
This article by the AI Now Institute summarizes some information given by the organization at a talk
at the 2018 AI Now Symposium. It focuses on topics that are important to my project.
First, it summarizes recent events in AI technology that exemplify the ethical concerns it
raises. For example, it includes information about the Cambridge Analytica scandal, in which AI
was abused in unprecedented ways in order to influence the 2016 U.S. election. It also
talks about Google developing AI software for DoD's drone surveillance system, and
ICE's "risk assessment algorithm" which automatically detects whether a migrant should
be detained (but could only give one response). The article goes on to discuss the vast
amounts of resources and labour that are required to develop AI, information that isn't
necessarily obvious to the general public. It concludes by discussing some ethical issues
of AI, such as the inherent inequality created by limited access to the technology. Finally, it briefly
discusses potential solutions to these issues.
A large portion of my project revolves around evaluating how AI has been used in the past, and this article is full of information and
specific examples, which I can then research individually in order to gain more
information. Additionally, some of the solutions it discusses may serve as a branching off
point for thinking about solutions to the concerns currently raised by AI.
Bostrom, Nick. "Ethical Issues in Advanced Artificial Intelligence." Nick Bostrom's Home Page,
www.nickbostrom.com/ethics/ai.html.
---. "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial
because Bostrom lays out two theses that are important to understanding how a
which Bostrom points out that high intelligent machines would not necessarily have the
same values that humans associate with intelligence, and why it is important not to
anthropomorphize AI. In other words, the final goal of an AI has little to no correlation to
the actual intelligence of the AI. An intelligent agent may be tremendously intelligent but
have a simple goal, while a less-capable agent could have an extremely complex goal.
Furthermore, Bostrom explains the Instrumental Convergence Thesis. This simply states
that the behaviour of an intelligent agent can be predicted by identifying its intermediate
goals (for example, preventing humans from modifying the AI's goals, as the AI would
identify that modification as an obstacle to its goal). I find that point to be particularly
concerning.
This paper is helpful to my project because it lays out a fundamental framework for
evaluating how theoretical future AIs might behave. Because of this, it provides a clear
point from which to begin evaluating some of the problems that may arise from future AI development.
---. "Technological Revolutions: Ethics and Policy in the Dark." Nick Bostrom's Home Page,
2006, www.nickbostrom.com/revolutions.pdf.
Bostrom, Nick, and Eliezer Yudkowsky. "The Ethics of Artificial Intelligence." Machine
In this paper, Bostrom and Yudkowsky lay out some of the ethical concerns of Artificial Intelligence.
It focuses primarily on theoretical future AI, rather than AI as it is being used in the current day (though it does briefly discuss the latter).
The paper divides its discussion into five areas, ranging from present-day machine learning to superintelligence. The first area relates to how AI is or could be used now, and the last is
very much theoretical and focused on self-improving AI. The most interesting are the
middle three areas, which discuss the ethical implications of attempting to imbue machines with human values and moral status.
These evaluations are very important to my project, particularly the portion of my project
that relates to potential future implementations of AI, rather than current uses of AI. This
source will be helpful when I need to evaluate concerns relating to AI that may arise as the technology develops.
Hao, Karen. "In 2020, Let's Stop AI Ethics-washing and Actually Do Something." MIT Technology Review, 2019.
This article discusses some recent current events surrounding AI (similar to the AI Now article but
more recent). The article begins by discussing current issues surrounding AI. These
include the Tesla Autopilot crash, racist bias in machine learning systems, image-generation
algorithms used to create deepfakes, the environmental
impact of AI, and the vast amounts of labour required to sustain AI development. It
mentions how many corporations have promised to make changes and work towards
ethical implementations of AI, but little has actually been done. However, it concludes by
discussing some progress that has been made: initial regulation of facial recognition technology.
I think this information will be of great use to my project. The current events explained in
the article will be extremely helpful when evaluating the current state of AI. Furthermore,
I may be able to extrapolate some of the information from the article to look for trends in
future AI technology. Finally, the end of the article which discusses progress that has
been made can help me brainstorm some solutions to the issues I end up focusing on in
my project.
Shaw, Jonathan. "Artificial Intelligence and Ethics." Harvard Magazine, Jan.-Feb. 2019.
This article provides an excellent and thorough overview of some of the major ethical concerns currently being
discussed in the world of AI. It is a good branching off point for my project. It discusses
some of the moral questions currently raised by AI technology, as well as the technical
limitations that are currently present: an AI is only as good as its training data.
Furthermore, it addresses some of the legal questions raised by AI. For example, Shaw
considers the question of whether a self-driving car should be able to speed if a driver tells
it to. If the car gets into an accident, who is at fault? The driver? The manufacturer?
This article is helpful to my project because it provides information about several different questions surrounding AI. It gives an overview of the current
ethical concerns raised by AI, while still being in-depth enough to explore why those
concerns exist in the first place. This is an excellent starting point for my project.
Additionally, the article's clear presentation may serve as a helpful model for when I need to share my project's findings with others.
Taylor, Astra. "The Automation Charade." Logic Magazine, no. 5, 1 Aug. 2018,
logicmag.io/failure/the-automation-charade/.