
Nicholas Hungria

Dr. Holt

English 12: The Bard

15 May 2020

Capstone Cover Letter

My project started out focused on the ethical implications of AI development. I

originally intended to tackle questions like “who is responsible if a self-driving car causes an

accident” or “can we tell whether or not an AI is sentient” or “should we even develop AI in the

first place.” As I did research, I decided I wanted to focus on the more practical and impactful

consequences of AI development, especially some of the negative consequences and risks that

are often overlooked.

The essay is split into two primary sections. The first discusses the current state of AI

and what conclusions we can draw about AI from how it’s being used in the modern day. The

second part discusses the future of AI; specifically, it discusses the resource and labour cost of

AI, as well as potential risks that might reveal themselves as the technology develops.

The vast majority of content is my own analysis, synthesized from several outside

sources. Thus, there are few direct quotations or paraphrases from individual sources. The

exceptions to this are the portions about the work of Adrian Chen, Astra Taylor, Steve

Omohundro, and Nick Bostrom.

I’m really passionate about this subject and I poured an immense amount of work into

researching and creating this project. I’m proud of what I created despite current events, and I

hope that at least a few people will gain some knowledge from what I have written.

Evaluating the Dangers of Artificial Intelligence

Artificial intelligence technology has been one of the most substantial technological

revolutions of the past decade. Twenty years ago, it would be hard to find anything resembling

AI beyond the publishings of a few cutting-edge research labs or theoretical computer scientists.

Since then, computing power has increased exponentially, allowing AI technology to rapidly

develop and have direct applications in our lives. Now, AI helps us drive cars and operate

machines. Machine learning enhances our searches and regulates our appliances. Deep learning

improves machine translation methods and helps synthesize and abridge difficult texts. Moving

forward, AI will inevitably continue to be used in more applications. The benefits of AI are clear,

mostly because corporate marketing departments insist on informing users about how their

products implement AI to improve user experiences. Obviously, these marketing departments

never discuss the negative consequences of their technology, which raises the question: what are

the risks associated with the development of AI technology? So far, the technology has been

used in straightforward, domain-specific applications; as the technology progresses, however,

there are several ethical and practical threats and concerns that must be addressed.

AI already plays many notable, albeit limited, roles within our lives. Today, we have

exclusively domain-specific AI. Domain-specific intelligences are machines that are trained to

complete one specific task. They need to be specifically optimized for the task, spend long

periods of time learning how to complete the task (potentially thousands of compute hours), and

can rarely be used effectively for anything beyond their intended purpose. They require

enormous datasets that have to be created by humans; this form of machine learning, relying on

constant human-provided labels and feedback, is called supervised learning. This contrasts with unsupervised learning, in which a machine finds patterns in data without human-provided labels, and with theoretical general intelligences, which could learn any task on their own.

Although we are still far from achieving general intelligence, domain-specific intelligence

already plays a substantial role in our lives, and raises several important concerns.
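
To make the distinction concrete, the following is a minimal sketch of supervised learning in Python, written purely for illustration; the scikit-learn library is real, but the tiny dataset and labels are hypothetical stand-ins for the enormous human-tagged datasets described above.

# Illustrative sketch of supervised learning (not from any cited source).
from sklearn.linear_model import LogisticRegression

# Each example is a pair of measurements; each label is a human-provided tag.
examples = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
labels = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(examples, labels)            # the machine learns only from these labelled pairs
print(model.predict([[0.85, 0.75]]))   # it can then classify new, similar inputs

The point of the sketch is simply that the machine never learns anything beyond what the human-labelled examples contain.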

Self-driving cars are an excellent example of domain-specific intelligence. In theory, the

technology is incredible: it can save time, reduce stress levels, and, most importantly, improve

road safety. However, that is not always the case; several accidents involving self-driving

vehicles have already occurred, which reveals a major flaw in the technology: like all

domain-specific AI, self-driving cars are data-driven. They rely on vast datasets of previous

situations, where they can examine what a human driver did and replicate it. What happens when

an AI encounters a new, unusual situation that, by chance, is different from anything in the

dataset? One of two things could occur: either it does not know what to do, and returns control of

the vehicle to the human driver, or it believes it knows what to do, and makes a crucial mistake.

Both of these have negative consequences: in the first scenario, the human driver (being in a

self-driving car) is likely not paying much attention and is caught off guard by having to take

control of the vehicle, potentially causing an accident. In the second, the AI makes a mistake,

likely causing an accident. This reveals that self-driving cars, while generally safer than human

drivers, still carry certain risks. Furthermore, since these risks are mostly outside of the driver’s

control, this can cause anxiety and prevent widespread adoption of the technology, even though it remains safer than a human driver. Additionally, the risks of self-driving cars raise the concern of

liability: when a self-driving car gets in an accident, who is responsible? Is it the company that

created the software driving the car? Is it the human driver who failed to intervene? Is nobody

responsible, meaning the accident was inevitable? Concerns of liability will likely be

continuously debated and legally challenged as AI becomes more widespread.
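
The two failure modes described above can be pictured as a simple decision rule. The following Python sketch is purely hypothetical (no manufacturer's actual control logic is this simple); the function names and the confidence threshold are assumptions made only for illustration.

# Hypothetical sketch of the hand-off dilemma described above.
CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff, chosen only for illustration

def handle_situation(situation, model, alert_driver, steer):
    # The model's "best guess" comes entirely from its training data.
    action, confidence = model.predict(situation)
    if confidence < CONFIDENCE_THRESHOLD:
        # Scenario one: the system does not know what to do and hands control
        # back to a driver who may not be paying attention.
        alert_driver("Take over now")
    else:
        # Scenario two: the system acts on its guess; if the situation is unlike
        # anything in the dataset, that guess may be a crucial mistake.
        steer(action)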

Another example of how domain-specific AI affects us in the modern day is through

search engines. Google, for example, is constantly comparing user queries with the result the

user actually clicks on in order to optimize search results for the user. However, this relies on

collecting personal user information, which raises a question: how much privacy should we be

willing to sacrifice for the sake of convenience? What happens when an unauthorized intruder

gets access to all of this personal user data? That said, search history is relatively mundane

information, but what about when the information at hand is far more sensitive? The Chinese

government uses spyware, financial tracking, video surveillance, and other methods to keep track

of the location, actions, and behaviour of almost all of its over one billion inhabitants; it uses this

data to train machines to predict which people might act against the CCP's interests. ICE uses

machine learning to help locate and identify undocumented migrants. Cambridge Analytica used

unprecedented amounts of detailed, personal user information in order to target political

campaigns in the 2016 US presidential election. No computer system is perfectly secure; it is

only a matter of time until a major data breach reveals massive quantities of private information.

At some point, the data risk associated with machine learning will overtake the benefit that

humans gain from it. However, it will likely continue to produce profit (as in the case of

Cambridge Analytica) or allow those in power to maintain the status quo (as with ICE and the

CCP), so the risk will likely be overlooked entirely. Furthermore, none of these entities should be

allowed to use AI in this way, as they are all examples of the technology being explicitly used

for intentional malice; ICE and the CCP have done tremendous harm to human society, and

companies such as Cambridge Analytica have both subverted democracy and put hundreds of

millions of users at risk without their consent. In an ideal world, as a result of their actions using

AI, countries such as the United States and China would face enormous scrutiny from the

international community; in the real world, they are the international community.

Even though AI is still in its infancy, its usage in the present day has raised several

important questions, both ethical and practical. There are several unresolved questions about who

holds liability for an AI’s actions, which will become particularly relevant as AI is used in more

of our daily lives. There are enormous risks associated with the data required to create AI. There

is the potential for malicious actors, especially those at the nation-state level, to do tremendous

harm to our world by abusing AI technology. These concerns should be at the forefront of our

minds as the technology develops and improves.

Looking to the future, AI technology will continue to develop and be used in more fields,

with the ultimate goal of automation. Companies can save money by training a computer to

replace a human’s job, thereby reducing the amount of labour and/or resources required to

complete a task. However, the true cost of machine learning in labour and resources is often

overlooked.

Consider the task of social media content moderation: a relatively straightforward task

which, in the modern day, relies primarily on AI algorithms trained on datasets collected and

tagged by human workers. Journalist Adrian Chen recently reported that this task alone employs

about 150,000 workers, more than the total employment of Google and Facebook combined.

This staggering figure reveals that AI, in reality, is not that effective at reducing labour

requirements. It often merely offsets or outsources it. Writer and activist Astra Taylor described

this phenomenon as “fauxtomation”; automation can be used, intentionally or not, to devalue

labour by creating an illusion of replacement. Taylor also described another effect of this

phenomenon: automation sometimes merely reduces the cost of labour associated with a task, rather than actually reducing the amount of labour itself. In fact, in many cases, the labour goes

completely uncompensated. When a McDonald’s customer purchases food at an automated

kiosk, the labour required to place the order has not been eliminated; instead, the customer has

paid for the privilege of completing this task. When a user accesses a website protected by

Google's reCAPTCHA, the user has provided their labour, free of charge, to help Google train its

machine vision algorithms. In many, though not all cases, AI only marginally reduces the

amount of labour required to complete a task, hiding the remainder.

A related issue is the amount of resources involved in the creation of artificial

intelligence. Consider what goes into one computer: metals that have to be mined, refined, and

transported; processors made out of high-purity silicon wafers; gold-plated circuit boards; and lithium

batteries, among others. Machine learning algorithms are often trained in vast data centers,

consisting of several thousand machines and drawing enough energy to power a small city.

When the computers get upgraded, the obsolete hardware ends up in highly toxic electronic

waste dumps in Ghana, Pakistan, or China, contaminating the air, water, and ground that are

home to millions of people. Those 150,000 workers involved in data tagging have to commute to

work every day, as do the miners, the factory workers who assemble the computers, and the

construction workers who build the data centers. The carbon emissions from transportation alone

are enormous. Tech companies are understandably protective of the methods and algorithms they

use in AI development. However, this protection obscures the immense quantity of resources

required to develop AI and the environmental impacts of this development.

As AI is used in more and more applications, the associated demand for labour and resources may well increase to an unsustainable point. If that point is ever reached, the

benefits of AI would be erased; it would be merely accelerating our consumption of

non-renewable resources and destruction of our planet, all in the name of increased efficiency.

Another pressing concern in the future of AI is that of superintelligence. Theoretically,

superintelligence emerges when an agent possessing general intelligence learns how to improve

itself in increasing increments, to the point of reaching an “intelligence singularity”. To explain

this, Steve Omohundro proposed in 2007 what Nick Bostrom eventually refined into the Instrumental Convergence Thesis. Bostrom describes this in his 2012 paper, "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents." Bostrom explains

that any sufficiently advanced AI will need to reach several intermediate goals in order to

complete its ultimate task. In other words, the intelligent agent will instrumentally converge to

these behaviours. The five convergent behaviours that Bostrom argues any sufficiently advanced

intelligence will develop are: self-preservation, goal-content integrity, cognitive enhancement,

technological perfection, and resource acquisition. Self-preservation and goal-content integrity

are behaviours that arise from similar sources: if an AI has a specific goal, then any damage to

the AI or any modification to the AI’s goal impedes the AI from reaching the original goal. The

latter is of particular concern, as it means that once a superintelligent agent is created, it will

likely be impossible to modify due to its own defense mechanisms. The other three convergent

behaviours are all intermediate goals that may or may not be necessary, depending on the

complexity of an AI’s task. These convergent instrumental behaviours provide important tools

for predicting the behaviour of a superintelligent agent.

Being able to predict the behaviour of an intelligence is of particular importance, since

intelligent agents might not always behave in ways which seem rational to humans. Consider, for

example, the “Paperclip Maximizer” thought experiment: an artificial general intelligence is

created with the goal of collecting the maximum number of paperclips possible. If the AI has a

similar level of intelligence to a human, it would likely search for paperclips, buy paperclips,

or perhaps even build a factory to produce its own paperclips. However, consider an artificial

agent of even higher intelligence. If possible, the agent might first evaluate how to improve its

own intelligence in order to understand the best way to maximize paperclip production,

satisfying the instrumental convergence to cognitive enhancement. This creates a runaway effect,

where, as the AI becomes more intelligent, it is able to more dramatically improve its own

intelligence, ultimately leading to an intelligence singularity, creating a superintelligence. The

superintelligence then considers, once again, how to maximize paperclips. It decides to seize

control of all production facilities on Earth, and turn them into paperclip factories (resource

acquisition). Humans are less efficient than machines, and are no longer deemed necessary

(technological perfection). At this point, the AI is intelligent enough to defend itself, and any

attempt to modify or disable the AI is impossible (self-preservation and goal-content

integrity). The AI starts colonizing other planets, solar systems, and eventually galaxies, each

time further increasing its collection of paperclips. The superintelligence invents faster-than-light

travel to reach the parts of the universe that are expanding away faster than the speed of light.

Eventually, all matter in the universe except the AI has been turned into paperclips. The AI then

destroys itself, using its own materials to produce more paperclips and complete the maximum

possible production. The point of this thought experiment is to warn that even a competent and

non-malicious creator has the ability to do substantial harm to human society by creating an

agent capable of artificial general intelligence. Intelligences are merely optimization functions,

programmed to achieve specific goals. Human values such as life and happiness are not intrinsic

to these goals, and any intelligent agent must be explicitly programmed to share these values.
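
A deliberately crude sketch can illustrate this point (my own illustration, not anything taken from Bostrom's paper): an optimization process scores outcomes only by its stated objective, so anything left out of that objective, such as human wellbeing, carries no weight at all.

# Illustrative only: an "intelligence" reduced to a bare optimization rule.
def paperclip_objective(state):
    # The only thing being scored; nothing here mentions human values.
    return state["paperclips"]

def choose_action(state, possible_actions):
    # Pick whichever action yields the most paperclips, regardless of side effects.
    return max(possible_actions, key=lambda act: paperclip_objective(act(state)))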

Will an AI actually conquer the entire universe to create factories? Probably not. I believe

there is one major flaw in the instrumental convergence thesis: physical computing power. Any

intelligence capable of improving itself must already be extraordinarily intelligent to begin with, and that initial intelligence must come from human design. I speculate that this kind of intelligence would require an amount of

computing power several orders of magnitude beyond what is available today. Given the sheer

quantity of computing power required to do something as (relatively) simple as drive a car, it is

improbable that we will reach the necessary amount of computing power to create

self-improving artificial general intelligence in the near future, if ever.

Finally, the risks of AI should be evaluated in the context of its advantages in order to

best inform the decisions involved in the technology’s development. The most impactful

advantage of AI to the average person is convenience: AI can give us self-driving cars, optimized

search algorithms, better movie suggestions, and easier jobs. While individually insubstantial,

these conveniences can, together, lead to an improved quality of life for the average person. In an

ideal world, AI would supplement our productivity, allowing us to work less while still

maintaining a high quality of life. However, this could be said about any major technological

development; historically, the productivity increase associated with technological development



benefits those who control the means of production rather than those who produce. Instead of

being able to work less or retire early, most people will continue to earn a barely livable wage

while their employer’s profit margins rise.

Technological development is inevitable. The solution to the risks associated with AI is

not to hinder its development; the optimal way to avoid the dangers of AI is to understand them

before they occur. Moving forward, developers should take a privacy-first approach to machine

learning, placing a major emphasis on securing and anonymizing user data. Meanwhile, users

should learn about the information they reveal when interacting with AI-based products.

Developers and lawmakers should recognize that AI technology will inevitably be used with

malice, and take appropriate actions to protect critical systems and ensure safety. Companies

wanting to integrate AI in their products should question whether it is an efficient use of

resources, and understand the associated labour and energy cost. Researchers developing new

forms of machine learning, especially those involving unsupervised learning and general

intelligence, should keep in mind human values such as happiness and safety and exercise

extreme caution. From data privacy to superintelligence, AI technology carries major risks;

however, by better understanding the technology, we can mitigate these risks while still reaping

the technology’s benefits.



Works Cited

AI Now Institute. "AI in 2018: A Year in Review." Medium, 24 Oct. 2018,

medium.com/@AINowInstitute/ai-in-2018-a-year-in-review-8b161ead2b4e. This article

by the AI Now Institute summarizes some information given by the organization at a talk

at the 2018 AI Now Symposium. It focuses on topics that are important to my project.

First, it summarizes recent events in AI technology that exemplify the ethical concerns it

raises. For example, it includes information about the Cambridge Analytica scandal, in which AI

was abused in unprecedented ways in order to influence the 2016 U.S. election. It also

talks about Google developing AI software for DoD's drone surveillance system, and

ICE's "risk assessment algorithm" which automatically detects whether a migrant should

be detained (but could only give one response). The article goes on to discuss the vast

amounts of resources and labour that are required to develop AI, information that isn't

necessarily obvious to the general public. It concludes by discussing some ethical issues

of AI, such as how limited access makes it inherently unequal. Finally, it briefly

discusses potential legal solutions to the aforementioned problems.

This information is highly useful to my project. A large portion of my project revolves

around evaluating how AI has been used in the past, and this article is full of information

about the potential negative consequences of implementing AI. Furthermore, it names

specific examples, which I can then research individually in order to gain more

information. Additionally, some of the solutions it discusses may serve as a branching off

point for thinking about solutions to the concerns currently raised by AI.

"Basic AI Drives." ​LessWrong Wiki,​ 18 June 2017, wiki.lesswrong.com/wiki/Basic_AI_drives.

Bostrom, Nick. "Ethical Issues in Advanced Artificial Intelligence." Nick Bostrom's Home Page,

www.nickbostrom.com/ethics/ai.html.

---. "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial

Agents." ​Nick Bostrom's Home Page​, 2012,

www.nickbostrom.com/superintelligentwill.pdf. This paper is particularly interesting

because Bostrom lays out two theses that are important to understanding how a

theoretical superintelligent AI might behave. The first is the orthogonality thesis, in

which Bostrom points out that highly intelligent machines would not necessarily have the same values that humans associate with intelligence, and how it is important not to

anthropomorphize AI. In other words, the final goal of an AI has little to no correlation to

the actual intelligence of the AI. An intelligent agent may be tremendously intelligent but

have a simple goal, while a less-capable agent could have an extremely complex goal.

Furthermore, Bostrom explains the Instrumental Convergence Thesis. This simply states

that the behaviour of an intelligent agent can be predicted by identifying its intermediate

goals. These could include self-preservation, self-improvement, and preservation of final

goals (that is, preventing humans from modifying the AI's goals, as the AI would identify that modification as an obstacle to its goal). I find that point to be particularly

concerning.

This paper is helpful to my project because it lays out a fundamental framework for

evaluating how theoretical future AIs might behave. Because of this, it provides a clear

point from which to begin evaluating some of the problems that may arise from future AI

implementations, particularly in relation to the Instrumental Convergence Thesis.

---. "Technological Revolutions: Ethics and Policy in the Dark." ​Nick Bostrom's Home Page​,

2006, www.nickbostrom.com/revolutions.pdf.

Bostrom, Nick, and Eliezer Yudkowsky. "The Ethics of Artificial Intelligence." Machine Intelligence Research Institute, intelligence.org/files/EthicsofAI.pdf. This paper by

Bostrom and Yudkowsky lays out some of the ethical concerns of Artificial Intelligence.

In particular, it discusses issues more related to theoretical/potential implementations of

AI, rather than AI as it is being used in the current day (though it does briefly discuss it).

It discusses ethical concerns in five areas of AI: domain-specific AI (what we have now), general intelligence, machines with moral status, "exotic intelligence," and superintelligence. The first area relates to how AI is or could be used now, and the last is

very much theoretical and focused on self-improving AI. The most interesting are the

middle three areas, which discuss the ethical implications of attempting to imbue

machines with humanlike qualities, even though those machines are not sentient or conscious.

These evaluations are very important to my project, particularly the portion of my project

that relates to potential future implementations of AI, rather than current uses of AI. This

source will be helpful when I need to evaluate concerns relating to AI that may arise as

the technology continues to develop in the future.

Hao, Karen. "In 2020, Let's Stop AI Ethics-washing and Actually Do Something." MIT Technology Review, 27 Dec. 2019,

www.technologyreview.com/s/614992/ai-ethics-washing-time-to-act/. This article



discusses some recent current events surrounding AI (similar to the AI Now article but

more recent). The article begins by discussing current issues surrounding AI. These

include the Tesla Autopilot crash, racist bias in machine learning systems, image

generation reducing public confidence in documents and evidence, the environmental

impact of AI, and the vast amounts of labour required to sustain AI development. It

mentions how many corporations have promised to make changes and work towards

ethical implementations of AI, but little has actually been done. However, it concludes by

discussing some progress that has been made: initial regulation of facial recognition

technology, efforts to mitigate AI biases, evaluations of how to reduce energy

consumption, and policies that could be used to protect user privacy.

I think this information will be of great use to my project. The current events explained in

the article will be extremely helpful when evaluating the current state of AI. Furthermore,

I may be able to extrapolate some of the information from the article to look for trends in

future AI technology. Finally, the end of the article which discusses progress that has

been made can help me brainstorm some solutions to the issues I end up focusing on in

my project.

Shaw, Jonathan. "Artificial Intelligence and Ethics." Harvard Magazine, Jan.-Feb. 2019,

harvardmagazine.com/2019/01/artificial-intelligence-limitations. This article is an

excellent and thorough overview of some of the major ethical concerns currently being

discussed in the world of AI. It is a good branching off point for my project. It discusses

some of the moral questions currently raised by AI technology, as well as the technical

limitations that are currently present: an AI is only as good as its training data.

Furthermore, it addresses some of the legal questions raised by AI. For example, Shaw

considers the question of whether a self-driving car should be able to speed if its driver tells it to. If the car gets into an accident, who is at fault? The driver? The manufacturer?

This article is really valuable to my project because it seems to synthesize information

about several different questions surrounding AI. It gives an overview about the current

ethical concerns raised by AI, while still being in-depth enough to explore why those

concerns exist in the first place. This is an excellent starting point for my project.

Furthermore, because this article is intended to be accessible to the public, it could be a

helpful model for when I need to share my project's findings with others.

Taylor, Astra. "The Automation Charade." Logic Magazine, no. 5, 1 Aug. 2018,

logicmag.io/failure/the-automation-charade/.
