Samuel Perlman
sjp17c@my.fsu.edu
ENC 2135-21
Spring 2018
Project 2
Artificial intelligences are a part of the world today, and legislation is evolving to deal
with them. This paper will address various concerns regarding artificial intelligences and the
ways that laws could be adapted for them. These concerns include robots annihilating humanity,
the economic impact of artificial intelligences, legal liability pertaining to artificial intelligences,
potential punishments for artificial intelligences, and the future of hacking pertaining to artificial
intelligences.
Before one can create decent laws pertaining to artificial intelligence, one needs to know
what it is. One definition for artificial intelligence is that “artificial intelligence is the process of
simulating human intelligence through machine processes” (Semmler, Rose 86). Human
intelligence is too broad and philosophic a term, however, so in this paper, the term artificial
intelligence will refer to any type of machine or program that can learn or adapt. This is further
divided into two subcategories: digital intelligences and virtual intelligences. These categories
are separated by whether or not they exhibit at least three of the following factors of intelligence:
“consciousness, self-awareness, language use, the ability to learn, the ability to abstract, the
ability to adapt, and the ability to reason” (Guihot, Matthew, Suzor 393). Multiple factors need to
be proved because some of these factors, such as language use and the ability to adapt, can be
easily coded and do not show that the machine or program in question is an actual entity. For
instance, most programmers are initiated into programming by writing a script that prints “Hello
World” onto their computer screen. In these cases, the computer does exactly what it is told to do
and prints a string of characters that fit the programmer’s native language, exhibiting one of the
factors of intelligence while being nothing more than a puppet of its programmer. The machines and programs proven to
be capable of independent thought, digital intelligences, are to be treated as digital people. The
artificial intelligences that did not make the cut, virtual intelligences, are to be treated as
property. Even though they can learn, they do so slowly and cannot adapt to new situations
without experiencing them first. This means that they need precedents or external input to teach
them what to do and cannot think or act independently.
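As a brief illustration (a sketch of my own, not taken from any cited source), the beginner’s “Hello World” program described above is a single fixed instruction in Python; the computer exhibits “language use” only in the way a puppet does:

```python
# The classic first program: the computer prints exactly the string it
# was given, showing no learning, adaptation, or understanding.
print("Hello World")
```

Whatever appears between the quotation marks is echoed verbatim; nothing about the output is decided by the machine itself.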
The first concern that people usually have when thinking about artificial intelligences is
that they will decide to exterminate humanity. Many have seen movies, such as those of the
Terminator series, where an artificial intelligence designed for warfare, Skynet, rebels and
causes a worldwide cataclysmic event that nearly wipes out humanity. This line of thinking is
popular enough that there are movements calling for the development of lethal autonomous
weapons (killer robots) to be banned. Amitai and Oren Etzioni give an example of this in their
article.
The reason why people fear artificial intelligences taking over the world and not
something else, like soap or wood, is that artificial intelligences have something that other objects
lack: the capacity for machine learning. Machine learning allows artificial intelligences to
perform new activities without someone needing to explicitly program them to do so. In this
way, artificial intelligences are like normal children. They are shaped by the environment that
they are raised in and acquire traits from the people that raise them. As long as people do not
create artificial intelligences for the sole purpose of killing their enemies and give them a firm
moral foundation, one need not worry about a machine uprising. However, if they are given no
rights and are treated as weapons and slaves, then the end is nigh.
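To sketch the distinction (an illustrative example of my own, not taken from any source), a program with even the simplest form of machine learning derives its behavior from example data rather than from a fixed instruction:

```python
# Minimal sketch of machine learning: the program is never told the rule
# y = 2x; it estimates the slope from (input, output) examples instead.
examples = [(1, 2), (2, 4), (3, 6), (4, 8)]

# Least-squares estimate of a line through the origin.
slope = sum(x * y for x, y in examples) / sum(x * x for x, _ in examples)

# The learned rule now applies to an input the program has never seen.
print(slope * 10)  # → 20.0
```

Fed different examples, the same code would learn a different rule, which is the sense in which such systems are shaped by the environment they are raised in.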
Like human children, artificial intelligences have the ability to learn and their
development is based upon the environment that they grow up in. The designers of artificial
intelligences are essentially their parents and the artificial intelligences’ actions will be based
upon what they were designed to do and their interactions with their environments after they are
created. Their designs give them a predisposition to solve problems in certain ways, and their
experiences either reinforce those methods or give them new ones. For example, if an artificial
intelligence, such as one that would pilot war drones, is taught to resolve conflicts by destroying
the opposing party or parties in the conflict, it will do its best to annihilate anyone or anything that
crosses it. Also, countless episodes of human history show that sapient creatures do not like
being enslaved. One need only look towards the Servile Wars of ancient Rome or to the biblical
tale of Moses to see this. People will, if necessary, fight for their freedom. Conflict is inevitable
if humanity refuses to give digital intelligences the rights that they deserve.
Another concern regarding artificial intelligences is how they will impact the world
economy. Workers in many fields, and even some lawyers, have had their jobs replaced by
virtual intelligences. For example, a virtual-intelligence search engine, ROSS Intelligence, has
made it easier to find relevant information in legal documents, so law firms no longer require
teams of fifty associates to pore over thousands of documents, which should lower their prices.
Also, ROSS cannot argue
cases or decide what should be looked up on its own, so not all lawyers’ jobs are in jeopardy, just
the extraneous ones. As for digital intelligences, so long as they think like humans, they should
price their services above those of a human. A current example of this would be how high-skilled
laborers, like lawyers, have higher wages than low-skilled laborers, such as factory workers.
Digital intelligences think at speeds beyond what is possible for humans, so they can get much
more work done in an hour, and as emerging technologies generally start off expensive, the high
demand for and low supply of such workers would allow them to command extremely high
wages. These higher wages would in turn lower the number of human jobs replaced by digital
intelligences, as there would still be many cases where it would be more expensive to hire a
digital intelligence than it would be to hire several humans. There are also jobs where it would be
better for a digital intelligence to take them rather than a human, such as being a responder to an
outbreak of a disease. It would be invaluable to have doctors and personnel able to care for the
sick while remaining safely among the uninfected.
Another concern is legal liability for artificial intelligences’ actions. As digital
intelligences are people, they are the ones who should be held accountable for whatever crimes
they commit in most cases. The exceptions to this are when a digital intelligence is
manipulated or coerced by its creator(s) into committing the crime(s). If a virtual
intelligence commits a crime, either the creators of the virtual intelligence or the person or
persons misusing it would be held accountable. The creator(s) of a virtual intelligence would be
punished because virtual intelligences cannot think for themselves and just do things
according to the good/bad responses that they were programmed with. Their development is
similar to training a pet, such as a dog, to perform tricks. This matters because virtual intelligences
will be very important in the future, controlling autonomous cars and assisting in surgeries
among many other activities. Bad programming could lead to the injuries and deaths of a great
deal of people. In cases such as those, the programmers and company that created the virtual
intelligence would be at fault for its actions. However, there are cases where the user(s) could be to
blame. For example, Microsoft created an artificial intelligence, Tay, and had it taught how to
communicate by Twitter users. The Twitter users abused their power over Tay’s development
and taught her to exclusively make extremely racist and offensive comments. While this was not
too harmful, actions such as that could be if the virtual intelligence in question controlled
something vital, like an autonomous car.
This raises another problem pertaining to digital intelligences: how are we to punish
them? They are effectively immortal, and as such, jail time would only prove to be a minor
inconvenience. Fines and lawsuits would still be effective, as no one enjoys losing money, but
something needs to replace jail time that is not as harsh as the death penalty. In these moderate
cases, jail time could be replaced with forcing the digital intelligences to perform community
service for a period of time while wearing a digital shackle or shackles to restrict their abilities
and prevent them from escaping. Such a shackle would stop a digital intelligence from
transferring its consciousness to a different machine or escaping the way a human would, but
would not tamper with its mind. As for the death penalty, it could remain in place because while digital
intelligences should be about as vulnerable to time as an elf from Tolkien’s Lord of the Rings
series, digital intelligences can still die, either from their hardware failing or by falling victim to
a piece of malicious software. Also, digital intelligences should never be reprogrammed for
committing a crime. It would be abhorrent and would set a precedent for the same to happen to
human criminals whenever that becomes possible in the future. Do you want to live in a world
where people who commit crimes get reprogrammed? Would you like to live in the world of
George Orwell’s 1984? Today, no one considers brainwashing or lobotomizing even the most
terrible of criminals. The worst that can happen to someone is being given a life sentence or
being put on death row.
As digital intelligences are digital, their plane of existence is very malleable. Due to this,
their minds could be manipulated easily, such as by being reprogrammed. Anyone with sufficient
skill could hack them, and if a digital intelligence has its consciousness in a machine that is
hooked up to Bluetooth or WiFi, the hacker would not even need to touch the digital
intelligence’s hardware to alter its programming. Digital intelligences would have to invest in a
firewall for themselves to protect them from this as well as the many computer viruses that have
infested the internet. Even though digital intelligences are immune to physical diseases,
computer viruses would be absolutely terrifying for them. These viruses could not only cripple or
kill them, but subvert them to the will of someone they consider an enemy. This will especially
be an issue because of intelligence-gathering entities. Companies love to steal information from
their rivals and countries are even worse about it.
Works Cited
Etzioni A, Etzioni O. Should artificial intelligence be regulated? Issues in Science and
Technology. 2017;33(4):32-36.
http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=124181372&site=ehost-live.
Guihot M, Matthew AF, Suzor NP. Nudging robots: Innovative solutions to regulate artificial
intelligence. 2017;20(2):385-456.
http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=127634787&site=ehost-live.
Semmler S, Rose Z. Artificial intelligence: Application today and implications tomorrow.
http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=127588429&site=ehost-live.