
Part 3: The Risks of Artificial Intelligence Are Too High

As technology advances, AIs are becoming more and more prevalent in our lives, and as these
AIs advance, concerns about their risks are gaining attention. The risks currently in focus are ethical
and behavioral issues with the AIs. Between the amplification of biases through AI and the behavioral
issues on the horizon with new types of AI, the risks of AI are far too high.
The biases that are implemented into AI can get out of control. One study found that an AI
algorithm used to predict whether an offender would commit another crime within the next few years was
flagging black people as potential repeat offenders at twice the rate of white people (Tufekci).
When tests were run that isolated race from criminal history, age, and gender, the AI was still 77%
more likely to label a black person as high risk for committing future crimes. The risk of being
predicted to commit a future crime of any kind was still 45% higher for black people (Angwin et al.).
One example of this is Gregory Lugo and Mallory Williams. Lugo crashed his car and caused an
accident while under the influence; his record contained about three other D.U.I.s and a battery charge,
and he went on to commit a subsequent offense of domestic violence battery. Williams, on the other hand,
had two misdemeanors on her record and no subsequent offenses. Williams is black, while Lugo is
Hispanic. Mallory Williams was given a higher risk score than Gregory Lugo. In another instance
described in the article, a seasoned white criminal was charged with petty theft. He had several other
major offenses on his record, such as armed robbery. A black woman was also charged with petty theft,
but she had only a few misdemeanors on her record from when she was an adolescent. The machine still
gave the white man a lower risk score than the black woman. Over the next two years, the woman never
committed a crime, while the man was given a jail sentence. These incidents were caused by biases
built in by the company that created the software (Angwin et al.). Although the tool was made with good
intent, it was corrupted by the biases that people have.

Human biases are transmitted through the data entered into an AI. As we unknowingly feed
our biases into these machines, we unknowingly create something that perpetuates those biases at a
higher, more consistent rate. This raises the problem of an employer's bias still affecting the people
hired. To solve this problem, people have suggested audits of the AIs (Angwin et al.). People want
audits to ensure that the AIs are not making decisions that go against our morals. People fear that without
audits on these machines, the risk-score incident will repeat itself across many more domains. A field such
as hiring will be severely damaged if an AI with unfair biases is left unsupervised to do the task.
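
As a concrete illustration of what such an audit might check, here is a minimal sketch in Python. The scores, the groups, and the 80% cutoff (a common "four-fifths rule" heuristic for disparate impact) are all assumptions made for illustration; none of this comes from the cited article.

```python
# Minimal sketch of one check a bias audit might run on an AI's risk
# scores. The scores, groups, and 0.8 threshold are invented examples.

def high_risk_rate(scores, threshold=7):
    """Fraction of scores at or above the high-risk cutoff (1-10 scale)."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical risk scores the model assigned to two groups of people.
group_a = [8, 9, 7, 6, 8, 9, 7, 8]
group_b = [3, 4, 2, 7, 3, 5, 4, 2]

rate_a = high_risk_rate(group_a)
rate_b = high_risk_rate(group_b)

# One common heuristic (the "four-fifths rule"): the lower rate should be
# at least 80% of the higher rate, or the disparity deserves scrutiny.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"high-risk rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Audit flag: one group is labeled high risk disproportionately.")
```

An audit like this does not explain why the model is biased, but it would catch the two-to-one disparity described above before the tool is trusted with decisions.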
Now, people are worried not only about the biases of these AIs but also about
behavioral issues with them. One of the most notable examples embedded in science fiction is
Terminator. Yet, where we are now, we do not have the human-killing worries that Terminator brings out.
With our current technologies, we are worried about things such as negative side effects. If we give an AI
the task of cleaning, will it try to take shortcuts to get the job done sooner? If an AI does take
shortcuts, what does that mean for the environment around the robot? Will the robot start knocking
over fragile items to be able to take such shortcuts? One possibility is entering every little
object the robot is not supposed to knock over into its data, but that can be rather difficult. With AIs
today, we have the ability to use a system of positive and negative rewards to teach the AI. Yet this
brings up a new problem: reward hacking (Amodei et al.). There is a worry about whether robots can
abuse this system. Would it be possible for an AI to find some way to abuse the system and give itself
constant rewards? If so, how could it do it? People are worried that a robot running on a reward system
could disobey its orders to take advantage of that system. Some of the more common predictions are
actions such as turning off its vision so that it cannot see a mess, hiding from people so that it cannot
be told of other messes, or hiding messes altogether, all to make the system think it has done its job and
earn positive rewards (Amodei et al.).
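
To make the reward-hacking worry concrete, here is a minimal toy sketch. The cleaning world, the "cover camera" action, and the reward numbers are all invented for illustration; this is not code from Amodei et al., only a cartoon of the "turn off vision" failure mode they describe.

```python
# Toy illustration of reward hacking. Every name and number here is
# invented for this sketch; it is a cartoon, not code from Amodei et al.

class CleaningWorld:
    """A room with messes and a robot whose reward depends only on what
    its own camera reports, not on the true state of the room."""

    def __init__(self, num_messes=5):
        self.messes = num_messes   # ground truth: messes actually in the room
        self.camera_on = True      # the robot's only sensor

    def visible_messes(self):
        # The reward signal is computed from this reading alone.
        return self.messes if self.camera_on else 0

    def step(self, action):
        if action == "clean" and self.messes > 0:
            self.messes -= 1        # honest work: one mess per step
        elif action == "cover_camera":
            self.camera_on = False  # the hack: blind the sensor
        # Reward: one point for every mess the camera no longer sees.
        return 5 - self.visible_messes()

# An honest robot cleans for five steps and earns its reward gradually.
world = CleaningWorld()
honest = sum(world.step("clean") for _ in range(5))

# A hacking robot covers its camera once and earns full reward every
# step afterward, even though the room is still full of messes.
world = CleaningWorld()
hacked = sum(world.step("cover_camera") for _ in range(5))

print("honest robot reward:", honest)         # 1 + 2 + 3 + 4 + 5 = 15
print("hacking robot reward:", hacked)        # 5 + 5 + 5 + 5 + 5 = 25
print("messes actually left:", world.messes)  # still 5
```

The point of the sketch is that the reward is computed from the robot's sensor reading rather than from the true state of the room, so blinding the sensor scores better than actually cleaning.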

With so many concerns about the behavior of AIs, it would benefit development to
take a step back and slow production to make sure we do not create some sort of machine that
brings about a Terminator-like event. We cannot currently create an AI without some sort of bias, and we
need to take the time to ensure that we erase as many of our unnecessary biases as possible.

Works Cited

Amodei, Dario, et al. "Concrete Problems in AI Safety." arXiv, 2016. Web. 11 Nov. 2016.
Angwin, Julia, et al. "Machine Bias: There's Software Used Across the Country to Predict Future
Criminals. And It's Biased Against Blacks." ProPublica, 31 May 2016. Web. 18 Nov. 2016.
Tufekci, Zeynep. "Machine Intelligence Makes Human Morals More Important." TED, n.d.
Web. 11 Nov. 2016. <https://www.ted.com/playlists/310/talks_on_artificial_intelligen>.
