
Is Artificial Intelligence Really a Threat to Humanity?

By Christian de Looper, Tech Times (August 28, 2015)

C-3PO might be closer to reality than we realize, but is that a good thing?
After years of sci-fi movies showing artificial intelligence through characters like
J.A.R.V.I.S., C-3PO, and the T-1000, the concept of AI is finally knocking on our doors.
For each lovable artificially intelligent robot we encounter in the movies, however,
there is an even more abhorrent one, often intent on taking over the world and even
destroying it. This idea has sent waves of anxiety through the minds of those in
technology and science, not to mention the general public.
However, could AI ever get that out of hand in the real world? It certainly has a
number of top tech and science minds worried. Perhaps the most famous in the tech
world is Elon Musk, who is supporting a number of organizations dedicated to finding
ways to avoid the threat of AI. It's important to note that Musk isn't against AI itself;
he simply wants it developed carefully and with caution. The prospect worries even
Stephen Hawking.

So, what are these people worried about? Well, they aren't worried about Siri
deciding that she wants to take over the world. Instead, the concern is that AI could one
day become so advanced that it decides it wants power of its own. Another fear is that
an artificially intelligent system could misinterpret the mission it is given in a way that
either causes more harm than good or ends up hurting a lot of people. Think of Ultron
in Avengers: Age of Ultron, taking his mission to bring peace the wrong way.
When AI is discussed, however, the conversation is often vague and generic.
While it's certainly true that AI could one day cause harm to people in general, it could
also cause harm in specific industries or in specific situations.
"Imagine that you are tasked with reducing traffic congestion in San Francisco at
all costs, i.e. you do not take into account any other constraints. How would you do it?
You might start by just timing traffic lights better," said Anthony Aguirre from the Future
of Life Institute in an email with Tech Times. "But wouldn't there be less traffic if all the
bridges closed down from 5 to 10 a.m., preventing all those cars from entering the city?
Such a measure obviously violates common sense, and subverts the purpose of
improving traffic, which is to help people get around but it is consistent with the goal
of 'reducing traffic congestion.'"
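To see Aguirre's point in concrete terms, here is a minimal, hypothetical sketch in Python. Everything in it, the intervention names, the congestion scores, and the two objective functions, is invented for illustration: an optimizer told only to minimize congestion, with no common-sense constraints encoded, dutifully picks the bridge closure.

```python
# Hypothetical sketch of Aguirre's thought experiment. The interventions and
# scores are invented; lower "congestion" is better, and higher "people_served"
# means the city's actual purpose (getting people around) is better met.
interventions = {
    "retime traffic lights":       {"congestion": 0.80, "people_served": 1.00},
    "add express bus lanes":       {"congestion": 0.70, "people_served": 1.00},
    "close all bridges 5-10 a.m.": {"congestion": 0.05, "people_served": 0.30},
}

def naive_objective(effects):
    # "Reduce traffic congestion at all costs": only congestion is scored.
    return effects["congestion"]

def sensible_objective(effects):
    # What was actually meant: reduce congestion AND keep people moving.
    return effects["congestion"] - effects["people_served"]

naive_pick = min(interventions, key=lambda name: naive_objective(interventions[name]))
sensible_pick = min(interventions, key=lambda name: sensible_objective(interventions[name]))

print("Naive objective picks:   ", naive_pick)     # close all bridges 5-10 a.m.
print("Sensible objective picks:", sensible_pick)  # add express bus lanes
```

The naive objective is satisfied perfectly by the bridge closure, yet it subverts the purpose the goal was meant to serve, which is exactly the failure mode Aguirre describes.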
The Future of Life Institute is an organization dedicated to exploring how best to
develop artificial intelligence in ways that will not threaten humans.
Directing traffic improperly could not only result in people being late but could also lead
to accidents. Despite that, the problems associated with artificial intelligence directing
traffic pale in comparison to the fears associated with AI being in charge of things like
weapons. In fact, both Elon Musk and Stephen Hawking are among the scores of
experts calling for a ban on artificially intelligent weapons and weapon control
systems, which would be able to attack targets without any human direction.
While AI-controlled weapons might be able to help soldiers and civilians in war
zones, the problem is that their development could set off another global arms race,
one that experts say could be not only disastrous but also very expensive.
"There is an appreciable probability that the course of human history over the
next century will be dramatically affected by how AI is developed," continued Aguirre. "It
would be extremely unwise to leave this to chance, which is essentially what we do if we
do not develop an understanding of the relevant issues that will allow us to develop AI
that is robust and [in a] beneficial way."
While concern over the development of artificial intelligence seems to be more or
less universal among experts in the field, some are less worried than others. Some
argue that the threat artificial intelligence poses to people will stop at taking their jobs
or at being used by some people against others, rather than the machines taking on a
life of their own and setting out to gain power.
"An uncontrollable super AI wiping out humanity just doesn't sound that plausible
to me," said Frederick Maier, Associate Director at the University of Georgia's Institute
for Artificial Intelligence, in an email with Tech Times. "The real threat posed by AI is

that it will be used by human beings in harmful or unjust ways (in war, or to oppress or
to marginalize)."
Violence isn't the only threat posed by artificial intelligence. There are a number
of other ways AI could threaten a person. For example, there are countless ways AI
could infringe on a person's privacy. Facebook's facial recognition system is just one
example. The system is able to recognize an individual in photos, which informs
Facebook's tagging recommendations. However, the system also learns as it goes, and
is now able to recognize a person through other traits, such as how they stand or the
clothes they wear. If this kind of technology is implemented in things like security
systems, it could be a privacy minefield. In fact, almost unsurprisingly, the NSA already
appears to be developing some kind of recognition system of its own.
Assuming that artificial intelligence does pose some kind of threat to humans, it's
important to consider how long, at the current rate of development, it will be before that
threat becomes real. According to Aguirre, experts suggest that we are decades away
from any super-intelligent AI that could pose a serious threat. That does not mean,
however, that the issue should be ignored until then. The idea is that it needs to be
monitored and dealt with as the technology develops, rather than when it is too late.
"These are the same sort of timescales over which global climate change will
unfold, and well within our children's lifetimes, so it makes sense to devote significant
resources to it," says Aguirre. "In addition, we don't KNOW that extremely capable AI is
many decades away there is some small probability that it could unfold much faster,
and we are completely unprepared for that."
Because most experts agree that there is some level of risk associated with the
development of artificial intelligence, the real question is whether the benefits outweigh
the risks. AI has the potential to be a significant help to humanity, certainly in the short
term, and even more so in the long term.
"We already have access to a lot of AI through our computers and mobile
devices. Image classification, question answering, natural language processing all of
this is done already and in many ways is very impressive. The difference between the
digital assistants of today and those of tomorrow is that the latter won't be as
frustrating," continued Maier. "Further along, AIs will be used in jobs that today require
human beings. Autonomous cars is a ready example, but I imagine that machines will
become better and better at an ever-widening set of tasks."
Experts may not agree on how urgent the threat of artificial intelligence is, but
most do agree that AI is worth pursuing; we just have to be careful about how we
pursue it. The potential of AI is huge, but so is its potential for destruction. Only time
will tell how smart artificial intelligence actually gets, but in the meantime, it's important
that we put the right precautions in place.
