
Contents

Types of artificial intelligence
Types of Intelligence
  Type 1: Reactive machines
  Type 2: Limited memory
  Type 3: Theory of mind
  Type 4: Self-awareness
Examples of AI technology
  Automation
  Machine learning
  Machine vision
  Natural language processing (NLP)
  Robotics
  Self-driving cars
AI applications
  AI in healthcare
  AI in business
  AI in education
  AI in finance
  AI in law
  AI in manufacturing
Security and ethical concerns
Regulation of AI technology
BENEFITS & RISKS OF ARTIFICIAL INTELLIGENCE
  WHAT IS AI?
  WHY RESEARCH AI SAFETY?
  HOW CAN AI BE DANGEROUS?
    1. The AI is programmed to do something devastating
    2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal
  WHY THE RECENT INTEREST IN AI SAFETY?
  THE TOP MYTHS ABOUT ADVANCED AI
ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is the simulation of human intelligence processes by machines,
especially computer systems. These processes include learning (the acquisition of information
and rules for using the information), reasoning (using rules to reach approximate or definite
conclusions) and self-correction. Particular applications of AI include expert systems, speech
recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI
system that is designed and trained for a particular task. Virtual personal assistants, such as
Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an
AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a
strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are
including AI components in their standard offerings, as well as access to Artificial Intelligence as
a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment
with AI for various business purposes and sample multiple platforms before making a
commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson
Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial
intelligence raises ethical questions. This is because deep learning algorithms, which underpin
many of the most advanced AI tools, are only as smart as the data they are given in training.
Because a human selects what data should be used for training an AI program, the potential for
human bias is inherent and must be monitored closely.
Some industry experts believe that the term artificial intelligence is too closely linked to popular
culture, causing the general public to have unrealistic fears about artificial intelligence and
improbable expectations about how it will change the workplace and life in general. Researchers
and marketers hope the label augmented intelligence, which has a more neutral connotation, will
help people understand that AI will simply improve products and services, not replace the
humans that use them.

Types of artificial intelligence

Arend Hintze, an assistant professor of integrative biology and computer science and engineering
at Michigan State University, categorizes AI into four types, from the kind of AI systems that
exist today to sentient systems, which do not yet exist. His categories are as follows:
Types of Intelligence
• Type 1: Reactive machines. An example is Deep Blue, the IBM chess program that beat
Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chess board and make
predictions, but it has no memory and cannot use past experiences to inform future ones. It
analyzes possible moves -- its own and its opponent's -- and chooses the most strategic move.
Deep Blue and Google's AlphaGo were designed for narrow purposes and cannot easily be
applied to another situation.

• Type 2: Limited memory. These AI systems can use past experiences to inform future
decisions. Some of the decision-making functions in self-driving cars are designed this way.

• Type 3: Theory of mind. This psychology term refers to the understanding that others have
their own beliefs, desires and intentions that impact the decisions they make. This kind of AI
does not yet exist.

• Type 4: Self-awareness. In this category, AI systems have a sense of self and have consciousness.
Machines with self-awareness understand their current state and can use the information to
infer what others are feeling.
Examples of AI technology

AI is incorporated into a variety of different types of technology. Here are six examples.

• Automation: Technology that makes a system or process function automatically. For example, robotic
process automation (RPA) can be programmed to perform high-volume, repeatable tasks that
humans normally perform. RPA differs from IT automation in that it can adapt to
changing circumstances.
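To make the idea concrete, here is a minimal Python sketch of the kind of high-volume, repeatable task automation tools are pointed at: filling a form letter for every row of a customer file. The file name and column names are invented for the illustration; real RPA products drive existing user interfaces and business systems rather than a standalone script like this.

    import csv

    TEMPLATE = "Dear {name}, invoice {invoice_id} for {amount} is due on {due_date}."

    # Generate one letter per record -- a repetitive, rule-driven task of the
    # sort that automation software typically takes over from human workers.
    with open("customers.csv", newline="") as source:          # hypothetical input file
        for row in csv.DictReader(source):
            with open(f"letter_{row['invoice_id']}.txt", "w") as letter:
                letter.write(TEMPLATE.format(**row))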

• Machine learning: The science of getting a computer to act without being explicitly
programmed. Deep learning is a subset of machine learning that, in very simple terms, can
be thought of as the automation of predictive analytics. There are three types of machine
learning algorithms (illustrated in the sketch after this list):

o Supervised learning: Data sets are labeled so that patterns can be detected and used to
label new data sets

o Unsupervised learning: Data sets aren't labeled and are sorted according to similarities or
differences

o Reinforcement learning: Data sets aren't labeled but, after performing an action or several
actions, the AI system is given feedback
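The sketch below makes the three settings concrete with toy data invented for the example; it assumes scikit-learn is available and is meant only to show how the feedback differs in each case.

    import random
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = np.array([[0.1], [0.4], [0.6], [0.9]])       # toy feature values

    # Supervised: each example carries a label the model learns to predict.
    y = np.array([0, 0, 1, 1])
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[0.8]]))                      # predicts the label of a new point

    # Unsupervised: no labels; points are grouped purely by similarity.
    print(KMeans(n_clusters=2, n_init=10).fit_predict(X))

    # Reinforcement: no labels either; the system acts and receives feedback.
    value = {0: 0.0, 1: 0.0}                         # estimated value of two actions
    for _ in range(200):
        explore = random.random() < 0.3
        action = random.choice([0, 1]) if explore else max(value, key=value.get)
        reward = 1.0 if action == 1 else 0.0         # the environment's (toy) feedback
        value[action] += 0.1 * (reward - value[action])
    print(max(value, key=value.get))                 # action 1 wins almost always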

• Machine vision: The science of allowing computers to see. This technology captures and
analyzes visual information using a camera, analog-to-digital conversion and digital signal
processing. It is often compared to human eyesight, but machine vision isn't bound by
biology and can be programmed to see through walls, for example. It is used in a range of
applications from signature identification to medical image analysis. Computer vision, which
is focused on machine-based image processing, is often conflated with machine vision.
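As a small illustration of the capture-and-analyze pipeline described above, the following sketch uses the OpenCV library to load an image, convert it to grayscale, and run edge detection, a common first stage in inspection tasks. The file name is hypothetical, and a production system would read frames from a camera rather than from disk.

    import cv2

    # The capture step is simulated here by loading a file from disk.
    image = cv2.imread("part_under_inspection.png")            # hypothetical image file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Digital signal processing: detect edges, e.g. to check a part's outline.
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)
    print("edge pixels found:", int((edges > 0).sum()))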

• Natural language processing (NLP): The processing of human -- and not computer --
language by a computer program. One of the older and best known examples of NLP is spam
detection, which looks at the subject line and the text of an email and decides if it's junk.
Current approaches to NLP are based on machine learning. NLP tasks include text
translation, sentiment analysis and speech recognition.
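A minimal sketch of the spam-detection example, assuming scikit-learn: a bag-of-words model trained on a few invented emails. Real spam filters use far larger data sets and many more features, but the machine-learning flavour of current NLP is the same.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # A handful of made-up training emails labelled spam (1) or not spam (0).
    emails = [
        "win a free prize now", "limited offer click here",
        "meeting agenda for tuesday", "please review the attached report",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)
    print(model.predict(["free prize offer", "agenda for the review meeting"]))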

• Robotics: A field of engineering focused on the design and manufacturing of robots. Robots
are often used to perform tasks that are difficult for humans to perform or to perform
consistently. They are used in assembly lines for car production or by NASA to move large
objects in space. Researchers are also using machine learning to build robots that can interact
in social settings.

• Self-driving cars: These use a combination of computer vision, image recognition and deep
learning to build automated skill at piloting a vehicle while staying in a given lane and
avoiding unexpected obstructions, such as pedestrians.
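The control flow can be caricatured as a perceive-decide-act loop. In the sketch below the perception functions are placeholders standing in for the vision and deep-learning components; it is a conceptual illustration, not a real autonomy stack.

    def detect_obstacle(frame):
        """Placeholder for an image-recognition model: is something in the way?"""
        return False

    def lane_offset(frame):
        """Placeholder for a vision model: metres drifted from the lane centre."""
        return 0.0

    def drive(frames):
        # Perceive each camera frame, then decide on a throttle/brake/steer command.
        for frame in frames:
            if detect_obstacle(frame):
                yield {"throttle": 0.0, "brake": 1.0, "steer": 0.0}
            else:
                yield {"throttle": 0.3, "brake": 0.0, "steer": -0.5 * lane_offset(frame)}

    for command in drive([None, None, None]):   # three dummy frames
        print(command)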

AI applications

Artificial intelligence has made its way into a number of areas. Here are six examples.

• AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs.
Companies are applying machine learning to make better and faster diagnoses than humans.
One of the best known healthcare technologies is IBM Watson. It understands natural
language and is capable of responding to questions asked of it. The system mines patient data
and other available data sources to form a hypothesis, which it then presents with a
confidence scoring schema. Other AI applications include chatbots (computer programs
used online to answer questions and assist customers) that help schedule follow-up
appointments or guide patients through the billing process, and virtual health assistants that
provide basic medical feedback.
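The idea of a confidence scoring schema can be illustrated generically; this is not how IBM Watson works internally, just one simple way to rank hypotheses. Raw evidence scores (invented here) are converted into confidences that sum to one and presented in ranked order.

    import math

    # Hypothetical evidence scores for candidate diagnoses after mining patient data.
    evidence = {"condition A": 2.1, "condition B": 1.3, "condition C": 0.2}

    # Softmax: turn raw scores into confidences that sum to 1.
    total = sum(math.exp(s) for s in evidence.values())
    confidence = {h: math.exp(s) / total for h, s in evidence.items()}

    for hypothesis, score in sorted(confidence.items(), key=lambda kv: -kv[1]):
        print(f"{hypothesis}: {score:.2f}")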

• AI in business. Robotic process automation is being applied to highly repetitive tasks
normally performed by humans. Machine learning algorithms are being integrated into
analytics and CRM platforms to uncover information on how to better serve customers.
Chatbots have been incorporated into websites to provide immediate service to customers.
Automation of job positions has also become a talking point among academics and IT
analysts.
• AI in education. AI can automate grading, giving educators more time. AI can assess
students and adapt to their needs, helping them work at their own pace. AI tutors can provide
additional support to students, ensuring they stay on track. AI could change where and how
students learn, perhaps even replacing some teachers.

• AI in finance. AI in personal finance applications, such as Mint or TurboTax, is disrupting
financial institutions. Applications such as these collect personal data and provide financial
advice. Other programs, such as IBM Watson, have been applied to the process of buying a
home. Today, software performs much of the trading on Wall Street.
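As a toy illustration of the rule-driven software behind much automated trading, the sketch below computes a simple moving-average crossover signal on made-up prices; real trading systems are vastly more sophisticated.

    prices = [101, 102, 101, 103, 105, 104, 106, 108, 107, 109]   # invented closing prices

    def moving_average(series, window):
        return sum(series[-window:]) / window

    short_ma = moving_average(prices, 3)
    long_ma = moving_average(prices, 7)
    signal = "buy" if short_ma > long_ma else "sell"
    print(f"short MA {short_ma:.2f}, long MA {long_ma:.2f} -> {signal}")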

• AI in law. The discovery process, sifting through documents, is often
overwhelming for humans. Automating this process is a more efficient use of time. Startups
are also building question-and-answer computer assistants that can be programmed to
answer questions by examining the taxonomy and ontology associated with a database.
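A hedged sketch of the discovery idea, assuming scikit-learn: rank a few invented case documents by TF-IDF similarity to a reviewer's query so the most relevant ones are read first. Commercial e-discovery tools are far richer, but the ranking principle is similar.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # A few made-up case documents and a query a reviewer might run.
    documents = [
        "email discussing the merger timeline and due diligence",
        "lunch menu for the office cafeteria",
        "draft contract clauses on liability and indemnification",
    ]
    query = "merger due diligence"

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])

    # Rank documents by similarity to the query; reviewers read the top hits first.
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    for idx in scores.argsort()[::-1]:
        print(f"{scores[idx]:.2f}  {documents[idx]}")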

• AI in manufacturing. This is an area that has been at the forefront of incorporating robots
into the workflow. Industrial robots used to perform single tasks and were separated from
human workers, but as the technology advanced, that changed.

Security and ethical concerns

The application of AI in the realm of self-driving cars raises security as well as ethical concerns.
Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is
unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable,
forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use
sophisticated machine learning tools to gain access to sensitive systems, complicating the issue
of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools
necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying
or doing things that never took place.
Regulation of AI technology

Despite these potential risks, there are few regulations governing the use of AI tools, and where
laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending
regulations require financial institutions to explain credit decisions to potential customers, which
limits the extent to which lenders can use deep learning algorithms, which are by their nature
typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data,
which impedes the training and functionality of many consumer-facing AI applications.
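The explainability contrast can be sketched with a toy interpretable model: a logistic regression over invented applicant features exposes coefficients a lender could cite when explaining a credit decision, something a deep network does not offer directly. This assumes scikit-learn and is illustrative only, not a compliant lending model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented applicant features: [income in $10k, debt ratio, years of credit history].
    X = np.array([[5, 0.4, 2], [9, 0.2, 10], [3, 0.7, 1], [7, 0.3, 6]])
    y = np.array([0, 1, 0, 1])                      # 1 = loan repaid, in this toy data

    model = LogisticRegression().fit(X, y)

    # The fitted coefficients can be reported directly as reasons for a decision.
    for name, coef in zip(["income", "debt_ratio", "history_years"], model.coef_[0]):
        print(f"{name}: {coef:+.3f}")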

BENEFITS & RISKS OF ARTIFICIAL INTELLIGENCE


“Everything we love about civilization is a product of intelligence, so amplifying our human
intelligence with artificial intelligence has the potential of helping civilization flourish like never
before – as long as we manage to keep the technology beneficial.”

WHAT IS AI?
From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science
fiction often portrays AI as robots with human-like characteristics, AI can encompass anything
from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed
to perform a narrow task (e.g. only facial recognition or only internet searches or only driving a
car). However, the long-term goal of many researchers is to create general AI (AGI or strong
AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess
or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY RESEARCH AI SAFETY?


In the near term, the goal of keeping AI’s impact on society beneficial motivates research in
many areas, from economics and law to technical topics such as verification, validity, security
and control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets
hacked, it becomes all the more important that an AI system does what you want it to do if it
controls your car, your airplane, your pacemaker, your automated trading system or your power
grid. Another short-term challenge is preventing a devastating arms race in lethal autonomous
weapons.
In the long term, an important question is what will happen if the quest for strong AI succeeds
and an AI system becomes better than humans at all cognitive tasks. As pointed out by I.J.
Good in 1965, designing smarter AI systems is itself a cognitive task. Such a system could
potentially undergo recursive self-improvement, triggering an intelligence explosion leaving
human intellect far behind. By inventing revolutionary new technologies, such a
superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong
AI might be the biggest event in human history. Some experts have expressed concern, though,
that it might also be the last, unless we learn to align the goals of the AI with ours before it
becomes superintelligent.

There are some who question whether strong AI will ever be achieved, and others who insist that
the creation of superintelligent AI is guaranteed to be beneficial. At FLI we recognize both of
these possibilities, but also recognize the potential for an artificial intelligence system
to intentionally or unintentionally cause great harm. We believe research today will help us
better prepare for and prevent such potentially negative consequences in the future, thus enjoying
the benefits of AI while avoiding pitfalls.

HOW CAN AI BE DANGEROUS?


Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love
or hate, and that there is no reason to expect AI to become intentionally benevolent or
malevolent. Instead, when considering how AI might become a risk, experts think these two
scenarios are most likely:
1. The AI is programmed to do something devastating: Autonomous weapons are artificial
intelligence systems that are programmed to kill. In the hands of the wrong person, these
weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently
lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy,
these weapons would be designed to be extremely difficult to simply “turn off,” so humans
could plausibly lose control of such a situation. This risk is one that’s present even with
narrow AI, but grows as levels of AI intelligence and autonomy increase.
2. The AI is programmed to do something beneficial, but it develops a destructive method
for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with
ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the
airport as fast as possible, it might get you there chased by helicopters and covered in vomit,
doing not what you wanted but literally what you asked for. If a superintelligent system is
tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as
a side effect, and view human attempts to stop it as a threat to be met. A toy sketch of this kind
of goal misalignment follows.
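To make the misalignment point concrete, here is a toy sketch with invented numbers: optimizing the objective that was literally stated (travel time only) picks a different route than optimizing the objective that was actually intended (time plus a penalty for passenger discomfort).

    # Toy route options: (minutes to the airport, passenger discomfort on a 0-10 scale).
    routes = {"reckless": (18, 9.0), "normal": (30, 1.0)}

    # Objective as literally stated: minimise travel time only.
    fastest = min(routes, key=lambda r: routes[r][0])

    # Objective as actually intended: travel time plus a penalty for discomfort.
    preferred = min(routes, key=lambda r: routes[r][0] + 5.0 * routes[r][1])

    print(fastest, preferred)    # -> reckless normal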
As these examples illustrate, the concern about advanced AI isn’t malevolence but
competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if
those goals aren’t aligned with ours, we have a problem. You’re probably not an evil ant-hater
who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project
and there’s an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety
research is to never place humanity in the position of those ants.

WHY THE RECENT INTEREST IN AI SAFETY?


Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science
and technology have recently expressed concern in the media and via open letters about the risks
posed by AI, joined by many leading AI researchers. Why is the subject suddenly in the
headlines?
The idea that the quest for strong AI would ultimately succeed was long thought of as science
fiction, centuries or more away. However, thanks to recent breakthroughs, many AI milestones,
which experts viewed as decades away merely five years ago, have now been reached, making
many experts take seriously the possibility of superintelligence in our lifetime. While some
experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto
Rico Conference guessed that it would happen before 2060. Since it may take decades to
complete the required safety research, it is prudent to start it now.

Because AI has the potential to become more intelligent than any human, we have no surefire
way of predicting how it will behave. We can’t use past technological developments as much of
a basis because we’ve never created anything that has the ability to, wittingly or unwittingly,
outsmart us. The best example of what we could face may be our own evolution. People now
control the planet, not because we’re the strongest, fastest or biggest, but because we’re the
smartest. If we’re no longer the smartest, are we assured to remain in control?

FLI’s position is that our civilization will flourish as long as we win the race between the
growing power of technology and the wisdom with which we manage it. In the case of AI
technology, FLI’s position is that the best way to win that race is not to impede the former, but to
accelerate the latter, by supporting AI safety research.

THE TOP MYTHS ABOUT ADVANCED AI


A captivating conversation is taking place about the future of artificial intelligence and what it
will/should mean for humanity. There are fascinating controversies where the world’s leading
experts disagree, such as: AI’s future impact on the job market; if/when human-level AI will be
developed; whether this will lead to an intelligence explosion; and whether this is something we
should welcome or fear. But there are also many examples of boring pseudo-controversies
caused by people misunderstanding and talking past each other. To help ourselves focus on the
interesting controversies and open questions — and not on the misunderstandings — let’s clear
up some of the most common myths.
