
Abstract

The development of artificial intelligence (AI) is a small aspect of the computer revolution, yet with the creation of AI we, as humans, are able to improve our quality of life. Artificial intelligence, the future master of the modern world, is forming a new era together with the other recent technologies traversing the world. It has many faces: creativity, problem solving, pattern recognition, classification, learning, induction, deduction, building analogies, optimization, surviving in an environment, language processing, knowledge and many more. We generally think that machines are not able to think, but the positive approach of the thinking human brain paved the way for self-thinking technology in machines. These ideas were proposed to conserve human time and life in a fast and curious world. This paper explains in depth the technologies used to implement this kind of upcoming technology, such as DARPA, CYC, speech recognition systems and human-machine interaction systems, and also surveys the upcoming and live projects and implementations of the artificial intelligence (AI) concept.

Presented by:
SARATHKUMAR.R,
KGiSL INSTITUTE OF TECHNOLOGY,
sarshrey@gmail.com
VISHNUVIKASH.J,
KGiSL INSTITUTE OF TECHNOLOGY,
vichu13b@gmail.com
Artificial Intelligence
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine. This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of optimism, but has also suffered setbacks and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other. Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done, and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence (or "strong AI") is still among the field's long-term goals.
History of Artificial Intelligence
In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI. The next few years, when funding for projects was hard to find, would later be called an "AI winter".
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth-generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers, greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
Medical Diagnosis through Artificial Intelligence
If you trust your doctor implicitly, it's probably because you respect their degree, their years of toil and education, and the number of years of experience they have had on the job. You are also more likely to feel comfortable with someone who has treated you before and done a good job of it. So if you were asked to relate to a machine instead, one that was extremely intelligent and capable of making accurate diagnoses, would you accept? Would you be comfortable letting machines equipped with artificial intelligence diagnose your illness and suggest suitable treatment?

I'm sure most of us would cringe at the thought, but it's already happening: artificial intelligence is making inroads into the field of medical diagnosis, not as a stand-alone tool that seeks to replace doctors altogether, but as a supplementary aid that assists physicians in coming to accurate conclusions in diagnosing some diseases and illnesses.

The advantages that machines with artificial intelligence, or more specifically, artificial neural networks (ANNs), bring to this field are many:
- They bring down the costs of medical diagnosis and treatment.
- They can learn from information and data that is made available on a continuous basis, and so take logical decisions without making errors.
- When doctors are tired and overworked, they tend to make mistakes that affect the lives and health of their patients. Machines are not limited or hampered by physical constraints and can work for long hours without giving in to emotions or fatigue.
- They help minimize invasive procedures. A case in point is the ANN program used last year by the Mayo Clinic to help doctors accurately diagnose patients with the heart infection endocarditis without the need for an invasive procedure, thus reducing overall healthcare costs and costs to the patient as well.
- The highly structured reasoning abilities of ANNs allow doctors to make "educated" decisions based on their intuitions. With ANNs, intuition is backed by solid knowledge, a combination that greatly reduces the risk of medical errors.
- They provide doctors with all the facts needed to make accurate decisions, facts that are often ignored or forgotten amid the myriad of things going on in the minds of physicians because of their professional and personal lives.
Of course, there are ethical aspects to letting machines without the ability to feel decide on suitable forms of treatment. But when they are used in tandem with human intelligence, conscience and compassion, ANNs make the best supplementary tools for medical diagnosis. And it is precisely because they are excellent supplements rather than stand-alone tools that there is no fear of machines putting doctors out of business anytime in the near future.
Approaches
Stuart Russell and Peter Norvig (1995) have identified the following four approaches to the goals of AI: (1) computer systems that act like humans, (2) programs that simulate the human mind, (3) knowledge representation and mechanistic reasoning, and (4) intelligent or rational agent design. The first two approaches focus on studying humans and how they solve problems, while the latter two focus on studying real-world problems and developing rational solutions regardless of how a human would solve the same problems.
Programming a computer to act like a human is a difficult task and requires that the computer system be able to understand and process commands in natural language, store knowledge, retrieve and process that knowledge in order to derive conclusions and make decisions, learn to adapt to new situations, perceive objects through computer vision, and have robotic capabilities to move and manipulate objects. Although this approach was inspired by the Turing Test, most programs have been developed with the goal of enabling computers to interact with humans in a natural way rather than passing the Turing Test.
Some researchers focus instead on developing programs that simulate the way in which the human mind works on problem-solving tasks. The first attempts to imitate human thinking were the Logic Theorist and the General Problem Solver programs developed by Allen Newell and Herbert Simon. Their main interest was in simulating human thinking rather than solving problems correctly. Cognitive science is the interdisciplinary field that studies the human mind and intelligence. The basic premise of cognitive science is that the mind uses representations that are similar to computer data structures, and computational procedures that are similar to the computer algorithms that operate on those structures.
Other researchers focus on developing programs that use logical notation to represent a problem and use formal reasoning to solve it. This is called the "logicist" approach to developing intelligent systems. Such programs require huge computational resources to create vast knowledge bases and to perform complex reasoning algorithms. Researchers continue to debate whether this strategy will lead to computer problem solving at the level of human intelligence.
Still other researchers focus on the development of "intelligent agents" within computer systems. Russell and Norvig (1995, p. 31) define these agents as "anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors." The goal for computer scientists working in this area is to create agents that incorporate information about the users and the use of their systems into the agents' operations.
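A minimal sketch of this percept-to-action loop in Python. The thermostat domain, class name, temperatures, and action strings are illustrative assumptions, not from the source:

```python
# A trivial reflex agent in the Russell-and-Norvig sense: it perceives its
# environment through a sensor (the sensed temperature) and acts on that
# environment through an effector (a heating command).
class ThermostatAgent:
    def __init__(self, target_temp):
        self.target_temp = target_temp

    def perceive_and_act(self, sensed_temp):
        # Map the percept to an action chosen to move toward the target.
        if sensed_temp < self.target_temp:
            return "heat_on"
        return "heat_off"

agent = ThermostatAgent(target_temp=20)
print(agent.perceive_and_act(15))  # heat_on
print(agent.perceive_and_act(22))  # heat_off
```

Richer agents differ mainly in how much state and reasoning sits between the percept and the action, but the perceive-then-act shape is the same.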
Fundamental System Issues
A robust AI system must be able to store knowledge, apply that knowledge to the solution of problems, and acquire new knowledge through experience. Among the challenges that face researchers in building AI systems, there are three that are fundamental: knowledge representation, reasoning and searching, and learning.
Knowledge Representation
What AI researchers call "knowledge" appears as data at the level of programming. Data becomes knowledge when a computer program represents and uses the meaning of some data. Many knowledge-based programs are written in the LISP programming language, which is designed to manipulate data as symbols.
Knowledge may be declarative or procedural. Declarative knowledge is represented as a static collection of facts with a set of procedures for manipulating the facts. Procedural knowledge is described by executable code that performs some action; it refers to "how to" do something. Usually, both kinds of knowledge representation are needed to capture and represent knowledge in a particular domain.
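The distinction can be sketched in a few lines of Python; the family facts and the derivation procedure are invented for illustration:

```python
# Declarative knowledge: a static collection of facts.
# Each pair reads "X is a parent of Y".
parent = {("alice", "bob"), ("bob", "carol")}

# Procedural knowledge: executable code that acts on the facts,
# here deriving grandparents by chaining two parent facts.
def grandparent_of(grandchild):
    return {gp for (gp, p) in parent
               for (p2, c) in parent
               if p == p2 and c == grandchild}

print(grandparent_of("carol"))  # {'alice'}
```

The facts say nothing about grandparents on their own; the procedure encodes the "how to" that turns them into an answer.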
First-order predicate calculus (FOPC) is the best-understood scheme for knowledge representation and reasoning. In FOPC, knowledge about the world is represented as objects and relations between objects. Objects are real-world things that have individual identities and properties, which are used to distinguish them from other objects. In a first-order predicate language, knowledge about the world is expressed in terms of sentences that are subject to the language's syntax and semantics.
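A toy Python rendering of this object-and-relation view. The domain, predicate names, and the sample sentence are assumptions made for illustration:

```python
# Objects with individual identities, and predicates (relations) over them.
objects = {"socrates", "athens"}
human = {"socrates"}                    # unary predicate Human(x)
located_in = {("socrates", "athens")}   # binary relation LocatedIn(x, y)

# The FOPC sentence "forall x: Human(x) -> Mortal(x)",
# evaluated over the finite domain above.
mortal = {x for x in objects if x in human}

print("socrates" in mortal)  # True
print("athens" in mortal)    # False
```

Real FOPC systems manipulate such sentences symbolically rather than enumerating a finite domain, but the objects-and-relations structure is the same.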
Reasoning and Searching
Problem solving can be viewed as searching. One common way to deal with searching is to develop a production-rule system. Such systems use rules that tell the computer how to operate on data, and control mechanisms that tell the computer how to follow the rules. For example, a very simple production-rule system has two rules: "if A then B" and "if B then C". Given the fact (data) A, an algorithm can chain forward to B and then to C. If C is the solution, the algorithm halts.
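The two-rule example above can be run through a minimal forward-chaining loop, sketched here in Python:

```python
# The two rules from the text, stored as (condition, conclusion) pairs.
rules = [("A", "B"), ("B", "C")]
facts = {"A"}   # the given datum
goal = "C"

# Forward chaining: keep firing rules until the goal appears
# or no rule adds anything new.
changed = True
while changed and goal not in facts:
    changed = False
    for condition, conclusion in rules:
        if condition in facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule
            changed = True

print(goal in facts)  # True: A chains to B, then to C
```

The `changed` flag is the control mechanism the text mentions: it stops the loop once no rule can fire, so the search halts even when the goal is unreachable.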
Matching techniques are frequently an important part of a problem-solving strategy. In the above example, the rules are activated only if A and B exist in the data. The match between the A and B in the data and the A and B in the rule may not have to be exact, and various deductive and inductive methods may be used to try to ascertain whether or not an adequate match exists.
Generate-and-test is another approach to searching for a solution. The user's problem is represented as a set of states, including a start state and a goal state. The problem solver generates a state and then tests whether it is the goal state. Based on the results of the test, another state is generated and then tested. In practice, heuristics, or problem-specific rules of thumb, must be found to expedite and reduce the cost of the search process.
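A minimal generate-and-test sketch in Python, with integer states and a distance-reducing heuristic standing in for the problem-specific rule of thumb; the state space is an invented assumption:

```python
def generate_and_test(start, goal):
    """Repeatedly generate a successor state and test it against the goal."""
    state = start
    steps = 0
    while state != goal:                       # test
        # Generate: the heuristic proposes the neighbor closer to the goal,
        # rather than enumerating states blindly.
        state = state + 1 if state < goal else state - 1
        steps += 1
    return steps

print(generate_and_test(0, 12))  # 12
print(generate_and_test(5, 2))   # 3
```

Without the heuristic the generator would have to enumerate candidate states in some arbitrary order, which is exactly the cost the text says rules of thumb are meant to reduce.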
Learning
The advent of highly parallel computers in the late 1980s enabled machine learning through neural networks and connectionist systems, which simulate the structure and operation of the brain. Parallel computers can operate together on a task, with each computer doing only part of it. Such systems use a network of interconnected processing elements called "units". Each unit corresponds to a neuron in the human brain and can be in an "on" or "off" state. In such a network, the input to one unit is the output of another unit. Such networks of units can be programmed to represent short-term and long-term working memory and also to represent and perform logical operations (e.g., comparisons between numbers and between words).
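One such on/off unit can be sketched as a threshold function in Python. The weights and threshold below, which make the unit compute logical AND, are an illustrative assumption:

```python
def unit(inputs, weights, threshold):
    """A single on/off unit: fire (1) iff the weighted input sum
    reaches the threshold, otherwise stay off (0)."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and threshold 2, the unit fires only when
# both inputs are on, i.e. it performs logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", unit([a, b], weights=[1, 1], threshold=2))
```

Wiring the output of one such unit into the inputs of another gives the networks the text describes; learning then amounts to adjusting the weights rather than rewriting the code.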
A simple model of a learning system consists of four components: the physical environment where the learning system operates, the learning element, the knowledge base, and the performance element. The environment supplies some information to the learning element, the learning element uses this information to make improvements in an explicit knowledge base, and the performance element uses the knowledge base to perform its task (e.g., play chess, prove a theorem).
The learning element is a mechanism that attempts to discover correct generalizations from raw data or to determine specific facts using general rules. It processes information using induction and deduction. In inductive information processing, the system determines general rules and patterns from repeated exposure to raw data or experiences. In deductive information processing, the system determines specific facts from general rules (e.g., theorem proving using axioms and other proven theorems). The knowledge base is a set of facts about the world, and these facts are expressed and stored in a computer system using a special knowledge representation language.
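The learning element's two modes can be sketched with toy data in Python; the observations and the parity rule are invented for illustration:

```python
# Induction: determine a general rule from repeated exposure to raw data.
observations = [(2, "even"), (4, "even"), (6, "even")]
rule_holds = all(label == "even" for n, label in observations if n % 2 == 0)
induced_rule = "numbers divisible by 2 are even" if rule_holds else None

# Deduction: determine a specific fact from the general rule,
# applied to data the system has never observed.
def deduce(n):
    return "even" if n % 2 == 0 else "unknown"

print(induced_rule)
print(deduce(10))  # even
```

Induction moves from instances to a rule and can be wrong on unseen data; deduction moves from the rule back to instances and is only as good as the rule it was given.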
Applications
There are two types of AI applications: stand-alone AI programs and programs that are embedded in larger systems, where they add capabilities for knowledge representation, reasoning, and learning. Some examples of AI applications include robotics, computer vision, natural-language processing, and expert systems.
Robotics
Robotics is the intelligent connection of perception by the computer to its actions. Programs written for robots perform functions such as trajectory calculation, interpretation of sensor data, execution of adaptive control, and access to databases of geometric models. Robotics is a challenging AI application because the software has to deal with real objects in real time. An example of a robot guided by humans is the Sojourner surface rover that explored the area of the Red Planet where the Mars Pathfinder landed in 1997; it was guided in real time by NASA controllers. Larry Long and Nancy Long (2000) suggest that other robots can act autonomously, reacting to changes in their environment without human intervention. Military cruise missiles are an example of autonomous robots that have intelligent navigational capabilities.
Computer Vision
The goal of a computer vision system is to interpret visual data so that meaningful action can be based on that interpretation. The problem, as John McCarthy (2000) points out, is that the real world has three dimensions, while the input to the cameras on which computer action is based represents only two dimensions. The three-dimensional characteristics of the image must be determined from various two-dimensional manifestations. To detect motion, a chronological sequence of images is studied, and the image is interpreted in terms of high-level semantic and pragmatic units. More work is needed in order to be able to represent three-dimensional data (easily perceived by the human eye) to the computer. Advances in computer vision technology will have a great effect on creating mobile robots. While most robots are stationary, some mobile robots with primitive vision capability can detect objects on their path but cannot recognize them.
Natural-Language Processing
Language understanding is a complex problem because it requires programming to extract meaning from sequences of words and sentences. At the lexical level, the program uses words, prefixes, suffixes, and other morphological forms and inflections. At the syntactic level, it uses a grammar to parse a sentence. Semantic interpretation (i.e., deriving meaning from a group of words) depends on domain knowledge to assess what an utterance means. For example, "Let's meet by the bank to get a few bucks" means one thing to bank robbers and another to weekend hunters. Finally, to interpret the pragmatic significance of a conversation, the computer needs a detailed understanding of the goals of the participants in the conversation and of the context of the conversation.
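The lexical level described above can be sketched in Python; the suffix list and stripping rule are a toy assumption, far simpler than real morphological analysis:

```python
# A few common English inflectional suffixes, checked longest-first.
SUFFIXES = ("ing", "ed", "s")

def lexical_analyze(word):
    """Return (stem, suffix) for the first matching suffix,
    or (word, "") when no known suffix applies."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[:-len(suffix)], suffix
    return word, ""

print(lexical_analyze("walking"))  # ('walk', 'ing')
print(lexical_analyze("banks"))    # ('bank', 's')
```

The later levels the text lists build on exactly this kind of output: the syntactic level consumes the stems and inflections, and the semantic level needs domain knowledge that no suffix table can supply.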
Expert Systems
Expert systems consist of a knowledge base and mechanisms/programs that infer how to act using that knowledge. Knowledge engineers and domain experts often create the knowledge base. One of the first expert systems, MYCIN, was developed in the mid-1970s. MYCIN employed a few hundred if-then rules about meningitis and bacteremia in order to deduce the proper treatment for a patient who showed signs of either of those diseases. Although MYCIN did better than students or practicing doctors, it did not contain as much knowledge as physicians routinely need to diagnose the disease.
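A toy rule-based loop in the spirit of such systems can be sketched in Python; the rules below are invented for illustration and are not MYCIN's actual knowledge base, nor medically meaningful:

```python
# Knowledge base: if-then rules mapping sets of findings to conclusions.
# These rules are hypothetical placeholders.
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"fever", "low_bp"}, "suspect_bacteremia"),
]

def diagnose(findings):
    """Inference mechanism: fire every rule whose conditions
    are all present in the observed findings."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= findings]

print(diagnose({"fever", "stiff_neck"}))  # ['suspect_meningitis']
```

The separation visible here, a data-like rule table created by domain experts and a generic inference procedure that knows nothing about medicine, is the defining architecture of expert systems.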
Although Alan Turing's prediction that computers would be able to pass the Turing Test by the year 2000 was not realized, much progress has been made, and novel AI applications have been developed, such as industrial robots, medical diagnostic systems, speech recognition in telephone systems, and chess playing (where IBM's Deep Blue supercomputer defeated world champion Garry Kasparov).
Conclusion
The success of any computer system depends on its being integrated into the workflow of those who are to use it and on its meeting user needs. A major future direction for AI concerns the integration of AI with other systems (e.g., database management, real-time control, or user interface management) in order to make those systems more usable and adaptive to changes in user behavior and in the environment where they operate.
REFERENCES:
science.jrank.org
aima.cs.berkeley.edu
www.buzzle.com
www.ncbi.nlm.nih.gov
www.referenceforbusiness.com
en.wikipedia.org
www.aaai.org
www.ericdigests.org
BOOKS REFERRED:
1. Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.
2. Artificial Intelligence by Patrick Henry Winston.