
Introduction to Artificial Intelligence

(CoSc4142)

DILLA UNIVERSITY
DILLA INSTITUTE OF TECHNOLOGY
SCHOOL OF COMPUTING

CHAPTER TWO
INTELLIGENT AGENTS
Topics we will cover

Intelligent Agents
  Introduction
  Agents and Environments
  Acting of Intelligent Agents (Rationality)
  Structure of Intelligent Agents
  PEAS Description & Environment Properties

Agent Types
  Simple reflex agent
  Model-based reflex agent
  Goal-based agent
  Utility-based agent
  Learning agent

Important Concepts and Terms
Agents

An agent is anything that can be viewed as:
  Perceiving its environment through sensors, and
  Acting upon that environment through actuators.

Human agent:
  Eyes, ears, and other organs for sensors;
  Hands, legs, mouth, and other body parts for actuators.

Robotic agent:
  Cameras and infrared range finders for sensors;
  Various motors for actuators.

Agents and Environments

Agents interact with environments through sensors and actuators.

The agent function maps from percept histories to actions:

  f : P* → A

A percept is a piece of information perceived by the agent.
The agent program runs on the physical architecture to produce f.

  Agent = Architecture + Program
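
To make the mapping f : P* → A concrete, here is a minimal Python sketch; the percepts and actions used are illustrative placeholders, not part of the chapter.

# Sketch: an agent function mapping a percept history to an action.
def agent_function(percept_history):
    # Maps the full percept history P* to an action in A.
    last_percept = percept_history[-1]   # many agents use only the latest percept
    if last_percept == "obstacle":
        return "turn"
    return "forward"

# The agent program: the loop that runs on the architecture and feeds
# percepts to the function (Agent = Architecture + Program).
history = []
for percept in ["clear", "clear", "obstacle"]:   # stand-in for sensor input
    history.append(percept)
    print(agent_function(history))               # forward, forward, turn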
Vacuum-Cleaner World

Figure - A vacuum-cleaner world with just two locations, A and B.

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck
A vacuum-cleaner agent

  Percept      Action
  [A, Clean]   Right
  [A, Dirty]   Suck
  [B, Clean]   Left
  [B, Dirty]   Suck

Note: this agent only uses the last percept of the percept history,
so it cannot learn from experience.
Rational agents (1)

A rational agent should strive to "do the right thing", based on what
it can perceive and the actions it can perform.
The right action is the one that will cause the agent to be most successful.

Performance measure: an objective criterion for success of an agent's behavior.

E.g., the performance measure of a vacuum-cleaner agent could be:
  Amount of dirt cleaned up,
  Amount of time taken,
  Amount of electricity consumed,
  Amount of noise generated, etc.
(A small scoring sketch follows this list.)
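
As an illustration, here is a hedged sketch of one possible performance measure; the scoring rule (+1 for each clean square at each time step) is an assumption for this example, not the chapter's definition.

# Sketch of a performance measure for the two-square vacuum world.
def performance(world_trace):
    # world_trace: list of {square: "Clean" or "Dirty"} snapshots, one per step.
    return sum(
        1
        for snapshot in world_trace
        for status in snapshot.values()
        if status == "Clean"
    )

trace = [
    {"A": "Dirty", "B": "Dirty"},   # t=0: nothing clean yet
    {"A": "Clean", "B": "Dirty"},   # t=1: agent sucked at A
    {"A": "Clean", "B": "Clean"},   # t=2: agent sucked at B
]
print(performance(trace))           # 3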
Rational agents (2)

What is a rational agent?

Rational agent: for each possible percept sequence, a rational agent
should select an action that is expected to maximize its performance
measure, given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
Rational agents (3)

Rationality is distinct from omniscience (all-knowing with infinite knowledge).
  An omniscient agent knows the actual outcome of its actions and can
  act accordingly; but omniscience is impossible in reality.

Agents can perform actions in order to modify future percepts so as to
obtain useful information (information gathering, exploration).

An agent is autonomous if its behavior is determined by its own
experience (with the ability to learn and adapt).
Rational agents (4)

What is rational at any given time depends on four things:
  The performance measure that defines the criterion of success.
  The agent's prior knowledge of the environment.
  The actions that the agent can perform.
  The agent's percept sequence to date.

PEAS Description

To design an intelligent agent, we must first specify the setting it
will operate in. PEAS captures that setting:
  Performance measure: How good is the behaviour of agents operating
  in the environment?
  Environment: What things are considered to be part of the
  environment, and what things are excluded?
  Actuators: How can the agent perform actions in the environment?
  Sensors: How can the agent perceive the environment?
PEAS - Example 1: Taxi-Driving System

Consider the task of designing an automated taxi driver:
  Performance measure:
    Safe, fast, legal, comfortable trip, maximize profits
  Environment:
    Roads, other traffic, pedestrians (walkers), customers
  Actuators:
    Steering wheel, accelerator, brake, indicators, horn (alert or alarm)
  Sensors:
    Cameras, sonar, speedometer, GPS, engine sensors, keyboard
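
A PEAS description is just four lists, so it is easy to capture in code. A minimal sketch, using the taxi example above (the class name and field names are our own choices, not a standard API):

# Sketch: a PEAS description as a small data structure.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "indicators", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "engine sensors", "keyboard"],
)
print(taxi.sensors)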
PEAS - Example 2: Medical Diagnosis System

Agent: Medical diagnosis system
  Performance measure:
    Healthy patient, minimize costs, avoid lawsuits (charges)
  Environment:
    Patient, hospital, staff
  Actuators:
    Screen display (questions, tests, diagnoses, treatments, referrals)
  Sensors:
    Keyboard (entry of symptoms, findings, patient's answers)
PEAS - Example 3: Interactive English Tutor

Agent: Interactive English tutor
  Performance measure:
    Maximize students' scores on tests
  Environment:
    Set of students
  Actuators:
    Screen display (exercises, suggestions, corrections)
  Sensors:
    Keyboard (student's answers)
PEAS - Example 4: Part-Picking Robot

Agent: Part-picking robot
  Performance measure:
    Percentage of parts in correct bins
  Environment:
    Conveyor belt with parts, bins
  Actuators:
    Jointed arm and hand
  Sensors:
    Camera, joint angle sensors
Environment Types (1)

Fully observable (vs. partially observable):
  The agent's sensors give it access to the complete state of the
  environment at each point in time.
  If the sensors give only partial access, the environment is
  partially observable.
  If the agent has no sensors, the environment is unobservable.

Deterministic (vs. stochastic):
  The next state of the environment is completely determined by the
  current state and the action executed by the agent.
  If there are apparently random events that can make the next state
  unpredictable, the environment is stochastic.
  If the environment is deterministic except for the actions of other
  agents, it is strategic.
Environment Types (2)

Episodic (vs. sequential):
  The agent's experience is divided into atomic "episodes" (each
  episode consists of the agent perceiving and then performing a
  single action), and the choice of action in each episode depends
  only on the episode itself.
  Otherwise, the environment is sequential.

Static (vs. dynamic):
  If the environment stays unchanged whilst the agent is thinking
  about what action to take, it is a static environment.
  If it is continually changing, even whilst the agent is thinking,
  it is dynamic.
  If the environment remains unchanged but the agent's performance
  score changes, it is semi-dynamic.
Environment Types (3)

Discrete (vs. continuous):
  If the agent has a limited number of possible actions and percepts,
  it is a discrete environment.
  If the number of actions and/or percepts is effectively unlimited,
  it is a continuous environment.

Single agent (vs. multi-agent):
  If there are no other agents in the environment, we say it is a
  single-agent environment.
  If there are other agents, it is a multi-agent environment.
Environment Types (4)

Example:               Chess with a clock   Chess without a clock
1) Fully observable    Yes                  Yes
2) Deterministic       Strategic            Strategic
3) Episodic            No                   No
4) Static              Semi-dynamic         Yes
5) Discrete            Yes                  Yes
6) Single agent        No                   No

The environment type largely determines the agent design:
  The easiest type of environment is fully observable, deterministic,
  episodic, static, discrete, and single-agent.
  The real world is (of course) partially observable, stochastic,
  sequential, dynamic, continuous, and multi-agent.
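
One way to keep track of such a classification is as a plain dictionary. A sketch using the chess-with-a-clock column from the table above (the key names are our own labels):

# Sketch: environment properties recorded as a dict.
chess_with_clock = {
    "fully_observable": True,
    "deterministic": "strategic",   # deterministic except for the opponent
    "episodic": False,              # i.e. sequential
    "static": "semi-dynamic",       # board fixed whilst thinking, clock runs
    "discrete": True,
    "single_agent": False,
}

real_world = {
    "fully_observable": False,      # partially observable
    "deterministic": False,         # stochastic
    "episodic": False,              # sequential
    "static": False,                # dynamic
    "discrete": False,              # continuous
    "single_agent": False,          # multi-agent
}

print(chess_with_clock["static"])   # semi-dynamic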
Agent Functions and Programs

An agent is completely specified by the agent function, which maps
percept sequences to actions.

One agent function (or a small equivalence class of functions) is rational.

Aim: find a way to implement the rational agent function concisely.
Table-lookup agent

The program determines its action by looking up the percept sequence
in a table, e.g. for the vacuum cleaner:

  Percept      Action
  [A, Clean]   Right
  [A, Dirty]   Suck
  [B, Clean]   Left
  [B, Dirty]   Suck

Drawbacks (see the sketch after this list):
  Huge table.
  Takes a long time to build the table.
  No autonomy (needs to be told everything).
  Even with learning, it needs a long time to learn the table entries.
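
A minimal sketch of the lookup idea in Python. Note one simplification: a true table-lookup agent indexes the entire percept sequence, which is what makes the table huge; here we index only the last percept so the table above fits directly.

# Sketch: table-lookup agent for the two-square vacuum world.
TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_lookup_agent(percept):
    location, status = percept
    return TABLE[(location, status)]

print(table_lookup_agent(("A", "Dirty")))   # Suck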
Agent program for a vacuum-cleaner agent

function VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
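
The same program as runnable Python, a direct and minimal translation of the pseudocode above:

def vacuum_agent(percept):
    # Reflex vacuum agent: decides from the current percept only.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:                 # location == "B"
        return "Left"

print(vacuum_agent(("A", "Dirty")))   # Suck
print(vacuum_agent(("B", "Clean")))   # Left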
Agent Types

Four basic types, in order of increasing generality:
  Simple reflex agents
  Model-based reflex agents
  Goal-based agents
  Utility-based agents
Simple reflex agents

The simplest kind of agent is the simple reflex agent.
These use a set of condition-action rules that specify which action
to choose for each given percept.
These agents use only the current percept, so they have no memory of
past percepts.
In particular, they cannot base decisions on things that they cannot
directly perceive, i.e. they have no model of the state of the world.

Figure - Schematic diagram of a simple reflex agent.
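
A sketch of the condition-action rule structure, recasting the vacuum agent's behaviour as an explicit rule list (the dict-shaped percept is our own representation choice):

# Sketch: simple reflex agent driven by condition-action rules.
RULES = [
    (lambda p: p["status"] == "Dirty",  "Suck"),
    (lambda p: p["location"] == "A",    "Right"),
    (lambda p: p["location"] == "B",    "Left"),
]

def simple_reflex_agent(percept):
    # Pick the action of the first rule whose condition matches.
    for condition, action in RULES:
        if condition(percept):
            return action

print(simple_reflex_agent({"location": "A", "status": "Dirty"}))   # Suck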
Model-based reflex agents

A more complex type of agent is the model-based agent.
Model-based agents maintain an internal model of the world, which is
updated by percepts as they are received.
In addition, they have built-in knowledge (i.e. prior knowledge) of
how the world tends to evolve.
They also contain knowledge about how their actions affect the state
of the world.

Figure - A model-based reflex agent.
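
A sketch of the idea: internal state is updated from each percept before a rule fires. The "remember which squares are clean" model and the NoOp action are illustrative assumptions for the vacuum domain.

# Sketch: model-based reflex agent with an internal world model.
state = {"A": "Unknown", "B": "Unknown"}   # the agent's model of the world

def update_state(state, percept):
    location, status = percept
    state[location] = status               # fold the new percept into the model
    return state

def model_based_agent(percept):
    update_state(state, percept)
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Use the model: if the other square is known to be clean, do nothing.
    other = "B" if location == "A" else "A"
    if state[other] == "Clean":
        return "NoOp"                      # assumed "do nothing" action
    return "Right" if location == "A" else "Left"

print(model_based_agent(("A", "Clean")))   # Right (B still unknown)
print(model_based_agent(("B", "Clean")))   # NoOp  (A known clean)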
Goal-based agents

Goal-based agents are the same as model-based agents, except that they
contain an explicit statement of the goals of the agent.
These goals are used to choose the best action at any given time.
Goal-based agents can therefore choose an action which does not
achieve anything in the short term, but in the long term may lead to
a goal being achieved.

Figure - A model-based, goal-based agent.
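
A sketch of goal-directed action choice: each candidate action is simulated against the model and tested for progress toward an explicit goal. The tiny vacuum model and the "count mismatches" progress test are illustrative assumptions.

# Sketch: goal-based agent choosing actions by simulated progress.
GOAL = {"A": "Clean", "B": "Clean"}        # explicit goal: both squares clean

def simulate(state, location, action):
    # Predict the next (state, location) after an action.
    state = dict(state)
    if action == "Suck":
        state[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return state, location

def goal_based_agent(state, location):
    # Return the first action that moves the model closer to the goal.
    for action in ["Suck", "Right", "Left"]:
        next_state, _ = simulate(state, location, action)
        if sum(next_state[s] != GOAL[s] for s in GOAL) < \
           sum(state[s] != GOAL[s] for s in GOAL):
            return action
    return "Right" if location == "A" else "Left"   # otherwise, explore

print(goal_based_agent({"A": "Dirty", "B": "Dirty"}, "A"))   # Suck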
Utility-based agents

Goals can be useful, but are sometimes too simplistic.
Utility-based agents deal with this by assigning a utility to each
state of the world; this utility defines how happy the agent will be
in such a state.
Explicitly stating the utility function also makes it easier to define
the desired behaviour of utility-based agents.

Figure - A model-based, utility-based agent.
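
A sketch of the difference from a goal test: each predicted state receives a numeric utility and the highest-scoring action wins. The utility weights below are illustrative assumptions.

# Sketch: utility-based agent ranking actions by predicted utility.
def utility(state, steps_taken):
    clean = sum(1 for status in state.values() if status == "Clean")
    return 10 * clean - steps_taken        # prefer clean squares, reached quickly

def simulate(state, location, action):
    # Predict the next (state, location) after an action.
    state = dict(state)
    if action == "Suck":
        state[location] = "Clean"
    elif action == "Right":
        location = "B"
    elif action == "Left":
        location = "A"
    return state, location

def utility_based_agent(state, location, steps_taken):
    candidates = []
    for action in ["Suck", "Right", "Left"]:
        next_state, _ = simulate(state, location, action)
        candidates.append((utility(next_state, steps_taken + 1), action))
    return max(candidates)[1]              # action with the highest utility

print(utility_based_agent({"A": "Dirty", "B": "Clean"}, "A", 0))   # Suck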
Learning agents

A learning agent can be divided into four conceptual components:
  The Performance Element can be replaced with any of the four agent
  types described above.
  The Learning Element is responsible for suggesting improvements to
  any part of the performance element.
  The input to the learning element comes from the Critic.
  The Problem Generator is responsible for suggesting actions that
  will result in new knowledge about the world being acquired.

Figure - A general model of learning agents.
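
A skeleton wiring the four components together; all class and method names are illustrative assumptions, and the learning element is left as a stub.

# Skeleton of the four learning-agent components.
class LearningAgent:
    def __init__(self, performance_element):
        self.performance_element = performance_element   # any agent type above

    def critic(self, percept):
        # Judge how well the agent is doing against a performance standard.
        return 1 if percept.get("status") == "Clean" else -1

    def learning_element(self, feedback):
        # Use the critic's feedback to improve the performance element.
        pass                               # stub: a real learner would adapt here

    def problem_generator(self):
        # Suggest exploratory actions that yield new knowledge.
        return "try an unvisited square"

    def step(self, percept):
        feedback = self.critic(percept)
        self.learning_element(feedback)
        return self.performance_element(percept)

agent = LearningAgent(lambda p: "Suck" if p["status"] == "Dirty" else "Right")
print(agent.step({"location": "A", "status": "Dirty"}))   # Suck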
Summary

How to define the AI problem?
  PEAS description of environments.

Categories of AI environments:
  Fully observable, partially observable, etc.

Basic agent types:
  Simple reflex, model-based, goal-based, utility-based, and learning agents.

Any Questions?
