
MSC COMPUTER SCIENCE

AGENT ARCHITECTURES AND AGENT MODELLING

Motivation: How are agents internally modeled and constructed?

Session Topics
1. Defining Agent architecture
2. Abstract Agent Architecture
3. Perception
4. Agents with states
5. Agent Loop Control
6. Utility Functions of States
7. Reasoning Agents
8. Problems with Symbolic Agents
9. Practical Reasoning Agents
10. Implementation of Practical Reasoning Agents
11. Reactive and Hybrid Agents
12. Agent Modeling

AGENT ARCHITECTURES
Defining agent architectures
• Maes defines an agent architecture as:
‘A particular methodology for building agents. It specifies
how . . . the agent can be decomposed into the
construction of a set of component modules and how
these modules should be made to interact. An
architecture encompasses techniques and algorithms
that support this methodology.’
• Kaelbling considers an agent architecture to be:
‘A specific collection of software (or hardware) modules,
typically designated by boxes with arrows indicating the
data and control flow among the modules. A more
abstract view of an architecture is as a general
methodology for designing particular modular
decompositions for particular tasks.’
Agent architectures
There are three types of agent architecture:
1. Symbolic/logical
2. Reactive
3. Hybrid

Agents that are built should exhibit the features mentioned earlier: autonomy, reactiveness, pro-activeness, and social ability.

Abstract Architecture for Agents


Let the environment be a set E of discrete, instantaneous states:
E = {e, e′, …}, e.g. {[TV-on, Radio-on], [TV-on, Radio-off], [TV-off, Radio-on], [TV-off, Radio-off]}
Let Ac be the set of actions of an agent that transform the environment state:
Ac = {a, a′, …}, e.g. {switch TV on, switch TV off, switch Radio on, switch Radio off}

A run, r, of an agent in an environment is a sequence of interleaved environment states and actions:

r : e0 --a0--> e1 --a1--> e2 --a2--> e3 --a3--> ⋯ --a(u−1)--> eu

Let:
R be the set of all such possible finite sequences (over E and Ac);
R^Ac be the subset of these that end with an action; and
R^E be the subset of these that end with an environment state.
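To make these definitions concrete, here is a minimal Python sketch; the list-based run encoding and the TV/Radio state representation are illustrative assumptions, not part of the formal model:

    from itertools import product

    # Environment states E: each state records whether the TV and radio are on.
    E = [frozenset(pair) for pair in product(("TV-on", "TV-off"), ("Radio-on", "Radio-off"))]
    Ac = ["switch TV on", "switch TV off", "switch Radio on", "switch Radio off"]

    # A run interleaves states and actions: e0, a0, e1, ...
    run = [frozenset({"TV-off", "Radio-off"}), "switch TV on", frozenset({"TV-on", "Radio-off"})]

    def ends_with_action(r):
        # r is in R^Ac when it has even length (e0, a0, ..., a(u-1)); odd length ends with a state (R^E).
        return len(r) % 2 == 0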

State Transformer Functions


A state transformer function represents the behaviour of the environment:

τ : R^Ac → ℘(E), where ℘(E) is the power set of E; τ maps a run ending in an action to the set of possible successor states.

Environments can be history-dependent and non-deterministic.

If τ(r) = ∅, then there are no possible successor states to r. In this case, we say that the system has ended or terminated its run.

An environment Env is a triple Env = ⟨E, e0, τ⟩, where E is a set of environment states, e0 ∈ E is the initial state, and τ is a state transformer function.
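Continuing the sketch above, the environment triple can be encoded directly; the deterministic τ shown here is an illustrative assumption, since in general τ returns a set of possible successor states:

    class Env:
        # Environment as the triple <E, e0, tau>.
        def __init__(self, states, e0, tau):
            self.states = states  # E
            self.e0 = e0          # initial state
            self.tau = tau        # state transformer function

    def tau(run):
        # Map a run ending in an action to the set of possible successor states.
        state, action = set(run[-2]), run[-1]
        if action == "switch TV on":
            state.discard("TV-off"); state.add("TV-on")
        elif action == "switch TV off":
            state.discard("TV-on"); state.add("TV-off")
        # ... radio actions are analogous; returning set() would terminate the run
        return {frozenset(state)}

    env = Env(E, frozenset({"TV-off", "Radio-off"}), tau)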

Agents
An agent is a function which maps runs (ending in an environment state) to actions: Ag : R^E → Ac

An agent makes a decision about what action to perform based on the history of the system that it has witnessed to date.

Systems
A system is a pair containing an agent and an environment.

Any system will have associated with it a set of possible runs; we denote
the set of runs of agent Ag in environment Env by R(Ag, Env).

A sequence (e0, a0, e1, a1, e2, a2, …) represents a run of an agent Ag in environment Env = ⟨E, e0, τ⟩ if:
1. e0 is the initial state of Env;
2. a0 = Ag(e0); and
3. for u > 0, eu ∈ τ((e0, a0, …, a(u−1))) and au = Ag((e0, a0, …, eu)).

Purely Reactive Agents
• Make no reference to their history
• Base their decision making entirely on the present, without any reference to the past.

We call such agents purely reactive:

action : E → Ac

A thermostat is a purely reactive agent.

action(e) = off,  if e = temperature OK;
            on,   otherwise.
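The thermostat translates directly into code; a minimal sketch, assuming a string-valued state:

    def action(e):
        # Purely reactive agent: the decision depends only on the current state.
        return "off" if e == "temperature OK" else "on"

    assert action("temperature OK") == "off"
    assert action("too cold") == "on"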

Perception
Perception enables sensing in the system and is realized using the see
function.

(figure: the agent's SEE component observes the ENVIRONMENT; percepts flow to the ACTION component, whose actions affect the environment)

see function -- the agent's ability to observe its environment;
action function -- represents the agent's decision-making process.
The output of the see function is a percept: see : E → Per maps environment states to percepts;
action is now a function action : Per* → Ac which maps sequences of percepts to actions.
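A sketch of the two functions, assuming (illustratively) that raw states are temperature readings and percepts are the two strings used by the thermostat above:

    def see(e):
        # see : E -> Per, discretizing a raw temperature reading into a percept.
        return "temperature OK" if e >= 20.0 else "too cold"

    def action(percepts):
        # action : Per* -> Ac, deciding on the sequence of percepts seen so far.
        return "off" if percepts[-1] == "temperature OK" else "on"

    print(action([see(18.5), see(21.0)]))  # -> off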
Agents with State
These agents have some internal data structure, which is typically used to record
information about the environmental state and history.

Let I be the set of all internal states of the agent.


The perception function see for a state-based agent is unchanged: see : E → Per

The action-selection function is now a mapping action : I → Ac from internal states to actions.

A function next is introduced, which maps an internal state and percept to an internal state: next : I × Per → I
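A minimal sketch of a state-based agent; the internal state used here (a count of consecutive cold readings) is an illustrative assumption:

    def next_state(i, percept):
        # next : I x Per -> I; track consecutive "too cold" percepts.
        return i + 1 if percept == "too cold" else 0

    def action(i):
        # action : I -> Ac; heat only after two consecutive cold readings.
        return "on" if i >= 2 else "off"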

Agent control loop
1. The agent starts in some initial internal state i0.
2. It observes its environment state e, and generates a percept see(e).
3. The internal state of the agent is then updated via the next function, becoming next(i0, see(e)).
4. The action selected by the agent is action(next(i0, see(e))). This action is then performed.
5. Goto (2).
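The loop in Python, reusing the see, next_state, and action functions sketched above; the env object and its current_state/apply methods are hypothetical:

    def control_loop(env, see, next_state, action, i0=0, steps=10):
        # Run the observe -> update -> act cycle for a fixed number of steps.
        i = i0                          # 1. start in the initial internal state
        for _ in range(steps):
            e = env.current_state()     # 2. observe the environment
            i = next_state(i, see(e))   # 3. update internal state via next
            env.apply(action(i))        # 4. select and perform an action
                                        # 5. loop back to observation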

Tasks for Agents


We build agents in order to carry out tasks for us. The task must be specified by us, but we want to tell agents what to do without telling them how to do it.

Utility Functions over States
One possibility: associate utilities with individual states — the task of the agent is
then to bring about states that maximize utility.

A utility is a function u : E → ℝ which associates a real number with every environment state.

Utility in the Tileworld


A simulated two-dimensional grid environment containing agents, tiles, obstacles, and holes. An agent can move in four directions (up, down, left, or right), and if it is located next to a tile, it can push it. Holes have to be filled up with tiles by the agent. An agent scores points by filling holes with tiles, the aim being to fill as many holes as possible.

The Tileworld changes over time, with the random appearance and disappearance of holes.


The utility of a run r is defined as:

u(r) = (number of holes filled in r) / (number of holes that appeared in r)
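Computing this utility over a logged run is straightforward; the event-list encoding of a run below is an illustrative assumption:

    def utility(run_events):
        # u(r) = holes filled in r / holes that appeared in r.
        appeared = run_events.count("hole-appeared")
        filled = run_events.count("hole-filled")
        return filled / appeared if appeared else 0.0

    print(utility(["hole-appeared", "hole-filled", "hole-appeared"]))  # -> 0.5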
AGENT ARCHITECTURES (continued)

REASONING AGENTS (deductive)


1956–1985: agents designed within AI were symbolic reasoning agents, which use explicit logical reasoning in order to decide what to do.
Issues: such agents proved difficult to build, prompting the emergence of the reactive agents movement from 1985 to the present.

From 1990 to the present, a number of alternatives have been proposed, especially hybrid architectures, which attempt to combine the best of the reasoning and reactive approaches.

Symbolic Reasoning Agents


Symbolic reasoning agents are built using the knowledge-based systems approach.

Deliberative agent architecture


One that contains an explicitly represented, symbolic model of the world, and makes decisions (for example, about what actions to perform) via symbolic reasoning.

Issues with building deliberative agents
The transduction problem

- translating the real world into an accurate, adequate symbolic description, in time for that description to be useful (cf. vision, speech understanding, learning).

The representation/reasoning problem


- how to symbolically represent information about complex real-world entities and processes, and how to get agents to reason with this information in time for the results to be useful (cf. knowledge representation, automated reasoning, automatic planning).

- None of the problems above is anywhere near solved.

Symbol manipulation algorithms are in general complex: many search-based symbol manipulation algorithms of interest are highly intractable.
- Alternative techniques emerged, as seen later.

Deductive Reasoning in Agents
• Theorem proving is used to model the agent's decision making.
• A logic encodes a theory stating the best action to perform in any given situation.
Let:
ρ be a theory (typically a set of rules);
Δ be a logical database that describes the current state of the world;
Ac be the set of actions the agent can perform;
Δ ⊢ρ φ means that φ can be proved from Δ using ρ.

/* try to find an action explicitly prescribed */
for each a ∈ Ac do
    if Δ ⊢ρ Do(a) then
        return a
    end-if
end-for
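A Python sketch of this action-selection loop, with the prover Δ ⊢ρ Do(a) stubbed out as simple rule application over a set of facts; the vacuum-world facts and rules are illustrative assumptions, not a real theorem prover:

    # Database of facts describing the current world state.
    database = {"In(0,0)", "Dirt(0,0)"}

    # Rules: each maps a database to the action it prescribes, or None.
    rules = [
        lambda db: "suck" if "In(0,0)" in db and "Dirt(0,0)" in db else None,
        lambda db: "forward" if "In(0,0)" in db and "Dirt(0,0)" not in db else None,
    ]

    def select_action(db, rules):
        # Return the first action a such that Do(a) is "provable" from db.
        for rule in rules:
            a = rule(db)
            if a is not None:
                return a
        return None  # no action explicitly prescribed

    print(select_action(database, rules))  # -> suck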

General problems with symbolic architectures
• How do we convert real-world inputs to fit the ontology, e.g. video camera input to a Dirt predicate?
• Decision making assumes a static environment: calculative rationality.
• Decision making using first-order logic is undecidable!
• Even where we use propositional logic, decision making in the worst case means solving co-NP-complete problems.

Typical solutions to these problems:


1. weaken the logic;
2. use symbolic, non-logical representations;
3. shift the emphasis of reasoning from run time to design time.

PRACTICAL REASONING AGENTS
Practical reasoning
— the process of figuring out what to do, i.e. which action to take.

- Conflicting considerations are weighed for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes. (Bratman)

Components of practical reasoning:


deliberation: deciding what state of affairs we want to achieve;

means-ends reasoning: deciding how to achieve these states of


affairs.
The outputs of deliberation are intentions.

Intentions in Practical Reasoning
Intentions pose problems for agents, who need to determine ways of
achieving them.
1. If I have an intention to achieve θ, you would expect me to devote resources to deciding how to bring about θ.
2. Intentions provide a "filter" for adopting other intentions, which must not conflict. If I have an intention to achieve θ, you would not expect me to adopt an intention ϕ such that θ and ϕ are mutually exclusive.
3. Agents track the success of their intentions, and are inclined to try again if their
attempts fail. If an agent’s first attempt to achieve θ fails, then all other things being
equal, it will try an alternative plan to achieve θ.
4. Agents believe their intentions are possible. That is, they believe there is at least
some way that the intentions could be brought about.
5. Agents do not believe they will not bring about their intentions. It would not be
rational of me to adopt an intention to θ if I believed θ was not possible.
6. Under normal circumstances, agents believe they will bring about their intentions. However, it would not be rational to believe that one's intentions will always be achieved; intentions can fail. Moreover, if one believes θ is inevitable, it does not make sense to adopt it as an intention.
7. Agents need not intend all the expected side effects of their intentions. If I believe θ ⇒ ϕ and I intend θ, I do not necessarily intend ϕ also. (Intentions are not closed under implication.) E.g. intending to go to the dentist does not mean intending to suffer tooth pain.

PLANNING AGENTS (part of practical reasoning agents)
Building largely on the early work of Fikes & Nilsson, many planning algorithms
have been proposed, and the theory of planning has been well-developed.

What is Means-Ends Reasoning?


Basic idea is to give an agent:
• a representation of the goal/intention to achieve;
• a representation of the actions it can perform; and
• a representation of the environment;
and have it generate a plan to achieve the goal.

Question: How do we represent:
1. a goal to be achieved;
2. the state of the environment;
3. the actions available to the agent;
4. the plan itself?

Example
Contains a robot arm, three blocks (A, B, and C) of equal size, and a table-top.
To represent this environment, we need an ontology.
On(x,y) obj x on top of obj y
OnTable(x) obj x is on the table
Clear(x) nothing is on top of obj x
Holding(x) arm is holding x
Here is a representation of the blocks world described above:
Clear(A)
On(A,B)
OnTable(B)
OnTable(C)
Use the closed world assumption: anything not stated is assumed to be false.
A goal is represented as a set of formulae.

Here is a goal: {OnTable(A), OnTable(B), OnTable(C)}


Actions are represented using a technique that was developed in the STRIPS
planner. Each action has:
1. a name: which may have arguments;
2. a pre-condition list: list of facts which must be true for action to be executed;
3. a delete list: list of facts that are no longer true after action is performed;
4. an add list: list of facts made true by executing the action.
Each of these may contain variables.
Example 1:
The stack action occurs when the robot arm places the object x it is holding on top of object y.
Stack(x, y)
pre: Clear(y) ∧ Holding(x)
del: Clear(y) ∧ Holding(x)
add: ArmEmpty ∧ On(x, y)

Example 2:
The unstack action occurs when the robot arm picks an object x up from on top of another object y.
UnStack(x, y)
pre: On(x, y) ∧ Clear(x) ∧ ArmEmpty
del: On(x, y) ∧ ArmEmpty
add: Holding(x) ∧ Clear(y)
Stack and UnStack are inverses of one another.

Example 3:
The pickup action occurs when the arm picks up an object x from the table.
Pickup(x)
pre: Clear(x) ∧ OnTable(x) ∧ ArmEmpty
del: OnTable(x) ∧ ArmEmpty
add: Holding(x)
What is a plan?
A sequence (list) of actions, with variables replaced by constants.
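A sketch of the STRIPS representation in Python, for ground actions (variables already replaced by constants); the applicability test and state update follow the pre/del/add scheme above:

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        pre: frozenset      # facts that must hold for the action to be executed
        delete: frozenset   # facts made false by the action
        add: frozenset      # facts made true by the action

        def applicable(self, state):
            return self.pre <= state          # precondition is a subset of the state

        def apply(self, state):
            return (state - self.delete) | self.add

    stack_A_B = Action("Stack(A,B)",
                       pre=frozenset({"Clear(B)", "Holding(A)"}),
                       delete=frozenset({"Clear(B)", "Holding(A)"}),
                       add=frozenset({"ArmEmpty", "On(A,B)"}))

    state = frozenset({"Clear(B)", "Holding(A)", "OnTable(B)"})
    if stack_A_B.applicable(state):
        state = stack_A_B.apply(state)   # a plan is a sequence of such actions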

IMPLEMENTING PRACTICAL REASONING AGENTS

Agent Control Loop Version 1


1. while true
2. observe the world;
3. update internal world model;
4. deliberate about what intention to achieve next;
5. use means-ends reasoning to get a plan for the intention;
6. execute the plan
7. end while

Agent Control Loop Version 2


1. B = B0; /* initial beliefs */
2. while true do
3.     get next percept p;
4.     B = beliefRevisionF(B, p);
5.     I = deliberate(B);
6.     P = plan(B, I);
7.     execute(P);
8. end-while
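The same loop as a Python skeleton; the belief-revision, deliberation, and planning functions are passed in as stubs, and all names here are illustrative:

    def bdi_agent_v2(B0, get_percept, belief_revision, deliberate, plan, execute):
        # Skeleton of Agent Control Loop Version 2: commit to a plan and run it to completion.
        B = B0                         # initial beliefs
        while True:
            p = get_percept()          # get next percept p
            B = belief_revision(B, p)  # revise beliefs
            I = deliberate(B)          # choose an intention
            P = plan(B, I)             # means-ends reasoning
            execute(P)                 # execute the whole plan, with no reconsideration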

Agent Control Loop Version 6 (filter, intentions)
B = B0;
I = I0;
while true do
    get next percept p;
    B = beliefRevisionF(B, p);
    D = options(B, I);
    I = filter(B, D, I);
    P = plan(B, I);
    while not (empty(P) or succeeded(I, B) or impossible(I, B)) do
        a = head(P);
        execute(a);
        P = tail(P);
        get next percept p;
        B = beliefRevisionF(B, p);
        D = options(B, I);
        I = filter(B, D, I);
        if not sound(P, I, B) then
            P = plan(B, I);
        end-if
    end-while
end-while
BELIEF-DESIRE-INTENTION (BDI) THEORY & PRACTICE

Rao & Georgeff have developed BDI logics which is a non-classical logics with
modal connectives for representing beliefs, desires, and intentions.

BDI Logic

From classical logic: ∧, ∨, ¬.


Path quantifiers:
Aφ ‘on all paths, φ’
Eφ ‘on some paths, φ’

The BDI connectives:

(Bel i φ)  i believes φ
(Des i φ)  i desires φ
(Int i φ)  i intends φ

Semantics of BDI components: some BDI axioms
Let α be an arbitrary O-formula, i.e., one which contains no positive occurrences of A (the universal path quantifier).

Belief goal compatibility: (Des α) ⇒ (Bel α)

Goal-intention compatibility: (Int α) ⇒ (Des α)

Volitional commitment: (Int does(a)) ⇒ does(a)

Awareness of goals & intentions: (Des φ) ⇒ (Bel (Des φ))


(Int φ) ⇒ (Bel (Int φ))

No unconscious actions: done(a) ⇒ (Bel done(a))

No infinite deferral: (Int φ) ⇒ A◇(¬(Int φ))

An agent will eventually either act on an intention, or else drop it.

REACTIVE AND HYBRID ARCHITECTURES
Reactive Architectures
The many unsolved problems associated with symbolic AI led to the
development of reactive architectures.

Brooks (criticism of mainstream AI) — behaviour languages


Brooks has put forward three theses:
1. Intelligent behaviour can be generated without explicit representations of the
kind that symbolic AI proposes.
2. Intelligent behaviour can be generated without explicit abstract reasoning of
the kind that symbolic AI proposes.
3. Intelligence is an emergent property of certain complex systems.

He identifies two key ideas that have informed his research:


Situatedness and embodiment: ‘Real’ intelligence is situated in the world, not in
disembodied systems such as theorem provers or expert systems.
Intelligence and emergence: ‘Intelligent’ behaviour arises as a result of an
agent’s interaction with its environment. Also, intelligence is ‘in the eye of the
beholder’; it is not an innate, isolated property.
Brooks built his ideas into the subsumption architecture.
A subsumption architecture is a hierarchy of task-accomplishing behaviours, in which lower (more primitive) layers have precedence over layers further up the hierarchy; a minimal sketch follows.
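A minimal sketch of subsumption-style action selection, assuming (illustratively) a Mars-explorer-like set of behaviours: behaviours are ordered by priority, and the first whose situation condition fires suppresses all lower-priority ones:

    # Behaviours as (condition, action) pairs, highest priority first.
    behaviours = [
        (lambda percept: "obstacle" in percept, "avoid"),    # layer 0: safety
        (lambda percept: "sample" in percept, "pick up"),    # layer 1: collect
        (lambda percept: True, "wander"),                    # layer 2: default
    ]

    def subsumption_action(percept):
        # Fire the highest-priority behaviour whose situation condition holds.
        for condition, act in behaviours:
            if condition(percept):
                return act

    print(subsumption_action({"sample"}))  # -> pick up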

Situated Automata
Approach of Rosenschein and Kaelbling.

In their situated automata paradigm, an agent is specified in a rule-like


(declarative) language, and this specification is then compiled down to a digital
machine, which satisfies the declarative specification.

This digital machine can operate in a provable time bound.

Reasoning is done off line, at compile time, rather than online at run time.
The theoretical limitations of the approach are not well understood.

Compilation (with propositional specifications) is equivalent to an NP-complete


problem.

The more expressive the agent specification language, the harder it is to


compile it.

(There are some deep theoretical results which say that after a certain
expressiveness, the compilation simply can’t be done.)

HYBRID ARCHITECTURES
Many researchers have argued that neither a completely deliberative nor
completely reactive approach is suitable for building agents. They propose hybrid
systems.

Hybrid systems attempt to marry classical and alternative approaches.

Build an agent out of two subsystems:


Deliberative one, containing a symbolic world model, which develops plans and
makes decisions in the way proposed by symbolic AI; and
Reactive one, which is capable of reacting to events without complex reasoning.
Often, the reactive component is given some kind of precedence over the deliberative one.

This kind of structuring leads naturally to the idea of a layered architecture:

An agent’s control subsystems are arranged into a hierarchy, with higher layers dealing with
information at increasing levels of abstraction.
Horizontal layering: each layer is directly connected to the sensory input and the action output.
Vertical layering: sensory input and action output are each dealt with by at most one layer; a sketch of both schemes follows.
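A sketch contrasting the two control regimes in code; the layer interfaces and the mediation rule are illustrative assumptions:

    def horizontal_control(percept, layers, mediate):
        # Horizontal layering: every layer sees the percept and proposes an action;
        # a mediator function resolves conflicts between the proposals.
        proposals = [layer(percept) for layer in layers]
        return mediate(proposals)

    def vertical_control(percept, layers):
        # Vertical layering: the percept enters the bottom layer; control passes
        # upward until some layer produces an action.
        for layer in layers:          # ordered bottom layer first
            a = layer(percept)
            if a is not None:         # this layer handles it; higher layers unused
                return a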

HORIZONTAL LAYERING

(figure: horizontal layering, with each layer connected directly to the sensory input and the action output)
VERTICAL LAYERING

(figure: vertical layering, with sensory input and action output each handled by a single layer and control flowing through the layer stack)
AGENT MODELING
Modeling is a means to capture ideas, relationships, decisions, and requirements in a well-defined notation that can be applied to many different domains. Modeling not only means different things to different people, but it can also use different aspects of a tool such as UML depending on what you are trying to convey (Dan Pilone and Neil Pitman (2005). UML 2.0 in a Nutshell. O'Reilly Media, Inc.).

Modeling languages are used to specify, visualize, construct, and


document systems.

Modeling is part of the process of constructing multi-agent systems: conceptual structures are formulated, and decisions related to the overall framework are considered. The modeling process, however, depends on a number of things, including the architecture, development framework, and application area.

ITEMS TO MODEL
AGENTS
- BASIC CHARACTERISTICS: AUTONOMY, MOBILITY, INTELLIGENCE, PERSONALITY, VERACITY, BENEVOLENCE, SOCIAL ABILITY, REACTIVENESS, PROACTIVITY
- BEHAVIOUR
INTERACTIONS
- COMMUNICATION, COOPERATION, NEGOTIATION, AGREEMENTS
DECISION MAKING
- INDIVIDUAL, GROUP
AGENT SOCIAL SYSTEMS – MULTI-AGENT SYSTEMS
MODELLING RESOURCES AND TOOLS
-NONE DEDICATED TO MULTI-AGENT SYSTEMS
- VARIATIONS OF:
OBJECT ORIENTED DESIGN RESOURCES + UML
MATHEMATICAL MODELLING
SYMBOLIC AI MODELLING- LOGIC
GAME THEORY AND ECONOMICS
EMERGING SUGGESTIONS SUCH AS MULTI-AGENT SYSTEMS NETWORK INFLUENCE DIAGRAMS
VARIATIONS OF BAYESIAN REPRESENTATIONS
PSYCHOLOGY- COGNITION
SOCIOLOGY- SOCIAL PROCESSES

WEEK 2 EXERCISE
1. Select a task that can be done using agents. Select an agent that can do the task or some aspect of the task. Give a full description of how the agent should be structured and how it should operate.
2. Model your agent
3. Implement your agent
