
Design of Agent-Based Systems
Introduction to agents
Reading

- Software Agents, edited by Jeff M. Bradshaw. AAAI Press/The MIT Press.
- Agent Technology: Foundations, Applications, and Markets, edited by N. Jennings and M. Wooldridge, Springer.
- The Design of Intelligent Agents, Jörg P. Müller, Springer.
- Heterogeneous Agent Systems, V.S. Subrahmanian, P. Bonatti et al., The MIT Press.

Conferences
- ICMAS, Autonomous Agents (AA), AAAI, IJCAI.


Web resources
- www.fipa.org
- www.agentlink.org
- www.umbc.edu
- www.agentcities.org

Slides (M. Wooldridge, "An Introduction to MultiAgent Systems"):
http://www.csc.liv.ac.uk/~mjw/pubs/imas/agents.tar.gz
What is an agent?

- "An over-used term" (Pattie Maes, MIT Media Lab, 1996)
- "Agent" can be considered a theoretical concept from AI.
- Many different definitions exist in the literature ...

Definitions (1)

- An agent is an entity which is:
  - Situated in some environment.
  - Autonomous, in the sense that it can act without direct intervention from humans or other software processes, and has control over its own actions and internal state.
  - Flexible, which means:
    - Responsive: agents should perceive their environment and respond to changes that occur in it;
    - Proactive: agents should not simply act in response to their environment; they should be able to exhibit opportunistic, goal-directed behavior and take the initiative when appropriate;
    - Social: agents should be able to interact with humans or other artificial agents.

"A Roadmap of Agent Research and Development", N. Jennings, K. Sycara, M. Wooldridge (1998)

Definitions (2)

From the dictionary: "... one that acts or has the power or authority to act ... or represent another"

- In other words, an agent carries out a task in favor of someone who has delegated it.
- To avoid tedious descriptions of tasks, we sometimes prefer our agents to be able to infer (predict, guess) our goals ...
- ... so the agents should have some knowledge of the task domain and of their user.

An agent and its environment

- The agent takes sensory input from its environment, and produces as output actions that affect it.


  
An agent and its environments

- External environment: the user, other humans, other agents, applications, information sources, their relationships, platforms, servers, networks, etc.
- Internal environment: the agent's architecture, goals, abilities, sensors, effectors, profile, knowledge, beliefs, etc.
Definitions (6)

- An intelligent agent is an entity that is able to keep its internal environment in balance with its external environment, in such a way that in the case of unbalance the agent can:
  - change the external environment so that it will be in balance with the internal one ...
  - change the internal environment (itself) so that it will be in balance with the external one ...
  - find out and move to a place within the external environment where balance occurs without any changes ...
  - closely communicate with one or more other agents (human or artificial) to be able to create a community whose internal environment will be able to be in balance with the external one ...
  - filter the set of features acquired from the external environment, to achieve balance between the internal environment and the deliberately distorted pattern of the external one.

Definitions (6), continued

The above means that an agent:

1) is goal-oriented, because it should have at least one goal: to keep its internal environment in balance with the external one;

2) is reactive, because of the ability to change the external environment;

3) is adaptive, because of the ability to change the internal environment (itself);

4) is mobile, because of the ability to move to a better place within the external environment;

5) is social, because of the ability to communicate and create communities with other agents;

6) is selective, because of the ability to filter its perception by sensing only a "suitable" part of the environment.


Definitions (8) (FIPA)

- An agent is a computational process that implements the autonomous, communicating functionality of an application.

Definitions (9) (Wikipedia, http://www.wikipedia.org)

- In computer science, an intelligent agent is a software agent that exhibits some form of artificial intelligence, assists the user, and acts on their behalf in performing non-repetitive computer-related tasks. While the working of software agents used for operator assistance or data mining (sometimes referred to as bots) is often based on fixed pre-programmed rules, "intelligent" here implies the ability to adapt and learn.
Three kinds of assistance

- An agent that helps the user during some task (e.g., the Microsoft Office Assistant);
- An agent that knows where to go when you tell it the destination;
- An agent that knows where to go, when, and why.

 

Simple examples of agents

- Control systems, e.g. a thermostat
- Software daemons, e.g. a mail client

Intelligent agents

- An intelligent agent is one that is capable of flexible autonomous action in order to meet its design objectives, where flexible means three things:
  - Reactivity: intelligent agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it, in order to satisfy their design objectives;
  - Proactiveness: intelligent agents are able to exhibit goal-directed behaviour by taking the initiative, in order to satisfy their design objectives;
  - Social ability: intelligent agents are capable of interacting with other agents (and possibly humans), in order to satisfy their design objectives.

Agent properties (Franklin & Graesser, 1996)

- reactive: responds in a timely fashion to changes in the environment;
- autonomous: exercises control over its own actions;
- goal-oriented (pro-active, purposeful): does not simply act in response to the environment;
- temporally continuous: is a continuously running process;
- communicative (socially able): communicates with other agents, perhaps including people;
- learning (adaptive): changes its behavior based on its previous experience;
- mobile: able to transport itself from one machine to another;
- flexible: its actions are not scripted;
- character: has a believable "personality" and emotional state.

Goals

- An agent is responsible for satisfying specific goals. There can be different types of goals, such as achieving a specific status, maximising a given function (e.g., utility), etc.

State

- The state of an agent includes the state of its goals, plus the state of its knowledge and beliefs about its environment.

Environment

- An agent is situated in an environment that consists of the objects and other agents it is possible to interact with.

Identity

- An agent has an identity that distinguishes it from the other agents of its environment.
ü In complex environments:
 An agent do not have ˜ control over its
environment, it just have ˜
 
 control
 Partial control means that an agent can   the
environment with its actions
 An action performed by an agent may 
 to have the
desired effect.

ü Conclusion: environments are 


  , and agents must be
prepared for the possibility of 
  .

Environments (2)

- Key notion: effectoric capability — the agent's ability to modify its environment.
- Actions have preconditions.
- Key problem for an agent: deciding which of its actions it should perform in order to best satisfy its design objectives.

Environments (3)

- The environment is characterized by a set of states:

  S = {s1, s2, ...}

- The effectoric capability of the agent is characterized by a set of actions:

  A = {a1, a2, ...}

Standard agents

- A standard agent decides what action to perform on the basis of its history (its experiences).
- A standard agent can be viewed as a function

  action : S* → A

  where S* is the set of sequences of elements of S (states).

The environment as a function

- The environment can be modeled as a function

  env : S × A → ℘(S)

  where ℘(S) is the power set of S. This function takes the current state of the environment s ∈ S and an action a ∈ A (performed by the agent), and maps them to the set of possible resulting environment states env(s, a).

- Deterministic environment: all the sets in the range of env are singletons (contain exactly one state).
- Non-deterministic environment: otherwise.
ü -     : otherwise.
History

- A history represents the interaction between an agent and its environment. A history is a sequence

  h : s0 --a0--> s1 --a1--> s2 --a2--> ...

  where:
  - s0 is the initial state of the environment;
  - a_u is the u'th action that the agent chose to perform;
  - s_u is the u'th environment state.
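The formalism above can be made concrete in a few lines of Python. This is a toy sketch: the state names, action names, and the heating rule are invented for illustration and are not part of the slides.

```python
import random

S = {"cold", "ok"}              # set of environment states
A = {"heat_on", "heat_off"}     # set of agent actions

def action(history):
    """Standard agent: a function from state sequences S* to actions A."""
    return "heat_on" if history[-1] == "cold" else "heat_off"

def env(s, a):
    """Environment function env : S x A -> P(S), returning the SET of
    possible successor states. Heating may fail: non-determinism."""
    return {"ok", "cold"} if a == "heat_on" else {"cold"}

def deterministic():
    """An environment is deterministic iff every env(s, a) is a singleton."""
    return all(len(env(s, a)) == 1 for s in S for a in A)

def run(s0, steps=5, seed=0):
    """Generate one possible history s0 -a0-> s1 -a1-> s2 -> ..."""
    rng = random.Random(seed)
    states, acts = [s0], []
    for _ in range(steps):
        a = action(tuple(states))
        acts.append(a)
        states.append(rng.choice(sorted(env(states[-1], a))))
    return states, acts

print(run("cold"))
print("deterministic:", deterministic())
```

Because `env("cold", "heat_on")` returns two possible successors, this toy environment is non-deterministic, and repeated runs with different seeds produce different histories.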
Purely reactive agents

- Some agents decide what to do without reference to their history: they base their decisions entirely on the present state, with no reference at all to the past.
- Such agents are called purely reactive, and can be represented by a function

  action : S → A

- Example: a thermostat is purely reactive:

  action(s) = heater off, if s = "temperature OK"
              heater on,  otherwise
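A purely reactive agent is just a plain function of the current state. The sketch below encodes the thermostat rule; the state strings are invented labels.

```python
# Purely reactive agent: a function from the CURRENT state to an action,
# with no access to history.
def thermostat(state: str) -> str:
    return "heater off" if state == "temperature OK" else "heater on"

print(thermostat("temperature OK"))   # heater off
print(thermostat("too cold"))         # heater on
```

Contrast this with the standard agent, whose decision function receives the whole state sequence.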

Perception

- An agent's decision function can be divided into a perception subsystem and an action subsystem:

  see : S → P      (maps environment states to percepts)
  action : P* → A  (maps sequences of percepts to actions)

- If two different environment states s1 ≠ s2 are mapped to the same percept, see(s1) = see(s2), then the two states are indistinguishable to the agent: from the agent's point of view, they are identical.
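The see/action decomposition can be sketched as follows. The weather states and the threshold are invented; the point is that the percept deliberately drops part of the state, making some distinct states indistinguishable to the agent.

```python
def see(state):
    """Perception function see : S -> P. The state is (temperature, raining);
    the percept keeps only a coarse temperature reading."""
    temperature, raining = state
    return "cold" if temperature < 18 else "warm"   # 'raining' is dropped

def action(percepts):
    """Action function action : P* -> A over the percept sequence."""
    return "heat_on" if percepts[-1] == "cold" else "heat_off"

s1, s2 = (15, True), (15, False)   # distinct environment states...
print(see(s1) == see(s2))          # ...same percept: indistinguishable
```

Whatever differs between `s1` and `s2` can never influence this agent's behaviour, since its decisions depend only on percepts.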

Autonomy

- What does it mean for a piece of software to be autonomous and to have freedom of action, when its behaviour is determined by its code, the input data, and the machine it is running on?

- Autonomy is revealed when interpreting the agent's characteristics in the following way:
  - All input to and output from an agent is considered as sensing and performing actions. Therefore, an agent does not directly receive commands from users.
  - An agent is not programmed directly in terms of what to do in a given situation. The agent itself decides what to do.
Agents vs. objects

- An object encapsulates a state and a behaviour, but it has no control over when its public methods are invoked by other entities.
- A multi-agent system is inherently multi-threaded: each agent has its own thread of control (simultaneously (or pseudo-simultaneously) running tasks).

- An object does it "for free" (because it has to): it has been programmed in order to do things, to perform actions, to react to specific inputs, to respond to orders.

- "An agent does it for money, or because it wants to!"

- "Every agent has its price!"

- An agent requires the action to perform to be complementary to its goals. It can decide to perform certain tasks under specific conditions, if they do not contradict its own goals.
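The object/agent contrast can be sketched in code. The class names, the goal set, and the "keep the room warm" goal are all invented for illustration: the point is only that the agent checks a request against its goals and may refuse.

```python
class DoorObject:
    """An object does it 'for free': a method call is always executed."""
    def open(self):
        return "opened"

class DoorAgent:
    """An agent evaluates a request against its own goals first."""
    def __init__(self, goals):
        self.goals = goals                  # e.g. {"keep_room_warm"}

    def request_open(self):
        if "keep_room_warm" in self.goals:
            return "refused"                # request contradicts a goal
        return "opened"

print(DoorObject().open())                          # opened
print(DoorAgent({"keep_room_warm"}).request_open()) # refused
```

In a real agent system the refusal would typically be communicated back to the requester as an explicit message rather than a return value.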
   
Situatedness

- An agent is able to perceive (directly) and monitor the properties (the status) of objects and agents in its environment, including the events that happen in it, e.g., by changing the state of its knowledge and beliefs about the environment.
- An agent is able to perform certain actions that can modify the environment.

Therefore, an agent can:

(1) react to changes in its environment;
(2) plan for intervention (goal-oriented behavior).
Reactivity

- If the environment of an ordinary program is static and unchangeable, then this program does not worry about its success or failure.
- The real world is highly dynamic, and information can change and be incomplete.
- Software for dynamic domains has to take into account the possibility of failure: "Should I go on?"
- A reactive system is able to maintain an ongoing interaction with its environment and respond in a timely fashion to the changes occurring in it. ("The water level is too high!" — "I open the door!")
Proactiveness

- Stimulus → response: reacting to events according to stimulus-response rules.
- Goal-directed behaviour: an agent decides to do something because of a specific goal it has.
- Proactiveness means taking the initiative in generating plans and attempting to achieve goals. This means that agents recognize and utilize available opportunities. ("I want to reach the peak of the mountain! I will take the ski-lift!")
Social ability

- The real world is an environment where several agents coexist:
  - Agents have to take others into account when trying to achieve their own goals.
  - In a multi-agent environment some goals cannot be achieved at all, and some others can only be achieved with the cooperation of others.
- Social ability is the capability of interacting with other agents (and possibly humans), and perhaps cooperating with them.

Example (a request/inform exchange):
- Agent 1 (goal: to bring people to Bologna; sub-goal: to bring train X to Bologna): "Can I pass through Italy to go to Bologna?" ... "How much would it cost me?"
- Agent 2: "No you can't, but your passengers can take my train from Domodossola to Bologna for X$."

Agent actions

Agents' actions can be:

- physical, i.e., they affect properties of objects in the environment;
- communicative, i.e., sending messages with the aim of affecting the mental attitudes of other agents;
- deliberative, i.e., making decisions about future actions.

Messages have a well-defined semantics: they embed a content expressed in a given content language and containing terms whose meaning is defined in a given ontology.

Example: "Mm, it's raining.." → "I inform you that in Lausanne it is raining" → "I got the message!"
Other properties

- Mobility: the capability of an agent to move within an external environment.
- Veracity: an agent will not knowingly provide false information to its user.
- Benevolence: agents do not have conflicting goals; therefore every agent will always try to do what it is asked to do.
- Learning/adaptivity: agents improve their performance over time.
- Rationality: agents act in order to achieve their goals, and will not act in such a way as to prevent their goals being achieved.

 


Types of agent architectures

- Logic-based agents
- Reactive agents
- Belief-desire-intention agents
- Layered architectures

  
Logic-based architectures

- The "traditional" approach to building artificially intelligent systems:
  - Knowledge: a symbolic representation of the agent's environment and desired behavior.
  - Reasoning: syntactical manipulation of this representation (logical deduction).
Logic-based example: a cleaning robot

- What should the robot do? In a logic-based agent, behavior is derived by deduction from rules over symbolic predicates, schematically:

  In(x, y) AND Dirt(x, y) → Do(suck)
  In(x, y) AND Wall(ahead) → Do(change_direction)

- Difficulties appear immediately: when exactly should the robot change direction, and what is the stopping criterion?!
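A logic-based agent of this kind can be sketched as ordered rule matching over symbolic beliefs. The predicates and rule set below are invented for illustration, not taken from the slides.

```python
# Minimal sketch of a logic-based cleaning agent: the agent "deduces"
# an action by firing the first rule whose condition holds against its
# symbolic beliefs about the current cell.
def deduce(beliefs):
    rules = [
        (lambda b: b["dirt_here"],  "suck"),
        (lambda b: b["wall_ahead"], "change_direction"),
        (lambda b: True,            "forward"),   # default rule
    ]
    for condition, act in rules:
        if condition(beliefs):
            return act

print(deduce({"dirt_here": True,  "wall_ahead": False}))  # suck
print(deduce({"dirt_here": False, "wall_ahead": True}))   # change_direction
print(deduce({"dirt_here": False, "wall_ahead": False}))  # forward
```

Even this tiny example shows the brittleness the slide points at: the rule order encodes priorities implicitly, and nothing in the rules tells the robot when it is done.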

Assignment (Part I)

- What to do now??
- To get 5 ECTS and the grade for the TIES-433 course (Part I), you are expected to write < 5 pages of free text describing how you see a possible approach to the problem in the picture: requirements for the agent architecture and its abilities (as economical as possible); your view on the agent's strategy (and/or plan) for reaching the goal of cleaning free-shape environments; and conclusions.
Assignment details

- Format: Word (or PDF) document;
- Deadline: 30 October of this year (24:00);
- Files with presentations should be sent by e-mail to Vagan Terziyan (vagan@cc.jyu.fi);
- Notification of evaluation: until 20 November;
- You will get 5 credits for Part I of the course;
- Your course grade (for the whole course) will be given based on the originality and quality of this assignment;
- Reminder: on top of these 5 ECTS you can also get extra credits (1 to 5 ECTS) if you also take part in the exercise related to Part II of the course (instructor: Michal Nagy).

Logic-based example, continued

- What now???
- Now ... ??!
- When you are able to design such a system, this means that you have learned everything you need from the course "Design of Agent-Based Systems".
 

Reactive architectures

- Reactive agents do not rely on symbolic reasoning: they map perceived situations directly to actions, and intelligent behaviour emerges from the interaction of simple behaviours with the environment.
Belief-desire-intention (BDI) architectures

- BDI architectures have their roots in understanding practical reasoning.
- Practical reasoning involves two processes:
  - Deliberation: deciding what goals we want to achieve.
  - Means-ends reasoning: deciding how we are going to achieve these goals.

Deliberation

- First: try to understand what options are available.
- Then: choose between them, and commit to some.
- Intentions influence the beliefs upon which future reasoning is based.
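The two processes above can be sketched as a toy deliberation cycle. The structure (generate options, commit to one, then plan for it) follows the generic BDI loop; the beliefs, desires, and plans are invented.

```python
def options(beliefs, intentions):
    """Deliberation, step 1: which goals are currently available?"""
    opts = set()
    if beliefs.get("hungry"):
        opts.add("eat")
    if beliefs.get("tired"):
        opts.add("sleep")
    return opts

def commit(beliefs, desires, intentions):
    """Deliberation, step 2: commit to one option (here: alphabetically,
    as a stand-in for a real preference ordering)."""
    return min(desires) if desires else None

def plan(beliefs, intention):
    """Means-ends reasoning: HOW to achieve the committed intention."""
    library = {"eat": ["go_to_kitchen", "cook", "eat"],
               "sleep": ["go_to_bed"]}
    return library.get(intention, [])

beliefs = {"hungry": True, "tired": False}
desires = options(beliefs, set())
intention = commit(beliefs, desires, set())
print(intention, plan(beliefs, intention))
```

A full BDI interpreter would loop: sense, revise beliefs, reconsider intentions, and execute one plan step at a time, dropping intentions that become impossible.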
BDI example

- Example (taken from Cisneros et al.)
 

Layered architectures

- Layering satisfies the requirement of integrating a reactive and a proactive behavior.
- Two types of control flow:
  - Horizontal layering: the software layers are each directly connected to the sensory input and the action output.
  - Vertical layering: sensory input and action output are each dealt with by at most one layer.
Horizontal layering

- Advantage: conceptual simplicity (to implement n behaviors, we implement n layers).
- Problem: a mediator function is required to ensure the coherence of the overall behavior.
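A horizontally layered agent can be sketched as follows. The two layers and the mediator's fixed-priority scheme are invented: every layer sees the percept and proposes an action, and the mediator picks one to keep the overall behavior coherent.

```python
def reactive_layer(percept):
    """Fast layer: proposes an action only in emergencies."""
    return "avoid" if percept.get("obstacle") else None

def planning_layer(percept):
    """Slow layer: always has a long-term proposal."""
    return "move_to_goal"

def mediator(proposals):
    """Mediator function: first non-None proposal wins
    (list is ordered highest priority first)."""
    for proposal in proposals:
        if proposal is not None:
            return proposal

def act(percept):
    return mediator([reactive_layer(percept), planning_layer(percept)])

print(act({"obstacle": True}))     # avoid
print(act({"obstacle": False}))    # move_to_goal
```

The fixed priority is the simplest possible mediator; real horizontally layered systems (the slide's point) need something more elaborate, which is exactly where their complexity hides.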
Vertical layering

- Subdivided into:
  - One-pass control: control flows sequentially through the layers, from sensory input to action output.
  - Two-pass control: information flows up through the layers, and control flows back down.
Horizontal example: TouringMachines

- Proposed by Innes Ferguson.
- Three horizontal layers (reactive, planning, and modelling), with a control subsystem that decides which layer has control of the agent.
Vertical example: InteRRaP

- Proposed by Jörg Müller.
- A two-pass, vertically layered architecture: a behaviour-based layer, a local planning layer, and a cooperative planning layer, each with a corresponding knowledge base.
Multi-agent systems

- A working environment comprising synergistic software components can cope with complex problems.

Cooperation

- Three main approaches:
  - Cooperative interaction
  - Contract-based co-operation
  - Negotiated cooperation
 
- Principle of social rationality (Hogg et al.): an agent selects an action if the benefit for the overall agent society outweighs its loss, even when the action is not individually rational.
Software development paradigms (procedural programming)

- The procedural approach is based on a functional decomposition of the problem. The decomposition focuses on algorithmic considerations, with the avoidance of common data.
Software development paradigms (modular programming)

- The modular approach is based on data hiding and abstraction. This approach aims to minimize the effect of change in programs. The primary architectural elements are modules that hide design decisions, usually about data structures (e.g., Ada, Modula-2).
Software development paradigms (object-oriented programming)

- The basic idea in an object-oriented approach is to view a software system as a collection of interacting entities called "objects".
- Objects are defined by an identity, a state, and a behavior (invoked by messages).
- The interactions among objects are described in terms of 'messages'.
- 'Similar' objects sharing common characteristics are usually grouped into classes. A number of different relationships can hold among them. Fundamental ones are inheritance/classification and decomposition/aggregation, which relate classes to each other, and 'instance-of', which relates a class to its instances.
- E.g. Smalltalk, C++, Java.
The agent-oriented paradigm

- A software system is viewed as a collection of interacting entities called agents. Like objects, agents have an identity, a state, and a behavior, but these are described in more sophisticated terms:
  - State: beliefs, goals, knowledge, etc.
  - Behavior: roles that can be played, actions that can be performed, reactions to events, interactions, etc.
- The behaviour of an agent is defined in terms of what it should do and when (and not in terms of which methods are invoked).
 
- Agents can be considered as active objects.
- O-O technology such as distributed objects (CORBA, RMI), applets, mobile object systems, and coordination mechanisms and languages can be used to implement agent systems.
- A multi-agent system (MAS) can be composed of agents and of objects which are controlled and accessed by agents.
- The concept of agent provides a higher-level abstraction than the concept of object.

Roots of the agent paradigm

- Artificial Intelligence, Distributed AI
- Structured Programming, OO Programming
- Client-Server Architectures, Peer-to-Peer Architectures
- Communication Philosophy, Speech Acts, Social Sciences

Peer-to-peer: a type of network in which each workstation has equivalent capabilities and responsibilities. This differs from client/server architectures, in which some computers are dedicated to serving the others.
When are agents beneficial? (1)

- Speech-act based communication is beneficial in situations where complex/diverse types of communication are required:
  - Speech-act based communication limits the number of message types, avoiding the proliferation of object methods.
  - The message content is more complex, but the same content language and the same ontology can be used by several entities.
  - Sharing a well-defined semantics allows the definition of patterns of communication, i.e., interaction protocols, that can be reused in similar situations.
When are agents beneficial? (2)

- Goal-directed behaviour is beneficial when the system has specific performance requirements, in situations in which it is not practical or possible to specify its behaviour on a case-by-case basis:
  - Agent behaviour is not defined in terms of a direct mapping from input to output, but in terms of how to decide what to do.
  - This allows more flexible systems that can adapt their behaviour to satisfy their goals in different (and changeable) circumstances.
When are agents beneficial? (3)

- Exploiting autonomy:
  - Agreeing or refusing to accomplish a task is made explicit, and is not embedded in the body of an object method.
  - This is beneficial in all situations requiring an agent to act on behalf of a user; in more detail, in all situations involving:
    - Negotiation
    - Cooperation
    - Competition
When are agents beneficial? (4)

- Exploiting modularity:
  - Multi-agent systems are modular in the sense that agents can be easily added or replaced.
  - This is beneficial when the system is expected to be expanded or modified, or when the purpose of the system is explicitly changed.
  - The main advantage is easier maintenance of the system.
  - Examples:
    - Network management.
    - Fault detection and monitoring.

Agent platforms

- A platform is a middleware which provides services to an agent.
  - Services: communications, resource access, migration, security, contact address management, persistence, storage, creation, etc.
- Middleware flavours:
  - Fat AOM (Agent-Oriented Middleware): lots of services and lightweight agents.
  - Thin AOM: few services and very capable agents.
Mobile agents

- The mobile agent is the entity that moves between platforms.
  - It includes the state and the code, where appropriate.
  - It includes the responsibilities and the social role, if appropriate (i.e. the agent does not usually become a new agent just because it moved).
Conclusions

- The concept of agent is associated with many different kinds of software and hardware systems. Still, we found that there are similarities in many different definitions of agents.
  - Unfortunately, the meaning of the word "agent" still depends heavily on who is speaking.
- There is no consensus on what an agent is, but several key concepts are fundamental to this paradigm. We have seen:
  - The main characteristics upon which our agent definition relies;
  - Several types of software agents;
  - How an agent differs from other software paradigms.

- Agents as a natural trend
- Agents because of market reasons
- Who is legally responsible for the actions of agents?
- How many tasks, and which tasks, do users want to delegate to agents?
- How much can we trust agents?
- How can we protect ourselves from erroneously working agents?