
Autonomous Asteroid Exploration by Rational Agents

N.K. Lincoln
Faculty of Engineering and the Environment, University of Southampton, UK

S.M.Veres
Department of Automatic Control and Systems Engineering, University of Sheffield, UK

L.A. Dennis, M. Fisher, and A. Lisitsa


Department of Computer Science, University of Liverpool, UK


Abstract The history of software agent architectures has been driven by the parallel requirements of real-time decision-making and the sophistication of the capabilities the agent can provide. From reactive, rule-based and subsumption architectures through to layered and belief-desire-intention architectures, a compromise has always had to be reached between the ability to respond to the environment in a timely manner and the provision of capabilities that cope with relatively complex environments. In the spirit of these past developments, this paper proposes a novel anthropomorphic agent architecture that brings together the most desirable features: natural language definitions of agent reasoning and skill descriptions, shared knowledge with operators, the combination of fast reactive response with long term planning, the ability to explain why certain actions are taken by the autonomous agent and, finally, inherent formal verifiability. With these attributes, the proposed agent architecture can potentially cope with the most demanding autonomous space missions.
Digital Object Identifier 10.1109/MCI.2013.2279559 Date of publication: 16 October 2013

I. Introduction

Complex autonomous (robotic) missions require decisions to be made taking into account multiple factors such as mission goals, priorities, hardware functionality, performance and instances of unexpected events. The concise representation and use of relevant knowledge about action and sensing is only part of the autonomous control problem; the organization of the necessary perception processes, the prediction of the possible outcomes of action (or inaction), and the communication required for decision making are also critical. The development of any autonomous system culminates in a functional intelligent system, where the developed system operates within a specified domain and implements procedures based upon declarative, procedural and generally heterogeneous knowledge defined across the operational domain. The consistency and integrity of this system knowledge is an essential aspect of the resultant system. Motivated by these problems, our research has produced a formally verifiable deliberative agent architecture linked to a natural language knowledge representation of the world model and possible actions. This architecture has been applied in the context of autonomous space systems, both within simulated environments and on hardware within a purpose built ground facility [1, 2, 3]. Autonomy is a highly relevant topic for deep space missions, where scientific interest in asteroids acts as a technology driver to produce spacecraft which perform complex missions at great distances from Earth. Two such missions, Hayabusa and Dawn, run by JAXA and NASA respectively, are asteroid exploration

Research supported by EPSRC through grants EP/F037201/1 and EP/F037570/1.

1556-603X/13/$31.00 © 2013 IEEE

November 2013 | IEEE Computational Intelligence Magazine


missions [4, 5]. The former, Hayabusa, was a sample return mission that targeted the near-Earth object Itokawa and successfully returned a soil sample to Earth in 2010. The latter, Dawn, is a current mission targeted at the proto-planets Vesta and Ceres, which reside in the asteroid belt between Mars and Jupiter. Although successful, the Hayabusa mission ran into problems that threatened its completion: in 2003 a solar flare damaged the solar panels onboard the spacecraft, reducing the efficiency of the propulsion system; in 2005 two reaction wheels failed, compromising attitude controllability for the remainder of the mission; the release of the Minerva probing robot failed; a communication blackout of nearly two months was caused by a fuel leakage; and, in 2009 during the return cruise, an ion engine anomaly was detected, which took 15 days to circumvent. The mission phases and operations of Hayabusa were controlled from Earth, and this was a contributing factor in the failed Minerva probe release: because of the communication time lag of 32 minutes between the ground station and the spacecraft, the command to release the probe was received during an automatically triggered ascent phase from the asteroid, and the probe was consequently lost in space. These events, encountered during the Hayabusa mission, highlight the need for robotic platforms to be capable of reliably performing localized decision making to enable completion of their tasks while immersed in a dynamic, unknown, and possibly hostile environment.

The agent programming system presented here develops the anthropomorphic belief-desire-intention agent programming approach further by enabling efficient hierarchical planning and execution capabilities. There are five important benefits: (1) the method we propose simplifies agent operations relative to multi-layer agents by blending reactive and foresight based behaviors through logic based reasoning; (2) agent operations such as sensing, abstractions, task executions, behavior rules and reasoning become transparent to a team of programmers through the use of natural language programming; (3) the ability to use English language descriptions to define agent reasoning also means that operators and agents can have a shared knowledge of meanings and procedures; (4) in our system it is also straightforward to program an agent to be able to explain its selected actions or its problems to its operators; and (5) despite its user friendliness, our system is formally verifiable by model checking methods to ensure our agents always try to do their best. These improvements signify important practical benefits for engineers designing and operating such autonomous systems.
II. Programming Approaches to Autonomous Systems

Current approaches to programming autonomous robot operations fall under the closely related domains of: 1) programming hybrid automata [6, 7, 8, 9]; 2) agent oriented programming [10, 11, 12, 8, 13, 14]; 3) programming vertical-horizontal multi-layered systems [15, 16]; and 4) programming hierarchical planners and executors [17, 18]. In this section a brief review of these is given, followed by a list of the features of our programming system.

A. Hybrid-Automata-Based Autonomy

Hybrid systems model computer controlled engineering systems that interact with continuous physical processes in their environment [19, 7]. Efforts have been made to model agent systems as a series of interconnected hybrid automata [20, 21, 22], and the commercial software Stateflow™ [23] is widely used by industry. This has motivated agent development using hybrid-automaton based models [24]. To implement a deliberative agent system through a set of hybrid automata would entail the representation of each of the available plans of the agent as a hybrid automaton. The resulting system would become very complex, and a multi-agent system would be expressed as a large parallel set of concurrent automata.

B. Multi-Layered Agents

Multi-layered agent systems aim to combine the timely nature of reactive architectures (hybrid automata) with an analytic approach to the environment that takes more time [25, 15]. Consequently, as their name suggests, these systems involve a horizontal or vertical hierarchy of interacting subsystem layers. The complexity of potential information bottlenecks, and the need for internal mediation within purely horizontal architectures, are partially alleviated by vertical architectures, though these structures do not easily provide for fault tolerance [26, 16]. Knowledge based deliberation within agent systems is performed by logical reasoning over symbolic definitions that explicitly represent a model of the world in which the agent resides; it is encapsulated by the intentional stance, wherein computational (agent) reasoning is subject to anthropomorphism. This is a promising approach for a system to cope with the computational complexities associated with high levels of automated decision making [27]. Although there is no all-encompassing agent theory for multi-layered agents, significant contributions have been made concerning the properties an agent should have and how they should be formally represented and reasoned about [28, 29].

C. Agent Oriented Programming

Logical frameworks in which beliefs, desires and intentions (BDI) are primitive attitudes, as developed by Bratman and following the philosophy of Dennett [30, 28], are a popular format for deliberative systems. Deliberative architectures, and their logical foundations, have been thoroughly investigated and used within numerous agent programming languages, including AgentSpeak, 3APL, GOAL and CogniTAO, as well as the Java based frameworks JADE, Jadex and JACK [11, 31, 32, 33, 13]. Possibly the most widely known BDI implementations are the procedural reasoning system (PRS) and InteRRaP [14]. PRS is a situated real-time reasoning system that has been applied to the handling of space shuttle malfunctions, threat assessment and the control of autonomous robots [34, 35].
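The deed-based execution model common to these BDI languages can be sketched in a few lines. The following is an illustrative toy, not PRS or any of the cited platforms: deeds are limited to belief updates and primitive actions, and the particular deeds and their contents are invented.

```python
from collections import deque

# Toy BDI execution: intentions are stacks of deeds, and the agent
# executes the first deed of each intention in turn. The deed types
# (believe/unbelieve/act) mirror the generic BDI cycle; the contents
# are invented for illustration.

beliefs = set()
log = []

def perform(deed):
    kind, arg = deed
    if kind == "believe":
        beliefs.add(arg)                 # add a belief
    elif kind == "unbelieve":
        beliefs.discard(arg)             # remove a belief
    elif kind == "act":
        log.append(arg)                  # stand-in for a primitive action

# Two concurrent intentions; each is a list of deeds, first deed at the front.
intentions = deque([
    [("act", "orient_antenna"), ("believe", "comms_ready")],
    [("act", "spin_up_wheel"), ("believe", "attitude_stable")],
])

# Round-robin scheduling: pop and perform the first deed of each
# intention, requeueing the intention while deeds remain.
while intentions:
    stack = intentions.popleft()
    perform(stack.pop(0))
    if stack:
        intentions.append(stack)

print(sorted(beliefs), log)
# prints: ['attitude_stable', 'comms_ready'] ['orient_antenna', 'spin_up_wheel']
```

A real interpreter would additionally post goals as events, select plans by matching events against a plan library, and allow intentions to be suspended, as described for Gwendolen later in this paper.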


An agent is software that enables a robot to make its own decisions to achieve some goals, plan ahead and keep itself to behavior rules.

Within the PRS framework, a knowledge database detailing how to achieve particular goals or react to certain situations is interfaced by an interpreter. In concept this is similar to a horizontally layered TouringMachines architecture [10], with parallel layers for modelling, planning and reactive behavior, though notions of intention are contained and consequently place PRS as a deliberative BDI model. InteRRaP is a hybrid BDI architecture with three separate layers incorporating behavior, local planning and cooperative planning. These interface the world state, model and social knowledge bases, respectively. These knowledge bases are in turn fed through perception and communication processes. Ultimately it is the lowest level behavioral layer that is responsible for environmental interaction, though this is augmented by higher levels that involve deliberation processes to enable the attainment of high level goals [16].

D. Hierarchical Planner and Executor Systems

Two closely related frameworks, MDS (Mission Data System) and CLARAty (Coupled Layer Architecture for Robotic Autonomy) [17], provide frameworks for the development of goal-based systems intended for, though not limited to, robotic space systems. MDS is an architectural framework encapsulating a set of methodologies and technologies for the design and development of complex, software intensive control systems on a variety of platforms that map high level intentions (goals) through to actions. The system architecture focuses on expressing the physical states that a control system needs to manage, and the interactions between these states [36]. CLARAty is a two-layer object oriented software infrastructure for the development and integration of robotic algorithms. Its prime purpose is to provide a common reusable interface for heterogeneous robotics platforms by using established object oriented design principles. In a CLARAty architecture, two layers separate a mainly declarative programming within a Decision Layer from a mainly procedural programming within a Functional Layer. It is the functional layer that encapsulates low and mid-level autonomous capabilities, whereas the decision layer encapsulates global reasoning about system resources and mission constraints. Both MDS and CLARAty seek the application of a generic framework for the development of goal based systems on heterogeneous platforms. More specifically, MDS seeks to link system abstractions between systems engineers and software engineers through model based design; CLARAty seeks to promote reusable code to enable the advancement of robotic control algorithms.

An interesting middle ground between BDI systems and complex decision trees is the Remote Agent technology deployed on the Deep Space 1 technology proving mission [37]. Remote Agent, an agent system based upon model-based diagnosis (MBD), integrated three separate technologies: an onboard planner-scheduler (EUROPA), a robust multi-threaded executive, and a model-based fault diagnosis and recovery system (Livingstone). In keeping with the ethos of agent systems, rather than being commanded to execute a sequence of commands, the system design of Remote Agent was such that the attainment of a list of goals was sought [38].

E. Hybrid Automata Versus Deliberative Agents

The potential state explosion resulting from replicated plan representations within discrete states of the hybrid system is touched upon in [20] and discussed in more detail in [39]. Moreover, constructing a deliberative agent from the perspective of hybrid automata involves a loss of expressivity; the representation as hybrid automata often loses the explicit differentiation between what the agent can do and what it will actually choose to do. Abstraction systems in hybrid automata are implicit. This makes system operations more difficult to understand, especially when things go wrong.

F. Planner-Executive Systems Versus Deliberative Agents

Deliberative agents have a large number of preprogrammed plans and focus on selecting the appropriate plan for execution given the current situation. Some deliberative agent approaches, such as Jason [13] and Jade [31], can also generate their own plans, or access sub-systems dedicated to planning based on the underlying capabilities of the systems. Other systems, such as CLARAty [17], implement continuous planning and execution based approaches that consequently make planning the centrepiece of an agent's existence.

Deliberative agents use a reasoning cycle which first assesses the current situation using reasoning by logic and then selects and/or executes a plan. This assessment is based upon anthropomorphic concepts such as beliefs, goals and intentions. Planning procedures and temporal-logic-based hypothetical inference (about consequences of future actions) can be accommodated within these deliberative agent frameworks. Two-layer planner/executor-functional agents, on the other hand, focus all their activity on efficient real-time replanning. Instead of complete replanning, deliberative agents can have access to plan-libraries defined for them as their knowledge and skill base. In many ways planner-executive systems and deliberative agents do the same job in different ways, but the most striking difference is the human-like reasoning in deliberative agents, which tends to make them more able to collaborate with human operators.

G. Features of Our Approach

Creating a new software architecture with significant benefits for autonomous robot control is a difficult problem when there are excellent software packages around [11, 13, 31, 14, 17]. In this section we identify the possible aspects of further


progress in terms of new features or strengthening existing features.

Deliberation of an agent is its decision making to choose which of its plans and skills to execute at any moment of time to achieve its goals.

Our architecture enables a programmer to equip the agent with the intelligence features classified in [40] as cognitive intelligence, social intelligence, behavioral intelligence, ambient intelligence, collective intelligence and genetic intelligence. We also emphasize the existence of shared concepts and understanding between humans and the agents, and the agent's natural ability to explain why it has chosen a given action. Another important consideration is simplicity and sharability of agent code during development, and easy maintenance during use.

Architectural Approach
A model-based approach is taken for handling data, with structural templates provided by an ontology of the agent. Class names and their attributes are precise professional terms understandable by operators of agents. In the deliberation process, a balance is created between the amount of real-time planning and the use of a priori plans that are available. This enables seamless blending of fast reactive behavior and slow contemplative evaluation of the long term consequences of agent actions and environmental events.

Programming Paradigms and Languages
Both declarative and procedural programming are used in three layers: a natural language program (NLP) compiles into embedded MATLAB code, which compiles into standard Java or C++. At the top level, abstractions expressed in natural language programming (sEnglish) form layers of abstraction that define operations which lend themselves to human interpretation. In principle, a similar route can be used to compile natural language into declarative rational agent code which then runs on a Java-based interpreter. In this paper we focus on declarative agent code in the Gwendolen language, which must be created directly by the programmer; however, we have also investigated the use of the Jason agent language and the programming of Jason agents using the same natural language interface as that which links to MATLAB.

Deployment Architecture
Distributed homogeneous, and also TCP/IP-connected heterogeneous, sets of processors can be used wherever Java and C++/ROS can run. The primary approach is soft real-time, but hard real-time implementation is also feasible.

Development Environment
The current development environment is a mixture of Eclipse (sEnglish/Java/Gwendolen), MATLAB/Simulink and the ROS/C++ development systems.

Documentation
The high level code for agent capabilities is written using natural language programming. The BDI agent code is in a declarative language such as Gwendolen or Jason, also presentable using sEnglish. Low level code is in MATLAB/C++/Java and documented in standard ways. The natural language descriptions of capabilities facilitate communication within the development team, reduce the effort for users and enable the creation of an information repository to capture the development process for future use and maintenance.
III. Programming Paradigm Descriptions

Figure 1 Operational schematic of a deliberative architecture: yellow blocks represent executable plans that modify the environment E in which an agent operates. A perception stream from the environment is abstracted and evaluated against contexts and abstractions. A plan selector function, α, results in the chosen executable plan.
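The pipeline of Figure 1, in which raw perception is abstracted into discrete logical statements, matched against plan contexts, and resolved by a plan selector, can be sketched as follows. This is a hypothetical Python illustration: the sensor fields, predicates, thresholds and plan names are all invented, and the two functions are simple stand-ins for the formally defined abstraction and selector.

```python
# Toy rendering of the deliberative-agent pipeline: an abstraction function
# turns raw sensor data into logical atoms, and a plan selector picks the
# first plan whose context holds. All field names, predicates, thresholds
# and plans are invented for illustration.

def rho(raw: dict) -> set:
    """Abstraction: convert raw sensor data into first-order-style atoms."""
    atoms = set()
    if raw["range_m"] < 500.0:
        atoms.add("close_to(asteroid)")
    if raw["particle_flux"] > 1e6:
        atoms.add("hazard(particle_event)")
    return atoms

# Plan library: each plan is guarded by the set of atoms it requires,
# listed in priority order (hazards outrank science goals).
Pi = [
    ("take_evasive_action", {"hazard(particle_event)"}),
    ("begin_observation",   {"close_to(asteroid)"}),
    ("cruise",              set()),          # default plan, always applicable
]

def alpha(atoms: set) -> str:
    """Plan selector: first plan (in priority order) whose context holds."""
    for plan, context in Pi:
        if context <= atoms:                 # context is a subset of the atoms
            return plan
    raise RuntimeError("no applicable plan")

# One reasoning cycle on invented sensor readings.
raw = {"range_m": 420.0, "particle_flux": 2e6}
print(alpha(rho(raw)))                       # prints: take_evasive_action
```

Here the selector is deterministic (priority order); the formal definition below also admits a non-deterministic selector.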

Our research has focused on a novel agent architecture for autonomous systems. The broad operation of the architecture considered here is illustrated in Figure 1: an agent may invoke the execution of an action encapsulated within its set of available plans Π, based upon the set of raw data available from the agent's sensors, Ω_E, and the set of discrete statements, Λ, that represent abstractions of the raw data in Ω_E. The agent is formally defined here as follows:


Deliberative Agent
A deliberative agent is a tuple ag = {E, ρ, ψ, Π, α} where the components are as follows:
E is a model of the environment, including the physical dynamics of any agent present in the environment;
ρ: ℘(Ω_E) → ℘(Λ)¹ is a function that converts raw sensor data into abstract statements in first order logic;
ψ ⊆ Ω_E is a set describing the agent's currently perceived context;
Π is the set of the executable plans of the agent; and
α: ℘(Λ) → Π is a plan selector function that selects an individual plan p ∈ Π using information from the abstractions of the sensor data created by ρ. α is, potentially, non-deterministic, though it need not be.
The agent, ag, evolves in the environment E during successive reasoning cycles that involve α selecting a new executable plan from Π based on which abstractions of ψ currently hold, or leaving the currently chosen plan to continue execution. This is a general definition that is not specific to the composition of ψ and Π, nor to individual implementations of ρ and α.

¹ ℘(X) denotes the set of subsets of a set X.

The structure and connection of the modules in a software implementation of this theoretical scheme can look different. Figure 2 illustrates the functional architecture of the agent implemented within this article, which comprises an augmented high-level reasoning system linked through an abstraction layer to low-level control and sensor systems. Such real-time control and sensing processes form a Physical Engine (P) that is situated in an environment that may be real or simulated; P consists of the aspects that are able to sense and effect change in the environment. P communicates with an Abstraction Engine (A). It is here that perception data sampled from P is filtered to prevent flooding of the belief base that belongs to the Rational Engine (R), which dictates the processes occurring within P via A. R is the highest level within the system and contains a Sense-Reason-Act loop. Sensing here relates to the perception of changes within the belief base that is being modified by A, via filtered abstraction of data from P; this may result in reasoning over new events and in actions necessitating interaction with either P or the Continuous Engine (X). X augments R and is utilized to perform complex numerical procedures that may assist reasoning processes within R, or generate data that will be required for a physical process occurring within P; consequently, a separate communication channel exists between P and X to enable direct information transfer between these engines without mediation from A. All actions performed by R are passed through the Abstraction Engine for reification. In this way, R is a traditional BDI agent dealing with discrete information, P and X are traditional control systems, while A provides the vital glue between all these parts by hosting the primary communication channels and translating between continuous and abstract data. Interaction between the components in the architecture is governed by a language independent operational semantics [1].

Figure 2 System structure: this maps onto the algorithmic description in Figure 1 by the Abstraction Layer mapping onto the abstractor of higher level states, the Physical Engine to the Sensor/Perception Stream, the Continuous Engine to Π and the Environment to E. The Reasoning Engine block corresponds to the deliberative actions responsible for the appropriate selection of plans, and performs the role of α.

The agent programming language within the Rational Engine encourages an engineer to express decisions in terms of the facts available to an agent, what it wants to achieve and how it will cope with any unusual events. This reduces code size: an engineer need not explicitly describe how the spacecraft should behave in each possible configuration of the system, but can instead focus on those facts that are relevant to particular decisions [39]. The key aspect of deliberation within agent programs allows the decision making part of the system to adapt intelligently to changing dynamic situations, changing priorities, and unreliable hardware systems. The distinctive features of our agent programming approach, which go beyond those of Jason or Gwendolen, are as follows.


1) A natural language based organization of events and relations in the physical and continuous engines is implemented. The capabilities available to the agent are specified fully using a natural language, enabling a natural semantic bridge between agent intentions and system actions. This feature enables the world modeling operations and capabilities of the agent to be described in a natural language document that itself compiles into code, whilst also being readable by human operators. Consequently, shared knowledge of procedures and world models between the agent and human operators is made possible.
2) The organization of all procedural code upon natural language sentence structures, whilst differing from the strict object oriented principles followed within CLARAty, enables code reuse through common operational abstractions.
3) The BDI agent language responsible for rational actions, a customized variant of the Gwendolen programming language, may be verified by model checking.
4) Three component engines, a physical engine, a continuous engine and a reasoning engine, are integrated via an abstraction layer. The slow deliberative and fast reactive responses are naturally blended together and are not organized into layers. Timing is determined by the availability of abstract data completed by the physical and continuous engines and ready for use by the reasoning engine.
5) Rationality is based on the BDI deliberative agent paradigm, as opposed to real-time planning.
Points 1 and 2 link the two core desires of MDS and CLARAty by providing a linked and transparent abstraction between high level system operation and low level function using linguistics, whilst enabling code reuse via common operational abstractions.

Furthermore, the same natural language abstractions can be utilized by the deliberative agent for reasoning.
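The verification by model checking mentioned above can be pictured, at its simplest, as exhaustive exploration of a finite transition system. The sketch below is an invented toy, not the Gwendolen/AIL/JPF toolchain: it enumerates every reachable state of a two-mode machine and tests a safety property in each.

```python
from collections import deque

# Toy explicit-state model checking: exhaustively explore a finite
# transition system and test a safety property in every reachable state.
# The mode/fuel machine and the property are invented examples.

def successors(state):
    mode, fuel = state
    if mode == "cruise" and fuel > 0:
        yield ("observe", fuel - 1)      # start an observation, burn fuel
        yield ("cruise", fuel)           # or keep cruising
    if mode == "observe":
        yield ("cruise", fuel)           # finish and return to cruise

def check(initial, safe):
    """Breadth-first search over all reachable states; returns a
    counterexample state, or None if the property holds everywhere."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not safe(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

# Safety property: fuel never goes negative. It holds, so check returns None.
print(check(("cruise", 3), lambda s: s[1] >= 0))
```

Program model checkers such as Java PathFinder do essentially this over the states of a Java virtual machine, rather than over a hand-written transition relation.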
A. Natural Language Programming of Agent Actions

sEnglish
sEnglish (system English) is a controlled natural language, i.e. a subset of standard English with the meanings of sentences ultimately defined by code in a high level programming language such as MATLAB, Java or C++ [12]. It is natural language programming made into an exact process. A correctly formulated sEnglish text compiles unambiguously into executable program code if predefined sentence structures together with an ontology are defined. Errors in functionality are reduced due to the inherent verification mechanism of sEnglish upon build. This enables a programmer to enjoy the convenience of natural language while keeping the usual determinism of digital programs. Once a database of sentences and ontologies has been generated, the clarity and configurability of a project written in sEnglish becomes evident. Of particular interest, when applied to the development of an agent system, is the link between the abstract manner in which sEnglish solutions are developed and the abstractions of a real agent system. This enables shared understanding between the computational system and its operator.

The capabilities available to an agent implementing the presented architecture are contained within the P and X engines; these actions, either computational data manipulation or complex interaction with hardware devices, are developed in a natural language facilitated by the sEnglish Publisher, an Eclipse based design environment [41] (see the break-out box on sEnglish). Just as the agent programming language used within the Rational Engine encourages an engineer to express decisions in terms of the beliefs available to an agent, abstracting agent capabilities in a natural language encourages the development engineer to encode specific skills in abstractions that are then shared between operating personnel and the agent. The capabilities abstracted may be divided into P and X abilities. P abilities, or skills, relate to specific physical actions the agent may invoke on the world environment; X skills are those related to complex queries that may be used to assist rational decision making occurring within R or concerning specific P skills. It is this shared understanding, generated by the abstraction of agent capabilities, that enables coherent development of all rational processes that are linked to agent actions.

Development using sEnglish commences by defining a central ontology, O, of the concepts pertinent to the target system that will be used within a natural language program document (NLP document) P = {O, S, N, m}, where S is a set of sentences in a natural language, N is an underlying programming language such as MATLAB, and m is a meaning definition function that assigns code in N to sentences in S. An NLP text, T, may be formed through composition of sentences from S, i.e. T ∈ S*. Abstraction of a specific procedural action is performed through an expanding tree of sentences, whereby the trunk represents a core abstract action and a leaf represents a trivial component computation expressed in the target code language N. In practice, the Eclipse plugin of sEnglish Publisher facilitates the mapping of S to code in N [41, 42]. This methodology provides a natural abstraction link between the high level meaning of a particular action, which is the handle used by rational actions, and the low level specifics of the action performative itself, which operates on real-time hardware.
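The meaning function m can be pictured as a table mapping sentence templates to code generators. The sketch below is a loose, hypothetical analogue: the two templates, the emitted MATLAB-like strings and the compiler loop are invented, and the real sEnglish tooling is far richer (ontology checking, sentence trees, multiple target languages).

```python
import re

# Hypothetical fragment of an NLP document P = {O, S, N, m}: two sentence
# templates standing in for S, with the meaning function m realised as a
# table from templates to code generators. The sentences and the emitted
# MATLAB-like strings are invented examples, not actual sEnglish content.

meaning = {
    r"Rotate the spacecraft by (\d+) degrees\.":
        lambda deg: f"rotate_craft({deg});",
    r"Measure the distance to the nearest asteroid\.":
        lambda: "d = measure_nearest_asteroid();",
}

def compile_text(text: str) -> list:
    """Compile an NLP text T (a composition of sentences from S) to code."""
    code = []
    for sentence in filter(None, (s.strip() for s in text.split("\n"))):
        for template, generate in meaning.items():
            match = re.fullmatch(template, sentence)
            if match:
                code.append(generate(*match.groups()))
                break
        else:
            raise ValueError(f"no sentence template matches: {sentence!r}")
    return code

T = "Rotate the spacecraft by 45 degrees.\nMeasure the distance to the nearest asteroid."
print(compile_text(T))
# prints: ['rotate_craft(45);', 'd = measure_nearest_asteroid();']
```

Any sentence outside the predefined templates is rejected at build time, which is the sense in which the controlled language keeps natural language programming deterministic.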
B. Rational Agent Decisions via Gwendolen

Rational decision making is based on symbolic reasoning using plans that are implemented in the Rational Engine, R, and described with a specialised rational agent (BDI) language based upon the Gwendolen programming language [43]. Gwendolen is implemented in the Agent Infrastructure Layer (AIL), a collection of Java classes intended for use in model checking agent programs. A full operational semantics for Gwendolen is presented in [43]. Key components of a Gwendolen agent are a set, R, of beliefs which are ground first order formulae and a set, I, of


intentions that are stacks of deeds associated with some event. Deeds include the addition or removal of beliefs, the establishment of new goals, and the execution of primitive actions. A Gwendolen agent may have several concurrent intentions and will, by default, execute the first deed on each intention stack in turn. It is possible to suspend an intention until some condition is met, in which case no deed on the stack is executed until the intention is unsuspended. Gwendolen is event driven, and events include the acquisition of new beliefs (typically via perception), messages and goals.

Our implementation differs from the published Gwendolen semantics by the addition of specialised deeds for interacting with the Abstraction, Continuous and Physical Engines. This permits the implementation of operational semantics for interaction between the components of our agent architecture as specified in [1].

Footnote 2: Gwendolen was also used to program the abstraction engine. Since the abstraction engine mediates between the different timings of the other engines and, in particular, was used to filter data received from the physical engine into that required for reasoning, we also modified the Gwendolen semantics to speed up the processing of perception (i.e., incoming data). In particular, perceptions had to be added directly into the agent's belief base rather than being treated as events with plans used to convert them into beliefs. We do not, however, advocate the use of BDI languages for programming abstraction engines, partly as a result of this issue, and so consider this a minor modification.

A rational agent selects its actions for a reason: its deliberation is driven by planning, logic inference and sometimes simple reflexes.

C. Verification of the Gwendolen Agent

Model checking of finite automata, as a means to verify intended system operation, is an established technique used on systems that may be expressed by logical models; it is enabled by the fact that logical properties of bounded models are decidable [44]. Considering a system S that is represented by the executable logical model M_S, a property specification given in terms of a logical formula φ may be used by a model checker to establish whether M_S ⊨ φ. This satisfaction requirement may be computationally tested for all possibilities within M_S via exhaustive testing.

A variation on model checking is the model checking of programs. Instead of creating a model of the system, the model checking of programs performs exhaustive testing over the actual code, and naturally hinges on the ability to determine all executions of a program. This is feasible for programs written in Java thanks to the existence of JavaPathfinder (JPF), which uses a modified virtual machine to manipulate program executions [45]. This tool has been used for the formal verification of rational agents implemented in languages, such as Gwendolen, which use the AIL, as detailed in [46].

Model checking is ideally suited to systems with a finite number of discrete states; unfortunately, hybrid systems embedded within the real world typically work in infinite and continuous spaces. There is a large body of work in model-checking such systems which relies on analyzing the system to identify specific regions that can be encapsulated as states, often by representing such systems as hybrid automata [47, 6, 7]. Unfortunately such models are not compositional, i.e., it is not possible to analyze such systems one component at a time in order to keep reasoning tractable, and so this rapidly leads to issues of practicality. Our approach is to assume that while model-checking of programs is a highly appropriate tool for the analysis of the reasoning engine, it is not necessarily a suitable approach for analyzing other parts of the system.

IV. Description of a Complex Example

The behavior of any given agent using the architecture presented within Figure 2 is governed by its rational decision making processes and its capabilities, which are encapsulated within the R, X and P engines. A concrete agent is realized through the development of these elements in such a way that their combination enables complex actions to be carried out. Whilst fundamentally it is these elements that determine the capabilities of an agent, the abstraction and reification processes that occur within the abstraction layer are necessary to form a responsive and coherent agent operating within the environment E. Here it is intended to demonstrate and explore the application of the agent system in a complex (asteroid) environment.

A. An Asteroid Exploration Agent

The mission considered involves the cooperative action of four autonomous spacecraft in an asteroid environment, tasked with cataloging the asteroid numbers and composition. Only a small subset of the asteroids present within the environment have known positions initially, and asteroids are only observable if they are not occluded by other asteroids. This entails operation in a partially known environment and the need to develop enhanced knowledge of this environment through cooperative action, while performing other high level requirements. Primarily the mission is one of scientific measurement and observation. These observations include those taking place at a single point in time, continuous monitoring of an object, and those that require the cooperative actions of at least two spacecraft to focus resources on a point of interest. Similarly, some of the observations involve action in close proximity to an asteroid, and therefore some rational assessment of risk versus priority is required. Furthermore we specify that: there are over-arching mission goals that can change dynamically based on data received from the spacecraft group on mission; there are unexpected hazards, such as energetic particle events and uncharted asteroids, which may require a spacecraft to drop its current goal and take evasive action; and the spacecraft have different capabilities because they have different equipment, and these capabilities can change dynamically as equipment breaks.
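The Gwendolen-style reasoning cycle outlined above (events creating intentions, guards checked against the belief base, deed stacks executed round-robin, and suspension until a condition holds) can be illustrated with a minimal sketch. This is not the Gwendolen/AIL implementation, which is in Java; all class and belief names below are invented for illustration.

```python
# Minimal BDI-style deliberation loop, loosely modelled on the Gwendolen
# behaviour described above: events, guarded plan selection, intention deed
# stacks, round-robin execution, and suspension. All names are illustrative.

class Intention:
    def __init__(self, event):
        self.event = event
        self.deeds = []           # stack of pending deeds
        self.suspended_on = None  # belief this intention is waiting for

class Agent:
    def __init__(self, plans):
        self.beliefs = set()
        self.plans = plans        # event -> (guard over beliefs, deed list)
        self.intentions = []

    def post_event(self, event):
        self.intentions.append(Intention(event))

    def step(self):
        """One reasoning cycle: visit each intention in turn."""
        for i in self.intentions:
            if i.suspended_on is not None:
                if i.suspended_on in self.beliefs:
                    i.suspended_on = None   # condition met: unsuspend
                else:
                    continue                # no deed runs while suspended
            if not i.deeds and i.event in self.plans:
                guard, deeds = self.plans[i.event]
                if guard(self.beliefs):     # context check over beliefs
                    i.deeds = list(deeds)
            if i.deeds:
                i.deeds.pop(0)(self, i)     # execute the first deed
        self.intentions = [i for i in self.intentions
                           if i.deeds or i.suspended_on]

# Deeds: add a belief, or suspend until a belief appears.
def believe(b):
    return lambda agent, intention: agent.beliefs.add(b)

def wait_for(b):
    def deed(agent, intention):
        intention.suspended_on = b
    return deed

plans = {
    "new_asteroid": (lambda beliefs: "busy" not in beliefs,
                     [wait_for("orbit_planned"), believe("orbiting")]),
}

agent = Agent(plans)
agent.post_event("new_asteroid")
agent.step()                          # plan selected, intention suspends
assert "orbiting" not in agent.beliefs
agent.beliefs.add("orbit_planned")    # e.g. a result asserted by X via A
agent.step()                          # unsuspends and completes
assert "orbiting" in agent.beliefs
```

The suspension mechanism is what later allows an intention to wait for the continuous engine to return a computed trajectory without blocking the other intentions.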

B. Agent Capabilities

Agent capabilities are formulated using tags and sentences in sEnglish where ultimately all meaning compiles into the


Code fragment 4.1 The compiled high level MATLAB script that represents the name tag following trajectory.

  function following_trajectory(MyTraj)
    StateDes = updating_target_state(MyTraj);
    [Acc, Om] = obtaining_inertial_states;
    [St, Dcm] = forming_current_state(Acc, Om);
    Sigm = generating_guidance_potential(St, StateDes);
    U = generating_control_signal(St, Sigm);
    T = determining_thrust_vector(U, Dcm);
    implementing_thrust_vector(T);
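The call chain of Code Fragment 4.1 can be mimicked with simple numeric stand-ins to show how the composed abstractions drive the state error toward zero. The scalar dynamics, gain and trajectory below are invented purely for illustration; they are not the control laws of the actual system.

```python
# Numeric stand-ins for the call chain of Code Fragment 4.1. The scalar state,
# gain and "dynamics" are invented for illustration only; they are not the
# spacecraft control laws of the actual system.

def updating_target_state(traj, t):
    return traj(t)                      # desired state at time t

def generating_guidance_potential(st, st_des):
    return st_des - st                  # error stands in for the potential

def generating_control_signal(st, sigma, gain=0.5):
    return gain * sigma                 # proportional drive toward the target

def implementing_thrust_vector(st, u):
    return st + u                       # crude state update standing in for P

def following_trajectory(traj, st=0.0, steps=20):
    """Repeatedly run the compiled pipeline; the error shrinks each step."""
    for t in range(steps):
        st_des = updating_target_state(traj, t)
        sigma = generating_guidance_potential(st, st_des)
        u = generating_control_signal(st, sigma)
        st = implementing_thrust_vector(st, u)
    return st

final = following_trajectory(lambda t: 10.0)   # constant set-point "trajectory"
assert abs(final - 10.0) < 0.1
```

The point is structural: each sEnglish sentence compiles to one call, and the composed calls realise the behaviour named by the tag.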

MATLAB development language. The result is a series of interconnected m-files that are built from the user prescribed text, linked to the core ontology that defines the conceptual data structures used by the system.

First we illustrate the development of a medium level agent skill that corresponds to the meaning tag following_trajectory, resulting in procedural code to perform the function of following a specific trajectory. Considering the abstractions that may complete this control action, intuitively one must obtain a target (kinematic) state, determine the current system state, implement a control law to produce a control signal that will act to drive the error between the desired and actual states to zero, and then implement this control signal in hardware.

Figure 3 illustrates the Eclipse based editor window for the definition of the sentence "Follow trajectory Mytraj." with activity name following trajectory. The following is a listing of a definition file (SEP-file = sEnglish Procedure file) that defines the physical activity of following a trajectory, which may have been created by a mental activity of the agent:

  procedure name :: following trajectory
  senglish sentences :: Follow trajectory Mytraj.
  process, repeat mode :: physproc, runOnce
  input classes and local names :: trajectory[Mytraj]
  output classes and local names ::
  senglish code ::
    Update the orbital target state Statedes for trajectory Mytraj using temporal progression.
    Obtain linear acceleration Acc and angular velocity Om from the gyroscope.
    Form the current state St and the direction cosine matrix Dcm using vision information and angular velocity Om.
    Generate guidance potential function Sigma based upon current state St and desired state Statedes.
    Generate control signal U based upon current state St and guidance potential function Sigma.
    Determine the required thrust vector T using the control signal U and direction cosine matrix Dcm.
    Implement the required thrust vector T on hardware.
  matlab routines url :: www.sysbrain.org/asterocraft
  conceptual graph :: [I]-(action)-[following](subj)-[trajectory:Mytraj]
  testing formula :: Mytraj=rand_object(trajectory); following_trajectory(Mytraj);
  section number :: 4
  input defaults :: create_object(trajectory);

Figure 3 Editing agent code in an sEnglish plug-in under Eclipse.

Through this elemental abstraction of each sentence into MATLAB code, the high level abstraction of following_trajectory is completely defined in a structured and meaningful way; its meaning is completely and transparently described by the subsequent abstractions. Upon compilation, the generated following_trajectory.m file takes the form shown in Code Fragment 4.1.

In the above development all meaning tags are names of actions formed from verbs. At any time the current agent activity may be queried to produce a meaningful response, which is useful for a human operator querying the current actions of an agent. A naive human operator may ask the system what it is doing and why, with the response given in a natural language format, supplemented with the reasoned logic behind the execution of these actions. Each sEnglish sentence is matched with a routine call in MATLAB as well as with a similar looking predicate for logic operations that abstracts away the (code based) meaning behind the predicates. Basic predicate abstractions from sentences are applied to both the world environment and the spacecraft model; these are passed by the Physical Engine to


the Abstraction Engine. The information provided to the Abstraction Engine includes the following (see Fig. 4).

Propulsion System Data
Data relevant to the operation of the propulsion system hardware is passed to the Abstraction Engine; this information includes (but is not limited to): pressure information (main pressure vessel and fuel lines); valve activation status; and current/voltage data for internal systems.

Natural language programming in sEnglish has proved to be a useful development tool to keep our system operations clear to programmers and easier to validate.

Control Performance Data
This relates to output control requests that are sent by the control system and the actual output responses observed by onboard systems. Differences in these may enable the agent to infer the effectiveness of a particular control system and also to augment investigations into faulty control hardware.

Kinematic Data
High level kinematic state information is derived from the onboard sensors that monitor the world environment and abstracted into quantities relating to orbital acquisition and path following status.

Payload Status
The status (health) of the agent payload, which in this case primarily consists of sensors, is available to the Abstraction Engine.

Asteroid Field Updates
The internal navigation/mapping system of the agent may flag to the Abstraction Engine if a previously unknown asteroid is detected within the field. Such observations entail the need to check that the currently executing plans are still valid and that they do not pose a threat with regards to colliding with the newly detected body. Additionally, the details of newly detected bodies should be broadcast to all agent members so that each agent may plan with this enhanced knowledge of the local environment.

Inbound Solar Energetic Particle Event
Coronal mass ejection shock accelerated particles represent a hazard to space operations as they damage hardware components [48]. Inbound solar events are preceded by an enhancement of high energy particles that may theoretically be detected. This information enables localized detection so that an agent may take action.

The Abstraction Engine, A, is implemented using BDI style plans. It further abstracts the data received from P and sends it to the Rational Engine (R). It also reifies instructions from R, which are passed on to X and P. A's abstractions include the following.

Thruster Malfunction
A determines, from the Propulsion System Data, whether a thruster is working or not and its current health.

Payload Health
A determines the sensor payload operational status, which ultimately determines the utility of the agent.

Power System Health
A determines its power generation capabilities, which restrict the capabilities of the agent.

A's reifications closely match the P and X abilities previously described. In most cases A adds a few low level details that are unimportant to the deliberations of the Rational Engine, and manages housekeeping related to communication between the engines.

Adaptivity, and hence the agent's ability to complete a mission successfully, depends on its ability to monitor relevant events that are vital for achieving its goals. Although it is the set of low level abstractions that dictate specific hardware actions, the agent is only concerned with

Some of the Self and Environment Perception Processes:

  Perception Process                                            | sEnglish Sentence Boolean Belief
  Monitor Expected Course of Self-Movement                      | (~) Moving on Course As Expected
  Monitor Functioning of Communications                         | (~) Communications Work As Expected
  Monitor Quality of Measurements                               | (~) Measurements Work As Expected
  Check That Measurements Are Complete                          | (~) Completed All Measurement Plans
  Check for Remaining Interesting Features of Current Asteroid  | (~) No More Interesting Features on Current Asteroid
  Check That All Team Members Completed Their Measurement Plans | (~) All Other Agents Completed All Measurement Plans
  Receive Communications from Other Agents                      | (~) Working on Interpreting Communication
  Monitor Solar Mass Ejection Event                             | (~) Solar Mass Ejection Is Expected
  Monitor Collision Danger                                      | (~) Collision with Asteroid Ast Is Expected in ~300 s; (~) Collision with Agent A Is Expected in ~100 s; (~) All Collisions Safely Preventable
  Monitor Activity by Other Team Members                        | (~) All Agents Move As Planned; (~) Agent A Moves As Planned

Figure 4 Illustration of perception processes which contribute to the belief base during each reasoning cycle of the BDI agent.
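Each perception process of the kind listed in Figure 4 reduces continuous telemetry to a Boolean belief once per reasoning cycle. A minimal sketch of such a monitor follows; the thresholds, signal names and belief names are invented for illustration.

```python
# Sketch of perception processes contributing Boolean beliefs on each
# reasoning cycle, in the spirit of Figure 4. Thresholds, signal names and
# belief names are invented for illustration.

def perceive(telemetry, beliefs):
    """One cycle's perception pass: refresh the Boolean beliefs in place."""
    checks = {
        "moving_on_course_as_expected":
            telemetry["course_error_m"] < 5.0,
        "communications_work_as_expected":
            telemetry["comms_dropouts"] == 0,
        "collision_with_asteroid_expected":
            telemetry["time_to_collision_s"] < 300.0,
    }
    for belief, holds in checks.items():
        if holds:
            beliefs.add(belief)       # belief asserted into the belief base
        else:
            beliefs.discard(belief)   # belief retracted when it no longer holds
    return beliefs

beliefs = set()
perceive({"course_error_m": 1.2, "comms_dropouts": 0,
          "time_to_collision_s": 240.0}, beliefs)
assert "collision_with_asteroid_expected" in beliefs
assert "moving_on_course_as_expected" in beliefs
```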


activation of high level action abstractions. It is these abstractions that represent the set of P and X abilities to which the agent has access and which it may invoke as part of a plan or as a means to augment its reasoning ability.

P: Physical Abilities
Each agent has the ability to control its physical hardware by using abstract commands expressed in English sentences (Fig. 5). In the instance of an autonomous spacecraft, this relates to the ability to output required forces and torques for desired motion. This entails interaction with multiple systems at various levels of complexity: each agent has access to discrete-time closed-loop control solutions and may interact directly with the propulsion system to enable abilities such as valve switching and power routing as contingencies for failure. Sensor systems are assumed to be available for the internal control routines. While the physical agent body is controlled by appropriate force output, damaged hardware systems may result in spurious force and torque outputs.

Physical and Communications Capabilities:

  Capability                                                              | Possible Boolean State Outcomes
  Moving into a Kinetic State Within a Coordinated Plan with Other Agents | (~) Succeeded Moving into Joint Kinetic State; Moved into Joint Kinetic State with Error E
  Tracking a Trajectory                                                   | (~) Succeeds Following Trajectory; Follows Trajectory with State Error E
  Moving into a Movement State Within a Self-Created Plan                 | (~) Succeeded Moving into Joint Kinetic State; (~) Moved into Kinetic State with Error E
  Broadcasting Insufficiency of Your Six-DOF Control                      | (~) Succeeded Broadcasting Control Insufficiency
  Asking for Vision Assistance                                            | (~) Managed to Ask for Vision Assistance
  Requesting Communication Assistance                                     | (~) Managed to Ask for Communications Assistance
  Requesting Joint Spectrographic Observations                            | (~) Managed to Request Joint Spectrographic Observations

Figure 5 Some of the physical abilities: each agent is endowed with the ability to control its physical and communications hardware.

X: Mental Abilities
These abilities relate to complex tasks that are required to support reasoning with the R engine and control routines within the P engine. R is interested in the outcomes of implemented action and is concerned with the fact that specific control routines require specific data sets to be generated prior to their implementation. It is also the agent that dictates specific motion within the asteroid system, by generation of non-intersecting trajectories to target destinations that the internal control systems are then directed to follow. Some of the mental abilities are displayed in Fig. 6.

Some Mental Abilities to Solve Problems:

  Activity Tag                                                                  | Boolean State
  Reconfiguring Thrusters and Reaction Wheel Allocation for Movement Controller | (~) Reconfigured Thrusters and Reaction Wheels
  Planning Joint Spectrographic Observations                                    | (~) Planned Joint Spectrographic Observations of Asteroid Ast
  Discovering Earliest Observation Opportunities                                | (~) Discovered Earliest Opportune Asteroid Ast
  Estimating Value of Earliest Observation Opportunities                        | (~) Estimated Value of Earliest Observation Opportunity
  Planning Approach to Target                                                   | (~) Planned Approach to Target Asteroid
  Turning Shield Toward Sun                                                     | (~) Turned Shield Toward Sun
  Generating Trajectory                                                         | (~) Generated Required Trajectory
  Generating Evasive Trajectory                                                 | (~) Generated Required Trajectory
  Selecting Observation Point                                                   | (~) Managed to Select Observation Point
  Predicting Object Motion                                                      | (~) Managed to Predict Object Motion with Uncertainty 0.9
  Evaluate Sensor Data                                                          | (~) Completed Evaluation of Sensor Data

Figure 6 Some of the mental capabilities: these relate to complex tasks that are required to support reasoning with the R engine and control routines within the P engine.

This set of P and X abilities is that which the Rational Engine, R, may utilise to perform reasoned mental or physical actions. Reasoning itself is based upon abstract information that is received by the agent. Abstraction is a two-stage process within the agent architecture: the Physical Engine (P) sends a subset of the sensor data to the Abstraction Engine (A), which then filters, and in some cases further discretizes, the data based on the current situation.
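The two-stage abstraction just described (P forwards a subset of the raw sensor data; A filters and discretizes it) might look like the following sketch. The choice of relevant channels, the thresholds and the hysteresis band are invented details, not the system's actual values.

```python
# Two-stage abstraction sketch: the Physical Engine forwards a subset of raw
# sensor data; the Abstraction Engine discretizes it, here with hysteresis so
# that a belief does not flicker near a threshold. All values are invented.

RELEVANT = {"fuel_pressure_bar", "range_to_asteroid_m"}

def physical_engine_subset(raw):
    """Stage 1 (P): forward only the channels relevant to reasoning."""
    return {k: v for k, v in raw.items() if k in RELEVANT}

def abstraction_engine(sample, prev_beliefs):
    """Stage 2 (A): discretize continuous data into Boolean beliefs."""
    beliefs = set(prev_beliefs)
    r = sample["range_to_asteroid_m"]
    # Hysteresis: assert "close" below 900 m, retract only above 1100 m.
    if r < 900.0:
        beliefs.add("close_to_asteroid")
    elif r > 1100.0:
        beliefs.discard("close_to_asteroid")
    return beliefs

raw = {"fuel_pressure_bar": 180.0, "range_to_asteroid_m": 850.0,
       "camera_frame": b"..."}           # the raw frame is filtered out by P
sample = physical_engine_subset(raw)
assert "camera_frame" not in sample

b1 = abstraction_engine(sample, set())
assert "close_to_asteroid" in b1
# Drifting back to 1000 m keeps the belief (inside the hysteresis band).
b2 = abstraction_engine({"range_to_asteroid_m": 1000.0}, b1)
assert "close_to_asteroid" in b2
```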
C. Agent Rationality

The executable plans of the agent are programmed in a declarative manner, linking conditioned event triggers to courses of action through statements of the following generic format:

  Event : {Context} <- {Plan};

where Event is the acquisition of a new belief, message or goal, {Context} is a predicate logic formula referring to the beliefs and goals of the agent and {Plan} is a stack of deeds to be executed which can consist of action predicates, subgoal declarations, belief base changes, (un)lock and (un)suspend commands (as discussed in section B). Abstract sensory


perception, as provided by A, is the agent's primary link to the environment and thus dictates its action by furnishing the agent with beliefs about the environment. Instances of change within the belief base can give rise to triggering events.

The agent is capable of dealing with multiple disparate and concurrent intentions that result in multiple concurrent actions.

As an example, a relevant triggering event in this context is the detection of a previously unexplored asteroid. This will create an intention with an empty deed stack. If the intention is selected for attention (by default the agent cycles through all its intentions in turn) then a plan will be selected for handling the event. One such plan might be:

  +new_asteroid(Ast) : {¬ Busy} <-
      +!planning_orbit(Ast, P),
      +!orbiting_asteroid(Ast, P);

Here, upon receiving the percept relating to the detection of a new asteroid, the agent checks against its internal belief base entries that it is not currently busy; this check is itself a Prolog evaluation over aspects of the agent's belief base. If the check passes then the plan is a valid method of dealing with the event, and consequently the intention to execute the plan body may be instantiated. If the plan is selected then the plan body is placed upon the deed stack for the intention and the agent proceeds with execution. Once the intention is selected the first deed on the stack is executed, in this case the acquisition of a subgoal, +!planning_orbit(Ast, P). This generates a new event which may, in turn, trigger the selection of a plan such as:

  +!planning_orbit(Ast, P) : {true} <-
      .query(planning_orbit(Ast, P));

The .query command is an extension of the Gwendolen language which communicates with the abstraction engine, requesting calculations, and suspends the execution of the intention until the result is returned. In this case the abstraction engine reifies the command planning_orbit(Ast, P) to a call to the agent capability planning_orbital_trajectory, described in section B. X calculates a trajectory to asteroid Ast and returns the result, which the abstraction engine binds to P and then asserts as the shared belief planning_orbit(Ast, P) (i.e., that P is a suitable trajectory for reaching asteroid Ast). When the new shared belief is detected by the Rational Engine, the intention unsuspends and continues execution with +!orbiting_asteroid(Ast, P), the plan for which commands the physical engine to execute the trajectory now bound to P.

The above describes the processing of a single new percept to result in a specific action. As mentioned in Section B, the agent is capable of dealing with multiple disparate and concurrent intentions that result in multiple concurrent actions: for instance, whilst following a specified trajectory, the agent may deal with thruster malfunctions, communication processes and high priority avoidance maneuvers.

The execution of the rational and abstraction engines is based primarily upon the use of plans for action and rules for reasoning about facts. The rational engine has implemented plans for the following:

Asteroid Selection
R requests distance information from X for a selection of target asteroids, and negotiates with other agents to avoid multiple agents surveying the same asteroid. R then instructs P to orbit the selected asteroid.

Thruster Repair
R selects a suitable course of action to compensate for a damaged thruster, based on the propulsion system data, and instructs P to reconfigure its hardware appropriately.

Summoning Assistance
If R infers that sensor equipment is required that it does not possess, it determines the closest agent with the correct equipment and contacts it for assistance;

and Prolog style reasoning rules for:

Closest Unexamined Asteroid
Having requested distance information from X, and possibly received information from other spacecraft about their intentions,
Operation Complexity of an Asteroid Explorer Agent with 10 Reasoning Cycles Per Second (RCPS):

  Component Category                        | Number of Abstractions | Maximum Depth of Abstraction Hierarchy | Average Ratio of Completion Time Relative to Reas. Cycle
  Perception Abstractions                   | 557                    | 3                                      | 1
  Logic Rules of Behavior                   | 339                    | 3                                      | 1
  Programmed Plans of Physical Capabilities | 183                    | 2(6)                                   | 14377
  Programmed Plans of Mental Capabilities   | 128                    |                                        | 292

Figure 7 Summary of the complexity of the asteroid explorer agent.


but simply assumed that messages were reliably transmitted between agents. A number of protocols were implemented for agent negotiation, including a simple priority based protocol (i.e., each agent had a different priority) and the auction protocols described in [49]. In the auction case one agent was nominated as auctioneer and equipped with the necessary plans for running the auction protocol. In the priority case each agent yielded to a higher priority agent if informed of that agent's intention to investigate an asteroid.
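Both negotiation schemes admit compact sketches. Below, the priority rule and a one-round auction each allocate an asteroid among agents; the bid values and the tie-breaking rule are invented details, not the protocol of [49].

```python
# Sketches of the two negotiation schemes described above: a fixed-priority
# rule, and a one-round auction run by a nominated auctioneer. Bid values and
# tie-breaking are invented details.

def priority_claim(claims):
    """claims: {agent_name: priority}; a lower number means higher priority.
    Every agent yields to the highest-priority claimant."""
    return min(claims, key=claims.get)

def auction(bids):
    """bids: {agent_name: bid}; the auctioneer awards the asteroid to the
    highest bidder (ties broken by agent name, for determinism)."""
    return max(sorted(bids), key=lambda a: bids[a])

assert priority_claim({"ag1": 2, "ag2": 1, "ag3": 3}) == "ag2"
assert auction({"ag1": 4.0, "ag2": 7.5, "ag3": 7.5}) == "ag2"
```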
D. Lessons Learned


Figure 8 The framework architecture for applying the presented agent system completely within software (a) and on hardware (b). (a) Agent software system architecture immersed within a software environment to simulate the asteroid spacecraft mission, showing the software components and their interaction(s). (b) Agent software system integrated into representative spacecraft hardware, showing the replacement of simulation tools for hardware and real dynamic processes.
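Figure 8's point, that the reasoning and abstraction layers are untouched when the simulated plant of (a) is replaced by the hardware of (b), corresponds to the agent coding against a common interface behind which backends can be swapped. A hedged sketch, with all class and method names invented:

```python
# Sketch of the substitution shown in Figure 8: the agent talks to a single
# interface, and only the backend changes between (a) simulation and (b)
# hardware. Class and method names are invented for illustration.

class Backend:
    def apply_thrust(self, newtons):
        raise NotImplementedError

class SimulatedBackend(Backend):          # Figure 8(a): simulated dynamics
    def __init__(self):
        self.velocity = 0.0
    def apply_thrust(self, newtons):
        self.velocity += newtons / 100.0  # toy 100 kg point mass, 1 s step
        return self.velocity

class LoggingHardwareBackend(Backend):    # Figure 8(b) stand-in: real drivers
    def __init__(self):
        self.commands = []
    def apply_thrust(self, newtons):
        self.commands.append(newtons)     # would write to thruster valves
        return None

def agent_step(backend):
    """Agent-side code: identical regardless of which backend is plugged in."""
    return backend.apply_thrust(5.0)

sim = SimulatedBackend()
assert agent_step(sim) == 0.05
hw = LoggingHardwareBackend()
agent_step(hw)
assert hw.commands == [5.0]
```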

R can determine the closest asteroid that no other spacecraft intends to examine.

Closest Spacecraft
Similarly, R can use information about other spacecraft's intentions, capabilities and positions to select the closest spacecraft with a required capability.

Thruster Failures
In case of a fuel line leak or thruster malfunction, R derives the necessary actuator and control system reconfiguration.

1) Messaging and Negotiation
The Gwendolen programming language contains primitives for sending and receiving messages. Our simulation did not contain an accurate model for communication between satellites

The need to pre-program a number of complex skills, such as discovering earliest observation opportunities, tracking a trajectory, or agreeing a planned approach to a target asteroid, required detailed dynamical analysis at the agent programming level. The agent responds quickly to the vast majority of environmental situations, and the need for onboard planning using X is rare during a mission. However, the pre-programmed skills of the agent are not always sufficient to solve a problem, for instance in the case of a rare combination of hardware failures. For these situations the agent uses planners in the continuous engine, X, which can be slower. If insufficient care is taken to choose suitable hierarchical abstractions for environmental situations then very long execution times can result for the formal verification.

Verification is not absolute: it is not possible to say that the mission is guaranteed to succeed. The verification effort involves listing the hardware and environmental situations, and proving that the agent makes appropriate choices in the face of these. The system designer's task is to make it unlikely that an agent's actions will fail. This involves the agent designer's knowledge of what can physically be anticipated in the environment, including onboard hardware failure. Note that this is not as negative as it sounds: it is provably impossible to build an engineering system that works in an unknown environment and never fails.

Overall the agent architecture makes a good compromise between the use of pre-programmed agent solutions and onboard planning on the fly. Describing the complexity of the environment for verification is beyond the scope of this paper and is the subject of our future investigations to improve the efficiency of problem solving by our agents.
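The verification stance described above, enumerating the hardware and environmental situations and proving that the agent chooses an appropriate action in each, can be sketched as exhaustive testing over a finite situation space. The situations, the policy and the safety property below are invented for illustration; they are not the verified flight code.

```python
# Exhaustive check over a finite situation space, in the spirit of the
# verification effort described above. The situations, the decision policy
# and the safety property are all invented for illustration.

from itertools import product

def policy(thruster_ok, storm_inbound, asteroid_close):
    """Toy decision policy standing in for the agent's plan selection."""
    if storm_inbound:
        return "seek_shelter"
    if not thruster_ok:
        return "reconfigure_hardware"
    if asteroid_close:
        return "collision_avoidance"
    return "continue_mission"

def safe(situation, action):
    """Toy safety property: hazards must always be addressed."""
    thruster_ok, storm_inbound, asteroid_close = situation
    if storm_inbound and action != "seek_shelter":
        return False
    if asteroid_close and not storm_inbound and action not in (
            "collision_avoidance", "reconfigure_hardware"):
        return False
    return True

# Model-checking-style exhaustive pass: every situation must be handled safely.
violations = [s for s in product([True, False], repeat=3)
              if not safe(s, policy(*s))]
assert violations == []
```

A real verification run works over the agent program itself (via AJPF/JPF) rather than a hand-written policy table, but the exhaustive quantification over a finite situation space is the same idea.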
V. Implementation in Simulated Environment

Our system has been implemented in a simulated asteroid scenario.


A. Software Implementation

The simulated asteroid environment has been implemented using jBullet, a Java port of the Bullet Physics Library [50], with VR output being performed using OpenGL. Collision impacts between all system bodies may occur. Small impacts to spacecraft may result in only a disturbance to their


trajectory; however, fuel line ruptures, total loss of control thruster(s), loss of sensor payload and even complete agent loss may result from major impact events. Hardware failure may occur as a result of a collision; it may also occur as a random hardware glitch to which the agent must also be tolerant.

5DOF spacecraft models were used to test operations, complemented by virtual reality simulations in jBullet.

The complete system is a multi-language software system: the abstraction processes and agent reasoning are performed in Java, and the spacecraft hardware is modeled in Simulink, which in turn uses skill abstractions developed with sEnglish. The hardware actions are propagated within the Java based asteroid environment; a schematic of this software system and its interactions is given in Figure 8(a).

The simulation was initialized with four spacecraft agents distributed throughout a partially known asteroid field, with the high level goal of cataloging all the observable asteroids. The spacecraft were able to negotiate (and renegotiate) responsibilities for orbiting asteroids, correct for thruster malfunctions, and operate in phased orbits of several spacecraft around a single asteroid. This is shown in Figure 9. During all operations, notification of an approaching solar storm acted to override all current activities, forcing the spacecraft to seek shelter behind the closest suitable asteroid.

environment and a representative hardware system. We sought to explore the system's flexibility in an environment requiring a high degree of autonomy, and its feasibility for real-time implementation. Encoding agent abilities through natural language abstractions resulted in an intuitive interface into system operation; an innate advantage of the method used. In turn this abstraction link to hardware and software processes provided a clear link between reasoned decisions and output actions. This resulted in a syntactically clear agent system, whose configuration and subsequent operation is entirely transparent. The abstraction method also facilitated expansion and modification of agent capabilities without disturbing existing abilities, as was evidenced with the transferal of the system to hardware; the
B. Hardware Implementation

For application of the agent system on hardware, true hardware processes and real dynamics replace the Simulink models and jBullet simulation used within the complete software application. This integrated (hardware) agent system is shown in Figure 8(b), and the ground based hardware facility used is shown within Figure 10. On transferal of the agent system architecture to the hardware system, there is a clear difference between interfacing true hardware devices and those modeled within Simulink. Bridging this gap relates to enriching the sEnglish database to include interfaces for the specific hardware devices being implemented; neither the high level sEnglish performative abstractions, nor the agent reasoning code, were modified for the hardware application. The test facility demonstrated the capability of the agent architecture to perform aspects of the asteroid scenario, namely motion to nominated points with disturbance compensation, and phased orbiting of a nominated point through negotiation with companion agents. Videos of some of these actions are available to view at http://www.sheffield.ac.uk/
acse/staff/smv.

VI. Observations

Figure 9 Screen capture of two spacecraft agents entering a phased orbit about a nominated asteroid.

This article has presented the application of a formally verifiable agent architecture, linked to a skill abstraction library formulated in a natural language, within a complex simulation

Figure 10 Image of the ground test facility and model frame spacecraft robots.


The anthropomorphic programming paradigm we used enabled us to create a computationally very complex intelligent system to parallel human piloting capabilities.
only change related to the addition of low level sEnglish abstractions to interface specific hardware systems; existing code, inclusive of both reasoning and higher level action performatives, remained unaltered. In the software scenario, the agent was provided with a minimal set of physical/mental skills and reasoning processes, yet was able to survey a section of a partially known asteroid field in the presence of internal hardware failures and while reacting to dynamic hazards.
References

[1] L. A. Dennis, M. Fisher, N. Lincoln, A. Lisitsa, and S. M. Veres, "Declarative abstractions for agent based hybrid control systems," in Proc. 8th Int. Workshop Declarative Agent Languages Technologies, 2010, vol. 6619, pp. 96-111.
[2] N. Lincoln, S. M. Veres, L. A. Dennis, M. Fisher, and A. Lisitsa, "An agent based framework for adaptive control and decision making of autonomous vehicles," in Proc. IFAC Workshop Adaptation Learning Control Signal Processing, 2010, pp. 310-317.
[3] S. M. Veres and N. K. Lincoln, "Testbed for satellite formation flying control system verification," in Proc. AIAA InfoTech Conf., Rohnert Park, CA, 2007.
[4] JAXA. Hayabusa spacecraft [Online]. Available: http://www.isas.ac.jp/e/enterp/missions/hayabusa
[5] NASA. Dawn spacecraft [Online]. Available: http://dawn.jpl.nasa.gov
[6] R. Alur, C. Courcoubetis, N. Halbwachs, T. A. Henzinger, P.-H. Ho, X. Nicollin, A. Olivero, J. Sifakis, and S. Yovine, "The algorithmic analysis of hybrid systems," Theor. Comput. Sci., vol. 138, no. 1, pp. 3-34, 1995.
[7] T. A. Henzinger, "The theory of hybrid automata," in Proc. Int. Symp. Logic Computer Science, IEEE Computer Society Press, 1996, pp. 278-292.
[8] M. J. Wooldridge, An Introduction to MultiAgent Systems. New York: Wiley, 2002.
[9] M. Kloetzer and C. Belta, "A fully automated framework for control of linear systems from temporal logic specifications," IEEE Trans. Autom. Contr., vol. 53, no. 1, pp. 287-297, 2008.
[10] I. A. Ferguson, "TouringMachines: Autonomous agents with attitudes," Computer, vol. 25, no. 5, pp. 51-55, 1992.
[11] R. H. Bordini, M. Dastani, J. Dix, and A. E. Fallah-Seghrouchni, Eds., Multi-Agent Programming: Languages, Tools and Applications. New York: Springer-Verlag, 2009.
[12] S. M. Veres, Natural Language Programming of Agents and Robotic Devices: Publishing for Humans and Machines in sEnglish. London, U.K.: SysBrain, 2008.
[13] R. H. Bordini, J. F. Hübner, and M. J. Wooldridge, Programming Multi-Agent Systems in AgentSpeak Using Jason. New York: Wiley, 2007.
[14] J. P. Müller, The Design of Intelligent Agents. New York: Springer-Verlag, 1996.
[15] E. Gat, "On three-layer architectures," in Artificial Intelligence and Mobile Robots, D. Kortenkamp, R. P. Bonnasso, and R. Murphy, Eds. Menlo Park, CA: AAAI Press, 1997, pp. 195-210.
[16] J. P. Müller, M. Pischel, and M. Thiel, "Modeling reactive behaviour in vertically layered agent architectures," in Proc. Workshop Agent Theories, Architectures, Languages, vol. 890, pp. 261-276, 1995.
[17] CLARAty. NASA Jet Propulsion Laboratory [Online]. Available: http://claraty.jpl.nasa.gov
[18] K. Fregene, D. C. Kennedy, and D. W. L. Wang, "Toward a systems- and control-oriented agent framework," IEEE Trans. Syst. Man, Cybern. B, vol. 35, no. 5, pp. 999-1012, 2005.
[19] J. Lygeros, K. H. Johansson, S. N. Simic, J. Zhang, and S. S. Sastry, "Dynamical properties of hybrid automata," IEEE Trans. Autom. Contr., vol. 48, no. 1, pp. 2-17, 2003.
[20] A. Mohammed and U. Furbach, "Multi-agent systems: Modeling and verification using hybrid automata," in Proc. 7th Int. Conf. Programming Multi-Agent Systems, pp. 49-66, 2009.
[21] R. Alur, T. A. Henzinger, G. Lafferriere, and G. J. Pappas, "Discrete abstractions of hybrid systems," Proc. IEEE, pp. 971-984, 2000.
[22] L. Molnar and S. M. Veres, "Hybrid automata discretising agents for formal modelling of robots," in Proc. 18th IFAC World Congr., 2011, vol. 18, pp. 49-54.
[23] MATLAB Programming Environment, Simulink, Stateflow and the Real-Time Workshop, MathWorks, Natick, MA.
[24] A. E. Fallah-Seghrouchni, I. Degirmenciyan-Cartault, and F. Marc, "Framework for multi-agent planning based on hybrid automata," in Proc. 3rd Central and Eastern European Conf. Multi-Agent Systems, 2003, pp. 226-235.
[25] R. J. Firby, "Adaptive execution in complex dynamic worlds," Yale Univ., New Haven, CT, Tech. Rep., 1990.
[26] I. A. Ferguson, "Toward an architecture for adaptive, rational, mobile agents," in Proc. 3rd European Workshop Modelling Autonomous Agents in a Multi-Agent World, 1991, pp. 249-261.
[27] M. Wooldridge and N. R. Jennings, "Intelligent agents: Theory and practice," Knowl. Eng. Rev., vol. 10, no. 2, pp. 115-152, 1995.
[28] A. S. Rao and M. Georgeff, "BDI agents: From theory to practice," in Proc. 1st Int. Conf. Multi-Agent Systems, San Francisco, CA, 1995, pp. 312-319.
[29] H. V. D. Parunak, A. D. Baker, and S. J. Clark, "The AARIA agent architecture: From manufacturing requirements to agent-based system design," J. Integr. Comput.-Aided Eng., vol. 8, no. 1, pp. 45-58, 2001.
[30] M. E. Bratman, Intention, Plans and Practical Reason. Stanford, CA: CSLI Publications, 1987.
[31] F. Bellifemine, G. Caire, and D. Greenwood, Developing Multi-Agent Systems with JADE. New York: Wiley, 2007.
[32] A. Pokahr, L. Braubach, and W. Lamersdorf, "Jadex: Implementing a BDI-infrastructure for JADE agents," EXP: In Search of Innovation (Special Issue on JADE), vol. 3, no. 3, pp. 76-85, Sept. 2003.
[33] N. Howden, R. Rönnquist, A. Hodgson, and A. Lucas, "JACK intelligent agents: Summary of an agent infrastructure," in Proc. 5th Int. Conf. Autonomous Agents, 2001.
[34] M. P. Georgeff and A. L. Lansky, "Reactive reasoning and planning," in Proc. 6th Nat. Conf. Artificial Intelligence, 1987, pp. 677-682.
[35] F. F. Ingrand, M. P.

[22] L. Molnar and S. M. Veres, Hybrid automata dicretising agents for formal modelling of robots, in Proc. 18th IFAC World Congr., 2011, vol. 18, pp. 4954. [23] MATLAB Programming Environment, Simulink, Stateflow and the Realtime Workshop, MathWorks, Natick, MA. [24] A. E. Fallah-Seghrouchni, I. Degirmenciyan-Cartault, and F. Marc, Framework for multi-agent planning based on hybrid automata, in Proc. 3rd Central and Eastern European Conf. MultiAgent Systems, 2003, pp. 226235. [25] R. J. Firby, Adaptive execution in complex dynamic worlds, Yale Univ., New Haven, CT, Tech. Rep., 1990. [26] I. A. Ferguson, Toward an architecture for adaptive, rational, mobile agents, in Proc. 3rd European Workshop Modelling Autonomous Agents in a MultiAgent World, 1991, pp. 249261. [27] M. Wooldridge and N. R. Jennings, Intelligent agents: Theory and practice, Knowl. Eng. Rev., vol. 10, no. 2, pp. 115152, 1995. [28] A. S. Rao and M. Georgeff, BDI agents: From theory to practice, in Proc. 1st International Conf. Multi-Agent Systems, pp. 312319, San Francisco, CA, 1995. [29] H. V. D. Parunak, A. D. Baker, and S. J. Clark, The AARIA agent architecture: From manufacturing requirements to agent-based system design, J. Integr. Comput.-Aided Eng., vol. 8, no. 1, pp. 4558, 2001. [30] M. E. Bratman, Intention, Plans and Practical Reason. Stanford, CA: CSLI Publications, 1987. [31] F. Bellifemine, G. Caire, and D. Greenwood, Developing Multi-Agent Systems with JADE. New York: Wiley, 2007. [32] A. Pokahr, L. Braubach, and W. Lamersdorf, Jadex: Implementing a BDI-infrastructure for JADE agents, EXPSear. Innov. (Special Issue on JADE), vol. 3, no. 3, pp. 7685, Sept. 2003. [33] N. Howden, R. Rnnquist, A. Hodgson, and A. Lucas, JACK intelligent agents Summary of an agent infrastructure, in Proc. 5th Int. Conf. Autonomous Agents, 2001. [34] M. P. Georgeff and A. L. Lansky, Reactive reasoning and planning, in Proc. 6th Nat. Conf. Artificial Intelligence, pp. 677682, 1987. [35] F. F. Ingrand, M. P. 
Georgeff, and A. S. Rao, An architecture for real-time reasoning and system control, IEEE Expert, vol. 7, no. 6, pp. 3444, 1992. [36] NASA jet propulsion laboratory. Mission data system [Online]. Available: http://mds. jpl.nasa.gov/public/ [37] N. Muscettola, P. P. Nayak, B. Pell, and B. Williams, Remote agent: To boldly go where no AI system has gone before, Artif. Intell., vol. 103, nos. 12, pp. 548, 1998. [38] D. E. Bernard, G. A. Dorais, C. Fry, B. Kanefsky, J. Kurien, W. Millar, N. Muscettola, U. Nayak, B. Pell, K. Rajan, N. Rouquette, B. Smith, and B. C. Williams, Design of the remote agent experiment for spacecraft autonomy, in Proc. IEEE Aerospace Conf., 1998, pp. 259281. [39] L. A. Dennis, M. Fisher, N. Lincoln, A. Lisitsa, and S. M. Veres, Reducing code complexity in hybrid control systems, in Proc. 10th Int. Symp. Artificial Intelligence, Robotics Automation Space (i-Sairas), 2010, pp. 16. [40] J.-H. Kim, I.-W. Park, and S. A. Zaheer, Intelligence technology for robots that think, IEEE Comput. Intell. Mag., vol. 8, no. 3, pp. 7084, 2013. [41] SysBrain Ltd. sEnglish publisher [Online]. Available: http://www.systemenglish.org [42] S. M. Veres, Theoretical foundations of natural language programming and publishing for intelligent agents and robots, in Proc. 11th Conf. Towards Autonomous Robotic Systems, 2010. [43] L. A. Dennis and B. Farwer, Gwendolen: A BDI language for verifiable agents, in Logic and the Simulation of Interaction and Reasoning, B. Lwe, Ed. Aberdeen, U.K.: AISB, 2008. [44] G. J. Holzmann, The Spin Model Checker: Primer and Reference Manual. Reading, MA: Addison-Wesley, 2003. [45] W. Visser, K. Havelund, G. P. Brat, S. Park, and F. Lerda, Model checking programs, Autom. Softw. Eng., vol. 10, no. 2, pp. 203232, 2003. [46] L. A. Dennis, M. Fisher, M. Webster, and R. H. Bordini, Model checking agent programming languages, Autom. Softw. Eng., vol. 19, no. 1, pp. 563, 2012. [47] R. Alur, C. Courcoubetis, T. Henzinger, and P. 
Ho, Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems, in Hybrid Systems (Lecture Notes in Computer Science Series, vol. 736), R. Grossman, A. Nerode, A. Ravn, and H. Rischel, Eds. New York: Springer-Verlag, 1993, pp. 209229. [48] J. Feynman and S. B. Gabriel, On space weather consequences and prediction, J. Geophys. Res., vol. 105, no. A5, pp. 1054310564, 2000. [49] D. P. Bertsekas, Auction algorithms for network f low problems: A tutorial introduction, Comput. Optim. Appl., vol. 1, no. 1, pp. 766, 1992. [50] JBullet. JBulletJava port of bullet physics library [Online]. Available: http:// jbulletadvel.cz 

IEEE Computational Intelligence Magazine | November 2013