ABSTRACT
This paper describes a modular component of the MABEL model agents' cognitive
inference mechanism. The probabilistic and probabilogic representation of the agents'
environment and state space is coupled with a Bayesian belief and decision network
functionality, which in fact holds Markovian semiparametric properties. Different
approaches to modeling multi-agent systems are described and analyzed; problem-,
model-, and knowledge-driven approaches to agent inference and learning are
emphasized. The notion of modularity in agent-based modeling components is
conceptualized. The modular architecture of the decision inference mechanism allows for
a flexible architectural design that can be either endogenous or exogenous to the agent-
based simulation model. A suite of decision support tools for modular network inference
in the MABEL model is showcased; the emphasis is on the component object model
versus interoperability development interfaces. These tools provide the complex
functionality of developing models within models, thus reducing the need for
extensive research support and for a high-end level of knowledge acquisition from the
end-user's perspective. Finally, the paper assesses the validity of visual modeling
interfaces for data- and knowledge-acquisition mechanisms that can provide an essential
link between an in vitro research model and the complex realities that are observed and
processed by decision-makers, policy-makers, communities, and stakeholders.
INTRODUCTION
Agent-based systems and models have passed through many stages in their historical
evolution: from experimentation leading to discovery; to architectural modeling and the
development of models and mathematical representations; to game-theoretic or mental
simulation modes; to more realistic and robust applications; and to theory construction and the
study of complex, robust, and resilient structures and patterns. Recent advances in the multi-
disciplinary research and modeling of complex systems (e.g., spatial complexity, complex
network dynamics) laid out the roadmap for advancing the comprehensibility, usability, and
applicability of agent-based models and mechanisms for a wide variety of applications, decision
makers, and policy makers (Ma and Nakamori 2005; McIntosh et al. 2005).
This paper proposes and demonstrates the usability and value of modular components in
an agent-based framework, specifically, the Multi-Agent-Based Economic Landscape (MABEL)
model. One important modular component of MABEL, namely the cognitive inference
mechanism, is described. The functionality of the inference mechanism as it relates to the
mechanics of the modeling architecture is defined and encapsulated. This paper also
demonstrates the value of utilizing this core mechanism for building a user-oriented interactive
interface that enhances user experience and, at the same time, integrates user inputs back into the
modeling process. The emphasis is on the ability of the interface to interact with the simulation
framework to provide useful analysis results and graphical operations that can be used directly in
a policy-making exercise.
Agent inference is the ability of agents to make complex decisions, adapt to their
environment, and learn from their decisions or from decisions made by other agents. Although
agent inference is not difficult to comprehend conceptually, capturing its symbolic and semantic
formulation is quite a challenge for the researcher or analyst. Inferential modeling in agent-based
systems is at the heart of developing artificial intelligence and complex computational
methodologies (Calmet et al. 1996; Edmonds et al. 2000; Gupta and Sinha 2000). Agent
inference must accomplish a series of tasks, such as:
1. Identifying parameters for eliciting decisions (Muller et al. 2005; Zhu and
Morgan 2004); and
2. Bounding the agents and their computational environment within the level of
rationality and rules that natural, historical, and scientific observation and
analysis dictate.
Bayesian methods of inference have been applied extensively in the study and modeling
of multi-agent systems (Alexandridis et al. 2004; Alexandridis et al. 2006; Alexandridis and
Pijanowski 2006; De Cooman and Zaffalon 2004; Korb and Nicholson 2004; Neapolitan 2004).
While the volume of literature on the Bayesian methods of inference is quite extensive,
the utilization of these nonparametric methods in systems of spatial complexity and
environmental modeling applications is quite limited. Some of the main reasons for this
inconsistency are that (a) Bayesian artificial intelligence is a relatively new field of research, and
the transition from theory to application and problem-oriented research has not been realized
fully; (b) analysis of spatially complex structures requires multi-disciplinary applications and
research skills, a fact that slows down the development and progression of such modeling research;
(c) spatially complex agent interactions emerge at a multitude of scales, both spatial and
temporal; thus, estimating modeling parameters involves arrays or matrices of interactions
instead of single parameter estimation (the latter point renders estimation properties a
mathematical and statistical challenge); (d) coping with uncertainty and incomplete information,
while commonly encountered in the real world, requires a departure from traditional statistical
theory and comprehension of the fact that systems might display unpredictability and instability
of patterns under such conditions.
In the MABEL model (Alexandridis et al. 2004; Alexandridis et al. 2006; Alexandridis
and Pijanowski 2006; Lei et al. 2005), such an architecture is employed to simulate agent
intentions for decisions on land use change. Four main components are essential for such
decisions: (1) the state-space (the agent's environment), (2) a transition modeling mechanism
(mapping the state-space to actions), (3) the agent's expectations for the utility of its actions
(expected utility elicitation), and (4) the expected rewards that the agent anticipates for its
intended actions. In addition, agents face evidence entering their perceptual environment (in the
form of prior decisions, or decisions made by neighboring agents), and a learning mechanism
combines their prior beliefs with new evidence as it enters their systemic sensing mechanism.
Combining the agent's BBN intentional learning mechanism with the agent's BDN action
learning mechanism enhances agent and simulation behavior over space and time. A schematic
representation of such a coupling is shown in Figure 1.
FIGURE 1 Schematic representation of the coupling of the agent's BBN intentional
learning mechanism and the BDN action learning mechanism (evidence, likelihoods,
learning from evidence, and expected rewards)
Figure 1 illustrates the process of decision making in the MABEL model in terms of the
underlying Bayesian structure. Each agent i is faced with a prior belief distribution, denoted as
P(bi). This belief distribution can be conceptualized as a three-dimensional array with
dimensions n × m × k, where n is the number of potential actions, m is the number of nodes
(variables) associated with each action, and k is the number of probabilistic states of each node.
In a backward propagation, the previous statement implies that for each potential action,
multiple nodes (variables) exist, and for each node, multiple states (probabilistic) exist. The
multi-dimensional array of the prior belief distribution is actually a complex BBN representing
this prior distribution structure.
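As a concrete (and purely illustrative) sketch, this prior belief structure can be held as a normalized three-dimensional array; the dimensions and random values below are assumptions for demonstration, not parameters elicited from the MABEL model:

```python
import numpy as np

# Hypothetical dimensions: n potential actions, m nodes (variables) per
# action, k probabilistic states per node.
n_actions, n_nodes, n_states = 3, 4, 2

rng = np.random.default_rng(0)

# Prior belief distribution P(bi) as an n x m x k array; each node's
# state probabilities must sum to one, so normalize along the state axis.
prior = rng.random((n_actions, n_nodes, n_states))
prior /= prior.sum(axis=-1, keepdims=True)
```

In the full model this array is realized as a BBN rather than a dense array, but the per-node normalization constraint is the same.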
Each element of the multi-dimensional array of the prior belief distribution has an
expected utility value (EU, the expectation that an agent has if a given combination of state,
node, and action were to be undertaken). Combining the prior belief structure with the agent's
utility expectations provides us with a probabilistic distribution measure of the agent's expected
next state. This probability distribution of expected utility is what an agent faces without any
new information entering the inference system. In complex reality, agents, as well as decision
makers, do obtain new information dynamically, learn from decisions made in previous time
steps, and face potential rewards for their actions. This process is often called reinforcement
learning. The probabilistic structure for the evidential mechanism is a two-dimensional array
with dimensions n × m, where n is the number of network nodes and m is the number of states
associated with each node.
Similarly, for each of the nodes of the network and their associated states, there is a
probability (likelihood) that evidence or experiences would indicate that they would change in
the near future. Mapping the likelihood probability distribution to the expected rewards (gains or
losses) that these changes entail for the agents provides us with a conditional probability
distribution for intended actions, given the evidence likelihoods.
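A minimal numerical sketch of this weighting scheme, with every quantity randomly generated for illustration (the actual MABEL beliefs, utilities, and rewards are elicited networks, not random arrays), might look as follows:

```python
import numpy as np

n_actions, n_nodes, n_states = 3, 4, 2
rng = np.random.default_rng(1)

# Prior beliefs P(bi): one state distribution per (action, node) pair.
prior = rng.random((n_actions, n_nodes, n_states))
prior /= prior.sum(axis=-1, keepdims=True)

# Expected utility (EU) for every (action, node, state) combination.
eu = rng.random((n_actions, n_nodes, n_states))

# Probability-weighted expected utility per action: sum over nodes and
# states, then normalize to get the agent's expected next-state distribution
# in the absence of new information.
action_eu = (prior * eu).sum(axis=(1, 2))
p_next = action_eu / action_eu.sum()

# Evidence likelihoods over (node, state) pairs, and the expected rewards
# (gains or losses) that the corresponding changes entail.
likelihood = rng.random((n_nodes, n_states))
likelihood /= likelihood.sum(axis=-1, keepdims=True)
rewards = rng.random((n_nodes, n_states))

# Conditional intention distribution: each action's prior-utility score is
# reweighted by the reward-weighted evidence it is consistent with.
evidence_term = (prior * likelihood * rewards).sum(axis=(1, 2))
intention = action_eu * evidence_term
intention /= intention.sum()
```

The broadcasting of the (node, state) likelihood and reward arrays across the action axis mirrors the paper's description: likelihoods and rewards attach to nodes and states, while intentions are read off per action.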
The Bayesian learning algorithms are designed to estimate the optimal weights with
which the intentions for the next time step of each agent are calculated. In other words, they are
designed to estimate the strength and degree to which new evidence entering the inferential
system of the simulation alters the intentions of agents for action. In the MABEL model, this
process is performed by using the expectation maximization (EM) algorithm (Beal et al. 2003;
Bohning and Schon 2005; Dellaert 2002; Friedman 1998; Hutter and Zaffalon 2005). The EM
algorithm utilizes an iterative and dynamic maximum likelihood estimation technique in order to
approximate the posterior learning distribution for agents' actions.
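To illustrate the iterative principle on a deliberately tiny problem (a toy stand-in, not the Netica EM routine used for full network learning), one can estimate the single weight with which an evidence-driven intention distribution is mixed into a prior one, from observed action counts:

```python
import numpy as np

# Two fixed intention distributions over three actions: one implied by the
# agent's prior beliefs, one implied by new evidence (numbers invented).
p_prior = np.array([0.6, 0.3, 0.1])
p_evid = np.array([0.1, 0.3, 0.6])

# Observed action counts from simulated agents.
counts = np.array([20, 30, 50])

# EM for the mixing weight w in the model  w * p_evid + (1 - w) * p_prior,
# i.e., the "strength" with which new evidence alters prior intentions.
w = 0.5
for _ in range(200):
    mix = w * p_evid + (1 - w) * p_prior
    # E-step: posterior responsibility of the evidence component
    # for each observed action.
    resp = w * p_evid / mix
    # M-step: the new weight is the expected fraction of observations
    # attributed to the evidence component.
    w = (counts * resp).sum() / counts.sum()
```

The counts here were chosen so that the maximum-likelihood weight is exactly 0.8, which the iteration recovers; each cycle provably increases the likelihood, which is the property the MABEL learning step relies on.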
In the context of the simulation design and modeling procedure, we can identify three
variations of agent inference:
Model-driven agent inference mechanisms depend mainly on the focus of the modeling
processes used and emphasize the modeling of policy and real-world applications. The value of
these types of inference mechanisms is not particularly high for theoretical or research-driven
applications, which aim to explore the emergence or generative dynamics of the modeling and
simulation mechanisms. An example of such an inference mechanism is simulating the
emergence of social and economic phenomena within a finite population or with the use of
simple rules.
While these types of agent inference are not mutually exclusive and can be present
simultaneously in some combination in our modeling enterprises, the above classification can
help researchers understand the strengths and weaknesses of the models, tools, and methods
employed. It can also help in assessing the validity and appropriateness of databases and
knowledge bases, scenarios and policies, applications to be simulated, or research questions that
can be answered within a given study.
The degree of robustness thus achieved in agent-based modeling enterprises
often depends significantly on the individual researcher's skills. The researcher needs to
anticipate outcomes or problems in which stakeholders and policy makers have a potential
interest. Specific databases need to be constructed and data collected in advance. Models and
simulation experiments need to be calibrated and assessed, and full models can be very
computationally intensive (a fact that affects the ability to replicate or assess the validity of the
simulation results by the end-user community).
Allows for the collection of expert judgments, scenarios, and hypotheses that
emerge from the ground up.
Modular components and agent-based mechanisms can be stand-alone. End-users do not need to
run full-model simulations (with their limitations in comprehensibility and transparency). The
expertise required for full comprehension of the agent-based model can be modularized as well,
by focusing exclusively on inference mechanisms, learning, and problem solving as separate
modular components of the modeling enterprise. Finally, the modularization of agent-based
model components provides easy access to calibration, assessment, and scenario development
techniques, thus reducing the perceived complexity of the mechanism from the end-user's
perspective.
This section provides a descriptive demonstration of three examples of inference tools for
the MABEL model: decision net inference tool (problem-driven), agent spillover effects tool
(model-driven), and MABEL scenario generator tool (knowledge-driven). All three components
operate as stand-alone user interfaces and provide added modularity and comprehensibility for
the full MABEL model simulations.
Modular inference capabilities exist within the architectural framework of the MABEL
model (Lei et al. 2005). Specifically, the MABEL model embeds the Netica C++ API (Norsys
2005) within its architecture. The decision-theoretic inference and Markov decision-making
mechanisms in MABEL utilize this framework to perform diagnostic and learning inference for
utility acquisition and optimization tasks (Alexandridis et al. 2004; Alexandridis and Pijanowski
2006). Nevertheless, visualizing these mechanisms and their dynamics is not possible without the
use of new tools.
These added visualization capabilities are achieved via the development of new inference
tools within the MABEL framework. These tools are developed in the Visual Basic.NET
developing framework (Microsoft 2003) and utilize the Netica VB API (Norsys 2005) and the
Microsoft Office.Interop components of the .NET framework. Each of these tools compiles and
utilizes existing or modified MABEL decision networks and decision mechanisms and can
re-compile revised networks back to the MABEL model for simulation runs.
The Interop.Netica component within the .NET framework provides high usability of
objects, classes, functions, and class members for use within the development environment
(Figure 3). Each of these methods, when called within the Netica application framework, allows
for a comprehensive visualization of the inference mechanism and the BBN or BDN.
FIGURE 4 User interface of the decision net inference tool for MABEL inference
(Figure 4A shows the main control panel, 4B shows different output windows using the
office.interop interface, and 4C shows the Netica decision network visualization called from
the main control panel.)
For this example, a real-world application of a decision network is provided: the potato
grower problem described by Hardaker et al. (1998). In short, the decision problem copes with a
farmer's uncertainty about whether to harvest and sell or harvest and store his potato
yield. The farmer faces two sources of uncertainty: market conditions (i.e., normal versus short
supply) and the acquisition of market prediction estimates (i.e., estimates of the supply problems
in the market). In the latter, a cost is involved for acquisition of a market prediction (e.g., a
private company providing estimates), and each of the harvest decisions results in a monetary
utility acquisition. While the decision network layout looks simple (Figure 4C), elicitation of the
prior and posterior probabilities of the network is somewhat complex (Hardaker et al. 1998).
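The core expected-value logic of the problem can nevertheless be sketched compactly; the payoffs, prior probability, and forecast cost below are invented for illustration and are not Hardaker et al.'s figures:

```python
# Illustrative payoffs (monetary units) -- invented, not Hardaker et al.'s.
payoff = {
    ("sell", "normal"): 100, ("sell", "short"): 100,
    ("store", "normal"): 80, ("store", "short"): 180,
}
p_short = 0.3       # assumed prior probability of short market supply
forecast_cost = 5   # assumed cost of buying a market prediction

def expected_value(decision, p_short):
    """Expected payoff of a harvest decision under supply uncertainty."""
    return ((1 - p_short) * payoff[(decision, "normal")]
            + p_short * payoff[(decision, "short")])

# Without a forecast: choose the decision with the higher expected value.
best_no_info = max(("sell", "store"), key=lambda d: expected_value(d, p_short))

# With a hypothetically perfect forecast the farmer acts optimally in each
# market state; the gain over acting without information (the expected value
# of perfect information, EVPI) bounds what any prediction service is worth.
ev_perfect = ((1 - p_short) * max(payoff[("sell", "normal")], payoff[("store", "normal")])
              + p_short * max(payoff[("sell", "short")], payoff[("store", "short")]))
evpi = ev_perfect - expected_value(best_no_info, p_short)
```

With these invented numbers, storing dominates (expected value 110 versus 100 for selling), and the EVPI of 14 exceeds the assumed forecast cost of 5, so acquiring the prediction is worth considering. Eliciting the corresponding prior and posterior probabilities inside the decision network is where the real complexity lies.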
By using the main control panel (Figure 4A), a user can enter inference evidence into the
decision network or test inferential assumptions about the conditions of the
market. Then estimates of the posterior probability distribution and the potential monetary value
of the decision can be assessed via a streamer process that outputs the network results into the
office.interop spreadsheet component (Figure 4B). Multiple evidence-assessment pairs can be
performed, and multiple results can be obtained and analyzed simultaneously.
The decision net inference tool provides a useful interface that gives end-users the ability
to test and analyze the implications of their assumption-based and inferential decision-making
capabilities. The tool can also be used for training/learning inference at the decision-making
level. Finally, network evidence and user experiences can be collected and saved as case
studies for agent training and learning, and these empirical user assessments can be utilized
further in realistic MABEL simulations.
By providing knowledge or evidential information about these three input rules and
performing a diagnostic inference of the network, a percentage estimate of the degree of the
spillover effects that can be transitioned across neighboring simulations can be obtained (upper
window of part A in Figure 5). In many simulation cases, we might know in advance the number
of agents exceeding capacity in the current simulation area (i.e., from new agent emergence from
a parcelization algorithm in MABEL), in which case a quantitative assessment of the number of
agents to be transitioned to neighboring townships can be made (lower window of part A in
Figure 5).
FIGURE 5 User interface of the agent spillover effects tool for MABEL inference
(Figure 5A shows instantiations of the main control panel, and 5B shows the Netica
decision network visualization called from the main control panel.)
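A stylized version of this diagnostic step, with an invented conditional table standing in for the elicited spillover network (the values are not MABEL's), can be written as:

```python
# Hypothetical conditional table: spillover fraction given the states of
# three binary input rules (values illustrative, not elicited from MABEL).
spillover_cpt = {
    (True, True, True): 0.60,
    (True, True, False): 0.45,
    (True, False, True): 0.35,
    (True, False, False): 0.20,
    (False, True, True): 0.30,
    (False, True, False): 0.15,
    (False, False, True): 0.10,
    (False, False, False): 0.05,
}

# Observed evidence about the three input rules yields a percentage
# estimate of spillover by a simple diagnostic lookup.
evidence = (True, False, True)
fraction = spillover_cpt[evidence]

# When the number of over-capacity agents is known in advance (e.g., from
# the parcelization algorithm), the quantitative transition to neighboring
# townships follows directly.
excess_agents = 120
agents_moved = round(fraction * excess_agents)
```

In the actual tool the fraction comes from belief updating over the Netica network rather than a direct table lookup, but the percentage-then-count pipeline is the same.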
The main control panel of the user interface allows the user to perform simple
evidence elicitation of the scenario decision network (Figure 6A) or a learning
simulation over several time-steps of agent learning via the EM algorithm, given evidence
(Figure 6B). The elicited results, including the simulation learning process visualization, can be
called and viewed through the Netica.interop application window (Figure 6C). The lower part of
the main control panel provides a real-time office.interop spreadsheet streamer that monitors
beliefs, likelihoods, and simulation step network results.
Results obtained from EM training and learning via the MABEL scenario generator tool
can provide useful insight into the agent-learning mechanism in MABEL. Agent-learning results
and simulation patterns such as the one provided in Figure 7 can be easily assessed and evaluated
by using the tool's user interface. Additional scenarios and model elicitation can be performed
with the use of users' knowledge and evidential information. Finally, the tool can be used for
training and comprehension of the dynamics present in real-world decision making.
FIGURE 6 User interface of the farm utility scenario generator for MABEL inference
(Figure 6A shows the main control panel for simple evidence elicitation, 6B shows the
main control panel for network learning simulations, and 6C shows the Netica decision
network visualization called from the main control panel.)
The examples provided in the above sections, combined with the theoretical arguments
and discussion preceding them, allow us to assess a set of wider implications for a visualization,
comprehension and policy-making (VCPM) framework. This set can be expressed as a sequence
of propositions or lessons learned, as follows:
1. Agent-based models and simulations are the most appropriate research and
methodological approaches to building communities and community
understanding from the ground up. They have the power and ability to
enhance decision-maker and stakeholder understanding of complex spatial
mechanisms, drivers, and processes of change.
2. Reconciling the needs for transparency and veridicality in modeling does not
have to result in abstract and often inaccurate modeling endeavors but can
instead result in informational content-rich enterprises.
3. Developing simulation and inference tools that can also serve as training,
learning, and educational exercises builds future community capabilities for
decision and policy making that can have significant impact.
Beyond the specific tools and modeling enterprises examined in this paper, a whole array
of quantitative and qualitative assessments for network inference mechanisms can be employed
to enhance our multi-layered representation of reality (Figure 8).
Additional qualitative methods for probabilistic and stochastic inference, such as role-playing
games, participatory rural assessment techniques, interviews and surveys, and qualitative
scenario assessment, can be used for eliciting BBNs
and BDNs in the MABEL framework. Combined with traditional quantitative assessment
methods and knowledge bases, they present a powerful mechanism of inference in multi-agent
simulation systems.
BIBLIOGRAPHY
Ahmed, S.E., and N. Reid. 2001. Empirical Bayes and Likelihood Inference. Lecture notes in
statistics (Springer-Verlag). New York: Springer.
Alexandridis, K., B.C. Pijanowski, and Z. Lei. 2004. The use of robust and efficient
methodologies in agent-based modeling: Case studies using repeated measures and
behavioral components in the MABEL simulation model. In C.M. Macal, D. Sallach, and
M.J. North (editors), Proceedings of the Agent 2004 Conference on Social Dynamics:
Interaction, Reflexivity, and Emergence, ANL-DIS/05-6, co-sponsored by The University
of Chicago and Argonne National Laboratory, Oct. 7-9.
Alexandridis, K., B.C. Pijanowski, and Z. Lei. 2006. Assessing multi-agent parcelization
performance in the MABEL simulation model using Monte-Carlo replication experiments.
Environment & Planning B: Planning and Design, Special Issue 33 on Agent-based
Modeling (in press).
Alexandridis, K.T., and B.C. Pijanowski. 2006. A Multi Agent-based Behavioral Economic
Landscape (MABEL) model of land use change (in revision).
Baroni, P., M. Giacomin, and G. Guida. 2005. Self-stabilizing defeat status computation:
Dealing with conflict management in multi-agent systems. Artificial Intelligence
165(2):187-259.
Beal, M.J. 2003. Variational Algorithms for Approximate Bayesian Inference. Ph.D.
Dissertation. London, UK: The Gatsby Computational Neuroscience Unit, University of
London.
Beal, M.J., Z. Ghahramani, J.M. Bernardo, M.J. Bayarri, A.P. Dawid, J.O. Berger,
D. Heckerman, A.F.M. Smith, and M. West. 2003. The variational Bayesian EM
algorithm for incomplete data: With application to scoring graphical model structures. In
Bayesian Statistics 7: Proceedings of the Seventh Valencia International Meeting,
pp. 453-464. Oxford, UK: Oxford University Press.
Bohning, D., and D. Schon. 2005. Nonparametric maximum likelihood estimation of population
size based on the counting distribution. Journal of the Royal Statistical Society: Series C
(Applied Statistics) 54(4):721-737.
Brown, D.G., S.E. Page, R. Riolo, and W. Rand. 2002. Modeling the Effects of Greenbelts at the
Urban-Rural Fringe, June 24-27, Lugano, Switzerland.
Burley, J. 2004. The restoration of research. Forest Ecology and Management 201(1):83-88.
Calmet, J., J.A. Campbell, and J. Pfalzgraf. 1996. Artificial Intelligence and Symbolic
Mathematical Computation: International Conference, AISMC-3, Steyr, Austria,
September 23-25, 1996, Proceedings, Lecture Notes in Computer Science, 1138. Berlin;
New York: Springer.
Carley, K.M. 2002. Simulating society: The tension between transparency and veridicality. In
C. Macal and D. Sallach (editors), Proceedings of the Agent 2002 Conference on Social
Agents: Ecology, Exchange, and Evolution, ANL/DIS-03-1, co-sponsored by The
University of Chicago and Argonne National Laboratory, Oct. 11-12.
Conte, R., and C. Castelfranchi. 1995. Cognitive and Social Action. London: UCL Press.
De Cooman, G., and M. Zaffalon. 2004. Updating beliefs with incomplete observations.
Artificial Intelligence 159(1-2):75-125.
Drennan, M. 2005. The human science of simulation: A robust hermeneutics for artificial
societies. Journal of Artificial Societies and Social Simulation 8(1); available at
http://jasss.soc.surrey.ac.uk/8/1/3.html.
Eagly, A.H., and S. Chaiken. 1993. The Psychology of Attitudes. Fort Worth, TX: Harcourt
Brace College Publishers (HBJ).
Edmonds, B., S. Moss, and P. Davidsson. 2000. The use of models: Making MABS actually
work. In Multi Agent Based Simulation, pp. 15-32. Springer-Verlag.
Feng, S., L. Da Xu, C. Tang, and S. Yang. 2003. An intelligent agent with layered architecture
for operating systems resource management. Expert Systems 20(4):171-178.
Foley, J.A., R. DeFries, G.P. Asner, C. Barford, G. Bonan, S.R. Carpenter, F.S. Chapin,
M.T. Coe, G.C. Daily, H.K. Gibbs, J.H. Helkowski, T. Holloway, E.A. Howard,
C.J. Kucharik, C. Monfreda, J.A. Patz, I.C. Prentice, N. Ramankutty, and P.K. Snyder.
2005. Global consequences of land use. Science 309(5734):570-574.
Gupta, M.M., and N.K. Sinha. 2000. Soft Computing and Intelligent Systems: Theory and
Applications. San Diego, CA: Academic.
Hall, P., and A. Yatchew. 2005. Unified approach to testing functional hypotheses in
semiparametric contexts. Journal of Econometrics 127(2):225-252.
Hammersley, M. 2005. Should social science be critical? Philosophy of the Social Sciences
35(2):175-195.
Hardaker, J.B., R.B.M. Huirne, and J.R. Anderson. 1998. Coping with Risk in Agriculture.
Wallingford, UK: CAB International.
Heemskerk, M., K. Wilson, and M. Pavao-Zuckerman. 2003. Conceptual models as tools for
communication across disciplines. Ecology and Society 7(3); available at http://www.
consecol.org/vol7/iss3/art8/.
Hutter, M., and M. Zaffalon. 2005. Distribution of mutual information from complete and
incomplete data. Computational Statistics & Data Analysis 48(3):633-657.
Korb, K.B., and A.E. Nicholson. 2004. Bayesian Artificial Intelligence. Series in Computer
Science and Data Analysis. London, UK: Chapman & Hall/CRC.
Lei, Z., B.C. Pijanowski, and K.T. Alexandridis. 2005. Distributed modeling architecture of a
Multi Agent-based Behavioral Economic Landscape (MABEL) model. Simulation:
Transactions of the Society for Modeling & Simulation International 81(7):503-515.
McIntosh, B.S., P. Jeffrey, M. Lemon, and N. Winder. 2005. On the design of computer-based
models for integrated environmental science. Environmental Management 35(6):741-752.
McIver, D.K., and M.A. Friedl. 2002. Using prior probabilities in decision-tree classification of
remotely sensed data. Remote Sensing of Environment 81(2-3):253-261.
Muller, P., G.L. Rosner, M.D. Iorio, and S. MacEachern. 2005. A nonparametric Bayesian
model for inference in related longitudinal studies. Journal of the Royal Statistical
Society: Series C (Applied Statistics) 54(3):611-626.
Neapolitan, R.E. 2004. Learning Bayesian Networks. Upper Saddle River, NJ: Pearson Prentice
Hall.
Norling, E., and L. Sonenberg. 2004. Creating interactive characters with BDI agents. Paper
presented at the Australian Workshop on Interactive Entertainment, Sydney, Australia.
Norsys. 2005. Netica: Advanced Bayesian Belief Network and Influence Diagram Software
(Version 2.17). Vancouver, Canada: Norsys Software Corp.; available at www.norsys.com.
Parker, D.C., S.M. Manson, M.A. Janssen, M.J. Hoffmann, and P. Deadman. 2003. Multi-agent
systems for the simulation of land-use and land-cover change: A review. Annals of the
Association of American Geographers 93(2):314-337.
Rao, A.S., and M.P. Georgeff. 1995. BDI agents: From theory to practice. In Proceedings
of the First International Conference on Multi-Agent Systems (ICMAS-95), San Francisco,
CA.
Sen, P.K. 1981. Sequential Nonparametrics: Invariance Principles and Statistical Inference.
New York: Wiley.
Stewart, M.B. 2005. A comparison of semiparametric estimators for the ordered response
model. Computational Statistics & Data Analysis 49(2):555-573.
Stocker, R., D. Cornforth, and T.R.J. Bossomaier. 2002. Network structures and agreement in
social network simulations. Journal of Artificial Societies and Social Simulation 5(4);
available at http://jasss.soc.surrey.ac.uk/5/4/3.html.
Tiku, M.L., W.Y. Tan, and N. Balakrishnan. 1986. Robust Inference. New York: M. Dekker.
Van Vuuren, D.P., and L.F. Bouwman. 2005. Exploring past and future changes in the
ecological footprint for world regions. Ecological Economics 52(1):43-62.
Verburg, P.H., P.P. Schot, M.J. Dijst, and A. Veldkamp. 2004. Land use change modelling:
Current practice and research priorities. GeoJournal 61(4):309-324.
Zhu, J., and G.D. Morgan. 2004. A nonparametric procedure for analyzing repeated measures of
spatially correlated data. Environmental and Ecological Statistics 11(4):431-443.