

MODULAR BAYESIAN INFERENCE AND LEARNING OF DECISION NETWORKS AS STAND-ALONE MECHANISMS OF THE MABEL MODEL: IMPLICATIONS FOR VISUALIZATION, COMPREHENSION, AND POLICY MAKING

K. ALEXANDRIDIS* and B. PIJANOWSKI


Department of Forestry and Natural Resources, Purdue University, West Lafayette, IN

ABSTRACT

This paper describes a modular component of the MABEL model: the agents' cognitive inference mechanism. The probabilistic and probabilogic representation of the agents' environment and state space is coupled with Bayesian belief and decision network functionality, which in fact exhibits Markovian semiparametric properties. Different approaches to modeling multi-agent systems are described and analyzed; problem-, model-, and knowledge-driven approaches to agent inference and learning are emphasized. The notion of modularity in agent-based modeling components is conceptualized. The modular architecture of the decision inference mechanism allows for a flexible architectural design that can be either endogenous or exogenous to the agent-based simulation model. A suite of decision support tools for modular network inference in the MABEL model is showcased; the emphasis is on the component object model versus interoperability development interfaces. These tools provide the complex functionality of developing models within models, thus reducing the need for extensive research support and for a high-end level of knowledge acquisition from the end-user's perspective. Finally, the paper assesses the validity of visual modeling interfaces for data- and knowledge-acquisition mechanisms that can provide an essential link between an in vitro research model and the complex realities that are observed and processed by decision-makers, policy-makers, communities, and stakeholders.

Keywords: Agent-based model, MABEL, Bayesian belief networks, Bayesian decision


networks, visualization, decision-theoretic inference, policy making

INTRODUCTION

Agent-based systems and models have passed through many stages in their historical
evolution: from experimentation leading to discovery; to architectural modeling and the
development of models and mathematical representations; to game-theoretic or mental
simulation modes; to more realistic and robust applications; and to theory construction and the
study of complex, robust, and resilient structures and patterns. Recent advances in the multi-
disciplinary research and modeling of complex systems (e.g., spatial complexity, complex
network dynamics) have laid out the roadmap for advancing the comprehensibility, usability, and
applicability of agent-based models and mechanisms for a wide variety of applications, decision
makers, and policy makers (Ma and Nakamori 2005; McIntosh et al. 2005).

* Corresponding author address: Konstantinos Alexandridis, Human-Environment Modeling and Analysis Laboratory (HEMA), Department of Forestry and Natural Resources, Purdue University, 204 Forestry Building, 195 Marsteller Street, West Lafayette, IN 47907; email: ktalexan@purdue.edu.

In the special case of spatially explicit agent-based models, advances in computational methods and resources and in complex, multi-disciplinary ecological and natural resource research methodologies, in conjunction with advances in more specialized statistical and probabilistic approaches to modeling, estimation, and assessment at the spatial level, allow
researchers to explore an additional dimension of research related to model utilization by
decision and policy makers (Foley et al. 2005; Heemskerk et al. 2003; van Vuuren and
Bouwman 2005; Verburg et al. 2004). Agent-based models are becoming more and more
commonly encountered not simply as valuable research tools for the discovery and analysis of
complex systems but also as end-user mechanisms for enhancing decision-making capabilities,
testing policy-specific dimensions of spatial problems, and exploring additional scenarios and
alternative futures never realized in the past. These new dimensions of research are part of a
wider goal and vision of integrating science into society in ways that enhance the well-being and
welfare of citizens and society as a whole (Brown et al. 2002; Drennan 2005; Ehrlich and
Kennedy 2005; Hammersley 2005).

Compared to traditional models (i.e., economic, statistical, and equation-based models) or to single-disciplinary methods, spatially explicit agent-based systems present greater challenges
to the integration scheme described above. A varying degree of embedded systemic complexity, coupled with the uncertainty inherent in the natural and socioeconomic systems of our real world, is often viewed and misunderstood as a form of black-box science. Methods and
techniques that may be widely accepted among scientists and researchers are not easily adopted
by stakeholders, decision makers, or policy experts because their lack of simplicity and
transparency renders their comprehension and diffusion a challenge.

This paper proposes and demonstrates the usability and value of modular components in
an agent-based framework, specifically, the Multi-Agent-Based Economic Landscape (MABEL)
model. One important modular component of MABEL, namely the cognitive inference mechanism, is described. The functionality of the inference mechanism as it relates to the
mechanics of the modeling architecture is defined and encapsulated. This paper also
demonstrates the value of utilizing this core mechanism for building a user-oriented interactive
interface that enhances user experience and, at the same time, integrates user inputs back into the
modeling process. The emphasis is on the ability of the interface to interact with the simulation
framework to provide useful analysis results and graphical operations that can be used directly in
a policy-making exercise.

BAYESIAN AGENT INFERENCE AND LEARNING

Agent inference is the ability of agents to make complex decisions, adapt to their
environment, and learn from their decisions or from decisions made by other agents. Although agent inference is not difficult to comprehend contextually, capturing its symbolic and semantic formulation is quite a challenge for the researcher or analyst. Inferential modeling in agent-based systems is at the heart of developing artificial intelligence and complex computational
methodologies (Calmet et al. 1996; Edmonds et al. 2000; Gupta and Sinha 2000). Agent
inference must accomplish a series of tasks, such as:

1. Provide the model with a mathematically sound representation of agent decisions and learning;

2. Establish a sensible network of relationships or relational links not only among agents and their classes but also between causes and actions;

3. Provide the simulation environment with an adequate level of stochasticity and dynamic character so it is able to capture the magnitude and patterns of change that it is designed to replicate;

4. Bound the agents and their computational environment within the level of
rationality and rules that natural, historical, and scientific observation and
analysis dictate; and

5. Allow complex system properties, such as emergence, adaptivity, resilience, and robustness, to be explored as integral parts of the dynamic simulation framework.

Bayesian inference is a special method of nonparametric, probabilistic, and stochastic evaluation of noisy data (Ahmed and Reid 2001; Pearl 1988). Probabilistic inference methods are extremely useful for cases or situations in which a high or deep level of uncertainty is embedded in the data or in which the decision maker is faced with incomplete observations to assess the future. The power of the Bayesian nonparametric methods lies in the ability of the researcher to assess, quantify, and analyze qualitative and evaluative statements related to the way that decisions are made and the relationship between state-space and actions taken. Furthermore, Bayesian inference methods allow for complex hierarchical network scale development (Conte and Castelfranchi 1995; Eagly and Chaiken 1993; Stocker et al. 2002) and the elicitation of likelihood measures of intended decisions. Bayesian nonparametric assessment involves three fundamental concepts (a minimal numerical sketch follows the list):

1. Identifying parameters for eliciting decisions (Muller et al. 2005; Zhu and
Morgan 2004),

2. Evaluating the prior degree or probability of occurrence and developing empirical probability density distributions (Bohning and Schon 2005; McIver and Friedl 2002; Sen 1981), and

3. Estimating and learning conditional posterior probabilities for actions performed given a constructed and estimated network structure (Hall and Yatchew 2005; Stewart 2005; Tiku et al. 1986).
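
The following minimal Python sketch illustrates the prior-to-posterior updating that underlies these three steps. The belief node, its states, and all probabilities are hypothetical and purely illustrative; they are not taken from the MABEL model.

    # Minimal illustration of Bayesian updating for a single belief node.
    # All numbers are hypothetical and only illustrate the prior-to-posterior step.

    def posterior(prior, likelihood):
        """Combine a prior distribution with evidence likelihoods (Bayes' rule)."""
        unnormalized = {state: prior[state] * likelihood[state] for state in prior}
        total = sum(unnormalized.values())
        return {state: p / total for state, p in unnormalized.items()}

    # Hypothetical belief node "land_value" with three states.
    prior = {"low": 0.5, "medium": 0.3, "high": 0.2}        # elicited prior (concept 2)
    likelihood = {"low": 0.1, "medium": 0.4, "high": 0.9}   # evidence likelihoods

    print(posterior(prior, likelihood))
    # {'low': 0.142..., 'medium': 0.342..., 'high': 0.514...} -- the posterior (concept 3)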

With regard to agent inference, Bayesian methods provide an alternative, probabilogic, nonparametric estimation of an agent's beliefs, desires, and intentions (BDI), especially when the modeling design and architecture favor the character of BDI agents. The BDI approach to agent-based modeling presents a highly robust and theory-grounded methodology for addressing agent intelligence and an elaborative, human-like agent character (Feng et al. 2003; Norling and Sonenberg 2004; Rao and Georgeff 1995). Within this context of agent inference, Bayesian methods of nonparametric decision estimation include BBNs (complex Bayesian belief networks of agent beliefs and intentions), BDNs (complex Bayesian decision networks of agent decisions for actions), or both. BBNs and BDNs combine the mathematical parameterization methodology of Bayesian inference with the intelligent and learning character of multi-agent systems (Alexandridis et al. 2004; Alexandridis et al. 2006; Alexandridis and Pijanowski 2006; De Cooman and Zaffalon 2004; Korb and Nicholson 2004; Neapolitan 2004).

While the volume of literature on the Bayesian methods of inference is quite extensive,
the utilization of these nonparametric methods in systems of spatial complexity and
environmental modeling applications is quite limited. Some of the main reasons for this
inconsistency are that (a) Bayesian artificial intelligence is a relatively new field of research, and
the transition from theory to application and problem-oriented research has not been realized
fully; (b) analysis of spatially complex structures requires multi-disciplinary applications and
research skills, a fact that slows up the development and progression of such modeling research;
(c) spatially complex agent interactions emerge at a magnitude of scales, both spatial and
temporal; thus, estimating modeling parameters involves arrays or matrices of interactions
instead of single parameter estimation (the latter point renders estimation properties a
mathematical and statistical challenge); (d) coping with uncertainty and incomplete information,
while commonly encountered in the real world, requires a departure from traditional statistical
theory and comprehension of the fact that systems might display unpredictability and instability
of patterns under such conditions.

In the MABEL model (Alexandridis et al. 2004; Alexandridis et al. 2006; Alexandridis
and Pijanowski 2006; Lei et al. 2005), such an architecture is employed to simulate agent
intentions for decisions on land use change. Four main components are essential for such
decisions: (1) the state-space (the agent's environment), (2) a transition modeling mechanism (mapping state-space to actions), (3) the agent's expectations of utility for its actions (expected utility elicitation), and (4) the expected rewards that agents anticipate for their intended actions. In addition, agents face evidence entering their perceptual environment (in the form of prior decisions, or decisions made by neighboring agents), and a learning mechanism combines their prior beliefs with the new evidence as it enters their sensing mechanism. Combining the agent's BBN intentional learning mechanism with the agent's BDN action
learning mechanism enhances agent and simulation behavior over space and time. A schematic
representation of such a coupling is shown in Figure 1.

[Figure 1 schematic: the prior belief distribution P(bi) with prior probabilities pdf(AB) feeds an expected utility term EUi (weight w1); evidence likelihoods P(bj) with learning from evidence pdf(AB|L) feed an expected rewards term Rj (weight w2); belief updating and likelihood weights combine the two into the behavior intention BI.]
FIGURE 1 Elaborative mechanism for agent learning in the MABEL model



Figure 1 illustrates the process of decision making in the MABEL model in terms of the
underlying Bayesian structure. Each agent i is faced with a prior belief distribution, denoted as P(bi). This belief distribution can be conceptualized as a three-dimensional array with dimensions n × m × k, where

{n,m,k} = {BeliefStates, BeliefNodes, Actions}. (1)

In a backward propagation, the previous statement implies that for each potential action,
multiple nodes (variables) exist, and for each node, multiple (probabilistic) states exist. The
multi-dimensional array of the prior belief distribution is actually a complex BBN representing
this prior distribution structure.
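
As a purely illustrative sketch of this structure, the prior belief distribution can be held as a three-dimensional NumPy array indexed by state, node, and action; the dimensions below are hypothetical and do not correspond to an actual MABEL network.

    import numpy as np

    # Hypothetical dimensions: 3 belief states, 4 belief nodes, 2 candidate actions.
    n_states, n_nodes, n_actions = 3, 4, 2

    # Prior belief distribution P(bi): for every (node, action) pair, a probability
    # distribution over the node's states (entries along the state axis sum to one).
    rng = np.random.default_rng(0)
    prior = rng.random((n_states, n_nodes, n_actions))
    prior /= prior.sum(axis=0, keepdims=True)

    # Backward reading: fix an action, then a node, and recover the state distribution.
    print(prior[:, 1, 0])   # P(states of node 1 | action 0); sums to 1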

Each element of the multi-dimensional array of the prior belief distribution has an
expected utility value (EU), that is, the expectation that an agent holds if a given combination of state, node, and action were to be undertaken. Combining the prior belief structure with the agent's utility expectations provides us with a probabilistic distribution measure of the agent's expected
next state. This probability distribution of expected utility is what an agent faces without any
new information entering the inference system. In complex reality, agents, as well as decision
makers, do dynamically obtain new information, learn from decisions made in previous time
steps, and face potential rewards for their actions. This process is often called reinforcement learning. The probabilistic structure for the evidential mechanism is a two-dimensional array with dimensions n × m, where

{n, m} = {LikelihoodStates, LikelihoodNodes}. (2)

Similarly, for each of the nodes of the network and their associated states, there is a
probability (likelihood) that evidence or experiences would indicate that they would change in
the near future. Mapping the likelihood probability distribution to the expected rewards (gains or
losses) that these changes entail for the agents provides us with a conditional probability
distribution for intended actions, given the evidence likelihoods.
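
One plausible way to read the coupling sketched in Figure 1 is as a weighted combination of a utility-weighted prior term and a reward-weighted evidence term. The Python sketch below follows that reading under stated assumptions (hypothetical array shapes and weights, and action-dependent rewards); it is an interpretation for illustration, not the MABEL formulation itself.

    import numpy as np

    rng = np.random.default_rng(1)
    n_states, n_nodes, n_actions = 3, 4, 2

    # Prior beliefs P(bi) and expected utilities EU over (state, node, action).
    prior = rng.random((n_states, n_nodes, n_actions))
    prior /= prior.sum(axis=0, keepdims=True)
    expected_utility = rng.random((n_states, n_nodes, n_actions))

    # Evidence likelihoods P(bj) over (state, node); rewards assumed action-dependent.
    likelihood = rng.random((n_states, n_nodes))
    likelihood /= likelihood.sum(axis=0, keepdims=True)
    reward = rng.random((n_states, n_nodes, n_actions))

    w1, w2 = 0.6, 0.4   # hypothetical weights, cf. w1 and w2 in Figure 1

    prior_term = (prior * expected_utility).sum(axis=(0, 1))           # utility-weighted beliefs
    evidence_term = (likelihood[..., None] * reward).sum(axis=(0, 1))  # reward-weighted evidence

    scores = w1 * prior_term + w2 * evidence_term
    intention = scores / scores.sum()   # illustrative distribution over intended actions
    print(intention)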

The Bayesian learning algorithms are designed to estimate the optimal weights with
which the intentions for the next time step of each agent are calculated. In other words, they are
designed to estimate the strength and degree to which new evidence entering the inferential
system of the simulation alters the intentions of agents for action. In the MABEL model, this
process is performed by using the expectation maximization (EM) algorithm (Beal et al. 2003;
Bohning and Schon 2005; Dellaert 2002; Friedman 1998; Hutter and Zaffalon 2005). The EM
algorithm utilizes an iterative and dynamic maximum likelihood estimation technique in order to
approximate the posterior learning distribution for agents' actions.
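
The EM pattern itself is generic. As a stand-in for the weight estimation described above, the toy Python sketch below runs the expectation and maximization steps on a simple two-component mixture whose component distributions are known and only the mixing weight is learned; it illustrates the algorithmic pattern only, not the MABEL or Netica implementation.

    import numpy as np

    # Toy data: a mixture of two known normal components; only the mixing weight is unknown.
    rng = np.random.default_rng(2)
    data = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 1.0, 700)])

    def normal_pdf(x, mean, sd):
        return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

    w = 0.5   # initial guess for the weight of the second component
    for _ in range(50):
        # E-step: responsibility of the second component for each observation.
        p1 = w * normal_pdf(data, 4.0, 1.0)
        p0 = (1.0 - w) * normal_pdf(data, 0.0, 1.0)
        responsibility = p1 / (p1 + p0)
        # M-step: maximum-likelihood update of the weight given the responsibilities.
        w = responsibility.mean()

    print(w)   # converges to roughly 0.7, the true mixing proportion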

TYPES OF AGENT INFERENCE

In the context of the simulation design and modeling procedure, we can identify three
variations of agent inference:

1. Problem-driven inference. Agent properties and decisions vary across problem and modeling domains. The emergent ability of the agents to achieve complex problem solving and optimization is essential.

2. Model-driven inference. Agent behavior drives the evolution of simulation ensembles and the problem domains for applications. The emergent ability of the agents for capturing complex patterns and processes is essential.

3. Knowledge-driven inference. Agent knowledge-base and learning capabilities help in identifying problem and application domains. The emergent ability of the agents for complex learning and adaptation is essential.

An empirical assessment of the essential inference mechanisms is shown in Figure 2. We can consider three group categories that isolate particular characteristics within a common pool
of identifiers or modes. A mixture of these modes can help us characterize the type and
qualitative characteristics of the agent type inference. The first group is the driving force for the
agent inference and can be driven by real-world processes, research questions, or policy-related
drivers. The second group emphasizes the characteristics of the modeling process, such as the
observational or state-space characteristics, the hypotheses or assumption-bases that the model
utilizes, and the scenarios that the model assesses. The third group is composed of the elements
of the knowledge-base or learning components of the model. Knowledge bases can be
characterized by the purpose they are designed to test (i.e., databases designed for uncertainty or
error-testing, testing specific hypotheses, or testing developed scenarios for simulations).

FIGURE 2 Process versus mode in agent type inference (The strength of the relationship indicates an empirical assessment.)

Within the above-described framework, problem-driven agent inference mechanisms depend mainly on the problem-specific driving forces, emphasize the modeling of observations and hypotheses, and utilize knowledge bases designed mainly for error and hypothesis testing. An example of such an inference mechanism is finding optimal environmental policies from a pool of available solutions given a finite set of available resources.

Model-driven agent inference mechanisms depend mainly on the focus of the modeling
processes used and emphasize the modeling of policy and real-world applications. The value of
these types of inference mechanisms is not particularly high for theoretical or research-driven
applications, as they aim to explore the emergence or generative dynamics of the modeling and
simulation mechanisms. An example of such an inference mechanism is simulating the
emergence of social and economic phenomena within a finite population or with the use of
simple rules.

Knowledge-driven inference mechanisms depend mainly on the availability of knowledge and observational learning techniques and emphasize mainly policy and real-world types of
applications. An example of such an inference mechanism is predicting changes in land use
given a set of policy directives or scenarios embedded in the real landscapes.

While these types of agent inference are not mutually exclusive and can be present
simultaneously in some combination in our modeling enterprises, the above classification can
help researchers understand the strengths and weaknesses of the models, tools, and methods
employed. It can also help in assessing the validity and appropriateness of databases and
knowledge bases, scenarios and policies, applications to be simulated, or research questions that
can be answered within a given study.

THE NEED FOR MODULARITY

Designing and implementing spatially explicit agent-based modeling enterprises require a relatively high level of expertise, such as an understanding of the complexity of agent behavior
and agent interactions (Baroni et al. 2005; Parker et al. 2003). Often the ability to communicate
and diffuse the assumptions, mechanisms, and results of the simulations to the various
stakeholders, policy makers, and end-user communities encounters a number of difficulties
(Carley 2002). Additional difficulties emerge when the research need for abstract symbolic
representation (e.g., mathematical, statistical, computer-language-dependent) conflicts with the
need for simplicity and transparency for comprehension and cognition (Burley 2004; Cox 2005).
Also, as seen in the previous section, such modeling enterprises are often domain specific and
thus not always one size fits all.

The degree of robustness that agent-based modeling enterprises achieve thus often depends significantly on the researcher's subjective skills. The researcher needs to
anticipate outcomes or problems in which stakeholders and policy makers have a potential
interest. Specific databases need to be constructed and data collected in advance. Models and
simulation experiments need to be calibrated and assessed, and full models can be very
computationally intensive (a fact that affects the ability to replicate or assess the validity of the
simulation results by the end-user community).

For these reasons, modularity in agent-based modeling mechanisms is a desired system and modeling property. Modularity in agent-based simulations:

Provides enhanced visualization capabilities beyond the input-output (I/O) process of the modeling mechanism;

Enables comprehensibility of patterns and processes emerging at the problem-, model-, and knowledge-driven levels;

Provides decision and policy makers with the ability to control assumptions and inference mechanisms and minimizes the researcher's subjective bias in the simulation and analysis process; and

Allows for the capability to attract and collect expert judgments, scenarios, and hypotheses that emerge from the ground up.

Modular components and agent-based mechanisms can be stand-alone. End-users do not need to
run full-model simulations (with their limitations in comprehensibility and transparency). The
expertise required for full comprehension of the agent-based model can be modularized as well,
by focusing exclusively on inference mechanisms, learning, and problem solving as separate
modular components of the modeling enterprise. Finally, the modularization of agent-based
model components provides easy access to calibration, assessment, and scenario development techniques, thus reducing the perceived complexity of the mechanism from the end-user's perspective.

MODULAR DECISION SUPPORT TOOLS FOR THE MABEL MODEL

This section provides a descriptive demonstration of three examples of inference tools for
the MABEL model: decision net inference tool (problem-driven), agent spillover effects tool
(model-driven), and MABEL scenario generator tool (knowledge-driven). All three components
operate as stand-alone user interfaces and provide added modularity and comprehensibility for
the full MABEL model simulations.

MABEL Full-model versus Stand-alone Inference Tools

Modular inference capabilities exist within the architectural framework of the MABEL
model (Lei et al. 2005). Specifically, the MABEL model embeds the Netica C++ API (Norsys
2005) within its architecture. The decision-theoretic inference and Markov decision-making
mechanisms in MABEL utilize this framework to perform diagnostic and learning inference for
utility acquisition and optimization tasks (Alexandridis et al. 2004; Alexandridis and Pijanowski
2006). Nevertheless, visualizing these mechanisms and their dynamics is not possible without the
use of new tools.

These added visualization capabilities are achieved via the development of new inference
tools within the MABEL framework. These tools are developed in the Visual Basic.NET
developing framework (Microsoft 2003) and utilize the Netica VB API (Norsys 2005) and the
Microsoft Office.Interop components of the .NET framework. Each of these tools compiles and
utilizes existing or modified MABEL decision networks and decision mechanisms and can
re-compile revised networks back to the MABEL model for simulation runs.

The Interop.Netica component within the .NET framework provides high usability of
objects, classes, functions, and class members for use within the development environment
(Figure 3). Each of these methods, when called within the Netica application framework, allows
for a comprehensive visualization of the inference mechanism and the BBN or BDN.

Decision Net Inference Tool Example

The decision net inference tool is an example of a problem-driven inference mechanism that provides a comprehensive and veridical end-user interface for BDN inference. It provides a robust query of agent BDNs, enhanced with the full network visualization capabilities of the Netica.interop application (Norsys 2005) through the VB API .NET framework. Any BBN or BDN can be loaded and queried through the main control panel of the user interface (Figure 4A).

FIGURE 3 Functionality of the Interop.Netica component in the .NET framework: Objects, classes, members

FIGURE 4 User interface of the decision net inference tool for MABEL inference
(Figure 4A shows the main control panel, 4B shows different output windows using the
office.interop interface, and 4C shows the Netica decision network visualization called from
the main control panel.)

For this example, a real-world application of a decision network is provided: the potato
grower problem described by Hardaker et al. (1998). In short, the decision problem concerns a farmer's uncertainty about whether to harvest and sell or harvest and store his potato
yield. The farmer faces two sources of uncertainty: market conditions (i.e., normal versus short
supply) and the acquisition of market prediction estimates (i.e., estimates of the supply problems
in the market). In the latter, a cost is involved for acquisition of a market prediction (e.g., a
private company providing estimates), and each of the harvest decisions results in a monetary
utility acquisition. While the decision network layout looks simple (Figure 4C), elicitation of the
prior and posterior probabilities of the network is somewhat complex (Hardaker et al. 1998).
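
A minimal Python sketch of the kind of expected-value query this decision network supports is given below. All probabilities and payoffs are invented placeholders for illustration; they are not the values elicited by Hardaker et al. (1998).

    # Hypothetical numbers only -- placeholders for the elicited probabilities and
    # payoffs of the potato grower problem, not the values in Hardaker et al. (1998).
    p_short = 0.3                     # prior probability of a short-supply market
    payoff = {                        # monetary utility of each decision by market state
        ("sell", "normal"): 100, ("sell", "short"): 120,
        ("store", "normal"): 80, ("store", "short"): 200,
    }

    def best_decision(p_short_supply):
        """Expected monetary value of each decision under a given belief."""
        ev = {d: p_short_supply * payoff[(d, "short")]
                 + (1 - p_short_supply) * payoff[(d, "normal")]
              for d in ("sell", "store")}
        return max(ev, key=ev.get), ev

    print(best_decision(p_short))            # decision without any market prediction

    # A (hypothetical) market prediction updates the belief via Bayes' rule, and the
    # decision is re-evaluated -- the kind of evidence entry the control panel performs.
    p_forecast_given_short, p_forecast_given_normal = 0.8, 0.2
    posterior_short = (p_forecast_given_short * p_short) / (
        p_forecast_given_short * p_short + p_forecast_given_normal * (1 - p_short))
    print(best_decision(posterior_short))    # decision after the prediction evidence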

By using the main control panel (Figure 4A), a user can enter inference evidence into the
decision network or test inferential assumptions about his or her knowledge of market conditions. Then, estimates of the posterior probability distribution and the potential monetary value
of the decision can be assessed via a streamer process that outputs the network results into the
office.interop spreadsheet component (Figure 4B). Multiple evidence-assessment pairs can be
performed, and multiple results can be obtained and analyzed simultaneously.

The decision net inference tool provides a useful interface that gives end-users the ability
to test and analyze the implications of their assumption-based and inferential decision-making
capabilities. The tool can also be used for training/learning inference at the decision-making
level. Finally, network evidence and user experiences can be collected and saved as case studies for agent training and learning, using the users' empirical assessments, and utilized further in realistic MABEL simulations.

Agent Spillover Effects Tool Example

The agent spillover effects tool is an example of a model-driven inference mechanism that allows simple rules of spatial inference to be incorporated into the agent-based simulation
framework for MABEL. It focuses on aiding understanding of spatial spillover effects of client
simulations in MABEL and of the immigration-emigration effects of land use change at local and
regional spatial scales. Specifically, when an ensemble of local or regional spatial simulations is
performed, traditional agent inference does not account for in- and out-migration flows of agents
into the spatial area of the simulation. Often an endogenous assumption is made at the simulation design level either to ignore the spatial and socioeconomic effects of connectivity across spatial scales or to assume a nonspatial, deterministic flow exchange. The reality of landscape and spatially explicit socioeconomic histories renders such assumptions naïve at best.
Thus, there is a need to develop a more knowledge-driven and spatially explicit mechanism for
determining the effects and the degree and strength of these effects within a simulation ensemble
framework in MABEL.

The agent spillover effects tool utilizes an eight-nearest-neighbor distributional approach to the spatial configuration of a landscape (Figure 5). Assuming that the simulation area represents the central community of the framework, an ensemble of eight neighboring simulations can be identified (Figure 5A). A stochastic decision network based on hypothetical normal distributions can then be assessed via the Netica.interop application (Norsys 2005). A series of evidence- or
assumption-based inferences can be performed to determine the mean value, standard deviation,
and skew of the joint distributions of the eight neighbors. The elicitation of the spatial
distributional effects depends on the categorical (Likert) density estimation of the central
simulation area, the categorical (Likert) joint density estimation of the neighbor spatial area, and
the spatial directional effects of the joined spillover distributions.

By providing knowledge or evidential information about these three input rules and performing a diagnostic inference of the network, the percentage of spillover effects that can be transitioned across neighboring simulations can be estimated (upper window of part A in Figure 5). In many simulation cases, we might know in advance the number of agents exceeding capacity in the current simulation area (i.e., from new agent emergence from a parcelization algorithm in MABEL), in which case the number of agents to be transitioned to neighboring townships can be assessed quantitatively (lower window of part A in Figure 5).
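
As a purely hypothetical sketch of this last, quantitative step, the snippet below allocates a known number of excess agents across the eight neighboring areas in proportion to spillover weights; the weights are drawn from a made-up skewed distribution rather than elicited from the Netica decision network.

    import numpy as np

    excess_agents = 40   # hypothetical number of agents exceeding the central area's capacity

    # One spillover weight per neighbor (N, NE, E, SE, S, SW, W, NW); here drawn from a
    # skewed placeholder distribution instead of the diagnostic network inference.
    rng = np.random.default_rng(3)
    weights = rng.gamma(shape=2.0, scale=1.0, size=8)
    weights /= weights.sum()

    transfers = np.round(weights * excess_agents).astype(int)   # rounding may leave a remainder
    for direction, n in zip(["N", "NE", "E", "SE", "S", "SW", "W", "NW"], transfers):
        print(f"{direction}: {n} agents transitioned")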

By performing sequential assessments for a finite set of simulation area ensembles, we can obtain a stochastic assessment of the in- and out-migration dynamics of the MABEL
framework and establish rules for cross- and within-scales modeling assessments. Model
calibration and dynamics related to different or alternative spatially explicit hypotheses
(e.g., number of agents, theoretical distributions) can be performed. In addition, end-users and
policy makers can use their experience and evidential knowledge to train and quantitatively elicit
better network distributions (i.e., training for distribution statistical moments) or determine other
non-normal distributions that can be present in our landscapes. Finally, the tool can be used to
understand residual dynamics of land use change that are not directly related to land use changes
within the simulation area but are transfer effects from spatial changes within wider scales.

FIGURE 5 User interface of the agent spillover effects tool for MABEL inference
(Figure 5A shows instantiations of the main control panel, and 5B shows the Netica
decision network visualization called from the main control panel.)

MABEL Scenario Generator Tool Example

The MABEL scenario generator tool provides an example of a knowledge-driven inference mechanism. It generates knowledge-driven scenarios for modeling assessments and uses the expectation maximization (EM) algorithm (Beal 2003;
Friedman 1998; Moon 1996) for agent training and learning. In the specific example provided in
Figure 6, the tool provides visualization and inference for alternative scenarios related to farmer-
class agents and the capability of performing EM learning for agent classes on the basis of
existing empirical evidence from real farm decisions observed in the landscape.

The main control panel of the user interface provides the ability to perform simple evidence elicitation of the scenario decision network (Figure 6A) or to perform a learning simulation over several time-steps of agent learning via the EM algorithm, given evidence
(Figure 6B). The elicited results, including the simulation learning process visualization, can be
called and viewed through the Netica.interop application window (Figure 6C). The lower part of
the main control panel provides a real-time office.interop spreadsheet streamer that monitors
beliefs, likelihoods, and simulation step network results.
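
The following short Python sketch mimics, in a highly simplified way, the per-time-step monitoring that such a streamer reports: one hypothetical belief is updated with fixed evidence likelihoods at every step and the monitored probability is recorded. It is a conceptual illustration only and does not reproduce the EM learning performed by the tool.

    # Conceptual, simplified sketch of per-time-step belief monitoring.
    history = []
    belief = {"adopt": 0.2, "abandon": 0.8}        # hypothetical farmer-class belief
    likelihood = {"adopt": 0.6, "abandon": 0.4}    # hypothetical per-step evidence

    for step in range(100):                        # 100 agent time-steps, as in Figure 7
        unnormalized = {s: belief[s] * likelihood[s] for s in belief}
        total = sum(unnormalized.values())
        belief = {s: p / total for s, p in unnormalized.items()}
        history.append(belief["adopt"])

    print(history[0], history[-1])   # monitored probability at the first and last step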

Results obtained from EM training and learning via the MABEL scenario generator tool can provide useful insight into the agent-learning mechanism in MABEL. Agent-learning results and simulation patterns like the one provided in Figure 7 can be easily assessed and evaluated by using the tool's user interface. Additional scenarios and model elicitation can be done with the use of users' knowledge and evidential information. Finally, the tool can be used for training in, and comprehension of, the dynamics present in real-world decision making.

FIGURE 6 User interface of the farm utility scenario generator for MABEL inference
(Figure 6A shows the main control panel for simple evidence elicitation, 6B shows the
main control panel for network learning simulations, and 6C shows the Netica decision
network visualization called from the main control panel.)

FIGURE 7 Example of the inference learning results obtained for MABEL learning simulations by using the EM algorithm training for 100 agent time-steps

VISUALIZATION, COMPREHENSION, AND POLICY MAKING REVISITED

The examples provided in the above sections, combined with the theoretical arguments
and discussion preceding them, allow us to assess a set of wider implications for a visualization,
comprehension and policy-making (VCPM) framework. This set can be expressed as a sequence
of propositions or lessons learned, as follows:

1. Visualization and comprehension cannot be achieved through black-box-type complex agent-based models. They require a clear and information-rich understanding of the mechanisms, dynamics, and patterns of complexity present in our models.

2. Agent-based models and simulations are the most appropriate research and
methodological approaches to building communities and community
understanding from the ground up. They have the power and ability to
enhance decision-maker and stakeholder understanding of complex spatial
mechanisms, drivers, and processes of change.

3. Enhancing collaboration and coordination at the stakeholder and decision-maker level can lead to more accurate predictions and more influential decisions and reduce uncertainty and risk in decision making. It can also boost dialogue and communication across and within communities for achieving sustainable and attainable alternative futures.

4. Combining quantitative and nonparametric assessment methods with qualitative and scenario assessments, often provided interactively, can enhance the value of information in our models and modeling enterprises. It can also shed some light on the complex cross-scale and cross-component mechanisms present in our simulation domains.

5. Reconciling the needs for transparency and veridicality in modeling does not
have to result in abstract and often inaccurate modeling endeavors but can
instead result in informational content-rich enterprises.

6. Developing simulation and inference tools that can serve also as training,
learning, and educational exercises builds future community capabilities for
decision and policy making that can have significant impact.

Beyond the specific tools and modeling enterprises examined in this paper, a whole array
of quantitative and qualitative assessments for network inference mechanisms can be employed
to enhance our multi-layered representation of reality (Figure 8).

Additional qualitative methods for probabilistic and stochastic inference, such as role-playing games, participatory rural assessment techniques, interviews and surveys, and qualitative
scenario assessment methods are examples of techniques that can be used for eliciting BBNs
and BDNs in the MABEL framework. Combined with traditional quantitative assessment
methods and knowledge bases, they present a powerful mechanism of inference in multi-agent
simulation systems.

FIGURE 8 Schematic model for Bayesian artificial intelligence modeling: Quantitative and qualitative elicitation and probabilistic inference in Bayesian belief networks

BIBLIOGRAPHY

Ahmed, S.E., and N. Reid. 2001. Empirical Bayes and Likelihood Inference. Lecture notes in
statistics (Springer-Verlag). New York: Springer.

Alexandridis, K., B.C. Pijanowski, and Z. Lei. 2004. The use of robust and efficient
methodologies in agent-based modeling: Case studies using repeated measures and
behavioral components in the MABEL simulation model. In C.M. Macal, D. Sallach, and
M.J. North (editors), Proceedings of the Agent 2004 Conference on Social Dynamics:
Interaction, Reflexivity, and Emergence, ANL-DIS/05-6, co-sponsored by The University
of Chicago and Argonne National Laboratory, Oct. 7–9.

Alexandridis, K., B.C. Pijanowski, and Z. Lei. 2006. Assessing multi-agent parcelization
performance in the MABEL simulation model using Monte-Carlo replication experiments.
Environment & Planning B: Planning and Design, Special Issue 33 on Agent-based
Modeling (in press).

Alexandridis, K.T., and B.C. Pijanowski. 2006. A Multi Agent-based Behavioral Economic
Landscape (MABEL) model of land use change (in revision).

Baroni, P., M. Giacomin, and G. Guida. 2005. Self-stabilizing defeat status computation:
Dealing with conflict management in multi-agent systems. Artificial Intelligence
165(2):187–259.

Beal, M.J. 2003. Variational Algorithms for Approximate Bayesian Inference. Ph.D.
Dissertation. London, UK: The Gatsby Computational Neuroscience Unit, University of
London.

Beal, M.J., Z. Ghahramani, J.M. Bernardo, M.J. Bayarri, A.P. Dawid, J.O. Berger,
D. Heckerman, A.F.M. Smith, and M. West. 2003. The variational Bayesian EM
algorithm for incomplete data: With application to scoring graphical model structures. In
Bayesian Statistics 7: Proceedings of the Seventh Valencia International Meeting,
pp. 453–464. Oxford, UK: Oxford University Press.

Bohning, D., and D. Schon. 2005. Nonparametric maximum likelihood estimation of population
size based on the counting distribution. Journal of the Royal Statistical Society: Series C
(Applied Statistics) 54(4):721–737.

Brown, D.G., S.E. Page, R. Riolo, and W. Rand. 2002. Modeling the Effects of Greenbelts at the
Urban-Rural Fringe, June 24–27, Lugano, Switzerland.

Burley, J. 2004. The restoration of research. Forest Ecology and Management 201(1):83–88.

Calmet, J., J.A. Campbell, and J. Pfalzgraf. 1996. Artificial Intelligence and Symbolic
Mathematical Computation: International Conference, AISMC-3, Steyr, Austria,
September 23–25, 1996, Proceedings, Lecture Notes in Computer Science, 1138. Berlin;
New York: Springer.

Carley, K.M. 2002. Simulating society: The tension between transparency and veridicality. In
C. Macal and D. Sallach (editors), Proceedings of the Agent 2002 Conference on Social
Agents: Ecology, Exchange, and Evolution, ANL/DIS-03-1, co-sponsored by The
University of Chicago and Argonne National Laboratory, Oct. 11–12.

Conte, R., and C. Castelfranchi. 1995. Cognitive and Social Action. London: UCL Press.

Cox, M.T. 2005. Metacognition in computation: A selected research review. Artificial Intelligence 169(2):104–141.

De Cooman, G., and M. Zaffalon. 2004. Updating beliefs with incomplete observations.
Artificial Intelligence 159(1–2):75–125.

Dellaert, F. 2002. The Expectation Maximization Algorithm, Technical Report GIT-GVU-02-20. Atlanta, GA: College of Computing, Georgia Institute of Technology.

Drennan, M. 2005. The human science of simulation: A robust hermeneutics for artificial
societies. Journal of Artificial Societies and Social Simulation 8(1); available at
http://jasss.soc.surrey.ac.uk/8/1/3.html.

Eagly, A.H., and S. Chaiken. 1993. The Psychology of Attitudes. Fort Worth, TX: Harcourt
Brace College Publishers (HBJ).

Edmonds, B., S. Moss, and P. Davidsson. 2000. The use of models: Making MABS actually
work. In Multi Agent Based Simulation, pp. 15–32. Springer-Verlag.

Ehrlich, P.R., and D. Kennedy. 2005. Sustainability: Millennium assessment of human behavior. Science 309(5734):562–563.

Feng, S., L. Da Xu, C. Tang, and S. Yang. 2003. An intelligent agent with layered architecture
for operating systems resource management. Expert Systems 20(4):171–178.

Foley, J.A., R. DeFries, G.P. Asner, C. Barford, G. Bonan, S.R. Carpenter, F.S. Chapin,
M.T. Coe, G.C. Daily, H.K. Gibbs, J.H. Helkowski, T. Holloway, E.A. Howard,
C.J. Kucharik, C. Monfreda, J.A. Patz, I.C. Prentice, N. Ramankutty, and P.K. Snyder.
2005. Global consequences of land use. Science 309(5734):570–574.

Friedman, N. 1998. The Bayesian structural EM algorithm. In UAI '98: Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, July 24–26, University of Wisconsin Business School, Madison, WI.

Gupta, M.M., and N.K. Sinha. 2000. Soft Computing and Intelligent Systems: Theory and
Applications. San Diego, CA: Academic.

Hall, P., and A. Yatchew. 2005. Unified approach to testing functional hypotheses in
semiparametric contexts. Journal of Econometrics 127(2):225–252.

Hammersley, M. 2005. Should social science be critical? Philosophy of the Social Sciences
35(2):175–195.

Hardaker, J.B., R.B.M. Huirne, and J.R. Anderson. 1998. Coping with Risk in Agriculture.
Wallingford, UK: CAB International.

Heemskerk, M., K. Wilson, and M. Pavao-Zuckerman. 2003. Conceptual models as tools for
communication across disciplines. Ecology and Society 7(3); available at http://www.consecol.org/vol7/iss3/art8/.

Hutter, M., and M. Zaffalon. 2005. Distribution of mutual information from complete and
incomplete data. Computational Statistics & Data Analysis 48(3):633–657.

Korb, K.B., and A.E. Nicholson. 2004. Bayesian Artificial Intelligence. Series in Computer
Science and Data Analysis. London, UK: Chapman & Hall/CRC.

Lei, Z., B.C. Pijanowski, and K.T. Alexandridis. 2005. Distributed modeling architecture of a
Multi Agent-based Behavioral Economic Landscape (MABEL) model. Simulation:
Transactions of the Society for Modeling & Simulation International 81(7):503–515.

Ma, T., and Y. Nakamori. 2005. Agent-based modeling on technological innovation as an evolutionary process. European Journal of Operational Research 166(3):741–755.

McIntosh, B.S., P. Jeffrey, M. Lemon, and N. Winder. 2005. On the design of computer-based
models for integrated environmental science. Environmental Management 35(6):741–752.

McIver, D.K., and M.A. Friedl. 2002. Using prior probabilities in decision-tree classification of
remotely sensed data. Remote Sensing of Environment 81(2–3):253–261.

Microsoft. 2003. Microsoft Visual Studio.NET.

Moon, T.K. 1996. The expectation-maximization algorithm. IEEE Signal Processing Magazine 13(6):47–60.

Muller, P., G.L. Rosner, M.D. Iorio, and S. MacEachern. 2005. A nonparametric Bayesian
model for inference in related longitudinal studies. Journal of the Royal Statistical
Society: Series C (Applied Statistics) 54(3):611–626.

Neapolitan, R.E. 2004. Learning Bayesian Networks. Upper Saddle River, NJ: Pearson Prentice
Hall.

Norling, E., and L. Sonenberg. 2004. Creating interactive characters with BDI agents. Paper
presented at the Australian Workshop on Interactive Entertainment, Sydney, Australia.

Norsys. 2005. Netica: Advanced Bayesian Belief Network and Influence Diagram Software
(Version 2.17). Vancouver, Canada: Norsys Software Corp.; available at www.norsys.com.

Parker, D.C., S.M. Manson, M.A. Janssen, M.J. Hoffmann, and P. Deadman. 2003. Multi-agent
systems for the simulation of land-use and land-cover change: A review. Annals of the
Association of American Geographers 93(2):314–337.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann Publishers.

Rao, A.S., and M.P. Georgeff. 1995. BDI agents: From theory to practice. In Proceedings
of the First International Conference on Multi-Agent Systems (ICMAS-95), San Francisco,
CA.

Sen, P.K. 1981. Sequential Nonparametrics: Invariance Principles and Statistical Inference.
New York: Wiley.

Stewart, M.B. 2005. A comparison of semiparametric estimators for the ordered response
model. Computational Statistics & Data Analysis 49(2):555–573.

Stocker, R., D. Cornforth, and T.R.J. Bossomaier. 2002. Network structures and agreement in
social network simulations. Journal of Artificial Societies and Social Simulation 5(4);
available at http://jasss.soc.surrey.ac.uk/5/4/3.html.

Tiku, M.L., W.Y. Tan, and N. Balakrishnan. 1986. Robust Inference. New York: M. Dekker.

Van Vuuren, D.P., and L.F. Bouwman. 2005. Exploring past and future changes in the
ecological footprint for world regions. Ecological Economics 52(1):43–62.

Verburg, P.H., P.P. Schot, M.J. Dijst, and A. Veldkamp. 2004. Land use change modelling:
Current practice and research priorities. GeoJournal 61(4):309–324.

Zhu, J., and G.D. Morgan. 2004. A nonparametric procedure for analyzing repeated measures of
spatially correlated data. Environmental and Ecological Statistics 11(4):431–443.
