
Proceedings of the Third European Workshop on

pgm06 Probabilistic Graphical Models

Prague, Czech Republic


September 12-15, 2006

Edited by Milan Studený and Jiří Vomlel

Action M Agency
Prague, 2006
Typeset in LaTeX.
The cover page and PGM logo: Jiří Vomlel
Graphic art: Karel Solařík

ISBN 80-86742-14-8
Preface

These are the proceedings of the third European workshop on Probabilistic Graphical Models
(PGM06) to be held in Prague, Czech Republic, September 12-15, 2006. The aim of this series
of workshops on probabilistic graphical models is to provide a discussion forum for researchers
interested in this topic. The first European PGM workshop (PGM02) was held in Cuenca, Spain
in November 2002. It was a successful workshop and several of its participants expressed interest in
having a biennial European workshop devoted particularly to probabilistic graphical models. The
second PGM workshop (PGM04) was held in Leiden, the Netherlands in October 2004. It was
also a success; more emphasis was put on collaborative work, and the participants also discussed
how to foster cooperation between European research groups.
There are two trends which can be observed in connection with PGM workshops. First, each
workshop is held during an earlier month than the preceding one. Indeed, PGM02 was held in
November, PGM04 in October and PGM06 will be held in September. Nevertheless, I think this
is a coincidence. The second trend is the increasing number of contributions (to be) presented. I
would like to believe that this is not a coincidence, but an indication of increasing research interest
in probabilistic graphical models.
A total of 60 papers were submitted to PGM06 and, after the reviewing and post-reviewing
phases, 41 of them were accepted for presentation at the workshop (21 talks, 20 posters) and
appear in these proceedings. The authors of these papers come from 17 different countries, mainly
European ones.
To handle the reviewing process (from May 15, 2006 to June 28, 2006) the PGM Program
Committee was considerably extended. This made it possible that every submitted paper was
reviewed by 3 independent reviewers. The reviews were handled electronically using the START
conference management system.
Most of the PC members reviewed around 7 papers. Some of them also helped in the post-
reviewing phase, if revisions to some of the accepted papers were desired. Therefore, I would like
to express my sincere thanks here to all of the PC members for all of the work they have done
towards the success of PGM06. I think PC members helped very much to improve the quality of
the contributions. Of course, my thanks are also addressed to the authors.
Further thanks go to the sponsor, the DAR UTIA research centre, which covered the
expenses related to the START system. I am also indebted to the members of the organizing
committee for their help, in particular to my co-editor, Jirka Vomlel.
My wish is for the participants in the PGM06 workshop to appreciate both the beauty of
Prague and the scientific program of the workshop. I believe there will be a fruitful discussion
during the workshop and I hope that the tradition of PGM workshops will continue.

Prague, June 26, 2006


Milan Studený
PGM06 PC Chair
Workshop Organization
Program Committee
Chair:
Milan Studený Czech Republic
Committee Members:
Concha Bielza Technical University of Madrid, Spain
Luis M. de Campos University of Granada, Spain
Francisco J. Díez UNED Madrid, Spain
Marek J. Druzdzel University of Pittsburgh, USA
Ad Feelders Utrecht University, Netherlands
Linda van der Gaag Utrecht University, Netherlands
José A. Gámez University of Castilla-La Mancha, Spain
Juan F. Huete University of Granada, Spain
Manfred Jaeger Aalborg University, Denmark
Finn V. Jensen Aalborg University, Denmark
Radim Jiroušek Academy of Sciences, Czech Republic
Pedro Larrañaga University of the Basque Country, Spain
Peter Lucas Radboud University Nijmegen, Netherlands
Fero Matúš Academy of Sciences, Czech Republic
Serafín Moral University of Granada, Spain
Thomas D. Nielsen Aalborg University, Denmark
Kristian G. Olesen Aalborg University, Denmark
José M. Peña Linköping University, Sweden
José M. Puerta University of Castilla-La Mancha, Spain
Silja Renooij Utrecht University, Netherlands
David Ríos Insua Rey Juan Carlos University, Spain
Antonio Salmerón University of Almería, Spain
James Q. Smith University of Warwick, UK
Marco Valtorta University of South Carolina, USA
Jiří Vomlel Academy of Sciences, Czech Republic
Marco Zaffalon IDSIA Lugano, Switzerland

Additional Reviewers
Alessandro Antonucci IDSIA Lugano, Switzerland
Janneke Bolt Utrecht University, Netherlands
Julia Flores University of Castilla-La Mancha, Spain
Marcel van Gerven Radboud University Nijmegen, Netherlands
Yimin Huang University of South Carolina, USA
Søren H. Nielsen Aalborg University, Denmark
Jana Novovičová Academy of Sciences, Czech Republic
Valerie Sessions University of South Carolina, USA
Petr Šimeček Academy of Sciences, Czech Republic
Marta Vomlelová Charles University, Czech Republic
Peter de Waal Utrecht University, Netherlands
Changhe Yuan University of Pittsburgh, USA
Organizing Committee
Chair:
Radim Jiroušek Academy of Sciences, Czech Republic
Committee members:
Milena Zeithamlová Action M Agency, Czech Republic
Marta Vomlelová Charles University, Czech Republic
Radim Lněnička Academy of Sciences, Czech Republic
Petr Šimeček Academy of Sciences, Czech Republic
Jiří Vomlel Academy of Sciences, Czech Republic
Contents
Some Variations on the PC Algorithm . . . . . . . . . . . . . . . . . . . . 1
Joaquín Abellán, Manuel Gómez-Olmedo, and Serafín Moral

Learning Complex Bayesian Network Features for Classification . . . . . . . . . . 9
Péter Antal, András Gézsi, Gábor Hullám, and András Millinghoffer

Literature Mining using Bayesian Networks . . . . . . . . . . . . . . . . . . 17
Péter Antal and András Millinghoffer

Locally specified credal networks . . . . . . . . . . . . . . . . . . . . . 25
Alessandro Antonucci and Marco Zaffalon

A Bayesian Network Framework for the Construction of Virtual Agents with Human-like Behaviour . . . 35
Olav Bangsø, Nicolaj Søndberg-Madsen, and Finn V. Jensen

Loopy Propagation: the Convergence Error in Markov Networks . . . . . . . . . . 43
Janneke H. Bolt

Preprocessing the MAP Problem . . . . . . . . . . . . . . . . . . . . . . . 51
Janneke H. Bolt and Linda C. van der Gaag

Quartet-Based Learning of Shallow Latent Variables . . . . . . . . . . . . . . 59
Tao Chen and Nevin L. Zhang

Continuous Decision MTE Influence Diagrams . . . . . . . . . . . . . . . . . 67
Barry R. Cobb

Discriminative Scoring of Bayesian Network Classifiers: a Comparative Study . . . 75
Ad Feelders and Jevgenijs Ivanovs

The Independency tree model: a new approach for clustering and factorisation . . . 83
M. Julia Flores, José A. Gámez, and Serafín Moral

Learning the Tree Augmented Naive Bayes Classifier from incomplete datasets . . . 91
Olivier C.H. François and Philippe Leray

Lattices for Studying Monotonicity of Bayesian Networks . . . . . . . . . . . . 99
Linda C. van der Gaag, Silja Renooij, and Petra L. Geenen

Multi-dimensional Bayesian Network Classifiers . . . . . . . . . . . . . . . 107
Linda C. van der Gaag and Peter R. de Waal

Dependency networks based classifiers: learning models by using independence tests . . . 115
José A. Gámez, Juan L. Mateo, and José M. Puerta

Unsupervised naive Bayes for data clustering with mixtures of truncated exponentials . . . 123
José A. Gámez, Rafael Rumí, and Antonio Salmerón

Selecting Strategies for Infinite-Horizon Dynamic LIMIDS . . . . . . . . . . . 131
Marcel A. J. van Gerven and Francisco J. Díez

Sensitivity analysis of extreme inaccuracies in Gaussian Bayesian Networks . . . 139
Miguel A. Gómez-Villegas, Paloma Maín, and Rosario Susi

Learning Bayesian Networks Structure using Markov Networks . . . . . . . . . . 147
Christophe Gonzales and Nicolas Jouve

Estimation of linear, non-gaussian causal models in the presence of confounding latent variables . . . 155
Patrik O. Hoyer, Shohei Shimizu, and Antti J. Kerminen

Symmetric Causal Independence Models for Classification . . . . . . . . . . . 163
Rasa Jurgelenaite and Tom Heskes

Complexity Results for Enhanced Qualitative Probabilistic Networks . . . . . . . 171
Johan Kwisthout and Gerard Tel

Decision analysis with influence diagrams using Elvira's explanation facilities . . . 179
Manuel Luque and Francisco J. Díez

Dynamic importance sampling in Bayesian networks using factorisation of probability trees . . . 187
Irene Martínez, Carmelo Rodríguez, and Antonio Salmerón

Learning Semi-Markovian Causal Models using Experiments . . . . . . . . . . . 195
Stijn Meganck, Sam Maes, Philippe Leray, and Bernard Manderick

Geometry of rank tests . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Jason Morton, Lior Pachter, Anne Shiu, Bernd Sturmfels, and Oliver Wienand

An Empirical Study of Efficiency and Accuracy of Probabilistic Graphical Models . . . 215
Jens D. Nielsen and Manfred Jaeger

Adapting Bayes Network Structures to Non-stationary Domains . . . . . . . . . . 223
Søren H. Nielsen and Thomas D. Nielsen

Diagnosing Lyme disease - Tailoring patient specific Bayesian networks for temporal reasoning . . . 231
Kristian G. Olesen, Ole K. Hejlesen, Ram Dessau, Ivan Beltoft, and Michael Trangeled

Predictive Maintenance using Dynamic Probabilistic Networks . . . . . . . . . . 239
Demet Özgür-Ünlüakın and Taner Bilgiç

Reading Dependencies from the Minimal Undirected Independence Map of a Graphoid that Satisfies Weak Transitivity . . . 247
José M. Peña, Roland Nilsson, Johan Björkegren, and Jesper Tegnér

Evidence and Scenario Sensitivities in Naive Bayesian Classifiers . . . . . . . 255
Silja Renooij and Linda van der Gaag

Abstraction and Refinement for Solving Continuous Markov Decision Processes . . . 263
Alberto Reyes, Pablo Ibargüengoytia, L. Enrique Sucar, and Eduardo Morales

Bayesian Model Averaging of TAN Models for Clustering . . . . . . . . . . . . 271
Guzmán Santafé, José A. Lozano, and Pedro Larrañaga

Dynamic Weighting A* Search-based MAP Algorithm for Bayesian Networks . . . . . 279
Xiaoxun Sun, Marek J. Druzdzel, and Changhe Yuan

A Short Note on Discrete Representability of Independence Models . . . . . . . . 287
Petr Šimeček

Evaluating Causal effects using Chain Event Graphs . . . . . . . . . . . . . . 293
Peter Thwaites and Jim Smith

Severity of Local Maxima for the EM Algorithm: Experiences with Hierarchical Latent Class Models . . . 301
Yi Wang and Nevin L. Zhang

Optimal Design with Design Networks . . . . . . . . . . . . . . . . . . . . 309
Yang Xiang

Hybrid Loopy Belief Propagation . . . . . . . . . . . . . . . . . . . . . . 317
Changhe Yuan and Marek J. Druzdzel

Probabilistic Independence of Causal Influences . . . . . . . . . . . . . . . 325
Adam Zagorecki and Marek J. Druzdzel
Some Variations on the PC Algorithm
J. Abellán, M. Gómez-Olmedo, and S. Moral
Department of Computer Science and Artificial Intelligence
University of Granada
18071 - Granada, Spain

Abstract
This paper proposes some possible modifications of the basic PC learning algorithm and
reports experiments studying their behaviour. The variations are: determining minimum-size
cut sets between two nodes when studying the deletion of a link; making the statistical
decisions on the basis of a Bayesian score instead of a classical chi-square test;
refining the learned network by a greedy optimization of a Bayesian score; and resolving
link ambiguities by taking into account a measure of their strength. It will be shown
that some of these modifications can improve PC performance, depending on the objective
of the learning task: discovering the causal structure or approximating the joint
probability distribution of the problem variables.

1 Introduction

There are two main approaches to learning Bayesian networks from data. One is based on
scoring and searching (Cooper and Herskovits, 1992; Heckerman, 1995; Buntine, 1991). Its
main idea is to define a global measure (score) which evaluates a given Bayesian network
model as a function of the data. The problem is solved by searching the space of possible
Bayesian network models for the network with optimal score. The other approach (constraint
learning) is based on carrying out several independence tests on the database and building
a Bayesian network in agreement with the test results. The main example of this approach
is the PC algorithm (Spirtes et al., 1993). It can be applied to any source providing
information about whether a given conditional independence relation is verified.

In the past years, searching and scoring procedures have received more attention, due to
some clear advantages (Heckerman et al., 1999). One is that constraint-based learning makes
categorical decisions from the very beginning. These decisions are based on statistical
tests that may be erroneous, and these errors will affect all the future behaviour of the
algorithm. Another one is that scoring and search procedures allow comparing very different
models by a score that can be interpreted as the probability of being the true model. As a
consequence, we can also follow a Bayesian approach considering several alternative models,
each of them with its corresponding probability, and using them to determine posterior
decisions (model averaging). Finally, in score-and-search approaches different combinatorial
optimization techniques (de Campos et al., 2002; Blanco et al., 2003) can be applied to
maximize the evaluation of the learned network. On the other hand, the PC algorithm has
some advantages. One of them is that it has an intuitive basis and, under some ideal
conditions, it is guaranteed to recover a graph equivalent to the one being a true model
for the data. It can be considered a smart selection and ordering of the questions that
have to be asked in order to recover a causal structure.

The basic point of this paper is that the PC algorithm provides a set of strategies that
can be combined with other ideas to produce good learning algorithms which can be adapted
to different situations. An example of this is when van Dijk et al. (2003) propose a
combination of the order-0 and order-1 tests of the PC algorithm with a scoring-and-searching
procedure. Here, we propose
several variations on the original PC algorithm. The first one will be a generalization of
the necessary path condition (Steck and Tresp, 1999); the second will be to replace the
statistical tests for independence by decisions based on a Bayesian score; the third will
be to allow the possibility of refining the network learned with PC by applying a greedy
optimization of a Bayesian score; and finally the last proposal will be to delete edges
from triangles in the graph following an order given by a Bayesian score (removing weaker
edges first). We will show the intuitive basis for all of them and we will report some
experiments showing their performance when learning the Alarm network (Beinlich et al.,
1989). The quality of the learned networks will be measured by the number of missing and
added links and by the Kullback-Leibler distance of the probability distribution associated
with the learned network to the original one.

The paper is organized as follows: Section 2 describes the fundamentals of the PC
algorithm; Section 3 introduces the four variations of the PC algorithm; in Section 4 the
results of the experiments are reported and discussed; Section 5 is devoted to the
conclusions.

2 The PC Algorithm

Assume that we have a set of variables X = (X1, ..., Xn) with a global probability
distribution P about them. By an uppercase bold letter A we will represent a subset of
variables of X. By I(A, B|C) we will denote that the sets A and B are conditionally
independent given C.

The PC algorithm assumes faithfulness. This means that there is a directed acyclic graph,
G, such that the independence relationships among the variables in X are exactly those
represented by G by means of the d-separation criterion (Pearl, 1988). The PC algorithm is
based on the existence of a procedure which is able to say when I(A, B|C) is verified in
the graph G. It first tries to find the skeleton (underlying undirected graph) and in a
posterior step makes the orientation of the edges. Our variations will be mainly applied
to the first part (determining the skeleton), so we shall describe it in some detail:

 1. Start with a complete undirected graph G0
 2. i = 0
 3. Repeat
 4.   For each X in X
 5.     For each Y in ADJ_X
 6.       Test whether there exists S, a subset of ADJ_X - {Y},
          with |S| = i and I(X, Y |S)
 7.       If this set exists
 8.         Set S_XY = S
 9.         Remove the X-Y link from G0
10.   i = i + 1
11. Until |ADJ_X| <= i for all X

In this algorithm, ADJ_X is the set of nodes adjacent to X in the graph G0. The basis is
that, if the set of independencies is faithful to a graph, then there is no link between X
and Y if and only if there is a subset S of the adjacent nodes of X such that I(X, Y |S).
For each pair of variables, S_XY will contain such a set, if it is found. This set will be
used in the posterior orientation stage.

The orientation step will proceed by looking for sets of three variables {X, Y, Z} such
that the edges X-Z and Y-Z are in the graph but not the edge X-Y. Then, if Z is not in
S_XY, it orients the edges from X to Z and from Y to Z, creating a v-structure:
X -> Z <- Y. Once these orientations are done, it tries to orient the rest of the edges
following two basic principles: not to create cycles and not to create new v-structures.
It is possible that the orientation of some of the edges has to be arbitrarily selected.

If the set of independencies is faithful to a graph and we have a perfect way of
determining whether I(X, Y |S), then the algorithm is guaranteed to produce a graph
equivalent to the original one (representing the same set of independencies).

However, in practice neither of these conditions is verified. Independencies are decided
in the light of statistical independence tests based on a set of data D. The usual way of
doing these tests is by means of a chi-square test based on the cross-entropy statistic
measured in the sample (Spirtes et al., 1993). Statistical tests have

errors and, even if the faithfulness hypothesis is verified, it is possible that we do not
recover the original graph. The number of errors of statistical tests increases when the
sample is small or the cardinality of the conditioning set S is large (Spirtes et al.,
1993, p. 116). In both cases, due to the nature of frequentist statistical tests, there is
a tendency to always decide independence (Cohen, 1988). This is one reason for doing the
statistical tests in increasing order of the cardinality of the sets to which we are
conditioning.

Apart from not recovering the original graph, we can have other effects, such as the
possibility of finding cycles when orienting v-structures. In our implementation, we have
always avoided cycles by reversing arrows when necessary.

3 The Variations

3.1 Necessary Path Condition

In the PC algorithm it is possible that we delete the link between X and Y by testing the
independence I(X, Y |S), where S is a set containing nodes that do not appear in any
(cycle-free) path from X to Y. The inclusion of these nodes is not theoretically wrong,
but statistical tests make more errors when the size of the conditioning set increases,
so it can be a source of problems in practice. For this reason, Steck and Tresp (1999)
proposed to reduce ADJ_X - {Y} in Step 6 by removing all the nodes that are not on a path
from X to Y. In this paper, we go a step further by considering any subset CUT_{X,Y}
disconnecting X and Y in the graph in which the link X-Y has been deleted, playing the
role of ADJ_X - {Y}.

Consider that, in the skeleton, we want to see whether the link X-Y can be deleted; then
we first remove it, and if the situation is the one in Figure 1, we could consider
CUT_{X,Y} = {Z}. However, the actual algorithm (even with the necessary path condition)
considers the set ADJ_X - {Y}, which is larger, and therefore has an increased possibility
of error.

[Figure 1: A small cut set: X and Y separated by the single node Z.]

Our proposal is to apply the PC algorithm, but considering in Step 6 a cut set of minimum
size in the graph without the X-Y link, as Acid and de Campos (2001) did in a different
context. The computation of this set needs some extra time, but it can be done in
polynomial time with a modification of the Ford-Fulkerson algorithm (Acid and de Campos,
1996).

3.2 Bayesian Statistical Tests

The PC algorithm performs a chi-square statistical test to decide about independence.
However, as shown by Moral (2004), statistical tests sometimes make too many errors. They
try to keep the Type I error (deciding dependence when there is independence) constant at
the significance level. However, if the sample is large enough this error can be made much
lower by using a different decision procedure, without an important increase in the
Type II error (deciding independence when there is dependence). Margaritis (2003) has
proposed to make statistical tests of independence for continuous variables by using a
Bayesian score after discretizing them. Previously, Cooper (1997) proposed a different
independence test based on a Bayesian score, but only when conditioning on 0 or 1
variables. Here we propose to do all the statistical tests by using a Bayesian Dirichlet
score (Heckerman, 1995) with a global sample size s equal to 1.0. (We have chosen this
score instead of the original K2 score (Cooper and Herskovits, 1992) because it is
considered more correct from a theoretical point of view (Heckerman et al., 1995).) The
test I(X, Y |S) is carried out by comparing the score of X with S as parents and the score
of X with S and Y together as parents. If the former is larger than the latter, the
variables are considered independent, and in the other case, they


are considered dependent. The score of X with a set of parents Pa(X) = Z is the logarithm
of:

\[
\prod_{z}\left(\frac{\Gamma(s')}{\Gamma(N_z + s')}\prod_{x}\frac{\Gamma(N_{z,x} + s'')}{\Gamma(s'')}\right)
\]

where N_z is the number of occurrences of [Z = z] in the sample, N_{z,x} is the number of
occurrences of [Z = z, X = x] in the sample, s' is s divided by the number of possible
values of Z, and s'' is equal to s' divided by the number of values of X.

3.3 Refinement

If the statistical tests do not make errors and the faithfulness hypothesis is verified,
then the PC algorithm will recover a graph equivalent to the original one, but this can
never be assured with finite samples. Also, even if we recover the original graph, when
our objective is to approximate the joint distribution of all the variables, then,
depending on the sample size, it can be more convenient to use a simpler graph than the
true one. Imagine that the variables follow the graph of Figure 2. This graph can be
recovered by the PC algorithm by doing only statistical independence tests of order 0
and 1 (conditioning on none or one variable). However, when we are going to estimate the
parameters of the network, we have to estimate a high number of probability values. This
can be a too complex model (too many parameters) if the database is not large enough. In
this situation, it can be reasonable to try to refine this network, taking into account
the actual orientation and the size of the model. In this sense, the result of the PC
algorithm can be used as a starting point for a greedy search algorithm optimizing a
concrete metric.

[Figure 2: A too complex network.]

In particular, our proposal is based on the following steps:

1. Obtain an order compatible with the graph learned by the PC algorithm.

2. For each node, try to delete each one of its parents, or to add some of the non-parent
   preceding nodes as a parent, measuring the resulting Bayesian Dirichlet score. We make
   the movement with the highest score difference while this difference is positive.

Refinement can also solve some of the problems associated with the non-verification of
the faithfulness hypothesis. Assume, for example, that we have a problem with three
variables, X, Y, Z, and that the set of independencies is given by the independencies of
the graph in Figure 3 plus the independence I(Y, Z|{}). The PC algorithm will estimate a
network where the link between Y and Z is lost. Even if the sample is large, we will
estimate a too simple network which is not an I-map (Pearl, 1988). If we orient the link
X -> Z in the PC algorithm, refinement can produce the network in Figure 3 by checking
that the Bayesian score is increased (as should be the case if I(Z, Y |X) is not
verified).

[Figure 3: A simple network over X, Y, and Z.]

The idea of refining a learned Bayesian network by means of a greedy optimization of a
Bayesian score has been used in a different context by Dash and Druzdzel (1999).

3.4 Triangles Resolution

Imagine that we have three variables, X, Y, Z, and that no independence relationship
involving them is verified: each pair of variables is dependent and conditionally
dependent given the third one. As there is a tendency to decide for independence when the
size of the conditioning


set is larger, it is possible that all order-0 tests produce dependence, but when we test
the independence of two variables with respect to a third one, we obtain independence. In
this situation, the result of the PC algorithm will depend on the order in which the tests
are carried out. For example, if we first ask for the independence I(X, Y |Z), then the
link X-Y is deleted, but not the other two links, which will be oriented in a posterior
step without creating a v-structure. If we first test I(X, Z|Y), then the deleted link
will be X-Z, but not the other two.

It seems reasonable that if one of the links is going to be removed, we should choose the
weakest one. In this paper, for each three variables that form a triangle (the graph
contains the three links) after the order-0 tests, we measure the strength of the link X-Y
as the Bayesian score of X with Y, Z as parents minus the Bayesian score of X with Z as
parent. For each triangle we delete the link with the lowest strength (if this value is
lower than 0). This is done as an intermediate step between the order-0 and order-1
conditional independence tests.

In this paper, this has been implemented only in the case in which the independence tests
are based on a Bayesian score, but it could also be considered in the case of chi-square
tests, by taking the strength of a link equal to the p-value of the statistical
independence test.

A deeper study of this type of interdependencies between the deletion of links (the
presence of a link depends on the absence of another one, and vice versa) has been carried
out by Steck and Tresp (1999), but the resolution of these ambiguities is not done there.
The Hugin system (Madsen et al., 2003) allows the user to decide between the different
possibilities. Our procedure could be extended to this more general setting, but at this
stage the implementation has been limited to triangles as it is, at the same time, the
most usual and the simplest situation.

4 Experiments

We have done some experiments with the Alarm network (Beinlich et al., 1989) to test the
PC variations. In all of them, we have started with the original network and we have
generated samples of different sizes by logic sampling. Then, we have tried to recover the
original network from the samples by using the different variations of the PC algorithm,
including the orientation step. We have considered the following measures of error in this
process: the number of missing links, the number of added links, and the Kullback-Leibler
distance (Kullback, 1968) of the learned probability distribution to the original one
(the parameters are estimated with a Bayesian Dirichlet approach with a global sample size
of 2). The Kullback-Leibler distance is a more appropriate measure of error when the
objective is to approximate the joint probability distribution of all the variables, while
the number of link differences is more appropriate when our objective is to recover the
causal structure of the problem. We do not consider the number of wrong orientations, as
our variations are mainly focused on the skeleton discovery phase of the PC algorithm.
The number of added or deleted links only depends on the first part of the learning
algorithm (selection of the skeleton).

The experiments have been carried out in the Elvira environment (Elvira Consortium, 2002),
where a local computation of the Kullback-Leibler distance is implemented. The sample
sizes we have used are 100, 500, 1000, 5000, and 10000, and for each sample size we have
repeated the experiment 100 times. The combinations of algorithms we have tested are the
following:

Alg1 The algorithm with minimal separating sets, score-based tests, no refinement, and
     triangle resolution.

Alg2 The algorithm with minimal separating sets, score-based tests, refinement, and
     triangle resolution.

Alg3 The algorithm with adjacent nodes as separating sets, score-based tests, no
     refinement, and triangle resolution.

Alg4 The algorithm with minimal separating sets, chi-square tests, no refinement, and no
     resolution of triangles.

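As an illustration of the skeleton phase described in Section 2, the pseudocode can be
sketched in Python as follows. This is our own illustrative sketch, not the authors'
Elvira implementation; the oracle `independent(x, y, s)` is a placeholder for any of the
conditional-independence tests discussed in the paper (the chi-square test or the
Bayesian-score comparison of Section 3.2), and all names are ours:

```python
from itertools import combinations

def pc_skeleton(variables, independent):
    """Skeleton phase of the PC algorithm.

    `independent(x, y, s)` is a conditional-independence oracle.
    Returns the final adjacency sets ADJ_X and the separating sets
    S_XY recorded for the later orientation stage.
    """
    # 1. start with a complete undirected graph
    adj = {x: set(variables) - {x} for x in variables}
    sep = {}
    i = 0
    # 11. until |ADJ_X| <= i for every X
    while any(len(adj[x]) > i for x in variables):
        for x in variables:
            for y in sorted(adj[x]):
                # 6. test every S subset of ADJ_X - {Y} with |S| = i
                for s in combinations(sorted(adj[x] - {y}), i):
                    if independent(x, y, set(s)):
                        sep[frozenset((x, y))] = set(s)   # 8. S_XY = S
                        adj[x].discard(y)                 # 9. remove X-Y
                        adj[y].discard(x)
                        break
        i += 1
    return adj, sep
```

For a three-variable chain X-Y-Z, for instance, an oracle that only declares I(X, Z|{Y})
leads the sketch to keep the links X-Y and Y-Z and to record S_XZ = {Y}.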
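The Bayesian Dirichlet test of Section 3.2 and the link strength of Section 3.4 can
likewise be sketched in a few lines. Again, this is a minimal sketch under our own
conventions rather than the authors' code: the data is assumed complete and discrete,
represented as a list of dicts, and, following the paper, s' divides the global sample
size s by the number of parent configurations and s'' divides s' by the number of child
values (here the configurations actually observed in the data stand in for the possible
values of Z):

```python
from collections import Counter
from math import lgamma

def bd_score(data, child, parents, s=1.0):
    """Log Bayesian Dirichlet score of `child` with parent set `parents`.

    Log of  prod_z Gamma(s')/Gamma(N_z + s')
            * prod_x Gamma(N_{z,x} + s'')/Gamma(s'').
    """
    child_vals = sorted({row[child] for row in data})
    configs = sorted({tuple(row[p] for p in parents) for row in data})
    s1 = s / len(configs)          # s'  (observed parent configurations)
    s2 = s1 / len(child_vals)      # s''
    n_z = Counter(tuple(row[p] for p in parents) for row in data)
    n_zx = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    score = 0.0
    for z in configs:
        score += lgamma(s1) - lgamma(n_z[z] + s1)
        for x in child_vals:
            score += lgamma(n_zx[(z, x)] + s2) - lgamma(s2)
    return score

def independent(data, x, y, s_set, s=1.0):
    """I(x, y | s_set): independent iff the score of x with parents s_set
    exceeds the score of x with parents s_set plus y."""
    return bd_score(data, x, list(s_set), s) > bd_score(data, x, list(s_set) + [y], s)

def strength(data, x, y, z, s=1.0):
    """Strength of the link x-y in the triangle {x, y, z} (Section 3.4):
    score of x with parents {y, z} minus score of x with parent {z}."""
    return bd_score(data, x, [y, z], s) - bd_score(data, x, [z], s)
```

With the paper's choice s = 1.0, eight records in which X and Y are generated
independently are declared independent by this test, while eight records with X = Y are
declared dependent.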


          100     500     1000    5000    10000
  Alg1    17.94   8.76    6.02    3.25    2.56
  Alg2    16.29   8.16    5.59    3.7     3.28
  Alg3    26.54   18.47   14.87   8.18    7.08
  Alg4    29.07   11.49   8.42    3.53    2.14
  Alg5    17.87   8.83    6.08    3.03    2.57

Table 1: Average number of missing links

          100     500     1000    5000    10000
  Alg1    2.16    4.1     5.88    21.98   42
  Alg2    2.25    4.12    5.89    21.98   42.1
  Alg3    0.33    1.56    3.34    20.73   44.14
  Alg4    2.45    8.9     13.89   39.42   68.85
  Alg5    2.21    4.15    6.02    22.95   44.4

Table 4: Average time
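The Kullback-Leibler distance reported in Table 3 measures how far the distribution of the learnt network is from that of the original network. For two discrete distributions it is KL(p || q) = Σ_x p(x) log(p(x)/q(x)). A minimal sketch for flat distributions follows; the paper relies on Elvira's local computation over the networks, so this direct version is for illustration only:

```python
import math

def kl_distance(p, q):
    """Kullback-Leibler distance KL(p || q) between two discrete
    distributions given as dicts mapping each outcome to its
    probability; q must be positive wherever p is."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)
```

The distance is zero exactly when the two distributions agree, and positive otherwise, which is what makes it usable as an error measure for the learnt networks.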

Alg5 The algorithm with minimal separating sets, score based tests, no refinement, and no resolution of triangles.

These combinations are designed in this way because we consider Alg1 our basic algorithm for recovering the graph structure, and we want to study the effect of applying each one of the variations to it.

Table 1 contains the average number of missing links, Table 2 the average number of added links, Table 3 the average Kullback-Leibler distance, and finally Table 4 contains the average running times of the different algorithms. In these results we highlight the following facts:

          100     500     1000    5000    10000
  Alg1    10.42   4.96    2.87    2.2     1.98
  Alg2    26.13   17.52   16.32   16.01   15.38
  Alg3    3.06    0.63    0.14    0.03    0.01
  Alg4    14.73   5.96    5.68    4.78    4.79
  Alg5    10.46   4.99    3.21    1.92    1.9

Table 2: Average number of added links

Refinement (Alg2) increases the number of errors in the recovery of the causal structure (mainly more added links), but decreases the Kullback-Leibler distance to the original distribution. So its application will depend on our objective: to approximate the joint distribution or to recover the causal structure. Refinement is fast and does not add a significant amount of extra time.

When comparing Alg1 with Alg3 (minimum size cut set vs. set of adjacent nodes) we observe that with adjacent nodes fewer links are added and more are missing. The total amount of errors is in favour of Alg1 (minimum size cut set). This is due to the fact that Alg3 makes more conditional independence tests, and with larger conditioning sets of variables, which makes it more likely to delete links. The Kullback-Leibler distance is better for Alg1 except for the largest sample size. A possible explanation is that with this large sample size we do not really miss any important link of the network, and added links can be more harmful than deleted ones (when we delete links we are averaging distributions). With smaller sample sizes, Alg3 had a worse Kullback-Leibler distance, as it can be missing some important links. We do not have any explanation for the fact that Alg1 does not improve the Kullback-Leibler distance when increasing the sample size from 5000 to 10000. When comparing the times of both algorithms, we see that Alg1 needs more time (to compute minimum size cut sets); however, when the sample size is large this extra time is compensated by the lower number of statistical tests, the total time for size 10000 being lower in the case of Alg1 (with minimum size cut sets).

          100     500     1000    5000    10000
  Alg1    4.15    2.46    1.81    0.98    0.99
  Alg2    2.91    0.96    0.56    0.19    0.11
  Alg3    4.98    3.27    2.58    1.11    0.77
  Alg4    5.96    2.19    1.49    1.05    0.91
  Alg5    4.15    2.36    1.86    1.11    0.98

Table 3: Average Kullback-Leibler distance

When comparing Alg1 and Alg4 (score based tests and triangle resolution vs. Chi-square tests and no triangle resolution) we observe that Alg4 always adds more links and misses more links (except for the largest sample size). The total number of errors is lower for Alg1. It is meaningful that the number of added links does not decrease when going from a sample of 5000 to a

J. Abellán, M. Gómez-Olmedo, and S. Moral


sample of 10000. This is due to the fact that the probability of considering dependence when there is independence is fixed (the significance level) for large sample sizes. So all the extra information of the larger sample is devoted to decreasing the number of missing links (2.14 in Alg4 against 2.56 in Alg1), but the difference in added links is 4.79 in Alg4 against 1.98 in Alg1. So the small decrease in missing links comes at the cost of a more important error in the number of added links. Bayesian score tests are more balanced between the two types of errors. When considering the Kullback-Leibler distance, we observe again the same situation as when comparing Alg1 and Alg2: a greater number of errors in the structure does not always imply a greater Kullback-Leibler distance. The time is always greater for Alg4.

The differences between Alg1 and Alg4 are not due to the triangle resolution in Alg1. As we will see now, triangle resolution does not really imply important changes in Alg1's performance. In fact, the effect of Chi-square tests against Bayesian tests, without any other additional factor, can be seen by comparing Alg5 and Alg4. In this case, we can observe the same differences as when comparing Alg1 and Alg4.

When comparing Alg1 and Alg5 (no resolution of triangles) we see that there are no important differences in performance (errors and time) when resolution of triangles is applied. It seems that the total number of errors is decreased for intermediate sample sizes (500-1000) and that there are no important differences for the other sample sizes, but more experiments are necessary. Triangle resolution does not really add meaningful extra time. Applying this step needs some time, but the graph is simplified and posterior steps can be faster.

5 Conclusions

In this paper we have proposed four variations of the PC algorithm and we have tested them when learning the Alarm network. Our final recommendation would be to use the PC algorithm with score based tests, minimum size cut sets, and triangle resolution. The application of the refinement step would depend on the final aim: if we want to learn the causal structure, then refinement should not be applied, but if we want to approximate the joint probability distribution, then refinement should be applied. We recognize that more extensive experiments are necessary to evaluate the application of these modifications, especially the triangle resolution. But we feel that this modification is intuitively supported, and that it could have a more important role in other situations, especially if the faithfulness hypothesis is not verified.

Other combinations could be appropriate if the objective is different; for example, if we want to minimize the number of added links, then Alg3 (with adjacent nodes as cut set) could be considered.

In the future we plan to make more extensive experiments, testing different networks and different combinations of these modifications. At the same time, we will consider other possible variations, for example an algorithm mixing the skeleton and orientation steps: it is possible that some of the independencies are tested conditional on sets that, after the orientation, no longer separate the two linked nodes. We also plan to study alternative scores and the effect of using different sample sizes. Also, partial orientations can help to make the separating sets even smaller, as there can be some paths which are not active without observations. This can make the algorithms faster and more accurate. Finally, we think that the use of PC and its variations as starting points for greedy search algorithms needs further research effort.

Acknowledgments

This work has been supported by the Spanish Ministry of Science and Technology under the Algra project (TIN2004-06204-C03-02).

References

S. Acid and L.M. de Campos. 1996. Finding minimum d-separating sets in belief networks. In Proceedings of the Twelfth Annual Conference on Uncertainty in Artificial Intelligence (UAI'96), pages 3-10, Portland, Oregon.



S. Acid and L.M. de Campos. 2001. A hybrid methodology for learning belief networks: BENEDICT. International Journal of Approximate Reasoning, 27:235-262.

I.A. Beinlich, H.J. Suermondt, R.M. Chavez, and G.F. Cooper. 1989. The ALARM monitoring system: A case study with two probabilistic inference techniques for belief networks. In Proceedings of the Second European Conference on Artificial Intelligence in Medicine, pages 247-256. Springer-Verlag.

R. Blanco, I. Inza, and P. Larrañaga. 2003. Learning Bayesian networks in the space of structures by estimation of distribution algorithms. International Journal of Intelligent Systems, 18:205-220.

W. Buntine. 1991. Theory refinement on Bayesian networks. In Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence, pages 52-60. Morgan Kaufmann, San Francisco, CA.

J. Cohen. 1988. Statistical Power Analysis for the Behavioral Sciences (2nd edition). Erlbaum, Hillsdale, NJ.

Elvira Consortium. 2002. Elvira: An environment for probabilistic graphical models. In J.A. Gámez and A. Salmerón, editors, Proceedings of the 1st European Workshop on Probabilistic Graphical Models, pages 222-230.

G.F. Cooper and E.A. Herskovits. 1992. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309-347.

G.F. Cooper. 1997. A simple constraint-based algorithm for efficiently mining observational databases for causal relationships. Data Mining and Knowledge Discovery, 1:203-224.

D. Dash and M.J. Druzdzel. 1999. A hybrid anytime algorithm for the construction of causal models from sparse data. In Proceedings of the Fifteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-99), pages 142-149. Morgan Kaufmann.

L.M. de Campos, J.M. Fernández-Luna, J.A. Gámez, and J.M. Puerta. 2002. Ant colony optimization for learning Bayesian networks. International Journal of Approximate Reasoning, 31:511-549.

D. Heckerman, D. Geiger, and D.M. Chickering. 1995. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243.

D. Heckerman, C. Meek, and G. Cooper. 1999. A Bayesian approach to causal discovery. In C. Glymour and G.F. Cooper, editors, Computation, Causation, and Discovery, pages 141-165. AAAI Press.

D. Heckerman. 1995. A tutorial on learning with Bayesian networks. Technical Report MSR-TR-95-06, Microsoft Research.

S. Kullback. 1968. Information Theory and Statistics. Dover, New York.

A.L. Madsen, M. Lang, U.F. Kjærulff, and F. Jensen. 2003. The Hugin tool for learning networks. In T.D. Nielsen and N.L. Zhang, editors, Proceedings of ECSQARU 2003, pages 594-605. Springer-Verlag.

D. Margaritis. 2003. Learning Bayesian Network Model Structure from Data. Ph.D. thesis, School of Computer Science, Carnegie Mellon University.

S. Moral. 2004. An empirical comparison of score measures for independence. In Proceedings of the Tenth International Conference IPMU 2004, Vol. 2, pages 1307-1314.

J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo.

P. Spirtes, C. Glymour, and R. Scheines. 1993. Causation, Prediction and Search. Springer-Verlag, Berlin.

H. Steck and V. Tresp. 1999. Bayesian belief networks for data mining. In Proceedings of the 2nd Workshop on Data Mining und Data Warehousing als Grundlage moderner entscheidungsunterstuetzender Systeme, pages 145-154.

S. van Dijk, L.C. van der Gaag, and D. Thierens. 2003. A skeleton-based approach to learning Bayesian networks from data. In Nada Lavrač, Dragan Gamberger, Ljupčo Todorovski, and Hendrik Blockeel, editors, Proceedings of the Seventh Conference on Principles and Practice of Knowledge Discovery in Databases, pages 132-143. Springer-Verlag.



Learning Complex Bayesian Network Features for Classification
Péter Antal, András Gézsi, Gábor Hullám and András Millinghoffer
Department of Measurement and Information Systems
Budapest University of Technology and Economics

Abstract
The increasing complexity of the models, the abundant electronic literature and the
relative scarcity of the data make it necessary to use the Bayesian approach to complex
queries based on prior knowledge and structural models. In the paper we discuss the
probabilistic semantics of such statements, the computational challenges and possible
solutions of Bayesian inference over complex Bayesian network features, particularly over
features relevant in the conditional analysis. We introduce a special feature called Markov
Blanket Graph. Next we present an application of the ordering-based Monte Carlo method
over Markov Blanket Graphs and Markov Blanket sets.

In the Bayesian approach to a structural feature F with values F(G) ∈ {fi}, i = 1, ..., R, we are interested in the feature posterior induced by the model posterior given the observations DN, where G denotes the structure of the Bayesian network (BN):

    p(fi | DN) = Σ_G 1(F(G) = fi) p(G | DN)    (1)

The importance of such inference results from (1) the frequently impractically high sample and computational complexity of the complete model, (2) a subsequent Bayesian decision-theoretic phase, (3) the availability of stochastic methods for estimating such posteriors, and (4) the focusedness of the data and the prior on certain aspects (e.g. by the pattern of missing values or by better understood parts of the model). Correspondingly, there is a general expectation that for a small amount of data some properties of complex models can be inferred with high confidence and relatively low computational cost, preserving a model-based foundation.

The irregularity of the posterior over the discrete model space of Directed Acyclic Graphs (DAGs) poses serious challenges when such feature posteriors are to be estimated. This induced research on the application of Markov Chain Monte Carlo (MCMC) methods for elementary features (Madigan et al., 1996; Friedman and Koller, 2000). This paper extends these results by investigating Bayesian inference about BN features with high cardinality, relevant in classification. In Section 1 we present a unified view of BN features enriched with free-text annotations as a probabilistic knowledge base (pKB) and discuss the corresponding probabilistic semantics. In Section 2 we overview the earlier approaches to feature learning. In Section 3 we discuss structural BN features and introduce a special feature called the Markov Blanket Graph or Mechanism Boundary Graph. Section 4 discusses its relevance in conditional modeling. In Section 5 we report an algorithm using ordering-based MCMC methods to perform inference over Markov Blanket Graphs and Markov Blanket sets. Section 6 presents results for the ovarian tumor domain.

1 BN features in pKBs

Probabilistic and causal interpretations of BNs ensure that structural features can express a wide range of relevant concepts based on conditional independence statements and causal assertions (Pearl, 1988; Pearl, 2000; Spirtes et al., 2001). To enrich this approach with subjective domain knowledge via free-text annotations, we introduce the concept of a Probabilistic Annotated Bayesian Network knowledge base.
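Before turning to the definitions, note that the sum of Eq. 1 is approximated in practice by averaging the feature indicator over DAGs sampled from the posterior. A minimal illustrative sketch (the sampler itself, e.g. one of the MCMC schemes discussed in this paper, is assumed to be given):

```python
def feature_posterior(samples, feature, value):
    """Monte Carlo estimate of Eq. 1: the fraction of DAG samples G,
    drawn approximately from p(G | DN), on which the structural
    feature F(G) equals `value`."""
    return sum(1 for g in samples if feature(g) == value) / len(samples)
```

Here `feature` can be any structural map over DAGs, for example one returning the Markov blanket of a fixed target variable, so the same estimator covers both simple and complex features.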
Definition 1. A Probabilistic Annotated Bayesian Network knowledge base K for a fixed set V of discrete random variables is a first-order logical knowledge base including standard graph, string and BN related predicates, relations and functions. Let Gw represent a target DAG structure including all the target random variables. It includes free-text descriptions for the subgraphs and for their subsets. We assume that the models M of the knowledge base vary only in Gw (i.e. there is a mapping G → M) and a distribution p(Gw | ξ) is available.

A sentence φ is any well-formed first-order formula in K, the probability of which is defined as the expectation of its truth:

    E_p(M|K)[φ_M] = Σ_G φ_M(G) p(G | K),

where φ_M(G) denotes its truth-value in the model M(G). This hybrid approach defines a distribution over models by combining a logical knowledge base with a probabilistic model. The logical knowledge base describes the certain knowledge in the domain, defining a set of models (legal worlds), and the probabilistic part (p(Gw | ξ)) expresses the uncertain knowledge over these worlds.

2 Earlier works

To avoid the statistical and computational burden of identifying complete models, related local algorithms for identifying causal relations were reported in (Silverstein et al., 2000) and in (Glymour and Cooper, 1999; Mani and Cooper, 2001). The majority of feature learning algorithms target the learning of relevant variables for the conditional modeling of a central variable, i.e. they target the so-called feature subset selection (FSS) problem (Kohavi and John, 1997). Examples are the Markov Blanket Approximating Algorithm (Koller and Sahami, 1996) and the Incremental Association Markov Blanket algorithm (Tsamardinos and Aliferis, 2003). The subgraphs of a BN as features were targeted in (Pe'er et al., 2001). The bootstrap approach inducing confidence measures for features such as compelled edges, Markov blanket membership and pairwise precedence was investigated in (Friedman et al., 1999).

On the contrary, the Bayesian framework offers many advantages, such as the normative, model-based combination of prior and data allowing unconstrained application in the small sample region. Furthermore, the feature posteriors can be embedded in a probabilistic knowledge base and they can be used to induce priors for other model spaces and for a subsequent learning. Buntine (1991) proposed the concept of a posterior knowledge base conditioned on a given ordering for the analysis of BN models. Cooper and Herskovits (1992) discussed the general use of the posterior over BN structures to compute the posterior of arbitrary features. Madigan et al. (1996) proposed an MCMC scheme over the space of DAGs and orderings of the variables to approximate Bayesian inference. Heckerman et al. (1997) considered the application of the full Bayesian approach to causal BNs. Another MCMC scheme, the ordering-based MCMC method utilizing the ordering of the variables, was reported in (Friedman and Koller, 2000). They developed and used a closed form for the order conditional posteriors of Markov blanket membership, besides the earlier closed form for the parental sets.

3 BN features

The prevailing interpretation of BN feature learning assumes that the feature set is significantly simpler than the complete domain model, providing an overall characterization as marginals, and that the number of features and their values is tractable (e.g. linear or quadratic in the number of variables). Another interpretation is to identify high-scoring arbitrary subgraphs or parental sets and Markov blanket subsets, and to estimate their posteriors. A set of simple features means a fragmentary representation of the distribution over the complete domain model from multiple, though simplified, aspects, whereas using a given complex feature means a focused representation from a single, but complex, point of view. A feature F is complex if



the number of its values is exponential in the number of domain variables. First we cite a central concept and a theorem about relevance for the variables V = {X1, ..., Xn} (Pearl, 1988).

Definition 2. A set of variables MB(Xi) is called the Markov Blanket of Xi w.r.t. the distribution P(V) if (Xi ⊥ V \ MB(Xi) | MB(Xi)). A minimal Markov blanket is called a Markov boundary.

Theorem 1. If a distribution P(V) factorizes w.r.t. a DAG G, then

    ∀ i = 1, ..., n : (Xi ⊥ V \ bd(Xi, G) | bd(Xi, G))_P,

where bd(Xi, G) denotes the set of parents, children and the children's other parents of Xi.

So the set bd(Xi, G) is a Markov blanket of Xi. We will also refer to bd(Xi, G) as a Markov blanket of Xi in G, using the notation MB(Xi, G) and implicitly assuming that P factorizes w.r.t. G.

The induced symmetric pairwise relation between Xi and Xj is the Markov Blanket Membership MBM(Xi, Xj, G) w.r.t. G (Friedman et al., 1999):

    MBM(Xi, Xj, G) ⇔ Xj ∈ bd(Xi, G)    (2)
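The set bd(Xi, G) of Theorem 1 and the MBM relation of Eq. 2 can be read off a DAG directly. An illustrative helper (not from the paper), with the DAG given as a mapping from each node to its set of parents:

```python
def markov_blanket(parents, x):
    """bd(x, G): the parents of x, the children of x, and the
    children's other parents (Theorem 1), for a DAG represented as
    {node: set of parent nodes}."""
    children = {v for v, ps in parents.items() if x in ps}
    blanket = set(parents[x]) | children
    for c in children:
        blanket |= set(parents[c]) - {x}
    return blanket
```

The symmetric membership relation of Eq. 2 is then simply `Xj in markov_blanket(parents, Xi)`, which holds iff `Xi in markov_blanket(parents, Xj)`.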
Finally, we define the Markov Blanket Graph.

Definition 3. A subgraph of G is called the Markov Blanket Graph or Mechanism Boundary Graph MBG(Xi, G) of a variable Xi if it includes the nodes from MB(Xi, G) and the incoming edges into Xi and into its children.

It is easy to show that the characteristic property of the MBG feature is that it completely defines the distribution P(Y | V \ Y) by the local dependency models of Y and its children in a BN model G, in the case of point parameters (G, θ) and of parameter priors satisfying global parameter independence (Spiegelhalter and Lauritzen, 1990) and parameter modularity (Heckerman et al., 1995). This property offers two interpretations of the MBG feature. From a probabilistic point of view the MBG(G) feature defines an equivalence relation over the DAGs w.r.t. P(Y | V \ Y), but clearly the MBG feature is not a unique representative of a conditionally equivalent class of BNs. From a causal point of view, this feature uniquely represents the minimal set of mechanisms including Y. In short, under the conditions mentioned above, this structural feature of the causal BN domain model is necessary and sufficient to support the manual exploration and automated construction of a conditional dependency model.

There is no closed formula for the posterior p(MBG(Y, G)), which excludes the direct use of the MBG space in optimization or in Monte Carlo methods. However, there exists a formula for the order conditional posterior with polynomial time complexity if the size of the parental sets is bounded by k:

    p(MBG(Y, G) = mbg | ≺, DN) = p(pa(Y, mbg) | ≺, DN)
        × ∏_{Y ≺ Xi, Y ∈ pa(Xi, mbg)} p(pa(Xi, mbg) | ≺, DN)
        × ∏_{Y ≺ Xi, Y ∉ pa(Xi, mbg)} Σ_{pa(Xi): Y ∉ pa(Xi)} p(pa(Xi) | ≺, DN)    (3)

The cardinality of the MBG(Y) space is still super-exponential (even if the number of parents is bounded by k). Consider an ordering of the variables such that Y is the first and all the other variables are children of it; then the parental sets can be selected independently, so the number of alternatives is of the order (n − 1)^(n²) (or (n − 1)^((k−1)(n−1))). However, if Y is the last in the ordering, then the number of alternatives is of the order 2^(n−1) (or (n − 1)^k). In the case of MBG(Y, G), a variable Xi can be (1) non-occurring in the MBG, (2) a parent of Y (Xi ∈ pa(Y, G)), (3) a child of Y (Xi ∈ ch(Y, G)), or (4) a (pure) other parent (Xi ∉ pa(Y, G) and Xi ∈ pa(ch(Y)j)). These types correspond to the irrelevant (1) and strongly relevant (2, 3, 4) categories (see Def. 4). The number of DAG models G(n) compatible with a given MBG and ordering can be computed as follows: the contribution of the variables Xi ≺ Y without any constraint and the contribution of the variables Y ≺ Xi that are



not children of Y, which is still 2^O(k·n·log n) (note that certain sparse graphs are compatible with many orderings).

4 Features in conditional modeling

In the conditional Bayesian approach the relevance of predictor variables (features in this context) can be defined in an asymptotic, algorithm-, model- and loss-free way as follows.

Definition 4. A feature Xi is strongly relevant iff there exist some xi, y and si = x1, ..., xi−1, xi+1, ..., xn for which p(xi, si) > 0 such that p(y | xi, si) ≠ p(y | si). A feature Xi is weakly relevant iff it is not strongly relevant and there exists a subset of features Si′ of Si for which there exist some xi, y and si′ for which p(xi, si′) > 0 such that p(y | xi, si′) ≠ p(y | si′). A feature is relevant if it is either weakly or strongly relevant; otherwise it is irrelevant (Kohavi and John, 1997).

In the so-called filter approach to feature selection we have to select a minimal subset X′ which fully determines the conditional distribution of the target (p(Y | X) = p(Y | X′)). If the conditional modeling is not applicable and a domain model-based approach is necessary, then the Markov boundary property (feature) seems to be an ideal candidate for identifying relevance. The following theorem gives a sufficient condition for uniqueness and minimality (Tsamardinos and Aliferis, 2003).

Theorem 2. If the distribution P is stable w.r.t. the DAG G, then the variables bd(Y, G) form a unique and minimal Markov blanket of Y, MB(Y). Furthermore, Xi ∈ MB(Y) iff Xi is strongly relevant.

However, the MBG feature provides a more detailed description of relevancies. As an example, consider that a logistic regression (LR) model without interaction terms and a Naive BN model can be made conditionally equivalent using a local and transparent parameter transformation. If the distribution contains additional dependencies, then the induced conditional distribution has to be represented by an LR model with interaction terms.

5 Estimating complex features

The basic task is the estimation of the expectation of a given random variable over the space of DAGs, with a specified confidence level, as in Eq. 1. We assume complete data, discrete domain variables, multinomial local conditional distributions and Dirichlet parameter priors. This ensures efficiently computable closed formulas for the (unnormalized) posteriors of DAGs. As this posterior cannot be sampled directly and the construction of an approximating distribution is frequently not feasible, the standard approach is to use MCMC methods such as Metropolis-Hastings over the DAG space (see e.g. (Gamerman, 1997; Gelman et al., 1995)). The DAG-based MCMC method for estimating a given expectation is generally applicable, but for certain types of features, such as Markov blanket membership, an improved method, the so-called ordering-based MCMC method, can be applied, which utilizes closed forms of the order conditional feature posteriors computable in O(n^(k+1)) time, where k denotes the maximum number of parents (Friedman and Koller, 2000).

In these approaches the problem is simplified to the estimation of separate posteriors. However, the number of target features can be as high as 10^4-10^6 even for a given type of pairwise features and moderate domain complexity. This calls for a decision-theoretic treatment of the selection and estimation of the features, but here we use a simplified approach targeting the selection-estimation of the K most probable feature values. Because of the exponential number of feature values, a search method has to be applied either iteratively or in an integrated fashion. The first approach requires the offline storage of orderings and corresponding common factors, so we investigated the latter option. The integrated feature selection-estimation is particularly relevant for the ordering-based MC methods, because it does not generate implicitly high-scoring features, and features that are not part of the solution cause extra computational costs in estimation.

The goal of the search within the MC cycle at step l is the generation of MBGs with high



order conditional posterior, potentially using the already generated MBGs and the posteriors p(MBG | ≺l, DN). To facilitate the search we define an order conditional MBG state space, based on the observation that the order conditional MAP MBG can be found in O(n^(k+1)) time with only a negligible constant increase. An MBG state is represented by an n≺ dimensional vector s, where n≺ is the number of variables not preceding the target variable Y in the ordering ≺:

    n≺ = Σ_{i=1..n} 1(Y ⪯ Xi)    (4)

The values are integers si = 0, ..., ri representing either separate parental sets or (in the case of Xi where Y ≺l Xi) a special set of parental sets not including the target variable. The product of the order conditional posteriors of the represented sets of parental sets gives the order conditional posterior of the represented MBG state, as in Eq. 3. We ensure that the conditional posteriors of the represented sets of parental sets are monotone decreasing w.r.t. their indices:

    ∀ si < si′ : p(si | DN, ≺) ≥ p(si′ | DN, ≺)    (5)

which can be constructed in O(n^(k+1) log(max_i ri)) time, where k is the maximum parental set size.
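The product structure behind this representation can be sketched as follows; an illustration with hypothetical inputs, not the authors' code: each coordinate si of a state selects one of its variable's candidate parental-set alternatives, whose log posteriors are stored in decreasing order as required by Eq. 5, and the log posterior of the state is the sum of the selected components.

```python
import math

def state_log_posterior(state, log_components):
    """Log of the order conditional posterior of an MBG state: the sum
    of the log posteriors of the parental-set alternatives selected by
    the coordinates of `state`.  `log_components[i]` is the list of log
    posteriors of variable i's alternatives, sorted decreasingly
    (cf. Eq. 5)."""
    return sum(log_components[i][si] for i, si in enumerate(state))
```

Because each component list is sorted decreasingly, the all-zero state is the order conditional MAP state, which is what makes the cheap MAP lookup mentioned above possible.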
This MBG space allows the application of efficient search methods. We experimented with direct sampling, top-sampling and a deterministic search. The direct sampling was used as a baseline, because it does not need the MBG space. The top-sampling method is biased towards sampling MBGs with high order conditional posterior, by sampling only from the L most probable sets of parental sets for each Y ⪯ Xi. The uniform-cost search constructs the MBG space, then performs a uniform-cost search up to a maximum number of MBGs or down to the threshold p(MBG_{MAP,≺} | ≺, DN)/κS.

The pseudo code for searching and estimating MAP values of the MBG feature of a given variable is shown in Alg. 1 (for simplicity, the estimation of simple classification features such as edge and MBM relations, and the estimation of the MB features of a given variable using the estimated MAP MBG collection, are not shown).

Algorithm 1 Search and estimation of classification features using the MBG-ordering spaces

Require: p(≺), p(pa(Xi) | ≺), k, R, ε, LS, κS, LT, M
Ensure: K MAP feature values with estimates
  Cache the order-free parental posteriors Θ = {∀ i, |pa(Xi)| ≤ k : p(pa(Xi) | DN)};
  Initialize the MCMC, the MBG-tree T, the MBM and edge posterior matrices R, E;
  Insert the a priori specified MBGs in T;
  for l = 0 to M do  {the sampling cycle}
    Draw the next ordering ≺l;
    Cache the order specific common factors for |pa(Xi)| ≤ k:
      p(pa(Xi) | ≺l) for all Xi,
      p(Y ∉ pa(Xi) | ≺l) for Y ≺l Xi;
    Compute p(≺l | DN);
    Construct the order conditional MBG-Subspace(≺, Θ, R, ε): S≺S = Search(≺, LS, κS);
    for all mbg ∈ S≺S do
      if mbg ∉ T then
        Insert(T, mbg);
        if LT < |T| then
          T = PruneToHPD(T, LT);
    for all mbg ∈ T do
      p(mbg | DN) += p(mbg | ≺l, DN);
  Report the K MAP MBGs from T;
  Report the K MAP MBs using the MBGs in T;

Parameters R and ε allow the restriction of the MBG subspace, separately for each dimension, to less than R values, by requiring that the corresponding posteriors are above the exp(−ε) ratio of the respective MAP value. The uniform-cost search starts from the order conditional MAP MBG, and stops after the expansion of LS states or if the most probable MBG in its search list drops below the 1/κS ratio of the order conditional posterior of the starting MBG. Generally, the expansion phase has high computational costs, but for large LT the cost of updating the MBGs in T is high as well. In order to maintain tractability the usage of more refined



methods such as partial updating are required. Within the explored OC domain, however, the full, exact update has acceptable costs if the size of the estimated MBG set is LT ∈ [10^5, 10^6]. This LT ensures that the newly inserted MBGs are not pruned before their estimates can reliably indicate their high-scoring potential, and it still allows an exact update. In larger domains this balance can be different, and the analysis of this question in general is left for future research.

The analysis of the MBG space showed that the conditional posteriors of the ranked parental sets after rank 10 are negligible, so subsequently we will report results using the values R = 20, ε = 4 and LS = 10^4, κS = 10^6. Note that the expansion with the LS conditionally most probable MBGs in each step does not guarantee that the LS most probable MBGs are estimated, not even the MAP MBG.

6 Results

We used a data set consisting of 35 discrete variables and 782 complete cases related to the preoperative diagnosis of ovarian cancer (see (Antal et al., 2004)).

First we report the estimation-selection of MB features for the central variable Pathology. We applied the heuristic deterministic search-estimation method in the inner cycle of the MCMC method. The length of the burn-in and of the MCMC simulation was 10000, the probability of the pairwise replace operator was 0.8, the parameter prior was the BDeu and the structure prior was a uniform prior over the parental set sizes (Friedman and Koller, 2000). The maximum number of parents was 4 (the posteriors of larger sets are insignificant). For preselected high-scoring MB values, after a 10000-step burn-in the single-chain convergence test of Geweke comparing averages has a z-score of approximately 0.5, and the R value of the multiple-chain method of Gelman-Rubin with 5 chains drops below 1.05 (Gamerman, 1997; Gelman et al., 1995). The variances of the MCMC estimates of these preselected

method for a single ordering, because a total ordering of the variables was available from an expert. Fig. 1 reports the estimated posteriors of the MAP MB sets for Pathology with their MBM-based approximated values assuming the independence of the MBM values, and Table 1 shows the members of the MB sets. Note that the two monotone decreasing curves correspond to independent rankings, one for the expert's total ordering and one for the unconstrained case. It also reports the MB set spanned by a prior BN specified by the expert (E), the MB set spanned by the MAP BN (BN*) and the set spanned by the MAP MBG (MBG*) (see Eq. 6). Furthermore, we generated another reference set (LR) from a conditional standpoint, using the logistic regression model class and the SPSS 14.0 software with the default settings for forward model construction (Hosmer and Lemeshow, 2000). MBp reports the result of the deterministic select-estimate method using the total ordering of the expert, and MB1, MB2, MB3 report the results of the unconstrained ordering-based MCMC with the deterministic select-estimate method. The variables FamHist, CycleDay, HormTherapy, Hysterectomy, Parity and PMenoAge are never selected, and the variables Volume, Ascites, Papillation, PapFlow, CA125, WallRegularity are always selected, so they are not reported.

The MBM-based approximation performs relatively well, particularly w.r.t. ranking in the case of the expert's ordering ≺0, but it performs poorly in the unconstrained case both w.r.t. estimations and ranks (see the difference of the MBp set to MB1 w.r.t. the variables Age, Meno, PI, TAMX, Solid).

We compared the MBG(Y, GMAP) and MB(Y, GMAP) feature values defined by the MAP BN structure GMAP against the MAP MBG feature value MBG(Y)MAP and the MAP MB feature value MB(Y)MAP, including
test feature values drop below 102 . We also the MB feature value defined by the MAP MBG
applied the deterministic search-estimation feature value M B(Y, M BG(Y )M AP )
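A Markov blanket feature value is a deterministic function of the structure, so its posterior can be estimated by tallying it across sampled structures, in the spirit of Eq. 6. A minimal sketch (the toy structures and the reduced variable set below are invented for illustration, not the paper's models):

```python
from collections import Counter

def markov_blanket(parents, y):
    """MB(y) in a DAG given as {node: set of parents}: the parents of y,
    the children of y, and the children's other parents."""
    mb = set(parents[y])
    for node, pa in parents.items():
        if y in pa:          # node is a child of y
            mb.add(node)
            mb |= pa - {y}   # the child's other parents ("spouses")
    return frozenset(mb)

# hypothetical sample of DAG structures drawn from a structure posterior
samples = [
    {"Pathology": set(), "Meno": {"Pathology"}, "RI": {"Pathology", "Meno"}},
    {"Pathology": set(), "Meno": {"Pathology"}, "RI": {"Meno"}},
    {"Pathology": set(), "Meno": {"Pathology"}, "RI": {"Pathology", "Meno"}},
]
counts = Counter(markov_blanket(s, "Pathology") for s in samples)
mb_map, freq = counts.most_common(1)[0]
print(sorted(mb_map), freq / len(samples))   # MAP MB set with estimate 2/3
```

The same tally, restricted to structures visited by the MCMC chain, yields the ranked MB posteriors plotted in Fig. 1.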

14 P. Antal, A. Gézsi, G. Hullám, and A. Millinghoffer


Table 1: Markov blanket sets of Pathology among the thirty-five variables.

            E  LR  BN MBG MBp MB1 MB2 MB3
Age         1   0   1   0   1   0   0   0
Meno        1   0   1   1   0   1   1   1
PMenoY      0   1   0   0   0   0   0   0
PillUse     1   0   0   0   0   0   0   0
Bilateral   1   0   1   1   1   1   1   1
Pain        0   1   0   0   0   0   0   0
Fluid       1   0   1   0   0   0   0   0
Septum      1   0   1   1   1   1   1   1
ISeptum     0   0   1   1   1   1   1   1
PSmooth     1   0   0   0   0   0   0   0
Loc.        1   1   0   0   0   1   0   1
Shadows     1   0   1   1   1   1   1   1
Echog.      1   0   1   1   1   1   1   1
ColScore    1   0   1   1   1   1   1   1
PI          1   1   0   0   1   0   0   0
RI          1   0   1   1   1   1   1   1
PSV         1   0   1   1   1   1   1   1
TAMX        1   1   0   1   1   0   0   0
Solid       1   1   1   1   1   0   1   0
FHBrCa      0   0   0   0   0   0   0   1
FHOvCa      0   0   0   1   0   0   0   0

Figure 1: The ranked posteriors and their MBM-based approximations of the 20 most probable MB(Pathology) sets for the single/unconstrained orderings. [The four curves, plotted against ranks 1-20 with posteriors between 0 and 0.1, are P(MB|≺,D), P(MB|≺,D,MBM), P(MB|D) and P(MB|D,MBM).]

G^MAP = argmax_G p(G | D_N)    (6)
MBG(Y)^MAP = argmax_{mbg(Y)} p(mbg(Y) | D_N)
MB(Y)^MAP = argmax_{mb(Y)} p(mb(Y) | D_N)

We performed the comparison using the best BN structure found in the MCMC simulation. The MAP MBG feature value MBG(Y)^MAP differed significantly from the MAP domain model, because of the additional Age and Fluid variables in the domain model. The MAP MB feature value MB(Y)^MAP similarly differs from the MB sets defined by the MAP domain models, for example w.r.t. the vascularization variables such as PI. Interestingly, the MAP MB feature value also differs from the MB feature value defined by the MAP MBG feature value MB(Y, MBG(Y)^MAP), for example w.r.t. the TAMX and Solid variables. In conclusion, these results, together with the comparison against simple feature-based analyses such as the MBM-based analysis reported in Fig. 1, show the relevance of the complex feature-based analysis.

We also constructed an offline probabilistic knowledge base containing 10^4 MAP MBGs. It is connected with the annotated BN knowledge base defined in Def. 1, which allows an offline exploration of the domain from the point of view of conditional modeling. The histograms of the number of parameters and inputs of the MBGs using only the fourteen most relevant variables are reported in Fig. 2.

Figure 2: The histogram of the number of parameters and inputs of the MAP MBGs. [A 3-D histogram with |Params| ranging over 100-800, |Inputs| over 8-14, and frequencies up to about 600.]

7 Conclusion

In the paper we presented a Bayesian approach for complex BN features, for the so-called Markov Blanket set and the Markov Blanket
Graph features. We developed and applied a select-estimate algorithm using ordering-based MCMC, which uses the efficiently computable order conditional posterior of the MBG feature and the proposed MBG space. The comparison of the most probable MB and MBG feature values with simple feature-based approximations and with complete domain modeling showed the separate significance of the analysis based on these complex BN features in the investigated medical domain. The proposed algorithm and the offline knowledge base in the introduced probabilistic annotated BN knowledge base context allow new types of analysis and fusion of expertise, data and literature.

Acknowledgements

We thank T. Dobrowiecki for his helpful comments. Our research was supported by grants from Research Council KUL: IDO (IOTA Oncology, Genetic networks).

References

P. Antal, G. Fannes, Y. Moreau, D. Timmerman, and B. De Moor. 2004. Using literature and data to learn Bayesian networks as clinical models of ovarian tumors. AI in Med., 30:257-281. Special issue on Bayesian Models in Med.

W. L. Buntine. 1991. Theory refinement of Bayesian networks. In Proc. of the 7th Conf. on Uncertainty in Artificial Intelligence, pages 52-60. Morgan Kaufmann.

G. F. Cooper and E. Herskovits. 1992. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309-347.

N. Friedman and D. Koller. 2000. Being Bayesian about network structure. In Proc. of the 16th Conf. on Uncertainty in Artificial Intelligence, pages 201-211. Morgan Kaufmann.

N. Friedman, M. Goldszmidt, and A. Wyner. 1999. Data analysis with Bayesian networks: A bootstrap approach. In Proc. of the 15th Conf. on Uncertainty in Artificial Intelligence, pages 196-205. Morgan Kaufmann.

D. Gamerman. 1997. Markov Chain Monte Carlo. Chapman & Hall, London.

A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. 1995. Bayesian Data Analysis. Chapman & Hall, London.

C. Glymour and G. F. Cooper. 1999. Computation, Causation, and Discovery. AAAI Press.

D. Heckerman, D. Geiger, and D. Chickering. 1995. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243.

D. Heckerman, C. Meek, and G. Cooper. 1997. A Bayesian approach to causal discovery. Technical Report MSR-TR-97-05.

D. W. Hosmer and S. Lemeshow. 2000. Applied Logistic Regression. Wiley & Sons, Chichester.

R. Kohavi and G. H. John. 1997. Wrappers for feature subset selection. Artificial Intelligence, 97:273-324.

D. Koller and M. Sahami. 1996. Toward optimal feature selection. In International Conference on Machine Learning, pages 284-292.

D. Madigan, S. A. Andersson, M. Perlman, and C. T. Volinsky. 1996. Bayesian model averaging and model selection for Markov equivalence classes of acyclic digraphs. Comm. Statist. Theory Methods, 25:2493-2520.

S. Mani and G. F. Cooper. 2001. A simulation study of three related causal data mining algorithms. In International Workshop on Artificial Intelligence and Statistics, pages 73-80. Morgan Kaufmann, San Francisco, CA.

J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Francisco, CA.

J. Pearl. 2000. Causality: Models, Reasoning, and Inference. Cambridge University Press.

D. Pe'er, A. Regev, G. Elidan, and N. Friedman. 2001. Inferring subnetworks from perturbed expression profiles. Bioinformatics, Proceedings of ISMB 2001, 17(Suppl. 1):215-224.

C. Silverstein, S. Brin, R. Motwani, and J. D. Ullman. 2000. Scalable techniques for mining causal structures. Data Mining and Knowledge Discovery, 4(2/3):163-192.

D. J. Spiegelhalter and S. L. Lauritzen. 1990. Sequential updating of conditional probabilities on directed acyclic graphical structures. Networks, 20:579-605.

P. Spirtes, C. Glymour, and R. Scheines. 2001. Causation, Prediction, and Search. MIT Press.

I. Tsamardinos and C. Aliferis. 2003. Towards principled feature selection: Relevancy, filters, and wrappers. In Proc. of Artificial Intelligence and Statistics, pages 334-342.



Literature Mining using Bayesian Networks
Péter Antal and András Millinghoffer
Department of Measurement and Information Systems
Budapest University of Technology and Economics

Abstract
In biomedical domains, free text electronic literature is an important resource for knowl-
edge discovery and acquisition, particularly to provide a priori components for evaluating
or learning domain models. Aiming at the automated extraction of this prior knowledge
we discuss the types of uncertainties in a domain with respect to causal mechanisms,
formulate assumptions about their report in scientific papers and derive generative prob-
abilistic models for the occurrences of biomedical concepts in papers. These results allow
the discovery and extraction of latent causal dependency relations from the domain lit-
erature using minimal linguistic support. Contrary to the currently prevailing methods,
which assume that relations are sufficiently formulated for linguistic methods, our ap-
proach assumes only the report of causally associated entities without their tentative
status or relations, and can discover new relations and prune redundancies by providing
a domain-wide model. Therefore the proposed Bayesian network based text mining is an
important complement to the linguistic approaches.

1 Introduction

Rapid accumulation of biological data and the corresponding knowledge has posed a new challenge of making this voluminous, uncertain and frequently inconsistent knowledge accessible. Despite recent trends to broaden the scope of formal knowledge bases in biomedical domains, free text electronic literature is still the central repository of the domain knowledge. This central role will probably be retained in the near future, because of the rapidly expanding frontiers. The extraction of explicitly stated knowledge or the discovery of implicitly present latent knowledge requires various techniques ranging from purely linguistic approaches to machine learning methods. In this paper we investigate a domain-model based approach to statistical inference about dependence and causal relations given the literature, using minimal linguistic preprocessing. We use Bayesian networks (BNs) as causal domain models to introduce generative models of publication, i.e. we examine the relation of domain models and generative models of the corresponding literature.

In a wider sense our work provides support to statistical inference about the structure of the domain model. This is a two-step process, which consists of the reconstruction of the beliefs in mechanisms from the literature by model learning, and of their usage in a subsequent learning phase. Here, the Bayesian framework is an obvious choice. Earlier applications of text mining provided results for domain experts or data analysts, whereas our aim is to go one step further and use the results directly in the statistical learning of the domain models.

The paper is organized as follows. Section 2 presents a unified view of the literature, the data and their models. In Section 3 we review the types of uncertainties in biomedical domains from a causal, mechanism-oriented point of view. In Section 4 we summarize recent approaches to information extraction and literature mining based on natural language processing (NLP) and local analysis of occurrence patterns. In Section 5 we propose generative probabilistic models for the occurrences of biomedical concepts in scientific papers. Section 6 presents textual aspects of the application domain, the diagnosis of ovarian cancer. Section 7 reports results on learning BNs given the literature.

2 Fusion of literature and data

The relation of experimental data D_N, probabilistic causal domain models formalized as BNs (G, θ), domain literature D^L_{N'} and models of publication (G^L, θ^L) can be approached at different levels. For the moment, let us assume that probabilistic models are available describing the generation of observations P(D_N | (G, θ)) and of the literature P(D^L_{N'} | (G^L, θ^L)). The latter may include stochastic grammars for modeling the linguistic aspects of publication; however, we will assume that the literature has a simplified agrammatical representation and that the corresponding generative model can be formalized as a BN (G^L, θ^L) as well.

The main question is the relation of P(G, θ) and P(G^L, θ^L). In the most general approach the hypothetical posteriors P(G, θ | D_N, ξ_i), expressing personal beliefs over the domain models conditional on the experiments and the personal background knowledge ξ_i, determine or at least influence the parameters of the model (G^L, θ^L) in P(D^L_{N'} | (G^L, θ^L), ξ_i).

The construction or the learning of a full-fledged decision theoretic model of publication is currently not feasible regarding the state of quantitative modeling of scientific research and publication policies, not to mention the cognitive and even stylistic aspects of explanation, understanding and learning (Rosenberg, 2000). In a severely restricted approach we focus only on the effect of the belief in domain models P(G, θ) on that in publication models P(G^L, θ^L). We assume that this transformation is local, i.e. that there is a simple probabilistic link between the model spaces, specifically between the structure of the domain model and the structure and parameters of the publication model p(G^L, θ^L | G). Probabilistically linked model spaces allow the computation of the posterior over domain models given the literature data as:

P(G | D^L_{N'}) = [P(G) / P(D^L_{N'})] Σ_{G^L} P(D^L_{N'} | G^L) P(G^L | G).

The formalization (the chain D_N - G - G^L - D^L_{N'}) also allows the computation of the posterior over the domain models given both the clinical and the literature data as:

P(G | D_N, D^L_{N'}) = P(G) [P(D^L_{N'} | G) / P(D^L_{N'})] [P(D_N | G) / P(D_N | D^L_{N'})]
                     ∝ P(G) P(D_N | G) Σ_{G^L} P(D^L_{N'} | G^L) P(G^L | G).

The order of the factors shows that the prior is first updated by the literature data and then by the clinical data. A considerable advantage of this approach is the integration of literature and clinical data at the lowest level and not through feature posteriors, i.e. not by using literature posteriors in feature-based priors for the (clinical) data analysis (Antal et al., 2004).

We will assume that a bijective relation exists between the domain model structures G and the publication model structures G^L (T(G) = G^L), whereas the parameters θ^L may encode additional aspects of publication policies and explanation. We will focus on the logical link between the structures, where the posterior given the literature and possibly the clinical data is:

P(G | D_N, D^L_{N'}, ξ) ∝ P(G | ξ) P(D_N | G) P(D^L_{N'} | T(G)).    (1)

This shows the equal status of the literature and the clinical data. In integrated learning from heterogeneous sources, however, the scaling of the sources is advisable to express our confidence in them.

3 Concepts, associations, causation

Frequently a biomedical domain can be characterized by a dominant type of uncertainty w.r.t. the causal mechanisms. Such types of uncertainty show a certain sequentiality, described below, related to the development of biomedical knowledge, though a strictly sequential view is clearly an oversimplification.
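The two-stage update of Section 2 (the prior first conditioned on the literature data, then on the clinical data) coincides with the one-shot joint update, as the factorization shows. A toy sketch over a three-element structure space; all probabilities below are invented for illustration, and the bijection T(G) = G^L lets the literature term reduce to P(D^L | T(G)):

```python
# toy structure space {G1, G2, G3}; all numbers are made up for illustration
prior    = {"G1": 0.5, "G2": 0.3, "G3": 0.2}
lik_lit  = {"G1": 0.02, "G2": 0.10, "G3": 0.05}  # P(D^L_N' | T(G))
lik_clin = {"G1": 0.30, "G2": 0.20, "G3": 0.25}  # P(D_N | G)

def normalise(scores):
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# joint update: P(G | D_N, D^L_N') proportional to P(G) P(D^L_N'|T(G)) P(D_N|G)
joint = normalise({g: prior[g] * lik_lit[g] * lik_clin[g] for g in prior})

# sequential update: literature data first, clinical data second
after_lit = normalise({g: prior[g] * lik_lit[g] for g in prior})
seq = normalise({g: after_lit[g] * lik_clin[g] for g in prior})

print(joint)  # identical to seq: the conditioning order does not matter
```

In practice a scaling exponent on each likelihood term would express the relative confidence in the two sources, as noted at the end of Section 2.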

18 P. Antal and A. Millinghoffer
(1) Conceptual phase: Uncertainty over the domain ontology, i.e. the relevant entities.
(2) Associative phase: Uncertainty over the association of entities, reported in the literature as indirect, associative hypotheses, frequently as clusters of entities. Though we accept the general view of causal relations behind associations, we assume that the exact causal functions and direct relations are unknown.
(3) Causal relevance phase: (Existential) uncertainty over causal relations (i.e. over mechanisms). Typically, direct causal relations are theoretized as processes and mechanisms.
(4) Causal effect phase: Uncertainty over the strength of the autonomous mechanisms embodying the causal relations.

In this paper we assume that the target domain is already in the Associative or Causal phase, i.e. that the entities are more or less agreed upon, but their causal relations are mostly in the discovery phase. This holds in many biomedical domains, particularly in those linking biological and clinical levels. There the Associative phase is a crucial but lengthy knowledge accumulation process, where a wide range of research methods is used to report associated pairs or clusters of the domain entities. These methods admittedly produce causally oriented associative relations which are partial, biased and noisy.

4 Literature mining

Literature mining methods can be classified into bottom-up (pairwise) and top-down (domain model based) methods. Bottom-up methods attempt to identify individual relationships, and the integration is left to the domain expert. Linguistic approaches assume that the individual relations are sufficiently known, formulated and reported for automated detection methods. On the contrary, top-down methods concentrate on identifying consistent domain models by jointly analyzing the domain literature. They assume that mainly causally associated entities are reported, with or without tentative relations and direct structural knowledge. Their linguistic formulation is highly variable, not conforming to simple grammatical characterization. Consequently top-down methods typically use agrammatical text representations and minimal linguistic support. They autonomously prune the redundant, inconsistent or indirect relations by evaluating consistent domain models, and they can deliver results in domains already in the Associative phase.

Until recently mainly bottom-up methods have been analyzed in the literature: linguistic approaches extract explicitly stated relations, possibly with qualitative ratings (Proux et al., 2000; Hirschman et al., 2002); co-occurrence analysis quantifies the pairwise relations of variables by their relative frequency (Stapley and Benoit, 2000; Jenssen et al., 2001); kernel similarity analysis uses the textual descriptions or the occurrence patterns of variables in publications to quantify their relation (Shatkay et al., 2002); Swanson and Smalheiser (1997) discover relationships through the heuristic pattern analysis of citations and co-occurrences; in (Cooper, 1997) and (Mani and Cooper, 2000) local constraints were applied to cope with possible hidden confounders and to support the discovery of causal relations; the joint statistical analysis in (Krauthammer et al., 2002) fits a generative model to the temporal pattern of corroborations, refutations and citations of individual relations to identify true statements. The top-down method of the joint statistical analysis of de Campos (1998) learns a restricted BN thesaurus from the occurrence patterns of words in the literature. Our approach is closest to this and to those of Krauthammer et al. and Mani.

The reconstruction of informative and faithful priors over domain mechanisms or models from research papers is further complicated by the multiple aspects of uncertainty about the existence, scope (conditions of validity), strength, causality (direction), robustness to perturbation and relevance of mechanisms, and by the incompleteness of reported relations, because they are assumed to be well-known parts of common sense knowledge or of the paradigmatic, already reported knowledge of the community.
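As an illustration of the kernel similarity idea mentioned above (and of the thresholded relevance representation used later in Eq. 5), here is a minimal TF-IDF/cosine sketch; the kernel text, the abstracts and the reuse of the 0.1 threshold are toy stand-ins:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain TF-IDF vectors (term frequency x inverse document frequency)."""
    n = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenised for t in set(toks))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    return [[Counter(toks)[t] * idf[t] for t in vocab] for toks in tokenised]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# toy kernel document and abstracts (invented for illustration)
kernel = "ca125 serum tumor marker"
docs = ["elevated ca125 serum marker in ovarian tumor",
        "ultrasound morphology of the septum"]
vecs = tfidf_vectors([kernel] + docs)
relevance = [1 if 0.1 < cosine(vecs[0], v) else 0 for v in vecs[1:]]
print(relevance)  # [1, 0]: only the first abstract passes the threshold
```

A production system would use stemming, synonym lists and a tuned threshold, but the binarised relevance pattern is exactly what the literature BNs below take as input.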

Literature Mining using Bayesian Networks 19


5 BN models of publications

Considering (biomedical) abstracts, we adopt the central role of causal understanding and explanation in scientific research and publication (Thagard, 1998). Furthermore, we assume that the contemporary (collective) uncertainty over mechanisms is an important factor influencing the publications. According to this causal stance, we accept the causal relevance interpretation, more specifically the explained (explanandum) and explanatory (explanans) statuses; in addition, we allow the described status. This is appealing, because in the assumed causal publications both the name occurrence and the preprocessing kernel similarity method (see Section 6) express the presence or relevance of the concept corresponding to the respective variable. This implicitly means that we assume that publications contain either descriptions of the domain concepts without considering their relations, or the occurrences of entities participating in known or latent causal relations. We assume that there is only one causal mechanism for each parental set, so we will equate a given parental set and the mechanism based on it.

Furthermore, we assume that mainly positive statements are reported, so we treat negation and refutation as noise, and that exclusive hypotheses are reported, i.e. we treat alternatives as one aggregated hypothesis. Additionally, we presume that the dominant type of publications is causally (forward) oriented. We attempt to model the transitive nature of causal explanation over mechanisms, e.g. that causal mechanisms with a common cause or with a common effect are surveyed in an article, or that subsequent causal mechanisms are tracked to demonstrate a causal chain. On the other hand, we also have to model the lack of transitivity, i.e. the incompleteness of causal explanations, e.g. that certain variables are assumed to be explanatory and others potentially explained, except for survey articles that describe an overall domain model. Finally, we assume that the reports of the causal mechanisms and the univariate descriptions are independent of each other.

5.1 The intransitive publication model

The first generative model is a two-layer BN. The upper-layer variables represent the pragmatic functions (described or explanandum) of the corresponding concepts, while the lower-layer variables represent their observable occurrences (described, explanatory or explained). Upper-layer variables can be interpreted as the intentions of the authors or as the property of the given experimental technique. We assume that lower-layer variables are influenced only by the upper-layer ones denoting the corresponding mechanisms, and not by any other external quantities, e.g. by the number of the reported entities in the paper. A further assumption is that the belief in a compound mechanism is the product of the beliefs in the pairwise dependencies. Consequently we use noisy-OR canonic distributions for the children in the lower layer. In a noisy-OR local dependency (Pearl, 1988), the edges can be labeled with a parameter inhibiting the OR function, which can also be interpreted structurally as the probability of an implicative edge.

This model extends the atomistic, individual-mechanism oriented information extraction methods by supporting the joint learning of all the mechanisms, i.e. by the search for a domain-wide coherent model. However, it still cannot model the dependencies between the reported associations, and the presence of hidden variables considerably increases the computational complexity of parameter and structure learning.

5.2 The transitive publication model

To devise a more advanced model, we relax the assumption of independence between the variables in the upper layer representing the pragmatic functions, and we adapt the models to the bag-of-words representation of publications (see Section 6). Consequently we analyze the possible pragmatic functions corresponding to the domain variables, which could be represented by hidden variables. We assume here that the explanatory roles of a variable are not differentiated, and that if a variable is explained, then it can be explanatory for any other
variable. We also assume full observability of causal relevance, i.e. that the lack of occurrence of an entity in a paper means causal irrelevance w.r.t. the mechanisms and variables in the paper, and not a neutral omission. These assumptions allow the merging of the explanatory, explained and described statuses with the observable reported status, i.e. we can represent them jointly with a single binary variable. Note that these assumptions remain tenable in the case of reports of experiments, where the pattern of relevancies has a transitive-causal bias.

These assumptions would imply that we can model only full survey papers, but the general, unconstrained multinomial dependency model used in the transitive BNs provides enough freedom to avoid this. A possible semantics of the parameters of a binary, transitive literature BN can be derived from the causal stance that the presence of an entity X_i is influenced only by the presence of its potential explanatory entities, i.e. its parents. Consequently, P(X_i = 1 | Pa_{X_i} = pa_{X_i}) can be interpreted as the belief that the present parental variables can explain the entity X_i (Pa_{X_i} denotes the parents of X_i, and Pa_{X_i} → X_i denotes the parental substructure). In that way the parameters of a complete network can represent the priors for parental sets compatible with the implied ordering:

P(X_i = 1 | Pa_{X_i} = pa_{X_i}) = P(Pa_{X_i} = pa_{X_i})    (2)

where for notational simplicity pa(X_i) denotes both the parental set and a corresponding binary representation.

The multinomial model allows entity-specific modifications at each node, combined into the parameters of the conditional probability model, that are independent of the other variables (i.e. unstructured noise). This permits the modeling of the description of the entities (P(X_i^D)), of the beginning of the transitive scheme of causal explanation (P(X_i^B)), and of the reverse effect of interrupting the transitive scheme (P(X_i^I)). These auxiliary variables model simplistic interventions, i.e. the authors' intentions about publishing an observational model. Note that a backward model corresponding to an effect-to-cause or diagnostic interpretation and explanation method has a different structure with opposite edge directions.

In the Bayesian framework there is also structural uncertainty, i.e. uncertainty over the structure of the generative models (literature BNs) themselves. So to compute the probability of a parental set Pa_{X_i} = pa_{X_i} given a literature data set D^L_{N'}, we have to average over the structures using the posterior given the literature data:

P(Pa_{X_i} = pa_{X_i} | D^L_{N'})    (3)
  = Σ_{(pa_{X_i} → X_i) ⊆ G} P(X_i = 1 | pa_{X_i}, G) P(G | D^L_{N'})
  ≈ Σ_G 1((pa_{X_i} → X_i) ⊆ G) P(G | D^L_{N'})    (4)

Consequently, the result of learning BNs from the literature can take multiple forms, e.g. a maximum a posteriori (MAP) structure and the corresponding parameters, or the posterior over the structures (Eq. 3). In the first case the parameters can be interpreted structurally and converted into a prior for a subsequent learning. In the latter case we neglect the parametric information, focusing on the structural constraints, and transform the posterior over the literature network structures into a prior over the structures of the real-world BNs (see Eq. 1).

6 The literature data sets

For our research we used the same collection of abstracts as that described in (Antal et al., 2004), which was a preliminary work using pairwise methods. The collection contains 2256 abstracts about ovarian cancer, mostly from between 1980 and 2002. A name, a list of synonyms and a text kernel is also available for each domain variable. The presence of the name (and synonyms) of a variable in a document is denoted with a binary value. Another binary representation of the publications is based on the kernel documents:

R^K_{ij} = 1 if 0.1 < sim(k_j, d_i), 0 else,    (5)

which expresses the relevance of kernel document k_j to document d_i using the term
frequency-inverse document frequency (TF- linking nodes. (Pure) Confounded (C) The two
IDF) vector representation and the cosine sim- nodes have a common ancestor. The relation
ilarity metric (Baeza-Yates and Ribeiro-Neto, is pure, if there is no edge or path between the
1999). We use the term literature data to denote nodes. Independent (I) None of the previous
both binary representations of the relevance of (i.e. there is no causal connection).
concepts in publications, usually denoted with The difference between two model structures
DN L (containing N 0 publications).
0 can be represented in a matrix containing the
number of relations of a given type in the expert
7 Results model and in the trained model (the type of the
relation in the expert model is the row index
The structure learning of the transitive model is and the type in the trained model is the col-
achieved by an exhaustive evaluation of parental umn index). These matrices (i.e. the compari-
sets up to 4 variables followed by the K2 greedy son of the transitive and the intransitive mod-
heuristics using the BDeu score (Heckerman et els to the experts) are shown in Table 1. Scalar
al., 1995) and an ordering of the variables from
an expert, in order to be compatible with the
learning of the intransitive model. The structure learning of the two-layer model has a higher computational cost, because the evaluation of a structure requires the optimization of parameters, which can be performed e.g. by gradient-descent algorithms. Because of the use of the forward explanation scheme, only those variables in the upper layer that succeed an external variable in the causal order can be its parents. Note that beside the optional parental edges for the external variables, we always force a deterministic edge from the corresponding non-external variable. During the parameter learning of a fixed network structure, the non-zero inhibitory parameters of the lower-layer variables are adjusted by a gradient-descent method to maximize the likelihood of the data (see (Russell et al., 1995)). After having found the best structure, according to its semantics it is converted into a flat, real-world structure without hidden variables. This conversion involves the merging of the corresponding pairs of nodes of the two layers, and then reverting the edges (since in the explanatory interpretation effects precede causes).

We compared the trained models to the expert model using a quantitative score based on the comparison of the pairwise relations in the model, which are defined w.r.t. the causal interpretation as follows (Cooper and Yoo, 1999; Wu et al., 2001): Causal edge (E) An edge between the nodes. Causal path (P) A directed path

Table 1: Causal comparison of the intransitive and the transitive domain models (columns with i and t in the subscript, respectively) to the expert model (rows).

       Ii    Ci   Pi   Ei    It    Ct   Pt   Et
  I    12     0    0    0     0     4    2    6
  C   106    20    2    4     4    90   26   12
  P   756    72   80   18   188   460  216   62
  E    70     6    8   36     6    38   24   52

Scores can be derived from this matrix to evaluate the goodness of the trained model; the standard choice is to sum its elements with different weights (Cooper and Yoo, 1999; Wu et al., 2001). One possibility, for example, is to take the sum of the diagonal elements as a measure of similarity. By this comparison the intransitive model achieves 148 points and the transitive model 358, so the transitive model reconstructs the underlying structure more faithfully. Particularly important is the (E, E) element, according to which 52 of the 120 edges of the expert model remain in the transitive model, whereas the intransitive model preserves only 36 edges. Similarly, the independence relations of the expert model are well respected by both models.

Another score, which penalizes only the incorrect identification of independence (i.e. those and only those weights have a value of 1 which belong to the elements (I, .) or (., I), the others are 0), gives scores of 210 and 932 for the transitive and the intransitive model, respectively.
22 P. Antal and A. Millinghoffer
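The gradient-based adjustment of the inhibitory parameters described above can be illustrated with a small sketch. This is our own illustration, not the paper's code: it assumes a noisy-OR-style local model with a hypothetical leak cause, and keeps the inhibitory parameters in (0, 1) through a sigmoid reparameterisation:

```python
import math

def noisy_or_p1(q_leak, q1, parent_on):
    # P(child = 1) = 1 - product of the inhibitory probabilities of the
    # active causes (the hypothetical leak cause is always active).
    p0 = q_leak * (q1 if parent_on else 1.0)
    return 1.0 - p0

def fit(data, lr=0.5, steps=5000):
    # Gradient ascent on the average log-likelihood; the inhibitory
    # parameters stay in (0, 1) via logits t_leak and t1.
    t_leak = t1 = 0.0
    for _ in range(steps):
        q_leak = 1.0 / (1.0 + math.exp(-t_leak))
        q1 = 1.0 / (1.0 + math.exp(-t1))
        g_leak = g1 = 0.0
        for parent_on, child in data:
            p0 = q_leak * (q1 if parent_on else 1.0)
            dlog_dp0 = 1.0 / p0 if child == 0 else -1.0 / (1.0 - p0)
            # chain rule: dp0/dq = p0/q, and dq/dt = q(1 - q) for the sigmoid
            g_leak += dlog_dp0 * (p0 / q_leak) * q_leak * (1.0 - q_leak)
            if parent_on:
                g1 += dlog_dp0 * (p0 / q1) * q1 * (1.0 - q1)
        t_leak += lr * g_leak / len(data)
        t1 += lr * g1 / len(data)
    return (1.0 / (1.0 + math.exp(-t_leak)), 1.0 / (1.0 + math.exp(-t1)))

# Toy data: 2/10 positives without the parent, 6/10 with it; the maximum-
# likelihood solution is q_leak = 0.8 and q1 = 0.5 (since 1 - 0.8 * 0.5 = 0.6).
data = [(False, 1)] * 2 + [(False, 0)] * 8 + [(True, 1)] * 6 + [(True, 0)] * 4
q_leak, q1 = fit(data)
```

The same gradient-ascent loop extends to several parents (one inhibitory parameter each); here a single parent keeps the maximum-likelihood solution checkable by hand.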
Figure 1: The expert-provided (dotted), the MAP transitive (dashed) and the intransitive (solid) BNs compatible with the expert's total ordering of the thirty-five variables using the literature data set (P^3_MRCOREL), the K2 noninformative parameter priors, and noninformative structure priors.

This demonstrates that the intransitive model is extremely conservative in comparison both with the other learning method and with the knowledge of the expert; it is only capable of detecting the most important edges. Note that the proportion of its false positive predictions regarding the edges is only 38%, while in the transitive model it is 61%.

(Figure 2, y-axis 0-1, x-axis 1980-2005; labelled curves: Septum, PI, ColScore, TAMX, PapSmooth, RI, FamHistBrCa, Fluid, Volume, WallRegularity, Bilateral, PSV.)
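The reported false-positive rates can be checked against the Edge columns of Table 1. This is a sketch under the assumption that a false positive is any predicted edge whose expert relation is not an edge:

```python
# Edge (E) column of Table 1 for each trained model: the number of variable
# pairs the model connects with an edge, split by the expert's relation.
edge_col_intransitive = {"I": 0, "C": 4, "P": 18, "E": 36}
edge_col_transitive = {"I": 6, "C": 12, "P": 62, "E": 52}

def false_positive_rate(edge_col):
    # Predicted edges the expert does not mark as edges, over all predicted edges.
    predicted = sum(edge_col.values())
    wrong = predicted - edge_col["E"]
    return wrong / predicted

print(round(100 * false_positive_rate(edge_col_intransitive)))  # 38
print(round(100 * false_positive_rate(edge_col_transitive)))    # 61
```

Under this reading, 22/58 ≈ 38% of the intransitive model's predicted edges and 80/132 ≈ 61% of the transitive model's are false positives, matching the text.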
Furthermore, we investigated the Bayesian learning of BN features, particularly using the temporal sequence of the literature data sets. An important feature indicating relevance between two variables is the so-called Markov Blanket Membership (Friedman and Koller, 2000). We have examined the temporal characteristics of the posterior of this relation between a target variable, Pathology, and the other ones, using the approximation in Eq. 4. This feature is a good representative of the diagnostic importance of variables according to the community. We have found four types of variables: the posterior of the relevance increasing in time fast or slowly, decreasing slowly, or fluctuating. Figure 2 shows examples of variables with a slow rise in time.

Figure 2: The probability of the relation Markov Blanket Membership between Pathology and the variables with a slow rise.

8 Conclusion

In the paper we proposed generative BN models of scientific publications to support the construction of real-world models from free-text literature. The advantage of this approach is its

Literature Mining using Bayesian Networks 23


domain model based foundation, hence it is capable of constructing coherent models by autonomously pruning redundant or inconsistent relations. The preliminary results support this expectation. In the future we plan to use the evaluation methodology applied there, including rank-based performance metrics, and to investigate the issue of negation and refutation, particularly through time.

References

P. Antal, G. Fannes, Y. Moreau, D. Timmerman, and B. De Moor. 2004. Using literature and data to learn Bayesian networks as clinical models of ovarian tumors. Artificial Intelligence in Medicine, 30:257-281. Special issue on Bayesian Models in Medicine.

R. Baeza-Yates and B. Ribeiro-Neto. 1999. Modern Information Retrieval. ACM Press, New York.

G. F. Cooper and C. Yoo. 1999. Causal discovery from a mixture of experimental and observational data. In Proc. of the 15th Conf. on Uncertainty in Artificial Intelligence (UAI-1999), pages 116-125. Morgan Kaufmann.

G. Cooper. 1997. A simple constraint-based algorithm for efficiently mining observational databases for causal relationships. Data Mining and Knowledge Discovery, 2:203-224.

L. M. de Campos, J. M. Fernandez, and J. F. Huete. 1998. Query expansion in information retrieval systems using a Bayesian network-based thesaurus. In Gregory Cooper and Serafin Moral, editors, Proc. of the 14th Conf. on Uncertainty in Artificial Intelligence (UAI-1998), pages 53-60. Morgan Kaufmann.

N. Friedman and D. Koller. 2000. Being Bayesian about network structure. In Craig Boutilier and Moises Goldszmidt, editors, Proc. of the 16th Conf. on Uncertainty in Artificial Intelligence (UAI-2000), pages 201-211. Morgan Kaufmann.

D. Geiger and D. Heckerman. 1996. Knowledge representation and inference in similarity networks and Bayesian multinets. Artificial Intelligence, 82:45-74.

D. Heckerman, D. Geiger, and D. Chickering. 1995. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243.

L. Hirschman, J. C. Park, J. Tsujii, L. Wong, and C. H. Wu. 2002. Accomplishments and challenges in literature data mining for biology. Bioinformatics, 18:1553-1561.

T. K. Jenssen, A. Laegreid, J. Komorowski, and E. Hovig. 2001. A literature network of human genes for high-throughput analysis of gene expression. Nature Genetics, 28:21-28.

M. Krauthammer, P. Kra, I. Iossifov, S. M. Gomez, G. Hripcsak, V. Hatzivassiloglou, C. Friedman, and A. Rzhetsky. 2002. Of truth and pathways: chasing bits of information through myriads of articles. Bioinformatics, 18:249-257.

S. Mani and G. F. Cooper. 2000. Causal discovery from medical textual data. In AMIA Annual Symposium.

J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Francisco, CA.

D. Proux, F. Rechenmann, and L. Julliard. 2000. A pragmatic information extraction strategy for gathering data on genetic interactions. In Proc. of the 8th International Conference on Intelligent Systems for Molecular Biology (ISMB-2000), La Jolla, California, pages 279-285.

A. Rosenberg. 2000. Philosophy of Science: A contemporary introduction. Routledge.

S. J. Russell, J. Binder, D. Koller, and K. Kanazawa. 1995. Local learning in probabilistic networks with hidden variables. In IJCAI, pages 1146-1152.

H. Shatkay, S. Edwards, and M. Boguski. 2002. Information retrieval meets gene analysis. IEEE Intelligent Systems, 17(2):45-53.

B. Stapley and G. Benoit. 2000. Biobibliometrics: Information retrieval and visualization from co-occurrences of gene names in Medline abstracts. In Proc. of Pacific Symposium on Biocomputing (PSB'00), volume 5, pages 529-540.

D. R. Swanson and N. R. Smalheiser. 1997. An interactive system for finding complementary literatures: a stimulus to scientific discovery. Artificial Intelligence, 91:183-203.

P. Thagard. 1998. Explaining disease: Correlations, causes, and mechanisms. Minds and Machines, 8:61-78.

X. Wu, P. Lucas, S. Kerr, and R. Dijkhuizen. 2001. Learning Bayesian-network topologies in realistic medical domains. In Medical Data Analysis: Second International Symposium, ISMDA, pages 302-308. Springer-Verlag, Berlin.

Locally specified credal networks

A. Antonucci and M. Zaffalon
 R@ x=F  *5 G||;5 2G 8 A )d f  Z\[ i ?  ! C @  ? ! @ u 
= F R`|A B% B` 6 3 2G  B *
|4 DF|T 2G 3V  3x |A|Ox 2 h%1);2 ) 6h3 ] 0Q<)V.1+ <*37.1-G(J#3G2 <*W;37.76 -G(-A2 2|-,#+-A2 )
 | ||4 %L 3A F| 2F| x / / = 81+ ),0630Q)V.#3G6)-G+ +8Q.1:B6 <*37.1+\3G2v393/(*),-/.DA-A2 < ]
|x *|!  - 1PA v`xx ` 6 34 vF B 
%x  F  ;x 8v ':,F 8 3F * 8F -GQ(*),+,N181:-+ ),Q-A2 -A6),(*+ v),:;<*XQ),0 Z SQ-G+6h3
Fx;v! 3xY \FG|@ 2x 6 O0 3 2G :B374Q-A6 <LQ(*) S+;g37.#)3G2),-G:|G);2 6)BJ3G )d f
 RT * F x|F xF,| 6GA 
x; 5 Q\` F|9F ; 2 37 6 1)3/<.764-G+ +T8Q.Q:B6 <*37.1+\:B3G2 2 ),+ v37.10Q<.1U63

30 A. Antonucci and M. Zaffalon


6 #),+)TS+;g+ -, 6 d)  f%-/.10 6 d)  4f gh)3/ ] 8Q.1:B6 <*37.3/16 -G<.#),0<.6 1<L+%h-VGgh%1<L:|@01);X.1)6 #)
6 -G<.6 1):B37.10Q<*6 <*37.1-G(-G+ +\8Q.1:B6 <*37.1+3G26 1):B3G2 ] + -/):B37.10Q<*6 <*37.1-G(R4-G+ +8Q.1:B6 <*37.1+3G2,
2 ),+ v37.10Y<.#UTS+3xG);2 >= 6 )d f 6 )d f2B  ?  bG @
6 )d fB  ? c  ! ,C @ 6 )d fB  ? cG ! @ 6 d)+D&%'!Qf1 6 d)JD&% !1f1!B?  A@
6 )d #D #Vf B  ?  b ! bGc @ 6 )d #D #Vf B ? Gb ! L @ 6
d ]&D %'#Vf1 6
d ]&D %'#Vf1B  ?*  @
6 )d #&D % #,f2B ? ! _ @ 6 )d #&D % #,f2B  ? G ! _  @
-/.100Y<Ev);2 )V.76^:B37.10Y<*6 <*37.1-G(R-G+ +T8Y.1:B6 <*37.1+\3G2,
':;:B3G2 0Q<.#U$63$K);X.Q<*6 <*37. b g=6 #),+)6h30Y<L+6 <.1:B6 6 )d +D !1f ?*  @ 6 )d +D !QfB
 ? _  ! GC @
+ v),:;<*XY:;-A6 <*37.1+01);X.1)- Z S 3VG);2   gh%1<L: 6 d
 D #VfB? _  ! GC@ 6 d
]D #VfB? ! @
:;-/.Q.#3G6!v)+),Q-A2|-A6),(*=+ v),:;<*XQ),0=-G+<.=K);X.1<*6 <*37.P
O3$+);)6 1<L+;g.#3G6)3G2)B#-/4Q(*)J6 Q-A6@6 #)+ v),: ] )Q-,G)6 1);2 );3G2 )3/16 -G<.#),06h3D S+3VG);,2  
<*XY:;-A6 <*37. 64"d !7D #,f  Gb gQ6"d !B&D %'#Vf  G -/.10   gh%1<L::;-/.@v)2 );U/-A2 0Q),0-G+6 #)=:B374Q-A6 ]
64"d #Vf
 cG g%h%1<L:<L+:;(*),-A2 (*$-v3/+ + <LY(*)@+ v),:;< ] <LQ(*)TS+D3G- Z SN181:|- Z S <L+:;(*),-A2 (*$.#37. ]
XY:;-A6 <*37.<*6 1):B37.10Y<*6 <*37.1-G(:B2 ),0Q-G(v+);6 +\h);2 )+ ),Q- ] +),Q-A2|-A6),(*+ v),:;<*XQ),0g1v),:;-/81+)6 #)6h3TS++ v),: ]
2 -A6),(*4+ ),:;<*XY),0g9h3781(L0D(*),-G0D63=6 #)8Q.1-G:;:B),Q6 -GQ(*) <*=0Q<E);2 )V.76:B37.10Q<*6 <*37.1-G(Y4-G+ +T8Y.1:B6 <*37.1+!3G23G2 )
-G+ +\T8Q.Q:B6 <*37.J64)d fB  ? a ! A @ * )d f 6 1-/.-F-A2 <L-GY(*)G
H 6<L+ 8Q+);T8Q(63 3/Q+);2 G)?6 1-A6 UG)V.#);2 -G(Tg.#37. ]
+),Q-A2|-A6),(*4+ v),:;<*XQ),0g Z S%+013.13G6+ 8PE);23G26 1),+) /  lG 7l lWG C 5  v -lW
12 3/Y(*)V+8Q+6v),:;-/81+)6 #); -A2 ):;(*3/+),0 8Y.101);2 /1 5#Y l5 Q_
),#81<*F-G(*)V.76:1-/.#UG),+<.6 #)^+6281:B6 8#2 )3Gv6 #)^K''&
TI Y MNM - 3 O 5 NM O 5 / _:I I  1V &<*G)V. H .6 1<L+5+),:B6 <*37.gvh)12 3xG)=6 1-A6-/.7(*3#:;-G(L(*@+ v),: ]
6 #2 );)v393/(*),-/. 2 -/.Q0137 F-A2 <L-GY(*),(+ g  -/.1 0 g <*XQ),0 Z S3VG);2g
:;-/.),P8Q<*F-G(*)V.76 (*v)2 );U/-A2|01),0
(*);656 #)K''&     )B#12 ),+ +5<.101),v)V.101)V. ] -G+-+),Q-A2 -A6),(*+ v),:;<*XQ),0 Z S 3xG);2T^#)62|-/.1+ ]
:;<*),+);6`h);)V.46 #)VO)h-/.7663(*),-A2|.46 #)%3#01),( 3G2|-A6 <*37.<L+6),:Q.1<L:;-G(L(*+62 -G<*U7763G2 h-A2|0<*6<L+
12 3/Y-GQ<L(L<*6 <*),+3G2+|81:|-K''& 2 37 6 1)<.Q:B37 ] Q-G+),037.2 ),12 ),+)V.76 <.#U0Q),:;<L+ <*37..#3P0Q),+^98Y.1:B);2 ]
Q(*);6)0Q-A6 -+ );6\<.-GQ(*)AgQ-G+ + 8Y<.#U.#3<.#3G2|- ] 6 -G<.4.13P01),+h^<*6 A-G:,8#3781+:B37.10Q<*6 <*37.1-G(v:B2 ),0Q-G(Q+ );6 +;g
6 <*37.-Gv378#656 #)12 3P:B),+ +4-AP<.1UD6 1)3/Y+);2 A-A6 <*37. -G+3G2|4-G(L<*W;),0v),(*3Vh
3G <L+ + <.1U4<.@6 #)(L-G+ 62 ),:B3G2 03G6 #)0Q-A6 -+);6; 2 V M 3 O 6 51 N:O-M  8 R 7 ^A; *,|A1 ,
^#)3/+6^:B37.Q+);2 A-A6 <*G)-GQQ2 3/-G:|@<L+6 #);2 );3G2 )=63  " UW$  d9   
f  d  YX f Z _^A; 
: 7;G,`v
(*),-A2|.6h30Q<L+6 <.1:B6 S%+2 37 6 #)6`h3:B374Q(*);6) GGB*J1 ,  " UW$ &%8Z _^F,  :  @#,A @1
0Q-A6 -+);6 +4:B3G2 2 ),+ v37.10Q<.1U63J6 #)3/+ + <LQ(*)A-G(8#),+ /YGGAvV /AQ,; % GOBG /B @ #)2*+
3G6 1)4<L+ + <.#UD3/Q+);2 A-A6 <*37.-/.10:B37.1+ <L0Q);25<.Q01);),0 GY ?)1*9,/>-0 E
6 #) Z S4-G01)53G6 1),+):B37Q-A6 <LY(*) S%+;  64d#)ED ?4)f <*4#)2*J

' Z d#) D ?)f    d 7f


! #  W/4 0 0 d#)`f<*4#)2*J   
%'! % # 
! # %  @#, 6d#)ED ?)f ^A@#5F|R p#YV/41 V \ 0
!   X GY W/ 0 0 d#)f A@# A-G:,8#3781+ , 7AQ,;9BG , . 0 O
-GQ(*)@A'0Q-A6 -+);6=-Gv378#66 12 );)v3P3/(*),-/.A-A2 < ] <*U78#2 ) 2 ),v3G2 6 +=-/.)B1-/Y(*)3GR2|-/.1+3G2|- ]
-GQ(*),+; g =01)V.#3G6),+%-<L+ + <.1U3/Q+);2 A-A6 <*37. 6 <*37. b gh%1<L:|@<L+5:;(*),-A2 (*(L<.#),-A2<.@6 #)<.1Y8#6+ <*W;)G
O3-AG)6 1<.#U/+\+ <Y(*)^h)5:B374Y8#6)%6 #)12 3/ ] ^1)4dT+62 37.#U#f2 ),(L-A6 <*37.v);6h);)V.-(*3#:;-G(L(*+ v),: ]
-GQ<L(L<*6 <*),+\3G26 1)3/<./6\+6 -A6),+94),-/.1+3G6 #)%2 ),( ] <*XQ),0 Z S UW$  d   
f  d   X f Z 3VG);2]
-/.106 #)
-A6 <*G)2 ),P81)V.1:;<*),+<.6 #):B37Q(*);6)0Q-A6 -+);6 +;eR);6 +),Q-A2|-A6),(*+ v),:;<*XQ),0 Z S UW$ &% Z 2 );6 8#2.#),07
6 )d     f^-/.Q0(6 )d     f%v)6 #)53/<./6=4-G+ + O2 -/.1+3G2|-A6 <*37. b <L+378#6 (L<.#),07D6 #)3/(L(*3Vh^<.1U1

Locally specified credal networks 31


Z H (810Q-A6 <.#U@Q2 3/Q(*)V0Q),+ :B2 <Lv),0<.NP),:B6 <*37. _ g
h%Q<L:|<L+-.#3G6 -GQ(*)-/.102 ),-G01 ] 63 ] 81+)2 ),+|81(*6;
 kRl AR
 Rl! n2
)1-,G)01);X.#),0 -.#);hUG2 -GY1<L:;-G((L-/.#U781-AUG)63
3G2|81(L-A6)^-/.76#v)^3G:B2 ),0Q-G(v.#);6h3G2 vg13G6 4+), ]
-A2 -A6),(*-/.10.#37. ] +),Q-A2 -A6),(*J+ ),:;<*XY),0)1-VG)
-G(L+3@+ #3Vh),06 Q-A6-/./.#);62 ),12 ),+)V.76),0h%<*6 6 #)
.#);h(L-/.#U781-AUG)5:;-/.Dv)^),-G+ <L(*62 -/.1+3G2|),04<./63-/.
),#81<*A-G(*)V./6+),Q-A2 -A6),(*+ ),:;<*XY),0:B2 ),0Q-G(.#);6;\^1<L+
<*U78#2 ) ^1)K^''& -G+ +3#:;<L-A6),0 636 1)+),Q- ] <4Q(L<*),+;gx<.Q-A2 6 <L:,81(L-A2VgF6 Q-A6.#37. ] + ),Q-A2 -A6),(*+ v),:;< ]
2 -A6),(*+ v),:;<*XQ),0 Z S 2 );6 8#2|.#),0 7O2 -/.1+3G2|- ] XQ),0.#);6 +1-VG)%-/.),P81<*A-G(*)V.76+),Y-A2 -A6),(*+ v),:;<*XQ),0
6 <*37. b g2 37 6 1)(*3#:;-G(L(*+ v),:;<*XQ),0 Z SQ-G+ ),037. 2 ),Q2 ),+)V.76 -A6 <*37.g%3G2h%1<L:+3/(8#6 <*37.1+-G(*UG3G2 <*6 Y+
6 #) K''& <. <*U11 ^#):B37.Q0Q<*6 <*37.1-G(D:B2 ),0Q-G( -A2 )-VF-G<L(L-GY(*)<.6 1)(L<*6);2 -A6 8#2 )G
+);6 +43G6 #)h1<*6)@.#3#01),+@dT:B3G2 2 ),+ v37.10Q<.#U636 #) %#)62 -/.1+3G2|4-A6 <*37.12 3/v3/+),04-G(L+3+|#3Vh^+6 1-A6
3G2 <*U/<.Q-G(#8Y.1:B);2 6 -G<..#3#01),+Bf-A2 )12 ),:;<L+),(*+ v),:;<*XQ),0g -+ 8Qv:;(L-G+ +3GD+ ),Q-A2 -A6),(* + v),:;<*XQ),0:B2 ),0Y-G(4.#);6 ]
h%Q<L(*)6 #)UG2 );.#3#01),+dT<T)GLg.#);h8Q.Q:B);2 6 -G<..#3#01),+ h3G2 #+:;-/.)D81+),063+3/(*G)<.#);2 )V.1:B)12 3/Q(*)V+
:B3G2 2 ),+ v37.10Q<.#U\636 #)3G2|);201),:;<L+ <*37.=.#3P0Q),+BfQ2 ), ] 3G2-A2 Y<*62 -A2 + v),:;<*XQ),0 :B2 ),0Q-G(%.1);6 +;6 1<L+<L+46 #)
2 ),+ )V./6A-A2 <L-GQ(*),+@h%#3/+ )J:B37.10Q<*6 <*37.1-G(=:B2 ),0Q-G(=+);6 + :;(L-G+ +3G.#);6 +5<.h%1<L:6 #):B2 ),0Q-G(+);6 +-A2 )),<*6 #);2
-A2 )A-G:,8#3781+, A-G:,8#3781+3G212 ),:;<L+)G H 6<L+h3G2 6 .13G6 <.#U=6 Q-A6-=2 ) ]
:B)V.7601);G),(*3/)V.763G6 #)^-GQQ2 3,1<-A6)^e b -G(*UG3 ]
2RQ I O I 5 7R ;  d
f |A@# GGYA 2 <*6 Y dT'.7637.98Q:;:;<);6^-G(TLg bAG/_ f+ );)V+\634v)Q-A2 ]
;G 
A@# B/ [o/;1TG  df 6 <L:,81(L-A2|(*+ 81<*6),081+ 63G2\+ 81:4-=:;(L-G+ +;g#-/.10D+ #3781(L0
UW$ &%8Z GY d
f A@#G Eo7;#B/ 6 #);2 );3G2 )v):B37.1+ <L01);2 ),0<.8#6 8#2 )h3G2 v
UW$  d  
f  d  YX f Z O S @#, E <.1-G(L(*Gg6 #)+62 37.#U$:B37.Q.#),:B6 <*37. v);6h);)V.6 #)
(L-/.#U78Q-AUG)3G2%:B2 ),0Q-G(R.#);6`h3G2 P+%<./62 3#0Y81:B),0<.6 1<L+
d
f1
  d
f dc f Q-Gv);24-/.Q06 #)3G2|4-G(L<L+ 3G01),:;<L+ <*37.$.#);6h3G2 #+
dT<.1:;(8Q0Q<.#U5<. 8#)V.1:B)\0Q<L-AUG2 -/4+Bf g/+);)V4+O63v)\Q-A2 ]
2 37^#);3G2 )V b g7<*6<L++62 -G<*U7763G2 h-A2|063:B37. ] 6 <L:,81(L-A2|(*h3G2 6 )B1Q(*3G2 <.#U3G2:B2 3/+ + ] );2 6 <L(L<*W,-A6 <*37.
:;(810Q)6 #)3/(L(*3Vh^<.#U1 v);6h);)V.6 1)6h34XQ),(L0Q+,
^ O 0O _`_ Y4 a  R >5Q;,;;Y=v7B; G$*G  n!m l Y    YO
|A*1 V \  " G|D  p#:^xA;QT*;A ^F
A @#;#GGB*1 V \ $ " ;Wp# ; S G# ^Q<L+2 ),+),-A2 :h\-G+Y-A2 6 <L-G(L(*+ 81Qv3G2 6),096 #)
;G/G O NPh%<L+ +/S5N UG2|-/./6 bAGG/bAF] /aGbGaG
A
eR);6D81++62 ),+ +46 1-A6R2 -/.Q+3G2-A6 <*37. b <L+4G);2
+ <4Q(*)Gg-/.10<*6<L++ 8#2 Q2 <L+ <.#U6 1-A6<*6<L+12 ),+)V.76),0
 /!! 
#);2 )3G2D6 1)@XQ2|+646 <)Gg^-G+<*6D<L+D2 ),-G(L(* 6 #)@G); ^1)3/7#<*3781+D12 3P3G+43G Z 3G2 3/(L(L-A2 -/.10 Z 3G2 3/( ]
63I+ ),Q-A2 -A6),M6 1):B2 ),0Q-G(+);6 +3G.137. ] +),Q-A2 -A6),(* (L-A2 b -A2 )37<*66),0
+ v),:;<*XQ),0 .#);6 +;@<.-G:B6;gU/<*G)V. -.137. ] +),Q-A2 -A6),(* , S @1 G; \ O\e);6^8Q++6 -A2 66 #)4-A2 U/<.1-G( ]
+ v),:;<*XQ),0 Z S=gV37.#)\:;-/.(*3#:;-G(L(*=+ ),:;<*56 #) Z SgA81+ ] <*W,-A6 <*37.<.#81-A6 <*37. d f42 37 -0Q),:;<L+ <*37.?.#3#01)
<.#U6 #)=Q2 ),+ :B2 <L16 <*37.Q+^3G6 #)+),:B37.10Q-A2 6%3GNP),: ]   *Q  %':;:B3G2 0Q<.#UD63DP8Q-A6 <*37.d#f g3G2%),-G:
6 <*37. g-/.10-GQQ(*O2 -/.1+3G2|4-A6 <*37. b 633/16 -G<. a(*A,Ve5
-+),Q-A2|-A6),(*+ v),:;<*XQ),0 Z S='%:;:B3G2 0Y<.#U63 Z 3G2 3/( ]
(L-A2 b g16 #)V.g#-/./D<.#);2 )V.1:B)12 3/Q(*)V 37.6 #)53G2 <*U ] 
<.1-G( Z S:;-/.D),#81<*A-G(*)V./6 (*D)%2 ),12 ),+ )V./6),0D37.6 1<L+ U 6Vd:a!f U b 6Vd:3YD ? T f! b 6d:3)ED ?4)f
.#);h+ ),Q-A2 -A6),(*+ v),:;<*XQ),0 Z S=GR3-AG)-/.)B1-/ ]  
V LW*X  
V 6W*X .   e  .10` e "
Q(*)Gg\6 1<L+D12 3#:B),0Y8#2 ):;-/.<),0Q<L-A6),(*+3/(*G)6 #) da f

32 A. Antonucci and M. Zaffalon


^ P81+;g^3V#<.#U378163G6 1)+ 8Y6 #):B37.10Y<*6 <*37.1-G( , |1F/ \ O)Q2 3VG)6 1)J12 3/v3/+ < ]
12 3/Y-GQ<L(L<*6 <*),+h%1<L:013.#3G62 ););2636 #)+ 6 -A6),+ 6 <*37.9- _p1,T/7qp#_p# g-G+ + 8Y<.#U6 1-A6
3G5  dh%1<L:|-A2 )Q2 <*) 01)V.13G6),07Df g#81- ] -A6(*),-G+6-G);2 6)B 64 d@f3G  df<L+.13G63/Q6 -G<.#),0
6 <*37.Jd a f),:B37),+; 9 -(*3#:;-G(^:B37Q<.Q-A6 <*37.3G5G);2 6 <L:B),+3G6 1):B37. ]
0Q<*6 <*37.1-G(4:B2 ),0Q-G(+);6 +<. ^1<L+),-/.Q+6 1-A6;g

6xd:3  D ?  f b 64d:3_D 3   ?  Ff  
3G2),-G:| a*,/e5g 64 d:a!f% T-G:B63G2 <*W;),+-G+<.#81- ]
U
.   /  6 <*37.Jd,9f gvY8#6-A6^(*),-G+ 6%-:B37.10Y<*6 <*37.1-G(12 3/Y-GQ<L(L<*6
  LW /  <.6 1<L+Q2 3P081:B6:B37),+52 37 -:B37.10Q<*6 <*37.1-G(4-G+ +
d f 8Q.1:B6 <*37. h%1<L:<L+.#3G6-G);2 6)B 3G=6 #)@2 ),(L-A6 <*G)
h%#);2 )
.  01)V.#3G6),+6 #)4:1<L(L012 )V.3G/  -/.10g3G2 :B37.10Q<*6 <*37.Q-G(:B2 ),0Q-G(+);6; ^1<L+:B37.10Q<*6 <*37.Q-G(54-G+ +
),-G:|  *
.  g <  D-A2 )6 #)Q-A2 )V.76 +3G=  01) ] 8Q.1:B6 <*37.g+ -, 64d  D ?f g:;-/.v))B112 ),+ + ),0-G+4-
12 <*G),03G8  %#);2 );3G2 )Gg:B37.Q+ <L01);2 <.#U6 Q-A66 #) :B37.7G)B:B37Q<.1-A6 <*37.3GG);2 6 <L:B),+3G d  D ?  f g7<T)GLg
-G+ +8Q.1:B6 <*37.6  d D ? f-G+ + <*U7.1+\-G(L(v6 #)-G+ +63 64d  D ?f#   6  d  D ? f gh%<*6    
6 #)A-G(8#)  .  d?  f *  ,/ .  gRh#);2 )  .  <L+6 1)401) ] -/.10gY3G2%),-G:|g   $ -/.10+6  d  D ?`f\<L+^-4G);2 ]
:;<L+ <*37.T8Y.1:B6 <*37.-G+ +3P:;<L-A6),0$63 g#81-A6 <*37.d f 6)B3G d  D ?f ^P81+;g13G2),-G:|Aa(*A,/e5g
2 );h2|<*6),+-G+
b 64d:3 _D  .  d?  f  ?  Ff dGFf 6 d:a!f2 U    6  d:3 D ? f b 46 d:3 D ? f  dB#f
.  /  ) c  
H 6<L+6 #);2 );3G2 )+ 8GF4:;<*)V.7663+ );6@<
  < 
<   g h%1<L::;-/.$v)),-G+ <L(*2 );3G28Q(L-A6),0 -G+-J:B37.7G)B
-/.10 :B37Q<.1-A6 <*37.%981+,g 64 d@f<L+\-:B37./G)B:B37Q<.1- ]
6 <*37.@3G),(*)V)V.76 +3G6 #)+ 62 37.#U4)B#6)V.1+ <*37.  d@f
6 
d  D ? 
f  64d D  .  d?  f  ?  xf  d b f ^1<L+RP<*3/(L-A6),+!6 1)-G+ + 8Y16 <*37.56 Q-A6 64 d@f<L+O-^G);2 ]
6)B3G  d@f
632 );U/-A2 0#81-A6 <*37.dGFf-G+\6 #)\3/<.76^-G+ +\8Q.1: ]
6 <*37.3G-4 S 3VG);2; i   u Q-G+ ),037.6 #)=K''& , S @# G;  O\'%:;:B3G2|0Q<.#U 63 ^#) ]
2 );6 8#2.#),09R2|-/.1+3G2|-A6 <*37.4:B37.1+ <L01);2 ),0=3G26 #) 3G2 )V Ag 6 #) + 62 37.#U )B#6)V.1+ <*37. d
f 3G
+ <.#U/(*)01),:;<L+ <*37..#3#01)  *+ ^^#)6 #),+ <L+6 1);2 ) ] UW$ d   
f d  f :;-/.v)2 );U/-A2 01),0 -G+$6 #)
3G2 )3/(L(*3Vh^+2 37 -4+ <Q(*)<*6);2 -A6 <*37.@3xG);25-G(L(R6 #) -A 2 U/<.1-G (3G2= 
E3GX Z
  *A9  %
d@f1 Z\[ i 6 Fdf u  6W!  d f
^1)%3/(L(*3Vh^<.1Uh),(L( ] 1.#3Vh.-/.10d2 ),(L-A6 <*G),(*Yf\<. ]
6 81<*6 <*G)12 3/v3/+ <*6 <*37.J<L+2 ),#81<*2 ),063@3/16 -G<.^#);3 ] h%#);2 )3G2),-G:| * , g-6 xd@f<L+%6 #)^3/<.7654-G+ +
2 )V b g-/.10h^<L(L(v)12 3xG),01);2 )v),:;-/81+)3G6 #) 8Q.1:B6 <*37.-G+ +3#:;<L-A6),063UW$ YX  Z 3G2),-G: D CQ*
+);)V4<.#U/(*(L-G: @3G<*6 +3G2|4-G(R12 393G<.@6 #)=(L<*6);2|- ]  %g6 #):B37.10Q<*6 <*37.Q-G(!4-G+ +8Q.1:B6 <*37.Q+=6 xdD CLD ? CFf g

6 8#2 )G + v),:;<*XQ),03G2),-G:| ? C%* ,/> "<. X ,g-G+ + <*U7.-G" (L(
 O / O N N:OBM PcR S @#V^F,|T|B i 6   d@f u  c \A @1 6 #)4-G+ +" 63 6 #)J+ <.#U/(*)+6 -A6)  . " d? CFf * , .#" g
G [o/;1TG  d@f ;#GGB*1 V h%#);2 )  #. " -A2 )$6 #)$01),:;<L+ <*37. 8Q.1:B6 <*37.1+-G+ +3#:;< ]
4 " UW$ & % Z G  GQF qpP,TG#\9;G -A6),0 63?6 #)+62 -A6);UG * ,  g-/.10 2 ),12 ),+)V.76
;DA@#4 G;YGTG@^A;=A@#;`1G/B* 6 #);2 );3G2 )-G);" 2 6)B3G6 #)JA-G:,8#3781+:B37.10Y<*6 <*37.1-G(
Q , |GYG/YA=V /A5;, : O O : ;G /B @ :B2 ),0Q-G(=+);6 W /$" dD CAf * % + v),:;<*XQ),0?<.?#81- ]
a(*A,/e E " 6 <*37.d 7f ^P81+;g 6  d@f<L+-G);2 6)B3G  d@f^v) ]
6   d:a!f 6   d:3-) D ?)`f 
b d,9f :;-/81+)3G '2 3/v3/+ <*6 <*37.AJ'+ A-A2 <*),+<. , g-G(L(
)dc 6 #)G);2 6 <L:B),+3G  d@f-A2 )3/Q6 -G<.#),04-/.106 #);2 );3G2 )
;G /J @  !!!h :  @#; : ;G /B @ fA  d@f# d@f g2 37 h%1<L:6 #)6 #),+ <L+3/(L(*3Vh%+
!!!hEi GY ?4)/* ,/>B0 : 6   d#)[D ?4)`f 9^F,|[o@ -A2 U/<.1-G(L<*W,<.#U1
d#)[D ?4)`f* % O

Locally specified credal networks 33


QYQG bb)+*Dm

"w $$
D)+U 
 
, )+?$ 6h~
 
   

!"
"#$!%& '(*)+, ) &U 
 <*'
)h?$;  -  - *96/ * $)/.0
; v6+2KY`Q M+Q[^(aR_
- )/.0)+)21354
)7689:
 +;<)7$3$)+ 6=
>2 $?@(8A Y\N+aSajQTN+R@MON+p$
@[ Oy
7 "
$
*$BC?$;  - *)+DEGFHJI0K<L7MON NOPQSRUTVWL8X>Y[ZQSK PCQSR,Y\NK]R^(_ (s*

"
"$u  ( *68<9 +
*v~k)+; )+ +)~o<}$),*'
)
Y`Q[L(R^(abMOL(RXN+K<N+R@MONcL(R2dL8X]YfegN+Y[Z L7P5VhQSR:I3K<Li ^iQSajQkY`l +;<)7$ +*6<6<)+;7FHgI0K<L7MON NOPQSRUTV=L/XY[Z NdsN MOLR@PR_
^R@Pmd@Y\^Y`Q9V]Y`Q[MVOnom?$?@)
; Y\N+K]R@^Y`Q[L(R^(a,d,l(vL(V]QSvJL(R|],K<NOM+Q9V+NI0K<LUi<^Ui]QSaQkY[Q[NV
^R@PsZvNQSK0
@ajQ[MO^Y`Q[L(RVOp,?B)6b(v
$Uuv})+;7
U<
   qpr(s*
pst uF/v)pgwyx=oz
{+A
Dm
h
"
"
#|10
;<*{<*
*B
;<<}$Dm6f~k;?$?;<5 A ,s(s*
""vz
,68);<'(( '); $*)6~k
;3?$; )v9 ] ')
*Dm( ) $?$<*$BE ; )$
$)O6+FHI0K<L7MONON PQSRUTVL/X *v~k);<) ).b }* D=?)+<)$(O$FHEI0K LMONONOP(QSRUTVL/X
Y[Z NhY[ZQSK<PE0vK<L]vNO^RdYq^KY[QSR Th=N]V+NO^(K<MZ N+Kmd@l(_ Y[Z N0rLvKYkZ]R@YqN+K]R@^
Y[Q[LR@^(a d@l($L5V]QSvL(R],K<NOM+Q9VN
$L5V]QSv=nrm
?$?,)7;7 I3K<Li ^iQSajQkY`Q[N]Vc^(RP>Z N+QSK3
@ajQ[MO^Y`Q[L(R$V]p?,B
)76"#5
,7$vuvF/n
z0
$=uE
;Oqo""$r6<B?$;  -  - */4f<; )+)76
  D?$ v<)Dm; B
*96m.b<}*D?$; ) 96<)g?$;  -  - *A }$B,psr`ps,g )7

 D=? v A
 )76+R,Y\v3
@K<L+ NO^(VLRQSR Tp,
85O*] #  
<}$)
; 4~v)7 968*
)/.0
; 6r]R@Yq$U3
@K<L+
NO^5V+L(R,QSRUT
ps
q] 5@5
z0
$pft$z0$,p,uE; 
`

z
 '
)+
6<)O6
~?$;< -  - * )76?$; 
?,B<*
 - 46<*D $9( )m$A
)**$B<; )+)~ +9&U $)6FHI3K<LMONONOP(QSR T(VL/X
YkZ]R@YqN+K]R@^
Y[Q[LR@^(aL(RXN+K<N+R@MONL(RWI0K<L7M NVOV]QSRUT^(RP
eg^R@^+TUN+N+R,YL8X!R@MONKY\^(QSR@Y[lQSRgRL(3aNOP+TUN+_`^(VNOP
d@l5VYqN+V]pv?
B
)76 
w0x=vz
{+Dm
pUzv)z0D?@6pUtuF/v)p
|t$ z
w)+; ;<);O=$|bv O}0"",; 
?@6< s
E;<)*A
 1054)6<*
)/.0
; 6
6 68v 9( ).b<}*D=?;<)+A
+*6<)
!&U * ( ')?$;  -  - *96/ * 
6 68)76<6<D)+U 6FH
I3K<LMONONOP(QSR T(V0L8XY[Z N0 Y[ZfRR^a$L(R7XNK<N+R@MONLRoR_
MON+KYq^QSR,Y`lQSR|KY[Q 0MQ[^aU]R@YqN+aSajQTUNRMONp ?
B
)637"@


bF3; )6 6+
w0x=z
{D
:""
"$z; )$
)/.0
; 6KY[Q 0MQ[^a
R,Y\NaSaQTN+R@MON+p7
"$*
(v

$
w0yx=z
{+Dm

"
"$oxf;O?$}$9 +
 D=vv)*6~k;D?$; ) +*6<)
?;< -  - ** )76+]R,Y\$0,K<LUN ^(V+L(RQSR Tp`(A
O*# @7,
tzrw)+; ;<);OE$cbv O}c,w0x=z
{D
h
"
"Uv
FH$~k)+; )+ +).b<}6<)+?,;O(<)4E6<?,)7 )!6<)O6f~?$;  - A
- * )76 +;<)7$$)+/.;<v6FH:I0K<L7M NONOP(QSR T(VL8X=Y[Z N

Y[Z>LRXN+K<NRMONhLRR@MON+KYq^QSR,Y`lcQSRKY`Q M+Q[^(aR_
Y\N+aSajQTN+R@MON+p$?
B
)63U
"(U  E
; B
 $~kD
$
F]r)+' `c7
"$:sZvNm0R,Y\NK\@K]Q9V+NL8XRL(3aNOP+TUN+hEF/n
;<)76<6pvv
r
u$E; 
s,z0""vuU<; 
B= +
v<*
,$A
$)+?@)+v) )~k; ; )$
68)+ 60RR^(ayVL/Xe!^
YkZvNm^Y`_
Q[MV^(RPKY`Q M+Q[^(a,R,Y\NaSajQTUN+R@MONp@/A\ ]
5 $

n$);<Dm,Et$)
;<q3
"$%& '(*)+, )h6<4U$A
 }$)6<96c~| +
6 Dvv)+96FHI0K LMONONOP(QSRUTV2L/XY[Z N
d@QUYkZR,R^(afL(R7XNK<N+R@MONL(RR@MON+KYq^QSR,Y`lQSR2K]_
Y`Q M+Q[^(a@]R,Y\N+aSajQTN+R@MON+p?
B
)763
5v"$ %968)' );
s*)4
E$dYq^
Y[Q9VY`Q[M ^aoNO^5V+LRQSRUTgQkY[ZE],K N_
M+Q9V+NI0K Li ^iQSajQkY`Q[N]V]3z},?$Dmh
qp$).W
; 

34 A. Antonucci and M. Zaffalon



A Bayesian Network Framework for the Construction of Virtual Agents with Human-like Behaviour
O. Bangsø, N. Søndberg-Madsen, and F. V. Jensen
(pp. 35–37)


w w2YR\ P YRP^Rrw2OQJ RS\FG0w2


Y\] RSBSvP^Rrw2J Od\^D]nOdnwaip w Rv\^]7kRS]2x w2Y\FwBSJ2tRHk
P%RSBSx~Jqp v BHG<iKM
BHGYXYRq x?sYP xyRSGY\^i0w2Y\BHIK\^G<w

EP

kOdu wRSBHBH]A;w2XYOdG+J2BHwIYw2BHOdpbw2G+sYI~ JOQ\%J}BH<GYOds+G[XOdY] \%sY\^J9\^GYGY\^BP%]2Ir\%XYiXBHw Z<AQ\r\^ip0DKw2\^YAjp \;Y\BHIKxyY\^\G<RSww2WOdJDrRBHx5w2OQBHRS]7OQG J_ X+\^w GBHOdA.w2YY\;Rq&kRSAdw2QYRq\;OdxyG+IFRX+J \^\%AQP^Jw2OQJ2RSs+GYv+J v}RS]2\(wX+w2Y\%J \;P^]2OdZ`\VOdpG0xyRS] \
Sa  r.=
7.



OdJ2G[wBHYw2sYsY\^JqGYMSP%w2\%YX\Z<BHiIK\^w2G<Yw \yJw2P Od xyBH\]2OQJ2RHxk B[XYM<B%BHiKGYMX(w2YY\yRqxx?BH]2sYOdwP BH A &    2 */ *0 *0 + 2  *0

v w "\^BHBH]] ]2\%w(BHJ?GRHw2k\^YDKw2\\^YG<\yBHw}IK \^G<`BSwJVbJ~RP%OQ\nJP^[s+vw2] Y\%\%\P^Xw p {}\%X RSGx~Ww2v +R BHOQJ}]BHJ Zw \%\nRSP^k](w2RSOQ] w2RS\ GBHBH}w(GY\ P%XRSx9BUOdkA __
  ) 3)

xyRSGY\^iw2Y\?BHIK\^G<w BSJqp 


ACYB%\ iOdxyG+IRSw2OdP DrYBH\%w2J OQJqRSp*GOQJOd\%G[<Ys+sYOd] \^\%GYJ0P%\%BXP Z<Yi~\%J w2JfY\v w2BHOd]2xyw2GY\W\^RH]qkp
  

J2\^YGYRqXp YRqw2Y\ @;@WENxyRX+\^AQJPqBHG~Z\}sYJ \%X9w RWw2+OQJ


OdXYG<B%w iK\^p(Ad OdIK\^YGY\ P%\rWp R OQJzOdG[YsY\^GYP%\%XbZ<i0w2Y\~BHIK\^G<w J

\] ^\%GYJ2P%s+\%G<AdJw iJ9w2Y\^OdGDK\\^w2G<BHYIKwF\F\^PqG<J2BHw w GRSJFAQ\^Z] G\%\fJ ] RSxy\%s+J R] RSP%X+s+\%\^] JqAQP%\%M \%XJ~\rp Z<IYGYipRSw9BYZRqw2\^YOd\nG+klOdIwFwB%\^OdG[DrDKYBH\^G<Ods[Aw__
$  

wk2RSOdD] OdGwaBSibP^cw2waOdOdiDIrvOds+wa\] i\yPqwaiBHGvBH\?GZ@;\y@;@WJ @W\%E\^ENWG.N1pJPqkRSBHY]zGF\yw2ZYIK\;\~\^GY\nJ2[\^v ]w2BHBH]] BSA\nP^_jJ2w w2w2\%Od]2xyXsYP^\yklw2] s+RSBSx] Pn\_




BPqHBHZ+GAQ\zZkRS\]Odw2GYYJ \?\^]2BHw \%IKX\^G<OdwqG<pw R@ GYw2YP%\(\fBHBSGP^w2\^OdDDKOd\^waG<iw waOQiJvxy\R@;X+\^@WAQ\%EXFNWOdwJ


w2\^+AdOQOQP^J.Odw \n\%YXBHx~kl] RSv+xAQ\rpw2Y\ Y\
I<BHv xyBH]\WBHxyX+\%\^J2w Od\^Ir] GYJ.\^k]qRSM+]BSw2J}Yw2\
Yxy\^iyRX+w \^\^GYAQJXBHw ] R\ BHGYXPqw2BHYG\ ZW\ R P%RSx~iOQv \^AQBHX+] \%\%XXZ<w iR?w2w2YY\ \?WZ\%R J2bw;iBSOQP^\^w2AQOdX+D\%OdwaXiZIr\nOdkDKRS\^] G \
 

OdGB%DKw2Y\\^BHOdG(]I<s+BH]2xyIK\\%w JR Z \^ B%DKB%\}DK\rP%pRSG<c+w2s+] ]2RSw2A+YRq\^DK]2xy\^]"RSY] Rq\}w2YP \^ ] BH\]BSP^OdAw <\^Od] G J


EP
w2OdGY\\n[\^DKv\^\%G<P^w|w \%X BSJ
WOdR G<w2] kRRSX[]WsYw2P%Y\%\~Xp\^DK\^+G<OQwqJ
p?IrNWOdDKRS\%w J
\(w2Yw2 \BHP w; BH\qG+BSP IK \


IKI<\^BHGYxy\^\;]BHP A+ ZBH\}]BSGYP^Rzw \^XY] BHJqwMYB;w2YB%\DrBHX+OdAC\%BHJ2Z+Od] AQ\%\XkRSZ]"\^w2 YB%\DOQZRS\^s+ ] B%RHDk"OQRSw2s+Y]\^xRHk ] Ojp\%\rJ p+RSw2s+Y] \(P%\WBHIKBHAQ\^J G<RwW IKBS\^Jw J B WWR R bkBHRSw2]wBS P B%YD\%OdXG+IkRSw2]}YRq\] GY\%J \^RS] s+J2+] P%Odv.\rMM



GYOdGf\%\%\nX[OQGYJ2w RS\^wGYZP%\r\Wp w2Y\;JBHxy\BSJw2 BHwRHk


BHG<ivRSv+s+ACBHw2OQRSG Bw2HYGY\XP O kBHw2G+YIK\f\] OdG\%J RS\n[s+v] P%\%\fP^w OQ\%JX AQWRrJ2R ww2p +OQJY\Od\^A xyzRSBHw2AQJ OQRSRG BUBHA\%] P^\nw_


v+|s+BSw2P w2YOdG+\^BSIb] \P^w2BFOdOQDJbxyOdwaBH\qiG5BSJ2BHs+OdAQGYJ ] R\J2wRHBH kGYBSP%Jfw2\YBH\ kGRSW]bOdRGY\qJ2BSwP BHRH GYkP%w2BS\ P^BHw2RSw9Ods+Dw2OdBSv+waP^i s+w2Odw2DRSw2OdOds+waG+iKw7I _ p OdJ2G0vRS VGbGY\%J P^c\Ww2OdOQw2IrRSYs+G\^] Gf\rp X+\^BHvG\^GY@;X+@WJERSGfN1w2+kRSOQJ]WP w2 YBH\9G+\^IKDK\(\^G<BSw;Jw2RS s+BHw2wzAdO GYw2Y\%X \
4 



w2YY\\BHv+IK] \^\nG<kw\^] J(\^GYxyP%\RSw2kOdRSDr]BHw2w2OQYRS\9GBSkRSP^w2]OdDw2OdYwa\iFJ2waviv\%P^\O OQJzPB~BSP^v w2BHOdD] Od\^waG<iKw p


 Bv+HIKs+\^wFG<wGYRJ?X+xy\RSbGY\^SinOQJ?J2J2w YRSRSAQ\^s+GAQX&PqBHP%GRSG<ZwBH\OdG&J \%\^w2G.Y\p\n[Yv\\%P^RSw s+\%w7X _

#

38 O. Bangs, N. Sndberg-Madsen, and F. V. Jensen


cOdIrs+] \?  GfRSZ[T2\%P^w RS]2OQ\^G<w \%XEN&kRS]w2Y\J2v BH] \n_jw2Odxy\?BSP^w2OdDOdw2OQ\%Jqp


wxyBHROdG0X+\^w2AY\?OQJ~GYOd\^GY J \^]2Drw BH\%AdXsY\OdGkRS]Ww2Yw2\Y\9BSP^BHw2IKOdD\^G<Odwaw iJWwaxyivRS\0GY\^@;iKpz@WE+NWOQJJ




ZOdw2\^OQ\%wa}JF\%\^GY\^w2] Y\\`0`0RSRSGYGY\^i\^iOQJOdG+v+BHG6s+w~OdG+GYv+Rs+X+wf\FGYBHRGYX+X\rw2p=Y\FBS+P^OQJFw2OdD<OQJ_
cBHIKOdIr\^s+G<w] \ J.xyRS GY\^xyiYnRpX+\^A+Yk\RS]}.B;+w2^YS\nGYklw|RX+\^\"DK\^xyG<wzRX+w2\^YAQJ.\nklw}YRH\^kw2w2YY\^\] J2 YRqGf
OdG aca OdIrs+] \ Y p  a r. |5
# 
%

w2Y\\^DK\^G<w BSJRP%P^s+] \%XfRS]Ok"OdwOQJ B9w2+] \qBHwqp



Kj
K  Y
3+ *0
  *0 ),+ *) )  *0


GDrBHw2Ad+s OQBHJ(w J RS\%]fP^w2BHOQGYRSGX}w2Y\\ RS s+w2P^Adw2O OQGYRS\G6OQX+
\q] BSRSJ(vRrP%J RS\^GY]qP%M\^]2w2G+YOd\%G+J I0\bw2BHY] \\
-
 2 0

DrP^s+BHAd]2sY] \^\?G<kwyRS]WDrBHw2AdYsY\~\BHBHIKGY\^XG<w w2J;Y\] \%v+J RS] s+RSZ ] P%BH\(Z+OdxyA OdwaRSiGY\^RHiKkWMw2IrYOdDK\f\^Gb\^DKw2\^YG<\w kJ2RSw2Od]A kls+v+w2] s+\^] AdO\;x~] Od\%G J BH\q]2BHiK] MP .BHp GYXJ2YRSs+AQXZ\FJ \%\^GBSJ9w RSv+OQP%J


RBSJ P%J P^Rs+P^]2OC]2BHOdG+w \%IYXp*Odw2Y\w2OdYG+\v+s+BHwIK\^GYG<Rw JX+}\bbSSnnGYRJ2YX+RS\s+AQCX6kl] RSZx \ kRSs+];Y\^\xy


RSDrw2BHOQRSAds GYBHJ?w RSBH]
] \(X+\^OdG[w \^Y]2sYx~\^GYOdGYP%\%\%J"Xw RZ<iw2+YOQP \9~\^\nDK[\^w G<\^wqG<p wu w2Y\n\ _
w2w2YY\\v+BS] P^RSw2Z OdDBHOdZ+waibOdA Odxywai~Rw2X+ \^BHAQJnw
nw2pY\\^YDK\~\^G<GYwRX+\ OdA Y.R+P%P^^s+]qP%MrRSOjG<p \rwpSBHw2OdYGY\J w P%\^RS]2] x~X[OdOdG+G+IOdG+w IR0w2J YRS\xy\^\xyJ2RSOdx~w2OQv+RSG AQ\BHA?]2s+OdG[AQ\%YJ9sYsY\^GYJ2OdP%G+\I0OQw2JfY\X+RSP GY \BHG+BSIKPn\ _


v+ k] RSw2+Z OQBHJ Z+Od.A Od+wai^w2 GYBHRwX+\?w2YOQ\~JWw2OdGb+] w2\qYBH\(w?J2wBHOdA w |\?Zkj\~BHAQPqJ BH\r]2M]2w2OQY\%X\(RSRSs+s+w7wq_ p OdGf\n[v\%P^w \%X WR BSJOdG+v+s+w4  

v+BSJ;s+w|w2YGY\9RX+OdG+\;v+bs+wSGYnRX+\OdbA SB%DKnr\ Mw2OjYp \r\ p9JBHGYxyR\w2YX[\nOQJ2klw;w2]2 OdZ+BSs+Jw2OQRRSPnG _ rp|T2RqRriKJ2p Odw2OdDK\(P BHG+IK\?OdG0\n[v\%P^w \%X WR ] \%J2s+Adw J OdG  

Pk^\%s+P^]2w ] \%\%XXp M|J kR0Odw2wYOQ\FJ?BHOdGIK\^w2G<Yw \J~J2xywBHRSw GY\y\^w2i]2sY \rBSM"J9w2YGY\RSw9\^DKZ\^\%G<\^w(G BUBSkJ_ [p|NW\^I<BHw2OdDK\ P BHG+IK\ OdG \n[v\%P^w \%X WR Mw2Y\


Rw2YP%\;P^s+\^] DK\%\^XyG<wqJ RWM BHw2GYYX\ BHw2YIK\^\;G<RSw s+Jw2v+xys+RSwzGYb\^i9SnOdAz+Z\OdA BUw2Y\%P^\^w Gf\%X~P%RSZ<G[i _ x?P s+BH] G+X+IK\^\]WPqRHBHkG+B(GYRSAQRqw(DK\%\qXBSJ2OdRSA GYi0\UZnM+\~] \%] \%J2s+P^w2Adw O J\%OdXGfJ l\rRSp]2IY] pyRqw2Yp \



A Bayesian Network Framework for the Construction of Virtual Agents


with Human-like Behaviour 39
yxcOdRSIrGYs+\^] iY\ }
BSJYZ\\%V<\^GFv BHOd] GY\nJ _j\^w2]2Odw xy\%X\;p BSP^w2OdDOdwaiwaiv\@;@WEN6Y\^] \;BxyRX+\^AkRS]Bw2Y\nklw\^DK\^G<w?w2Y\nklwRHkw2Y\zBHIK\^G<w J
.%  

# [pNWP \^BHI<G+BHIKw2\9OdDKPq\!BHGP ZBH\;G+] IK\%\ P^w2OOd G \%X\n[vOdw2\%YP^w RS\%s+X w;B~WAQRRSwWMRHk|w2\nYk\ _ RHRHkkw2BFYP^\?ACBSv+J ACJBHG.RSMs+sYw2v+J \%s+Xw2w2w OdG+RIfX+w2\^Yw \^\~]2x~v+] OdRSGYZ \(BHw2Z+YOdA\(Odwa\ni[vRHk\%P^J2w sY\%P%XbP%\%\nJ kJ_


kRS]2wqM+] \%J2s+Adw JOdG0BHG+IK\^]qp kk\%RSP^] wBSRSJ GyJ \%w2J YJ2Od\G+] I9\%J w2YRSs+\] X[P%\%O J|P^RHs+k.AdBHwaw2iKw p \^x~v+w2OdG+I?w2Y\v+ACBHGBHGYX
YpNWJ2s+\^AdI<wBHRHw2kOdDKB\w2P + ] BH\qG+BHIKwq\M.w2OdYG\y\nBH[G<vw\%BHP^IKw RS\%G+X OQJ2Ww(R Pq6BHGBSJ(Z\~BFX+] \n\n__ +OQJ|BHv+v+] R%[OdxBHw2OQRSGyxy\qBHGYJ
w2 BHw
}\]2OQJ2t?OdIrGYRS]7_ 

OdRqG+IG+Odw2G+YI\bRSx?GYs+\F] X+OQJy\^]J \^RHw~k~+BOdIrOk\^\rGYM RSs+s+IrG+.AQ\%MJ JFBSJ~w2Yw2Y\ \FW+R Od1IrY\%kRSJ2w]


%  

k\qBHw \%XM+] \%J2s+Adw JOdGBHG+IK\^]qp 

[pNWJ2s+\^AdI<wyBHRHw2kzOdDKB0\P w2+ ] BH\qG+BHIKwq\MOdw2GY\F\n[BHvG<\%wP^BHw IK\%RSX G+OQWJ2wyR 6OQJ9BSGYJ(\n[BFw9] w \nR _ Ir\n[s+vOQJ2\%+P^Odw G+\%IX ZW\^R wa}\%x\^G B%iYp=GYRSw?BHGYOdX&GYP^Ad[sYpX+\PqBHBFG&ZOk\b\rp X+RSu GYOQ\bJ2w2OdZ<G[i _
 

P%\%RSJ x~RSs+v ] BHP%]2\%OdG+JIRHk"w2Yw2Y\\?] BH\%G<J RSwBHs+IK] P%RS\%G+JOQJ2wqRHpkw2Y\bBHIK\^G<wfBHGYXw2Y\


 

Odx~vRrJ J2OdZ+AQ\;w R~X+\nk\qBHwqM+] \%J2s+Adw JOdGFk\qBH]qp


%


]
#[xp9BHu w2OQJzOQOQRSJ2w2YGOd] G+\%PqIrXBHs+GOQOQJ2J;Z+GYOd\yG+RSI;BSw(P Z+B\^OQ\^waw2}DK]2Od\%\%DX\^OCBHGZ<A
J2ibxOdw2xs BHw2BHBHw w2t\^OQRS]qOdG+pGYIJ
 EGYN \^BH] v+xy\v+R[] X+R%p[\^RSAQO]J_ w2RqYDK\\^u ] \^P%\^w RSxy\^x~]2RSx~Odw2G+OQOdRSIG+GYOdw2G+JfYIF\OQJ0w2v+Y] B\9RSZ+x~J2AQw2\^O] x[\^w2G+s+BHIr] GYw2\X9RHYRHk9k|Rqw2w2YYJ \\(\^]2OdX[G[OQRSO YsYsYJ"P^\^s+w2GYYAdP%wa\\~i&\nRSRHG _k


RHX+kW\^w w2\^Y]2\x~OdX[G+O OdG+I0P^s+YAdwaRqiRHX[kWO IK\^P^w2s+w2OdAdG+w(IOdw?] \%OQJJ RSw s+Rf] P%Z+\%]2JqOdM}G+IBHGYw2XY\w2Y\n\^G _ vOdG \%P^\nw [\%Xv\%WP^w R \%FX OQJWBUR \%P^OQw Jq\%p XMrcOjp \rOdGYpSX[YOdRqG+IACw2BHY]2IK\\J2w2w2Y] \\^G+P Ir w2BH5G+IKOQ\J
 $ 

vw2Y\%\P^w \^\%DKX \^G<WwqRpYZ \%BSJ P \tFxyw RRX+BH\^w AQJ}AQ\qBSJ2OdwA ] \% J BH\^wx?OdZ+w AQ\Ww2BSYJ \Zxy\nkRRSX] \_ X+BURSGY\%P^\Fw \%X[Xs+M?]2Odw2G+YI\w2J Y\^\F]2OQX+RSsY\^w J2\^GY]2\%x~J J0OdG OQBHJw2OQw2RSYG\RHRSkzs+w2w2Yv+\Fs+w\^xyRHkRSw2w2OQRSYG \
4 
 

\w2^ AQJBHwBHAdBS] P^\qw2BSOdX[DiOdw2OQ\%X+J \%J BHP^] ]2\WOdZv+\%ACXBHGYp J}kRSY]\bBSP%X[<Os+\^Od]2] Od\^G+GYI~P%\%] J\%J RSs+OdA ] zP%\%ZJq\ M {}RSx~Yv \ BH]BHP^w w2RSOQ]qRSpG
] RSvRrJ \^]zJ2YRSs+AQXbRSs+w2v+s+wWw2Y\?GY\n[w
w2OdG+YI9\BBSP^] w2\%OdJ DRSOdwas+i] P%wa\riM+vBH\%GYJXyBH] OdGY\J2X[w \qOBSX\^] \^RHG<kw\n[v+vACBH\%GYP^w J\%kX RS] WBSR P%0<s+w2YOd]7\_ w2+OdG+I0RHkkRSw2Y](\9w2YBH\FIK\^BHG<IKw;\^G<BHGYw~Xbw RbB~X+AdOQR+J2wWp RH k}w(BSIKP^\^w2w OdJyDOdBSw2OQJ(\%JqOdG+M v+]BHs+G+wytKBH\%XG
s+DK\^w2OdG+A OdOQw2\^OQGY\%P%J"\rpOdA +YZ\|\v+B ACBHxyGYJ.\qBSJ2Ods+A +] \BHAQRHJ kRw2OdYGY\P^Ad\nsY[X+v\\%BHP^G?w \%OdX9GYJ2OdGYwBHP%GYRSG[P%\_ Z<BSP^iwyw2YX[O\^Od] \^] \n\^[G<vw2\%AdiP^w \%w RX WBHGR \^pxyu RSOw2OQRS\^G ] \^BHG<AWwzP BHIKBH\^G+G<IKw \rJM\rpOdIYA p|] BS\nJ_

EP


40 O. Bangs, N. Sndberg-Madsen, and F. V. Jensen


w B RbZ+P s+YAdRiRrJ wa\FivBH\FIrIrBH] IK\%\^J J2G<OdwyDK\FOQJyBSBHP^w2G+OdIKD\^Odw2] OQ\%\%XJqM
Odw~+OQJ9OdAQ\fxyRSBf] \v BSAdP^O tKO \^AdJ2i w SRJ \%G+<Adi?sY\^RSGYGYP%\}\ P RHYk.RSv+OQP%AC\}BHGYRHJqk M] J \%R?J RSRSs+GY] P%P%\ \w2BSYP%\<s+OQJ2Odw2OQOQRSJ}GPqUBHSAQw2P^Ys+\|ACBHZw \%\%XJ2w M
AdBHBHO IKAQtKRS\^\^GYG<Adi\rwpyw ROdA \yP ] Y\^\qRG<BSRrDP^J OQw;\J2OQw RSRBSGP^+w2BOdOdDIrJ2OdYw2i[\^OQJ2\%](w J\^BHxG+IKY\^\^]Y] \Z<\^] i0w2\YZ\q\\^BSOdP G+BHIKIF\^BHxyG<IKw\^RSG<] OQ\wJ }J \%\z<sYPqBH\^GYG+P%GY\rRSp wZ\WJ2s+] \Ww2 BHww2+OQJ}BSJOdGkjBSP^w}w2Y\WZ\%J2w EV

\^G<BSw2J]BHBHGYG P%\?}kyRS]zrl\qBSSP bS \^xy.KRSw2%OQjRSSGbBSSJ 2J %RjSP^%OCBH w \%XOdw2bOdw2 \qBSBHP G J2wBHG<R&w2AdiB%DKIrRS] OQRqX5Odw2G+YIY\M|Dr}BH\AdsY\OdA RHkx\^BHxyOdG<RSww2BHOQRSOdGGYJbBZAd\^OQJ2Odw~G+I6RHk P%w2RSYG[\_
.

BSBSP^P^w2w2OdOdDDOdOdwawaiKiKp
MWx?Ys+\ Adw2Odv+P^Adw2iOQRSw2GY
\] X+RSRSvw7Rr_jJ v+\^] ]|RX[sYOdA P^Yww2YRH\^k GykRS]|\qBHBSGYP X

EF
P%P%RSRSG<G<w2w2]2]2OdOdZ+Z+s+s+w2w2OQOQRSRSGYGJWkjBSw RX+\~RqDK\^M];BHGYw2OdXFxyAQ\r\^pyw Vw2YRS\xyDr\~BH\^AdsYDK\%\^JG<w RHJkx\qBSB%P iK M

YP%RSRqx~}\^v+DKAQ\^\^w ]q\^MyAdiZ\] \^J xyRRqDKv\%RqX}\^kl]7] klRSs+x Ayw2 BHww2w2Y]\^BHi1s+xBHBS] Jn\nM GY\^YDKRq\^ ]


EP

lSSOdw2+S Yw2Y%\ \n[v \%P^w \%X WR iOQ\^AQX[OdG+I0w2Y\f}yS


w2RS+IrOQiKJM9RBHP%GYP^X s+] JOdwbOdGOQJ0<Zs+\^xOdG+BHIGYJOdG<OQDKJ\%J2Bw2OdxI<BHBHw w2\%w X5\^]OdG5RHk;w2vYYJ2\i[P .YRSOA__


EP
5 
EF EP


Adw2O YIr\<w~] \%v+P%] RSRHAdQT2\%\%P^P^w2wqOQpRSG{|RHs+k]2] v+\^] G<\^wDOQ\^RSDKsY\^JWG<\^w DKJ~\^xG<w B%JqiMIrBHOdAQDJ OdR0G+Iw2]2w2OdYIrIK\^x\^]


EV

] w2+\^GYOdG+\^I}\%w2X BHJ2wzw2] }\^G+\9IrYw2RSvOd\?G w RAQRMRStFw2+klOQs+J?]2OQw2JqYM\^RS]zGYOdP%G<\w RBHI<YBHRqOd1G.M


w RB

BHwW]BHRWJ2GYOdB%G+X+DKIRSRSx OQRSX?G+sYv+AdiJ2] Od\%G+w2X[+I~OQP^OQJw2wYBHZ+\%J OdOdA\;A OdwaDri] BH\%}AdJ2sYs+\\%AdJwOdBSOdA GJ<v+}w2OQY\^P t?Od\bIrBH<BHG(w IKJq\^BSp G<P^w2w OdJFDOdBHwaAi _
EV = QoL EP EF

OdGYP^AdsYX+\rp
EP

OdG+B%Ii[w JR;vZ\^\^]7w2kw RS\^]2]x~w2OdYG+\^IOd]}J RSJ2Odxyw2s \BHw2BSOQP^RSw2GyOdDZ<Odwai9iKMw2]2w2i<OdsYG+J?IGYw R?\^DKBS\^P%]<s+w2]2Odi<] \ _   +  Kj 

xyBSJ RSJ \%] \~J J] w2\%YJ \|RSs+X[] O P%\%P^Jqs+p Adwa iJzRH}k\] \UBH7AdBS] P%\q<BSs+X[ibOd]2OdGYG+\%IW\%] X\%J xyRSs+R] X+P%\^\%AQJ.Jkw RSR ]
0 ) ),2 2

w2v+ G ] RKBHw2wBS+P OQGYJ\%J w \%\%RXP^w2Zw OQRS\?RGfOdZx~}\0v+\zAQJ2\^vxy\%OdAP^\^OG< IKw \%R9\%XXw2+kP%RS] RSRS]x~s+w2Irv+Y0AQ\0\^w BHRS\^AdAds+iKw2w2p?YAdO \zGY\%w2\~X6+OdBHG+BHOdIKv[x J_


w2\^YAQJ~\k
RS]DrBHX+Ads \^w BH\^w ]2RSx~]qMYOdG+}Od\;G+Ix~OdIrY<\^wzw2BSYJ\^]}w2\^YAd\sYBHJ \zIK\^w2G<Yw \%JJ \;J2YxyRSRs+XAQX _


w2G+]2OQi\^GYw P%R~\(BSRHP%k|<\qs+BSOd] P \zb] v+\%ACJ BHRSGbs+] PqP%BH\%GJqp


Z\?YJ2\;s+Z+\n[w2]vBS\%P^P^w w \%\%XXklOd] GYRSP%x*RSG<w2DKY\n\_ BHBHww2OdG+X+I\^DK@;\^AQ@WRSv+EOdG+N IxyB(RJ2X+i[\^J2AQw J\^xkl] RSkx=RS]w2BHYs+\w RSJ2xv\%BHP^w2OOQ PqPqBHBHAdw2iOQRSIKGY\^JqGYp \^]7_
\ n[BSvJ|\%ZP^\%w \^\%GX BHWw2w R \^x~v+RHkzw \%w2XYp\fJ2+Odw2OQs J}BHDrw2BHOQRSAdsYG\OQJ|Yw2\^Y] \^\FGw2sYYJ \F\%X~v+ACkBHRSG ]

< H <
wk2RSY] \BSPqP%BH<AQs+P^s+Od]2ACOdBHG+w2IOQRS] G6\%J RHRSk;s+] w2P%Y\%\ J PqBHGfw2RHYk;\^Gfw2YZ\\;v+sYACBHJ \%G.XFp6Z<
iACBHw2YGY\J    
BHIKY\^\G<w ] J\%J BHRSGYs+X?] P%w2Y\%J\^Od]"w2 vBHRrwJ J2x~OdZ+OdAQIr\|<DrwFBHAdZsY\0\%JqB%p
DrVBHRSOdACxyBHZ+\|AQ\0] \%J kRSRS]s+] w2P%Y\%\J

NWP^RSw2w OQ\WRSGw2
BHw] RSw2v+RrOQJJ \^BH]?v+v+OdG] RKw2BSYP \FJOQBHJxyx?\~iKRSv+B%OQiPrM+BSZ+J(s+wBSP^Pqw2BHOdDGFOdw2\qOQBS\%J7Jq_ p Bw2H]2] Od\Z+s+IrAQw RS\%Z JWBHRHAk
Drw2BHYAdsY\(\%BHJqIKM\^\rG<p IYw Jqp9M w2\rYp IY\ p J P \q BSBHJ RS]2OQG.J2xMRSB[w2M Y\^] +JOdAQBH\?] \zRSw2BH[w7__
EV

Od<A sYi?\^ZG<\w
\nv+[ACBHv GYBHJqGYp"X+\%X~+OQw JqRWMSYOdGYRqP^}AdsY\^X+DK\^\ ]qBHMG<P%i(RSxyG<s+\%Jx?BHZw
\^w2]|YRH\kP%J2Rrs+J2ZYw
J RH\n_k
Od\^G] +JOdw2BHAQY\] \0\xyRSI<RSZ[BHGYT2xy\%\^iP^\nw _j}J"x~RSRSOd]Ir]2AQ<BHXwWZYMzJ2Z\rw2\]p IYBSBHpP^Gbw2BOQRSBHGYv+ZYJ
AQJ2RSw2w2s+] BSIrBHP^w
w2OQPqRSOQJFBHG.G9p}BHZGNW\|R~RSkZ[RSxs+T2\%BHGYP^w7X w_
P%BHB RSGyIrx?] \n\%Z+[\%OdvX[G RSi9BHGYw2BH\^OQRSv+G<Gbw2v+OC] BHRHRKA+k
BSIrv+P ] ~ACRqBHxGYw2JB%yi?w RyOdG~Z\^\w2DrYBHBH\}w2Adw s G<\^BHs+x~w x?\rv+p Zw \% \^X]|AdMKw RH\^k]2YG v\^BHRr] w2J \}OdJ2DKOdw2Z+\^YAdAQi \\ w w2Y\^]\bDr BHAdBHsYw\%w2JY\^kl] ifRSxBH] \rw2M+Y}\b\;I<GYBH\%xy\%X\rp=w R~tYGY\ Rq&W}R \^] \;RHk~w R9\qBSIKP \^ w
Y+] OdIrJ2Ywz\%J2J2w w\^v0\n[OQvJW\%w P^Ryw \%YX GYWX0R w2Y\(BUv+klACw BH\^G.]MJ2RSs+]zZ+w2v+]ACBSBHP^GYw2JqOdG+MI9w2Odw2Y0\Ww2\nY\ _ ] J2\%vJ \%RSP^s+O ] P%\%\XMZBH\^AQOdRSG+G+IbIbOdGOdB0w2J2vB\%P^J2Ow BHPGYXYJ2BHwBH] Xw \v+x?] \nsYkJ2\^wy] \^BHGYAQP%J \RkZRS\]
 

vv\%\%P^P^w w \%\%XXOdJ2GYOdw2P%s RSBHG<w2DKOQ\^RSG+GOQ\^BUGYklP%w \~\^]yRHk|PqBHw2]2Y]2\?iOdv+G+ACIBHGRSns+Mw~w2Yw2Y\^G0\w2v+YACBH\(G\nOQJ_



w2P^YAdsY\fX+\9] \%w2J YRSs+\9] J2P%w\rBHp GYXY BHwy] xXbB%]iBHG+BHIKAQ\yJ RBHGZ\fB%DK\n\^[]vBHIK\%X[\OQ\^BHG<IKw\^G<w w;ROdG[OdA _
sYJ RbJ \%RSXfG.w M
Rys+YG<GYw2OdXAGYw2YR0\I<ZBH\%OdGJ2w Odv+GACBH\nG0[vw R\%P^\nw +\%X \%P^s+Ww R \?6GY\nOQJ([wqvMRrBHJ GYJ2OX _ Odw
B%X+DKR\ \%RHJ"k.GY\qRSBSw"P x] BH\%tKJ \RSs+J \^] GYP%\rJ \|MKkRSY]
\^BW] \] \%OdwJ RSxs+BH] P%tK\}\%J}AdO J tK\^\GYJ J \q\rBSMKJ \rRSp G.IYpM
Z+wBHAQ\rIKp\RHkY\9IrOdIrD] OdG+\%\%IX[ibBBHZv+\^v+w2w ] \^RK] BSP WR 6OdA\%"J2w2 OdB%xDKBH\9w w2\rYM
\~Z+s+BSX[w9DrBHG[OdA _ 
Z+s+wOdwX+R\%JkRS]OdG<w \^Ad OdIK\^GYP%\rp
w<BHsYtK\^\zGYP%AQRS\rG+MKIK\q\^BS]P ~w R~vRrPqJ BHJ2AQOdP^Z+s+AQAC\|BHw GY\r\nM[[kw
RSJ2]w \q\^BSv9P GY\%J2\%w X+\^J"vFw RWOdGFZw2\Y\^\;DrJ BH\nA__
$ 
      ! "$#%&' ( )+*," ! !!-($&.($&'
s BHw \%Xpc+s+]2w2Y\^]2xyRS] \w2+OQJ9BHv+v+] RKBSP OdA }OdGYP^AdsYX+\ !!/!0&' "$&.+#213( 5476)&!89:&<;=5>?';@(    
+*," ! !!A&+0&'  &'  B

A Bayesian Network Framework for the Construction of Virtual Agents


with Human-like Behaviour 41
    9 r C <
9r C <          ] wW\%J OQRSJWs+sY] P%J2s \%J?BHAdIri0OdDKBy\^GIKRw R0RXBHGOQX+\qBHBIK\^w G<Rw?xkjBHBHAdtK\9J2Odw2s++] Od\?Gw2 BH wzBHw?w2YOQ\J
vvRrRrJ J J2J2OdOdZ+Z+AQAQ\(\~kw RSRf]Rqw2DK \^BH]2w(]2BHOdIKw \^\ G<w?waiJv\~OdGw RFw2Y \B%BHDKIK\r\^p G<w(w;J2OQvJ?\%BHP^AQO+J R _


OdJ GbRSYs+\q\] BSP%BSP \%P^Jw2w2OdBSD P^OdBHw2waOdwibDOdOdG[wawaiiYvsYwa\%\^iJ(GYvP%BH\r\ AQpWRSw2G+cYYIfRS\]zxy\qOdBSRSw2P w2OdDrBHBHBSAdw2P^OQw2RSw2OdYGD\OdkwaRSiKBS]MP^w2w2w2OdYYD\\?Odw2BS] OQ\%\nPnJ__



PqBHw2OQRSG.p
EF

w2OdG[OdDYOdsYwaiK\^M.GYP%\f+OQBHP AQRSG+X[IbOd] \%P^Odw2w2OQRSG.w2M"Y\BHGYJXBHxyw2Y\\~kRSJ2w2](] \^OdG[G+YIrsYw2\^GYRHP%k\Fw2RSYG \ 2


N YOdtK\RSACBHBUs+T w2Y RSi] AQJyX[Od}IRSks+RSAQ]XDrOdBHGAds v BHBHZ+]2AQw2\OQP^s+X[ACOQBHJ P^]sYAdJ O J2tKOQ\FRSGYw JRRSw2G BH+G+OQt J
- 4+ 0 */  0 32

w2w YR\ OdGYWP^RAdsYX+\RHkBw2YJ2\wBHBSGYP^XYw2BHOdD] OdXwaiKv+p ] \n wk\^OQ] J9\^GYBHP%AQ\0J RkRSBF]IKZRRSRw2XOQX+w2Y\qB\




BJ2SYP^w2RSOds+DAQOdXwai0BHwaAQJ iRv\%ZJ\(BHIrGYOdX0DK\^w2GY\yByBSkjBSP^P^w2w OdDRSOd]zw2OQk\%RSJq];p(\q|BSBSP P \^xyBSP^RSw2w2OdDOQRSOdwaG.i M OQxX+\qBHBSw2OdJ9DK\~RSGBHG+w2IrYAQ\\rp;`YV\~BHs++w2OQYP RS] JW}\}RS s+B%AQDKXb\AdIrO tKOdDK\(\^w GRBfw2 GYBHRSG+]7t _
vsYBHIKJ Rr\%\^J X9G<\^]qwkp RSJ2s+]}wx1PqxBHAQw P^B%Rzs+iw2ACYBHZw2\\zOdG+JIKBHI9RxyRw2YX\}\ DrOQX+BHAd\qsYB9\rM<w MKR9Z<J i~RzAQ\^w2w2w YBHBH\ Adw" YP^GYw2\nOQRS_jw2Gks+RSG+
] Od] G+BHRHGI _ @;s+@WIrEOdGNWJqkRSp.]v+\|] Rq}DRSOQs+X[AQOdXyG+IBHAQBJ Rw AdRO tKRS\|Aw kR RS]w2 J2BHvG+\%tP^Ow2 YPq\BHw2BHOQGYRSRSGG<i<RH_k
w2v+Y] \\nkZ\^] \^\^ GYB%P%D\%OQJqRSp ]}RHkBHIK\^G<w J}PqBHGZ\X+RSGY\ J RSAQ\^Adi~RSGyw2Y\
EV

EF xyRSsYJ] \^DOQ\^}\^] JkRS]DrBHAds BHZ+AQ\P%RSx~xy\^G<w Jqp
w2Y\
ACJBHBHGYxyJ~\kRSBS]yJfBSBSP%<P^w2s+OdOdD]2OdOdw2G+OQI\%JqMW] \%J RSOdw2s+&] P%\%OdGYJP%J2RSYG<RSDKs+\^G+AQXOQ\^GYOdGYP%P^\AdsY] X+\n\ _  0  0 30 Y | 0 2

v+J2YACBSRSP^s+OdAQG+XI OdGYP^WAdsYR X+\|p] \%c+J s+RS]2s+w2] YP%\^\%]2Jxyw2 RS] BH\wOdG[w2YYsY\$\^GYJ2P%v\|\%w2P^YO \|Pqv+BH] w2RSOQRSZ[G _
 !#"%$'&)(+*-,/.1024365879,;:=<?>83651@BADCE:-51FG3652H@BI6I 36J
7%@B51<?@BKL ML$'"N O%P=P-QRO%S=&


OkdBHG[\%Z+P^YOdw AsY\%Odwa\^XiGYP%Z<RH\yik?w2w2YBHY\~w2\w v+\^v+x~] ACRSBHv+Z Gw2BHOdZ+G+ZOdI1A\^OdOdwaG+w2iKYIM\YJ2RqsYv+P%ACP%BH\%] G.J \%MfJ7J klRSs+BHs+GYAj] MXP%Y\%J(RqY!RqBH ] \w2Y] BU\^\nki __
T)U  VV#W'?MYX+%Z[\Z?W' ]^[%Z#`_'&=&=&a3b:=cBI *ed\dD

T  f-ghW'[=_'&=&%_i@Bj-@BADkl3652H@BAmin3 7o>82bpD

J RSRSs+s+w] J2P%sY\%JP%P%BH\%J ] J7\0kls+BUAdiK\%p P^w \%XZ<iw2Y\fv+ACBHGZ\^OdG+IPqBH]2]2OQ\%X q r/s U M _'&=&%$tu9v`JHw8x=x=y=z-w=w%{'JHw=1g| U  =}%Z)~+f'[#DZ


g)f=YMfB]^[%Z#

 <y q
`m\Zf=8s= U f=[=W'MG1f U U  #"==+1>@Y(+*7'J
5362436j-@v`24AD0<B240A@*t),;*-243b*-5pD)W']M=[/ =[\Q

cYRHkRS]fw2Y\q\9BSP BH&ZRqwaDKi\rvM\YRHk\^] \?IKw2\^YG<\9wfBHw2IKY\^\G<wzv+wa] i\nvk\^\(] \^X[GYOP%\%\^Jf] JRHklky] RSBHx Ad
       \ ZsY~+[#

m T W'\Z[#?1/_'&=&%_mdD52H@R7'A:-24365872>@(h(uE*#F=@BI*

v+BHG&] Od\nA .kB%\^ZDK] \z\^\^]GYsYBHP%J IK\|\%\XOQJ.pBHs+IKGY\^YJ2G<v\(wq\%MWBHP^GYIKO \%\^\%G<\%X?X+wJw2waYiw v\|R\J2wZPqBH\bBHGYG0XYJ2vBHBH] \%AQXJ P^R~Ov+ Od] \%GY\nXP^kp\^AdsY] \^X+ GYk9\?P%BB\
t),;*-243b*-5p365t),;c?*#F-3b@?F(>:-A:=<B2H@BA?pDH~+fB[[#M8Q
 %f'|Z}[G f=\}f=f=m \ZW U f=8=[?W-Z f=W U
}W'?W=DZ[?N/ U W-Z f='KL[BZ}fM'W'M|[#\[#W'??}
}W U U [=[#KL[ U `f=[=

J2RHwk}BHw2GY XYBHBHw] Xywai]vBH\9G+IK\OdAkRS]X[Ow2Y\^\]z] kl\%] J RSRSxs+] P%w2Y\%J\B%YDK\^\^]] BH\WIKBH\GBHBHIKIK\^\^G<G<wqw p  T W'%+~! r+  U U []^ |_'&=&=&^*.1J\F=*-kl5(+*-5J

w2 wYIK\|J2\^w2B%G<OdDKAw\^wax]iBHvBHIKtK\%\J
\%BHJxIKJ \^B%\^G<iGYwqJ MrBH\z\rAQJ pw IYRzR9pq w2AQ\^YB%wDK\
\ Z+s+AdJi;J
J2was+w2i x=vBH\
w|w xyR(X[O\^w2G<Y\^w2\;]
OQRSJklGYBH] RS\%xyXx \ p
pD24AD0<B243b*-5:-51Fm@R.@B24362436j-@v`24AD0<B240A@Dpm@R.1A@Dp@B52H:-J
243b*-5365 m:-C=@DpD3b:-5 i@B24k*-Aop/H~+fB[[#M %mf'lZ}[
h} \Z[[%Z}YH%Z[W-Z f=W U`U f=MW/\Z `BW U H%Z[ U U  Q
EF =[B[8fB [BZs9f=[[B[=W'=[#|_'%_oQ_'=S

DrZBH\qAdBHsYw2\rOdG+M.If\rp IYvp\%RSw2v+YAQ\9\~Z+s+s+vAd i0OdwaA i|vX[\~] RSBHvIK\^xyG<w RS] J\~OdGYw2 P^AdBHO GG BHw2w2YOQRS\GB%w D<R _

EF


\^]BH|IKBS\;P v\^BH] IKJ \^RSG<G w


JqGYM[\%\%YX+\^J|G~B T2waRqiiv\]2OQBHJ \%GYJqXyp BzJ2v\%P^O PqBHw2OQRSGyRHk
1

w2BHYIK\z\^G<v+w] \nRHkk\^w2] \^GYBHwP%\%waJiv\rYM\^GY] \;RSw Od\Www2X[ OBHw\^] v+J] kl\n] kRS\^x*] \^GYB9P%\%J2J}wBHkGYRSXY]BHBS] PnX _
w2 k}OdDJ OdRSwai xyU\9v+AC] BH\%GJ RSwas+i] vP%\%\9JGYBH\%] \%\ X++J;+w \%RXZOdGY\(J2OQ+X++\z\%XBHGFBHBHw?IKB\^G<J2wvwa\%iP^vO \rP p


Dr\^]BHBHAdsYIK\r\FMBHRSIK]|\^OdG<GYwyJ2OQRHX+kW\zw2B; ]BHBHw9G+waIKi\ vw2\r MBHOdww9X[x?OsY\^J2] w9JZkl] \RSxJ2vw2\%YP^O\z B%\%D<X_ p


42 O. Bangs, N. Sndberg-Madsen, and F. V. Jensen
Loopy Propagation: the Convergence Error in Markov Networks
Janneke H. Bolt
Institute of Information and Computing Sciences, Utrecht University
P.O. Box 80.089, 3508 TB Utrecht, The Netherlands
janneke@cs.uu.nl

Abstract
Loopy propagation provides for approximate reasoning with Bayesian networks. In pre-
vious research, we distinguished between two different types of error in the probabilities
yielded by the algorithm; the cycling error and the convergence error. Other researchers
analysed an equivalent algorithm for pairwise Markov networks. For such networks with
just a simple loop, a relationship between the exact and the approximate probabilities was
established. In their research, there appeared to be no equivalent for the convergence er-
ror, however. In this paper, we indicate that the convergence error in a Bayesian network
is converted to a cycling error in the equivalent Markov network. Furthermore, we show
that the prior convergence error in Markov networks is characterised by the fact that the
previously mentioned relationship between the exact and the approximate probabilities
cannot be derived for the loop node in which this error occurs.

1 Introduction

A Bayesian network uniquely defines a joint probability distribution and as such provides for computing any probability of interest over its variables. Reasoning with a Bayesian network, more specifically, amounts to computing (posterior) probability distributions for the variables involved. For networks without any topological restrictions, this reasoning task is known to be NP-hard (Cooper 1990). For networks with specific restricted topologies, however, efficient algorithms are available, such as Pearl's propagation algorithm for singly connected networks. Also the task of computing approximate probabilities with guaranteed error bounds is NP-hard in general (Dagum and Luby 1993). Although their results are not guaranteed to lie within given error bounds, various approximation algorithms are available that yield good results on many real-life networks. One of these algorithms is the loopy-propagation algorithm. The basic idea of this algorithm is to apply Pearl's propagation algorithm to a Bayesian network regardless of its topological structure. While the algorithm results in exact probability distributions for a singly connected network, it yields approximate probabilities for the variables of a multiply connected network. Good approximation performance has been reported for this algorithm (Murphy et al. 1999).

In (Bolt and van der Gaag 2004), we studied the performance of the loopy-propagation algorithm from a theoretical point of view and argued that two types of error may arise in the approximate probabilities yielded by the algorithm: the cycling error and the convergence error. A cycling error arises when messages are being passed on within a loop repetitively and old information is mistaken for new by the variables involved. A convergence error arises when messages that originate from dependent variables are combined as if they were independent.

Many other researchers have addressed the performance of the loopy-propagation algorithm. Weiss and his co-workers, more specifically, investigated its performance by studying the application of an equivalent algorithm on pairwise Markov networks (Weiss 2000, and Weiss and Freeman 2001). Their use of Markov networks for this purpose was motivated by the relatively easier analysis of these networks and
justified by the observation that any Bayesian network can be converted into an equivalent pairwise Markov network. Weiss (2000) derived an analytical relationship between the exact and the computed probabilities for the loop nodes in a network including a single loop. In the analysis of loopy propagation in Markov networks, however, no distinction between different error types was made, and on first sight there is no equivalent for the convergence error. In this paper we investigate this difference in results; we do so by constructing the simplest situation in which a convergence error may occur, and analysing the equivalent Markov network. We find that the convergence error in the Bayesian network is converted to a cycling error in the equivalent Markov network. Furthermore, we find that the prior convergence error in Markov networks is characterised by the fact that the relationship between the exact and the approximate probabilities, as established by Weiss, cannot be derived for the loop node in which this error occurs.

2 Bayesian Networks

A Bayesian network is a model of a joint probability distribution Pr over a set of stochastic variables V, consisting of a directed acyclic graph and a set of conditional probability distributions¹. Each variable A is represented by a node A in the network's digraph². (Conditional) independency between the variables is captured by the digraph's set of arcs according to the d-separation criterion (Pearl 1988). The strength of the probabilistic relationships between the variables is captured by the conditional probability distributions Pr(A | p(A)), where p(A) denotes the instantiations of the parents of A. The joint probability distribution is presented by

    Pr(V) = ∏_{A∈V} Pr(A | p(A))

For the scope of this paper we assume all variables of a Bayesian network to be binary. We will often write a for A = a₁ and ā for A = a₂. Fig. 1 depicts a small binary Bayesian network.

Figure 1: An example Bayesian network, with Pr(a) = x, Pr(b | a) = p, Pr(b | ā) = q, Pr(c | ab) = r, Pr(c | ab̄) = s, Pr(c | āb) = t and Pr(c | āb̄) = u.

A multiply connected network includes one or more loops. We say that a loop is simple if none of its nodes are shared by another loop. A node that has two or more incoming arcs on a loop will be called a convergence node of this loop. Node C is the only convergence node in the network from Fig. 1. Pearl's propagation algorithm (Pearl 1988) was designed for exact inference with singly connected Bayesian networks. The term loopy propagation used throughout the literature refers to the application of this algorithm to networks with loops.

¹ Variables are denoted by upper-case letters (A), and their values by indexed lower-case letters (aᵢ); sets of variables by bold-face upper-case letters (A) and their instantiations by bold-face lower-case letters (a). The upper-case letter is also used to indicate the whole range of values of a variable or a set of variables.
² The terms node and variable will be used interchangeably.

3 The Convergence Error in Bayesian Networks

When applied to a singly connected Bayesian network, Pearl's propagation algorithm results in exact probabilities. When applied to a multiply connected network, however, the computed probabilities may include errors. In previous work we distinguished between two different types of error (Bolt and van der Gaag 2004). The first type of error arises when messages are being passed on in a loop repetitively and old information is mistaken for new by the variables involved. The error that thus arises will be termed a cycling error. A cycling error can only occur if for each convergence node of a loop either the node itself or one of its descendents is observed. The second type of error originates from the combination of causal messages by the
convergence node of a loop. A convergence node combines the messages from its parents as if the parents are independent. They may be dependent, however, and by assuming independence, a convergence error may be introduced. A convergence error may already emerge in a network in its prior state. In the sequel we will denote the probabilities that result upon loopy propagation with Pr̃ to distinguish them from the exact probabilities which are denoted by Pr.

Moreover, we studied the prior convergence error. Below, we apply our analysis to the example network from Figure 1. For the network in its prior state, the loopy-propagation algorithm establishes

    Pr̃(c) = ∑_{A,B} Pr(c | AB) Pr(A) Pr(B)

as probability for node C. Nodes A and B, however, may be dependent and the exact probability Pr(c) equals

    Pr(c) = ∑_{A,B} Pr(c | AB) Pr(B | A) Pr(A)

The difference between the exact and approximate probabilities is

    Pr(c) − Pr̃(c) = x · y · z

where

    x = Pr(c | ab) − Pr(c | ab̄) − Pr(c | āb) + Pr(c | āb̄)
    y = Pr(b | a) − Pr(b | ā)
    z = Pr(a) − Pr(a)²

The factors that govern the size of the prior convergence error in the network from Figure 1 are illustrated in Figure 2; for the construction of this figure we used the following probabilities: r = 1, s = 0, t = 0, u = 1, p = 0.4, q = 0.1 and x = 0.5.

Figure 2: The probability of c as a function of Pr(a) and Pr(b), assuming independence of the parents A and B of C (surface), and as a function of Pr(a) (line segment).

The line segment captures the exact probability Pr(c) as a function of Pr(a); note that each specific Pr(a) corresponds with a specific Pr(b). The surface captures Pr̃(c) as a function of Pr(a) and Pr(b). The convergence error equals the distance between the point on the line segment that matches the probability Pr(a) from the network and its orthogonal projection on the surface. For Pr(a) = 0.5, more specifically, the difference between Pr(c) and Pr̃(c) is indicated by the vertical dotted line segment and equals 0.65 − 0.5 = 0.15. Informally speaking:

• the more curved the surface is, the larger the distance between a point on the line segment and its projection on the surface can be; the curvature of the surface is reflected by the factor x;

• the distance between a point on the segment and its projection on the surface depends on the orientation of the line segment; the orientation of the line segment is reflected by the factor y;

• the distance between a point on the line segment and its projection on the surface depends on its position on the line segment; this position is reflected by the factor z.

We recall that the convergence error originates from combining messages from dependent nodes as if they were independent. The factors y and z now in essence capture the degree of dependence between the nodes A and B; the factor x indicates to which extent this dependence can affect the computed probabilities.
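As an illustration, the error decomposition above can be checked numerically for the parameterisation of Figure 2. The sketch below is our own (plain Python; the variable names are ours, not the paper's): it computes the exact Pr(c), the probability that loopy propagation yields in the prior state, and the factors x, y and z.

```python
# Prior convergence error in the example network of Figure 1,
# with the parameters of Figure 2: r=1, s=0, t=0, u=1, p=0.4, q=0.1, Pr(a)=0.5.
r, s, t, u = 1.0, 0.0, 0.0, 1.0   # Pr(c|ab), Pr(c|ab'), Pr(c|a'b), Pr(c|a'b')
p, q = 0.4, 0.1                   # Pr(b|a), Pr(b|a')
pa = 0.5                          # Pr(a)

# Exact: Pr(c) = sum_{A,B} Pr(c|AB) Pr(B|A) Pr(A)
exact = pa * (p * r + (1 - p) * s) + (1 - pa) * (q * t + (1 - q) * u)

# Loopy propagation in the prior state combines the parent messages as if
# A and B were independent: Pr~(c) = sum_{A,B} Pr(c|AB) Pr(A) Pr(B)
pb = pa * p + (1 - pa) * q        # marginal Pr(b)
approx = (pa * pb * r + pa * (1 - pb) * s
          + (1 - pa) * pb * t + (1 - pa) * (1 - pb) * u)

# Decomposition of the error: Pr(c) - Pr~(c) = x * y * z
x = r - s - t + u                 # curvature factor
y = p - q                         # orientation factor
z = pa - pa ** 2                  # position factor

print(exact, approx, exact - approx, x * y * z)
```

Up to floating-point rounding, this yields Pr(c) = 0.65 and Pr̃(c) = 0.5, and confirms that the error 0.65 − 0.5 = 0.15 equals x · y · z = 2 · 0.3 · 0.25.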



4 Markov Networks

Like a Bayesian network, a Markov network uniquely defines a joint probability distribution over a set of statistical variables V. The variables are represented by the nodes of an undirected graph and (conditional) independency between the variables is captured by the graph's set of edges; a variable is (conditionally) independent of every other variable given its Markov blanket. The strength of the probabilistic relationships is captured by clique potentials. Cliques C are subsets of nodes that are completely connected; ∪C = V. For each clique, a potential function ψ_C(A_C) is given that assigns a non-negative real number to each configuration of the nodes A of C. The joint probability is presented by:

    Pr(V) = 1/Z · ∏_C ψ_C(A_C)

where Z = Σ_V ∏_C ψ_C(A_C) is a normalising factor, ensuring that Σ_V Pr(V) = 1. A pairwise Markov network is a Markov network with cliques of maximal two nodes.

For pairwise Markov networks, an algorithm can be specified that is functionally equivalent to Pearl's propagation algorithm (Weiss 2000). In this algorithm, in each time step, every node sends a probability vector to each of its neighbours. The probability distribution of a node is obtained by combining the steady state values of the messages from its neighbours.

In a pairwise Markov network, the transition matrices M^{AB} and M^{BA} can be associated with any edge between nodes A and B:

    M^{AB}_{ji} = ψ(A = a_i, B = b_j)

Note that matrix M^{BA} equals (M^{AB})^T.

Figure 3: An example pairwise Markov network and its transition matrices.

Example 1 Suppose we have a Markov network with two binary nodes A and B, and suppose that for this network the potentials ψ(ab) = p, ψ(ab̄) = q, ψ(āb) = r and ψ(āb̄) = s are specified, as in Figure 3. We then associate the transition matrix

    M^{AB} = [ p  r ]
             [ q  s ]

with the link from A to B and its transpose

    M^{BA} = [ p  q ]
             [ r  s ]

with the link from B to A. □

The propagation algorithm now is defined as follows. The message from A to B equals M^{AB} v after normalisation, where v is the vector that results from the component wise multiplication of all message vectors sent to A except for the message vector sent by B. The procedure is initialised with all message vectors set to (1, 1, ..., 1). Observed nodes do not receive messages and they always transmit a vector with 1 for the observed value and zero for all other values. The probability distribution for a node is obtained by combining all incoming messages, again by component wise multiplication and normalisation.

5 Converting a Bayesian Network into a Pairwise Markov Network

In this section, the conversion of a Bayesian network into an equivalent pairwise Markov network is described (Weiss 2000). In the conversion of the Bayesian network into a Markov network, for any node with multiple parents, an auxiliary node is constructed into which the common parents are clustered. This auxiliary node is connected to the child and its parents and the original arcs between child and parents are removed. Furthermore, all arc directions in the network are dropped. The clusters are all pairs of connected nodes. For a cluster with an auxiliary node and a former parent node, the potential is set to 1 if the nodes have a similar value for the former parent node and to 0 otherwise. For the other clusters, the potentials are equal to the conditional probabilities of the former child given the former parent. Furthermore, the prior probability of a former root node is incorporated by multiplication into one of the potentials of the clusters in which it takes part.
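The message-passing scheme described above can be sketched in code. The following is a minimal NumPy illustration of our own (all names and data structures are assumptions, not the authors' implementation); on a loop-free network the computed distributions are exact.

```python
import numpy as np

def propagate(psi, n_values, observed=None, iters=50):
    """Pearl-style propagation in a pairwise Markov network.

    psi: dict mapping an edge (A, B) to its potential matrix,
         psi[(A, B)][i, j] = potential of (A = a_i, B = b_j).
    n_values: dict mapping each node to its number of values.
    observed: dict mapping observed nodes to their observed value index.
    """
    observed = observed or {}
    M, neighbors = {}, {n: [] for n in n_values}
    for (a, b), P in psi.items():
        M[(a, b)] = P.T          # M^{AB}[j, i] = psi(a_i, b_j)
        M[(b, a)] = P            # M^{BA} is the transpose of M^{AB}
        neighbors[a].append(b)
        neighbors[b].append(a)
    # initialise all messages to (1, 1, ..., 1)
    msg = {(a, b): np.ones(n_values[b]) for a in neighbors for b in neighbors[a]}
    for _ in range(iters):
        new = {}
        for a in neighbors:
            for b in neighbors[a]:
                if a in observed:
                    # observed nodes always transmit an indicator vector
                    v = np.zeros(n_values[a])
                    v[observed[a]] = 1.0
                else:
                    # component wise product of incoming messages, except B's
                    v = np.ones(n_values[a])
                    for c in neighbors[a]:
                        if c != b:
                            v = v * msg[(c, a)]
                out = M[(a, b)] @ v
                new[(a, b)] = out / out.sum()
        msg = new
    # node beliefs: product of all incoming messages, normalised
    beliefs = {}
    for a in neighbors:
        v = np.ones(n_values[a])
        for c in neighbors[a]:
            v = v * msg[(c, a)]
        beliefs[a] = v / v.sum()
    return beliefs

# a loop-free example (binary chain A - B - C), where the scheme is exact
psi = {("A", "B"): np.array([[1., 2.], [3., 4.]]),
       ("B", "C"): np.array([[5., 1.], [1., 5.]])}
beliefs = propagate(psi, {"A": 2, "B": 2, "C": 2})
```

For the chain above, brute-force summation of the joint gives Pr(A) = (0.3, 0.7), which the propagated beliefs reproduce.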

46 J. H. Bolt
Figure 4: A pairwise Markov network that represents the same joint probability distribution as the Bayesian network from Figure 1. Its cluster potentials include ψ(ab) = px, ψ(ab̄) = (1−p)x, ψ(āb) = q(1−x), ψ(āb̄) = (1−q)(1−x); indicator potentials ψ(A, X) and ψ(B, X) that are 1 for consistent value combinations and 0 otherwise; and ψ(a′b′, c) = r, ψ(a′b̄′, c) = s, ψ(ā′b′, c) = t, ψ(ā′b̄′, c) = u, with complements 1−r, 1−s, 1−t, 1−u for c̄.

Example 2 The Bayesian network from Figure 1 can be converted into the pairwise Markov network from Figure 4 with clusters AB, AX, BX and XC. Node X is composed of A′ and B′ and has the values a′b′, a′b̄′, ā′b′ and ā′b̄′. Given that the prior probability of root node A is incorporated in the potential of cluster AB, the network has the following potentials: ψ(AB) = Pr(B | A) · Pr(A); ψ(XC) = Pr(C | AB); ψ(AX) = 1 if A′ = A and 0 otherwise; and ψ(BX) = 1 if B′ = B and 0 otherwise. □

6 The Analysis of Loopy Propagation in Markov Networks

Weiss (2000) analysed the performance of the loopy-propagation algorithm for Markov networks with a single loop and related the approximate probabilities found for the nodes in the loop to their exact probabilities. He noted that in the application of the algorithm messages will cycle in the loop and errors will emerge as a result of the double counting of information. The main idea of his analysis is that for a node in the loop, two reflexive matrices can be derived: one for the messages cycling clockwise and one for the messages cycling counterclockwise. The probability distribution computed by the loopy-propagation algorithm for the loop node in the steady state now can be inferred from the principal eigenvectors of the reflexive matrices plus the other incoming vectors. Subsequently, he showed that the reflexive matrices also include the exact probability distribution and used those two observations to derive an analytical relationship between the approximated and the exact probabilities.

Figure 5: An example Markov network with just one loop.

In more detail, Weiss considered a Markov network with a single loop of n nodes L1 ... Ln and, connected to each node in the loop, an observed node O1 ... On, as shown in Figure 5. During propagation, a node Oi will constantly send the same message into the loop. This vector is one of the columns of the transition matrix M^{OiLi}. In order to enable the incorporation of this message into the reflexive matrices, this vector is transformed into a diagonal matrix D^i, with the vector elements on the diagonal. For example, suppose that

    M^{OiLi} = [ p  r ]
               [ q  s ]

and suppose that the observation Oi = o^i_1 is made; then

    D^i = [ p  0 ]
          [ 0  q ]

Furthermore, M^1 is the transition matrix for the message from L1 to L2 and M^{1T} is the transition matrix for the message from L2 to L1, etc. The reflexive matrix C for the transition of a counterclockwise message from node L1 back to itself is defined as M^{1T} D^2 ... M^{(n−1)T} D^n M^{nT} D^1. The message that L2 sends to L1 in the steady state now is in the direction of the principal eigenvector of C. The reflexive matrix C2 for the transition of a clockwise message from node L1 back to itself is defined as M^n D^n M^{n−1} D^{n−1} ... M^1 D^1. The message that node Ln sends to L1 in the steady state is in the direction of the principal eigenvector of C2. Component wise multiplication of



the two principal eigenvectors and the message from O1 to L1, and normalisation of the resulting vector, yields a vector of which the components equal the approximated values for L1 in the steady state. Furthermore, Weiss proved that the elements on the diagonals of the reflexive matrices equal the correct probabilities of the relevant value of L1 and the evidence; for example, C_{1,1} equals Pr(l^1_1, o). Subsequently, he related the exact probabilities for a node A in the loop to its approximate probabilities by

    Pr(a_i) = ( λ_1 · ˜Pr(a_i) + Σ_{j≥2} P_{ij} λ_j P⁻¹_{ji} ) / Σ_j λ_j        (1)

in which P is a matrix that is composed of the eigenvectors of C, with the principal eigenvector in the first column, and λ_1 ... λ_j are the eigenvalues of the reflexive matrices, with λ_1 the maximum eigenvalue. We note that from this formula it follows that correct probabilities will be found if λ_1 equals 1 and all other eigenvalues equal 0.

In the above analysis, all nodes Oi are considered to be observed. Note that given unobserved nodes outside the loop, the analysis is essentially the same. In that case a transition matrix

    M^{OiLi} = [ p  r ]
               [ q  s ]

will result in the diagonal matrix

    D^i = [ p+r   0  ]
          [  0   q+s ]

7 The Convergence Error in Markov Networks

As discussed in Section 3, in Bayesian networks a distinction could be made between the cycling error and the convergence error. In the previous section it appeared that for Markov networks such a distinction does not exist. All errors result from the cycling of information and, at first sight, there is no equivalent for the convergence error. However, any Bayesian network can be converted into an equivalent pairwise Markov network on which an algorithm equivalent to the loopy-propagation algorithm can be used. In this section, we investigate this apparent incompatibility of results and indicate how the convergence error yet is embedded in the analysis of loopy propagation in Markov networks. We do so by constructing the simplest situation in which a convergence error may occur, that is, the Bayesian network from Figure 1 in its prior state, and analysing this situation in the equivalent Markov network. The focus thereby is on the node that replaces the convergence node in the loop. We then argue that the results have a more general validity.

Consider the Bayesian network from Figure 1. In its prior state, there is no cycling of information, and exact probabilities will be found for nodes A and B. In node C, however, a convergence error may emerge. The network can be converted into the pairwise Markov network from Figure 4. For this network we find the transition matrices

    M^{XA} = [ 1  1  0  0 ]        M^{XB} = [ 1  0  1  0 ]
             [ 0  0  1  1 ]                 [ 0  1  0  1 ]

    M^{XC} = [  r    s    t    u  ]        M^{AB} = [  px      q(1−x)     ]
             [ 1−r  1−s  1−t  1−u ]                 [ (1−p)x   (1−q)(1−x) ]

and their transposes. In the prior state of the network, C will send the message M^{CX} · (1, 1) = (1, 1, 1, 1) to X. In order to enable the incorporation of this message into the reflexive loop matrices, it is transformed into D^{CX}, which, in this case, is the 4×4 identity matrix.

We first evaluate the performance of the loopy propagation algorithm for the regular loop node A. This node has the following reflexive matrices for its clockwise and counterclockwise messages respectively:

    M^A = M^{XA} D^{CX} M^{BX} M^{AB} = [ x  1−x ]
                                        [ x  1−x ]

with eigenvalues 1 and 0 and principal eigenvector (1, 1), and

    M^A = M^{BA} M^{XB} D^{CX} M^{AX} = [  x    x  ]
                                        [ 1−x  1−x ]

with eigenvalues 1 and 0 and principal eigenvector (x, 1−x). Note that the correct probabilities for node A indeed are found on the diagonal of the reflexive matrices. Furthermore, λ_1 = 1 and λ_2 = 0 and therefore correct approximations are

expected. We indeed find that the approximations (1 · x, 1 · (1 − x)) equal the exact probabilities. Note also that, as expected, the messages from node A back to itself do not change any more after the first cycle. As in the Bayesian network, for node A no cycling of information occurs in the Markov network. For node B a similar evaluation can be made.

We now turn to the convergence node C. In the Bayesian network in its prior state a convergence error may emerge in this node. In the conversion of the Bayesian network into the Markov network, the convergence node C is placed outside the loop and the auxiliary node X is added. For X, the following reflexive matrices are computed for the clockwise and counterclockwise messages from node X back to itself respectively:

    M^X = M^{BX} M^{AB} M^{XA} D^{CX} =

    [  px       px      q(1−x)      q(1−x)     ]
    [ (1−p)x   (1−p)x   (1−q)(1−x)  (1−q)(1−x) ]
    [  px       px      q(1−x)      q(1−x)     ]
    [ (1−p)x   (1−p)x   (1−q)(1−x)  (1−q)(1−x) ]

with eigenvalues 1, 0, 0, 0; principal eigenvector ((px + q(1−x))/((1−p)x + (1−q)(1−x)), 1, (px + q(1−x))/((1−p)x + (1−q)(1−x)), 1) and other eigenvectors (0, 0, 1, −1), (1, −1, 0, 0) and (0, 0, 0, 0), and

    M^X = M^{AX} M^{BA} M^{XB} D^{CX} =

    [  px      (1−p)x       px      (1−p)x     ]
    [  px      (1−p)x       px      (1−p)x     ]
    [ q(1−x)   (1−q)(1−x)  q(1−x)   (1−q)(1−x) ]
    [ q(1−x)   (1−q)(1−x)  q(1−x)   (1−q)(1−x) ]

with eigenvalues 1, 0, 0, 0; principal eigenvector (x/(1−x), x/(1−x), 1, 1) and other eigenvectors (0, 1, 0, −1), (1, 0, −1, 0) and (0, 0, 0, 0).

On the diagonal of the reflexive matrices of X we find the probabilities Pr(AB). As the correct probabilities for a loop node are found on the diagonal of its reflexive matrices, these probabilities can be considered to be the exact probabilities for node X. The normalised vector of the component wise multiplication of the principal eigenvectors of the two reflexive matrices of X equals the vector with the normalised probabilities Pr(A) · Pr(B). Likewise, these probabilities can be considered to be the approximate probabilities for node X.

A first observation is that λ_1 equals 1 and the other eigenvalues equal 0, but the exact and the approximate probabilities of node X may differ. This is not consistent with Equation 1. The explanation is that for node X the matrix P is singular and therefore the matrix P⁻¹, which is needed in the derivation of the relationship between the exact and approximate probabilities, does not exist. Equation 1 thus is not valid for the auxiliary node X. We note furthermore that the messages from node X back to itself may still change after the first cycle. We therefore find that, although in the Bayesian network there is no cycling of information, in the Markov network, for node X, information may cycle, resulting in errors computed for its probabilities.

The probabilities computed by the loopy-propagation algorithm for node C equal the normalised product M^{XC} v, where v is the vector with the approximate probabilities found at node X. It can easily be seen that these approximate probabilities equal the approximate probabilities found in the equivalent Bayesian network. Furthermore we observe that if node X would send its exact probabilities, that is, Pr(AB), exact probabilities for node C would be computed. In the Markov network we thus may consider the convergence error to be founded in the cycling of information for the auxiliary node X.

In Section 3, a formula for the size of the prior convergence error in the network from Figure 1 is given. We there argued that this size is determined by the factors y and z, which capture the degree of dependency between the parents of the convergence node, and the factor x, which indicates to which extent the dependence between nodes A and B can affect the computed probabilities. In this formula, x is composed of the conditional probabilities of node C. In the analysis in the Markov network we have a similar finding. The effect of the degree of dependence between A and B is reflected in the difference between the exact and the approximate probabilities found for node X. The effect of the conditional probabilities at node C emerges in the transition of the message vector from X to C.
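The analysis of the auxiliary node X can be reproduced numerically. The following NumPy sketch is ours (variable names are assumptions); it builds the transition matrices from the text, with the values of X ordered as a′b′, a′b̄′, ā′b′, ā′b̄′ and the parameter values used in Section 3.

```python
import numpy as np

p, q, x = 0.4, 0.1, 0.5            # Pr(b|a), Pr(b|ā), Pr(a) of the example
r, s, t, u = 1.0, 0.0, 0.0, 1.0    # Pr(c | AB) for the four parent configurations

M_XA = np.array([[1., 1., 0., 0.], [0., 0., 1., 1.]])
M_XB = np.array([[1., 0., 1., 0.], [0., 1., 0., 1.]])
M_XC = np.array([[r, s, t, u], [1 - r, 1 - s, 1 - t, 1 - u]])
M_AB = np.array([[p * x, q * (1 - x)], [(1 - p) * x, (1 - q) * (1 - x)]])
D_CX = np.eye(4)                   # C's message in the prior state

# reflexive matrices of node X, clockwise and counterclockwise
M_cw = M_XB.T @ M_AB @ M_XA @ D_CX
M_ccw = M_XA.T @ M_AB.T @ M_XB @ D_CX

def principal(M):
    """Normalised eigenvector for the largest eigenvalue."""
    vals, vecs = np.linalg.eig(M)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# the diagonals hold the exact joint probabilities Pr(AB) ...
joint_AB = np.array([p * x, (1 - p) * x, q * (1 - x), (1 - q) * (1 - x)])
assert np.allclose(np.diag(M_cw), joint_AB)
assert np.allclose(np.diag(M_ccw), joint_AB)

# ... while the component wise product of the principal eigenvectors
# yields the independence approximation Pr(A)Pr(B)
approx_X = principal(M_cw) * principal(M_ccw)
approx_X /= approx_X.sum()

approx_c = M_XC @ approx_X         # loopy approximation for node C
exact_c = M_XC @ joint_AB          # exact probabilities for node C
```

With the values above this gives ˜Pr(c) = 0.5 against the exact Pr(c) = 0.65, i.e. exactly the prior convergence error x · y · z = 0.15 of the formula in Section 3.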



We just considered the small example network from Figure 1. Note, however, that for any prior binary Bayesian network with just simple loops, the situation for any loop can be summarised to the situation in Figure 1 by marginalisation over the relevant variables. The results with respect to the manifestation of the convergence error by the cycling of information and the invalidity of Equation 1 for the auxiliary node, found for the network from Figure 1, therefore apply to any prior binary Bayesian network with just simple loops.³,⁴

³ Given a loop with multiple convergence nodes, in the prior state of the network, the parents of the convergence nodes are independent and effectively no loop is present.
⁴ Two loops in sequence may result in incorrect probabilities entering the second loop. The reflexive matrices, however, will have a similar structure as the reflexive matrices derived in this section.

8 Discussion

Loopy propagation refers to the application of Pearl's propagation algorithm for exact reasoning with singly connected Bayesian networks to networks with loops. In previous research we identified two different types of error that may arise in the probabilities computed by the algorithm. Cycling errors result from the cycling of information and arise in loop nodes as soon as, for each convergence node of the loop, either the node itself or one of its descendants is observed. Convergence errors result from combining information from dependent nodes as if they were independent and may arise at convergence nodes. This second error type is found both in a network's prior and posterior state. Loopy propagation has also been studied by the analysis of the performance of an equivalent algorithm in pairwise Markov networks with just a simple loop. According to this analysis all errors result from the cycling of information and at first sight there is no equivalent for the convergence error. We investigated how the convergence error yet is embedded in the analysis of loopy propagation in Markov networks. We did so by constructing the simplest situation in which a convergence error may occur, and analysing this situation in the equivalent Markov network. We found that the convergence error in the Bayesian network is converted to a cycling error in the equivalent Markov network. Furthermore, we found that the prior convergence error is characterised by the fact that the relationship between the exact probabilities and the approximate probabilities yielded by loopy propagation, as established by Weiss, cannot be derived for the loop node in which this error occurs. We then argued that these results are valid for binary Bayesian networks with just simple loops in general.

Acknowledgements

This research was partly supported by the Netherlands Organisation for Scientific Research (NWO).

References

J.H. Bolt, L.C. van der Gaag. 2004. The convergence error in loopy propagation. Paper presented at the International Conference on Advances in Intelligent Systems: Theory and Applications.

G.F. Cooper. 1990. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42:393–405.

P. Dagum, M. Luby. 1993. Approximate inference in Bayesian networks is NP-hard. Artificial Intelligence, 60:141–153.

K. Murphy, Y. Weiss, M. Jordan. 1999. Loopy belief propagation for approximate inference: an empirical study. In 15th Conference on Uncertainty in Artificial Intelligence, pages 467–475.

J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Palo Alto.

Y. Weiss. 2000. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12:1–41.

Y. Weiss, W.T. Freeman. 2001. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13:2173–2200.

Preprocessing the MAP problem
Janneke H. Bolt and Linda C. van der Gaag
Department of Information and Computing Sciences, Utrecht University
P.O. Box 80.089, 3508 TB Utrecht, The Netherlands
{janneke,linda}@cs.uu.nl

Abstract
The MAP problem for Bayesian networks is the problem of finding for a set of variables an
instantiation of highest posterior probability given the available evidence. The problem is
known to be computationally infeasible in general. In this paper, we present a method for
preprocessing the MAP problem with the aim of reducing the runtime requirements for
its solution. Our method exploits the concepts of Markov and MAP blanket for deriving
partial information about a solution to the problem. We investigate the practicability of
our preprocessing method in combination with an exact algorithm for solving the MAP
problem for some real Bayesian networks.

1 Introduction

Upon reasoning with a Bayesian network, often a best explanation is sought for a given set of observations. Given the available evidence, such an explanation is an instantiation of highest probability for some subset of the network's variables. The problem of finding such an instantiation has an unfavourable computational complexity. If the subset of variables for which a most likely instantiation is to be found includes just a single variable, then the problem, which is known as the Pr problem, is NP-hard in general. A similar observation holds for the MPE problem, in which an instantiation of highest probability is sought for all unobserved variables. These two problems can be solved in polynomial time, however, for networks of bounded treewidth. If the subset of interest is a non-singleton proper subset of the set of unobserved variables, on the other hand, the problem, which is then known as the MAP problem, remains NP-hard even for networks for which the other two problems can be feasibly solved (Park and Darwiche, 2002).

By performing inference in a Bayesian network under study and establishing the most likely value for each variable of interest separately, an estimate for a solution to the MAP problem may be obtained. There is no guarantee in general, however, that the values in the resulting joint instantiation indeed correspond to the values of the variables in a solution to the MAP problem. In this paper, we now show that, for some of the variables of interest, the computation of marginal posterior probabilities may in fact provide exact information about their value in a solution to the MAP problem. We show more specifically that, by building upon the concept of Markov blanket, some of the variables may be fixed to a particular value; for some of the other variables of interest, moreover, values may be excluded from further consideration. We further introduce the concept of MAP blanket that serves to provide similar information.

Deriving partial information about a solution to the MAP problem by building upon the concepts of Markov and MAP blanket can be exploited as a preprocessing step before the problem is actually solved with any available algorithm. The derived information in essence serves to reduce the search space for the problem and thereby reduces the algorithm's runtime requirements. We performed an initial study of the practicability of our preprocessing method by solving MAP problems for real networks using an exact branch-and-bound algorithm, and found that preprocessing can be profitable.

The paper is organised as follows. In Section 2, we provide some preliminaries on the MAP problem. In Section 3, we present two propositions that constitute the basis of our preprocessing method. In Section 4, we provide some preliminary results about the practicability of our preprocessing method. The paper is ended in Section 5 with our concluding observations.

2 The MAP problem

Before reviewing the MAP problem, we introduce our notational conventions. A Bayesian network is a model of a joint probability distribution Pr over a set of stochastic variables, consisting of a directed acyclic graph and a set of conditional probability distributions. We denote variables by upper-case letters (A) and their values by (indexed) lower-case letters (a_i); sets of variables are indicated by bold-face upper-case letters (A) and their instantiations by bold-face lower-case letters (a). Each variable is represented by a node in the digraph; (conditional) independence between the variables is encoded by the digraph's set of arcs according to the d-separation criterion (Pearl, 1988). The Markov blanket B of a variable A consists of its neighbours in the digraph plus the parents of its children. Given its Markov blanket, the variable is independent of all other variables in the network. The strengths of the probabilistic relationships between the variables are captured by conditional probability tables that encode for each variable A the conditional distributions Pr(A | p(A)) given its parents p(A).

Upon reasoning with a Bayesian network, often a best explanation is sought for a given set of observations. Given evidence o for a subset of variables O, such an explanation is an instantiation of highest probability for some subset M of the network's variables. The set M is called the MAP set for the problem; its elements are called the MAP variables. An instantiation m of highest probability to the set M is termed a MAP solution; the value that is assigned to a MAP variable in a solution m is called its MAP value. Dependent upon the size of the MAP set, we distinguish between three different types of problem. If the MAP set includes just a single variable, the problem of finding the best explanation for a set of observations reduces to establishing the most likely value for this variable from its marginal posterior probability distribution. This problem is called the Pr problem as it essentially amounts to performing standard inference (Park and Darwiche, 2001). In the second type of problem, the MAP set includes all non-observed variables. This problem is known as the most probable explanation or MPE problem. In this paper, we are interested in the third type of problem, called the MAP problem, in which the MAP set is a non-singleton proper subset of the set of non-observed variables of the network under study. This problem amounts to finding an instantiation of highest probability for a designated set of variables of interest.

We would like to note that the MAP problem is more complex in essence than the other two problems. The Pr problem and the MPE problem both are NP-hard in general and are solvable in polynomial time for Bayesian networks of bounded treewidth. The MAP problem is NP^PP-hard in general and remains NP-hard for these restricted networks (Park, 2002).

3 Fixing MAP values

By performing inference in a Bayesian network and solving the Pr problem for each MAP variable separately, an estimate for a MAP solution may be obtained. There is no guarantee in general, however, that the value with highest marginal probability for a variable corresponds with its value in a MAP solution. We now show that, for some variables, the computation of marginal probabilities may in fact provide exact information about their MAP values.

The first property that we will exploit in the sequel builds upon the concept of Markov blanket. We consider a MAP variable H and its associated Markov blanket. If a specific value h_i of H has highest probability in the marginal distribution over H for all possible instantia-



tions of the blanket, then h_i will be the value of H in a MAP solution for any MAP problem including H. Alternatively, if some value h_j never has highest marginal probability, then this value cannot be included in any solution.

Proposition 1. Let H be a MAP variable in a Bayesian network and let B be its Markov blanket. Let h_i be a specific value of H.

1. If Pr(h_i | b) ≥ Pr(h_k | b) for all values h_k of H and all instantiations b of B, then h_i is the value of H in a MAP solution for any MAP problem that includes H.

2. If there exist values h_k of H with Pr(h_i | b) < Pr(h_k | b) for all instantiations b of B, then h_i is not the MAP value of H in any solution to a MAP problem that includes H.

Proof. We prove the first property stated in the proposition; the proof of the second property builds upon similar arguments.

We consider an arbitrary MAP problem with the MAP set {H} ∪ M and the evidence o for the observed variables O. Finding a solution to the problem amounts to finding an instantiation to the MAP set that maximises the posterior probability Pr(H, M | o). We have that

    Pr(h_i, M | o) = Σ_b Pr(h_i | b, o) · Pr(M | b, o) · Pr(b | o)

For the posterior probability Pr(h_k, M | o) an analogous expression is found. Now suppose that for the value h_i of H we have that Pr(h_i | b) ≥ Pr(h_k | b) for all values h_k of H and all instantiations b of B. Since B is the Markov blanket of H, we have that Pr(h_i | b) ≥ Pr(h_k | b) implies Pr(h_i | b, o) ≥ Pr(h_k | b, o) for all values h_k of H and all instantiations b of B. We conclude that Pr(h_i, M | o) ≥ Pr(h_k, M | o) for all h_k. The value h_i of H thus is included in a solution to the MAP problem under study. Since the above considerations are algebraically independent of the MAP variables M and of the evidence o, this property holds for any MAP problem that includes the variable H. □

Figure 1: An example directed acyclic graph, over the variables I, A, B, H, C, D, E, F and G.

The above proposition provides for preprocessing a MAP problem. Prior to actually solving the problem, some of the MAP variables may be fixed to a particular value using the first property. With the second property, moreover, various values of the MAP variables may be excluded from further consideration. By building upon the proposition, therefore, the search space for the MAP problem is effectively reduced. We illustrate this with an example.

Example 1. We consider a Bayesian network with the graphical structure from Figure 1. Suppose that H is a ternary MAP variable with the values h_1, h_2 and h_3. The Markov blanket B of H consists of the three variables A, C and D. Now, if for any instantiation b of these variables we have that Pr(h_1 | b) ≥ Pr(h_2 | b) and Pr(h_1 | b) ≥ Pr(h_3 | b), then h_1 occurs in a MAP solution for any MAP problem that includes H. By fixing the variable H to the value h_1, the search space of any such problem is reduced by a factor 3. We consider, as an example, the MAP set {H, E, G, I} of ternary variables. Without preprocessing, the search space includes 3⁴ = 81 possible instantiations. By fixing the variable H to h_1, the search space of the problem reduces to 3³ = 27 instantiations. □

Establishing whether or not the properties from Proposition 1 can be used for a specific MAP variable requires a number of computations that is exponential in the size of the variable's Markov blanket. The computations required, however, are highly local. A single restricted inward propagation for each instantiation of the Markov blanket to the variable of interest suffices. Since the proposition moreover holds for



any MAP problem that includes the variable, however, the computational burden involved is amortised over all future MAP computations.

The second proposition that we will exploit for preprocessing a MAP problem builds upon the new concept of MAP blanket. We consider a problem with the MAP variables {H} ∪ M. A MAP blanket K of H now is a minimal subset K ⊆ M such that K d-separates H from M \ K given the available evidence. Now, if a specific value h_i of H has highest probability in the marginal distribution over H given the evidence for all possible instantiations of K, then h_i will be the MAP value of H in a solution to the MAP problem under study. Alternatively, if some value h_j never has highest marginal probability, then this value cannot be a MAP value in any of the problem's solutions.

Proposition 2. Let {H} ∪ M be the MAP set of a given MAP problem for a Bayesian network and let o be the evidence that is available for the observed variables O. Let K be the MAP blanket for the variable H given o, and let h_i be a specific value of H.

1. If Pr(h_i | k, o) ≥ Pr(h_k | k, o) for all values h_k of H and all instantiations k to K, then h_i is the MAP value of H in a solution to the given MAP problem.

2. If there exist values h_k of H with Pr(h_i | k, o) < Pr(h_k | k, o) for all instantiations k to K, then h_i is not the MAP value of H in any solution to the given MAP problem.

The proof of the proposition is relatively straightforward, building upon similar arguments as the proof of Proposition 1.

The above proposition again provides for preprocessing a MAP problem. Prior to actually solving the problem, the values of some variables may be fixed and other values may be excluded from further consideration. The proposition therefore again serves to effectively reduce the search space for the problem under study. While the information derived from Proposition 1 holds for any MAP problem that includes H, Proposition 2 provides information for any problem in which H has a subset of K for its MAP blanket and with matching evidence for the observed variables that are not d-separated from H by K. The information derived from Proposition 2, therefore, is more restricted in scope than that from Proposition 1.

We illustrate the application of Proposition 2 for our example network.

Example 2. We consider again the MAP set {H, E, G, I} for the Bayesian network from Example 1. In the absence of any evidence, the MAP blanket of the variable H includes just the variable I. Now, if for each value i of I we have that Pr(h_1 | i) ≥ Pr(h_2 | i) and Pr(h_1 | i) ≥ Pr(h_3 | i), then the value h_1 occurs in a solution to the given MAP problem. The search space for actually solving the problem thus again is reduced from 81 to 27. □

Establishing whether or not the properties from Proposition 2 can be used for a specific MAP variable requires a number of computations that is exponential in the size of the variable's MAP blanket. The size of this blanket is strongly dependent on the network's connectivity and on the location of the various MAP variables and observed variables in the network. The MAP blanket can in fact be larger in size than the Markov blanket of the variable. The computations required, moreover, are less local than those required for Proposition 1 and can involve full inward propagations to the variable of interest. Since the proposition in addition applies to just a restricted class of MAP problems, the computational burden involved in its verification can be amortised over other MAP computations to a lesser extent than that involved in the verification of Proposition 1.

The general idea underlying the two propositions stated above is the same. The idea is to verify whether or not a particular value of H can be fixed or excluded as a MAP value by investigating H's marginal probability distributions given all possible instantiations of a collection of variables surrounding H. Proposition 1

54 J. H. Bolt and L. C. van der Gaag
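The check shared by both propositions, enumerating the instantiations of a blanket and comparing H's marginals, can be sketched as follows. This is illustrative Python only, not the authors' implementation; `marginal(h, k)` is a hypothetical stand-in for whatever inference routine computes Pr(H = h | K = k, o):

```python
from itertools import product

def preprocess_value_candidates(values_H, blanket_domains, marginal):
    """Classify each value of H: return a value that may be fixed (or None)
    and the set of values that can be excluded.

    values_H        -- the values of the MAP variable H
    blanket_domains -- one list of values per blanket variable
    marginal        -- marginal(h, k): Pr(H = h | K = k, o), assumed given
    """
    always_max = set(values_H)   # values maximal for every instantiation k
    ever_max = set()             # values maximal for at least one k
    for k in product(*blanket_domains):
        probs = {h: marginal(h, k) for h in values_H}
        best = max(probs.values())
        maxima = {h for h, p in probs.items() if p == best}
        always_max &= maxima
        ever_max |= maxima
    # Any always-maximal value may be fixed (item 1); values that are
    # never maximal are excluded (item 2).
    fixed = next(iter(always_max)) if always_max else None
    excluded = set(values_H) - ever_max
    return fixed, excluded
```

With the numbers from Example 2, a ternary H whose value h1 is maximal for every value of its blanket variable I would be fixed, and any value that is never maximal would be excluded.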


Proposition 1 uses for this purpose the Markov blanket of H. By building upon the Markov blanket, which in essence is independent of the MAP set and of the entered evidence, generally applicable statements about the values of H are found. A disadvantage of building upon the Markov blanket, however, is that maximally different distributions for H are examined, which decreases the chances of fixing or excluding values.

By taking a blanket-like collection of variables at a larger distance from the MAP variable H, the marginal distributions examined for H are likely to be less divergent, which serves to increase the chances of fixing or excluding values of H as MAP values. A major disadvantage of such a blanket, however, is that its size tends to grow with the distance from H, which will result in an infeasibly large number of instantiations to be studied. For any blanket-like collection, we observe that the MAP variables of the problem have to either be in the blanket or be d-separated from H by the blanket. Proposition 2 builds upon this observation explicitly and considers only the instantiations of the MAP blanket of the variable. The proposition thereby reduces the computations involved in its application yet retains and even further exploits the advantage of examining less divergent marginal distributions over H. Note that the values that can be fixed or excluded based on the first proposition will also be fixed or excluded based on the second proposition. It may nevertheless still be worthwhile to exploit the first proposition because, as stated before, with this proposition values can be fixed or excluded in general, and more restricted and possibly fewer computations are required.

So far we have argued that application of the two propositions serves to reduce the search space for a MAP problem by fixing variables to particular values and by excluding other values from further consideration. We would like to mention that by fixing variables the graphical structure of the Bayesian network under study may fall apart into unconnected components, for which the MAP problem can be solved separately. We illustrate the basic idea with our running example.

Example 3. We consider again the MAP set {H, E, G, I} for the Bayesian network from Figure 1. Now suppose that the variable H can be fixed to a particular value. Then, by performing evidence absorption of this value, the graphical structure of the network falls apart into the two components {A, I, H} and {B, C, D, E, F, G}, respectively. The MAP problem then decomposes into the problem with the MAP set {I} for the first component and the problem with the MAP set {E, G} for the second component; both these problems now include the value of H as further evidence. The search space thus is further reduced from 27 to 3 + 9 = 12 instantiations to be studied.

4 Experiments

In the previous section, we have introduced a method for preprocessing the MAP problem for Bayesian networks. In this section, we perform a preliminary study of the practicability of our method by solving MAP problems for real networks using an exact algorithm. In Section 4.1 we describe the set-up of the experiments; we review the results in Section 4.2.

4.1 The Experimental Set-up

In our experiments, we study the effects of our preprocessing method on three real Bayesian networks. We first report the percentages of values that are fixed or excluded by exploiting Proposition 1. We then compare the numbers of fixed variables as well as the numbers of network propagations with and without preprocessing, upon solving various MAP problems with a state-of-the-art exact algorithm.

In our experiments, we use three real Bayesian networks with a relatively high connectivity; Table 1 reports the numbers of variables and values for these networks. The Wilson's disease network (WD) is a small network in medicine, developed for the diagnosis of Wilson's liver disease (Korver and Lucas, 1993). The classical swine fever network (CSF) is a network in veterinary science, currently under development, for the early detection of outbreaks of classical swine fever in pig herds

Preprocessing the MAP Problem 55


(Geenen and Van der Gaag, 2005). The extended oesophageal cancer network (OESO+) is a moderately-sized network in medicine, which has been developed for the prediction of response to treatment of oesophageal cancer (Aleman et al., 2000). For each network, we compute MAP solutions for randomly generated MAP sets with 25% and 50% of the network's variables, respectively; for each size, five sets are generated. We did not set any evidence.

For solving the various MAP problems in our experiments, we use a basic implementation of the exact branch-and-bound algorithm available from Park and Darwiche (2003). This algorithm solves the MAP problem exactly for most networks for which the Pr and MPE problems are feasible. The algorithm constructs a depth-first search tree by choosing values for subsequent MAP variables, cutting off branches using an upper bound. Since our preprocessing method reduces the search space by fixing variables and excluding values, it essentially serves to decrease the depth of the tree and to diminish its branching factor.

4.2 Experimental results

In the first experiment, we established for each network the number of values that could be fixed or excluded by applying Proposition 1, that is, by studying the marginal distributions per variable given its Markov blanket. For computational reasons, we decided not to investigate variables for which the associated blanket had more than 45,000 different instantiations; this number is arbitrarily chosen. In the WD, CSF and OESO+ networks, there were 0, 8 and 15 such variables, respectively.

The results of the first experiment are presented in Table 1. The table reports, for each network, the total number of variables, the total number of values, the number of variables for which a value could be fixed, and the number of values that could be fixed or excluded; note that if, for example, for a ternary variable a value can be fixed, then also two values can be excluded. We observe that 17.1% to 19.0% of the variables could be fixed to a particular value. The number of values that could be fixed or excluded ranges between 15.3% and 23.2%.

Table 1: The numbers of fixed and excluded values.

network  #vars.  #vals.  #vars.f.     #vals.f.+e.
WD       21      56      4 (19.0%)    13 (23.2%)
CSF      41      98      7 (17.1%)    15 (15.3%)
OESO+    67      175     12 (17.9%)   27 (15.4%)

We would like to stress that whenever a variable can be fixed to a particular value, this result is valid for any MAP problem that includes this variable. The computations involved, therefore, have to be performed only once.

In the second experiment, we compared for each network the numbers of variables that could be fixed by the two different preprocessing steps; for Proposition 2, we restricted the number of network propagations per MAP variable to four because of the limited applicability of the resulting information. We further established the numbers of network propagations of the exact branch-and-bound algorithm that were forestalled by the preprocessing.

The results of the second experiment are presented in Table 2. The table reports in the two leftmost columns, for each network, the sizes of the MAP sets used and the average number of network propagations performed without any preprocessing. In the subsequent two columns, it reports the average number of variables that could be fixed to a particular value by using Proposition 1 and the average number of network propagations performed by the branch-and-bound algorithm after this preprocessing step. In the fifth and sixth columns, the table reports the numbers obtained with Proposition 2; the sixth column in addition shows, between parentheses, the average number of propagations that are required for the application of Proposition 2. In the final two columns of the table, results for the two preprocessing steps combined are reported: the last but one column again mentions the number of variables that could be fixed to a particular value by Proposition 1; it moreover mentions the average number of variables that could be fixed to a particular value by Proposition 2 after Proposition 1



had been used. The rightmost column reports the average number of network propagations required by the branch-and-bound algorithm. We would like to note that the additional computations required for Proposition 2 are restricted network propagations; although the worst-case complexity of these propagations is the same as that of the propagations performed by the branch-and-bound algorithm, their runtime requirements may be considerably less.

From Table 2, we observe that the number of network propagations performed by the branch-and-bound algorithm grows with the number of MAP variables, as expected. For the two smaller networks, we observe in fact that the number of propagations without preprocessing equals the number of MAP variables plus one. For the larger OESO+ network, the number of network propagations performed by the algorithm is much larger than the number of MAP variables. This finding is not unexpected since the MAP problem has a high computational complexity. For the smaller networks, we further observe that each variable that is fixed by one of the preprocessing steps translates directly into a reduction of the number of propagations by one. For the OESO+ network, we find a larger reduction in the number of network propagations per fixed variable. Our experimental results thus indicate that the number of propagations required by the branch-and-bound algorithm indeed is decreased by fixing variables to their MAP value.

With respect to using Proposition 1, we observe that in all networks under study a reasonable number of variables could be fixed to a MAP value. We did not take the number of local propagations required for this proposition into consideration because the computational burden involved is amortised over future MAP computations. With respect to using Proposition 2, we observe that for all networks and all MAP sets the additional computations involved outweigh the number of network computations that are forestalled for the algorithm. This observation applies to using just Proposition 2 as well as to using the proposition after Proposition 1 has been applied. We also observe that fewer variables are fixed in the step based on Proposition 2 than in the step based on Proposition 1. This can be attributed to the limited number of propagations used in the step based on Proposition 2. We conclude that, for the networks under study, it has been quite worthwhile to use Proposition 1 as a preprocessing step before actually solving the various MAP problems. Because of the higher computational burden involved and its relative lack of additional value, the use of Proposition 2 has not been worthwhile for our networks and associated problems.

To conclude, in Section 3 we observed that fixing variables to their MAP value could serve to partition a MAP problem into smaller problems. Such a partition did not occur in our experiments. We would like to note, however, that we studied MAP problems without evidence only. We expect that in the presence of evidence MAP problems will more readily be partitioned into smaller problems.

5 Conclusions and discussion

The MAP problem for Bayesian networks is the problem of finding for a set of variables an instantiation of highest posterior probability given the available evidence. The problem has an unfavourable computational complexity, being NP^PP-hard in general. In this paper, we showed that computation of the marginal posterior probabilities of a variable H given its Markov blanket may provide exact information about its value in a MAP solution. This information is valid for any MAP problem that includes the variable H. We further showed that computation of the marginal probabilities of H given its MAP blanket may also provide exact information about its value. This information is valid, however, for a more restricted class of MAP problems. We argued that these results can be exploited for preprocessing MAP problems before they are actually solved using any state-of-the-art algorithm for this purpose.

We performed a preliminary experimental study of the practicability of the preprocessing steps by solving MAP problems for three different Bayesian networks using an exact branch-



Table 2: The number of network propagations without and with preprocessing using Propositions 1 and 2.

                           Prop. 1              Prop. 2                   Prop. 1+2
network  #MAP  #props.    #vars.f.  #props.    #vars.f.  #props. (add.)  #vars.f.    #props. (add.)
WD       5     6.0        1.2       4.8        0.0       6.0 (0.6)       1.2 + 0.0   4.8 (0.6)
         10    11.0       2.4       8.6        1.0       10.0 (6.4)      2.4 + 0.2   8.4 (4.0)
CSF      10    11.0       2.0       9.0        1.0       10.0 (1.6)      2.0 + 0.4   8.6 (0.4)
         20    21.0       3.4       17.6       1.6       19.4 (8.2)      3.4 + 0.6   17.0 (5.0)
OESO+    17    24.8       3.4       18.4       0.8       22.0 (4.2)      3.4 + 0.0   18.4 (2.2)
         34    49.8       5.8       40.8       1.8       47.4 (7.8)      5.8 + 0.4   40.0 (4.6)

and-bound algorithm. As expected, the number of network propagations required by the algorithm is effectively decreased by fixing MAP variables to their appropriate values. We found that by building upon the concept of Markov blanket for 17.1% to 19.0% of the variables a MAP value could be fixed. Since the results of this preprocessing step are applicable to all future MAP computations and the computational burden involved thus is amortised, we considered it worthwhile to perform this step for the investigated networks. We would like to add that, because the computations involved are highly local, the step may also be feasibly applied to networks that are too large for the exact MAP algorithm used in the experiments. With respect to building upon the concept of MAP blanket, we found that for the investigated networks and associated MAP problems, the computations involved outweighed the reduction in network propagations that was achieved. Since the networks in our study were comparable with respect to size and connectivity, further experiments are necessary before any definitive conclusion can be drawn with respect to this preprocessing step.

In our further research, we will expand the experiments to networks with different numbers of variables, different cardinality and diverging connectivity. More specifically, we will investigate for which types of network preprocessing is most profitable. In our future experiments we will also take the effect of evidence into account. We will further investigate if the class of MAP problems that can be feasibly solved can be extended by our preprocessing method. We hope to report further results in the near future.

Acknowledgements

This research was partly supported by the Netherlands Organisation for Scientific Research (NWO).

References

B.M.P. Aleman, L.C. van der Gaag and B.G. Taal. 2000. A Decision Model for Oesophageal Cancer: Definition Document (internal publication, in Dutch).

P.L. Geenen and L.C. van der Gaag. 2005. Developing a Bayesian network for clinical diagnosis in veterinary medicine: from the individual to the herd. In 3rd Bayesian Modelling Applications Workshop, held in conjunction with the 21st Conference on Uncertainty in Artificial Intelligence.

M. Korver and P.J.F. Lucas. 1993. Converting a rule-based expert system into a belief network. Medical Informatics, 18:219-241.

J. Park and A. Darwiche. 2001. Approximating MAP using local search. In 17th Conference on Uncertainty in Artificial Intelligence, pages 403-410.

J. Park. 2002. MAP complexity results and approximation methods. In 18th Conference on Uncertainty in Artificial Intelligence, pages 388-396.

J. Park and A. Darwiche. 2003. Solving MAP exactly using systematic search. In 19th Conference on Uncertainty in Artificial Intelligence, pages 459-468.

J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Palo Alto.



Quartet-Based Learning of Shallow Latent Variables
Tao Chen and Nevin L. Zhang
Department of Computer Science and Engineering
The Hong Kong University of Science and Technology, Hong Kong, China
{csct,lzhang}@cse.ust.hk

Abstract

Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are hidden. We explore the following two-stage approach for learning HLC models: One first identifies the shallow latent variables, i.e. the latent variables adjacent to observed variables, and then determines the structure among the shallow and possibly some other deep latent variables. This paper is concerned with the first stage. In earlier work, we have shown how shallow latent variables can be correctly identified from quartet submodels if one could learn them without errors. In reality, one does make errors when learning quartet submodels. In this paper, we study the probability of such errors and propose a method that can reliably identify shallow latent variables despite the errors.

1 Introduction

Hierarchical latent class (HLC) models (Zhang, 2004) are tree-structured Bayesian networks where variables at leaf nodes are observed and hence are called manifest variables (nodes), while variables at internal nodes are hidden and hence are called latent variables (nodes). HLC models were first identified by Pearl (1988) as a potentially useful class of models and were first systematically studied by Zhang (2004) as a framework to alleviate the disadvantages of LC models for clustering. As a tool for cluster analysis, HLC models produce more meaningful clusters than latent class models and they allow multi-way clustering at the same time. As a tool for probabilistic modeling, they can model high-order interactions among observed variables and help one to reveal interesting latent structures behind data. They also facilitate unsupervised profiling.

Several algorithms for learning HLC models have been proposed. Among them, the heuristic single hill-climbing (HSHC) algorithm developed by Zhang and Kocka (2004) is currently the most efficient. HSHC has been used to successfully analyze, among others, the CoIL Challenge 2000 data set (van der Putten and van Someren, 2004), which consists of 42 manifest variables and 5,822 records, and a data set about traditional Chinese medicine (TCM), which consists of 35 manifest variables and 2,600 records.

In terms of running time, HSHC took 98 hours to analyze the aforementioned TCM data set on a top-end PC, and 121 hours to analyze the CoIL Challenge 2000 data set. It is clear that HSHC will not be able to analyze data sets with hundreds of manifest variables.

Aimed at developing algorithms more efficient than those currently available, we explore a two-stage approach where one (1) identifies the shallow latent variables, i.e. latent variables adjacent to observed variables, and (2) determines the structure among those shallow, and possibly some other deep, latent variables. This paper is concerned with the first stage.

In earlier work (Chen and Zhang, 2005), we have shown how shallow latent variables can be correctly identified from quartet submodels if one could learn them without errors. In reality, one does make errors when learning quartet submodels. In this paper, we study the probability of such errors and propose a method that can reliably identify shallow latent variables despite the errors.

Figure 1: An example HLC model and the corresponding unrooted HLC model. The Xi's are latent nodes and the Yj's are manifest nodes.

Figure 2: Four possible quartet substructures for a quartet Q = {U, V, T, W}. The fork in (a) is denoted by [UVTW]; the dogbones in (b), (c), and (d) respectively by [UV|TW], [UT|VW], and [UW|VT].

2 HLC Models and Shallow Latent Variables

Figure 1 (a) shows an example HLC model. Zhang (2004) has proved that it is impossible to determine, from data, the orientation of edges in an HLC model. One can learn only unrooted HLC models, i.e. HLC models with all directions on the edges dropped. Figure 1 (b) shows an example unrooted HLC model. An unrooted HLC model represents a class of HLC models. Members of the class are obtained by rooting the model at various nodes. Semantically it is a Markov random field on an undirected tree. In the rest of this paper, we are concerned only with unrooted HLC models.

In this paper, we will use the term HLC structure to refer to the set of nodes in an HLC model and the connections among them. An HLC structure is regular if it does not contain latent nodes of degree 2. Starting from an irregular HLC structure, we can obtain a regular structure by connecting the two neighbors of each latent node of degree 2 and then removing that node. This process is known as regularization. In this paper, we only consider regular HLC structures.

In an HLC model, a shallow latent variable (SLV) is one that is adjacent to at least one manifest variable. Two manifest variables are siblings if they are adjacent to the same (shallow) latent variable. For a given shallow latent variable X, all manifest variables adjacent to X constitute a sibling cluster. In the HLC structure shown in Figure 1 (b), there are 3 sibling clusters, namely {Y1}, {Y2, Y3, Y4}, and {Y5, Y6, Y7}. They correspond to the three latent variables in the model respectively.

3 Quartet-Based SLV Discovery: The Principle

A shallow latent node is defined by its relationship with its manifest neighbors. Hence to identify the shallow latent nodes means to identify sibling clusters. To identify sibling clusters, we need to determine, for each pair of manifest variables (U, V), whether U and V are siblings. In this section, we explain how to answer this question by inspecting quartet submodels.

A quartet is a set of four manifest variables, e.g., Q = {U, V, T, W}. The restriction of an HLC model structure S onto Q is obtained from S by deleting all the nodes and edges not in the paths between any pair of variables in Q. Applying regularization to the resulting HLC structure, we obtain the quartet substructure for Q, which we denote by S|Q. As shown in Figure 2, S|Q is either the fork [UVTW], or one of the dogbones [UV|TW], [UT|VW], and [UW|VT].

Consider the HLC structure in Figure 1 (b). The quartet substructure for {Y1, Y2, Y3, Y4} is the fork [Y1Y2Y3Y4], while that for {Y1, Y2, Y4, Y5} is the dogbone [Y1Y5|Y2Y4], and that for {Y1, Y2, Y5, Y6} is the dogbone [Y1Y2|Y5Y6].

It is obvious that if two manifest variables U and V are siblings in the structure S, then they must be siblings in any quartet substructure that involves both of them. Chen and Zhang (2005) has proved the converse.

60 T. Chen and N. L. Zhang
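The two quartet collections at play in this section, all quartets containing both U and V versus the parsimonious collection Q_{UV|T} with a standing member T, can be enumerated directly. The following is an illustrative sketch (not the authors' code):

```python
from itertools import combinations

def all_quartets(manifest, u, v):
    """All quartets containing both u and v: (n-2)(n-3)/2 of them."""
    rest = [w for w in manifest if w not in (u, v)]
    return [frozenset((u, v, a, b)) for a, b in combinations(rest, 2)]

def quartets_with_standing_member(manifest, u, v, t):
    """Q_{UV|T}: the quartets {u, v, t, w} sharing the standing member t;
    only n - 3 of them."""
    return [frozenset((u, v, t, w)) for w in manifest if w not in (u, v, t)]
```

For n = 7 manifest variables, the full collection has (n-2)(n-3)/2 = 10 quartets, while Q_{UV|T} has only n - 3 = 4.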


So, we have

Theorem 1 Suppose S is a regular HLC structure. Let (U, V) be a pair of manifest variables. Then U and V are not siblings in S iff there exist two other manifest variables T and W such that S|{U,V,T,W} is a dogbone where U and V are not siblings.

Theorem 1 indicates that we can determine whether U and V are siblings by examining all possible quartets involving both U and V. There are (n-2)(n-3)/2 such quartets, where n is the number of manifest variables. We next present a result that allows us to do the same by examining only n - 3 quartets.

Let (U, V) be a pair of manifest variables and T be a third one. We use Q_{UV|T} to denote the following collection of quartets:

Q_{UV|T} := {{U, V, T, W} | W ∈ Y \ {U, V, T}},

where Y is the set of all manifest variables. T appears in every quartet in Q_{UV|T} and is thus called a standing member of Q_{UV|T}. It is clear that Q_{UV|T} consists of n - 3 quartets. Chen and Zhang (2005) has also proved the following:

Theorem 2 Suppose S is a regular HLC structure. Let (U, V) be a pair of manifest variables and T be a third one. U and V are not siblings in S iff there exists a quartet Q ∈ Q_{UV|T} such that S|Q is a dogbone where U and V are not siblings.

In learning tasks, we do not know the structure S of the generative model. Where do we obtain the quartet substructures? The answer is to learn them from data. Let M be an HLC model with a regular structure S. Suppose that D is a collection of i.i.d. samples drawn from M. Each record in D contains values for the manifest variables, but not for the latent variables. Let QSL(D, Q) be a routine that takes data D and a quartet Q as inputs, and returns an HLC structure on the quartet Q. One can use the HSHC algorithm to implement QSL, and one can first project the data D onto the quartet Q when learning the substructure for Q.

Suppose QSL is error-free, i.e. QSL(D, Q) = S|Q for any quartet Q. By Theorem 2, we can determine whether two manifest variables U and V are siblings (in the generative model) as follows:

- Pick a third manifest variable T.
- For each Q ∈ Q_{UV|T}, call QSL(D, Q).
- If U and V are not siblings in one of the resulting substructures, then conclude that they are not siblings (in the generative model).
- If U and V are siblings in all the resulting substructures, then conclude that they are siblings (in the generative model).

We can run the above procedure on each pair of manifest variables to determine whether they are siblings. Afterwards, we can summarize all the results using a sibling graph. The sibling graph is an undirected graph over the manifest variables where two variables U and V are adjacent iff they are determined as siblings.

If QSL is error-free, then each connected component of the sibling graph should be completely connected and correspond to one latent variable. For example, if the structure of the generative model is as in Figure 3 (a), then the sibling graph that we obtain will be as in Figure 3 (b). There are four completely connected components, namely {Y1, Y2, Y3}, {Y4, Y5, Y6}, {Y7, Y8, Y9}, {Y10, Y11, Y12}, which respectively correspond to the four latent variables in the generative structure.

4 Probability of Learning Quartet Submodels Correctly

In the previous section, we assumed that QSL is error-free. In reality, one does make mistakes when learning quartet submodels. We have empirically studied the probability of such errors.

For our experiments, QSL was implemented using the HSHC algorithm. For model selection, we tried each of the scoring functions, namely BIC (Schwarz, 1978), BICe (Kocka and Zhang, 2002), AIC (Akaike, 1974), and the Cheeseman-Stutz (CS) score (Cheeseman and Stutz, 1995).

We randomly generated around 20,000 quartet models. About half of them are fork-structured, while the rest are dogbone-structured. The cardinalities of the variables



range from 2 to 5. From each of the models,
Table 1: Percentage of times that QSL produced
we sampled data sets of size 500, 1,000, 2,500,
the wrong quartet structure. The table on the
5,000. The QSL was then used to analyze the
top is for the fork-structured generative models,
data sets. In all the experiments, QSL produced
while the table at the bottom is for the dogbone-
either forks or dogbones. Consequently, there
structured generative models.
are only three classes of errors:
F2D: The generative model was a fork, and QSL 500(F2D) 1000(F2D) 2500(F2D) 5000(F2D)
BIC 0.83% 1.87% 3.70% 3.93%
produced a dogbone. BICe 1.42% 3.16% 6.58% 7.86%
AIC 12.8% 10.2% 6.81% 4.85%
D2F: The generative model was a dogbone, and CS 6.20% 5.05% 6.19% 6.41%
QSL produced a fork. Total=10011

D2D: The generative model was a dogbone, and 500 1000 2500 5000
D2F D2D D2F D2D D2F D2D D2F D2D
QSL produced a different dogbone. BIC 95.3%0.00% 87.0%0.00% 68.2%0.00% 51.5%0.00%
BICe 95.0%0.05% 86.8%0.00% 67.8%0.04% 50.6%0.04%
The statistics are shown in Table 5. To understand the meaning of the numbers, consider the number 0.83% at the upper-left corner of the top table. It means that, when the sample size was 500, QSL returned dogbones in 0.83 percent, or 83, of the 10,011 cases where the generative models were forks. In all the other cases, QSL returned the correct fork structure.

[Table residue (AIC and CS rows): AIC 71.2%±2.71%, 60.5%±1.58%, 46.7%±0.54%, 39.0%±0.38%; CS 88.5%±1.93%, 83.4%±0.74%, 67.0%±0.30%, 51.5%±0.13%; Total = 10023]

It is clear from the tables that the probability of D2F errors is large, the probability of F2D errors is small, and the probability of D2D errors is very small, especially when BIC or BICe is used for model selection. Also note that the probability of D2F errors decreases with sample size, but that of F2D errors does not. In the next section, we will use those observations when designing an algorithm for identifying shallow latent variables.

In terms of comparison among the scoring functions, BIC and BICe are clearly preferred over the other two as far as F2D and D2D errors are concerned. It is interesting to observe that BICe, although proposed as an improvement to BIC, is not as good as BIC when it comes to learning quartet models. For the rest of this paper, we use BIC.

5 Quartet-Based SLV Discovery: An Algorithm

The quartet-based approach to SLV discovery consists of three steps: (1) learn quartet submodels, (2) determine sibling relationships among manifest variables and hence obtain a sibling graph, and (3) introduce SLVs based on the sibling graph. Three questions ensue:

i) Which quartets should we use in Step 1?
ii) How do we determine sibling relationships in Step 2 based on results from Step 1?
iii) How do we introduce SLVs in Step 3 based on the sibling graph constructed in Step 2?

Our answer to the second question is simple: two manifest variables are regarded as non-siblings if they are not siblings in one of the quartet submodels. In the next two subsections, we discuss the other two questions.

5.1 SLV Introduction

As seen in Section 3, when QSL is error-free, the sibling graph one obtains has a nice property: every connected component is a fully connected subgraph. In this case, the rule for SLV introduction is obvious:

    Introduce one latent variable for each connected component.

When QSL is error-prone, the sibling graph one obtains no longer has the aforementioned property. Suppose data are sampled from a model with the structure shown in Figure 3 (a). Then what one obtains might be the graphs (c), (d), or (e) instead of (b).

Nonetheless, we still use the above SLV introduction rule for the general case. We choose it for its simplicity, and for the lack of better alternatives. This choice also endows the SLV discovery algorithm being developed with error tolerance abilities.
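The SLV introduction rule above amounts to a connected-components computation on the sibling graph. The following is a minimal Python sketch of the rule (the paper's implementation is in Java; all identifiers here are ours), including the error tolerance just noted: a cluster survives even when some of its internal sibling edges are missed.

```python
# Minimal sketch (ours, not the authors' Java code) of the SLV
# introduction rule: one latent variable per connected component
# of the sibling graph.

def connected_components(nodes, edges):
    """Connected components of an undirected graph, by DFS."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        components.append(comp)
    return components

def introduce_slvs(nodes, sibling_edges):
    """Introduce one shallow latent variable per connected component."""
    comps = connected_components(nodes, sibling_edges)
    return {"X%d" % (i + 1): comp for i, comp in enumerate(comps)}

# The edge Y1-Y3 is missing, yet {Y1, Y2, Y3} is still one component,
# so a single latent variable is introduced for the whole cluster.
latents = introduce_slvs(["Y1", "Y2", "Y3", "Y4", "Y5"],
                         [("Y1", "Y2"), ("Y2", "Y3"), ("Y4", "Y5")])
```

With an error-free QSL every component is fully connected and the rule recovers exactly the true sibling clusters; with an error-prone QSL the rule still tolerates a few missing edges, as in the Y1-Y3 example above.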

62 T. Chen and N. L. Zhang


Figure 3: Generative model (a), sibling graphs (b, c, d, e), and shallow latent variables (f). [The graph drawings are not reproducible here; only their node labels (Y1-Y12, X1-X4) survived extraction.]

There are three types of mistakes that one can make when introducing SLVs, namely latent omission, latent commission, and misclustering. In the example shown in Figure 3, if we introduce SLVs based on the sibling graph (c), then we will introduce three latent variables. Two of them correspond respectively to the latent variables X1 and X2 in the generative model, while the third corresponds to a merge of X3 and X4. So, one latent variable is omitted.

If we introduce SLVs based on the sibling graph (d), then we will introduce five latent variables. Three of them correspond respectively to X1, X3, and X4, while the other two are both related to X2. So, one latent variable is commissioned.

If we introduce SLVs based on the sibling graph (e), then we will introduce four latent variables. Two of them correspond respectively to X1 and X4. The other two are related to X2 and X3, but there is no clear correspondence. This is a case of misclustering.

We next turn to Question 1. There, the most important concern is how to minimize errors.

5.2 Quartet Selection

To determine whether two manifest variables U and V are siblings, we can consider all quartets in Q_{UV|T}, i.e. all the quartets with a third variable T as a standing member. This selection of quartets will be referred to as the parsimonious selection. There are only n - 3 quartets in the selection.

If QSL were error-free, one could use Q_{UV|T} to correctly determine whether U and V are siblings. As an example, suppose data are sampled from a model with the structure shown in Figure 3 (a), and we want to determine whether Y9 and Y11 are siblings based on data. Further suppose Y4 is picked as the standing member. If QSL is error-free, we get 4 dogbones where Y9 and Y11 are not siblings, namely [Y9 Y7 | Y11 Y4], [Y9 Y8 | Y11 Y4], [Y9 Y4 | Y11 Y10], [Y9 Y4 | Y11 Y12], and hence conclude that Y9 and Y11 are not siblings.

In reality, QSL does make mistakes. According to Section 4, the probability of D2F errors is quite high. There is therefore a good chance that, instead of the aforementioned 4 dogbones, we get 4 forks. In that case, Y9 and Y11 will be regarded as siblings, resulting in a fake edge in the sibling graph.

Q_{UV|T} represents one extreme when it comes to quartet selection. The other extreme is to use all the quartets that involve both U and V. This selection of quartets will be referred to as the generous selection. Suppose U and V are non-siblings in the generative model. This generous choice will reduce the effects of D2F errors. As a matter of fact, there are now many more quartets, when compared with the case of parsimonious quartet selection, for which the true structures are dogbones with U and V being non-siblings. If we learn one of those structures correctly, we will be able to correctly identify U and V as non-siblings.

The generous selection also comes with a drawback. In our running example, consider the task of determining whether Y1 and Y2 are siblings based on data. There are 36 quartets

Quartet-Based Learning of Shallow Latent Variables 63

that involve Y1 and Y2 and for which the true structures are forks. Those include [Y1 Y2 Y4 Y7], [Y1 Y2 Y4 Y10], and so on. According to Section 4, the probability for QSL to make an F2D mistake on any one of those quartets is small. But there are 36 of them. There is a good chance for QSL to make an F2D mistake on one of them. If the structure QSL learned for [Y1 Y2 Y4 Y7], for instance, turns out to be the dogbone [Y1 Y4 | Y2 Y7], then Y1 and Y2 will be regarded as non-siblings, and hence the edge between Y1 and Y2 will be missing from the sibling graph.

Those discussions point to the middle ground between parsimonious and generous quartet selection. One natural way to explore this middle ground is to use several standing members instead of one.

Algorithm DiscoverSLVs(D, m):
1. G ← ConstructSGraph(D, m).
2. return the list of the connected components of G.

Algorithm ConstructSGraph(D, m):
1. G ← complete graph over manifest nodes Y.
2. for each edge (U, V) of G,
3.    pick {T1, ..., Tm} from Y \ {U, V}
4.    for each Q in the union of Q_{UV|Ti} (i = 1, ..., m),
5.       if U and V are not siblings in QSL(D, Q),
6.          delete edge (U, V) from G, break.
7.    endFor.
8. endFor.
9. return G.

Figure 4: An algorithm for learning SLVs.
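The pseudocode of Figure 4 can be sketched in Python as follows. QSL is abstracted as a caller-supplied oracle are_siblings(U, V, Q) that reports whether U and V are siblings in the submodel learned for quartet Q; passing a global standing list (filtered per edge) is our simplification of Figure 4's per-edge pick of {T1, ..., Tm}. This is an illustrative sketch, not the authors' Java implementation.

```python
from itertools import combinations

def construct_sgraph(variables, standing, are_siblings):
    """ConstructSGraph (Figure 4): start from the complete graph and
    delete an edge (U, V) as soon as one quartet submodel makes U and
    V non-siblings."""
    edges = {frozenset(e) for e in combinations(variables, 2)}
    for U, V in combinations(variables, 2):
        members = [T for T in standing if T not in (U, V)]
        # All quartets in Q_{UV|Ti}: U, V, a standing member, one more.
        quartets = [frozenset((U, V, T, X)) for T in members
                    for X in variables if X not in (U, V, T)]
        for Q in quartets:
            if not are_siblings(U, V, Q):   # stand-in for QSL(D, Q)
                edges.discard(frozenset((U, V)))
                break
    return edges

def discover_slvs(variables, standing, are_siblings):
    """DiscoverSLVs (Figure 4): one sibling cluster (hence one SLV)
    per connected component of the sibling graph."""
    edges = construct_sgraph(variables, standing, are_siblings)
    clusters = [{v} for v in variables]
    for edge in edges:
        u, v = tuple(edge)
        cu = next(c for c in clusters if u in c)
        cv = next(c for c in clusters if v in c)
        if cu is not cv:
            clusters.remove(cv)
            cu |= cv
    return clusters

# Error-free oracle over two true sibling clusters {A,B,C} and {D,E,F}:
true_cluster = {"A": 0, "B": 0, "C": 0, "D": 1, "E": 1, "F": 1}
clusters = discover_slvs(list("ABCDEF"), ["A", "B", "D"],
                         lambda U, V, Q: true_cluster[U] == true_cluster[V])
```

With an error-free oracle the true clusters are recovered exactly; an error-prone oracle would have to misjudge several quartets in the right combination before a cluster breaks up, which is the error tolerance discussed below.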
5.3 The Algorithm

Figure 4 shows an algorithm for discovering shallow latent variables, namely DiscoverSLVs. The algorithm first calls a subroutine ConstructSGraph to construct a sibling graph, and then finds all the connected components of the graph. It is understood that one latent variable is introduced for each of the connected components.

The subroutine ConstructSGraph starts from the complete graph. For each pair of manifest variables U and V, it considers all the quartets that involve U, V, and one of the m standing members Ti (i = 1, 2, ..., m). QSL is called to learn a submodel structure for each of the quartets. If U and V are not siblings in one of the quartet substructures, the edge between U and V is deleted.

DiscoverSLVs has an error tolerance mechanism naturally built in. This is mainly because it regards connected components in the sibling graph as sibling clusters. Let C be a sibling cluster in the generative model. The vertices in C will be placed in one cluster by DiscoverSLVs if they are in the same connected component in the sibling graph G produced by ConstructSGraph. It is not required for variables in C to be pairwise connected in G. Therefore, a few mistakes by ConstructSGraph when determining sibling relationships are tolerated. When the set C is not very small, it takes a few or more mistakes in the right combination to break up C.

6 Empirical Evaluation

We have carried out simulation experiments to evaluate the ability of DiscoverSLVs in discovering latent variables. This section describes the setup of the experiments and reports our findings.

The generative models in the experiments share the same structure. The structure consists of 39 manifest variables and 13 latent variables. The latent variables form a complete 3-ary tree of height two. Each latent variable in the structure is connected to 3 manifest variables. Hence all latent variables are shallow. The cardinalities of all variables were set at 3.

We created 10 generative models from the structure by randomly assigning parameter values. From each of the 10 generative models, we sampled 5 data sets of 500, 1,000, 2,500, 5,000 and 10,000 records. DiscoverSLVs was run on each of the data sets three times, with the number of standing members m set at 1, 3 and 5 respectively. The algorithms were implemented in Java and all experiments were run on a Pentium 4 PC with a clock rate of 3.2 GHz.

The performance statistics are summarized in Table 2. They consist of errors at three different



Table 2: Performance statistics of our SLV discovery algorithm.
QSL-level Edge-level SLV-level
m D2F D2D F2D ME FE LO LCO MC
Sample size = 500
1 77.0%(4.5%) 0.06%(0.05%) 0.40%(0.13%) 1.0(1.1) 88.2(9.2) 11(3.0) 0(0) 0(0)
3 71.8%(4.0%) 0.05%(0.03%) 0.36%(0.14%) 2.9(1.3) 28.0(8.3) 9.3(3.1) 0(0) 0.1(0.3)
5 68.8%(3.0%) 0.05%(0.02%) 0.30%(0.14%) 3.3(1.6) 14.7(6.0) 5.1(1.9) 0.2(0.4) 0(0)
Sample size = 1000
1 65.3%(6.2%) 0.01%(0.03%) 0.17%(0.14%) 0.3(0.6) 55.3(15.7) 11.2(0.7) 0(0) 0(0)
3 58.3%(2.7%) 0.02%(0.02%) 0.10%(0.07%) 0.7(0.9) 10.1(6.5) 4.8(2.9) 0(0) 0(0)
5 56.6%(3.2%) 0.01%(0.01%) 0.10%(0.08%) 1.1(0.7) 6.2(3.8) 2.3(1.3) 0.1(0.3) 0(0)
Sample size = 2500
1 50.1%(2.3%) 0.00%(0.00%) 0.04%(0.07%) 0.2(0.4) 20.4(8.9) 8.6(2.3) 0(0) 0(0)
3 38.0%(4.6%) 0.00%(0.00%) 0.02%(0.04%) 0.2(0.4) 3.1(3.7) 1.4(1.4) 0(0) 0(0)
5 37.3%(3.8%) 0.00%(0.01%) 0.07%(0.08%) 1.1(1.4) 1.3(2.5) 1.5(0.9) 0.1(0.3) 0(0)
Sample size = 5000
1 32.6%(5.5%) 0.00%(0.00%) 0.03%(0.09%) 0(0) 7.7(6.0) 3.9(2.4) 0(0) 0(0)
3 25.2%(4.4%) 0.01%(0.01%) 0.04%(0.06%) 0.3(0.5) 1.5(2.7) 0.5(0.7) 0(0) 0(0)
5 25.7%(3.9%) 0.00%(0.01%) 0.10%(0.11%) 0.9(0.9) 0.9(2.7) 0.1(0.3) 0(0) 0(0)
Sample size = 10000
1 21.8%(5.9%) 0.00%(0.00%) 0.02%(0.07%) 0(0) 2.0(3.3) 1.2(1.8) 0(0) 0(0)
3 17.5%(3.9%) 0.00%(0.00%) 0.05%(0.06%) 0.2(0.4) 0.4(1.2) 0.1(0.3) 0(0) 0(0)
5 17.4%(4.1%) 0.00%(0.01%) 0.03%(0.04%) 0.6(0.8) 0.1(0.3) 0.1(0.3) 0(0) 0(0)

levels of the algorithm: the errors made when learning quartet substructures (QSL-level), the errors made when determining sibling relationships between manifest variables (edge-level), and the errors made when introducing SLVs (SLV-level). Each number in the table is an average over the 10 generative models. The corresponding standard deviations are given in parentheses.

QSL-level errors: We see that the probabilities of the QSL-level errors are significantly smaller than those reported in Section 4. This is because we deal with strong dependency models here, while the numbers in Section 4 are about general models. This indicates that the strong dependency assumption does make learning easier. On the other hand, the trends remain the same: the probability of D2F errors is large, that of F2D errors is small, and that of D2D errors is very small. Moreover, the probability of D2F errors decreases with sample size.

Edge-level errors: For the edge level, the numbers of missing edges (ME) and fake edges (FE) are reported. We see that the number of missing edges is always small, and in general it increases with the number of standing members m. This is expected since the larger m is, the more quartets one examines, and hence the more likely one is to make F2D errors.

The number of fake edges is large when the sample size is small and m is small. In general, it decreases with sample size and m. It dropped to 1.3 for the case of sample size 2,500 and m = 5. This is also expected. As m increases, the number of quartets examined also increases. For two manifest variables U and V that are not siblings in the generative model, the probability of obtaining a dogbone (despite D2F errors) where U and V are not siblings also increases. The number of fake edges decreases with m because as m increases, the probability of D2F errors decreases.

SLV-level errors: We now turn to SLV-level errors. Because there were not many missing edges, true sibling clusters of the generative models were almost never broken up. There are only five exceptions. The first exception happened for one generative model in the case of sample size 500 and m = 3. In that case, a manifest variable from one true cluster was placed into another, resulting in one misclustering error (MC).

The other four exceptions happened for the following combinations of sample size and m: (500, 5), (1000, 5), (2500, 5). In those cases, one true sibling cluster was broken into two clusters, resulting in four latent commission errors (LCO).

Fake edges cause clusters to merge and hence lead to latent omission errors. In our experiments, the true clusters were almost never broken up. Hence a good way to measure latent omission errors is to use the total number of shallow latent variables, i.e. 13, minus the number of clusters returned by DiscoverSLVs. We call this the number of LO errors. In Table 2, we



see that the number of LO errors decreases with sample size and the number of standing members m. It dropped to 1.4 for the case of sample size 2,500 and m = 3. When the sample size was increased to 10,000 and m set to 3 or 5, LO errors occurred only once for one of the 10 generative models.

Running time: The running times of DiscoverSLVs are summarized in the following table (in hours). For the sake of comparison, we also include the times HSHC took attempting to reconstruct the generative models based on the same data as used by DiscoverSLVs. We see that DiscoverSLVs took only a small fraction of the time that HSHC took, especially in the cases with large samples. This indicates that the two-stage approach that we are exploring can result in algorithms significantly more efficient than HSHC.

RunningTime(hrs)  500   1000  2500  5000  10000
m=1               1.05  1.06  0.92  0.89  0.84
m=3               1.57  1.52  1.49  1.79  1.91
m=5               1.85  2.07  2.17  2.72  3.02
HSHC              4.65  8.69  23.7  43.0  118.6

The numbers of quartets examined by DiscoverSLVs are summarized in the following table. We see that DiscoverSLVs examined only a small fraction of all the C(39,4) = 82,251 possible quartets. We also see that doubling m does not imply doubling the number of quartets examined, nor doubling the running time. Moreover, the number of quartets examined decreases with sample size. This is because the probability of D2F errors decreases with sample size.

        500   1000  2500  5000  10000
m=1     5552  4191  2861  2131  1816
m=3     8326  5939  4659  4292  4112
m=5     9801  8178  6776  6536  6496
Total: 82251

7 Related Work

Linear latent variable graphs (LLVGs) are a special class of structural equation models. Variables in such models are continuous. Some are observed, while others are latent. Silva et al. (2003) have studied the problem of identifying SLVs in LLVGs. Their approach is based on the Tetrad constraints (Spirtes et al., 2000), and its complexity is O(n^6).

Phylogenetic trees (PTs) (St. John et al., 2003) can be viewed as a special class of HLC models. The quartet-based method for PT reconstruction first learns submodels for all C(n,4) possible quartets and then uses them to build the overall tree (St. John et al., 2003).

8 Conclusions

We have empirically studied the probability of making errors when learning quartet HLC models and have observed some interesting regularities. In particular, we observed that the probability of D2F errors is high and decreases with sample size, while the probability of F2D errors is low. Based on those observations, we have developed an algorithm for discovering shallow latent variables in HLC models reliably and efficiently.

Acknowledgements

Research on this work was supported by Hong Kong Grants Council Grant #622105.

References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control.

Cheeseman, P. and Stutz, J. (1995). Bayesian classification (AutoClass): Theory and results. In Advances in Knowledge Discovery and Data Mining.

Chen, T. and Zhang, N.L. (2006). Quartet-based learning of HLC models: Discovery of shallow latent variables. In AIMath-06.

Kocka, T. and Zhang, N.L. (2002). Dimension correction for hierarchical latent class models. In UAI-02.

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics.

Silva, R., Scheines, R., Glymour, C. and Spirtes, P. (2003). Learning measurement models for unobserved variables. In UAI-03.

Spirtes, P., Glymour, C. and Scheines, R. (2000). Causation, Prediction and Search.

St. John, K., Warnow, T., Moret, B.M.E. and Vawter, L. (2003). Performance study of phylogenetic methods: (unweighted) quartet methods and neighbor-joining. Journal of Algorithms.

van der Putten, P. and van Someren, M. (2004). A bias-variance analysis of a real world learning problem: The CoIL Challenge 2000. Machine Learning.

Zhang, N.L. (2004). Hierarchical latent class models for cluster analysis. JMLR.

Zhang, N.L. and Kocka, T. (2004). Efficient learning of hierarchical latent class models. In ICTAI-04.

66 T. Chen and N. L. Zhang


 

   
  !"#$&%'$)(*
+-,/. .0214365-7988
NOQP,/. R STO?U>R&:<7W;>VY=?=@X[=BZ\A/7>CEU]D?7>F S_GI^`HKZBJabL9M ,9Udce+&fda^gU]OQa a
hi^j.k9^gUd^`,mln^`o`^jR,/.0qprUdarR^jR fdRrO
stO\u]^gUdkWRr7>UvwhYx4vwy{zWz>|/}2~363x43
tQ@?wW
l^u]R f].OQa7WVdRr.<fUdZB,/RrOQcO\u]P67>U]O?U>R^`,Wo`a&lnbX&6^gUdf]O?UdZ\O cw^`,/kW. ,9S_aZB,9U .OQP.OQarO?U>RcdOQZB^`a^j7>UPd. 798]
ojO?Sab^jR c^`aZ\.OBRrO)cOQZB^`a^j7>U{,/.<^`,W8ojOQa!b^jR ]7>fdRYo`^gS^jR,/R^j7>Uda!7>UR ]O&c^`aRr. ^`8wf]R^j7>Uda7WV6Z\7>U9R^gUf]7>fda
Zd,9UdZ\O)/,/. ^`,W8ojOQa 7W.R ]OU,/R f].O&7WV6R ]Obf]R^`o`^jR0TVfUdZ\R^j7>UaB3d^`aP,WP6OB.Pd.OQaO?U9Ra,9UTO\uRrO?Uda ^j7>UT7WV
l)X^gU]f]O?UdZ\O&c^`,/kW.<,9S_aYR d,/R ,Wo`oj7@ba 867WR Tcw^`aZ\.OBRrOb,9UdcTZ\7>U>R^gUf]7>fda cOQZB^`a^j7>U /,/. ^`,W8ojOQaB3b^`a
UdOBST7]cdOQoR ]OZ\7>U>R^gUf]7>fa)cdOQZB^`a^j7>Uel)X^gU]f]O?UdZ\Oc^`,/kW.<,9ScdOBWOQoj79Pwa)cdOQZB^`a^j7>U.fdojOQaEV7W.
Z\7>U>R^gUf]7>fda_cdOQZB^`a ^j7>U{,/.<^`,W8ojOQa_,Wa_,VfUdZ\R^j7>U7WVR ]OQ^j._Z\7>U>R^gUf]7>faP,/.O?U>RaB3pUR ^`a_ST7]cdOQoKv
,Z\7>U>R^gUf]7>fdacOQZB^`a^j7>UU]7]cdOq^`aS_,/. k9^gUd,Wo`^jBOQc80.OQPwo`,WZB^gU]k^jRTb^jR ,cOBRrOB.<S_^gUd^`aR^`ZqZ<d,9UdZ\O
Ud7cdO ,9Udc,WPPwoj0^gU]kq,9U79P6OB. ,/R^j7>UcOB. ^jWOQcVI.7>SR ]O_STOBR ]7]c7WV[Z\7>U>W79ogf]R^j7>Uda^gUPd.798,W8w^`o`^jR0
R dOB7W.0W3 prUO\udPOB.<^gSTO?U>RaBvZ\7>U9R^gUf]7>fdacdOQZB^`a^j7>UlnbX^gU]fdO?UdZ\O c^`,/kW. ,9Sa)Pd.7@^`cdO,9Uq^gUdZ\.OQ,WarO
^gUnS_,{ud^gSfwSO\udP6OQZ\RrOQcf]R^`o`^jR0q]O?UZ\7>SP,/.OQcRr7O\ud^`arR^gU]k2STOBR ]7]caB3
@W9! 9t 7WV)Rr.<fwUdZB,/RrOQcO\u]P67>U]O?U>R^`,Wo`aqln)X) ^gU]f]O?UdZ\O2c^
,/kW. ,9SaBvd^`Z<#,/. O^gUdf]O?UdZ\Oc^`,/kW. ,9Sa]OB.O
xiU GQ-MJB:<JLWGK?WAWF ^`an,Z\7>SP,WZ\RekW. ,WPd^`ZB,Wo Pd.798w,W8^`o`^jR0cw^`arRr. ^`8wf]R^j7>Uan,9Udcf]R^`o`^jR0VKfUZ\R^j7>Uda
.OQPd. OQarO?U9R,/R^j7>UmVI7W.-,cdOQZB^`a^j7>UmPd.798ojO?SfUdcdOB.EfU ,/.O.OQPd.OQaO?U9RrOQc8>0ln)X#P67WRrO?U>R^`,Wo`aB3NOQZB^`a^j7>U
Z\OB.R,W^gU>R0W3 pU^jR^`,Wo`oj0Wv^gUdf]O?UdZ\Oc^`,/kW.<,9S_a-OB.O /,/. ^`,W8ojOQa-^gUml)X^gU]f]O?UdZ\Oc^`,/kW. ,9S_a-S fdarR[d,?WO
Pd.79P679arOQc8>07?E,/. c,9Udcl,/R ]OQar7>UQW{z]_,Wa c^`aZ\. OBRrOarR,/RrOaP,WZ\OQaB ]7?-OBWOB.Qv S_,9U>0cdOQZB^`a ^j7>U
,V. 7>U9RO?UdcVI7W.cdOQZB^`a^j7>URr.OBOQaQ3dfd8aOQf]O?U>Roj0Wv Pd.798wojO?S_a_^gUPd. ,WZ\R^`Z\Od,QWOZ\7>U>R^gUf]7>famcdOQZB^`a ^j7>U
ogSarRrOQcQWW&,9Udcd,WZ<>RrOB.QWW)cdOBWOQoj79P6OQc /,/. ^`,W8ojOQaB3
STOBR d7caVI7W.OB/,Wogfd,/R^gU]k4,9UT^gUdf]O?UdZ\O)c^`,/kW.<,9Sc^ 5-7>Ua^`cdOB. R dObVI79o`oj7?b^gU]k cdOQZB^`a^j7>UmPd.798wojO?SVI.7>S
.OQZ\Roj0b^jR d7>f]R4Z\7>U>WOB.R^gU]k^jR4Rr7,ncdOQZB^`a^j7>URr.OBOW3 OQZ\7>U]7>S^`ZBaKarOBO4Y^jk>f]. O{
b]OQaOTS_OBR ]7]cai,Waa fS_OR d,/R ,Wo`ofwUdZ\OB.R,W^gU/,/. ^
,W8ojOQa ^gUR ]OS_7cOQo,/.O2.OQPd.OQarO?U>RrOQc8>0c^`a Z\.OBRrO xS_7>U]79P679o`^`arRbVK,WZ\OQafUdZ\OB.R,W^gUcdO?S,9Udc
Pd.798w,W8^`o`^jR0S,WaaeVfUdZ\R^j7>UaK-ln-a\e,9UdcR d,/R cOQPO?UcdO?U9R7>UVK,QW7W.<,W8ojO {7W.
cdOQZB^`a^j7>Ue{,/.<^`,W8ojOQad,?WOc^`a Z\.OBRrO4arR,/RrO4a P,WZ\OQaB3 fwU]VI,?W7W. ,W8ojO }qS,/.WOBRZ\7>Udcw^
prU]f]O?UZ\Oc^`,/kW. ,9SS_7cOQo`aqR d,/RP6OB.<S^jRqZ\7>U R^j7>UaB3NO?S_,9Udc_^`a[.OBOQZ\RrOQc^gU_R ]OPd.<^`Z\O
R^gUf]7>fdanZd,9UdZ\O,9UdccdOQZB^`a^j7>U/,/. ^`,W8ojOQan^gUZBogfdcdO KT!^jR-.OQZ\OQ^jWOQa&VI7W.&^jRa-7>f]RPwf]R3 ]O
,9fda a^`,9U^gU]6f]O?UdZ\Oc^`,/kW. ,9Sadd,WZ9RrOB.,9Udc S_7>U]79P679o`^`arRZB,9UwU]7WRcw^j.OQZ\Roj04798aOB.WO-cO\
O?UdojOB0Wv-QWWvYS^uR fd.OQa7WV ,9faa^`,9Uda ^gUdf]O?UdZ\O S,9Udcv8wf]R. ,/R ]OB..OQo`^jOQa7>UR ]O. O\
c^`,/kW. ,9SaK79o`,9Udc,9Udcdd,WZ9RrOB.QvQWWv,9Udc a<fdojRaKE7WV ,S_,/.WOBRa f].WOB0nRr7mk9,9f]kWO
o`^gU]OQ,/.r]fd,Wcd.<,/R^`ZZ\7>Uc^jR^j7>Ud,Wo ,9fdaa ^`,9U^gUdf]O?UdZ\O cO?S_,9Udcv),9UcR fdaPd.OQc^`Z\RqPd.<^`Z\OK 3
c^`,/kW. ,9Sa_ln,WcwarO?U,9Udc9O?UdarO?Uvty/}W}9|3X[,WZ<7WV +E,WarOQc7>UR ]Oea<f].WOB0.OQa fdojRaKv-R dO
R ]OQarObS_7cOQo`atO\udPoj79^jRa S fdojR^j/,/. ^`,/RrOU]7W.S_,WoPd.798] S_7>U]79P679o`^`arRS fdarRcOBRrOB.<S_^gU]O^jRa79PdR^gS,Wo
,W8^`o`^jR0R ]OB7W. 0Rr7Pd.7@^`cOmR ]OqcdOQZB^`a ^j7>UarRr. ,/RrOBkW0 7>fdRPwf]R43
,9UdcO\udP6OQZ\RrOQcf]R^`o`^jR0R d,/Rar79ojWOQaeR ]O^gUdf]O?UdZ\O
c^`,/kW. ,9Se3 bdOST7>Ud79P79o`^`aRZB,9UPd.7]cwfdZ\OV. 7>S}Rr7{}
5-798w8,9Uc]O?U]7?0y/}W}/z]^gU>Rr.7cfdZ\O_S_^u]R f].OQa fUd^jRa,9Uc"d,WaR ]OV79o`oj7@b^gU]kfdR^`o`^jR0#VKfwUdZ\R^j7>U
Y ^jk>f].Om/ xUe^gU]f]O?UdZ\O cw^`,/kW. ,9S VI7W.bR ]OST7>U]79P] Y^jk>f].Oy pU]6f]O?UdZ\O c^`,/kW. ,9S"V7W.X!ud,9S_PojOT/3
79o`^`arRBacdOQZB^`a^j7>UnPd.798ojO?Se3
4 5/ 66976 98;:=<-[9t?>
@BADC EGF ?d H FJI
  94 
|  y/}W}W}W}W}]3b]OmP.798,W8^`o h,/. ^`,W8wojOQaYb^`o`o]8O-cdO?U]7WRrOQcT804ZB,WP^jR,WodojOBRrRrOB. aQv9OW3 kd3`v
^jR07WV),9UfUdVI,?W7W. ,W8ojOqS_,/.WOBRm }i^`a}]3 z9}]3 K L&MON 3OBRam7WV /,/. ^`,W8ojOQa2b^`o`o86OncdO?U]7WRrOQc80
bdOSTOQ,9U7WVR dOZ\7>Udc^jR^j7>U,Wo ,9faa^`,9UPd.798,{ 8679o`cdVK,WZ\OZB,WP^jR,Wo_ojOBRrRrOB. aQ)v P ^jV,Wo`o_,/.Ocw^`aZ\.OBRrO
8^`o`^jR0ecdO?Uda^jR0VKfUZ\R^j7>UK Ni[[VI7W.P. ^`Z\OmK E^`ab, Zd,9UdZ\O /,/. ^`,W8ojOQaBRv Q^jVY,Wo`ot,/.O4Z\7>U>R^gUf]7>fabZ<,9UdZ\O
o`^gU]OQ,/.[VKfwUdZ\R^j7>UT7WV]fd,9U9R^jR0nP.7cfdZ\OQcv9^`Z< /,/. ^`,W8ojOQaBTv S^jVb,Wo`oE,/.OcdOQZB^`a^j7>U/,/. ^`,W8ojOQaBv 7W#. U
^`aq8,WaOQc7>UR ]OS_,/. WOBRZ\7>Uc^jR^j7>Udae,WaqVI79o`oj7?aB ^jVeR dOZ\7>SP7>UdO?U9Ra,/.O,S^uR fd.O7WVncw^`aZ\.OBRrO
{  } |/}W}W} {} QB}W} W_,9Udc Zd,9UdZ\OWvZ\7>U9R^gUf]7>fdanZ<d,9UdZ\OWv ,9UccdOQZB^`a^j7>U/,/. ^
{  !"B}W}W}W#} |/} QB}W}W}  3&OQarR ,W8ojOQaQ3p TV U^`a,arOBR7WV {,/. ^`,W8wojOQaBWv V^`a,mZ\7>=U Xk>f
.OQa<fdojRaK,/.Ok9^jWO?U,Wae,P6OB. Z\O?U>R,/kWO7WV Z\7>U . ,/R^j7>U7WVaP6OQZB2^ XwZarR,/RrOQai7WVR ]79aO/,/. ^`,W8ojOQaB3b]O
a fwSTOB. aYa f]. WOB0WOQc_]7 ^gU>RrO?UdcTRr78wf]0R ]ObPd.7]cwfdZ\R c^`a Z\.OBRrOWvZ\7>U>R^gUfd7>fdaBvw7W.iS_^u]OQcearR,/RrO aP,WZ\O 7WTV U
,9Udc,/.OmST7]cdOQojOQc80R ]O_VI79o`oj7?b^gU]k86OBR,e Ni-aB ^`a)cO?U]7WRrOQce8%0 Y[Zi3
w %} '&)(+*-,6/ .   y .>i,9Udc0
$    l)XP.798,W8^`o`^jR0P67WRrO?U>R^`,Wo`aBv&c^`aZ\.OBRrOPd.798,{
&)(+*-,6y . Q/ . 3 8^`o`^jR0P67WRrO?U>R^`,Wo`aBv_,9UdccdOBRrOB.S_^gUd^`arR^`ZP67WRrO?U>R^`,Wo`a
prU]fdO?UdZ\Oc^`,/kW. ,9S_a#^jR Z\7>U9R^gUf]7>fdacdOQZB^ ,/.OcdO?U]7WRrOQcn80qoj7?-OB.rZB,WarOkW.OBOBeojOBRrRrOB. aBvOW3 kd3`vW\ v
] _v ^3 ln)XfdR^`o`^jR0mP67WRrO?U>R^`,Wo`a),/.OcdO?U]7WRrOQc80 a` 3
a^j7>U {,/. ^`,W8wojOQa,9Udc U]7>U ,9fda a^`,9U Z\7>U>R^gUfd7>fda prUkW. ,WPwd^`ZB,Wo2.OQPd. OQarO?U9R,/R^j7>UaBvecdOQZB^`a^j7>U/,/. ^
Zd,9UdZ\O/,/. ^`,W8ojOQa,/. Oc2^ 1ZQfdojRRr7"ar79ojWOfda^gU]k ,W8ojOQa,/.O . OQPd.OQarO?U>RrOQc 80 .OQZ\R,9U]k>fdo`,/. U]7]cdOQa
ZQf]. .O?U9RS_OBR ]7c79oj7WkW0W3bd^`aP,WP6OB.^gU>Rr.7]cwfdZ\OQa Ib^jR ,a^gUdk9ojO87W.<cdOB.V7W.ec^`a Z\.OBRrO,9Udc,cd7>f
R ]O)Z\7>U>R^gUf]7>fda-cdOQZB^`a^j7>Ul)X^gU]f]O?UdZ\O)cw^`,/kW. ,9S 8ojOi867W. cdOB.-V7W.&Z\7>U>R^gUfd7>fda\vwZd,9UdZ\Oi{,/.<^`,W8ojOQa&,/.O
5ENl)XpNvd, S_7cOQod^`Z,Wo`oj7@ba),9U>0Z\7>S8^ .OQP.OQarO?U>RrOQc8>07@{,Wo`aIb^jR ,a ^gU]k9ojO867W. cdOB.TVI7W.
Ud,/R^j7>U7WVc^`aZ\.OBRrOq,9UdcZ\7>U>R^gUf]7>faTZ<d,9UdZ\O/,/. ^ c^`a Z\.OBRrOWv6,2cd7>fd8wojO867W. cdOB.V7W.iZ\7>U9R^gUf]7>fdaQv6,9Udc,
,W8ojOQa b^jR U]7e. OQarRr. ^`Z\R^j7>Uda47>UR ]OTR0]P6O_7WVEPd.798] Rr. ^`PwojOi867W. cOB.-^jVR ]OZ<d,9UdZ\O/,/. ^`,W8ojO ^`a&cOBRrOB.<S_^gU
,W8^`o`^jR0c^`arRr. ^`8wfdR^j7>Uv ,Wa -OQo`o),Wa_c^`aZ\. OBRrOe,9Udc 3@7W. ^`arR^`Z{v-,9Udcf]R^`o`^jR0VKfwUdZ\R^j7>Uda ,/.O2.OQPd.OQarO?U>RrOQc80
Z\7>U>R^gUfd7>fda[cOQZB^`a^j7>U_/,/. ^`,W8ojOQaQ3[5ENln)X p Na-ZB,9U c^`,9S_7>UdcaB3
,Wo`ar7,WZBZ\7>SST7]c,/RrOZ\7>Uc^jR^j7>Ud,Wo`oj0cdOBRrOB.<S^gUd^`arR^`Z b V d cfehgji CaA 5E7>Uda^`cdOB.[R ]Ob^gU]fdO?UdZ\Obc^`,/kW. ,9S^gU
Zd,9UdZ\O/,/. ^`,W8ojOQaQ3xiU79P6OB. ,/R^j7>UcOB. ^jWOQcVI.7>S Y^jk>f].O)y3YprUR d^`aST7]cdOQoKv>^`a!,cw^`aZ\.OBRrO&/,/. ^`,W8ojO
R ]O2STOBR ]7]c7WV&Z\7>U9W79ogf]R^j7>Ua^gUPd.798,W8w^`o`^jR0R ]O\ ^`Z< ZB,9UTR,/WOb7>U {,WogfdOQa[}i7W.[ /3!b]O
7W.0^`afdaOQcRr7OQo`^gS_^gUd,/RrOn,Z\7>U>R^gUf]7>fdacOQZB^`a^j7>U /,/. ^`,W8ojOQa ,9Udc ,/.OZ\7>U>R^gUf]7>fa4Z<,9UdZ\O/,/. ^
U]7]cdOEcwf]. ^gUdkR dOEa79ogf]R^j7>UTPwd,WaOE^gU ,45&Nil)X p N3 ,W8ojOQaQvb^jR ,cdOBRrOB.S_^gUd^`arR^`ZZd,9UdZ\O/,/. ^`,W8ojO
]O.O?S,W^gUdcdOB.7WVR ^`aP,WP6OB.^`a7W. k9,9Ud^jBOQc ^`aZ\7>Udc^jR^j7>Ud,Wo`oj0cOBRrOB.<S_^gUd^`aR^`Z_k9^jWO?U,e/,Wogf]O
,WaVI79o`oj7?baQ3 OQZ\R^j7>Uyk9^jWOQaU]7WR,/R^j7>U,9UdccOBV 7WVY3 b]O4fdR^`o`^jR0qVfUdZ\R^j7>U k [^`a),_Z\7>U>R^gUfd7>fda
^gUd^jR^j7>UaB3 OQZ\R^j7>UcdOQaZ\.<^`8OQan79P6OB. ,/R^j7>UdafdarOQc VfUdZ\R^j7>U7WV 3
Rr7ar79ojWO5ENln)X p NaB3OQZ\R^j7>UzPd.OQaO?U9RamR ]O @BAl@ m H V o no i Fdprq n I 9do is
5ENl)XpN ar79ogf]R^j7>URr7R dOO\ud,9S_PojOP.798ojO?Sq3 b V e FI i I / HI gI
OQZ\R^j7>U|Pd. 7?]^`cdOQaZ\7>UdZBogfa^j7>UdaB3bd^`aP,WP6OB.m^`a
O\u]Rr. ,WZ\RrOQcV. 7>S*,noj7>UdkWOB.-7W.]^gU]kP,WP6OB.5-7988v xS^uR f]. O 7WVRr.<fwUdZB,/RrOQc4O\udP67>U]O?U>R^`,Wo`a)ln)X)P67/
y/}W}93 RrO?U>R^`,Wod^gU,9U ^gU]f]O?UdZ\OEc^`,/kW. ,9S,WaR dO-VI79o`oj7?^gU]k

68 B. R. Cobb
dc O XU^jR^j7>Uv]d^`Z<2^`a&,TST7]c^2XwZB,/R^j7>Um7WVR ]OcdO XUd^ 7WVT^jRaeZ\7>U>R^gUf]7>fanP,/.O?U>RaB_]7@[OBWOB.Qv4R ]OlnbX
R^j7>UeP.79P679arOQcq8>01fwS  ,9Udc],WogS_OB. 7>Uy/}W}9|3 .OQPd. OQarO?U9R,/R^j7>U7WVR d^`amS_OBR ]7c X.<arRfdarOQa_,c^`a
 ;
JB GI 3 stOB' R U 8O",*S^uOQc Z\.OBRrO ,WPPd.7?u]^gS,/R^j7>URr7mR ]OZ\7>U9R^gUf]7>fdaicdOQZB^`a ^j7>U
c^gS_O?Uda^j7>Ud,Wo4{,/. ^`,W8wojOW3 sOBR P   . . . {v /,/. ^`,W8ojOW3
Q   . . . v,9Ud" c S   . . . 286O
R ]Oc^`aZ\. OBRrOiZd,9UdZ\OWvZ\7>U>R^gUfd7>fda)Z<,9UdZ\OWv,9UdccdO\ @WA;o m q bqp F wh H g H f^ S i I o H Hji
ZB^`a^j7>U/,/. ^`,W8ojOP,/. Ra7WMV Uv. OQaP6OQZ\R^jWOQoj0WvTb^jR dfdPwP79aOr)!s^`a),9U^gUdPwfdR&l)XP7WRrO?U>R^`,WoVI7W. U"
,9!Un#"l$)&%XP67W(RrO?'YU9R3^`,Wx VKfUZ\R^j7>U*) Y Z,+- ./ ^`a
o^jVY7>U]O47WVYR ]O U]O\u]RbR ].OBOZ\7>U
Pj 0 Qt0 S.OQPd.OQaO?U9R^gU]k_,T[NiVI7W.u#H Qk9^jWO?U
^jRabPw,/.O?U9Ra Uwv x 93 pV[OZB,9UqWOB. ^jVI0mR ,/R
c^jR^j7>Uda]79o`caB y z { ) s j V  D " D 
/ 3 P1M0 S32T,9Uc4)ZB,9Ue86Oi). ^jRrRrO?Ue,Wa
VI7W.4,Wo`To V[H Y Z6|~} vt-O_arR,/RrOR d,/R)Rs!^`a,9UlnbX
 =CB cdO?Uda ^jR0V7W.k43Om,Waa fS_OR d,/R ,Wo`o ^gUdPwf]RlnbX
)j VY&)5]Y , 687  , ` O\udP< ?>A`E@ D =GF 9 9 Pd.798w,W8^`o`^jR0P7WRrO?U>R^`,Wo`a^gU,5&Nil)X p N,/.OTU]7W.r
`;: = : { S_,Wo`^jBOQcnPd. ^j7W.)Rr7_R dO4ar79ogf]R^j7>UnPwd,WarOW3
@WA S i> i] cfH I HIB HI p F i I / HI g
VI7W=K.B ,Wo`o V*H Y Ziv]OB.O , ` I-_}  . . . J ,9Udc x L>J JBAFGIGiK GK:  ;
JQn GK cdOQaZ\.<^`8OQanR ]Oo`^gU]OQ,/.
>86`@ OB. aQv 3 I/  . . . JvML/  . . .o  ,/.O[.OQ,Wo>UfS cdOBRrOB.<S^gUd^`arR^`Zb.OQo`,/R^j7>Uda d^`P86OBR-OBO?U2,4arOBR-7WV/,/. ^
,W8ojOQ!a Q x  . . .o 93ExcdOBRrOB.<S_^gU^`arR^`Z P67WRrO?U
y 3 P 9 0 S 2",9UcR ]OB. O^`a, P,/. R^jR^j7>U R^`,Wo5E7988,9Udc2d]O?U]7@0Wvy/}W}9|YV7W. Q^`a cdO XUdOQc_,Wa
YN+  . . .oYPO&7WB V YRQ_^gU>Rr74>0P6OB. ZQfd86OQaYa fdZR d,/R ,9UOQ]fd,/R^j7>U
)^`acdO XU]OQcq,Wa
)j VY!&)TSj VY ^jV VUGH YRS  y 5] }x  w5]!}x  :   Iz]
]OB.O_OQ,WZV)T_S XW/  . . . XYZB,9U86O_b. ^jRrRrO?U ]OB. O]vr / . . . ,/. OZ\7>UarR,9U9RaQ3 b]O
^gUeR ]O VI7W.<S#7WV!OQ]fd,/R^j7>U{iK^K3 OW3EOQ,WZ<Z)TS_^`a OQ]fd,/R^j7>U 5]2 }cdO XUdOQa_,o`^gUdOQ,/.2cdOBRrOB.S_^gU
,9Unln)XP67WRrO?U9R^`,Wot7>U YPS3 ^`arR^`Z2.OQo`,/R^j7>Uda<d^`Pv!]OB. O 5] ,  D 
,   v,9UdcdOB.O ,
  . . .o, A ,9Udc ,/.O
3 P[0 S]3\ 2i,9Udc VI7W.OQ,WZ<) Xdu]OQcT{,WogfdO4_T^  s H .OQ,WDo UfS> 86OB. aB3 b]OTZ\7O 1ZB^jO?U9#R ,A ` 7>U ` H> Q^`a
YP`acbv) dfe g5d[ZB,9Ue86O cdO XUdOQcq,Wab^gUy3 OQ]fd,WoERr7]O?UR ]OecdOBRrOB.S_^gUd^`arR^`ZP67WRrO?U>R^`,Wo)^`a
aP6OQZB2^ XOQce,Wab,_Z\7>Udc^jR^j7>Ud,WoP67WRrO?U9R^`,WotV7W.h ` k9^jWO?U
prUR ]OcdO XUd^jR^j7>U,W867?WOWvhY^`aR ]OUfwS86OB._7WV Qvr ` 3]Ol 9: ;WAXi  5] }x-,/.OS_,W^gU
 GJ:JKi ve,9UcjJ ^`aR ]OUfS86OB.7WVnO\u]P67>U]O?U>R^`,Wo R,W^gU]OQc,WaT,cOQZ\7>S_P679arOQcarOBRT7WV[OQ^jk>>RrOQcOQ]fd,{
R dJBA^j. Fkci ZB^g,WUiarOQOW,Wv!ZR ]PO2^jOQP6Z\7WO[Rr7WO?U9VRR ^`],WO-omlnl )XP67WRrO?U9) R^`df,We oKg3Ypr5dUiR VI]7WO. R^j7>UdaBv b^jR .OQPd. OQarO?U9R^gUdkR dO JQGj ? _i VI7W.T,Wo`o
/  . . .<T32b]OT-OQ^jk>9RabR0P^`ZB,Wo`oj0. OQa fdojR
,Wo`o!_T^  s 6GH Y ` e b Z\7>UdarR^jR fdRrA O?9R ]F_OiJBln _i)XP67WRrO?U>R^`,Wo VI.7>S R ]O S_,/.k9^gU,Wo`^jQ,/R^j7>Un7WV,mc^`aZ\.OBRrO{,/.<^`,W8ojOWv
VI7W.  Pf+Q LS 93prUR ^`aPw,WPOB.?v,Wo`o)ln)XP.798,{ ,Waba<]7?U^gUeX!ud,9S_PojO4T^gUOQZ\R^j7>Un3 y3g/3
8^`o`^jR0,9Ucnf]R^`o`^jR0qP67WRrO?U>R^`,Wo`ab,/.O OQ]fd,WoRr7TBOB. 7^gU O RrOB.<S 5] }x , P67WRrO?U>R^`,Wo
fUda POQZB2^ XwOQc.OBk9^j7>UdaB3prUn5ENl)XpNaBvw,Wo`oP.798,{ ^gU R ]O arO?UarO R d,/R ,9U90 /,/. ^`,W8ojO  `
8^`o`^jR0mc^`arRr.<^`8wf]R^j7>Uda-,9Udcqf]R^`o`^jR02VfUdZ\R^j7>Uda-,/.O,WP] R,/WOQa7>U R dO /,Wogf]O D ` r,A  D 
Pd.7?u]^gS,/RrOQce80l)XP67WRrO?U>R^`,Wo`aB3 , e   , e    , A  G/ ,
bdOcO XUd^jR^j7>UPd. OQarO?U9RrOQcdOB.O,Waa fS_OQa2R d,/R b^jR `_ Pd. D 79`_8 ,W8^`o`^jR0` / D ,9` Ud/ c,9U907WR ]OB.D /,Wogf]> Ob^jR `
cdOQZB^`a^j7>U{,/. ^`,W8wojOQa,/.Oc^`aZ\.OBRrOW3 bd^`aP,WP6OB. Pd.798w,W8^`o`^jR0}]3xa_b^jR lnbXP67WRrO?U>R^`,Wo`aBvS fdo
Pd.OQaO?U9Ra!,STOBR d7cVI7W.cdOBWOQoj79P^gU]k ,cdOQZB^`a ^j7>U.<fdojO R^`PojO_VI. ,/k>S_O?U9Ra7WVER ]OTVI7W.<S^gUIz]ZB,9UcO XU]O_,
VI7W.,Z\7>U>R^gUf]7>fdacdOQZB^`a^j7>Un{,/. ^`,W8wojO,Wa,TVKfwUdZ\R^j7>U cdOBRrOB.<S^gUd^`arR^`Z P67WRrO?U>R^`,WoVI7W.,_aOBR)7WVY{,/. ^`,W8wojOQra U3

Continuous Decision MTE Influence Diagrams 69


b dOTV. ,/k>S_O?U9Ra4ZB,9U86OP,/. ,9S_OBRrOB. ^jBOQc80OQ^jR ]OB.
arOBRam7WV4c^`a Z\.OBRrOZd,9UdZ\O,9UdccdOQZB^`a^j7>U{,/.<^`,W8ojOQaBv
7W.80Pw,/.R^jR^j7>Uda7WV90]P6OB. ZQfd86OQa7WV2Z\7>U>R^gUfd7>fda
Zd,9UdZ\O /,/. ^`,W8ojOQaBv,Waba ]7@Uq86OQoj7?43
b V d cfehgji @BA pUR ]Oq^gU]fdO?UdZ\O2cw^`,/kW. ,9S 7WVbY^jk/
f]. Oyv2^`aZ\7>Udc^jR^j7>Ud,Wo`oj0cOBRrOB.<S_^gUd^`aR^`Zk9^jWO?U
3qbd^`a4.OQo`,/R^j7>Uda ^`P^`a.OQPd.OQaO?U9RrOQc8>0R ]O2cdO\ Y^jk>f].O pU]6f]O?UdZ\O c^`,/kW. ,9S"V7W.X!ud,9S_PojO 3
RrOB.<S^gUd^`arR^`Z_P67WRrO?U>R^`,WoVI. ,/k>STO?U>R    }x[7W.
 z> . W9 /_} .gWQy}x)^jV}   _} . W|{zv
,9Udc8>0#R ]OcdOBRrOB.<S^gUd^`arR^`ZP67WRrO?U>R^`,WonVI. ,/k>S_O?U9R Kdfe g5]Y)ndfe g dfe g d?e g 5d
@  }x7W.  m} 2{}4}x^jV6_} . W|{z  
/3 x)ndfe g 5GB L dfe g5  }x W  |
  
70 B. R. Cobb
Figure 4: The MTE potential fragments which constitute the MTE potential.
Figure 5: The utility potential fragments which constitute the utility potential.
Discriminative Scoring of Bayesian Network Classifiers:
a Comparative Study
Ad Feelders and Jevgenijs Ivanovs
Department of Information and Computing Science
Universiteit Utrecht, The Netherlands

Abstract
We consider the problem of scoring Bayesian Network Classifiers (BNCs) on the basis of the
conditional loglikelihood (CLL). Currently, optimization is usually performed in BN parameter
space, but for perfect graphs (such as Naive Bayes, TANs and FANs) a mapping to an equivalent
Logistic Regression (LR) model is possible, and optimization can be performed in LR parameter
space. We perform an empirical comparison of the efficiency of scoring in BN parameter space,
and in LR parameter space using two different mappings. For each parameterization, we study two
popular optimization methods: conjugate gradient, and BFGS. Efficiency of scoring is compared
on simulated data and data sets from the UCI Machine Learning repository.

1 Introduction

Discriminative learning of Bayesian Network Classifiers (BNCs) has received considerable attention recently (Greiner et al., 2005; Pernkopf and Bilmes, 2005; Roos et al., 2005; Santafe et al., 2005). In discriminative learning, one chooses the parameter values that maximize the conditional likelihood of the class label given the attributes, rather than the joint likelihood of the class label and the attributes. It is well known that conditional loglikelihood (CLL) optimization, although arguably more appropriate in a classification setting, is computationally more expensive because there is no closed-form solution for the ML estimates and therefore numerical optimization techniques have to be applied. Since in structure learning of BNCs many models have to be scored, the efficiency of scoring a single model is of considerable interest.

For BNCs with perfect independence graphs (such as Naive Bayes, TANs, FANs) a mapping to an equivalent Logistic Regression (LR) model is possible, and optimization can be performed in LR parameter space. We consider two such mappings: one proposed in (Roos et al., 2005), and a different mapping that, although relatively straightforward, has to our knowledge not been proposed before in discriminative learning of BNCs. We conjecture that scoring models in LR space using our proposed mapping is more efficient than scoring in BN space, because the logistic regression model has fewer parameters than its BNC counterpart and because the LR model is known to have a strictly concave loglikelihood function. To test this hypothesis we perform experiments to compare the efficiency of model fitting with both LR parameterizations, and the more commonly used BN parameterization.

This paper is structured as follows. In section 2 we introduce the required notation and basic concepts. Next, in section 3 we describe two mappings from BNCs with perfect graphs to equivalent LR models. In section 4 we give a short description of the optimization methods used in the experiments, and motivate their choice. Subsequently, we compare the efficiency of discriminative learning in LR parameter space and BN parameter space, using the optimization methods discussed. Finally, we present the conclusions in section 7.

2 Preliminaries

2.1 Bayesian Networks

We use uppercase letters for random variables and lowercase for their values. Vectors are written in boldface. A Bayesian network (BN) (X, G = (V, E), θ) consists of a discrete random vector X = (X_0, ..., X_n), a directed acyclic graph (DAG) G representing the directed independence graph of X,
and a set of conditional probabilities (parameters) θ. V = {0, 1, ..., n} is the set of nodes of G, and E the set of directed edges. Node i in G corresponds to random variable X_i. With pa(i) (ch(i)) we denote the set of parents (children) of node i in G. We write X_S, S ⊆ {0, ..., n}, to denote the projection of random vector X on components with index in S. The parameter set θ consists of the conditional probabilities

    \theta_{x_i \mid x_{pa(i)}} = P(X_i = x_i \mid X_{pa(i)} = x_{pa(i)}), \qquad 0 \le i \le n.

We use 𝒳_i = {0, ..., d_i − 1} to denote the set of possible values of X_i, 0 ≤ i ≤ n. The set of possible values of random vector X_S is denoted 𝒳_S = ∏_{i∈S} 𝒳_i. We also use 𝒳*_i = 𝒳_i \ {0}, and likewise 𝒳*_S = ∏_{i∈S} 𝒳*_i.

In a BN classifier there is one distinguished variable called the class variable; the remaining variables are called attributes. We use X_0 to denote the class variable; X_1, ..., X_n are the attributes. To denote the attributes, we also write X_A, where A = {1, ..., n}. We define π(i) = pa(i)\{0}, the non-class parents of node i, and π*(i) = {i} ∪ π(i). Finally, we recall the definition of a perfect graph: a directed graph in which all nodes that have a common child are connected is called perfect.

2.2 Logistic Regression

The basic assumption of logistic regression (Anderson, 1982) for binary class variable X_0 ∈ {0, 1} is

    \ln \frac{P(X_0 = 1 \mid \mathbf{Z})}{P(X_0 = 0 \mid \mathbf{Z})} = w_0 + \sum_{i=1}^{k} w_i Z_i, \qquad (1)

where the predictors Z_i (i = 1, ..., k) can be single attributes from X_A, but also functions of one or more attributes from X_A. In words: the log posterior odds are linear in the parameters, not necessarily in the basic attributes.

Generalization to a non-binary class variable X_0 ∈ 𝒳_0 gives

    \ln \frac{P(X_0 = x_0 \mid \mathbf{Z})}{P(X_0 = 0 \mid \mathbf{Z})} = w_0^{(x_0)} + \sum_{i=1}^{k} w_i^{(x_0)} Z_i, \qquad (2)

for all x_0 ∈ 𝒳*_0. This model is often referred to as the multinomial logit model or polychotomous logistic regression model.

It is well known that the loglikelihood function of the logistic regression model is concave and has a unique maximum (provided the data matrix Z is of full column rank) attained for finite w except in two special circumstances described in (Anderson, 1982).

2.3 Log-linear models

Let G = (V, E) be the (undirected) independence graph of random vector X, that is, E is the set of edges (i, j) such that whenever (i, j) is not in E, the variables X_i and X_j are independent conditionally on the rest. The log-linear expansion of a graphical log-linear model is

    \ln P(\mathbf{x}) = \sum_{C \subseteq V} u_C(x_C)

where the sum is taken over all complete subgraphs C of G, and all x_C ∈ 𝒳*_C, that is, u_C(x_C) = 0 for i ∈ C and x_i = 0 (to avoid overparameterization). The u-term u_∅(x) is just a constant. It is well known that for BNs with a perfect directed independence graph, an equivalent graphical log-linear model is obtained by simply dropping the direction of the edges.

3 Mapping to Logistic Regression

In this section we discuss two different mappings from BNCs with a perfect independence graph to equivalent logistic regression models. Equivalent here means that, assuming P(X) > 0, the BNC and corresponding LR model represent the same set of conditional distributions of the class variable.

3.1 Mapping of Roos et al.

Roos et al. (Roos et al., 2005) define a mapping from BNCs whose canonical form is a perfect graph, to equivalent LR models. The canonical form is obtained by (1) taking the Markov blanket of X_0, and (2) marrying any unmarried parents of X_0. This operation clearly does not change the conditional distribution of X_0. They show that if this canonical form is a perfect graph, then the BNC can be mapped to an equivalent LR model. Their mapping creates an LR model with predictors (and corresponding parameters) as follows
76 A. Feelders and J. Ivanovs


1. Z_{x_pa(0)} = I(X_pa(0) = x_pa(0)) with parameter w^{(x_0)}_{x_pa(0)}, for x_pa(0) ∈ 𝒳_pa(0).

2. Z_{x_π*(i)} = I(X_π*(i) = x_π*(i)) with parameter w^{(x_0)}_{x_π*(i)}, for i ∈ ch(0) and x_π*(i) ∈ 𝒳_π*(i).

For a given BNC with parameter value θ an equivalent LR model is obtained by putting

    w^{(x_0)}_{x_{pa(0)}} = \ln \theta_{x_0 \mid x_{pa(0)}}, \qquad w^{(x_0)}_{x_{\pi^*(i)}} = \ln \theta_{x_i \mid x_{pa(i)}}

Figure 1: Example BNC (left); undirected graph with same conditional distribution of class (right).
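The two predictor blocks of this mapping are easy to materialize. The sketch below is illustrative only (the function name, the dictionary-based encoding of configurations, and the use of Python/NumPy are my own; the paper's experiments are run in R): per block it emits one indicator for every configuration, so exactly one indicator fires per block for any observation.

```python
import itertools

import numpy as np

def roos_predictors(x, pa0, families, domains):
    """Indicator predictors of the Roos et al. mapping: one block of
    indicators I(X_pa(0) = x_pa(0)) for the class parents, and one block
    I(X_pi*(i) = x_pi*(i)) per child family of the class variable.

    x:        dict, attribute -> observed value
    pa0:      tuple of class parents
    families: list of tuples pi*(i), one per child i of the class
    domains:  dict, attribute -> number of values d_a
    """
    blocks = [pa0] + list(families)
    z = []
    for block in blocks:
        observed = tuple(x[a] for a in block)
        for config in itertools.product(*(range(domains[a]) for a in block)):
            z.append(1.0 if observed == config else 0.0)
    return np.array(z)

# the 14-parameter example of section 3.3: pa(0) = {1, 2},
# child families {1, 3}, {3, 4} and {5}, all attributes binary
pa0 = ("X1", "X2")
families = [("X1", "X3"), ("X3", "X4"), ("X5",)]
domains = {a: 2 for a in ("X1", "X2", "X3", "X4", "X5")}
x = {"X1": 1, "X2": 0, "X3": 1, "X4": 1, "X5": 0}
z = roos_predictors(x, pa0, families, domains)  # 14 indicators, 4 active
```

Since exactly one indicator in each block equals one, every row of the resulting data matrix Z contains one 1 per block, which keeps Z sparse regardless of the attribute domain sizes.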
3.2 Proposed mapping

Like in the previous section, we start from the canonical graph which is assumed to be perfect. Hence, we obtain an equivalent graphical log-linear model by simply dropping the direction of the edges. We then have

    \ln \frac{P(X_0 = x_0 \mid X_A)}{P(X_0 = 0 \mid X_A)}
      = \ln \frac{P(X_0 = x_0, X_A)/P(X_A)}{P(X_0 = 0, X_A)/P(X_A)}
      = \ln P(X_0 = x_0, X_A) - \ln P(X_0 = 0, X_A),

for x_0 ∈ 𝒳*_0. Filling in the log-linear expansion for ln P(X_0 = x_0, X_A) and ln P(X_0 = 0, X_A), we see immediately that u-terms that do not contain X_0 cancel, and furthermore that u-terms with X_0 = 0 are constrained to be zero by our identification restrictions. Hence we get

    \ln P(X_0 = x_0, x_A) - \ln P(X_0 = 0, x_A)
      = u_{\{0\}}(X_0 = x_0) + \sum_{C} u_C(X_0 = x_0, x_C)
      = w^{(x_0)} + \sum_{C} w^{(x_0)}_{x_C}

where C is any complete subgraph of G not containing X_0. Hence, to map to an LR model, we create variables

    I(X_C = x_C), \qquad x_C \in \mathcal{X}^*_C

to obtain the LR specification

    \ln \frac{P(X_0 = x_0 \mid \mathbf{Z})}{P(X_0 = 0 \mid \mathbf{Z})} = w^{(x_0)} + \sum_{C} w^{(x_0)}_{x_C}\, I(X_C = x_C)

This LR specification models the same set of conditional distributions of X_0 as the corresponding undirected graphical model, see for example (Sutton and McCallum, 2006).

3.3 Example

Consider the BNC depicted in figure 1. Assuming all variables are binary, and using this fact to simplify notation, this maps to the equivalent LR model (with 9 parameters)

    \ln \frac{P(X_0 = 1 \mid \mathbf{Z})}{P(X_0 = 0 \mid \mathbf{Z})}
      = w + w_{\{1\}} X_1 + w_{\{2\}} X_2 + w_{\{3\}} X_3 + w_{\{4\}} X_4 + w_{\{5\}} X_5
        + w_{\{1,2\}} X_1 X_2 + w_{\{1,3\}} X_1 X_3 + w_{\{3,4\}} X_3 X_4

In the parameterization of (Roos et al., 2005), we map to the equivalent LR Roos model (with 14 parameters)

    \ln \frac{P(X_0 = 1 \mid \mathbf{Z})}{P(X_0 = 0 \mid \mathbf{Z})}
      = w_{\{1,2\}=(0,0)} Z_{\{1,2\}=(0,0)} + w_{\{1,2\}=(0,1)} Z_{\{1,2\}=(0,1)}
        + w_{\{1,2\}=(1,0)} Z_{\{1,2\}=(1,0)} + w_{\{1,2\}=(1,1)} Z_{\{1,2\}=(1,1)}
        + w_{\{1,3\}=(0,0)} Z_{\{1,3\}=(0,0)} + \ldots + w_{\{3,4\}=(1,1)} Z_{\{3,4\}=(1,1)}
        + w_{\{5\}=(0)} Z_{\{5\}=(0)} + w_{\{5\}=(1)} Z_{\{5\}=(1)}

In both cases we set w^{(0)} = 0.

4 Optimization methods

In the experiments, we use two optimization methods: conjugate gradient (CG) and variable metric (BFGS) algorithms (Nash, 1990). Conjugate gradient is commonly used for discriminative learning of BN parameters, see for example (Greiner et al., 2005; Pernkopf and Bilmes, 2005). In a study of Minka (Minka, 2001), it was shown that CG and BFGS are efficient optimization methods for the logistic regression task.

BFGS is a Hessian-based algorithm, which updates an approximate inverse Hessian matrix of size


r2 at each step, where r is the number of param- In our study we compare the rates of convergence
eters. CG on the other hand works with a vector for 9 optimization techniques: 3 optimization al-
of size r. This obviously makes each iteration less gorithms (CG, CGB, BFGS) used in 3 parameter
costly, however, in general CG exhibits slower con- spaces (LR, LR Roos, BN). We compare conver-
vergence in terms of iterations. gence in terms of (1) iterations of the update di-
Both algorithms at step k compute an update di- rection calculation, and (2) floating point operations
rection u(k) , followed by a line search. This is (flops). The number of flops provides a fair compar-
a one-dimensional search, which looks for a step ison of the performance of the different methods,
size maximizing f () = CLL(w(k) + u(k) ), but the number of iterations gives us additional in-
where w(k) is a vector of parameter values at step sight into the behavior of the methods.
k. It is argued in (Nash, 1990) that neither algo- Let costCLL and costgrad denote the costs in
rithm benefits from too large a step being taken. flops of CLL and gradient evaluations respectively,
Even more, for BFGS it is not desirable that the in- and let countCLL be the number of times CLL
crease in the function value is different in magni- is evaluated in a particular iteration. We estimate
tude from the one determined by the gradient value, the cost of one particular iteration of BFGS as
(w(k+1) w(k) )T g(k) . Therefore, simple accept- 12r2 + 8r + (4r + costCLL )countCLL + costgrad ;
able point search is suggested for both methods. the cost of one CG and CGB iteration is 10r +(4r +
In case of CG an additional step is made. Once costCLL )countCLL +costgrad . These estimates are
an acceptable point has been found, we have suf- obtained by inspecting the source code in (Nash,
ficient information to fit a parabola to the projection of the function on the search direction. The parabola requires three pieces of information: the function value at the end of the last iteration (or the initial point), the projection of the gradient at this point onto the search direction, and the new function value at the acceptable point. If the CLL value at the maximum of the parabola is larger than the CLL value at the acceptable point, then the former becomes the starting point for the next iteration. Another approach to CG line search is Brent's line search method (Press et al., 1992). It iteratively fits a parabola to 3 points. The main difference with the previous approach is that we do find an optimum in the given update direction. This, however, requires more function evaluations at each iteration.

In our experiments we use the implementation of the CG and BFGS methods of the optim function of R (Venables and Ripley, 2002), which is based on the source code from (Nash, 1990). In addition we implemented CG with Brent's line search (with the relative precision for Brent's method set to 0.0002). We refer to this algorithm as CGB. The conjugate gradient method may use different heuristic formulas for computing the update direction. Our preliminary study showed that the difference in performance between these heuristics is small. We use the Polak-Ribière formula, suggested in (Greiner et al., 2005) for optimization in the BN parameter space.

5 Parameter learning

5.1 Logistic regression

Equation 2 can be rewritten as follows

\[
P(X_0 = x_0 \mid Z) = \frac{e^{w^{(x_0)T} Z}}{\sum_{x_0''=0}^{d_0-1} e^{w^{(x_0'')T} Z}},
\]

where we put Z_0 = 1, which corresponds to the intercept, and fix w^{(0)} = 0. w^T denotes the transpose of w. The conditional loglikelihood of the parameters given the data is

\[
CLL(w) = \log \prod_{i=1}^{N} P(x_0^{(i)} \mid z^{(i)})
       = \sum_{i=1}^{N} \Big( w^{(x_0^{(i)})T} z^{(i)} - \log \sum_{x_0'=0}^{d_0-1} e^{w^{(x_0')T} z^{(i)}} \Big),
\]

where N is the number of observations in the data set. The gradient of the CLL is given by

\[
\frac{\partial CLL(w)}{\partial w_k^{(x_0)}}
  = \sum_{i=1}^{N} \Big( 1_{\{x_0^{(i)} = x_0\}}\, z_k^{(i)}
  - \frac{e^{w^{(x_0)T} z^{(i)}}\, z_k^{(i)}}{\sum_{x_0''=0}^{d_0-1} e^{w^{(x_0'')T} z^{(i)}}} \Big),
\]

where x_0 ∈ X_0. We note here that the data matrix Z is a sparse matrix of indicators with 1s in

78 A. Feelders and J. Ivanovs


non-zero positions, thus the CLL and gradient functions can be implemented very efficiently. We estimate the costs (in flops) according to the above formulas: cost_CLL = |Z|(d_0 − 1) + N(d_0 + 2) and cost_grad = |Z| d_0 + 2N d_0, where |Z| denotes the number of 1s in the data matrix. Here we used the fact that the multiplication of two vectors w^{(x_0)T} z^{(i)} requires |z^{(i)}| − 1 flops. In case of LR Roos, the matrix Z always contains exactly 1 + n − |pa(0)| 1s irrespective of graph complexity and the dimension of the attributes. For our mapping |Z| depends on the sample. In the experiments we used the following heuristic to reduce |Z|: for each attribute X_i we code the most frequent value as 0. We note that |Z| is smaller in case of our mapping compared to the LR Roos mapping when the structure is not very complex (with regard to the number of parents and the domain size of the attributes) and becomes bigger for more complex structures.

5.2 Bayesian Network Classifiers

Here we follow the approach taken in (Greiner et al., 2005; Pernkopf and Bilmes, 2005). We write

\[
\theta^{j}_{i|k} = P(x_j = i \mid x_{pa(j)} = k).
\]

We have the constraints θ^j_{i|k} ≥ 0 and Σ_{i=0}^{d_j−1} θ^j_{i|k} = 1. We reparameterize to incorporate the constraints on θ^j_{i|k} and use different parameters β^j_{i|k} as follows

\[
\theta^{j}_{i|k} = \frac{\exp \beta^{j}_{i|k}}{\sum_{l=0}^{d_j-1} \exp \beta^{j}_{l|k}}.
\]

The CLL is given by

\[
CLL(\beta) = \sum_{t=1}^{N} \Big( \log P(x^{(t)}) - \log \sum_{x_0^{(t)}=0}^{d_0-1} P(x^{(t)}) \Big),
\]

where the second P(x^{(t)}) is summed over the values x_0^{(t)} of the class variable. Further expansion may be obtained using the factorization of P(x) and plugging in the expressions for θ^j_{i|k}. It is easy to see that

\[
\frac{\partial \theta^{j}_{i'|k}}{\partial \beta^{j}_{i|k}}
  = \theta^{j}_{i'|k}\big( 1_{\{i=i'\}} - \theta^{j}_{i|k} \big),
\]

thus

\[
\frac{\partial P(x)}{\partial \beta^{j}_{i|k}}
  = 1_{\{x_{pa(j)}=k\}}\, P(x)\big( 1_{\{x_j=i\}} - \theta^{j}_{i|k} \big).
\]

Simple calculations result in

\[
\frac{\partial CLL(\beta)}{\partial \beta^{j}_{i|k}}
 = \sum_{t=1}^{N} \Bigg( 1_{\{x_j^{(t)}=i,\, x_{pa(j)}^{(t)}=k\}}
   - 1_{\{x_{pa(j)}^{(t)}=k\}}\, \theta^{j}_{i|k}
   - \frac{\sum_{x_0^{(t)}=0}^{d_0-1} \Big[ P(x^{(t)})\big( 1_{\{x_j^{(t)}=i,\, x_{pa(j)}^{(t)}=k\}} - 1_{\{x_{pa(j)}^{(t)}=k\}}\, \theta^{j}_{i|k} \big) \Big]}{\sum_{x_0^{(t)}=0}^{d_0-1} P(x^{(t)})} \Bigg).
\]

Note that for each t and j > 1 only d_0 d_j gradient values are to be considered. In order to obtain θ from β we need 3|β| flops, where |β| denotes the number of components of β. From the formulas above we estimate cost_CLL = 3|β| + N(n d_0 + 2 d_0 + 2) and cost_grad = 3|β| + N(2 n d_0 + n + (2 d_0 + 1)(3 + d_1 + ... + d_n)).

Finally, we point out that the cost of the gradient is very close to the cost of the CLL in case of LR and is by a factor 2 + 2d larger in case of BN. This suggests that CGB might be better than CG for BN, but it is very unlikely that it will be better also for the LR and LR Roos parameter spaces. Note that cost_CLL(BN) is very close to cost_CLL(LR Roos), which strongly supports the fairness of our cost estimates.

5.3 Convergence issues

It is well known that LR has a concave loglikelihood function. Since BNCs whose canonical form is a perfect graph are equivalent to the LR models obtained by either mapping, it follows from the continuity of the mapping from β to w that the CLL function in the standard BN parameterization also has only global optima (Roos et al., 2005). This implies that all our algorithms should converge to the maximal CLL value.

It is important to pick good initial values for the parameters. The common approach to initialize BN parameters is to use the (generative) ML estimates. It is crucial for all three parameter spaces to avoid zero probabilities; therefore we use Laplace's rule and add one to each frequency. After the initial values of θ are obtained, we can derive starting values for w as well. We simply apply the mappings to obtain the initial values for the LR Roos and LR parameters. This procedure guarantees that all algorithms have the same initial CLL value.
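The CLL and gradient of Section 5.1 translate directly into matrix operations. The following NumPy sketch is our own illustration, not the authors' implementation; the array names and shapes are assumptions, and row 0 of the weight matrix is pinned to zero to mirror the convention w^(0) = 0:

```python
import numpy as np

def cll_and_grad(W, Z, y):
    """Conditional log-likelihood and gradient for softmax logistic
    regression (Section 5.1).  Illustrative sketch only.

    W : (d0, p) weight matrix, one row per class; row 0 fixed to zero.
    Z : (N, p) indicator data matrix whose first column is the intercept.
    y : (N,) observed class labels in {0, ..., d0 - 1}.
    """
    A = Z @ W.T                         # A[i, c] = w^(c)^T z^(i)
    A -= A.max(axis=1, keepdims=True)   # stabilise the exponentials
    logZ = np.log(np.exp(A).sum(axis=1))
    N = Z.shape[0]
    cll = float((A[np.arange(N), y] - logZ).sum())
    # gradient: sum_i (1{y_i = c} - P(c | z^(i))) z_k^(i)
    P = np.exp(A - logZ[:, None])       # posterior class probabilities
    Y = np.zeros_like(P)
    Y[np.arange(N), y] = 1.0
    grad = (Y - P).T @ Z                # (d0, p)
    grad[0] = 0.0                       # w^(0) stays fixed at zero
    return cll, grad
```

A sparse representation of Z (mostly zeros, as noted above) would reduce both costs further; the dense version is kept here for readability.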

Discriminative Scoring of Bayesian Network Classifiers: a Comparative Study 79
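The reparameterization of Section 5.2 and the identity ∂θ^j_{i'|k}/∂β^j_{i|k} = θ^j_{i'|k}(1_{{i=i'}} − θ^j_{i|k}) are easy to check numerically for one conditional distribution (one fixed j and k) at a time. A hedged NumPy sketch, with our own naming:

```python
import numpy as np

def theta_from_beta(beta):
    """Softmax map from unconstrained beta to a probability vector theta,
    as in the reparameterization of Section 5.2 (one fixed j and k)."""
    e = np.exp(beta - beta.max())   # shift for numerical stability
    return e / e.sum()

def dtheta_dbeta(beta):
    """Jacobian J[i_prime, i] = d theta_{i'} / d beta_i
       = theta_{i'} * (1{i = i'} - theta_i)."""
    t = theta_from_beta(beta)
    return np.diag(t) - np.outer(t, t)
```

Comparing `dtheta_dbeta` against central finite differences confirms the closed-form derivative used in the gradient derivation above.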


6 Experiments

For the experiments we used artificially generated data sets and data sets from the UCI Machine Learning Repository (Blake and Merz, 1998). Artificial data were generated from Bayesian networks, where the parameter values were obtained by sampling each parameter from a Dirichlet distribution with α = 1. We generated data from both simple and complex structures in order to evaluate performance under different scenarios. We generated perfect graphs in the following way: for each attribute we select i parents, namely the class node and i − 1 previous nodes (whenever possible) according to the indexing, where i ∈ {1, 2, 3}.

Table 1 lists the UCI data sets used in the experiments, and their properties. To discretize numeric variables, we used the algorithm described in (Fayyad and Irani, 1993). The "?" values in the votes data set were treated as a separate value, not as a missing value.

Table 1: Data sets and their properties

              #Samples   #Attr.   #Class   Max attr. dim.
  Glass            214        9        6                4
  Pima             768        8        2                4
  Satimage        4435       36        7                2
  Tic-tac-toe      958        9        2                3
  Voting           435       16        2                3

The fitted BNC structures for the UCI data were obtained by (1) connecting the class to all attributes, and (2) using a greedy generative structure learning algorithm to add successive arcs. Step (2) was not applied for the Satimage data set, for which we fitted the Naive Bayes model. All structures happened to be perfect graphs, thus there was no need for adjustment. In figure 2 we show the structure fitted to the Pima Indians data set. The other structures were of similar complexity.

[Figure omitted: a directed graph over nodes 0-8.]
Figure 2: Structure fitted to Pima Indians data. The structure was constructed using a hill climber on the (penalized) generative likelihood.

Figure 3 depicts convergence curves for 9 optimization algorithms on the Pima Indians data.

[Figure omitted: CLL (about -337.0 to -336.2) versus flops (0 to 1e7) for LR, LR_Roos and BN, each with CG, CG_full.search and BFGS.]
Figure 3: Optimization curves on the Pima Indians data.

Table 2 depicts statistics for different UCI and artificial data sets. For each data set i we have the starting CLL value CLL^i_ML and the maximal (over all algorithms) CLL value CLL^i_max. We define a threshold t_i = CLL^i_max − 5% (CLL^i_max − CLL^i_ML). For every algorithm we computed the number of iterations (flops) needed to reach the threshold. For each data set these numbers are scaled by dividing by the smallest value. The bound of 5% was selected heuristically. This bound is big enough that all algorithms were able to reach it before 200 iterations and before satisfying the stopping criterion. The bound is small enough that the algorithms in general make quite a few iterations before achieving it. There are some data sets, however, on which all algorithms converge fast to a very high CLL value, and a clear distinction in performance is visible only using a very small bound. Pima Indians is an example (the 5% threshold is set to -337.037). We still see that table 2 contains a quite fair comparison, except that the BFGS methods are underestimated for smaller bounds.

It is very interesting to notice that convergence



Data Set LR LR Roos BN
CG CGB BFGS CG CGB BFGS CG CGB BFGS
Glass 2.1 1.9 1.0 3.0 2.0 1.1 2.4 1.9 1.0
Pima 1.3 1.5 1.8 1.5 1.2 1.8 1.0 1.0 1.8
Satimage 8.2 2.3 1.0 3.3 2.0 1.1 3.0 1.0 1.3
Tic-tac-toe 3.9 2.2 1.5 2.1 1.1 1.2 2.2 1.0 1.3
Voting 3.2 1.8 1.2 1.9 1.3 1.0 1.5 1.0 1.1
1.5.2-3.100 2.8 1.4 1.2 2.3 1.3 1.1 2.0 1.0 1.2
2.5.2-3.100 2.6 2.1 1.4 1.8 1.6 1.3 1.4 1.0 1.0
3.5.2-3.100 2.7 1.8 1.3 2.2 1.0 1.2 1.3 1.2 1.2
1.5.2-3.1000 2.2 2.2 2.2 1.5 1.5 2.5 1.0 1.0 1.5
2.5.2-3.1000 2.2 2.0 1.8 1.2 1.2 1.8 1.0 1.0 2.5
3.5.2-3.1000 3.6 2.2 1.0 1.8 1.6 1.0 1.3 1.3 1.0
1.30.2-3.1000 2.0 1.2 1.0 1.5 1.2 1.5 1.2 1.0 1.2
2.30.2-3.1000 2.0 1.8 2.2 1.6 1.2 1.4 1.4 1.0 1.2
3.30.2-3.1000 2.2 1.2 2.0 1.6 1.0 1.6 1.4 1.0 1.2
1.5.5.1000 3.1 3.2 1.8 2.3 2.3 1.5 1.1 1.1 1.0
2.5.5.1000 6.7 6.9 2.3 3.1 3.0 1.4 1.5 1.5 1.0
3.5.5.1000 4.8 4.2 1.8 2.3 2.2 1.1 1.7 1.7 1.0
mean 3.3 2.4 1.6 2.1 1.6 1.4 1.6 1.2 1.3
Data Set LR LR Roos BN
CG CGB BFGS CG CGB BFGS CG CGB BFGS
Glass 1.0 2.4 3.3 2.6 4.1 8.3 5.9 8.1 12.6
Pima 1.0 2.4 1.6 1.8 2.5 3.1 4.0 6.1 11.0
Satimage 4.6 3.0 1.0 4.6 7.1 3.3 11.5 6.1 7.6
Tic-tac-toe 2.0 3.2 1.0 1.3 1.7 1.3 6.1 4.0 6.0
Voting 1.2 1.9 1.9 1.0 1.8 3.5 3.6 4.2 15.8
1.5.2-3.100 1.0 1.7 1.0 1.2 2.1 1.8 2.9 2.9 5.1
2.5.2-3.100 1.2 2.8 3.5 1.0 2.6 7.4 2.1 3.2 13.2
3.5.2-3.100 1.0 2.4 2.3 1.1 1.4 5.3 2.1 4.4 21.2
1.5.2-3.1000 1.0 2.0 1.7 1.0 1.9 3.9 1.5 2.4 5.5
2.5.2-3.1000 1.5 3.2 2.2 1.0 2.0 4.6 2.3 3.6 25.0
3.5.2-3.1000 1.7 2.6 1.4 1.0 2.2 3.5 2.2 4.3 13.5
1.30.2-3.1000 1.9 3.7 1.0 3.4 8.4 4.3 12.1 17.4 16.6
2.30.2-3.1000 1.0 3.1 2.2 1.2 3.2 3.3 4.9 7.4 11.8
3.30.2-3.1000 1.2 2.3 5.8 1.0 2.4 12.5 4.1 7.6 36.6
1.5.5.1000 1.1 2.8 1.5 1.0 2.4 1.9 1.6 2.4 2.8
2.5.5.1000 2.3 5.6 11.1 1.0 2.2 10.1 1.7 2.4 11.6
3.5.5.1000 2.6 5.4 101.5 1.0 2.3 98.1 2.4 3.8 137.5
mean 1.6 3.0 8.5 1.5 3.0 10.4 4.2 5.3 20.8

Table 2: Convergence speed in terms of iterations (top) and flops (bottom). Artificial data sets are named
in the following way: [number of parents of each attribute].[number of attributes].[domain size of each
attribute].[sample size]. Domain size denoted as 2-3 is randomly chosen between 2 and 3 with equal proba-
bilities.



with regard to iterations is faster in the BN and LR Roos parameter spaces. It seems that overparameterization is advantageous in this respect. This might be explained by the fact that there is only a single global optimum in the LR case, whereas there are many global optima in case of BN and LR Roos. Thus, in the latter case one can quickly converge to the closest optimum. Overparameterization, however, is also an additional burden, as witnessed by the flops count; this is especially true for the BFGS method.

We observe that BFGS is generally winning, with CGB being close, in terms of iterations. Considering flops, CGB is definitely losing to CG in all parameterizations. CG seems to be the best technique, though it might be very advantageous to use BFGS for simple (with respect to the number of parents and the domain size of the attributes) structures, in combination with our LR mapping.

We have both theoretical and practical evidence that our LR parameterization is the best for relatively simple structures. So the general conclusion is to use LR + BFGS, LR + CG and LR Roos + CG in order of growing structure complexity. The size of the domain and the number of parents mainly influence our choice. On the basis of flop counts, the BN parameterization is never preferred in our experiments. Finally, we note that our LR parameterization was the best on 4 out of 5 UCI data sets.

7 Conclusion

We have studied the efficiency of discriminative scoring of BNCs using alternative parameterizations of the conditional distribution of the class variable. In case the canonical form of the BNC is a perfect graph, there is a choice between at least three parameterizations. We found out that it is wise to exploit perfectness by optimizing in the LR or LR Roos spaces. Based on the experiments we have performed, we would suggest to use LR + BFGS, LR + CG and LR Roos + CG in order of growing structure complexity. If only one method is to be selected, we suggest LR Roos + CG. It works pretty well on any type of data set, plus the mapping is very straightforward, which makes the initialization step easy to implement.

References

J. A. Anderson. 1982. Logistic discrimination. In P. R. Krishnaiah and L. N. Kanal, editors, Classification, Pattern Recognition and Reduction of Dimensionality, volume 2 of Handbook of Statistics, pages 169-191. North-Holland.

C.L. Blake and C.J. Merz. 1998. UCI repository of machine learning databases [http://www.ics.uci.edu/mlearn/mlrepository.html].

U. Fayyad and K. Irani. 1993. Multi-interval discretization of continuous valued attributes for classification learning. In Proceedings of IJCAI-93 (volume 2), pages 1022-1027. Morgan Kaufmann.

R. Greiner, S. Xiaoyuan, B. Shen, and W. Zhou. 2005. Structural extension to logistic regression: Discriminative parameter learning of belief net classifiers. Machine Learning, 59:297-322.

T. Minka. 2001. Algorithms for maximum-likelihood logistic regression. Technical Report Statistics 758, Carnegie Mellon University.

J. C. Nash. 1990. Compact Numerical Methods for Computers: Linear Algebra and Function Minimisation (2nd ed.). Hilger.

F. Pernkopf and J. Bilmes. 2005. Discriminative versus generative parameter and structure learning of Bayesian network classifiers. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 657-664, New York. ACM Press.

W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. 1992. Numerical Recipes in C: the art of scientific computing. Cambridge.

T. Roos, H. Wettig, P. Grünwald, P. Myllymäki, and H. Tirri. 2005. On discriminative Bayesian network classifiers and logistic regression. Machine Learning, 59:267-296.

G. Santafé, J. A. Lozano, and P. Larrañaga. 2005. Discriminative learning of Bayesian network classifiers via the TM algorithm. In L. Godo, editor, ECSQARU 2005, volume 3571 of Lecture Notes in Computer Science, pages 148-160. Springer.

C. Sutton and A. McCallum. 2006. An introduction to conditional random fields for relational learning. In L. Getoor and B. Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. To appear.

W.N. Venables and B.D. Ripley. 2002. Modern Applied Statistics with S (fourth edition). Springer, New York.



The Independency tree model:
a new approach for clustering and factorisation
M. Julia Flores and José A. Gámez
Computing Systems Department / SIMD (i3A)
University of Castilla-La Mancha
Albacete, 02071, Spain

Serafín Moral
Departamento de Ciencias de la Computación e I. A.
Universidad de Granada
Granada, 18071, Spain

Abstract
Taking as an inspiration the so-called Explanation Tree for abductive inference in Bayesian
networks, we have developed a new clustering approach. It is based on exploiting the
variable independencies with the aim of building a tree structure such that in each leaf all
the variables are independent. In this work we produce a structure called Independency
tree. This structure can be seen as an extended probability tree, introducing a new
and very important element: a list of probabilistic single potentials associated to every
node. In the paper we will show that the model can be used to approximate a joint
probability distribution and, at the same time, as a hierarchical clustering procedure.
The Independency tree can be learned from data and it allows a fast computation of
conditional probabilities.

1 Introduction

In the last years the relevance of unsupervised classification within data mining processing has been remarkable. When dealing with a large number of cases in real applications, the identification of common features that allows the formation of groups/clusters of cases seems to be a powerful capability that both simplifies the data processing and also allows the user to understand better the trend(s) followed in the registered cases.

In the data mining community this descriptive task is known as cluster analysis (Anderberg, 1973; Duda et al., 2001; Kaufman and Rousseeuw, 1990; Jain et al., 1999), that is, getting a decomposition or partition of a data set into groups in such a way that the objects in one group are similar to each other but as different as possible from the objects in other groups. In fact, as pointed out in (Hand et al., 2001, pg. 293), we could distinguish two different objectives in this descriptive task: (a) segmentation, in which the aim is simply to partition the data in a convenient way, probably using only a small number of the available variables; and (b) decomposition, in which the aim is to see whether the data is composed (or not) of natural subclasses, i.e., to discover whether the overall population is heterogeneous. Strictly speaking, cluster analysis is devoted to the second goal although, in general, the term is widely used to describe both segmentation and cluster analysis problems.

In this work we only deal with categorical or discrete variables and we are closer to segmentation than (strictly speaking) to cluster analysis. Our proposal is a method that in our opinion has several good properties: (1) it produces a tree-like graphical structure that allows us to visually describe each cluster by means of a configuration of the relevant variables for that segment of the population; (2) it takes advantage of the identification of contextual independencies in order to build a decomposition of the joint probability distribution; (3) it stores a joint probability distribution over all the variables, in a simple way, allowing an efficient computation of conditional probabilities. An example of an independence tree is given in figure 1, which will be thoroughly described in the following sections.

The paper is structured as follows: first, in Section 2 we give some preliminaries in relation with our proposed method. Section 3 describes the kind of model we aim to look for, while in Section 4 we propose the algorithm we have designed to discover it from data. Section 5 describes the experiments carried out. Finally, Section 6 is devoted to the conclusions and to describing some possible lines in order to continue our research.

2 Preliminaries

Different types of clustering algorithms can be found in the literature, differing in the type of approach they follow. Probably the three main approaches are: partition-based clustering, hierarchical clustering, and probabilistic model-based clustering. Of these, the first two approaches yield a hard clustering in the sense that clusters are exclusive, while the third one yields a soft clustering, that is, an object can belong to more than one cluster following a probability distribution. Because our approach is somewhat related to hierarchical and probabilistic model-based clustering, we briefly comment on these types of clustering.

Hierarchical clustering (Sneath and Sokal, 1973) returns a tree-like structure called a dendrogram. This structure reflects the way in which the objects in the data set have been merged from single points to the whole set or split from the whole set to single points. Thus, there are two distinct types of hierarchical methods: agglomerative (by merging two clusters) and divisive (by splitting a cluster). Although our method does not exactly belong to hierarchical clustering, it is closer to hierarchical divisive methods than to any other clustering approach (known by us). In concrete, it works as monothetic divisive clustering algorithms (Kaufman and Rousseeuw, 1990), that split clusters using one variable at a time, but differs from classical divisive approaches in the use of a Bayesian score to decide which variable is chosen at each step, and because branches are not completely developed.

With respect to probabilistic clustering, although our method produces a hard clustering, it is somehow related to it because probability distributions are used to complete the information about the discovered model. Probabilistic model-based clustering is usually modelled as a mixture of models (see e.g. (Duda et al., 2001)). Thus, a hidden random variable is added to the original observed variables and its states correspond with the components of the mixture (the number of clusters). In this way we move to a problem of learning from unlabelled data, and usually the EM algorithm (Dempster et al., 1977) is used to carry out the learning task when the graphical structure is fixed, and structural EM (Friedman, 1998) when the graphical structure has also to be discovered (Peña et al., 2000). Iterative approaches have been described in the literature (Cheeseman and Stutz, 1996) in order to discover also the number of clusters (components of the mixture). Notice that although this is not the goal of the learning task, the structures discovered can be used for approximate inference (Lowd and Domingos, 2005), having the advantage over general Bayesian networks (Jensen, 2001) that the learned graphical structure is, in general, simple (i.e. naive Bayes) and so inference is extremely fast.

3 Independency tree model

If we have the set of variables X = {X_1, ..., X_m} regarding a certain domain, an independent probability tree for this set of variables is a tree such that (e.g. figure 1):

- Each inner node N is labelled by a variable Var(N), and this node has a child for each one of the possible values (states) of Var(N).

84 M. J. Flores, J. A. Gámez, and S. Moral


- When a variable is represented by a node in the tree it implies that this variable partitions the space from now on (from this point to the farther branches in the tree) depending on its (nominal) value. So, a variable cannot appear again (deeper) in the tree.

- Each node N will have an associated list of potentials, List(N), with each of the potentials in the list storing a marginal probability distribution for one of the variables in X, including always a potential for the variable Var(N). This list appears in Figure 1 framed into a dashed box.

- For any path from the root to a leaf, each one of the variables in X appears uniquely and exactly once in the lists associated to the nodes along the path. This potential will determine the conditional probability of the variables given the values of the variables on the path from the root to the list containing the potential.

A configuration is a subset of variables Y ⊆ {X_1, ..., X_m} together with a concrete value Y_j = y_j for each one of the variables Y_j ∈ Y. Each node N has an associated configuration determined by the variables in the path from the root to node N (excluding Var(N)) with the values corresponding to the children we have to follow to reach N. This configuration will be denoted as Conf(N).

An independent probability tree represents a joint probability distribution, p, over the variables in X. If x = (x_1, ..., x_m), then

\[
p(x) = \prod_{i=1}^{m} P_{x(i)}(x_i),
\]

where P_{x(i)} is the potential for variable X_i which is in the path from the root to a leaf determined by configuration x (following, in each inner node N with variable Var(N) = X_j, the child corresponding to the value of X_j in x).

This decomposition is based on a set of independencies among the variables {X_1, ..., X_m}. Assume that VL(N) is the set of variables of the potentials in List(N), and that DVL(N)¹ is the union of the sets VL(N'), where N' is a node in a path from N to a leaf. Then the independencies are generated from the following statement: each variable X in VL(N) − Var(N) is independent of the variables (VL(N) − {X}) ∪ DVL(N) given the configuration Conf(N); i.e. each variable in the list of a node is independent of the other variables in that list and of the variables in its descendants, given the configuration associated to the node.

An independent probability tree also defines a partition of the set of possible values of the variables in X. The number of clusters is the number of leaves. If N is a leaf with associated configuration Conf(N), then this group is given by all the sets of values x that are compatible with Conf(N) (they have the same value for all the variables in Conf(N)). For example, in figure 1 the configuration X = {0, 1, 0, 1, 0} would fall on the second leaf since X_1 = 0 and X_2 = 1. It is assumed that the probability distributions of the other variables in the cluster are given by the potentials in the path defined by the configuration, for example, P(X_3 = 0) = 1.

In this way, with an independence tree we are able of accomplishing two goals in one: (1) the variables are partitioned in a hierarchical way that gives us at the same time a clustering result; (2) we obtain the probability value of every configuration of the variables.

[Figure omitted: an independency tree with root X_1 (list: X_1: "0" [0.67] "1" [0.33]; X_4: "0" [0.5] "1" [0.5]; X_5: "0" [0.5] "1" [0.5]); under X_1 = "0", an inner node X_2 (list: X_2: "0" [0.5] "1" [0.5]) with leaves X_3: "0" [0] "1" [1] (for X_2 = 0) and X_3: "0" [1] "1" [0] (for X_2 = 1); under X_1 = "1", a leaf (list: X_2: "0" [1] "1" [0]; X_3: "0" [1] "1" [0]).]
Figure 1: Illustrative independence tree structure learned from the exclusive dataset.

¹ D stands for Descendants.
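The factorization p(x) = ∏_{i=1}^m P_{x(i)}(x_i) can be evaluated with a single root-to-leaf walk, as in the recursive getProb procedure described later in the paper. A minimal Python sketch; the dict-based node representation is our own choice, and the example tree reproduces the potentials of figure 1:

```python
def get_prob(x, node):
    """Probability of the complete configuration x (dict: variable -> value)
    under an independency tree.  Each node carries a list of single-variable
    potentials; inner nodes also branch on one variable, one child per value."""
    prob = 1.0
    for var, potential in node["potentials"].items():
        prob *= potential[x[var]]
    if node["var"] is None:                    # leaf: nothing left to branch on
        return prob
    child = node["children"][x[node["var"]]]   # follow the observed value
    return prob * get_prob(x, child)

# The tree of figure 1 (potentials read off the figure).
FIG1_TREE = {
    "var": "X1",
    "potentials": {"X1": {0: 0.67, 1: 0.33},
                   "X4": {0: 0.5, 1: 0.5},
                   "X5": {0: 0.5, 1: 0.5}},
    "children": {
        0: {"var": "X2",
            "potentials": {"X2": {0: 0.5, 1: 0.5}},
            "children": {
                0: {"var": None, "potentials": {"X3": {0: 0.0, 1: 1.0}},
                    "children": {}},
                1: {"var": None, "potentials": {"X3": {0: 1.0, 1: 0.0}},
                    "children": {}}}},
        1: {"var": None,
            "potentials": {"X2": {0: 1.0, 1: 0.0},
                           "X3": {0: 1.0, 1: 0.0}},
            "children": {}}}}
```

For the configuration X = {0, 1, 0, 1, 0} of the worked example, the walk multiplies P(X_1 = 0), P(X_4 = 1) and P(X_5 = 0) at the root, P(X_2 = 1) at the inner node, and P(X_3 = 0) = 1 at the second leaf.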

Our structure seeks to keep in the leaves those variables that remain independent. At the same time, if the distribution of one variable is shared by several leaves, we try to store it in their common ascendant, to avoid repeating it in all the leaves. For that reason, when one variable appears in the list of a node N_j it means that this distribution is common to all levels from here to a leaf. For example, in figure 1, the binary variable X_4 has a uniform (1/2, 1/2) distribution for all leaf cases (also for intermediate ones), since it is associated to the root node. On the other hand, we can see how the distribution of X_2 varies depending on the branch (left or right) we take from the root; that is, when X_1 = 0 the behaviour of variable X_2 is uniform, whereas when the value of X_1 is 1, X_2 is determined to be 0.

The intuition underlying this model is based on the idea that inside each cluster the variables are independent. When we have a set of data, groups are defined by common values in certain variables, with the other variables having random variations. Imagine that we have a database with characteristics of different animals, including mammals and birds. The presence of these two groups is based on the existence of dependencies between the variables (two legs is related with having wings and feathers). Once these variables are fixed, there can be other variables (size, colour) that can have random variations inside each group, but they do not define new subcategories.

Of course, there are some other possible alternatives to define clustering. Ours is based on the idea that when all the variables are independent, then to subdivide the population is not useful, as we have a simple method to describe the joint behaviour of the variables. However, if some of the variables are dependent, then the values of some basic variables could help to determine the values of the other variables, and then it can be useful to divide the population in groups according to the values of these basic variables.

Another way of seeing it is that having independent variables is the simplest model we can have. So, we determine a set of categories such that each one is described in a very simple way (independent variables). This is exploited by other clustering algorithms such as the EM-based AutoClass clustering algorithm (Cheeseman and Stutz, 1996).

An important fact of this model is that it is a generalisation of some usual models in classification: the Naive Bayes model (a tree with only one inner node, associated to the class, in whose children all the rest of the variables are independent), and the classification tree (a tree in which the list of each inner node only contains one potential and the potential associated to the class variable always appears in the leaves). This generalization means that our model is able of representing the same conditional probability distribution of the class variable with respect to the rest of the variables, taking as basis the same set of associated independencies.

To this model one may want to apply two kinds of operations:

The production of clusters and their characterisation. A simple in-depth traversal of the tree from the root to every leaf node will give us the corresponding clusters, and the potential lists associated to the path nodes will indicate the behaviour of the other variables not in the path.

The computation of the probability for a certain complete configuration with a value for each variable: x = {x_1, x_2, ..., x_m}. This algorithm is recursive and works as follows:

getProb(Configuration {x_1, x_2, ..., x_m}, Node N)
  Prob <- 1
  1. For all Potential P_j in List(N)
     1.1. X_j <- Var(P_j)
     1.2. Prob <- Prob * P_j(X_j = x_j)
  2. If N is a leaf then return Prob
  3. Else
     3.1. X_N <- Var(N)
     3.2. Next Node N' <- Branch child(X_N = x_N)
     3.3. return Prob * getProb({x_1, x_2, ..., x_m}, N')

If we have a certain configuration we should go through the tree from the root to the leaves, taking the corresponding branches. Every time we reach a node, we have to use the single potentials in the associated list to multiply the value of probability by the values of the potentials corresponding to this configuration.

It is also possible to compute the probability for a variable of interest Z conditioned to



a generic configuration (a set of observations): values of the leaves that are compatible with
Y = y. This can be done in two steps: 1) this value (compatibility means that to follow
Transform the independent probability tree into this path we do not have to assume Z = z with
a probability tree (Salmeron et al., 2000); 2) z 6= z ), which is linear too.
Make a marginalisation of the probability tree
by adding in all the variables except in Z as 4 Our clustering algorithm
describe in (Salmeron et al., 2000).
In the following we describe the first step. In this section we are going to describe how
It is a recursive procedure that visits all nodes an independent probability tree can be learned
from root to leaves. Each node can pass to its from a database D, with values for all the vari-
children a float, P rob (1 in the root) and a po- ables in X = {X1 , . . . , Xm }.
tential P which depends of variable Z (empty The basics of the algorithm are simple. It
in the root). Each time a node is visited, the tries to determine for each node, the variable
following operations are carried out: with a strongest degree of dependence with the
All the potentials in List(N) are examined and removed. For any potential, depending on its variable Xj, we proceed as follows:
- If Xj ≠ Z and Xj = Yk is in the observations configuration, Prob is multiplied by the value of this potential for Yk = yk.
- If Xj ≠ Z and Xj does not appear in the observations configuration, then the potential is ignored.
- If Xj = Z and Xj is the variable associated with node N, then we transform each one of the children of Z, multiplying Prob by the value of the potential at Z = z before transforming the child corresponding to this value.
- If Xj = Z and Xj is not the variable associated with node N, then the potential of Xj is stored in P.
After examining the list of potentials, we proceed as follows:
- If N is not a leaf node, then we transform its children.
- If N is a leaf node and P = ∅, we assign the value Prob to this node.
- If N is a leaf node and P ≠ ∅, we make N an inner node with variable Z: for each value Z = z, build a node Nz which is a leaf node with a value equal to Prob · P(z), and make Nz a child of N.
This procedure is fast: it is linear in the size of the independent probability tree (considering the number of values of Z constant). The marginalisation in the second step has the same time complexity. In fact, this marginalisation can be done by adding for each value Z = z the

rest of the remaining variables. This variable will be assigned to this node, repeating the process with its children until all the variables are independent.
For this, we need a measure of the degree of dependence of two variables Xi and Xj in a database D. The measure should be centered around 0, in such a way that the variables are considered dependent if and only if the measure is greater than 0. In this paper, we consider the K2 score (Cooper and Herskovits, 1992), measuring the degree of dependence as the difference between the logarithm of the K2 score of Xj conditioned on Xi and the logarithm of the K2 score of the marginal of Xj, i.e. the difference between the logarithms of the K2 scores of two networks with two variables: one in which Xi is a parent of Xj, and another in which the two variables are not connected. Let us call this degree of dependence Dep(Xi, Xj|D). It is a non-symmetrical measure and should be read as the influence of Xi on Xj; however, in practice the differences between Dep(Xi, Xj|D) and Dep(Xj, Xi|D) are not important.
At any moment, given a variable Xi and a database D, we can estimate a potential Pi(D) for this variable in the database. This potential is the estimation of the marginal probability of Xi. Here we assume that this is done by counting the absolute frequencies of each one of the values in the database.
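The dependence measure just described can be made concrete with a short sketch. This is an illustration under our own naming (log_k2, dep are not the authors' code), assuming the standard uniform-prior K2 score of Cooper and Herskovits (1992) over complete columns of discrete values:

```python
from collections import Counter, defaultdict
from math import lgamma

def log_k2(child, parent=None):
    """Log K2 score of a variable given an optional parent column:
    sum over parent configurations j of log[(r-1)!/(N_j+r-1)!] + sum_k log(N_jk!)."""
    r = len(set(child))
    parent = parent if parent is not None else [0] * len(child)  # one dummy configuration
    counts = defaultdict(Counter)
    for p, c in zip(parent, child):
        counts[p][c] += 1
    score = 0.0
    for cnt in counts.values():
        n_j = sum(cnt.values())
        score += lgamma(r) - lgamma(n_j + r)                # log (r-1)!/(N_j+r-1)!
        score += sum(lgamma(n + 1) for n in cnt.values())   # log prod_k N_jk!
    return score

def dep(xi, xj):
    """Dep(Xi, Xj|D): log K2 of Xj with Xi as parent minus log K2 of the marginal
    of Xj. Positive values mean the two variables are considered dependent."""
    return log_k2(xj, xi) - log_k2(xj)
```

For two identical columns dep is strongly positive, while for a constant Xj it is 0, so the variables are treated as independent, matching the centered-around-0 behaviour required above.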

The Independency tree model: a new approach for clustering and factorisation 87
The algorithm starts with a list of variables L, which is initially equal to {X1, ..., Xm}, and a database equal to the original one, D. It then determines the root node N and its children in a recursive way. For that, for any variable Xi in the list, it computes

    Dep(Xi|D) = Σ_{Xj ∈ L} Dep(Xi, Xj|D)

Then, the variable Xk with the maximum value of Dep(Xi|D) is considered.
If Dep(Xk|D) > 0, then we assign variable Xk to node N and we add the potential Pk(D) to the list List(N), removing Xk from L. For all the remaining variables Xi in L, we compute Dep(Xi, Xj|D) for Xj ∈ L (j ≠ i) and for Xj = Xk. If all these values are less than or equal to 0, then we add the potential Pi(D) to List(N) and remove Xi from L; i.e. we keep in this node the variables which are independent of the rest of the variables, including the variable in the node. Finally, we build a child of N for each one of the values Xk = xk. This is done by calling the same procedure recursively, but with the new list of variables, and changing the database D to D[Xk = xk], where D[Xk = xk] is the subset of D given by those cases in which variable Xk takes the value xk.
If Dep(Xk|D) ≤ 0, then the process is stopped and this is a leaf node. We build List(N) by adding the potential Pi(D) for every variable Xi in L.
In this algorithm, the complexity of each node computation is bounded by O(m²·n), where m is the number of variables and n is the database size. The number of leaves is bounded by the database size n multiplied by the maximum number of values of a variable, as a leaf with only one case of the database is never branched. Usually, though, the number of nodes is much lower.
In the algorithm, we make some approximations with respect to the independencies represented by the model. First, we only look at one-to-one dependencies, and not at joint dependencies. It can be the case that Xi is independent of Xj and of Xk while not being independent of (Xj, Xk). However, testing these independencies is more costly, and we do not have the possibility of representing them directly in the model. This assumption is also made by other Bayesian network learning algorithms such as PC (Spirtes et al., 1993), where a link between two nodes is deleted if these nodes are marginally independent.
Another approximation is that it is assumed that all the variables are independent in a leaf when Dep(Xk|D) ≤ 0, even if some of the terms we are adding are positive. We have found that this is a good compromise criterion to limit the complexity of the learned models.

5 Experiments

To make an initial evaluation of the Independence Tree (IndepT) model, we decided to compare it with other well-known unsupervised classification techniques that are also based on Probabilistic Graphical Models: learning a Bayesian network (with a standard algorithm such as PC), and Expectation-Maximisation with a Naive Bayes structure, which uses cross-validation to decide the number of clusters. Because we produce a probabilistic description of the dataset, we use the log-likelihood (logL) of the data given the model to score a given clustering. By using the logL as goodness measure, we can compare our approach with other algorithms for probabilistic model-based unsupervised learning: probabilistic model-based clustering and Bayesian networks. This is a direct evaluation of the procedures as methods to encode a complex joint probability distribution. At the same time, it is also an evaluation of our method from the clustering point of view, showing whether the proposed segmentation is useful for a simple description of the population.
Then, the basic steps we have followed for the three procedures [IndepT, PC-Learn, EM-Naive] are:
1. Divide the data cases into a training set (SD) and a test set (ST), using (2/3, 1/3).
2. Build the corresponding model for the cases in SD.
3. Compute the log-likelihood of the obtained model over the data in ST.
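The three-step protocol above can be sketched as a small harness. This is illustrative code (the names holdout_loglik and fit are ours), assuming a fitted model is a function that returns the probability of a single case:

```python
import math
import random

def holdout_loglik(cases, fit, split=2/3, seed=0):
    """Steps 1-3 of the protocol: split the cases into SD (2/3) and ST (1/3),
    build a model on SD, and return the log-likelihood of ST under it.
    `fit` maps a list of training cases to a function case -> probability."""
    shuffled = cases[:]
    random.Random(seed).shuffle(shuffled)       # step 1: random 2/3 - 1/3 split
    cut = round(len(shuffled) * split)
    train, test = shuffled[:cut], shuffled[cut:]
    model = fit(train)                          # step 2: build the model on SD
    return sum(math.log(model(case)) for case in test)  # step 3: logL on ST
```

Any of the three model builders (IndepT, PC-Learn, EM-Naive) can be plugged in as fit, so all methods are scored on exactly the same held-out cases.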

88 M. J. Flores, J. A. Gámez, and S. Moral


Among the tested cases, apart from easy synthetic databases created to check the expected and right clusters, we have looked for real sets of cases. Some of them have been taken from the UCI datasets repository (Newman et al., 1998) and others from real applications related to our current research environment.
We will indicate the main remarks about all the evaluated cases:
Case 1: exclusive: This is a very simple dataset with five binary variables, from X1 to X5. The first three variables have an exclusive behaviour, that is, if one of them is 1 the other two will be 0. On the other hand, X4 and X5 are independent of the first three and also of each other. This example is interesting not especially for the logL comparison, but mainly to see the interpretation of the tree given in figure 1.
Case 2: tic-tac-toe: Taken from the UCI repository, it encodes the complete set of possible board configurations at the end of tic-tac-toe games. It presents 958 cases and 9 attributes (one per game square).
Cases 3: greenhouses: Data cases taken from real greenhouses located in Almería (Spain). There are four distinct sets of cases; here we indicate them with the notation (case) = {num cases, num attributes}: (3A) = {1240, 8}, (3B) = {1465, 17}, (3C) = {1318, 33}, (3D) = {1465, 6}.
Cases 4: sheep: Datasets taken from the work developed in (Flores and Gámez, 2005), where the historical data of the different animals were registered and used to analyse their genetic merit for milk production directed to Manchego cheese. There are two datasets with 3087 cases: 4A with 24 variables/attributes, and 4B, where the attribute breeding value (which could be interpreted as the class attribute) has been removed.
Case 5: connect4: Also downloaded from the UCI repository, it contains all legal 8-ply positions in the game of connect-4 in which neither player has won yet and in which the next move is not forced. There are 67557 cases and 42 attributes, each corresponding to one connect-4 square².

²Actually, only half of the cases have been used.

The results of the experiments can be found in figure 1 and in table 1. Figure 1 presents the tree learned in the exclusive case. As can be observed, the tree really captures what is happening in the problem: X4 and X5 are independent and placed in the list of the root node; then one of the other variables is looked up. If its value is 1, then the values of the other two variables are completely determined and we stop. If its value is 0, then another variable has to be examined.

    case    IndepT        PC-Learn      EM-Naive
    1       -61.33        -62.88        -119.67
    2       -1073.94      -2947.85      -3535.11
    3A      -3016.92      -2644.38      -4297.21
    3B      -2100.40      -3584.06      -6266.71
    3C      -4956.04      -7630.98      -15145.15
    3D      -1340.04      -2770.15      -4223.45
    4A      -6853.12      -18074.69     -32213.29
    4B      -6802.94      -17357.80     -31244.28
    5       -465790.73    -349631.35    -794415.07

    Table 1: Comparison in terms of the log-likelihood value for all cases (datasets).

With respect to the capacity of approximating the joint probability distribution, we can see in table 1 that our method provides greater values of logL than EM-Naive on all the datasets. With respect to the PC algorithm, the independency tree wins in all the situations except two (cases 3A and 5). This is a remarkable result, as our model has some limitations in representing sets of independencies that can be easily represented by a Bayesian network (for example, a Markov chain); however, its behaviour is usually better. We think that this is due to two main aspects: it can represent asymmetrical independencies, and it is a simple model (which is always a virtue).

6 Concluding remarks and future work

In this paper we have proposed a new model for representing a joint probability distribution in a compact way, which can be used for fast computation of conditional probability distributions. This model is based on a partition of the state space, in such a way that in each group all the variables are independent. In this sense, it is at the same time a clustering algorithm.
In the experiments with real and synthetic data we have shown its good behaviour as a method of approximating a joint probability, providing results that are better (except in two cases for the PC algorithm) than the factorisation provided by a standard Bayesian network learning algorithm (PC) and by the EM clustering algorithm. In any case, more extensive experiments are necessary in order to compare it with other clustering and factorisation procedures.
We think that this model can be improved and exploited in several ways: (1) some of the clusters can have very few cases or can be very similar in the distributions of the variables; we could devise a procedure to join these clusters. (2) We feel that the different numbers of values of the variables can have some undesirable effects; this problem could be solved by considering binary trees in which the branching is determined by a partition of the set of possible values of a variable into two parts, which could also allow the model to be extended to continuous variables. (3) We could determine different branching-stopping rules depending on the final objective: to approximate a joint probability, or to provide a simple (though not necessarily exhaustive) partition of the state space that could help us get an idea of how the variables take their values. (4) We could use this model in classification problems.

Acknowledgments

This work has been supported by Spanish MEC under project TIN2004-06204-C03-{02,03}.

References

M.R. Anderberg. 1973. Cluster Analysis for Applications. Academic Press.

P. Cheeseman and J. Stutz. 1996. Bayesian classification (AUTOCLASS): Theory and results. In U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors, Advances in Knowledge Discovery and Data Mining, pages 153-180. AAAI Press/MIT Press.

G.F. Cooper and E.A. Herskovits. 1992. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309-347.

A.P. Dempster, N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38.

R.O. Duda, P.E. Hart, and D.G. Stork. 2001. Pattern Classification. Wiley.

M.J. Flores and J.A. Gámez. 2005. Breeding value classification in Manchego sheep: A study of attribute selection and construction. In Knowledge-Based Intelligent Information and Engineering Systems, LNAI volume 3682, pages 1338-1346. Springer Verlag.

N. Friedman. 1998. The Bayesian structural EM algorithm. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI-98), pages 129-138. Morgan Kaufmann.

D. Hand, H. Mannila, and P. Smyth. 2001. Principles of Data Mining. MIT Press.

A.K. Jain, M.N. Murty, and P.J. Flynn. 1999. Data clustering: a review. ACM Computing Surveys, 31:264-323.

F.V. Jensen. 2001. Bayesian Networks and Decision Graphs. Springer Verlag.

L. Kaufman and P. Rousseeuw. 1990. Finding Groups in Data. John Wiley and Sons.

D. Lowd and P. Domingos. 2005. Naive Bayes models for probability estimation. In Proceedings of the 22nd ICML Conference, pages 529-536. ACM Press.

D.J. Newman, S. Hettich, C.L. Blake, and C.J. Merz. 1998. UCI repository of machine learning databases.

J.M. Peña, J.A. Lozano, and P. Larrañaga. 2000. An improved Bayesian structural EM algorithm for learning Bayesian networks for clustering. Pattern Recognition Letters, 21:779-786.

A. Salmerón, A. Cano, and S. Moral. 2000. Importance sampling in Bayesian networks using probability trees. Computational Statistics and Data Analysis, 34:387-413.

P.H. Sneath and R.R. Sokal. 1973. Numerical Taxonomy. Freeman.

P. Spirtes, C. Glymour, and R. Scheines. 1993. Causation, Prediction and Search. Springer Verlag.



Learning the Tree Augmented Naive Bayes Classifier from
incomplete datasets

Olivier C.H. François and Philippe Leray
LITIS Lab., INSA de Rouen, BP 08, av. de l'Université
76801 Saint-Étienne-du-Rouvray, France.

Abstract
The Bayesian network formalism is becoming increasingly popular in many areas such
as decision aid or diagnosis, in particular thanks to its inference capabilities, even when
data are incomplete. For classification tasks, Naive Bayes and Augmented Naive Bayes
classifiers have shown excellent performances. Learning a Naive Bayes classifier from
incomplete datasets is not difficult as only parameter learning has to be performed. But
there are not many methods to efficiently learn Tree Augmented Naive Bayes classifiers
from incomplete datasets. In this paper, we take up the structural em algorithm principle
introduced by (Friedman, 1997) to propose an algorithm to answer this question.

1 Introduction

Bayesian networks are a formalism for probabilistic reasoning increasingly used in decision aid, diagnosis and complex systems control. Let X = {X1, ..., Xn} be a set of discrete random variables. A Bayesian network B = <G, Θ> is defined by a directed acyclic graph G = <N, U>, where N represents the set of nodes (one node for each variable) and U the set of edges, and by parameters Θ = {θ_ijk}, 1 ≤ i ≤ n, 1 ≤ j ≤ qi, 1 ≤ k ≤ ri, the set of conditional probability tables of each node Xi knowing the state of its parents Pi (with ri and qi as the respective cardinalities of Xi and Pi).
If G and Θ are known, many inference algorithms can be used to compute the probability of any variable that has not been measured, conditionally on the values of the measured variables. Bayesian networks are therefore a tool of choice for reasoning under uncertainty based on incomplete data, which is often the case in real applications.
It is possible to use this formalism for classification tasks. For instance, the Naive Bayes classifier has shown excellent performance. This model is simple and only needs parameter learning, which can be performed with incomplete datasets. Augmented Naive Bayes classifiers (with trees, forests or Bayesian networks), which often give better performance than the Naive Bayes classifier, require structure learning. Only a few structural learning methods deal with incomplete data.
We introduce in this paper a method to learn Tree Augmented Naive Bayes (TAN) classifiers based on the expectation-maximization (EM) principle. Some previous work by (Cohen et al., 2004) also deals with TAN classifiers and the EM principle for partially unlabeled data. In their work, only the variable corresponding to the class can be partially missing, whereas any variable can be partially missing in the approach we propose here.
We will therefore first recall the issues relating to structural learning, and review the various ways of dealing with incomplete data, primarily for parameter estimation but also for structure determination. We will then examine the structural EM algorithm principle, before proposing and testing a few ideas for improvement based on the extension of the Maximum Weight Spanning Tree algorithm to deal with incomplete data. Then, we will show how to use the introduced method to learn the well-known Tree Augmented Naive Bayes classifier from incomplete datasets, and we will give some experiments on real data.
2 Preliminary remarks

2.1 Structural learning

Because of the super-exponential size of the search space, exhaustive search for the best structure is impossible. Many heuristic methods have been proposed to determine the structure of a Bayesian network. Some of them rely on human expert knowledge; others use real data, which most of the time need to be completely observed.
Here, we are more specifically interested in score-based methods, primarily the greedy search algorithm adapted by (Chickering et al., 1995) and the maximum weight spanning tree (MWST) algorithm proposed by (Chow and Liu, 1968) and applied to Bayesian networks in (Heckerman et al., 1995). The greedy search is carried out in directed acyclic graph (DAG) space, where the interest of each structure located near the current structure is assessed by means of a BIC/MDL-type measurement (Eqn.1)¹ or a Bayesian score like BDe (Heckerman et al., 1995):

    BIC(G, Θ) = log P(D|G, Θ) - (log N / 2) Dim(G)    (1)

where Dim(G) is the number of parameters used for the Bayesian network representation and N is the size of the dataset D.
The BIC score is decomposable. It can be written as the sum of local scores computed for each node, BIC(G, Θ) = Σ_i bic(Xi, Pi, θ_{Xi|Pi}), where

    bic(Xi, Pi, θ_{Xi|Pi}) = Σ_{Xi=xk} Σ_{Pi=paj} Nijk log θ_ijk - (log N / 2) Dim(θ_{Xi|Pi})    (2)

with Nijk the number of occurrences of {Xi = xk and Pi = paj} in D.
The principle of the MWST algorithm is rather different. This algorithm determines the best tree linking all the variables, using a mutual information measurement as in (Chow and Liu, 1968), or the BIC score variation when two variables become linked, as proposed by (Heckerman et al., 1995). The aim is to find an optimal solution, but in a space limited to trees.

¹As (Friedman, 1997), we consider that the BIC/MDL score is a function of the graph G and the parameters Θ, generalizing the classical definition of the BIC score, which is defined with our notation by BIC(G, Θ*), where Θ* is obtained by maximizing the likelihood or the BIC(G, Θ) score for a given G.

2.2 Bayesian classifiers

Bayesian classifiers such as Naive Bayes have shown excellent performance on many datasets. Even if the Naive Bayes classifier has underlying heavy independence assumptions, (Domingos and Pazzani, 1997) have shown that it is optimal for conjunctive and disjunctive concepts. They have also shown that the Naive Bayes classifier does not require attribute independence to be optimal under Zero-One loss.
Augmented Naive Bayes classifiers appear as a natural extension of the Naive Bayes classifier. They relax the assumption of independence of the attributes given the class variable. Many ways to find the best tree with which to augment the Naive Bayes classifier have been studied. These Tree Augmented Naive Bayes classifiers (Geiger, 1992; Friedman et al., 1997) are a restricted family of Bayesian networks in which the class variable has no parent and each other attribute has as parents the class variable and at most one other attribute. The BIC score of such a Bayesian network is given by Eqn.3:

    BIC(TAN, Θ) = bic(C, ∅, θ_{C|∅}) + Σ_i bic(Xi, {C, Pi}, θ_{Xi|{C,Pi}})    (3)

where C stands for the class node and Pi can only be the empty set or a singleton {Xj}, Xj ∉ {C, Xi}.
The Forest Augmented Naive Bayes classifier (FAN) is very close to the TAN one. In this model, the augmented structure is not a tree, but a set of disconnected trees in the attribute space (Sacha, 1999).
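The local score of Eqn.2 can be computed directly from counts. The following is a minimal sketch under our own naming (local_bic, with cases given as dictionaries), using maximum-likelihood estimates θ_ijk = Nijk / Nij:

```python
import math
from collections import Counter

def local_bic(cases, child, parents):
    """bic(Xi, Pi, theta_{Xi|Pi}) of Eqn.2: the maximised log-likelihood of the
    child given its parents, minus (log N / 2) times the number of free
    parameters. Each case is a dict mapping variable names to values."""
    N = len(cases)
    r = len({case[child] for case in cases})
    joint = Counter((tuple(case[p] for p in parents), case[child]) for case in cases)
    par = Counter(tuple(case[p] for p in parents) for case in cases)
    # sum_k N_jk * log(N_jk / N_j) is the maximised log-likelihood term
    loglik = sum(n * math.log(n / par[pa]) for (pa, ch), n in joint.items())
    dim = (r - 1) * len(par)  # (r_i - 1) free parameters per observed parent config
    return loglik - 0.5 * math.log(N) * dim
```

Summing local_bic over all nodes gives BIC(G, Θ) of Eqn.1, and summing it with parent sets of the form {C} or {C, Xj} reproduces the TAN score of Eqn.3. Note that adding an irrelevant parent leaves the log-likelihood unchanged but enlarges the penalty, so the score drops, which is exactly the behaviour the decomposable score is meant to have.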



2.3 Dealing with incomplete data

2.3.1 Practical issue

Nowadays, more and more datasets are available, and most of them are incomplete. When we want to build a model from an incomplete dataset, it is often possible to consider only the complete samples in the dataset. But, in this case, we do not have a lot of data to learn the model. For instance, if we have a dataset with 2000 samples on 20 attributes with a probability of 20% that a value is missing, then only 23 samples (on average) are complete. Generalizing from this example, we see that we cannot ignore the problem of incomplete datasets.

2.3.2 Nature of missing data

Let D = {X_i^l}, 1 ≤ i ≤ n, 1 ≤ l ≤ N, be our dataset, with Do the observed part of D, Dm the missing part and Dco the set of completely observed cases in Do. Let also M = {M_i^l} with M_i^l = 1 if X_i^l is missing, 0 if not. We then have the following relations:

    Dm = {X_i^l / M_i^l = 1}, 1 ≤ i ≤ n, 1 ≤ l ≤ N
    Do = {X_i^l / M_i^l = 0}, 1 ≤ i ≤ n, 1 ≤ l ≤ N
    Dco = {[X_1^l ... X_n^l] / [M_1^l ... M_n^l] = [0 ... 0]}, 1 ≤ l ≤ N

Dealing with missing data depends on their nature. (Rubin, 1976) identified several types of missing data:
- MCAR (Missing Completely At Random): P(M|D) = P(M); the probability for data to be missing does not depend on D,
- MAR (Missing At Random): P(M|D) = P(M|Do); the probability for data to be missing depends on the observed data,
- NMAR (Not Missing At Random): the probability for data to be missing depends on both observed and missing data.
The MCAR and MAR situations are the easiest to solve, as the observed data include all the information necessary to estimate the distribution of the missing data. The NMAR case is trickier, as outside information has to be used to model the missing data distribution.

2.3.3 Learning Θ with incomplete data

With MCAR data, the first and simplest possible approach is complete case analysis. This is a parameter estimation based on Dco, the set of completely observed cases in Do. When D is MCAR, the estimator based on Dco is unbiased. However, with a high number of variables the probability for a case [X_1^l ... X_n^l] to be completely measured is low, and Dco may be empty.
One advantage of Bayesian networks is that, if only Xi and Pi = Pa(Xi) are measured, then the corresponding conditional probability table can be estimated. Another possible method with MCAR cases is available case analysis, i.e. using for the estimation of each conditional probability P(Xi|Pa(Xi)) the cases in Do where Xi and Pa(Xi) are measured, not only those in Dco (where all the variables are measured) as in the previous approach.
Many methods try to rely more on all the observed data. Among them are sequential updating (Spiegelhalter and Lauritzen, 1990), Gibbs sampling (Geman and Geman, 1984), and expectation-maximisation (EM) (Dempster et al., 1977). Those algorithms use the MAR properties of the missing data. More recently, the bound and collapse algorithm (Ramoni and Sebastiani, 1998) and the robust Bayesian estimator (Ramoni and Sebastiani, 2000) try to solve this task whatever the nature of the missing data.
EM has been adapted by (Lauritzen, 1995) to Bayesian network parameter learning when the structure is known. Let log P(D|Θ) = log P(Do, Dm|Θ) be the data log-likelihood. Dm being an unmeasured random variable, this log-likelihood is also a random variable function of Dm. By establishing a reference model Θ*, it is possible to estimate the probability density of the missing data P(Dm|Θ*) and therefore to calculate Q(Θ : Θ*), the expectation of the previous log-likelihood:

    Q(Θ : Θ*) = E_{Θ*}[log P(Do, Dm|Θ)]    (4)

So Q(Θ : Θ*) is the expectation of the likelihood of any set of parameters Θ, calculated using a distribution of the missing data P(Dm|Θ*). This equation can be re-written as follows:

    Q(Θ : Θ*) = Σ_{i=1}^{n} Σ_{Xi=xk} Σ_{Pi=paj} N*_ijk log θ_ijk    (5)

where N*_ijk = E_{Θ*}[Nijk] = N · P(Xi = xk, Pi = paj | Θ*) is obtained by inference in the network <G, Θ*> if {Xi, Pi} are not completely measured, or else by mere counting.
(Dempster et al., 1977) proved the convergence of the EM algorithm, as well as the fact that it is not necessary to find the global optimum Θ^{i+1} of the function Q(Θ : Θ^i), but simply a value which increases the function Q (Generalized EM).
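To make Eqn.4-5 concrete, here is a deliberately tiny sketch (our own code, not from the paper) of parametric EM for a single binary variable with MCAR missing entries: the E-step forms the expected count N* by filling each missing value with the current parameter, and the M-step renormalises.

```python
def em_binary(observations, theta=0.5, iters=50):
    """EM estimate of P(X=1) from a list of 0, 1 and None (missing) entries.
    E-step: N*_1 = sum of the observed ones plus theta for each missing entry
    (Eqn.5 with no parents); M-step: theta = N*_1 / N."""
    N = len(observations)
    for _ in range(iters):
        n1_star = sum(theta if x is None else x for x in observations)  # expected count
        theta = n1_star / N
    return theta
```

With MCAR data the fixed point is simply the frequency among the observed entries (for example, [1, 1, 1, 0, None, None, None, None] converges to 0.75); the value of EM shows up when the variable has parents and the expected counts must come from inference.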

2.3.4 Learning G with incomplete datasets

The main methods for structural learning with incomplete data use the EM principle: Alternative Model Selection EM (AMS-EM), proposed by (Friedman, 1997), and Bayesian Structural EM (BS-EM) (Friedman, 1998). We can also cite the Hybrid Independence Test proposed in (Dash and Druzdzel, 2003), which can use EM to estimate the essential sufficient statistics that are then used for an independence test in a constraint-based method. (Myers et al., 1999) also proposes a structural learning method based on genetic algorithms and MCMC. We will now explain the structural EM algorithm principle in detail and see how we can adapt it to learn a TAN model.

3 Structural EM algorithm

3.1 General principle

The EM principle, which we have described above for parameter learning, applies more generally to structural learning (Algorithm 1, as proposed by (Friedman, 1997; Friedman, 1998)).

    Algorithm 1: Generic EM for structural learning
    1: Init: i = 0
       Random or heuristic choice of the initial Bayesian network (G^0, Θ^0)
    2: repeat
    3:   i = i + 1
    4:   (G^i, Θ^i) = argmax_{G,Θ} Q(G, Θ : G^{i-1}, Θ^{i-1})
    5: until |Q(G^i, Θ^i : G^{i-1}, Θ^{i-1}) - Q(G^{i-1}, Θ^{i-1} : G^{i-1}, Θ^{i-1})| ≤ ε

The maximization step in this algorithm (step 4) has to be performed in the joint space {G, Θ}, which amounts to searching for the best structure and the best parameters corresponding to this structure. In practice, these two steps are clearly distinct²:

    G^i = argmax_G Q(G, Θ : G^{i-1}, Θ^{i-1})    (6)
    Θ^i = argmax_Θ Q(G^i, Θ : G^{i-1}, Θ^{i-1})    (7)

where Q(G, Θ : G*, Θ*) is the expectation of the likelihood of any Bayesian network <G, Θ> computed using a distribution of the missing data P(Dm|G*, Θ*).
Note that the first search (Eqn.6) in the space of possible graphs takes us back to the initial problem, i.e. the search for the best structure in a super-exponential space. However, with Generalized EM it is sufficient to look for a better solution rather than the best possible one, without affecting the convergence properties of the algorithm. This search for a better solution can then be done in a limited space, like for example V_G, the set of the neighbours of graph G that are generated by removal, addition or inversion of an arc.
Concerning the search in the space of the parameters (Eqn.7), (Friedman, 1997) proposes repeating the operation several times, using a clever initialisation. This step then amounts to running the parametric EM algorithm for each structure G^i, starting with structure G^0 (steps 4 to 7 of Algorithm 2). The two structural EM algorithms proposed by Friedman can therefore be considered as greedy search algorithms, with EM parameter learning at each iteration.

    Algorithm 2: Detailed EM for structural learning
    1: Init: finished = false, i = 0
       Random or heuristic choice of the initial Bayesian network (G^0, Θ^{0,0})
    2: repeat
    3:   j = 0
    4:   repeat
    5:     Θ^{i,j+1} = argmax_Θ Q(G^i, Θ : G^i, Θ^{i,j})
    6:     j = j + 1
    7:   until convergence (Θ^{i,j} → Θ^{i,jo})
    8:   if i = 0 or |Q(G^i, Θ^{i,jo} : G^{i-1}, Θ^{i-1,jo}) - Q(G^{i-1}, Θ^{i-1,jo} : G^{i-1}, Θ^{i-1,jo})| > ε then
    9:     G^{i+1} = argmax_{G ∈ V_{G^i}} Q(G, Θ : G^i, Θ^{i,jo})
    10:    Θ^{i+1,0} = argmax_Θ Q(G^{i+1}, Θ : G^i, Θ^{i,jo})
    11:    i = i + 1
    12:  else
    13:    finished = true
    14:  end if
    15: until finished

²The notation Q(G, Θ : ...) used in Eqn.6 stands for E[Q(G, Θ : ...)] for Bayesian scores, or Q(G, Θ^o : ...) where Θ^o is obtained by likelihood maximisation.

3.2 Choice of function Q

We now have to choose the function Q that will be used for structural learning. The likelihood used for parameter learning is not a good indicator for determining the best graph, since it gives more importance to strongly connected structures. Moreover, it is impossible to compute the marginal likelihood when data are incomplete, so it is necessary to rely on an efficient approximation like those reviewed by (Chickering and Heckerman, 1996). In complete data cases, the most frequently used measurements are the BIC/MDL score and the Bayesian BDe score (see paragraph 2.1). When proposing the MS-EM and MWST-EM algorithms, (Friedman, 1997) shows how to use the BIC/MDL score with incomplete data, by applying the principle of Eqn.4 to the BIC score (Eqn.1) instead of the likelihood. The function Q^BIC is defined as the expectation of the BIC score, using a certain probability density on the missing data P(Dm|G*, Θ*):

    Q^BIC(G, Θ : G*, Θ*) = E_{G*,Θ*}[log P(Do, Dm|G, Θ)] - (log N / 2) Dim(G)    (8)

As the BIC score is decomposable, so is Q^BIC:

    Q^BIC(G, Θ : G*, Θ*) = Σ_i Q^bic(Xi, Pi, θ_{Xi|Pi} : G*, Θ*)    (9)

where

    Q^bic(Xi, Pi, θ_{Xi|Pi} : G*, Θ*) = Σ_{Xi=xk} Σ_{Pi=paj} N*_ijk log θ_ijk - (log N / 2) Dim(θ_{Xi|Pi})    (10)

with N*_ijk = E_{G*,Θ*}[Nijk] = N · P(Xi = xk, Pi = paj | G*, Θ*) obtained by inference in the network <G*, Θ*> if {Xi, Pi} are not completely measured, or else by mere counting. With the same reasoning, (Friedman, 1998) proposes the adaptation of the BDe score to incomplete data.

4 TAN-EM, a structural EM for classification

(Leray and François, 2005) have introduced MWST-EM, an adaptation of MWST dealing with incomplete datasets. The approach we propose here uses the same principles in order to efficiently learn TAN classifiers from incomplete datasets.

4.1 MWST-EM, a structural EM in the space of trees

Step 1 of Algorithm 2, as in all the previous algorithms, deals with the choice of the initial structure. The choice of an oriented chain graph linking all the variables, proposed by (Friedman, 1997), seems even more judicious here, since this chain graph also belongs to the tree space. Steps 4 to 7 do not change. They deal with the running of the parametric EM algorithm for each structure B^i, starting with structure B^0.
There is a change from the regular structural EM algorithm in step 9, i.e. the search for a better structure for the next iteration. With the previous structural EM algorithms, we were looking for the best DAG among the neighbours of the current graph. With MWST-EM, we can directly get the best tree that maximises the function Q.
In paragraph 2.1, we briefly recalled that the MWST algorithm uses a similarity function between two nodes, based on the BIC score variation according to whether Xj is linked to Xi or not. This function can be summed up in the following (symmetrical) matrix:

    M_ij = [ bic(Xi, Xj, θ_{Xi|Xj}) - bic(Xi, ∅, θ_{Xi}) ]_{1≤i,j≤n}    (11)

where the local BIC score is defined in Eqn.2. Running a maximum (weight) spanning tree algorithm like Kruskal's on the matrix M enables us to obtain the best tree T that maximises the sum of the local scores over all the nodes, i.e. the function BIC of Eqn.2.
By applying the principle we described in section 3.2, we can then adapt MWST to incomplete data by replacing the local BIC score of Eqn.11 with its expectation; to do so, we use a certain probability density of the missing data P(Dm|T*, Θ*):

    M^Q_ij = [ Q^bic(Xi, Pi = {Xj}, θ_{Xi|Xj} : T*, Θ*) - Q^bic(Xi, Pi = ∅, θ_{Xi} : T*, Θ*) ]    (12)

With the same reasoning, running a maximum (weight) spanning tree algorithm on the matrix M^Q enables us to get the best tree T that maximises the sum of the local scores over all the nodes, i.e. the function Q^BIC of Eqn.9.

4.2 TAN-EM, a structural EM for classification

The score used to find the best TAN structure is very similar to the one used in MWST, so we can adapt it to incomplete datasets by defining the following score matrix:

    M^Q_ij = [ Q^bic(Xi, Pi = {C, Xj}, θ_{Xi|{Xj,C}} : T*, Θ*) - Q^bic(Xi, Pi = {C}, θ_{Xi|C} : T*, Θ*) ]    (13)
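The spanning-tree step shared by MWST, MWST-EM and TAN-EM can be sketched as follows: Kruskal's algorithm with a union-find structure, run on any of the symmetric score matrices of Eqn.11-13 (the function name and the list-of-lists matrix encoding are ours):

```python
def maximum_spanning_tree(M):
    """Kruskal's algorithm on a symmetric n x n score matrix (list of lists):
    greedily add the highest-scoring edges that do not create a cycle,
    returning the n-1 undirected edges (i, j) of the best tree."""
    n = len(M)
    parent = list(range(n))
    def find(x):                      # union-find root with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = sorted(((M[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    tree = []
    for score, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # the edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Orienting the resulting tree from an arbitrary root and then adding the class node C as an extra parent of every attribute yields the TAN structure described above.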

get the best augmented tree, and connect the class node to all the other nodes to obtain the tan structure. We are currently using the same reasoning to find the best forest extension.

4.3 Related works

(Meila-Predoviciu, 1999) applies the mwst algorithm and the em principle, but in another framework, learning mixtures of trees. In that work, the data is complete, but a new variable is introduced in order to take into account the weight of each tree in the mixture. This variable isn't measured, so em is used to determine the corresponding parameters.

(Pena et al., 2002) propose a change inside the framework of the sem algorithm, resulting in an alternative approach for learning Bayesian networks for clustering more efficiently.

(Greiner and Zhou, 2002) propose maximizing the conditional likelihood for BN parameter learning. They apply their method to mcar incomplete data by using available case analysis in order to find the best tan classifier.

(Cohen et al., 2004) deal with tan classifiers and the em principle for partially unlabeled data. In their work, only the variable corresponding to the class can be partially missing, whereas any variable can be partially missing in our tan-em extension.

5 Experiments

5.1 Protocol

The experiment stage aims at evaluating the Tree Augmented Naive Bayes classifier on incomplete datasets from the UCI repository3: Hepatitis, Horse, House, Mushrooms and Thyroid.

The tan-em method we propose here is compared to the Naive Bayes classifier with em parameter learning. We also indicate the classification rate obtained by three other methods: mwst-em, sem initialised with a random chain, and sem initialised with the tree given by mwst-em (sem+t). The first two methods are dedicated to classification tasks while the others do not consider the class node as a specific variable.

We also give a confidence interval for each classification rate, based on Eqn. 14, proposed by (Bennani and Bossaert, 1996):

    I(α, N) = [ T + Z²/(2N) ± Z·√( T(1−T)/N + Z²/(4N²) ) ] / [ 1 + Z²/N ]     (14)

where N is the number of samples in the dataset, T is the classification rate and Z = 1.96 for α = 95%.

5.2 Results

The results are summed up in Table 1. First, we can see that even if the Naive Bayes classifier often gives good results, the other tested methods allow us to obtain better classification rates. But, whereas all runs of nb-em give the same results, as em parameter learning only needs an initialisation, the other methods do not always give the same results, and thus not always the same classification rates. We have also noticed (not reported here) that, excepting nb-em, tan-em seems the most stable method concerning the evaluated classification rate while mwst-em seems to be the least stable.

The method mwst-em can obtain very good structures with a good initialisation. Hence, initialising sem with the tree obtained by mwst-em (sem+t) gives more stable results (see (Leray and Francois, 2005) for a more specific study of this point).

In our tests, except for the House dataset, tan-em always obtains a structure that leads to better classification rates in comparison with the other structure learning methods.

Surprisingly, we also remark that mwst-em can give good classification rates even if the class node is connected to a maximum of two other attributes.

Regarding the log-likelihood reported in Table 1, we see that the tan-em algorithm finds structures that can also lead to a good approximation of the underlying probability distribution of the data, even with a strong constraint on the graph structure.

Finally, Table 1 illustrates that tan-em and mwst-em have about the same complexity (regarding the computational time) and

3 http://www.ics.uci.edu/mlearn/MLRepository.html
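The tan construction of Section 4.2 can be sketched in a few lines: fill a score matrix with the local score differences of Eqn. 13, extract a maximum-weight spanning tree from it, orient the tree from an arbitrary root, and add the class node C as an extra parent of every attribute. A minimal Python sketch, in which the matrix values are hypothetical placeholders for the Q^bic differences:

```python
# Sketch of the TAN structure-search step. The score matrix M is assumed
# to hold the local score differences of Eqn. 13 (values below are
# hypothetical placeholders, not real Q^bic scores).

def maximum_spanning_tree(M):
    """Prim's algorithm on a symmetric score matrix; returns directed edges."""
    n = len(M)
    in_tree = {0}          # grow the tree from attribute 0
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (
                        best is None or M[i][j] > M[best[0]][best[1]]):
                    best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges

def tan_structure(M):
    """Orient the best tree from attribute 0, then add class C as a parent
    of every attribute, yielding the tan structure."""
    parents = {i: ['C'] for i in range(len(M))}
    for i, j in maximum_spanning_tree(M):
        parents[j].append(i)
    return parents

M = [[0.0, 2.1, 0.3],
     [2.1, 0.0, 1.7],
     [0.3, 1.7, 0.0]]
print(tan_structure(M))  # every attribute gets C as parent, plus tree edges
```

Here Prim's algorithm stands in for any maximum (weight) spanning tree routine; in the structural em setting, the matrix would be recomputed from the expected sufficient statistics at each iteration.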
96 O. C.H. François and P. Leray
Datasets #attr. learn test #C %I NB-EM MWST-EM TAN-EM SEM SEM+T
Hepatitis 20 90 65 2 8.4 70.8 [58.8;80.5] 73.8 [62.0;83.0] 75.4 [63.6;84.2] 66.1 [54.0;76.5] 66.1 [54.0;76.5]
-1224.2 ; 29.5 -1147.6 ; 90.4 -1148.7 ; 88.5 -1211.5 ; 1213.1 -1207.9 ; 1478.5
Horse 28 300 300 2 88.0 75 [63.5;83.8] 77.9 [66.7;86.2] 80.9 [69.9;88.5] 66.2 [54.3;76.3] 66.2 [54.3;76.3]
-5589.1 ; 227.7 -5199.6 ; 656.1 -5354.4 ; 582.2 -5348.3 ; 31807 -5318.2 ; 10054
House 17 290 145 2 46.7 89.7 [83.6;93.7] 93.8 [88.6;96.7] 92.4 [86.9;95.8] 92.4 [86.9;95.8] 93.8 [88.6;96.7]
-2203.4 ; 110.3 -2518.0 ; 157.0 -2022.2 ; 180.7 -2524.4 ; 1732.4 -2195.8 ; 3327.2
Mushrooms 23 5416 2708 2 30.5 92.8 [91.7;93.8] 74.7 [73.0;73.4] 91.3 [90.2;92.4] 74.9 [73.2;76.5] 74.9 [73.2;76.5]
-97854 ; 2028.9 -108011 ; 6228.2 -87556 ; 5987.4 -111484 ; 70494 -110828 ; 59795
Thyroid 22 2800 972 2 29.9 95.3 [93.7;96.5] 93.8 [92.1;95.2] 96.2 [94.7;97.3] 93.8 [92.1;95.2] 93.8 [92.1;95.2]
-39348 ; 1305.6 -38881 ; 3173.0 -38350 ; 3471.4 -38303 ; 17197 -39749 ; 14482
Table 1: First line: best classification rate (over 10 runs, except Mushrooms over 5, in %) on the test dataset and its confidence interval, for the following learning algorithms: nb-em, mwst-em, tan-em, sem and sem+t. Second line: log-likelihood estimated with the test data and calculation time (sec) for the network with the best classification rate. The first six columns give the name of the dataset and some of its properties: number of attributes, learning sample size, test sample size, number of classes and percentage of incomplete samples.
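The confidence intervals reported in Table 1 can be reproduced directly from Eqn. 14. A small Python sketch, using the Hepatitis entry (T = 70.8%, N = 65 test samples) as an illustration; the results agree with the table up to the rounding of T:

```python
import math

# Confidence interval of Eqn. 14 (Bennani and Bossaert, 1996) for a
# classification rate T measured on N test samples; Z = 1.96 for a
# 95% interval.
def confidence_interval(T, N, Z=1.96):
    centre = T + Z**2 / (2 * N)
    half = Z * math.sqrt(T * (1 - T) / N + Z**2 / (4 * N**2))
    denom = 1 + Z**2 / N
    return (centre - half) / denom, (centre + half) / denom

lo, hi = confidence_interval(0.708, 65)  # Hepatitis row: 70.8 [58.8; 80.5]
print(round(100 * lo, 1), round(100 * hi, 1))
```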
are a good compromise between nb-em (classical Naive Bayes with em parameter learning) and mwst-em (greedy search with incomplete data).

6 Conclusions and prospects

Bayesian networks are a tool of choice for reasoning under uncertainty with incomplete data. However, most of the time, Bayesian network structural learning only deals with complete data. We have proposed here an adaptation of the learning process of the Tree Augmented Naive Bayes classifier from incomplete datasets (and not only partially labelled data). This method has been successfully tested on some datasets. We have seen that tan-em is a good classification tool compared to the other Bayesian networks we could obtain with structural-em-like learning methods.

Our method can easily be extended to unsupervised classification tasks by adding a new step in order to determine the best cardinality for the class variable.

Related future works are the adaptation of some other Augmented Naive Bayes classifiers to incomplete datasets (fan for instance), but also the study of these methods with mar datasets.

The mwst-em, tan-em and sem methods are respective adaptations of mwst, tan and greedy search to incomplete data. These algorithms apply in (a subspace of) dag space. (Chickering and Meek, 2002) proposed an optimal search algorithm (ges) which deals with the space of Markov-equivalent structures. Logically enough, the next step in our research is to adapt ges to incomplete datasets. Then we could test the results of this method on classification tasks.

7 Acknowledgements

This work was supported in part by the ist Programme of the European Community, under the pascal Network of Excellence, ist-2002-506778. This publication only reflects the authors' views.

References

Y. Bennani and F. Bossaert. 1996. Predictive neural networks for traffic disturbance detection in the telephone network. In Proceedings of IMACS-CESA96, page xx, Lille, France.

D. Chickering and D. Heckerman. 1996. Efficient approximation for the marginal likelihood of incomplete data given a Bayesian network. In UAI96, pages 158-168. Morgan Kaufmann.

D. Chickering and C. Meek. 2002. Finding optimal Bayesian networks. In Adnan Darwiche and Nir Friedman, editors, Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 94-102, S.F., Cal. Morgan Kaufmann Publishers.
D. Chickering, D. Geiger, and D. Heckerman. 1995. Learning Bayesian networks: Search methods and experimental results. In Proceedings of the Fifth Conference on Artificial Intelligence and Statistics, pages 112-128.

C.K. Chow and C.N. Liu. 1968. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462-467.

I. Cohen, F. G. Cozman, N. Sebe, M. C. Cirelo, and T. S. Huang. 2004. Semisupervised learning of classifiers: Theory, algorithms, and their application to human-computer interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(12):1553-1568.

D. Dash and M.J. Druzdzel. 2003. Robust independence testing for constraint-based learning of causal structure. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI03), pages 167-174.

A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B 39:1-38.

P. Domingos and M. Pazzani. 1997. On the optimality of the simple Bayesian classifier under zero-one loss. Machine Learning, 29:103-130.

N. Friedman, D. Geiger, and M. Goldszmidt. 1997. Bayesian network classifiers. Machine Learning, 29(2-3):131-163.

N. Friedman. 1997. Learning belief networks in the presence of missing values and hidden variables. In Proceedings of the 14th International Conference on Machine Learning, pages 125-133. Morgan Kaufmann.

N. Friedman. 1998. The Bayesian structural EM algorithm. In Gregory F. Cooper and Serafín Moral, editors, Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI-98), pages 129-138, San Francisco, July. Morgan Kaufmann.

D. Geiger. 1992. An entropy-based learning algorithm of Bayesian conditional trees. In Uncertainty in Artificial Intelligence: Proceedings of the Eighth Conference (UAI-1992), pages 92-97, San Mateo, CA. Morgan Kaufmann Publishers.

S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(6):721-741, November.

R. Greiner and W. Zhou. 2002. Structural extension to logistic regression. In Proceedings of the Eighteenth Annual National Conference on Artificial Intelligence (AAAI02), pages 167-173, Edmonton, Canada.

D. Heckerman, D. Geiger, and M. Chickering. 1995. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197-243.

S. Lauritzen. 1995. The EM algorithm for graphical association models with missing data. Computational Statistics and Data Analysis, 19:191-201.

P. Leray and O. Francois. 2005. Bayesian network structural learning and incomplete data. In Proceedings of the International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR 2005), Espoo, Finland, pages 33-40.

M. Meila-Predoviciu. 1999. Learning with Mixtures of Trees. Ph.D. thesis, MIT.

J.W. Myers, K.B. Laskey, and T.S. Lewitt. 1999. Learning Bayesian networks from incomplete data with stochastic search algorithms. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI99).

J.M. Pena, J. Lozano, and P. Larranaga. 2002. Learning recursive Bayesian multinets for data clustering by means of constructive induction. Machine Learning, 47(1):63-90.

M. Ramoni and P. Sebastiani. 1998. Parameter estimation in Bayesian networks from incomplete databases. Intelligent Data Analysis, 2:139-160.

M. Ramoni and P. Sebastiani. 2000. Robust learning with missing data. Machine Learning, 45:147-170.

D.B. Rubin. 1976. Inference and missing data. Biometrika, 63:581-592.

J.P. Sacha. 1999. New Synthesis of Bayesian Network Classifiers and Cardiac SPECT Image Interpretation. Ph.D. thesis, The University of Toledo.

D. J. Spiegelhalter and S. L. Lauritzen. 1990. Sequential updating of conditional probabilities on directed graphical structures. Networks, 20:579-605.

98 O. C.H. Franois and P. Leray


 
   !"#$% &'(*)+ #
%,.-/
0'1234
57698:<;=?><@A;B8:DCFE!G?;B;IHDJKL69M N%;OC8DP'PQ6 NRJ<;B8:TSUCFV&E ;W5X>YG!CFC8DC8
Z!CF[;IE%V%\]C8^V#PB_a`b8D_cPBE \];IV%6dPQ8T;B8:e=PQ\$[f<V%698DH]KLgF6dC8<gRChFJYiV&E%Cgkj^V!i#86d@BCFE h%6dVmlBJ
SU>onp>qPsrTtIuL>vuQtBw'JxByIuQt{z,q2iV&E%Cgkj^VFJYV%jDC4|#CFV%jDCFE M9;B8:<hF>
CR}~\];B6MAL<L ^^L0mQ'D^DLU'"
asB
`8*\W;B8^lE%C;BM [E%PBMdC\:DPQ\W;B698<hFJV%j<CW\];B698@I;IE 69;IYMdCPB_68^V&CFE%Ch%VCj<;@BChp\$PQ8DPBV&PQ8<6gF;BM9Mdle68
V&CFEk\]hUPB_V%jDCPBh&CFE @A;IM9C@I;IE 69;IM9ChFJB6984V%j<Ch%C8<h&CV%j<;IV j<6dHQjDCFE0@I;BM9fDChU_PBE0V%jDC@I;IE 69;IM9CXPB_Y68^V&CFE%Ch%V
CgRPQ\$C"\$PBE CM6dBCMdl69V%jj<69HQjDCFE&}PBE :DCFE C:PBYh&CFE%@I;IV%6dPQ8<hF>0i#8<_PBE%V%f8<;IV&CMdlBJBCh&V%;IM69h%j<698<HjDCFV%j<CFE
PBE8DPBVW;*q;slBCh%69;B88DCFV~PBE%CRrLj6d6dV%hV%jDCh%Ce\$PQ8DPBV&PQ869gF6dV~l[<E%PB[CFE%V%69Ch]69h$j<6dHQjMdl698V&E ;BgRV%;IM9C
68HBC8<CFE ;BM>`8V%j<69h[Y;I[CFEsJ C[E%Ch&C8V;\]CFV%jDP':V%j;IVFJX'lf69M9:<698<HTfD[PQ8V%j<CTgRPQ8<gRCF[<V$PB_
;Bh h%6dHQ8<\$C8V$M9;IV&V%69gRCBJ[<E%P@'69:<Ch_PBE]69:<C8^V%6d_clL698DH;B8^l@'69PQM9;IV%6dPQ8<hPB_!V%jDC[<E%PB[CFE%V%6dChPB_$c[;IE%V%6;BM
\]PQ8DPBV&PQ8<69gF6dVmlPB_YV%jDC"PQfDV&[f<V0;B8:_PBEXgRPQ8h&V&E f<gRV%698<H?\W698<69\];BMPIC8<:<698DHgRPQ8V&CRr'V%h> C69M9M9fh&V&E ;IV&C
V%j<C;I[<[M969gF;IV%69PQ8PB_UPQf<E,\$CFV%jDPL:6dV%j;$E C;BMq;lBCh%69;B88DCFVmPBE%698@BCFV&CFEk698<;IE%lh%gF6dC8<gRCB>
aBQ XX'Q~7 fDV%69PQ8HQ6d@BC8Tj<6dHQj<CFE&}PBE :DCFE%C:PBYh&CFE%@I;IV%6dPQ8<hF>
i!8D_cPBE%V%f<8<;IV&CMdlBJV%j<C[<E%PBYMdC\ PB_@BCFE 69_lL698DH
`b8\];B8l[<E%PBMdC\:DPQ\];B698<hJ<V%jDC@I;IE 69;IM9Ch"PB_ 69\} jDCFV%j<CFEPBE"8DPBV;q;slBCh%69;B88DCFVmPBE WCRrDj<6d6dV%hXV%jDC
[PBE%V%;B8<gRC$j<;s@BC:6CFE C8^V!E%PQMdChF>4n!_cV&C8J;8f\4CFE [<E%PB[CFE%V%69ChPB_?\$PQ8DPBV&PQ869gF6dV~l_E PQ\6dV%h$:DPQ\];B698PB_
PB_"PBh&CFE @A;IM9C]698D[fDV@I;IE 69;IYMdCh;B8<:;Th%698DHQM9C$PQfDVb} ;I[<[M69gF;IV%6dPQ86h#j<69HQj<Mdl698V&E ;BgRV%;IMdC698eHBC8DCFE ;BM0;B8<:
[fDV#@A;IEk69;IMdC4;IE C:<69h%V%698DHQf<69h jDC:>`8e;W69PQ\$C:<69gF;BM E%C\];B68<h?V&PC]h&PCF@BC8_PBE4[PQM9lV&E%CFCh>W#M9V%jDPQfDHQj
:<69;IHQ8<PQh&V%69gT;I[<[YM969gF;IV%6dPQ8J_cPBEWCRrD;B\$[MdCBJV%jDC698D[fDV ;B8*;I[<[E%PrD69\];IV&CW;B8l'V%69\$CW;BMdHBPBEk6dV%j<\69h;s@A;B6M9;IMdC
@I;IE 69;IMdChgF;I[<V%f<E%CV%jDC?8<:<68DHQhX_cE%PQ\/:<6dCFE%C8V":<6} _cPBE*:DCgF69:<698<H6d_];2HQ6d@BC88DCFVmPBE 69h\$PQ8<PBV&PQ8DCBJ
;IHQ8DPQh&V%6gV&Ch&V%hW;B8:V%jDCPQfDV&[f<V$@A;IE 6;IMdC\$PL:DCM9h C_PQf<8:{V%j;IV69V%hE f<8V%69\$C!E%Cf6dE%C\$C8V%hV&C8<:V&P
V%jDC{[PQh h%6dMdC]:69h&C;Bh&ChF>ef<MdV%69[MdC$698D[f<V@A;IE 6;IMdCh _cPBE%Ch&V%;BM9Mfh&C698;$[<Ek;BgRV%69gF;BMh&CFV&V%698<HD>
;B8<:$;h%68DHQMdCPQfDV&[fDV0@I;IE 69;IMdC698p_;BgRV ;IE%C"V~l'[69gF;BM9M9l `b8V%j<69h?[Y;I[CFEsJC{[<E%Ch&C8V;T8DCF4Ja\$PBE%CW[<E ;Bg}
_cPQf<8<:698;B8lV~l'[CPB_ :<6;IHQ8DPQh&V%69g?[<E PBMdC\> V%69gF;IM9C\$CFV%j<P':_cPBEh&V%f<:DlL698DH\$PQ8DPBV&PQ8<69gF69V~lPB_
DPBE#\];B8^lW[<E%PBMdC\WhFJV%j<C!E CM9;IV%6dPQ8CFV~CFC8V%jDC q;lBCh%6;B88DCFV~PBE%LhF>z,jDC\$CFV%jDPL:f<69M:<hWfD[PQ8
PQfDV&[f<V@A;IEk69;IMdCW;B8:*V%jDC{PBh&CFE%@I;IMdCW68D[fDV?@I;IE 6} ;M9;IV&V%69gRCPB_;BM9M N&PQ698^V@I;BM9fDC;Bh%h%6dHQ8<\]C8^V%hV&PV%jDC
;IMdCh69h\]PQ8DPBV&PQ8DC698V%jDCh&C8<h&C2V%j<;IVj6dHQjDCFE&} PBh&CFE @A;IM9C@I;IE 69;IMdChf<8<:<CFEeh&V%f<:<lY{V%j<69hM9;IV&V%69gRC
PBE :DCFE C:@A;BMfDCh_cPBEV%jDCW698<[fDV?@A;IEk69;IMdCh?HQ6d@BCWE 69h&C \$PBE%CFP@BCFEJX69hC8<j<;B8gRC:69V%j698D_cPBE \];IV%69PQ8;IPQfDV
V&P;j<6dHQj<CFE&}PBE :DCFE%C:PQfDV&[fDV{_PBEV%jDCe\W;B698@I;IE 6} V%jDCCRCgRV%hPB_$V%jDC@A;IE 69PQf<h;Bh%h 6dHQ8<\$C8V%hTPQ8V%jDC
;IMdCPB_ 698V&CFE%Ch&VF>`b8;]6dPQ\]C:<69gF;BM7:69;IHQ8DPQh&V%69g;I[D} [<E%PBY;I69M96dVml:<69h%V&E 6dfDV%69PQ8P@BCFE4V%jDCW\W;B698@A;IEk69;IMdC
[M96gF;IV%6dPQ8JI_PBE0CRrD;B\$[MdCBJBPBYh&CFE%@L698DH#h%l'\$[V&PQ\]h ;B8<: PB_698V&CFE%Ch&VF>z,j<C;Bh%h 6dHQ8<\$C8VM9;IV&V%6gRC6hf<h%C:_cPBE
h%6dHQ8h"V%j<;IV;IE%C\$PBE%C4h&CF@BCFE C469MMYE Ch%f<MdV"698T;W\$PBE%C 69:DC8V%6d_cl'68DH;B8l@L6dPQM9;IV%6dPQ8<h{PB_V%jDC[<E PB[CFE V%6dChWPB_
h&CF@BCFE%C?:<69h%C;Bh&CC698DH4V%jDC#\$PQh%VM96dBCM9lp@I;BM9fDCPB_V%jDC \$PQ8DPBV&PQ869gF6dV~l698V%jDC8DCFV~PBE%> KLfDh%CfDC8V%MdlBJ
:<69;IHQ8<PQh&V%69g?@I;IE 69;IMdCB> zjDCgRPQ8<gRCF[<V,PB_ \$PQ8<PBV&PQ8<69g} \]69869\];BMBPIC8<:<698<H#gRPQ8V&CRr'V%h ;IE%CgRPQ8<h&V&Ekf<gRV&C:pV%j<;IV
6dVmle698:<69h%V&E 6dfDV%69PQ8j<;Bh?CFC8698V&E%PL:<f<gRC:eV&PTgF;I[D} g j;IE ;BgRV&CFE 69h&CV%jDC*69:DC8V%6d<C:@L6dPQM9;IV%6dPQ8<h;B8:2[<E%PI}
V%fDE%CeV%j<69hWV~l'[CePB_'8<PsMdC:<HBC_cPBEq;lBCh 69;B828DCFVb} @L69:DC?_PBE#_cf<E%V%jDCFE,698@BCh&V%6dHQ;IV%6dPQ8>
PBE%'h?c ;B8:DCFEG?;B;IHTF Bo>dJDIuBuIk>PBE%C#h%[CgF69_} z,j<C#E f8^V%69\]C#gRPQ\$[M9CRrL6dVmlWPB_aPQfDE"\$CFV%jDPL:6hXCRr'}
69gF;BM9M9lBJ;48<CFV~PBE%W69hh%;B69:]V&PpC69h&PBV&PQ8<C#6d_V%jDC!gRPQ8L} [PQ8DC8V%69;BMY68{V%j<C!8'f<\CFEPB_PBYh&CFE%@I;IMdC!@A;IE 6;IMdCh
:<6dV%69PQ8<;BM[E%PB;I69M6dV~l:<69h%V&E 6dfDV%69PQ8gRPQ\$[fDV&C:_cPBE f<8<:<CFE{h&V%f<:<lB>+DPBEM9;IE%HBCFE8DCFV~PBE%Lh698<gFM9f:<698DH;
V%jDCPQfDV&[fDV@A;IEk69;IMdC{HQ6d@BC8h&[CgF6dg$PBh%CFE%@A;IV%69PQ8<h M9;IE%HBC]8'f<\CFE!PB_PBh&CFE @A;IM9Cp@I;IE 69;IYMdCh!V%jDCFE%CF_cPBE%CBJ
69hh&V&PLg j;Bh&V%69gF;BM9Mdl:DPQ\]68<;IV&C:l;B8lh f<g j*:69h&V&E 6} PQfDEX\]CFV%jDP':[E%Ps@L69:DCh _cPBE @BCFE 69_lL698DH![;IE%V%69;BM\$PQ8DPI}
V&PQ8<6gF6dV~l_PBE*M96\]6dV&C:+h fDh&CFV%hPB_{@A;IE 6;IMdChPQ8<MdlB> V&CFE \Wh PB_h&V&PLg j<;Bh&V%6g#:DPQ\W698<;B8<gRC,;B\$PQ8<H4V%jDCh&C!:<69hb}
z,j<Cf<8D_;@BPQf<E ;IMdCgRPQ\$[fDV%;IV%69PQ8<;BMagRPQ\][MdCRrD6dV~lTPB_ V&E 69fDV%6dPQ8<h>7DPBE;[<E%PB;IY69M96dVml:<69h%V&E 6dfDV%69PQ8SXEsI 4
V%jDC{[<E%PBM9C\JajDPCF@BCFEsJX69h4M6dBCMdlV&Pe_cPBE%Ch&V%;BM9MXV%jDC P@BCFEeV%jDCPQfDV&[fDVT@I;IE 69;IYMdCBJ?V%jDCgFf<\pf<M9;IV%69@BC :<69hb}
:DCh 6dHQ8PB_aCh h&C8^V%6;BM9Mdl\$PBE%CC WgF6dC8V\$CFV%j<P':<h> V&E 69fDV%6dPQ8_f< 8<gRV%6dPQZ 8 Y\[^]69h$:DCFY 8DC:;B/h Y\[^]RN _ ` 
C";I[<[YM96dC:PQf<EU@BCFEk6dgF;IV%6dPQ8p\$CFV%jDPL:p_PBE h&V%f<:<l^} SXEsI  8a_ _cPBE;BM9M4@I;BM9fDC9 h _ PBb_ > DPBEV~P
698<H\]PQ8DPBV&PQ8<69gF6dVmlV&P{;]q;lBCh 69;B88DCFVmPBE T698@BCFVb} :<6h&V&E 6df<V%6dPQ8<h"S EAI 4#;B8<:S %E cI 4,Ps@BCFdE J;Bh%h&PLgF6}
CFE 68<;IE%l$h%gF6dC8<gRCB> `b8]E CgRC8^VlBC;IE hFJLC#:DCF@BCM9PB[C:; ;IV&C:#6dV%e j Y2[<]I !;B8<e : Y [^].f I #E%Ch&[CgRV%6d@BCMdlBJC
8DCFVmPBE _PBE0V%jDC:DCFV&CgRV%6dPQ8PB_YgFM9;Bh%h%69gF;BM'h&68DCX_cCF@BCFE h%;slV%j<;IV#SXgE cI 4"6:h RI F3KXh'$ RRN JNKkIci MAPjF"EdJSGYH G, P@BCFE
698[Y6dHQhF>qPBV%jV%j<C8DCFV~PBE% vh{h%V&E f<gRV%fDE%CT;B8:6dV%h SXEsI 4k JX:DC8<PBV&C:S Esk I 4` 8/SX%E cI kJX6d@_ Y [<]Sf N _ ` 8
;Bh%h%P'gF69;IV&C:[E%PB;I69M6dV%6dChCFE%CCM969gF6dV&C:_cE%PQ\ V~P Y [^] N _ _PBE;BM9 M _ - I 4TqCFE HBCFElJ wBtIu^k>C
CRrL[CFE%V%hF>Z?fDE 68DHTV%jDCCM969gF6dV%;IV%69PQ8698V&CFE%@L6dCFhFJUV%jDC 8DPh ;l]V%j<;IVV%jDC?8DCFV~PBE%W6@h JSRXFBI FHG698{6dV%hh&CFVPB_
CRrL[CFE%V%hWj<;B:[<E%PL:<f<gRC:h&CF@BCFE ;BM!h&V%;IV&C\$C8V%hV%j<;IV PBh%CFE%@A;IYMdC?@A;IE 6;IMdCmh 6d_
h%f<HBHBCh&V&C:[<E%PB[CFE%V%69ChPB_U\$PQ8DPBV&PQ8<6gF6dV~lB> Ch&V%f<:L} S EsI qp n r 8S EsI sp n c
6dC:V%jDCh&C[<E PB[CFE V%6dChW698V%jDC8DCFV~PBE%f<h%698DHPQfDE n D n co
@BCFE 69gF;IV%6dPQ82\$CFV%jDPL:>C_cPQf<8<:2;h%\];BMM#8'f<\} _cPBE#;BM9MLNbPQ698V,@I;BM9fDC;Bh%h 6dHQ8<\$C8V%h n ! n c V&`P ]>`8D_cPBE&}
CFE?PB_@L6dPQM9;IV%69PQ8<h!PB_V%jDC$h%f<HBHBCh&V&C:*[<E PB[CFE V%6dChPB_ \];BMMdlh&[C;IL698DHDJC]j<;@BC]V%j;IV;8DCFV~PBE%69h69h&PI}
\$PQ8<PBV&PQ8<69gF6dVmlBJ7j<6g j[<E%P@BC:*V&PC]698<:<6gF;IV%6d@BCPB_ V&PQ8DC?698{:<69h&V&E 69fDV%6dPQ8]69_C8V&CFE 698<Hp;pj<6dHQjDCFE%}PBE :DCFE%C:
\$PL:DCM9M698DH68<;B:DC'f<;BgF6dChF> @I;BM9fDC;Bh%h%6dHQ8<\]C8^VV&PV%jDCPBh&CFE%@I;IMdC@A;IEk69;IMdCh
zjDC?[;I[CFE,69h"PBE%HQ;B8<6h&C:;Bh,_cPQM9MdPhF>0`8K'CgRV%6dPQ8 gF;B8<8<PBV\];IBCj<6dHQjDCFE%}PBE :DCFE%C:@A;BM9f<ChPB_,V%jDC{PQfDVb}
'JCE%CF@L6dCF V%jDCegRPQ8<gRCF[<VWPB_\]PQ8DPBV&PQ8<69gF6dVmlB>`b8 [f<V{@I;IE 69;IM9CMdCh hM6dBCMdlB>z,j<CgRPQ8gRCF[<VPB_;B8V%6}
K'CgRV%69PQ8*x'J7C$[<E%Ch&C8VPQfDE\$CFV%jDPL:_cPBEh&V%f<:DlL698DH V&PQ8<6gF6dV~lWj;BhV%jDC?E%CF@BCFE h&C698V&CFE%[<E CFV%;IV%6dPQ80V%jDC8DCFVb}
\$PQ8<PBV&PQ8<69gF6dVmlB>$CWE%CF[PBE%V?PQ8V%jDC{;I[<[M969gF;IV%69PQ8ePB_ PBE%69h,h%;B69:V&PWCWH GN JcI FHGp69b8 6d_
PQfDE?\$CFV%jDPL:68K'CgRV%69PQ8D>!z,jDC4[Y;I[CFE#C8<:<h#6dV%j
PQfDE,gRPQ8gFM9f<:<698<HpPBYh&CFE%@I;IV%6dPQ8<h,698KLCgRV%6dPQ8y'> n D n c o S EsI qp n r 'S EsI sp n c
  
7X  Xe /7XB7XmD  _cPBE;BMM<@I;BM9fDC;Bh%h 6dHQ8<\$C8V%h n ! n c V&tP ]> |#PBV&C?V%j<;IV6d_
;8DCFVmPBE 69h6h&PBV&PQ8DC]6986dV%h?PBYh&CFE%@I;IMdC@A;IEk69;IMdCh
i#[PQ8E%CF@L6dCF698<HV%jDCgRPQ8<gRCF[<V0PB_\$PQ8DPBV&PQ869gF6dV~lBJBC HQ6d@BC8V%jDCePBE :DCFE 698<HQu h 8 PQ8V%jDC69E{h&CFV%hPB_@I;BM9fDChFJ
;Bh%h f<\$CV%j<;IVT;q;lBCh%69;B88DCFV~PBE%2f<8<:DCFEh&V%f<:<l V%jDC8TV%jDC8DCFV~PBE%69h,;B8V%6dV&PQ8DC4HQ6d@BC8V%jDCE%CF@BCFEkh&C:
698gFM9f<:DCh;h 698DHQMdCPQfDV&[fDV@I;IE 69;IM9C  ;B8< : PBD} PBE :<CFE 698DHQhF> #M9V%jDPQfDHQj;B8^V%6dV&PQ869gF6dV~lV%j'f<h69hcE%CR}
h&CFE @A;IM9C@A;IEk69;IMdC h J  "!###$!% &J (' I @BCFE h%CMdlDC'f<6d@I;BMdC8^VV&P69h&PBV&PQ8<69gF69V~lBJ CCRrL[M96gF6dV%Mdl
698;B:<:<69V%6dPQ8J0V%jDC8DCFV~PBE%\];l698<gFM9f<:<CW;B8;IE%6} :<6h&V%698DHQf<6h%jCFVmCFC8V%jDCXVmP!V~l'[Ch7PB_D\$PQ8<PBV&PQ8<69g}
V&E ;IE l8'f<\CFEPB_$698V&CFE \$C:69;IV&C@A;IEk69;IMdChj69g j 6dVmlh 698<gRCT;*:DPQ\W;B698PB_?;I[<[M969gF;IV%69PQ8\];lCRrDj<6d6dV
;IE%C8DPBV*PBh&CFE @BC:698 [E ;BgRV%69gRCB* > )X;Bg j@I;IE 69;IMdC ;B8698V&E 69gF;IV&C$gRPQ\698<;IV%69PQ8PB_69h&PBV&PQ869gF6dV~l;B8<:;B8L}
+,]698+V%jDC8<CFV~PBE%;B:DPB[<V%hePQ8DCPB_];8<69V&Ch&CFV V%6dV&PQ869gF6dV~l_cPBE698V&CFE%E%CM9;IV&C:PBYh&CFE%@I;IMdC?@A;IEk69;IMdChF>
- . +,/   021 !###3!%054 Q J 6 '7IJPB_#@I;BM9fDCh>*C qf<6M9:<698DHfD[PQ8{V%j<C?;IP@BC4:DCF8<69V%6dPQ8<hFJCj<;s@BC
;Bh%h f<\$CV%j<;IVV%jDCFE%CCRrD69h&V%h;V&PBV%;BM PBEk:DCFE 698D9 H 8PQ8 V%j<;IV:DCgF69:698DHjDCFV%j<CFE{PBE8DPBV;q;lBCh 69;B88DCFVb}
V%j<6h"h&CFV,PB_0@A;BMfDChFD6dV%j<PQfDV"MdPQh%h"PB_0HBC8DCFE ;BM6dV~lBJ<C PBE%$69hX6h&PBV&PQ8DC#;B\]PQf<8^V%hV&P4@BCFEk6d_lL698DHV%j;IVC8V&CFE&}
;Bh%h f<\$CpV%j<;I:V 0<; 80>= jDC8DCF@BCF@E ?A8CB>pz,jDCpPBE&} 698<HT;B8^lj<6dHQj<CFE&}PBE :DCFE%C:*@A;BM9f<C];Bh%h%6dHQ8<\]C8^VV&P6dV%h
:DCFEk698DHQh[CFE@I;IE 69;IMdC ;IE%CV%;IBC84V&P#698<:<f<gRC ;,[;IE%V%69;BM PBh%CFE%@A;IYMdC@A;IE 6;IMdCh$E%Ch f<MdV%h$698;*h%V&P'gkj<;Bh&V%69gF;BM9M9l
PBE :<CFE 698D,H DPQ8WV%jDC#h%CFV PB_NbPQ698V @I;BM9fDC;Bh h%6dHQ8<\$C8V%h :DPQ\W698<;B8V[<E PB;I69M969V~l]:<69h%V&E 6dfDV%69PQ8]Ps@BCFEV%jDCPQfDVb}
V&PW;B8lh%fDh&CFV"PB_0V%jDC8<CFV~PBE%> vh#@I;IE 69;IMdCh> [f<V @A;IEk69;IMdCBv> )Xh%V%;IM969h%j698DH;B8V%6dV&PQ8<69gF6dVml];B\$PQf<8V%h
zjDCgRPQ8<gRCF[<V$PB_ E,FHG>FBI F"GJLKJN MOJ.GQP"JSRN TUJIVXWDN JNFHG V&P@BCFE 69_lL698DHV%j<;IVC8V&CFE 698DHT;B8lh%f<g j;Bh%h%6dHQ8<\]C8^V
8DPf69M9:<hfD[PQ8V%jDCX[PQh%V&CFE 6dPBE7[E%PB;I69M6dV~l:69h&V&E 6} E%Ch f<MdV%h"698;]:<PQ\]698<;IV&C::<6h&V&E 6df<V%6dPQ8>
f<V%6dPQ8<hPs@BCFE"V%jDC?PQfDV&[f<V@I;IE 69;IM9C#HQ6d@BC8V%jDC?@I;IE 6} zjDC2[<E%PBMdC\ PB_:DCgF69:<68DH\]PQ8DPBV&PQ8<69gF6dVml[<_c[PBE
PQf<hYN&PQ698V @A;BMfDC";Bh%h%6dHQ8<\]C8^V%h0V&PV%j<C,8DCFV~PBE% vh PBD} ;q;slBCh%69;B88DCFV~PBE%69h'8<Ps8V&PC*gRPQ|#S }
h&CFE @A;IM9Cp@I;IE 69;IYMdChWc ;B8:<CFEpG?;B;IH#I >dJ IuBuIk> gRPQ\$[YMdCFV&C698HBC8DCFE ;BMJ;B8<:698_;BgRV,E%C\];B68<h"gRPQ|#S }
z,j<CgRPQ8<gRCF[<VFJ\$PBE Ch&[CgF6dYgF;BM9MdlBJW69h:DCF8<C:698 gRPQ\$[YMdCFV&C_cPBE[PQM9lV&E%CFChc ;B8:DCFE*G?;B;IHFI >dJ

100 L. C. van der Gaag, S. Renooij, and P. L. Geenen


I uBuIk>`8@L6dCF PB_!V%jDCh&CgRPQ\$[MdCRrD6dVmlgRPQ8<h%6:DCFE ;A} 0(&#'(
V%6dPQ8<hJY ;B8:DCFEG?;B;IHF,I >:<Ch%6dHQ8DC:e;B8;I[[<E%Pr'}
69\];IV&C{;B8^l'V%69\$CW;BM9HBPBE 6dV%j<\_cPBE4@BCFEk6d_lL698DH#jDCFV%jDCFE 0(&#'% 0%&#'(
PBE]8DPBV];HQ6d@BC88DCFVmPBE 69h$\]PQ8DPBV&PQ8DCB>z,j<69hp;BM}
HBPBE 6dV%j\ h&V%f<:6dCh$V%jDCeE%CM9;IV%6dPQ8CFVmCFC8V%jDCPQfDVb} 0(&#>1 0%&#'% 01$#'(
[fDV@A;IEk69;IMdC;B8<:C;BgkjPBh&CFE%@I;IMdC{@A;IEk69;IMdCh&CF[D}
;IE ;IV&CMdlBJ698V&CFEk\]hpPB_,V%jDCh%6dHQ8PB_,V%jDCf<;BM6dV%;IV%6d@BC
698 YfDC8<gRC?CFV~CFC8TV%jDC\ cCM9M9\];B8J wBwIu^kfD[PQ8 0%&#>1 01$#'%

698<gRPQ8gFM9f<h%6d@BCE Ch%f<MdV%hFJ6dV&CFE ;IV%6d@BCM9l V%6dHQj^V&C8<C:8'fL}


\$CFE 6gF;BM0PQf8<:<h?;IE%C$Ch&V%;IM69h%jDC:PQ8V%jDC$E CMdCF@A;B8V 01$#>1
[<E%PBY;I69M96dV%69Chf<h%698<H!;B8;B8lV%69\]C\]CFV%jDP':$;s@A;B6M9;IMdC 0 6dHQfDE%C Ia!84CRrD;B\$[MdC;Bh%h%6dHQ8\$C8^VaM9;IV&V%69gRC_PBEUV~P
_cE%PQ\ 5a69f*;B8<:CM9M9\W;B8 wBwBtQk>i#8<_PBE%V%f8<;IV&CMdlBJ V&CFE 8<;IE l{PBh%CFE%@A;IYMdC@A;IEk69;IMdChF>
V%jDC*P@BCFE ;BM9M4E f<8V%69\$CE%C'f<6dE%C\]C8^V%hPB_V%jDC;BMdHBPI}
E 6dV%j\V&C8:V&P_PBE%Ch%V%;BM9MLf<h&C"698;?[<E ;BgRV%69gF;BMDh%CFV&V%698DHD>
aQX  m  /7XB7XmD  _cPBEV%jDC#V~PV&CFE 8<;IE%l$@I;IE 69;IM9Ch + ;B8<:*)W> DPBE_fDE&}
 V%jDCFE698D_cPBE \];IV%6dPQ8;IPQfDV{M9;IV&V%69gRChW68HBC8DCFE ;BMJC
n?fDE\$CFV%j<P': _cPBE h&V%f<:DlL698DH \$PQ8DPBV&PQ869gF6dV~l 698 E%CF_cCFE,V&PeG!,E ;I+ .V -FCFE\J w0/<sk>
q;lBCh%6;B8 8DCFV~PBE%Lh8<Ps f<69M9:hfD[PQ8V%jDCgRPQ8L} zaP:DCh%gRE 6dCTV%j<CCRCgRV%hPB_4V%j<Ce@I;IE 6dPQf<h{@I;BM9fDC
h&V&E fgRV!PB_;B8;Bh%h%6dHQ8<\]C8^VM9;IV&V%69gRCB>z,jDCM9;IV&V%69gRC$698L} ;Bh%h%69HQ8<\$C8V%h n PQ8V%jDC[<E%PB;I6M96dV~l2:<69h%V&E 6dfDV%69PQ8
gFM9f<:<Ch;BM9MN&PQ698^V@I;BM9fDC];Bh h%6dHQ8<\$C8V%hV&PV%jDCWh&CFVPB_ Ps@BCFEV%jDCPQfDV&[fDV@I;IE 69;IM9CBJ!V%j<C;Bh%h%69HQ8<\$C8VM9;IVb}
PBh&CFE @A;IM9C@I;IE 69;IYMdCh ;B8<:]69h0C8j<;B8<gRC:$69V%j[<E%PBD} V%69gRC69hC8j<;B8<gRC:6dV%j[E%PB;I69M69h&V%69g,68D_PBEk\];IV%6dPQ8>
;I69M69h&V%69g698D_cPBE \];IV%6dPQ8gRPQ\$[f<V&C:_E%PQ\ V%jDC8DCFVb} 26dV%jTC;BgkjCMdC\]C8^1V # n "PB_ V%jDCM;IV&V%69gRCBJV%jDCgRPQ8L}
PBE%f<8<:<CFEph&V%f:DlB><E%PQ\ V%jDCM9;IV&V%69gRCBJ ;BM9M@L6dPQM9;A} :<6dV%69PQ8<;BM7[E%PB;I69M6dV~lT:<6h&V&E 6df<V%6dPQ8SXEsI  p n #P@BCFE
V%6dPQ8<h0PB_V%jDC"[<E PB[CFE V%6dCh0PB_\]PQ8DPBV&PQ8<69gF6dVmlp;IE C,69:DC8L} V%jDCTPQf<V&[fDV@A;IEk69;IMdA C  6h];Bh%h&PLgF69;IV&C:>Ce8DPBV&C
V%6d<C:J,_E PQ\ #j<69g j\]69869\];BM,PIC8<:<68DHgRPQ8V&CRr'V%h V%j<;IVV%jDCh%C:<69h&V&Ek6dfDV%6dPQ8hW;IE%CE%C;B:69MdlgRPQ\$[YfDV&C:
;IE%C4gRPQ8h&V&E f<gRV&C:_cPBE_fDE%V%j<CFE,698^@BCh%V%6dHQ;IV%6dPQ8> _cE%PQ\ V%jDC*q;lBCh%6;B8+8DCFVmPBE%2f<8<:<CFETh&V%f<:DlB>C
     cD c  69MM E%CFV%fDEk8V&PV%j<CTgRPQ\$[MdCRrD6dV~lPB_V%jDCgRPQ\$[YfDV%;A}

V%6dPQ8<h698^@BPQM9@BC:T698K'CgRV%6dPQ8ex'> D>
z,jDC$ R RUJ GEWX G"dBN JLKkp_cPBE!V%jDCh&CFV PB_ PBh&CFE%@}
;IMdC@A;IE 6;IMdCh$PB_;q;slBCh%69;B828DCFV~PBE%gF;I[<V%f<E%Ch 32 46587 9 :*      cD c 
;BM9MQNbPQ68^V@A;BMfDC!;Bh h%6dHQ8<\$C8V%hV&P WJ<;BMdPQ8DHp6dV%j{V%jDC ql*f<6M9:<698DHfD[PQ8*V%j<C;Bh%h 6dHQ8<\$C8VM9;IV&V%6gRCBJ @BCFE 6}
[;IE%V%6;BMaPBEk:DCFE 698DHCFV~CFC8V%jDC\>DPBEC;Bgkj@I;BM9fDC _cl'698<H\]PQ8DPBV&PQ8<69gF6dVml68:<69h&V&E 69fDV%6dPQ8;B\$PQf<8V%h$V&P
;Bh%h%69HQ8<\$C8V n V&P ]J;B8CMdC\$C8V # n "69h,68<gFM9f<:DC: g j<Cg%L698DH$jDCFV%j<CFE[Y;IE%V%69gFf<M9;IE":DPQ\W698<;B8<gRC![<E%PB[CFE&}
698V%jDCM9;IV&V%69gRCB>zjDCePBV&V&PQ\ PB_V%jDCM9;IV&V%69gRCC8L} V%6dChUjDPQM9:;B\]PQ8DHV%jDC[<E%PB;IY69M96dVml#:<69h%V&E 6dfDV%69PQ8<h;Bhb}
gRPL:DCh!V%jDC$;Bh%h%6dHQ8<\]k C8^V n _PBE?j69g jeC$j<;s@BCV%j<;IV h&PLgF69;IV&C:T#6dV%jV%jDCM9;IV&V%69gRHC vhCMdC\$C8V%hF>
n D n c _PBE$;BM9M n c - N pkV%jDC{PBV&V&PQ\ V%j'f<hpC8L} CE%CgF;BM9MaV%j<;IV?;{q;slBCh%69;B88DCFVmPBE%e69h#6h&PBV&PQ8DC
gRPL:DCh0V%jDCMdPsCh&Vb}PBE :DCFE C:$@A;BMfDC;Bh h%6dHQ8<\$C8VUV&P W> 6986dV%hh&CFV$PB_#PBh&CFE%@I;IMdC@I;IE 69;IMdCdh  69_,C8^V&CFEk698DH
z,jDCV&PB[PB_V%jDCM9;IV&V%69gRCC8<gRPL:DChUV%jDC;Bh%h%6dHQ8\$C8^V n c c ;j<6dHQjDCFE%}PBE :DCFE%C:@A;BM9f<Cp;Bh%h 6dHQ8<\$C8V#V&bP  E%Ch f<MdV%h
_cPBE j<69gkjC,j;@BC,V%j<;IV n c D n c c _cPBE ;BM9M n c k - N k 698;h&V&PLg j<;Bh%V%69gF;BM9Mdl:DPQ\]698<;B8V![<E%PB;I6M96dV~lT:69h&V&E 6}
V%jDCpV&PB[V%jfh#C8gRP':DCh!V%jDCj<6dHQj<Ch&Vb}PBE :DCFE%C:e@I;BM9fDC fDV%69PQ8P@BCFEV%jDCpPQfDV&[fDV!@A;IEk69;IMd,C >C_cf<E%V%jDCFE
;Bh%h%69HQ8<\$C8V V&tP ]>0`b8$V%jDC,M9;IV&V%6gRCBJC,_fDE%V%jDCFE j<;s@BC E%CgF;BM9MV%j<;IVV%jDC[;IE%V%69;BMPBE :DCFEk698D H DPQ8V%jDC4NbPQ698V
V%j<;IVX;B8WCMdC\$C8V # n U[<E%CgRC:<ChX;B8]CM9C\$C8^V # n c c @I;BM9fDCT;Bh%h%69HQ8<\$C8V%h]V&O P  69h$C8<gRPL:DC::<6dE%CgRV%Mdl698
6d_ n D n c c C\$PBE%CFP@BCFE]h%;lV%j<;I!V # n [<E CgRC:DCh V%jDC;Bh%h%6dHQ8\$C8^V?M9;IV&V%69gRC_cPB:E ]>?C]8DPs gRPQ8<h 69:DCFE
# n c c :<6dE CgRV%MdlT6d_ V%jDCFE%C$69h!8DP;Bh%h%6dHQ8\$C8^V n c 6dV%j ;BM9MY[;B6dE hXPB_7gRPQ8<:<69V%6dPQ8<;BM<[E%PB;I69M6dV~l:<69h&V&Ek6dfDV%6dPQ8h
n " n c " n c c > z,jDC[;IE%V%6;BMPBEk:DCFE 698DH:DCFY8DC:plpV%jDC S EsI ap n ];B8<:SXEsI ap n c $;Bh%h&PLgF69;IV&C:69V%jCMdCR}
M9;IV&V%69gRC$V%j'f<hgRPQ68<gF69:DCh?69V%jeV%j<C$[;IE%V%69;BM0PBE :<CFE 698DH \$C8V%*h ! n ];B8<; : # n c $698V%jDCM9;IV&V%69gRCh f<g jV%j<;IV
DPQ8V%jDC#NbPQ68^V?@I;BM9fDC;Bh%h%6dHQ8<\]C8^V%h?V&b P ]>U69HQfDE%C # n [<E CgRC:DC* h ! n c k>2`m_?_cPBEWC;Bgkjh f<g j[;B6dE]C
:<CF[69gRV%hFJ';BhX;B8{CRrL;B\$[YMdCBJ^V%jDC#;Bh%h%6dHQ8<\]C8^VM9;IV&V%69gRC j<;s@BCpV%j;IV!SXEsI  p n 82S EI  p n c kJV%jDC8eV%jDCp8DCFVb}

Lattices for Studying Monotonicity of Bayesian Networks 101


PBE%6h 69h&PBV&PQ8DC,698dW>0zjDC"8DCFV~PBE%$69h ;B8V%6dV&PQ8DC,698 D8 PBV#h%jDPs698V%jDC4q;slBCh%69;B88DCFV~PBE%f<8<:DCFE#h&V%f<:DlB>
6d_U_PBE,C;Bgkjeh f<g j[;B6dE,PB_ :<69h%V&E 6dfDV%69PQ8<hXC4j<;s@BC CV%jDC8Th%;slV%j<;IVV%jDCh%C[<E%PB[CFE%V%6dCh";IE C4@L6dPQM9;IV&C:>
V%j<;IVS EsI sp n c r 8S EsI qp n k> C#8DPBV&C,V%j<;IVFJ'h%698<gRC PBE%C_cPBE \];BM9MdlBJLCh%;lV%j<;IV,;[;B69EPB_N&PQ698^V@I;BM9fDC
V%jDCp[<E%PB[CFE%VmlTPB_h%V&P'gkj<;Bh&V%69g:DPQ\]68<;B8<gRC69hV&Ek;B8<h%6} ;Bh%h 6dHQ8<\$C8V%h n ! n c 6dV%j n D n c JLFIdB~U RV%jDCp[<E%PB[D}
V%6d@BCBJCTj;@BCTV&P*h&V%f:Dl*V%jDCT:DPQ\W698<;B8<gRC[<E%PB[CFE&} CFE%Vml]PB_769h%PBV&PQ8<69gF6dVml]6d_C?j<;s@BC?SXEsI sp n  SXEsI sp
V%6dCh!PB_ V%j<C:69h&V&E 6dYfDV%6dPQ8<h"PB_:<6dE%CgRV%MdlTM698DBC:e[;B6dE h n c _PBE0V%jDC69EU;Bh%h%P'gF69;IV&C:[E%PB;I69M6dV~l?:<69h&V&Ek6dfDV%6dPQ8hF
PB_#CMdC\$C8V%h]698V%jDCM9;IV&V%69gRCPQ8<M9l*V&P*:DCgF69:DCfD[PQ8 @L6dPQM9;IV%6dPQ8PB_!V%jDC[<E%PB[CFE%VmlPB_?;B8^V%6dV&PQ869gF6dV~l69h:DCR}
69h%PBV&PQ8<69gF6dVml{PBE;B8V%6dV&PQ8<6gF6dV~lB> 8<C:Th%69\]6M9;IE MdlB> CPBh&CFE%@BCpV%j<;IV#_PBE!;B8^l@L6dPQM9;IVb}
!h4\]C8^V%6dPQ8<C:698K'CgRV%6dPQ8'J0;:DPQ\];B68PB_,;I[D} 698<Hp[Y;B6dE n ! n c #jDPQh&CCMdC\$C8V%h,;IE%C:<6dE%CgRV%Mdl{M9698DBC:
[M69gF;IV%6dPQ8\];lCRrDj<6d69V!;B8*68^V&E 6gF;IV&C]gRPQ\468<;IV%6dPQ8 698V%j<C;Bh%h%6dHQ8\$C8^VM9;IV&V%69gRCBJ^V%j<CFE%C69h /; W<G5JW<@I;IE 6}
PB_[<E%PB[CFE%V%6dCh!PB_69h&PBV&PQ8<6gF6dV~l;B8<:;B8V%6dV&PQ8<69gF6dVmle_cPBE ;IM9/C 698V%jDC]h&CFV4PB_PBYh&CFE%@I;IMdC$@I;IE 69;IMdC:h  V&P
698V&CFE%E%CM;IV&C:PBh&CFE%@I;IMdC@I;IE 69;IMdCh>p!h;B8CRrD;B\} j69g j n ;B8<: n c ;Bh%h 6dHQ8;T:<6CFE%C8V@A;BMfDCB>]5CFV n ;
[M9CBJAC"gRPQ8<h%69:<CFEUV%j<CVmP@I;IE 69;IYMdCh +;B8<: )+h%fg j CpV%jDC@A;BM9f<CpPB_V%j<69h!@I;IE 69;IMdCp6dV%j698V%j<C;Bh h%6dHQ8L}
V%j<;IVV%jDCPQf<V&[fDVX@I;IE 69;IYMdC,PB_698V&CFE%Ch&VCj;@BCh69h&PI} \$C8V n ;B8<:{M9CFV n = J BZ ?<J'C!6dV%h@A;BM9f<C#698 n c > z,jDC
V&PQ8<6gF;BM9Mdl$69/8 + ;B8:{;B8V%6dV&PQ8<69gF;BMMdl$69*8 ){>0C?_PLgFf<h @L6dPQM9;IV%6dPQ88DP69h?:DC8DPBV&C:'l n ! n c p ; = k>C
PQ8V%j<C@A;BM9f<C;Bh%h%6dHQ8<\]C8^V%h h%;slWV%j<;IVV%jDC?@L6dPQM9;IV%6dPQ8j<;Bh;g j;B8DHBCPB_7V%jDC?@I;BM9fDC

# ; 1 PB2_ @_cE%PQ\ n ; V&P n = _cPBE"6dV%:h F"TUJ JSG<JL,Ek6dV&V&C8  ; = >
n ;  0

0  1.# ;
z,j<Ce@I;BM9fDC;Bh%h%6dHQ8\$C8^V n V&Z P W3 mV%j<;IVT69h
n c
;


0  1.# ; 1
shared by a and a′, is called the context of the violation.

to these variables. Note that for the three assignments we have that a ⪯ a″ and a′ ⪯ a″; the assignments a and a′ themselves have no ordering. For studying the monotonicity properties involved, we now have to compare the probability distributions over the output variable C that are computed from the Bayesian network under study, given the various assignments to A and B. If for the pair of distributions Pr(C | a) and Pr(C | a″) we have that Pr(C | a″) is stochastically dominant over Pr(C | a), then the network exhibits the associated property of isotonicity in A. If for the distributions given a′ and a″ we have that Pr(C | a′) is stochastically dominant over Pr(C | a″), then the network shows the associated property of antitonicity in B. The network thus reveals both properties if

Pr(C | a) ⪯ Pr(C | a″) ⪯ Pr(C | a′).

Verifying whether or not the network is isotone in A and antitone in B now amounts to studying the above inequalities for all values b and b′ of the two variables.

4 Identifying Violations of Monotonicity

Upon using the assignment lattice as described above, some properties of monotonicity may be found to be violated. In general, using the assignment lattice for a Bayesian network may result in a set V of violations of the properties of monotonicity. Some of the identified violations may show considerable regularity, in the sense that if a violation originates from a change of value in a particular context, then this same change causes a violation in all higher-ordered contexts as well. Such violations will be termed structured. Other violations from the set V do not have such a regular structure and may be considered incidental. We distinguish between the two types of violation to allow a compact representation of the entire set of identified violations.

We consider the subset V(v, v′) ⊆ V of all violations of the properties of monotonicity that originate from a change of the value of the variable V from v to v′. The context ā of such a violation is called a structural context of offence if we have that there is a violation a → a′ ∈ V(v, v′) for all value assignments a that include this context ā or a higher-ordered one. If there is no lower-ordered structural context of offence for the same change of value (v, v′), we say that the context of offence is structurally minimal. A structurally minimal context of offence thus characterises a set of related violations.
102 L. C. van der Gaag, S. Renooij, and P. L. Geenen
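The stochastic-dominance comparison described above can be sketched in a few lines; the code below is our own illustration, not the authors' implementation, and assumes the two distributions are given as probability vectors over the ordered values of the output variable:

```python
def dominates(p, q):
    """Return True if distribution p is stochastically dominant over q.

    p and q are probability vectors over the (ordered) values of the
    output variable; p dominates q if the cumulative distribution of p
    lies on or below that of q everywhere.
    """
    cum_p = cum_q = 0.0
    for pi, qi in zip(p, q):
        cum_p += pi
        cum_q += qi
        if cum_p > cum_q + 1e-12:  # small tolerance for rounding noise
            return False
    return True

# A distribution that shifts mass towards higher values dominates:
print(dominates([0.1, 0.3, 0.6], [0.3, 0.3, 0.4]))  # True
```

Isotonicity and antitonicity checks then reduce to running this test on the distribution pairs associated with ordered assignment pairs in the lattice.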


A structurally minimal context ā for the violations originating from (v, v′) more specifically defines a sublattice of the assignment lattice in which each element includes ā and in which for each element there exists a violation a → a′ ∈ V(v, v′) in the set; the context ā is the bottom of this sublattice.

In addition to the violations that are covered by structurally minimal contexts of offence, the set of all identified violations may include elements that are not structurally related. A context ā is said to be an incidental context of offence if it is not a structural context of offence. We note that any set of violations identified from a Bayesian network is now characterised by a set of structurally minimal contexts of offence and a set of incidental contexts.

As an example, we consider the lattice from Figure 1 for the two ternary observable variables A and B. Suppose that upon verifying isotonicity of the Bayesian network under study, the three violations a1b1 → a2b1, a1b2 → a2b2 and a2b1 → a3b1 are identified. The first two of these violations have the same origin: the two violations arise if the value of the variable A is changed from a1 to a2. These violations are characterised by the structurally minimal context of offence b1; in both the context b1 and the higher-ordered context b2, changing the value of A from a1 to a2 gives rise to a violation. The third violation mentioned above is not yet covered by the identified minimal context since it has a different origin. This violation originates from a change of the value of the variable A from a2 to a3 and again has b1 for its offending context. The context b1 now is merely incidental since no violation has been identified for the pair of assignments a2b2 and a3b2 that include the higher-ordered context b2.

To conclude, we would like to note that for a given set of violations, the structurally minimal contexts of offence are readily established by traversing the assignment lattice under study. Starting at the bottom of the lattice, the procedure finds an element ⟨a⟩ such that a is the lowest-ordered joint value assignment occurring in a violation a → a′ ∈ V. The procedure subsequently isolates from the assignment lattice the sublattice that has the element ⟨a⟩ for its bottom and further includes all higher-ordered elements. Now, if each element of the sublattice occurs in a violation with (v, v′) for its origin, then the associated context is a structurally minimal context of offence; otherwise it is incidental. The procedure is iteratively repeated for all yet uncovered violations.

5 The Complexity of Our Method

The runtime complexity of our method for studying monotonicity of a Bayesian network is determined by the size of the assignment lattice used. We observe that this lattice encodes an exponential number of value assignments to the set of observable variables. Constructing the lattice and computing the probability distributions to be associated with its elements, therefore, takes exponential time. Moreover, the dominance properties of an exponential number of pairs of probability distributions have to be compared. For n binary observable variables, for example, already

n · 2^(n−1)

comparisons are required. From these considerations we have that our method has a very high runtime complexity. Although its requirements can be reduced to at least some extent by exploiting the independences modelled in a Bayesian network, for larger networks including a large number of observable variables studying monotonicity inevitably becomes infeasible. The unfavourable computational complexity of the problem, however, is likely to forestall the design of essentially more efficient methods.

In view of the high runtime complexity involved, we propose to use our method for studying properties of partial monotonicity only. The concept of partial monotonicity applies to a subset A of the observable variables of a network. An assignment lattice is constructed for these variables as described above.
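Under a subset encoding of contexts over binary variables, the distinction between structural and structurally minimal contexts of offence for one fixed change of value can be sketched as follows; the representation and function names are our own assumptions, not the authors' implementation:

```python
from itertools import combinations

def offence_contexts(variables, violating):
    """Classify contexts (encoded as frozensets of variables) for one fixed
    change of value: a context is a structural context of offence if every
    context that includes it is violating; a structural context is minimal
    if none of its proper subsets is structural."""
    contexts = [frozenset(c) for k in range(len(variables) + 1)
                for c in combinations(variables, k)]
    violating = set(violating)
    structural = [c for c in contexts
                  if all(d in violating for d in contexts if c <= d)]
    minimal = [c for c in structural if not any(s < c for s in structural)]
    return structural, minimal

# Violations occur in every context that includes v1, so {v1} is the
# single structurally minimal context of offence:
viol = [frozenset(s) for s in ({'v1'}, {'v1', 'v2'}, {'v1', 'v3'},
                               {'v1', 'v2', 'v3'})]
structural, minimal = offence_contexts(['v1', 'v2', 'v3'], viol)
print(minimal)  # [frozenset({'v1'})]
```

The brute-force scan over all contexts mirrors the exponential cost discussed above; the iterative lattice traversal in the text avoids re-examining contexts already covered.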

Lattices for Studying Monotonicity of Bayesian Networks 103


The probability distributions associated with the elements of the lattice are conditioned on a fixed joint value assignment ā to the observable variables that are not included in the study. With each element of the lattice thus is associated a conditional probability distribution over the output variable given the element's assignment and ā. The lattice now provides for studying the monotonicity properties of the network for the set A given ā. The assignment ā then is termed the background assignment for studying the partial monotonicity.

We would like to note that partial monotonicity of a network given a particular background assignment ā does not guarantee partial monotonicity given any other background assignment. Also, violations of the properties of monotonicity that are identified given a particular background assignment may not occur given another such assignment. The background assignment against which partial monotonicity is to be studied should therefore be chosen with care and be based upon considerations from the domain of application.

6 An Application in Veterinary Science

We applied our method for studying monotonicity to a real Bayesian network in veterinary science. We briefly introduce the network and describe the violations that we identified.

6.1 The Bayesian Network

In close collaboration with two experts from the Central Institute of Animal Disease Control in the Netherlands, we are developing a Bayesian network for the detection of classical swine fever. Classical swine fever is an infectious disease of pigs, which has serious socio-economical consequences upon an outbreak. As the disease has a potential for rapid spread, it is imperative that its occurrence is detected in the early stages. The network under construction is aimed at supporting veterinary practitioners in the diagnosis of the disease when visiting pig farms with disease problems of unknown cause. Our network currently includes several dozen variables, for which several hundred parameter probabilities have been assessed. The variables model the pathogenesis of the disease as well as the clinical signs observed in individual pigs. Part of the variables are observable; Figure 2 depicts the graphical structure of the current network.

6.2 Studying Partial Monotonicity

During the elicitation interviews, our veterinary experts produced various statements that suggested monotonicity. They indicated, for example, that the output variable of the network should behave isotonically in terms of five of the observable variables. We focus on these observable variables to illustrate the application of our method for studying partial monotonicity.

From the five observable variables under study, we constructed an assignment lattice as described in the previous section. Since all variables involved were binary, adopting one of the values true and false, we used a slightly more concise encoding of their joint value assignments: each assignment a to the set A of observable variables is represented by the subset of variables that adopt the value true in a. The elements of the resulting lattice thus are subsets of A; the bottom of the lattice is the empty set and the top equals the entire set A. The resulting lattice for the five variables under study is shown in Figure 3. It includes 32 elements to capture all possible joint value assignments to the variables and further includes 80 direct set-inclusion statements.

Before the lattice could be enhanced with conditional probabilities of the presence of a viraemia of classical swine fever, we had to decide upon a background assignment for the other observable variables against which the properties of partial monotonicity would be verified. We decided to take for this purpose the value assignment in which all other observable variables of the network had adopted the value false. We chose this particular assignment since the various clinical signs have a rather small probability of occurrence and it is highly unlikely to find a large number of these signs in a single live pig. Given this background assignment, we computed the various conditional probabilities to be associated with the elements of the assignment lattice.

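The counts for the five-variable lattice (32 elements, 80 direct set-inclusion links, matching n · 2^(n−1)) can be checked directly by enumeration; this is a sketch of ours, not the authors' implementation:

```python
from itertools import combinations

def assignment_lattice(variables):
    """Enumerate the lattice over binary variables, encoding each joint
    value assignment as the subset of variables that adopt the value
    true, together with all direct set-inclusion links (pairs of
    subsets differing in exactly one variable)."""
    elements = [frozenset(c) for k in range(len(variables) + 1)
                for c in combinations(variables, k)]
    links = [(a, b) for a in elements for b in elements
             if a < b and len(b - a) == 1]
    return elements, links

elements, links = assignment_lattice(['v1', 'v2', 'v3', 'v4', 'v5'])
print(len(elements), len(links))  # 32 80
```

Each of the 2^n elements with k true-valued variables has n − k direct successors, which sums to n · 2^(n−1) links in total.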
Figure 2: The graphical structure of our Bayesian network for classical swine fever in pigs.

For each pair of directly linked elements from the assignment lattice, we compared the conditional probabilities of a viraemia of classical swine fever. We found four violations of the properties of partial isotonicity given the selected background assignment; these violations are shown by dashed lines in Figure 3. The four violations all originated from adding the clinical sign of diarrhoea to the combination of findings of ataxia and malaise. The four violations thus were all covered by a single structurally minimal context of offence.

We presented the pairs of violating assignments to two veterinarians. Being confronted with the four violations, they independently and with conviction indicated that the probability of a viraemia of classical swine fever should increase upon finding the additional sign of diarrhoea. Both veterinarians mentioned that the combination of ataxia and diarrhoea especially pointed to classical swine fever within the scope of the Bayesian network; they could not think of another disease that would be more likely to give rise to this combination of signs. They thus indicated that the network should indeed have been isotone in the five variables under study given the absence of any other signs. The four identified violations thus were indicative of modelling inadequacies in our current network.

7 Conclusions

In this paper, we have presented a method for studying monotonicity of Bayesian networks. In view of the unfavourable complexity of the problem in general, the method focuses on just a subset of the observable variables of a network and builds upon a lattice of all joint value assignments to these variables. The lattice is enhanced with information about the effects of these assignments on the probability distribution over the network's main output variable. The enhanced lattice then is used for identifying all violations of the properties of monotonicity and for constructing minimal offending contexts for further consideration.

"!" #$
%&(
 '
(
 "(
 **(
   

 **(  "(  ( -  (  **(
 "( "!" #
%*&,(  '
(    ( "!" #$
%&(
)  
"(  '
"(    +( "! #$
%&( ,  '+(
,     ** "! #$
%&$   
  

 '
"(  *(  "(   "(  "(  "!" #
%*&,(     
"(  "( 
 "!" #
%&( 
 !" #$
%&(
 "( ,   +( )  
"(  **( "!" #
%&(     
"( "!" #
%*&,(  *(  "(     +(
,
 **  '
 ,      '  '
  **
    
 "!" #
%&$    '
  +*

 '
"( )  
"(    (  ( "!" #$
%&(  +*( ,  '+(  "( "!" #
&(    (
   ** )  
    '
 ,  '   "!" #
%&$  ** "!" #
%*&

  
         "! #$
%&

06dHQfDE Cx'!8;Bh%h%6dHQ8<\]C8^VM9;IV&V%69gRC_cPBEPQfDEq;lBCh%6;B8*8DCFV~PBE%_PBEgFM;Bh%h%69gF;BM h&#698DCp_CF@BCFE7V%jDC$@'69PQM9;IV%6dPQ8<h


PB_UV%jDC[<E PB[CFE V%6dChPB_a[Y;IE%V%69;BM\$PQ8DPBV&PQ869gF6dV~l;IE%C68<:<69gF;IV&C:'l:<;Bh%jDC:M968DChF>
We would like to note that, as a supplement to our method for studying monotonicity, we designed a special-purpose elicitation technique that allows for discussing with domain experts whether or not the identified violations indeed can be construed as violations of commonly acknowledged patterns of monotonicity (van der Gaag et al., 2006). This technique has been designed specifically so as to ask little time and little cognitive effort from the experts in the verification of the identified violations.

The results that we obtained from applying our method for studying monotonicity to a real network in veterinary science indicate that it presents a useful method for studying reasoning patterns in Bayesian networks. The next step now is to extend our method by techniques that exploit the constructed minimal offending contexts for identifying the modelling inadequacies in a network that cause the various violations of monotonicity.

References

J.O. Berger. 1985. Statistical Decision Theory and Bayesian Analysis, 2nd edition. Springer-Verlag.

G. Grätzer. 1971. Lattice Theory: First Concepts and Distributive Lattices. W.H. Freeman.

C.-L. Liu and M.P. Wellman. 1998. Incremental tradeoff resolution in qualitative probabilistic networks. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann, pages 338–345.

L.C. van der Gaag, H.L. Bodlaender and A. Feelders. 2004. Monotonicity in Bayesian networks. In Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence, AUAI Press, pages 569–576.

L.C. van der Gaag, P.L. Geenen and H.J.M. Tabachneck-Schijf. 2006. Verifying monotonicity of Bayesian networks with domain experts. In Proceedings of the Fourth Bayesian Modelling Applications Workshop.

M.P. Wellman. 1990. Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence, 44:257–303.


   "!#$&%')(+*",- . / (+
021436567 89;:<7=3>5;?A@BC7=7<DE7=365>FG?AHI?A@JK9L5;?CMN7=7=O
P?AQL7<@ H R?S3
H)T=UGV3;UWT=@XR 7<H 1YTZ3>7=365[8\TZRQL];H 1436D ^;_A1Y?S36_`?SaAb6cHI@ ?S_ d
Hc361Y:=?A@Xa 1YHfe
FG9hgi96jkTSlnm<o&9poZm=qrb+s=t<oZm<ujcHI@ ?S_ d
HAbu)d;?Cv)?AH d;?A@XO47=3656a
wyx;z|{6};~$6~r~rx&6r$
{x
2S|L=

We introduce the family of multi-dimensional Bayesian network classifiers. These classifiers include one or more class variables and multiple feature variables, which need not be modelled as being dependent on every class variable. Our family of multi-dimensional classifiers includes as special cases the well-known naive Bayesian and tree-augmented classifiers, yet offers better modelling capabilities than families of models with a single class variable. We describe the learning problem for a subfamily of multi-dimensional classifiers and show that the complexity of the solution algorithm is polynomial in the number of variables involved. We further present some preliminary experimental results to illustrate the benefits of the multi-dimensionality of our classifiers.
1 Introduction

Bayesian network classifiers have gained considerable popularity for solving classification problems where an instance described by a number of features has to be classified in one of several distinct classes. The success of especially naive Bayesian classifiers and the more expressive tree-augmented network classifiers is readily explained from their ease of construction and their generally good classification performance.

Many application domains, however, include classification problems where an instance has to be assigned to a most likely combination of classes. Since the number of class variables in a Bayesian network classifier is restricted to one, such problems cannot be modelled straightforwardly. One approach is to construct a compound class variable that models all possible combinations of classes. This class variable then easily ends up with a prohibitively large number of values. Also, the structure of the problem is not properly reflected in the resulting model. Another approach is to develop multiple classifiers, one for each original class. Multiple classifiers, however, cannot capture interactions among the various classes and may thus also not properly reflect the problem. Moreover, if the various classifiers indicate multiple classes, then the implied combination may not be the most likely explanation of the observed features.

In this paper we introduce the concept of multi-dimensionality in Bayesian network classifiers to provide for accurately modelling problems where instances are assigned to multiple classes. A multi-dimensional Bayesian network classifier includes one or more class variables and one or more feature variables. It models the relationships between the variables by acyclic directed graphs over the class variables and over the feature variables separately, and further connects the two sets of variables by a bi-partite directed graph; an example multi-dimensional classifier is depicted in Figure 1. As for one-dimensional Bayesian network classifiers, we distinguish between different types of multi-dimensional classifier by imposing restrictions on their graphical structure. Fully tree-augmented multi-dimensional classifiers, for example, have directed trees over their class variables as well as over their feature variables.

For the family of fully tree-augmented multi-dimensional classifiers, we study the learning problem, that is, the problem of finding a classifier that best fits a set of available data. We show that, given a fixed selection of feature variables per class variable, the learning problem can be decomposed into optimisation problems for the set of class variables and for the set of feature variables separately, which can both be solved in polynomial time. We further argue that, although our learning algorithm assumes a fixed bipartite graph between the class and feature variables, it is easily combined with existing approaches to feature subset selection. The numerical results that we obtained from preliminary experiments with our learning algorithm clearly illustrate the benefits of multi-dimensionality of Bayesian network classifiers. Especially on smaller data sets, the constructed multi-dimensional classifiers provided higher accuracy than their one-dimensional counterparts. In combination with feature selection, moreover, our algorithm resulted in sparser classifiers with smaller variance.

The paper is organised as follows. In Section 2, we review Bayesian network classifiers in general. In Section 3, we define our family of multi-dimensional classifiers. In Section 4, we address the learning problem for fully tree-augmented multi-dimensional classifiers and present a polynomial-time algorithm for solving it. In Section 5, we briefly address feature selection for our multi-dimensional classifiers. We report some preliminary results from an application in the biomedical domain in Section 6. The paper is rounded off with our concluding observations in Section 7.

2 Bayesian Network Classifiers

Before reviewing naive Bayesian and tree-augmented network classifiers, we introduce our notational conventions. We consider Bayesian networks over a finite set V = {V1, ..., Vn}, n ≥ 1, of discrete random variables, where each Vi takes a value in a finite set Val(Vi). For a subset of variables Y ⊆ V we use Val(Y) to denote the set of joint value assignments to Y. A Bayesian network now is a pair B = (G, Θ), where G is an acyclic directed graph whose vertices correspond to the random variables V and Θ is a set of parameter probabilities; the set Θ includes a parameter θ_{y|pa} for each value y ∈ Val(Vi) and each value assignment pa ∈ Val(pa(Vi)) to the set pa(Vi) of parents of Vi in G. The network B now defines a joint probability distribution Pr_B over V that is factorised according to

Pr_B(V1, ..., Vn) = ∏_{i=1..n} θ_{Vi | pa(Vi)}

Figure 1: An example multi-dimensional Bayesian network classifier with class variables Ci and feature variables Fj.

Bayesian network classifiers are Bayesian networks of restricted topology that are tailored to solving classification problems where instances described by a number of features have to be classified in one of several distinct predefined classes (Friedman et al., 1997). The set of variables of a Bayesian network classifier is partitioned into a set V_F = {F1, ..., Fm}, m ≥ 1, of feature variables and a singleton set V_C = {C} with the class variable. A naive Bayesian classifier has a directed tree for its graph G, in which the class variable C is the unique root and each feature variable Fi has C for its only parent. A tree-augmented network (TAN) classifier has for its graph G a directed acyclic graph in which the class variable C is the unique root and each feature variable Fi has C and at most one other feature variable for its parents; the subgraph induced by the set V_F, moreover, is a directed tree, termed the feature tree of the classifier.

The general problem of learning a Bayesian network classifier from a given set of data samples D = {u1, ..., uN}, N ≥ 1, is to find from among the family of network classifiers one that best matches the available data.
108 L. C. van der Gaag and P. R. de Waal
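The factorisation of Pr_B over the network's parameters can be illustrated with a small sketch; the data structures and the toy numbers below are our own choices, not from the paper:

```python
def joint_probability(assignment, parents, theta):
    """Pr_B(v1, ..., vn) as the product over the variables of the
    parameter for the variable's value given its parents' values."""
    p = 1.0
    for var, value in assignment.items():
        pa_values = tuple(assignment[u] for u in parents[var])
        p *= theta[var][(value, pa_values)]
    return p

# Tiny two-variable network with the single arc C -> F (numbers made up):
parents = {'C': (), 'F': ('C',)}
theta = {
    'C': {('c0', ()): 0.4, ('c1', ()): 0.6},
    'F': {('f0', ('c0',)): 0.9, ('f1', ('c0',)): 0.1,
          ('f0', ('c1',)): 0.2, ('f1', ('c1',)): 0.8},
}
print(round(joint_probability({'C': 'c1', 'F': 'f1'}, parents, theta), 6))  # 0.48
```

Because each factor is a locally normalised conditional probability, the products over all joint assignments sum to one.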


of how well a model describes the data, often the model's log-likelihood given the data is used: for a Bayesian network B and a data set D, the log-likelihood of B given D is defined as

  LL(B | D) = \sum_{i=1}^{N} \log \Pr_B(x_i).

For the family of Bayesian network classifiers in general, the learning problem is intractable. For various subfamilies of classifiers, however, the problem is solvable in polynomial time. Examples include the naive Bayesian and TAN classifiers reviewed above, and the subfamily of Bayesian network classifiers of which the subgraph induced by the feature variables constitutes a forest (Lucas, 2004). For other subfamilies of Bayesian network classifiers, researchers have presented heuristic algorithms which provide good, though non-optimal, solutions (Sahami, 1996; Keogh and Pazzani, 1999). We would like to note that these results all relate to learning a classifier for a fixed set of relevant feature variables. To the best of our knowledge, the selection of an optimal set of feature variables has not been solved as yet.

Since its graph has a fixed topology, the problem of learning a naive Bayesian classifier amounts to just establishing maximum-likelihood estimates for its parameters from the available data. For a fixed graphical structure in general, the maximum-likelihood estimators of the parameter probabilities are given by

  \hat{p}(x_i \mid \pi_i) = \hat{\Pr}_D(x_i \mid \pi_i),

where \hat{\Pr}_D denotes the empirical distribution defined by the frequencies of occurrence in the data. The problem of learning a TAN classifier amounts to first determining a graphical structure of maximum likelihood given the available data, and then establishing estimates for its parameters. For constructing a maximum-likelihood feature tree, a polynomial-time algorithm is available from Friedman et al. (1997).

To conclude, we note that a Bayesian network classifier computes for each instance a conditional probability distribution over the class variable. For classification purposes, it further is associated with a function that outputs for each value assignment x to the feature variables a class value c such that Pr(c | x) >= Pr(c' | x) for all c', breaking ties at random.

3 Multi-dimensional Bayesian Network Classifiers

Bayesian network classifiers as reviewed above include a single class variable and as such are one-dimensional. We now introduce the concept of multi-dimensionality in Bayesian network classifiers by defining a family of models that may include multiple class variables.

A multi-dimensional Bayesian network classifier is a Bayesian network of which the graph G = (V, A) has a restricted topology. The set V of random variables is partitioned into the set V_C = { C_1, ..., C_n }, n >= 1, of class variables and the set V_F = { F_1, ..., F_m }, m >= 1, of feature variables. The set of arcs A of the graph is partitioned into the three sets A_C, A_F and A_CF having the following properties:

- for each F in V_F there is a C in V_C with (C, F) in A_CF, and for each C in V_C there is an F in V_F with (C, F) in A_CF;
- the subgraph of G that is induced by V_C equals G_C = (V_C, A_C);
- the subgraph of G that is induced by V_F equals G_F = (V_F, A_F).

The subgraph G_C is a graphical structure over the class variables and is called the classifier's class subgraph; the subgraph G_F is called its feature subgraph. The subgraph (V, A_CF) is a bipartite graph that relates the various feature variables to the class variables; this subgraph is called the feature selection subgraph of the classifier, and its set of arcs A_CF is termed the classifier's feature selection arc set. For any variable X in a multi-dimensional classifier, we use \Pi_X^C to denote the class parents of X, that is, \Pi_X^C = \Pi_X \cap V_C. We further use \Pi_X^F to denote the feature parents of X. Note that for any class variable C we thus have that \Pi_C^F = \emptyset and \Pi_C = \Pi_C^C.

Multi-dimensional Bayesian Network Classifiers 109


Within the family of multi-dimensional Bayesian network classifiers, various types of classifier are distinguished based upon their graphical structures. An example is the fully naive multi-dimensional classifier, in which both the class subgraph and the feature subgraph have empty arc sets. Note that this subfamily of bipartite classifiers includes the one-dimensional naive Bayesian classifier as a special case. Reversely, any such bipartite classifier has an equivalent naive Bayesian classifier with a single compound class variable. Another type of multi-dimensional classifier is the subfamily of classifiers in which both the class subgraph and the feature subgraph are directed trees. In the remainder of the paper, we will focus on this subfamily of fully tree-augmented multi-dimensional classifiers.

A multi-dimensional classifier in essence is used to find a joint value assignment of highest posterior probability to its set of class variables. Finding such an assignment, given values for all feature variables involved, is equivalent to solving the MPE problem. This problem is known to be NP-hard in general, yet can be solved in polynomial time for networks of bounded treewidth (Bodlaender et al., 2002). In the presence of unobserved feature variables, the problem of finding assignments of highest posterior probability remains intractable even for these restricted networks (Park, 2002). In view of the unfavourable computational complexity involved, we note that the practicability of multi-dimensional classifiers is limited to models with restricted class subgraphs.

4 Learning Multi-dimensional Classifiers

In this section we define the problem of learning a fully tree-augmented multi-dimensional classifier and show that this problem can be decomposed into two separate optimisation problems which can both be solved in polynomial time.

Before defining the learning problem, we recall that the related problem for Bayesian network classifiers has been studied for a fixed set of relevant feature variables. Following a similar approach, we now formulate our learning problem to pertain to a subfamily of classifiers for which the feature selection subgraph is fixed. We will return to the issue of feature subset selection in Section 5.

We begin by defining the subfamily of fully tree-augmented classifiers with a fixed selection of feature variables per class variable; these classifiers are considered admissible for the learning problem. We let the set of random variables V be partitioned into the sets V_C and V_F of class and feature variables, and take a set of arcs over V to be partitioned into the three sets A_C, A_F and A_CF, as before. A fully tree-augmented multi-dimensional classifier now is admissible if its feature selection arc set equals the fixed set A_CF. The set of all admissible classifiers is denoted as C_{A_CF}.

The learning problem now is to find from among the set of admissible classifiers one that best fits the available data. As a measure of how well a model describes the data, we use its log-likelihood given the data. More formally, the learning problem for fully tree-augmented multi-dimensional classifiers with a fixed feature selection arc set then is to find a classifier C in C_{A_CF} that maximises LL(C | D).

We now show that this learning problem can indeed be solved in polynomial time. We consider a fully tree-augmented classifier C that is admissible for the feature selection arc set A_CF. Building upon a result from Friedman et al. (1997), we have that the log-likelihood of C given a data set D can be written as

  LL(C | D) = N \sum_{i=1}^{n} I(C_i ; \Pi_{C_i}) + N \sum_{j=1}^{m} I(F_j ; \Pi_{F_j}) - N \sum_{i=1}^{n} H(C_i) - N \sum_{j=1}^{m} H(F_j),
6?A@U@ TZR 567<H 7&bk?i@ ?S_A7=O4OH dL7<HH d6?i@ ?SO47<HI?S5Q6@ T=; i % i %



where the mutual-information terms I and the entropy terms H are taken with respect to the empirical distribution \hat{\Pr} defined by the frequencies of occurrence in the data: H(X) = -\sum_x \hat{\Pr}(x) \log \hat{\Pr}(x) is the entropy of a random variable X, H(X | Y) = -\sum_{x,y} \hat{\Pr}(x, y) \log \hat{\Pr}(x | y) denotes the conditional entropy of X given Y, and

  I(X ; Y) = \sum_{x,y} \hat{\Pr}(x, y) \log \frac{\hat{\Pr}(x, y)}{\hat{\Pr}(x) \cdot \hat{\Pr}(y)}

denotes the mutual information of X and Y.

The two entropy terms in the above expression for LL(C | D) concern marginal distributions established from the available data. These terms therefore depend only on the empirical distribution and not on the graphical structure of the classifier. This observation implies that an admissible classifier that maximises the log-likelihood given the data is a classifier that maximises the sum of its two mutual-information terms.

We consider the mutual-information term \sum_j I(F_j ; \Pi_{F_j}) in some more detail. We note that the set of parents \Pi_{F_j} of any feature variable F_j is partitioned into the set \Pi_{F_j}^C of class parents and the set \Pi_{F_j}^F of feature parents. Using the chain rule for mutual information (Cover and Thomas, 1991), the term can be written as

  \sum_{j=1}^{m} I(F_j ; \Pi_{F_j}) = \sum_{j=1}^{m} I(F_j ; \Pi_{F_j}^C) + \sum_{j=1}^{m} I(F_j ; \Pi_{F_j}^F \mid \Pi_{F_j}^C),

where I(X ; Y | Z) denotes the conditional mutual information of X and Y given Z. Since the feature selection arc set is fixed, the set \Pi_{F_j}^C of class parents of each feature variable F_j is the same for every admissible classifier. We conclude that the term \sum_j I(F_j ; \Pi_{F_j}^C) is the same for all models in the set of admissible classifiers.
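The quantities in the decomposition, that is, the empirical entropy, the mutual information and its conditional variant, are all simple functionals of observed frequencies. The following Python sketch is our own illustration; the data representation as a list of dicts and the function names are assumptions, not from the paper.

```python
from collections import Counter
from math import log

def _freq(data, varset):
    """Empirical joint distribution of the variables in `varset`."""
    counts = Counter(tuple(s[v] for v in varset) for s in data)
    n = len(data)
    return {k: c / n for k, c in counts.items()}

def entropy(data, X):
    """H(X) = -sum_x Pr^(x) log Pr^(x), from empirical frequencies."""
    p = _freq(data, (X,))
    return -sum(q * log(q) for q in p.values())

def mutual_information(data, X, Y, given=()):
    """I(X; Y | Z) from empirical frequencies, with Z a (possibly empty)
    tuple of conditioning variables; I(X; Y) is the special case Z = ()."""
    Z = tuple(given)
    pxyz = _freq(data, (X, Y) + Z)
    pxz = _freq(data, (X,) + Z)
    pyz = _freq(data, (Y,) + Z)
    pz = _freq(data, Z)
    mi = 0.0
    for key, p in pxyz.items():
        x, y, z = key[0], key[1], key[2:]
        mi += p * log(p * pz[z] / (pxz[(x,) + z] * pyz[(y,) + z]))
    return mi
```

The sum runs over observed value combinations only, so no smoothing is needed for the estimate itself; two independent variables yield a mutual information of (numerically) zero, and two identical binary variables yield log 2.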
To summarise the above considerations, we have that a classifier that solves the learning problem for fully tree-augmented multi-dimensional classifiers with a fixed feature selection arc set, is an admissible classifier that maximises

  \sum_{i=1}^{n} I(C_i ; \Pi_{C_i}) + \sum_{j=1}^{m} I(F_j ; \Pi_{F_j}^F \mid \Pi_{F_j}^C).

With respect to the mutual-information term pertaining to the class variables, we observe that, since class variables have class parents only, this term depends just on the arc set A_C of the class subgraph. With respect to the conditional mutual-information term pertaining to the feature variables, we observe that this term depends on the feature selection arc set, which is fixed, and on the arc set A_F of the feature subgraph, but not on A_C. These considerations imply that the two terms are independent and can be maximised separately.

The mutual-information term pertaining to the class variables can be maximised by using the procedure from Chow and Liu (1968) for constructing maximum-likelihood trees:

1. Construct a complete undirected graph over the set of class variables V_C.
2. Assign a weight I(C_i ; C_j) to each edge between C_i and C_j, i != j.
3. Build a maximum-weighted spanning tree, for example using Kruskal's algorithm (1956).
4. Transform the undirected tree into a directed one, by choosing an arbitrary variable for its root and setting all arc directions from the root outward.
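The four steps above can be sketched as follows; the weight table passed in would hold the empirical mutual informations of the class-variable pairs. This is an illustrative sketch under our own naming, not the authors' code.

```python
def maximum_spanning_tree(nodes, weight):
    """Kruskal's algorithm on the complete graph over `nodes`, maximising
    total weight; `weight` maps a frozenset {u, v} to its edge weight
    (here: the empirical mutual information of the two class variables)."""
    edges = sorted(((weight[frozenset((u, v))], u, v)
                    for i, u in enumerate(nodes) for v in nodes[i + 1:]),
                   reverse=True)
    parent = {v: v for v in nodes}          # union-find forest
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    tree = []
    for w, u, v in edges:                   # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # skip edges that close a cycle
            parent[ru] = rv
            tree.append((u, v))
    return tree

def direct_from_root(tree_edges, root):
    """Step 4: orient the undirected tree by setting all arcs away from `root`."""
    adj = {}
    for u, v in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    arcs, frontier, seen = [], [root], {root}
    while frontier:
        u = frontier.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                arcs.append((u, v))
                frontier.append(v)
    return arcs
```

Kruskal's algorithm works unchanged for maximisation because only the sort order of the edge weights matters.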
For the conditional mutual-information term pertaining to the feature variables, we observe that it is maximised by finding a maximum-likelihood directed spanning tree over the feature variables. Such a tree is constructed by the following procedure:



1. Construct a complete directed graph over the set of feature variables V_F.
2. Assign a weight I(F_i ; F_j | \Pi_{F_j}^C) to each arc from F_i to F_j, i != j.
3. Build a maximum-weighted directed spanning tree, for example using the algorithm of Chu and Liu (1965) or Edmonds' algorithm (1967).

We would like to note that for maximising the conditional mutual-information term for the feature variables, we have to construct a directed spanning tree, whereas for maximising the mutual-information term for the class variables we compute an undirected one. The need for this difference arises from the observation that I(C_i ; C_j) = I(C_j ; C_i) for the class variables, while in general I(F_i ; F_j | \Pi_{F_j}^C) != I(F_j ; F_i | \Pi_{F_i}^C) for the feature variables, since the two feature variables need not have the same class parents.

Our algorithm for solving the learning problem for fully tree-augmented multi-dimensional classifiers with a fixed feature selection subgraph is composed of the two procedures described above. Solving the problem thus amounts to computing an undirected maximum-weighted spanning tree over the class variables and a directed maximum-weighted spanning tree over the feature variables. The computation of the weights for the undirected tree takes O(n^2 N) time, while the construction of the tree itself requires O(n^2 \log n) time (Kruskal, 1956); the computation of the weights for the directed tree takes O(m^2 N) time, while the construction of the tree itself takes O(m^2) time. Since a typical data set satisfies N >= \log n and N >= m, we have that the overall complexity of our algorithm is polynomial in the number of variables involved.

We conclude this section by observing that the learning problem can also be formulated for classifiers in which the set A_C or the set A_F is empty. Our algorithm is readily adapted to these problems. With A_C = \emptyset, only the conditional mutual-information term for the feature variables has to be maximised, using the second procedure above. With A_F = \emptyset, only the mutual-information term for the class variables has to be maximised, using the first procedure.

5 Feature Selection

In the previous section, we have addressed the learning problem for fully tree-augmented multi-dimensional classifiers with a fixed feature selection subgraph. We now briefly discuss feature subset selection for our classifiers.

It is well known that, if more or less redundant features are included in a data set, these features may bias the classifier that is learned from the data, which in turn may result in a relatively poor classification accuracy. By constructing the classifier over just a subset of the feature variables, a less complex classifier is yielded that tends to have a better performance (Langley et al., 1992). Finding a minimum subset of features such that the selective classifier constructed over this subset has highest accuracy is known as the feature subset selection problem. This problem unfortunately is known to be NP-hard in general (Tsamardinos and Aliferis, 2003).

For Bayesian network classifiers, different heuristic approaches to feature subset selection have been proposed. One of these is the wrapper approach (Kohavi and John, 1997), in which the selection of feature variables is merged with the learning algorithm. We now argue that the same approach can be used for our multi-dimensional Bayesian network classifiers. The resulting procedure is as follows:

1. Choose the empty feature selection subgraph for the initial current subgraph.
2. From the current subgraph, generate all possible feature selection subgraphs that are obtained by adding an arc from a class variable to a feature variable.
3. For each generated feature selection subgraph, compute the accuracy of the best classifier given this subgraph, learned using the algorithm from the previous section.
4. Select the best generated subgraph, that is, the feature selection subgraph of the classifier of highest accuracy.



5. If the accuracy of the classifier with the selected subgraph is higher than that of the classifier with the current subgraph, then denote the selected subgraph as the current subgraph and go to Step 2. If not, then stop and propose the best classifier for the current subgraph as the overall best.

Starting with an empty graphical structure without any arcs, as in the above procedure, is known as forward selection. Alternatively, backward elimination can be used, which starts with a full graphical structure from which single arcs are removed in an iterative fashion.
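The five-step wrapper above amounts to a small greedy loop, sketched below under our own naming. The `score` callback is hypothetical, not from the paper: it stands for learning the best admissible classifier for a candidate feature selection arc set and returning its estimated accuracy.

```python
def forward_selection(class_vars, feature_vars, score):
    """Greedy wrapper: start from the empty feature selection arc set,
    repeatedly add the single class->feature arc whose resulting classifier
    scores best, and stop as soon as no candidate improves on the current
    arc set. `score(arc_set)` is a hypothetical callback returning the
    accuracy of the best classifier learned for that arc set."""
    current = frozenset()
    current_acc = score(current)
    while True:
        # Step 2: all subgraphs reachable by adding one class->feature arc.
        candidates = [current | {(c, f)}
                      for c in class_vars for f in feature_vars
                      if (c, f) not in current]
        if not candidates:
            return current, current_acc
        # Steps 3-4: score every candidate and pick the best one.
        scored = [(score(a), a) for a in candidates]
        best_acc, best = max(scored, key=lambda t: t[0])
        # Step 5: accept only strict improvements, otherwise stop.
        if best_acc > current_acc:
            current, current_acc = best, best_acc
        else:
            return current, current_acc
```

Backward elimination would use the same loop with a full initial arc set and candidates generated by removing single arcs.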
6 Experimental Results

In this section we present some preliminary numerical results from our experiments to illustrate the benefits of multi-dimensionality of Bayesian network classifiers.

Since the UCI repository of benchmark data does not include any data sets with multiple class variables, we decided to generate some artificial data sets to test our learning algorithm. These data sets were generated from the oesophageal cancer network (van der Gaag et al., 2002). This network for the staging of cancer of the oesophagus includes 42 random variables, of which 25 are observable feature variables. Three of the network's variables in essence are class variables, which in the current network are summarised in a single output variable. We generated three data sets of different sizes using logic sampling. From the generated samples we removed the values of all non-observable variables, except for those of the three class variables.

Table 1: Experimental results for different types of classifier on data sets of different size generated from the oesophageal cancer network. (For each data-set size, the compound naive, multi-dimensional naive, compound TAN and multi-dimensional TAN classifiers are compared on their accuracy and on the number of estimated parameters.)

From the three data sets we constructed fully naive and fully tree-augmented multi-dimensional classifiers. For this purpose, we used the learning algorithm described in the previous sections. For comparison purposes, we further learned naive and tree-augmented Bayesian network classifiers with a compound class variable from the data. For all classifiers, we used a forward-selection wrapper approach with cross-validation. For the multi-dimensional classifiers, we defined their accuracy as the proportion of samples that were classified correctly for all class variables involved.

The results from our experiments are summarised in Table 1. The accuracy of the best learned classifier is given in the second column; the third column gives the number of parameter probabilities that were estimated for this classifier. From the table we may conclude that the multi-dimensional classifiers, without exception, outperform their compound counterparts in terms of accuracy. Also, the numbers of estimated parameters are considerably smaller for the multi-dimensional classifiers: on average, the learned multi-dimensional classifiers require one-third of the number of parameters of their compound counterparts. The difference is particularly striking for the naive classifiers learned from the smallest data set, where the multi-dime