Reliability Engineering and System Safety 66 (1999) 69–84

www.elsevier.com/locate/ress

Alternative measures of risk of extreme events in decision trees


H.I. Frohwein, J.H. Lambert, Y.Y. Haimes*
Center for Risk Management of Engineering Systems, Olsson Hall, University of Virginia, Charlottesville, VA 22903, USA
Received 15 December 1998; accepted 23 February 1999

Abstract
A need for a methodology to control extreme events, defined as low-probability, high-consequence incidents, in sequential decisions is identified. A variety of alternative and complementary measures of the risk of extreme events are examined for their usability as objective functions in sequential decisions, represented as single- or multiple-objective decision trees. Earlier work had addressed difficulties, related to non-separability, with the minimization of some measures of the risk of extreme events in sequential decisions. In an extension of these results, it is shown how some non-separable measures of the risk of extreme events can be interpreted in terms of separable constituents of risk, thereby enabling a wider class of measures of the risk of extreme events to be handled in a straightforward manner in a decision tree. Also for extreme events, results are given to enable minimax- and Hurwicz-criterion analyses in decision trees. An example demonstrates the incorporation of different measures of the risk of extreme events in a multi-objective decision tree. Conceptual formulations for optimizing non-separable measures of the risk of extreme events are identified as an important area for future investigation. © 1999 Elsevier Science Ltd. All rights reserved.
Keywords: Decision trees; Risk; Sequential decisions; Extreme events; Multiple objectives; Optimization; Multicriteria decision making

Nomenclature

E[·]              expected value
E[· | condition]  conditional expected value, given some condition
f(·)              probability density function
fi(·)             probability density function associated with alternative i
f4,α              conditional expected value (rare event) of outcome X, given that the magnitude of the outcome falls in the upper 100(1 − α)% tail of the cumulative probability distribution of outcomes, f4,α ≡ E[X | X ≥ F⁻¹(α)]
f4,α*             partial expected value (rare event) of outcome X, given that the magnitude of the outcome falls in the upper 100(1 − α)% tail of the cumulative probability distribution of outcomes, f4,α* ≡ (1 − α)E[X | X ≥ F⁻¹(α)]
f4,β              conditional expected value (severe event) of outcome X, given that the magnitude of the outcome attains at least the threshold β, f4,β ≡ E[X | X ≥ β]
f4,β*             partial expected value (severe event) of outcome X, given that the magnitude of the outcome attains at least the threshold β, f4,β* ≡ f̄4,β E[X | X ≥ β]
F(·)              cumulative probability distribution function
F⁻¹(·)            inverse cumulative probability distribution function
g(·)              function of (·)
h                 hours
H(α)              Hurwicz criterion
i                 index
j                 index
k                 (1) distribution parameter, (2) order of separability
max               maximize
min               minimize
min(·,·)          multi-objective minimization with respect to two objectives
Mr,p,s            probability-weighted moment
n                 index, integer number
pi                probability of experiencing outcome i
Prob[·]           probability of specified condition [·]
q1, q2            non-exceedance (cumulative) probabilities in the tail of the probability distribution
r(f(·))           operator on a probability density function
R                 semivariance
t                 threshold for semivariance
x                 realization of X, outcome (cost, damage)
x1, x2            outcome levels from the tail of the probability distribution of damages
xα                fractile, xα ≡ F⁻¹(α)
X                 random variable (outcome, damage)
α                 (1) decision maker's non-exceedance probability (cumulative probability) of concern, (2) factor in Hurwicz criterion H(α)
β                 decision maker's outcome threshold of concern
χ                 outcome threshold
f̄4,β              probability of outcome X at least attaining threshold β, f̄4,β ≡ P(X ≥ β)
λj                j-th L-moment
μ                 mean of random variable
σ, σ²             standard deviation, variance of random variable
∂/∂               partial derivative
∈                 element of
⇒                 implies
∀                 for all
[a,b]             set of real numbers between a (included) and b (included)
1(X≥β)            indicator function, 1(X≥β) ≡ 1 if x ≥ β, and 0 if x < β

* Corresponding author. Fax: +1-804-924-0865.
E-mail address: haimes@virginia.edu (Y.Y. Haimes)

0951-8320/99/$ - see front matter © 1999 Elsevier Science Ltd. All rights reserved.
PII: S0951-8320(99)00022-8

Fig. 1. Components of the sequential optimization of the risk of extreme events: sequential decision-making, extreme events, and multi-objective decision-making.

1. Introduction
There is a great need for the control of extreme events in situations where present decisions have an impact on future options, as decision makers in such situations are often concerned with potential outcomes of a magnitude that is large and only rarely reached. Explicit measures of the risk of extreme events can be included in a decision tree, along with other objectives such as the expected outcome. The control of extreme events can thus be performed in a multi-objective framework. Also, a variety of measures of the risk of extreme events exist and may be included in a multi-objective decision tree. For example, one can consider the optimization of the vector of objectives

{E[performance],
E[cost],
E[loss],
E[loss | severe conditions],
E[loss | rare conditions],
E[loss | worst-case scenario],
Prob[loss exceeds a fixed threshold], …}
or a subset thereof, which includes the explicit minimization
of the risk of extreme (rare and/or severe) events. Of course,
other objectives could be added or substituted for those
suggested earlier. Thus, our concern is with the intersection
of the domains of sequential decision making using decision
trees, multi-objective decision making, using trade-off
analysis, and extreme events (Fig. 1). Decision trees (e.g.
[1]) are prominent in literature and practice; Corner and
Kirkwood [2] give extensive references. Haimes et al. [3]
introduced multi-objective decision trees and their use for
the minimization of the risk of extreme events, which is also
the topic of Frohwein et al. [4,5]. Multi-objective decision making is discussed, for example, in Refs. [6–9]. Refs. [10,11], for example, are concerned with the statistics of extremes. Refs. [12–14] are examples that address the risk of extreme events in a multi-objective framework. While each of the three mentioned domains (sequential decision making, multi-objective decision making, risk of extreme events) has found quite widespread attention in the literature, much less attention has been paid to the domain defined by the intersection of the three areas of research (Fig. 1) (e.g. [3–5]).
The term risk is used in the literature with various meanings. Here, the definition of risk as a family of measures of
the probability and severity of adverse effects is adopted,
inspired by Lowrance [15] and Kaplan and Garrick [16]. It
is assumed throughout that the outcome of an action is,
among other things, a loss or damage that is to be minimized.
Decision makers should be able to use the measures of the
risk of extreme events that best represent the facets of risk
with which they are concerned in their analyses. However,
not all these measures can be optimized in a straightforward
manner in a decision tree, using the well-known method of
averaging out and folding back ([1]). Haimes et al. [3] have
identified the failure of averaging-out-and-folding-back,
due to non-separability, when incorporating a conditional
expected value as a measure of the risk of extreme events
into a multi-objective decision tree. Similar difficulties arise

for an entire class of non-separable measures of the risk of extreme events.

Fig. 2. Decision tree (root node, decision nodes, chance nodes, branches, terminal chance nodes with continuous distributions, terminal nodes).
The aim of this paper is to provide the decision maker
with improved methodology to handle the various criteria
for control of the risk of extreme events in the formulation
and solution of single- and multiple-objective decision trees.
Methods that help to overcome the problems caused by non-separability for a number of measures of the risk of extreme events are identified; in particular, an interpretation of some measures in terms of several other measures of the risk of extreme events is adopted in order to streamline the
solution of a decision tree. Multi-objective decision-tree
analysis is thus extended to a wider class of formulations
for optimization.
The organization of the paper is as follows. In Section 2, a
background on decision trees, linearity, and k-th order
separability is given. A compendium of measures of the
risk of extreme events is provided in Section 3, along with
the methods for sequential optimization of each measure in
a decision tree. The use of multiple measures of the risk of
extreme events in a sequential, multi-objective decision
analysis is discussed and illustrated with an example in
Section 4. Finally, areas for future research are identified
in Section 5.

2. Background
2.1. Decision trees
Decision trees are prominent in sequential decision making and are often structured as shown in Fig. 2. Decisions are depicted by squares, chance events by circles, and the branches indicate the feasible paths through the tree.
It is assumed that the reader is familiar with the averaging-out-and-folding-back procedure for solving decision
trees [1], which leads to the sequential elimination of some
(subtree) policies, thereby eliminating the need to evaluate
all possible policies at the root node of the tree. Haimes et al.
[3] give a multi-objective extension. Further, it is assumed
here that the outcomes, rather than accruing at the intermediate stages, are represented as an overall outcome associated with a given complete path through the decision tree.
A path is a combination of decisions and discrete random
events. The terminal chance nodes can be specified as
discrete or continuous probability distributions over possible outcomes. A policy is a set of decisions to be taken at
various decision nodes in order to traverse the decision tree
from the root node to one of the terminal nodes. A policy is
efficient (non-dominated) if there is no other policy that
performs at least as well with respect to all objectives and
better with respect to at least one objective. Efficient policies can be identified using different techniques, such as the graphical approach, the weighting approach, or the ε-constraint approach [6].
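The efficiency test described above can be sketched as a simple pairwise-domination filter. In the following illustrative example the policy names and objective values are hypothetical, and all objectives are taken to be minimized:

```python
def efficient(policies):
    """Return the non-dominated subset of {policy: objective tuple},
    where every objective is to be minimized."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return {name: obj for name, obj in policies.items()
            if not any(dominates(other, obj) for other in policies.values())}

# hypothetical policies scored on (expected loss, exceedance probability)
candidates = {"P1": (3.0, 0.10), "P2": (2.5, 0.20), "P3": (4.0, 0.25)}
print(sorted(efficient(candidates)))  # ['P1', 'P2']; P3 is dominated by P1
```

The weighting and ε-constraint approaches would instead scalarize the objectives before this comparison; the filter above corresponds to the direct definition of non-dominance.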
2.2. Linear objective functions in decision trees
The decision maker generally does not act on the basis of an appreciation of the entire probability density of the outcome, but rather on summary statistics that are thought to represent the underlying densities well. Thus, consider an
objective function that is a scalar operator r(f(x)) on the
probability density function f(x) of outcomes, such as the
expected value (loss), or a conditional expected value (loss).
For example, the expected-value operator rEV(f(x)) can be defined as

rEV(f(x)) ≡ E[X] = ∫ x f(x) dx.   (1)
With respect to decision trees, an operator r(f(x)) is said to be linear when (e.g. [17])

r(∑_{i=1}^{n} p_i f_i(x)) = ∑_{i=1}^{n} p_i r(f_i(x)),   (2)

where p_i ≥ 0 and ∑_{i=1}^{n} p_i = 1.

Fig. 3. Averaging out probability densities at a chance node.

According to Eq. (2), an operator is linear when averaging out of the operator, after it has been applied individually to the probability densities fi(x) (right-hand side of Eq.
(2)), leads to the same result as applying the operator to the
averaged out probability densities (left-hand side of Eq. (2),
see Fig. 3). Thus, a linear operator, e.g. expected value, can
be averaged out and folded back directly in a decision tree,
as this procedure leads to the same result as averaging out

and folding back the probability densities and then applying the operator (e.g. [17]).

Fig. 4. Optimization of the k-th order separable measures of the risk of extreme events in a multi-objective decision tree.
In contrast, a non-linear operator requires that the
entire probability distributions be averaged out (mixed) to
the root node of the tree according to the calculus of probability (as on the left-hand side of Eq. (2)) before the operator is applied. Consequently, in the absence of any more
advanced techniques (discussed below), the use of some
non-linear operator typically requires the calculation of
the mixture probability densities associated with all policies
available at the root node of the decision tree. The operator
can only then be applied to the mixture densities at the root
node in order to identify the optimal policy, i.e. the policy
that optimizes the operator. Linear operators are separable
and monotonic (see Refs. [1820] for definition), and separability and monotonicity are necessary and sufficient conditions for averaging out and folding back (e.g. [18]).
2.3. Concept of k-th order separability in decision trees
Some operators that are non-linear (non-separable) and that do not average out and fold back in decision trees can be expressed as a strictly increasing or decreasing function of (at least) k separable and monotonic operators (k > 1) [18–21]; i.e.

r(f(x)) = g(r1(f(x)), …, rk(f(x))),   (3)

where r1(f(x)) to rk(f(x)) are separable and monotonic and

∂r(f(x))/∂ri(f(x)) > 0 ∀x, or ∂r(f(x))/∂ri(f(x)) < 0 ∀x, ∀i ∈ {1, …, k}.   (4)

Eq. (4) states that r(f(x)) is required to be either strictly increasing or strictly decreasing in each ri(f(x)). An operator
r(f(x)) with these features is k-th order separable. When the
k-th order separability can be invoked, the policy that optimizes the original, non-separable operator at the root node
of the decision tree is attained by a non-inferior policy of the
modified k-th order separated multi-objective problem at the
root node of the decision tree [1820].

Fig. 4 illustrates how k-th order separability can be applied to the minimization of the risk of extreme events in decision trees. A k-th order separable measure of the risk of extreme events is split into k constituents, each of which is a measure of the risk of extreme
events in its own right. A multi-objective optimization is
applied to these constituents using a multi-objective decision tree and trade-off analyses. The constituents are then
recomposed to the original, but now optimized measure of
the risk of extreme events. The steps involved in the optimization of a k-th order separable measure of the risk of
extreme events in a decision tree are developed in detail in
Refs. [4,5,22].

3. Measures of the risk of extreme events


A distinction can be made between rare and severe
events [23]. Extreme events defined relative to a frequency
of occurrence, i.e. events with a severity of consequences
that is only infrequently reached or exceeded, can be called
rare events. (Any isolated level of severity of consequences
may be rare, i.e. occurs infrequently, and therefore the
above definition refers to the likelihood of reaching or
exceeding some level of severity.) Extreme events defined
relative to the severity of consequences, i.e. events with a
severity of consequences that attains or exceeds a threshold
of concern, can be called severe events. Rarity and severity
are typically associated; the more severe an event is, the
more rare it will tend to be, and vice versa.
In the following, a variety of measures of the risk of
extreme events in a decision tree are presented in alphabetical order. Each measure is briefly introduced, followed
by some comments on its interpretation, where applicable.
Then, it is suggested how to optimize the measure using a
decision tree. Refer to Table 1 for an overview and the
mathematical definition for each criterion. The list of
measures given below is not claimed to be comprehensive.
Rather, it can serve as a starting point for further research.

Table 1
Measures of the risk of extreme events. Each entry lists the measure, its definition, its order of separability, and the sequential optimization approach.

Conditional expected value (rare event): f4,α ≡ E[X | X ≥ F⁻¹(α)] = [∫_{F⁻¹(α)}^{∞} x f(x) dx] / [∫_{F⁻¹(α)}^{∞} f(x) dx] = [∫_{F⁻¹(α)}^{∞} x f(x) dx] / (1 − α). Separability: 2 (approximation). Optimization: tail substitution, then average out and fold back according to min(q1, −q2), with q1 = F(x1), q2 = F(x2).

Conditional expected value (severe event): f4,β ≡ E[X | X ≥ β] = [∫_{β}^{∞} x f(x) dx] / [∫_{β}^{∞} f(x) dx]. Separability: 2. Optimization: average out and fold back according to min(f4,β*, −f̄4,β).

Exceedance probability: f̄4,β ≡ P(X ≥ β) = 1 − F(β) = E[1(X≥β)] = ∫_{β}^{∞} f(x) dx. Separability: 1. Optimization: average out and fold back (direct).

Expected value: E[X] = ∫_{0}^{∞} x f(x) dx. Separability: 1. Optimization: average out and fold back (direct).

Fractile: xα ≡ F⁻¹(α). Separability: 2 (approximation). Optimization: tail substitution; averaging out and folding back of two cumulative probabilities to be explored.

Hurwicz: H(α) = α max(x) + (1 − α) min(x). Separability: n/a. Optimization: see text.

L-moments: λj, a linear combination of probability-weighted moments (see below). Separability: 1. Optimization: average out and fold back (direct).

Minimax: max(x). Separability: n/a. Optimization: see text.

n-th order central moments: E[(X − μ)^n], n = 2, 3, …, with μ = E[X]; e.g. the variance σ² ≡ E[(X − μ)²] = E[X²] − (E[X])². Separability: n. Optimization: average out and fold back according to optimize(E[X], …, E[X^n]); for the variance, min(E[X²], −E[X]).

n-th order moments: E[X^n] = ∫_{0}^{∞} x^n f(x) dx, n = 2, 3, … Separability: 1. Optimization: average out and fold back (direct).

Partial expected value (rare events): f4,α* ≡ E[X·1(X ≥ F⁻¹(α))] = ∫_{F⁻¹(α)}^{∞} x f(x) dx = (1 − α) f4,α. Separability: 2 (approximation). Optimization: tail substitution, then average out and fold back according to min(q1, −q2).

Partial expected value (severe events): f4,β* ≡ E[X·1(X≥β)] = f̄4,β f4,β = ∫_{β}^{∞} x f(x) dx. Separability: 1. Optimization: average out and fold back (direct).

Probability-weighted moments: Mp,r,s = E[X^p {F(X)}^r {1 − F(X)}^s] = ∫_{0}^{∞} x^p {F(x)}^r {1 − F(x)}^s f(x) dx. Separability: 1. Optimization: average out and fold back (direct).

Semivariance: R = ∫_{t}^{∞} (x − t)^a f(x) dx, a > 0. Separability: 1. Optimization: average out and fold back (direct).

Also, not all the measures discussed are necessarily employed at present by decision makers; these measures are included for their theoretical interest and possible future use. However, the solution techniques proposed here are considered to extend to many other cases and hence provide valuable tools for a decision maker concerned with impacts of current decisions on future options.
3.1. Conditional expected value (rare event)
The conditional expected value for rare events is denoted as f4,α [12,24]. In Table 1, the fixed value α denotes the decision maker's non-exceedance probability of concern, and F⁻¹(·), the inverse probability distribution of the random outcome (loss) X for a chosen policy. The range of rare events comprises outcomes having an exceedance probability of (1 − α) or less.
As a conditional expected value, f4,α takes into consideration both the probability and severity of possible adverse effects. For the use of conditional expected values as a measure of the risk of rare and severe (see below) events, see, for example, Refs. [12,25–32]; some critique of the use of conditional expectations is given in Refs. [33,34].
Haimes et al. [3] first identified the difficulties with averaging out and folding back conditional expected values in decision trees. Frohwein et al. [5] showed that the conditional expected value for rare events f4,α, unlike the conditional expected value for severe events f4,β (see below), is not second-order separable, but demonstrated that an approximation of the conditional expected value for rare events is second-order separable. The approximation is based on assuming one of the three limiting tail forms (Gumbel, Frechet, Weibull) for the probability distributions of the outcomes for each policy at the root node of the decision tree and expressing the conditional expected value as a function of two cumulative probabilities, q1 and q2. Minimizing f4,α thus entails determining an optimal balance between these two cumulative probabilities. The policy with the minimum value of the approximated conditional expected value f4,α is found at the root node of the multi-objective decision tree from among the efficient solutions with respect to min(q1, −q2). Frohwein et al. [5] provide a detailed example.
3.2. Conditional expected value (severe event)
The conditional expected value for severe events f4,β is a counterpart of the conditional expected value for rare events (above) [12,24]. The value of β bounds from below the range of outcomes considered to be severe events by the decision maker.
The conditional expected value f4,β as such does not account for the overall probability of experiencing a severe event of magnitude at least β, i.e. the probability of reaching or exceeding β. However, in the partitioned multi-objective risk method [12,24], f4,β and other conditional expected values, which describe the low- and mid-severity event ranges, are used in a framework that also incorporates the probabilities of experiencing a given type of event. The conditional expected value for severe events f4,β has the advantage over the conditional expected value for rare events f4,α of using a fixed outcome β, not a fixed probability, as a reference, which is intuitively easier to understand for many decision makers.
While the conditional expected value for severe events does not average out and fold back directly in decision trees [3], Frohwein et al. [4] showed that second-order separability can be invoked as follows. The conditional expected value f4,β can be interpreted as the ratio of the partial expected value for severe events f4,β* and the exceedance probability f̄4,β, both of which are, in their own right, alternative measures of the risk of extreme events (see below), and each of which averages out and folds back in a decision tree. The policy with the minimum value of the conditional expected value f4,β is found from among the efficient policies with respect to min(f4,β*, −f̄4,β) at the root node of the decision tree, after performing the calculation f4,β = f4,β*/f̄4,β for each efficient policy. Frohwein et al. [4] provide detailed examples.
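As a minimal numerical sketch of this decomposition (the outcome distributions and the threshold β = 8 below are hypothetical, chosen only for illustration), the two separable constituents can be computed per policy and recombined into f4,β = f4,β*/f̄4,β at the root:

```python
beta = 8.0

def partial_ev(f, beta):
    """f4,beta* = E[X·1(X >= beta)] for a discrete distribution {outcome: prob}."""
    return sum(x * p for x, p in f.items() if x >= beta)

def exceed_prob(f, beta):
    """f-bar4,beta = P(X >= beta)."""
    return sum(p for x, p in f.items() if x >= beta)

# hypothetical policies, already averaged out to root-node outcome distributions;
# both constituents are linear, so in a tree they fold back branch by branch
policies = {
    "A": {0: 0.80, 8: 0.15, 20: 0.05},
    "B": {0: 0.90, 10: 0.08, 30: 0.02},
}
for name, f in policies.items():
    pstar, pbar = partial_ev(f, beta), exceed_prob(f, beta)
    print(name, pstar, pbar, pstar / pbar)  # ratio is f4,beta = E[X | X >= beta]
```

With these numbers both policies are efficient with respect to min(f4,β*, −f̄4,β) (A has the larger exceedance probability, B the larger partial expected value), and forming the ratio identifies A (f4,β = 11) as the conditional-expected-value-minimal policy.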


3.3. Exceedance probability (and cumulative probability)


The probability f̄4,β of the outcome attaining or exceeding a threshold β of concern, i.e. the probability of observing a severe event, can serve as a measure of extreme events because, all else being equal, a decision maker concerned with extreme events may want to minimize this probability.
The exceedance-probability criterion focuses solely on the overall probability of experiencing a severe event. It does not distinguish the level of severity and probability of occurrence of different severe events beyond the threshold β. However, the exceedance probability can provide valuable information in connection with, for example, the conditional expected value for severe events (above) or the partial expected value for severe events (below).
Exceedance probabilities, and hence cumulative probabilities, are separable and monotonic, and therefore average out and fold back directly in a decision tree. See, e.g., Ref. [35] for a use of the exceedance probability as an objective in decision-tree analysis. Cumulative probabilities are used in Ref. [5] to optimize the conditional expected value for rare events (above) using decision trees.
3.4. Expected value
The expected value of the outcome is not actually a
measure of the risk of extreme events. It aggregates low,
intermediate, and severe outcomes into a single number
[16]. Nevertheless, it is a commonly adopted measure of
performance for decision making under risk. It is included
here because of the importance of the expected-value operation for many other measures of risk of extreme events.
The expected-value operator is separable and monotonic
and thus averages out and folds back directly in decision
trees (e.g. [1]).
3.5. Fractile
Fractiles can be seen as a counterpart to exceedance probabilities in decision making: whereas exceedance probabilities relate to a fixed severe-event threshold, fractiles relate to a fixed rare-event threshold. In the case of fractiles, a non-exceedance probability (i.e. a cumulative probability) α is chosen, and the corresponding fractile is to be minimized by an adequate choice of policy.
Fractiles associate a severity level with a fixed cumulative probability; doing so has a significantly different interpretation in decision making than associating a probability with a fixed severity level. Also, similar to the exceedance-probability approach, the fractile criterion does not distinguish the severity and probability of occurrence of rare events beyond the threshold.
An attempt to average out and fold back fractiles in decision trees runs into problems similar to those of the sequential optimization of the conditional expected value for rare events (above). While a procedure for folding back fractiles in decision trees has not yet been developed, an approach similar to the one used for the conditional expected value f4,α (establishing a second-order separable approximation through probability-distribution tail replacement [5]) can be suggested. This suggestion is based on the insight that, for the three mentioned limiting tail forms, the fractile can be calculated from the two cumulative probabilities q1 and q2 [22].
3.6. Hurwicz criterion
The Hurwicz [36] criterion describes a pessimistic decision maker for α = 1, and an optimistic decision maker for α = 0. In particular, for α = 1, the minimax criterion (see below) results. However, α can take on any value in the range [0,1], resulting in this criterion being a weighted sum of the optimistic and pessimistic outcomes. Concern with extreme events is expected to manifest itself in values of α close or equal to 1. (The α in this approach is unrelated to the α used for the conditional expected value f4,α or the α-fractile.)
The Hurwicz criterion is only applicable to bounded
distributions, i.e. cases where there exist finite minimum
and maximum possible outcomes. The criterion focuses
solely on severity, ignoring the probability of experiencing
an extreme event. In this context, only the worst possible
outcomes per alternative are considered to be extreme
events, i.e. the definition of an extreme event differs from
one terminal chance node to the other in a decision tree.
The α-Hurwicz approach can be implemented in a multi-objective decision tree by selecting, at each decision node, the subpolicies which are efficient with respect to min(max(x), min(x)), where max(x) ≡ sup{x: F(x) < 1} and min(x) ≡ inf{x: F(x) > 0}, i.e. by minimizing both the worst and the best possible outcomes. At each chance node, the probabilities are disregarded. In effect, a probability of 1.0 is given to the worst (largest) possible outcome in order to determine a value for max(x), and a probability of 1.0 is given to the best (lowest) possible outcome in order to determine a value for min(x). Thereby, at least one pair of probabilities is associated with each branch. Subpolicies that are not efficient with respect to min(max(x), min(x)) after this averaging out are discarded.
Fig. 5 depicts a sample α-Hurwicz decision tree. Assumed minimum and maximum outcomes are indicated for each terminal chance node. At decision nodes A through D, the alternative(s) that are efficient with respect to min(max(x), min(x)) are chosen; the results are noted above the decision nodes. The discarded alternatives are indicated by a cross-hatched branch. At node C, neither of the two alternatives can be excluded from consideration. At each chance node, a probability of 1.0 is given to the worst (largest) maximum outcome, and a probability of 0.0 to the best (lowest) maximum outcome (first probability in the pair). Further, a probability of 1.0 is given to the best (lowest) minimum outcome, and a probability of 0.0 to the

worst (largest) minimum outcome (second probability in the pair).
At chance node 1.2, two pairs of probabilities are noted at each branch. The first pair refers to averaging out alternatives 2.5 and 2.7; the second pair applies to 2.6 and 2.7. Note that both the subpolicy (2.5,2.7) and the subpolicy (2.6,2.7) are efficient with respect to min(max(x), min(x)) at chance node 1.2. Finally, at the root node R, policy (1.1, (2.2,2.4)) with (max(x), min(x)) = (4, 0) is dominated by policies (1.2, (2.5,2.7)) and (1.2, (2.6,2.7)) with (max(x), min(x)) = (2, −1) and (4, −2), respectively. The preference of the decision maker for either (1.2, (2.5,2.7)) or (1.2, (2.6,2.7)) depends on the chosen value of α. More specifically, comparison of the values of H(α) reveals that the decision maker will prefer (1.2, (2.5,2.7)) for values of α larger than 1/3, prefer (1.2, (2.6,2.7)) for values of α smaller than 1/3, and be indifferent between the two for α equal to 1/3.
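The indifference value α = 1/3 can be verified directly from the two (max(x), min(x)) pairs given above; a small sketch:

```python
def hurwicz(alpha, worst, best):
    # H(alpha) = alpha*max(x) + (1 - alpha)*min(x); smaller is preferred (losses)
    return alpha * worst + (1 - alpha) * best

p_2527 = (2, -1)   # (max(x), min(x)) of policy (1.2, (2.5,2.7))
p_2627 = (4, -2)   # (max(x), min(x)) of policy (1.2, (2.6,2.7))

for alpha in (0.2, 1/3, 0.5):
    print(alpha, hurwicz(alpha, *p_2527), hurwicz(alpha, *p_2627))
# at alpha = 1/3 both values are (numerically) 0; above 1/3 the first
# policy has the smaller H(alpha), below 1/3 the second does
```

Algebraically, H(α) = 3α − 1 for the first policy and 6α − 2 for the second, which cross exactly at α = 1/3.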
3.7. L-moments
L-moments λj are linear combinations of probability-weighted moments (see below); for the underlying definition, see Ref. [37]. For example, the following relations hold:

λ1 = M1,0,0 = E[X],
λ2 = M1,0,0 − 2M1,0,1,

where Mi,j,k denotes a probability-weighted moment.


Some of the L-moments, which emphasize large values of
the outcome, can conceivably be used as measures of the
risk of extreme events. They take into consideration both the
probability and severity of adverse events. However, decision makers may not be accustomed to thinking in terms of
L-moments.
As linear combinations of probability-weighted
moments, L-moments are also separable and monotonic,
and average out and fold back directly in decision trees.
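These relations can be checked numerically. The sketch below evaluates the probability-weighted moments Mp,r,s = E[X^p F(X)^r (1 − F(X))^s] by a midpoint rule for X ~ Uniform(0, 1), an assumption made purely for illustration (then F(x) = x, and the known values are λ1 = 1/2 and λ2 = 1/6):

```python
def pwm(p, r, s, n=200_000):
    """Midpoint-rule estimate of M_{p,r,s} = E[X^p F(X)^r (1 - F(X))^s]
    for X ~ Uniform(0, 1), where F(x) = x."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        total += x**p * x**r * (1.0 - x)**s
    return total / n

lam1 = pwm(1, 0, 0)                     # ~ 1/2
lam2 = pwm(1, 0, 0) - 2 * pwm(1, 0, 1)  # ~ 1/6
print(lam1, lam2)
```

Because the probability-weighted moments are expectations of fixed functions of the outcome, the same computation folds back branch by branch in a tree.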
3.8. Minimax criterion
The minimax criterion is related to Wald's [38] maximin criterion and assumes a finite upper bound on the damage. Under the minimax criterion, the preferred policy is the one with the lowest upper bound. This approach describes a pessimistic decision maker.
The minimax criterion is only applicable to bounded distributions, i.e. cases where there exists a finite maximum possible outcome. The criterion focuses solely on severity, ignoring the probability of experiencing an extreme event. In this context, only the worst possible outcomes per alternative are considered to be extreme events, i.e. the definition of an extreme event differs from one terminal chance node to another in a decision tree.
The minimax criterion can be implemented in a decision
tree by selecting, at each decision node, the subpolicy with

the lowest maximum damage. At chance nodes, the branch


probabilities are replaced by a probability of 1.0 for the
worst (largest) possible outcome and 0.0 for all others.
Fig. 6 depicts the solution of a sample minimax decision tree. At the terminal chance nodes, the assumed maximum outcome associated with each node is indicated. At the decision nodes A–D, the terminal chance node with the lower value of the maximum outcome is chosen, and the appropriate value is noted above each decision node. At the chance nodes 1.1 and 1.2, the probabilities 1.0 and 0.0 are assigned to the branches such that the worst possible outcome, given the selection at nodes A–D, is generated (noted above the chance nodes). At the root node R, the policy with the lowest maximum outcome, policy (1.2, (2.5,2.7)), is chosen as optimal with respect to the minimax criterion.
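The fold-back just described can be sketched recursively; the tree below is a hypothetical example, not the one in Fig. 6. Chance nodes pass up their worst (maximum) outcome, and decision nodes choose the alternative with the lowest maximum outcome:

```python
def minimax(node):
    kind, data = node
    if kind == "leaf":
        return data  # assumed maximum outcome at this terminal chance node
    values = [minimax(child) for child in data]
    # chance node: probability 1.0 on the worst branch -> take the maximum;
    # decision node: pick the alternative with the lowest maximum outcome
    return max(values) if kind == "chance" else min(values)

tree = ("decision", [
    ("chance", [("leaf", 9), ("leaf", 4)]),   # worst outcome 9
    ("chance", [("leaf", 6), ("leaf", 5)]),   # worst outcome 6
])
print(minimax(tree))  # 6: the second alternative has the lower maximum outcome
```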
3.9. n-th Order central moments

As large values of n in the n-th order central moments emphasize large outcomes, higher-order central moments can be considered as measures of the risk of extreme events. For comments on the use of higher-order moments, see the following subsection on non-central moments.
The n-th order central moment of a random variable X with mean μ is a non-separable operator. However, the n-th order central moment can be expressed as a function of the first n moments about the origin, E[X] to E[X^n] (e.g. [39]); it is therefore feasible, possibly with restrictions, to sequentially optimize the n-th order central moment by using k-th order separability (with k = n).
As an example, consider the second-order central moment, the variance σ², which is defined as

σ² ≔ E[(X − μ)²] = E[X²] − μ² = E[X²] − (E[X])².

Variance is often interpreted as a measure of risk (e.g. [40,41]). However, a high variance does not only imply a high risk of experiencing an extreme event (high severity); it also implies a high probability of observing an event with far lower than mean severity, i.e. a positive extreme event. Therefore, the use of the semivariance (see below) has been suggested in order to measure the potential for undesirable extreme events.
As for the optimization of the variance in a decision tree, note that (E[X])² increases strictly in E[X], assuming non-negative outcomes. Both E[X] and E[X²] are separable and monotonic, and σ² increases strictly in E[X²] and decreases strictly in E[X]. Therefore, in a multi-objective decision tree, the variance can be minimized by invoking second-order separability, performing the multi-objective optimization min(E[X²], −E[X]), and then identifying the variance-minimal policy among the efficient policies at the root node of the decision tree.
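In a decision tree this amounts to folding back the pair (E[X], E[X²]) and evaluating the variance only at the root. A minimal single-stage sketch, with assumed outcomes and branch probabilities:

```python
# Sketch: minimizing variance via its separable constituents E[X] and E[X^2]
# (illustrative numbers). Both moments are linear, so they average out over
# the chance branches; the variance is recovered only at the root as
# E[X^2] - (E[X])^2.
branch_probs = [0.3, 0.7]

def folded_moments(outcomes):
    ex = sum(p * x for p, x in zip(branch_probs, outcomes))
    ex2 = sum(p * x * x for p, x in zip(branch_probs, outcomes))
    return ex, ex2

policies = {"P1": [2.0, 10.0], "P2": [5.0, 6.0]}  # outcomes per chance branch
variance = {}
for name, outcomes in policies.items():
    ex, ex2 = folded_moments(outcomes)
    variance[name] = ex2 - ex ** 2

best = min(variance, key=variance.get)  # variance-minimal policy at the root
```

In a multi-stage tree the same two constituents would be carried back stage by stage, with the Pareto-efficient sub-policies of min(E[X²], −E[X]) retained at each node.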
H.I. Frohwein et al. / Reliability Engineering and System Safety 66 (1999) 69–84

3.10. n-th Order moments

The n-th order moment (about the origin) of a random variable X is defined as E[X^n] (n = 2, 3, …). As moments with values of n larger than 1 give more weight to larger values of x (i.e. larger losses), n-th order moments can be considered as measures of the risk of extreme events.
As expected values, n-th order moments are a measure of both the probability and severity of adverse events. However, decision makers are generally not used to thinking in terms of moments other than the expected value (first-order moment about the origin) and the variance or standard deviation (second-order central moment and its square root, respectively). Thus, decision makers may have difficulties in interpreting the different values of a particular moment. In addition, it is unclear which value(s) of n would be most meaningful for a particular analysis.
As linear operators, moments about the origin can be averaged out and folded back directly in a decision tree. If some other non-separable measure of the risk of extreme events can be expressed as a strictly increasing or decreasing function of k moments about the origin, k-th order separability can be achieved, given that the moments exist, i.e. they are finite. An example is the conditional expected value f4,α (see above) for the Normal distribution N(μ,σ²). According to Haimes [24],

f4,α = μ + mσ = E[X] + m·√(E[X²] − (E[X])²),

where m is a function of α alone. Under certain conditions (depending on the value of m), f4,α is strictly increasing or strictly decreasing in E[X] and E[X²], allowing the use of second-order separability. To use this approach, the damage probability distributions at the root node of the decision tree would be assumed to be Normal.
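A minimal sketch of this fold-back, assuming a Normal damage distribution and illustrative moment values (the tabulated quantile z_0.99 is the only external input):

```python
import math

# Sketch (assumed Normal damage distribution at the root): the conditional
# expected value f_{4,alpha} = mu + m*sigma is recovered from the separable
# constituents E[X] and E[X^2]; the factor m depends on alpha alone.
def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

alpha = 0.99
z_alpha = 2.3263                          # Phi^{-1}(0.99), tabulated value
m = normal_pdf(z_alpha) / (1.0 - alpha)   # conditional tail factor, ~2.665

ex, ex2 = 10.0, 104.0                     # assumed folded-back moments
sigma = math.sqrt(ex2 - ex ** 2)          # sqrt(E[X^2] - (E[X])^2) = 2.0
f4_alpha = ex + m * sigma
```

Only E[X] and E[X²] need to be averaged out through the tree; f4,α itself is evaluated once, at the root.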

3.11. Partial expected value (rare event)

The partial expected value for rare events, defined as

f4,α* ≔ E[X·1{x ≥ F⁻¹(α)}] = ∫_{F⁻¹(α)}^{∞} x f(x) dx = (1 − α) f4,α,   (8)

describes the contribution of the rare events to the overall expected value E[X]. It is related to the conditional expected value for rare events f4,α (see above), as it can be obtained by simply multiplying f4,α by the probability (1 − α) of experiencing a rare event, which by definition is the same for all policies.
The partial expected value for rare events can be optimized in a decision tree following the approach used for the conditional expected value for rare events f4,α.

3.12. Partial expected value (severe event)

Consider the expected value of the outcome, E[X]. It can be expressed as

E[X] = P(X ≤ x̄)E[X | X ≤ x̄] + P(x̄ < X < β)E[X | x̄ < X < β] + P(X ≥ β)E[X | X ≥ β],   (9)

where x̄ and β classify the outcomes into low-, mid- and high-severity ranges (see [24]). The term

f4,β* ≔ P(X ≥ β)E[X | X ≥ β] = P(X ≥ β) f4,β = E[X·1{x ≥ β}] = ∫_{β}^{∞} x f(x) dx   (10)

gives the partial expected value for severe events [34]. It describes the contribution of severe events (those with an outcome magnitude of at least β) to the overall expected value. A risk-averse decision maker may want to minimize this contribution to the expected value, all others being equal. The partial expected value for severe events can therefore serve as a measure of the risk of extreme events.
The partial expected value for severe events takes into consideration both the probability and severity of adverse effects. As evidenced by Eq. (10), it can be interpreted as the product of the exceedance probability and the conditional expected value for severe events. While the interpretation of the partial expected value is straightforward (the contribution of severe events to the overall expected value), the decision maker may often be interested in knowing the exceedance probability and the conditional expected value individually for a more complete characterization of the situation. Thompson et al. [34] promoted the use of the partial expected value for the assessment of dam safety.
The partial expected value for severe events is separable and monotonic and averages out and folds back directly in decision trees.

3.13. Probability weighted moments

Probability weighted moments involve successively higher powers of F(X) or (1 − F(X)) and may be regarded as integrals of x weighted by the polynomial {F(x)}^r or {1 − F(x)}^s (see Table 1) [37,41].
Larger values of p and r emphasize the importance of large values of X, allowing an interpretation of certain probability-weighted moments as a measure of the risk of extreme events. However, as mentioned before, decision makers are generally not familiar with thinking in terms of moments.
As linear operators, probability-weighted moments average out and fold back directly in decision trees.
3.14. Semivariance
In Table 1, a definition of the family of risk measures
based on semivariance is given according to Zeleny [14].
The semivariance can be used when the variance seems inappropriate, e.g. because the variance does not distinguish between the deviations above and below the mean, although in the realm of extreme events, the decision maker will generally be much more concerned with exceeding the mean loss than with not reaching the mean.
The semivariance accounts for both the probability and severity of severe events. It has the advantage of being related to the variance, probably the only higher-order central moment that most decision makers are fairly comfortable with. At the same time, the exact interpretation of the semivariance and the appropriate selection of the threshold t may cause difficulty.
It can be verified that, if the threshold t is defined as a constant that does not depend on f(x) and is the same for all the probability distributions, then the semivariance is a linear operator that averages out and folds back directly in decision trees. This property is lost if t is defined individually for each probability distribution (e.g. t_i = E[X_i]).
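The linearity for a fixed t can be checked directly: folding back the branch semivariances gives the same value as evaluating the semivariance of the combined distribution. A sketch with illustrative numbers (the variant used here, an assumption, is the expected squared excess of the loss over t):

```python
# Sketch: with a fixed threshold t (the same for every distribution), the
# semivariance is a linear operator, so it averages out over chance branches
# exactly like an expected value.
def semivariance(xs, ps, t):
    # expected squared excess of the loss over the threshold t
    return sum(p * max(x - t, 0.0) ** 2 for x, p in zip(xs, ps))

# two chance branches reached with probabilities 0.4 and 0.6 (illustrative)
b1_x, b1_p = [1.0, 5.0], [0.5, 0.5]
b2_x, b2_p = [2.0, 8.0], [0.3, 0.7]
t = 3.0

folded = 0.4 * semivariance(b1_x, b1_p, t) + 0.6 * semivariance(b2_x, b2_p, t)
combined = semivariance(b1_x + b2_x,
                        [0.4 * p for p in b1_p] + [0.6 * p for p in b2_p], t)
# folded == combined: the semivariance folds back exactly for fixed t
```

With a distribution-dependent threshold such as t_i = E[X_i], the two quantities differ in general, which is precisely the loss of separability noted above.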
3.15. Other measures of the risk of extreme events
There is a virtually inexhaustible variety of measures of the risk of extreme events. For example, in addition to those listed in Table 1, minimax regret [42], conditional variance, and concepts of unconditional and conditional stochastic dominance of different orders [17,43] come to mind. In many cases, it may be possible to treat these additional measures in ways similar to the ones discussed before; in some cases, additional research may be necessary to enable their adoption and optimization in a decision tree.
4. Decision trees with multiple measures of risk of
extreme events
4.1. Overview
A variety of measures of the risk of extreme events have
been reviewed above; however, none of these measures can
be claimed to be a clear best choice for decision-tree analysis. Also, quite clearly, it would be naive to expect the use of
different measures to lead to the same conclusions under all
circumstances. Asbeck and Haimes [12] and Zeleny [14]
point out that multi-dimensional measures of risk should
be used in decision analysis or reliability optimization,
rather than one-dimensional measures.
With the results presented in this paper, a larger variety of
measures of the risk of extreme events is now accessible to
decision-tree analysis. Given the reflections in the preceding
paragraph, it may be beneficial to incorporate several of
these measures into a decision tree in order to assess the
performance of a policy from different perspectives. This
can be done either by repeated analyses with changing
measures of the risk of extreme events, or by simultaneously
optimizing more than one measure of risk in a multi-objective analysis. Instead of, or in addition to, using multiple
measures of the risk of extreme events, it can also be useful
to solve the decision tree with the same basic measure of

risk, but with different values for the parameters specifying the outcomes of concern to the decision maker (α, β, etc.) in order to identify the sensitivities of the policy assessments to the choice of these parameters.
When the analysis is performed consecutively for multiple measures of the risk of extreme events, a policy may be
identified as efficient (with respect to the measure of risk
and other objectives) by one, more than one, or even all
measures of risk used in the decision tree. This should remind the decision maker that the identification of a "best" policy does not only depend on the selected objectives at an abstract level (e.g. minimization of the risk of extreme events, minimization of the expected cost), but also on the measure of performance chosen to operationalize the objective.
A policy that is identified as efficient by many or all
alternative measures of the risk of extreme events under
consideration may be considered by some decision makers
as a good choice for implementation. On the one hand, the
decision maker may find it reassuring when a policy is efficient regardless of the choice of the measure of the risk of extreme events (or any other measure of performance), i.e. when the evaluation of the policy is insensitive to the chosen measure of risk. On the other hand, such a policy may nevertheless be unacceptable to the decision maker, because its performance with respect to some objective may be unsatisfactory; an efficient policy is not necessarily a satisfactory one.
The preceding insights should also serve to caution the
decision maker when applying the concepts of efficiency (in
the case of multiple objectives) and optimality (single
objective). In particular, it should not be forgotten that efficiency and optimality apply to the solution generated with
the decision tree. As such, they are not absolute properties of
reality, but are only properties associated with results of
some more or less sophisticated model of reality, the decision tree (or any other model) used to describe the decision
problem. It should be kept in mind that changing the
measure of performance, or adding or discarding an objective can change the mathematical results. Nevertheless, the
results can provide the decision maker with helpful
guidance.
4.2. Example of maintenance of a machine to minimize the
downtimes
Consider a production job that can alternatively be
performed on two different machines. After a break-in
period, it can be established whether the chosen machine
performs properly (low defects) or not (high defects) for this
job. Then, a maintenance regime has to be chosen. The
chosen maintenance plan affects the probability distribution
of the total downtime (both planned and unplanned) per
planning period. Apart from the expected cost of production
and maintenance, the production manager is particularly
concerned with the extreme downtimes, i.e. he would like

Fig. 5. α-Hurwicz decision tree; values above chance and decision nodes have been averaged out and folded back from terminal chance nodes; discarded options are crosshatched.

to choose a machine-and-maintenance combination that minimizes some measure of the risk of experiencing extremely long downtimes. The decision problem is as shown in Fig. 7.
The following measures of risk of extremes will be studied, along with the expected cost of the production/maintenance plan:

• the conditional expected downtime in hours (rare event) f4,α for α = 0.99 (i.e. concern with the upper 1%-tail of the probability distribution of downtimes);
• the cumulative probability for a downtime threshold of β = 110 h; and
• the partial expected downtime f4,β* for a downtime threshold of β = 100 h.

As the approximation of the conditional expected downtime f4,α is second-order separable, the production engineer has to estimate two cumulative probabilities, q1 and q2, associated with two downtime levels, x1 and x2 respectively, for each terminal chance node, i.e. for each path through the decision tree [5,22]. Further, note that minimizing the negative cumulative probability, or maximizing the cumulative probability, is equivalent to minimizing the exceedance probability.
In order to determine the partial expected downtime f4,β* for each terminal chance node, a Pareto distribution was assumed to describe the total downtime for each terminal chance node. The Pareto distribution is defined as

F_Pareto(x) = 1 − (v/x)^k for x ≥ v; F_Pareto(x) = 0 otherwise; k > 0.   (11)

The parameters k and v of the Pareto distribution were determined based on the cumulative probabilities q1 and q2,

which have presumably been assessed by the production engineer. Table 2 indicates the values assumed (q1, q2, and cost) and calculated (f4,β*) for each terminal chance node.

Fig. 6. Minimax decision tree; values above chance and decision nodes have been averaged out and folded back from terminal chance nodes; discarded options are crosshatched.
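The Pareto fit from the two assessed cumulative probabilities, and the tail integral giving the partial expected downtime, can be sketched as follows; the quantile values used here are illustrative assumptions, not a row of Table 2.

```python
import math

# Sketch: fit the Pareto tail F(x) = 1 - (v/x)**k to two assessed cumulative
# probabilities, then evaluate the tail integral
# f*_{4,beta} = integral from beta to infinity of x f(x) dx (requires k > 1).
def fit_pareto(x1, q1, x2, q2):
    k = math.log((1.0 - q1) / (1.0 - q2)) / math.log(x2 / x1)
    v = x1 * (1.0 - q1) ** (1.0 / k)
    return k, v

def partial_expected_value(k, v, beta):
    # closed form of the tail integral for the Pareto density, beta >= v
    return k / (k - 1.0) * beta * (v / beta) ** k

k, v = fit_pareto(100.0, 0.90, 110.0, 0.93)        # assumed assessments q1, q2
f_star = partial_expected_value(k, v, beta=100.0)  # partial expected downtime [h]
```

The fit uses only the two tail constraints (v/x1)^k = 1 − q1 and (v/x2)^k = 1 − q2, which is why two cumulative probabilities per terminal chance node suffice.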
To illustrate the averaging-out-and-folding-back procedure for multi-objective decision trees, consider the multi-objective minimization of the expected cost and the partial expected downtime f4,β*. In Table 3, the policies that are efficient with respect to min(cost, f4,β*) at each stage of the decision problem are listed. For example, policy (1.1, (2.1,2.3)) at the root node R of the decision tree is
Table 2
Data for example

Terminal chance node   q1 = F(100 h)   q2 = F(110 h)   f4,β* (β = 100 h) [h]   Cost [monetary units]
2.1                    0.89            0.94            9.51                    100
2.2                    0.90            0.935           1.02                    110
2.3                    0.99            0.999           9.28                    120
2.4                    0.97            0.98            13.65                   130
2.5                    0.90            0.93            3.92                    140
2.6                    0.92            0.96            1.04                    150
2.7                    0.99            0.9999          12.84                   160
2.8                    0.91            0.985           13.05                   170


Fig. 7. Decision tree for production job example (machine choice M1/M2 at root node R; low/high defects with probabilities 0.2/0.8 for M1 and 0.65/0.35 for M2; maintenance plans P1–P4 leading to terminal chance nodes 2.1–2.8 with associated downtime distributions).

interpreted as "Schedule the production job on machine M1. If the machine produces a small number of defects, implement the maintenance plan P1; if the machine produces a large number of defects, implement the maintenance plan P3."

There are four overall policies that are efficient at the root
node of the decision tree, and the decision maker should
choose to implement one of them according to his preferences. The efficient solutions are characterized by the fact
that, by choosing one efficient solution over another efficient

Fig. 8. Cost/risk trade-offs for various measures of risk of extreme events; circled policies are eliminated by the time the root node of the decision tree is reached; policies highlighted by a line are efficient at the root node.


Table 3
Efficient (sub-)policies for sample decision tree

Node   (Sub-)policy      f4,β* (β = 100 h) [h]   Cost [monetary units]
B      2.1               9.51                    100
B      2.2               1.02                    110
C      2.3               9.28                    120
D      2.5               3.92                    140
D      2.6               1.04                    150
D      2.7               12.84                   160
1.1    (2.1, 2.3)        9.32                    116
1.1    (2.2, 2.3)        7.62                    118
1.2    (2.5, 2.7)        7.04                    147
1.2    (2.6, 2.7)        5.17                    153.5
R      1.1, (2.1, 2.3)   9.32                    116
R      1.1, (2.2, 2.3)   7.62                    118
R      1.2, (2.5, 2.7)   7.04                    147
R      1.2, (2.6, 2.7)   5.17                    153.5

solution, the performance with respect to one objective (e.g. cost) can only be improved at the expense of decreasing the performance of another objective (e.g. partial expected downtime). The selection depends on the decision maker's preferences.
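The efficiency check at the root node is a non-dominance filter over the bi-objective values. In the sketch below, the four Table 3 policies carry their tabulated (f4,β*, cost) values; the remaining four are folded back from Table 2 in the same way (0.2/0.8 and 0.65/0.35 branch probabilities) and are shown rounded.

```python
# Sketch of the efficiency (non-dominance) check at the root node for
# min(f, cost); both objectives are minimized.
policies = {
    "1.1,(2.1,2.3)": (9.33, 116.0),
    "1.1,(2.2,2.3)": (7.63, 118.0),
    "1.1,(2.1,2.4)": (12.82, 124.0),
    "1.1,(2.2,2.4)": (11.12, 126.0),
    "1.2,(2.5,2.7)": (7.04, 147.0),
    "1.2,(2.5,2.8)": (7.12, 150.5),
    "1.2,(2.6,2.7)": (5.17, 153.5),
    "1.2,(2.6,2.8)": (5.24, 157.0),
}

def dominated(a, b):
    # b dominates a when b is no worse in both objectives and better in one
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

efficient = {n for n, v in policies.items()
             if not any(dominated(v, w) for m, w in policies.items() if m != n)}
```

The filter retains exactly the four policies listed at node R in Table 3; the other four are circled (eliminated) in Fig. 8.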
Fig. 8 shows the result of the pairwise minimization of the
cost with each of the measures of risk, taken one at a time.
For all three cases, all eight possible policies are
depicted by markers in the costrisk space for illustration.
Circles indicate that a policy has been excluded from
consideration by the time the root node is reached in the
process of averaging out and folding back. Efficient policies
are highlighted by a line connecting them.
In the case of minimization of the cost and the conditional
expected downtime f4,a , where second-order separability
has to be invoked in order to enable sequential optimization
[5], six out of eight policies have to be averaged out and
folded back to the root node. They are efficient with respect
to minimizing the expected cost, the cumulative probability
q1, and the negative cumulative probability q2, the two
cumulative probabilities being the constituents of the risk
of extreme events as measured by the conditional expected
value f4,a . The efficient policies with respect to the minimization of the expected cost and the conditional expected
value f4,a are known to be among these six policies [5,18].
In this example, only one of these six policies happens to be
efficient in terms of min(cost,f4,a ). The conditional expected
value for rare events f4,α is calculated using the appropriate formula from Ref. [5]; the limiting tail form for the Pareto distribution is the Fréchet distribution [11].
The policies identified as efficient with respect to min(cost, risk) differ from one measure of risk to the next.
Policy (1.1, (2.1,2.3)) is identified as efficient by all three
measures of risk under consideration in conjunction with
cost. As discussed before, a decision maker may favor
such a policy, whose evaluation is insensitive to the chosen
measure of risk, for implementation. However, policy (1.1,
(2.1,2.3)) may not be acceptable for the decision maker for
other reasons. For example, the decision maker may find the value of the partial expected downtime f4,β* associated with this policy to be too high.
An extension of the preceding example might include the
expected value of the downtime as an objective to be minimized in addition to the measures of the risk of extremes.

5. Need for future research


In some real settings, decision makers may initially have little basis for informed preferences for the types of multi-attribute judgements that are suggested by the methods described earlier. A challenge will be to select one or a few measures of the risk of extremes that can become meaningful to the decision maker in the particular context. A decision maker's experience with the various measures in evaluating real, familiar alternatives will perhaps help to build on and improve such a basis for informed preferences.
In many situations, the detailed shapes of the tails of
distributions are not known with precision. It will thus be
important in these cases to appropriately match the precision of the selected evaluation approaches to the precision
of the distributions describing the decisional alternatives. As
various measures of the risk of extremes become meaningful to a particular decision, the need for further refinement or
improvement of precision in estimates of the tails of the
distributions may emerge.
An important issue that merits further attention is that of
the repercussions of non-separability and the use of the k-th
order separability for finding an optimal policy. One can distinguish between (1) an optimal policy that minimizes
the overall non-separable measure of the risk of extreme
events (e.g. conditional expected loss for severe events),
i.e. the measure of performance as perceived at the root
node of the decision tree and (2) an optimal policy that
minimizes the measure of the risk of extreme events to-go
(i.e. the risk that remains until the final stage is reached) at every stage of
decision making. The question of which formulation to
adopt is different from the question of which measure of
the risk of extreme events to chooseany measure can be
optimized using either formulation (1) or (2)and is an
area of active research. Wakker [44] gives comprehensive
references to publications related to the topic. In particular,
Machina [45,46] and McClennen [47,48] advocate approach
(1), called "resolute choice" by McClennen. Others, such
as Wakker [44], suggest some form of updating; still others,
notably Hammond [49,50], criticise the idea of resolute
choice. Becker and Sarin [51] discuss similar issues, without coming to a final conclusion, in connection with a lottery-dependent (i.e. non-expected) utility model.
Approach (1) was assumed throughout this paper; see
Ref. [22] for more details and an example of the potential
impacts on the efficient policies of choosing (1) vs. (2). By
definition, following approach (1) results in the lowest
possible overall (i.e. at the root node of the decision tree)
conditional expected value. Therefore, a decision maker

H.I. Frohwein et al. / Reliability Engineering and System Safety 66 (1999) 6984

who implements (1) can be expected to perform better (or at least not worse) on average than one who implements (2).
Moreover, while it is straightforward to see that (1) optimizes the overall conditional expected loss, it is much less
clear what is achieved by (2). In particular, it appears questionable whether it is valid to update the objective from stage to
stage (e.g. conditional expected value to-go); it is not
obvious what a decision maker accomplishes by doing so.
In fact, Machina [45,46] argues that such an approach can be
interpreted as enforcing separability where it does not hold,
such as in the case of the conditional expected values f4,β and f4,α. Possibly, implementing (2) can be seen as an optimization of some function of the vector of the conditional
expected losses to-go of all stages, but the exact form of
such a function remains unclear. Further research into the
selection of an optimization strategy ((1) vs. (2)), particularly in the light of non-separable measures of the risk of
extreme events, appears to be worthwhile.
While it is suggested here that following the optimization
strategy (1) can be better justified than following (2), a decision maker who prefers the use of (2) can still take
advantage of the optimization concepts discussed in this
paper, in particular breaking up some measure of the risk
of extreme events into its constituents. The difference
between (1) and (2) lies only in how the information on
the risk of extreme events is used, i.e. at what point in the
decision tree an optimal decision is identified. However,
formulation (2) still requires overcoming the difficulties
due to non-separability and non-monotonicity of non-linear
measures of riskthis can be achieved with the methods
outlined before.

6. Summary
This paper provided decision makers with techniques to incorporate into decision-tree analysis a multitude of measures that illuminate different facets of the risk of extreme events. While linear measures are the easiest to
use, they may not always capture all the facets of the risk of
extreme events that the decision maker is concerned about.
It has been suggested that the class of k-th order separable
measures of risk can be interpreted in terms of k constituent
measures of risk. Apart from providing a new interpretation for the k-th order separable measures of the risk of extreme events (their optimization requires finding an optimal balance between their constituents), it has also been shown how the k-th order separability can be used to minimize these measures of risk in decision trees. Therefore, the
decision maker is able to incorporate these measures that are
of particular importance into a decision tree, either by
repeated analyses with changing measures of performance,
or by simultaneously optimizing more than one measure of
risk. The recent developments extend the reach of (multiobjective) decision-tree analysis to a wider class of
measures of risk; it is no longer limited to the traditional, linear measures. Moreover, the benefits of applying decision
trees with multiple and alternative measures of the risk of
extreme events have been demonstrated. In this way, decision-tree analysis is complemented by a more complete,
multi-dimensional risk assessment and management. It is
hoped that the presented material will advance the application of sequential, multi-objective decision making in the
face of the risk of extreme events.

References
[1] Raiffa H. Decision analysis: introductory lectures on choices under uncertainty. Reading, MA: Addison-Wesley, 1968.
[2] Corner JL, Kirkwood CW. Decision analysis applications in the operations research literature, 1970–1989. Operations Research 1991;39(2):206–219.
[3] Haimes YY, Li D, Tulsiani V. Multiobjective decision-tree analysis. Risk Analysis 1990;10(1):111–129.
[4] Frohwein HI, Lambert JH. Risk of extreme events in multi-objective decision trees, part I: severe events. Risk Analysis 1998, submitted for publication.
[5] Frohwein HI, Haimes YY, Lambert JH. Risk of extreme events in multi-objective decision trees, part II: rare events. Risk Analysis 1998, submitted for publication.
[6] Chankong V, Haimes YY. Multiobjective decision making: theory and methodology. New York: Elsevier, 1983.
[7] Cohon JL. Multiobjective programming and planning. San Diego, CA: Academic Press, 1978.
[8] Sawaragi Y, Nakayama H, Tanino T. Theory of multiobjective optimization. Orlando, FL: Academic Press, 1985.
[9] Steuer RE. Multiple criteria optimization: theory, computation, and application. New York: Wiley, 1986.
[10] Ang AH-S, Tang WH. Probability concepts in engineering planning and design, vol. 2. New York: Wiley, 1984.
[11] Castillo E. Extreme value theory in engineering. Boston, MA: Academic Press, 1988.
[12] Asbeck EL, Haimes YY. The partitioned multiobjective risk method (PMRM). Large Scale Systems 1984;6:13–38.
[13] Colson G, Zeleny M. Uncertain prospects rankings and portfolio analysis under the conditions of partial information. Cambridge: Oelgeschlager, Gunn and Hain, 1980.
[14] Zeleny M. Multiple criteria decision making. New York: McGraw-Hill, 1982.
[15] Lowrance WW. Of acceptable risk. Los Altos, CA: Kaufmann, 1976.
[16] Kaplan S, Garrick BJ. On the quantitative definition of risk. Risk Analysis 1981;1(1):11–27.
[17] Wang G. Theoretical foundations of the partitioned multiobjective risk method. Dissertation. Charlottesville, VA: Department of Systems Engineering, University of Virginia, 1997.
[18] Li D. Multiple objectives and non-separability in stochastic dynamic programming. International Journal of Systems Science 1990;21(5):933–950.
[19] Li D, Haimes YY. New approach for non-separable dynamic programming problems. Journal of Optimization Theory and Applications 1990;64(2):311–330.
[20] Li D, Haimes YY. Extension of dynamic programming to non-separable dynamic optimization problems. Computers and Mathematics with Applications 1991;21(11/12):51–56.
[21] Geoffrion AM. Solving bicriterion mathematical programs. Operations Research 1967;15:39–54.
[22] Frohwein HI. Risk of extreme events in multiobjective decision trees. Dissertation (in progress). Charlottesville, VA: Department of Systems Engineering, University of Virginia, 1998.
[23] Bier VM, Haimes YY, Matalas NC, Lambert JH, Zimmerman R. Assessment and management of risk of extreme events. Risk Analysis 1999, to appear.
[24] Haimes YY. Risk modeling, assessment, and management. New York: Wiley, 1998.
[25] Glickman TS, Sherali HD. Catastrophic transportation accidents and hazardous materials routing decisions. In: Apostolakis G, editor. Probabilistic safety assessment and management, vol. 2. New York: Elsevier, 1991.
[26] Haimes YY, Lambert JH, Li D. Risk of extreme events in a multiobjective framework. Water Resources Bulletin 1992;28(1):201–209.
[27] Karlsson PO, Haimes YY. Probability distributions and their partitioning. Water Resources Research 1988;24(1):21–29.
[28] Karlsson PO, Haimes YY. Risk assessment of extreme events: application. Journal of Water Resources Planning and Management 1989;115(3):299–320.
[29] Mitsiopoulos J, Haimes YY, Li D. Approximating catastrophic risk through statistics of extremes. Water Resources Research 1991;27(6):1223–1230.
[30] Sherali HD, Brizendine LD, Glickman TS, Subramanian S. Low probability-high consequence considerations in routing hazardous material shipments. Transportation Science 1997;31(3):237–251.
[31] Sivakumar RA, Batta R, Karwan MH. A network-based model for transporting extremely hazardous materials. Operations Research Letters 1993;13(1):85–93.
[32] Sivakumar RA, Batta R, Karwan MH. A multiple route conditional risk model for transporting hazardous materials. INFOR 1995;33(1):20–33.
[33] Erkut E. On the credibility of the conditional risk model for routing hazardous materials. Operations Research Letters 1995;18:49–52.
[34] Thompson KD, Stedinger JR, Heath DC. Evaluation and presentation of dam failure and flood risks. Journal of Water Resources Planning and Management 1997;123(4):216–227.
[35] Heimann SR, Lusk EJ. Decision flexibility: an alternative evaluation criterion. The Accounting Review 1976;51:51–64.
[36] Hurwicz L. Optimality criteria for decision making under ignorance. Cowles Commission Discussion Paper No. 370, 1951.

[37] Hosking JRM, Wallis JR. Regional frequency analysis. Cambridge: Cambridge University Press, 1997.
[38] Wald A. Statistical decision functions. New York: Wiley, 1950.
[39] Cramer H. The elements of probability theory. New York: Wiley, 1955.
[40] Markowitz H. Portfolio selection. Journal of Finance 1952;7(1):77–91.
[41] Markowitz H. Portfolio selection: efficient diversification of investments. New York: Wiley, 1959.
[42] Savage LJ. The theory of statistical decisions. Journal of the American Statistical Association 1951;46:55–76.
[43] Whitmore GA, Findlay MC. Stochastic dominance. Toronto: Lexington Books, 1978.
[44] Wakker PP. Are counterfactual decisions relevant for dynamically consistent updating under non-expected utility? Journal of Economic Literature (classification number D81) 1997, submitted for publication.
[45] Machina MJ. Dynamic consistency and non-expected utility models of choice under uncertainty. Journal of Economic Literature 1989;27:1622–1668.
[46] Machina MJ. Dynamic consistency and non-expected utility. In: Bacharach M, Hurley S, editors. Foundations of decision theory. Oxford: Blackwell, 1991.
[47] McClennen EF. Dynamic choice and rationality. In: Munier BR, editor. Risk, decision and rationality. Dordrecht: Reidel, 1988.
[48] McClennen EF. Rationality and dynamic choice: foundational explorations. Cambridge: Cambridge University Press, 1990.
[49] Hammond PJ. Consequentialism and the independence axiom. In: Munier BR, editor. Risk, decision and rationality. Dordrecht: Reidel, 1988.
[50] Hammond PJ. Consequentialist foundations for expected utility. Theory and Decision 1988;25(1):25–78.
[51] Becker JL, Sarin RK. Decision analysis using lottery-dependent utility. Journal of Risk and Uncertainty 1989;2:105–117.
