R O S S
Reliability, Safety, and Security Studies at NTNU

Address: N-7491 Trondheim
Visiting address: S.P. Andersens vei 5
Telephone: +47 73 59 38 00
Facsimile: +47 73 59 71 17

ISBN: 978-82-7706-228-1
Date: 2010-02-23
Signature: Marvin Rausand
Pages/append.: 105

Keywords (Norwegian): RISIKOVURDERING, AKSEPTKRITERIER, RISIKOMÅL
Keywords (English): RISK ACCEPTANCE, ACCEPTANCE CRITERIA, RISK METRICS
Preface
This report was written in the autumn of 2009 at the Norwegian University of Science and Technology (NTNU), Department of Production and Quality Engineering. For their devoted guidance, special gratitude is directed to Professor Marvin Rausand and Postdoc Mary Ann Lundteigen.
I also wish to thank Vivi Moe for allowing me to use her piece Tuppen og Lillemor on the front of this publication.
Abstract
The purpose of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Examining the strengths and pitfalls of central concepts, metrics and approaches, the thesis provides a comprehensive foundation for the setting and use of risk acceptance criteria. The findings are derived from integration and critique of pioneering and state-of-the-art literature.
Risk acceptance criteria are quantitative or qualitative terms of reference, used in decisions about acceptable risk. Acceptable risk means a risk level that is accepted by an individual, enterprise or society in a given context. Our willingness to accept risk depends on the benefits from taking the risk, the extent to which the risk can be controlled and the types of consequences that may follow. Acceptable levels of risk are hence never absolute nor universal, but contingent on trade-offs and contextual premises.
Fatality risk can be expressed by individual or societal risk metrics. To capture the distribution and totality of risk posed by a particular system, both individual and societal risk acceptance criteria are necessary. Suitability for decision support and communication, unambiguity and independence are qualities that should be sought. The main expressions of individual risk are IRPA and LIRA. IRPA expresses the activity-based risk of a specific or hypothetical person, and is particularly suited for decisions concerning frequently exposed individuals. While exposure is decisive to IRPA, LIRA assumes that a person is permanently present near a hazardous site. LIRA is thus location-specific and pragmatically reserved for land use planning.
Common societal risk metrics are FN-curves, FAR and PLL. FN-criterion lines uniquely distinguish between multiple- and single-fatality events, but are criticized for providing illogical recommendations. PLL is a simpler metric, particularly suited for cost-benefit analyses. Overall acceptance limits are seldom expressed by PLL, since exposure is not reflected. In contrast, FAR is defined by unit of exposure, enabling realistic comparison with predefined criteria. If injury frequency surpasses that of fatalities, a metric related to injury or ill health may provide the most proper criterion. All metrics are fraught with assumptions and difficulties, necessitating awareness amongst practitioners.
Various approaches have been developed for setting risk acceptance criteria. A distinction is drawn between fundamental principles, deductive methods and specific approaches. Utility, equity and technology offer principal criteria to be used alone or as building blocks. While utility-based criteria concern the overall balance of goods and bads to society, the principle of equity yields an upper risk limit that none of society's members should surpass. Technology-based criteria see acceptable risk as attained by the use of state-of-the-art technology, possibly at the expense of considerations of cost-effectiveness and equity. Amongst deductive methods for solving acceptable-risk problems are expert judgment, bootstrapping and formal analysis. There are two questionable assumptions underlying bootstrapping methods: that a successful balance of risks and benefits was achieved in the past, and that the future should work in the same way. Formal analyses avoid this bias towards the status quo, demanding explicit trade-off analyses between the risks and benefits of a current problem. The advantages of formal analysis are openness and soundness, while a pitfall concerns the difficulty of separating facts and values.
Specific approaches to setting risk acceptance criteria are based on a combination of fundamental principles and deductive methods. Most cultivated is the ALARP approach, capitalizing on the advantages of formal analysis and all principal criteria. Requiring risk to be as low as reasonably practicable, ALARP provides conditional rather than absolute criteria, uniquely capturing that risk acceptance is a trade-off problem. Problems are nevertheless identified, notably in resource intensiveness and the imprecise notion of gross disproportionality. Conceptually close, but practically far from ALARP, is ALARA. Whereas upper criteria represent the start of ALARP discussions, they serve as the endpoint in ALARA. Arguments of reasonableness are considered already built into the strict upper limits of ALARA. Strict criteria are provided also by the GAMAB approach, requiring new systems to be globally at least as good as any existing system. GAMAB can be seen as learning-oriented bootstrapping, but may reject developments on an erroneous standard of reference. While GAMAB is technology-based, the MEM approach uses the minimum IRPA of dying from natural causes as the reference level. Little impetus is given for reducing risk beyond this static requirement. Different from these risk-based approaches is the precautionary principle, intended for situations where great uncertainty makes comparison with a predefined metric meaningless. The precautionary principle has been attacked from many standpoints, but is concluded to offer a valuable guide in the absence of knowledge.
The extent to which risk acceptance criteria offer sound decision support is finally questioned. The various approaches differ with respect to consistency, practicality and ethical implications, and in the degree to which risk reduction is encouraged and risk acceptance reflected. The choice of metrics implicitly or explicitly affects how these issues are resolved. Interpreting risk and probability as subjective constructs is not seen to threaten the validity of risk acceptance criteria. What may cause a problem is regulators and practitioners understanding risk acceptance criteria as objective cut-off limits. The overall conclusion is that acceptance criteria offer sound decision support, but only if authors and users understand the assumptions and limitations of the applied metrics and approaches. There is a call for applied research on the role of the industries and the government in formulating and complying with risk acceptance criteria. As environmental consequences are omitted from the study, the lack of a sound basis for formulating environmental risk criteria urges further research.
Contents
1 Introduction
1.1 Background
1.2 Objectives
1.3 Limitations
1.4 Structure
2 Clarification of concepts
2.1 What is risk?
2.1.1 The meaning of risk
2.2 Defining risk
2.3 What can go wrong?
2.4 How likely is it that it will happen?
2.4.1 Classical theory of probability
2.4.2 The relative frequency theory
2.4.3 Subjective probability
2.5 If it does happen, what are the consequences?
2.6 Safety
2.7 Risk assessment
2.8 Risk acceptance criteria
2.9 Acceptable risk
2.9.1 One accepts options, not risks
2.9.2 Acceptability is not tantamount to tolerability
2.9.3 Factors influencing risk acceptance
2.9.4 Risk acceptance is a social phenomenon
5 Concluding discussions
5.1 Are risk acceptance criteria feasible to the decision maker?
5.1.1 Non-contradictory ordering of alternatives
5.1.2 Preciseness of recommendations
5.1.3 A binary decision process
5.1.4 Risk acceptance criteria simplify the decision process
5.2 Do risk acceptance criteria promote good decisions?
5.2.1 The interpretation of probability to risk acceptance criteria
References
1 Introduction
1.1 Background
Risk is ubiquitous. Of all the risks we face in everyday life, only a selection gets to preoccupy our worried minds. Some are unconsciously undertaken, others we are willing to live with, and yet a few provoke heated demonstrations or banning. While children of the postwar period were afraid of DDT and nuclear power plants, citizens of the new millennium are concerned with nanotechnology and global warming. Other risks have always been trivialized, like those of bicycling to work or pursuing the perfect tan. Risks do, in other words, differ in acceptability; through times, across people and situations. And when decisions involving risk are to be taken, risk acceptance is the measure.
The risk of swine flu has recently dominated the headlines of the daily press. At its onset this summer, people were faced with the choice of canceling a long-planned holiday. Following the pathological development, a topical decision problem has been whether to get vaccinated. Although pandemics are an affair of the state, these are ultimately personal decisions about individuals' willingness to live with the risk of swine flu. In contrast is another current decision problem, namely the governmental settlement on future development of oil and gas production in Lofoten, Norway. A variety of actors have expressed their opinions, disagreeing on the relative importance of state economy and worldwide energy scarcity, in comparison with environment, tourism and fishery preservation. The debate has been further clouded by imprecise factual statements, like Havforskningsinstituttet's claim that between 0 and 100 % of the stock of fry might be lost (Teknisk Ukeblad, 2009). How can a decision be made in this case? Part of the solution lies in the reply of the Department of the Environment, demanding comprehensive risk analyses to be performed. Risk analyses are widely used to support discussions related to industrial and societal developments. To evaluate the results of risk analysis, a term of reference is needed.
[Figure 1.1: Risk analysis results are evaluated against risk acceptance criteria (RAC) to support a decision about risk.]
Several laws and regulations prescribe the use of risk acceptance criteria for evaluating new or existing hazardous systems. Offering a level of comparison for the results of risk analysis, decisions are reached on the grounds that risk prospects shall not be unacceptably high. However, Norwegian authorities do not give any guidance on how to establish such criteria. In comparison, the UK Health and Safety Executive (HSE) is leading in the field, offering a consistent framework for practitioners to follow. The value of risk acceptance criteria has also been questioned, recently in a suite of papers by Terje Aven and his coworkers at the University of Stavanger. As implied in the Lofoten case and illustrated in Figure 1.1, decisions on risk involve complex opposites not determined by risk alone. A considerable influence on such discussions is the pioneering work of Fischhoff et al. (1981). While three decades have passed since they first evaluated decision methodologies for acceptable-risk problems, academic and pragmatic difficulties still remain. These are manifested in the somewhat questionable practice on the continental shelf, calling for enhanced knowledge of the fundamentals of formulating risk acceptance criteria.
1.2 Objectives
The purpose of this thesis is to discuss and create a sound basis for formulating risk acceptance criteria. From this overall goal, five lower-level objectives are deduced:
1. Give a description of the various approaches to setting risk acceptance criteria related to harm to people, and discuss their basis and applicability. Both individual and societal risk shall be covered.
2. Present and discuss the main concepts and quantities used to formulate risk acceptance criteria.
3. Give a description of approaches to setting environmental risk criteria.
4. Discuss conceptual problems related to risk acceptance criteria. This should include a discussion of the objective/subjective interpretations of probability, and also risk.
5. Compare the use of risk acceptance criteria in two or more selected areas. These shall include the Norwegian offshore oil and gas industry and maritime transport.
Following agreement with the supervisor, tasks 3 and 5 will not be covered in the thesis.
1.3 Limitations
In pursuing the overall goal, the thematic coverage is confined to three out of five objectives. This is partly due to the limited time frame of project execution, but also because of the thorough examination urged by the central objectives. The focus is thus restricted to harm to people, and fatalities in particular. Excluding the third task is unfortunate, since environmental criteria are required, but poorly understood, in the offshore and maritime industries. Due to the distinct nature of environmental risk, the reader should be aware that the findings are not directly transferable to environmental applications.
Disregarding the fifth task of performing a comparative study has pragmatic implications. Since no sectors have been explicitly examined, the findings are generic and decoupled from practical and contextual constraints. Paradoxically, this serves as an advantage as well as a limitation. On the positive side, experience transfer may be sought over a wide range of areas. On the other hand, this necessitates practical interpretations. A methodological weakness is furthermore induced, since sector-specific considerations lie implicit in the applied literature. While UK and Dutch contributions mainly concern nuclear power and land-based process industry, the offshore oil and gas industry is by and large the center of Norwegian researchers' attention. Although a generic focus is chosen, the thesis is thus knowingly biased towards the Norwegian offshore industry.
The study is purely theoretical, as its results are derived from integration and critique of pioneering and state-of-the-art literature. Risk acceptance is a wide concept to which many theorists have contributed. During the literature selection process, emphasis has been on gaining a fundamental understanding of key concepts, rather than presenting radical ideas or advanced formulas. No previous knowledge of the subject is therefore required of the reader. A final limitation owes to the diversity of contributions, leaving it neither possible, nor
1.4 Structure
Chapter 2 is devoted to clarifying the basic concepts central to this thesis. Understanding the concepts of risk, probability and risk acceptance is a prerequisite for creating a sound basis for formulating risk acceptance criteria. The meanings, definitions and implications of these and related terms are explored, particularly aided by Kaplan & Garrick (1981) and Fischhoff et al. (1981).
Subsequently, chapter 3 addresses the second objective by elaborating the main concepts and metrics used to formulate risk acceptance criteria. First, the concepts of individual and societal risk are introduced, followed by qualitative considerations in the choice of risk metrics. The main part follows with a thorough examination of characteristics, pitfalls and strengths of common individual and societal risk metrics. Central to the discussions on societal risk is the literature review of Ball & Floyd (1998), while the annex of NORSOK Z-013N (2001) provides useful assistance throughout the chapter. While the focus hitherto has been on fatality risk, a brief final section presents alternative metrics of risk acceptance.
In chapter 4, the first objective is pursued through the presentation of various approaches for deriving risk acceptance criteria. The fundamental principles of utility, equity and technology described in the R2P2 report of HSE (2001b) are first introduced. Subsequently, the three methods of Fischhoff et al. (1981) for solving acceptable-risk problems are presented: expert judgment, bootstrapping and formal analysis. Based on the presentation of these generic principles and methods, the specific approaches of ALARP, ALARA, GAMAB, MEM and the precautionary principle are examined. Due to its methodological prominence, most attention is devoted to the ALARP approach of HSE (1992).
Finally, chapter 5 raises a set of conceptual problems regarding the feasibility of risk acceptance criteria. The concluding discussion follows the thread of Aven and his coworkers (Aven & Vinnem, 2005; Aven et al., 2006; Aven, 2007; Abrahamsen & Aven, 2008), questioning the ability of risk acceptance criteria to provide sound decision support. First, issues of user-friendliness are addressed. The meaning of probability, ethics, risk reduction and risk acceptance to various formulations of risk acceptance criteria is problematized thereafter. Although this chapter explicitly addresses the fourth objective of the thesis, the reader should note that conceptual discussions form an integral part of each preceding chapter. By taking a panoramic view of these discussions, overall conclusions and recommendations for further work are finally given.
2 Clarification of concepts
Most of the terms central to this report are used in everyday life. The reader will therefore have an intuitive understanding of what risk, probability and risk acceptance mean. Unfortunately, this intuitive appeal yields inconsistencies and confusion if their understandings are taken for granted. The importance of properly defining risk concepts is stressed by Ale et al. (2009), who exemplify that many risk management frameworks fail to define the probability they are referring to. This is unfortunate, since probability can be interpreted in very distinct ways, leading risk assessments in different directions. Implicit interpretations are not only troublesome within the community of scientific risk assessment. Fischhoff et al. (1981) contemplate that misunderstandings between lay people and experts partly arise from inconsistent definitions of risk, calling for currently used definitions to be made explicit, assumptions to be clarified, and cases that push them to their limits to be identified.
Following the advice of Fischhoff and co-workers, this chapter is devoted to clarifying the focal concepts of risk, probability, safety and risk acceptance. The promise of clarification may appear ironic, as the examination shows that there is a wide range of interpretations offering different insights on the subject. This owes to what Breugel (1998) denotes a reductionist approach to risk, meaning that risk phenomena exhibit a variety of aspects that have each been studied in detail by engineers, economists, sociologists, psychologists, philosophers and so on. While some definitions are explicitly adopted in this report, other concepts are left undefined, clarifying that a problem can and should be seen from a variety of angles.
that risk is dependent on what you know and what you do not know, and is thus relative to the observer. Adams (2003) makes an important observation in that you gain knowledge of risk in different ways. While some risks are directly perceivable (car accidents), others are known through science (cholera), whereas a third group of virtual risks even escapes the agreed knowledge of scientists (global warming). The cornerstone in Adams' reasoning is that the meaning and management of risk depend on how knowledge about the future is inferred.
Interpretations of risk in science
Accepting that risk is a property of the future and distinct from hazards, most people agree that it does not exist in a physical state today. But from here, there is substantial disagreement on whether risk is an objective feature of the world or a thought construct only. As seen in section 2.4, this is closely related to how one interprets the concept of probability. Further to the extreme, some social scientists deny that risk can be quantitative. Following the reasoning of the philosopher Campbell (2005), this is an erroneous claim, as risk can at least be assigned comparative quantities. Some risks are clearly high (e.g. jumping off the Elgeseter bridge in Trondheim), while others are evidently lower (crossing the same bridge by foot). It is indeed a rightful claim that only some risks (if any) can be given precise quantities. But according to Campbell, this does not mean that a statement of low risk is any less quantitative.
Hovden (2003) summarizes four positions on risk in science:
Rationalists see risk as a real-world phenomenon to be measured and estimated by statistics and controlled by scientific management.
Realists interpret risk as objective threats that can be estimated independently of social processes, but may be distorted through frameworks of interpretation.
Constructionists claim that nothing is a risk in itself. What we understand to be a risk is a product of cultural ways of seeing.
Middle positions between realists and constructionists see risk as an objective threat that can never be seen in isolation from social and cultural processes.
Risk perception
Central to the realist and middle positions is the concept of risk perception, understood as subjective responses to hazard and risk (Breakwell, 2007). Among the factors influencing risk perception are voluntariness of exposure, immediacy of effects, personal control and catastrophic potential. Risk perceptions differ
Contingency
Renn (2008) concludes that common to all epistemological positions on risk is the contingency between possible and chosen action. If the future were independent of today's activities, the term risk would be meaningless. This explains why risk can never be zero, unless we stop performing the activity in question. But then another activity is initiated, from which yet another risk is introduced. Due to the contingent nature of our actions, accident risk means the possibility that an undesirable state of reality may occur as a result of natural events or human activities. Accepting this general conception, the next step is to explore how risk can be properly defined.
Uncertainty may be defined as something not certainly and exactly known (Webster, 1978). Following Aven (2003, p. 178), uncertainty is lack of knowledge about the performance of a system (the world), and observable quantities in particular. However, most practitioners speak of risk in situations where probabilities can be assigned, reserving uncertainty for situations where probabilities are undefined or uncertain (Douglas, 1985). In risk analysis, a distinction is made between two main types of uncertainty: aleatory (randomness/stochastic variations) and epistemic (scientific; due to our lack of knowledge about the world). While aleatory uncertainty is irreducible, the latter decreases with increasing knowledge (NASA, 2002).
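The distinction between the two uncertainty types can be illustrated with a small simulation sketch (not from the report; the failure probability of 0.3 is invented). Epistemic uncertainty about an unknown failure probability shrinks as observations accumulate, while the outcome of any single demand stays random.

```python
# Sketch: epistemic uncertainty (about the value of true_p) is reducible
# with data; aleatory uncertainty (the outcome of one demand) is not.
import random

random.seed(1)
true_p = 0.3  # the "real" failure probability, unknown to the analyst

# Epistemic: the analyst's estimate of true_p improves with more observations.
for n in (10, 1_000, 100_000):
    failures = sum(random.random() < true_p for _ in range(n))
    print(n, failures / n)  # estimate converges towards 0.3

# Aleatory: even with true_p known exactly, whether the NEXT single demand
# fails remains random and cannot be predicted.
```

The point is that collecting data narrows the estimate of the parameter, but never removes the randomness of individual outcomes.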
[Figure 2.1. Risk curves (adopted from Kaplan & Garrick (1981)): probability (log scale) plotted against consequence, shown as a family of percentile curves, e.g. P = 0.1 and P = 0.9.]
To answer the questions, Kaplan and Garrick suggest making a list as in Table 2.2. Each line, i, is a triplet of a scenario description, s_i, the probability, p_i, and the consequence measure, x_i, of that scenario. Including all imaginable scenarios, the table is the answer to the three questions and is therefore the risk. Formally, risk is defined as a set of triplets:

R = {<s_i, p_i, x_i>}    (2.1)

R = {<s_i, p_i(φ_i), p_i(x_i)>}    (2.2)
p_i(φ_i) and p_i(x_i) are the probability density functions for the frequency and consequence of the ith scenario. Arranging the scenarios in order of increasing severity and damage, (2.1) and (2.2) can be plotted as a single curve or a family of curves respectively, as shown in Figure 2.5. Kaplan and Garrick stress that it is not the mean of the curve, but the curve(s) itself, which is the risk. While indicating that a curve can be reduced to a single number, this is prescribed with caution. In their opinion, a single number is not a big enough concept to communicate risk, as is often done by claims of risk being probability times consequence. Since this equates low-probability/high-damage scenarios with high-probability/low-damage scenarios, Kaplan and Garrick prefer risk to be described as probability and consequence. The latter is adopted for this report, partly because it is most common in current risk analysis practices (Rausand & Utne, 2009). More important is that risk acceptance is far more complex than a compound number of probability and consequence can tell, which is discussed in section 2.8. By illustration, the acceptability of traffic accidents and a nuclear core meltdown is hardly the same, even though the mean values of the risk curves might equate the two.
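The objection to "probability times consequence" can be made concrete with a small sketch (the two scenario triplets and their numbers are invented for illustration, not taken from the thesis): two very different risk profiles collapse to the same expected loss.

```python
# Each risk is represented as a list of Kaplan & Garrick triplets
# (scenario description s_i, probability p_i, fatalities x_i).
risk_a = [("frequent minor accident", 1e-2, 1)]    # high probability, low damage
risk_b = [("rare major accident", 1e-5, 1000)]     # low probability, high damage

def expected_loss(triplets):
    """Collapse the triplet set to a single number: sum of p_i * x_i."""
    return sum(p * x for _, p, x in triplets)

# Both collapse to roughly 0.01 fatalities per year, although their
# acceptability is hardly the same.
print(expected_loss(risk_a), expected_loss(risk_b))
```

This is exactly why Kaplan and Garrick insist that the curve, not its mean, is the risk: the single number hides the difference between the two profiles.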
Aven's (2003) conception of risk can be questioned on a similar basis, namely because a single number is believed to represent uncertainty about the future, as well as our calculations of it. With reference to Adams (2003), this may not be a problem for risk perceived directly or through known science, but is likely to perplex virtual risks of great epistemic uncertainty. Also the consequence-oriented definition of Klinke & Renn (2002) is inadequate for our purpose, as it may distort the trade-off considerations that section 2.8 shows characterize acceptable-risk problems. Wu & Apostolakis (1990) see a major problem of non-probabilistic theories in the lack of a rational rule for combining risk information. As pointed out by Aven (2009), such definitions cannot assist in concluding whether a risk is high or low compared to other options. Kaplan and Garrick's idea of risk as a set of triplets therefore provides the basis for this report, calling for an examination of its three constitutional elements.
Some standards use the terms accidental event (NORSOK Z-013N, 2001), unwanted event (NS 5814, 2008) or initiating event. As there seem to be no generally accepted terms or definitions, the process of properly and uniquely defining identified events is likely to be clouded.
[Figure: hazards leading to an accidental event (AE) with probability Pr(AE), with barriers limiting the consequences.]
manner has one fundamental weakness, namely that unidentified scenarios lead to underestimations of risk. Referring to an accident study showing that less than 60% of the scenarios were foreseen, Breugel (1998) notes that the most crucial part of risk analysis concerns the identification of accident scenarios. This difficulty is recognized by Kaplan & Garrick (1981), who admit that since the number of possible scenarios is in reality infinite, a listing of scenarios will always be incomplete. According to Kaplan and Garrick, this inherent weakness may be overcome by introducing an other-category of unidentified scenarios, s_(N+1). The set of scenarios is thus logically complete, allowing one to compensate for the residual risk posed by unknown scenarios. Whether this is a satisfactory counterargument may be questioned, as research has shown that the main uncertainties in risk analysis are still related to the (in)completeness of identified events (HSE, 2003b).
2004), others struggle to understand how one can repeatedly lose at Yatzy given the same odds as the opponents.
The meaning of probability can be sought from three main standpoints: the classical theory, the relative frequency theory and the theory of subjective probability. About two decades ago, these were under considerable scrutiny regarding the interpretation of probability in risk analysis, exemplified in the academic correspondence between Watson (1994) and Yellmann & Murray (1995). The underlying question of these debates was whether probability, and hence risk, is an objective feature of the world, and the implications of this for risk analysis. Subsequently, researchers have concluded that what is important is not which school of thought you follow, but that you choose the interpretation that best fits your purpose (Vatn, 1998).
2.4.1 Classical theory of probability
Up until the twentieth century, probability was by and large interpreted according to the classical theory, developed by mathematicians like Pascal and Laplace between 1654 and 1812 (Watson, 1994). Following this theory, probability is an objective property, derived from a set of equally likely possibilities and/or symmetrical features. The symmetrical properties of a die yield a 1/6 probability of throwing a 6, while drawing an ace of spades and a nine of clubs from a pack of cards each has the probability 1/52. Given a set of equally likely entities, Pr(A) is inferred by counting the proportion of favorable outcomes, N_A, amongst the total set of possible outcomes, N:

Pr(A) = N_A / N    (2.3)
The probability Pr(A) of an event is given a priori, with no need for experimentation (Papoulis, 1964). According to Watson (1994), this is a satisfactory concept in games of chance, but invalid for situations not fulfilling the assumption of uniform possibilities. As this is certainly not true for either the output or input probabilities of risk analysis, Watson rejects the classical theory as a basis for interpreting risk analysis results. Also Yellmann & Murray (1995) agree that the classical theory is too narrow a viewpoint for analyzing accident risk. From a generalist perspective, Papoulis (1964) criticizes the theory for being circular, as it in its own definition makes use of the concept being defined; concluding that equally likely means equally probable. Pointing to the not so obvious equal possibilities of giving birth to a boy and a girl, Papoulis further accuses the classical theory of implicitly making use of the relative frequency interpretation of the following section. In conclusion, one can say the classical theory is of historical interest, but that its current use is limited to a small group of problems, of which accident risk is not one.
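The counting definition Pr(A) = N_A / N can be sketched for a game of chance (a two-dice example chosen for illustration; it is not from the thesis):

```python
# Classical probability by enumeration: the 36 ordered pairs of two fair
# dice form the set of equally likely outcomes, N. Pr(A) is the fraction
# of favorable outcomes, N_A.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))   # N = 36 equally likely pairs
favorable = [o for o in outcomes if sum(o) == 7]  # N_A: pairs summing to 7

pr_sum_is_7 = len(favorable) / len(outcomes)      # 6/36
print(pr_sum_is_7)
```

As Watson notes, this a priori counting works only because every outcome in the sample space is, by symmetry, equally likely; it offers no foothold for the asymmetric events of accident risk.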
the long run. In this sense, a claim of being 92.7 % sure that a decision is right means that the fraction of right decisions in the long run is 0.927. Kaplan & Garrick (1981) acknowledge this fractional meaning, while stating that probability is a concept distinct from observed frequency. In their view, studying statistical frequencies is the science of handling data, while probability is the art of handling lack of data. Owing to this, they suggest relative frequencies assigned by thought experiments, without ever having to perform a physical experiment. As this approach, named probability of frequency, is conceptually far from the original intention of Von Mises, it is not further discussed. What it does illuminate is that how probability is understood and the way it is derived are not necessarily the same thing.
2.4.3 Subjective probability
In the preface of De Finetti's (1974) groundbreaking contribution to the theory
of probability, the following thesis is set in capital letters:

PROBABILITY DOES NOT EXIST.

The sentence captures a radically different interpretation of probability: the
subjective, or Bayesian, school of thought. De Finetti postulates that probability
is not endowed with any objective existence. Rather, probability is a subjective
measure of degree of belief, existing only in the minds of individuals. Being
subjective, it follows that different people can assign dissimilar probabilities to
the same event. This does not imply that probabilities are meaningless in the
opinion of De Finetti; it is perfectly meaningful for an individual to express his
belief in the truth of an event with a number between 0 and 1. Neither does it
mean that the rules of probability are invalid; the numbers associated with two
events of different likelihood must still obey the axioms of Kolmogorov. What
it does mean is that probability is conditioned on an individual's current state of
knowledge. As new knowledge is gained, individuals update their probabilities,
intuitively or formally by Bayes' theorem. This is not done in pursuit of a true
objective probability, but as a means of strengthening one's own degree of
belief.
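The updating step can be made concrete with a minimal sketch of Bayes' theorem. All numbers below are hypothetical illustrations chosen for the example, not values from the text.

```python
def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
    """Posterior degree of belief by Bayes' theorem:
    Pr(H | E) = Pr(E | H) * Pr(H) / Pr(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical numbers: prior belief 0.3 in hypothesis H, Pr(E | H) = 0.8,
# and Pr(E | not H) = 0.2, giving Pr(E) = 0.8*0.3 + 0.2*0.7 = 0.38.
prior = 0.3
likelihood = 0.8
evidence = likelihood * prior + 0.2 * (1 - prior)
posterior = bayes_update(prior, likelihood, evidence)
print(round(posterior, 3))  # 0.632
```

Observing evidence that is more probable under the hypothesis than against it raises the degree of belief from 0.3 to about 0.63, which is exactly the kind of formal update the subjective school describes.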
Weather forecasting is a typical situation where subjective probabilities apply. The probability of sunny weather can neither be claimed from symmetrical
properties, nor from repeated observations of the past. Instead, the meteorologist
bases her forecasts on professional know-how and complex analyses, constantly
updating prior knowledge in search of a strengthened degree of belief. This
does not mean that she cannot use weather frequency data as a source of
[Figure: event tree relating an accidental event (AE) to consequences C1, ..., Cn with probabilities P1, ..., Pn]
2.6 Safety

Having defined risk and its vital elements, one can rhetorically flip the coin and
ask what constitutes safety. Intuitively, safety is understood as freedom from
harm, or as the opposite of risk: the lower the risk, the greater the safety. Seeking
a general conception of safety, Möller et al. (2006) find that the concept is
under-theorized. Analyzing the relationship between safety and risk, uncertainty
and control, they conclude that safety is more than the antonym of risk. Of
underrated importance is epistemic uncertainty, as many feel safer given near
certainty than in less certain situations involving lower risk. Another finding
is the distinction between absolute and relative concepts of safety. While the
former implies that the risk of harm has been eliminated, the latter means
that risk has been reduced or controlled to a certain level. The philosopher Næss
(1985) engaged early with this distinction, claiming that a Utopian search
for absolute safety will necessarily conflict with individuals' quality of life.
Möller and his coworkers put the ethical and unattainable aspects of absolute
safety aside, focusing on the different stringency of the two concepts. Suggesting
that absolutely safe is the most stringent condition of safety, relatively safe is
reserved for situations below this benchmark. Below the lowest level of safety
that is socially acceptable, it is misleading to use the term safe. But how is such
a level constructed? The answer of Möller et al. is that it is more than some
opposite of an acceptable risk level; one is not necessarily safe just because the
risk is acceptable. Aven (2009) follows this thread, while countering that
safety is the antonym of risk if the latter is defined in a broad sense. Safe can
then be defined by reference to acceptable risk, and acceptable risk can again
be rephrased as acceptable safety, as in the ISO/IEC Guide 51 (1999, p. 2)
definition of safety: freedom from unacceptable risk. Notwithstanding this,
the guide advises against using the words safe and safety, on the grounds
that they convey no useful extra information. What is more, the reasoning of
Aven rests on his own definition of risk, which was presented and discarded
in section 2.2. Although discussions on acceptable risk are often framed as a
question of how safe is safe enough, the fuzzy notions of safe and safety are
used with caution in this report.
Related to safety is the term security, restricted to harm from intentional
acts like terrorism, sabotage or violence. Security is a broad concept, originally
concerned with military and political threats to state sovereignty (Barry et al.,
1998). Central to security is the concept of threat agents, meaning actors
having the intention and capacity to inflict damage on a vulnerable object
(Salter, 2008). As security risks are less tangible and predictable than those of
physical hazards, they have hitherto been given minimal attention within standards
and research on risk acceptance criteria.
[Figure: the risk assessment process — defining the framework and establishing risk acceptance criteria (RAC); risk analysis (initiating the analysis with problem and objectives formulation, organisation of work, choice of methods and data sources, system description, identification of hazards and unwanted events, analysis of causes and probability, analysis of consequences, risk description); risk evaluation (comparison with risk acceptance criteria, identification of risk-reducing measures and their effect, documentation and conclusions); risk treatment]
[Figure: a system with risk R = f{Pr, C}; if R exceeds the acceptance criterion, Pr or C is reduced and the risk re-evaluated; otherwise the risk is accepted]
Figure 2.5. Principal illustration of risk acceptance criteria (adapted from Breugel (1998))
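The accept/reduce loop of Figure 2.5 can be sketched in a few lines. The function below is a hypothetical illustration written for this report: the multiplicative `reduction` factor stands in for whatever concrete risk-reducing measures are applied, and all numbers are made up.

```python
def evaluate_risk(pr: float, c: float, rac: float, reduction: float = 0.5,
                  max_iter: int = 50):
    """Iterate the accept/reduce loop of Figure 2.5: while the risk
    R = Pr * C exceeds the acceptance criterion RAC, reduce Pr
    (here by a hypothetical multiplicative factor) and re-evaluate."""
    steps = 0
    while pr * c > rac and steps < max_iter:
        pr *= reduction   # apply a risk-reducing measure
        steps += 1
    return pr * c, steps

# Hypothetical numbers: initial Pr = 1e-3 per year, consequence 10,
# acceptance criterion RAC = 1e-4 per year.
risk, steps = evaluate_risk(pr=1e-3, c=10.0, rac=1e-4)
print(risk <= 1e-4, steps)  # True 7
```

The sketch makes the figure's logic explicit: acceptance is a comparison, and risk reduction is repeated until the comparison succeeds (or the options are exhausted).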
Risk acceptance criteria need not be quantitative (NS 5814, 2008). According to NSW (2008), it is essential that certain qualitative principles are adopted,
the terms of societal and individual concerns, expressing impacts on society and
on things individuals value personally. While the government sees the benefits of
electricity generation and employment, these may, for neighbors of the plant, be
minor compared to the risk of nuclear accidents. Balancing individual and governmental
concerns is therefore fundamental when developing national policies on risk
acceptance (NSW, 2008).
2.9.4 Risk acceptance is a social phenomenon

Douglas (1985) asserts that equity and personal freedom are moral determinants of risk acceptability. If risks, benefits and control measures are unevenly
distributed, risk acceptance is likely to be low. Her arguments are rooted in the belief that risk acceptance constitutes the often neglected social dimension of
risk perception. Although risk acceptance is related to the factors influencing
individual perception of risk, it is by and large culturally determined through
social rationality and institutional trust:

The question of acceptable standards of risk is part of the question of
acceptable standards of living and acceptable standards of morality and
decency, and there is no way of talking seriously about the risk aspects
while evading the task of analyzing the cultural system in which the
other standards are formed (Douglas, 1985, p. 82).
The former Soviet Union serves as an example. The totalitarian government's
concealing of the Chernobyl accident amplified institutional distrust, which
in turn lowered public acceptance of nuclear risk (Breakwell, 2007). A less
extreme manifestation is the differently assumed risk aversion in the UK and
the Netherlands, which is discussed in section 3.4. Risk aversion, understood
as disproportional repugnance towards multiple-fatality events, is central to
risk acceptance (Ball & Floyd, 1998). The contrast between traffic accidents
and a core meltdown serves as an illustration; being risk averse implies that a
given number of traffic fatalities distributed over many accidents is accepted
over an equal number of deaths in one nuclear accident. Risk aversion and
cultural preferences vary within regions or cities, as well as between countries
(Nordland, 2001).
Closing this presentation, one can conclude that risk acceptance is a complex
issue, going beyond the estimation or physical magnitude of risk. As such,
decisions on risk are fraught with difficulties of a rational, moral and political
character. Fischhoff et al. (1981) address five complexities of acceptable-risk
problems:

Uncertainty about problem definition: Is the decision explicit? What is the
hazard, what are the consequences and the possible outcomes? The outcome of a
decision may already be determined by the ground rules.
Difficulties in assessing the facts: How are low probabilities assessed and
expert judgment applied? The treatment of factual uncertainties may prejudice the conclusion.

Difficulties in assessing the values: How are labile values confronted or
inferred? If people are asked to express their opinion, uncertainties are introduced, as
they may not be aware of their own values.

Uncertainty about the human element: What is the accuracy of laypeople's
perceptions, the fallibility of experts and the rationality of decision makers?
When assumptions about the behavior of experts, laypeople and decision
makers go unrecognized, they can lead to bad decisions and distort the
political process.

Difficulties in assessing decision quality: How much confidence do we have
in the decision making process? An approach to acceptable-risk decisions
must be able to assess its own limits.
The five complexities offer valuable insights practitioners should keep in mind
when making decisions about risk. Evaluating risk by a predefined set of acceptance criteria pivots on the contradiction of seeking a rational and objective
decision criterion in a matter that is utterly contextually contingent. When formulating risk acceptance criteria, it is thus paramount to recognize that their
purpose is to aid practical decision making on risk. This calls for a careful
examination of how criteria are expressed and the manner in which they are
set, which are the topics of the following two chapters.
3
Expressing risk acceptance criteria
3.1 Introduction
For risk acceptance criteria to be operational, a means of describing risk levels is required. The importance of choosing an adequate expression of risk
acceptance is stressed by Holden (1984), warning that improper metrics produce anomalous conclusions. Flipping the coin, harmonization of well-chosen
decision parameters facilitates a consistent, systematic and scientific decision
making process, as advocated by Skjong et al. (2007).

The fundamentals of commonly used metrics are discussed in this chapter.
First, the concepts of risk metrics, individual risk and societal risk are introduced,
followed by a briefing on aspects to consider when choosing metrics for the
expression of risk acceptance criteria. Central metrics are presented thereafter,
with emphasis on underlying assumptions, areas of application, strengths and
fallacies.
3.1.1 Risk metrics

Risk can be expressed in multiple ways, relating to the spectrum of consequences and the format of presentation. Consulting NORSOK Z-013N (2001),
risk criteria range from qualitative matrices and wordings to quantitative metrics. Due to their practical prominence, the latter form the focus of this report.
Baker et al. (2007, Appendix H-1) define risk metric as:

A key performance indicator used by managers to track safety performance; to compare or benchmark safety performance against the performance of other companies or facilities; and to set goals for continuous
improvement of safety performance.

The notion of key performance indicator (KPI) indicates that risk metrics describe safety performance, which one is able to measure after a period has
involvement, the notions of employees, third and fourth parties are key metadata
to individual risk in particular (Pasman & Vrijling, 2003). Similarly, residential,
sensitive and transient populations are relevant constructs when putting a
figure on societal risk (HSE, 2009). It should be noted that individual and societal risk are distinct from, and only partly related to, the notions of individually
and socially acceptable levels of risk introduced by Starr & Whipple (1980).
of precision. Since uncertainty increases with the required level of detail, uncertainty considerations might contradict a criterion's score on the requirement
of suitability for decision making. The standard presupposes that risk metrics
reflect the approach to risk analysis and are consistent with previous use within
the company.
3.2.2 Pragmatic considerations

Still consulting NORSOK Z-013N (2001), one reads that the intended use
and decision context shall be considered when choosing risk acceptance criteria. Pragmatic considerations relating to life cycle phase, systems or activities
strongly influence the feasibility of risk metrics. Whether the acceptance criteria
facilitate decision making on risk-reducing measures or enable comparison of
overall risk levels, their applicability changes across situations. For example, the
different contexts of deciding on detailed design solutions in the engineering
phase, and broadly comparing two alternative field developments in an early
concept study phase, constrain the analysis and evaluation of risk differently.
Leaving the realm of the offshore industry, it is tempting to generalize
factors of pragmatic importance. Rather than focusing on the practical use of
risk acceptance criteria, HSE (1992) is concerned with the subjects of interest:
the hazard and those at harm. According to HSE, criteria shall be chosen based
on a characterization of the hazard in question, the nature of harm (whether
fatalities are prompt or delayed) and characteristics of the populations at risk.
Holden (1984) similarly calls for an adequate description of the particular risk
patterns. Such a description may be simple or complex, but shall capture both
the totality and the distribution of risk. The observant reader may notice that the
totality of risk prescribes the use of societal risk metrics, whilst the distribution
of risk is best captured by individual risk metrics.
3.2.3 Past and future observations

Risk metrics are often derived from historical data, based on averages of previous periods and assuming constant trends into the future (Vinnem, 2007). Such an
approach is justifiable when the purpose is to monitor trends in risk levels, but
runs into philosophical difficulties when projecting future levels of acceptable
risk. In the literature, it is seldom specified whether one is to use the predicted
number of occurrences (a parameter) or the historically measured number of
occurrences (an estimate) in the expression of risk acceptance criteria. This
is problematic, because the two quantities rely on different assumptions regarding future exposure and contextual premises. Remembering the fallacies
of frequency-based approaches, both risk levels and associated benefits are
destined to change along with the population at risk. Consequently, it is questionable whether the past is a legitimate predictor of future risks and their
[Figure: elements of a quantitative risk analysis — effect assessment (flammable, explosive, toxic) and initial discharge (process failures, material properties, storage and process conditions)]
& Vrijling, 2003). Common practice in the UK and the Netherlands is the use
of safety distances, prohibiting accommodation of vulnerable objects within
certain contours. Typically, zones for residential housing are set for iso-risk
contours with LIRA lower than 10^-6 (Bottelberghs, 2000). In the UK, LIRA
is coupled with the concept of dangerous dose, advising against homes being
built if the probability of receiving a chemical dose leading to severe distress
is greater than 10^-5 per year (HSE, 1992). Since dangerous dose is an intricate
concept stemming from the discipline of toxicology rather than that of risk
analysis, it is not further examined.
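The zoning practice above amounts to a simple threshold test per location. The sketch below is a hypothetical illustration of the 10^-6 per year residential contour cited from Bottelberghs (2000); the location names and LIRA values are invented.

```python
RESIDENTIAL_LIMIT = 1e-6  # per year, Dutch residential iso-risk contour

def permits_housing(lira_per_year: float) -> bool:
    """True if the localized individual risk at a location lies
    strictly below the residential zoning contour of 1e-6 per year."""
    return lira_per_year < RESIDENTIAL_LIMIT

# Hypothetical LIRA values at candidate locations around a plant:
locations = {"A": 5e-6, "B": 8e-7, "C": 1e-6}
print({name: permits_housing(v) for name, v in locations.items()})
# {'A': False, 'B': True, 'C': False}
```

Note that a location exactly on the contour (C) fails the strict inequality; whether the boundary itself is acceptable is a convention the regulator must fix.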
Evaluating LIRA

As NORSOK Z-013N (2001) seemingly treats IRPA and LIRA under the same
notion of IR, the strengths and weaknesses of LIRA may be assumed similar to
those of IRPA. Such a conclusion should not be drawn without reconsidering
the distinguishing features of LIRA in light of the NORSOK Z-013N (2001)
requirements. Being a localized risk metric, it scores relatively high on the
aspect of precision in decision making support, as it by definition is concerned
with particular areas of an installation or site. LIRA may be unambiguously
defined with clear system limits, due to its stringent assumptions and its inherent focus on physical boundary limits. Whether ambiguities are introduced
through averaging is pragmatically conditioned on the particular risk, the area
and its inhabitants. The unrealistic assumption of a person spending a whole
year constantly at a particular point yields little confidence in the risk level
a person actually faces. Nevertheless, the metric is relatively easy to grasp for
non-experts, owing to the simplified assumptions and the visual aid of iso-risk
contours on maps.
3.4.1 FN-curves

An FN-curve is basically a plot showing the frequency of events killing N or
more people, as shown in Figure 3.3. It can be used for presenting accident
statistics and risk predictions, as well as for drawing criterion lines for acceptable
levels of societal risk. Mathematically, it is derived from the commonly used
expression of risk as a product of frequency and consequence, denoted in number of fatalities per year. Perhaps counter-intuitively, the fatalities need not be integers,
since they are probabilistically generated (Ball & Floyd, 1998). Graphically, the
curve is presented on double-logarithmic axes, due to the
wide range of possible values of high-consequence/low-probability risk (Evans
& Verlander, 1997).
There is a distinction between FN- and fN-curves that should be clarified.
While the former expresses the cumulative frequency of N or more fatalities,
an fN-curve represents the frequency of accidents having exactly N fatalities.
As fN-curves are not very informative and rarely used for expressing risk acceptance criteria (HSE, 2009), their relationship with FN-curves is not further
examined. For a thorough discussion on the subject, the reader may consult
Evans & Verlander (1997). Due to their numerical distinction, and since NORSOK Z-013N (2001) uses the notion of fN when obviously speaking of cumulative probabilities, there is a call for standardization of terminology to prevent
erroneous criteria being drawn.
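The FN/fN distinction is easy to make precise in code: the cumulative curve is obtained by summing the exact-N frequencies from N upwards. The function and the accident frequencies below are hypothetical illustrations, not data from the text.

```python
def fn_curve(f_exact: dict) -> dict:
    """Convert an fN-representation (frequency of accidents with exactly
    n fatalities) into the cumulative FN-representation (frequency of
    accidents with N or more fatalities): F(N) = sum of f(n) for n >= N."""
    sizes = sorted(f_exact)
    return {n: sum(f_exact[m] for m in sizes if m >= n) for n in sizes}

# Hypothetical accident frequencies per year:
f_exact = {1: 1e-2, 10: 1e-3, 100: 1e-4}
F = fn_curve(f_exact)
print(round(F[1], 4), round(F[10], 4), F[100])  # 0.0111 0.0011 0.0001
```

Mixing up the two representations when drawing a criterion line shifts the curve by the cumulative tail, which is exactly the kind of erroneous criterion the terminology confusion invites.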
Risk aversion in FN-criterion lines

Formulating risk acceptance criteria, a factor α is introduced to express risk
aversion:

R = F · N^α    (3.2)

Requiring R to stay below a constant level yields the FN-criterion line

F = const · N^(-α)    (3.3)

where -α constitutes the slope of the criterion line, as illustrated in Figure 3.4. Additionally, an anchor point (a fixed pair of consequence and frequency) is needed
to describe the crossing of the y-axis (Skjong et al., 2007). The literature review
of Ball & Floyd (1998) shows that deciding on the aversion factor is a disputed task. In the UK,
HSE prescribes a neutral slope of -1, in contrast to the Dutch government's
favoring of a risk-averse slope of -2. The rationale is that people are believed
to be more than proportionally affected by the number of fatalities, leaving the
acceptable frequency of an accident killing 100 people 100 times lower than that of one
killing 10 people, rather than the factor of 10 a neutral slope would imply. Compressing a complicated discussion, a neutral approach
is preferable, as one otherwise introduces hidden weighting factors making the
[Figure 3.3. FN-curve for road, rail and air transport 1967-2002 (adapted from HSE (2003b)); x-axis: number of fatalities, N]
decision making process opaque. This is mainly because the greater the aversion
factor, the stricter the criterion, and hence the regulation, will be, ruling out
ex ante the potential benefits of a proposal. Approaching the problem differently,
Linnerooth-Bayer (1993) sees the great problem not in which aversion factor
to use, but in how the public's aversive concerns are addressed.
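The practical difference between the UK and Dutch slopes can be illustrated with a short sketch of a criterion line through an anchor point. The anchor point below is a hypothetical illustration, not a value from any cited regime.

```python
def criterion_frequency(n: float, anchor_n: float, anchor_f: float,
                        slope: float) -> float:
    """Acceptable frequency of N or more fatalities on an FN-criterion
    line through the anchor point (anchor_n, anchor_f) with the given
    log-log slope: F(N) = anchor_f * (N / anchor_n) ** slope."""
    return anchor_f * (n / anchor_n) ** slope

# Hypothetical anchor point: frequency 1e-3 per year at N = 10.
neutral = criterion_frequency(100, 10, 1e-3, slope=-1.0)  # UK-style slope
averse = criterion_frequency(100, 10, 1e-3, slope=-2.0)   # Dutch-style slope
# At N = 100 the risk-averse line is ten times stricter than the neutral one.
print(round(neutral / averse))  # 10
```

Each extra unit of slope multiplies the stringency at N = 100 (relative to N = 10) by another factor of ten, which is the hidden weighting the text warns about.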
What is wrong with FN-criterion lines?

A numerical example of an FN-criterion is provided in HSE (2001b), combining
the slope of -1 with a fixed tolerability point of a yearly frequency of 0.0002
for single accidents killing more than 50 people. The FN-curve is applauded as
a helpful tool where there are societal concerns about multiple fatalities occurring in
one single event. The technique is also judged useful for comparing man-made
accident profiles with natural disaster risks deemed tolerable by society. This
claim is contested in HSE (2009) and Skjong et al. (2007). In summary, their
objections concern the lacking ability of FN-curves to allow meaningful comparison, telling nothing about the relative exposure and hazard characteristics
[Figure: FN-criterion lines with slopes -1 and -2, together with a calculated risk curve against N]
Figure 3.5. Case where the predicted risk exceeds the FN-criterion line in one area, while lying below it otherwise
Amongst the most cited antagonists of FN-criterion lines are Evans & Verlander (1997), raising two main objections. First, the criterion is accused of
prescribing unreasonable decisions, as a result of concentrating on just one extreme feature of a statistical distribution. Secondly, they judge the technique
illogical in a decision-theoretical sense, providing inconsistent recommendations if an identical risk picture is presented in different ways. Hereby
discarding the use of FN-curves for deciding on acceptable risk, the authors
Since the main difference concerns how averaging of risk is performed, the
variants are not discussed separately. It is, however, crucial to be aware of the applied
averaging in pragmatic evaluations of FAR.
Accounting for exposure

As FAR is expressed per time unit, one cannot add the contributions to FAR from
different activities, unless exposures are assumed equal or weighted relative to
each other (Vinnem, 2007). The element of exposed hours shall also suit the
system under consideration. For example, FAR is tailored to the civil aviation
industry by specifying the number of fatalities per 100 000 hours of flight
(Rausand & Utne, 2009).
Typical FAR values lie in the range of 1-30, making the metric fairly easy to grasp
for non-experts compared to risk metrics of very low probabilities (Rausand &
Utne, 2009). Vinnem (2007) reports offshore criteria of FAR = 20 for the most
exposed groups, and FAR = 10 for the total installation workforce. Requiring
a FAR value of less than 10 basically means that no more than ten fatalities
are acceptable during the lifework of approximately 1400 persons (Rausand &
Utne, 2009). If the exposure time, t_i, for each person is known, FAR can be
derived from PLL:

FAR = PLL / (Σ t_i) · 10^8    (3.6)
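Equation (3.6) can be sketched directly. The lifework figure of roughly 70 000 hours per person used below is an assumption introduced for this example (it makes 1400 lifeworks come to about 10^8 hours, consistent with the reading of Rausand & Utne above), not a number taken from the text.

```python
def far(pll: float, total_exposure_hours: float) -> float:
    """Fatal accident rate (eq. 3.6): expected number of fatalities
    per 10**8 exposed hours, FAR = PLL / (sum of t_i) * 1e8."""
    return pll / total_exposure_hours * 1e8

# Assumed lifework of ~70 000 hours per person; 1400 persons then
# accumulate close to 1e8 exposure hours, so ten expected fatalities
# over those lifeworks correspond to a FAR close to 10:
hours = 1400 * 70_000
print(round(far(10.0, hours), 1))  # 10.2
```

The normalization to 10^8 hours is what lets FAR values from systems with very different exposure be placed on the same scale, which PLL alone cannot do.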
Meaningful comparison

In contrast to PLL, FAR enables meaningful comparison across different solutions by taking exposure into account. Indeed, NORSOK Z-013N (2001) states
that FAR is the most convenient of all metrics in this matter. Due to their
situation-specific focus and limited averaging, group- and area-FAR are suited
for decisions on risk reduction. As such, confined FAR metrics may describe
both the totality of risk and distributional issues. However, being conceptually
linked to PLL and IRPA, FAR does not distinguish between small- and large-scale accidents. Puritan followers of HSE (1992) might thus accuse the metric
of expressing upscaled individual risk rather than societal risk. This problem
was recognized early by Holden (1984), claiming that, like most statistics-based
metrics, FAR essentially expresses average individual risk. For this reason, FAR
should not be used in isolation when multiple-fatality accidents are possible.
3.5 Other

The focus has hitherto been on metrics considering fatalities as the consequential
endpoint. There are several other expressions of risk, of which the reader
should have at least elementary knowledge.
[Figure: risk matrix with frequency classes from very infrequent to very frequent and consequence classes from small to catastrophic, divided into the regions: unacceptable; reduce risks as low as reasonably practicable; acceptable]
Figure 3.6. Risk matrix (adapted from Rausand & Utne (2009))
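A risk matrix like that of Figure 3.6 is essentially a lookup table. The cell-to-region assignment below is a hypothetical scoring rule invented for this sketch; actual matrices place their region boundaries cell by cell, and the placement varies by source.

```python
FREQ = ["very infrequent", "infrequent", "fairly frequent",
        "frequent", "very frequent"]
CONS = ["small", "medium", "large", "very large", "catastrophic"]

def matrix_region(freq: str, cons: str) -> str:
    """Classify a (frequency, consequence) cell as acceptable, ALARP
    (reduce as low as reasonably practicable) or unacceptable, using
    the sum of the class indices as a simple, assumed severity score."""
    score = FREQ.index(freq) + CONS.index(cons)
    if score <= 2:
        return "acceptable"
    if score <= 5:
        return "ALARP"
    return "unacceptable"

print(matrix_region("very infrequent", "small"))       # acceptable
print(matrix_region("very frequent", "catastrophic"))  # unacceptable
```

The sketch also exposes a known weakness of matrices: all risks within a cell are treated identically, however different their underlying magnitudes.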
[Figure: risk reduction bands labeled SIL 2 and SIL 3 leading down to the acceptable risk level]
Figure 3.7. Required risk reduction in terms of SIL (adapted from Marszal (2001))
SILs are, like loss of main safety functions, technical criteria suited
for decisions on technical measures related to safety instrumented systems.
According to IEC 61508 (1998), SILs are functional lower-level requirements
that shall comply with overall risk acceptance criteria. For this purpose, a layer
of protection analysis (LOPA) is useful, being a semi-quantitative method
for determining SIS performance requirements and evaluating the adequacy of
protection layers (BP, 2006).
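The mapping from required risk reduction to a SIL can be sketched using the low-demand-mode bands of IEC 61508, where each SIL corresponds to an order-of-magnitude band of average probability of failure on demand (PFDavg). The function itself is a sketch written for this report, not code from the standard.

```python
def sil_from_pfd(pfd_avg: float) -> int:
    """Safety integrity level for a low-demand-mode function, from the
    PFDavg bands of IEC 61508: SIL 1: [1e-2, 1e-1), SIL 2: [1e-3, 1e-2),
    SIL 3: [1e-4, 1e-3), SIL 4: [1e-5, 1e-4)."""
    bands = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3),
             2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}
    for sil, (low, high) in bands.items():
        if low <= pfd_avg < high:
            return sil
    raise ValueError("PFDavg outside the SIL 1-4 bands")

print(sil_from_pfd(5e-4))  # 3
print(sil_from_pfd(2e-2))  # 1
```

This is the sense in which SILs are lower-level requirements: an overall acceptance criterion fixes the risk reduction the instrumented function must deliver, and the band that reduction falls in fixes the SIL.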
4
Deriving risk criteria
4.1 Introduction
Formulating risk acceptance criteria is not a straightforward task. Not only can
the creator choose between a variety of risk metrics; there is also a spectrum
of principles and methods for deciding on the specific risk level. According
to Nordland (1999), each approach attempts to rationally determine objective
levels of acceptable risk. However, since it is difficult (if not impossible) to
calculate acceptable risk levels objectively, different societies have developed
distinct approaches. The establishment of risk acceptance criteria is therefore
strongly determined by historical, legal and political contexts (Hartford, 2009;
Ale, 2005).

In this chapter, the basis and applicability of various approaches to setting
risk acceptance criteria are discussed. A distinction is drawn between fundamental principles, deductive methods and specific approaches. Fundamental
principles represent ethical lines of reasoning, while deductive methods describe how criteria are derived. Specific approaches cover the applied reasoning in different regimes, based on combinations of fundamental principles and
deductive methods.
[Figure: three fundamental principles — Equity: all risks must be kept below an upper limit; Utility: risk acceptability is the balancing of costs and benefits; Technology: risk must be as low as that of a reference system]
4.2.1 Utility

Utilitarian ethics, philosophically rooted in the thoughts of Bentham and Mill, is
based on the presumption that one shall maximize the good and minimize what
is bad for society as a whole (Hovden, 1998). When deciding on the introduction
of a new technology, this implies the search for an optimum balance between
its totality of benefits and its negative consequences or costs. In the allocation of
risk reduction expenditures, a balance between the costs and benefits related to
a certain measure is sought (HSE, 2001b).

A central utilitarian assumption is that one shall look at the overall balance for society, rather than the balances experienced by individuals. Utility ethics
therefore provides a powerful line of argument in legitimizing technological
risk to society. The consequence is that some of its members might suffer on
behalf of society as a whole, as protested by Fischhoff (1994). HSE
(2001b) also recognizes this inherent deficiency, warning that a strict application of
utilitarian thinking imposes no upper bounds on acceptable risk, as only those
risks whose reduction is deemed cost-effective are reduced. This pinpoints both the weakness and the merits of utility-based criteria. Unconditional levels of acceptable risk cannot be
set (allowing unfair distribution of risk), since one always has to consider the
totality of goods and bads of a proposal (ensuring that the good is maximized).
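The utilitarian balancing of a risk-reducing measure can be sketched as a cost-effectiveness test: implement the measure if the implied cost per statistical fatality averted stays below a criterion value. The function, the criterion value and all figures below are hypothetical illustrations, not values from the text or from HSE.

```python
def cost_effective(cost: float, delta_pll: float, limit_per_life: float) -> bool:
    """Utilitarian test for a risk-reducing measure: accept it if the
    cost per averted statistical fatality (cost / reduction in PLL)
    does not exceed the assumed criterion value limit_per_life."""
    if delta_pll <= 0:
        return False  # no risk reduction, never cost-effective
    return cost / delta_pll <= limit_per_life

# A measure costing 2 million that reduces PLL by 0.2 expected fatalities,
# against an assumed criterion of 15 million per statistical fatality:
print(cost_effective(cost=2e6, delta_pll=0.2, limit_per_life=15e6))  # True
```

The sketch also makes the noted deficiency visible: nothing in the test bounds the residual risk itself; a very costly reduction of a very high risk can fail the test and leave the risk in place.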
4.2.2 Equity

The ethics of equity has its origin in the moral reasoning of Rawls, stating that
one shall prefer a society even if unaware of one's own position in it. Maximizing the minimum, priority is given to the least advantaged (Hovden, 1998).
The premise of equity-based criteria is that all individuals have unconditional
rights to a certain level of protection. Conversely, this yields a level of risk
above which exposure is acceptable for no member of society, encouraging standards and
the fixation of upper-limit tolerability criteria (HSE, 2001b). Owing to this, the use
of absolute risk acceptance criteria has its origin in the ethics of equity.
The claim that each member has an equal right not to experience high risk
stands in great contrast to the utilitarian principle. Amongst those favoring
an equity-based reasoning is Fischhoff (1994), stressing that a technology must
provide acceptable consequences for everyone affected by it. Fischhoff proposes
that a risk should be considered acceptable only if its benefits outweigh the
risks for every member of society. One can question whether this is a pure
equity-based reasoning, or whether maximizing individual benefits comprises utilitarian
elements on a personal level. However, its ethical core is still equity, as one
looks at the distribution of individual risks and benefits rather than the overall
balance.
Although the ethics of equity is intuitively appealing, it leads to ineffective application of technology and risk reduction measures if carried to its extreme
(Hovden, 1998). Equity-based criteria also promote consideration of unrealistic worst-case scenarios, distorting decisions through systematic overestimation
of risk.
4.2.3 Technology

The principle of technology assumes that an acceptable level of risk is attained
by using state-of-the-art technology and control measures (HSE, 2001b). Risk
acceptance criteria are set by comparison with systems following good practice.
An example is the notion of adequate safety in the EU machinery directives
(EU, 2006), requiring new machines to be at least as safe as comparable devices
already on the market. However, what constitutes a comparable technology is
disputed, as is further discussed in section 4.4.3. Another manifestation is the
concept of minimum SIL employed in the Norwegian offshore oil and gas sector,
assisting in the establishment of SIL requirements based on well-proven design
solutions (OLF 070, 2004).
Comprehensive
Logically sound
Practical
Open to evaluation
Politically acceptable
Compatible with institutions
Conducive to learning
4.3.1 Expert judgment

By letting the best available experts decide on what risk is acceptable, personal
experience can be integrated with professional practice and the desires of clients
or society as a whole. Although experts are involved in most decisions on
acceptable risk, what characterizes this method is judgment: professionals are
not bound by the conclusions of analysis, nor do they need to articulate their
rationale. Therefore, only the outcome of decisions is open to evaluation. A
typical situation of expert judgment is medical treatment, in which the doctor
is trusted to take decisions on behalf of the patient. Another example is
the setting of reliability standards for single components in a complex system.
Both situations represent routine decision making of relatively limited scope,
for which expert judgment has proven practical and cost-effective.

The method fails in comprehensive, irregular decisions, like whether to
go ahead with a new technology. This is partly because professionals often
lack the ability to grasp the whole problem, and partly because complex
situations call for political discussion. When controversial decisions are taken by
professionals, history has shown that they often serve as scapegoats, accused
of overemphasizing technical issues at the expense of public concerns.
4.3.2 Bootstrapping

Bootstrapping means using the levels of risk tolerated in the past as a basis
for evaluating future risks. There are two strong assumptions underlying this
approach: that a successful balance of risks and benefits was achieved in the past,
and that the future should work in the same way. The former is empirical,
while the latter is of a political character. Fischhoff et al. distinguish between
four bootstrapping approaches:

Risk compendiums compare different situations posing the same level of
risk. For example, Table 4.1 presents a selection of daily activities estimated to increase IRPA in any year by 10^-6.
Revealed preferences are reflected in the market behavior of the public, assuming that society has already reached an optimum balance between the
risks and benefits of any existing technology. A new technology is acceptable
if the associated risk does not exceed that of existing technologies whose
benefits to society are the same (Starr, 1969). In contrast to technology-based
criteria, benefits are explicitly considered. However, there is no consideration of how these are distributed, since market behavior does not reflect
the cost-benefit trade-offs of individuals.
Implied preferences are read from legal records. By identifying implicit risk-benefit trade-offs for existing hazards, acceptability standards are set for
new technologies. The central assumption is that law and regulatory actions
represent society's best compromise between the public's needs and the current economic and political constraints. The method is criticized for lacking
coherence and reinforcing bad practices, as laws may be context-dependent,
poorly written and hastily conceived.
Natural standards means deriving tolerable risk limits from exposure levels
of preindustrial times. Natural exposure is typically found through geological
or archaeological studies. Unlike the other bootstrapping methods, natural
standards are independent of a particular society and are therefore suited for
global environmental risk problems.
Common to all but the latter is a strong bias towards the status quo. Using past or
present risk as the reference level does not encourage future improvements. Historical records only provide indications of accepted, i.e. implemented, technologies,
telling nothing about whether the associated risks were judged acceptable by
the public. Another deficiency is that acceptability judgments are taken without
explicitly considering alternative solutions. Even when alternatives are compared, no guidance is provided
if both fail or both pass the comparison. All methods fail to consider cumulative
risks from isolated decisions.
The advantage of bootstrapping methods is their breadth. A broad spectrum
of hazards is considered, attempting to impose consistent safety standards
throughout society. The element of comparison also provides a risk number
that is simple to grasp and easy to deduce. Conversely, the weakness of
bootstrapping methods is their lack of depth. Decision problems are improperly
defined, decision rules imprecise and the outcomes unclear and poorly justified.
4.3.3 Formal analysis
Formal analyses provide explicit recommendations on the trade-offs between
risks and benefits in acceptable-risk problems. They are intellectual technologies
for evaluating risk, based on the premise that facts and values can be effectively and coherently organized (Fischhoff et al., 1981). Complex problems are
decomposed into simpler ones, offering a powerful tool for regulatory and administrative institutions dealing with difficult risk issues (Vrijling et al., 2004).
Table 4.1. Risk compendium of activities estimated to increase the chance of death in any
year by 10^-6 (Source: Wilson (1979), reproduced in Fischhoff et al. (1981))
Owing to this, formal analysis is superior to bootstrapping and professional
judgment in evaluating new hazardous technologies.
Fischhoff and his colleagues emphasize that formal analyses can be utilized
as either methods or aids. If interpreted as a method, anyone who accepts
its use and underlying assumptions is bound to follow the recommendations.
Alternatively, the recommendations can be seen as clarifying aids, addressing
issues of facts, values and uncertainties.
Even simple formal analyses require highly trained experts, transferring
acceptable risk decisions to a technical elite. Owing to this, their success is
strongly dependent on good communication with clients and the public. The
great advantages of formal analyses are their openness and soundness, providing
logical recommendations that are open to evaluation. Their conceptual framework helps identify and sharpen the debate around risk issues, possibly
encompassing a broad range of concerns. However, as full-blown methods are
expensive and time-consuming, only the most dominant concerns are included.
And because what constitutes the most important concerns is ultimately a judgmental question, the separation of facts and values is critical (yet Utopian) in
formal analysis. Two main types of formal analysis are presented in Fischhoff
et al. (1981): cost-benefit analysis (CBA) and decision analysis.1 CBA has, according
to Fischhoff et al., gained broader acceptance than decision analysis, due to the
claim of objective value measurement. Paradoxically, the mixing of facts and
values is especially complex in CBA, as they are implicit.
1. Neither was developed for acceptable-risk problems. Both assume e.g. a well-informed, single decision maker or entity and immediacy of consequences. This is rarely
the case in decisions regarding complex risk issues.
Cost-benefit analysis
Cost-benefit analysis provides a quantitative evaluation of the costs and benefits
in a decision problem, expressed in a common monetary unit. Restricting itself to consequences amenable to economic evaluation, recommendations are
produced in pursuit of economic efficiency. Grounded in economic theory, the
alternative that best fulfills the criterion of utility is recommended. Since the
utilitarian principle ignores distributional issues, Fischhoff et al. report conceptual disagreement on whether equity considerations may be included in the
analyses.
There is no single CBA methodology. As pointed out by French et al. (2005),
there is rather a family of methods sharing the same philosophical premise of
balancing net expected benefits and costs. This balancing is an essential feature
of the ALARP approach of the following section, which explicitly integrates CBA
in a broader risk acceptability framework. What seems agreed upon is that
cost-benefit optimization provides a necessary aid for evaluating risk reduction
investments and judging the acceptability of new technological projects.
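The balancing just described can be sketched as a simple monetized comparison. The sketch below is illustrative only: the 20 million NOK value of preventing a fatality echoes the Elvik et al. (2009) figure discussed in the text, while the measure cost, fatality rate and lifetime are hypothetical numbers chosen for the example.

```python
# Minimal cost-benefit test for a risk-reducing measure (illustrative sketch;
# discounting of future benefits, which a real CBA would include, is omitted).
# The VPF of 20 million NOK follows the Elvik et al. (2009) figure cited in the
# text; the cost and fatality-rate inputs are hypothetical.

VPF_NOK = 20e6  # value of preventing a fatality

def cba_accepts(measure_cost_nok, fatalities_averted_per_year, lifetime_years):
    """Accept the measure if its monetized safety benefit exceeds its cost."""
    benefit = VPF_NOK * fatalities_averted_per_year * lifetime_years
    return benefit >= measure_cost_nok

# A barrier costing 5 MNOK, expected to avert 0.02 fatalities/year over 20 years:
# benefit = 20e6 * 0.02 * 20 = 8 MNOK, which exceeds the 5 MNOK cost.
print(cba_accepts(5e6, 0.02, 20))
```

A real analysis would discount future benefits and costs to present value, which generally makes long-deferred safety gains weigh less; this is exactly the "discounting of trade-offs through time" criticized later in the chapter.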
Requiring all costs and benefits to be monetarily expressed, CBA runs into
ethical and practical difficulties in assigning the cost of losing a human life
and the benefit of saving one. One area where CBA and the value of preventing
a fatality (VPF) are explicitly used is the Swedish and Norwegian road transport
authorities. In a recent study by Elvik et al. (2009), the prevention of one road
fatality is valued at approximately 20 million NOK. According to Vatn (1998),
there is no universal agreement on how to value lives. The problem can be seen
from the perspective of the individual as well as the decision maker. Hammitt
(2000) reviews the theoretical foundation and empirical methods for estimating the value of a statistical life (VSL), expressing the valuation of changes
in mortality risk across a population. VSL represents what people on average
are willing to pay for an infinitesimal mortality risk reduction. This is not to
be interpreted as the amount an individual is willing to pay to avoid certain death for himself or an identified individual, as most people are willing
to provide unlimited resources in such a situation (Hovden, 1998). The prefix
statistical is thus essential, avoiding unrealistically high values attached to the
loss of human life (Vrijling et al., 1998). The VSL for each individual depends
on age, wealth, baseline mortality risk and whether consequences are acute or
delayed. In a report prepared by the Australian Safety and Compensation Council
(2008), over 200 literature studies on VSL are reviewed, revealing great differences between the various estimates, as shown in Figure 4.2. The estimated VSL
differs between countries and across sectors, ranging from mean values of 11 to
51 million NOK in health and occupational safety respectively. Although the
theories of VSL are well established, Hammitt (2000) concludes his review by
calling for conceptual and methodological research on how to account for risk
characteristics other than probability and how to value risk across different populations.
Figure 4.2. Mean values of VSL-estimates by country (Source: Australian Safety and Compensation Council (2008))
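The definition of VSL as the average willingness to pay for an infinitesimal risk reduction can be illustrated with a small calculation. The willingness-to-pay and risk-reduction figures below are hypothetical, chosen only to yield a VSL of the order cited in the text.

```python
# VSL as marginal willingness to pay (WTP) per unit of mortality-risk reduction
# (illustrative sketch; the 20 NOK and 1e-6 inputs are hypothetical, picked to
# reproduce a VSL of 20 million NOK, the order of magnitude cited in the text).

def value_of_statistical_life(wtp_nok, risk_reduction):
    """Implied VSL: willingness to pay divided by the risk reduction bought."""
    return wtp_nok / risk_reduction

# If a person will pay 20 NOK to reduce annual death risk by one in a million,
# the implied VSL is 20 million NOK.
vsl = value_of_statistical_life(20, 1e-6)
print(vsl)
```

Note how the calculation aggregates many small individual payments into one "statistical" life; it says nothing about what anyone would pay to avoid certain death, which is the point made above about the prefix statistical.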
Decision analysis
Decision analysis is based on the axiomatic decision theory for making choices
under uncertainty, providing prescriptive recommendations given that its axioms are accepted. The reader may consult e.g. Abrahamsen & Aven (2008) for
a theoretical examination of the specific axioms. At the core of decision analysis are utilities, meaning subjective value judgments assigned to the various
attributes of a decision problem. By subjectively weighing the importance of
each attribute, consequences are evaluated relative to each other. The alternative
providing the greatest utility over all consequences is recommended. There are
several variants of decision analysis, having subjective utility functions as the
common denominator. One of these is multi-attribute utility theory, praised by
French et al. (2005) in the following presentation of the ALARP principle.
In decision analysis, the values, the choice of consequences to consider and the
probabilities are all subjective. As relative frequency data are not required, decision
analysis is suitable for considering unique as well as frequent events. Although
a frequentist interpretation is not required in CBA, Fischhoff et al. (1981)
report its prominence amongst cost-benefit analysts. In further contrast to CBA,
decision analysis enables consideration of non-economic consequences. Hence,
it has the advantage of considering whatever factual or value issues are of interest to
the decision maker. Not claiming an objective ground, the inclusion of attitudes
towards risk is also naturally accommodated in the analysis.
The quality of recommendations relies on the quality of value judgments.
Since values are often badly articulated, unconsciously held or acted upon in
contradictory ways, utility weights can be erroneously assigned. This calls for non-manipulative
methods for value elicitation and a conscious approach to risk framing. Additional difficulties arise if multiple parties involved in societal decision making
do not agree on the relative attractiveness of alternatives. Agreement is still
easier sought than for methods claiming value-immunity, since judgments are
explicit and out in the open (French et al., 2005). A difficult question is whether
company or regulatory decision makers are entitled to make value judgments
on behalf of the public. Somehow this must be resolved, as aggregating public
preferences is an inescapable methodological difficulty according to Fischhoff
et al. (1981). Assuming that the owner and the public have a common interest
in the success of a project, Ditlevsen (2003) is convinced that representative
decision analysis provides an upper risk acceptance limit agreeable to both
parties.
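The core mechanics of such a multi-attribute evaluation can be sketched as a weighted sum of single-attribute utilities. All weights and scores below are hypothetical subjective judgments invented for the example, not values from the literature discussed here.

```python
# Additive multi-attribute utility ranking (sketch; weights and scores are
# hypothetical subjective judgments, with single-attribute utilities in [0, 1]).

def overall_utility(scores, weights):
    """Weighted sum of single-attribute utilities (additive utility model)."""
    return sum(weights[attr] * scores[attr] for attr in weights)

# Subjective importance weights, summing to 1.
weights = {"safety": 0.5, "cost": 0.3, "environment": 0.2}

# Subjective utility scores for two alternatives on each attribute.
alternatives = {
    "A": {"safety": 0.9, "cost": 0.4, "environment": 0.6},   # utility 0.69
    "B": {"safety": 0.6, "cost": 0.9, "environment": 0.7},   # utility 0.71
}

# Recommend the alternative with the greatest overall utility.
best = max(alternatives, key=lambda name: overall_utility(alternatives[name], weights))
print(best)
```

Because the weights are explicit, a stakeholder who disagrees with the ranking can point at the exact judgment being contested, which is the openness French et al. (2005) credit the method with.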
Figure 4.3. Risk regions in the TOR-framework; in the tolerable region, risk must be reduced ALARP
In HSE (1992), the upper and lower limits of IRPA for workers in the nuclear sector are suggested as 10^-3 and 10^-6 respectively. These numbers are by
no means universal, since the factors deciding whether a risk is unacceptable,
tolerable or broadly acceptable are dynamic in nature. Melchers (2001) warns
that tolerability changes particularly quickly when there is discontinuity in the
normal pattern of events, raising societal and political pressure for redefining
the boundaries. Tolerability regions are still spelled out in guidelines and implicitly reflected through industrial practice. It should therefore be emphasized
that these criteria are indicators rather than rigid benchmarks, calling for flexible interpretation through deliberation and professional common sense (HSE,
2001b).
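As an illustration of the three-region logic, the indicative limits above can be turned into a simple classification rule. This is only a sketch: as stressed in the text, the boundaries are indicators calling for flexible interpretation, not rigid benchmarks.

```python
# Classify an individual risk (IRPA) against the indicative TOR boundaries
# suggested in HSE (1992) for nuclear-sector workers: 1e-3 (upper) and
# 1e-6 (lower). These limits are indicators, not rigid benchmarks.

UPPER = 1e-3   # above this: unacceptable, save in extraordinary circumstances
LOWER = 1e-6   # below this: broadly acceptable

def tor_region(irpa):
    """Return the TOR region for a given individual risk per annum."""
    if irpa > UPPER:
        return "unacceptable"
    if irpa < LOWER:
        return "broadly acceptable"
    return "tolerable (reduce ALARP)"

print(tor_region(5e-4))  # lies between the limits
```

In the tolerable region the classification is only the starting point; the risk must still be driven down until further reduction is grossly disproportionate to the cost.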
Boundary between broadly acceptable and tolerable risk
The boundary between the broadly acceptable and tolerable region shall according to HSE (1992, p. 10) be set by the point at which the risk becomes
truly negligible in comparison with other risks that the individual or society
runs. HSE (2001b) explains that the IRPA limit of 10^-6 is given by trivial activities that can be effectively controlled or are not inherently hazardous. This
is approximately three orders of magnitude lower than the level of background
risk a person experiences in his daily environment. With reference to Fischhoff
et al. (1981), this is a bootstrapping approach, calling for careful consideration
of the moral pitfalls in preserving the status quo. But, as the strength of such an
approach is its practical feasibility, one can argue that bootstrapping a lower
limit improves the manageability of ALARP analysis. Morally, this may be easier justified than approaching an upper criterion in the same manner, since the
former exists for utilitarian reasons whilst the latter touches on equity concerns.
With further reference to Fischhoff and his coworkers, it should be stressed
that no risk is acceptable unless it provides some benefits. Owing to this, the
lower limit is necessarily conditioned on the benefits of a specific situation.
Boundary between unacceptable and tolerable risk
There are no widely applicable criteria defining the boundary between the tolerable and unacceptable region. The argument of HSE (2001b) is that hazards
giving rise to considerably high individual risk also invoke social concerns,
which are often a far greater determinant of risk acceptability. This is in line with
Douglas's (1985) preoccupation with the social dimension of risk perception,
understanding risk acceptability as a social construct reaching far beyond an objective claim of physical consequences. Even though HSE (2001b) recommends
the use of individual risk criteria in most cases, a tolerability FN-criterion of 50
fatalities per accident is provided for risks giving rise to social concerns. The
suggested IRPA limit of 10^-3 should hence be implemented with caution, also
considering that it is a very lax limit that most industries in the UK and Norway fall
well below (Vinnem, 2007). Quite paradoxically, this number is chosen exactly
because most hazardous industries pose a substantially lower risk, providing an
excellent example of how bootstrapping discourages improvement. However,
as the upper limits serve only as the starting point of ALARP improvement
(Ale, 2005), this chain of thought serves as a rhetorical argument rather than a
conceptual attack on the TOR-framework. A principal flaw is nevertheless demonstrated,
in that a lax upper limit may legitimize high risk to a small group of people,
since risks falling below this limit are judged tolerable according to utility
rather than equity.
Tolerable risk
In the middle region, bounded by the upper and lower acceptability limits, risk
must be kept as low as reasonably practicable. A beneficial activity is considered
tolerable if, and only if (HSE, 2001b):
All hazards have been identified
The nature and level of risk is properly addressed, based on the best available scientific evidence or advice, and the results are used to determine
appropriate control measures
The residual risks are not unduly high and are kept as low as reasonably
practicable
Risks are periodically reviewed to ensure that they still meet the ALARP criteria.
deterministic, while the benefits are probabilistic and theoretical. The expected
benefits are mathematical only, and will never be realized in practice. Depending on the occurrence of accidents, the final balance over a life cycle will thus
be either very negative or very positive. Since an installation may not economically survive the worst-case scenario, the maximum loss should therefore be
considered in addition to the expected value.
The fallacies of cost-benefit analysis are thoroughly assessed by French
et al. (2005), comparing the suitability of CBA and multi-attribute utility theory in ALARP evaluations. The conclusions are in favor of the latter, due to
four problems precluding current CBA applications to ALARP. These concern the non-objective pricing of safety gains, inconsistent valuation of group
and individual risk, the immoral discounting of trade-offs through time, and
the lacking theoretical justification and ad hoc use of disproportionality factors.
Moreover, CBA is accused of being ill-defined and implicitly subjective. Multi-attribute utility theory is claimed to address all of these concerns in a
more satisfactory manner, in addition to structuring the debate between different stakeholders in productive ways. Within this method, a disproportionality
factor is easily modeled by adjusting the relative weights of costs and benefits,
and a simple framework is offered for addressing multiple-fatality aversion. The
approach is not without drawbacks in an ALARP context, notably because there
is an explicit requirement in HSE (2001b) of comparing the monetary costs and
benefits of a risk-reducing measure. Additionally, considering the difficulty of
trading very different kinds of attributes, lack of consistency between decisions
is likely to result (French et al., 2005). Although multi-attribute utility analysis
is suggested as an alternative to CBA in HSE (1992), practical applications to
ALARP seem few in number.
ALARP in practice
HSE director Walker (2001) commends the TOR-framework for offering comprehensive decision support that has stood the practical test of time in the UK.
In the Norwegian offshore industry, however, obstacles are found in attaining
a decision-making process liberated from using quantitative risk analyses as
the sole basis of documentation. According to Vinnem et al. (2006), this seemingly owes to a widespread misinterpretation of ALARP as a general attitude
to safety, rather than a systematic, documented process to be followed up by
responsible authorities.
From the viewpoint of a plant owner, ALARP requires more effort than
adopting a set of predefined criteria, since evaluations are made on a case-by-case basis (Aven, 2007). This is especially true if lack of good practice demands
a full cost-benefit analysis, which is extremely cost- and resource-demanding
(HSE, 1992). Strong authority involvement is also implied, evaluating whether
the search for alternatives has been sufficiently wide and that the arguments
relating to gross disproportion are valid (Ale, 2005). On the positive side, the
pragmatic use of broadly accepted risk criteria may reduce conflict costs from
political compromises, as proposed by Starr & Whipple (1980). The existence
of a cut-off lower acceptability limit also provides time-saving decision support
on what is safe enough.
The lack of a moral discipline
In the critical contribution of Melchers (2001), the ALARP approach is accused
of having serious moral implications. In addition to the commonly raised objections to assigning monetary values to the benefit of risk reduction, Melchers is concerned with the dichotomy between socio-economic matters and the
morality of risk issues. Based on the assertion that the requirements of reasonableness and practicality lack openness, the approach is accused of excluding
public participation in tolerability decisions, and of enabling the cover-up of risk
information in cases of economic or political importance. This claim can be
contested by the promise of HSE (2001b) to involve all relevant stakeholders
in a transparent ALARP process. However, HSE provides little guidance on
how this is to be done in practice. It can be suggested that the strength of the
TOR-framework lies in its ability to capitalize on the advantages of equity-, utility- and technology-based criteria, while there is a call for procedural inclusion of
the alternative principle of discourse.
4.4.2 ALARA
ALARA is the Dutch acceptability framework, calling for risk to be reduced as
low as reasonably achievable. The approach is conceptually similar to ALARP,
with the distinguishing feature of not considering a region of broad acceptability. Figure 4.5 shows that broadly acceptable and tolerable risks are replaced
by a common notion of acceptable risk. Until 1993, the region of negligible
risk was part of the Dutch policy. Subsequently, it has been abandoned on the
grounds that all risks shall be reduced as long as reasonable (Bottelberghs,
2000). The principle was originally launched by the International Commission on Radiological Protection in the 1970s, for managing risks for which no
no-effect threshold could be demonstrated (HSE, 2002).
What is not explicitly allowed
The distinguishing features of ALARA are addressed by Ale (2005) in a
comparative study of risk regulation in the UK and the Netherlands. Despite
ALARA's striking similarity with ALARP, their practical interpretations differ
greatly. According to Ale, this primarily owes to the distinct legal and historical contexts of the two countries. In contrast to the common law tradition in the
ison of risk and costs a much finer balancing act than in ALARP, since the
criterion of gross disproportionality is known to lapse with decreasing levels
of risk. Reasonableness in ALARA is instead given by the point at which the
marginal costs exceed the marginal benefits. Owing to this, a higher level of
precision is required from risk assessments, which is a consequence also of the
legal necessity of demonstrating adherence to an upper limit (Ale, 2005).
Ironically, the search for ALARA is seldom considered reasonable in practice. Jongejan (2008) reports reluctance among both plant owners and local
governments to reduce risk beyond legal limits, which is partly related to a
common misinterpretation of the principle as biasing towards safety. The
main explanation is nevertheless found in the legal interpretation of ALARA, which considers political judgments of reasonableness to be already built into the upper risk
acceptability criteria (Hartford, 2009). Owing to this, a distinction between acceptable and tolerable risk becomes meaningless, suppressing utilitarian concerns as ALARA becomes more of a token statement.
4.4.3 GAMAB
GAMAB is the acronym of the French expression Globalement au moins aussi
bon, meaning globally at least as good. The principle prescribes the level of
risk a new transportation system in France has to fall below, requiring new
systems to offer a total risk level that is globally at least as low as that of any existing
equivalent system (EN 50126, 1999). A recent variant of GAMAB is GAME,
rephrasing the requirement to at least equivalent (Trung, 2000). This criterion
applies to modified systems as well as new technologies, requiring the global
risk to be at least equivalent to that prior to the change. The conceptual distinction
between GAMAB and GAME remains unclear. A possible interpretation is that at
least as good in GAMAB offers a wider interpretation of relevant factors than
equivalent in GAME. Since the two abbreviations share the same ruling principle,
and because both are almost exclusively used in the French railway industry,
their distinctiveness is assumed irrelevant to this study.
Using existing technology as the point of reference, GAMAB is a pure technology-based criterion. Applying this principle, the decision maker is exempted from
the task of formulating a risk acceptance criterion, as it is given by the present
level of risk. However, to make the criterion operational, what is meant by
globally at least as good and equivalent system needs to be addressed.
An ethical dilemma
The term global is central to GAMAB. It means considering the totality of
risk, ignoring how risk is distributed between different subsystems (Stuphorn,
2003). As long as the global level of risk is improved, GAMAB does not
voice concern even if the risk of parts of the system has increased. For example, a new
transport system offering enhanced safety to first-class passengers may be judged
acceptable, even if the risk to people in the rear wagon has increased. As
such, the notion of global opens up for trade-offs and overcompensation of
risk (Nordland, 1999). Although technology- and equity-based criteria do not
contradict each other in principle, the GAMAB criterion shows that pragmatic
interpretations of a pure technology criterion may lead to equity violations.
Learning-oriented bootstrapping
At least as good means a risk level that is as low as or lower than the risk of
a comparison system. The simple criterion is then:
Risk metric (new system) ≤ Risk metric (best existing system)   (4.2)
whether one can rightfully compare an inexpensive device to a far more expensive variant. Since cost-benefit considerations are not required in GAMAB,
Trung (2000) similarly claims that unrealistic safety objectives may be generated. In this regard, one can ask if GAMAB is hindering rather than promoting
improvement, rejecting alternatives on erroneous standards of reference.
4.4.4 MEM
MEM is the acronym for minimum endogenous mortality, a German principle
requiring new or modified technological systems not to cause a significant
increase in IRPA to any person (Schäbe, 2004). The probability of dying of
natural causes is used as the reference level for risk acceptability. MEM is based
on the fact that death rates vary with age, and the assumption that a portion
of each death rate is caused by technological systems (Nordland, 2001). Unlike
ALARP and GAMAB, MEM offers a universal quantitative risk acceptance
criterion, derived from the minimum endogenous mortality rate.
Endogenous mortality
Endogenous mortality means death due to internal causes, like illness or
disease (Stuphorn, 2003). In contrast, exogenous mortality is caused by the
external influences of accidents. The endogenous mortality rate is the rate of
deaths due to internal causes in a given population at a given time. Figure 4.6
displays the endogenous mortality of various age groups in Norway in 2007.
Not unexpectedly, the maximum rate is found amongst the oldest population,
whereas the young have the lowest rate. Children between the ages
of 5 and 15 have the minimum endogenous mortality rate, which in western
countries is known to be 2×10^-4 per person per year on average (EN 50126,
1999). The MEM principle requires any technological system not to impose a
significant increase in risk compared to this level of reference.
The significance of increase
According to the railway standard EN 50126 (1999), a significant increase is
equal to 5% of MEM. This is mathematically deduced from the assumption that
there are roughly 20 types of technological systems (Trung, 2000). Amongst
these are technologies of transport, energy production, chemical industries and
leisure activities. Assuming that a total technological risk of the size of the minimum endogenous mortality is acceptable, the contribution from each system
is confined to:
R = Rm / 20 = 10^-5 per person per year   (4.3)
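The arithmetic behind equation (4.3) can be checked directly; the only inputs are the minimum endogenous mortality rate of 2×10^-4 and the assumed 20 types of technological systems, both taken from the text.

```python
# Numerical check of equation (4.3): the tolerable risk contribution from one
# of roughly 20 technological system types, given the minimum endogenous
# mortality rate Rm cited from EN 50126.

Rm = 2e-4           # minimum endogenous mortality, ages 5-15 (per person, per year)
n_systems = 20      # assumed number of technological system types
R = Rm / n_systems  # tolerable contribution per system: 5% of Rm

print(R)            # 10^-5 per person per year
```

Note that the 5% figure and the division by 20 are the same statement: allocating the whole endogenous mortality budget evenly across 20 system types leaves each with one twentieth, i.e. 5%, of Rm.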
Figure 4.7. The MEM-criterion holds for accidents resulting in a maximum of 100 fatalities, with a tolerable IRPA of 10^-5 (adapted from EN 50126 (1999))
(Klinke & Renn, 2002). In contrast, the precautionary principle is a precaution-based strategy for handling uncertain or highly vulnerable situations. Klinke
and Renn reason that a risk-based approach of judging numerical risks relative to each other becomes meaningless if based on very uncertain parameters.
Owing to this, precaution-based approaches do not provide quantitative criteria
against which risks can be compared. Risk acceptability is rather a matter of proportionality between the severity of potential consequences and the
measures taken in precaution.
Intention and use
The original definition of the precautionary principle is found in principle 15
of the UN declaration from Rio in 1992 (United Nations, 1992):
Where there are threats of serious or irreversible damage, lack of full
scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
Wilson et al. (2006) discuss several definitions of the principle, finding a common interpretation in that complete evidence of harm does not have to exist
for preventive actions to be taken. An alternative interpretation is that absence
of evidence of risk should not be taken as evidence of absence of risk (HSE,
2001b). The precautionary principle is hence a guiding philosophy when there
are reasonable grounds for concern about potentially dangerous effects, but the
scientific evidence is insufficient, inconclusive or uncertain (EU, 2000). DeFur
& Kaszuba (2002) note two cases in which the principle is most useful, i.e.
situations of present uncertainties and situations where new information will radically alter
well-known conditions. In the latter case, a valuable counterbalance is offered to
bootstrapping methods encouraging preservation of the status quo.
The precautionary principle is an outgrowth of the increased environmentalist awareness since the 1970s, acknowledging that the scale of technological
development has by far exceeded our predictive knowledge of environmental
consequences (Belt, 2003). In the Rio Declaration, the principle is explicitly
prescribed to the environmental field. Consulting the EU's communication on the
principle (EU, 2000), its scope is claimed to be far wider, covering environmental, human, animal and plant effects. Common to all is the concern for
long-term effects, irreversibility and the well-being of future generations. DeFur
& Kaszuba (2002) report applications in the areas of food safety, persistent
organic pollutants and even the prevention of worldwide computer crashes
in the late 90s. Trouwborst (2007) sees the EU and generalists like DeFur and
Kaszuba as fighting a lonely battle, claiming that legal instruments explicitly
linking the principle to non-environmental consequences are few in number. A
plausible explanation is provided by Trouwborst himself, calling attention to
the often ignored distinction between the exercise of precaution as such (erring
on the safe side) and the precautionary principle.
5
Concluding discussions
The overall objective of this study is to discuss and create a sound basis for
formulating risk acceptance criteria. Fundamental to this aim is a basic understanding of the concepts of risk and risk acceptance, which are clarified and
problematized in chapter 2. In chapters 3 and 4 respectively, the problem is
explicitly addressed through the sub-objectives of discussing the main concepts
and quantities used to formulate risk acceptance criteria, and questioning the
basis and applicability of the various approaches to setting risk acceptance criteria. An integral part of these discussions is the conceptual problems related to
risk acceptance criteria, as prescribed in the fourth objective of the study. For
this reason, the most valuable findings are the nuances and contrasts provided
in these discussions, pinpointing fallacies and strengths of the various metrics
and approaches.
Readers familiar with the subject may notice that an ongoing debate of
recent years is omitted. In the academic crusades led by Aven and his coworkers
at the University of Stavanger, the value of risk acceptance criteria per se is
questioned.1 The final chapter follows this thread, evaluating the meta-soundness
of seeking a sound formulation of risk acceptance criteria. As the ultimate
purpose of risk acceptance criteria is to aid decision making on risk, two
interrelated questions are raised in these concluding discussions:
Are risk acceptance criteria feasible to the decision maker?
Do risk acceptance criteria promote good decisions?
1. In their critique of risk acceptance criteria, Aven and his coworkers refer to the fixation of
an upper limit of acceptable risk. Such a limit is denoted absolute probabilistic criteria
by Skjong et al. (2007), and can be seen in contrast to trade-off based criteria.
[Figure 5.2. The relationship between production and protection (adapted from Reason, 1997). The parity zone between the bankruptcy and catastrophe boundaries spans low-hazard to high-hazard ventures.]
holds for all probability-generated risk metrics, deduced from all risk-based
approaches. A special concern is voiced for MEM, as it is the principle most
explicitly announcing an objective level of reference.
Erring on the side of safety
The importance of avoiding strict interpretation of risk acceptance criteria is perhaps greater following a frequentist interpretation. Evaluating ALARP in light of the two schools of probability, Schofield (1998) concludes that the relative frequency approach presents significant problems of model validation. The subjective interpretation, on the other hand, is found to offer a powerful perspective for trade-off analyses in the ALARP region. In a frequentist's search for an objective quantity, estimation uncertainty is particularly large for low frequency/high consequence events. A risk located in the upper tolerability region in Figure 4.3 may in such a case actually lie above the unacceptable limit. This also holds for Bayesian calculations, with the important distinction that uncertainties are assigned to the actual value. This is why conservative judgments are often preferred over best estimates, erring on the safe side in the face of uncertainty. While NORSOK Z-013N (2001) recommends the use of best estimates, a conservative approach is defended in e.g. NSW (2008). In some cases, the epistemic uncertainty may be so large that one cannot even know whether a prediction lies in the conservative ballpark. The introduction of nanotechnology serves as a timely example: its associated risks are so uncertain that comparison with predefined criteria cannot be justified, regardless of whether one is of Bayesian or frequentist conviction. In such cases, the decision maker must rather turn to the precautionary principle.
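The effect of preferring a conservative judgment over a best estimate can be sketched numerically. The following Python fragment is an illustration only, not taken from the thesis: the limit, the median estimate and the error factor are hypothetical placeholders, chosen to show how a best estimate that sits inside the tolerability region can exceed the upper limit once estimation uncertainty is acknowledged.

```python
# Illustrative sketch (hypothetical numbers): a best estimate inside the
# tolerability region may exceed the upper limit at a conservative percentile.

UPPER_LIMIT = 1e-3      # hypothetical upper limit of tolerability (per year)

best_estimate = 5e-4    # median of a lognormal uncertainty distribution
error_factor = 4.0      # hypothetical ratio of 95th percentile to median

# For a lognormal distribution, the 95th percentile equals the median times
# the error factor when the error factor is defined as exp(1.645 * sigma).
conservative = best_estimate * error_factor

print(f"best estimate  : {best_estimate:.1e}  below limit: {best_estimate < UPPER_LIMIT}")
print(f"95th percentile: {conservative:.1e}  below limit: {conservative < UPPER_LIMIT}")
```

With these placeholder numbers the best estimate (5e-4) passes the limit while the 95th percentile (2e-3) does not, which is precisely the situation described above for the upper tolerability region.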
5.2.2 Ethical implications of risk acceptance criteria
Examining the ethical justification of risk acceptance criteria, Aven (2007) concludes that there are no stronger ethical arguments for using absolute risk acceptance criteria than for trade-off based regimes. While there are arguments both for and against the use of risk acceptance criteria, these are not primarily of an ethical character. According to Aven, there should be no discussion on the need for considering all the ethical stances of utility, justice, discourse and ethics of the mind.² What should be debated is rather the balance of the various principles and concerns. This balancing act can be suggested to work at two levels: explicitly in the choice of approach, and implicitly in the selection of risk metrics.
² Ethics of the mind is rooted in the philosophy of Immanuel Kant. This line of reasoning states that the rightness of an act is not determined by its consequences. Rather, actions are correct in and of themselves, without reference to other attributes, because they stem from fundamental obligations (Hovden, 1998).
Given that a company will strive to satisfy its criteria, the resulting risk obviously depends on their stringency. Owing to this, the GAMAB requirement of being at least as good as the best comparable system seems to promote unprecedentedly low levels of risk. This stands in contrast to traditional bootstrapping approaches, where risk reduction is encouraged only by means of preserving the status quo. As an atypical example, the MEM criterion is relatively strict, but no effort is required to reduce the risk below an IRPA of 10⁻⁵. Since the criterion has remained constant through a variety of innovations, it is likely that the transient assumption of twenty technological systems has weakened its stringency.
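The arithmetic behind the twenty-systems assumption can be made explicit. In the classical MEM formulation, the per-system limit of 10⁻⁵ is obtained by dividing the minimum endogenous mortality rate (about 2 × 10⁻⁴ fatalities per person-year) by the assumed number of technological systems a person is exposed to. The sketch below follows that formulation; varying the system count is our own illustration of how the unchanged criterion loses stringency as exposure grows.

```python
# Sketch of the MEM arithmetic. The reference mortality rate and the
# twenty-systems assumption follow the classical MEM formulation; the
# varying system counts are an illustration of eroding stringency.

MINIMUM_ENDOGENOUS_MORTALITY = 2e-4   # fatalities per person-year
ASSUMED_SYSTEMS = 20                  # classical MEM assumption

per_system_limit = MINIMUM_ENDOGENOUS_MORTALITY / ASSUMED_SYSTEMS
print(f"per-system IRPA limit: {per_system_limit:.0e}")  # 1e-05

# If a person is in fact exposed to more systems than assumed, the total
# added risk the unchanged criterion permits exceeds the endogenous
# baseline it was meant to respect.
for n_systems in (20, 40, 60):
    total = n_systems * per_system_limit
    print(f"{n_systems} systems -> total permitted added IRPA {total:.0e}")
```

At forty systems the permitted total is already twice the endogenous baseline, which is the sense in which the transient assumption weakens the criterion.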
In contrast to the standstill criterion of MEM is ALARP, whose disproportionality criterion holds a promise of risk that is as low as practicality allows. Aven & Vinnem (2005) clearly favor the ALARP approach over absolute criteria, on the grounds that it encourages a continuous strive for risk reduction. Crucial to this argument is the distinction between HSE's and the Norwegian interpretation of ALARP, as reported in Vinnem et al.'s (2006) study of ALARP processes in the Norwegian offshore industry. While the focus of HSE is on reaching good solutions in the ALARP area, the involvement of Norwegian authorities is restricted to verifying compliance with upper limits. According to Aven & Vinnem (2005), minimal impetus is given to operating companies to consider whether further risk reduction is achievable. The main emphasis amongst Norwegian operators has thus been on satisfying the upper criteria, usually with small or no margins. If ALARP evaluations are performed, they very often result in dismissal of possible improvements. This yields an important conclusion: equally important as the theoretical formulation of risk acceptance criteria is how these are applied in the industry and followed up by the authorities.
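The disproportionality criterion mentioned above can be sketched as a simple cost-benefit test: a risk-reducing measure is required unless its cost is grossly disproportionate to the benefit gained. The fragment below is an illustration of this style of reasoning, not a prescribed procedure from the thesis; the value of preventing a fatality and the disproportion factor are hypothetical placeholders.

```python
# Illustrative ALARP "gross disproportion" check (hypothetical numbers).
# A measure is required unless its cost grossly exceeds its benefit.

VPF = 2_000_000.0           # value of preventing a fatality (hypothetical)
DISPROPORTION_FACTOR = 3.0  # > 1; typically larger the closer the risk
                            # lies to the upper tolerability limit

def measure_required(cost: float, fatalities_averted: float) -> bool:
    """Return True if the measure must be implemented under this sketch
    of the ALARP disproportion test."""
    benefit = VPF * fatalities_averted
    return cost <= DISPROPORTION_FACTOR * benefit

print(measure_required(cost=4_000_000, fatalities_averted=1.0))   # True
print(measure_required(cost=10_000_000, fatalities_averted=1.0))  # False
```

A verification regime that only checks the upper limit, as described for the Norwegian authorities, never exercises this test; that asymmetry is what drives the conclusion above about follow-up in practice.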
5.2.4 One accepts options, not risks
A subjective interpretation of probability does not diminish the credibility of risk acceptance criteria per se. Unfortunately, this recognition is secondary to the fundamental problem of whether acceptable risk is expressible in the form of an objective criterion. The words of Fischhoff et al. (1981) are inescapable: risk is never acceptable in an absolute sense. Rather, risk acceptance is a matter of trade-offs, unique to a particular set of options in a given context. Since the inextricable question of whether acceptable levels of risk exist resides in the realm of philosophy, the obstinacy of Fischhoff and his coworkers will not be challenged in this thesis. What can be questioned is to what extent risk acceptance criteria reflect that risk acceptance is a trade-off problem.
Comparing the different approaches is in this respect a rewarding task, as ALARP is the only approach not only allowing for, but also demanding, trade-off analyses. Neither GAMAB nor MEM possesses this quality, as both prescribe risk as the single attribute of importance. Due to the stringency of cri-
References
Elvik, R., Kolbenstvedt, M., Elvebakk, B., Hervik, A., & Brin, K. (2009). Costs and benefits to Sweden of Swedish road safety research. Accident Analysis and Prevention, 41, 387–392.
EMS (2001). LUL QRA – London Underground Limited Quantified Risk Assessment. Update 2001. Technical report, Safety Quality and Environmental Department of London Underground.
EN 50126 (1999). Railway applications – The specification and demonstration of reliability, availability, maintainability and safety (RAMS). European Norm, Brussels.
EU (2000). Communication from the commission on the precautionary principle (COM). Technical report, Commission of the European Communities, Brussels.
EU (2006). Council Directive 2006/42/EC of 17 May 2006 on machinery. Official Journal of the European Communities, L 157/24.
Evans, A. & Verlander, N. (1997). What is wrong with criterion FN-lines for judging the tolerability of risk? Risk Analysis, 17, 157–168.
Fischhoff, B. (1994). Acceptable risk: A conceptual proposal. Risk: Health, Safety and Environment, 1, 1–28.
Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S., & Keeney, R. (1981). Acceptable risk. Cambridge University Press, New York.
French, S., Bedford, T., & Atherton, E. (2005). Supporting ALARP decision making by cost benefit analysis and multiattribute utility theory. Journal of Risk Research, 8, 207–223.
Garland, D. (2003). Risk and morality. University of Toronto Press, Toronto, Buffalo, London.
Hammit, J. (2000). Valuing mortality risk: Theory and practice. Environmental Science and Technology, 34, 1396–1400.
Hartford, D. (2009). Legal framework considerations in the development of risk acceptance criteria. Structural Safety, 31, 118–123.
Holden, P. (1984). Difficulties in formulating risk criteria. Journal of Occupational Accidents, 6, 241–251.
Holton, G. (2004). Defining risk. Financial Analysts Journal, 60, 19–25.
Hovden, J. (1998). Ethics and safety: mortal
Appendix
R O S S

Further information about the reliability, safety, and security activities at NTNU may be found on the Web address: http://www.ntnu.no/ross

ISBN: 978-82-7706-228-1