Inger Lise Johansen

FOUNDATIONS AND FALLACIES OF RISK ACCEPTANCE CRITERIA

ROSS: Reliability, Safety, and Security Studies at NTNU

Dept. of Production and Quality Engineering

Address:          N-7491 Trondheim
Visiting address: S.P. Andersens vei 5
Telephone:        +47 73 59 38 00
Facsimile:        +47 73 59 71 17

TITLE
Foundations and Fallacies of Risk Acceptance Criteria

AUTHOR
Inger Lise Johansen

SUMMARY
The objective of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Risk acceptance criteria are quantitative or qualitative terms of reference guiding decision making on acceptable risk. Examining central concepts, metrics and approaches, a comprehensive foundation for the setting and use of risk acceptance criteria is provided.
First, foundational issues on risk, probability and risk acceptance are presented. Subsequently, the strengths and fallacies of individual and societal risk metrics are examined, followed by a similar examination of principal and practical approaches to setting risk acceptance criteria. The ability of risk acceptance criteria to offer sound decision support is finally questioned.
REPORT NO.:     ROSS (NTNU) 201001
ISBN:           978-82-7706-228-1
DATE:           2010-02-23
SIGNATURE:      Marvin Rausand
PAGES/APPEND.:  105

KEYWORD NORSK       KEYWORD ENGLISH
RISIKOVURDERING     RISK ACCEPTANCE
AKSEPTKRITERIER     ACCEPTANCE CRITERIA
RISIKOMÅL           RISK METRICS

Preface
This report was written in the autumn of 2009 at the Norwegian University of Science and Technology (NTNU), Department of Production and Quality Engineering. For their devoted guidance, special gratitude is directed to Professor Marvin Rausand and Postdoc Mary Ann Lundteigen.
I also wish to thank Vivi Moe for allowing me to use her piece Tuppen og Lillemor on the front of this publication.

Trondheim, Norway, 18th February 2010


Inger Lise Johansen


Abstract

The purpose of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Examining the strengths and pitfalls of central concepts, metrics and approaches, the thesis provides a comprehensive foundation for the setting and use of risk acceptance criteria. The findings are derived from integration and critique of pioneering and state-of-the-art literature.
Risk acceptance criteria are quantitative or qualitative terms of reference, used in decisions about acceptable risk. Acceptable risk means a risk level that is accepted by an individual, enterprise or society in a given context. Our willingness to accept risk depends on the benefits from taking the risk, the extent to which the risk can be controlled, and the types of consequences that may follow. Acceptable levels of risk are hence neither absolute nor universal, but contingent on trade-offs and contextual premises.
Fatality risk can be expressed by individual or societal risk metrics. To capture the distribution and totality of risk posed by a particular system, both individual and societal risk acceptance criteria are necessary. Suitability for decision support and communication, unambiguity and independence are qualities that should be sought. The main expressions of individual risk are IRPA and LIRA. IRPA expresses the activity-based risk of a specific or hypothetical person, and is particularly suited for decisions concerning frequently exposed individuals. While exposure is decisive to IRPA, LIRA assumes that a person is permanently present near a hazardous site. LIRA is thus location-specific and pragmatically reserved for land-use planning.

Common societal risk metrics are FN-curves, FAR and PLL. FN criterion lines uniquely distinguish between multiple- and single-fatality events, but are accused of providing illogical recommendations. PLL is a simpler metric, particularly suited for cost-benefit analyses. Overall acceptance limits are seldom expressed by PLL, since exposure is not reflected. In contrast, FAR is defined per unit of exposure, enabling realistic comparison with predefined criteria. If injury frequency surpasses that of fatalities, a metric related to injury or ill health may provide the most proper criterion. All metrics are fraught with assumptions and difficulties, necessitating awareness amongst practitioners.
Various approaches have been developed for setting risk acceptance criteria. A distinction is drawn between fundamental principles, deductive methods and specific approaches. Utility, equity and technology offer principal criteria to be used alone or as building blocks. While utility-based criteria concern the overall balance of goods and bads to society, the principle of equity yields an upper risk limit that no member of society should surpass. Technology-based criteria see acceptable risk as attained by the use of state-of-the-art technology, possibly at the expense of considerations of cost-effectiveness and equity. Amongst deductive methods for solving acceptable-risk problems are expert judgment, bootstrapping and formal analysis. There are two questionable assumptions underlying bootstrapping methods: that a successful balance of risks and benefits was achieved in the past, and that the future should work in the same way. Formal analyses avoid this bias towards the status quo, demanding explicit trade-off analyses between the risks and benefits of a current problem. The advantages of formal analysis are openness and soundness, while a pitfall concerns the difficulty of separating facts and values.
Specific approaches to setting risk acceptance criteria are based on a combination of fundamental principles and deductive methods. Most cultivated is the ALARP approach, capitalizing on the advantages of formal analysis and all principal criteria. Requiring risk to be as low as reasonably practicable, ALARP provides conditional rather than absolute criteria, uniquely capturing that risk acceptance is a trade-off problem. Problems nevertheless remain, notably the resource intensiveness and the imprecise notion of gross disproportionality. Conceptually close, but practically far from ALARP, is ALARA. Whereas upper criteria represent the start of ALARP discussions, they serve as the endpoint in ALARA. Arguments of reasonableness are considered already built into the strict upper limits of ALARA. Strict criteria are provided also by the GAMAB approach, requiring new systems to be globally at least as good as any existing system. GAMAB can be seen as learning-oriented bootstrapping, but may reject developments on an erroneous standard of reference. While GAMAB is technology-based, the MEM approach uses the minimum IRPA of dying from natural causes as its reference level. Little impetus is given for reducing risk beyond this static requirement. Different from these risk-based approaches is the precautionary principle, intended for situations where great uncertainty makes comparison with a predefined metric meaningless. The precautionary principle has been attacked from many quarters, but is concluded to offer a valuable guide in the absence of knowledge.
The extent to which risk acceptance criteria offer sound decision support is finally questioned. The various approaches differ with respect to consistency, practicality and ethical implications, and to the degree to which risk reduction is encouraged and risk acceptance reflected. The choice of metrics implicitly or explicitly affects how these issues are resolved. Interpreting risk and probability as subjective constructs is not seen to threaten the validity of risk acceptance criteria. What may cause a problem is regulators and practitioners understanding risk acceptance criteria as objective cut-off limits. The overall conclusion is that acceptance criteria offer sound decision support, but only if authors and users understand the assumptions and limitations of the applied metrics and approaches. There is a call for applied research on the role of the industries and the government in formulating and complying with risk acceptance criteria. As environmental consequences are omitted from the study, the lack of a sound basis for formulating environmental risk criteria urges further research.

Contents

1  Introduction .................................................... 1
   1.1  Background ................................................. 1
   1.2  Objectives ................................................. 2
   1.3  Limitations ................................................ 3
   1.4  Structure .................................................. 4

2  Clarification of concepts ....................................... 5
   2.1  What is risk? .............................................. 5
        2.1.1  The meaning of risk ................................. 6
   2.2  Defining risk .............................................. 8
   2.3  What can go wrong? ........................................ 11
   2.4  How likely is it that it will happen? ..................... 12
        2.4.1  Classical theory of probability .................... 13
        2.4.2  The relative frequency theory ...................... 14
        2.4.3  Subjective probability ............................. 15
   2.5  If it does happen, what are the consequences? ............. 16
   2.6  Safety .................................................... 18
   2.7  Risk assessment ........................................... 19
   2.8  Risk acceptance criteria .................................. 21
   2.9  Acceptable risk ........................................... 22
        2.9.1  One accepts options, not risks ..................... 22
        2.9.2  Acceptability is not tantamount to tolerability .... 23
        2.9.3  Factors influencing risk acceptance ................ 23
        2.9.4  Risk acceptance is a social phenomenon ............. 24

3  Expressing risk acceptance criteria ............................ 27
   3.1  Introduction .............................................. 27
        3.1.1  Risk metrics ....................................... 27
        3.1.2  To whom it may concern ............................. 28
   3.2  Aspects in the choice of risk metrics ..................... 29
        3.2.1  Generic requirements according to NORSOK Z-013 ..... 29
        3.2.2  Pragmatic considerations ........................... 30
        3.2.3  Past and future observations ....................... 30
   3.3  Individual risk metrics ................................... 31
        3.3.1  Individual risk per annum (IRPA) ................... 31
        3.3.2  Localized individual risk (LIRA) ................... 33
   3.4  Societal risk metrics ..................................... 35
        3.4.1  FN-curves .......................................... 37
        3.4.2  Potential loss of life (PLL) ....................... 40
        3.4.3  Fatal accident rate (FAR) .......................... 41
   3.5  Other ..................................................... 42
        3.5.1  Risk matrix ........................................ 43
        3.5.2  Loss of main safety functions ...................... 44
        3.5.3  Safety integrity level (SIL) ....................... 45
        3.5.4  Injury and ill health .............................. 46

4  Deriving risk criteria ......................................... 47
   4.1  Introduction .............................................. 47
   4.2  Fundamental principles .................................... 47
        4.2.1  Utility ............................................ 48
        4.2.2  Equity ............................................. 49
        4.2.3  Technology ......................................... 49
        4.2.4  An alternative principle ........................... 50
   4.3  Deductive methods ......................................... 50
        4.3.1  Expert judgment .................................... 51
        4.3.2  Bootstrapping ...................................... 51
        4.3.3  Formal analysis .................................... 52
   4.4  Specific approaches ....................................... 57
        4.4.1  ALARP .............................................. 57
        4.4.2  ALARA .............................................. 64
        4.4.3  GAMAB .............................................. 66
        4.4.4  MEM ................................................ 68
        4.4.5  The precautionary principle ........................ 70

5  Concluding discussions ......................................... 75
   5.1  Are risk acceptance criteria feasible to the decision maker? 76
        5.1.1  Non-contradictory ordering of alternatives ......... 76
        5.1.2  Preciseness of recommendations ..................... 77
        5.1.3  A binary decision process .......................... 78
        5.1.4  Risk acceptance criteria simplify the decision process 78
   5.2  Do risk acceptance criteria promote good decisions? ....... 79
        5.2.1  The interpretation of probability to risk acceptance criteria 79
        5.2.2  Ethical implications of risk acceptance criteria ... 81
        5.2.3  Compliance or continuous strive for risk reduction? 83
        5.2.4  One accepts options, not risks ..................... 84
   5.3  What we really are looking for ............................ 85
        5.3.1  Overall conclusions and recommendations for further work 86

References ........................................................ 87

1
Introduction

1.1 Background
Risk is ubiquitous. Of all the risks we face in everyday life, only a selection gets to preoccupy our worried minds. Some are unconsciously undertaken, others we are willing to live with, and yet a few provoke heated demonstrations or banning. While children of the postwar period were afraid of DDT and nuclear power plants, citizens of the new millennium are concerned with nanotechnology and global warming. Other risks have always been trivialized, like those of bicycling to work or pursuing the perfect tan. Risks, in other words, differ in acceptability: through times, across people and situations. And when decisions involving risk are to be taken, risk acceptance is the measure.
The risk of swine flu has recently dominated the headlines of the daily press. At its onset this summer, people were faced with the choice of canceling a long-planned holiday. Following the development of the pandemic, a topical decision problem has been whether to get vaccinated. Although pandemics are an affair of the state, these are ultimately personal decisions about individuals' willingness to live with the risk of swine flu. In contrast stands another current decision problem: the governmental settlement on future development of oil and gas production in Lofoten, Norway. A variety of actors have expressed their opinions, disagreeing on the relative importance of state economy and worldwide energy scarcity in comparison with environment, tourism and fishery preservation. The debate has been further confounded by imprecise factual statements, like Havforskningsinstituttet's claim that between 0 and 100% of the stock of fry might be lost (Teknisk Ukeblad, 2009). How can a decision be made in this case? Part of the solution lies in the reply of the Department of the Environment, demanding that comprehensive risk analyses be performed. Risk analyses are widely used to support discussions related to industrial and societal developments. To evaluate the results of a risk analysis, a term of reference is needed.


[Figure 1.1. Risk decisions: the results of a risk analysis are compared with risk acceptance criteria (RAC) to support a decision about risk, under the influence of economy, regulatory requirements, public opinion, interest groups and other factors.]

Several laws and regulations prescribe the use of risk acceptance criteria for evaluating new or existing hazardous systems. Offering a level of comparison for the results of risk analysis, decisions are reached on the grounds that risk prospects shall not be unacceptably high. However, Norwegian authorities do not give any guidance on how to establish such criteria. In comparison, the UK Health and Safety Executive (HSE) is leading in the field, offering a consistent framework for practitioners to follow. The value of risk acceptance criteria has also been questioned, recently in a suite of papers by Terje Aven and his coworkers at the University of Stavanger. As implied in the Lofoten case and illustrated in Figure 1.1, decisions on risk involve complex opposites not determined by risk alone. A considerable influence on such discussions is the pioneering work of Fischhoff et al. (1981). While three decades have passed since they first evaluated decision methodologies for acceptable-risk problems, academic and pragmatic difficulties still remain. These are manifested in the somewhat questionable practice on the continental shelf, calling for enhanced knowledge on the fundamentals of formulating risk acceptance criteria.

1.2 Objectives
The purpose of this thesis is to discuss and create a sound basis for formulating risk acceptance criteria. From this overall goal, five lower-level objectives are deduced:
1. Give a description of the various approaches to setting risk acceptance criteria related to harm to people, and discuss their basis and applicability. Both individual and societal risk shall be covered.


2. Present and discuss the main concepts and quantities used to formulate risk acceptance criteria.
3. Give a description of approaches to setting environmental risk criteria.
4. Discuss conceptual problems related to risk acceptance criteria. This should include a discussion of the objective/subjective interpretations of probability, and also of risk.
5. Compare the use of risk acceptance criteria in two or more selected areas. These shall include the Norwegian offshore oil and gas industry and maritime transport.
Following agreement with the supervisor, tasks 3 and 5 will not be covered in the thesis.

1.3 Limitations
In pursuing the overall goal, the thematic coverage is confined to three out of five objectives. This is partly due to the limited time frame of project execution, but also because of the thorough examination urged by the central objectives. The focus is thus restricted to harm to people, and fatalities in particular. Excluding the third task is unfortunate, since environmental criteria are required, but poorly understood, in the offshore and maritime industries. Due to the distinct nature of environmental risk, the reader should beware that the findings are not directly transferable to environmental applications.

Disregarding the fifth task of performing a comparative study has pragmatic implications. Since no sectors have been explicitly examined, the findings are generic and decoupled from practical and contextual constraints. Paradoxically, this serves as an advantage as well as a limitation. On the positive side, experience transfer may be sought over a wide range of areas. On the other hand, this necessitates practical interpretations. A methodological weakness is furthermore introduced, since sector-specific considerations lie implicit in the applied literature. While UK and Dutch contributions mainly concern nuclear power and land-based process industry, the offshore oil and gas industry is by and large the center of Norwegian researchers' attention. Although a generic focus is chosen, the thesis is thus knowingly biased towards the Norwegian offshore industry.
The study is purely theoretical, as its results are derived from integration and critique of pioneering and state-of-the-art literature. Risk acceptance is a wide concept to which many theorists have contributed. During the literature selection process, emphasis has been on gaining fundamental understanding of key concepts, rather than on presenting radical ideas or advanced formulas. The reader is therefore not required to have any previous knowledge of the subject. A final limitation owes to the diversity of contributions, making it neither possible nor desirable to deduce categorical conclusions. Rather, the most important findings are the nuances and contrasts pinpointed in the discussions.

1.4 Structure
Chapter 2 is devoted to clarifying the basic concepts central to this thesis. Understanding the concepts of risk, probability and risk acceptance is a prerequisite for creating a sound basis for formulating risk acceptance criteria. The meanings, definitions and implications of these and related terms are explored, particularly aided by Kaplan & Garrick (1981) and Fischhoff et al. (1981).

Subsequently, chapter 3 addresses the second objective by elaborating the main concepts and metrics used to formulate risk acceptance criteria. First, the concepts of individual and societal risk are introduced, followed by qualitative considerations in the choice of risk metrics. The main part follows with a thorough examination of the characteristics, pitfalls and strengths of common individual and societal risk metrics. Central to the discussions on societal risk is the literature review of Ball & Floyd (1998), while the annex of NORSOK Z-013N (2001) provides useful assistance throughout the chapter. While the focus hitherto has been on fatality risk, a brief final section presents alternative metrics of risk acceptance.
In chapter 4, the first objective is pursued through the presentation of various approaches for deriving risk acceptance criteria. The fundamental principles of utility, equity and technology described in the R2P2 report of HSE (2001b) are first introduced. Subsequently, the three methods of Fischhoff et al. (1981) for solving acceptable-risk problems are presented: expert judgment, bootstrapping and formal analysis. Based on the presentation of these generic principles and methods, the specific approaches of ALARP, ALARA, GAMAB, MEM and the precautionary principle are examined. Due to its methodological prominence, most attention is devoted to the ALARP approach of HSE (1992).
Finally, chapter 5 raises a set of conceptual problems regarding the feasibility of risk acceptance criteria. The concluding discussion follows the thread of Aven and his coworkers (Aven & Vinnem, 2005; Aven et al., 2006; Aven, 2007; Abrahamsen & Aven, 2008), questioning the ability of risk acceptance criteria to provide sound decision support. First, issues of user friendliness are addressed. The meaning of probability, ethics, risk reduction and risk acceptance to various formulations of risk acceptance criteria is problematized thereafter. Although this chapter explicitly addresses the fourth objective of the thesis, the reader should note that conceptual discussions form an integral part of each previous chapter. By taking a panoramic view of these discussions, overall conclusions and recommendations for further work are finally given.

2
Clarification of concepts

Most of the terms central to this report are used in everyday life. The reader will therefore have an intuitive understanding of what risk, probability and risk acceptance mean. Unfortunately, this intuitive appeal yields inconsistencies and confusion if their understandings are taken for granted. The importance of properly defining risk concepts is stressed by Ale et al. (2009), who exemplify that many risk management frameworks fail to define the probability they are referring to. This is unfortunate, since probability can be interpreted in very distinct ways, leading risk assessments in different directions. Implicit interpretations are not only troublesome within the community of scientific risk assessment. Fischhoff et al. (1981) contemplate that misunderstandings between lay people and experts partly arise from inconsistent definitions of risk, calling for currently used definitions to be made explicit, assumptions to be clarified, and cases that push them to their limits to be identified.

Following the advice of Fischhoff and colleagues, this chapter is devoted to clarifying the focal concepts of risk, probability, safety and risk acceptance. The promise of clarification may appear ironic, as the examination shows that there is a wide range of interpretations offering different insights on the subject. This owes to what Breugel (1998) denotes a reductionist approach to risk, meaning that risk phenomena exhibit a variety of aspects that have each been studied in detail by engineers, economists, sociologists, psychologists, philosophers and so on. While some definitions are explicitly adopted for this report, other concepts are left undefined, clarifying that a problem can and should be seen from a variety of angles.

2.1 What is risk?


In a recent study of risk management frameworks, Ale et al. (2009) find the distinction between the description of risk and the risk concept unclear. By description, one typically means a defining phrase, while a concept can be understood as a cognitive unit of meaning (Wikipedia). The distinction is sought captured in the following, by first grasping the broader meaning of risk, and then presenting a selection of scientific risk definitions. Amongst these, the pioneering contribution of Kaplan & Garrick (1981) is emphasized as the definition chosen for this report.
2.1.1 The meaning of risk
The question 'what is the risk?' pops up if your computer automatically blocks the downloading of a scientific article (or a less sanctimonious feature of the web). The answer you get is neither the possibility of your computer catching a virus, nor the consequences of such an attack. Instead, a description of the sources of unrequested downloading is provided, followed by a suite of measures to take as precautions. Sociologist Garland (2003) similarly asks 'what is risk?', to which he replies that the notion has a broad range of meanings, all of which are conditioned on the risk of something to someone. What they have in common is that risk is always understood in the context of uncertainty; if a future outcome is certain to happen, one does not face a risk. Economist Holton (2004) adds a second explanatory factor, in that speaking of risk gives meaning only if someone is aware of the outcomes, i.e. that they are exposed. Although many have negative connotations to risk, the outcomes may equally well be positive. Pointing to situations as distinct as launching a new business, skydiving and initiating a romantic relationship, Holton clarifies that risk is a general concept giving meaning to all situations involving the factors of uncertainty and exposure. Owing to this, one can speak of accident, political, health and financial risk, as well as the everyday risk of missing the school bus, being surprised by bad weather or downloading computer viruses.
Risk, hazards and relativity
Although risk encompasses a wide range of phenomena, an important distinction is drawn between risk and hazard. A hazard is a source of physical damage that may be converted into actual delivery of loss or damage, but exists only as a source. Risk, on the other hand, entails the possibility of this conversion (ISO/IEC Guide 51, 1999). The same distinction holds for risk and threat, the latter being conceptually reserved for situations of intentional acts (Salter, 2008). In common usage, the notions of risk, hazards and threats are often mixed, like the computer pop-up providing a list of threats to answer 'what is the risk?', or a newspaper disclosing that a popular toy 'is a risk'. According to Garland (2003), this is a critical misconception, as risks, in contrast to hazards, never exist outside our knowledge of them. Since risk is concerned with the future, it can only be known in probabilistic terms. This view is held not only by social scientists, but also by risk analysts such as Kaplan & Garrick (1981). They acknowledge that risk is dependent on what you know and what you do not know, and is thus relative to the observer. Adams (2003) makes an important observation in that we gain knowledge of risk in different ways. While some risks are directly perceivable (car accidents), others are known through science (cholera), whereas a third group of virtual risks even escapes the agreed knowledge of scientists (global warming). The cornerstone in Adams' reasoning is that the meaning and management of risk depend on how knowledge about the future is inferred.
Interpretations of risk in science
Accepting that risk is a property of the future and distinct from hazards, most people agree that it does not exist in a physical state of today. But from here, there is substantial disagreement on whether risk is an objective feature of the world or a thought construct only. As seen in section 2.4, this is closely related to how one interprets the concept of probability. Further to the extreme, some social scientists deny that risk can be quantitative. Following the reasoning of philosopher Campbell (2005), this is an erroneous claim, as risk can at least be assigned comparative quantities. Some risks are clearly high (e.g. jumping off the Elgeseter bridge in Trondheim), while others are evidently lower (crossing the same bridge by foot). It is indeed a rightful claim that only some risks (if any) can be given precise quantities. But according to Campbell, this does not mean that a statement of low risk is any less quantitative.
Hovden (2003) summarizes four positions to risk in science:
• Rationalists see risk as a real-world phenomenon to be measured and estimated by statistics and controlled by scientific management.
• Realists interpret risk as objective threats that can be estimated independently of social processes, but may be distorted through frameworks of interpretation.
• Constructionists claim that nothing is a risk in itself. What we understand to be a risk is a product of cultural ways of seeing.
• Middle positions between realists and constructionists see risk as an objective threat that can never be seen in isolation from social and cultural processes.
Risk perception
Central in the realist and middle positions is the concept of risk perception, understood as subjective responses to hazard and risk (Breakwell, 2007). Among the factors influencing risk perception are voluntariness of exposure, immediacy of effects, personal control and catastrophic potential. Risk perceptions differ from analytical estimates of risk, as reported in the numerous studies of lay people's and risk analysts' evaluations. Amongst the pioneers of the field were Tversky & Kahneman (1974), revealing that lay inferences about probability and risk are distorted through biases of representativeness, availability and anchoring of initial guesses. Table 2.1 is extracted from the studies of Slovic (1987), illustrating that lay people perceive voluntary and personally controllable risks as lower than told by objective estimations, while involuntary risks of catastrophic potential are typically exaggerated. Such a comparison gives no meaning in the constructionist and rationalist approaches, as the former deny the existence of any objective risk, while the latter claim that there is nothing but objective risk.
Activity or technology         College students   Experts
Nuclear power                          1             20
Smoking                                3              2
Pesticides                             4              8
Motor vehicles                         5              1
Alcoholic beverages                    7              3
Surgery                               11              5
X-rays                                17              7
Electric power (non-nuclear)          19              9

Table 2.1. Differences in lay people's and experts' ordering of perceived risk. Rank 1 represents the most risky activity or technology (extracted from Slovic (1987)).

Contingency
Renn (2008) concludes that common for all epistemological positions to risk is the contingency between possible and chosen action. If the future were independent of today's activities, the term risk would give no meaning. This explains why risk can never be zero, unless we stop performing the activity in question. But then, another activity is initiated, from which yet another risk is introduced. Due to the contingent nature of our actions, accident risk means the possibility that an undesirable state of reality may occur as a result of natural events or human activities. Accepting this general conception, the next step is to explore how risk can be properly defined.

2.2 Defining risk


There is no generally agreed definition of risk. Paraphrasing Aven (2009), the variety of existing definitions can be sorted into three categories:
A. Risk is an event or a consequence.
B. Risk is a combination of probability and expected loss.
C. Risk is expressed through events/consequences and uncertainties.
A risk definition of the first category is that of Klinke & Renn (2002, p.1071):
'The possibility that human actions or events lead to consequences that harm aspects of things that human beings value.'
The definition captures many of the aspects presented in the previous section, principally that risk is related to an uncertain future and to outcomes which humans care about. The reader should note that possibility is used instead of probability, implying that the focus is on the consequences that might occur, rather than the likelihood of this happening. Considering the inverse relationship between consequence severity and frequency of occurrence, such definitions may bias discussions on risk towards catastrophic, but unrealistic, outcomes (Woodruff, 2005).
Representative of the third category is Aven (2003, p.176), defining risk as:
'Uncertainty of the performance of a system (the world), quantified by probabilities of observable quantities.'
According to this definition, risk is a quantitative means for capturing uncertainty.¹ Following Aven, it is meaningless to talk of uncertainty in the outcome of risk calculations, as risk itself is a measure of uncertainty. This is quite a radical view compared to definitions of the second category, which understand uncertainty as a measure of confidence in your presentation of risk. In this category, we find the frequently cited contribution of Kaplan & Garrick (1981, p.13), defining risk as the answer to three questions:
1. What can happen?
2. How likely is it that it will happen?
3. If it does happen, what are the consequences?
¹ Uncertainty may be defined as something not certainly and exactly known (Webster, 1978). Following Aven (2003, p.178), uncertainty is lack of knowledge about the performance of a system (the world), and observable quantities in particular. However, most practitioners speak of risk in situations where probabilities can be assigned, reserving uncertainty for situations where probabilities are undefined or uncertain (Douglas, 1985). In risk analysis, a distinction is made between two main types of uncertainty: aleatory (randomness/stochastic variation) and epistemic (scientific; due to our lack of knowledge about the world). While aleatory uncertainty is irreducible, the latter decreases with increasing knowledge (NASA, 2002).


[Figure 2.1. Risk curves: probability (log P) plotted against level of damage (log X), shown as a single curve and as a family of curves at different probability levels (e.g. P = 0.1 and P = 0.9) (adapted from Kaplan & Garrick (1981)).]

To answer the questions, Kaplan and Garrick suggest making a list as in Table 2.2. Each line, i, is a triplet of a scenario description, s_i, and the probability, p_i, and consequence measure, x_i, of that scenario. Including all imaginable scenarios, the table is the answer to the three questions and therefore the risk. Formally, risk is defined as a set of triplets:

    R = \{\langle s_i, p_i, x_i \rangle\}    (2.1)

Acknowledging uncertainty in consequence and probability estimations, the definition is further refined into:

    R = \{\langle s_i, p_i(\phi_i), p_i(x_i) \rangle\}    (2.2)

where p_i(\phi_i) and p_i(x_i) are the probability density functions for the frequency and consequence of the i-th scenario. Arranging the scenarios in order of increasing severity and damage, (2.1) and (2.2) can be plotted as a single curve or a family of curves respectively, as shown in Figure 2.1. Kaplan and Garrick stress that it is not the mean of the curve, but the curve(s) itself, which is the risk. While indicating that a curve can be reduced to a single number, this is prescribed with caution. In their opinion, a single number is not a big enough concept to communicate risk, as is often done by claims of risk being probability times consequence. Since this equates low probability/high damage scenarios with high probability/low damage scenarios, Kaplan and Garrick prefer risk to be described as probability and consequence. The latter is adopted for this report, partly because it is most common in current risk analysis practices (Rausand & Utne, 2009). More important is that risk acceptance is far more complex than a compound number of probability and consequence can tell, which is discussed in section 2.8. By illustration, the acceptability of traffic accidents and a nuclear core meltdown is hardly the same, even though the mean values of the risk curves might equate the two.
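To make the triplet idea concrete, the following Python sketch builds a miniature risk table in the sense of equation (2.1) and computes the exceedance frequencies from which a risk curve like Figure 2.1 can be drawn. The scenarios and numbers are hypothetical and carry no relation to any real system.

```python
# Hypothetical miniature risk table: triplets <s_i, p_i, x_i> of scenario
# description, annual frequency and fatalities, as in equation (2.1).
risk = [
    ("small leak, immediate ignition", 1e-2,   1),
    ("large leak, delayed ignition",   1e-4,  10),
    ("full rupture and escalation",    1e-6, 100),
]

# A risk curve plots, for each damage level x, the total frequency of
# scenarios with damage >= x (typically on log-log axes).
risk.sort(key=lambda triplet: triplet[2])          # order by severity
for i, (s, p, x) in enumerate(risk):
    exceedance = sum(p_j for _, p_j, _ in risk[i:])
    print(f"{s:32s} x = {x:3d}   freq(damage >= x) = {exceedance:.2e}")
```

Note how the exceedance frequencies, not their mean, carry the information Kaplan and Garrick insist constitutes the risk.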
Scenario   Likelihood   Consequence
s_1        p_1          x_1
s_2        p_2          x_2
...        ...          ...
s_n        p_n          x_n

Table 2.2. The risk table (adopted from Kaplan & Garrick (1981))

Aven (2003)'s conception of risk can be questioned on a similar basis, namely because a single number is believed to represent uncertainty about the future, as well as our calculations of it. With reference to Adams (2003), this may not be a problem for risks perceived directly or through known science, but it is likely to perplex virtual risks of great epistemic uncertainty. Also the consequence-oriented definition of Klinke & Renn (2002) is inadequate for our purpose, as it may distort the trade-off considerations that section 2.8 shows characterize acceptable-risk problems. Wu & Apostolakis (1990) see a major problem of non-probabilistic theories in the lack of a rational rule for combining risk information. As pointed out by Aven (2009), such definitions cannot assist in concluding whether a risk is high or low compared to other options. Kaplan and Garrick's idea of risk as a set of triplets therefore provides the basis for this report, calling for an examination of its three constitutional elements.

2.3 What can go wrong?


To Kaplan & Garrick (1981), the answer to this question is a list of all identified scenarios. For illustration, they point to a 'pipe break', noting that this scenario actually represents a whole category of pipe ruptures of various kinds and sizes. Since the notion of scenario is somewhat imprecise, hazardous event is preferred instead, following the example of Kjellén (2000).² Hazardous events are commonly restricted to initiating occurrences, meaning that they do not represent the actual damage that might follow. In this thesis, a hazardous event is conceptualized as the first significant deviation from normal operation that may lead to harm if not controlled. The risk of each event can be illustrated by the bow-tie diagram of Figure 2.2, visualizing a spectrum of possible causes and consequences of a specific scenario. This presentation format was launched within the petroleum company Shell, providing conceptual aid in identifying safety barriers for preventing critical events and mitigating their consequences (Chevreau et al., 2006).

² Some standards use the terms accidental event (NORSOK Z-013N, 2001), unwanted event (NS 5814, 2008) or initiating event. As there seem to be no generally accepted terms or definitions, the process of properly and uniquely defining identified events is likely to be clouded.

Accepting that risk is a set of triplets, the overall risk is given by the collection of bow-ties for all imaginable scenarios.


[Figure 2.2. Bow-tie diagram: hazards on the left and consequences on the right of the accidental event (AE), occurring with probability Pr(AE), with barriers on both sides (adapted from Chevreau et al. (2006)).]

Approaching risk in this manner has one fundamental weakness, namely that unidentified scenarios lead to underestimations of risk. Referring to an accident study showing that less than 60% of the scenarios were foreseen, Breugel (1998) notes that the most crucial part of risk analysis concerns the identification of accident scenarios. This difficulty is recognized by Kaplan & Garrick (1981), admitting that since the number of possible scenarios is in reality infinite, a listing of scenarios will always be incomplete. According to Kaplan and Garrick, this inherent weakness may be overcome by introducing an 'other' category of unidentified scenarios, s_{N+1}. The set of scenarios is thus logically complete, allowing one to compensate for the residual risk posed by unknown scenarios. Whether this is a satisfactory counterargument may be questioned, as research has shown that the main uncertainties in risk analysis still are related to the (in)completeness of identified events (HSE, 2003b).

2.4 How likely is it that it will happen?


The answer to this question is the probability of each hazardous event. But what is probability? And is it distinct from frequency as an expression of likelihood? In common language, probability is a number between 0 and 1, while frequency expresses the number of events per time unit, having no upper restrictions (Wikipedia). Even though the other elements of Kaplan and Garrick's definition pose difficulties, these are minor compared to the dispute the previous centuries saw over the interpretation of probability. Playing on the words of a classic film, Martz & Waller (1988) announce that 'Probability is a many splintered thing'. Some see probability as a sound mathematical theory, others think of it as the odds or a feeling associated with the outcomes of a future event, yet others understand it as an experimental process of observing the frequency of hits. While one man saw no objections to assigning a 67% probability that God exists (The Guardian, 2004), others struggle to understand how one can repeatedly lose in Yahtzee given the same odds as the opponents.

The meaning of probability can be sought from three main stands: the classical theory, the relative frequency theory and the theory of subjective probability. About two decades ago, these were under considerable scrutiny regarding the interpretation of probability in risk analysis, exemplified in the academic correspondence between Watson (1994) and Yellmann & Murray (1995). The underlying objective of these debates was whether probability, and hence risk, is an objective feature of the world, and the implications of this for risk analysis. Subsequently, researchers have concluded that what is important is not what school of thought you follow, but that the interpretation is chosen that best fits your purpose (Vatn, 1998).
2.4.1 Classical theory of probability
Up until the twentieth century, probability was by and large interpreted according to the classical theory, developed by mathematicians like Pascal and Laplace between 1654 and 1812 (Watson, 1994). Following this theory, probability is an objective property, derived from a set of equally likely possibilities and/or symmetrical features. The symmetrical properties of a die yield a 1/6 probability of throwing a 6, while drawing an ace of spades or a nine of clubs from a pack of cards each has the same probability of 1/52. Given a set of equally likely entities, \Pr(A) is inferred by counting the proportion of favorable outcomes, N_A, amongst the total set of possible outcomes, N:

    \Pr(A) = \frac{N_A}{N}    (2.3)

The probability \Pr(A) of an event is given a priori, with no need for experimentation (Papoulis, 1964). According to Watson (1994), this is a satisfactory concept in games of chance, but invalid for situations not fulfilling the assumption of uniform possibilities. As this is certainly not true for either the output or input probabilities of risk analysis, Watson rejects the classical theory as a basis for interpreting risk analysis results. Yellmann & Murray (1995) also agree that the classical theory is too narrow a viewpoint for analyzing accident risk. From a generalist perspective, Papoulis (1964) criticizes the theory for being circular, as it in its own definition makes use of the concept being defined, concluding that equally likely means equally probable. Pointing to the not so obvious equality of the possibilities of giving birth to a boy and a girl, Papoulis further accuses the classical theory of implicitly making use of the relative frequency interpretation of the following section. In conclusion, one can say the classical theory is of historical interest, but that its current use is limited to a small group of problems, of which accident risk is not one.
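For illustration only, the classical rule of equation (2.3) can be demonstrated by plain enumeration. The example below (the sum of two dice; ours, not from the cited literature) counts favorable outcomes among equally likely possibilities:

```python
# Classical probability by counting: Pr(A) = N_A / N over equally likely
# outcomes, as in equation (2.3). Example: the sum of two dice equals 7.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))    # N = 36 ordered pairs
favorable = [o for o in outcomes if sum(o) == 7]   # N_A = 6

print(f"Pr(sum = 7) = {len(favorable)}/{len(outcomes)}"
      f" = {len(favorable) / len(outcomes):.4f}")  # a priori 1/6
```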


2.4.2 The relative frequency theory

As a reaction to the classical theory of probability, the relative frequency theory was developed by von Mises around a century ago. Within this theory, the probability of A is the limit of the relative frequency with which A occurs in a long-run series of repetitions (Papoulis, 1964):

    \Pr(A) = \lim_{n \to \infty} \frac{n_A}{n}    (2.4)

Since an experiment cannot be repeated for eternity, we never know the exact probability (Watson, 1994). Nevertheless, an objective definition is given, providing a precise meaning to probability and the ability to estimate it with confidence. In cases where the classical theory applies, the relative frequency theory will approach the same \Pr(A). The theories are still fundamentally different, as the relative frequency theory deduces \Pr(6) from repeatedly throwing a die and observing the occurrence of a 6, rather than through a priori inference of symmetrical properties. Like classical theorists, disciples of the frequentist school see probability as an objective feature. The difference is that while the classical theory sees probability as given from objective properties, the relative frequency theory believes in the objectivity of measurements.
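The limit in equation (2.4) can be made concrete with a small simulation (again ours, for illustration): the relative frequency of throwing a 6 drifts towards 1/6 as the number of throws grows, while, as noted above, no finite run ever pins the probability down exactly.

```python
# Relative frequency n_A/n of throwing a 6, for growing n (equation (2.4)).
import random

random.seed(1)
n_A = 0
checkpoints = {10, 100, 10_000, 1_000_000}
for n in range(1, 1_000_001):
    n_A += random.randint(1, 6) == 6    # one throw of a fair die
    if n in checkpoints:
        print(f"n = {n:>9}   n_A/n = {n_A / n:.4f}")
# The ratio approaches 1/6 = 0.1667, but only in the unattainable limit.
```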
The obvious limitation of the relative frequency theory is that it only applies to situations for which relative frequency data exist. In its strictest form, probability statements can only be given for experiments that are infinitely repeatable under constant conditions. Watson (1994) is clear in his case, prophesying that since we cannot observe the future even once, von Mises would probably deny that risk analyses have any meaning at all. Without passing such a dystopian sentence, Martz & Waller (1988) agree that in cases of plant-specific low probability/high consequence events, the relative frequency interpretation is both practically and philosophically inappropriate. Typical events are core meltdown in a nuclear plant and the structural breakdown of an offshore platform. On the other hand, events of high probability/low consequence, like traffic accidents, may give meaning in the sense of relative frequency. But even in these cases the theory is likely to fail, since the framework conditions under which accidents occur hardly remain constant (Aven, 2007). For instance, traffic accidents are influenced by many transient factors, like vehicle technology, traffic lane designs and regulations of speed and alcohol (Elvebakk, 2007). As these are likely to change over past statistical observations, one can question whether relative frequencies offer reliable previsions of the future.

Most researchers agree that strict adherence to the relative frequency approach is unsuitable in risk analysis. Yet, many claim that the theory provides the basic meaning of probability. Amongst them are Yellmann & Murray (1995), asserting that relative frequency is the only rational basis for thinking about probability. The rationale is provided by Vaurio (1990), arguing that probability should always be given a fractional interpretation, i.e. as a ratio of hits in

the long run. In this sense, a claim of being 92.7% sure that a decision is right means that the fraction of right decisions in the long run is 0.927. Kaplan & Garrick (1981) acknowledge this fractional meaning, while stating that probability is a concept distinct from observed frequency. In their view, studying statistical frequencies is the science of handling data, while probability is the art of handling the lack of data. Owing to this, they suggest relative frequencies assigned by experiments of thought, without ever having to perform a physical experiment. As this approach, named probability of frequency, is conceptually far from the original intention of von Mises, it is not discussed further. What it does illuminate is that how probability is understood and the way it is derived are not necessarily a matter of the same thing.
2.4.3 Subjective probability
In the preface of De Finetti (1974)'s groundbreaking contribution to the theory of probability, the following thesis is blown up:

PROBABILITY DOES NOT EXIST.

The sentence captures a radically different interpretation of probability: the subjective, or Bayesian, school of thought. De Finetti postulates that probability is not endowed with some objective existence. Rather, probability is a subjective measure of degree of belief, existing only in the minds of individuals. Being subjective, it follows that different people can assign dissimilar probabilities to the same event. This does not imply that probabilities are meaningless in the opinion of De Finetti; it is perfectly meaningful for an individual to express his belief in the truth of an event with a number between 0 and 1. Neither does it mean that the rules of probability are invalid; the numbers associated with two events of different likeliness must still obey the axioms of Kolmogorov.³ What it does mean is that probability is conditioned on an individual's current state of knowledge. As new knowledge is gained, individuals update their probabilities, intuitively or formally by Bayes' theorem. This is not in pursuit of a true objective probability, but a means of strengthening one's own degree of belief.

³ Kolmogorov's axioms are a set of fundamental statements about the probability of events in a sample space. The axioms are functions, not meanings, of probability. They thus apply to all interpretations of probability, putting no requirements on the relationship between probability and real-world phenomena (see De Finetti (1974)).
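A minimal sketch of the updating just described: two individuals hold different prior degrees of belief about the same event, observe the same evidence, and both update by Bayes' theorem. All numbers are invented for illustration.

```python
# Subjective probabilities updated by Bayes' theorem. Both posteriors
# remain subjective: they depend on each individual's prior belief.
priors = {"analyst A": 0.10, "analyst B": 0.50}

p_e_given_event = 0.9       # P(evidence | event)    -- assumed likelihood
p_e_given_no_event = 0.2    # P(evidence | no event) -- assumed likelihood

for who, prior in priors.items():
    evidence = p_e_given_event * prior + p_e_given_no_event * (1 - prior)
    posterior = p_e_given_event * prior / evidence   # Bayes' theorem
    print(f"{who}: prior {prior:.2f} -> posterior {posterior:.2f}")
```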
Weather forecasting is a typical situation where subjective probabilities apply. The probability of sunny weather can neither be claimed from symmetrical properties, nor from repeated observations of the past. Instead, the meteorologist bases her previsions on professional know-how and complex analyses, constantly updating prior knowledge in search of a strengthened degree of belief. This does not mean that she cannot use weather frequency data as a source of knowledge, or express her predictions in terms of expected frequencies. Martz & Waller (1988) praise the Bayesian theory as the broadest of all approaches, allowing accommodation of experience from relative frequencies as well as symmetrical assumptions. While De Finetti (1974) finds definitions of objective probabilities useless per se, they are acknowledged as valid auxiliary devices. The reader should beware that if subjective and frequentist assumptions are combined, the resulting probability is still subjective (Vaurio, 1990).

The subjective interpretation of probability is the dominating approach amongst risk analysts of today (Rausand & Utne, 2009). This is mainly because it applies to all types of events and uncertainties, in remarkable contrast to the classical and relative frequency theories (Wu & Apostolakis, 1990). Following this line of thought, it gives meaning to talk about the probability of both structural collapse and road accidents; analysts of Norse conviction may even calculate the risk of Ragnarok. Clearly favoring a Bayesian interpretation of risk analysis, Martz & Waller (1988) list nine reasons why it is philosophically and practically superior to the relative frequency theory. Watson (1994) also prefers the subjective approach, but his conclusion differs alarmingly from that of Martz and Waller. The inescapable problem of subjective probabilities, as Watson sees it, is the provision of subjective advice that may be acceptable for personal decision making, but lacks the scientific objectivity desired for complex risk decisions. Reasoning that the subjective theory of probability is philosophically, but not politically, satisfactory, Watson concludes that the outputs of risk analysis should be advisory rather than ruling. This pinpoints the core of the debate between Watson and Yellmann & Murray (1995), i.e. what the subjectivity of probability means to the interpretation of risk analysis. For our purpose, the problem can be reframed: what does the subjectivity of probability mean to the use of risk acceptance criteria? This question reappears in chapter 5.2, after first having examined the meaning and derivation of risk acceptance criteria.

2.5 If it does happen, what are the consequences?


While positive outcomes are prominent in financial risk, accident risk is restricted to unwanted consequences (Aven, 2007). When speaking of consequences, Kaplan & Garrick (1981) thus refer to a measure of damage, x_i, for a specific scenario. This does not mean that, given its occurrence, a scenario will deterministically lead to one corresponding consequence. Figure 2.3 illustrates that there is a spectrum of possible consequences following an accidental event, varying in both severity and type. The damage should be regarded as a vector quantity rather than a single scalar, of which each element is assigned a corresponding probability:

    C = \langle x_1, x_2, \ldots, x_n ; p_1, p_2, \ldots, p_n \rangle    (2.5)


The probability of each consequence depends on many factors, of which the effectiveness of the reactive barriers indicated in Figure 2.3 is of special importance. Decisive is also the vulnerability of targets, understood as the ability of objects to withstand the effects of a hazardous event (NS 5814, 2008). A timely example of variances in vulnerability is the frequently used notion of risk groups when discussing likely consequences of the swine flu.
[Figure 2.3. Spectrum of consequences following an accidental event: the accidental event (AE) branches into consequences C_1, C_2, ..., C_n with probabilities P_1, P_2, ..., P_n.]

If only one type of consequence is considered, the general expression of (2.5) reduces to a one-dimensional probability distribution, for example over the number of fatalities. Kaplan and Garrick restrict themselves to exemplifying the consequences of loss of life and property. Consulting the large body of risk literature, one might also consider environmental and economic consequences, or specify damage to national heritage or critical infrastructure. One can further look at indirect or distal consequences, like damage to future generations, biodiversity or political and social disruption (Breakwell, 2007). The Chernobyl accident of 1986 tragically illustrates that the severest of consequences may manifest themselves decades after an event. Some consequences may be impossible to quantify, like damage to a nature reserve or loss of trust or reputation. And whether quantifiable or not, prioritizing between different types of consequences inevitably involves judgments that are ultimately moral (Douglas, 1985).
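To fix ideas, the sketch below represents the consequence vector of equation (2.5) as damage levels paired with conditional probabilities (all numbers hypothetical), reduces it to a one-dimensional fatality distribution, and computes the single expected value that, as argued in section 2.2, hides the difference between frequent small and rare catastrophic outcomes.

```python
# Hypothetical consequence spectrum of an accidental event, equation (2.5):
# damage levels x_1..x_n with conditional probabilities p_1..p_n.
fatalities = [0, 1, 10, 100]
probs = [0.90, 0.08, 0.019, 0.001]

assert abs(sum(probs) - 1.0) < 1e-9     # the spectrum is exhaustive

expected = sum(x * p for x, p in zip(fatalities, probs))
print(f"Expected fatalities given the event: {expected:.2f}")
# The expected value (0.37) says nothing about the 1-in-1000 chance of a
# hundred fatalities, which is why the full distribution matters.
```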
Consequences related to loss of life are almost exclusively considered in regulatory and industrial settings of risk acceptance criteria. In recent years, increasing attention has been devoted to environmental damage. Due to quantification difficulties and the ubiquity of environmental consequences, agreed practices remain to be seen (Skjong et al., 2007). The reader should note that although not formally included in an analyst's triplet of risk, other consequences may be important determinants of the meaning and acceptability of risk to an individual or society.

2.6 Safety
Having defined risk and its vital elements, one can rhetorically flip the coin and ask what constitutes safety. Intuitively, safety is understood as freedom from harm or the opposite of risk; the lower the risk, the greater the safety. Seeking a general conception of safety, Möller et al. (2006) discover that the concept is under-theorized. Analyzing the relationship between safety and risk, uncertainty and control, they conclude that safety is more than the antonym of risk. Of underrated importance is epistemic uncertainty, as many feel safer under near-certainty than in less certain situations involving lower risk. Another finding is the distinction between absolute and relative concepts of safety. While the former implies that the risk of harm has been eliminated, the latter means that risk has been reduced or controlled to a certain level. The philosopher Næss (1985) engaged early with this distinction, claiming that a Utopian search for absolute safety will necessarily conflict with individuals' quality of life.
Möller and his coworkers put the ethical and unattainable aspects of absolute safety aside, focusing on the different stringency of the two concepts. Suggesting that absolutely safe is the most stringent condition of safety, relatively safe is reserved for situations below this benchmark. Below the lowest level of safety that is socially acceptable, it is misleading to use the term safe. But how is such a level constructed? The answer of Möller et al. is that it is more than some opposite of an acceptable risk level; one is not necessarily safe just because the risk is acceptable. Aven (2009) follows this thread, while counter-arguing that safety is the antonym of risk if the latter is defined in a broad sense. Safe can then be defined by reference to acceptable risk, and acceptable risk can again be rephrased as acceptable safety, as in the ISO/IEC Guide 51 (1999, p.2) definition of safety: Freedom from unacceptable risk. Notwithstanding this, the guide advises against using the words safe and safety, on the grounds that they convey no useful extra information. What is more, the reasoning of Aven rests on his own definition of risk, which was presented and discarded in section 2.2. Although discussions on acceptable risk are often framed as a question of how safe is safe enough, the fuzzy notions of safe and safety are used with caution in this report.
Related to safety is the term security, restricted to harm from intentional acts like terrorism, sabotage or violence. Security is a broad concept, originally concerned with military and political threats to state sovereignty (Barry et al., 1998). Central to security is the concept of threat agents, meaning actors having the intention and capacity of inflicting damage on a vulnerable object (Salter, 2008). As security risks are less tangible and predictable than those of physical hazards, they have hitherto been given minimal attention within standards and research on risk acceptance criteria.

2.7 Risk assessment


Amongst the strengths of the risk definition of Kaplan & Garrick (1981) is its direct relevance to risk assessment. Figure 2.4 is adapted from the Norwegian standard NS 5814 (2008), illustrating the process of risk assessment within a broader framework of risk management. According to the standard, risk assessment is a complete process covering the planning and execution of risk analysis and risk evaluation. A slightly different conceptualization is found in NORSOK Z-013N (2001), while HSE (2001b) and the US Presidential/Congressional Commission on Risk Assessment and Risk Management (1997) see risk assessment as the sole process of identifying and estimating the likelihood of adverse outcomes. The latter understands risk management as the process of analyzing, selecting, implementing, and evaluating actions to reduce risk, which is in accordance with the general conception of NS 5814. In this report, the term risk analysis is reserved for the process of answering the three questions of Kaplan and Garrick, while risk assessment is understood as the broader practice of NS 5814. Due to the explanatory power of Figure 2.4, only one element is discussed in detail, namely the principal stage of risk evaluation, which forms the underlying objective of this report. NS 5814 (2008, p.6) defines risk evaluation as:
The process of comparing described or calculated risk with specified risk acceptance criteria.
Risk evaluation involves a decision on whether the risk is acceptable, followed by an assessment of the need for and feasibility of risk reduction measures. The output of risk analysis forms the input to the risk evaluation process, as illustrated in Figure 2.4. This may be the expected number of fatalities or the fatal accident rate associated with a specific solution. Although Kaplan & Garrick (1981) urge caution in the use of single-number presentations of risk, the calculation of aggregated values is implied in NORSOK Z-013N (2001). Above all, this is a practical necessity, enabling comparison and prioritizing of options against a set of risk acceptance criteria.
Figure 2.4 shows that risk acceptance criteria shall be set prior to the risk analysis, as part of the broader process of risk assessment. Unfortunately, NS 5814 provides no guidance on the establishment of such criteria. In the Norwegian offshore industry, the responsibility for defining risk acceptance criteria lies with the operator, in contrast to current practices in the Netherlands and the UK. Risk acceptance criteria are central also within the British Health and Safety Executive (HSE), although chapter 4.4 demonstrates that the practices are clearly distinctive. According to HSE (2001b), successful risk evaluation by and large depends on the formulation of risk acceptance criteria. But what is a risk acceptance criterion? And, far more intricate: what is acceptable risk?

[Figure 2.4 outlines the NS 5814 process: define framework; establish risk acceptance criteria; risk assessment comprising planning (initiating the analysis, formulating problem and objectives, organising the work, choosing methods and data sources, establishing a system description), risk analysis (identifying hazards and unwanted events, analysing causes and probability, analysing consequences, describing risk) and risk evaluation (comparison with risk acceptance criteria, identification of risk reducing measures and their effect, documentation and conclusions); followed by risk treatment.]
Figure 2.4. Risk analysis process description according to NS 5814 (2008)

2.8 Risk acceptance criteria


A criterion can be understood as (Webster, 1978):
A standard of judging; any established law, rule, principle, or fact by which a correct judgment may be formed.
In the context of accident risk, a criterion may be a term of reference, R̄, against which the theoretical risk, R, is compared, as illustrated in Figure 2.5. For comparison, R and R̄ must necessarily be expressed by the same risk metric. In principle, R̄ can be any type of reference, like a company's past performance, the best industrial benchmark or legal requirements. R may be calculated in advance of or retrospectively for a period; managing future risk or assessing past performance. In most standards the term risk acceptance criteria is specified, implying that comparison is a matter of acceptance or rejection of future risk. NS 5814 (2008, p.6) defines risk acceptance criteria as
Criteria used as basis for decisions about acceptable risk.
This is complementary to the definition of NORSOK Z-013N (2001, p.7):
Criteria that is used to express a risk level that is considered tolerable for the activity in question.
Since tolerable is not tantamount to acceptable, the definition of NS 5814 (2008) is the most adequate for our purpose.

[Figure 2.5 depicts a feedback loop: the system yields a risk R = f{Pr, C}, which is compared with the criterion R̄; if R < R̄ the risk is accepted, otherwise Pr or C is reduced and the comparison repeated.]
Figure 2.5. Principal illustration of risk acceptance criteria (adapted from Breugel (1998))
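
The principal loop of Figure 2.5 can be sketched in a few lines of Python. The halving of risk per measure and the numeric values are purely illustrative assumptions about how Pr or C might be reduced:

```python
def evaluate_risk(risk, criterion, reduction_factor=0.5, max_rounds=20):
    """Compare R with the criterion R-bar; while R >= R-bar, apply a
    risk reducing measure (here: an assumed 50 % reduction) and re-compare."""
    for _ in range(max_rounds):
        if risk < criterion:        # R < R-bar: the option is accepted
            return risk, True
        risk *= reduction_factor    # reduce Pr or C, then evaluate again
    return risk, False              # no acceptable solution found

# Example: a calculated risk of 4e-3 per year against a criterion of 1e-3
final_risk, accepted = evaluate_risk(4e-3, 1e-3)
print(final_risk, accepted)         # 0.0005 True, after three reduction rounds
```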

Risk acceptance criteria need not be quantitative (NS 5814, 2008). According to NSW (2008), it is essential that certain qualitative principles are adopted,

irrespective of the numerical value of quantitative acceptance criteria. Common qualitative criteria are:
- All avoidable risks shall be avoided
- Risks shall be reduced wherever practicable
- The effects of events shall be contained within the site boundary
- Further development shall not pose any incremental risk

2.9 Acceptable risk


NS 5814 (2008, p.6) defines acceptable risk as:
Risk which is accepted in a given context based on the current values of society and in the enterprise.
Those who seek a generally agreed definition of acceptable risk are, however, likely to be disappointed.
2.9.1 One accepts options, not risks
The mighty quintet of Fischhoff et al. (1981) ended their quest by concluding that acceptable risk is an expression with wrongful connotations that should neither be defined, nor used in isolation. Rather, it shall be employed as an adjective, describing a specific kind of decision problem denoted acceptable-risk problems. With this commandment, Fischhoff et al. (1981, p.3) underline that risk acceptability is inherently contingent on time and situation, and is hence never absolute, nor universal:
The act of adopting an option does not in and of itself mean that its attendant risk is acceptable in any absolute sense. Strictly speaking, one does not accept risks. One accepts options that entail some level of risk among their consequences.
The acceptability of an option represents a trade-off between the full set of associated risks, costs and benefits of the option. In turn, the desirability of these factors depends on the other options, values and facts examined in the decision making process. Owing to this, the most acceptable option in an acceptable-risk problem may not be the one with the least risk. This is why Fischhoff et al. find misleading interpretations like that of Kaplan & Garrick (1981), who see acceptable risk as the risk associated with the option offering the most optimal mix of risk, costs and benefits. Still, Kaplan and Garrick contribute a to-the-point formulation, namely that no risk is acceptable in isolation. Consequently, the discouraging conclusion of both stances is that no level of risk can be specified to mark the line between acceptable and unacceptable risk.

2.9.2 Acceptability is not tantamount to tolerability

The difficulty of defining acceptable risk is not only due to the inherent contingencies of acceptable-risk problems. Another challenge is the disparate terminology across countries and sectors. The UK holds an exceptional position, distinguishing between tolerable and acceptable. The distinction is stressed in HSE's report on the tolerability of risk from nuclear stations (HSE, 1992, p.2):
Tolerability does not mean acceptability. It refers to a willingness to live with a risk so as to secure certain benefits and in the confidence that it is being properly controlled. (..) For a risk to be acceptable on the other hand means that for purposes of life or work, we are prepared to take it pretty well as it is.
In Norway, the Netherlands, France and Germany, this distinction is not clearly drawn, possibly resulting in inconsistent terminology as in NORSOK Z-013N (2001). To avoid confusion, tolerability is in this report reserved for discussions on the ALARP-principle of HSE. Beyond that, risk acceptance is applied, since it is the most consistently used term in standards and literature on the subject. Quite paradoxically, the meaning of risk acceptance is properly sought in HSE's definition of tolerability. The notion of acceptable-risk problems is used when applicable, although not following the advice of Fischhoff et al. (1981) of banning the substantival notion acceptable risk. This is simply because, in the pursuit of a sound formulation of risk acceptance criteria, some faith is needed in that it makes sense to define an unacceptable level of comparable risk, as defended by NSW (2008) and HSE (2001b).
2.9.3 Factors influencing risk acceptance
From the definitions of HSE (1992) and Fischhoff et al. (1981), it can be deduced that our willingness to accept risk depends on the benefits from taking it, the extent to which it can be controlled (personally or institutionally) and the types of consequences that may follow. In his groundbreaking article Social benefit versus technological risk, Starr (1969) observes that people's willingness to accept risk is substantially greater for voluntary activities than for involuntary ones. Typical voluntary risks are those of skiing or smoking cigarettes, over which the individual also exercises personal control. Living in the vicinity of a newly built nuclear plant, on the contrary, is associated with both an external locus of control and involuntariness. Although dependent on the benefits generated from the plant, the extent to which the plant owner controls the risk and so on, the risk is less likely to be judged acceptable. This points to a later observation of Starr & Whipple (1980), that benefits and voluntariness are relative to the evaluator. The notions of personally and societally acceptable risk are introduced, implying that one shall always ask the question of acceptable risk to whom. Related are

the terms of societal and individual concerns, expressing impacts on society and on things individuals value personally. While the government sees the benefits of electricity generation and employment, these may for neighbors of the plant be minor compared to the risk of nuclear accidents. Balancing individual and governmental concerns is therefore fundamental when developing national policies on risk acceptance (NSW, 2008).
2.9.4 Risk acceptance is a social phenomenon
Douglas (1985) asserts that equity and personal freedom are moral determinants of risk acceptability. If risks, benefits and control measures are unevenly distributed, risk acceptance is likely to be low. Her arguments are rooted in the belief that risk acceptance constitutes the often neglected social dimension of risk perception. Although risk acceptance is related to the factors influencing individual perception of risk, it is by and large culturally determined through social rationality and institutional trust:
The question of acceptable standards of risk is part of the question of acceptable standards of living and acceptable standards of morality and decency, and there is no way of talking seriously about the risk aspects while evading the task of analyzing the cultural system in which the other standards are formed (Douglas, 1985, p.82).
The former Soviet Union serves as an example. The totalitarian government's concealment of the Chernobyl accident amplified institutional distrust, which in turn lowered public acceptance of nuclear risk (Breakwell, 2007). A less extreme manifestation is the differing assumptions about risk aversion in the UK and the Netherlands, which are discussed in section 3.4. Risk aversion, understood as disproportional repugnance towards multiple-fatality events, is central to risk acceptance (Ball & Floyd, 1998). The contrast between traffic accidents and a core meltdown serves as an illustration; being risk averse implies that a given number of traffic fatalities distributed over many accidents is accepted over an equal number of deaths in one nuclear accident. Risk aversion and cultural preferences vary within regions or cities, as well as between countries (Nordland, 2001).
Closing this presentation, one can conclude that risk acceptance is a complex issue, going beyond the estimation or physical magnitude of risk. As such, decisions on risk are fraught with difficulties of a rational, moral and political character. Fischhoff et al. (1981) address five complexities of acceptable-risk problems:
- Uncertainty about problem definition: Is the decision explicit? What is the hazard, the consequences and the possible outcomes? The outcome of a decision may already be determined by the ground rules.

- Difficulties in assessing the facts: How are low probabilities assessed and expert judgment applied? The treatment of factual uncertainties may prejudice the conclusion.
- Difficulties in assessing the values: How are labile values confronted or inferred? If asked to express their opinion, uncertainties are introduced as people may not be aware of their values.
- Uncertainty about the human element: What is the accuracy of laypeople's perceptions, the fallibility of experts and the rationality of decision makers? When assumptions about the behavior of experts, laypeople and decision makers go unrecognized, they can lead to bad decisions and distort the political process.
- Difficulties in assessing decision quality: How much confidence do we have in the decision making process? An approach to acceptable-risk decisions must be able to assess its own limits.
The five complexities offer valuable insights that practitioners should bear in mind when making decisions about risk. Evaluating risk by a predefined set of acceptance criteria pivots on the contradiction of seeking a rational and objective decision criterion in a matter that is utterly contextually contingent. When formulating risk acceptance criteria, it is thus paramount to recognize that their purpose is to aid practical decision making on risk. This calls for a careful examination of how criteria are expressed and the manner in which they are set, which are the topics of the following two chapters.

3 Expressing risk acceptance criteria

3.1 Introduction
For risk acceptance criteria to be operational, a means for describing risk levels is required. The importance of choosing an adequate expression of risk acceptance is stressed by Holden (1984), warning that improper metrics produce anomalous conclusions. Flipping the coin, harmonization of well-chosen decision parameters facilitates a consistent, systematic and scientific decision making process, as advocated by Skjong et al. (2007).
The fundamentals of commonly used metrics are discussed in this chapter. First, the concepts of risk metrics, individual and societal risk are introduced, followed by a briefing on aspects to consider when choosing metrics for the expression of risk acceptance criteria. Central metrics are presented thereafter, with emphasis on underlying assumptions, areas of application, strengths and fallacies.
3.1.1 Risk metrics
Risk can be expressed in multiple ways, relating to the spectrum of consequences and the format of presentation. Consulting NORSOK Z-013N (2001), risk criteria range from qualitative matrices and wordings to quantitative metrics. Due to their practical prominence, the latter are the focus of this report. Baker et al. (2007, Appendix H-1) define risk metric as:
A key performance indicator used by managers to track safety performance; to compare or benchmark safety performance against the performance of other companies or facilities; and to set goals for continuous improvement of safety performance.
The notion key performance indicator (KPI) indicates that risk metrics describe safety performance, which one is able to measure after a period has

passed. A risk metric is thus a measurable quantity, describing a consequence to the right in the bowtie model of Figure 2.3. The line between quantitative and qualitative criteria is difficult to draw. Is zero tolerance of fatalities a quantitative metric, or merely a qualitative vision? And does the grouping of consequences in a risk matrix yield qualitative criteria? Remembering the words of Campbell (2005), NORSOK Z-013N (2001) seemingly refers to criteria derived from qualitative analyses rather than qualitative criteria as such.
The Norwegian Petroleum Safety Authority (PSA, 2001) requires risk acceptance criteria for personnel safety and third-party risk from major accidents. The focus of this report is therefore on risk metrics considering fatalities as the consequential endpoint. For a discussion of occupational accident risk criteria, the reader may consult Kjellén & Sklet (1995). It should be noted that PSA also prescribes the use of environmental risk criteria. Due to the characteristic nature of risk from pollution, environmental risk metrics are excluded from this study.
3.1.2 To whom it may concern
There are broadly two ways of expressing risk to persons (HSE, 1992, p.15):
Individual risk: The risk to any particular individual, either a worker or a member of the public. A member of the public can be defined either as anybody living at a defined radius from an establishment, or somebody following a particular pattern of life.
Societal risk: The risk to society as a whole, as represented, for example, by the chance of a large accident causing a defined number of deaths or injuries. More broadly, societal risk can be represented as a detriment, viz the product of the total amount of damage caused by a major accident and the probability of this happening during some specified period of time.
The distinction between the notions, and the call for considering both, is best explained by an illustrative example. Ball & Floyd (1998) pinpoint that at a particular point along a route for transport of dangerous goods, the individual risk may be very low. However, the chance of an accident somewhere along the route may be significant, posing a great aggregate risk to society. Conversely, in cases where the societal risk is low, e.g. on a sparsely manned installation, there might still be individuals experiencing undue levels of risk. Both individual and societal risk metrics are therefore necessary to provide an adequate description of the risk posed by a particular system. According to HSE (1992), it is furthermore essential to be clear as to whom a figure of risk applies. For instance, it is meaningless to calculate a national average risk of being killed while skydiving. Therefore, in order of descending voluntariness and

involvement, the notions of employees, third and fourth parties are key metadata to individual risk in particular (Pasman & Vrijling, 2003). Similarly, residential, sensitive and transient populations are relevant constructs when putting a figure to societal risk (HSE, 2009). It should be noted that individual and societal risk are distinct from, and only partly related to, the notions of individually and socially acceptable levels of risk introduced by Starr & Whipple (1980).

3.2 Aspects in the choice of risk metrics


3.2.1 Generic requirements according to NORSOK Z-013
In the annex of NORSOK Z-013N (2001), generic guidelines for choosing adequate risk criteria are provided. These are relevant for all industries. Perhaps disappointingly, the guidelines concern the qualities of risk metrics as such, rather than how quantities are to be established. On the positive side, they imply a conscious approach to the limitations and communication of risk metrics (Vinnem, 2007). According to the standard, a risk metric should, as far as possible, satisfy the following four qualities:
- Be suitable for decision support. The most important property of risk acceptance criteria is their ability to provide input to decisions regarding risk reducing measures. They have to express the effect of such measures, preferably in a precise manner, and associated with particular features of the activity in question.
- Be suitable for communication. Risk acceptance criteria and results from risk analysis shall be easy to understand and interpret for non-experts, such as operational management or the public at large. Criteria expressing a societal dimension often fulfill this requirement, as comparison with other activities in society is enabled. However, one must be aware that criteria that appear easy to understand may represent an oversimplification if the decision problem is very complicated.
- Be unambiguous in their formulation. This implies a high level of precision, explicit system limits defining what situations or areas the criteria are valid for, and a conscious approach to averaging of risk over time, areas and groups of people. Returning to PSA (2001), methods for averaging risk shall be used to ensure that the acceptance criteria for the personnel as a whole and for exposed groups of personnel complement each other.
- Be concept independent. The criteria shall not favor any particular concept solution, explicitly, nor implicitly through the way risk is expressed.
NORSOK Z-013N (2001) emphasizes that risk metrics are fraught with varying degrees of uncertainty, depending on the consequential endpoint and the level

of precision. Since uncertainty increases with the required level of detail, uncertainty considerations might contradict a criterion's score on the requirement of suitability for decision making. The standard presupposes that risk metrics reflect the approach to risk analysis and are consistent with previous use within the company.
3.2.2 Pragmatic considerations
Still consulting NORSOK Z-013N (2001), one reads that the intended use and decision context shall be considered when choosing risk acceptance criteria. Pragmatic considerations relating to life cycle phase, systems or activities strongly influence the feasibility of risk metrics. Whether the acceptance criteria facilitate decision making on risk reducing measures or enable comparison of overall risk levels, their applicability changes across situations. For example, the different contexts of deciding on detailed design solutions in the engineering phase, and of broadly comparing two alternative field developments in an early concept study phase, constrain the analysis and evaluation of risk differently.
Leaving the realm of the offshore industry, it is tempting to generalize factors of pragmatic importance. Rather than focusing on the practical use of risk acceptance criteria, HSE (1992) is concerned with the subjects of interest: the hazard and those exposed to harm. According to HSE, criteria shall be chosen based on a characterization of the hazard in question, the nature of harm (whether fatalities are prompt or delayed) and the characteristics of the populations at risk. Holden (1984) similarly calls for an adequate description of the particular risk patterns. Such a description may be simple or complex, but shall capture both the totality and the distribution of risk. The observant reader may notice that the totality of risk prescribes the use of societal risk metrics, whilst the distribution of risk is best captured by individual risk metrics.
3.2.3 Past and future observations
Risk metrics are often derived from historical data, based on averages of previous periods and assuming constant trends into the future (Vinnem, 2007). Such an approach is justifiable when the purpose is to monitor trends in risk levels, but runs into philosophical difficulties when projecting future levels of acceptable risk. In the literature, it is seldom specified whether one is to use the predicted number of occurrences (a parameter) or the historically measured number of occurrences (an estimate) in the expression of risk acceptance criteria. This is problematic, because the two quantities rely on different assumptions regarding future exposure and contextual premises. Remembering the fallacies of frequency-based approaches, both risk levels and associated benefits are destined to change along with the population at risk. Consequently, it is questionable whether the past is a legitimate predictor of future risks and their

acceptability. The latter issue is addressed by Fischhoff et al. (1981), and is discussed under the topic of bootstrapping in chapter 4.3.

3.3 Individual risk metrics


Individual risk metrics express the probability that an individual gets killed or injured per some appropriate measure of exposure, for instance year, km traveled or work hours. If the severity of consequences is high, fatalities are often considered at the expense of risk to health and injuries (Skjong et al., 2007). According to Marszal (2001), individual risk metrics are the most common measure of risk, providing useful information for purposes of facility siting and regulatory oversight. Individual risk metrics may also answer an individual wondering what the risk is to him and his family (HSE, 1992). Since assessing risk to an actual individual (fully taking account of the circumstances in which exposure arises) is a cumbersome task, the average risk of one or more hypothetical persons is usually calculated. This is an assumed individual with some fixed relation to the hazard, to which actual persons can compare their own patterns of exposure. Not only does such an approach allow risk to be meaningfully assessed independently of the people actually exposed, one can also take into account that exposure is seldom uniform (HSE, 1992). Although individual risk metrics are favored by many, they possess fundamental weaknesses. Most conspicuous is the inherent limitation that they do not address the whole risk picture, that is, the totality of risk as expressed by societal risk metrics (Evans & Verlander, 1997; Holden, 1984; HSE, 2009).
The main expressions of individual risk are individual risk per annum (IRPA) and localized individual risk per annum (LIRA). Sometimes the abbreviation IR (individual risk) is used, without explicitly defining the type under consideration. As the two notions rest on distinct assumptions and convey different meanings, there is a call for clarifying their distinctiveness and how both can be meaningfully utilized.
3.3.1 Individual risk per annum - IRPA
NORSOK Z-013N (2001, p.44) defines individual risk as:
IR is the probability that a specific individual (for example the most exposed individual in the population) should suffer a fatal accident during the period over which the averaging is carried out (usually a 12 month period).
When considering a period of 12 months, the metric is denoted individual risk per annum (IRPA). Since calculations usually are carried out for a hypothetical person rather than a specific individual, IRPA is sometimes referred to as average individual risk (AIR).

The risk of performing an activity

IRPA expresses the risk you as an individual bear when performing an activity, whether it is crossing the street or the superior task of living. The numeric value of IRPA is derived from a range of assumptions, like age, sex, workplace or leisure activities. For e.g. a worker, the total IRPA is calculated by summing the IRPA for each work-related activity, a_n (Rausand & Utne, 2009). A conceptual formula for IRPA is:

IRPA_a = accident frequency × Pr(performing a) × Pr(dies | performing a)    (3.1)
The formula conveys that it is not sufficient to consider the frequency of accidents, since exposure and susceptibility are decisive factors. IRPA is therefore suited for expressing risk to particularly exposed individuals or groups, like workers or users of a product or service (Skjong et al., 2007). Because the effects of operational changes or risk reducing measures can be explicitly reflected in IRPA, the criterion aids decision making on actions affecting the safety of individuals (NORSOK Z-013N, 2001). In the UK, IRPA is utilized when expressing acceptance of first- and third-party risk related to plant activities. Typical quantities were recently reviewed by HSE (2009), advising IRPA to be less than 1 in 1000, or 10^-3, for workers in the UK nuclear power sector. Moving across the North Sea, Vinnem (2007) reports a similar criterion of 10^-3 used by some Norwegian oil and gas offshore operators, while commenting that this is a very lax limit. By comparison, the IRPA of death by lightning is recognized to be 10^-6 (HSE, 1992).
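
As a minimal illustration of (3.1), the sketch below sums hypothetical activity contributions for one worker and compares the total with the 10^-3 criterion cited above; the activities and all numbers are invented for the example:

```python
# Hypothetical work-related activities for one worker, following eq. (3.1):
# IRPA_a = accident frequency x Pr(performing a) x Pr(dies | performing a)
activities = [
    # (name, accidents per year, fraction of time performing a, Pr(fatality))
    ("crane operations",   0.5, 0.10, 2e-3),
    ("helicopter transit", 1.2, 0.01, 5e-2),
    ("routine inspection", 2.0, 0.20, 1e-4),
]

total_irpa = sum(freq * p_perform * p_die
                 for _, freq, p_perform, p_die in activities)
print(f"Total IRPA: {total_irpa:.1e}")              # 7.4e-04
print("Meets the 1e-3 criterion:", total_irpa < 1e-3)
```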
Averaging over people, exposure and consequences
NORSOK Z-013N (2001) evaluates IRPA as relatively simple for non-experts to understand and use in comparisons, and as less concept dependent than common societal indicators. The former claim is contested by consulting the literature on risk perception, which reveals the difficulties of laypeople in grasping risk expressed by small probabilities (Slovic, 1987). A more critical weakness stressed by NORSOK Z-013N (2001) is the ambiguity following from the difficulty of defining precise exposure. There is relatively high uncertainty tied to such calculations, also because the entire accident sequence needs to be quantified. The latter concerns, however, numeric values resulting from quantitative risk analyses and not simple frequency-based approximations. Still, this does not leave frequency-based approaches free of charge. Reproducing the objections of Holden (1984), IRPA can be mathematically reduced by increasing the number of people over which risk is averaged. The individual risk appears lowered, although more people are exposed to a level of risk that remains as before. Another concern

is whether it is ethically sound to set criteria averaged over differently exposed persons or periods, which is further discussed in section 5.2.
Not only does IRPA apply average figures over people and periods; averaging is also performed over spectra of consequences. Comparison with an averaged number containing multiple fatalities will, according to Holden, facilitate anomalous conclusions, since individual risk statistics are mostly made up of many single-fatality accidents. Although three decades have passed since Holden voiced his concerns, present-day researchers are still occupied with this fundamental deficiency. Amongst them are Jongejan et al. (2009), worrying that individual criteria cannot prevent single accidents from killing a large number of people.
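
Holden's objection is easily reproduced numerically. With a hypothetical, fixed expected number of fatalities, the averaged IRPA falls as the exposed population grows, although the total risk is untouched:

```python
# Averaging dilutes IRPA: total risk (PLL) is constant, only the divisor grows.
expected_fatalities = 0.1   # hypothetical PLL of an installation, per year

for population in (10, 100, 1000):
    average_irpa = expected_fatalities / population
    print(f"{population:>5} persons exposed -> average IRPA = {average_irpa:.0e}")
# 10 -> 1e-02, 100 -> 1e-03, 1000 -> 1e-04: a 1e-3 criterion can be "met"
# simply by averaging the same total risk over more people.
```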
3.3.2 Localized individual risk - LIRA
The localized individual risk (LIRA) can be defined as (Jongejan, 2008, p.3):
The annual probability that an unprotected, permanently present individual dies due to an accident at a hazardous site.
In contrast to IRPA, which is dependent on the characteristics of the actual or hypothetical individuals in question, LIRA is a property of the location.
LIRA is a property of the location
LIRA may rightfully be referred to as geographic rather than individual risk (Marszal, 2001). Rausand & Utne (2009) find strong assumptions underlying this metric, notably that a hypothetical person is residing at a particular location 24 hours a day, throughout a whole year. LIRA considers only major accident risk in the vicinity of one or more hazardous installations, ignoring the multiplicity of other risks faced by an individual. Nevertheless, such simplifying assumptions do not leave LIRA simple to calculate. Figure 3.1 illustrates that complex factors must be accounted for, like wind directions, topography and dose-response relationships. Sophisticated computer tools have been developed for this purpose, for instance the software program PHAST launched by DNV (2008).
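
A heavily simplified sketch of the logic behind Figure 3.1 is given below. Real LIRA computations model discharge, dispersion and dose-response in detail; here each release scenario is reduced to three hypothetical factors:

```python
# Simplified LIRA at one map location: sum over release scenarios of
# frequency x Pr(dispersion reaches the point) x Pr(death | exposed there).
scenarios = [
    # (release frequency per year, Pr(reaches point), Pr(fatality | exposed))
    (1e-4, 0.25, 0.8),    # hypothetical large toxic release
    (1e-3, 0.25, 0.05),   # hypothetical small flammable release
]

lira = sum(freq * p_reach * p_die for freq, p_reach, p_die in scenarios)
print(f"LIRA at this point: {lira:.2e} per year")              # 3.25e-05
print("Acceptable for residential housing (LIRA < 1e-6):", lira < 1e-6)  # False
```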
Land use planning
Since a person is always assumed present, LIRA does not change even if no one is at the spot when an accident occurs (Bottelberghs, 2000). Due to its location-specific properties, LIRA is almost exclusively used in land use planning, regarding the siting of hazardous plants in residential areas and vice versa. Helpful in this regard is the use of iso-risk contour maps, displaying lines that connect locations with the same value of LIRA, as illustrated in Figure 3.2 (Pasman

[Figure 3.1 combines three elements: initial discharge (process failures, material properties, storage and process conditions), dispersion (topography, weather, wind directions, liquid droplet thermodynamics) and effect assessment (flammable, explosive, toxic).]
Figure 3.1. Calculating LIRA

& Vrijling, 2003). Common practice in the UK and the Netherlands is the use of safety distances, prohibiting accommodation of vulnerable objects within certain contours. Typically, zones for residential housing are set at iso-risk contours with LIRA lower than 10^-6 (Bottelberghs, 2000). In the UK, LIRA is coupled with the concept of dangerous dose, advising against homes being built if the probability of receiving a chemical dose leading to severe distress is greater than 10^-5 per year (HSE, 1992). Since dangerous dose is an intricate concept stemming from the discipline of toxicology rather than that of risk analysis, it is not examined further.
Evaluating LIRA
As NORSOK Z-013N (2001) seemingly treats IRPA and LIRA under the same notion of IR, the strengths and weaknesses of LIRA may be assumed similar to those of IRPA. Such a conclusion should not be drawn without reconsidering the distinguishing features of LIRA in light of the NORSOK Z-013N (2001) requirements. Being a localized risk metric, it scores relatively high on the aspect of precision in decision making support, as it by definition is concerned with particular areas of an installation or site. LIRA may be unambiguously defined with clear system limits, owing to the stringent assumptions and its inherent focus on physical boundaries. Whether ambiguities are introduced through averaging is pragmatically conditioned on the particular risk, the area and its inhabitants. The unrealistic assumption of a person spending a whole
year constantly at a particular point yields little confidence in the risk level a person actually faces. Nevertheless, the metric is relatively easy to grasp for non-experts, owing to the simplified assumptions and the visual aid of iso-risk contours on maps.

Figure 3.2. Risk contours on a local map

3.4 Societal risk metrics


Societal risk is a fuzzy concept. Reviewing the evolution of societal risk criteria, Ball & Floyd (1998) begin by acknowledging that there exists no overly prescriptive definition of the term. Rather, its precise meaning seems rooted in an individual's professional background. Seen through the concretizing eyes of an engineer, societal risk is simply the relationship between accident frequency and the number of people suffering harm. Social scientists tend to favor a broader view, for instance by incorporating socio-political responses. A difficulty with societal risk is hence the term itself. For clarification, Ball and Floyd suggest three categories of societal risk:
- Collective risks, covering non-accidental exposure to harmful materials
- Societal risks, concerning single accidents with the potential of causing multiple fatalities
- Societal concerns, associated with the overall impacts of particular technologies
Societal risk is a subset of societal concerns (HSE, 2001b). However, societal concerns may also be triggered by accidents with one or no fatalities, depending on the characteristics of the event and the technology in question. A typical example is the Three Mile Island incident of 1979. Although no people
were harmed in the event, public skepticism against nuclear power, divergent expert opinions and unsuccessful risk communication nourished a major outcry (Breakwell, 2007).
An alternative notion is group risk, describing the risk to a group of persons, for example workers or travelers (Rausand & Utne, 2009). This notion is in line with Ball and Floyd's second interpretation of societal risk, which is the one employed in the following.
According to Marszal (2001), societal risk criteria are well suited for regulatory approval and high-level management oversight of process plants. As pointed out by Skjong et al. (2007), they help ensure that risks imposed on society by large technological projects are adequately controlled. However, following the reasoning of Ball & Floyd (1998), the usefulness of societal risk metrics is not universally accepted, due to difficulties in defining acceptable levels and methodological problems in generating the necessary data. In a consultancy report provided for HSE (2009), it is stated that although societal risk is not a novel concept, its explicit incorporation into decisions on land use planning and on-site safety measures in the UK is new. There are still issues to be dealt with, like the incremental build-up of populations and the redistribution of costs resulting from the altered balancing point between safety and development. One might also argue that an individual is rather concerned with individual risk (Ball & Floyd, 1998). Taking the viewpoint of an individual, she probably has none but morbid interests in the number of people dying with her in an accident. Still, there are few disputes over the principal argument of HSE (1992), that from a societal point of view, decisions should be based on the totality of risk borne by society as a whole.
Commonly used societal metrics are FN-curves, the fatal accident rate (FAR) and the potential loss of life (PLL). Due to the above-mentioned disparities of societal risk interpretations, the various metrics convey different meanings, not necessarily consistent with those of Ball & Floyd (1998). What is more, they are all fraught with assumptions revealing unforeseen implications when used in practical decision making. A possible explanation is found in the TOR-report of HSE (1992), admitting that the calculation of societal risk is a complex process. Individuals extend over different generations, are geographically dispersed, and accidental releases come in a wide range of magnitudes. Although the values of individual and societal risk are linked, their precise relationship depends on numerous factors. Most important is the number of people at risk, accompanied by hazard characteristics and different fatality probabilities across activities or locations (Ball & Floyd, 1998). As in the calculation of individual metrics, societal risk metrics apply average figures since these factors vary with time.

3.4.1 FN-curves
An FN-curve is basically a plot showing the frequency of events killing N or more people, as shown in Figure 3.3. It can be used for presenting accident statistics and risk predictions, as well as for drawing criterion lines for acceptable levels of societal risk. Mathematically, it is derived from the commonly used expression of risk as a product of frequency and consequence, denoted number of fatalities per year. Perhaps counter-intuitively, the fatalities are not integers, since they are probabilistically generated (Ball & Floyd, 1998). Graphically, the curve is presented by taking the double logarithm of this expression, due to the wide range of possible values of high-consequence/low-probability risk (Evans & Verlander, 1997).
There is a distinction between FN- and fN-curves that should be clarified. While the former expresses the cumulative frequency of N or more fatalities, an fN-curve represents the frequency of accidents having exactly N fatalities. As fN-curves are not very informative and rarely used for expressing risk acceptance criteria (HSE, 2009), their relationship with FN-curves is not examined further. For a thorough discussion of the subject, the reader may consult Evans & Verlander (1997). Due to their numerical distinction, and since NORSOK Z-013N (2001) uses the notion of fN when obviously speaking of cumulative probabilities, there is a call for standardization of terminology to prevent erroneous criteria being drawn.
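
The numerical relationship between the two curves, F(N) being the sum of f(n) over all n ≥ N, can be made explicit with a few hypothetical fN-frequencies:

```python
# Building a cumulative FN-curve from a non-cumulative fN-curve:
# F(N) = sum of f(n) over all accident sizes n >= N.
f = {1: 1e-2, 5: 1e-3, 20: 1e-4, 100: 1e-5}   # hypothetical f(n), per year

for N in sorted(f):
    F_N = sum(freq for n, freq in f.items() if n >= N)
    print(f"F({N:>3}) = {F_N:.2e} accidents/year with {N} or more fatalities")
# F(1) = 1.11e-02, F(5) = 1.11e-03, F(20) = 1.10e-04, F(100) = 1.00e-05
```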
Risk aversion in FN-criterion lines
Formulating risk acceptance criteria, a factor α is introduced to express risk aversion:

R = F × N^α    (3.2)

Taking the log-log of the expression yields:

log R = log F + α log N    (3.3)

For a constant R, −α constitutes the slope of the criterion line, as illustrated in Figure 3.4. Additionally, an anchor point (a fixed pair of consequence and frequency) is needed to describe the crossing of the y-axis (Skjong et al., 2007).
The literature review of Ball & Floyd (1998) proves that deciding on α is a disputed task. In the UK, HSE prescribes a risk-neutral slope of -1, in contrast to the Dutch government's favoring of a risk-averse slope of -2. The rationale is that people are believed to be more than proportionally affected by the number of fatalities, leaving the acceptable frequency of an accident killing 100 people a hundred times lower than that of one killing 10 people, rather than ten times lower as under the neutral slope. Compressing a complicated discussion, a neutral approach is preferable, as one otherwise introduces hidden weighting factors making the decision making process opaque. This is mainly because the greater the aversion factor, the stricter the criterion and hence the regulation will be, ruling out ex ante the potential benefits of a proposal. Approaching the problem differently, Linnerooth-Bayer (1993) sees the great problem not in which aversion factor to use, but in how the public's aversive concerns are addressed.

Figure 3.3. FN-curve for road, rail and air transport 1967-2002 (adapted from HSE (2003b)); axes: number of fatalities, N, against accidents per year with N or more fatalities
What is wrong with FN-criterion lines?
A numerical example of an FN criterion is provided in HSE (2001b), combining the slope of -1 with a fixed tolerability point of a yearly frequency of 0.0002 for single accidents killing more than 50 people. The FN-curve is applauded as a helpful tool if there are societal concerns about multiple fatalities occurring in one single event. The technique is also judged useful for comparing man-made accident profiles with natural disaster risks deemed tolerable by society. This claim is contested in HSE (2009) and Skjong et al. (2007). In summary, their objections concern the inability of FN-curves to allow meaningful comparison, telling nothing about the relative exposure and hazard characteristics of different sectors.

Figure 3.4. Risk aversion in FN-criterion lines: criterion lines with slopes -1 (neutral) and -2 (risk averse) in the FN-diagram

Nevertheless, the band of researchers agrees that FN-criteria are an informative means of complementing individual risk metrics. Evaluating
are an informative means of complimenting individual risk metrics. Evaluating
the technique in light of their most important requirement, NORSOK Z-013N
(2001) argue that it might confuse rather than aid decision makers if the limit is
exceeded in one area but otherwise below, as illustrated in Figure 3.5. Consulting the contemporary work of HSE (2009), this deciency is left unanswered
still. Because calculating full FN-curves is a resource intensive task requiring
in-depth mathematical analysis of all potential major accident scenarios, HSE
has recently launched more sophisticated methods for eciently aggregating
societal risk, named quickFN and SRI.
Figure 3.5. Case where the predicted risk exceeds the FN-criterion line in one area, while otherwise below (calculated risk curve plotted against the risk acceptance criterion line)
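
The evaluation problem of Figure 3.5 can be sketched as a point-wise comparison against a criterion line, here anchored at the HSE (2001b) example values (slope -1, frequency 2×10^-4 per year at N = 50); the calculated FN points are hypothetical:

```python
# Checking calculated FN points against a criterion line of slope -1,
# anchored at F = 2e-4 per year for N = 50 (the HSE (2001b) example).
ANCHOR_N, ANCHOR_F, SLOPE = 50, 2e-4, -1.0

def criterion(N):
    # Straight line in the log-log diagram through the anchor point
    return ANCHOR_F * (N / ANCHOR_N) ** SLOPE

calculated = {1: 5e-3, 10: 9e-4, 50: 1e-4, 100: 2e-4}   # hypothetical points

for N, F in calculated.items():
    verdict = "OK" if F <= criterion(N) else "EXCEEDED"
    print(f"N={N:>3}: F={F:.1e}, limit={criterion(N):.1e} -> {verdict}")
# Only N = 100 is exceeded here -- precisely the partial exceedance that
# NORSOK Z-013N warns may confuse decision makers.
```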

Amongst the most cited antagonists of FN-criterion lines are Evans & Verlander (1997), raising two main objections. First, the criterion is accused of prescribing unreasonable decisions, as a result of concentrating on just one extreme feature of a statistical distribution. Secondly, they fault the technique for being illogical in a decision-theoretic sense, providing inconsistent recommendations if an identical risk picture is presented in different ways. Hereby discarding the use of FN-curves for deciding on acceptable risk, the authors alternatively suggest the method of expected disutility, of which PLL is a simple variant.
3.4.2 Potential loss of life - PLL
NORSOK Z-013N (2001, p.41) defines potential loss of life (PLL) as:
The PLL value is the statistically expected number of fatalities within a specified population during a specified period of time.
For example, the average number of Norwegian persons killed in road accidents during the last five years was 242 (SSB, 2009), providing an estimate of the PLL within the same population in 2009. Similarly, Table 3.1 illustrates PLL estimates for selected Norwegian industries.
Conceptual links between PLL and IRPA, LIRA and FN-curves
PLL can be computed by summing the products of all fN-pairs in a non-cumulative fN-curve (Pasman & Vrijling, 2003):

PLL = \sum_N f_N \times N    (3.4)

Idealistically assuming that all n people in a specified population are exposed to the same individual risk, there is also a link between the values of PLL and the IRPA of a certain activity (Rausand & Utne, 2009):

PLL = n \times IRPA    (3.5)

Combining population density with iso-risk contours, PLL provides valuable information above that of single LIRA-based metrics. After all, what is the point in reducing local risk if no people are ever present?
Industry                        Number of fatalities per year
Agriculture                     9.4
Transport and communication     7.2
Construction                    6.4
Health and social services      1.6
Extraction                      1.4

Table 3.1. Estimated PLL of selected industries in Norway, based on the average number of fatalities in 2004-2008 (Source: Arbeidstilsynet (2009))
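
Both conceptual links, (3.4) and (3.5), can be checked in a few lines; the fN-pairs, population size and IRPA below are hypothetical:

```python
# PLL from a non-cumulative fN-curve, eq. (3.4): PLL = sum of f(N) * N
f = {1: 1e-2, 5: 1e-3, 20: 1e-4, 100: 1e-5}   # hypothetical f(N), per year
pll_from_fn = sum(freq * n for n, freq in f.items())
print(f"PLL from fN-pairs: {pll_from_fn:.3f} fatalities/year")       # 0.018

# PLL from individual risk, eq. (3.5), assuming all n persons share one IRPA
n_persons, irpa = 200, 1e-4
print(f"PLL from n x IRPA: {n_persons * irpa:.3f} fatalities/year")  # 0.020
```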

PLL is a summary measure

PLL is sometimes referred to as the expectation value (EV), as it is a risk integral representing the expected number of fatalities in one overall number (HSE, 2009). Evans & Verlander (1997) label this a measure of disutility, capturing that the expected value increases with the level of harm. Based on the principle of minimizing disutility, such metrics are claimed to provide more consistent decisions than FN-criterion lines.
Multiplied by the value of a statistical life, PLL is commonly utilized in cost-benefit analyses. Since PLL allows risk to be expressed on a uniform basis, it is particularly suited for deciding on risk reduction measures in ALARP demonstrations (Marszal, 2001). However, it is not common practice to set overall acceptance limits by PLL, as reported in the study of Vinnem (2007) on Norwegian offshore practices. A plausible explanation is that PLL does not take exposure into account, neither in terms of the number of people, nor the hours exposed. This complicates comparison between activities, biasing decisions towards sparsely manned concepts or activities. Being a summary measure, PLL loses important information about risk to individuals or a small group of people (HSE, 2009). Furthermore, the metric does not differentiate between multiple accidents killing few people, and few accidents killing a multitude of people. PLL is thus incapable of expressing societal risk in the sense of Ball & Floyd (1998).
Both the strengths and weaknesses of PLL owe to the simplified calculation of an absolute level of fatalities. On the one hand, the metric is suitable for decision contexts and tools requiring an overall risk number, in addition to the advantage of being easy to grasp for non-experts. On the other hand, PLL is not suited for capturing differences between different groups of people, or for comparing activities with differing manning levels or exposure peaks (NORSOK Z-013N, 2001).
3.4.3 Fatal accident rate - FAR
Adopting the definition of Rausand & Utne (2009, p.58), the fatal accident rate (FAR) is:
The statistically expected number of fatalities resulting from accidents per 10^8 exposed hours
There are different variants of FAR in the offshore industry (NORSOK Z-013N, 2001):
- Group-FAR, expressing risk to a group with uniform risk exposure
- Area-FAR, mapping risk in a physically bounded area
- Overall FAR, averaged over all positions on a specific installation

Since the main difference concerns how averaging of risk is performed, the variants are not discussed separately. It is, however, crucial to be aware of the applied averaging in pragmatic evaluations of FAR.
Accounting for exposure
As FAR is expressed per time unit, one cannot add the contributions to FAR from different activities, unless exposures are assumed equal or weighted relative to each other (Vinnem, 2007). The element of exposed hours shall also suit the system under consideration. For example, FAR is tailored to the civil aviation industry by specifying the number of fatalities per 100 000 hours of flight (Rausand & Utne, 2009).
Typical FAR-values lie in the range of 1-30, making them fairly easy to grasp for non-experts compared to risk metrics of very low probabilities (Rausand & Utne, 2009). Vinnem (2007) reports offshore criteria of FAR = 20 for the most exposed groups, and FAR = 10 for the total installation work force. Requiring a FAR-value of less than 10 basically means that no more than ten fatalities are acceptable during the lifework of approximately 1400 persons (Rausand & Utne, 2009). If the exposure time, t_i, for each person is known, FAR can be derived from PLL:

FAR = \frac{PLL}{\sum t_i} \times 10^8    (3.6)
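
Equation (3.6) translates directly into code; the PLL, crew size and exposure hours below are hypothetical assumptions for an imagined installation:

```python
# FAR from PLL and exposure, eq. (3.6): FAR = PLL / (sum of t_i) * 1e8
pll = 0.02                 # hypothetical expected fatalities per year
persons = 200              # hypothetical installation work force
hours_each = 8760 / 2      # assumed exposed hours per person per year

far = pll / (persons * hours_each) * 1e8
print(f"FAR = {far:.1f} fatalities per 1e8 exposed hours")   # FAR = 2.3
```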
Meaningful comparison
In contrast to PLL, FAR enables meaningful comparison across different solutions by taking exposure into account. Indeed, NORSOK Z-013N (2001) states that FAR is the most convenient of all metrics in this matter. Due to their situation-specific focus and limited averaging, group- and area-FAR are suited for decisions on risk reduction. As such, confined FAR metrics may describe both the totality of risk and distributional issues. However, being conceptually linked to PLL and IRPA, FAR does not distinguish between small- and large-scale accidents. Puritan followers of HSE (1992) might thus accuse the metric of expressing upscaled individual risk rather than societal risk. This problem was recognized early by Holden (1984), claiming that like most statistics-based metrics, FAR essentially expresses average individual risk. For this reason, FAR should not be used in isolation when multiple-fatality accidents are possible.

3.5 Other
The focus has hitherto been on metrics considering fatalities as the consequential endpoint. There are several other expressions of risk, of which the reader should at least have elementary knowledge.

3.5.1 Risk matrix

A risk matrix is a graphical presentation of risk, representing possible combinations of consequence and frequency categories, as shown in Figure 3.6. The categories can be either quantitatively or qualitatively expressed, and may include consequences to personnel, the environment and/or assets (Pasman & Vrijling, 2003). Due to the analyst's freedom in choosing consequence categories, risk matrices can capture both individual and societal risk. A combination may even be possible, expressing the most serious consequences by multiple fatalities and the lower end of the scale by IRPA.
[Figure 3.6 spans frequency categories from very infrequent to very frequent and consequence categories from small to catastrophic, divided into three zones: unacceptable; reduce risks as low as reasonably practicable; acceptable.]

Figure 3.6. Risk matrix (adapted from Rausand & Utne (2009))

Acceptability indications in risk matrices

Although risk matrices, like FN-curves, represent pairs of frequency and consequence, there is an important distinction in that risk matrices express probability distributions, not cumulative frequencies (Skjong et al., 2007). The cells in a risk matrix tell nothing about the chance of having a certain number of fatalities or more. Instead, the severity of risk posed by different combinations of frequency and consequence is expressed. Different risk levels are usually indicated by three colored zones, from the most severe, red-colored cases in the upper left corner, to the least serious, marked with green in the lower right part. Although Skjong et al. (2007) see matrix acceptability indications as a hindrance to holistic considerations of risk, the zones are commonly used to evaluate hazardous events. In Figure 3.6, the upper and lower regions represent unacceptable and acceptable risk, whilst an intermediate zone demands evaluation of further risk reduction (NORSOK Z-013N, 2001). Such an approach is
often seen with explicit or implicit reference to the ALARP-principle, which is examined in chapter 4.4.
Conceptual strengths and deficiencies
Risk matrices enable relative ranking of risks, for prioritizing risk reduction measures or examining the need for detailed analyses (Woodruff, 2005). To prioritize between events, a quantitative measure can be assigned in the form of a risk priority number (RPN), expressing the seriousness of each cell (Rausand & Utne, 2009).
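
A minimal sketch of such a ranking follows; the category-to-score mapping and the events are hypothetical, and not the particular RPN scheme of Rausand & Utne (2009):

```python
# Relative ranking of hazardous events by a risk priority number (RPN),
# here taken as the product of two hypothetical category scores.
FREQ = {"very infrequent": 1, "infrequent": 2, "fairly frequent": 3,
        "frequent": 4, "very frequent": 5}
CONS = {"small": 1, "medium": 2, "large": 3, "very large": 4, "catastrophic": 5}

events = [("gas leak", "infrequent", "large"),
          ("dropped object", "frequent", "medium"),
          ("blowout", "very infrequent", "catastrophic")]

for name, freq, cons in sorted(events, key=lambda e: -FREQ[e[1]] * CONS[e[2]]):
    print(f"{name}: RPN = {FREQ[freq] * CONS[cons]}")
# dropped object: 8, gas leak: 6, blowout: 5 -- highest priority first
```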
As risk matrices allow risk to be qualitatively expressed, they provide a
unique tool when full quantitative analyses are impractical (Pasman & Vrijling,
2003). Owing to this, the approach is adequate for formulating acceptance
criteria for temporary phases (Vinnem, 2007). The categories are then broadly
defined, like "small" (consequences) and "frequent" (occurrences). NORSOK
Z-013N (2001) remarks that the coarseness of each category determines the
level of precision and whether risk reduction is reflected. Broad categorization
yields few uncertainties, as one almost certainly will end up in the right cell.
Simple risk matrices are also easy to communicate to non-experts.
Perhaps the greatest limitation of risk matrices is that the totality of risk
is concealed when a risk picture is split into many contributions (Rausand &
Utne, 2009). Even if each hazardous event poses an insignificant, green-colored
risk, the risk from the totality of scenarios may be painted red. Therefore,
an evaluation of the overall risk picture should always follow single-scenario
assessments. A final concern is that risk matrices often are tailored to a specific
study, using relative consequence patterns and worst- or best-case assumptions.
This calls for careful consideration of the corresponding frequency classes, and
awareness of risk matrices' limited suitability for comparison across activities
(NORSOK Z-013N, 2001).
3.5.2 Loss of main safety functions
PSA (2001) requires acceptance criteria to be set for the loss of main safety
functions. According to NORSOK Z-013N (2001), this refers to the frequency of
accidental events leading to impairment of main safety functions, e.g. escape
ways and control room functions. Ensuring that the platform design does not
imply undue levels of risk, loss of main safety functions is a design-related
criterion suited for decision making on technical measures.
Vinnem (2007) interprets loss of main safety functions as an indirect expression
of risk to personnel. According to NORSOK Z-013N (2001), such metrics
advantageously provide less uncertain risk estimates than direct expressions of
personnel risk, since the endpoint of calculation lies earlier in the event sequence.
Loss of main safety functions is unsuitable for comparing risk with
other systems, as it is developed exclusively for offshore application.


3.5.3 Safety integrity level (SIL)


Safety integrity level (SIL) describes the amount of risk reduction an electrical/electronic/programmable electronic system provides (Marszal, 2001). A SIL criterion, i.e. the required risk reduction, is in Figure 3.7 conceptualized as
the difference between the risk prior to a safety instrumented system (SIS) and
the tolerable level of risk. IEC 61508 (1998, p. 31) defines safety integrity as:

    The probability of a safety related system/SIS satisfactorily performing
    the required functions under all stated conditions within a specified
    period of time

Safety integrity is split into four discrete levels, from SIL 4 to SIL 1. The
levels are distinguished by maximum tolerable failure frequency and the range
of risk reduction required. Each SIL is quantitatively expressed by a probability
of failure on demand (PFD) and a risk reduction factor, derived from PFD⁻¹.
To claim achievement of a specific SIL, qualitative requirements must also be
adhered to (IEC 61508, 1998).
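To illustrate how PFD and the risk reduction factor (RRF) relate, using the familiar low-demand bands of IEC 61508 (the worked number is our own):

    RRF = 1 / PFD

A SIS with an average PFD of 5·10⁻⁴ thus provides an RRF of 2000, placing it in the SIL 3 band (PFD between 10⁻⁴ and 10⁻³, RRF between 1000 and 10 000); correspondingly, SIL 1 spans PFD 10⁻² to 10⁻¹, SIL 2 spans 10⁻³ to 10⁻², and SIL 4 spans 10⁻⁵ to 10⁻⁴.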

Process risk, case 2


SIL 4

Process risk, case 3

SIL 2

SIL 3

Process risk, case 4


SIL 1

Required risk (frequency) reduction

Increasing risk (increasing frequency)

Process risk, case 1

Acceptable risk

Figure 3.7. Required risk reduction in terms of SIL (adapted from Marszal (2001))

SIL is, like loss of main safety functions, a technical criterion that is suited
for decisions on technical measures related to safety instrumented systems.
According to IEC 61508 (1998), SILs are functional lower-level requirements
that shall comply with overall risk acceptance criteria. For this purpose, a layer
of protection analysis (LOPA) is useful, a semi-quantitative method
for determining SIS performance requirements and evaluating the adequacy of
protection layers (BP, 2006).
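A sketch of the LOPA arithmetic, with purely illustrative frequencies, layer PFDs and tolerable target (none of the numbers come from the sources above): the required SIS PFD is the tolerable outcome frequency divided by the initiating event frequency times the product of the PFDs of the other, independent protection layers.

    # LOPA sketch: derive the required SIS PFD (and hence a SIL target)
    # from the gap between mitigated and tolerable event frequency.
    # All numbers are illustrative assumptions.
    initiating_frequency = 1e-1      # initiating events per year
    other_layer_pfds = [1e-1, 1e-2]  # e.g. relief valve, operator response
    tolerable_frequency = 1e-5       # tolerable outcome frequency per year

    mitigated = initiating_frequency
    for pfd in other_layer_pfds:
        mitigated *= pfd             # frequency after non-SIS layers: 1e-4/yr

    required_sis_pfd = tolerable_frequency / mitigated  # 1e-1 -> SIL 1 band
    print(f"Required SIS PFD: {required_sis_pfd:.0e}")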


3.5.4 Injury and ill health


In some circumstances, metrics related to injury or ill health offer the most
appropriate description of risk to persons. This applies if the frequency of accidents
resulting in injuries surpasses that of fatalities (Skjong et al., 2007), or for accidents
whose effects are non-fatal or delayed, like toxic or radioactive releases
(HSE, 1992). Moreover, injury measures advantageously reflect variations in
susceptibility (NSW, 2008). Whether to use injury or fatality metrics is not only
a pragmatic but also a moral problem. It is not a given that a fatality represents a
more severe consequence than a permanent disabling injury. One can imagine
the position of a severely hurt person experiencing considerable loss of life
quality, wishing she had been struck just a little harder by the accident. Even
from a societal view, the economic costs of a permanently disabled
person might be of similar proportions to those of a lost life (Skjong et al.,
2007).
Criteria for injuries and ill health may be expressed as acceptable levels
of surrogate endpoints causing injury or death, like heat radiation (kW/m²) or
received concentration of a toxic chemical (NSW, 2008). The ultimate effect
depends on the duration and mode of exposure, as well as the nature of the toxic
material. A different approach is suggested in EMS (2001), attaching relative
weight factors of 0.01 and 0.1 to minor and major injuries respectively. Within
this method, it is possible to combine injuries and fatalities into a single risk
criterion, as illustrated below.
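With the EMS (2001) weights, such a combined criterion might be written as follows; the aggregation into a single number of "equivalent fatalities" is our illustrative reading of the method, not a formula quoted from it:

    N_eq = N_fatalities + 0.1 · N_major injuries + 0.01 · N_minor injuries

An installation expecting 0.02 fatalities, 0.1 major injuries and 1 minor injury per year would thus score N_eq = 0.02 + 0.01 + 0.01 = 0.04 equivalent fatalities per year, to be compared against a single acceptance limit.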
Alternatively, health criteria can be expressed by quality-adjusted life
years (QUALY). QUALY is obtained by multiplying expected life-years with
a weight factor reflecting quality of life, ranging from 0, corresponding to death,
to 1, reflecting full health. QUALY uniquely reflects the number of life-years lost,
beyond the binary question of survival or death (Johannesson et al., 1996).
However, using QUALY as a risk acceptance criterion appears difficult. Rather,
it is utilized in cost-utility analyses in terms of net cost per QUALY gained,
deriving conditional levels of acceptable risk (Lind, 2002a).
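A worked illustration with assumed numbers: a treatment giving a patient ten additional life-years at a quality weight of 0.7 yields 10 · 0.7 = 7 QUALYs. If the treatment costs 1.4 million NOK net, the cost-utility ratio is 1.4·10⁶ / 7 = 200 000 NOK per QUALY gained, a figure to be compared against whatever threshold the decision maker applies.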

4
Deriving risk criteria

4.1 Introduction
Formulating risk acceptance criteria is not a straightforward task. Not only can
the creator choose between a variety of risk metrics, there is also a spectrum
of principles and methods for deciding on the specific risk level. According
to Nordland (1999), each approach attempts to rationally determine objective
levels of acceptable risk. However, since it is difficult (if not impossible) to
calculate acceptable risk levels objectively, different societies have developed
distinct approaches. The establishment of risk acceptance criteria is therefore
strongly determined by historical, legal and political contexts (Hartford, 2009;
Ale, 2005).
In this chapter, the basis and applicability of various approaches to setting
risk acceptance criteria are discussed. A distinction is drawn between fundamental
principles, deductive methods and specific approaches. Fundamental
principles represent ethical lines of reasoning, while deductive methods describe
how criteria are derived. Specific approaches cover the applied reasoning in
different regimes, based on combinations of fundamental principles and
deductive methods.

4.2 Fundamental principles


Utility, equity and technology are the three pure criteria for judging risk acceptability
(HSE, 2001b). These are of a principal, i.e. ethical, nature, to be used
alone or combined as building blocks in the creation of practical approaches.
Yet, HSE (2001b) admits that a universally accepted application of any of them
on its own is still waiting. Since each offers only a single line of reasoning in
a complex world of risks, their practical and ethical implications are unavoidably
contested. This is especially the case with utility- and equity-based criteria,
which offer contrary views on distributional issues. The different principles are
illustrated in Figure 4.1.

[Figure 4.1. Three principal lines of reasoning: equity (all risks must be kept below an upper limit), utility (risk acceptability is the balancing of costs and benefits), and technology (risk must be as low as that of a reference system).]

4.2.1 Utility
Utilitarian ethics, philosophically rooted in the thoughts of Bentham and Mill, is
based on the presumption that one shall maximize the good and minimize what
is bad for society as a whole (Hovden, 1998). When deciding on the introduction
of a new technology, this implies the search for an optimum balance between
its totality of benefits and its negative consequences or costs. In the allocation of
risk reduction expenditures, a balance between the costs and benefits related to
a certain measure is sought (HSE, 2001b).
A central utilitarian assumption is that one shall look at the overall balance
for society, rather than the balances experienced by individuals. Utility ethics
therefore provides a powerful line of argument in legitimizing technological
risk to society. The consequence is that some of its members might suffer on
behalf of the society as a whole, as protested by Fischhoff (1994). HSE
(2001b) also recognizes this inherent deficiency, warning that a strict application of
utilitarian thinking imposes no upper bounds on acceptable risk, as only those
risks deemed cost-effective to reduce are reduced. This pinpoints both the weakness
and the merit of utility-based criteria. Unconditional levels of acceptable risk cannot be
set (allowing unfair distribution of risk), since one always has to consider the
totality of goods and bads of a proposal (ensuring that the good is maximized).
4.2.2 Equity
The ethics of equity has its origin in the moral reasoning of Rawls, stating that
one shall prefer the society one would choose even if unaware of one's own position
in it. Maximizing the minimum, priority is given to the least advantaged (Hovden, 1998).
The premise of equity-based criteria is that all individuals have unconditional
rights to a certain level of protection. Conversely, this yields a level of risk that
is acceptable for no member of society, encouraging standards and the
fixation of upper-limit tolerability criteria (HSE, 2001b). Owing to this, the use
of absolute risk acceptance criteria has its origin in the ethics of equity.
The claim that each member has an equal right not to experience high risk
stands in great contrast to the utilitarian principle. Amongst those favoring
equity-based reasoning is Fischhoff (1994), stressing that a technology must
provide acceptable consequences for everyone affected by it. Fischhoff proposes
that a risk should be considered acceptable only if its benefits outweigh the
risks for every member of society. One can question whether this is pure
equity-based reasoning, or if maximizing individual benefits comprises utilitarian
elements on a personal level. However, its ethical core is still equity, as one
looks at the distribution of individual risks and benefits rather than the overall
balance.
Although the ethics of equity is intuitively appealing, it leads to ineffective application
of technology and risk reduction measures if carried to its extreme
(Hovden, 1998). Equity-based criteria also promote considerations of unrealistic
worst-case scenarios, distorting decisions through systematic overestimations
of risk.
4.2.3 Technology
The principle of technology assumes that an acceptable level of risk is attained
by using state-of-the-art technology and control measures (HSE, 2001b). Risk
acceptance criteria are set by comparison with systems following good practice.
An example is the notion of adequate safety in the EU machinery directive
(EU, 2006), requiring new machines to be at least as safe as comparable devices
already on the market. However, what constitutes a comparable technology is
disputed, as is further discussed in section 4.4.3. Another manifestation is the
concept of minimum SIL employed in the Norwegian offshore oil and gas sector,
assisting in the establishment of SIL requirements based on well-proven design
solutions (OLF 070, 2004).


Unlike the principles of equity and utility, technology-based criteria do not
reflect any explicit ethical tenet. Rather, they implicitly assume that a technology
is ethically justified if present levels of risk are preserved. This represents
both an ethical and a technological shortcoming, as one is not necessarily
provided with an understanding of how things ought to be (Breugel, 1998).
Few incentives are provided for developing more efficient solutions, as noted
early by Starr (1969). The principle is further criticized for ignoring the
balance between costs and benefits (HSE, 2001b), possibly favoring expensive,
safe technology over slightly less safe, but inexpensive developments that people
or organizations can actually afford.
The technology principle strongly resembles the method of bootstrapping,
presented in section 4.3.2. A delicate distinction is that while pure technology
approaches look at de facto risk for comparable systems, the latter goes further
in assuming that previously accepted risk levels apply across systems and risks
of the future.
4.2.4 An alternative principle
Although not presented in HSE (2001b), Habermas' ethics of discourse should
finally be mentioned. The principle stands out from the ones previously discussed
by not claiming righteous truths on acceptable levels or balances of
risk. What matters are not the criteria themselves, but the democratic process
of formulation (Hovden, 1998). As long as criteria are determined through open
transactions and consensus between those affected by the risk, scientific rationality
is achieved by means of agreed formalisms (Linnerooth-Bayer, 1993).
Since the rightness of criteria is sought from democratic processes rather
than analytical approaches, the ethics of discourse is not further examined.
However, it does offer a valuable perspective, contemplating that criteria may
be judged unsuitable regardless of the analytical approach.

4.3 Deductive methods


In the pioneering work of Fischhoff et al. (1981), three methods for solving
acceptable-risk problems are examined. These social scientists are concerned
with the meta-decision problem of how to decide how to decide,
claiming that a lack of consensus on decision making methods has fostered
poorly articulated rationales and idiosyncratic applications to risk acceptability.
Whether this still is the case now that hundreds of citations have passed is a
question too great to be answered at this point. Since the methods Fischhoff and his
coworkers systematically evaluated are still in extensive use, valuable advice
is offered for the sound derivation of risk acceptance criteria. Their discussion
of expert evaluation, bootstrapping and formal analysis is represented in the
following, noting their requirements that acceptable-risk methods should be:


- Comprehensive
- Logically sound
- Practical
- Open to evaluation
- Politically acceptable
- Compatible with institutions
- Conducive to learning
4.3.1 Expert judgment
By letting the best available experts decide on what risk is acceptable, personal
experience can be integrated with professional practice and the desires of clients
or society as a whole. Although experts are involved in most decisions on
acceptable risk, what characterizes this method is judgment; professionals are
not bound by the conclusions of analysis, nor do they need to articulate their
rationale. Therefore, only the outcome of decisions is open to evaluation. A
typical situation of expert judgment is medical treatment, in which the doctor
is trusted to take decisions on behalf of the patient. Another example is
the setting of reliability standards for single components in a complex system.
Both situations represent routine decision making of relatively limited scope,
for which expert judgment has proven practical and cost-effective.
The method fails in comprehensive, irregular decisions, like whether to
go ahead with a new technology. This is partly because professionals often
lack the ability to grasp the whole problem, and partly because complex
situations call for political discussion. When controversial decisions are taken by
professionals, history has shown that they often serve as scapegoats, accused
of overemphasizing technical issues at the expense of public concerns.
4.3.2 Bootstrapping
Bootstrapping means using the levels of risk tolerated in the past as the basis
for evaluating future risks. There are two strong assumptions underlying this
approach: that a successful balance of risks and benefits was achieved in the past,
and that the future should work in the same way. The former is empirical,
while the latter is of political character. Fischhoff et al. distinguish between
four bootstrapping approaches:
- Risk compendiums compare different situations posing the same level of
risk. By example, Table 4.1 represents a selection of daily activities estimated
to increase IRPA in any year by 10⁻⁶.


- Revealed preferences are reflected in the market behavior of the public, assuming
that society has already reached an optimum balance between the
risks and benefits of any existing technology. A new technology is acceptable
if the associated risk does not exceed that of existing technologies whose
benefits to society are the same (Starr, 1969). In contrast to technology-based
criteria, benefits are explicitly considered. However, there are no considerations
of how these are distributed, since market behavior does not reflect
the cost-benefit trade-offs of individuals.
- Implied preferences are read in legal records. Identifying implicit risk-benefit
trade-offs for existing hazards, acceptability standards are set for
new technologies. The central assumption is that law and regulatory actions
represent society's best compromise between the public's needs and the current
economic and political constraints. The method is criticized for lacking
coherence and reinforcing bad practices, as laws may be context-dependent,
poorly written and hastily conceived.
- Natural standards means deriving tolerable risk limits from exposure levels
of preindustrial times. Natural exposure is typically found through geological
or archaeological studies. Unlike the other bootstrapping methods, natural
standards are independent of a particular society and are therefore suited for
global environmental risk problems.
Common to all but the latter is a strong bias towards status quo. Using past or
present risk as a reference level does not encourage future improvements. Historical
records only provide indications of accepted, i.e. implemented, technologies,
telling nothing about whether the associated risks were judged acceptable by
the public. Another deficiency is that acceptability judgments are taken without
explicitly considering alternative solutions. Even if alternatives are considered, no
guidance is provided if both fail or pass the comparison. All the methods fail in
considering cumulative risks from isolated decisions.
The advantage of bootstrapping methods is their breadth. A broad spectrum
of hazards is considered, attempting to impose consistent safety standards
throughout society. The element of comparison also provides a risk number
that is simple to grasp and easy to deduce. Conversely, the weakness of
bootstrapping methods is their lack of depth. Decision problems are improperly
defined, decision rules imprecise and the outcomes unclear and poorly justified.
4.3.3 Formal analysis
Formal analyses provide explicit recommendations on the trade-offs between
risks and benefits in acceptable-risk problems. They are intellectual technologies
for evaluating risk, based on the premise that facts and values can be effectively
and coherently organized (Fischhoff et al., 1981). Complex problems are
decomposed into simpler ones, offering a powerful tool for regulatory and administrative
institutions dealing with difficult risk issues (Vrijling et al., 2004).
Owing to this, formal analysis is superior to bootstrapping and professional
judgment in evaluating new hazardous technologies.

Activity                                              Cause of death
Smoking 1.4 cigarettes                                Cancer, heart disease
Drinking 0.5 liter of wine                            Cirrhosis of the liver
Living 2 days in New York or Boston                   Air pollution
Traveling 6 minutes by canoe                          Accident
Traveling 150 miles by car                            Accident
Flying 1000 miles by jet                              Accident
Eating 40 tablespoons of peanut butter                Liver cancer
One chest x-ray taken in a good hospital              Cancer caused by radiation
Eating 100 charcoal-broiled steaks                    Cancer from benzopyrene
Living 150 years within 20 miles of a nuclear plant   Cancer caused by radiation

Table 4.1. Risk compendium of activities estimated to increase the chance of death in any
year by 10⁻⁶ (Source: Wilson (1979), represented in Fischhoff et al. (1981))
Fischhoff and his colleagues emphasize that formal analyses can be utilized
either as methods or as aids. If interpreted as a method, it is given that anyone who
accepts its use and underlying assumptions shall follow the recommendations.
Alternatively, the recommendations can be seen as clarifying aids, addressing
issues of facts, values and uncertainties.
Even simple formal analyses require highly trained experts, transferring
acceptable-risk decisions to a technical elite. Owing to this, their success is
strongly dependent on good communication with clients and the public. The
great advantages of formal analyses are their openness and soundness, providing
logical recommendations that are open to evaluation. Their conceptual framework
helps identify and sharpen the debate around risk issues, possibly
encompassing a broad range of concerns. However, as full-blown methods are
expensive and time consuming, only the most dominant concerns are included.
And because what constitutes the most important concerns is ultimately a judgmental
question, the separation of facts and values is critical (yet utopian) in
formal analysis. Two main types of formal analysis are presented in Fischhoff
et al.: cost-benefit analysis (CBA) and decision analysis.¹ CBA has according
to Fischhoff et al. gained broader acceptance than decision analysis, due to the
claim of objective value measurement. Paradoxically, the mixing of facts and
values is especially complex in CBA, as they are implicit.
¹ Neither of these was developed for acceptable-risk problems. Both assume, e.g., a well-informed,
single decision maker or entity and immediacy of consequences. This is rarely
the case in decisions regarding complex risk issues.


Cost-benefit analysis
Cost-benefit analysis provides a quantitative evaluation of the costs and benefits
in a decision problem, expressed in a common monetary unit. Restricting itself
to consequences amenable to economic evaluation, recommendations are
produced in pursuit of economic efficiency. Grounded in economic theory, the
alternative that best fulfills the criterion of utility is recommended. Since the
utilitarian principle ignores distributional issues, Fischhoff et al. report conceptual
disagreement on whether equity considerations may be included in the
analyses.
There is no single CBA methodology. As pointed out by French et al. (2005),
there is rather a family of methods sharing the same philosophical premise of
balancing net expected benefits and costs. This balancing is an essential feature
of the ALARP approach of the following section, which explicitly integrates CBA
in a broader risk acceptability framework. What seems agreed upon is that
cost-benefit optimization provides a necessary aid for evaluating risk reduction
investments and judging the acceptability of new technological projects.
Requiring all costs and benefits to be monetarily expressed, CBA runs into
ethical and practical difficulties in assigning the cost of losing a human life
and the benefit of saving one. An area where CBA and the value of preventing
a fatality (VPF) are explicitly used is the Swedish and Norwegian road transport
authorities. In a recent study by Elvik et al. (2009), the prevention of one road
fatality is valued at approximately 20 million NOK. According to Vatn (1998),
there is no universal agreement on how to value lives. The problem can be seen
from the perspective of the individual as well as the decision maker. Hammit
(2000) reviews the theoretical foundation and empirical methods for estimating
the value of a statistical life (VSL), expressing the valuation of changes
in mortality risk across a population. VSL represents what people on average
are willing to pay for an infinitesimal mortality risk reduction. This is not to
be interpreted as the amount an individual is willing to pay to avoid certain
death for himself or an identified individual, as most people are willing
to provide unlimited resources in such a situation (Hovden, 1998). The prefix
"statistical" is thus essential, avoiding unrealistically high values attached to the
loss of human life (Vrijling et al., 1998). The VSL for each individual depends
on age, wealth, baseline mortality risk and whether consequences are acute or
delayed. In a report prepared by the Australian Safety and Compensation Council
(2008), over 200 literature studies on VSL are reviewed, revealing great differences
between various estimations, as shown in Figure 4.2. The estimated VSL
differs between countries and across sectors, ranging from mean values of 11 to
51 million NOK in health and occupational safety respectively. Although the
theories of VSL are well-established, Hammit (2000) closes his review by
calling for conceptual and methodological research on how to account for risk
characteristics other than probability and how to value risk across different populations.
For an overview of different expressions and quantities of the value of a
human life, the reader may consult Skjong et al. (2007).
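The definitional arithmetic is simple; the willingness-to-pay figure below is an assumption of ours, chosen to reproduce a VSL of the magnitude reported by Elvik et al. (2009):

    VSL = mean willingness to pay / Δp

If people are on average willing to pay 200 NOK for a reduction Δp = 10⁻⁵ in their annual mortality risk, then VSL = 200 / 10⁻⁵ = 20 million NOK.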
[Figure 4.2. Mean values of VSL estimates by country (Source: Australian Safety and Compensation Council (2008)): VSL estimates (2006) in million NOK, on a scale from 0 to 100, for Australia, Canada, Denmark, France, Japan, South Korea, Sweden, the UK and the US.]

While determining the value of a human life is in itself a controversial task,
taking into account that both costs and benefits come in time series adds more
complexity to the issue. Fischhoff et al. (1981) report that CBA is hampered
by the absence of consensus on which rate of discounting, i.e. degree of depreciation,
to assign future costs and benefits. Lind (2002b) acknowledges that
economic discounting is met with repugnance when the value of future lives and
generations is under consideration. In response, Lind proposes a risk acceptability
function demanding discounting of financial quantities only, avoiding
both what he describes as the questionable concept of the value of a human life,
and the discounting of it.
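For reference, the standard present-value formula that the disagreement concerns, with discount rates assumed purely for illustration:

    PV = Σ_{t=0}^{T} C_t / (1 + r)^t

At r = 4%, a benefit of 1 million NOK accruing 25 years from now has a present value of 10⁶/1.04²⁵ ≈ 375 000 NOK; at r = 7% it shrinks to roughly 184 000 NOK, showing how strongly the chosen rate drives the cost-benefit balance of long-lived safety measures.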
Cost-benefit analysis is further criticized by Fischhoff et al. (1981) for
wrongfully claiming value-immunity. Value preferences lie implicit in the political
choice of focusing on economic consequences, as well as in easily manipulated
market data. Refined cost-benefit functions dealing more explicitly
with value concerns have been proposed for the setting of risk acceptance criteria,
for instance by Rackwitz (2004) and Nordland (1999). These integrate compound
indicators of life quality concerns and public risk aversion respectively.


Decision analysis
Decision analysis is based on the axiomatic decision theory for making choices
under uncertainty, providing prescriptive recommendations given that its axioms
are accepted. The reader may consult e.g. Abrahamsen & Aven (2008) for
a theoretical examination of the specific axioms. At the core of decision analysis
are utilities, meaning subjective value judgments assigned to the various
attributes of a decision problem. By subjectively weighing the importance of
each attribute, consequences are evaluated relative to each other. The alternative
providing the greatest utility over all consequences is recommended. There are
several variants of decision analysis, having subjective utility functions as the
common denominator. One of these is multi-attribute utility theory, praised by
French et al. (2005) in the following presentation of the ALARP principle.
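In its simplest additive form, which is one common variant rather than the one any of the cited authors necessarily prescribe, a multi-attribute utility is computed as

    U(a) = Σ_i w_i · u_i(x_i(a))

where x_i(a) is the level of attribute i under alternative a, u_i is a single-attribute utility function scaled to [0, 1], and the weights w_i sum to one; the alternative maximizing U(a) is recommended.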
In decision analysis, the values, the choice of consequences to consider and the
probabilities are all subjective. As relative frequency data are not required, decision
analysis is suitable for considering unique as well as frequent events. Although
a frequentist interpretation is not required in CBA, Fischhoff et al. (1981)
report its prominence amongst cost-benefit analysts. In further contrast to CBA,
decision analysis enables consideration of non-economic consequences. Hence,
it has the advantage of considering whatever fact or value issues are of interest to
the decision maker. Not claiming an objective ground, the inclusion of attitudes
towards risk is also naturally accommodated in the analysis.
The quality of recommendations relies on the quality of value judgments.
Since values are often badly articulated, unconscious or contradictorily acted
upon, utility weights can be erroneously assigned. This calls for non-manipulative
methods of value elicitation and a conscious approach to risk framing. Additional
difficulties arise if multiple parties involved in societal decision making
do not agree on the relative attractiveness of alternatives. Agreement is still
more easily sought than for methods claiming value-immunity, since judgments are
explicit and out in the open (French et al., 2005). A difficult question is whether
company or regulatory decision makers are entitled to make value judgments
on behalf of the public. Somehow this must be resolved, as aggregating public
preferences is an inescapable methodological difficulty according to Fischhoff
et al. (1981). Assuming that the owner and the public have a common interest
in the success of a project, Ditlevsen (2003) is convinced that representative
decision analysis provides an upper risk acceptance limit agreeable to both
parties.


4.4 Specific approaches


4.4.1 ALARP
The ALARP principle of "as low as reasonably practicable" is the British risk
acceptability framework. Although widely recognized in Norway and other
countries, the principle is by far most institutionalized in the UK under HSE
(Vinnem et al., 2006). HSE has prepared a series of guidance documents on the
principle, with the so-called ALARP trilogy of HSE (2001a), HSE (2003a)
and HSE (2003c) providing high-level guidance on the generic framework outlined
in HSE (2001b). Sector-specific advice is found in e.g. HSE (2004), for
control of major accident hazards (COMAH).
A principal illustration of ALARP is given in Figure 4.3. This conceptualization
is named the TOR framework, introduced in the report on the tolerability
of risk from nuclear stations in HSE (1992).² The framework was subsequently
adapted for general applications in HSE (2001b). The expanding breadth of
the triangle represents increasing levels of risk. At the top we find the darkest
region of unacceptable risks, whose magnitude demands reduction regardless
of the benefits of a proposal. Only in exceptional cases, like war, may risks of
this region be retained. In contrast are the broadly acceptable risks at the bottom
of the triangle, generally regarded as insignificant or adequately controlled.
Between these outer zones is a region holding tolerable risks, which people are
willing to tolerate to secure some benefits. Unlike in the upper and lower zones,
risks in this mid-region cannot be claimed tolerable just because they happen
to fall within the limits. The crucial point is that a risk must have been reduced
to a level as low as reasonably practicable to earn this designation. What
constitutes reasonable practicability is given by the ratio between the costs and
benefits of reducing a specific risk. This necessitates evaluations on a case-by-case
basis. Technology licensing in the UK is therefore denoted safety case
regulations, reflecting the country's common-law tradition of regarding what
is not explicitly allowed as forbidden (Ale, 2005). In this legislative system, it
is the responsibility of the operator to ensure that a risk is tolerable according
to ALARP. In some cases, ALARP can be sought with rapid judgment, whilst
formal analysis is required for situations of major or complex risk.
The setting of acceptability limits in the TOR framework is guided by all
three pure criteria of section 4.2 (HSE, 2001b). While the lower regions
follow a utilitarian rationale, the upper limit is set from equity concerns. Additionally,
technology-based criteria ease the criteria setting in all three regions.
² Sometimes the abbreviation SFAIRP (so far as is reasonably practicable) is used
instead of ALARP. This term was introduced in the Health and Safety at Work Act of 1974,
but was later developed into the notion of ALARP in HSE (1992). Although
expressing the same idea, the reader should note that they are not always interchangeable,
due to the dissimilar wordings of legal proceedings (HSE, 2001a).



Risk
Unacceptable region
Risk can only be justified
under extraordinary cricumstances

Tolerable region
Risk must be reduced ALARP

Broadly acceptable region


Risk is negligible and/or
adequately controlled
Negligible risk

Figure 4.3. The ALARP-principle (adapted from HSE(1992))

In HSE (1992), the upper and lower limits of IRPA for workers in the nuclear
sector are suggested as 10⁻³ and 10⁻⁶ respectively. These numbers are by
no means universal, since the factors deciding whether a risk is unacceptable,
tolerable or broadly acceptable are dynamic in nature. Melchers (2001) warns
that tolerability changes particularly quickly when there is discontinuity in the
normal pattern of events, raising societal and political pressure for redefining
the boundaries. Tolerability regions are still spelled out in guidelines and implicitly
reflected through industrial practice. It should therefore be emphasized
that these criteria are indicators rather than rigid benchmarks, calling for flexible
interpretation through deliberation and professional common sense (HSE,
2001b).
Boundary between broadly acceptable and tolerable risk
The boundary between the broadly acceptable and tolerable regions shall, according
to HSE (1992, p. 10), be set at the point at which the risk becomes
"truly negligible in comparison with other risks that the individual or society
runs". HSE (2001b) explains that the IRPA limit of 10⁻⁶ is given by trivial activities
that can be effectively controlled or are not inherently hazardous. This
is approximately three orders of magnitude lower than the level of background
risk a person experiences in his daily environment. With reference to Fischhoff
et al. (1981), this is a bootstrapping approach, calling for careful consideration
of the moral pitfalls of preserving status quo. But, as the strength of such an
approach is its practical feasibility, one can argue that bootstrapping a lower
limit improves the manageability of ALARP analysis. Morally, this may be more
easily justified than approaching an upper criterion in the same manner, since the
former exists for utilitarian reasons whilst the latter touches equity concerns.
With further reference to Fischhoff and his coworkers, it should be stressed
that no risk is acceptable unless it provides some benefits. Owing to this, the
lower limit is necessarily conditioned on the benefits of a specific situation.
Boundary between unacceptable and tolerable risk
There are no widely applicable criteria defining the boundary between the tolerable
and unacceptable regions. The argument of HSE (2001b) is that hazards
giving rise to considerably high individual risk also invoke social concerns,
which often are a far greater determinant of risk acceptability. This is in line with
Douglas' (1985) preoccupation with the social dimension of risk perception,
understanding risk acceptability as a social construct reaching far beyond an objective
claim of physical consequences. Even though HSE (2001b) recommends
the use of individual risk criteria in most cases, a tolerability FN-criterion of 50
fatalities per accident is provided for risks giving rise to social concerns. The
suggested IRPA limit of 10⁻³ should hence be implemented with caution, also
considering that it is a very lax limit which most industries in the UK and Norway fall
well below (Vinnem, 2007). Quite paradoxically, this number is chosen exactly
because most hazardous industries pose a substantially lower risk, making an
excellent example of how bootstrapping discourages improvement. However,
as the upper limits serve only as the starting point of ALARP improvement
(Ale, 2005), this chain of thought serves as a rhetorical argument rather than a
conceptual attack on the TOR framework. A principal flaw is yet demonstrated,
in that a lax upper limit may legitimize high risk to a small group of people,
since risks falling below this limit are judged tolerable according to utility
rather than equity.
Tolerable risk
In the mid-region, bounded by the upper and lower acceptability limits, risk
must be kept as low as reasonably practicable. A beneficial activity is considered
tolerable if, and only if (HSE, 2001b):
- All hazards have been identified
- The nature and level of risk is properly addressed, based on the best available scientific evidence or advice, and the results are used to determine appropriate control measures
- The residual risks are not unduly high and are kept as low as reasonably practicable
- Risks are periodically reviewed to ensure that they still meet the ALARP criteria


These requirements offer a strategy for the choice of risk reduction measures,
as well as for the provision of conditional risk acceptance criteria. While these
seem like two sides of the same task, a clear distinction is found in the practical use
of ALARP in the UK and Norwegian offshore industries. While ALARP is the
provider of risk acceptance criteria in the UK, common Norwegian practice is
to view ALARP as a risk reducing process that is conceptually independent
of a predefined set of risk acceptance criteria (Aven & Vinnem, 2005). This
section continues its focus on the cultivated ALARP approach of HSE, while
the implications of the Norwegian conceptualization are further discussed in
chapter 5.2.
Determining that risks have been reduced ALARP involves an assessment
of the risk, of the sacrifice (in money, time and trouble) of mitigating that risk, and
a comparison of the two (HSE, 2008). Depending on the nature of the hazard,
the extent of the initial risk and the available control measures, the process
demands varying degrees of rigor. As a rule of thumb, the higher up a risk is
placed in Figure 4.3, the greater the rigor required. However, in many cases
complying with good practice is sufficient to demonstrate ALARP.
Good practice
According to HSE (2003a, p. 1), good practice is:

    The generic term for those standards for controlling risk which have
    been judged and recognized by HSE as satisfying the law when applied
    to a particular relevant case in an appropriate manner.
Good practice can be enshrined in written standards or come from unwritten
sources, given that they are recognized to satisfy the established practice of
an industrial sector. The standards are set from HSE's own experience and
judgment, international discussions and the best industrial and expert advice
of advisory committees (HSE, 1992). Good practice shall not be confused with
best practice, i.e. a standard considerably above the legal minimum. In the
clearing document "ALARP at a glance", HSE emphasizes that since
best practices are not necessarily reasonably practicable, one should not seek
their enforcement before they are recognized by HSE as representing good practice.
This advice stands in contrast to the GAMAB approach presented in section
4.4.3, and may intuitively seem to hamper improvement. Both GAMAB and the
good-practice approach to ALARP are technology-based. The latter is, however,
especially vulnerable to the fallacies of technology-based criteria if HSE fails to
keep up with changing technology and organizational practice. Another
difficulty is whether one can expect the same application for old and new
systems. Nordland (2001) argues that it is neither practicable nor reasonable to
retrospectively demand implementation of the latest safety technologies. In fact,
continuously modifying an old system may actually be more dangerous than
retaining the original technology. Since the issue of practicality is substantial to
HSE (2003a), existing installations are required to apply current good practice
only to the extent necessary to satisfy the relevant law.
Costs, benefits and disproportionality
Complex decisions should not be taken based on good practice alone. In such
cases, good practice shall be followed by a consideration of the reasonable
practicability of further risk reduction (HSE, 2001b). If there is an evident disproportion
between the costs and effectiveness of a risk reduction measure, this
may be done qualitatively through professional judgment. If the situation is less
clear-cut, for instance in high-hazard industries or when introducing a new technology,
a formal analysis is necessary. With reference to Fischhoff et al. (1981),
this may take the form of cost-benefit or decision analysis. Within HSE, the
prescribed method is CBA (HSE, 2001b). For the fundamentals of CBA, the
reader is referred to section 4.3.3 or to HSE (2008) and annex 3 of NORSOK
Z-013N (2001). Focal to this section is the feasibility of CBA in addressing
ALARP concerns. The elements of CBA are in this regard not the overall costs
and benefits of a system, but those related to the reduction of risk in the particular
system. Typical cost elements are those of installation, operation, maintenance
and productivity losses following risk reduction measures. Benefits are given as
the monetary gains of reduced risk, like the value of preventing fatalities, injuries
and environmental damage, or increased productivity. The analysis shall also
address whether the introduction of a measure transfers risk to other employees
or members of the public (HSE, 2008). Although one is analytically comparing
costs and benefits, HSE (2001b) denotes the process as a comparison of risk
against costs. Focal in this comparison is the notion of gross disproportionality,
requiring measures to be employed unless their costs are in gross disproportion
to the expected risk reduction. If several options fulfill this property, the combination
of measures providing the lowest residual risk is selected. The ALARP
criterion is determined by:
    d = costs of risk reduction / benefits of risk reduction    (4.1)

A disproportionality factor d of e.g. 3 means that for a measure to be rejected,
the costs should be more than three times larger than the benefits. In
many cases, this criterion is given by an evident point of rapidly diminishing
marginal returns (HSE, 1992). There are neither authoritative requirements on
what ratio to employ, nor a formal algorithm for which factors to take into
account (HSE, 2008). An explanation is sought in Melchers' (2001) critique
of the ALARP principle, contemplating that the critical words "low", "reasonably"
and "practicable" are all relative terms of value judgment. Different
methods for calculating d have yet been proposed by others, e.g. by Bowles
(2003).
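A worked illustration with assumed numbers: suppose a measure costs 30 million NOK and its risk reduction benefits (e.g. statistical fatalities averted, valued at the VPF) amount to 12 million NOK. Then d = 30/12 = 2.5, so under a gross disproportion criterion of 3 the measure must be implemented, whereas under a criterion of 2 it could be rejected.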


Vinnem et al. (2006) report a commonly used ratio of 6, while commenting
that an unfortunate misinterpretation amongst Norwegian offshore operators is
that a measure must be proven sufficiently beneficial in advance. As a rule of
thumb, HSE (2008) suggests a ratio of 2 for low nuclear risk to members of the
public, while a factor of 10 is provided for high risk. Figure 4.4 illustrates that
the disproportionality factor varies substantially according to where the risk is
located in the triangle. Just below the upper limit, considerable effort may be
required even for marginal risk reduction, whilst at the lowest border, expenditure
may not be justifiable at all (HSE, 1992). Since the rate of increase is
unspecified, one can speculate whether a high risk is more easily judged ALARP when split
into many small risks. What further complicates the picture is that judgments
of "gross" are also conditioned on the overall benefits of a technology, like
employment, personal convenience or general social infrastructure. The ratio
between costs and risk must therefore be evaluated in light of all circumstances
relevant to each case, especially when considering high societal risk (HSE,
2008). An important exception is that the size and financial ability of the duty
holder is not a legitimate determinant of disproportionality. The rationale is
provided in NORSOK Z-013N (2001), arguing that the economic perspective
of society must be chosen to enable a comprehensive, global optimization.
[Figure 4.4. Disproportionality in ALARP: the required disproportionality factor grows with the level of risk in the triangle, from about d = 2 near the broadly acceptable border to about d = 10 just below the unacceptable limit.]

Both proponents (HSE, 1992) and opponents of CBA in ALARP (Melchers,
2001; French et al., 2005) agree that the method provides a far from precise
calculation, and that it cannot escape the ethical difficulty of valuating and
discounting lives, as outlined in section 4.3.3. In NORSOK Z-013N (2001),
an additional constraint is highlighted: the costs of risk reduction are
deterministic, while the benefits are probabilistic and theoretical. The expected
benefits are mathematical only, and will never be realized in practice. Depending
on the occurrence of accidents, the final balance over a life cycle will thus
be either very negative or very positive. Since an installation may not economically
survive the worst-case scenario, the maximum loss should therefore be
considered in addition to the expected value.
The fallacies of cost-benefit analysis are thoroughly assessed by French
et al. (2005), comparing the suitability of CBA and multi-attribute utility theory
in ALARP evaluations. The conclusions are in favor of the latter, due to
four problems precluding current CBA applications to ALARP. These concern
the non-objective pricing of safety gains, the inconsistent valuation of group
and individual risk, the immoral discounting of trade-offs through time, and
the lacking theoretical justification and ad hoc use of disproportionality factors.
Moreover, CBA is accused of being ill-defined and implicitly subjective. Multi-attribute
utility theory is claimed to address each and all of these concerns in a
more satisfactory manner, in addition to structuring the debate between different
stakeholders in productive ways. Within this method, a disproportionality
factor is easily modeled by adjusting the relative weights of costs and benefits,
and a simple framework is offered for addressing multiple-fatality aversion. The
approach is not without drawbacks in an ALARP context, notably because there
is an explicit requirement in HSE (2001b) of comparing the monetary costs and
benefits of a risk reducing measure. Considering additionally the difficulty of
trading off very different kinds of attributes, a lack of consistency between decisions
is likely to result (French et al., 2005). Although multi-attribute utility analysis
is suggested as an alternative to CBA in HSE (1992), practical applications to
ALARP seem few in number.
ALARP in practice
HSE director Walker (2001) commends the TOR framework for offering comprehensive
decision support that has stood the practical test of time in the UK.
In the Norwegian offshore industry, however, obstacles are found in attaining
a decision making process liberated from using quantitative risk analyses as
the sole basis of documentation. According to Vinnem et al. (2006), this seemingly
owes to a widespread misinterpretation of ALARP as a general attitude
to safety, rather than a systematic, documented process to be followed up by
responsible authorities.
From the viewpoint of a plant owner, ALARP requires more effort than
adopting a set of predefined criteria, since evaluations are made on a case-by-case
basis (Aven, 2007). This is especially true if lack of good practice demands
a full cost-benefit analysis, which is extremely cost- and resource-demanding
(HSE, 1992). Strong authority involvement is also implied, evaluating whether
the search for alternatives has been sufficiently wide and whether the arguments
relating to gross disproportion are valid (Ale, 2005). On the positive side, the
pragmatic use of broadly accepted risk criteria may reduce the conflict costs of
political compromises, as proposed by Starr & Whipple (1980). The existence
of a cut-off lower acceptability limit also provides time-saving decision support
on what is safe enough.
The lack of a moral discipline
In the critical contribution of Melchers (2001), the ALARP approach is accused
of having serious moral implications. In addition to the commonly raised objections
to assigning monetary values to the benefit of risk reduction, Melchers
is concerned with the dichotomy between socio-economic matters and the
morality of risk issues. Based on the assertion that the requirements of reasonableness
and practicality lack openness, the approach is accused of excluding
public participation in tolerability decisions, and of rendering possible the cover-up
of risk information in cases of economic or political importance. This claim can be
contested by the promise of HSE (2001b) to involve all relevant stakeholders
in a transparent ALARP process. However, HSE provides little guidance on
how this is performed in practice. It can be suggested that the strength of the
TOR framework lies in its ability to capitalize on the advantages of equity-, utility-
and technology-based criteria, while there is a call for procedural inclusion of
the alternative principle of discourse.
4.4.2 ALARA
ALARA is the Dutch acceptability framework, calling for risk to be reduced as
low as reasonably achievable. The approach is conceptually similar to ALARP,
with the distinguishing feature of not considering a region of broad acceptability.
Figure 4.5 shows that broadly acceptable and tolerable risks are replaced
by a common notion of acceptable risk. Until 1993, a region of negligible
risk was part of the Dutch policy. Subsequently, it has been abandoned on the
grounds that all risks shall be reduced as long as this is reasonable (Bottelberghs,
2000). The principle was originally launched by the International Commission
on Radiological Protection in the 1970s, for managing risks for which no
no-effect threshold could be demonstrated (HSE, 2002).
What is not explicitly allowed
The distinguishing features of ALARA are addressed by Ale (2005) in a
comparative study of risk regulation in the UK and the Netherlands. Despite
ALARA's striking similarity with ALARP, their practical interpretations differ
greatly. According to Ale, this primarily owes to the distinctive legal and historical
contexts of the two countries.
[Figure 4.5. The ALARA principle: a triangle of increasing risk with only two zones, an unacceptable region at the top and an acceptable region (ALARA) below; there is no broadly acceptable region.]

In contrast to the common-law tradition in the UK, the Netherlands adheres to a
legal system of Napoleonic law, principally regarding what is not explicitly forbidden
as allowed. The criteria were until recently not legally fixed, but have now been
enshrined in Dutch law (Hartford, 2009). Consequently, demands for more safety
must be announced through stricter criteria in the law, reducing the role of
authorities to securing compliance with the minimum requirements. Whereas upper
risk criteria are the start of discussion (and not legally binding) in the UK, they are
thus the end of discussion in the Netherlands. This basically means that emphasis
is on complying with the limit rather than on the reasonable practicability of further
action. ALARA criteria are therefore more strictly anchored than the upper limits
of ALARP. Applying an aversion factor of 2, the Dutch curve is also steeper than
the UK version. But do these dissimilarities yield different levels of actual risk?
Ale (2005) concludes that the final results of spatial planning are surprisingly
similar, and that both countries are leading nations in the field of risk control.
Although the unacceptable limit of ALARP is remarkably laxer than the Dutch
version, the conditional criteria usually end up similar.
Calculations and judgments of reasonableness
Following Ale (2005), the distinction between ALARA and ALARP seemingly
owes to differing legal and political interpretations. But is there a conceptual
difference, as indicated by the last letters of the abbreviations? After all, there
is a fundamental distinction between demanding all risks to be reduced if reasonably
achievable, and setting a lower limit of safe enough. One would
intuitively expect that not having a cut-off implies a limitless search for further
risk reduction, although the requirement of reasonableness hinders an uncritical
pursuit of safety at any cost. The absence of a lower limit makes the comparison
of risk and costs a much finer balancing act than in ALARP, since the
criterion of gross disproportionality is known to lapse with decreasing levels
of risk. Reasonableness in ALARA is instead given by the point at which the
marginal costs of further reduction come to exceed the marginal benefits. Owing
to this, a higher level of precision is required from risk assessments, which is a
consequence also of the legal necessity of demonstrating adherence to an upper
limit (Ale, 2005).
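Formally, with B(x) denoting the monetized benefits and C(x) the costs of risk reduction effort x, this reading of reasonableness amounts to the standard optimality condition (our notation, not the sources'): reduce risk as long as dB/dx > dC/dx, and stop at the point where dB/dx = dC/dx.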
Ironically, the search for ALARA is seldom considered reasonable in practice.
Jongejan (2008) reports reluctance among both plant owners and local
governments to reduce risk beyond legal limits, which is partly related to a
common misinterpretation of the principle as erring on the side of safety. The
main explanation is yet found in the legal interpretation of ALARA, which considers
political judgments of reasonableness as already built into the upper risk
acceptability criteria (Hartford, 2009). Owing to this, a distinction between acceptable
and tolerable risk becomes meaningless, suppressing utilitarian concerns
as ALARA becomes more of a token statement.
4.4.3 GAMAB
GAMAB is the acronym of the French expression "Globalement au moins aussi
bon", meaning globally at least as good. The principle prescribes the level of
risk a new transportation system in France has to fall below, requiring new
systems to offer a total risk level that is globally as low as that of any existing
equivalent system (EN 50126, 1999). A recent variant of GAMAB is GAME,
rephrasing the requirement to "at least equivalent" (Trung, 2000). This criterion
applies to modified systems as well as new technologies, requiring the global
risk to be at least equivalent to that prior to the change. The conceptual distinction
between GAMAB and GAME is yet unclear. A possible interpretation is that "at
least as good" in GAMAB offers a wider interpretation of relevant factors than
"equivalent" in GAME. Since the abbreviations have the same ruling principle,
and because both are almost exclusively used in the French railway industry,
their distinctiveness is assumed irrelevant to this study.
Using existing technology as the point of reference, GAMAB is a pure technology-based
criterion. Applying this principle, the decision maker is exempted from
the task of formulating a risk acceptance criterion, as it is given by the present
level of risk. However, to make the criterion operational, what is meant by
"globally at least as good" and "equivalent system" needs to be addressed.
An ethical dilemma
The term "global" is central to GAMAB. It means considering the totality of
risk, ignoring how risk is distributed between different subsystems (Stuphorn,
2003). As long as the global level of risk is improved, GAMAB does not
voice concern if parts of the system risk have increased. By example, a new
transport system offering enhanced safety to first-class passengers may be judged
acceptable, even if the risk to people in the rear wagon has increased. As
such, the notion of global opens up for trade-offs and overcompensation of
risk (Nordland, 1999). Although technology- and equity-based criteria do not
contradict each other in principle, the GAMAB criterion shows that pragmatic
interpretations of a pure technology criterion may lead to equity violations.
Learning-oriented bootstrapping
"At least as good" means a risk level that is as low as or lower than the risk of
a comparison system. The simple criterion is then:

    Risk metric (new system) ≤ Risk metric (best existing system)    (4.2)

According to EN 50126 (1999), the GAMAB analyst is free to choose both the
approach and the metrics for comparison, e.g. collision rate or PLL. This demands
calculation of the risk posed by both systems, leading to double work if no risk
data are available on the reference system (Schäbe, 2004).
Requiring equal risk levels across systems is by Skjong et al. (2007) referred to as the principle of equivalency, and as comparison criteria in NORSOK Z-013N (2001). In neither of these is GAMAB mentioned, seemingly leaving it up to the practitioner to consider local or global risk and what system to choose for comparison. In NORSOK Z-013N (2001), it is suggested that a new solution shall not represent any increase in risk compared to current practice. This resembles the notion of good practice in simplified ALARP evaluations, trusting that generally recognized codes of practice provide satisfactory risk. The at least requirement of GAMAB goes beyond this, not only
ensuring that state of the art knowledge is taken into account, but also that further learning is encouraged. Since new systems are required to perform better
or as good as the best system on the market, GAMAB is a learning-oriented
bootstrapping approach. However, it cannot escape the fundamental weakness of
bootstrapping, namely the erroneous assumption that the current level of risk
is acceptable. This philosophical difficulty is addressed by Nordland (2001),
concluding that acceptable risk criteria shall be determined from scratch.
...or an impediment to improvement?
What is meant by an equivalent system is difficult to specify, since there might be large variations between systems providing the same service (Trung, 2000). Both a high speed train and an aircraft offer transport to commuters from Trondheim to Oslo, but the number of travelers and the speed of traveling differ greatly. One of the transportation modes may also be considerably more expensive. Are the two systems then comparable? Rausand & Utne (2009) draw parallels between GAMAB and the EU machinery directive, questioning
whether one can rightfully compare an inexpensive device to a far more expensive variant. Since cost-benefit considerations are not required in GAMAB, Trung (2000) similarly claims that unrealistic safety objectives may be generated. In this regard, one can ask if GAMAB is hindering rather than promoting improvement, rejecting alternatives on erroneous standards of reference.
4.4.4 MEM
MEM is the acronym for minimum endogenous mortality, a German principle requiring new or modified technological systems not to cause a significant increase in IRPA to any person (Schäbe, 2004). The probability of dying of natural causes is used as the reference level for risk acceptability. MEM is based on the fact that death rates vary with age, and the assumption that a portion of each death rate is caused by technological systems (Nordland, 2001). Unlike ALARP and GAMAB, MEM offers a universal quantitative risk acceptance criterion, derived from the minimum endogenous mortality rate.
Endogenous mortality
Endogenous mortality means death due to internal causes, like illness or disease (Stuphorn, 2003). In contrast, exogenous mortality is caused by the external influences of accidents. The endogenous mortality rate is the rate of deaths due to internal causes in a given population at a given time. Figure 4.6 displays the endogenous mortality of various age groups in Norway in 2007. Not unexpectedly, the maximum rate is found amongst the oldest population, whereas youngsters have the lowest rate of occurrence. Children between the ages of 5 and 15 have the minimum endogenous mortality rate, which in western countries is known to be 2 × 10^-4 per person per year on average (EN 50126, 1999). The MEM-principle requires any technological system not to impose a significant increase in risk compared to this level of reference.
The significance of increase
According to the railway standard EN 50126 (1999), a significant increase is equal to 5% of MEM. This is mathematically deduced from the assumption that there are roughly 20 types of technological systems (Trung, 2000). Amongst these are technologies of transport, energy production, chemical industries and leisure activities. Assuming that a total technological risk in the size of the minimum endogenous mortality is acceptable, the contribution from each system is confined to:
\[
R = \frac{R_m}{20} = 10^{-5}\ \text{(per person per year)} \qquad (4.3)
\]

[Figure 4.6. Endogenous mortality in Norway, 2007; mortality rate (per person per year, logarithmic axis) by age group, with the MEM level indicated (Source: Statistisk sentralbyrå)]

A single technological system thus poses an unacceptable risk if it increases IRPA by more than 5% of MEM, a level that roughly lies within the limits of natural statistical variation (Nordland, 1999). It should be emphasized that this criterion concerns the risk to any individual, not only the age group providing the reference value. According to Rausand & Utne (2009), the specific limit is ultimately for the decision maker to choose. This is mainly due to the difficulties of determining the significance of a risk increase.
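A minimal numerical sketch of the MEM reasoning is given below; the threshold values follow EN 50126 (1999), while the function and variable names are this author's illustration.

    # Minimal sketch of the MEM criterion (threshold values follow
    # EN 50126 (1999); the names are illustrative).

    MEM = 2e-4                   # minimum endogenous mortality (per person per year)
    N_SYSTEMS = 20               # assumed number of technological system types
    R_LIMIT = MEM / N_SYSTEMS    # tolerable contribution per system, cf. (4.3)

    def mem_acceptable(delta_irpa: float) -> bool:
        """A system is unacceptable if it increases IRPA by more than
        5% of MEM, i.e. 1e-5 per person per year."""
        return delta_irpa <= 0.05 * MEM

    print(R_LIMIT)               # ~1e-05 (equal to 5% of MEM)
    print(mem_acceptable(8e-6))  # True
    print(mem_acceptable(2e-5))  # False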
There are strong assumptions underlying the MEM-criterion, notably that people may be exposed to fewer or more than 20 technological systems. For each system, the acceptable IRPA is then relaxed or sharpened accordingly, as pointed out by Stuphorn (2003). Since the number of technological factors increases on a daily basis, the acceptance criteria must be periodically updated (Trung, 2000).
Implicit in these calculations is the assumption of an accident not resulting in more than 100 fatalities. This is a reasonable assumption for transportation systems, but may not hold for larger technological systems. For potential consequences beyond this number, the limit will decrease (EN 50126, 1999). Figure 4.7 visualizes a MEM-based FN-curve with a constant acceptability frequency up to 100 fatalities, decreasing with an aversion factor of -1 at the larger end of the scale. This reasoning is difficult to grasp, as it embodies societal risk in what is principally an IRPA-criterion. It is considered sufficient that the reader is aware of this presumption. A final assumption is that the common MEM-approach only considers fatality risk. For technological systems posing minor or major injury risk, modified MEM-criteria of 10^-3 and 10^-4 are proposed.
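Read off Figure 4.7, the tolerable IRPA as a function of the maximum number N of fatalities can plausibly be written in the piecewise form below; this is this author's reading of EN 50126 (1999), not a formula stated explicitly in the standard:

\[
R(N) =
\begin{cases}
10^{-5}, & N \le 100 \\[4pt]
10^{-5} \cdot \dfrac{100}{N}, & N > 100
\end{cases}
\]

The 1/N decay for N > 100 corresponds to the aversion factor of -1 mentioned above.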

[Figure 4.7. The MEM-criterion holds for accidents resulting in a maximum of 100 fatalities (adapted from EN 50126 (1999)); tolerable IRPA (constant at 10^-5) plotted against the number of fatalities]

Moral justification of MEM


In contrast to GAMAB, MEM can be apportioned to subsystems (Schäbe, 2004). Dependent on the pragmatic apportionment of risk, it may thus consider both distributional and global risk issues. Like GAMAB, the explicit notion of MEM is seldom found outside its country of origin, although similar concepts are used by many regulators. Skjong et al. (2007) describe the common approach of comparison with known hazards, in which risk criteria are set by comparing technological risks to those implicit in human activities. MEM is a subset of this broad approach, with the distinguishing feature of comparing against internal causes only. It can be suggested that MEM is a variant of the natural standards approach to bootstrapping, while the method described by Skjong et al. is more of a risk compendium or revealed preferences approach. Although both methods have the fundamental bias of assuming that the reference risk is acceptable, the rightfulness of MEM may be more easily claimed due to its natural standard of reference. But, since 2 × 10^-4 natural fatalities per year equals the death of almost 1600 German children (Nordland, 1999), it is by no means given that the technologically caused death of an equal number of children is acceptable.
4.4.5 The Precautionary principle
The precautionary principle differs from the other approaches of this chapter. Common to these is that they are all risk-based, meaning that risk management relies on the numerical assessment of probabilities and potential damages (Klinke & Renn, 2002). In contrast, the precautionary principle is a precaution-based strategy for handling uncertain or highly vulnerable situations. Klinke and Renn reason that a risk-based approach of judging numerical risks relative to each other becomes meaningless if based on very uncertain parameters. Owing to this, precaution-based approaches do not provide quantitative criteria against which risks can be compared. Risk acceptability is rather a matter of proportionality between the severity of potential consequences and the measures taken in precaution.
Intention and use
The original definition of the precautionary principle is found in principle 15 of the UN declaration from Rio in 1992 (United Nations, 1992):

Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.
Wilson et al. (2006) discuss several definitions of the principle, finding a common interpretation in that complete evidence of harm does not have to exist for preventive actions to be taken. An alternative interpretation is that absence of evidence of risk should not be taken as evidence of absence of risk (HSE, 2001b). The precautionary principle is hence a guiding philosophy when there are reasonable grounds for concern about potentially dangerous effects, but the scientific evidence is insufficient, inconclusive or uncertain (EU, 2000). DeFur & Kaszuba (2002) note two cases in which the principle is most useful, i.e. situations of present uncertainties and situations where new information will radically alter well-known situations. In the latter case, a valuable counterbalance is offered to bootstrapping methods encouraging preservation of the status quo.
The precautionary principle is an outgrowth of the increased environmentalist awareness since the 1970s, acknowledging that the scale of technological development has by far exceeded our predictive knowledge of environmental consequences (Belt, 2003). In the Rio Declaration, the principle is explicitly prescribed to the environmental field. Consulting the EU's communication on the principle (EU, 2000), its scope is claimed far wider, covering environmental, human, animal and plant effects. Common to all is the concern for long-term effects, irreversibility and the well-being of future generations. DeFur & Kaszuba (2002) report applications to the areas of food safety, persistent organic pollutants and even the prevention of worldwide computer crashes in the late 90s. Trouwborst (2007) sees the EU and generalists like DeFur and Kaszuba as fighting a lonely battle, claiming that legal instruments explicitly linking the principle to non-environmental consequences are few in number. A plausible explanation is provided by Trouwborst himself, calling attention to the often ignored distinction between the exercise of precaution as such (erring on the safe side) and the precautionary principle.

Invoking the precautionary principle


EU (2000) describes the application of the precautionary principle as a three-stage process. In the first stage, recourse to the principle is triggered, followed by a decision stage of whether or not to invoke the principle, and a concluding stage of selecting precautionary measures. The decision to act is of political character, whereas the other stages are scientific, having to comply with the general principles applicable to all risk management measures. Every application shall be considered within a transparent, structured analysis of potentially negative effects and scientific uncertainty. Owing to this, applying the precautionary principle does not mean that measures are adopted on an arbitrary or non-scientific basis. The claim of scientific rigor is questioned by Carr (2002), reasoning that the principle is ultimately a moral and political idea.
The political decision of invoking the principle appears in situations where
there is good reason to believe that serious harm might occur (even if the
likelihood is remote), and the current information makes it impossible to move to the next stages of risk assessment with sufficient confidence (HSE, 2001b).
According to the EU commission (EU, 2000, p.16), the appropriate response
is:
The result of a political decision, a function of the risk level that is
acceptable to the society on which the risk is imposed.
The quote is not chosen for its preciseness. The link between acceptable risk and precautionary measures is unclear, as the commission seemingly assumes that one already knows the acceptable level of risk. This is problematic for many reasons, notably due to the self-contradiction of requiring an unascertainable risk to be below some fixed level. Jongejan (2008) presents another objection in the bias of taking action based on risk characteristics alone, ignoring that risk is only part of the bigger picture. One can alternatively suggest that the precautionary principle yields acceptable risk by its own means, where what is acceptable is a function of the appropriate measures (not the other way round), and the appropriate measures are contingent on the case-specific benefits, consequences and scientific uncertainties.
Precautionary measures
Appropriate measures range from a total ban, via the funding of a research program, to no action at all, and must according to EU (2000) be:

• Proportional to the severity of the threat
• Non-discriminatory in their application
• Consistent with measures already taken
• Based on an examination of the potential benefits and costs of action/non-action
• Subject to review in the light of new information
• Capable of assigning responsibility for producing the scientific evidence.
The last requirement has been subject to numerous discussions on the precautionary principle. A common interpretation is that the burden of proof is
reversed, meaning that a new product or technology is deemed dangerous until
its developer can prove the opposite. However, the reversed onus of proof is
not a standard consequence of the principle. According to Trouwborst (2007),
it is a radical instrument imposing great costs on the proponents of a new
technology, reserved for potential situations of great irreversible harm. This
interpretation presupposes that it is ultimately up to society to ensure that products bring low risk. But do developers not have a genuine interest in their products being safe? The current trends of ethical awareness, warranty claims and reputation cultivation imply that carrying the onus of proof is advantageous to developers. Alternatively, Belt (2003) suggests that the polarized discussion
on proof bearing is just a proxy for the larger debates surrounding the future
of e.g. agriculture.
Debating the precautionary principle
The precautionary principle has fostered remarkable academic debate, with critique arising from two distinct camps. On one side are those concerned with the
costs of the principle (like debaters over the reversed burden of proof), while
the other camp is occupied by critics unfamiliar with its applications and underlying principles (DeFur & Kaszuba, 2002). Wilson et al. (2006) studied the
application of the principle among senior policy makers in Canada, concluding
that lack of clarity on when to act is limiting its effective use. Similarly, Belt (2003) argues that the current definitions fail to prescribe the precise conditions under which the principle is invoked and what action to employ. The practical implications are demonstrated in a recent study by Lyster & Coonan (2009), reporting
inconsistent application of the principle in Australian courts.
In a critique of EU (2000), Carr (2002) calls for a strengthening of the ethical and value-based aspects of the decision stage, as a means for justifying precautionary recourse to trade partners and the public. Following Carr, such a justification is a necessary counterweight to critics accusing the principle of stifling technological innovation. Amongst these are Balzano & Sheppard (2002), prophesying that the current formulations threaten to institutionalize excessive caution, with disastrous effects on society by leaving it susceptible to decay. The opponents judge the Rio definition inherently flawed, encouraging ineffective and costly measures in a Utopian pursuit of full scientific
certainty. Due to the discrepancy between the promise of scientific knowledge and the practical lack of it, Tannert et al. (2007) see the principle as problematic to regulatory practice. The REACH-legislation for chemicals in Europe
is provided as an example, pinpointing the missing correspondence between the
precautions taken to deal with uncertainties and the constant demand for further
analysis. Whether this is a valid accusation can yet be questioned. Renn (2008)
claims that REACH does not relate to any definitions of the precautionary principle, except for the much debated issue of burden of proof. Trouwborst (2007) recognizes that scientific certainty is a confusion-prone issue, stressing
that the precautionary principle prescribes action in spite of uncertainty, not
because of it.
Balzano & Sheppard (2002) accuse the precautionary principle for being
biased towards perception of fear and lacking the operational qualities needed
in regulatory decision making. Not only may the principle be invoked by fear,
it can also invoke it by amplifying unrealistic risk perceptions. The application
of precautionary measures must therefore be weighted against the outcomes,
whether it is anxieties or unforeseen consequences of poor action. The public
skepticism towards nanotechnology serves as an example. Consulting Phenix
& Treder (2004) at the Center Responsible for Nanotechnology, a strict implementation of the principle will give rise to the severest of risks. Not only may
no alternative solutions may be found for pressing, global problems, but the
world will also be unequipped to deal with responsible use of nanotechnology
in the future.
The precautionary principle holds both practical and conceptual strengths
and shortcomings. Neither does it provide quantitative risk acceptance criteria,
nor can it lessen the controversy of difficult decisions. Nevertheless, it does offer a valuable perspective in a discussion of what is safe enough, as many of
the greatest decisions on risk are taken under considerable uncertainty.

5 Concluding discussions

The overall objective of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Fundamental to this aim is a basic understanding of the concepts of risk and risk acceptance, which are clarified and problematized in chapter 2. In chapters 3 and 4 respectively, the problem is explicitly addressed through the sub-objectives of discussing the main concepts and quantities used to formulate risk acceptance criteria, and questioning the basis and applicability of the various approaches to setting risk acceptance criteria. An integral part of these discussions is the conceptual problems related to risk acceptance criteria, as prescribed in the fourth objective of the study. For this reason, the most valuable findings are the nuances and contrasts provided in these discussions, pinpointing fallacies and strengths of the various metrics and approaches.
Readers familiar with the subject may notice that an ongoing debate of recent years is omitted. In the academic crusades led by Aven and his coworkers at the University of Stavanger, the value of risk acceptance criteria per se is questioned.¹ The final chapter follows this thread, evaluating the meta-soundness of seeking a sound formulation of risk acceptance criteria. As the ultimate purpose of risk acceptance criteria is to aid decision making on risk, two interrelated questions are raised in these concluding discussions:

• Are risk acceptance criteria feasible to the decision maker?
• Do risk acceptance criteria promote good decisions?
¹ In their critique of risk acceptance criteria, Aven and his coworkers refer to the fixation of an upper limit of acceptable risk. Such a limit is denoted absolute probabilistic criteria by Skjong et al. (2007), and can be seen in contrast to trade-off based criteria.

5.1 Are risk acceptance criteria feasible to the decision maker?


Risk acceptance criteria are claimed to provide the rationale for evaluating
calculated risk. Such evaluations take place over a variety of problems and
contexts, ranging from settlements on introducing a new technology to local
optimization of technical solutions. Common to all is that a decision must be taken, necessitating some kind of decision criterion to arrive at a conclusion. If not, the fate of hazardous technologies could be as accidental as that of the main character in the novel The Stranger by Camus (1942); indifferently fighting a lost battle against the games of coincidence. But are risk acceptance criteria practical seen through the eyes of the decision maker? Or does he, like the character Meursault, feel weighed down by the evaluation criteria?
5.1.1 Non-contradictory ordering of alternatives
According to Douglas (1985), a rational choice presupposes non-contradictory
ordering of the relative desirability of alternatives. This coincides with the
intention of risk acceptance criteria, i.e. to provide an objective means for
ordering issues of risk. However, there are reasons to claim that risk acceptance
criteria provide inconsistent decision support. This can be suggested to work
on two levels; one that touches the fundamental deficiency of single-valued risk acceptance criteria, the other being pragmatically conditioned. Abrahamsen & Aven (2008) are concerned with the former, rejecting the use of risk acceptance criteria in isolation from other concerns. Evaluating FAR within two axiomatic theories of decision making, Abrahamsen and Aven conclude that absolute FAR-based risk acceptance criteria provide inconsistent recommendations. In fact, this holds for all metrics when used in isolation, as the relative desirability of a set of options may change if two decision problems are differently framed. The swine flu serves as a banal example, in which the desirability of getting inoculated is determined not only by the swine flu fatality risk, but also by the known side effects of the vaccine and the queue at the public health service. For this reason, Aven and Abrahamsen argue in favor of the trade-off based approach of ALARP. The reader should note that ALARP, too, may provide inconsistent recommendations, due to the lack of operationality of gross disproportionality.
The conclusion of Aven and Abrahamsen largely coincides with Evans & Verlander's (1997) critique of FN-criterion lines. A difference between FN-criterion lines and criteria expressed by FAR, PLL, IRPA or LIRA, is that the latter provide unambiguous advice if risk is agreed to be the sole attribute of importance. FN-criterion lines may provide unclear recommendations even under this simplified assumption. Figure 5.1 illustrates two options whose calculated risk lies partly above the criterion line, representing a decision problem in which the decision maker is provided with little but confusing aid. The reader
should note that the criterion line is drawn with an aversion factor of -1. Adopting the Dutch practice of setting the aversion factor to -2, option A stands out as the preferable choice.

[Figure 5.1. Contradictory ordering of alternatives with respect to FN-criterion lines: the calculated risks of options A and B both cross the risk acceptance criterion line in the FN-diagram]

The feasibility of FN-criterion lines is thus pragmatically conditioned not only on the relative set of options, but also on assumptions of risk aversion
and initial anchoring. Rhetorically, one can ask whether this decision problem
is any less ambiguous without the presence of an acceptability criterion line.
The relative desirability of the two options is equally unclear with respect to
risk. However, as the decision maker is free to weigh other concerns, the alternatives may possibly be ranked in a clear order of preference, reinforcing the
arguments of Abrahamsen & Aven (2008).
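The ordering problem can be made concrete with a small sketch; the criterion-line constant, the option data and the helper name below are hypothetical illustrations, not values from any standard.

    # Minimal sketch of comparing FN-curves against a criterion line
    # F(N) = C * N**alpha; C, alpha and the (N, F) points are hypothetical.

    def worst_ratio(fn_curve, C=1e-3, alpha=-1.0):
        """Largest ratio between calculated frequency and the criterion
        line; a value above 1 means the curve crosses the line."""
        return max(F / (C * N ** alpha) for N, F in fn_curve)

    option_a = [(1, 5e-4), (10, 2e-4), (100, 5e-6)]   # (N, cumulative F)
    option_b = [(1, 9e-4), (10, 5e-5), (100, 2e-5)]

    for alpha in (-1.0, -2.0):
        print(alpha, worst_ratio(option_a, alpha=alpha),
              worst_ratio(option_b, alpha=alpha))
    # With alpha = -1 both options cross the line by the same worst factor
    # (2.0), so no ordering emerges; with alpha = -2 option A (factor 50)
    # is clearly preferable to option B (factor 200).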
5.1.2 Preciseness of recommendations
The most important quality of risk acceptance criteria is according to NORSOK Z-013N (2001) that local conditions and the effect of risk reducing measures are reflected. The importance of considering realistic exposure should be re-emphasized. If the decision maker is to compare an overall acceptance criterion with a theoretical risk aggregated over an inhomogeneous group of people, imprecise recommendations are likely to follow. This is a pragmatic restriction that holds for all risk metrics, but is hardly valid as a generic argument against the use of risk acceptance criteria. If properly accounted for, this is actually a benefit of using risk acceptance criteria. Particularly suited are IRPA and FAR, advantageously allowing for precise accounting of exposure. Whether following the ethics of utility or justice in allocating risk reducing measures, IRPA and FAR may thus provide the decision maker with a precise term of reference. PLL, on the other hand, is ill-suited for this purpose, as neither exposure nor local variations are reflected. This is why PLL is seldom used as an absolute probabilistic criterion, but rather as input to overall ALARP-analyses. Although preciseness by and large is given in the choice of risk metrics, it is also conditioned on how criteria are derived. GAMAB stands out in this manner, as it by definition is concerned with overall aspects exclusively.
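A minimal sketch of how exposure enters such a term of reference is given below; it assumes the common convention that FAR counts fatalities per 10^8 exposed hours, and the worker figures are illustrative only.

    # Minimal sketch relating FAR and IRPA through exposure. FAR is taken
    # as fatalities per 1e8 exposed hours; the numbers are illustrative.

    def irpa_from_far(far: float, exposed_hours_per_year: float) -> float:
        """Individual risk per annum implied by a FAR value and a
        person's actual annual exposure."""
        return far * exposed_hours_per_year / 1e8

    # Hypothetical frequently exposed worker: FAR = 10, 1600 hours/year.
    print(irpa_from_far(far=10, exposed_hours_per_year=1600))   # 1.6e-04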

5.1.3 A binary decision process


Regardless of the consistency or preciseness of recommendations, risk acceptance criteria are of limited value if they are impractical to real-life decision making. A polarization is seen between the absolute criteria provided by GAMAB and MEM (and the common interpretation of ALARA), and the conditional criteria of ALARP and the precautionary principle. The trade-off analysis of ALARP is recognized to be a resource-intensive task, posing extensive requirements to both regulatory and company involvement. In comparison with absolute risk acceptance criteria, ALARP is a cumbersome process that is unlikely to succeed if regulatory supervision and incentives are not in place. This partially explains the moderate success of ALARP-processes in the Norwegian offshore sector, as reported by Vinnem et al. (2006). The issue is by HSE partly resolved through standards of good practice, easing the ALARP-process for operators of well-known technologies. ALARP may also be qualitatively formulated (e.g. by risk matrices), offering a practical advantage in cases where quantitative data are lacking. This is particularly relevant in comparison with GAMAB, as data on the reference system may not exist.
Aven et al. (2006) admit that absolute criteria provide a binary decision making process that is utterly practical. As conceptualized in Figure 2.5, the decision maker simply has to conclude whether the described risk is above or below a cut-off limit. In an ALARP-evaluation, formula (4.1) of gross disproportionality serves as a similar criterion. But, as the disproportionality factor by no means is absolute, ALARP cannot claim the same user-friendliness as MEM or GAMAB. Even more complicated is the precautionary principle, which is rightly accused of lacking clarity concerning when and how the principle is to be invoked. Comparing the practicality of the precautionary principle against the other approaches is indeed an unfair match, as it is reserved for situations of great uncertainty and is thus fundamentally distinct.
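The contrast between the two decision styles can be sketched as below, assuming the usual reading of gross disproportionality in which a measure is implemented unless its cost exceeds a disproportion factor d times its risk-reduction benefit; the names, values and factor d are illustrative, not taken from formula (4.1).

    # Minimal sketch contrasting a binary cut-off with an ALARP-style
    # gross disproportionality test (names, values and d are illustrative).

    def binary_acceptable(risk: float, limit: float) -> bool:
        """Absolute criterion: accept iff the described risk is below the limit."""
        return risk < limit

    def implement_measure(cost: float, benefit: float, d: float = 3.0) -> bool:
        """Implement a risk reducing measure unless its cost is grossly
        disproportionate to its benefit (the factor d is a judgment call)."""
        return cost <= d * benefit

    print(binary_acceptable(risk=8e-6, limit=1e-5))        # True
    print(implement_measure(cost=2.0e6, benefit=1.0e6))    # True
    print(implement_measure(cost=5.0e6, benefit=1.0e6))    # False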
5.1.4 Risk acceptance criteria simplify the decision process
Although conceptual problems are identified, it is the opinion of this author that risk acceptance criteria provide considerable aid for reaching decisions on risk. Seen through the eyes of the decision maker, absolute criteria expressed by single-value metrics are likely to simplify the decision making process. ALARP and ALARA evaluations also provide efficient aids, although their recommendations appear less clear-cut and are relatively time-consuming. However, considering the conceptual complexity of risk acceptance, it is evident that simplification comes with a price. Having questioned whether risk acceptance criteria offer practical decision support, an equally important question remains to be asked; do risk acceptance criteria promote good decisions?

5.2 Do risk acceptance criteria promote good decisions?


A good decision is according to Fischhoff et al. (1981) one that addresses all five complexities presented on p. 22. As these are mostly pragmatic, i.e. dependent on the specific application in a certain situation or company, they are not the focus of this final discussion. Rather, generic problems related to risk acceptability are addressed.

To judge what constitutes a sound decision, the perspective must necessarily be widened from the concern of the decision maker to include all actors affected by the risk. In the eyes of a company, a good decision is one that enables a proper balance between production and protection, as conceptualized by Reason (1997) in Figure 5.2. The figure works at a societal level as well, but from a governmental point of view the balance is intensely intertwined with political and ethical considerations. For the public, a good decision is one that is in line with personal levels of acceptable risk, resulting from the trade-off of factors described in section 2.9.

[Figure 5.2. The relationship between production and protection (adapted from Reason, 1997): between the extremes of bankruptcy and catastrophe lies a parity zone, with high hazard and low hazard ventures balancing protection against production]

Ethical, trade-off and strategic aspects of the goodness of risk acceptance criteria are discussed in the following. Since all issues hinge on the assumption that an objective criterion for acceptable risk may be set, the thread from section 2.4 must first be picked up.
5.2.1 The interpretation of probability to risk acceptance criteria
According to Nordland (2001), the risk-based approaches of ALARP, ALARA,
GAMAB and MEM are all based on the assumption that an objective level of
acceptable risk exists. With reference to chapter 2, this assumption holds not only
one, but two highly speculative beliefs; that both risk and risk acceptance can be
objectively expressed. As the latter presupposes the former, it is fundamental to
address the implications of subjective probabilities on the use of risk acceptance
criteria.
Intuitively, risk acceptance criteria seem meaningless if probability, and
hence risk, cannot be claimed an objective existence. If two people can assign
two distinct probabilities to the same event, how is the risk to be evaluated if one falls below the criterion line and the other above? And on what term of
reference may a criterion be set to claim sovereignty? Even simple methods of
bootstrapping will fail, due to the apparent impossibility of demonstrating an
objective level for comparison. These are extensions of the objections of Watson
(1994), concluding that a subjective interpretation of probability necessarily
reduces the role of probabilistic safety analysis to an advisory one. According
to Watson, this recognition stands in alarming contrast to the wordings of
American regulations, regarding risk analyses as the legitimate provider of
truths. Also in today's Norway, this seems to be an implicit assumption in
most regulations (Aven, 2007). Yellmann & Murray (1995) mock Watson for
having an extreme reaction to the unpleasant recognition that no risk analysis
can ever be perfectly objective, rhetorically asking whom you can trust if you
cannot trust your probabilistic safety analyst.
Analytical consensus and practitioner judgment
A substantial tenet of De Finetti (1974) is that probabilities are conditioned on one's current state of knowledge. Since the overall task of risk analysts is to seek knowledge about risk, arbitrary values are by no means assigned to either input or output probabilities. While probabilities for well-known technologies may be assessed through experience databases or physical experiments, probabilities of future systems are inferred through advanced models and expert judgment. Although the latter can be claimed to hold a larger element of subjectivity, analytical consensus may be sought in both cases. Owing to this, Vatn (1998) prefers the notion of für uns probability, characterizing the agreed probabilities amongst a knowledgeable group of risk analysts. Following this interpretation, it is perfectly meaningful to draw reference lines for comparison of risk, as long as one accepts the assumptions made explicit in the analysis and the formulation of risk acceptance criteria. Aven (2007) agrees with this position, while stressing that acceptance criteria must not be mistaken to represent benchmarks of objective truths.
Subjective probabilities cannot be seen to weaken the value of risk acceptance criteria per se. However, what may pose a problem is regulators and practitioners interpreting criteria and risk assessments as objective references providing rigid cut-off limits. Acknowledging that additional information may alter the risk assignments as well as the criterion lines, the decision maker should exercise judgment if the calculated risk falls close to the limits. This
holds for all probability-generated risk metrics, deduced from all risk-based
approaches. A special concern is voiced for MEM, as it is the principle most
explicitly announcing an objective level of reference.
Erring on the side of safety
The importance of avoiding strict interpretation of risk acceptance criteria is perhaps greater following a frequentist interpretation. Evaluating ALARP in light of the two schools of probability, Schofield (1998) concludes that the relative frequency approach represents significant problems of model validation. The subjective interpretation, on the other hand, is found to offer a powerful perspective for trade-off analyses in the ALARP-region. Uncertainties in estimation are particularly large for low frequency/high consequence events, in a frequentist's search for an objective quantity. A risk located in the upper tolerability region in Figure 4.3 may in such a case actually lie above the unacceptable limit. This also holds for Bayesian calculations, with the important distinction that uncertainties are assigned to the actual value. This is why conservative judgments are often preferred over best estimates, erring on the safe side in the face of uncertainty. While NORSOK Z-013N (2001) recommends the use of best estimates, a conservative approach is defended in e.g. NSW (2008). In some cases, the epistemic uncertainty may be so large that one cannot even know whether a prediction lies in the conservative ballpark. The introduction of nanotechnology serves as a timely example, whose associated risks are so uncertain that comparison with predefined criteria cannot be justified, regardless of whether one is of Bayesian or frequentist conviction. In such cases, the decision maker must rather turn to the precautionary principle.
5.2.2 Ethical implications of risk acceptance criteria
Examining the ethical justification of risk acceptance criteria, Aven (2007) concludes that there are no stronger ethical arguments for using absolute risk acceptance criteria compared to trade-off based regimes. While there are arguments both pro and con the use of risk acceptance criteria, these are not primarily of an ethical character. According to Aven, there should be no discussion on the need for considering all the ethical stances of utility, justice, discourse and ethics of the mind.² What should be debated is rather the balance of the various principles and concerns. This balancing act can be suggested to work at two levels; explicitly in the choice of approach, and implicitly in the selection of risk metrics.
² Ethics of the mind are rooted in the philosophy of Immanuel Kant. This line of reasoning states that the rightness of an act is not determined by its consequences. Rather, actions are correct in and of themselves without reference to other attributes, because they stem from fundamental obligations (Hovden, 1998).

The ethical act of balancing


ALARP serves as a textbook example of balancing principal lines of reasoning, although the practical exclusion of the discourse stance was questioned in section 4.4. Since equity and cost-benefit trade-offs according to Douglas (1985) and Fischhoff et al. (1981) are central determinants of risk acceptability, ALARP possesses a unique advantage in capturing both utility and justice.
MEM and GAMAB are based on the single principles of equity and technology. Although both provide an absolute risk limit, they must not be mistaken as equal from an ethical point of view. As MEM requires an upper restriction of IRPA, it is rooted in the ethics of justice. GAMAB is a technology-based criterion, not explicitly relating to any ethical stance. Rather, the ethics of GAMAB lie implicit in how previous standards are set, which section 4.3 demonstrated to be a fundamental deficiency of all bootstrapping approaches. GAMAB is furthermore indifferent to equity considerations, as it by definition concerns global risk solely. Although the decision maker is free to express GAMAB by IRPA, this must necessarily be based on extensive averaging if the aggregated risk is to be reflected. Recalling the advice of Holden (1984), GAMAB is thus inadequate from the view that risk acceptance criteria should reflect both the totality and the distribution of risk. As Ball & Floyd (1998) recommend the use of both individual and societal risk metrics, a plausible solution is to combine the IRPA-based MEM-criterion with FAR or FN-criterion lines deduced from GAMAB. The reader should beware that such a symbiosis completely disregards the ethics of utility, which is properly accounted for only in the framework of ALARP. And regardless of approach, the dual requirement of considering both societal and individual risk likely gives rise to ethical dilemmas. For example, Vatn (2009) leaves open whether it is ethically sound to reduce societal risk at the expense of increased IRPA to those fighting a hazardous event.
Who should set the criteria?
Acknowledging the ethical dilemmas of risk acceptance criteria, the moral and political question of who shall set risk acceptance criteria must necessarily follow. In Norway, it is up to the operators to define the criteria. According to Aven (2007), this creates an ethical problem, as the regulators necessarily have a broader societal perspective than the industry, whose primary goal is profit. Ball & Floyd (1998) similarly note that few duty holders are able to deal with complex policy issues of acceptable risk, while stressing that the enterprises play an important role in accounting for non-public concerns. Owing to this, regulatory authorities are claimed an important role, at least in providing guidance. While the UK's HSE offers extensive advice on the accomplishment of ALARP, Norwegian authorities provide few guidelines on the formulation of risk acceptance criteria. An unfortunate effect is that it is almost impossible for
politicians to reject a company's risk acceptance criteria on principal arguments, as discussed by Vatn (2009) in light of the new LNG-facility at Risavika, Norway. What is more serious is that the criteria are developed for single
plants only, possibly without consideration of the aggregated risk from the
totality of enterprises. If the criteria are not in accord with the overall safety
target of society, failure to take a holistic approach may according to Skjong
et al. (2007) yield disproportional expenditures and excessive levels of global
risk.
Transparency and stakeholder involvement
Paramount to the question of who shall set the criteria is how to account for the opinions of relevant stakeholders. The ongoing debate concerning potential oil and gas production in Lofoten in Norway illustrates how a variety of actors have interests in large societal decisions. This is eminently an issue of risk communication, which lies outside the scope of this report. The interested reader may consult for instance Sjöberg (2003). However, a transparent formulation of risk acceptance criteria is a prerequisite for successful risk communication. Skjong et al. (2007) denote this the accountability principle, demanding a single, open and clear process for managing risks affecting the public. According to Skjong et al., the principle favors quantitative risk acceptance criteria and objective assessments. With reference to the previous discussion, the latter requirement is unfortunate. What is more, claims of objective assessments presuppose a strict separation of facts and value judgments, which according to Fischhoff et al. (1981) is Utopia. Since the decision rules are explicitly stated, risk acceptance criteria as such will on the positive side render a transparent decision process. But whether these are known to the general public is yet another problem. MEM stands out as the most transparent, being the only principle prescribing what value of IRPA to employ. This is also true for the practical interpretation of ALARA, as the rigid upper limits of the Dutch government are known as the prevailing criteria. Even if trade-off analyses are performed, the ALARA-requirement of marginal benefits is relatively easy to track. GAMAB and ALARP, on the other hand, are clouded by the unclear definitions of comparable systems and gross disproportionality.
5.2.3 Compliance or continuous striving for risk reduction?
Although section 2.9 made it clear that risk acceptance is not determined by risk alone, this section puts all considerations but risk aside. While it is sensible to claim that every company or society seeks the balance of Reason (1997) between safety and productivity, this presupposes that the desirable risk level can actually be sought. A basic question therefore needs to be asked; does the use of risk acceptance criteria promote risk reduction?

Given that a company will strive to satisfy its criteria, the resulting risk obviously depends on their stringency. Owing to this, the requirement of GAMAB of being at least as good as the best comparable system seems to promote unprecedented levels of low risk. This stands in contrast to traditional bootstrapping approaches, where risk reduction is encouraged only by means of preserving the status quo. As an atypical example, the MEM-criterion is relatively strict, but no effort is required to reduce the risk below an IRPA of 10^-5. Since the criterion has remained constant through a variety of innovations, it is likely that the transient assumption of twenty technological systems has weakened its stringency.
In contrast to the standstill criterion of MEM is ALARP, whose disproportionality criterion holds a promise of risk that is as low as practicality allows. Aven & Vinnem (2005) clearly favor the ALARP-approach over absolute criteria, on the grounds that it encourages a continuous striving for risk reduction. Crucial to this argument is the distinction between HSE's and the Norwegian interpretation of ALARP, as reported in Vinnem et al.'s (2006) study of ALARP-processes in the Norwegian offshore industry. While the focus of HSE is on reaching good solutions in the ALARP-area, the involvement of Norwegian authorities is restricted to verifying compliance with upper limits. According to Aven & Vinnem (2005), minimal impetus is given to operating companies for considering if further risk reduction is achievable. The main emphasis amongst Norwegian operators has thus been on satisfying the upper criteria, usually with no or small margins. If ALARP-evaluations are performed, they very often result in dismissal of possible improvements. This yields an important conclusion, namely that equally important as the theoretical formulation of risk acceptance criteria is how these are applied in the industry and followed up by authorities.
5.2.4 One accepts options, not risks
A subjective interpretation of probability does not diminish the credibility of risk acceptance criteria per se. Unfortunately, this recognition is minor compared to the fundamental problem of whether acceptable risk is expressible in the form of an objective criterion. The words of Fischhoff et al. (1981) are inescapable; risk is never acceptable in an absolute sense. Rather, risk acceptance is a matter of trade-offs, unique to a particular set of options in a given context. Since the inextricable question of whether acceptable levels of risk exist resides in the realm of philosophy, the obstinacy of Fischhoff and his coworkers will not be challenged in this thesis. What can be questioned is to what extent risk acceptance criteria reflect that risk acceptance is a trade-off problem.
Comparing the different approaches is in this respect a rewarding task, as ALARP is the only approach not only allowing for, but also demanding trade-off analyses. Neither GAMAB nor MEM possesses this quality, as both prescribe risk as the single attribute of importance. Due to the stringency of criteria following comparison with best practice, this is particularly problematic
for GAMAB. One of its main criticisms follows from this deficiency, i.e. that unrealistic safety objectives may hinder the introduction of a cost-efficient technology or product. The precautionary principle also suffers under this claim. But, as EU (2000) requires an examination of the potential benefits and costs of action/non-action, the charges of Balzano & Sheppard (2002) are wrongfully attributed to principally lacking trade-off considerations.
Fischhoff et al. (1981) conclude that formal analysis is superior to bootstrapping and expert judgment for situations of complex, technological risk. If their groundbreaking contribution had been written subsequent to the original TOR-report (HSE, 1992), it would probably have included a commendatory chapter on the ALARP-principle. According to Aven et al. (2006), HSE's ALARP-approach is unique as it offers conditional rather than absolute acceptance criteria, tailored to the risk, costs and benefits of a specific situation. As such, the principle captures risk acceptability in a balanced consideration of the various benefits and burdens of an activity. Although bootstrapping approaches implicitly reflect this balance, these are trade-offs of past acceptability. The criteria are thus conditional only on the past, unable to reflect more than one aspect of risk acceptance, i.e. the severity of previously accepted consequences. This uniform focus necessarily restricts political and managerial flexibility in the future.
Aven and his coworkers advise restricting absolute criteria to lower-level functional requirements, e.g. SIL. However, it is the opinion of this author that lower-level criteria presuppose compliance with some overall acceptance criterion. Vatn (1998) offers a sensible compromise, suggesting that in order to avoid sub-optimization, one should be restrictive with the number of risk acceptance criteria. The normative issues should instead be expressed in terms of value trade-offs for overall optimization.

5.3 What we really are looking for


According to Breugel (1998), the first thing to ask in discussions about acceptable risk is what we are really looking for. Paradoxically, a simple answer to this question is impossible, as it depends on the particular aspects we are interested in. Intertwined in risk acceptability are multidisciplinary problems of how safe is safe enough, how stable is stable enough, what level of economic growth to seek, and the distribution of prosperity and global imbalance. On top of this comes the question of which authority, if any, is qualified to define what represents enough.

5.3.1 Overall conclusions and recommendations for further work


What we really are looking for in this study is to create a sound basis for the formulation of risk acceptance criteria. This final chapter has raised a set of conceptual questions regarding the goodness of risk acceptance criteria. The various approaches to setting risk acceptance criteria differ with respect to consistency, practicality and ethics, and in their ability to encourage risk reduction and to reflect risk acceptance. Furthermore, the different metrics by which risk acceptance criteria are expressed implicitly or explicitly affect how these issues are resolved. Notwithstanding that conceptual problems jeopardize the sound decision support of risk acceptance criteria, valuable insights are offered to their formulation. It is equally important that these are known to the decision maker. If users of risk acceptance criteria are unaware of their limitations and underlying assumptions, there is little point in perfecting the procedure of formulation. Striving for a sound formulation may even yield negative effects, if the decision maker is convinced that the criteria provide a perfect term of reference. From this it follows that risk acceptance criteria offer sound decision support, but only if their authors and users have a comprehensive understanding of the applied metrics and approaches. Owing to this, practitioners are advised to interpret risk acceptance criteria as guiding benchmarks, rather than rigid representations of an ideal truth.
The discussions demonstrate that risk acceptance criteria provide no perfect term of reference. As such, the thesis offers a valuable point of departure for the practitioner, if only by challenging his own discretion. Moreover, the examination shows that risk acceptance criteria are by no means a fully digested issue. As proven in the plentiful studies of Aven and his coworkers in Stavanger, issues remain unresolved on the practical implementation of risk acceptance criteria on the continental shelf. This calls for further research on the dialectic role of industries and government in formulating and complying with risk acceptance criteria. Finally, the complex issue of environmental damage is omitted from the study. Although environmental acceptance criteria are required in PSA (2001), current approaches are theoretically and practically underdeveloped. This urges theoretical maturing and academic debate on how environmental consequences may be adequately included in the acceptable triplet of risk.

References

Abrahamsen, E. & Aven, T. (2008). On the consistency of risk acceptance criteria with normative theories for decision making. Reliability Engineering and System Safety, 93, 1906-1910.
Adams, J. (2003). Risk and Morality. University of Toronto Press Incorporated, Toronto, Buffalo, London.
Ale, B. (2005). Tolerable or acceptable: A comparison of risk regulation in the United Kingdom and the Netherlands. Risk Analysis, 25, 231-241.
Ale, B., Aven, T., & Jongejan, R. (2009). Review and discussion of basic concepts and principles in integrated risk management. In Reliability, Risk and Safety: Theory and Applications. Proceedings from ESREL 2009.
Arbeidstilsynet (2009). Døde etter næring. Technical report, Arbeidstilsynet http://www.arbeidstilsynet.no.
Australian Safety and Compensation Council (2008). The health of nations: The value of a statistical life. Technical report, Australian Government, Australian Safety and Compensation Council.
Aven, T. (2003). Foundations of risk analysis. Chichester: Wiley.
Aven, T. (2007). On the ethical justification for the use of risk acceptance criteria. Risk Analysis, 27, 303-312.
Aven, T. (2009). Safety is the antonym of risk for some perspectives of risks. Safety Science, 47, 925-930.
Aven, T. & Vinnem, J. (2005). On the use of risk acceptance criteria in the offshore oil and gas industry. Reliability Engineering and System Safety, 90, 15-24.
Aven, T., Vinnem, J., & Vollen, F. (2006). Perspectives on risk acceptance criteria and management for offshore applications - application to a development project. International Journal of Materials and Structural Reliability, 4, 15-25.
Baker, G. E., Priest, S., Tebo, P. V., Baker, J. A. I., Rosenthal, I., Bowman, F. L., Hendershot, D., Leveson, L., Wilson, D., Gorton, S., & Wiegmann, D. (2007). The report of the BP U.S. Refineries Independent Safety Review Panel. Technical report, the BP U.S. Refineries Independent Safety Review Panel.
Ball, D. & Floyd, P. (1998). Societal risks, Final report. Technical report, The
Health and Safety Executive.
Balzano, Q. & Sheppard, A. (2002). The inuence of the precautionary principle on science-based decision-making: Questionable applications to risks of
radiofrequency elds. Journal of Risk Research, 5, 351369.
Barry, B., de Wilde, J., & Waever, O. (1998). Security: A New Framework for
Analysis. Boulder, London.
Belt, H. d. (2003). Debating the precautionary principle: "guilty until proven
innocent" or " inncocent until proven guilty"? Plant Physiology, 132, 1122
1126.
Bottelberghs, P. (2000). Risk analysis and safety policy developments in the
netherlands. Journal of Hazardous Materials, 71, 5984.
Bowles, D. (2003). Alarp evaluation: Using cost eectiveness and disproportionality to justify risk reduction. In ANCOLD 2003 Conference on Dams.
BP (2006). Guidance on practices for layer of protection analysis (LOPA).
Technical report, British Petroleum procedure: Engineering Technical Practice (ETP) GP 48-03.
Breakwell (2007). The psychology of risk. Cambridge University Press, Cambridge.
Breugel, K. v. (1998). How to deal with and judge the numerical results of risk
analysis. Computers and Structures, 67, 159164.
Campbell, S. (2005). Determining overall risk. Journal of Risk Research, 8,
569581.
Camus, A. (1942). The Stranger. Random House Inc, New York.
Carr, S. (2002). Ethical and value-based aspects of the European Commisions
precautionary principle. Journal of Aggriculture and Environmental Ethics,
15, 3138.
Chevreau, F., Wybo, J., & Cauchois, D. (2006). Organizing learning processes
on risks by using the bow-tie representation. Journal of Hazardous Materials,
130, 276283.
De Finetti, B. (1974). Theory of probability. Volume 1. Wiley and Sons.
DeFur, P. & Kaszuba, M. (2002). Implementing the precautionary principle.
The Science of the Total Environment, 288, 155165.
Ditlevsen, O. (2003). Decision modeling and acceptance criteria. Structural
Safety, 25, 165191.
DNV (2008). Phast. DNV software.
Douglas, M. (1985). Risk acceptability according to the social sciences. Routledge, London.
Elvebakk, B. (2007). Vision zero: Remaking road safety. Mobilities, 2, 425
441.
Elvik, R., Kolbenstvedt, M., Elvebakk, B., Hervik, A., & Bræin, K. (2009). Costs and benefits to Sweden of Swedish road safety research. Accident Analysis and Prevention, 41, 387-392.
EMS (2001). LUL QRA - London Underground Limited Quantified Risk Assessment. Update 2001. Technical report, Safety Quality and Environmental Department of London Underground.
EN 50126 (1999). Railway applications - The specification and demonstration of reliability, availability, maintainability and safety (RAMS). European Norm, Brussels.
EU (2000). Communication from the commission on the precautionary principle (COMM). Technical report, Commission of the European Communities, Brussels.
EU (2006). Council Directive 2006/42/EC of 17 May 2006 on machinery. Official Journal of the European Communities, L 157/24.
Evans, A. & Verlander, N. (1997). What is wrong with criterion FN-lines for judging the tolerability of risk? Risk Analysis, 17, 157-168.
Fischhoff, B. (1994). Acceptable risk: A conceptual proposal. Risk: Health, Safety and Environment, 1, 1-28.
Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S., & Keeney, R. (1981). Acceptable risk. Cambridge University Press, New York.
French, S., Bedford, T., & Atherton, E. (2005). Supporting ALARP decision making by cost benefit analysis and multiattribute utility theory. Journal of Risk Research, 8, 207-223.
Garland, D. (2003). Risk and morality. University of Toronto Press Incorporated, Toronto, Buffalo, London.
Hammitt, J. (2000). Valuing mortality risk: Theory and practice. Environmental Science and Technology, 34, 1396-1400.
Hartford, D. (2009). Legal framework considerations in the development of risk acceptance criteria. Structural Safety, 31, 118-123.
Holden, P. (1984). Difficulties in formulating risk criteria. Journal of Occupational Accidents, 6, 241-251.
Holton, G. (2004). Defining risk. Financial Analysts Journal, 60, 19-25.
Hovden, J. (1998). Ethics and safety: mortal questions for safety management. In Paper for Safety in Action, Melbourne 1998.
Hovden, J. (2003). Theory formations related to the risk society. In NoFS XV 2003, Karlstad, Sweden.
HSE. ALARP at a glance. Technical report, The Health and Safety Executive http://www.hse.gov.uk/risk/theory/alarpglance.htm.
HSE (1992). The tolerability of risk from nuclear power stations. Technical
report, HMSO, London.
HSE (2001a). Principles and guidelines to assist HSE in its judgments that duty-holders have reduced risks as low as reasonably practicable. Technical report, The Health and Safety Executive http://www.hse.gov.uk/risk/theory/alarp1.htm.
HSE (2001b). Reducing risks, protecting people; HSE's decision-making process. Technical report, HMSO, Norwich.
HSE (2002). Toxic substances bulletin, issue 47. Technical report, The Health and Safety Executive http://www.hse.gov.uk/toxicsubstances/issue47.htm.
HSE (2003a). Assessing compliance with the law in individual cases and the
use of good practice. Technical report, The Health and Safety Executive
http://www.hse.gov.uk/risk/theory/alarp2.htm.
HSE (2003b). Good practice and pitfalls in risk assessment. Technical report,
Health and Safety Executive http://www.hse.gov.uk/research/
rrhtm/rr151.htm.
HSE (2003c). Policy and guidance on reducing risks as low as reasonably
practicable in design. Technical report, The Health and Safety Executive
http://www.hse.gov.uk/risk/theory/alarp3.htm.
HSE (2004). Guidance on as low as reasonably practicable (ALARP) decisions in control of major accident hazards (COMAH). Technical report,
The Health and Safety Executive http://www.hse.gov.uk/comah/
circular/perm12.htm.
HSE (2008). HSE principles for cost benefit analysis (CBA) in support of ALARP decisions. Technical report, The Health and Safety Executive http://www.hse.gov.uk/risk/theory/alarpcba.htm.
HSE (2009). Societal risk: Initial briefing to societal risk technical advisory group. Technical report, The Health and Safety Executive http://www.hse.gov.uk/research/rrpdf/rr703.pdf.
IEC 61508 (1998). Functional safety of electrical/electronic/programmable electronic safety-related systems. Part 4. International Electrotechnical Commission, Geneva.
ISO/IEC Guide 51 (1999). Safety aspects – Guidelines for their inclusion in standards. International Organization for Standardization / International Electrotechnical Commission.
Johannesson, M., Jönsson, B., & Karlsson, G. (1996). Outcome measurement in economic evaluation. Health Economics, 5, 279–296.
Jongejan, R. (2008). How safe is safe enough? The government's response to industrial and flood risks. PhD thesis, Technische Universiteit Delft.
Jongejan, R., Jonkman, S., & Maaskant, B. (2009). The potential use of individual and societal risk criteria within the Dutch flood safety policy (part 1): Basic principles. In Reliability, Risk and Safety: Theory and Applications, proceedings from ESREL 2009.
Kaplan, S. & Garrick, J. (1981). On the quantitative definition of risk. Risk Analysis, 1, 11–27.

Kjellén, U. (2000). Prevention of accidents through experience feedback. Taylor and Francis, London.
Kjellén, U. & Sklet, S. (1995). Integrating analyses of the risk of occupational accidents into the design process. Part 1: A review of types of acceptance criteria and risk analysis methods. Safety Science, 18, 215–227.
Klinke, A. & Renn, O. (2002). A new approach to risk evaluation and management: Risk-based, precaution-based and discourse-based strategies. Risk Analysis, 22, 1071–1094.
Lind, N. (2002a). Social and economic criteria of acceptable risk. Reliability Engineering and System Safety, 78, 21–25.
Lind, N. (2002b). Time effects in criteria for acceptable risk. Reliability Engineering and System Safety, 78, 27–31.
Linnerooth-Bayer, J. (1993). The social mismanagement of risk? Risk aversion and economic rationality. Technical report, International Institute for Applied Systems Analysis (IIASA).
Lyster, R. & Coonan, E. (2009). The precautionary principle: A thrill ride on the roller coaster of energy and climate law. RECIEL, 18, 38–49.
Marszal, E. (2001). Tolerable risk guidelines. ISA Transactions, 40, 391–399.
Martz, H. & Waller, R. (1988). On the meaning of probability. Reliability Engineering and System Safety, 23, 299–304.
Melchers, R. (2001). On the ALARP approach to risk management. Reliability Engineering and System Safety, 71, 201–208.
Möller, N., Hansson, O., & Peterson, M. (2006). Safety is more than the antonym of risk. Journal of Applied Philosophy, 23, 419–432.
NASA (2002). Probabilistic risk assessment procedures guide for NASA managers and practitioners. NASA Office of Safety and Mission Assurance, Washington D.C.
Nordland, O. (1999). A discussion of risk tolerance principles. The Hazards Forum Newsletter, 27, 2–6.
Nordland, O. (2001). When is risk acceptable? In Presentations at 19th International System Safety Conference, Huntsville, Alabama, USA, September 2001.
NORSOK Z-013N (2001). Risiko- og beredskapsanalyse. Standard Norge, Oslo.
NS 5814 (2008). Krav til risikovurderinger. Standard Norge, Oslo.
Næss, A. (1985). Filosofiske betraktninger om lykke og ulykke. In NOFS-85, SINTEF.
NSW (2008). Hazardous industry planning advisory paper (HIPAP). No 4 – Risk criteria for land use safety planning (draft). Technical report, NSW Government, Department of Planning, Sydney, Australia.
OLF 070 (2004). Application of IEC 61508 and IEC 61511 in the Norwegian petroleum industry. OLF.
Papoulis, A. (1964). The meaning of probability. IEEE Transactions on Education, 7, 45–51.

Pasman, H. & Vrijling, J. (2003). Social risk assessment of large technical systems. Human Factors and Ergonomics in Manufacturing, 13, 305–316.
Phenix, C. & Treder, M. (2004). Applying the precautionary principle to nanotechnology. Center for Responsible Nanotechnology http://www.crnano.org/precautionary.htm.
PSA (2001). Regulations relating to health, environment and safety in the petroleum activities (the framework regulations) 2001. Petroleum Safety Authority Norway (PSA), Norwegian Pollution Control Authority (SFT) and Norwegian Social and Health Directorate (NSHD).
Rackwitz, R. (2004). Optimal and acceptable technological facilities involving risks. Risk Analysis, 24, 675–695.
Rausand, M. & Utne, I. (2009). Risikoanalyse – teori og metoder. Tapir Akademisk Forlag, Trondheim.
Reason, J. (1997). Managing the risks of organizational accidents. Ashgate Publishing Limited, Hampshire.
Renn, O. (2008). Risk Governance. Coping with uncertainty in a complex world. Earthscan, London.
Salter, M. (2008). Imagining numbers: Risk, quantification and aviation security. Security Dialogue, 39, 243–266.
Schäbe, H. (2004). Different approaches for determination of tolerable hazard rates. In ESREL 2004 Conference proceedings.
Schofield, S. (1998). Offshore QRA and the ALARP principle. Reliability Engineering and System Safety, 61, 31–37.
Sjöberg, B. (2003). Introduction to risk communication. Current trends in risk communication: Theory and practice. Technical report, Directorate for Civil Defence and Emergency Planning, Oslo.
Skjong, R., Vanem, E., & Endresen, Ø. (2007). Risk evaluation criteria. Technical report, SAFEDOR-D-4.5.2, DNV.
Slovic, P. (1987). Perception of risk. Science, 236, 280–285.
SSB (2009). Dødsfall etter kjønn, alder og underliggende dødsårsak. Hele landet. 2007. Statistisk Sentralbyrå http://www.ssb.no/dodsarsak/.
Starr, C. (1969). Social benefit versus technological risk. Science, 165, 1232–1238.
Starr, C. & Whipple, C. (1980). Risks of risk decisions. Science, 208, 1114–1119.
Stuphorn, J. (2003). Iterative decomposition of a communication-bus system using ontological analysis. PhD thesis, Universität Bielefeld.
Tannert, C., Elvers, H., & Jandrig, B. (2007). The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty. EMBO reports, 8, 892–896.
Teknisk Ukeblad (2009). Krever nye tall fra havforskerne. Teknisk Ukeblad, 35, 10–11.

Trouwborst, A. (2007). The precautionary principle in general international law: Combating the Babylonian confusion. RECIEL, 16, 185–195.
Trung, L. (2000). The GAME, MEM and ALARP principles of safety (abridged version). Recherche Transports Sécurité, 68, 63–65.
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
United Nations (1992). Report of the United Nations conference on environment and development, Rio Declaration on environment and development. Technical report, United Nations, New York.
US Presidential/Congressional Commission on Risk Assessment and Risk Management (1997). Framework for environmental health risk management. Final report, volume 1. Technical report, US Presidential/Congressional Commission on Risk Assessment and Risk Management http://www.riskworld.com/Nreports/1997/risk-rpt/pdf/EPAJAN.PDF.
Vatn, J. (1998). A discussion of the acceptable risk problem. Reliability Engineering and System Safety, 61, 11–19.
Vatn, J. (2009). Issues related to localization of an LNG facility. In Reliability, Risk and Safety: Theory and Applications. Proceedings from ESREL 2009.
Vaurio, J. (1990). On the meaning of probability and frequency. Reliability Engineering and System Safety, 28, 121–130.
Vinnem, J. (2007). Offshore risk assessment. Principles, modeling and application of QRA studies. Springer, London.
Vinnem, J., Haugen, S., Vollen, F., & Grefstad, J. (2006). ALARP-prosesser. Utredning for Petroleumstilsynet. Technical report, Petroleumstilsynet.
Vrijling, J., van Gelder, P. H. A. J. M., Goossens, L., Voortman, H., & Pandey, M. (2004). A framework for risk criteria for critical infrastructures: Fundamentals and case studies in the Netherlands. Journal of Risk Research, 7, 569–579.
Vrijling, J., Hengel, W., & Houben, R. (1998). Acceptable risk as a basis for design. Reliability Engineering and System Safety, 59, 141–150.
Walker, T. (2001). Tolerability of risk. Its use in the nuclear regulation in the UK. Technical report, HSE.
Watson, S. (1994). The meaning of probability in probabilistic safety analysis. Reliability Engineering and System Safety, 45, 261–269.
Webster (1978). Webster's Encyclopedic Unabridged Dictionary of the English Language. Random House, New York.
Wilson, K., Leonard, B., Wright, R., Graham, I., Moffet, J., Pluscauskas, M., & Wilson, M. (2006). Application of the precautionary principle by senior policy officials: Results of a Canadian survey. Risk Analysis, 26, 981–988.
Woodruff, J. (2005). Consequence and likelihood in risk estimation: A matter of balance in UK health and safety risk assessment practice. Safety Science, 43, 345–353.

Wu, J., Apostolakis, G. E., & Okrent, D. (1990). Uncertainties in system analysis: Probabilistic versus nonprobabilistic theories. Reliability Engineering and System Safety, 30, 163–181.
Yellmann, T. & Murray, T. (1995). Comment on the meaning of probability in probabilistic safety analysis. Reliability Engineering and System Safety, 49, 201–205.

Appendix

Abbreviations and acronyms


ALARA – As low as reasonably achievable
ALARP – As low as reasonably practicable
AIR – Average individual risk
CBA – Cost-benefit analysis
FAR – Fatal accident rate
DDT – Dichlorodiphenyltrichloroethane, a banned pesticide
FN-curve – Frequency/number of fatalities curve
GAMAB – Globalement au moins aussi bon
HSE – Health and Safety Executive, UK
IR – Individual risk
KPI – Key performance indicator
IRPA – Individual risk per annum
LIRA – Localized individual risk
MEM – Minimum endogenous mortality
PFD – Probability of failure on demand
PLL – Potential loss of life
PSA – Petroleum Safety Authority, Norway
QUALY – Quality adjusted life years
RAC – Risk acceptance criteria
RPN – Risk priority number
SFAIR – Safe so far as is reasonably practicable
SIL – Safety integrity level
SIS – Safety instrumented system
VPF – Value of preventing a fatality
VSL – Value of a statistical life

The ROSS activities at NTNU are supported by the insurance company TrygVesta. The annual conference Sikkerhetsdagene is jointly arranged by TrygVesta and NTNU.

R O S S
Further information about the reliability, safety, and security activities at NTNU may be found at the Web address: http://www.ntnu.no/ross

ISBN: 978-82-7706-228-1
