
Università degli Studi di Trieste

Physics Department

Master course in Nuclear and Subnuclear Physics

Uncertainty propagation and sensitivity study
of the dose calculations in an accident
scenario at the European Spallation Source

Candidate: Nicola Rizzi
Supervisor: Ph.D. Riccardo Bevilacqua
Outside examiner: Ph.D. Paolo Maria Milazzo

ACADEMIC YEAR 2018-2019


Foreword

This thesis presents the work done at the ESH&Q division at the European Spallation
Source (ESS) during the period from May to August 2019. The thesis represents the
graduation requirement for the Nuclear and Subnuclear Physics Master program at
Università degli Studi di Trieste.

First and foremost, I would like to thank my supervisor Riccardo Bevilacqua for
the great opportunity he offered me and the humanity he showed me at each step of
this journey. I would like to address a special thanks also to Leif Emås, for his
clarity and support, and to Leif Spanier from Lloyd’s Register and Xuezhi Zhang from
the Chinese Academy of Sciences, for their invaluable assistance to my project.
Finally, to the numerous other people at the European Spallation Source who assisted
with both technical and moral support, I am deeply indebted for your help when it
was most needed.

A special thanks also to ENEN and SIRR for their financial support of my mobility
through, respectively, the ENEN+ project, co-funded by EURATOM research, and the
SIRR Grant for Master Thesis.

Nicola Rizzi
Trieste, Italy, December 2019

Summary

The aim of this thesis is to obtain a quantitative estimation of dose calculation
uncertainties in the case of an accident scenario at the European Spallation Source.
In Chapter 1 an overview of the basics of uncertainty analysis is presented; in
particular, a detailed description of the methods and steps generally adopted is
discussed, along with a brief introduction to the European Spallation Source facility.

The model used is treated in depth in Chapter 2 and is subdivided into three main
parts: activity transport inside the building, dispersion into the atmosphere and
the actual dose assessment. Each part deals with the conceptual and computational
realization of the physical processes involved and, while the choices made are mainly
due to the Target Division specialists, the actual code used for the analysis has been
written ex novo as a part of this work¹.

Chapter 3 is focused on the description of the accident scenario and its radiological
consequences. At the end of this chapter, validation of the code results is also
addressed, using as benchmark the dose assessment by the Target Division specialists.

The thesis “core” is Chapter 4 where, after an introduction on Quasi-Monte Carlo
methods, the parametric uncertainty of the model is explored based on the scientific
literature and on informal solicitation of the aforementioned specialists. The results
are then discussed for the worst-case accident scenario, followed by some remarks on
model uncertainties.

In the last chapter, the variance-based sensitivity technique adopted is discussed.
The results of the sensitivity analysis have both permitted the ranking of the
parameters with respect to their contribution to the dose variance and given a
deeper understanding of the model itself.

¹ The code is available at https://github.com/Athropos/dose-model-root

Sommario

The aim of this thesis is to obtain a quantitative estimation of the uncertainties on
dose calculations in the case of an accident scenario at the European Spallation Source.
Chapter 1 presents an overview of the basics of uncertainty analysis; in particular,
the methods and steps generally adopted are discussed in detail, along with a brief
introduction to the European Spallation Source facility.

The model used is treated in detail in Chapter 2 and is divided into three main
parts: activity transport inside the building, dispersion into the atmosphere and
the actual dose assessment. Each part deals with the conceptual and computational
realization of the models of the physical processes involved and, while the choices
made are mainly due to the Target Division specialists, the code used for the analysis
has been written ex novo as an integral part of this work².

Chapter 3 focuses on the description of the accident scenario and its radiological
consequences. At the end of this chapter the validity of the results returned by the
code is also addressed, using as reference the dose assessments by the Target Division
specialists.

The core of the thesis is Chapter 4 where, after an introduction on Quasi-Monte
Carlo methods, the parametric uncertainty of the model is explored based on the
scientific literature and on informal solicitation of the aforementioned specialists.
The results are then discussed for the most pessimistic accident scenario, followed
by some remarks on the model uncertainties.

In the last chapter the variance-based sensitivity techniques are discussed. The
results of the sensitivity analysis have both permitted the ranking of the variables
with respect to their contribution to the dose variance and given a deeper
understanding of the model itself.

² The code is available at https://github.com/Athropos/dose-model-root

Acronyms

Term Definition
AA10 Accident Analysis 10 - Water leakage in Monolith Vessel
A2T Accelerator to Target
ARF Airborne Release Fraction
BCI Bootstrap Confidence Interval
DR Damage Ratio
EFH Exposure Factors Handbook
ESH&Q Environment, Safety, Health and Quality
GLS Gas Liquid Separation (tank)
HVAC Heating, Ventilation and Air Conditioning
IAEA International Atomic Energy Agency
LENA Swedish Radiation Safety Authority Gaussian Dispersion Code
LP Leak Path
MAR Material At Risk
MC Monte Carlo
NBPI Neutron Beam Port Insert
OBT Organically Bound Tritium
PBW Proton Beam Window
PIE Postulated Initiating Event
PSAR Preliminary Safety Analysis Report
PWC Primary Water Cooling
QMC Quasi-Monte Carlo
QR Quasi-random
RS Random Sampling
SMHI Swedish Meteorological and Hydrological Institute
SSM Swedish Radiation Safety Authority

Contents

1 Introduction
  1.1 Uncertainty analysis: an overview
  1.2 European Spallation Source

2 Model
  2.1 Internal activity transport
  2.2 Dispersion
  2.3 Dose assessment

3 Accident Analysis
  3.1 Overview
  3.2 Scenario Development
  3.3 Radiological Consequences
  3.4 Risk Assessment

4 Uncertainty Analysis
  4.1 Methods
  4.2 Parametric Uncertainty
  4.3 Results
  4.4 Model Uncertainties

5 Sensitivity Study
  5.1 Variance-based sensitivity analysis
  5.2 Numerical estimation
  5.3 Results

Conclusions

Appendix A: Code workflow

Appendix B: Activity transport plot


Chapter 1

Introduction

1.1 Uncertainty analysis: an overview


The primary objective of this work is to estimate uncertainties on radiation doses
to a reference individual of the public from exposure to radionuclide emissions af-
ter an accident at the European Spallation Source site (see Sec. 1.2). An extensive
overview of this kind of analysis is given in [1], which has been adopted as the main
reference for the following discussion.
Mathematical models have always had a primary role in predicting the consequences
of radionuclide releases into the environment, both from normal operations and in
hypothetical accident conditions. Along with this role has come the recognition
that it is not enough to present results as single-value estimates of dose or of
radionuclide environmental concentrations: models are only approximations of real
environmental systems, and therefore there will always be some uncertainty
associated with the predictions obtained from their use. As a matter of fact, a model
prediction without some estimate of the associated uncertainty is of little value to
decision makers.

1.1.1 Reliability
Reliability of a dose model is defined in [1] as a measure of confidence in model
predictions. There are two ways of addressing confidence when it comes to dose
models: qualitative and quantitative. The first relies on statements like:

– “a model prediction is conservative”

– “a model prediction is realistic”

which by themselves give no indication of the possible extent of underprediction (or
overprediction). On the other hand, examples of quantitative reliability statements
are:

– “at a high level of confidence, a model prediction is accurate to within a factor
  of three”


– “a model is unlikely to underpredict by more than one order of magnitude”


Clearly, these are preferable to the qualitative ones, since they provide a much
firmer basis for decision making. For example, when the stated limits of accuracy
of the model prediction approach some established level of concern (for example,
a legal dose upper bound), there is a strong interest in re-evaluating the model
and its parameters. However, it must be pointed out that all procedures used to
evaluate the reliability of model predictions, even the quantitative ones, are affected
to some extent by the subjective judgement of the individuals who have performed
the evaluation.
Thus, the primary goal of this thesis is to reach an extensive quantitative description
of the reliability of the dose models used at the European Spallation Source for
accident scenarios.

1.1.2 Analysis process


The structure of the uncertainty analysis is presented in Figure 1.1: blue arrows
show the steps already carried out at ESS in the accident analysis documentation
(Chapter 3), while orange arrows show those carried out uniquely in this work
(Chapter 4 and Chapter 5). In green is indicated a possible re-evaluation of the risk
assessment in the light of this work’s results.

Figure 1.1: Scheme of the structure adopted for the dose assessment and uncertainty analysis.
Blue arrows: steps included in the documentation for accident scenarios. Orange arrows: steps
extensively dealt with in this work. Green arrow: possible re-evaluation in the light of the
conclusions of this work.

Identification of the problem (scenario) should be the first step in both
assessing the dose and evaluating the model reliability. In fact, improper scenario
definition could result in a model predicting correct results for the wrong problem.
The issue can be restated as the following: “What is the assessment question?”.
Two distinct types of assessment questions can be studied: those with a
deterministic answer and those with a stochastic answer. As pointed out
in [2], examples of the first type of assessment question are:

– “What is the annual dose received by a specific individual?”

– “What is the daily dose received by a reference individual of a specified
  population group?”

Each of these questions has only one true answer. All the uncertainty is then in
the lack of knowledge about the true value of parameters that are invariant with
respect to the reference unit (for example, what the individual ate, how much the
individual ate, or the concentration of the radionuclide in the food the individual
ate).
On the other hand, an example of the second type of assessment question (a question
with a stochastic answer) is:

– “What is the annual dose received per individual in a specified population
  group?”

The true answer to this question varies from individual to individual. Part of the
uncertainty in the answer is due to stochastic variability with respect to the reference
unit of the assessment question (what and how much an individual ate varies from
individual to individual). In other words, while in the first case the only contribution
to the answer comes from uncertainty, in the second case there is also a contribution
from variability, which usually refers to the quantitative biological differences in
individual members of a population [3]. For example, two healthy people of the
same age and gender and having identical diets may exhibit substantially different
tritium retention times. Variability should not be confused with the uncertainty on
the central value of some parameter.
This work aims to answer a question of the first type, since the dose assessment
request was:

– “What is the dose received by a reference individual of the public living near
the ESS site after a specific accident?”

Specifying the conceptual model is the second step in the uncertainty process.
“The model concept takes into account the number of distinct compartments,
interconnections between compartments and the number of processes and mecha-
nisms to be considered explicitly” [1]. Although ideal, it is often not practical
to include all conceivable processes and mechanisms within the
conceptual structure of the model. This is due either to the marginal role of some
processes or to the unavailability of relevant data; sometimes even data that do
exist are empirical and their field of application is limited. In any case, one must
be careful about what to keep and what to discard when building the model.
Differences between predictions and reality may be due entirely to the inability to
identify the characteristics of the release, the important mechanisms and pathways
of environmental transport, the attributes of the exposed population, or even the
time and space over which calculations should be averaged.

Specifying the computational model means identifying the set of equations
and parameter values to be used to obtain quantitative results. Whereas the conceptual
model primarily identifies the processes, pathways, compartments and interactions
between compartments necessary to address a given scenario, the computational
model leads to the numerical answer to the assessment question. Regardless of its
complexity, a model can be seen as:

$$f : \mathbb{R}^p \to \mathbb{R}, \qquad \vec{X} \mapsto Y = f(\vec{X}) \qquad (1.1)$$

where $Y$ is the output, $\vec{X} = (X_1, \dots, X_p)$ are the $p$ inputs and $f$ is the model function,
whose analytical form may not be known. Keep this definition in mind, as it will be
useful later in Chapter 4 and Chapter 5.

Estimation of parameter values means evaluating the constants and inde-
pendent variables of the equations that make up the model. The parameter values
used in transport models should be estimated or derived from experimental evidence
obtained under conditions related to the specific situation for which the assessment
is performed. Often, very few experimental data exist to permit quantification of
all parameter values as a function, for example, of location, pathway, radionuclide,
chemical form or atmospheric conditions. In addition, many processes in the model,
like environmental or biokinetic ones, are subject to a large natural variability. Even
worse, many of the important parameters describing these processes are known only
on a purely theoretical basis or under conditions not relevant to the problem. Thus,
model parameters are usually associated with large margins of uncertainty.

Quantitative statement of uncertainty is then the next logical step. As al-
ready pointed out, the uncertainty can arise both from the model not representing
the real world correctly at the conceptual level and from knowing parameters only
within a certain level of confidence. However, it is not always straightforward to
quantify a model uncertainty, since one can estimate the impact only of those
processes that have been deliberately neglected (hopefully because they had no
impact at all). In general, [1] acknowledges that there are no “general rules or
prescriptions on how to assure that the formulated assessment problem and the
formulation of the mathematical models are correct”; this is why this work assumes
that the predominant sources of uncertainty in model predictions are due to
uncertainties in the estimation of parameter values.
The uncertainty of a parameter is expressed in two ways: the range of possible
values and the frequency with which any value within that range is expected to
occur. At this point one can define a Probability Density Function (PDF) for each
parameter and assess a confidence interval by means of the standard deviation of the
distribution.
The maximum conceivable range is estimated by first reviewing the general scientific
literature (reports, books, and scientific papers) that contains parameter informa-
tion. After the literature search is completed, it is then evaluated, by either self-
assessment or expert opinion, whether these maximum and minimum values should
be adapted to reflect physical limitations present in the conditions being examined.
At this stage, if there are not enough data to statistically evaluate which PDF
should be selected, a subjective determination of the PDF for most parameters is
necessary. To increase the defensibility of this kind of assessment, one may ask an
expert to interpret the available information and quantify the most likely parameter
value and its uncertainty. However, in both cases, the rationale for the assessment
should be well documented, including a description of the information available and
the evaluation of that information.

Propagation of uncertainties and sensitivity study means estimating the
uncertainty in the dose prediction and evaluating how much of this uncertainty can
be attributed to different sources of error in the input factors. Figure 1.2 shows
a simple prototype of the analysis, while extensive overviews of the analytical
and computational methods that can be used to carry out the propagation and the
sensitivity study are presented, respectively, in Chapter 4 and Chapter 5.

Figure 1.2: Scheme of the uncertainty analysis conducted as main objective of this thesis.

1.2 European Spallation Source


The European Spallation Source (ESS) is a research facility aiming to be the world’s
most powerful neutron source. It is a pan-European project with 13 European na-
tions as members, including the host nations Sweden and Denmark. The ESS
facility is under construction in Lund, while the ESS Data Management and Soft-
ware Centre (DMSC) is located in Copenhagen. Its unique capabilities will both
greatly exceed and complement those of today’s leading neutron sources, enabling
new opportunities for researchers across the spectrum of scientific discovery, includ-
ing materials and life sciences, energy, environmental technology, cultural heritage
and fundamental physics. The ESS mandate is well described by the following
statement:

“ESS will provide up to 100 times brighter neutron beams than currently
available at existing facilities. In simple terms, the difference between
current neutron sources and ESS is something like the difference between
taking a picture in the glow of a candle or using a torch”

In a more quantitative way, Figure 1.3 reports the effective thermal flux of
neutron sources around the world as a function of operational year. For more
details about the current situation of neutron facilities in Europe, see [4]. Most
existing sources are based on nuclear reactors, an approach that has reached its
maximum capability in terms of how many neutrons can be produced; ESS belongs
to a new generation of neutron sources based on spallation technology, which has
been developed by scientists and engineers to reach new standards of efficiency.
From a technical point of view, a particle accelerator accelerates protons up to 2 GeV;
the spallation process takes place when the accelerated proton beam hits
the tungsten bricks of the target wheel. In the process, the high-speed protons
knock out neutrons, which are directed to the ESS instruments through a gauntlet
of media, guides, optics and filters to be used for scientific research. The facility
design and construction include the most powerful linear proton accelerator ever
built, a five-tonne helium-cooled tungsten target wheel, 22 state-of-the-art neutron
instruments, a suite of laboratories and a supercomputing data management centre.

The accelerator components are shown in Figure 1.4 and treated in detail in
[6]; rapidly varying electromagnetic fields heat hydrogen gas in the ion source so
that electrons evaporate from the hydrogen molecules and a plasma of protons re-
mains. The proton beam is then transported through a Low Energy Beam Transport
(LEBT) section to the Radio Frequency Quadrupole (RFQ), where it is bunched
and accelerated up to 3.6 MeV. In the Medium Energy Beam Transport (MEBT)
section the transverse and longitudinal beam characteristics are diagnosed and opti-
mised for further acceleration in the Drift Tube Linac (DTL). After approximately
50 meters the protons have gained enough speed so they can be accelerated by super-
conducting double-spoke cavities (SPK). These cavities are cooled by liquid helium
to −271 °C.

Figure 1.3: The evolution of effective neutron source fluxes as a function of calendar year, from
the discovery of the neutron in 1932 to 2016. HFIR, ILL, ISIS, SINQ, SNS, JSNS and FRM-II
(MLZ) are still operational and CSNS and ESS are under construction. Image and caption from
[4].

The spoke cavities are followed by 36 Medium Beta Linac (MBL) and
84 High Beta Linac (HBL) elliptical cavities. These structures are distributed all
along the linear accelerator and constitute the majority of its length. After accelera-
tion the beam is transported to the target through the High Energy Beam Transport
(HEBT) section. At this point the protons have reached 96% of the speed of light.
A switch dipole will bend the beam to the A2T line while powered and will permit
the beam to be sent to the beam dump while unpowered. The ESS accelerator high
level requirements are to provide a 2.86 ms long proton pulse at 2 GeV at a repetition
rate of 14 Hz. This represents 5 MW of average beam power with a 4% duty cycle
on target.

The Target Station uses protons delivered by the linear accelerator to liberate
neutrons from tungsten nuclei via nuclear spallation reactions. These neutrons,
which travel at 10% of the speed of light, are then slowed down to roughly the
speed of sound using liquid hydrogen and water as moderating media, to energy
levels usable by the scientific instruments. Of those neutrons that are slowed down
to useful speeds, a smaller fraction is allowed to travel from the moderators into
the neutron guides that deliver them to neutron-scattering instruments.

Figure 1.4: Block diagram of the ESS accelerator. Image from [5].

Figure 1.5: Cross section view of the Target Building showing the physical locations of the many
systems constituting the target station. Image from [5].

The target station occupies the Target Building (Figure 1.5), which is a large structure, about
130 m in length, 22 m in width and 30 m in height. In Figure 1.5, the proton beam
is shown entering the Target Building from the left, passing through the proton
beam transport hall and into the target monolith. The proton beam window sepa-
rates the high vacuum in the accelerator from the inert helium gas in the monolith,
shown in Figure 1.6, which is a large structure (11 m diameter by 15.5 m tall, from
basement floor to high bay floor) consisting mostly of 3000 tons of steel shielding.
Embedded within the monolith there is a vessel containing the major components
involved in neutron production and delivery to instruments: the spallation target,
the moderator-reflector system, the proton beam window, and instrumentation. The
target itself is a 2.6 m diameter stainless steel disk containing bricks of tungsten,
a neutron-rich heavy metal. It weighs 4.9 tonnes and rotates at 23.3 rpm, in
time with the arrival of the proton beam. The unit is cooled by a flowing helium
gas system interfaced with a secondary water system, whose principal components
(pumps, heat exchangers, filters, etc.) are located in Utilities rooms. The modera-
tion is achieved using a 20 K hydrogen-based and a water-based moderator (respec-
tively cold and thermal) and a beryllium-lined reflector. The moderator-reflector
system is housed in a replaceable plug, and also includes cryogenic hydrogen and
water-cooling systems (Figure 1.6).

Figure 1.6: Top: cross section view of the monolith showing in-monolith components and the
connection cell above. Image from [5]. Bottom: 3D model of the moderator-reflector system.
Once moderated, the neutrons are delivered to the instruments through 42 beam
ports radiating from the Target Station. The beam extraction system provides in-
tense slow neutron beams through beam tubes going across the target shielding.
The neutrons are delivered at the surface of the shielding, to be used at the neu-
tron scattering instruments. Additionally, the Target Building contains active cells,
where the highly activated components from the monolith are moved via the High
Bay once they reach the end of their service lives. Finally, a dedicated HVAC sys-
tem operates within the Target Building, servicing all target station areas including
the active cell facilities.
Chapter 2

Model

This chapter presents the methods and tools making up the dose calculation
model for the public dose assessment in the safety analyses at ESS. The main
reference for the structure and choices made in this chapter is [7]. However, most of
the content presented is the outcome of extensive independent research through the
scientific literature and books on the subject. In fact, the first part of the work done
at ESS consisted of writing a stand-alone C++ code able to faithfully reproduce the
dose results obtained in the first place by the Target Division specialists. The need
for such a code will become fully clear in Chapter 4 but, just to give the reader an
idea of the complexity of the original process, Figure 2.1 shows the workflow for the
dose calculations from [11].

Figure 2.1: Original workflow for the dose calculations from [11]. Each blue box represents a
single Mathcad or Excel file. Green boxes are major analysis steps. Red arrows are manual
transfers of data, while purple arrows are automatic transfers of data.


The calculations were run jointly by the Target Division at ESS and Lloyd’s Register
with the aid of three software packages: PTC Mathcad [8], Microsoft Excel [9] and
Lena [10].
Mathcad was used to solve the differential equations describing the internal transport
of activity in the facility (see Sec. 2.1). Lena was used to calculate the relative
concentration needed for the dispersion of activity during the transport outside the
facility to the representative person (see Sec. 2.2). Finally, Excel was used for the
remaining calculations, via three to five different Excel workbooks for each accident
sequence, resulting in the dose to the public.
In this thesis, the need for a compact code with a straightforward flow of information
from input to output comes along with the need to fully understand each step of
the analysis in order to reproduce it. That is the main reason for the extensive
research conducted on equations and parameters missing in the original documen-
tation but presented here. Finally, unless clearly stated, only the processes included
in the code will be discussed, independently of what has been dealt with in [7].

2.1 Internal activity transport


The information needed for the internal transport calculations, like the activity and
thermodynamics of the release, was provided by experts in the Target Division.
It should be noted that the behaviour of these quantities was in most cases very
complex and was simplified to fit the transport model.

2.1.1 Inner Source Term


The activity released from the accidentally damaged object is called the inner source
term, STinner , and is simply a list of nuclides with their associated activities.
Clearly, different accidents have different inner source terms; however, for a specific
accident it is a fraction of one, or a mix of fractions of a few, basic inventories. These
basic inventories are the target material [12], the target helium cooling system gas
[13], the moderator water [14], the reflector water [15], the beryllium reflector [16],
the filter in the target helium cooling system [17], the beam dump [18] and the air
in the active cell [19]. The activities in the inventories are given at the planned
operational lifetime of the object (e.g. five years for the target wheel).
At ESS the estimation of the activation products as a function of the irradiation
history is performed in a two-step simulation process. In the first step, MCNPX
[20] is used to simulate the particle transport conditions in the system of interest.
At the same time, a file that contains the information about the nuclides produced
in the spallation process is generated. These two pieces of information are then
passed to the activation code CINDER [21] to estimate the nuclear inventory
depending on the irradiation history.
Starting from the basic inventory of an object, the maximum activity that might
be released in a specific accident is obtained by multiplying the inventory by the so-
called Damage Ratio (DR), which is simply the weight of the damaged material of
an object divided by the total weight of the object, also called Material at Risk
(MAR). Since this model is aimed at studying only radionuclides released into the air,
the actual inner source term is then given by multiplying the maximum releasable
activity by the Airborne Release Fraction (ARF) of each chemical element; if
several objects are damaged, this process is repeated for each object.
The release fractions of the elements from the target material are treated in some
detail in [22]; however, for the sake of completeness, Table 2.1 lists the elements
considered to be gases, volatile, or able to vaporise, and thereby potentially found
in the gas phase.

Element Description
H Hydrogen
He, Ne, Ar, Kr, Xe Noble gases
N, O, P, S, Se, C Non-metals
F, Cl, Br, I Halogens
As, Te Metalloids
Na, K, Rb, Cs Alkali metals
Zn, Cd Metals

Table 2.1: Elements considered in the safety analyses to be gases, volatile, or able to vaporise.
The inner source term for the postulated accident is then obtained by the following
formula:
STinner = MAR · DR · ARF (2.1)

applied to each element of the inventory.
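As a minimal illustration, Eq. (2.1) can be applied nuclide by nuclide as in the
following C++ sketch; the type and function names are hypothetical and not taken
from the actual thesis code:

    // Minimal sketch of Eq. (2.1), applied nuclide by nuclide; identifiers
    // here are illustrative, not taken from the thesis code.
    #include <map>
    #include <string>

    struct InventoryEntry {
        double activity_Bq; // basic inventory activity of the nuclide (the MAR)
        double arf;         // Airborne Release Fraction of its chemical element
    };

    // ST_inner = MAR * DR * ARF for each nuclide of the inventory.
    std::map<std::string, double> innerSourceTerm(
            const std::map<std::string, InventoryEntry>& inventory,
            double damageRatio) {
        std::map<std::string, double> st;
        for (const auto& [nuclide, entry] : inventory)
            st[nuclide] = entry.activity_Bq * damageRatio * entry.arf;
        return st;
    }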

2.1.2 Transport model


A simplified general model of the ESS facility is made of three main volumes se-
quentially connected from the first to the last volume. Figure 2.2 shows a scheme of
such a model. Rooms or other space volumes are indicated as V0 , V1 and V2 . The
source is placed in the first volume, which usually corresponds to the Monolith Vessel.
It can be seen either as a fixed number of radioactive nuclei at the beginning of the
accident or as an activity source generator s(t), which allows modelling both contin-
uously varying sources and sudden releases like explosions or pipe breaks. Gas flow
from one volume to the next one is indicated as f01 (from V0 to V1 ) and f12 (from
V1 to V2 ). Sometimes there is also the need to include the back flows (f10 and f21 ),
which may develop after a while when the leak area is large, like a door. Each volume
may also have a flow directly out to the environment, indicated with e0 , e1 and e2 .
This model makes possible the evaluation of the number of radioactive nuclides (or,
equivalently, the activity) in the different volumes - N0 , N1 and N2 - and emitted to
the environment - E0 , E1 and E2 .

Figure 2.2: Scheme of the facility for the internal transport model. Rooms volumes are indicated
as V0, V1 and V2. Gas flow from one volume to the next one is indicated as f (e.g. f01 is the flow
from V0 to V1). Each volume may also have a flow out to the environment indicated with e0, e1
and e2. Finally, the number of radioactive nuclides (or activity) in the different volumes are N0,
N1 and N2, while the ones emitted to the environment are E0, E1 and E2.

In this simplification, the three-dimensional rooms are reduced to “nodes”, which
can, in principle, be given different physical properties; in this case, they were given
only the property of volume. Gas flows connect the different nodes with each other
and with the environment (also a node). Another important assumption of the
model is that nuclei that come into a node are immediately spread out over the
whole volume. The change of the number of nuclei (or activity) with time was
originally described in [7] by a set of coupled ordinary differential equations:
$$\begin{aligned}
\dot{N}_0 &= s(t) + f_{10}(t)\,\frac{N_1}{V_1} - N_0\,\lambda - f_{01}(t)\,\frac{N_0}{V_0} - e_0(t)\,\frac{N_0}{V_0} \\
\dot{N}_1 &= f_{01}(t)\,\frac{N_0}{V_0} + f_{21}(t)\,\frac{N_2}{V_2} - N_1\,\lambda - e_1(t)\,\frac{N_1}{V_1} - f_{10}(t)\,\frac{N_1}{V_1} - f_{12}(t)\,\frac{N_1}{V_1} \\
\dot{N}_2 &= f_{12}(t)\,\frac{N_1}{V_1} - N_2\,\lambda - e_2(t)\,\frac{N_2}{V_2} - f_{21}(t)\,\frac{N_2}{V_2} \\
\dot{E}_i &= e_i(t)\,\frac{N_i}{V_i} \qquad i = 0, 1, 2
\end{aligned} \qquad (2.2)$$
As already said, the set (2.2) was originally solved numerically using the software
Mathcad. Since the solver is generally slow and training is required to run proprietary
software like Mathcad, a different path has been chosen for this work.
The problem of transport has been dealt with using a method developed by the
Institute of Modern Physics at the Chinese Academy of Sciences, which has been
collaborating with the ESH&Q Division in the past year. The idea behind this
different approach is to decouple the problem of transport from decay, in order to
treat it in terms of a discrete transport matrix, i.e. a linear algebraic problem. Let’s
generalize the problem to n rooms (including the environment) and, just for simplicity,
assume the gas flows are constant. Let’s define $S_i^t$ and $N_i^t$ as, respectively, the
source and the number of nuclei in room i at time t; if we call $P_{ij} = f_{ij}/V_i$ ($i \neq j$),
then at time $t + \Delta t$ the following set of n approximated equations holds:

$$\begin{aligned}
N_1^{t+\Delta t} &= S_1^t + \Big(1 - \sum_j P_{1j}\Big) N_1^t\,\Delta t + N_2^t P_{21}\,\Delta t + \dots + N_n^t P_{n1}\,\Delta t \\
N_2^{t+\Delta t} &= S_2^t + N_1^t P_{12}\,\Delta t + \Big(1 - \sum_j P_{2j}\Big) N_2^t\,\Delta t + \dots + N_n^t P_{n2}\,\Delta t \\
&\;\;\vdots \\
N_n^{t+\Delta t} &= S_n^t + N_1^t P_{1n}\,\Delta t + N_2^t P_{2n}\,\Delta t + \dots + \Big(1 - \sum_j P_{nj}\Big) N_n^t\,\Delta t
\end{aligned} \qquad (2.3)$$

It should be clear that the set (2.3) is nothing more than the generalized and dis-
cretized form of (2.2) without the decay term. Defining $P_{ii} = 1 - \sum_j P_{ij}$, the set
of n equations (2.3) can be restated in a more compact form:

$$(N_1, \dots, N_n)^{t+\Delta t} = (S_1, \dots, S_n)^t + (N_1, \dots, N_n)^t \cdot \begin{pmatrix} P_{11} & \cdots & P_{1n} \\ \vdots & \ddots & \vdots \\ P_{n1} & \cdots & P_{nn} \end{pmatrix} \cdot \Delta t \qquad (2.4)$$

which also emphasizes the equivalence of this model to a linear mapping of the
n-dimensional vector representing the number of nuclei in each room at time t
to the one at time t + ∆t by means of the transport matrix $P_{ij}$. A simple example
of a transport matrix is shown in Figure 2.3. The computational strength of this
model lies in both the ease with which one can implement it in a code and its fast
execution speed. Moreover, $S_i^t$ and $P_{ij}^t$ can be adjusted at each step according to,
for example, transient processes, stochastic effects or aerosol dynamics.
Up to this point only the “material” transport has been treated, but since we are
dealing with activity, the decay needs to be reintroduced. In this approximation
the two phenomena are decoupled, so one can simply multiply, at each step, the vector
$(N_1, \dots, N_n)^{t+\Delta t}$ by the factor $e^{-\lambda \Delta t}$, with λ the decay constant of a specific nuclide.
This means that the process of transport must be repeated for each nuclide involved
in the release, which was also a feature of the Mathcad approach, except that the
time for a single run was much longer.
A validation of this computational model is presented in Sec. 3.3.3, where its results
are compared with the ones obtained by the Mathcad solver for a long-lived nuclide
in a specific accident scenario.

Figure 2.3: Example of transport in a simplified scenario. Image taken from Quick Radioactivity
Transport Evaluation with Transport Matrix Method, by kind permission of Xuezhi Zhang, Institute
of Modern Physics, Chinese Academy of Sciences.

2.1.3 Outer Source Term


The list of nuclides with their activities emitted to the environment during the
whole accident is called the outer source term, STouter . Solving the transport
equations (2.2), the last three equations give the number of nuclei that have passed
the border between the facility and the environment at any time. The value at the
last step gives the total number of nuclei emitted, and multiplying this by the decay
constant gives the outer source term for a nuclide with that half-life.
In the aforementioned model there is no actual transport of the nuclei within the
nodes. As already said, the nuclei that come into a node are immediately spread
out over the whole volume, so in principle the emission starts almost immediately.
Because the flow of nuclei from one node to the next is only a fraction per second
of the content of the upstream node, the transport will nevertheless take time and
the emission rate will be small in the beginning. The total number of emitted nuclei
will increase continuously and it may take a long time until the “last” nucleus leaves
the facility. In [7], quantitative definitions for the timings of the emission, which
have also been used in this analysis, have been fixed (a sketch of how they can be
extracted is given after the list):

• Start of emission: the time when 1% of the full emission of a long-lived nuclide
has been emitted.

• Emission duration: the time between when 1% and 90% of the full emission
of a long-lived nuclide has been emitted.

• Emission time: the time between when 1% and 50% of the full emission of a
long-lived nuclide has been emitted.
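Assuming the cumulative emission curve of a long-lived nuclide is available from
the transport model, these timings can be extracted as in the following illustrative
C++ helper (the sampling convention and all names are assumptions of this sketch,
not code from [7]):

    // Illustrative extraction of the timing definitions above from a cumulative
    // emission curve E (non-empty) sampled every dt seconds.
    #include <cstddef>
    #include <vector>

    struct EmissionTimings { double start, duration, time; };

    EmissionTimings timings(const std::vector<double>& E, double dt) {
        const double full = E.back();
        auto crossing = [&](double frac) {           // first time E >= frac * full
            for (std::size_t k = 0; k < E.size(); ++k)
                if (E[k] >= frac * full) return k * dt;
            return (E.size() - 1) * dt;
        };
        const double t01 = crossing(0.01);
        const double t50 = crossing(0.50);
        const double t90 = crossing(0.90);
        return {t01, t90 - t01, t50 - t01};          // start, duration, emission time
    }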

For some accident scenarios, especially those that have a fast release of the inner
source term and where the emission path goes through several nodes, the emission
time between 50% and 90% may be more than one order of magnitude longer than
the time between 1% and 50%. Since the local relative concentration in the atmo-
sphere takes lower values for longer emission times (see Sec. 2.2.3), it is pointed out
in [7] that using the emission duration may result in too low doses. Thus, the shorter
but not unrealistic choice of the emission time when calculating the relative
concentration in the dispersion model was made for conservative purposes.

2.2 Dispersion
The release of radionuclides into the atmosphere causes some radiation exposure
of humans and other biota near the ESS site. The dynamics of those radionu-
clides depends primarily on the weather systems that they encounter, therefore an
overview of particular aspects of meteorology is essential to understand atmospheric
dispersion models [23][24].

2.2.1 Atmospheric turbulence


Atmospheric turbulence is mainly responsible for dilution at the local scale,
while the long-range transport of atmospheric properties is due to advection, which
is a function of wind and of air property gradients. If there were no turbulence, the
local dilution of a substance would be much slower, since it would rely solely on the
random motion of gas molecules (molecular diffusion). Atmospheric turbulence has
two principal sources. The first is wind shear, that is, the change of wind speed and
direction in space, especially with respect to the vertical direction; this originates
the so-called mechanical turbulence. The second is the spontaneous vertical motion
of air masses, due to the heating of the planetary surface during the day; this is
the convective, or thermal, turbulence, and it is due to air instability. Local
measurements of atmospheric variables at high frequency, e.g. with sampling times
of less than one minute, reveal great variability, especially close to the planetary
surface, because of the turbulence.

Wind

Wind in meteorology is defined as the velocity of air at a point. Wind direction
is the blowing direction in degrees clockwise from true north, ignoring the vertical
component of the actual three-dimensional wind field. Wind speed usually means the
mean wind speed, which is an average of measurements taken over a 10-minute period.
Sudden increases lasting a few seconds are referred to as gusts. Obstacles like trees,
buildings and chimney stacks enhance “gustiness” (better known as mechanical
turbulence); however, the mean wind can only be reduced by the friction created by
a rough surface (e.g. the earth’s surface). That is why wind measurements are
normally made in an area of undisturbed flow at 10 m above ground level.
In general, the wind speed in the so-called “boundary layer” increases with increasing
height until it becomes steady and proportional to the pressure gradient (geostrophic
wind). A convenient way to describe the vertical profile is by a power law [25]:

$$u(z) = u_{10} \cdot \left(\frac{z}{10}\right)^n \qquad (2.5)$$
where u(z) is the wind speed at height z and u10 is the measured wind speed at the
reference height of 10 m. The power n is a function of surface roughness (see next
section) and, to some negligible extent [25], atmospheric stability (see Atmospheric
Stability).
Finally, one expects the concentration of substances released into the atmosphere,
e.g. from a chimney stack, to be inversely proportional to the wind speed at that
point. For example, if the wind speed doubles, substances will be diluted into
twice as much air per unit time, thus halving the concentration per unit volume of
air.
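For reference, the profile (2.5) is straightforward to implement; a minimal sketch
(the function name is illustrative):

    // Minimal sketch of the power-law wind profile (2.5); u10 in m/s, z in m,
    // n from Table 2.2 (e.g. 0.2 for root crops).
    #include <cmath>

    double windSpeed(double u10, double z, double n) {
        return u10 * std::pow(z / 10.0, n);
    }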

Surface roughness

As already discussed, objects on the earth’s surface (roughness elements) have a
frictional effect on the wind flow. In order to account for the complexity of the
landscape’s roughness elements, a single parameter called the surface roughness
length z0 is used. In general, z0 determines the shape of the profile (2.5); in
particular, n increases with increasing surface roughness. Typical values for z0 and
n are shown in Table 2.2. The lands surrounding the ESS site are mostly devoted
to the production of common crops, like sugar beets and cereals, therefore a value
of z0 = 0.1 m has been fixed for the model.

Landscape z0 (m) n
Sea 10−4 0.07
Sandy desert 10−3 0.1
Short grass 0.005 0.13
Open grassland 0.02 0.15
Root crops 0.1 0.2
Farmland 0.2-0.3 0.24-0.255
Open suburbia 0.5 0.3
Cities, woodlands 1 0.39

Table 2.2: Typical values of roughness length and n for various types of terrain, from [25].

Atmospheric Stability

The vertical mixing of air is usually addressed by what’s called thermal turbulence.
If a “parcel” of air is moved upwards by the wind it expands adiabatically, i.e. with-
out any heat transfer to the outside, due to the reduction in atmospheric pressure
with height. Since some of the heat within the parcel is used up in this expansion,
the temperature of the air parcel decreases. Rising air cools and descending air
warms at a rate of about 1 ◦C per 100 m; this is known as the dry adiabatic lapse
rate (Γ). In light winds the temperature change with height is influenced mainly
by exchange of thermal radiation in the atmosphere. If the temperature reduction
within the rising parcel matches the temperature decrease in its surroundings then
it will have the same density as its surroundings and its vertical motion will be nei-
ther suppressed nor enhanced. This is known as “neutral stability” since a parcel
of air displaced in the vertical neither gains nor loses buoyancy.
When the temperature falls with height at a rate less than Γ or even increases with
height (a temperature inversion) the boundary layer is said to be “stable”. This is
because air displaced upward cools with respect to its surroundings and therefore
its upward motion is suppressed. Stable conditions typically occur at night, particu-
larly when there is no cloud cover coupled with light winds. If the temperature falls
with height at a rate greater than the Γ then air displaced in the vertical continues
to be warmer and thus less dense than its surroundings and therefore continues to
rise until constrained at a greater height - this is an “unstable boundary layer”.
These conditions typically occur on cloudless, hot, sunny days in light winds when
the earth’s surface is being heated more than usual.
Pasquill [26] developed an empirical scheme which defined stability in terms of six
categories, A to F, in which A represents the most unstable conditions, D neutral
and F stable. Later a very stable category, G, was added. The approach suggested
by Pasquill depends on knowing the wind speed at a height of 10 m and assessing
the amount of incoming solar radiation (insolation) in qualitative terms. Since the
categorisation of the atmospheric stability has an important influence over the dis-
persion estimates of material released to the atmosphere, several scientists have used
experimental data to derive dispersion parameters for use in models, as a function
of stability category. For example, curves of these parameters against downwind
distance were suggested by Gifford [27] and became known as the Pasquill-Gifford
curves (see also [25]).

2.2.2 Dispersion basic model

The model used in [7] to predict the dispersion of radionuclides into the environment
is a time-integrated Gaussian plume model. There are many versions in use around
the world and a major reason for that is the difference in the way that the dispersion
parameters are derived. The most widely used and practical version of this semi-
empirical model was published as National Radiological Protection Board Report
R91 [25]. This subsequently became known as the “R91 model” which is also the
one adopted in the software Lena [10].
The model assumes that the diffusion of material in the direction of the wind is
relatively small compared to the transport by the wind itself (advection). Along the
crosswind and vertical directions, instead, the spread is described by a Gaussian
distribution characterised by standard deviations σy and σz . Thus, if the origin of the co-ordinate
system is at ground level directly beneath the discharge point, the integrated activity
concentration C(x, y, z) at the downwind distance x of a radionuclide is given in
the basic form of the Gaussian plume model as:

$$C(x, y, z) = \frac{Q_0}{2\pi\sigma_y\sigma_z u_{10}} \exp\left[-\left(\frac{y^2}{2\sigma_y^2} + \frac{(z-h)^2}{2\sigma_z^2}\right)\right] \qquad (2.6)$$
where Q0 is the total amount released (Bq) and h the release height (m). The
concentration obtained refers to releases which are short enough to be unaffected
by changes in wind direction. For releases of long duration the horizontal spread
of material is heavily influenced by changes in the wind direction rather than by
the dispersion and diffusion of material about the plume’s axis. The conservative
choice made in [7] is to evaluate the concentration downwind (worst scenario).

Reflections from the ground and inversion layer

When material is discharged from an elevated source, the plume will disperse ver-
tically and eventually reach the ground. On reaching the ground a non-depositing
plume is effectively “reflected” back into the atmosphere. This can be accounted for
by means of a virtual source at a distance h below the ground. The air concentration
at (x, y, z) is then given by:

$$C(x, y, z) = \frac{Q_0}{2\pi\sigma_y\sigma_z u_{10}} \exp\left(-\frac{y^2}{2\sigma_y^2}\right) \times \left\{ \exp\left[-\frac{(z-h)^2}{2\sigma_z^2}\right] + \exp\left[-\frac{(z+h)^2}{2\sigma_z^2}\right] \right\} \qquad (2.7)$$
On the other side, when the plume reaches inversion conditions, i.e. positive
temperature gradients in the lower levels of the atmosphere, the dispersed
material is again reflected. The plume is then trapped between the inversion layer
and the ground (see Figure 2.4). The effect of introducing multiple reflections is
that the radionuclide airborne concentration at the point of interest is obtained by
summation of the contributions from many virtual sources. The optimal number of
contributions depends on the relative values of σz and the height A of the mixing
layer but, in general, sufficient accuracy is obtained with “first order” reflections
[25]. The concentration distribution can then be expressed as:

$$C(x, y, z) = \frac{Q_0}{2\pi\sigma_y\sigma_z u_{10}} \exp\left(-\frac{y^2}{2\sigma_y^2}\right) \cdot F(h, z, A) \qquad (2.8)$$
where F is the sum over all virtual sources:
$$\begin{aligned}
F(h, z, A) ={} & \exp\left[-\frac{(z-h)^2}{2\sigma_z^2}\right] + \exp\left[-\frac{(z+h)^2}{2\sigma_z^2}\right] \\
& + \exp\left[-\frac{(z-2A+h)^2}{2\sigma_z^2}\right] + \exp\left[-\frac{(z-2A-h)^2}{2\sigma_z^2}\right] \\
& + \exp\left[-\frac{(z+2A-h)^2}{2\sigma_z^2}\right] + \exp\left[-\frac{(z+2A+h)^2}{2\sigma_z^2}\right]
\end{aligned} \qquad (2.9)$$
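Putting Eqs. (2.8) and (2.9) together, a minimal C++ sketch of the integrated
concentration with first-order reflections could read as follows (σy and σz are
assumed to be evaluated at the downwind distance x as in Sec. 2.2.3; all names
are illustrative):

    // Sketch of Eqs. (2.8)-(2.9): time-integrated air concentration (Bq s/m^3)
    // with first-order reflections from the ground and the mixing layer.
    #include <cmath>

    namespace {
    double gauss(double a, double s) { return std::exp(-a * a / (2.0 * s * s)); }
    }

    // Q0: total release (Bq), h: release height (m), A: mixing-layer height (m),
    // u10: wind speed at 10 m (m/s), sy/sz: dispersion parameters at distance x.
    double concentration(double Q0, double y, double z, double h, double A,
                         double u10, double sy, double sz) {
        const double pi = 3.141592653589793;
        const double F = gauss(z - h, sz) + gauss(z + h, sz)
                       + gauss(z - 2 * A + h, sz) + gauss(z - 2 * A - h, sz)
                       + gauss(z + 2 * A - h, sz) + gauss(z + 2 * A + h, sz);
        return Q0 / (2.0 * pi * sy * sz * u10) * gauss(y, sy) * F;
    }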

Figure 2.4: Virtual source approach to account for the reflections from the ground and the top
of the mixing layer

2.2.3 Dispersion parameters


The application of the model requires, first of all, a categorisation of the meteoro-
logical conditions into the Pasquill-Gifford stability categories. Once the stability
category has been determined, one can choose the dispersion coefficients to use for
the calculations.

Stability Class

The original work by Pasquill [26] gave the indications shown in Table 2.3 on how
to assess the stability class based on wind speed and solar irradiation.

u10 (m/s) Insolation Night
 Strong Moderate Slight ≥4/8 low cloud ≤3/8 cloud
<2 A A-B B - -
2-3 A-B B C E F
3-5 B B-C C D E
5-6 C C-D D D D
>6 C D D D D

Table 2.3: Pasquill stability categories scheme based on insolation and wind speed, from [26].

In 1973, Smith developed the Pasquill scheme further, introducing a more objective
approach to estimate the sensible heat flux and substituting the seven categories with
a continuous numerical scale [28]. In the new generations of models for practical
applications, turbulence is usually defined in terms of the boundary layer height and
the Monin-Obukhov length [24], both directly measurable or functions of measurable
quantities.
In the safety analysis at ESS the Pasquill scheme has been adopted; however, the
choice of stability class was made based on precise assumptions on the weather
conditions. In particular, two scenarios have been assumed:

• the median weather P50% , i.e. the weather with the statistical median value of
  the parameters.

• the weather P95% , defined such that only 5% of the values in the parameter
  distributions result in higher doses and 95% result in lower doses to the public.

While the first represents the most realistic assumption of the two, the second
is by far the most conservative and, for this reason, the P95% is used in the safety
analysis conclusions.
The data that led to the choice of the stability class were extracted from [29]; in
particular, according to [7], there was evidence that Pasquill stability class D
should be used for the P50% weather. Then, since one expects a larger dose for
more stable weather, stability class F has been chosen for the P95% scenario.
In the context of this thesis, independent research on stability class data in
southern Sweden has been conducted and is presented in Chapter 4.

Vertical standard deviation σz

The vertical standard deviation σz at a given distance from the source is a function
of the atmospheric stability, downwind distance and surface roughness, and its value
is usually derived from Smith’s curves [25][28]. To facilitate numerical analysis and
implementation in computer applications, Hosker fitted Smith’s curves and found
an analytical formula for σz in each Pasquill category, together with a correction for
surface roughness [30]. The fitting equation found by Hosker is:

$$\sigma_z = \frac{a x^b}{1 + c x^d} \cdot SRF(z_0, x) \qquad (2.10)$$
where the coefficients as a function of stability class can be found in Table 2.4.
Surface Roughness Factor (SRF) for z0 different from 0.1 m is given by:

$$SRF(z_0, x) = \begin{cases} \ln\left[f x^g \left(1 + \dfrac{1}{h x^j}\right)\right] & \text{for } z_0 > 0.1\ \mathrm{m} \\[1ex] 1 & \text{for } z_0 = 0.1\ \mathrm{m} \\[1ex] \ln\left[f x^g \left(\dfrac{1}{1 + h x^j}\right)\right] & \text{for } z_0 < 0.1\ \mathrm{m} \end{cases} \qquad (2.11)$$

Pasquill-Gifford stability category a b c d


A 0.112 1.06 5.38 × 10−4 0.815
B 0.13 0.95 6.52 × 10−4 0.75
C 0.112 0.92 9.05 × 10−4 0.718
D 0.098 0.889 1.35 × 10−4 0.688
E 0.0609 0.895 1.96 × 10−4 0.684
F 0.0638 0.783 1.36 × 10−4 0.672
Roughness length z0 (m) f g h j
0.01 1.56 0.048 6.25 × 10−4 0.45
0.04 2.02 0.0269 7.76 × 10−4 0.37
0.4 5.16 -0.098 18.6 -0.225
1.0 7.37 -0.0957 4.29 × 103 -0.6
4.0 11.7 -0.128 4.59 × 104 -0.78

Table 2.4: Coefficients given by Hosker for the fitted vertical dispersion σz adapted from [25].

In the software Lena, along with Hosker’s formula, an initial vertical wake
broadening of the plume ∆Z0 = 20 m is also implemented [10]. The reason for this
contribution is to account for effects of extra turbulence, due to the presence of the
building, with an arbitrary, although reasonable, initial value of σz . Since a value
of z0 = 0.1 m has been fixed throughout the analysis, the formula used to evaluate
σz is then:

$$\sigma_z = \sqrt{\left(\frac{a x^b}{1 + c x^d}\right)^2 + (\Delta Z_0)^2} \qquad (2.12)$$
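A minimal C++ sketch of Eqs. (2.10) and (2.12) for the z0 = 0.1 m case, where
SRF = 1 (the coefficient names follow Table 2.4; the function name is illustrative):

    // Sketch of Eqs. (2.10) and (2.12) for z0 = 0.1 m, i.e. SRF = 1; a, b, c, d
    // are the Table 2.4 coefficients and x the downwind distance in metres.
    #include <cmath>

    double sigmaZ(double x, double a, double b, double c, double d,
                  double dZ0 = 20.0) {               // Lena wake broadening (m)
        const double hosker = a * std::pow(x, b) / (1.0 + c * std::pow(x, d));
        return std::sqrt(hosker * hosker + dZ0 * dZ0);
    }
    // Example for stability class F: sigmaZ(x, 0.0638, 0.783, 1.36e-4, 0.672)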

Horizontal standard deviation σy

The dispersion of the plume in the horizontal plane is the result of turbulence
processes together with fluctuations in wind direction. A parametrization of the
first contribution as a function of stability class and downwind distance in open-
country conditions is due to Briggs [31]:

$$\sigma_y^t = \frac{\varepsilon x}{\sqrt{1 + \gamma x}} \qquad (2.13)$$

with coefficient values given in Table 2.5. The second contribution to the horizontal
dispersion arises from the necessity to account for changes in wind direction in
releases that last longer than 30 minutes. This component $\sigma_y^w$ may be estimated
from the following equation from [25]:

$$\sigma_y^w = 0.065\, x \sqrt{\frac{7 t}{u_{10}}} \qquad (2.14)$$
where t is the emission duration in hours (see the definition in Sec. 2.1.3). These two
components can be thought of as acting independently, so that a single value for σy
may be obtained by using the equation suggested by Moore [33]:

$$\sigma_y = \sqrt{\sigma_t^2 + \sigma_w^2} \qquad (2.15)$$

Pasquill-Gifford stability category ε γ
A 0.22 0.0001
B 0.16 0.0001
C 0.11 0.0001
D 0.08 0.0001
E 0.06 0.0001
F 0.04 0.0001

Table 2.5: Coefficients given by Briggs for the turbulence-process contribution to σy , from [32].

Once again, an initial spread of the plume due to wake effects has been added in
Lena as ∆Y0 = 20 m. Finally, the expression used in the calculations is:

$$\sigma_y = \sqrt{\sigma_t^2 + \sigma_w^2 + (\Delta Y_0)^2} \qquad (2.16)$$

Wind speed and boundary layer depth

The basic Gaussian plume model equation (2.6) requires the commonly used mean
wind speed measured at a height of 10 m. This is because the empirical derivation
of (2.14) in the basic model implies the use of the wind speed measured at 10 m.
Clarke in [25] recognises that this procedure gives rise to an error in the exponential
term $\exp(-y^2/2\sigma_y^2)$ for off-axis concentrations well above the ground for
highly elevated sources, but the error is acceptably small in all cases. For specific
purposes one can always adjust the 10-metre wind speed value by means of (2.5).
In the ESS accident analyses, the statistics for the wind speed were extracted from
the open data of the weather station Lund LTH on the SMHI (Swedish Mete-
orological and Hydrological Institute) website. The dataset contained 10-minute
average values taken at 10 m and expressed as a mix of integer values and values
with one decimal digit in equal amounts. The whole dataset, the integer part and
the non-integer part were analysed to assess the P50% and P95% values. The P95%
results from the whole and the non-integer parts were consistent, but the one from
the integer part was very close to zero. Therefore, only the non-integer set was used,
obtaining a P50% wind speed of 2.7 m s−1 and a P95% wind speed of 0.9 m s−1 (see
Figure 2.5). Finally, since the boundary layer depth is not measured at many meteoro-
logical stations (including Lund LTH), the common values presented in Table 2 by
Clarke in [25] have been used.

Figure 2.5: Wind speed and cumulative distribution obtained from open data of the weather
station Lund LTH. The dataset contains almost 40000 quality-assured, 10-minute average values
taken at 10 m. Image taken from [7].

2.2.4 Extensions to the basic model


The basic model presented in Sec. 2.2.2 refers to concentrations of inert, long-lived
material which remains in the plume. However, there are a number of processes
which can remove activity from a diffusing plume. Such processes include radioac-
tive decay and various deposition mechanisms, which can be taken into account
by “depletion factors” that modify the initial source so that the concentration of
material at a given point is smaller.

Radioactive Decay

In the case of radionuclides with radioactive half-lives which are comparable to the
travel time to a given point, radioactive decay will reduce the activity concentra-
tion of the radionuclide as it travels downwind; the modified concentration can be
obtained multiplying Q0 by the decay depletion factor DFdecay:

DFdecay = exp(−λ · x/u10)    (2.17)

with λ again the decay constant of a specific nuclide and x/u10 the travel time.

Dry Deposition

The deposition of radionuclides from the atmosphere to the ground may occur by
diffusive dry deposition. Dry deposition is a complex process by which particulates
and gases are removed from the air by turbulent impaction on the ground, or on
obstacles on it (e.g. vegetation) or by chemical reaction at the air-ground surface.

The amount of radioactive substances deposited depends on the nature of the air-
borne material, the ground surface (e.g. roughness and vegetation type) and, to a
lesser extent, on the wind speed and atmospheric stability [34]. It can be estimated
using the concept of “deposition velocity” vg . Deposition velocity is defined as the
ratio of the amount of material deposited on the ground per unit area per unit time,
to the air concentration per unit volume just above the ground. The integrated dry
deposition Cground (Bq/m2 ) is then simply given by:

Cground = vg · C(x, y) (2.18)

The corresponding depletion factor to correct the source strength is derived, for example, in [35] and is given by:

DFdry = exp[ (vg/u10) · Fdry(x) ]    (2.19)

where

Fdry(x) = −√(2/π) ∫₀ˣ dx′ (1/σz(x′)) { exp[−h²/(2σz²(x′))] + exp[−(h + 2A)²/(2σz²(x′))] + exp[−(h − 2A)²/(2σz²(x′))] }    (2.20)

This equation applies for any release duration, but the integral cannot, in general, be evaluated analytically.
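In practice the integral in (2.20) can be evaluated numerically. The following is a minimal C++ sketch of both depletion factors, assuming a simple midpoint rule for Fdry; the function names are illustrative and not those of the actual thesis code.

    #include <cmath>

    // Decay depletion factor, eq. (2.17): lambda in 1/s, x in m, u10 in m/s.
    double dfDecay(double lambda, double x, double u10) {
        return std::exp(-lambda * x / u10);
    }

    // Dry deposition depletion factor, eqs. (2.19)-(2.20), via midpoint rule.
    // sigmaZ(x) is the vertical dispersion of eq. (2.12), h the emission
    // height, A the boundary layer depth and vg the deposition velocity.
    template <class SigmaZFunc>
    double dfDry(double x, double vg, double u10, double h, double A,
                 SigmaZFunc sigmaZ, int nSteps = 1000) {
        const double pi = 3.14159265358979;
        double dx = x / nSteps, integral = 0.0;
        for (int i = 0; i < nSteps; ++i) {
            double xi = (i + 0.5) * dx;          // midpoint of the sub-interval
            double sz = sigmaZ(xi), s2 = 2.0 * sz * sz;
            integral += dx / sz * (std::exp(-h * h / s2)
                                 + std::exp(-(h + 2*A) * (h + 2*A) / s2)
                                 + std::exp(-(h - 2*A) * (h - 2*A) / s2));
        }
        double Fdry = -std::sqrt(2.0 / pi) * integral;   // eq. (2.20)
        return std::exp((vg / u10) * Fdry);              // eq. (2.19)
    }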
The most critical parameter in the dry deposition model is the aforementioned deposition velocity vg. A comprehensive review has been given by Underwood in [36], while Table 2.6 gives some values of the deposition velocity used in the safety analyses at ESS.

Format                vg (m/s)
Gas                   0
Particles             0.001
Organic iodine gas    0.0001
Inorganic iodine      0.01

Table 2.6: Deposition velocities used in the analyses at ESS [7].

Wet deposition

Radioactive particulates and gases in the plume may also be removed by rain falling through the plume, which is called "washout" (or below-cloud scavenging), or by incorporation of nuclides in the rain cloud, known as "rainout" (or in-cloud scavenging). The wet deposition rate is therefore a function of the total activity throughout
the depth of the plume rather than only the activity concentration at ground level.
For the analysis at ESS, data were extracted from the same SMHI open data as

wind speed. The dataset showed that the P50% hourly rain is 0 mm/h while the P95% value is 0.34 mm/h. In [7] it is stated that rain may increase the effective dose from ground contamination by a factor of two but, since the ground dose contributes about 20–50% of the total effective dose to the public and is evaluated with strongly conservative assumptions, no wet deposition has been considered in the model. A deeper study has been conducted as a part of this thesis in Chapter 4.

In the safety analyses, rather than C, the concentration relative to the emitted activity C/Q0 is evaluated. Moreover, the representative person was conservatively assumed to live at 300 m along the central axis. Putting the information together, the relative concentration RC at ground level as a function of weather and emission height has been calculated as:

RC = [1/(2π·σy·σz·u10)] · F(h, 0, A) · DFdecay · DFdry    (2.21)

2.3 Dose assessment


The dose assessment model was made in compliance with the instructions given by the Swedish Radiation Safety Authority (SSM) in [37]. The dose exposure pathways for the public identified in the safety analysis are shown in Figure 2.6 and are mainly of two types:

• external exposure due to radionuclides dispersed into the environment;

• internal exposure due to radionuclides from within the body.

In the first case, the representative person of the public will mainly receive dose from the external radiation of both the passing plume (cloudshine) and the activity deposited on the soil (groundshine). In the second case, the dose will be a result of inhalation of contaminated air during plume passage or ingestion of radionuclides embedded in food from contaminated soil. The models and equations presented refer mainly to [38].

2.3.1 External exposure


Cloudshine

During the passage of the radioactive plume, the effective dose received by the
representative person is expected to be proportional to the concentration in air
of radionuclides at ground level. The proportionality factor is usually known as
effective dose coefficient for cloudshine DCcloud . It depends on the nuclide and is
measured in (Sv/day)/(Bq/m3 ). In the safety analysis at ESS, these coefficients are
taken from [39], while the effective dose for each nuclide is evaluated simply as:

Dcloud = Q0 · RC · DCcloud = C · DCcloud (2.22)



Figure 2.6: Scheme of the public dose pathways identified in the safety analysis at ESS. Blue
arrows denote paths for internal exposure, green ones for external exposure and orange arrows
outline the transfer of radionuclides. Main references are [38] and [5].

Groundshine

A fraction of the activity released is deposited on the ground during the plume
passage as already seen in Sec. 2.2.4, causing external exposure. The total amount
of activity deposited is given by (2.18). Again, a proportional relationship
between ground concentration and effective dose is modelled with groundshine dose
coefficients taken from [39]. Integrating the dose rate for the time of exposure,
assumed to be 1 year but with only 8 hours per day spent out in the fields, the
following equation is obtained:

Dground = Cground · (1 − e^(−λ·1 y)) · (1/λ) · (Texp/1 day) · DCground    (2.23)

where, if we fix Texp = 8 h, then Texp/1 day = 1/3.
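As a worked example, eq. (2.23) can be coded directly once λ is expressed in 1/day; a minimal C++ sketch with illustrative names follows.

    #include <cmath>

    // Groundshine dose, eq. (2.23): one year of exposure with TexpHours
    // spent outdoors per day. Cground in Bq/m2, lambda in 1/day and
    // DCground in (Sv/day)/(Bq/m2); the decay integration over one year
    // is done analytically.
    double doseGround(double Cground, double lambda, double DCground,
                      double TexpHours = 8.0) {
        const double oneYearDays = 365.0;
        return Cground * (1.0 - std::exp(-lambda * oneYearDays)) / lambda
                       * (TexpHours / 24.0) * DCground;
    }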

2.3.2 Internal exposure


Inhalation

The committed effective dose from inhalation of radioactive substances during the passage of the cloud relies on the activity air concentration, the inhalation rate and the dose coefficients. A simple model allows evaluating the dose as follows:

Dinh = Q0 · RC · Rinh · DCinh    (2.24)

where Rinh is the representative adult breathing rate in m³/s and DCinh is the dose coefficient in Sv/Bq. For the ESS safety analysis, the inhalation rate in adults has been assumed to be 2.56 × 10−4 m³/s with [40] as reference.
The effective dose coefficients for inhalation, instead, are mainly from Annex G,

Table G1 in [41] for particulate aerosols. However, for most nuclides the dose
coefficients are not specified for different chemical forms, while in Annex H of the
same report chemical form is stated for soluble or reactive gases, but coefficients
are given only for a limited number of elements. Moreover, the processes leading to
emission during potential accidents at ESS are complex and difficult to foresee, so
that it will not always be possible to know in advance the chemical forms of the released nuclides. Therefore, the most conservative dose coefficients have been selected, i.e. if the dose coefficient given in Annex H is higher than in Annex G, the higher value is used, except for tritium.
Since there are no organic compounds in the target, it has been assumed that the
tritium will be released in a chemical inorganic form. The dose coefficient used for
tritium is then set to the value of HTO, which is more conservative than the one of
HT (see Table H1 in [41]). Furthermore, a release of HT will react with the oxygen
and water molecules in the air and a fraction will be transferred to HTO during the
dispersion, leading to some exposure due to HTO inhalation in any case [38].

Ingestion

The model adopted to assess the radiological effects of food intake from a contam-
inated soil is widely discussed in [42]. The agricultural land surrounding ESS mainly produces common crops (sugar beets and cereals). Contributions from meat or dairy product contamination are excluded since there are no cattle or any other livestock grazing around ESS.
The crops can be contaminated via two main processes:

• interception and translocation of activity deposited on the surface of the plant;

• root uptake from the activity deposited on the soil.

Both these processes can contribute to the total uptake by the plant, especially
during growing season.

Interception is defined in [43] "as the fraction of a radionuclide deposited by dry and wet deposition that is initially retained by the vegetation". The main antagonist process is "weathering", i.e. removal by wind and rain. Even though
part of the initially intercepted activity can be removed by weathering, it still has
a central role in radioecological models since it may cause relatively high activity
concentrations in feed and foods.

Translocation describes, on the other hand, "the systemic transport of radionuclides in the plant subsequent to foliar uptake" [43], so it has very little impact
on the long-term radiological consequences. However, for estimating radionuclide
concentrations in foods and for the assessment of dose to the public, the systemic

transport of radionuclides is important for plants of which only specific parts are
edible or used as feed. One practical definition of translocation is the ratio between
the activity of edible part of crops at harvest and the activity deposited, with re-
sulting value expressed in percent.

In [42] a lumped parameter LP for all those processes (interception, weathering and translocation), describing the transfer from deposition to contamination of the edible part of the plant, has been used. The assumed value, independent of the nuclide, is 0.01 m²/kg, based on the assumption of a remaining fraction (rf) on the surface of 10% due to interception and weathering, a translocation (tl) of 5% and a yield of 0.5 kg/m² [44]. The lumped parameter is obtained by the expression:

LP = (rf · tl) / yield    (2.25)
A deeper analysis of the parameter values has been conducted in Chapter 4. If a delay before consumption Tcon of half a year is assumed, the concentration in food crops can be evaluated as:

Ctranslocation = Cground · LP · e−Tcon λ (2.26)

and the corresponding single-harvest crop dose due to ingestion:

Ding = Ctranslocation · Con · DCing (2.27)

where Con is the consumption of food from crops, assumed to be 100 kg/year, while
DCing is the effective dose coefficient for ingestion of a specific radionuclide (values
from Table F1, ANNEX F of ICRP report [41]).
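Equations (2.25)–(2.27) chain together straightforwardly; the following minimal C++ sketch (illustrative names, with λ in 1/day and Tcon in days assumed) shows the single-harvest ingestion dose.

    #include <cmath>

    // Single-harvest ingestion dose, eqs. (2.25)-(2.27). Cground in Bq/m2,
    // LP in m2/kg, Con in kg/year and DCing in Sv/Bq.
    double doseIngestion(double Cground, double LP, double lambda,
                         double TconDays, double Con, double DCing) {
        double Cfood = Cground * LP * std::exp(-lambda * TconDays); // eq. (2.26)
        return Cfood * Con * DCing;                                 // eq. (2.27)
    }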

Root Uptake is typically represented by the plant/soil concentration ratio. The description of the process can be very complex, but an extensive overview is given in [45], which is also the reference for the model. The following generic expression for screening purposes has been used for the assessment of the contamination of the edible part of the vegetation Croot for each nuclide:
Croot = Cground · (Tfe/ρ) · e^(−Tcon·λ)    (2.28)

where ρ is the surface soil density of 260 kg/m² [45] and Tfe is the element-specific transfer factor from soil to crops in (Bq/kg plant fresh weight)/(Bq/kg soil dry weight), with Table XI in [45] as reference for the values. The main limit of this practical model is that it does not consider the type of crop, yield, depth of soil or migration of the nuclides in the soil.
Similarly to translocation, the dose from consumption of first-year crops contaminated due to root uptake is:

Ding = Croot · Con · DCing (2.29)



In [38] the integrated effective dose from consumption during 70 years (first year + 69 years) and to infinity is also considered.
In the code written for this thesis, the root uptake process has not been considered. The main reason is shown in Table 2.7. In fact, the only two dose-dominant nuclides in

Nuclide   Cloudshine¹    Groundshine²   Inhalation³   Ingestion⁴    Tfe⁵
3H        0              0              1.8 × 10⁻¹¹   4.2 × 10⁻¹¹   0
14C       2.2 × 10⁻¹³    1.1 × 10⁻¹⁵    5.8 × 10⁻⁹    5.8 × 10⁻¹⁰   0

¹ in (Sv/day)/(Bq/m³) from [39]
² in (Sv/day)/(Bq/m²) from [39]
³ in Sv/Bq from [41]
⁴ in Sv/Bq from [41] (OBT value for 3H because more conservative)
⁵ in (Bq/kg plant fresh weight)/(Bq/kg soil dry weight) from [45]

Table 2.7: Effective dose coefficients and transfer factors from soil to crops used in the analysis for the two dose-dominant nuclides in the accident scenario studied: 3H and 14C. References in the text.

the accident scenario studied in this work (Sec. 3.3.2), i.e. 3H and 14C, both have null transfer factors from soil to crops according to [42].
The issue of assessing root uptake for 3H and 14C is actually too complex to be reduced to just a null transfer factor. In ANNEX III of [45] the problem is treated separately for screening purposes, and it has been studied in detail as a source of uncertainty for the ESS model in Chapter 4.

In Appendix A the ROOT Code Workflow is presented; it should be used as a vademecum throughout the following analysis process.
Chapter 3

Accident Analysis

Accidents can occur in any complex facility and ESS is no exception. Due to the high energy and power of the proton beam, both prompt radiation and radioactive material are produced during normal operation [5]. The analyses are subject to SSM requirements imposing them to be carried out in a prescribed deterministic way (see [47]).
In this chapter the analysis of the accident scenario chosen for the thesis and the results of the previously discussed model will be presented. Notably, the choice of the accident was dictated by a criterion of simplicity, since the model and methods for the uncertainty analysis remain unchanged even in more complex scenarios. The following accident analysis has been performed by specialists at the Target Division and reported in [46], which should be considered the main reference for this chapter.

3.1 Overview
The Accident Analysis 10 (AA10) covers the possibility of a primary water cooling (PWC) pipe rupture in the monolith or in the Proton Beam Window (PBW) vessel (see Sec. 1.2). Water activated by normal operations subsequently leaks into the monolith vessel and contaminated vapour escapes through different leak paths. The systems involved in the accident are located in the monolith vessel, but the activity released depends only on the evaporation of the cooling water rather than on the systems experiencing the accident; for this reason no mention is made of these devices' features unless strictly necessary.
The five water cooling systems, and the respective objects to cool, involved in the accident are shown in Table 3.1, while in Figure 3.1 a primary water cooling system prototype is represented by means of a diagram. The "object to be cooled" in the figure is either the moderator, the reflectors, the shielding and plugs, the NBPIs or the PBW. It is important also to keep in mind that systems 1041 and 1070 share the same cooling water, just as systems 1043 and 1026.
The first assumption on the event is that the facility has been in operation for
40 years without changing or adding any water to any of the PWC systems. The


Primary Water Cooling System   ID         Object cooled
Shielding & Plugs              PWC 1043   Proton Beam Instrumentation Plug, Target Monitoring System, Inner Shielding
Neutron Beam Port Insert       PWC 1026   Neutron Beam Port Insert
Proton Beam Window             PWC 1070   Proton Beam Window
Water Moderator                PWC 1041   Water Moderator
Reflector                      PWC 1042   Reflector

Table 3.1: Scheme of the systems located in the monolith or PBW vessel cooled by the respective
water system. Systems that share the same water are 1026 with 1043 and 1070 with 1041.

Figure 3.1: Flow diagram of a generic primary water cooling system. Image taken from [46].

total operation time per year includes 5400 hours of beam on target with a maximum beam power of 5 MW at 2 GeV [48]. Clearly, assuming constant full-time neutron production over the facility lifetime represents a conservative upper limit for the activation levels.
The unfortunate event leading to a pipe rupture is called a Postulated Initiating Event, PIE from now on. Examples of PIEs are mechanical failures, structural problems in the system or mechanical stresses, which can happen without distinction during any operational mode of the facility. Therefore the PIEs have been labeled with the three relevant operational modes of the LINAC:

PIE 1 Mode Beam ON. The proton beam is produced or ready to be produced. The
primary water cooling systems are delivering water to the objects to be cooled;

PIE 2 Mode Beam OFF - cooling system running. The proton beam is not produced,
but the PWC systems are still cooling the objects to be cooled;

PIE 3 Mode Beam OFF - cooling System Shutdown. The LINAC has shut down and the water in the PWC systems is drained into the drain tanks.

The following analysis evaluates the dose consequences of the three modes separately, in order to highlight possible differences and, thus, also the need for mitigation measures in the worst scenario. For example, the second mode covers the approximately 4-hour time delay sufficient to allow both the water to cool down to a temperature below 50 ◦C and short-lived nuclides to decay. Therefore, dose consequences during this time delay are expected to be the same as during mode Beam ON.

Another important assumption of the safety analysis is that the radionuclides present in the spilled water evaporate and are emitted together with the evaporated water. Therefore, as already mentioned in Sec. 2.1.1, only radionuclides that are volatile, gaseous or able to vaporise give radiological consequences. Finally, the contaminated vapours escape through different identified leak paths [49]; however, it is assumed that the entire inventory produced by the accident is emitted via each leak path independently, to study the differences in transport and to spot the worst case scenario. This last assumption is, again, unlikely to be realistic but it is consistent with the conservative approach adopted throughout the analysis.

ID Assumption
A1 Three operational modes for the facility
A2 The event takes place after 40 years of normal operations. The activation
is calculated based on 5400 hours per year with a beam power of 5 MW at
2 GeV, assuming no change or addition of water in the PWC systems.
A3 The entire inventory produced by the accident is emitted with water vapour
through each leak path independently.
A4 The event is evaluated for the baseline monolith atmosphere of rough vac-
uum with a pressure of ≤ 100 Pa.

Table 3.2: Assumptions made for the accident analysis with an identifier code.

3.2 Scenario Development


After the PIE takes place, two possible scenarios have been identified, namely Scenario A and Scenario B. Both involve the same amount of spilled water and inventory activity, but different evaporation rates, which lead to different emission durations and, thus, doses to the public (see equation (2.14)). The sequences of events are reported in detail in [46]; a summary is given here.

At the start of the event, a pipe inside the monolith or PBW vessel ruptures.
Cooling water with a temperature of approximately 50 ◦C flows into the monolith
vessel, which is kept under a vacuum pressure of 100 Pa. Therefore, the spilled water
flashes until a pressure of 12 kPa (water vapour saturation pressure at 50 ◦C) has
been reached. Water that has not flashed is pooled at the bottom of the monolith
vessel. From now on the event can develop in the following ways.

3.2.1 Scenario A
Only in the case of PIE 1, if the system affected by the rupture is using cooling water from 1041, the loss of water from the moderators can cause the cold moderator to overheat and break. In this circumstance, liquid hydrogen from the cold moderator evaporates, increasing the monolith vessel pressure. Eventually the unplanned pressure increase damages the rupture disc on the monolith relief system, causing the release of the gases and a sudden atmospheric re-entry. Then, assuming a constant temperature of 50 ◦C, the vessel atmosphere is saturated with contaminated water vapour with a density of 8.1 g/m³, while the flow out of the vessel is 8.4 m³/h, i.e. 20% of its volume per hour. Contaminated water vapour continues to evaporate and spreads from the monolith vessel until it is emitted through the different leak paths.

3.2.2 Scenario B
If the moderator remains intact, the rough vacuum pumps will maintain low pressure
in the monolith vessel, pumping out the flashed water and causing the pooled water
to boil. According to its technical specifications [50], the vacuum pump capacity is
estimated to be 1 m3 /min at 12 kPa. However, the vacuum pumps have a maximum
water vapour pumping capacity of 600 g/hour and, assuming the worst case, three pumps are in operation simultaneously, which maximises the rate of spilled water vapour emitted to 1.8 kg/hour.

3.2.3 Leak paths


Leak path (LP) refers to the possible pathway followed by the inventory from the location of the rupture to the point of emission. A comprehensive list of possible leak paths for the accident analyses in the Target Station, with an associated identification (LP-ID) number, can be found in [49]. Here two of them have been considered:

• LP-104 - If the HVAC is working as designed, a direct path from the monolith
vessel out of the building through the stack is enabled;

• LP-102 - If the HVAC is not working as designed, the emission point is assumed
to be at the height of 20 m above the ground level. In this case it is not a
direct path but rather a transport throughout several volumes (rooms) with
a much longer emission duration.

In Table 3.3 the two leak paths' parameters for the transport calculations are shown. It might also be noteworthy that, even though the emission points of 20 and 45 metres are used when reporting the dose to the public, other heights have been used as well in the analysis, in particular 10 and 30 m, independently of the leak path. These emission points are only used for comparison and to get an idea of the

Path ID   Monolith vessel (V0)   f01¹    Triangular (V1)   f12²     High Bay (V2)   f23²    Emission point
LP-104    42 m³                  0.030   –                 –        –               –       45 m
LP-102    42 m³                  0.030   813 m³            0.0452   40 701 m³       2.261   20 m

¹ in m³/s, calculated from the vacuum pump capacity with minimum ballast gas [50].
² in m³/s, calculated as 20% of the room volume per hour.

Table 3.3: Leak paths for Scenario B with parameters for transport calculations.

effect of changing the emission altitude in case of unplanned events, e.g. collapse of
the chimney stack or the building rooftop after an earthquake.

3.3 Radiological Consequences


3.3.1 Inventory
As already pointed out, systems 1041 and 1070 share the same cooling water, there-
fore the contribution from the activation in the PBW cooling water must be added
to that of the moderator system. Also systems 1043 and 1026 share the same wa-
ter, but the activity in 1026 is estimated to be only a small fraction of that in 1043, due to its location and irradiated water volume. This is the main reason why the aforementioned activity has been neither calculated nor included in the 1043 system.

Table 3.4 shows the released activity for both scenarios and all PIEs. The spilled water has been estimated starting from the capacity of the tanks and pipes (considered the MAR), then assuming three rupture points and evaluating the volume lost by each system (DR) according to its physical location; a detailed description of how this has been done can be found in Appendix C of [46], while in Appendix D of the same report a complete list of radionuclides in the PWC systems with their activities for all PIEs is presented. The ARF is assumed to be 1 as all spilled water evaporates/boils off. What is important to notice from Table 3.4 is that:

• system 1042 has the smallest spilled water volume in all the PIEs, but not the
highest activity;

• system 1043 releases less activity than 1041 and its emission duration is the longest in all cases;

• the previous points suggest that the worst radiological consequences to the public for all PIEs must come from system 1041. Moreover, Scenario B is always expected to give a higher dose than Scenario A.

Therefore only the results involving Scenario B and system 1041 will be used for
dose calculations.

PIE   System   Spilled water (kg)   Emission duration (h)   Activity (Bq/kg)   Released activity (Bq)
1     1041     700                  1029¹                   5.13 × 10¹¹        3.59 × 10¹⁴
1     1041     700                  389²                    5.13 × 10¹¹        3.59 × 10¹⁴
1     1042     470                  261²                    1.12 × 10¹¹        5.26 × 10¹³
1     1043     3990                 2217²                   1.63 × 10¹⁰        6.50 × 10¹³
2     1041     700                  389                     2.94 × 10¹¹        2.06 × 10¹⁴
2     1042     470                  261                     6.16 × 10¹⁰        2.90 × 10¹³
2     1043     3990                 2217                    8.68 × 10⁹         3.46 × 10¹³
3     1041     50                   28                      2.94 × 10¹¹        1.47 × 10¹³
3     1042     50                   28                      6.16 × 10¹⁰        3.08 × 10¹²
3     1043     200                  111                     8.68 × 10⁹         1.74 × 10¹²

¹ According to Scenario A.
² According to Scenario B.

Table 3.4: Released activity for both scenarios and all PIEs. All spilled water has evaporated/boiled off (ARF = 1). Note that the amount of activity in PIE 2 and PIE 3 is always lower due to the four-hour decay imposed. Data from [46].

3.3.2 Dose dominating nuclides


As part of the analysis, a study of the dose-dominating nuclides has also been performed in [46]. The result of this study is that the primary contributors to the public dose are 3H and 14C, which account for, respectively, 51.72% and 48.28% of the total dose. The contribution from other nuclides amounts to approximately a factor of 10−5 [46]. As a consequence, the dose calculations with the ROOT code have been performed using only the two dose-dominant nuclides.

3.3.3 Results
The model discussed in Chapter 2 was applied to the current accident scenario in
the case of PIE 1 with the LP-102. The results from the “old” model, presented in
detail in [51], and the ROOT code were then compared.

Transport

Firstly, the transport results in terms of STouter and emission times for long-lived nuclides (Table 3.5) were compared. The approximated transport matrix method has been implemented using a 100 s timestep and a constant source term in V0 between 0 and 389 hours. Note that, since it was not clear from the documentation which value to take as STouter, the value chosen is the one relative to the step at which 99% of the total activity of long-lived nuclides has been emitted. Moreover, for each

STouter (Bq)
Nuclide   ESS-0094187 [52]   ROOT code     Error
3H        1.99 × 10¹⁴        2.03 × 10¹⁴   1.84%
14C       4.15 × 10¹¹        4.11 × 10¹¹   1.01%

Emission Time (s)
Emission Fraction   ESS-0241705 [51]   ROOT code    Error
1%                  4.39 × 10⁴         4.40 × 10⁴   0.3%
10%                 1.76 × 10⁵         1.77 × 10⁵   0.5%
50%                 7.33 × 10⁵         7.38 × 10⁵   0.7%
90%                 1.29 × 10⁶         1.30 × 10⁶   0.7%
99%                 1.42 × 10⁶         1.43 × 10⁶   1.1%

Table 3.5: LP-102 PIE1. Transport results for STouter and emission times for long-lived nuclides.

nuclide the code produces a plot of the normalized activity in each room (including the environment) as a function of time, then stores it in a .root file. An example of such a plot is given in Appendix B for 3H.
Observing the relative error column, the major difference is found in the outer source of 3H, which is in any case slightly below 2%. This difference is considered acceptable and the two methods consistent in their results.
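To make the transport comparison more concrete, the following is a minimal C++ sketch of the kind of explicit 100 s timestep update underlying these results for the LP-102 chain V0 → V1 → V2 → environment; it is an illustration of the method, with illustrative names, not the actual thesis implementation.

    #include <array>

    // One explicit Euler step (dt = 100 s) of the room-to-room transport:
    // A[0..2] is the activity (Bq) in the monolith vessel, triangular room
    // and high bay, A[3] the cumulative emission to the environment. f and
    // V are the volumetric flows (m3/s) and volumes (m3) of Table 3.3, S
    // the constant source rate into V0 and lambda the decay constant (1/s).
    void step(std::array<double, 4>& A, double dt, double S, double lambda,
              const std::array<double, 3>& f, const std::array<double, 3>& V) {
        // Outflow from each room is proportional to its activity concentration.
        double out0 = f[0] * A[0] / V[0];
        double out1 = f[1] * A[1] / V[1];
        double out2 = f[2] * A[2] / V[2];
        A[0] += dt * (S    - out0 - lambda * A[0]);
        A[1] += dt * (out0 - out1 - lambda * A[1]);
        A[2] += dt * (out1 - out2 - lambda * A[2]);
        A[3] += dt * out2;          // activity emitted to the environment
    }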

Dispersion

Similarly, the relative concentration RC calculations using the software Lena and the ROOT code were compared for different assumed weathers and emission heights. The comparisons are shown in Figure 3.2, from which it is evident that the results overlap perfectly. The relative errors, also plotted for the P50% and P95% weathers as a function of height, make clear that the two models are again largely compatible. Notably, these results are mostly independent of the particular accident scenario's details since, once the inventory is fixed, RC is normalized to the emitted activity and the only factor directly involved in the calculations is the emission time.

Figure 3.2: Comparisons of relative concentration RC calculations using the software Lena (ESS-
0241704) and the ROOT code for two different assumed weathers and emission heights. Relative error
for the two cases also displayed as a function of height.

Dose assessment

The same comparison has been made for the dose calculations in ESS-0241704 [53]. A similar graph was plotted for the dose results and is presented in Figure 3.3. The error appears to be generally higher for the P95% assumed weather, but in any case within 1%, which is a more than acceptable level of discrepancy for the two models to be considered equivalent.

3.4 Risk Assessment


The accident with the LP-102 scenario was chosen for the comparisons also because, being the worst case scenario in terms of dose to the public, it is the one adopted for the risk assessment. The risk ranking is summarized in Table 3.6 for each PIE, where the occurrence probability is from [54] and the radiological consequences are taken for the nominal emission height of 20 m. The final assessment is made in accordance with the ESS Guideline for Radiological Hazard Analysis [55], which, for this type of accident, requires a dose limit for the public reference person of 1 mSv. Since for the worst case scenario (PIE 1) a dose of 2.60 × 10⁻² mSv is predicted, the consequences are acceptable and there is no need to implement any mitigations to reduce the risk and/or the dose.

Figure 3.3: Comparisons of dose calculations from ESS-0241704 and the ROOT code for two
different assumed weathers and emission heights. Relative error for the two cases also displayed as
a function of height.

Postulated Initiating Event   Occurrence probability (y⁻¹)   Radiological Consequences   Risk Ranking
PIE 1                         10⁻² > F ≥ 10⁻⁴                2.60 × 10⁻² mSv             Acceptable
PIE 2                         10⁻² > F ≥ 10⁻⁴                2.60 × 10⁻² mSv             Acceptable
PIE 3                         10⁻² > F ≥ 10⁻⁴                5.85 × 10⁻³ mSv             Acceptable

Table 3.6: Risk ranking of the AA10 LP-102 accident scenario for the unmitigated events.
Chapter 4

Uncertainty Analysis

This chapter, the core of this project, presents the uncertainty study approaches that were adopted for the safety analysis at ESS. The conceptual scheme is the one discussed in section 1.1. After an introduction on the computational methods for the uncertainty propagation, a deeper analysis of the parametric uncertainty of the ESS dose model will be addressed. For each parameter, the scientific literature has been studied to extract information on the Probability Density Function (shortened to PDF from now on), trying to stay as close as possible to the peculiarities of the conditions under examination and keeping assumptions simple where data were lacking. Finally, results for the accident scenario AA10 LP102-PIE 1 will be presented, followed by some remarks on model uncertainties.

4.1 Methods
4.1.1 Analytical propagation
Let’s recall the definition of a computational model given in Sec. 1.1.2:

f :Rp → R
(4.1)
~ 7→ Y = f (X)
X ~

where Y is the output, X ~ = (X1 , . . . , Xp ) are p inputs and f is the dose model
function. If the model is simply enough, f could be analytically defined and the
propagation of uncertainties could proceed by exact calculation of the mean and
standard deviation of the dose distribution. For example, in the simplest case of an
additive model:
X p
Y = Xi = X1 + X2 + · · · + Xp (4.2)
i=1

the mean final dose would be given by

E[Y] = Σ_{i=1}^{p} E[Xi]    (4.3)

with the variance equal to

V[Y] = Σ_{i=1}^{p} V[Xi] + 2 Σ_{i=1}^{p−1} Σ_{j=i+1}^{p} COV[Xi, Xj]    (4.4)

where COV[Xi, Xj] is the covariance of the correlated (i, j) couple of parameters. Analogous results can be obtained for multiplicative models; however, the analytical approach rapidly shows its limitations with "less trivial" operations. For example, if we suppose that our dose prediction could be fully determined by

Y = X1 · X2 · X3    (4.5)

in [2] it is shown that the variance is expected to be

V[Y] = E[X1]²E[X2]²V[X3] + E[X1]²E[X3]²V[X2] + E[X2]²E[X3]²V[X1]
     + 2E[X1]E[X2]E[X3]²COV[X1, X2] + 2E[X1]E[X2]²E[X3]COV[X1, X3]
     + 2E[X1]²E[X2]E[X3]COV[X2, X3] + 17 other terms!    (4.6)
Yet it should be clear that things are very rarely that simple. One can argue that approximations can be made (Taylor expansions etc.) or that the complex model can be decomposed into "submodels" easier to handle, propagating the uncertainties up to the "main" one. All these paths, although valid, still rely on the possibility of defining the model with a closed analytical expression, which is not obvious. Overall these approaches seemed too cumbersome and avoidably complicated for the ESS dose model, for which reason a Monte Carlo approach was chosen for this thesis.

4.1.2 Monte Carlo Random Sampling


The term Monte Carlo indicates in general a computational procedure that uses statistical sampling techniques to obtain a probabilistic approximation to the solution of a mathematical problem. In other words, the PDF of Y = f(X) would be approximated by the distribution obtained by randomly sampling N times the p-dimensional space of parameters, with ranges and probabilities defined by their PDFs. In practice, this is usually achieved by extracting pseudo-random numbers uniformly between 0 and 1 and then using the inverse of the cumulative distribution function (CDF) of the parameter to retrieve a value distributed as the actual PDF. The suitable number of extractions usually depends both on the complexity of the model (the more complex, the harder it is to sample each "corner" of the dose space) and on the precision of the results required. This is the standard Monte Carlo technique known as Random Sampling (RS), which is easy to implement in a code given a pseudo-random number generator but can be time-consuming, especially because the number of cycles required grows really fast (the statistical error goes as 1/√N). Thus, in this work a slightly different solution involving quasi-random sequences is proposed.

4.1.3 Quasi-Monte Carlo


The quasi-Monte Carlo (QMC) method differs from the Monte Carlo (MC) in the way the sampling numbers are chosen: while the latter uses pseudo-random numbers, the former uses low-discrepancy sequences (or quasi-random, QR, sequences). For a complete overview of the subject, [64] is suggested. What is important to be aware of in this context is that QR sequences are specifically designed to generate samples as uniformly as possible over the unit hypercube Ω [65], so they have an advantage over pseudo-random numbers since they cover the domain of interest quickly and evenly, resulting in a faster rate of convergence [64]. Moreover, for QR sequences, differently from purely deterministic methods which only give high accuracy when the number of datapoints is pre-set, the accuracy typically improves continually as more datapoints are added, with full reuse of the existing points. Notably, the main application of QMC is the statistical evaluation of multidimensional integrals or, more generally, of all those mathematical problems that do not directly involve intrinsically stochastic processes, e.g. radioactive decay.

4.1.4 Sobol’ sequence


Several types of QR sequences have been proposed by Faure, Halton, Sobol', Niederreiter, Hammersley, and others (see [66] for an overview). In the paper of Morokoff and Caflisch [67], Halton, Sobol' and Faure sequences for QMC are compared with the standard MC method using pseudo-random sequences for a multi-dimensional integration problem. They found that the Halton sequence performs best for dimensions up to around 6, the Sobol' sequence performs best for higher dimensions, and the Faure sequence, while outperformed by the other two, still performs better than a pseudo-random sequence. However, in the same work it is pointed out that the advantage of the QMC method is greater if the number of dimensions of the integral is small. In particular, in order for the QMC error [64], O((log N)^p / N), to be smaller than the MC error, O(1/√N), p needs to be small and N needs to be large enough (e.g. N > 2^p).
The aforementioned better performance of the Sobol' sequence for high dimensions led to the choice of this sequence for the current uncertainty analysis. The original algorithm was proposed by Sobol' in 1967 [68], while the one used in this thesis is called 659, and all the credits go to Stephen Joe and Frances Kuo [69], who also provided a helpful C++ code¹ for generating sets of Sobol' numbers of any size N up to dimension 21201. In Figure 4.1 an example comparing the 2D sampling capacity of this algorithm with a standard RS is shown.
In this work, the following workflow has been adopted:
1. a text file containing N × p Sobol' points from the sequence of dimension p is generated by means of Joe and Kuo's code;
1
The code can be found at https://web.maths.unsw.edu.au/~fkuo/sobol/

Figure 4.1: Left: 1000 points with coordinates randomly generated according to a uniform distribution between 0 and 1. This sampling method is the core of RS. Right: the first 1000 points of the 2D Sobol' sequence. This sampling method is the one used in the current analysis.

2. each parameter value is extracted from the assumed PDF using a modified version of GetRandom(), a ROOT method implemented, mutatis mutandis, in the TF1, TH1 and TH2 classes. The difference is that, while the standard GetRandom() uses a pseudo-random number generator, the ad hoc modified version is called passing a Sobol' number from the text file, which is then used for the same inversion process described in Sec. 4.1.2;

3. the parameter value is then passed to the model for the dose evaluation;

4. steps 2 and 3 are repeated for each row up to N, the total number of generated rows, and the dose distribution is retrieved (a minimal sketch of this loop is given below).
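A minimal C++ sketch of this loop follows; inverseCDF() and doseModel() are hypothetical stand-ins for, respectively, the modified GetRandom() inversion and the full transport–dispersion–dose chain, given trivial bodies here only to keep the sketch self-contained.

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Hypothetical stand-ins for the actual thesis code: an identity
    // mapping and a plain sum, just placeholders for the real machinery.
    double inverseCDF(int /*j*/, double u) { return u; }
    double doseModel(const std::vector<double>& params) {
        double d = 0.0;
        for (double p : params) d += p;
        return d;
    }

    // Steps 1-4: one dose evaluation per row of the N x p Sobol' file.
    std::vector<double> runQMC(const char* sobolFile, int p) {
        std::ifstream in(sobolFile);
        std::vector<double> doses;
        std::string line;
        while (std::getline(in, line)) {          // one Sobol' point per row
            std::istringstream row(line);
            std::vector<double> params(p);
            double u;
            for (int j = 0; j < p; ++j) {
                row >> u;                          // j-th coordinate in [0,1)
                params[j] = inverseCDF(j, u);      // step 2: inversion method
            }
            doses.push_back(doseModel(params));    // step 3: dose evaluation
        }
        return doses;                              // step 4: dose distribution
    }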

The last thing to highlight is that, thanks to the low-discrepancy sequences' property of being "additive" in accuracy, analyses with smaller samples can be conducted with a subset of points from the same text file, while a different file must be generated when dealing with another number of parameters.

4.2 Parametric Uncertainty


As already pointed out in section 1.1, the uncertainty of a parameter is expressed
with both the range of possible values and the frequency with which any value
within that range is expected to occur. Those two ingredients are essential to define
the PDF for each parameter, which is the starting point for the Monte Carlo analy-
sis. The parameters analysed for the ESS dose model are: amount of spilled water,
activity concentrations in water, stability class, vertical and horizontal standard de-
viation, wind speed, deposition velocities, inhalation and ingestion dose coefficients,
breathing rate and the lumped parameter for translocation.

4.2.1 Spilled water


It should be clear from Chapter 3 that the amount of released activity is directly proportional to the amount of spilled water. The estimation of the leakage magnitude for the PWC systems requires knowledge of the complex design of the tanks and pipes. A schematic overview is given in Figure 4.2 from APPENDIX C in [46], where it is also explained how the final numbers (see Table 3.4) are produced. What

Figure 4.2: Schematic location of postulated rupture points and PWC devices. Image from [46].

is to be emphasized in the current discussion is that the estimations are produced based solely on the physical location of tanks, piping etc. and their filling rates during the different PIEs. Therefore, the amount of spilled water in case of rupture is somewhat constrained, with a small margin of error. However, the design of the PWC systems has changed several times over the years and it is not unlikely that it will change again in the future. For this reason, after an informal solicitation of Leif Emås, gas and process engineer at the Target Division, a 25% uniform uncertainty on the assumed value (700 kg in the worst case scenario) is considered, to account for possible modifications to the cooling systems.

4.2.2 Activity concentrations


The particle transport code used at ESS is MCNPX, which relies on Monte Carlo
methods. In particular, MCNPX obtains answers by simulating individual particles
and inferring some aspects (tallies) from their average behaviour using the central
limit theorem. According to [56], tallies are accompanied in the output by a second
number R, which is the relative error defined to be one estimated standard deviation
of the mean Sx̄ divided by the estimated mean x̄. The relative error can then be used
to form confidence intervals, allowing the user to make a statement of uncertainty.
The central limit theorem, in fact, assures that for a sufficiently high number of

particles, 68% of the time the true result will be in the range x̄ ± Sx̄. Obviously, a confidence statement like this one refers only to the precision of the Monte Carlo calculation itself and not to the accuracy of the result compared to the physical true value, which would require a detailed analysis of the uncertainties in the physical data, modelling, sampling techniques and approximations used in the calculations. Since such an analysis was outside the scope of this project, only the precision of the Monte Carlo has been considered. Unfortunately, relative errors on the activity concentrations were not provided in the ESS documentation for the inventories; however, a guideline in [56] for interpreting the quality of the results suggests that the upper limit for R should be 0.1, above which tallies are generally questionable. Moreover, a comparison with other results at ESS where R was reported (e.g. [57]) suggests a more realistic value of 0.05 for the relative error. Therefore, for the activity concentrations of 3H and 14C a gaussian distribution was assumed, with µ the nominal activity and σ equal to 5% of that value.

4.2.3 Stability Class


As seen in Sec. 2.2.3, many dispersion parameters depend on the assessment of the stability class. The lack of hourly data on the atmospheric stability was bypassed by assuming the P50% and P95% weathers, as already discussed. However, in-depth research on the subject led to new pieces of information:

a. the stability category probability distribution is, clearly, not uniform. A study from the KTH Royal Institute of Technology in Stockholm [58] reported the bar chart in Figure 4.3 with data provided by SSM. A representative characteristic of the Southern Swedish climate is the complete lack of the highly unstable atmospheric class, namely category A, while the neutral class is predominant most of the time.

b. one of the main restrictions on applying the gaussian model is that atmospheric conditions are not supposed to change during the travel of the plume. The probability that a stability category will persist for a given time is reported by Jones in [59] (Table 3 of the same report). Jones shows that most categories persist, on average, for a few hours, with category D the only one having a 24% chance of lasting over 24 hours.

Therefore, in [59] it is suggested to treat longer releases as a sequence of shorter ones (usually 1 hour long) with constant, but different, conditions. However, this kind of approach is possible only when suitable meteorological data are available, e.g. the study on planned releases at ESS [60] by SMHI. At first, the idea of deriving an hourly dataset on stability categories from ordinary weather observations (insolation, cloud cover etc.), following the method suggested by Luna and Church in [61], was considered. Then, the lack of consistent available data and the fairly high discrepancies in the method found by the authors of [61] led to the choice of fixing the stability category to D, which seemed the most realistic option given a. and b. above. Despite this,

Figure 4.3: Distribution of stability categories for weather conditions in Southern Sweden. Data
from SSM and bar chart from [58].

some results will indeed be presented for the more stable category F, to probe the conservative approach adopted initially in the analysis; thus the dispersion coefficient uncertainties have been evaluated for all the stability classes.

4.2.4 Vertical and horizontal standard deviation


The uncertainty of the vertical dispersion coefficient σz has been estimated using a method partially suggested by Hamby in [62]. Firstly, σz values at a 300 m downwind distance were evaluated by making use of equation (2.12) and varying the stability class. For each class, the value was assumed to be the median (or geometric mean, GM) of a lognormal distribution, with the P5% and P95% percentiles corresponding to the logarithmically interpolated midpoints between that value and the ones obtained from, respectively, the subsequent more and less stable classes. Since for a lognormal distribution:

P50% / P5% = P95% / P50%    (4.7)

the arithmetic mean of the two ratios was calculated when they differed, and then used to obtain a "new pair" of P5% and P95% starting from the fixed P50%.
The problem of determining the distribution parameters by means of percentiles has been treated extensively by Cook in [63], who also provided a useful free software² called ParameterSolver with a convenient user interface. Shortly, given two
2
The software is available from the M. D. Anderson Cancer Center Biostatistics software down-
load site: http://biostatistics.mdanderson.org/SoftwareDownload/

percentiles x1 and x2, with probabilities p1 and p2, and a distribution family (gaussian, lognormal, etc.), this software finds the distribution parameters minimizing the function:

(F(x1) − p1)² + (F(x2) − p2)²    (4.8)

Cook also provided a theoretical validation in the note [63], showing that the percentile constraints can be satisfied exactly for several distributions, and the software was able to give the same results.
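For the lognormal family used here, the percentile constraints can in fact be satisfied in closed form, which makes for a useful cross-check of the software output; a minimal C++ sketch (assuming the symmetric P5%/P95% pair) follows.

    #include <cmath>

    // Closed-form lognormal parameters from the P5% and P95% percentiles:
    // ln X is Gaussian, so two percentiles fix its mean and standard
    // deviation exactly. z95 = Phi^-1(0.95) is the standard normal quantile.
    struct LogNormalPar { double GM, GSD; };

    LogNormalPar fromPercentiles(double p5, double p95) {
        const double z95 = 1.6448536;
        double mu    = 0.5 * (std::log(p5) + std::log(p95));
        double sigma = (std::log(p95) - std::log(p5)) / (2.0 * z95);
        return { std::exp(mu), std::exp(sigma) };   // geometric mean and GSD
    }

For example, the class D percentiles of Table 4.1 (23.23 m and 26.41 m) give back a GM of 24.77 m and a GSD of about 1.04, consistent with the tabulated values.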
Thus, the GM and the GSD (Geometric Standard Deviation) of the lognormal
distributions for the six stability classes were retrieved and used to generate σz ac-
cordingly (see Table 4.1 and Figure 4.4).

σz (m) at 300 m
Stability Class P5% P50% P95% GSD
A 42.60 52.73 65.28 1.14
B 29.44 34.40 40.24 1.10
C 26.17 28.42 30.86 1.05
D 23.23 24.77 26.41 1.04
E 21.32 21.99 23.34 1.03
F 20.04 20.67 21.32 1.02

Table 4.1: Percentiles of the σz lognormal distributions at 300 m as a function of the stability class.

The uncertainty on the horizontal standard deviation σy of the gaussian plume model has not been implemented. The reason is that, referring to (2.16), the main contribution to σy for long releases and near-field distances comes from the wind duration-dependent term σy^w, while the turbulent stability-dependent term is negligible. The horizontal standard deviation distribution is then fully determined by the uncertainty in both t, proportional to the amount of spilled water, and u10.

4.2.5 Wind Speed


The Monte Carlo approach to the uncertainty analysis uses efficiently all the information of the wind speed data distribution retrieved from the SMHI database and presented in Figure 2.5. Starting from the observations, an empirical distribution function has been obtained by means of the TUnuranEmpDist class of the ROOT TUnuran package. Quoting from the class reference³: "[TUnuranEmpDist] is used by TUnuran to generate double random number according to this distribution [...] in the case of unbinned data the density distribution is estimated using gaussian kernel smoothing and then random numbers are generated". After filling a TH1 histogram with 10⁶ events generated in that way, the modified GetRandom() method is called.
3
https://root.cern.ch/doc/master/classTUnuranEmpDist.html
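The essence of the modified GetRandom() is the inversion of a binned empirical CDF; a minimal, ROOT-independent C++ sketch of that step (illustrative names, not the thesis code) is the following.

    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Inverse-CDF sampling from a binned empirical distribution: 'content'
    // holds the bin contents of the histogram, [xmin, xmax] its axis range,
    // and u is a quasi-random (e.g. Sobol') number in [0,1).
    double sampleBinned(const std::vector<double>& content,
                        double xmin, double xmax, double u) {
        std::vector<double> cdf(content.size());
        std::partial_sum(content.begin(), content.end(), cdf.begin());
        double target = u * cdf.back();             // scale u to total counts
        auto it = std::lower_bound(cdf.begin(), cdf.end(), target);
        std::size_t bin = it - cdf.begin();         // first bin with CDF >= target
        double prev = (bin == 0) ? 0.0 : cdf[bin - 1];
        double frac = (cdf[bin] > prev) ? (target - prev) / (cdf[bin] - prev) : 0.5;
        double width = (xmax - xmin) / content.size();
        return xmin + (bin + frac) * width;         // linear within the bin
    }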

Figure 4.4: σz lognormal distributions at 300 m as a function of the stability class. Observe that more stable conditions correspond to narrower distributions.

Finally, although there’s a strong correlation between wind speed range of values
and Pasquill stability class, the use of the full distribution in Figure 2.5 is not hardly
conflicting with the fixed stability class D choice, since in neutral condition all wind
speed values are possible [25].

4.2.6 Deposition velocities

This parameter was introduced to model the dry deposition process of dispersed radionuclides (see Eq. (2.18)). In the safety analyses at ESS, deposition velocities based on the chemical and physical properties of the emitted nuclides were assumed (Table 2.6); in the accident scenario studied for this work, 3H has been considered to be emitted as the gas HT, thus its vg is 0, while 14C as a generic particle (vg = 0.001 m/s). However, throughout the scientific literature on the subject, a striking dependence on many other factors like soil composition, particle size, wind speed, surface characteristics etc. emerges [36]; thus, a wide range of deposition velocities has been proposed for each chemical or physical form of the same nuclide. For example, in case of tritium emission, velocities for HT and HTO are measured and reported by Foerstel [70] in the ranges, respectively, 10−5–10−3 m/s and 10−3–10−2 m/s. However, given the situational nature of this parameter, the report Atmospheric dispersion modelling of planned releases of radioactivity at ESS by SMHI [60] has been considered the most reliable source of information. In that report 3H is assumed to be emitted only by means of evaporation of HTO, while 14C is assumed to be emitted in the

Nuclide   Water   Urban   Forest               Field
3H        0.007   0.002   0.007                0.007
14C¹      0.003   0.001   0.001                0.001
14C²      0.003   0.001   0.001 + 0.015·Rsol   0.001 + 0.012·Rsol

¹ During the winter period, assumed to last from September to April.
² During the summer period, assumed to last from May to August. Rsol is defined as the current solar elevation divided by the maximum solar elevation.

Table 4.2: Dry deposition velocities vg (m/s) for different radionuclides and land use from [60].

form of CO2 (gas); the values of vg are hence given in Table 4.2. These values, according to [60], should be regarded as best knowledge based on literature reviews and SMHI-internal discussions. The differences from those used in the safety analysis are in the non-null value for 3H and in the seasonal, solar elevation-dependent value for 14C. Since, as already said, even in case of an HT emission a fraction will be transferred to HTO during the dispersion [38][70], the chemical form of the released tritium has been effectively considered a source of uncertainty. Therefore, remembering that the ESS site is surrounded by crop fields, a uniform distribution between 0 and 0.007 m/s for 3H, and a piecewise uniform distribution from 0.001 to 0.013 m/s weighted by the number of months in which the values are defined (always assuming Rsol = 1) for 14C, have been considered.
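One possible reading of these assumptions, sketched below in C++ with illustrative names, maps a single uniform (or quasi-random) number onto the assumed vg distributions for the "Field" land use; here the 14C case mixes a point value for the 8 winter months with a uniform summer range.

    // Deposition velocity for 3H: uniform in [0, 0.007] m/s.
    double vgH3(double u) { return 0.007 * u; }

    // Deposition velocity for 14C: 8 winter months at the fixed value
    // 0.001 m/s, 4 summer months uniform up to 0.001 + 0.012 (Rsol = 1).
    double vgC14(double u) {
        if (u < 8.0 / 12.0) return 0.001;              // winter months
        double v = (u - 8.0 / 12.0) / (4.0 / 12.0);    // rescale to [0,1)
        return 0.001 + 0.012 * v;                      // summer range
    }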

4.2.7 Inhalation and Ingestion dose coefficients


Dose coefficients for inhalation and ingestion presented in Table 2.7 are from Annex
G, Table G1 in ICRP 119 [41]. Harrison in [71] investigated in depth the uncertainties in the ICRP dose coefficient models for intakes of tritium as tritiated water HTO or
organically bound tritium OBT. Harrison reported the resulting P5% –P95% range
in the ingestion coefficient of tritium for adults to be a factor of about 3 for HTO
and about 5 for OBT, while the P50% value was, respectively, 3.9 × 10−11 Sv/Bq
and 8.7 × 10−11 Sv/Bq. The difference between these last values and ICRP ones of
1.8 × 10−11 Sv/Bq and 4.2 × 10−11 Sv/Bq, respectively, is due, according to Harri-
son, mainly to the contribution of large uncertainties in Radio-Biological Effective-
ness (RBE), a parameter of the ICRP model that accounts for the different ways
different types of radiation transfer their energy to the tissues. By considering,
instead, only uncertainties in biokinetic parameters, the coefficients’ distributions
result in P50% values of 2.3 × 10−11 Sv/Bq for HTO and 5.6 × 10−11 Sv/Bq for OBT
and P5%–P95% ranges of about 2 and 4, respectively. These last values seemed more reliable and are thus the ones adopted for the analysis. In the same work Harrison also states that the probability distributions obtained can be applied to inhalation of HTO and OBT as well. In particular, ingestion and inhalation of HTO as vapour can be regarded as equivalent, since in both cases complete absorption to blood can be assumed. Finally, the percentile results in [71] and other considerations in [72] sug-

Dose coefficients for 3H in 10⁻¹¹ Sv/Bq

Process          P5%    P50%   P95%   GSD
HTO inhalation   1.63   2.29   3.25   1.23
OBT ingestion    2.80   5.58   11.2   1.52

Table 4.3: Percentiles of the dose coefficient lognormal distributions for 3H. The GSD has been retrieved with the aid of the ParameterSolver software.

gested a lognormal distribution as a good approximation for the coefficients' PDF (see Table 8 in [71] and remember (4.7)). Hence, in this work the input parameter distributions for the 3H dose coefficients are two lognormals described by the percentiles given in Table 4.3. Notably, no information could be found on the uncertainties of the 14C dose coefficients, which have thus been kept fixed in this uncertainty study.
Lastly, Harrison reports that some experimental data from [73] suggest that, after chronic intakes of HTO, equilibrium tissue concentrations of HTO and some components of OBT are similar. This led Harrison to set the range for OBT retention, another parameter of the ICRP model, so as to account for the two components. Such a detail has been interpreted in this work as a form of partial correlation between the dose coefficients, which has been embedded by constructing a joint 2-dimensional PDF starting from the individual 1-dimensional distributions and assuming a 0.5 Pearson coefficient (mild correlation). This has been done by means of a "statistical copula"; briefly, a copula is a function that couples (hence the name) two marginals (which by definition have no correlation) into a joint probability distribution. In practice:

1. points from a joint PDF are simulated. In this case, a multivariate normal distribution with ρ = 0.5 has been chosen for simplicity, which leads to a so-called Gaussian copula for linear symmetric correlation; for more insights on choosing a copula see [74].

2. the marginals and their CDFs are then retrieved; the previously extracted points are mapped into two uniform distributions by means of the respective marginal CDFs. It can be useful to visualize this step as the "inverse" of the inversion method described for the Monte Carlo methods;

3. at last, the uniform distributions obtained from the marginals can be mapped into the desired 1-D PDFs by means of the inverse CDFs of such distributions. In this case the two uniform marginals are converted into lognormal distributions with coupled correlated points (see also Figure 4.5).

In this way, by using the uniform distribution as a lingua franca, correlation can be
easily constructed and a complex joint probability distribution retrieved.
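For the Gaussian copula with lognormal marginals the three steps collapse analytically, since mapping a standard normal z through Φ and then through the lognormal inverse CDF is just exp(µ + σz). A minimal C++ sketch follows, using pseudo-random normals for clarity (in the thesis workflow the underlying uniforms come from the Sobol' stream); the names are illustrative.

    #include <cmath>
    #include <random>
    #include <utility>

    // Draw a correlated pair of lognormal dose coefficients with Pearson
    // correlation rho at the level of the underlying normals. mu = ln(GM)
    // and s = ln(GSD) are the log-scale parameters of Table 4.3.
    std::pair<double, double> correlatedLognormals(
            std::mt19937& gen, double rho,
            double mu1, double s1, double mu2, double s2) {
        std::normal_distribution<double> N(0.0, 1.0);
        double z1 = N(gen);
        double z2 = rho * z1 + std::sqrt(1.0 - rho * rho) * N(gen); // Cholesky
        // Steps 2-3 of the copula collapse to a direct quantile mapping.
        return { std::exp(mu1 + s1 * z1), std::exp(mu2 + s2 * z2) };
    }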

Figure 4.5: Visual representation of a gaussian copula method for coupling two distributions,
assumed to be correlated, in a joint PDF. Notice that the actual copula function is the joint
probability distribution which has uniform marginals (step 2).

4.2.8 Breathing Rate


The uncertainty on the assumed breathing rate value (2.56 × 10^−4 m3/s in [40]) was derived from Table 6.14 and Table 6.15 of the Exposure Factors Handbook (EFH) [75]. These tables present the percentiles of the daily average inhalation rate for, respectively, males and females by age category in the USA. In this analysis it was reasonably assumed that no substantial breathing rate differences exist between the U.S. and Swedish populations. Moreover, since no description of the public representative person other than "adult" could be found, for each percentile an average between males and females from the age of 16 to the age of 61 was considered. The average was weighted on the age and gender sample size and a lognormal distribution for the parameter was assumed. The GM and GSD were retrieved with the usual ParameterSolver and the lognormal approximation was tested both by checking property (4.7) and by inserting in the software only the P5%-P95% pair and then comparing the P50% result with the average obtained for that percentile; in both cases the difference was of the order of a few percent. In Figure 4.6 the aforementioned distributions are shown.
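For reference, the consistency checks above rely on the standard quantile relations of the lognormal distribution, with z_p the standard normal quantile (these relations are not spelled out in the text):
\[ P_p = \mathrm{GM}\cdot \mathrm{GSD}^{\,z_p}, \qquad z_{0.05} = -1.645,\; z_{0.50} = 0,\; z_{0.95} = 1.645 \]
so that \( P_{50\%} = \mathrm{GM} \) and \( P_{95\%}/P_{50\%} = \mathrm{GSD}^{1.645} \).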

Figure 4.6: Lognormal distributions assumed for the breathing rate parameter in the age category
16-61 for males, females and gender average. The latter has been used in the uncertainty analysis.
Data from [75].

4.2.9 Food Consumption


The most common crops produced in the ESS surroundings are sugar beets and cereals [42]. As one can easily imagine, precise studies on the consumption of these foods in Skåne (southern Sweden) are hard to find. The closest to a reliable source of information that could be found was Chapter 9 of the aforementioned EFH, which deals with the intake of fruits and vegetables. However, the data mainly referred to the U.S. population and were subdivided into overly specific categories. Thus, the resulting datasets were too fragmented and inappropriate for the current study. In this uncertainty analysis the default value of 100 kg/year has therefore been kept fixed.

4.2.10 Lumped Translocation Parameter


The lumped parameter LP summarizes all the processes describing the transfer from deposition to contamination of the edible part of the plant: interception, weathering and translocation (see definition (2.25)). The value of 0.01 m2/kg assumed in [42] is based on the assumption of a remaining fraction on the plant surface, given by interception plus weathering, of 10% and of a translocation to the edible part of 5%. These estimates come from nuclide- and crop-averaged values in [43], but neither 3H nor 14C are present. Transfer models for these particular radionuclides are formulated in the same report in terms of specific activity, which is defined as the radionuclide activity per mass of the corresponding stable element. In fact, under equilibrium conditions, the specific activity model provides an alternative approach for describing the behavior of those long-lived isotopes that mimic the stable counterpart of the element in physical and biological processes. However, models based on the specific activity concept are theoretically sound for long-term safety assessments with constant release rates and do not apply to short-term releases, like accidental ones, where concentrations are time-dependent.

The only study found about transient translocation of 3H and 14C [76] showed a great fragmentation of results according to foliage layout, deposition area, vegetative cycle at the time of the deposition and even light conditions! Therefore no interception and translocation uncertainty has been implemented.

The only term in the lumped parameter for which clear information about uncertainty was found is the crop yield. In [42] the conservative choice of 5010 kg/ha was made, but according to [44] the range for the yield in 2011 was 5010-7470 kg/ha depending on the type of grain. Thus a uniform distribution for LP between 0.0067 m2/kg and 0.01 m2/kg has been assumed in this analysis.
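The lower bound follows from the inverse proportionality of LP to the yield Y implied by definition (2.25); keeping the remaining fraction (10%) and the translocation (5%) fixed, a quick check gives:
\[ \mathrm{LP} = \frac{0.10 \times 0.05}{Y}, \qquad \mathrm{LP}_{\min} = 0.01\ \mathrm{m^2/kg} \times \frac{5010}{7470} \approx 0.0067\ \mathrm{m^2/kg} \]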
Input                   Default value         PDF                    Mean     SD       Min     P5%     Median  P95%    Max
spilled water           700 kg                Uniform                700      101      525     543     700     858     875
activity conc. 3H       2.94 × 10^11 Bq/kg    Gaussian               2.94     5% mean  –       2.7     2.94    3.18    –
activity conc. 14C      5.93 × 10^8 Bq/kg     Gaussian               5.93     5% mean  –       5.44    5.93    6.42    –
vertical dispersion σz  see 4.2.4             Lognormal              –        –        –       –       –       –       –
wind speed              0.84 or 3 m/s         Empirical              3        1.6      0.5     0.9     2.8     5.9     15
deposition vel. 3H      0 m/s                 Uniform                0.004    0.002    0       0.0004  0.0035  0.0066  0.007
deposition vel. 14C     0.001 m/s             Piecewise Uniform      0.0033   0.0034   0.001   –       0.0033  –       0.013
inhalation coeff. 3H    1.8 × 10^−11 Sv/Bq    Lognormal*             2.29     1.23     0       1.63    2.29    3.25    –
ingestion coeff. 3H     4.2 × 10^−11 Sv/Bq    Lognormal*             5.58     1.52     0       2.80    5.58    1.12    –
BR (breathing rate)     22.1 m3/day           Truncated Lognormal*   16.7     1.2      0       12.2    16.7    22.83   33.9
lumped translocation    0.01 m2/kg            Uniform                0.0084   0.0009   0.0067  0.0069  0.0084  0.0098  0.01

*The means and standard deviations SD listed for lognormal distributions in this table are, respectively, the geometric mean GM and the geometric standard deviation GSD.

Table 4.4: Parameter ranges and PDFs considered in this uncertainty analysis. References and more detail in the text.

Figure 4.7: Left: relative error trend for each percentile of the dose distribution as a function of dataset size, with successive points from the Sobol' sequence as sampling method. Right: the same, but with pseudo-random number generation.

4.3 Results
As already mentioned in Sec. 4.1.2, the number of points needed for the Monte Carlo to effectively solve the mathematical problem depends on the complexity of the model and on the precision sought. When dealing with RS Monte Carlo, one can always extract information about the convergence by simulating the problem many times with a fixed N and retrieving the confidence interval of the quantity of interest (e.g. as the standard deviation of the mean); on the other hand, with Quasi-Monte Carlo methods, once the number of parameters p is fixed, the chosen low discrepancy sequence will always return the same first N points. This problem has a solution in Bootstrap techniques, as widely described in [77]. However, for the sake of the current analysis, a quick convergence study has been made evaluating the variation of the relative error in the dose percentiles as:
\[ E = \frac{P^{i} - P^{i-1}}{P^{i}} \tag{4.9} \]
where P is a percentile of the dose distribution and i is the index of the sample size N_i = {500, 1000, 1500, . . . , 5000}. In Figure 4.7 the relative error plots are shown for the Sobol' sequence and for random sampling, showing the faster convergence of the former over the latter for almost all percentiles.
Two things should be kept in mind when looking at the plots in Figure 4.7:

1. increasing-size datasets from the Sobol' sequence are strongly correlated, since each N_i set also contains the first N_{i−1} points from the "previous" set;

2. percentiles from randomly sampled points have not been averaged over multiple extractions, hence they were likely to exhibit great volatility.

Despite the points above, those plots can still reasonably be seen as a qualitative proof of the better performance of the quasi-Monte Carlo approach.

[Histogram: dose distribution, Entries 5000, Mean 0.3444, Std Dev 0.2504, Skewness 2.188; horizontal axis: Dose (mSv).]

Figure 4.8: Dose distribution in the worst case scenario LP-102 PIE 1 with emission from 20 m, obtained with a sample of 5000 points from the Sobol' sequence.

Before moving to the actual results, a few words on the confidence interval (CI) estimation are in order. As already said, the problem of defining an interval with a fixed level of confidence for the values obtained with QMC is flawlessly overcome by Bootstrap techniques. However, no CI has been estimated, since the actual physical quantity of interest in this context is the dose to the public, not a specific percentile. In conditions of qualitative convergence of the results, such as those shown in Figure 4.7, any statement of uncertainty on the percentiles of the distribution would be unnecessary, given that the percentiles themselves provide the quantification of uncertainty sought.

The following results have been obtained using a sample size of 5000, not because a significant gain in precision over fewer points was expected, but mainly because it was not computationally expensive at all. In fact, on the machine used for the analysis, which runs an Intel® Core™ i5-7600K CPU at 3.80 GHz, the ROOT code took just over a minute to complete 5000 cycles. An example of the output of the code is shown in Figure 4.8 for the worst case scenario LP-102 PIE 1 with an emission height of 20 m. The mean and standard deviation of the distribution are (0.34 ± 0.25) mSv, which is surprisingly high when compared with the P50% and P95% results from [53], which are, respectively, 1.32 × 10^−2 mSv and 2.60 × 10^−2 mSv, with the latter used in the risk assessment. The results in terms of percentiles for different emission heights are also reported in Figure 4.9 as boxplots, showing the 5th, 25th, 50th (median), 75th, and 95th percentiles and comparing them with the two values P50% and P95% obtained in the safety analysis [46].
Figure 4.9: Boxplots showing the 5th, 25th, 50th (median), 75th, and 95th percentiles of the dose distribution in the worst case scenario for different emission heights. The results from [53] are also reported for comparison.

How come the radiological consequences to the public appear to be strongly underestimated, even though a strictly conservative approach was adopted throughout the safety analysis?
A partial answer to this question was found while observing the dose results every time a new parameter was added to the Monte Carlo code. For example, the P50% result for LP-102 PIE 1 with assumed deposition velocities of 0 m/s and 0.001 m/s for 3H and 14C is 1.32 × 10^−2 mSv, with a contribution of 1.77 × 10^−3 mSv (approximately 13% of the total dose) from ingestion of food crops due to translocation of 14C; but when the same case is evaluated with a deposition velocity for 3H of 0.007 m/s, the total dose becomes 0.44 mSv, with a contribution from tritium ingestion of 0.42 mSv, almost 97% of the total dose. This spike in the ingestion dose can be explained by looking at the emitted 3H activity in Table 3.5. The value of 2.03 × 10^14 Bq is almost 3 orders of magnitude larger than the one for 14C, but as long as the deposition velocity for 3H is assumed null, no tritium ground deposition is expected and, hence, no dose from ingestion; however, when the 3H vg is "turned on", even if its DCing is smaller, the Cground is turned on as well and the large amount of emitted activity starts to contribute heavily to the dose. Therefore, the ESS dose model for AA10 appears to be astonishingly sensitive to 3H deposition velocities which differ from 0 m/s.
Another result confirming this hypothesis is in Figure 4.10, which shows the same boxplots as in Figure 4.9 but with a fixed null deposition velocity for 3H. It is evident that this time the P50% and P95% are reasonably closer to the corresponding 50th and 95th percentiles of the dose distribution.

Figure 4.10: Boxplots showing the 5th, 25th, 50th (median), 75th, and 95th percentiles of the dose distribution in the worst case scenario for different emission heights and with the 3H deposition velocity set to 0 m/s. The results from [53] are also reported for comparison.

It should be noted that this is not, by any means, a form of validation for the uncertainty analysis, since the P50% and P95% were defined, in the first place, by simply fixing two values of stability class (D and F) and of the wind speed PDF (50th and 95th percentiles). However, Figure 4.10 may be interpreted as indirect proof of the impact of the wind speed parameter, which alone can almost entirely account for the spreading of the dose values, especially for low emission heights, when the 3H deposition velocity is turned off. Clearly, the sensitivity study in Chapter 5 will have the last word, but these considerations set the premises for what is to be expected from it.

4.4 Model Uncertainties

Model uncertainties refer mainly to either physical processes in the conceptual model or approximations in the computational one which can introduce uncertainty in the final result. The main premise of this section is that the predominant and most reliable sources of uncertainty in the model predictions are still assumed to be due to parameter values. Introducing new processes in the conceptual model requires specific expertise and is far from the goals of this work. Still, a quantitative exploration of potential results and their effects under assumed conditions is possible. Two main sources of uncertainty for the ESS dose model have been considered: wet deposition and ingestion dose from root uptake.

4.4.1 Wet Deposition


The problem of wet deposition has been introduced in Sec. 2.2.4. In the report [7] neither an analytical overview nor detailed results are presented for this process, only a reference to some calculations made with the Lena software that produced an increase in ground contamination by a factor of two. Hence a deeper study appeared necessary in the light of this work; for the following discussion [10], [60] and [78] have been taken as the main references.

The model of wet deposition considered makes use of a washout coefficient Λ; this means that all the physical and chemical processes involved in the removal of activity from the plume are represented by a simple proportionality between removal rates and the local airborne concentration of material. The model is founded on a few assumptions:

1. no distinction is made between in-cloud scavenging (rainout) and below-cloud scavenging (washout). The latter refers to the process of rain falling through the plume, while the former to the removal of activity incorporated into the rain drops;

2. the deposition properties of the different nuclides are not treated as specific to their physical-chemical form. Each nuclide is classified according to how it behaves from a hydrodynamical point of view during transport through the air;

3. uptake is irreversible and precipitation scavenging does not lead to a redistribution of material in the plume;

4. the rainfall rate is uniform over the domain.

Moreover, a conditio sine qua non for the rain to fall is a neutral stability class D [34]. The expected local mass incorporated into rainfall per unit volume is then given by a linear relation like:
\[ \Lambda\, C(x, y, z) \tag{4.10} \]
where C(x, y, z) is the Gaussian local airborne concentration (see Eq. (2.8)). The washout coefficient Λ can be parameterized as a function of a wide spectrum of variables [79]. A simple parameterization, according to [60] and [78], is given by the relation:
\[ \Lambda = A \cdot I^{B} \tag{4.11} \]
where I is the precipitation intensity in mm/h, while the values of A and B are species-specific and depend on several mechanisms, all involving the size of the suspended particles or of the rain drops, e.g. impaction onto the rain drop, Brownian motion of particles towards the rain drop and nucleation of a water drop by the particle. Once again, the most reliable source of information has been considered the SMHI documentation, which reports a value of 2.0 × 10^−4 for A and 0.7 for B for 3H as HTO, while no wet deposition is expected from 14C as CO2 (gas) due to assumption 2. above. Note that the dissolution of CO2 into the rain drops is a very complex process that is accounted for in more sophisticated models like the "Falling Drop Method" (see [78]).
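As a quick numerical illustration (assuming, as seems implied by the SMHI parameterization, that the resulting Λ is expressed in s^−1), the washout coefficient for the rain intensity used later in this section is:
\[ \Lambda = 2.0 \times 10^{-4} \cdot (0.34)^{0.7} \approx 9.4 \times 10^{-5}\ \mathrm{s^{-1}} \]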
Unlike dry deposition, which occurs at ground level, precipitation scavenging involves all volume elements along the vertical profile of the plume. Thus, assuming 3. above, the total wet deposition per unit horizontal area, Fwet, is found by integrating Eq. (4.10) through a vertical column of air:
\[ F_{wet} = \int_{0}^{\infty} \Lambda\, C(x, y, z)\, dz \tag{4.12} \]
Assuming Λ constant over the domain (assumption 4.) and limiting the integral to the plume centerline in the boundary layer, (4.12) becomes:
\[ F_{wet} = \Lambda \int_{0}^{A} C(x, 0, z)\, dz \tag{4.13} \]

There are two approaches to the problem of solving integral (4.13): the numerical one adopted in the SMHI report (the SMHI model from now on) and the approximated one implemented in Lena [10]. The numerical solution of integral (4.13) is straightforward from a computational point of view; the unspoken approximation in Lena is, instead, less obvious. In fact, in [25] Clarke suggests that when the vertical dispersion coefficient σz becomes greater than the depth of the boundary layer A, the effective vertical concentration distribution is uniform and (2.8) becomes simply:
\[ C(x, y, z) = \frac{Q_0}{\sqrt{2\pi}\, u_{10}\, \sigma_y\, A}\, \exp\!\left(\frac{-y^2}{2\sigma_y^2}\right) \tag{4.14} \]
In this case (4.13) has an analytical solution given by:
\[ F_{wet} = \Lambda\, \frac{Q_0}{\sqrt{2\pi}\, u_{10}\, \sigma_y} \tag{4.15} \]

which is the closed expression implemented in Lena. However, in the same report, the reader is invited to always check that σz ≥ A, which usually happens at great distances from the emission point (≥ 1 km). For this reason, the following short-field dose results from Lena should be taken with a certain prudence. As for dry deposition, the amount of activity deposited out of the plume by rain along the travelled distance depletes the plume strength by a factor [78]:
\[ DF_{wet} = \exp\!\left(-\Lambda\, \frac{x_{rain}}{u}\right) \tag{4.16} \]
where x_rain is the distance along the plume, upwind from the point of interest, over which it was raining during the plume passage, and u is the wind speed at plume height. The former is assumed to be simply 300 m, while u is evaluated from u10 with (2.5). The calculations were then run for the worst case scenario LP-102 PIE 1 in P50% weather conditions (class D is the only one in which rain can occur) and with a rain intensity of 0.34 mm/h, corresponding to the 95th percentile of its hourly distribution from SMHI open data [7]. The results with and without 3H dry deposition as a function of emission height are shown in Figure 4.11.

Figure 4.11: Dose calculations with the SMHI model and the Lena approximation for wet deposition. Results are expressed as a function of emission height for the worst case scenario LP-102 PIE 1 in P50% weather conditions and with a rain intensity of 0.34 mm/h. Presence and absence of 3H dry deposition were considered separately.

The first thing to notice is that the Lena approximation of the integral produces dose results a factor of 1.25-2 lower than SMHI. Without the dry deposition of 3H the dose is almost constant, due to the fact that ingestion, and hence wet deposition, accounts for almost 99% of the dose and is mostly independent of the emission height. When the tritium vg is turned on, the two contributions become comparable and the decreasing trend is restored. However, in all cases these values are far higher than the results without wet deposition from [46], which range from 1.68 × 10^−2 mSv to 3.5 × 10^−3 mSv. The calculations without the dry deposition of 3H clearly prove that the wet deposition contribution alone is also far from negligible, and when both are considered the dose can skyrocket to over 1 mSv, which happens to be the legal limit for the public in this category of accident [55]. Obviously, these results are based on conservative assumptions that are unlikely to be realistic; still, they suggest the need for further investigations in terms of computational model and parameters.
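To make the difference between the two approaches concrete, the following sketch integrates (4.13) numerically with the trapezoidal rule and compares it with the Lena closed form (4.15). The plume formula with ground and boundary-layer reflections is a standard Gaussian plume assumption (the thesis' (2.8) may differ in details) and all parameter values are illustrative.

```cpp
// Numerical integral (4.13) vs. the Lena closed form (4.15).
#include <cmath>
#include <cstdio>

const double PI = 3.141592653589793;

// Centerline (y = 0) Gaussian plume concentration with reflections at the
// ground and at the boundary layer top A (first image terms only).
double conc(double z, double h, double sz, double A,
            double Q0, double u10, double sy) {
  auto g = [&](double m) { return std::exp(-0.5 * m * m / (sz * sz)); };
  double vert = g(z - h) + g(z + h) + g(2 * A - z - h) + g(2 * A - z + h);
  return Q0 / (2.0 * PI * u10 * sy * sz) * vert;
}

int main() {
  const double Q0 = 1.0, u10 = 3.0, sy = 50.0, sz = 80.0; // illustrative
  const double h = 20.0, A = 800.0;  // emission height, boundary layer depth
  const double lam = 2.0e-4 * std::pow(0.34, 0.7);        // Eq. (4.11)

  // Trapezoidal rule on [0, A] for Eq. (4.13)
  const int n = 10000;
  const double dz = A / n;
  double sum = 0.5 * (conc(0, h, sz, A, Q0, u10, sy) +
                      conc(A, h, sz, A, Q0, u10, sy));
  for (int k = 1; k < n; ++k) sum += conc(k * dz, h, sz, A, Q0, u10, sy);
  const double FwetNum = lam * sum * dz;

  // Lena closed form, Eq. (4.15): valid only once sigma_z has grown to ~A
  const double FwetLena = lam * Q0 / (std::sqrt(2.0 * PI) * u10 * sy);

  printf("F_wet numerical = %e, Lena = %e\n", FwetNum, FwetLena);
  return 0;
}
```

With σz well below A, as in these illustrative values, the two results differ noticeably, which is precisely the short-field caveat raised above.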

4.4.2 Root Uptake


The second model uncertainty that will be briefly discussed is the process of activity uptake by crop roots. As already mentioned, in ANNEX III of [45] a separate discussion about equilibrium uptake of 3H and 14C is presented. In fact, these nuclides can be easily incorporated into a great variety of different chemical compounds within the human body, thus the radiological consequences from releases of these radionuclides are best assessed using models that employ a specific activity approach, as discussed in Sec. 4.2.10. Such models are considered to give conservative dose estimates when an individual is assumed to be in complete equilibrium with maximum levels of environmental specific activity of 3H and 14C. In [80] Galeriu suggests that this equilibrium can take from months to years to be established. For exposure under non-equilibrium conditions, like an accident scenario, some hints are given in the same paper. For example, it is stated that experimental works on the interaction of tritiated water vapour with plants and soil have found that the evolution of the HTO concentrations in plant and soil water is given, approximately, by the following relationships:
\[ \frac{dC_{pw}}{dt} = \frac{v_{ex}^{p}}{M_{pw}} \left[ C_a - \gamma H_{sl} C_{pw} + \left( H_{sl} - H_a \right) C_{sw}^{t} \right] + \text{OBT-term} \tag{4.17} \]
\[ \frac{dC_{sw}^{t}}{dt} = v_{ex}^{s} \left( C_a - \gamma H_{ss} C_{sw}^{t} \right) + \text{diff-term} - \text{perc-term} \tag{4.18} \]
\[ \frac{dC_{sw}^{r}}{dt} = \text{diff-term} - \text{perc-term} - \text{transp-term} \tag{4.19} \]
where C_pw represents the HTO concentration in plant water, C_a the volumetric air concentration, C_sw the mean soil water concentration through the rooting zone, C_sw^t the superficial soil water concentration, and C_sw^r the soil water concentration at various rooting depths. H_a is the actual atmospheric humidity, H_sl and H_ss the saturation humidity in the leaf and the soil, respectively, M_pw the mass of plant water, γ = 0.909 the ratio of the vapour pressures of HTO and H2O, and v_ex^p and v_ex^s the exchange velocities between the external atmosphere and the plant and soil. Other processes, even though not explicitly treated in [80], must be included, e.g. the exchange with the OBT pool in (4.17) and the terms due to diffusion, percolation of rain water and plant transpiration in (4.18) and (4.19).
The picture emerging is that of a general model for interception, translocation and root uptake which goes beyond the simpler factorizations given in Sec. 2.3.2. Equations (4.17)-(4.19) are used in [80] as a starting point to obtain a more detailed equilibrium model, but no instructions are given on how to use them in that form. Since all the previous results highlight a great impact of ingestion on the total dose, further studies are desirable in this direction too.
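Although [80] gives no prescription for using the equations in this form, a minimal numerical sketch may clarify how such a system could be integrated once the missing terms are specified. Here the OBT, diffusion, percolation and transpiration terms are simply set to zero, the dimensional bookkeeping of the soil equation is schematic, and all parameter values are placeholders, not taken from [80].

```cpp
// Explicit Euler integration of (4.17)-(4.18) with the unspecified terms
// (OBT, diffusion, percolation) set to zero; all values are placeholders.
#include <cstdio>

int main() {
  const double vexP = 2.0e-3, vexS = 1.0e-4; // exchange velocities, m/s
  const double Mpw  = 2.0;                   // plant water mass, kg/m^2
  const double Hsl = 0.020, Hss = 0.017, Ha = 0.010; // humidities, kg/m^3
  const double gam = 0.909;                  // HTO/H2O vapour pressure ratio
  const double Ca  = 1.0e3;                  // air HTO concentration, Bq/m^3

  double Cpw = 0.0, CswT = 0.0; // plant water / superficial soil water
  const double dt = 10.0;       // s

  for (int i = 0; i < 1440; ++i) {           // 4 hours of exposure
    double dCpw  = vexP / Mpw * (Ca - gam * Hsl * Cpw + (Hsl - Ha) * CswT);
    double dCswT = vexS * (Ca - gam * Hss * CswT); // schematic units
    Cpw  += dCpw * dt;
    CswT += dCswT * dt;
  }
  printf("Cpw = %g, Csw^t = %g (schematic units)\n", Cpw, CswT);
  return 0;
}
```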
Chapter 5

Sensitivity Study

The last step in the current analysis process is the sensitivity study. Its purpose is to "identify those parameters whose uncertainties have the largest impact on the uncertainty of the predicted dose" [2]. The results of the sensitivity analysis are expected both to permit a ranking of the parameters with respect to their contribution to the model predictions and to give a somewhat deeper understanding of the model itself. In this work, it has been chosen to perform a variance-based sensitivity analysis; hence, after having explained the theoretical background of the methods, computational solutions will be investigated and results for the ESS dose model presented. The main references for this chapter are [65], [83], [84] and [87].

5.1 Variance-based sensitivity analysis


Variance-based sensitivity analysis (SA) techniques are the most often used among SA methods [84]. They have a wide spectrum of applications and naturally allow parameter interactions to be taken into account, an aspect increasingly stressed by official guidelines [65]. Variance-based SA has a long history in sensitivity methods, with a milestone in the work of Sobol' [81] and a giant leap after the introduction of total sensitivity indices by Homma and Saltelli [82]. A further improvement was achieved with the computational optimization of those indices by Saltelli in [65]. The main idea of these methods is to express sensitivity through input-output variance ratios, which make it possible to evaluate how the variance of an input (or group of inputs) contributes to the uncertainty of the output.

5.1.1 Independent variables


Let's recall once again the definition of a computational model given in (1.1):
\[ f : \mathbb{R}^{p} \to \mathbb{R}, \qquad \vec{X} \mapsto Y = f(\vec{X}) \tag{5.1} \]
Let's assume that all the variables \(\vec{X}\) are independent and let's define the variance of Y as V(Y). Intuitively, an indicator of the importance of an input X_i could


be based on the conditional variance V(Y | X_i = x_i^*), which is the variance of Y taken over all variables except X_i, which is fixed to its true value x_i^*. Another way to denote the conditional variance is V_{∼X_i}(Y | X_i). To avoid the problem of not knowing the true value of X_i, one can evaluate the expectation E_{X_i}[V_{∼X_i}(Y | X_i)] over all possible values of x_i^*. Using the well-known law of total variance:
\[ V(Y) = V_{X_i}\left(E_{\sim X_i}[Y | X_i]\right) + E_{X_i}\left[V_{\sim X_i}(Y | X_i)\right] \tag{5.2} \]
it is possible to retrieve V_{X_i}(E_{∼X_i}[Y | X_i]), which is known as the variance of the conditional expectation and has the valuable property of increasing in value the greater the importance of X_i. From (5.2) it is then straightforward to obtain the indicator sought:
\[ S_i = \frac{V_{X_i}\left(E_{\sim X_i}[Y | X_i]\right)}{V(Y)} = 1 - \frac{E_{X_i}\left[V_{\sim X_i}(Y | X_i)\right]}{V(Y)} \tag{5.3} \]
This indicator, named by Sobol' [81] the first order sensitivity index, varies from 0 to 1 and measures the first order (i.e. additive) effect of X_i on the model output. These properties should become clearer after introducing the framework of decomposition equations on which variance-based methods find a solid theoretical basis. In fact, Sobol' decomposes the model function f into summands of increasing dimensionality:
\[ f(X_1, \dots, X_p) = f_0 + \sum_{i=1}^{p} f_i(X_i) + \sum_{i}\sum_{j>i} f_{ij}(X_i, X_j) + \dots + f_{1 \dots p}(X_1, \dots, X_p) \tag{5.4} \]

for a total of 2^p terms. Eq. (5.4) is known as the Hoeffding decomposition and [81] should be used as a reference for further reading on its mathematical properties, like the uniqueness of those terms. Sobol' also showed that if f(\vec{X}) is assumed to be a square integrable function over R^p, then each term of the decomposition is square integrable over R^p, the integral of each term over any of its own variables is zero and the terms are all mutually orthogonal. This leads to the following decomposition of the variance of Y:
\[ V(Y) = \sum_{i=1}^{p} V_i + \sum_{i}\sum_{j>i} V_{ij} + \cdots + V_{1 \dots p} \tag{5.5} \]

where:
\[ V_i = V_{X_i}\left(E_{\sim X_i}[Y | X_i]\right), \quad V_{ij} = V_{X_i X_j}\left(E_{\sim X_{ij}}[Y | X_i, X_j] - E_{\sim X_i}[Y | X_i] - E_{\sim X_j}[Y | X_j]\right), \;\dots \tag{5.6} \]
Since all the variables are independent, the previous expression becomes:
\[ V_i = V_{X_i}\left(E_{\sim X_i}[Y | X_i]\right), \quad V_{ij} = V_{X_i X_j}\left(E_{\sim X_{ij}}[Y | X_i, X_j]\right) - V_i - V_j, \;\dots \tag{5.7} \]

from which it is clear that, dividing (5.5) by V(Y), the following important relation is obtained:
\[ \sum_{i=1}^{p} S_i + \sum_{i}\sum_{j>i} S_{ij} + \cdots + S_{1,2,\dots,p} = 1 \tag{5.8} \]

At this point it should be clear why the aforementioned S_i can be at most 1 and is the sensitivity index of "first order". The second order index S_ij expresses the sensitivity of the model to the interaction between X_i and X_j, which is not included in the individual effects of X_i and X_j. One can define indices of any higher order but, for a model of p inputs, the number of indices of all orders is 2^p − 1, which grows very fast as p increases. For this reason, total sensitivity indices were introduced by Homma and Saltelli [82], soon becoming a popular variance-based measure. The definition of the total sensitivity index for a variable X_i is:
\[ S_{Ti} = \frac{E_{\sim X_i}\left[V_{X_i}(Y | \sim X_i)\right]}{V(Y)} = 1 - \frac{V_{\sim X_i}\left(E_{X_i}[Y | \sim X_i]\right)}{V(Y)} \tag{5.9} \]

To clarify the cumbersome notation: V_{X_i}(Y | ∼X_i) is the conditional variance taken over X_i when all the other variables are fixed, and E_{∼X_i}[V_{X_i}(Y | ∼X_i)] its expectation value taken over all factors except X_i. S_Ti measures the total effect, i.e. the first and higher order effects (interactions) of factor X_i. In [65] it is suggested that one way to visualize S_Ti is by considering, in the second equality in (5.9), V_{∼X_i}(E_{X_i}[Y | ∼X_i]) as a first order effect that accounts for the variation of all variables except X_i (compare it with (5.3)), so that V(Y) minus V_{∼X_i}(E_{X_i}[Y | ∼X_i]) must give the contribution of all the terms in the variance decomposition which do include X_i. The total sensitivity index is thus a powerful tool for SA when models are nonlinear and higher order terms can be equally important.
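As a minimal worked example (not taken from the references), consider the purely additive model Y = aX_1 + bX_2 with X_1, X_2 independent standard normal variables. Then E_{∼X_1}[Y | X_1] = aX_1, so:
\[ V(Y) = a^2 + b^2, \qquad S_1 = S_{T1} = \frac{a^2}{a^2 + b^2}, \qquad S_2 = S_{T2} = \frac{b^2}{a^2 + b^2} \]
and all interaction indices vanish, consistently with (5.8); any gap between S_i and S_Ti therefore signals non-additive behavior.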

5.1.2 Correlated variables


In the previous section it has been assumed that all the inputs are independent, but this is rarely the case in practice. The problem when variables are correlated concerns the interpretation of the sensitivity index values. In fact, if the inputs are not assumed to be independent, the functions of the Hoeffding decomposition (5.4) are no longer orthogonal, which implies that new terms must appear in the variance decomposition (5.7) and, ultimately, that the sum of the sensitivity indices of all orders (5.8) is no longer equal to 1. In more practical terms, changes in two correlated variables X_i and X_j are linked; hence, quantifying the sensitivity to one of the two variables inevitably means quantifying part of the sensitivity due to the other variable as well. So, even though one can always compute V_{X_i}(E_{∼X_i}[Y | X_i]), it is impossible to decompose first order and interaction effects. A straightforward solution for a particular case is given in [84] and consists of defining so-called group sensitivity indices. Let's assume that the covariance matrix of \vec{X} is not diagonal but, at the same time, not all variables are correlated; rather, different groups of correlated inputs can be identified. Clearly, variables within a group are dependent, but variables of different groups are independent. This is exactly the case encountered in this study. In [84] these groups are denoted as "multidimensional variables" \vec{X}_i; the p-dimensional input \vec{X} is, for instance, written as:

\[ \vec{X} = \big( \underbrace{X_1}_{\vec{X}_1}, \dots, \underbrace{X_i, \dots, X_{i+k_1}}_{\vec{X}_I}, \underbrace{X_{i+k_1+1}, \dots, X_{i+k_2}}_{\vec{X}_{I+1}}, \dots, \underbrace{X_{i+k_{L-1}+1}, \dots, X_p}_{\vec{X}_{I+L}} \big) \tag{5.10} \]

The reader may have figured out the trick at this point. The first order sensitivity indices for the multidimensional independent variables (\vec{X}_1, \dots, \vec{X}_{I+L}) are simply defined along the lines of (5.3) by:
\[ S_k = \frac{V_{\vec{X}_k}\left(E_{\sim \vec{X}_k}[Y | \vec{X}_k]\right)}{V(Y)} \qquad \forall k \in [1, I+L] \tag{5.11} \]
which measures the first order impact of the multidimensional variable \vec{X}_k on the output Y. Notably, Eq. (5.11) correctly reduces to (5.3) if the variable \vec{X}_k is one-dimensional. From here it is then straightforward to define higher order and total sensitivity indices, hence no demonstration will be given.

5.2 Numerical estimation


The best overview of the existing estimators to compute S_i and S_Ti with Monte Carlo simulations is given by Saltelli in [65]. The method used in this sensitivity study can be summarized in the following steps:

1. two independent (N × p) matrices of variables A and B are sampled, with a_ji and b_ji as generic elements. The index i runs from one to p, the number of factors, while the index j runs from one to N, the number of simulations. Since quasi-random numbers are used to sample the variable space, the matrices A and B can easily be generated from a sequence of size (N × 2p): A is then the left half of the sequence and B the right one [65];

2. the mean and variance of Y are then computed. In this work, they have been calculated from the sampled values in A, so the estimators in the subsequent points have been chosen accordingly. To introduce the notation, the mean and the variance have been estimated with:
\[ f_0 = \frac{1}{N} \sum_{j=1}^{N} f(A)_j, \qquad D = \frac{1}{N-1} \sum_{j=1}^{N} f(A)_j^2 - f_0^2 \tag{5.12} \]

3. two new matrices for each input are introduced: A_B^i and B_A^i, where all columns are from A, or B, except the i-th column, which is from B, or A, respectively;

4. the first order index S_i is obtained from (5.3), with the variance of the conditional expectation approximated by [65]:
\[ V_{X_i}\left(E_{\sim X_i}[Y | X_i]\right) = \frac{1}{N} \sum_{j=1}^{N} f(B)_j \left( f(A_B^i)_j - f(A)_j \right) \tag{5.13} \]
similar results should be achieved with the triplet A, B_A^i and B;

5. the computation of S_Ti proceeds similarly from definition (5.9), where E_{∼X_i}[V_{X_i}(Y | ∼X_i)] is obtained from [86]:
\[ E_{\sim X_i}\left[V_{X_i}(Y | \sim X_i)\right] = \frac{1}{2N} \sum_{j=1}^{N} \left( f(A)_j - f(A_B^i)_j \right)^2 \tag{5.14} \]

6. finally, confidence intervals on the approximated indices are computed with Bootstrap techniques (see next section).
A few remarks on the numerical estimation of the sensitivity indices. First of all, since no second order indices have been considered in this work, they have not been treated in detail in the previous discussion. However, it is possible to estimate second order indices simply by computing B_A^{ik}, which is just B but with both the i-th and k-th columns fixed [84]. Secondly, no reference to multidimensional variables has been made in order to keep the notation light, but in the specific case of this sensitivity study that approach has indeed been adopted to estimate the 3H ingestion-inhalation first order and total sensitivity indices (a code sketch of the basic estimators follows below). Moreover, as outlined in [65], the triplet A, B and A_B^i for the estimators (5.13) and (5.14) is the best choice when using the Sobol' sequence. The reason is that a matrix (N × 2p) is used to retrieve A and B, but as the column index increases the equidistribution property of the numbers deteriorates. Thus, the triplet A, B and A_B^i contains a higher number of "better points" with respect to A, B and B_A^i.
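The following is a minimal, self-contained sketch of the estimator loop for S_i and S_Ti as given by (5.13) and (5.14); the function and variable names are illustrative and do not come from the thesis code.

```cpp
// Estimator loop for S_i and S_Ti, Eqs. (5.13)-(5.14); names illustrative.
#include <vector>

struct Indices { double Si, STi; };

// A, B: N x p sampled input matrices; f: the model; i: input under study
Indices sobolIndices(const std::vector<std::vector<double>>& A,
                     const std::vector<std::vector<double>>& B,
                     double (*f)(const std::vector<double>&), int i) {
  const int N = (int)A.size();
  std::vector<double> fA(N), fB(N), fAiB(N);
  double f0 = 0.0;
  for (int j = 0; j < N; ++j) {
    fA[j] = f(A[j]);
    fB[j] = f(B[j]);
    std::vector<double> AiB = A[j]; // all columns from A ...
    AiB[i] = B[j][i];               // ... except the i-th, taken from B
    fAiB[j] = f(AiB);
    f0 += fA[j] / N;                // mean, first part of (5.12)
  }
  double D = -f0 * f0;              // variance, second part of (5.12)
  for (int j = 0; j < N; ++j) D += fA[j] * fA[j] / (N - 1);

  double Vi = 0.0, ETi = 0.0;
  for (int j = 0; j < N; ++j) {
    Vi  += fB[j] * (fAiB[j] - fA[j]) / N;                     // Eq. (5.13)
    ETi += (fA[j] - fAiB[j]) * (fA[j] - fAiB[j]) / (2.0 * N); // Eq. (5.14)
  }
  return { Vi / D, ETi / D };
}
```

Note that each input requires N extra model evaluations for A_B^i, for a total of N(p + 2) runs, which is what makes Saltelli's scheme efficient compared with a brute-force conditional variance estimate.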

5.2.1 Bootstrap Confidence Intervals


As stated in [87], no estimate of sensitivity can be of any use without an estimate of its sampling variability. For a detailed discussion of Bootstrap Confidence Intervals (BCIs) one can, once again, refer to [77]. Briefly, the QMC-generated A and B are resampled, in the sense of sampled with replacement, B times (B here being the number of bootstrap replicas, not the matrix B); at each stage and for each variable, S_i and S_Ti are recalculated, leading to a bootstrap estimate of the sampling distribution of the sensitivity indices. BCIs can then be constructed with the percentile method, which selects as endpoints of the interval two extremes of the bootstrap distribution. For the sake of this sensitivity study the 5th and 95th percentiles have been chosen, with a nominal coverage of 90%. Interestingly, bootstrapping works because sampling with replacement from a set of independent, identically distributed data is equivalent to sampling from the empirical distribution function of the data, and in Appendix A of [87] it is shown why the bootstrap also works with quasi-random numbers.
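A minimal sketch of the percentile bootstrap described above; here computeIndex stands in for a recomputation of S_i or S_Ti on the resampled rows (an illustrative name, not from the thesis code).

```cpp
// Percentile bootstrap for a sensitivity index.
#include <vector>
#include <algorithm>
#include <utility>
#include <random>

std::pair<double, double> bootstrapCI(
    int N,                                            // rows 0..N-1 of A, B
    double (*computeIndex)(const std::vector<int>&),  // index on a resample
    int B = 10000) {                                  // number of replicas
  std::mt19937 rng(42);
  std::uniform_int_distribution<int> pick(0, N - 1);
  std::vector<double> est(B);
  std::vector<int> resampled(N);
  for (int b = 0; b < B; ++b) {
    for (int& r : resampled) r = pick(rng);   // sampling with replacement
    est[b] = computeIndex(resampled);
  }
  std::sort(est.begin(), est.end());
  // 5th and 95th percentiles of the bootstrap distribution -> 90% coverage
  return { est[(int)(0.05 * B)], est[(int)(0.95 * B)] };
}
```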

5.2.2 Validation with test function


Several mathematical functions have been proposed in the literature as test cases (see for example [65] or [87]); the one proposed by Jacques [84] seemed the most suitable for this study, even though it is not particularly challenging for the estimators. In fact, let's consider the model
\[ Y = f(X_1, \dots, X_6) = a X_1 X_2 + b X_3 X_4 + c X_5 X_6 \tag{5.15} \]
where all the X_i are normally distributed (i.e. X_i ∼ N(0, 1)) and where X_3 and X_4 are correlated with Pearson correlation coefficient ρ_1, as are X_5 and X_6 with coefficient ρ_2. In this case, the multidimensional approach previously described is applied and the two couples of correlated variables are treated precisely as two 2-dimensional inputs. The first order sensitivity indices are denoted by S_{3,4} and S_{5,6}, and similarly the total ones by S_T{3,4} and S_T{5,6}. Besides its simplicity, this model is useful as a validation for the code since it has analytically computable sensitivity indices [84]:
\[ S_{12} = \frac{a^2}{a^2 + b^2(1+\rho_1^2) + c^2(1+\rho_2^2)}, \quad S_{\{3,4\}} = \frac{b^2(1+\rho_1^2)}{a^2 + b^2(1+\rho_1^2) + c^2(1+\rho_2^2)}, \quad S_{\{5,6\}} = \frac{c^2(1+\rho_2^2)}{a^2 + b^2(1+\rho_1^2) + c^2(1+\rho_2^2)} \tag{5.16} \]
while all the other first and higher order indices are null. The total indices, instead, are clearly given by:
\[ S_{T1} = S_{12}, \qquad S_{T2} = S_{12}, \qquad S_{T\{3,4\}} = S_{\{3,4\}}, \qquad S_{T\{5,6\}} = S_{\{5,6\}} \tag{5.17} \]

To test the reliability of the indices, three scenarios with different configurations of the parameters have been assumed (see Table 5.1). The results obtained with the estimators (5.13) and (5.14) and a sample of size N = 1000 are shown in Figure 5.1. In scenario 1, the high value of the factor a clearly makes the interaction between X_1 and X_2 the most important contribution to the variance of Y, hence the value 0.73 for S_12. In scenario 2, instead, a = b = c, but the correlation between X_3 and X_4, as well as between X_5 and X_6, breaks the symmetry and increases the importance of the couples.

Scenario   a   b   c   ρ1    ρ2    S12    S{3,4}   S{5,6}
1          3   1   1   0.8   0.8   0.73   0.13     0.13
2          1   1   1   0.8   0.8   0.23   0.38     0.38
3          1   1   3   0.8   0.8   0.06   0.09     0.85

Table 5.1: Parameter values for the model (5.15) and the analytical solutions of its sensitivity indices in each scenario. Values approximated to the second decimal place.
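As a quick check of the tabulated values, inserting the scenario 1 parameters into (5.16) gives:
\[ S_{12} = \frac{3^2}{3^2 + 1\cdot(1+0.8^2) + 1\cdot(1+0.8^2)} = \frac{9}{12.28} \approx 0.73, \qquad S_{\{3,4\}} = S_{\{5,6\}} = \frac{1.64}{12.28} \approx 0.13 \]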

Finally, in scenario 3 the situation mirrors the first one, but this time the correlation once again makes the 2-dimensional input (X_5, X_6) even more important. In all cases, the two estimators are able to retrieve a fairly good approximation of the exact analytical values, as shown by the fact that the latter always lie within the BCIs. These positive results can be interpreted as a validation of the code, allowing its relatively safe application to the ESS dose model.

5.3 Results
The results of the sensitivity analysis on the ESS dose model parameters in the worst case scenario for AA10 are shown in Figure 5.2. Clearly, only first order and total sensitivity indices were evaluated, since the number of indices of all orders is 2^{10} − 1 and it would hence be prohibitively expensive to calculate them all, not to mention that the S_Ti results indirectly suggest a small contribution from higher order terms. The indices were estimated using N = 1000 as sample size; the reason for this is a trade-off between sufficient sampling and computational time. The BCIs, instead, were obtained with 10000 replicas of a 1000-point bootstrap sample. From the first plot in Figure 5.2 it is clear that the first three major contributions to the dose variance come from, in order of importance, the 3H deposition velocity, the ingestion-inhalation dose coefficients and the wind speed uncertainties. This does not come unexpected after the considerations made in Section 4.3 for vg and the wind speed. The other variables, instead, have almost null indices, suggesting a negligible contribution to the final dose uncertainty. One thing should be emphasized: some of these, like the 14C activity concentration, σz, the 14C deposition velocity and the breathing rate, have a single-bin bootstrap distribution for the total index, which means that the BCI retrieved is the same for all of them and is just the width of the bin. Such behavior could be explained by an apparent independence of the index from the bootstrap sample extracted, which seems to be a direct consequence of the extremely low sensitivity of the dose to these parameters. In other words, no matter how the data set is resampled, the total index will always be the same close-to-zero number. Notably, in the case of σz the index itself is exactly 0, meaning that the dose evaluations of the differences in (5.14) give the same results for A and A_B^i; hence it must be concluded that the σz uncertainty is irrelevant. In Figure 5.2 the same analysis has also been conducted with the tritium deposition velocity fixed to 0 m/s, showing an increased importance of the uncertainties in the wind speed, the 14C vg and the breathing rate, and a substantial devaluation of the correlated ingestion-inhalation coefficients uncertainty. As discussed once again in Section 4.3, turning off the tritium deposition velocity PDF results in a drop of the ingestion dose, which in turn limits the contribution of the ingestion parameter and makes room for the inhalation dose, hence the breathing rate. Moreover, the difference in value between the 14C vg and wind speed total indices does not appear to be statistically significant by looking at the BCIs, even though this is clearly not a rigorous hypothesis test;
therefore, it is hard to state which one is more impactful. Lastly, in Figure 5.3 the total sensitivity indices of the three most important variables, in the case of implemented 3H vg, are plotted as a function of the emission height. The importance of the wind speed uncertainty slightly increases with the height, while the other two experience an opposite, subtler trend. An almost identical plot was obtained imposing stability class F, suggesting a very low impact of the more stable categories on the ranking of the sensitivity indices. These results appear to be in line with what was expected from the previous considerations; in fact, the dominating contribution to the total dose of the ingestion due to deposited 3H depends on the stability class and emission height only through the relative concentration, which is, in any case, common to all the other contributions. Hence the ranking of the variables should reasonably be preserved.

Figure 5.1: First (or second) order and total sensitivity indices (crosses) and 90% BCIs estimated via the quasi-Monte Carlo method for scenarios (from top to bottom) 1, 2 and 3 of the test function (5.15). The circles indicate the exact analytical values. The indices were estimated using N = 1000 as sample size, while the bootstrap confidence intervals were obtained using a sample of the same size with 10000 replicas.

Figure 5.2: First order and total sensitivity indices with 90% BCIs estimated via the quasi-Monte Carlo method for the AA10 worst case scenario with (top) and without (bottom) tritium deposition velocity. The indices were estimated using N = 1000 as sample size, while the bootstrap confidence intervals were obtained using a sample of the same size with 10000 replicas.

Figure 5.3: Total sensitivity indices of the three most impactful variables for the AA10 worst case scenario.
Conclusions

The goal of this thesis was to deal with the propagation of uncertainties and to perform an analysis of the sensitivity to the initial conditions of the final dose results for a given accident scenario at ESS. The model presented in Chapter 2, adopted by Target Division specialists, was reproduced by a self-written ROOT code. The code structure is summarized in Appendix A, while validating results for the AA10 scenario are presented at the end of Chapter 3. The need for a compact code with a straightforward flow of information from input to output came from the choice of a Monte Carlo technique for the uncertainty analysis. In this case a Quasi-Monte Carlo approach using the Sobol' sequence has been adopted, to reduce the number of points required for the results to converge. A detailed analysis of the parametric uncertainty of the model has led to a deeper understanding of the processes involved, as well as to the observation that the dose distribution is spread over a far larger range than expected from the safety analysis. The main reason is the uncertainty surrounding the tritium deposition velocity, in conjunction with the great amount of released activity for this isotope. In general, the deposition velocity is an empirical parameter quite variable from place to place; in the analysis the SMHI report [60] has been considered a reliable source of information, however it must be noted that no experimentally based values have been reported there. Since the sensitivity study confirmed the huge contribution of this parameter to the dose uncertainty, two courses of action are suggested by the conclusions of this thesis:

1. an on-field experiment to assess the value of the tritium vg in the ESS surroundings should be designed and performed. Alternatively, a detailed analysis of the field properties and accurate modeling by experts, to estimate a reliable range of values, should be commissioned;

2. another important limitation of the ESS model is the fixed choice of stability class and wind speed over a long period of time. As in this particular case, one can always adopt a conservative approach; however, it must be noted that measurements of wind speed and stability class are easily accessible from every modern meteorological station. An a posteriori hourly averaged joint probability distribution of the atmospheric parameters (stability class, wind speed, wind direction, boundary layer height, etc.) would surely lead to much more accurate dose predictions and uncertainty evaluations with little effort.


Concerning the absence of wet deposition in the model, a first study of its potential contribution has been performed at the end of Chapter 4. The conditions assumed were extremely pessimistic; nevertheless, the results seemed to suggest an impact on the final dose far from negligible. Clearly, a deeper and more realistic analysis, which also includes a computational model for the root uptake, is required. Finally, it should be pointed out that in this analysis, due to the wide distribution range, the dose-to-public outliers have exceeded the legal limit of 1 mSv for the AA10 scenario. Therefore, a new assessment by decision makers of the lines of action to adopt for the radiological consequences, in the light of these new pieces of information, seems reasonable.
Appendix A

Figure 5.4: Workflow of the self-written ROOT code for the dose calculations. Legend in the top-right corner. Full code available at https://github.com/Athropos/dose-model-root
Appendix B

[Plot "Result H-3 (3.890000E+08 s)": normalized activity in volume (log scale) versus time elapsed (s), with curves N0, N1, N2 and N3 (emission).]

Figure 5.5: LP-102 PIE1. Plot of the normalized activity in each room (including the environment) as a function of time. The approximated transport matrix method has been implemented using a 100 s timestep and a constant source term in V0 between 0 and 389 hours.

Bibliography

[1] IAEA, Evaluating the Reliability of Predictions Made Using Environmental


Transfer Models, Vienna, 1989.

[2] J.C. Simpson, J.V. Ramsdell, Uncertainty and Sensitivity Analyses Plan, Han-
ford Environmental Dose Reconstruction Project, 1993.

[3] CERRIE, Report of the Committee Examining Radiation Risks of Internal Emit-
ters, 2004.

[4] ESFRI Physical Sciences and Engineering Strategy Working Group - Neutron
Landscape Group, Neutron scattering facilities in Europe - Present status and
future perspectives, September 2016.

[5] ESS-0000002 – Preliminary Safety Analysis Report (PSAR).

[6] ESS-0146659 – Description of the ESS accelerator systems and related infras-
tructure.

[7] ESS-0092033 – Activity transport and dose calculation models and tools used in
safety analyses at ESS.

[8] Mathcad 15.0 M010, User’s guide, PTC July 2011 (version M040 was used in
the safety analyses performed during July 2016 to February 2017 and 2018 and
2019).

[9] Excel 2016 contained in Office 365 for PC, Microsoft 2016. (Calculations were
performed on a PC with Windows 7).

[10] Lena 2003 version 2.3.0, Användarhandledning, SSM 2009-10-13.

[11] ESS-1102691 – Dose calculation user manual, April 2019, Lloyd's Register Consulting.

[12] ESS-0048608 – Target inventory for normal operation.

[13] ESS-0045261 – Target He inventory for normal operation.

[14] ESS-0052231 – Moderator water inventory for normal operation.


[15] ESS-0052266 – Reflector water inventory for normal operation.

[16] ESS-0050210 – Beryllium reflector inventory for normal operations.

[17] ESS-0061660 – Radionuclides content in Target systems

[18] ESS-0088188 – Copper-Graphite Tuning Beam Dump inventory for normal op-
eration.

[19] ESS-0042017 – Active cell airborne contamination evaluations.

[20] Gregg W. McKinney et al., MCNPX 2.7.0 - New Features Demonstrated,


LAUR-12-25775.

[21] CINDER Version 1.05: Code System for Actinide Transmutation Calculations,
RSICC CODE PACKAGE CCC-755.

[22] ESS-0050077 – An overview of target station radiological hazard analysis doc-


umentation.

[23] Steven R. Hanna, Gary A. Briggs, Rayford P. Hosker, Jr., Handbook on Atmo-
spheric Diffusion, National Oceanic and Atmospheric Administration, 1982

[24] A. Mayall, Modelling the dispersion of radionuclides in the atmosphere, Mod-


elling radioactivity in the Environment, Elsevier Science, 2003

[25] R.H.Clarke, A Model for Short and Medium Range Dispersion of Radionuclides
Released to the Atmosphere, National Radiological Protection Board (NRPB),
1979

[26] F. Pasquill, The estimation of the dispersion of windborne material, Meteoro-


logical Magazine, 90, 1063, 33-49, 1961.

[27] F. A. Gifford, Use of routine meteorological observations for estimating atmo-


spheric dispersion, Nuclear Safety, 2, 47-57, 1961.

[28] F.B. Smith, A scheme for estimating the vertical dispersion of a plume from a source near ground level, Proceedings of the 3rd Meeting of an expert panel on air pollution modelling, Brussels: NATO-CCMS Report 14, 1973.

[29] ESS-0000934 – Site description, Uppdragsnummer 3830363.

[30] R.P. Hosker, Estimates of dry deposition and plume depletion over forests and
grasslands, In Proceedings of the Symposium on Physical behaviour of Radioac-
tive Contaminants in the Atmosphere (p. 291), Vienna: IAEA, 1974.

[31] G.A. Briggs, Diffusion estimation for small emissions. Preliminary report, Na-
tional Oceanic and Atmospheric Administration, Oak Ridge, Tenn. (USA). At-
mospheric Turbulence and Diffusion Lab., 1973.

[32] R.F.Griffiths, Errors in the use of the Briggs parameterization for atmospheric
dispersion coefficients, Atmospheric Environment Volume 28, Issue 17, Septem-
ber 1994, Pages 2861-2865.

[33] D. J. Moore, Calculation of ground level concentration for different sampling


periods and source locations, Atmospheric Pollution (p. 516), Amsterdam: El-
sevier, 1976.

[34] J.A. Jones, A Procedure to Include Deposition in the Model for Short and
Medium Range Atmospheric Dispersion of Radionuclides, National Radiological
Protection Board (NRPB), 1981

[35] I. Van der Hoven, Deposition of particles and gases, Meteorology and Atomic Energy 1968, Section 5.3 (Slade, D H, Editor), US Atomic Energy Commission, TID-24190, 1968.

[36] B. Y. Underwood , Review of deposition velocity and washout coefficient, At-


mospheric Dispersion Modelling Liaison Committee Annual Report 1998-1999
(Annex A), Chilton: NRPB-R322, 2001

[37] SSM, Beräkningsregler för analys av stråldoser vid utsläpp av radioak-


tiva ämnen från svenska anläggningar i samband med oplanerade händelser,
Strålsäkerhetsmyndighetens beslut SSM 2013/1525.

[38] S. Nordlinder, Assessment of radiological environmental impact at unplanned


events at ESS, Studsvik Technical Note N-13/183, 2013

[39] K.F. Eckerman & R.W Leggett DCFPAK: Dose Coefficient Data File Package
for Sandia National Laboratory, Oak Ridge National Laboratory ORNL/TM-
13347, 1996

[40] VPC, Methodology Handbook for Realistic Analysis of Radiological Conse-


quences, Vattenfall Power Consultant, Report T-NA 10-24, 2010

[41] ICRP 119, Compendium of Dose Coefficients based on ICRP Publication 60,
ICRP Publication 119, Ann. ICRP 41(Suppl.), 2012

[42] S. Nordlinder, Scooping studies on radiological effects due to releases at severe


accident at ESS, Studsvik Technical Note N-12/191, 2012

[43] IAEA, Quantification of Radionuclide Transfer in Terrestrial and Freshwater


Environments for Radiological Assessments, IAEA-TECDOC-1616, 2009

[44] SCB/JV, Jordbruksstatistisk årsbok 2012, chapter 4, 2012

[45] IAEA, Generic Models for Use in Assessing the Impact of Discharges of Ra-
dioactive Substances to the Environment, IAEA Safety Report Series No. 19,
2001

[46] ESS-0052199 – AA10 – Accident Analysis Report – Public – Water leakage in


Monolith Vessel

[47] ESS-0015358 – Särskilda villkor för ESS-anläggningen i Lund

[48] ESS-0003640 – ESS Concept of Operations Description

[49] ESS-0088177 – Leak paths for Target Accident Analyses

[50] ESS-0134942 – SDD-Sol System 1027 - Monolith Rough Vacuum Pump System

[51] ESS-0241705 – Calculation files to the public for AA10

[52] ESS-0094187 – Source term and dose to worker in accident analyses

[53] ESS-0241704 – Calculation of dose to the public for AA10

[54] ESS-0273080 – Pipe break and cracks – Event classification

[55] ESS-0041755 – ESS Guideline for Radiological Hazard Analysis

[56] X-5 Monte Carlo Team, MCNP — A General Monte Carlo N-Particle Trans-
port Code, Version 5, 2003

[57] ESS-0136227 – Temporary shielding wall.

[58] A. Caldwell, Addressing Off-site Consequence Criteria Using Level 3 Probabilis-


tic Safety Assessment: A Review of Methods, Criteria, and Practices, Depart-
ment of Nuclear Power Safety KTH Royal Institute of Technology, Stockholm,
Sweden, May 2012

[59] J.A. Jones, The Uncertainty in Dispersion Estimates Obtained from the Work-
ing Group Models, National Radiological Protection Board (NRPB), 1986

[60] ESS-0051604 – Atmospheric dispersion modelling of planned releases of ra-


dioactivity at ESS

[61] R.E. Luna and H.W. Church, A Comparison of Turbulence Intensity and Stabil-
ity Ratio Measurement to Pasquill Stability Classes, Sandia Laboratories, 1972.

[62] D. M. Hamby, The Gaussian Atmospheric Model and its sensitivity to the Joint
Frequency Distribution and Parametric Variability, Health Physics. 82(1):64-73,
January 2002

[63] John D. Cook, Determining distribution parameters from quantiles, The Uni-
versity of Texas M. D. Anderson Cancer Center, January 27 2010

[64] Harald Niederreiter, Random Number Generation and Quasi-Monte Carlo


Methods, Society for Industrial and Applied Mathematics, 1992.

[65] A. Saltelli, P. Annoni et al, Variance based sensitivity analysis of model output.
Design and estimator for the total sensitivity index, Computer Physics Commu-
nications, Volume 181, Issue 2, February 2010, Pages 259-270

[66] P. Bratley, B.L. Fox, Algorithm 659: implementing Sobol's quasi-random sequence generator, ACM Transactions on Mathematical Software 14 (1) (1988) 88–100.

[67] William J. Morokoff and Russel E. Caflisch, Quasi-Monte Carlo integration, J.


Comput. Phys. 122 (1995), no. 2, 218–230.

[68] I.M. Sobol’, Distribution of points in a cube and approximate evaluation of


integrals, Zh. Vych. Mat. Mat. Fiz. 7: 784–802 (in Russian); U.S.S.R Comput.
Maths. Math. Phys. 7: 86–112 (in English), 1967.

[69] S. Joe and F. Y. Kuo, Remark on Algorithm 659: Implementing Sobol's quasirandom sequence generator, ACM Trans. Math. Softw. 29, 49-57 (2003).

[70] H. Foerstel, HT to HTO Conversion in the Soil and Subsequent Tritium Path-
way: Field Release Data and Laboratory Experiments,Nuclear Research Centre
(KFA) Radioagronomy, Fusion Technology 14 1241-1246, 1988.

[71] J.D. Harrison, A. Khursheed , B.E. Lambert, Uncertainties in dose coefficients


for intakes of tritiated water and organically bound forms of tritium by members
of the public,Radiation Protection Dosimetry, Vol. 98, No. 3, pp. 299–311, 2002

[72] S.F. Snyder,Parameters used in the environmental pathways and radiological


dose modules (DESCARTES, CIDER, and CRD codes) of the Hanford Envi-
ronmental Dose Reconstruction Integrated Codes (HEDRIC), Report PNWD–
2023-HEDR-REV.1, 1994

[73] S.L. Commerford, A. L. Carsten and E. P. Cronkite, The Distribution of Tri-


tium in the Glycogen, Hemoglobin and Chromatin of Mice Receiving Tritium in
their Drinking Water, Radiat. Res. 72, 333–342 (1977).

[74] M. Dorey, P. Joubert, Modelling Copulas: An Overview

[75] U.S. EPA, Exposure Factors Handbook 2011 Edition (Final Report), U.S. En-
vironmental Protection Agency, Washington, DC, EPA/600/R-09/052F, 2011.

[76] R. S. Gage and S. Aronoff, Translocation III. Experiments with Carbon 14,
Chlorine 36, and Hydrogen, Plant Physiology, Vol. 35, No. 1 (Jan., 1960), pp.
53-64.

[77] Efron and Tibshirani, An Introduction to the Bootstrap, Chapman and Hall,
New York, 1997

[78] D. Apsley, Modelling dry deposition, National Power and CERC, 2012

[79] B. Sportisse, A review of parameterizations for modelling dry deposition and


scavenging of radionuclides Atmospheric, Environment 41, 2683–2698, 2007

[80] D. Galeriu, Transfer Parameters for Routine Releases of HTO — Considera-


tion of OBT, Rep. AECL-11052, COG-94-76, Atomic Energy of Canada, Chalk
River, ON, 1994

[81] I.M. Sobol’, Sensitivity analysis for non-linear mathematical models, Mathe-
matical Modelling and Computational Experiment 1, 407–414, 1993.

[82] T. Homma, A. Saltelli, Importance measures in global sensitivity analysis of


model output, Reliability Engineering and System Safety 52 (1) (1996) 1–17.

[83] I.M. Sobol’, Global sensitivity indices for nonlinear mathematical models and
their Monte Carlo estimates, Mathematics and Computers in Simulation 55,
271–280, 2001

[84] J. Jacques, C. Lavergne, N. Devictor, Sensitivity analysis in presence of model


uncertainty and correlated inputs, Reliability Engineering and System Safety,
Elsevier, vol. 91(10), pages 1126-1134, 2006

[85] T. Ishigami and T. Homma, An importance quantification technique in uncertainty analysis for computer models, Proceedings of ISUMA '90, First International Symposium on Uncertainty Modelling and Analysis, University of Maryland, 3–5 December 1990.

[86] M.J.W. Jansen, Analysis of variance designs for model output, Computer
Physics Communications 117 35–43, 1999

[87] G. E. B. Archer , A. Saltelli & I. M. Sobol’, Sensitivity measures,anova-like


Techniques and the use of bootstrap, Journal of Statistical Computation and
Simulation, 58:2, 99-120, 1997
