Physics Department
Candidate: Nicola Rizzi
Supervisor: Ph.D. Riccardo Bevilacqua
Outside examiner: Ph.D. Paolo Maria Milazzo
This thesis presents the work done at the ESH&Q division at European Spallation
Source (ESS) during the period from May to August 2019. The thesis represents the
graduation requirement for the Nuclear and Subnuclear Physics Master program at
Università degli studi di Trieste.
First and foremost, I would like to thank my supervisor Riccardo Bevilacqua for
the great opportunity he offered me and the humanity he showed me at each step of this
journey. I would also like to address a special thanks to Leif Emås, for his clarity
and support, and to Leif Spanier from Lloyd’s Register and Xuezhi Zhang from the
Chinese Academy of Sciences, for their invaluable assistance to my project. Finally,
to the numerous other people at the European Spallation Source who assisted with both
technical and moral support, I am deeply indebted for your help when it was most needed.
A special thanks also to ENEN and SIRR for their financial support to the mobility
through, respectively, the ENEN+ project, co-funded by the EURATOM research
programme, and the SIRR Grant for Master Thesis.
Nicola Rizzi
Trieste, Italy, December 2019
Summary
The model used is treated in depth in Chapter 2 and is subdivided into three main
parts: activity transport inside the building, dispersion into the atmosphere and
the actual dose assessment. Each part deals with the conceptual and computational
realization of the physical processes involved and, while the choices made are mainly
due to the Target Division specialists, the actual code used for the analysis has been
written ex novo as a part of this work.¹
Chapter 3 is focused on the description of the accident scenario and its radiological
consequences. At the end of this chapter, validation of the code results is also
addressed, using as a benchmark the dose assessment by the Target Division specialists.
¹ The code is available at https://github.com/Athropos/dose-model-root
Sommario

The aim of this thesis is to obtain a quantitative estimate of the uncertainties on the
dose calculations in the case of an accident scenario at the European Spallation Source.
Chapter 1 presents an overview of the basics of uncertainty analysis; in particular,
the methods and phases generally adopted are discussed in detail, together
with a brief introduction to the structure of the European Spallation Source.
The model used is treated in detail in Chapter 2 and is divided into three main
parts: radiation transport inside the building, dispersion into the atmosphere
and the actual dose assessment. Each part deals with the conceptual and computational
realization of the models of the physical processes involved and, while the choices
made are mainly due to the Target Division specialists, the code used
for the analysis was written ex novo as an integral part of this work.²
The core of the thesis is Chapter 4 where, after an introduction to Quasi-Monte
Carlo methods, the parametric uncertainty of the model is explored, based on the
scientific literature and on the informal confidence of the aforementioned specialists.
The results are then discussed for the most pessimistic accident scenario, followed
by some considerations on the model uncertainties.

² The code is available at https://github.com/Athropos/dose-model-root
Acronyms
Term Definition
AA10 Accident Analysis 10 - Water leakage in Monolith Vessel
A2T Accelerator to Target
ARF Airborne Release Fraction
BCI Bootstrap Confidence Interval
DR Damage Ratio
EFH Exposure Factors Handbook
ESH&Q Environment, Safety, Health and Quality
GLS Gas Liquid Separation (tank)
HVAC Heating, Ventilation and Air Conditioning
IAEA International Atomic Energy Agency
LENA Swedish Radiation Safety Authority Gaussian Dispersion Code
LP Leak Path
MAR Material At Risk
MC Monte Carlo
NBPI Neutron Beam Port Insert
OBT Organically Bound Tritium
PBW Proton Beam Window
PIE Postulated Initiating Event
PSAR Preliminary Safety Analysis Report
PWC Primary Water Cooling
QMC Quasi-Monte Carlo
QR Quasi-random
RS Random Sampling
SMHI Swedish Meteorological and Hydrological Institute
SSM Swedish Radiation Safety Authority
Contents

1 Introduction
  1.1 Uncertainty analysis: an overview
  1.2 European Spallation Source
2 Model
  2.1 Internal activity transport
  2.2 Dispersion
  2.3 Dose assessment
3 Accident Analysis
  3.1 Overview
  3.2 Scenario Development
  3.3 Radiological Consequences
  3.4 Risk Assessment
4 Uncertainty Analysis
  4.1 Methods
  4.2 Parametric Uncertainty
  4.3 Results
  4.4 Model Uncertainties
5 Sensitivity Study
  5.1 Variance-based sensitivity analysis
  5.2 Numerical estimation
  5.3 Results
Conclusions
Introduction
1.1.1 Reliability
The reliability of a dose model is defined in [1] as a measure of confidence in model
predictions. There are two ways of addressing confidence when it comes to dose
models: qualitative and quantitative. The first relies on statements like:
Figure 1.1: Scheme of the structure adopted for the dose assessment and uncertainty analysis.
Blue arrows: steps included in the documentation for accident scenarios. Orange arrows: steps
extensively dealt with in this work. Green arrow: possible re-evaluation in light of the conclusions
of this work.
definition could result in a model predicting correct results for the wrong problem.
The issue can be restated as answering the following question: “What is the assessment
question?”. Two distinct types of assessment question can be studied: those
with a deterministic answer and those with a stochastic answer. As pointed out
in [2], examples of the first type of assessment question are:
Each of these questions has only one true answer. All the uncertainty is then in
the lack of knowledge about the true value of parameters that are invariant with
respect to the reference unit (for example, what the individual ate, how much the
individual ate, or the concentration of the radionuclide in the food the individual
ate).
On the other hand, the second type of assessment question (a question with a
stochastic answer) is:
The true answer to this question varies from individual to individual. Part of the
uncertainty in the answer is due to stochastic variability with respect to the reference
unit of the assessment question (what and how much an individual ate varies from
individual to individual). In other words, while in the first case the only contribution
to the answer comes from uncertainty, in the second case there is also a contribution
from variability, which usually refers to the quantitative biological differences in
individual members of a population [3]. For example, two healthy people of the
same age and gender and having identical diets may exhibit substantially different
tritium retention times. Variability should not be confused with the uncertainty on
the central value of some parameter.
This work aimed to answer a question of the first type, since the dose assessment
request was:
– “What is the dose received by a reference individual of the public living near
the ESS site after a specific accident?”
Specifying the conceptual model is the second step in the uncertainty pro-
cess. “The model concept takes into account the number of distinct compartments,
interconnections between compartments and the number of processes and mecha-
nisms to be considered explicitly” [1]. It is clear that, although ideal, it may not
often be practical to include all conceivable processes and mechanisms within the
conceptual structure of the model. This is due either to the marginal role of some
processes or to the unavailability of relevant data; sometimes even data that do
exist are empirical and their field of application is limited. In any case, one must
be careful about what to keep and what to discard when it comes to the model.
Differences between predictions and reality may be due entirely to the inability to
identify the characteristics of the release, the important mechanisms and pathways
of environmental transport, the attributes of the exposed population or even the
time and space over which calculations should be averaged.
\[
f : \mathbb{R}^p \to \mathbb{R}, \qquad \vec{X} \mapsto Y = f(\vec{X})
\tag{1.1}
\]
where Y is the output, X = (X_1, ..., X_p) are the p inputs and f is the model function,
which may not be known analytically. Let's keep this definition in mind because
it is going to be useful later in Chapter 4 and Chapter 5.
parameter values.
The uncertainty of a parameter is expressed in two ways: the range of possible
values and the frequency with which any value within that range is expected to
occur. At this point one can define a Probability Density Function (PDF) for each
parameter and assess a confidence interval by means of the standard deviation of the
distribution.
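As an illustration of this step, a parameter's PDF and a one-standard-deviation confidence interval can be sketched as follows. The log-normal shape and its parameters are purely hypothetical here, not one of the actual model inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameter: assume a log-normal PDF chosen after a literature review.
samples = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

# Central value and a one-standard-deviation confidence interval.
centre = samples.mean()
ci = (centre - samples.std(), centre + samples.std())
```

In practice the distribution family and its parameters would come from the literature review or expert elicitation described above.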
The maximum conceivable range is estimated by first reviewing the general scientific
literature (reports, books, and scientific papers) that contains parameter information.
After the literature search is completed, it is then evaluated, by either self-assessment
or expert opinion, whether these maximum and minimum values should
be adapted to reflect physical limitations present in the conditions being examined.
At this stage, if there are not enough data to evaluate statistically which PDF
should be selected, subjective determination of the PDF for most parameters is
necessary. To increase the defensibility of this kind of assessment, one may ask an
expert to interpret the available information and quantify the most likely parameter
value and its uncertainty. However, in both cases, the rationale for the assessment
should be well-documented, including a description of the information available and
the evaluation of that information.
Figure 1.2: Scheme of the uncertainty analysis conducted as main objective of this thesis.
“ESS will provide up to 100 times brighter neutron beams than currently
available at existing facilities. In simple terms, the difference between
current neutron sources and ESS is something like the difference between
taking a picture in the glow of a candle or using a torch”
In a more quantitative way, Figure 1.3 reports the effective thermal flux of
neutron sources around the world as a function of operational year. For more
details about the current situation of neutron facilities in Europe, see [4]. Most
existing sources are based on nuclear reactors, an approach that has reached its
maximum capability in terms of how many neutrons can be produced; ESS belongs
to a new generation of neutron sources based on spallation technology which has
been developed by scientists and engineers to reach new standards of efficiency.
From a technical point of view, a particle accelerator speeds up protons to 2 GeV;
the spallation process then takes place when the accelerated proton beam hits
the tungsten bricks of the target wheel. In the process, those high-speed protons
knock out neutrons, which are directed to the ESS instruments through a gauntlet
of media, guides, optics and filters to be used for scientific research. The facility
design and construction include the most powerful linear proton accelerator ever
built, a five-tonne helium-cooled tungsten target wheel, 22 state-of-the-art neutron
instruments, a suite of laboratories and a supercomputing data management centre.
The accelerator components are shown in Figure 1.4 and treated in detail in
[6]; rapidly varying electromagnetic fields heat hydrogen gas in the ion source so
that electrons evaporate from the hydrogen molecules and a plasma of protons re-
mains. The proton beam is then transported through a Low Energy Beam Transport
(LEBT) section to the Radio Frequency Quadrupole (RFQ), where it is bunched
and accelerated up to 3.6 MeV. In the Medium Energy Beam Transport (MEBT)
section the transverse and longitudinal beam characteristics are diagnosed and opti-
mised for further acceleration in the Drift Tube Linac (DTL). After approximately
50 meters the protons have gained enough speed that they can be accelerated by
superconducting double-spoke cavities (SPK). These cavities are cooled by liquid
helium to −271 ◦C.

Figure 1.3: The evolution of effective neutron source fluxes as a function of calendar year, from
the discovery of the neutron in 1932 to 2016. HFIR, ILL, ISIS, SINQ, SNS, JSNS and FRM-II
(MLZ) are still operational and CSNS and ESS are under construction. Image and caption from
[4].

The spoke cavities are followed by 36 Medium Beta Linac (MBL) and
84 High Beta Linac (HBL) elliptical cavities. These structures are distributed all
along the linear accelerator and constitute the majority of its length. After accelera-
tion the beam is transported to the target through the High Energy Beam Transport
(HEBT) section. At this point the protons have reached 96% of the speed of light.
A switch dipole will bend the beam to the A2T line while powered and will permit
the beam to be sent to the beam dump while unpowered. The ESS accelerator high-level
requirements are to provide a 2.86 ms long proton pulse at 2 GeV at a repetition
rate of 14 Hz. This represents 5 MW of average beam power with a 4% duty cycle
on target.
The Target Station uses protons delivered by the linear accelerator to liberate
neutrons from tungsten nuclei via nuclear spallation reactions. These neutrons,
which travel at 10% of the speed of light, are then slowed down to roughly the
speed of sound using liquid hydrogen and water as moderating media, to energy
levels usable by the scientific instruments. Of those neutrons that are slowed down
to useful speeds, a smaller fraction is allowed to travel from the moderators into
the neutron guides that deliver them to neutron-scattering instruments.

Figure 1.5: Cross section view of the Target Building showing the physical locations of the many
systems constituting the target station. Image from [5].

The target station occupies the Target Building (Figure 1.5), which is a large structure, about
130 m in length, 22 m in width and 30 m in height. In Figure 1.5, the proton beam
is shown entering the Target Building from the left, passing through the proton
beam transport hall and into the target monolith. The proton beam window sepa-
rates the high vacuum in the accelerator from the inert helium gas in the monolith,
shown in Figure 1.6, which is a large structure (11 m diameter by 15.5 m tall, from
basement floor to high bay floor) consisting mostly of 3000 tons of steel shielding.
Embedded within the monolith there is a vessel containing the major components
involved in neutron production and delivery to instruments: the spallation target,
the moderator-reflector system, the proton beam window, and instrumentation. The
target itself is a 2.6 m diameter stainless steel disk containing bricks of tungsten,
a neutron-rich heavy metal. It weighs 4.9 tonnes and rotates at 23.3 rpm, in
time with the arrival of the proton beam. The unit is cooled by a flowing helium
gas system interfaced with a secondary water system, whose principal components
(pumps, heat exchangers, filters, etc.) are located in the Utilities rooms. The
moderation is achieved using a 20 K hydrogen-based and water-based moderator (respec-
Figure 1.6: Top: cross section view of the monolith showing in-monolith components and the
connection cell above. Image from [5]. Bottom: 3D model of the moderator-reflector system.
Model
This chapter presents the methods and tools making up the dose calculation
model for the public dose assessment in the safety analyses at ESS. The main
reference for the structure and choices made in this chapter is [7]. Nevertheless, most of
the content presented is the outcome of independent, extensive research through the
scientific literature and books on the subject. In fact, the first part of the work done
at ESS consisted of writing a stand-alone C++ code able to faithfully reproduce the
dose results obtained in the first place by the Target Division specialists. The need
for such a code will become fully clear in Chapter 4, but just to give the reader an idea
of the complexity of the original process, Figure 2.1 shows the workflow for
the dose calculations from [11]. The calculations were run jointly by the Target
the dose calculations from [11]. The calculations were run jointly by the Target
Figure 2.1: Original workflow for the dose calculations from [11]. Each blue box represents a
single Mathcad or Excel file. Green boxes are major analysis steps. Red arrows represent
manual transfers of data, while purple arrows are automatic transfers of data.
Division at ESS and Lloyd’s Register with the aid of three software packages: PTC Mathcad
[8], Microsoft Excel [9] and Lena [10].
Mathcad was used to solve the differential equations describing the internal transport
of activity in the facility (see Sec. 2.1). Lena was used to calculate the relative
concentration needed for the dispersion of activity during the transport
outside the facility to the representative person (see Sec. 2.2). Finally, Excel was used
for the remaining calculations, via three to five different Excel workbooks for each
accident sequence, resulting in the dose to the public.
In this thesis, the need for a compact code with a straightforward flow of information
from input to output comes along with the need to fully understand each step of
the analysis in order to reproduce it. That is the main reason for the extensive research
conducted on equations and parameters missing from the original documentation, but
presented here. Finally, unless clearly stated, only the processes included in the
code will be discussed, independently of what has been dealt with in [7].
Element Description
H Hydrogen
He, Ne, Ar, Kr, Xe Noble gases
N, O, P, S, Se, C Non-metals
F, Cl, Br, I Halogens
As, Te Metalloids
Na, K, Rb, Cs Alkali-metals
Zn, Cd Metals
Table 2.1: Elements considered in the safety analyses to be volatile, gaseous, or able to vaporise.
called the Damage Ratio (DR), which is simply the weight of the damaged material of
an object divided by the total weight of the object, also called the Material at Risk
(MAR). Since this model aims to study only radionuclides released into the air,
the actual inner source term is then given by multiplying the maximum releasable
activity by the Airborne Release Fraction (ARF) of each chemical element; if
several objects are damaged, this process is repeated for each object.
The release fractions of the elements from the target material are treated in some
detail in [22]; however, for the sake of completeness, Table 2.1 lists the elements
considered to be gaseous, volatile or able to vaporise, and thereby potentially found
in the gas phase.
The inner source term for the postulated accident is then obtained by the following
formula:
STinner = MAR · DR · ARF (2.1)
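Equation (2.1) maps directly to code. The sketch below also shows the repetition over several damaged objects mentioned above; the numerical values are purely illustrative, not actual ESS inventories:

```python
def inner_source_term(mar_bq, dr, arf):
    """Inner source term (Eq. 2.1): airborne activity released from one object.
    mar_bq: Material At Risk (Bq), dr: Damage Ratio, arf: Airborne Release Fraction."""
    return mar_bq * dr * arf

# If several objects are damaged, the calculation is repeated per object and summed.
damaged_objects = [(1e12, 0.1, 1e-3), (5e11, 1.0, 5e-4)]  # hypothetical (MAR, DR, ARF)
st_inner = inner_source_term(*damaged_objects[0]) + inner_source_term(*damaged_objects[1])
```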
Figure 2.2: Scheme of the facility for the internal transport model. Room volumes are indicated
as V0, V1 and V2. The gas flow from one volume to the next is indicated as f (e.g. f01 is the flow
from V0 to V1). Each volume may also have a flow out to the environment, indicated with e0, e1
and e2. Finally, the numbers of radioactive nuclides (or activities) in the different volumes are N0,
N1 and N2, while the ones emitted to the environment are E0, E1 and E2.
In this simplification, the three dimensional rooms are reduced to “nodes”, which
can be given, in principle, different physical properties; in this case, they were given
only the property of volume. Gas-flows connect the different nodes with each other
and with the environment (also a node). Another important assumption of the
model is that nuclei that come into a node are immediately spread out over the
whole volume. The change of the number of nuclei (or activity) with time was
originally described in [7] by a set of coupled ordinary differential equations:
\[
\begin{aligned}
\dot{N}_0 &= s(t) + f_{10}(t)\frac{N_1}{V_1} - N_0\lambda - f_{01}(t)\frac{N_0}{V_0} - e_0(t)\frac{N_0}{V_0} \\
\dot{N}_1 &= f_{01}(t)\frac{N_0}{V_0} + f_{21}(t)\frac{N_2}{V_2} - N_1\lambda - e_1(t)\frac{N_1}{V_1} - f_{10}(t)\frac{N_1}{V_1} - f_{12}(t)\frac{N_1}{V_1} \\
\dot{N}_2 &= f_{12}(t)\frac{N_1}{V_1} - N_2\lambda - e_2(t)\frac{N_2}{V_2} - f_{21}(t)\frac{N_2}{V_2} \\
\dot{E}_i &= e_i(t)\frac{N_i}{V_i} \qquad i = 0, 1, 2
\end{aligned}
\tag{2.2}
\]
As already said, the set (2.2) was solved numerically using the software
Mathcad. Since the solver is generally slow and training is required to run
proprietary software like Mathcad, a different path has been chosen for this work.
The problem of transport has been dealt with using a method developed by the
Institute of Modern Physics at the Chinese Academy of Sciences, which has been
collaborating with the ESH&Q Division over the past year. The idea behind this
different approach is to decouple the problem of transport from decay, in order to
treat it in terms of a discrete transport matrix, i.e. a linear algebraic problem. Let's
generalize the problem to n rooms (including the environment) and, just for simplicity,
assume the gas flows constant. Let's define S_i^t and N_i^t as, respectively, the source and
the number of nuclei in room i at time t; if we call P_ij = f_ij / V_i (i ≠ j), then
at time t + Δt the following set of n approximated equations holds:
\[
\begin{aligned}
N_1^{t+\Delta t} &= S_1^t + \Big(1 - \sum_j P_{1j}\Big) N_1^t\,\Delta t + N_2^t P_{21}\,\Delta t + \dots + N_n^t P_{n1}\,\Delta t \\
N_2^{t+\Delta t} &= S_2^t + N_1^t P_{12}\,\Delta t + \Big(1 - \sum_j P_{2j}\Big) N_2^t\,\Delta t + \dots + N_n^t P_{n2}\,\Delta t \\
&\;\;\vdots \\
N_n^{t+\Delta t} &= S_n^t + N_1^t P_{1n}\,\Delta t + N_2^t P_{2n}\,\Delta t + \dots + \Big(1 - \sum_j P_{nj}\Big) N_n^t\,\Delta t
\end{aligned}
\tag{2.3}
\]
It should be clear that the set (2.3) is nothing more than the generalized and
discretized form of (2.2) without the decay term. Defining P_ii = 1 − Σ_j P_ij, the set
of n equations (2.3) can be restated in a more compact form:
\[
(N_1, \dots, N_n)^{t+\Delta t} = (S_1, \dots, S_n)^t + (N_1, \dots, N_n)^t \cdot
\begin{pmatrix}
P_{11} & \cdots & P_{1n} \\
\vdots & \ddots & \vdots \\
P_{n1} & \cdots & P_{nn}
\end{pmatrix}
\cdot \Delta t
\tag{2.4}
\]
which also emphasizes the equivalence of this model to a linear mapping, by means of
the transport matrix P_ij, of the n-dimensional vector representing the number of
nuclei in each room at time t to the one at time t + Δt. A simple example
of a transport matrix is shown in Figure 2.3. The computational strength of this
model lies both in the ease with which one can implement it in a code and in the fast
execution speed. Moreover, S_i^t and P_ij^t can be adjusted at each step to account for,
for example, transient processes, stochastic effects or aerosol dynamics.
Up to this point only the “material” transport has been treated, but since we are
dealing with activity, the decay needs to be reintroduced. In this approximation
the two phenomena are decoupled, so one can simply multiply at each step the vector
(N_1, ..., N_n)^{t+Δt} by the factor e^{−λΔt}, with λ the decay constant of a specific nuclide.
This means that the process of transport must be repeated for each nuclide involved
in the release, which was also a feature of the Mathcad approach, except that
the time for a single run was considerably longer.
A validation of this computational model is presented in Sec. 3.3.3, where its results
are compared with the ones obtained by the Mathcad solver for a long-lived nuclide
in a specific accident scenario.
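A minimal sketch of how one time step of the transport-matrix method can be implemented, reading (2.3)-(2.4) as a forward-Euler step (an assumption: the Δt is placed with the off-diagonal P_ij terms so the matrix is dimensionless) and applying the decoupled decay factor afterwards. The two-node scenario at the bottom is invented for illustration:

```python
import numpy as np

def transport_step(N, S, flows, volumes, dt, lam):
    """One forward-Euler step of the transport-matrix method.
    N: nuclei per node, S: source per node, flows[i][j]: gas flow f_ij,
    volumes[i]: node volume V_i, lam: decay constant of the tracked nuclide."""
    n = len(N)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                M[i, j] = dt * flows[i][j] / volumes[i]  # off-diagonal: P_ij * dt
        M[i, i] = 1.0 - M[i].sum()                       # diagonal keeps the remainder
    # material transport first, then the decoupled decay factor exp(-lambda * dt)
    return (S + N @ M) * np.exp(-lam * dt)

# two nodes: a 10 m^3 room leaking 1 m^3/s into the environment node
N = transport_step(np.array([100.0, 0.0]), np.zeros(2),
                   [[0.0, 1.0], [0.0, 0.0]], [10.0, 1.0], 1.0, 0.0)
```

Without decay and sources the step conserves the total number of nuclei, which is a convenient sanity check when implementing the method.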
Figure 2.3: Example of transport in a simplified scenario. Image taken from Quick Radioactivity
Transport Evaluation with Transport Matrix Method, by kind permission of Xuezhi Zhang, Institute
of Modern Physics, Chinese Academy of Sciences.
• Start of emission: the time when 1% of the full emission of a long-lived nuclide
has been emitted.
• Emission duration: the time between when 1% and 90% of the full emission
of a long-lived nuclide has been emitted.
• Emission time: the time between when 1% and 50% of the full emission of a
long-lived nuclide has been emitted.
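The three quantities above can be extracted from a cumulative emission curve; a sketch follows, where the curve is synthetic and interpolation between time steps is ignored for simplicity:

```python
import numpy as np

def emission_metrics(t, cumulative):
    """Start of emission t(1%), emission duration t(90%) - t(1%), and
    emission time t(50%) - t(1%), from a monotone cumulative emission E(t)."""
    frac = np.asarray(cumulative) / cumulative[-1]  # normalise to full emission
    t1, t50, t90 = (t[np.searchsorted(frac, f)] for f in (0.01, 0.50, 0.90))
    return t1, t90 - t1, t50 - t1

t = np.linspace(0.0, 100.0, 101)
start, duration, em_time = emission_metrics(t, t)  # linear release, for illustration
```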
For some accident scenarios, especially those that have a fast release of the inner
source term and where the emission path goes through several nodes, the
time between 50% and 90% of the emission may be more than one order of magnitude longer than
the time between 1% and 50%. Since the local relative concentration in the atmosphere
takes lower values for longer emission times (see Sec. 2.2.3), it is pointed out in [7]
that using the emission duration may result in underestimated doses. Thus, the shorter but
not unrealistic choice of the emission time when calculating the relative concentration
in the dispersion model was made for conservative purposes.
2.2 Dispersion
The release of radionuclides into the atmosphere causes some radiation exposure
of humans and other biota near the ESS site. The dynamics of those radionu-
clides depends primarily on the weather systems that they encounter, therefore an
overview of particular aspects of meteorology is essential to understand atmospheric
dispersion models [23][24].
Wind
Landscape z0 (m) n
Sea 10−4 0.07
Sandy desert 10−3 0.1
Short grass 0.005 0.13
Open grassland 0.02 0.15
Root crops 0.1 0.2
Farmland 0.2-0.3 0.24-0.255
Open suburbia 0.5 0.3
Cities, woodlands 1 0.39
Table 2.2: Typical values of roughness length and n for various types of terrain from [25].
height until it becomes steady and proportional to the pressure gradient (geostrophic
wind). A convenient way to describe the vertical profile is by a power law [25]:
\[
u(z) = u_{10} \cdot \left(\frac{z}{10}\right)^n
\tag{2.5}
\]
where u(z) is the wind speed at height z and u10 is the measured wind speed at the
reference height of 10 m. The power n is a function of surface roughness (see next
section) and, to some negligible extent [25], atmospheric stability (see Atmospheric
Stability).
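As a minimal sketch, the power-law profile (2.5) with an exponent from Table 2.2:

```python
def wind_speed(z, u10, n):
    """Power-law vertical wind profile, Eq. (2.5): u(z) = u10 * (z/10)^n,
    with u10 the wind speed measured at the 10 m reference height."""
    return u10 * (z / 10.0) ** n

# open grassland (n = 0.15): extrapolate a 5 m/s reading at 10 m up to 50 m
u50 = wind_speed(50.0, 5.0, 0.15)
```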
Finally, one expects the concentration of substances released into the atmosphere,
e.g. from a chimney stack, to be inversely proportional to the wind speed at that
point. For example, if the wind speed doubles, substances will be diluted into
twice as much air per unit time, thus halving the concentration per unit volume of
air.
Surface roughness
As already discussed, objects on the earth’s surface (roughness elements) have
a frictional effect on the wind flow. In order to account for the complexity of the
landscape’s roughness elements, a single parameter called the surface roughness length
z0 is used. In general, z0 determines the shape of the profile (2.5); in particular,
n increases with increasing surface roughness. Typical values for z0 and n are
shown in Table 2.2. The lands surrounding the ESS site are mostly devoted to the
production of common crops, like sugar beets and cereals, therefore the value z0 = 0.1 m
has been fixed for the model.
Atmospheric Stability
The vertical mixing of air is usually described in terms of what is called thermal turbulence.
If a “parcel” of air is moved upwards by the wind it expands adiabatically, i.e. with-
out any heat transfer to the outside, due to the reduction in atmospheric pressure
with height. Since some of the heat within the parcel is used up in this expansion,
the temperature of the air parcel decreases. Rising air cools and descending air
warms at a rate of about 1 ◦C per 100 m; this is known as the dry adiabatic lapse
rate (Γ). In light winds the temperature change with height is influenced mainly
by exchange of thermal radiation in the atmosphere. If the temperature reduction
within the rising parcel matches the temperature decrease in its surroundings then
it will have the same density as its surroundings and its vertical motion will be nei-
ther suppressed nor enhanced. This is known as “neutral stability” since a parcel
of air displaced in the vertical neither gains nor loses buoyancy.
When the temperature falls with height at a rate less than Γ or even increases with
height (a temperature inversion) the boundary layer is said to be “stable”. This is
because air displaced upward cools with respect to its surroundings and therefore
its upward motion is suppressed. Stable conditions typically occur at night, particu-
larly when there is no cloud cover coupled with light winds. If the temperature falls
with height at a rate greater than Γ, then air displaced in the vertical continues
to be warmer and thus less dense than its surroundings, and therefore continues to
rise until constrained at a greater height; this is an “unstable boundary layer”.
These conditions typically occur on cloudless, hot, sunny days in light winds when
the earth’s surface is being heated more than usual.
Pasquill [26] developed an empirical scheme which defined stability in terms of six
categories, A to F, in which A represents the most unstable conditions, D neutral
and F stable. Later a very stable category, G, was added. The approach suggested
by Pasquill depends on knowing the wind speed at a height of 10 m and assessing
the amount of incoming solar radiation (insolation) in qualitative terms. Since the
categorisation of the atmospheric stability has an important influence over the dis-
persion estimates of material released to the atmosphere, several scientists have used
experimental data to derive dispersion parameters for use in models, as a function
of stability category. For example, curves of these parameters against downwind
distance were suggested by Gifford [27] and became known as the Pasquill-Gifford
curves (see also [25]).
The model used in [7] to predict the dispersion of radionuclides into the environment
is a time-integrated Gaussian plume model. There are many versions in use around
the world and a major reason for that is the difference in the way that the dispersion
parameters are derived. The most widely used and practical version of this semi-
empirical model was published as National Radiological Protection Board Report
R91 [25]. This subsequently became known as the “R91 model” which is also the
one adopted in the software Lena [10].
The model assumes that the diffusion of material in the direction of the wind is
relatively small compared to the transport by the wind itself (advection). Along the
crosswind and vertical directions, instead, the distribution is described by a Gaussian
characterised by standard deviations σy and σz. Thus, if the origin of the co-ordinate
system is at ground level directly beneath the discharge point, the integrated activity
concentration C(x, y, z) at the downwind distance x of a radionuclide is given in
the basic form of the Gaussian plume model as:
\[
C(x, y, z) = \frac{Q_0}{2\pi \sigma_y \sigma_z u_{10}} \exp\left[-\left(\frac{y^2}{2\sigma_y^2} + \frac{(z - h)^2}{2\sigma_z^2}\right)\right]
\tag{2.6}
\]
where Q0 is the total amount released (Bq) and h the release height (m). The
concentration obtained refers to releases which are short enough to be unaffected
by changes in wind direction. For releases of long duration the horizontal spread
of material is heavily influenced by changes in the wind direction rather than by
the dispersion and diffusion of material about the plume’s axis. The conservative
choice made in [7] is to evaluate the concentration downwind (worst scenario).
When material is discharged from an elevated source, the plume will disperse ver-
tically and eventually reach the ground. On reaching the ground a non-depositing
plume is effectively “reflected” back into the atmosphere. This can be accounted for
by means of a virtual source at a distance h below the ground. The air concentration
at (x, y, z) is then given by:
C(x, y, z) = Q0 / (2π σy σz u10) · exp[ −y²/(2σy²) ] ×
             × { exp[ −(z − h)²/(2σz²) ] + exp[ −(z + h)²/(2σz²) ] }        (2.7)
On the other hand, when the plume reaches inversion conditions, i.e. positive
temperature gradients in the lower levels of the atmosphere, the dispersed
material is again reflected. The plume is then trapped between the inversion layer
and the ground (see Figure 2.4). The effect of introducing multiple reflections is
that the radionuclide airborne concentration at the point of interest is obtained by
summation of the contributions from many virtual sources. The optimal number of
contributions depends on the relative values of σz and the height A of the mixing
layer, but, in general, sufficient accuracy is obtained with “first order” reflections
[25]. The concentration distribution can then be expressed as:
C(x, y, z) = Q0 / (2π σy σz u10) · exp[ −y²/(2σy²) ] · F(h, z, A)        (2.8)

where F is the sum over all virtual sources:

F(h, z, A) = exp[ −(z − h)²/(2σz²) ] + exp[ −(z + h)²/(2σz²) ]
           + exp[ −(z − 2A + h)²/(2σz²) ] + exp[ −(z − 2A − h)²/(2σz²) ]        (2.9)
           + exp[ −(z + 2A − h)²/(2σz²) ] + exp[ −(z + 2A + h)²/(2σz²) ]
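The reflection sum translates directly into code. The following Python sketch (the thesis code itself is written with ROOT, in C++) evaluates (2.8)-(2.9) for dispersion parameters already computed at the downwind distance of interest; all input values in it are illustrative, not the ESS ones:

```python
import math

def concentration(y, z, Q0, u10, sy, sz, h, A):
    """Gaussian plume with first-order reflections, Eqs. (2.8)-(2.9).
    sy, sz are the dispersion parameters at the downwind distance of
    interest; h is the release height, A the mixing-layer height."""
    def g(arg):  # single reflection term
        return math.exp(-arg**2 / (2.0 * sz**2))
    # Real source, ground reflection and first-order inversion reflections
    F = (g(z - h) + g(z + h)
         + g(z - 2*A + h) + g(z - 2*A - h)
         + g(z + 2*A - h) + g(z + 2*A + h))
    return Q0 * math.exp(-y**2 / (2.0 * sy**2)) * F / (2.0 * math.pi * sy * sz * u10)
```

For a very high mixing layer the four inversion terms vanish and the expression reduces to the two-term form of (2.7), which is a convenient sanity check.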
Figure 2.4: Virtual source approach to account for the reflections from the ground and the top
of the mixing layer
Stability Class
The original work by Pasquill [26] gave the indications shown in Table 2.3 on how
to assess the stability class based on wind speed and solar irradiation. In
1973, Smith developed the Pasquill scheme further, introducing a more objective
approach to estimate the sensible heat flux and replacing the seven categories with
a continuous numerical scale [28]. In the new generations of models for practical
applications, turbulence is usually defined in terms of the boundary layer height and
the Monin-Obukhov length [24], both directly measurable or function of measurable
quantities.
In the safety analysis at ESS the Pasquill scheme has been adopted; however, the
choice of stability class was made based on precise assumptions on weather condi-
tions. In particular, two scenarios have been assumed:
u10 (m/s)          Insolation                         Night
            Strong   Moderate   Slight   >3/8 low cloud   <4/8 cloud
  <2          A        A-B        B            -              -
  2-3        A-B        B         C            E              F
  3-5         B        B-C        C            D              E
  5-6         C        C-D        D            D              D
  >6          C         D         D            D              D
Table 2.3: Pasquill stability categories scheme based on insolation and wind speed, from [26].
• the median weather P50% or the weather with the statistical median value of
the parameters.
• the weather P95% , defined such that only 5% of the values in the parameters
distributions result in higher doses and 95% result in lower doses to the public.
While the first one represents the most realistic assumption of the two, the
second one is by far the most conservative and, for this reason, the P95% is used in
the safety analysis conclusions.
The data that led to the choice of the stability class were extracted from [29]; in
particular, according to [7], there was evidence that a Pasquill stability class D
should be used for the P50% weather. Then, since one expects a larger dose
for more stable weather, for the P95% scenario the stability class F has been chosen.
In the context of this thesis work, an independent investigation of stability class data
in southern Sweden has been conducted and is presented in Chapter 4.
The vertical standard deviation σz at a given distance from the source is a function of
the atmospheric stability, downwind distance and surface roughness, and its value is
usually derived from Smith's curves [25][28]. To facilitate numerical analysis and
implementation in computer applications, Hosker fitted Smith’s curves and found
an analytical formula for σz in each Pasquill category together with a correction for
surface roughness [30]. The fitting equation found by Hosker is:
σz = [ a·x^b / (1 + c·x^d) ] · SRF(z0, x)        (2.10)
where the coefficients as a function of stability class can be found in Table 2.4.
Surface Roughness Factor (SRF) for z0 different from 0.1 m is given by:
SRF(z0, x) = ln[ f·x^g · (1 + 1/(h·x^j)) ]   for z0 > 0.1 m
SRF(z0, x) = 1                               for z0 = 0.1 m        (2.11)
SRF(z0, x) = ln[ f·x^g / (1 + h·x^j) ]       for z0 < 0.1 m
Table 2.4: Coefficients given by Hosker for the fitted vertical dispersion σz adapted from [25].
In the software Lena, along with Hosker's formula, an initial vertical wake
broadening of the plume ΔZ0 = 20 m is also implemented [10]. The reason for this
contribution is to account for the extra turbulence due to the presence of the
building, with an arbitrary, although reasonable, initial value of σz. Since a surface
roughness z0 of 0.1 m has been fixed throughout the analysis (so that SRF = 1), the formula used to evaluate σz is
then:

σz = √[ ( a·x^b / (1 + c·x^d) )² + (ΔZ0)² ]        (2.12)
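A minimal Python sketch of Eq. (2.12); since Table 2.4 is not reproduced here, the Hosker coefficients a, b, c, d are passed in as parameters and must be taken from the table for the relevant stability class:

```python
import math

def sigma_z(x, a, b, c, d, dz0=20.0):
    """Vertical dispersion, Eq. (2.12): Hosker's fit with SRF = 1
    (z0 = 0.1 m, as fixed in the analysis) combined in quadrature with
    the initial wake broadening dz0 = 20 m used in Lena."""
    hosker = a * x**b / (1.0 + c * x**d)   # Eq. (2.10) with SRF = 1
    return math.sqrt(hosker**2 + dz0**2)
```

At the source (x = 0) the function returns exactly the wake term dz0, as expected from the quadrature sum.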
The dispersion of the plume in the horizontal plane is the result of turbulence
processes together with fluctuations in wind direction. A parametrization of the
first contribution as a function of stability class and downwind distance in Open-
Country conditions is due to Briggs [31]:
σy^t = ε·x / √(1 + γ·x)        (2.13)
with coefficient values given in Table 2.5. The second contribution to the horizontal
dispersion arises from the need to account for changes in wind direction in releases
that last longer than 30 minutes. This component σy^w may be estimated from the
following equation from [25]:
σy^w = 0.065·x · √(7t / u10)        (2.14)
where t is the emission duration in hours (see definition in Sec. 2.1.3). These two
components can be taken to act independently, so that a single value for σy may
be obtained by summing them in quadrature:

σy = √[ (σy^t)² + (σy^w)² ]        (2.15)
Table 2.5: Coefficients given by Briggs for the turbulence-process contribution to σy, from [32].
Once again, an initial spread of the plume due to wake effects has been added in
Lena as ΔY0 = 20 m. Finally, the expression used in the calculations is:
σy = √[ (σy^t)² + (σy^w)² + (ΔY0)² ]        (2.16)
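The same quadrature combination can be sketched in Python (the Briggs coefficients ε and γ must come from Table 2.5; all numeric inputs used in checks are illustrative):

```python
import math

def sigma_y(x, eps, gamma, t_hours, u10, dy0=20.0):
    """Horizontal dispersion, Eq. (2.16): Briggs turbulence term (2.13),
    wind-meander term (2.14) for the emission duration t_hours, and the
    initial wake spread dy0, summed in quadrature."""
    s_turb = eps * x / math.sqrt(1.0 + gamma * x)        # Eq. (2.13)
    s_wind = 0.065 * x * math.sqrt(7.0 * t_hours / u10)  # Eq. (2.14)
    return math.sqrt(s_turb**2 + s_wind**2 + dy0**2)
```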
The basic Gaussian plume model equation (2.6) requires the mean wind speed
measured at the commonly used height of 10 m. This is because the empirical derivation
of (2.14) in the basic model implies the use of the wind speed measured at 10 m.
Clarke in [25] recognises that this procedure gives rise to an error in the ex-
ponential term exp(−y²/2σy²) for off-axis concentrations well above the ground for
highly elevated sources, but the error is acceptably small in all cases. For specific
purposes one can always adjust the 10-meter wind speed value by means of (2.5).
In the ESS accident analyses, the statistics for the wind speed were extracted from
the open data of the weather station Lund LTH on the SMHI (Swedish Mete-
orological and Hydrological Institute) website. The dataset contained 10-minute
average values taken at 10 m and expressed, in roughly equal amounts, as a mix of
integer values and values with one decimal digit. The whole data set, the integer part and
the non-integer part were analysed to assess the P50% and P95% values. The P95%
results from the whole and the non-integer parts were consistent, but the one from
the integer part was very close to zero. Therefore, only the non-integer set was used,
obtaining a P50% wind speed of 2.7 m s−1 and a P95% wind speed of 0.9 m s−1 (see
Figure 2.5). Finally, since the boundary layer depth is not measured at many meteoro-
logical stations (including Lund LTH), the common values presented in Table 2 by
Clarke in [25] have been used.
Figure 2.5: Wind speed and cumulative distribution obtained from the open data of the weather
station Lund LTH. The dataset contains almost 40000 quality-assured, 10-minute average values
taken at 10 m. Image taken from [7].
Radioactive Decay
In the case of radionuclides with radioactive half-lives which are comparable to the
travel time to a given point, radioactive decay will reduce the activity concentra-
tion of the radionuclide as it travels downwind; the modified concentration can be
obtained by multiplying Q0 by the decay depletion factor DFdecay:
DFdecay = exp( −λ · x/u10 )        (2.17)
with λ again the decay constant of a specific nuclide and x/u10 the travel time.
Dry Deposition
The deposition of radionuclides from the atmosphere to the ground may occur by
diffusive dry deposition. Dry deposition is a complex process by which particulates
and gases are removed from the air by turbulent impaction on the ground, or on
obstacles on it (e.g. vegetation) or by chemical reaction at the air-ground surface.
The amount of radioactive substances deposited depends on the nature of the air-
borne material, the ground surface (e.g. roughness and vegetation type) and, to a
lesser extent, on the wind speed and atmospheric stability [34]. It can be estimated
using the concept of “deposition velocity” vg . Deposition velocity is defined as the
ratio of the amount of material deposited on the ground per unit area per unit time,
to the air concentration per unit volume just above the ground. The integrated dry
deposition Cground (Bq/m²) is then simply given by:

Cground = vg · C(x, y, 0)        (2.18)

The corresponding depletion factor used to correct the source strength is derived, for
example, in [35] and is given by:
DFdry = exp[ (vg/u10) · Fdry(x) ]        (2.19)

where

Fdry(x) = −√(2/π) · ∫₀ˣ dx′/σz(x′) · { exp[−h²/(2σz²(x′))] +
          exp[−(h + 2A)²/(2σz²(x′))] + exp[−(h − 2A)²/(2σz²(x′))] }        (2.20)
This equation applies for any release duration, but the integral cannot, in general, be
evaluated analytically.
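Since the integral has no closed form, it can be evaluated numerically. A Python sketch using a simple midpoint rule is shown below; the σz model, step count and all numeric inputs are illustrative placeholders:

```python
import math

def f_dry(x, h, A, sz_of, n=2000):
    """Midpoint-rule evaluation of the integral in Eq. (2.20); sz_of is
    any callable returning sigma_z at a given downwind distance."""
    dx = x / n
    total = 0.0
    for i in range(n):
        xp = (i + 0.5) * dx          # midpoint of the i-th sub-interval
        sz = sz_of(xp)
        total += (math.exp(-h**2 / (2.0 * sz**2))
                  + math.exp(-(h + 2.0*A)**2 / (2.0 * sz**2))
                  + math.exp(-(h - 2.0*A)**2 / (2.0 * sz**2))) / sz * dx
    return -math.sqrt(2.0 / math.pi) * total

def df_dry(x, h, A, vg, u10, sz_of):
    """Dry-deposition depletion factor, Eq. (2.19)."""
    return math.exp(vg / u10 * f_dry(x, h, A, sz_of))
```

Because Fdry is negative, the depletion factor is always below 1 for a positive deposition velocity, and equals 1 when vg = 0.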
The most critical parameter in the dry deposition model is the aforementioned
deposition velocity vg. A comprehensive review has been given by Underwood in
[36], while Table 2.6 gives some values of the deposition velocity used in the safety
analyses at ESS.
Format                 vg (m/s)
Gas                    0
Particles              0.001
Organic gas iodine     0.0001
Inorganic iodine       0.01
Wet deposition
Radioactive particulates and gases in the plume may also be removed by rain falling
through the plume, which is called “washout” (or below-cloud scavenging), or by in-
corporation of nuclides in the rain cloud, known as “rainout” (or in-cloud scaveng-
ing). The wet deposition rate is therefore a function of the total activity throughout
the depth of the plume rather than of the activity concentration at ground level alone.
For the analysis at ESS, data were extracted from the same SMHI open data as
wind speed. The dataset showed that the P50% hourly rain is 0 mm/h while the
P95% value is 0.34 mm/h. In [7] it is stated that rain may increase the effective dose
from ground contamination by a factor of two, but since the ground dose contributes
about 20-50% of the total effective dose to the public and is evaluated with
strongly conservative assumptions, no wet deposition has been considered in the
model. A deeper study has been conducted as a part of this thesis in Chapter 4.
In the safety analyses, rather than C, the concentration relative to the emitted
activity C/Q0 is evaluated. Moreover, the representative person was conservatively
assumed to live at 300 m along the central axis. Putting this information together,
the relative concentration RC at ground level as a function of weather and emission
height has been calculated as:
RC = 1/(2π σy σz u10) · F(h, 0, A) · DFdecay · DFdry        (2.21)
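Putting the pieces together, Eq. (2.21) for a receptor on the plume axis can be sketched as follows (Python; dispersion parameters, heights and decay constant in the checks are illustrative):

```python
import math

def rc_ground(x, h, A, sy, sz, u10, lam, df_dry=1.0):
    """Relative concentration at ground level on the plume axis,
    Eq. (2.21): reflection sum F(h, 0, A) from Eq. (2.9) at z = 0,
    decay depletion from Eq. (2.17) and an optional dry-deposition
    depletion factor from Eq. (2.19)."""
    def g(arg):
        return math.exp(-arg**2 / (2.0 * sz**2))
    F = 2.0 * (g(h) + g(2.0*A - h) + g(2.0*A + h))   # Eq. (2.9) at z = 0
    df_decay = math.exp(-lam * x / u10)              # Eq. (2.17)
    return F * df_decay * df_dry / (2.0 * math.pi * sy * sz * u10)
```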
In the first case, the representative person of the public will mainly receive dose due
to external radiation of both plume passage (cloudshine) and from the deposited
activity on the soil (groundshine). In the second case, the dose will be a result of
inhalation of contaminated air during plume passage or ingestion of radionuclides
embedded in food from contaminated soil. The models and equations presented
refer mainly to [38].
Cloudshine

During the passage of the radioactive plume, the effective dose received by the
representative person is expected to be proportional to the concentration in air
of radionuclides at ground level. The proportionality factor is usually known as the
effective dose coefficient for cloudshine, DCcloud. It depends on the nuclide and is
measured in (Sv/day)/(Bq/m3 ). In the safety analysis at ESS, these coefficients are
taken from [39], while the effective dose for each nuclide is evaluated simply as:

Dcloud = C(x, y, 0) · DCcloud        (2.22)
Figure 2.6: Scheme of the public dose pathways identified in the safety analysis at ESS. Blue
arrows denote paths for internal exposure, green ones for external exposure and orange arrows
outline the transfer of radionuclides. Main references are [38] and [5].
Groundshine
A fraction of the activity released is deposited on the ground during the plume
passage, as already seen in Sec. 2.2.4, causing external exposure. The total amount
of activity deposited is given by (2.18). Again, a proportional relationship
between ground concentration and effective dose is modelled with groundshine dose
coefficients taken from [39]. Integrating the dose rate over the time of exposure,
assumed to be 1 year but with only 8 hours per day spent out in the fields, the
following equation is obtained:
Dground = Cground · (1 − e^(−λ·1 y))/λ · (Texp/1 day) · DCground        (2.23)
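A minimal Python sketch of Eq. (2.23), assuming λ is expressed per year so that the decay integral comes out in years (the numeric inputs in the checks are illustrative):

```python
import math

def dose_ground(c_ground, lam, t_exp_hours, dc_ground):
    """Groundshine dose, Eq. (2.23).  c_ground in Bq/m^2, lam in 1/year,
    dc_ground the groundshine dose coefficient; t_exp_hours/24 is the
    daily outdoor occupancy fraction (8 h/day in the analysis)."""
    decay_integral = (1.0 - math.exp(-lam * 1.0)) / lam  # over 1 year
    occupancy = t_exp_hours / 24.0                       # Texp / 1 day
    return c_ground * decay_integral * occupancy * dc_ground
```

For a very long-lived nuclide the decay integral tends to one year, while for a fast-decaying one it saturates at 1/λ, as expected.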
Inhalation

The committed effective dose from inhalation of radioactive substances during the
passage of the cloud relies on the activity air concentration, the inhalation rate and
the dose coefficients. A simple model allows the dose to be evaluated as follows:

Dinh = C(x, y, 0) · Rinh · DCinh        (2.24)
where Rinh is the representative adult breathing rate in m3/s and DCinh the dose
coefficient in Sv/Bq. For the ESS safety analysis, the inhalation rate of adults has been
assumed to be 2.56 × 10−4 m3/s, with [40] as reference.
The effective dose coefficients for inhalation, instead, are mainly from Annex G,
Table G1 in [41] for particulate aerosols. However, for most nuclides the dose
coefficients are not specified for different chemical forms, while in Annex H of the
same report chemical form is stated for soluble or reactive gases, but coefficients
are given only for a limited number of elements. Moreover, the processes leading to
emission during potential accidents at ESS are complex and difficult to foresee, so
that it will not always be possible to know in advance the chemical forms of the
released nuclides. Therefore, the most conservative dose coefficients have been
selected, i.e. if the dose coefficient given in Annex H is higher than in Annex G,
the higher value is used, except for tritium.
Since there are no organic compounds in the target, it has been assumed that the
tritium will be released in an inorganic chemical form. The dose coefficient used for
tritium is then set to the value for HTO, which is more conservative than that of
HT (see Table H1 in [41]). Furthermore, a release of HT will react with the oxygen
and water molecules in the air and a fraction will be converted to HTO during the
dispersion, leading to some exposure due to HTO inhalation in any case [38].
Ingestion
The model adopted to assess the radiological effects of food intake from a contam-
inated soil is widely discussed in [42]. The agricultural land surrounding ESS mainly
produces common crops (sugar beets and cereals). Contributions from
meat or dairy product contamination are excluded since there are no cattle or any
other livestock grazing around ESS.
The crops can be contaminated via two main processes: direct deposition of air-
borne radionuclides onto the plant surfaces (interception) and root uptake from the
contaminated soil. Both these processes can contribute to the total uptake by the
plant, especially during the growing season.
Translocation, i.e. the internal transport of radionuclides, is important for plants of
which only specific parts are edible or used as feed. One practical definition of
translocation is the ratio between the activity of the edible part of crops at harvest
and the activity deposited, with the resulting value expressed in percent.
In [42] a lumped parameter LP describing the transfer from deposition to contamination
of the edible part of the plant has been used for all those processes (interception,
weathering and translocation). The assumed value, independent of nuclide, is
0.01 m²/kg, based on the assumption of a remaining fraction (rf) on the surface of 10%
due to interception and weathering, a translocation (tl) of 5% and a yield of 0.5 kg/m²
[44]. The lumped parameter is obtained by the expression:
LP = (rf · tl) / yield        (2.25)
A deeper analysis of the parameter values has been conducted in Chapter 4. If a delay
before consumption Tcon of half a year is assumed, the concentration in food crops
can be evaluated as:

Cfood = Cground · LP · e^(−λ·Tcon)        (2.26)

and the corresponding effective dose from the first year of consumption as:

Dfood = Cfood · Con · DCing        (2.27)

where Con is the consumption of food from crops, assumed to be 100 kg/year, while
DCing is the effective dose coefficient for ingestion of a specific radionuclide (values
from Table F1, ANNEX F of the ICRP report [41]).
where ρ is the surface soil density, 260 kg/m² [45], and Tfe is the element-specific
transfer factor from soil to crops in (Bq/kg plant fresh weight)/(Bq/kg soil dry
weight), with Table XI in [45] as reference for the values. The main limit of this prac-
tical model is that it does not consider the type of crop, yield, depth of soil or
migration of the nuclides in the soil.
Similarly to translocation, the dose from consumption of first-year crops contam-
inated due to root uptake is obtained by multiplying the crop concentration by Con
and DCing. In [38] the integrated effective dose from consumption during 70
years (the first year + 69 years) and until infinity is also considered.
In the code written for this thesis, the root uptake process has not been considered. The
main reason is shown in Table 2.7. In fact, the only two dose-dominant nuclides in

Table 2.7: Effective dose coefficients and transfer factors from soil to crops used in the analysis
for the two dose-dominant nuclides in the accident scenario studied: 3H and 14C. References in the
text.
the accident scenario studied in this work (Sec. 3.3.2), i.e. 3H and 14C, have both
null transfer factors from soil to crops according to [42].
The issue of assessing root uptake for 3H and 14C is actually too complex to be
reduced to just a null transfer factor. In ANNEX III of [45] the problem is treated
separately for screening purposes, and it has been studied in detail as a source of un-
certainty for the ESS model in Chapter 4.
In Appendix A the ROOT Code Workflow is presented; it should be used as a vademecum
throughout the following analysis process.
Chapter 3
Accident Analysis
Accidents can occur in any complex facility and ESS is no exception. Due to
the high energy and power of the proton beam, both prompt radiation and radioac-
tive material are produced during normal operation [5]. The analyses are subject to
SSM requirements imposing that they be carried out in a prescribed deterministic
way (see [47]).
In this chapter the analysis of the accident scenario chosen for the thesis and the
results of the previously discussed model will be presented. Notably, the choice of
the accident was dictated by a criterion of simplicity, since the model and methods
for the uncertainty analysis remain unchanged even in more complex scenarios. The
following accident analysis has been performed by specialists at the Target Division and
reported in [46], which is to be considered this chapter's main reference.
3.1 Overview
The Accident Analysis 10 (AA10) covers the possibility of a primary water cooling
(PWC) pipe rupture in the monolith or in the Proton Beam Window (PBW) vessel
(see Sec. 1.2). Water activated by normal operations subsequently leaks into
the monolith vessel and contaminated vapour escapes through different leak paths.
The systems involved in the accident are located in the monolith vessel, but the
activity released depends only on the evaporation of cooling water rather than on
the systems experiencing the accident; for this reason no mention is made
of these devices' features unless strictly necessary.
The five water cooling systems, with the respective objects to cool, involved in the
accident are shown in Table 3.1, while Figure 3.1 represents a primary water cooling
system prototype by means of a flow diagram. The "object to be cooled" in the
figure is one of the moderator, the reflectors, the shielding and plugs, the NBPIs or
the PBW. It is also important to keep in mind that systems 1041 and 1070 share
the same cooling water, just as systems 1043 and 1026 do.
The first assumption on the event is that the facility has been in operation for
40 years without changing or adding any water to any of the PWC systems. The
Table 3.1: Scheme of the systems located in the monolith or PBW vessel cooled by the respective
water system. Systems that share the same water are 1026 with 1043 and 1070 with 1041.
Figure 3.1: Flow diagram of a generic primary water cooling system. Image taken from [46].
total operations time per year includes 5400 hours of beam on target with maximum
beam power of 5 MW at 2 GeV [48]. Clearly, assuming a constant full time neutron
production over the facility lifetime represents a conservative upper limit for the
activation levels.
The unfortunate event leading to a pipe rupture is called the Postulated Initiating
Event, PIE from now on. Examples of PIEs are mechanical failures, structural
problems in the system or mechanical stresses, which can happen without distinction
during any operational mode of the facility. Therefore the PIEs have been labeled
with the three relevant operational modes of the LINAC:
PIE 1 Mode Beam ON. The proton beam is produced or ready to be produced. The
primary water cooling systems are delivering water to the objects to be cooled;
PIE 2 Mode Beam OFF - cooling system running. The proton beam is not produced,
but the PWC systems are still cooling the objects to be cooled;
PIE 3 Mode Beam OFF - cooling system shutdown. The LINAC has shut down and the
water in the PWC systems is drained into the drain tanks.
The following analysis evaluates the dose consequences of the three modes separately, in
order to highlight possible differences and, thus, also the need for mitigation measures
in the worst scenario. For example, the second mode covers the approximately
4-hour time delay sufficient to allow both the water to cool down to a temperature
below 50 ◦C and the short-lived nuclides to decay. Therefore, dose consequences during
this time delay are expected to be the same as during mode Beam ON.
Another important assumption of the safety analysis is that the radionuclides present
in the spilled water evaporate and are emitted together with the evaporated water. There-
fore, as already mentioned in Sec. 2.1.1, only radionuclides that are considered to
be volatile, are a gas or can vaporise give radiological consequences. Finally, the
contaminated vapours escape through different identified leak paths [49]; however,
it is assumed that the entire inventory produced by the accident is emitted via
each leak path independently, to study the differences in transport and to spot the
worst-case scenario. This last assumption is, again, unlikely to be realistic, but it is
consistent with the conservative approach adopted throughout the analysis.
ID Assumption
A1 Three operational modes for the facility
A2 The event takes place after 40 years of normal operations. The activation
is calculated based on 5400 hours per year with a beam power of 5 MW at
2 GeV, assuming no change or addition of water in the PWC systems.
A3 The entire inventory produced by the accident is emitted with water vapour
through each leak path independently.
A4 The event is evaluated for the baseline monolith atmosphere of rough vac-
uum with a pressure of ≤ 100 Pa.
Table 3.2: Assumptions made for the accident analysis with an identifier code.
At the start of the event, a pipe inside the monolith or PBW vessel ruptures.
Cooling water with a temperature of approximately 50 ◦C flows into the monolith
vessel, which is kept under a vacuum pressure of 100 Pa. Therefore, the spilled water
flashes until a pressure of 12 kPa (water vapour saturation pressure at 50 ◦C) has
been reached. Water that has not flashed is pooled at the bottom of the monolith
vessel. From now on the event can develop in the following ways.
3.2.1 Scenario A
Only in the case of PIE 1, if the system affected by the rupture is using cooling
water from 1041, then the loss of water from the moderators can cause the cold
moderator to overheat and break. In this circumstance, liquid hydrogen from the
cold moderator evaporates increasing the monolith vessel pressure. Eventually the
unplanned increased pressure damages the rupture disc on the monolith relief system,
causing the release of the gases and a sudden atmospheric re-entry. Then,
assuming a constant temperature of 50 ◦C, the vessel atmosphere is saturated with
contaminated water vapour with a density of 8.1 g/m3, while the flow out of the
vessel is 8.4 m3/h, 20% of its volume per hour. Contaminated water vapour con-
tinues to evaporate and spreads from the monolith vessel until it is emitted through
different leak paths.
3.2.2 Scenario B
If the moderator remains intact, the rough vacuum pumps will maintain low pressure
in the monolith vessel, pumping out the flashed water and causing the pooled water
to boil. According to its technical specifications [50], the vacuum pump capacity is
estimated to be 1 m3 /min at 12 kPa. However, the vacuum pumps have a maximum
water vapour pumping capacity of 600 g/hour and, assuming the worst case, three
pumps are in operation simultaneously which maximises the rate of spilled water
vapour emitted to 1.8 kg/hour.
• LP-104 - If the HVAC is working as designed, a direct path from the monolith
vessel out of the building through the stack is enabled;
• LP-102 - If the HVAC is not working as designed, the emission point is assumed
to be at the height of 20 m above the ground level. In this case it is not a
direct path but rather a transport throughout several volumes (rooms) with
a much longer emission duration.
Table 3.3 shows the two leak paths' parameters for the transport calculations.
It might also be noteworthy that, even though the emission points of 20 and
45 meters are used when reporting the dose to the public, other heights have been
used as well in the analysis, in particular 10 and 30 m, independently of the leak
path. These emission points are only used for comparison and to get an idea of the
Table 3.3: Leak paths for Scenario B with parameters for transport calculations.
effect of changing the emission altitude in case of unplanned events, e.g. collapse of
the chimney stack or the building rooftop after an earthquake.
Table 3.4 shows the released activity for both scenarios and all PIEs. The spilled
water has been estimated starting from the capacity of the tanks and pipes (con-
sidered the MAR), then assuming three rupture points and evaluating the volume
lost by each system (DR) according to its physical location; a detailed description
of how this has been done can be found in Appendix C of [46], while Appendix D
of the same report presents a complete list of radionuclides in the PWC systems with their
activities for all PIEs. The ARF is assumed to be 1, as all spilled water
evaporates/boils off. What is important to notice from Table 3.4 is that:
• system 1042 has the smallest spilled water volume in all the PIEs, but not the
highest activity;
• system 1043 releases less activity than 1041 and the emission duration is the
longest in all cases;
• the previous points suggest that the worst radiological consequences to the public
for all PIEs must come from system 1041. Moreover, Scenario B is expected
to always give a higher dose than Scenario A.
Therefore only the results involving Scenario B and system 1041 will be used for
dose calculations.
PIE   System   Spilled water (kg)   Activity (Bq/kg)   Emission duration (h)   Released activity (Bq)
Table 3.4: Released activity for both scenarios and all PIEs. All spilled water has
evaporated/boiled-off (ARF=1). Note the amount of activity in PIE 2 and PIE 3 is always lower
due to the four hour decay imposed. Data from [46].
3.3.3 Results
The model discussed in Chapter 2 was applied to the current accident scenario in
the case of PIE 1 with the LP-102. The results from the “old” model, presented in
detail in [51], and the ROOT code were then compared.
Transport
Firstly, the transport results in terms of STouter and emission times for long-lived
nuclides (Table 3.5) were compared. The approximated transport matrix method
has been implemented using a 100 s timestep and a constant source term in V0
between 0 and 389 hours. Note that, since it was not clear from the documentation
which value to take as STouter, the value chosen is the one relative to the step at which
99% of the total activity of long-lived nuclides has been emitted. Moreover, for each
                     STouter (Bq)
Nuclide      ESS-0094187 [52]    ROOT code        Error
3H           1.99 × 10^14        2.03 × 10^14     1.84%
14C          4.15 × 10^11        4.11 × 10^11     1.01%

                     Emission Time (s)
Emission Fraction    ESS-0241705 [51]    ROOT code    Error

Table 3.5: LP-102 PIE1. Transport results for STouter and emission times for long-lived nuclides.
nuclide, the code produces a plot of the normalized activity in each room (including
the environment) as a function of time, then stores it in a .root file. An example of
such a plot is given in Appendix B for 3H.
Observing the relative errors column, the major difference is found in the outer
source of 3H, which is in any case slightly below 2%. This difference is considered
acceptable and the two methods consistent in their results.
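The timestep scheme described above can be sketched as a simple forward-difference compartment model; the Python below is illustrative only (the volumes, flow rates and source are placeholders, not the ESS values, and the thesis code itself is written with ROOT):

```python
def step_transport(A, T, source, lam, dt):
    """One forward-difference step of a compartment transport model:
    A[i] is the activity (Bq) in volume i (the last index playing the
    role of the environment), T[i][j] the fractional flow rate (1/s)
    from volume j into volume i, source the constant source rate (Bq/s)
    into each volume, lam the decay constant."""
    n = len(A)
    new = list(A)
    for i in range(n):
        inflow = sum(T[i][j] * A[j] for j in range(n))
        outflow = sum(T[j][i] for j in range(n)) * A[i]
        new[i] += dt * (source[i] + inflow - outflow - lam * A[i])
    return new

# Illustrative chain: room 0 -> room 1 -> environment, 100 s timestep
T = [[0.0, 0.0, 0.0],
     [2e-4, 0.0, 0.0],
     [0.0, 1e-4, 0.0]]
A = [0.0, 0.0, 0.0]
for _ in range(36000):                      # 36000 steps of 100 s = 1000 h
    A = step_transport(A, T, [1e6, 0.0, 0.0], 0.0, 100.0)
emitted_fraction = A[2] / sum(A)            # analogue of the emitted fraction
```

Without decay the scheme conserves the total injected activity exactly, which is a useful consistency check on any implementation of the transfer matrix.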
Dispersion
Similarly, the relative concentration RC calculations using the software Lena and
the ROOT code were compared for different assumed weathers and emission heights.
The comparisons are shown in Figure 3.2, from which it is evident that the results
overlap perfectly. The relative errors, also plotted for the P50% and P95%
weathers as a function of height, make clear that the two models are again largely
compatible. Notably, these results are mostly independent of the particular acci-
dent scenario's details since, for a fixed inventory, RC is normalized to the emitted
activity and the only factor directly involved in the calculations is the emission time.
Figure 3.2: Comparisons of relative concentration RC calculations using the software Lena (ESS-
0241704) and the ROOT code for two different assumed weathers and emission heights. Relative error
for the two cases also displayed as a function of height.
Dose assessment
The same comparison has been made for the dose calculations in ESS-0241704 [53]. A
similar graph was plotted for the dose results and is presented in Figure 3.3.
The error appears to be generally higher for the P95% assumed weather, but it is in any
case within 1%, which is a more than acceptable level of discrepancy for the two
models to be considered equivalent.
Figure 3.3: Comparisons of dose calculations from ESS-0241704 and the ROOT code for two
different assumed weathers and emission heights. Relative error for the two cases also displayed as
a function of height.
Table 3.6: Risk ranking of the AA10 LP-102 accident scenario for the unmitigated events.
Chapter 4
Uncertainty Analysis
This chapter, the core of this project, presents the uncertainty study approaches that
were adopted for the safety analysis at ESS. The conceptual scheme is the one
discussed in Section 1.1. After an introduction to the computational methods for
uncertainty propagation, a deeper analysis of the parametric uncertainty of
the ESS dose model will be addressed. For each parameter, the scientific literature has
been studied to extract information on the Probability Density Function (shortened
to PDF from now on), trying to stay as close as possible to the peculiarities of the
conditions under examination and keeping the description simple in case of lacking data. Finally,
results for the accident scenario AA10 LP102-PIE 1 will be presented, followed by
some remarks on model uncertainties.
4.1 Methods
4.1.1 Analytical propagation
Let’s recall the definition of a computational model given in Sec. 1.1.2:
f : R^p → R
X ↦ Y = f(X)        (4.1)
where Y is the output, X = (X1, . . . , Xp) are the p inputs and f is the dose model
function. If the model is simple enough, f could be analytically defined and the
propagation of uncertainties could proceed by exact calculation of the mean and
standard deviation of the dose distribution. For example, in the simplest case of an
additive model:
additive model:
Y = Σ_{i=1}^{p} Xi = X1 + X2 + · · · + Xp        (4.2)
the mean and variance of the output follow exactly as

E[Y] = Σ_{i=1}^{p} E[Xi]        (4.3)

V(Y) = Σ_{i=1}^{p} V(Xi) + 2 Σ_i Σ_{j>i} COV[Xi, Xj]        (4.4)

where COV[Xi, Xj] is the covariance of the correlated (i, j) couple of parameters. Analogous results can be obtained for multiplicative models; however, the analytical approach rapidly shows its limitations with “less trivial” operations. For example, if we suppose that our dose prediction could be fully determined by
Y = X1 · X2 · X3 (4.5)
Figure 4.1: Left: 1000 points with coordinates randomly generated according to a uniform distribution between 0 and 1. This sampling method is the core of the RS. Right: the first 1000 points of the 2D Sobol' sequence. This sampling method is the one used in the current analysis.
2. each parameter value is extracted from the assumed PDF using a modified version of GetRandom(), a ROOT method implemented, mutatis mutandis, in the TF1, TH1 and TH2 classes. The difference is that, while the standard GetRandom() uses a pseudo-random number generator, the ad hoc modified version is called passing a Sobol' number from the text file, which is then used for the same inversion process described in Sec. 4.1.2;
3. the parameter value is then passed to the model for the dose evaluation;
4. steps 2 and 3 are repeated for each row up to N, the total number of generated rows, and the dose distribution is retrieved.
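The loop of steps 2–4 above can be sketched in a few lines. This is a minimal stand-in, not the actual ROOT code: a Halton sequence replaces the Sobol' numbers read from the text file (both are low-discrepancy sequences), simple exponential PDFs replace the parameter PDFs of the analysis, and dose_model is a hypothetical placeholder for f.

```python
import math

def halton(index, base):
    """Low-discrepancy point in [0, 1): a stand-in for the Sobol' numbers
    that the ROOT code reads from a pre-generated text file."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def exponential_inverse_cdf(u, lam):
    """Step 2: map a quasi-random number to the assumed PDF by inverting
    its CDF (an exponential PDF here, for illustration only)."""
    return -math.log(1.0 - u) / lam

def dose_model(x1, x2):
    """Step 3: hypothetical placeholder for the dose model f."""
    return x1 * x2

N = 1000
doses = []
for j in range(1, N + 1):                   # step 4: repeat for each row
    u1, u2 = halton(j, 2), halton(j, 3)     # one quasi-random number per parameter
    x1 = exponential_inverse_cdf(u1, 2.0)   # parameter 1 ~ Exp(2)
    x2 = exponential_inverse_cdf(u2, 0.5)   # parameter 2 ~ Exp(0.5)
    doses.append(dose_model(x1, x2))        # the dose distribution is retrieved

print(sum(doses) / N)   # for independent inputs this approaches E[X1]E[X2] = 1
```

With the modified GetRandom() of the ROOT code, the only change is that u1 and u2 would come from the Sobol' file instead of being generated on the fly.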
Figure 4.2: Schematic location of postulated rupture points and PWC devices. Image from [46].
What is important to emphasize in the current discussion is that the estimations are produced based solely on the physical location of tanks, piping etc. and their filling rates during different PIEs. Therefore, the amount of spilled water in case of rupture is somewhat constrained, with a small margin of error. However, the design of the PWC systems has changed several times over the years and it is not unlikely that it will change again in the future. For this reason, after an informal consultation with Leif Emås, gas and process engineer at the Target Division, a 25% uniform uncertainty on the assumed value (700 kg in the worst case scenario) is considered to account for possible modifications to the cooling systems.
particles, 68% of the time the true result will be in the range x̄ ± Sx̄. Obviously,
a confidence statement like this one refers only to the precision of the Monte Carlo
calculation itself and not to the accuracy of the result compared to the physical
true value, which would require detailed analysis of the uncertainties in the physical
data, modelling, sampling techniques and approximations used in the calculations.
Since such analysis was outside the scope of this project, only the precision of
the Monte Carlo has been considered. Unfortunately, relative errors on activity
concentrations were not provided in the ESS documentation for the inventories,
however in [56] a guideline for interpreting the quality of the results suggests that the
upper limit for R should be 0.1, above this value tallies are generally questionable.
Moreover, a comparison with other results at ESS where R was reported (e.g. [57]),
suggests a more realistic value of 0.05 for the relative error. Therefore for the
activity concentrations of 3H and 14C a gaussian distribution was assumed, with µ
the nominal activity and σ equal to the 5% of that value.
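As a quick illustration of the assumed PDF, the 3H activity concentration can be sampled as a gaussian with σ fixed at 5% of the nominal value; this is a plain-Python sketch, not the ROOT implementation.

```python
import random, statistics

random.seed(1)
mu = 2.94e11          # nominal 3H activity concentration (Bq/kg)
sigma = 0.05 * mu     # sigma fixed at 5% of the nominal value (R = 0.05)

# sample the assumed gaussian PDF for the activity concentration
samples = [random.gauss(mu, sigma) for _ in range(10000)]

print(statistics.mean(samples), statistics.stdev(samples))
```

The sample mean and standard deviation recover µ and 0.05 µ up to statistical fluctuations.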
b. one of the main restrictions of the gaussian model is that atmospheric conditions are assumed not to change during the travel of the plume. The probability that a stability category will persist for a given time is reported by Jones in [59] (Table 3 of the same report). Jones shows that most categories persist, on average, for a few hours, with category D the only one with a 24% chance of lasting over 24 hours.
Figure 4.3: Distribution of stability categories for weather conditions in Southern Sweden. Data
from SSM and bar chart from [58].
some results will indeed be presented for the more stable category F to probe the conservative approach adopted initially in the analysis; for this reason, dispersion coefficient uncertainties have been evaluated for all the stability classes.
Cook also provided a theoretical validation in the note [63], showing that the percentile constraints can be satisfied exactly for several distributions and that the software was able to give the same results.
Thus, the GM and the GSD (Geometric Standard Deviation) of the lognormal distributions for the six stability classes were retrieved and used to generate σz accordingly (see Table 4.1 and Figure 4.4).
σz (m) at 300 m

Stability Class   P5%     P50%    P95%    GSD
A                 42.60   52.73   65.28   1.14
B                 29.44   34.40   40.24   1.10
C                 26.17   28.42   30.86   1.05
D                 23.23   24.77   26.41   1.04
E                 21.32   21.99   23.34   1.03
F                 20.04   20.67   21.32   1.02

Table 4.1: Percentiles of the σz lognormal distributions at 300 m as a function of the stability class.
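A lognormal with given GM and GSD can be generated as GM · GSD^Z, with Z a standard normal. The sketch below (plain Python, not the ROOT code) uses the class A values of Table 4.1 and shows that the sampled 5th and 95th percentiles approach the tabulated ones.

```python
import random

random.seed(7)
GM, GSD = 52.73, 1.14        # stability class A values from Table 4.1

# sigma_z = GM * GSD**Z with Z ~ N(0,1) is lognormal with the given
# geometric mean and geometric standard deviation
samples = sorted(GM * GSD ** random.gauss(0.0, 1.0) for _ in range(100000))

p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(round(p5, 1), round(p95, 1))   # should approach 42.60 and 65.28
```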
The uncertainty on the horizontal standard deviation σy of the gaussian plume model has not been implemented. The reason is that, referring to (2.16), the main contribution to σy for long releases and near-field distances comes from the wind duration-dependent term σyw, while the turbulent, stability-dependent term is negligible. The horizontal standard deviation distribution is then fully determined by the uncertainty in both t, proportional to the amount of water spilled, and u10.
Figure 4.4: σz lognormal distributions at 300 m as a function of the stability class. Observe that more stable conditions correspond to narrower distributions.
Finally, although there is a strong correlation between the wind speed range of values and the Pasquill stability class, the use of the full distribution in Figure 2.5 does not strongly conflict with the fixed stability class D choice, since in neutral conditions all wind speed values are possible [25].
This parameter was introduced to model the dry deposition process of dispersed radionuclides (see Eq.(2.18)). In the safety analyses at ESS, deposition velocities based on the chemical and physical properties of the emitted nuclides were assumed (Table 2.6); in the accident scenario studied for this work, 3H has been considered to be emitted as HT gas, thus its vg is 0, while 14C as a generic particle (vg = 0.001 m/s). However, throughout the scientific literature on the subject, a striking dependence on many other factors like soil composition, particle size, wind speed, surface characteristics etc. emerges [36]. Thus, a wide range of deposition velocities has been proposed for each chemical or physical form of the same nuclide. For example, in the case of tritium emission, velocities for HT and HTO are measured and reported by Foerstel [70] in the ranges, respectively, 10−5–10−3 m/s and 10−3–10−2 m/s. However, given the situational nature of this parameter, the most reliable source of information has been considered the report Atmospheric dispersion modelling of planned releases of radioactivity at ESS by SMHI [60]. In that report 3H is assumed to be emitted only by means of evaporation of HTO, while 14C is assumed to be emitted in the
Table 4.2: Dry deposition velocities vg (m/s) for different radionuclides and land use from [60].
form of CO2 (gas); the values of vg are hence given in Table 4.2. These values, according to [60], should be regarded as best knowledge based on literature reviews and SMHI-internal discussions. The differences from those used in the safety analysis are the non-null value for 3H and the seasonal, solar elevation-dependent value for 14C. Since, as already said, even in case of HT emission a fraction will be transferred to HTO during the dispersion [38][70], the chemical form of the released tritium has been effectively considered a source of uncertainty. Therefore, recalling that the ESS site is surrounded by crop fields, a uniform distribution between 0 and 0.007 m/s for 3H and, for 14C, a piecewise uniform distribution from 0.001 to 0.013 m/s weighted on the number of months in which the values are defined (assuming always Rsol = 1) have been considered.
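A month-weighted piecewise uniform PDF of this kind can be sampled by first picking an interval with probability proportional to its weight, then drawing uniformly inside it. The interval boundaries and month counts below are illustrative only; the actual seasonal values come from Table 4.2.

```python
import random, bisect

random.seed(3)
# hypothetical pieces: (velocity interval in m/s, weight = number of months for
# which that value is defined). Illustrative numbers, not the SMHI values.
pieces = [((0.001, 0.004), 4), ((0.004, 0.009), 5), ((0.009, 0.013), 3)]

total = sum(w for _, w in pieces)
cum, s = [], 0.0
for _, w in pieces:
    s += w / total
    cum.append(s)                 # cumulative piece probabilities

def sample_piecewise_uniform():
    """Pick a piece with probability proportional to its weight,
    then draw uniformly inside it (inverse-CDF sampling)."""
    u = random.random()
    k = min(bisect.bisect_left(cum, u), len(pieces) - 1)
    lo, hi = pieces[k][0]
    return random.uniform(lo, hi)

vg = [sample_piecewise_uniform() for _ in range(20000)]
print(min(vg) >= 0.001, max(vg) <= 0.013)   # the support of the PDF is respected
```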
Table 4.3: Percentiles of the dose coefficient lognormal distributions for 3H. The GSD has been retrieved with the aid of the ParameterSolver software.
1. points from a joint PDF are simulated. In this case, a multivariate normal distribution with ρ = 0.5 has been chosen for simplicity, which leads to a so-called Gaussian copula for linear symmetric correlation; for more insights on choosing a copula see [74].
2. the marginals and their CDFs are then retrieved; the previously extracted points are mapped into two uniform distributions by means of the respective marginal CDFs. It can be useful to visualize this step as the “inverse” of the inversion method described for the Monte Carlo methods;
3. at last, the uniform distributions obtained from the marginals can be mapped into the desired 1-D PDFs by means of the inverse CDFs of such distributions. In this case the two uniform marginals are converted into lognormal distributions with correlated points (see also Figure 4.5).
In this way, by using the uniform distribution as a lingua franca, correlation can be
easily constructed and a complex joint probability distribution retrieved.
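The three steps above can be sketched as follows (plain Python, not the ROOT implementation); the GM/GSD pairs are the Table 4.4 values for the inhalation and ingestion coefficients, taken here only as illustrative marginals.

```python
import math, random
from statistics import NormalDist

random.seed(11)
phi = NormalDist()      # standard normal: CDF and inverse CDF
rho = 0.5               # linear correlation chosen for the copula (step 1)

def lognormal_ppf(u, gm, gsd):
    """Step 3: inverse CDF of a lognormal given its GM and GSD."""
    return gm * gsd ** phi.inv_cdf(u)

pairs = []
for _ in range(5000):
    # step 1: a point from a correlated bivariate normal
    g1 = random.gauss(0.0, 1.0)
    g2 = rho * g1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    # step 2: map to uniforms through the normal CDF (the copula itself)
    u1, u2 = phi.cdf(g1), phi.cdf(g2)
    # step 3: map the uniforms into the lognormal marginals
    # (GM/GSD of the inhalation and ingestion coefficients, in 10^-11 Sv/Bq)
    pairs.append((lognormal_ppf(u1, 2.29, 1.23), lognormal_ppf(u2, 5.58, 1.52)))

# the correlation of the log-samples should recover a value near rho
xs = [math.log(a) for a, _ in pairs]
ys = [math.log(b) for _, b in pairs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
corr = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)))
print(round(corr, 2))   # close to 0.5
```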
Figure 4.5: Visual representation of a gaussian copula method for coupling two distributions,
assumed to be correlated, in a joint PDF. Notice that the actual copula function is the joint
probability distribution which has uniform marginals (step 2).
Figure 4.6: Lognormal distributions assumed for the breathing rate parameter in the age category
16-61 for males, females and gender average. The latter has been used in the uncertainty analysis.
Data from [75].
in Skåne (southern Sweden) are hard to find. The closest thing to a reliable source of information that could be found was Chapter 9 of the aforementioned EFH, which deals with the intake of fruits and vegetables. However, the data mainly referred to the U.S. population and was subdivided into overly specific categories. Thus, the resulting datasets were too fragmented and inappropriate for the current study. In this uncertainty analysis the default value of 100 kg/year has hence been kept fixed.
Parameter               Nominal value         PDF                    Mean     SD        Min      P5%      P50%     P95%     Max
spilled water           700 kg                Uniform                700      101       525      543      700      858      875
activity conc. 3H       2.94 × 10^11 Bq/kg    Gaussian               2.94     5% mean   –        2.7      2.94     3.18     –
activity conc. 14C      5.93 × 10^8 Bq/kg     Gaussian               5.93     5% mean   –        5.44     5.93     6.42     –
vertical dispersion σz  see 4.2.4             Lognormal              –        –         –        –        –        –        –
wind speed              0.84 or 3 m/s         Empirical              3        1.6       0.5      0.9      2.8      5.9      15
deposition vel. 3H      0 m/s                 Uniform                0.004    0.002     0        0.0004   0.0035   0.0066   0.007
deposition vel. 14C     0.001 m/s             Piecewise Uniform      0.0033   0.0034    0.001    –        0.0033   –        0.013
inhalation coeff. 3H    1.8 × 10^-11 Sv/Bq    Lognormal*             2.29     1.23      0        1.63     2.29     3.25     –
ingestion coeff. 3H     4.2 × 10^-11 Sv/Bq    Lognormal*             5.58     1.52      0        2.80     5.58     1.12     –
BR (breathing rate)     22.1 m^3/day          Truncated Lognormal*   16.7     1.2       0        12.2     16.7     22.83    33.9
lumped translocation    0.01 m^2/kg           Uniform                0.0084   0.0009    0.0067   0.0069   0.0084   0.0098   0.01

*The means and standard deviations SD listed for the lognormal distributions in this table are respectively the geometric mean GM and the geometric standard deviation GSD.

Table 4.4: Table of all the parameter ranges and PDFs considered in this uncertainty analysis. References and more detail in the text.
Figure 4.7: Left: relative error trend for each percentile of the dose distribution as a function of the dataset size, with successive points from the Sobol' sequence as the sampling method. Right: the same, but with pseudo-random number generation.
4.3 Results
As already mentioned in Sec. 4.1.2, the suitable number of points for the Monte Carlo to effectively solve the mathematical problem depends on the complexity of the model and on the precision sought. When dealing with RS Monte Carlo, one can always extract information about the convergence by simulating the problem many times with a fixed N and retrieving the confidence interval of the quantity of interest (e.g. as the standard deviation of the mean); on the other hand, with Quasi-Monte Carlo methods, once the number of parameters p is fixed, the chosen low discrepancy sequence will always return the same first N points. This problem has a solution in Bootstrap techniques, as widely described in [77]. However, for the sake of the current analysis, a quick convergence study has been made by evaluating the variation of the relative error in the dose percentiles as:
E = |P^i − P^(i−1)| / P^i        (4.9)
where P is a percentile of the dose distribution and i is the index of the sample size Ni = {500, 1000, 1500, . . . , 5000}. In Figure 4.7 the relative error plots are shown for the Sobol' sequence and random sampling techniques, showing the faster convergence of the former over the latter for almost all the percentiles.
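The convergence check of Eq. (4.9) can be sketched on a toy model as follows; the lognormal stand-in for a model evaluation and the sample sizes are illustrative, not the actual dose model.

```python
import random

random.seed(5)

def toy_dose():
    """Hypothetical stand-in for a single dose-model evaluation."""
    return random.lognormvariate(0.0, 1.0)

def percentile(data, q):
    s = sorted(data)
    return s[min(int(q * len(s)), len(s) - 1)]

samples, errors, prev = [], [], None
for n in (500, 1000, 1500, 2000, 2500, 3000):
    while len(samples) < n:              # grow the same stream of points
        samples.append(toy_dose())
    p = percentile(samples, 0.95)        # P95 at sample size N_i
    if prev is not None:
        errors.append(abs(p - prev) / p)   # Eq. (4.9)
    prev = p

print(["%.4f" % e for e in errors])      # relative errors, generally shrinking with N
```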
Two things should be kept in mind when looking at plots in Figure 4.7:
2. percentiles from randomly sampled points have not been averaged over multiple extractions, hence they were likely to exhibit great volatility.
Despite the points above, those plots can still reasonably be seen as a qualitative proof of the better performance of the quasi-Monte Carlo approach. Before moving to
[Histogram omitted: 5000 entries, mean 0.3444, std dev 0.2504, skewness 2.188, no underflow or overflow; x-axis: Dose (mSv), range 0–3.]
Figure 4.8: Dose distribution in the worst case scenario LP-102 PIE 1 with emission from 20 m
obtained with a sample of 5000 points from the Sobol’ sequence.
the actual results, a few words on the confidence interval (CI) estimation. As already said, the problem of defining an interval with a fixed level of confidence for the values obtained with QMC is flawlessly overcome by Bootstrap techniques. However, no CI has been estimated, since the actual physical quantity of interest in this context is the dose to the public, not a specific percentile. In conditions of qualitative convergence of the results, as the ones shown in Figure 4.7, any statement of uncertainty on the percentiles of the distribution would be unnecessary, given that the percentiles themselves provide the quantification of uncertainty sought.
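For reference, the percentile bootstrap mentioned above can be sketched in a few lines; the lognormal stand-in for the dose sample and the replica count are illustrative only.

```python
import random

random.seed(9)
# hypothetical dose sample standing in for the QMC dose distribution
data = [random.lognormvariate(0.0, 1.0) for _ in range(1000)]

def p95(xs):
    s = sorted(xs)
    return s[int(0.95 * len(s))]

# percentile bootstrap: resample with replacement and recompute the statistic
boot = sorted(p95(random.choices(data, k=len(data))) for _ in range(2000))
lo, hi = boot[int(0.05 * len(boot))], boot[int(0.95 * len(boot))]   # 90% BCI

print(lo <= p95(data) <= hi)   # the observed P95 falls inside its own 90% BCI
```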
The following results have been obtained using a sample size of 5000, not because a significant gain in precision over fewer points was expected, but mainly because it was not computationally expensive at all. In fact, on the machine used for the analysis, which runs an Intel® Core™ i5-7600K CPU at 3.80GHz, the ROOT code took just over a minute to complete 5000 cycles. An example of the output of the code is shown in Figure 4.8 for the worst case scenario LP-102 PIE 1 with an emission height of 20 m. The mean and standard deviation of the distribution are (0.34 ± 0.25) mSv, which is surprisingly high when compared with the P50% and P95% results from [53], which are, respectively, 1.32 × 10−2 mSv and 2.60 × 10−2 mSv, with the latter used in the risk assessment. The results in terms of percentiles for different emission heights are also reported in Figure 4.9 as boxplots, showing the 5th, 25th, 50th (median), 75th, and 95th percentiles and comparing them with the two values P50% and P95% obtained in the safety analysis [46].
How is it that the radiological consequences to the public appear to be strongly
Figure 4.9: Boxplots showing the 5th, 25th, 50th (median), 75th, and 95th percentiles of dose
distribution in the worst case scenario for different emission heights. The results from [53] are also
reported for comparison.
Figure 4.10: Boxplots showing the 5th, 25th, 50th (median), 75th, and 95th percentiles of dose
distribution in the worst case scenario for different emission heights and with 3H deposition velocity
set to 0 m/s. The results from [53] are also reported for comparison.
and 95th percentiles of the dose distribution. It should be noted that this is not, by any means, a form of validation for the uncertainty analysis, since the P50% and P95% were defined, in the first place, by simply fixing two values of stability class (D and F) and of the wind speed PDF (50th and 95th percentiles). However, Figure 4.10 may be interpreted as an indirect proof of the impact of the wind speed parameter, which alone can almost entirely account for the spreading of dose values, especially for low emission heights, when the 3H deposition velocity is turned off. Clearly, the sensitivity study in Chapter 5 will have the last word, but these considerations set the premises for what is to be expected from it.
2. the deposition properties of the different nuclides are not considered specific to their physical-chemical form. Each nuclide is classified depending on how it behaves from a hydrodynamical point of view during the transport through the air;
Moreover, a conditio sine qua non for rain to fall is a neutral stability class D [34]. The expected local mass incorporated into rainfall per unit volume is then given by the linear relation:
ΛC(x, y, z) (4.10)
where C(x, y, z) is the gaussian local airborne concentration (see Eq.(2.8)). The washout coefficient Λ can be parameterized as a function of a wide spectrum of variables [79]. A simple parameterization, according to [60] and [78], is given by the relation:

Λ = A · I^B        (4.11)
where I is the precipitation intensity in mm/h, while the values of A and B are
species-specific and depend on several mechanisms, all involving the size of the
suspended particles or of the rain drops, e.g. impaction onto the rain drop, Brownian
motion of particles to the rain drop and nucleation of a water drop by the particle.
Once again, the most reliable source of information has been considered the SMHI documentation, which reports a value of 2.0 × 10−4 for A and 0.7 for B for 3H as HTO, while no wet deposition is expected from 14C as CO2 (gas) due to assumption 2. above. Note that the dissolution of CO2 into rain drops is a very complex process that is accounted for in more sophisticated models like the “Falling Drop Method” (see [78]). Unlike dry deposition, which occurs at ground level, precipitation scavenging involves all volume elements along the vertical profile of the plume. Thus, assuming 3. above, the total wet deposition per unit horizontal area Fwet is found
by integrating Eq.(4.10) through a vertical column of air:
Fwet = ∫_0^∞ Λ C(x, y, z) dz        (4.12)
Assuming Λ constant in the domain (assumption 4.) and limiting the integral to the plume centerline in the boundary layer, (4.12) becomes:
Fwet = Λ ∫_0^A C(x, 0, z) dz        (4.13)
There are two approaches to solving the integral (4.13): the numerical one adopted in the SMHI report (the SMHI model from now on) and the approximate one implemented in Lena [10]. The numerical solution of the integral (4.13) is straightforward from a computational point of view; the unspoken approximation in Lena is, instead, less obvious. In fact, in [25] Clarke suggests that when the vertical dispersion coefficient σz becomes greater than the depth of the boundary layer A, the effective vertical concentration distribution is uniform and (2.8) simply becomes:

C(x, y, z) = Q0 / (√(2π) u10 σy A) · exp(−y² / (2σy²))        (4.14)
in this case (4.13) has an analytical solution given by:

Fwet = Λ Q0 / (√(2π) u10 σy)        (4.15)
which is the closed expression implemented in Lena. However, in the same report, the reader is invited to always check that σz ≥ A, which usually happens at great distances from the emission point (≥ 1 km). For this reason, the following results for the short-field dose from Lena should be taken with a certain prudence. Like dry
deposition, the amount of activity deposited out of the plume along the distance by
rain depletes the plume strength by a factor [78]:
DFwet = exp(−Λ xrain / u)        (4.16)
where xrain is the distance along the plume, upwind from the point of interest, over which it was raining during the plume passage, and u the wind speed at plume height. The former is assumed to be simply 300 m, while u is evaluated from u10 with (2.5). The calculations were then run for the worst case scenario LP-102 PIE 1 in P50% weather conditions (class D is the only one in which rain can occur) and with a rain intensity of 0.34 mm/h, corresponding to the 95th percentile of its hourly distribution
Figure 4.11: Dose calculations with SMHI model and Lena approximation for wet deposition.
Results are expressed as a function of emission height for the worst case scenario LP-102 PIE 1 in
P50% weather conditions and with a rain intensity of 0.34 mm/h. Presence and absence of 3H dry
deposition were considered separately.
from SMHI open data [7]. The results with and without 3H dry deposition as a function of emission height are shown in Figure 4.11. The first thing to notice is that the Lena approximation of the integral produces dose results a factor of 1.25–2 lower than SMHI's. Without the dry deposition of 3H the dose is almost constant, due to the fact that ingestion, and hence wet deposition, accounts for almost 99% of the dose and is mostly independent of the emission height. When the tritium vg is turned on, the two contributions become comparable and the decreasing trend is restored. However, in all cases these values are far higher than the results without wet deposition from [46], which range from 1.68 × 10−2 mSv to 3.5 × 10−3 mSv. The calculations without the dry deposition of 3H clearly prove that the wet deposition contribution alone is also far from negligible, and when both are considered the dose can skyrocket to over 1 mSv, which happens to be the legal limit for the public in this category of accident [55]. Obviously, these results are based on conservative assumptions that are unlikely to be realistic; still, they suggest the need for further investigations in terms of both computational model and parameters.
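The two integration strategies can be compared numerically in a short sketch. Only Λ uses values quoted from the SMHI documentation (A = 2.0 × 10−4, B = 0.7, I = 0.34 mm/h); the plume parameters are hypothetical and chosen so that a visible fraction of the plume sits above the boundary layer.

```python
import math

# washout coefficient, Eq. (4.11), with the SMHI values for 3H as HTO
A_coef, B_exp, I = 2.0e-4, 0.7, 0.34       # I = rain intensity (mm/h)
lam = A_coef * I ** B_exp                  # Λ

# illustrative plume parameters (hypothetical values, comparison purposes only)
Q0, u10, sy, sz, H, A = 1.0, 3.0, 50.0, 200.0, 20.0, 300.0

def C(z):
    """Gaussian plume centreline concentration with ground reflection."""
    return (Q0 / (2.0 * math.pi * u10 * sy * sz)
            * (math.exp(-(z - H) ** 2 / (2.0 * sz ** 2))
               + math.exp(-(z + H) ** 2 / (2.0 * sz ** 2))))

# SMHI-like approach: numerical integration of Eq. (4.13) up to the layer depth A
n = 10000
dz = A / n
f_num = lam * sum(C(i * dz + dz / 2.0) for i in range(n)) * dz

# Lena approach: closed form of Eq. (4.15), valid for a well-mixed layer
f_lena = lam * Q0 / (math.sqrt(2.0 * math.pi) * u10 * sy)

print(round(f_lena / f_num, 2))   # > 1 when part of the plume sits above A
```

When σz is much smaller than A the ratio tends to 1, which is the regime in which the Lena closed form is reliable.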
Chapter 5
Sensitivity Study
The last step in the current analysis process is the sensitivity study. Its purpose is
to “identify those parameters whose uncertainties have the largest impact on the
uncertainty of the predicted dose” [2]. The results of the sensitivity analysis are
expected to both permit ranking of the parameters with respect to their contribution
to the model predictions and give a somewhat deeper understanding of the model
itself. In this work, it has been chosen to perform a variance-based sensitivity analysis; hence, after explaining the theoretical background of the methods, computational solutions will be investigated and results for the ESS dose model presented. The main references for this chapter are [65], [83], [84] and [87].
it is possible to retrieve VXi(E∼Xi[Y|Xi]), which is known as the variance of the conditional expectation and has the valuable property of increasing in value the greater the importance of Xi. From (5.2) it is then straightforward to obtain the indicator sought:

Si = VXi(E∼Xi[Y|Xi]) / V(Y) = 1 − EXi[V∼Xi(Y|Xi)] / V(Y)        (5.3)
This indicator, named by Sobol' [81] the first order sensitivity index, varies from 0 to 1 and measures the first order (i.e. additive) effect of Xi on the model output. These properties should become clearer after introducing the framework of decomposition equations, on which variance-based methods find a solid theoretical base. In fact, Sobol' decomposes the model function f into summands of increasing dimensionality:

f(X1, . . . , Xp) = f0 + Σ_{i=1}^{p} fi(Xi) + Σ_i Σ_{j>i} fij(Xi, Xj) + · · · + f1...p(X1, . . . , Xp)        (5.4)
where:

Vi = VXi(E∼Xi[Y|Xi])
Vij = VXiXj(E∼XiXj[Y|Xi, Xj] − E∼Xi[Y|Xi] − E∼Xj[Y|Xj])        (5.6)
. . .
Since all the variables are independent, the previous expression becomes:

V(Y) = Σ_{i=1}^{p} Vi + Σ_i Σ_{j>i} Vij + · · · + V1,2,...,p        (5.7)

from which it is clear that, once (5.5) is divided by V(Y), the following important relation holds:
Σ_{i=1}^{p} Si + Σ_i Σ_{j>i} Sij + · · · + S1,2,...,p = 1        (5.8)
At this point it should be clear why the aforementioned Si can be at most 1 and is the sensitivity index of “first order”. The second order index Sij expresses the sensitivity of the model to the interaction between Xi and Xj, which is not included in the individual effects of Xi and Xj. One can define indices of any higher order, but for a model of p inputs the number of all order indices is 2^p − 1, which grows really fast as p increases. For this reason, total sensitivity indices were introduced by Homma and Saltelli [82], soon becoming a popular variance-based measure. The definition of the total sensitivity index for a variable Xi is:

STi = E∼Xi[VXi(Y|∼Xi)] / V(Y) = 1 − V∼Xi(EXi[Y|∼Xi]) / V(Y)        (5.9)
To clarify the cumbersome notation: VXi(Y|∼Xi) is the conditional variance taken over Xi when all the other variables are fixed, and E∼Xi[VXi(Y|∼Xi)] is its expectation value taken over all factors except Xi. STi measures the total effect, i.e. the first and higher order effects (interactions), of factor Xi. In [65] it is suggested that one way to visualize STi is by considering, in the second equality in (5.9), V∼Xi(EXi[Y|∼Xi]) as a first order effect that accounts for the variation of all variables except Xi (compare it with (5.3)), so that V(Y) minus V∼Xi(EXi[Y|∼Xi]) must give the contribution of all terms in the variance decomposition which do include Xi. The total sensitivity index is then a powerful tool for SA when models are nonlinear and higher order terms can be equally important.
identified. Clearly, variables within a group are dependent, but variables of different groups are independent. This is exactly the case that suits the present analysis. In [84] these groups are denoted as “multidimensional variables” X⃗_k; the p-dimensional input X⃗ is, for instance, written as:

X⃗ = (X⃗_1, . . . , X⃗_{I+L})        (5.10)
The reader may have figured out the trick at this point. The first order sensitivity indices for the multidimensional independent variables (X⃗_1, . . . , X⃗_{I+L}) are simply defined along the lines of (5.3) by:

S_k = V_{X⃗_k}(E_{∼X⃗_k}[Y|X⃗_k]) / V(Y),   ∀k ∈ [1, I + L]        (5.11)

which measures the first order impact of the multidimensional variable X⃗_k on the output Y. Notably, Eq.(5.11) correctly returns (5.3) if the variable X⃗_k is one-dimensional. From here it is then straightforward to define higher order and total sensitivity indices, hence no demonstration will be given.
2. the mean and variance of Y are then computed. In this work, they have been calculated from the sampled values in A, so the estimators in the subsequent points have been chosen accordingly. To introduce the notation, the mean and the variance have been estimated with:

f0 = (1/N) Σ_{j=1}^{N} f(A)_j ,   D = (1/(N − 1)) Σ_{j=1}^{N} f(A)_j² − f0²        (5.12)
3. two new matrices for each input are introduced: A_B^i and B_A^i, where all columns are from A, or B, except the i-th column, which is from B, or A, respectively;
4. the first order index Si is obtained from (5.3), with the variance of the conditional expectation approximated by [65]:

VXi(E∼Xi[Y|Xi]) = (1/N) Σ_{j=1}^{N} f(B)_j ( f(A_B^i)_j − f(A)_j )        (5.13)
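The estimation procedure of steps 1–4 can be sketched on a toy additive model with analytically known indices (S1 = 0.2, S2 = 0.8); this is an illustrative stand-in, not the test function used below, and plain Python rather than the ROOT implementation.

```python
import random

random.seed(13)
p, N = 2, 20000

def f(x):
    """Toy model with analytically known indices: S1 = 0.2, S2 = 0.8."""
    return x[0] + 2.0 * x[1]

# step 1: two independent sampling matrices A and B (uniform inputs here)
A = [[random.random() for _ in range(p)] for _ in range(N)]
B = [[random.random() for _ in range(p)] for _ in range(N)]

fA = [f(r) for r in A]
f0 = sum(fA) / N                                  # Eq. (5.12), mean
D = sum(v * v for v in fA) / (N - 1) - f0 * f0    # Eq. (5.12), variance

S = []
for i in range(p):
    # step 3: A_B^i is A with its i-th column taken from B
    AiB = [row[:i] + [B[j][i]] + row[i + 1:] for j, row in enumerate(A)]
    # step 4: estimator (5.13) for the variance of the conditional expectation
    Vi = sum(f(B[j]) * (f(AiB[j]) - fA[j]) for j in range(N)) / N
    S.append(Vi / D)

print([round(s, 2) for s in S])   # should approach [0.2, 0.8]
```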
Y = a X1 X2 + b X3 X4 + c X5 X6        (5.15)

where all the Xi are normally distributed (i.e. Xi ∼ N(0, 1)) and where X3 and X4 are correlated with Pearson correlation coefficient ρ1, as are X5 and X6 with coefficient ρ2. In this case, the multidimensional approach previously described is applied and the two couples of correlated variables are treated precisely as two 2-dimensional inputs. First order sensitivity indices are denoted by S{3,4} and S{5,6}, and similarly the total ones by ST{3,4} and ST{5,6}. Despite its simplicity, this model is useful as a validation for the code, since it has analytically computable sensitivity indices [84]:
S12 = a² / (a² + b²(1 + ρ1²) + c²(1 + ρ2²))

S{3,4} = b²(1 + ρ1²) / (a² + b²(1 + ρ1²) + c²(1 + ρ2²))        (5.16)

S{5,6} = c²(1 + ρ2²) / (a² + b²(1 + ρ1²) + c²(1 + ρ2²))
while all the other first and higher order indices are null. The total indices, instead, are clearly given by:

ST12 = S12 ,   ST{3,4} = S{3,4} ,   ST{5,6} = S{5,6}        (5.17)
To test the reliability of the indices, three scenarios with different configurations of the parameters have been assumed (see Table 5.1). The results obtained with the estimators (5.13) and (5.14) and a sample of size N = 1000 are shown in Figure 5.1. In scenario 1, the high value of the factor a clearly makes the interaction between X1 and X2 the most important contribution to the variance of Y, hence the value 0.73 for S12. In scenario 2, instead, a = b = c, but the correlation between X3 and X4, as well as between X5 and X6, breaks the symmetry and increases the importance of the couple. Finally,
Table 5.1: Parameter values for the model (5.15) and the analytical solution of its sensitivity indices in each scenario. Values approximated to the second decimal place.
in scenario 3 the situation mirrors the first one, but this time the correlation once again makes the 2-dimensional input X5 and X6 even more important. In all cases, the two estimators are able to retrieve a fairly good approximation of the exact analytical values, which always lie within the BCIs. These positive results can be interpreted as a validation for the code, allowing its relatively safe application to the ESS dose model.
5.3 Results
The sensitivity analysis results on the ESS dose model parameters in the worst case scenario for AA10 are shown in Figure 5.2. Only first order and total sensitivity indices were evaluated, since the number of all order indices is 2^10 − 1 and it would hence be prohibitively expensive to calculate them all, not to mention that the STi results indirectly suggest little contribution from higher order terms. The indices were estimated using a sample size of N = 1000, a trade-off between sufficient sampling and computational time. The BCIs, instead, were obtained with 10000 replicas of a 1000-point bootstrap sample. From the first plot in Figure 5.2 it is clear that the three major contributions to the dose variance come from, in order of importance, the 3H deposition velocity, the ingestion-inhalation dose coefficients and the wind speed uncertainties. This is not unexpected after the considerations made in section 4.3 for vg and the wind speed. The other variables, instead, have almost null indices, suggesting a negligible contribution to the final dose uncertainty. One thing should be highlighted: some of these, like the 14C
activity concentration, σz, 14C deposition velocity and breathing rate, have a single-bin bootstrap distribution for the total index, which means that the BCI retrieved is the same for all of them and is just the width of the bin. Such a behavior can be explained by an apparent independence of the index from the bootstrap sample extracted, which seems to be a direct consequence of the extremely low sensitivity of the dose to these parameters. In other words, no matter how the data set is resampled, the total index will always be the same close-to-zero number. Notably, in the case of σz the index itself is exactly 0, meaning that the dose evaluations for the differences in (5.14) give the same results for A and A_B^i; hence it must be concluded that the σz uncertainty is irrelevant. In Figure 5.2 the same analysis has also been conducted fixing the tritium deposition velocity to 0 m/s, showing an increased importance of the uncertainty in the wind speed, the 14C vg and the breathing rate, and a substantial devaluation of the correlated ingestion-inhalation coefficient uncertainty. As discussed in section 4.3, turning off the tritium deposition velocity PDF results in a drop in the ingestion dose, which in turn limits the contribution of the ingestion parameter and makes room for the inhalation dose, hence the breathing rate. Moreover, the difference in value between the 14C vg and wind speed total indices does not appear to be statistically significant by looking at the BCIs, even though this is clearly not a rigorous hypothesis test; therefore, it is hard to state which one is more impactful. Lastly, in Figure 5.3
Figure 5.1: First (or second) order and total sensitivity indices (crosses) and 90% BCIs estimated
via the quasi-Monte Carlo method for scenarios (from top to bottom) 1, 2 and 3 of the test function
(5.15). The circles indicate the exact analytical values. The indices were estimated using a sample
size of N = 1000, while bootstrap confidence intervals were obtained using a sample of the same
size with 10000 replicas.
Figure 5.2: First order and total sensitivity indices with 90% BCIs estimated via the quasi-Monte
Carlo method for the AA10 worst-case scenario with (top) and without (bottom) tritium deposition
velocity. The indices were estimated using a sample size of N = 1000, while bootstrap confidence
intervals were obtained using a sample of the same size with 10000 replicas.
the total sensitivity indices of the three most important variables in the case of an
implemented 3H vg are plotted as a function of the emission height. The importance
of the wind speed uncertainty slightly increases with the height, while the other two
experience an opposite trend, in a subtler way. An almost identical plot was
obtained by imposing stability class F, suggesting a very low impact of more stable
categories on the ranking of the sensitivity indices. These results appear to be in line
with what was expected from the previous considerations; in fact, the dominating
contribution of ingestion to the total dose due to deposited 3H depends on the
stability class and emission height only through the relative concentration, which
is, in any case, common to all the other contributions. Hence the ranking of the
variables should reasonably be preserved.
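As an illustration of the estimator and bootstrap procedure discussed above, the following is a minimal sketch of a total-effect index computed with the Jansen estimator [86] and a percentile bootstrap confidence interval. The test function, uniform input ranges and sample sizes are illustrative placeholders, not the thesis dose model.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Ishigami-like test function, used here only as a stand-in for the dose model
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

def total_indices(n=1000, d=3, n_boot=10000, ci=0.90):
    # Two independent input sample matrices A and B on [-pi, pi]^d
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    st, bci = [], []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                # column i of A replaced by that of B
        diff = (fA - model(ABi)) ** 2      # Jansen estimator terms
        st.append(diff.mean() / (2.0 * var))
        # Percentile bootstrap over the n evaluation points
        idx = rng.integers(0, n, size=(n_boot, n))
        boot = diff[idx].mean(axis=1) / (2.0 * var)
        lo, hi = np.quantile(boot, [(1 - ci) / 2, 1 - (1 - ci) / 2])
        bci.append((lo, hi))
    return st, bci

st, bci = total_indices()
```

Note how a parameter to which the output is almost insensitive yields nearly identical `diff` values under any resampling, which is the mechanism behind the single-bin bootstrap distributions observed above.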
Figure 5.3: Total sensitivity indices of the three most impactful variables for the AA10 worst-case
scenario.
Conclusions
The goal of this thesis was to address the propagation of uncertainties and to perform
an analysis of the sensitivity of the final dose results to the initial conditions for a
given accident scenario at ESS. The model presented in Chapter 2 was adopted by
Target Division specialists and reproduced by a self-written ROOT code. The code structure
is summarized in Appendix A, while validating results for the AA10 scenario are
presented at the end of Chapter 3. The need for a compact code with a straightforward
flow of information from input to output came from the choice of a Monte
Carlo technique for the uncertainty analysis. In this case a quasi-Monte Carlo
approach using Sobol' sequences has been adopted to reduce the number of points
required for the results to converge. A detailed analysis of the parametric uncertainty
of the model has led to a deeper understanding of the processes involved, as
well as to the observation that the dose distribution is spread over a far larger
range than expected from the safety analysis. The main reason is the uncertainty
surrounding the tritium deposition velocity, in conjunction with the large amount of
activity released for this isotope. In general, the deposition velocity is an empirical
parameter that varies considerably from place to place; in this analysis the SMHI
report [60] has been considered a reliable source of information, however it must be
noted that no experimentally based values have been reported there. Since the sensitivity
study confirmed the dominant contribution of this parameter to the dose uncertainty,
two courses of action are suggested by the conclusions of this thesis:
1. an in-field experiment to assess the value of the tritium vg in the ESS surroundings
should be designed and performed. Alternatively, a detailed analysis of the
field properties and accurate modeling by experts to estimate a reliable range
of values should be commissioned;
2. another important limitation of the ESS model is the fixed choice of stability
class and wind speed over a long period of time. For this particular
case one can always adopt a conservative approach; however, it must be noted
that measurements of wind speed and stability class are easily accessible from any
modern meteorological station. An a posteriori hourly averaged joint probability
distribution of the atmospheric parameters (stability class, wind speed, wind
direction, boundary layer height, etc.) would surely lead to much more
accurate dose predictions and uncertainty evaluations with little effort.
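The quasi-Monte Carlo sampling mentioned above can be sketched as follows. The dimension and parameter bounds are illustrative placeholders, and SciPy's `qmc` module is used here as one possible implementation of the Sobol' sequence; the thesis code implements its own generator in ROOT.

```python
import numpy as np
from scipy.stats import qmc

d = 5                                      # number of uncertain inputs (illustrative)
sampler = qmc.Sobol(d=d, scramble=False)   # Sobol' low-discrepancy sequence
u = sampler.random_base2(m=10)             # 2**10 = 1024 points in [0, 1)^d
# Map the unit-hypercube points onto illustrative parameter ranges
lower, upper = np.zeros(d), np.full(d, 2.0)
x = qmc.scale(u, lower, upper)
```

Sampling a power of two of points (`random_base2`) preserves the balance properties of the sequence, which is what lets the estimates converge with fewer model evaluations than plain pseudorandom sampling.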
Concerning the absence of wet deposition in the model, at the end of Chapter 4 a first
study of its potential contribution has been performed. The conditions assumed
were extremely pessimistic; nevertheless, the results seem to suggest a far-from-negligible
impact on the final dose. Clearly, a deeper and more realistic analysis,
which also includes a computational model for root uptake, is required. Finally,
it should be pointed out that in this analysis, due to the wide distribution range, the
outliers of the dose to the public have exceeded the legal limit of 1 mSv for the AA10 scenario.
Therefore, a new assessment by decision makers of the lines of action to adopt for
the radiological consequences, in light of this new information, seems
reasonable.
Appendix A
Figure 5.4: Workflow of the self-written ROOT code for the dose calculations. Legend in the
top-right corner. Full code available at https://github.com/Athropos/dose-model-root
Appendix B
[Plot: normalized activities for N0, N1, N2 and N3 (emission), ranging from 10^-7 to 10^-1, as a function of time elapsed (10^2 to 10^6 s)]
Figure 5.5: LP-102 PIE1. Plot of normalized activity in each room (including the environment)
as a function of time. The approximated transport matrix method has been implemented using a
100 s timestep and a constant source term in V0 between 0 and 389 hours.
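The approximated transport-matrix stepping described in the caption can be sketched as below. The transfer-rate matrix, source strength and room chain are placeholder assumptions for illustration, not the ESS parameters; decay is neglected for simplicity.

```python
import numpy as np

# Placeholder first-order transfer rates (s^-1) for a chain
# N0 -> N1 -> N2 -> N3 -> environment; each column sums to zero,
# so the total activity is conserved by the transport itself.
k = 1.0e-4
K = np.array([
    [-k,  0.0, 0.0, 0.0, 0.0],
    [ k, -k,   0.0, 0.0, 0.0],
    [0.0,  k, -k,   0.0, 0.0],
    [0.0, 0.0,  k, -k,   0.0],
    [0.0, 0.0, 0.0,  k,  0.0],   # environment: absorbing compartment
])

dt = 100.0                                     # 100 s timestep, as in the caption
source = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # constant source in V0 (arbitrary units/s)

A = np.zeros(5)
steps = int(389 * 3600 / dt)                   # constant emission between 0 and 389 h
for _ in range(steps):
    A = A + dt * (K @ A + source)              # first-order step: A' = A + dt*(K A + S)

total_released = source[0] * dt * steps        # activity injected so far
```

The explicit first-order update is the matrix analogue of the approximation exp(K dt) ≈ I + K dt; it is accurate here because k·dt is small compared to 1.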
Bibliography
[2] J.C. Simpson, J.V. Ramsdell, Uncertainty and Sensitivity Analyses Plan, Han-
ford Environmental Dose Reconstruction Project, 1993.
[3] CERRIE, Report of the Committee Examining Radiation Risks of Internal Emit-
ters, 2004.
[4] ESFRI Physical Sciences and Engineering Strategy Working Group - Neutron
Landscape Group, Neutron scattering facilities in Europe - Present status and
future perspectives, September 2016.
[6] ESS-0146659 – Description of the ESS accelerator systems and related infras-
tructure.
[7] ESS-0092033 – Activity transport and dose calculation models and tools used in
safety analyses at ESS.
[8] Mathcad 15.0 M010, User’s guide, PTC July 2011 (version M040 was used in
the safety analyses performed during July 2016 to February 2017 and 2018 and
2019).
[9] Excel 2016 contained in Office 365 for PC, Microsoft 2016. (Calculations were
performed on a PC with Windows 7).
[11] ESS-1102691 – Dose calculation user manual, Lloyd's Register Consulting, April
2019.
[18] ESS-0088188 – Copper-Graphite Tuning Beam Dump inventory for normal op-
eration.
[21] CINDER Version 1.05: Code System for Actinide Transmutation Calculations,
RSICC CODE PACKAGE CCC-755.
[23] Steven R. Hanna, Gary A. Briggs, Rayford P. Hosker, Jr., Handbook on Atmo-
spheric Diffusion, National Oceanic and Atmospheric Administration, 1982
[25] R.H. Clarke, A Model for Short and Medium Range Dispersion of Radionuclides
Released to the Atmosphere, National Radiological Protection Board (NRPB),
1979.
[28] F.B. Smith, A scheme for estimating the vertical dispersion of a plume from
a source near ground level, Proceedings of the 3rd Meeting of an expert panel on
air pollution modelling, Brussels: NATO-CCMS Report 14, 1973.
[30] R.P. Hosker, Estimates of dry deposition and plume depletion over forests and
grasslands, In Proceedings of the Symposium on Physical behaviour of Radioac-
tive Contaminants in the Atmosphere (p. 291), Vienna: IAEA, 1974.
[31] G.A. Briggs, Diffusion estimation for small emissions. Preliminary report, Na-
tional Oceanic and Atmospheric Administration, Oak Ridge, Tenn. (USA). At-
mospheric Turbulence and Diffusion Lab., 1973.
[32] R.F.Griffiths, Errors in the use of the Briggs parameterization for atmospheric
dispersion coefficients, Atmospheric Environment Volume 28, Issue 17, Septem-
ber 1994, Pages 2861-2865.
[34] J.A. Jones, A Procedure to Include Deposition in the Model for Short and
Medium Range Atmospheric Dispersion of Radionuclides, National Radiological
Protection Board (NRPB), 1981
[35] I. Van der Hoven, Deposition of particles and gases, Meteorology and Atomic
Energy 1968, Section 5.3 (Slade, D H, Editor), US Atomic Energy Commission,
TID-24190, 1968.
[39] K.F. Eckerman & R.W Leggett DCFPAK: Dose Coefficient Data File Package
for Sandia National Laboratory, Oak Ridge National Laboratory ORNL/TM-
13347, 1996
[41] ICRP 119, Compendium of Dose Coefficients based on ICRP Publication 60,
ICRP Publication 119, Ann. ICRP 41(Suppl.), 2012
[45] IAEA, Generic Models for Use in Assessing the Impact of Discharges of Ra-
dioactive Substances to the Environment, IAEA Safety Report Series No. 19,
2001
[50] ESS-0134942 – SDD-Sol System 1027 - Monolith Rough Vacuum Pump System
[56] X-5 Monte Carlo Team, MCNP — A General Monte Carlo N-Particle Trans-
port Code, Version 5, 2003
[59] J.A. Jones, The Uncertainty in Dispersion Estimates Obtained from the Work-
ing Group Models, National Radiological Protection Board (NRPB), 1986
[61] R.E. Luna and H.W. Church, A Comparison of Turbulence Intensity and Stabil-
ity Ratio Measurement to Pasquill Stability Classes, Sandia Laboratories, 1972.
[62] D. M. Hamby, The Gaussian Atmospheric Model and its sensitivity to the Joint
Frequency Distribution and Parametric Variability, Health Physics. 82(1):64-73,
January 2002
[63] John D. Cook, Determining distribution parameters from quantiles, The Uni-
versity of Texas M. D. Anderson Cancer Center, January 27 2010
[65] A. Saltelli, P. Annoni et al., Variance based sensitivity analysis of model output.
Design and estimator for the total sensitivity index, Computer Physics Commu-
nications, Volume 181, Issue 2, February 2010, Pages 259-270.
[66] P. Bratley, B.L. Fox, Algorithm 659: Implementing Sobol's quasi-random sequence
generator, ACM Transactions on Mathematical Software 14 (1) (1988) 88–100.
[69] S. Joe and F. Y. Kuo, Remark on Algorithm 659: Implementing Sobol’s quasir-
andom sequence generator, ACM Trans. Math. Softw. 29, 49-57 (2003)
[70] H. Foerstel, HT to HTO Conversion in the Soil and Subsequent Tritium Path-
way: Field Release Data and Laboratory Experiments, Nuclear Research Centre
(KFA) Radioagronomy, Fusion Technology 14, 1241–1246, 1988.
[75] U.S. EPA, Exposure Factors Handbook 2011 Edition (Final Report), U.S. En-
vironmental Protection Agency, Washington, DC, EPA/600/R-09/052F, 2011.
[76] R. S. Gage and S. Aronoff, Translocation III. Experiments with Carbon 14,
Chlorine 36, and Hydrogen, Plant Physiology, Vol. 35, No. 1 (Jan., 1960), pp.
53-64.
[77] B. Efron and R.J. Tibshirani, An Introduction to the Bootstrap, Chapman and Hall,
New York, 1997.
[78] D. Apsley, Modelling dry deposition, National Power and CERC, 2012
[81] I.M. Sobol’, Sensitivity analysis for non-linear mathematical models, Mathe-
matical Modelling and Computational Experiment 1, 407–414, 1993.
[83] I.M. Sobol’, Global sensitivity indices for nonlinear mathematical models and
their Monte Carlo estimates, Mathematics and Computers in Simulation 55,
271–280, 2001
[86] M.J.W. Jansen, Analysis of variance designs for model output, Computer
Physics Communications 117 35–43, 1999