
A probabilistic approach to design

civil engineering structures

Gabrielle Muller & Gabriele Albertini

Semester project
Spring term 2015

Professor Jean-François Molinari

Fabian Barras
Computational Solid Mechanics Laboratory -LSMS
École Polytechnique Fédérale de Lausanne - EPFL



Theoretical considerations and principles of probabilistic

approach in civil engineering

about probabilistic and deterministic approaches in structure design

1.1 Basic structural design principles
1.2 The deterministic approach - The classical approach in civil engineering
1.3 The probabilistic approach

mathematical and numerical tools required for a probabilistic approach

2.1 Define Probability Density Functions and the Monte Carlo method
2.2 The Monte Carlo Method
2.3 Advantages and limitations of using Monte Carlo

II The application of probabilistic theory: from a basic example in Python to a real structure in Akantu



an illustrative example of the monte carlo theory in python

3.1 Evolution of the accuracy of the results
3.2 Study of the accuracy of the Monte Carlo method

an illustrative example of the monte carlo theory using akantu

4.1 Getting started in Akantu
4.2 The Model
4.3 The analytical method
4.4 The Monte Carlo Simulation
4.5 Results




using akantu for an application of the monte carlo theory

5.1 Introduction
5.2 Considerations about the model
5.3 The Monte Carlo simulations
5.4 Results





The goal of this semester project is to study a probabilistic approach to design

civil engineering structures. Currently, most structural design is done via
a deterministic approach, which is the easiest, fastest and best-known approach for
engineers. Even though deterministic design guarantees a certain level of structural safety through the SIA standard prescriptions, it is of interest to consider a probabilistic
approach in order to quantify structural safety and reliability, which cannot be assessed with a deterministic approach. The differences between the deterministic and
probabilistic design approaches will be studied, and consequences for future related work in structural engineering will be drawn.
This report is divided in two parts:
Part 1 is the theoretical part, which exposes the concepts of deterministic design and probabilistic design in structural engineering and introduces the Monte Carlo theory.
Part 2 is the application of the theory and consists in an implementation of the Monte Carlo theory on real examples. Starting from the
simple case of a steel bar subjected to axial loading, and using the open-source
software Akantu, we implemented the Monte Carlo theory to assess the
reliability of an existing dam. The goal was to include probabilistic
theory in the frame of dam design principles.
Numerical tools used during this project
Throughout this project, we used the Akantu software through a Python
interface. Akantu is an open-source finite element library developed at
the Computational Solid Mechanics Laboratory (LSMS) at EPFL.
Gmsh was used to create the mesh for the dam.
Supervision This project was done within the LSMS at EPFL under the
direction of Professor Molinari. The supervision was provided by Fabian Barras,
with whom we collaborated closely throughout the term.

Part I.
Theoretical considerations and
principles of probabilistic approach in
civil engineering



basic structural design principles

Most engineering problems can be solved by comparing the following two quantities:
the solicitation or stress S
S can be the resulting bending moment in a beam subjected to well-defined
loads, the deflection of a beam under a given load case, or the ground solicitation
due to external loads.
the corresponding capacity or resistance R
R can be the ultimate resisting moment of a beam, the maximum allowable
deflection of a beam, or the shear capacity.
Structural safety requires that R > S. Failure occurs whenever R < S. This general formulation is applicable to most civil engineering problems.
(These considerations are directly taken from course support [1])

the deterministic approach - the classical approach in civil engineering


Currently, the deterministic approach is the method most widely used by civil engineers when designing structures.
The design of structural elements is done as follows: the design value of the resistance
Rd is compared to the corresponding design value of the solicitation Sd. Structural safety requires that
Rd > Sd


The design value Rd is the resistance value R divided by a safety coefficient
γ_R that takes into account simplifications linked to the model, uncertainties and
variability of the material properties.
The design value Sd takes into account the different scenarios considered as well as the corresponding loads. Here again, a safety coefficient γ_S is
defined in order to include simplifications. When designing a building, one has
to differentiate between predominant and concomitant effects, inducing different
safety factors. However, this differentiation is not relevant in the context of this
project and will not be done for the rest of this report.
The most characteristic feature of the deterministic approach is that both
resistance and solicitation are defined by a fixed value, the characteristic value,
which results from multiple considerations and scenarios. In this concept of a
fixed value lies the major difference with the probabilistic approach [1].
As a help for structural engineers, the Eurocode and the SIA standards give
guidelines on how to determine a characteristic value for an action as well as the
safety coefficients. The guiding principles of the SIA standards are described in
section 1.2.1.

1.2.1 SIA Standard prescriptions for deterministic design

design value of the solicitation
characteristic value Sk
generally, the characteristic value corresponds to a probability
of non-exceedance of p_t = 1 - p_f = 95% in the case of a normal
law. This means that there is a probability of 5% for Sk to be exceeded.
safety factor γ_S
this factor takes into account the lack of precision of the characteristic load value with the factor γ_f and the uncertainties linked to
the structural analysis with the factor γ_m. Values for both factors are well defined
in the Standard. In the case of a linear relationship between actions
and action effects, the safety factor γ_S is computed as
γ_S = γ_f · γ_m

the design value of the solicitation is then defined by equation 3

Sd = γ_S · Sk   (3)


design value of the resistance

The design value of the resistance Rd is a function of the design values of
the material properties Xd
the geometric data ad
Uncertainties due to the resistance model are taken into account with
the safety factor γ_R, and those of the material properties with the factor γ_m:
γ_M = γ_R · γ_m

the design value of the resistance is defined in equation 4

Rd = Rk / γ_M   (4)

The considerations of this entire section are mostly taken from the course [1].
1.2.2 Considerations in relation to the 5% fracture percentile
It is interesting to notice that the deterministic design approach is nevertheless
based on probabilistic considerations. Indeed, the characteristic values are determined
through a fixed percentile of the normal distribution (in most cases, but it
could theoretically also be another probabilistic law). With respect to this, it is
important to mention that the 5% fractile is not a fixed percentile, but depends on
the level of reliability to be achieved. The higher the required level, the
smaller the resulting fractile and thus the bigger the characteristic value.
Moreover, the safety factors given in the Standard for the deterministic approach
hide the probabilistic procedure with which they were determined. The methodology for the determination of these safety factors will be explained in section 1.3.

This deterministic procedure given by the SIA Standards gives reliability to
the design and confidence to the engineer in charge of the structural design, but
this reliability is not quantified. It only indicates whether or not the studied element
passes the verification for a given risk scenario; in any case, it does not give
a quantitative sense of how close the structure is to failure. The deterministic
approach may look secure to most engineers and lead them to believe they design
safe structures, but in reality it does not give a good understanding of the reliability of the structure. The probabilistic method exposed in section 1.3 compensates this lack of
quantification of reliability and suggests a way to evaluate the reliability of structures.
"According to the different possible verification formats, how can reasonable safety be
defined considering uncertainties related to different parameters?" (Chapter 7, [1])
An attempt to answer this question is given in section 1.3.

the probabilistic approach

The major advantage of designing structures with a probabilistic approach is the
possibility to quantify the reliability of the structure. Instead of using characteristic values, which correspond to upper or lower boundary values, a probabilistic
approach allows engineers to quantify the reliability of the designed structures,
as opposed to deterministic design, which only determines whether
or not the structure is safe. In most cases, the probabilistic approach of designing
a structure gives results that are closer to reality and thus less conservative than
a deterministic approach. This could be of interest in structural design, since it
would allow to design structures differently and save material and money,
as well as to assess the reliability of an existing structure and determine how
far it is from failure. Moreover, it is a useful tool for assessing the reliability of
existing structures, since parameters can be adapted with respect to the target reliability or the importance of the building.
Some basic concepts related to probability and statistics need to be recalled:
the probability density function, with the mean value and standard deviation
the distribution function, with the median value
the normal distribution, widely used in engineering and science applications
In the context of structural design in civil engineering, it is of interest to quantify
the safety and reliability of a structure, especially for existing structures.
1.3.1 Quantifying the reliability of a structure
A relevant approach to quantifying the reliability of a structure is to estimate its
probability of failure. The probability of failure is a reliable indicator of structural
safety and a useful tool from an engineering point of view.

The basic principles of statistics and probability recalled in section 1.3 can be
applied to a probabilistic analysis of structural safety and allow to express this
concept of reliability mathematically.
Therefore, two values need to be considered [1]:
the limit function G: G = R - S (R: resistance and S: solicitation)
the reliability index β, from which the failure probability p_f can be directly derived
Analytically, the failure probability of a structure is defined as follows [1]:

p_f = ∫_{-∞}^{+∞} f_S(x) F_R(x) dx   (5)


with F_R: the cumulative distribution function of the resistance

F_R(x) = P(R < x) = ∫_{-∞}^{x} f_R(u) du   (6)

and f_S: the probability density function of the solicitation

P[a < S < b] = ∫_{a}^{b} f_S(x) dx   (7)

p_f is the probability of failure of a structural element subjected to a
solicitation S and having a resistance R. S and R can follow different probability
laws depending on the nature of the parameters' behavior. In the context of
this project, the behavior of S and R will be limited to the Uniform, Normal,
Poisson and Weibull laws. These laws can be expressed analytically and will be
described in section 2.1.
Knowing that G = R - S, the failure probability can also be expressed as

p_f = ∫_{-∞}^{0} f(G) dG = Φ(-β) = 1 - Φ(β)   (8)

with Φ: the cumulative distribution function of the standard normal law.
This relation shows that the probability of failure p_f and the reliability index β are
directly related and dependent on each other.


1.3.2 Structural safety verification

According to the probabilistic approach, structural safety is ensured when the
following condition is satisfied:

β > β_limit or p_f < p_limit   (9)

This shows that the reliability, expressed through β or the failure probability
p_f of an element, is compared to a limit value. This limit value fixes the minimal
reliability expected by society with respect to structures. [1]
In the SIA Standards for new constructions, β is fixed at 4.7, corresponding to
a failure probability of 10^-6. Guidelines are given for target reliability factors for
existing structures.
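This correspondence between β and p_f can be checked numerically; the sketch below (not part of the original report) assumes the standard relation p_f = Φ(-β), with Φ the cumulative distribution function of the standard normal law, and verifies that β = 4.7 indeed corresponds to a failure probability of the order of 10^-6.

```python
from scipy.stats import norm

# p_f = Phi(-beta): failure probability for the SIA target reliability index
beta = 4.7
pf = norm.cdf(-beta)      # about 1.3e-6, i.e. of the order of 10^-6
print(pf)

# Inverse relation: beta = -Phi^{-1}(p_f)
beta_back = -norm.ppf(pf)
print(beta_back)          # 4.7
```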
The design values defined through a probabilistic approach are:

S* = S_m (1 + α_S β δ_S)   (10)

R* = R_m (1 - α_R β δ_R)   (11)

with S_m, R_m: mean values, α_S, α_R: influence factors, β: reliability index
and δ_S, δ_R: coefficients of variation.

1.3.3 Methodology for determining safety factors in the SIA Standard: interrelation
between probabilistic and deterministic design
In order to define safety factors prescribed in the Standard, following considerations have been done: one equalizes the design values obtained by the two
approaches, equation(3) = equation(10) and equation(4) = equation(11) obtaing

Sd = S
Sd = k
S = Sm (1 + S S )

= Sm (1 + S S )

1.3 the probabilistic approach

S =

(1 + S S )


Rd = R
Rd =


R = Rm (1 R R )

M =

(1 R R )


It goes without saying that the choice of the parameters α, β and δ is a key choice
for the determination of the safety factors. A safety factor is a function of the
importance of the variable, defined with the limit values p_f,limit and β_limit [1].
The design values determined with the probabilistic approach are sensitive to the
choice of these parameters. This was observed later in the project when
defining the values for the dam design verification.
All the safety factors in the standard should be determined following the foregoing
procedure, even if, according to Professor Brühwiler, "the SIA standards are
discussed over a cup of coffee" (Prof. Brühwiler, September 2014).

1.3.4 Summary and conclusion

The general expressions for the verification of structural safety are:
deterministic approach: Sd < Rd
probabilistic approach: S* < R*
For basic cases, it is efficient and precise to compute the probability of failure via
the analytical method. However, for more complex examples with numerous
parameters, each one following a different probabilistic law, it becomes impossible
to find analytical expressions and to compute the integrals of equations
(5), (6) and (7). To face this shortcoming of analytical tools, the Monte
Carlo method has been developed and widely used as a reliable way to estimate
failure probability. The Monte Carlo approach and its usefulness in the context of
probabilistic design of civil engineering structures will be explained in chapter 2.

The principles of probabilistic and deterministic design explained in this chapter
are summarized in figure 1.

Figure 1.: Summary of probabilistic and deterministic design



In this chapter, we will describe the probabilistic approach for designing structures. First, the description of the random variables via probability density function will be discussed. Later, the Monte Carlo method will be explained, as well
as its advantages and limitations.

define probability density functions and the monte carlo


In a probabilistic approach we model the uncertainty linked to the input parameter such as material properties, applied loads and geometric properties with
various probability distributions.
For each parameter of interest, its uncertainties are taken into account by assigning a probabilistic law to that parameter. Depending of the nature of the
parameter, a different probabilistic law is chosen. The most relevant Probability
Density Functions (PDF) in the context of civil engineering are exposed in the
next paragraphs.
2.1.1 Uniform Distribution
The probability density function of a uniform distribution is defined as

f(x) = 1/(b - a) for x ∈ [a, b], and f(x) = 0 otherwise

A parameter following the uniform law is equally likely to take any value in the
range between a and b. This type of distribution is of limited relevance in the
context of probabilistic design, since structural parameters rarely show equal
likelihood over an entire interval. [5]



2.1.2 Normal Distribution

PDF = f(x) = 1/(σ√(2π)) · e^{-(x-μ)²/(2σ²)}

with μ: mean value and σ: standard deviation


The normal distribution is probably the most important one in science and
engineering. The central limit theorem states that the average of a large number
of independently drawn random variables is approximately normally distributed,
which makes the normal distribution attractive to engineers and scientists. In
civil engineering, when providing a construction material, manufacturers give
the mean value and standard deviation characterizing the material strength or
size. These references from the factory can be used by engineers in order to estimate the variability of the material resistance and extrapolate it to the structural scale.
2.1.3 Weibull Distribution

PDF = f(x) = (k/λ) (x/λ)^{k-1} e^{-(x/λ)^k} for x ≥ 0

with k: the shape parameter and λ: the scale parameter of the distribution.


From figure 2, it can be noticed that the shape parameter k has a strong influence on the shape of the curve. When k = 1, the Weibull law reduces to an exponential
distribution, and when k is equal to 5, it is close to a normal law.
Figure 2 shows the Weibull law and the influence of the parameters λ and k.



Figure 2.: Probability density function for Weibull distribution-from [5]

The Weibull law is typically used in material science in the domain of failure
analysis of a material over its life span. The Weibull distribution is useful to
describe the ultimate stress of a material and gives an idea about the dispersion
of the defect sizes present in the material. The shape parameter k is commonly
referred to as the Weibull modulus. The more similar the defects of the studied
material, the larger the shape parameter k. Ceramic materials with a homogeneous distribution of defects in the material have a high shape parameter k. In
the context of failure mechanics and defect analysis, the scale parameter λ represents the mean ultimate stress of the sample volume.
The cumulative Weibull distribution is also employed in particle-based methods
to describe the size of particles generated by grinding and crushing operations.
In fact, the Weibull distribution predicts a more accurate distribution of small
particle sizes than the normal distribution. The cumulative distribution function
F(x; k, λ) is the mass fraction of particles with diameter smaller than x, with λ
being the mean particle size and k a measure of the spread of the particle sizes.
This particle size distribution and mass fraction estimation with the cumulative
Weibull distribution is applied in the mining industry and in other industrial
applications in order to design the containers and the conveyor belts carrying
crushed material. [5]



2.1.4 Poisson Distribution

The Poisson distribution is a discrete probability distribution that expresses the
probability of a given number of events occurring in a fixed interval of time,
if these events occur with a known average rate and independently in time. The
probability mass function, giving the probability of a discrete random variable, is
given by the expression

f(k; λ) = λ^k e^{-λ} / k!

with λ: the average rate and k: the number of events occurring in a fixed interval of time.
Figure 3 shows the shape of the probability mass function of the Poisson distribution for different values of the average rate λ.

Figure 3.: Probability mass function for Poisson distribution-from [5]

For sufficiently large values of λ (say λ > 10), the normal distribution and the
Poisson distribution converge. Assuming that λ is bigger than 10 in the context of
this project, we may approximate the Poisson distribution by a normal distribution
and will not use it in further applications.
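The four laws described in this section can be sampled directly with numpy. The sketch below (parameter values are illustrative, not taken from the report) draws samples from each law and checks the Poisson/normal convergence claim for λ > 10.

```python
import numpy as np

rng = np.random.RandomState(42)  # Mersenne Twister pseudo-random generator
N = 100_000

# Samples from the four laws of section 2.1 (illustrative parameters)
uniform = rng.uniform(low=0.0, high=1.0, size=N)
normal = rng.normal(loc=330.0, scale=10.0, size=N)   # e.g. a steel strength in MPa
weibull = 360.0 * rng.weibull(a=20.0, size=N)        # scale lambda = 360, shape k = 20
poisson = rng.poisson(lam=25.0, size=N)              # average rate lambda = 25 > 10

# For lambda > 10, the Poisson law is close to Normal(lambda, sqrt(lambda))
print(poisson.mean(), poisson.std())  # close to 25 and 5
```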


the monte carlo method

This method differs from the probability distributions explained above in that it
does not proceed analytically, but is based on repeated random sampling to obtain
numerical results. It is widely used in numerical simulations when it is difficult
or impossible to use analytical expressions. The methodology of a Monte Carlo
simulation is described in the following paragraphs.
Creating the Sample: Random Number Generator
When creating a sample of interest, a domain of definition of possible inputs
is established and the inputs are randomly generated from a well-defined
probability distribution.
A few considerations have to be made concerning the random generation
of inputs. A computer can only generate uniform pseudo-random numbers:
even if the generated numbers appear independently drawn, there is a
periodicity linked to the generation algorithm, which is why the numbers are
called pseudo-random. However, in Python, the numpy.random.RandomState()
generator has a period of 2^19937 - 1. The number of Monte Carlo simulations
in our applications being significantly lower than the period of the generator,
we can assume that the numbers are indeed generated randomly.
Running the Numerical Model
A deterministic condition is set in the loop. In the context of this project,
the maximum stress in a structural element (resulting from randomly
distributed variables) is compared to the resistance of the material. If the
stress exceeds the resistance, a counter implemented in the code is increased
by one. The model is run as many times as the desired number of Monte Carlo
drawings, starting from N = 10 up to N = 1 000 000 if
the computing power is sufficient.
Number of simulations N required
Lemaire gives an estimation of the number of simulations N that have to be done
in order for the results to converge: for a 95% confidence on a probability of
failure p_f = 10^-n, N = 10^{n+2.3} simulations are required.
This means that for a failure probability of
10^-6, N = 10^{8.3} ≈ 2·10^8 samples are required in order to achieve a good accuracy of the results.
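As a quick numerical illustration of this rule of thumb (assuming the exponent n + 2.3 as stated above):

```python
def required_samples(n):
    """Monte Carlo draws needed (Lemaire's rule) to estimate p_f = 10^-n
    with roughly 95% confidence: N = 10^(n + 2.3)."""
    return 10 ** (n + 2.3)

# For p_f = 10^-6, about 2e8 samples are needed
print(required_samples(6))
```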
Analysing the Data



Through the counter implemented in the code, the number of failure cases
is known at the end of the simulation. The probability of failure is then
obtained by dividing this count by the total number of Monte Carlo simulations N.
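The three steps above (sampling, deterministic check, counting) fit in a few lines of Python. The sketch below uses the stress and resistance distributions of section 3.2 as a stand-in model; these particular numbers come from that example, not from a general rule.

```python
import numpy as np

def monte_carlo_pf(N, seed=0):
    """Estimate a failure probability by repeated random sampling.

    Model: solicitation S ~ Normal(286, 14.29) MPa and
    resistance R ~ Weibull(shape k=20, scale lambda=360) MPa.
    """
    rng = np.random.RandomState(seed)
    S = rng.normal(loc=286.0, scale=14.29, size=N)  # random solicitations
    R = 360.0 * rng.weibull(a=20.0, size=N)         # random resistances
    failures = np.count_nonzero(S > R)              # deterministic check: failure if S > R
    return failures / N                             # p_f = failures / N

print(monte_carlo_pf(100_000))  # about 1.5e-2 for these distributions
```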

advantages and limitations of using monte carlo

The main advantage of Monte Carlo is its easy numerical implementation, especially for complex cases where the analytical expressions are too complicated. The
results are reliable and accurate.
However, it may take a lot of time depending on the complexity of the problem
and the number of samples drawn.


Part II.
The application of probabilistic
theory: from a basic example in
Python to a real structure in Akantu



The goals of this example were to

learn how to program in Python
estimate the failure probability of a steel bar subjected to an axial force using
the principles of Monte Carlo theory
This basic example is relevant because it allowed us to study how the accuracy
of Monte Carlo simulations evolves with the sample size and how the resulting
failure probability converges once a sufficient number of simulations is done.


evolution of the accuracy of the results

3.1.1 Parameter description and example considerations

First, the parameters of the bar are
the applied force F
the cross-sectional area of the bar A
the tensile resistance of the steel bar R
Different cases were considered: each parameter was considered following either Weibull or Gaussian distribution, which resulted in eight different possible
combinations (each one of the three parameters can follow two different laws).
This allowed to compare the different outputs and influence of the laws on the
Only one single case out of the eight studied cases will described in this report.



The mean and standard deviation chosen for the inputs defining the state of
the bar are
for the applied force F: μ = 200 000 N and σ = 2000 N
for the cross-sectional area of the bar A: μ = 700 mm² and σ = 0.2 mm²
for the tensile resistance of the steel bar R: μ = 330 MPa and σ = 10 MPa
The force F is chosen to follow a Weibull law with a shape parameter k = 6, and
the area A and the resistance R are chosen to follow a Gaussian law. The stress σ
in the steel bar subjected to tension is given by equation 18.

σ = F/A   (18)


Comment: The choice of the force following a Weibull distribution has only
been done for illustrative purposes. It does not make physical sense for a force to
follow a Weibull distribution. However, to stay as close as possible to the reality,
the shape parameter k of the Weibull distribution is chosen in order to have a
probability density function as close as possible to a normal law.
3.1.2 Methodology
In order to implement the Monte Carlo theory, a loop is introduced in the code
to compare, at each loop passage, the tensile stress induced in the bar
with the tensile resistance R of the bar. This comparison is deterministic, since
it confronts two values, and meaningful consequences in terms of failure
probability and reliability of the bar are drawn from it.
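A minimal sketch of this loop is given below. Note that the Weibull parameterization of F is an assumption on our part: the report quotes a mean and a standard deviation, so the sketch fits the shape k and scale λ by moment matching (which yields a much larger k than the value 6 quoted above); with this interpretation, the resulting failure probability comes out of the order of 10^-5, consistent with section 3.1.4.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

# Bar parameters of section 3.1 (mean / standard deviation)
mu_F, sd_F = 200_000.0, 2000.0   # applied force [N]
mu_A, sd_A = 700.0, 0.2          # cross-sectional area [mm^2]
mu_R, sd_R = 330.0, 10.0         # tensile resistance [MPa]

# Fit the Weibull shape k and scale lam so that the force distribution
# matches the quoted mean and standard deviation (an assumption, see above)
def weibull_cov(k):
    g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
    return np.sqrt(g2 / g1 ** 2 - 1)

k = brentq(lambda k: weibull_cov(k) - sd_F / mu_F, 2.0, 1000.0)
lam = mu_F / gamma(1 + 1 / k)

rng = np.random.RandomState(1)
N = 1_000_000
F = lam * rng.weibull(k, size=N)       # random forces
A = rng.normal(mu_A, sd_A, size=N)     # random areas
R = rng.normal(mu_R, sd_R, size=N)     # random resistances
pf = np.count_nonzero(F / A > R) / N   # failures where sigma = F/A exceeds R
print(pf)                               # of the order of 1e-5
```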

3.1.3 Simulation results and outputs given by the Python code

The results obtained on the Python interface for this example are given in this
section. Since the goal of this example was to study the evolution of accuracy of
the results with the sample size, different results related to different Monte Carlo
simulations are shown in the figures below, as well as the associated failure probability.
Starting with one simulation (N=1), the number of simulations was progressively increased to N=10e7, being the limit of the Virtual Box installed on our
computer. The outputs are shown in the figures 4,5,6,and 7.


(a) N = 1, p_f = 0    (b) N = 10^1, p_f = 0

Figure 4.: Outputs for N = 1 and N = 10

(a) N = 10^2, p_f = 0    (b) N = 10^3, p_f = 0

Figure 5.: Outputs for N = 10^2 and N = 10^3

(a) N = 10^4, p_f = 0    (b) N = 10^5, p_f = 2·10^-5

Figure 6.: Outputs for N = 10^4 and N = 10^5

(a) N = 10^6, p_f = 1.1·10^-5    (b) N = 10^7, p_f = 9.1·10^-6

Figure 7.: Outputs for N = 10^6 and N = 10^7

The figures shown above give a good idea of the evolution
of the accuracy of the failure probability estimation with respect to the number
of Monte Carlo simulations (i.e. the sample size). At the very beginning,
when the number of simulations is still low, the bar chart illustrates the
studied case well: it can be seen visually that the bars of the resistance
and stress charts do not intersect in the first four images. As the number of
simulations increases, the chart starts taking the appearance of a continuous
normal law and the areas under the two curves do intersect, giving a first estimation
of the failure probability.



3.1.4 Convergence rate analysis

The results shown in the figures of the previous section are summarized in
table 1.

Table 1.: Summary table

Number of simulations N    Failure probability p_f
1                          0
10^1                       0
10^2                       0
10^3                       0
10^4                       0
10^5                       2.0·10^-5
10^6                       1.1·10^-5
10^7                       9.1·10^-6

These results can be visualized in figure 8.

Figure 8.: Convergence rate analysis

This plot renders the evolution of the Python outputs with the increasing number
of Monte Carlo simulations and brings to light the convergence of the output
values toward a probability of failure of p_f = 9.1·10^-6.
The failure probability of the steel bar has hereby been computed with a code
implemented in Python, and plausible results have been found through a convergence
analysis. It is now necessary to study the accuracy of the results predicted
by the Monte Carlo method compared to the analytical results. Therefore, another
example has been set up and is explained in the following section.

study of the accuracy of the monte carlo method

The goal of this example was to estimate the failure probability of a basic example
via three methods, namely
the analytical method
the approximate method proposed first when typing "failure probability" on Google
and widely used in most civil engineering practice
the Monte Carlo method
The results obtained with the three methods will be compared and relevant
conclusions can be drawn concerning the accuracy of each of the approaches.
The same model as in section 3.1 was chosen: a steel bar subjected to axial tension.
The stresses induced in the bar are governed by equation 18.
For this example, the stresses in the bar follow a normal distribution
with a mean value of μ = 286 MPa and a standard deviation of σ = 14.29 MPa.
The resistance follows a Weibull distribution with the scale parameter chosen
to be λ = 360 MPa and a shape parameter k = 20.
3.2.1 The analytical method
The exact solution for the failure probability is analytically defined as

p_f = ∫_{-∞}^{+∞} f_S(x) F_R(x) dx   (19)

Figure 9.: Failure probability p_f computed analytically

Since the computation of the integral involves a Normal and a Weibull distribution,
it cannot be done analytically. Thus, it is done numerically using the
Python function scipy.integrate.quad.
We found a failure probability of
p_f = 1.550·10^-2
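This numerical integration can be reproduced in a few lines with scipy; the sketch below evaluates equation 19 over a finite interval wide enough to contain essentially all of the probability mass of the stress distribution.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Stress S ~ Normal(286, 14.29) MPa, resistance R ~ Weibull(k=20, lambda=360) MPa
f_S = stats.norm(loc=286.0, scale=14.29).pdf       # density of the stress
F_R = stats.weibull_min(c=20.0, scale=360.0).cdf   # cumulative distribution of the resistance

# p_f = integral of f_S(x) * F_R(x) dx (equation 19); the integrand is
# negligible outside [150, 450] MPa (more than 9 standard deviations of S)
pf, abserr = quad(lambda x: f_S(x) * F_R(x), 150.0, 450.0)
print(pf)  # about 1.55e-2
```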
3.2.2 Approximate method
In order to confirm this result, it was of interest to use a different method to
approximate the failure probability. Found in the course [4] and in numerous other
references in the literature, the expression approximating the failure
probability is given in equation 20. This integral corresponds to the area of the
intersection of the two probability density functions. The two PDF are shown in figure
10, the green line representing the stress and the blue line the resistance. The red
line delimits the area of the intersection where the integral is computed.

p_f = ∫_{-∞}^{x*} f_R(x) dx + ∫_{x*}^{+∞} f_S(x) dx   (20)

with x* the value where the probability densities of both the resistance and the
stress functions are the same:

f_S(x*) = f_R(x*)   (21)


Figure 10.: Approximate calculation of the failure probability p_f

By using the same numerical integration tool in Python as in the previous
section, we found an estimation of the failure probability of
p_f = 8.788·10^-2
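A possible implementation of this approximation is sketched below. The crossing point x*, where the two densities are equal, is found with a root finder; the bracket [290, 350] MPa is an assumption chosen by inspection of the two densities.

```python
from scipy import stats
from scipy.optimize import brentq

S = stats.norm(loc=286.0, scale=14.29)        # stress distribution
R = stats.weibull_min(c=20.0, scale=360.0)    # resistance distribution

# x* is the abscissa where both densities are equal
x_star = brentq(lambda x: S.pdf(x) - R.pdf(x), 290.0, 350.0)

# Intersection area (equation 20): left tail of R plus right tail of S
pf = R.cdf(x_star) + S.sf(x_star)             # sf(x) = 1 - cdf(x)
print(pf)  # about 8.8e-2
```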
3.2.3 The Monte Carlo method
Similarly to the example of the previous section, for each Monte Carlo drawing the
stress in the bar is compared to the resistance. A counter is set in order to
count the number of times the stress in the bar exceeds the resistance.
Figure 11 shows the distributions of the stresses and the resistance.
The sample size for this Monte Carlo simulation was fixed at N = 10^5 in order
to limit the computation time. The failure probability was calculated by dividing
the number of failed specimens by the total sample size N. For this example, we
found a failure probability of
p_f = 1.502·10^-2


Figure 11.: Failure probability calculated with 10^5 Monte Carlo simulations
3.2.4 Comparison between the three different methods
To estimate a failure probability of the order of 10^-2, according to Lemaire's
formula, the sample size N = 10^5 is suitable. On the other hand, the approximate
solution gives a result an order of magnitude greater than the other results. This
shows that the approximate method is very conservative.
It is of interest to compare the different ways of calculating the failure probability
when the target failure probability decreases. This was done by increasing the
scale parameter λ, all the other parameters remaining the same. This causes a
shift of the resistance curve to the right.
The results are summarized in table 2 with a fixed number of Monte Carlo
simulations N.
Table 2.: Influence of the R parameter on failure probability

p_f analytical    p_f approx.    p_f Monte Carlo
1.55e-02          8.79e-02       1.47e-02
1.92e-03          1.86e-02       2.07e-03
1.17e-04          2.05e-03       8.00e-05
2.21e-05          5.24e-04       5.00e-05
2.29e-06          7.91e-05       0.00e+00

This example shows how the Monte Carlo simulation can underestimate the failure probability if the target probability is very small. In that case, in
order to increase the accuracy of the solution, a larger sample size is necessary. The
fact that none of the samples failed for the 560 MPa resistance parameter shows that the sample size is
not big enough.
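The required sample size can be judged from the coefficient of variation of the crude Monte Carlo estimator, a standard result in the reliability literature; a minimal helper, as a sketch:

```python
import math

def mc_cov(pf, n):
    """Coefficient of variation (relative error) of the crude
    Monte Carlo estimator of a failure probability pf with n samples."""
    return math.sqrt((1.0 - pf) / (n * pf))

# To estimate pf ~ 1e-2 with N = 1e5 drawings:
cov = mc_cov(1e-2, 10**5)   # ~0.03, i.e. about 3 % relative error
```

For pf ~ 1e-4 the same N = 10^5 gives a relative error of roughly 30 %, which explains the scatter in the last rows of table 2.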
Another approach would be to change the sampling procedure by focusing on
the zone of interest, i.e. the failure zone in the context of this example. This
concept is called importance sampling (or instrumental density) and is explained
in more detail in the course [8].
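A minimal sketch of importance sampling, assuming a normally distributed safety margin M = R - S with an illustrative reliability index of 4 (these values are assumptions, not the report's): samples are drawn from an instrumental density centred on the failure zone and re-weighted by the likelihood ratio.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
N = 10**4

# Assumed margin M = R - S, normal with mean 4 and unit standard
# deviation, so the exact failure probability is Phi(-4) ~ 3.2e-5
f = stats.norm(4.0, 1.0)      # true density of the margin M
h = stats.norm(0.0, 1.0)      # instrumental density centred on the failure zone

m = h.rvs(N, random_state=rng)
w = f.pdf(m) / h.pdf(m)       # likelihood ratio correcting the biased sampling
pf_is = np.mean(w * (m < 0))  # failure when the margin is negative
```

With only 10^4 samples this reaches a few percent relative error, whereas crude Monte Carlo would mostly see no failure at all at this probability level.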
The basic example of a steel bar in tension gave us the opportunity to learn how to:
- code in Python
- implement a loop for the Monte Carlo simulation
- compute the failure probability analytically using the mathematical functions defined in the theoretical part of this report
- estimate the failure probability with the Monte Carlo methodology and implement a code for the Monte Carlo simulations
- compare Monte Carlo results with the analytical solutions and draw conclusions
- evaluate the accuracy of Monte Carlo results with respect to the sample size
We also realized that the failure probability approach proposed in numerous
civil engineering references has to be treated with caution,
since it is only accurate for small probabilities. In fact, the idea that the failure
probability is equal to the area of intersection under the S and R curves is only
correct for p_f < 10^-10 and should not be used systematically. Instead, the general
method given by equation 19 should be kept in mind.
The next step is to use Akantu via the Python interface in order to estimate
failure probability of more complex cases using the basic principles explained in
the above section.



As a continuation of the example described in chapter 3, the idea is
to increase the difficulty and estimate the failure probability of a beam element (as opposed to the bar element of the previous example). To face the challenges linked with
structural mechanics, the open source software Akantu is used through a Python
interface. A simple example was chosen so that the results could be checked
analytically with the basic beam equations.
The goals of this example are to:
- learn how to use Akantu from the Python interface
- estimate the failure probability of a beam with the Monte Carlo method

getting started in akantu

In order to learn how Akantu operates and which functions would be important for the rest
of the project, we worked through a couple of basic examples to get familiar with it. First,
we computed the static and dynamic response of a beam, then ran some
implicit and explicit simulations. We also learned the main differences between
the solid mechanics and structural mechanics parts and the relevant functions that we would
use for the following example. We visualized most of our results in Paraview [13].

the model

The case of a simply supported beam is considered. The beam is composed of

two elements (3 nodes) and a force is applied at the middle of the beam (on the
mid-node). The beam has a length of 10m.
The varying parameters were chosen to be:



- the applied force F, following a normal law with μ = 1000 N and σ = 10 N
- the Young's modulus E, following a normal law with μ = 2e10 MPa and σ = 2e7
- the moment of inertia I
- the resistance of the beam R, following a normal law with μ = 2430 MPa and σ = 2.4 MPa

the analytical method

The moments in the steel beam can easily be computed using first principles of statics. This was useful to check the results obtained with Akantu.
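For a simply supported beam of span L carrying a point load F at mid-span, statics gives the maximum bending moment at mid-span as M = F L / 4; a one-line check with the model's mean values:

```python
def midspan_moment(force, length):
    """Maximum bending moment of a simply supported beam loaded
    by a point force at mid-span: M = F * L / 4 (basic statics)."""
    return force * length / 4.0

# Mean values of the model: F = 1000 N, L = 10 m
M_mean = midspan_moment(1000.0, 10.0)   # 2500 N*m
```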

the monte carlo simulation

Similarly to the previous example, a loop is implemented that compares
the stress in the beam with the beam resistance. Each time the stress
exceeds the resistance, a counter increases by one. That way, the probability
of failure can be computed by dividing the number of failed cases by the total number of
Monte Carlo drawings.
We started with 10 simulations and increased up to N = 10^6 simulations, the
limit of a reasonable computation time on the Virtual Box.
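The loop can be sketched without Akantu by combining the mid-span moment formula with the probabilistic parameters of the model. The section modulus W below is a hypothetical value chosen purely for illustration (the report does not give the beam section), so the resulting p_f is indicative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**5           # drawings (the report goes up to 10^6)
L = 10.0            # beam span [m]
W = 1.06e-6         # section modulus [m^3] -- hypothetical, for illustration

F = rng.normal(1000.0, 10.0, N)       # central load [N], as in the model
M = F * L / 4.0                       # mid-span bending moment [N*m]
sigma = M / W / 1.0e6                 # bending stress [MPa]
R = rng.normal(2430.0, 2.4, N)        # resistance [MPa], as in the model

pf = np.count_nonzero(sigma > R) / N  # fraction of failed drawings
```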


results

The results of the simulations are shown in figures 12, 13 and 14. The evolution of
the sample distribution can be clearly seen in these figures.



(a) N=10^1, p_f = 0

(b) N=10^2, p_f = 0

Figure 12.: Outputs for N=10^1 and N=10^2

(a) N=10^3, p_f = 9.0e-3

(b) N=10^4, p_f = 7.2e-4

Figure 13.: Outputs for N=10^3 and N=10^4

(a) N=10^5, p_f = 9.0e-4

(b) N=10^6, p_f = 9.89e-4

Figure 14.: Outputs for N=10^5 and N=10^6



The convergence rate analysis of this example is shown in figure 15. The
convergence is not as smooth as for the example of the steel bar in tension,
but it nevertheless converges to a value of 9.89e-4.

Figure 15.: Convergence rate analysis

The failure probability of this beam for the studied load case is estimated to be
p_f = 9.89e-4.
This example was a good introduction to Akantu and a key element for the
implementation of the dam case in the next chapter.





Even if non-sampling-based methods such as polynomial chaos are more precise
than the sample-based Monte Carlo method, the Monte Carlo method presents
a clear advantage when it comes to complex geometries and non-linear systems.
Since the overarching goal of this project is to estimate the failure probability of
a dam, Monte Carlo is the most suitable method, being able to face the challenges related to the complexity of the problem.
Having used Akantu via the Python interface in a basic example to estimate
the failure probability of a structure, we can now apply the knowledge
acquired so far to study a complex structure. For that purpose, the
structure of a dam has been chosen for this semester project. The
idea is to create the mesh of a dam embedded in rock, define key parameters
which follow a given probabilistic law and finally apply the Monte Carlo theory
in order to compute the failure probability of the structure.
In a loop over the Monte Carlo drawings, each drawing compares the maximum displacement computed via Akantu with the allowable displacement, the maximum tensile stress in the concrete with the tensile resistance, and the maximum compressive stress with the compressive resistance of the concrete.
For simplification purposes, it has been assumed that the dam fails when the
stress in at least one element exceeds the resistance of the concrete. We are aware
that this does not fully correspond to reality, since a local failure
of the concrete would not necessarily imply a total failure of the dam. However, this
simplification allows a quick assessment of the reliability of the dam and gives a
good approximation of the failure probability of the structure.




considerations about the model

The mesh of the dam set up for the simulation is taken from another project of an
arch dam in Canton Bern, Switzerland. The dam does not exist yet; its geometrical properties have been taken from another semester project at the Laboratoire
de Constructions Hydrauliques (LCH) at EPFL.
The dam is modelled as a monolithic section, without joints. The connection of
the dam with the foundation and the rock is assumed to be rigid.
5.2.1 The methodology and principles implemented in the code
The analysis of the dam was done using Akantu through the Python interface.
Akantu was used for static solid mechanics analysis.
For each Monte Carlo simulation, the principal stresses in each element of the
dam are computed via a function that directly computes the eigenvectors and eigenvalues of the stress tensor. Then, the maximum principal stress over all elements is compared to the tensile resistance of concrete in order to determine whether or not
failure occurs. The same procedure is implemented for the compressive stress.
A counter is set up in the code and increases each time the tensile stress in
one element exceeds the tensile resistance of the concrete or the compressive stress in one
element exceeds its compressive resistance. Finally, the total number
of failure cases is divided by the number of Monte Carlo drawings in order to
determine the probability of failure.
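The principal stresses of a symmetric stress tensor are its eigenvalues, so the per-element check described above can be sketched with NumPy (the example tensor is an assumed uniaxial state, for illustration):

```python
import numpy as np

def principal_stresses(sigma):
    """Principal stresses of a symmetric 3x3 stress tensor,
    i.e. its eigenvalues, sorted in ascending order."""
    return np.sort(np.linalg.eigvalsh(sigma))

# Example: uniaxial tension of 2 MPa
s = np.diag([2.0e6, 0.0, 0.0])          # stress tensor [Pa]
p = principal_stresses(s)
max_tension = p[-1]     # compared with the tensile resistance
max_compression = p[0]  # compared with the compressive resistance
```

In the Monte Carlo loop, the maxima of these quantities over all elements are what the failure counter is based on.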
The displacement of the midpoint of the dam is computed for each Monte
Carlo drawing and compared to a maximum allowable displacement, an arbitrary value fixed by the engineer. Even if the displacement is not relevant in
terms of structural safety (tensile and compressive stresses determine whether the structure fails or not), it is interesting to study the evolution
of the displacement over the Monte Carlo drawings.
5.2.2 Setting up the mesh
Our aim was to model a real example, with a complex 3D geometry due to a
double curvature of the arch and the presence of the cantilever, as well as the
asymmetry of the valley flanks.



This complex geometry was approximated with 8 2D arcs of constant depth
and circular shape. In reality, elliptical arcs or logarithmic spiral arcs can
be chosen to optimize the incidence angle with the foundation. To model the
connection between the dam and the bedrock, a rough geometry of the valley has
been added to the existing model. The mesh was generated with the open source
software GMSH [14].
The 3D mesh uses tetrahedral elements. First and second order computations
have been done (4 and 10 quadrature points respectively).
The aspect of the mesh in GMSH can be seen in figure 16.

Figure 16.: Aspect of the mesh in GMSH

We set two different mesh sizes: a fine one for the dam and a coarse one for the
bedrock. In order to define the optimal mesh size, a sensitivity analysis has been
done: by decreasing the mesh size, we analysed the evolution of the maximal
displacement and of the computation time required for each simulation.
First, this had to be done for the dam alone in order to find the optimal mesh
size of the dam. The results of the sensitivity analysis are shown in figures 17 and 18.



The loads applied on the structure are the dead load of the dam and the hydrostatic pressure. It is interesting to see that the relation between computation time
and the number of nodes on the mesh is almost linear.

Figure 17.: Influence of dam mesh size in m (on the x axis) on maximal displacement in m (on the y axis)

Figure 18.: Influence of dam mesh size in m (on x axis) on computation time in s
(on y axis)
Then, fixing the mesh size of the dam, we varied the mesh size of the bedrock
and performed the same calculations. The final mesh sizes have been chosen as a compromise between execution time and precision. The optimal mesh sizes
adopted for the Monte Carlo simulation are h_dam = 4 m for the dam and
h_rock = 40 m for the bedrock.

Figure 19.: Influence of bedrock mesh size in m (on the x axis) on maximal displacement in m (on the y axis)

Figure 20.: Influence of bedrock mesh size in m (on the x axis) on computation
time in s (on the y axis) and the number of nodes
It is important to note that this analysis has been done by considering only the
displacement. For a real-life application it would be essential to perform the same
analysis considering the evolution of the stresses as well. In fact, the thickness of
the dam is in the range of 10 m; with a mesh size of 4 m, looking at the cross-section,
we realize that the mesh is rather coarse and the stresses caused by the bending
moment in the cantilever may not be accurate (cf. figure 21).
Another issue occurred at the contact between the dam and the foundation. In
our simplified geometry there is a sharp edge at the dam-foundation interface,
which causes a local stress concentration induced by the model, but not necessarily
occurring in reality. Therefore, our model is accurate for computing displacements,
but not for computing stresses in the vicinity of the foundation and at the lateral
interface between the dam and the bedrock, as can be seen in figure 21.
In order to avoid these stress concentrations at the interfaces between bedrock
and concrete dam, an additional physical volume has been created, which includes only the central part of the dam. In the Monte Carlo loop, the structural
analysis and stress comparison were only done in this inner region.
Considerations for future work: for a better estimation of the stresses over the
entire domain and for a more accurate model, the mesh should be revisited,
sharp edges should be avoided and a finer mesh should be chosen.

Figure 21.: Stress Concentration at the interface dam - foundation

5.2.3 The materials: rock and concrete
The materials are assumed to behave in a linearly elastic way and are described
in the model by their Young's modulus E, Poisson's ratio ν and density ρ. The numerical values are defined in chapter 5.3.



The density of the rock is set to 0 since we are only interested in the stresses
and displacements in the concrete and we are not performing a dynamic analysis
where the rock mass would have an influence. Furthermore, we do not want to take
into account the deformation due to the weight of the rock mass and can therefore
neglect the dead load of the rock for the purposes of our project.
5.2.4 The loads
The loads considered are the dead load of the dam and the hydrostatic pressure
due to the water, considering a full lake (worst case scenario in terms of magnitude
of hydrostatic pressure).
The dead load of the dam has been applied through a force vector at the
centre of every element. The hydrostatic pressure is applied on the dam at
the upstream face and is applied perpendicular to the surface via the function
model.applyHydrostaticPressure defined in Akantu and used through the
Python interface.
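The hydrostatic pressure applied by Akantu's routine follows the elementary relation p = ρ_w g h, with the depth h measured from the free surface; a minimal sketch:

```python
RHO_W = 1000.0   # density of water [kg/m^3]
G = 9.81         # gravitational acceleration [m/s^2]

def hydrostatic_pressure(depth):
    """Hydrostatic pressure at a given depth below the free surface,
    acting perpendicular to the upstream face."""
    return RHO_W * G * depth

p_50 = hydrostatic_pressure(50.0)   # 490.5 kPa at 50 m depth
```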




the monte carlo simulations

5.3.1 Risk scenarios

The considered risk scenario is the lake entirely filled with water. We did not
consider the scenario of the empty lake, nor any pressure induced by the deposition of sediments or by the formation of ice. The effect of
temperature was not taken into account.
The height of the water is fixed at 1720 m (the altitude of the crown of the dam)
and does not vary between simulations, since it is the upper bound for
the water height and the worst possible case for the considered scenario.
5.3.2 Failure criteria: deterministic values
A first criterion is necessary for the maximal allowable tension in the concrete.
From [9], we fixed this value to a 2 MPa resistance. For each Monte Carlo simulation, the maximum tensile stress in the concrete will be compared to this 2 MPa
resistance, and each exceedance will be counted as a failure case.
The second criterion is necessary for the maximal compression in the concrete.
Again, from [9], we defined this value as 8 MPa.
These two criteria are the key criteria for structural resistance and will be used
in order to evaluate structural reliability.
Since we are not dealing with an extreme scenario (ultimate failure scenario),
we chose to impose a more conservative characteristic resistance than the one suggested by Prof. Schleiss in [9]. This is also related to the fact that, for the given
risk scenario, we want no damage at all to occur and assume that an exceeded
resistance value in one single element corresponds to the failure of the structure
(conservative approach). The tensile resistance of concrete was fixed at around
0.3 MPa (one failure occurred for this tensile resistance). The compressive strength
was chosen to be 6 MPa.
Furthermore, as part of this semester project, we intended to show the evolution
of the failure probability with the sample size. Due to the limited
capacity of the Virtual Box and thus the limited number of possible simulations, it
was necessary to lower the resistance (compared to the resistances defined by [9])
in order to assure at least one failure case; otherwise no failure at all would have
occurred, hence no failure probability could have been computed, which would
be of no interest in the context of this project.

Table 3.: Parameters describing the probabilistic distributions
5.3.3 Material parameters used and description of the nature of these parameters
Two materials have been defined in the model: concrete and bedrock.
In this project, only the parameters related to the concrete of the dam are assessed with a probabilistic approach. We chose the Young's modulus E and the density ρ
of the dam to follow a normal law, which corresponds well to reality since
the concrete quality in the dam is highly variable.
5.3.4 Sample size
The sample size required to estimate a failure probability of the order of magnitude of 10^-6 (which corresponds to the estimated failure probability for an important structure and is equivalent to β = 4.7 according to the SIA standard) would
be N = 10^8. Because of the limited computation time it was not
possible to produce such a big sample. The number of computations achieved
was N = 340000.
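The shortfall can be quantified: with p_f = Φ(-β) relating the reliability index to the target probability, the chance of observing no failure at all in N = 340000 drawings is still high when the true probability is of the order of 10^-6. A quick check:

```python
from scipy import stats

beta = 4.7
pf_target = stats.norm.cdf(-beta)   # ~1.3e-6, target for an important structure

# Chance of seeing zero failures in N drawings if the true pf were 1e-6:
N = 340_000
p_zero = (1.0 - 1e-6) ** N          # ~0.71 -- the sample is clearly too small
```

In other words, roughly seven runs out of ten of this size would report p_f = 0 even for a structure exactly at the target reliability level.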


results

In the following paragraphs the results of the Monte Carlo simulation will be
discussed. First the distributions of the maximal displacements and stresses
will be presented; later we will study the evolution of the failure probability with
increasing sample size.



5.4.1 Distributions
Figure 22 shows the distribution of the maximal displacement in the central part
of the dam. The average displacement is d = 1.07 mm and its standard deviation
is 0.104 mm. The average value is just above the arbitrary limit we imposed,
d_max = 1 mm. N.B.: this arbitrary value has been chosen in order to obtain an interesting
example and does not correspond to displacement boundary values in reality.

Figure 22.: Distribution of the maximal displacement

The distribution of the maximal compression is shown in figure 23. It has a
shape similar to the distribution of the maximal displacements. The average maximal compressive stress is σ_c = 2.95 MPa, below the compressive resistance of the concrete σ_cr = 7 MPa; the standard deviation is 0.0107 MPa.



Figure 23.: Distribution of the maximal compression

Figure 24 shows the distribution of the maximal tensile stress present in the
structure. The mean value is σ_t = 0.29 MPa with a standard deviation of 0.00025
MPa. This value is of great importance because the tensile strength of concrete
is very low and is in most cases the weakest link in a failure scenario. Since there
is no reinforcement in an arch dam, the structure is supposed to be solely under
compressive stress and tensile stresses should be avoided entirely. The geometry of this dam should be optimized so that there is compression everywhere in
the structure and no significant tensile stress appears in the concrete.
An interesting feature is the very sharp lower boundary of the distribution. This
behaviour is unexpected and physically unrealistic. Further study should be done
to assess the origin of this phenomenon, whether it is coherent with the real behaviour
of the structure or caused by a computational error or by a singularity in the model.



Figure 24.: Distribution of the maximal tension

5.4.2 Interdependences
It is relevant to study which input parameter mostly affects the behaviour of the
structure. In a qualitative way we can say that the smaller the E moduli of both
dam and bedrock, the greater the maximal displacement. On the other hand the
influence on the stresses is difficult to predict, we could imagine that a lower E
modulus of the bedrock would cause lower stresses in the structures, due to the
hyperstaticity. This hypothesis will be verified in the following paragraph.
Figure 25 clearly shows how the maximal displacement directly depends on
the E modulus of the rock, while for the other parameter no clear dependency is
noticed and the points are scattered around their average value.
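Such a dependency screen can be made quantitative with a correlation coefficient. The sketch below uses synthetic data assumed to mimic the observed trend (displacement roughly inversely proportional to E_rock), not the project's actual samples:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Synthetic data (assumed): displacement inversely proportional to E_rock,
# plus some measurement-like noise
E_rock = rng.normal(25.0e9, 2.0e9, n)                            # [Pa]
disp = 1.0e-3 * (25.0e9 / E_rock) + rng.normal(0.0, 1.0e-5, n)   # [m]

# Pearson correlation as a quick screen for parameter influence
r = np.corrcoef(E_rock, disp)[0, 1]   # strongly negative
```

A coefficient close to -1 confirms the visual impression of figure 25, while a coefficient near 0 (as for the density) indicates no clear dependency.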



(a) Maximal displacement vs Edam

(b) Maximal displacement vs Erock

(c) Maximal displacement vs ρ

Figure 25.: Parameters influencing the maximal displacement


Figure 26 shows the distribution of the maximal tensile stress as a function of
the input parameters. The tensile stress is reduced when the E modulus of the
rock is lower. It can be seen that there is a small increase in tensile stress when
the E modulus of the dam decreases. No clear interdependence can be seen with
respect to the density of the concrete.
Figure 27 shows the plots of the maximal compressive stress as a function of
the different input parameters. Again we see that the E modulus of the rock has
the greatest influence on the values of the stress compared to the other parameters.
This time, a clear increase in stresses appears when the stiffness of the foundation
is decreased. The opposite trend is seen for the E modulus of the dam, but with
a less clear pattern. Again, no clear relation between density and stresses can be
deduced from this analysis.
For all the considered parameters, we notice a strong relation with the Young's
modulus of the bedrock. This is reasonable, because an arch dam is a hyperstatic
structure and therefore the conditions at the connection with the foundation are
of fundamental importance for the behaviour of the structure and thus for the
assessment of its safety.



(a) Maximal compressive stress vs Edam

(b) Maximal compressive stress vs Erock

(c) Maximal compressive stress vs ρ
Figure 27.: Parameters influencing the maximal compressive stress


(a) Maximal tensile stress vs Edam

(b) Maximal tensile stress vs Erock

(c) Maximal tensile stress vs ρ

Figure 26.: Parameters influencing the maximal tensile stress


5.4.3 Convergence of the failure probability

Using the data obtained with the simulations, we were able to assess the failure
probability of our structure, setting as criteria a tensile strength of 0.3
MPa and a compressive strength of 7 MPa.
The value of the tensile strength has been chosen for didactic reasons, in order
to have at least one failure. Theoretically, we could also have imposed no tensile
resistance at all, but then the failure probability would have been p_f = 1 (there
would certainly be at least one element failing the criterion). Moreover, due to
the stress redistribution after a crack opening, imposing zero tensile resistance on the
concrete would not be accurate.
Figure 28 shows the evolution of the failure probability with increasing sample
size. The failure probability p_f = 2.94 x 10^-6 is not accurately estimated, because
just one sample failed. A larger sample size would be necessary, as discussed in
chapter 5.3.4.

Figure 28.: Evolution of failure probability with sample size: Stress criterion



Figure 29.: Evolution of failure probability with sample size: Displacement criterion
To illustrate the evolution of the failure probability with increasing sample size,
we decided to introduce a fictive failure criterion by setting an upper limit on the
displacement capacity of the structure, d_max = 0.001 m. This criterion is fictive in
the sense that the displacement will not govern the structural safety of the dam. Figure 29
shows very clearly that p_f slowly converges to its asymptote and estimates the
failure probability of the dam to be p_f = 0.264.
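The convergence behaviour seen in figure 29 can be reproduced with a running estimate over Bernoulli failure indicators. The true probability below is assumed equal to the final estimate of the displacement criterion, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**5
PF_TRUE = 0.264   # assumed true failure probability (displacement criterion)

# Bernoulli failure indicators standing in for the per-drawing comparisons
fails = rng.random(N) < PF_TRUE
# Running estimate of pf after 1, 2, ..., N drawings
running_pf = np.cumsum(fails) / np.arange(1, N + 1)
```

Plotting `running_pf` against the drawing index gives exactly the slow approach to the asymptote observed in the figure.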



Safety coefficients: A safety coefficient does not have an intrinsic meaning; it
has to be associated with the concept of characteristic value to make any
sense. Reliability theory predicts a close relation between these two concepts, which form the basis of deterministic design while having a probabilistic
background. Improvements can result from reliability theory, for example by rewarding a better production quality with a decrease in the partial
coefficient associated with the concerned variable [7]. It has also been shown
that it is important to know the concepts behind the definition of the safety
coefficients in order to be able to use them appropriately and adjust them in
the case of existing structures.
To conclude, it is important to keep in mind that a safety coefficient is only
a number which, associated with data selection, a failure scenario
and a rule, generally results in a satisfactory design. But this coefficient does
not measure safety, and all the ignorance and uncertainties of the engineers
are masked behind it. Safety can only be evaluated with a good
knowledge of these coefficients and a sensitivity analysis of the parameters
of the model.
"The most universally adopted method for quantifying reliability depends
on the computation of probabilities" [7]. Even if the probabilistic method
allows to accurately quantify the failure probability of a structure and gives
an estimation of how close the structure is to failure, this probability p_f
is still just a number. For a structure that has a failure probability
of 10^-6, it theoretically means that if one were to build 1000000 houses with
the same materials and the same workers, one of the houses should fail due to
a particular event.
This absurd example shows that there is an undeniable gap between the
theory of reliability of buildings (as powerful as it will ever be) and the real
state of a structure, which will never be fully quantifiable with engineering tools.
What we have learnt from this project: The biggest challenge of this project
probably lies in the use of the numerical tools and the implementation of
the code. Learning new concepts in Python programming and learning
how to use Akantu through the Python interface was a very interesting and
challenging part. Throughout the semester, we also faced the limitations of
computers and programming (e.g. divergent results because of inappropriately defined interface conditions) and became aware of the engineer's responsibility: design problems require a good understanding on the part of the
engineer in charge of the design project. Structural safety and reliability have
to be guaranteed to the building user, and these concepts need to be well
defined and understood by the civil engineer. Numerical tools may help to
solve the technical part, but the fundamental understanding lies in the hands
of the engineer. In this perspective, immersing ourselves in the probabilistic
approach to civil engineering design made us aware of the challenges hidden behind structural safety and reliability. We discovered a new approach
from a theoretical point of view by reading through research papers, before
applying this knowledge to estimate the failure probability of a real structure.
What made this project interesting was its progression: starting from a simple steel bar, we were finally able to implement a code and run simulations
for a dam.
Considerations for future work on the topic: For more accuracy and a more
efficient computation of the probability of failure, the approach proposed by
the course [8] would be a good starting point for future improvements. The
methodology proposes a more specific analysis (more samples, in the
context of Monte Carlo) in the zone of interest. For our dam
structure, this would mean increasing the number of samples in the failure
zones (middle of the dam) and decreasing the number of sample drawings
in the irrelevant zones.
Suggestions to improve our work: To simplify the problem, we made the
assumption that the Young's modulus E of the rock is constant throughout the
bedrock mass. However, this is not strictly correct geotechnically speaking.
In reality, E is higher at depth than at the outer surface of
the rock (because of squeezing and confining effects). For more accuracy in
our model, this E modulus should be appropriately defined across the mesh.
Another interesting study for future work would be the assessment of the
safety of an existing structure under a rare event, e.g. seismic loading.



We want to address a special thank you to our teaching assistant Fabian
Barras for the great help, support and patience granted throughout the semester.



[1] Eugen Brühwiler. Polycopié du cours Sécurité et Fiabilité. EPFL-ENAC-MCS, September 2012.
[2] Eugen Brühwiler. Polycopié du cours Structures existantes: Examen et interventions, notions de base. EPFL-ENAC-MCS, September 2014.
[3] Eugen Brühwiler. Polycopié du cours Structures existantes: Examen et interventions, chapitres choisis. EPFL-ENAC-MCS, February 2015.
[4] Laurent Vulliet. Polycopié du cours Fiabilité et sécurité des systèmes civils, Partie 1. EPFL-ENAC-LMS, June 1997.
[5] Wikipedia:
[6] Maurice Lemaire. Approche probabiliste de dimensionnement et modélisation de l'incertain et méthode de Monte Carlo. Reference BM5003, published 10 April 2014.
[7] Maurice Lemaire and Alaa Chateauneuf. Structural Reliability. London: ISTE.
[8] B. Sudret. Slides from the lecture Structural reliability and risk analysis, Simulation methods. ETH, Department of Civil, Environmental and Geomatic Engineering, Chair of Risk, Safety and Uncertainty Quantification, November 2014.
[9] Anton J. Schleiss and Henri Pougatsch. Les barrages. Traité de Génie Civil, volume 17. Presses Polytechniques et Universitaires Romandes, 2011.
[10] Scientific modules for
[11] NumPy for MatLab users:
[12] MOOCs for Python:
[13] ParaView:
[14] http://