20.1 Introduction
For many years it was assumed in the design of structural systems that all loads and
strengths are deterministic. The strength of an element was determined in such a way
that it exceeded the load with a certain margin. The ratio between the strength and the
load was denoted the safety factor, and this number was considered a measure of the
reliability of the structure. Codes of practice for structural systems prescribe values for
loads, strengths and safety factors. These values have traditionally been determined
on the basis of experience and engineering judgment. In newer codes, however, partial
safety factors are used: characteristic values of the uncertain loads and resistances are
specified, and partial safety factors are applied to the loads and strengths in order to
ensure that the structure is safe enough. The partial safety factors are usually based on
experience, or calibrated to existing codes or to measures of the reliability obtained by
probabilistic techniques.
As described above, structural analysis and design have traditionally been based on
deterministic methods. However, uncertainties in the loads, the strengths and the
modeling of the systems mean that in a number of situations methods based on
probabilistic techniques have to be used. A structure is usually required to have a
satisfactory performance over its expected lifetime, i.e. it is required that it does not
collapse or become unsafe and that it fulfills certain functional requirements.
Generally, structural systems have a rather small probability that they do not function
as intended; see table 20.1.
rrastogi@apsara.barc.ernet.in
20.1
Lecture Notes on SIAM
BARC Advanced Elective Course, May-July, 2004
The reliability of structural systems can be defined as the probability that the structure
under consideration has a proper performance throughout its lifetime. Reliability
methods are used to estimate the probability of failure. The information in the models
on which the reliability analyses are based is generally not complete. Therefore the
estimated reliability should be considered a nominal measure of the reliability and
not an absolute number. However, if the reliability is estimated for a number of
structures using the same level of information and the same mathematical models,
then useful comparisons can be made of the reliability level of these structures.
Further, the design of new structures can be performed by probabilistic methods if
models and information similar to those for existing structures, which are known to
perform satisfactorily, are used. If probabilistic methods are used to design structures
for which no similar existing structures are known, then the designer has to be very
careful and verify the models used as much as possible.
The fundamental quantities that characterize the behavior of a structure are called the
basic variables and are denoted X = (X1, X2, X3, …, Xn), where n is the number of
basic stochastic variables. Typical examples of basic variables are loads, strengths,
dimensions and materials. The basic variables can be dependent or independent; see
below, where different types of uncertainty are discussed. In probabilistic analysis
these stochastic variables are represented by probability distribution functions, which
need to be derived for each variable from field data.
The above types of uncertainty are usually treated by the reliability methods, which
will be described in the following sections. Another type of uncertainty, which is not
covered by these methods, is gross errors or human errors. These types of errors can
be defined as deviation of an event or process from acceptable engineering practice.
Level I methods: The uncertain parameters are modeled by one characteristic value, as
for example in codes based on the partial safety factor concept.
Level II methods: The uncertain parameters are modeled by the mean values and the
standard deviations, and by the correlation coefficients between the stochastic
variables. The stochastic variables are implicitly assumed to be normally distributed.
The reliability index method is an example of a level II method.
Level III methods: The uncertain quantities are modeled by their joint distribution
functions. The probability of failure is estimated as a measure of the reliability.
Level IV methods: In these methods the consequences (cost) of failure are also taken
into account and the risk (consequence multiplied by the probability of failure) is used
as a measure of the reliability. In this way different designs can be compared on an
economic basis taking into account uncertainty, costs and benefits.
Level I methods can e.g. be calibrated using level II methods, level II methods can be
calibrated using level III methods, etc.
Level II and III reliability methods are considered in these lectures. Several
techniques can be used to estimate the reliability for level II and III methods, e.g.
Simulation techniques: Samples of the stochastic variables are generated, and the
relative number of samples corresponding to failure is used to estimate the probability
of failure. The simulation techniques differ in the way the samples are generated.
FORM techniques: In First Order Reliability Methods the limit state function (failure
function) is linearized and the reliability is estimated using level II or III methods.
f_X(x) = dF_X(x)/dx    (20.1)
σ_X² = ∫_{−∞}^{∞} (x − µ_X)² f_X(x) dx    (20.3)

V_X = σ_X / µ_X    (20.4)
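Equations (20.3) and (20.4) can be checked numerically. The sketch below uses an assumed exponential density, chosen purely for illustration because its exact moments are known (µ_X = σ_X = 1/λ, so V_X = 1), and a simple midpoint-rule quadrature.

```python
import math

# Numerical check of eqs. (20.3)-(20.4) for an assumed example density:
# the exponential distribution f_X(x) = lam*exp(-lam*x), for which the
# exact results are mu_X = 1/lam, sigma_X = 1/lam and hence V_X = 1.
lam = 2.0
f = lambda x: lam * math.exp(-lam * x)

# Midpoint-rule quadrature on [0, 40/lam]; the tail beyond is negligible.
dx = 1e-4
xs = [(i + 0.5) * dx for i in range(int(40 / lam / dx))]

mu = sum(x * f(x) * dx for x in xs)                # mean value
var = sum((x - mu) ** 2 * f(x) * dx for x in xs)   # eq. (20.3)
V = math.sqrt(var) / mu                            # eq. (20.4)

print(mu, math.sqrt(var), V)   # both moments ~0.5, V ~1.0
```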
It is noted that it is important that the lower tail of the distribution for the strength
and the upper tail of the distribution for the load are modeled as accurately as possible.
ρ_{X1,X2} = Cov[X1, X2] / (σ_{X1} σ_{X2}),    −1 ≤ ρ_{X1,X2} ≤ 1    (20.9)

The correlation coefficient is a measure of the linear dependence between X1 and X2.
If ρ_{X1,X2} = 0, then X1 and X2 are uncorrelated, but not necessarily statistically
independent. For a stochastic vector X = (X1, X2, …, Xn) the covariance matrix is
defined by

C_X = [ Cov[X_i, X_j] ],  i, j = 1, …, n    (20.10)
      | 1            ρ_{X1,X2}   …   ρ_{X1,Xn} |
ρ_X = | ρ_{X1,X2}    1           …   ρ_{X2,Xn} |    (20.11)
      | ⋮            ⋮           ⋱   ⋮         |
      | ρ_{X1,Xn}    ρ_{X2,Xn}   …   1         |
A structural component fails when the applied load S exceeds its resistance R, i.e.
when R − S < 0. Generally, this condition is expressed in terms of a failure equation or
limit state function g. Let g(X) be a function of the variables X1, X2, X3, …, Xn. Failure
is defined by the condition g(X) < 0, and the variables X are modeled as stochastic
variables in the probabilistic analysis. The failure probability Pf is defined by

Pf = P(g(X) < 0)    (20.12)

i.e. the probability that x lies in the failure domain F,

F = {x | g(x) < 0}    (20.13)

The probability of failure can be written in terms of the density functions fi
of the stochastic variables X.
The first developments of FPI (fast probability integration) methods took place almost
30 years ago. Since then the methods have been refined and extended significantly,
and by now they form one of the most important classes of methods for reliability
evaluation in structural reliability theory. Several commercial computer codes have
been developed for FPI methods [20.2], and the methods are widely used in practical
engineering problems and for code calibration purposes.
Consider the case where the limit state function, g(X) is a linear function of the basic
random variables X . Then we may write the limit state function as
g(X) = a_0 + ∑_{i=1}^{n} a_i X_i    (20.15)

The limit state function g is linear and all X_i are normally distributed. The mean
and the standard deviation of g can then be calculated as:
µ_g = a_0 + ∑_{i=1}^{n} a_i µ_{Xi}

σ_g² = ∑_{i=1}^{n} a_i² σ_{Xi}² + ∑_{i=1}^{n} ∑_{j=1, j≠i}^{n} ρ_{ij} a_i a_j σ_{Xi} σ_{Xj}    (20.16)
The distribution of g will also be normal. The probability of failure, P(g < 0), is
given by:

Pf = P(g < 0) = Φ((0 − µ_g)/σ_g) = Φ(−µ_g/σ_g) = Φ(−β)    (20.17)

where β = µ_g/σ_g is the reliability index.
Example:
Let R → Normal(3.5, 0.25) and S → Normal(2, 0.3), and assume that they are not
correlated, i.e. ρ_RS = 0. Then

µ_g = 3.5 − 2 = 1.5
σ_g = √(0.25² + 0.3²) = 0.3905
β = 1.5/0.3905 = 3.84
Pf = Φ(−3.84) = 6.15×10⁻⁵
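The example above can be reproduced in a few lines, a minimal sketch of eqs. (20.16)-(20.17) for the uncorrelated linear case:

```python
from statistics import NormalDist
import math

# Level II check of the example: g = R - S with R ~ N(3.5, 0.25) and
# S ~ N(2, 0.3), uncorrelated (eqs. (20.16)-(20.17)).
mu_R, sd_R = 3.5, 0.25
mu_S, sd_S = 2.0, 0.30

mu_g = mu_R - mu_S                    # coefficients a_R = 1, a_S = -1
sd_g = math.sqrt(sd_R**2 + sd_S**2)   # rho_RS = 0, no cross terms

beta = mu_g / sd_g                    # reliability index
Pf = NormalDist().cdf(-beta)          # Pf = Phi(-beta)

print(beta, Pf)   # beta ~ 3.84, Pf ~ 6.1e-5
```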
Consider the case where the limit state function g(X) is a non-linear function of the
basic random variables X. The limit state function can be linearized using the
first-order terms of the Taylor series about the mean value:

g(X) ≈ g(µ_X) + ∑_{i=1}^{n} (X_i − µ_{Xi}) ∂g/∂X_i |_{X=µ_X}    (20.18)
The non-linear g now has a form similar to eq. (20.15), and the Cornell reliability
index can be obtained. The Cornell reliability index, however, suffers from a lack of
invariance: the results for g = R − S, g = R² − S² and g = R/S − 1 will be different,
even though they describe the same failure event.
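The lack of invariance can be made concrete with the same R and S as in the previous example. The sketch below linearizes g = R − S and g = R/S − 1 at the mean values, as in eq. (20.18), and computes the Cornell index for each:

```python
import math

# Cornell (mean-value) reliability index for two equivalent limit states,
# using R ~ N(3.5, 0.25), S ~ N(2, 0.3), uncorrelated, linearized at the
# mean as in eq. (20.18).
mu_R, sd_R = 3.5, 0.25
mu_S, sd_S = 2.0, 0.30

# g1 = R - S: dg/dR = 1, dg/dS = -1
beta1 = (mu_R - mu_S) / math.sqrt(sd_R**2 + sd_S**2)

# g2 = R/S - 1: dg/dR = 1/mu_S, dg/dS = -mu_R/mu_S**2 at the mean
mu_g2 = mu_R / mu_S - 1.0
sd_g2 = math.sqrt((sd_R / mu_S)**2 + (mu_R * sd_S / mu_S**2)**2)
beta2 = mu_g2 / sd_g2

print(beta1, beta2)   # ~3.84 vs ~2.58: same failure event, different beta
```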
u_i = (X_i − µ_{Xi}) / σ_{Xi}    (20.19)
such that the random variables u_i have zero means and unit standard deviations.
The reliability index β then has a simple geometrical interpretation: it is the smallest
distance from the origin to the line (or, in general, the hyper-plane) forming the
boundary between the safe domain and the failure domain, i.e. the domain defined by
the failure event. It should be noted that this definition of the reliability index does
not depend on the particular form of the limit state function, but only on the boundary
between the safe domain and the failure domain. The point on the failure surface with
the smallest distance to the origin is commonly denoted the design point.
u_i' = (∂g/∂u_i) · [ ∑_{j=1}^{n} (∂g/∂u_j) u_j − g ] / ∑_{j=1}^{n} (∂g/∂u_j)²    (20.20)

7. β = √( ∑_{i=1}^{n} u_i² )
α_i = (∂g/∂u_i) / √( ∑_{j=1}^{n} (∂g/∂u_j)² )    (20.21)
Example:
Consider a steel rod under pure tension loading. The rod fails if the applied stress on
the rod cross-sectional area exceeds the steel yield stress. The yield stress R of the rod
and the load on the rod S are assumed to be uncertain and are modeled by
uncorrelated, normally distributed variables. The cross-sectional area of the steel rod,
A, is also uncertain. The steel yield stress R is normally distributed with mean value
350 MPa and standard deviation 35 MPa. The load S is normally distributed with
mean value 2 kN and standard deviation 0.4 kN. The cross-sectional area A is also
assumed normally distributed, with mean value 10 mm² and standard deviation 2 mm².
Solution:
g = RA – S
R = µ_R + σ_R u_R
A = µ_A + σ_A u_A
S = µ_S + σ_S u_S

3. Transforming g to the standard normal space and starting the iteration at the mean
point:
u_R = u_A = u_S = 0
∂g/∂u_R = (µ_A + σ_A u_A) σ_R
∂g/∂u_A = (µ_R + σ_R u_R) σ_A
∂g/∂u_S = −σ_S
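The iteration of eq. (20.20) for this example can be sketched as follows. The starting point u = 0 and the convergence tolerance are choices made here, not prescribed by the method, and S is expressed in N so that its units match R·A:

```python
import math

# Hasofer-Lind iteration (eq. (20.20)) for the steel rod: g = R*A - S,
# with R ~ N(350, 35) MPa, A ~ N(10, 2) mm^2, S ~ N(2000, 400) N.
mu = {"R": 350.0, "A": 10.0, "S": 2000.0}
sd = {"R": 35.0, "A": 2.0, "S": 400.0}

u = {"R": 0.0, "A": 0.0, "S": 0.0}   # start at the mean point
beta = 0.0
for _ in range(100):
    # current point in physical space
    R = mu["R"] + sd["R"] * u["R"]
    A = mu["A"] + sd["A"] * u["A"]
    S = mu["S"] + sd["S"] * u["S"]
    g = R * A - S
    # gradients with respect to the standard normal variables
    dg = {"R": A * sd["R"], "A": R * sd["A"], "S": -sd["S"]}
    num = sum(dg[k] * u[k] for k in u) - g
    den = sum(dg[k] ** 2 for k in u)
    u = {k: dg[k] * num / den for k in u}      # eq. (20.20)
    beta_new = math.sqrt(sum(v * v for v in u.values()))
    if abs(beta_new - beta) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print(beta)   # converges to roughly 1.78
```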
A commonly used approach for treating non-normally distributed variables is to
approximate the probability distribution function and the probability density function
of each such variable by an equivalent normal distribution and density function. This
methodology is also referred to as the Rackwitz-Fiessler method.
As the design point is usually located in the tails of the distribution functions of the
basic random variables, the scheme is often referred to as the "normal tail
approximation".
This is done by equating the distribution functions and their derivatives at the design
point x* and solving for the unknowns, the mean and standard deviation of the
equivalent normal distribution, eq. (20.22):

F_X(x*) = Φ((x* − µ′)/σ′)    (20.22)

Differentiating (20.22):

f_X(x*) = (1/σ′) φ((x* − µ′)/σ′)
The equivalent mean and standard deviation are estimated from eq. (20.23).
σ_eq = φ( Φ⁻¹(F_X(x*)) ) / f_X(x*)
µ_eq = x* − σ_eq Φ⁻¹(F_X(x*))    (20.23)

where φ = standard normal density function and Φ = standard normal distribution
function.
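A sketch of eq. (20.23). The lognormal variable and its parameters below are assumed purely for illustration; by construction, the equivalent normal matches both the CDF and the PDF of X at the design point x*:

```python
from statistics import NormalDist
import math

N = NormalDist()  # standard normal: N.cdf, N.pdf, N.inv_cdf

# Normal tail approximation, eq. (20.23), at a design point x*, for an
# assumed lognormal variable X (parameters m, s are illustrative only).
m, s = math.log(1000.0), 0.1
F = lambda x: N.cdf((math.log(x) - m) / s)            # lognormal CDF
f = lambda x: N.pdf((math.log(x) - m) / s) / (x * s)  # lognormal PDF

x_star = 850.0
z = N.inv_cdf(F(x_star))
sigma_eq = N.pdf(z) / f(x_star)          # eq. (20.23), first line
mu_eq = x_star - sigma_eq * z            # eq. (20.23), second line

print(sigma_eq, mu_eq)
# The equivalent normal agrees with X's CDF and PDF at x*:
eq = NormalDist(mu_eq, sigma_eq)
print(eq.cdf(x_star) - F(x_star), eq.pdf(x_star) - f(x_star))  # ~0, ~0
```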
u_i' = (∂g/∂u_i) · [ ∑_{j=1}^{n} (∂g/∂u_j) u_j − g ] / ∑_{j=1}^{n} (∂g/∂u_j)²    (20.24)

8. β = √( ∑_{i=1}^{n} u_i² )
Example:
1. g = R − S
2. Let R = 1000 and S = 700 (mean values), so g = R − S = 300.
3. Calculating the equivalent mean and standard deviation at this point:

σ_Req = φ( Φ⁻¹[0.5703] ) / 0.002053 = 191.295
µ_Req = 1000 − 191.295 · Φ⁻¹[0.5703] = 966.1266

4. Transforming to the standard normal space:

u_R = (1000 − 966.1266)/191.295 = 0.1771
u_S = (700 − 700)/100 = 0
g_u = (µ_Req + u_R σ_Req) − (µ_S + u_S σ_S)

5. Calculating the partial derivatives:

∂g_u/∂u_R = σ_Req = 191.295
∂g_u/∂u_S = −σ_S = −100

6. Applying eq. (20.24):

A = (∂g_u/∂u_R) u_R + (∂g_u/∂u_S) u_S − g = 191.295 × 0.1771 + (−100)(0) − 300 = −266.1266
B = (∂g_u/∂u_R)² + (∂g_u/∂u_S)² = (191.295)² + (−100)² = 46593.7673
A/B = −0.0057
u_R' = 191.295 × (−0.0057) = −1.0926
u_S' = −100 × (−0.0057) = 0.57
7. Calculating β:

β = √( (−1.0926)² + (0.57)² ) = 1.232

Next iteration
The new approximation of the design point for R is
x* = µ_Req + u_R' σ_Req = 966.1266 + (−1.0926)(191.295) = 757.1164.

3. F(757.1164) = 0.0696, f(757.1164) = 0.0012

σ_Req = 112.4427
µ_Req = 923.36

4. Transforming to the standard normal space:
u_R = (757.1164 − 923.36)/112.4427 = −1.4785
u_S = 0.57

5. Partial derivatives:
∂g_u/∂u_R = σ_Req = 112.4427
∂g_u/∂u_S = −σ_S = −100
The situation where the basic random variables X are stochastically dependent is often
encountered in practical problems. For normally distributed random variables the joint
probability distribution function may be described in terms of the first two moments,
i.e. the mean value vector and the covariance matrix; this is, however, only the case
for normally (or log-normally) distributed random variables.
Considering normally distributed random variables, these situations may be treated
along the same lines as described in the previous sections. A transformation is
required such that the variables become uncorrelated in the standard normal space.
The dependence between the random variables X is described by the covariance
matrix C_X, defined by eq. (20.10).
{X} = {D} + [Q]{u}    (20.25)

where
{X} = vector of (correlated) random variables
{D} = vector of mean values of the random variables
{u} = vector of random variables in the uncorrelated standard normal space
[Q] = lower triangular matrix obtained from the Cholesky decomposition of C_X
The iteration algorithm of eq. (20.24) is then applied to the transformed variables,
with β = √( ∑_{i=1}^{n} u_i² ) as before.
Example:
Solution:
1. g = R − S
2. Obtain [Q] by Cholesky decomposition of C_X:

C_X = | 200²  300  |
      | 300   100² |

3. Applying the Cholesky decomposition:

| R |   | 1000 |   | 200   0  | | u_R |
|   | = |      | + |          | |     |
| S |   | 700  |   | 1.5  100 | | u_S |

i.e.
R = 1000 + 200 u_R
S = 700 + 1.5 u_R + 100 u_S
4. Transform g(X)→g(u)
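The decomposition used in step 2 can be sketched for the 2×2 case (for larger matrices a library routine would normally be used):

```python
import math

# Cholesky factorization of the 2x2 covariance matrix from the example:
# C_X = [[200**2, 300], [300, 100**2]]  ->  lower-triangular Q.
c11, c12, c22 = 200.0**2, 300.0, 100.0**2

q11 = math.sqrt(c11)            # 200
q21 = c12 / q11                 # 300/200 = 1.5
q22 = math.sqrt(c22 - q21**2)   # sqrt(10000 - 2.25), ~100

print(q11, q21, q22)
# X = D + Q u:  R = 1000 + 200*u_R,  S = 700 + 1.5*u_R + 100*u_S
```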
The set of values of the random variables after the last iteration of any of the above
algorithms is termed the Most Probable Point (MPP) of failure. At this point the value
of ∏_{i=1}^{n} f_i(x_i) is maximum in the failure region, where f_i(x_i) is the value of
the density function of the i-th random variable.
Basics:

I = ∫_V f(x) dⁿx

In practice N quickly becomes far too large for the standard quadrature methods:
already at d = 10 dimensions with n = 10 points per dimension (pretty small),
N = nᵈ = 10¹⁰. Too much!
Good random numbers play a central part in Monte Carlo simulations. Usually they
are generated using a deterministic algorithm; the numbers generated this way are
called pseudorandom numbers. There are several methods for obtaining random
numbers for Monte Carlo simulations.
Physical random numbers are generated from some truly random physical process
(radioactive decay, thermal noise, a roulette wheel, …). Before the computer age,
special machines were built to produce random numbers, which were often published
in books. For example, in 1955 the RAND Corporation published a book with a
million random numbers, obtained using an electric "roulette wheel". This classic
book is now available on the net, at
http://www.rand.org/publications/classics/randomdigits/
Physical random numbers are not very useful for Monte Carlo, however, because the
sequences are not repeatable and the generation rate is far too slow.
Pseudorandom numbers
Almost all Monte Carlo calculations use pseudorandom numbers, which are
generated using deterministic algorithms. Typically the generators produce a random
integer (with a definite number of bits), which is converted to a floating point number
x ∈ [0, 1) by multiplying by a suitable constant.
The generators are initialized once before use with a seed number, typically an integer
value or values. This sets the initial state of the generator.
Repeatability – the same initial values (seeds) produce the same random
number sequence. This can be important for debugging.
Long period – the generators have a finite amount of internal state information,
so the sequence must repeat itself after a finite period. The period should be
much longer than the number of values needed for the calculation (preferably
by a large margin).
Insensitivity to seeds – the period and randomness properties should not depend
on the initial seed.
Fast – generation of the numbers should be computationally cheap.
One of the simplest, oldest and most widely used generators is the linear congruential
generator (LCG). Usually the "standard" generators of languages and libraries are of
this type.

X_{i+1} = (a X_i + c) mod m    (20.26)

This generates integers from 0 to (m − 1) (or from 1 to (m − 1) if c = 0). Real numbers
in [0, 1) are obtained by the division f_i = X_i / m.
Since the state of the generator is specified by the integer X_i, which is smaller than m,
the period of the generator is at most m. The constants a, c and m must be carefully
chosen to ensure this; arbitrary parameters are sure to fail!
A typical generator of this kind is essentially a 32-bit algorithm, with a cycle length
of at most m = 2³¹ ≈ 2×10⁹.
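An LCG of eq. (20.26) can be sketched as follows. The parameters a, c and m below are the classic ANSI C textbook example constants, shown only as one possible choice; many other (better) parameter sets exist:

```python
# Linear congruential generator, eq. (20.26): X_{i+1} = (a*X_i + c) mod m.
# The constants are the classic ANSI C example parameters (32-bit, m = 2^31).
class LCG:
    def __init__(self, seed, a=1103515245, c=12345, m=2**31):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next_uniform(self):
        """Advance the state and return the next float in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

gen = LCG(seed=12345)
sample = [gen.next_uniform() for _ in range(5)]
print(sample)  # the same seed always reproduces the same sequence
```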
Example:

F_X(x) = 0 for x < 0;  x² for 0 ≤ x ≤ 1;  1 for x > 1
F_Y(y) = 0 for y < 0;  y³ for 0 ≤ y ≤ 1;  1 for y > 1

g = X − Y
Estimate the probability of failure using the Monte Carlo method. Use the set of 10
uniform random numbers each for X and Y given in the following table.
Knowing the random number r_x for X, the value of the random variable X can be
found from

X = √r_x

Similarly, knowing the random number r_y for Y, the value of the random variable Y
can be found from

Y = (r_y)^{1/3}
Ten such values are obtained, and each pair is used in the limit state function g. Out
of these, 5 lead to failure (g < 0). Hence the probability of failure is estimated as
5/10 = 0.5.
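A larger run of the same example can be sketched as follows. The 10 tabulated random numbers are replaced here by a seeded pseudorandom stream; with many samples the estimate approaches the exact value, which for the given CDFs is P(X < Y) = E[Y²] = 3/5:

```python
import random

# Monte Carlo estimate of Pf for g = X - Y, with X and Y sampled by the
# inverse transform method: X = sqrt(r_x) (since F_X = x^2) and
# Y = r_y**(1/3) (since F_Y = y^3). Exact result: P(X < Y) = 0.6.
random.seed(42)  # fixed seed so the run is repeatable
N = 100_000

fail = 0
for _ in range(N):
    x = random.random() ** 0.5        # inverse of F_X(x) = x^2
    y = random.random() ** (1.0 / 3)  # inverse of F_Y(y) = y^3
    if x - y < 0:                     # limit state g < 0 -> failure
        fail += 1

Pf = fail / N
print(Pf)  # close to 0.6
```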
Box-Muller method for generating normal random variables:

u₁ = √(−2 ln v₁) cos(2π v₂)
u₂ = √(−2 ln v₁) sin(2π v₂)    (20.27)

where v₁, v₂ are independent uniform random numbers and u₁, u₂ are independent
standard normal random variables.
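A sketch of eq. (20.27); the uniform v₁ is mapped to (0, 1] to avoid log(0):

```python
import math
import random

# Box-Muller transform, eq. (20.27): two independent uniforms give two
# independent standard normal variates.
def box_muller(v1, v2):
    r = math.sqrt(-2.0 * math.log(v1))
    return r * math.cos(2 * math.pi * v2), r * math.sin(2 * math.pi * v2)

random.seed(1)
samples = []
for _ in range(50_000):
    v1 = 1.0 - random.random()   # in (0, 1]; avoids log(0)
    v2 = random.random()
    u1, u2 = box_muller(v1, v2)
    samples.append(u1)
    samples.append(u2)

# Sample moments should be close to the standard normal values 0 and 1.
n = len(samples)
mean = sum(samples) / n
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
print(mean, std)
```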
The error in a Monte Carlo simulation is quantified by the coefficient of variation of
the result. It is given by

cov = √( (1 − P_f) / (N P_f) )    (20.28)

where P_f is the probability of failure estimated from N simulations.
In the simulation loop, each sample for which g < 0 increments the failure counter:
Fail = Fail + 1.
After all simulations (here N = 50), the probability of failure is estimated as
Pf = Fail / 50.
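Eq. (20.28) shows directly how many simulations a small failure probability demands; the numbers below are illustrative only:

```python
import math

# Sampling error of a crude Monte Carlo Pf estimate, eq. (20.28).
# Illustrative numbers: Pf = 1e-3 estimated from N = 1e6 simulations.
Pf, N = 1.0e-3, 1_000_000
cov = math.sqrt((1.0 - Pf) / (N * Pf))
print(cov)  # about 0.032, i.e. a ~3% coefficient of variation
```

For small P_f the product N·P_f governs the error, so N must be much larger than 1/P_f to obtain a usable estimate.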
Probabilistic methodology is being increasingly used in different industries all over the
world to prepare optimized in-service inspection (ISI) plans. Since inspection in most
cases involves shutdown of the plant, it is imperative to inspect as little as possible
without compromising safety. The need is to address questions such as which
locations to inspect, when to inspect them and by what technique.
Traditionally these questions have been addressed mostly on the basis of operator
experience and, in some cases, deterministic methods.
The stress analysis of piping components identifies the locations that see the maximum
stresses. The welds are in general expected to be the initiation sites for defects.
Combinations of these, together with the fatigue usage factors of the different
components, typically help in selecting the vulnerable sites. There is, however,
considerable scatter in the parameters on which these decisions are based. The
material properties, especially the fatigue and toughness data, have considerable
uncertainty associated with them. The fatigue loads coming onto the plant cannot be
predicted with certainty either, nor can the fatigue crack growth in service. Moreover,
the NDE methods used for ISI do not detect defects with complete confidence.
1. Select locations of interest on the piping system using the traditional method
of high stress/inferior material properties and sites for possible degradation
with service life.
2. Formulate the time-variant damage models for each such site (fatigue, creep,
corrosion, erosion, irradiation, etc.).
4. A crack is assumed at each of these locations. The size of the crack can be
estimated from the crack initiation models.
5. There will be a probability of detecting these initiated cracks during the pre-
service inspection (PSI). This probability is generally given as probability of
detection (POD) curves, which are well documented in the literature. A typical
POD curve is shown below.
6. If a proof test of the piping system is performed, the cracks which would fail
during this test can be censored, as they will not enter service. The failure can
be determined using fracture mechanics methods such as J-tearing, R6,
SINTAP etc., depending on the failure mechanism expected.
7. The damage mechanisms are now applied. In cases like fatigue, the arrival
times of the cyclic loads (transients) also need to be modeled as stochastic
variables. With time, the condition of the structure deteriorates as the damage
mechanisms act. This can be increasing flaw depth (fatigue, corrosion), wall
thinning (erosion) or degradation of material properties (irradiation), etc. These
need to be calculated as a function of time using the appropriate damage
models, and the failure of the component needs to be evaluated with respect to
time using appropriate failure equations. A curve of probability of failure (Pf)
versus time can then be obtained for each site of interest. This value includes
the effect of successive inspections.
[Figure: Pf (log scale) versus time]
A target value Pf-Target can be decided. The inspections can then be scheduled such
that Pf always remains below Pf-Target, as shown in the figure below. T1, T2 and T3
in this case can be taken as the inspection intervals.
[Figure: Pf (log scale) versus time, with inspections at T1, T2 and T3 keeping Pf
below Pf-Target]
This analysis is to be performed for all the locations identified earlier. From this
group, a selection of the more vulnerable ones can then be made on the basis of the
Pf curves.
Consider a resistance R and loads L1, L2 and L3. The failure boundary is
R = L1 + L2 + L3 (i.e. g = R − (L1 + L2 + L3) = 0), and the design equation requires

R/(L1 + L2 + L3) = 1.5

The reliability index for a linear limit state function with all variables normally
distributed is given by

β = [ µ_R − (µ_L1 + µ_L2 + µ_L3) ] / √( σ_R² + σ_L1² + σ_L2² + σ_L3² )
Let µ_R = 300. To satisfy the design equation we should then have Σµ_L = 200.
Different combinations of µ_L1, µ_L2 and µ_L3 (and of the corresponding standard
deviations) all satisfy the design equation, yet give different values of β. This shows
that although the design equation is satisfied in each case, a different probability of
failure is obtained for each. This kind of anomaly is addressed by probabilistic
methods, where instead of keeping a single factor of safety we can design for a target
Pf value.
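The anomaly can be made concrete in code. The coefficients of variation below are assumed purely for illustration (the text does not give the standard deviations):

```python
import math

# Two load splits that both satisfy the design equation
# R/(L1+L2+L3) = 1.5 with mu_R = 300 and sum(mu_L) = 200, but give
# different reliability indices. Assumed (illustrative) CoVs:
# 10% on R, 20% on each load.
mu_R, cov_R, cov_L = 300.0, 0.10, 0.20

def beta(loads):
    sd_R = cov_R * mu_R
    sd_L = [cov_L * l for l in loads]
    return (mu_R - sum(loads)) / math.sqrt(sd_R**2 + sum(s**2 for s in sd_L))

b_skewed = beta([180.0, 10.0, 10.0])     # one dominant load
b_even = beta([200/3, 200/3, 200/3])     # evenly split loads

print(b_skewed, b_even)  # different beta (hence different Pf) for the
                         # same overall safety factor of 1.5
```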
This methodology has been used in civil engineering design codes for many years,
and various mechanical design codes are now also incorporating this philosophy.
Example
Table 20.2: Partial Safety Factors of API 579 (2000) for the assessment of crack-like
flaws.
The table above is taken from the API 579 code [20.4] for the safety evaluation of
cracked components. The partial safety factors were generated using R6 as the
failure criterion [20.5].
20.6 References
[20.1] Melchers, R.E.: Structural Reliability: Analysis and Prediction. John Wiley &
Sons, New York, 1987.
[20.5] Bloom, J.M.: "Partial Safety Factors (PSF) and Their Impact on ASME Section
XI, IWB-3610". Presented at the 2000 ASME Pressure Vessels and Piping Conference,
July 23-27, 2000, Seattle, Washington.