
Lecture Notes on SIAM

BARC Advanced Elective Course, May-July, 2004

Chapter 20: Structural Reliability

Author: Rohit Rastogi1, RSD, BARC.

20.1 Introduction

For many years it has been assumed in design of structural systems that all loads and
strengths are deterministic. The strength of an element was determined in such a way
that it exceeded the load with a certain margin. The ratio between the strength and the
load was denoted the safety factor. This number was considered as a measure of the
reliability of the structure. In codes of practice for structural systems values for loads,
strengths and safety factors are prescribed. These values are traditionally determined
on the basis of experience and engineering judgment. However, in new codes partial
safety factors are used. Characteristic values of the uncertain loads and resistances are
specified and partial safety factors are applied to the loads and strengths in order to
ensure that the structure is safe enough. The partial safety factors are usually based on
experience or calibrated to existing codes or to measures of the reliability obtained by
probabilistic techniques.

Table 20.1: Some Risks in Society [20.1]

Activity              Approximate death rate       Typical exposure   Typical risk of death
                      (× 10⁻⁹ deaths/h exposure)   (h/year)           (× 10⁻⁶ per year)
Alpine climbing       30000–40000                  50                 1500–2000
Boating               1500                         80                 120
Swimming              3500                         50                 170
Cigarette smoking     2500                         400                1000
Air travel            1200                         20                 24
Car travel            700                          300                200
Train travel          80                           200                15
Coal mining (UK)      210                          1500               300
Construction work     70–200                       2200               150–440
Manufacturing         20                           2000               40
Building fires        1–3                          8000               8–24
Structural failures   0.02                         6000               0.1

As described above, structural analysis and design have traditionally been based on
deterministic methods. However, uncertainties in the loads, strengths and in the
modeling of the systems require that, in a number of situations, methods based on
probabilistic techniques be used. A structure is usually required to perform
satisfactorily over its expected lifetime, i.e. it is required that it does not
collapse or become unsafe and that it fulfills certain functional requirements.
Generally structural systems have a rather small probability that they do not function
as intended, see table 20.1.

1 rrastogi@apsara.barc.ernet.in


Reliability of structural systems can be defined as the probability that the structure
under consideration performs properly throughout its lifetime. Reliability methods are
used to estimate the probability of failure. The information in the models on which
the reliability analyses are based is generally not complete. Therefore the estimated
reliability should be considered as a nominal measure of the reliability and not as an
absolute number. However, if the reliability is estimated for a number of structures
using the same level of information and the same mathematical models, then useful
comparisons can be made of the reliability level of these structures. Further, the
design of new structures can be performed by probabilistic methods if similar models
and information are used as for existing structures which are known to perform
satisfactorily. If probabilistic methods are used to design structures where no
similar existing structures are known, then the designer has to be very careful and
verify the models used as much as possible.

The reliability estimated as a measure of the safety of a structure can be used in a
decision (e.g. design) process. A minimum acceptable reliability level can be used as
a constraint in an optimal design problem. This level can be obtained by analyzing
similar structures designed according to current design practice, or it can be
determined as the reliability level giving the largest utility (benefits minus costs)
when solving a decision problem in which all possible costs and benefits in the
expected lifetime of the structure are taken into account.

In order to estimate the reliability using probabilistic concepts it is necessary to
introduce stochastic variables and/or stochastic processes/fields and to define the
failure and non-failure behavior of the structure under consideration.

Generally the main steps in a reliability analysis are:

1) Select a target reliability level.
2) Identify the significant failure modes of the structure.
3) Decompose the failure modes into series systems of parallel systems of
   single components (only needed if the failure modes consist of more than
   one component).
4) Formulate failure functions (limit state functions) corresponding to each
   component in the failure modes.
5) Identify the stochastic variables and the deterministic parameters in the
   failure functions. Further, specify the distribution types and statistical
   parameters for the stochastic variables and the dependencies between them.
6) Estimate the reliability of each failure mode.
7) In a design process, change the design if the reliabilities do not meet the
   target reliabilities. In a reliability analysis, the reliability is compared
   with the target reliability.
8) Evaluate the reliability result by performing sensitivity analyses.

The single steps are discussed below.

Typical failure modes to be considered in a reliability analysis of a structural system
are yielding, buckling (local and global), fatigue, fracture and excessive deformations.


The failure modes (limit states) are generally divided into:

Ultimate limit states

Ultimate limit states correspond to the maximum load-carrying capacity, which can be
related to e.g. formation of a mechanism in the structure, excessive plasticity,
rupture due to fatigue and instability (buckling).

Conditional limit states

Conditional limit states correspond to the load-carrying capacity if a local part of
the structure has failed. A local failure can be caused by an accidental action or by
fire. The conditional limit states can be related to e.g. formation of a mechanism in
the structure, exceedance of the material strength or instability (buckling).

Serviceability limit states

Serviceability limit states are related to normal use of the structure, e.g. excessive
deflections, local damage and excessive vibrations.

The fundamental quantities that characterize the behavior of a structure are called the
basic variables and are denoted X = (X1, X2, ..., Xn), where n is the number of basic
stochastic variables. Typical examples of basic variables are loads, strengths,
dimensions and materials. The basic variables can be dependent or independent, see
below where the different types of uncertainty are discussed. In probabilistic analysis
these stochastic variables are represented using probability distribution functions,
which need to be derived for each variable using field data.

The uncertainty modeled by stochastic variables can be divided into the following
groups:

Physical uncertainty: or inherent uncertainty, is related to the natural randomness of
a quantity, for example the uncertainty in the yield stress due to production
variability.

Measurement uncertainty: the uncertainty caused by imperfect measurements of, for
example, a geometrical quantity.

Statistical uncertainty: due to limited sample sizes of observed quantities.

Model uncertainty: the uncertainty related to imperfect knowledge or idealizations of
the mathematical models used, or uncertainty related to the choice of probability
distribution types for the stochastic variables.

The above types of uncertainty are usually treated by the reliability methods, which
will be described in the following sections. Another type of uncertainty, which is not
covered by these methods, is gross errors or human errors. These types of errors can
be defined as deviation of an event or process from acceptable engineering practice.

Generally, methods to measure the reliability of a structure can be divided into four
groups:


Level I methods: The uncertain parameters are modeled by one characteristic value, as
for example in codes based on the partial safety factor concept.

Level II methods: The uncertain parameters are modeled by the mean values and the
standard deviations, and by the correlation coefficients between the stochastic
variables. The stochastic variables are implicitly assumed to be normally distributed.
The reliability index method is an example of a level II method.

Level III methods: The uncertain quantities are modeled by their joint distribution
functions. The probability of failure is estimated as a measure of the reliability.

Level IV methods: In these methods the consequences (cost) of failure are also taken
into account and the risk (consequence multiplied by the probability of failure) is used
as a measure of the reliability. In this way different designs can be compared on an
economic basis taking into account uncertainty, costs and benefits.

Level I methods can e.g. be calibrated using level II methods, level II methods can be
calibrated using level III methods, etc.

Level II and III reliability methods are considered in these lectures. Several
techniques can be used to estimate the reliability for level II and III methods, e.g.

Simulation techniques: Samples of the stochastic variables are generated and the
relative number of samples corresponding to failure is used to estimate the probability
of failure. The simulation techniques are different in the way the samples are
generated.

FORM techniques: In First Order Reliability Methods the limit state function (failure
function) is linearized and the reliability is estimated using level II or III methods.

SORM techniques: In Second Order Reliability Methods a quadratic approximation to the
failure function is determined and the probability of failure for the quadratic
failure surface is estimated.

20.2 Stochastic Variables


Consider a continuous stochastic variable X. The distribution function of X is
denoted FX(x) and gives the probability FX(x) = P(X ≤ x). A distribution function is
illustrated in figure 20.1. The density function fX(x) is illustrated in figure 20.2 and is
defined by eq.(20.1).

f_X(x) = dF_X(x)/dx      (20.1)


Figure 20.1: Distribution Function FX(x)

Figure 20.2: Density Function fX(x)

The expected value µX is defined by eq.(20.2).


µ_X = ∫₋∞^{+∞} x f_X(x) dx      (20.2)

The variance σ_X² is defined by eq.(20.3).

σ_X² = ∫₋∞^{+∞} (x − µ_X)² f_X(x) dx      (20.3)

σ_X is the standard deviation.


The coefficient of variation V_X is defined by eq.(20.4).

V_X = σ_X / µ_X      (20.4)

20.2.1 Example: Probability of Failure, Fundamental Case


Consider a structural element with load bearing capacity R which is loaded by the
load S. R and S are modeled by independent stochastic variables with density
functions fR and fS and distribution functions FR and FS , see figure 20.3. The
probability of failure becomes
P_f = P(failure) = P(R ≤ S) = ∫₋∞^{∞} P(R ≤ x) P(x ≤ S ≤ x + dx) = ∫₋∞^{∞} F_R(x) f_S(x) dx      (20.5)


Figure 20.3: Density Functions for Fundamental Case


Alternatively, the probability of failure can be evaluated as

P_f = P(failure) = P(R ≤ S) = ∫₋∞^{∞} P(x ≤ R ≤ x + dx) P(S ≥ x)
    = ∫₋∞^{∞} f_R(x) (1 − F_S(x)) dx = 1 − ∫₋∞^{∞} f_R(x) F_S(x) dx      (20.6)

It is noted that it is important that the lower tail of the distribution for the
strength and the upper tail of the distribution for the load are modeled as accurately
as possible.
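The integral in eq.(20.5) is easy to check numerically. A minimal sketch, assuming R
and S are normal with the values used later in the example of section 20.3.1, and
using scipy for the distribution functions and the integration:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# eq.(20.5) with assumed normals R ~ N(3.5, 0.25) and S ~ N(2, 0.3)
Pf, _ = quad(lambda x: norm.cdf(x, loc=3.5, scale=0.25) * norm.pdf(x, loc=2.0, scale=0.3),
             -np.inf, np.inf)
print(Pf)   # ~6.2e-5, matching the closed-form result of section 20.3.1
```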


20.2.2 Some Important Probability Distributions

[Tables of the probability distributions commonly used in reliability analysis
(pages 20.7 to 20.11 of the original notes) are not reproduced here.]

20.2.3 Covariance and Correlation


The covariance between X1 and X2 is defined by

Cov[X1, X2] = E[(X1 − µ1)(X2 − µ2)]      (20.7)

It is seen that

Cov[X1, X1] = Var[X1] = σ1²      (20.8)

The correlation coefficient between X1 and X2 is defined by

ρ_{X1,X2} = Cov[X1, X2] / (σ1 σ2),    −1 ≤ ρ_{X1,X2} ≤ 1      (20.9)

It is a measure of the linear dependence between X1 and X2. If ρ_{X1,X2} = 0 then X1
and X2 are uncorrelated, but not necessarily statistically independent. For a
stochastic vector X = (X1, X2, ..., Xn) the covariance matrix is defined by

      | Var[X1]       Cov[X1, X2]   ...   Cov[X1, Xn] |
      | Cov[X2, X1]   Var[X2]       ...   Cov[X2, Xn] |
C_X = |   ...            ...        ...      ...      |      (20.10)
      | Cov[Xn, X1]   Cov[Xn, X2]   ...   Var[Xn]     |

Correspondingly the correlation coefficient matrix is defined by


      | 1            ρ_{X1,X2}   ...   ρ_{X1,Xn} |
      | ρ_{X1,X2}    1           ...   ρ_{X2,Xn} |
ρ_X = |   ...           ...      ...      ...    |      (20.11)
      | ρ_{X1,Xn}    ρ_{X2,Xn}   ...   1         |

A structural component fails when the applied loads ‘S’ exceed its resistance ‘R’ i.e.
R – S < 0. Generally, this condition is expressed in terms of a failure equation or a
limit state function ‘g’. Let g(X) be a function of variables X1, X2, X3,…. Xn. The
failure is defined by the condition g(X) < 0. The variables X are modeled as stochastic
variables in the probabilistic analysis. The failure probability P_f is defined by

P_f = P(g(X) < 0)      (20.12)

i.e. the probability that x lies in the failure domain

F = {x | g(x) < 0}      (20.13)

For independent variables, the probability of failure can be written in terms of the
marginal density functions f_i of the stochastic variables X:

P_f = ∫_F f_X(x) dx = ∫_F f_X1(x1) ... f_Xn(xn) dx1 ... dxn      (20.14)

Thus the calculation of the probability of failure essentially reduces to the
calculation of the integral given by eq.(20.14). The techniques broadly fall into two
categories. The first category includes approximation methods like FORM and SORM,
also known as fast probability integration (FPI) methods. The second category
comprises simulation-based methods, which are basically Monte Carlo methods.

20.3 Fast Probability Integration Methods (FPI)

The first developments of FPI methods took place almost 30 years ago. Since then the
methods have been refined and extended significantly and by now they form one of the
most important tools for reliability evaluations in structural reliability theory.
Several commercial computer codes have been developed for FPI methods [20.2, 20.3],
and the methods are widely used in practical engineering problems and for

20.3.1 Linear Limit State Functions and Normally Distributed Correlated Variables
(Level II Method, Cornell Index)

Consider the case where the limit state function, g(X) is a linear function of the basic
random variables X . Then we may write the limit state function as


g(X) = a0 + Σ_{i=1}^{n} a_i X_i      (20.15)

The limit state function g is linear and all X_i are normally distributed. The mean
and the standard deviation of g can be calculated as:

µ_g = a0 + Σ_{i=1}^{n} a_i µ_Xi

σ_g² = Σ_{i=1}^{n} a_i² σ_Xi² + Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} ρ_ij a_i a_j σ_Xi σ_Xj      (20.16)

The distribution of g will also be Normal. The probability of failure (P (g< 0)) is
given by:

P_f = P(g < 0) = Φ( (0 − µ_g)/σ_g ) = Φ( −µ_g/σ_g ) = Φ(−β)      (20.17)

where β = µ_g/σ_g is the reliability index.

Figure 20.4: Illustration of β and Pf

This formulation of the reliability index is due to Cornell and is known as the
Cornell reliability index. It is valid when all the variables are normally distributed
and the limit state is linear. In reliability analysis it is generally sufficient to
report the value of the reliability index.

Example:

Consider a failure function of the form g = R – S.

R ~ Normal(3.5, 0.25) and S ~ Normal(2, 0.3), where the parameters are the mean and
the standard deviation. It is assumed that they are not correlated, i.e. ρRS = 0.

The Cornell Reliability index is given by β = µg/σg.

µg = 3.5 – 2 = 1.5


σg = √(0.25² + 0.3²) = 0.39

β = 1.5/0.39 = 3.84

Pf = Φ(−3.84) = 6.15×10⁻⁵
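This calculation is easily scripted; a minimal sketch using scipy for the standard
normal distribution function Φ:

```python
from scipy.stats import norm

mu_g = 3.5 - 2.0                        # mu_R - mu_S
sigma_g = (0.25**2 + 0.3**2) ** 0.5     # independent variables, eq.(20.16)
beta = mu_g / sigma_g                   # Cornell index
print(beta, norm.cdf(-beta))            # 3.84 and ~6.2e-5, via eq.(20.17)
```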

20.3.2 Non-Linear Limit State Functions and Normally Distributed Uncorrelated
Variables (Level II Method, MVFOSM)

Consider the case where the limit state function, g(X) is a non-linear function of the
basic random variables X . The limit state function can be linearized using the first
order terms of the Taylor’s series about the mean value.

The failure function after linearization can be written as

g(X) ≈ g(µ_X) + Σ_{i=1}^{n} (∂g/∂X_i)|_{µ_X} (X_i − µ_Xi)      (20.18)

The linearized g now has the same form as eq.(20.15), and the Cornell reliability
index can be obtained from it.

This method is known as the Mean-Value First-Order Second-Moment (MVFOSM) method, as
it uses the first-order terms of the Taylor series expansion at the mean values, and
the calculation of the reliability index uses only the first moment (mean) and the
second moment (standard deviation).

The Cornell reliability index, however, suffers from a lack of invariance: the results
for g = R − S, g = R² − S² and g = R/S − 1 will be different even though they describe
the same failure event.

20.3.3 Non-Linear Limit State Functions and Normally Distributed Uncorrelated
Variables (Level II Method, Hasofer-Lind Reliability Index)
Hasofer and Lind removed the problem of invariance in the Cornell index. The limit
state function g(X) is transformed into a limit state function g(u) by normalizing the
random variables into standardized normally distributed random variables:

u_i = (X_i − µ_Xi) / σ_Xi      (20.19)

such that the random variables ui have zero means and unit standard deviations.

Then the reliability index β has the simple geometrical interpretation as the smallest
distance from the line (or generally the hyper-plane) forming the boundary between
the safe domain and the failure domain, i.e. the domain defined by the failure event. It
should be noted that this definition of the reliability index does not depend on the
limit state function but rather the boundary between the safe domain and the failure
domain. The point on the failure surface with the smallest distance to origin is
commonly denoted the design point.


The geometrical interpretation is illustrated in figure 20.5, where a two-dimensional
case is considered.

Figure 20.5: Illustration of the two-dimensional case of a linear limit state function
and standardized normally distributed variables U

The estimation of the reliability index is now essentially an optimization problem:
find the minimum distance from the origin to the failure surface in the standard
normal space. An algorithm to arrive at the minimum distance is listed next.

1. Formulate the limit state function g(X) = 0.
2. Transform all X_i = µ_i + u_i σ_i.
3. Transform g(X) → g(u).
4. Assume some initial value of each u_i (typically a start is made with each u_i = 0).
5. Compute ∂g/∂u_i for all variables at the current u_i.
6. Compute the values of each u_i for the next iteration using

   u_i' = (∂g/∂u_i) [ Σ_{j=1}^{n} (∂g/∂u_j) u_j − g ] / Σ_{j=1}^{n} (∂g/∂u_j)²      (20.20)

7. β = √( Σ_{i=1}^{n} u_i² )
8. Repeat steps 5 to 7 until a converged value of β is obtained.

Features of the Hasofer Lind Reliability index

1. This method of determining β is popularly known as First Order Reliability


Method (FORM).
2. The method approximates the non-linear limit state function by a tangent hyperplane
at the minimum-distance point (the most probable point, MPP).


Figure 20.6: Hasofer-Lind Reliability Index

3. It requires determination of partial derivatives for the limit state function.


Hence it can be applied only to continuous failure functions.
4. This formulation does not suffer from the problem of invariance.
5. The method fails if there is more than one point on failure surface at minimum
distance from origin.
6. The method is easy to apply and to program.
7. The values of the random variables at the design point (MPP) indicate their
relative effect on the failure probability; a mean value close to the MPP shows that
Pf depends strongly on that random variable. If Xi* denotes the value of a variable Xi
at the design point, then the available margin on the mean value is Xi*/µi for load
variables and µi/Xi* for strength variables.
8. The relative importance of the variables is measured by the sensitivity factors αi,
defined as:

   α_i = (∂g/∂u_i) / √( Σ_{j=1}^{n} (∂g/∂u_j)² )      (20.21)

Example:

Consider a steel rod under pure tension loading. The rod fails if the applied stress
on the rod cross-section exceeds the steel yield stress. The yield stress R of the rod
and the load on the rod S are modeled by uncorrelated normally distributed variables.
The cross-sectional area of the steel rod A is also uncertain. The yield stress R is
normally distributed with mean value and standard deviation of 350 and 35 MPa
respectively. The load S is normally distributed with mean value and standard
deviation of 2 and 0.4 kN. The cross-sectional area A is also assumed normally
distributed with mean value and standard deviation of 10 and 2 mm².
Solution:

1. Writing the limit state function

g = RA – S


2. Transform the variables to standard normal space

R = µR + σR uR
A = µA + σA uA
S = µS + σS uS
3. Transforming g in standard normal space

gu = (µR + σR uR)( µA + σA uA) – (µS + σS uS)

4. Assuming initial value

uR = uA = uS = 0

5. Finding the partial derivatives

∂g/∂uR = ( µA + σA uA) σR
∂g/∂uA = ( µR + σR uR) σA
∂g/∂uS = - σS

The rest of the iterative steps are shown in the table below (u_i' denotes the
updated values from eq.(20.20)).

Iter  uR      uA      uS     gu       ∂g/∂uR    ∂g/∂uA    ∂g/∂uS   uR'     uA'     uS'    β
1     0.000   0.000   0.000  1500     350       700       -400.0   -0.680  -1.359  0.777  1.707
2     -0.680  -1.359  0.777  64.662   254.854   652.427   -400.0   -0.562  -1.439  0.882  1.779
3     -0.562  -1.439  0.882  -0.658   249.246   660.643   -400.0   -0.546  -1.448  0.877  1.779
4     -0.546  -1.448  0.877  -0.010   248.648   661.762   -400.0   -0.544  -1.449  0.876  1.779
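The iteration above can be sketched in a few lines of Python. This is not the
lecture's code: it uses a finite-difference gradient instead of the analytic
derivatives, and the function name hasofer_lind is illustrative. The load is expressed
in newtons (2 kN, 0.4 kN) so that g = R·A − S is dimensionally consistent.

```python
import numpy as np

def hasofer_lind(g, mu, sigma, tol=1e-6, max_iter=50):
    """Iteration of eq.(20.20) for independent normal variables.
    g takes the vector of physical variables X; returns beta and u*."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    u, beta = np.zeros_like(mu), 0.0
    for _ in range(max_iter):
        gu = g(mu + sigma * u)
        h = 1e-6                                    # finite-difference step
        grad = np.array([(g(mu + sigma * (u + h * e)) - gu) / h
                         for e in np.eye(len(u))])  # dg/du_i
        u = grad * (grad @ u - gu) / (grad @ grad)  # eq.(20.20)
        beta_new = float(np.linalg.norm(u))         # step 7
        if abs(beta_new - beta) < tol:
            return beta_new, u
        beta = beta_new
    return beta, u

# Steel rod: g = R*A - S with R ~ N(350, 35) MPa, A ~ N(10, 2) mm^2,
# S ~ N(2000, 400) N
beta, u_star = hasofer_lind(lambda x: x[0] * x[1] - x[2],
                            [350.0, 10.0, 2000.0], [35.0, 2.0, 400.0])
print(beta)   # ~1.779, as in the table above
```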

20.3.4 Non-Linear Failure Functions with No Limitations on the Distributions of the
Uncorrelated Variables (Rackwitz-Fiessler Method, Level III Method)

As a further development of the iterative calculation procedure for the evaluation of
the failure probability, we need to consider cases where non-normally distributed
random variables are also present.

One of the commonly used approaches for treating this situation is to approximate the
probability distribution function and the probability density function of the
non-normally distributed random variables by normal distribution and normal density
functions. This methodology is referred to as the Rackwitz-Fiessler method. As the
design point is usually located in the tails of the distribution functions of the
basic random variables, the scheme is often referred to as the "normal tail
approximation".


Consider a random variable X whose density function is denoted by f_X(x) and
distribution function by F_X(x), both non-normal. Denoting the design point of the
random variable X in the FORM iteration by x*, the non-normal distribution is
approximated by an equivalent normal distribution at x*, see the figure below. This is
required because the basic Hasofer-Lind reliability index is valid only in the
standard normal space. The objective now is to estimate the mean and the standard
deviation of the equivalent normal distribution. The normal distribution is fitted
such that the value of the distribution function of X at x* is equal to that of the
equivalent normal distribution function.

Figure 20.7: Equivalent Normal distribution

This is done by equating the distribution functions and their derivatives at x* and
solving for the two unknowns, the mean µ' and standard deviation σ' of the equivalent
normal distribution:

F_X(x*) = Φ( (x* − µ')/σ' )      (20.22)

and, differentiating,

f_X(x*) = (1/σ') φ( (x* − µ')/σ' )

The equivalent mean and standard deviation are estimated from eq.(20.23):

σ_eq = φ( Φ⁻¹[F_X(x*)] ) / f_X(x*)

µ_eq = x* − σ_eq Φ⁻¹[F_X(x*)]      (20.23)

where φ is the standard normal density function and Φ is the standard normal
distribution function.

This calculation needs to be performed in each iteration of the Hasofer-Lind
reliability calculation. The iterative procedure in the case of non-normal variables
can thus be given as:

1. Formulate the limit state function g(X) = 0.
2. Assume initial values of all random variables; typically X_i = µ_i is taken in the
   first iteration.
3. Find the equivalent normal distribution parameters (µ_i_eq, σ_i_eq) for all
   non-normal random variables (eq.(20.23)).
4. Transform all X_i = µ_i_eq + u_i σ_i_eq.
5. Transform g(X) → g(u).
6. Compute ∂g/∂u_i for all variables at the current u_i.
7. Compute the values of each u_i for the next iteration using

   u_i' = (∂g/∂u_i) [ Σ_{j=1}^{n} (∂g/∂u_j) u_j − g ] / Σ_{j=1}^{n} (∂g/∂u_j)²      (20.24)

8. β = √( Σ_{i=1}^{n} u_i² )
9. Transform all u_i back into the X space and go to step 3.

Repeat until a converged value of β is obtained.

Example:

Consider a failure function g = R − S. R follows a Gumbel distribution with mean value
of 1000 and standard deviation of 200. S follows a normal distribution with mean of
700 and standard deviation of 100. Evaluate the reliability index.

The Gumbel distribution is given by


F(R) = exp[ −exp{−α(R − u)} ]

f(R) = α exp[ −α(R − u) − exp{−α(R − u)} ]

α = 1.282/σ_R = 1.282/200 = 6.41×10⁻³

u = µ_R − 0.577/α = 910

1. g = R − S
2. Let R = 1000 and S = 700 (the mean values); g = R − S = 300.
3. Calculating the equivalent mean and standard deviation at this point, using the
   equations above:

F(1000) = 0.5703
f(1000) = 0.002053

σ_R_eq = φ( Φ⁻¹[0.5703] ) / 0.002053 = 191.295

µ_R_eq = 1000 − 191.295 Φ⁻¹[0.5703] = 966.1266

4. Transforming to the standard normal space:

u_R = (1000 − 966.1266)/191.295 = 0.1771

u_S = (700 − 700)/100 = 0

g_u = (µ_R_eq + u_R σ_R_eq) − (µ_S + u_S σ_S)

5. Calculating the partial derivatives:

∂g_u/∂u_R = σ_R_eq = 191.295

∂g_u/∂u_S = −σ_S = −100

6. Calculating the points for the next iteration:

A = (∂g_u/∂u_R) u_R + (∂g_u/∂u_S) u_S − g = 191.295 × 0.1771 + (−100)(0) − 300 = −266.1266

B = (∂g_u/∂u_R)² + (∂g_u/∂u_S)² = 191.295² + 100² = 46593.7673

A/B = −0.0057

u_R' = 191.295 × (−0.0057) = −1.0926
u_S' = −100 × (−0.0057) = 0.57


7. Calculating β:

β = √(u_R'² + u_S'²) = 1.2329

Next iteration:

2. New values of R and S:

R = 966.1266 + 191.295 × (−1.0926) = 757.1164
S = 700 + 0.57 × 100 = 757
g = 0.1164

3. Calculating the equivalent mean and standard deviation at this point:

F(757.1164) = 0.0696
f(757.1164) = 0.0012
σ_R_eq = 112.4427
µ_R_eq = 923.36

4. Transforming to the standard normal space:

u_R = −1.4785
u_S = 0.57
5. Partial derivatives:

∂g_u/∂u_R = σ_R_eq = 112.4427
∂g_u/∂u_S = −σ_S = −100

6. Points for the next iteration:

A = −223.36
B = 22643.3617
A/B = −0.0099
u_R' = −1.1092
u_S' = 0.9864

7. Calculating β:

β = 1.4843

Further iterations can be performed similarly. The converged value of β, obtained
after 5 iterations, is 1.4968.
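The whole Rackwitz-Fiessler loop for this example can be sketched as follows; the
variable names are illustrative, and scipy's norm supplies φ, Φ and Φ⁻¹:

```python
import numpy as np
from scipy.stats import norm

# R ~ Gumbel(alpha = 6.41e-3, u = 910), S ~ N(700, 100), g = R - S
alpha, loc = 1.282 / 200.0, 910.0
F = lambda r: np.exp(-np.exp(-alpha * (r - loc)))                               # F_R
f = lambda r: alpha * np.exp(-alpha * (r - loc) - np.exp(-alpha * (r - loc)))   # f_R
mu_S, sig_S = 700.0, 100.0

x_R, x_S, beta = 1000.0, 700.0, 0.0       # start at the mean values (step 2)
for _ in range(20):
    z = norm.ppf(F(x_R))                  # Phi^-1[F_X(x*)]
    sig_eq = norm.pdf(z) / f(x_R)         # normal tail approximation, eq.(20.23)
    mu_eq = x_R - sig_eq * z
    u = np.array([(x_R - mu_eq) / sig_eq, (x_S - mu_S) / sig_S])
    grad = np.array([sig_eq, -sig_S])     # dg/du for g = R - S
    u = grad * (grad @ u - (x_R - x_S)) / (grad @ grad)   # eq.(20.24)
    beta = np.linalg.norm(u)
    x_R, x_S = mu_eq + sig_eq * u[0], mu_S + sig_S * u[1] # back to X space
print(beta)   # converges to ~1.497, as in the worked example
```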

20.3.5 Non-Linear Limit State Functions and Normally Distributed Correlated Variables
(Level II Method, Hasofer-Lind Reliability Index)

The situation where the basic random variables X are stochastically dependent is often
encountered in practical problems. For normally distributed random variables the joint
probability distribution function may be described in terms of the first two moments,
i.e. the mean value vector and the covariance matrix. This is, however, only the case
for normally or log-normally distributed random variables.

For normally distributed random variables these situations may be treated along the
same lines as described in the previous sections. Here a transformation is required
such that the variables become uncorrelated in the standard normal space. The
correlations between the random variables X are collected in the covariance matrix
C_X defined by eq.(20.10).

The transformation from the X space to the u space is made using

{X} = {D} + [Q]{u}      (20.25)

where
{X} = vector of (correlated) random variables,
{D} = vector of mean values of the random variables,
{u} = vector of random variables in the uncorrelated standard normal space,
[Q] = lower triangular matrix obtained from the Cholesky decomposition of C_X.

The steps to calculate the reliability index are given below.

1. Formulate the limit state function g(X) = 0.
2. Obtain [Q] by Cholesky decomposition of C_X.
3. Transform all {X} = {µ} + [Q]{u}.
4. Transform g(X) → g(u).
5. Assume some initial value of each u_i (typically a start is made with each u_i = 0).
6. Compute ∂g/∂u_i for all variables at the current u_i.
7. Compute the values of each u_i for the next iteration using eq.(20.20).
8. β = √( Σ_{i=1}^{n} u_i² )
9. Repeat steps 6 to 8 until a converged value of β is obtained.

Example:

Consider a failure function of the form g = R − S. The statistics of the stochastic
variables R and S are defined below.


R = Normal (1000, 200) and S = Normal (700, 100)

The covariance between R and S is 300.

Solution:

1. Formulate the limit state function g(X) = 0:

g = R − S

2. Obtain [Q] by Cholesky decomposition of C_X:

C_X = | 200²   300  |
      | 300    100² |

Writing [Q] as a lower triangular matrix with [Q][Q]ᵀ = C_X:

| Q11   0   | | Q11   Q21 |   | Q11²       Q11 Q21     |
| Q21   Q22 | | 0     Q22 | = | Q11 Q21    Q21² + Q22² |

Solving: Q11 = 200, Q21 = 300/200 = 1.5, Q22 = √(100² − 1.5²) ≈ 100, so

[Q] = | 200   0   |
      | 1.5   100 |

3. Transform all {X} = {µ} + [Q]{u}:

| R |   | 1000 |   | 200   0   | | u_R |
| S | = | 700  | + | 1.5   100 | | u_S |

R = 1000 + 200 u_R
S = 700 + 1.5 u_R + 100 u_S

4. Transform g(X) → g(u):

g_u = 300 + 198.5 u_R − 100 u_S

Since g_u is linear in the standard normal variables, β follows directly as the
Cornell index: β = 300/√(198.5² + 100²) = 1.35.
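The decomposition and the resulting index can be checked with numpy, whose
linalg.cholesky returns exactly the lower triangular [Q] used here:

```python
import numpy as np

C = np.array([[200.0**2, 300.0],
              [300.0,    100.0**2]])
Q = np.linalg.cholesky(C)        # lower triangular, Q @ Q.T == C
print(Q)                         # [[200, 0], [1.5, ~100]]

# g_u = 300 + 198.5*u_R - 100*u_S is linear in standard normal variables,
# so beta follows from the Cornell index (eq.(20.17)):
beta = 300.0 / np.hypot(198.5, 100.0)
print(beta)                      # ~1.35
```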

Most Probable Point (MPP) of failure

The set of values of the random variables after the last iteration of any of the above
algorithms is termed the Most Probable Point (MPP) of failure. At this point the
product Π_{i=1}^{n} f_i(x_i) is maximum within the failure region, where f_i(x_i) is
the value of the density function of the i-th random variable.

20.4 Integration using Monte Carlo Simulation


The most common use for Monte Carlo methods is the evaluation of integrals. This is
also the basis of Monte Carlo simulations (which are actually integrations). The basic
principles hold true in both cases.

Basics:

Standard Monte Carlo Integration

Let us consider an integral in d dimensions:

I = ∫_V f(x) dᵈx

where V is the d-dimensional hypercube with 0 ≤ x_i ≤ 1.

Monte Carlo integration:

- Generate N random vectors x_i from the flat (uniform) distribution on V.
- As N → ∞,

  (V/N) Σ_{i=1}^{N} f(x_i) → I

- The error is proportional to 1/√N.
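A minimal sketch of standard Monte Carlo integration over the unit hypercube; the
integrand here is a hypothetical choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
d, N = 8, 100_000
x = rng.random((N, d))                     # N uniform points in the unit hypercube
fx = np.prod(np.sin(np.pi * x), axis=1)    # hypothetical integrand f(x)
I_hat = fx.mean()                          # V = 1 for the unit cube
err = fx.std(ddof=1) / np.sqrt(N)          # statistical error ~ 1/sqrt(N)
print(I_hat, err)                          # exact value is (2/pi)^8 ~ 0.027
```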

"Normal" numerical integration methods:

- Divide each axis into n evenly spaced intervals.
- Total number of points N = nᵈ.
- Errors proportional to:
  i.   1/n   (midpoint rule)
  ii.  1/n²  (trapezoidal rule)
  iii. 1/n⁴  (Simpson's method)


- If d is small, Monte Carlo integration has much larger errors than the standard
  methods. In practice, MC integration becomes better when d > 6–8.

For the standard methods, N simply becomes too large very quickly: already at d = 10,
with n = 10 (pretty small), N = 10¹⁰. Too much!

Random Number Generation

Good random numbers play a central part in Monte Carlo simulations. Usually they are
generated using a deterministic algorithm; numbers generated this way are called
pseudorandom numbers. There are several methods for obtaining random numbers for
Monte Carlo simulations.

Physical random numbers

Physical random numbers are generated from some truly random physical process
(radioactive decay, thermal noise, a roulette wheel, etc.). Before the computer age,
special machines were built to produce random numbers, which were often published in
books. For example, in 1955 the RAND Corporation published a book with a million
random numbers, obtained using an electric "roulette wheel". This classic book is now
available on the net at

http://www.rand.org/publications/classics/randomdigits/

Physical random numbers are not very useful for Monte Carlo, because:

• The sequence is not repeatable.


• The generators are often slow.
• The quality of the distribution is often not perfect. For example, a sequence of
random bits might have slightly more 0’s than 1’s. This is not acceptable for
Monte Carlo.

Pseudorandom numbers

Almost all of the Monte Carlo calculations utilize pseudorandom numbers, which are
generated using deterministic algorithms. Typically the generators produce a random
integer (with a definite number of bits), which is converted to a floating point number
x ∈ [ 0,1] by multiplying with a suitable constant.
The generators are initialized once before use with a seed number, typically an integer
value or values. This sets the initial state of the generator.

The essential properties of a good random number generator are:

Repeatability – the same initial values (seeds) produce the same random number
sequence. This can be important for debugging.

Randomness – random numbers should be
• from a uniform distribution – for example, really homogeneously distributed
  between [0,1]
• non-correlated, i.e. independent of each other. This is tough! No pseudorandom
  sequence is truly independent.

Long period – the generators have a finite amount of internal state information, so
the sequence must repeat itself after a finite period. The period should preferably be
much longer than the amount of numbers needed for the calculation.

Insensitive to seeds – the period and randomness properties should not depend on the
initial seed.

Fast.

Portability – same results on different computers.

Linear congruential generator

One of the simplest, most widely used and oldest generators is the linear congruential
generator (LCG). Usually the language or library "standard" generators are of this
type.

The generator is defined by integer constants a, c and m, and produces a sequence of
random integers X_i via

X_{i+1} = (a X_i + c) mod m      (20.26)

This generates integers from 0 to (m − 1) (or from 1 to (m − 1), if c = 0). Real
numbers in [0, 1) are obtained by the division f_i = X_i / m.

Since the state of the generator is specified by the integer X_i, which is smaller
than m, the period of the generator is at most m. The constants a, c and m must be
carefully chosen to ensure this. Arbitrary parameters are sure to fail!

A typical generator:

a = 1103515245; c = 12345; m = 2³¹

This is essentially a 32-bit algorithm. The cycle time of this generator is
m = 2³¹ ≈ 2×10⁹.
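A sketch of this generator using the constants quoted above (the generator function
name is illustrative):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator of eq.(20.26); yields floats in [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=1)
print([round(next(gen), 6) for _ in range(5)])
```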

Random numbers from non-uniform distributions

• Generate z_i uniform on [0,1).
• Compute x_i = F⁻¹(z_i).
• Then the x_i are distributed according to f(x).
• To apply the method, one must be able to compute and invert the distribution
  function F.

Example:

Consider F(x) = 1 − exp(−λx). Random values of x can be generated from

x = −(1/λ) ln(1 − v)

where v is a uniform random number between 0 and 1.
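A quick numerical check of this inverse-transform recipe, assuming λ = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                       # assumed rate parameter
v = rng.random(100_000)
x = -np.log(1.0 - v) / lam      # inverse of F(x) = 1 - exp(-lam*x)
print(x.mean())                 # ~1/lam = 0.5, as expected for the exponential
```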

Example

Consider two probabilistic variables, X and Y. Their probability distribution
functions are given below.

F_X(x) = 0 for x < 0;   x² for 0 ≤ x ≤ 1;   1 for x > 1

F_Y(y) = 0 for y < 0;   y³ for 0 ≤ y ≤ 1;   1 for y > 1

The failure function is given by

g = X − Y

Estimate the probability of failure using the Monte Carlo method. Use the set of 10
uniform random numbers each for X and Y given in the following table. Knowing the
random number r_x for X, the value of the random variable X can be found from

X = √r_x

Similarly, knowing the random number r_y for Y, the value of the random variable Y can
be found from

Y = (r_y)^(1/3)

      r_x      X = √r_x   r_y      Y = r_y^(1/3)   Failure (g < 0)
1     0.056    0.237      0.667    0.874           1
2     0.931    0.965      0.260    0.638           0
3     0.159    0.399      0.548    0.818           1
4     0.621    0.788      0.033    0.321           0
5     0.199    0.446      0.468    0.776           1
6     0.307    0.554      0.179    0.563           1
7     0.476    0.690      0.288    0.660           0
8     0.799    0.894      0.291    0.663           0
9     0.365    0.604      0.162    0.545           0
10    0.080    0.283      0.780    0.920           1

Ten such pairs are obtained and each pair is substituted into the limit state function
g. Out of these, 5 lead to failure. Hence the probability of failure is estimated as
5/10 = 0.5.
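With a large sample the same experiment converges to the exact failure probability,
which for these distributions works out to P(X < Y) = ∫ y²·3y² dy = 0.6; the 10-sample
estimate above is 0.5. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
X = np.sqrt(rng.random(N))        # inverse of F_X(x) = x^2
Y = rng.random(N) ** (1.0 / 3.0)  # inverse of F_Y(y) = y^3
print(np.mean(X - Y < 0))         # ~0.6
```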
Box-Muller Method for generating normal random variables

u₁ = √(−2 ln v₁) cos(2π v₂)
u₂ = √(−2 ln v₁) sin(2π v₂)      (20.27)

where v₁, v₂ are independent uniform random numbers and u₁, u₂ are independent
standard normal random variables.
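A sketch of eq.(20.27) in use:

```python
import numpy as np

rng = np.random.default_rng(2)
v1 = 1.0 - rng.random(50_000)     # in (0, 1], avoids log(0)
v2 = rng.random(50_000)
u1 = np.sqrt(-2.0 * np.log(v1)) * np.cos(2.0 * np.pi * v2)   # eq.(20.27)
u2 = np.sqrt(-2.0 * np.log(v1)) * np.sin(2.0 * np.pi * v2)
print(u1.mean(), u1.std())        # ~0 and ~1 for standard normal samples
```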

Error Estimate in Monte Carlo Method

The error in the Monte Carlo result is expressed by its coefficient of variation,
given by

cov = √( (1 − P_f) / (N P_f) )      (20.28)

where P_f is the probability of failure estimated from N simulations.

Monte Carlo applied to structural mechanics

Consider the following example. The failure equation can be written as

g(a, σ, K_c) = K_c − 1.12 σ √(πa)

a, σ and K_c have inherent scatter; their distribution functions are represented by
F_a, F_σ and F_K respectively.

Procedure:

1. Generate 3 random numbers in [0,1), one for each random variable.
2. Transform them into values of a, σ and K_c by inverting the corresponding
   distribution functions.
3. Evaluate g; if g < 0, set Fail = Fail + 1.
4. Repeat the steps, say, 50 times.

Probability of failure is given by

Pf = Fail / 50
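A sketch of this procedure for the fracture example. The three distributions and
their parameters are hypothetical, since the lecture leaves F_a, F_σ and F_K
unspecified, and a larger sample size is used so that eq.(20.28) gives a meaningful
error estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
# Hypothetical distributions for illustration only:
a  = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=N)   # crack depth, mm
s  = rng.normal(200.0, 20.0, size=N)                      # stress, MPa
Kc = rng.normal(700.0, 70.0, size=N)                      # toughness, MPa*sqrt(mm)
g = Kc - 1.12 * s * np.sqrt(np.pi * a)
Pf = np.mean(g < 0)
cov = np.sqrt((1.0 - Pf) / (N * Pf))      # error estimate, eq.(20.28)
print(Pf, cov)
```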

20.5 Application of Reliability methods in structural analysis

20.5.1 Formulating an Optimized In-Service Inspection Plan

Probabilistic methodology is being increasingly used in different industries all over
the world to prepare an optimized in-service inspection (ISI) plan. Since inspection
in most cases involves shutdown of the plant, it is imperative to inspect as little as
possible without compromising safety. The need is to address the following questions:

• What to inspect: which components in the system are to be inspected.
• How much to inspect: once a component has been selected, the extent of the
  inspection needs to be prescribed.
• When to inspect: an optimized inspection schedule needs to be generated.
• Whether defects detected during inspection need to be repaired.

Traditionally these questions have been addressed mostly on the basis of the
experience of the operators and, in some cases, deterministic methods.

The stress analysis of piping components gives the locations that see the maximum
stresses. The welds are in general expected to be the initiation sites for defects.
Combinations of these and the fatigue usage factors of the different components
therefore typically help in selecting the vulnerable sites. There is, however,
considerable scatter in the parameters on which such decisions are based. The material
properties, especially fatigue data and toughness data, have considerable uncertainty
associated with them. The fatigue loads coming onto the plant can also not be
predicted with certainty, nor can the fatigue crack growth in service. The NDE methods
used for ISI do not detect the defects with high confidence.

Traditional methods cover these uncertainties by incorporating a high factor of safety
at each step of the analysis, leading to very conservative and, on certain occasions,
incorrect results. Probabilistic methods, on the other hand, quantify the
uncertainties by using density functions for the uncertain parameters. These, however,
rely strongly on the amount of data available for fitting the distributions.

Probabilistic methods help in ranking the relative importance of different locations
in the piping system while incorporating the uncertainty associated with the decision
parameters. A typical probabilistic methodology can be given as:

1. Select locations of interest on the piping system using the traditional method
of high stress/inferior material properties and sites for possible degradation
with service life.


2. Formulate the time variant damage models for each such site (fatigue, creep,
corrosion, erosion, irradiation etc).

3. Represent the parameters involved in the damage models using suitable


probability density functions.

4. A crack is assumed at each of these locations. The size of the crack can be
estimated from the crack initiation models.

5. There will be a probability for detecting these initiated cracks during the pre-
service inspection (PSI). This probability is generally given as probability of
detection (POD) curves, which are well documented in literature. A typical
POD curve is shown below.

Figure 20.8: Typical POD Curve

6. If a proof test of the piping system is performed, the cracks that would fail
during the test can be censored, as they will not enter service. The failure can be
determined using fracture mechanics methods such as J-tearing, R6, SINTAP etc.,
depending on the failure mechanism expected.

7. The damaging mechanisms are now applied. In cases like fatigue, the arrival times
of the cyclic loads (transients) also need to be modeled as stochastic variables.
With time, because of the action of the damaging mechanisms, the condition of the
structure deteriorates: increasing flaw depth (fatigue, corrosion), wall thinning
(erosion) or degradation of material properties (irradiation), etc. These need to be
calculated over time using the appropriate damage models, and the failure of the
component needs to be evaluated as a function of time using appropriate failure
equations. A curve of probability of failure (Pf) versus time can be obtained for
each site of interest. This value includes the effect of successive inspections.


A typical Pf versus time curve in the absence of any inspection is shown below.

Figure 20.9: Pf versus time (without inspections)

A target value Pf-Target can be decided. The inspections can then be scheduled such
that Pf always remains below Pf-Target, as shown in the next figure; T1, T2 and T3 in
this case can be taken as the inspection intervals.

Figure 20.10: Optimized ISI intervals


This analysis is to be performed for all locations identified earlier. From this
group, a selection of the more vulnerable ones can be made based on the Pf curves.

20.5.2 Designing Based on Load and Resistance Factor Design (LRFD)

Consider a Design equation

R = L1 + L2 + L3

Let the required factor of safety be 1.5

Thus for safe design:

R/(L1+L2+L3) = 1.5

Let all the variables R, L1, L2, L3 follow a normal distribution, with coefficients of
variation (σ/µ) given as:

R    0.1
L1   0.1
L2   0.2
L3   0.3

The reliability index for a linear function with all variables normally distributed is
given by

β = ( µ_R − (µ_L1 + µ_L2 + µ_L3) ) / √( σ_R² + σ_L1² + σ_L2² + σ_L3² )

Now we estimate the probability of failure for different load combinations. Let
µR = 300. Thus, to satisfy the design equation, we should have ΣµL = 200.

µL1    µL2    µL3    ΣµL    Pf
200    0      0      200    2.8×10⁻³
0      200    0      200    2.3×10⁻²
0      0      200    200    6.8×10⁻²

This shows that although the design equation is satisfied in each of the cases, a
different value of the probability of failure is obtained in each case. This kind of
anomaly is addressed by probabilistic methods, where instead of keeping a single
factor of safety we can design for a target Pf value.
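A quick check of the table above; the three cases differ only in which load carries
the mean value of 200:

```python
import numpy as np
from scipy.stats import norm

mu_R, cov_R = 300.0, 0.1
covs = (0.1, 0.2, 0.3)                       # CoV of L1, L2, L3
for mu_L in ((200, 0, 0), (0, 200, 0), (0, 0, 200)):
    sig = [cov_R * mu_R] + [c * m for c, m in zip(covs, mu_L)]
    beta = (mu_R - sum(mu_L)) / np.linalg.norm(sig)
    print(norm.cdf(-beta))                   # 2.8e-3, 2.3e-2, 6.8e-2
```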


Alternatively, margins on each of the parameters can be derived using probabilistic
methods such that the target Pf value is always maintained. The larger the coefficient
of variation of a particular variable, the larger the safety margin required for that
variable:

γ_R R = γ1 L1 + γ2 L2 + γ3 L3

This methodology has been used in civil engineering design codes for many years.
Various mechanical design codes are also incorporating this philosophy now.

Example

Table 20.2: Partial Safety Factors of API 579 (2000) for the assessment of crack-like
flaws (table not reproduced here).

The table has been taken from the API 579 [20.4] code for the safety evaluation of
cracked components. The partial safety factors were generated using R6 as the failure
model [20.5].

20.6 References

[20.1] Melchers, R.E.: Structural Reliability: Analysis and Prediction. John Wiley &
Sons, New York, 1987.

[20.2] COMREL: Software for Evaluating Component Reliability. Free demo download
available at www.strurel.de.

[20.3] NESSUS: Probabilistic Analysis Software, www.nessus.swri.org.

[20.4] API 579: Recommended Practice for Fitness-For-Service, 2000.


[20.5] Bloom, J.M.: "Partial Safety Factors (PSF) and Their Impact on ASME Section XI,
IWB-3610". Presented at the 2000 ASME Pressure Vessels and Piping Conference, July
23-27, 2000, Seattle, Washington.

