
MSc. Asset Maintenance and Management
Reliability Assessment of Structures (EMM 5023)

Course Syllabus and Introduction

What is the principal objective?

"If you think that safety is expensive, try to have an accident!"

(Figure: a running technical system balancing monetary profit against failures)

Learning Objectives

The main objectives of the course cover:
- the definition and concept of structural reliability, including uncertainty modelling;
- the definition of risk, failure modes and risk analysis;
- an overview of probability and stochastic modelling;
- reliability measurement methods;

- first order reliability methods (FORM);
- second order reliability methods (SORM);
- reliability assessment of series and parallel systems;
- reliability-based design and code calibration.

Learning Outcome

At the end of this course, students should be able to:
- demonstrate the concepts of reliability of structures;
- determine the level of certainty of structural performance;
- perform reliability-based design;
- calibrate codes based on reliability;

- determine the fatigue reliability of structures;
- plan probability- and risk-based inspection.

Chapter 1: Introduction

Reliability: Risk and Safety in Engineering. WHY?

Unfortunately, structures fail.

Partial collapse of the Pentagon Building; partial collapse of the CGA Terminal.

Minor failures: failure due to debris impact; failure due to insufficient shear capacity.

Catastrophic failures: collapse of the I-35W Mississippi River bridge, August 1, 2007 (13 killed, 145 injured).

Reliability theory: arguments in favor

A DETERMINISTIC analysis of structures (say, to the Eurocodes) is time-independent and applies to error-free structures. A PROBABILISTIC (reliability-based) analysis allows:
- assessment of the DURABILITY of structures;
- incorporation of the possibility of HUMAN ERRORS;
- consideration of the STRUCTURAL SYSTEM rather than individual components;
- consideration of ABNORMAL SITUATIONS (accidental actions).

Reliability theory: arguments against
- the need to study probability calculus and statistics;
- the need to collect statistical data on structures and actions (loads);
- the need to move outside the safe and customary area ruled by design codes of practice.

Do you know the answer to the question "How safe is safe enough?"

Why should we be concerned about structural reliability?

Individuals: individuals are exposed to involuntary risk due to structural failures. The risk levels for buildings and bridges are usually associated with involuntary risk and are much lower than the risks associated with voluntary activities (travel, mountain climbing, deep-sea fishing).

Society: a structural failure results in a decrease of confidence in the stability and continuity of one's surroundings. Society is interested in structural reliability mainly in the sense that a structural failure with significant consequences shatters this confidence.

Engineers: the need to apply novel structures and novel construction methods generates an interest in safety. The design, construction and use of new or particularly hazardous systems (a new and unique bridge, a new offshore structure, a nuclear power plant, a chemical plant, a liquefied gas depot) should raise particular interest in their safety.

Principal causes of structural failures
- human errors and deliberate actions (a factor of uncertainty);
- accidental actions (explosions, collisions, etc.);
- accumulation of damage (ageing), the main subject of reliability theory.

The need to bridge the gap: how can STUDENTS & PRACTISING ENGINEERS quickly be joined with the ELITE SCIENTISTS?

Introduction

Sustainable development, related to conservation of the environment and to the welfare and safety of people, has been a subject of increasing concern in society during the last decades. At the same time, the optimal allocation of available natural and financial resources is considered very important.

Therefore, methods of risk and reliability analysis in engineering design are being developed, and they are becoming more and more important as decision support tools in civil engineering applications. The decision process is illustrated in Figure 1.

Figure 1: Elements of decision making for engineering structures

All engineering facilities, such as bridges, buildings, power plants, dams, machines and equipment, and offshore platforms, are intended to contribute to the benefit and quality of life. Therefore, when such facilities are planned, it is important that the benefit of the facility can be identified considering all phases of its life, i.e. design, manufacturing, construction, operation and, eventually, decommissioning.

Benefit has different meanings for different people in society, simply because different people have different preferences. However, benefit for society can be understood as a facility that:
- is economically efficient for a specific purpose;
- fulfils given requirements with regard to the safety of people directly or indirectly involved with, and exposed to, the facility;
- fulfils given requirements regarding the effects of the facility on the community and the environment.

Taking these requirements into account, the task of the engineer is to make decisions, or to provide the decision basis for others, such that it may be ensured that engineering facilities are established, operated, maintained and decommissioned in a way that optimizes or enhances the possible benefits to society and to its individual members.

For many years it was assumed in the design of structural systems that all loads and strengths are deterministic. The strength of an element was determined such that it exceeded the load by a certain margin; the ratio between the strength and the load was denoted the safety factor, and this number was considered a measure of the reliability of the structure. Codes of practice for structural systems prescribe values for loads, strengths and safety factors.

As described above, structural analysis and design have traditionally been based on deterministic methods. However, uncertainties in the loads, in the strengths and in the modelling of the system mean that methods based on probabilistic techniques have to be used in a number of situations.
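The limitation of a single safety factor can be illustrated numerically. The sketch below is a minimal illustration, assuming independent, normally distributed resistance R and load effect S with hypothetical means and standard deviations (not values from the course): two designs with the same central safety factor R/S = 1.5 can have very different failure probabilities once the scatter differs.

```python
from math import erf, sqrt

def failure_probability(mu_r, sigma_r, mu_s, sigma_s):
    """P(S > R) for independent normal R and S.
    The safety margin M = R - S is normal, so Pf = P(M < 0) = Phi(-beta)."""
    beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)  # reliability index
    return 0.5 * (1.0 - erf(beta / sqrt(2.0)))            # Phi(-beta)

# Two designs with the same central safety factor 150/100 = 1.5,
# but different scatter in the resistance (hypothetical numbers):
pf_low_scatter  = failure_probability(150.0, 10.0, 100.0, 15.0)
pf_high_scatter = failure_probability(150.0, 25.0, 100.0, 15.0)
print(pf_low_scatter, pf_high_scatter)
```

Both designs look identical to a deterministic check, yet the second has a failure probability more than an order of magnitude larger.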

A structure is usually required to perform satisfactorily during its expected lifetime, i.e. it must not collapse or become unsafe, and it must fulfil certain functional requirements. Generally, structural systems have a rather small probability of not functioning as intended; see Table 1.


Reliability definition

The reliability of a structural system can be defined as the probability that the structure under consideration performs properly throughout its lifetime. Reliability methods are used to estimate the probability of failure. The information in the models on which the reliability analyses are based is generally not complete.

Therefore, the estimated reliability should be considered a nominal measure of reliability and not an absolute number. However, if the reliability is estimated for a number of structures using the same level of information and the same mathematical models, useful comparisons can be made of the reliability levels of these structures.

Further, the design of new structures can be performed by probabilistic methods if models and information similar to those for existing structures known to perform satisfactorily are used. If probabilistic methods are used to design structures for which no similar existing structures are known, the designer has to be very careful and must verify the models used as far as possible.

The reliability estimated as a measure of the safety of a structure can be used in a decision (e.g. design) process. A lower bound on the reliability can be used as a constraint in an optimal design problem. This lower bound can be obtained by analyzing similar structures designed according to current design practice, or it can be determined as the reliability level giving the largest utility (benefits minus costs) when solving a decision problem in which all possible costs and benefits over the expected lifetime of the structure are taken into account.

In order to estimate the reliability using probabilistic concepts, it is necessary to introduce stochastic variables and/or stochastic processes/fields, and to define the failure and non-failure behaviour of the structure under consideration. Generally, the main steps in a reliability analysis are:

1. Select a target reliability level.
2. Identify the significant failure modes of the structure.

3. Decompose the failure modes into series systems of parallel systems of single components (only needed if the failure modes consist of more than one component).
4. Formulate failure functions (limit state functions) corresponding to each component in the failure modes.
5. Identify the stochastic variables and the deterministic parameters in the failure functions. Further, specify the distribution types and statistical parameters of the stochastic variables, and the dependencies between them.

6. Estimate the reliability of each failure mode.
7. In a design process, change the design if the reliabilities do not meet the target reliabilities; in a reliability analysis, compare the reliability with the target reliability.
8. Evaluate the reliability result by performing sensitivity analyses.
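Steps 4-6 of the list above can be sketched in a few lines. The example below is illustrative only: it assumes a single hypothetical failure mode with limit state g = R - S, normally distributed variables with made-up parameters, and estimates the failure probability of that mode by crude Monte Carlo simulation.

```python
import random

random.seed(1)

def g(r, s):
    """Step 4: limit state (failure) function; g <= 0 means failure."""
    return r - s

def sample():
    """Step 5: stochastic variables with assumed (hypothetical) distributions."""
    r = random.normalvariate(150.0, 15.0)   # resistance
    s = random.normalvariate(100.0, 20.0)   # load effect
    return r, s

# Step 6: crude Monte Carlo estimate of the failure probability of this mode.
n = 200_000
failures = sum(1 for _ in range(n) if g(*sample()) <= 0.0)
pf = failures / n
print(f"estimated Pf = {pf:.4f}")
```

For this linear limit state the exact value is Phi(-2.0), about 0.023, so the simulation can be checked against the closed form.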


Chapter 2: Review of Probability & Statistics, Probability Modelling and Decision in Engineering

The main objective of this lesson is to review the fundamental concepts of probability and statistics, probability modelling, and decision-making in engineering.


The main objective is to understand:


Chapter 3: Framework of Risk Analysis

The objective of this lesson is to understand the concept of the framework of risk analysis (Joint Committee on Structural Safety).

Framework of risk analysis

Risk assessment is used in a number of situations, with the general intention of indicating that important aspects of uncertainties, probabilities and/or frequencies, and consequences have been considered in some way or other. Decision theory provides a theoretical framework for such analyses.

In typical decision problems the information basis is often not very precise, and in many situations it is necessary to use historical data. The available historical information is often not directly related to the problem considered, but to a somewhat similar situation. Furthermore, an important part of a risk assessment is to evaluate the effect of additional information, of risk-reducing measures and/or of changes to the considered problem.

It is therefore necessary that the framework for the decision analysis can take these types of information into account and allow decisions to be updated based upon new information. This is possible if the framework of Bayesian decision theory is used.

A fundamental principle in decision theory is that optimal decisions must be identified as those resulting in the highest expected utility. In typical engineering applications the utility may be related to consequences in terms of costs, fatalities, environmental impact, etc. In these cases the optimal decisions are those resulting in the lowest expected costs, the lowest expected number of fatalities, and so on.

Figure: Principal flow diagram of risk assessment

Implementation of risk analysis

Risk analyses can be presented in a format which is almost independent of the application; the figure on the next slide shows a general scheme for risk analysis. Perhaps the most important step in the process of a risk analysis is to identify the context of the decision problem, i.e. the relation between the considered engineering system and/or activity and the analyst performing the analysis:

1. Who are the decision maker(s) and the parties with interests in the activity (e.g. society, client(s), state and organizations)?
2. Which matters might have a negative influence on the impact of the risk analysis and its results?
3. What might influence the manner in which the risk analysis is performed (e.g. political, legal, social, financial and cultural factors)?

Furthermore, the important step of setting the acceptance criteria must be performed. This includes the specification of the accepted risks with regard to economic, public or personnel safety and environmental criteria. In setting the acceptable risks, which might be considered a decision problem in itself, due account should be taken of both international and national regulations in the considered application area.

However, for risk analyses performed for decision making in the private or inter-company sphere, with no potential consequences for third parties, the criteria may be established without consideration of such regulations. In these cases the issue of risk acceptance is reduced to a pure matter of cost or resource optimization involving the preferences of the decision maker alone.

System definition

The system (or activity) considered has to be described, and all assumptions regarding the system representation and idealizations must be stated.

Identification of Hazard Scenarios

The next step is to analyze the system with respect to how it might fail or result in other undesirable consequences. Three steps are usually distinguished in this analysis:

1. Decomposition of the system into a number of components and/or sub-systems.
2. Identification of the possible states of failure for the considered system and sub-systems, i.e. the hazards associated with the system.

3. Identification of how the hazards might be realized for the considered system and sub-systems, i.e. identification of the scenarios of failure events of components and sub-systems which, if they occur, will lead to system failure.

A hazard is typically referred to as a failure event for the considered system or activity; the occurrence of a hazard is therefore also referred to as a system failure event. System failures may thus represent events such as the collapse of a building structure, the flooding of a construction site or an explosion in a road or rail tunnel.

Hazard identification is concerned with the identification of all events which might have an adverse consequence for:
- people,
- the environment,
- the economy.

Different techniques for hazard identification have been developed in various engineering application areas such as the chemical, nuclear power and aeronautical industries. Examples are:
- Preliminary Hazard Analysis (PHA)
- Failure Mode and Effect Analysis (FMEA)
- Failure Mode Effect and Criticality Analysis (FMECA)
- Hazard and Operability Studies (HAZOP)
- Risk Screening (HAZID sessions)

Analysis of Consequences

Typical consequences are economic consequences, loss of life and effects on the environment. The estimation of the consequences given failure of the system or its sub-systems requires a good understanding of the system and its interrelation with its surroundings, and is thus best performed in collaboration with experts who have hands-on experience with the considered type of activity.

Analysis of Probability

The evaluation of the probabilities of failure of the individual components and sub-systems may, in principle, be based on two different approaches:
- failure rates, e.g. for electrical and production systems, or
- structural reliability methods, for structural systems such as buildings and bridges.

Risk analyses are typically made on the basis of information which is subject to uncertainty.

Analysis of Probability

These uncertainties may be divided in:

inherent or natural variability, e.g. the yield

strength of steel.

modeling uncertainty:

i. uncertainty related to the influence of

parameters not included in the model, or

ii. uncertainty related to the mathematical

model used.

statistical uncertainty.

88

Identify Risk Scenarios

When the consequences and probabilities are identified, the risk can be computed. The hazard scenarios which dominate the risk may then be identified, and the risk scenarios can be ranked according to their risk contribution.

Analyze Sensitivities

The sensitivity analysis is useful for analyzing the identified risk scenarios and normally includes an identification of the most important factors influencing the risks associated with the different risk scenarios. The sensitivity analysis may also include "what if" studies for evaluating the importance of the various system simplifications made in the definition of the system.

Risk Treatment

The calculated risks are compared with the accepted risks initially stated in the risk acceptance criteria. Should the risks not be acceptable according to the specified risk acceptance criteria, there are in principle four different ways to proceed.

Risk mitigation: risk mitigation is implemented by modifying the system so that the source of risk is removed. For example, the risk of fatalities from a ship colliding with a bridge may be mitigated by traffic lights that stop traffic from proceeding onto the bridge whenever a ship navigates under it.

Risk reduction: risk reduction may be implemented by reducing the consequences and/or the probability of occurrence; in practice, risk reduction is normally achieved by a physical modification of the considered system.

Risk transfer: risk transfer may be performed by e.g. insurance or other financial arrangements whereby a third party takes over the risk.

Risk acceptance: if the risks do not comply with the risk acceptance criteria and other approaches to risk treatment are not effective, then risk acceptance may be an option.

Monitoring and Review

Risk analyses may, as already stated, be performed for a number of decision support purposes. For many engineering applications, such as cost control during large construction projects and inspection and maintenance planning for bridge structures, the risk analysis is a process with constant feedback of information from the system. Whenever new information is provided, the risk analysis may be updated.

Quantitative Risk Analysis (QRA)

Quantitative Risk Analysis (QRA) is used in the assessment of risks. Three calculation methods are:

1. Event Tree Analysis (ETA)
2. Fault Tree Analysis (FTA)
3. Risk matrix

Event Tree Analysis

The initial event is usually placed on the left and branches are drawn to the right, each branch representing a different sequence of events and terminating in an outcome. The main elements of the tree are event definitions and branch points, or logic vertices. The initial event is usually expressed as a frequency (events/year) and the subsequent splits as probabilities (events/demand), so that the final outcomes are also expressed as frequencies (events/year).


Each branch of the Event Tree represents a particular scenario. An example of a simple Event Tree is shown in the figures on the next slides: fire protection is provided by a sprinkler system. A detector will either detect the rise in temperature or it will not. If the detector succeeds, the control box will either work correctly or it will not, and so on. There is only one branch in the tree indicating that all the subsystems have succeeded.
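The branch arithmetic of the sprinkler example can be sketched as follows. All branch probabilities here are hypothetical placeholders, not values from the slides; each outcome frequency is the initiating frequency multiplied by the conditional probabilities along its path.

```python
# Hypothetical branch probabilities, for illustration only.
fire_frequency = 0.1        # initiating events per year
p_detector_ok  = 0.9        # probability the detector senses the temperature rise
p_control_ok   = 0.95       # probability the control box works, given detection
p_sprinkler_ok = 0.98       # probability the sprinkler operates, given activation

# Walk every branch: frequency (events/year) times probabilities along the path.
f_success         = fire_frequency * p_detector_ok * p_control_ok * p_sprinkler_ok
f_sprinkler_fails = fire_frequency * p_detector_ok * p_control_ok * (1 - p_sprinkler_ok)
f_control_fails   = fire_frequency * p_detector_ok * (1 - p_control_ok)
f_detector_fails  = fire_frequency * (1 - p_detector_ok)

f_unsuppressed = f_sprinkler_fails + f_control_fails + f_detector_fails
print(f_success, f_unsuppressed)
```

The four outcome frequencies necessarily sum back to the initiating frequency, which is a useful consistency check on any event tree.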

The results of the Event Tree are the outcome event frequencies (probabilities) per year. The outcome frequencies may be processed further to obtain the following results.

Risk to Workforce

Annual Risk

The annual risk may be expressed as the Potential Loss of Life (PLL), where the PLL expresses the probability of fatalities per year for all the operating personnel. As such, the PLL is a risk indicator which is valid for the whole installation rather than for an individual. The calculation for a given event i is of the form:


Individual Risk

The Individual Risk (IR) expresses the probability per year of fatality for one individual. It is also termed the Individual Risk Per Annum (IRPA). The IR depends on the location of the individual at a given time and on the content of his or her work. In practice, for the operating personnel of an installation, an Average Individual Risk (AIR) may be estimated for groups of persons, taking into account the percentage of time of exposure to the hazard per year. For all the personnel involved in the annual operation of the installation, the AIR may be derived from the PLL.


Fatal Accident Rate

The Fatal Accident Rate (FAR) is defined as the potential number of fatalities in a group of people exposed for a specific exposure time to the activity in question. Generally, the FAR is expressed as the probability of fatality per 100 million exposure hours for a given activity. It is mainly used for comparing the fatality risk of activities. The 100 million exposure hours represent the number of hours at work in 1000 working lifetimes.
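The three workforce indicators can be computed from a list of event tree outcomes. All numbers below are hypothetical, and the formulas (PLL as the frequency-weighted sum of fatalities, AIR as PLL per exposed person, FAR normalized to 10^8 exposure hours) are the usual textbook forms rather than the exact expressions on the omitted slides.

```python
# Hypothetical outcome events from an event tree: (frequency per year, fatalities).
outcomes = [(1e-4, 10.0), (1e-3, 1.0), (5e-3, 0.1)]

# PLL: expected number of fatalities per year for the whole installation.
pll = sum(freq * fatalities for freq, fatalities in outcomes)

# AIR: average individual risk, spreading the PLL over the exposed group.
n_persons = 50
air = pll / n_persons

# FAR: fatalities per 100 million exposure hours.
hours_per_person = 2000.0               # assumed exposure per person per year
far = pll * 1e8 / (n_persons * hours_per_person)
print(pll, air, far)
```

Note how the three indicators are just rescalings of the same expectation: PLL for the installation, AIR per person, FAR per exposure hour.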


Risk to public

FN curves or F/N plots (also called Cumulative Frequency Graphs) are probability-versus-consequence diagrams, where F denotes the frequency of a potential event and N the number of associated fatalities. A Cumulative Frequency Graph shows the probability of N or more fatalities occurring. Such graphs tend to be of interest when the risk acceptance criterion selected, or, as is more often the case, imposed by the regulator, includes an aversion to potential incidents that would result in, say, more than ten fatalities. In simple terms, risk aversion exists if society regards a single accident with 100 fatalities as in some sense worse than 100 accidents (e.g. road accidents) with a single fatality each.
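An FN curve can be computed directly from a list of outcome events by accumulating the frequencies of all outcomes with N or more fatalities. The event list below is hypothetical, for illustration only.

```python
# Hypothetical outcome events: (frequency per year, number of fatalities N).
outcomes = [(1e-3, 1), (1e-4, 5), (1e-5, 20), (1e-6, 100)]

def fn_curve(outcomes):
    """Return (N, F) pairs, where F is the annual frequency of N or more fatalities."""
    ns = sorted({n for _, n in outcomes})
    return [(n, sum(f for f, m in outcomes if m >= n)) for n in ns]

for n, f in fn_curve(outcomes):
    print(n, f)
```

Because F accumulates everything at or above N, the curve is non-increasing in N; plotting it on log-log axes gives the familiar FN diagram against which acceptance lines are drawn.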


Fault Tree Analysis

Compared to an Event Tree, the Fault Tree analysis works in the opposite direction: it is a deductive approach which starts from an effect and aims at identifying its causes. A Fault Tree is therefore used to develop the causes of an undesirable event. It starts with the event of interest, the top event, such as a hazardous event or an equipment failure, and is developed from the top down.

The Fault Tree is both a qualitative and a quantitative technique. Qualitatively, it is used to identify the individual scenarios (so-called paths or cut sets) that lead to the top (fault) event; quantitatively, it is used to estimate the probability (frequency) of that event. A component of a Fault Tree has one of two binary states: it is either in the correct state or in a fault state. In other words, the spectrum of states from total integrity to total failure is reduced to just two states.

The application of a Fault Tree may be illustrated by considering the probability of a crash at a road junction and constructing a tree with AND and OR logic gates (figure on the next slide). The tree is constructed by deducing in turn the preconditions for the top event, and then successively for the next levels of events, until the basic causes are identified.

Qualitative Analysis

By using the properties of Boolean algebra, it is possible first to establish the combinations of basic (component) failures which, when occurring simultaneously, can lead to the top (undesirable) event. These combinations are the so-called "minimal cut sets" (or "prime implicants") and can be derived from the logical equation represented by the Fault Tree.

Considering the Fault Tree in the figure on the previous slide, six scenarios can be extracted:

These six minimal cut sets are, to a first approximation, equivalent. However, a common cause failure analysis could show, for example, that "road too slippery" increases the probability of "car at main road junction", because the road is slippery on both sides. Therefore the 4th cut set is perhaps more likely than the others.

Semi-Qualitative Analysis

The second step consists of calculating the probability of occurrence of each scenario. By ascribing probabilities to each basic event, we obtain the figures shown on the next slides for our example:

Now it is possible to sort the minimal cut sets in a more accurate way, i.e. into three classes: one cut set at 10^-3, three at 10^-4 and two at 10^-5. Of course, it is better to improve the scenarios with the higher probabilities first if we want to be efficient. As a by-product of this calculation, the global failure probability 1.32 x 10^-3 is obtained by a simple sum of all the individual probabilities.

But this simple sum is a conservative approximation which works well when the probabilities are sufficiently low (in the case of safety, for example). It becomes less accurate as the probabilities increase, and it can even exceed 1 when the probabilities are very high. This is due to the cross terms that are neglected. Therefore, this approach must be used with care.
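The cut-set arithmetic of the example can be checked directly. The sketch below compares the simple (conservative) sum with the exact union probability that would hold if the six minimal cut sets were statistically independent, an assumption that fails as soon as cut sets share basic events.

```python
from math import prod

# Minimal cut set probabilities from the example:
# one cut set at 10^-3, three at 10^-4, two at 10^-5.
cut_sets = [1e-3, 1e-4, 1e-4, 1e-4, 1e-5, 1e-5]

# Conservative rare-event approximation: simple sum of the probabilities.
p_sum = sum(cut_sets)

# Exact top event probability *if* the cut sets were independent.
p_exact = 1.0 - prod(1.0 - p for p in cut_sets)

print(p_sum, p_exact)
```

At these low probabilities the neglected cross terms are tiny, so the sum barely overestimates the exact value; pushing the probabilities toward 1 makes the gap (and the conservatism) grow quickly.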

Quantification in Fault Tree Analysis

As a Fault Tree represents a logical formula, it is possible to calculate the probability of the top event by ascribing probabilities to each basic event and applying the rules of probability calculus. When the events are independent and the probabilities are low, the probability of the output event of an OR gate can be roughly estimated as the sum of the probabilities of its input events. An example is given in the figure on the next slide.

These simple calculations only work on the basis of the above hypotheses. For example, as soon as the Fault Tree contains repeated events (the same event in several locations in the tree), the independence hypothesis is lost. The calculation then becomes wrong and, even worse, it is impossible to know whether the result is optimistic or pessimistic. Moreover, the estimate of the top event probability becomes less and less accurate (more and more conservative) as the probabilities increase, even if the events are independent.

Risk Matrix

The arrangement of accident probability and corresponding consequence in a Risk Matrix may be a suitable expression of risk in cases where many accidental events are involved or where a single-value calculation is difficult. As the figure on the next slide shows, the matrix is separated into three regions:
- unacceptable risk,
- further evaluation or attention required, and
- acceptable risk.


Further evaluations have to be carried out for the region between acceptable and unacceptable risk, to determine whether further risk reduction is required or more studies should be performed. The limit of acceptability is set by defining the regions in the matrix which represent unacceptable and acceptable risk. The Risk Matrix may be used for qualitative as well as quantitative studies.

If probability is classified in broad categories such as "rare" and "frequent", and consequence as "small", "medium", "large" and "catastrophic", the results of a qualitative study may be shown in the Risk Matrix. The definition of the categories is particularly important in the case of qualitative use. The categories and boxes of the Risk Matrix may be replaced by continuous variables, implying a full quantification. An illustration of this is shown in the figure on the next slide.
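A qualitative Risk Matrix lookup can be sketched as a small function. The category scales and the region boundaries below are hypothetical choices for illustration only; a real study defines its own categories and limits of acceptability.

```python
# Hypothetical ordered category scales (least to most severe).
PROB_LEVELS = ["rare", "unlikely", "possible", "frequent"]
CONS_LEVELS = ["small", "medium", "large", "catastrophic"]

def classify(probability, consequence):
    """Place an event in one of the three regions of the Risk Matrix."""
    score = PROB_LEVELS.index(probability) + CONS_LEVELS.index(consequence)
    if score >= 5:
        return "unacceptable"
    if score >= 3:
        return "further evaluation required"
    return "acceptable"

print(classify("rare", "catastrophic"))   # further evaluation required
print(classify("frequent", "large"))      # unacceptable
print(classify("unlikely", "small"))      # acceptable
```

Note that a rare-but-catastrophic event lands in the middle region here; a risk-averse criterion, as discussed for FN curves, might instead push any "catastrophic" consequence straight to "unacceptable".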

The upper tolerability limit (see the last two figures) is almost always defined, whereas the lower limit is related to each individual risk-reducing measure, depending on when the cost of implementing the measure becomes unreasonably disproportionate to the reduction of risk.

Examples of applications of the Risk Matrix are the evaluation of:
- risk to safety of personnel for different solutions, such as an integrated versus a separate quarters platform;
- risk of operations such as exploration drilling;
- risk of the use of a particular system, such as mechanical pipe handling;
- environmental risk.

Decision trees

Decision trees are used to illustrate decisions and

consequences of decisions.

Further, when probabilities are assigned to

consequences expected costs / utilities of different

alternatives can be determined.

The next figure shows an example of a decision tree in which each possible decision and its consequences are systematically identified.

Two alternative designs for the structural design of a building are considered.

Decision trees

Design A is based on a conventional procedure

with a probability of satisfactory performance

equal to 99% and costs $1.5 million.

Design B is a new concept and will reduce the cost to $1 million.

The reliability of B is not known, but the engineer estimates that if the assumptions made are correct, the probability of good performance is 0.99, whereas if the assumptions are wrong, the probability is only 0.9. He is only 50% sure of the assumptions.

The cost of unsatisfactory performance is $10

million.

Decision trees

The expected costs of the two alternatives

are:

A: C = 0.99 × 1.5 + 0.01 × 11.5 = $1.6 million

B: C = 0.5 × (0.99 × 1.0 + 0.01 × 11.0) + 0.5 × (0.9 × 1.0 + 0.1 × 11.0) = $1.55 million

According to decision theory the alternative with

the lowest expected costs should be chosen, i.e.

alternative

B should be chosen here.
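The expected-cost comparison above is easy to reproduce; the sketch below recomputes both figures using the numbers from the example (all values in $ million):

```python
# Expected-cost comparison for the two designs (values in $ million).
# Failure adds the $10M loss on top of the construction cost.
C_FAIL = 10.0

def expected_cost(build_cost, p_good):
    """Expected cost given construction cost and probability of good performance."""
    return p_good * build_cost + (1 - p_good) * (build_cost + C_FAIL)

# Design A: conventional procedure, 99% reliable, costs $1.5M.
cost_A = expected_cost(1.5, 0.99)

# Design B: new concept, $1.0M; 50/50 on the assumptions being correct,
# giving reliability 0.99 (assumptions correct) or 0.90 (assumptions wrong).
cost_B = 0.5 * expected_cost(1.0, 0.99) + 0.5 * expected_cost(1.0, 0.90)

print(round(cost_A, 2))  # 1.6
print(round(cost_B, 2))  # 1.55
```

Since cost_B < cost_A, decision theory picks alternative B, as in the text.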


Decision trees

The decision tree is constructed from left to

right.

Each consequence is associated with

probabilities (summing up to 1) after each

node. For each branch the expected cost

/utility is determined by multiplying

probabilities and costs/utilities for that

branch.

Risk acceptance criteria

Acceptance of risk is basically problem of decision

making, and is inevitably influenced by many factors

such as type of activity, level of loss, economic,

political, and social factors, confidence in risk

estimation, etc.

In its simplest form, a risk estimate is considered acceptable when it falls below the level which divides unacceptable from acceptable risks.

For example, an estimated individual risk of 10⁻⁷ per annum can be considered negligible; similarly, an estimate of injuries occurring several times per year can be considered unacceptable.

Risk acceptance criteria

The as low as reasonably practicable (ALARP)

principle is sometimes used in the industry as the

only acceptance principle and sometimes in

addition to other risk acceptance criteria.

The use of the ALARP principle may be interpreted

as satisfying a requirement to keep the risk level

as low as reasonably practicable, provided that the

ALARP evaluations are extensively documented.

The ALARP principle is shown in the next slide.


Risk acceptance criteria

The risk level should be reduced as far as possible in

the interval between acceptable and unacceptable

risk.

The common way to determine what is possible is to

use cost-benefit evaluations as basis for decision on

whether to implement certain risk reducing measures

or not.

The upper tolerability limit is almost always defined,

whereas the lower tolerability limit is sometimes

defined, or not defined.

The lower limit is specific to each individual risk reducing measure, depending on when the cost of implementing the measure becomes unreasonably disproportionate to the risk reducing effect.

Risk acceptance criteria

The ALARP principle is normally used for risk

to safety of personnel, environment and

assets.

The value for the upper tolerability limit is derived from accident statistics; these indicate, for example, that a risk of death of around 1 in 1,000 per annum is the most that is ordinarily accepted by a substantial group of workers in any industry in the UK.

Risk acceptance criteria

HSE (the UK Health and Safety Executive) suggested the upper maximum tolerable risk level as a line with a slope of −1 (on a log-log F-N plot) through the point n = 500 (number of fatalities), F = 2 × 10⁻⁴ (frequency) per year.

This line corresponds to n = 1 at F = 10⁻¹ per year, and n = 100 at F = 10⁻³ per year.

However, in the same document, HSE states that the risk of a single accident causing the death of 50 people or more with a frequency of 5 × 10⁻³ per year is intolerable.


Risk acceptance criteria

For the negligible level, the HSE recommends a line drawn three decades lower than the intolerable line.

This line corresponds to one fatality, n = 1, at one per ten thousand per year, F = 10⁻⁴ per year; similarly, n = 100 corresponds to one in a million per year, F = 10⁻⁶ per year.
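Both criterion lines reduce to n × F = constant; the constants follow from the points quoted in the text (500 × 2 × 10⁻⁴ = 0.1 for the intolerable line, three decades lower for the negligible line). A minimal sketch:

```python
# F-N criterion lines described in the text: intolerable line n*F = 0.1,
# negligible line n*F = 1e-4 (straight lines on a log-log plot).
def max_tolerable_frequency(n_fatalities):
    """Frequency (per year) on the intolerable line for n fatalities."""
    return 0.1 / n_fatalities

def negligible_frequency(n_fatalities):
    """Frequency (per year) on the negligible line for n fatalities."""
    return 1e-4 / n_fatalities

# Reproduce the points quoted in the text:
print(max_tolerable_frequency(1))    # 0.1
print(max_tolerable_frequency(100))  # 0.001
print(negligible_frequency(100))     # 1e-06
```

An estimated accident frequency falling between the two lines lands in the ALARP region discussed below, where cost-benefit evaluation decides on further risk reduction.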

Risk acceptance criteria

Railway transport

For the railway area, different railway operators in the UK have suggested the risk criteria in the following table.

Failure modes in reliability analysis

Typical failure modes to be considered in a

reliability analysis of a structural system are

yielding, buckling (local and global), fatigue

and excessive deformations.

The failure modes (limit states) are generally

divided in:

Ultimate limit states

Conditional limit states

Serviceability limit states

Ultimate limit states

Ultimate limit states correspond to the

maximum load carrying capacity which can

be related to e.g. formation of a mechanism

in the structure, excessive plasticity, rupture

due to fatigue and instability (buckling).


Conditional limit states

Conditional limit states correspond to the

load-carrying capacity if a local part of the

structure has failed. A local failure can be

caused by an accidental action or by fire.

The conditional limit states can be related to

e.g. formation of a mechanism in the

structure, exceeding of the material strength

or instability (buckling).

Serviceability limit states

Serviceability limit states are related to

normal use of the structure, e.g. excessive

deflections, local damage and excessive

vibrations.

The fundamental quantities that characterize

the behaviour of a structure are called the

basic variables and are denoted X = (X₁, ..., Xₙ), where n is the number of basic stochastic variables.

Typical examples of basic variables are

loads, strengths, dimensions and materials.

Serviceability limit states

The basic variables can be dependent or

independent, see below where different

types of uncertainty are discussed.

A stochastic process can be defined as a

random function of time such that for any

given point in time the value of the

stochastic process is a random variable.

Stochastic fields are defined in a similar way, with time exchanged for space.

Uncertainty models

The uncertainty modelled by stochastic variables can

be divided in the following groups:

Physical uncertainty:

or inherent uncertainty is related to the natural

randomness of a quantity, for example the

uncertainty in the yield stress due to production

variability.

Measurement uncertainty:

is the uncertainty caused by imperfect

measurements of for example a geometrical

quantity.


Model uncertainty

Model uncertainty is the uncertainty related to

imperfect knowledge or idealizations of the

mathematical models used or uncertainty related to

the choice of probability distribution types for the

stochastic variables.

The above types of uncertainty are usually treated

by the reliability methods which will be described in

the following chapters. Another type of uncertainty

which is not covered by these methods is gross errors or human errors.

These types of errors can be defined as deviation of

an event or process from acceptable engineering

practice.

MSc.

Asset Maintenance and Management

Reliability Assessment of Structures

Chapter-4

Methods of Reliability Assessment

Reliability Assessment

Structural improvements, which are

necessary to support affordable access to

space, are in the materials, joints,

reliability, and the design system process.

Reliability improvement provided the widest

range of benefits with the least committed

resources.

In the next chapters, the problem of

estimating the reliability or equivalently the

probability of failure is considered.

Reliability Assessment

Generally, methods to measure the reliability of a

structure can be divided into four groups:

Level I methods:

The uncertain parameters are modeled by one

characteristic value, as for example in codes based

on the partial coefficients concept.

Level II methods:

The uncertain parameters are modeled by the mean

values and the standard deviations, and by the

correlation coefficients between the stochastic

variables.

The stochastic variables are implicitly assumed to be

normally distributed.

The reliability index method is an example of a level

II method.


Reliability Assessment

Level III methods:

The uncertain quantities are modeled by their joint

distribution functions.

The probability of failure is estimated as a measure

of the reliability.

Level IV methods:

In these methods the consequences (cost) of failure

are also taken into account and the risk

(consequence multiplied by the probability of failure)

is used as a measure of the reliability.

In this way different designs can be compared on an

economic basis taking into account uncertainty,

costs and benefits.

Reliability Assessment

If the reliability methods are used in design they

have to be calibrated so that consistent reliability

levels are obtained.

Level I methods can e.g. be calibrated using level II

methods, level II methods can be calibrated using

level III methods, etc.

Several techniques can be used to estimate the

reliability for level II and III methods, e.g.

Simulation techniques:

Samples of the stochastic variables are generated and the

relative number of samples corresponding to failure is used

to estimate the probability of failure.

Reliability Assessment

Simulation techniques:


The simulation techniques are different in

the way the samples are generated.

Reliability Assessment

FORM techniques:

In First Order Reliability Methods the limit

state function (failure function) is linearized

and the reliability is estimated using level II

or III methods.

FORM techniques for level II methods are

described in this chapter.
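For a linear limit state in independent normal variables, the Level II result is exact, which allows a compact sketch of the FORM quantities β and Pf = Φ(−β); the numbers used below are hypothetical:

```python
import math

def form_linear(mu_r, sig_r, mu_s, sig_s):
    """Reliability index and failure probability for the linear limit state
    g = R - S with R and S independent normal variables (exact in this case;
    FORM reduces to this after linearization)."""
    beta = (mu_r - mu_s) / math.hypot(sig_r, sig_s)
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta)
    return beta, pf

# Hypothetical example: strength R ~ N(350, 35) MPa, stress S ~ N(200, 40) MPa.
beta, pf = form_linear(350, 35, 200, 40)
print(round(beta, 3))  # ~2.822
```

For nonlinear limit states, FORM linearizes g at the design point and returns the same pair (β, Pf) as an approximation rather than an exact result.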


Reliability Assessment

SORM techniques:

In Second Order Reliability Methods a

quadratic approximation to the failure

function is determined and the probability of

failure for the quadratic failure surface is

estimated.


METHODS OF

STRUCTURAL RELIABILITY THEORY


FIRST ORDER RELIABILITY

ILLUSTRATED WITH EXAMPLE

(FROM THE PERSPECTIVE OF NASA)


Introduction of FORM Techniques

Structural improvements, which are

necessary to support affordable access to

space, are in the materials, joints,

reliability, and the design system process.

The first-order reliability method was

developed because it offered the best

approach to surmount deterministic

inherent deficiencies and to accomplish

them within prevailing cultures and

practices.

Introduction of FORM Techniques

It is the simplest, most expedient, and the most

developed and familiar of all reliability methods.

Because first-order reliability is restricted to

normal probability distributions, the proposed

approach of normalizing all skewed distributions

leads to the universal adoption of the first-order

reliability method.

This pragmatic technique of using only the

engaged half of the distribution data to construct a

symmetrical (normal) distribution is seemingly

sound.

Introduction of FORM Techniques

Undue difference between the actual and

the normalized distribution may be treated similarly to other modeling design errors.

Both deterministic and reliability methods

are shown to achieve structural safety by

sizing structural forms or elements through

specified ratios of resistive to applied

stresses.

The deterministic method specifies the ratio

by an arbitrarily selected safety factor.

Introduction of FORM Techniques

The proposed method derives the reliability design

factor from specified reliability criteria.

Both applications are illustrated through a

structural design procedure outlined in figure-1, to

provide an orderly phasing and development

process of statistical data and design parameters,

and to explore their relationship and control over

reliability.

Reliability selection criteria are briefly addressed.

This study has been limited to semi-static

structures that comprise over 60 percent of the

aero-structural weight.


Introduction of FORM Techniques

Pertinent excerpts from earlier concept

developments are included for

completeness, and published standard

methods are referenced.

Though lacking eloquence, it is hoped the

visibility of analytical illustrations and depth

of discussions and techniques are sufficient

to provide the structural deterministic

community and the topic novice the

understanding of its application and

motives for improvements.

Introduction of FORM Techniques

Failure Concept

Central to the appreciation of the proposed

universal first-order reliability method is a

fundamental understanding of the failure

concept and its necessary conditions.

All observed and measured phenomena

may be reduced to probability distributions.

When applied stress demand, F_A, and resistive stress capability, F_R, are defined by probability distributions, failure occurs when the tails of the two distributions overlap, as shown in figure-2.

Failure Concept


Failure Concept

Their tail-overlap area suggests the

probability that a weak resistive material

will encounter an excessively applied stress

to cause failure.

The probability of failure is reduced as the tail-overlap area decreases: by increasing the difference of the resistive and applied stress means, m_R − m_A, and as the natural scatter of the two distributions decreases.

Failure Concept

Controlling features

The difference between the applied and resistive-

stress distribution means is the only designer

control (active) parameter of the area of

overlapped tails.

Tail shapes are defined by passive (firm) design

variables which are uniquely fixed by their natural

scatter around their distribution means.

In a given structural form having common material

properties, the resistive-stress distribution shape

may be constant through all regions.

However, local applied-stress distribution shapes

may vary throughout the structure due to local

abrupt changes in geometry, loads, metallurgy,

temperature, etc.

Failure Concept

Controlling features

Therefore, any change in applied-stress

distribution shape without a corresponding change

in the means will change the probability of failure

in that region, resulting in non-uniformly reliable

structures, and worse, unsuspected weak regions.

In engineering applications, these shapes are

modeled by distribution functions to estimate the

probability of a desired value for an assigned range

of distribution.

As shapes become more complex, probability

distribution types and complexities increase, which

prolongs lead time, and intensifies labor, skills,

and training.

Failure Concept

Controlling features

The normal distribution shape is the simplest, best

developed, most known, and expedient.

Its distribution is symmetrical about the mean, and

it is completely characterized by two variables.

As in most engineering applications, only the

distribution side producing the worst-case design

problem is of any interest, as was clearly

demonstrated by the failure concept of figure-2.

Only data from the right half of the applied-stress

distribution (greatest demand) are engaged with

data from only the left side (weakest capability) of

the resistive stress.

Data from the other two disengaged-distribution

halves are irrelevant to the failure concept.


Failure Concept

Controlling features

This inherent observation, as well as experience

with related data shapes and the central limit theorem, leads the author to presume that all

probability distributions associated with semi-static

structural loads, stresses, and materials may be

made universally symmetrical by constructing a

mirror image of the engineering engaged side

about the peak frequency value of the distribution.

This constituted symmetrical distribution entitles

its adaptation to all practical normal distribution

techniques and advantages.

The universally normalized distribution is

characterized by two parameters, the mean and

the standard deviation.

Failure Concept

Controlling features

The mean is estimated by:

Failure Concept

Controlling features

The universal transformation of random variables

to normal distributions simplifies a wide range of

structural interfaces, applications, and design

specifications.

Should an inconsistency appear between

normalized and another assumed distribution, the

normalizing approach is pragmatically preferred

and the difference is treated as all other design

modeling errors.

Normal distribution is easiest to learn and simplest

to apply, and it is pivotal to the development of the

universal first order reliability method.

Failure Concept

Tolerance Limit

An extensively practiced feature of normal

distribution by loads, stress, and materials

disciplines is the specification of a design

parameter through the statistical characterization

of the tolerance limit.

Tolerance limits specify the mean and the

probability distribution range on either left or right

side of the mean. It is specified by:


Failure Concept

Tolerance Limit

or, in using equation (4), the tolerance limit may

be more conveniently expressed as a product of the

mean value and dimensionless variables,

The designer-controlled N-factor specifies the

probability range, as illustrated on the probability

density distribution in figure-3.

It is sometimes referred to as the tolerance limit

coefficient, but here it is referred to as the

probability range factor.
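In this product form the tolerance limit is m(1 ± N·V), with V = s/m the coefficient of variation; a minimal sketch (the form is inferred from the surrounding text, since equation (4) itself is not reproduced here):

```python
def tolerance_limit(mean, n_factor, cov):
    """Tolerance limit in the dimensionless product form m * (1 + N*V),
    where V = s/m is the coefficient of variation (inferred form; the
    slide's equation did not reproduce). Use a negative N for the lower
    (capability) side of the distribution."""
    return mean * (1 + n_factor * cov)

# Hypothetical numbers: an applied load with mean 100 kN and V = 0.10 at
# N = +3 gives the upper limit; a strength with mean 350 MPa and V = 0.05
# at N = -3 gives the lower limit.
print(round(tolerance_limit(100, 3, 0.10), 2))   # 130.0  (kN, upper)
print(round(tolerance_limit(350, -3, 0.05), 2))  # 297.5  (MPa, lower)
```

The same expression m ± N·s appears throughout the later load and material tolerance-limit slides; only the sign and the choice of N (or K) change.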

Failure Concept

Tolerance Limit


A probability range factor specified by N = 1, 2, 3, or 4 standard deviations about the mean of a normal distribution is calculated to capture 68.27, 95.45, 99.73, or 99.99 percent of the phenomenon population, respectively.

A probability range factor N = 1, 2, 3, or 4 of a one-sided distribution is calculated to capture 84.13, 97.72, 99.87, or 99.997 percent of the phenomenon population, respectively.

Failure Concept

Tolerance Limit


A positive deviation specifies the upper tolerance

limit usually associated with demands, and a

negative range factor refers to the weaker side of

the capability.

One standard deviation includes the probability

range to the inflection point of the normal

distribution curve.

While a minimum of 30 samples may provide a

workable mean stress, more than 4 times that

many samples may be required to establish a good 3-standard-deviation stress.

As the sample size increases, the natural

probability range factor approaches 2.


Illustration Models

The illustration model selected was a simple static

structure conceived to demonstrate the

normalization and characterization of engineering

data and the formatting of the stress form and

sizing required for combining multi-axial stress

components.

The deterministic and first-order reliability methods

are illustrated through analytical models for

maximum visibility, understanding, and

implementation of fundamental features to a

variety of practical design conditions leading to a

robust structural link.

Here robustness is understood as performing well, reliably, and at the lowest life-cycle cost.

Illustration Models

Configuration

The structural system environments consist of a tension load, P, at an angle, θ, from the axis, and a torsional load, T, to be transmitted a distance, L, to point x = 0.

These requirements establish the envelope size

and operating environments that shape and

optimize load paths to produce a high-performance

structure.

A tapered round shaft, shown in figure-4, provides

the optimum configuration for the specified type

loads, paths, and arrangements.

The single surface, shape, and limited dimensions

simplify production and inspection, all of which

minimize rejects and costs.

Configuration

The third robust condition is operational reliability

that focuses on determination of the shaft radius,

r. For brevity of presentation, the radius will be

determined only at x = 0.

After determining the scope of the problem, noting

its load paths, and framing the component to

minimize the load influences on structural form

sizing, then the engineering data development and

stress response formulations follow that are

required to determine the radius for a robust

structural link.


Illustration Models

Data Development

Imposed tension and torsion environment

data are assumed to be based on a series

of observed measurements reduced into a

frequency distribution, or probability

histograms, as shown in figure-5.

The base of the histogram is bounded by

successive and equal ranges of measured

values, and the heights represent the

number of observations (frequency) in each

range.

Data Development

Data Development

To illustrate the direct normalization of a

skewed distribution, the torque frequency

distribution data of figure-5 are applied to

equations (1) through (4).

Because the greater torque side defines the

worst demand case, only data from the

shaded right side are used in figure-6 to

calculate the normalized distribution

variables.
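A minimal sketch of that mirroring step (the sample data below are hypothetical, and the centre is approximated here by the median rather than the histogram peak):

```python
import statistics

def normalize_engaged_side(samples, engaged="right"):
    """Mirror the engaged half of a skewed sample about its centre (here
    approximated by the median) to build a symmetric data set, then return
    the mean and standard deviation of the constructed distribution."""
    centre = statistics.median(samples)
    if engaged == "right":
        half = [x for x in samples if x >= centre]
    else:
        half = [x for x in samples if x <= centre]
    mirrored = half + [2 * centre - x for x in half]  # reflected copies
    return statistics.mean(mirrored), statistics.stdev(mirrored)

# Hypothetical right-skewed torque measurements (greater torque = worst case):
data = [10, 11, 11, 12, 12, 12, 13, 14, 16, 19]
mean, sd = normalize_engaged_side(data, engaged="right")
print(round(mean, 2), round(sd, 2))  # 12.0 3.28
```

By construction the mirrored data set is symmetric about the centre, so its mean and standard deviation fully characterize the normalized distribution, as the text argues.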

Data Development


Data Development

The materials selection task interfaces with

all structural disciplines, and its result has

the greatest and most lasting effect on

robust design.

All material performance, manufacturing

processes, control points, and their costs

are researched and traded.

The structural analyst's interest at this

interface is the assurance of robust material

performance and a sufficient mechanical

properties data base defined with tolerance

limit variables.

Data Development

Experience or knowledge from previous

similar applications of critical and complex

regions subjected to forging, spinning,

welding, cold shaping, etc., manufacturing

processes are scrutinized for potential

bottlenecks.

Figure-7 shows examples of strength

frequency distributions data assumed for

developing the required capability

properties for dimensioning a structural

component to a specified reliability.

Data Development

Exceeding the yield strength deforms the

part, which may change boundary

conditions and compromise the part's

operation.

Exceeding the ultimate strength by

anomalous loading will fracture the part, thus leading to serious losses.

Data Development


Data Development

Maximum expected loads and minimum

material strengths are specified through the

tolerance limit for specific events such that

any required proportion of their distribution

may be represented in response analyses.

Passive statistical variables that

characterize tolerance limits are listed in

table-1.

Currently, there is no uniform criterion for

specifying the probability range factor

across disciplines and projects.

Data Development

Load disciplines generally select the

probability range factor for specific events

according to their data and experience

base.

Applying the commonly used probability

range factor of N = 3 to the statistical

variables from table 1, the loads tolerance

limits are:

Data Development


Data Development

The material probability range factor is

specified by a K-factor.

Because of the inherent randomness in

specimens and testing, the same test

conducted on the same number of

specimens by different experimenters will

result in different means and standard

deviations.

Data Development

To ensure, with a certain percent

confidence, that other portions are

contained in the population, a K-factor is

determined to account for the sample size

and proportion.

Figure-8 provides K-factors for random

variables with 95-percent confidence levels

with three commonly used probabilities in

one-sided normal distributions.

Data Development

The K-factor is designer controlled by the

specification of the number of samples required, as

noted in figure-8.

The K-factor rate increases sharply for all

probabilities using less than 30 samples.

Decreasing the sample size is seen in equation (5)

to decrease the allowed material performance, and

it is compounded when the material coefficient of

variation is large.

For large acreage of structures, trading cost for

increasing the sample size may decrease the cost

of payload delivery.
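The exact K-factor comes from the noncentral t distribution, but a commonly used closed-form approximation reproduces the K ≈ 3 quoted below for an A-basis allowable with 32 samples. The approximation formula is an assumption of this sketch, not taken from the slides:

```python
import math

def z(p):
    """Standard normal quantile, found by bisection on the erf-based CDF."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def k_factor(n, proportion=0.99, confidence=0.95):
    """Approximate one-sided normal tolerance factor for sample size n
    (a standard closed-form approximation; exact values come from the
    noncentral t distribution)."""
    zp, zg = z(proportion), z(confidence)
    a = 1 - zg**2 / (2 * (n - 1))
    b = zp**2 - zg**2 / n
    return (zp + math.sqrt(zp**2 - a * b)) / a

print(round(k_factor(32), 2))  # about 3.0: A-basis (99/95) with 32 samples
```

The sharp growth of K below roughly 30 samples, noted in the text for figure-8, falls out of the same formula: fewer samples force a larger K and hence a lower allowed material performance.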


Data Development

Most of NASA's and DOD's material

properties are specified by "A" and "B"

basis.

The "A" basis allows that 99 percent of

materials produced will exceed the specified

value with 95 percent confidence.

The "B" basis allows 90 percent with the

same 95 percent confidence.

Again using statistical variables from table

1 and assuming an A-basis material, the

probability range factor for 32 samples is

K = 3.

Data Development

The material tolerance limit for yield strength is:

Illustration Models

Stress Response Models

The tension and torque loads shown in figure-4

were chosen to illustrate applications of normal

and shear type stresses.

The format required is specifically illustrated to

combine multi-axial stress components into

response models and for calculating their response

combined-mean and standard-deviation values as

required for the reliability method.

The oblique tension load produces axial and bending loads that induce normal, varying bending-normal, and transverse shear loads across the shaft length.

Stress Response Models

The ratio of length to diameter qualifies it as a long

beam for basic strength of materials formulation.

The round section is an optimum element to sustain

torsional shear.

The local simultaneous maximum stress responses

to bending, tension, and shear occur on the upper

boundary which sizes the structural form.

The normal maximum stress at x = 0 is expressed

by:


Stress Response Models

Though unnecessary for some deterministic

problems, the stress response must be

expressed as a product of the random

variable (load) and a stress-form coefficient

for reliability methods.

These correspond to the load and stress-

transformation matrices, respectively, in a

multi-degree-of-freedom dynamic problem.

The normal stress response of equation

(11) is then defined by:

Stress Response Models

Stress Response Models

where L_yz = T is defined by the tolerance limit of equation (8), and the stress-form coefficient from equation (12) is

Stress Response Models

Response equations (13) and (14) predict

the multi-axial component stresses that

must be combined so as not to exceed

material strengths derived from figure-7

statistical data.

Since these material strengths are based on

uniaxial tension tests, the combined normal

and shear applied stress (demand) values

must be compatible with, and correlated to, the uniaxial-test-derived strengths (capabilities).


Illustration Models

Combined stresses

A commonly used criterion for combining

multi-axial stresses into uniaxial stress is the

minimum strain energy-distortion theory,

which supposes that hydrostatic strain

(change in volume) in a metallic structure

does not cause yielding, but changing shape

(shear) does cause permanent deformation.

This limit of multi-axial stress state is

empirically related to the uniaxial tensile

yielding, and it is reasonably consistent with

experimental observations.

Combined stresses

It is sometimes referred to as the von Mises failure criterion and is expressed by:
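Since the combined-stress equation and the shaft stress responses are not reproduced in this extraction, the sketch below uses the standard strength-of-materials expressions for a solid round section (an assumption, not the slides' own equations) to form the von Mises equivalent stress at x = 0:

```python
import math

def shaft_von_mises(P, theta_deg, T, L, r):
    """von Mises equivalent stress on the outer fibre at x = 0 of a solid
    round shaft under an oblique tension P (angle theta to the axis) and
    a torque T. Standard solid-section formulas are assumed:
    sigma_axial = P cos(th) / (pi r^2), sigma_bend = 4 M / (pi r^3) with
    M = P sin(th) L, and tau = 2 T / (pi r^3)."""
    th = math.radians(theta_deg)
    sigma_axial = P * math.cos(th) / (math.pi * r**2)          # direct tension
    sigma_bend = 4 * P * math.sin(th) * L / (math.pi * r**3)   # bending
    tau = 2 * T / (math.pi * r**3)                             # torsional shear
    sigma = sigma_axial + sigma_bend
    # Uniaxial (von Mises) equivalent of combined normal + shear stress:
    return math.sqrt(sigma**2 + 3 * tau**2)

# Hypothetical numbers: P = 10 kN, theta = 10 deg, T = 500 N*m, L = 1 m, r = 30 mm.
s = shaft_von_mises(10e3, 10, 500, 1.0, 0.03)
print(round(s / 1e6, 1), "MPa")
```

This equivalent stress is the uniaxial demand value that is then compared against the uniaxial-test-derived strength tolerance limits.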

Deterministic Methods

The deterministic method is dominantly used for

sizing structures in the aerospace industry with

mixed justifications.

It is the easiest technique to apply and verify.

It is generally perceived to be conservative, but

the method harbors enough unsuspected

deficiencies that its conservatism may be

contributing to its half-century of success.

It is the preferred method for sizing multi-

component systems having multi-critical regions

per component, and whose combined structural

weight is not payload-performance sensitive.

Deterministic Methods

It is shown to be limited in safety

assessments.

The method's design data, parameters, and

specified probability ranges are

independently developed by loads and

materials disciplines and are provided to

stress analysts to size (non-optimally) and

test structural elements and forms to

standard safety factors.


Deterministic Methods

Concept

The deterministic method assumes that a given

structural system safety may be specified by an

arbitrarily selected ratio of single-valued material

minimum strength and maximum applied stress.

That specified ratio is the conventional safety

factor,

Deterministic Methods

Concept

A common safety factor criterion for semi-static structures is a verified ratio of 1.0 on yield and 1.4 on ultimate strength.

Though resistive and applied stresses are generally

provided and specifically applied as single values,

they are developed by their respective disciplines

with probability ranges specified through tolerance

limits.

Applied-stress components are combined through

the Mises criterion, and the resulting uniaxial stress

is expressed by the tolerance limit of equation (17).

Deterministic Methods

Concept

The minimum resistive stress based on yield

or ultimate stresses is characterized by the

tolerance limits of equations (9) or (10).

Incorporating the resistive- and applied-

stress tolerance limits into equation (18),

the safety factor may be decomposed with

statistical and designer control variables,

Deterministic Methods

Concept

In constructing design parameters from

equation (19) into the failure concept of

figure-2, the deterministic concept emerges

as dividing the difference of the resistive-

and applied-stress means into three distinct

zones, as shown in figure-9.

The sum of these zones,


Deterministic Methods

Concept

governs the tail-overlap lengths to satisfy

one condition of the failure concept.

But the method ignores the corresponding

size of the overlap area, which is the second

failure concept condition and, therefore,

cannot predict its combined reliability.

To understand the deterministic failure

governing technique, it should be noted that

each end zone specifies a probability range

to control the tail overlap intercept.

Deterministic Methods

Concept

Deterministic Methods

Concept

Zone l_A is the probability range of the combined applied stresses, l_A = N_A·s_A, derived from equation (17).

Zone l_R is the probability range of the resistive stress, l_R = K·s_R, from equation (9) or (10).

Both zones independently control the difference of their means through the designer's arbitrary selection of the probability range factors, N_A and K.

Deterministic Methods

Concept

The mid-zone l_o does not explicitly specify a probability range, but its included safety factor does effectively increase the applied-stress probability range factor, N_A.

When the safety factor is greater than unity, the combined applied-stress effective probability range factor is extended by:


Deterministic Methods

Concept

Specifying a 1.0 safety factor, the effective

range factor is identically the applied-stress

specified probability range factor.

Applying a 1.4 safety factor with N_A = 3 will effectively increase it about three times, with

a probability value that can only be

established as being very safe.

On the other end, operating under the

maximum specified environments with a sub-

marginal safety factor will reduce the

applied-stress probability, which increases

the tail overlap and probability of failure.
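Equation (20) itself is not reproduced in this extract, but its effect can be sketched. The closed form below is a hypothetical reconstruction, assuming the design stress level is the safety factor applied to the N_A-sigma combined applied stress; the function name and formula are illustrative, not the text's equation.

```python
def effective_range_factor(sf, n_a, cv_a):
    """Hypothetical effective applied-stress probability range factor.

    Assumption: the design stress level is the safety factor SF applied
    to the N_A-sigma combined applied stress, so the number of
    applied-stress standard deviations s_A from the mean to that level is
        N_eff = [SF*(mu_A + N_A*s_A) - mu_A] / s_A
              = SF*N_A + (SF - 1)/cv_A,
    with cv_A = s_A/mu_A the applied-stress coefficient of variation.
    """
    return sf * n_a + (sf - 1.0) / cv_a

# SF = 1.0 recovers the specified range factor exactly.
print(effective_range_factor(1.0, 3.0, 0.10))   # 3.0

# SF = 1.4 with N_A = 3 and ~9% load scatter roughly triples it,
# consistent with the "about three times" observation above.
print(effective_range_factor(1.4, 3.0, 0.09))   # ~8.6
```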


Since the applied-stress probability range

factor is related to operational loads, and

because operational loads are verified by

limited field or flight tests at a much later

development phase, this effective

probability range parameter could serve as

another useful index of the unverified load

in a stress audit.


While the safety factor margin would verify the

pass-or-fail response of the test article, the

effective range factor would predict the total

probability of the applied test load using the test

derived safety factor in equation (20).

The test derived safety factor would further identify

the proportion of the effective range factor verified.

This combination would contribute information for

design acceptance or modification, provided the

coefficient of variation is made available from the

deterministic method.


In particular, safety factors exceeding unity

will expand the difference of the distribution

means through their inclusion into the mid-

zone and the net extended difference is

expressed by:


Deterministic Methods

Application

Two primary applications of the

deterministic method are to size a structural

form to a specified safety factor and to

predict the safety factor of an existing

structural article or design.

A structural element, or form, is sized

through the Mises criterion of equation (15),

which is equated to the maximum allowable

stress criterion of equation (18), which, in

turn, is limited by a specified safety factor.


Prediction of a structural safety factor is the

reverse of sizing and is more direct,

therefore, only the structural form sizing of

the figure-4 configuration needs to be

illustrated.

In sizing a structural form, the deterministic

tension load of equation (7a) and the

stress-form coefficient of equation (13b) are

substituted into equation (13) to give the

deterministic single-value normal-stress

component expressed with the unknown

radius,


Substituting equations (23c), (23d), and SF =1

into equation (18), the radius dimension is solved

by the Newton method to be r = 1.14 inches.
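The figure-4 loads and the exact form of equation (18) are not reproduced in this extract, so the sketch below uses hypothetical stand-in values (P, M, F_allow) and an assumed axial-plus-bending stress on a solid circular section; it only illustrates the Newton iteration used to solve for the radius.

```python
from math import pi

P = 50_000.0        # axial load, lb (assumed)
M = 30_000.0        # bending moment, in-lb (assumed)
F_allow = 36_000.0  # allowable stress, psi (assumed)
SF = 1.0            # safety factor

def stress(r):
    # Normal stress at the extreme fiber of a solid circular section:
    # axial P/(pi*r^2) plus bending M*c/I = 4M/(pi*r^3).
    return P / (pi * r**2) + 4.0 * M / (pi * r**3)

def f(r):
    # Root of f gives the radius where the stress reaches F_allow/SF.
    return stress(r) - F_allow / SF

def df(r):
    return -2.0 * P / (pi * r**3) - 12.0 * M / (pi * r**4)

r = 1.0  # initial guess, inches
for _ in range(50):
    step = f(r) / df(r)
    r -= step
    if abs(step) < 1e-10:
        break

print(f"required radius: {r:.3f} in")
```

For these assumed numbers the iteration converges near r = 1.16 in; the text's r = 1.14 in follows from its own loads and allowables.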

Usually, safety criteria require a structure to be verified to no less than the specified design safety factor.

To avoid premature test failure and potential

redesign, an estimated uncertainty factor must be

lumped into equation (18) to compensate for

modeling errors and human assembly dispersions,


Modeling errors include boundary assumptions,

response models, loads, etc.

Estimates may be based on structural complexities

and sensitivities or from knowledge of past test

deficiencies.

Not all uncertainties are equally significant on any

one structure.

Estimating a lump error of 10 percent and using

equation (24), the radius is recalculated to a

minimum requirement of r = 1.19 inches.

Repeating the analysis with SF = 1.4 on ultimate strength, the minimum required radius is r = 1.13 inches, which is less than the yield-strength case and confirms the yield-strength condition as the worst design case.


The production specifications of the nominal diameter and its tolerance dimensions are based on sensitivity analyses and trades to produce a robust component.

Note that a 10-percent reduction in

allowable stress in the yield strength mode

increased the radius 4.4 percent, which

should increase the weight 9 percent.

A 9-percent weight increase on large

structural forms could be a significant

payload penalty.
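The sensitivity numbers above can be checked directly: for a solid member of fixed length, weight scales with cross-sectional area, i.e. with the radius squared.

```python
# Checking the text's sensitivity figures: a 10-percent allowable-stress
# reduction grew the radius from 1.14 in to 1.19 in, and weight of a
# solid member of fixed length scales with r**2.
r_nominal, r_derated = 1.14, 1.19  # inches, from the sizing example

radius_increase = r_derated / r_nominal - 1.0
weight_increase = (r_derated / r_nominal) ** 2 - 1.0

print(f"radius: +{radius_increase:.1%}, weight: +{weight_increase:.1%}")
# radius: +4.4%, weight: +9.0%
```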


These types of sensitivity analyses also

provide a basis for specifying raw materials

acceptance and processing, machining and

heat treatment tolerances, assembly

tolerances, inspection points, etc., and for

trading their life-cycle costs with payload

delivery costs.

Deterministic verification consists of

experimentally validating the structural

response through the specified safety factor

applied to equation (18).


Because the probability of applied loads varies from project to project and from component to component, and

because the safety factor is essentially a

hit-or-miss proposition, the safety factor

alone is not an absolute reference of safety.

Verification tests resulting in sub-marginal

safety factors are usually resolved by

intuitive estimates of probability and the

consequence of failure, and by similar

collective experiences with minimum

operational safety factors.


Deterministic Methods

Deficiencies

Perhaps the most detrimental feature in the

deterministic method is its inability to design and

predict the structural reliability over all regions of a

component through a fixed specified safety factor as

commonly assumed.

Because the tail-overlap area of the interacting applied-stress and resistive-stress distributions is governed only by the difference of their means, and recalling from the failure-concept conditions that changes in the combined applied-stress distribution shape, s_A, acting at critical regions cannot be recognized for local sizing, a constant safety factor cannot provide a uniformly reliable structure.


Since the probability range factor and the

safety factor are independently specified,

and both simultaneously govern the tail-

overlap through the applied-stress effective

range factor expressed by equation (20), a

stress audit based on safety factor margins

alone is incapable of assessing relative

safety or of necessarily exposing the

weakest structural region.

Relative safety assessment of different

material parts becomes more clouded.


A test-verified safety-factor margin may exceed specification but, combined with the low probability range factor represented in equation (20), may still result in a sub-marginally stressed region that is not visible to the analyst. Omission of discipline probability contributions, and the generic shortcoming of ignoring local distribution shapes, compound the fading confidence of some stress audits to evaluate critical reliabilities or to identify the weakest links through safety-factor margins.


Another weakness in the method is that by

imposing a standard safety factor on all

structural materials, the structural

reliability is dependent on the strength of

selected materials, as expressed by the

mid-zone stress of equation (22).

Holding the safety factor constant and

increasing the resistive stress decreases the

available operational elastic range of high-

performance materials.


Figure-10 depicts the relative stress

performance of high-strength steel and

aluminum structures using current safety

factors.

Though the specific yield strengths of aluminum and steel are relatively the same (lightest shade), the contingent stress (medium shade) imposed on steels as backup for anomalous loads is double that of aluminum, which inequitably denies elastic stress (darkest shade) that could serve operational performance.


Figure-10 further illustrates that a stress

audit indicting a steel structure with a

negative safety margin may have more

reserved operational stress (darkest shade)

than some aluminum structures with

positive margins and negligible denied

elastic stress.


First Order Reliability Method

Many techniques have been investigated

and others are evolving for providing

reliable structures, but the one that

promises to be most compatible with

prevailing deterministic design techniques

and with the culture of most analysts is the

first-order reliability method.

The first-order reliability method assumes

that applied and resistive stress probability

density functions are normal and

independent and may be combined to form

a third normal expression;
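With both distributions normal and independent, the margin Z = R − A is itself normal, with mean mu_R − mu_A and standard deviation sqrt(s_R² + s_A²), which gives the familiar first-order safety index. The numbers below are illustrative, not the text's example.

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def safety_index(mu_r, sd_r, mu_a, sd_a):
    """First-order safety index for independent normal resistive (R)
    and applied (A) stresses: the margin Z = R - A is normal with mean
    mu_r - mu_a and standard deviation sqrt(sd_r**2 + sd_a**2)."""
    return (mu_r - mu_a) / sqrt(sd_r**2 + sd_a**2)

# Illustrative stress statistics (assumed):
beta = safety_index(mu_r=60_000, sd_r=3_000, mu_a=40_000, sd_a=4_000)
reliability = norm_cdf(beta)
print(beta, reliability)  # beta = 4.0, reliability ~ 0.99997
```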


First Order Reliability Method

Proposed Reliability Concept

In designing to a specified reliability, its related

safety index of equation (25) should be

characterized with design control and passive

variables in common with current deterministic

computational methods to facilitate understanding

and the technical bridging to the reliability method.

The deterministic stress zones in equation (21) and

figure 9 embody these design variables, and their

sum further defines the difference of the applied-

and resistive-stress means in common with the

safety index numerator in equation (25).


Standard deviations required by the

denominator are defined by the

deterministic respective zones.

To incorporate these expressions into the

safety index, tolerance limit variables of the

end zones are rearranged and abbreviated

to ease their repeated use.


Zone l_R in figure-9 is the probability contribution of the resistive stress, which is characterized by tolerance-limit equation (9a), and by which the resistive mean stress may be expressed as


At this point, it may be noted that the reliability

method established three criteria over the

deterministic's two, which deserve comparison.

Unlike the deterministic method's arbitrarily selected safety factor, the reliability design factor, f_SF, is solved from the reliability criterion, equation (30), to satisfy a specified reliability, Z. As in the deterministic method, the allowable applied-stress criterion, equation (31), is constrained by the reliability design-factor criterion.


As in the deterministic method, the

structure is sized through the Mises

criterion, equation (A1), equated to the

maximum allowed applied stress.

But unlike it, the combined tolerance limit

variables are statistically derived from the

Mises criterion and iterated back into the

reliability criterion.


As in the deterministic method, the

reliability method basic applications are to

size a structural form to satisfy a specified

reliability, or to determine the reliability of

an existing sized structure.

Structural sizing is an iterative process

which should be initiated by first estimating

the structural size using the deterministic

method.


This approach would allow sharing common design parameters and techniques and would provide a comparison of the final results. The estimated size is then substituted into the stress-form coefficients and combined with the loads tolerance limits to define the multi-axial component stresses of equations (13) and (14).


These multi-axial stress components are combined into a uniaxial stress through the Mises criterion of equation (A1). Reducing the tolerance-limit stress components to single values reduces the resulting uniaxial stress to a worst-on-worst deterministic single value.


To derive the statistical tolerance-limit

variables of the uniaxial stress based on the

Mises criterion, and as required by the

reliability criterion of equation (30), the

combined mean, standard deviation, and

tolerance-limit coefficient are computed

through the error propagation law.
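The error propagation law referenced here computes the combined standard deviation from first-order partial derivatives. A minimal sketch, using a plane-stress von Mises function and numerical partials; the stress means and deviations are assumed, not the text's values.

```python
from math import sqrt

def mises(sx, sy, tau):
    # Plane-stress von Mises equivalent stress.
    return sqrt(sx**2 - sx * sy + sy**2 + 3.0 * tau**2)

def propagate(fn, means, sds, h=1e-4):
    """First-order error propagation: combined standard deviation is
    sqrt(sum((df/dx_i * sd_i)**2)), with partials taken at the means
    by forward finite differences."""
    f0 = fn(*means)
    var = 0.0
    for i, (m, s) in enumerate(zip(means, sds)):
        bumped = list(means)
        bumped[i] = m + h * max(abs(m), 1.0)
        dfdx = (fn(*bumped) - f0) / (bumped[i] - m)
        var += (dfdx * s) ** 2
    return f0, sqrt(var)

# Illustrative stress-component means and standard deviations (assumed):
mean_F, sd_F = propagate(mises,
                         means=(30_000.0, 12_000.0, 8_000.0),
                         sds=(2_000.0, 1_500.0, 900.0))
print(mean_F, sd_F)
```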


Applying these variables for the estimated

structural size into the reliability criterion,

the reliability design factor is solved for a

specified reliability, and it is imposed on the

maximum allowable applied-stress criterion

of equation (31).

This size-iteration process is repeated until the disparity coefficient in equation (31) achieves unity, optimizing the design.


Design variables controlling the disparity coefficient that optimizes structural sizing are the independently specified probability range factors, N_A and K, applied to the Mises and reliability criteria.

This is a welcome discovery, in that finally a

compelling requirement for indirectly

coordinating and optimizing multidiscipline

control parameters has been identified by

the reliability criterion.


Reducing the disparity coefficient increases

structural performance and decreases

payload delivery cost.

This supplemental role of the reliability

criterion to optimize performance should

support and enhance reliability systems

trades with payload costs.


The Mises criterion was noted to produce two combined applied stresses for the same-size structure: the worst-on-worst F_A from the deterministic single values of equation (A1), and the statistically derived F_A tolerance-limit format of equation (A12).

They are related by

and imply that the statistically derived

allowable stress is more efficient by a factor

equal to the disparity coefficient.
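The relating equation is missing from this extract, but the disparity between the two combinations can be illustrated: taking every component at its tolerance limit simultaneously (worst-on-worst) always exceeds the statistical combination, in which deviations add in quadrature. The numbers below are illustrative, not the text's example.

```python
from math import sqrt

# Two applied-stress components, each at its N-sigma tolerance limit
# (assumed illustrative values).
means = [20_000.0, 15_000.0]
sds = [2_000.0, 1_800.0]
N = 3.0

# Worst-on-worst: every component taken at its limit simultaneously.
worst_on_worst = sum(m + N * s for m, s in zip(means, sds))

# Statistical combination: means add, deviations add in quadrature.
statistical = sum(means) + N * sqrt(sum(s**2 for s in sds))

disparity = worst_on_worst / statistical
print(disparity)  # > 1: the deterministic stress is conservative
```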


It should be expected that the disparity

coefficient will increase as more multi-axial

stress components with dispersions are

included in the Mises criterion.

Thus, a reliability sized structure should be

optimized by reducing the size to achieve a

disparity coefficient of unity for the

specified safety factor and reliability.

This relationship quantitatively

demonstrates the conservative performance

of the deterministic over the reliability

method.


To predict the reliability of an existing structure, the

actual size is substituted in the Mises criterion and

processed through the reliability criterion as above.

The disparity coefficient is set to unity in the

reliability criterion, and the reliability is directly

determined.

The first-order reliability method generates a uniformly reliable structure, and its application requires no newly skilled analysts and no exceptional understanding or effort beyond the prevailing deterministic method.

It must and does provide for the appropriate

implementation of design uncertainties and for the

reliability response verification which follow.

First Order Reliability Method

Design Uncertainties

For simplicity and expediency, design

iteration phases often use mean value data,

and postpone design dispersions that are not

obviously dominant and to which the system

is not sensitive.

Dispersions and uncertainties that are later

estimated to be significant should be

appropriately implemented into the reliability

criterion.


Uncertainties that are frequently neglected,

and that most often cause premature test

failures, are the modeling uncertainties:

loads, stress, metallurgy, and

manufacturing.

The latter three uncertainties are stress

response related and are lump verified as

either exceeding or diminishing the predicted

safety factor.


Modeling errors encroach on normal

probability distributions through the two

normalized statistical variables with different

sensitivities to reliability.

If the error biases the applied stress mean,

ignoring it will in fact increase its mean

stress, decrease the difference of the means,

and thereby increase the distribution tail-

overlap.


This error may be compensated for by an

accumulative uncertainty factor,

acting on the conventional safety factor. Stress

modeling and boundary conditions are more likely to

bias the mean.

Other examples may be related to dimensional

buildup and final assembly force-fits producing

preloads in operationally critical stress regions.


Modeling manufacturing uncertainties, which

bias the coefficient of variation, are judged

on available data base and related

experiences.

Some estimates may be modeled from

assumed tolerance behavior.

Dynamic loads are dependent on structural

stiffness, which is contingent on material

properties dispersions and on manufacturing

and assembly tolerances.


Contact wear increases tolerances and

reduces stiffness with increasing usage and

must be considered in operational robust

design.

Manufacturing processes are other sources

of uncertainties related to dispersions.

These kinds of uncertainties increase the

applied-stress standard deviation and tail

lengths about the fixed mean, which increase

the tail-overlap.


Standard deviation uncertainties are

combined in conformance with error

propagation laws that follow.

Verification

Appendix


Illustrated Applications (Examples)

Reliability selection criteria

Formulations of reliability selection criteria are still sparse and sketchy for the various structural failure modes. Selection-criteria concepts being considered for semi-static structures range from an arbitrarily agreed-upon standard value, fashioned after the deterministic safety factor, to criteria supporting risk analyses. In the absence of any established selection criterion, it is interesting to examine briefly the interaction of these two concepts with the proposed first-order reliability method.


An immediate demand for a simple and user-

friendly reliability selection criterion would be

to develop a standard safety index derived

from the reliability criterion of equation (30),

based on a range of design variables

representative of successful deterministic

design and operational experiences.

This approach would not only provide a basis

for safety factor and safety index judgment

and correlation, but it would also promote

designer confidence in the transition.


A first-cut safety index was bounded with a

small sample of A-basis materials, 3-sigma

probability forcing function dispersions, and

design variables associated with a current

aero-structure.

The resulting minimum reliability exceeded

a value of four-nines on operational stress

limit (yield stress).


Because this limited analysis revealed a critical

sensitivity of the safety index to the reliability

design factor, the structure should be designed to a

reliability of five-nines in order to guarantee four-

nines.
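Under the first-order normal assumption, a specified reliability Z maps directly to a required safety index through the inverse standard normal CDF, which makes the four-nines and five-nines levels concrete:

```python
from statistics import NormalDist

# Safety index required for a given reliability under the first-order
# normal assumption: beta = Phi^{-1}(Z).
for label, z in [("four nines", 0.9999), ("five nines", 0.99999)]:
    beta = NormalDist().inv_cdf(z)
    print(f"{label}: Z = {z}, beta = {beta:.3f}")
# four nines -> beta ~ 3.719, five nines -> beta ~ 4.265
```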

The safety index was also noted to be an order of

magnitude less sensitive to other design variables.

The motive for designing to an arbitrarily selected

reliability over the arbitrarily selected safety factor is

to overcome non-uniform reliability design,

inadequate stress audits, and other deficiencies

discussed above.


One considered approach to supporting risk

analyses is to calculate the risk cost using

the product of the probability of failure,

and the cost consequence of that structural

failure.

The cost consequence may include cost of

life and property loss, cost of operational and

experiment delays, inventories, etc.
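The risk-cost product described above is simple arithmetic; the failure probability and consequence figures below are assumed, purely for illustration.

```python
# Risk cost as the product of failure probability and cost consequence
# (illustrative numbers, not from the text).
p_fail = 1.0 - 0.9999          # failure probability at four-nines reliability
consequence = 500_000_000.0    # assumed total cost of a failure, $
risk_cost = p_fail * consequence
print(f"${risk_cost:,.0f}")    # $50,000
```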


A suggested criterion may be to equate some proportion of the risk cost to the initial and recurring costs required to provide the structural reliability that balances it.

Initial costs would consider the increased

structural sizing to the same reliability used

in the risk through the failure probability of

equation (45).


Recurring costs include increased propellant,

and the increased payload performance costs

caused by the increased structural sizing and

propellant weights to accommodate the risk

side of the equation.

It would seem that a structural reliability

design method is essential for the

development of a reliability selection

criterion.


Since different failure modes may require

different reliability design methods, reliability

selection criteria should be expected to be

failure mode related.

MSc.

Asset Maintenance and Management

Reliability Assessment of Structures

(EMM 5023)

Chapter 5

Reliability of Structures

Loads and Resistance and Miscellaneous

Topics

