
Springer Texts in Statistics

Advisors:
Stephen Fienberg, Ingram Olkin
Springer Texts in Statistics

Alfred: Elements of Statistics for the Life and Social Sciences

Blom: Probability and Statistics: Theory and Applications

Chow and Teicher: Probability Theory: Independence, Interchangeability, Martingales. Second Edition

Christensen: Plane Answers to Complex Questions: The Theory of Linear Models

Christensen: Linear Models for Multivariate, Time Series, and Spatial Data

Christensen: Log-Linear Models

du Toit, Steyn and Stumpf: Graphical Exploratory Data Analysis

Finkelstein and Levin: Statistics for Lawyers

Jobson: Applied Multivariate Data Analysis, Volume I: Regression and Experimental Design

Kalbfleisch: Probability and Statistical Inference: Volume 1: Probability. Second Edition

Kalbfleisch: Probability and Statistical Inference: Volume 2: Statistical Inference. Second Edition

Keyfitz: Applied Mathematical Demography. Second Edition

Kiefer: Introduction to Statistical Inference

Kokoska and Nevison: Statistical Tables and Formulae

Lindman: Analysis of Variance in Experimental Design

(continued after index)
Shelemyahu Zacks

Introduction to
Reliability Analysis
Probability Models and Statistical Methods

With 50 Illustrations

Springer-Verlag
New York Berlin Heidelberg London Paris
Tokyo Hong Kong Barcelona Budapest
Shelemyahu Zacks
Department of Mathematical Sciences
State University of New York at
Binghamton
Binghamton, NY 13902-6000
USA

Editorial Board
Stephen Fienberg
Office of the Vice President (Academic Affairs)
York University
North York, Ontario M3J 1P3
Canada

Ingram Olkin
Department of Statistics
Stanford University
Stanford, CA 94305
USA

Mathematics Subject Classifications: 60K20, 62N05, 90B25

Library of Congress Cataloging-in-Publication Data


Zacks, Shelemyahu, 1932-
Introduction to reliability analysis: probability models and
statistical methods / Shelemyahu Zacks.
p. cm. - (Springer texts in statistics)
Includes bibliographical references (p. ) and index.
ISBN-13: 978-1-4612-7697-5
1. Reliability (Engineering) - Statistical methods. 2. Reliability
(Engineering)-Mathematical models. I. Title. II. Series.
TA169.Z33 1992
620'.00452-dc20 91-33854

Printed on acid-free paper.


© 1992 Springer-Verlag New York Inc.
Softcover reprint of the hardcover 1st edition 1992

All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New
York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis.
Use in connection with any form of information storage and retrieval, electronic adaptation,
computer software, or by similar or dissimilar methodology now known or hereafter developed is
forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even if
the former are not especially identified, is not to be taken as a sign that such names, as under-
stood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by
anyone.

Production managed by Natalie Johnson; manufacturing supervised by Jacqui Ashri.


Photocomposed copy prepared from the author's AMS-TeX files.

987654321

ISBN-13:978-1-4612-7697-5 e-ISBN-13:978-1-4612-2854-7
DOI: 10.1007/978-1-4612-2854-7
To Hanna, Yuval and David
Preface

Several years ago we provided workshops at the Center for Statistics, Qual-
ity Control and Design of the State University of New York at Binghamton,
NY to engineers of high technology industries in the area. Several hundred
engineers participated in these workshops, which covered the subjects of
statistical analysis of industrial data, quality control, systems reliability
and design of experiments. It was a special challenge to deliver the mate-
rial in an interesting manner and to develop the skills of the participants in
problem solving. For this purpose special notes were written and computer
software was developed. The present textbook is an expansion of such notes
for a course of statistical reliability, entitled A Workshop on Statistical
Methods of Reliability Analysis for Engineers (1983).
The guiding principle in the present book is to explain the concepts
and the methods, and illustrate applications in problem solving, without
dwelling much on theoretical development. Electrical, mechanical and in-
dustrial engineers usually have sufficient mathematical background to un-
derstand and apply the formulae presented in the book. Graduate students
in statistics may find the book useful in preparing them for a career as
statisticians in industry. Moreover, they could practice their knowledge of
probability and statistical theory by verifying and deriving all the formulae
in the book. Most difficult is Chapter 4, on the reliability of repairable
systems. Most systems of interest, excluding missiles, are repairable ones.
To treat the subject of the availability of repairable systems we have to
introduce more advanced concepts of renewal processes, Markov processes,
etc. The chapter is written, however, in a manner that can be grasped
without much background knowledge of probability theory. However, in
some courses the instructor may choose to skip this chapter.

The original workshop notes were tied to specific software for the IBM
PC which was developed at the Center. It was decided, however, that
the present textbook would not be written for any specific software. The
student can use any of the several statistical software packages available on
the market, like MINITAB©, STATGRAPHICS©, or even LOTUS©, to
make computations, plot graphs and analyze data sets. We hope to issue
at a future date a compendium to this textbook, which will have special
software and will present solutions to most of the exercises which are listed
at the end of each chapter. Specially designed examples illustrate in each
chapter the methodology and its applications. The present text can thus be
used for a one semester course on systems reliability in engineering schools
or in statistics departments.
The author would like to acknowledge the assistance of Dr. David
Berengut and Dr. John Orban in the preparation of the original work-
shop notes, and to express his gratitude to the Research Foundation of the
State University of New York for releasing their copyright on the original
notes. Mrs. Marge Pratt skillfully typed the manuscript using AMS-TEX.
Last but not least I would like to thank my wife, Dr. Hanna Zacks, who en-
couraged me and supported me during the demanding period of manuscript
writing.

Shelemyahu Zacks
Binghamton, NY
January 1991
Contents

Preface .............................................................. vii

List of Abbreviations .............................................. xiii

1. System Effectiveness ............................................. 1

1.1 Basic Concepts and Relationships .................................... 1


1.2 Time Categories ..................................................... 2
1.3 Reliability and Related Functions .................................... 5
1.4 Availability, Maintainability and Repairability ....................... 9
1.5 Exercises ........................................................... 11

2. Life Distributions, Models and Their Characteristics ........ 13

2.1 Types of Failure Observations ...................................... 13


2.2 General Characteristics of Life Distributions ........................ 14
2.3 Some Families of Life Distributions ................................. 18
2.3.1 Exponential and Shifted Exponential Distributions ........... 18
2.3.2 Erlang, Chi-Square and Gamma Distributions ................. 20
2.3.3 Weibull Distributions ......................................... 24
2.3.4 Extreme Value Distributions .................................. 27
2.3.5 The Normal and Truncated Normal Distributions ............. 28
2.3.6 Normal Approximations ....................................... 31
2.3.7 Lognormal Distributions ...................................... 32
2.3.8 Auxiliary Distributions: t and F .............................. 33
2.4 Discrete Distributions of Failure Counts ............................ 34
2.4.1 Distributions of Discrete Random Variables ................... 34
2.4.2 The Binomial Distribution .................................... 35

2.4.3 The Poisson Distribution ...................................... 37


2.4.4 Hypergeometric Distributions ................................. 38
2.5 Exercises ........................................................... 39

3. Reliability of Composite Systems .............................. 44

3.1 System Reliability for Series and Active Parallel


Independent Components ........................................... 44
3.2 k Out of n Systems of Independent Components .................... 47
3.3 The Decomposition Method ........................................ 49
3.4 Minimal Paths and Cuts ........................................... 51
3.5 The MTTF of Composite Systems .................................. 53
3.6 Sequentially Operating Components ................................ 54
3.7 Fault Tree Analysis ................................................ 56
3.8 Exercises ........................................................... 62

4. Reliability of Repairable Systems ............................. 67

4.1 The Renewal Process ............................................... 67


4.2 The Renewal Function and Its Density .............................. 69
4.3 Asymptotic Approximations ........................................ 74
4.4 Increasing the Availability by Preventive Maintenance and
Standby Systems ................................................... 76
4.4.1 Systems with Standby and Repair ............................ 76
4.4.2 Preventive Maintenance ....................................... 78
4.5 Exercises ........................................................... 81

5. Graphical Analysis of Life Data ............................... 84

5.1 Probability Plotting for Parametric Models with


Uncensored Data ................................................... 84
5.2 Probability Plotting with Censored Data ........................... 90
5.3 Non-Parametric Plotting ........................................... 92
5.3.1 Product Limit Estimator of Reliability ....................... 92
5.3.2 Total Time on Test Plots ..................................... 95
5.4 Graphical Aids ..................................................... 96
5.5 Exercises ........................................................... 97

6. Estimation of Life Distributions and


System Characteristics ....................................... 100

6.1 Properties of Estimators .......................................... 100


6.1.1 The Estimation Problem ..................................... 100
6.1.2 Sampling Distributions, Accuracy and Precision .............. 102
6.1.3 Closeness Probabilities ....................................... 104
6.1.4 Confidence and Prediction Intervals .......................... 108

6.1.4.1 Estimating the Parameters of a Normal Distribution .. 108


6.1.4.2 Estimating the Reliability Function for an Exponential
Distribution .......................................... 109
6.2 Maximum Likelihood Estimation .................................. 111
6.2.1 Single-Parameter Distributions ............................... 111
6.2.1.1 Derivation ............................................ 111
6.2.1.2 The Invariance Property .............................. 114
6.2.1.3 The Variance of an MLE .............................. 115
6.2.2 Multiparameter Distributions ................................ 117
6.3 MLE of System Reliability ........................................ 122
6.4 MLE from Censored Samples-Exponential Life Distributions ...... 125
6.4.1 Type I Censored Data ....................................... 125
6.4.2 Type II Censored Data ...................................... 127
6.5 The Kaplan-Meier PL Estimator as an MLE of R{t):
Non-Parametric Approach ......................................... 129
6.6 Exercises .......................................................... 132

7. Maximum Likelihood Estimators and Confidence Intervals


for Specific Life Distributions ................................. 135

7.1 Exponential Distributions ......................................... 135


7.2 Shifted Exponential Distributions ................................. 137
7.3 Erlang Distributions .............................................. 139
7.4 Gamma Distributions ............................................. 141
7.5 Weibull Distributions ............................................. 146
7.6 Extreme Value Distributions ...................................... 151
7.7 Normal and Lognormal Distributions .............................. 153
7.8 Truncated Normal Distributions ................................... 155
7.9 Exercises .......................................................... 157

8. Bayesian Reliability Estimation and Prediction ............. 160

8.1 Prior and Posterior Distributions .................................. 160


8.2 Loss Functions and Bayes Estimators .............................. 163
8.2.1 Distribution-Free Bayes Estimator of Reliability .............. 164
8.2.2 Bayes Estimator of Reliability for Exponential Life
Distributions ................................................ 165
8.3 Bayesian Credibility and Prediction Intervals ...................... 166
8.3.1 Distribution-Free Reliability Estimation ...................... 166
8.3.2 Exponential Reliability Estimation ........................... 167
8.3.3 Prediction Intervals .......................................... 168
8.4 Credibility Intervals for the Asymptotic Availability of Repairable
Systems: The Exponential Case ................................... 170
8.5 Empirical Bayes Method .......................................... 172
8.6 Exercises .......................................................... 175

9. Reliability Demonstration: Testing and


Acceptance Procedures .. ...................................... 177

9.1 Reliability Demonstration ......................................... 177


9.2 Binomial Testing .................................................. 178
9.3 Exponential Distributions ......................................... 180
9.4 Sequential Reliability Testing ...................................... 181
9.4.1 The SPRT for Binomial Data ................................ 182
9.4.2 The SPRT for Exponential Lifetimes ......................... 184
9.5 Sequential Tests for Poisson Processes ............................. 188
9.6 Bayesian Reliability Demonstration Tests .......................... 190
9.7 Accelerated Life Testing ........................................... 195
9.8 Exercises .......................................................... 196

Annotated Bibliography . ......................................... 199

Appendix of Statistical Tables ................................... 203

Index ............................................................... 210


List of Abbreviations

AC: asymptotic covariance


ASD: asymptotic standard deviation
ASN: average sample number
BRDT: Bayesian reliability demonstration test
CDF: cumulative distribution function
DFR: decreasing failure rate
EBE: empirical Bayes estimator
FIM: Fisher information matrix
FIT: number of failures per 10⁹ device hours
MLE: maximum likelihood estimator
MTTF: mean time till failure
MTTR: mean time till repair
PDF: probability density function
PL: product limit
RIT: number of replacements per 10⁹ device hours
RMSE: root mean square error
SPRT: sequential probability ratio test
TTC: time till censoring
TTF: time till failure
TTR: time till repair
TTT: total time on test
1
System Effectiveness

1.1 Basic Concepts and Relationships


The term reliability is used generally to express a certain degree of as-
surance that a device or a system will operate successfully in a specified
environment during a certain time period. The concept is a dynamic one,
and does not refer just to an instantaneous event. If a device fails, this
does not necessarily imply that it is unreliable. Every piece of mechan-
ical or electronic equipment fails once in a while. The question is how
frequently failures occur in specified time periods. There is often some
confusion among practitioners between quality level and reliability. If the
proportion of defective computer chips manufactured daily is too high, we
could say that the manufacturing process is not sufficiently reliable. The
defective computer chips are scrapped yet the non-defective ones may be
very reliable. The reliability of the computer chips depends on their tech-
nological design and their function in the computer. If the installed chips
do not fail frequently, they might be considered reliable. A more accurate
definition of reliability will be provided later. We distinguish between the
mission reliability of a device which is constructed for the performance
of one mission only and the operational reliability of a system that is
turned on or off intermittently, for the purpose of performing a certain
function.
A missile is an example of an extremely complex system which is de-
signed to perform a specified mission just once. The mission reliability
of a missile is the probability that it will perform its mission (takeoff, ar-
rive at the target and destroy it) under specified environmental conditions.
This is mission reliability. A radar system on an aircraft is turned on and
off during the flight. It is designed to function continuously on several
missions whenever required (turned on). However, sometimes the radar system
might fail during the flight. In such an event the failed system is generally
replaced immediately by a standby system. In case both the main system
and the standby one are down (unoperational), radar is unavailable.
The operational reliability of the radar system is the probability that
the system is ready when needed multiplied by the probability that it will
function for a specified length of time, t. Thus, operational reliability is a
function of both the readiness and the probability of continuous function-
ing of the system for a specified period of time. This kind of reliability is
a function of time. There are systems whose operational reliability grows
with their age, and some whose reliability declines with age. Examples of
such systems will be shown later.
The availability of systems, as well as their capability to perform certain
functions during a specified time period, depends not only on their engineering
design but also on their maintenance, the repair facilities, the logistics of
spare parts (inventory systems) and other related factors. All these factors
together contribute to the system effectiveness. System effectiveness is
a measure of the ability of the system to perform its intended functions, to
provide maintenance service and repair to its failing components. It is also
a function of the capability of the system to operate in accordance with the
engineering design concept. Thus the system effectiveness is a function of
• usage requirements
• equipment conditions
• performance characteristics.
In the following section we list some relevant time categories connected
with usage requirements and equipment conditions.

1.2 Time Categories


The following time categories play an important role in the theory of reli-
ability, availability and maintainability of systems.
I. Usage-Related Time Categories
1.1 Operating time is the time interval during which the equipment
or the system is in actual operation.
1.2 Scheduled operating time is the time interval during which the
operation of the system is required.
1.3 Free time is the time interval during which the equipment/system
is scheduled to be off duty.
1.4 Storage time is the time period during which the equipment/system
is stored as a spare part.
II. Equipment Condition Time Categories
2.1 Up time is the time interval during which the equipment/system
is being operated or ready for operation.

2.2 Down time is the time interval during which the equipment/system
is in a state of failure (inoperable).
The down time is partitioned into
2.2.1 Administrative time
2.2.2 Active repair time
2.2.3 Logistic time (repair suspension due to lack of spare parts).
III. Indices
Certain concepts which were previously mentioned are measured by in-
dices based on ratios of time categories. These are:

Intrinsic Availability = operating time / (operating time + active repair time)

Availability = operating time / (operating time + down time)

Operational Readiness = up time / total calendar time
These indices of intrinsic availability, availability and operational readi-
ness can be interpreted in probabilistic terms. For example, operational
readiness is the probability that, at a randomly selected time, one will find
the system ready.
We conclude this section with a block diagram showing the relationships
among the concepts discussed above (Figure 1.1).
EXAMPLE 1.1
We now provide a numerical example based on data gathered on 20 radar
systems. We will compute a few of the indices discussed above on the basis
of these data.
1. Total calendar time = 120,000 [s.hr], where [s.hr] is a system hour unit
of time.
1.1 Total flight time = 9750 [s.hr]
1.1.1 Flight up time = 8500 [s.hr]
1.1.1.1 Radar idle = 4500 [s.hr]
1.1.1.2 Radar power on = 4000 [s.hr]
1.1.1.2.1 Radar standby = 1950 [s.hr]
1.1.1.2.2 Radar in operation = 2050 [s.hr]
1.1.2 Flight down time = 1250 [s.hr]
1.1.2.1 Flight active repair = 5 [s.hr]
1.1.2.2 Flight logistics time = 700 [s.hr]
1.1.2.3 Flight administrative time = 545 [s.hr]
1.2 Total ground time = 110,250 [s.hr]
1.2.1 Ground up time = 92,000 [s.hr]
1.2.2 Ground down time = 18,250 [s.hr]
1.2.2.1 Ground active repair time = 1750 [s.hr]
1.2.2.2 Ground logistics time = 10,000 [s.hr]
1.2.2.3 Ground administrative time = 6500 [s.hr]

Figure 1.1. Block Diagram of the Components of System Effectiveness


Further data available in this example are:
1. Number of flights = 2400
2. Number of flights using radar = 1200
3. Mean length of flight = 4.1 [hr]
4. Number of malfunctions detected in flights = 96
5. Number of malfunctions detected on ground = 85
6. Total number of repair activities = 130
7. Mean adm. time/repair = 54.2 [hr]
8. Mean logistic time/repair = 82.3 [hr]
9. Mean active repair time = 13.5 [hr]
From the above data we obtain the following indices:
(1) Operational Readiness =

100,500 [s.hr] / 120,000 [s.hr] = .8375

(2) Flight Operational Readiness =

8500 [s.hr] / (8500 + 1250) [s.hr] = .8718

(3) Flight Availability =

2050 [s.hr] / (2050 + 1250) [s.hr] = .6212

(4) Flight Mean Time Between Failures =

(1200)(4.1)/96 [hr] = 51.2 [hr]

(5) Operational Reliability Function, assuming exponential lifetime (see Ch.
2), is the flight operational readiness multiplied by the reliability function
of a working system. Thus, we have here

R(t) = .8718 · exp(-t/51.2)

where t [hr] is the mission length. For example, for missions of length
t = 6, 7, 8 [hr]:

t [hr]        6        7        8
R(t)      .7754    .7604    .7457
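These index computations are easy to reproduce in code. The following is a minimal sketch (in Python, which is not the software used in the book; the variable names are ours) of indices (1)-(5) from the Example 1.1 data:

```python
import math

# Data from Example 1.1 (system hours, [s.hr])
total_calendar = 120_000
flight_up, flight_down = 8_500, 1_250
ground_up = 92_000
radar_in_operation = 2_050
flights_using_radar, mean_flight_length, flight_malfunctions = 1200, 4.1, 96

# (1) Operational readiness = up time / total calendar time
readiness = (flight_up + ground_up) / total_calendar

# (2) Flight operational readiness = flight up time / total flight time
flight_readiness = flight_up / (flight_up + flight_down)

# (3) Flight availability = operating time / (operating time + down time)
flight_availability = radar_in_operation / (radar_in_operation + flight_down)

# (4) Flight mean time between failures [hr]
mtbf = flights_using_radar * mean_flight_length / flight_malfunctions

# (5) Operational reliability for a mission of t hours
def operational_reliability(t):
    return flight_readiness * math.exp(-t / mtbf)
```

Here `mtbf` evaluates to 51.25 [hr] (rounded to 51.2 in the text), and `readiness` to .8375.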


1.3 Reliability and Related Functions
Basic to the definition of reliability functions and other related functions
is the length of life variable. The length of life (lifetime) of a compo-
nent/system is the length of the time interval, T, from the initial activation
of the unit until its failure. This variable, T, is considered a random vari-
able, since the length of life cannot be exactly predicted.
The cumulative (life) distribution function (CDF) of T, denoted by
F(t), is the probability that the lifetime does not exceed t, i.e.,

(1.3.1) F(t) = Pr{T ≤ t}, 0 ≤ t < ∞.


In Chapter 2 we will study the properties of several families of life distri-
butions, which are commonly applied in the field of engineering reliability.
In the present section we define some of the basic concepts and provide a
simple example.
The lifetime random variable T is called continuous if its CDF is a
continuous function of t. The probability density function (PDF) cor-
responding to F(t) is its derivative (if it exists). We denote the PDF by
f(t). This is a non-negative valued function such that

(1.3.2) F(t) = ∫_0^t f(x) dx, 0 ≤ t < ∞.



The reliability function, R(t), of a component/system having a life


distribution F(t) is

(1.3.3) R(t) = 1 - F(t) = Pr{T > t}.

This is the probability that the lifetime of the component/system will ex-
ceed t. Another important function related to the life distribution is the
failure rate, or hazard function, h(t). This is the instantaneous failure
rate of an element which has survived t units of time, i.e.,

(1.3.4) h(t) = lim_{Δt→0} [F(t + Δt) - F(t)] / [Δt · Pr{T > t}] = f(t)/R(t).

Notice that h(t)Δt is approximately, for small Δt, the probability that a
unit still functioning at age t will fail during the time interval (t, t + Δt).
From formula (1.3.4) we can obtain

(1.3.5) h(t) = -(d/dt) ln R(t),

and

(1.3.6) R(t) = exp{-∫_0^t h(x) dx}.
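Relation (1.3.6) is easy to verify numerically. The sketch below (Python; the increasing hazard h(t) = 2t is a hypothetical choice made for illustration, for which R(t) = exp(-t²)) integrates the hazard with the trapezoidal rule:

```python
import math

def reliability_from_hazard(t, h, steps=10_000):
    # R(t) = exp(-integral_0^t h(x) dx), trapezoidal rule on a uniform grid
    dx = t / steps
    integral = sum((h(i * dx) + h((i + 1) * dx)) / 2 * dx for i in range(steps))
    return math.exp(-integral)

# Increasing hazard h(t) = 2t  =>  R(t) = exp(-t^2)
R1 = reliability_from_hazard(1.0, lambda x: 2 * x)
```

For a constant hazard the routine reproduces the exponential reliability of Example 1.2 below.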

Finally, we introduce the concept of Mean Time to Failure (MTTF).


This is the average length of time until failure (the expected value of T).
The general definition of the expected value of a lifetime random variable
T is

(1.3.7) E{T} = ∫_0^∞ t f(t) dt,

provided this integral is finite. It can be shown that

(1.3.8) E{T} = ∫_0^∞ R(t) dt.

We will denote the MTTF by the symbol μ. We provide now a simple
example.
EXAMPLE 1.2
A. Suppose that the failure rate of a given radar system is constant in
time, Le.,
h(t) = λ, for all 0 ≤ t < ∞.
Then, the reliability function of this system is

(1.3.9) R(t) = exp{-∫_0^t λ dx} = e^{-λt}, t ≥ 0.



The MTTF of the system is

(1.3.10) μ = ∫_0^∞ e^{-λt} dt = 1/λ.

This reliability function corresponds to the exponential life distribution


having a CDF
(1.3.11) F(t) = 1 - e^{-λt}, t ≥ 0.
We mention in this connection that the dimension of the failure rate
function is 1/[time unit] and the dimension of μ is [time unit]. Thus, if the
failure rate function of an electronic device is constant at 8/[1000 hr] then
the MTTF of this device is μ = 125 [hr]. The reliability of this device is a
function of time, R(t). Thus, the value of R(t) at t = 100 [hr] is

R(100) = exp{-(8/[1000 hr]) · 100 [hr]} = exp{-0.8} = .4493
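The constant-hazard case of Part A is easy to evaluate in code. A minimal sketch (Python; not part of the original text):

```python
import math

failure_rate = 8 / 1000          # lambda, failures per hour
mttf = 1 / failure_rate          # MTTF = 1/lambda = 125 [hr], formula (1.3.10)

def R(t):
    # exponential reliability, formula (1.3.9)
    return math.exp(-failure_rate * t)
```

R(100) evaluates to exp(-0.8) ≈ .4493, matching the value above.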


B. In the following table we provide the empirical CDF of the life length
T [10³ hr], as obtained from life testing of a large number of similar parts.
A graphical presentation of this life distribution is provided in Figure 1.2.
From Table 1.1 we estimate that the median of the time to failure is
750 [hr], and the MTTF is μ = 740 [hr]. The MTTF, μ, was computed
according to the formula

μ = Σ_{i=2}^{13} ((t_{i-1} + t_i)/2) · (F(t_i) - F(t_{i-1})) = 0.740 [10³ hr].

Figure 1.2. Graph of the Empirical CDF



Table 1.1 The Empirical CDF of the Lifetime

i    t_i [10³ hr]    F(t_i)
1 0.25 0
2 0.35 0.10
3 0.45 0.13
4 0.55 0.27
5 0.65 0.36
6 0.75 0.50
7 0.85 0.67
8 0.95 0.81
9 1.05 0.87
10 1.15 0.93
11 1.25 0.97
12 1.35 0.99
13 1.45 1.00

The reliability function of this equipment for values of t = .25(.1)1.45 [10³ hr]
is given in Table 1.2, as well as the corresponding values of the failure
rate function.

Table 1.2 The Empirical Reliability and


Failure Rate Function

i    t_i [10³ hr]    R(t_i)    h(t_i) [1/10³ hr]


1 0.25 1.00 1.053
2 0.35 0.90 0.339
3 0.45 0.87 1.750
4 0.55 0.73 1.314
5 0.65 0.64 2.456
6 0.75 0.50 4.096
7 0.85 0.33 5.385
8 0.95 0.19 3.750
9 1.05 0.13 6.000
10 1.15 0.07 8.000
11 1.25 0.03 10.000
12 1.35 0.01 20.000
13 1.45 0.00

The values of the failure rate function h(t) were determined according
to the approximation formula

h(t) ≈ (F(t_i) - F(t_{i-1})) / [(t_i - t_{i-1}) · (1 - (F(t_{i-1}) + F(t_i))/2)],
for t_{i-1} ≤ t ≤ t_i, i = 2, ..., 13.

Figure 1.3. Estimate of Failure Rate Function


This approximation is based on the assumption that the PDF is constant
at the value (F(t_i) - F(t_{i-1}))/(t_i - t_{i-1}) for all t_{i-1} ≤ t ≤ t_i. These
estimates of the failure rate function indicate, as shown in Figure 1.3, that
the failure rate function is increasing and convex.
The failure time distribution exhibited here shows parts whose failure
rate increases in time. As shown in Figure 1.3, at age 600 [hr] the failure
rate is 1.75/1000 [hr]. The probability of failure during the next hour
is approximately .002, while at age 900 [hr] this probability increases to
.004. In the next chapter we will study families of life distributions which
demonstrate such behavior.
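The Table 1.1 computations above can be reproduced as follows (a Python sketch; the midpoint form of the approximation is our reading of the formula, and it matches the tabulated values):

```python
# Empirical MTTF and failure rates from Table 1.1 (times in [10^3 hr])
t = [0.25 + 0.1 * i for i in range(13)]
F = [0, .10, .13, .27, .36, .50, .67, .81, .87, .93, .97, .99, 1.00]

# MTTF: interval midpoints weighted by the CDF increments
mttf = sum((t[i-1] + t[i]) / 2 * (F[i] - F[i-1]) for i in range(1, 13))

# Failure rate on (t_{i-1}, t_i]: constant PDF over the interval,
# divided by the reliability at the interval midpoint
h = [(F[i] - F[i-1]) / ((t[i] - t[i-1]) * (1 - (F[i-1] + F[i]) / 2))
     for i in range(1, 13)]
```

Here `mttf` evaluates to 0.740 [10³ hr] = 740 [hr], and `h` reproduces the last column of Table 1.2 (1.053, 0.339, ..., 20.0).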

1.4 Availability, Maintainability and Repairability
As we mentioned earlier, notions of availability, maintainability and re-
pairability pertain only to systems which undergo repair after failure, to
restore them to working condition, or to systems which are turned off at
prescheduled times for maintenance and then turned back on. Thus, a sys-
tem could be down either due to failure or due to preventive maintenance.

Both the length of operation time until failure and the length of down time
(administrative plus repair plus logistic) are random variables.
The maintainability of a system could be defined, in terms of the
distribution of the down time, as the probability that, when maintenance is
performed under specified conditions, the system will be up again (in state
of operation) within a specified period.
Maintainability is connected with repairability. The repairability is
the probability that a failure of a system can be repaired under specified
conditions and in a specified time period. Not all repairs can be performed
on site. In repairability we have to consider two things: the probability
that the failure was caused by certain components or subsystems, and the
conditional down time distribution of those subsystems. In Section 3.7 we
will discuss an analysis that can help in identifying the causes for system
failure.
Some failures require return of the system to the manufacturer; sometimes
the system has to be scrapped. These issues depend on the system
design, economic considerations, training of personnel, strategic consider-
ations, etc. Each system must be examined individually by relating cost
factors and strategic considerations. The diagnostic element is the largest
contributor to active maintenance time and to repairability. The diagnosis
entails the isolation of the defective part, module or subsystem. Built-in
test equipment in modern systems helps to reduce the diagnosis time. Fully
automated checking devices, which perform computerized testing, are also
available. However, skilled technicians are still needed to operate sophisti-
cated testing equipment.
Maintainability and repairability functions are connected also with in-
ventory management of spare parts. Many subsystems are manufactured
as modules that can be easily replaced. Thus, if a module fails and a spare
module is available in stock, the down time of the system can be reduced.
The availability index of the system increases as a result. However, some
modules may be very expensive, and overstocking of spare parts would be
unnecessary and costly. There is much discussion in the literature on the
problems of optimal inventory management. It is most important to keep
good records on the frequency of failures of various systems and of their
components, on the length of time till failure, on the length of down time,
etc. In particular, data on the proportion of failures that could be repaired
locally, as opposed to those which had to be handled elsewhere, are of es-
sential importance. Man-hours devoted to maintenance and cost factors
should be well recorded and available for analysis. Without adequate data
it is very difficult to devise optimal maintenance systems and predict the
reliability of the systems.
1.5 Exercises*
[1.2.1] A machine is scheduled to operate for two shifts a day (8 hours each
shift), five days a week. What is the weekly index of scheduled idle
time of this machine?
[1.2.2] During the last 48 weeks, the machine discussed in Exercise [1.2.1]
was "down" five times. The average down time is broken into
1. Average administrative time = 30 hours
2. Average repair time = 9 hours
3. Average logistic time = 7.6 hours
Compute the indices of:
(i) Availability;
(ii) Intrinsic availability;
(iii) Operational readiness.
[1.2.3] During 600 hours of manufacturing time, a machine which inserts
components on computer boards was up for 566.4 hours. It had 310
failures which required a total of 8.2 hours of repair time. What
is the MTTF of this machine? What is the mean time till repair,
MTTR, for this machine? What is its intrinsic availability?
[1.2.4] A given system has two subsystems that function sequentially, in
two stages. The first stage is designed to function 5 [min] and the
second stage lasts 15 [min]. The system accomplishes its mission if
the two stages are accomplished. In preliminary testing, 5 out of
1500 subsystems failed in the first stage and 7 out of 3000 subsys-
tems failed in the second stage. Provide an estimate of the mission
reliability of the system.
[1.3.1] The sample proportional frequency distribution of the lifetime in
a random sample of n = 2000 solar cells, under accelerated life
testing, is given in the following table

The assumed relationship between the scale parameters of the lifetime
distributions, between normal and accelerated conditions, is 10:1.
(i) Estimate the reliability of the solar cells, at age t = 3.5 [yr];
under normal conditions.
(ii) What is the failure rate at age 1 [yr], under normal conditions?
(iii) What percentage of solar cells are expected to survive under
normal conditions 40,000 hours among those which survived 20,000
hours?
(iv) What percentage of cells are expected to fail under normal
conditions between 20,000 and 40,000 hours, after reaching the age
of 10,000 hours?

*The exercises are numbered by section.
[1.3.2] The CDF of the lifetime [months] of an electronic device is

F(t) = t^3/216, if 0 ≤ t < 6,
F(t) = 1, if 6 ≤ t.

(i) What is the failure rate function of this equipment?
(ii) What is the MTTF?
(iii) What is the reliability of the device at age 4 months?
[1.3.3] If the reliability function of a system is R(t) = exp(-2t - 3t^2), find
the failure rate function. What is the failure rate at age t = 3?
Given that a system reached an age of t = 3, what is its reliability
for 2 more time units?
[1.3.4] Suppose that the failure rate function of a system is a constant, h1
[1/yr], till age t1 [yr] and then it increases to the constant h2 [1/yr]
(h2 > h1), i.e.,

h(t) = h1, if 0 ≤ t ≤ t1,
h(t) = h2, if t1 < t < ∞.

(i) Determine the formula for the reliability function R(t).
(ii) Graph R(t).
(iii) What is the reliability of the system at age (3/2)t1 [yr] when
h1 = 1/3 [1/yr], h2 = 1/2 [1/yr] and t1 = 6 [yr]?
(iv) What is the probability that the system will live at least 9 [yr]
but will fail before t = 10 [yr]?
[1.3.5] (i) Find the MTTF of a system with reliability function
R(t) = (1/2) exp(-t/2) + (1/2) exp(-t/3).
(ii) Show that the failure rate function of this system is
h(t) = (1/2 + (1/3)e^{t/6})/(1 + e^{t/6}), and find the failure rate at age t = 1.
(iii) Show that h(t) is a decreasing failure rate function.
(iv) What is the probability that the unit will fail between t = 2
and t = 3, given that it survived 2 units of time?
[1.3.6] Failure rates and replacement rates are often measured in units of
10^9 device-hours. These units are called FITs and RITs. A given
device has a constant failure rate of 325,000 FITs.
(i) What is the probability that the device will first fail in the
interval between 6 and 12 months, given that it has survived the
first 6 months of operation? [1 month = 160 device-hours.]
(ii) How many failures of the device are expected in a 10^4 device-
hour operation?
(iii) If each failure requires on the average 4 hours of active repair,
15 minutes of administrative time and 20 minutes of logistic time,
what are the availability and the intrinsic availability indices of this
device?
2
Life Distributions, Models
and Their Characteristics

2.1 Types of Failure Observations


A typical experiment in life testing of equipment consists of installing a
sample of n similar units on appropriate devices and subjecting the units
to operation under specified conditions until failure of the equipment is
observed. We distinguish between two types of data. The first type is
obtained under continuous monitoring of a unit until failure is observed.
In this case we have exact information on the length of life, or time till
failure, T, of that unit. The observed random variable, T, is a continuous
variable, i.e., it can assume any value in a certain time interval. The second
type of data arises when the units are observed only at discrete time points
t1, t2, .... The number of failures among the n tested units is recorded for
each inter-inspection time interval. Let N1, N2, ... denote the number of
units failing in the time intervals [0, t1), [t1, t2), .... These are discrete
random variables representing failure counts.
The proper analysis of data depends on the type of observations avail-
able. Experiments often must terminate before all units on test have failed.
In such cases we have complete information on the time till failure (if mon-
itoring is continuous) only on part of the sample. On all the units which
have not failed we have only partial information. Such data are called
time-censored. If all the units start operating at the same time we say
that the censoring is single. Single time censoring is also called censoring
of Type I. Some experiments terminate at the instance of the r-th failure,
where r is a predetermined integer smaller than n. In these cases the data
are failure-censored. Single failure censoring is called censoring of Type
II. If different units start operating at different time points in an interval
[0, t*], and the experiment is terminated at t*, we have multiple censoring
of data. We distinguish also between censoring on the left and censoring on
the right. If some units began operating before the official time started we
have left censoring. The other type of censored information, where the
unit is still in operation at the termination of monitoring, is called right
censoring.
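The two censoring schemes can be illustrated with a short simulation. The sketch below is ours, not the book's (the function names are invented, and exponential lifetimes are assumed purely for illustration); it shows how Type I censoring truncates every observation at a fixed time t*, while Type II censoring stops at the r-th failure.

```python
import random

def type_i_censor(lifetimes, t_star):
    # Type I (time) censoring: each lifetime is observed only up to t*;
    # returns (observed value, failure-observed indicator) pairs.
    return [(min(t, t_star), t <= t_star) for t in lifetimes]

def type_ii_censor(lifetimes, r):
    # Type II (failure) censoring: the test stops at the r-th failure;
    # the r smallest lifetimes are observed exactly, the rest are
    # censored at the r-th order statistic.
    srt = sorted(lifetimes)
    t_r = srt[r - 1]
    return [(min(t, t_r), t <= t_r) for t in lifetimes]

random.seed(0)
sample = [random.expovariate(1 / 100.0) for _ in range(20)]  # E(beta = 100)

obs_i = type_i_censor(sample, t_star=80.0)
obs_ii = type_ii_censor(sample, r=10)
```

Under Type I censoring the number of observed failures is random; under Type II it is fixed at r, but the test duration is random.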

2.2 General Characteristics of Life Distributions


We consider here the continuous random variable, T, which denotes the
length of life, or the length of time till failure, in a continuous operation
of the equipment. We denote by F(t) the cumulative distribution function
(CDF) of T, i.e.,

(2.2.1) F(t) = Pr{T ≤ t}.


Obviously, F(t) = 0 for all t ≤ 0. We assume here that initially the equip-
ment is in good operating condition. Thus, we eliminate from consideration
here defective or inoperative units. The CDF F(t) is assumed to be con-
tinuous, satisfying the conditions
(i) F(0) = 0;
(ii) lim_{t→∞} F(t) = 1;
(iii) if t1 < t2 then F(t1) ≤ F(t2).
EXAMPLE 2.1
The following is an example of a life distribution of a device which always
fails between the time points t0 and t1, where 0 < t0 < t1 < ∞:

F(t) = 0, if t ≤ t0,
F(t) = 2((t - t0)/(t1 - t0))^2, if t0 ≤ t ≤ (t0 + t1)/2,
F(t) = 1 - 2((t1 - t)/(t1 - t0))^2, if (t0 + t1)/2 ≤ t ≤ t1,
F(t) = 1, if t1 ≤ t.

In Figure 2.1 we provide the graph of the life CDF, F(t), for t0 = 100
[hr] and t1 = 400 [hr].

The reliability function, or the survival function of the equipment



having a life length CDF F(t), is defined as

(2.2.2) R(t) = 1 - F(t), 0 ≤ t < ∞.


The reliability at time t is the probability that the life length of the equip-
ment exceeds t [time units]. The survival function is the same as the relia-
bility function.
[Figure 2.1. CDF for Time Till Failure and the Associated Reliability Function]

The probability density function (PDF) of a random variable, T,


having a CDF F(t), is a non-negative function, f(t), such that

(2.2.3) F(t) = ∫_0^t f(x)dx, 0 ≤ t < ∞.

According to this definition, f(t) can be determined, at almost all points


of t, as the derivative of F(t).
EXAMPLE 2.2
The PDF corresponding to the CDF of Example 2.1 is

f(t) = 0, if t ≤ t0,
f(t) = 4(t - t0)/(t1 - t0)^2, if t0 ≤ t ≤ (t0 + t1)/2,
f(t) = 4(t1 - t)/(t1 - t0)^2, if (t0 + t1)/2 ≤ t ≤ t1,
f(t) = 0, if t1 ≤ t.

This is a triangular function on the interval (t0, t1), symmetric around the
mid-point (t0 + t1)/2.

The p-th percentile point, or p-th fractile, of a life distribution F(t),
for a value of p in (0, 1), is the value of t, denoted by t_p, for which F(t) = p;
i.e.,

(2.2.4) F(t_p) = p.

If there is more than one value of t satisfying the above equation, we define
t_p to be the smallest one.
EXAMPLE 2.3
The p-th fractile of the life distribution of Example 2.1 is

t_p = t0 + √(p/2) (t1 - t0), if 0 ≤ p ≤ .5,
t_p = t1 - √((1 - p)/2) (t1 - t0), if .5 ≤ p ≤ 1.

If p = .75, t0 = 100 [hr] and t1 = 400 [hr] we obtain t_.75 = 293.93 [hr]. This
means that the life lengths of 75% of the units of this population do not
exceed 294 [hr].
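When a fractile formula is not available in closed form, the defining equation F(t_p) = p can always be solved numerically. The sketch below (our own code, not the book's) inverts the CDF of Example 2.1 by bisection and recovers the value t_.75 = 293.93 [hr] computed above.

```python
def F(t, t0=100.0, t1=400.0):
    # CDF of Example 2.1: the device always fails between t0 and t1.
    mid = (t0 + t1) / 2
    if t <= t0:
        return 0.0
    if t <= mid:
        return 2 * ((t - t0) / (t1 - t0)) ** 2
    if t <= t1:
        return 1 - 2 * ((t1 - t) / (t1 - t0)) ** 2
    return 1.0

def fractile(p, t0=100.0, t1=400.0, tol=1e-9):
    # Invert F by bisection; F is continuous and increasing on [t0, t1].
    lo, hi = t0, t1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid, t0, t1) < p:
            lo = mid
        else:
            hi = mid
    return lo

t75 = fractile(0.75)   # close to the 293.93 [hr] found analytically
```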

The median, t_.50, and the lower and upper quartiles, t_.25 and t_.75,
respectively, are important characteristics of a life distribution.
Moments of order r of the life distribution are defined as

(2.2.5) μ_r = ∫_0^∞ t^r f(t)dt, r = 1, 2, ....

Moments μ_r may not be finite. For example, suppose that

f(t) = (2/π)/(1 + t^2), 0 < t < ∞.

Then

μ_1 = (2/π) ∫_0^∞ t/(1 + t^2) dt = (1/π) lim_{t→∞} log(1 + t^2) = ∞.

We can further show that if μ_j = ∞ then μ_k = ∞ for all k > j. In such
cases we say that the corresponding moments do not exist. In all further
discussion we assume that the moments under consideration exist.
The first order moment, μ = μ_1, is called the mean time to failure
(MTTF) or the expected lifetime.
The variance of the life distribution is

(2.2.6) σ^2 = ∫_0^∞ (t - μ)^2 f(t)dt = μ_2 - μ_1^2.

The variance is always positive, unless the lifetime is a constant. It measures
the dispersion around the MTTF. The (positive) square root of the variance
is called the standard deviation.
If the PDF, f(t), is symmetric around a point τ, then μ = τ (provided μ
is finite). Moreover, if f(t) is symmetric, as in Example 2.2, the median is
equal to the MTTF.
Another important relationship is that

(2.2.7) μ = ∫_0^∞ R(t)dt,

where R(t) is the reliability function.
The MTTF of the life distribution of Example 2.2 is, obviously, the point
of symmetry τ = 250 [hr].
We conclude the present section with the definition of the failure (haz-
ard) rate function. The failure rate function, associated with a life dis-
tribution F(t), is

(2.2.8) h(t) = f(t)/R(t), 0 ≤ t < ∞.

The function H(t) = ∫_0^t h(x)dx is called the cumulative hazard func-
tion.
EXAMPLE 2.4
We derive here the failure rate function corresponding to the life distri-
bution of Example 2.1. According to the above definition

h(t) = 0, if t ≤ t0,
h(t) = 4(t - t0)/((t1 - t0)^2 - 2(t - t0)^2), if t0 ≤ t ≤ (t0 + t1)/2,
h(t) = 2/(t1 - t), if (t0 + t1)/2 ≤ t ≤ t1,
h(t) = ∞, if t1 ≤ t.

This is an example of an increasing failure rate function. A graph of this
function for t0 = 100 [hr] and t1 = 400 [hr] is given in Figure 2.2. As seen
in that figure, the failure rate in this example is 1/50 [1/hr] at t = 300 [hr].
This means that units of age 300 [hr] have probability .02 of failing within
the next hour. However, this failure rate changes as time goes on. Every
unit is expected to fail by 400 [hr] of operation.
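The hazard computation of Example 2.4 is easy to reproduce directly from the definition h(t) = f(t)/R(t), using the density of Example 2.2. The sketch below (function names are ours) checks the value h(300) = .02 and the increasing shape.

```python
def f(t, t0=100.0, t1=400.0):
    # Triangular PDF of Example 2.2.
    mid = (t0 + t1) / 2
    d2 = (t1 - t0) ** 2
    if t0 <= t <= mid:
        return 4 * (t - t0) / d2
    if mid <= t <= t1:
        return 4 * (t1 - t) / d2
    return 0.0

def R(t, t0=100.0, t1=400.0):
    # Reliability R(t) = 1 - F(t) for the CDF of Example 2.1.
    mid = (t0 + t1) / 2
    if t <= t0:
        return 1.0
    if t <= mid:
        return 1 - 2 * ((t - t0) / (t1 - t0)) ** 2
    if t <= t1:
        return 2 * ((t1 - t) / (t1 - t0)) ** 2
    return 0.0

def h(t):
    # Failure rate h(t) = f(t)/R(t)  (eq. 2.2.8).
    return f(t) / R(t)
```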
[Figure 2.2. Failure Rate Function]


2.3 Some Families of Life Distributions
2.3.1 Exponential and Shifted Exponential Distributions

We start with the family of exponential distributions. Consider a constant
failure rate function

(2.3.1) h(t) = 0, t < 0,
h(t) = 1/β, t ≥ 0.

This constant failure rate function implies that parts do not age. The
corresponding reliability function is

(2.3.2) R(t) = exp{-(1/β) ∫_0^t dx} = exp{-t/β}.

It follows that the CDF of the life distribution is

(2.3.3) F_E(t; β) = 1 - exp(-t/β), 0 ≤ t < ∞.



This is the family of exponential distributions, having probability density
function

(2.3.4) f_E(t; β) = 0, t < 0,
f_E(t; β) = (1/β) exp(-t/β), t ≥ 0.

The parameter β is called a scale parameter, 0 < β < ∞. A random
variable having such an exponential distribution with a scale parameter β
will be denoted by E(β). The expression X ~ E(β) will denote that the
random variable X has the E(β) distribution.
The MTTF of E(β) is μ = β. That is, on the average a unit fails every
β time units. The standard deviation of E(β) is σ = β. This means that
the larger the MTTF, β, the larger the dispersion.
EXAMPLE 2.5
Consider an exponential life distribution with β = 100 [hr]. In this case,
the MTTF is μ = 100 [hr], and the standard deviation of the lifetime is
σ = 100 [hr]. The percentage of parts that are expected to survive at least
200 [hr] of operation is exp(-200/100) × 100% = 13.5%. The percentage
expected to survive more than 300 [hr] is 5%.

The p-th fractile of the exponential distribution with mean life β is

E_p(β) = -β ln(1 - p).

Thus, the median of E(β) is E_.50(β) = .693β. That is, if a certain equip-
ment has an exponential life distribution with MTTF of 100 [hr], 50% of
these units are expected to fail in less than 69.3 [hr]. This reflects the
considerable skewness (asymmetry) of the exponential distribution.
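The exponential formulas above are simple enough to verify numerically. A sketch (our code; β = 100 [hr] as in Example 2.5):

```python
import math

BETA = 100.0  # MTTF of E(beta), as in Example 2.5

def reliability(t, beta=BETA):
    # R(t) = exp(-t/beta) for the exponential distribution (eq. 2.3.2).
    return math.exp(-t / beta)

def fractile(p, beta=BETA):
    # E_p(beta) = -beta * ln(1 - p).
    return -beta * math.log(1 - p)

surv_200 = reliability(200.0)   # about 13.5% survive 200 [hr]
median = fractile(0.5)          # about 69.3 [hr]
```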
The shifted exponential life distribution is an exponential distribution
starting at t0, i.e., its PDF is

(2.3.5) f_SE(t; β, t0) = 0, t < t0,
f_SE(t; β, t0) = (1/β) exp{-(t - t0)/β}, t ≥ t0.

t0 is called a location parameter, which is some positive value, 0 ≤ t0 < ∞.
This model is relevant when no unit fails before time t0, and the failure
rate is constant for all t ≥ t0. The MTTF is μ = t0 + β and the standard
deviation is σ = β.

2.3.2 Erlang, Chi-Square and Gamma Distributions

Consider a device which is constructed of k similar units. These units
are connected sequentially. When unit #1 fails unit #2 starts operating
automatically, and so on, until all k units fail. Suppose furthermore that
each one of the units has an exponential life distribution with the same
MTTF, β, and that the units operate independently. Then, the life length
of the device, T_D, is the sum of the k life lengths, T_1, T_2, ..., T_k of its
component units. That is, T_D = T_1 + ... + T_k. The distribution of T_D is
a special case of the gamma distributions, called the Erlang distribution.
The PDF of an Erlang distribution is

(2.3.6) f_ER(t; k, β) = (t^{k-1}/((k - 1)! β^k)) exp(-t/β), 0 ≤ t < ∞.

β is a scale parameter, 0 < β < ∞, and k is a shape parameter, k =
1, 2, .... The CDF of the Erlang distribution is given by

(2.3.7) F_ER(t; k, β) = (1/((k - 1)! β^k)) ∫_0^t x^{k-1} exp(-x/β)dx
= 1 - e^{-t/β} Σ_{j=0}^{k-1} (t/β)^j / j!, t > 0.

One can derive this identity employing integration by parts. It actually
represents a deeper probabilistic result connecting the Poisson and Erlang
distributions. We will denote by G(k, β) a random variable having an Er-
lang distribution with parameters k and β. Notice that the exponential
distribution is a special case of the Erlang for k = 1. We further remark
that the distribution of X = (G(1, 2))^{1/2} is called the Rayleigh distribu-
tion. The PDF of the Rayleigh is given by

(2.3.8) f_RA(t) = t exp(-t^2/2), t > 0.

In Figure 2.3 we plot several CDFs of G(k, β) to illustrate the effect of
the shape parameter, k, on the distribution.
The MTTF of a G(k, β) life distribution is

(2.3.9) μ = kβ.

The standard deviation of this distribution is

(2.3.10) σ = β√k.
[Figure 2.3. The CDF of Erlang Distributions, β = 1, k = 1, 2, 3]


The failure rate function of an Erlang life distribution, G(k, β), is

(2.3.11) h_ER(t; k, β) = p(k - 1; t/β) / (β Pos(k - 1; t/β)),

where

(2.3.12) p(j; λ) = e^{-λ} λ^j / j!, j = 0, 1, ...

and

(2.3.13) Pos(j; λ) = Σ_{i=0}^{j} p(i; λ).

These functions are the probability distribution and the cumulative distri-
bution functions of a Poisson random variable, which will be discussed in
Section 2.5.
In Figure 2.4 we draw the failure rate function h_ER(t; k, β) for the case of
β = 1 and k = 1, ..., 6. For k = 1 the distribution is exponential and the
failure rate is a constant (1/β). For k = 2, 3, ..., the failure rate functions
are strictly increasing from 0 to 1/β.
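Identity (2.3.7) makes the Erlang reliability and hazard cheap to compute: R(t) = Pos(k - 1; t/β), and h(t) follows from (2.3.11). A sketch (our function names; terms are computed in log space for numerical stability):

```python
import math

def pois_pmf(j, lam):
    # p(j; lambda) = e^{-lambda} lambda^j / j!  (eq. 2.3.12)
    return math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))

def erlang_reliability(t, k, beta):
    # R(t) = Pos(k-1; t/beta), the Poisson CDF at k-1 (from eq. 2.3.7).
    lam = t / beta
    return sum(pois_pmf(j, lam) for j in range(k))

def erlang_hazard(t, k, beta):
    # h(t) = p(k-1; t/beta) / (beta * Pos(k-1; t/beta))  (eq. 2.3.11)
    lam = t / beta
    return pois_pmf(k - 1, lam) / (beta * erlang_reliability(t, k, beta))
```

For k = 1 this reduces to the constant hazard 1/β, and for k ≥ 2 the computed hazard increases in t, matching Figure 2.4.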
[Figure 2.4. Failure Rate Functions of G(k, β), k = 1, ..., 6]

A variation of the Erlang distributions is obtained when we allow the
shape parameter k to assume values which are multiples of 1/2, i.e., if
k = m/2, m = 1, 2, .... The scale parameter β is fixed at β = 2. The
distributions obtained are called chi-square distributions, and denoted by
χ²[m]. The parameter m is a shape parameter of the distribution. The
PDF of χ²[m] is

(2.3.14) f_{χ²}(t; m) = (1/(2^{m/2} Γ(m/2))) t^{m/2 - 1} e^{-t/2}, t ≥ 0.

The function Γ(α), α > 0, defined as

(2.3.15) Γ(α) = ∫_0^∞ x^{α-1} e^{-x} dx,

is called the gamma function. This function has the useful property that

(2.3.16) Γ(α) = (α - 1)Γ(α - 1), α > 1.

Recursive application of this formula yields

(2.3.17) Γ(k) = (k - 1)!, k = 1, 2, ....

Some selected values of the gamma function are

Γ(1/4) = 3.62561...
Γ(1/2) = √π = 1.77245...
Γ(1/3) = 2.67893...
Γ(2/3) = 1.35412...

Values of Γ(x) for x = .01-1.00 are given in the Appendix Table A-VI.
The mean of χ²[m] is μ = m and its standard deviation σ = √(2m).
Fractiles of the chi-square distribution are given in Appendix Table A-III.
We denote these fractiles by χ²_p[m]. The following relationship holds
between G(k, β) and χ²[m]:

G(k, β) ~ (β/2) χ²[2k].

The symbol ~ indicates identity of distributions. Thus, the fractiles of
G(k, β) can be obtained from those of χ²[2k] according to the formula

(2.3.18) G_p(k, β) = (β/2) χ²_p[2k].

EXAMPLE 2.6
For p = .5 and k = 10 we have χ²_.5[20] = 19.34. Hence,

G_.5(10, β) = 9.67β.

This is the median of G(10, β). The mean of this distribution is μ = 10β.
This shows that G(10, β) is almost symmetric.

Further generalization is obtained by considering any positive real num-
ber, ν, as a shape parameter. In this manner we extend the above families of
life distributions to the general gamma family, {G(ν, β); 0 < β, ν < ∞}.
The PDF of a (general) gamma distribution is

(2.3.19) f_G(t; ν, β) = (1/(β^ν Γ(ν))) t^{ν-1} exp(-t/β), t > 0.

The CDF of G(ν, β) is

F_G(t; ν, β) = (1/(β^ν Γ(ν))) ∫_0^t x^{ν-1} exp(-x/β)dx, t > 0.

[Figure 2.5. Weibull PDF for β = 1, ν = 1, 2, 3]

The integral defining F_G(t; ν, β) is called the incomplete gamma integral.
There are various numerical procedures for computing this integral. In the
case where ν equals a positive integer, k, the incomplete gamma integral
can be evaluated with the aid of the Poisson CDF, as given earlier. The
failure rate functions of the gamma life distributions are increasing, as in
Figure 2.4, for ν > 1. For ν < 1 these functions are decreasing.

2.3.3 Weibull Distributions

The Weibull family of life distributions has been found to provide good
models in many empirical studies. We consider first a two-parameter
Weibull distribution, W(ν, β), whose CDF is given by the formula

(2.3.20) F_W(t; ν, β) = 1 - exp{-(t/β)^ν}, t ≥ 0.

β is a positive scale parameter and ν is a positive shape parameter. Note
that if ν = 1, the Weibull distribution reduces to the exponential distribu-
tion. The PDF of W(ν, β) is

(2.3.21) f_W(t; ν, β) = (ν/β)(t/β)^{ν-1} exp{-(t/β)^ν}, t ≥ 0.

In Figure 2.5 we present some PDFs from the Weibull family.
The p-th fractile of W(ν, β) is

(2.3.22) W_p(ν, β) = β(-ln(1 - p))^{1/ν}, 0 < p < 1.

EXAMPLE 2.7
In the following table we provide the values of the lower quartile, the
median and the upper quartile for β = 1, ν = 1, 2, 3, 4, 10.
Table 2.1. The Median and Quartiles of W(ν, β)
for β = 1, ν = 1, 2, 3, 4, 10

p      ν = 1   ν = 2   ν = 3   ν = 4   ν = 10
0.25   0.288   0.536   0.660   0.732   0.883
0.50   0.693   0.833   0.885   0.912   0.964
0.75   1.386   1.177   1.115   1.085   1.033

We see that as ν grows the distribution becomes more symmetric. The
mean of W(ν, β) is

(2.3.23) μ = β Γ(1 + 1/ν).

If ν = 2, for example,

μ = β Γ(3/2) = β √π/2 = .8862β.

As seen in Table 2.1, the median of this distribution is .833β. The mean
and the median are quite close even for ν = 2. On the other hand, when
ν = 1 (the exponential case) the median is about 69% of the MTTF, which
reflects a pronounced asymmetry.
The standard deviation of W(ν, β) is

σ = β [Γ(1 + 2/ν) - Γ²(1 + 1/ν)]^{1/2}.

Thus, for the case of ν = 2,

σ = β [Γ(2) - Γ²(3/2)]^{1/2} = β (1 - π/4)^{1/2} = .463β.

For ν = 3 we obtain

σ = β [Γ(1 + 2/3) - Γ²(1 + 1/3)]^{1/2}
= β ((2/3)(1.35412) - (1/9)(2.67893)²)^{1/2}
= .325β.
An important property of the Weibull distribution is that the minimum
of n independent random variables having an identical Weibull distribu-
tion, W(ν, β), is the Weibull distribution W(ν, β/n^{1/ν}). That is, if n
independent devices start to operate at the same instant, and if the life
distributions of these devices are the same W(ν, β), then the time till the
first failure has the Weibull distribution W(ν, β/n^{1/ν}).

[Figure 2.6. Failure Rate Functions of W(ν, β) in Units of β]
EXAMPLE 2.8
In the case of n = 10, β = 100 [hr] and ν = 2, the time till the first
failure has the distribution W(2, 10√10). Thus, while the MTTF of each
device is 88.6 [hr], the mean time till the first failure is 28.0 [hr].

The above property makes the Weibull family an attractive one for mod-

eling reliability of systems of similar components connected in series (see
Chapter 3), or for mechanical systems where the weakest-link model is
appropriate.
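The minimum-of-Weibulls property behind Example 2.8 can be checked by simulation. The sketch below is ours (seed and sample size are arbitrary choices): it samples minima of n = 10 lifetimes from W(2, 100) and compares the empirical mean with the theoretical mean of W(2, 100/√10).

```python
import math
import random

def weibull_sample(nu, beta, rng):
    # Inverse-CDF sampling from W(nu, beta):
    # t = beta * (-ln(1 - U))^(1/nu), U uniform on [0, 1).
    return beta * (-math.log(1 - rng.random())) ** (1 / nu)

rng = random.Random(12345)
n, nu, beta = 10, 2, 100.0

# Empirical mean of the minimum of n = 10 W(2, 100) lifetimes.
mins = [min(weibull_sample(nu, beta, rng) for _ in range(n))
        for _ in range(20000)]
emp_mean = sum(mins) / len(mins)

# Theory: the minimum is W(2, 100/sqrt(10)), whose mean is
# beta_min * Gamma(1 + 1/2) by formula (2.3.23).
beta_min = beta / n ** (1 / nu)
theo_mean = beta_min * math.gamma(1.5)   # about 28.0 [hr]
```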
We conclude the present section with a presentation of the failure rate
functions of the Weibull distributions. From the definition of the failure
rate function we obtain for the Weibull distributions the function

(2.3.24) h_W(t; ν, β) = (ν/β)(t/β)^{ν-1}, t ≥ 0.

In Figure 2.6 we present the graphs of some of these failure rate functions.
Finally, one can consider also the family of shifted Weibull distributions.
Such distributions have PDFs

(2.3.25) f_WS(t; ν, β, t0) = 0, t < t0,
f_WS(t; ν, β, t0) = (ν/β)((t - t0)/β)^{ν-1} exp{-((t - t0)/β)^ν}, t ≥ t0.

2.3.4 Extreme Value Distributions

If a random variable X has the Weibull distribution W(ν, β), then Y = ln X
has a distribution called the extreme value distribution, with a CDF

(2.3.26) F(y) = 1 - exp{-γ exp(νy)},

where -∞ < y < ∞, and γ = 1/β^ν. It is customary in many textbooks
to present this family of distributions as a location and scale parameters
family, in the form

(2.3.27) F_EV(y; ξ, δ) = 1 - exp{-exp((y - ξ)/δ)}, -∞ < y < ∞.

This representation can be obtained from the above, by defining δ = 1/ν
and ξ = -δ ln γ = ln β. Thus, δ is a positive parameter, and -∞ < ξ <
∞. We will adopt this latter representation and denote the extreme value
distribution by

EV(ξ, δ); -∞ < ξ < ∞, 0 < δ < ∞.

The corresponding PDF is

(2.3.28) f_EV(y; ξ, δ) = (1/δ) exp{(y - ξ)/δ - exp((y - ξ)/δ)}, -∞ < y < ∞.

These functions are illustrated in Figure 2.7. It can be shown that this
extreme value distribution EV(ξ, δ) is related to the asymptotic distribu-
tion, as the sample size n grows, of the minimum of a random sample from a
wide range of distributions. A distribution related to the asymptotic distri-
bution of sample maxima is called the Gumbel distribution, or extreme
value distribution of Type I, having a CDF

(2.3.29) F_EV(x; ξ, δ) = exp(-exp(-(x - ξ)/δ)), -∞ < x < ∞.

The mean value of EV(ξ, δ) is

(2.3.30) μ = ξ - .5772δ


[Figure 2.7. PDF of Extreme Value Distributions, EV(ξ, δ), ξ = 0, δ = 1, 2, 3]

and its standard deviation is

(2.3.31) σ = 1.283δ.

Finally, the p-th fractile of the extreme value distribution, EV(ξ, δ), is

(2.3.32) EV_p(ξ, δ) = ξ + δ ln(-ln(1 - p)).

Thus, the median of the distribution is

Me = ξ + δ ln ln 2 = ξ - .3665δ.

Thus, μ < Me. This implies that the extreme value distribution EV(ξ, δ)
is skewed to the left (negative asymmetry).
The asymptotic distribution of minima, EV(ξ, δ), has been applied as a
model of time to leakage of pipes carrying corrosive chemicals (in missiles),
and other problems of mechanical failures. The asymptotic distribution of
maxima has been applied to model yearly maximum of water discharge in
rivers (flooding), maximal yearly tide, etc.
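The change of variable Y = ln X ties (2.3.32) back to the Weibull fractile (2.3.22): ln W_p(ν, β) = ξ + δ ln(-ln(1 - p)) with ξ = ln β and δ = 1/ν. A sketch checking this identity (our code):

```python
import math

def weibull_fractile(p, nu, beta):
    # W_p(nu, beta) = beta * (-ln(1 - p))^(1/nu)   (eq. 2.3.22)
    return beta * (-math.log(1 - p)) ** (1 / nu)

def ev_fractile(p, xi, delta):
    # EV_p(xi, delta) = xi + delta * ln(-ln(1 - p))   (eq. 2.3.32)
    return xi + delta * math.log(-math.log(1 - p))

nu, beta = 2.0, 100.0
xi, delta = math.log(beta), 1 / nu   # xi = ln(beta), delta = 1/nu

# ln of the Weibull fractile equals the EV fractile
lhs = math.log(weibull_fractile(0.9, nu, beta))
rhs = ev_fractile(0.9, xi, delta)

median_shift = ev_fractile(0.5, 0.0, 1.0)   # ln ln 2, about -0.3665
```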

2.3.5 The Normal and Truncated Normal Distributions

A random variable is said to have a normal (or Gaussian) distribution if
its PDF is

(2.3.33) f_N(x; μ, σ) = (1/(√(2π) σ)) exp{-(1/2)((x - μ)/σ)²},
[Figure 2.8. PDF of Normal Distribution N(μ, σ), μ = 0, σ = 1, 2, 3]

for -∞ < x < ∞. We will denote a normal distribution by N(μ, σ). [Note:
many textbooks use the notation N(μ, σ²) rather than N(μ, σ).] The PDF
is symmetric around the point x = μ. The location parameter, μ, is also
the mean (expected value) of the distribution. The scale parameter, σ, is
also the standard deviation of the distribution. The CDF of a normal
distribution is

(2.3.34) F_N(x; μ, σ) = Φ((x - μ)/σ), -∞ < x < ∞,

where Φ(z) is the standard normal integral defined as

(2.3.35) Φ(z) = (1/√(2π)) ∫_{-∞}^{z} exp(-x²/2)dx.

There are various formulas for computing this integral. The derivative of
Φ(z) is called the standard normal PDF and is given by

(2.3.36) φ(z) = (1/√(2π)) exp(-z²/2), -∞ < z < ∞.

The p-th fractile of the standard normal distribution will be denoted by z_p;
i.e., Φ(z_p) = p, 0 < p < 1. Values of z_p can be obtained from Table A-I
and Table A-II of the appendix.
The p-th fractile of an arbitrary normal distribution N(μ, σ) is

(2.3.37) x_p = μ + z_p σ.
[Figure 2.9. Failure Rate Functions of NT(μ, 1, 0) for μ = 1, ..., 3]

The normal distribution has a central theoretical role in the theory of prob-
ability and statistics. Some relevant applications of this theory will be
discussed later.
A truncated version of the normal distribution, NT(μ, σ, t0), has a PDF
of the form

(2.3.38) f_NT(x; μ, σ, t0) = 0, if x < t0,
f_NT(x; μ, σ, t0) = (1/σ) φ((x - μ)/σ) / (1 - Φ((t0 - μ)/σ)), if x ≥ t0.

If t0 > 0 the truncated normal distribution NT(μ, σ, t0) can be used as a
model for a life distribution. It has also been used to model the distribution
of material strength. The reliability function in this case is

(2.3.39) R(t) = (1 - Φ((t - μ)/σ)) / (1 - Φ((t0 - μ)/σ)), t ≥ t0,

while R(t) = 1 for all 0 < t < t0. The failure rate function is

(2.3.40) h(t) = (1/σ) φ((t - μ)/σ) / (1 - Φ((t - μ)/σ)), t ≥ t0.

Several graphs of this failure rate function are displayed in Figure 2.9.
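With Φ expressed through the error function, the truncated normal failure rate is easy to evaluate numerically. A sketch (our code; note that the truncation constant cancels in f(t)/R(t), so the hazard for t ≥ t0 does not depend on t0):

```python
import math

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def phi(z):
    # Standard normal PDF (eq. 2.3.36).
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def nt_hazard(t, mu, sigma, t0):
    # Failure rate of NT(mu, sigma, t0) for t >= t0:
    # h(t) = phi(z) / (sigma * (1 - Phi(z))), z = (t - mu)/sigma.
    z = (t - mu) / sigma
    return phi(z) / (sigma * (1 - Phi(z)))

h_at_mu = nt_hazard(1.0, 1.0, 1.0, 0.0)   # phi(0)/0.5, about 0.798
h_later = nt_hazard(2.0, 1.0, 1.0, 0.0)
```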

2.3.6 Normal Approximations

Some distributions can be well approximated, under certain conditions, by
a normal distribution having the same mean and variance. These approx-
imations are often based on the Central Limit Theorem. According to
this theorem, if S_n is the sum of n independent and identically distributed
random variables having mean μ and variance σ² then

(2.3.41) lim_{n→∞} Pr{(S_n - nμ)/(σ√n) ≤ z} = Φ(z).

Thus, for example, the Erlang distribution G(k, β) is the distribution of the
sum of k independent exponential random variables E(β). Hence, by the
Central Limit Theorem

(2.3.42) F_ER(t; k, β) ≈ Φ((t - βk)/(β√k)),

for large values of k.
In a similar manner we can approximate the distribution of χ²[m], for
large values of m, by that of N(m, √(2m)). The distribution of G(ν, β) can
be approximated, for large ν values, by N(νβ, β√ν). In this case, the p-th
fractile of G(ν, β) can be obtained by the approximation

(2.3.43) G_p(ν, β) ≈ νβ + z_p β√ν.

The above approximations may require very large values of m or ν to at-
tain a desired accuracy. Various methods are available to improve these
approximations in case of moderate or small values of ν or m.
EXAMPLE 2.9
Let ν = 1000 and β = 1/10. ν = 1000 is generally considered sufficiently
large to apply the approximation (2.3.43), which yields

G_.95(1000, .1) = 100 + z_.95 (.1)√1000
= 100 + 1.645√1000/10
= 105.202.

Moreover, suppose we have to compute the reliability function of a system
having a G(1000, 0.1) life distribution. Formula (2.3.7) yields the function

R(t) = Pos(999; t/.1).

It is impractical to compute this function directly from the definition of
Pos(k; λ). However, with the normal approximation to G(1000, .1) we get

R(t) ≈ 1 - Φ((10t - 1000)/31.6227).

Hence, R(100) = .5 and R(105) = .0569. The normal approximation yields
immediate numerical results.
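Although the book calls the exact computation impractical (by hand), on a computer Example 2.9 can be checked by summing the Poisson terms directly and comparing with the normal approximation. A sketch (our code; terms are computed in log space to avoid overflow at λ = 1000):

```python
import math

def pos_cdf(j, lam):
    # Pos(j; lambda): Poisson CDF (eq. 2.3.13), each term in log space.
    total = 0.0
    for i in range(j + 1):
        total += math.exp(-lam + i * math.log(lam) - math.lgamma(i + 1))
    return total

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Exact reliability of the G(1000, 0.1) system: R(t) = Pos(999; t/0.1)
r100_exact = pos_cdf(999, 1000.0)

# Normal approximation R(t) = 1 - Phi((10t - 1000)/31.6227)
r100_approx = 1 - Phi(0.0)
r105_approx = 1 - Phi((10 * 105.0 - 1000.0) / 31.6227)
```

The exact value at t = 100 is within about .01 of the approximation .5, which illustrates how accurate the normal approximation is at this ν.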

Normal approximations to some discrete distributions will be discussed



in Section 2.4.

2.3.7 Lognormal Distributions

A random variable X has a lognormal distribution LN(μ, σ) if Y = ln X
has the normal distribution N(μ, σ); -∞ < μ < ∞, 0 < σ. The CDF of
an LN(μ, σ) is thus

F_LN(t; μ, σ) = Φ((ln t - μ)/σ), t > 0.

The corresponding PDF is

(2.3.44) f_LN(t; μ, σ) = (1/(√(2π) σ t)) exp{-(1/2)((ln t - μ)/σ)²},

for 0 < t < ∞. The PDF is zero for negative values of t. A graph of the
PDF of LN(0, 1) is given in Figure 2.10.
As we see in Figure 2.10, the lognormal distribution is highly skewed to
the right.
The fractiles of this distribution are given by the formula

(2.3.45) t_p = exp(μ + z_p σ).

The quartiles, the median and the .9-fractile of LN(μ, σ) are tabulated in
the following table for a few values of μ and σ.
The extent of skewness of the lognormal distributions is well illustrated
in Table 2.2, in particular as σ and μ increase. The mean and the standard
deviation of the LN(μ, σ) are given, respectively, by the formulae

(2.3.46) ξ = exp(μ + σ²/2) (mean)

[Figure 2.10. The PDF of LN(0, 1)]

and

(2.3.47) D = ξ(e^{σ²} - 1)^{1/2} (standard deviation).

These values are presented in Table 2.2 for μ = 0, 1 and σ = 1, 2. The dif-
ference between the mean ξ and the median e^μ is very sensitive to variations
of the parameter σ, as is shown in Table 2.2.

Table 2.2. p-Fractiles, Means and
Standard Deviations of LN(μ, σ)

            σ = 1            σ = 2
p        μ = 0   μ = 1    μ = 0   μ = 1
0.25     0.51    1.38     0.26    0.71
0.50     1.00    2.72     1.00    2.72
0.75     1.96    5.34     3.85    10.48
0.90     3.60    9.79     12.96   35.23
mean     1.65    4.48     7.39    20.09
S.D.     1.33    3.61     18.68   50.77

The lognormal distribution has been widely applied to model the distri-
bution of material strength, air and water pollution, and other phenomena
with highly skewed distributions.
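The fractile entries of Table 2.2 follow from (2.3.45) and the mean from (2.3.46). A sketch (our code; the standard normal fractiles z_.25 = -0.6745 and z_.90 = 1.2816 are taken as given, e.g. from Table A-I):

```python
import math

Z = {0.25: -0.6745, 0.50: 0.0, 0.75: 0.6745, 0.90: 1.2816}  # z_p values

def ln_fractile(p, mu, sigma):
    # t_p = exp(mu + z_p * sigma)   (eq. 2.3.45)
    return math.exp(mu + Z[p] * sigma)

def ln_mean(mu, sigma):
    # xi = exp(mu + sigma^2 / 2)   (eq. 2.3.46)
    return math.exp(mu + sigma ** 2 / 2)

# The mu = 0, sigma = 1 column of Table 2.2
col = [round(ln_fractile(p, 0.0, 1.0), 2) for p in (0.25, 0.50, 0.75, 0.90)]
mean01 = round(ln_mean(0.0, 1.0), 2)
```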

2.3.8 Auxiliary Distributions: t and F

The t- and the F-distributions are not used as life distributions but have an
important role in statistical inference. These distributions will be frequently
used in later chapters.
Let Z have a standard normal distribution, N(0, 1). Suppose also that W
is a random variable, independent of Z, and that νW² has a χ²[ν] distribu-
tion. The distribution of t = Z/W is called a Student's t-distribution with
ν degrees of freedom, and denoted as t[ν]. The t[ν] distribution is symmet-
ric around the origin, t = 0, and converges as ν increases to the standard
normal distribution, N(0, 1). The fractiles t_p[ν] are given in Table A-IV
for values of p = .6, .75, .9, .95, .975, .99, .995 and .9995. t_.5[ν] = 0 for all
ν, and for p < .5, t_p[ν] = −t_{1−p}[ν]. The F-distribution is the distribution
of U/V, where U and V are independent random variables; ν₁U is distributed
like χ²[ν₁] and ν₂V is distributed like χ²[ν₂]. This F-distribution is repre-
sented as F[ν₁, ν₂]. Fractiles of F[ν₁, ν₂] are given in Table A-V. Note that
for 0 < p < .5 the fractiles are given by F_p[ν₁, ν₂] = (F_{1−p}[ν₂, ν₁])⁻¹.

2.4 Discrete Distributions of Failure Counts


In many cases, exact information on failure times is unavailable; instead we
have only information on the number, N(t₁, t₂), of failures during the time
interval [t₁, t₂). This is often the case, for example, when studying field
performance of equipment. We discuss three important classes of distribu-
tions for the failure counts N(t₁, t₂). An important difference between the
actual time till failure T and the failure counts N is that T is a continuous
random variable, while N is a discrete random variable.

2.4.1 Distributions of Discrete Random Variables

A random variable J is called discrete if it can assume only a finite or
countable number of values j₁, j₂, ⋯, with positive probabilities p_i =
Pr{J = j_i}, i = 1, 2, ⋯. The function

        f(x) = p_i,  if x = j_i, i = 1, 2, ⋯
             = 0,    otherwise

is called the probability distribution function (PDF) of the random
variable J. The CDF of J is the step function

(2.4.1)        F(x) = Σ_i p_i I{j_i ≤ x},

where

        I{j_i ≤ x} = 1,  if j_i ≤ x
                   = 0,  otherwise.

The p-fractile of J is defined as

(2.4.2)        x_p = the least value of x for which F(x) ≥ p.

The mean of J is

(2.4.3)        μ = Σ_i j_i p_i,

and its variance is

(2.4.4)        σ² = Σ_i (j_i − μ)² p_i = Σ_i j_i² p_i − μ².
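Formulas (2.4.3)–(2.4.4) are straightforward to program; a minimal Python sketch (the fair-die illustration is ours, not from the text):

```python
def discrete_mean_var(values, probs):
    # mu = sum_i j_i p_i (2.4.3); sigma^2 = sum_i j_i^2 p_i - mu^2 (2.4.4)
    mu = sum(j * p for j, p in zip(values, probs))
    var = sum(j * j * p for j, p in zip(values, probs)) - mu ** 2
    return mu, var

# A fair die: j_i = 1, ..., 6, each with p_i = 1/6
mu, var = discrete_mean_var(range(1, 7), [1 / 6] * 6)
print(mu)  # 3.5
```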

2.4.2 The Binomial Distribution


J is said to have a binomial distribution with parameters n and θ, where
n is a positive integer and θ a value between 0 and 1, if

(2.4.5)        Pr{J = j} = C(n, j) θ^j (1 − θ)^{n−j},   j = 0, 1, ⋯, n,

where C(n, j) = n!/(j!(n − j)!) is the binomial coefficient. We denote such
a binomial distribution by B(n, θ). The PDF of B(n, θ) will be denoted by
f_B(j; n, θ). The CDF of B(n, θ) will be designated F_B(j; n, θ). The mean
(expected value) of B(n, θ) is

(2.4.6)        μ = nθ,

and its standard deviation is

(2.4.7)        σ = (nθ(1 − θ))^{1/2}.

The binomial distribution represents the distribution of the number of
occurrences of some specified event among n independent trials, for which
the probability of that event is θ. Independent trials in which θ stays
fixed are called Bernoulli trials. Thus, the binomial distribution is the
distribution of the number of "successes" among n Bernoulli trials.
EXAMPLE 2.10
Suppose that 10 identical devices are placed on test at the same time
t = 0. Let J be the number of failures in the time interval [0, 1). Suppose
further that the life distribution of each device is exponential with MTTF
β = 5 [hr]. What is the distribution of J? The probability of failure of a
particular device in the time interval [0, 1) is θ = 1 − exp(−1/β). Thus, the
PDF of J is f_B(j; 10, 1 − e^{−1/5}), or f_B(j; 10, .1813). The expected number
of failures during the first hour is μ = nθ = 1.81. The standard deviation
of J is σ = (10 × .1813 × .8187)^{1/2} = 1.22. In a similar fashion we obtain
that the distribution of the number of failures during the second hour, J₂,
is B(10, θ₂), where

        θ₂ = exp(−1/5) − exp(−2/5) = .1484.

Thus, the expected number of failures during the second hour (taking no
account of what happened in the first hour) is μ(2) = 10 θ₂ = 1.48. The
standard deviation of J₂ is σ(2) = 1.12. The distributions of the number of
failures during the third hour, etc., can be obtained similarly. •
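The computations of Example 2.10 follow directly from (2.4.5)–(2.4.7); a Python sketch:

```python
from math import comb, exp, sqrt

def binom_pdf(j, n, theta):
    # f_B(j; n, theta) = C(n, j) theta^j (1 - theta)^(n - j), formula (2.4.5)
    return comb(n, j) * theta**j * (1 - theta)**(n - j)

# Example 2.10: n = 10 devices, exponential lifetimes with MTTF beta = 5 [hr]
theta = 1 - exp(-1 / 5)                   # failure probability in [0, 1): .1813
mu = 10 * theta                           # expected number of failures: 1.81
sigma = sqrt(10 * theta * (1 - theta))    # standard deviation: 1.22
print(round(theta, 4), round(mu, 2), round(sigma, 2))
```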

Computer programs for f_B(j; n, θ) are often based on the recursive for-
mula

(2.4.8)    f_B(j + 1; n, θ) = f_B(j; n, θ) · ((n − j)/(j + 1)) · (θ/(1 − θ)).

Thus, f_B(1; 100, .9) = f_B(0; 100, .9)(100)(9). But, since f_B(0; 100, .9) =
.1^100 = 10⁻¹⁰⁰, the computer might show the value 0 for f_B(0; 100, .9),
and consequently for all other f_B(j; 100, .9) values obtained from (2.4.8).
To overcome this difficulty, one can apply the normal approximation
to the binomial, which is generally good if

(2.4.9)        n ≥ 9/(θ(1 − θ)).

This approximation, for large n, is

(2.4.10)    F_B(j; n, θ) ≅ Φ((j + 1/2 − nθ)/(nθ(1 − θ))^{1/2}).

Thus,

        F_B(80; 100, .9) ≅ Φ((80.5 − 90)/3) = .00077.

More generally, for any i < j,

(2.4.11)    Pr{i ≤ J ≤ j} ≅ Φ((j + 1/2 − nθ)/(nθ(1 − θ))^{1/2})
                           − Φ((i − 1/2 − nθ)/(nθ(1 − θ))^{1/2}),

where J is B(n, θ), and n is large.
EXAMPLE 2.11
The probability of a defective item in a production process is θ = .06.
What is the probability that the number of defective items in a batch of
n = 1000 items is between 50 and 70 inclusive? Here nθ = 60 and
nθ(1 − θ) = 56.4. Thus,

    Pr{50 ≤ J ≤ 70} ≅ Φ((70.5 − 60)/√56.4) − Φ((49.5 − 60)/√56.4)
                     = 2Φ(1.398) − 1 = .8379.
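Both the recursion (2.4.8) and the approximation (2.4.11) are easy to program; the sketch below recomputes Example 2.11 and compares the approximation with the exact binomial sum:

```python
from math import sqrt
from statistics import NormalDist

def binom_normal_approx(i, j, n, theta):
    # Pr{i <= J <= j} by (2.4.11), with the 1/2 continuity correction
    mu, sd = n * theta, sqrt(n * theta * (1 - theta))
    Phi = NormalDist().cdf
    return Phi((j + 0.5 - mu) / sd) - Phi((i - 0.5 - mu) / sd)

def binom_pdf_rec(n, theta):
    # All of f_B(0), ..., f_B(n) generated by the recursion (2.4.8)
    f, out = (1 - theta) ** n, []
    for j in range(n + 1):
        out.append(f)
        f *= (n - j) / (j + 1) * theta / (1 - theta)
    return out

# Example 2.11: n = 1000, theta = .06
approx = binom_normal_approx(50, 70, 1000, 0.06)
exact = sum(binom_pdf_rec(1000, 0.06)[50:71])
print(round(approx, 4))  # 0.8379
```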


2.4.3 The Poisson Distribution


The Poisson distribution is the distribution of a discrete random variable,
which can assume the non-negative integer values with probabilities

(2.4.12)        p(j; λ) = e^{−λ} λ^j / j!,   j = 0, 1, ⋯

where λ is a positive parameter. p(j; λ) is the PDF of the Poisson distri-
bution. We denote the corresponding CDF by Pos(j; λ), and the generic
symbol for the distribution is Pos(λ). The Poisson distribution was found
to provide good models for many natural phenomena (number of trees in
an acre of forest, number of defects in one square meter of aluminum sheet,
etc.). In the area of reliability, if a given device has an exponential lifetime
distribution E(β), and whenever it fails it is instantaneously replaced by
a similar device, then the number of failures per time unit has a Poisson
distribution Pos(1/β). This relationship between the exponential and Pois-
son distributions can be proven theoretically. It is also the basis for the
relationship between the Poisson and the Erlang distributions mentioned
in Section 2.3.1.
The mean value of Pos(λ) is

(2.4.13)        μ = λ,

and its standard deviation is

(2.4.14)        σ = λ^{1/2}.

Thus, if the MTTF of a device having a life distribution E(β) is β = 10
[hr], the expected number of failures per hour is λ = .1, or 1 failure per 10
hours.
When λ is large (larger than 30), the Poisson distribution can be ap-
proximated by a normal distribution N(λ, √λ). That is,

(2.4.15)    Pr{i ≤ J ≤ j} ≅ Φ((j + 1/2 − λ)/√λ) − Φ((i − 1/2 − λ)/√λ),

where the distribution of J is Pos(λ). For example, if J has a Poisson
distribution with λ = 50, then

    Pr{35 ≤ J ≤ 60} ≅ Φ((60.5 − 50)/√50) − Φ((34.5 − 50)/√50)
                     = .9312 − .0142 = .9170.
The Poisson distribution is itself useful in approximating the binomial
distribution when n is large and θ is small. The approximation is given by

(2.4.16)        F_B(j; n, θ) ≅ Pos(j; nθ),

where n is very large and θ very small.

EXAMPLE 2.12
If the probability of a defective computer chip is θ = 10⁻³, what is the
distribution of the number, J, of defectives among n = 10⁴ chips? The
theoretical answer is B(10⁴, 10⁻³). However, to compute a probability like
Pr{7 ≤ J ≤ 13} we can use the Poisson distribution with mean λ = nθ =
10⁴ × 10⁻³ = 10. Thus,

    Pr{7 ≤ J ≤ 13} ≅ Pos(13; 10) − Pos(6; 10) = .734.

The normal approximation to these binomial probabilities yields

    Pr{7 ≤ J ≤ 13} ≅ .732,

which is close to the value obtained by the Poisson approximation.

2.4.4 Hypergeometric Distributions
The hypergeometric distribution is a distribution of a discrete random vari-
able having a PDF

(2.4.17)    f_H(j; N, M, n) = C(M, j) C(N − M, n − j) / C(N, n),   j = 0, ⋯, n,

where N, M and n are positive integer-valued parameters, and

        C(a, b) = a!/(b!(a − b)!),   0 ≤ b ≤ a.

We will denote by F_H(j; N, M, n) the associated CDF and by H(N, M, n)
the distribution.
The hypergeometric distribution H(N, M, n) is the distribution of the
number J of elements having a certain attribute, in a random sample of size
n without replacement from a finite population of size N, where M is the
number of elements in the population having the prescribed attribute. The
hypergeometric distribution plays a major role in the theory of sampling
inspection by attribute from finite lots.
The mean of H(N, M, n) is

(2.4.18)        μ = n M/N,

and its standard deviation is

(2.4.19)    σ = (n (M/N)(1 − M/N)(1 − (n − 1)/(N − 1)))^{1/2}.

If n/N < .1, the hypergeometric distribution H(N, M, n) can be approx-
imated by the binomial B(n, M/N).

EXAMPLE 2.13
A lot of N = 1000 elements contains M = 5 defectives. The probability
that a random sample of size n = 20, without replacement, will contain
more than 1 defective is

    Pr{J > 1} = 1 − Pr{J ≤ 1}
              = 1 − F_H(1; 1000, 5, 20)
              ≅ 1 − F_B(1; 20, .005)
              = 1 − (.995)²⁰ − 20(.005)(.995)¹⁹
              = .00447.

One can approximate the hypergeometric distribution by a normal dis-
tribution when n is large (and so is N). For this purpose we set μ = n M/N
and

    σ = (n (M/N)(1 − M/N)(1 − (n − 1)/(N − 1)))^{1/2},

and use the formula

(2.4.20)    Pr{i ≤ J ≤ j} = F_H(j; N, M, n) − F_H(i − 1; N, M, n)
                          ≅ Φ((j + 1/2 − μ)/σ) − Φ((i − 1/2 − μ)/σ).

EXAMPLE 2.14
If N = 100000, M = 5000 and n = 300, we find μ = 15 and σ = 3.769;
hence,

    Pr{10 ≤ J ≤ 20} ≅ Φ((20.5 − 15)/3.769) − Φ((9.5 − 15)/3.769) = .855.
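The hypergeometric PDF (2.4.17) is also easy to program; the sketch below recomputes the binomial approximation of Example 2.13 and the exact hypergeometric value (which comes out slightly smaller):

```python
from math import comb

def hypergeom_pdf(j, N, M, n):
    # f_H(j; N, M, n) = C(M, j) C(N - M, n - j) / C(N, n), formula (2.4.17)
    return comb(M, j) * comb(N - M, n - j) / comb(N, n)

# Example 2.13: N = 1000, M = 5, n = 20
exact = 1 - hypergeom_pdf(0, 1000, 5, 20) - hypergeom_pdf(1, 1000, 5, 20)
approx = 1 - 0.995**20 - 20 * 0.005 * 0.995**19   # binomial B(20, .005)
print(round(approx, 5))  # 0.00447
```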


2.5 Exercises
[2.1.1] Give examples of data which are:
        (i) right time censored;
        (ii) right frequency censored; and
        (iii) left time censored and right frequency censored.
[2.1.2] A study of television failures is going to be conducted during the
next three years. The systems (televisions) enter the study as they
are sold. Assume that the sale times are randomly distributed over
the study period. For each television in the study we record the sale
date, the number of hours of operation every day, the failure dates,
the type of failure and length of down time (repair and logistics).
If a customer leaves the area then her/his set drops from the study.
What type of censoring characterizes this study?

[2.2.1] Consider a continuous random variable, X, having a CDF

                F(x) = 0,             x < 1
                     = ln x / ln 5,   1 ≤ x ≤ 5
                     = 1,             x > 5.

        (i) Find the PDF of X.
        (ii) Show that x_p = 5^p, 0 < p < 1, is the p-th fractile of X; in
        particular, the median is Me = 5^{1/2} = 2.236.
        (iii) Show that the r-th moment of X is μ_r = (5^r − 1)/(r ln 5),
        r = 1, 2, ⋯. Use this formula to find the expected value, μ, and the
        standard deviation, σ.
[2.2.2] Suppose that the lifetime of a piece of equipment has a uniform
        distribution over the range t₁ = 5 [hr] to t₂ = 15 [hr]. The PDF is
        thus

                f(t) = 1/10,  for 5 ≤ t ≤ 15
                     = 0,     otherwise.

        (i) What is the failure rate function of this equipment?
        (ii) Show that the MTTF, μ, and the median life Me are equal.
        (iii) What is the standard deviation of this life distribution?
[2.3.1] Consider an exponential distribution with MTTF of 1000 [hr].
        (i) Determine the first and third quartiles, E.25 and E.75, of this
        distribution.
        (ii) Determine the standardized interquartile range W = (E.75 −
        E.25)/σ.
[2.3.2] Let X₁ and X₂ be two independent random variables having the
        exponential distributions E(β₁) and E(β₂), respectively.
        (i) Let Y = min(X₁, X₂). Show that Y has the exponential distri-
        bution E(β*), where β* = β_h/2 and β_h = [½(1/β₁ + 1/β₂)]⁻¹ is the
        harmonic mean of β₁ and β₂.
        [Hint: Show that the reliability function of Y is e^{−t/β*}. Use the
        independence of X₁ and X₂ and the fact that min(X₁, X₂) > t if
        and only if X₁ > t and X₂ > t.]
        (ii) A system consists of two independent components in series.
        Each has an exponential life distribution, with MTTFs β₁ = 500
        [hr] and β₂ = 1000 [hr]. What is the MTTF for the system? What
        is the system reliability at t = 300 hours?
[2.3.3] Generalize the result of [2.3.2] to show that, for a system consisting
        of n independent components connected in series, the life distribu-
        tion of the system is exponential provided the life distribution of
        each component is exponential. When components are connected
        in series, the failure of any one component results in the failure of
        the system.
[2.3.4] Simulate N = 100 failure times of the system specified in [2.3.2], by
        using a computer program which generates random numbers, and
        applying the relationship E(β) ~ −β ln(U), where U has a uniform
        distribution on (0, 1). Analyze the characteristics of the generated
        sample. Compare the sample mean to β*, and the proportion of
        sample values greater than 300 [hr] to R(300).
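A sketch of the simulation requested in [2.3.4] (Python; the seed and function names are ours):

```python
import random
from math import exp, log

def series_lifetime(beta1, beta2):
    # T = min(X1, X2), with each X ~ E(beta) generated as -beta * ln(U)
    x1 = -beta1 * log(1.0 - random.random())   # 1 - U avoids log(0)
    x2 = -beta2 * log(1.0 - random.random())
    return min(x1, x2)

random.seed(1)
sample = [series_lifetime(500, 1000) for _ in range(100)]
mean = sum(sample) / len(sample)            # compare with beta* = 1000/3
prop = sum(t > 300 for t in sample) / 100   # compare with R(300) = exp(-0.9)
print(round(exp(-0.9), 4))  # 0.4066
```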
[2.3.5] Let X₁ ~ E(5) and X₂ ~ E(10). What are the expected value and
        standard deviation of X₁ + X₂, when X₁ and X₂ are independent?
        [Hint: Recall that the variance of the sum of independent random
        variables is the sum of their variances.]
[2.3.6] Let T₁, ⋯, T₁₀ be the lifetime random variables of a random sam-
        ple of n = 10 devices, having identical exponential distributions
        E(β), where β = 1000 [hr].
        (i) What is the expected time till the first failure, T₍₁₎ = min(Tᵢ)?
        (Apply problem [2.3.3].)
        (ii) Let T₍₁₎ ≤ T₍₂₎ ≤ ⋯ ≤ T₍₁₀₎ be the order statistic of this
        sample. Due to the "memoryless" property of the exponential dis-
        tribution, T₍₂₎ − T₍₁₎ is distributed like the minimum of a ran-
        dom sample of size 9 from E(β); T₍₃₎ − T₍₂₎ is distributed like the
        minimum of a random sample of size 8, etc. Moreover, the incre-
        ments T₍ᵢ₎ − T₍ᵢ₋₁₎, i = 1, 2, ⋯, n, are independent. Prove that
        E{T₍ᵢ₎} = β(1/n + 1/(n−1) + ⋯ + 1/(n−i+1)) and that the variance
        of T₍ᵢ₎ is V{T₍ᵢ₎} = β²(1/n² + 1/(n−1)² + ⋯ + 1/(n−i+1)²). Thus,
        find E{T₍₅₎} and V{T₍₅₎}.
[2.3.7] The lifetime, T [months], of a semiconductor device has an Erlang
        distribution G(5, 100).
        (i) Show that the probability that the device fails before 500 months
        is .56.
        (ii) What are the expected value and standard deviation of the life
        distribution?
        (iii) What is the failure rate of the device at age t = 500?
        (iv) Using a table of the χ² distribution, find the .95 fractile of the
        life distribution.
[2.3.8] Find the values of Γ(5), Γ(5.5), Γ(6.33), Γ(6.67).
[2.3.9] Find the median, quartiles, expected value and standard deviation
        of the gamma distribution G(11, 1000).
[2.3.10] The lifetime T (in hours) of a relay has a Weibull distribution
W(2, 1000).
(i) What is the probability that T will exceed 1000 hours?
(ii) Calculate the mean and standard deviation of the lifetime.
(iii) Calculate the median and the quartiles of the distribution.

[2.3.11] Determine the expected value and standard deviation of the shifted
         Weibull WS(2, 10, 5), where ν = 2, β = 10 and t₀ = 5.
[2.3.12] Consider the extreme value distribution EV(50, 2). Compute the
         median, the expected value (mean) and the standard deviation of
         this distribution.
[2.3.13] If X ~ EV(50, 2), what is the expected value of exp(3X)?
         [Hint: X ~ ln W(ν, β), so exp(3X) ~ (W(ν, β))³. Find also the rela-
         tions between ξ, δ and ν, β, and substitute ξ = 50, δ = 2.]
[2.3.14] If X ~ N(10, 2) find the probabilities
         (i) Pr{8 ≤ X ≤ 12};
         (ii) Pr{X ≥ 13};
         (iii) Pr{|X| > 10.5}.
[2.3.15] The r-th moment of a standard normal distribution, N(0, 1), is
         given by the formula

                μ_r = 0,                if r = 2m + 1,
                    = (2m)!/(2^m m!),   if r = 2m,

         for all m = 0, 1, ⋯. Find the third and the fourth central moments
         of N(100, 5).
[2.3.16] If X is distributed like N(μ, σ), find the expected value of Y =
         |X − μ|.
[2.3.17] Consider a device having a truncated normal life distribution, with
         μ = 5, σ = 2 and t₀ = 3 [hr]. Find the reliability at t = 6 [hr] and
         the failure rate at this age.
[2.3.18] Apply the normal approximation to the Erlang life distribution
[weeks] G(20, 5) to determine the probability (approximately) that
a device with that lifetime distribution will fail between 90 and 110
weeks of operation.
[2.3.19] A general formula for the moments of a lognormal distribution
         LN(μ, σ) is μ_r = exp(rμ + ½r²σ²), r = 0, 1, 2, ⋯.
         (i) Compute the first 3 moments of the distribution for the case of
         μ = 1, σ = 1.
         (ii) Compute the standard deviation τ, and the third and fourth
         central moments

                μ₃* = μ₃ − 3μ₂μ₁ + 2μ₁³,
                μ₄* = μ₄ − 4μ₃μ₁ + 6μ₂μ₁² − 3μ₁⁴,

         for μ = 1 and σ = 1. [τ² = μ₂ − μ₁².]
         (iii) Compute the coefficients of skewness, μ₃*/τ³, and of steepness,
         μ₄*/τ⁴, for the case of μ = σ = 1.


[2.3.20] The tensile strength, X [kg], of certain fibers has a lognormal distri-
         bution LN(μ, 3). How large should μ be so that Pr{X ≤ 158} = .5?
[2.4.1]  Use a computer program to plot the PDF of the binomial distri-
         bution B(n, θ) for n = 25 and θ = .7. Compute also the following
         probabilities:
         (i) Pr{12 ≤ X ≤ 20};
         (ii) Pr{X > 17.5}.
[2.4.2] The lifetime of a radar system [months] has the CDF

                F(t) = (e^t − 1)/(e^t + 1),   0 < t < ∞.

        A random sample of n = 25 units from this distribution is tested.
        (i) What is the probability that at least 10 units will fail within
        the first month?
        (ii) What is the conditional probability that a unit will fail during
        the second month of testing, given that it has survived the first
        month?
        (iii) Given that 12 units failed during the first month, find the
        conditional probability that among the remaining 13 units, at least
        6 will fail during the second month.
[2.4.3] The random variable J has a binomial distribution B(n, θ) with
        n = 2500 and θ = .13. Apply the normal approximation to the
        binomial distribution to find the probability Pr{300 ≤ J ≤ 400}.
[2.4.4] Use a computer program to tabulate the PDF of the Poisson dis-
        tribution with λ = 7 for values of J between 0 and 15.
[2.4.5] Consider a device in which a component is replaced immediately
        upon failure by another component of the same type. If the lifetime
        of the component is exponential with mean β, then the number of
        replacements per time unit, X, has a Poisson distribution with
        mean λ = 1/β. Replacement parts can be stocked once a month.
        Suppose that β = .2 [months]. How many spares should be stocked
        so that the probability of shortage will not exceed α = .05?
[2.4.6] Apply the normal approximation to the Poisson distribution with
        mean λ = 70, to compute Pr{60 ≤ J ≤ 80}.
[2.4.7] Apply the Poisson approximation to the binomial distribution to
        compute the probability of having at least one defective in a lot of
        N = 10,000 chips, if the probability of a defective chip is θ = 10⁻⁴.
[2.4.8] In a lot of 30 personal computers there are 3 having a defective
        keyboard. Five PCs are randomly chosen from the lot. What is
        the probability of having at least one defective keyboard in the
        sample?
[2.4.9] Apply the normal approximation to H(j; N, M, n) with N = 3000,
        M = 500, n = 100 to determine Pr{13 ≤ J ≤ 19}.
3
Reliability of
Composite Systems

In the present chapter we discuss methods of determining the reliability
of a system when we have information on the reliability of its subsystems,
or components, and we know the structure of the system. The reliability
function is a function of time, R(t), 0 < t < ∞. In much of the discussion
the time argument, t, is fixed at a given value, say t₀. Thus, we can often
suppress the time variable and write R for R(t₀). Furthermore, if a system
is comprised of n subsystems (components), the corresponding reliability
values will be denoted by R₁, R₂, ⋯, Rₙ. The reliability of the whole
system will be denoted by R_sys. We would like to be able to express the
reliability of the system, R_sys, as a function ψ(R₁, ⋯, Rₙ) of the reliability
values of its subsystems. This is generally possible if one has a well defined
structure function which describes the interrelations among the subsystems,
and if the failure times of the subsystems are mutually independent random
variables. If the failure times are not independent but correlated, it may
be impossible to determine R_sys just from the information on R₁, ⋯, Rₙ;
one may need some further information. In such cases, however, upper and
lower bounds for R_sys can often be determined as functions of R₁, ⋯, Rₙ.

3.1 System Reliability for Series and Active Parallel
Independent Components
Consider a system having two components C₁ and C₂. We say that the
two components are connected in series if the failure of either one of the
components causes an immediate failure of the system. This type of logical
structure can be diagrammed by the block diagram given in Figure 3.1.

Figure 3.1. Block Diagram of a Series Structure

Let Iᵢ (i = 1, 2) be an indicator function, assuming the value Iᵢ = 1 if
component Cᵢ operates throughout the specified period [0, t₀), and Iᵢ = 0
otherwise. The system operates through the period [0, t₀) if, and only if,
I₁I₂ = 1. We therefore define the series structure function

(3.1.1)        ψ_s(I₁, I₂) = I₁ · I₂.

Both I₁ and I₂ are random variables, and E{Iᵢ} = Pr{Iᵢ = 1} = Rᵢ, where
E{·} denotes the expected value, and Rᵢ is the reliability of Cᵢ, i = 1, 2.
Notice that ψ_s(I₁, I₂) assumes only the value 0 (if the system fails) or 1 (if
the system survives). The reliability of the system is

        R_sys = Pr{ψ_s(I₁, I₂) = 1}
(3.1.2)
              = Pr{I₁ = 1, I₂ = 1}.

But, due to the independence of I₁ and I₂,

        Pr{I₁ = 1, I₂ = 1} = Pr{I₁ = 1} Pr{I₂ = 1}.

Hence,

(3.1.3)        R_sys = R₁ R₂.

Thus, if we define the function ψ_s(x₁, x₂) = x₁x₂, for all x₁, x₂ in [0, 1],
then

        R_sys = Pr{ψ_s(I₁, I₂) = 1}
(3.1.4)
              = ψ_s(R₁, R₂).
In the same manner one can extend this result to a system of n independent
components connected in series. Thus, let

(3.1.5)        ψ_s(x₁, ⋯, xₙ) = ∏_{i=1}^{n} xᵢ

for all xᵢ in [0, 1], where

        ∏_{i=1}^{n} aᵢ ≡ a₁ · a₂ · ⋯ · aₙ.

Then

        R_sys = Pr{ψ_s(I₁, ⋯, Iₙ) = 1}
(3.1.6)       = R₁ · R₂ · ⋯ · Rₙ
              = ψ_s(R₁, ⋯, Rₙ).



Figure 3.2. Block Diagram of Parallel Structure

Notice that if T₁, ⋯, Tₙ are the actual failure times of the n components,
then the failure time of a system connected in series is T_s = min_{1≤i≤n} Tᵢ.
A system of two components, C₁ and C₂, is connected in active-parallel
if the system fails only when both components fail. The parallel structure
function is

(3.1.7)        ψ_p(x₁, x₂) = 1 − (1 − x₁)(1 − x₂),

0 ≤ x₁, x₂ ≤ 1. In terms of the indicator functions I₁ and I₂,

        ψ_p(I₁, I₂) = 1 − (1 − I₁)(1 − I₂)
(3.1.8)
                    = I₁ + I₂ − I₁I₂.

Thus, ψ_p(I₁, I₂) = 0 if, and only if, both I₁ = 0 and I₂ = 0. In this case, if
I₁ and I₂ are independent,

        R_sys = Pr{ψ_p(I₁, I₂) = 1}
(3.1.9)       = 1 − E{1 − I₁}E{1 − I₂}
              = 1 − (1 − R₁)(1 − R₂) = ψ_p(R₁, R₂).

If the system is comprised of n independent components, connected in active
parallel, then

(3.1.10)       R_sys = ψ_p(R₁, ⋯, Rₙ) = 1 − ∏_{i=1}^{n} (1 − Rᵢ).
The block diagram of active parallel components is given in Figure 3.2.
Any part of a system whose block diagram shows one line going in and
one line going out is called a module. In Figure 3.3 we illustrate a system
comprised of modules.

Figure 3.3. A System Structure with Modules

Each component Cᵢ (i = 1, ⋯, 5) can be considered as a module. The
system M₁ = {C₁, C₂, C₃} is a module. The system M₂ = {C₄, C₅} is a
module. The whole system is a module containing more elementary mod-
ules. In Figure 3.4 we present a crosslinked system (module). Components
C₂ and C₃ are not modules. Components C₁ and C₄ are modules, as well
as the whole system.
If a system consists of modules, we can compute first the reliability of
each module, and then the reliability of the system according to the struc-
ture function connecting the modules. Thus, for example, referring back to
the block diagram of Figure 3.3, if Rᵢ is the reliability of Cᵢ (i = 1, ⋯, 5)
and R_{Mⱼ} is the reliability of module Mⱼ (j = 1, 2), then, if all components
are independent, the system reliability is

        R_sys = ψ_p(R_{M₁}, R_{M₂})
(3.1.11)
              = 1 − (1 − R_{M₁})(1 − R_{M₂}),

where R_{M₁} = R₁R₂R₃ and R_{M₂} = R₄R₅, the components within each
module being connected in series. Thus, if R₁ = ⋯ = R₅ = .9, the
reliability of the system is R_sys = .949.
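The series rule (3.1.6) and the parallel rule (3.1.10) compose naturally module by module; a Python sketch that recomputes the Figure 3.3 system (assuming, as the figure suggests, that the components within each module are in series):

```python
def series(rels):
    # psi_s: product of the component reliabilities, formula (3.1.6)
    out = 1.0
    for r in rels:
        out *= r
    return out

def parallel(rels):
    # psi_p: one minus the product of the unreliabilities, formula (3.1.10)
    out = 1.0
    for r in rels:
        out *= 1.0 - r
    return 1.0 - out

# Figure 3.3 with R1 = ... = R5 = .9: M1 = {C1, C2, C3}, M2 = {C4, C5}
r_sys = parallel([series([0.9] * 3), series([0.9] * 2)])
print(round(r_sys, 3))  # 0.949
```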

3.2 k Out of n Systems of Independent Components


Some systems, or modules, are constructed of n active parallel components,
but in order that the system function it is necessary that at least k out of
the n units function. The previously defined active parallel structure func-
tion, ψ_p(x₁, ⋯, xₙ), required that at least 1 out of n components function.
The reliability function of a "k out of n" parallel system, in which all com-
ponents are independent and have the same reliability, R, is

(3.2.1)    ψ_(k)n(R) = Σ_{j=k}^{n} C(n, j) R^j (1 − R)^{n−j}
                     = 1 − F_B(k − 1; n, R).

Figure 3.4. A Crosslinked System


EXAMPLE 3.1
A. The system S consists of n = 3 water pumps for cooling a reactor. If
the pumps function independently, the reliability of each, over a 1000 [hr]
period, is R = .95, and at least k = 2 pumps must function (assuming
no repairs), then the reliability of the system is

        ψ_(2)3(.95) = 1 − (.05)³ − 3(.95)(.05)² = .993.

B. Two modules M₁, M₂ are connected in series. Module M₁ is a 3 out
of 5 system, with component reliability R₁ = .8. Module M₂ is a 4 out of
8 system, with component reliability R₂ = .6. The reliability of M₁ is

        R_{M₁} = ψ_(3)5(.8) = 1 − F_B(2; 5, .8) = 1 − .058 = .942.

The reliability of M₂ is

        R_{M₂} = ψ_(4)8(.6) = 1 − F_B(3; 8, .6) = 1 − .174 = .826.

The reliability of the system is

        R_sys = ψ_s(.9421, .8263) = (.9421)(.8263) = .778.

If the two modules are connected in parallel, then the reliability of the
system is

        R_sys = ψ_p(.9421, .8263) = 1 − (1 − .9421)(1 − .8263) = .990.
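Formula (3.2.1) in Python, recomputing Example 3.1 (a sketch):

```python
from math import comb

def k_out_of_n(k, n, R):
    # psi_(k)n(R) = sum_{j=k}^{n} C(n, j) R^j (1 - R)^(n - j), formula (3.2.1)
    return sum(comb(n, j) * R**j * (1 - R)**(n - j) for j in range(k, n + 1))

print(round(k_out_of_n(2, 3, 0.95), 3))  # 0.993  (Example 3.1A)
print(round(k_out_of_n(3, 5, 0.8), 4))   # 0.9421 (module M1)
print(round(k_out_of_n(4, 8, 0.6), 4))   # 0.8263 (module M2)
```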



3.3 The Decomposition Method


In Figure 3.4 we illustrated a crosslinked system. In order to determine
the reliability of such a system we apply the decomposition method.
According to this method, a component is chosen close to the left or to the
right end of the block diagram. This component is called a "keystone." We
then compute the conditional system reliability, given that the keystone
survives, and the conditional system reliability, given that the keystone fails.
The reliability of the system is then determined as a weighted average of
these two conditional reliabilities, where the weights are, respectively, the
reliability of the keystone, R, and 1 − R.
EXAMPLE 3.2
Consider the block diagram of Figure 3.4. If C₃ is chosen as keystone,
then

(3.3.1)    R_sys = R₃ R_sys|C₃ + (1 − R₃) R_sys|C̄₃,

where R_sys|C₃ is the conditional system reliability, given that C₃ survives,
and R_sys|C̄₃ is the conditional system reliability, given that C₃ fails. Now, if C₃
survives, then under independence,

(3.3.2)        R_sys|C₃ = R₂ + R₄ − R₂R₄.

Indeed, if we know that C₃ operates throughout the mission period, the
system will operate if either C₂ or C₄ survives; C₁ is irrelevant. If C₃
fails, the system survives only if both C₁ and C₂ survive. Thus, under
independence,

(3.3.3)        R_sys|C̄₃ = R₁R₂.

Hence,

        R_sys = R₃(R₂ + R₄ − R₂R₄) + (1 − R₃)R₁R₂
(3.3.4)
              = R₁R₂ + R₂R₃ + R₃R₄ − R₁R₂R₃ − R₂R₃R₄.
If, for example, R₁ = R₂ = R₃ = R₄ = .9, then R_sys = .972. If the link
between C₂ and C₃ is severed, then the system reliability is reduced to
R_sys = R₁R₂ + R₃R₄ − R₁R₂R₃R₄ = .964.
We remark in this connection that it is immaterial which component is
chosen as a keystone. If we had chosen C₂ for a keystone, then

(3.3.5)    R_sys = R₂ R_sys|C₂ + (1 − R₂) R_sys|C̄₂.

But

        R_sys|C₂ = R₁ + R₃ − R₁R₃

and

        R_sys|C̄₂ = R₃R₄.

Hence,

        R_sys = R₂(R₁ + R₃ − R₁R₃) + (1 − R₂)R₃R₄
(3.3.6)
              = R₁R₂ + R₂R₃ + R₃R₄ − R₁R₂R₃ − R₂R₃R₄.

This is exactly the same as the result previously obtained. If there is more
than one crosslink, the system reliability can be determined by successive
steps of decomposition.

Figure 3.5. A Double-Crosslinked System

EXAMPLE 3.3
Consider the system having a double-crosslinked structure, as shown in
Figure 3.5. Suppose we choose C₄ as the first keystone; then

(3.3.7)    R_sys = R₄ R_sys|C₄ + (1 − R₄) R_sys|C̄₄.

If C₄ fails, the only way that the system will survive is if C₁, C₂ and C₃ all
survive. Thus,

(3.3.8)        R_sys|C̄₄ = R₁R₂R₃.

However, to compute R_sys|C₄ we have to take into account the second
crosslink. For this purpose we choose C₅ as a second keystone. We write
then

(3.3.9)    R_sys|C₄ = R₅ R_sys|C₄,C₅ + (1 − R₅) R_sys|C₄,C̄₅.

Now, if both C₄ and C₅ survive, the system survives if either C₃ or C₆
survives. Thus,

(3.3.10)       R_sys|C₄,C₅ = R₃ + R₆ − R₃R₆.

If C₅ fails, then the system survives only if both C₂ and C₃ survive. Thus,

(3.3.11)       R_sys|C₄,C̄₅ = R₂R₃.

Combining all these results we obtain

        R_sys = R₄[R₅(R₃ + R₆ − R₃R₆)
(3.3.12)
              + (1 − R₅)R₂R₃] + (1 − R₄)R₁R₂R₃.

If all components have a reliability value of R = .95, the reliability of this
double-crosslinked system is R_sys = .9860.
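The decomposition result (3.3.12) is easy to evaluate for arbitrary component reliabilities; a Python sketch:

```python
def rsys_double_crosslinked(R1, R2, R3, R4, R5, R6):
    # Formula (3.3.12) for the double-crosslinked system of Figure 3.5
    given_c4 = R5 * (R3 + R6 - R3 * R6) + (1 - R5) * R2 * R3
    return R4 * given_c4 + (1 - R4) * R1 * R2 * R3

print(round(rsys_double_crosslinked(*[0.95] * 6), 4))  # 0.986
```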

In a similar manner one can determine the reliability of systems having
multiple crosslinks.

3.4 Minimal Paths and Cuts


In the present section we will discuss another method for determining the
reliability of systems. Let ψ(x₁, ⋯, xₙ) be a function of n variables, 0 ≤
xᵢ ≤ 1 for all i = 1, ⋯, n. This function is called a structure function if
ψ(I₁, ⋯, Iₙ) = 1 when the system survives and is equal to zero otherwise,
where I₁, ⋯, Iₙ are the survival indicators of the components.
Consider a system S of n components, represented by a given structure
function. Let P = {C_{i₁}, ⋯, C_{i_m}} be a set of m components of S. P is
called a path set if the system S survives whenever all the elements of
P survive. A path set, P, is called minimal if the set is not a path set
following the exclusion of any of its members; i.e., no proper subset of P is
a path set.
Given a block diagram of a system, one can list all the minimal paths.
EXAMPLE 3.4
Consider a "bridge" described in Figure 3.6. The following is a list of
the minimal paths associated with this structure:

        P₁ = {C₁, C₂},       P₂ = {C₃, C₄},
        P₃ = {C₂, C₃, C₅},   P₄ = {C₁, C₄, C₅}.

The system survives if all the components in at least one of these four path
sets survive. Thus, we can write

Figure 3.6. A Bridge Connection

        ψ(I₁, ⋯, I₅) = ψ_p(I₁I₂, I₃I₄, I₂I₃I₅, I₁I₄I₅)
                     = I₁I₂ + I₃I₄ + I₂I₃I₅ + I₁I₄I₅
(3.4.1)              − I₁I₂I₃I₄ − I₁I₂I₃I₅
                     − I₁I₂I₄I₅ − I₂I₃I₄I₅
                     − I₁I₃I₄I₅ + 2I₁I₂I₃I₄I₅.

Notice that since Iᵢ admits only the values 0 or 1, Iᵢ² = Iᵢ, Iᵢ³ = Iᵢ, etc.
Finally, under the assumption that the elements are independent, the system
reliability is

        R_sys = E{ψ(I₁, ⋯, I₅)}
              = R₁R₂ + R₃R₄ + R₂R₃R₅ + R₁R₄R₅
                − R₁R₂R₃R₄ − R₁R₂R₃R₅
(3.4.2)         − R₁R₂R₄R₅ − R₂R₃R₄R₅
                − R₁R₃R₄R₅ + 2R₁R₂R₃R₄R₅
              = ψ(R₁, ⋯, R₅).


A cut set is a set of components of a system such that if all the compo-
nents belonging to the set fail then the system fails too. A cut set is called
minimal if the survival of any one of its elements entails system survival.
EXAMPLE 3.5
The minimal cut sets of the system presented in Figure 3.6 are

        K₁ = {C₁, C₃},       K₂ = {C₂, C₄},
        K₃ = {C₁, C₄, C₅},   K₄ = {C₂, C₃, C₅}.

The system fails if all the components in at least one of these minimal cut
sets fail; it survives otherwise. Thus, the structure function of the system
can be written in terms of these minimal cut sets as

        ψ(I₁, ⋯, I₅) = (1 − (1 − I₁)(1 − I₃))
                      · (1 − (1 − I₂)(1 − I₄))
(3.4.3)               · (1 − (1 − I₁)(1 − I₄)(1 − I₅))
                      · (1 − (1 − I₂)(1 − I₃)(1 − I₅)).

The structure function (3.4.1) obtained by considering the minimal path
sets is the same as the function (3.4.3) obtained by considering the min-
imal cut sets. Since the structure function (3.4.3) can be expressed as
a multinomial function of its arguments, the independence implies that
R_sys = ψ(R₁, ⋯, R₅). •
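The expansion (3.4.2) can be checked by brute force: enumerate all 2⁵ component states and add the probabilities of those states in which at least one minimal path set is fully operative. A Python sketch:

```python
from itertools import product

# Minimal path sets of the bridge of Figure 3.6
PATHS = [{1, 2}, {3, 4}, {2, 3, 5}, {1, 4, 5}]

def bridge_reliability(R):
    # R[i] is the reliability of component C_{i+1}; sum the probabilities
    # of all component states that contain at least one full path set
    total = 0.0
    for states in product([0, 1], repeat=5):
        up = {i + 1 for i, s in enumerate(states) if s}
        if any(path <= up for path in PATHS):
            p = 1.0
            for i, s in enumerate(states):
                p *= R[i] if s else 1 - R[i]
            total += p
    return total

print(round(bridge_reliability([0.9] * 5), 5))  # 0.97848
```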

An algorithm for finding all the minimal cut sets will be given in Section
3.7, in connection with the analysis of fault trees.

3.5 The MTTF of Composite Systems


If we substitute in an R_sys formula the reliability of components as functions
of age, t, then we obtain the system reliability as a time function, R_sys(t).
The MTTF of the system is determined by the formula

(3.5.1)        μ_sys = ∫₀^∞ R_sys(t) dt.

EXAMPLE 3.6
Consider the double-crosslinked system in Figure 3.5. Suppose that each component of the system has an exponential life distribution E(βi), i = 1, ... , 6. The reliability function of the system, according to (3.3.12), is

           Rsys(t) = exp(−t/β4) [exp(−t/β5)
                     · (exp(−t/β3) + exp(−t/β6) − exp(−t(1/β3 + 1/β6)))
(3.5.2)
                     + (1 − exp(−t/β5)) exp(−t(1/β2 + 1/β3))]
                     + (1 − exp(−t/β4)) exp(−t(1/β1 + 1/β2 + 1/β3)).

Let λi = 1/βi, i = 1, ... , 6. Then, the MTTF of the system is

           μsys = ∫₀^∞ [exp(−t(λ3 + λ4 + λ5))
(3.5.3)           + exp(−t(λ4 + λ5 + λ6)) − exp(−t(λ3 + λ4 + λ5 + λ6))
                  + exp(−t(λ2 + λ3 + λ4)) − exp(−t(λ2 + λ3 + λ4 + λ5))
                  + exp(−t(λ1 + λ2 + λ3)) − exp(−t(λ1 + λ2 + λ3 + λ4))] dt.

Finally, since ∫₀^∞ e^(−αt) dt = 1/α for all α > 0, we obtain

           μsys = (λ1 + λ2 + λ3)^(−1) + (λ2 + λ3 + λ4)^(−1)
                  + (λ3 + λ4 + λ5)^(−1) + (λ4 + λ5 + λ6)^(−1)
(3.5.4)
                  − (λ1 + λ2 + λ3 + λ4)^(−1) − (λ2 + λ3 + λ4 + λ5)^(−1)
                  − (λ3 + λ4 + λ5 + λ6)^(−1).

In particular, if all the values of λi are equal to λ then

           μsys = (7/12) · (1/λ) = (7/12) β,

where {3 = 1/ A.
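Formulas such as (3.5.4) are easy to verify numerically: integrate Rsys(t) on a fine grid and compare with the closed form. A sketch (not from the text) with illustrative failure rates:

```python
# Numerical check of (3.5.4): trapezoid-rule integral of R_sys(t) from
# Example 3.6 versus the closed-form MTTF.  The rates are illustrative.
import math

lam = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06]  # failure rates lambda_1..lambda_6

def r_sys(t, L):
    l1, l2, l3, l4, l5, l6 = L
    return (math.exp(-t*(l3+l4+l5)) + math.exp(-t*(l4+l5+l6))
            - math.exp(-t*(l3+l4+l5+l6))
            + math.exp(-t*(l2+l3+l4)) - math.exp(-t*(l2+l3+l4+l5))
            + math.exp(-t*(l1+l2+l3)) - math.exp(-t*(l1+l2+l3+l4)))

def mttf_closed(L):
    l1, l2, l3, l4, l5, l6 = L
    return (1/(l1+l2+l3) + 1/(l2+l3+l4) + 1/(l3+l4+l5) + 1/(l4+l5+l6)
            - 1/(l1+l2+l3+l4) - 1/(l2+l3+l4+l5) - 1/(l3+l4+l5+l6))

h, T = 0.05, 2000.0          # step and truncation point of the integral
n = int(T / h)
integral = h * (0.5*r_sys(0, lam) + sum(r_sys(i*h, lam) for i in range(1, n))
                + 0.5*r_sys(T, lam))
print(integral, mttf_closed(lam))
```

Setting every rate to λ = 1 in `mttf_closed` also reproduces the 7/12 of the equal-rates case.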

It is easy to compute the MTTF exactly when the life distributions of the components are exponential. If, on the other hand, the life distributions are not exponential, the computation can be more tedious.
EXAMPLE 3.7
Suppose that a system consists of two independent components in series. Assume also that the life distributions of these components are Erlang G(k, β). Then the MTTF of the system is

(3.5.5)        μsys = β Σ_{i=0}^{k−1} Σ_{j=0}^{k−1} ((i + j)! / (i! j!)) 2^(−(i+j+1)).

Indeed, according to (2.3.7), the reliability function of each component is R(t) = Pos(k − 1; t/β). Hence,

(3.5.6)        μsys = ∫₀^∞ [Pos(k − 1; t/β)]² dt.

In the case of k = 5, numerical evaluation gives μsys ≈ 3.77 β.
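Equation (3.5.6) reduces the MTTF to a one-dimensional integral, so the constant for any k can be checked numerically. A sketch (not from the text), with β = 1 so that the answer comes out in units of β; the quadrature settings are arbitrary choices:

```python
# Numerical evaluation of (3.5.6): MTTF of two independent Erlang
# G(k, beta) components in series, beta = 1.
import math

def pos_cdf(j, mu):
    """Poisson CDF: P(N <= j) for N ~ Poisson(mu)."""
    term, total = math.exp(-mu), 0.0
    for i in range(j + 1):
        total += term
        term *= mu / (i + 1)
    return total

def series_mttf(k, h=0.001, t_max=60.0):
    """Trapezoid rule for the integral of [Pos(k-1; t)]^2 over (0, t_max)."""
    n = int(t_max / h)
    f = lambda t: pos_cdf(k - 1, t) ** 2
    return h * (0.5 * f(0) + sum(f(i * h) for i in range(1, n)) + 0.5 * f(n * h))

print(series_mttf(1))   # exponential case: min of two E(1) has mean 0.5
print(series_mttf(5))
```

The k = 1 case reproduces the familiar β/2 for the minimum of two independent exponentials, which is a quick sanity check on the quadrature.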



3.6 Sequentially Operating Components
Suppose that two similar computer systems are mounted on board an air-
plane, one for operation and the second as a standby. If the first computer
fails during the flight the second computer is instantaneously switched on.
The block diagram for such a standby system is given in Figure 3.7.
We have to take into account the possibility that the standby system
may suffer degradation during its idle period. Furthermore, there might be
failure of the second unit at the time of switching. Thus, let Rsw be the
switch-on reliability and suppose that the time till failure of the first unit
has a PDF f1(t). Suppose that the time till failure in the standby mode
has a distribution Fid(t). We denote by Rid(t) the reliability of the standby
unit at age t, if it was idle throughout [0, t). The reliability of the whole
system is then

(3.6.1)        Rsys(t) = R1(t) + Rsw ∫₀^t f1(x) Rid(x) R2(t − x) dx,


Figure 3.7. Two Units: One Operating and One Standby

where R1(t) is the reliability of the first unit and R2(t) is the reliability of the second unit in operation.


EXAMPLE 3.8
Suppose that the two computers each have an exponential life distribution E(β1) during operation. Suppose that during the standby period the idle computer has no degradation, and that the switch-on reliability is Rsw = .95. Then, the reliability of the system is

           Rsys(t) = exp(−t/β1)
(3.6.2)             + (.95/β1) ∫₀^t exp(−x/β1) exp(−(t − x)/β1) dx
                   = exp(−t/β1) (1 + .95 t/β1).

The MTTF of this system is

(3.6.3)        μsys = 1.95 β1.

If the switch-on reliability is Rsw = 1, then the life distribution of the system is that of the sum of two independent exponential random variables with mean β1. This is the Erlang distribution G(2, β1) with mean μ = 2β1.
If during standby the second computer might fail (due to possible stress during the flight), and the time till failure of the idle computer has an exponential distribution with mean β2, then the system reliability is

           Rsys(t) = exp(−t/β1)
                    + (.95/β1) ∫₀^t exp(−x/β1 − x/β2) exp(−(t − x)/β1) dx
(3.6.4)           = exp(−t/β1) [1 + .95 (β2/β1)(1 − exp(−t/β2))]
                   = (1 + .95 β2/β1) exp(−t/β1)
                    − .95 (β2/β1) exp(−t (1/β1 + 1/β2)).

The mean time to failure of the system is then

(3.6.5)        μsys = β1 + .95 β1β2/(β1 + β2).
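The reliability function (3.6.4) can be integrated numerically to obtain the MTTF; in the limit of a very reliable idle unit (β2 → ∞) the result should approach the 1.95 β1 of (3.6.3). A sketch (not from the text) with illustrative β values:

```python
# Numerical MTTF of the Example 3.8 standby system with idle failures,
# by trapezoid-rule integration of R_sys(t) in (3.6.4).
import math

def r_sys(t, b1, b2, r_sw=0.95):
    return ((1 + r_sw * b2 / b1) * math.exp(-t / b1)
            - r_sw * (b2 / b1) * math.exp(-t * (1 / b1 + 1 / b2)))

def mttf(b1, b2, h=0.05, t_max=3000.0):
    n = int(t_max / h)
    return h * (0.5 * r_sys(0, b1, b2)
                + sum(r_sys(i * h, b1, b2) for i in range(1, n))
                + 0.5 * r_sys(t_max, b1, b2))

b1 = 100.0
print(mttf(b1, 50.0))    # idle failures shorten the MTTF
print(mttf(b1, 1e6))     # approaches 1.95*b1 when idle failures are negligible
```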

3.7 Fault Tree Analysis


An alternative representation of the system structure is given by an event
tree. If failures are emphasized rather than successes, the event tree is
called a fault tree. A fault tree provides the logical structure of events
that can cause the failure of a system.
To draw an event or fault tree, we employ the following representation.
Basic events, like failure of components, are represented by circles or dia-
monds. A system event of major importance is called a top event. This
appears at the top, in the form

top event

Intermediate system (or subsystem) events are also represented by rectangles. Immediately preceding each rectangle we place gates. These are either OR gates or AND gates. An OR gate and an AND gate are presented in Figure 3.8.
The logical meaning of an OR (AND) gate is the same as that of the union (intersection) of events. If any one of the input events to an OR gate occurs, the output event of the gate follows. The output event of an AND gate requires the occurrence of all the input events leading to it. Thus, a series system consisting of components C1, C2, C3 can be represented by the event tree of Figure 3.9.
The system represented in Figure 3.3 by a block diagram can be pre-
sented by the fault tree diagram in Figure 3.10.
G1: Failure of C1 or C2 or C3
G2: Failure of C4 or C5

OR Gate

AND Gate
Figure 3.8. Types of Gates

SYSTEM
FAILURE

1:C1 Fails
2: C2 Fails
3: C3 Fails
Figure 3.9. Series Structure Fault Diagram

Figure 3.10. Fault Tree Diagram For the System of Figure 3.3

Figure 3.11 is an example of a fault tree representing rupture of a pressure tank.
The objectives served by event (fault) trees are, generally,
(i) to assist in determining the possible causes of a failure (accident);
(ii) to display results (show the weak points of a design);
(iii) to provide a convenient format for the determination of failure
probabilities, listing all cuts and paths, etc.
In Figure 3.12 we present a fault tree diagram for an electric circuit
failure. In order to compute the failure probability (or the reliability) of


Figure 3.11. Fault Tree for Pressure Tank Rupture

this circuit we will first list all the minimal cut sets. A cut set of an event tree is a set of basic events whose occurrence causes the top event to happen. A cut set is minimal if each one of its elements is essential. An algorithm for generating such a list is provided below.
Algorithm for Generating Minimal Cut Sets
1. All gates are numbered, starting with the top event gate, G0, down to the last gate.
2. All basic events are numbered B1, B2, ....
3. Start the list at G0. At any stage of the process, if a gate is an OR gate replace it with a list of all gates or basic events feeding into it on separate rows; if the gate is an AND gate insert the list on the same row.
4. Continue till all the gates are replaced by basic events.
We now illustrate the algorithm on the fault tree in Figure 3.12.
We now illustrate the algorithm on the fault tree in Figure 3.12.
G0
G1, G2, G3

Bl, G2, G3
B2, G2, G3

Bl, B4, G3
B2, B4, G3

Bl, B4, G4
Bl, B4, G5
B2, B4, G4
B2, B4, G5

B1, B4, G6, G7


B1, B4, B3
B2, B4, G6, G7
B2, B4, B3

B1, B4, B5, G7


B1, B4, B6, G7
B1, B4, B3
B2, B4, B5, G7
B2, B4, B6, G7
B2, B4, B3

B1, B4, B5, B7


B1, B4, B5, B8
B1, B4, B6, B7
B1, B4, B6, B8
B1, B4, B3
B2, B4, B5, B7
B2, B4, B5, B8
B2, B4, B6, B7
B2, B4, B6, B8
B2, B4, B3

Thus we arrive at a list of m = 10 minimal cut sets.
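The replacement algorithm above is mechanical enough to code directly. The sketch below (not from the text) encodes a gate structure for the Figure 3.12 tree as read off the expansion just shown; the gate types are an interpretation of that expansion:

```python
# Sketch of the Section 3.7 minimal-cut-set algorithm.  The gate table
# is inferred from the row-by-row expansion above: G0 is an AND gate,
# the others feed it through OR gates.
GATES = {
    'G0': ('and', ['G1', 'G2', 'G3']),
    'G1': ('or',  ['B1', 'B2']),
    'G2': ('or',  ['B4']),
    'G3': ('or',  ['G4', 'G5']),
    'G4': ('and', ['G6', 'G7']),
    'G5': ('or',  ['B3']),
    'G6': ('or',  ['B5', 'B6']),
    'G7': ('or',  ['B7', 'B8']),
}

def cut_sets(top='G0'):
    rows = [[top]]                       # start the list at the top gate
    while any(e in GATES for row in rows for e in row):
        new_rows = []
        for row in rows:
            gate = next((e for e in row if e in GATES), None)
            if gate is None:
                new_rows.append(row)
                continue
            kind, inputs = GATES[gate]
            rest = [e for e in row if e != gate]
            if kind == 'and':            # AND: insert inputs on the same row
                new_rows.append(rest + inputs)
            else:                        # OR: one new row per input
                new_rows.extend(rest + [x] for x in inputs)
        rows = new_rows
    # drop duplicates and non-minimal supersets
    sets_ = sorted({frozenset(r) for r in rows}, key=sorted)
    return [s for s in sets_ if not any(t < s for t in sets_)]

for cs in cut_sets():
    print(sorted(cs))
```

Running it reproduces the list of m = 10 minimal cut sets derived above.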


The Probability of System Failure
Let Ji = 1 if Bi fails (fault event occurs) and Ji = 0 otherwise. Let Fsys = 1 if the top event occurs (the system fails) and Fsys = 0 otherwise. Let Qsys = Pr{Fsys = 1} and Qi = Pr{Ji = 1}, i = 1, ... , n, where n is the number of basic events.
The occurrence of all the basic fault events in any one of the minimal cut sets causes the failure of the system. Thus, the failure probability of the system is the probability that at least one minimal cut set will cause a failure.
Let Fl = Π_{i∈Kl} Ji, l = 1, ... , m, where Kl is the l-th minimal cut set. Fl = 1 if the l-th minimal cut set causes failure. In terms of the electric


Figure 3.12. Fault Tree for Electrical Circuit

G0: over-heated wire               G5: relay contacts failed
G1: fuse unable to open            B3: primary relay contact failure
G2: motor failed (shorted)         G6: timer unable to open
G3: power applied to system        G7: switch failed to open
    for extended time              B5: timer coil failed to open
B1: over-sized fuse installed      B6: timer contacts failed (closed)
B2: primary fuse failure           B7: switch contacts failed (closed)
B4: primary motor failure          B8: external control failed to
G4: power not removed from             release switch
    relay coil
(From W. Hammer, Handbook of System and Product Safety, 1972, p. 244,
by courtesy of Prentice Hall.)

circuit example we have

        F1 = J1J3J4
        F2 = J1J4J5J7
        F3 = J1J4J5J8
        F4 = J1J4J6J7
        F5 = J1J4J6J8
        F6 = J2J3J4
        F7 = J2J4J5J7
        F8 = J2J4J5J8
        F9 = J2J4J6J7
        F10 = J2J4J6J8

Finally,

(3.7.1)        Fsys = 1 − Π_{l=1}^{m} (1 − Fl).

The system failure probability Qsys can be determined from (3.7.1) by substituting the proper terms for Fl, multiplying out and taking expected values.
An upper bound to this probability can be obtained in the following man-
ner. The probability of system failure can be written as

(3.7.2)        Qsys = Pr{ ∪_{l=1}^{m} (Fl = 1) }.

Since the probability of a union of events is smaller than or equal to the sum of their probabilities, the following inequality holds:

(3.7.3)        Pr{ ∪_{l=1}^{m} (Fl = 1) } ≤ Σ_{l=1}^{m} Pr{Fl = 1}.

If the basic fault events are independent then

(3.7.4)        Pr{Fl = 1} = Π_{i∈Kl} Qi.

Thus, from (3.7.2)–(3.7.4) we obtain the following upper bound to Qsys:

(3.7.5)        Qsys ≤ Σ_{l=1}^{m} Π_{i∈Kl} Qi.

For example, if Qi = .05 for all i = 1, ... , 8 for the electric circuit analyzed above, then an upper bound to the system failure probability is

        Qsys ≤ .0003.

In many cases such an upper bound provides sufficient information on the probability of failure of the system.
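The bound (3.7.5) for this example can be checked with a few lines of code; the cut sets are the ten listed above:

```python
# The cut-set upper bound (3.7.5) for the electric-circuit example,
# with Qi = 0.05 for every basic event.
CUT_SETS = [
    {'B1', 'B3', 'B4'}, {'B2', 'B3', 'B4'},
    {'B1', 'B4', 'B5', 'B7'}, {'B1', 'B4', 'B5', 'B8'},
    {'B1', 'B4', 'B6', 'B7'}, {'B1', 'B4', 'B6', 'B8'},
    {'B2', 'B4', 'B5', 'B7'}, {'B2', 'B4', 'B5', 'B8'},
    {'B2', 'B4', 'B6', 'B7'}, {'B2', 'B4', 'B6', 'B8'},
]

def q_upper_bound(q):
    """Sum over minimal cut sets of the product of the Qi, as in (3.7.5)."""
    bound = 0.0
    for ks in CUT_SETS:
        p = 1.0
        for b in ks:
            p *= q[b]
        bound += p
    return bound

q = {f'B{i}': 0.05 for i in range(1, 9)}
print(q_upper_bound(q))   # 2*(0.05)^3 + 8*(0.05)^4 = 0.0003
```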

3.8 Exercises
[3.1.1] An aircraft has four engines but can land using only two engines.
(i) Assuming that the reliability of each engine is R = .95 to complete a mission, and that engine failures are independent, compute the mission reliability of the aircraft.
(ii) What is the mission reliability of the aircraft if at least one functioning engine must be on each wing?
[3.1.2] (i) Draw the block diagram of a system having the following structure function:

where

and
        ψM2 = ψs(R4, R5).
(ii) Determine Rsys if all the components act independently and have the same reliability, R = .8.
[3.1.3] Consider a system of n components in a series structure. Let R1, ... , Rn be the reliability of the components. Show that the system reliability is

        Rsys ≥ 1 − Σ_{i=1}^{n} (1 − Ri).

[Apply the Bonferroni inequality: Pr{ ∩_{i=1}^{n} Ai } ≥ 1 − Σ_{i=1}^{n} (1 − Pr{Ai}).]


[3.2.1] Consider a system having three identical coolant loops, each with
two identical pumps connected in parallel. The coolant system
requires that at least 2 out of 3 loops function successfully. The
reliability of a pump over the life of the plant is R = .6. Compute
the reliability of the system.

Figure 3.13. Crosslinked System

[3.2.2] What is the reliability of the coolant system described in [3.2.1] if the reliability of the pumps in loop 1 is .8, that of pumps in loop 2 is .9 and the one for pumps in loop 3 is .95?
[3.2.3] A 4 out of 8 system has identical components whose life lengths T [weeks] have identical Weibull distributions W(3/2, 100). What is the reliability of the system at t0 = 50 weeks?
[3.3.1] Consider Example 3.2. Compute the system reliability Rsys for the case of R1 = .9, R2 = .85, R3 = .8, R4 = .75.
[3.3.2] Apply the decomposition method to compute the reliability of the crosslinked system given in Figure 3.13, when the subsystems act independently and have the same reliability of R = .8.
[3.3.3] Draw a block diagram of a triple-crosslinked system and write its
reliability function under the assumption of independent compo-
nents.
[3.4.1] Consider the system S with a block diagram given in Figure 3.14.
(i) List all the minimal paths.
(ii) List all the minimal cuts.
(iii) If the components act independently and have reliability R =
.7, what is the reliability of the system?
[3.4.2] In Figure 3.15 we see the schematic block diagram of a system.
Each block in the figure represents a 2 out of 3 subsystem. List all
the cut sets of this system.
[3.5.1] The system shown in Figure 3.16 below has identical components which act independently, with exponential lifetime distributions having failure rate λ = 10^(−4) [1/hr]. Write the function Rsys(t) and find the MTTF [hr] of the system.
[3.5.2] The variance of the time till failure of a system, Tsys, is E{T²sys} − (MTTF)². Similar to (3.5.1) we have the formula

        E{T²sys} = 2 ∫₀^∞ t Rsys(t) dt

Figure 3.14. Block Diagram of a System

Figure 3.15.

Figure 3.16. Block Diagram



Figure 3.17. A Fault Tree

for this variance. Compute the variance of the time till failure of
the double-crosslinked system given in Figure 3.5.
[3.6.1] A system consists of a main unit and two standby units. The
lifetimes of these units are exponential with mean (3 = 100 [hr].
Assuming that the standby units undergo no failures when idle,
and that switching will take place when required, compute the
MTTF of the system.
[3.7.1] Determine the minimal cut sets of the system having the fault tree of Figure 3.17.
[3.7.2] Determine the failure probability of a system having the fault tree of Exercise [3.7.1], when the failure probability of B1 is q1 = .05, those of B2, B3, B4 are equal to q2 = .10, that of B5 is .03, of B6 is .07 and of B7 is .06. Moreover, component failures are independent events.


Figure 3.18. Fault Tree for Domestic Water Heater


(From E.J. Henley and H. Kumamoto, Reliability
Engineering and Risk Assessment, 1981, p. 150,
by courtesy of Prentice-Hall.)

[3.7.3] Figure 3.18 provides a fault tree for a domestic water heater system.
(i) List all the cut sets of the system.
(ii) Write a formula for the failure probability of this system, assuming independent failure events.
4
Reliability of Repairable Systems

4.1 The Renewal Process


Consider a system (or unit) which fails at random times. After each failure the system is repaired (renewed). The repair time is a random variable with some repair-time distribution (instantaneous or fixed-time repairs are special cases). After each renewal the system starts a new cycle, independent of the previous ones. Thus, let T1 be the time till failure (TTF) of the system in the first cycle; let S1 be the length of time till repair (TTR); T2 and S2 the TTF and TTR in the second cycle, etc. (See Figure 4.1.)
Let t1, t2, ..., be the failure times and τ1, τ2, ..., the renewal times.

        t1 = T1
        τ1 = t1 + S1            1st renewal
        t2 = τ1 + T2
        τ2 = t2 + S2            2nd renewal
        ...
        tn = τ(n−1) + Tn
        τn = tn + Sn            n-th renewal

We assume that all the random variables T1, S1, T2, S2, ... , Tn, Sn, ... , are mutually independent; T1, T2, ... , have an identical life distribution F(t) and S1, S2, ... , have an identical repair length distribution G(s). The length of the i-th cycle is Ci = Ti + Si. The CDF of a cycle length is K(t) = Pr{C ≤ t}. Assuming that F(t) and G(s) are continuous CDFs having

Figure 4.1. The Up-Down cycles

PDFs f(t) and g(s), respectively, the PDF of C, k(t), can be obtained by the convolution formula

(4.1.1)        k(t) = ∫₀^t f(x) g(t − x) dx.

The convolution (4.1.1) is often performed by numerical integration. In certain cases, as shown in the following example, it can be performed analytically.
lytically.
EXAMPLE 4.1
Suppose the life length T is exponential E(β) and the repair length S is exponential E(γ), i.e.,

        f(t) = (1/β) exp(−t/β)
        g(s) = (1/γ) exp(−s/γ).

Then the PDF of the cycle length C is

        k(t) = (1/(βγ)) ∫₀^t exp{ −x/β − (t − x)/γ } dx

(4.1.2)      = { (1/(β − γ)) (e^(−t/β) − e^(−t/γ)),   if β ≠ γ,
             { (t/β²) e^(−t/β),                       if β = γ.

We see that if the failure time is exponential and the repair time is exponential, the distribution of the cycle time is not exponential. Usually, the mean repair time is much smaller than the mean lifetime, i.e., γ ≪ β.
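The remark that (4.1.1) is often performed by numerical integration can be sketched directly, using the exponential case (4.1.2) as the reference; β and γ below are illustrative:

```python
# Numerical convolution (4.1.1) of exponential TTF and TTR densities,
# compared with the closed form (4.1.2), for beta != gamma.
import math

beta, gamma = 100.0, 5.0

def k_closed(t):
    return (math.exp(-t / beta) - math.exp(-t / gamma)) / (beta - gamma)

def k_convolved(t, n=20000):
    """Trapezoid rule for the integral of f(x) g(t - x) over (0, t)."""
    h = t / n
    f = lambda x: math.exp(-x / beta) / beta
    g = lambda s: math.exp(-s / gamma) / gamma
    s = 0.5 * (f(0) * g(t) + f(t) * g(0))
    s += sum(f(i * h) * g(t - i * h) for i in range(1, n))
    return h * s

t = 20.0
print(k_convolved(t), k_closed(t))   # the two values agree closely
```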

Let J(t) denote the state of the system at time t, where

        J(t) = { 0, if the system is up at time t,
               { 1, if the system is down at time t.

Let NF(t) be the number of failures of the system during the time interval (0, t], assuming that J(0) = 0. Let NR(t) be the number of repairs accomplished (renewals) in (0, t]. Obviously, NF(0) = 0 and NR(t) ≤ NF(t) for all 0 ≤ t < ∞. If the system is not repairable then NF(t) ≤ 1 for all t, lim_{t→∞} NF(t) = 1 and NR(t) = 0 for all t.
If we denote by Q(t) the probability that the system is down at time t, then Q(t) = E{J(t)} = 1 − A(t), where A(t) is the availability function. Let W(t) = E{NF(t)} and V(t) = E{NR(t)}. Notice that W(t) = F(t), namely the CDF of the TTF, if the system is unrepairable. In repairable systems W(t) ≥ F(t) for all t, and lim_{t→∞} W(t) = ∞. Also Q(t) = W(t) − V(t) for all t. Let us assume that W(t) and V(t) are differentiable with respect to t almost everywhere (absolutely continuous functions of t). Let w(t) = W'(t) and v(t) = V'(t) (wherever the derivatives exist). The failure intensity function of a repairable system is defined as

(4.1.3)        λ(t) = w(t)/A(t),   0 < t < ∞.

Similarly, the repair intensity is

(4.1.4)        μ(t) = v(t)/Q(t).

Notice that the failure rate function h(t) discussed in the previous chapters coincides with λ(t) if the system is unrepairable. The function h(t) characterizes the TTF distribution, F(t), while λ(t) depends also on the repair process.
The random function {NR(t); 0 ≤ t < ∞} is called a renewal process; V(t) = E{NR(t)} is called the renewal function and v(t) is called the renewal density. In the following section we discuss some properties of these functions.

4.2 The Renewal Function and Its Density

The event {NR(t) ≥ n} is equivalent to the event {τn < t}. Thus,

                Pr{NR(t) ≥ n} = Pr{τn < t}
(4.2.1)                       = Pr{C1 + ··· + Cn < t}
                              = Kn(t),

where Kn(t) is the distribution of the sum of n independent random variables, Ci (i = 1, ... , n), each with CDF K(t). A recursive formula for Kn(t) is given by

(4.2.2)        Kn(t) = ∫₀^t Kn−1(t − x) k(x) dx.

EXAMPLE 4.2
Suppose the distribution of the cycle length C is exponential, E(β). This is the case, for example, when a renewal is instantaneous after a failure. Then τn ~ G(n, β), and

        Kn(t) = (1/(Γ(n)β^n)) ∫₀^t u^(n−1) e^(−u/β) du = 1 − Pos(n − 1; t/β).

Thus in this case it is clear that the distribution of NR(t) is Poisson with parameter t/β.

The renewal function V(t) is given by

(4.2.3)        V(t) = Σ_{n=1}^∞ Pr{NR(t) ≥ n} = Σ_{n=1}^∞ Kn(t).

It is straightforward to verify that in the case of Example 4.2, V(t) = t/β.
The renewal function V(t) also satisfies the following integral equation:

(4.2.4)        V(t) = K(t) + ∫₀^t V(t − x) k(x) dx.

Although it is not always possible to obtain a closed-form solution for V(t), we can obtain useful upper and lower bounds. Since max_{i≤n} Ci ≤ τn = C1 + ··· + Cn, we have Pr{max_{i≤n} Ci ≤ t} ≥ Pr{τn ≤ t}, or Kn(t) ≤ (K(t))^n. Thus,

        V(t) = Σ_{n=1}^∞ Kn(t) ≤ Σ_{n=1}^∞ (K(t))^n = K(t)/(1 − K(t)).

On the other hand, V(t) ≥ K1(t) = K(t). Moreover, by the definition of NR(t), t ≤ τ_{NR(t)+1}. Hence, t ≤ E{τ_{NR(t)+1}} = μ(V(t) + 1), where μ = E{C} is the mean time between renewals (assumed to be finite). Hence, V(t) ≥ t/μ − 1. Thus, we have the inequalities

(4.2.5)        max(K(t), t/μ − 1) ≤ V(t) ≤ K(t)/(1 − K(t)).

For small values of t, V(t) ≈ K(t).
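For exponential cycle lengths the exact renewal function V(t) = t/β is known (Example 4.2), so the bounds (4.2.5) can be checked directly; a small sketch with β = 1:

```python
# The bounds (4.2.5) around the exact renewal function V(t) = t/beta
# for exponential cycle lengths E(beta).
import math

beta = 1.0

def bounds(t):
    K = 1 - math.exp(-t / beta)        # cycle-length CDF K(t)
    lower = max(K, t / beta - 1)       # max(K(t), t/mu - 1), mu = beta
    upper = K / (1 - K)                # K(t)/(1 - K(t)) = e^(t/beta) - 1
    return lower, upper

for t in (0.1, 0.5, 1.0, 2.0):
    lo, up = bounds(t)
    print(t, lo, t / beta, up)         # lower <= V(t) <= upper
```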


The renewal density v(t) can be expressed as

(4.2.6)        v(t) = ∫₀^t w(x) g(t − x) dx,   0 < t < ∞,

where the failure density is related to v(x) according to

(4.2.7)        w(t) = f(t) + ∫₀^t v(x) f(t − x) dx,   0 < t < ∞.


f(·) and g(·) are the PDFs of the TTF and TTR.

Let v*(s), w*(s), f*(s) and g*(s) denote the Laplace transforms of v(t), w(t), f(t) and g(t), respectively, i.e., v*(s) = ∫₀^∞ e^(−ts) v(t) dt, etc. Equations (4.2.6) and (4.2.7) yield the Laplace transforms

(4.2.8)        v*(s) = f*(s)g*(s) / (1 − f*(s)g*(s)),   0 < s < ∞,

and

(4.2.9)        w*(s) = v*(s)/g*(s),   0 < s < ∞.

In principle, the renewal density v(t) can be obtained by inverting (4.2.8). If both the TTF and the TTR are exponentially distributed we obtain simple solutions, as shown in the following example.
EXAMPLE 4.3
Suppose that the time till failure has an exponential distribution E(β), and the time till repair has an exponential distribution E(γ). Let λ = 1/β and μ = 1/γ. Accordingly,

        f(t) = { 0,          t ≤ 0
               { λe^(−λt),   t > 0

and

        g(t) = { 0,          t ≤ 0
               { μe^(−μt),   t > 0.

The corresponding Laplace transforms are f*(s) = λ/(λ + s) and g*(s) = μ/(μ + s), respectively. According to (4.2.8), the Laplace transform of the renewal density is

(4.2.10)       v*(s) = λμ/(s² + (λ + μ)s) = (λμ/(λ + μ)) (1/s − 1/(s + λ + μ)).

Every Laplace transform f*(s) on (0, ∞) has a unique inverse f(t) on (0, ∞). Tables of Laplace transforms and their inverses are given in various handbooks. The student is referred to CRC Standard Mathematical Tables (25th Edition), p. 474.
The inverse transform of (4.2.10), as can be easily checked, is

(4.2.11)       v(t) = λμ/(λ + μ) − (λμ/(λ + μ)) e^(−t(λ+μ)),   0 < t < ∞.

In a similar fashion we obtain the failure density

(4.2.12)       w(t) = λμ/(λ + μ) + (λ²/(λ + μ)) e^(−t(λ+μ)),   0 < t < ∞.

Integrating (4.2.11) and (4.2.12) we obtain, for 0 < t < ∞,

(4.2.13)       V(t) = λμt/(λ + μ) − (λμ/(λ + μ)²)(1 − e^(−t(λ+μ)))

and

(4.2.14)       W(t) = λμt/(λ + μ) + (λ²/(λ + μ)²)(1 − e^(−t(λ+μ))).

Finally, the unavailability function is

                Q(t) = W(t) − V(t)
(4.2.15)             = λ/(λ + μ) − (λ/(λ + μ)) e^(−t(λ+μ)),   0 < t < ∞,

and the availability function is

                A(t) = 1 − Q(t)
(4.2.16)             = μ/(λ + μ) + (λ/(λ + μ)) e^(−t(λ+μ)),   0 < t < ∞.

Notice that lim_{t→∞} A(t) = A∞ = μ/(λ + μ) = β/(β + γ).

Generally, the availability function A(t) is given by the formula

(4.2.17)       A(t) = 1 − F(t) + ∫₀^t v(x)[1 − F(t − x)] dx,

or

(4.2.18)       A(t) = R(t) + ∫₀^t v(x) R(t − x) dx,

where F(t) is the CDF of the TTF and R(t) is the reliability function of a nonrepairable system. Thus, if R*(s) is the Laplace transform of the reliability function, and A*(s) is that of the availability function, we obtain from (4.2.8) and (4.2.18) that

                A*(s) = R*(s)(1 + v*(s))
(4.2.19)             = R*(s) / (1 − f*(s)g*(s)),   0 < s < ∞.

The Laplace transform (4.2.19) should be inverted either analytically or numerically to obtain the availability function of the system. In the following example we illustrate this process.
EXAMPLE 4.4
In Example 4.3 we derived the availability function of a system whose TTF is exponentially distributed, E(β), and whose TTR is exponentially distributed, E(γ). The availability function of this system is given in (4.2.16). In the present example we derive the availability function of a system whose TTF has the Erlang distribution G(2, β), and whose TTR is exponentially distributed E(γ). This model can represent a system with a standby unit, which is switched on automatically upon the failure of the functional unit. Repair, however, is delayed until both units fail, which is a failure time for the whole system. The system is renewed when both units are repaired. Let λ = 1/β and μ = 1/γ. We assume that λ < μ/4.
According to (2.3.7) the reliability of the system is R(t) = e^(−λt) + λt e^(−λt), 0 < t < ∞. The Laplace transform of this reliability function is

(4.2.20)       R*(s) = (2λ + s)/(λ + s)².

The Laplace transform of the PDF of the TTF is

(4.2.21)       f*(s) = λ²/(λ + s)²,

and that of the TTR is g*(s) = μ/(μ + s). Thus, according to (4.2.19), the Laplace transform of the availability function is

                A*(s) = (s² + (2λ + μ)s + 2λμ) / (s[s² + (2λ + μ)s + (λ² + 2λμ)])
(4.2.22)             = p(s)/(s q(s)),   0 < s < ∞.

Notice that q(s) = p(s) + λ². Let s1 and s2 be the two roots of the second order polynomial q(s). These roots are

(4.2.23)       s1,2 = −(2λ + μ)/2 ± (1/2)(μ² − 4λμ)^(1/2).

We find in the tables of Laplace transforms that the inverse of (4.2.22) is

(4.2.24)       A(t) = 2μ/(λ + 2μ) − (λ²/(s1 − s2)) [e^(s1 t)/s1 − e^(s2 t)/s2],
               0 < t < ∞.

Notice from (4.2.23) that both roots s1 and s2 are negative. Thus, the asymptotic availability of the system under consideration is

(4.2.25)       A∞ = μ/(μ + λ/2).

In the following table we compare (4.2.24) to (4.2.16) for several values of t, for the case of μ = 2.00 [1/hr] and λ = .01 [1/hr]. We see that the present system has a higher availability than that with no standby unit at all t values.

Table 4.1. The Availability Functions (4.2.16)
and (4.2.24), λ = .01, μ = 2.0

    Time          Availability
    [hr]      (4.2.16)    (4.2.24)
      0       1.00000     1.00000
     10       0.99502     0.99957
     20       0.99502     0.99919
     30       0.99502     0.99889
     40       0.99502     0.99864
     50       0.99502     0.99843
   1000       0.99502     0.99751
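Table 4.1 can be reproduced from (4.2.16) and (4.2.23)–(4.2.24); a sketch, with values rounded to five places as in the table:

```python
# Availability functions of Examples 4.3 and 4.4: (4.2.16) for
# exponential TTF, (4.2.24) for Erlang G(2, beta) TTF, both with
# exponential repair; lam = 0.01, mu = 2.0 as in Table 4.1.
import math

lam, mu = 0.01, 2.0

def a_expo(t):
    """(4.2.16): A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-t(lam+mu))."""
    return mu / (lam + mu) + lam / (lam + mu) * math.exp(-t * (lam + mu))

def a_erlang(t):
    """(4.2.24), with the roots s1, s2 of q(s) given by (4.2.23)."""
    d = math.sqrt(mu ** 2 - 4 * lam * mu)       # real since lam < mu/4
    s1 = (-(2 * lam + mu) + d) / 2
    s2 = (-(2 * lam + mu) - d) / 2
    a_inf = 2 * mu / (lam + 2 * mu)             # (4.2.25)
    return a_inf - (lam ** 2 / (s1 - s2)) * (math.exp(s1 * t) / s1
                                             - math.exp(s2 * t) / s2)

for t in (0, 10, 20, 30, 40, 50):
    print(t, round(a_expo(t), 5), round(a_erlang(t), 5))
```

At t = 0 both functions equal 1, and the standby (Erlang) column dominates the exponential one at every t, as the table shows.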

4.3 Asymptotic Approximations

In cases where the TTF and TTR are not exponentially distributed, explicit solutions for the renewal function and its density may be difficult to obtain. However, there are a number of asymptotic results which can provide useful approximations for large values of t (large here meaning large relative to the mean cycle length). A few of these results are listed below. In all cases, μ is the mean length of the renewal cycle and σ² is its variance. These are assumed to be finite, and K(t) is assumed to be continuous.

Result 1: lim_{t→∞} V(t)/t = 1/μ.

Result 2: lim_{t→∞} [V(t + a) − V(t)] = a/μ, for any a > 0.

Result 3: lim_{t→∞} [V(t) − t/μ] = (σ² − μ²)/(2μ²).

Result 4: If k(t) is continuous, then lim_{t→∞} v(t) = 1/μ.

Result 5: lim_{t→∞} Pr{ (NR(t) − t/μ)/(σ(t/μ³)^(1/2)) ≤ z } = Φ(z), for any z.

The residual time till the next renewal of a system at time t is defined as ζt = Σ_{i=0}^{NR(t)+1} Ci − t, where C0 ≡ 0.

Result 6: lim_{t→∞} Pr{ζt ≤ x} = (1/μ) ∫₀^x [1 − K(u)] du, and lim_{t→∞} E{ζt} = σ²/(2μ) + μ/2.

Result 7: A∞ = lim_{T→∞} (1/T) ∫₀^T A(t) dt = E{TTF}/(E{TTF} + E{TTR}).

Notice that (1/T) ∫₀^T A(t) dt is the expected proportion of total time in the interval (0, T) in which the system is up.
EXAMPLE 4.5
A particular component of a submarine navigation system has a Weibull W(2, 10) [months] lifetime distribution. Upon failure of the component, it is replaced essentially instantaneously. Thus the length of a cycle C is also distributed as W(2, 10). We wish to know how many replacement components should be stocked on board so that there is no more than a 1% chance of running out of parts during the first 60 months.
Using the formulae for μ and σ from Section 2.3.3, we have μ = 10Γ(1 + 1/2) = 10(√π/2) = 8.9 [months] and σ = 10[Γ(1 + 1) − Γ²(1 + 1/2)]^(1/2) = 10[1 − π/4]^(1/2) = 4.6 [months].
By Result 5, the number of renewals (replacements) in 60 months, NR(60), is distributed approximately like N(60/8.9, 4.6√60/8.9^(3/2)), or N(6.7, 1.3). The .99 fractile of this distribution is 6.7 + z.99(1.3) = 6.7 + 2.33(1.3) = 9.7. Thus, the requirement should be satisfied if 10 replacement parts are kept on board.
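The Example 4.5 computation, carried out without intermediate rounding (the fractile then differs slightly from the text's rounded 9.7, with the same conclusion):

```python
# Spares calculation of Example 4.5: normal approximation (Result 5)
# for the number of Weibull W(2, 10) renewals in 60 months.
import math

scale, t = 10.0, 60.0
mu = scale * math.gamma(1 + 1 / 2)                        # mean cycle, ~8.86
sigma = scale * math.sqrt(math.gamma(2) - math.gamma(1.5) ** 2)  # ~4.63

mean_n = t / mu                                           # mean renewals
sd_n = sigma * math.sqrt(t) / mu ** 1.5                   # sd of renewals
z_99 = 2.326                                              # 0.99 normal fractile
spares = mean_n + z_99 * sd_n
print(mu, sigma, spares)   # stocking ceil(spares) = 10 parts suffices
```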

Although it is generally impossible to get an explicit solution for the availability function A(t) in non-exponential cases, one can at least get an expression for the limit of A(t) as t approaches infinity. This limit, denoted by A∞, is called the asymptotic or stationary availability coefficient, and is given by the formula

(4.3.1)        A∞ = lim_{t→∞} A(t) = μT/(μT + μS),

where μT is the mean time till failure and μS is the mean repair time.
EXAMPLE 4.6
Consider a repairable system where T ~ G(2, 100) [hr] and S ~ W(2, 2.5) [hr]. Then μT = 2(100) = 200 [hr] and μS = 2.5Γ(3/2) = 2.5(√π/2) = 2.2 [hr]. From (4.3.1), we have A∞ = 200/(200 + 2.2) = .989. Thus, in the long run, the system will be available about 99% of the time.

The operational reliability function, Rt(u), is the probability that



the system is up at time t and continues to be up for the next u time

units, i.e., it is the probability that the system is functioning continuously throughout the time interval between t and t + u. It can be shown that Rt(u) satisfies the equation

(4.3.2)        Rt(u) = 1 − F(t + u) + ∫₀^t [1 − F(t + u − x)] v(x) dx.

Note that Rt(0) is simply A(t). In general, Rt(u) varies with t. If we take the limit of Rt(u) as t approaches ∞, we obtain the asymptotic operational reliability function, R∞(u). It can be shown that

(4.3.3)        R∞(u) = A∞ · (1/μT) ∫_u^∞ (1 − F(x)) dx.

EXAMPLE 4.7
Suppose that T ~ E(β) and S ~ E(α). Then 1 − F(t + u) = e^(−(t+u)/β) = e^(−u/β)(1 − F(t)), and similarly 1 − F(t + u − x) = e^(−u/β)(1 − F(t − x)). Thus, substituting in (4.3.2), we arrive at the equation

                Rt(u) = e^(−u/β) [1 − F(t) + ∫₀^t (1 − F(t − x)) v(x) dx]
(4.3.4)              = e^(−u/β) A(t),

where A(t) is given by (4.2.16). The asymptotic operational reliability function is thus

(4.3.5)        R∞(u) = A∞ e^(−u/β).

4.4 Increasing the Availability by
Preventive Maintenance and Standby Systems

4.4.1 Systems with Standby and Repair

A typical standby model is one in which there are two (or more) identical (or non-identical) units, but only one is required to perform the operation. The other unit(s) is (are) in standby. If the operating unit fails, a standby unit starts to function immediately. The failed unit enters repair immediately. The intensity of failure of the operating unit is λ, and the intensity of repair is μ. It is generally the case that μ is much bigger than λ, and repair of a failed unit is expected to be accomplished before the other operating unit fails. In this case the system is continuously up. The system fails when all units are down and require repair. There is only one repairman, who

repairs the units in order of entering the repair queue. Thus, in the case of two units we distinguish between six states of the system:

S0. Unit 1 is in operation and unit 2 in standby.
S1. Unit 1 is in repair and unit 2 in operation.
S2. Unit 2 is in operation and unit 1 is up (repaired).
S3. Unit 2 is in repair and unit 1 in operation.
S4. Both units are down and unit 1 is in repair.
S5. Both units are down and unit 2 is in repair.

Under this model we may further assume that unit 1 in standby may fail with intensity λ1* (could be zero) and unit 2 in standby may fail with intensity λ2*. The repair intensity of unit 1 is μ1 and of unit 2 is μ2.
The availability analysis of such a standby system, or of a more complicated one, can be performed under the assumption that the transitions of the system from one state to another follow a Birth and Death Markov Process. This subject is not discussed in the present text. The interested reader is referred to N.J. McCormick (1981, p. 120) or I.B. Gertsbakh (1989, p. 283). We present here only the formulae for the steady-state availability, A∞, for several such systems, assuming no failure of standby units (i.e., λ* = 0).

Number of    Number of             Formula for
  Units      Repairmen                 A∞

    2            1          (μ² + μλ) / (μ² + μλ + λ²)
    2            2          (2μ² + 2μλ) / (2μ² + 2μλ + λ²)
    3            1          (μ³ + μ²λ + λ²μ) / (μ³ + μ²λ + λ²μ + λ³)
    3            3          (6μ³ + 6μ²λ + 3λ²μ) / (6μ³ + 6μ²λ + 3λ²μ + λ³)

EXAMPLE 4.8
The TTF of a radar system is exponentially distributed with mean MTTF = 250 [hr]. The repair time, TTR, of this system is also exponentially distributed with MTTR = 3 [hr]. Thus, if this system has no standby units, its steady-state availability is A∞(0) = (1/3)/(1/3 + 1/250) = .988. If we add to the system an identical standby unit we reach a steady-state availability of

    A∞(1) = [(1/3)² + (1/3)(1/250)] / [(1/3)² + (1/3)(1/250) + (1/250)²] = .9999.

According to these results, the radar system without a standby unit is


expected to be down 119 hours every 10000 hours, while the system with
one standby unit is expected to be down 1 hour every 10000 hours.
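The steady-state formulas in the table above give the Example 4.8 numbers directly; a sketch:

```python
# Steady-state availability of the radar system of Example 4.8:
# no standby versus a two-unit standby system with one repairman,
# A_inf = (mu^2 + mu*lam) / (mu^2 + mu*lam + lam^2).
lam = 1 / 250.0      # failure intensity (MTTF = 250 hr)
mu = 1 / 3.0         # repair intensity (MTTR = 3 hr)

a_single = mu / (mu + lam)                                # no standby unit
a_standby = (mu**2 + mu * lam) / (mu**2 + mu * lam + lam**2)

print(round(a_single, 3), round(a_standby, 4))
print(round((1 - a_single) * 10000), "vs",
      round((1 - a_standby) * 10000), "down-hours per 10000 hr")
```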

4.4.2 Preventive Maintenance

Overhauling or periodic maintenance of equipment consists of measures


taken to restore the state of the equipment to its initial state, or to make
it as good as possible. The objective is to delay failures. Such preventive
maintenance is profitable if the total maintenance cost over a specified
period of time is less than the expected additional cost for repairs and loss
due to failure of the system over that period. In order to be more specific,
let us consider the following simple model.
Suppose that the TTF of a system has a distribution F(t). Every T units
of time after renewal the system is shut down for preventive maintenance.
The preventive maintenance takes a random time, having a distribution
H(t). Whenever the system fails it is immediately repaired. Repair time is
a random variable having a distribution G(t). Let MT denote the mainte-
nance time. We assume that TTF, TTR and MT are mutually independent
random variables. We further assume that the system is "as good as new"
immediately after maintenance or after repair. Accordingly, we have under
this model a renewal process with cycles which are either T + MT or TTF
+ TTR. The expected up time within a renewal cycle is

(4.4.1)   μ_U = T(1 − F(T)) + ∫₀ᵀ t f(t) dt = ∫₀ᵀ (1 − F(u)) du.

The expected down time within a renewal cycle is

(4.4.2)   μ_D = δ(1 − F(T)) + γF(T) = (γ − δ)F(T) + δ,

where δ = E{MT} and γ = E{TTR}. We assume that γ > δ.
Thus, according to Result 7 of Section 4.3, the asymptotic availability
of the system, with preventive maintenance period of length T, is

(4.4.3)   A∞(T) = μ_U / (μ_U + μ_D)
                = ∫₀ᵀ (1 − F(u)) du / [∫₀ᵀ (1 − F(u)) du + (γ − δ)F(T) + δ].

One criterion of optimal determination of T is to maximize A∞(T).
In the following example we will show that if the TTF is exponentially
distributed the optimal value of T is T⁰ = ∞. This means that in the
exponential case there is no gain in preventive maintenance, since at any
time in the life of the system it is as good as new. On the other hand, as
will be shown in the example, if the failure rate function h(t) of the TTF is
increasing, we can find a value T⁰ for the maintenance period, 0 < T⁰ < ∞,
for which A∞(T⁰) is maximal.
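Before turning to the example, note that A∞(T) of (4.4.3) is easy to evaluate numerically for any life distribution. The sketch below uses the trapezoidal rule; the W(2, 100) TTF and the values γ = 2 [hr], δ = 1 [hr] are the illustrative choices also used in Example 4.9, and an interior maintenance period beats both very frequent and very rare maintenance:

```python
import math

def availability(T, F, gamma, delta, n=2000):
    # A_inf(T) of (4.4.3); the integral of 1 - F(u) over [0, T] is
    # approximated by the trapezoidal rule with n panels.
    h = T / n
    integral = h * ((1 - F(0.0)) / 2
                    + sum(1 - F(i * h) for i in range(1, n))
                    + (1 - F(T)) / 2)
    return integral / (integral + (gamma - delta) * F(T) + delta)

# Illustrative choices: TTF ~ W(2, 100), so F(t) = 1 - exp(-(t/100)^2)
F = lambda t: 1 - math.exp(-(t / 100.0) ** 2)
A10 = availability(10.0, F, 2.0, 1.0)
A110 = availability(110.0, F, 2.0, 1.0)
A1000 = availability(1000.0, F, 2.0, 1.0)
# A110 exceeds both A10 and A1000: an interior maintenance period wins
```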
EXAMPLE 4.9
A. If TTF ~ E(β), TTR ~ E(γ) and MT ~ E(δ), γ > δ, we obtain
from (4.4.3)

(4.4.4)   A∞(T) = β / [(β + γ) + δ e^(−T/β) (1 − e^(−T/β))⁻¹].

It is easy to check that A∞(0) = 0, that A∞(T) is strictly increasing with
T, and that lim_(T→∞) A∞(T) = β/(β + γ), which is the asymptotic availability
of a system without preventive maintenance.
B. Let TTF ~ W(2, β), and TTR, MT have arbitrary distributions with
means γ and δ, respectively; β > γ > δ > 0. In the present case

1 − F(x) = e^(−(x/β)²)

and

∫₀ᵀ [1 − F(x)] dx = ∫₀ᵀ e^(−(x/β)²) dx = β√π [Φ(√2 T/β) − 1/2],

where Φ(z) is given in (2.3.35). Accordingly, the asymptotic availability is

(4.4.5)   A∞(T) = β√π [Φ(√2 T/β) − 1/2] / (β√π [Φ(√2 T/β) − 1/2] + (γ − δ)(1 − e^(−(T/β)²)) + δ).

Let x = √2 T/β. The function A∞(x) is plotted in Figure 4.2 for the case
of β = 100 [hr], γ = 2 [hr], δ = 1 [hr].
We see in Figure 4.2 that A∞(T) has a unique point of maximum. The
optimal value of T is T⁰ = βx⁰/√2, where x⁰ is the (unique) root of the
equation

e^(−x²/2) + √(2π) x (Φ(x) − 1/2) = γ/(γ − δ).

Figure 4.2. Asymptotic Availability Function (4.4.5),
β = 100 [hr], γ = 2 [hr], δ = 1 [hr]

x⁰ can be determined numerically by several methods.
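For instance, since the left-hand side of the optimality equation is increasing in x, plain bisection already works. A sketch using only the Python standard library (β, γ, δ as in Figure 4.2):

```python
import math

def Phi(z):
    # standard normal CDF, via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def g(x):
    # left-hand side of the optimality equation; it is increasing in x,
    # since g'(x) = sqrt(2*pi) * (Phi(x) - 1/2) >= 0 for x >= 0
    return math.exp(-x ** 2 / 2) + math.sqrt(2 * math.pi) * x * (Phi(x) - 0.5)

def bisect(f, target, lo, hi, tol=1e-10):
    # solve f(x) = target for an increasing f by bisection
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

gamma, delta, beta = 2.0, 1.0, 100.0      # the values of Figure 4.2
x0 = bisect(g, gamma / (gamma - delta), 0.0, 10.0)
T0 = beta * x0 / math.sqrt(2.0)           # optimal maintenance period [hr]
```

For these values the root lies near x⁰ ≈ 1.54, i.e., a maintenance period of roughly 109 [hr].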

The phenomenon illustrated in the previous example can now be gener-
alized in the following manner.
A class of TTF distributions is called a decreasing failure rate class
(DFR) if the failure rate function, h(t), of each element of the class is a
decreasing function of t. A system whose TTF distribution is a DFR one
has the characteristic of "old better than new."
Let R(t) be the reliability function of a system, i.e., R(t) = 1 − F(t).
Formula (4.4.3) can be written as

(4.4.6)   A∞(T) = ∫₀ᵀ R(u) du / [∫₀ᵀ R(u) du + (γ − δ)(1 − R(T)) + δ].

Let A(T) denote the denominator of (4.4.6) and let A′∞(T) denote the de-
rivative of A∞(T) with respect to T. Straightforward differentiation yields

(4.4.7)   (A²(T)/R(T)) A′∞(T) = γ − (γ − δ) [R(T) + h(T) ∫₀ᵀ R(u) du],   0 < T < ∞.

Indeed, according to (1.3.6), R′(T) = −h(T)R(T).


Let W(T) = R(T) + h(T) ∫₀ᵀ R(u) du. The derivative of W(T) is

(4.4.8)   W′(T) = R′(T) + h′(T) ∫₀ᵀ R(u) du + h(T)R(T) = h′(T) ∫₀ᵀ R(u) du.

Accordingly, if h(T) is a decreasing function, h′(T) < 0 for all T, and
W(T) is a decreasing function of T. Finally, if lim_(T→0) h(T) ∫₀ᵀ R(u) du = 0
then W(0) = R(0) = 1. Thus, if the distribution F(t) is a DFR one then
the right hand side of (4.4.7) is greater than δ. This means that for a system
with a TTF having a DFR distribution, the optimal policy is not to have
preventive maintenance.
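This conclusion can be illustrated numerically. In the sketch below, the W(.5, 100) TTF (a DFR distribution), γ = 2 and δ = 1 are illustrative assumptions; A∞(T) of (4.4.6) keeps increasing with T:

```python
import math

def availability(T, R, gamma, delta, n=4000):
    # A_inf(T) of (4.4.6); the integral of R(u) over [0, T] is
    # approximated by the trapezoidal rule.
    h = T / n
    integral = h * (R(0.0) / 2
                    + sum(R(i * h) for i in range(1, n))
                    + R(T) / 2)
    return integral / (integral + (gamma - delta) * (1 - R(T)) + delta)

# Illustrative DFR choice: TTF ~ W(.5, 100), whose failure rate decreases
R = lambda t: math.exp(-math.sqrt(t / 100.0))
A = [availability(T, R, 2.0, 1.0) for T in (10.0, 100.0, 1000.0)]
# A is strictly increasing: the longer the maintenance period the better,
# so no finite T is optimal
```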
For additional treatment of the topic the reader is referred to the books
of Gertsbakh (1989, Chapter 4) and Barlow and Proschan (1965).

4.5 Exercises
[4.1.1] Suppose that the TTF in a renewal cycle has a W(ν, β) distribution
and that the TTR has a lognormal distribution LN(μ, σ). Assume
further that TTF and TTR are independent. Find the mean and
standard deviation of the length of a renewal cycle.
[4.1.2] Show that if X₁ ~ N(μ₁, σ₁) and X₂ ~ N(μ₂, σ₂) and if X₁ and
X₂ are independent then X₁ + X₂ ~ N(μ₁ + μ₂, (σ₁² + σ₂²)^(1/2)).

[4.1.3] The gamma distribution G(ν, β) can be approximated by
N(νβ, β√ν) if ν is large. Suppose that in a renewal process,
the components T and S of a renewal cycle are independent ran-
dom variables having gamma distributions G(ν₁, β₁) and G(ν₂, β₂),
where ν₁ and ν₂ are large. Apply the result of Exercise [4.1.2] to
determine a normal approximation to the distribution of the length
of a renewal cycle.
[4.1.4] Show that if X₁ and X₂ are independent random variables and if
Xᵢ ~ G(νᵢ, β) (i = 1, 2) (the same scale parameter) then X₁ + X₂ ~
G(ν₁ + ν₂, β).



[4.2.1] Suppose that a renewal cycle C has a normal distribution N(100, 10).
Apply formula (4.2.1) and the results of the previous exercises to
determine the PDF of N_R(200).
[4.2.2] Suppose that a renewal cycle C has a gamma distribution G(k/2, β),
k = 1, 2, ..., 0 < β < ∞.
(i) Show that

P{N_R(t) ≥ n} = P{χ²[nk] ≤ 2t/β},

where χ²[ν] denotes a chi-squared random variable with ν degrees
of freedom.
(ii) Show that if k = 2 then V(t) = t/β.
[4.2.3] Let C ~ N(100, 10). Approximate the value of V(1000).

[4.2.4] Derive the renewal density, v(t), for the process of Exercise [4.2.3].
[4.2.5] Suppose that the components of a renewal cycle are independent
random variables T and S, T ~ W(2, β) and S ~ E(γ), 0 < β < ∞,
0 < γ < ∞. Show that the Laplace transform of the renewal
density is

v*(s) = μ(1 − sψ(s)) / [s(1 + μψ(s))],

where ψ(s) = √π e^(s²/4) (1 − Φ(s/√2)).


[4.2.6] A system is composed of two identical units which are connected
in series but function independently. Suppose that the TTF of a
unit has an exponential distribution E(β), and the TTR is E(γ).
Derive the availability function of this system. [Hint: See Exercise
[2.3.2].]
[4.2.7] Consider a system in which two identical independent units are
connected in parallel. The units are not fixed until both fail. As-
suming that the TTF of each unit is E(β) and the total time to
repair the two units is G(2, γ), 0 < β, γ < ∞, derive the availability
function of the system.
[4.3.1] Compare the value of V(1000) of Exercise [4.2.3] to Result 1 of
Section 4.3.
[4.3.2] A renewal cycle length C has a lognormal distribution LN(5, 1.5).
(i) What is the asymptotic standard deviation of N_R(1000)?
(ii) Determine an interval such that the probability that N_R(1000)
will belong to that interval is approximately .95.
[4.3.3] A system has a renewal cycle length [hr] which has a W(2, 1000)
distribution. The system has been in operation for a long time (it
is in a steady state). You approach the system at any time and find
it operating (up). What is the probability that the next renewal
will not take place earlier than 500 [hr]?

[4.3.4] An insertion machine places electronic components on designated
locations on a board. The machine is operating under normal con-
ditions at the rate of 3600 placements per hour. Due to its complex-
ity the machine stops once in a while and requires either restart or
repair. Let T be the number of placements till stopping and S the
time in minutes for restart/repair. Data gathered during 7 months
show that T ~ W(.463, 1106.5) and that E{S} = 1.09 [min]. Show
that the asymptotic availability of this machine is A∞ = .975.
[4.4.1] A machine has an exponentially distributed time till failure. The
MTTF of this machine is β = 4 [hr]. The TTR is also exponentially
distributed with γ = .25 [hr]. Productivity loss due to failures is
equivalent to $1000 per hour. For a one time investment of $100,000
an identical standby unit can be connected. How many hours are
required for the enlarged system (two units and one repairman) to
gain in productivity enough to cover the initial investment?
[4.4.2] A machine has a TTF which is distributed like W(3, β), with β =
100 [days]. The repair takes on the average 1 [day] and preventive
maintenance takes on the average 1/2 [day]. Apply formula (4.4.7)
to find the optimal maintenance period T.
[4.4.3] N = 20 identical machines are working in parallel. The TTF of a ma-
chine has an E(1000) distribution [hr]. The machines are inspected
every T [hr] and each machine which is down is repaired and re-
newed. There is no repair of failed machines between inspection
periods. Each inspection and repair costs $300 per machine. On
the other hand, the loss for a non-operational machine (in a down
state) is $500 per hour. What is the optimal inter-inspection period
T?
5
Graphical Analysis of
Life Data

In the present chapter we study several graphical techniques for testing


which life distribution fits the data. The objective is to obtain fast graphical
verification or assessment of the type of life distribution for a particular
system. The analysis is based on a given sample of observations on the
TTF of a given system (or component of a system). It is often the case
that such observations are censored (see Section 2.1) and the information
is incomplete. We also distinguish between parametric and non-parametric
(distribution-free) models of life distribution. We start with probability
plotting techniques for parametric uncensored data.

5.1 Probability Plotting for Parametric Models with


Uncensored Data
Consider a data set consisting of n observations x₁, x₂, ..., xₙ on indepen-
dent random variables X₁, ..., Xₙ having an identical distribution, with a
CDF F(x). Such a data set is called a random sample from the distribu-
tion F(x). The empirical CDF of this random sample is defined as

(5.1.1)   Fₙ(x) = (1/n){# of sample values ≤ x} = (1/n) Σ_{i=1}^{n} I{Xᵢ ≤ x},

where

(5.1.2)   I{Xᵢ ≤ x} = 1 if Xᵢ ≤ x, and 0 if Xᵢ > x.
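Definition (5.1.1) translates directly into code. A minimal sketch (Python; the sample values are made up for illustration):

```python
def empirical_cdf(sample):
    # F_n of (5.1.1): the fraction of sample values that are <= x
    n = len(sample)
    values = sorted(sample)
    def Fn(x):
        return sum(1 for v in values if v <= x) / n
    return Fn

# made-up sample values for illustration
Fn = empirical_cdf([7.1, 9.3, 9.8, 10.4, 11.6])
# Fn jumps by 1/n at each order statistic, so Fn(x_(i)) = i/n
```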


Figure 5.1. The Empirical Distribution Function of a
Sample of Size N = 100 from N(10, 1) and the CDF

A basic result in probability theory states that the empirical CDF, Fₙ(x),
converges in a probabilistic sense, as n grows, to F(x). This means that, if
the sample is very large, the empirical CDF is close to the true CDF with
high probability. Notice that if X(1) ≤ X(2) ≤ ... ≤ X(n) are the ordered
sample values, then the value of Fₙ(x) at x = X(i) is i/n, i.e.,

(5.1.3)   Fₙ(X(i)) = i/n.

In Figure 5.1 we present the empirical CDF of a random sample of n =
100 values from an N(10, 1) distribution, as well as the corresponding true
CDF. We see that the empirical CDF, Fₙ(x), is close to the true CDF,
F(x). Notice that X(i) is the (i/n)-th fractile of the empirical distribution
Fₙ(x). Since in large samples Fₙ(x) is close to F(x), we expect that X(i)
will be close to the (i/n)-th fractile of the true distribution, i.e., X(i) ≈ F⁻¹(i/n).
In other words, if X(i) is the i-th order statistic of a random sample from a
distribution F(x), we expect that the points (F⁻¹(i/n), X(i)), i = 1, ..., n,
will be scattered around a straight line with slope 1, passing through the
origin.
Furthermore, if the true CDF is F((x − μ)/σ), then xₚ = μ + σF⁻¹(p), and we ex-
pect that the points (F⁻¹(i/n), X(i)), i = 1, ..., n, will be scattered around
a straight line with intercept μ and slope σ. This is the basis for the method
of probability plotting.
EXAMPLE 5.1
The p-th fractile of the normal distribution is xₚ = μ + σzₚ. We may
hence consider the graph of x(i) against Φ⁻¹(i/n), i = 1, ..., n − 1. To avoid
losing the n-th point, due to the fact that Φ⁻¹(n/n) = ∞, it is customary
to consider for the abscissa the coordinate Φ⁻¹(i/(n + 1)) or Φ⁻¹((i − 3/8)/(n + 1/4)).

Figure 5.2. Sample Values from N(10, 1) Against Their z-Scores
(least-squares line: slope 1.0233, intercept 9.9198, R² = .9931)

In Figure 5.2 we present a plot of the points (z_{i,n}, x(i)), where z_{i,n} = Φ⁻¹((i − 3/8)/(n + 1/4)).
The z_{i,n} are called the normal scores of the sample.
A straight line was fitted through the points by the method of least-
squares. The slope of this line provides an estimate of the standard devi-
ation, σ, and the intercept provides an estimate of μ. In Figure 5.2 the
intercept of the line is μ̂ = 9.92 and its slope is σ̂ = 1.02.
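The normal-scores fit behind Figure 5.2 can be imitated with the standard library alone. A sketch (the simulated N(10, 1) sample stands in for the data):

```python
import random
import statistics

def normal_scores(n):
    # z_{i,n} = Phi^{-1}((i - 3/8) / (n + 1/4)), i = 1, ..., n
    nd = statistics.NormalDist()
    return [nd.inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]

def fit_line(z, x):
    # least-squares intercept and slope of x against z
    n = len(z)
    zbar, xbar = sum(z) / n, sum(x) / n
    slope = (sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, x))
             / sum((zi - zbar) ** 2 for zi in z))
    return xbar - slope * zbar, slope

random.seed(1)
sample = sorted(random.gauss(10.0, 1.0) for _ in range(100))
intercept, slope = fit_line(normal_scores(100), sample)
# the intercept estimates mu (near 10), the slope estimates sigma (near 1)
```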

We provide now a list of the coordinates required for the probability plots
corresponding to the various families of distributions discussed in Chapter
2. The value pᵢ = i/(n + 1) corresponding to x(i) is called the plotting
position. In some books we find the use of pᵢ = (i − .5)/n for a plotting position.
We remark here that i/(n + 1) is the expected value of F(X(i)). The problem
of which plotting position should be used receives considerable attention in
statistical research.
1. Exponential or shifted exponential distributions:
x(i) versus E_{i,n}, where E_{i,n} = −ln(1 − i/(n + 1)), i = 1, ..., n.
2. Weibull distributions:
yᵢ = ln x(i) versus U_{i,n} = ln(−ln(1 − i/(n + 1))), i = 1, ..., n.
3. Extreme value distributions (minima):
x(i) versus U_{i,n} = ln(−ln(1 − i/(n + 1))), i = 1, ..., n.
4. Gamma distributions:
x(i) versus G_{i,n} = the i/(n + 1)-th fractile of G(ν, 1), i = 1, ..., n.
5. Normal distributions:
x(i) versus Z_{i,n} = Φ⁻¹((i − 3/8)/(n + 1/4)), i = 1, ..., n.

6. Lognormal distributions:
yᵢ = ln x(i) versus Z_{i,n} = Φ⁻¹((i − 3/8)/(n + 1/4)), i = 1, ..., n.
Some parameters of the life distributions can be estimated directly from
the probability plots. A list of these parameters and the corresponding
estimates is given below. We denote by x̂(z) the predicted sample value for
a given score z, i.e., x̂(z) = intercept + z · slope.

Table 5.1. Graphical Estimators for Several Life Distributions

Distribution           Parameter                 Estimator

Exponential            Mean                      slope
or shifted             Median                    x̂(.693)
exponential            Standard Deviation        slope
                       Location, t₀              intercept
Weibull                Shape, ν                  1/slope
                       Scale, β                  exp(intercept)
                       Mean                      exp(x̂(0)) Γ(1 + slope)
                       Median                    exp(x̂(−.3665))
                       Standard Deviation        exp(x̂(0)) [Γ(1 + 2 slope)
                                                 − Γ²(1 + slope)]^(1/2)
Extreme                Location Parameter, ξ     intercept
Value                  Scale Parameter, δ        slope
                       Mean                      x̂(−.5772)
                       Median                    x̂(−.3665)
                       Standard Deviation        x̂(1.283) − intercept
Gamma (ν known)        Scale Parameter, β        slope
                       Mean                      ν · slope
                       Median                    G⁻¹(.5; ν, 1) · slope
                       Standard Deviation        √ν · slope
Normal                 Mean                      intercept
                       Standard Deviation        slope
Lognormal              Mean                      exp(intercept + slope²/2)
                       Median                    exp(intercept)
                       Standard Deviation        Mean · [exp(slope²) − 1]^(1/2)

We provide now a few examples.

EXAMPLE 5.2
I. A Probability Plot from an Exponential Distribution
In Figure 5.3 we present the plot of 100 values generated at random from
an exponential distribution E(5). We fit a straight line through the origin
to the points by the method of least-squares. A linear regression routine
provides the line

x̂ = 5.9413 E.

Figure 5.3. Probability Plot of n = 100 Random Deviates from E(5)
(fitted line: slope 5.9413, intercept .0000, R² = .9800)

Accordingly, the slope of a straight line fitted to the points provides an
estimate of the true mean and standard deviation, β = 5. An estimate of
the median is

x̂(.693) = .693 × 5.9413 = 4.117.

The true median is Me = 3.465.

II. A Probability Plot from a Weibull Distribution
In Figure 5.4 we provide a probability plot of n = 100 values generated
from a Weibull distribution with parameters ν = 2 and β = 2.5.
Least-squares fitting of a straight line to these points yields the line

y = .856 + .479 u.

Accordingly, we obtain the following estimates:

ν̂ = 1/.479 = 2.087,
β̂ = exp(.856) = 2.354,
Median = exp(.856 − (.3665)(.479)) = 1.975.

The true median is equal to β(ln 2)^(1/2) = 2.081. The estimate of the mean
is

μ̂ = β̂ Γ(1 + .479) = β̂ × .479 × Γ(.479) = 2.080.

Figure 5.4. Probability Plot of n = 100 Random Deviates from W(2, 2.5)

The true mean is μ = βΓ(1.5) = 2.216. Finally, an estimate of the standard
deviation is

σ̂ = β̂ (Γ(1.958) − Γ²(1.479))^(1/2)
  = β̂ [.958 × Γ(.958) − (.479 × Γ(.479))²]^(1/2) = 1.054.

The true value is σ = β(Γ(2) − Γ²(1.5))^(1/2) = 1.158.
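The graphical estimators used in part II follow mechanically from the fitted line. A sketch of the Table 5.1 recipes for the Weibull case (Python):

```python
import math

def weibull_estimates(intercept, slope):
    # The Table 5.1 graphical estimators for the Weibull distribution,
    # from the line y = intercept + slope * u fitted on the log scale.
    nu = 1.0 / slope
    beta = math.exp(intercept)
    median = math.exp(intercept - 0.3665 * slope)
    mean = beta * math.gamma(1.0 + slope)
    sd = beta * (math.gamma(1.0 + 2 * slope)
                 - math.gamma(1.0 + slope) ** 2) ** 0.5
    return nu, beta, median, mean, sd

# Line fitted in part II: y = .856 + .479 u
nu, beta, median, mean, sd = weibull_estimates(0.856, 0.479)
```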

EXAMPLE 5.3
A sample of n = 72 observations were taken on the repair time of an
insertion machine. The sample statistics are given in the following table.

Table 5.2. Sample Statistics for Repair Times [min]

Sample size (n) 72.00


Sample mean (x) 1.77
Sample standard deviation (8) 2.63
Minimum sample point (min) 0.15
Maximum sample point (max) 18.00
Lower quartile (Ql) 0.63
Sample median (Me) 1.00
Upper quartile (Q3) 2.00

We see in Table 5.2 that 75% of the sample values are not greater than
2 [min], and that the distribution is apparently very skewed (the maximum
value is 18 [min]). For this reason a lognormal probability plotting was

Figure 5.5. Lognormal Probability Plot for Repair Times Sample

done. The plot is presented in Figure 5.5. We see in this figure that the fit
of the lognormal distribution to the sample values is quite good.
The parameters of the repair distribution are estimated from the line
fitted to the plot. We obtain

Median = exp(.0687) = 1.07 [min],
Mean = exp(.0687 + (.9071)²/2) = 1.62 [min],

and

Standard Deviation = 1.62 [exp(.9071²) − 1]^(1/2) = 1.83 [min].

Notice that there are differences between the sample statistics in Table 5.2
and the estimates from the plot. The sample mean and the sample standard
deviation are sensitive to extreme values in the sample and are therefore
not stable (robust) estimates of the parameters of the distribution.

5.2 Probability Plotting with Censored Data
There are cases in which observations are censored either from the left, or
from the right, or both. For example, if any repair taking less than half
a minute is recorded as .5 [min] then in some cases, as in the data set of
Example 5.2, a few values are left censored. Similarly, if any repair taking
longer than 5 [min] is recorded as 5+ we have lost the exact information on
the length of the repair time. We plot only the points which correspond to
5.2 Probability Plotting With Censored Data 91

uncensored data, but in determining the plotting positions of these points


we bring into account the number of censored values and the total sample
size. For example, if a sample has n = 10 observations, 2 are censored on
the left and 1 on the right we have
Table 5.3. Plotting Poisitions in a Censored Sample

i     xᵢ      i/(n + 1)
1     −       −
2     −       −
3     x(3)    3/11
4     x(4)    4/11
...   ...     ...
9     x(9)    9/11
10    −       −

We plot only seven points (x(i), zᵢ), i = 3, ..., 9, where zᵢ = F⁻¹(i/11).

Figure 5.6. Lognormal Probability Plot for Censored Sample

In the following example we illustrate it on the data of Example 5.3.

EXAMPLE 5.4
In Example 5.3 we analyzed a sample of 72 repair times [min]. If all
values smaller than .5 [min] and all values greater than 5 [min] are censored,
we get 16 values censored on the left and 5 values censored on the right.
We plot only the 51 uncensored data points. In Figure 5.6 we present the
lognormal probability plot of the censored sample.
We see again a very good fit of the lognormal distribution. The least-
squares regression line, in the censored case, of y = log(x) against z
is y = −.00323 + .9286z, with R² = .9725. In the non-censored case we
obtained the regression line y = .0687 + .9071z, with R² = .9563. These
results are very close. Extrapolating into the censoring region one could,
using the prediction equation y = −.003 + .9286z, predict the values which
have been censored.

5.3 Non-Parametric Plotting
5.3.1 Product Limit Estimator of Reliability
The Kaplan-Meier method yields an estimate, called the Product Limit
(PL) estimate of the reliability function, without an explicit reference to
the life distribution. The estimator of the reliability function at time t will
be denoted by R̂ₙ(t), where n is the number of units put on test at time
t = 0. If all the failure times 0 < t₁ < t₂ < ... < tₙ < ∞ are known then
the PL estimator is equivalent to

(5.3.1)   R̂ₙ(t) = 1 − Fₙ(t),

where Fₙ(t) is the empirical CDF defined in (5.1.1).
In some cases either random or non-random censoring or withdrawals
occur and we do not have complete information on the exact failure times.
Suppose that 0 < t₁ < t₂ < ... < tₖ < ∞, k ≤ n, are the failure times and
w = n − k is the total number of withdrawals.
Let Iⱼ = (t_{j−1}, tⱼ), j = 1, ..., k + 1, with t₀ = 0, t_{k+1} = ∞, be the time
intervals between recorded failures. Let wⱼ be the number of withdrawals
during the time interval Iⱼ. The PL estimator of the reliability function is
then

(5.3.2)   R̂ₙ(t) = ∏_{j: tⱼ ≤ t} (1 − 1/nⱼ),

where n₀ = n, and nⱼ is the number of operating units just prior to the
failure time tⱼ.
Usually, when units are tested in the laboratory under controlled condi-
tions, there may be no withdrawals. This is not the case, however, if tests

are conducted in field conditions, and units on test may be lost, withdrawn
or destroyed for reasons different than the failure phenomenon under study.
Suppose now that systems are installed in the field as they are purchased
(random times). We decide to make a follow-up study of the systems for
a period of two years. The time till failure of systems participating in the
study is recorded. We assume that each system operates continuously from
the time of installment until its failure. If a system has not failed by the end
of the study period the only information available is the length of time it
has been operating. This is a case of multiple censoring. At the end of the
study period we have the following observations: {(Tᵢ, δᵢ), i = 1, ..., n},
where n is the number of systems participating in the study; Tᵢ is the length
of operation of the i-th system (TTF or time till censoring); δᵢ = 1 if the
i-th observation is not censored and δᵢ = 0 otherwise.
Let T(1) ≤ T(2) ≤ ... ≤ T(n) be the order statistic of the operation
times and let δ_{j₁}, δ_{j₂}, ..., δ_{jₙ} be the δ-values corresponding to the ordered
T values, where jᵢ is the index of the i-th order statistic T(i), i.e., T(i) = T_{jᵢ}
(i = 1, ..., n).
The PL estimator of R(t) is given then by

(5.3.3)   R̂ₙ(t) = I{t < T(1)} + Σ_{i=1}^{n} I{T(i) ≤ t < T(i+1)} ∏_{j=1}^{i} (1 − δ_{jⱼ}/(n − j + 1)),

with T(n+1) = ∞.

Another situation prevails in the laboratory or in field studies when the
exact failure times cannot be recorded. Let 0 < t₁ < t₂ < ... < tₖ < ∞
be fixed inspection times. Let wᵢ be the number of withdrawals and fᵢ the
number of failures in the time interval Iᵢ (i = 1, ..., k + 1). In this case
formula (5.3.2) is modified to be

(5.3.4)   R̂ₙ(tᵢ) = ∏_{j=1}^{i} (1 − fⱼ/nⱼ),   i = 1, ..., k,

where nⱼ is the number of units still on test at the beginning of Iⱼ.
This version of the estimator of R(t), when the inspection times are fixed
(not random failure times), is called the actuarial estimator.
In the following examples we illustrate these estimators of the reliability
function. Before proceeding to the examples we remark that, according
to (1.3.6), a non-parametric estimator of the cumulative hazard function,
H(t), is

(5.3.5)   Ĥₙ(t) = −log R̂ₙ(t).


EXAMPLE 5.5
A machine is tested before it is shipped to the customer for a one week
period (120 [hr]) or till its first failure, whichever comes first. Twenty such
machines were tested consecutively. In Table 5.4 we present the ordered
times till failure or till censoring (TTF/TTC) of the 20 machines, the
factors (1 − δᵢ/(n − i + 1)) and the PL estimates R̂(Tᵢ), i = 1, ..., 20. The
graphs of R̂₂₀(t) and of Ĥ₂₀(t) are presented in Figure 5.7 and Figure 5.8.
In Figure 5.7 we present also the reliability function of E(100) and in
Figure 5.8 the cumulative hazard function of E(100). We see from the
plots that, apparently, the TTF of the tested machines is exponentially
distributed, with an MTTF close to 100 [hr]. Later we will study how to
estimate the MTTF when censoring is present.

Figure 5.7. PL Estimator of Reliability Under Censoring
(empirical estimate and theoretical E(100) reliability function)



Table 5.4. Failure Times [hr] and PL Estimates

i     T(i)     (1 − δᵢ/(n − i + 1))     R̂(T(i))
1 4.78 0.950 0.95
2 8.37 0.947 0.90
3 8.76 0.944 0.85
4 13.77 0.941 0.80
5 29.20 0.937 0.75
6 30.53 0.933 0.70
7 47.96 0.928 0.65
8 59.22 0.923 0.60
9 60.66 0.916 0.55
10 62.12 0.909 0.50
11 67.06 0.900 0.45
12 92.15 0.888 0.40
13 98.09 0.875 0.35
14 107.60 0.857 0.30
15 120.00 1.000 0.30
16 120.00 1.000 0.30
17 120.00 1.000 0.30
18 120.00 1.000 0.30
19 120.00 1.000 0.30
20 120.00 1.000 0.30
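The R̂ column of Table 5.4 can be reproduced in a few lines; only the censoring pattern enters the product, not the actual failure times. A sketch:

```python
def pl_estimates(delta):
    # Running product-limit factors (1 - delta_i / (n - i + 1)) and the
    # cumulative product R_hat, as in (5.3.3) and Table 5.4.
    n = len(delta)
    R, out = 1.0, []
    for i, d in enumerate(delta, start=1):
        R *= 1.0 - d / (n - i + 1)
        out.append(R)
    return out

# Censoring pattern of Table 5.4: 14 failures, then 6 units censored at 120 [hr]
delta = [1] * 14 + [0] * 6
R_hat = pl_estimates(delta)
# R_hat[0] = .95, R_hat[13] = .30, and R_hat stays at .30 over the censored tail
```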

5.3.2 Total Time on Test Plots
Suppose that n units are put on test at time t₀ = 0. Let 0 < t₁ < ... <
tₙ < ∞ be the failure times of these units. Let t₀ = 0 and t_{n+1} = ∞. The
total time on test (TTT) at time t is defined as

(5.3.6)   T(t) = I{t ≤ t₁} nt + Σ_{i=2}^{n} I{t_{i−1} < t ≤ tᵢ} [Σ_{j=1}^{i−1} (n − j + 1)(tⱼ − t_{j−1}) + (n − i + 1)(t − t_{i−1})].

Let Tᵢ = T(tᵢ), i = 1, ..., n, be the values of T(t) at the failure times. Let
Vᵢ = Tᵢ/Tₙ (i = 1, ..., n). The graph of (i/n, Vᵢ), i = 1, ..., n, is called the
TTT plot.
In Figure 5.9 we present the TTT plot for the data of Table 5.4.
The graph of (i/n, Vᵢ) approaches, as n → ∞, the function

(5.3.7)   H_F(x) = ∫₀^{F⁻¹(x)} R(y) dy / ∫₀^{F⁻¹(1)} R(y) dy,   0 ≤ x ≤ 1,

Figure 5.8. Cumulative Hazard Plot Under Censoring
(empirical estimate and theoretical E(100) cumulative hazard function)

where F(y) is the CDF of the TTF, and R(y) = 1 − F(y) is the corre-
sponding reliability function. We can show that if F(y) = 1 − e^(−λy) then
H_F(x) = x. On the other hand, if the failure rate function h(t) is in-
creasing (IFR distribution) then H_F(x) is a concave function of x, with
H_F(0) = 0 and H_F(1) = 1. Thus, in the IFR case H_F(x) > x for all
0 < x < 1. Conversely, if F is a DFR distribution then the function H_F(x)
is convex, and H_F(x) < x for all 0 < x < 1. The TTT plot of Figure 5.9
shows that apparently the life distribution it represents is an IFR one.
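The statistics Vᵢ behind such a plot are easy to compute from (5.3.6). A sketch (Python; for this illustration the censored values of Table 5.4 are ignored and only the 14 recorded failure times are used):

```python
def ttt_statistics(failure_times):
    # V_i = T_i / T_n, where T_i is the total time on test at the i-th
    # failure, computed from (5.3.6): between consecutive failures every
    # surviving unit accrues test time.
    t = sorted(failure_times)
    n = len(t)
    total, prev, Ts = 0.0, 0.0, []
    for i, ti in enumerate(t, start=1):
        total += (n - i + 1) * (ti - prev)
        prev = ti
        Ts.append(total)
    return [Ti / Ts[-1] for Ti in Ts]

V = ttt_statistics([4.78, 8.37, 8.76, 13.77, 29.20, 30.53, 47.96,
                    59.22, 60.66, 62.12, 67.06, 92.15, 98.09, 107.60])
# plotting (i/n, V_i) gives the TTT plot; V_n = 1 by construction
```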

5.4 Graphical Aids
The practice has been to use special graph papers for probability plotting.
In Figure 5.10 we present such a plot on a Weibull probability paper. In this
graph paper, the Weibull scores are plotted against the log failure times.
The ordinate scale is that of ln(−ln(1 − i/(n + 1))) and the abscissa scale is
ln(x(i)). In such plots the slope of the line provides an estimate of the shape
parameter ν. A special scale is given for estimating the shape parameter.
For details on purchasing such probability papers, see W. Nelson (1990, p.
131).
Most of the probability plots presented in this chapter were first prepared

Figure 5.9. Total Time on Test Plot

by special software RELSTAT© for the PC. One can make probability
plots on the PC by using various software packages in the market, like
MINITAB©, STATGRAPHICS©, and others.

5.5 Exercises
[5.1.1] Use a computer program to generate a random sample from N(50, 5)
of size n = 50.
(i) Plot the empirical CDF of this sample.
(ii) Draw a normal probability plot of this sample, and estimate μ
and σ from this plot.
[5.1.2] The following data are failure times [hr] of a random sample of
n = 50 electronic devices.
26.3, 78.5, 29.8, 22.6, 113.1, 10.8, 157.4, 2.4, 51.9, 29.3, 40.3, 216.6,
30.5,31.6, 57.5, 38.1, 113.7, 1.0,96.8,63.3, 72.1, 107.4,39.6,29.0,
11.0, 105.2, 36.7, 7.1, 85.5, 24.6, 28.0, 23.6, 14.7, 24.3, 46.9, 56.9,
293.4, 33.0, 47.0, 51.9, 20.0, 20.3, 158.9, 54.0, 14.8, 81.2, 46.0, 42.8,
8.9,35.7.
(i) Make a Weibull probability plot of this sample.
(ii) Make an exponential probability plot of the data.
Figure 5.10. Plot on a Weibull Probability Paper

(iii) Which lifetime distribution do you consider more appropriate


for this type of device?
(iv) Estimate the median, mean, and standard deviation of the
distribution.
[5.1.3] (i) Apply a computer program to generate (simulate) a random
sample of size n = 50 from the Weibull distribution, W(ν, β), with
ν = 3 and β = 2.
(ii) What is the theoretical mean, μ, and standard deviation, σ, of
W(3, 2)?
(iii) Compute the sample mean, x̄, and standard deviation, S, and
compare them to μ and σ.
(iv) Compute the estimates of μ and σ obtained from the intercept
and slope of the straight line fitted to the points in a probability
plotting of the sample.
[5.1.4] In a Weibull probability plotting one obtains a straight line fit to the data with intercept a = 5.01 and slope b = 2.75. Estimate the scale parameter, β, and the shape parameter, ν, of the distribution. Estimate also the mean, μ, and the standard deviation of the distribution.
[5.1.5] The following sample lists the pneumatic pressure [kg/cm²] required to break 20 concrete cubes of dimensions 10 × 10 × 10 cm³:
94.9, 106.9, 229.7, 275.7, 144.5, 112.8, 159.3, 153.1, 270.6, 322.0, 216.4, 544.6, 266.2, 263.6, 138.5, 79.0, 114.6, 66.1, 131.2, 91.1.
Make a lognormal probability plotting of these data and estimate the mean and standard deviation of this distribution.
[5.2.1] The following is a sample of censored failure data [hr] from a lognormal distribution:
1764, 2775, 3400, 3560, 3900, 4890, 5200, 5355, 5500+, 5500+.
(Censored values appear as x+.) Make a lognormal probability plot of the data and estimate the mean and standard deviation of the TTF.
[5.2.2] The following data represent the time till first failure [days] of electrical equipment. The data were censored after 400 days.
400+, 350, 400+, 303, 249, 400+, 157, 176, 13, 172.
Make a Weibull probability plot of these data and estimate the median of the distribution.
[5.3.1] Plot the PL estimator of the reliability function of the electrical
devices, based on the failure times given in Exercise [5.1.2].
[5.3.2] Plot the cumulative hazard function for the data of Exercise [5.2.2].
[5.3.3] Make a TTS plot for the data of Exercise [5.1.2].
6
Estimation of
Life Distributions and
System Characteristics

In Chapters 2 and 3 we presented families of life distributions of components/systems and probabilistic methods of determining the reliability of
a system as a function of the reliabilities of its components. In Chapter 5
we touched upon some problems of estimating some interesting parameters
of life distributions. The estimation methods discussed there are based on
graphical analysis of probability plots. These methods, though often coarse
and imprecise, do provide quick estimates.
In the present chapter we study in some detail the problem of analyzing
life testing data in order to estimate the parameters of life distributions
and the corresponding reliability/availability functions. The type of esti-
mation procedure used depends on the type of data available: discrete or
continuous, complete or censored.
We start with a brief discussion of the estimation problem and define
various types of estimation procedures. The problem of estimating system
reliability/availability from data available on the system's components is
also discussed.

6.1 Properties of Estimators


6.1.1 The Estimation Problem
Suppose that a new component has been developed, but its reliability func-
tion is unknown. In order to obtain information on the reliability function
and on other characteristics of the life distribution, a certain number, n, of
such identical components are subjected to a test. Suppose that the test
lasts until all of the n components fail (complete sample). Let t₁, t₂, …, tₙ be the failure times of the n components. This set of failure times is called
[Figure 6.1. Exponential Probability Plot of Lifetimes of 20 Electric Generators; vertical axis: sample values.]

a random sample, if t₁, …, tₙ are realizations of n independent lifetime random variables T₁, …, Tₙ having the same life distribution. How do we
utilize the information in the sample to provide estimates of the reliability
function of the new component, or of its MTTF? In the following example
we illustrate the problem numerically.
EXAMPLE 6.1
Twenty electric generators were subjected to accelerated life testing. The
test terminated after all the generators failed. The following values repre-
sent the lifetimes of these generators in hours:

121.5    1425.5    2951.2    5637.9
1657.2    592.1   10609.7    9068.5
 848.2   5296.6       7.5    2311.1
 279.8   7201.9    6853.7    6054.3
1883.6   6303.9    1051.7     711.5

We assume that the 20 generators represent a random sample from the


relevant population. In Figure 6.1 we present the probability plot of this
sample for a possible exponential life distribution. That is, we plot the
points $t_{(i)}$ versus

$$E_{i,n} = -\ln\left(1 - \frac{i}{n+1}\right), \qquad i = 1, \ldots, n,$$
where t(i) represents the i-th order statistic of the failure times.

We see in Figure 6.1 that the points $(E_{i,n}, t_{(i)})$ are scattered around a straight line. A least-squares fit of a line through the origin yielded a slope of b = 3866 [hr]. This graphical analysis shows that the assumption of an exponential life distribution is plausible.
The graphical estimate is not, however, the only possible estimate of β. We may also try to estimate β by the sample mean,

$$\bar{t}_n = \frac{1}{n}\sum_{i=1}^{n} t_i = 3543.4 \ [\mathrm{hr}].$$

A third possible estimate can be obtained from the sample median, Me. We know that the median of the distribution is Median = .693β. This suggests for an estimate of β the value Me/.693 = 3026.4 [hr]. We have thus obtained three different estimates of β from the same data. Which one should we adopt? The answer to this question hinges on notions of accuracy and precision that will now be considered.

6.1.2 Sampling Distributions, Accuracy and Precision

An estimator of a characteristic of a distribution is a function of the sample values which yields values in the domain of variation of that characteristic. For example, suppose that the characteristic of interest is the standard deviation, σ, of the distribution. Since σ assumes only positive values, any estimator of σ should be positive.
Since an estimator is a function of the sample values, different samples
from the same distribution will produce different values of that estimator.
Since the samples are random, the estimators are also random variables.
The distribution of the estimator is called its sampling distribution.
EXAMPLE 6.2
One hundred random samples of size n = 4 were drawn (by simulation)
from the exponential distribution E(5000). The sample mean

$$\bar{x}_n = \frac{1}{n}\sum_{i=1}^{n} x_i$$

was computed for each sample. In Figure 6.2 we present the histogram of
these 100 sample means.
We see that the sample means vary over the interval (800, 14,600). The average of all the 100 sample means is $\bar{x} = 4814.8$ and their standard deviation is $S_{\bar{x}} = 2519.5$. We see that although individual sample means could vary considerably from the distribution mean μ, which in the present case is equal to 5000, the average of all 100 means is quite close to the true value. (Note that $\bar{x}$ can also be regarded as the average of a random sample of size n = 400.)
[Figure 6.2. Histogram of 100 Sample Means, n = 4, Drawn from E(5000); horizontal axis: sample means [1000], vertical axis: frequency.]

In Chapter 2 we pointed out that the sum of n independent random variables having the same exponential distribution, E(β), is the Erlang distribution G(n, β). Hence, the means of random samples of size n from E(β) have the gamma sampling distribution, i.e.,

$$\bar{X}_n \sim G(n, \beta/n).$$

Thus, the expected value of $\bar{X}_n$ is the mean of G(n, β/n), namely $E\{\bar{X}_n\} = \beta$. The standard deviation of the sampling distribution is $SD\{\bar{X}_n\} = \beta/\sqrt{n}$. In Example 6.2, n = 4 and β = 5000. Therefore, $SD\{\bar{X}_4\} = 2500$. The standard deviation of the 100 sample means was 2519.5, which is close to the theoretical value of 2500.
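The sampling-distribution facts above are easy to check by simulation. The following sketch (standard-library Python; the replication count and seed are illustrative choices, not from the text) redraws many samples of size n = 4 from E(5000) and compares the empirical standard deviation of the sample means with the theoretical value β/√n = 2500.

```python
import random
import statistics

random.seed(1)
beta, n, repetitions = 5000.0, 4, 10_000

# Each sample mean averages n draws from E(beta);
# random.expovariate takes the rate 1/beta as its argument.
means = [
    statistics.fmean(random.expovariate(1.0 / beta) for _ in range(n))
    for _ in range(repetitions)
]

theoretical_sd = beta / n ** 0.5          # beta / sqrt(n) = 2500
empirical_sd = statistics.stdev(means)    # should be close to 2500
print(round(empirical_sd, 1), theoretical_sd)
```

With 10,000 replications the empirical value typically lands within a few percent of 2500, mirroring the 2519.5 observed in Example 6.2.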

The accuracy of an estimator is measured in terms of the closeness of the mean of its sampling distribution to the true value of the characteristic it estimates. More formally, if θ is the value of a characteristic of a distribution, and $\hat{\theta}(X_1, \ldots, X_n)$ is an estimator of θ, we say that the estimator is unbiased (accurate) if

(6.1.1) $$E_\theta\{\hat{\theta}(X_1, \ldots, X_n)\} = \theta$$

for all possible values of θ.


The precision of an estimator $\hat{\theta}$ is expressed in terms of its root mean-squared-error

(6.1.2) $$RMSE\{\hat{\theta}\} = \left(E_\theta\{(\hat{\theta} - \theta)^2\}\right)^{1/2}.$$

A small RMSE means a high precision. Notice that if $\hat{\theta}$ is unbiased then the $RMSE\{\hat{\theta}\}$ equals the $SD\{\hat{\theta}\}$. Generally, if $\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i$ is the mean of a random sample from any distribution with mean μ and variance σ², then $\bar{X}_n$ is an unbiased estimator of μ and hence

(6.1.3) $$RMSE\{\bar{X}_n\} = SD\{\bar{X}_n\} = \sigma/\sqrt{n}.$$

A special case of this result was illustrated in Example 6.2.


EXAMPLE 6.3
A radar component has a life distribution which is a shifted exponential SE(β, t₀). The values of β as well as of t₀ are unknown. n identical parts are put on life testing. Let $t_{(1)} \le t_{(2)} \le \cdots \le t_{(n)}$ be the order statistics of the n failure times. The sample minimum $t_{(1)}$ can serve as an estimator of the parameter t₀. For estimating β we apply the estimator

$$\hat{\beta} = \frac{1}{n-1}\sum_{i=1}^{n}\left(t_{(i)} - t_{(1)}\right).$$

It can be shown that $t_{(1)}$ and $\hat{\beta}$ are independent random variables and

(6.1.4) $$t_{(1)} \sim SE(\beta/n,\, t_0), \qquad \hat{\beta} \sim G\left(n-1,\ \frac{\beta}{n-1}\right).$$

$t_{(1)}$ is a biased estimator of t₀, since $E\{t_{(1)}\} = t_0 + \beta/n$. We see that the bias of $t_{(1)}$ is β/n and thus becomes negligible as n grows. The RMSE of $t_{(1)}$ is $\sqrt{2}\,\beta/n$. Such an RMSE is considered to approach zero very fast; $t_{(1)}$ is a very precise estimator in large samples. $\hat{\beta}$ is an unbiased estimator of the scale parameter β. Indeed $E\{\hat{\beta}\} = \beta$. The SD of $\hat{\beta}$ is $\beta/\sqrt{n-1}$.

6.1.3 Closeness Probabilities
We have seen that estimators have sampling distributions. Knowing the sampling distribution, one can calculate the probability that an estimator $\hat{\theta}$ will fall in a specified interval around the characteristic θ which is being estimated. One could specify a value δ and determine the probability that $\hat{\theta}$ lies within δ units of the true θ, i.e.,

(6.1.5) $$\Pr\{|\hat{\theta} - \theta| < \delta\}.$$

This probability is called a coverage probability of a fixed-width interval, or fixed-closeness probability. Another approach to measuring the

degree of closeness is to determine the proportional-closeness probability. For a specified value γ, 0 < γ < 1, one determines

(6.1.6) $$\Pr\{|\hat{\theta} - \theta| \le \gamma\theta\}.$$

The fixed-closeness, or proportional-closeness, probability of a given estimator may depend on the actual value of θ, on the choice of δ or γ, and on the sample size. We provide now a few examples.
EXAMPLE 6.4
Given a random sample from a normal distribution N(μ, σ), we are interested in estimating σ. Suppose that the estimator of σ is the sample standard deviation

$$S = \left(\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X}_n)^2\right)^{1/2}.$$

We would like the estimates to fall within 10% of the true value of σ. Thus, the interval of interest is (.9σ, 1.1σ). The sampling distribution of S² is like that of $\frac{\sigma^2}{n-1}\chi^2[n-1]$. Hence, the proportional-closeness probability of S is

(6.1.7) $$\Pr\{.9\sigma \le S \le 1.1\sigma\} = \Pr\{(.9)^2 \le (S/\sigma)^2 \le (1.1)^2\} = \Pr\{.81(n-1) \le \chi^2[n-1] \le 1.21(n-1)\}.$$

We see that for a given n, the proportional-closeness probability of S is the same for all values of σ. In the following table we present the proportional-closeness probability of S as a function of the sample size n.

n                        10      20      30      100     200
proportional closeness   .3236   .4597   .5521   .8396   .9529

Clearly, in order to attain high proportional-closeness probability, one has to take large size samples.
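The table entries can be checked numerically. A small Monte Carlo sketch (standard-library Python; the replication count and seed are arbitrary choices) estimates Pr{.9σ ≤ S ≤ 1.1σ} for n = 10 by repeatedly drawing normal samples; since the probability does not depend on μ or σ, we may sample from N(0, 1).

```python
import random
import statistics

random.seed(2)
n, repetitions = 10, 100_000

# The closeness probability is scale-free, so sigma = 1 suffices.
hits = 0
for _ in range(repetitions):
    s = statistics.stdev(random.gauss(0.0, 1.0) for _ in range(n))
    if 0.9 <= s <= 1.1:
        hits += 1

closeness = hits / repetitions   # the table gives .3236 for n = 10
print(round(closeness, 3))
```

The estimate should agree with the tabulated .3236 to within Monte Carlo error (about ±.003 here).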

EXAMPLE 6.5
Let t₁, t₂, …, tₙ be a random sample from an exponential distribution E(β). We wish to estimate the reliability function, i.e.,

$$\theta = \exp(-t/\beta).$$

Let $\bar{t}_n$ be the sample mean, and consider the estimator

$$\hat{\theta}_n = \exp(-t/\bar{t}_n).$$

For γ = .1, the proportional-closeness probability of $\hat{\theta}_n$ is

(6.1.8) $$\Pr\{.9\theta \le \hat{\theta}_n \le 1.1\theta\} = \Pr\{\ln(.9) - t/\beta \le -t/\bar{t}_n \le \ln(1.1) - t/\beta\} = \Pr\left\{\ln(.9) \le -\frac{t}{\beta}\left(\frac{\beta}{\bar{t}_n} - 1\right) \le \ln(1.1)\right\}.$$

We saw earlier that $\bar{t}_n$ is distributed like $G(n, \beta/n)$. Thus, $n\bar{t}_n/\beta \sim G(n, 1)$. Hence, the proportional-closeness probability can be written, after some algebraic manipulation, as

(6.1.9) $$\Pr\{.9\theta \le \hat{\theta}_n \le 1.1\theta\} = \Pr\left\{\frac{n}{1 - \frac{\beta}{t}\ln(.9)} \le G(n, 1) \le \frac{n}{\left(1 - \frac{\beta}{t}\ln(1.1)\right)_+}\right\},$$

where $a_+ = \max(a, 0)$. Finally, from (2.3.9) and (6.1.9) we obtain

(6.1.10) $$\Pr\{.9\theta \le \hat{\theta}_n \le 1.1\theta\} = \mathrm{Pos}\left(n-1;\ \frac{n}{1 - \frac{\beta}{t}\ln(.9)}\right) - \mathrm{Pos}\left(n-1;\ \frac{n}{\left(1 - \frac{\beta}{t}\ln(1.1)\right)_+}\right).$$

Notice that when $t < .095\beta$ then $n/\left(1 - \frac{\beta}{t}\ln(1.1)\right)_+ = \infty$. In this case the second term on the right-hand side of (6.1.10) is zero.
In the following figure we provide a few graphs of the proportional-closeness probabilities of $\hat{\theta}_n$, for several values of τ = t/β and n, with γ = .1.
The proportional-closeness probabilities of $\hat{\theta}_n$, for each n, decline monotonically as t/β grows. This is not the case when we consider the fixed-closeness probabilities of $\hat{\theta}_n$, which are defined by (6.1.5). In this case the
[Figure 6.3. Proportional-Closeness Probability of Reliability Estimators for Exponential Life Distribution, γ = .1; curves for n = 25, 50, 75; horizontal axis: time [MTTF].]

formula for the fixed-closeness probability is

(6.1.11) $$\Pr\{|\hat{\theta}_n - \theta| \le \delta\} = \mathrm{Pos}\left(n-1;\ \frac{n\tau}{\left(-\ln(e^{-\tau} - \delta)\right)_+}\right) - \mathrm{Pos}\left(n-1;\ \frac{n\tau}{\left(-\ln(e^{-\tau} + \delta)\right)_+}\right),$$

where τ = t/β.
Graphs of the fixed-closeness probability function are given in Figure 6.4 for δ = .1.
It is easy to explain why the proportional-closeness probability function in the present example is so different from the fixed-closeness probability function. Since the reliability function R(τ) = e^{−τ} is decreasing to zero, the proportional-closeness criterion requires that for large values of τ the precision of the estimator will be higher than that for small values of τ. Indeed, for 10% proportional-closeness, if τ = .5 it is required that the estimator fall within e^{−.5}/10 = .061 of the true value; however, if τ = 2, within e^{−2}/10 = .014.

[Figure 6.4. Fixed-Closeness Probabilities of Reliability Estimators for Exponential Life Distributions, δ = .1; curve shown for n = 50; horizontal axis: time [MTTF].]

6.1.4 Confidence and Prediction Intervals

A confidence interval for θ, at level of confidence γ, is an interval, $C_\gamma(X_1, \ldots, X_n)$, determined by the sample values, satisfying

(6.1.12) $$\Pr\{\theta \in C_\gamma(X_1, \ldots, X_n)\} \ge \gamma$$

for all θ. That is, with confidence probability of at least γ the interval $C_\gamma(X_1, \ldots, X_n)$ contains the true value of θ, whatever the value of θ is. We show below how such intervals can be determined in particular cases.

6.1.4.1 Estimating the Parameters of a Normal Distribution

The following data are a random sample of size n = 10 from an N(μ, σ) distribution. Both μ and σ are unknown.

11.65  15.88  9.58  11.75  11.09
10.79   6.38  8.52  11.20   9.98

The sample mean and standard deviation are

$$\bar{X}_{10} = 10.68 \quad \text{and} \quad S = 2.46.$$

The standardized random variable $T = \sqrt{n}(\bar{X}_n - \mu)/S$ has a t-distribution with n − 1 degrees of freedom. Let ε₁ = (1 − γ)/2 and ε₂ = (1 + γ)/2. Since $t_{\epsilon_1}[n-1] = -t_{\epsilon_2}[n-1]$, we obtain

(6.1.13) $$\Pr\left\{\bar{X}_n - t_{\epsilon_2}[n-1]\frac{S}{\sqrt{n}} \le \mu \le \bar{X}_n + t_{\epsilon_2}[n-1]\frac{S}{\sqrt{n}}\right\} = \gamma$$

for all μ and σ. It follows that the lower and upper limits of the confidence interval for μ, at level γ, are

(6.1.14) $$\bar{X}_n \pm t_{\epsilon_2}[n-1]\frac{S}{\sqrt{n}}.$$

The confidence limits for μ based on the given sample, at confidence level γ = .95, are 10.68 ± 2.262 × 2.46/√10, or 8.92 and 12.44.
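A quick sketch of this computation (Python; the fractile t.975[9] = 2.262 is taken from the text, since the standard library supplies no t-distribution):

```python
import statistics

sample = [11.65, 15.88, 9.58, 11.75, 11.09,
          10.79, 6.38, 8.52, 11.20, 9.98]
n = len(sample)
xbar = statistics.fmean(sample)     # about 10.68
s = statistics.stdev(sample)        # about 2.46
t_975_9 = 2.262                     # t fractile for gamma = .95, n - 1 = 9 df

half_width = t_975_9 * s / n ** 0.5
lower, upper = xbar - half_width, xbar + half_width
print(round(lower, 2), round(upper, 2))   # 8.92 and 12.44
```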
To obtain a confidence interval for σ, at confidence level γ, we apply again the result mentioned in Example 6.4, namely

$$S^2 \sim \frac{\sigma^2}{n-1}\chi^2[n-1].$$

This yields

(6.1.15) $$\Pr\left\{\frac{\chi^2_{\epsilon_1}[n-1]}{n-1} \le \frac{S^2}{\sigma^2} \le \frac{\chi^2_{\epsilon_2}[n-1]}{n-1}\right\} = \Pr\left\{\frac{S^2(n-1)}{\chi^2_{\epsilon_2}[n-1]} \le \sigma^2 \le \frac{S^2(n-1)}{\chi^2_{\epsilon_1}[n-1]}\right\} = \gamma$$

for all μ and σ. From tables of the fractiles of the chi-square distribution we obtain that for γ = .95, $\chi^2_{.025}[9] = 2.70$ and $\chi^2_{.975}[9] = 19.0$. Thus, the lower and upper confidence limits for σ² are, respectively,

$$\frac{(6.05)(9)}{19} \quad \text{and} \quad \frac{(6.05)(9)}{2.7}.$$

The confidence limits for σ are obtained by taking the square root of the above limits for σ². Thus, the .95 confidence limits for σ are 1.69 and 4.49.
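The same sample yields the interval for σ; a sketch (Python; the chi-square fractiles 2.70 and 19.0 are the values quoted in the text):

```python
import statistics

sample = [11.65, 15.88, 9.58, 11.75, 11.09,
          10.79, 6.38, 8.52, 11.20, 9.98]
n = len(sample)
s2 = statistics.variance(sample)      # S^2, about 6.05
chi2_025_9, chi2_975_9 = 2.70, 19.0   # fractiles for 9 df, from the text

sigma_lower = (s2 * (n - 1) / chi2_975_9) ** 0.5
sigma_upper = (s2 * (n - 1) / chi2_025_9) ** 0.5
print(round(sigma_lower, 2), round(sigma_upper, 2))   # 1.69 and 4.49
```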

6.1.4.2 Estimating the Reliability Function for an Exponential Distribution

Let t₁, t₂, …, tₙ be a random sample from E(β). We saw in Example 6.2 that $\bar{t}_n \sim G(n, \beta/n)$, or, equivalently, $2n\bar{t}_n/\beta \sim \chi^2[2n]$.

Thus, if ε₁ = (1 − γ)/2 and ε₂ = (1 + γ)/2 then

(6.1.16) $$\Pr\left\{\frac{\beta}{2n}\chi^2_{\epsilon_1}[2n] \le \bar{t}_n \le \frac{\beta}{2n}\chi^2_{\epsilon_2}[2n]\right\} = \Pr\left\{\frac{2n\bar{t}_n}{\chi^2_{\epsilon_2}[2n]} \le \beta \le \frac{2n\bar{t}_n}{\chi^2_{\epsilon_1}[2n]}\right\} = \gamma,$$

for all β. Therefore, the lower and upper confidence limits for β, β_L and β_U, respectively, are

(6.1.17) $$\beta_L = \frac{2n\bar{t}_n}{\chi^2_{\epsilon_2}[2n]}, \qquad \beta_U = \frac{2n\bar{t}_n}{\chi^2_{\epsilon_1}[2n]}.$$

If we use the sample of Example 6.1, we have n = 20 and $\bar{t}_{20} = 3543.4$ [hr]. For γ = .95 we have $\chi^2_{.025}[40] = 23.999$ and $\chi^2_{.975}[40] = 58.842$. Thus, formula (6.1.17) yields the following confidence limits for β:

$$\beta_L = 2408.8 \ [\mathrm{hr}], \qquad \beta_U = 5905.9 \ [\mathrm{hr}].$$

Finally, the lower and upper confidence limits for R(t) = exp(−t/β) are

$$R_L = \exp(-t/\beta_L), \qquad R_U = \exp(-t/\beta_U).$$

Thus, the confidence limits for the reliability at t = 5000 [hr] are

$$R_L = .125 \quad \text{and} \quad R_U = .429.$$

That is, with a confidence level of γ = .95, the interval (.125, .429) contains the value of R(5000).
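These limits follow mechanically from (6.1.17); a sketch (Python; the χ²[40] fractiles are the values used in the text):

```python
import math

n, t_bar = 20, 3543.4                      # sample size and mean, Example 6.1
chi2_025_40, chi2_975_40 = 23.999, 58.842  # fractiles quoted in the text

beta_lower = 2 * n * t_bar / chi2_975_40   # about 2408.8 [hr]
beta_upper = 2 * n * t_bar / chi2_025_40   # about 5905.9 [hr]

t = 5000.0
r_lower = math.exp(-t / beta_lower)        # about .125
r_upper = math.exp(-t / beta_upper)        # about .429
print(round(beta_lower, 1), round(beta_upper, 1),
      round(r_lower, 3), round(r_upper, 3))
```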

Prediction intervals at level γ are intervals determined by the sample data such that, with probability γ, a certain statistic based on a future sample is predicted to fall within the interval. We illustrate prediction intervals in the following example.
EXAMPLE 6.6 (Normal Distributions)
A random sample of size n is available from an N(μ, σ) distribution. We wish to determine an interval such that, with probability γ, the average of a future independent random sample, of size m, will belong to this interval. If we know μ and σ, then the interval $\mu \pm Z_{(1-\gamma)/2}\,\sigma/\sqrt{m}$ will contain, with probability γ, the mean $\bar{Y}_m$ of a future sample, since $\bar{Y}_m \sim N(\mu, \sigma/\sqrt{m})$.

When μ and σ are unknown we replace them by $\bar{X}_n$ and S, the mean and standard deviation of the sample. The interval to consider has limits of the form $\bar{X}_n \pm C_{\gamma,n,m}\, S/\sqrt{m}$, where the coefficient $C_{\gamma,n,m}$ must be determined by the requirement

(6.1.18) $$\Pr\left\{\bar{X}_n - C_{\gamma,n,m}\frac{S}{\sqrt{m}} \le \bar{Y}_m \le \bar{X}_n + C_{\gamma,n,m}\frac{S}{\sqrt{m}}\right\} = \gamma$$

for all μ and σ. Since $\bar{X}_n$ and $\bar{Y}_m$ are independent, we have

(6.1.19) $$\bar{Y}_m - \bar{X}_n \sim N\left(0,\ \sigma\left(\frac{1}{m} + \frac{1}{n}\right)^{1/2}\right).$$

Furthermore,

(6.1.20) $$\frac{\bar{Y}_m - \bar{X}_n}{S\left(\frac{1}{m} + \frac{1}{n}\right)^{1/2}} \sim t[n-1],$$

where t[ν] denotes a random variable having a t-distribution with ν degrees of freedom. Since (6.1.18) can be expressed in the form

$$\Pr\left\{\left|\frac{\bar{Y}_m - \bar{X}_n}{S\left(\frac{1}{m} + \frac{1}{n}\right)^{1/2}}\right| \le C_{\gamma,n,m}\left(1 + \frac{m}{n}\right)^{-1/2}\right\} = \gamma,$$

we obtain that $C_{\gamma,n,m} = t_{\epsilon_2}[n-1]\left(1 + \frac{m}{n}\right)^{1/2}$, where ε₂ = (1 + γ)/2. The interval determined with these limits is a prediction interval for $\bar{Y}_m$, at level γ. Note that the probability γ is based on the randomness of both $\bar{X}_n$ and $\bar{Y}_m$.
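For the n = 10 sample of Section 6.1.4.1, a prediction interval for the mean of a future sample can be sketched as follows (Python; the future-sample size m = 5 is an illustrative choice, and t.975[9] = 2.262 is again taken from the text):

```python
import statistics

sample = [11.65, 15.88, 9.58, 11.75, 11.09,
          10.79, 6.38, 8.52, 11.20, 9.98]
n, m = len(sample), 5                  # m future observations (hypothetical)
xbar = statistics.fmean(sample)
s = statistics.stdev(sample)
t_975_9 = 2.262                        # t fractile, n - 1 = 9 df (from the text)

c = t_975_9 * (1 + m / n) ** 0.5       # C_{gamma,n,m}
half_width = c * s / m ** 0.5          # equals t * S * sqrt(1/m + 1/n)
lower, upper = xbar - half_width, xbar + half_width
print(round(lower, 2), round(upper, 2))
```

The prediction interval is wider than the confidence interval for μ computed earlier, as it must absorb the variability of the future sample mean as well.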

6.2 Maximum Likelihood Estimation
The method of maximum likelihood generally yields estimators having de-
sirable properties, especially in large samples. It is a convenient method to
apply in cases where one wishes to estimate functions of several parame-
ters. First we present the general method and later we list the maximum
likelihood estimators (MLEs) for particular distributions of interest.

6.2.1 Single-Parameter Distributions

6.2.1.1 Derivation

Consider a distribution depending on a single parameter, θ, and having a PDF f(x; θ). We treat here both continuous and discrete distributions.

Thus, the PDF is either the probability density function or the probability distribution function. A few families of single-parameter distributions are the binomial, Poisson and exponential.
Let x₁, x₂, …, xₙ be a random sample from some distribution, belonging to a specified family. The likelihood function L(θ; x₁, …, xₙ) is a function of the parameter θ, given the sample values x₁, …, xₙ, defined as

(6.2.1) $$L(\theta; \mathbf{x}) = \prod_{i=1}^{n} f(x_i; \theta),$$

where θ ranges over a specified domain, Θ, called the parameter space; x = (x₁, …, xₙ) is the vector of n observations.
EXAMPLE 6.7
Let X₁, …, Xₙ be a random sample from a Poisson distribution. The PDF is

$$f(x; \theta) = \frac{e^{-\theta}\theta^x}{x!}, \qquad x = 0, 1, \ldots.$$

The likelihood function is

$$L(\theta; \mathbf{x}) = C(\mathbf{x})\exp\left(-n\theta + \sum_{i=1}^{n} x_i \ln\theta\right),$$

for 0 < θ < ∞, where $C(\mathbf{x}) = \prod_{i=1}^{n}(x_i!)^{-1}$.
The maximum likelihood estimator (MLE) of θ is defined as a value of θ in the parameter space Θ, for which L(θ; x) is maximized.
If the function L(θ; x) has its maximum in the interior of Θ and is differentiable there with respect to θ, for any x, then the MLE $\hat{\theta}$ is a value of θ satisfying

$$\frac{\partial}{\partial\theta}L(\theta; \mathbf{x}) = 0 \quad \text{and} \quad \frac{\partial^2}{\partial\theta^2}L(\theta; \mathbf{x}) < 0.$$

If this method of differentiation is inapplicable, we have to determine the point $\hat{\theta}$ in Θ at which L(θ; x) attains its maximal value by other methods.
EXAMPLE 6.8 (Binomial Distributions)
Let X₁, …, Xₙ be a random sample from a binomial distribution B(N, θ), N known. The likelihood function can be written as

(6.2.2) $$L(\theta; \mathbf{x}) \propto \theta^{\sum_{i=1}^{n} x_i}(1-\theta)^{Nn - \sum_{i=1}^{n} x_i}.$$

The symbol ∝ means "is proportional to." The proportionality factor depends only on x.

Let $S_n = \sum_{i=1}^{n} X_i$. Notice that, since the logarithmic function is strictly increasing, we can obtain $\hat{\theta}$ by finding the maximum of

$$l(\theta; \mathbf{x}) = \ln L(\theta; \mathbf{x}).$$

Thus, in the binomial case,

(6.2.3) $$l(\theta; \mathbf{x}) = C_n(\mathbf{x}) + S_n \ln\left(\frac{\theta}{1-\theta}\right) + Nn\ln(1-\theta),$$

0 < θ < 1, where Cₙ(x) does not depend on θ. Taking the partial derivative with respect to θ we obtain

(6.2.4) $$\frac{\partial}{\partial\theta}l(\theta; \mathbf{x}) = \frac{S_n}{\theta(1-\theta)} - \frac{Nn}{1-\theta}.$$

Equating (6.2.4) to zero and solving for θ yields the MLE

(6.2.5) $$\hat{\theta}_n = \frac{S_n}{Nn}.$$

It is easily verifiable that $\frac{\partial^2}{\partial\theta^2}l(\theta; \mathbf{x})$ at $\theta = \hat{\theta}$ is negative.
This MLE is unbiased, since $E\{\hat{\theta}\} = \theta$ for all θ in (0, 1); its standard deviation is

(6.2.6) $$SD\{\hat{\theta}_n\} = \left(\frac{\theta(1-\theta)}{Nn}\right)^{1/2}.$$

The MLE $\hat{\theta}_n$ is the mean of X₁, …, Xₙ divided by N. Hence, by the Central Limit Theorem, its distribution approaches a normal distribution as n → ∞.

EXAMPLE 6.9 (Shifted Exponential Distributions)
We consider the shifted exponential distribution with scale parameter β = 1. The location parameter θ is unknown. The parameter space Θ is the whole real line, i.e., −∞ < θ < ∞. The PDF of X is

(6.2.7) $$f(x; \theta) = I(\theta \le x)\exp(-(x - \theta)),$$

where I(θ ≤ x) is equal to 1 if θ ≤ x, and equal to 0 otherwise. Let x₁, …, xₙ be a random sample and $x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)}$ the ordered sample. The likelihood function of θ can be written as

(6.2.8) $$L(\theta; \mathbf{x}) = I(\theta \le x_{(1)})\exp\left(n\theta - \sum_{i=1}^{n} x_i\right),$$

−∞ < θ < ∞. The function L(θ; x) increases monotonically for all θ ≤ x₍₁₎, and is equal to zero for all θ > x₍₁₎. Hence, $\hat{\theta}_n = x_{(1)}$. Notice that in the present case we obtained the MLE just by considering the shape of the likelihood function. One cannot differentiate L(θ; x) at the value θ = x₍₁₎.
The MLE $\hat{\theta}_n$ is distributed like the minimum of a random sample from a shifted exponential distribution, i.e.,

(6.2.9) $$\hat{\theta}_n \sim \theta + E\left(\frac{1}{n}\right).$$

Hence, $E\{\hat{\theta}_n\} = \theta + 1/n$ and

(6.2.10) $$SD\{\hat{\theta}_n\} = \frac{1}{n}.$$

The MLE is biased, but the bias goes to zero as n increases. The asymptotic distribution of $\hat{\theta}_n$, as n grows, is not normal but exponential: $n(\hat{\theta}_n - \theta) \sim E(1)$ for all n = 1, 2, ….
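The claim $n(\hat{\theta}_n - \theta) \sim E(1)$ is easy to check by simulation. A sketch (standard-library Python; θ = 2.5, the sample size and the replication count are arbitrary choices):

```python
import random
import statistics

random.seed(3)
theta, n, repetitions = 2.5, 40, 20_000

# theta_hat is the sample minimum x_(1); each x_i = theta + E(1) draw.
scaled_errors = []
for _ in range(repetitions):
    theta_hat = min(theta + random.expovariate(1.0) for _ in range(n))
    scaled_errors.append(n * (theta_hat - theta))

# If n(theta_hat - theta) ~ E(1), its mean and SD are both 1.
print(round(statistics.fmean(scaled_errors), 3),
      round(statistics.stdev(scaled_errors), 3))
```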

6.2.1.2 The Invariance Property

The invariance property of an MLE can be stated as follows:
If ω = g(θ) is a known function of θ, and $\hat{\theta}_n$ is the MLE of θ, then $\hat{\omega}_n = g(\hat{\theta}_n)$ is the MLE of ω.
The function g(θ) does not have to be linear; the invariance property holds for all functions.
EXAMPLE 6.10
The lifetime of a component is exponential E(β). A random sample of n such components are tested. The data available are the number of failures, Jₙ, among the n components, during the first t′ hours. Jₙ has the binomial distribution B(n, θ), where

$$\theta = 1 - \exp(-t'/\beta).$$

The MLE of θ is $\hat{\theta}_n = J_n/n$. This is indeed a special case of (6.2.5). We are interested in the MLE of β. β can be expressed as a function of θ, namely

(6.2.11) $$\beta = -t'/\ln(1 - \theta).$$

Thus, by the invariance property, the MLE of β is

(6.2.12) $$\hat{\beta}_n = -t'/\ln(1 - \hat{\theta}_n).$$

The MLE of the reliability function R(t) = exp(−t/β) is

(6.2.13) $$\hat{R}_n(t) = \exp(-t/\hat{\beta}_n) = (1 - \hat{\theta}_n)^{t/t'}.$$

Notice that the proportion of components that survived to time t′ is the MLE of R(t′). This is generally the case, for any life distribution, if the data available are just the number of failures up to time t′.
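A small numerical sketch of (6.2.12)–(6.2.13) (Python; the values n = 50, t′ = 1000 and Jₙ = 20 are hypothetical test data, not from the text):

```python
import math

n, t_prime, J_n = 50, 1000.0, 20     # hypothetical life-test outcome
theta_hat = J_n / n                  # MLE of theta = 1 - exp(-t'/beta)
beta_hat = -t_prime / math.log(1.0 - theta_hat)   # MLE of beta, by invariance

def r_hat(t):
    """MLE of the reliability function R(t) = exp(-t/beta)."""
    return math.exp(-t / beta_hat)

# At t = t' the estimate equals the observed survival proportion 1 - J_n/n.
print(round(beta_hat, 1), round(r_hat(t_prime), 3))
```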

6.2.1.3 The Variance of an MLE

As we see in (6.2.12), the MLE may well be a nonlinear function of the sample values. In such cases the determination of the variance (or SD) of the MLE may be very tedious. The method described below often provides good approximations to the variance of the MLE.
Let L(θ; x) be the likelihood function of θ, given x, and l(θ; x) its logarithm. Define the likelihood-score function

(6.2.14) $$S(\theta; \mathbf{x}) = \frac{\partial}{\partial\theta}l(\theta; \mathbf{x}).$$

We assume that S(θ; x) exists for all θ and all x. Since X = (X₁, …, Xₙ) changes at random from sample to sample, S(θ; X) is a random variable. Assume that the variance of S(θ; X) is finite, for each θ. The function

(6.2.15) $$I_n(\theta) = V_\theta\{S(\theta; \mathbf{X})\}$$

is called the Fisher information function of θ. The theory of statistics establishes that, under general "smoothness" conditions on L(θ; x), the variance of the MLE, $\hat{\theta}_n$, is asymptotically (large samples) equal to $I_n^{-1}(\theta)$, i.e.,

(6.2.16) $$AV_\theta\{\hat{\theta}_n\} = I_n^{-1}(\theta).$$

Furthermore, if ω = g(θ) is a differentiable function of θ and $\hat{\omega}_n = g(\hat{\theta}_n)$ is the MLE of ω, then

(6.2.17) $$AV_\theta\{\hat{\omega}_n\} = \left(\frac{\partial}{\partial\theta}g(\theta)\right)^2 I_n^{-1}(\theta).$$

In a random sample, the n random variables X₁, …, Xₙ are independent and identically distributed. Therefore, from definition (6.2.1),

(6.2.18) $$l(\theta; \mathbf{x}) = \sum_{i=1}^{n} \ln f(x_i; \theta).$$

Hence,

(6.2.19) $$I_n(\theta) = \sum_{i=1}^{n} V_\theta\left\{\frac{\partial}{\partial\theta}\ln f(X_i; \theta)\right\},$$

or, equivalently,

(6.2.20) $$I_n(\theta) = nI(\theta),$$

where $I(\theta) \equiv I_1(\theta)$ is the Fisher information function based on a single observation.
We illustrate these results in the following example.
EXAMPLE 6.11
Consider Example 6.10. The variance of $\hat{\theta}_n$ is immediately obtained from the fact that $J_n \sim B(n, \theta)$. Thus, $V_\theta\{\hat{\theta}_n\} = \theta(1-\theta)/n$.
As we saw in Example 6.8, the log-likelihood function of θ, given Jₙ, is

(6.2.21) $$l(\theta; J_n) = C_n + J_n \ln\left(\frac{\theta}{1-\theta}\right) + n\ln(1-\theta).$$

Thus, the likelihood-score function is

(6.2.22) $$S(\theta; J_n) = \frac{J_n}{\theta(1-\theta)} - \frac{n}{1-\theta}.$$

Hence,

(6.2.23) $$I_n(\theta) = \frac{V_\theta\{J_n\}}{\theta^2(1-\theta)^2} = \frac{n}{\theta(1-\theta)}.$$

From (6.2.16) we see that $AV_\theta\{\hat{\theta}_n\}$ coincides with $V_\theta\{\hat{\theta}_n\}$ for all n.
Considering the MLE $\hat{\beta}_n$ of Example 6.10, an approximation to $V_\theta\{\hat{\beta}_n\}$ is given according to (6.2.17) by

(6.2.24) $$AV_\theta\{\hat{\beta}_n\} = \frac{\theta(1-\theta)}{n}\left[\frac{\partial}{\partial\theta}\left(\frac{-t'}{\ln(1-\theta)}\right)\right]^2 = \frac{\theta\,(t')^2}{n(1-\theta)(\ln(1-\theta))^4}.$$

The asymptotic variance can be expressed also in terms of β as

(6.2.25) $$AV_\beta\{\hat{\beta}_n\} = \frac{\beta^2(\exp(t'/\beta) - 1)}{n(t'/\beta)^2}.$$

It is interesting to assess how much precision is lost due to the fact that the data are given in terms of the number of failures, Jₙ, to time t′ and not the actual failure times. If the actual failure times t₁, …, tₙ are available, the MLE of β is the sample mean $\bar{t}_n$. Its variance is $V_\beta\{\bar{t}_n\} = \beta^2/n$. In Figure 6.5 we plot the ratio of β²/n to (6.2.25). This ratio is called the relative efficiency of the MLE based on Jₙ compared to that based on $\bar{t}_n$. The graph shows that the MLE of β based on the number of failures can
[Figure 6.5. Relative Efficiency of an MLE Based on Jₙ Compared to $\bar{t}_n$; horizontal axis: trial time [MTTF], vertical axis: relative efficiency.]

be considerably worse than the one based on the average failure time. The relative efficiency is maximized near t′/β = 1.5. Thus, if the life testing span is very short or very long, relative to β, the amount of information on β provided by the number of failures, Jₙ, may be very small.
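The relative-efficiency curve of Figure 6.5 is the ratio of β²/n to (6.2.25), which simplifies to x²/(eˣ − 1) with x = t′/β. A sketch (Python; the grid resolution is an arbitrary choice) evaluates the curve and locates its maximum:

```python
import math

def relative_efficiency(x):
    """Ratio of Var(t_bar) to AV(beta_hat based on J_n), with x = t'/beta."""
    return x * x / (math.exp(x) - 1.0)

grid = [i / 100 for i in range(1, 501)]   # x = 0.01, 0.02, ..., 5.00
best_x = max(grid, key=relative_efficiency)
print(round(best_x, 2), round(relative_efficiency(best_x), 3))
```

The maximum of about .65 occurs near x ≈ 1.6, in line with the text's observation that the curve peaks near t′/β = 1.5; the efficiency decays toward zero for very short or very long trial times.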

It is often the case that the MLE, $\hat{\theta}_n$, converges in a probabilistic sense to the true value θ, and that the distribution of $\sqrt{n}(\hat{\theta}_n - \theta)$ converges to the normal distribution $N(0, I^{-1/2}(\theta))$ as n → ∞. A more comprehensive discussion of this topic requires substantial knowledge of probability theory.

6.2.2 Multiparameter Distributions

In the present section we generalize the results of the previous section to the case where the PDF depends on several parameters θ₁, …, θₖ. The parameter space Θ is now the set of all k-dimensional vectors θ = (θ₁, …, θₖ) which specify the distributions in the family under consideration. The likelihood function is now a function of the k variables θ₁, …, θₖ and of x, i.e.,

(6.2.26) $$L(\theta_1, \ldots, \theta_k; \mathbf{x}) = \prod_{i=1}^{n} f(x_i; \theta_1, \ldots, \theta_k).$$

The likelihood-score vector is the gradient of the log-likelihood, i.e.,

(6.2.27) $$S(\boldsymbol{\theta}; \mathbf{x}) = (S_1(\boldsymbol{\theta}; \mathbf{x}), \ldots, S_k(\boldsymbol{\theta}; \mathbf{x})), \qquad S_i(\boldsymbol{\theta}; \mathbf{x}) = \frac{\partial}{\partial\theta_i}l(\boldsymbol{\theta}; \mathbf{x}),$$

i = 1, …, k. The MLE of the vector θ is the value of θ in Θ maximizing L(θ; x). In many cases this is obtained by solving the k equations

(6.2.28) $$S_i(\boldsymbol{\theta}; \mathbf{x}) = 0, \qquad i = 1, \ldots, k.$$

EXAMPLE 6.12
Let x₁, …, xₙ be a random sample from a normal distribution N(μ, σ). The likelihood function of (μ, σ) is

(6.2.29) $$L(\mu, \sigma; \mathbf{x}) \propto \frac{1}{\sigma^n}\exp\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\right),$$

−∞ < μ < ∞, 0 < σ < ∞. It follows immediately that the likelihood function is maximized, for each value of σ, by

(6.2.30) $$\hat{\mu}_n = \bar{x}_n,$$

where $\bar{x}_n$ is the sample mean.
Let $Q_n = \sum_{i=1}^{n}(x_i - \bar{x}_n)^2$. Substituting $\hat{\mu}_n$ in (6.2.29), we obtain

(6.2.31) $$L(\hat{\mu}_n, \sigma; \mathbf{x}) \propto \frac{1}{\sigma^n}\exp\left(-\frac{Q_n}{2\sigma^2}\right).$$

Hence,

(6.2.32) $$\frac{\partial}{\partial\sigma}\ln L(\hat{\mu}_n, \sigma; \mathbf{x}) = -\frac{n}{\sigma} + \frac{Q_n}{\sigma^3}.$$

Equating this partial derivative to zero and solving for σ, we obtain the MLE

(6.2.33) $$\hat{\sigma}_n = \left(\frac{Q_n}{n}\right)^{1/2}.$$

The invariance principle is extended to the multiparameter case in the following manner. If $\hat{\theta}_1, \ldots, \hat{\theta}_k$ are the MLEs of θ₁, …, θₖ, and if

$$\omega_j = g_j(\theta_1, \ldots, \theta_k), \qquad j = 1, \ldots, r,$$

are known functions, where 1 ≤ r ≤ k, then the MLEs of ω₁, …, ωᵣ are

$$\hat{\omega}_j = g_j(\hat{\theta}_1, \ldots, \hat{\theta}_k), \qquad j = 1, \ldots, r.$$
EXAMPLE 6.13
Let x₁, …, xₙ be a random sample from a lognormal distribution, LN(μ, σ). Let yᵢ = ln xᵢ (i = 1, …, n). The MLEs of μ and σ are

$$\hat{\mu}_n = \bar{y}_n = \frac{1}{n}\sum_{i=1}^{n} y_i$$

and

$$\hat{\sigma}_n = \left(\frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y}_n)^2\right)^{1/2}.$$

Consider the mean, ξ, and standard deviation, τ, of the LN(μ, σ) distribution given by formulae (2.3.51) and (2.3.52). According to the invariance principle the MLEs of ξ and τ are

(6.2.34) $$\hat{\xi}_n = \exp(\hat{\mu}_n + \hat{\sigma}_n^2/2)$$

and

(6.2.35) $$\hat{\tau}_n = \exp(\hat{\mu}_n + \hat{\sigma}_n^2/2)\left(e^{\hat{\sigma}_n^2} - 1\right)^{1/2}.$$
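A sketch of these estimators (Python; the lifetime data are hypothetical illustrative values):

```python
import math
import statistics

sample = [120.0, 85.0, 210.0, 95.0, 160.0, 140.0]   # hypothetical lifetimes
y = [math.log(x) for x in sample]

mu_hat = statistics.fmean(y)
sigma2_hat = statistics.pvariance(y)    # MLE uses the divisor n, not n - 1

xi_hat = math.exp(mu_hat + sigma2_hat / 2)                  # (6.2.34)
tau_hat = xi_hat * math.sqrt(math.exp(sigma2_hat) - 1.0)    # (6.2.35)
print(round(xi_hat, 1), round(tau_hat, 1))
```

Note the use of `pvariance` (population variance, divisor n) rather than `variance` (divisor n − 1), matching the MLE form of $\hat{\sigma}_n^2$.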

The notion of the Fisher information function I(θ) is now generalized to that of the Fisher information matrix for the parameters θ₁, …, θₖ. Consider the log-likelihood function l(θ; x) and its gradient vector S(θ; x).

The Fisher information matrix (FIM) is defined as the variance-covariance matrix of S(θ; X), i.e.,

$$I_n(\boldsymbol{\theta}) = (I_{ij}(\boldsymbol{\theta});\ i, j = 1, \ldots, k),$$

where

(6.2.36) $$I_{ij}(\boldsymbol{\theta}) = \mathrm{cov}(S_i(\boldsymbol{\theta}; \mathbf{X}),\, S_j(\boldsymbol{\theta}; \mathbf{X})).$$

The FIM does not exist for all distributions. The PDF must be sufficiently smooth so that l(θ; x) has partial derivatives for all θ and all x, and the covariances of these partial derivatives must exist.
As in the single-parameter case, when x = (x₁, …, xₙ) represents a random sample, we have Iₙ(θ) = nI(θ), where I(θ) is the FIM based on a single observation. The asymptotic variance-covariance matrix of the MLE $\hat{\boldsymbol{\theta}}_n$ is, under certain regularity conditions, the inverse of the FIM for a sample of size n, i.e.,

(6.2.37) $$AC(\hat{\boldsymbol{\theta}}_n) = I_n^{-1}(\boldsymbol{\theta}).$$

EXAMPLE 6.14
Continuing with Example 6.12, we determine the FIM for the normal case. The log-likelihood function for a single observation is

$$l(\mu, \sigma; x) = -\ln\sigma - \frac{(x - \mu)^2}{2\sigma^2} + \text{const.}$$

Hence,

$$S_1 = \frac{\partial l}{\partial\mu} = \frac{x - \mu}{\sigma^2}$$

and

$$S_2 = \frac{\partial l}{\partial\sigma} = -\frac{1}{\sigma} + \frac{(x - \mu)^2}{\sigma^3}.$$

Furthermore, $V\{S_1\} = \frac{1}{\sigma^2}$, $\mathrm{cov}(S_1, S_2) = 0$ and $V\{S_2\} = 2/\sigma^2$. Hence, the FIM is

$$I(\mu, \sigma) = \begin{bmatrix} \dfrac{1}{\sigma^2} & 0 \\ 0 & \dfrac{2}{\sigma^2} \end{bmatrix}.$$

The asymptotic variance-covariance matrix of $(\hat{\mu}, \hat{\sigma})$ is

$$AC(\hat{\mu}, \hat{\sigma}) = \frac{\sigma^2}{n}\begin{bmatrix} 1 & 0 \\ 0 & \dfrac{1}{2} \end{bmatrix}.$$

Finally, if ω = (ω₁, …, ωᵣ), where ωᵢ = gᵢ(θ₁, …, θₖ), i = 1, …, r, then the asymptotic covariance matrix of $\hat{\boldsymbol{\omega}}_n$ is

(6.2.38) $$AC(\hat{\boldsymbol{\omega}}_n) = D(\boldsymbol{\theta})\, I_n^{-1}(\boldsymbol{\theta})\, D'(\boldsymbol{\theta}),$$

where D(θ) is an r by k matrix, with elements

(6.2.39) $$D_{ij}(\boldsymbol{\theta}) = \frac{\partial}{\partial\theta_j}g_i(\theta_1, \ldots, \theta_k).$$
EXAMPLE 6.15
Continuing Example 6.13, we derive the asymptotic covariance matrix of $\hat{\xi}_n$ and $\hat{\tau}_n$, which are the MLEs of the mean and standard deviation of LN(μ, σ). According to (2.3.46) and (2.3.47),

(6.2.40) $$D_{11}(\mu, \sigma) = \frac{\partial}{\partial\mu}\exp(\mu + \sigma^2/2) = \exp(\mu + \sigma^2/2),$$

(6.2.41) $$D_{12}(\mu, \sigma) = \frac{\partial}{\partial\sigma}\exp(\mu + \sigma^2/2) = \sigma\exp(\mu + \sigma^2/2),$$

(6.2.42) $$D_{21}(\mu, \sigma) = \frac{\partial}{\partial\mu}\exp(\mu + \sigma^2/2)\left(e^{\sigma^2} - 1\right)^{1/2} = \exp(\mu + \sigma^2/2)\left(e^{\sigma^2} - 1\right)^{1/2},$$

and

(6.2.43) $$D_{22}(\mu, \sigma) = \frac{\partial}{\partial\sigma}\exp(\mu + \sigma^2/2)\left(e^{\sigma^2} - 1\right)^{1/2} = \sigma\exp(\mu + \sigma^2/2)\left(e^{\sigma^2} - 1\right)^{1/2}\left(1 + e^{\sigma^2}\left(e^{\sigma^2} - 1\right)^{-1}\right).$$

Substituting these elements into the matrix D(μ, σ) and employing formula (6.2.38), we obtain

(6.2.44) $$AC(\hat{\xi}_n, \hat{\tau}_n) = \frac{1}{n}\begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix},$$

where

(6.2.45) $$C_{11} = \sigma^2 e^{2\mu+\sigma^2}\left(1 + \frac{\sigma^2}{2}\right),$$

(6.2.46) $$C_{12} = C_{21} = \sigma^2 e^{2\mu+\sigma^2}\left(e^{\sigma^2} - 1\right)^{1/2}\left(1 + \frac{\sigma^2}{2} + \frac{\sigma^2}{2}e^{\sigma^2}\left(e^{\sigma^2} - 1\right)^{-1}\right),$$

and

(6.2.47) $$C_{22} = \sigma^2 e^{2\mu+\sigma^2}\left(e^{\sigma^2} - 1\right)\left(1 + \frac{\sigma^2}{2}\left(1 + e^{\sigma^2}\left(e^{\sigma^2} - 1\right)^{-1}\right)^2\right).$$


6.3 MLE of System Reliability
In Chapter 3 we studied methods for determining the reliability of a com-
plex system, as a function of the reliabilities of its components. In the
present section we apply the method of maximum likelihood in multi-
parameter cases for determining the MLE and confidence limits for the
system reliability. More specifically, suppose that a given system is com-
prised of k components, and that the reliability function of the system is a
function

    ψ(R_1(t; θ^(1)), ..., R_k(t; θ^(k)))

of the reliability functions R_i(t; θ^(i)), i = 1, ..., k, of the components.
These reliability functions depend on the parameters θ^(i) = (θ_i1, ..., θ_in_i)
of the life distributions of the components. Notice that the numbers of pa-
rameters of the various life distributions are not necessarily the same. If
the life distribution of one component is the shifted exponential, SE(β, t₀),
it depends on two parameters, and if the life distribution of another compo-
nent is the truncated normal, NT(μ, σ, t₀), it depends on three parameters.
We further suppose that random samples of failure times are available for
each component and that these samples are independent. We determine
the MLE of the reliability functions R_i(t; θ^(i)), i = 1, ..., k, and substitute
these estimators in the function ψ(R_1(t; θ^(1)), ..., R_k(t; θ^(k))) to obtain the
MLE of the system reliability function. The asymptotic SD of the system
reliability is determined according to (6.2.38).
EXAMPLE 6.16
Consider a system having components C₁ and C₂ connected in parallel.
Suppose that the lifetime of C₁ has a Weibull distribution W(ν, β₁) and the
lifetime of C₂ has an Erlang distribution G(3, β₂). A random sample of n₁
failure times is available for C₁, and a random sample of n₂ failure times is
available for C₂. The reliability function of C₁ is

(6.3.1)    R₁(t; ν, β₁) = exp(-(t/β₁)^ν),

and that of C₂ is

(6.3.2)    R₂(t; β₂) = e^{-t/β₂}(1 + t/β₂ + t²/(2β₂²)).

The reliability function of the system is

(6.3.3)    R_sys(t; ν, β₁, β₂) = 1 - (1 - R₁(t; ν, β₁))(1 - R₂(t; β₂)).

Let ν̂ and β̂₁ be the MLE of ν and β₁ determined from the sample data
on C₁. Let β̂₂ be the MLE of β₂ determined on the basis of the data on
C₂. Formulae for ν̂, β̂₁ and β̂₂ are given in Chapter 7. The MLE of the
system reliability function is

(6.3.4)    R̂_sys(t) = 1 - (1 - R₁(t; ν̂, β̂₁))(1 - R₂(t; β̂₂)).

We now determine the asymptotic SD of R̂_sys(t; ν̂, β̂₁, β̂₂). Since the ran-
dom sample for C₁ is independent of the random sample for C₂, the MLE
(ν̂, β̂₁) is independent of the MLE β̂₂. Thus, according to (7.5.4)-(7.5.6)
and (7.3.3), the asymptotic variance-covariance matrix of (ν̂, β̂₁, β̂₂) is

                              [ .608ν²/n₁       .254β₁/n₁            0         ]
(6.3.5)    AC(ν̂, β̂₁, β̂₂) = [ .254β₁/n₁     1.109β₁²/(n₁ν²)        0         ]
                              [     0                0           β₂²/(3n₂)    ].

In order to obtain AV(R̂_sys), we have to determine the vector of partial
derivatives of R_sys(t; ν, β₁, β₂), and apply formula (6.2.38), in which
AC(ν̂, β̂₁, β̂₂) is substituted for (1/n) I^{-1}(θ).
From (6.3.3) we obtain

(6.3.6)    (∂/∂ν) R_sys(t; ν, β₁, β₂) = -(t/β₁)^ν ln(t/β₁) exp(-(t/β₁)^ν)
                · (1 - e^{-t/β₂}(1 + t/β₂ + t²/(2β₂²))),

(6.3.7)    (∂/∂β₁) R_sys(t; ν, β₁, β₂) = (ν/β₁)(t/β₁)^ν exp(-(t/β₁)^ν)
                · (1 - e^{-t/β₂}(1 + t/β₂ + t²/(2β₂²))),

and

(6.3.8)    (∂/∂β₂) R_sys(t; ν, β₁, β₂) = (t³/(2β₂⁴)) e^{-t/β₂}
                · (1 - exp(-(t/β₁)^ν)).

Finally, the asymptotic standard deviation of R̂_sys(t; ν̂, β̂₁, β̂₂) is

(6.3.9)    ASD{R̂_sys} = [d' AC(ν̂, β̂₁, β̂₂) d]^{1/2},

where d is the vector of the partial derivatives (6.3.6)-(6.3.8).

In the following table we provide some numerical values of
ASD{R̂_sys(t; ν̂, β̂₁, β̂₂)}, for the case of t = 500 [hr], n₁ = 30, n₂ = 20 and
some selected values of ν, β₁ and β₂.

 ν   β₁ [hr]   β₂ [hr]    R_sys    ASD(R_sys)
 2     300       165      .4527      .0839
 2     300       200      .5722      .0793
 2     375       165      .5151      .0790
 2     375       200      .6209      .0733
 3     300       165      .4221      .0861
 3     300       200      .5483      .0821
 3     375       165      .4710      .0825
 3     375       200      .5864      .0776

Asymptotic confidence intervals for R_sys can be obtained from the above
result and the formula

(6.3.10)    R̂_sys(t) ∓ z_{1-α/2} · ÂSD{R̂_sys},

where the estimated ÂSD{R̂_sys} is obtained by substituting the MLE θ̂^(i) for
θ^(i) in the formula for ASD{R̂_sys}.
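The table entries above can be reproduced numerically. The sketch below (Python with NumPy) assumes the parallel structure and covariance matrix of Example 6.16; a central-difference numerical gradient stands in for the closed-form partial derivatives, which is my own shortcut rather than the book's method:

```python
import numpy as np

def r1(t, nu, b1):                       # Weibull W(nu, b1) reliability
    return np.exp(-(t / b1) ** nu)

def r2(t, b2):                           # Erlang G(3, b2) reliability
    x = t / b2
    return np.exp(-x) * (1.0 + x + x * x / 2.0)

def rsys(t, nu, b1, b2):                 # parallel structure, per (6.3.3)
    return 1.0 - (1.0 - r1(t, nu, b1)) * (1.0 - r2(t, b2))

def asd_rsys(t, nu, b1, b2, n1, n2, h=1e-5):
    """ASD of the MLE of Rsys via (6.2.38): gradient (numerical) times
    the asymptotic covariance matrix (6.3.5)."""
    theta = np.array([nu, b1, b2], dtype=float)
    grad = np.empty(3)
    for i in range(3):                   # relative central differences
        tp, tm = theta.copy(), theta.copy()
        tp[i] += h * theta[i]
        tm[i] -= h * theta[i]
        grad[i] = (rsys(t, *tp) - rsys(t, *tm)) / (2 * h * theta[i])
    ac = np.array([[0.608 * nu**2 / n1, 0.254 * b1 / n1,              0.0],
                   [0.254 * b1 / n1,    1.109 * b1**2 / (n1 * nu**2), 0.0],
                   [0.0,                0.0,              b2**2 / (3 * n2)]])
    return float(np.sqrt(grad @ ac @ grad))

# First row of the table: t = 500 [hr], n1 = 30, n2 = 20, nu = 2,
# beta1 = 300, beta2 = 165; Rsys and ASD should match .4527 and .0839.
r_val = rsys(500, 2, 300, 165)
asd_val = asd_rsys(500, 2, 300, 165, 30, 20)
```

Evaluating the other (ν, β₁, β₂) combinations reproduces the remaining rows to three decimals.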

6.4 MLE from Censored Samples-Exponential
Life Distributions
In the present section we discuss the effect of censoring on life testing and
on the MLE, when the lifetime distribution is exponential. The reader
is referred to Bain (1978) and Nelson (1982) for estimation methods with
censored data from other distributions.
In Section 2.1 we discussed the various types of censored data. We focus
attention here on a single right censoring of Type I (time-censored data)
and of Type II (failure-censored data).

6.4.1 Type I Censored Data

We consider a life testing experiment or field data collection in which a
sample of n devices starts to operate at time t = 0. The experiment is
censored (terminated) at time t*. The number of failures during the time
period [0, t*), J_n, has the binomial distribution B(n, 1 - exp(-t*/β)). (We
assume here that failed units are not replaced.) In addition, we often have
the information on the actual failure times t_(1) ≤ t_(2) ≤ ... ≤ t_(r), given
that J_n = r.
In Example 6.11 we derived the MLE of β, based on J_n. We discuss here
the analysis when the actual failure times of the failed units are given. The
likelihood function, given {J_n = r} and (t_(1), ..., t_(r)), is

(6.4.1)    L(β; t_(1), ..., t_(r)) = β^{-r} exp{-(1/β)(Σ_{i=1}^r t_(i) + (n - r)t*)}.

Let T_{n,r} = Σ_{i=1}^r t_(i) + (n - r)t*. T_{n,r} is called the total life statistic. The
MLE of β can be obtained by differentiating the log-likelihood function with
respect to β and setting the partial derivative equal to zero. The resulting
MLE is

(6.4.2)    β̂_{n,r} = T_{n,r}/r.

The CDF of β̂_{n,r}, for small samples, was derived by Bartholomew (1963).
Several approximations are also given. If r > 0 and β < t*/ln 2 then
the following random variable, W, has approximately a standard normal
distribution N(0, 1):

(6.4.3)    W = √n (β̂_{n,r} - β) / [β²(1 - e^{-t*/β}) + 2t*(β̂_{n,r} - β)e^{-t*/β}
                   + (β̂_{n,r} - β)² e^{-t*/β}(1 - e^{-t*/β})]^{1/2}.

This approximation can yield approximate confidence limits for β. For this
purpose, let φ = β̂_{n,r} - β. Since W² is distributed approximately like χ²[1],

(6.4.4)    Pr{A(n, γ, β)φ² - 2B(n, γ, β)φ - C(n, γ, β) ≤ 0} ≅ γ,

where

(6.4.5)    A(n, γ, β) = 1 - (χ²_γ[1]/n) e^{-t*/β}(1 - e^{-t*/β}),

(6.4.6)    B(n, γ, β) = (χ²_γ[1]/n) t* e^{-t*/β},

and

(6.4.7)    C(n, γ, β) = (χ²_γ[1]/n) β²(1 - e^{-t*/β}).

We determine the confidence limits, β_{L,γ} and β_{U,γ}, by the following
iterative procedure: Initially, substitute in (6.4.5)-(6.4.7) the value of the
MLE, β̂_{n,r}, for the unknown β, and solve the quadratic equation

(6.4.8)    A(n, γ, β)φ² - 2B(n, γ, β)φ - C(n, γ, β) = 0.

The two roots are

(6.4.9)    φ_i = B(n, γ, β)/A(n, γ, β) ± [B²(n, γ, β) + A(n, γ, β)C(n, γ, β)]^{1/2}/A(n, γ, β),

i = 1, 2, where φ₁ < φ₂. Notice that C(n, γ, β) > 0 for all β, and if
n > χ²_γ[1]/4 then A(n, γ, β) > 0 for all β. For example, if γ = .95 then
χ²_γ[1] = 3.84 and n > χ²_{.95}[1]/4 for all n ≥ 1. In this case the roots exist for
all β and satisfy φ₁ < 0 < φ₂.
Let β_L^(1) = β̂_{n,r} - φ₂ and β_U^(1) = β̂_{n,r} - φ₁. After this step, substitute
β_L^(1) for β in (6.4.5)-(6.4.7) and solve (6.4.8) to obtain φ_{1,L} and φ_{2,L}. Take
β_L^(2) = β̂_{n,r} - φ_{2,L}. Substitution of β_U^(1) for β in (6.4.5)-(6.4.7) and solution
of (6.4.8) yields φ_{1,U} and φ_{2,U}. The second iterative approximation to β_U
is taken to be β_U^(2) = β̂_{n,r} - φ_{1,U}. We continue this iterative procedure until
a desirable convergence is attained. If n > χ²_γ[1]/4, we are guaranteed that
β_L^(j) < β̂_{n,r} < β_U^(j) for all j.
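The iterative procedure is straightforward to program. A sketch in Python follows; the coefficient expressions for A, B and C are my reconstruction from the normal approximation (6.4.3), and they reproduce the iterates of Example 6.17 below:

```python
import math

def type1_conf_limits(n, r, total_life, t_star, chi2_1=3.8415, iters=20):
    """Iterative confidence limits for beta under Type I censoring,
    solving the quadratic (6.4.8) repeatedly.  chi2_1 = chi^2_gamma[1]."""
    beta_hat = total_life / r                     # MLE (6.4.2)

    def roots(beta):
        e = math.exp(-t_star / beta)
        a = 1.0 - (chi2_1 / n) * e * (1.0 - e)    # A(n, gamma, beta)
        b = (chi2_1 / n) * t_star * e             # B(n, gamma, beta)
        c = (chi2_1 / n) * beta**2 * (1.0 - e)    # C(n, gamma, beta)
        d = math.sqrt(b * b + a * c)
        return (b - d) / a, (b + d) / a           # phi_1 < 0 < phi_2

    phi1, phi2 = roots(beta_hat)                  # first step uses the MLE
    bL, bU = beta_hat - phi2, beta_hat - phi1
    for _ in range(iters):                        # iterate to convergence
        bL = beta_hat - roots(bL)[1]
        bU = beta_hat - roots(bU)[0]
    return bL, bU

# Example 6.17 data: n = 20, t* = 4500 [hr], r = 17, T = 73,738 [hr]
bL, bU = type1_conf_limits(20, 17, 73738, 4500)
```

With these inputs the limits settle near 2960 [hr] and 5895 [hr], as in the example.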
EXAMPLE 6.17
Consider the example of life testing with a single right-censored sample
of size n = 20, t* = 4500 [hr]. r = 17 units failed within this time, yielding
a total life of T₂₀,₁₇ = 73,738 [hr]. Assuming that the lifetime has an
exponential distribution, E(β), the MLE of β is β̂₂₀,₁₇ = T₂₀,₁₇/17 = 4338
[hr]. We apply the above iterative procedure for determining confidence
limits for β, with confidence level of γ = .95. The following are the results
of 12 iterations:

    β_L        β_U
  2422.7     5612.4
  3201.9     5846.4
  2857.5     5886.9
  3005.0     5893.8
  2940.9     5895.0
  2968.6     5895.2
  2956.6     5895.3
  2961.8     5895.3
  2959.5     5895.3
  2960.5     5895.3
  2960.1     5895.3
  2960.3     5895.3

We see that after 8 iterations the iterative procedure has converged in the
first 4 significant figures to the values β_L = 2960 [hr] and β_U = 5895 [hr].  •

6.4.2 Type II Censored Data

In Type II censoring, the data are censored at the r-th failure, 1 < r < n.
More specifically, a random sample of n devices is put on test. The failure
times are recorded and the trials terminate at the r-th failure. (n - r) units
still operate at the end of the trials.
Let t_(1) ≤ t_(2) ≤ ... ≤ t_(r) be the recorded failure times. The total
time on test of the n devices is T_{n,r} = Σ_{i=1}^r t_(i) + (n - r)t_(r). As before,
we assume that the lifetime of the devices has an exponential distribution.
The likelihood function is then

(6.4.10)    L(β; t_(1), ..., t_(r)) = (n!/(n - r)!) β^{-r} exp(-T_{n,r}/β).

It follows that the MLE of β is

(6.4.11)    β̂_{n,r} = T_{n,r}/r.

It is interesting that the MLE of β is formally the same for both Type I
and Type II censored data. The sampling distribution of β̂_{n,r} is, however,
considerably simpler in Type II censored data. It can be shown that

(6.4.12)    β̂_{n,r} ~ (β/2r) χ²[2r].

Thus, E{β̂_{n,r}} = β and SD{β̂_{n,r}} = β/√r. Exact confidence limits for β,
at confidence level γ, are given by the formulae

    Lower limit = 2T_{n,r} / χ²_{ε₂}[2r]

and

    Upper limit = 2T_{n,r} / χ²_{ε₁}[2r],

where ε₁ = (1 - γ)/2 and ε₂ = (1 + γ)/2.
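These exact limits are a one-liner given chi-square quantiles. The sketch below is pure-stdlib Python; the Wilson-Hilferty approximation to the chi-square quantile is my substitute for table values, accurate to a few tenths of a percent at these degrees of freedom. The numerical setting (n = 30, r = 10, T = 1500 [hr]) is borrowed from Exercise 6.4.1:

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * (c ** 0.5)) ** 3

def type2_conf_limits(total_life, r, gamma=0.95):
    """Confidence limits for beta from Type II censored data,
    using 2*T_{n,r}/beta ~ chi^2[2r] (see (6.4.12))."""
    eps1, eps2 = (1 - gamma) / 2, (1 + gamma) / 2
    return (2 * total_life / chi2_quantile(eps2, 2 * r),
            2 * total_life / chi2_quantile(eps1, 2 * r))

# n = 30 units, censored at the r = 10th failure, T_{30,10} = 1500 [hr]
beta_hat = 1500 / 10                  # MLE (6.4.11) = 150 [hr]
lo, hi = type2_conf_limits(1500, 10)
```

Note that n itself does not enter the limits; only r and the total life statistic do.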


An important design problem in life testing is that of determining the
value of r, or the censoring fraction.
The length of the experiment is a random variable, namely t_(r). One can
show that for an exponential lifetime

(6.4.13)    E{t_(r)} = β Σ_{i=1}^r 1/(n - i + 1).

Suppose that the cost of life testing is a linear function of both the number
of items on test, n, and the duration of the test, t_(r), i.e.,

(6.4.14)    C_{n,r} = c₁ t_(r) + c₂ n + c₃ r.

For a fixed value of r we can determine the value of n minimizing the
expected cost

    K(n, r) = E{C_{n,r}} = c₁β Σ_{i=1}^r 1/(n - i + 1) + c₂n + c₃r.

Let Δ(n) = K(n, r) - K(n - 1, r). Then,

(6.4.15)    Δ(n) = -c₁βr/(n(n - r)) + c₂.

We increase n as long as Δ(n) < 0. Thus, the value of n, n⁰, minimizing
K(n, r), is approximated by the formula

(6.4.16)    n⁰ ≅ (r/2)[1 + (1 + 4c₁β/(c₂r))^{1/2}].

The problem is that the optimal value of n depends on the unknown param-
eter β. If one has some idea of the value of β from previous experiments,
one can substitute in (6.4.16) a prudently chosen value of β.

EXAMPLE 6.18
Consider the design of a life testing experiment with frequency censoring
and exponential life distribution. The SD of β̂_{n,r} is β/√r. If we require
that SD{β̂_{n,r}} = .2β, then, setting the equation

    .2β = β/√r,

we obtain r = (1/.2)² = 25. Suppose now that we wish to minimize the
expected cost at β = 100 [hr], where c₁ = c₂ = 2 [$/unit]. Then, from
formula (6.4.16), the optimal sample size is

    n⁰ = (25/2)(1 + (1 + (4)(2)(100)/((2)(25)))^{1/2}) = 64.

The expected duration of this experiment, when β = 100 [hr], is

    E{t_(25)} = 100 Σ_{i=1}^{25} 1/(65 - i) = 49.0 [hr].

Notice that if we put only 25 units on test, and terminate at the 25th
failure, the expected length of the experiment when β = 100 [hr] is

    E{t_(25)} = 100 Σ_{i=1}^{25} 1/(26 - i) = 381.6 [hr].

Thus, if n = 64 the experiment is expected to last only 12.8% of the ex-
pected duration of an experiment with n = r = 25.
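The design calculation of Example 6.18 can be sketched directly from (6.4.13) and (6.4.16); the short Python below reproduces the numbers 64, 49.0 [hr] and 381.6 [hr]:

```python
import math

def expected_duration(beta, n, r):
    """E{t_(r)} = beta * sum_{i=1}^r 1/(n - i + 1), per (6.4.13)."""
    return beta * sum(1.0 / (n - i + 1) for i in range(1, r + 1))

def optimal_n(r, beta, c1, c2):
    """Approximate cost-minimizing sample size from (6.4.16)."""
    return round(0.5 * r * (1.0 + math.sqrt(1.0 + 4.0 * c1 * beta / (c2 * r))))

# Example 6.18: r = 25, beta = 100 [hr], c1 = c2 = 2 [$]
n_opt = optimal_n(25, 100, 2, 2)
d_opt = expected_duration(100, n_opt, 25)   # duration with the optimal n
d_min = expected_duration(100, 25, 25)      # duration with n = r = 25
```

Plugging in a range of β values would show how sensitive the optimal n is to the guessed MTTF.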

6.5 The Kaplan-Meier PL Estimator as an
MLE of R(t): Non-Parametric Approach
In the present section we consider a more general, non-parametric definition
of MLE. Consider a general class of life distributions, F, which contains all
possible CDFs on [0, ∞). All the families of continuous as well as discrete
distributions discussed in Chapter 2 belong to F. However, F contains
additional distributions. In such a general model we do not estimate a
parameter, but we try to estimate the distribution F(t) as a function over
[0, ∞).
A CDF, F(t), on [0, ∞) is a right continuous non-decreasing function
such that
(i) F(0-) = 0;
(ii) F(∞) = 1;
(iii) for all t₁ < t₂, F(t₁) ≤ F(t₂).
Let Pr{T = t} = F(t) - F(t - 0). The reliability function, R(t) = 1 - F(t), is
a right continuous non-increasing function, such that R(0) = 1 and R(∞) =
0. Also Pr{T = t} = R(t - 0) - R(t).
Let t₁, ..., t_n be a random sample of failure times. Assume first that
there is no censoring. Let t = (t_(1), t_(2), ..., t_(n)) be the vector of ordered
sample values. The likelihood of the function R(t) given t is defined as

(6.5.1)    L(R(t); t) = Π_{i=1}^n (R(t_(i) - 0) - R(t_(i))).

Now, if R(t) is continuous at t_(i) then R(t_(i) - 0) - R(t_(i)) = 0. Thus, in order
to maximize (6.5.1) we choose a reliability function having discontinuities
at t_(i), i = 1, ..., n. Let p_i = R(t_(i) - 0) - R(t_(i)). Obviously Σ_{i=1}^n p_i ≤ 1.
However, in order to maximize the likelihood function (6.5.1) we choose a
step function R_p(t), such that Σ_{i=1}^n p_i = 1. The likelihood of such functions
is

(6.5.2)    L(R_p(t); t) = Π_{i=1}^n p_i.

We show now that the function R_{p*}(t), which maximizes (6.5.2) under
the constraint Σ_{i=1}^n p_i = 1, is R_{p*}(t) = 1 - F̂_n(t), in which p*_i = 1/n for all
i = 1, ..., n. Indeed, maximizing (6.5.2) under the above constraint is the
same as maximizing l(p; t) = Σ_{i=1}^n log p_i. Thus, let

(6.5.3)    L*(p, λ) = Σ_{i=1}^n log p_i + λ(1 - Σ_{i=1}^n p_i)

be the Lagrangian. Differentiating (6.5.3) partially with respect to p_i (i =
1, ..., n) and λ, we obtain the equations

(6.5.4)    p*_i = 1/λ,  i = 1, ..., n,
           Σ_{i=1}^n p*_i = 1.

The solution is p*_i = 1/n for all i = 1, ..., n. Thus, the MLE of R(t)
is R̂_n(t) = 1 - F̂_n(t). This MLE is equivalent to the Kaplan-Meier PL
estimator. Indeed,

           R̂_n(t) = I{t < t_(1)} + Σ_{i=1}^{n-1} I{t_(i) ≤ t < t_(i+1)}(1 - i/n)

(6.5.5)           = I{t < t_(1)} + Σ_{i=1}^{n-1} I{t_(i) ≤ t < t_(i+1)}
                     · Π_{j=1}^{i} (1 - 1/(n - j + 1)).

This MLE is equivalent to (5.3.4) in the case of no censoring, i.e., δ_i = 1 for
all i = 1, ..., n. If some of the failure times are censored then the MLE,
R̂_n(t), is obtained from formula (5.3.4).
If there is no censoring then the number of failures occurring before
a specified time t is a random variable J_n(t) ~ B(n, F(t)). Accordingly,
V{F̂_n(t)} = F(t)(1 - F(t))/n. Thus, for each 0 < t < ∞,

(6.5.6)    V_F{R̂_n(t)} = R(t)(1 - R(t))/n.

Asymptotic confidence limits for R(t₀), at a specified value t = t₀, can be
obtained by applying the angular transformation

(6.5.7)    Y_n(t) = sin⁻¹((F̂_n(t))^{1/2}).

It can be shown that, for large n, Y_n(t) is approximately normal, with mean
sin⁻¹((F(t))^{1/2}) and standard deviation 1/(2√n).
The transformation (6.5.7) is monotone increasing (one-to-one) for all t.
Asymptotic confidence limits for R(t₀), at level of confidence (1 - α),
are given by

(6.5.8)    R_{U,α}(t₀) = 1 - sin²(max(0, Y_n(t₀) - z_{1-α/2}/(2√n))),

           R_{L,α}(t₀) = 1 - sin²(min(π/2, Y_n(t₀) + z_{1-α/2}/(2√n))).

It is more complicated to find a confidence band around R̂_n(t), say B_{n,α}(t),
such that

    Pr{R(t) ∈ B_{n,α}(t), all 0 < t < ∞} ≥ 1 - α.

Nair (1984) provides a large sample approximation for such a band.
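For uncensored data, the non-parametric estimate and the angular-transformation limits (6.5.8) are simple to compute. A sketch in Python follows; the 20 failure times are an invented illustrative sample, not data from the text:

```python
import math

def empirical_reliability(times, t):
    """Rn(t): proportion of sample failure times exceeding t (no censoring)."""
    return sum(1 for x in times if x > t) / len(times)

def arcsine_ci(times, t0, z=1.96):
    """Asymptotic limits (6.5.8) for R(t0) via Y_n = arcsin(sqrt(Fn))."""
    n = len(times)
    fn = 1.0 - empirical_reliability(times, t0)
    y = math.asin(math.sqrt(fn))
    half = z / (2.0 * math.sqrt(n))            # SD of Y_n is 1/(2*sqrt(n))
    r_upper = 1.0 - math.sin(max(0.0, y - half)) ** 2
    r_lower = 1.0 - math.sin(min(math.pi / 2.0, y + half)) ** 2
    return r_lower, r_upper

# Illustrative sample of n = 20 failure times; estimate R(100)
sample = [31, 45, 57, 66, 78, 81, 94, 102, 115, 129,
          133, 148, 156, 171, 189, 204, 230, 262, 301, 355]
r_hat = empirical_reliability(sample, 100)     # 13/20 = 0.65
lo, hi = arcsine_ci(sample, 100)
```

The max/min clamps in (6.5.8) keep the limits inside [0, 1] even when R̂_n(t₀) is near 0 or 1.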

6.6 Exercises
[6.1.1] Let X̄₅ be the mean of a random sample of size n = 5 from E(100).
(i) What is the sampling distribution of X̄₅?
(ii) What is the SD of X̄₅?
(iii) With the aid of Table A-III, determine an interval [L, U] around
β = 100 so that Pr{L ≤ X̄₅ ≤ U} = .95.
[6.1.2] A random sample of size n = 10 is drawn from the shifted exponen-
tial distribution SE(β, t₀), with β = 100 and t₀ = 10. How large is
the bias of the sample minimum t_(1), as an estimator of t₀? What
is the standard deviation of t_(1)?

[6.1.3] The following data are a random sample from a normal distribu-
tion:
1. 71, 1.83, 2.05, 3.21, 1.93, 1.53, 2.17, 2.18, 1.95, 1.62, 1.99, 2.15,
2.08, 3.02, 2.99, 3.00, 2.75, 2.82, 2.91, 2.60
Compute confidence intervals at level γ = .95 for the mean, μ, and
the standard deviation, σ, of the distribution.
[6.1.4] A random sample of size n = 20 from an exponential distribution
E(β) yielded a mean t̄_n = 3505 [hr]. Compute confidence limits
for β at confidence level γ = .90. What are the corresponding
confidence limits for the reliability function at t = 3600 [hr]?
[6.1.5] Compute the proportional closeness probabilities (6.1.7) and (6.1.9)
for a sample of size n = 50.
[6.1.6] A random sample of size n = 20 from a normal distribution yielded
X̄₂₀ = 135.5 and S = 17.5. Determine a .90 level prediction interval
for the mean Ȳ₁₀ of an additional sample of size m = 10.
[6.2.1] The number of defective items found among n = 25 items chosen
at random from a production process is x = 2. Determine the
maximum likelihood estimator (MLE) of the proportion defective,
θ, of that production process, and estimate the standard deviation
(SD) of the MLE.
[6.2.2] The number of failures among n = 50 devices during the first 100
hours of operation is x = 3.
(i) Determine the MLE of the reliability of this device at age 100
[hr], and estimate its SD.
(ii) What is the MLE of β (MTTF) if the lifetime distribution of
these devices is the exponential E(β)?
[6.2.3] Consider a random sample of size n from a normal distribution
N(μ, σ).
(i) What is the MLE of the p-th fractile ξ_p = μ + z_p σ?
(ii) What is the asymptotic standard deviation (ASD) of the MLE
derived in (i)?
(iii) If the sample consists of n = 100 observations, and if the
sample mean and sample standard deviation are X̄ = 30.5 and
S = 10.2, what is the MLE of ξ_.95, and what is the estimate of its
standard deviation?
[6.2.4] Consider a repairable system. Let t₁, ..., t_n and s₁, s₂, ..., s_n be
the TTF and the TTR of n consecutive (independent) renewal
cycles. Let μ̂_T be the MLE of the MTTF and let μ̂_S be the MLE
of the MTTR. Let σ²_T and σ²_S be the variances of μ̂_T and μ̂_S,
respectively.
(i) What is the MLE of the asymptotic availability, A∞?
(ii) What is the AC of Â∞?
[6.2.5] The following are two independent samples of TTF [hr] and TTR
[hr] of a repairable system. It is assumed that TTF ~ W(2, β) and
TTR ~ LN(0, 1).
TTF Sample: 54.6, 35.4, 94.1, 94.3, 101.5, 112.5, 130.0, 26.4, 116.7,
143.3, 64.5, 21.7, 49.5, 51.2, 86.4, 32.6, 210.4, 149.9, 83.0, 54.1;
TTR Sample: .28, .83, 3.43, 1.14, 2.44, .62, .28, .56, 1.95, .66, .90,
1.67, .51, 2.73, 1.25, .47, .75, 1.84, .72, 2.71.
Determine the MLE of A∞ and estimate its ASD.
[6.3.1] Consider a system composed of three modules, having the struc-
ture function ψ_p(ψ_s(M₁, M₂), M₃). Suppose that the modules have
independent exponential lifetime distributions, with scale parame-
ters β₁, β₂, β₃, respectively.
(i) Write the reliability function of the system, R_sys(t; β₁, β₂, β₃).
(ii) What is the MLE of R_sys(t; β₁, β₂, β₃)?
(iii) Show that the ASD of this MLE is

    ASD{R̂_sys} = (t/√n)[(1 - exp(-t/β₃))² (1/β₁² + 1/β₂²)
                  · exp(-2t(1/β₁ + 1/β₂)) + (1/β₃²)
                  · exp(-2t/β₃)(1 - exp(-t(1/β₁ + 1/β₂)))²]^{1/2}.

[6.3.2] Consider the system specified in Example 3.6. Suppose that an
independent sample of size n is given on the TTF of each compo-
nent.
(i) What is the MLE of μ_sys?
(ii) What is the AC of μ̂_sys?
[6.4.1] Consider a life testing experiment in which n = 30 identical units
are put to test. The lifetime distributions of these units are in-
dependent random variables, having a common exponential distri-
bution E(β). The life testing is censored at the r = 10th failure
(Type II censoring).
(i) What is the expected length of the experiment if β = 100 [hr]?
(ii) If the total life T_{n,r} in such an experiment is 1500 [hr], what is
the MLE of β?
(iii) Determine confidence limits for the reliability of the unit at
age t = 50 [hr], when T_{n,r} = 1500 [hr] and the confidence level is
γ = .95.
[6.4.2] Consider the problem of designing a Type II censored life test for
an exponential distribution E(β). Suppose that we require that
SD{β̂_{n,r}} ≤ .1β. Furthermore, the cost components are c₁ = 1
and c₂ = .5 [$].
(i) Determine the sample size, n, which minimizes the expected
cost of this experiment.
(ii) Determine the expected length of the experiment if β = 250
[hr].
[6.4.3] Redo Example 6.17, with n = 30, t* = 4500 [hr], r = 25, T₃₀,₂₅ =
100,050 [hr].
7
Maximum Likelihood
Estimators and Confidence
Intervals for Specific
Life Distributions

In the present chapter we provide a survey of methods for determining the


MLE of the characteristics of the life distributions presented in Sections
2.3 and 2.5. All these estimators relate to complete sample data on time
till failure. The required adjustments for censored data were discussed in
Section 6.4. We also include (whenever available) formulae for the exact
confidence limits for the parameters. Formulae for the asymptotic confi-
dence limits are given for all cases.

7.1 Exponential Distributions


(a) Likelihood Function:

(7.1.1)    L(β; t) = β^{-n} exp(-n t̄_n/β),  0 < β < ∞.

(b) MLE:

(7.1.2)    β̂_n = t̄_n.

(c) Standard Deviation of Estimator:

(7.1.3)    SD{β̂_n} = β/√n.

(d) Asymptotic Standard Deviation:

(7.1.4)    ASD{β̂_n} = β/√n.

(e) Confidence Limits:

(7.1.5)    Lower limit = 2n t̄_n / χ²_{ε₂}[2n],
           Upper limit = 2n t̄_n / χ²_{ε₁}[2n],

where ε₁ = (1 - γ)/2, ε₂ = (1 + γ)/2.

(f) Asymptotic Distribution:

(7.1.6)    β̂_n ≈ N(β, β/√n), as n → ∞.

≈ denotes approximate equality of distributions.

(g) Asymptotic Confidence Limits:

(7.1.7)    Lower limit = β̂_n - z_{ε₂} β̂_n/√n,
           Upper limit = β̂_n + z_{ε₂} β̂_n/√n.

EXAMPLE 7.1
Consider the failure times [hr] of 20 electric generators, given in Exam-
ple 6.1. Assuming that this is a random sample from an exponential life
distribution, we proceed to determine the MLE and confidence intervals.
(1) The MLE of β is β̂₂₀ = t̄₂₀ = 3543.4 [hr].
(2) The SD of β̂₂₀ is estimated by β̂₂₀/√20 = 792.3 [hr].
(3) Exact confidence limits for β, at level γ = .95 (ε₂ = .975), are given
by

    Lower limit = 40 t̄₂₀ / χ²_{.975}[40] = (40)(3543.4)/59.34
                = 2388.6 [hr],

    Upper limit = 40 t̄₂₀ / χ²_{.025}[40] = 5801.8 [hr].

(4) Asymptotic confidence limits:

    Lower limit = β̂₂₀ - z_{.975} β̂₂₀/√20 = 1990.4 [hr],

    Upper limit = 5096.4 [hr].

Notice the difference between the approximate confidence limits and the
exact ones. It can be shown that the probability that such approximate
confidence intervals will cover the true value, when n = 20, is .926, which
is somewhat lower than the nominal confidence level .95.
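The exact limits (7.1.5) and the asymptotic limits (7.1.7) of Example 7.1 can be sketched as follows (pure-stdlib Python; the Wilson-Hilferty formula is my stand-in for the chi-square table, so the exact limits agree with the example only to within about one part in a thousand):

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * (c ** 0.5)) ** 3

def exp_conf_limits(t_bar, n, gamma=0.95):
    """Exact limits (7.1.5) and asymptotic limits (7.1.7) for beta."""
    eps2 = (1 + gamma) / 2
    eps1 = (1 - gamma) / 2
    exact = (2 * n * t_bar / chi2_quantile(eps2, 2 * n),
             2 * n * t_bar / chi2_quantile(eps1, 2 * n))
    z = NormalDist().inv_cdf(eps2)
    asym = (t_bar - z * t_bar / math.sqrt(n),
            t_bar + z * t_bar / math.sqrt(n))
    return exact, asym

# Example 7.1: n = 20, beta_hat = mean failure time = 3543.4 [hr]
(exact_lo, exact_hi), (asym_lo, asym_hi) = exp_conf_limits(3543.4, 20)
```

Comparing the two pairs of limits for several n shows numerically how the coverage deficit of the asymptotic interval shrinks as n grows.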

7.2 Shifted Exponential Distributions
(a) Likelihood Function:

(7.2.1)    L(β, t₀; x) = I{t₀ ≤ x_(1)} β^{-n}
                         · exp(-(n(x_(1) - t₀) + Q_n)/β),

0 < β < ∞; -∞ < t₀ < ∞, where x_(1) < x_(2) < ... < x_(n) are the ordered
sample values and Q_n = Σ_{i=2}^n (x_(i) - x_(1)).
(b) MLE:

(7.2.2)    t̂₀ = x_(1),
           β̂ = Q_n/n.

(c) Standard Deviations:

(7.2.3)    SD{t̂₀} = β/n,
           SD{β̂} = β√(n - 1)/n,
           cov(t̂₀, β̂) = 0.

(d) Asymptotic Standard Deviations:

(7.2.4)    ASD{t̂₀} = β/n,
           ASD{β̂} = β/√n.

(e) Confidence Limits:

(e.1) Limits for t₀:

(7.2.5)    Lower limit = t̂₀ - (1/(n - 1)) β̂ F_γ[2, 2n - 2],
           Upper limit = t̂₀,

where F_p[ν₁, ν₂] is the p-fractile of the F-distribution with ν₁ and ν₂ degrees
of freedom (see Table A-V).
of freedom (see Table A-V).
(e.2) Limits for β:

(7.2.6)    Lower limit = 2nβ̂ / χ²_{ε₂}[2n - 2],
           Upper limit = 2nβ̂ / χ²_{ε₁}[2n - 2],

ε₁ = (1 - γ)/2, ε₂ = (1 + γ)/2.
(f) Asymptotic Distributions:

(7.2.7)    n(t̂₀ - t₀) ~ E(β), all n,
           β̂ ≈ N(β, β/√n), large n.

(g) Asymptotic Confidence Limits:

(g.1) Asymptotic limits for t₀:

(7.2.8)    Lower limit = t̂₀ - (β̂/(2n)) χ²_γ[2],
           Upper limit = t̂₀.

(g.2) Asymptotic limits for β:

(7.2.9)    Lower limit = β̂ - z_{ε₂} β̂/√n,
           Upper limit = β̂ + z_{ε₂} β̂/√n,

where ε₂ = (1 + γ)/2.
EXAMPLE 7.2
The lifetime of a certain device has a shifted exponential distribution.
The following is a random sample of n = 10 failure times:

102, 147, 154, 140, 204, 120, 120, 131, 313, 200.

The ordered sample is

102, 120, 120, 131, 140, 147, 154, 200, 204, 313.

(1) The MLE are

    t̂₀ = 102,
    β̂ = Σ_{i=2}^{10} (x_(i) - x_(1))/10 = 61.1.

(2) Estimates of standard deviations:

    SD{t̂₀} = 6.1,
    SD{β̂} = 18.3.

(3) Asymptotic standard deviations:

    ASD{t̂₀} = 6.1,
    ASD{β̂} = 19.3.

(4) Confidence limits, at level γ = .95.

(4.1) For t₀:

    Lower limit = 102 - (1/9)(61.1) F_{.95}[2, 18] = 78,
    Upper limit = 102.

(4.2) For β:

    Lower limit = (20)(61.1) / χ²_{.975}[18] = 38.8,
    Upper limit = (20)(61.1) / χ²_{.025}[18] = 148.5.

(5) Asymptotic confidence limits:

(5.1) For t₀:

    Lower limit = 102 - (61.1/20) χ²_{.95}[2] = 84,
    Upper limit = 102.

(5.2) For β:

    Lower limit = 23.2,
    Upper limit = 99.0.

The asymptotic approximation is not good when n = 10.
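The point estimates and standard deviations of Example 7.2 follow directly from (7.2.2)-(7.2.3); a minimal Python sketch:

```python
import math

def shifted_exp_mle(sample):
    """MLE (7.2.2) and estimated standard deviations (7.2.3)
    for the shifted exponential SE(beta, t0)."""
    x = sorted(sample)
    n = len(x)
    t0_hat = x[0]                                # sample minimum
    beta_hat = sum(xi - x[0] for xi in x) / n    # Q_n / n
    sd_t0 = beta_hat / n                         # beta/n, beta estimated
    sd_beta = beta_hat * math.sqrt(n - 1) / n
    return t0_hat, beta_hat, sd_t0, sd_beta

# Example 7.2 data
data = [102, 147, 154, 140, 204, 120, 120, 131, 313, 200]
t0_hat, beta_hat, sd_t0, sd_beta = shifted_exp_mle(data)
```

The values come out as t̂₀ = 102, β̂ = 61.1, SD{t̂₀} ≈ 6.1 and SD{β̂} ≈ 18.3, matching the example.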

7.3 Erlang Distributions
(a) Likelihood Function:

(7.3.1)    L(β; t) = ((k - 1)!)^{-n} β^{-nk} (Π_{i=1}^n t_i)^{k-1} exp(-n t̄_n/β).

(b) MLE:

(7.3.2)    β̂_n = Σ_{i=1}^n t_i/(nk) = t̄_n/k.

(c) Standard Deviation:

(7.3.3)    SD{β̂_n} = β/√(nk).

(d) Asymptotic Standard Deviation:

(7.3.4)    ASD{β̂_n} = β/√(nk).

(e) Confidence Limits:

(7.3.5)    Lower limit = 2n t̄_n / χ²_{ε₂}[2nk],
           Upper limit = 2n t̄_n / χ²_{ε₁}[2nk],

ε₁ = (1 - γ)/2, ε₂ = (1 + γ)/2.

(f) Asymptotic Distribution:

(7.3.6)    β̂_n ≈ N(β, β/√(nk)).

(g) Asymptotic Confidence Limits:

(7.3.7)    Lower limit = β̂_n - z_{ε₂} β̂_n/√(nk),
           Upper limit = β̂_n + z_{ε₂} β̂_n/√(nk),

where ε₂ = (1 + γ)/2.


EXAMPLE 7.3
The lifetimes of waterpumps have an Erlang distribution with k = 100
and β unknown. A random sampling of 10 pumps yields the following
lifetimes [days]:

135, 138, 156, 165, 166, 136, 176, 162, 165, 165.

Thus, t̄₁₀ = 156.4 [days].

(1) MLE: β̂₁₀ = t̄₁₀/k = 1.56 [days].
(2) SD{β̂₁₀} = β̂₁₀/√1000 = .049 [days].
(3) Confidence limits for β at γ = .95:

    Lower limit = 20 t̄₁₀ / χ²_{.975}[2000] ≅ 20 t̄₁₀ / (2000 + 1.96√4000)
                = 1.47 [days],

    Upper limit = 20 t̄₁₀ / (2000 - 1.96√4000) = 1.67 [days].

(4) Asymptotic confidence limits:

    Lower limit = β̂₁₀ - 1.96 β̂₁₀/√1000 = 1.47 [days],

    Upper limit = β̂₁₀ + 1.96 β̂₁₀/√1000 = 1.66 [days].
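The Erlang computations of Example 7.3 are one-liners from (7.3.2)-(7.3.7); here is a compact Python sketch of the MLE, its SD, and the asymptotic limits:

```python
import math
from statistics import NormalDist

def erlang_mle(sample, k, gamma=0.95):
    """MLE (7.3.2), SD (7.3.3), and asymptotic limits (7.3.7) for beta."""
    n = len(sample)
    t_bar = sum(sample) / n
    beta_hat = t_bar / k
    sd = beta_hat / math.sqrt(n * k)             # beta/sqrt(nk), estimated
    z = NormalDist().inv_cdf((1 + gamma) / 2)
    return beta_hat, sd, (beta_hat - z * sd, beta_hat + z * sd)

# Example 7.3: k = 100, ten pump lifetimes [days]
pumps = [135, 138, 156, 165, 166, 136, 176, 162, 165, 165]
beta_hat, sd, (lo, hi) = erlang_mle(pumps, 100)
```

With nk = 1000 the normal approximation is excellent, which is why the asymptotic and chi-square limits in the example nearly coincide.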


7.4 Gamma Distributions
(a) Likelihood Function:

(7.4.1)    L(β, ν; t) = Γ^{-n}(ν) β^{-nν} (Π_{i=1}^n t_i)^{ν-1} exp(-n t̄_n/β),

0 < β < ∞, 0 < ν < ∞.

(b) MLE:

(7.4.2)    β̂_n = t̄_n/ν̂_n,
           ν̂_n = root of the equation
           (1/ν) exp(ψ(ν)) = G_n/t̄_n,

where G_n is the geometric mean of t₁, ..., t_n, i.e.,

(7.4.3)    G_n = (Π_{i=1}^n t_i)^{1/n},

and ψ(ν) is the digamma function, i.e.,

(7.4.4)    ψ(ν) = (∂/∂ν) ln Γ(ν).

The function ψ(ν) satisfies the recursive relationship

(7.4.5)    ψ(ν + 1) = ψ(ν) + 1/ν,  ν > 0,

and for 0 < ν < 1 one can use the formula

(7.4.6)    ψ(ν + 1) = -γ + Σ_{n=1}^∞ ν/(n(n + ν)).

For integer-valued arguments we have

(7.4.7)    ψ(1) = -γ,
           ψ(n) = -γ + Σ_{k=1}^{n-1} k^{-1},  n ≥ 2,

and for arguments of the form n + 1/2,

(7.4.8)    ψ(n + 1/2) = -γ - 2 ln 2 + 2(1 + 1/3 + ... + 1/(2n - 1)).

γ = .577216... is the Euler constant. Notice that G_n/t̄_n ≤ 1 for all
0 < t_i < ∞ (i = 1, ..., n).
In Figure 7.1 we present a graph of the function exp(ψ(ν))/ν. One can
determine the MLE ν̂_n from this graph by plotting a horizontal line at the
level G_n/t̄_n and finding the value of ν where this line cuts the graph. The
value of β̂_n is then determined from (7.4.2).
The following formula also provides a good approximation to the solution
ν̂_n. Let Y_n = ln(t̄_n/G_n); then ν̂_n can be determined by

(7.4.9)    ν̂_n ≅ (1/Y_n)[.500088 + .164885 Y_n - .054427 Y_n²],
                                              if 0 < Y_n < .5772;

           ν̂_n ≅ (8.898919 + 9.059950 Y_n + .977537 Y_n²) /
                 (Y_n[17.79728 + 11.968477 Y_n + Y_n²]),
                                              if .5772 ≤ Y_n < 17.
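The two-branch approximation (7.4.9), together with β̂_n = t̄_n/ν̂_n from (7.4.2), can be coded in a few lines. The Python sketch below is run on the pump data of Example 7.3, which is the sample used in Example 7.4 below:

```python
import math

def gamma_mle(sample):
    """MLE of (nu, beta) for the gamma distribution, via the
    approximation (7.4.9) to the likelihood equation."""
    n = len(sample)
    t_bar = sum(sample) / n
    g = math.exp(sum(math.log(t) for t in sample) / n)  # geometric mean G_n
    y = math.log(t_bar / g)                             # Y_n >= 0
    if y < 0.5772:
        nu = (0.500088 + 0.164885 * y - 0.054427 * y * y) / y
    else:
        nu = ((8.898919 + 9.059950 * y + 0.977537 * y * y)
              / (y * (17.79728 + 11.968477 * y + y * y)))
    return nu, t_bar / nu                               # beta_hat from (7.4.2)

# Pump lifetimes from Example 7.3 (t_bar = 156.4, G ~ 155.8)
data = [135, 138, 156, 165, 166, 136, 176, 162, 165, 165]
nu_hat, beta_hat = gamma_mle(data)
```

Here Y_n ≈ .0041, putting the solution on the first branch and giving ν̂ ≈ 121.5 and β̂ ≈ 1.29, in agreement with Example 7.4.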

(c) Standard Deviations:

Formulae for the determination of the variances of ν̂_n and β̂_n and their
covariance, suitable for any size sample, are not available in a simple closed
form. Bain (1978) provides tables of n·V{β̂_n/β}, n·V{ν̂_n/ν} and
n·Cov(β̂_n/β, ν̂_n/ν). We provide these values for selected values of n in
Table 7.1. Thus, for example, if ν = 3, β = 1.5 and n = 10, we obtain from
the table that

    10 · V{β̂₁₀/β} = (10/β²) V{β̂₁₀} = 1.95.


[Figure 7.1. Graph of exp(ψ(ν))/ν versus ν, for the determination of the MLE, ν̂_n.]

Hence,

    V{β̂₁₀} = .439.

Similarly,

    10 · V{ν̂₁₀/ν} = (10/ν²) V{ν̂₁₀} = 7.25.

Hence,

    V{ν̂₁₀} = 6.52.

Finally,

    Cov(β̂₁₀, ν̂₁₀) = -1.17.
(d) Asymptotic Standard Deviations:
By using formulae (6.2.36) and (6.2.37) we obtain that the asymptotic
standard deviations and covariance are

(7.4.10)    ASD{β̂_n} = (β/√n) [ψ'(ν)/(νψ'(ν) - 1)]^{1/2},

Table 7.1. Variances and Covariance of the MLE of
β and ν for the Gamma Life Distribution
Reprinted from L.J. Bain, Statistical Analysis of Reliability
and Life-Testing Models, 1978, pp. 332-333,
by courtesy of Marcel Dekker, Inc.

                                      ν
  n      0.2     0.5     1.0     1.5     2.0     3.0     5.0    10.0       ∞

                                n·Var(β̂/β)
  8    5.900   3.170   2.350   2.100   2.000   1.900   1.830   1.790   1.750
 10    5.970   3.210   2.390   2.150   2.040   1.950   1.880   1.840   1.800
 20    6.090   3.290   2.470   2.240   2.140   2.040   1.980   1.930   1.900
 40    6.140   3.330   2.510   2.280   2.180   2.090   2.030   1.980   1.950
  ∞    6.176   3.363   2.551   2.324   2.225   2.137   2.076   2.036   2.000
                                n·Var(ν̂/ν)
  8    4.610   7.370   9.660  10.750  11.390  12.080  12.680  13.150  13.653
 10    3.140   4.610   5.870   6.490   6.850   7.250   7.590   7.870   8.163
 20    1.790   2.290   2.750   2.990   3.140   3.300   3.450   3.570   3.691
 40    1.430   1.740   2.030   2.190   2.290   2.400   2.500   2.580   2.671
100    1.270   1.500   1.720   1.850   1.920   2.010   2.100   2.160   2.237
  ∞    1.176   1.363   1.551   1.658   1.725   1.804   1.876   1.936   2.000
                             n·Cov(β̂/β, ν̂/ν)
  6   -2.090  -2.730  -3.190  -3.410  -3.520  -3.670  -3.790  -3.890  -4.000
 10   -1.560  -1.940  -2.260  -2.400  -2.510  -2.600  -2.700  -2.770  -2.157
 20   -1.340  -1.600  -1.840  -1.960  -2.050  -2.130  -2.220  -2.280  -2.353
 40   -1.250  -1.470  -1.680  -1.800  -1.880  -1.960  -2.030  -2.090  -2.162
  ∞   -1.176  -1.363  -1.551  -1.658  -1.725  -1.804  -1.876  -1.936  -2.000

(7.4.11)    ASD{ν̂_n} = (1/√n) [ν/(νψ'(ν) - 1)]^{1/2},

and

(7.4.12)    ACOV(β̂_n, ν̂_n) = -β/(n(νψ'(ν) - 1)),

where ψ'(ν) is the derivative of ψ(ν), given by the formulae

(7.4.13)    ψ'(ν) = Σ_{k=0}^∞ (ν + k)^{-2},  ν > 0,
            ψ'(n + z) = ψ'(1 + z) - Σ_{j=1}^{n-1} (j + z)^{-2},  n ≥ 2, 0 < z < 1.

The asymptotic variances and covariance can be obtained from Table 7.1,
by taking n = ∞.

(e) Confidence Limits:

As shown by Bain (1978),

    2nνY_n ≈ χ²[n - 1],

even in moderately small samples. Thus, if ν ≥ 2 and n ≥ 10 we can apply
the following formulae:
(e.1) Limits for ν:

(7.4.14)    Lower limit = χ²_{ε₁}[n - 1] / (2nY_n),
            Upper limit = χ²_{ε₂}[n - 1] / (2nY_n),

where ε₁ = (1 - γ)/2, ε₂ = (1 + γ)/2.
It is expected that for ν < 2, one could use an adjustment as described in
Bain (1978, p. 338).
(e.2) Limits for β:
The method for determining confidence limits for β is more complicated.
However, conservative confidence intervals (having somewhat higher cover-
age probabilities) can be obtained from the formulae

(7.4.15)    Lower limit = 2n t̄_n / χ²_{ε₂}[2n[ν_U] + 1],
            Upper limit = 2n t̄_n / χ²_{ε₁}[2n[ν_L]],

where [ν_L] and [ν_U] are the integer parts of the lower and upper confidence
limits for ν; ε₁ = (1 - γ)/4, ε₂ = (3 + γ)/4.
(f) Asymptotic Distributions:
The asymptotic distributions of β̂_n and ν̂_n are normal, with means β
and ν, and variances given by (7.4.10) and (7.4.11).
(g) Asymptotic Confidence Limits:
(g.1) Asymptotic limits for β:

(7.4.16)    β̂_n ∓ z_{ε₂} (β̂_n/√n) [ψ'(ν̂_n)/(ν̂_n ψ'(ν̂_n) - 1)]^{1/2}.

(g.2) Asymptotic limits for ν:

(7.4.17)    ν̂_n ∓ z_{ε₂} (1/√n) [ν̂_n/(ν̂_n ψ'(ν̂_n) - 1)]^{1/2},

where ε₂ = (1 + γ)/2.

EXAMPLE 7.4
We use the data of Example 7.3, assuming that both ν and β are un-
known. We have n = 10, t̄₁₀ = 156.4, G₁₀ = 155.8, Y₁₀ = .00412. Hence,
from (7.4.9), ν̂₁₀ = 121.6 and, from (7.4.2), β̂₁₀ = 1.29. The confidence
limits for ν, at level γ = .95 are, according to (7.4.14),

    Lower limit = χ²_{.025}[9] / (20 Y₁₀) = 32.8

and

    Upper limit = χ²_{.975}[9] / (20 Y₁₀) = 230.6.

The conservative confidence limits for β are

    Lower limit = 2(10)(156.4) / χ²_{.9875}[(20)(230) + 1],

    χ²_{.9875}[4601] ≅ 4601 + z_{.9875}√9202 = 4815.9.

Hence, the lower confidence limit is β_L = .65. The upper conservative
confidence limit for β is

    β_U = 2(10)(156.4) / χ²_{.0125}[(20)(32)],

    χ²_{.0125}[640] ≅ 640 - z_{.9875}√1280 = 559.9.

Thus, β_U = 5.59. Now,

    ψ'(121.55) = ψ'(1.55) - Σ_{j=1}^{120} (j + .55)^{-2}
               = .89505 - .88679 = .00826.

Hence, from (7.4.16), the asymptotic limits for β are .07 and 2.51. The
asymptotic confidence limits for ν are 6.3 and 236.8. The asymptotic con-
fidence limits are inaccurate for n = 10.

7.5 Weibull Distributions
(a) Likelihood Function:

(7.5.1) L({3,11; t)=


n )V-l (n
;:v (!!t i -t;(td{3t ) ,
exp

o < {3 < 00; 0 < 11 < 00.


7.5 Weibull Distributions 147

(b) MLE:

(7.5.2)    β̂_n = ( (1/n) ∑_{i=1}^n t_i^{ν̂_n} )^{1/ν̂_n}

and

(7.5.3)    ν̂_n = [ ∑_{i=1}^n t_i^{ν̂_n} ln t_i / ∑_{i=1}^n t_i^{ν̂_n} − (1/n) ∑_{i=1}^n ln t_i ]⁻¹.

One can show that equation (7.5.3) has a unique positive solution, ν̂_n.
This solution can be obtained by the iterative formula

ν^{(j+1)} = [ ∑_{i=1}^n t_i^{ν^{(j)}} ln t_i / ∑_{i=1}^n t_i^{ν^{(j)}} − (1/n) ∑_{i=1}^n ln t_i ]⁻¹,

starting with ν^{(0)} = 1.
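The iterative formula above can be programmed in a few lines. The book's Exercise [7.5.1] gives a BASIC version; the following Python sketch (ours, not the book's) mirrors it and is checked against the data of Example 7.5.

```python
import math

def weibull_mle(t, n_iter=200):
    """Fixed-point iteration for the Weibull MLE (7.5.2)-(7.5.3),
    starting from nu = 1, as in the BASIC program of Exercise [7.5.1]."""
    n = len(t)
    mean_log = sum(math.log(x) for x in t) / n
    nu = 1.0
    for _ in range(n_iter):
        s1 = sum(x ** nu for x in t)                  # sum t_i^nu
        s2 = sum((x ** nu) * math.log(x) for x in t)  # sum t_i^nu ln t_i
        nu = 1.0 / (s2 / s1 - mean_log)               # iterate (7.5.3)
    beta = (sum(x ** nu for x in t) / n) ** (1.0 / nu)  # (7.5.2)
    return nu, beta

# Sample of Example 7.5 (n = 20 from W(1.75, 1)):
sample = [.58, .29, .93, 1.11, 1.26, 1.39, 1.89, .28, .44, 1.57,
          .76, .64, 2.12, .25, 1.36, .63, .87, .57, .64, 1.68]
nu_hat, beta_hat = weibull_mle(sample)   # approx. 1.876 and 1.089
```

The iteration oscillates but contracts, as the listing in Example 7.5 shows; a few hundred iterations are more than enough.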


(c) Standard Deviations:
Formulae for the exact SD of β̂_n and ν̂_n are unavailable. If the samples
are large one can use the asymptotic formulae (7.5.4)–(7.5.6). Bain (1978,
p. 220) provides a table of coefficients to adjust the asymptotic variance of
ν̂_n if the samples are smaller than n = 120.
(d) Asymptotic Standard Deviations:
The asymptotic standard deviations and covariance are

(7.5.4)    ASD{β̂_n} = (β / (ν√n)) (1 + (ψ(2))²/ψ'(1))^{1/2} = 1.053 β/(ν√n),

(7.5.5)    ASD{ν̂_n} = (ν/√n) (ψ'(1))^{−1/2} = √6 ν/(π√n) = .780 ν/√n,

and

(7.5.6)    ACOV{β̂_n, ν̂_n} = .254 β/n.

Table 7.2. Fractiles b_p of √n(ν̂_n − ν)/ν
for Weibull Distributions
Reprinted from L.J. Bain, Statistical Analysis of Reliability
and Life-Testing Models, 1978, pp. 222–223,
by courtesy of Marcel Dekker, Inc.

p
n  0.02 0.05 0.10 0.25 0.40 0.50 0.60 0.75 0.85 0.90 0.95 0.98
5 -0.89 -0.71 -0.52 -0.11 0.26 0.53 0.85 1.50 2.24 2.86 3.98 5.63
6 -0.92 -0.74 -0.54 -0.15 0.20 0.46 0.75 1.33 1.99 2.52 3.52 5.06
7 -0.96 -0.77 -0.57 -0.19 0.16 0.41 0.68 1.22 1.82 2.28 3.13 4.34
8 -0.98 -0.79 -0.59 -0.21 0.13 0.37 0.63 1.14 1.70 2.11 2.87 3.90
9 -1.01 -0.81 -0.61 -0.23 0.11 0.34 0.60 1.08 1.61 2.00 2.69 3.60
10 -1.03 -0.83 -0.63 -0.24 0.09 0.32 0.57 1.04 1.55 1.90 2.55 3.38
11 -1.04 -0.85 -0.64 -0.25 0.07 0.30 0.54 1.00 1.49 1.83 2.45 3.22
12 -1.06 -0.86 -0.66 -0.26 0.06 0.28 0.52 0.97 1.45 1.78 2.36 3.10
13 -1.07 -0.87 -0.67 -0.27 0.05 0.27 0.51 0.95 1.41 1.73 2.29 2.99
14 -1.09 -0.88 -0.68 -0.28 0.04 0.26 0.49 0.93 1.38 1.70 2.23 2.91
15 -1.10 -0.89 -0.69 -0.29 0.03 0.25 0.48 0.91 1.35 1.65 2.18 2.84
16 -1.11 -0.90 -0.70 -0.30 0.02 0.24 0.47 0.89 1.33 1.62 2.14 2.77
18 -1.13 -0.92 -0.71 -0.31 0.01 0.22 0.45 0.87 1.29 1.57 2.07 2.67
20 -1.15 -0.94 -0.72 -0.32 0.00 0.21 0.43 0.84 1.26 1.53 2.01 2.59

22 -1.16 -0.95 -0.74 -0.33 -0.01 0.20 0.42 0.83 1.23 1.50 1.96 2.52
24 -1.18 -0.96 -0.75 -0.33 -0.02 0.19 0.41 0.81 1.21 1.48 1.92 2.47
28 -1.21 -0.98 -0.76 -0.35 -0.03 0.18 0.39 0.78 1.16 1.42 1.86 2.38
32 -1.23 -1.00 -0.78 -0.36 -0.04 0.16 0.38 0.76 1.14 1.39 1.81 2.31
36 -1.24 -1.01 -0.79 -0.37 -0.05 0.15 0.37 0.75 1.12 1.36 1.76 2.26
40 -1.26 -1.02 -0.80 -0.38 -0.06 0.15 0.35 0.73 1.10 1.33 1.73 2.22
45 -1.28 -1.03 -0.80 -0.39 -0.07 0.14 0.35 0.72 1.07 1.32 1.69 2.17
50 -1.29 -1.05 -0.81 -0.40 -0.08 0.13 0.34 0.71 1.05 1.29 1.66 2.13
55 -1.31 -1.05 -0.81 -0.40 -0.08 0.12 0.33 0.70 1.04 1.27 1.64 2.10
60 -1.32 -1.06 -0.82 -0.40 -0.09 0.12 0.32 0.69 1.03 1.26 1.61 2.07
70 -1.34 -1.08 -0.83 -0.42 -0.10 0.11 0.31 0.68 1.00 1.22 1.57 2.03
80 -1.36 -1.09 -0.83 -0.43 -0.11 0.10 0.30 0.67 0.98 1.20 1.55 1.99
100 -1.39 -1.12 -0.84 -0.44 -0.12 0.09 0.29 0.65 0.96 1.16 1.50 1.92
120 -1.41 -1.13 -0.84 -0.45 -0.13 0.08 0.27 0.64 0.94 1.14 1.46 1.87
∞ -1.60 -1.28 -1.00 -0.53 -0.20 0.00 0.20 0.53 0.81 1.00 1.28 1.60

(e) Confidence Limits:

(e.1) Limits for ν:

(7.5.7)
Lower limit = ν̂_n / (1 + b_{ε₂}/√n),
Upper limit = ν̂_n / (1 + b_{ε₁}/√n),

Table 7.3. Fractiles u_p of √n ν̂_n ln(β̂_n/β)
for Weibull Distributions
Reprinted from L.J. Bain, Statistical Analysis of Reliability
and Life-Testing Models, 1978, p. 228,
by courtesy of Marcel Dekker, Inc.

p
n  0.02 0.05 0.10 0.25 0.50 0.75 0.90 0.95 0.98
5 -3.647 -2.788 -1.986 -0.993 -0.125 0.780 1.726 2.475 3.537
6 -3.419 -2.467 -1.813 -0.943 -0.110 0.740 1.631 2.300 3.162
7 -3.164 -2.312 -1.725 -0.910 -0.101 0.720 1.582 2.193 2.963
8 -2.987 -2.217 -1.672 -0.885 -0.091 0.710 1.547 2.124 2.837
9 -2.862 -2.151 -1.632 -0.867 -0.087 0.705 1.521 2.073 2.751
10 -2.770 -2.103 -1.603 -0.851 -0.082 0.702 1.502 2.037 2.691
11 -2.696 -2.063 -1.582 -0.839 -0.076 0.700 1.486 2.007 2.643
12 -2.640 -2.033 -1.562 -0.828 -0.073 0.700 1.472 1.981 2.605
13 -2.592 -2.008 -1.547 -0.822 -0.069 0.699 1.464 1.961 2.574
14 -2.556 -1.991 -1.534 -0.812 -0.067 0.700 1.456 1.946 2.548
15 -2.521 -1.971 -1.522 -0.806 -0.062 0.697 1.448 1.933 2.529
16 -2.496 -1.956 -1.516 -0.800 -0.060 0.700 1.440 1.920 2.508
18 -2.452 -1.930 -1.498 -0.793 -0.055 0.700 1.434 1.896 2.478
20 -2.415 -1.914 -1.485 -0.783 -0.054 0.702 1.422 1.883 2.455
22 -2.387 -1.895 -1.473 -0.779 -0.052 0.704 1.417 1.867 2.434
24 -2.366 -1.881 -1.465 -0.774 -0.044 0.705 1.411 1.857 2.420
28 -2.334 -1.863 -1.450 -0.762 -0.042 0.709 1.402 1.836 2.397
32 -2.308 -1.844 -1.437 -0.758 -0.034 0.707 1.397 1.827 2.376
36 -2.292 -1.830 -1.428 -0.750 -0.030 0.708 1.392 1.812 2.358
40 -2.277 -1.821 -1.417 -0.746 -0.025 0.715 1.391 1.802 2.346
45 -2.261 -1.808 -1.412 -0.741 -0.023 0.714 1.385 1.794 2.334
50 -2.249 -1.796 -1.400 -0.735 -0.021 0.714 1.379 1.789 2.319
55 -2.240 -1.791 -1.394 -0.734 -0.015 0.716 1.376 1.784 2.314
60 -2.239 -1.782 -1.387 -0.728 -0.015 0.713 1.371 1.774 2.301
70 -2.226 -1.765 -1.380 -0.720 -0.008 0.711 1.372 1.765 2.292
80 -2.218 -1.762 -1.368 -0.716 0.000 0.689 1.324 1.699 2.200
100 -2.210 -1.740 -1.360 -0.710 0.000 0.710 1.360 1.750 2.260
120 -2.210 -1.730 -1.350 -0.700 0.010 0.700 1.350 1.740 2.250
∞ -2.160 -1.730 -1.350 -0.710 0.000 0.710 1.350 1.730 2.160
where ε₁ = (1 − γ)/2 and ε₂ = (1 + γ)/2. The coefficients b_p are given in
Table 7.2.
(e.2) Limits for β:

(7.5.8)
Lower limit = β̂_n exp(−u_{ε₂}/(√n ν̂_n)),
Upper limit = β̂_n exp(−u_{ε₁}/(√n ν̂_n)).

The fractiles u_p are tabulated in Table 7.3.
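Given the MLEs and the table fractiles, the limits (7.5.7)–(7.5.8) are straightforward to evaluate. The following Python sketch (ours, not the book's) takes the b- and u-fractiles as inputs, since Tables 7.2 and 7.3 must still be consulted by hand; the numbers shown are the interpolated fractiles used in Example 7.5.

```python
import math

def weibull_ci(nu_hat, beta_hat, n, b_lo, b_hi, u_lo, u_hi):
    """Confidence limits (7.5.7)-(7.5.8) for nu and beta.
    b_lo, b_hi are the eps1- and eps2-fractiles from Table 7.2;
    u_lo, u_hi are the corresponding fractiles from Table 7.3."""
    rn = math.sqrt(n)
    nu_limits = (nu_hat / (1 + b_hi / rn),               # (7.5.7) lower
                 nu_hat / (1 + b_lo / rn))               # (7.5.7) upper
    beta_limits = (beta_hat * math.exp(-u_hi / (rn * nu_hat)),  # (7.5.8)
                   beta_hat * math.exp(-u_lo / (rn * nu_hat)))
    return nu_limits, beta_limits

# Example 7.5 (n = 20, gamma = .90), interpolated table fractiles:
# b_.05 = -.935, b_.95 = 2.008, u_.05 = -1.914, u_.95 = 1.883
(nu_lo, nu_hi), (beta_lo, beta_hi) = weibull_ci(
    1.876, 1.089, 20, b_lo=-.935, b_hi=2.008, u_lo=-1.914, u_hi=1.883)
```

These inputs reproduce the limits computed in Example 7.5 (1.295, 2.372 for ν and .870, 1.368 for β).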



(f) Asymptotic Distributions:

The asymptotic distributions of β̂_n and ν̂_n are normal, with means β
and ν and standard errors given by (7.5.4) and (7.5.5).
(g) Asymptotic Confidence Limits:
(g.1) Asymptotic limits for β:

(7.5.9)    β̂_n ± z_{ε₂} · 1.053 β̂_n/(√n ν̂_n);

(g.2) Asymptotic limits for ν:

(7.5.10)    ν̂_n ± z_{ε₂} · .780 ν̂_n/√n.

EXAMPLE 7.5
The following is a random sample of size n = 20 from a Weibull distri-
bution W(1.75, 1)

.58 .29 .93 1.11 1.26


1.39 1.89 .28 .44 1.57
.76 .64 2.12 .25 1.36
.63 .87 .57 .64 1.68

Starting with v(O) = 1, the following values were obtained from the recursive
formula following (7.5.3) (see Exercise [7.5.1] for a BASIC program):

j    ν^(j)

1 2.95624
2 1.48198
3 2.19162
4 1.71465
5 1.98635
6 1.81348
7 1.91672
8 1.85253
9 1.89149
10 1.86749
11 1.88214
12 1.87314
13 1.87865
14 1.87527
15 1.87734
16 1.87607
17 1.87685
18 1.87637

We see that by the 18th iteration the algorithm has converged in the first
four significant digits to the value ν̂ = 1.876. The MLE of β is, by (7.5.2),
β̂ = 1.089. Both β̂ and ν̂ are quite close to the true values β = 1 and
ν = 1.75. Estimates of the asymptotic standard errors and covariance are
obtained by substituting in formulae (7.5.4)–(7.5.6) the estimates of β and
ν. Thus we obtain

ASD{β̂_n} = (1.053)(1.089)/(√20 · 1.876) = .137,

ASD{ν̂_n} = (.78)(1.876)/√20 = .327

and

ACOV{β̂_n, ν̂_n} = (.254)(1.089)/20 = .0138.
Confidence limits for β, at level γ = .90, are

Lower limit = 1.089 exp(−1.883/(1.876 √20)) = .870,

Upper limit = 1.089 exp(1.914/(1.876 √20)) = 1.368.

Confidence limits for ν, at level γ = .90, are

Lower limit = 1.876 / (1 + 2.008/√20) = 1.295,

Upper limit = 1.876 / (1 − .935/√20) = 2.372.
The asymptotic confidence intervals at level γ = .90 are
(i) for β:
Lower limit = .864,
Upper limit = 1.314;
(ii) for ν:
Lower limit = 1.338,
Upper limit = 2.414.


7.6 Extreme Value Distributions
We consider here the extreme value distribution of minima, EV(ξ, δ). The
results are immediately translatable to those for the extreme value distribution
of maxima, by replacing x_i by −x_i, and ξ by −ξ. As shown in Chapter 2, if
X ~ W(ν, β) then Y = ln X ~ EV(ln β, 1/ν). Thus, if we are given a random
sample from an extreme value distribution EV(ξ, δ), we can transform the
sample values x_i to y_i = exp(x_i), i = 1, … , n. The transformed sample has
a W(1/δ, e^ξ) distribution. We estimate ν = 1/δ and β = e^ξ by the method of
MLE of the previous section and then the MLE of ξ and δ are ξ̂ = ln β̂ and
δ̂ = 1/ν̂. The results can be obtained, however, directly from the following
formulae:
(a) Likelihood Function:

(7.6.1)    L(ξ, δ; t) = (1/δⁿ) exp( (1/δ) ∑_{i=1}^n (t_i − ξ) − ∑_{i=1}^n exp((t_i − ξ)/δ) ),

−∞ < ξ < ∞,  0 < δ < ∞.


(b) MLE:

(7.6.2)    ξ̂_n = δ̂_n ln( (1/n) ∑_{i=1}^n exp(t_i/δ̂_n) )

and

(7.6.3)    δ̂_n = −t̄_n + ∑_{i=1}^n t_i exp(t_i/δ̂_n) / ∑_{i=1}^n exp(t_i/δ̂_n),

where t̄_n = (1/n) ∑_{i=1}^n t_i.

The MLE of δ can be determined from (7.6.3) by iteration, starting with
an initial value δ^{(0)}. The value of δ̂_n obtained is then substituted in (7.6.2)
to obtain ξ̂_n.
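The two-step scheme above can be sketched in Python (the sketch is ours, not the book's). Note that for data with large t_i/δ the exponentials can overflow, in which case the t_i should be centered first; for the data of Example 7.6 this is not an issue.

```python
import math

def ev_mle(t, n_iter=100):
    """Iterative MLE for the extreme value distribution of minima,
    EV(xi, delta): fixed-point iteration of (7.6.3) for delta,
    then (7.6.2) for xi, starting with delta = 1."""
    n = len(t)
    tbar = sum(t) / n
    delta = 1.0
    for _ in range(n_iter):
        w = [math.exp(x / delta) for x in t]          # exp(t_i/delta)
        delta = -tbar + sum(x * wx for x, wx in zip(t, w)) / sum(w)  # (7.6.3)
    xi = delta * math.log(sum(math.exp(x / delta) for x in t) / n)   # (7.6.2)
    return xi, delta

# Data of Example 7.6 (n = 20 lifetimes of electric wires):
wires = [8.71, 10.57, 4.14, 9.41, 7.38, 10.42, 9.26, 8.47, 9.04, 9.11,
         10.05, 8.47, 11.87, 8.26, 8.77, 10.26, 8.37, 8.45, 7.66, 11.29]
xi_hat, delta_hat = ev_mle(wires)   # approx. 9.72 and 1.38
```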
(c) Standard Deviations:
Formulae for the exact standard deviations are not available.
(d) Asymptotic Standard Deviations:
The asymptotic standard errors and covariance of ξ̂_n and δ̂_n are

(7.6.4)    ASD{ξ̂_n} = 1.053 δ/√n,

(7.6.5)    ASD{δ̂_n} = .780 δ/√n,

and

(7.6.6)    ACOV(ξ̂_n, δ̂_n) = .254 δ²/n.


(e) Confidence Limits at level γ:

(e.1) Limits for δ:

(7.6.7)
Lower limit = δ̂_n (1 + b_{ε₁}/√n),
Upper limit = δ̂_n (1 + b_{ε₂}/√n),

where ε₁ = (1 − γ)/2 and ε₂ = (1 + γ)/2.

(e.2) Limits for ξ:

(7.6.8)
Lower limit = ξ̂_n − δ̂_n u_{ε₂}/√n,
Upper limit = ξ̂_n − δ̂_n u_{ε₁}/√n.

Values of b_p and u_p are given in Tables 7.2 and 7.3, respectively.


EXAMPLE 7.6
The following data are the lifetimes [hr] of a sample of 20 electric wires
under accelerated testing:

8.71 10.57 4.14 9.41 7.38
10.42 9.26 8.47 9.04 9.11
10.05 8.47 11.87 8.26 8.77
10.26 8.37 8.45 7.66 11.29

It is assumed that this is a random sample of size 20 from EV(ξ, δ). To
obtain the MLE of ξ and δ we apply formulae (7.6.2) and (7.6.3). Formula
(7.6.3) requires iterative solution. Starting with δ^{(0)} = 1 we obtain after 30
iterations the estimate δ̂_n = 1.38. The MLE of ξ is ξ̂_n = 9.72. Estimates
of the asymptotic standard deviations of the MLE, according to (7.6.4) and
(7.6.5), are

ASD{ξ̂_n} = .32,
ASD{δ̂_n} = .24.

The confidence limits obtained for δ, at level γ = .90, according to (7.6.7)
are 1.09 and 1.99. The asymptotic confidence limits are .98 and 1.77. The
.90-level confidence limits for ξ are, according to (7.6.8), 9.14 and 10.09,
while the corresponding asymptotic confidence limits are 9.19 and 10.26.

7.7 Normal and Lognormal Distributions
(a) MLE:
In Example 6.12 we showed that the MLE of μ and σ in the normal case
are

μ̂_n = X̄_n = (1/n) ∑_{i=1}^n X_i

and

σ̂_n = ( (1/n) ∑_{i=1}^n (X_i − X̄_n)² )^{1/2}.

(b) Standard Deviation and Covariance:

The SD of μ̂_n and σ̂_n are given by

(7.7.1)    SD{μ̂_n} = σ/√n.

(In fact, μ̂_n and σ̂_n are independent.)

(c) Asymptotic Standard Deviation and Covariance:

            ASD{μ̂_n} = σ/√n,
(7.7.2)     ASD{σ̂_n} = σ/√(2n),
            ACOV(μ̂_n, σ̂_n) = 0.
(d) Confidence Limits at level γ:
(d.1) Limits for μ:

(7.7.3)
Lower limit = X̄_n − t_{ε₂}[n−1] S/√n,
Upper limit = X̄_n + t_{ε₂}[n−1] S/√n,

where ε₂ = (1 + γ)/2 and S = ( (1/(n−1)) ∑ (X_i − X̄_n)² )^{1/2}.

(d.2) Limits for σ:

(7.7.4)
Lower limit = S ( (n−1)/χ²_{ε₂}[n−1] )^{1/2},
Upper limit = S ( (n−1)/χ²_{ε₁}[n−1] )^{1/2},

where ε₁ = (1 − γ)/2.
(e) Reliability Function:
Since R(t) = 1 − Φ((t − μ)/σ), by the invariance principle, the MLE of R(t)
is

(7.7.5)    R̂_n(t) = 1 − Φ((t − μ̂_n)/σ̂_n).

The asymptotic SD of R̂_n(t) is obtained by employing formula (6.2.38):

(7.7.6)    ASD{R̂_n(t)} = (φ((μ − t)/σ)/√n) (1 + ½((μ − t)/σ)²)^{1/2},

where φ(z) is the PDF of N(0, 1).

The determination of efficient confidence intervals for R(t) is somewhat
complicated. We will only discuss the asymptotic confidence intervals.
These are

(7.7.7)    R̂_n(t) ± z_{ε₂} (φ((μ̂_n − t)/σ̂_n)/√n) (1 + ½((μ̂_n − t)/σ̂_n)²)^{1/2}.

If the sample X₁, … , X_n is drawn from a lognormal distribution, LN(μ, σ),
we make the transformation Y_i = ln X_i, i = 1, … , n. Then Y₁, … , Y_n can
be considered a random sample from N(μ, σ) and the MLE of μ and σ are
obtained by the formulae given above.
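As a numerical illustration of (7.7.5)–(7.7.7), the following Python sketch (ours, not the book's) computes R̂_n(t), its ASD and the asymptotic limits; the inputs μ = 10, σ = 2, n = 25, t = 12 are made-up illustrative values, not from the book.

```python
import math
from statistics import NormalDist

def normal_reliability(t, mu, sigma, n, gamma=0.95):
    """MLE of R(t) = 1 - Phi((t - mu)/sigma), its ASD (7.7.6), and
    the asymptotic confidence limits (7.7.7); mu, sigma are the MLEs."""
    nd = NormalDist()
    z = (mu - t) / sigma
    r = nd.cdf(z)                    # equals 1 - Phi((t - mu)/sigma)
    asd = (nd.pdf(z) / math.sqrt(n)) * math.sqrt(1 + 0.5 * z * z)  # (7.7.6)
    z2 = nd.inv_cdf((1 + gamma) / 2)
    return r, asd, (r - z2 * asd, r + z2 * asd)                    # (7.7.7)

# Illustrative values: mu = 10, sigma = 2, n = 25, age t = 12
r, asd, (ci_lo, ci_hi) = normal_reliability(t=12, mu=10, sigma=2, n=25)
```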

7.8 Truncated Normal Distributions

Suppose that t₁, … , t_n is a random sample from a truncated normal dis-
tribution NT(μ, σ, t₀).
(a) Likelihood Function:

(7.8.1)    L(μ, σ, t₀; t) = (1/(σⁿ Φⁿ((μ − t₀)/σ))) exp(−(1/(2σ²)) ∑_{i=1}^n (t_i − μ)²),  t₀ ≤ t_(1).

(b) MLE:
From (7.8.1) we see that the likelihood function, as a function of t₀, is
maximized by the largest value of t₀ which does not exceed t_(1), for any μ
and σ. Hence t̂₀ = t_(1). Substituting this value of t₀ in (7.8.1) and taking
logarithms, we obtain that the log-likelihood function of (μ, σ) at t̂₀ is

(7.8.2)    l(μ, σ, t̂₀; t) = −n ln σ − Q_n/(2σ²) − n(t̄_n − μ)²/(2σ²) − n ln Φ((μ − t_(1))/σ),

where Q_n = ∑_{i=1}^n (t_i − t̄_n)². Thus, the likelihood equations are

(7.8.3)    (t̄_n − μ̂)/σ̂ = φ(ẑ)/Φ(ẑ),  where ẑ = (μ̂ − t_(1))/σ̂,

and

(7.8.4)    σ̂² = Q_n/n + (μ̂ − t_(1))(t̄_n − μ̂) + (t̄_n − μ̂)².

These equations can be written in the form

(7.8.5)    σ̂ = ( Q_n/n + (t̄_n − μ̂)(t̄_n − t_(1)) )^{1/2}

and

(7.8.6)    μ̂ = t̄_n − σ̂ φ(ẑ)/Φ(ẑ),  ẑ = (μ̂ − t_(1))/σ̂.

The solution of equations (7.8.5) and (7.8.6) can be obtained using an
iterative procedure, starting with μ^{(0)} = t̄_n, σ̂^{(0)} = (Q_n/n)^{1/2}. At the j-th
step, μ^{(j−1)} and σ̂^{(j−1)} are substituted in the right hand side of (7.8.6) to
produce μ^{(j)}. This value of μ^{(j)} is then substituted in (7.8.5) to yield σ̂^{(j)}.
The pair (μ^{(j)}, σ̂^{(j)}) will converge to the solution (μ̂_n, σ̂_n) as j → ∞.
(c) Asymptotic Standard Deviations:
The formulae for the asymptotic variances and covariances of the MLE
of t₀, μ and σ are complicated. Tables for the numerical determination of
these quantities can be found in Johnson and Kotz (1970), Hald (1949) and
Cohen (1961).
EXAMPLE 7.7
The following random sample of size n = 20 was generated from
NT(100, 10, 90):
98.31 110.90 106.82 90.56 106.71
91.48 110.52 91.47 92.38 104.01
104.30 101.66 100.93 111.13 97.66
94.24 105.91 99.44 98.83 90.85
The MLE of t₀ is t_(1) = 90.56. The MLE of μ and σ are determined itera-
tively from (7.8.5) and (7.8.6), starting with t̄₂₀ and (Q₂₀/20)^{1/2}. Below is
a partial listing of the results of the first 20 iterations:

Iteration    μ̂       σ̂
1          100.17    6.96
2           99.01    7.72
5           97.12    8.81
10          95.60    9.61
15          94.79   10.00
20          94.23   10.27

Thus, μ̂_n = 94.2 and σ̂_n = 10.3.



7.9 Exercises
[7.1.1] The average failure time in a random sample of n = 30 water pumps
is t̄₃₀ = 3.75 [yr]. Assuming that the lifetime of water pumps is
exponential E(β), determine:
(i) The MLE of β and the MLE of SD(β̂);
(ii) Exact confidence limits for β, at level of confidence γ = .90;
(iii) Exact confidence limits for the reliability of the pumps at age
t = 4 [yr] at confidence level γ = .90.
[7.1.2] How many observations on the TTF should be taken in the ex-
ponential case so that the lower confidence limit of β, at level of
confidence γ = .95, would be .9 t̄_n?
[Hint: For large n, χ²_ε[2n] ≈ 2n + 2z_ε√n.]
[7.2.1] The sample minimum of lifetimes among n = 50 radar components
is t_(1) = 500 [hr]. The sample average of the lifetimes is t̄₅₀ =
750 [hr]. Assuming that the lifetime distribution is the shifted
exponential SE(β, t₀), determine:
(i) The MLE of t₀ and β;
(ii) Estimates of SD(t̂₀) and SD(β̂);
(iii) Confidence limits for t₀ and for β, at level of confidence γ = .95.
[7.2.2] Prove that if t₁, … , t_n is a random sample from SE(t₀, β) then
t_(1) ~ t₀ + E(β/n), where t_(1) is the sample minimum. Using this
result, verify that n(t_(1) − t₀) ~ E(β) for all n ≥ 1.

[7.2.3] Consider the data of Example 7.2. Suppose that instead of the
largest failure time, 313, we see the value 250+ (censored value).
(i) Estimate t₀ and β by the least-squares method from a proba-
bility plotting.
(ii) Estimate t₀ by t̂₀ = t_(1). Subtract the value of t_(1) from each
sample value. Use the method of Section 6.4 to obtain an MLE
and confidence interval for β, based on a sample of 9 differences
with 1 censored value.
[7.3.1] Redo Example 7.3, assuming that k = 5.
[7.4.1] The following is a sample of n = 22 values from a G(ν, β) life dis-
tribution:
85.0, 249.9, 34.0, 605.4, 175.6, 253.1, 47.4, 69.1, 38.5, 141.7, 249.2,
342.4, 226.0, 159.6, 10.9, 380.4, 201.1, 235.7, 289.3, 65.8, 215.8,
11.7.
(i) Determine the MLE of ν and of β.
(ii) Find approximations to the SD of ν̂ and β̂ by employing Table
7.1.
(iii) Use formulae (7.4.14) and (7.4.15) to determine confidence in-
tervals for ν and β, at level of confidence γ = .90.

(iv) Compare the intervals in (iii) to those determined by the as-
ymptotic formulae (7.4.16) and (7.4.17).
[7.4.2] Find the value of ψ'(31.6) [formula (7.4.13)].
[7.5.1] The following BASIC program solves equations (7.5.2) and (7.5.3)
iteratively, for the purpose of obtaining the MLE of β and ν of the
Weibull distribution W(ν, β):
10 INPUT N
20 DIM T(N)
30 PRINT "INSERT SAMPLE VALUES"
40 FOR I = 1 TO N
50 INPUT T(I)
60 NEXT I
70 PRINT "INSERT NO. OF ITERATIONS"
80 INPUT M
90 V = 1
100 FOR I = 1 TO M
110 S1 = 0
120 S2 = 0
130 S3 = 0
140 FOR J = 1 TO N
150 S1 = S1 + T(J) ^ V
160 S2 = S2 + (T(J) ^ V) * LOG (T(J))
170 S3 = S3 + LOG (T(J))
180 NEXT J
190 B = (S1 / N) ^ (1 / V)
200 V = 1 / (S2 / S1 - S3 / N)
210 PRINT B,V
220 NEXT I
230 END
(i) Apply this program to determine the MLE of ν and β, based
on the random sample 33, 123, 45, 129, 167, 145, 122, 33, 79, 150.
(ii) Estimate the asymptotic standard deviations (ASD) of ν̂_n and
of β̂_n.
[7.6.1] It is assumed that the compressive strength of concrete cubes of
size 7" × 7" × 7", after three days in a humidity chamber, has the
extreme value distribution EV(ξ, δ). The following data are the
compressive strength of n = 10 cubes (in [kg/cm²]): 122, 128, 95,
115, 130, 129, 132, 137, 103, 99.
(i) Obtain the MLE of ξ and δ.
(ii) Determine the MLE of the expected value μ and of the standard
deviation σ of the distribution.
(iii) What are the ASDs of the MLE μ̂_n and σ̂_n?
[7.7.1] Use the data of Exercise [7.6.1] to determine the MLE of the ex-
pected compressive strength and its standard deviation, under the
model that the distribution is lognormal.

[7.8.1] The following is a random sample of size n = 20 from a truncated
normal distribution NT(μ, σ, t₀):
2.55, 3.35, 5.51, 2.22, 1.05, 7.82, 3.49, 4.45, 6.62, 5.17, 2.79, 7.12,
3.14, 4.45, 4.69, 6.14, 6.63, 2.57, 3.18, 3.49.
(i) Compute the values of the statistics t_(1), t̄_n, Q_n/n.
(ii) Apply the following BASIC program to obtain the values of
the MLE μ̂ and σ̂, according to the iterative solution to equations
(7.8.5) and (7.8.6).
10 PRINT "INSERT VALUES OF TO, TBAR,Q/N"
20 INPUT TO, AV,VR
30 PRINT "NUMBER OF ITERATIONS"
40 INPUT M
50 L = 1
60 PH = 4 * ATN (1)
70 AM=AV
80 SD = SQR (VR)
90 Z = (AM - TO) / SD
100 FZ = EXP ( - Z * Z / 2) / SQR (2 * PH)
110 X = Z
120 GOSUB 200
130 GZ = QN
140 SD = SQR (VR + (AV - AM) * (AM - TO))
150 AM = AV - SD * FZ / GZ
160 PRINT AM, SD
170 L = L + 1
180 IF L < = M GOTO 90
190 END
200 I = 0
210 IF X > = 0 GOTO 220
215 X = ABS (X)
217 I = 1
220 P = .23164
230 B1 = .31938
240 B2 = -.35656
250 B3 = 1.78148
260 B4 = -1.82126
270 B5 = 1.3303
280 T = 1 / (1 + P * X)
290 R = .39894 * EXP ( - X * X /2)
300 QN = 1 - R * (T * (B1 + T * (B2 + T * (B3 + T * (B4 + T * B5)))))
310 IF I = 0 GOTO 330
320 QN = 1- QN
330 RETURN
8
Bayesian Reliability
Estimation and Prediction

It is often the case that some information is available on the parameters of
the life distributions from prior experiments or prior analysis of failure data.
The Bayesian approach provides the methodology for formal incorporation
of prior information with the current data.

8.1 Prior and Posterior Distributions


Let X₁, … , X_n be a random sample from a distribution with a PDF f(x; θ),
where θ = (θ₁, … , θ_k) is a vector of k parameters, belonging to a parameter
space Θ. So far we have assumed that the point θ is an unknown constant.
In the Bayesian approach, θ is considered a random vector having some
specified distribution. The distribution of θ is called a prior distribution.
The problem of which prior distribution to adopt for the Bayesian model is
not easily resolvable, since the values of θ are not directly observable. The
discussion of this problem is beyond the scope of the book.
Let h(θ₁, … , θ_k) denote the joint PDF of (θ₁, … , θ_k), corresponding to
the prior distribution. This PDF is called the prior PDF of θ. The joint
PDF of X and θ is

(8.1.1)    g(x, θ) = f(x; θ) h(θ).

The marginal PDF of X, which is called the predictive PDF, is

(8.1.2)    f*(x) = ∫ ⋯ ∫ f(x; θ) h(θ) dθ₁ ⋯ dθ_k.

Furthermore, the conditional PDF of θ given X = x is

(8.1.3)    h(θ | x) = g(x, θ) / f*(x).


This conditional PDF is called the posterior PDF of θ, given x. Thus,
starting with a prior PDF, h(θ), we convert it, after observing the value of
x, to the posterior PDF of θ given x.
If X₁, … , X_n is a random sample from a distribution with a PDF f(x; θ)
then the posterior PDF of θ, corresponding to the prior PDF h(θ), is

(8.1.4)    h(θ | x) = ∏_{i=1}^n f(x_i; θ) h(θ) / ∫ ⋯ ∫ ∏_{i=1}^n f(x_i; θ) h(θ) dθ₁ ⋯ dθ_k.

For a given sample, x, the posterior PDF h(θ | x) is the basis for most
types of Bayesian inference.
EXAMPLE 8.1
I. Binomial Distributions
X ~ B(n; θ), 0 < θ < 1.
The PDF of X is

f(x; θ) = (n choose x) θ^x (1 − θ)^{n−x},  x = 0, … , n.

Suppose that θ has a prior beta distribution, with PDF

(8.1.5)    h(θ) = θ^{ν₁−1} (1 − θ)^{ν₂−1} / β(ν₁, ν₂),

0 < θ < 1, 0 < ν₁, ν₂ < ∞, where β(a, b) is the complete beta function

β(a, b) = ∫₀¹ x^{a−1} (1 − x)^{b−1} dx = Γ(a)Γ(b)/Γ(a + b).

The posterior PDF of θ, given X = x, is

(8.1.6)    h(θ | x) = θ^{ν₁+x−1} (1 − θ)^{ν₂+n−x−1} / β(ν₁ + x, ν₂ + n − x),  0 < θ < 1.

Notice that the posterior PDF is also that of a beta distribution, with
parameters ν₁ + x and ν₂ + n − x. The expected value of the posterior
distribution of θ, given X = x, is

(8.1.7)    E{θ | x} = (ν₁ + x) / (ν₁ + ν₂ + n).

II. Poisson Distributions
X ~ P(λ), 0 < λ < ∞.
The PDF of X is

f(x; λ) = e^{−λ} λ^x / x!,  x = 0, 1, ⋯ .

Suppose that the prior distribution of λ is the gamma distribution, G(ν, τ).
The prior PDF is thus

(8.1.8)    h(λ) = λ^{ν−1} e^{−λ/τ} / (τ^ν Γ(ν)),  0 < λ < ∞.

The posterior PDF of λ, given X = x, is

(8.1.9)    h(λ | x) = ((1 + τ)/τ)^{ν+x} λ^{ν+x−1} e^{−λ(1+τ)/τ} / Γ(ν + x).

That is, the posterior distribution of λ, given X = x, is G(ν + x, τ/(1+τ)). The
posterior expectation of λ, given X = x, is (ν + x)τ/(1 + τ).
III. Exponential Distributions
X ~ E(β).
The PDF of X is

f(x; β) = (1/β) e^{−x/β}.

Let β have an inverse-gamma prior distribution, IG(ν, τ). That is, 1/β ~
G(ν, τ). The prior PDF is

(8.1.10)    h(β) = β^{−(ν+1)} e^{−1/(βτ)} / (τ^ν Γ(ν)),  0 < β < ∞.

Then, the posterior PDF of β, given X = x, is

(8.1.11)    h(β | x) = ((1 + xτ)/τ)^{ν+1} β^{−(ν+2)} e^{−(x+1/τ)/β} / Γ(ν + 1).

That is, the posterior distribution of β, given X = x, is IG(ν + 1, τ/(1 + xτ)).
The posterior expectation of β, given X = x, is (x + 1/τ)/ν.
In formula (6.2.1) we defined the likelihood function L(θ; x), of a param-
eter θ, over a parameter space Θ. In the definition of the posterior PDF of
θ, given x, we see that any factor of L(θ; x) which does not depend on θ
is irrelevant. For example, the binomial PDF, under θ, is

f(x; θ) = (n choose x) θ^x (1 − θ)^{n−x},  x = 0, 1, ⋯ , n,

0 < θ < 1. The factor (n choose x) can be omitted from the likelihood function in
Bayesian calculations. The factor of the likelihood which depends on θ is
called the kernel of the likelihood. In the above binomial example, θ^x(1 −
θ)^{n−x} is the kernel of the binomial likelihood. If the prior PDF of θ, h(θ),
is of the same functional form (up to a proportionality factor which does
not depend on θ) as that of the likelihood kernel, we call that prior PDF
a conjugate one. As shown in Example 8.1, the beta prior distributions
are conjugate to the binomial model, the gamma prior distributions are
conjugate to the Poisson model and the inverse-gamma priors are conjugate
to the exponential model.
If a conjugate prior distribution is applied, the posterior distribution
belongs to the conjugate family.
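The conjugate updates of Example 8.1 reduce to simple parameter arithmetic. The helper functions below are an illustrative sketch (ours, not the book's); the function names are our own.

```python
def beta_binomial_update(nu1, nu2, x, n):
    """Beta prior (nu1, nu2) + binomial data (x successes in n trials)
    -> beta posterior parameters, per (8.1.6)."""
    return nu1 + x, nu2 + n - x

def gamma_poisson_update(nu, tau, x):
    """Gamma prior G(nu, tau) + a Poisson observation x
    -> gamma posterior G(nu + x, tau/(1 + tau)), per (8.1.9)."""
    return nu + x, tau / (1 + tau)

def ig_exponential_update(nu, tau, x):
    """Inverse-gamma prior IG(nu, tau) + an exponential observation x
    -> IG(nu + 1, tau/(1 + x*tau)), per (8.1.11)."""
    return nu + 1, tau / (1 + x * tau)

# Uniform prior (nu1 = nu2 = 1) on a survival probability,
# with 27 survivors out of n = 50 (the case used in Section 8.2):
a, b = beta_binomial_update(1, 1, 27, 50)
post_mean = a / (a + b)    # (8.1.7): (nu1 + x)/(nu1 + nu2 + n)
```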
One of the fundamental problems in Bayesian analysis is that of the
choice of a prior distribution of θ. From a Bayesian point of view, the
prior distribution should reflect the prior knowledge of the analyst on the
parameter of interest. It is often difficult to express the prior belief about
the value of θ in a PDF form. We find that analysts apply, whenever possi-
ble, conjugate priors whose means and standard deviations may reflect the
prior beliefs. Another common approach is to use a "diffused," "vague" or
Jeffreys prior, which is proportional to |I(θ)|^{1/2}, where I(θ) is the Fisher in-
formation function (matrix). For further reading on this subject the reader
is referred to Box and Tiao (1973), Good (1965) and Press (1989).

8.2 Loss Functions and Bayes Estimators
In order to define Bayes estimators we must first specify a loss function,
L(O,O), which represents the cost involved in using the estimate 0 when
the true value is O. Often this loss is taken to be a function of the distance
between the estimate and the true value, i.e., 10 - 01. In such cases, the
loss function is written as

L( 0,0) = W(IO - (1).

Examples of such loss functions are

Squared-error loss: W(IO - (1) = (0 - 0)2,


Absolute-error loss: W(lO - (1) = 10 - 01·
The loss function does not have to be symmetric. For example, we may
consider the function

A {a((}-B),
L((}, (}) =
(3(B - ()), ifB>(}

where α and β are some positive constants.


The Bayes estimator of 0, with respect to a loss function L(8,O), is
defined as the value of 8 which minimizes the posterior risk, given x,
where the posterior risk is the expected loss with respect to the posterior
distribution. For example, suppose that the PDF of X depends on sev-
eral parameters 91 , ... ,9k, but we wish to derive a Bayes estimator of 91
with respect to the squared-error loss function. We consider the marginal
posterior PDF of 91 , given x, h(91 I x). The posterior risk is

It is easily shown that the value of 01 which minimizes the posterior risk
R(O},x) is the posterior expectation of 91 :

If the loss function is L(01 ,0) = 101 - 91 1, the Bayes estimator of 91 is the
median of the posterior distribution of fh given x.

8.2.1 Distribution-Free Bayes Estimator of Reliability

Let J_n denote the number of failures in a random sample of size n, during
the period [0, t). The reliability of the device on test at age t is R(t) =
1 − F(t), where F(t) is the CDF of the life distribution. Let K_n = n − J_n.
The distribution of K_n is the binomial B(n, R(t)). Suppose that the prior
distribution of R(t) is uniform on (0, 1). This prior distribution reflects our
initial state of ignorance concerning the actual value of R(t).
The uniform distribution is a special case of the beta distribution with
ν₁ = 1 and ν₂ = 1. Hence, according to Part I of Example 8.1, the posterior
distribution of R(t), given K_n, is a beta distribution with parameters ν₁ =
K_n + 1 and ν₂ = 1 + n − K_n. Hence, the Bayes estimator of R(t), with
respect to the squared-error loss function, is

(8.2.1)    R̂(t; K_n) = E{R(t) | K_n} = (K_n + 1)/(n + 2).

If the sample size is n = 50, and K₅₀ = 27, the Bayes estimator of R(t) is
R̂(t; 27) = 28/52 = .538. Notice that the MLE of R(t) is R̂₅₀ = 27/50 =
.540. The sample size is sufficiently large for the MLE and the Bayes
estimator to be numerically close. If the loss function is |R̂ − R|, the Bayes
estimator of R is the median of the posterior distribution of R(t) given K_n,
i.e., the median of the beta distribution with parameters ν₁ = K_n + 1 and
ν₂ = n − K_n + 1.

Generally, if ν₁ and ν₂ are integers then the median of the beta distri-
bution is

(8.2.2)    Me = ν₁ F_{.5}[2ν₁, 2ν₂] / (ν₂ + ν₁ F_{.5}[2ν₁, 2ν₂]),

where F_{.5}[j₁, j₂] is the median of the F[j₁, j₂] distribution. Substituting
ν₁ = K_n + 1 and ν₂ = n − K_n + 1 in (8.2.2), we obtain that the Bayes
estimator of R(t) with respect to the absolute error loss is

(8.2.3)    R̃(t) = (K_n + 1) F_{.5}[2K_n + 2, 2n + 2 − 2K_n] /
                 (n + 1 − K_n + (K_n + 1) F_{.5}[2K_n + 2, 2n + 2 − 2K_n]).

Numerically, for n = 50, K_n = 27, F_{.5}[56, 48] = 1.002, and R̃(t) = .539.
The two Bayes estimates are very close.

8.2.2 Bayes Estimator of Reliability for Exponential Life Distributions

Consider a Type II censored sample of size n from an exponential distribu-
tion, E(β), with censoring at the r-th failure. Let t_(1) ≤ t_(2) ≤ ⋯ ≤ t_(r)
be the ordered failure times. For squared-error loss, the Bayes estimator of
R(t) = e^{−t/β} is given by

(8.2.4)    R̂(t) = E{R(t) | t_(1), ⋯ , t_(r)} = E{e^{−t/β} | t_(1), ⋯ , t_(r)}.

This conditional expectation can be computed by integrating e^{−t/β} with
respect to the posterior distribution of β, given t_(1), ⋯ , t_(r).
Suppose that the prior distribution of β is IG(ν, τ). One can easily
verify that the posterior distribution of β given t_(1), ⋯ , t_(r) is the inverted-
gamma IG(ν + r, τ/(1 + τ T_{n,r})), where T_{n,r} = ∑_{i=1}^r t_(i) + (n − r) t_(r). Hence, the
Bayes estimator of R(t) = exp(−t/β) is, for squared-error loss,

(8.2.5)    R̂(t) = ((1 + τ T_{n,r})/τ)^{r+ν} / Γ(r + ν)
                  · ∫₀^∞ β^{−(r+ν+1)} exp(−(1/β)(T_{n,r} + 1/τ + t)) dβ
                = ((1 + T_{n,r} τ)/(1 + (T_{n,r} + t) τ))^{r+ν}.

Note that the estimator only depends on n through T_{n,r}.

In the following table we provide a few values of the Bayes estimator R̂(t)
for selected values of t, when ν = 3, r = 23, T_{n,r} = 2242 and τ = 10⁻²,
along with the corresponding MLE, which is

MLE = e^{−t/β̂_{n,r}} = e^{−rt/T_{n,r}}.

t       50    100   150   200

R̂(t)  .577  .337  .199  .119
MLE   .599  .359  .215  .129
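The table values can be reproduced directly from (8.2.5) and the MLE formula. A Python sketch (ours, not the book's):

```python
import math

def bayes_reliability(t, T, r, nu, tau):
    """Bayes estimator (8.2.5) of R(t) under an IG(nu, tau) prior,
    with total time on test T = T_{n,r} and r observed failures."""
    return ((1 + tau * T) / (1 + tau * (T + t))) ** (r + nu)

def mle_reliability(t, T, r):
    """Corresponding MLE, exp(-t / beta_hat) = exp(-r t / T)."""
    return math.exp(-r * t / T)

# Reproduce the table: nu = 3, r = 23, T = 2242, tau = 1e-2
vals = [round(bayes_reliability(t, 2242, 23, 3, 1e-2), 3)
        for t in (50, 100, 150, 200)]   # [0.577, 0.337, 0.199, 0.119]
```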

If we have a series structure of k modules, and the TTF of each module
is exponentially distributed, then formula (8.2.5) is extended to

(8.2.6)    R̂_sys(t) = ∏_{i=1}^k ( (1 + τ_i T^{(i)}_{n,r_i}) / (1 + τ_i (T^{(i)}_{n,r_i} + t)) )^{r_i+ν_i},

where T^{(i)}_{n,r_i} is the total time on test statistic for the i-th module, r_i is the
censoring frequency of the observations on the i-th module, and τ_i and ν_i are
the prior parameters for the i-th module. As in (8.2.5), (8.2.6) is the Bayes
estimator for the squared-error loss, under the assumption that the MTTFs
of the various modules are priorly independent. In a similar manner one
can write a formula for the Bayes estimator of the reliability of a system
having a parallel structure.

8.3 Bayesian Credibility and Prediction Intervals

Bayesian credibility intervals at level γ are intervals C_γ(x) in the parameter
space Θ, for which the posterior probability that θ ∈ C_γ(x) is at least γ,
i.e.,

(8.3.1)    Pr{θ ∈ C_γ(x) | x} ≥ γ.

Pr{E | x} denotes the posterior probability of the event E, given x. The
Bayesian credibility interval for θ, given x, has an entirely different inter-
pretation than that of the confidence intervals discussed in the previous
sections. While the confidence level of the classical confidence interval is
based on the sample-to-sample variability of the interval, for fixed θ, the
credibility level of the Bayesian credibility interval is based on the presumed
variability of θ, for a fixed sample.

8.3.1 Distribution-Free Reliability Estimation

In Section 8.2.1 we developed the Bayes estimator, with respect to squared-
error loss, of the reliability at age t, R(t), when the data available are the
number of sample units which survive at age t, namely K_n. We have seen
that the posterior distribution of R(t), given K_n, for a uniform prior is the
beta distribution with ν₁ = K_n + 1 and ν₂ = n − K_n + 1. The Bayesian
credibility interval at level γ is the interval whose limits are the ε₁- and ε₂-
fractiles of the posterior distribution, where ε₁ = (1 − γ)/2, ε₂ = (1 + γ)/2.

These limits can be determined with the aid of a table of the fractiles of the
F-distribution, according to the formulae

(8.3.2)    Lower limit = (K_n + 1) / ((K_n + 1) + (n − K_n + 1) F_{ε₂}[2n + 2 − 2K_n, 2K_n + 2])

and

(8.3.3)    Upper limit = (K_n + 1) F_{ε₂}[2K_n + 2, 2n + 2 − 2K_n] /
                        ((n − K_n + 1) + (K_n + 1) F_{ε₂}[2K_n + 2, 2n + 2 − 2K_n]).

In Section 8.2.1 we considered the case of n = 50 and K_n = 27. For γ = .95
we need

F_{.975}[48, 56] = 1.735

and

F_{.975}[56, 48] = 1.746.

Thus, the Bayesian credibility limits obtained for R(t) are .402 and .671.
Recall that the Bayes estimator was .538.
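A Python sketch (ours, not the book's) of formulas (8.3.2)–(8.3.3); the F-fractiles are still read from a table and passed in as arguments.

```python
def beta_credibility_limits(n, K, F_a, F_b):
    """Credibility limits (8.3.2)-(8.3.3) for R(t), given K survivors
    out of n and the two table fractiles
    F_a = F_{eps2}[2n+2-2K, 2K+2] and F_b = F_{eps2}[2K+2, 2n+2-2K]."""
    a, b = K + 1, n - K + 1       # posterior beta parameters
    lower = a / (a + b * F_a)     # (8.3.2)
    upper = a * F_b / (b + a * F_b)   # (8.3.3)
    return lower, upper

# n = 50, K = 27, gamma = .95: F_.975[48,56] = 1.735, F_.975[56,48] = 1.746
cred_lo, cred_hi = beta_credibility_limits(50, 27, 1.735, 1.746)
```

These inputs reproduce the limits .402 and .671 computed above.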

8.3.2 Exponential Reliability Estimation

In Section 8.2.2 we developed a formula for the Bayes estimator of the
reliability function R(t) = exp(−t/β) for Type II censored data. We saw
that if the prior on β is IG(ν, τ) then the posterior distribution of β, given
the data, is IG(ν + r, τ/(1 + τ T_{n,r})). Thus, γ-level Bayes credibility limits
for β are given by β_{L,γ} (lower limit) and β_{U,γ} (upper limit), where

(8.3.4)    β_{L,γ} = (T_{n,r} + 1/τ) / G_{ε₂}(ν + r, 1)

and

(8.3.5)    β_{U,γ} = (T_{n,r} + 1/τ) / G_{ε₁}(ν + r, 1),

where G_p(ν + r, 1) denotes the p-fractile of the G(ν + r, 1) distribution.
Moreover, if ν is an integer then we can replace G_p(ν + r, 1) by ½χ²_p[2ν + 2r].
Finally, since R(t) = exp(−t/β) is an increasing function of β, the γ-level
Bayes credibility limits for R(t) are

(8.3.6)    R_{L,γ}(t) = exp(−t/β_{L,γ})

and

(8.3.7)    R_{U,γ}(t) = exp(−t/β_{U,γ}).



If we consider the values ν = 3, r = 23, Tn,r = 2242 and τ = 10⁻², we
need, for γ = .95, χ².025[52] = 33.53 and χ².975[52] = 73.31. Thus,

βL,.95 = 63.91 and βU,.95 = 139.73.

The corresponding Bayesian credibility limits for R(t), at t = 50, are

RL,.95(50) = .457 and RU,.95(50) = .699.
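A similar sketch evaluates (8.3.4)-(8.3.7), taking the χ²-fractiles quoted above as inputs (the function name is illustrative):

```python
import math

# gamma-level credibility limits for beta and for R(t) under Type II censoring,
# following (8.3.4)-(8.3.7); G_e(nu+r, 1) = chi2_e[2nu+2r]/2 for integer nu.
def exp_credibility_limits(t, T_nr, tau, chi2_lo, chi2_hi):
    """chi2_lo, chi2_hi = chi-square fractiles at (1-g)/2 and (1+g)/2, 2nu+2r d.f."""
    scale = T_nr + 1.0 / tau
    beta_L = scale / (chi2_hi / 2.0)
    beta_U = scale / (chi2_lo / 2.0)
    return beta_L, beta_U, math.exp(-t / beta_L), math.exp(-t / beta_U)

# Values of the example: nu = 3, r = 23, Tn,r = 2242, tau = .01, t = 50.
bL, bU, RL, RU = exp_credibility_limits(50, 2242, 0.01, 33.53, 73.31)
# bL ~ 63.9 [hr], bU ~ 139.7 [hr]; reliability limits ~ .457 and .699.
```
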

8.3.3 Prediction Intervals

In Section 6.1.4 we introduced the notion of prediction intervals of level
γ. This notion can be adapted to the Bayesian framework in the following
manner.
Let X be a sample from a distribution governed by a parameter θ; we
assume that θ has a prior distribution. Let h(θ | x) denote the posterior
PDF of θ, given X = x, where x represents the values of a random sample
already observed. We are interested in predicting the value of some statistic T(Y)
based on a future sample Y from the same distribution. Let g(t; θ) denote
the PDF of T(Y) under θ. Then the predictive distribution of T(Y),
given x, is

(8.3.8)    g*(t | x) = ∫Θ g(t; θ)h(θ | x)dθ.

A Bayesian prediction interval of level γ for T(Y), given x, is an interval
(TL(x), TU(x)) which contains a proportion γ of the predictive distribution,
i.e., satisfying

(8.3.9)    ∫ from TL(x) to TU(x) of g*(t | x)dt = γ.

Generally, the limits are chosen so that the tail areas are each (1 − γ)/2. We
illustrate the derivation of a Bayesian prediction interval in the following
example.
EXAMPLE 8.2
Consider a device with an exponential lifetime distribution E(β). We
test a random sample of n of these, stopping at the r-th failure. Suppose
the prior distribution of β is IG(ν, τ). Then, as seen in Section 8.2.2, the
posterior distribution of β given the ordered failure times t(1), ..., t(r) is
IG(ν + r, τ/(1 + τTn,r)), where Tn,r = Σ(i=1..r) t(i) + (n − r)t(r).
Suppose we have an additional s such devices, to be used one at a time
in some system, replacing each one immediately upon failure by another.
We are interested in a prediction interval of level γ for T, the time until all
s devices have been used up. Letting Y = (Y1, ..., Ys) be the lifetimes of

the devices, we have T(Y) = Σ(i=1..s) Yi. Thus, T(Y) has a G(s, β) distribution.
Substituting in (8.3.8), it is easily shown that the predictive PDF of T(Y),
given t(1), ..., t(r), is

(8.3.10)    g*(t | t(1), ..., t(r)) = [B(s, ν + r)(Tn,r + 1/τ)]⁻¹ (t/(t + Tn,r + 1/τ))^(s−1)
            × ((Tn,r + 1/τ)/(t + Tn,r + 1/τ))^(r+ν+1).
Making the transformation

U = (Tn,r + 1/τ)/(T(Y) + Tn,r + 1/τ),

one can show that the predictive distribution of U given t(1), ..., t(r) is the
beta(r + ν, s) distribution. If we let Beε1(r + ν, s) and Beε2(r + ν, s) be the
ε1- and ε2-fractiles of beta(r + ν, s), where ε1 = (1 − γ)/2 and ε2 = (1 + γ)/2,
then the lower and upper Bayesian prediction limits for T(Y) are

(8.3.11)    TL = (Tn,r + 1/τ)(1/Beε2(ν + r, s) − 1)

and

(8.3.12)    TU = (Tn,r + 1/τ)(1/Beε1(ν + r, s) − 1).

If ν is an integer, the prediction limits can be expressed as

(8.3.13)    TL = (Tn,r + 1/τ)(s/(ν + r))Fε1[2s, 2ν + 2r]

and

            TU = (Tn,r + 1/τ)(s/(ν + r))Fε2[2s, 2ν + 2r].

Formulae (8.3.12) and (8.3.13) have been applied in the following con-
text.
Twenty computer monitors were put on test at time t0 = 0. The
test was terminated at the sixth failure (r = 6). The total time on test
was T20,6 = 75,805.6 [hr]. We wish to predict the time till failure [hr] of
monitors which are shipped to customers. Assuming that TTF ~ E(β)
and ascribing β a prior IG(5, 10⁻³) distribution, we compute the prediction
limits TL and TU for s = 1, at level γ = .95.
In the present case 2ν + 2r = 22, Tn,r + 1/τ = 75,805.6 + 1,000 = 76,805.6,
and F.025[2, 22] = 1/F.975[22, 2] = 1/39.45 = .0253. Moreover, F.975[2, 22] = 4.38.
Thus,

TL = 76,805.6 × (1/11) × .0253 = 176.7 [hr]

and
TU = 76,805.6 × (1/11) × 4.38 = 30,582.6 [hr].

We have high prediction confidence that a monitor in the field will not fail
before 175 hours of operation.
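The prediction limits (8.3.13) can be sketched in the same way, feeding in the table F-fractiles (the function name is illustrative):

```python
# Bayesian prediction limits for the total life of s future devices,
# following (8.3.13); the F-fractiles are the table values quoted in the text.
def prediction_limits(T_nr, tau, nu, r, s, F_lo, F_hi):
    """F_lo = F_{e1}[2s, 2nu+2r], F_hi = F_{e2}[2s, 2nu+2r]."""
    scale = (T_nr + 1.0 / tau) * s / (nu + r)
    return scale * F_lo, scale * F_hi

# Monitor example: T20,6 = 75,805.6 [hr], tau = 1e-3 (so Tn,r + 1/tau = 76,805.6),
# nu = 5, r = 6, s = 1, gamma = .95, F.025[2, 22] = .0253, F.975[2, 22] = 4.38.
TL, TU = prediction_limits(75805.6, 1e-3, 5, 6, 1, 0.0253, 4.38)
# TL ~ 176.7 [hr] and TU ~ 30,582.6 [hr], as in the text.
```
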

8.4 Credibility Intervals for the
Asymptotic Availability of Repairable Systems:
The Exponential Case
Consider a repairable system. We take observations on n consecutive re-
newal cycles. It is assumed that in each renewal cycle, TTF ~ E(β) and
TTR ~ E(γ). Let t1, ..., tn be the values of TTF in the n cycles and
s1, ..., sn be the values of TTR. One can readily verify that the likelihood
function of β depends on the statistic U = Σ(i=1..n) ti and that of γ depends on
V = Σ(i=1..n) si. U and V are called the likelihood (or minimal sufficient)
statistics. Let λ = 1/β and μ = 1/γ. The asymptotic availability is
A∞ = μ/(μ + λ).
In the Bayesian framework we assume that λ and μ are priorly indepen-
dent, having prior gamma distributions G(ν, τ) and G(ω, ζ), respectively.
One can verify that the posterior distributions of λ and μ, given U and V,
are G(n + ν, U + τ) and G(n + ω, V + ζ), respectively. Moreover, λ and μ
are posteriorly independent. Routine calculations yield that

λ(U + τ)/(λ(U + τ) + μ(V + ζ)) ~ Beta(n + ν, n + ω),

where Beta(p, q) denotes a random variable having a beta distribution with
parameters p and q, 0 < p, q < ∞. Let ε1 = (1 − γ)/2 and ε2 = (1 + γ)/2.
We obtain that the lower and upper limits of the γ-level credibility interval
for A∞ are A∞,ε1 and A∞,ε2, where

(8.4.1)    A∞,ε1 = [1 + ((V + ζ)/(U + τ)) · Beε2(n + ν, n + ω)/Beε1(n + ω, n + ν)]⁻¹

and

(8.4.2)    A∞,ε2 = [1 + ((V + ζ)/(U + τ)) · Beε1(n + ν, n + ω)/Beε2(n + ω, n + ν)]⁻¹,

where Beε(p, q) is the ε-fractile of Beta(p, q). Moreover, the fractiles
of the beta distribution are related to those of the F-distribution according
to the following formulae:

(8.4.3)    Beε2(p, q) = pFε2[2p, 2q]/(q + pFε2[2p, 2q])

and

(8.4.4)    Beε1(p, q) = p/(p + qFε2[2q, 2p]).

We illustrate these results in the following example.


EXAMPLE 8.3
Observations were taken on n = 72 renewal cycles of an insertion ma-
chine. It is assumed that TTF ~ E(β) and TTR ~ E(γ) in each cycle. The
observations gave the values U = 496.9 [min] and V = 126.3 [min]. Accord-
ing to these values, the MLE of A∞ is Â∞ = 496.9/(496.9 + 126.3) = .797.
Assume gamma prior distributions for λ and μ, with ν = 2, τ = .001,
ω = 2 and ζ = .005. We obtain from (8.4.3) and (8.4.4), for γ = .95,

Be.025(74, 74) = .420, Be.975(74, 74) = .580.

Finally, the credibility limits obtained from (8.4.1) and (8.4.2) are A∞,.025 =
.740 and A∞,.975 = .845. To conclude this example we remark that the
Bayes estimator of A∞, for the absolute deviation loss function, is the
median of the posterior distribution of A∞, given (U, V), namely A∞,.5.
In the present example n + ν = n + ω = 74. The Beta(74, 74) distribution
is symmetric; hence Be.5(74, 74) = .5. To obtain A∞,.5 we solve the
equation

((1 − A∞,.5)/A∞,.5)(U + τ) / {((1 − A∞,.5)/A∞,.5)(U + τ) + (V + ζ)} = Be.5(n + ν, n + ω).

In the present case we get

A∞,.5 = (1 + (V + ζ)/(U + τ))⁻¹ = 1/(1 + 126.305/496.901) = .797.

This is equal to the value of the MLE.
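A hedged sketch of (8.4.1)-(8.4.2) in Python (function names are ours). No F-table is assumed; instead the fractiles of the large, nearly symmetric Beta(n + ν, n + ω) posterior ratio are replaced by a normal approximation, so the computed limits are approximate and may differ slightly from table-based values:

```python
import math

# gamma = .95 credibility limits for the asymptotic availability, (8.4.1)-(8.4.2).
def beta_fractile_normal(z, p, q):
    # normal approximation to a fractile of Beta(p, q); adequate for large p, q
    mean = p / (p + q)
    sd = math.sqrt(p * q / ((p + q) ** 2 * (p + q + 1)))
    return mean + z * sd

def availability_limits(n, nu, omega, tau, zeta, U, V, z975=1.959964):
    be_hi = beta_fractile_normal(z975, n + nu, n + omega)   # ~ Be.975
    be_lo = beta_fractile_normal(-z975, n + nu, n + omega)  # ~ Be.025
    ratio = (V + zeta) / (U + tau)
    # uses Be_{e1}(n+omega, n+nu) = 1 - Be_{e2}(n+nu, n+omega), a beta identity
    lower = 1.0 / (1.0 + ratio * be_hi / (1.0 - be_hi))     # (8.4.1)
    upper = 1.0 / (1.0 + ratio * be_lo / (1.0 - be_lo))     # (8.4.2)
    return lower, upper

# Example 8.3: n = 72, nu = omega = 2, tau = .001, zeta = .005, U = 496.9, V = 126.3.
low, up = availability_limits(72, 2, 2, 0.001, 0.005, 496.9, 126.3)
```

The interval straddles the MLE Â∞ = .797, as it must.
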




8.5 Empirical Bayes Method


Empirical Bayes estimation is designed to utilize the information in large
samples to estimate the Bayes estimator, without specifying the prior dis-
tribution. We introduce the idea in relation to estimating the parameter λ
of a Poisson distribution.
Suppose that we have a sequence of independent trials; in each trial a
value of λ (failure rate) is chosen from some prior distribution H(λ), and
then a value of X is chosen from the Poisson distribution P(λ). If this is re-
peated n times we have n pairs (λ1, x1), ..., (λn, xn). The statistician, how-
ever, can observe only the values x1, x2, ..., xn. Let fn(i), i = 0, 1, 2, ..., be
the empirical PDF of the observed variable X, i.e., fn(i) = (1/n) Σ(j=1..n) I{xj = i}.
A new trial is to be performed. Let Y be the observed variable in the new
trial. It is assumed that Y has a Poisson distribution with mean λ, which will
be randomly chosen from the prior distribution H(λ). The statistician has
to estimate the new value of λ from the observed value y of Y. Suppose
that the loss function for erroneous estimation is the squared-error loss,
(λ̂ − λ)². The Bayes estimator, if H(λ) is known, is

(8.5.1)    EH{λ | y} = ∫₀^∞ λ^(y+1) e^(−λ) h(λ)dλ / ∫₀^∞ λ^y e^(−λ) h(λ)dλ,

where h(λ) is the prior PDF of λ.
The predictive PDF of Y, under H, is

(8.5.2)    fH(y) = (1/y!) ∫₀^∞ λ^y e^(−λ) h(λ)dλ.

The Bayes estimator of λ (8.5.1) can be written in the form

(8.5.3)    EH{λ | y} = (y + 1) fH(y + 1)/fH(y).

The empirical PDF fn(y) converges (by the Strong Law of Large Numbers)
in a probabilistic sense, as n → ∞, to fH(y). Accordingly, replacing fH(y)
in (8.5.3) with fn(y) we obtain an estimator of EH{λ | y} based on the past
n trials. This estimator is called an empirical Bayes estimator (EBE)
of λ:

(8.5.4)    λ̂n(y) = (y + 1) fn(y + 1)/fn(y),    y = 0, 1, ... .

In the following example we illustrate this estimation method.



EXAMPLE 8.4
n = 188 batches of circuit boards were inspected for soldering defects.
Each board has typically several hundred soldering points, and each batch
contained several hundred boards. It is assumed that the number of solder-
ing defects, X (per 10⁵ points), has a Poisson distribution. In the following
table we present the frequency distribution of X among the 188 observed
batches.

Table 8.1. Empirical Distribution of Number of


Soldering Defects (per 100,000 Points).

x 0 1 2 3 4 5 6 7 8
f(x) 4 21 29 32 19 14 13 5 8

x 9 10 11 12 13 14 15 16 17 18
f(x) 5 9 1 2 4 4 1 4 2 1

x 19 20 21 22 23 24 25 26 Total
f(x) 1 1 1 1 2 1 2 1 188

Accordingly, if in a new batch the number of defects (per 10⁵ points) is
y = 8, the EBE of λ is λ̂188(8) = 9 × (5/8) = 5.625 (per 10⁵), or 56.25 (per 10⁶
points), i.e., 56.25 PPM. After observing Y189 = 8 we can increase the
frequency f188(8) by 1, i.e., f189(8) = f188(8) + 1, and observe the next batch.
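The EBE of this example can be sketched directly from the frequency table (the helper name is ours):

```python
# Empirical Bayes estimate (8.5.4) from the frequency table of Example 8.4.
# freq[x] = number of batches with x defects (per 10^5 points), n = 188 batches.
freq = {0: 4, 1: 21, 2: 29, 3: 32, 4: 19, 5: 14, 6: 13, 7: 5, 8: 8,
        9: 5, 10: 9, 11: 1, 12: 2, 13: 4, 14: 4, 15: 1, 16: 4, 17: 2,
        18: 1, 19: 1, 20: 1, 21: 1, 22: 1, 23: 2, 24: 1, 25: 2, 26: 1}

def eb_poisson(freq, y):
    # (y + 1) f_n(y + 1) / f_n(y); the 1/n factors cancel, so counts suffice
    return (y + 1) * freq.get(y + 1, 0) / freq[y]

lam_hat = eb_poisson(freq, 8)   # 9 * 5 / 8 = 5.625, i.e., 56.25 PPM
```
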

The above method of deriving an EBE can be employed for any PDF
f(x; θ) of a discrete distribution such that

f(x + 1; θ)/f(x; θ) = a(x) + b(x)θ.

In such a case, the EBE of θ is

(8.5.5)    θ̂n(y) = (fn(y + 1)/fn(y) − a(y))/b(y).

Generally, however, it is difficult to obtain an estimator which converges,
as n increases, to the value of the Bayes estimator. A parametric EB
procedure is one in which, as part of the model, we assume that the prior
distribution belongs to a parametric family, but the parameter of the prior
distribution is consistently estimated from the past data. For example, if
the model assumes that the observed TTF is E(β) and that β ~ IG(ν, τ),
then instead of specifying the values of ν and τ, we use the past data to esti-
mate them, and thereby obtain an estimator which converges in a

probabilistic sense, as n increases, to the Bayes estimator. An example of


such a parametric EBE is given below.
EXAMPLE 8.5
Suppose that T ~ E(β) and β has a prior IG(ν, τ). The Bayes estimator
of the reliability function is given by (8.2.5).
Let t1, t2, ..., tn be past independent observations on T.
The expected value of T under the predictive PDF is

(8.5.6)    Eτ,ν{T} = 1/(τ(ν − 1)),

provided ν > 1. The second moment of T is

(8.5.7)    Eτ,ν{T²} = 2/(τ²(ν − 1)(ν − 2)),

provided ν > 2.
Let M1,n = (1/n) Σ(i=1..n) ti and M2,n = (1/n) Σ(i=1..n) ti². M1,n and M2,n converge in a
probabilistic sense to Eτ,ν{T} and Eτ,ν{T²}, respectively. We estimate τ
and ν by the method of moment equations, by solving

(8.5.8)    M1,n = 1/(τ̂(ν̂ − 1))

and

(8.5.9)    M2,n = 2/(τ̂²(ν̂ − 1)(ν̂ − 2)).

Let Dn² = M2,n − M1,n² be the sample variance. Simple algebraic manipu-
lations yield the estimators

(8.5.10)    τ̂n = (Dn² − M1,n²)/(M1,n(Dn² + M1,n²))

and

(8.5.11)    ν̂n = 2Dn²/(Dn² − M1,n²),

provided Dn² > M1,n². It can be shown that for large values of n, Dn² > M1,n²
with high probability.
Substituting the empirical estimates τ̂n and ν̂n in (8.2.5) we obtain a
parametric EBE of the reliability function.
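A sketch of the moment-equation estimators (8.5.10)-(8.5.11); as a consistency check, feeding in the exact predictive moments (8.5.6)-(8.5.7) for a chosen (ν, τ) should recover those values (the function name is ours):

```python
# Moment-equation estimators (8.5.10)-(8.5.11) for the parameters (tau, nu)
# of the IG(nu, tau) prior, from the first two sample moments.
def moment_estimates(M1, M2):
    D2 = M2 - M1 ** 2                      # sample variance
    if D2 <= M1 ** 2:
        raise ValueError("estimators require D2 > M1^2")
    tau_hat = (D2 - M1 ** 2) / (M1 * (D2 + M1 ** 2))
    nu_hat = 2 * D2 / (D2 - M1 ** 2)
    return tau_hat, nu_hat

# Check with the exact predictive moments for nu = 4, tau = .01:
# E{T} = 1/(.01 * 3) and E{T^2} = 2/(.01^2 * 3 * 2), per (8.5.6)-(8.5.7).
tau_hat, nu_hat = moment_estimates(1 / (0.01 * 3), 2 / (0.01 ** 2 * 3 * 2))
# recovers tau_hat ~ .01 and nu_hat ~ 4.
```
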


For additional results on the EBE of reliability functions, see Martz and
Waller (1982) and Tsokos and Shimi (1977).

8.6 Exercises
[8.1.1] Suppose that the TTF of a system is a random variable having
an exponential distribution, E(β). Suppose also that the prior distri-
bution of λ = 1/β is G(2.25, .01).
(i) What is the posterior distribution of λ, given T = 150 [hr]?
(ii) What is the Bayes estimator of β, for the squared-error loss?
(iii) What is the posterior SD of β?
[8.1.2] Let J(t) denote the number of failures of a device in the time inter-
val (0, t]. After each failure the device is instantaneously renewed.
Let J(t) have a Poisson distribution with mean λt. Suppose that
λ has a gamma prior distribution, with parameters ν = 2 and
τ = .05.
(i) What is the predictive distribution of J(t)?
(ii) Given that J(t)/t = 10, how many failures are expected in the
next time unit?
(iii) What is the Bayes estimator of λ, for the squared-error loss?
(iv) What is the posterior SD of λ?
[8.1.3] The proportion of defectives, θ, in a production process has a uni-
form prior distribution on (0, 1). A random sample of n = 10 items
from this process yields K10 = 3 defectives.
(i) What is the posterior distribution of θ?
(ii) What is the Bayes estimator of θ for the absolute error loss?
[8.1.4] Let X ~ P(λ) and suppose that λ has the Jeffreys improper prior
h(λ) = 1/√λ. Find the Bayes estimator for squared-error loss and
its posterior SD.
[8.2.1] Apply formula (8.2.3) to determine the Bayes estimator of the re-
liability when n = 50 and K50 = 49.
[8.2.2] A system has three modules, M1, M2, M3. M1 and M2 are con-
nected in series and these two are connected in parallel to M3,
i.e.,

Rsys = 1 − (1 − R1R2)(1 − R3),

where Ri is the reliability of module Mi. The TTFs of the three
modules are independent random variables having exponential dis-
tributions with prior IG(νi, τi) distributions of their MTTF. More-
over, ν1 = 2.5, ν2 = 2.75, ν3 = 3, τ1 = τ2 = τ3 = 1/1000. In sepa-
rate independent trials of the TTF of each module we obtained the
statistics Tn⁽¹⁾ = 4565 [hr], Tn⁽²⁾ = 5720 [hr] and Tn⁽³⁾ = 7505 [hr],
where in all three experiments n = r = 10. Determine the Bayes
estimator of Rsys, for the squared-error loss.

[8.3.1] n = 30 computer monitors were put on test at a temperature of
100°F and relative humidity of 90% for 240 [hr]. The number of
monitors which survived this test is K30 = 28. Determine the
Bayes credibility interval for R(240), at level γ = .95, with respect
to a uniform prior on (0, 1).
[8.3.2] Determine a γ = .95 level credibility interval for R(t) at t = 25 [hr]
when TTF ~ E(β), β ~ IG(3, .01), r = 27, Tn,r = 3500 [hr].
[8.4.1] Under the conditions of Exercise [8.3.2] determine a Bayes predic-
tion interval for the total life of s = 2 devices.
[8.4.2] A repairable system has exponential TTF and exponential TTR,
which are independent of each other. n = 100 renewal cycles were
observed. The total time till failure was 10,050 [hr] and the total
repair time was 500 [min]. Assuming gamma prior distributions
for λ and μ, with ν = ω = 4 and τ = .0004 [hr], ζ = .01 [min], find
a γ = .95 level credibility interval for A∞.
[8.5.1] In reference to Example 8.4, suppose that the data of Table 8.1
were obtained for a Poisson random variable where λ1, ..., λ188
have a gamma (ν, τ) prior distribution.
(i) What is the predictive distribution of the number of defects per
batch?
(ii) Find the formulae for the first two moments of the predictive
distribution.
(iii) Find, from the empirical frequency distribution of Table 8.1,
the first two sample moments.
(iv) Use the method of moment equations to estimate the prior
parameters ν and τ.
(v) What is the Bayes estimator of λ if X189 = 8?
9
Reliability Demonstration:
Testing and Acceptance
Procedures

9.1 Reliability Demonstration


Reliability demonstration is a procedure for testing whether the reliabil-
ity of a given device (system) at a certain age is sufficiently high. More
precisely, a time point t0 and a desired reliability R0 are specified, and we
wish to test whether the reliability of the device at age t0, R(t0), satisfies
the requirement that R(t0) ≥ R0. If the life distribution of the device is
completely known, including all parameters, there is no problem of relia-
bility demonstration - one computes R(t0) exactly and determines whether
R(t0) ≥ R0. If, as is generally the case, either the life distribution or its
parameters are unknown, then the problem of reliability demonstration is
that of obtaining suitable data and using them to test the statistical hy-
pothesis that R(t0) ≥ R0 versus the alternative that R(t0) < R0. Thus,
the theory of testing statistical hypotheses provides the tools for reliability
demonstration. In the present section we review some of the basic notions
of hypothesis testing as they pertain to reliability demonstration.
On the basis of the data, we must choose between two hypotheses: the
null hypothesis, H0, which specifies that R(t0) ≥ R0, and the alterna-
tive hypothesis, H1, which states that R(t0) < R0. Since the decision
whether to accept or reject H0 depends on the sample data, there is always
a possibility that an erroneous decision will be made. We distinguish be-
tween two types of errors. An error of Type I is committed if H0 is rejected
when it is true, i.e., when R(t0) ≥ R0, and an error of Type II is commit-
ted if H0 is accepted when it is false, i.e., when R(t0) < R0. We wish to
design a statistical test which will entail small probabilities of error. It is
common to specify a small value α as an upper limit for the probability of
a Type I error and construct the test accordingly. This value is called the

significance level of the test. One may also specify that the test have a
particular probability, β, of a Type II error when R(t0) equals some partic-
ular value R1 less than R0. R1 can be regarded as a clearly unacceptable
reliability level, under which we require the test to have high probability
(1 − β) of rejecting the null hypothesis.
The characteristics of any given hypothesis test can be summarized by its
operating characteristic (OC) function. For any value R, 0 < R < 1,
the value of the OC function at R, OC(R), represents the probability that
the test will accept H0 when the true value of R(t0) is R, i.e.,

OC(R) = Pr{test accepts H0 | R(t0) = R}.

Since any reasonable test for the reliability demonstration problem will have
an OC function that increases with R, the two requirements mentioned
above are expressible as follows:

OC(R0) = 1 − α and OC(R1) = β.
In the following sections we develop several tests of interest in reliability
demonstration. We remark here that procedures for obtaining confidence
intervals for R(t0), which were discussed in Chapter 7, can be used to
provide tests of hypotheses. Specifically, the procedure involves computing
the upper confidence limit of a (1 − 2α)-level confidence interval for R(t0)
and comparing it with the value R0. If the upper confidence limit exceeds
R0 then the null hypothesis is accepted; otherwise it is rejected. This test
will have a significance level of α.
For example, if the specification of the reliability at age t = t0 is R0 = .75
and the confidence interval for R(t0), at level of confidence γ = .90, is
(.80, .85), the hypothesis H0 can be immediately accepted at a level of
significance of α = (1 − γ)/2 = .05. There is a duality between procedures
for testing hypotheses and for confidence intervals.

9.2 Binomial Testing


A random sample of n devices is put on life test simultaneously. Let Jn be
the number of failures in the time interval [0, t0) and Kn = n − Jn. We
have seen that Kn ~ B(n, R(t0)). Thus, if H0 is true, i.e., R(t0) ≥ R0, the
values of Kn will tend to be larger, in a probabilistic sense, than when H1
is true (R(t0) < R0). Thus, one tests H0 by specifying a critical value Cα
and rejecting H0 whenever Kn ≤ Cα. The critical value Cα is chosen as
the largest value satisfying

(9.2.1)    FB(Cα; n, R0) ≤ α.

The OC function of this test, as a function of the true reliability R, is

(9.2.2)    OC(R) = Pr{Kn > Cα | R(t0) = R} = 1 − FB(Cα; n, R).

If n is large, then one can apply the normal approximation to the bi-
nomial CDF. In these cases we can determine Cα to be the integer most
closely satisfying

(9.2.3)    Φ((Cα + 1/2 − nR0)/(nR0(1 − R0))^(1/2)) = α.

Generally, this will be given by

(9.2.4)    Cα = integer closest to {nR0 − 1/2 − z1−α(nR0(1 − R0))^(1/2)},

where z1−α = Φ⁻¹(1 − α). The OC function of this test in the large sample
case is approximated by

(9.2.5)    OC(R) ≈ Φ((nR − Cα − 1/2)/(nR(1 − R))^(1/2)).

The normal approximation is quite accurate whenever n > 9/(R(1 − R)).
If in addition to specifying α, we specify that the test have Type II error
probability β when R(t0) = R1, then the normal approximation provides
us with a formula for the necessary sample size:

(9.2.6)    n ≈ ((z1−α σ0 + z1−β σ1)/(R0 − R1))²,

where σi² = Ri(1 − Ri), i = 0, 1.


EXAMPLE 9.1
Suppose that we wish to test at significance level a = .05 the null hy-
pothesis that the reliability at age 1000 [hr] of a particular system is at
least 85%. If the reliability is 80% or less, we want to limit the chances of
accepting the null hypothesis to (3 = .10. Our test is to be based on K n , the
number of systems, out of a random sample of n, surviving at least 1000
hours of operation. Setting Ro = .85 and RI = .80, we have (To = .357,
(TI = .4, Z.95 = 1.645, Z.90 = 1.282. Substituting in (9.2.6) we obtain that
the necessary sample size is n = 483. From (9.2.4) we obtain the critical
value C. 05 = 397.
We see that in binomial testing one may need very large samples to
satisfy the specifications of the test. If in the above problem we reduce
the sample size to n = 100 then C. 05 = 79. However, now the probabil-
ity of accepting the null hypothesis when R = .80 is, according to (9.2.5),
OC(.8J = ~(.125) = .55, which is considerably higher than the correspond-
ing probability of .10 under n = 483.
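A sketch of the design computations (9.2.4)-(9.2.6) in Python (function names are ours). Note that, depending on rounding, (9.2.6) gives n in the neighborhood of 483-485; the text reports 483:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Sample size (9.2.6) and critical value (9.2.4) for binomial reliability
# demonstration; z-values as in the text (z.95 = 1.645, z.90 = 1.282).
def sample_size(R0, R1, z_alpha, z_beta):
    s0 = math.sqrt(R0 * (1 - R0))
    s1 = math.sqrt(R1 * (1 - R1))
    return ((z_alpha * s0 + z_beta * s1) / (R0 - R1)) ** 2

def critical_value(n, R0, z_alpha):
    return round(n * R0 - 0.5 - z_alpha * math.sqrt(n * R0 * (1 - R0)))

n = sample_size(0.85, 0.80, 1.645, 1.282)     # ~ 484 before rounding
c = critical_value(483, 0.85, 1.645)          # 397, as in the text
# With only n = 100, c = 79 and the OC at R = .80 is far too high:
oc = norm_cdf((100 * 0.80 - critical_value(100, 0.85, 1.645) - 0.5)
              / math.sqrt(100 * 0.80 * 0.20))
```
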


9.3 Exponential Distributions


Suppose that we know that the life distribution is exponential, E(β), but β
is unknown. The hypotheses

H0 : R(t0) ≥ R0
versus
H1 : R(t0) < R0

can be rephrased in terms of the unknown parameter β as

H0 : β ≥ β0
versus
H1 : β < β0,

where β0 = −t0/ln R0. Let t1, ..., tn be the values of a (complete) random
sample of size n. Let t̄n = (1/n) Σ(i=1..n) ti. The hypothesis H0 is rejected if t̄n < Cα,
where

(9.3.1)    Cα = (β0/2n) χ²α[2n].

The OC function of this test, as a function of β, is

(9.3.2)    OC(β) = Pr{t̄n > Cα | β} = Pr{χ²[2n] > (β0/β) χ²α[2n]}.

If we require that at β = β1 the OC function of the test assume the
value γ, then the sample size n should satisfy

(9.3.3)    χ²1−γ[2n] = (β0/β1) χ²α[2n].

The fractiles of χ²[2n], for n ≥ 15, can be approximated by the formula

(9.3.4)    χ²ε[2n] ≈ 2n(1 − 1/(9n) + zε/(3√n))³.

Substituting this approximation in (9.3.3) and solving for n, we obtain the
approximation

(9.3.5)    n ≈ ((z1−γ + ρ^(1/3) z1−α)/(3(ρ^(1/3) − 1)))²,

where ρ = β0/β1.



EXAMPLE 9.2
Suppose that in Example 9.1 we know that the system lifetimes are
exponentially distributed. It is interesting to examine how many systems
would have to be tested in order to achieve the same error probabilities as
before, if our decision were now based on t̄n.
Since β = −t/ln R(t), the value of the parameter β under R(t0) =
R(1000) = .85 is β0 = −1000/ln(.85) = 6153 [hr], while its value under
R(t0) = .80 is β1 = −1000/ln(.80) = 4481 [hr]. Substituting these values
into (9.3.5), along with α = .05 and γ = .10 (γ was denoted by β in Example
9.1), we obtain the necessary sample size n ≈ 87.
Thus we see that the additional knowledge that the lifetime distribution
is exponential, along with the use of complete lifetime data on the sample,
allows us to achieve a greater than fivefold increase in efficiency in terms
of the sample size necessary to achieve the desired error probabilities.
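The sample-size approximation (9.3.5) takes only a few lines (the function name is ours):

```python
import math

# Approximate sample size (9.3.5) for the exponential reliability test.
def exp_sample_size(beta0, beta1, z1_alpha, z1_gamma):
    rho3 = (beta0 / beta1) ** (1.0 / 3.0)   # rho^(1/3), rho = beta0/beta1
    return math.ceil(((z1_gamma + rho3 * z1_alpha) / (3.0 * (rho3 - 1.0))) ** 2)

beta0 = -1000.0 / math.log(0.85)    # ~ 6153 [hr]
beta1 = -1000.0 / math.log(0.80)    # ~ 4481 [hr]
n = exp_sample_size(beta0, beta1, 1.645, 1.282)   # 87, as in the text
```
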

We remark that if the sample is censored at the r-th failure then all the
formulae developed above apply after replacing n by r, and t̄n by β̂n,r =
Tn,r/r.
EXAMPLE 9.3
Suppose that the reliability at age t = 250 [hr] should be at least R0 =
.85. Let R1 = .75. The corresponding values of β0 and β1 are 1538 [hr] and
869 [hr], respectively. Suppose that the sample is censored at the r = 25th
failure. Let β̂n,r = Tn,r/25 be the MLE of β. H0 is rejected, with level of
significance α = .05, if

β̂n,r ≤ (1538/50) χ².05[50] = 1069 [hr].

The Type II error probability of this test, at β = 869, is

OC(869) = Pr{χ²[50] > (1538/869) χ².05[50]}
        = Pr{χ²[50] > 61.5}
        ≈ 1 − Φ((61.5 − 50)/√100)
        = .125.
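A sketch of the computations of this example (variable names are ours); the χ²-fractile is a table value, and the OC uses the same normal approximation to χ²[50] (mean 2r, variance 4r) as the text:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Censored exponential test of Example 9.3, with r = 25 failures.
beta0, beta1, r = 1538.0, 869.0, 25
chi2_05 = 34.76                       # chi-square .05-fractile at 2r = 50 d.f.
limit = (beta0 / (2 * r)) * chi2_05   # reject H0 if beta-hat <= limit, ~ 1069
# Type II error probability at beta1, by the normal approximation:
x = (beta0 / beta1) * chi2_05         # ~ 61.5
oc = 1.0 - norm_cdf((x - 2 * r) / math.sqrt(4 * r))   # ~ .125
```
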


9.4 Sequential Reliability Testing
Sometimes in reliability demonstration an overriding concern is keeping the
number of items tested to a minimum, subject to whatever accuracy re-
quirements are imposed. This could be the case, for example, when testing

very complex and expensive systems. In such cases, it may be worthwhile
applying a sequential testing procedure, where items are tested one at a time
in sequence until the procedure indicates that testing can stop and a deci-
sion be made. Such an approach would also be appropriate when testing
prototypes of some new design, which are being produced one at a time at
a relatively slow rate.
The Wald sequential probability ratio test (SPRT) provides sequen-
tial procedures for testing H0 : R(t0) ≥ R0 vs. H1 : R(t0) < R0 with ap-
proximate significance level α and approximate Type II error probability γ
when R(t0) = R1. The SPRT has the optimal property of minimizing the
expected sample size when R(t0) equals R0 or R1.
The SPRT is based on the likelihood ratio Λn(Xn) = L1(Xn)/L0(Xn),
where Li(Xn) represents the likelihood function for the observed data Xn
under R(t0) = Ri, i = 0, 1. For suitably chosen A and B, the SPRT makes
the following decision after n observations:
the following decision after n observations:

Continue sampling if A < An(Xn) < B,

(9.4.1)

The values A and B are given by the approximations

(9.4.2)    A ≈ γ/(1 − α),    B ≈ (1 − γ)/α.
In many cases, it is more convenient to express the test in terms of the
statistic ln Λn(Xn) and the boundaries ln A and ln B.
We will consider the SPRT in detail only as it applies to the two special
cases considered for the non-sequential case in Sections 9.2 and 9.3.

9.4.1 The SPRT for Binomial Data


Without any assumptions about the lifetime distribution of a device, we
can test hypotheses concerning R(t0) by simply observing whether or not a
device survives to age t0. Letting Kn represent the number of devices among
n randomly selected ones surviving to age t0, we have Kn ~ B(n, R(t0)).
The likelihood ratio is given by

Λn = (R1/R0)^Kn ((1 − R1)/(1 − R0))^(n−Kn).

Thus,

ln Λn = n ln((1 − R1)/(1 − R0)) − Kn ln(R0(1 − R1)/(R1(1 − R0))).

It follows from (9.4.1) and (9.4.2) that the SPRT can be expressed in terms
of Kn as follows:

Continue sampling if −h1 + sn < Kn < h2 + sn,

(9.4.3)    Accept H0 if Kn ≥ h2 + sn,

Reject H0 if Kn ≤ −h1 + sn,

where

s = ln((1 − R1)/(1 − R0)) / ln(R0(1 − R1)/(R1(1 − R0))),

(9.4.4)    h1 = ln((1 − γ)/α) / ln(R0(1 − R1)/(R1(1 − R0))),

h2 = ln((1 − α)/γ) / ln(R0(1 − R1)/(R1(1 − R0))).

Note that if we plot Kn vs. n, the accept and reject boundaries are parallel
straight lines with common slope s and intercepts h2 and −h1, respectively.
The OC function of this test is expressible (approximately) in terms of
an implicit parameter ψ. Letting

(9.4.5)    R^(ψ) = [((1 − R1)/(1 − R0))^ψ − 1] / [((1 − R1)/(1 − R0))^ψ − (R1/R0)^ψ],   ψ ≠ 0,
           R^(0) = s,

we have that the OC function at R(t0) = R^(ψ) is given by

(9.4.6)    OC(R^(ψ)) = [((1 − γ)/α)^ψ − 1] / [((1 − γ)/α)^ψ − (γ/(1 − α))^ψ],   ψ ≠ 0,
           OC(R^(0)) = ln((1 − γ)/α) / [ln((1 − γ)/α) − ln(γ/(1 − α))],   ψ = 0.

It is easily verified that for ψ = 1, R^(ψ) equals R0 and OC(R^(ψ)) equals
1 − α, while for ψ = −1, R^(ψ) equals R1 and OC(R^(ψ)) equals γ.
The expected sample size, or average sample number (ASN), as a
function of R^(ψ), is given by

(9.4.7)    ASN(R^(ψ)) = [OC(R^(ψ)) ln(γ/(1 − α)) + (1 − OC(R^(ψ))) ln((1 − γ)/α)]
                        / [ln((1 − R1)/(1 − R0)) − R^(ψ) ln(R0(1 − R1)/(R1(1 − R0)))],   ψ ≠ 0,
           ASN(R^(0)) = h1h2/(s(1 − s)),   ψ = 0.

The ASN function will typically have a maximum at some value of R be-
tween R0 and R1, and decrease as R moves away from the point of maximum
in either direction.
EXAMPLE 9.4
Consider Example 9.2, where we had t = 1000 [hr], R0 = .85, R1 = .80,
α = .05, γ = .10. Suppose now that systems are tested sequentially, and
we apply the SPRT based on the number of systems still functioning at
1000 [hr]. Using (9.4.4), the parameters of the boundary lines are s = .826,
h1 = 8.30, and h2 = 6.46.
The OC and ASN functions of the test are given in Table 9.1, for selected
values of ψ.
Compare the values in the ASN column to the sample size required for
the corresponding fixed-sample test, n = 483. It is clear that the SPRT
effects a considerable saving in sample size, particularly when R(t0) is less
than R1 or greater than R0. Note also that the maximum ASN value occurs
when R(t0) is near s.
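The boundary parameters (9.4.4) and the OC/ASN formulae (9.4.5)-(9.4.7) can be sketched as follows (function names are ours; the helper assumes ψ ≠ 0):

```python
import math

# SPRT boundary parameters (9.4.4) and OC/ASN values (9.4.5)-(9.4.7)
# for binomial reliability data.
def sprt_binomial(R0, R1, alpha, gamma):
    b = math.log(R0 * (1 - R1) / (R1 * (1 - R0)))
    s = math.log((1 - R1) / (1 - R0)) / b
    h1 = math.log((1 - gamma) / alpha) / b
    h2 = math.log((1 - alpha) / gamma) / b
    return s, h1, h2

def oc_asn(psi, R0, R1, alpha, gamma):       # valid for psi != 0
    A, B = gamma / (1 - alpha), (1 - gamma) / alpha
    x = (1 - R1) / (1 - R0)
    R = (x ** psi - 1) / (x ** psi - (R1 / R0) ** psi)          # (9.4.5)
    oc = (B ** psi - 1) / (B ** psi - A ** psi)                 # (9.4.6)
    b = math.log(R0 * (1 - R1) / (R1 * (1 - R0)))
    asn = (oc * math.log(A) + (1 - oc) * math.log(B)) / (math.log(x) - R * b)
    return R, oc, asn

s, h1, h2 = sprt_binomial(0.85, 0.80, 0.05, 0.10)   # .826, 8.30, 6.46
R, oc, asn = oc_asn(1.0, 0.85, 0.80, 0.05, 0.10)    # the psi = 1 row of Table 9.1
```

At ψ = 1 this reproduces R^(ψ) = .85, OC = .95 and ASN ≈ 238, matching Table 9.1.
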

9.4.2 The SPRT for Exponential Lifetimes
When the lifetime distribution is known to be exponential, we have seen
the increase in efficiency gained by measuring the actual failure times of the
parts being tested. By using a sequential procedure based on these failure
times, further gains in efficiency can be achieved.
Expressing the hypotheses in terms of the parameter β of the lifetime
distribution E(β), we wish to test H0 : β ≥ β0 vs. H1 : β < β0, with
significance level α and Type II error probability γ when β = β1, where
β1 < β0. Letting tn = (t1, ..., tn) be the times till failure of the first n
parts tested, the likelihood ratio statistic is given by

Λn(tn) = (β0/β1)^n exp{−(1/β1 − 1/β0) Σ(i=1..n) ti}.

Table 9.1. OC and ASN Values for the SPRT of Example 9.4

ψ    R^(ψ)    OC(R^(ψ))    ASN(R^(ψ))


-2.0 0.7724 0.0110 152.0
-1.8 0.7780 0.0173 167.9
-1.6 0.7836 0.0270 186.7
-1.4 0.7891 0.0421 208.6
-1.2 0.7946 0.0651 234.1
-1.0 0.8000 0.1000 263.0
-0.8 0.8053 0.1512 294.2
-0.6 0.8106 0.2235 325.5
-0.4 0.8158 0.3193 352.7
-0.2 0.8209 0.4357 370.2
0.0 0.8259 0.5621 373.1
0.2 0.8309 0.6834 360.2
0.4 0.8358 0.7858 334.8
0.6 0.8406 0.8629 302.6
0.8 0.8453 0.9159 269.1
1.0 0.8500 0.9500 238.0
1.2 0.8546 0.9709 210.8
1.4 0.8590 0.9833 187.8
1.6 0.8634 0.9905 168.6
1.8 0.8678 0.9946 152.7
2.0 0.8720 0.9969 139.4

Thus,

ln Λn(tn) = n ln(β0/β1) − (1/β1 − 1/β0) Σ(i=1..n) ti.
Applying (9.4.1) and (9.4.2), we obtain the following SPRT:

Continue sampling if −h1 + sn < Σ(i=1..n) ti < h2 + sn,

(9.4.8)    Accept H0 if Σ(i=1..n) ti ≥ h2 + sn,

Reject H0 if Σ(i=1..n) ti ≤ −h1 + sn,

where

(9.4.9)    s = ln(β0/β1)/(1/β1 − 1/β0),    h1 = ln((1 − γ)/α)/(1/β1 − 1/β0),
           h2 = ln((1 − α)/γ)/(1/β1 − 1/β0).

Thus, if we plot Σ(i=1..n) ti vs. n, the accept and reject boundaries are again
parallel straight lines.
As before, let ψ be an implicit parameter, and define

(9.4.10)    β^(ψ) = [(β0/β1)^ψ − 1]/[ψ(1/β1 − 1/β0)],   ψ ≠ 0,
            β^(0) = s.

Then the OC and ASN functions are approximately given by

(9.4.11)    OC(β^(ψ)) = [((1 − γ)/α)^ψ − 1]/[((1 − γ)/α)^ψ − (γ/(1 − α))^ψ],   ψ ≠ 0,
            OC(β^(0)) = ln((1 − γ)/α)/[ln((1 − γ)/α) − ln(γ/(1 − α))],   ψ = 0,

and

(9.4.12)    ASN(β^(ψ)) = [OC(β^(ψ)) ln(γ/(1 − α)) + (1 − OC(β^(ψ))) ln((1 − γ)/α)]
                         / [ln(β0/β1) − β^(ψ)(1/β1 − 1/β0)],   ψ ≠ 0,
            ASN(β^(0)) = h1h2/s²,   ψ = 0.
Note that when ψ = 1, β^(ψ) equals β0, while when ψ = −1, β^(ψ) equals β1.
EXAMPLE 9.5
Continuing Example 9.2, recall we had α = .05, γ = .10, β0 = 6153,
β1 = 4481. Using (9.4.9), the parameters of the boundaries of the SPRT

Table 9.2. OC and ASN Values for the SPRT of Example 9.5

'!f; /3("') OC(/3("'J) ASN(/3("'J)


-2.0 3872 0.0110 34.3
-1.8 3984 0.0173 37.1
-1.6 4101 0.0270 40.2
-1.4 4223 0.0421 43.8
-1.2 4349 0.0651 47.9
-1.0 4481 0.1000 52.4
-0.8 4618 0.1512 57.1
-0.6 4762 0.2235 61.4
-0.4 4911 0.3193 64.7
-0.2 5067 0.4357 66.1
0.0 5229 0.5621 64.7
0.2 5398 0.6834 60.7
0.4 5575 0.7858 54.8
0.6 5759 0.8629 48.1
0.8 5952 0.9159 41.5
1.0 6153 0.9500 35.6
1.2 6363 0.9709 30.6
1.4 6582 0.9833 26.4
1.6 6811 0.9905 22.9
1.8 7051 0.9946 20.1
2.0 7301 0.9969 17.8

are s = 5229, hi = 47662, h2 = 37124. The OC and ASN functions, for


selected values of ψ, are given in Table 9.2.
In Example 9.2, we saw that the fixed-sample test with the same α and
γ requires a sample size of n = 87. Thus, in the exponential case as well,
we see that the SPRT can result in substantial savings in sample size.
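The boundary parameters and the OC/ASN values of Table 9.2 can be recomputed from α, γ, β0, β1. The sketch below uses the standard Wald SPRT expressions; they reproduce s = 5229, h1 = 47662, h2 = 37124 and the ψ = 0.4 row of Table 9.2, so they agree with this text numerically, though the printed equations (9.4.9)-(9.4.12) are too garbled in this copy to check symbol by symbol:

```python
import math

def sprt_exponential(alpha, gamma, beta0, beta1):
    """Boundary parameters (cf. (9.4.9)) for the SPRT of H0: beta >= beta0
    vs. H1: beta <= beta1, exponential TTF."""
    d = 1.0 / beta1 - 1.0 / beta0
    s = math.log(beta0 / beta1) / d
    h1 = math.log((1 - gamma) / alpha) / d
    h2 = math.log((1 - alpha) / gamma) / d
    return s, h1, h2

def oc_asn(psi, alpha, gamma, beta0, beta1):
    """beta(psi), OC and ASN at the implicit parameter psi != 0 (cf. Table 9.2)."""
    s, h1, h2 = sprt_exponential(alpha, gamma, beta0, beta1)
    d = 1.0 / beta1 - 1.0 / beta0
    beta_psi = ((beta0 / beta1) ** psi - 1.0) / (psi * d)
    A = (1 - gamma) / alpha            # Wald likelihood-ratio limits
    B = gamma / (1 - alpha)
    oc = (A ** psi - 1.0) / (A ** psi - B ** psi)
    asn = (oc * (h1 + h2) - h1) / (beta_psi - s)   # Wald's identity
    return beta_psi, oc, asn

s, h1, h2 = sprt_exponential(.05, .10, 6153, 4481)
b, oc, asn = oc_asn(0.4, .05, .10, 6153, 4481)
```

The call above reproduces the ψ = 0.4 row of Table 9.2: β(ψ) ≈ 5575, OC ≈ .7858, ASN ≈ 54.8.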

It is obviously impractical to perform a sequential test, like the one



described in the above example, by running one system, waiting till it fails,
renewing it and running it again and again until a decision can be made.
In the above example, if the MTTF of the system is close to the value of
β0 = 6153 [hr], it takes on the average about 256 days between failures,
and on the average about 36 failures until a decision is reached. This trial may
take over 25 years. There are three ways to overcome this problem. The
first is to put on test several systems simultaneously. Thus, if in the trial
described in the above example 25 systems are tested simultaneously, the
expected duration of the test will be reduced to one year. Another way
is to consider a test based on a continuous time process, not on discrete
samples of failure times. The third possibility of reducing the expected test
duration is to perform accelerated life testing. In the following sections


we discuss these alternatives.

9.5 Sequential Tests for Poisson Processes


Suppose that we put n systems on test starting at t = 0. Suppose also that
any system which fails is instantaneously renewed, and at the renewal time
it is as good as new. In addition we assume that the life characteristics of
the systems are identical, the TTF of each system is exponential (with the
same β) and failures of different systems are independent of each other.
Under these assumptions, the number of failures in each system, in the
time interval (0, t], is a Poisson random variable with mean λt, where λ =
1/β.
Let Xn(t) = total number of failures among all the n systems during
the time interval (0, t]. Xn(t) ~ Pos(nλt), and the collection {Xn(t); 0 <
t < ∞} is called a Poisson process. We add the initial condition that
Xn(0) = 0.
The random function Xn(t), 0 < t < ∞, is a non-decreasing step function
which jumps one unit at each random failure time of the system. The
random functions Xn(t) satisfy:
(i) Xn(t) ~ Pos(nλt), for all 0 < t < ∞;
(ii) for any t1 < t2, Xn(t2) - Xn(t1) is independent of Xn(t1);
(iii) for any t1, t2, 0 < t1 < t2 < ∞, Xn(t2) - Xn(t1) ~ Pos(nλ(t2 - t1)).
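Properties (i)-(iii) are easy to probe by simulation. A sketch with illustrative values (n = 10 systems, λ = .01, all invented for this sketch): the increment over (t1, t2] is drawn as a Pos(nλ(t2 - t1)) variable, and its sample mean and variance should both be close to nλ(t2 - t1):

```python
import math
import random

random.seed(2025)

def poisson(mu):
    """One Poisson(mu) draw via Knuth's product method (fine for modest mu)."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

# Pooled process with n systems of rate lam: by property (iii) the increment
# over a disjoint interval (t1, t2] is Pos(n*lam*(t2 - t1)).
n, lam, t1, t2 = 10, 0.01, 100.0, 300.0
mean_inc = n * lam * (t2 - t1)                      # = 20 failures on average
incs = [poisson(mean_inc) for _ in range(20000)]
m = sum(incs) / len(incs)
v = sum((x - m) ** 2 for x in incs) / len(incs)     # Poisson: variance == mean
```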
We develop now the SPRT based on the random functions Xn(t).
The hypotheses H0 : β ≥ β0 versus H1 : β ≤ β1, for 0 < β1 < β0 < ∞,
are translated to the hypotheses H0 : λ ≤ λ0 versus H1 : λ ≥ λ1, where
λ = 1/β. The likelihood ratio at time t is

(9.5.1)   Λ(t; Xn(t)) = (λ1/λ0)^{Xn(t)} exp{-nt(λ1 - λ0)}.

The test continues as long as the random graph of (Tn(t), Xn(t)) is between
the two linear boundaries

(9.5.2)   bU(t) = h2 + sTn(t),   0 ≤ t < ∞,

and

(9.5.3)   bL(t) = -h1 + sTn(t),   0 ≤ t < ∞,

where Tn(t) = nt is the total time on test at t,

(9.5.4)   s = (λ1 - λ0)/log(λ1/λ0),
(9.5.5)   h1 = log((1-γ)/α)/log(λ1/λ0),

and

(9.5.6)   h2 = log((1-α)/γ)/log(λ1/λ0).

The instant Xn(t) jumps above bU(t) the test terminates and H0 is re-
jected; on the other hand, the instant Xn(t) = bL(t) the test terminates
and H0 is accepted. Acceptance of H0 entails that the reliability meets the
specified requirement. Rejection of H0 may lead to additional engineering
modification to improve the reliability of the system.
The OC function of this sequential test is the same as that given by
(9.4.10) and (9.4.11). Let T denote the random time of termination. It can
be shown that Pr_λ{T < ∞} = 1 for all 0 < λ < ∞. The expected duration
of the test is given approximately by

(9.5.7)   E_λ{T} ≈ E_λ{Xn(T)}/(nλ),

where

(9.5.8)   E_λ{Xn(T)} ≈ [h2 - OC(λ)(h1 + h2)]/(1 - s/λ),  if λ ≠ s;
          E_λ{Xn(T)} ≈ h1·h2,  if λ = s.

It should be noticed that formula (9.5.8) yields the same values as formula
(9.4.12) for λ = 1/β(ψ). The SPRT of Section 9.4 can terminate only after
a failure, while the SPRT based on Xn(t) may terminate while crossing the
lower boundary bL(t), before a failure occurs.
The minimal time required to accept H0 is τ0 = h1/(ns). In the case of
Example 9.5, with n = 20, τ0 = 9.11536/(20 × .0001912) = 2383.2 [hr].
That is, over 99 days of testing without any failure. The SPRT may be,
in addition, frequency censored by fixing a value x* so that as soon as
Xn(t) ≥ x* the test terminates and H0 is rejected. In Example 9.5 we see
that the expected number of failures at termination may be as large as 66.
We can censor the test at x* = 50. This will reduce the expected duration
of the test but will increase the probability of a Type I error, α. Special
programs are available for computing the operating characteristics of such
censored tests, but these are beyond the scope of the present text.
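On the failure-rate scale the constants (9.5.4)-(9.5.6) are the Wald quantities analogous to (9.4.9). Stated here as an assumption (the printed equations are incomplete in this copy), they reproduce the numbers quoted in this section for Example 9.5: s ≈ .0001912, h1 ≈ 9.115 and, with n = 20, the minimal acceptance time τ0 = h1/(ns) ≈ 2383 [hr]:

```python
import math

def poisson_sprt(alpha, gamma, lam0, lam1):
    """Boundary constants for the SPRT on the pooled process X_n(t):
    continue while -h1 + s*T_n(t) < X_n(t) < h2 + s*T_n(t)."""
    r = math.log(lam1 / lam0)
    s = (lam1 - lam0) / r
    h1 = math.log((1 - gamma) / alpha) / r
    h2 = math.log((1 - alpha) / gamma) / r
    return s, h1, h2

lam0, lam1 = 1 / 6153, 1 / 4481        # hypotheses of Example 9.5
s, h1, h2 = poisson_sprt(.05, .10, lam0, lam1)
n = 20
tau0 = h1 / (n * s)                    # minimal time on test needed to accept H0
```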
9.6 Bayesian Reliability Demonstration Tests


Consider, as in the previous section, the null hypothesis H0 : R(t0) ≥ R0,
against the alternative hypothesis H1 : R(t0) ≤ R1, where
0 < R1 < R0 < 1.
Suppose the model specifies that the PDF of the TTF is f(t; θ) and the
prior PDF of θ is h(θ). According to this model, the prior probability that
H0 is correct is

(9.6.1)   π0 = ∫_{θ: R(t0; θ) ≥ R0} h(θ) dθ.

The prior probability that H1 is correct is

(9.6.2)   π1 = ∫_{θ: R(t0; θ) ≤ R1} h(θ) dθ.

Let t1, t2, ..., tn be n observed values of the TTF and let t_n = (t1, ..., tn).
We will convert the prior distribution of θ to the posterior distribution,
given t_n. Let π0(t_n) and π1(t_n) be the posterior probabilities of H0 and
H1, respectively, given t_n. These probabilities are obtained from (9.6.1) and
(9.6.2) by replacing the prior PDF h(θ) by the posterior PDF h(θ | t_n).
Let l0 be the loss incurred by accepting H0 when actually R ≤ R1.
On the other hand, let l1 be the loss due to rejecting H0 when actually
R ≥ R0. The optimal Bayesian decision is to take the action entailing a
smaller expected posterior risk. Accordingly, H0 is accepted if

(9.6.3)   π1(t_n)/π0(t_n) ≤ l1/l0.

If inequality (9.6.3) does not hold it is optimal to reject H0.
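Rule (9.6.3) amounts to comparing expected posterior losses; a one-function sketch (the helper name is an invention of this sketch):

```python
def bayes_decision(pi0, pi1, l0, l1):
    """Accept H0 when its expected posterior loss l0*pi1 does not exceed
    the loss l1*pi0 of rejection -- i.e., when pi1/pi0 <= l1/l0 (rule (9.6.3))."""
    return "accept H0" if pi1 * l0 <= pi0 * l1 else "reject H0"
```

Written as a cross-product, the rule avoids dividing by a posterior probability that may be zero.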
EXAMPLE 9.6
Consider the exponential model, i.e., TTF ~ E(β). Let λ = 1/β. The
prior distribution of λ is G(ν, τ) with ν = 3 and τ = .01. The life testing is
censored at the 23rd failure.
The specifications require that the reliability at 50 hours should be
greater than R0 = .9. If R(50) ≤ R1 = .75, the system should be rejected.
Accordingly, λ0 = .002107 and λ1 = .005754. The posterior distribution
of λ, given Tn,23, is G(26, (Tn,23 + 100)^{-1}). Hence,

π0(Tn,23) = Pr{λ ≤ λ0 | Tn,23} = Pr{G(26, 1) ≤ λ0(Tn,23 + 100)}
          = 1 - Pos(25; λ0(Tn,23 + 100)).

Similarly,

π1(Tn,23) = Pr{λ ≥ λ1 | Tn,23} = Pos(25; λ1(Tn,23 + 100)).
In the following table we present the values of π0(Tn,23) and π1(Tn,23), and
the ratio π1(Tn,23)/π0(Tn,23), as a function of the total life.

Table 9.3. Posterior Probabilities as Functions of Total Life

Tn,23     π0(Tn,23)     π1(Tn,23)     π1/π0

4500      0             0.4505        ∞
6000      0.0009        0.0415        46.111
7000      0.0060        0.0033        0.550
8000      0.0263        0.0001        0.004

We see in the above table that, in order to accept H0 when l0 is greater
than l1 (l1/l0 < 1), we need large values of Tn,23.
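The entries of Table 9.3 involve only Poisson CDF values, so they can be recomputed directly. A sketch under the example's assumptions (the recomputed decimals track the table only approximately, since λ0 and λ1 are rounded in the text):

```python
import math

def pois_cdf(k, mu):
    """Pr{Pois(mu) <= k}, summed term by term."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

lam0, lam1 = 0.002107, 0.005754       # from R(50) = .90 and R(50) = .75

def posteriors(T):
    """pi0(T_{n,23}) and pi1(T_{n,23}) of Example 9.6 for total life T."""
    pi0 = 1.0 - pois_cdf(25, lam0 * (T + 100))
    pi1 = pois_cdf(25, lam1 * (T + 100))
    return pi0, pi1
```

As the table shows, the posterior odds ratio pi1/pi0 falls sharply as the total life grows.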

Several Bayesian risk concepts can be found in the literature in relation to
Bayesian Reliability Demonstration Tests (BRDT) (see Martz and Waller,
1982, Ch. 10).
A BRDT specifies, under the Bayesian model, how many units are put
on test, under what stress conditions (temperature, humidity, voltage, etc.),
for how long and for which values of the observed random variable (number
of failures or total life on test) the null hypothesis, H0 : R ≥ R0, is rejected.
In the previous sections we characterized the performance of a test by
its operating characteristics function OC(R), which is the probability of
accepting H0 when the true reliability value is R. In a Bayesian framework
this function is meaningless, since R is considered a random variable. If
H(R) is the prior CDF of R, having a PDF h(R), one could consider the
predictive acceptance probability, under H, which is

(9.6.4)   π_H = ∫_0^1 OC(R)h(R) dR.

The predictive rejection probability will be denoted by ψ_H = 1 - π_H.
The conditional producer's and consumer's risks, given that the
BRDT rejects or accepts H0, are, respectively,

(9.6.5)   α*_H = (1/ψ_H) ∫_{R0}^1 [1 - OC(R)]h(R) dR

and

(9.6.6)   γ*_H = (1/π_H) ∫_0^{R1} OC(R)h(R) dR.
Other risk concepts which are found in the literature are the average pre-
dictive risks

(9.6.7)
and

(9.6.8)

In the following examples we illustrate these Bayesian characteristics of


demonstration tests.
EXAMPLE 9.7
An insertion machine places 3500 components in an hour of operation.
Sometimes an error occurs in the placement operation. It is desired that the
number of such errors will not exceed 100 PPM (parts per 10^6 placements).
Accordingly, if the expected number of errors is smaller than 100 PPM the
machine is considered reliable. On the other hand, if the expected number
of errors exceeds 120 PPM the machine requires modification. It has been
decided to test a machine by operating it continuously for 30 hours, and
accept it if the total number of errors does not exceed x* = 15.
Let X denote the number of errors in 30 hours (105,000 placements).
The model is that X ~ Pos(λ). Let λ0 = 105,000 × 10^{-4} = 10.5 and
λ1 = 105,000 × 1.2 × 10^{-4} = 12.6. The null hypothesis is H0 : λ ≤ 10.5
while the alternative hypothesis is H1 : λ ≥ 12.6. We assume a prior
G(3, 3.5) distribution for λ. Recall that

OC(λ) = Pos(15; λ) = e^{-λ} Σ_{x=0}^{15} λ^x/x!.

Thus, the predictive acceptance probability is

π_H = Σ_{x=0}^{15} (1/x!) · (1/(Γ(3)(3.5)³)) ∫_0^∞ λ^{x+2} e^{-λ(1+1/3.5)} dλ

    = (1/(4.5)³) Σ_{x=0}^{15} [(x+1)(x+2)/2] (3.5/4.5)^x

    = .7978.

The predictive rejection probability is ψ_H = 1 - π_H = .2022.


We compute now the conditional producer's risk. This quantity is given
by

α*_H = (1/ψ_H) ∫_0^{λ0} [1 - Σ_{x=0}^{15} pos(x | λ)] g(λ | 3, 3.5) dλ

     = (1/ψ_H) { (1/(Γ(3)(3.5)³)) ∫_0^{10.5} λ² e^{-λ/3.5} dλ

                 - Σ_{x=0}^{15} (1/(x! Γ(3)(3.5)³)) ∫_0^{10.5} λ^{x+2} e^{-λ(4.5/3.5)} dλ }.
Notice that

(1/(Γ(3)(3.5)³)) ∫_0^{10.5} λ² e^{-λ/3.5} dλ = 1 - Pos(2; 3) = .577.

Also,

(1/(x! Γ(3)(3.5)³)) ∫_0^{10.5} λ^{x+2} e^{-λ(4.5/3.5)} dλ
        = [(x+1)(x+2)/(2(4.5)³)] (3.5/4.5)^x [1 - Pos(x+2; 13.5)].

Substituting these in the formula of α*_H, we obtain α*_H = .029.


On the other hand, the conditional consumer's risk is

γ*_H = (1/π_H) Σ_{x=0}^{15} ∫_{12.6}^∞ pos(x; λ) g(λ | 3, 3.5) dλ

     = (1/π_H) (1/(4.5)³) Σ_{x=0}^{15} [(x+1)(x+2)/2] (3.5/4.5)^x Pos(x+2; 16.2)

     = .848.

We see that the reliability demonstration test under consideration does not
protect the consumer. In order to protect the consumer the test should last
longer than 30 hours.
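The closed-form series for π_H above is a negative-binomial tail sum and takes two lines to evaluate; the computation below reproduces π_H = .7978:

```python
q = 3.5 / 4.5
# pi_H = (1/4.5^3) * sum_{x=0}^{15} [(x+1)(x+2)/2] * (3.5/4.5)^x
pi_H = sum((x + 1) * (x + 2) / 2 * q ** x for x in range(16)) / 4.5 ** 3
psi_H = 1 - pi_H   # predictive rejection probability
```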

A sequential BRDT is one in which the length of the test is not determined
ahead, but a stopping rule, which is a function of the data, is
chosen. When the stopping rule terminates the test, a decision whether to
accept H0 or to reject it is made according to (9.6.3). The question of which
stopping rule to choose is an important one. The reader who is interested
in learning about the theory of optimal stopping rules is referred to DeGroot
(1970) or Chow, Robbins and Siegmund (1971).
The following stopping rule is often applied. Choose numbers π̲ and π̄,
0 < π̲ < π̄ < 1. π̲ is chosen close to zero, and π̄ close to one. Let π_t^0 denote
the posterior probability of H0 at time t, 0 ≤ t < ∞, which is a function of
the observations in (0, t].
The stopping rule:
Stop at the smallest value of t, 0 ≤ t, at which π_t^0 ≤ π̲ or π_t^0 ≥ π̄.
Let T denote the stopping time. If π_T^0 ≤ π̲ reject H0; if π_T^0 ≥ π̄ accept
H0.
We illustrate this stopping rule in the following example.
EXAMPLE 9.8
As in Section 9.5, we put n units on test. Units are instantaneously
renewed after failures. We observe the process {Xn(t); 0 ≤ t < ∞}, with
Xn(0) = 0. Suppose that {Xn(t); 0 ≤ t < ∞} is a Poisson process, with
E_λ{Xn(t)} = nλt. We further assume that λ ~ G(ν, τ), where ν is an
integer. As we showed earlier, under this Bayesian model, the posterior
distribution of λ, at time t, is G(ν + Xn(t), (nt + 1/τ)^{-1}).
The null hypothesis is H0 : λ ≤ λ0. Thus, the posterior probability of
H0 at time t is

π_t^0 = Pr{λ ≤ λ0 | Xn(t), t}
     = Pr{G(ν + Xn(t), (nt + 1/τ)^{-1}) ≤ λ0}
     = Pr{χ²[2ν + 2Xn(t)] ≤ 2λ0(nt + 1/τ)}.
Accordingly, π_t^0 ≥ π̄ if, and only if,

χ²_{π̄}[2ν + 2Xn(t)] ≤ 2λ0(nt + 1/τ).

Similarly, π_t^0 ≤ π̲ if, and only if,

χ²_{π̲}[2ν + 2Xn(t)] ≥ 2λ0(nt + 1/τ).

From these two inequalities one can obtain lower and upper boundaries
for the continuation region of the test. As soon as the process Xn(t) hits
either the lower or the upper boundary the test terminates. H0 is accepted
if Xn(T) is equal to the lower boundary, otherwise H0 is rejected.
If ν ≥ 15 we can approximate the p-th fractile of χ²[ν] by

χ²_p[ν] ≈ (z_p + √(2ν - 1))²/2.

Using this approximation we obtain the upper boundary

b_U(t) = ¼ (2(λ0/τ + λ0nt)^{1/2} - z_{π̲})² - ν,

and the lower boundary

b_L(t) = ¼ (2(λ0/τ + λ0nt)^{1/2} - z_{π̄})² - ν.

In addition, one can censor the test at a certain failure frequency, x*. In
Figure 9.1 we illustrate these boundaries for a censored test, with param-
eters n = 20, λ0 = .0021072, ν = 15, τ = .00027, π̲ = .1, π̄ = .9 and
x* = 25. •
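Since Pr{χ²[2k] ≤ y} = 1 - Pos(k - 1; y/2), the posterior probability π_t^0 can also be evaluated exactly, without the fractile approximation. A sketch with this example's parameters as printed (the helper names, and any numerical behavior beyond monotonicity, are assumptions of this sketch):

```python
import math

def pois_cdf(k, mu):
    """Pr{Pois(mu) <= k}."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def pi_t(x, t, n=20, lam0=0.0021072, nu=15, tau=0.00027):
    """Posterior Pr{lambda <= lam0} after x pooled failures by time t:
    Pr{chi2[2(nu + x)] <= 2*lam0*(n*t + 1/tau)}, via the Poisson identity."""
    m = lam0 * (n * t + 1.0 / tau)
    return 1.0 - pois_cdf(nu + x - 1, m)
```

Scanning x at a fixed t for the first value with pi_t below π̲ (or above π̄) traces the rejection (acceptance) boundary numerically.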
[Figure 9.1 plots the number of failures against TIME [hr] (0-400): the
rejection region lies above the upper boundary b_U(t), the continuation
region between the two boundaries, and the acceptance region below the
lower boundary b_L(t).]

Figure 9.1. Bayes Sequential RDT

9.7 Accelerated Life Testing


It is often the case that systems have very long expected life under normal
conditions, and a useful test may not be feasible. For example, suppose
that one is manufacturing solar cells for a space ship, and the specifications
require that the solar cells have an MTTF of 20 years. Can we perform a test
long enough to demonstrate that the solar cells are reliable?
Accelerated life tests are conducted at higher than normal stress
levels, in order to induce failures. The question is how to relate the re-
sults from an accelerated test to what should be expected under normal
conditions.
Several models are available in the literature concerning the relationship
between certain parameters of the life distribution and the stress level at
which the experiment is conducted.
It is often assumed that the TTF at different stress levels is of the same
family. For example, suppose that the TTF at stress level V is E(β(V)),
where β(V) is a function relating the MTTF to V. Some models of β(V)
are:
1. The Power Rule Model,

        β(V) = C/V^p,   C > 0.

2. The Arrhenius Reaction Rate Model,

        β(V) = A exp{B/V}.

3. The Eyring Model,

The Power Rule Model was applied in accelerated life testing of dielectric
capacitors. Here V is the applied voltage. The parameters C and p are
unknown, and should be estimated from the results of the tests. In the
Arrhenius Model, the MTTF is given in terms of the operating temperature V.
The parameters A and B should be estimated from the data. This model
has been applied in accelerated testing of semiconductors.
The methodology is to perform K independent life tests at K values of
V. After observing the failure times at each stress level, the likelihood of
the model parameters is formulated in terms of the data from all the K
trials. The MLEs of the model parameters are determined together with
their AC matrix. Finally the MTTF is estimated at a normal stress level
V0, and confidence or credibility intervals are determined for this parame-
ter. For details and formulae the reader is referred to Mann, Schafer and
Singpurwalla (1974, Ch. 9) and Nelson (1990).
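To illustrate the methodology for the Power Rule Model: taking logs gives log β(V) = log C - p log V, so MTTF estimates at the K stress levels can be fitted by a straight line in log V and then extrapolated to the normal level V0. A minimal sketch on synthetic, noise-free data (all numbers invented for illustration; a real analysis would use the MLEs and their AC matrix):

```python
import math

# Hypothetical MTTF estimates at K = 4 accelerated voltage levels,
# generated exactly from beta(V) = C / V**p with C = 5.0e8, p = 2.0.
V = [100.0, 150.0, 200.0, 300.0]
beta_hat = [5.0e8 / v ** 2.0 for v in V]

# Least-squares fit of log beta = log C - p log V.
xs = [math.log(v) for v in V]
ys = [math.log(b) for b in beta_hat]
k = len(xs)
xbar, ybar = sum(xs) / k, sum(ys) / k
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
p_hat = -slope
logC_hat = ybar + p_hat * xbar
beta_at_50 = math.exp(logC_hat) / 50.0 ** p_hat   # extrapolate to normal V0 = 50
```

With noise-free data the fit recovers C and p exactly; with real data the spread of the residuals feeds the interval estimates mentioned above.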

9.8 Exercises
[9.2.1] A vendor claims that his resistors have a mean useful lifetime of at
least 5 × 10^5 [hr]. It is believed that the lifetimes have an exponen-
tial distribution. You plan to test a random sample of n resistors
until one fails.
(i) How large should your sample be so that the expected duration
of the test is no greater than 1000 [hr], assuming the mean lifetime
of the resistors is actually 5 × 10^5 [hr]?
(ii) Using the sample size obtained in (i), design a test of the ven-
dor's claim with significance level .05.
(iii) Assuming the true mean lifetime is only 10^5 [hr],
(a) What is the probability of rejecting the vendor's claim?
(b) What is the probability that the test will last over 1000 [hr]?
[9.2.2] Redo Exercise [9.2.1], but here you are going to continue testing
until 10 resistors have failed. You may now find it convenient to
use some approximations in parts (i) and (iii b).
[9.3.1] The TTF of a system is exponential with MTTF of β [hr]. n = 10
independent systems are put on test simultaneously. The test ter-
minates at the r = 5th failure. Failed units are not replaced (re-
newed). The null hypothesis is H0 : β ≥ 1000 [hr], the alternative
hypothesis is H1 : β ≤ 750 [hr].
(i) What is the expected duration of the test?
(ii) The test statistic is the total life T10,5. What is the critical
value C_α(T10,5) for α = .05?
(iii) What is the probability of a Type II error, γ, if β = 600 [hr]?
[9.3.2] In the exponential case, if β0 = 1200 and β1 = 900 [units of time],
and α = γ = .05, what should be the sample size n?
[9.4.1] (i) Obtain the equation of the boundary lines for the SPRT of
H0 : β ≥ 100 vs. H1 : β < 100, where β is the parameter of an
exponential life distribution. Design the test with a significance
level of α = .05 and a Type II error probability of γ = .05 when
β = 50.
(ii) Compute the OC(β) function of this test.
(iii) What is the value of ASN at β = 75?
[9.4.2] An insertion machine is tested for the reliability of component
placement. It is desired that the expected number of errors not
exceed 50 PPM. The machine is considered unproductive if the ex-
pected number of errors is greater than 300 PPM. Construct an
SPRT with risk levels α = .10 and γ = .10. If the machine inserts
on the average 4000 parts per hour, what is the expected duration
of the test (excluding repair time) if λ = 100 PPM?
[9.5.1] Let {X(t); 0 ≤ t < ∞} be a Poisson process with intensity param-
eter λ = 25.
(i) What is the standard deviation of X(5) - X(3)?
(ii) What is the correlation between X(3) and X(5)?
[9.5.2] Compute the boundary lines for an SPRT based on the Poisson
process {X(t); 0 ≤ t} for testing H0 : λ ≤ 10 against H1 : λ ≥ 12,
with risk levels α = γ = .10. What is the value of OC(9)? What is
the value of E_λ{T} at λ = 13?
[9.6.1] The TTF values [hr] of 5 radar systems are 1505, 975, 1237, 1313
and 1498. Assuming that TTF ~ E(β), and that the prior distri-
bution of λ = 1/β is G(10, 10^{-4}):
(i) Determine a credibility interval for β, at credibility level γ =
.95.
(ii) What is the credibility interval for R(250) at level γ = .95?
[9.6.2] In relation to problem [9.6.1], the null hypothesis is H0 : β ≥ 1000
against H1 : β < 1000.
(i) Compute the prior probabilities of H0 and H1.
(ii) What are the posterior probabilities of H0 and of H1?
(iii) Let l0 = 1000 [$], the loss due to accepting H0 when H1 is
true, and l1 = 500 [$], the loss due to accepting H1 when H0 is
true. Which hypothesis would you accept?
[9.6.3] Consider a binomial RDT in which n = 200 units are put on test
for 120 [hr]. The test rejects the unit's reliability if the number of
survivals is less than 105. Assuming that R(120) has a prior beta
(30, 5) distribution, determine:
(i) the predictive acceptance and rejection probabilities π_H and
ψ_H;
(ii) the conditional risk probabilities α_B and γ_B.
[9.6.4] In the sequential BRDT described in Example 9.8, the test termi-
nated at time 250 [hr] and the decision was to accept H0. Deter-
mine a credibility interval for β = 1/λ at level γ = .95.
Annotated Bibliography

There are hundreds of good papers in various journals on statistical reli-


ability. This annotated bibliography lists only some important textbooks
in which the reader can obtain further information on the various topics of
the present book. A few research papers are mentioned too.
1. Ascher, H. and Feingold, H., Repairable Systems Reliability: Mod-
eling, Inference, Misconceptions and Their Causes. Lecture Notes
in Statistics, Vol. 7, Marcel Dekker, New York, 1984.
In this book the reader will find discussion of important issues connected
with the availability, maintainability and readiness of repairable systems.
2. Bain, L.J., Statistical Analysis of Reliability and Life-Testing
Models, Marcel Dekker, New York, 1978.
Provides a comprehensive statistical analysis of life distributions in the
uncensored and censored cases.
3. Barlow, R.E. and Proschan, F., Mathematical Theory of Reliabil-
ity, John Wiley, New York, 1965.
Advanced mathematical treatment of maintenance policies and redun-
dancy optimization.
4. Barlow, R.E. and Proschan, F., Statistical Theory of Reliability and
Life Testing: Probability Models, Holt, Rinehart and Winston, New
York, 1975.
An excellent advanced introduction to the theory of reliability. This book
treats probabilistic models connected with lifetime of complex systems.
The book includes treatment of the case where the components of a
system are dependent.
5. Bartholomew, D.J., The sampling distribution of an estimate arising in
life testing, Technometrics, 5: 361-374 (1963).
Derives the CDF of the MLE, β̂, in the case of Type I censoring from an
exponential distribution.
6. Beyer, W.H., Standard Mathematical Tables, CRC Press, West
Palm Beach, FL, 1978.
One can find in this collection tables of Laplace transforms and their
inverses.
7. Box, G.E.P. and Tiao, G.C., Bayesian Inference in Statistical Anal-
ysis, Addison-Wesley, Reading, MA, 1973.
Comprehensive textbook on Bayesian classical statistical analysis. Com-
parison of means, variances, linear models, block designs, components of
variance and regression analysis, are redone in the Bayesian framework.
8. Chow, Y.S., Robbins, H. and Siegmund, D., Great Expectations: The
Theory of Optimal Stopping, Houghton Mifflin, Boston, 1971.
Very advanced mathematical presentation of the theory of optimal stop-
ping times.
9. Cohen, A.C., Jr., Tables for maximum likelihood estimates: singly trun-
cated and singly censored samples, Technometrics, 3: 535-541 (1961).
Such tables were required when computers were not readily available.
10. DeGroot, M.H., Optimal Statistical Decisions, McGraw-Hill, New
York,1970.
An excellent introduction to the theory of optimal Bayesian decision
making.
11. Gertsbakh, I.B., Statistical Reliability Theory, Marcel Dekker, New
York, 1989.
Advanced mathematical treatment of systems with renewable compo-
nents and optimal preventive maintenance. Discusses also statistical
aspects of lifetime data analysis.
12. Gnedenko, B.V., Belyayev, Yu. K. and Solovyev, A.D., Mathematical
Methods of Reliability Theory, Academic Press, New York, 1969.
An advanced monograph on reliability of renewable systems. Contains
development of sequential tests.
13. Good, I.J., The Estimation of Probability: An Essay on Modern
Bayesian Methods, MIT Press, Cambridge, MA, 1965.
Skillfully written introduction to Bayesian estimation of probabilities
and distributions.
14. Hald, A., Maximum likelihood estimation of the parameters of a normal
distribution which is truncated at a known point, Skandinavisk Aktuarie-
tidskrift, 32: 119-134 (1949).
This paper develops the theory and methodology of estimating the pa-
rameters of a truncated normal distribution.
15. Henley, E.J. and Kumamoto, H., Reliability Engineering and Risk
Assessment, Prentice-Hall, Englewood Cliffs, NJ, 1981.
Chapter 2 provides an excellent introduction to fault tree construction.
Chapter 3 discusses path and cut sets and decision tables.
16. Ireson, W.G., Editor, Reliability Handbook, McGraw-Hill, New York,


1966, reissued 1982.
This is a good reference book on various reliability problems: reliability
data systems, system effectiveness, mathematical and statistical models,
testing, estimation, human factors, etc.
17. Johnson, N.L. and Kotz, S., Distributions in Statistics: Continuous
Univariate Distributions-1, Houghton Mifflin, Boston, 1970.
An excellent survey of the important families of statistical distributions
and their particular estimation problems. Comprehensive bibliography
for each chapter.
18. Klaassen, B. and van Peppen, J.C.L., System Reliability: Concepts
and Applications, Edward Arnold (A division of Hodder & Stoughton),
New York, 1989.
Good discussion of availability, maintainability and repairability of sys-
tems.
19. Lloyd, D.K. and Lipow, M., Reliability: Management, Methods
and Mathematics, 2nd Edition, published by the authors, 1977.
Very authoritative textbook. Provides good introduction to reliability
management, organization, data systems, etc. Good treatment of sta-
tistical methods of estimation and testing (reliability demonstration).
Discusses problems and methods of software reliability.
20. Mann, N.R., Schafer, R.E. and Singpurwalla, N.D., Methods for Sta-
tistical Analysis of Reliability and Life Data, John Wiley, New
York,1974.
Includes a good chapter on testing statistical hypotheses on the parame-
ters of various life distributions. Also an excellent chapter on accelerated
life testing and MLE of the parameters.
21. Martz, H.F. and Waller, R.A., Bayesian Reliability Analysis, John
Wiley, New York, 1982.
The book contains important information for Bayesian analysis in relia-
bility theory, including a good chapter on empirical Bayes methods.
22. McCormick, N.J., Reliability and Risk Analysis. Academic Press,
New York, 1981.
The book provides interesting and illuminating presentation of fault tree
analysis and lists available computer programs for such analysis. It pro-
vides also treatment of the subject of availability of systems with repair.
23. Miller, R.G., Jr., Survival Analysis, John Wiley, New York, 1981.
Treats mainly non-parametric methods in a concise manner. Provides
many references.
24. Nair, V.N., Confidence Bands for Survival Functions with Censored
Data: A Comparative Study, Technometrics, 26: 265-275 (1984).
Compares several methods of determining confidence regions (bands) for
survival functions with censored data.
25. Nelson, W., Applied Life Data Analysis, John Wiley, New York, 1982.
This book is recommended for additional study of graphical methods
in life data analysis, maximum likelihood estimation with censored data
and linear estimation methods.
26. Nelson, W., Accelerated Testing: Statistical Models, Test Plans
and Data Analyses, John Wiley, New York, 1990.
Very authoritative book on accelerated life testing. The introductory
chapter provides important background material on types of engineering
applications.
27. Press, S.J., Bayesian Statistics: Principles, Models and Applica-
tions, John Wiley, New York, 1989.
An excellent introduction to the subject of Bayesian analysis.
28. Tsokos, C.P. and Shimi, I.N., The Theory and Applications of Reli-
ability with Emphasis on Bayesian and Non-Parametric Meth-
ods, Academic Press, New York, 1977.
A collection of articles on various topics connected with Bayesian reli-
ability analysis. In particular, several articles are devoted to empirical
Bayes methods.
Appendix of
Statistical Tables

Table A-1. Standard Normal Cumulative Distribution Function.

Area Under the Standard Normal Curve Between -∞ and z

Φ(z)
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.50 0.50 0.51 0.51 0.52 0.52 0.52 0.53 0.53 0.54
0.1 0.54 0.54 0.55 0.55 0.56 0.56 0.56 0.57 0.57 0.58
0.2 0.58 0.58 0.59 0.59 0.59 0.60 0.60 0.61 0.61 0.61
0.3 0.62 0.62 0.63 0.63 0.63 0.64 0.64 0.64 0.65 0.65
0.4 0.66 0.66 0.66 0.67 0.67 0.67 0.68 0.68 0.68 0.69
0.5 0.69 0.70 0.70 0.70 0.71 0.71 0.71 0.72 0.72 0.72

0.6 0.73 0.73 0.73 0.74 0.74 0.74 0.75 0.75 0.75 0.75
0.7 0.76 0.76 0.76 0.77 0.77 0.77 0.78 0.78 0.78 0.79
0.8 0.79 0.79 0.79 0.80 0.80 0.80 0.81 0.81 0.81 0.81
0.9 0.82 0.82 0.82 0.82 0.83 0.83 0.83 0.83 0.84 0.84
1.0 0.84 0.84 0.85 0.85 0.85 0.85 0.86 0.86 0.86 0.86

1.1 0.86 0.87 0.87 0.87 0.87 0.87 0.88 0.88 0.88 0.88
1.2 0.88 0.89 0.89 0.89 0.89 0.89 0.90 0.90 0.90 0.90
1.3 0.90 0.90 0.91 0.91 0.91 0.91 0.91 0.91 0.92 0.92
1.4 0.92 0.92 0.92 0.92 0.93 0.93 0.93 0.93 0.93 0.93
1.5 0.93 0.93 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94
Table A-1. (continued)

Z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
1.6 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95
1.7 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96
1.8 0.96 0.96 0.97 0.97 0.97 0.97 0.97 0.97 0.97 0.97
1.9 0.97 0.97 0.97 0.97 0.97 0.97 0.98 0.98 0.98 0.98
2.0 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.98

2.1 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.99 0.99 0.99
2.2 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
2.3 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
2.4 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
2.5 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 1.00

2.6 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2.7 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2.8 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2.9 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
3.0 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
For values of z < 0 use the relationship Φ(-z) = 1 - Φ(z).
Computed by use of GAUSS© software.

Table A-2. Selected Fractiles of the Standard Normal Distribution, z_α.

For Other Values of α, Compute the Value of z_α from Table A-1

α        z_α
0.50 0.000
0.55 0.125
0.60 0.253
0.65 0.385
0.70 0.524
0.75 0.674
0.80 0.841
0.85 1.036
0.90 1.282
0.95 1.645
0.975 1.960
0.99 2.326
0.995 2.576
For α < .5 use z_α = -z_{1-α}. For example, z_.05 = -z_.95 = -1.645.
Computed according to formula 26.2.23 of M. Abramowitz and I.A. Stegun,
Handbook of Mathematical Functions, Dover Publications, New York,
1968.
Table A-3. Fractiles of the Chi-Square Distribution χ²_α[ν].

Pr{χ²[ν] ≤ χ²_α[ν]} = α
(The compact entry 0.0⁴393 stands for 0.0000393, etc.)

χ²_α[ν]
α
ν    0.005      0.01       0.025      0.05       0.10      0.25
1    0.0⁴393    0.0³157    0.0³982    0.0²393    0.0158    0.102
2 0.0100 0.0201 0.0506 0.103 0.211 0.575
3 0.0717 0.115 0.216 0.352 0.584 1.21
4 0.207 0.297 0.484 0.711 1.06 1.92
5 0.412 0.554 0.831 1.15 1.61 2.67

6 0.676 0.872 1.24 1.64 2.20 3.45


7 0.989 1.24 1.69 2.17 2.83 4.25
8 1.34 1.65 2.18 2.73 3.49 5.07
9 1.73 2.09 2.70 3.33 4.17 5.90
10 2.16 2.56 3.25 3.94 4.87 6.74

11 2.60 3.05 3.82 4.57 5.58 7.58


12 3.07 3.57 4.40 5.23 6.30 8.44
13 3.57 4.11 5.01 5.89 7.04 9.30
14 4.07 4.66 5.63 6.57 7.79 10.2
15 4.60 5.23 6.26 7.26 8.55 11.0

16 5.14 5.81 6.91 7.96 9.31 11.9


17 5.70 6.41 7.56 8.67 10.1 12.8
18 6.26 7.01 8.23 9.39 10.9 13.7
19 6.84 7.63 8.91 10.1 11.7 14.6
20 7.43 8.26 9.59 10.9 12.4 15.5

21 8.03 8.90 10.3 11.6 13.2 16.3


22 8.64 9.54 11.0 12.3 14.0 17.2
23 9.26 10.2 11.7 13.1 14.8 18.1
24 9.89 10.9 12.4 13.8 15.7 19.0
25 10.5 11.5 13.1 14.6 16.5 19.9

26 11.2 12.2 13.8 15.4 17.3 20.8


27 11.8 12.9 14.6 16.2 18.1 21.7
28 12.5 13.6 15.3 16.9 18.9 22.7
29 13.1 14.3 16.0 17.7 19.8 23.6
30 13.8 15.0 16.8 18.5 20.6 24.5
For ν > 30, χ²_α[ν] ≈ (z_α + √(2ν - 1))²/2.
Computed with the aid of STATGRAPHICS© software.
Table A-3 (continued)

χ²_α[ν]
α
ν      0.50     0.75     0.90     0.95     0.975     0.99     0.995
1 0.455 1.32 2.71 3.84 5.02 6.63 7.88
2 1.39 2.77 4.61 5.99 7.38 9.21 10.6
3 2.37 4.11 6.25 7.81 9.35 11.3 12.8
4 3.36 5.39 7.78 9.49 11.1 13.3 14.9
5 4.35 6.63 9.24 11.1 12.8 15.1 16.7

6 5.35 7.84 10.6 12.6 14.4 16.8 18.5


7 6.35 9.04 12.0 14.1 16.0 18.5 20.3
8 7.34 10.2 13.4 15.5 17.5 20.1 22.0
9 8.34 11.4 14.7 16.9 19.0 21.7 23.6
10 9.34 12.5 16.0 18.3 20.5 23.2 25.2

11 10.3 13.7 17.3 19.7 21.9 24.7 26.8


12 11.3 14.8 18.5 21.0 23.3 26.2 28.3
13 12.3 16.0 19.8 22.4 24.7 27.7 29.8
14 13.3 17.1 21.1 23.7 26.1 29.1 31.3
15 14.3 18.2 22.3 25.0 27.5 30.6 32.8

16 15.3 19.4 23.5 26.3 28.8 32.0 34.3


17 16.3 20.5 24.8 27.6 30.2 33.4 35.7
18 17.3 21.6 26.0 28.9 31.5 34.8 37.2
19 18.3 22.7 27.2 30.1 32.9 36.2 38.6
20 19.3 23.8 28.4 31.4 34.2 37.6 40.0

21 20.3 24.9 29.6 32.7 35.5 38.9 41.4


22 21.3 26.0 30.8 33.9 36.8 40.3 42.8
23 22.3 27.1 32.0 35.2 38.1 41.6 44.2
24 23.3 28.2 33.2 36.4 39.4 43.0 45.6
25 24.3 29.3 34.4 37.7 40.6 44.3 46.9

26 25.3 30.4 35.6 38.9 41.9 45.6 48.3


27 26.3 31.5 36.7 40.1 43.2 47.0 49.6
28 27.3 32.6 37.9 41.3 44.5 48.3 51.0
29 28.3 33.7 39.1 42.6 45.7 49.6 52.3
30 29.3 34.8 40.3 43.8 47.0 50.9 53.7
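The approximation quoted under Table A-3 for ν > 30 can be checked numerically. A minimal sketch (the helper name and the hard-coded z_α values are illustrative, not from this book; z_α denotes the standard normal fractile):

```python
import math

# Standard normal fractiles z_alpha (standard values, stated here for the sketch).
Z = {0.90: 1.282, 0.95: 1.645, 0.975: 1.960, 0.99: 2.326}

def chi2_fractile_approx(alpha, nu):
    """Approximation noted under Table A-3 for nu > 30:
    chi2_alpha[nu] ~ (z_alpha + sqrt(2*nu - 1))**2 / 2."""
    return (Z[alpha] + math.sqrt(2 * nu - 1)) ** 2 / 2

# Compare with the last tabulated row (nu = 30): the table gives
# chi2_{0.95}[30] = 43.8 and chi2_{0.99}[30] = 50.9.
print(round(chi2_fractile_approx(0.95, 30), 1))  # about 43.5
print(round(chi2_fractile_approx(0.99, 30), 1))  # about 50.1
```

Even at ν = 30, the boundary of the table, the approximation is within about one unit of the tabulated fractiles; the agreement improves as ν grows.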

Table A-4. Fractiles of the t-Distribution, t_α[ν].


Pr{t[ν] ≤ t_α[ν]} = α

t_α[ν]
α
ν 0.60 0.75 0.90 0.95 0.975 0.99 0.995 0.9995
1 0.325 1.000 3.078 6.314 12.706 31.821 63.657 636.619
2 0.289 0.816 1.886 2.920 4.303 6.965 9.925 31.598
3 0.277 0.765 1.638 2.353 3.182 4.541 5.841 12.941
4 0.271 0.741 1.533 2.132 2.776 3.747 4.604 8.610
5 0.267 0.727 1.476 2.015 2.571 3.365 4.032 6.859

6 0.265 0.718 1.440 1.943 2.447 3.143 3.707 5.959


7 0.263 0.711 1.415 1.895 2.365 2.998 3.499 5.405
8 0.262 0.706 1.397 1.860 2.306 2.896 3.355 5.041
9 0.261 0.703 1.383 1.833 2.262 2.821 3.250 4.781
10 0.260 0.700 1.372 1.812 2.228 2.764 3.169 4.587

11 0.260 0.697 1.363 1.796 2.201 2.718 3.106 4.437


12 0.259 0.695 1.356 1.782 2.179 2.681 3.055 4.318
13 0.259 0.694 1.350 1.771 2.160 2.650 3.012 4.221
14 0.258 0.692 1.345 1.761 2.145 2.624 2.977 4.140
15 0.258 0.691 1.341 1.753 2.131 2.602 2.947 4.073

16 0.258 0.690 1.337 1.746 2.120 2.583 2.921 4.015


17 0.257 0.689 1.333 1.740 2.110 2.567 2.898 3.965
18 0.257 0.688 1.330 1.734 2.101 2.552 2.878 3.922
19 0.257 0.688 1.328 1.729 2.093 2.539 2.861 3.883
20 0.257 0.687 1.325 1.725 2.086 2.528 2.845 3.850

21 0.257 0.686 1.323 1.721 2.080 2.518 2.831 3.819


22 0.256 0.686 1.321 1.717 2.074 2.508 2.819 3.792
23 0.256 0.685 1.319 1.714 2.069 2.500 2.807 3.767
24 0.256 0.685 1.318 1.711 2.064 2.492 2.797 3.745
25 0.256 0.684 1.316 1.708 2.060 2.485 2.787 3.725

26 0.256 0.684 1.315 1.706 2.056 2.479 2.779 3.707


27 0.256 0.684 1.314 1.703 2.052 2.473 2.771 3.690
28 0.256 0.683 1.313 1.701 2.048 2.467 2.763 3.674
29 0.256 0.683 1.311 1.699 2.045 2.462 2.756 3.659
30 0.256 0.683 1.310 1.697 2.042 2.457 2.750 3.646

40 0.255 0.681 1.303 1.684 2.021 2.423 2.704 3.551


60 0.254 0.679 1.296 1.671 2.000 2.390 2.660 3.460
120 0.254 0.677 1.289 1.658 1.980 2.358 2.617 3.373
∞ 0.253 0.674 1.282 1.645 1.960 2.326 2.576 3.291
For α < .5, t_α[ν] = -t_{1-α}[ν]; t_{.5}[ν] = 0 for all ν.
Computed with the aid of STATGRAPHICS© software.
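The symmetry rule under the table extends it to the lower tail. A brief sketch (the helper name is hypothetical; the upper-tail values are copied from the ν = 10 row above):

```python
# Upper-tail fractiles t_alpha[10], copied from the nu = 10 row of Table A-4.
T10_UPPER = {0.60: 0.260, 0.75: 0.700, 0.90: 1.372, 0.95: 1.812,
             0.975: 2.228, 0.99: 2.764, 0.995: 3.169}

def t10_fractile(alpha):
    """t_alpha[10]; for alpha < .5 use t_alpha[nu] = -t_{1-alpha}[nu]."""
    if alpha == 0.5:
        return 0.0
    if alpha > 0.5:
        return T10_UPPER[alpha]
    # Symmetry of the t density about zero.
    return -T10_UPPER[round(1.0 - alpha, 4)]

print(t10_fractile(0.05))   # -> -1.812
print(t10_fractile(0.025))  # -> -2.228
```

This is why only fractiles for α ≥ .60 need to be tabulated.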

Table A-5. Fractiles of the F-Distribution F_p[ν₁, ν₂]

ν₂
p ν₁ 5 10 20 30 40 50 120
0.5 5 1.000 1.073 1.111 1.123 1.130 1.134 1.142
0.75 1.889 1.890 1.882 1.878 1.876 1.875 1.872
0.90 3.491 3.298 3.207 3.175 3.158 3.148 3.139
0.95 5.050 4.736 4.560 4.498 4.466 4.448 4.435
0.975 7.146 6.620 6.332 6.232 6.181 6.150 6.135
0.5 10 0.932 1.000 1.035 1.047 1.052 1.056 1.064
0.75 1.585 1.551 1.523 1.512 1.506 1.502 1.492
0.90 2.522 2.327 2.201 2.156 2.132 2.120 2.085
0.95 3.326 2.985 2.774 2.699 2.665 2.642 2.586
0.975 4.237 3.724 3.419 3.312 3.262 3.229 3.149
0.5 20 0.900 0.966 1.000 1.011 1.017 1.020 1.028
0.75 1.450 1.399 1.358 1.340 1.330 1.324 1.308
0.90 2.158 1.937 1.795 1.738 1.709 1.690 1.644
0.95 2.711 2.348 2.125 2.039 1.994 1.966 1.897
0.975 3.289 2.774 2.467 2.349 2.287 2.250 2.157
0.5 30 0.890 0.955 0.989 1.000 1.006 1.009 1.017
0.75 1.407 1.351 1.303 1.282 1.270 1.263 1.242
0.90 2.049 1.820 1.667 1.607 1.573 1.552 1.499
0.95 2.534 2.165 1.932 1.841 1.792 1.761 1.684
0.975 3.027 2.511 2.195 2.075 2.009 1.968 1.867
0.5 40 0.885 0.950 0.983 0.994 1.000 1.003 1.011
0.75 1.386 1.327 1.276 1.253 1.240 1.231 1.208
0.90 1.997 1.763 1.605 1.541 1.506 1.483 1.425
0.95 2.450 2.077 1.839 1.744 1.693 1.660 1.576
0.975 2.904 2.388 2.068 1.943 1.876 1.832 1.724
For p < .5 apply the relationship F_p[ν₁, ν₂] = 1/F_{1-p}[ν₂, ν₁].
Computed with the aid of STATGRAPHICS© software.
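The reciprocal relation under the table yields the lower fractiles not tabulated. A minimal sketch (the helper name is hypothetical; the two upper fractiles are copied from Table A-5):

```python
# Upper fractiles F_p[nu1, nu2], keyed as (p, nu1, nu2), copied from Table A-5.
F_UPPER = {(0.95, 10, 5): 4.736, (0.975, 10, 5): 6.620}

def f_lower(p, nu1, nu2):
    """F_p[nu1, nu2] for p < .5, via F_p[nu1, nu2] = 1 / F_{1-p}[nu2, nu1]."""
    # Note the degrees of freedom swap in the lookup.
    return 1.0 / F_UPPER[(round(1.0 - p, 4), nu2, nu1)]

print(round(f_lower(0.05, 5, 10), 3))   # -> 0.211
print(round(f_lower(0.025, 5, 10), 3))  # -> 0.151
```

The relation follows from F[ν₁, ν₂] and 1/F[ν₂, ν₁] having the same distribution, so only the upper tail needs to be tabulated.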

Table A-6. Gamma Function

Γ(x), 0.01 ≤ x ≤ 1.00


x Γ(x) x Γ(x) x Γ(x) x Γ(x)
0.01 99.4326 0.26 3.4785 0.51 1.7384 0.76 1.2123
0.02 49.4422 0.27 3.3426 0.52 1.7058 0.77 1.1997
0.03 32.7850 0.28 3.2169 0.53 1.6747 0.78 1.1875
0.04 24.4610 0.29 3.1001 0.54 1.6448 0.79 1.1757
0.05 19.4701 0.30 2.9916 0.55 1.6161 0.80 1.1642
0.06 16.1457 0.31 2.8903 0.56 1.5886 0.81 1.1532
0.07 13.7736 0.32 2.7958 0.57 1.5623 0.82 1.1425
0.08 11.9966 0.33 2.7072 0.58 1.5369 0.83 1.1322
0.09 10.6162 0.34 2.6242 0.59 1.5126 0.84 1.1222
0.10 9.5135 0.35 2.5461 0.60 1.4892 0.85 1.1125
0.11 8.6127 0.36 2.4727 0.61 1.4667 0.86 1.1031
0.12 7.8633 0.37 2.4036 0.62 1.4450 0.87 1.0941
0.13 7.2302 0.38 2.3383 0.63 1.4242 0.88 1.0853
0.14 6.6887 0.39 2.2765 0.64 1.4041 0.89 1.0768
0.15 6.2203 0.40 2.2182 0.65 1.3848 0.90 1.0686
0.16 5.8113 0.41 2.1628 0.66 1.3662 0.91 1.0607
0.17 5.4512 0.42 2.1104 0.67 1.3482 0.92 1.0530
0.18 5.1318 0.43 2.0605 0.68 1.3309 0.93 1.0456
0.19 4.8468 0.44 2.0132 0.69 1.3142 0.94 1.0384
0.20 4.5908 0.45 1.9681 0.70 1.2981 0.95 1.0315
0.21 4.3599 0.46 1.9252 0.71 1.2825 0.96 1.0247
0.22 4.1505 0.47 1.8843 0.72 1.2675 0.97 1.0182
0.23 3.9598 0.48 1.8453 0.73 1.2530 0.98 1.0119
0.24 3.7855 0.49 1.8081 0.74 1.2390 0.99 1.0059
0.25 3.6256 0.50 1.7725 0.75 1.2254 1.00 1.0000
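The table covers only 0.01 ≤ x ≤ 1.00 because larger arguments follow from the recursion Γ(x + 1) = xΓ(x). A brief check against Python's standard math.gamma (a modern convenience, not part of the original text):

```python
import math

# Spot-check a few tabulated values of Gamma(x) against math.gamma.
for x, gx in [(0.01, 99.4326), (0.50, 1.7725), (1.00, 1.0000)]:
    assert abs(math.gamma(x) - gx) < 5e-4

# Arguments above 1 via the recursion Gamma(x + 1) = x * Gamma(x),
# e.g. Gamma(1.5) = 0.5 * Gamma(0.5) using the tabulated Gamma(0.5).
print(abs(0.5 * 1.7725 - math.gamma(1.5)) < 1e-4)  # -> True
```

Applying the recursion repeatedly, Γ(x + n) = (x + n − 1)(x + n − 2)···x·Γ(x) reaches any positive argument from a table entry.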
Index

accelerated life testing, 195, 203
accuracy, 111
actuarial estimator, 102
administrative time, 3
angular transformation, 139
Arrhenius reaction model, 204
asymptotic
  availability, 81
  covariance matrix, 129
  operational reliability, 82
  standard deviation, 132
availability, 3, 8
  intrinsic, 3
availability function, 79
average prediction risk, 200
average sample number, 192
Bayesian reliability
  demonstration test, 199
  sequential, 201
Bernoulli trials, 36
beta function, 169
binomial testing, 199
censoring
  left, 14
  multiple, 13
  right, 14
  single, 13
  Type I, 13, 133
  Type II, 13, 135
censoring fraction, 136
central limit theorem, 30
conditional consumer's risk, 199
conditional producer's risk, 199
confidence interval, 116
conjugate prior distribution, 171
convolution, 74
coverage probability, 113
credibility intervals, 174
cumulative distribution, 5
cut set, 56
  minimal, 56
  algorithm for, 63
death and birth process, 83
decomposition method, 52
decreasing failure rate distribution, 87
distributions
  beta, 169
  binomial, 36
  discrete, 35
  F, 34
  hypergeometric, 39
  inverse gamma, 170
  normal, 28
  Poisson, 36
  t, 34
down time, 2
empirical Bayes, 180
empirical CDF, 7, 92
estimator
  Bayes, 173
  maximum likelihood, 120
  moment equations, 182
  unbiased, 112
Euler constant, 150
event tree, 61
Eyring model, 203
failure intensity, 75
failure rate function, 6, 17
fault tree, 60
Fisher information function, 123
Fisher information matrix, 128
FIT, 12
fixed closeness, 113
fractile, 16
free time, 2
gamma function, 22
gates, and/or, 61
graphical analysis, 91
hazard function, 6, 17
hypothesis
  alternative, 185
  null, 185
interquartile range, 41
Jeffrey's prior, 171
Kaplan-Meier PL estimator, 100, 139
keystone, 52
Laplace transform, 77
life distributions
  chi-square, 20, 22
  Erlang, 20
  exponential, 18
  extreme value, 27
  gamma, 20, 23
  Gumbel, 28
  lognormal, 33
  Rayleigh, 20
  shifted exponential, 19
  truncated normal, 28
  Weibull, 24
likelihood function, 120
likelihood kernel, 171
likelihood ratio, 190
likelihood score function, 123
likelihood score vector, 125
likelihood statistic, 178
logistic time, 3
loss function, 171
maintainability, 8
Markov process, 83
maximum likelihood estimator
  asymptotic variance, 124
  censored data, 133
  Erlang distribution, 147
  exponential distribution, 143
  extreme value distributions, 159
  gamma distribution, 149
  invariance property, 122
  Kaplan-Meier, 138
  lognormal distribution, 161
  normal distribution, 161
  shifted exponential, 145
  system reliability function, 130
  truncated normal, 163
  Weibull distribution, 154
mean time till failure, 6, 16
  composite systems, 57
median, 16
moments, 16, 44
normal approximations, 30
normal score, 93
operating characteristic function, 186
operating time, 2
operational reliability function, 82
parallel structure function, 49
parameter space, 120
path set, 55
  minimal, 55
plotting positions, 93
Poisson processes, 196
posterior risk, 172
power rule model, 204
precision, 112
prediction intervals, 119
  Bayesian, 176
predictive acceptance probability, 199
preventive maintenance, 84
probability density function, 5
  discrete variables, 35
  posterior, 169
  prior, 168
  symmetric, 17
probability papers, 104
probability plotting, 91
  censored data, 98
proportional closeness, 113
quartiles, 16
random sample, 91, 108
readiness, 2
  operational, 3
relative efficiency, 125
reliability
  demonstration, 185
  function, 5
  mission, 1
  operational, 1
renewal
  cycle, 73
  density, 76
  function, 76
  process, 75
repair intensity, 75
repair time, 3
repairability, 8
residual time, 8
ruT, 12
root mean square error, 112
sampling distribution, 110
scale parameter, 19
sequential probability ratio test, 190
  Poisson processes, 196
sequential testing, 189
series structure function, 48
shape parameter, 20
significance level, 185
skewness, 44
standard deviation, 17
standard normal integral, 29
standby unit, 59, 83
stationary availability coefficient, 82
steepness, 44
stopping rule, 201
storage time, 2
structure function, 55
sufficient statistic, 178
survival function, 14
system effectiveness, 2
system failure function, 66
system reliability
  conditional, 52
system structure
  bridge, 55
  crosslinked, 52
  double-crosslinked, 53
  k out of n, 50
  module, 49
  parallel, 48
  sequential, 58
  series, 46
time categories, 2
time censored, 13
time till failure, 73
time till repair, 73
total time on test, 103
  plot, 103
up time, 2
variance, 17
weakest link, 26
Springer Texts in Statistics (continued from p. ii)

Madansky Prescriptions for Working Statisticians

McPherson Statistics in Scientific Investigation:
Its Basis, Application, and Interpretation

Nguyen and Rogers Fundamentals of Mathematical Statistics:
Volume I: Probability for Statistics

Nguyen and Rogers Fundamentals of Mathematical Statistics:
Volume II: Statistical Inference

Noether Introduction to Statistics:
The Nonparametric Way

Peters Counting for Something:
Statistical Principles and Personalities

Pfeiffer Probability for Applications

Santner and Duffy The Statistical Analysis of Discrete Data

Saville and Wood Statistical Methods: The Geometric Approach

Sen and Srivastava Regression Analysis:
Theory, Methods, and Applications

Zacks Introduction to Reliability Analysis:
Probability Models and Statistical Methods
