Advisors:
Stephen Fienberg, Ingram Olkin
Springer Texts in Statistics
Introduction to
Reliability Analysis
Probability Models and Statistical Methods
With 50 Illustrations
Springer-Verlag
New York Berlin Heidelberg London Paris
Tokyo Hong Kong Barcelona Budapest
Shelemyahu Zacks
Department of Mathematical Sciences
State University of New York at
Binghamton
Binghamton, NY 13902-6000
USA
Editorial Board

Stephen Fienberg
Office of the Vice President (Academic Affairs)
York University
North York, Ontario M3J 1P3
Canada

Ingram Olkin
Department of Statistics
Stanford University
Stanford, CA 94305
USA
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New
York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis.
Use in connection with any form of information storage and retrieval, electronic adaptation,
computer software, or by similar or dissimilar methodology now known or hereafter developed is
forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.
987654321
ISBN-13: 978-1-4612-7697-5    e-ISBN-13: 978-1-4612-2854-7
DOI: 10.1007/978-1-4612-2854-7
To Hanna, Yuval and David
Preface
Several years ago we provided workshops at the Center for Statistics, Qual-
ity Control and Design of the State University of New York at Binghamton,
NY to engineers of high technology industries in the area. Several hundred
engineers participated in these workshops, which covered the subjects of
statistical analysis of industrial data, quality control, systems reliability
and design of experiments. It was a special challenge to deliver the mate-
rial in an interesting manner and to develop the skills of the participants in
problem solving. For this purpose special notes were written and computer
software was developed. The present textbook is an expansion of such notes
for a course on statistical reliability, entitled A Workshop on Statistical
Methods of Reliability Analysis for Engineers (1983).
The guiding principle in the present book is to explain the concepts
and the methods, and illustrate applications in problem solving, without
dwelling much on theoretical development. Electrical, mechanical and in-
dustrial engineers usually have sufficient mathematical background to un-
derstand and apply the formulae presented in the book. Graduate students
in statistics may find the book useful in preparing them for a career as
statisticians in industry. Moreover, they could practice their knowledge of
probability and statistical theory by verifying and deriving all the formulae
in the book. Most difficult is Chapter 4, on the reliability of repairable
systems. Most systems of interest, excluding missiles, are repairable ones.
To treat the subject of the availability of repairable systems we have to
introduce more advanced concepts of renewal processes, Markov processes,
etc. The chapter is written, however, in a manner that can be grasped without much background knowledge of probability theory. In some courses the instructor may choose to skip this chapter.
The original workshop notes were tied to specific software for the IBM
PC which was developed at the Center. It was decided, however, that
the present textbook would not be written for any specific software. The
student can use any of the several statistical software packages available on
the market, like MINITAB©, STATGRAPHICS©, or even LOTUS©, to
make computations, plot graphs and analyze data sets. We hope to issue
at a future date a compendium to this textbook, which will have special
software and will present solutions to most of the exercises which are listed
at the end of each chapter. Specially designed examples illustrate in each
chapter the methodology and its applications. The present text can thus be
used for a one semester course on systems reliability in engineering schools
or in statistics departments.
The author would like to acknowledge the assistance of Dr. David
Berengut and Dr. John Orban in the preparation of the original work-
shop notes, and to express his gratitude to the Research Foundation of the
State University of New York for releasing their copyright on the original
notes. Mrs. Marge Pratt skillfully typed the manuscript using AMS-TEX.
Last but not least I would like to thank my wife, Dr. Hanna Zacks, who en-
couraged me and supported me during the demanding period of manuscript
writing.
Shelemyahu Zacks
Binghamton, NY
January 1991
2.2 Down time is the time interval during which the equipment/system
is in a state of failure (inoperable).
The down time is partitioned into
2.2.1 Administrative time
2.2.2 Active repair time
2.2.3 Logistic time (repair suspension due to lack of spare parts).
III. Indices
Certain concepts which were previously mentioned are measured by in-
dices based on ratios of time categories. These are:
Intrinsic availability = operating time / (operating time + active repair time),

Availability = up time / (up time + down time),

Operational readiness = up time / total calendar time.
These indices of intrinsic availability, availability and operational readi-
ness can be interpreted in probabilistic terms. For example, operational
readiness is the probability that, at a randomly selected time, one will find
the system ready.
We conclude this section with a block diagram showing the relationships
among the concepts discussed above (Figure 1.1).
EXAMPLE 1.1
We now provide a numerical example based on data gathered on 20 radar
systems. We will compute a few of the indices discussed above on the basis
of these data.
1. Total calendar time = 120000 [s.hr], where [s.hr] is a system hour unit
of time.
1.1 Total flight time = 9750 [s.hr]
1.1.1 Flight up time = 8500 [s.hr]
1.1.1.1 Radar idle = 4500 [s.hr]
1.1.1.2 Radar power on = 4000 [s.hr]
1.1.1.2.1 Radar standby = 1950 [s.hr]
1.1.1.2.2 Radar in operation = 2050 [s.hr]
1.1.2 Flight down time = 1250 [s.hr]
1.1.2.1 Flight active repair = 5 [s.hr]
1.1.2.2 Flight logistics time = 700 [s.hr]
1.1.2.3 Flight administrative time = 545 [s.hr]
1.2 Total ground time = 110,250 [s.hr]
1.2.1 Ground up time = 92,000 [s.hr]
1.2.2 Ground down time = 18,250 [s.hr]
1.2.2.1 Ground active repair time = 1,750 [s.hr]
1.2.2.2 Ground logistics time = 10,000 [s.hr]
1.2.2.3 Ground administrative time = 6,500 [s.hr]
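The indices defined in Section III can be computed directly from these data. A minimal sketch in Python (the grouping of time categories into each ratio follows the definitions above and is our reading of the data, not a worked value from the text):

```python
# Example 1.1 radar data, in system hours [s.hr]
total_calendar = 120_000
up_time = 8_500 + 92_000      # flight up time + ground up time
operating_time = 2_050        # radar in operation
active_repair = 5 + 1_750     # flight + ground active repair time

operational_readiness = up_time / total_calendar
intrinsic_availability = operating_time / (operating_time + active_repair)

print(round(operational_readiness, 4))    # 0.8375
print(round(intrinsic_availability, 4))   # 0.5388
```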
Figure 1.1. Relationships Among System Effectiveness Concepts (down time components: active repair time, logistic time, administrative time)
2050 [s.hr] / (2050 + 1250) [s.hr] = .6212
6 7 8
.7754 .7604 .7457
•
1.3 Reliability and Related Functions
Basic to the definition of reliability functions and other related functions
is the length of life variable. The length of life (lifetime) of a compo-
nent/system is the length of the time interval, T, from the initial activation
of the unit until its failure. This variable, T, is considered a random vari-
able, since the length of life cannot be exactly predicted.
The cumulative (life) distribution function (CDF) of T, denoted by F(t), is the probability that the lifetime does not exceed t, i.e.,

    F(t) = Pr{T ≤ t}.

The reliability function, R(t), is defined as

    R(t) = 1 − F(t) = Pr{T > t}.

This is the probability that the lifetime of the component/system will exceed t. Another important function related to the life distribution is the
failure rate, or hazard function, h(t). This is the instantaneous failure rate of an element which has survived t units of time, i.e.,

    h(t) = f(t)/R(t),

where f(t) is the PDF of T. Notice that h(t)Δt is approximately, for small Δt, the probability that a unit still functioning at age t will fail during the time interval (t, t + Δt).
From formula (1.3.4) we can obtain

(1.3.7)    E{T} = ∫_0^∞ t f(t) dt,

and

(1.3.8)    E{T} = ∫_0^∞ R(t) dt.

We will denote the MTTF by the symbol μ. We provide now a simple example.
EXAMPLE 1.2
A. Suppose that the failure rate of a given radar system is constant in time, i.e.,

    h(t) = λ, for all 0 ≤ t < ∞.

Then, the reliability function of this system is

    R(t) = exp(−λt),    t ≥ 0.
Figure 1.2. Histogram of Time to Failure (TIME TO FAILURE [10^3 Hr])
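Under the constant failure rate model of Example 1.2, the reliability function and MTTF take one line each to compute. A small sketch (the rate λ = 0.01 [1/hr] is an illustrative assumption, not a value from the example):

```python
import math

lam = 0.01   # assumed constant failure rate lambda [1/hr] (illustrative)

def reliability(t):
    """R(t) = exp(-lambda * t) under a constant failure rate."""
    return math.exp(-lam * t)

mttf = 1.0 / lam   # exponential model: MTTF = 1/lambda
print(round(reliability(100.0), 4))   # 0.3679
print(mttf)                           # 100.0
```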
 i    t_i [10^3 hr]    F(t_i)
 1        0.25          0
 2        0.35          0.10
 3        0.45          0.13
 4        0.55          0.27
 5        0.65          0.36
 6        0.75          0.50
 7        0.85          0.67
 8        0.95          0.81
 9        1.05          0.87
10        1.15          0.93
11        1.25          0.97
12        1.35          0.99
13        1.45          1.00
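From grouped CDF values such as these, a common estimate of the failure rate over an interval (t_i, t_{i+1}) divides the conditional probability of failure in the interval by the interval length. A sketch of this idea (our illustration; not necessarily the exact estimator used for Figure 1.3):

```python
# (t_i in 10^3 hr, F(t_i)) pairs from the table above
data = [(0.25, 0.00), (0.35, 0.10), (0.45, 0.13), (0.55, 0.27),
        (0.65, 0.36), (0.75, 0.50), (0.85, 0.67), (0.95, 0.81),
        (1.05, 0.87), (1.15, 0.93), (1.25, 0.97), (1.35, 0.99),
        (1.45, 1.00)]

def hazard_estimates(pairs):
    """h_i ~ [F(t_{i+1}) - F(t_i)] / [(1 - F(t_i)) * (t_{i+1} - t_i)]."""
    return [(f1 - f0) / ((1 - f0) * (t1 - t0))
            for (t0, f0), (t1, f1) in zip(pairs, pairs[1:])]

h = hazard_estimates(data)
print(round(h[0], 2))   # (0.10 - 0) / (1.0 * 0.1) = 1.0 [per 10^3 hr]
```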
The values of the failure rate function h(t) were determined according
Figure 1.3. Estimate of Failure Rate Function (FAILURE RATE vs. AGE [Hr])

1.4 Availability, Maintainability and Repairability
Both the length of operation time until failure and the length of down time
(administrative plus repair plus logistic) are random variables.
The maintainability of a system could be defined, in terms of the
distribution of the down time, as the probability that, when maintenance is
performed under specified conditions, the system will be up again (in state
of operation) within a specified period.
Maintainability is connected with repairability. The repairability is
the probability that a failure of a system can be repaired under specified
conditions and in a specified time period. Not all repairs can be performed
on site. In repairability we have to consider two things: the probability
that the failure was caused by certain components or subsystems, and the
conditional down time distribution of those subsystems. In Section 3.7 we
will discuss an analysis that can help in identifying the causes for system
failure.
Some failures require return of the system to the manufacturer; sometimes the system has to be scrapped. These issues depend on the system
design, economic considerations, training of personnel, strategic consider-
ations, etc. Each system must be examined individually by relating cost
factors and strategic considerations. The diagnostic element is the largest
contributor to active maintenance time and to repairability. The diagnosis
entails the isolation of the defective part, module or subsystem. Built-in
test equipment in modern systems helps to reduce the diagnosis time. Fully
automated checking devices, which perform computerized testing, are also
available. However, skilled technicians are still needed to operate sophisti-
cated testing equipment.
Maintainability and repairability functions are connected also with in-
ventory management of spare parts. Many subsystems are manufactured
as modules that can be easily replaced. Thus, if a module fails and a spare
module is available in stock, the down time of the system can be reduced.
The availability index of the system increases as a result. However, some
modules may be very expensive, and overstocking of spare parts would be
unnecessary and costly. There is much discussion in the literature on the
problems of optimal inventory management. It is most important to keep
good records on the frequency of failures of various systems and of their
components, on the length of time till failure, on the length of down time,
etc. In particular, data on the proportion of failures that could be repaired
locally, as opposed to those which had to be handled elsewhere, are of es-
sential importance. Man-hours devoted to maintenance and cost factors
should be well recorded and available for analysis. Without adequate data
it is very difficult to devise optimal maintenance systems and predict the
reliability of the systems.
1.5 Exercises*
[1.2.1] A machine is scheduled to operate for two shifts a day (8 hours each
shift), five days a week. What is the weekly index of scheduled idle
time of this machine?
[1.2.2] During the last 48 weeks, the machine discussed in Exercise [1.2.1]
was "down" five times. The average down time is broken into
1. Average administrative time = 30 hours
2. Average repair time = 9 hours
3. Average logistic time = 7.6 hours
Compute the indices of:
(i) Availability;
(ii) Intrinsic availability;
(iii) Operational readiness.
[1.2.3] During 600 hours of manufacturing time, a machine which inserts
components on computer boards was up for 566.4 hours. It had 310
failures which required a total of 8.2 hours of repair time. What
is the MTTF of this machine? What is the mean time till repair,
MTTR, for this machine? What is its intrinsic availability?
[1.2.4] A given system has two subsystems that function sequentially, in
two stages. The first stage is designed to function 5 [min] and the
second stage lasts 15 [min]. The system accomplishes its mission if
the two stages are accomplished. In preliminary testing, 5 out of
1500 subsystems failed in the first stage and 7 out of 3000 subsys-
tems failed in the second stage. Provide an estimate of the mission
reliability of the system.
[1.3.1] The sample proportional frequency distribution of the lifetime in
a random sample of n = 2000 solar cells, under accelerated life
testing, is given in the following table
conditions between 20,000 and 40,000 hours, after reaching the age
of 10,000 hours?
[1.3.2] The CDF of the lifetime [months] of an electronic device is

    F(t) = t³/216, if 0 ≤ t < 6,
         = 1, if 6 ≤ t.
(i) What is the failure rate function of this equipment?
(ii) What is the MTTF?
(iii) What is the reliability of the device at age 4 months?
[1.3.3] If the reliability function of a system is R(t) = exp(−2t − 3t²), find
the failure rate function. What is the failure rate at age t = 3?
Given that a system reached an age of t = 3, what is its reliability
for 2 more time units?
[1.3.4] Suppose that the failure rate function of a system is a constant, h1 [1/yr], till age t1 [yr] and then it increases to the constant h2 [1/yr] (h2 > h1), i.e.,

    h(t) = h1, 0 ≤ t ≤ t1,
         = h2, t1 < t < ∞.
(i) Determine the formula for the reliability function R(t).
(ii) Graph R(t).
(iii) What is the reliability of the system at age ~tl [yr] when
hl = 1/3 [l/yr], h2 = 1/2 [l/yr] and tl = 6 [yr]?
(iv) What is the probability that the system will live at least 9 [yr]
but will fail before t = 10 [yr]?
[1.3.5] (i) Find the MTTF of a system with reliability function

    R(t) = (1/2) exp(−t/2) + (1/2) exp(−t/3).

(ii) Show that the failure rate function of this system is

    h(t) = (1/2 + (1/3) e^{t/6}) / (1 + e^{t/6}),

and find the failure rate at age t = 1.
(iii) Show that h(t) is a decreasing failure rate function.
(iv) What is the probability that the unit will fail between t = 2 and t = 3, given that it survived 2 units of time?
[1.3.6] Failure rates and replacement rates are often measured in units of 10^9 device-hours. These units are called FITs and RITs. A given device has a constant failure rate of 325,000 FITs.
(i) What is the probability that the device will first fail in the
interval between 6 and 12 months, given that it has survived the
first 6 months of operation? [1 month = 160 device-hours.]
(ii) How many failures of the device are expected in 10^4 device-hours of operation?
(iii) If each failure requires on the average 4 hours of active repair,
15 minutes of administrative time and 20 minutes of logistic time,
what are the availability and the intrinsic availability indices of this
device?
2
Life Distributions, Models
and Their Characteristics
    F(t) = 0, if t < t0,
         = 2((t − t0)/(t1 − t0))², if t0 ≤ t ≤ (t0 + t1)/2,
         = 1 − 2((t1 − t)/(t1 − t0))², if (t0 + t1)/2 ≤ t < t1,
         = 1, if t1 ≤ t.
In Figure 2.1 we provide the graph of the life CDF, F(t), for to = 100
[hr] and tl = 400 [hr].
Figure 2.1. Life CDF F(t) and Reliability Function R(t) (dashed curve) for t0 = 100 [hr], t1 = 400 [hr]; TIME TO FAILURE [Hr]
The corresponding PDF is

    f(t) = 0, if t ≤ t0,
         = 4(t − t0)/(t1 − t0)², if t0 < t ≤ (t0 + t1)/2,
         = 4(t1 − t)/(t1 − t0)², if (t0 + t1)/2 < t < t1,
         = 0, if t1 ≤ t.

This is a triangular function on the interval (t0, t1), symmetric around the mid-point (t0 + t1)/2. •
(2.2.4)    F(t_p) = p.

If there is more than one value of t satisfying the above equation, we define t_p to be the smallest one.
EXAMPLE 2.3
The p-th fractile of the life distribution of Example 2.1 is

    t_p = t0 + (t1 − t0)(p/2)^{1/2}, if 0 < p ≤ .5,
        = t1 − (t1 − t0)((1 − p)/2)^{1/2}, if .5 < p < 1.

If p = .75, t0 = 100 [hr] and t1 = 400 [hr] we obtain t.75 = 293.93 [hr]. This means that the life lengths of 75% of the units of this population do not exceed 294 [hr]. •

The median, t.50, and the lower and upper quartiles, t.25 and t.75, respectively, are important characteristics of a life distribution.
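The fractile computation of Example 2.3 can be checked numerically. A short sketch for the symmetric triangular life model of Example 2.1, with t0 = 100 and t1 = 400 [hr]:

```python
import math

t0, t1 = 100.0, 400.0   # [hr], Example 2.1

def fractile(p):
    """p-th fractile of the symmetric triangular life distribution on (t0, t1)."""
    if p <= 0.5:
        return t0 + (t1 - t0) * math.sqrt(p / 2)
    return t1 - (t1 - t0) * math.sqrt((1 - p) / 2)

print(round(fractile(0.75), 2))   # 293.93, as in Example 2.3
print(fractile(0.5))              # 250.0, the mid-point
```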
Moments of order r of the life distribution are defined as

(2.2.5)    μ_r = ∫_0^∞ t^r f(t) dt,    r = 1, 2, ....

These moments do not always exist. Consider, for example, the life distribution with PDF f(t) = (2/π)(1 + t²)^{−1}, 0 ≤ t < ∞. Then

    μ1 = (2/π) ∫_0^∞ t/(1 + t²) dt = (1/π) lim_{t→∞} log(1 + t²) = ∞.
We can further show that if μ_j = ∞ then μ_k = ∞ for all k > j. In such cases we say that the corresponding moments do not exist. In all further discussion we assume that the moments under consideration exist. The first order moment, μ = μ1, is called the mean time to failure (MTTF) or the expected lifetime.
The variance of the life distribution is

(2.2.6)    σ² = ∫_0^∞ (t − μ)² f(t) dt = μ2 − μ1².
2.2 General Characteristics of Life Distributions
(2.2.8)    h(t) = f(t)/R(t),    0 ≤ t < ∞.

The function H(t) = ∫_0^t h(x) dx is called the cumulative hazard function.
EXAMPLE 2.4
We derive here the failure rate function corresponding to the life distribution of Example 2.1. According to the above definition,

    h(t) = 0, if t ≤ t0,
         = 4(t − t0)/[(t1 − t0)² − 2(t − t0)²], if t0 < t ≤ (t0 + t1)/2,
         = 2/(t1 − t), if (t0 + t1)/2 < t < t1,
         = ∞, if t1 ≤ t.
Figure 2.2. Failure Rate Function of the Life Distribution of Example 2.1 (FAILURE RATE vs. TIME [Hr])
•
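The failure rate of Example 2.4 can also be evaluated numerically from the definition h(t) = f(t)/R(t). A sketch (our code, using the triangular model of Example 2.1 with t0 = 100, t1 = 400):

```python
t0, t1 = 100.0, 400.0
d = t1 - t0

def pdf(t):
    """Triangular PDF on (t0, t1), symmetric around the mid-point."""
    if t <= t0 or t >= t1:
        return 0.0
    if t <= (t0 + t1) / 2:
        return 4 * (t - t0) / d**2
    return 4 * (t1 - t) / d**2

def cdf(t):
    if t <= t0:
        return 0.0
    if t <= (t0 + t1) / 2:
        return 2 * ((t - t0) / d) ** 2
    if t < t1:
        return 1 - 2 * ((t1 - t) / d) ** 2
    return 1.0

def hazard(t):
    return pdf(t) / (1 - cdf(t))

print(round(hazard(250), 5))   # 2/(t1 - 250) = 2/150 ~ 0.01333
```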
2.3 Some Families of Life Distributions
2.3.1 Exponential and Shifted Exponential Distributions
(2.3.1)    h(t) = 0, if t < 0,
                = 1/β, if t ≥ 0.

This constant failure rate function implies that parts do not age. The corresponding reliability function is

    R(t) = exp(−t/β),    t ≥ 0,

and the MTTF is μ = β. That is, on the average a unit fails every β time units. The standard deviation of E(β) is σ = β. This means that the larger the MTTF, β, the larger the dispersion.
EXAMPLE 2.5
Consider an exponential life distribution with β = 100 [hr]. In this case, the MTTF is μ = 100 [hr], and the standard deviation of the lifetime is σ = 100 [hr]. The percentage of parts that are expected to survive at least t [hr] is 100·R(t) = 100·exp(−t/100). Setting R(t) = 1/2, the median of E(β) is E.50(β) = .693β. That is, if a certain equipment has an exponential life distribution with MTTF of 100 [hr], 50% of these units are expected to fail in less than 69.3 [hr]. This reflects the considerable skewness (asymmetry) of the exponential distribution.
The shifted exponential life distribution is an exponential distribution starting at t0, i.e., its PDF is

(2.3.5)    f_SE(t; β, t0) = 0, if t < t0,
                          = (1/β) exp{−(t − t0)/β}, if t ≥ t0.

t0 is called a location parameter, which is some positive value, 0 ≤ t0 < ∞. This model is relevant when no unit fails before time t0, and the failure rate is constant for all t ≥ t0. The MTTF is μ = t0 + β and the standard deviation is σ = β.
(2.3.6)    f_ER(t; k, β) = t^{k−1} exp(−t/β) / ((k − 1)! β^k),    0 ≤ t < ∞.

(2.3.7)    F_ER(t; k, β) = 1 − e^{−t/β} Σ_{j=0}^{k−1} (t/β)^j / j!,    t > 0.
In Figure 2.3 we plot several CDF of G(k, β) to illustrate the effect of the shape parameter, k, on the distribution.
The MTTF of a G(k, β) life distribution is

(2.3.9)    μ = kβ.

The standard deviation of this distribution is

(2.3.10)    σ = β√k.
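Since k is an integer, the G(k, β) CDF can be evaluated exactly from the Poisson sum in (2.3.7). A sketch, together with the MTTF (2.3.9) and standard deviation (2.3.10) for illustrative parameters k = 3, β = 2:

```python
import math

def erlang_cdf(t, k, beta):
    """F_ER(t; k, beta) = 1 - exp(-t/beta) * sum_{j=0}^{k-1} (t/beta)^j / j!"""
    x = t / beta
    return 1 - math.exp(-x) * sum(x**j / math.factorial(j) for j in range(k))

k, beta = 3, 2.0
mttf = k * beta              # (2.3.9): mu = k * beta
sd = beta * math.sqrt(k)     # (2.3.10): sigma = beta * sqrt(k)
print(mttf, round(sd, 3))                  # 6.0 3.464
print(round(erlang_cdf(6.0, 3, 2.0), 4))   # CDF at the MTTF
```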
Figure 2.3. CDF of G(k, β) for Several Values of k (TIME, 0–10)
(2.3.11)    h_ER(t; k, β) = p(k − 1; t/β) / (β Pos(k − 1; t/β)),

where

(2.3.12)    p(j; λ) = e^{−λ} λ^j / j!,    j = 0, 1, ...,

and

(2.3.13)    Pos(j; λ) = Σ_{i=0}^{j} p(i; λ).

These functions are the probability distribution and the cumulative distribution functions of a Poisson random variable, which will be discussed in Section 2.5.
In Figure 2.4 we draw the failure rate function h_ER(t; k, β) for the case of β = 1 and k = 1, ..., 6. For k = 1 the distribution is exponential and the failure rate is a constant (1/β). For k = 2, 3, ..., the failure rate functions are strictly increasing from 0 to 1/β.
A variation of the Erlang distributions is obtained when we allow the shape parameter k to assume values which are multiples of 1/2, i.e., if k = m/2, m = 1, 2, .... The scale parameter β is fixed at β = 2. The resulting distribution is called the chi-square distribution with m degrees of freedom, denoted χ²[m].
Figure 2.4. Failure Rate Functions h_ER(t; k, 1) for k = 1, ..., 6 (TIME [β], 0–10)
(2.3.15)    Γ(ν) = ∫_0^∞ x^{ν−1} e^{−x} dx,    ν > 0,

is called the gamma function. This function has the useful property that Γ(ν) = (ν − 1)Γ(ν − 1). In particular,

    Γ(1/4) = 3.62561···
    Γ(1/3) = 2.67893···
    Γ(2/3) = 1.35412··· .

Values of Γ(x) for x = .01–1.00 are given in the Appendix Table A-VI.
The mean of χ²[m] is μ = m and its standard deviation is σ = √(2m). Fractiles of the chi-square distribution are given in Appendix Table A-III. We denote these fractiles by χ²_p[m]. The following relationship holds between G(k, β) and χ²[m]:

(2.3.18)    G(k, β) ~ (β/2) χ²[2k]; in particular, the p-th fractile of G(k, β) is (β/2) χ²_p[2k].

EXAMPLE 2.6
For p = .5 and k = 10 we have χ²_.50[20] = 19.34. Hence,

    t.50 = (β/2)(19.34) = 9.67β.

This is the median of G(10, β). The mean of this distribution is μ = 10β. This shows that G(10, β) is almost symmetric.
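The median computation of Example 2.6 can be cross-checked by evaluating the gamma CDF, via the Poisson sum, at (β/2)(19.34) = 9.67β. A sketch (β = 1 without loss of generality):

```python
import math

def gamma_cdf(t, k, beta=1.0):
    """Gamma CDF with integer shape k, via the Poisson sum (2.3.7)."""
    x = t / beta
    return 1 - math.exp(-x) * sum(x**j / math.factorial(j) for j in range(k))

# chi^2_.50[20] = 19.34, so the median of G(10, beta) should be
# (beta/2) * 19.34 = 9.67 * beta; check that the CDF there is ~1/2
print(round(gamma_cdf(9.67, 10), 3))
```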
Figure 2.5. Weibull PDF for β = 1, ν = 1, 2, 3
The integral defining F_G(t; ν, β) is called the incomplete gamma integral. There are various numerical procedures for computing this integral. In the case where ν equals a positive integer, k, the incomplete gamma integral can be evaluated with the aid of the Poisson CDF, as given earlier. The failure rate functions of the gamma life distributions are increasing, as in Figure 2.4, for ν > 1. For ν < 1 these functions are decreasing.
The Weibull family of life distributions has been found to provide good models in many empirical studies. We consider first a two-parameter Weibull distribution, W(ν, β), whose CDF is given by the formula

    F_W(t; ν, β) = 1 − exp{−(t/β)^ν},    t ≥ 0.

EXAMPLE 2.7
In the following table we provide the values of the lower quartile, the median and the upper quartile for β = 1, ν = 1, 2, 3, 4, 10.

Table 2.1. Quartiles of W(ν, 1)

 ν     t.25    t.50    t.75
 1     .288    .693    1.386
 2     .536    .833    1.177
 3     .660    .885    1.115
 4     .733    .912    1.085
10     .883    .964    1.033
The MTTF of W(ν, β) is

(2.3.23)    μ = βΓ(1 + 1/ν).

If ν = 2, for example, μ = βΓ(3/2) = .886β. As seen in Table 2.1, the median of this distribution is .833β. The mean and the median are quite close even for ν = 2. On the other hand, when ν = 1 (the exponential case) the median is about 69% of the MTTF, which reflects a pronounced asymmetry.
The standard deviation of W(ν, β) is

    σ = β(Γ(1 + 2/ν) − Γ²(1 + 1/ν))^{1/2}.

For ν = 3 we obtain

    σ = β(Γ(1 + 2/3) − Γ²(1 + 1/3))^{1/2}
      = β((2/3)(1.35412) − (1/9)(2.67893)²)^{1/2}
      = .325β.
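The Weibull mean and standard deviation depend only on values of the gamma function, which Python exposes as math.gamma. A sketch of the formulas μ = βΓ(1 + 1/ν) and σ = β[Γ(1 + 2/ν) − Γ²(1 + 1/ν)]^{1/2}:

```python
import math

def weibull_mean(nu, beta):
    """mu = beta * Gamma(1 + 1/nu)"""
    return beta * math.gamma(1 + 1 / nu)

def weibull_sd(nu, beta):
    """sigma = beta * sqrt(Gamma(1 + 2/nu) - Gamma(1 + 1/nu)^2)"""
    g1 = math.gamma(1 + 1 / nu)
    g2 = math.gamma(1 + 2 / nu)
    return beta * math.sqrt(g2 - g1 * g1)

print(round(weibull_mean(2, 1.0), 3))   # 0.886
print(round(weibull_sd(3, 1.0), 3))     # 0.325, as in the text
```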
An important property of the Weibull distribution is that the minimum of n independent random variables having an identical Weibull distribution W(ν, β) also has a Weibull distribution, namely W(ν, βn^{−1/ν}).

Figure 2.6. Failure Rate Functions h_W(t; ν, 1) for Several Values of ν
The above property makes the Weibull family an attractive one for modeling reliability of systems of similar components connected in series (see Chapter 3), or for mechanical systems where the weakest-link model is appropriate.
We conclude the present section with a presentation of the failure rate
functions of the Weibull distributions. From the definition of the failure
rate function we obtain for the Weibull distributions the function
(2.3.24)    h_W(t; ν, β) = (ν/β)(t/β)^{ν−1},    t ≥ 0.
In Figure 2.6 we present the graphs of some of these failure rate functions.
Finally, one can consider also the family of shifted Weibull distributions.
(2.3.26)    f_Y(y) = νγ exp{νy − γ exp(νy)},

where −∞ < y < ∞, and γ = 1/β^ν. It is customary in many textbooks to present this family of distributions as a location and scale parameter family, in the form

(2.3.28)    f_EV(y; ξ, δ) = (1/δ) exp{(y − ξ)/δ − exp((y − ξ)/δ)},    −∞ < y < ∞.
These functions are illustrated in Figure 2.7. It can be shown that this extreme value distribution EV(ξ, δ) is related to the asymptotic distribution, as the sample size n grows, of the minimum of a random sample from a wide range of distributions. A distribution related to the asymptotic distribution of sample maxima is called the Gumbel distribution, or extreme value distribution of Type I, having a CDF

    F(x) = exp{−exp(−(x − ξ)/δ)},    −∞ < x < ∞.

Figure 2.7. PDF of the Extreme Value Distribution EV(ξ, δ) (horizontal axis in units of (x − ξ)/δ)
The mean of EV(ξ, δ) is μ = ξ − .5772 δ, and its standard deviation is

(2.3.31)    σ = 1.2825 δ.

Finally, the p-th fractile of the extreme value distribution, EV(ξ, δ), is

    x_p = ξ + δ ln(−ln(1 − p));

in particular, the median is

    Me = ξ + δ ln ln 2 = ξ − .3665 δ.

Thus, μ < Me. This implies that the extreme value distribution EV(ξ, δ) is skewed to the left (negative asymmetry).
The asymptotic distribution of minima, EV(ξ, δ), has been applied as a model of time to leakage of pipes carrying corrosive chemicals (in missiles),
and other problems of mechanical failures. The asymptotic distribution of
maxima has been applied to model yearly maximum of water discharge in
rivers (flooding), maximal yearly tide, etc.
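The left skewness of EV(ξ, δ) is easy to confirm numerically. A sketch using the standard results μ = ξ − .5772δ and Me = ξ + δ ln ln 2 (the parameter values ξ = 50, δ = 2 are illustrative):

```python
import math

xi, delta = 50.0, 2.0   # illustrative parameters

mean = xi - 0.5772 * delta                    # Euler's constant ~ 0.5772
median = xi + delta * math.log(math.log(2))   # Me = xi + delta * ln ln 2

print(round(mean, 3), round(median, 3))
assert mean < median   # the distribution is skewed to the left
```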
(2.3.33)    f_N(x; μ, σ) = (1/(√(2π) σ)) exp{−(1/2)((x − μ)/σ)²},
Figure 2.8. PDF of Normal Distribution N(μ, σ), μ = 0, σ = 1, 2, 3
for −∞ < x < ∞. We will denote a normal distribution by N(μ, σ). [Note: many textbooks use the notation N(μ, σ²) rather than N(μ, σ).] The PDF is symmetric around the point x = μ. The location parameter, μ, is also the mean (expected value) of the distribution. The scale parameter, σ, is also the standard deviation of the distribution. The CDF of a normal distribution is

    F_N(x; μ, σ) = Φ((x − μ)/σ),    where Φ(z) = ∫_{−∞}^z φ(u) du.
There are various formulas for computing this integral. The derivative of Φ(z) is called the standard normal PDF and is given by

(2.3.36)    φ(z) = (1/√(2π)) exp(−z²/2),    −∞ < z < ∞.
The p-th fractile of the standard normal distribution will be denoted by z_p; i.e., Φ(z_p) = p, 0 < p < 1. Values of z_p can be obtained from Table A-I and Table A-II of the appendix.
The p-th fractile of an arbitrary normal distribution N(μ, σ) is

(2.3.37)    x_p = μ + σ z_p.
Figure 2.9. Failure Rate Functions of NT(μ, 1, 0) for μ = 1, ..., 3 (TIME [σ])
The normal distribution has a central theoretical role in the theory of prob-
ability and statistics. Some relevant applications of this theory will be
discussed later.
A truncated version of the normal distribution, NT(μ, σ, t0), has a PDF of the form

(2.3.39)    f_NT(x; μ, σ, t0) = 0, if x < t0,
                              = f_N(x; μ, σ) / (1 − Φ((t0 − μ)/σ)), if x ≥ t0,

while R(t) = 1 for all 0 < t < t0. The failure rate function is

(2.3.40)    h(t) = φ((t − μ)/σ) / (σ[1 − Φ((t − μ)/σ)]),    t ≥ t0.
Several graphs of this failure rate function are displayed in Figure 2.9.
Thus, for example, the Erlang distribution G(k, β) is the distribution of the sum of k independent exponential random variables E(β). Hence, by the Central Limit Theorem,

(2.3.42)    F_ER(t; k, β) ≈ Φ((t − βk)/(β√k)),

for large values of k.
The lognormal distribution, LN(μ, σ), is the distribution of T = e^X, where X ~ N(μ, σ). Its PDF is

(2.3.44)    f_LN(t; μ, σ) = (1/(√(2π) σ t)) exp{−(1/2)((ln t − μ)/σ)²},

for 0 < t < ∞. The PDF is zero for negative values of t. A graph of the PDF of LN(0, 1) is given in Figure 2.10.
PDF of LN(O, 1) is given in Figure 2.10.
As we see in Figure 2.10, the lognormal distribution is highly skewed to
the right.
The fractiles of this distribution are given by the formula

(2.3.45)    t_p = exp{μ + σ z_p}.
The quartiles, the median and the .9-fractile of LN(μ, σ) are tabulated in the following table for a few values of μ and σ. The extent of skewness of the lognormal distributions is well illustrated in Table 2.2, in particular as σ and μ increase. The mean and the standard deviation of LN(μ, σ) are given, respectively, by the formulae

    E{T} = exp{μ + σ²/2}
Figure 2.10. PDF of LN(0, 1)
and

    SD{T} = exp{μ + σ²/2} (exp{σ²} − 1)^{1/2}.

These values are presented in Table 2.2 for μ = 0, 1 and σ = 1, 2. The difference between the mean and the median e^μ is very sensitive to variations of the parameter σ, as is shown in Table 2.2.
Table 2.2. Fractiles, Mean and S.D. of LN(μ, σ)

              σ = 1              σ = 2
  p       μ = 0   μ = 1      μ = 0    μ = 1
 0.25      0.51    1.38       0.26     0.71
 0.50      1.00    2.72       1.00     2.72
 0.75      1.96    5.34       3.85    10.48
 0.90      3.60    9.79      12.96    35.23
 mean      1.65    4.48       7.39    20.09
 S.D.      1.33    3.61      18.68    50.77
The lognormal distribution has been widely applied to model the distri-
bution of material strength, air and water pollution, and other phenomena
with highly skewed distributions.
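The fractile and mean entries of Table 2.2 can be reproduced from t_p = exp{μ + σ z_p} and E{T} = exp{μ + σ²/2}. A sketch using the standard-normal inverse CDF from Python's statistics module:

```python
import math
from statistics import NormalDist

def ln_fractile(p, mu, sigma):
    """t_p = exp(mu + sigma * z_p)   (2.3.45)"""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

def ln_mean(mu, sigma):
    """E{T} = exp(mu + sigma^2 / 2)"""
    return math.exp(mu + sigma**2 / 2)

print(round(ln_fractile(0.75, 0, 1), 2))   # 1.96, as in Table 2.2
print(round(ln_fractile(0.25, 0, 2), 2))   # 0.26
print(round(ln_mean(1, 2), 2))             # 20.09
```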
The t- and the F-distributions are not used as life distributions, but have an important role in statistical inference.

2.4 Discrete Distributions of Failure Counts

(2.4.1)    J = Σ_{i=1}^{n} I{t_i ≤ x},

where

    I{t_i ≤ x} = 1, if t_i ≤ x,
               = 0, otherwise.
The mean of J is

(2.4.3)    E{J} = n F(x).

If the n lifetimes are independent, the number of failures has a binomial distribution, with PDF

(2.4.5)    (n choose j) θ^j (1 − θ)^{n−j},    j = 0, 1, ..., n.

We denote such a binomial distribution by B(n, θ). The PDF of B(n, θ) will be denoted by f_B(j; n, θ). The CDF of B(n, θ) will be designated F_B(j; n, θ). The mean (expected value) of B(n, θ) is E{J} = nθ, and its standard deviation is σ = (nθ(1 − θ))^{1/2}. For example, if n = 10 units have independent exponential lifetimes with β = 5 [hr], the probability that a given unit fails during the first hour is θ1 = 1 − exp(−1/5) = .1813, and the expected number of failures during the first hour is μ = nθ1 = 1.81. The standard deviation of J is σ = (10 × .1813 × .8187)^{1/2} = 1.22. In a similar fashion we obtain
that the distribution of the number of failures during the second hour, J2, is B(10, θ2), where

    θ2 = exp(−1/5) − exp(−2/5) = .1484.

Thus, the expected number of failures during the second hour (taking no account of what happened in the first hour) is μ(2) = 10 θ2 = 1.48. The standard deviation of J2 is σ(2) = 1.12. The distributions of the number of failures during the third hour, etc., can be obtained similarly. •

Computer programs for f_B(j; n, θ) are often based on the recursive formula

(2.4.8)    f_B(j + 1; n, θ) = f_B(j; n, θ) · ((n − j)/(j + 1)) · (θ/(1 − θ)).
Thus, f_B(1; 100, .9) = f_B(0; 100, .9)(100)(9). But, since f_B(0; 100, .9) = .1^100 = 10^{−100}, the computer might show the value 0 for f_B(0; 100, .9), and consequently for all other f_B(j; n, θ) values. To overcome this difficulty, one can apply the normal approximation to the binomial, which is generally good if

(2.4.9)    n ≥ 9/(θ(1 − θ)).
This approximation, for large n, is

(2.4.10)    F_B(j; n, θ) ≈ Φ((j + 1/2 − nθ)/(nθ(1 − θ))^{1/2}).

Thus,

    F_B(80; 100, .9) ≈ Φ((80.5 − 90)/3) = .00077.
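The normal approximation (2.4.10) can be compared with the exact binomial CDF, which is computable directly for moderate n. A sketch:

```python
import math
from statistics import NormalDist

def binom_cdf(j, n, theta):
    """Exact binomial CDF F_B(j; n, theta)."""
    return sum(math.comb(n, i) * theta**i * (1 - theta)**(n - i)
               for i in range(j + 1))

n, theta, j = 100, 0.9, 80
exact = binom_cdf(j, n, theta)
# (2.4.10): continuity-corrected normal approximation
z = (j + 0.5 - n * theta) / math.sqrt(n * theta * (1 - theta))
approx = NormalDist().cdf(z)
print(round(approx, 5))   # 0.00077, as in the text
print(round(exact, 5))    # exact value, same order of magnitude
```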
EXAMPLE 2.12
If the probability of a defective computer chip is θ = 10^{−3}, what is the distribution of the number, J, of defectives among n = 10^4 chips? The theoretical answer is B(10^4, 10^{−3}). However, to compute probabilities like Pr{7 ≤ J ≤ 13} we can use the Poisson distribution with mean λ = nθ = 10^4 × 10^{−3} = 10. Thus,

    Pr{7 ≤ J ≤ 13} ≈ Pos(13; 10) − Pos(6; 10) = .734.

The normal approximation to these binomial probabilities yields

    Pr{7 ≤ J ≤ 13} ≈ .732,

which is close to the value obtained by the Poisson approximation.
•
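The Poisson computation of Example 2.12 is easy to reproduce from (2.3.13). A sketch:

```python
import math

def poisson_cdf(j, lam):
    """Pos(j; lambda) = sum_{i=0}^{j} exp(-lambda) * lambda^i / i!"""
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(j + 1))

lam = 10.0   # lambda = n * theta = 10^4 * 10^-3
prob = poisson_cdf(13, lam) - poisson_cdf(6, lam)
print(round(prob, 3))   # 0.734, as in Example 2.12
```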
2.4.4 Hypergeometric Distributions
The hypergeometric distribution is a distribution of a discrete random variable having a PDF

(2.4.17)    f_H(j; N, M, n) = (M choose j)((N − M) choose (n − j)) / (N choose n),    j = 0, ..., n,

where N, M and n are positive integer-valued parameters. The mean of this distribution is

(2.4.18)    E{J} = n M/N,

and its standard deviation is

(2.4.19)    σ = [n (M/N)(1 − M/N)(N − n)/(N − 1)]^{1/2}.
EXAMPLE 2.13
A lot of N = 1000 elements contains M = 5 defectives. The probability that a random sample of size n = 20, without replacement, will contain more than 1 defective is

    Pr{J > 1} = 1 − Pr{J ≤ 1}
              = 1 − F_H(1; 1000, 5, 20)
              ≈ 1 − F_B(1; 20, .005)
              = 1 − (.995)^20 − 20(.005)(.995)^19
              = .00447.
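The binomial approximation of Example 2.13 can be checked against the exact hypergeometric probability from (2.4.17). A sketch:

```python
import math

def hyper_pdf(j, N, M, n):
    """f_H(j; N, M, n)   (2.4.17)"""
    return math.comb(M, j) * math.comb(N - M, n - j) / math.comb(N, n)

N, M, n = 1000, 5, 20
exact = 1 - hyper_pdf(0, N, M, n) - hyper_pdf(1, N, M, n)
approx = 1 - 0.995**20 - 20 * 0.005 * 0.995**19   # binomial approximation
print(round(approx, 5))   # 0.00447, as in Example 2.13
print(round(exact, 5))
```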
When N, M and n are large, hypergeometric probabilities can be approximated with the aid of the normal distribution:

(2.4.20)    F_H(j; N, M, n) − F_H(i; N, M, n) ≈ Φ((j + 1/2 − μ)/σ) − Φ((i + 1/2 − μ)/σ),

where μ and σ are given by (2.4.18) and (2.4.19).

EXAMPLE 2.14
If N = 100,000, M = 5,000 and n = 300, we find μ = 15 and σ = 3.769; hence,

    Pr{10 ≤ J ≤ 20} ≈ Φ((20.5 − 15)/3.769) − Φ((9.5 − 15)/3.769) = .855.
•
2.5 Exercises
[2.1.1] Give examples of data which are:
(i) right time censored;
(ii) right frequency censored; and
(iii) left time censored and right frequency censored.
[2.1.2] A study of television failures is going to be conducted during the
next three years. The systems (televisions) enter the study as they
are sold. Assume that the sale times are randomly distributed over
the study period. For each television in the study we record the sale
date, the number of hours of operation every day, the failure dates,
the type of failure and length of down time (repair and logistics).
If a customer leaves the area then her/his set drops from the study.
What type of censoring characterizes this study?
[2.2.1] The CDF of the lifetime X is

    F(x) = 0, if x < 1,
         = ln x / ln 5, if 1 ≤ x ≤ 5,
         = 1, if x > 5.
(i) Find the PDF of X.
(ii) Show that x_p = 5^p, 0 < p < 1, is the p-th fractile of X; in particular, the median is Me = 5^{1/2} = 2.236.
(iii) Show that the r-th moment of X is μ_r = (5^r − 1)/(r ln 5), r = 1, 2, .... Use this formula to find the expected value, μ, and the standard deviation, σ.
[2.2.2] Suppose that the lifetime of a piece of equipment has a uniform distribution over the range t1 = 5 [hr] to t2 = 15 [hr]. The PDF is thus

    f(t) = 1/10, for 5 ≤ t ≤ 15,
         = 0, otherwise.
(i) What is the failure rate function of this equipment?
(ii) Show that the MTTF, J.t, and the median life Me are equal.
(iii) What is the standard deviation of this life distribution?
[2.3.1] Consider an exponential distribution with MTTF of 1000 [hr].
(i) Determine the first and third quartiles, E.25 and E.75, of this
distribution.
(ii) Determine the standardized interquartile range W = (E.75 −
E.25)/σ.
[2.3.2] Let X1 and X2 be two independent random variables having the
exponential distributions E(β1) and E(β2), respectively.
(i) Let Y = min(X1, X2). Show that Y has the exponential distribution
E(β*), where β* = βh/2 and βh = [½(1/β1 + 1/β2)]⁻¹ is the harmonic
mean of β1 and β2.
[Hint: Show that the reliability function of Y is e^{−t/β*}. Use the
independence of X1 and X2 and the fact that min(X1, X2) > t if
and only if X1 > t and X2 > t.]
(ii) A system consists of two independent components in series.
Each has an exponential life distribution, with MTTFs β1 = 500
[hr] and β2 = 1000 [hr]. What is the MTTF for the system? What
is the system reliability at t = 300 hours?
[2.3.3] Generalize the result of [2.3.2] to show that, for a system consisting
of n independent components connected in series, the life distribution of
the system is exponential, provided the life distribution of each component
is exponential.
[2.3.11] Determine the expected value and standard deviation of the shifted
Weibull WS(2, 10, 5), where ν = 2, β = 10 and t0 = 5.
[2.3.12] Consider the extreme value distribution EV(50, 2). Compute the
median, the expected value (mean) and the standard deviation of
this distribution.
[2.3.13] If X ~ EV(50, 2), what is the expected value of exp(3X)?
[Hint: X ~ ln W(ν, β), so exp(3X) ~ (W(ν, β))³. Find also the relations
between ξ, δ and ν, β and substitute ξ = 50, δ = 2.]
[2.3.14] If X ~ N(10, 2) find the probabilities
(i) Pr{8 ≤ X ≤ 12};
(ii) Pr{X ≥ 13};
(iii) Pr{|X| > 10.5}.
[2.3.15] The r-th moment of a standard normal distribution, N(0, 1), is
given by the formula

    μ_r = 0,                if r = 2m + 1,
    μ_r = (2m)!/(2^m m!),   if r = 2m,

for all m = 0, 1, .... Find the third and the fourth central moments
of N(100, 5).
[2.3.16] If X is distributed like N(μ, σ), find the expected value of Y =
|X − μ|.
[2.3.17] Consider a device having a truncated normal life distribution, with
μ = 5, σ = 2 and t0 = 3 [hr]. Find the reliability at t = 6 [hr] and
the failure rate at this age.
[2.3.18] Apply the normal approximation to the Erlang life distribution
G(20, 5) [weeks] to determine the probability (approximately) that
a device with that lifetime distribution will fail between 90 and 110
weeks of operation.
[2.3.19] A general formula for the moments of a lognormal distribution
LN(μ, σ) is μ_r = exp(rμ + ½r²σ²), r = 0, 1, 2, ....
(i) Compute the first 3 moments of the distribution for the case of
μ = 1, σ = 1.
(ii) Compute the standard deviation σ, and the third and fourth
central moments.
F(t) = (e^t − 1)/(e^t + 1),   0 < t < ∞,
otherwise. The system operates through the period [0, t0) if, and only if,
I1 I2 = 1. We therefore define the series structure function
(3.1.1)    ψ_s(I1, I2) = I1 I2.
Both I1 and I2 are random variables, and E{Ii} = Pr{Ii = 1} = Ri, where
E{·} denotes the expected value, and Ri is the reliability of Ci, i = 1, 2.
Notice that ψ_s(I1, I2) assumes only the value 0 (if the system fails) or 1 (if
the system survives). The reliability of the system is
(3.1.2)    R_sys = Pr{ψ_s(I1, I2) = 1} = Pr{I1 = 1, I2 = 1}.
But, due to the independence of I1 and I2,
(3.1.3)    Pr{I1 = 1, I2 = 1} = Pr{I1 = 1} Pr{I2 = 1} = R1 R2.
Thus, if we define the function ψ_s(x1, x2) = x1 x2, for all x1, x2 in [0, 1],
then
(3.1.4)    R_sys = Pr{ψ_s(I1, I2) = 1} = ψ_s(R1, R2).
In the same manner one can extend this result to a system of n independent
components connected in series. Thus, let
(3.1.5)    ψ_s(x1, ..., xn) = Π_{i=1}^{n} xi.
Then
(3.1.6)    R_sys = Pr{ψ_s(I1, ..., In) = 1} = R1 · R2 · ... · Rn.
Notice that if T1, ..., Tn are the actual failure times of the n components,
then the failure time of a system connected in series is Ts = min_{1≤i≤n} Ti.
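The product rule (3.1.6) and the identity Ts = min Ti can be checked against one another numerically. The sketch below assumes two components with exponential lifetimes; the MTTF values 500 and 1000 [hr] and the mission time 300 [hr] are illustrative choices, not data from the text.

```python
import random
from math import exp, prod

def series_reliability(R):
    """R_sys = prod of the component reliabilities, eq. (3.1.6)."""
    return prod(R)

# Illustrative case: exponential components with MTTFs 500 and 1000 [hr].
t, b1, b2 = 300.0, 500.0, 1000.0
R_sys = series_reliability([exp(-t / b1), exp(-t / b2)])

# Monte Carlo check: the series system fails at T_s = min(T_1, T_2).
random.seed(1)
n_sim = 200_000
survived = sum(min(random.expovariate(1 / b1), random.expovariate(1 / b2)) > t
               for _ in range(n_sim))
# survived / n_sim should be close to R_sys = exp(-0.6) * exp(-0.3)
```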
A system of two components, C1 and C2, is connected in active-parallel
if the system fails only when both components fail. The parallel structure
function is
(3.1.7)    ψ_p(I1, I2) = 1 − (1 − I1)(1 − I2)
(3.1.8)              = I1 + I2 − I1 I2.
Thus, ψ_p(I1, I2) = 0 if, and only if, both I1 = 0 and I2 = 0. In this case, if
I1 and I2 are independent,
(3.1.9)    R_sys = Pr{ψ_p(I1, I2) = 1}
                = 1 − E{1 − I1} E{1 − I2}
                = 1 − (1 − R1)(1 − R2) = ψ_p(R1, R2).
More generally, for n independent components in active-parallel,
(3.1.10)   R_sys = 1 − Π_{i=1}^{n} (1 − Ri).
For two independent modules, M1 and M2, connected in parallel,
(3.1.11)   R_sys = ψ_p(R_M1, R_M2) = 1 − (1 − R_M1)(1 − R_M2),
where
(3.2.1)    ψ_{(k)n}(R) = Σ_{j=k}^{n} (n choose j) R^j (1 − R)^{n−j}
                      = 1 − F_B(k − 1; n, R).
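The k-out-of-n structure function (3.2.1) is just a binomial tail probability, so it can be sketched directly. The 2-out-of-4 case with R = .95 below is an illustrative choice of numbers.

```python
from math import comb

def k_out_of_n_reliability(k, n, R):
    """Reliability of a system that works when at least k of its n
    independent, identical components work; the binomial tail of (3.2.1)."""
    return sum(comb(n, j) * R**j * (1 - R)**(n - j) for j in range(k, n + 1))

# e.g. a 2-out-of-4 system with component reliability R = .95:
R_sys = k_out_of_n_reliability(2, 4, 0.95)
# sanity check: a k = n system is a pure series system with reliability R**n
```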
48 3. Reliability of Composite Systems
The reliability of M2 is R_M2 = .8263. If the two modules are connected
in parallel then the reliability of the system is
R_sys = ψ_p(.9421, .8263) = 1 − (1 − .9421)(1 − .8263) = .990.
•
3.3 The Decomposition Method 49
(3.3.1)    R_sys = R3 · R_sys|C3 + (1 − R3) · R_sys|C̄3,
where R_sys|C3 is the conditional system reliability, given that C3 survives;
R_sys|C̄3 is the conditional system reliability, given that C3 fails. Now, if C3
survives, then under independence,
(3.3.2)    R_sys|C3 = 1 − (1 − R2)(1 − R4).
Indeed, if we know that C3 operates throughout the mission period, the
system will operate if either C2 or C4 survives; C1 is irrelevant. If C3
fails, the system survives only if both C1 and C2 survive. Thus, under
independence,
(3.3.3)    R_sys|C̄3 = R1 R2.
Hence,
(3.3.5)
But
and
Hence,
This is exactly the same as the result previously obtained. If there is more
than one crosslink, the system reliability can be determined by successive
steps of decomposition.
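The keystone decomposition (3.3.1) can be sketched as a general routine that conditions on one component and enumerates the states of the rest. The structure function below encodes the configuration described above (given C3, the system needs C2 or C4; given C3 failed, it needs C1 and C2); the reliability values are illustrative.

```python
from itertools import product

def decompose(R, keystone, psi):
    """Total-probability decomposition (3.3.1) on one keystone component:
    R_sys = R_k * R_sys|keystone works + (1 - R_k) * R_sys|keystone fails."""
    def cond_rel(fixed):
        # conditional reliability, enumerating states of the other components
        free = [i for i in range(len(R)) if i != keystone]
        total = 0.0
        for states in product([0, 1], repeat=len(free)):
            x = [0] * len(R)
            x[keystone] = fixed
            p = 1.0
            for i, s in zip(free, states):
                x[i] = s
                p *= R[i] if s else 1 - R[i]
            total += p * psi(x)
        return total
    return R[keystone] * cond_rel(1) + (1 - R[keystone]) * cond_rel(0)

# structure described in the text (components indexed 0..3 for C1..C4):
def psi(x):
    c1, c2, c3, c4 = x
    return (1 - (1 - c2) * (1 - c4)) if c3 else c1 * c2

R = [0.9, 0.8, 0.95, 0.85]            # illustrative component reliabilities
R_sys = decompose(R, 2, psi)
# agrees with the closed form R3*(1-(1-R2)(1-R4)) + (1-R3)*R1*R2
```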
EXAMPLE 3.3
•
Consider the system having a double-crosslinked structure, as shown in
Figure 3.5.
Suppose we choose C4 as the first keystone, then
(3.3.7)
If C4 fails, the only way that the system will survive is if C1, C2 and C3 all
survive. Thus,
(3.3.8)
(3.3.9)
(3.3.10)
If C5 fails, then the system survives only if both C2 and C3 survive. Thus,
(3.3.11)
3.4 Minimal Paths and Cuts 51
The system survives if all the components in at least one of these four path
sets survive. Thus, we can write
Figure 3.6. A Bridge Connection
− R1 R3 R4 R5 + 2 R1 R2 R3 R4 R5
= ψ(R1, ..., R5).
•
A cut set is a set of components of a system such that if all the compo-
nents belonging to the set fail then the system fails too. A cut set is called
minimal if the survival of any of its elements entails system survival.
EXAMPLE 3.5
The minimal cut sets of the system presented in Figure 3.6 are
An algorithm for finding all the minimal cut sets will be given in Section
3.7, in connection with the analysis of fault trees.
(3.5.1)    μ_sys = ∫_0^∞ R_sys(t) dt.
EXAMPLE 3.6
Consider the double-crosslinked system in Figure 3.5. Suppose that each
component of the system has an exponential life distribution E(βi), i =
1, ..., 6. The reliability function of the system is given by (3.3.12), and
(3.5.3)    μ_sys = ∫_0^∞ [exp(−t(λ3 + λ4 + λ5))
           + exp(−t(λ4 + λ5 + λ6)) − exp(−t(λ3 + λ4 + λ5 + λ6))
           + exp(−t(λ2 + λ3 + λ4)) − exp(−t(λ2 + λ3 + λ4 + λ5))
           + exp(−t(λ1 + λ2 + λ3)) − exp(−t(λ1 + λ2 + λ3 + λ4))] dt.
In the special case of equal failure rates, λi = λ for all i, this yields
μ_sys = (7/12) · (1/λ) = (7/12) β,
where {3 = 1/ A.
(3.5.5)
Indeed, according to (2.3.7), the reliability function of each component is
R(t) = Pos(k − 1; t/β). Hence,
(3.5.6)
(3.6.1)
3.6 Sequentially Operating Components 55
R_sys(t) = (1 + .95 β2/β1) exp(−t/β1) − .95 (β2/β1) exp(−t(1/β1 + 1/β2)).
The mean time to failure of the system is then
(3.6.5)
top event
OR Gate
AND Gate
Figure 3.8. Types of Gates
3.7 Fault Tree Analysis 57
SYSTEM
FAILURE
1:C1 Fails
2: C2 Fails
3: C3 Fails
Figure 3.9. Series Structure Fault Diagram
Figure 3.10. Fault Tree Diagram For the System of Figure 3.3
SECONDARY
FAILURE
this circuit we will list first all the minimal cut sets. A cut set of an event
tree is a set of basic events whose occurrence causes the top event to happen.
A cut set is minimal if each one of its elements is essential. An algorithm
for generating such a list is provided in the following section.
Algorithm for Generating Minimal Cut Sets
1. All gates are numbered, starting with the top event gate, GO, down to
the last gate.
2. All basic events are numbered B1, B2, ....
3. Start the list at GO. At any stage of the process, if a gate is an OR
gate replace it with a list of all gates or basic events feeding into it on
separate rows; if the gate is an AND gate insert the list on the same row.
4. Continue till all the gates are replaced by basic events.
We now illustrate the algorithm on the fault tree in Figure 3.12.
GO
Gl, G2, G3
Bl, G2, G3
B2, G2, G3
Bl, B4, G3
B2, B4, G3
Bl, B4, G4
Bl, B4, G5
B2, B4, G4
B2, B4, G5
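The four-step expansion above can be sketched as a small routine: OR gates split a row into several rows, AND gates extend the row, and non-minimal sets are pruned at the end. The two-gate tree in the usage example is hypothetical, chosen only so the expected cut sets are easy to verify by hand; it is not the tree of Figure 3.12.

```python
def minimal_cut_sets(gates, top="G0"):
    """Generate minimal cut sets by the top-down gate-expansion algorithm:
    replace an OR gate by one row per input, an AND gate by its inputs
    on the same row, until only basic events remain."""
    rows = [[top]]
    while True:
        new_rows, expanded = [], False
        for row in rows:
            gate = next((g for g in row if g in gates), None)
            if gate is None:
                new_rows.append(row)
                continue
            expanded = True
            kind, inputs = gates[gate]
            rest = [g for g in row if g != gate]
            if kind == "AND":
                new_rows.append(rest + list(inputs))
            else:  # OR gate
                new_rows.extend(rest + [inp] for inp in inputs)
        rows = new_rows
        if not expanded:
            break
    # a set is minimal if no other cut set is a proper subset of it
    sets_ = [set(r) for r in rows]
    minimal = [s for s in sets_ if not any(o < s for o in sets_)]
    out = []
    for s in minimal:          # drop duplicates, keep first occurrence
        if s not in out:
            out.append(s)
    return out

# hypothetical tree: the top event is an AND of gate G1 and basic event B4,
# and G1 is an OR of B1 and B2
gates = {"G0": ("AND", ["G1", "B4"]),
         "G1": ("OR", ["B1", "B2"])}
cuts = minimal_cut_sets(gates)      # the sets {B1, B4} and {B2, B4}
```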
Thus, from (3.7.2) - (3.7.4) we obtain the following upper bound to Q_sys:
(3.7.5)    Q_sys ≤ Σ_{l=1}^{m} Π_{i∈K_l} Q_i.
For example, if Qi = .05 for all i = 1, ..., 8 for the electric circuit analyzed
above, then an upper bound to the system failure probability is
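The bound (3.7.5) is a sum, over the minimal cut sets, of the products of the component failure probabilities. A minimal sketch, with two hypothetical cut sets standing in for the circuit's actual ones:

```python
def cut_set_upper_bound(cut_sets, q):
    """Upper bound (3.7.5) on the system failure probability:
    Q_sys <= sum over minimal cut sets K_l of prod_{i in K_l} q_i."""
    bound = 0.0
    for K in cut_sets:
        p = 1.0
        for i in K:
            p *= q[i]
        bound += p
    return bound

# eight components with q_i = .05, and two hypothetical cut sets of size 2:
q = {i: 0.05 for i in range(1, 9)}
bound = cut_set_upper_bound([{1, 4}, {2, 4}], q)    # 2 * .05**2 = .005
```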
3.8 Exercises
[3.1.1] An aircraft has four engines but can land using only two engines.
(i) Assuming that the reliability of each engine is R = .95 to complete
a mission, and that engine failures are independent, compute
the mission reliability of the aircraft.
(ii) What is the mission reliability of the aircraft if at least one
functioning engine must be on each wing?
[3.1.2] (i) Draw the block diagram of a system having the following struc-
ture function:
where
and
ψ_M2 = ψ_s(R4, R5).
(ii) Determine Rsys if all the components act independently and
have the same reliability, R = .8.
[3.1.3] Consider a system of n components in a series structure. Let
R1, ..., Rn be the reliabilities of the components. Show that the
system reliability satisfies
R_sys ≥ 1 − Σ_{i=1}^{n} (1 − Ri).
Figure 3.15.
for this variance. Compute the variance of the time till failure of
the double-crosslinked system given in Figure 3.5.
[3.6.1] A system consists of a main unit and two standby units. The
lifetimes of these units are exponential with mean β = 100 [hr].
Assuming that the standby units undergo no failures when idle,
and that switching will take place when required, compute the
MTTF of the system.
[3.7.1] Determine the minimal cut sets of the system having the fault tree
of Figure 3.17.
[3.7.2] Determine the failure probability of a system having the fault tree
of Exercise [3.7.1], when the failure probability of B1 is q1 = .05,
those of B2, B3, B4 are equal to q2 = .10, that of B5 is .03, of B6 is
.07 and of B7 is .06. Moreover, component failures are independent
events.
[3.7.3] Figure 3.18 provides a fault tree for a domestic water heater sys-
tem.
(i) List all the cut sets of the system.
(ii) Write a formula for the failure probability of this system, as-
suming independent failure events.
4
Reliability of Repairable Systems
t1 = T1,              τ1 = t1 + S1     (1st renewal)
t2 = τ1 + T2,         τ2 = t2 + S2     (2nd renewal)
...
tn = τ_{n−1} + Tn,    τn = tn + Sn     (n-th renewal)
(Figure: sample path of the renewal process; the 1st cycle ends at the renewal epoch τ1, the 2nd cycle at τ2.)
PDFs f(t) and g(s), respectively, the PDF of C, k(t), can be obtained by
the convolution formula. If, for example,
f(t) = (1/β) exp(−t/β)
and
g(s) = (1/γ) exp(−s/γ),
then
k(t) = (1/(βγ)) ∫_0^t exp{−x/β − (t − x)/γ} dx.
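The convolution of the TTF and TTR densities can be evaluated numerically and compared with its closed form (for β ≠ γ the integral above works out to (e^{−t/β} − e^{−t/γ})/(β − γ)). A minimal sketch, with illustrative scale values:

```python
from math import exp

def cycle_pdf_numeric(t, beta, gamma, m=10_000):
    """k(t) by numerically convolving f(x) = exp(-x/beta)/beta with
    g(s) = exp(-s/gamma)/gamma (trapezoidal rule; a sketch only)."""
    if t == 0:
        return 0.0
    h = t / m
    f = lambda x: exp(-x / beta) / beta
    g = lambda s: exp(-s / gamma) / gamma
    vals = [f(i * h) * g(t - i * h) for i in range(m + 1)]
    return h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)

def cycle_pdf_exact(t, beta, gamma):
    """Closed form of the convolution, valid for beta != gamma."""
    return (exp(-t / beta) - exp(-t / gamma)) / (beta - gamma)

# mean repair time much smaller than mean lifetime, i.e. gamma << beta
t, beta, gamma = 50.0, 100.0, 1.0
k_num = cycle_pdf_numeric(t, beta, gamma)
k_exact = cycle_pdf_exact(t, beta, gamma)
# the two evaluations agree to several decimal places
```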
We see that if the failure time is exponential and the repair time is expo-
nential, the distribution of the cycle time is not exponential. Usually, the
mean repair time is much smaller than the mean lifetime, i.e., γ << β.
Let N_F(t) be the number of failures of the system during the time interval
(0, t], assuming that F(0) = 0. Let N_R(t) be the number of repairs accom-
plished (renewals) in (0, t]. Obviously, N_F(0) = 0 and N_R(t) ≤ N_F(t) for
all 0 ≤ t < ∞. If the system is not repairable then N_F(t) ≤ 1 for all t,
lim_{t→∞} N_F(t) = 1 and N_R(t) = 0 for all t.
If we denote by Q(t) the probability that the system is down at time t,
then Q(t) = E{J(t)} = 1 − A(t), where A(t) is the availability function.
Let W(t) = E{N_F(t)} and V(t) = E{N_R(t)}. Notice that W(t) = F(t),
namely the CDF of the TTF, if the system is unrepairable. In repairable
systems W(t) ≥ F(t) for all t, and lim_{t→∞} W(t) = ∞. Also Q(t) = W(t) −
V(t) for all t. Let us assume that W(t) and V(t) are differentiable with
respect to t almost everywhere (absolutely continuous functions of t). Let
w(t) = W'(t) and v(t) = V'(t) (wherever the derivatives exist). The failure
intensity function of a repairable system, λ(t), is defined accordingly.
Notice that the failure rate function h(t) discussed in the previous chapters
coincides with λ(t) if the system is unrepairable. The function h(t) char-
acterizes the TTF distribution, F(t), while λ(t) depends also on the repair
process.
The random function {N_R(t); 0 ≤ t < ∞} is called a renewal process;
V(t) = E{N_R(t)} is called the renewal function and v(t) is called the
renewal density. In the following section we discuss some properties of
these functions.
(4.2.2)
70 4. Reliability of Repairable Systems
EXAMPLE 4.2
Suppose the distribution of cycle length C is exponential, E(β). This is
the case, for example, when a renewal is instantaneous after a failure. Then
τn ~ G(n, β).
f(·) and g(·) are the PDFs of the TTF and TTR.
Let v*(s), w*(s), f*(s) and g*(s) denote the Laplace transforms of v(t),
w(t), f(t) and g(t), respectively, i.e., v*(s) = ∫_0^∞ e^{−ts} v(t) dt, etc. Equations
(4.2.6) and (4.2.7) yield the Laplace transforms
and
f(t) = 0 for t ≤ 0,   f(t) = λ e^{−λt} for t > 0,
and
g(t) = 0 for t ≤ 0,   g(t) = μ e^{−μt} for t > 0.
The corresponding Laplace transforms are f*(s) = λ/(λ + s) and g*(s) =
μ/(μ + s), respectively. According to (4.2.8), the Laplace transform of the
renewal density is
(4.2.10)    v*(s) = λμ/(s² + (λ + μ)s) = (λμ/(λ + μ)) (1/s − 1/(s + λ + μ)).
(4.2.13)
and
(4.2.14)
A(t) = 1 − Q(t)
(4.2.16)
     = μ/(λ + μ) + (λ/(λ + μ)) e^{−(λ+μ)t},   0 < t < ∞.
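The availability function (4.2.16) can be sketched and checked against the values tabulated below for λ = .01 and μ = 2.0:

```python
from math import exp

def availability(t, lam, mu):
    """Availability of a one-unit repairable system, eq. (4.2.16):
    A(t) = mu/(lam+mu) + (lam/(lam+mu)) * exp(-(lam+mu)*t)."""
    return mu / (lam + mu) + lam / (lam + mu) * exp(-(lam + mu) * t)

lam, mu = 0.01, 2.0
A0 = availability(0, lam, mu)       # 1.00000 at t = 0
A10 = availability(10, lam, mu)     # ≈ 0.99502, as in Table 4.1
A_inf = mu / (lam + mu)             # the asymptotic availability
```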
(4.2.21)
Notice that q(s) = p(s) + λ². Let s1 and s2 be the two roots of the second
order polynomial q(s). These roots are
(4.2.23)
(4.2.24)
0 < t < ∞.
Notice from (4.2.23) that both roots s1 and s2 are negative. Thus, the
asymptotic availability of the system under consideration is
(4.2.25)    A∞ = μ/(μ + λ/2).
In the following table we compare (4.2.24) to (4.2.16) for several values of t,
for the case of μ = 2.00 [1/hr] and λ = .01 [1/hr]. We see that the present
system has a higher availability than that with no standby unit at all t
values.
Table 4.1. The Availability Functions (4.2.16)
and (4.2.24), λ = .01, μ = 2.0

Time    Availability
[hr]    (4.2.16)    (4.2.24)
0       1.00000     1.00000
10      0.99502     0.99957
20      0.99502     0.99919
30      0.99502     0.99889
40      0.99502     0.99864
50      0.99502     0.99843
100     0.99502     0.99751
•
4.3 Asymptotic Approximations
In cases where the TTF and TTR are not exponentially distributed, explicit
solutions for the renewal function and its density may be difficult to obtain.
However, there are a number of asymptotic results which can provide useful
approximations for large values of t (large here meaning large relative to
the mean cycle length). A few of these results are listed below. In all cases,
J1 is the mean length of the renewal cycle and u 2 is its variance. These are
assumed to be finite and K(t) is assumed to be continuous.
Result 1: lim_{t→∞} V(t)/t = 1/μ.

Result 2: lim_{t→∞} [V(t + a) − V(t)] = a/μ, for any a > 0.

Result 3: lim_{t→∞} [V(t) − t/μ] = σ²/(2μ²) − 1/2.

Result 4: If k(t) is continuous, then lim_{t→∞} v(t) = 1/μ.

Result 5: lim_{t→∞} Pr{(N_R(t) − t/μ)/(σ (t/μ³)^{1/2}) ≤ z} = Φ(z), for any z.

The residual time till next renewal of a system at time t is defined as
γ_t = Σ_{i=0}^{N_R(t)+1} C_i − t,  where C0 ≡ 0.
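Result 1 can be checked by simulation: count renewals in (0, t] over many sample paths and compare V(t)/t with 1/μ. The sketch below assumes exponential TTF and TTR with illustrative means 10 and 1, so μ = 11.

```python
import random

def n_renewals(t, draw_cycle):
    """Number of renewals in (0, t] along one simulated sample path."""
    s, n = 0.0, 0
    while True:
        s += draw_cycle()
        if s > t:
            return n
        n += 1

random.seed(2)
beta, gamma = 10.0, 1.0                 # illustrative mean TTF and mean TTR
mu = beta + gamma                       # mean renewal cycle length
draw = lambda: random.expovariate(1 / beta) + random.expovariate(1 / gamma)

t, reps = 2_000.0, 400
V_hat = sum(n_renewals(t, draw) for _ in range(reps)) / reps
# Result 1 predicts V(t)/t -> 1/mu = 1/11 for large t
```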
(4.3.3)    R∞(u) = A∞ · (∫_u^∞ (1 − F(x)) dx) / μ_T.
EXAMPLE 4.7
Suppose that T ~ E(β) and S ~ E(γ). Then 1 − F(t + u) = e^{−(t+u)/β} =
e^{−u/β}(1 − F(t)), and similarly 1 − F(t + u − x) = e^{−u/β}(1 − F(t − x)). Thus,
substituting in (4.3.2), we arrive at the equation
(4.3.5)
•
4.4. Increasing the Availability by
Preventive Maintenance and Standby Systems
4.4.1 Systems with Standby and Repair
A typical standby model is one in which there are two (or more) identical
(or non-identical) units, but only one is required to perform the operation.
The other unit(s) is (are) in standby. If the operating unit fails a standby
unit starts to function immediately. The failed unit enters repair immedi-
ately. The intensity of failure of the operating unit is .A, and the intensity of
repair is JL. It is generally the case that JL is much bigger than .A, and repair
of a failed unit is expected to be accomplished before the other operating
unit fails. In this case the system is continuously up. The system fails when
all units are down and require repair. There is only one repairman, who
repairs the units in order of entering the repair queue. Thus, in the case of
two units we distinguish between six states of the system:
Under this model we may further assume that unit 1 in standby may fail
with intensity λ1* (could be zero) and unit 2 in standby may fail with in-
tensity λ2*. The repair intensity of unit 1 is μ1 and of unit 2 is μ2.
The availability analysis of such a standby system, or of a more compli-
cated one, can be performed under the assumption that the transitions of
the system from one state to another follow a Birth and Death Markov
Process. This subject is not discussed in the present text. The interested
reader is referred to N.J. McCormick (1981, p. 120) or I.B. Gertsbakh (1989,
p. 283). We present here only the formulae for the steady-state availability,
A∞, for several such systems, assuming no failure of standby units (i.e.,
λ* = 0).
EXAMPLE 4.8
The TTF of a radar system is exponentially distributed with mean
MTTF = 250 [hr]. The repair time, TTR, of this system is also expo-
nentially distributed with MTTR = 3 [hr]. Thus, if this system has no
standby units, its steady-state availability is A∞ = 250/(250 + 3) = .988. If
we add to the system an identical standby unit we reach a steady-state
availability of
A∞(T) = μ_U / (μ_U + μ_D)
(4.4.3)
       = (∫_0^T (1 − F(u)) du) / (∫_0^T (1 − F(u)) du + (γ − δ)F(T) + δ).
(4.4.4)    A∞(T) = β / ((β + γ) + δ e^{−T/β}(1 − e^{−T/β})^{−1}).
1 − F(x) = e^{−(x/β)²}
and
(4.4.5)    ∫_0^T [1 − F(x)] dx = ∫_0^T e^{−(x/β)²} dx.
Let x = T/β. The function A∞(x) is plotted in Figure 4.2 for the case
of β = 100 [hr], γ = 2 [hr], δ = 1 [hr].
We see in Figure 4.2 that A∞(T) has a unique point of maximum. The
optimal value of T is T° = βx°/√2, where x° is the (unique) root of the
equation
e^{−x²/2} + x √(2π) (Φ(x) − 1/2) = γ/(γ − δ).
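The optimal maintenance period can also be located directly by maximizing (4.4.3) over a grid, since for the Weibull form above the numerator integral has the closed form (β√π/2) erf(T/β). A sketch with the values of Figure 4.2 (β = 100, γ = 2, δ = 1 [hr]):

```python
from math import exp, sqrt, pi, erf

beta, gamma, delta = 100.0, 2.0, 1.0    # the values used in Figure 4.2

def A_inf(T):
    """Asymptotic availability (4.4.3) for 1 - F(x) = exp(-(x/beta)^2),
    using the closed form of the integral of exp(-(u/beta)^2)."""
    up = beta * sqrt(pi) / 2 * erf(T / beta)
    F_T = 1 - exp(-(T / beta) ** 2)
    return up / (up + (gamma - delta) * F_T + delta)

# locate the maximizing maintenance period on a fine grid of T/beta values
Ts = [beta * x / 1000 for x in range(1, 3001)]   # T/beta in (0, 3]
T_opt = max(Ts, key=A_inf)
x_opt = T_opt / beta
# the maximum lies at an interior point of the plotted range, near T/beta ≈ 1.1
```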
Figure 4.2. Asymptotic Availability Function (4.4.5),
β = 100 [hr], γ = 2 [hr], δ = 1 [hr]
(4.4.6)    A∞(T) = (∫_0^T R(u) du) / (∫_0^T R(u) du + (γ − δ)F(T) + δ),
and the optimal maintenance period T satisfies
(4.4.7)    γ = (γ − δ) [R(T) + h(T) ∫_0^T R(u) du],   0 < T < ∞.
Let W(T) = R(T) + h(T) ∫_0^T R(u) du. If F(t) is a DFR distribution, then
W(T) is a decreasing function of T. Finally, if lim_{T→0} h(T) = 0,
then W(0) = R(0) = 1. Thus, if the distribution F(t) is a DFR one, then
the right hand side of (4.4.7) is smaller than γ for every finite T, and (4.4.7)
has no finite root. This means that for a system
with a TTF having a DFR distribution, the optimal policy is not to have
preventive maintenance.
For additional treatment of the topic the reader is referred to the books
of Gertsbakh (1989, Chapter 4) and Barlow and Proschan (1965).
4.5 Exercises
[4.1.1] Suppose that the TTF in a renewal cycle has a W(ν, β) distribution
and that the TTR has a lognormal distribution LN(μ, σ). Assume
further that TTF and TTR are independent. Find the mean and
standard deviation of the length of a renewal cycle.
[4.1.2] Show that if X1 ~ N(μ1, σ1) and X2 ~ N(μ2, σ2), and if X1 and
X2 are independent, then X1 + X2 ~ N(μ1 + μ2, (σ1² + σ2²)^{1/2}).
[4.2.1] Suppose that a renewal cycle C has a normal distribution N(100, 10).
Apply formula (4.2.1) and the results of the previous exercises to
determine the PDF of N_R(200).
[4.2.2] Suppose that a renewal cycle C has a gamma distribution G(k, β),
k = 1, 2, ..., 0 < β < ∞.
(i) Show that
[4.2.4] Derive the renewal density, v(t), for the process of Exercise [4.2.3].
[4.2.5] Suppose that the components of a renewal cycle are independent
random variables T and S, T ~ W(2, β) and S ~ E(γ), 0 < β < ∞,
0 < γ < ∞. Show that the Laplace transform of the renewal
density is
v*(s) = μ(1 − sψ(s)) / (s(1 + μψ(s))),
where
(5.1.2)    I{Xi ≤ x} = 1, if Xi ≤ x,
           I{Xi ≤ x} = 0, if Xi > x.
5.1 Probability Plotting for Parametric Models with Uncensored Data 85
(Figure: empirical CDF of the sample; cumulative probability versus sample values.)
A basic result in probability theory states that the empirical CDF, Fn(x),
converges in a probabilistic sense, as n grows, to F(x). This means that, if
the sample is very large, the empirical CDF is close to the true CDF with
high probability. Notice that if X_(1) ≤ X_(2) ≤ ... ≤ X_(n) are the ordered
sample values, then the value of F_n(x) at x = X_(i) is i/n, i.e.,
(5.1.3)    F_n(X_(i)) = i/n,   i = 1, ..., n.
(Figure 5.2: sample values versus normal scores, with the least-squares line; slope 1.0233, intercept 9.9198, R squared .9931.)
In Figure 5.2 we present a plot of the points (z_{i,n}, X_(i)), where z_{i,n} =
Φ⁻¹((i − 3/8)/(n + 1/4)). The z_{i,n} are called the normal scores of the sample.
A straight line was fitted through the points by the method of least-
squares. The slope of this line provides an estimate of the standard devi-
ation, σ, and the intercept provides an estimate of μ. In Figure 5.2 the
intercept of the line is μ̂ = 9.92 and its slope is σ̂ = 1.02.
•
We provide now a list of the coordinates required for the probability plots
corresponding to the various families of distributions discussed in Chapter
2. The value p_i = i/(n + 1) corresponding to X_(i) is called the plotting
position. In some books we find the use of p_i = (i − .5)/n as a plotting
position. We remark here that i/(n + 1) is the expected value of F(X_(i)).
The problem of which plotting position should be used receives considerable
attention in statistical research.
1. Exponential or shifted exponential distributions:
   X_(i) versus E_{i,n}, where E_{i,n} = −ln(1 − i/(n+1)), i = 1, ..., n.
2. Weibull distributions:
   Y_i = ln X_(i) versus U_{i,n} = ln(−ln(1 − i/(n+1))), i = 1, ..., n.
3. Extreme value distributions (minima):
   X_(i) versus U_{i,n} = ln(−ln(1 − i/(n+1))), i = 1, ..., n.
4. Gamma distributions:
   X_(i) versus G_{i,n} = the i/(n+1)-th fractile of G(ν, 1), i = 1, ..., n.
5. Normal distributions:
   X_(i) versus Z_{i,n} = Φ⁻¹((i − 3/8)/(n + 1/4)), i = 1, ..., n.
6. Lognormal distributions:
   Y_i = ln X_(i) versus Z_{i,n} = Φ⁻¹((i − 3/8)/(n + 1/4)), i = 1, ..., n.
Some parameters of the life distributions can be estimated directly from
the probability plots. A list of these parameters and the corresponding
estimates is given below. We denote by x̂(z) the predicted sample value for
a given score z, i.e., x̂(z) = intercept + z · slope.
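The least-squares fit behind a normal probability plot can be sketched as follows: compute the normal scores, regress the ordered sample values on them, and read off μ̂ from the intercept and σ̂ from the slope. The simulated N(10, 1) sample is illustrative.

```python
import random
from statistics import NormalDist

def normal_probability_plot_fit(sample):
    """Fit x_(i) = intercept + slope * z_{i,n} by least squares, with
    normal scores z_{i,n} = Phi^{-1}((i - 3/8)/(n + 1/4)).
    The intercept estimates mu; the slope estimates sigma."""
    xs = sorted(sample)
    n = len(xs)
    zs = [NormalDist().inv_cdf((i - 0.375) / (n + 0.25)) for i in range(1, n + 1)]
    zbar = sum(zs) / n
    xbar = sum(xs) / n
    slope = (sum((z - zbar) * (x - xbar) for z, x in zip(zs, xs))
             / sum((z - zbar) ** 2 for z in zs))
    return xbar - slope * zbar, slope      # (intercept, slope)

random.seed(3)
sample = [random.gauss(10, 1) for _ in range(100)]
mu_hat, sigma_hat = normal_probability_plot_fit(sample)
# mu_hat should be near 10 and sigma_hat near 1, as in Figure 5.2
```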
(Figure: probability plot with the least-squares line through the origin; slope 5.9413, intercept .0000, R squared .9800.)
ŷ = .856 + .479 u,
ν̂ = 1/.479 = 2.087,
β̂ = exp(.856) = 2.354,
Median = exp(.856 − (.3665)(.479)) = 1.975.
The true median is equal to β(ln 2)^{1/2} = 2.081. The estimate of the mean
is
μ̂ = β̂ Γ(1 + .479) = β̂ × .479 × Γ(.479) = 2.080.
Figure 5.4. Probability Plot of n = 100 Random Deviates from W(2, 2.5)
The true mean is μ = β Γ(1.5) = 2.216. Finally, an estimate of the standard
deviation is
σ̂ = β̂ (Γ(1.958) − Γ²(1.479))^{1/2}
  = β̂ [.958 × Γ(.958) − (.479 × Γ(.479))²]^{1/2} = 1.054.
The true value is σ = β (Γ(2) − Γ²(1.5))^{1/2} = 1.158.
•
EXAMPLE 5.3
A sample of n = 72 observations were taken on the repair time of an
insertion machine. The sample statistics are given in the following table.
We see in Table 5.2 that 75% of the sample values are not greater than
2 [min], and that the distribution is apparently very skewed (the maximum
value is 18 [min]). For this reason a lognormal probability plotting was
90 5. Graphical Analysis Of Life Data
(Figure 5.5: lognormal probability plot of the repair times; log sample values versus normal scores.)
done. The plot is presented in Figure 5.5. We see in this figure that the fit
of the lognormal distribution to the sample values is quite good.
The parameters of the repair distribution are estimated from the line
fitted to the plot. We obtain
Notice that there are differences between the sample statistics in Table 5.1
and the estimates from the plot. The sample mean and the sample standard
deviation are sensitive to extreme values in the sample and are therefore
not stable (robust) estimates of the parameters of the distribution.
•
5.2 Probability Plotting with Censored Data
There are cases in which observations are censored either from the left, or
from the right, or both. For example, if any repair taking less than half
a minute is recorded as .5 [min] then in some cases, as in the data set of
Example 5.2, a few values are left censored. Similarly, if any repair taking
longer than 5 [min] is recorded as 5+ we have lost the exact information on
the length of the repair time. We plot only the points which correspond to
i      X_(i)     i/(n+1)
1      -         -
2      -         -
3      X_(3)     3/11
4      X_(4)     4/11
...
9      X_(9)     9/11
10     -         -

We plot only seven points (X_(i), Z_i), i = 3, ..., 9, where Z_i = F⁻¹(i/11).
(Figure: probability plot of the censored sample; log-sample values versus scores, with only the uncensored points plotted.)
are conducted in field conditions, and units on test may be lost, withdrawn
or destroyed for reasons different from the failure phenomenon under study.
Suppose now that systems are installed in the field as they are purchased
(random times). We decide to make a follow-up study of the systems for
a period of two years. The time till failure of systems participating in the
study is recorded. We assume that each system operates continuously from
the time of installment until its failure. If a system has not failed by the end
of the study period the only information available is the length of time it
has been operating. This is a case of multiple censoring. At the end of the
study period we have the following observations: {(T_i, δ_i), i = 1, ..., n},
where n is the number of systems participating in the study; T_i is the length
of operation of the i-th system (TTF or time till censoring); δ_i = 1 if the
i-th observation is not censored and δ_i = 0 otherwise.
Let T_(1) ≤ T_(2) ≤ ... ≤ T_(n) be the order statistic of the operation
times and let δ_{j1}, δ_{j2}, ..., δ_{jn} be the δ-values corresponding to the ordered
T values, where j_i is the index of the i-th order statistic T_(i), i.e., T_(i) = T_{ji}
(i = 1, ..., n).
The PL estimator of R(t) is given then by
(5.3.4)
This version of the estimator of R(t), when the inspection times are fixed
(not random failure times), is called the actuarial estimator.
In the following examples we illustrate these estimators of the reliability
function. Before proceeding to the examples we remark that, according
to (1.3.6), a non-parametric estimator of the cumulative hazard function,
H(t), is
EXAMPLE 5.5
A machine is tested before it is shipped to the customer for a one week
period (120 [hr]) or till its first failure, whichever comes first. Twenty such
machines were tested consecutively. In Table 5.4 we present the ordered
time till failure or time till censor (TTF/TTC) of the 20 machines, the
factors (1 − δ_i/(n − i + 1)) and the PL estimator R̂(t_i), i = 1, ..., 20. The
graphs of R̂_20(t) and of Ĥ_20(t) are presented in Figure 5.7 and Figure 5.8.
In Figure 5.7 we present also the reliability function of E(100) and in
Figure 5.8 the cumulative hazard function of E(100). We see from the
plots that, apparently, the TTF of the tested machines is exponentially
distributed, with an MTTF close to 100 [hr]. Later we will study how to
estimate the MTTF when censoring is present.
•
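The PL estimator of Example 5.5 multiplies the factors (1 − δ_i/(n − i + 1)) over the ordered observations. A minimal sketch on a small hypothetical data set (the times and censoring flags below are not the data of Table 5.4):

```python
def pl_estimator(times, deltas):
    """Product-limit estimate of R(t) at each ordered observation:
    R_hat(t_(i)) = prod_{j <= i} (1 - delta_j / (n - j + 1)),
    where delta_j = 1 marks a failure and delta_j = 0 a censored time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n = len(times)
    R, curve = 1.0, []
    for rank, i in enumerate(order, start=1):
        R *= 1 - deltas[i] / (n - rank + 1)
        curve.append((times[i], R))
    return curve

# hypothetical data: failures at 20, 45, 80 [hr]; two units censored at 120 [hr]
times = [20, 45, 120, 80, 120]
deltas = [1, 1, 0, 1, 0]
curve = pl_estimator(times, deltas)
# R_hat drops to 4/5, 3/5, 2/5 at the three failure times and then stays flat
```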
(Figure 5.7: the PL estimator (empirical) and the reliability function of E(100) (theoretical) versus time [hr].)
Suppose that n units are put on test at time t0 = 0. Let 0 < t1 < ... <
tn < ∞ be the failure times of these units. Let t0 = 0 and t_{n+1} = ∞. The
total time on test (TTT) at time t is defined as
(5.3.6)    T(t) = I{t ≤ t1} nt + Σ_{i=2}^{n} I{t_{i−1} < t ≤ t_i} [Σ_{j=1}^{i−1} (n − j + 1)(t_j − t_{j−1}) + (n − i + 1)(t − t_{i−1})].
Let T_i = T(t_i), i = 1, ..., n, be the values of T(t) at the failure times. Let
V_i = T_i/T_n (i = 1, ..., n). The graph of (i/n, V_i), i = 1, ..., n, is called the
TTT plot.
In Figure 5.9 we present the TTT plot for the data of Table 5.4.
The graph of (i/n, V_i) approaches, as n → ∞, the function
(5.3.7)    H_F(x) = (1/μ) ∫_0^{F⁻¹(x)} R(y) dy,   0 ≤ x ≤ 1,
1.5
Empirical
~
",'"'"
","'''' Theoretical
",'"
1:0 ",,'"
Q
II:
c(
N
c(
:c
:I
::;)
0
.5
o 50 100 150
TIME [Hr]
where F(y) is the CDF of the TTF, and R(y) = 1 − F(y) is the corre-
sponding reliability function. We can show that if F(y) = 1 − e^{−λy} then
H_F(x) = x. On the other hand, if the failure rate function h(t) is in-
creasing (IFR distribution) then H_F(x) is a concave function of x, with
H_F(0) = 0 and H_F(1) = 1. Thus, in the IFR case H_F(x) > x for all
0 < x < 1. Conversely, if F is a DFR distribution then the function H_F(x)
is convex, and H_F(x) < x for all 0 < x < 1. The TTT plot of Figure 5.9
shows that apparently the life distribution it represents is an IFR one.
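The TTT plot coordinates (i/n, T_i/T_n) of (5.3.6) can be sketched directly; for exponential data the plot should hug the diagonal, while IFR data bend above it. The simulated exponential sample is illustrative.

```python
import random

def ttt_coordinates(failure_times):
    """Coordinates (i/n, T_i/T_n) of the TTT plot, where
    T_i = sum_{j <= i} (n - j + 1)(t_(j) - t_(j-1)), with t_(0) = 0."""
    ts = sorted(failure_times)
    n = len(ts)
    T, prev, Ts = 0.0, 0.0, []
    for j, t in enumerate(ts, start=1):
        T += (n - j + 1) * (t - prev)
        prev = t
        Ts.append(T)
    return [(i / n, Ti / Ts[-1]) for i, Ti in enumerate(Ts, start=1)]

# exponential lifetimes: the plot should stay close to H_F(x) = x
random.seed(4)
coords = ttt_coordinates([random.expovariate(1.0) for _ in range(500)])
```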
(Figure 5.9: TTT plot of the data of Table 5.4; total time on test versus proportional rank.)
by special software RELSTAT© for the PC. One can make probability
plots on the PC by using various software packages in the market, like
MINITAB©, STATGRAPHICS©, and others.
5.5 Exercises
[5.1.1] Use a computer program to generate a random sample from N(50, 5)
of size n = 50.
(i) Plot the empirical CDF of this sample.
(ii) Draw a normal probability plot of this sample, and estimate μ
and σ from this plot.
[5.1.2] The following data are failure times [hr] of a random sample of
n = 50 electronic devices.
26.3, 78.5, 29.8, 22.6, 113.1, 10.8, 157.4, 2.4, 51.9, 29.3, 40.3, 216.6,
30.5,31.6, 57.5, 38.1, 113.7, 1.0,96.8,63.3, 72.1, 107.4,39.6,29.0,
11.0, 105.2, 36.7, 7.1, 85.5, 24.6, 28.0, 23.6, 14.7, 24.3, 46.9, 56.9,
293.4, 33.0, 47.0, 51.9, 20.0, 20.3, 158.9, 54.0, 14.8, 81.2, 46.0, 42.8,
8.9,35.7.
(i) Make a Weibull probability plot of this sample.
(ii) Make an exponential probability plot of the data.
(Figure 6.1: exponential probability plot of the failure times; ordered sample values t_(i) versus the scores E_{i,n}.)

E_{i,n} = −ln(1 − i/(n+1)),   i = 1, ..., n,
where t(i) represents the i-th order statistic of the failure times.
102 6. Estimation of Life Distributions and System Characteristics
We see in Figure 6.1 that the points $(E_{i,n}, t_{(i)})$ are scattered around a
straight line. A least-squares fit of a line through the origin yielded a slope
of b = 3866 [hr]. This graphical analysis shows that the assumption of an
exponential life distribution is plausible.
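The fitted slope of a line through the origin minimizes $\sum_i (t_{(i)} - bE_{i,n})^2$, which gives $b = \sum_i E_{i,n}t_{(i)}/\sum_i E_{i,n}^2$. A minimal sketch of this computation; the failure times below are hypothetical, not the sample used in the text:

```python
import math

# Hypothetical ordered failure times (roughly exponential in shape)
t_sorted = [210, 480, 900, 1400, 2100, 3000, 4200, 5900, 8300, 12000]
n = len(t_sorted)

# Plotting positions E_{i,n} = -ln(1 - i/(n+1)), i = 1, ..., n
E = [-math.log(1 - i / (n + 1)) for i in range(1, n + 1)]

# Least-squares slope of a line through the origin
b = sum(e * t for e, t in zip(E, t_sorted)) / sum(e * e for e in E)
```

For an exponential sample the slope b estimates β, since the p-quantile of E(β) is −β ln(1 − p) = βE with p = i/(n+1).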
The graphical estimate is not, however, the only possible estimate of β.
We may also try to estimate β by the sample mean,
$\bar t_n = \frac{1}{n}\sum_{i=1}^n t_i = 3543.4 \text{ [hr]}.$
A third possible estimate can be obtained from the sample median, Me.
We know that the median of the distribution is Median = .693β. This
suggests for an estimate of β the value Me/.693 = 3026.4 [hr]. We have
thus obtained three different estimates of β from the same data. Which
one should we adopt? The answer to this question hinges on notions of
accuracy and precision that will now be considered.
•
6.1.2 Sampling Distributions, Accuracy and Precision
$\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i$
was computed for each sample. In Figure 6.2 we present the histogram of
these 100 sample means.
We see that the sample means vary over the interval (800, 14,600). The
average of all the 100 sample means is $\bar{\bar x} = 4814.8$ and their standard
deviation is $S_{\bar x} = 2519.5$. We see that although individual sample means
could vary considerably from the distribution mean $\mu$, which in the present
case is equal to 5000, the average of all 100 means is quite close to the true
value. (Note that $\bar{\bar x}$ can also be regarded as the average of a random sample
of size n = 400.)
6.1 Properties of Estimators 103
Figure 6.2. Histogram of the 100 Sample Means
$\bar X_n \sim G(n, \beta/n).$
Thus, the expected value of $\bar X_n$ is the mean of G(n, β/n), namely $E\{\bar X_n\} = \beta$.
The standard deviation of the sampling distribution is $SD\{\bar X_n\} = \beta/\sqrt n$.
In Example 6.2, n = 4 and β = 5000. Therefore, $SD\{\bar X_4\} = 2500$. The
standard deviation of the 100 sample means was 2519.5, which is close to
the theoretical value of 2500.
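The sampling behavior described above is easy to check by simulation. The sketch below draws 100 samples of size n = 4 from E(5000); the values are random, so only the rough magnitudes matter:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible
# 100 samples of size n = 4 from the exponential distribution E(5000);
# random.expovariate takes the rate, i.e. 1/mean
means = [sum(random.expovariate(1 / 5000) for _ in range(4)) / 4
         for _ in range(100)]
grand_mean = sum(means) / len(means)
spread = max(means) - min(means)
```

The grand mean lands near 5000 while individual sample means scatter widely, mirroring the histogram of Figure 6.2.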
(6.1.1)
(6.1.2)
(6.1.3)
It can be shown that $t_{(1)}$ and $\hat\beta$ are independent random variables and
(6.1.6)
We would like the estimates to fall within 10% of the true value of $\sigma$.
Thus, the interval of interest is $(.9\sigma, 1.1\sigma)$. The sampling distribution of $S^2$
is like that of $\frac{\sigma^2}{n-1}\chi^2[n-1]$. Hence, the proportional-closeness probability
of S is
(6.1.7) $\Pr\{.9\sigma \le S \le 1.1\sigma\} = \Pr\{(.9)^2 \le (S/\sigma)^2 \le (1.1)^2\} = \Pr\{.81(n-1) \le \chi^2[n-1] \le 1.21(n-1)\}.$
n                          10      20      30      100     200
proportional closeness    .3236   .4597   .5521   .8396   .9529
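The entries of this table can be reproduced by evaluating the chi-square CDF in (6.1.7). The sketch below stays in the standard library, computing the CDF from the power series of the regularized incomplete gamma function instead of a statistics package:

```python
import math

def chi2_cdf(x, k):
    """Chi-square CDF with k d.f.: the regularized lower incomplete
    gamma function P(k/2, x/2), evaluated by its power series."""
    a, s = k / 2.0, x / 2.0
    term = 1.0 / a
    total = term
    j = 0
    while term > total * 1e-15:
        j += 1
        term *= s / (a + j)
        total += term
    return total * math.exp(-s + a * math.log(s) - math.lgamma(a))

def proportional_closeness(n):
    """(6.1.7): Pr{.81(n-1) <= chi2[n-1] <= 1.21(n-1)}."""
    k = n - 1
    return chi2_cdf(1.21 * k, k) - chi2_cdf(0.81 * k, k)
```

For instance, `proportional_closeness(10)` agrees with the tabled .3236 and `proportional_closeness(100)` with .8396.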
EXAMPLE 6.5
Let $t_1, t_2, \ldots, t_n$ be a random sample from an exponential distribution
E(β). We wish to estimate the reliability function, i.e.,
$\theta = \exp(-t/\beta).$
Let $\bar t_n$ be the sample mean, and consider the estimator
$\Pr\{.9\theta \le \hat\theta_n \le 1.1\theta\}$
(6.1.8) $\quad = \Pr\{\ln(.9) - t/\beta \le -t/\bar t_n \le \ln(1.1) - t/\beta\}$
$\quad = \Pr\left\{\ln(.9) \le -\frac{t}{\beta}\left(\frac{\beta}{\bar t_n} - 1\right) \le \ln(1.1)\right\}.$
Since $n\bar t_n/\beta \sim G(n, 1)$, this equals
(6.1.9) $\Pr\{.9\theta \le \hat\theta_n \le 1.1\theta\} = \Pr\left\{\frac{n}{1 - \frac{\beta}{t}\ln(.9)} \le G(n, 1) \le \frac{n}{\left(1 - \frac{\beta}{t}\ln(1.1)\right)^+}\right\},$
or, in terms of the Poisson CDF,
(6.1.10) $\Pr\{.9\theta \le \hat\theta_n \le 1.1\theta\} = \operatorname{Pos}\left(n-1;\ \frac{n}{1 - \frac{\beta}{t}\ln(.9)}\right) - \operatorname{Pos}\left(n-1;\ \frac{n}{\left(1 - \frac{\beta}{t}\ln(1.1)\right)^+}\right).$
Notice that when $t < .095\beta$ then $n/\left(1 - \frac{\beta}{t}\ln(1.1)\right)^+ = \infty$. In this case the
second term on the right hand side of (6.1.10) is zero.
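Formula (6.1.10) is straightforward to evaluate numerically, since Pos(k; x) is a finite sum. A sketch, with the choice of n and τ = t/β purely illustrative:

```python
import math

def pos_cdf(k, x):
    """Poisson CDF: Pos(k; x) = sum_{j=0}^{k} e^{-x} x^j / j!."""
    term = math.exp(-x)
    total = term
    for j in range(1, k + 1):
        term *= x / j
        total += term
    return total

def prop_closeness(n, tau):
    """(6.1.10) with tau = t/beta: Pr{.9*theta <= theta_hat_n <= 1.1*theta}."""
    first = pos_cdf(n - 1, n / (1 - math.log(0.9) / tau))
    d = 1 - math.log(1.1) / tau
    second = pos_cdf(n - 1, n / d) if d > 0 else 0.0
    return first - second
```

Evaluating on a grid of τ reproduces the qualitative behavior shown in the figure: for fixed n the probability declines as τ grows, and larger n gives uniformly higher curves.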
In the following figure we provide a few graphs of the proportional-
closeness probabilities of $\hat\theta_n$, for several values of $\tau = t/\beta$ and n, with
γ = .1.
The proportional-closeness probabilities of $\hat\theta_n$, for each n, decline mono-
tonically as t/β grows. This is not the case when we consider the fixed-
closeness probabilities of $\hat\theta_n$, which are defined by (6.1.5). In this case the
Figure 6.3. Proportional-Closeness Probabilities of $\hat\theta_n$ for n = 25, 50, 75 (horizontal axis: TIME [MTTF])
$\Pr\{|\hat\theta_n - \theta| \le \delta\}$
Figure 6.4. Fixed-Closeness Probabilities of Reliability
Estimators for Exponential Life Distributions, δ = .1
(curve shown for n = 50; horizontal axis: TIME [MTTF])
(6.1.12)
for all θ. That is, with confidence probability of at least γ the interval
$C_\gamma(X_1, \ldots, X_n)$ contains the true value of θ, whatever the value of θ is.
We show below how such intervals can be determined in particular cases.
(6.1.13)
for all μ and σ. It follows that the lower and upper limits of the confidence
interval for μ, at level γ, are
(6.1.14)
The confidence limits for μ based on the given sample, at confidence level
γ = .95, are $10.68 \pm 2.262 \times 2.46/\sqrt{10}$, or 8.92 and 12.44.
To obtain a confidence interval for σ at confidence level γ, we apply
again the result mentioned in Example 6.4, namely
This yields
(6.1.15) $\Pr\left\{\frac{\chi^2_{\epsilon_1}[n-1]}{n-1} \le \frac{S^2}{\sigma^2} \le \frac{\chi^2_{\epsilon_2}[n-1]}{n-1}\right\} = \Pr\left\{\frac{S^2(n-1)}{\chi^2_{\epsilon_2}[n-1]} \le \sigma^2 \le \frac{S^2(n-1)}{\chi^2_{\epsilon_1}[n-1]}\right\} = \gamma,$
for all μ and σ. From tables of the fractiles of the chi-square distribution
we obtain, for γ = .95, $\chi^2_{.025}[9] = 2.70$ and $\chi^2_{.975}[9] = 19.0$. Thus, the
lower and upper confidence limits for $\sigma^2$ are, respectively,
$\frac{(6.05)(9)}{19.0} \quad\text{and}\quad \frac{(6.05)(9)}{2.70}.$
The confidence limits for σ are obtained by taking the square root of
the above limits for $\sigma^2$. Thus, the .95 confidence limits for σ are 1.69 and
4.49.
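The arithmetic of this example can be traced in a few lines, reusing the tabled fractiles $\chi^2_{.025}[9] = 2.70$ and $\chi^2_{.975}[9] = 19.0$ as inputs:

```python
import math

# Tabled chi-square fractiles used in the example
chi2_025_9, chi2_975_9 = 2.70, 19.0
S2, n = 6.05, 10

# (6.1.15): limits for sigma^2, then take square roots for sigma
sigma2_lower = S2 * (n - 1) / chi2_975_9
sigma2_upper = S2 * (n - 1) / chi2_025_9
sigma_lower = math.sqrt(sigma2_lower)   # about 1.69
sigma_upper = math.sqrt(sigma2_upper)   # about 4.49
```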
Let $t_1, t_2, \ldots, t_n$ be a random sample from E(β). We saw in Example 6.2
that
(6.1.16)
$\Pr\left\{\frac{\beta}{2n}\chi^2_{\epsilon_1}[2n] \le \bar t_n \le \frac{\beta}{2n}\chi^2_{\epsilon_2}[2n]\right\} = \gamma$
for all β. Therefore, the lower and upper confidence limits for β, $\beta_L$ and
$\beta_U$, respectively, are
(6.1.17)
Finally, the lower and upper confidence limits for $R(t) = \exp(-t/\beta)$ are
$R_L = \exp(-t/\beta_L), \qquad R_U = \exp(-t/\beta_U).$
Thus, the confidence limits for the reliability at t = 5000 [hr] are
That is, with a confidence level of γ = .95, the interval (.125, .429) contains
the value of R(5000).
When μ and σ are unknown we replace them by $\bar X_n$ and S, the mean and
standard deviation of the sample. The interval to consider has limits of the
form $\bar X_n \pm C_{\gamma,n,m}S/\sqrt m$, where the coefficient $C_{\gamma,n,m}$ must be determined
by the requirement
(6.1.19)
Furthermore,
(6.1.20)
6.2.1.1 Derivation
Thus, the PDF is either the probability density function (in the continuous
case) or the probability distribution function (in the discrete case). A few
families of single-parameter distributions are the binomial, Poisson and
exponential.
Let $X_1, X_2, \ldots, X_n$ be a random sample from some distribution, belonging
to a specified family. The likelihood function $L(\theta; x_1, \ldots, x_n)$ is a
function of the parameter θ, given the sample values $x_1, \ldots, x_n$, defined as
(6.2.1) $L(\theta; \mathbf{x}) = \prod_{i=1}^n f(x_i; \theta),$
where θ ranges over a specified domain, Θ, called the parameter space;
$\mathbf{x} = (x_1, \ldots, x_n)$ is the vector of n observations.
EXAMPLE 6.7
Let $X_1, \ldots, X_n$ be a random sample from a Poisson distribution. The
PDF is
$f(x; \theta) = \frac{e^{-\theta}\theta^x}{x!}, \quad x = 0, 1, \ldots.$
The likelihood function is
$L(\theta; \mathbf{x}) = C(\mathbf{x})\exp\left(-n\theta + \sum_{i=1}^n x_i\ln\theta\right),$
for $0 < \theta < \infty$, where $C(\mathbf{x}) = \prod_{i=1}^n (x_i!)^{-1}$.
The maximum likelihood estimator (MLE) of θ is defined as a value
of θ in the parameter space Θ for which $L(\theta; \mathbf{x})$ is maximized.
If the function $L(\theta; \mathbf{x})$ has its maximum in the interior of Θ and is
differentiable there with respect to θ, for any x, then the MLE $\hat\theta$ is a value
of θ satisfying
$\frac{\partial}{\partial\theta}L(\theta; \mathbf{x}) = 0 \quad\text{and}\quad \frac{\partial^2}{\partial\theta^2}L(\theta; \mathbf{x}) < 0.$
If this method of differentiation is inapplicable, we have to determine the
point $\hat\theta$ in Θ at which $L(\theta; \mathbf{x})$ attains its maximal value by other methods.
EXAMPLE 6.8 (Binomial Distributions)
Let $X_1, \ldots, X_n$ be a random sample from a binomial distribution B(N, θ),
N known. The likelihood function can be written as
(6.2.2)
The symbol ∝ means "is proportional to." The proportionality factor
depends only on x.
6.2 Maximum Likelihood Estimation 113
Let $S_n = \sum_{i=1}^n X_i$. Notice that, since the logarithmic function is strictly
increasing, we can obtain $\hat\theta$ by finding the maximum of
$l(\theta; \mathbf{x}) = \ln L(\theta; \mathbf{x}) = \ln C_n(\mathbf{x}) + S_n\ln\theta + (Nn - S_n)\ln(1 - \theta),$
$0 < \theta < 1$, where $C_n(\mathbf{x})$ does not depend on θ. Taking the partial derivative
with respect to θ we obtain
(6.2.4) $\frac{\partial}{\partial\theta}l(\theta; \mathbf{x}) = \frac{S_n}{\theta(1 - \theta)} - \frac{Nn}{1 - \theta}.$
Equating this to zero yields
(6.2.5) $\hat\theta_n = \frac{S_n}{Nn}.$
•
$-\infty < \theta < \infty$. The function $L(\theta; \mathbf{x})$ increases monotonically for all $\theta \le x_{(1)}$,
and is equal to zero for all $\theta > x_{(1)}$. Hence, $\hat\theta_n = x_{(1)}$. Notice that in the
present case we obtained the MLE just by considering the shape of the
likelihood function. One cannot differentiate $L(\theta; \mathbf{x})$ at the value $\theta = x_{(1)}$.
The MLE $\hat\theta_n$ is distributed like the minimum of a random sample from
a shifted exponential distribution, i.e.,
(6.2.10)
The MLE is biased, but the bias goes to zero as n increases. The asymptotic
distribution of $\hat\theta_n$, as n grows, is not normal but exponential; indeed,
$n(\hat\theta_n - \theta) \sim E(1)$ for all n = 1, 2, ....
•
6.2.1.2 The Invariance Property
The MLE of θ is $\hat\theta_n = J_n/n$. This is indeed a special case of (6.2.5). We are
interested in the MLE of β. β can be expressed as a function of θ, namely
(6.2.11) $\beta = -t'/\ln(1 - \theta).$
(6.2.12)
Notice that the proportion of components that survived at time t' is the
MLE of R(t'). This is generally the case, for any life distribution, if the
data available are just the number of failures up to time t'.
•
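The invariance property just described can be illustrated numerically. The sample size, failure count and censoring time below are hypothetical (they happen to match Exercise 6.2.2):

```python
import math

# Hypothetical data: J = 3 of n = 50 units failed by time t' = 100 [hr]
n, J, t_prime = 50, 3, 100.0

theta_hat = J / n                              # MLE of theta = F(t')
R_hat = 1 - theta_hat                          # MLE of R(t') by invariance
beta_hat = -t_prime / math.log(1 - theta_hat)  # (6.2.11): MLE of beta
```

Here the MLE of R(t′) is just the observed survival proportion, and the MLE of β follows by plugging $\hat\theta$ into (6.2.11).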
6.2.1.3 The Variance of an MLE
We assume that $S(\theta; \mathbf{x})$ exists for all θ and all x. Since $\mathbf{X} = (X_1, \ldots, X_n)$
changes at random from sample to sample, $S(\theta; \mathbf{X})$ is a random variable.
Assume that the variance of $S(\theta; \mathbf{X})$ is finite for each θ. The function
(6.2.16)
(6.2.17)
In a random sample, the n random variables $X_1, \ldots, X_n$ are independent
and identically distributed. Therefore, from definition (6.2.1)
(6.2.18)
Hence,
(6.2.19)
or, equivalently,
(6.2.21)
(6.2.22)
Hence,
(6.2.23)
From (6.2.16) we see that $AV_\theta\{\hat\theta_n\}$ coincides with $V_\theta\{\hat\theta_n\}$ for all n.
Considering the MLE $\hat\beta_n$ of Example 6.10, an approximation to $V_\beta\{\hat\beta_n\}$
is given according to (6.2.17) by
(6.2.24) $AV\{\hat\beta_n\} = \frac{\theta(1-\theta)}{n}\left[\frac{\partial}{\partial\theta}\left(\frac{-t'}{\ln(1-\theta)}\right)\right]^2 = \frac{\theta(t')^2}{n(1-\theta)(\ln(1-\theta))^4}.$
It is interesting to assess how much precision is lost due to the fact that
the data are given in terms of the number of failures, $J_n$, to time t′ and not
the actual failure times. If the actual failure times $t_1, \ldots, t_n$ are available,
the MLE of β is the sample mean $\bar t_n$. Its variance is $V_\beta(\bar t_n) = \beta^2/n$. In
Figure 6.5 we plot the ratio of $\beta^2/n$ to (6.2.25). This ratio is called the
relative efficiency of the MLE based on $J_n$ compared to that based on $\bar t_n$.
The graph shows that the MLE of β based on the number of failures can
Figure 6.5. Relative Efficiency of the MLE of β Based on $J_n$, Relative to the MLE Based on $\bar t_n$
be considerably worse than the one based on the average failure time. The
relative efficiency is maximized near t′/β = 1.5. Thus, if the life testing
span is very short or very long, relative to β, the amount of information on
β provided by the number of failures, $J_n$, may be very small.
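Under the exponential model, substituting θ = 1 − e^{−τ} with τ = t′/β into (6.2.24) reduces the relative efficiency to τ²/(e^τ − 1); this simplification is a derivation of mine, not stated in the text. Its maximizer solves 2 − 2e^{−τ} = τ, which is about τ = 1.59:

```python
import math

def relative_efficiency(tau):
    """beta^2/n divided by (6.2.24); with theta = 1 - exp(-tau),
    tau = t'/beta, the ratio simplifies to tau^2 / (e^tau - 1)."""
    return tau ** 2 / math.expm1(tau)

grid = [i / 100 for i in range(10, 501)]
tau_star = max(grid, key=relative_efficiency)   # near 1.59
```

The maximal value is below 1, confirming that counting failures is always less informative than recording the failure times themselves.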
It is often the case that the MLE, $\hat\theta_n$, converges in a probabilistic sense
to the true value θ, and that the distribution of $\sqrt n(\hat\theta_n - \theta)$ converges to
the normal distribution $N(0, I^{-1/2}(\theta))$ as n → ∞. A more comprehensive
discussion of this topic requires substantial knowledge of probability theory.
In the present section we generalize the results of the previous section to the
case where the PDF depends on several parameters $\theta_1, \ldots, \theta_k$. The parameter
space Θ is now the set of all k-dimensional vectors $\boldsymbol\theta = (\theta_1, \ldots, \theta_k)$
which specify the distributions in the family under consideration. The likelihood
function is now a function of k variables $\theta_1, \ldots, \theta_k$ and of x, i.e.,
(6.2.26) $L(\theta_1, \ldots, \theta_k; \mathbf{x}) = \prod_{i=1}^n f(x_i; \theta_1, \ldots, \theta_k).$
(6.2.27)
(6.2.28)
EXAMPLE 6.12
Let $X_1, \ldots, X_n$ be a random sample from a normal distribution N(μ, σ).
The likelihood function of (μ, σ) is
(6.2.29)
$-\infty < \mu < \infty$, $0 < \sigma < \infty$. It follows immediately that the likelihood
function is maximized, for each value of σ, by
(6.2.30)
(6.2.31)
Hence,
(6.2.32) $\frac{\partial}{\partial\sigma}\ln L(\mu, \sigma; \mathbf{x}) = -\frac{n}{\sigma} + \frac{Q_n}{\sigma^3},$
Equating this partial derivative to zero and solving for σ, we obtain the
MLE
(6.2.33)
are known functions, where $1 \le r \le k$, then the MLE of $\omega_1, \ldots, \omega_r$ are
EXAMPLE 6.13
Let $X_1, \ldots, X_n$ be a random sample from a lognormal distribution, LN(μ, σ).
Let $Y_i = \ln X_i$ (i = 1, ..., n). The MLE of μ and σ are
$\hat\mu_n = \bar Y_n = \frac{1}{n}\sum_{i=1}^n Y_i$
and
(6.2.34)
and
(6.2.35)
where
(6.2.36)
The Fisher information matrix (FIM) does not exist for all distributions.
The PDF must be sufficiently smooth so that $l(\boldsymbol\theta; x)$ has partial derivatives
for all $\boldsymbol\theta$ and all x, and the covariances of these partial derivatives must
exist.
As in the single-parameter case, when $\mathbf{x} = (x_1, \ldots, x_n)$ represents a random
sample, we have $I_n(\boldsymbol\theta) = nI(\boldsymbol\theta)$, where $I(\boldsymbol\theta)$ is the FIM based on a
single observation. The asymptotic variance-covariance matrix of the MLE
$\hat{\boldsymbol\theta}_n$ is, under certain regularity conditions, the inverse of the FIM for a
sample of size n, i.e.,
(6.2.37)
EXAMPLE 6.14
Continuing with Example 6.12, we determine the FIM for the normal
case. The log-likelihood function for a single observation is
Hence,
$I(\mu, \sigma) = \begin{bmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{bmatrix}$
and
$AC(\hat\mu, \hat\sigma) = \frac{\sigma^2}{n}\begin{bmatrix} 1 & 0 \\ 0 & 1/2 \end{bmatrix}.$ •
(6.2.39) $D_{ij}(\boldsymbol\theta) = \frac{\partial}{\partial\theta_j}g_i(\theta_1, \ldots, \theta_k).$
EXAMPLE 6.15
Continuing Example 6.13, we derive the asymptotic covariance matrix
of the MLE of the mean and standard deviation of LN(μ, σ). According to
(2.3.46) and (2.3.47),
(6.2.40) $D_{11}(\mu, \sigma) = \frac{\partial}{\partial\mu}\exp(\mu + \sigma^2/2) = \exp(\mu + \sigma^2/2),$
and
Substituting these elements into the matrix D(/L, u) and employing formula
(6.2.37), we obtain
(6.2.44)
where
(6.2.45)
and
(6.2.47)
•
6.3 MLE of System Reliability
In Chapter 3 we studied methods for determining the reliability of a com-
plex system, as a function of the reliabilities of its components. In the
present section we apply the method of maximum likelihood in multi-
parameter cases for determining the MLE and confidence limits for the
system reliability. More specifically, suppose that a given system is comprised
of k components, and that the reliability function of the system is a
function
$\psi(R_1(t; \boldsymbol\theta^{(1)}), \ldots, R_k(t; \boldsymbol\theta^{(k)}))$
of the reliability functions $R_i(t; \boldsymbol\theta^{(i)})$, i = 1, ..., k, of the components.
These reliability functions depend on the parameters $\boldsymbol\theta^{(i)} = (\theta_{i1}, \ldots, \theta_{in_i})$
of the life distributions of the components. Notice that the numbers of parameters
of the various life distributions are not necessarily the same. If
the life distribution of one component is the shifted exponential, SE(β, $t_0$),
it depends on two parameters, and if the life distribution of another component
is the truncated normal, NT(μ, σ, $t_0$), it depends on three parameters.
We further suppose that random samples of failure times are available for
each component and that these samples are independent. We determine
the MLE of the reliability functions $R_i(t; \boldsymbol\theta^{(i)})$, i = 1, ..., k, and substitute
these estimators in the function $\psi(R_1(t; \boldsymbol\theta^{(1)}), \ldots, R_k(t; \boldsymbol\theta^{(k)}))$ to obtain the
MLE of the system reliability function. The asymptotic SD of the system
reliability is determined according to (6.2.38).
EXAMPLE 6.16
Consider a system having components $C_1$ and $C_2$ connected in parallel.
Suppose that the lifetime of $C_1$ has a Weibull distribution W(ν, $\beta_1$) and the
lifetime of $C_2$ has an Erlang distribution G(3, $\beta_2$). A random sample of $n_1$
(6.3.1)
and that of $C_2$ is
(6.3.2)
(6.3.3)
Let $\hat\nu$ and $\hat\beta_1$ be the MLE of ν and $\beta_1$ determined from the sample data
on $C_1$. Let $\hat\beta_2$ be the MLE of $\beta_2$ determined on the basis of the data on
$C_2$. Formulae for $\hat\nu$, $\hat\beta_1$ and $\hat\beta_2$ are given in Chapter 7. The MLE of the
system reliability function is
(6.3.4)
$AC(\hat\nu, \hat\beta_1, \hat\beta_2) = \begin{bmatrix} .608\dfrac{\nu^2}{n_1} & .254\dfrac{\beta_1}{n_1} & 0 \\ .254\dfrac{\beta_1}{n_1} & 1.109\dfrac{\beta_1^2}{n_1\nu^2} & 0 \\ 0 & 0 & \dfrac{\beta_2^2}{3n_2} \end{bmatrix}$
and apply formula (6.2.38), in which $AC(\hat\nu, \hat\beta_1, \hat\beta_2)$ is substituted for $\frac{1}{n}I^{-1}(\boldsymbol\theta)$.
From (6.3.3) we obtain
(6.3.7)
and
(6.3.9)
Asymptotic confidence intervals for $R_{sys}$ can be obtained from the above
result and the formula
(6.3.10)
•
Let $T_{n,r} = \sum_{i=1}^r t_{(i)} + (n - r)t^*$. $T_{n,r}$ is called the total life statistic. The
MLE of β can be obtained by differentiating the log-likelihood function with
respect to β and setting the partial derivative equal to zero. The resulting
MLE is
(6.4.2)
The CDF of $\hat\beta_{n,r}$, for small samples, was derived by Bartholomew (1963).
Several approximations are also given. If r > 0 and $\beta < t^*/\ln 2$ then
the following random variable, W, has approximately a standard normal
distribution N(0, 1):
This approximation can yield approximate confidence limits for β. For this
purpose, let $\varphi = \hat\beta_{n,r} - \beta$. Since $W^2$ is distributed approximately like $\chi^2[1]$,
(6.4.4) $\Pr\{A(n, \gamma, \beta)\varphi^2 - 2B(n, \gamma, \beta)\varphi - C(n, \gamma, \beta) \le 0\} \cong \gamma,$
where
(6.4.5)
and
(6.4.7)
(6.4.9) $\varphi_i = \frac{B(n, \gamma, \beta) \pm \left[B^2(n, \gamma, \beta) + A(n, \gamma, \beta)C(n, \gamma, \beta)\right]^{1/2}}{A(n, \gamma, \beta)},$
i = 1, 2, where $\varphi_1 < \varphi_2$. Notice that $C(n, \gamma, \beta) > 0$ for all β, and if
$n > \chi^2_\gamma[1]/4$ then $A(n, \gamma, \beta) > 0$ for all β. For example, if γ = .95 then
$\chi^2_{.95}[1] = 3.84$, and $n > \chi^2_{.95}[1]/4$ for all n ≥ 1. In this case the roots exist for
all β and satisfy $\varphi_1 < 0 < \varphi_2$.
Let $\beta_L^{(1)} = \hat\beta_{n,r} - \varphi_2$ and $\beta_U^{(1)} = \hat\beta_{n,r} - \varphi_1$. After this step, substitute
$\beta_L^{(1)}$ for β in (6.4.5)-(6.4.7) and solve (6.4.8) to obtain $\varphi_{1,L}$ and $\varphi_{2,L}$. Take
$\beta_L^{(2)} = \hat\beta_{n,r} - \varphi_{2,L}$. Substitution of $\beta_U^{(1)}$ for β in (6.4.5)-(6.4.7) and solution
of (6.4.8) yields $\varphi_{1,U}$ and $\varphi_{2,U}$. The second iterative approximation to $\beta_U$
is taken to be $\beta_U^{(2)} = \hat\beta_{n,r} - \varphi_{1,U}$. We continue this iterative procedure until
a desirable convergence is attained. If $n > \chi^2_\gamma[1]/4$, we are guaranteed that
$\beta_L^{(j)} < \hat\beta_{n,r} < \beta_U^{(j)}$ for all j.
EXAMPLE 6.17
Consider the example of life testing with a single right-censored sample
of size n = 20, $t^*$ = 4500 [hr]. r = 17 units failed within this time, yielding
a total life of $T_{20,17}$ = 73,738 [hr]. Assuming that the lifetime has an
exponential distribution, E(β), the MLE of β is $\hat\beta_{20,17} = T_{20,17}/17 = 4338$ [hr].
6.4 MLE from Censored Samples-Exponential Life Distributions 127
$\beta_L$        $\beta_U$
2422.7 5612.4
3201.9 5846.4
2857.5 5886.9
3005.0 5893.8
2940.9 5895.0
2968.6 5895.2
2956.6 5895.3
2961.8 5895.3
2959.5 5895.3
2960.5 5895.3
2960.1 5895.3
2960.3 5895.3
We see that after 8 iterations the iterative procedure has converged in the
first 4 significant figures to the values $\beta_L$ = 2960 [hr] and $\beta_U$ = 5895 [hr].
(6.4.10)
It is interesting that the MLE of β is formally the same for both Type I
and Type II censored data. The sampling distribution of $\hat\beta_{n,r}$ is, however,
considerably simpler in Type II censored data. It can be shown that
(6.4.12) $\hat\beta_{n,r} \sim \frac{\beta}{2r}\chi^2[2r].$
Thus, $E\{\hat\beta_{n,r}\} = \beta$ and $SD\{\hat\beta_{n,r}\} = \beta/\sqrt r$. Exact confidence limits for β,
at confidence level γ, are given by the formulae
Lower limit $= \frac{2T_{n,r}}{\chi^2_{\epsilon_2}[2r]}$
and
Upper limit $= \frac{2T_{n,r}}{\chi^2_{\epsilon_1}[2r]}.$
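These limits only require two chi-square fractiles. The sketch below stays in the standard library by using the Wilson-Hilferty approximation to $\chi^2_p[k]$ (so its limits are close to, but not identical with, the exact tabled values); the total life T and failure count r are illustrative:

```python
from statistics import NormalDist

def chi2_fractile(p, k):
    """Wilson-Hilferty approximation to the p-fractile of chi-square, k d.f."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * k)
    return k * (1 - c + z * c ** 0.5) ** 3

def beta_conf_limits(T, r, gamma=0.95):
    """Confidence limits 2T/chi2_{e2}[2r] and 2T/chi2_{e1}[2r] for beta."""
    e1, e2 = (1 - gamma) / 2, (1 + gamma) / 2
    return 2 * T / chi2_fractile(e2, 2 * r), 2 * T / chi2_fractile(e1, 2 * r)

lower, upper = beta_conf_limits(73738.0, 17)   # illustrative T_{n,r} and r
```

The interval straddles the point estimate T/r, as it must, since $\chi^2_{\epsilon_1}[2r] < 2r < \chi^2_{\epsilon_2}[2r]$.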
(6.4.13) $E\{t_{(r)}\} = \beta\sum_{i=1}^r \frac{1}{n - i + 1}.$
Suppose that the cost of life testing is a linear function of both the number
of items on test, n, and the duration of the test, t(r), i.e.,
(6.4.14)
$K(n, r) = E\{C_{n,r}\} = c_1\beta\sum_{i=1}^r \frac{1}{n - i + 1} + c_2 n + c_3.$
(6.4.16)
The problem is that the optimal value of n depends on the unknown parameter
β. If one has some idea of the value of β from previous experiments,
one can substitute in (6.4.16) a prudently chosen value of β.
6.5 The Kaplan-Meier PL Estimator as an MLE 129
EXAMPLE 6.18
Consider the design of a life testing experiment with frequency censoring
and exponential life distribution. The SD of $\hat\beta_{n,r}$ is $\beta/\sqrt r$. If we require
that $SD\{\hat\beta_{n,r}\} = .2\beta$, then, from the equation
$.2\beta = \frac{\beta}{\sqrt r},$
we obtain r = 25. From (6.4.16), the optimal sample size is
$n = \frac{25}{2}\left(1 + \left(1 + \frac{4 \cdot 100}{25}\right)^{1/2}\right) = 64,$
and the expected length of the experiment, when β = 100 [hr], is
$E\{t_{(25)}\} = 100\sum_{i=1}^{25}\frac{1}{65 - i} = 49.0 \text{ [hr]}.$
Notice that if we put only 25 units on test, and terminate at the 25th
failure, the expected length of the experiment when β = 100 [hr] is
$E\{t_{(25)}\} = 100\sum_{i=1}^{25}\frac{1}{26 - i} = 381.6 \text{ [hr]}.$
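The expected-length computations of this example follow directly from (6.4.13):

```python
def expected_test_length(beta, n, r):
    """(6.4.13): E{t_(r)} = beta * sum_{i=1}^{r} 1/(n - i + 1)."""
    return beta * sum(1.0 / (n - i + 1) for i in range(1, r + 1))
```

With β = 100 [hr], `expected_test_length(100, 64, 25)` reproduces the 49.0 [hr] of the example, while `expected_test_length(100, 25, 25)` gives the much longer 381.6 [hr] of the uncensored design.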
the constraint $\sum_{i=1}^n p_i = 1$ is $\hat R_n(t) = 1 - \hat F_n(t)$, in which $p_i^* = \frac{1}{n}$ for all
$i = 1, \ldots, n$. Indeed, maximizing (6.5.2) under the above constraint is the
same as maximizing $l(\mathbf{p}; \mathbf{t}) = \sum_{i=1}^n \log p_i$. Thus, let
(6.5.3)
$p_i^* = \frac{1}{\lambda}, \quad i = 1, \ldots, n,$
(6.5.4)
The solution is $p_i^* = \frac{1}{n}$ for all $i = 1, \ldots, n$. Thus, the MLE of R(t)
is $\hat R_n(t) = 1 - \hat F_n(t)$. This MLE is equivalent to the Kaplan-Meier PL
estimator. Indeed,
$\hat R_n(t) = \prod_{j=1}^{i}\left(1 - \frac{1}{n - j + 1}\right), \quad t_{(i)} \le t < t_{(i+1)}.$
This MLE is equivalent to (5.3.4) in the case of no censoring, i.e., $\delta_i = 1$ for
all $i = 1, \ldots, n$. If some of the failure times are censored then the MLE,
$\hat R_n(t)$, is obtained from formula (5.3.4).
If there is no censoring then the number of failures occurring before
a specified time t is a random variable $J_n(t) \sim B(n, F(t))$. Accordingly,
$V\{\hat F_n(t)\} = F(t)(1 - F(t))/n$ and, for each $0 < t < \infty$,
(6.5.7)
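In the no-censoring case the product-limit form telescopes to (n − i)/n, which is exactly $1 - \hat F_n(t_{(i)})$; a minimal sketch:

```python
def pl_no_censoring(n, i):
    """Product-limit value after the i-th ordered failure, no censoring:
    prod_{j=1}^{i} (1 - 1/(n - j + 1)); telescopes to (n - i)/n."""
    r = 1.0
    for j in range(1, i + 1):
        r *= 1.0 - 1.0 / (n - j + 1)
    return r
```

For example, with n = 10 and i = 3 the product is (9/10)(8/9)(7/8) = 0.7 = (10 − 3)/10.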
6.6 Exercises
[6.1.1] Let $\bar X_5$ be the mean of a random sample of size n = 5 from E(100).
(i) What is the sampling distribution of $\bar X_5$?
(ii) What is the SD of $\bar X_5$?
(iii) With the aid of Table A-III, determine an interval [L, U] around
β = 100 so that $\Pr\{L \le \bar X_5 \le U\} = .95$.
[6.1.2] A random sample of size n = 10 is drawn from the shifted exponential
distribution SE(β, $t_0$), with β = 100 and $t_0$ = 10. How large is
the bias of the sample minimum $t_{(1)}$, as an estimator of $t_0$? What
is the standard deviation of $t_{(1)}$?
[6.1.3] The following data are a random sample from a normal distribu-
tion:
1.71, 1.83, 2.05, 3.21, 1.93, 1.53, 2.17, 2.18, 1.95, 1.62, 1.99, 2.15,
2.08, 3.02, 2.99, 3.00, 2.75, 2.82, 2.91, 2.60
Compute confidence intervals at level γ = .95 for the mean, μ, and
the standard deviation, σ, of the distribution.
[6.1.4] A random sample of size n = 20 from an exponential distribution
E(β) yielded a mean $\bar t_n$ = 3505 [hr]. Compute confidence limits
for β at confidence level γ = .90. What are the corresponding
confidence limits for the reliability function at t = 3600 [hr]?
[6.1.5] Compute the proportional closeness probabilities (6.1.7) and (6.1.9)
for a sample of size n = 50.
[6.1.6] A random sample of size n = 20 from a normal distribution yielded
$\bar X_{20}$ = 135.5 and S = 17.5. Determine a .90 level prediction interval
for the mean $\bar Y_{10}$ of an additional sample of size m = 10.
[6.2.1] The number of defective items found among n = 25 items chosen
at random from a production process is x = 2. Determine the
maximum likelihood estimator (MLE) of the proportion defective,
θ, of that production process, and estimate the standard deviation
(SD) of the MLE.
[6.2.2] The number of failures among n = 50 devices during the first 100
hours of operation is x = 3.
(i) Determine the MLE of the reliability of this device at age 100
[hr], and estimate its SD.
(ii) What is the MLE of β (MTTF) if the lifetime distribution of
these devices is the exponential E(β)?
[6.2.3] Consider a random sample of size n from a normal distribution
N(μ, σ).
(i) What is the MLE of the p-th fractile $\xi_p = \mu + z_p\sigma$?
(-t
. exp(-2t/{33) (1- exp (;1 + ;J) rr/2.
(Type II censoring).
(i) What is the expected length of the experiment if β = 100 [hr]?
(ii) If the total life $T_{n,r}$ in such an experiment is 1500 [hr], what is
the MLE of β?
(iii) Determine confidence limits for the reliability of the unit at
age t = 50 [hr], when $T_{n,r}$ = 1500 [hr] and the confidence level is
γ = .95.
[6.4.2] Consider the problem of designing a Type II censored life test for
an exponential distribution E(β). Suppose that we require that
$SD\{\hat\beta_{n,r}\} \le .1\beta$. Furthermore, the cost components are $c_1 = 1$
and $c_2 = .5$ [$].
(i) Determine the sample size, n, which minimizes the expected
cost of this experiment.
(ii) Determine the expected length of the experiment if β = 250
[hr].
[6.4.3] Redo Example 6.17, with n = 30, $t^*$ = 4500 [hr], r = 25, $T_{30,25}$ =
100,050 [hr].
7
Maximum Likelihood
Estimators and Confidence
Intervals for Specific
Life Distributions
(7.1.1)
(b) MLE:
(7.1.2)
(7.1.3)
(7.1.4)
Lower limit $= \frac{2n\bar t_n}{\chi^2_{\epsilon_2}[2n]},$
(7.1.5)
Upper limit $= \frac{2n\bar t_n}{\chi^2_{\epsilon_1}[2n]}.$
(7.1.7)
Lower limit $= \hat\beta_n - z_{\epsilon_2}\hat\beta_n/\sqrt n,$
Upper limit $= \hat\beta_n + z_{\epsilon_2}\hat\beta_n/\sqrt n.$
EXAMPLE 7.1
Consider the failure times [hr] of 20 electric generators, given in Exam-
ple 6.1. Assuming that this is a random sample from an exponential life
distribution, we proceed to determine the MLE and confidence intervals.
(1) The MLE of β is $\hat\beta_{20} = \bar t_{20} = 3543.4$ [hr].
(2) The SD of $\hat\beta_{20}$ is estimated by
(3) Exact confidence limits for β, at level γ = .95 ($\epsilon_2$ = .975), are given
by
Lower limit $= \frac{40\bar t_{20}}{\chi^2_{.975}[40]} = \frac{(40)(3543.4)}{59.34} = 2388.6$ [hr],
Upper limit $= \frac{40\bar t_{20}}{\chi^2_{.025}[40]} = 5801.8$ [hr].
(4) The approximate confidence limits, according to (7.1.7), are
Lower limit $= \hat\beta_{20} - z_{.975}\frac{\hat\beta_{20}}{\sqrt{20}} = 1990.4$ [hr],
Upper limit = 5096.4 [hr].
Notice the difference between the approximate confidence limits and the
exact ones. It can be shown that the probability that such approximate
confidence intervals will cover the true value, when n = 20, is .926, which
is somewhat lower than the nominal confidence level .95.
•
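The exact and approximate limits of Example 7.1 can be traced directly, taking $\chi^2_{.975}[40] = 59.34$ and $\chi^2_{.025}[40] = 24.43$ from a chi-square table:

```python
import math

n, beta_hat = 20, 3543.4
# Exact limits (7.1.5) with the tabled fractiles for 2n = 40 d.f.
exact_lower = 2 * n * beta_hat / 59.34
exact_upper = 2 * n * beta_hat / 24.43
# Approximate limits (7.1.7), z_.975 = 1.96
half_width = 1.96 * beta_hat / math.sqrt(n)
approx_lower, approx_upper = beta_hat - half_width, beta_hat + half_width
```

The two intervals differ noticeably at n = 20, which is exactly the point made in the example about the coverage of the approximate interval.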
7.2 Shifted Exponential Distributions
(a) Likelihood Function:
$0 < \beta < \infty$; $-\infty < t_0 < \infty$, where $x_{(1)} < x_{(2)} < \cdots < x_{(n)}$ are the ordered
sample values and $Q_n = \sum_{i=2}^n (x_{(i)} - x_{(1)})$.
(b) MLE:
$\hat t_0 = x_{(1)},$
(7.2.2)
$\hat\beta = Q_n/n.$
$SD\{\hat t_0\} = \beta/n,$
(7.2.3) $SD\{\hat\beta\} = \frac{\beta\sqrt{n-1}}{n},$
$\mathrm{cov}(\hat t_0, \hat\beta) = 0.$
(d) Asymptotic Standard Deviations:
$ASD\{\hat t_0\} = \beta/n,$
(7.2.4)
$ASD\{\hat\beta\} = \beta/\sqrt n.$
Lower limit $= \hat t_0 - \frac{1}{n-1}\hat\beta F_\gamma[2, 2n-2],$
(7.2.5)
Upper limit $= \hat t_0,$
where Fp [VI , V2] is the p-fractile of the F-distribution with VI and V2 degrees
of freedom (see Table A-V).
Lower limit $= \frac{2n\hat\beta}{\chi^2_{\epsilon_2}[2n-2]},$
(7.2.6)
Upper limit $= \frac{2n\hat\beta}{\chi^2_{\epsilon_1}[2n-2]},$
$\epsilon_1 = (1-\gamma)/2$, $\epsilon_2 = (1+\gamma)/2$.
(f) Asymptotic Distributions:
(7.2.9)
Lower limit $= \hat\beta - z_{\epsilon_2}\hat\beta/\sqrt n,$
Upper limit $= \hat\beta + z_{\epsilon_2}\hat\beta/\sqrt n,$
where $\epsilon_2 = (1+\gamma)/2$.
EXAMPLE 7.2
The lifetime of a certain device has a shifted exponential distribution.
The following is a random sample of n = 10 failure times:
102, 147, 154, 140, 204, 120, 120, 131, 313, 200.
The ordered sample values are:
102, 120, 120, 131, 140, 147, 154, 200, 204, 313.
$\hat t_0 = 102,$
$\hat\beta = \sum_{i=2}^{10}(x_{(i)} - x_{(1)})/10 = 61.1.$
i=2
7.3 Erlang Distributions 139
$SD\{\hat t_0\} = 6.1, \quad SD\{\hat\beta\} = 18.3, \quad ASD\{\hat\beta\} = 19.3.$
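The estimates of Example 7.2 can be verified directly from (7.2.2), substituting $\hat\beta$ for β in the standard-deviation formulas of (7.2.3) and (7.2.4):

```python
import math

x = [102, 147, 154, 140, 204, 120, 120, 131, 313, 200]
n = len(x)
t0_hat = min(x)                               # (7.2.2): MLE of t_0
beta_hat = sum(xi - t0_hat for xi in x) / n   # (7.2.2): Qn / n
sd_t0 = beta_hat / n                          # (7.2.3) with beta_hat for beta
sd_beta = beta_hat * math.sqrt(n - 1) / n     # (7.2.3)
asd_beta = beta_hat / math.sqrt(n)            # (7.2.4)
```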
(b) MLE:
(7.3.2) $\hat\beta_n = \sum_{i=1}^n t_i/(nk) = \bar t_n/k.$
(7.3.4)
Lower limit $= \frac{2n\bar t_n}{\chi^2_{\epsilon_2}[2nk]},$
(7.3.5)
Upper limit $= \frac{2n\bar t_n}{\chi^2_{\epsilon_1}[2nk]},$
135, 138, 156, 165, 166, 136, 176, 162, 165, 165.
•
7.4 Gamma Distributions
(a) Likelihood Function:
(7.4.1)
$\hat\beta_n = \bar t_n/\hat\nu_n,$
(7.4.2) $\hat\nu_n = \text{root of the equation } \frac{1}{\nu}\exp(\psi(\nu)) = G_n/\bar t_n,$
where $G_n$ is the geometric mean of $t_1, \ldots, t_n$, i.e.,
(7.4.3)
(7.4.6) $\psi(\nu + 1) = -\gamma + \sum_{n=1}^{\infty}\frac{\nu}{n(n + \nu)}.$
For integer-valued arguments we have
$\psi(1) = -\gamma,$
(7.4.7) $\psi(n) = -\gamma + \sum_{k=1}^{n-1}k^{-1}, \quad n \ge 2,$
where γ = .577216... is the Euler constant. Notice that $G_n/\bar t_n \le 1$ for all
$0 < t_i < \infty$ (i = 1, ..., n).
In Figure 7.1 we present a graph of the function $\exp(\psi(\nu))/\nu$. One can
determine the MLE $\hat\nu_n$ from this graph by plotting a horizontal line at the
level $G_n/\bar t_n$ and finding the value of ν where this line cuts the graph. The
value of $\hat\beta_n$ is then determined from (7.4.2).
The following formula also provides a good approximation to the solution
$\hat\nu_n$. Let $Y_n = \ln(\bar t_n/G_n)$; then $\hat\nu_n$ can be determined by
(7.4.9) $\hat\nu_n \cong \frac{1}{Y_n}\left[.5000876 + .1648852\,Y_n - .0544274\,Y_n^2\right], \quad \text{if } 0 < Y_n < .5772.$
Figure 7.1. Graph of $\exp(\psi(\nu))/\nu$ Versus ν,
for the Determination of the MLE $\hat\nu_n$
Hence,
$V\{\hat\beta_{10}\} = .439.$
Similarly,
$10 \cdot V\{\hat\nu_{10}/\nu\} = \frac{10}{\nu^2}V\{\hat\nu_{10}\} = 7.25.$
Hence,
$V\{\hat\nu_{10}\} = 6.52.$
Finally,
$\mathrm{Cov}\{\hat\nu_{10}, \hat\beta_{10}\} = -1.17.$
(d) Asymptotic Standard Deviations:
By using formulae (6.2.36) and (6.2.37) we obtain that the asymptotic
standard deviations and covariance are
(7.4.10) $ASD\{\hat\beta_n\} = \frac{\beta}{\sqrt n}\left[\frac{\psi'(\nu)}{\nu\psi'(\nu) - 1}\right]^{1/2},$
Table 7.1
n \ ν    0.2    0.5    1.0    1.5    2.0    3.0    5.0    10.0    ∞
nVar($\hat\beta/\beta$)
8 5.900 3.170 2.350 2.100 2.000 1.900 1.830 1.790 1.750
10 5.970 3.210 2.390 2.150 2.040 1.950 1.880 1.840 1.800
20 6.090 3.290 2.470 2.240 2.140 2.040 1.980 1.930 1.900
40 6.140 3.330 2.510 2.280 2.180 2.090 2.030 1.980 1.950
00 6.176 3.363 2.551 2.324 2.225 2.137 2.076 2.036 2.000
nVar($\hat\nu/\nu$)
8 4.610 7.370 9.660 10.750 11.390 12.080 12.680 13.150 13.653
10 3.140 4.610 5.870 6.490 6.850 7.250 7.590 7.870 8.163
20 1.790 2.290 2.750 2.990 3.140 3.300 3.450 3.570 3.691
40 1.430 1.740 2.030 2.190 2.290 2.400 2.500 2.580 2.671
100 1.270 1.500 1.720 1.850 1.920 2.010 2.100 2.160 2.237
00 1.176 1.363 1.551 1.658 1.725 1.804 1.876 1.936 2.000
nCov($\hat\beta/\beta$, $\hat\nu/\nu$)
6 -2.090 -2.730 -3.190 -3.410 -3.520 -3.670 -3.790 -3.890 -4.000
10 -1.560 -1.940 -2.260 -2.400 -2.510 -2.600 -2.700 -2.770 -2.857
20 -1.340 -1.600 -1.840 -1.960 -2.050 -2.130 -2.220 -2.280 -2.353
40 -1.250 -1.470 -1.680 -1.800 -1.880 -1.960 -2.030 -2.090 -2.162
00 -1.176 -1.363 -1.551 -1.658 -1.725 -1.804 -1.876 -1.936 -2.000
(7.4.11) $ASD\{\hat\nu_n\} = \frac{1}{\sqrt n}\left[\frac{\nu}{\nu\psi'(\nu) - 1}\right]^{1/2},$
and
(7.4.12)
where
$\psi'(\nu) = \sum_{k=0}^{\infty}(\nu + k)^{-2}, \quad \nu > 0,$
(7.4.13) $\psi'(n + z) = \psi'(1 + z) - \sum_{j=1}^{n-1}(j + z)^{-2}, \quad n \ge 2,\ 0 < z < 1.$
The asymptotic variances and covariance can be obtained from Table 7.1,
by taking n = ∞.
Lower limit $= \frac{\chi^2_{\epsilon_1}[n-1]}{2nY_n},$
(7.4.14)
Upper limit $= \frac{\chi^2_{\epsilon_2}[n-1]}{2nY_n}.$
Lower limit $= \frac{2n\bar t_n}{\chi^2_{\epsilon_2}[2n[\hat\nu_U] + 1]},$
(7.4.15)
Upper limit $= \frac{2n\bar t_n}{\chi^2_{\epsilon_1}[2n[\hat\nu_L]]},$
where $[\hat\nu_L]$ and $[\hat\nu_U]$ are the integer parts of the lower and upper confidence
limits for ν; $\epsilon_1 = (1 - \gamma)/4$, $\epsilon_2 = (3 + \gamma)/4$.
(f) Asymptotic Distributions:
The asymptotic distributions of $\hat\beta_n$ and $\hat\nu_n$ are normal, with means β
and ν, and variances given by (7.4.10) and (7.4.11).
(g) Asymptotic Confidence Limits:
(g.1) Asymptotic limits for β:
(7.4.16)
(7.4.17)
where $\epsilon_2 = (1 + \gamma)/2$.
EXAMPLE 7.4
We use the data of Example 7.3, assuming that both ν and β are unknown.
We have n = 10, $\bar t_{10} = 156.4$, $G_{10} = 155.8$, $Y_{10} = .00412$. Hence,
from (7.4.9), $\hat\nu_{10} = 121.6$ and, from (7.4.2), $\hat\beta_{10} = 1.29$. The confidence
limits for ν, at level γ = .95, are, according to (7.4.14),
Lower limit $= \frac{\chi^2_{.025}[9]}{(20)(.00412)} = \frac{2.70}{.0824} = 32.8$
and
Upper limit $= \frac{\chi^2_{.975}[9]}{(20)(.00412)} = \frac{19.0}{.0824} = 230.6.$
The conservative confidence limits for β use $[\hat\nu_L] = 32$ and $[\hat\nu_U] = 230$.
The lower limit is
$\frac{(20)(156.4)}{\chi^2_{.9875}[(20)(230) + 1]},$
with
$\chi^2_{.9875}[4601] \cong 4601 + z_{.9875}\sqrt{9202} = 4815.9.$
Hence, the lower confidence limit is $\beta_L = .65$. The upper conservative
confidence limit for β is
$\beta_U = \frac{(20)(156.4)}{\chi^2_{.0125}[(20)(32)]},$
with
$\chi^2_{.0125}[640] \cong 640 - z_{.9875}\sqrt{1280} = 559.9,$
so that $\beta_U = 3128/559.9 = 5.59$.
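The approximation (7.4.9) and the MLE (7.4.2) are easy to evaluate. The sketch below reuses the sample of Example 7.3 and reproduces the estimates of this example:

```python
import math

t = [135, 138, 156, 165, 166, 136, 176, 162, 165, 165]  # Example 7.3 data
n = len(t)
t_bar = sum(t) / n
G = math.exp(sum(math.log(v) for v in t) / n)   # geometric mean (7.4.3)
Y = math.log(t_bar / G)
# (7.4.9): approximation to the shape MLE, valid for 0 < Y < .5772
nu_hat = (0.5000876 + 0.1648852 * Y - 0.0544274 * Y ** 2) / Y
beta_hat = t_bar / nu_hat                       # (7.4.2)
```

With this sample, `nu_hat` is about 121.6 and `beta_hat` about 1.29.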
(b) MLE:
(7.5.2)
and
(7.5.3) $\hat\nu_n = \left[\frac{\sum_{i=1}^n t_i^{\hat\nu_n}\ln t_i}{\sum_{i=1}^n t_i^{\hat\nu_n}} - \frac{1}{n}\sum_{i=1}^n \ln t_i\right]^{-1}.$
One can show that equation (7.5.3) has a unique positive solution, $\hat\nu_n$.
This solution can be obtained by the iterative formula
$\hat\nu^{(j+1)} = \left[\frac{\sum_{i=1}^n t_i^{\hat\nu^{(j)}}\ln t_i}{\sum_{i=1}^n t_i^{\hat\nu^{(j)}}} - \frac{1}{n}\sum_{i=1}^n \ln t_i\right]^{-1}, \quad j = 0, 1, \ldots.$
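The iterative formula can be implemented as a fixed-point iteration; the damping step below is an added stabilizer of mine, not part of the text's recipe, and the sample is that of Example 7.2, reused purely as an illustration:

```python
import math

def weibull_shape_mle(t, tol=1e-10, max_iter=500):
    """Damped fixed-point iteration for the Weibull shape MLE, solving (7.5.3):
    nu = [ sum(t_i^nu * ln t_i) / sum(t_i^nu) - mean(ln t_i) ]^(-1)."""
    n = len(t)
    mean_log = sum(math.log(v) for v in t) / n
    nu = 1.0  # starting value
    for _ in range(max_iter):
        w = [v ** nu for v in t]
        g = 1.0 / (sum(wi * math.log(v) for wi, v in zip(w, t)) / sum(w)
                   - mean_log)
        if abs(g - nu) < tol:
            return g
        nu = 0.5 * (nu + g)  # damping stabilizes the oscillating iterates
    return nu

sample = [102, 147, 154, 140, 204, 120, 120, 131, 313, 200]
nu_hat = weibull_shape_mle(sample)
```

At convergence the returned value satisfies equation (7.5.3) to within the tolerance.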
(7.5.5) $ASD\{\hat\nu_n\} = \frac{\nu}{\sqrt n}(\psi'(1))^{-1/2} = \frac{\sqrt 6\,\nu}{\pi\sqrt n} = .780\frac{\nu}{\sqrt n},$
and
(7.5.6)
p
n    0.02  0.05  0.10  0.25  0.40  0.50  0.60  0.75  0.85  0.90  0.95  0.98
5 -0.89 -0.71 -0.52 -0.11 0.26 0.53 0.85 1.50 2.24 2.86 3.98 5.63
6 -0.92 -0.74 -0.54 -0.15 0.20 0.46 0.75 1.33 1.99 2.52 3.52 5.06
7 -0.96 -0.77 -0.57 -0.19 0.16 0.41 0.68 1.22 1.82 2.28 3.13 4.34
8 -0.98 -0.79 -0.59 -0.21 0.13 0.37 0.63 1.14 1.70 2.11 2.87 3.90
9 -1.01 -0.81 -0.61 -0.23 0.11 0.34 0.60 1.08 1.61 2.00 2.69 3.60
10 -1.03 -0.83 -0.63 -0.24 0.09 0.32 0.57 1.04 1.55 1.90 2.55 3.38
11 -1.04 -0.85 -0.64 -0.25 0.07 0.30 0.54 1.00 1.49 1.83 2.45 3.22
12 -1.06 -0.86 -0.66 -0.26 0.06 0.28 0.52 0.97 1.45 1.78 2.36 3.10
13 -1.07 -0.87 -0.67 -0.27 0.05 0.27 0.51 0.95 1.41 1.73 2.29 2.99
14 -1.09 -0.88 -0.68 -0.28 0.04 0.26 0.49 0.93 1.38 1.70 2.23 2.91
15 -1.10 -0.89 -0.69 -0.29 0.03 0.25 0.48 0.91 1.35 1.65 2.18 2.84
16 -1.11 -0.90 -0.70 -0.30 0.02 0.24 0.47 0.89 1.33 1.62 2.14 2.77
18 -1.13 -0.92 -0.71 -0.31 0.01 0.22 0.45 0.87 1.29 1.57 2.07 2.67
20 -1.15 -0.94 -0.72 -0.32 0.00 0.21 0.43 0.84 1.26 1.53 2.01 2.59
22 -1.16 -0.95 -0.74 -0.33 -0.01 0.20 0.42 0.83 1.23 1.50 1.96 2.52
24 -1.18 -0.96 -0.75 -0.33 -0.02 0.19 0.41 0.81 1.21 1.48 1.92 2.47
28 -1.21 -0.98 -0.76 -0.35 -0.03 0.18 0.39 0.78 1.16 1.42 1.86 2.38
32 -1.23 -1.00 -0.78 -0.36 -0.04 0.16 0.38 0.76 1.14 1.39 1.81 2.31
36 -1.24 -1.01 -0.79 -0.37 -0.05 0.15 0.37 0.75 1.12 1.36 1.76 2.26
40 -1.26 -1.02 -0.80 -0.38 -0.06 0.15 0.35 0.73 1.10 1.33 1.73 2.22
45 -1.28 -1.03 -0.80 -0.39 -0.07 0.14 0.35 0.72 1.07 1.32 1.69 2.17
50 -1.29 -1.05 -0.81 -0.40 -0.08 0.13 0.34 0.71 1.05 1.29 1.66 2.13
55 -1.31 -1.05 -0.81 -0.40 -0.08 0.12 0.33 0.70 1.04 1.27 1.64 2.10
60 -1.32 -1.06 -0.82 -0.40 -0.09 0.12 0.32 0.69 1.03 1.26 1.61 2.07
70 -1.34 -1.08 -0.83 -0.42 -0.10 0.11 0.31 0.68 1.00 1.22 1.57 2.03
80 -1.36 -1.09 -0.83 -0.43 -0.11 0.10 0.30 0.67 0.98 1.20 1.55 1.99
100 -1.39 -1.12 -0.84 -0.44 -0.12 0.09 0.29 0.65 0.96 1.16 1.50 1.92
120 -1.41 -1.13 -0.84 -0.45 -0.13 0.08 0.27 0.64 0.94 1.14 1.46 1.87
∞ -1.60 -1.28 -1.00 -0.53 -0.20 0.00 0.20 0.53 0.81 1.00 1.28 1.60
(7.5.7)
Lower limit = ν̂_n/(1 + b_{ε2}/√n),
Upper limit = ν̂_n/(1 + b_{ε1}/√n),
p
n 0.02 0.05 0.10 0.25 0.50 0.75 0.90 0.95 0.98
5 -3.647 -2.788 -1.986 -0.993 -0.125 0.780 1.726 2.475 3.537
6 -3.419 -2.467 -1.813 -0.943 -0.110 0.740 1.631 2.300 3.162
7 -3.164 -2.312 -1.725 -0.910 -0.101 0.720 1.582 2.193 2.963
8 -2.987 -2.217 -1.672 -0.885 -0.091 0.710 1.547 2.124 2.837
9 -2.862 -2.151 -1.632 -0.867 -0.087 0.705 1.521 2.073 2.751
10 -2.770 -2.103 -1.603 -0.851 -0.082 0.702 1.502 2.037 2.691
11 -2.696 -2.063 -1.582 -0.839 -0.076 0.700 1.486 2.007 2.643
12 -2.640 -2.033 -1.562 -0.828 -0.073 0.700 1.472 1.981 2.605
13 -2.592 -2.008 -1.547 -0.822 -0.069 0.699 1.464 1.961 2.574
14 -2.556 -1.991 -1.534 -0.812 -0.067 0.700 1.456 1.946 2.548
15 -2.521 -1.971 -1.522 -0.806 -0.062 0.697 1.448 1.933 2.529
16 -2.496 -1.956 -1.516 -0.800 -0.060 0.700 1.440 1.920 2.508
18 -2.452 -1.930 -1.498 -0.793 -0.055 0.700 1.434 1.896 2.478
20 -2.415 -1.914 -1.485 -0.783 -0.054 0.702 1.422 1.883 2.455
22 -2.387 -1.895 -1.473 -0.779 -0.052 0.704 1.417 1.867 2.434
24 -2.366 -1.881 -1.465 -0.774 -0.044 0.705 1.411 1.857 2.420
28 -2.334 -1.863 -1.450 -0.762 -0.042 0.709 1.402 1.836 2.397
32 -2.308 -1.844 -1.437 -0.758 -0.034 0.707 1.397 1.827 2.376
36 -2.292 -1.830 -1.428 -0.750 -0.030 0.708 1.392 1.812 2.358
40 -2.277 -1.821 -1.417 -0.746 -0.025 0.715 1.391 1.802 2.346
45 -2.261 -1.808 -1.412 -0.741 -0.023 0.714 1.385 1.794 2.334
50 -2.249 -1.796 -1.400 -0.735 -0.021 0.714 1.379 1.789 2.319
55 -2.240 -1.791 -1.394 -0.734 -0.015 0.716 1.376 1.784 2.314
60 -2.239 -1.782 -1.387 -0.728 -0.015 0.713 1.371 1.774 2.301
70 -2.226 -1.765 -1.380 -0.720 -0.008 0.711 1.372 1.765 2.292
80 -2.218 -1.762 -1.368 -0.716 0.000 0.689 1.324 1.699 2.200
100 -2.210 -1.740 -1.360 -0.710 0.000 0.710 1.360 1.750 2.260
120 -2.210 -1.730 -1.350 -0.700 0.010 0.700 1.350 1.740 2.250
∞ -2.160 -1.730 -1.350 -0.710 0.000 0.710 1.350 1.730 2.160
where ε1 = (1 − γ)/2 and ε2 = (1 + γ)/2. The coefficients b_p are given in Table 7.2.
(e.2) Limits for β:
(7.5.8)
Lower limit = β̂_n exp(−U_{ε2}/(√n ν̂_n)),
Upper limit = β̂_n exp(−U_{ε1}/(√n ν̂_n)).
(7.5.9)
(7.5.10)
EXAMPLE 7.5
The following is a random sample of size n = 20 from a Weibull distribution W(1.75, 1).
Starting with ν^{(0)} = 1, the following values were obtained from the recursive formula following (7.5.3) (see Exercise [7.5.1] for a BASIC program):
j v(j)
1 2.95624
2 1.48198
3 2.19162
4 1.71465
5 1.98635
6 1.81348
7 1.91672
8 1.85253
9 1.89149
10 1.86749
11 1.88214
12 1.87314
13 1.87865
14 1.87527
15 1.87734
16 1.87607
17 1.87685
18 1.87637
We see that by the 18th iteration the algorithm has converged in the first four significant digits to the value ν̂ = 1.876. The MLE of β is, by (7.5.2), β̂ = 1.089. Both β̂ and ν̂ are quite close to the true values β = 1 and ν = 1.75. Estimates of the asymptotic standard errors and covariance are obtained by substituting in formulae (7.5.4)-(7.5.6) the estimates of β and ν. Thus we obtain
ASD{ν̂_n} = (.78)(1.876)/√20 = .327
and
ACOV{β̂_n, ν̂_n} = (.254)(1.089)/20 = .0138.
Confidence limits for (3, at level "y = .90, are
•
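The iteration following (7.5.3) is easy to program; a minimal sketch in Python (standing in for the BASIC program of Exercise [7.5.1]; the function name and stopping rule are our choices, not the book's):

```python
import math

def weibull_mle(t, nu0=1.0, tol=1e-6, max_iter=500):
    """MLEs of the Weibull parameters: the shape nu by the fixed-point
    iteration on (7.5.3), then the scale beta from (7.5.2)."""
    n = len(t)
    mean_log = sum(math.log(x) for x in t) / n
    nu = nu0
    for _ in range(max_iter):
        s0 = sum(x ** nu for x in t)                  # sum of t_i^nu
        s1 = sum(x ** nu * math.log(x) for x in t)    # sum of t_i^nu * ln(t_i)
        nu_new = 1.0 / (s1 / s0 - mean_log)
        if abs(nu_new - nu) < tol:
            nu = nu_new
            break
        nu = nu_new
    beta = (sum(x ** nu for x in t) / n) ** (1.0 / nu)
    return nu, beta
```

As in the table above, the iterates typically oscillate around the solution while contracting toward it.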
7.6 Extreme Value Distributions
We consider here the extreme value distribution of minima, EV(ξ, δ). The results are immediately translatable to those for the extreme value distribution of maxima. Given a random sample from an extreme value distribution EV(ξ, δ), we can transform the sample values X_i to Y_i = exp(X_i), i = 1, ⋯ , n. The transformed sample has a W(1/δ, e^ξ) distribution. We estimate ν = 1/δ and β = e^ξ by the MLE method of the previous section, and then the MLEs of ξ and δ are ξ̂ = ln β̂ and δ̂ = 1/ν̂. The results can be obtained, however, directly from the following formulae:
(a) Likelihood Function:
(b) MLE:
(7.6.2) ξ̂_n = δ̂_n ln( (1/n) Σ_{i=1}^n exp(t_i/δ̂_n) )
and
(7.6.3) δ̂_n = −t̄_n + [ Σ_{i=1}^n t_i exp(t_i/δ̂_n) ] / [ Σ_{i=1}^n exp(t_i/δ̂_n) ],
where t̄_n = (1/n) Σ_{i=1}^n t_i.
The MLE of δ can be determined from (7.6.3) by iteration, starting with an initial value δ^{(0)}. The value of δ̂_n obtained is then substituted in (7.6.2) to obtain ξ̂_n.
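This iteration can be sketched as follows (a minimal illustration in Python; names and the convergence tolerance are our choices):

```python
import math

def extreme_value_mle(t, delta0=1.0, tol=1e-8, max_iter=1000):
    """MLEs of (xi, delta) for EV(xi, delta) of minima:
    delta from the fixed-point equation (7.6.3), then xi from (7.6.2)."""
    n = len(t)
    t_bar = sum(t) / n
    delta = delta0
    for _ in range(max_iter):
        w = [math.exp(x / delta) for x in t]
        delta_new = -t_bar + sum(x * wi for x, wi in zip(t, w)) / sum(w)
        if abs(delta_new - delta) < tol:
            delta = delta_new
            break
        delta = delta_new
    xi = delta * math.log(sum(math.exp(x / delta) for x in t) / n)
    return xi, delta
```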
(c) Standard Deviations:
Formulae for the exact standard deviations are not available.
(d) Asymptotic Standard Deviations:
The asymptotic standard errors and covariance of ξ̂_n and δ̂_n are given by formulae (7.6.4)-(7.6.6). Confidence limits for ξ at level γ are of the form
(7.6.8) Lower limit = ξ̂_n − δ̂_n U_{ε2}/√n.
In the example under consideration,
ASD{ξ̂_n} = .32,
ASD{δ̂_n} = .24.
The confidence limits obtained for δ, at level γ = .90, according to (7.6.7), are 1.09 and 1.99. The asymptotic confidence limits are .98 and 1.77. The .90-level confidence limits for ξ are, according to (7.6.8), 9.14 and 10.09, while the corresponding asymptotic confidence limits are 9.19 and 10.26.
•
7.7 Normal and Lognormal Distributions
(a) MLE:
In Example 6.12 we showed that the MLEs of μ and σ in the normal case are
μ̂_n = X̄_n = (1/n) Σ_{i=1}^n X_i
and
(7.7.1) σ̂_n = ( (1/n) Σ_{i=1}^n (X_i − X̄_n)² )^{1/2}.
The asymptotic standard deviations are
(7.7.2) ASD{μ̂_n} = σ/√n,
ASD{σ̂_n} = σ/√(2n),
ACOV(μ̂_n, σ̂_n) = 0.
(d) Confidence Limits at level γ:
(d.1) Limits for μ.
(d.2) Limits for σ:
(7.7.4)
Lower limit = S ( (n − 1)/χ²_{ε2}[n − 1] )^{1/2},
Upper limit = S ( (n − 1)/χ²_{ε1}[n − 1] )^{1/2},
where ε1 = (1 − γ)/2.
(d) Reliability Function:
Since R(t) = 1 − Φ((ln t − μ)/σ), by the invariance principle, the MLE of R(t) is
R̂_n(t) = 1 − Φ((ln t − μ̂_n)/σ̂_n).
Its asymptotic standard deviation is
(7.7.6) ASD{R̂_n(t)} = (1/√n) φ((μ̂_n − ln t)/σ̂_n) ( 1 + (μ̂_n − ln t)²/(2σ̂_n²) )^{1/2},
and approximate confidence limits for R(t) are given by R̂_n(t) ± z_{ε2} ASD{R̂_n(t)}.
If the sample X_1, ⋯ , X_n is drawn from a lognormal distribution, LN(μ, σ), we make the transformation Y_i = ln X_i, i = 1, ⋯ , n. Then Y_1, ⋯ , Y_n can be considered a random sample from N(μ, σ) and the MLEs of μ and σ are obtained by the formulae given above.
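The log-transformation approach can be sketched briefly (function names are ours; the standard normal CDF Φ is computed through the error function):

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognormal_mle_reliability(x, t):
    """MLEs of mu, sigma for LN(mu, sigma) via y_i = ln x_i, and the
    MLE of the reliability R(t) = 1 - Phi((ln t - mu)/sigma)."""
    y = [math.log(v) for v in x]
    n = len(y)
    mu = sum(y) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in y) / n)
    r = 1.0 - Phi((math.log(t) - mu) / sigma)
    return mu, sigma, r
```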
(7.8.1)
(b) MLE:
From (7.8.1) we see that the likelihood function, as a function of t0, is maximized by the largest value of t0 which does not exceed t_(1), for any μ and σ. Hence t̂0 = t_(1). Substituting this value of t0 in (7.8.1) and taking logarithms, we obtain the log-likelihood function of (μ, σ) at t0 = t̂0:
(7.8.3)
and
(7.8.5)
and
(7.8.6)
[7.2.3] Consider the data of Example 7.2. Suppose that instead of the largest failure time, 313, we see the value 250+ (censored value).
(i) Estimate t0 and β by the least-squares method from a probability plot.
(ii) Estimate t0 by t̂0 = t_(1). Subtract the value of t_(1) from each sample value. Use the method of Section 6.4 to obtain an MLE and confidence interval for β, based on a sample of 9 differences with 1 censored value.
[7.3.1] Redo Example 7.3, assuming that k = 5.
[7.4.1] The following is a sample of n = 22 values from a G(ν, β) life distribution:
85.0, 249.9, 34.0, 605.4, 175.6, 253.1, 47.4, 69.1, 38.5, 141.7, 249.2, 342.4, 226.0, 159.6, 10.9, 380.4, 201.1, 235.7, 289.3, 65.8, 215.8, 11.7.
(i) Determine the MLE of ν and of β.
(ii) Find approximations to the SD of ν̂ and β̂ by employing Table 7.1.
(iii) Use formulae (7.4.14) and (7.4.15) to determine confidence intervals for ν and β, at level of confidence γ = .90.
This conditional PDF is called the posterior PDF of θ, given x. Thus, starting with a prior PDF, h(θ), we convert it, after observing the value of x, to the posterior PDF of θ given x.
If X_1, ⋯ , X_n is a random sample from a distribution with a PDF f(x; θ), then the posterior PDF of θ, corresponding to the prior PDF h(θ), is

(8.1.4) h(θ | x) = ∏_{i=1}^n f(x_i; θ) h(θ) / ∫ ⋯ ∫ ∏_{i=1}^n f(x_i; θ) h(θ) dθ_1 ⋯ dθ_k.
For a given sample, x, the posterior PDF h(θ | x) is the basis for most types of Bayesian inference.
EXAMPLE 8.1
I. Binomial Distributions
X", B(n;O), 0 < 0 < 1.
The PDF of X is
(8.1.5)
0< 0 < 1, 0 < VI, V2 < 00, where j3(a, b) is the complete beta function
j3(a, b) = 10 1
x a - l (l - x)b-ldx
r(a)r(b)
- r(a+b)"
(8.1.7)
162 8. Bayesian Reliability Estimation and Prediction
II. Poisson Distributions
X ~ P(λ), 0 < λ < ∞.
The PDF of X is
f(x; λ) = e^{−λ} λ^x/x!, x = 0, 1, ⋯ .
Suppose that the prior distribution of λ is the gamma distribution, G(ν, τ). The prior PDF is thus
(8.1.8)
(8.1.9)
III. Exponential Distributions
Let β have an inverse-gamma prior distribution, IG(ν, τ). That is, 1/β ~ G(ν, τ). The prior PDF is
(8.1.10)
(8.1.11)
0 < θ < 1. The binomial coefficient can be omitted from the likelihood function in Bayesian calculations. The factor of the likelihood which depends on θ is called the kernel of the likelihood. In the above binomial example, θ^x(1 − θ)^{n−x} is the kernel of the binomial likelihood. If the prior PDF of θ, h(θ), is of the same functional form (up to a proportionality factor which does not depend on θ) as that of the likelihood kernel, we call that prior PDF a conjugate one. As shown in Example 8.1, the beta prior distributions
are conjugate to the binomial model, the gamma prior distributions are
conjugate to the Poisson model and the inverse-gamma priors are conjugate
to the exponential model.
If a conjugate prior distribution is applied, the posterior distribution
belongs to the conjugate family.
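The conjugate updates just described reduce to simple parameter arithmetic. A sketch (assuming the scale parametrization of G(ν, τ) used in this chapter; the function names are ours):

```python
def beta_binomial_posterior(v1, v2, x, n):
    """Beta(v1, v2) prior + binomial observation (x successes in n)
    -> Beta(v1 + x, v2 + n - x) posterior."""
    return v1 + x, v2 + n - x

def gamma_poisson_posterior(v, tau, xs):
    """G(v, tau) prior (tau a scale) + Poisson sample xs
    -> G(v + sum(xs), tau / (1 + n*tau)) posterior."""
    n = len(xs)
    return v + sum(xs), tau / (1.0 + n * tau)
```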
One of the fundamental problems in Bayesian analysis is the choice of a prior distribution of θ. From a Bayesian point of view, the prior distribution should reflect the prior knowledge of the analyst about the parameter of interest. It is often difficult to express the prior belief about the value of θ in a PDF form. We find that analysts apply, whenever possible, conjugate priors whose means and standard deviations may reflect the prior beliefs. Another common approach is to use a "diffused," "vague" or Jeffreys prior, which is proportional to |I(θ)|^{1/2}, where I(θ) is the Fisher information function (matrix). For further reading on this subject the reader is referred to Box and Tiao (1973), Good (1965) and Press (1989).
•
8.2 Loss Functions and Bayes Estimators
In order to define Bayes estimators we must first specify a loss function, L(θ̂, θ), which represents the cost involved in using the estimate θ̂ when the true value is θ. Often this loss is taken to be a function of the distance between the estimate and the true value, i.e., |θ̂ − θ|. In such cases, the loss function is written as
L(θ̂, θ) = α(θ − θ̂), if θ̂ ≤ θ,
L(θ̂, θ) = β(θ̂ − θ), if θ̂ > θ.
It is easily shown that the value of θ̂1 which minimizes the posterior risk R(θ̂1, x) is the posterior expectation of θ1. If the loss function is L(θ̂1, θ1) = |θ̂1 − θ1|, the Bayes estimator of θ1 is the median of the posterior distribution of θ1 given x.
If the sample size is n = 50, and K50 = 27, the Bayes estimator of R(t) is R̂(t; 27) = 28/52 = .538. Notice that the MLE of R(t) is R̂50 = 27/50 = .540. The sample size is sufficiently large for the MLE and the Bayes estimator to be numerically close. If the loss function is |R̂ − R|, the Bayes estimator of R is the median of the posterior distribution of R(t) given K_n, i.e., the median of the beta distribution with parameters ν1 = K_n + 1 and ν2 = n − K_n + 1.
Generally, if ν1 and ν2 are integers then the median of the beta distribution is
(8.2.2) Me = ν1 F_{.5}[2ν1, 2ν2] / ( ν2 + ν1 F_{.5}[2ν1, 2ν2] ),
where F_{.5}[j1, j2] is the median of the F[j1, j2] distribution. Substituting ν1 = K_n + 1 and ν2 = n − K_n + 1 in (8.2.2), we obtain that the Bayes estimator of R(t) with respect to the absolute error loss is
(8.2.6)
where T^{(i)}_{n_i,r_i} is the total time on test statistic for the i-th module, r_i is the censoring frequency of the observations on the i-th module, and τ_i and ν_i are the prior parameters for the i-th module. As in (8.2.5), (8.2.6) is the Bayes estimator for the squared-error loss, under the assumption that the MTTFs
of the various modules are priorly independent. In a similar manner one
can write a formula for the Bayes estimator of the reliability of a system
having a parallel structure.
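Lacking an F-table for (8.2.2), the posterior median of the beta distribution can be approximated numerically; a sketch (the Simpson-rule grid size and bisection depth are our choices):

```python
def beta_median(a, b, grid=4000):
    """Median of Beta(a, b) by Simpson integration of the kernel
    x^(a-1) * (1-x)^(b-1) and bisection on the normalized CDF."""
    def kernel(x):
        return x ** (a - 1) * (1.0 - x) ** (b - 1)

    def integral(z):
        # Simpson's rule on [0, z]
        h = z / grid
        s = kernel(0.0) + kernel(z)
        for i in range(1, grid):
            s += (4 if i % 2 else 2) * kernel(i * h)
        return s * h / 3.0

    total = integral(1.0)
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integral(mid) / total < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For the example above (n = 50, K_n = 27), the median of Beta(28, 24) is close to the posterior mean .538, as the text notes.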
These limits can be determined with the aid of a table of the fractiles of the F-distribution, according to the formulae
(8.3.2) Lower limit = (K_n + 1) / [ (K_n + 1) + (n − K_n + 1) F_{ε2}[2n + 2 − 2K_n, 2K_n + 2] ]
and
(8.3.3) Upper limit = (K_n + 1) F_{ε2}[2K_n + 2, 2n + 2 − 2K_n] / [ (n − K_n + 1) + (K_n + 1) F_{ε2}[2K_n + 2, 2n + 2 − 2K_n] ].
and
F_{.975}[56, 48] = 1.746.
Thus, the Bayesian credibility limits obtained for R(t) are .402 and .671. Recall that the Bayes estimator was .538.
(8.3.4)
and
(8.3.6)
and
(8.3.9) ∫_{T_L(x)}^{T_U(x)} g*(t | x) dt = γ.
Generally, the limits are chosen so that the tail areas are each (1 − γ)/2. We illustrate the derivation of a Bayesian prediction interval in the following example.
EXAMPLE 8.2
Consider a device with an exponential lifetime distribution E(β). We test a random sample of n of these, stopping at the r-th failure. Suppose the prior distribution of β is IG(ν, τ). Then, as seen in Section 8.2.2, the posterior distribution of β given the ordered failure times t_(1), ⋯ , t_(r) is IG(ν + r, τ/(1 + τ T_{n,r})), where T_{n,r} = Σ_{i=1}^r t_(i) + (n − r) t_(r).
Suppose we have an additional s such devices, to be used one at a time in some system, replacing each one immediately upon failure by another. We are interested in a prediction interval of level γ for T, the time until all s devices have been used up. Letting Y = (Y_1, ⋯ , Y_s) be the lifetimes of the devices, we have T(Y) = Σ_{i=1}^s Y_i. Thus, T(Y) has a G(s, β) distribution. Substituting in (8.3.8), it is easily shown that the predictive PDF of T(Y), given t_(1), ⋯ , t_(r), is
one can show that the predictive distribution of U given t_(1), ⋯ , t_(r) is the Beta(r + ν, s) distribution. If we let Be_{ε1}(r + ν, s) and Be_{ε2}(r + ν, s) be the ε1- and ε2-fractiles of Beta(r + ν, s), where ε1 = (1 − γ)/2 and ε2 = (1 + γ)/2, then the lower and upper Bayesian prediction limits for T(Y) are
(8.3.11) T_L = (T_{n,r} + 1/τ)( 1/Be_{ε2}(ν + r, s) − 1 )
and
(8.3.12) T_U = (T_{n,r} + 1/τ)( 1/Be_{ε1}(ν + r, s) − 1 ).
Equivalently, in terms of the fractiles of the F-distribution,
(8.3.13) T_U = (T_{n,r} + 1/τ) (s/(ν + r)) F_{ε2}[2s, 2ν + 2r].
Formulae (8.3.12) and (8.3.13) have been applied in the following context.
Twenty computer monitors have been put on test at time t0 = 0. The test was terminated at the sixth failure (r = 6). The total time on test was T_{20,6} = 75,805.6 [hr]. We wish to predict the time till failure [hr] of monitors which are shipped to customers. Assuming that TTF ~ E(β), the upper prediction limit obtained was
T_U = 76805.6 × (1/11) × 4.38 = 30,582.6 [hr].
We have high prediction confidence that a monitor in the field will not fail before 175 hours of operation.
•
8.4 Credibility Intervals for the
Asymptotic Availability of Repairable Systems:
The Exponential Case
Consider a repairable system. We take observations on n consecutive renewal cycles. It is assumed that in each renewal cycle, TTF ~ E(β) and TTR ~ E(γ). Let t_1, ⋯ , t_n be the values of TTF in the n cycles and s_1, ⋯ , s_n be the values of TTR. One can readily verify that the likelihood function of β depends on the statistic U = Σ_{i=1}^n t_i and that of γ depends on V = Σ_{i=1}^n s_i. U and V are called the likelihood (or minimal sufficient) statistics. Let λ = 1/β and μ = 1/γ. The asymptotic availability is
A∞ = μ/(μ + λ).
In the Bayesian framework we assume that λ and μ are priorly independent, having prior gamma distributions G(ν, τ) and G(ω, ζ), respectively. One can verify that the posterior distributions of λ and μ, given U and V, are G(n + ν, U + τ) and G(n + ω, V + ζ), respectively. Moreover, λ and μ are posteriorly independent. Routine calculations yield that
(8.4.1) A_{∞,ε1} = [ 1 + ((V + ζ)/(U + τ)) · Be_{ε2}(n + ν, n + ω)/Be_{ε1}(n + ω, n + ν) ]^{−1}
and
(8.4.2) A_{∞,ε2} = [ 1 + ((V + ζ)/(U + τ)) · Be_{ε1}(n + ν, n + ω)/Be_{ε2}(n + ω, n + ν) ]^{−1},
where Be_ε(p, q) is the ε-th fractile of Beta(p, q). Moreover, the fractiles of the beta distribution are related to those of the F-distribution according to the following formulae:
(8.4.3)
and
(8.4.4)
observations gave the values U = 496.9 [min] and V = 126.3 [min]. According to these values, the MLE of A∞ is Â∞ = 496.9/(496.9 + 126.3) = .797. Assume gamma prior distributions for λ and μ, with ν = 2, τ = .001, ω = 2 and ζ = .005. We obtain from (8.4.3) and (8.4.4), for γ = .95,
Finally, the credibility limits obtained from (8.4.1) and (8.4.2) are A_{∞,.025} = .707 and A_{∞,.975} = .865. To conclude this example we remark that the Bayes estimator of A∞, for the absolute deviation loss function, is the median of the posterior distribution of A∞, given (U, V), namely A_{∞,.5}.
In the present example n + ν = n + ω = 74. The Beta(74, 74) distribution is symmetric. Hence Be_{.5}(74, 74) = .5. To obtain A_{∞,.5} we solve the equation

[ ((1 − A_{∞,.5})/A_{∞,.5}) (U + τ) ] / [ ((1 − A_{∞,.5})/A_{∞,.5}) (U + τ) + (V + ζ) ] = Be_{.5}(n + ν, n + ω),

whose solution is
A_{∞,.5} = ( 1 + (V + ζ)/(U + τ) )^{−1} = 1/(1 + 126.305/496.901) = .797.
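A sketch of this computation (names are ours; the closed form for the median holds only in the symmetric case n + ν = n + ω noted above):

```python
def asymptotic_availability(U, V, tau=0.0, zeta=0.0):
    """MLE of A_inf = U/(U+V), and the posterior median
    A = (1 + (V+zeta)/(U+tau))^-1 in the symmetric-beta case."""
    mle = U / (U + V)
    bayes_median = 1.0 / (1.0 + (V + zeta) / (U + tau))
    return mle, bayes_median
```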
(8.5.1)
(8.5.3)
The empirical PDF f_n(y) converges (by the Strong Law of Large Numbers) in a probabilistic sense, as n → ∞, to f_H(y). Accordingly, replacing f_H(y) in (8.5.3) with f_n(y) we obtain an estimator of E_H{λ | y} based on the past n trials. This estimator is called an empirical Bayes estimator (EBE) of λ:

(8.5.4) λ̂_n(y) = (y + 1) f_n(y + 1)/f_n(y), y = 0, 1, ⋯ .
EXAMPLE 8.4
A total of n = 188 batches of circuit boards were inspected for soldering defects.
Each board has typically several hundred soldering points, and each batch
contained several hundred boards. It is assumed that the number of soldering defects, X (per 10^5 points), has a Poisson distribution. In the following
table we present the frequency distribution of X among the 188 observed
batches.
x 0 1 2 3 4 5 6 7 8
f(x) 4 21 29 32 19 14 13 5 8
x 9 10 11 12 13 14 15 16 17 18
f(x) 5 9 1 2 4 4 1 4 2 1
x 19 20 21 22 23 24 25 26 Total
f(x) 1 1 1 1 2 1 2 1 188
For instance, at y = 8 the EBE is λ̂_188(8) = 9 f_188(9)/f_188(8) = 9 × 5/8 = 5.625 (per 10^5 points), i.e., 56.25 PPM. After observing Y_189 = 8 we can increase f_188(8) by 1, i.e., f_189(8) = f_188(8) + 1, and observe the next batch.
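The EBE computation on these data can be sketched as follows (the dictionary encodes the frequency table above):

```python
# Frequency table of soldering defects per 10^5 points (Example 8.4 data)
freq = {0: 4, 1: 21, 2: 29, 3: 32, 4: 19, 5: 14, 6: 13, 7: 5, 8: 8,
        9: 5, 10: 9, 11: 1, 12: 2, 13: 4, 14: 4, 15: 1, 16: 4, 17: 2,
        18: 1, 19: 1, 20: 1, 21: 1, 22: 1, 23: 2, 24: 1, 25: 2, 26: 1}

def ebe_poisson(y, f):
    """Empirical Bayes estimate (8.5.4): (y+1) * f_n(y+1) / f_n(y)."""
    return (y + 1) * f.get(y + 1, 0) / f[y]
```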
•
The above method of deriving an EBE can be employed for any PDF f(x; θ) of a discrete distribution such that
f(x + 1; θ)/f(x; θ) = a(x) + b(x)θ.
In such a case, the EBE of θ is
(8.5.5)
(8.5.6) E_{τ,ν}{T} = 1/(τ(ν − 1))
and
(8.5.7) E_{τ,ν}{T²} = 2/(τ²(ν − 1)(ν − 2)),
provided ν > 2.
Let M_{1,n} = (1/n) Σ_{i=1}^n t_i and M_{2,n} = (1/n) Σ_{i=1}^n t_i². M_{1,n} and M_{2,n} converge in a probabilistic sense to E_{τ,ν}{T} and E_{τ,ν}{T²}, respectively. We estimate τ and ν by the method of moment equations, by solving
(8.5.8) M_{1,n} = 1/(τ̂(ν̂ − 1))
and
(8.5.9) M_{2,n} = 2/(τ̂²(ν̂ − 1)(ν̂ − 2)).
The solutions are
(8.5.10) τ̂_n = (D_n² − M²_{1,n}) / [ M_{1,n}(D_n² + M²_{1,n}) ]
and
(8.5.11) ν̂_n = 2D_n² / (D_n² − M²_{1,n}),
where D_n² = M_{2,n} − M²_{1,n}, provided D_n² > M²_{1,n}. It can be shown that for large values of n, D_n² > M²_{1,n} holds with high probability.
Substituting the empirical estimates τ̂_n and ν̂_n in (8.2.5) we obtain a parametric EBE of the reliability function.
•
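The moment estimators (8.5.10)-(8.5.11) can be sketched as (function name is ours):

```python
def moment_estimates(t):
    """Method-of-moments estimates of (tau, nu) for the IG(nu, tau) prior,
    per (8.5.10)-(8.5.11), with D^2 the sample variance M2 - M1^2."""
    n = len(t)
    m1 = sum(t) / n
    m2 = sum(x * x for x in t) / n
    d2 = m2 - m1 * m1
    assert d2 > m1 * m1, "estimates defined only when D^2 > M1^2"
    tau = (d2 - m1 * m1) / (m1 * (d2 + m1 * m1))
    nu = 2.0 * d2 / (d2 - m1 * m1)
    return tau, nu
```

By construction the returned values solve the two moment equations (8.5.8)-(8.5.9) exactly.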
For additional results on the EBE of reliability functions, see Martz and
Waller (1982) and Tsokos and Shimi (1977).
8.6 Exercises
[8.1.1] Suppose that the TTF of a system is a random variable having an exponential distribution, E(β). Suppose also that the prior distribution of λ = 1/β is G(2.25, .01).
(i) What is the posterior distribution of λ, given T = 150 [hr]?
(ii) What is the Bayes estimator of β, for the squared-error loss?
(iii) What is the posterior SD of β?
[8.1.2] Let J(t) denote the number of failures of a device in the time interval (0, t]. After each failure the device is instantaneously renewed. Let J(t) have a Poisson distribution with mean λt. Suppose that λ has a gamma prior distribution, with parameters ν = 2 and τ = .05.
(i) What is the predictive distribution of J(t)?
(ii) Given that J(t)/t = 10, how many failures are expected in the next time unit?
(iii) What is the Bayes estimator of λ, for the squared-error loss?
(iv) What is the posterior SD of λ?
[8.1.3] The proportion of defectives, θ, in a production process has a uniform prior distribution on (0, 1). A random sample of n = 10 items from this process yields K10 = 3 defectives.
(i) What is the posterior distribution of θ?
(ii) What is the Bayes estimator of θ for the absolute error loss?
[8.1.4] Let X ~ P(λ) and suppose that λ has the Jeffreys improper prior h(λ) = 1/√λ. Find the Bayes estimator for squared-error loss and its posterior SD.
[8.2.1] Apply formula (8.2.3) to determine the Bayes estimator of the reliability when n = 50 and K50 = 49.
[8.2.2] A system has three modules, M1, M2, M3. M1 and M2 are connected in series and these two are connected in parallel to M3, i.e.,
significance level of the test. One may also specify that the test have a particular probability, β, of a Type II error when R(t0) equals some particular value R1 less than R0. R1 can be regarded as a clearly unacceptable reliability level, below which we require the test to have high probability (1 − β) of rejecting the null hypothesis.
The characteristics of any given hypothesis test can be summarized by its operating characteristic (OC) function. For any value R, 0 < R < 1, the value of the OC function at R, OC(R), represents the probability that the test will accept H0 when the true value of R(t0) is R, i.e.,
OC(R) = Pr{test accepts H0 | R(t0) = R}.
Since any reasonable test for the reliability demonstration problem will have
an OC function that increases with R, the two requirements mentioned
above are expressible as follows:
OC(R0) = 1 − α and OC(R1) = β.
In the following sections we develop several tests of interest in reliability demonstration. We remark here that procedures for obtaining confidence intervals for R(t0), which were discussed in Chapter 7, can be used to provide tests of hypotheses. Specifically, the procedure involves computing the upper confidence limit of a (1 − 2α)-level confidence interval for R(t0) and comparing it with the value R0. If the upper confidence limit exceeds R0 then the null hypothesis is accepted; otherwise it is rejected. This test will have a significance level of α.
For example, if the specification of the reliability at age t = t0 is R = .75 and the confidence interval for R(t0), at level of confidence γ = .90, is (.80, .85), the hypothesis H0 can be immediately accepted at a level of significance of α = (1 − γ)/2 = .05. There is a duality between procedures for testing hypotheses and procedures for confidence intervals.
If n is large, then one can apply the normal approximation to the binomial CDF. In these cases we can determine C_α to be the integer most closely satisfying
where z_{1−α} = Φ^{−1}(1 − α). The OC function of this test in the large sample case is approximated by
(9.2.5) OC(R) ≈ Φ( (nR − C_α − 1/2) / (nR(1 − R))^{1/2} ).
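The large-sample approximation (9.2.5) can be sketched as follows (names are ours; Φ via the error function):

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def oc_binomial(R, n, c_alpha):
    """Large-sample OC approximation (9.2.5) with continuity correction."""
    return Phi((n * R - c_alpha - 0.5) / math.sqrt(n * R * (1.0 - R)))
```

As expected, the approximate OC function increases with R.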
(9.2.6)
H0 : β ≥ β0
versus
H1 : β < β0,
where β0 = −t0/ln R0. Let t1, ⋯ , tn be the values of a (complete) random sample of size n. Let t̄_n = (1/n) Σ_{i=1}^n t_i. The hypothesis H0 is rejected if t̄_n < C_α,
where
(9.3.1)
(9.3.3)
(9.3.4)
(9.3.5)
EXAMPLE 9.2
Suppose that in Example 9.1, we know that the system lifetimes are exponentially distributed. It is interesting to examine how many systems would have to be tested in order to achieve the same error probabilities as before, if our decision were now based on t̄_n.
Since β = −t/ln R(t), the value of the parameter β under R(t0) = R(1000) = .85 is β0 = −1000/ln(.85) = 6153 [hr], while its value under R(t0) = .80 is β1 = −1000/ln(.80) = 4481 [hr]. Substituting these values into (9.3.5), along with α = .05 and γ = .10 (γ was denoted by β in Example 9.1), we obtain the necessary sample size n ≈ 87.
Thus we see that the additional knowledge that the lifetime distribution is exponential, along with the use of complete lifetime data on the sample, allows us to achieve a greater than fivefold increase in efficiency in terms of the sample size necessary to achieve the desired error probabilities.
•
We remark that if the sample is censored at the r-th failure then all the formulae developed above apply after replacing n by r, and t̄_n by β̂_{n,r} = T_{n,r}/r.
EXAMPLE 9.3
Suppose that the reliability at age t = 250 [hr] should be at least R0 = .85. Let R1 = .75. The corresponding values of β0 and β1 are 1538 [hr] and 869 [hr], respectively. Suppose that the sample is censored at the r = 25th failure. Let β̂_{n,r} = T_{n,r}/25 be the MLE of β. H0 is rejected, with level of significance α = .05, if
β̂_{n,r} ≤ (1538/50) χ²_{.05}[50] = 1069 [hr].
The OC function at β = 869 is
OC(869) = Pr{χ²[50] > (1538/869) χ²_{.05}[50]}
= Pr{χ²[50] > 61.5}
≈ 1 − Φ( (61.5 − 50)/√100 )
= .125.
•
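The normal approximation to the chi-square tail used above can be sketched as:

```python
import math

def chi2_tail_normal_approx(x, df):
    """P(chi2[df] > x) via the normal approximation of Example 9.3
    (mean df, variance 2*df)."""
    z = (x - df) / math.sqrt(2.0 * df)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```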
9.4 Sequential Reliability Testing
Sometimes in reliability demonstration an overriding concern is keeping the number of items tested to a minimum, subject to whatever accuracy requirements are imposed. This could be the case, for example, when testing
182 9. Reliability Demonstration: Testing and Acceptance Procedures
(9.4.1) A ≐ γ/(1 − α)
and
(9.4.2) B ≐ (1 − α)/γ.
In many cases, it is more convenient to express the test in terms of the statistic ln Λ_n(x_n) and the boundaries ln A and ln B.
We will consider the SPRT in detail only as it applies to the two special
cases considered for the non-sequential case in Sections 9.2 and 9.3.
Thus,
ln Λ_n = n ln( (1 − R1)/(1 − R0) ) − K_n ln( R0(1 − R1)/(R1(1 − R0)) ).
It follows from (9.4.1) and (9.4.2) that the SPRT can be expressed in terms of K_n as follows:
Continue sampling if −h1 + sn < K_n < h2 + sn,
where
s = ln( (1 − R1)/(1 − R0) ) / ln( R0(1 − R1)/(R1(1 − R0)) )
and
(9.4.4) h1 = ln( (1 − γ)/α ) / ln( R0(1 − R1)/(R1(1 − R0)) ),
h2 = ln( (1 − α)/γ ) / ln( R0(1 − R1)/(R1(1 − R0)) ).
Note that if we plot Kn vs. n, the accept and reject boundaries are parallel
straight lines with common slope s and intercepts h2 and -hl, respectively.
The OC function of this test is expressible (approximately) in terms of an implicit parameter ψ. Letting
(9.4.5) R(ψ) = [ 1 − ((1 − R1)/(1 − R0))^ψ ] / [ (R1/R0)^ψ − ((1 − R1)/(1 − R0))^ψ ], ψ ≠ 0,
with R(0) = s, the OC function is
(9.4.6) OC(R(ψ)) = [ ((1 − γ)/α)^ψ − 1 ] / [ ((1 − γ)/α)^ψ − (γ/(1 − α))^ψ ], ψ ≠ 0,
and OC(R(0)) = h1/(h1 + h2).
The ASN function is given approximately by
(9.4.7) ASN(R(ψ)) = [ OC(R(ψ)) ln( γ/(1 − α) ) + (1 − OC(R(ψ))) ln( (1 − γ)/α ) ] / [ ln( (1 − R1)/(1 − R0) ) − R(ψ) ln( R0(1 − R1)/(R1(1 − R0)) ) ], ψ ≠ 0,
and ASN(R(0)) = h1h2/(s(1 − s)).
The ASN function will typically have a maximum at some value of R between R0 and R1, and decrease as R moves away from the point of maximum in either direction.
EXAMPLE 9.4
Consider Example 9.2, where we had t = 1000 [hr], R0 = .85, R1 = .80, α = .05, γ = .10. Suppose now that systems are tested sequentially, and we apply the SPRT based on the number of systems still functioning at 1000 [hr]. Using (9.4.4), the parameters of the boundary lines are s = .826, h1 = 8.30 and h2 = 6.46.
The OC and ASN functions of the test are given in Table 9.1, for selected values of ψ.
Compare the values in the ASN column to the sample size required for the corresponding fixed-sample test, n = 483. It is clear that the SPRT effects a considerable saving in sample size, particularly when R(t0) is less than R1 or greater than R0. Note also that the maximum ASN value occurs when R(t0) is near s.
•
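The boundary parameters of this SPRT can be computed directly (a sketch; the function name is ours):

```python
import math

def binomial_sprt_params(R0, R1, alpha, gamma):
    """Slope s and intercepts h1, h2 of the SPRT boundaries for K_n,
    per (9.4.4) and the expression for s above."""
    d = math.log(R0 * (1.0 - R1) / (R1 * (1.0 - R0)))
    s = math.log((1.0 - R1) / (1.0 - R0)) / d
    h1 = math.log((1.0 - gamma) / alpha) / d
    h2 = math.log((1.0 - alpha) / gamma) / d
    return s, h1, h2
```

With the values of Example 9.4 this reproduces s = .826, h1 = 8.30, h2 = 6.46.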
9.4.2 The SPRT for Exponential Lifetimes
When the lifetime distribution is known to be exponential, we have seen
the increase in efficiency gained by measuring the actual failure times of the
parts being tested. By using a sequential procedure based on these failure
times, further gains in efficiency can be achieved.
Expressing the hypotheses in terms of the parameter β of the lifetime distribution E(β), we wish to test H0 : β ≥ β0 vs. H1 : β < β0, with significance level α and Type II error probability γ when β = β1, where
/31 < /30. Letting tn = (tl, .. · ,tn ) be the times till failure of the first n
parts tested, the likelihood ratio statistic is given by
Table 9.1. OC and ASN Values for the SPRT of Example 9.4
Thus,
Continue sampling if −h1 + sn < Σ_{i=1}^n t_i < h2 + sn,
(9.4.8) Accept H0 if Σ_{i=1}^n t_i ≥ h2 + sn,
Reject H0 if Σ_{i=1}^n t_i ≤ −h1 + sn,
where
(9.4.9)
Thus, if we plot Σ_{i=1}^n t_i vs. n, the accept and reject boundaries are again parallel straight lines.
As before, let ψ be an implicit parameter, and define
(9.4.10)
ψ = 0,
ψ ≠ 0.
(9.4.11)
and
(9.4.12)
ψ = 0.
Note that when ψ = 1, β(ψ) equals β0, while when ψ = −1, β(ψ) equals β1.
EXAMPLE 9.5
Continuing Example 9.2, recall we had α = .05, γ = .10, β0 = 6153, β1 = 4481. Using (9.4.9), the parameters of the boundaries of the SPRT
Table 9.2. OC and ASN Values for the SPRT of Example 9.5
(9.5.1) Λ(t; X_n(t)) = (λ1/λ0)^{X_n(t)} exp{ −nt(λ1 − λ0) }.
The test continues as long as the random graph of (T_n(t), X_n(t)) is between the two linear boundaries
(9.5.2)
and
(9.5.4)
9.5 Sequential Tests for Poisson Processes 189
(9.5.5)
and
(9.5.6)
The instant X_n(t) jumps above b_U(t) the test terminates and H0 is rejected; on the other hand, the instant X_n(t) = b_L(t) the test terminates and H0 is accepted. Acceptance of H0 entails that the reliability meets the specified requirement. Rejection of H0 may lead to additional engineering modification to improve the reliability of the system.
The OC function of this sequential test is the same as that given by (9.4.10) and (9.4.11). Let T denote the random time of termination. It can be shown that Pr_λ{T < ∞} = 1 for all 0 < λ < ∞. The expected duration of the test is given approximately by
(9.5.7)
where
(9.5.8) E_λ{X_n(T)} ≈ [ h2 − OC(λ)(h1 + h2) ] / (1 − s/λ), if λ ≠ s,
and E_λ{X_n(T)} ≈ h1h2, if λ = s.
It should be noticed that formula (9.5.8) yields the same values as formula (9.4.12) for λ = 1/β(ψ). The SPRT of Section 9.4 can terminate only after a failure, while the SPRT based on X_n(t) may terminate while crossing the lower boundary b_L(t), before a failure occurs.
The minimal time required to accept H0 is τ0 = h1/(ns). In the case of Example 9.5, with n = 20, τ0 = 9.11536/(20 × .0001912) = 2383.2 [hr]. That is, over 99 days of testing without any failure. The SPRT may, in addition, be frequency censored by fixing a value x* so that as soon as X_n(t) ≥ x* the test terminates and H0 is rejected. In Example 9.5 we see that the expected number of failures at termination may be as large as 66. We can censor the test at x* = 50. This will reduce the expected duration of the test but will increase the probability of a Type I error, α. Special programs are available for computing the operating characteristics of such censored tests, but these are beyond the scope of the present text.
(9.6.2) π1 = ∫_{ {θ: R(t0; θ) ≤ R1} } h(θ) dθ.
Let t1, t2, ⋯ , tn be n observed values of the TTF and let t_n = (t1, ⋯ , tn). We will convert the prior distribution of θ to the posterior distribution, given t_n. Let π0(t_n) and π1(t_n) be the posterior probabilities of H0 and H1, respectively, given t_n. These probabilities are obtained from (9.6.1) and (9.6.2) by replacing the prior PDF h(θ) by the posterior PDF h(θ | t_n).
Let l0 be the loss incurred by accepting H0 when actually R ≤ R1. On the other hand, let l1 be the loss due to rejecting H0 when actually R ≥ R0. The optimal Bayesian decision is to take the action entailing the smaller expected posterior risk. Accordingly, H0 is accepted if
Similarly,
π1(T_{n,23}) = Pr{λ ≥ λ1 | T_{n,23}}
= Pos(25; λ1(T_{n,23} + 100)).
9.6 Bayesian Reliability Demonstration Tests 191
In the following table we present the values of π0(T_{n,23}) and π1(T_{n,23}), and the ratio π1(T_{n,23})/π0(T_{n,23}), as a function of the total life.
and
(9.6.7)
and
(9.6.8)
OC(λ) = Pos(15; λ) = e^{−λ} Σ_{x=0}^{15} λ^x / x!.

α_N = (1/ψ_H) ∫₀^{λ₀} [1 − Σ_{x=0}^{15} Pos(x | λ)] g(λ | 3, 3.5) dλ

    = (1/ψ_H) { (1/(Γ(3)(3.5)³)) ∫₀^{10.5} λ² e^{−λ/3.5} dλ

      − Σ_{x=0}^{15} (1/(x! Γ(3)(3.5)³)) ∫₀^{10.5} λ^{x+2} e^{−4.5λ/3.5} dλ }.
Notice that

(1/(Γ(3)(3.5)³)) ∫₀^{10.5} λ² e^{−λ/3.5} dλ = 1 − Pos(2 | 3) = .577.
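The identity behind this evaluation, P{G(3, 3.5) ≤ 10.5} = 1 − Pos(2 | 3), can be checked numerically; the helper below is a plain transcription of the Poisson CDF.

```python
import math

def pois_cdf(k, mu):
    """P(Poisson(mu) <= k)."""
    term, total = math.exp(-mu), math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

# Midpoint-rule integration of the G(3, 3.5) density over (0, 10.5).
a, scale, top, m = 3, 3.5, 10.5, 100_000
h = top / m
norm = math.gamma(a) * scale ** a
integral = h / norm * sum(
    ((i + 0.5) * h) ** (a - 1) * math.exp(-(i + 0.5) * h / scale)
    for i in range(m)
)

print(round(integral, 3), round(1 - pois_cdf(2, 3), 3))  # both 0.577
```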
Also,

(1/(x! Γ(3)(3.5)³)) ∫₀^{10.5} λ^{x+2} e^{−4.5λ/3.5} dλ

    = ((x+1)(x+2)/(2(4.5)³)) (3.5/4.5)^x [1 − Pos(x + 2; 13.5)].
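This reduction can be verified numerically for any x; the sketch below compares the integral on the left with the closed form on the right (midpoint-rule integration; the function names are ours).

```python
import math

def pois_cdf(k, mu):
    """P(Poisson(mu) <= k)."""
    term, total = math.exp(-mu), math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def lhs(x, m=200_000):
    # (1/(x! Gamma(3) 3.5^3)) * integral_0^10.5 of u^(x+2) e^(-4.5u/3.5) du
    h = 10.5 / m
    c = math.factorial(x) * math.gamma(3) * 3.5 ** 3
    return h / c * sum(
        ((i + 0.5) * h) ** (x + 2) * math.exp(-4.5 * (i + 0.5) * h / 3.5)
        for i in range(m)
    )

def rhs(x):
    return ((x + 1) * (x + 2) / (2 * 4.5 ** 3)) * (3.5 / 4.5) ** x \
        * (1 - pois_cdf(x + 2, 13.5))

for x in (0, 5, 10):
    print(x, round(lhs(x), 6), round(rhs(x), 6))
```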
Similarly, the posterior probability that λ ≤ λ₁, given acceptance of H₀, is

1 − (1/π_H) ∫_{λ₁}^∞ [Σ_{x=0}^{15} Pos(x | λ)] g(λ | 3, 3.5) dλ

    = 1 − (1/π_H) { (1/(4.5)³) Σ_{x=0}^{15} ((x+1)(x+2)/2) (3.5/4.5)^x Pos(x + 2; 16.2) }

    = .848.
We see that the reliability demonstration test under consideration does not
protect the consumer. In order to protect the consumer the test should last
longer than 30 hours.
A sequential BRDT is one in which the length of the test is not determined
in advance; instead, a stopping rule, which is a function of the data, is
chosen. When the stopping rule terminates the test, the decision whether to
accept H₀ or to reject it is made according to (9.6.3). The question of which
stopping rule to choose is an important one. The reader interested in the
theory of optimal stopping rules is referred to DeGroot (1970) or Chow,
Robbins and Siegmund (1971).
The following stopping rule is often applied. Choose numbers π_* and π*,
0 ≤ π_* < π* < 1, with π_* close to zero and π* close to one. Let π_t⁰ denote
the posterior probability of H₀ at time t, 0 ≤ t < ∞, which is a function of
the observations in (0, t].
The stopping rule:
Stop at the smallest value of t, 0 ≤ t, at which π_t⁰ ≤ π_* or π_t⁰ ≥ π*.
Let T denote the stopping time. If π_T⁰ ≤ π_* reject H₀; if π_T⁰ ≥ π* accept
H₀.
We illustrate this stopping rule in the following example.
EXAMPLE 9.8
As in Section 9.5, we put n units on test. Units are instantaneously
renewed after failures. We observe the process {Xn(t); 0 ≤ t < ∞}, with
Xn(0) ≡ 0. Suppose that {Xn(t); 0 ≤ t < ∞} is a Poisson process with
intensity nλ, and that the prior distribution of λ is the gamma distribution
G(ν, τ). The posterior probability of H₀ : λ ≤ λ₀, given Xn(t), is then

π_t⁰ = Pr{χ²[2ν + 2Xn(t)] ≤ 2λ₀(nt + 1/τ)}.

Accordingly, π_t⁰ ≥ π* if, and only if,

χ²_{π*}[2ν + 2Xn(t)] ≤ 2λ₀(nt + 1/τ),

and π_t⁰ ≤ π_* if, and only if,

χ²_{π_*}[2ν + 2Xn(t)] ≥ 2λ₀(nt + 1/τ).
From these two inequalities one can obtain lower and upper boundaries
for the continuation region of the test. As soon as the process Xn(t) hits
either the lower or the upper boundary the test terminates. Ho is accepted
if Xn(T) is equal to the lower boundary, otherwise Ho is rejected.
If ν ≥ 15 we can approximate the p-th fractile of χ²[ν] by the Wilson–Hilferty
approximation, χ²_p[ν] ≈ ν(1 − 2/(9ν) + z_p (2/(9ν))^{1/2})³.
In addition, one can censor the test at a certain failure frequency, x*. In
Figure 9.1 we illustrate these boundaries for a censored test, with param-
eters n = 20, λ₀ = .0021072, ν = 15, τ = .00027, π_* = .1, π* = .9 and
x* = 25.                                                                •
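A sketch of this sequential scheme: for integer ν the posterior probability π_t⁰ can be evaluated through the Poisson CDF (using P{G(k, 1) ≤ s} = 1 − Pos(k − 1 | s)), so the stopping rule is easy to simulate. The parameter values below are illustrative, not those of Figure 9.1; the prior here is centered at λ₀ so that the test starts inside the continuation region.

```python
import math, random

def pois_cdf(k, mu):
    """P(Poisson(mu) <= k)."""
    term, total = math.exp(-mu), math.exp(-mu)
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def post_prob_h0(x, t, n, lam0, nu, tau):
    """P(lambda <= lam0 | X_n(t) = x) under a G(nu, tau) prior:
    the posterior of lambda is gamma with shape nu + x, rate n*t + 1/tau."""
    s = lam0 * (n * t + 1.0 / tau)
    return 1.0 - pois_cdf(nu + x - 1, s)

def sequential_brdt(n, lam0, nu, tau, pi_low, pi_high, x_star,
                    true_lam, dt=1.0, t_max=50_000.0, seed=7):
    rng = random.Random(seed)
    t, x = 0.0, 0
    next_fail = rng.expovariate(n * true_lam)
    while t < t_max:
        t += dt
        while next_fail <= t:          # renewals keep all n units running
            x += 1
            next_fail += rng.expovariate(n * true_lam)
        if x >= x_star:
            return t, x, "reject H0 (frequency censored)"
        p = post_prob_h0(x, t, n, lam0, nu, tau)
        if p >= pi_high:
            return t, x, "accept H0"
        if p <= pi_low:
            return t, x, "reject H0"
    return t, x, "no decision"

# Illustrative run: prior mean nu*tau equals lam0; true lambda is half of lam0.
T, X, decision = sequential_brdt(n=20, lam0=0.002, nu=15, tau=0.002 / 15,
                                 pi_low=0.1, pi_high=0.9, x_star=25,
                                 true_lam=0.001)
print(decision, round(T, 1), X)
```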
9.7 Accelerated Life Testing 195
FIGURE 9.1. Lower and upper boundaries of the censored test; the acceptance
region lies below the lower boundary.
The Power Rule Model was applied in accelerated life testing of dielectric
capacitors. Here V is the applied voltage. The parameters C and p are
unknown and should be estimated from the results of the tests. In the
Arrhenius Model, the MTTF is expressed in terms of the operating
temperature V. The parameters A and B should be estimated from the data.
This model has been applied in accelerated testing of semiconductors.
The methodology is to perform K independent life tests at K values of
V. After observing the failure times at each stress level, the likelihood of
the model parameters is formulated in terms of the data from all K
trials. The MLEs of the model parameters are determined, together with
their asymptotic covariance (AC) matrix. Finally, the MTTF is estimated
at the normal stress level V₀, and confidence or credibility intervals are
determined for this parameter. For details and formulae the reader is
referred to Mann, Schafer and Singpurwalla (1974, Ch. 9) and Nelson (1990).
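A sketch of this estimation scheme for the Power Rule Model, MTTF(V) = C·V^(−p), assuming exponential lifetimes at each stress level; under that assumption the MLE of C has a closed form for fixed p, so only p needs a numerical search. The data are simulated and all names are illustrative.

```python
import math, random

def fit_power_rule(samples):
    """samples: dict {voltage V_k: list of observed lifetimes}.
    Model: lifetimes at V are exponential with mean C * V**(-p)."""
    N = sum(len(ts) for ts in samples.values())

    def c_hat(p):
        # For fixed p the likelihood is maximized at
        # C(p) = sum_k T_k * V_k**p / N, where T_k is the total life at V_k.
        return sum(sum(ts) * V ** p for V, ts in samples.items()) / N

    def neg_loglik(p):
        C = c_hat(p)
        nll = 0.0
        for V, ts in samples.items():
            theta = C * V ** (-p)
            nll += len(ts) * math.log(theta) + sum(ts) / theta
        return nll

    lo, hi = 0.0, 5.0                  # coarse-to-fine grid search over p
    for _ in range(4):
        step = (hi - lo) / 100
        p = min((lo + i * step for i in range(101)), key=neg_loglik)
        lo, hi = p - step, p + step
    return c_hat(p), p

# Simulated accelerated tests at K = 3 voltages (true C = 1e6, p = 1.5).
rng = random.Random(42)
data = {V: [rng.expovariate(V ** 1.5 / 1e6) for _ in range(300)]
        for V in (100.0, 200.0, 300.0)}
C_hat, p_hat = fit_power_rule(data)
print(round(p_hat, 2), round(C_hat))
```

The AC matrix and interval estimates for MTTF(V₀) would be obtained from the observed information at the optimum, which this sketch omits.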
9.8 Exercises
[9.2.1] A vendor claims that his resistors have a mean useful lifetime of at
least 5 × 10⁵ [hr]. It is believed that the lifetimes have an exponential
distribution. You plan to test a random sample of n resistors
until one fails.
(i) How large should your sample be so that the expected duration
of the test is no greater than 1000 [hr], assuming the mean lifetime
of the resistors is actually 5 × 10⁵ [hr]?
(ii) Using the sample size obtained in (i), design a test of the ven-
dor's claim with significance level .05.
(iii) Assuming the true mean lifetime is only 10⁵ [hr],
(a) What is the probability of rejecting the vendor's claim?
(b) What is the probability that the test will last over 1000 [hr]?
[9.2.2] Redo Exercise [9.2.1], but here you are going to continue testing
until 10 resistors have failed. You may now find it convenient to
use some approximations in parts (i) and (iii b).
[9.3.1] A system has exponential TTF with MTTF of β [hr]. n = 10
independent systems are put on test simultaneously. The test
terminates at the r = 5th failure. Failed units are not replaced
(renewed). The null hypothesis is H₀ : β ≥ 1000 [hr]; the alternative
hypothesis is H₁ : β ≤ 750 [hr].
(i) What is the expected duration of the test?
(ii) The test statistic is the total life T_{10,5}. What is the critical
value C_α(T_{10,5}) for α = .05?
(iii) What is the probability of a Type II error, γ, if β = 600 [hr]?
[9.3.2] In the exponential case, if β₀ = 1200 and β₁ = 900 [units of time],
and α = γ = .05, what should be the sample size n?
[9.4.1] (i) Obtain the equation of the boundary lines for the SPRT of
H₀ : β ≥ 100 vs. H₁ : β < 100, where β is the parameter of an
exponential life distribution. Design the test with a significance
level of α = .05 and a Type II error probability of γ = .05 when
β = 50.
(ii) Compute the OC(β) function of this test.
(iii) What is the value of ASN at β = 75?
[9.4.2] An insertion machine is tested for the reliability of component
placement. It is desired that the expected number of errors not
exceed 50 PPM. The machine is considered unproductive if the
expected number of errors is greater than 300 PPM. Construct an
SPRT with risk levels α = .10 and γ = .10. If the machine inserts
on the average 4000 parts per hour, what is the expected duration
of the test (excluding repair time) if λ = 100 PPM?
[9.5.1] Let {X(t); 0 ≤ t < ∞} be a Poisson process with intensity
parameter λ = 25.
(i) What is the standard deviation of X(5) - X(3)?
(ii) What is the correlation between X(3) and X(5)?
[9.5.2] Compute the boundary lines for an SPRT based on the Poisson
process {X(t); 0 ≤ t} for testing H₀ : λ ≤ 10 against H₁ : λ ≥ 12,
with risk levels α = γ = .10. What is the value of OC(9)? What is
the value of E_λ{T} at λ = 13?
[9.6.1] The TTF values [hr] of 5 radar systems are 1505, 975, 1237, 1313
and 1498. Assuming that TTF ~ E(β), and that the prior distribution
of λ = 1/β is G(10, 10⁻⁴):
(i) Determine a credibility interval for β, at credibility level γ = .95.
(ii) What is the credibility interval for R(250) at level γ = .95?
[9.6.2] In relation to problem [9.6.1], the null hypothesis is H₀ : β ≥ 1000
against H₁ : β < 1000.
(i) Compute the prior probabilities of H₀ and H₁.
(ii) What are the posterior probabilities of H₀ and of H₁?
(iii) Let l₀ = 1000 [$] be the loss due to accepting H₀ when H₁ is
true, and l₁ = 500 [$] the loss due to accepting H₁ when H₀ is
true. Which hypothesis would you accept?
[9.6.3] Consider a binomial RDT in which n = 200 units are put on test
for 120 [hr]. The test rejects the unit's reliability if the number of
survivals is less than 105. Assuming that R(120) has a prior beta
(30, 5) distribution, determine:
(i) the predictive acceptance and rejection probabilities π_H and ψ_H;
(ii) the conditional risk probabilities α_B and γ_B.
[9.6.4] In the sequential BRDT described in Example 9.7, the test
terminated at time 250 [hr] and the decision was to accept H₀.
Determine a credibility interval for β = 1/λ at level γ = .95.
Annotated Bibliography
Derives the CDF of the MLE, θ̂, in the case of Type I censoring from an
exponential distribution.
6. Beyer, W.H., Standard Mathematical Tables, CRC Press, West
Palm Beach, FL, 1978.
One can find in this collection tables of Laplace transforms and their
inverses.
7. Box, G.E.P. and Tiao, G.C., Bayesian Inference in Statistical Anal-
ysis, Addison-Wesley, Reading, MA, 1973.
A comprehensive textbook on the Bayesian approach to classical statistical
analysis. Comparison of means, variances, linear models, block designs,
components of variance and regression analysis are redone in the Bayesian
framework.
8. Chow, Y.S., Robbins, H. and Siegmund, D., Great Expectations: The
Theory of Optimal Stopping, Houghton Mifflin, Boston, 1971.
Very advanced mathematical presentation of the theory of optimal stop-
ping times.
9. Cohen, A.C., Jr., Tables for maximum likelihood estimates: singly trun-
cated and singly censored samples, Technometrics, 3: 535-541 (1961).
Such tables were required when computers were not readily available.
10. DeGroot, M.H., Optimal Statistical Decisions, McGraw-Hill, New
York, 1970.
An excellent introduction to the theory of optimal Bayesian decision
making.
11. Gertsbakh, I.B., Statistical Reliability Theory, Marcel Dekker, New
York, 1989.
Advanced mathematical treatment of systems with renewable compo-
nents and optimal preventive maintenance. Discusses also statistical
aspects of lifetime data analysis.
12. Gnedenko, B.V., Belyayev, Yu. K. and Solovyev, A.D., Mathematical
Methods of Reliability Theory, Academic Press, New York, 1969.
An advanced monograph on reliability of renewable systems. Contains
development of sequential tests.
13. Good, I.J., The Estimation of Probability: An Essay on Modern
Bayesian Methods, MIT Press, Cambridge, MA, 1965.
Skillfully written introduction to Bayesian estimation of probabilities
and distributions.
14. Hald, A., Maximum likelihood estimation of the parameters of a normal
distribution which is truncated at a known point, Skandinavisk
Aktuarietidskrift, 32: 119-134 (1949).
This paper develops the theory and methodology of estimating the pa-
rameters of a truncated normal distribution.
15. Henley, E.J. and Kumamoto, H., Reliability Engineering and Risk
Assessment, Prentice-Hall, Englewood Cliffs, NJ, 1981.
Chapter 2 provides an excellent introduction to fault tree construction.
Chapter 3 discusses path and cut sets and decision tables.
Φ(z)
z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.50 0.50 0.51 0.51 0.52 0.52 0.52 0.53 0.53 0.54
0.1 0.54 0.54 0.55 0.55 0.56 0.56 0.56 0.57 0.57 0.58
0.2 0.58 0.58 0.59 0.59 0.59 0.60 0.60 0.61 0.61 0.61
0.3 0.62 0.62 0.63 0.63 0.63 0.64 0.64 0.64 0.65 0.65
0.4 0.66 0.66 0.66 0.67 0.67 0.67 0.68 0.68 0.68 0.69
0.5 0.69 0.70 0.70 0.70 0.71 0.71 0.71 0.72 0.72 0.72
0.6 0.73 0.73 0.73 0.74 0.74 0.74 0.75 0.75 0.75 0.75
0.7 0.76 0.76 0.76 0.77 0.77 0.77 0.78 0.78 0.78 0.79
0.8 0.79 0.79 0.79 0.80 0.80 0.80 0.81 0.81 0.81 0.81
0.9 0.82 0.82 0.82 0.82 0.83 0.83 0.83 0.83 0.84 0.84
1.0 0.84 0.84 0.85 0.85 0.85 0.85 0.86 0.86 0.86 0.86
1.1 0.86 0.87 0.87 0.87 0.87 0.87 0.88 0.88 0.88 0.88
1.2 0.88 0.89 0.89 0.89 0.89 0.89 0.90 0.90 0.90 0.90
1.3 0.90 0.90 0.91 0.91 0.91 0.91 0.91 0.91 0.92 0.92
1.4 0.92 0.92 0.92 0.92 0.93 0.93 0.93 0.93 0.93 0.93
1.5 0.93 0.93 0.94 0.94 0.94 0.94 0.94 0.94 0.94 0.94
204 Appendix of Statistical Tables
Z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
1.6 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95 0.95
1.7 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96 0.96
1.8 0.96 0.96 0.97 0.97 0.97 0.97 0.97 0.97 0.97 0.97
1.9 0.97 0.97 0.97 0.97 0.97 0.97 0.98 0.98 0.98 0.98
2.0 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.98
2.1 0.98 0.98 0.98 0.98 0.98 0.98 0.98 0.99 0.99 0.99
2.2 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
2.3 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
2.4 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99
2.5 0.99 0.99 0.99 0.99 0.99 0.99 0.99 0.99 1.00 1.00
2.6 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2.7 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2.8 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
2.9 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
3.0 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.00
For values of z < 0 use the relationship Φ(−z) = 1 − Φ(z).
Computed by use of GAUSS© software.
α       z_α
0.50 0.000
0.55 0.125
0.60 0.253
0.65 0.385
0.70 0.524
0.75 0.674
0.80 0.841
0.85 1.036
0.90 1.282
0.95 1.645
0.975 1.960
0.99 2.326
0.995 2.576
For α < .5 use z_α = −z_{1−α}. For example, z_.05 = −z_.95 = −1.645.
Computed according to formula 26.2.23 of M. Abramowitz and I.A. Stegun,
Handbook of Mathematical Functions, Dover Publications, New York,
1968.
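Formula 26.2.23 of Abramowitz and Stegun is a rational approximation with absolute error below 4.5 × 10⁻⁴; a direct transcription reproduces the fractiles tabulated above:

```python
import math

def z_quantile(p):
    """Standard normal fractile z_p for 0.5 <= p < 1, via formula 26.2.23
    of Abramowitz and Stegun (absolute error below 4.5e-4)."""
    t = math.sqrt(-2.0 * math.log(1.0 - p))
    num = 2.515517 + 0.802853 * t + 0.010328 * t * t
    den = 1.0 + 1.432788 * t + 0.189269 * t * t + 0.001308 * t ** 3
    return t - num / den

for p in (0.90, 0.95, 0.975, 0.99, 0.995):
    print(p, round(z_quantile(p), 3))
```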
χ²_α[ν]
ν\α    0.005      0.01       0.025      0.05       0.10       0.25
1      0.0⁴393    0.0³157    0.0³982    0.0²393    0.0158     0.102
2 0.0100 0.0201 0.0506 0.103 0.211 0.575
3 0.0717 0.115 0.216 0.352 0.584 1.21
4 0.207 0.297 0.484 0.711 1.06 1.92
5 0.412 0.554 0.831 1.15 1.61 2.67
χ²_α[ν]
ν\α    0.50    0.75    0.90    0.95    0.975   0.99    0.995
1 0.455 1.32 2.71 3.84 5.02 6.63 7.88
2 1.39 2.77 4.61 5.99 7.38 9.21 10.6
3 2.37 4.11 6.25 7.81 9.35 11.3 12.8
4 3.36 5.39 7.78 9.49 11.1 13.3 14.9
5 4.35 6.63 9.24 11.1 12.8 15.1 16.7
F_p[ν₁, ν₂]
p      ν₁\ν₂   5       10      20      30      40      50      120
0.5 5 1.000 1.073 1.111 1.123 1.130 1.134 1.142
0.75 1.889 1.890 1.882 1.878 1.876 1.875 1.872
0.90 3.491 3.298 3.207 3.175 3.158 3.148 3.139
0.95 5.050 4.736 4.560 4.498 4.466 4.448 4.435
0.975 7.146 6.620 6.332 6.232 6.181 6.150 6.135
0.5 10 0.932 1.000 1.035 1.047 1.052 1.056 1.064
0.75 1.585 1.551 1.523 1.512 1.506 1.502 1.492
0.90 2.522 2.327 2.201 2.156 2.132 2.120 2.085
0.95 3.326 2.985 2.774 2.699 2.665 2.642 2.586
0.975 4.237 3.724 3.419 3.312 3.262 3.229 3.149
0.5 20 0.900 0.966 1.000 1.011 1.017 1.020 1.028
0.75 1.450 1.399 1.358 1.340 1.330 1.324 1.308
0.90 2.158 1.937 1.795 1.738 1.709 1.690 1.644
0.95 2.711 2.348 2.125 2.039 1.994 1.966 1.897
0.975 3.289 2.774 2.467 2.349 2.287 2.250 2.157
0.5 30 0.890 0.955 0.989 1.000 1.006 1.009 1.017
0.75 1.407 1.351 1.303 1.282 1.270 1.263 1.242
0.90 2.049 1.820 1.667 1.607 1.573 1.552 1.499
0.95 2.534 2.165 1.932 1.841 1.792 1.761 1.684
0.975 3.027 2.511 2.195 2.075 2.009 1.968 1.867
0.5 40 0.885 0.950 0.983 0.994 1.000 1.003 1.011
0.75 1.386 1.327 1.276 1.253 1.240 1.231 1.208
0.90 1.997 1.763 1.605 1.541 1.506 1.483 1.425
0.95 2.450 2.077 1.839 1.744 1.693 1.660 1.576
0.975 2.904 2.388 2.068 1.943 1.876 1.832 1.724
For p < .5 apply the relationship F_p[ν₁, ν₂] = 1/F_{1−p}[ν₂, ν₁].
Computed with the aid of STATGRAPHICS© software.