To Accompany
Statistical Quality Assurance Methods for Engineers
by
Vardeman and Jobe
Stephen B. Vardeman
V2.0: January 2001
Contents

1 Measurement and Statistics
  1.1 Theory for Range-Based Estimation of Variances
  1.2 Theory for Sample-Variance-Based Estimation of Variances
  1.3 Sample Variances and Gage R&R
  1.4 ANOVA and Gage R&R
  1.5 Confidence Intervals for Gage R&R Studies
  1.6 Calibration and Regression Analysis
  1.7 Crude Gaging and Statistics
    1.7.1 Distributions of Sample Means and Ranges from Integer Observations
    1.7.2 Estimation Based on Integer-Rounded Normal Data

2 Process Monitoring
  2.1 Some Theory for Stationary Discrete Time Finite State Markov Chains With a Single Absorbing State
  2.2 Some Applications of Markov Chains to the Analysis of Process Monitoring Schemes
  2.3 Integral Equations and Run Length Properties of Process Monitoring Schemes

3 An Introduction to Discrete Stochastic Control Theory/Minimum Variance Control
  3.1 General Exposition
  3.2 An Example

4 Process Characterization and Capability Analysis
  4.1 General Comments on Assessing and Dissecting Overall Variation
  4.2 More on Analysis Under the Hierarchical Random Effects Model
  4.3 Finite Population Sampling and Balanced Hierarchical Structures

5 Sampling Inspection
  5.1 More on Fraction Nonconforming Acceptance Sampling
  5.2 Imperfect Inspection and Acceptance Sampling
  5.3

6 Problems
  1 Measurement and Statistics
  2 Process Monitoring
  3 Engineering Control and Stochastic Control Theory
  4 Process Characterization
  5 Sampling Inspection

A Useful Probabilistic Approximation
Chapter 1

Measurement and Statistics

1.1 Theory for Range-Based Estimation of Variances
Suppose that $X_1, X_2, \ldots, X_n$ are iid Normal$(\mu, \sigma^2)$ random variables and let
$$R = \max X_i - \min X_i = \max(X_i - \mu) - \min(X_i - \mu) = \sigma\left(\max\frac{X_i - \mu}{\sigma} - \min\frac{X_i - \mu}{\sigma}\right) = \sigma\left(\max Z_i - \min Z_i\right)$$
where $Z_i = (X_i - \mu)/\sigma$. Then $Z_1, Z_2, \ldots, Z_n$ are iid standard normal random variables. So for purposes of studying the distribution of the range of iid normal variables, it suffices to study the standard normal case. (One can derive general facts from the $\sigma = 1$ facts by multiplying by $\sigma$.)
Consider first the matter of finding the mean of the range of $n$ iid standard normal variables, $Z_1, \ldots, Z_n$. Let
$$U = \min Z_i\,,\qquad V = \max Z_i \qquad\text{and}\qquad W = V - U\,.$$
Then
$$EW = EV - EU$$
and
$$EU = E\min Z_i = -E\left(-\min Z_i\right) = -E\max(-Z_i)\,,$$
where the $n$ variables $-Z_1, -Z_2, \ldots, -Z_n$ are iid standard normal. Thus
$$EW = EV - EU = 2EV\,.$$
Then (as is standard in the theory of order statistics) note that
$$V \le t \iff \text{all } n \text{ values } Z_i \le t\,.$$
So with $\Phi$ the standard normal cdf,
$$P[V \le t] = \Phi^n(t)$$
and thus a pdf for $V$ is
$$f(v) = n\,\Phi^{n-1}(v)\,\phi(v)\,.$$
So
$$EV = \int_{-\infty}^{\infty} v\, n\,\Phi^{n-1}(v)\,\phi(v)\,dv\,,$$
and the evaluation of this integral becomes a (very small) problem in numerical analysis. The value of this integral clearly depends upon $n$. It is standard to invent a constant (whose dependence upon $n$ we will display explicitly)
$$d_2(n) := EW = 2EV$$
that is tabled in Table A.1 of V&J. With this notation, clearly
$$ER = d_2(n)\,\sigma$$
(and the range-based formulas in Section 2.2 of V&J are based on this simple fact).
To find more properties of $W$ (and hence $R$) requires appeal to a well-known order statistics result giving the joint density of two order statistics. The joint density of $U$ and $V$ is
$$g(u, v) = n(n-1)\,\phi(u)\,\phi(v)\left(\Phi(v) - \Phi(u)\right)^{n-2} \quad\text{for } u < v$$
(and 0 otherwise), so that with the change of variable $w = v - u$,
$$EW^2 = \int_{-\infty}^{\infty}\int_{0}^{\infty} w^2\, g(u, u + w)\,dw\,du\,.$$
Note that upon computing $EW$ and $EW^2$, one can compute both the variance of $W$,
$$\mathrm{Var}\,W = EW^2 - (EW)^2\,,$$
and the standard deviation of $W$, $\sqrt{\mathrm{Var}\,W}$. It is common to give this standard deviation the name $d_3(n)$ (where we continue to make the dependence on $n$ explicit, and again this constant is tabled in Table A.1 of V&J). Clearly, having computed $d_3(n) := \sqrt{\mathrm{Var}\,W}$, one then has
$$\sqrt{\mathrm{Var}\,R} = d_3(n)\,\sigma\,.$$
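For what it is worth, the two integrals above are easily evaluated with standard numerical software. The following small Python sketch (our own illustration, not part of the notes' software) produces $d_2(n)$ and $d_3(n)$ directly from the formulas of this section; values can be checked against Table A.1 of V&J.

```python
# Sketch: numerical evaluation of d2(n) and d3(n) from the formulas above.
import numpy as np
from scipy import integrate
from scipy.stats import norm

def d2(n):
    # EV = integral of v * n * Phi(v)^(n-1) * phi(v); d2(n) = EW = 2 EV
    ev, _ = integrate.quad(lambda v: v * n * norm.cdf(v)**(n - 1) * norm.pdf(v),
                           -np.inf, np.inf)
    return 2 * ev

def d3(n):
    # EW^2 via the joint density g(u, v) of the min and max, with w = v - u
    g = lambda w, u: (w**2 * n * (n - 1) * norm.pdf(u) * norm.pdf(u + w)
                      * (norm.cdf(u + w) - norm.cdf(u))**(n - 2))
    ew2, _ = integrate.dblquad(g, -np.inf, np.inf, 0, np.inf)
    return np.sqrt(ew2 - d2(n)**2)

print(round(d2(5), 3), round(d3(5), 3))  # approximately 2.326 and 0.864
```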
1.2 Theory for Sample-Variance-Based Estimation of Variances

Continue to suppose that $X_1, X_2, \ldots, X_n$ are iid Normal$(\mu, \sigma^2)$ random variables and take
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2\,.$$
Standard probability theory says that
$$\frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}\,.$$
Now if $U \sim \chi^2_\nu$ it is the case that $EU = \nu$ and $\mathrm{Var}\,U = 2\nu$. It is thus immediate that
$$Es^2 = E\left[\frac{\sigma^2}{n-1}\cdot\frac{(n-1)s^2}{\sigma^2}\right] = \frac{\sigma^2}{n-1}\,E\left[\frac{(n-1)s^2}{\sigma^2}\right] = \sigma^2$$
and
$$\mathrm{Var}\,s^2 = \mathrm{Var}\left[\frac{\sigma^2}{n-1}\cdot\frac{(n-1)s^2}{\sigma^2}\right] = \left(\frac{\sigma^2}{n-1}\right)^2\mathrm{Var}\left[\frac{(n-1)s^2}{\sigma^2}\right] = \frac{2\sigma^4}{n-1}\,,$$
so that
$$\sqrt{\mathrm{Var}\,s^2} = \sigma^2\sqrt{\frac{2}{n-1}}\,.$$
Further, the $\chi^2_{n-1}$ distribution has probability density
$$f(x) = \begin{cases}\dfrac{1}{2^{(n-1)/2}\,\Gamma\left(\frac{n-1}{2}\right)}\,x^{\frac{n-1}{2}-1}e^{-x/2} & x > 0\\[4pt] 0 & \text{otherwise,}\end{cases}$$
so
$$Es = E\left[\frac{\sigma}{\sqrt{n-1}}\sqrt{\frac{(n-1)s^2}{\sigma^2}}\right] = \frac{\sigma}{\sqrt{n-1}}\int_0^\infty \sqrt{x}\,f(x)\,dx = c_4(n)\,\sigma$$
for
$$c_4(n) := \frac{\int_0^\infty \sqrt{x}\,f(x)\,dx}{\sqrt{n-1}}\,,$$
another constant (depending upon $n$) tabled in Table A.1 of V&J. Further, the standard deviation of $s$ is
$$\sqrt{\mathrm{Var}\,s} = \sqrt{Es^2 - (Es)^2} = \sqrt{\sigma^2 - c_4^2(n)\,\sigma^2} = \sigma\sqrt{1 - c_4^2(n)} = c_5(n)\,\sigma$$
for
$$c_5(n) := \sqrt{1 - c_4^2(n)}\,.$$
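(As an aside, the integral defining $c_4(n)$ has the well-known closed form $\sqrt{2/(n-1)}\,\Gamma(n/2)/\Gamma((n-1)/2)$, so the constants of this section are easy to reproduce. The short Python check below is our own illustration, not part of the notes.)

```python
# Sketch: the constants c4(n) and c5(n) of Section 1.2 (values as in Table A.1 of V&J).
from math import gamma, sqrt

def c4(n):
    # closed form of (integral of sqrt(x) f(x) dx) / sqrt(n-1), f the chi-square(n-1) density
    return sqrt(2.0 / (n - 1)) * gamma(n / 2.0) / gamma((n - 1) / 2.0)

def c5(n):
    return sqrt(1.0 - c4(n) ** 2)

print(round(c4(5), 4), round(c5(5), 4))  # approximately 0.9400 and 0.3412
```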
1.3
The methods of gage R&R analysis presented in V&J 2.2.2 are based on ranges
(and the facts in 1.1 above). They are presented in V&J not because of their
eciency, but because of their computational simplicity. Better (and analogous) methods can be based on the facts in 1.2 above. For example, under the
two-way random eects model (2.4) of V&J, if one pools I J cell sample
variances s2ij to get s2p ooled , all of the previous paragraph applies and gives methods of estimating the repeatability variance component 2 (or the repeatability
standard deviation ) and calculating means and variances of estimators based
on s2p ooled .
So
1X 2
s
I i i
2
is a plausible estimator of 2 +
+ 2 =m. Hence
1 X 2 s2p ooled
s
;
I i i
m
or better yet
1 X 2 s2pooled
max 0;
s
I i i
m
(1.1)
2
is a plausible estimator of reproducibility
.
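A minimal computational sketch of these estimators (our own illustration, applied to a hypothetical balanced data array) might look as follows.

```python
# Sketch: sample-variance-based gage R&R estimates of Section 1.3 for a
# balanced data array y with shape (I, J, m) (parts x operators x repeats).
import numpy as np

def rr_estimates(y):
    I, J, m = y.shape
    s2_pooled = y.var(axis=2, ddof=1).mean()      # pools the I*J cell variances; estimates sigma^2
    cell_means = y.mean(axis=2)                   # the I*J values ybar_ij
    s2_i = cell_means.var(axis=1, ddof=1)         # for each part, variance of the J cell means
    repro = max(0.0, s2_i.mean() - s2_pooled / m) # display (1.1)
    return s2_pooled, repro

rng = np.random.default_rng(0)
print(rr_estimates(rng.normal(size=(10, 3, 2))))
```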
1.4 ANOVA and Gage R&R

Under the two-way random effects model (2.4) of V&J, with balanced data, it is well-known that the ANOVA mean squares are
$$MSE = \frac{1}{IJ(m-1)}\sum_{i,j,k}(y_{ijk} - \bar{y}_{ij})^2\,,$$
$$MSAB = \frac{m}{(I-1)(J-1)}\sum_{i,j}(\bar{y}_{ij} - \bar{y}_{i\cdot} - \bar{y}_{\cdot j} + \bar{y}_{\cdot\cdot})^2\,,$$
$$MSA = \frac{mJ}{I-1}\sum_i(\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot})^2 \qquad\text{and}\qquad MSB = \frac{mI}{J-1}\sum_j(\bar{y}_{\cdot j} - \bar{y}_{\cdot\cdot})^2\,,$$
with expected values
$$EMSE = \sigma^2\,,\quad EMSAB = \sigma^2 + m\sigma_{\alpha\beta}^2\,,\quad EMSA = \sigma^2 + m\sigma_{\alpha\beta}^2 + mJ\sigma_\alpha^2 \quad\text{and}\quad EMSB = \sigma^2 + m\sigma_{\alpha\beta}^2 + mI\sigma_\beta^2\,,$$
Table 1.1: Two-Way Balanced Data Random Effects Analysis ANOVA Table

Source            SS      df            MS     EMS
Parts             SSA     I - 1         MSA    $\sigma^2 + m\sigma^2_{\alpha\beta} + mJ\sigma^2_\alpha$
Operators         SSB     J - 1         MSB    $\sigma^2 + m\sigma^2_{\alpha\beta} + mI\sigma^2_\beta$
Parts x Operators SSAB    (I-1)(J-1)    MSAB   $\sigma^2 + m\sigma^2_{\alpha\beta}$
Error             SSE     (m-1)IJ       MSE    $\sigma^2$
Total             SSTot   mIJ - 1
and that the quantities
$$\frac{(m-1)IJ\,MSE}{EMSE}\,,\quad \frac{(I-1)(J-1)\,MSAB}{EMSAB}\,,\quad \frac{(I-1)\,MSA}{EMSA} \quad\text{and}\quad \frac{(J-1)\,MSB}{EMSB}$$
are independent $\chi^2$ random variables with respective degrees of freedom
$$(m-1)IJ\,,\quad (I-1)(J-1)\,,\quad (I-1) \quad\text{and}\quad (J-1)\,.$$
These facts about sums of squares and mean squares for the two-way random effects model are often summarized in the usual (two-way random effects model) ANOVA table, Table 1.1. (The sums of squares are simply the mean squares multiplied by the degrees of freedom. More on the interpretation of such tables can be found in places like Section 8-4 of V.)
As a matter of fact, the ANOVA error mean square is exactly $s^2_{\text{pooled}}$ from Section 1.3 above. Further, the expected mean squares suggest ways of producing sensible estimators of other parametric functions of interest in gage R&R contexts (see V&J page 27 in this regard). For example, note that
$$\sigma^2_{\text{reproducibility}} = \frac{1}{mI}EMSB + \frac{1}{m}\left(1 - \frac{1}{I}\right)EMSAB - \frac{1}{m}EMSE\,,$$
so that a plausible estimator of $\sigma^2_{\text{reproducibility}}$ is
$$\hat{\sigma}^2_{\text{reproducibility}} = \max\left(0,\ \frac{1}{mI}MSB + \frac{1}{m}\left(1 - \frac{1}{I}\right)MSAB - \frac{1}{m}MSE\right)\,. \qquad (1.2)$$
What may or may not be well known is that this estimator (1.2) is exactly the estimator of $\sigma^2_{\text{reproducibility}}$ in display (1.1).
Since many common estimators of quantities of interest in gage R&R studies are functions of mean squares, it is useful to have at least some crude standard errors for them. These can be derived from a "delta method"/"propagation of error"/Taylor series argument provided in the appendix to these notes. For example, if $MS_i$, $i = 1, \ldots, k$, are independent random variables with $\nu_i MS_i/EMS_i \sim \chi^2_{\nu_i}$, consider a function of $k$ real variables $f(x_1, \ldots, x_k)$ and the random variable
$$U = f(MS_1, MS_2, \ldots, MS_k)\,.$$
Then
$$\mathrm{Var}\,U \approx \sum_{i=1}^{k}\left(\frac{\partial f}{\partial x_i}\bigg|_{EMS_1,\ldots,EMS_k}\right)^2 \mathrm{Var}\,MS_i = \sum_{i=1}^{k}\left(\frac{\partial f}{\partial x_i}\bigg|_{EMS_1,\ldots,EMS_k}\right)^2 \frac{2(EMS_i)^2}{\nu_i}\,,$$
and upon substituting mean squares for their expected values, one has a standard error for $U$, namely
$$\widehat{\mathrm{Var}\,U}^{1/2} = \sqrt{2\sum_{i=1}^{k}\left(\frac{\partial f}{\partial x_i}\bigg|_{MS_1,\ldots,MS_k}\right)^2 \frac{(MS_i)^2}{\nu_i}}\,. \qquad (1.3)$$
In particular, when $f$ is linear in the mean squares, say $f(x_1, \ldots, x_k) = \sum_i c_i x_i$, this reduces to
$$\widehat{\mathrm{Var}\,U}^{1/2} = \sqrt{2\sum_{i=1}^{k}\frac{c_i^2\,(MS_i)^2}{\nu_i}}\,,$$
which provides at least a crude method of producing standard errors for $\hat{\sigma}^2_{\text{reproducibility}}$ and $\hat{\sigma}^2_{\text{overall}}$. Such standard errors are useful in giving some indication of the precision with which the quantities of interest in a gage R&R study have been estimated.
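For what it is worth, the linear-case standard error is trivial to compute. The sketch below (our own code, with made-up mean squares for a hypothetical study with $I = 10$, $J = 3$, $m = 2$) applies it to the coefficients appearing in display (1.2).

```python
# Sketch: the linear-in-mean-squares standard error above, applied to the
# reproducibility estimate (1.2).
import numpy as np

def se_linear_ms(c, ms, df):
    """Standard error of sum_i c_i * MS_i for independent mean squares
    with nu_i * MS_i / EMS_i ~ chi-square(nu_i)."""
    c, ms, df = map(np.asarray, (c, ms, df))
    return np.sqrt(2.0 * np.sum(c**2 * ms**2 / df))

# hypothetical balanced gage study: I = 10 parts, J = 3 operators, m = 2 repeats
I, J, m = 10, 3, 2
msb, msab, mse = 1.60, 0.45, 0.20                 # made-up mean squares
c = [1/(m*I), (1 - 1/I)/m, -1/m]                  # coefficients in display (1.2)
print(se_linear_ms(c, [msb, msab, mse], [J - 1, (I - 1)*(J - 1), (m - 1)*I*J]))
```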
1.5 Confidence Intervals for Gage R&R Studies

The parametric functions of interest in gage R&R studies (indeed in all random effects analyses) are functions of variance components, or equivalently, functions of expected mean squares. It is thus possible to apply theory for estimating such quantities to the problem of assessing precision of estimation in a gage study. As a first (and very crude) example of this, note that taking the point of view of Section 1.4 above, where $U = f(MS_1, MS_2, \ldots, MS_k)$ is a sensible point estimator of an interesting function of the variance components and $\widehat{\mathrm{Var}\,U}^{1/2}$ is the standard error (1.3), simple approximate two-sided 95% confidence limits can be made as
$$U \pm 1.96\,\widehat{\mathrm{Var}\,U}^{1/2}\,.$$
These limits have the virtue of being amenable to "hand" calculation from the ANOVA sums of squares, but they are not likely to be reliable (in terms of holding their nominal/asymptotic coverage probability) for $I$, $J$ or $m$ small.
Linear models experts have done substantial research aimed at finding reliable confidence interval formulas for important functions of expected mean squares. Summaries of this work (like that of Burdick and Graybill) provide "modified large sample" methods for problems where a quantity
$$\theta = c_1 EMS_1 + \cdots + c_p EMS_p - c_{p+1}EMS_{p+1} - \cdots - c_k EMS_k \qquad (1.4)$$
(for nonnegative constants $c_i$) is of interest. Let
$$\hat{\theta} = c_1 MS_1 + \cdots + c_p MS_p - c_{p+1}MS_{p+1} - \cdots - c_k MS_k$$
and (with $\nu_i$ the degrees of freedom of $MS_i$, $\chi^2_{\alpha:\nu}$ the upper $\alpha$ point of the $\chi^2_\nu$ distribution, and $F_{\alpha:\nu_1,\nu_2}$ the upper $\alpha$ point of the $F_{\nu_1,\nu_2}$ distribution) define
$$V_L = \sum_{i=1}^{p}c_i^2 MS_i^2 G_i^2 + \sum_{i=p+1}^{k}c_i^2 MS_i^2 H_i^2 + \sum_{i=1}^{p}\sum_{j=p+1}^{k}c_i c_j MS_i MS_j G_{ij} + \sum_{i=1}^{p-1}\sum_{j>i}^{p}c_i c_j MS_i MS_j G_{ij}^*$$
for
$$G_i = 1 - \frac{\nu_i}{\chi^2_{\alpha:\nu_i}}\,,\qquad H_i = \frac{\nu_i}{\chi^2_{1-\alpha:\nu_i}} - 1\,,$$
$$G_{ij} = \frac{\left(F_{\alpha:\nu_i,\nu_j} - 1\right)^2 - G_i^2 F_{\alpha:\nu_i,\nu_j}^2 - H_j^2}{F_{\alpha:\nu_i,\nu_j}}$$
and
$$G_{ij}^* = \begin{cases} 0 & \text{if } p = 1\\[4pt] \dfrac{1}{p-1}\left[\left(1 - \dfrac{\nu_i + \nu_j}{\chi^2_{\alpha:\nu_i+\nu_j}}\right)^2\dfrac{(\nu_i+\nu_j)^2}{\nu_i\nu_j} - \dfrac{G_i^2\nu_i}{\nu_j} - \dfrac{G_j^2\nu_j}{\nu_i}\right] & \text{otherwise}\,. \end{cases}$$
Similarly, define
$$V_U = \sum_{i=1}^{p}c_i^2 MS_i^2 H_i^2 + \sum_{i=p+1}^{k}c_i^2 MS_i^2 G_i^2 + \sum_{i=1}^{p}\sum_{j=p+1}^{k}c_i c_j MS_i MS_j H_{ij} + \sum_{i=p+1}^{k-1}\sum_{j>i}^{k}c_i c_j MS_i MS_j H_{ij}^*$$
for
$$H_{ij} = \frac{\left(1 - F_{1-\alpha:\nu_i,\nu_j}\right)^2 - H_i^2 F_{1-\alpha:\nu_i,\nu_j}^2 - G_j^2}{F_{1-\alpha:\nu_i,\nu_j}}$$
and
$$H_{ij}^* = \begin{cases} 0 & \text{if } k = p+1\\[4pt] \dfrac{1}{k-p-1}\left[\left(1 - \dfrac{\nu_i + \nu_j}{\chi^2_{1-\alpha:\nu_i+\nu_j}}\right)^2\dfrac{(\nu_i+\nu_j)^2}{\nu_i\nu_j} - \dfrac{G_i^2\nu_i}{\nu_j} - \dfrac{G_j^2\nu_j}{\nu_i}\right] & \text{otherwise}\,. \end{cases}$$
Then approximate confidence limits for $\theta$ are
$$L = \hat{\theta} - \sqrt{V_L} \qquad\text{and/or}\qquad U = \hat{\theta} + \sqrt{V_U}\,.$$
One uses $(L, \infty)$ or $(-\infty, U)$ for confidence level $(1-\alpha)$ and the interval $(L, U)$ for confidence level $(1-2\alpha)$. (Using these formulas for "hand" calculation is (obviously) no picnic. The C program written by Brandon Paris (available off the Stat 531 Web page) makes these calculations painless.)
A problem similar to the estimation of quantity (1.4) is that of estimating
$$\theta = c_1 EMS_1 + \cdots + c_p EMS_p \qquad (1.5)$$
(for nonnegative constants $c_i$). Let $\hat{\theta} = c_1 MS_1 + \cdots + c_p MS_p$ and continue the $G_i$ and $H_i$ notation from above. Then approximate confidence limits on $\theta$ given in display (1.5) are of the form
$$L = \hat{\theta} - \sqrt{\sum_{i=1}^{p}c_i^2 MS_i^2 G_i^2} \qquad\text{and/or}\qquad U = \hat{\theta} + \sqrt{\sum_{i=1}^{p}c_i^2 MS_i^2 H_i^2}\,.$$
One uses $(L, \infty)$ or $(-\infty, U)$ for confidence level $(1-\alpha)$ and the interval $(L, U)$ for confidence level $(1-2\alpha)$.
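The simpler all-positive-coefficients limits just displayed are easy to program. The following Python sketch (our own, and no substitute for the Paris and Chiang programs mentioned in these notes) implements them under the convention that $\chi^2_{\alpha:\nu}$ is the upper $\alpha$ point of the $\chi^2_\nu$ distribution.

```python
# Sketch: approximate limits for theta = c1*EMS1 + ... + cp*EMSp in display (1.5).
from scipy.stats import chi2

def mls_limits(ms, df, c, alpha=0.05):
    """One-sided (1 - alpha) lower/upper limits for sum_i c_i * EMS_i,
    given mean squares `ms` with degrees of freedom `df` and nonnegative `c`."""
    theta_hat = sum(ci * mi for ci, mi in zip(c, ms))
    G = [1 - nu / chi2.ppf(1 - alpha, nu) for nu in df]   # chi^2_{alpha:nu} = upper alpha point
    H = [nu / chi2.ppf(alpha, nu) - 1 for nu in df]
    VL = sum((ci * mi * gi) ** 2 for ci, mi, gi in zip(c, ms, G))
    VU = sum((ci * mi * hi) ** 2 for ci, mi, hi in zip(c, ms, H))
    return theta_hat - VL ** 0.5, theta_hat + VU ** 0.5

print(mls_limits(ms=[1.60, 0.45], df=[2, 18], c=[0.05, 0.45]))  # made-up inputs
```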
The Fortran program written by Andy Chiang (available off the Stat 531 Web page) applies Burdick and Graybill-like material and the standard errors (1.3) to the estimation of many parametric functions of relevance in gage R&R studies.

Chiang's 2000 Ph.D. dissertation work (to appear in Technometrics in August 2001) has provided an entirely different method of interval estimation of functions of variance components that is a uniform improvement over the modified large sample methods presented by Burdick and Graybill. His approach is related to improper Bayes methods with so-called "Jeffreys priors". Andy has provided software for implementing his methods that, as time permits, will be posted on the Stat 531 Web page. He can be contacted (for preprints of his work) at stackl@nus.edu.sg at the National University of Singapore.
1.6 Calibration and Regression Analysis

The estimation of standard deviations and variance components is a contribution of the subject of statistics to the quantification of measurement system precision. The subject also has contributions to make in the matter of improving measurement accuracy. Calibration is the business of bringing a local measurement system in line with a standard measurement system. One takes measurements $y$ with a gage or system of interest on test items with known values $x$ (available because they were previously measured using a "gold standard" measurement device). The data collected are then used to create a conversion scheme for translating local measurements to approximate gold standard measurements, thereby hopefully improving local accuracy. In this short section we note that usual regression methodology has implications in this kind of enterprise.

The usual polynomial regression model says that $n$ observed random values $y_i$ are related to fixed values $x_i$ via
$$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_k x_i^k + \varepsilon_i \qquad (1.6)$$
for iid Normal$(0, \sigma^2)$ random variables $\varepsilon_i$. The parameters $\beta$ and $\sigma$ are the usual objects of inference in this model. In the calibration context with $x$ a gold standard value, $\sigma$ quantifies precision for the local measurement system. Often (at least over a limited range of $x$) 1) a low order polynomial does a good job of describing the observed $x$-$y$ relationship between local and gold standard measurements and 2) the usual (least squares) fitted relationship
$$\hat{y} = g(x) = b_0 + b_1 x + b_2 x^2 + \cdots + b_k x^k$$
has an inverse $g^{-1}(y)$. When such is the case, given a measurement $y_{n+1}$ from the local measurement system, it is plausible to estimate that a corresponding measurement from the gold standard system would be $\hat{x}_{n+1} = g^{-1}(y_{n+1})$. A reasonable question is then "How good is this estimate?". That is, the matter of confidence interval estimation of $x_{n+1}$ is important.

One general method for producing such confidence sets for $x_{n+1}$ is based on the usual prediction interval methodology associated with the model (1.6). That is, for a given $x$, it is standard (see, e.g., Section 9-2 of V or Section 9.2.4 of V&J#2) to produce a prediction interval of the form
$$\hat{y} \pm t\sqrt{s^2 + \left(\text{std error}(\hat{y})\right)^2}$$
for an additional corresponding $y$. And those intervals have the property that for all choices of $x, \sigma, \beta_0, \beta_1, \beta_2, \ldots, \beta_k$
$$P_{x,\sigma,\beta_0,\beta_1,\ldots,\beta_k}\left[y \text{ is in the prediction interval at } x\right] = \text{desired confidence level} = 1 - P\left[\text{a } t_{n-k-1} \text{ random variable exceeds } t \text{ in absolute value}\right]\,. \qquad (1.7)$$
Conceptually, one simply makes prediction limits around the fitted relationship $\hat{y} = g(x) = b_0 + b_1 x + b_2 x^2 + \cdots + b_k x^k$ and then, upon observing a new $y$, sees what $x$'s are consistent with that observation. This produces a confidence set with the desired confidence level.

The only real difficulties with the above general prescription are 1) the lack of simple explicit formulas and 2) the fact that when $\sigma$ is large (so that the regression $\sqrt{MSE}$ tends to be large) or the fitted relationship is very nonlinear, the method can produce (completely rational but) unpleasant-looking confidence sets. The first problem is really of limited consequence in a time when standard statistical software will automatically produce plots of prediction limits associated with low order regressions. And the second matter is really inherent in the problem.
For the (simplest) linear version of this inverse prediction problem, there is an approximate confidence method in common use that doesn't have the deficiencies of the method (1.7). It is derived from a Taylor series argument and has its own problems, but is nevertheless worth recording here for completeness' sake. That is, under the $k = 1$ version of the model (1.6), commonly used approximate confidence limits for $x_{n+1}$ are (for $\hat{x}_{n+1} = (y_{n+1} - b_0)/b_1$ and $\bar{x}$ the sample mean of the gold standard measurements from the calibration experiment)
$$\hat{x}_{n+1} \pm t\,\frac{\sqrt{MSE}}{|b_1|}\sqrt{1 + \frac{1}{n} + \frac{(\hat{x}_{n+1} - \bar{x})^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2}}\,.$$
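These limits are also easy to program. The following sketch (our own illustration, with made-up calibration data) computes $\hat{x}_{n+1}$ and the displayed limits for the $k = 1$ case.

```python
# Sketch: the k = 1 inverse prediction (calibration) limits displayed above,
# assuming x holds gold standard values and y the local measurements.
import numpy as np
from scipy.stats import t as t_dist

def calibration_limits(x, y, y_new, conf=0.95):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)                     # least squares line y = b0 + b1 x
    mse = np.sum((y - (b0 + b1 * x))**2) / (n - 2)
    x_hat = (y_new - b0) / b1
    tq = t_dist.ppf(1 - (1 - conf) / 2, n - 2)
    half = (tq * np.sqrt(mse) / abs(b1)
            * np.sqrt(1 + 1/n + (x_hat - x.mean())**2 / np.sum((x - x.mean())**2)))
    return x_hat - half, x_hat + half

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical gold standard values
y = np.array([1.1, 2.0, 3.2, 3.9, 5.1])   # hypothetical local measurements
print(calibration_limits(x, y, y_new=2.5))
```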
1.7 Crude Gaging and Statistics

All real-world measurement is "to the nearest something". Often one may ignore this fact, treat measured values as if they were exact, and experience no real difficulty when using standard statistical methods (that are really based on an assumption that data are exact). However, sometimes in industrial applications gaging is crude enough that standard (e.g. normal theory) formulas give nonsensical results. This section briefly considers what can be done to appropriately model and draw inferences from crudely gaged data. The assumption throughout is that what are available are integer data, obtained by coding raw observations via
$$\text{integer observation} = \frac{\text{raw observation} - \text{some reference value}}{\text{smallest unit of measurement}}\,.$$
1.7.1 Distributions of Sample Means and Ranges from Integer Observations

To begin with something simple, note first that in situations where only a few different coded values are ever observed, rather than trying to model observations with some continuous distribution (like a normal one) it may well make sense to simply employ a discrete pmf, say $f$, to describe any single measurement. In fact, suppose that a single (crudely gaged) observation $Y$ has a pmf $f(y)$ such that
$$f(y) = 0 \text{ unless } y = 1, 2, \ldots, M\,.$$
Then if $Y_1, Y_2, \ldots, Y_n$ are iid with this marginal discrete distribution, one can easily approximate the distribution of a function of these variables via simulation (using common statistical packages). And for two of the most common statistics used in QC settings (the sample mean and range) one can even work out exact probability distributions using computationally feasible and very elementary methods.

To find the probability distribution of $\bar{Y}$ in this context, one can build up the probability distributions of sums of iid $Y_i$'s recursively by adding probabilities "on diagonals" in two-way joint probability tables. For example, the $n = 2$ distribution of $\bar{Y}$ can be obtained by making out a two-way table of joint probabilities for $Y_1$ and $Y_2$ and adding on diagonals to get probabilities for $Y_1 + Y_2$. Then, making a two-way table of joint probabilities for $(Y_1 + Y_2)$ and $Y_3$, one can add on diagonals and find a distribution for $Y_1 + Y_2 + Y_3$. Or, noting that the distribution of $Y_3 + Y_4$ is the same as that for $Y_1 + Y_2$, it is possible to make a two-way table of joint probabilities for $(Y_1 + Y_2)$ and $(Y_3 + Y_4)$, add on diagonals and find the distribution of $Y_1 + Y_2 + Y_3 + Y_4$. And so on, as in the sketch below. (Clearly, after finding the distribution for a sum, one simply divides possible values by $n$ to get the corresponding distribution of $\bar{Y}$.)
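Here is a minimal version of the recursion in Python (our own illustration; any pmf $f$ on a small set of integers will do).

```python
# Sketch: the "adding on diagonals" recursion for the exact distribution of
# Y1 + ... + Yn (and hence of the sample mean) for an iid discrete sample.
def dist_of_sum(f, n):
    """f maps each support point y (an integer) to P[Y = y]; returns dict for the sum."""
    total = dict(f)
    for _ in range(n - 1):
        new = {}
        for s, ps in total.items():            # current partial-sum distribution
            for y, py in f.items():            # one more observation
                new[s + y] = new.get(s + y, 0.0) + ps * py
        total = new
    return total

f = {1: 0.2, 2: 0.5, 3: 0.3}                   # hypothetical pmf on {1, 2, 3}
sum_dist = dist_of_sum(f, 4)
mean_dist = {s / 4: p for s, p in sum_dist.items()}   # divide support by n
print(mean_dist)
```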
To find the probability distribution of $R = \max Y_i - \min Y_i$ (for $Y_i$'s as above) a feasible computational scheme is as follows. Let
$$S_{kj} = \begin{cases}\displaystyle\sum_{y=k}^{j}f(y) = P[k \le Y \le j] & \text{if } k \le j\\[4pt] 0 & \text{otherwise,}\end{cases}$$
and then (by inclusion-exclusion) for $k \le j$ set
$$M_{kj} = S_{kj}^n - S_{(k+1)j}^n - S_{k(j-1)}^n + S_{(k+1)(j-1)}^n = P[\min Y_i = k \text{ and } \max Y_i = j]\,,$$
and one may compute and store these values. Finally, note that
$$P[R = r] = \sum_{k=1}^{M-r}M_{k,k+r}\,.$$
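In code, the scheme just described is only a few lines (again our own illustration):

```python
# Sketch: exact distribution of the sample range R for n iid observations on
# {1, ..., M}, via the quantities S_kj and M_kj above.
def range_dist(f, M, n):
    S = lambda k, j: sum(f.get(y, 0.0) for y in range(k, j + 1)) if k <= j else 0.0
    def Mkj(k, j):  # P[min = k and max = j] by inclusion-exclusion
        return S(k, j)**n - S(k + 1, j)**n - S(k, j - 1)**n + S(k + 1, j - 1)**n
    return {r: sum(Mkj(k, k + r) for k in range(1, M - r + 1)) for r in range(M)}

f = {1: 0.2, 2: 0.5, 3: 0.3}
print(range_dist(f, M=3, n=4))   # probabilities for R = 0, 1, 2
```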
1.7.2 Estimation Based on Integer-Rounded Normal Data

The problem of drawing inferences from crudely gaged data is one that has a history of at least 100 years (if one takes a view that crude gaging essentially rounds exact values). Sheppard in the late 1800's noted that if one rounds a continuous variable to integers, the variability in the distribution is typically increased. He thus suggested not using the sample standard deviation ($s$) of rounded values, but instead employing what is known as "Sheppard's correction" to arrive at
$$\sqrt{\frac{(n-1)s^2}{n} - \frac{1}{12}}\,. \qquad (1.8)$$
A more complete approach is to model observations explicitly as rounded versions of continuous variables. If a continuous variable with probability density $f(t\,|\,\theta)$ is rounded to the nearest integer, the resulting discrete variable has pmf
$$g(x\,|\,\theta) = \int_{x-.5}^{x+.5}f(t\,|\,\theta)\,dt \quad\text{for } x \text{ an integer}$$
(and $g(x\,|\,\theta) = 0$ otherwise). In particular, for underlying Normal$(\mu, \sigma^2)$ observations,
$$g(x\,|\,\mu,\sigma) = \begin{cases}\Phi\left(\dfrac{x + .5 - \mu}{\sigma}\right) - \Phi\left(\dfrac{x - .5 - \mu}{\sigma}\right) & \text{for } x \text{ an integer}\\[4pt] 0 & \text{otherwise,}\end{cases}$$
and the balance of this section will consider the use of this specific important model. So suppose that $X_1, X_2, \ldots, X_n$ are iid integer-valued random observations (generated from underlying normal observations by rounding). For an observed vector of integers $(x_1, x_2, \ldots, x_n)$ it is useful to consider the so-called likelihood function that treats the (joint) probability assigned to the vector $(x_1, x_2, \ldots, x_n)$ as a function of the parameters,
$$L(\mu, \sigma) := \prod_i g(x_i\,|\,\mu,\sigma) = \prod_i\left[\Phi\left(\frac{x_i + .5 - \mu}{\sigma}\right) - \Phi\left(\frac{x_i - .5 - \mu}{\sigma}\right)\right]\,.$$
The log of this function of $\mu$ and $\sigma$ is (naturally enough) called the loglikelihood and will be denoted as
$$\mathcal{L}(\mu, \sigma) := \ln L(\mu, \sigma)\,.$$
A sensible estimator of the parameter vector $(\mu, \sigma)$ is the point $(\hat{\mu}, \hat{\sigma})$ maximizing the loglikelihood. This prescription for estimation is only partially complete, depending upon the nature of the sample $x_1, x_2, \ldots, x_n$. There are three cases to consider, namely:
1. When the sample range of $x_1, x_2, \ldots, x_n$ is at least 2, $\mathcal{L}(\mu, \sigma)$ is well-behaved (nice and "mound-shaped") and numerical maximization or just looking at contour plots will quickly allow one to maximize the loglikelihood. (It is worth noting that in this circumstance, usually $\hat{\sigma}$ is close to the Sheppard-corrected value in display (1.8).)

2. When the sample range of $x_1, x_2, \ldots, x_n$ is exactly 1, strictly speaking $\mathcal{L}(\mu, \sigma)$ fails to achieve a maximum. If $m$ of the $n$ observations take the smaller of the two values seen, then
$$\sup_{\mu,\sigma}\mathcal{L}(\mu, \sigma) = \ln\left[\left(\frac{m}{n}\right)^m\left(1 - \frac{m}{n}\right)^{n-m}\right]\,,$$
approached as $\sigma \to 0$ along paths where $\Phi\left((\min x_i + .5 - \mu)/\sigma\right) = m/n$. That is, in this case one ought to estimate that $\sigma$ is small and the relationship between $\mu$ and $\sigma$ is such that a fraction $m/n$ of the underlying normal distribution is to the left of $\min x_i + .5$, while a fraction $1 - m/n$ is to the right.

3. When the sample range of $x_1, x_2, \ldots, x_n$ is 0, strictly speaking $\mathcal{L}(\mu, \sigma)$ again fails to achieve a maximum. However,
$$\sup_{\mu,\sigma}\mathcal{L}(\mu, \sigma) = 0\,,$$
and for any $\mu \in (x_1 - .5,\ x_1 + .5)$, $\mathcal{L}(\mu, \sigma) \to 0$ as $\sigma \to 0$. That is, in this case one ought to estimate that $\sigma$ is small and $\mu \in (x_1 - .5,\ x_1 + .5)$.
Beyond the making of point estimates, the loglikelihood function can provide approximate confidence sets for the parameters $\mu$ and/or $\sigma$. Standard large sample statistical theory says that (for large $n$ and $\chi^2_{\alpha:\nu}$ the upper $\alpha$ point of the $\chi^2_\nu$ distribution):

1. An approximate $(1-\alpha)$ level confidence set for the parameter vector $(\mu, \sigma)$ is
$$\left\{(\mu, \sigma)\ \Big|\ \mathcal{L}(\mu, \sigma) > \sup_{\mu,\sigma}\mathcal{L}(\mu, \sigma) - \tfrac{1}{2}\chi^2_{\alpha:2}\right\}\,. \qquad (1.9)$$

2. An approximate $(1-\alpha)$ level confidence set for the parameter $\mu$ is
$$\left\{\mu\ \Big|\ \sup_{\sigma}\mathcal{L}(\mu, \sigma) > \sup_{\mu,\sigma}\mathcal{L}(\mu, \sigma) - \tfrac{1}{2}\chi^2_{\alpha:1}\right\}\,. \qquad (1.10)$$

3. An approximate $(1-\alpha)$ level confidence set for the parameter $\sigma$ is
$$\left\{\sigma\ \Big|\ \sup_{\mu}\mathcal{L}(\mu, \sigma) > \sup_{\mu,\sigma}\mathcal{L}(\mu, \sigma) - \tfrac{1}{2}\chi^2_{\alpha:1}\right\}\,. \qquad (1.11)$$
Several comments and a fuller discussion are in order regarding these confidence sets. In the first place, Karen (Jensen) Hulting's CONEST program (available off the Stat 531 Web page) is useful in finding $\sup_{\mu,\sigma}\mathcal{L}(\mu, \sigma)$ and producing rough contour plots of the (joint) sets for $(\mu, \sigma)$ in display (1.9). Second, it is common to call the function of $\mu$ defined by
$$\mathcal{L}^*(\mu) = \sup_{\sigma}\mathcal{L}(\mu, \sigma)$$
the "profile loglikelihood" function for $\mu$. Note that display (1.10) then says that the confidence set should consist of those $\mu$'s for which the profile loglikelihood is not too much smaller than the maximum achievable. And something entirely analogous holds for the sets in (1.11). Johnson Lee (in 2001 Ph.D. dissertation work) has carefully studied these confidence interval estimation problems and determined that some modification of methods (1.10) and (1.11) is necessary in order to provide guaranteed coverage probabilities for small sample sizes. (It is also very important to realize that, contrary to naive expectations, not even a large sample size will make the usual $t$-intervals for $\mu$ and $\chi^2$-intervals for $\sigma$ hold their nominal confidence levels in the event that $\sigma$ is small, i.e. that the rounding or crudeness of the gaging is important. Ignoring the rounding when it is important can produce actual confidence levels near 0 for methods with large nominal confidence levels.)
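For concreteness, a crude grid search version of this maximization (our own toy stand-in for the CONEST program, using a small hypothetical sample of range 2) might look like the following.

```python
# Sketch: the rounded-normal loglikelihood of Section 1.7.2 and a crude grid
# maximization over (mu, sigma).
import numpy as np
from scipy.stats import norm

def loglik(mu, sigma, x):
    p = norm.cdf((x + 0.5 - mu) / sigma) - norm.cdf((x - 0.5 - mu) / sigma)
    return np.sum(np.log(np.maximum(p, 1e-300)))

x = np.array([4, 5, 5, 6, 5, 4])                     # hypothetical integer-rounded data
mus = np.linspace(x.min() - 1, x.max() + 1, 201)
sigmas = np.linspace(0.05, 3.0, 200)
ll = np.array([[loglik(m, s, x) for s in sigmas] for m in mus])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print("mu-hat ~", round(mus[i], 3), " sigma-hat ~", round(sigmas[j], 3))
```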
[Table fragment (caption and accompanying discussion lost in extraction): tabled values for $n = 2, 3, 4$ at $\alpha$ = .05, .10, .20: $n = 2$: 3.084, 1.547, .785; $n = 3$: .776, .562; $n = 4$: .517.]

For a range-1 sample consisting, say, of the values $x$ and $x + 1$, let
$$n_x = \text{the number of } x_i \text{ equal to } x \qquad\text{and}\qquad n_{x+1} = \text{the number of } x_i \text{ equal to } x + 1 \qquad (1.12)$$
and
$$m = \max(n_x,\ n_{x+1})\,. \qquad (1.13)$$
[Table (caption lost in extraction): tabled pairs, indexed by $n$, $m$ and $\alpha$:]

n   m    alpha = .05       alpha = .10       alpha = .20
2   1    (6.147, 6.147)    (3.053, 3.053)    (1.485, 1.485)
3   2    (1.552, 1.219)    (1.104, 0.771)    (0.765, 0.433)
4   3    (1.025, 0.526)    (0.082, 0.323)    (0.639, 0.149)
    2    (0.880, 0.880)    (0.646, 0.646)    (0.441, 0.441)
5   4    (0.853, 0.257)    (0.721, 0.132)    (0.592, 0.024)
    3    (0.748, 0.548)    (0.592, 0.339)    (0.443, 0.248)
6   5    (0.772, 0.116)    (0.673, 0.032)    (0.569, 0.000)
    4    (0.680, 0.349)    (0.562, 0.235)    (0.444, 0.126)
    3    (0.543, 0.543)    (0.420, 0.420)    (0.299, 0.299)
7   6    (0.726, 0.035)    (0.645, 0.000)    (0.556, 0.000)
    5    (0.640, 0.218)    (0.545, 0.130)    (0.446, 0.046)
    4    (0.534, 0.393)    (0.432, 0.293)    (0.329, 0.193)
8   7    (0.698, 0.000)    (0.626, 0.000)    (0.547, 0.000)
    6    (0.616, 0.129)    (0.534, 0.058)    (0.446, 0.000)
    5    (0.527, 0.281)    (0.439, 0.197)    (0.347, 0.113)
    4    (0.416, 0.416)    (0.327, 0.327)    (0.236, 0.236)
9   8    (0.677, 0.000)    (0.613, 0.000)    (0.541, 0.000)
    7    (0.599, 0.065)    (0.526, 0.010)    (0.448, 0.000)
    6    (0.521, 0.196)    (0.443, 0.124)    (0.361, 0.054)
    5    (0.429, 0.321)    (0.350, 0.242)    (0.267, 0.163)
10  9    (0.662, 0.000)    (0.604, 0.000)    (0.537, 0.000)
    8    (0.587, 0.020)    (0.521, 0.000)    (0.450, 0.000)
    7    (0.515, 0.129)    (0.446, 0.069)    (0.371, 0.012)
    6    (0.437, 0.242)    (0.365, 0.174)    (0.289, 0.105)
    5    (0.346, 0.346)    (0.275, 0.275)    (0.200, 0.200)
Johnson Lee determined values $d(n, \alpha)$ to be used in place of $\chi^2_{\alpha:1}$ in display (1.11) when estimating $\sigma$; these are given in the following table.

Table: Values of $d(n, \alpha)$ for Use in Estimating $\sigma$

n        alpha = .05   alpha = .10
2        10.47         7.71
3        7.26          5.23
4        6.15          4.39
5        5.58          3.97
6        5.24          3.71
7        5.01          3.54
8        4.84          3.42
9        4.72          3.33
10       4.62          3.26
15       4.34          3.06
20       4.21          2.97
30       4.08          2.88
infinity 3.84          2.71

For samples whose range is at least 2, making this $d(n, \alpha)$-for-$\chi^2_{\alpha:1}$ substitution is enough to produce an actual confidence level approximating the nominal one. However, even this modification is not adequate to produce an acceptable coverage probability for $(\mu, \sigma)$'s with small $\sigma$.
For samples with range 0 or 1, formula (1.11) prescribes intervals of the form $(0, U)$. And reasoning that when $\sigma$ is small, samples will typically have range 0 or 1, Lee was able to find (larger) replacements for the limit $U$ prescribed by (1.11) so that the resulting estimation method has actual confidence level not much below the nominal level for any $(\mu, \sigma)$ (with $\sigma$ large or small). That is, if a 0-range sample is observed, estimate $\sigma$ by
$$(0,\ \sigma_0)$$
where $\sigma_0$ is taken from Table 1.5. If a range-1 sample is observed consisting, say, of values $x$ and $x + 1$, and $n_x$, $n_{x+1}$ and $m$ are as in displays (1.12) and (1.13), estimate $\sigma$ using
$$(0,\ \sigma_{1,m})$$
where $\sigma_{1,m}$ is taken from Table 1.6.

The use of these values $\sigma_0$ for range-0 samples, and $\sigma_{1,m}$ for range-1 samples, and the values $d(n, \alpha)$ in place of $\chi^2_{\alpha:1}$ in display (1.11), finally produces a reliable method of confidence interval estimation for $\sigma$ when normal data are integer-rounded.
Table 1.5: Values of $\sigma_0$ (for range-0 samples)

n    alpha = .05   alpha = .10
2    5.635         2.807
3    1.325         0.916
4    0.822         0.653
5    0.666         0.558
6    0.586         0.502
7    0.533         0.464
8    0.495         0.435
9    0.466         0.413
10   0.443         0.396
11   0.425         0.381
12   0.409         0.369
13   0.396         0.358
14   0.384         0.349
15   0.374         0.341

Table 1.6: Values of $\sigma_{1,m}$ (for range-1 samples; the relevant value of $m$ is given in parentheses)

n    alpha = .05                                                              alpha = .10
2    16.914(1)                                                                8.439(1)
3    3.535(2)                                                                 2.462(2)
4    1.699(3), 2.034(2)                                                       1.303(3), 1.571(2)
5    1.143(4), 1.516(3)                                                       0.921(4), 1.231(3)
6    0.897(5), 1.153(4), 1.285(3)                                             0.752(5), 0.960(4), 1.054(3)
7    0.768(6), 0.944(5), 1.106(4)                                             0.660(6), 0.800(5), 0.949(4)
8    0.687(7), 0.819(6), 0.952(5), 1.009(4)                                   0.599(7), 0.707(6), 0.825(5), 0.880(4)
9    0.629(8), 0.736(7), 0.837(6), 0.941(5)                                   0.555(8), 0.644(7), 0.726(6), 0.831(5)
10   0.585(9), 0.677(8), 0.747(7), 0.851(6), 0.890(5)                         0.520(9), 0.597(8), 0.654(7), 0.753(6), 0.793(5)
11   0.550(10), 0.630(9), 0.690(8), 0.775(7), 0.851(6)                        0.493(10), 0.560(9), 0.609(8), 0.685(7), 0.763(6)
12   0.522(11), 0.593(10), 0.646(9), 0.708(8), 0.789(7), 0.818(6)             0.470(11), 0.531(10), 0.573(9), 0.626(8), 0.707(7), 0.738(6)
13   0.499(12), 0.563(11), 0.610(10), 0.658(9), 0.733(8), 0.791(7)            0.452(12), 0.506(11), 0.544(10), 0.587(9), 0.655(8), 0.716(7)
14   0.479(13), 0.537(12), 0.580(11), 0.622(10), 0.681(9), 0.745(8), 0.768(7) 0.436(13), 0.485(12), 0.520(11), 0.558(10), 0.607(9), 0.674(8), 0.698(7)
15   0.463(14), 0.515(13), 0.555(12), 0.593(11), 0.639(10), 0.701(9), 0.748(8) 0.422(14), 0.468(13), 0.499(12), 0.534(11), 0.574(10), 0.632(9), 0.682(8)
Chapter 2

Process Monitoring

Chapters 3 and 4 of V&J discuss methods for process monitoring. The key concept there regarding the probabilistic description of monitoring schemes is the run length idea introduced on page 91 and specifically in display (3.44). Theory for describing run lengths is given in V&J only for the very simplest case of geometrically distributed $T$. This chapter presents some more general tools for the analysis/comparison of run length distributions of monitoring schemes, namely discrete time finite state Markov chains and recursions expressed in terms of integral (and difference) equations.
2.1 Some Theory for Stationary Discrete Time Finite State Markov Chains With a Single Absorbing State

Consider a stationary discrete time finite state Markov chain with states $S_1, S_2, \ldots, S_{m+1}$, where $S_{m+1}$ is a single absorbing state, and $(m+1) \times (m+1)$ transition matrix
$$P = (p_{ij})$$
where
$$p_{ij} = P[\text{system is in } S_j \text{ at time } t + 1\,|\,\text{system is in } S_i \text{ at time } t]\,.$$
As a small example, consider the 3-state chain (with absorbing state $S_3$) having transition matrix
$$P = \begin{pmatrix} .8 & .1 & .1\\ .9 & .05 & .05\\ 0 & 0 & 1.0 \end{pmatrix}\,. \qquad (2.1)$$
[The transition diagram for this example is lost in extraction.]

Let $L_i$ stand for the mean number of transitions required for absorption into $S_{m+1}$ beginning from state $S_i$, let
$$L = \begin{pmatrix}L_1\\ L_2\\ \vdots\\ L_m\end{pmatrix}\,,$$
let $R$ be the $m \times m$ matrix obtained by deleting the last row and column of $P$, and let $\mathbf{1}$ be an $m$-vector of 1's. Then
$$L = (I - R)^{-1}\mathbf{1}\,. \qquad (2.2)$$
To argue that display (2.2) is correct, note that the following system of $m$ equations clearly holds:
$$\begin{aligned}
L_1 &= (1 + L_1)p_{11} + (1 + L_2)p_{12} + \cdots + (1 + L_m)p_{1m} + 1\cdot p_{1,m+1}\\
L_2 &= (1 + L_1)p_{21} + (1 + L_2)p_{22} + \cdots + (1 + L_m)p_{2m} + 1\cdot p_{2,m+1}\\
&\ \ \vdots\\
L_m &= (1 + L_1)p_{m1} + (1 + L_2)p_{m2} + \cdots + (1 + L_m)p_{mm} + 1\cdot p_{m,m+1}\,.
\end{aligned} \qquad (2.3)$$
So
$$L - RL = \mathbf{1}\,,\quad\text{i.e.}\quad (I - R)L = \mathbf{1}\,.$$
Under the conditions of the present discussion it is the case that $(I - R)$ is guaranteed to be nonsingular, so that multiplying both sides of this matrix equation by the inverse of $(I - R)$ one finally has equation (2.2).
For the simple 3-state example with transition matrix (2.1) it is easy enough to verify that with
$$R = \begin{pmatrix} .8 & .1\\ .9 & .05 \end{pmatrix}$$
one has
$$(I - R)^{-1}\mathbf{1} = \begin{pmatrix}10.5\\ 11\end{pmatrix}\,.$$
That is, the mean number of transitions required for absorption (into $S_3$) from $S_1$ is 10.5, while the mean number required from $S_2$ is 11.0.
When one is working with numerical values in $P$ and thus wants numerical values in $L$, the matrix formula (2.2) is most convenient for use with numerical analysis software. When, on the other hand, one has some algebraic expressions for the $p_{ij}$ and wants algebraic expressions for the $L_i$, it is usually most effective to write out the system of equations represented by display (2.3) and to try and see some slick way of solving for an $L_i$ of interest.

It is also worth noting that while the discussion in this section has centered on the computation of mean times to absorption, other properties of "time to absorption" variables can be derived and expressed in matrix notation. For example, Problem 2.22 shows that it is fairly easy to find the variance (or standard deviation) of "time to absorption" variables.
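For numerical work, display (2.2) amounts to solving one linear system. A two-line check of the 3-state example (our own code) follows.

```python
# Sketch: mean times to absorption via display (2.2), checked on the 3-state
# example with transition matrix (2.1).
import numpy as np

R = np.array([[0.80, 0.10],
              [0.90, 0.05]])          # P with its last row and column deleted
L = np.linalg.solve(np.eye(2) - R, np.ones(2))
print(L)                               # [10.5, 11.0]
```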
2.2 Some Applications of Markov Chains to the Analysis of Process Monitoring Schemes

As a first application of the MC material, consider a monitoring scheme based on iid variables $Q_1, Q_2, \ldots$ for which, at each period, an observation produces an immediate alarm with probability $q_1$, produces a "warning" with probability $q_2$, and two successive warnings are also treated as an alarm. With states
$$S_1 = \text{no alarm yet and no current warning},\quad S_2 = \text{no alarm yet but a current warning},\quad S_3 = \text{alarm},$$
the transition matrix is
$$P = \begin{pmatrix} 1 - q_1 - q_2 & q_2 & q_1\\ 1 - q_1 - q_2 & 0 & q_1 + q_2\\ 0 & 0 & 1.0 \end{pmatrix} \qquad (2.4)$$
and the ARL of the scheme (under the iid model for the $Q$ sequence) is $L_1$, the mean time to absorption into the alarm state from the "all-OK" state. [Figure 2.2, a schematic representation of this scenario, is lost in extraction.]

It is worth noting that a system of equations for $L_1$ and $L_2$ is
$$\begin{aligned}
L_1 &= 1\cdot q_1 + (1 + L_2)q_2 + (1 + L_1)(1 - q_1 - q_2)\\
L_2 &= 1\cdot(q_1 + q_2) + (1 + L_1)(1 - q_1 - q_2)\,,
\end{aligned}$$
which is equivalent to
$$\begin{aligned}
L_1 &= 1 + L_1(1 - q_1 - q_2) + L_2 q_2\\
L_2 &= 1 + L_1(1 - q_1 - q_2)\,,
\end{aligned}$$
which is the non-matrix version of the system (2.3) for this example. It is easy enough to verify that this system of two linear equations in the unknowns $L_1$ and $L_2$ has a (simultaneous) solution with
$$L_1 = \frac{1 + q_2}{1 - (1 - q_1 - q_2) - q_2(1 - q_1 - q_2)}\,.$$
As a second application of MC technology to the analysis of a process monitoring scheme, we will consider a so-called "Run-Sum" scheme. To define such a scheme, one begins with "zones" for the variable $Q$ as indicated in Figure 3.9 of V&J. Then scores are defined for various possible values of $Q$. For $j = 0, 1, 2$, a score of $+j$ is assigned to the eventuality that $Q$ is in the positive $j$-sigma to $(j+1)$-sigma zone, while a score of $-j$ is assigned to the eventuality that $Q$ is in the negative $j$-sigma to $(j+1)$-sigma zone. A score of $+3$ is assigned to any $Q$ above the upper 3-sigma limit, while a score of $-3$ is assigned to any $Q$ below the lower 3-sigma limit. Then, for the variables $Q_1, Q_2, \ldots$ one defines corresponding scores $Q_1^*, Q_2^*, \ldots$ and run sums $R_1, R_2, \ldots$ where
$$R_i = \text{the sum of scores } Q^* \text{ through time } i\text{, under the provision that a new sum is begun whenever a score is observed with a sign different from the existing Run-Sum.}$$
(Note, for example, that a new score of $Q^* = +0$ will reset a current Run-Sum of $R = -2$ to $+0$.) The Run-Sum scheme then signals at the first $i$ for which $|Q_i^*| = 3$ or $|R_i| \ge 4$.
Then define states for a Run-Sum process monitoring scheme:
$$\begin{aligned}
&S_1 = \text{no alarm yet and } R = -0\,, &&S_5 = \text{no alarm yet and } R = +0\,,\\
&S_2 = \text{no alarm yet and } R = -1\,, &&S_6 = \text{no alarm yet and } R = +1\,,\\
&S_3 = \text{no alarm yet and } R = -2\,, &&S_7 = \text{no alarm yet and } R = +2\,,\\
&S_4 = \text{no alarm yet and } R = -3\,, &&S_8 = \text{no alarm yet and } R = +3\,,
\end{aligned}$$
and $S_9 = \text{alarm}$. Writing $q_s$ for the probability that a single score takes the value $s$ (for $s = -3, -2, -1, -0, +0, +1, +2, +3$), the corresponding transition matrix is
$$P = \begin{pmatrix}
q_{-0} & q_{-1} & q_{-2} & 0 & q_{+0} & q_{+1} & q_{+2} & 0 & q_{-3}+q_{+3}\\
0 & q_{-0} & q_{-1} & q_{-2} & q_{+0} & q_{+1} & q_{+2} & 0 & q_{-3}+q_{+3}\\
0 & 0 & q_{-0} & q_{-1} & q_{+0} & q_{+1} & q_{+2} & 0 & q_{-3}+q_{-2}+q_{+3}\\
0 & 0 & 0 & q_{-0} & q_{+0} & q_{+1} & q_{+2} & 0 & q_{-3}+q_{-2}+q_{-1}+q_{+3}\\
q_{-0} & q_{-1} & q_{-2} & 0 & q_{+0} & q_{+1} & q_{+2} & 0 & q_{-3}+q_{+3}\\
q_{-0} & q_{-1} & q_{-2} & 0 & 0 & q_{+0} & q_{+1} & q_{+2} & q_{-3}+q_{+3}\\
q_{-0} & q_{-1} & q_{-2} & 0 & 0 & 0 & q_{+0} & q_{+1} & q_{-3}+q_{+2}+q_{+3}\\
q_{-0} & q_{-1} & q_{-2} & 0 & 0 & 0 & 0 & q_{+0} & q_{-3}+q_{+1}+q_{+2}+q_{+3}\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$
and the ARL for the scheme is $L_1 = L_5$. (The fact that the 1st and 5th rows of $P$ are identical makes it clear that the mean times to absorption from $S_1$ and $S_5$ must be the same.) It turns out that clever manipulation with the non-matrix version of display (2.3) in this example even produces a fairly simple expression for the scheme's ARL. (See Problem 2.24 and Reynolds (1971 JQT) and the references therein in this final regard.)
To turn to a different type of application of the MC technology, consider the analysis of a high side decision interval CUSUM scheme as described in Section 4.2 of V&J. Suppose that the variables $Q_1, Q_2, \ldots$ are iid with a continuous distribution specified by the probability density $f(y)$. Then the variables $Q_1 - k_1, Q_2 - k_1, Q_3 - k_1, \ldots$ are iid with probability density $f^*(y) = f(y + k_1)$. For a positive integer $m$, we will think of replacing the variables $Q_i - k_1$ with versions of them rounded to the nearest multiple of $h/m$ before CUSUMing. [The figure illustrating the density $f^*(y)$ and the rounding grid $\ldots, -h, \ldots, -h/m, 0, h/m, 2h/m, \ldots$ is lost in extraction.] Then the CUSUM scheme can be thought of in terms of a MC with states corresponding to (rounded) CUSUM values $0, h/m, 2h/m, \ldots, (m-1)h/m$ and an absorbing "alarm" state corresponding to CUSUM values of $h$ or more. With
$$q_{-m} = \int_{-\infty}^{-h + \frac{1}{2}\left(\frac{h}{m}\right)}f^*(y)\,dy = P\left[Q_1 - k_1 \le -h + \tfrac{1}{2}\tfrac{h}{m}\right]\,,$$
$$q_m = \int_{h - \frac{1}{2}\left(\frac{h}{m}\right)}^{\infty}f^*(y)\,dy = P\left[h - \tfrac{1}{2}\tfrac{h}{m} < Q_1 - k_1\right]\,,$$
and for $-m < j < m$
$$q_j = \int_{j\left(\frac{h}{m}\right) - \frac{1}{2}\left(\frac{h}{m}\right)}^{j\left(\frac{h}{m}\right) + \frac{1}{2}\left(\frac{h}{m}\right)}f^*(y)\,dy\,, \qquad (2.5)$$
one is led to the $(m+1) \times (m+1)$ transition matrix
$$P = \begin{pmatrix}
\sum_{j=-m}^{0}q_j & q_1 & q_2 & \cdots & q_{m-1} & q_m\\[2pt]
\sum_{j=-m}^{-1}q_j & q_0 & q_1 & \cdots & q_{m-2} & q_{m-1}+q_m\\[2pt]
\sum_{j=-m}^{-2}q_j & q_{-1} & q_0 & \cdots & q_{m-3} & q_{m-2}+q_{m-1}+q_m\\
\vdots & \vdots & \vdots & & \vdots & \vdots\\
q_{-m}+q_{-m+1} & q_{-m+2} & q_{-m+3} & \cdots & q_0 & \sum_{j=1}^{m}q_j\\
0 & 0 & 0 & \cdots & 0 & 1
\end{pmatrix}\,,$$
and the ARL of the scheme (with a zero head start) is approximated by the mean time to absorption $L_1$ from the first state.
As a final MC example, consider approximating the ARL of an EWMA scheme (as in Section 4.1 of V&J) for iid observations $Q$ with marginal probability density $f(y)$. For an odd positive integer $m$, set $\Delta = (UCL_{EWMA} - LCL_{EWMA})/m$ and let
$$a_i = LCL_{EWMA} + \frac{\Delta}{2} + (i - 1)\Delta$$
for $i = 1, 2, \ldots, m$. For $i = 1, 2, \ldots, m$ let
$$S_i = \text{no alarm yet and the "rounded" EWMA is } a_i$$
and
$$S_{m+1} = \text{alarm}\,.$$
And for $1 \le i, j \le m$, let
$$\begin{aligned}
q_{ij} &= P[\text{moving from } S_i \text{ to } S_j]\\
&= P\left[a_j - \frac{\Delta}{2} \le (1-\lambda)a_i + \lambda Q \le a_j + \frac{\Delta}{2}\right]\\
&= P\left[\frac{a_j - \frac{\Delta}{2} - (1-\lambda)a_i}{\lambda} \le Q \le \frac{a_j + \frac{\Delta}{2} - (1-\lambda)a_i}{\lambda}\right]\\
&= P\left[a_i + \frac{(j-i)\Delta - \frac{\Delta}{2}}{\lambda} \le Q \le a_i + \frac{(j-i)\Delta + \frac{\Delta}{2}}{\lambda}\right]\\
&= \int_{a_i + \frac{(j-i)\Delta - \Delta/2}{\lambda}}^{a_i + \frac{(j-i)\Delta + \Delta/2}{\lambda}}f(y)\,dy\,. \qquad (2.6)
\end{aligned}$$
Then with
$$P = \begin{pmatrix}
q_{11} & q_{12} & \cdots & q_{1m} & 1 - \sum_{j=1}^{m}q_{1j}\\
q_{21} & q_{22} & \cdots & q_{2m} & 1 - \sum_{j=1}^{m}q_{2j}\\
\vdots & \vdots & & \vdots & \vdots\\
q_{m1} & q_{m2} & \cdots & q_{mm} & 1 - \sum_{j=1}^{m}q_{mj}\\
0 & 0 & \cdots & 0 & 1
\end{pmatrix}\,,$$
the mean time to absorption from the state $S_{(m+1)/2}$ (the value $L_{(m+1)/2}$) of a MC with this transition matrix is an approximation for the EWMA scheme ARL with $EWMA_0 = (UCL_{EWMA} + LCL_{EWMA})/2$. In practice, in order to find the ARL for the original scheme, one would find approximate ARL values for an increasing sequence of $m$'s until those appear to converge.

The four examples in this section have illustrated the use of MC calculations in the second and third of the circumstances listed at the beginning of this section. The first circumstance is conceptually the simplest of the three, and is for example illustrated by Problems 2.25, 2.28 and 2.37. The examples have also all dealt with iid models for the $Q_1, Q_2, \ldots$ sequence. Problem 2.26 shows that the methodology can also easily accommodate some kinds of dependencies in the $Q$ sequence. (The discrete model in Problem 2.26 is itself perhaps less than completely appealing, but the reader should consider the possibility of discrete approximation of the kind of dependency structure employed in Problem 2.27 before dismissing the basic concept illustrated in Problem 2.26 as useless.)
2.3 Integral Equations and Run Length Properties of Process Monitoring Schemes

There is a second general approach to evaluating run length properties of monitoring schemes where continuous variables $Q$ are involved. That is through the use of integral equations, and this section introduces the use of these. (As it turns out, by the time one is forced to find numerical solutions of the integral equations, there is not a whole lot of difference between the methods of this section and those of the previous one. But it is important to introduce this second point of view and note the correspondence between approaches.)

Before going to the details of specific schemes and integral equations, a small piece of calculus/numerical analysis needs to be reviewed and notation set for use in these notes. That concerns the approximation of definite integrals on the interval $[a, a+h]$. Specification of a set of points
$$a \le a_1 \le a_2 \le \cdots \le a_m \le a + h$$
and weights
$$w_i \ge 0 \quad\text{with}\quad \sum_{i=1}^{m}w_i = h$$
so that
$$\int_a^{a+h}f(x)\,dx \approx \sum_{i=1}^{m}w_i f(a_i)$$
constitutes a quadrature rule. The simplest such rule (and one adequate for present purposes) is the "midpoint" rule
$$a_i := a + \left(\frac{i - \frac{1}{2}}{m}\right)h \quad\text{with}\quad w_i := \frac{h}{m}\,. \qquad (2.7)$$
(This choice amounts to approximating an integral of $f$ by a sum of signed areas of rectangles with bases $h/m$ and (signed) heights chosen as the values of $f$ at midpoints of intervals of length $h/m$ beginning at $a$.)
Now consider a high side CUSUM scheme as in Section 4.2 of V&J, where $Q_1, Q_2, \ldots$ are iid with continuous marginal distribution specified by the probability density $f(y)$. Define the function
$$L_1(u) := \text{the ARL of the high side CUSUM scheme using a head start of } u\,.$$
If one begins CUSUMing at $u$, there are three possibilities for where he/she will be after a single observation, $Q_1$. If $Q_1$ is large ($Q_1 - k_1 \ge h - u$) then there will be an immediate signal and the run length will be 1. If $Q_1$ is small ($Q_1 - k_1 \le -u$) the CUSUM will "zero out", one observation will have been spent, and on average $L_1(0)$ more observations are to be faced in order to produce a signal. Finally, if $Q_1$ is moderate ($-u < Q_1 - k_1 < h - u$) then one observation will have been spent and the CUSUM will continue from $u + (Q_1 - k_1)$, requiring on average an additional $L_1(u + (Q_1 - k_1))$ observations to produce a signal. This reasoning leads to the equation for $L_1$,
$$L_1(u) = 1\cdot P[Q_1 - k_1 \ge h - u] + (1 + L_1(0))\,P[Q_1 - k_1 \le -u] + \int_{k_1 - u}^{k_1 + h - u}\left(1 + L_1(u + y - k_1)\right)f(y)\,dy\,.$$
Writing $F(y)$ for the cdf of $Q_1$ and simplifying slightly, this is
$$L_1(u) = 1 + L_1(0)\,F(k_1 - u) + \int_0^h L_1(y)\,f(y + k_1 - u)\,dy\,. \qquad (2.8)$$
The argument leading to equation (2.8) has a twin that produces an integral equation for
$$L_2(v) := \text{the ARL of a low side CUSUM scheme using a head start of } v\,.$$
That equation is
$$L_2(v) = 1 + L_2(0)\left(1 - F(k_2 - v)\right) + \int_{-h}^{0}L_2(y)\,f(y + k_2 - v)\,dy\,. \qquad (2.9)$$
And as indicated in display (4.20) of V&J, could one solve equations (2.8) and (2.9) (and thus obtain $L_1(0)$ and $L_2(0)$) one would have not only separate high and low side CUSUM ARLs, but ARLs for some combined schemes as well. (Actually, more than what is stated in V&J can be proved. Yashchin, in a Journal of Applied Probability paper in about 1985, showed that with iid $Q$'s, high side decision interval $h_1$, low side decision interval $h_2$ and head starts $u$ and $v$, if $k_1 \ge k_2$ and
$$(k_1 - k_2) \ge |h_1 - h_2| - \max\left(0,\ u - v - \max(h_1, h_2)\right)\,,$$
then for the simultaneous use of high and low side schemes
$$ARL_{\text{combined}} = \frac{L_1(u)\,L_2(v)}{L_1(u) + L_2(v)}\,,$$
i.e. the reciprocal of the combined ARL is the sum of the reciprocals of the one-sided ARLs. It is easily verified that what is stated on page 151 of V&J is a special case of this result.) So "in theory", to find ARLs for CUSUM schemes one need only solve the integral equations (2.8) and (2.9). This is easier said than done. The one case where fairly explicit solutions are known is that where observations are exponentially distributed (see Problem 2.30). In other cases one must resort to numerical solution of the integral equations.
So consider the problem of approximate solution of equation (2.8). For a particular quadrature rule for integrals on $[0, h]$, for each $a_i$ one has from equation (2.8) the approximation
$$L_1(a_i) \approx 1 + L_1(a_1)\,F(k_1 - a_i) + \sum_{j=1}^{m}w_j L_1(a_j)\,f(a_j + k_1 - a_i)$$
(where $L_1(0)$ has been replaced by $L_1(a_1)$). That is, at least approximately one has the system of $m$ linear equations
$$\begin{aligned}
L_1(a_1) &= 1 + L_1(a_1)\left[F(k_1 - a_1) + w_1 f(k_1)\right] + \sum_{j=2}^{m}w_j L_1(a_j)\,f(a_j + k_1 - a_1)\\
L_1(a_2) &= 1 + L_1(a_1)\left[F(k_1 - a_2) + w_1 f(a_1 + k_1 - a_2)\right] + \sum_{j=2}^{m}w_j L_1(a_j)\,f(a_j + k_1 - a_2)\\
&\ \ \vdots\\
L_1(a_m) &= 1 + L_1(a_1)\left[F(k_1 - a_m) + w_1 f(a_1 + k_1 - a_m)\right] + \sum_{j=2}^{m}w_j L_1(a_j)\,f(a_j + k_1 - a_m)
\end{aligned}$$
in the $m$ unknowns $L_1(a_1), \ldots, L_1(a_m)$. Again in light of equation (2.8) and the notion of numerical approximation of definite integrals, upon solving this set of equations (for approximate values of $L_1(a_1), \ldots, L_1(a_m)$) one may approximate the function $L_1(u)$ as
$$L_1(u) \approx 1 + L_1(a_1)\,F(k_1 - u) + \sum_{j=1}^{m}w_j L_1(a_j)\,f(a_j + k_1 - u)\,.$$
It is a revealing point that the system of equations above is of the form (2.3) that was so useful in the MC approach to the determination of ARLs. That is, let
$$L = \begin{pmatrix}L_1(a_1)\\ L_1(a_2)\\ \vdots\\ L_1(a_m)\end{pmatrix}$$
and
$$R = \begin{pmatrix}
F(k_1 - a_1) + w_1 f(k_1) & w_2 f(a_2 + k_1 - a_1) & \cdots & w_m f(a_m + k_1 - a_1)\\
F(k_1 - a_2) + w_1 f(a_1 + k_1 - a_2) & w_2 f(k_1) & \cdots & w_m f(a_m + k_1 - a_2)\\
\vdots & \vdots & & \vdots\\
F(k_1 - a_m) + w_1 f(a_1 + k_1 - a_m) & w_2 f(a_2 + k_1 - a_m) & \cdots & w_m f(k_1)
\end{pmatrix}$$
and note that the set of equations for the "$a_i$ head start" approximate ARLs is exactly of the form (2.3). With the simple quadrature rule in display (2.7), note that a generic entry of $R$, $r_{ij}$, for $j \ge 2$ is
$$r_{ij} = w_j f(a_j + k_1 - a_i) = \frac{h}{m}\,f\left((j - i)\frac{h}{m} + k_1\right)\,.$$
But using again the notation $f^*(y) = f(y + k_1)$ employed in the CUSUM example of Section 2.2, this means
$$r_{ij} = \frac{h}{m}\,f^*\left((j - i)\frac{h}{m}\right) \approx \int_{(j-i)\left(\frac{h}{m}\right) - \frac{1}{2}\left(\frac{h}{m}\right)}^{(j-i)\left(\frac{h}{m}\right) + \frac{1}{2}\left(\frac{h}{m}\right)}f^*(y)\,dy = q_{j-i}$$
(in terms of the notation (2.5) from the CUSUM example). The point is that whether one begins from a "discretize the $Q - k_1$ distribution and employ the MC material" point of view or from a "do numerical solution of an integral equation" point of view is largely immaterial. Very similar large systems of linear equations must be solved in order to find approximate ARLs.
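To make the point concrete, here is a small Python solver (our own illustration) for the high side CUSUM ARL; it builds exactly the kind of linear system just described, using the midpoint rule (2.7) and standard normal observations.

```python
# Sketch: approximate high side CUSUM ARLs by solving the linear system above
# (equivalently, the MC mean-absorption-time equations), for normal Q's.
import numpy as np
from scipy.stats import norm

def cusum_arl(k1, h, m=200, mean=0.0):
    f = lambda y: norm.pdf(y, loc=mean)         # density of Q
    F = lambda y: norm.cdf(y, loc=mean)         # cdf of Q
    a = (np.arange(1, m + 1) - 0.5) * h / m     # midpoint rule on [0, h], w_j = h/m
    w = h / m
    R = w * f(a[None, :] + k1 - a[:, None])     # entry (i, j): w_j f(a_j + k1 - a_i)
    R[:, 0] += F(k1 - a)                        # the "zero out" mass goes to state a_1
    L = np.linalg.solve(np.eye(m) - R, np.ones(m))
    return L[0]                                  # approximately the zero-head-start ARL

print(cusum_arl(k1=0.5, h=4.0))                  # on-target ARL, roughly 168
```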
As a second application of integral equation ideas to the analysis of process monitoring schemes, consider the EWMA schemes of Section 4.1 of V&J, where $Q_1, Q_2, \ldots$ are iid with a continuous distribution specified by the probability density $f(y)$. Let
$$L(u) = \text{the ARL of an EWMA scheme with } EWMA_0 = u\,.$$
When one begins an EWMA sequence at $u$, there are two possibilities for where he/she will be after a single observation, $Q_1$. If $Q_1$ is extreme ($\lambda Q_1 + (1-\lambda)u > UCL_{EWMA}$ or $\lambda Q_1 + (1-\lambda)u < LCL_{EWMA}$) then there will be an immediate signal and the run length will be 1. If $Q_1$ is moderate ($LCL_{EWMA} \le \lambda Q_1 + (1-\lambda)u \le UCL_{EWMA}$) one observation will have been spent and on average $L(\lambda Q_1 + (1-\lambda)u)$ more observations are to be faced in order to produce a signal. Now the event
$$LCL_{EWMA} \le \lambda Q_1 + (1-\lambda)u \le UCL_{EWMA}$$
is the event
$$\frac{LCL_{EWMA} - (1-\lambda)u}{\lambda} \le Q_1 \le \frac{UCL_{EWMA} - (1-\lambda)u}{\lambda}\,,$$
so
$$L(u) = 1\cdot\left(1 - P\left[\frac{LCL_{EWMA} - (1-\lambda)u}{\lambda} \le Q_1 \le \frac{UCL_{EWMA} - (1-\lambda)u}{\lambda}\right]\right) + \int_{\frac{LCL_{EWMA} - (1-\lambda)u}{\lambda}}^{\frac{UCL_{EWMA} - (1-\lambda)u}{\lambda}}\left(1 + L(\lambda y + (1-\lambda)u)\right)f(y)\,dy\,,$$
or
$$L(u) = 1 + \int_{\frac{LCL_{EWMA} - (1-\lambda)u}{\lambda}}^{\frac{UCL_{EWMA} - (1-\lambda)u}{\lambda}}L(\lambda y + (1-\lambda)u)\,f(y)\,dy\,,$$
or finally
$$L(u) = 1 + \frac{1}{\lambda}\int_{LCL_{EWMA}}^{UCL_{EWMA}}L(y)\,f\left(\frac{y - (1-\lambda)u}{\lambda}\right)dy\,. \qquad (2.10)$$
Applying a quadrature rule with points $a_j$ and weights $w_j$ for integrals on $[LCL_{EWMA}, UCL_{EWMA}]$, for each $a_i$ one then has the approximation
$$L(a_i) \approx 1 + \frac{1}{\lambda}\sum_{j=1}^{m}w_j L(a_j)\,f\left(\frac{a_j - (1-\lambda)a_i}{\lambda}\right)\,. \qquad (2.11)$$
Then with
$$L = \begin{pmatrix}L(a_1)\\ \vdots\\ L(a_m)\end{pmatrix} \qquad\text{and}\qquad \underset{m\times m}{R} = \left(\frac{w_j}{\lambda}\,f\left(\frac{a_j - (1-\lambda)a_i}{\lambda}\right)\right)\,, \qquad (2.12)$$
the system of equations (2.11) is again exactly of the form (2.3), and upon solving it one may approximate the function $L(u)$ as
$$L(u) \approx 1 + \frac{1}{\lambda}\sum_{j=1}^{m}w_j L(a_j)\,f\left(\frac{a_j - (1-\lambda)u}{\lambda}\right)\,.$$
Again as in the CUSUM case, it is worth noting the similarity between the set of equations used to find MC ARL approximations and the set of equations used to find integral equation ARL approximations. With the quadrature rule (2.7) and an odd integer $m$, using the notation $\Delta = (UCL_{EWMA} - LCL_{EWMA})/m$ employed in the EWMA example of Section 2.2, note that a generic entry of $R$ defined in (2.12) is
$$r_{ij} = \frac{w_j}{\lambda}\,f\left(\frac{a_j - (1-\lambda)a_i}{\lambda}\right) = \frac{\Delta}{\lambda}\,f\left(a_i + \frac{(j-i)\Delta}{\lambda}\right) \approx \int_{a_i + \frac{(j-i)\Delta - \Delta/2}{\lambda}}^{a_i + \frac{(j-i)\Delta + \Delta/2}{\lambda}}f(y)\,dy = q_{ij}$$
(in terms of the notation (2.6) from the EWMA example of Section 2.2). That is, as in the CUSUM case, the sets of equations used in the MC and integral equation approximations for the $EWMA_0 = a_i$ ARLs of the scheme are very similar.
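An analogous solver for the EWMA case (again our own sketch, for standard normal observations and symmetric control limits) follows; the multiplier on the limits below is chosen to give "3-sigma" asymptotic EWMA limits.

```python
# Sketch: approximate EWMA ARLs by solving the system (2.11)/(2.12) for
# normal Q's, with symmetric limits UCL = -LCL and the midpoint rule (2.7).
import numpy as np
from scipy.stats import norm

def ewma_arl(lam, ucl, m=401, mean=0.0):
    lcl = -ucl
    delta = (ucl - lcl) / m
    a = lcl + delta / 2 + delta * np.arange(m)        # grid points a_i
    R = (delta / lam) * norm.pdf((a[None, :] - (1 - lam) * a[:, None]) / lam,
                                 loc=mean)
    L = np.linalg.solve(np.eye(m) - R, np.ones(m))
    return L[m // 2]                                   # start at the center, EWMA_0 = 0

print(ewma_arl(lam=0.2, ucl=3.0 * np.sqrt(0.2 / (2 - 0.2))))   # roughly 560
```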
As a final example of the use of integral equations in the analysis of process monitoring schemes, consider the $X/MR$ schemes of Section 4.4 of V&J. Suppose that observations $x_1, x_2, \ldots$ are iid with continuous marginal distribution specified by the probability density $f(y)$. Define the function
$$L(y) = \text{the mean number of additional observations to alarm, given that there has been no alarm to date and the current observation is } y\,.$$
Then note that as one begins $X/MR$ monitoring, there are two possibilities for where he/she will be after observing the first individual, $x_1$. If $x_1$ is extreme ($x_1 < LCL_x$ or $x_1 > UCL_x$) there will be an immediate signal and the run length will be 1. If $x_1$ is not extreme ($LCL_x \le x_1 \le UCL_x$) one observation will have been spent and on average another $L(x_1)$ observations will be required in order to produce a signal. So it is reasonable that the ARL for the $X/MR$ scheme is
$$ARL = 1\cdot\left(1 - P[LCL_x \le x_1 \le UCL_x]\right) + \int_{LCL_x}^{UCL_x}\left(1 + L(y)\right)f(y)\,dy\,,$$
that is,
$$ARL = 1 + \int_{LCL_x}^{UCL_x}L(y)\,f(y)\,dy\,. \qquad (2.13)$$
And since (given a current observation $y$) the next observation $x$ produces an alarm unless both $LCL_x \le x \le UCL_x$ and $|x - y| \le UCL_R$, it follows that $L(y)$ satisfies
$$L(y) = 1 + \int_{LCL_x}^{UCL_x}I[|x - y| \le UCL_R]\,L(x)\,f(x)\,dx = 1 + \int_{\max(LCL_x,\ y - UCL_R)}^{\min(UCL_x,\ y + UCL_R)}L(x)\,f(x)\,dx\,. \qquad (2.14)$$
(The notation $I[A]$ is indicator function notation, meaning that when $A$ holds $I[A] = 1$, and otherwise $I[A] = 0$.) As in the earlier CUSUM and EWMA examples, once one specifies a quadrature rule for definite integrals on the interval $[LCL_x, UCL_x]$, this expression (2.14) provides a set of $m$ linear equations for approximate values of the $L(a_i)$'s. When this system is solved, the resulting values can be fed into a discretized version of equation (2.13) and an approximate ARL produced. It is worth noting that the potential discontinuities of the integrand in equation (2.14) (produced by the indicator function) have the effect of making numerical solutions of this equation much less well-behaved than those for the other integral equations developed in this section.

The examples of this section have dealt only with ARLs for schemes based on (continuous) iid observations. It therefore should be said that:

1. The iid assumption can in some cases be relaxed to give tractable integral equations for situations where correlated sequences $Q_1, Q_2, \ldots$ are involved (see for example Problem 2.27), and

2. Other descriptors of the run length distribution (beyond the ARL) can often be shown to solve simple integral equations (see for example the integral equations for CUSUM run length second moment and run length probability function in Problem 2.31).
Chapter 3

An Introduction to Discrete Stochastic Control Theory/Minimum Variance Control

Section 3.6 of V&J provides an elementary introduction to the topic of "Engineering Control" and contrasts this adjustment methodology with (the process monitoring methodology of) control charting. The last item under the "Engineering Control" heading of Table 3.10 of V&J makes reference to "optimal stochastic control" theory. The object of this theory is to model system behavior using probability tools and let the consequences of the model assumptions help guide one in the choice of effective control/adjustment algorithms. This chapter provides a very brief introduction to this theory.

3.1 General Exposition
Let
$$\{\ldots, Z(-1), Z(0), Z(1), Z(2), \ldots\}$$
stand for observations on a process assuming that no control actions are taken. One first needs a stochastic/probabilistic model for the sequence $\{Z(t)\}$, and we will let
$$\mathcal{F}$$
stand for such a model. ($\mathcal{F}$ is a joint distribution for the $Z$'s, and might, for example, be of the kind used in the example of Section 3.2.) Suppose that control actions $a(t)$ can be taken at integer times $t \ge 0$, and let $A(a, s)$ stand for the effect, $s$ periods later, of a control action $a$. What is then actually observed for $t \ge 1$ is the controlled process
$$Y(t) = Z(t) + \sum_{s=0}^{t-1}A(a(s),\ t - s)\,,$$
which is the sum of what would have been observed with no control and all of the current effects of previous control actions. For $t \ge 0$, $a(t)$ will be chosen based on
$$\{\ldots, Z(-1), Z(0), Y(1), Y(2), \ldots, Y(t)\}\,.$$
A common objective in this context is to choose the actions so as to minimize
$$E_{\mathcal{F}}\left(Y(t) - T(t)\right)^2 \qquad\text{or}\qquad \sum_{s=1}^{t}E_{\mathcal{F}}\left(Y(s) - T(s)\right)^2$$
for some (possibly time-dependent) target value $T(s)$. The problem of choosing control actions to accomplish this goal is called the "minimum variance" (MV) control problem, and it has a solution that can be described in fairly (deceptively, perhaps) simple terms.
Note first that given $\{\ldots, Z(-1), Z(0), Y(1), Y(2), \ldots, Y(t)\}$ one can recover $\{\ldots, Z(-1), Z(0), Z(1), Z(2), \ldots, Z(t)\}$. This is because
$$Z(s) = Y(s) - \sum_{r=0}^{s-1}A(a(r),\ s - r)\,,$$
i.e., to get $Z(s)$, one simply subtracts the (known) effects of previous control actions from $Y(s)$.

Then the model $\mathcal{F}$ (at least in theory) provides one a conditional distribution for $Z(t+1), Z(t+2), Z(t+3), \ldots$ given the observed $Z$'s through time $t$. The conditional distribution for $Z(t+1), Z(t+2), Z(t+3), \ldots$ given what one can observe through time $t$, namely $\{\ldots, Z(-1), Z(0), Y(1), Y(2), \ldots, Y(t)\}$, is then the conditional distribution one gets for $Z(t+1), Z(t+2), Z(t+3), \ldots$ from the model $\mathcal{F}$ after recovering $Z(1), Z(2), \ldots, Z(t)$ from the corresponding $Y$'s. Then for $s \ge t+1$, let
$$E_{\mathcal{F}}[Z(s)\,|\,\ldots, Z(-1), Z(0), Z(1), Z(2), \ldots, Z(t)] \qquad\text{or just}\qquad E_{\mathcal{F}}[Z(s)\,|\,Z^t]$$
stand for the mean of this conditional distribution of $Z(s)$ available at time $t$.

Suppose that there are $u \ge 0$ periods of "dead time" ($u$ could be 0). Then the earliest $Y$ that one can hope to influence by choice of $a(t)$ is $Y(t + u + 1)$. Notice then that if one takes action $a(t)$ at time $t$, one's most natural projection of $Y(t + u + 1)$ at time $t$ is
$$\hat{Y}(t + u + 1\,|\,t) := E_{\mathcal{F}}[Z(t + u + 1)\,|\,Z^t] + \sum_{s=0}^{t-1}A(a(s),\ t + u + 1 - s) + A(a(t),\ u + 1)\,.$$
It is then natural (and in fact turns out to give the MV control strategy) to try to choose $a(t)$ so that
$$\hat{Y}(t + u + 1\,|\,t) = T(t + u + 1)\,,$$
that is, to choose $a(t)$ solving
$$A(a(t),\ u + 1) = T(t + u + 1) - E_{\mathcal{F}}[Z(t + u + 1)\,|\,Z^t] - \sum_{s=0}^{t-1}A(a(s),\ t + u + 1 - s)\,.$$
(When such choices are possible, this strategy minimizes each of the quantities $E_{\mathcal{F}}(Y(s) - T(s))^2$ and hence the sum $\sum_{s=1}^{t}E_{\mathcal{F}}(Y(s) - T(s))^2$.)
3.2 An Example

To illustrate the meaning of the preceding formalism, consider the model $\mathcal{F}$ specified by
$$Z(t) = W(t) + \epsilon(t) \qquad\text{where}\qquad W(t) = W(t-1) + d + \upsilon(t) \qquad (3.1)$$
for a known constant $d$, iid Normal$(0, \sigma_\epsilon^2)$ random variables $\epsilon(t)$ and iid Normal$(0, \sigma_\upsilon^2)$ random variables $\upsilon(t)$ (the $\epsilon$'s and $\upsilon$'s independent). Under this model, predictions of future $Z$'s take an exponentially-weighted-moving-average-with-drift form
$$\hat{Z}(t+1\,|\,t) := E_{\mathcal{F}}[Z(t+1)\,|\,Z^t] = \lambda Z(t) + (1 - \lambda)\hat{Z}(t\,|\,t-1) + d$$
for some constant $\lambda$ (that depends upon the known variances $\sigma_\epsilon^2$ and $\sigma_\upsilon^2$).

We will find MV control policies under model (3.1) with two different functions $A(a, s)$. Consider first the possibility
$$A(a, s) = a\ \ \forall s \ge 1 \qquad (3.2)$$
(an adjustment $a$ at a given time period takes its full and permanent effect at the next time period).

Consider the situation at time $t = 0$. Available are $Z(0)$ and $\hat{Z}(0\,|\,-1)$ (the prior mean of $W(0)$), and from these one may compute the prediction
$$\hat{Z}(1\,|\,0) := \lambda Z(0) + (1 - \lambda)\hat{Z}(0\,|\,-1) + d\,.$$
That means that taking control action $a(0)$, one should predict a value of
$$\hat{Y}(1\,|\,0) := \hat{Z}(1\,|\,0) + a(0)$$
for the controlled process at time $t = 1$, and upon setting this equal to the target $T(1)$ and solving for $a(0)$ one should thus choose
$$a(0) = T(1) - \hat{Z}(1\,|\,0)\,.$$
At time $t = 1$ one has observed $Y(1)$ and may recover $Z(1)$ by noting that
$$Y(1) = Z(1) + A(a(0), 1) = Z(1) + a(0)\,,$$
so that
$$Z(1) = Y(1) - a(0)\,.$$
One may then compute $\hat{Z}(2\,|\,1) = \lambda Z(1) + (1-\lambda)\hat{Z}(1\,|\,0) + d$. That means that with a target of $T(2)$ one should predict a value of the controlled process at time $t = 2$ of
$$\hat{Y}(2\,|\,1) := \hat{Z}(2\,|\,1) + a(0) + a(1)\,.$$
Upon setting this value equal to $T(2)$ and solving, it is clear that one should choose
$$a(1) = T(2) - \hat{Z}(2\,|\,1) - a(0)\,.$$
So in general under (3.2), at time $t$ one may note that
$$Z(t) = Y(t) - \sum_{s=0}^{t-1}a(s)$$
and compute the prediction $\hat{Z}(t+1\,|\,t) = \lambda Z(t) + (1-\lambda)\hat{Z}(t\,|\,t-1) + d$. Then setting the predicted value of the controlled process equal to $T(t+1)$ and solving for $a(t)$, one finds the MV control action
$$a(t) = T(t+1) - \left(\hat{Z}(t+1\,|\,t) + \sum_{s=0}^{t-1}a(s)\right)\,.$$
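A simple simulation of this control algorithm under model (3.1) and the effect pattern (3.2) follows (our own sketch). Note that the value of lam used is an arbitrary illustrative choice, not the optimal constant determined by $\sigma_\epsilon^2$ and $\sigma_\upsilon^2$.

```python
# Sketch: simulating MV control under model (3.1) with A(a, s) = a for s >= 1
# (display (3.2)); lam plays the role of the constant lambda in the text.
import numpy as np

rng = np.random.default_rng(1)
d, lam, sig_eps, sig_up, T = 0.1, 0.4, 1.0, 0.5, 10.0   # lam chosen arbitrarily here
n = 200

W = 10.0
Z_hat = 10.0                          # prior mean of W(0), i.e. Z-hat(0 | -1)
Z_prev = W + rng.normal(0, sig_eps)   # Z(0)
total_a, errors = 0.0, []
for t in range(n):
    Z_hat = lam * Z_prev + (1 - lam) * Z_hat + d      # Z-hat(t+1 | t)
    a = T - Z_hat - total_a                            # MV action a(t)
    total_a += a
    W += d + rng.normal(0, sig_up)                     # uncontrolled process moves on
    Z = W + rng.normal(0, sig_eps)
    Y = Z + total_a                                    # controlled observation Y(t+1)
    errors.append(Y - T)
    Z_prev = Z                                         # Z(t+1) = Y(t+1) - sum of a's
print("mean squared deviation from target:", np.mean(np.square(errors)))
```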
Finally, consider the problem of MV control under the same model (3.1), but now using
$$A(a, s) = \begin{cases}0 & \text{if } s = 1\\ a & \text{for } s = 2, 3, \ldots\end{cases} \qquad (3.3)$$
(so that there is $u = 1$ period of dead time). Consider again the situation at time $t = 0$, where the first $Y$ that can be affected by the choice of $a(0)$ is $Y(2)$. Now
$$\hat{Z}(2\,|\,0) := E_{\mathcal{F}}[Z(2)\,|\,Z(0)] = E_{\mathcal{F}}[Z(1) - \epsilon(1) + d + \upsilon(2) + \epsilon(2)\,|\,Z(0)] = \hat{Z}(1\,|\,0) + d = \lambda Z(0) + (1-\lambda)\hat{Z}(0\,|\,-1) + 2d\,,$$
and upon setting this equal to the time $t = 2$ target, $T(2)$, and solving, one has the MV control action
$$a(0) = T(2) - \hat{Z}(2\,|\,0)\,.$$
At time $t = 1$ one has in hand $Y(1) = Z(1)$ and $\hat{Z}(1\,|\,0)$, and the first $Y$ that can be affected by the choice of $a(1)$ is $Y(3)$. Now
$$Z(3) = W(3) + \epsilon(3) = W(2) + d + \upsilon(3) + \epsilon(3) = Z(2) - \epsilon(2) + d + \upsilon(3) + \epsilon(3)\,,$$
so that
$$\hat{Z}(3\,|\,1) := E_{\mathcal{F}}[Z(3)\,|\,Z(0), Z(1)] = E_{\mathcal{F}}[Z(2) - \epsilon(2) + d + \upsilon(3) + \epsilon(3)\,|\,Z(0), Z(1)] = \hat{Z}(2\,|\,1) + d = \lambda Z(1) + (1-\lambda)\hat{Z}(1\,|\,0) + 2d\,,$$
and upon setting the time $t = 3$ prediction $\hat{Y}(3\,|\,1) = \hat{Z}(3\,|\,1) + a(0) + a(1)$ equal to the target $T(3)$ and solving, one has the MV control action
$$a(1) = T(3) - \hat{Z}(3\,|\,1) - a(0)\,.$$
Finally, in general under (3.3), one may at time $t$ note that
$$Z(t) = Y(t) - \sum_{s=0}^{t-2}a(s)\,.$$
Then setting the time $t+2$ predicted value of the controlled process equal to $T(t+2)$ and solving for $a(t)$, we find the MV control action
$$a(t) = T(t+2) - \left(\hat{Z}(t+2\,|\,t) + \sum_{s=0}^{t-1}a(s)\right)\,.$$
Chapter 4

Process Characterization and Capability Analysis

Sections 5.1 through 5.3 of V&J discuss the problem of summarizing the behavior of a stable process. The bottom line of that discussion is that "one-sample" statistical methods can be used in a straightforward manner to characterize a process/population/universe standing behind data collected under stable process conditions. Section 5.5 of V&J opens a discussion of summarizing process behavior when it is not sensible to model all data in hand as random draws from a single/fixed universe. The notes in this chapter carry the theme of Section 5.5 of V&J slightly further and add some theoretical detail missing in the book.
4.1 General Comments on Assessing and Dissecting Overall Variation

The questions "How much variation is there overall?" and "Where is the variation coming from?" are fundamental to process characterization/understanding and the guidance of improvement efforts. To provide a framework for discussion here, suppose that in hand one has $r$ samples of data, sample $i$ of size $n_i$ ($i = 1, \ldots, r$). Depending upon the specific application, these $r$ samples can have many different logical structures. For example, Section 5.5 of V&J considers the case where the $n_i$ are all the same and the $r$ samples are naturally thought of as having a balanced hierarchical/tree structure. But many others (both regular and completely irregular) are possible. For example, Figure 4.1 is a schematic parallel to Figure 5.16 of V&J for a "staggered" nested data structure. [Figure 4.1 lost in extraction.]

When data in hand represent the entire universe of interest, methods of probability and statistical inference have no relevance to the basic questions "How much variation is there overall?" and "Where is the variation coming from?" The problem is one of descriptive statistics only, and various creative descriptive tools can be brought to bear.
4.2 More on Analysis Under the Hierarchical Random Effects Model

Consider the hierarchical random effects model with 2 levels of nesting discussed in Section 5.5.2 of V&J. We will continue the notations $y_{ijk}$, $\bar{y}_{ij}$, $\bar{y}_{i\cdot}$ and $\bar{y}_{\cdot\cdot}$ used in that section and also adopt some additional notation. For one thing, it will be useful to define some ranges. Let
$$R_{ij} = \max_k y_{ijk} - \min_k y_{ijk} = \text{the range of the } j\text{th sample within the } i\text{th level of A}\,,$$
$$\Delta_i = \max_j \bar{y}_{ij} - \min_j \bar{y}_{ij} = \text{the range of the } J \text{ sample means within the } i\text{th level of A}\,,$$
and
$$\Delta = \max_i \bar{y}_{i\cdot} - \min_i \bar{y}_{i\cdot} = \text{the range of the means for the } I \text{ levels of A}\,.$$
It will also be useful to consider the ANOVA sums of squares and mean squares alluded to briefly in Section 5.5.3. So let
$$SSTot = \sum_{i,j,k}(y_{ijk} - \bar{y}_{\cdot\cdot})^2\,,$$
$$SSA = KJ\sum_i(\bar{y}_{i\cdot} - \bar{y}_{\cdot\cdot})^2\,,\qquad SSB(A) = K\sum_{i,j}(\bar{y}_{ij} - \bar{y}_{i\cdot})^2 \qquad\text{and}\qquad SSC(B(A)) = \sum_{i,j,k}(y_{ijk} - \bar{y}_{ij})^2\,,$$
and take
$$MSA := \frac{SSA}{I-1}\,,\qquad MSB(A) := \frac{SSB(A)}{I(J-1)} \qquad\text{and}\qquad MSC(B(A)) := \frac{SSC(B(A))}{IJ(K-1)}\,.$$
Now these ranges, sums of squares and mean squares are interesting measures of variation in their own right, but are especially helpful when used to produce estimates of variance components and functions of variance components. For example, it is straightforward to verify that under the hierarchical random effects model (5.28) of V&J,
$$ER_{ij} = d_2(K)\,\sigma\,,$$
$$E\Delta_i = d_2(J)\sqrt{\sigma_\delta^2 + \sigma^2/K}$$
and
$$E\Delta = d_2(I)\sqrt{\sigma_\alpha^2 + \sigma_\delta^2/J + \sigma^2/JK}\,.$$
So, reasoning as in Section 2.2.2 of V&J (there in the context of two-way random effects models and gage R&R), with $\bar{R}$ the mean of the $IJ$ ranges $R_{ij}$ and $\bar{\Delta}$ the mean of the $I$ ranges $\Delta_i$, reasonable range-based point estimates of the variance components are
$$\hat{\sigma}^2 = \left(\frac{\bar{R}}{d_2(K)}\right)^2\,,$$
$$\hat{\sigma}_\delta^2 = \max\left(0,\ \left(\frac{\bar{\Delta}}{d_2(J)}\right)^2 - \frac{\hat{\sigma}^2}{K}\right)$$
and
$$\hat{\sigma}_\alpha^2 = \max\left(0,\ \left(\frac{\Delta}{d_2(I)}\right)^2 - \frac{1}{J}\left(\frac{\bar{\Delta}}{d_2(J)}\right)^2\right)\,.$$
Now by applying linear model theory, or reasoning from V&J displays (5.30) and (5.32) and the fact that $Es_{ij}^2 = \sigma^2$, one can find expected values for the mean squares above. These are
$$EMSA = KJ\sigma_\alpha^2 + K\sigma_\delta^2 + \sigma^2\,,\qquad EMSB(A) = K\sigma_\delta^2 + \sigma^2 \qquad\text{and}\qquad EMSC(B(A)) = \sigma^2\,.$$
These in turn imply that
$$\hat{\sigma}^2 = \frac{SSC(B(A))}{IJ(K-1)}\,,$$
$$\hat{\sigma}_\delta^2 = \max\left(0,\ \frac{1}{K}\left(\frac{SSB(A)}{I(J-1)} - \hat{\sigma}^2\right)\right)$$
and
$$\hat{\sigma}_\alpha^2 = \max\left(0,\ \frac{1}{JK}\left(\frac{SSA}{I-1} - \frac{SSB(A)}{I(J-1)}\right)\right)$$
are exactly the estimators (described without using ANOVA notation) in displays (5.29), (5.31) and (5.33) of V&J. The virtue of describing them in the present terms is to suggest/emphasize that all that was said in Sections 1.4 and 1.5 (in the gage R&R context) about making standard errors for functions of mean squares and ANOVA-based confidence intervals for functions of variance components is equally true in the present context.
For example, the formula (1.3) of these notes can be applied to derive standard errors for $\hat{\sigma}_\delta^2$ and $\hat{\sigma}_\alpha^2$ immediately above. Or, since
$$\sigma_\delta^2 = \frac{1}{K}EMSB(A) - \frac{1}{K}EMSC(B(A))$$
and
$$\sigma_\alpha^2 = \frac{1}{JK}EMSA - \frac{1}{JK}EMSB(A)$$
are both of the form (1.4), the material of Section 1.5 can be used to set confidence limits for these quantities.

As a final note in this discussion of what is possible under the hierarchical random effects model, it is worth noting that while the present discussion has been confined to a balanced data framework, Problem 4.8 shows that at least some of what has been done here carries over to unbalanced data structures.
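For completeness, here is a small sketch (our own code) of the ANOVA-based point estimates of this section for a balanced hierarchical data array.

```python
# Sketch: ANOVA-based variance component estimates for balanced hierarchical
# data y with shape (I, J, K), per the displays above.
import numpy as np

def hier_components(y):
    I, J, K = y.shape
    ybar_ij, ybar_i = y.mean(axis=2), y.mean(axis=(1, 2))
    msc = np.sum((y - ybar_ij[:, :, None])**2) / (I * J * (K - 1))
    msb = K * np.sum((ybar_ij - ybar_i[:, None])**2) / (I * (J - 1))
    msa = K * J * np.sum((ybar_i - y.mean())**2) / (I - 1)
    return msc, max(0.0, (msb - msc) / K), max(0.0, (msa - msb) / (J * K))

rng = np.random.default_rng(2)
y = (rng.normal(0, 2.0, (8, 1, 1)) + rng.normal(0, 1.0, (8, 4, 1))
     + rng.normal(0, 0.5, (8, 4, 3)))   # sigma_alpha = 2, sigma_delta = 1, sigma = 0.5
print(hier_components(y))               # estimates of (sigma^2, sigma_delta^2, sigma_alpha^2)
```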
4.3 Finite Population Sampling and Balanced Hierarchical Structures

This brief subsection is meant to illustrate the kinds of things that can be done with finite population sampling theory in terms of estimating overall variability in a (balanced) hierarchical concrete population of items and dissecting that variability.

Consider first a finite population consisting of $N\cdot M$ items arranged into $N$ levels of A, with $M$ levels of B within each level of A. (For example, there might be $N$ boxes, each containing $M$ widgets. Or there might be $N$ days, on each of which $M$ items are manufactured.) Let
$$y_{ij} = \text{a measurement on the item at level } i \text{ of A and level } j \text{ of B within the } i\text{th level of A (e.g. the diameter of the } j\text{th widget in the } i\text{th box)}\,.$$
Suppose that the quantity of interest is the (grand) variance of all $N\cdot M$ measurements,
$$S^2 = \frac{1}{NM - 1}\sum_{i=1}^{N}\sum_{j=1}^{M}(y_{ij} - \bar{y}_{\cdot\cdot})^2\,.$$
It is straightforward to verify that
$$S^2 = \frac{1}{NM - 1}\left[M(N-1)S_A^2 + N(M-1)S_B^2\right]$$
where
$$S_A^2 = \frac{1}{N-1}\sum_{i=1}^{N}(\bar{y}_i - \bar{y}_{\cdot\cdot})^2 = \text{the variance of the } N \text{ A-level means}$$
and
$$S_B^2 = \frac{1}{N}\sum_{i=1}^{N}\left(\frac{1}{M-1}\sum_{j=1}^{M}(y_{ij} - \bar{y}_i)^2\right) = \text{the average of the } N \text{ within-A-level variances.}$$
Suppose that one selects a simple random sample of $n$ levels of A, and for each selected level of A a simple random sample of $m$ levels of B within A. (For example, one might sample $n$ boxes and $m$ widgets from each box.) A naive way to estimate $S^2$ is to simply use the sample variance
$$s^2 = \frac{1}{nm - 1}\sum(y_{ij} - \bar{y}_{\cdot\cdot})^2$$
(the sum being over sampled items). But it turns out that
$$Es^2 = \frac{m(n-1)}{nm-1}\,S_A^2 + \left(\frac{n(m-1)}{nm-1} + \frac{m(n-1)}{nm-1}\left(\frac{1}{m} - \frac{1}{M}\right)\right)S_B^2\,,$$
which is not in general equal to $S^2$.
However, it is possible to nd a linear combination of the sample versions of
2
2
SA
and SB
that has expected value equal to the population variance. That is,
let
1 X
s2A =
(
yi y: )2
n1
= the sample variance of the n sample means (from the sampled levels of A)
and
1X
1 X
(yij yi )2
n
m1
= the average of the n sample variances (from the sampled levels of A) :
s2B =
Then it turns out that

$$Es_A^2 = S_A^2 + \left(\frac{1}{m} - \frac{1}{M}\right)S_B^2$$

and

$$Es_B^2 = S_B^2 ,$$

so that an unbiased estimator of S² is

$$\hat{S}^2 = \frac{M(N-1)}{NM-1}\,s_A^2 + \left(\frac{N(M-1)}{NM-1} - \frac{M(N-1)}{NM-1}\left(\frac{1}{m} - \frac{1}{M}\right)\right)s_B^2 .$$
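A Python sketch of this unbiased estimator follows (again ours and merely illustrative; the data are simulated and the helper name is hypothetical). The Monte Carlo loop checks the unbiasedness claim.

import numpy as np

def unbiased_S2_hat(sample, N, M):
    """sample has shape (n, m): n sampled A levels, m sampled B levels each."""
    n, m = sample.shape
    sA2 = sample.mean(axis=1).var(ddof=1)      # variance of the n sample means
    sB2 = sample.var(axis=1, ddof=1).mean()    # average of the n sample variances
    cA = M * (N - 1) / (N * M - 1)
    cB = N * (M - 1) / (N * M - 1) - cA * (1 / m - 1 / M)
    return cA * sA2 + cB * sB2

rng = np.random.default_rng(2)
N, M, n, m = 10, 8, 4, 3
pop = rng.normal(0, 1, (N, M))
S2 = pop.var(ddof=1)
reps = []
for _ in range(5000):
    rows = rng.choice(N, n, replace=False)
    sub = np.array([rng.choice(pop[i], m, replace=False) for i in rows])
    reps.append(unbiased_S2_hat(sub, N, M))
print(S2, np.mean(reps))    # the Monte Carlo mean approximates S^2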
This kind of analysis can, of course, be carried beyond the case of a single
level of nesting. For example, consider the situation with two levels of nesting (where both the finite population and the observed values have balanced
hierarchical structure). Then in the ANOVA notation of 4.2 above, take

$$s_A^2 = \frac{SSA}{(I-1)JK} ,\quad s_B^2 = \frac{SSB(A)}{I(J-1)K} \quad\text{and}\quad s_C^2 = \frac{SSC(B(A))}{IJ(K-1)} .$$
Then, with $f_B$ and $f_C$ the sampling fractions at the B and C levels of the hierarchy, it turns out that

$$Es_A^2 = S_A^2 + \frac{(1-f_B)}{J}S_B^2 + \frac{(1-f_C)}{JK}S_C^2 ,$$

$$Es_B^2 = S_B^2 + \frac{(1-f_C)}{K}S_C^2$$

and

$$Es_C^2 = S_C^2 ,$$

so that an unbiased estimator of the grand population variance can again be assembled as an appropriate linear combination of $s_A^2$, $s_B^2$ and $s_C^2$.
Chapter 5
Sampling Inspection
Chapter 8 of V&J treats the subject of sampling inspection, introducing the
basic methods of acceptance sampling and continuous inspection. This chapter
extends that discussion somewhat. We consider how (in the fraction nonconforming context) one can move from single sampling plans to quite general
acceptance sampling plans, we provide a brief discussion of the effects of inspection/measurement error on the real (as opposed to nominal) statistical properties of acceptance sampling plans, and then the chapter closes with an elaboration of 8.5 of V&J, providing some more details on the matter of economic
arguments in the choice of sampling inspection schemes.
5.1 More on Fraction Nonconforming Acceptance Sampling
Section 8.1 of V&J (and for that matter 8.2 as well) confines itself to the
discussion of single sampling plans. For those plans, a sample size is fixed in
advance at some value n, and lot disposal is decided on the basis of inspection of
exactly n items. There are, however, often good reasons to consider acceptance
sampling plans whose ultimate sample size depends upon how the inspected
items look as they are examined. (One might, for example, want to consider
a "double sampling" plan that inspects an initial small sample, terminating
sampling if items look especially good or especially bad so that appropriate
lot disposal seems clear, but takes an additional larger sample if the initial
one looks inconclusive regarding the likely quality of the lot.) This section
considers fraction nonconforming acceptance sampling from the most general
perspective possible and develops the OC, ASN, AOQ and ATI for a general
fraction nonconforming plan.
Consider the possibility of inspecting one item at a time from a lot of N , and
after inspecting each successive item deciding to 1) stop sampling and accept
the lot, 2) stop sampling and reject the lot, or 3) inspect another item. With

X_n = the number of nonconforming items found among the first n inspected,

a helpful way of thinking about various different plans in this context is in
terms of possible paths through a grid of ordered pairs of integers (n, X_n) with
0 ≤ X_n ≤ n. Different acceptance sampling plans then amount to different
choices of "Accept Boundary" and "Reject Boundary." Figure 5.1 is a diagram
representing a single sampling plan with n = 6 and c = 2, Figure 5.2 is a diagram
representing a doubly curtailed version of this plan (one that recognizes that
there is no need to continue inspection after lot disposal has been determined)
and Figure 5.3 illustrates a double sampling plan in these terms.

[Figures 5.1, 5.2 and 5.3: diagrams in the (n, X_n) grid showing Accept and Reject boundaries for the single sampling plan with n = 6 and c = 2, its doubly curtailed version, and a double sampling plan.]
Now on a diagram like those in the figures, one may very quickly count the
number of permissible paths from (0,0) to a point in the grid by (working left
to right) marking each point (n, X_n) in the grid (that it is possible to reach)
with the sum of the numbers of paths reaching (n−1, X_n−1) and (n−1, X_n),
provided neither of those points is a stop-sampling point. (No feasible paths
leave a stop-sampling point. So path counts to them do not contribute to path
counts for any points to their right.) Figure 5.4 is a version of Figure 5.2 with
permissible movements through the (n, X_n) grid marked by arrows, and path
counts indicated.
[Figure 5.4: Diagram for the Doubly Curtailed Single Sampling Plan with Path Counts Indicated]

The reason that one cares about the path counts is that for any stop-sampling
point (n, X_n), every permissible path from (0,0) to (n, X_n) has the same probability. For "type B" sampling that common probability is simply $p^{X_n}(1-p)^{n-X_n}$, while for "type A" sampling from a lot of N items of which Np are nonconforming it is

$$\frac{Np(Np-1)\cdots(Np-X_n+1)\,(N-Np)(N-Np-1)\cdots(N-Np-(n-X_n)+1)}{N(N-1)\cdots(N-n+1)} ,$$

so that in either case P[reaching (n, X_n)] is just the appropriate path count
multiplied by the common path probability.
And these probabilities of reaching the various stop sampling points are the
fundamental building blocks of the standard statistical characterizations of an
acceptance sampling plan.
For example, with $\mathcal{A}$ and $\mathcal{R}$ respectively the acceptance and rejection boundaries, the OC for an arbitrary fraction nonconforming plan is

$$Pa = \sum_{(n,X_n)\in\mathcal{A}} P[\text{reaching } (n, X_n)] . \qquad(5.1)$$

And the mean number of items sampled (the Average Sample Number) is

$$ASN = \sum_{(n,X_n)\in\mathcal{A}\cup\mathcal{R}} n\,P[\text{reaching } (n, X_n)] . \qquad(5.2)$$

Assuming that rejected lots are rectified (inspected completely, with nonconforming items replaced by conforming ones), the Average Outgoing Quality is

$$AOQ = \sum_{(n,X_n)\in\mathcal{A}} \frac{E[\text{number of nonconforming items shipped} \mid \text{reaching } (n, X_n)]}{N}\,P[\text{reaching } (n, X_n)] , \qquad(5.3)$$

which from perspective A is

$$AOQ = \sum_{(n,X_n)\in\mathcal{A}} \left(p - \frac{X_n}{N}\right)P[\text{reaching } (n, X_n)] . \qquad(5.4)$$

And the Average Total Inspection is

$$ATI = N\left(1 - Pa\right) + \sum_{(n,X_n)\in\mathcal{A}} n\,P[\text{reaching } (n, X_n)] . \qquad(5.5)$$
These formulas are conceptually very simple and quite universal. The fact
that specializing them to any particular choice of acceptance boundary and
rejection boundary might have been unpleasant when computations had to be
done by hand is largely irrelevant in today's world of plentiful, fast and cheap
computing. These simple formulas and a personal computer make completely
obsolete the many, many pages of specialized formulas that at one time filled
books on acceptance sampling.
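To illustrate just how little computing is involved, here is a Python sketch (an illustration of ours, not code from V&J) that implements the path-counting scheme and formulas (5.1), (5.2) and (5.5) for the doubly curtailed plan of Figure 5.4, using "type B" path probabilities p^{Xn}(1-p)^{n-Xn}; the lot size N is made up.

ACCEPT = {(4, 0), (5, 1), (6, 2)}          # accept: 4 conforming items seen
REJECT = {(3, 3), (4, 3), (5, 3), (6, 3)}  # reject: 3 nonconforming items seen
STOP = ACCEPT | REJECT

def path_counts(start=(0, 0), max_n=6):
    """Count permissible paths from `start` to each reachable (n, Xn)."""
    counts = {start: 1}
    for n in range(start[0], max_n):
        for x in range(n + 1):
            if (n, x) in counts and (n, x) not in STOP:
                for nxt in ((n + 1, x), (n + 1, x + 1)):
                    counts[nxt] = counts.get(nxt, 0) + counts[(n, x)]
    return counts

def plan_characteristics(p, N=1000):
    cnt = path_counts()
    reach = {(n, x): cnt.get((n, x), 0) * p**x * (1 - p)**(n - x)
             for (n, x) in STOP}
    Pa = sum(reach[pt] for pt in ACCEPT)                              # (5.1)
    ASN = sum(n * reach[(n, x)] for (n, x) in STOP)                   # (5.2)
    ATI = N * (1 - Pa) + sum(n * reach[(n, x)] for (n, x) in ACCEPT)  # (5.5)
    return Pa, ASN, ATI

for p in (.01, .05, .10, .25):
    print(p, plan_characteristics(p))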
Two other matters of interest remain to be raised regarding this general
approach to fraction nonconforming acceptance sampling. The first concerns
the difficult mathematical question "What are good shapes for the accept and
reject boundaries?" We will talk a bit in the final section of this chapter about
criteria upon which various plans might be compared and allude to how one
might try to find a best plan (best shapes for the acceptance and rejection
boundaries) according to such criteria. But at this point, we wish only to
note that Abraham Wald, working in the 1940s on the problem of sequential
testing, developed some approximate theory that suggests that parallel straight
line boundaries (the acceptance boundary below the rejection boundary) have
some attractive properties. He was even able to provide some approximate
"two-point" design criteria. That is, in order to produce a plan whose OC curve
runs approximately through the points (p₁, Pa₁) and (p₂, Pa₂) (for p₁ < p₂ and
Pa₁ > Pa₂) Wald suggested linear stop-sampling boundaries with

$$\text{slope} = \frac{\ln\left(\frac{1-p_1}{1-p_2}\right)}{\ln\left(\frac{p_2(1-p_1)}{p_1(1-p_2)}\right)} . \qquad(5.6)$$
An appropriate X_n-intercept for the acceptance boundary is approximately

$$h_A = \frac{\ln\left(\frac{Pa_2}{Pa_1}\right)}{\ln\left(\frac{p_2(1-p_1)}{p_1(1-p_2)}\right)} \qquad(5.7)$$

(a negative quantity, since Pa₁ > Pa₂), while an appropriate X_n-intercept for the rejection boundary is approximately

$$h_R = \frac{\ln\left(\frac{1-Pa_2}{1-Pa_1}\right)}{\ln\left(\frac{p_2(1-p_1)}{p_1(1-p_2)}\right)} . \qquad(5.8)$$
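Formulas (5.6) through (5.8) translate directly into code. The following Python sketch (the two design points are made-up illustrations) computes the slope and the two intercepts:

from math import log

def wald_boundaries(p1, Pa1, p2, Pa2):
    """Slope and intercepts of Wald's linear stop-sampling boundaries."""
    denom = log(p2 * (1 - p1) / (p1 * (1 - p2)))
    slope = log((1 - p1) / (1 - p2)) / denom        # (5.6)
    hA = log(Pa2 / Pa1) / denom                     # (5.7), negative
    hR = log((1 - Pa2) / (1 - Pa1)) / denom         # (5.8), positive
    return slope, hA, hR

# e.g. an OC curve through (p1, Pa1) = (.01, .95) and (p2, Pa2) = (.05, .10):
print(wald_boundaries(.01, .95, .05, .10))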
Wald actually derived formulas (5.6) through (5.8) under infinite lot size
assumptions (that also allowed him to produce some approximations for both the
OC and ASN of his plans). Where one is thinking of applying Wald's boundaries
in acceptance sampling of a real (finite N) lot, the question of exactly how to
truncate the sampling (close in the right side of the continue-sampling region)
must be answered in some sensible fashion. And once that is done, the basic
formulas (5.1) through (5.5) are of course relevant to describing the resulting
plan. (See Problem 5.4 for an example of this kind of logic in action.)

[Figure 5.5: Path Counts from (1,1) to Stop-Sampling Points for the Plan of Figure 5.4]
Finally, it is an interesting side-light here (that can come into play if one
wishes to estimate p based on data from something other than a single sampling
plan) that provided the stop-sampling boundary has exactly one more point in it
than the largest possible value of n, the uniformly minimum variance unbiased
estimator of p for both type A and type B contexts is (for (n, X_n) a stop-sampling point)

$$\hat{p}((n, X_n)) = \frac{\text{number of permissible paths from } (1,1) \text{ to } (n, X_n)}{\text{number of permissible paths from } (0,0) \text{ to } (n, X_n)} .$$

For example, Figure 5.5 shows the path counts from (1,1) needed (in conjunction
with the path counts indicated in Figure 5.4) to find the uniformly minimum
variance unbiased estimator of p when the doubly curtailed single sampling plan
of Figure 5.4 is used.
Table 5.1 lists the values of p̂ for the 7 points in the stop-sampling boundary
for the doubly curtailed single sampling plan with n = 6 and c = 2, along with
the corresponding values of X_n/n (the maximum likelihood estimator of p).

Table 5.1: The UMVUE and MLE of p for the Doubly Curtailed Single Sampling Plan

Stop-sampling point (n, X_n)   UMVUE, p̂   MLE, X_n/n
(3,3)                          1/1         3/3
(4,0)                          0/1         0/4
(4,3)                          2/3         3/4
(5,1)                          1/4         1/5
(5,3)                          3/6         3/5
(6,2)                          4/10        2/6
(6,3)                          4/10        3/6
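The entries of Table 5.1 can be reproduced with a few lines of Python (again merely a sketch of ours; the counting function is a hypothetical helper):

ACCEPT = {(4, 0), (5, 1), (6, 2)}
REJECT = {(3, 3), (4, 3), (5, 3), (6, 3)}
STOP = ACCEPT | REJECT

def counts_from(start):
    """Path counts from `start` to every reachable (n, Xn) in this plan."""
    c = {start: 1}
    for n in range(start[0], 6):
        for x in range(n + 1):
            if (n, x) in c and (n, x) not in STOP:
                for nxt in ((n + 1, x), (n + 1, x + 1)):
                    c[nxt] = c.get(nxt, 0) + c[(n, x)]
    return c

top, bottom = counts_from((1, 1)), counts_from((0, 0))
for (n, x) in sorted(STOP):
    print(f"({n},{x})  UMVUE {top.get((n, x), 0)}/{bottom[(n, x)]}  MLE {x}/{n}")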
5.2 Imperfect Inspection and Acceptance Sampling
The nominal statistical properties of sampling inspection procedures are "perfect inspection" properties. The OC formulas for the attributes plans in 8.1
and 8.4 of V&J and 5.1 above are really premised on the ability to tell with
certainty whether an inspected item is conforming or nonconforming. And the
OC formulas for the variables plans in 8.2 of V&J are premised on an assumption that the measurement x that determines whether an item is conforming or
nonconforming can be obtained for a given item completely without measurement error. But the truth is that real-world inspection is not perfect, and the
nominal statistical properties of these methods at best approximate their actual
properties. The purpose of this section is to investigate (first in the attributes
context and then in the variables context) just how far actual OC values for
common acceptance sampling plans can be from nominal ones.
Consider first the percent defective context and suppose that when a conforming (good) item is inspected, there is a probability w_G of misclassifying it as
nonconforming. Similarly, suppose that when a nonconforming (defective) item
is inspected, there is a probability w_D of misclassifying it as conforming. Then
from perspective B, a probabilistic description of any single inspected item is
given in Table 5.2, where in that table we are using the abbreviation

$$p^* = w_G(1-p) + p(1-w_D)$$

for the probability that an item (of unspecified actual condition) is classified as
nonconforming by the inspection process.

Table 5.2: Perspective B Description of a Single Inspection Allowing for Inspection Error

                          Inspection Result
                          G                   D
Actual       G    (1-w_G)(1-p)       w_G(1-p)         1-p
Condition    D    p w_D              p(1-w_D)         p
                  1-p*               p*
It should thus be obvious that from perspective B in the fraction nonconforming context, an attributes single sampling plan with sample size n and
acceptance number c has an actual acceptance probability that depends not
only on p but on w_G and w_D as well, through the formula

$$Pa(p; w_G; w_D) = \sum_{x=0}^{c}\binom{n}{x}(p^*)^x(1-p^*)^{n-x} . \qquad(5.9)$$
60
rx :
(5.10)
n
x=0
and have an OC band in which the real OC (that depends upon the unknown
inspection ecacy) is guaranteed to lie.
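For example, the following Python sketch (a made-up plan and made-up bounds a and b, purely for illustration) computes the nominal OC value and the band (5.11) at a single p:

from math import comb

def binom_oc(r, n, c):                     # the function B(r) of (5.10)
    return sum(comb(n, x) * r**x * (1 - r)**(n - x) for x in range(c + 1))

def Pa(p, wG, wD, n=50, c=2):              # formula (5.9)
    p_star = wG * (1 - p) + p * (1 - wD)
    return binom_oc(p_star, n, c)

p = .04
print("nominal OC:", Pa(p, 0, 0))
print("band per (5.11), if wG <= .02 and wD <= .10:",
      Pa(p, .02, 0), "to", Pa(p, 0, .10))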
Similar analyses can be done for "nonconformities per unit" contexts as follows.
Suppose that during inspection of product, real nonconformities are missed
with probability m and that (independent of the occurrence and inspection
of real nonconformities) "phantom" nonconformities are observed according
to a Poisson process with rate λ_P per unit inspected. Then from perspective B
in a nonconformities per unit context, the number of nonconformities observed
on k units is Poisson with mean

$$k\left(\lambda(1-m) + \lambda_P\right) ,$$

so that an actual acceptance probability corresponding to the nominal one given
in display (8.8) of V&J is

$$Pa(\lambda; \lambda_P; m) = \sum_{x=0}^{c}\frac{\exp\left(-k(\lambda(1-m)+\lambda_P)\right)\left(k(\lambda(1-m)+\lambda_P)\right)^x}{x!} . \qquad(5.12)$$
Display (5.12) depends on (λ, λ_P, m) only through the effective mean rate

$$\lambda^* = \lambda(1-m) + \lambda_P , \qquad(5.13)$$

in which it is decreasing. And the same kinds of bounding ideas used above for the fraction nonconforming
context might be used with the OC (5.12) in the mean nonconformities per unit
context. Pretty clearly, if one could guarantee that λ_P ≤ a and that m ≤ b, one
would have (from display (5.12))

$$Pa(\lambda; a; 0) \le Pa(\lambda; \lambda_P; m) \le Pa(\lambda; 0; b) . \qquad(5.14)$$

Consider then the variables acceptance sampling context of 8.2 of V&J, where
what can actually be observed for an item is not the real dimension x of interest,
but only a measurement y subject to measurement bias β and measurement
precision σ_measurement. For a lower specification limit L on x, the usual variables
plan accepts a lot if

$$\frac{\bar{y} - L}{s_y} \ge k . \qquad(5.15)$$
And under model (2.1) of V&J, a given set of parameters (μ_x, σ_x) for the x
distribution has corresponding fraction nonconforming

$$p(\mu_x, \sigma_x) = \Phi\left(\frac{L - \mu_x}{\sigma_x}\right)$$

and acceptance probability

$$Pa(\mu_x; \sigma_x; \beta; \sigma_{\text{measurement}}) = P\left[\frac{\bar{y} - L}{s_y} \ge k\right] = P\left[\frac{\dfrac{\bar{y} - \mu_y}{\sigma_y/\sqrt{n}} + \delta}{s_y/\sigma_y} \ge k\sqrt{n}\right]$$

for

$$\delta = \frac{\mu_y - L}{\sigma_y/\sqrt{n}} = \frac{-\left(\dfrac{L - \mu_x}{\sigma_x} - \dfrac{\beta}{\sigma_x}\right)}{\sqrt{1 + \sigma_{\text{measurement}}^2/\sigma_x^2}}\,\sqrt{n} . \qquad(5.16)$$

Here

$$\frac{\bar{y} - \mu_y}{\sigma_y/\sqrt{n}} \sim \text{Normal}\,(0, 1)$$

independent of $\frac{s_y}{\sigma_y}$, which has the distribution of $\sqrt{U/(n-1)}$ for U a $\chi^2_{n-1}$
random variable. That is, with W a noncentral t random variable with noncentrality parameter given in display (5.16), we have

$$Pa(\mu_x; \sigma_x; \beta; \sigma_{\text{measurement}}) = P[W \ge k\sqrt{n}] .$$
And the crux of the matter is that (even if measurement bias, β, is 0) δ in
display (5.16) is not a function of (L − μ_x)/σ_x alone unless one assumes that
σ_measurement is EXACTLY 0.

Even with no measurement bias, if σ_measurement ≠ 0 there are (μ_x, σ_x) pairs
with

$$\frac{L - \mu_x}{\sigma_x} = z$$

(and therefore p = Φ(z)) and δ ranging all the way from −z√n to 0. Thus,
considering z ≤ 0 (and p ≤ .5), there are corresponding Pa's ranging from

$$P[\text{a } t_{n-1} \text{ random variable} \ge k\sqrt{n}]$$

to

$$P[\text{a noncentral } t_{n-1}(-z\sqrt{n}) \text{ random variable} \ge k\sqrt{n}] ,$$

while considering z ≥ 0 (and p ≥ .5) there are corresponding Pa's ranging from

$$P[\text{a noncentral } t_{n-1}(-z\sqrt{n}) \text{ random variable} \ge k\sqrt{n}]$$

to

$$P[\text{a } t_{n-1} \text{ random variable} \ge k\sqrt{n}] .$$
That is, one is confronted with the extremely unpleasant (and initially counterintuitive) picture of real OC indicated in Figure 5.6.

It is important to understand the picture painted in Figure 5.6. The situation
is worse than in the attributes data case. There, if one knows the efficacy of
the inspection methodology it is at least possible to pick a single appropriate
OC curve. (The OC bands indicated by displays (5.11) and (5.14) are created
only by ignorance of inspection efficacy.) The bizarre OC "bands" created in
the variables context (and sketched in Figure 5.6) do not reduce to curves if one
knows the inspection bias and precision, but rather are intrinsic to the fact that
unless σ_measurement is exactly 0, different (μ, σ) pairs with the same p must have
different Pa's under acceptance criterion (5.15). And the only way that one can
replace the situation pictured in Figure 5.6 with one having a thinner and more
palatable OC "band" (something approximating a curve) is by guaranteeing
that

$$\frac{\sigma_x^2}{\sigma_{\text{measurement}}^2}$$

is of some appreciable size. That is, given a particular measurement precision,
one must agree to concern oneself only with cases where product variation cannot
"hide" in measurement noise. Such is the only way that one can even come close
to the variables sampling goal of treating (μ, σ) pairs with the same p equally.
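The noncentral t representation above makes these real OC computations routine. Here is a Python sketch (assuming scipy is available; the plan constants and parameter values are made up) showing two (μ_x, σ_x) pairs with the same p receiving different Pa's when σ_measurement > 0:

from math import sqrt
from scipy.stats import nct

def Pa_variables(mu_x, sigma_x, beta, sigma_meas, L, n, k):
    """P[(ybar - L)/s_y >= k] via the noncentral t representation (5.16)."""
    z = (L - mu_x) / sigma_x                 # so p = Phi(z)
    delta = -sqrt(n) * (z - beta / sigma_x) / sqrt(1 + (sigma_meas / sigma_x) ** 2)
    return nct.sf(k * sqrt(n), df=n - 1, nc=delta)

L_spec, n, k = 10.0, 5, 1.5
for mu_x, sigma_x in [(10.5, .25), (12.0, 1.0)]:  # both give z = -2, the same p
    print(Pa_variables(mu_x, sigma_x, 0.0, .5, L_spec, n, k))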
5.3 Economic Arguments in the Choice of Sampling Inspection Schemes
Section 8.5 of V&J alludes briefly to the possibility of using economic/decision-theoretic arguments in the choice of sampling inspection schemes and cites the
1994 Technometrics paper of Vander Wiel and Vardeman. Our first objective
in this section is to provide some additional details of the Vander Wiel and
Vardeman analysis. To that end, consider a stable process fraction nonconforming situation and continue the w_G and w_D notation used above (and also
introduced on page 493 of V&J). Note that Table 5.2 remains an appropriate
description of the results of a single inspection. We will suppose that inspection
costs are accrued on a per item basis and adopt the notation of Table 8.16 of
V&J for the costs.

As a vehicle to a very quick demonstration of the famous "all or none"
principle, consider facing N potential inspections and employing a random
inspection policy that inspects each item independently with probability π.
Then the mean cost suffered over N items is simply N times that suffered for 1
item. And this is

$$E\text{Cost} = \pi\left(k_I + (1-p)w_G k_{GF} + p(1-w_D)k_{DF} + p w_D k_{DP}\right) + (1-\pi)p k_{DU}$$
$$= \pi\left(k_I + w_G k_{GF} - pK\right) + p k_{DU} \qquad(5.17)$$

for

$$K = (1-w_D)(k_{DU} - k_{DF}) + w_D(k_{DU} - k_{DP}) + w_G k_{GF}$$
(as in display (8.50) of V&J). Now it is clear from display (5.17) that if K < 0,
ECost is minimized over choices of π by the choice π = 0. On the other hand,
if K > 0, ECost is minimized over choices of π

by the choice π = 0 if p ≤ (k_I + w_G k_{GF})/K

and

by the choice π = 1 if p ≥ (k_I + w_G k_{GF})/K.

That is, an optimal policy is "none" if K ≤ 0 or p ≤ (k_I + w_G k_{GF})/K, and
"all" if K > 0 and p ≥ (k_I + w_G k_{GF})/K.
Consider next the simplest version of the Deming Inspection Problem, where
inspection is perfect, k₁ is the (per item) cost of inspection and k₂ is the (per
item) cost of an undetected nonconforming item, and where a single sampling
plan with sample size n and acceptance number c is applied to a lot of N
(rejected lots being fully inspected and rectified). There the expected total cost
is

$$ETC(n; c; p) = k_1 N\left(1 + Pa(n; c; p)\left(1 - \frac{n}{N}\right)\left(p\,\frac{k_2}{k_1} - 1\right)\right) . \qquad(5.18)$$
Optimal choice of n and c requires that one be in the business of comparing the
functions of p defined in display (5.18). How one approaches that comparison
depends upon what one is willing to input into the decision process in terms of
information about p.

First, if p is fixed/known and available for use in choosing n and c, the optimization of criterion (5.18) is completely straightforward. It amounts only to
the comparison of numbers (one for each (n, c) pair), not functions. And the
solution is quite simple. In the case that p > k₁/k₂, one has (p k₂/k₁ − 1) > 0, and from
examination of display (5.18) minimum expected total cost will be achieved if
Pa(n; c; p) = 0 or if (1 − n/N) = 0. That is, "all" is optimal. In the case that
p < k₁/k₂, minimum expected total cost will be achieved if Pa(n; c; p) = 1 and (1 − n/N) = 1. That
is, "none" is optimal. This is a manifestation of the general Vander Wiel and
Vardeman result. For known p in this kind of problem, sampling/partial inspection makes no sense. One is not going to learn anything about p from the
sampling. Simple economics (comparison of p to the critical cost ratio k₁/k₂)
determines whether it is best to inspect and rectify, or to take one's lumps in
later costs.
When one may not assume that p is fixed/known (and it is thus unavailable for use in choosing an optimal (n, c) pair), some other approach has to be
taken. One possibility is to describe p with a probability distribution G, average
ETC(n; c; p) over p according to that distribution to get E_G ETC(n; c), and then
to compare numbers (one for each (n, c) pair) to identify an optimal inspection
plan. This makes sense because lot disposal should depend upon what the data
say about p: one should want to accept exactly when the conditional mean value
of p given the observed X = x is less than the critical cost ratio k₁/k₂, and to
reject when it exceeds k₁/k₂. That is, for a given n an optimal acceptance number
is

$$c_G^{opt}(n) = \max\left\{x \,\middle|\, E_G[p \mid X = x] \le \frac{k_1}{k_2}\right\} . \qquad(5.19)$$

(And it is perhaps comforting to know that the monotone likelihood ratio property of the binomial distribution guarantees that E_G[p | X = x] is monotone in
x.)
What is this saying? The assumptions 1) that p ~ G and 2) that conditional
on p the variable X ~ Binomial(n, p) together give a joint distribution for p and
X. This in turn can be used to produce for each x a conditional distribution
of p | X = x and therefore a conditional mean value of p given that X = x.
The prescription (5.19) says that one should find the largest x for which that
conditional mean value of p is still less than the critical cost ratio and use that
value for c_G^{opt}(n). To complete the optimization of E_G ETC(n; c; p), one
would then need to compute and compare (for various n) the quantities

$$E_G ETC(n; c_G^{opt}(n); p) . \qquad(5.20)$$
The fact is that, depending upon the nature of G, the minimizer of quantity
(5.20) can turn out to be anything from 0 to N. For example, if G puts all its
probability on one side or the other of k₁/k₂, then the conditional distributions
of p given X = x must concentrate all their probability (and therefore have
their means) on that same side of the critical cost ratio. So it follows that if G
puts all its probability to the left of k₁/k₂, "none" is optimal (even though one
doesn't know p exactly), while if G puts all its probability to the right of k₁/k₂,
"all" is optimal in terms of optimizing E_G ETC(n; c; p).

On the other hand, consider an unrealistic but instructive situation where
k₁ = 1, k₂ = 1000 and G places probability 1/2 on the possibility that p = 0 and
probability 1/2 on the possibility that p = 1. Under this model the lot is either
perfectly good or perfectly bad, and a priori one thinks these possibilities are
equally likely. Here the distribution G places probability on both sides of the
breakeven quantity k₁/k₂ = .001. Even without actually carrying through the
whole mathematical analysis, it should be clear that in this scenario the optimal
n is 1! Once one has inspected a single item, he or she knows for sure whether
p is 0 or is 1 (and the lot can be rectified in the latter case).
The most common mathematically nontrivial version of this whole analysis
of the Deming Inspection Problem is the case where G is a Beta distribution.
If G is the Beta(α, β) distribution,

$$E_G[p \mid X = x] = \frac{\alpha + x}{\alpha + \beta + n} ,$$

so that c_G^{opt}(n) is the largest value of x such that

$$\frac{\alpha + x}{\alpha + \beta + n} \le \frac{k_1}{k_2} .$$

That is, in this situation, for ⌊y⌋ the greatest integer in y,

$$c_G^{opt}(n) = \left\lfloor \frac{k_1}{k_2}(\alpha + \beta + n) - \alpha \right\rfloor = \left\lfloor \frac{k_1}{k_2}n + \left(\frac{k_1}{k_2}(\alpha + \beta) - \alpha\right) \right\rfloor ,$$
which for large n is essentially (k₁/k₂)n. The optimal value of n can then be found
by optimizing (over choice of n) the quantity

$$E_G ETC(n; c_G^{opt}(n); p) = \int_0^1 ETC(n; c_G^{opt}(n); p)\,\frac{1}{B(\alpha,\beta)}\,p^{\alpha-1}(1-p)^{\beta-1}\,dp .$$

The reader can check that this exercise boils down to the minimization over n
of

$$\left(1 - \frac{n}{N}\right)\sum_{x=0}^{c_G^{opt}(n)}\binom{n}{x}\int_0^1 p^x(1-p)^{n-x}\left(\frac{k_2}{k_1}p - 1\right)\frac{1}{B(\alpha,\beta)}\,p^{\alpha-1}(1-p)^{\beta-1}\,dp .$$
(The SAMPLE program of Lorenzen alluded to earlier actually uses a different
approach than the one discussed here to find optimal plans. That approach is
computationally more efficient, but not as illuminating in terms of laying bare
the basic structure of the problem as the route taken in this exposition.)
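For the Beta prior, the integrals above reduce to Beta functions, so a direct search for the optimal n is easy on a personal computer. The following Python sketch (with made-up costs and prior, using scipy only for the log Beta function) implements (5.19) and the minimization just described:

from math import comb, floor, exp
from scipy.special import betaln

def c_opt(n, a, b, k1, k2):                       # display (5.19), Beta(a, b) prior
    return floor(k1 / k2 * (a + b + n) - a)       # may be negative: never accept

def EG_ETC_over_k1N(n, a, b, k1, k2, N):
    """E_G[ETC(n, c_opt(n), p)] / (k1 N), using Beta-binomial moments."""
    c = c_opt(n, a, b, k1, k2)
    total = 0.0
    for x in range(0, max(c, -1) + 1):
        # E p^(x+1) (1-p)^(n-x) and E p^x (1-p)^(n-x) under the Beta(a, b) prior
        m1 = exp(betaln(a + x + 1, b + n - x) - betaln(a, b))
        m0 = exp(betaln(a + x, b + n - x) - betaln(a, b))
        total += comb(n, x) * (k2 / k1 * m1 - m0)
    return 1 + (1 - n / N) * total

a, b, k1, k2, N = 1.0, 99.0, 1.0, 100.0, 1000     # prior mean p = .01 = k1/k2
best = min(range(0, N + 1), key=lambda n: EG_ETC_over_k1N(n, a, b, k1, k2, N))
print("optimal n:", best, "with c =", c_opt(best, a, b, k1, k2))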
As two final pieces of perspective on this topic of economic analysis of sampling inspection, we offer the following. In the first place, while the Deming
Inspection Problem is not a terribly general formulation of the topic, the results
here are typical of how things turn out. Second, it needs to be remembered that
what has been described here is the finding of a cost-optimal fixed n inspection
plan. The problem of finding a plan optimal among all possible plans (of the
type discussed in 5.1) is a more challenging one. For G placing probability
on both sides of the critical cost ratio, not only need it not be the case that
"all or none" is optimal, but in general an optimal plan need not be of the
fixed n variety. While in principle the methodology for finding an overall best
inspection plan is well-established (involving as it does so-called "dynamic programming" or "backwards induction"), the details are unpleasant enough that
it will not make sense to pursue this matter further.
Chapter 6
Problems
1 Measurement and Statistics

1.1. Suppose that θ̂ is an estimator of a parameter θ.

(a) Argue that the mean squared error of θ̂ satisfies E(θ̂ − θ)² = Var θ̂ + (Eθ̂ − θ)². (Mean squared error is variance plus squared bias.)

(b) What is an optimal (in terms of minimum mean squared error) multiple of s to use in estimating σ?
1.2. How do R/d₂(n) and s/c₄(n) compare (in terms of mean squared error)
as estimators of σ? (The assumption here is that they are both based on
a sample from a normal distribution. See Problem 1.1 for a definition of
mean squared error.)

1.3. Suppose that sample variances s²ᵢ, i = 1, 2, ..., r are based on independent
samples of size m from normal distributions with a common standard
deviation, σ. A common SQC-inspired estimator of σ is s̄/c₄(m). Another
possibility is

$$s_{pooled} = \sqrt{\frac{s_1^2 + \cdots + s_r^2}{r}}$$

or

$$\hat{\sigma} = \frac{s_{pooled}}{c_4(r(m-1)+1)} .$$

Standard distribution theory says that r(m−1)s²_pooled/σ² has a χ² distribution with r(m−1) degrees of freedom.

(a) Compare s̄/c₄(m), s_pooled and σ̂ in terms of mean squared error.

(b) What is an optimal multiple of s_pooled (in terms of mean squared
error) to use in estimating σ?

(Note: See Vardeman (1999 IIE Transactions) for a complete treatment
of the issues raised in Problems 1.1 through 1.3.)
1.4. Set up a double integral that gives the probability that the sample range
of n standard normal random variables is between .5 and 2.0. How is
this probability related to the probability that the sample range of n iid
Normal(μ, σ²) random variables is between .5σ and 2.0σ?

1.5. It is often helpful to state standard errors (estimated standard deviations) corresponding to point estimates of quantities of interest. In a
context where a standard deviation, σ, is to be estimated by R̄/d₂(n)
based on r samples of size n, what is a reasonable standard error to announce? (Be sure that your answer is computable from sample data, i.e.
doesn't involve any unknown process parameters.)

1.6. Consider the paper weight data in Problem (2.12) of V&J. Assume that
the 2-way random effects model is appropriate and do the following.

(a) Compute the ȳᵢⱼ, sᵢⱼ and Rᵢⱼ for all I·J = 2·5 = 10 Piece×Operator
combinations. Then compute both row ranges of means Δᵢ and row
sample variances of means s²ᵢ.

(b) Find both range-based and sample variance-based point estimates of
the repeatability standard deviation, σ.

(c) Find both range-based and sample variance-based point estimates of
the reproducibility standard deviation $\sigma_{reproducibility} = \sqrt{\sigma_\beta^2 + \sigma_{\alpha\beta}^2}$.

(d) Get a statistical package to give you the 2-way ANOVA table for these
data. Verify that s²_pooled = MSE and that your sample variance-based estimate of σ_reproducibility from part (c) is

$$\sqrt{\max\left(0,\ \frac{1}{mI}MSB + \frac{I-1}{mI}MSAB - \frac{1}{m}MSE\right)} .$$
(h) Suppose that engineering specifications on individual pieces of the type measured here are of the form L to U. Consider the ratio

$$\frac{6\hat{\sigma}_{overall}}{U - L} .$$

Used in the way it was in this study, does the scale seem adequate
to check conformance to such specifications?
(i) Give (any sensible) point estimates of the fractions of the overall measurement variance attributable to repeatability and to reproducibility.
1.7. In a particular (real) thorium detection problem, measurement variation
for a particular (spectral absorption) instrument was thought to be about
σ_measurement = .002 instrument units. (Division of a measurement expressed in instrument units by 58.2 gave values in μg/l.) Suppose that in
an environmental study, a field sample is to be measured once (producing
y_new) on this instrument and the result is to be compared to a (contemporaneous) measurement of a lab blank (producing y_old). If the field
reading exceeds the blank reading by too much, there will be a declaration
that there is a detectable excess amount of thorium present.

(a) Assuming that measurements are normal, find a critical value L_c so
that the lab will run no more than a 5% chance of a false positive
result.

(b) Based on your answer to (a), what is a lower limit of detection,
L_d, for a 90% probability of correctly detecting excess thorium?
What, by the way, is this limit in terms of μg/l?
1.9. In applying ANOVA methods to gage R&R studies, one often uses linear
combinations of independent mean squares as estimators of their expected
values. Section 1.5 of these notes shows it is possible to also produce standard errors (estimated standard deviations) for these linear combinations.
Suppose that MS₁, MS₂, ..., MS_k are independent random variables, where
νᵢ·MSᵢ/EMSᵢ has a χ² distribution with νᵢ degrees of freedom. Consider the random variable

U = c₁MS₁ + c₂MS₂ + ··· + c_kMS_k .

(a) Find the standard deviation of U.

(b) Your expression from (a) should involve the means EMSᵢ, that in
applications will be unknown. Propose a sensible (data-based) estimator of the standard deviation of U that does not involve these
quantities.

(c) Apply your result from (b) to give a sensible standard error for the
ANOVA-based estimators of σ², σ²_reproducibility and σ²_overall.
1.10. Section 1.7 of the notes presents rounded data likelihood methods for
normal data with the 2 parameters μ and σ. The same kind of thing can be
done for other families of distributions (which can have other numbers of
parameters). For example, the exponential distributions with means 1/λ
can be used. (Here there is the single parameter λ.) These exponential
distributions have cdfs

F(x) = 1 − exp(−λx) for x ≥ 0 and F(x) = 0 for x < 0.

Below is a frequency table for twenty exponential observations that have
been rounded to the nearest integer.

rounded value   0   1   2   3   4
frequency       7   8   2   2   1

(a) Write out an expression for the appropriate rounded data log likelihood function for this problem,

L(λ) = ln L(data | λ).

(b) Make a plot of L(λ). Use it and identify the maximum likelihood
estimate of λ based on the rounded data.

(c) Use the plot from (b) and make an approximate 90% confidence interval for λ. (The appropriate χ² value has 1 associated degree of
freedom.)
1.11. Below are values of a critical dimension (in .0001 inch above nominal)
measured on hourly samples of size n = 5 precision metal parts taken
from the output of a CNC (computer numerically controlled) lathe.

sample   measurements
1        4,3,3,2,3
2        2,2,3,3,2
3        4,1,0,1,0
4        2,0,2,1,4
5        2,2,1,3,4
6        2,2,2,1,2
7        0,0,0,2,0
8        1,1,2,0,2

(a) Compute for each of these samples the raw sample standard deviation (ignoring rounding) and the Sheppard's correction standard
deviation that is appropriate for integer-rounded data. How do these
compare for the eight samples above?

(b) For each of the samples that have a range of at least 2, use the CONEST program to find rounded normal data maximum likelihood
estimates of the normal parameters μ and σ. The program as written accepts observations ≥ 1, so you will need to add an integer to
each element of some of the samples above before doing calculation
with the program. (I don't remember, but you may not be able to input a standard deviation of exactly 0 either.) How do the maximum
likelihood estimates of μ compare to x̄ values? How do the maximum likelihood estimates of σ compare to both the raw standard
deviations and to the results of applying Sheppard's correction?

(c) Consider sample #2. Make 95% and 90% confidence intervals for
both μ and σ using the work of Johnson Lee.

(d) Consider sample #1. Use the CONEST program to get a few approximate values for L*(μ) and some approximate values for L*(σ).
(For example, look at a contour plot of L over a narrow range of
means near μ to get an approximate value for L*(μ).) Sketch L*(μ)
and L*(σ) and use your sketches and Lee's tables to produce 95%
confidence intervals for μ and σ.

(e) What 95% confidence intervals for μ and σ would result from a 9th
sample, {2, 2, 2, 2, 2}?
1.12. A single operator measures a single widget diameter 15 times and obtains
a range of R = 3 × 10⁻⁴ inches. Then this person measures the diameters
of 12 different widgets once each and obtains a range of R = 8 × 10⁻⁴
inches. Give an estimated standard deviation of widget diameters (not
including measurement error).
1.13. Cylinders of (outside) diameter O must fit in ring bearings of (inside)
diameter I, producing clearance C = I − O. We would like to have
some idea of the variability in actual clearances that will be obtained by
random assembly of cylinders produced on one production line with ring
bearings produced on another. The gages used to measure I and O are
(naturally enough) different.

In a study using a single gage to measure outside diameters of cylinders,
n_O = 10 different cylinders were measured once each, producing a sample
standard deviation s_O = .001 inch. In a subsequent study, this same
gage was used to measure the outside diameter of an additional cylinder
m_O = 5 times, producing a sample standard deviation s_Ogage = .0005
inch.

In a study using a single gage to measure inside diameters of ring bearings,
n_I = 20 different inside diameters were measured once each, producing
a sample standard deviation s_I = .003 inch. In a subsequent study, this
same gage was used to measure the inside diameter of another ring bearing
m_I = 10 times, producing a sample standard deviation s_Igage = .001 inch.

(a) Give a sensible (point) estimate of the standard deviation of C produced under random assembly.

(b) Find a sensible standard error for your estimate in (a).
2 Process Monitoring

Methods
2.1. Consider the following hypothetical situation. A variables process monitoring scheme is to be set up for a production line, and two different
measuring devices are available for data gathering purposes. Device A
produces precise and expensive measurements and device B produces less
precise and less expensive measurements. Let σ_measurement for the two
devices be respectively σ_A and σ_B, and suppose that the target for a particular critical diameter for widgets produced on the line is 200.0.
2.2. The following are some data taken from a larger set in Statistical Quality
Control by Grant and Leavenworth, giving the drained weights (in ounces)
of contents of size No. 2½ cans of standard grade tomatoes in puree. 20
samples of three cans taken from a canning process at regular intervals
are represented.
Sample   x1     x2     x3       Sample   x1     x2     x3
1        22.0   22.5   22.5     11       20.0   19.5   21.0
2        20.5   22.5   22.5     12       19.0   21.0   21.0
3        20.0   20.5   23.0     13       19.5   20.5   21.0
4        21.0   22.0   22.0     14       20.0   21.5   24.0
5        22.5   19.5   22.5     15       22.5   19.5   21.0
6        23.0   23.5   21.0     16       21.5   20.5   22.0
7        19.0   20.0   22.0     17       19.0   21.5   23.0
8        21.5   20.5   19.0     18       21.0   20.5   19.5
9        21.0   22.5   20.0     19       20.0   23.5   24.0
10       21.5   23.0   22.0     20       22.0   20.5   21.0
(a) Suppose that standard values for the process mean and standard deviation of drained weights (μ and σ) in this canning plant are 21.0 oz
and 1.0 oz respectively. Make and interpret standards given x̄ and R
charts based on these samples. What do these charts indicate about
the behavior of the filling process over the time period represented
by these data?

(b) As an alternative to the standards given range chart made in part
(a), make a standards given s chart based on the 20 samples. How
does its appearance compare to that of the R chart?

Now suppose that no standard values for μ and σ have been provided.

(c) Find one estimate of σ for the filling process based on the average
of the 20 sample ranges, R̄, and another based on the average of 20
sample standard deviations, s̄.

(d) Use x̿ and your estimate of σ based on R̄ and make retrospective
control charts for x̄ and R. What do these indicate about the stability
of the filling process over the time period represented by these data?
2.3. Below are counts of defective cans found in 20 samples of a common size
taken periodically from the output of a canning line.

Sample   Defectives     Sample   Defectives
1        6              11       7
2        7              12       7
3        5              13       6
4        7              14       6
5        5              15       6
6        5              16       6
7        4              17       23
8        5              18       10
9        12             19       8
10       6              20       5
(a) Suppose that company standards are that on average p = .02 of the
cans are defective. Use this value and make a standards given p chart
based on the data above. Does it appear that the process fraction
defective was stable at the p = .02 value over the period represented
by these data?

(b) Make a retrospective p chart for these data. What is indicated by
this chart about the stability of the canning process?
2.4. Modern business pressures are making standards for fractions nonconforming in the range of 10⁻⁴ to 10⁻⁶ not uncommon.

(a) What are standards given 3σ control limits for a p chart with standard fraction nonconforming 10⁻⁴ and sample size 100? What is the
all-OK ARL for this scheme?

(b) If p becomes twice the standard value (of 10⁻⁴), what is the ARL
for the scheme from (a)? (Use your answer to (a) and the binomial
distribution for n = 100 and p = 2 × 10⁻⁴.)

(c) What do (a) and (b) suggest about the feasibility of doing process
monitoring for very small fractions defective based on attributes
data?
2.5. Suppose that a dimension of parts produced on a certain machine over a
short period can be thought of as normally distributed with some mean μ
and standard deviation σ = .005 inch. Suppose further that values of
this dimension more than .0098 inch from the 1.000 inch nominal value
are considered nonconforming. Finally, suppose that hourly samples of 10
of these parts are to be taken.
(a) If μ is exactly on target (i.e. μ = 1.000 inch), about what fraction of
parts will be nonconforming? Is it possible for the fraction nonconforming to ever be any less than this figure?

(b) One could use a p chart based on n = 10 to monitor process performance in this situation. What would be standards given 3 sigma
control limits for the p chart, using your answer from part (a) as the
standard value of p?

(c) What is the probability that a particular sample of n = 10 parts will
produce an out-of-control signal on the chart from (b) if μ remains
at its standard value of μ = 1.000 inch? How does this compare to
the same probability for a 3 sigma x̄ chart for an n = 10 setup with
a center line at 1.000? (For the p chart, use a binomial probability
calculation. For the x̄ chart, use the facts that μ_x̄ = μ and σ_x̄ = σ/√n.)
What are the ARLs of the monitoring schemes under these
conditions?

(d) Compare the probability that a particular sample of n = 10 parts
will produce an out-of-control signal on the p chart from (b) to the
probability that the sample will produce an out-of-control signal on
the (n = 10) 3 sigma x̄ chart first mentioned in (c), supposing that in
fact μ = 1.005 inch. What are the ARLs of the monitoring schemes
under these conditions? What moral is told by your calculations here
and in part (c)?
2.6. The article "High Tech, High Touch," by J. Ryan, which appeared in Quality Progress in 1987, discusses the quality enhancement processes used by
Martin Marietta in the production of the space shuttle external (liquid
oxygen) fuel tanks. It includes a graph giving counts of major hardware
nonconformities for each of 41 tanks produced. The accompanying data
are approximate counts read from that graph for the last 35 tanks. (The
first six tanks were of a different design than the others and are thus not
included here.)
Tank   Nonconformities     Tank   Nonconformities
1      537                 19     157
2      463                 20     120
3      417                 21     148
4      370                 22     65
5      333                 23     130
6      241                 24     111
7      194                 25     65
8      185                 26     74
9      204                 27     65
10     185                 28     148
11     167                 29     74
12     157                 30     65
13     139                 31     139
14     130                 32     213
15     130                 33     222
16     267                 34     93
17     102                 35     194
18     130
2.7. A group of students studied a visual inspection process, placing known
numbers of "marked" defective vials into the process at each of ten periods
and recording how many of them were detected. Their data follow.

detected   6    10   15   18   17   2    7    5    6    5
marked     30   30   30   30   30   15   15   15   15   15
fraction   .2   .33  .5   .6   .57  .13  .47  .33  .4   .33

(Overall, 91 of the 225 marked vials placed into the inspection process
were detected/captured.)

(a) Carefully investigate (and say clearly) whether there is evidence in
these data of instability in the defect detection rate.

(b) 91/225 = .404. Do you think that the company these students worked
with was likely satisfied with the 40.4% detection rate? What, if
anything, does your answer here have to do with the analysis in (a)?
Suppose that one uses k₁ = 8 and h₁ = 10. Use the normal approximation
to the binomial distribution to obtain an approximate ARL for this scheme
if p = .025.
2.10. Consider the monitoring of a process that we will assume produces normally distributed observations X with standard deviation σ = .04.

(a) Set up both a two-sided CUSUM scheme and an EWMA scheme for
monitoring the process (Q = X), using a target value of .13 and a
desired all-OK ARL of roughly 370, if quickest possible detection of
a change in mean of size δ = .02 is desired.

(b) Plot on the same set of axes the logarithms of the ARLs for your
charts from (a) as functions of μ, the real mean of observations being
CUSUMed or EWMAed. Also plot on this same set of axes the
logarithms of ARLs for a standard 3σ Shewhart Chart for individuals.
Comment upon how the 3 ARL curves compare.
2.11. Shear strengths of spot welds made by a certain robot are approximately
normal with a short term variability described by σ = 60 lbs. The
strengths in samples of n of these welds are going to be obtained and
x̄ values CUSUMed.

(a) Give a reference value k₂, sample size n and a decision interval h₂
so that a one-sided (lower) CUSUM scheme for the x̄'s will have an
ARL of about 370 if μ = 800 lbs and an ARL of about 5 if μ = 750
lbs.

(b) Find a sample size and a lower Shewhart control limit for x̄, say #,
so that if μ = 800 lbs, there will be about 370 samples taken before an
x̄ will plot below #, and if μ = 750 there will be on average about 5
samples taken before an x̄ will plot below #.
2.12. You have data on the efficiency of a continuous chemical production process.
The efficiency is supposed to be about 45%, and you will use a CUSUM
scheme to monitor the efficiency. Efficiency is computed once per shift,
but from much past data, you know that σ ≈ .7%.

(a) If you wish quickest possible detection of a shift of .7% (one standard
deviation) in mean efficiency, design a two-sided CUSUM scheme for
this situation with an all-OK ARL of about 500.
(b) Apply your procedure from (a) to the data below. Are any alarms
signaled?

Shift        1     2     3     4     5     6     7     8     9     10
Efficiency   45.7  44.6  45.0  44.4  44.4  44.2  46.1  44.6  45.7  44.4

Shift        11    12    13    14    15    16    17    18    19
Efficiency   45.8  45.4  46.8  45.5  45.8  46.4  46.0  46.3  45.6

(c) Make a plot of raw CUSUMs using a reference value of 45%. From
your plot, when do you think that the mean efficiency shifted away
from 45%?

(d) What are the all-OK and μ = 45.7% ARLs if one employs your
procedure from (a) modified by giving both the high and low side
charts head starts of u = v = h₁/2 = h₂/2?

(e) Repeat part (a) using an EWMA scheme rather than a CUSUM scheme.

(f) Apply your procedure from (e) to the data. Are any alarms signaled?
Plot your EWMA values. Based on this plot, when do you think that
the mean efficiency shifted away from 45%?
2.13. Below are some EWMA weights λ (one for each of the sample sizes n = 1
through 9) to consider in designing an EWMA competitor to the CUSUM
scheme of Problem 2.11.

n   1     2     3     4     5     6     7     8     9
λ   .14   .08   .06   .05   .05   .04   .04   .04   .03

Use Crowder's EWMA ARL program (and some trial and error) to find
values of K that when used with the λs above will produce an on-target
ARL of 370. Then determine how large n must then be in order to meet
the 370 and 5.0 ARL requirements. How does this compare to what Table
4.8 says is needed for a two-sided CUSUM to meet the same criteria?
2.14. Consider a combination of high and low side decision interval CUSUM
schemes with h₁ = h₂ = 2.5, u = 1, v = −1, k₁ = .5 and k₂ = −.5.
Suppose that Q's are iid normal variables with σ_Q = 1.0. Find the ARLs
for the combined scheme if μ_Q = 0 and then if μ_Q = 1.0. (You will need to
use Gan's CUSUM ARL program and Yashchin's expression for combining
high and low side ARLs.)

2.15. Set up two different X/MR monitoring chart pairs for normal variables
Q, in the case where the standards are μ_Q = 5 and σ_Q = 1.715 and the all-OK ARL desired is 250. For these combinations, what ARLs are relevant
if in fact μ_Q = 5.5 and σ_Q = 2.00? (Run Crowder's X/MR ARL program
to get these with minimum interpolation.)
2.16. If one has discrete or rounded data and insists on using x̄ and/or R charts,
1.7.1 shows how these may be based on the exact all-OK distributions
of x̄ and/or R (and not on normal theory control limits). Suppose that
measurements arise from integer rounding of normal random variables
with μ = 2.25 and σ = .5 (so that essentially only values 1, 2, 3 and 4 are
ever seen). Compute the four probabilities corresponding to these rounded
values (and fudge them slightly so that they total to 1.00). Then, for
n = 4 compute the probability distributions of x̄ and R based on iid
observations from this distribution. Then run Karen (Jensen) Hulting's
DIST program and compare your answers to what her program produces.
2.17. Suppose that standard values of process parameters are μ = 17 and σ = 2.4.

(a) Using sample means x̄ based on samples of size n = 4, design both
a combined high and low side CUSUM scheme (with 0 head starts)
and an EWMA scheme to have an all-OK ARL of 370 and quickest
possible detection of a shift in process mean of size .6.

(b) If, in fact, the process mean is μ = 17.5 and the process standard
deviation is σ = 3.0, show how you would find the ARL associated
with your schemes from (a). (You don't need to actually interpolate
in the tables, but do compute the values you would need in order to
enter the tables, and say which tables you must employ.)
Theory
2.21. Consider the problem of samples of size n = 1 in variables control charting
contexts, and the notion of there using moving ranges for various purposes.
This problem considers a little theory that may help illustrate the implications of using an average moving range, M̄R, in the estimation of σ in
such circumstances.
Suppose that X₁ and X₂ are independent normal random variables with a
common variance σ², but possibly different means μ₁ and μ₂. (You may, if
you wish, think of these as widget diameters made at times 1 and 2, where
the process mean has potentially shifted between the sampling periods.)

(a) What is the distribution of X₁ − X₂? The distribution of (X₁ − X₂)/σ?

(b) With MR = |X₁ − X₂| the moving range of the two observations, find
an expression for P[MR/σ ≤ t] in terms of the standard normal cdf Φ
and the quantity δ = (μ₁ − μ₂)/σ.

(c) Notice that in part (b), you have found the cumulative distribution
function for the random variable MR/σ. Differentiate your answer
to (b) to find the probability density for MR/σ and then use this
probability density to write down an integral that gives the mean of
the random variable MR/σ, E(MR/σ). (You may abbreviate the
standard normal pdf as φ, rather than writing everything out.)
Vardeman used his trusty HP 15C (and its definite integral routine)
and evaluated the integral in (c) for various values of δ. Some values
that he obtained are below.

δ          0        .1       .2       .3       .4       .5       1.0      1.5
E(MR/σ)    1.1284   1.1312   1.1396   1.1537   1.1732   1.198    1.399    1.710

δ          2.0      2.5      3.0      3.5      4.0      large |δ|
E(MR/σ)    2.101    2.544    3.017    3.506    4.002    ≈ |δ|

(Notice that as expected, the δ = 0 value is d₂ for a sample of size
n = 2.)

(d) Based on the information above, argue that for n independent normal
random variables X₁, X₂, ..., Xₙ with common standard deviation σ,
if μ₁ = μ₂ = ··· = μₙ then the sample average moving range, M̄R,
when divided by 1.1284 has expected value σ.

(e) Now suppose that instead of being constant, the successive means
μ₁, μ₂, ..., μₙ in fact exhibit a reasonably strong linear trend. That
is, suppose that μₜ = μₜ₋₁ + δσ. What is the expected value of
M̄R/1.1284 in this situation? Does M̄R/1.1284 seem like a sensible estimate of σ here?
(f) In a scenario where the means could potentially "bounce around"
according to μₜ = μₜ₋₁ ± kσ, how large might k be without destroying the usefulness of M̄R/1.1284 as an estimate of σ? Defend your
opinion on the basis of the information contained in the table above.
2.22. Consider the kind of discrete time Markov Chain with a single absorbing
state used in 2.1 to study the run length properties of process monitoring
schemes. Suppose that one wants to know not the mean times to absorption from the nonabsorbing states, but the variances of those times. Since
for a generic random variable X, Var X = EX² − (EX)², once one has mean
times to absorption (belonging to the vector L = (I − R)⁻¹1) it suffices
to compute the expected squares of times to absorption. Let M be an
m × 1 vector containing expected squares of times to absorption (from
states S₁ through Sₘ). Set up a system of m equations for the elements
of M in terms of the elements of R, L and M. Then show that in matrix
notation

$$M = (I - R)^{-1}\left(I + 2R(I - R)^{-1}\right)1 .$$
2.23. So-called "Stop-light Control" or "Target Area Control" of a measured
characteristic X proceeds as follows. One first defines "Green" (OK),
"Yellow" (Marginal) and "Red" (Unacceptable) regions of possible values
of X. One then periodically samples a process according to the following
rules. At a given sampling period, a single item is measured and if it
produces a Green X, no further action is necessary at the time period in
question. If it produces a Red X, lack of control is declared. If it produces
a Yellow X, a second item is immediately sampled and measured. If this
second item produces a Green X, no further action is taken at the period
in question, but otherwise lack of control is declared.

Suppose that in fact a process under stop-light monitoring is stable and
p_G = P[X is Green], p_Y = P[X is Yellow] and p_R = 1 − p_G − p_Y = P[X
is Red].

(a) Find the mean number of sampling periods from the beginning of
monitoring through the first out-of-control signal, in terms of the p's.

(b) Find the mean total number of items measured from the beginning
of monitoring through the first out-of-control signal, in terms of the
p's.
2.24. Consider the Run-Sum control chart scheme discussed in 2.2. In the notes,
Vardeman wrote out a transition matrix for a Markov Chain analysis of
the behavior of this scheme.

(a) Write out the corresponding system of 8 linear equations in 8 mean
times to absorption for the scheme. Note that the mean times till
signal from "T = +0" and "T = −0" states are the same linear
combinations of the 8 mean times and must thus be equal.

(b) Find a formula for the ARL of this scheme. This can be done as
follows. Use the equations for the mean times to absorption from
states "T = +3" and "T = +2" to find a constant c₊₂,₊₃ such that
L₊₃ = c₊₂,₊₃ L₊₂. Find similar constants c₊₁,₊₂, c₊₀,₊₁, c₋₂,₋₃,
c₋₁,₋₂ and c₋₀,₋₁. Then use these constants to write a single linear
equation for L₊₀ = L₋₀ that you can solve for L₊₀ = L₋₀.
2.25. Consider the problem of monitoring

X = the number of nonconformities on a widget.

Suppose the standard for λ is so small that a usual 3σ Shewhart control
chart will signal any time Xₜ > 0. On intuitive grounds the engineers
involved find such a state of affairs unacceptable. The replacement for the
standard Shewhart scheme that is then being contemplated is one that
signals at time t if

i) Xₜ ≥ 2

or

ii) Xₜ = 1 and any of Xₜ₋₁, Xₜ₋₂, Xₜ₋₃ or Xₜ₋₄ is also equal
to 1.

Show how you could find an ARL for this scheme. (Give either a matrix
equation or system of linear equations one would need to solve. State
clearly which of the quantities in your set-up is the desired ARL.)
2.26. Consider a discrete distribution on the (positive and negative) integers
specified by the probability function p(·). This distribution will be used
below to help predict the performance of a Shewhart type monitoring
scheme that will sound an alarm the first time that an individual observation Xₜ is 3 or more in absolute value (that is, the alarm bell rings the
first time that |Xₜ| ≥ 3).

(a) Give an expression for the ARL of the scheme in terms of values of
p(·), if observations X₁, X₂, X₃, ... are iid with probability function
p(·).
(b) Carefully set up and show how you would use a transition matrix
for an appropriate Markov Chain in order to find the ARL of the
scheme under a model for the observations X₁, X₂, X₃, ... specified
as follows:

X₁ has probability function p(·), and given X₁, X₂, ..., Xₜ₋₁,
the variable Xₜ has probability function p(· − Xₜ₋₁).

You need not carry out any matrix manipulations, but be sure to
fully explain how you would use the matrix you set up.
2.27. Consider the problem of finding ARLs for a Shewhart individuals chart
supposing that observations X₁, X₂, X₃, ... are not iid, but rather realizations from a so-called AR(1) model. That is, suppose that in fact for some
ρ with |ρ| < 1,

Xₜ = ρXₜ₋₁ + εₜ

for a sequence of iid normal random variables ε₁, ε₂, ... each with mean 0
and variance σ². Notice that under this model the conditional distribution
of Xₜ₊₁ given all previous observations is normal with mean ρXₜ and
variance σ².
$$L'(u) = \begin{cases} L(u) - L(0) - 1 & \text{for } 0 \le u \le k_1 \\ L(u) - L(u - k_1) - 1 & \text{for } k_1 \le u \end{cases}$$

(Vardeman and Ray (Technometrics, 1985) solve this differential equation and a similar one for low side CUSUMs to obtain ARLs for
exponential Q.)

(b) Suppose that one decides to approximate high side exponential CUSUM
ARLs by using simple numerical methods to solve (approximately)
the integral equation discussed in class. For the case of k₁ = 1.5 and
h₁ = 4.0, write out the R matrix (in the equation L = 1 + RL) one
has using the quadrature rule defined by m = 8, aᵢ = (2i − 1)h₁/2m
and each wᵢ = h₁/m.

(c) Consider making a Markov Chain approximation to the ARL referred
to in part (b). For m = 8 and the discretization discussed in class,
write out the R matrix that would be used in this case. How does
this matrix compare to the one in part (b)?
2.31. Consider the problem of determining the run length properties of a high
side CUSUM scheme with head start u, reference value k and decision
interval h if iid continuous observations Q₁, Q₂, ... with common probability density f and cdf F are involved. Let T be the run length variable.
In class, Vardeman concentrated on L(u) = ET, the ARL of the scheme.
But other features of the run length distribution might well be of interest
in some applications.

(a) The variance of T, Var T = ET² − L²(u), might also be of importance
in some instances. Let M(u) = ET² and argue very carefully that
M(u) must satisfy the integral equation

$$M(u) = 1 + (M(0) + 2L(0))F(k - u) + \int_0^h (M(s) + 2L(s))f(s + k - u)\,ds .$$

(Once one has found L(u), this gives an integral equation that can
be solved for M(u), leading to values for Var T, since then Var T =
M(u) − L²(u).)

(b) Let P(t; u) be the probability that the run length for the scheme with
head start u is exactly t. Argue that for t ≥ 2,

$$P(t; u) = P(t-1; 0)F(k - u) + \int_0^h P(t-1; s)f(s + k - u)\,ds .$$
(c) The equations from (b) can be solved simultaneously for M1 and M2 .
Express the variance of the run length for the Wetherill scheme in
terms of M1 , M2 , L1 and L2 .
2.33. Consider a Shewhart control chart with the single extra alarm rule "signal
if 2 out of any 3 consecutive points fall between 2σ and 3σ limits on one
side of the center line." Suppose that points Q₁, Q₂, Q₃, ... are to be
plotted on this chart and that the Q's are iid.

Use the notation

p_A = the probability Q₁ falls outside 3σ limits,
p_B = the probability Q₁ falls between 2σ and 3σ limits above the center line,
p_C = the probability Q₁ falls between 2σ and 3σ limits below the center line,
p_D = the probability Q₁ falls inside 2σ limits,

and set up a Markov Chain that you can use to find the ARL of this scheme
under the iid model for the Q's. (Be sure to carefully and completely define
your state space, write out the proper transition matrix and indicate which
entry of (I − R)⁻¹1 gives the desired ARL.)
2.34. A process has a "good" state and a "bad" state. Suppose that when in
the good state, the probability that an observation on the process plots
outside of control limits is g, while the corresponding probability for the
bad state is b. Assume further that if the process is in the good state at
time t − 1, there is a probability d of degradation to the bad state before
an observation at time t is made. (Once the process moves into the bad
state it stays there until that condition is detected via process monitoring
and corrected.) Find the ARL/mean time to alarm, if the process is in
the good state at time t = 0 and observation starts at time t = 1.
2.35. Consider the following (nonstandard) process monitoring scheme for a
variable X that has ideal value 0. Suppose h(x) > 0 is a function with
h(x) = h(−x) that is decreasing in |x|. (h has its maximum at 0 and
decreases symmetrically as one moves away from 0.) Then suppose that

i) control limits for X₁ are ±h(0),

and

ii) for t > 1, control limits for Xₜ are ±h(Xₜ₋₁).

(Control limits vary. The larger that |Xₜ₋₁| is, the tighter are the limits
on Xₜ.) Discuss how you would find an ARL for this scheme for iid X
with marginal probability density f. (Write down an appropriate integral
equation, briefly discuss how you would go about solving it and what you
would do with the solution in order to find the desired ARL.)
3 Engineering Control and Stochastic Control Theory

3.1. Consider the use of the PI(D) controller ΔX(t) = .5E(t) + .25ΔE(t) in a
situation where the control gain, G, is 1 and the target for the controlled
variable is T(t) ≡ 0. Suppose that no control actions are applied before the
time t = 0, but that for t ≥ 0, E(t) and ΔE(t) are used to make changes
in the manipulated variable, X(t), according to the above equation.
Suppose further that the value of the controlled variable, Y(t), is the sum
of what the process would do with no control, say Z(t), and the sum of
effects at time t of all changes in the manipulated variable made in previous
periods based on E(0), ΔE(0), E(1), ΔE(1), E(2), ΔE(2), ..., E(t−1),
ΔE(t−1).

Consider 3 possible patterns of impact at time s of a change in the manipulated variable made at time t, ΔX(t):

Pattern 1: The effect on Y(s) is 1·ΔX(t) for all s ≥ t + 1 (a control action takes its
full effect immediately).

Pattern 2: The effect on Y(t+1) is 0, but the effect on Y(s) is 1·ΔX(t) for all
s ≥ t + 2 (there is one period of dead time, after which a control action
immediately takes its full effect).

Pattern 3: The effect on Y(s) is 1·(1 − 2^{t−s})ΔX(t) for all s ≥ t + 1 (there is an
exponential/geometric pattern in the way the impact of ΔX(t) is felt,
the full effect only being seen for large s).
3.2. Consider again the PI(D) controller of Problem 3.1. Suppose that the
target is T(t), where T(t) = 0 for t ≤ 5 and T(t) = 3 for t > 5. For
Pattern 1 of impact of control actions and Patterns A, B and C for Z(t),
make up tables giving at times t = −1, 0, 1, 2, ..., 10 the values of Z(t),
T(t), E(t), ΔE(t), ΔX(t) and Y(t).

3.3. Consider again the PI(D) controller of Problem 3.1 and

Pattern D:

For Patterns 1 and 2 of impact of control actions, make up tables giving
at times t = −1, 0, 1, 2, ..., 10 the values of Z(t), T(t), E(t), ΔE(t), ΔX(t)
and Y(t).
3.4. There are two tables here giving some values of an uncontrolled process
Z(t) that has target T(t) ≡ 0. Suppose that a manipulated variable X is
available and that the simple (integral-only) control algorithm

ΔX(t) = E(t)

will be employed, based on an observed process Y(t) that is the sum of
Z(t) and the effects of all relevant changes in X.

Consider two different scenarios:

(a) a change of ΔX in the manipulated variable impacts all subsequent
values of Y(t) by the addition of an amount ΔX, and

(b) there is one period of dead time, after which a change of ΔX in the
manipulated variable impacts all subsequent values of Y(t) by the
addition of an amount ΔX.

Fill in the two tables according to these two scenarios and then comment
on the lesson they seem to suggest about the impact of dead time on the
effectiveness of PID control.

Table 6.2: Table for Problem 3.4(a), One Period of Dead Time

t    Z(t)   T(t)   Y(t)   E(t) = ΔX(t)
0    1      0      1
1    1      0
2    1      0
3    1      0
4    1      0
5    1      0
6    1      0
7    1      0
8    1      0
9    1      0
3.5. On pages 87 and 88, V&J suggest that over-adjustment of a process will
increase rather than decrease variation. In this problem we will investigate this notion mathematically. Imagine periodically sampling a widget
produced by a machine and making a measurement yᵢ. Conceptualize the
situation as

yᵢ = μᵢ + εᵢ

where
μ_i = the true machine setting (or widget diameter) at time i

and

ε_i = random variability at time i affecting only measurement i.

Further, suppose that the (coded) ideal diameter is 0 and μ_i is the sum of natural machine drift and adjustments applied by an operator up through time i. That is, with

δ_i = the machine drift between time i − 1 and time i

and

Δ_i = the operator's (or automatic controller's) adjustment applied between time i − 1 and time i,

μ_j = Σ_{i=1}^{j} δ_i + Σ_{i=1}^{j} Δ_i .
We will here consider the (integral-only) adjustment policies for the machine

Δ_i = −α y_{i−1}   for an α ∈ [0, 1].

It is possible to verify that for j ≥ 1,

if α = 0:       y_j = Σ_{i=1}^{j} δ_i + ε_j ,
if α = 1:       y_j = δ_j − ε_{j−1} + ε_j , and
if α ∈ (0, 1):  y_j = Σ_{i=1}^{j} δ_i (1−α)^{j−i} − α Σ_{i=1}^{j} ε_{i−1} (1−α)^{j−i} + ε_j .

Model ε_0, ε_1, ε_2, ... as independent random variables with mean 0 and variance σ² and consider predicting the likely effectiveness of the adjustment policies by finding lim_{j→∞} E μ_j². (E μ_j² is a measure of how close to proper adjustment a policy holds the machine in the long run.)
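A small Monte Carlo study makes the over-adjustment point concrete. The sketch below assumes no drift (all δ_i = 0) and iid standard normal ε_i; under those assumptions the policy produces the recursion μ_j = (1−α)μ_{j−1} − α ε_{j−1}, whose limiting mean square works out to α σ²/(2−α), so any adjustment of an already-stable process inflates variation:

import random

def mean_mu_sq(alpha, j=400, reps=4000, sigma=1.0):
    """Monte Carlo estimate of E mu_j^2 under delta_i = 0 and iid eps_i."""
    total = 0.0
    for _ in range(reps):
        mu, eps_prev = 0.0, random.gauss(0.0, sigma)   # eps_0
        for _ in range(j):
            # Delta_i = -alpha * y_{i-1} = -alpha * (mu_{i-1} + eps_{i-1})
            mu = (1.0 - alpha) * mu - alpha * eps_prev
            eps_prev = random.gauss(0.0, sigma)
        total += mu * mu
    return total / reps

for a in (0.0, 0.25, 0.5, 1.0):
    theory = a / (2.0 - a)   # limiting E mu_j^2 when there is no drift
    print(f"alpha={a:4.2f}  simulated={mean_mu_sq(a):.3f}  theory={theory:.3f}")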
E_F[Z(t+1) | Z^t] = (1/(t+2)) Σ_{j=0}^{t} (−1)^j (t+1−j) Z(t−j)

while

E_F[Z(s) | Z^t] = 0 for s ≥ t + 2.
If T(t) ≡ 0, find optimal (MV) control strategies for two different situations involving numerical process adjustments a.

(a) First suppose that A(a, s) = a for all s ≥ 1. (Note that in the limit as t → ∞, the MV controller is a proportional-only controller.)

(b) Then suppose the impact of a control action is similar to that in (a), except that there is one period of delay, i.e.

A(a, s) = a for s ≥ 2 and A(a, s) = 0 for s = 1.

(You should decide that a(t) = 0 is optimal.)
(c) For the situation without dead time in part (a), write out Y(t) in terms of the ε's. What are the mean and variance of Y(t)? How do these compare to the mean and variance of Z(t)? Would you say from this comparison that the control algorithm is effective in directing the process to the target T(t) ≡ 0?

(d) Again for the situation of part (a), consider the matter of process monitoring for a change from the model of this problem (that ought to be greeted by a revision of the control algorithm or some other appropriate intervention). Argue that after some start-up period it makes sense to Shewhart chart the Y(t)'s, treating them as essentially iid Normal (0, σ²) if all is OK. (What is the correlation between Y(t) and Y(t−1)?)
3.7. Consider the optimal stochastic control problem as described in 3.1 with Z(t) an iid normal (0, 1) sequence of random variables, control actions a ∈ (−∞, ∞), A(a, s) = a for all s ≥ 1 and T(s) ≡ 0 for all s. What do you expect the optimal (minimum variance) control strategy to turn out to be? Why?
3.8. (Vander Wiel) Consider a stochastic control problem with the following elements. The (stochastic) model, F, for the uncontrolled process, Z(t), will be

Z(t) = φ Z(t−1) + ε(t)

where the ε(t) are iid normal (0, σ²) random variables and φ is a (known) constant with absolute value less than 1. (Z(t) is a first order autoregressive process.) For this model,

E_F[Z(t+1) | ..., Z(−1), Z(0), Z(1), ..., Z(t)] = φ Z(t).

For the function A(a, s) describing the effect of a control action a taken s periods previous, we will use A(a, s) = a λ^(s−1) for another known constant 0 < λ < 1 (the effect of an adjustment made at a given period dies out geometrically).
Carefully find a(0), a(1), and a(2) in terms of a constant target value T and Z(0), Y(1) and Y(2). Then argue that in general

a(t) = T (1 + (φ − λ) Σ_{s=0}^{t−1} φ^s) − φ Y(t) − (φ − λ) Σ_{s=1}^{t} φ^s Y(t−s).
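Assuming the reconstruction of the displayed formula above, a quick simulation can sanity-check it: if a(t) really is the minimum variance choice, the controlled process should satisfy Y(t) = T + ε(t) for t ≥ 1, i.e. sit on target with variance σ². A sketch (the values of φ, λ, σ and T are arbitrary illustrative choices):

import random
import statistics

phi, lam, sigma, T = 0.6, 0.3, 1.0, 2.0
steps = 20000
z = random.gauss(0.0, sigma / (1 - phi**2) ** 0.5)   # Z(0), stationary start
effect = 0.0   # sum_{s>=1} a(t-s) lam^(s-1), control effect carried into time t
Ssum = 0.0     # running value of sum_{s=1}^{t} phi^s Y(t-s)
Gsum = 0.0     # running value of sum_{s=0}^{t-1} phi^s
ys = []
for t in range(steps):
    y = z + effect
    ys.append(y)
    a = T * (1 + (phi - lam) * Gsum) - phi * y - (phi - lam) * Ssum
    effect = lam * effect + a      # update accumulators for time t + 1
    Ssum = phi * (Ssum + y)
    Gsum = 1 + phi * Gsum
    z = phi * z + random.gauss(0.0, sigma)

print("mean of Y (should be near T):      ", statistics.fmean(ys[1:]))
print("variance of Y (should be near 1.0):", statistics.variance(ys[1:]))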
3.9. Consider the following stochastic control problem. The stochastic model, F, for the uncontrolled process Z(t), will be

Z(t) = ct + ε(t)

where c is a known constant and the ε(t)'s are iid normal (0, σ²) random variables. (The Z(t) process is a deterministic linear trend seen through iid/white noise.) For the function A(a, s) describing the effect of a control action a taken s periods previous, we will use A(a, s) = (1 − 2^(−s)) a for all s ≥ 1. Suppose further that the target value for the controlled process is T = 0 and that control begins at time 0 (after observing Z(0)).

(a) Argue carefully that Ẑ(t) = E_F[Z(t+1) | ..., Z(−1), Z(0), Z(1), ..., Z(t)] = c(t+1).
3.12. A process has a Good state and a Bad state. Every morning a gremlin tosses a coin with P[Heads] = u > .5 that governs how states evolve day to day. Let

C_i = P[change state on day i from that on day i−1].

Each C_i is either u or 1 − u.

(a) Before the gremlin tosses the coin on day i, you get to choose whether

C_i = u (so that Heads ⟹ change)

or

C_i = 1 − u (so that Heads ⟹ no change).

(You either apply some counter-measures or let the process evolve naturally.) Your object is to see that the process is in the Good state as often as possible. What is your optimal strategy? (What should you do on any morning i? This needs to depend upon the state of the process from day i−1.)
(b) If all is as described here, the evolution of the states under your optimal strategy from (a) is easily described in probabilistic terms. Do so. Then describe in rough/qualitative terms how you might monitor the sequence of states to detect the possibility that the gremlin has somehow changed the rules of process evolution on you.

(c) Now suppose that there is a one-day time delay in your counter-measures. Before the gremlin tosses his coin on day i you get to choose only whether

C_{i+1} = u   or   C_{i+1} = 1 − u.

(You do not get to choose C_i on the morning of day i.) Now what is your optimal strategy? (What you should choose on the morning of day i depends upon what you already chose on the morning of day i−1 and whether the process was in the Good state or in the Bad state on day i−1.) Show appropriate calculations to support your answer.
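For part (b), one plausible strategy in (a) is to always pick the coin-labeling that sends probability u toward the Good state (choose C_i = 1 − u when yesterday was Good, C_i = u when it was Bad). The two-state transition matrix bookkeeping is then trivial to check numerically, as in this sketch (u is an arbitrary illustrative value):

import numpy as np

u = 0.7   # illustrative value of P[Heads] > .5
# Under the strategy above: from Good, stay Good w.p. u;
# from Bad, move to Good w.p. u.  States ordered (Good, Bad):
P = np.array([[u, 1 - u],
              [u, 1 - u]])
# stationary distribution: left eigenvector of P for eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print("long-run (P[Good], P[Bad]):", pi)   # works out to (u, 1 - u)

Because both rows of P are identical, the daily states behave like iid Bernoulli(u) "Good" indicators under this strategy, which is the kind of probabilistic description part (b) asks for.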
Process Characterization
4.1. The following are depth measurements taken on n = 8 pump end caps. The units are inches.

4.9991, 4.9990, 4.9994, 4.9989, 4.9986, 4.9991, 4.9993, 4.9990

The specifications for this depth measurement were 4.999 ± .001 inches.

(a) As a means of checking whether a normal distribution assumption is plausible for these depth measurements, make a normal plot of these data. (Use regular graph paper and the method of Section 5.1.) Read an estimate of σ from this plot.

Regardless of the appearance of your plot from (a), henceforth suppose that one is willing to say that the process producing these depths is stable and that a normal distribution of depths is plausible.

(b) Give a point estimate and a 90% two-sided confidence interval for the process capability, 6σ.

(c) Give a point estimate and a 90% two-sided confidence interval for the process capability ratio Cp.

(d) Give a point estimate and a 95% lower confidence bound for the process capability ratio Cpk.

(e) Give a 95% two-sided prediction interval for the next depth measurement on a cap produced by this process.

(f) Give a 99% two-sided tolerance interval for 95% of all depth measurements of end caps produced by this process.
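For orientation, here is a sketch of the point estimates and the chi-square-based interval for 6σ in part (b); the Cp interval in (c) follows by dividing U − L by the interval endpoints, and (e) and (f) would use t multipliers and tolerance factors from the tables in V&J:

import numpy as np
from scipy import stats

x = np.array([4.9991, 4.9990, 4.9994, 4.9989, 4.9986, 4.9991, 4.9993, 4.9990])
L_spec, U_spec = 4.998, 5.000          # from 4.999 +/- .001
n, s = len(x), x.std(ddof=1)
print(f"xbar = {x.mean():.5f}, s = {s:.6f}")
print(f"point estimate of 6 sigma: {6 * s:.5f}")
# 90% two-sided interval for sigma, since (n-1)s^2/sigma^2 ~ chi-square(n-1)
lo = s * np.sqrt((n - 1) / stats.chi2.ppf(0.95, n - 1))
hi = s * np.sqrt((n - 1) / stats.chi2.ppf(0.05, n - 1))
print(f"90% CI for 6 sigma: ({6 * lo:.5f}, {6 * hi:.5f})")
print(f"Cp hat  = {(U_spec - L_spec) / (6 * s):.2f}")    # about 1.36
print(f"Cpk hat = {min(U_spec - x.mean(), x.mean() - L_spec) / (3 * s):.2f}")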
4.2. Below are the logarithms of the amounts (in ppm by weight) of aluminum
found in 26 bihourly samples of recovered PET plastic at a Rutgers University recycling plant taken from a JQT paper by Susan Albin. (In this
context, aluminum is an impurity.)
5.67, 5.40, 4.83, 4.37, 4.98, 4.78, 5.50, 4.77, 5.20, 4.14, 3.40, 4.94, 4.62,
4.62, 4.47, 5.21, 4.09, 5.25, 4.78, 6.24, 4.79, 5.15, 4.25, 3.40, 4.50, 4.74
(a) Set up and plot charts for a sensible monitoring scheme for these
values. (They are in order if one reads left to right, top to bottom.)
Caution: Simply computing a mean and sample standard deviation for these values and using limits for individuals of the form x̄ ± 3s does not produce a sensible scheme! Say clearly what you are doing and why.
(b) Suppose that (on the basis of an analysis of the type in (a) or otherwise) it is plausible to treat the 26 values above as a sample of size n = 26 from some physically stable normally distributed process. (Note x̄ ≈ 4.773 and s ≈ .632.)

i. Give a two-sided interval that you are 90% sure will contain the next log aluminum content of a sample taken at this plant. Transform this to an interval for the next raw aluminum content.

ii. Give a two-sided interval that you are 95% sure will contain 90% of all log aluminum contents. Transform this interval to one for raw aluminum contents.
(c) Rather than adopting the stable process model alluded to in part (b), suppose that it is only plausible to assume that the log purity process is stable for periods of about 10 hours, but that mean purities can change (randomly) at roughly ten hour intervals. Note that if one considers the first 25 values above to be 5 samples of size 5, some summary statistics are then as given below:

period    x̄      s      R
  1     5.050  .506   1.30
  2     4.878  .514   1.36
  3     4.410  .590   1.54
  4     5.114  .784   2.15
  5     4.418  .661   1.75

Based on the usual random effects model for this two-level nested/hierarchical situation, give reasonable point estimates of the within-period standard deviation and the standard deviation governing period to period changes in process mean.
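A sketch of the requested point estimates under the usual random effects model: the within-period standard deviation is estimated by pooling the five sample variances, and the period-to-period standard deviation by subtracting the within-period contribution from the variance of the period means (truncating at zero if needed):

import numpy as np

xbar = np.array([5.050, 4.878, 4.410, 5.114, 4.418])
s = np.array([0.506, 0.514, 0.590, 0.784, 0.661])
n = 5   # observations per period
s_within = np.sqrt(np.mean(s**2))                       # pooled within-period sd
var_between = max(xbar.var(ddof=1) - s_within**2 / n, 0.0)
print("estimated within-period sd:   ", s_within)       # about .62
print("estimated period-to-period sd:", np.sqrt(var_between))   # about .30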
4.3. A result of Wallis says that for x̄ and s based on n iid normal (μ, σ²) observations, x̄ + ks is approximately normal with mean μ + kσ and variance

σ² (1/n + k²/(2n)).

Use the Wallis approximation to the distribution of x̄ + ks and find k such that for x_1, x_2, ..., x_26 iid normal random variables, x̄ + ks is a 99% upper statistical tolerance bound for 95% of the population. (That is, find k so that with probability .99, x̄ + ks exceeds μ + 1.645σ.) How does your approximate value compare to the exact one given in Table A.9b?
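A sketch of the computation: under the Wallis approximation the tolerance-bound requirement P[x̄ + ks ≥ μ + 1.645σ] = .99 standardizes to (k − 1.645)/sqrt(1/n + k²/(2n)) = 2.326, which one can solve numerically:

from scipy.optimize import brentq
from scipy.stats import norm

n = 26
z95, z99 = norm.ppf(0.95), norm.ppf(0.99)

def g(k):
    # P[xbar + k s >= mu + z95 sigma] = .99 under the Wallis approximation
    return (k - z95) / ((1.0 / n + k * k / (2.0 * n)) ** 0.5) - z99

k = brentq(g, z95, 10.0)
print("approximate k:", k)   # about 2.60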
4.4. Consider the problem of pooling together samples of size n from, say, five different days to make inferences about all widgets produced during that period. In particular, consider the problem of estimating the fraction of widgets with diameters that are outside of engineering specifications. Suppose that

N_i = the number of widgets produced on day i,

p_i = the fraction of widgets produced on day i that have diameters that are outside engineering specifications,

and

p̂_i = the fraction of the ith sample that have out-of-spec. diameters.

If the samples are simple random samples of the respective daily productions, standard finite population sampling theory says that

E p̂_i = p_i   and   Var p̂_i = ((N_i − n)/(N_i − 1)) · p_i(1 − p_i)/n .

Two possible estimators of the overall fraction defective

p = (Σ_{i=1}^{5} N_i p_i) / (Σ_{i=1}^{5} N_i)

are

p̂ = (Σ_{i=1}^{5} N_i p̂_i) / (Σ_{i=1}^{5} N_i)   and   p̃ = (1/5) Σ_{i=1}^{5} p̂_i .

(For comparison, a single simple random sample of 5n widgets from the whole five-day production of N = Σ N_i widgets would produce a sample fraction defective with variance

((N − 5n)/(N − 1)) · p(1 − p)/(5n) .)
4.5. Suppose that the hierarchical random effects model used in Section 5.5 of V&J is a good description of how 500 widget diameters arise on each of 5 days in each of 10 weeks. (That is, suppose that the model is applicable with I = 10, J = 5 and K = 500.) Suppose further, that of interest is the grand (sample) variance of all 10 × 5 × 500 widget diameters. Use the expected mean squares and write out an expression for the expected value of this variance in terms of σ_α², σ_β² and σ².

Now suppose that one only observes 2 widget diameters each day for 5 weeks and in fact obtains the data in the accompanying table. From these data obtain point estimates of the variance components σ_α², σ_β² and σ². Use these and your formula from above to predict the variance of all 10 × 5 × 500 widget diameters. Then make a similar prediction for the variance of the diameters from the next 10 weeks, supposing that the σ_α² variance component could be eliminated.
4.6. Consider a situation in which a lot of 50,000 widgets has been packed into
100 crates, each of which contains 500 widgets. Suppose that unbeknownst
to us, the lot consists of 25,000 widgets with diameter 5 and 25,000 widgets
with diameter 7. We wish to estimate the variance of the widget diameters
in the lot (which is 50,000/49,999). To do so, we decide to select 4 crates
at random, and from each of those, select 5 widgets to measure.
(a) One (not so smart) way to try and estimate the population variance
is to simply compute the sample variance of the 20 widget diameters
we end up with. Find the expected value of this estimator under
two dierent scenarios: 1st where each of the 100 crates contains 250
widgets of diameter 5 and 250 widgets with diameter 7, and then
2nd where each crate contains widgets of only one diameter. What,
in general terms, does this suggest about when the naive sample
variance will produce decent estimates of the population variance?
(b) Give the formula for an estimator of the population variance that is
unbiased (i.e. has expected value equal to the population variance).
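A Monte Carlo sketch of part (a): build the two kinds of 100-crate populations, repeatedly draw 4 crates and 5 widgets per crate, and average the naive sample variance of the 20 resulting diameters:

import random
import statistics

def make_crates(segregated):
    if segregated:
        return [[5.0] * 500 for _ in range(50)] + [[7.0] * 500 for _ in range(50)]
    return [[5.0] * 250 + [7.0] * 250 for _ in range(100)]

def mean_naive_s2(segregated, reps=4000):
    crates = make_crates(segregated)
    total = 0.0
    for _ in range(reps):
        sample = []
        for crate in random.sample(crates, 4):
            sample.extend(random.sample(crate, 5))
        total += statistics.variance(sample)   # naive s^2 of the 20 values
    return total / reps

print("population variance:       ", 50000 / 49999)
print("E[s^2], mixed crates:      ", mean_naive_s2(False))
print("E[s^2], segregated crates: ", mean_naive_s2(True))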
4.7. Consider the data of Table 5.8 in V&J and the use of the hierarchical normal random effects model to describe their generation.

(a) Find point estimates of the parameters σ_β² and σ² based first on ranges and then on ANOVA mean squares.
[Data table for Problem 4.5: 2 widget diameters were recorded on each of days M, T, W, R, F in each of Weeks 1 through 5; the individual daily values are not recoverable here. Weekly summary statistics:]

Week   ȳ_i·    s²_Bi
 1     15.0    .605
 2      7.0    .275
 3     14.0    .370
 4     12.0    .515
 5      7.0    .155
(b) Find a standard error for your ANOVA-based estimator of σ_β² from (a).

(c) Use the material in 1.5 and make a 90% two-sided confidence interval for σ_β².
4.8. All of the variance component estimation material presented in the text is based on balanced data assumptions. As it turns out, it is quite possible to do point estimation (based on sample variances) from even unbalanced data. A basic fact that enables this is the following: If X_1, X_2, ..., X_n are uncorrelated random variables, each with the same mean, then

E s² = (1/n) Σ_{i=1}^{n} Var X_i .

(Note that the usual fact that for iid X_i, E s² = σ², is a special case of this basic fact.)
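A quick numerical check of this fact, using independent (hence uncorrelated) variables with a common mean but very different variances:

import random
import statistics

sds = [0.5, 1.0, 3.0, 7.0]   # Var X_i = .25, 1, 9, 49; common mean 10
target = sum(sd * sd for sd in sds) / len(sds)   # (1/n) sum of Var X_i
reps = 200000
avg = sum(statistics.variance([random.gauss(10.0, sd) for sd in sds])
          for _ in range(reps)) / reps
print("average s^2:", avg, "  vs (1/n) sum Var X_i:", target)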
Consider the (hierarchical) random effects model used in Section 5.5 of the text. In notation similar to that in Section 5.5 (but not assuming that data are balanced), let

ȳ_ij = the sample mean of data values at level i of A and level j of B within A,

s²_ij = the sample variance of the data values at level i of A and level j of B within A,

s²_Bi = the sample variance of the values ȳ_ij at level i of A,

and

s²_A = the sample variance of the values ȳ_i· .

Suppose that instead of being furnished with balanced data, one has a data set where 1) there are I = 2 levels of A, 2) level 1 of A has J1 = 2 levels of B while level 2 of A has J2 = 3 levels of B, and 3) level 1 of B within level 1 of A has n11 = 2 levels of C, level 2 of B within level 1 of A has n12 = 4 levels of C, levels 1 and 2 of B within level 2 of A have n21 = n22 = 2 levels of C and level 3 of B within level 2 of A has n23 = 3 levels of C.
Evaluate the following: E s²_pooled, E (1/5) Σ_{i,j} s²_ij, E s²_B1, E s²_B2, E ½(s²_B1 + s²_B2), and E s²_A. Then find linear combinations of s²_pooled, ½(s²_B1 + s²_B2) and s²_A that could sensibly be used to estimate σ_β² and σ_α².
transfer function

(V_out/V_in)(s) = (s² + 2 ζ_1 ω_1 s + ω_1²)/(s² + 2 ζ_2 ω_2 s + ω_2²),

where

ω_1 = (C_2 L)^(−1/2),
ω_2 = ((C_1 + C_2)/(L C_1 C_2))^(1/2),
ζ_1 = R_2/(2 L ω_1),

and

ζ_2 = (R_1 + R_2)/(2 L ω_2).

R_1 and R_2 are the resistances involved in ohms, C_1 and C_2 are the capacitances in Farads, and L is the value of the inductance in Henries. Standard circuit theory says that ω_1 and ω_2 are the natural frequencies of this network,

ω_1²/ω_2² = C_1/(C_1 + C_2)

is the DC gain, and ζ_1 and ζ_2 determine whether the zeros and poles are real or complex. Suppose that the circuit in question is to be mass produced using components with the following characteristics:
E C_1 = 1/399 F     Var C_1 = (1/3990)²
E C_2 = 1/2 F       Var C_2 = (1/20)²
E L = 1 H           Var L = (.1)²
E R_1 = 38 Ω        Var R_1 = (3.8)²
E R_2 = 2 Ω         Var R_2 = (.2)²
Now suppose that you are designing such an RCL circuit. To simplify things, use the capacitors and the inductor described above. You may choose the resistors, but their quality will be such that

Var R_1 = (E R_1/10)²   and   Var R_2 = (E R_2/10)².

Your design goals are that ζ_2 should be (approximately) .5, and subject to this constraint, Var ζ_2 should be minimum.

(c) What values of E R_1 and E R_2 satisfy (approximately) the design goals, and what is the resulting (approximate) standard deviation of ζ_2?

(Hint for part (c): The first design goal allows one to write E R_2 as a function of E R_1. To satisfy the second design goal, use the propagation of error idea to write the (approximate) variance of ζ_2 as a function of E R_1 only. By the way, the first design goal allows you to conclude that none of the partial derivatives needed in the propagation of error work depend on your choice of E R_1.)
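A numerical sketch of the part (c) logic, assuming the component moments as reconstructed above: fix E ζ_2 = .5 (so E R_1 + E R_2 = 2 (E L)(E ω_2)(.5) ≈ 20 Ω), compute Var ζ_2 by propagation of error with numerical partial derivatives, and scan over E R_1:

import numpy as np

EC1, EC2, EL = 1 / 399, 1 / 2, 1.0
sC1, sC2, sL = 1 / 3990, 1 / 20, 0.1

def zeta2(R1, R2, C1, C2, L):
    w2 = np.sqrt((C1 + C2) / (L * C1 * C2))
    return (R1 + R2) / (2 * L * w2)

w2_nom = np.sqrt((EC1 + EC2) / (EL * EC1 * EC2))
Rtotal = 2 * EL * w2_nom * 0.5          # E R1 + E R2 forced by E zeta2 = .5

def sd_zeta2(ER1):
    ER2 = Rtotal - ER1
    mu = (ER1, ER2, EC1, EC2, EL)
    sig = (ER1 / 10, ER2 / 10, sC1, sC2, sL)   # Var R_i = (E R_i / 10)^2
    var = 0.0
    for i in range(5):
        h = 1e-6 * max(abs(mu[i]), 1e-9)
        up, dn = list(mu), list(mu)
        up[i] += h; dn[i] -= h
        d = (zeta2(*up) - zeta2(*dn)) / (2 * h)   # numerical partial derivative
        var += (d * sig[i]) ** 2
    return np.sqrt(var)

grid = np.linspace(0.5, Rtotal - 0.5, 200)
best = min(grid, key=sd_zeta2)
print(f"E R1 = {best:.2f}, E R2 = {Rtotal - best:.2f}, "
      f"sd(zeta2) = {sd_zeta2(best):.4f}")   # the even split comes out best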
4.12. Manufacturers wish to produce autos with attractive fit and finish, part of which consists of uniform (and small) gaps between adjacent pieces of sheet metal (like, e.g., doors and their corresponding frames). The accompanying figure is an idealized schematic of a situation of this kind, where we (at least temporarily) assume that edges of both a door and its frame are linear. (The coordinate system on this diagram is pictured as if its axes are vertical and horizontal. But the line on the body need not be an exactly vertical line, and whatever this line's intended orientation relative to the ground, it is used to establish the coordinate system as indicated on the diagram.)
On the figure, we are concerned with gaps g1 and g2. The first is at the level of the top hinge of the door and the second is d units below that level in the body coordinate system (d units down the door frame line from the initial measurement). People manufacturing the car body are responsible for the dimension w. People stamping the doors are responsible for the angles θ1 and θ2 and the dimension y. People welding the top door hinge to the door are responsible for the dimension x. And people hanging the door on the car are responsible for the angle φ. The quantities x, y, w, φ, θ1 and θ2 are measurable and can be used in manufacturing to
[Figure: idealized schematic of the door and its frame, showing the dimensions x, y, w and d, the labeled points p, q, s and u, and the gaps g1 (at top-hinge level) and g2 (d units lower).]
verify that the various folks are doing their jobs. A door design engineer has to set nominal values for and produce tolerances for variation in these quantities. This problem is concerned with how the propagation of errors method might help in this tolerancing enterprise, through an analysis of how variation in x, y, w, φ, θ1 and θ2 propagates to g1, g2 and g1 − g2.

If I have correctly done my geometry/trigonometry, the following relationships hold for labeled points on the diagram:

p = (−x sin φ, x cos φ),
q = p + (y cos(φ + θ1 − π/2), y sin(φ + θ1 − π/2)),
s = (q1 + q2 tan(φ + θ1 + θ2 − π), 0),

and

u = (q1 + (q2 + d) tan(φ + θ1 + θ2 − π), −d).

Then for the idealized problem here (with perfectly linear edges) we have

g1 = w − s1   and   g2 = w − u1.
Actually, in an attempt to allow for the notion of form error in the ideally linear edges, one might propose that at a given distance below the origin of the body coordinate system the realized edge of a real geometry is its nominal position plus a form error. Then instead of dealing with g1 and g2, one might consider the gaps

g1* = g1 + ε1 − ε2

and

g2* = g2 + ε3 − ε4,

for body form errors ε1 and ε3 and door form errors ε2 and ε4. (The interpretation of additive form errors around the line of the body door frame is perhaps fairly clear, since the error at a given level is measured perpendicular to the body line and is thus well-defined for a given realized body geometry. The interpretation of an additive error on the right side door line is not so clear, since in general one will not be measuring perpendicular to the line of the door, or even at any consistent angle with it. So for a realized geometry, what form error to associate with a given point on the ideal line or exactly how to model it is not completely clear. We'll ignore this logical problem and proceed using the models above.)
We'll use d = 40 cm, and below are two possible sets of nominal values for the parameters of the door assembly:

Design A:  x = 20 cm, y = 90 cm, w = 90.4 cm, φ = 0, θ1 = π/2, θ2 = π/2

Design B:  x = 20 cm, y = 90 cm, w = (90 cos(π/10) + .4) cm, φ = π/10, θ1 = π/2, θ2 = 4π/10
Partial derivatives of g1 and g2 evaluated at the two sets of nominals are:

Design A:
∂g1/∂x = 0          ∂g2/∂x = 0
∂g1/∂y = −1         ∂g2/∂y = −1
∂g1/∂w = 1          ∂g2/∂w = 1
∂g1/∂φ = 0          ∂g2/∂φ = −40
∂g1/∂θ1 = −20       ∂g2/∂θ1 = −60
∂g1/∂θ2 = −20       ∂g2/∂θ2 = −60

Design B:
∂g1/∂x = .309       ∂g2/∂x = .309
∂g1/∂y = −.951      ∂g2/∂y = −.951
∂g1/∂w = 1          ∂g2/∂w = 1
∂g1/∂φ = 0          ∂g2/∂φ = −40
∂g1/∂θ1 = −19.021   ∂g2/∂θ1 = −59.02
∂g1/∂θ2 = −46.833   ∂g2/∂θ2 = −86.833
(a) Suppose that a door engineer must eventually produce tolerances for x, y, w, φ, θ1 and θ2 that are consistent with ±.1 cm tolerances on g1 and g2. If we interpret ±.1 cm tolerances to mean that σ_g1 and σ_g2 are no more than .033 cm, consider the set of sigmas

σ_x = .01 cm, σ_y = .01 cm, σ_w = .01 cm, σ_φ = .001 rad, σ_θ1 = .001 rad, σ_θ2 = .001 rad.

First for Design A and then for Design B, investigate whether this set of sigmas is consistent with the necessary final tolerances on g1 and g2 in two different ways. Make propagation of error approximations to σ_g1 and σ_g2. Then simulate 100 values of both g1 and g2 using independent normal random variables x, y, w, φ, θ1 and θ2 with means equal to the design nominals and these standard deviations. (Compute the sample standard deviations of the simulated values and compare to the .033 cm target.)
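Here is a sketch of both checks for Design A, using the tabled partial derivatives for the propagation of error part and the geometry relations as reconstructed above for the simulation part:

import numpy as np

rng = np.random.default_rng(1)
d = 40.0
nom = dict(x=20.0, y=90.0, w=90.4, phi=0.0, th1=np.pi / 2, th2=np.pi / 2)
sd = dict(x=.01, y=.01, w=.01, phi=.001, th1=.001, th2=.001)

# propagation of error using the tabled Design A partials for g1
partials_g1 = dict(x=0.0, y=-1.0, w=1.0, phi=0.0, th1=-20.0, th2=-20.0)
var_g1 = sum((partials_g1[k] * sd[k]) ** 2 for k in sd)
print("POE sigma(g1):", np.sqrt(var_g1))   # about .032, vs the .033 target

def gaps(x, y, w, phi, th1, th2):
    p = np.array([-x * np.sin(phi), x * np.cos(phi)])
    q = p + np.array([y * np.cos(phi + th1 - np.pi / 2),
                      y * np.sin(phi + th1 - np.pi / 2)])
    t = np.tan(phi + th1 + th2 - np.pi)
    return w - (q[0] + q[1] * t), w - (q[0] + (q[1] + d) * t)

sims = np.array([gaps(*(rng.normal(nom[k], sd[k])
                        for k in ("x", "y", "w", "phi", "th1", "th2")))
                 for _ in range(100)])
print("simulated sd(g1), sd(g2):", sims.std(axis=0, ddof=1))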
(b) One of the assumptions standing behind the propagation of error approximations is the independence of the input random variables. Briefly discuss why independence of the variables θ1 and θ2 may not be such a great model assumption in this problem.

(c) Notice that for Design A the propagation of error formula predicts that variation in the dimension x will not much affect the gaps presently of interest, g1 and g2, while the situation is different for Design B.
(d) What does the propagation of error formula predict for variation in the difference g1 − g2, first for Design A, and then for Design B?

(e) Suppose that one desires to take into account the possibility of form errors affecting the gaps, and thus considers analysis of g1* and g2* instead of g1 and g2. If standard deviations for the ε variables are all .001 cm, what does the propagation of error analysis predict for variability in g1* and g2* for Design A?
4.13. The electrical resistivity, ρ, of a wire is a property of the material involved and the temperature at which it is measured. At a given temperature, if a cylindrical piece of wire of length L and (constant) cross-sectional area A has resistance R, then the material's resistivity is calculated as

ρ = RA/L.
measuring equipment used in the lab are such that standard deviations σ_L = 10⁻³ meter, σ_D = 10⁻⁴ meter and σ_R = 10⁻⁴ Ω are appropriate.

(a) Find an approximate standard deviation that might be used to describe the precision associated with an experimentally derived value of ρ.

(b) Imprecision in which of the measurements appears to be the biggest contributor to imprecision in experimentally determined values of ρ? (Explain.)

(c) One should probably expect the approximate standard deviation derived here to under-predict the kind of variation that would actually be observed in such lab exercises over a period of years. Explain why this is so.
4.14. A bullet is fired horizontally into a block (of much larger mass) suspended by a long cord, and the impact causes the block and embedded bullet to swing upward a distance d measured vertically from the block's lowest position. The laws of mechanics can be invoked to argue that if d is measured in feet, and before testing the block weighs w1, while the block and embedded bullet together weigh w2 (in the same units), then the velocity (in fps) of the bullet just before impact with the block is approximately

v = (w2/(w2 − w1)) √(64.4 d).
Suppose that the bullet involved weighs about .05 lb, the block involved weighs about 10.00 lb and that both w1 and w2 can be determined with a standard deviation of about .005 lb. Suppose further that the distance d is about .50 ft, and can be determined with a standard deviation of .03 ft.

(a) Compute an approximate standard deviation describing the uncertainty in an experimentally derived value of v.

(b) Would you say that the uncertainties in the weights contribute more to the uncertainty in v than the uncertainty in the distance? Explain.

(c) Say why one should probably think of calculations like those in part (a) as only providing some kind of approximate lower bound on the uncertainty that should be associated with the bullet's velocity.
4.15. On page 243 of V&J there is an ANOVA table for a balanced hierarchical
data set. Use it in what follows.
(a) Find standard errors for the usual ANOVA estimates of σ_β² and σ² (the casting and analysis variance components).

(b) If you were to later make 100 castings, cut 4 specimens from each of these and make a single lab analysis on each specimen, give a (numerical) prediction of the overall sample variance of these future 400 measurements (based on the hierarchical random effects model and the ANOVA estimates of σ_α², σ_β² and σ²).
Sampling Inspection
Methods
5.1. Consider attributes single sampling.
(a) Make type A OC curves for N = 20, n = 5 and c = 0 and 1, for both percent defective and mean defects per unit situations.

(b) Make type B OC curves for n = 5, c = 0, 1 and 2 for both percent defective and mean defects per unit situations.

(c) Use the imperfect inspection analysis presented in 5.2 and find OC bands for the percent defective cases above with c = 1 under the assumption that w_D ≤ .1 and w_G ≤ .1.
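A sketch of the acceptance probability computations behind such curves: type A probabilities come from the hypergeometric distribution (for a lot of known composition), type B from the binomial; for the mean-defects-per-unit versions one would replace these with Poisson calculations. For the fraction-defective cases:

from scipy import stats

N, n = 20, 5
for c in (0, 1):
    print(f"c = {c}")
    for D in range(0, N + 1, 4):   # D = number of defectives in the lot
        pa_typeA = stats.hypergeom.cdf(c, N, D, n)
        pa_typeB = stats.binom.cdf(c, n, D / N)
        print(f"  D={D:2d}  type A Pa={pa_typeA:.3f}  type B Pa={pa_typeB:.3f}")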
5.2. Consider single sampling for percent defective.
(a) Make approximate OC curves for n = 100, c = 1; n = 200, c = 2;
and n = 300, c = 3.
(b) Make AOQ and ATI curves for a rectifying inspection scheme using a plan with n = 200 and c = 2 for lots of size N = 10,000. What is the AOQL?
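These curves come from the standard rectifying-inspection formulas AOQ(p) ≈ p Pa(p)(N − n)/N and ATI(p) = n + (1 − Pa(p))(N − n); the AOQL is then the maximum of the AOQ curve. A sketch for this plan:

import numpy as np
from scipy import stats

N, n, c = 10000, 200, 2
p = np.linspace(0.0005, 0.05, 400)
Pa = stats.binom.cdf(c, n, p)
AOQ = p * Pa * (N - n) / N
ATI = n + (1 - Pa) * (N - n)
i = AOQ.argmax()
print(f"AOQL is about {AOQ[i]:.4f}, attained near p = {p[i]:.4f}")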
5.3. Find attributes single sampling plans (i.e. find n and c) having approximately

(a) Pa = .95 if p = .01 and Pa = .10 if p = .03.

(b) Pa = .95 if p = 10⁻⁶ and Pa = .10 if p = 3 × 10⁻⁶.
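A brute-force search sketch for part (a); part (b) is the same search with the binomial replaced by its Poisson approximation (and much larger n):

from scipy import stats

p1, p2 = 0.01, 0.03
for n in range(1, 3000):
    # smallest c meeting the Pa(p1) >= .95 requirement, then check p2
    c = int(stats.binom.ppf(0.95, n, p1))
    if stats.binom.cdf(c, n, p1) >= 0.95 and stats.binom.cdf(c, n, p2) <= 0.10:
        print("n =", n, " c =", c)
        break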
5.4. Consider a (truncated sequential) attributes acceptance sampling plan, that for

X_n = the number of defective items found through the nth item inspected

rejects the lot if it ever happens that X_n ≥ 1.5 + .5n, accepts the lot if it ever happens that X_n ≤ −1.5 + .5n, and further never samples more than 11 items. We will suppose that if sampling were extended to n = 11, we would accept for X_11 = 4 or 5 and reject for X_11 = 6 or 7, and thus note that sampling can be curtailed at n = 10 if X_10 = 4 or 6.

(a) Find expressions for the OC and ASN for this plan.

(b) Find formulas for the AOQ and ATI of this plan, if it is used in a rectifying inspection scheme for lots of size N = 100.
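The OC and ASN can be evaluated numerically by stepping the reachable (n, X_n) states forward, as in this sketch (type B view: each item independently defective with probability p):

def oc_asn(p):
    """Return (P[accept], ASN) for the truncated sequential plan of 5.4."""
    states = {0: 1.0}          # probabilities over X_n for non-stopped paths
    pa, asn = 0.0, 0.0
    for n in range(1, 11):     # items 1..10 (curtailment ends things by 10)
        new = {}
        for x, pr in states.items():
            for dx, q in ((1, p), (0, 1 - p)):
                xn, prn = x + dx, pr * q
                if xn >= 1.5 + 0.5 * n:            # reject boundary
                    asn += n * prn
                elif xn <= -1.5 + 0.5 * n:         # accept boundary
                    pa += prn
                    asn += n * prn
                elif n == 10 and xn in (4, 6):     # curtailment at n = 10
                    if xn == 4:
                        pa += prn
                    asn += 10 * prn
                else:
                    new[xn] = new.get(xn, 0.0) + prn
        states = new
    for x, pr in states.items():   # only X_10 = 5 survives; take item 11
        pa += pr * (1 - p)         # X_11 = 5 means accept, X_11 = 6 reject
        asn += 11 * pr
    return pa, asn

for p in (0.05, 0.2, 0.5, 0.8):
    pa, asn = oc_asn(p)
    print(f"p={p:.2f}  Pa={pa:.3f}  ASN={asn:.2f}")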
Theory
5.9. Consider variables acceptance sampling based on exponentially distributed observations, supposing that there is a single lower limit L = .2107.

(a) Find means corresponding to fractions defective p = .10 and p = .19.

(b) Use the Central Limit Theorem to find a number k and sample size n so that an acceptance sampling plan that rejects a lot if x̄ < k has Pa = .95 for p = .10 and Pa = .10 for p = .19.

(c) Sketch an OC curve for your plan from (b).
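A sketch of (a) and (b): for an exponential mean θ, p = P[X < L] = 1 − exp(−L/θ), so θ = −L/ln(1 − p); the CLT then treats x̄ as approximately normal with mean θ and standard deviation θ/√n:

import math
from scipy.stats import norm

L = 0.2107
th1 = -L / math.log(1 - 0.10)   # mean giving p = .10 (about 2.0)
th2 = -L / math.log(1 - 0.19)   # mean giving p = .19 (about 1.0)
z90, z95 = norm.ppf(0.90), norm.ppf(0.95)
# reject if xbar < k; want P[xbar < k | th1] = .05 and P[xbar >= k | th2] = .10,
# i.e. k = th1 (1 - z95/sqrt(n)) = th2 (1 + z90/sqrt(n))
rootn = (z95 * th1 + z90 * th2) / (th1 - th2)
n = math.ceil(rootn ** 2)
k = th1 * (1 - z95 / math.sqrt(n))
print("theta(p=.10) =", th1, "  theta(p=.19) =", th2)
print("n =", n, "  k =", round(k, 3))   # roughly n = 21, k = 1.28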
5.10. Consider the situation of a consumer who will repeatedly receive lots of 1500 assemblies. These assemblies may be tested at a cost of $24 apiece or simply be put directly into a production stream with a later extra manufacturing cost of $780 occurring for each defective that is undetected because it was not tested. We'll assume that the supplier replaces any assembly found to be defective (either at the testing stage or later when the extra $780 cost occurs) with a guaranteed good assembly at no additional cost to the consumer. Suppose further that the producer of the assemblies has agreed to establish statistical control with p = .02.

(a) Adopt perspective B with p known to be .02 and compare the mean per-lot costs of the following 3 policies:

i. test the whole lot,

ii. test none of the lot, and

iii. go to Mil. Std. 105D with AQL = .025 and adopt an inspection level II, normal inspection single sampling plan (i.e. n = 125 and c = 7), doing 100% inspection of rejected lots. (This, by the way, is not a recommended use of the standard. It is designed to guarantee a consumer the desired AQL only when all the switching rules are employed. I'm abusing the standard.)
(b) Adopt the point of view that in the short term, perspective B may be appropriate, but that over the long term the supplier's p vacillates between .02 and .04. In fact, suppose that for successive lots the

p_i = perspective B p at the time lot i is produced

are independent random variables, with P[p_i = .02] = P[p_i = .04] = .5. Now compare the mean costs of policies i), ii) and iii) from (a) used repeatedly.

(c) Suppose that the scenario in (b) is modified by the fact that the consumer gets control charts from the supplier in time to determine whether for a given lot, perspective B with p = .02 or p = .04 is appropriate. What should the consumer's inspection policy be, and what is its mean cost of application?
5.11. Suppose that the fractions defective in successive large lots of fixed size N can be modeled as iid Beta (α, β) random variables with α = 1 and β = 9. Suppose that these lots are subjected to attributes acceptance sampling, using n = 100 and c = 1. Find the conditional distribution of p given that the lot is accepted. Sketch probability densities for both the original Beta distribution and this conditional distribution of p given lot acceptance.
5.12. Consider the following variation on the Deming Inspection Problem discussed in 5.3. Each item in an incoming lot of size N will be Good (G), Marginal (M) or Defective (D). Some form of (single) sampling inspection is contemplated based on counts of G's, D's and M's. There will be a per-item inspection cost of k1 for any item inspected, while any M's going uninspected will eventually produce a cost of k2, and any D's going uninspected will produce a cost of k3 > k2. Adopt perspective B, i.e. that any given incoming lot was produced under some set of stable conditions, characterized here by probabilities pG, pM and pD that any given item in that lot is respectively G, M or D.

(a) Argue carefully that the "All or None" criterion is in force here and identify the condition on the p's under which "All" is optimal and the condition under which "None" is optimal.
(b) If pG, pM and pD are not known, but rather are described by a joint probability distribution, n other than N or 0 can turn out to be optimal. A particularly convenient distribution to use in describing the p's is the Dirichlet distribution (it is the multivariate generalization of the Beta distribution for variables that must add up to 1). For a Dirichlet distribution with parameters αG > 0, αM > 0 and αD > 0, it turns out that if XG, XM and XD are the counts of G's, M's and D's in a sample of n items, then

E[pG | XG, XM, XD] = (αG + XG)/(αG + αM + αD + n),

E[pM | XG, XM, XD] = (αM + XM)/(αG + αM + αD + n),

and

E[pD | XG, XM, XD] = (αD + XD)/(αG + αM + αD + n).

Use these expressions and describe what an optimal lot disposal (acceptance or rejection) is, if a Dirichlet distribution is used to describe the p's and a sample of n items yields counts XG, XM and XD.
5.13. Consider the Deming Inspection Problem exactly as discussed in 5.3. Suppose that k1 = $50, k2 = $500, N = 200 and one's a priori beliefs are such that one would describe p with a (Beta) distribution with mean .1 and standard deviation .090453. For what values of n are respectively c = 0, 1 and 2 optimal? If you are brave (and either have a pretty good calculator or are fairly quick with computing) compute the expected total costs associated with these values of n (obtained using the corresponding c_opt(n)). From these calculations, what (n, c) pair appears to be optimal?
5.14. Consider the problem of estimating the process fraction defective based on the results of an inverse sampling plan that samples until 2 defective items have been found. Find the UMVUE of p in terms of the random variable n = the number of items required to find the second defective. Show directly that this estimator of p is unbiased (i.e. has expected value equal to p). Write out a series giving the variance of this estimator.
5.15. The paper "The Economics of Sampling Inspection" by Bernard Smith (that appeared in Industrial Quality Control in 1965 and is based on earlier theoretical work of Guthrie and Johns) gives a closed form expression for an approximately optimal n in the Deming inspection problem for cases where p has a Beta (α, β) prior distribution and both α and β are integers. Smith says

n_opt ≈ sqrt( N B(α, β) p_0 (1 − p_0) / ( 2 [ p_0 Bi(α | α + β − 1, p_0) + Bi(α + 1 | α + β, p_0) ] ) )

for p_0 = k1/k2 the break-even quantity, B(·, ·) the usual beta function and Bi(x | n, p) the probability that a binomial (n, p) random variable takes a value of x or more. Suppose that k1 = $50, k2 = $500, N = 200 and our a priori beliefs about p (or the "process curve") are such that it is sensible to describe p as having mean .1 and standard deviation .090453. What fixed n inspection plan follows from the Smith formula?
5.16. Consider the Deming inspection scenario as discussed in 5.3. Suppose that N = 3, k1 = 1.5, k2 = 10 and a prior distribution G assigns P[p = .1] = .5 and P[p = .2] = .5. Find the optimal fixed n inspection plan by doing the following.

(a) For sample sizes n = 1 and n = 2, determine the corresponding optimal acceptance numbers, c_opt_G(n).

(b) For sample sizes n = 0, 1, 2 and 3 find the expected total costs associated with those sample sizes if corresponding best acceptance numbers are used.
5.17. Consider the Deming inspection scenario once again. With N = 100, k1 = 1 and k2 = 10, write out the fixed p expected total cost associated with a particular choice of n and c. Note that "None" is optimal for p < .1 and "All" is optimal for p > .1. So, in some sense, what is exactly optimal is highly discontinuous in p. On the other hand, if p is near .1, it doesn't matter much what inspection plan one adopts, "All", "None" or anything else for that matter. To see this, write out as a function of p

(worst possible expected total cost(p) − best possible expected total cost(p)) / best possible expected total cost(p).

How big can this quantity get, e.g., on the interval [.09, .11]?
5.18. Consider the following percent defective acceptance sampling scheme. One
will sample items one at a time up to a maximum of 8 items. If at any
point in the sampling, half or more of the items inspected are defective,
sampling will cease and the lot will be rejected. If the maximum 8 items
are inspected without rejecting the lot, the lot will be accepted.
(a) Find expressions for the type B Operating Characteristic and the
ASN of this plan.
(b) Find an expression for the type A Operating Characteristic of this
plan if lots of N = 50 items are involved.
(c) Find expressions for the type B AOQ and ATI of this plan for lots
of size N = 50.
(d) What is the (uniformly) minimum variance unbiased estimator of p
for this plan? (Say what value one should estimate for every possible
stop-sampling point.)
5.19. Vardeman argued in 5.3 that if one adopts perspective B with known p and costs are assessed as the sum of identically calculated costs associated with individual items, either "All" or "None" inspection plans will be optimal. Consider the following two scenarios (that lack one or the other of these assumptions) and show that in each the "All or None" paradigm fails to hold.

(a) Consider the Deming inspection scenario discussed in 5.3, with k1 = $1 and k2 = $100 and suppose lots of N = 5 are involved. Suppose that one adopts not perspective B, but instead perspective A, and that p is known to be .2 (a lot contains exactly 1 defective). Find the expected total costs associated with "All" and then with "None" inspection. Then suggest a sequential inspection plan that has smaller expected total cost than either "All" or "None". (Find the expected total cost of your suggested plan and verify that it is smaller than that for both "All" and "None" inspection plans.)

(b) Consider perspective B with p known to be .4. Suppose lots of size N = 5 are involved and costs are assessed as follows. Each inspection costs $1 and defective items are replaced with good items at no charge. If the lot fails to contain at least one good item (and this goes undetected) a penalty of $1000 will be incurred, but otherwise the only costs charged are for inspection. Find the expected total costs associated with "All" and then with "None" inspection. Then argue convincingly that there is a better fixed n plan. (Say clearly what plan is superior and show that its expected total cost is less than both "All" and "None" inspection.)
5.20. Consider the following nonstandard variables acceptance sampling situation. A supplier has both a high quality/low variance production line (#1) and a low quality/high variance production line (#2) used to manufacture widgets ordered by Company V. Coded values of a critical dimension of these widgets produced on the high quality line are normally distributed with μ1 = 0 and σ1 = 1, while coded values of this dimension produced on the low quality line are normally distributed with μ2 = 0 and σ2 = 2. Coded specifications for this dimension are L = −3 and U = 3. The supplier is known to mix output from the two lines in lots sent to Company V. As a cost saving measure, this is acceptable to Company V, provided the fraction of out-of-spec. widgets does not become too large. Company V expects

π = the proportion of items in a lot coming from the high variance line (#2)

to vary lot to lot and decides to institute a kind of incoming variables acceptance sampling scheme. What will be done is the following. The critical dimension, X, will be measured on each of n items sampled from a lot. For each measurement X, the value Y = X² will be calculated. Then, for a properly chosen constant, k, the lot will be accepted if Ȳ ≤ k and rejected if Ȳ > k. The purpose of this problem is to identify suitable n and k, if Pa ≥ .95 is desired for lots with p = .01 and Pa ≤ .05 is desired for lots with p = .03.

(a) Find an expression for p (the long run fraction defective) as a function of π. What values of π correspond to p = .01 and p = .03 respectively?

(b) It is possible to show (you need not do so here) that EY = 3π + 1 and Var Y = −9π² + 39π + 2. Use these facts, your answer to (a) and the Central Limit Theorem to help you identify suitable values of n and k to use at Company V.
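A sketch of the whole calculation, assuming standard normal tail probabilities for the out-of-spec fractions, so that p(π) = π P[|X| > 3 | σ = 2] + (1 − π) P[|X| > 3 | σ = 1], followed by a two-point CLT design on Ȳ:

import math
from scipy.stats import norm

out1 = 2 * norm.cdf(-3.0)    # P(|X| > 3) when sigma = 1
out2 = 2 * norm.cdf(-1.5)    # P(|X| > 3) when sigma = 2

def pi_for(p):               # invert p(pi) = out1 + pi (out2 - out1)
    return (p - out1) / (out2 - out1)

pi1, pi2 = pi_for(0.01), pi_for(0.03)
m = lambda pi: 3 * pi + 1                               # E Y
sd = lambda pi: math.sqrt(-9 * pi**2 + 39 * pi + 2)     # sqrt(Var Y)
z = norm.ppf(0.95)
# accept if ybar <= k:  k = m(pi1) + z sd(pi1)/sqrt(n) = m(pi2) - z sd(pi2)/sqrt(n)
rootn = z * (sd(pi1) + sd(pi2)) / (m(pi2) - m(pi1))
n = math.ceil(rootn**2)
k = m(pi1) + z * sd(pi1) / math.sqrt(n)
print(f"pi(p=.01)={pi1:.4f}, pi(p=.03)={pi2:.4f}, n={n}, k={k:.3f}")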
Based on the kind of cost information alluded to above, one might give each inspected item a score s according to

s = 3 if the item is D,
    1 if the item is M,
    0 if the item is G.
(a) Give formulas for standards-given Shewhart control limits for average scores s̄ based on samples of size n. Describe how you would obtain the information necessary to calculate limits for future control of s̄.

(b) Ultimately, suppose that standard values are set at pG = .90, pM = .07 and pD = .03 and n = 100 is used for samples of a high volume product. Use a normal approximation to the distribution of s̄ and find an approximate ARL for your scheme from part (a) if in fact the mix of items shifts to where pG = .85, pM = .10 and pD = .05.

(c) Suppose that one decides to use a high side CUSUM scheme to monitor individual scores as they come in one at a time. Consider a scheme with k1 = 1 and no head-start that signals the first time that a CUSUM of scores of at least h1 = 6 is reached. Set up an appropriate transition matrix and say how you would use that matrix to find an ARL for this scheme for an arbitrary set of probabilities (pG, pM, pD).
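A sketch of the part (c) bookkeeping: with reference value 1 the CUSUM moves by s − 1 ∈ {−1, 0, +2} (bounded below by 0), so C = 0, 1, ..., 5 are transient states, "C ≥ 6" is absorbing, and the ARL vector solves (I − Q) ARL = 1 as in the Chapter 2 material:

import numpy as np

def cusum_arl(pG, pM, pD, h=6):
    # C_t = max(0, C_{t-1} + s_t - 1); signal when C_t >= h
    Q = np.zeros((h, h))
    for c in range(h):
        for inc, pr in ((-1, pG), (0, pM), (2, pD)):   # s - 1 for s = 0, 1, 3
            nxt = max(0, c + inc)
            if nxt < h:            # transitions to C >= h are absorbed
                Q[c, nxt] += pr
    arl = np.linalg.solve(np.eye(h) - Q, np.ones(h))
    return arl[0]                  # start from C_0 = 0 (no head start)

print("ARL, standard mix:", cusum_arl(0.90, 0.07, 0.03))
print("ARL, shifted mix: ", cusum_arl(0.85, 0.10, 0.05))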
(d) Suppose that inspecting an item costs 1/5th of the extra expense caused by an undetected marginal item. A plausible (single sampling) acceptance sampling plan for lots of N = 10,000 of these items then accepts the lot if

s̄ ≤ .20.

If rejection of the lot will result in 100% inspection of the remainder, consider the (perspective B) economic choice of sample size for plans of this form, in particular the comparison of n = 100 and n = 400 plans. The following table gives some approximate acceptance probabilities for these plans under two sets of probabilities p = (pG, pM, pD).

                       n = 100    n = 400
p = (.9, .07, .03)    Pa ≈ .76   Pa ≈ .92
p = (.85, .10, .05)   Pa ≈ .24   Pa ≈ .08
Find expected costs for these two plans (n = 100 and n = 400) if
costs are accrued on a per-item and per-inspection basis and prior
probabilities of these two sets of process conditions are respectively
.8 for p = (:9; :07; :03) and .2 for p = (:85; :10; :05).
5.23. Consider variables acceptance sampling for a quantity X that has engineering specifications L = 3 and U = 5. We will further suppose that X has standard deviation σ = .2.

(a) Suppose that X is uniformly distributed with mean μ.

5.24. Suppose that X has probability density

f(x) = (θ + 1) x^θ for x ∈ (0, 1), and 0 otherwise.

For this density, it is possible to show that EX = (θ + 1)/(θ + 2) and Var X = (θ + 1)/((θ + 2)²(θ + 3)). Containers with X < .1 are considered defective and we wish to do acceptance sampling to hopefully screen lots with large p.

(a) Find the values of θ corresponding to fractions defective p1 = .01 and p2 = .03.

(b) Use the Central Limit Theorem and find a number k and a sample size n so that an acceptance sampling plan that rejects if x̄ < k has Pa1 = .95 and Pa2 = .10.
5.25. A measurement has an upper specification U = 5.0. Making a normal distribution assumption with σ = .015 and desiring Pa1 = .95 for p1 = .03 and Pa2 = .10 for p2 = .10, a statistician sets up a variables acceptance sampling plan for a sample of size n = 23 that rejects a lot if x̄ > 4.97685.

In fact, a Weibull distribution with shape parameter β = 400 and scale parameter α is a better description of this characteristic than the normal distribution the statistician used. This alternative distribution has cdf

F(x | α) = 0 if x < 0, and 1 − exp(−(x/α)^400) if x ≥ 0,

and mean ≈ .9986α and standard deviation ≈ .0032α.

Show how to obtain an approximate OC curve for the statistician's acceptance sampling plan under this Weibull model. (Use the Central Limit Theorem.) Use your method to find the "real" acceptance probability if p = .03.
5.26. Here's a prescription for a possible fraction nonconforming attributes acceptance sampling plan:

stop and reject the lot the first time that X_n ≥ 2 + n/2, and
stop and accept the lot the first time that X_n ≤ −2 + n/2.

(a) Find a formula for the OC for this symmetric wedge-shaped plan. (One never samples more than 7 items and there are exactly 8 stop sampling points prescribed by the rules above.)

(b) Consider the use of this plan where lots of size N = 100 are subjected to rectifying inspection and inspection error is possible. (Assume that any item inspected and classified as defective is replaced with one drawn from a population that is in fact a fraction p defective and has been inspected and classified as good.) Use the parameters w_G and w_D defined in 5.2 of the notes and give a formula for the "real" AOQ of this plan as a function of p, w_G and w_D.
5.27. Consider a perspective A economic analysis of some fraction defective fixed n inspection plans. (Don't simply try to use the type B calculations made in class. They aren't relevant. Work this out from first principles.)

Suppose that N = 10, k1 = 1 and k2 = 10 in a Deming Inspection Problem cost structure. Suppose further that a prior distribution for p (the actual lot fraction defective) places equal probabilities on p = 0, .1 and .2. Here we will consider only plans with n = 0, 1 or 2. Let

X = the number of defectives in a simple random sample from the lot.

(a) For n = 1, find the conditional distributions of p given X = x.
For n = 2, it turns out that the joint distribution of X and p is:

            x = 0   x = 1   x = 2
p = 0       .333     0       0     | .333
p = .1      .267    .067     0     | .333
p = .2      .207    .119    .007   | .333
            .807    .185    .007

and the conditionals of p given X = x are:

            x = 0   x = 1   x = 2
p = 0       .413     0       0
p = .1      .330    .360     0
p = .2      .257    .640    1.00
(b) Use your answer to (a) and show that the best n = 1 plan REJECTS
if X = 0 and ACCEPTS if X = 1. (Yes, this is correct!) Then use
the conditionals above for n = 2 and show that the best n = 2 plan
REJECTS if X = 0 and ACCEPTS if X = 1 or 2.
(c) Standard acceptance sampling plans REJECT FOR LARGE X. Explain in qualitative terms why the best plans from (b) are not of this
form.
(d) Which sample size (n = 0, 1 or 2) is best here? (Show calculations to support your answer.)
A Useful Probabilistic Approximation

Here we present the general "delta method" or "propagation of error" approximation that stands behind several variance approximations in these notes as well as much of 5.4 of V&J. Suppose that a p × 1 random vector

X = (X1, X2, ..., Xp)'

has mean vector

EX = (EX1, EX2, ..., EXp)' = (μ1, μ2, ..., μp)' = μ
and covariance matrix

Cov X = Σ =

( Var X1        Cov(X1, X2)   ...   Cov(X1, Xp) )
( Cov(X1, X2)   Var X2        ...   Cov(X2, Xp) )
(    ...           ...        ...      ...      )
( Cov(X1, Xp)   Cov(X2, Xp)   ...   Var Xp      )

=

( σ1²           ρ12 σ1 σ2    ...   ρ1p σ1 σp )
( ρ12 σ1 σ2     σ2²          ...   ρ2p σ2 σp )
(    ...           ...       ...      ...    )
( ρ1p σ1 σp     ρ2p σ2 σp    ...   σp²       )

= ( ρij σi σj ).
If A is a k × p matrix of constants and one forms the k × 1 random vector Y = A X, then EY = A μ and

Cov Y = A Σ A'.

(The k = 1 version of this for uncorrelated Xi is essentially quoted in (5.23) and (5.24) of V&J.)
The propagation of error method says that if instead of the relationship Y = A X, I concern myself with k functions g1, g2, ..., gk (each mapping R^p to R) and define

Y = (g1(X), g2(X), ..., gk(X))',

a multivariate Taylor's Theorem argument and the facts above provide an approximate mean vector and an approximate covariance matrix for Y. That is, if the functions gi are differentiable, let

D (a k × p matrix) = ( ∂gi/∂xj ) evaluated at (μ1, μ2, ..., μp).
A multivariate Taylor approximation says that for each x near μ,

y = (g1(x), g2(x), ..., gk(x))' ≈ (g1(μ), g2(μ), ..., gk(μ))' + D (x − μ).

So if the variances of the Xi are small (so that with high probability X is near μ, that is, that the linear approximation above is usually valid), it is plausible that Y has mean vector

EY = (EY1, EY2, ..., EYk)' ≈ (g1(μ), g2(μ), ..., gk(μ))'

and covariance matrix

Cov Y ≈ D Σ D'.