The performance of a process may be characterized in terms of how close it gets to hitting
its target or meeting its specifications and how consistent it is in doing so. For a process
whose output data comprise a normal distribution, its performance can be conveniently
quantified in terms of its process capability index, Cpk.
The Cpk of a process measures both how well centered the process output is between its
lower and upper specification limits and how variable (and therefore how stable or
unstable) the output is. In fact, Cpk is expressed as the ratio of the distance from the mean
of the output data to the nearer spec limit (the centering of the process) to three times their
standard deviation (the process variability).
If the mean of the process data is closer to the lower spec limit LSL and the standard
deviation of the process data is Stdev, then Cpk = (Mean - LSL) / (3 x Stdev). If the mean of
the process data is closer to the upper spec limit USL, then Cpk = (USL - Mean) / (3 x Stdev).
Equivalently, Cpk = min(Mean - LSL, USL - Mean) / (3 x Stdev).
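The two cases above collapse into taking the distance to the nearer spec limit. A minimal Python sketch (the function name and sample data are illustrative, not from the original):

```python
import statistics

def cpk(data, lsl, usl):
    """Cpk: distance from the mean to the nearer spec limit,
    expressed in units of three standard deviations."""
    mean = statistics.fmean(data)
    stdev = statistics.stdev(data)  # sample standard deviation
    # min() picks whichever spec limit the mean is closer to.
    return min(mean - lsl, usl - mean) / (3 * stdev)

# Example: mean 10, stdev 1, specs 4..13 -> the mean is closer to
# the USL, so Cpk = (13 - 10) / (3 * 1) = 1.0
print(cpk([9, 10, 11], lsl=4, usl=13))
```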
An ideal process is one whose output is always dead center between the spec limits, such
that the mean of its output data equals this dead center and the standard deviation is zero.
The Cpk of this ideal process is infinite.
As a process becomes less centered between the spec limits or as it becomes more
variable, its Cpk decreases. As its Cpk decreases, the probability of it exhibiting an output
that is outside its specification limits increases. Thus, every Cpk value corresponds to a
percent defective rate, which may be expressed in parts per million, or ppm.
Table 1 shows some Cpk values and their equivalent ppm rates. In the semiconductor
industry, the Cpk goal for a process is normally set at 1.67, although a Cpk of 1.33 is still
considered acceptable.
Table 1. Cpk vs. ppm

Cpk     Sigma    ppm (one-sided)    ppm (two-sided)
0.43    -        96,800             193,600
0.47    -        80,755             161,510
0.50    1.50     66,805             133,610
0.53    -        54,800             109,600
0.57    -        44,565             89,130
0.60    -        35,980             71,960
0.63    -        28,715             57,430
0.67    -        22,750             45,500
0.70    -        17,865             35,730
0.73    -        13,905             27,810
0.77    -        10,725             21,450
0.80    -        8,200              16,400
0.83    -        6,210              12,420
0.87    -        4,661              9,322
0.90    -        3,467              6,934
0.93    -        2,555              5,110
0.97    -        1,866              3,732
1.00    3.00     1,350              2,700
1.03    -        967                1,935
1.07    -        687                1,374
1.10    -        483                967
1.13    -        337                674
1.16    ~3.50    232                465
1.20    -        159                318
1.23    -        108                216
1.27    -        73                 145
1.30    -        49                 98
1.33    4.00     32                 64
1.37    -        20.5               41
1.40    -        13.5               27
1.43    -        8.5                17
1.47    -        5.5                11
1.50    4.50     3.5                7
1.53    -        2                  4
1.57    -        1.5                3
1.67    ~5.00    0.25               0.5
2.00    6.00     0.00099            0.00198
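The ppm figures in Table 1 follow from the normal tail probability at z = 3 x Cpk; the two ppm columns correspond to counting defects beyond one spec limit or beyond both. A small standard-library sketch:

```python
import math

def cpk_to_ppm(cpk, two_sided=False):
    """Defect rate in ppm for a centered normal process with the
    given Cpk; z = 3 * Cpk is the distance from the mean to the
    spec limit in standard deviations."""
    z = 3 * cpk
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z)
    ppm = tail * 1e6
    return 2 * ppm if two_sided else ppm

print(round(cpk_to_ppm(1.00)))        # matches the 1,350 ppm row
print(round(cpk_to_ppm(0.50)))        # matches the 66,805 ppm row
print(round(cpk_to_ppm(1.00, True)))  # matches the 2,700 ppm row
```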
The normal distribution is bell-shaped, i.e., it peaks at the center and tapers off outwardly
while remaining symmetrical with respect to the center. To illustrate this in more tangible
terms, imagine taking down the height of every student in a randomly selected Grade 5
class and plotting the measurements on a chart whose x-axis corresponds to the height of
the student and whose y-axis corresponds to the number of students.
What is expected to emerge from this exercise is a normal curve, wherein a big slice of the
student population will have a height that is somewhere in the middle of the distribution,
say 57-59 inches tall. The number of students belonging to other height groups will be
less than the number of students in the 57"-59" category.
In fact, the number of students decreases at a calculable rate as the height group moves
further away from the center. Eventually you might find only one shortest student at, say,
48", and one tallest student who probably stands at 66". Lastly, plotting the number of
students falling within different height ranges of equal width will result in a bell-shaped
curve. Such a plot is called a histogram, a simple example of which is shown in Figure 2.
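The height-measurement exercise is easy to simulate; the class size, mean height, and spread below are illustrative assumptions, not real data:

```python
import random
from collections import Counter

random.seed(0)
# Simulate 200 student heights, normally distributed
# around 58 inches with a 3-inch standard deviation.
heights = [random.gauss(58, 3) for _ in range(200)]

# Bin into 2-inch ranges and print a crude text histogram;
# the middle bins collect the most students.
bins = Counter(2 * int(h // 2) for h in heights)
for low in sorted(bins):
    print(f'{low}"-{low + 2}"  {"*" * bins[low]}')
```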
# of Sigmas     % of Data Covered    % of Data Outside
+/- 1 Sigma     68.27%               31.73%
+/- 2 Sigmas    95.45%               4.55%
+/- 3 Sigmas    99.73%               0.27%
+/- 4 Sigmas    99.9937%             0.0063%
+/- 5 Sigmas    99.99994%            0.00006%
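These coverage percentages come straight from the normal CDF and can be checked with the error function:

```python
import math

def coverage(k):
    """Fraction of normally distributed data within +/- k sigma."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 6):
    pct = 100 * coverage(k)
    print(f"+/- {k} sigma: {pct:.5f}% covered, {100 - pct:.5f}% outside")
```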
Skewed Distributions
Perfectly normal curves are hard to come by with finite samples of data. Thus, some data
distributions that are theoretically normal may not look normal once plotted, i.e., the
mean may not be at the center of the distribution, or there may be slight asymmetry. If the
bulk of the distribution leans toward the right side, with a longer tail extending to the left,
it is said to be skewed to the left. Conversely, a distribution whose bulk leans to the left,
with a longer tail to the right, is skewed to the right.
Many response parameters encountered in the semiconductor industry behave normally,
which is why statistical process control has found its way extensively into this industry.
The objective of SPC is to produce data distributions that are stable, predictable, and well
within the specified limits for the parameter being controlled.
In relation to the preceding discussions, this is equivalent to achieving data distributions
that are centered between the specified limits and as narrow as possible. Good centering
between limits and negligible variation translate to parameters that are always within
specifications, which is the true essence of process control.
Control Charting
It is often said that you cannot control something that you do not measure. Thus, every
engineer setting up a new process must have a clear idea of how the performance of this
new process is to be measured. Since every process needs to satisfy customer
requirements, process output parameters for measurement and monitoring are generally
based on customer specifications. Industry-accepted specifications are also followed in
selecting process parameters for monitoring.
Control charting is a widely used tool for process monitoring in the semiconductor
industry. It employs control charts (see Fig. 3), which are simply plots of the process
output data over time. Before a control chart may be used, the process engineer must first
ensure that the process to be monitored is normal and stable.
A process may have several control charts - one for each of its major output parameters. A
new control chart must have at least the following: the properly labeled x- and y-axes, lines
showing the lower and upper specification limits for the parameter being monitored, and a
line showing the center or target of these specifications. Once a control chart has been set
up, the operator must diligently plot the output data at predefined intervals.
After 30 data points have been collected on the chart (possibly fewer if measurement
intervals are long), the upper and lower control limits of the process can be computed.
Control limits define the boundaries of the normal behavior of the process. Their values
depend only on the output data generated by the process in the immediate past. Control
limits are therefore independent of specification limits. However, both sets of limits are used
in the practice of SPC, although in different ways.
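A minimal sketch of computing control limits purely from recent output data, as described. This uses the simple 3-sigma individuals-chart convention; classic X-bar charts instead use range-based factors, so treat this as an illustration rather than the only method:

```python
import statistics

def control_limits(data):
    """3-sigma control limits estimated purely from recent process
    output -- independent of the specification limits."""
    mean = statistics.fmean(data)
    stdev = statistics.stdev(data)
    return mean - 3 * stdev, mean + 3 * stdev

lcl, ucl = control_limits([9, 10, 11, 10, 9, 11, 10, 10])
print(lcl, ucl)
```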
Once the control limits have been included on the control charts (also in the form of
horizontal lines like the specification limits), the operator can start using the chart visually
to detect anomalous trends in the process that she would need to notify the engineer
about.
For instance, any measurement outside the control limits is an automatic cause for alarm,
because the probability of getting such a measurement is low. Four (4) or more
consecutively increasing or decreasing points form a trend that is not normal, and
therefore deserves attention. Six (6) consecutive points on one side of the mean also
deserve investigation. When such abnormalities are observed, the process owner must
take an action to bring the process back to its normal behavior.
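The three alarm conditions just described can be sketched as a simple checker (the function name is illustrative; the rule thresholds follow the text):

```python
def check_rules(points, lcl, ucl, mean):
    """Flag the alarm conditions described above: a point outside
    the control limits, four or more consecutively rising or
    falling points, or six or more consecutive points on one
    side of the mean."""
    alarms = []
    if any(not lcl <= p <= ucl for p in points):
        alarms.append("point outside control limits")

    def longest_run(ok):
        # Longest run of points whose consecutive pairs satisfy ok().
        best = run = 1
        for a, b in zip(points, points[1:]):
            run = run + 1 if ok(a, b) else 1
            best = max(best, run)
        return best

    if max(longest_run(lambda a, b: b > a),
           longest_run(lambda a, b: b < a)) >= 4:
        alarms.append("4+ consecutively rising or falling points")
    if longest_run(lambda a, b: (a > mean) == (b > mean)) >= 6:
        alarms.append("6+ consecutive points on one side of the mean")
    return alarms
```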
Control limits must be recomputed regularly (say, every quarter), to ensure that the control
limits being used by the operator are reflective of the current process behavior.
===================
iSixSigma recently released a process sigma calculator which allows the operator to
input process opportunities and defects and easily calculate the process sigma to
determine how close (or far) a process is from 6 sigma. One of the caveats written in fine
print refers to the calculator using a default process shift of 1.5 sigma. In an earlier
poll, more than 50% of quality professionals indicated that they were not aware of
why a process may shift by 1.5 sigma. My goal is to explain it here.
I'm not going to bore you with the hard core statistics. There's a whole statistical section
dealing with this issue, and every green, black and master black belt learns the calculation
process in class. If you didn't go to class (or you forgot!), the table of the standard normal
distribution is used in calculating the process sigma. Most of these tables, however, end at
a z value of about 3 (see the iSixSigma table for an example). In 1992, Motorola
published a book entitled Six Sigma Producibility Analysis and Process
Characterization (see chapter 6), written by Mikel J. Harry and J. Ronald Lawson. It
contains one of the only tables showing the standard normal distribution out to a z value of 6.
Using this table you'll find that 6 sigma actually translates to about 2 defects per billion
opportunities, and 3.4 defects per million opportunities, which we normally define as 6
sigma, really corresponds to a sigma value of 4.5. Where does this 1.5 sigma difference
come from? Motorola has determined, through years of process and data collection, that
processes vary and drift over time - what they call the Long-Term Dynamic Mean
Variation. This variation typically falls between 1.4 and 1.6 sigma.
After a process has been improved using the Six Sigma DMAIC methodology, we
calculate the process standard deviation and sigma value. These are considered to be
short-term values because the data only contains common cause variation -- DMAIC
projects and the associated collection of process data occur over a period of months,
rather than years. Long-term data, on the other hand, contains common cause variation
and special (or assignable) cause variation. Because short-term data does not contain this
special cause variation, it will typically show a higher process capability than the
long-term data. This difference is the 1.5 sigma shift. Given adequate process data, you can
determine the factor most appropriate for your process.
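The 4.5-vs-6 sigma arithmetic is easy to verify with the normal tail function (one-tailed here; doubling the 6-sigma figure gives the roughly 2-per-billion rate quoted above):

```python
import math

def dpmo(z):
    """One-tailed defects per million opportunities at sigma level z."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

# A process reported as "6 sigma" short-term performs at
# 6 - 1.5 = 4.5 sigma long-term once the shift is applied:
print(f"{dpmo(6.0):.5f}")  # about 0.001 dpmo, i.e. ~1 defect per billion
print(f"{dpmo(4.5):.2f}")  # about 3.4 dpmo -- the familiar Six Sigma rate
```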
In Six Sigma, The Breakthrough Management Strategy Revolutionizing The World's Top
Corporations, Harry and Schroeder write:
"By offsetting normal distribution by a 1.5 standard deviation on either side, the adjustment takes
into account what happens to every process over many cycles of manufacturing. Simply put,
accommodating shift and drift is our 'fudge factor,' or a way to allow for unexpected errors or
movement over time. Using 1.5 sigma as a standard deviation gives us a strong advantage in
improving quality not only in industrial process and designs, but in commercial processes as well.
It allows us to design products and services that are relatively impervious, or 'robust,' to natural,
unavoidable sources of variation in processes, components, and materials."
Statistical Take Away: The reporting convention of Six Sigma requires the process
capability to be reported in short-term sigma -- without the presence of special cause
variation. Long-term sigma is determined by subtracting 1.5 sigma from our short-term
sigma calculation to account for the process shift that is known to occur over time.
================
Measuring Your Process Capability
The author, based in Bombay, can be contacted at mkapadia@tacogroup.com or through
us at webmaster@symphonytech.com.