Development Center
_____________________________________________________________
Report Number: 2001CRD126
Date: August 2001
Number of Pages: 8
Class: 1
Key Words: Six Sigma Training, Six Sigma Tools, Six Sigma Best Practices, Six Sigma Resources, Six Sigma for Specialized Audiences
Six Sigma programs have proven valuable for improving quality and profitability. Based on the authors’ experience, this article suggests items and areas to revisit in training, and targeted tools to add, as Six Sigma evolves.
One of the key themes of Six Sigma is to make decisions based on data. This idea is
reflected by such popular Six Sigma sayings as, “We don’t know what we don’t know
(or don’t measure)” and “In God we trust—all else bring data!”
To ensure we obtain the right data and transform it into actionable information, we
deploy statistical tools. These tools and closely related concepts, such as the design of
experiments, are key elements of Six Sigma training and comprise up to half of the stan-
dard curriculum. The other half consists of various nonstatistical tools, such as failure
mode effects analysis and quality function deployment, and softer organizational skills,
such as team and project leadership, critical to obtaining favorable business results.
Cause for Applause
Participants in Six Sigma training are taught key concepts and tools essential to their
success, such as the design of experiments, gauge R&R (repeatability and reproducibility), transfer functions (statistical models describing the relationship between variables)
and statistical process control. Also, Six Sigma training emphasizes the power of graphi-
cal tools over formal statistical analyses, confirming the saying that a picture is worth a
thousand words—and even more in the wallet.
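Gauge R&R studies split measurement variation into repeatability (same operator remeasuring the same part) and reproducibility (operator-to-operator differences). The sketch below uses simulated data and a rough moment estimate; the layout (3 operators, 5 parts, 2 replicates) and all variance values are invented for illustration.

```python
import numpy as np

# Hypothetical gauge R&R layout: 3 operators measure 5 parts, 2 replicates
rng = np.random.default_rng(11)
n_ops, n_parts, n_reps = 3, 5, 2
part_effect = rng.normal(0.0, 2.0, n_parts)   # part-to-part variation
op_effect = rng.normal(0.0, 0.2, n_ops)       # operator bias (reproducibility)
noise_sd = 0.3                                # measurement noise (repeatability)

data = (10.0
        + part_effect[None, :, None]
        + op_effect[:, None, None]
        + rng.normal(0.0, noise_sd, (n_ops, n_parts, n_reps)))

# Repeatability: pooled variance of replicates within each operator/part cell
repeatability = data.var(axis=2, ddof=1).mean()

# Reproducibility: variation among operator averages (a rough moment estimate
# that ignores the small replication correction)
reproducibility = data.mean(axis=(1, 2)).var(ddof=1)

print(round(float(np.sqrt(repeatability)), 3),
      round(float(np.sqrt(reproducibility)), 3))
```

A production analysis would use the ANOVA-based gauge R&R estimates built into packages such as Minitab or JMP rather than this simplified calculation.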
The underlying approach to teaching Six Sigma is a key factor to its success. Some
proven best practices are summarized in the sidebar on the next page. Many of these were
part of the basic Six Sigma philosophy espoused by Mikel Harry and his associates.
Others have evolved over time and are based upon lessons learned in successfully
implementing Six Sigma.
SIDEBAR 1
Proven Best Practices in Six Sigma Training
• Road map: Integrate statistical tools into an overall road map, such as the traditional define,
measure, analyze, improve, control model. This allows students to learn how the tools fit
together. It contrasts with a typical college statistics course in which the training sequence is
determined by mathematical complexity, rather than a tool’s natural place in addressing real-life
problems.
• Theory vs. applications: Teach the tools and their applications; de-emphasize the underlying theory.
• Appreciation of assumptions: Don’t confuse de-emphasizing theory with omitting
assumptions. Six Sigma practitioners need a clear understanding of the key assumptions
underlying the use of each tool, their importance, how to evaluate them in a given situation and
how to proceed if the assumptions are violated.
• Trainers: These people should be enthusiastic, experienced Black Belts and Master Black Belts
who can speak knowledgeably about business applications and who also understand the basic
principles of adult learning.
• Hands-on implementation: Instruction on a tool should come with easy-to-use and,
preferably, familiar software, such as Excel, JMP [1] and Minitab [2].
• Project tie-in: Six Sigma practitioners should learn the tools by immediately applying them to
their projects. However, we must resist being dogmatic and should not require participants to
apply a specific tool. Instead, we need to emphasize using the method that is most effective for
the problem at hand.
• Tailored training: Adapt the materials and examples to the specialized business needs of the
target audience.
References
1. SAS Institute, JMP User’s Guide, Version 3.1 (Cary, NC: SAS Institute, 1995).
2. Minitab, Minitab User’s Guide 1: Data Graphics and Macros and Minitab User’s Guide 2:
Data Analysis and Quality Tools, Release 12 (State College, PA: Minitab, 1997).
Top 10 Recommendations
As the scope of Six Sigma has expanded beyond the original define, measure, analyze,
improve, control (DMAIC) model, so have training needs. Tools and concepts that were
not part of the standard Six Sigma package, but have proven their worth in successful
projects, need to be added to the standard curriculum. Our top ten list of recommended
additions is:
1. The shortcomings of much historical data and the paramount importance of getting the
right data [3].
2. The justification for assuming data to be normally distributed and the importance of
normality in different situations [4,5].
3. How to handle non-normal data [6].
4. Why statistical intervals often understate the actual uncertainty. This happens because
Six Sigma projects typically aim to improve a dynamic process (what Deming
referred to as an analytic study), rather than to describe the current process (an
enumerative study) [7,8].
5. Presentation of the design of experiments as a step-by-step learning process, rather
than a one-shot undertaking [9], as illustrated by the well-known helicopter design
example [10]. (This example involves experimenting with factors such as the
length, thickness, and weight of the paper to improve flight times of paper helicopters.)
6. Recognition that in many designed experiments not all variables are created
equal. Some are harder to change than others, leading to the frequent use of split
plot designs [11].
7. The use of simulation for determining how large a sample is needed and for similar
questions in planning investigations. William Meeker and Luis Escobar apply this
approach to various applications in reliability, including planning a product life
test [12].
8. Analysis of categorical data and the desirability of having continuous data, where
possible [13].
9. Additional informative ways of analyzing and displaying data graphically
[14,15,16,17].
10. Tools for quantifying individual sources, or components, of variation [18]. These have
become important as recognition grows that one of Six Sigma’s
major goals is the reduction of variability, in addition to, or instead of, improving
the mean.
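The simulation approach to sample-size planning recommended above can be sketched in a few lines. This is a generic illustration, not an example from the text: it estimates, for an assumed shift of 0.5 standard deviations, the power of a two-sample z-test (known sigma, two-sided, alpha = 0.05) at several sample sizes.

```python
import numpy as np

def simulated_power(n, shift, sigma=1.0, n_sim=5000, seed=1):
    """Monte Carlo estimate of the power of a two-sample z-test
    (known sigma, two-sided alpha = 0.05) to detect a mean shift."""
    rng = np.random.default_rng(seed)
    a = rng.normal(0.0, sigma, size=(n_sim, n))    # "before" samples
    b = rng.normal(shift, sigma, size=(n_sim, n))  # "after" samples
    se = sigma * np.sqrt(2.0 / n)
    z = (b.mean(axis=1) - a.mean(axis=1)) / se
    return float(np.mean(np.abs(z) > 1.96))

# Increase the sample size until the simulated power reaches the target
for n in (10, 30, 50, 100):
    print(n, round(simulated_power(n, shift=0.5), 3))
```

The same template extends directly to settings with no closed-form power formula, which is where simulation earns its keep, as in the reliability test plans of Meeker and Escobar.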
See the sidebar “Useful Sources” for more information on these and other tools.
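Two of the recommendations above concern normality and non-normal data. Here is a minimal sketch of one common remedy, transforming skewed data before applying normal-theory tools; the lognormal "cycle time" data and the choice of a log transform are illustrative assumptions (a Box-Cox analysis would choose the transformation from the data).

```python
import numpy as np

def skewness(x):
    """Sample skewness: zero for symmetric data, positive for a right tail."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

rng = np.random.default_rng(7)
cycle_times = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # invented, skewed

print(round(skewness(cycle_times), 2))           # clearly right-skewed
print(round(skewness(np.log(cycle_times)), 2))   # roughly symmetric after log
```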
SIDEBAR 2
Useful Sources
Articles:
• “Quality Quandaries,” a column in Quality Engineering (a
quarterly journal published by ASQ, www.asq.org).
• “Statistics Roundtable,” a bimonthly column in Quality Progress.
On the Internet:
• NIST/SEMATECH, Engineering Statistics Handbook,
http://www.nist.gov/itl/div898/handbook/index2.htm
• StatSoft, Electronic Statistics Textbook,
http://www.statsoftinc.com/textbook/stathome.html
• Chemical and processing industries: Mixture experiments, the marriage of engi-
neering process control and statistical monitoring, and concepts of combinatorial
chemistry [25,26,27].
• Software development: Design of experiments to identify faults and the use of reli-
ability growth curves.
What to De-emphasize
Which tools in the standard training have been less effective and might be de-emphasized
to make room for additions? Hypothesis tests, such as F-tests and t-tests, top our list.
These tests deal with statistical significance. However, in applications we are generally
interested in practical significance. Unfortunately, statistical significance and practical
significance are far from equivalent.
Hypothesis tests are sample size dependent. With a sufficiently large sample size, you can
disprove most statistical hypotheses, thereby establishing statistical significance, even
though the results are of no practical significance. Conversely, a lack of statistical
significance is often due to an inadequate sample size, rather than to the absence of a true
effect; the effect may well be of practical significance even though the test fails to
detect it.
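The sample-size dependence is easy to demonstrate by simulation; the shift of 0.02 standard deviations below is an invented, practically negligible effect.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SHIFT = 0.02   # hypothetical shift: 2% of a standard deviation

def z_stat(n):
    """One-sample z statistic for the mean (sigma = 1 known)."""
    x = rng.normal(TRUE_SHIFT, 1.0, size=n)
    return float(x.mean() * np.sqrt(n))

print(round(z_stat(100), 2))        # tiny shift usually goes undetected
print(round(z_stat(1_000_000), 2))  # same negligible shift, huge n:
                                    # z lands far beyond the 1.96 cutoff
```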
Statistical interval statements, such as confidence intervals that quantify the statistical
uncertainty, are generally more informative [28]. Similarly, we would de-emphasize the
analysis of variance (except as a tool for estimating components of variation) in favor of
graphical displays.
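To illustrate why an interval is more informative, consider simulated data with an invented mean shift of 0.02 standard deviations: the interval below excludes zero, establishing statistical significance, while its endpoints simultaneously show the shift is far too small to matter practically.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.02, 1.0, size=1_000_000)   # invented data: negligible shift

half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
lo, hi = x.mean() - half, x.mean() + half
print(f"95% confidence interval for the mean: ({lo:.4f}, {hi:.4f})")
# Excludes zero (statistically significant), yet both endpoints sit near
# 0.02; the interval itself shows the effect is practically negligible.
```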
We would also place less emphasis on R-squared (the percent of variability accounted for
by a fitted regression line) as a measure of association because, unlike the standard
deviation about the fitted regression line, it provides limited information on prediction ability [29].
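This point can be demonstrated with simulated data: widening the range of x inflates R-squared, while the residual standard deviation, which is what actually governs prediction error, stays put. All numbers below are invented for illustration.

```python
import numpy as np

def fit_stats(x, y):
    """Fit a least-squares line; return (R^2, residual standard deviation)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s = float(np.sqrt(np.sum(resid ** 2) / (len(x) - 2)))
    r2 = float(1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2))
    return r2, s

rng = np.random.default_rng(3)
results = {}
for spread in (1.0, 10.0):                     # narrow vs. wide range of x
    x = rng.uniform(0.0, spread, 200)
    y = 2.0 * x + rng.normal(0.0, 1.0, 200)    # same line, same noise level
    results[spread] = fit_stats(x, y)
    print(spread, [round(v, 2) for v in results[spread]])
```

R-squared jumps dramatically with the wider x-range even though the prediction error per observation is essentially unchanged.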
References
[1] Gerald J. Hahn, William J. Hill, Roger W. Hoerl and Stephen A. Zinkgraf, “The
Impact of Six Sigma Improvement—A Glimpse Into the Future of Statistics,” The
American Statistician, Vol. 53, No. 3, 1999.
[2] Forest W. Breyfogle III, Implementing Six Sigma—Smarter Solutions Using
Statistical Methods, Second Edition (New York: John Wiley and Sons, 1999).
[3] George Box, William Hunter, and J. Stuart Hunter, Statistics for Experimenters:
An Introduction to Design, Data Analysis and Model Building (New York: John
Wiley and Sons, 1978).
[4] Gerald J. Hahn, “How Abnormal Is Normality?” Journal of Quality Technology,
Vol. 3, 1971.
[5] Gerald J. Hahn, “Whys and Wherefores of Normal Distribution,” ChemTech, Vol.
6, No. 8, 1976.
[6] Christopher Stanard and Brock Osborn, “Six Sigma Quality Beyond the Normal,”
Joint Statistical Meetings, Proceedings of the American Statistical Association
Section on Quality and Productivity, 1999.
[7] W. Edwards Deming, “On the Distinction Between Enumerative and Analytic
Surveys,” Journal of the American Statistical Association, 1953.
[8] Gerald J. Hahn, and William Q. Meeker, Statistical Intervals: A Guide for Practi-
tioners (New York: John Wiley and Sons, 1991).
[9] Box, Hunter and Hunter, Statistics for Experimenters: An Introduction to Design,
Data Analysis and Model Building (see reference 3).
[10] George Box and Patrick Y.T. Liu, “Statistics as a Catalyst to Learning by Scientific Method
Part I—An Example,” Journal of Quality Technology, Vol. 31, No.1, 1999.
[11] George Box and Stephen Jones, “Split Plots for Robust Product and Process
Experimentation,” Quality Engineering, Vol. 13, No. 1, 2000.
[12] William Q. Meeker and Luis Escobar, Statistical Methods for Reliability Data
(New York: John Wiley and Sons, 1998).
[13] Alan Agresti, Categorical Data Analysis (New York: John Wiley & Sons, 1990).
[14] Edward R. Tufte, The Visual Display of Quantitative Information (Cheshire, CT:
Graphics Press, 1983).
[15] Edward R. Tufte, Envisioning Information (Cheshire, CT: Graphics Press, 1990).
[16] Edward R. Tufte, Visual Explanation (Cheshire, CT: Graphics Press, 1996).
[17] William S. Cleveland, Visualizing Data (Summit, NJ: Hobart Press, 1993).
[18] George Box, “Quality Quandaries: Multiple Sources of Variation: Variance
Components,” Quality Engineering, Vol. 11, No. 1, 1998.
[19] Gerald J. Hahn, Necip Doganaksoy and Roger W. Hoerl, “The Evolution of Six
Sigma,” Quality Engineering, Vol. 12, No. 3, 2000.
[20] Richard L. Scheaffer, William Mendenhall and R. Lyman Ott, Elementary Survey
Sampling, 5th edition (Pacific Grove, CA: Duxbury Press, 1995).
[21] Gerald J. Hahn, Necip Doganaksoy and William Q. Meeker, “Reliability
Improvement: Issues and Tools,” Quality Progress, 1999.
[22] Meeker and Escobar, Statistical Methods for Reliability Data (see reference 12).
[23] Sholom M. Weiss and Nitin Indurkhya, Predictive Data Mining: A Practical
Guide (San Francisco: Morgan Kaufmann Publishers, 1998).
[24] David W. Hosmer and Stanley Lemeshow, Applied Logistic Regression (New
York: John Wiley and Sons, 1989).
[25] John A. Cornell, Experiments with Mixtures: Designs, Models, and the Analysis
of Mixture Data (New York: John Wiley and Sons, 1990).
[26] Scott Vander Wiel, William Tucker, Frederick Faltin and Necip Doganaksoy,
“Algorithmic Statistical Process Control: Concepts and an Application,”
Technometrics, Vol. 35, No. 4, 1992.
[27] Stu Borman, “Combinatorial Chemistry,” Chemical and Engineering News,
February 24, 1997.
[28] Hahn and Meeker, Statistical Intervals: A Guide for Practitioners (see reference
8).
[29] Gerald J. Hahn, “The Coefficient of Determination Exposed!” ChemTech, Vol. 3,
No. 10, 1973.
[30] Roger Hoerl, “Six Sigma Black Belts: What Do They Need to Know?” To
appear in Journal of Quality Technology.
G. Hahn, N. Doganaksoy, C. Stanard
Statistical Tools for Six Sigma
2001CRD126, August 2001