
GE Research & Development Center
_____________________________________________________________

Statistical Tools for Six Sigma

G. Hahn, N. Doganaksoy, and C. Stanard

2001CRD126, August 2001


Class 1

Technical Information Series


Copyright © 2001 American Society for Quality. Used with permission.
Corporate Research and Development

Technical Report Abstract Page

Title: Statistical Tools for Six Sigma

Author(s): G. Hahn*, N. Doganaksoy, C. Stanard
Phone: (518) 387-5319 / 8*833-5319

Component: Information Systems Laboratory

Report Number: 2001CRD126          Date: August 2001
Number of Pages: 8                 Class: 1

Key Words: Six Sigma Training, Six Sigma Tools, Six Sigma Best Practices, Six Sigma Resources,
Six Sigma for Specialized Audiences

Six Sigma Programs have proven valuable for improving quality and profitability. Based on the
authors’ experience, this article suggests items and areas to revisit in training and targeted tools as Six
Sigma evolves.

Manuscript received July 19, 2001

*Recently retired manager.


Statistical Tools for Six Sigma
What to emphasize and de-emphasize in training

G.J. Hahn, N. Doganaksoy, and C. Stanard

One of the key themes of Six Sigma is to make decisions based on data. This idea is
reflected by such popular Six Sigma sayings as, “We don’t know what we don’t know
(or don’t measure)” and “In God we trust—all else bring data!”
To ensure we obtain the right data and transform it into actionable information, we
deploy statistical tools. These tools and closely related concepts, such as the design of
experiments, are key elements of Six Sigma training and comprise up to half of the stan-
dard curriculum. The other half consists of various nonstatistical tools, such as failure
mode effects analysis and quality function deployment, and softer organizational skills,
such as team and project leadership, critical to obtaining favorable business results.

Objectives of Statistical Training


The goal of standard Six Sigma statistical training is to give Green Belts and Black Belts
an appreciation of statistical thinking and a hands-on introduction to the tools needed for
successful projects. The aim is not to convert Six Sigma practitioners into statistical
experts. Instead, it is to give them the knowledge essential to their success in obtaining
business results. Professional statisticians can provide added support for complex and
critical applications.
Most people in the quality field today have had some exposure to introductory statistics
prior to Six Sigma training, so the training should supplement this knowledge by pro-
viding a hands-on review and a practical perspective on what is most critical to the prac-
titioner’s work. It is also important to present needed tools often not taught in most intro-
ductory statistics courses, such as the design of experiments.
Six Sigma generally involves eight to 15 days of instruction, given in one-week doses
over a three- to four-month period. See Figure 1 for a typical curriculum [1]. At GE, all
professional employees receive this or similar training. Breyfogle [2] provides a descrip-
tion of statistical tools commonly taught in Six Sigma training.

Cause for Applause
Participants in Six Sigma training are taught key concepts and tools essential to their
success, such as the design of experiments, gauge R&R (repeatability and reproduci-
bility), transfer functions (statistical models describing relationship between variables)
and statistical process control. Also, Six Sigma training emphasizes the power of graphi-
cal tools over formal statistical analyses, confirming the saying that a picture is worth a
thousand words—and even more in the wallet.
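As a small illustration of one of these tools, the core of a gauge study is separating measurement (repeatability) variation from part-to-part variation. The following Python sketch uses invented study dimensions and standard deviations to show the idea:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical gauge study: 10 parts, each measured 3 times by one
# operator. True part-to-part sd = 2.0, gauge repeatability sd = 0.5
# (all numbers are made up for illustration).
parts, reps = 10, 3
true_part = rng.normal(0.0, 2.0, size=parts)
y = true_part[:, None] + rng.normal(0.0, 0.5, size=(parts, reps))

# Repeatability: pooled within-part variance of the replicates.
repeatability_var = y.var(axis=1, ddof=1).mean()

# Part-to-part variance: variance of the part means, less the share of
# repeatability noise that leaks into those means.
part_var = y.mean(axis=1).var(ddof=1) - repeatability_var / reps

print(f"repeatability sd ~ {np.sqrt(repeatability_var):.2f} (true 0.5)")
print(f"part-to-part sd  ~ {np.sqrt(max(part_var, 0.0)):.2f} (true 2.0)")
```

A full gauge R&R study would add several operators and estimate the reproducibility component as well, typically via ANOVA; this sketch covers only the repeatability/part-to-part split.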

• Week 1 (Define and Measure)
  - Six Sigma Overview & the DMAIC Roadmap
  - Process Mapping
  - QFD (Quality Function Deployment)
  - FMEA (Failure Mode and Effects Analysis)
  - Organizational Effectiveness Concepts (e.g., Team Development)
  - Basic Stats Using Minitab
  - Process Capability
  - Measurement Systems Analysis

• Week 2 (Analyze)
  - Statistical Thinking
  - Hypothesis Testing and Confidence Intervals (F, t, etc.)
  - Correlation Analysis
  - Multi-Vari and Regression Analysis

• Week 3 (Improve)
  - ANOVA
  - DOE (Design of Experiments)
    • Factorial Experiments
    • Fractional Factorials
    • Balanced Block Design
    • Response Surface Design

• Week 4 (Control)
  - Control Plans
  - Mistake Proofing
  - Special Applications: Discrete Parts, Continuous Processes, Administration, Design
  - Final Exercise

Notes:
• Project reviews are done each day in weeks 1-4
• Hands-on exercises are on most days
• Three weeks of applied time between sessions

Figure 1. Six Sigma – Typical Training Curriculum

The underlying approach to teaching Six Sigma is a key factor to its success. Some
proven best practices are summarized in the sidebar on the next page. Many of these were
part of the basic Six Sigma philosophy espoused by Mikel Harry and his associates.
Others have evolved over time and are based upon lessons learned in successfully
implementing Six Sigma.

SIDEBAR 1
Proven Best Practices in Six Sigma Training
• Road map: Integrate statistical tools into an overall road map, such as the traditional define,
measure, analyze, improve, control model. This allows students to learn how the tools fit
together. It contrasts with a typical college statistics course in which the training sequence is
determined by mathematical complexity, rather than a tool’s natural place in addressing real-life
problems.
• Theory vs. applications: Teach the tools and their applications, and omit the underlying theory.
• Appreciation of assumptions: Don’t confuse de-emphasizing theory with omitting
assumptions. Six Sigma practitioners need a clear understanding of the key assumptions
underlying the use of each tool, their importance, how to evaluate them in a given situation and
how to proceed if the assumptions are violated.
• Trainers: These people should be enthusiastic, experienced Black Belts and Master Black Belts
who can speak knowledgeably about business applications and who also understand the basic
principles of adult learning.
• Hands-on implementation: Instruction on a tool should come with easy-to-use and,
preferably, familiar software, such as Excel, JMP [1] and Minitab [2].

• Project tie-in: Six Sigma practitioners should learn the tools by immediately applying them to
their projects. However, we must resist being dogmatic, and should not require participants to
apply a specific tool. Instead, we need to emphasize using the method that is most effective for the
problem at hand.

• Tailored training: Adapt the materials and examples to the specialized business needs of the
target audience.
References
1. SAS Institute Inc., JMP User’s Guide, Version 3.1, (SAS Institute, Cary, NC, 1995).
2. Minitab, Minitab User’s Guide 1: Data Graphics and Macros and Minitab User’s Guide 2:
Data Analysis and Quality Tools, release 12 (State College, PA: Minitab, 1997).

Top 10 Recommendations
As the scope of Six Sigma has expanded beyond the original define, measure, analyze,
improve, control model (DMAIC), so have training needs. Tools and concepts that were
not part of the standard Six Sigma package, but have proven their worth in successful
projects, need to be added to the standard curriculum. Our top ten list of recommended
additions is:
1. The shortcomings of much historical data and the paramount importance of getting the
right data [3].
2. The justification for assuming data to be normally distributed and the importance of
normality in different situations [4,5].
3. How to handle non-normal data [6].
4. Why statistical intervals often understate the actual uncertainty. This happens because
Six Sigma projects typically aim to improve a dynamic process (what Deming
referred to as an analytic study), rather than to describe the current process (an
enumerative study) [7,8].
5. Presentation of the design of experiments as a step-by-step learning process, rather
than a one-shot undertaking [9], as illustrated by the well-known helicopter design
example [10]. (This example involves experimenting with factors such as the
length, thickness and weight of the paper to improve flight times of paper
helicopters.)
6. Recognition of the fact that in many designed experiments all variables are not created
equal. Some are harder to change than others, leading to the frequent use of split
plot designs [11].
7. The use of simulation for determining how large a sample is needed and similar
questions in planning investigations. William Meeker and Luis Escobar apply this
approach to various applications in reliability, including planning a product life
test [12].
8. Analysis of categorical data and the desirability of having continuous data, where
possible [13].
9. Additional informative ways of analyzing and displaying data graphically
[14,15,16,17].
10. Tools for quantifying individual sources, or components, of variation [18]. These
have become important as recognition grows that one of Six Sigma's major goals
is the reduction of variability, in addition to, or instead of, improving the mean.
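The simulation approach to sample size planning mentioned above can be sketched in a few lines. In this hypothetical example (the 0.5-sigma shift, the two-sided z-test and the candidate sample sizes are all illustrative assumptions, not values from the article), we estimate the power of the test at several sample sizes:

```python
import numpy as np

rng = np.random.default_rng(1)

def power_by_simulation(n, shift=0.5, sigma=1.0, reps=2000):
    """Estimate the power of a two-sided z-test (alpha = 0.05) to
    detect a true mean shift of `shift`, using n observations."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(shift, sigma, size=n)
        z = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        hits += abs(z) > 1.96
    return hits / reps

# Increase n until the estimated power is adequate (say, 0.90).
for n in (10, 20, 30, 50):
    print(f"n = {n:2d}: estimated power = {power_by_simulation(n):.2f}")
```

The same brute-force pattern extends directly to settings with no textbook formula, such as the reliability test plans of Meeker and Escobar [12].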

See the sidebar “Useful Sources” for more information on these and other tools.

SIDEBAR 2
Useful Sources
Articles:
• “Quality Quandaries,” a column in Quality Engineering (a
quarterly journal published by ASQ, www.asq.org).
• “Statistics Roundtable,” a bimonthly column in Quality Progress.
On the Internet:
• NIST/SEMATECH, Engineering Statistics Handbook,
http://www.nist.gov/itl/div898/handbook/index2.htm
• StatSoft, Electronic Statistics Textbook,
http://www.statsoftinc.com/textbook/stathome.html

Focus on Targeted Audiences


Initially, Six Sigma was aimed principally at managers and practitioners directly involved
in the manufacture of a product, based on the DMAIC model. Since then, it has proven its
applicability to the entire business, from design to commercial transactions, and to enter-
prises other than manufacturing operations, such as banks, hospitals and schools [19].
Many of the statistical tools from the standard training have general applicability for
these audiences, but the examples should be relevant to the participants’ interests. For
example, in presenting Six Sigma for commercial applications, gauge R&R might be
illustrated in terms of the consistency of assessments by different insurance appraisers.
Also, the sampling of human populations for marketing evaluations, and the sampling
of bills to quantify error rates [20], might be used as examples, while downplaying formal
aspects of the design of experiments.
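For instance, the bill-sampling illustration might be worked as follows; the sample size and error count are invented numbers:

```python
import math

# Suppose a random sample of 400 bills turns up 12 with errors
# (hypothetical counts). Estimate the process error rate and attach
# an approximate 95% confidence interval (normal approximation to
# the binomial, reasonable here since the expected error count is
# well above 5).
n, errors = 400, 12
p_hat = errors / n
se = math.sqrt(p_hat * (1.0 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"estimated error rate: {p_hat:.1%} (95% CI: {lo:.1%} to {hi:.1%})")
```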
In addition, different areas of application call for specialized tools that should be included
in targeted training. For example:
• Design for Six Sigma: Tools for reliability improvement and product life data analy-
sis [21,22].
• Commercial and other transactions: Discrete event simulation to model non-
manufacturing processes, and graphical and data mining tools that deal with large
databases, such as classification and regression trees and logistic regression for
categorical response variables [23,24].

• Chemical and processing industries: Mixture experiments, the marriage of engi-
neering process control and statistical monitoring, and concepts of combinatorial
chemistry [25,26,27].
• Software development: Design of experiments to identify faults and the use of reli-
ability growth curves.
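As one sketch of the last point, reliability growth is often modeled by fitting cumulative failures as a power law in cumulative test time (a Duane-style log-log plot); the failure counts below are fabricated for illustration:

```python
import numpy as np

# Fabricated cumulative failure counts at successive test times.
t = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])   # test hours
n = np.array([8.0, 15.0, 24.0, 37.0, 55.0, 80.0])       # cum. failures

# Duane-style model: N(t) = lam * t**beta, i.e., linear in log-log space.
beta, log_lam = np.polyfit(np.log(t), np.log(n), 1)

# beta < 1 indicates reliability growth: the failure rate is falling
# as faults are found and fixed.
print(f"growth exponent beta = {beta:.2f}")
```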

What to de-emphasize
Which tools in the standard training have been less effective and might be de-emphasized
to make room for additions? Hypothesis tests, such as F-tests and t-tests, top our list.
These tests deal with statistical significance. However, in applications we are generally
interested in practical significance. Unfortunately, statistical significance and practical
significance are far from equivalent.
Hypothesis tests are sample size dependent. With a sufficiently large sample size, you can
disprove most statistical hypotheses, thereby establishing statistical significance even
though the results are of no practical importance. Conversely, a lack of statistical
significance is often due to an inadequate sample size, rather than the absence of a true
effect; an effect of real practical importance may thus fail to register as statistically
significant.
Statistical interval statements, such as confidence intervals that quantify the statistical
uncertainty, are generally more informative [28]. Similarly, we would de-emphasize the
analysis of variance (except as a tool for estimating components of variation) in favor of
graphical displays.
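A quick simulation makes the contrast concrete. In this made-up example, a very large sample flags a practically trivial mean shift as highly significant, while the confidence interval shows at a glance that the shift is tiny:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical process: the target is 100.0, the true mean is 100.05 --
# a shift far too small to matter in most applications.
n = 100_000
x = rng.normal(loc=100.05, scale=1.0, size=n)

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)

# z-test of H0: mean = 100.0. The huge n makes the trivial shift
# "highly significant" (|z| far beyond 1.96).
z = (mean - 100.0) / se
print(f"z = {z:.1f}")

# The 95% confidence interval is the more useful summary: it shows the
# shift is estimated precisely and is practically negligible.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI for the mean: ({lo:.3f}, {hi:.3f})")
```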
We would also place less emphasis on R-squared (the percent of variability accounted for
by a fitted regression line) as a measure of association because this, unlike the standard
deviation of a fitted regression, provides limited information on prediction ability [29].
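The limitation is easy to demonstrate with simulated data (all numbers invented): two studies with the same model and the same residual standard deviation, and hence the same prediction precision, can show very different R-squared values simply because the spread of the predictor differs:

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_stats(x, y):
    """Fit a least-squares line; return R^2 and residual sd."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var(), resid.std(ddof=2)

# Same true relationship (y = 2x + noise, noise sd = 1) in both
# studies; only the range of x differs.
x_wide = rng.uniform(0.0, 10.0, 200)
x_narrow = rng.uniform(4.5, 5.5, 200)
r2_w, s_w = fit_stats(x_wide, 2.0 * x_wide + rng.normal(0.0, 1.0, 200))
r2_n, s_n = fit_stats(x_narrow, 2.0 * x_narrow + rng.normal(0.0, 1.0, 200))

print(f"wide x range:   R^2 = {r2_w:.2f}, residual sd = {s_w:.2f}")
print(f"narrow x range: R^2 = {r2_n:.2f}, residual sd = {s_n:.2f}")
```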

The Evolution of Six Sigma


Six Sigma training is meeting its goals by impacting the quality experienced by custom-
ers, streamlining the hidden factory and strengthening the bottom line for the corporations
employing it. Six Sigma is also revolutionizing the way people and organizations work,
through its key concept of basing decisions on data, not hunches. It is providing an
appreciation of variability in all aspects of business operations, from product design and
manufacture to order fulfillment. In addition, its use is being expanded to an
ever-increasing range of applications.
Training of new and previously untrained employees, and refresher courses for previously
trained employees, need to reflect Six Sigma's continuing evolution and leverage the
lessons learned: adding proven concepts and tools, de-emphasizing those that have not
proven themselves and tailoring our teachings to the specialized business needs of our
audience.
A forthcoming article by our colleague Roger Hoerl provides a more general discussion
[30].

References
[1] Gerald J. Hahn, William J. Hill, Roger W. Hoerl and Stephen A. Zinkgraf, “The
Impact of Six Sigma Improvement—A Glimpse Into the Future of Statistics,” The
American Statistician, Vol. 53, No. 3, 1999.
[2] Forest W. Breyfogle III, Implementing Six Sigma—Smarter Solutions Using
Statistical Methods, Second Edition (New York: John Wiley and Sons, 1999).
[3] George Box, William Hunter, and J. Stuart Hunter, Statistics for Experimenters:
An Introduction to Design, Data Analysis and Model Building (New York: John
Wiley and Sons, 1978).
[4] Gerald J. Hahn, “How Abnormal Is Normality?” Journal of Quality Technology,
Vol. 3, 1971.
[5] Gerald J. Hahn, “Whys and Wherefores of Normal Distribution,” ChemTech, Vol.
6, No. 8, 1976.
[6] Christopher Stanard and Brock Osborn, “Six Sigma Quality Beyond the Normal,”
Joint Statistical Meetings, Proceedings of the American Statistical Association
Section on Quality and Productivity, 1999.
[7] W. Edwards Deming, “On the Distinction Between Enumerative and Analytic
Survey,” Journal of the American Statistical Association, 1953.
[8] Gerald J. Hahn, and William Q. Meeker, Statistical Intervals: A Guide for Practi-
tioners (New York: John Wiley and Sons, 1991).
[9] Box, Hunter and Hunter, Statistics for Experimenters: An Introduction to Design,
Data Analysis and Model Building (see reference 3).
[10] George Box and Patrick Y.T. Liu, “Statistics as a Catalyst to Learning by Scientific Method
Part I—An Example,” Journal of Quality Technology, Vol. 31, No.1, 1999.

[11] George Box and Stephen Jones, “Split Plots for Robust Product and Process
Experimentation,” Quality Engineering, Vol. 13, No. 1, 2000.
[12] William Q. Meeker and Luis Escobar, Statistical Methods for Reliability Data
(New York: John Wiley and Sons, 1998).
[13] Alan Agresti, Categorical Data Analysis (New York: John Wiley & Sons, 1990).
[14] Edward R. Tufte, The Visual Display of Quantitative Information (Cheshire, CT:
Graphics Press, 1983).
[15] Edward R. Tufte, Envisioning Information (Cheshire, CT: Graphics Press, 1990).
[16] Edward R. Tufte, Visual Explanation (Cheshire, CT: Graphics Press, 1996).
[17] William S. Cleveland, Visualizing Data (Summit, NJ: Hobart Press, 1993).
[18] George Box, "Quality Quandaries: Multiple Sources of Variation: Variance
Components," Quality Engineering, Vol. 11, No. 1, 1998.

[19] Gerald J. Hahn, Necip Doganaksoy and Roger W. Hoerl, “The Evolution of Six
Sigma,” Quality Engineering, Vol. 12, No. 3, 2000.
[20] Richard L. Scheaffer, William Mendenhall and R. Lyman Ott, Elementary Survey
Sampling, 5th edition (Pacific Grove, CA: Duxbury Press, 1995).
[21] Gerald J. Hahn, Necip Doganaksoy and William Q. Meeker, “Reliability
Improvement: Issues and Tools,” Quality Progress, 1999.
[22] Meeker and Escobar, Statistical Methods for Reliability Data (see reference 12).
[23] Sholom M. Weiss and Nitin Indurkhya, Predictive Data Mining: A Practical
Guide (San Francisco: Morgan Kaufmann Publishers, 1998).
[24] David W. Hosmer and Stanley Lemeshow, Applied Logistic Regression (New
York: John Wiley and Sons, 1989).
[25] John A. Cornell, Experiments with Mixtures: Designs, Models, and the Analysis
of Mixture Data (New York: John Wiley and Sons, 1990).
[26] Scott Vander Wiel, William Tucker, Frederick Faltin and Necip Doganaksoy,
“Algorithmic Statistical Process Control: Concepts and an Application,”
Technometrics, Vol. 35, No. 4, 1992.
[27] Stu Borman, “Combinatorial Chemistry,” Chemical and Engineering News,
February 24, 1997.
[28] Hahn and Meeker, Statistical Intervals: A Guide for Practitioners (see reference 8).
[29] Gerald J. Hahn, “The Coefficient of Determination Exposed!” ChemTech, Vol. 3,
No. 10, 1973.
[30] Roger Hoerl, “Six Sigma Black Belts: What Do They Need to Know?” To
appear in Journal of Quality Technology.

