
MEASUREMENT SYSTEM ANALYSIS

A. CHAPTER OBJECTIVES
B. MEASUREMENT SYSTEM ANALYSIS
C. TERMINOLOGY
D. MEASUREMENT ERROR
E. ACCURACY, LINEARITY AND STABILITY
F. REPEATABILITY & REPRODUCIBILITY (R & R)
G. ATTRIBUTE GAGE STUDY
H. ATTRIBUTE BREAKOUT EXERCISE (TEAM)
I. VARIABLE GAGE STUDY
J. ACCEPTABILITY CRITERIA
K. VARIABLE GAGE R & R EXERCISE
L. MEASUREMENT SYSTEM STUDY GUIDELINES
M. APPLICATION EXAMPLE
N. VARIABLE BREAKOUT EXERCISE (TEAM)

12-1

CHAPTER OBJECTIVES
The objectives of this chapter are as follows.
To explain different terminology used in measurement systems analysis.
To identify, evaluate, and control the primary sources of measurement errors.
To learn how to perform Gage Linearity and Accuracy Studies.
To learn how to perform Gage Reproducibility & Repeatability Studies (Gage
R & R).
To analyze and interpret Gage R & R results using Minitab.
To identify acceptability criteria in evaluating measurement systems.
To point out guidelines in the conduct of measurement system studies.

12-2

MEASUREMENT SYSTEM ANALYSIS


MANUFACTURING
Manufacturing uses many types of measuring systems to
make decisions about a product's or process's acceptability.
Leviton uses variable measuring instruments such as
micrometers, calipers and optical comparators.
Attribute visual inspection (pass, fail) is another critical
aspect of our measurement system.

12-3

MEASUREMENT SYSTEM ANALYSIS


MANUFACTURING
The question is, "How exact is our measurement system?"
When an appraiser/operator does not measure a part
consistently, the expense to Leviton can be very great.
Satisfactory parts are rejected.
Unsatisfactory parts are accepted.

12-4

MEASUREMENT SYSTEM ANALYSIS


SERVICE
Service processes have the same measurement system
issues.
Consider this example:
A function receives an input (electronic, paper, verbal,
etc.), interprets the input data, assigns a value to the
information and inputs the value into our system.
We make business decisions based on the assumption
that the data in the system is correct.
Therefore, we must verify that the information entered into
the system is correct.
The system is only as good as the data that goes into it.

12-5

MEASUREMENT SYSTEM ANALYSIS


SERVICE
Organizations frequently overlook the impact of not having
a quality measurement system.
In most cases, they do not even consider that their
measurements might not be exact.
Such assumptions and inadequate considerations lead to
questionable analysis and conclusions.

12-6

TERMINOLOGY
Accuracy (Bias) - the difference between the observed average of
measurements and the reference value.
Linearity - the difference in the bias values through the expected
operating range of the measuring instrument.
Repeatability - the variability resulting from successive trials
under defined conditions of measurement. The best term for
repeatability is within-system variability, where the conditions of
measurement are fixed (fixed part, instrument, method, operator,
etc.).
Reproducibility - the variation in the average of
measurements caused by normal conditions of change in the
measurement process.
Stability (or drift) - the total variation in the measurements obtained
with a measurement system on the same master or parts when
measuring a single characteristic over an extended time period.

12-7

MEASUREMENT ERROR

Averages
Measurement System Bias - determined through an Accuracy Study:
μ_total = μ_product + μ_measurement

Variability
Measurement System Variability - determined through an R&R Study:
σ²_total = σ²_product + σ²_measurement

It is bad enough to have product or process variability at an undesirable level;
it should not be compounded by adding in measurement inaccuracy and variation,
termed Measurement Error.

12-8

SOURCES OF VARIABILITY

[Cause-and-effect (fishbone) diagram: Variation in Measurement System,
with six branches and their contributing factors:]
Measurements
Materials - cleanliness, temperature, dimension, weight, corrosion,
hardness, conductivity, density
Environment - humidity, cleanliness, vibration, voltage variation,
temperature fluctuations
Men - procedure, fatigue, attention, calibration error, interpretation,
speed, coordination, knowledge, dexterity, vision
Methods - training, design, frequency, precision, maintenance standard,
calibration, sufficient work time, resolution, standard procedure,
stability, operator techniques, ease of use
Machines - mechanical integrity, wear, electrical instability

Variation exists within the measurement system; we must
identify and quantify the source.
As indicated above, variation can be attributed to 6 specific
factors.
12-9

ACCURACY AND PRECISION


Measurement error can be classified into two categories:

Accuracy describes the difference between the measurements and
the part's actual value. There are three components of Accuracy:
ACCURACY
LINEARITY
STABILITY
Precision describes the variation you see when you measure the
same parts repeatedly with the same device. There are two
components of Precision:
REPEATABILITY
REPRODUCIBILITY

Each component will be defined in detail.

12-10

Graphical differentiation between:

ACCURACY
VS.
PRECISION

accurate and precise

precise but not accurate

accurate but not precise

neither accurate nor precise

12-11

Accuracy relates to true value while precision deals with consistency.

CATEGORIES OF MEASUREMENT ERROR


WHICH AFFECT LOCATION
Accuracy

Linearity

Stability

Shown here are the three categories of measurement error which affect location.
These categories are evaluated by taking multiple, repeated measurements on
parts and comparing them to a master or standard part.

12-12

ACCURACY
Accuracy is calculated by taking multiple measurements on a part
and calculating the difference between the observed average and
the reference value.
Reference
Value

Observed Average Value


How accurate is my measuring instrument when compared
to a master value?
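The calculation is just the observed average minus the reference value. A minimal sketch with hypothetical readings against a 5.00 master:

```python
# Hypothetical repeated readings of one master part with a known
# reference value of 5.00.
measurements = [5.03, 4.99, 5.02, 5.05, 5.01, 5.02]
reference_value = 5.00

observed_average = sum(measurements) / len(measurements)
bias = observed_average - reference_value
print(f"observed average = {observed_average:.4f}, bias = {bias:+.4f}")
```

A positive bias means the gage reads high on average; a negative bias means it reads low.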

12-13

LINEARITY

Linearity is the difference in the accuracy values of a measuring
instrument through the expected operating range of the gage.

[Two regression plots of Trials vs. Standard over the range 10-50,
titled "Good Linearity" and "Bad Linearity" ("Linearity is Not Good");
fitted lines Y = 0.934227 + 0.994959 X (R-Squared = 0.981) and
Y = 0.245295 + 0.99505 X (R-Squared = 0.982).]

NOTE: THE FIRST CHART INDICATES GOOD LINEARITY
DUE TO VARIATION BEING CONSTANT THROUGH
THE OPERATING RANGE OF THE GAGE.

NOTE: THE SECOND CHART INDICATES BAD LINEARITY
DUE TO INCREASING VARIATION AT THE HIGHER
END OF THE OPERATING RANGE OF THE GAGE.

12-14

Does my measuring instrument have the same accuracy for all sizes of objects being measured?
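A linearity study boils down to regressing bias against the reference value; a slope near zero means the bias does not change across the gage's range. A hedged sketch with hypothetical data (Minitab's Gage Linearity and Bias Study fits the same least-squares line):

```python
# Hypothetical (reference value, bias) pairs covering the gage's range.
refs = [2.0, 2.0, 4.0, 4.0, 6.0, 6.0, 8.0, 8.0, 10.0, 10.0]
bias = [0.5, 0.4, 0.3, 0.35, 0.1, 0.15, -0.1, -0.05, -0.3, -0.25]

# Ordinary least-squares fit of bias on reference value.
n = len(refs)
mean_x = sum(refs) / n
mean_y = sum(bias) / n
sxx = sum((x - mean_x) ** 2 for x in refs)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(refs, bias))
slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(f"bias = {intercept:.4f} + ({slope:.4f}) * reference")
# A nonzero slope, as here, means the bias drifts with part size,
# i.e. the gage does not have the same accuracy across its range.
```

The closer the fitted slope is to zero, the better the gage linearity.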

STABILITY (DRIFT)
Stability is determined by measuring a single characteristic on the
same master part(s) over an extended time period. These are
monitored and evaluated using graphical output such as control
charts (Control Phase).

[Chart: the measurement distribution at Time-1 and Time-2, shifted in
magnitude over time; the drift points to the required frequency of
mean-center calibration.]

How stable is my measuring instrument over an extended time period?

12-15

GAGE LINEARITY AND


ACCURACY STUDY
EXAMPLE
Five parts are selected that represent the expected range of the
measurements. Each part was measured by layout inspection to
determine its reference value. Then, one operator measured each
part twelve times in random sequence.
The data are stored in the GAGELIN.mtw file in the GBData directory.
Perform a Gage Linearity Study using Minitab and interpret the
results.

12-16

LINEARITY EXERCISE
Perform the following steps:
Open the file GAGELIN.mtw under the GBData folder.
1. Choose Stat > Quality Tools > Gage Study > Gage Linearity and Bias Study.
2. In Part numbers:, select C1 Part.
3. In Reference values:, select C2 Master.
4. In Measurement data:, select C3 Response.
5. In Process Variation, type 14.1941 (required only for the gage accuracy part of the
study).
6. Click OK.
*14.1941 is a value associated with a certain process variation which is being
used only for illustration purposes. We will learn more about it during the
Variable Gage R & R discussion later in the chapter.

12-17

GAGE LINEARITY STUDY

MINITAB OUTPUT (GAGELIN.mtw)
Gage Linearity and Bias Study for Response

Gage Linearity
Predictor   Coef      SE Coef   P
Constant    0.73667   0.07252   0.000
Slope      -0.13167   0.01093   0.000

S = 0.23954    R-Sq = 71.4%
Linearity = 1.86889    %Linearity = 13.2

Gage Bias
Reference   Bias        %Bias   P
Average    -0.053333    0.4     0.040
2           0.491667    3.5     0.000
4           0.125000    0.9     0.293
6           0.025000    0.2     0.688
8          -0.291667    2.1     0.000
10         -0.616667    4.3     0.000

[Graphs: Bias vs. Reference Value with the fitted regression line and
95% CI, average bias per reference value; bar chart of Percent of
Process Variation for Linearity and Bias.]

10% linearity or less is acceptable (the closer the slope is to zero, the better the gage linearity).
Less than 1% bias (%Bias) is acceptable. If bias is greater than 1%, consider calibrating or
changing the gage.
12-18

CATEGORIES OF MEASUREMENT ERROR


WHICH AFFECT THE SPREAD

Repeatability

Reproducibility

There are two categories of measurement error which affect
the spread:
Repeatability and Reproducibility.

12-19

REPEATABILITY AND REPRODUCIBILITY

Repeatability and Reproducibility are important contributors to
measurement error affecting the spread of the distribution.
However, each one focuses on different specific factors.

[Breakdown of Observed Process Variation:]
Observed Process Variation
  Actual Process Variation
    Long-term Process Variation
    Short-term Process Variation
    Variation w/i sample
  Measurement Variation
    Variation due to gage: Repeatability, Accuracy, Stability, Linearity
    Variation due to operators: Reproducibility

We will look at Repeatability and Reproducibility as these are the
primary contributors to measurement error.

12-20

REPEATABILITY OF THE
MEASUREMENT PROCESS
Implies that the measurement process variability is
consistent.
It is the variation in the measurements obtained with one
measuring instrument when one operator uses the same
instrument for measuring identical characteristics on the
same parts.

12-21

REPRODUCIBILITY OF THE
MEASUREMENT SYSTEM
Implies that variability among the operators is consistent.
It is the variation in the average of the measurements
made by different operators using the same measuring
instrument when measuring identical characteristics of the
same parts.
Operator-B

Operator-C
Operator-A
Reproducibility

12-22

ATTRIBUTE GAGE STUDY

NO-GO

GO

An Attribute Gage either accepts or rejects a part based on comparison to a


known set of limits or attributes.

Unlike a variable gage, an attribute gage cannot quantify the degree to


which a part is good or bad.
12-23

EXAMPLE-ATTRIBUTE GAGE STUDY


The Morganton Six Sigma Team is working to improve the final inspection of the Catalog
5601 Decora Rocker Switch. The Critical to Quality characteristic measured during
inspection is the aesthetic condition, or appearance of the switch. Before the switches are
packed out, line inspectors perform a visual inspection for defects, the result of which is a
Go/No-go response. One such defect is spots on the rocker - a switch with small black
spots. The inspection methodology is highly subjective and an Attribute Gage Study is
required to evaluate the measurement system - are the line inspectors accurately and
consistently identifying bad parts and at the same time not rejecting good parts?
20 switches - some with spots, some without spots, and some with questionable spots - have been selected.
The rocker samples are numbered and the individual appearance, or attribute of each
sample is noted and recorded in Minitab in the ATTRIBUTE column. This is the attribute by
which inspection response will be measured; how consistently sample sets are evaluated
against a known standard. Because the inspection response is Go/No-go, samples are
rated either Good (G) or Bad (B). Note: Samples with questionable spots are considered
Good.
The 20 samples were displayed in random order. Three line inspectors participated in the
study. Each inspector examined each sample twice (2 separate trials). The sample
numbers were not visible and sample order was randomized after each inspection/trial.

12-24

SPOTS ON THE ROCKER

12-25

ATTRIBUTE GAGE STUDY WITH MINITAB


Attribute column - known good/bad
parts (G/B) predetermined by team
Inspectors column - identifies line
inspectors one, two and three
Sample column - identifies samples
evaluated
Rating column - inspection
response (G/B)
Note: This data is stacked - all data
points relating to the column heading
are contained in that column.
Project team members marked down the (G/B) responses and recorded the
data in Minitab. Two separate trials were run.

12-26

ATTRIBUTE GAGE STUDY WITH MINITAB


Perform the following steps:

Open the file Attgage.mtw located in GBData.


1. Choose Stat > Quality Tools > Attribute Agreement Analysis
2. In Attribute column:, select C3 Rating.
3. In Samples:, select C2 Sample.
4. In Appraisers:, select C1 Inspector.
Note: Minitab defaults to single column data. Select multiple
columns to utilize unstacked data.
5. In Known standard/attribute, select C4 Attribute. Attribute type
can be numeric or text, but must match Response type.
6. Click OK

12-27

Point out the Session Window output; we will look at each section on
the following slides.

Within Appraisers
Assessment Agreement
Appraiser  # Inspected  # Matched  Percent  95% CI
1          20           20         100.00   (86.09, 100.00)
2          20           17          85.00   (62.11, 96.79)
3          20           18          90.00   (68.30, 98.77)
# Matched: Appraiser agrees with him/herself across trials.
[Individual Response CONSISTENCY]

Each Appraiser vs Standard
Assessment Agreement
Appraiser  # Inspected  # Matched  Percent  95% CI
1          20           20         100.00   (86.09, 100.00)
2          20           16          80.00   (56.34, 94.27)
3          20           16          80.00   (56.34, 94.27)
# Matched: Appraiser's assessment across trials agrees with the known standard.
[Individual Response ACCURACY]

Assessment Disagreement
Appraiser  # G / B  Percent  # B / G  Percent  # Mixed  Percent
1          0         0.00    0        0.00     0         0.00
2          1         9.09    0        0.00     3        15.00
3          2        18.18    0        0.00     2        10.00
# G / B: Assessments across trials = G / standard = B.
# B / G: Assessments across trials = B / standard = G.
# Mixed: Assessments across trials are not identical.

Between Appraisers
Assessment Agreement
# Inspected  # Matched  Percent  95% CI
20           15         75.00    (50.90, 91.34)
# Matched: All appraisers' assessments agree with each other.
[Collective Response CONSISTENCY]

All Appraisers vs Standard
Assessment Agreement
# Inspected  # Matched  Percent  95% CI
20           15         75.00    (50.90, 91.34)
# Matched: All appraisers' assessments agree with the known standard.
[Collective Response ACCURACY]
12-28

MINITAB OUTPUTS
When performing an Attribute study,
Minitab produces 4 outputs:
1. Assessment Within Appraiser
2. Assessment of Each Appraiser vs.
Standard
3. Assessment Between Appraisers
4. Assessment of All Appraisers vs.
Standard
Each output displays a unique
characteristic of the measurement
system.

12-29

MINITAB OUTPUTS
Within Appraisers
Assessment Agreement
Appraiser  # Inspected  # Matched  Percent  95% CI
1          20           20         100.00   (86.09, 100.00)
2          20           17          85.00   (62.11, 96.79)
3          20           18          90.00   (68.30, 98.77)
# Matched: Appraiser agrees with him/herself across trials.

Calculates the number of responses consistent across the
sample set per inspector - how consistently inspectors are
able to repeat their own measurements.
Note: These values do not reflect ACCURACY.
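The underlying arithmetic is simple: count the samples on which an appraiser's ratings agree across all trials. A sketch with hypothetical ratings for one appraiser over two trials of 10 samples:

```python
# Hypothetical ratings from one appraiser over two trials of 10 samples.
trial1 = ["G", "G", "B", "G", "B", "G", "B", "G", "G", "B"]
trial2 = ["G", "G", "B", "B", "B", "G", "B", "G", "G", "G"]

# An appraiser "matches" on a sample when all of his/her trials agree.
matched = sum(1 for a, b in zip(trial1, trial2) if a == b)
percent = 100.0 * matched / len(trial1)
print(f"# Inspected = {len(trial1)}, # Matched = {matched}, Percent = {percent:.2f}")
```

Note this measures only self-consistency; an appraiser who is consistently wrong would still score 100%.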

12-30

MINITAB OUTPUTS
Each Appraiser vs Standard
Assessment Agreement
Appraiser  # Inspected  # Matched  Percent  95% CI
1          20           20         100.00   (86.09, 100.00)
2          20           16          80.00   (56.34, 94.27)
3          20           16          80.00   (56.34, 94.27)
# Matched: Appraiser's assessment across trials agrees with the
known standard.

Assessment Disagreement
Appraiser  # G / B  Percent  # B / G  Percent  # Mixed  Percent
1          0         0.00    0        0.00     0         0.00
2          1         9.09    0        0.00     3        15.00
3          2        18.18    0        0.00     2        10.00
# G / B: Assessments across trials = G / standard = B.
# B / G: Assessments across trials = B / standard = G.
# Mixed: Assessments across trials are not identical.

Calculates the proportion of appraiser responses matching a known
sample attribute - how consistent inspector responses are vs. the
known standard. These values measure response ACCURACY to the
attribute.

12-31

MINITAB OUTPUTS
Between Appraisers
Assessment Agreement
# Inspected  # Matched  Percent  95% CI
20           15         75.00    (50.90, 91.34)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard
Assessment Agreement
# Inspected  # Matched  Percent  95% CI
20           15         75.00    (50.90, 91.34)
# Matched: All appraisers' assessments agree with the known
standard.

Between Appraisers - calculates the total number of responses matched
among all appraisers.
All Appraisers vs Standard - calculates the proportion of all inspector
responses matching the known sample attribute. This value should be at
least 95%.
12-32

MINITAB OUTPUTS - GRAPHS

[Assessment Agreement graphs: two panels, "Within Appraisers" and
"Appraiser vs Standard", plotting the Percent matched with a 95.0% CI
for each Appraiser (1, 2, 3).]

Within Appraisers displays the consistency of appraiser responses across
all trials. This graph will be displayed only when there are multiple trials.
Appraiser vs. Standard displays the matched proportion of appraiser
responses vs. the known standard (response accuracy). This graph will be
displayed only when the attribute is known and entered in Minitab.
12-33

IMPROVEMENTS

Due to the results of the initial Attribute study, the Morganton team
implemented the following changes to the measurement system:
To improve consistency and accuracy among inspectors, a sample
board of Go/No-go switches was created. The samples are used as a
reference during inspection, reducing inspector subjectivity.
Inspectors received additional training based on the sample board
criteria.
After improvements, a second study was conducted, with All Appraisers vs.
Standard and Between Appraisers scores improving from 75% to 90%.

12-34

GO/NO-GO SAMPLE BOARD

12-35

ATTRIBUTE GAGE R&R


BREAKOUT EXERCISE (TEAM)
This will be a team exercise. Instructors will distribute bags of
Peanut M&M candies to each team. Data will be entered in
Peanut.mtw file located in GBData.
Materials Needed for the Exercise
One bag of Peanut M&M candies (per team)
Umm... da good
stuff!

12-36

ATTRIBUTE GAGE R&R


BREAKOUT
EXERCISE
TASK 1
1. Each Team will designate (1) Master Appraiser, (1) Recorder and (3) Inspectors. If the size of
the team is less than 5, the Master Appraiser and the Recorder can be the same person, or
use only two Appraisers.
2. The Master Appraiser will take out (20) M&M candies and line them up as samples 1-20.
3. Based on general visual criteria (do not touch/handle the samples), the Master Appraiser will
determine which M&Ms are good/bad and record observations in the Attribute column (C3) of
the Peanut.mtw file located in GBData.
Important: The Master Appraiser's criteria will not be discussed with the Inspectors.
4. Inspectors will take turns visually inspecting the Peanut M&M candies to determine if the
samples are good/bad and call out responses to the Recorder, who enters the responses in the
Rating columns (C4, C5 & C6) of the Peanut.mtw file. Important: Inspectors should not discuss
individual responses with one another - all determinations are to remain independent.
TASK 2
1. Upon completion of first trial, Master Appraiser and Inspectors will determine the criteria for
defects prior to conducting a second trial.
2. Run the second trial in the manner described above.
3. Analyze/compare the data and be prepared to discuss the results of Tasks 1 and 2.

12-37

ATTRIBUTE STUDY - PEANUT.MTW


This exercise will be using unstacked data - the previous
Attribute example utilized stacked data.
Perform the following steps:

1. Choose Stat > Quality Tools > Attribute


Agreement Analysis
2. Click on Multiple columns
3. Click inside dialogue box and select columns C4, C5,
and C6.
4. Enter number of appraisers (3) and number of trials (1).
5. Enter C3 (Attribute) for Known standard/attribute.
6. Click OK.
Repeat steps for Task 2 using appropriate data columns.

12-38

VARIABLE GAGE STUDY

Rivet

Buckling Height
Measure Here
Variables are elements subject to variation.
A Variable Gage measures the degree to which a part varies in relation
to a certain specification.
It is represented by a quantifiable scale of measure or data.
Variable Gage studies are better than Attribute Gage studies because they offer more
information on the actual behavior of the process being studied.
12-39

VARIABLE GAGE STUDY WITH MINITAB


The Gage R&R study function in Minitab software will enable you to analyze and interpret these data.

This is a sample of Minitab's data window with measurement data, including part
numbers, operators and actual part measurements.
You can use either of two methods in Minitab to estimate repeatability and reproducibility:
the ANOVA method or the Xbar and R Chart method. ANOVA, the more powerful method, is a
statistical technique that is used to estimate and analyze the variance, whereas the
Xbar and R chart is a graphical method known as the Control Chart method.

12-40

EVALUATING
REPEATABILITY AND REPRODUCIBILITY
WITH ANOVA
Components of Measurement System Variation:
Variation due to gage: Repeatability
Variation due to operators (Operator, Operator by Part): Reproducibility

The ANOVA method of analysis will provide a more accurate assessment of the
measurement system study than the Xbar and R Chart method.
Our discussion will focus on the ANOVA method (use the Minitab Help menu and manual
if you are interested in learning the Xbar and R method).

12-41

ANOVA (ANALYSIS OF VARIANCE)


ANOVA / Variance Component Analysis:
σ²_total = σ²_part + σ²_operator + σ²_operator×part + σ²_repeatability
(the measurement components are Operator, the Operator-by-part
interaction, and Repeatability)

ANOVA stands for Analysis of Variance.


Variance is defined as the square of the standard deviation.
ANOVA is a standard statistical technique which attempts to analyze the variation
between measurement observations and then identify the important contributing
factors.
When doing a Gage R&R, ANOVA breaks down the measurement system
variation into Reproducibility and Repeatability.
12-42

ANOVA - COMPONENTS OF VARIATION


The ANOVA method partitions the total variance in the measurements into
different components. For a traditional Gage R&R study, the variation is
broken down into four categories of components:
Operator - quantifies the variation observed between different operators who are
measuring the same set of parts.
Part-to-Part - quantifies the variation observed for the characteristic measured
on different parts, regardless of the operator.
Operator by Part - explores the interaction between each operator and part. It
quantifies the variation between average part measurements for each operator,
which accounts for situations where, for example, one operator may obtain more
variation when measuring smaller parts rather than larger parts, or vice versa.
Repeatability - quantifies the variation due to the instrument itself and the
position of the parts in the instrument.
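These four components can be recovered from the two-way ANOVA mean squares with the standard expected-mean-squares formulas. The sketch below uses the mean squares from the GAGEAIAG output shown later in this chapter (10 parts, 3 operators, 2 trials) and reproduces Minitab's VarComp column to rounding:

```python
# Variance components recovered from two-way ANOVA mean squares via the
# standard expected-mean-squares formulas. Mean squares are taken from the
# GAGEAIAG output in this chapter (p = 10 parts, o = 3 operators,
# n = 2 trials per part).
ms_part, ms_oper, ms_interact, ms_repeat = 0.228745, 0.024000, 0.005759, 0.001292
p, o, n = 10, 3, 2

var_repeat = ms_repeat                          # Repeatability
var_interact = (ms_interact - ms_repeat) / n    # Operator by Part
var_oper = (ms_oper - ms_interact) / (p * n)    # Operator
var_part = (ms_part - ms_interact) / (o * n)    # Part-to-Part

for name, v in [("Repeatability", var_repeat), ("Operator*Part", var_interact),
                ("Operator", var_oper), ("Part-To-Part", var_part)]:
    print(f"{name:<14} {v:.7f}")
```

Reproducibility is then the sum of the Operator and Operator-by-Part components, and Total Gage R&R is Repeatability plus Reproducibility.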

12-43

VARIABLE GAGE R & R EXERCISES


In these exercises, we will do a Gage R&R study on two data sets:
One in which measurement system variation contributes little to the
overall observed variation (GAGEAIAG.mtw), and one in which
measurement system variation contributes a lot to the overall observed
variation (GAGE2.mtw). Analyze and interpret data using the ANOVA
method. The first one we will do as a group and the second one will be
an individual exercise.
For the GAGEAIAG data set, ten parts were selected that represent the
expected range of the process variation. Three operators measured the
ten parts, two times per part, in a random order.
Open the Minitab file GAGEAIAG.mtw in GBdata. Perform the exercise
with the instructor.

12-44

VARIABLE GAGE R & R


(ANOVA METHOD)
MINITAB APPLICATION
ANOVA method with GAGEAIAG.mtw data
Perform the following Steps:
1. Choose Stat > Quality Tools >
Gage Study > Gage R&R Study (Crossed).
2. In Part numbers:, select C1 Part
3. In Operators:, select C2 Operator
4. In Measurement data:, select C3 Response
5. In Method of Analysis, click ANOVA.
6. Click OK.

12-45

MINITAB'S ANOVA OUTPUT

MINITAB SESSION WINDOW OUTPUT
Gage R&R Study - ANOVA Method

Two-Way ANOVA Table With Interaction
Source           DF   SS       MS        F        P
Part              9   2.05871  0.228745  39.7178  0.000
Operator          2   0.04800  0.024000   4.1672  0.033
Part * Operator  18   0.10367  0.005759   4.4588  0.000
Repeatability    30   0.03875  0.001292
Total            59   2.24913

Gage R&R
                            %Contribution
Source           VarComp    (of VarComp)
Total Gage R&R   0.0044375   10.67
Repeatability    0.0012917    3.10
Reproducibility  0.0031458    7.56
Operator         0.0009120    2.19
Operator*Part    0.0022338    5.37
Part-To-Part     0.0371644   89.33
Total Variation  0.0416019  100.00

                              Study Var  %Study Var
Source           StdDev (SD)  (6 * SD)   (%SV)
Total Gage R&R   0.066615     0.39969     32.66
Repeatability    0.035940     0.21564     17.62
Reproducibility  0.056088     0.33653     27.50
Operator         0.030200     0.18120     14.81
Operator*Part    0.047263     0.28358     23.17
Part-To-Part     0.192781     1.15668     94.52
Total Variation  0.203965     1.22379    100.00

Number of Distinct Categories = 4

GRAPHICAL OUTPUT
[Gage R&R (ANOVA) for Response - six panels: Components of Variation
(% Contribution and % Study Var for Gage R&R, Repeat, Reprod,
Part-to-Part); R Chart by Operator (UCL = 0.1252, R-bar = 0.0383,
LCL = 0); Xbar Chart by Operator (UCL = 0.8796, X-bar = 0.8075,
LCL = 0.7354); Response by Part; Response by Operator; Operator * Part
Interaction.]

We will interpret each output in detail...

12-46

ANOVA TABLE

Two-Way ANOVA Table With Interaction
Source           DF   SS       MS        F        P
Part              9   2.05871  0.228745  39.7178  0.000
Operator          2   0.04800  0.024000   4.1672  0.033
Part * Operator  18   0.10367  0.005759   4.4588  0.000
Repeatability    30   0.03875  0.001292
Total            59   2.24913

The ANOVA Table displays the analysis of variance output for the fitted effects: DF
(Degrees of Freedom), SS (Sum of Squares), MS (Mean Square), F-ratio and P (P-Value).
The P column is where we need to focus our attention.
A p-value less than 0.05 indicates that the source of variation can be considered
statistically significant (i.e., active, influential). However, interactions can fool you. If an
interaction is significant (Operator*Part), the two individual participants (Part and/or
Operator) involved should also be considered statistically significant.
In this example, all components are statistically significant.
The decision to reject or accept the measurement system should not be made at this point
without evaluating the rest of the ANOVA outputs.

12-47

VARIANCE COMPONENT &

% CONTRIBUTION TABLE
                            %Contribution
Source           VarComp    (of VarComp)
Total Gage R&R   0.0044375   10.67
Repeatability    0.0012917    3.10
Reproducibility  0.0031458    7.56
Operator         0.0009120    2.19
Operator*Part    0.0022338    5.37
Part-To-Part     0.0371644   89.33
Total Variation  0.0416019  100.00

VarComp (or Variance) column - the variance component contributed by each source.
% Contribution - the percent contribution to the overall variation made by each variance
component: each variance component divided by the total variation, then multiplied by 100.
The percentages in this column add to 100.
As shown in the table under % Contribution, the percent contribution from Part-To-Part
(89.33%) is larger than that of the Total Gage R&R (10.67%). This tells you that most of the
variation is due to differences between parts; very little is due to measurement system error.
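The arithmetic behind the % Contribution column, using the VarComp figures above:

```python
# VarComp figures from the table above; %Contribution is each component's
# variance divided by the total variation, times 100.
varcomp = {
    "Total Gage R&R": 0.0044375,
    "Part-To-Part":   0.0371644,
}
total_variation = 0.0416019

pct = {source: 100 * vc / total_variation for source, vc in varcomp.items()}
for source, p in pct.items():
    print(f"{source}: {p:.2f}%")
```

Because the components are variances, the percentages for all the non-overlapping sources sum to 100.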

12-48

Standard Deviation, Study Variance

& % Study Variance
                              Study Var  %Study Var
Source           StdDev (SD)  (6 * SD)   (%SV)
Total Gage R&R   0.066615     0.39969     32.66
Repeatability    0.035940     0.21564     17.62
Reproducibility  0.056088     0.33653     27.50
Operator         0.030200     0.18120     14.81
Operator*Part    0.047263     0.28358     23.17
Part-To-Part     0.192781     1.15668     94.52
Total Variation  0.203965     1.22379    100.00

StdDev column - the standard deviation for each variance component.
Study Var column - the standard deviation multiplied by 6 (the number of standard deviations
that captures 99.73% of a normal distribution). You can change the multiplier; an older
convention is 5.15, the number of standard deviations needed to capture 99% of your process
measurements. The last entry in the column is usually referred to as the study variation and
estimates the width of the interval you need to capture essentially all of your process
measurements.
% Study Var - the percent of the study variation for each component (the standard deviation for
each component divided by the total standard deviation). These percentages do not add to 100.
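The same arithmetic for Study Var and % Study Var, using the Total Gage R&R and Total Variation rows above:

```python
# SDs from the table above. Study Var multiplies the SD by k (k = 6 in this
# output); %Study Var divides each component SD by the total SD.
k = 6
sd_gage_rr = 0.066615   # Total Gage R&R
sd_total = 0.203965     # Total Variation

study_var = k * sd_gage_rr
pct_study_var = 100 * sd_gage_rr / sd_total
print(f"Study Var = {study_var:.5f}, %Study Var = {pct_study_var:.2f}")
# Note: %Contribution percentages (variances) add to 100, but %Study Var
# percentages (standard deviations) do not, since SDs are not additive.
```

This 32.66 figure is the % R & R index compared against the acceptability criteria later in the chapter.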

12-49

NUMBER OF DISTINCT CATEGORIES = 4


It is the number of distinct categories within the process data that the measurement
system can discern.
When you measure 10 different parts, and Minitab reports that your measurement
system could discern 4 distinct categories, this means that some of the 10 parts are not
different enough to be discerned as being different by your measurement system.
If you want to distinguish a higher number of distinct categories, you need a more precise
gage (e.g., it is difficult to measure a .001 in. dimension using a 3 ft. wooden ruler
that is scaled in .016 in. increments).
Rule of Thumb
When the number of categories is less than 2, the measurement system is of no value
for controlling the process, since one part cannot be distinguished from another.
When the number of categories is 2, the data can be divided into two groups, say
high and low.
When the number of categories is 3, the data can be divided into three groups, say low,
middle and high.
A value of 4 or more denotes an acceptable measurement system.
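For reference, the AIAG MSA manual computes the number of distinct categories as 1.41 times the ratio of the part-to-part standard deviation to the gage R&R standard deviation, truncated to an integer (Minitab uses the square root of 2, which gives the same result here). A sketch using the GAGEAIAG standard deviations from this chapter:

```python
import math

# Standard deviations from the GAGEAIAG output in this chapter.
sd_part = 0.192781     # Part-To-Part
sd_gage_rr = 0.066615  # Total Gage R&R

# AIAG formula: ndc = 1.41 * (part SD / gage R&R SD), truncated to an integer.
ndc = math.trunc(1.41 * sd_part / sd_gage_rr)
print(f"Number of Distinct Categories = {ndc}")
```

This reproduces the "Number of Distinct Categories = 4" reported by Minitab for the GAGEAIAG data.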

12-50

ANOVA GRAPHICAL ANALYSIS

[Gage R&R (ANOVA) for Response - six panels: Components of Variation
(% Contribution and % Study Var for Gage R&R, Repeat, Reprod,
Part-to-Part); R Chart by Operator (UCL = 0.1252, R-bar = 0.0383,
LCL = 0); Xbar Chart by Operator (UCL = 0.8796, X-bar = 0.8075,
LCL = 0.7354); Response by Part; Response by Operator; Operator * Part
Interaction.]

12-51

ACCEPTABILITY CRITERIA

% R & R Indices
Under 10% - Acceptable measurement system.
10% - 30% - May be acceptable based upon importance of application,
cost of measurement device, cost of repair, etc.
Over 30% - Consider not acceptable. Measurement system needs
improvement.

Number of Distinct Categories
1 - Unacceptable. One part cannot be distinguished from another.
2-3 - Generally unacceptable.
4 or more - Recommended.
12-52

VARIABLE GAGE R & R


INDIVIDUAL EXERCISE
Now it's your turn...
In this exercise, do a gage R&R study on GAGE2.mtw file in GBdata.
For the GAGE2 data set, three parts were selected that represent the
expected range of the process variation. Three operators measured the
three parts, three times per part, in a random order.
Analyze and interpret data using the ANOVA method. Be ready to
present your analysis and interpretation in front of the class.

12-53

SOLUTION TO
VARIABLE GAGE R & R EXERCISE
(ANOVA METHOD)
ANOVA METHOD WITH GAGE2 DATA
Perform the following Steps:
1. Open the file GAGE2.mtw in GBData
2. Choose Stat > Quality Tools > Gage Study >
Gage R&R Study (Crossed).
3. In Part numbers:, select C1 Part
In Operators:, select C2 Operator
In Measurement data:, select C3 Response
4. In Method of Analysis, ANOVA will be selected.
5. Click OK

12-54

Gage R&R Study - ANOVA Method

SOLUTION TO
VARIABLE GAGE R & R
EXERCISE
Interpreting the Results:
1. When the p-value for the Operator * Part interaction is > 0.25, Minitab fits
the model without the interaction and uses the reduced model to define the
Gage R&R statistics. This value is shown in the ANOVA table with the
Operator * Part interaction (p = 0.484).
2. As shown in the last table under %Contribution, the percent contribution
from Total Gage R&R (84.36%) is larger than that of Part-to-Part (15.64%).
Thus, most of the variation arises from the measuring system; very little is
due to differences between parts.
3. The Number of Distinct Categories = 1. A 1 tells you the measurement
system is poor; it can't distinguish between parts. Refer to the Number of
Distinct Categories statement for details.

Two-Way ANOVA Table With Interaction

Source            DF      SS       MS        F      P
Part               2   38990  19495.2  2.90650  0.166
Operator           2     529    264.3  0.03940  0.962
Part * Operator    4   26830   6707.4  0.90185  0.484
Repeatability     18  133873   7437.4
Total             26  200222

Two-Way ANOVA Table Without Interaction

Source            DF      SS       MS        F      P
Part               2   38990  19495.2  2.66887  0.092
Operator           2     529    264.3  0.03618  0.965
Repeatability     22  160703   7304.7
Total             26  200222

Gage R&R

Source              VarComp  %Contribution (of VarComp)
Total Gage R&R      7304.67    84.36
  Repeatability     7304.67    84.36
  Reproducibility      0.00     0.00
    Operator           0.00     0.00
Part-To-Part        1354.50    15.64
Total Variation     8659.17   100.00

Source              StdDev (SD)  Study Var (6 * SD)  %Study Var (%SV)
Total Gage R&R          85.4673             512.804             91.85
  Repeatability         85.4673             512.804             91.85
  Reproducibility        0.0000               0.000              0.00
    Operator             0.0000               0.000              0.00
Part-To-Part            36.8036             220.821             39.55
Total Variation         93.0547             558.328            100.00

Number of Distinct Categories = 1
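As a cross-check on the Minitab output, the variance components can be reproduced by hand from the reduced-model mean squares; a minimal Python sketch using the standard ANOVA-method formulas (not Minitab's own code):

```python
import math

# Mean squares from the two-way ANOVA table without interaction (above)
ms_part, ms_oper, ms_repeat = 19495.2, 264.3, 7304.7
n_parts, n_opers, n_reps = 3, 3, 3           # GAGE2 study layout

var_repeat = ms_repeat                        # repeatability = error MS
var_oper = max((ms_oper - ms_repeat) / (n_parts * n_reps), 0.0)  # negative -> 0
var_part = max((ms_part - ms_repeat) / (n_opers * n_reps), 0.0)

var_gage = var_repeat + var_oper              # Total Gage R&R
var_total = var_gage + var_part               # Total Variation

pct_contrib = 100 * var_gage / var_total      # %Contribution: 84.36
sd_gage, sd_part, sd_total = (math.sqrt(v) for v in
                              (var_gage, var_part, var_total))
pct_study_var = 100 * sd_gage / sd_total      # %Study Var: 91.85
ndc = max(1, int(math.sqrt(2) * sd_part / sd_gage))  # floored, minimum 1

print(round(pct_contrib, 2), round(pct_study_var, 2), ndc)  # 84.36 91.85 1
```

The operator component comes out negative and is set to zero, which is why Reproducibility shows 0.00 in the table above.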

12-55

EXERCISE: VARIABLE GAGE R & R

GRAPHICAL ANALYSIS

[Minitab graph: Gage R&R (ANOVA) for Response, six panels. Components of
Variation (% Contribution and %Study Var bars for Gage R&R, Repeat, Reprod,
Part-to-Part); R Chart by Operator (UCL=376.5, R-bar=146.3, LCL=0); Xbar
Chart by Operator (UCL=555.8, X-double-bar=406.2, LCL=256.5); Response by
Part; Response by Operator; Operator * Part Interaction.]

Interpreting the Results:
1. In the Components of Variation chart, the percent contribution from Gage R&R is larger than that of Part-to-Part,
telling you that most of the variation is due to the measurement system (primarily repeatability); little is due to
differences between parts.
2. Most of the points in the Xbar Chart by Operator are inside the control limits, indicating the observed variation is
mainly due to the measurement system.
3. In the Response by Part chart, there is little difference between parts, as shown by the nearly level line.
4. In the Response by Operator chart, there are no differences between operators, as shown by the level line.

12-56

5. The Operator * Part Interaction chart is a visualization of the p-value for the Operator * Part interaction (0.484 in this
case), indicating the differences between each operator/part combination are insignificant compared to the total
amount of variation.

PREPARATION FOR A MEASUREMENT STUDY


GUIDELINES
Plan the approach. Determine if reproducibility is an issue, because
sometimes it can be considered negligible (for example, when pushing a
button).
Select the number of appraisers, number of samples or parts, and number of
repeat readings. Select appraisers who normally operate the instruments.
Establish the frequency of readings based on part configuration and availability
for measurements.
Sample parts from the process that represent its entire operating range.
Ensure that measurement procedures and necessary data collection forms
are available, clearly defined and completely understood by all participants.
Gages should have a discrimination or graduation that is at least one-tenth of
the expected process variation of the characteristic to be read.
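The one-tenth rule at the end of the list can be written as a quick check; a minimal sketch, assuming "expected process variation" is taken as the 6-sigma process spread (that interpretation is an assumption, not stated in the text):

```python
def gage_resolution_ok(resolution, process_sigma, spread_mult=6.0):
    """True if the gage's smallest graduation is at most 1/10 of the
    expected process spread (here assumed to be spread_mult * sigma)."""
    return resolution <= (spread_mult * process_sigma) / 10.0

# Illustrative numbers, not from the text:
print(gage_resolution_ok(0.001, 0.005))  # 0.001 <= 0.003 -> True
print(gage_resolution_ok(0.010, 0.005))  # 0.010 >  0.003 -> False
```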

12-57

ACTUAL CONDUCT OF MEASUREMENT STUDY


GUIDELINES

Ensure that the measuring method of the appraiser and instrument follows
the defined procedure.
Execute measurements in random order to ensure that drift or changes that
occur will be spread randomly throughout the study.
Record the readings.

12-58

10-STEP MEASUREMENT SYSTEM ANALYSIS


STEP 1: Verify the appropriateness of the measurement for judging the characteristic of interest
(e.g., historical experience).
STEP 2: Construct a process map (flowchart) of the measurement process.
STEP 3: List possible sources of variation and their impact on the measurement process on Cause
& Effect Diagram (Fishbone).
STEP 4: Calibrate measurement instrument or verify calibration (accuracy study) has been
performed in the range of interest. For some processes (but not all) linearity or accuracy may also
be of interest.
STEP 5: Carefully plan Gage R & R Study, Run Trials and Collect Data.
STEP 6: Obtain Gage R & R Results through Minitab application.
STEP 7: Analyze and Interpret Results.
STEP 8: Verify consistency of measurement units.
STEP 9: On-Going evaluation: measurement as a process (over time).
STEP 10: Identify the individual(s) responsible for ensuring the quality of the measurement system.

12-59

A 5-STEP MEASUREMENT
IMPROVEMENT PROCESS
Breyfogle's Implementing Six Sigma (2nd edition) has an
excellent 5-step measurement improvement process.
The approach is simple and effective and provides the
methods for identifying and reducing measurement
variation.

THIS BOOK IS A MUST!

12-60

TEAM VARIABLE
GAGE R & R EXERCISE
Working as a team, you will perform a Variable Gage R & R Team Exercise
following these guidelines:
Objective:
To shoot the catapult 10 times (10 different settings) with a target distance of
3-10 feet and obtain measurements using two (2) Distance Recorders.
Materials Needed:
1. Catapult assembly
2. Catapult Ball
3. Tape Measure
4. Variable Gage R & R Form (Refer to the next slide)
Team Composition:
1 Catapult Operator
2 Distance Recorders
1 Data Entry Operator
1 Coordinator (also Ball Retriever)

12-61

TEAM VARIABLE
GAGE R & R EXERCISE
1. The Catapult Operator sets the catapult and launches the ball. The Coordinator retrieves the ball
and marks the point where the ball landed as Sample 1. The Distance Recorders secretly record the
ball's distance (to the nearest 1/16 inch) on their Variable Gage R & R Form 1 under Sample 1 Try 1.
2. The Coordinator instructs the Catapult Operator to make one adjustment on the catapult
(attachment point, tension point, angle, etc.). This now represents Sample 2. The Catapult Operator
launches the ball. The Coordinator retrieves the ball and marks the point where the ball landed as
Sample 2. Both Distance Recorders take their measurements and log them on Form 1 under Sample
2 Try 1. This procedure is repeated until ten (10) different settings are shot, marked and measured.
3. After completing the Try 1 column of Form 1, both Distance Recorders turn in their measurements
to the Data Entry Operator, who transcribes and formats the measurements of the two (2) Distance
Recorders into Minitab.
4. Both Distance Recorders go back and re-measure the 10 different points. Their measurements
are now recorded on Form 2 under the Try 2 column. After completing the form, they submit it to
the Data Entry Operator to be transcribed into Minitab together with the first measurement data.
5. Your team will perform a Variable Gage R & R analysis on the data and interpret the results. Have a
spokesperson ready to present your results to the class.
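The transcription in steps 3-4 can be sanity-checked with a quick count; a Python sketch of the long (Part, Operator, Trial) layout a crossed Gage R&R study expects, using the exercise's 10 settings, 2 recorders and 2 tries:

```python
rows = []
for sample in range(1, 11):           # 10 catapult settings (parts)
    for recorder in ("A", "B"):       # 2 Distance Recorders (operators)
        for trial in (1, 2):          # Try 1 and Try 2
            # the measured distance would go in a fourth column
            rows.append((sample, recorder, trial))

print(len(rows))  # 10 x 2 x 2 = 40 readings to type into Minitab
```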

12-62

12-63

12-64