
Highlight points

FMEA - 4th Edition

P2 - Is an analytical methodology
P2 - Critical and safety related components or processes should be given a higher priority
P3 - FMEA process can be applied when,
New design / technology / process
Modifications to existing design / process
Use of an existing design or process in a new environment / location / application or usage profile
P4 - Mgmt. has the responsibility and ownership for development and maintenance of FMEAs.
P5 - Do not extend or extrapolate FMEA language beyond the team's level of understanding.
P5 - Clear statements, concise terminology and focus on the actual effects are key to the effective identification
and mitigation of risks.
P8 - FMEA approach to address
Potential product or process failure to meet expectations
Potential consequences
Potential causes of failure mode
Application of current tools
Level of risk
Risk reduction
P11 - 4 major customers to be considered,
End-user
OEM Assly. and Mfg. centers ( plants )
Supply chain mfg.
Regulators
P13 - Controls are those activities that prevent or detect the cause of the failure or failure mode.
P13 - Risk is evaluated in 3 ways,
Severity, Assmt. of level of impact of failure on the Cust.
Occurrence, how often the cause of failure may occur
Detection, Assmt. of how well the product or process controls detect the cause of failure or the failure mode
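The three rankings above are each scored 1-10 and are classically combined into a Risk Priority Number (RPN = S x O x D), though the 4th edition discourages relying on RPN thresholds alone. A minimal illustrative sketch in Python:

```python
# Minimal sketch of the three FMEA risk rankings and the classic RPN product.
# Rankings are ordinal 1-10; the 4th-edition manual warns against using RPN
# thresholds by themselves, so treat this as illustrative, not prescriptive.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number = S x O x D, each ranked 1 (best) to 10 (worst)."""
    for name, rank in (("severity", severity),
                       ("occurrence", occurrence),
                       ("detection", detection)):
        if not 1 <= rank <= 10:
            raise ValueError(f"{name} ranking must be between 1 and 10")
    return severity * occurrence * detection

print(rpn(8, 3, 4))  # 96
```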
P16 - DFMEA should,
Be initiated before design concept finalization
Be updated as changes occur or additional information is obtained throughout the phases of product
development
Be fundamentally completed before the production design is released
Be a source of lessons learned for future design iterations
P18 - DFMEA prerequisites,
Assembling a team
Determining scope
Creating block diagrams or P - diagrams
P18 - Block diagrams of the product show the physical and logical relationships between components of the product.
P21 - The P diagram is a structured tool to understand the physics related to the function(s) of the design
P22 - DFMEA functional & interface reqmts.,
Safety, Govt. regulations, Reliability, Loading & Duty cycles, Quiet operations ( Noise, vibration,
harshness ), Fluid retention, Ergonomics, Appearance, Packaging & shipping, Service, DFA & DFM.
P22 - Other tools and info resources
Schematics, Drwgs., BOM, Interrelationship matrices, Interface matrix, Quality function deployment
(QFD), Quality & reliability history
P31 - Potential failure mode : The manner in which a component, subsystem, or a system could potentially fail
to meet or deliver intended function.
P35 - Potential effects of failure : The effects of failure mode on function as perceived by customer
P37 - Severity : Value associated with the most serious effect for a given failure mode
P39 - A Characteristic designated in the design record as special without an associated design failure mode
identified in the DFMEA is an indication of a weakness in the design process.
P39 - Failure mechanism : Physical, Chemical, Electrical, Thermal or other process that results in failure mode.
P41 - Potential cause of failure : Indication of how the design process could allow the failure to occur, described
in terms of something that can be corrected or can be controlled.
P41 - Investigation of causes needs focus on the failure mode and not on the effect(s).
P41 - In preparing DFMEA, assume that the design will be manufactured and assembled to the design intent.
P45 - Occurrence : Is the likelihood that a specific cause / mechanism will occur resulting in the failure mode
within the design life
P49 - Prevention : Eliminate the cause or mechanism of failure or the failure mode from occurring, or reduce its rate of occurrence

Page 1 of 14
Highlight points
P49 - Detection : Identify the existence of a cause, the resulting mechanism of failure or the failure mode, either
by analytical or physical methods, before the item is released for production.
P49 - Prevention controls : Benchmarking studies, Fail-safe design, Design & Matl. Standards, Documentation (
records of best practices, LL etc.) from similar designs, Simulation studies ( analysis of concepts to establish
design reqmts. ), Error-proofing
P49 - Detection controls : Design reviews, Prototype testing, Validation testing, Simulation studies ( validation of
design ), DOE including reliability testing, Mock-up using similar parts.
P51 - Preventing the causes of the failure mode through a design change or design process change is the only way a reduction in the occurrence ranking can be achieved.
P53 - Detection : Rank associated with the best detection control listed in the current design control detection
column.
P53 - The ranking value of 1 is reserved for failure prevention through proven design solutions.
P57 - When severity is 9 or 10, ensure that the risk is addressed through existing design controls or
recommended actions.
P57 - Failure modes with severity 8 & below, consider causes having highest occurrence or detection rankings
P59 - The intent of recommended actions is to improve the design. Identifying these actions should consider reducing rankings in the order of Severity, Occurrence and Detection.
P59 - To reduce Severity ranking, revise design.
P61 - To reduce the Occurrence ranking, remove or control one or more of the causes or mechanisms of the failure mode through a design revision.
P61 - To reduce the Detection ranking, use error / mistake proofing or a design change to a specific part
P64 - In cases where field issues have occurred, the rankings should be revised.
P65 - Output of DFMEA can be used as an input to subsequent product development processes.
P66 - DFMEA defines what the controls are, while DVP&R ( Design verification plan & report ) provides the how, such as acceptance criteria, procedure and sample size.

PFMEA

Diff. between DFMEA & PFMEA ????


P 68 - PFMEA should,
Be initiated before or at the feasibility stage
Be initiated prior to tooling for production
Take into account all Mfg. Opns. from individual components to assemblies
Include all processes within the plant that can impact Mfg. & Assly. Opns. like shipping, receiving,
material transportation, labeling etc.
P 68 - Assumes the product as designed will meet the design intent.
P 68 - Potential failure modes that can occur because of a design weakness may be included in PFMEA
P 68 - PFMEA takes into consideration a product's design characteristics relative to the planned Mfg. or Assly. process to assure that, to the extent possible, the resultant product meets Cust. needs & expectations.
P 68 - Assumes machines and equipment will meet their design intent and therefore are excluded from the
scope.
P 73 - Other tools and info resources
DFMEA, Drwgs. & Design records, Bill of process, Interrelationship ( Characteristic ) matrix, Internal and
external ( Cust. ) NCs, Quality and reliability history.
P 73 - Research information
LLs, Process yield, First time capability, PPM, Process capability studies, Warranty metrics.
P 81 - Assume incoming parts, materials are correct.
P 83 - For the end user, the potential effect should be stated in terms of product or system performance. If the
Cust. is the next Opn. or subsequent Opn(s) / location(s), the potential effects should be stated in terms of
process / operation performance.
P 91 - Where a Spl. Ch. is identified with severity of 9 or 10, the design responsible engineer should be notified.
P 110 - Review when there is a product or process design change and update + Periodic review. Additionally
where field issues, or production issues like disruptions have occurred, rankings should be revised.

SPC - 2nd Edition

Page no. Highlight point


4 Statistical methods have been routinely applied to parts, rather than processes
6 Detection - Tolerates waste, Prevention - Avoids waste
7 Strategy of prevention - Avoid waste by not producing unusable output in first place
9 A process control system is a feedback system. SPC is one type of feedback system
12 If only common causes of variation are present, the output of a process forms a distribution that is stable over time and is predictable
12 If Spl. Causes of variation are present the process output is not stable over time

13 Individual measured values may all be different, but as a group they tend to form a pattern. A distribution is characterized by Location ( central value ), Spread ( span or width of values from smallest to largest ) and Shape ( the pattern of variation - symmetrical, skewed etc. )
13 & 14 Common causes within a process produce a stable and repeatable distribution over time, called a state of statistical control. If only common causes of variation are present and do not change, the output of a process is predictable
14 Special causes - Affect only some of the process output. Intermittent & unpredictable. The process output will not be stable over time
14 Changes in the process distribution due to special causes can be either detrimental or beneficial. When beneficial, they should be understood and made a permanent part of the process.
16 Local actions - 1. Are required to eliminate Spl. Causes of variation 2. Can usually be taken by people close to the process 3. Can correct typically about 15% of process problems
16 Actions on the system - 1. Are required to reduce the variation due to common causes 2. Almost always require mgmt. action for correction 3. Needed to correct typically about 85% of process problems
19 The goal of a process control system is to make predictions about the current and future state of the process
19 Process capability - Variation that comes from common causes.
19 Process perf. - Overall output of a process and how it relates to the reqmts., irrespective of process variation
20 Case 3 ( process meets Tol. reqmts. but not in statistical control ) circumstances - 1. Cust. is insensitive to variation within Specs. 2. The economics involved in acting upon the Spl. Cause exceed the benefit 3. The Spl. Cause has been identified and documented as consistent and predictable.
21 Calculate the capability ( common cause variation ) only after a process has been demonstrated to be in a state of statistical control
24 Stages of continual process improvement cycle - Analyse, Maintain and Improve the process
25 Control charts should be used during the process improvement cycle. Once statistical control has been reached, the process's current level of long-term capability can be assessed.
26 Process improvement through variation reduction typically involves purposefully introducing changes into the process and measuring the effects
28 Control charts - 1. Collection ( gather data & plot ), 2. Control ( calculate trial control limits / identify Spl. Causes of variation and act upon them ), 3. Analysis & Improvement ( quantify common cause variation, take action )
29 Histograms - Graphical representation of the distributional form of a process variation
30 If process control activities assure that no Spl. Cause sources of variation are active, the process is said to be in statistical control, or 'in control'. Such processes are said to be stable, predictable and consistent
33 After all Spl. Causes have been addressed and the process is running in statistical control, the control chart continues as a monitoring tool. Process capability can then be calculated.
34 Once properly computed, and if no changes to the common cause variation of the process occur, the control limits remain valid.
37 & 38 The gains & benefits from control charts are directly related to Management philosophy, Engg. philosophy, Mfg., Quality control and Production
43 Control charts can be used to monitor or evaluate a process
45 A variables chart can explain process data in terms of its process variation, piece-to-piece variation and its process average. X bar - Measure of process average; R - Measure of process variation
46 p chart - Proportion of units nonconforming; np chart - No. of units nonconforming; c chart - No. of nonconformities; u chart - No. of nonconformities per unit
48 Elements of control charts - Appropriate scale, UCL / LCL, Centerline, Subgroup sequence / timeline, Identification of out-of-control plotted values, Event log
53 Preparatory steps - Establish the environment, Define the process, Determine characteristics based on Cust. needs / current & potential problem areas / correlation between characteristics, Define the characteristic, Define the measurement system, Minimize unnecessary variation
53 Attribute control charts are used to monitor and evaluate discrete variables; variable control charts are used to monitor and evaluate continuous variables. The measurement system must be evaluated ( MSA )
55 Steps for using control charts - Data collection, Establish control limits, Interpret for statistical control, Extend control limits for ongoing control
55 Variation within a subgroup represents piece-to-piece variation over a short period of time. Significant variation between subgroups reflects changes in the process that should be investigated
55 A larger subgroup size makes it easier to detect small process shifts

58 The initial scale could be set to twice the difference between the ( expected ) maximum & minimum values
60 Spl. Causes can affect either the process location ( e.g., average, median ) or the variation ( e.g., range, standard deviation ) or both.
60 The average will be used for the location control statistic and the range for the variation control statistic
60 Since the control limits of the location statistic are dependent on the variation statistic, the variation control statistic should be analysed first for stability
60 A process cannot be said to be stable ( in statistical control ) unless both charts have no out-of-control conditions ( indications of Spl. Causes )
61 For sample sizes of less than 7, there is no lower control limit for ranges, i.e. it is zero ( 0 )
62 X bar chart - Operator variation; R chart - Machine variation
65 A change in subgroup sample size would affect the expected average range and the control limits for both ranges and averages
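The X bar / R relationships above can be sketched numerically. A minimal Python example, assuming subgroup size n = 5 and the standard constants A2 = 0.577, D3 = 0, D4 = 2.114 from the constants table ( pp. 181 & 182 ); the subgroup data are made up for illustration:

```python
# Sketch: X-bar / R chart control limits from subgroup data, using the
# standard constants for subgroup size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114).
# Subgroup values here are made-up illustration data.

subgroups = [
    [10.2, 10.1, 9.9, 10.0, 10.3],
    [10.0, 9.8, 10.1, 10.2, 9.9],
    [9.9, 10.0, 10.1, 9.8, 10.2],
]

A2, D3, D4 = 0.577, 0.0, 2.114  # constants for n = 5

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]

xbarbar = sum(xbars) / len(xbars)   # grand average (centerline of X-bar chart)
rbar = sum(ranges) / len(ranges)    # average range (centerline of R chart)

ucl_x = xbarbar + A2 * rbar
lcl_x = xbarbar - A2 * rbar
ucl_r = D4 * rbar                   # for n < 7 the LCL for ranges is zero
lcl_r = D3 * rbar

print(f"X-bar: CL={xbarbar:.3f} UCL={ucl_x:.3f} LCL={lcl_x:.3f}")
print(f"R    : CL={rbar:.3f} UCL={ucl_r:.3f} LCL={lcl_r:.3f}")
```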
67 The larger the average range, the larger the standard deviation
69 A point outside a control limit is generally because,
1. The control limit or plot point is miscalculated or mis-plotted
2. Piece-to-piece variability or the spread of the distribution has increased
3. The measurement system has changed ( diff. appraiser or instrument )
4. The measurement system lacks appropriate discrimination
69 When the ranges are in statistical control, the process spread - the within-subgroup variation - is considered to be stable
71 A run above the average range, or a run up, signifies,
1. Greater spread in output values, which could be from an irregular cause like equpmt. malfunction or loose fixturing, or from a shift in one of the process elements like a new / less uniform material lot
2. A change in the measurement system ( new inspector or gauge )
72 A run below the average range, or a run down, signifies,
1. A smaller spread in output values, which is usually a good condition that should be studied for wider application and process improvement
2. A change in the measurement system, which could mask real perf. changes
72 When the subgroup size becomes smaller ( 5 or less ), a run length of 8 or more could be necessary to signal a decrease in process variability
72 A run relative to the process average is generally a sign that,
1. The process average has changed, and may still be changing
2. The measurement system has changed ( drift, bias, sensitivity etc. )
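The run signals above can be checked mechanically. A minimal sketch with illustrative data; the signalling run length is left as a parameter, since the manual notes it depends on subgroup size:

```python
# Sketch: flag a run of consecutive points on one side of the centerline.
# The run length that signals a shift depends on subgroup size (the manual
# notes 8 or more may be needed when n <= 5), so compare the result against
# whatever run-length rule applies.

def longest_run_one_side(points, centerline):
    """Length of the longest run of points strictly above or below centerline."""
    longest = current = 0
    side = 0  # +1 above, -1 below, 0 on the line
    for p in points:
        s = (p > centerline) - (p < centerline)
        if s != 0 and s == side:
            current += 1
        else:
            current = 1 if s != 0 else 0
        side = s
        longest = max(longest, current)
    return longest

data = [5.1, 5.2, 5.3, 5.1, 5.4, 5.2, 5.3, 5.1, 4.9]
print(longest_run_one_side(data, 5.0))  # 8 points above, then one below -> 8
```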
76 Type I error - Over control, false alarm; Type II error - Under control. A measure of the balance between these two errors is the Average run length ( ARL )
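For a Shewhart chart with +/- 3 sigma limits on normally distributed data, the in-control ARL works out to roughly 370 points between false alarms. A small sketch of that calculation:

```python
# Sketch: in-control average run length (ARL0) for a Shewhart chart is
# 1 / alpha, where alpha is the probability a single in-control point falls
# outside the limits. For 3-sigma limits under normality, alpha ~ 0.0027.

from math import erf, sqrt

def arl0(k_sigma: float) -> float:
    """Average run length between false alarms for +/- k_sigma limits."""
    # P(|Z| > k) for a standard normal, via the error function.
    alpha = 2 * (1 - 0.5 * (1 + erf(k_sigma / sqrt(2))))
    return 1 / alpha

print(round(arl0(3.0)))  # 370 points between false alarms, on average
```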
89 Attributes charts are used to track unacceptable parts by identifying nonconforming items and nonconformities within an item
Chapter 3 not read
127 Location is estimated by the sample mean or the sample median; spread is estimated using the sample range or the sample standard deviation
127 A shift in process location, an increase in process spread, or a combination of both may produce parts out of Specn. limits
127 Indices of process variation only, relative to Specn. - Cp & Pp; indices of process variation and centering combined, relative to Specn. - Cpk & Ppk
131 1. Inherent process variation - Variation due to common causes only
2. Within-subgroup variation - Variation within a subgroup. If the process is in statistical control this variation is a good estimate of the inherent process variation. Can be estimated by R bar / d2 or s bar / c4
3. Between-subgroup variation - If the process is in statistical control this variation should be zero
4. Total process variation - Variation due to both within-subgroup & between-subgroup variation. If the process is not in statistical control the total process variation will include the effect of Spl. Causes as well as common causes. This variation may be estimated by s ( the total sample standard deviation )
5. Process capability - The 6 sigma range of inherent process variation, for statistically stable processes only, where sigma = R bar / d2 or s bar / c4
6. Process performance - The 6 sigma range of total process variation, where sigma is usually estimated by s - the total process standard deviation
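The two sigma estimates behind items 5 and 6 can be sketched as follows ( made-up data; d2 = 2.326 for subgroups of 5, per the constants table ):

```python
# Sketch: the two sigma estimates behind capability vs performance.
# sigma_within = R-bar / d2 (d2 = 2.326 for n = 5, from the constants table);
# sigma_total  = sample standard deviation s of all individual readings.
import statistics

D2 = 2.326  # d2 constant for subgroup size n = 5

subgroups = [
    [10.2, 10.1, 9.9, 10.0, 10.3],
    [10.0, 9.8, 10.1, 10.2, 9.9],
    [9.9, 10.0, 10.1, 9.8, 10.2],
]

rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
sigma_within = rbar / D2                      # basis of process capability
all_values = [x for s in subgroups for x in s]
sigma_total = statistics.stdev(all_values)    # basis of process performance

print(f"sigma_within = {sigma_within:.4f}, sigma_total = {sigma_total:.4f}")
```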

131 If the process is in statistical control, the process capability will be very close to the process performance. A large diff. between capability and performance indicates the presence of a Spl. Cause
132 Cp - Capability index. Compares the process capability to the max. allowable variation as indicated by the Tol. Cp is not impacted by process location. Applicable only for bilateral tolerances
132 Cpk - Capability index. Takes the process location as well as the capability into account. For bilateral Tol., Cpk will always be less than or equal to Cp. Cpk = Cp only when the process is centered
133 Pp - Performance index. Compares the process performance to the max. allowable variation as indicated by the Tol. Pp is not impacted by process location.
133 Ppk - Performance index. Takes the process location as well as the performance into account. For bilateral Tol., Ppk will always be less than or equal to Pp. Ppk = Pp only when the process is centered
134 If the process is in statistical control, the process capability will be very close to the process performance. A large diff. between the C & P indices indicates the presence of a Spl. Cause.
137 Calculating Cp has no meaning for unilateral Tol. Cpk can be less than, equal to or greater than Cp
CPU = ( USL - X double bar ) / 3 sigma, where sigma = R bar / d2
CPL = ( X double bar - LSL ) / 3 sigma, where sigma = R bar / d2
137 Calculating Pp has no meaning for unilateral Tol.
138 Ppk
PPU = ( USL - X double bar ) / 3s
PPL = ( X double bar - LSL ) / 3s
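The four indices can be sketched directly from these formulas. Illustrative numbers only; sigma_within would come from R bar / d2 and sigma_total from s:

```python
# Sketch of the four indices as defined above. sigma_within would come from
# R-bar / d2 (capability); sigma_total from the sample standard deviation s
# (performance). Spec limits and sigmas below are made-up illustration values.

def cp(usl, lsl, sigma):
    """Cp / Pp: width of the tolerance vs the 6-sigma spread (bilateral only)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk / Ppk: min of the upper (CPU/PPU) and lower (CPL/PPL) indices."""
    cpu = (usl - mean) / (3 * sigma)
    cpl = (mean - lsl) / (3 * sigma)
    return min(cpu, cpl)

usl, lsl, mean = 10.6, 9.4, 10.1
sigma_within, sigma_total = 0.15, 0.18

print(f"Cp  = {cp(usl, lsl, sigma_within):.2f}")   # 1.33
print(f"Cpk = {cpk(usl, lsl, mean, sigma_within):.2f}")
print(f"Pp  = {cp(usl, lsl, sigma_total):.2f}")
print(f"Ppk = {cpk(usl, lsl, mean, sigma_total):.2f}")
```

Note how Cpk comes out below Cp because the process is off-center, matching the "Cpk will always be less than or equal to Cp" statement above.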
Not read from Chapter IV Section B ( Page 139 )
171 Over-adjustment - The practice of treating each deviation from the target as if it were the result of the action of a special cause of variation in the process.
181 & 182 Table of constants and Formulas for control charts
183 Table of constants and Formulas for control charts - Attributes charts
187 Clarity on Normal probability plot
189 Formula contradicting
190 A large discrepancy between Cpk and Ppk would indicate the presence of excessive between-subgroup variation
190 A large discrepancy between Cp and Cpk ( or between Pp & Ppk ) would indicate a process centering problem
192 Common cause - A source of variation that affects all the individual values of the process output being studied. This is the source of the inherent process variation
193 Control limit - A line ( or lines ) on a control chart used as a basis for judging the stability of a process
193 Control statistic - The statistic used in developing and using a control chart
193 Correlation - The degree of relationship between variables
193 Correlation matrix - A matrix of all possible correlations of factors under consideration
194 Detection - A reactive ( past-oriented ) strategy that attempts to identify unacceptable output after it has been produced and then separate it from acceptable output
194 Distribution - A way of describing the output of a stable system of variation, in which individual values as a group form a pattern that can be described in terms of its location, spread and shape. Location is commonly expressed by the mean or average, or by the median. Spread is expressed in terms of the standard deviation or the range of a sample.
195 Location - A general term for the typical values of central tendency of a distribution
195 Mean - A Measure of location. The average of values in a group of measurements
195 Median - A measure of location. The middle value in a group of measurements, when arranged from lowest to highest.
195 Mode - A measure of location defined by the value that occurs most frequently in a distribution or data set ( there may be more than one mode within one data set )
195 Moving range - A measure of process spread. The diff. between the highest and lowest value among two or more successive samples.
196 Normal distribution - A continuous, symmetrical, bell-shaped frequency distribution for variables data that is the basis for the control charts for variables.
197 Over-adjustment - Tampering, taking action on a process when the process is in statistical control.
197 Pareto chart - A simple tool for problem solving that involves ranking all potential problem areas or sources of variation according to their contribution to cost or to total variation.
197 Prevention - A proactive ( future-oriented ) strategy that improves quality and productivity by directing analysis and action towards correcting the process itself.
198 Problem solving - The process of moving from symptoms to causes ( special or common ) to actions.

198 Process spread - The extent to which the distribution of individual values of the process characteristic varies
199 Range - A measure of spread.
200 Run - A consecutive no. of points consistently increasing or decreasing, or above or below the centerline.
200 Shape - A general concept for the overall pattern formed by a distribution of values.
200 Special cause - A source of variation that affects only some of the output of the process; often intermittent and unpredictable.
200 Specification - The Engg. Reqmt. for judging acceptability of a particular characteristic.
201 Spread - The expected span of values from smallest to largest in a distribution
201 Stability - The absence of Spl. Causes of variation.
201 Standard deviation - A measure of the spread of the process output or the spread of a sampling statistic from the process
202 Subgroup - One or more observations or measurements used to analyze the perf. of a process
202 Type I error - Rejecting an assumption that is true.
202 Type II error - Failing to reject an assumption that is false.
203 Variation - The inevitable diff. among individual outputs of a process.

MSA - 4th Edition

Page no. Highlight point


3 Use of measurement data is to determine if a significant relationship exists between two or more variables
3 Quality of measurement data - The statistical properties most commonly used to characterize the quality of data are the bias & variance of the measurement system. The property called bias refers to the location of the data relative to a reference ( master ) value, and the property called variance refers to the spread of the data
3 Quality of measurement data - One of the most common reasons for low quality data is too much variation. Much of the variation in a set of measurements may be due to the interaction between the measurement system and its environment
4 Cust. Approval is required for MSA methods not covered in this manual
4 The process of assigning the numbers is defined as the measurement process, and the value assigned is defined as the measurement value
5 Measurement system - The collection of instruments or gauges, standards, operations, methods, fixtures, software, personnel, environment and assumptions used to quantify a unit of measure or fix assessment to the feature characteristic being measured
5 Standard : Acptd. Basis for comparison, Criteria for acceptance,
6 Reference value : Used as the surrogate for the True value
6 True value : Actual value of an artifact
6 Location variation :
1. Accuracy - Closeness to the true value or to an accepted reference value
2. Bias - Diff. between the observed average of measurements and the reference value ( true value ) on the same characteristic on the same part
3. Stability - The change in bias over time, or ( P 52 ) the total variation in the measurements obtained with a measurement system on the same master or parts when measuring a single characteristic over an extended time period. A stable measurement process is in statistical control with respect to location
4. Linearity - The change in bias over the normal operating range, or ( P 52 ) the diff. of bias throughout the expected operating ( measurement ) range of the equpmt.
7 Width variation :
1. Precision - Closeness of repeated readings to each other. The random error component of the measurement system
2. Repeatability - Variation in measurements obtained with one measuring instrument when used several times by an appraiser while measuring the identical characteristic on the same part. The variation in successive ( short-term ) trials under fixed and defined conditions of measurement - fixed part, instrument, standard, method, optr., env. and assumptions. Commonly referred to as EV ( equpmt. variation ) or within-system variation. ( P 54 ) Also referred to as within-appraiser variability. This is the inherent variation or capability of the equpmt. itself; repeatability is a common cause.

7 3. Reproducibility - Variation in the average of the measurements made by diff. appraisers using the same gauge or measuring instrument when measuring an identical characteristic on the same part. For product and process qualification, the error may be appraiser, env. ( time ), or method. Commonly referred to as AV ( appraiser variation ) or between-system ( conditions ) variation. ( P 55 ) This holds for manual instruments influenced by the skill of the operator, but not for measurement processes where the operator is not a major source of variation
4. GRR or Gauge R & R - The combined estimate of measurement system repeatability and reproducibility, or ( P 56 ) the variance equal to the sum of the within-system and between-system variances. P 57 - GRR = Reproducibility + Repeatability
5. Measurement system capability - Short-term estimate of measurement system variation
6. Measurement system performance - Long-term estimate of measurement system variation
8 7. Sensitivity - Smallest input that results in a detectable ( usable ) output signal. Responsiveness of the measurement system to changes in the measured feature. Determined by gauge design ( discrimination ), inherent quality ( OEM ), in-service maintenance and the operating condition of the instrument and standard
8. Consistency - The degree of change of repeatability over time, or ( P 57 ) the diff. in the variation of the measurements taken over time. It may be viewed as repeatability over time. A consistent measurement process is in statistical control with respect to width ( variability )
9. Uniformity - The change in repeatability over the normal operating range, or ( P 58 ) the diff. in variation throughout the operating range of the gauge
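The "GRR = Reproducibility + Repeatability" sum above is a sum of variances, not of standard deviations - a point worth making explicit. A minimal sketch with made-up EV / AV values:

```python
# Sketch: "GRR = Reproducibility + Repeatability" is a statement about
# variances - the standard deviations combine in quadrature, not by simple
# addition. EV / AV values below are made-up illustration numbers.
from math import sqrt

ev = 0.30  # repeatability (equipment variation), as a standard deviation
av = 0.40  # reproducibility (appraiser variation), as a standard deviation

grr_variance = ev**2 + av**2       # variances add
grr = sqrt(grr_variance)           # combined GRR standard deviation

print(f"{grr:.2f}")  # 0.50, not 0.70: sqrt(0.09 + 0.16)
```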
8 System variation :
1. Capability - Variability in readings taken over a short period of time, or ( P 58 ) an estimate of the combined variation of measurement errors ( random & systematic ) based on a short-term assmt.
2. Performance - Variability in readings taken over a long period of time, or ( P 59 ) the net effect of all significant and determinable sources of variation over time. Based on total variation. ( P 59 ) Performance quantifies the long-term assmt. of combined errors ( random & systematic )
3. Uncertainty - An estimated range of values about the measured value in which the true value is believed to be contained, or ( P 60 ) a parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand
9 NIST ( The National Institute of Standards and Technology ) is the NMI ( National Measurement Institute ) for the US. NIST's primary responsibility - Provide measurement services and maintain measurement standards that assist US industry in making traceable measurements, which ultimately assist in the trade of products and services
9 Traceability - The property of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties
15 Sources of variation - The measurement system is impacted by random and systematic sources of variation
16 Sources of variation - Standard, Work-piece, Instrument, Person / Procedure, Environment ( SWIPE ), used to represent the six essential elements of a generalized measuring system to assure attainment of reqd. objectives
18 Measurement system error is the combination of errors quantified by linearity, uniformity, repeatability and reproducibility.
19 Effect on product decisions - A good part will sometimes be called bad ( Type I error, producer's risk or false alarm ); a bad part will sometimes be called good ( Type II error, consumer's risk or miss rate )
20 Effect on product decisions - The goal is to maximize correct decisions; two choices - improve the production process and improve the measurement system
20 Effect on process decisions - The basic relationship between the actual and the observed process variation: Observed process variance = Actual process variance + Variance of the measurement system
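The P 20 relationship can be sketched numerically ( illustrative sigmas ):

```python
# Sketch of the P 20 relationship: the variation you observe is the actual
# process variation inflated by measurement system variation (variances add).
from math import sqrt

sigma_process = 0.50  # actual process standard deviation (illustrative)
sigma_msa = 0.20      # measurement system standard deviation (illustrative)

var_observed = sigma_process**2 + sigma_msa**2
sigma_observed = sqrt(var_observed)

print(f"observed sigma = {sigma_observed:.4f}")  # 0.5385, inflated above 0.50
```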

23 Process setup / control ( Funnel experiment ) - Rules of the Funnel experiment:
Rule 1 - Make no adjustment or take no action unless the process is unstable
Rule 2 - Adjust the process by an equal amount and in the opp. direction from where the process was last measured to be
Rule 3 - Reset the process to the target. Then adjust the process by an equal amount and in the opp. direction from the target
Rule 4 - Adjust the process to the point of the last measurement
Rule 1 is the best choice to produce min. variation
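The funnel rules can be simulated to show why Rule 1 wins. A sketch comparing Rule 1 ( no adjustment ) with Rule 2, which feeds each observed deviation back as an opposite adjustment and thereby roughly doubles the variance of a stable process:

```python
# Sketch: simulating Funnel Rules 1 and 2 for a stable process on target 0
# with unit-sigma noise. Under Rule 2 each measurement's deviation is fed
# back as an opposite adjustment, which roughly doubles the output variance
# relative to Rule 1 (no adjustment).
import random
import statistics

random.seed(42)
N = 20000

# Rule 1: never adjust a stable process - output is just the noise.
rule1 = [random.gauss(0, 1) for _ in range(N)]

# Rule 2: after each measurement, shift the setting by an equal amount in the
# opposite direction from where the process was last measured to be.
setting = 0.0
rule2 = []
for _ in range(N):
    measured = setting + random.gauss(0, 1)
    rule2.append(measured)
    setting -= measured  # compensate the full observed deviation from target 0

v1 = statistics.pvariance(rule1)
v2 = statistics.pvariance(rule2)
print(f"Rule 1 variance ~ {v1:.2f}, Rule 2 variance ~ {v2:.2f}")
```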
29 Measurement source development steps - Datum coordination, Prerequisites & assumptions, Detailed engg. concept, PM considerations, Specifications, Evaluate quotations, Qualification at supplier, Shipment, Qualification at Cust., Documentation delivery
32 Preventive maintenance - Lubrication, Vibration analysis, Probe integrity, Parts replacement, Draining air filters etc.
35 Qualification at supplier - A formal MSA is performed by the supplier. Points to be considered are,
1. Objective of the preliminary MSA study
2. Qty. of pieces, trials and operators in the study
3. Use of supplier personnel vs Cust.-supplied personnel
4. Necessary training for personnel
41 Measurement issues to be addressed when evaluating a measurement system:
1. The measurement system must demonstrate adequate sensitivity
2. The measurement system must be stable
3. The statistical properties ( errors ) must be consistent over the expected range and adequate for the purpose of measurement
41 A measurement system must be re-evaluated for its intended purpose
42 Types of measurement system variation - Measurement system errors are classified as bias, repeatability, reproducibility, stability and linearity
42 Objective of a measurement study - Obtain information relative to the amount and types of measurement variation associated with a measurement system when it interacts with its env. A measurement system study provides the following,
1. Criteria to accept new measuring equpmt.
2. Comparison of one measuring device against another
3. Basis for evaluating a gauge suspected of being deficient
4. Comparison of measuring equpmt. before and after repair
5. Required component ( element ) for calculating process variation, and the acceptability level for a production process
6. Info necessary to develop a gauge performance curve ( GPC ), which indicates the probability of accepting a part of some true value
43 Standard - Basis for comparison
43 Influences affecting measurement uncertainty - Env., Procedures, Personnel etc.
43 & 44 Standards,
1. Ref. standard - A standard from which measurements made at that location are derived
2. Measurement & test equpmt. ( M & TE ) - All of the measurement instruments, measurement standards, reference materials, and auxiliary apparatus that are necessary to perform a measurement
3. Calibration standard - Serves as a reference in the perf. of routine calibrations
4. Master - Used as a reference in a calibration process
5. Working standard - Used to perform routine measurements within the lab ; not intended as a calibration standard, but may be utilized as a transfer standard
45 Further reference values,
1. Check standard - A measurement object that closely resembles what the process is designed to measure
2. Ref. value ( also known as accepted reference or master value ) - Value of an object or group that serves as an agreed-upon reference for comparison
3. True value - The actual measure of the part
46 Discrimination - Amount of change from a reference value that an instrument can detect and faithfully indicate. Also referred to as readability or resolution
47 Discrimination - If the range chart shows four possible values for the range within control limits and more than one-fourth of the ranges are zero, then the measurements are being made with inadequate discrimination
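The range-chart rule of thumb above is easy to mechanise; the helper below is a hypothetical illustration (the manual's full rule also flags one to three distinct values as inadequate, which is included here):

```python
def inadequate_discrimination(ranges):
    """MSA rule of thumb applied to subgroup ranges from an R chart:
    discrimination is inadequate if the ranges take only 1-3 distinct
    values within the control limits, or take 4 distinct values with
    more than one fourth of them equal to zero.
    Assumes the ranges passed in are already those within the limits."""
    distinct = len(set(ranges))
    if distinct <= 3:
        return True
    if distinct == 4:
        zero_fraction = sum(1 for r in ranges if r == 0) / len(ranges)
        return zero_fraction > 0.25
    return False

# Ranges rounded to a coarse gauge resolution: 4 values, half of them zero
coarse = [0, 0, 1, 0, 2, 0, 1, 3, 0, 0, 1, 2]
# Ranges from a gauge with finer resolution: many distinct values
fine = [0.2, 0.5, 1.1, 0.8, 1.4, 0.3, 0.9, 0.6]
assert inadequate_discrimination(coarse) is True
assert inadequate_discrimination(fine) is False
```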
50 Measurement process variation - Total measurement variation is usually described as a normal distribution
50 Location variation - The measurement process must be in a state of statistical control, otherwise the accuracy of the process has no meaning
Page 8 of 14
Highlight points
Page no. Highlight point
51 Bias is the measure of the systematic error of the measurement system. It is the contribution to the total error comprised of the combined effects of all sources of variation, known or unknown
Possible causes for excessive Bias,
1. Instrument needs calibration
2. Worn instrument, equpmt. or fixture
3. Worn or damaged master, error in master
4. Improper calibration or use of the setting master
5. Poor quality instrument - Design or conformance
6. Linearity error
7. Wrong gauge for the application
8. Diff. measurement method - Setup, loading, clamping, technique
9. Measuring the wrong characteristic
10. Distortion ( gauge or part )
11. Env. - Temp., Humidity, Vibration, Cleanliness
12. Violation of an assumption, error in an applied constant
13. Application - Part size, Posn., Optr. skill, Fatigue, Observation error ( readability, parallax )

Possible causes for Instability,
1. Instrument needs calibration ; reduce the calibration interval
2. Worn instrument, equpmt. or fixture
3. Normal aging or obsolescence
4. Poor maint. - Air, Power, Hyd., Filters, Corrosion, Rust, Cleanliness
5. Worn or damaged master, error in master
6. Improper calibration or use of the setting master
7. Poor quality instrument - Design or conformance
8. Instrument design or method lacks robustness
9. Diff. measurement method - Setup, loading, clamping, technique
10. Distortion ( gauge or part )
11. Env. drift - Temp., Humidity, Vibration, Cleanliness
12. Violation of an assumption, error in an applied constant
13. Application - Part size, Posn., Optr. skill, Fatigue, Observation error ( readability, parallax )

Possible causes for Linearity error,
1. Instrument needs calibration ; reduce the calibration interval
2. Worn instrument, equpmt. or fixture
3. Poor maint. - Air, Power, Hyd., Filters, Corrosion, Rust, Cleanliness
4. Worn or damaged master(s), error in master(s) - Min / Max.
5. Improper calibration ( not covering the operating range ) or use of the setting master(s)
6. Poor quality instrument - Design or conformance
7. Instrument design or method lacks robustness
8. Wrong gauge for the application
9. Diff. measurement method - Setup, loading, clamping, technique
10. Distortion ( gauge or part ) ; changes with part size
11. Env. - Temp., Humidity, Vibration, Cleanliness
12. Violation of an assumption, error in an applied constant
13. Application - Part size, Posn., Optr. skill, Fatigue, Observation error ( readability, parallax )
54 Precision - Precision is to repeatability what linearity is to Bias
Possible causes for poor Repeatability,
1. Within-part ( sample ) - Form, Posn., Surface finish, Taper, Sample consistency
2. Within-instrument - Repair, Wear, Equpmt. or fixture failure, Poor quality or maint.
3. Within-standard - Quality, Class, Wear
4. Within-method - Variation in setup, Technique, Zeroing, Holding, Clamping
5. Within-appraiser - Technique, Posn., Lack of Exp., Manipulation skill or Trg., Feel, Fatigue
6. Within-Env. - Short-cycle fluctuations in Temp., Humidity, Vibration, Lighting, Cleanliness
7. Violation of an assumption - Stable, proper operation
8. Instrument design or method lacks robustness, poor uniformity
9. Wrong gauge for the application
10. Distortion ( gauge or part ), lack of rigidity
11. Application - Part size, Posn., Observation error ( readability, parallax )

Potential sources of Reproducibility error,
1. Between parts ( samples ) - Av. diff. when measuring types of parts using the same instrument, operators and method
2. Between instruments - Av. diff. using different instruments for the same parts, operators and Env.
3. Between standards - Av. influence of diff. setting standards in the measurement process
4. Between methods - Av. diff. caused by changing point densities, manual Vs automated systems, zeroing, holding or clamping methods etc.
5. Between appraisers - Av. diff. between appraisers caused by Trg., Technique, Skill and Exp. This is the recommended study for product & process qualification and a manual measuring instrument
6. Between Env. - Av. diff. in measurements over time caused by Env. cycles. This is the most common study for highly automated systems in product and process qualifications
7. Violation of an assumption in the study
8. Instrument design or method lacks robustness
9. Optr. Trg. effectiveness
10. Application - Part size, Posn., Observation error ( readability, parallax )
Factors that affect sensitivity,
1. Ability to dampen ( reduce vibration of ) an instrument
2. Skill of Optr.
3. Repeatability of the measuring device
4. Ability to provide drift-free Opn. in the case of electronic or pneumatic gauges
5. Conditions under which the instrument is being used, such as ambient air, dirt, humidity

Factors that impact consistency ( Spl. causes of variation ),
1. Temp. of parts
2. Warm-up required for electronic equpmt.
3. Worn equpmt.

Factors impacting uniformity,
1. Fixture allows smaller / larger sizes to posn. differently
2. Poor readability on scale
3. Parallax in reading
58 Capability = Bias + GRR
59 Performance = Capability + Stability + Consistency
61 Bias & Repeatability are independent of each other
63 Measurement uncertainty - A term that is used internationally to describe the quality of a measurement value
63 Uncertainty is the value assigned to the measurement result that describes, within a defined level of confidence, the range expected to contain the true measurement result
True measurement = Observed measurement ( result ) ± U
U - Expanded uncertainty of the measurand and measurement result
64 Measurement uncertainty is simply an estimate of how much a measurement may vary at the time of measurement. It should consider all significant sources of measurement variation in the measurement process plus significant errors of calibration, master standards, method, Env. and others not previously considered in the measurement process. It is appropriate to periodically re-evaluate the uncertainty related to a measurement process to assure the continued accuracy of the estimate
64 The major diff. between uncertainty and MSA is that the MSA focus is on understanding the measurement process, determining the amount of error in the process, and assessing the adequacy of the measurement system for product and process control. MSA promotes understanding and improvement ( variation reduction )
64 Measurement traceability - The property of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties
65 & 66 Measurement problem analysis - Identify issue, Identify team, Flow chart of measurement system and process, Cause and effect diagram, Plan-Do-Study-Act ( PDSA ), Possible solution and proof of correction, Institutionalize the change
66 Possible solution and proof of correction - This can be done using some form of DOE to validate the solution
73 Preparation for MSA,
1. Plan the approach ; determine by using engg. judgment, visual observation, or a gauge study whether there is an appraiser influence in calibrating or using the instrument
2. Determine the no. of appraisers / sample parts / repeat readings by considering,
2a. Criticality of Dimn. - Critical Dimns. require more parts and/or trials
2b. Part configuration - Bulky or heavy parts dictate fewer samples and more trials
2c. Cust. reqmts.
3. Appraisers should be selected from those who normally operate the instrument
4. Selection of sample parts
73 For product control situations where the measurement result determines conformance or nonconformance to the feature Spec. ( 100% Inspn. or Sampling ), use % GRR to Tol.
73 For process control situations where the measurement result determines process stability, direction and compliance with the natural process variation ( SPC, Process monitoring, Capability & Process improvement ), use % GRR to process variation
74 When an independent estimate of process variation is not av., or to determine process direction and continued suitability of the measurement system for process control, sample parts must be selected from the process and represent the entire production operating range. The variation in the sample parts ( PV ) selected for the MSA study is used to calculate the total variation ( TV ) of the study. The TV index ( % GRR to TV ) is an indicator of process direction and continued suitability of the measurement system for process control
74 To minimise the likelihood of misleading results,
1. Measurements should be made in a random order ; appraisers should be unaware of which numbered part is being checked, to avoid possible bias. The person conducting the study should know which numbered part is being checked and record the data accordingly
2. In reading the equpmt., measurement values should be recorded to the practical limit of the instrument discrimination. For analog devices, if the smallest scale graduation is 0.0001, then the measurement results should be recorded to 0.00005
3. The study should be managed and observed by a person who understands the importance of conducting a reliable study
74 The no. of appraisers, trials and parts should remain constant between phase 1 & 2 test programs, or between sequential phase 2 tests, for common measurement systems
77 Analysis of results - Assly. or fixture error, Location error, Width error
The measurement system should be stable before any additional analysis
Location error - Determined by analyzing Bias & Linearity
Width error - Total variation = sqrt ( Process variation² + Measurement system variation² )
78 Number of distinct categories = Greater than or equal to 5
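The width-error formula and the distinct-categories criterion above combine into a few lines of arithmetic; this sketch uses hypothetical GRR and part-variation figures and the usual ndc formula ( ndc = 1.41 × PV / GRR, truncated to an integer ):

```python
import math

def width_error_metrics(grr, pv):
    """Combine gauge R&R (GRR) and part variation (PV) into total
    variation and the usual acceptance indices.  Inputs are standard
    deviations (or 6-sigma spreads, as long as both use the same basis)."""
    tv = math.sqrt(grr ** 2 + pv ** 2)      # TV = sqrt(PV^2 + GRR^2)
    pct_grr = 100.0 * grr / tv              # % GRR to total variation
    ndc = int(1.41 * pv / grr)              # number of distinct categories
    return tv, pct_grr, ndc

# Hypothetical study results: GRR = 0.2, part variation = 1.0
tv, pct_grr, ndc = width_error_metrics(grr=0.2, pv=1.0)
# ndc >= 5 indicates the system can distinguish enough part groups
assert ndc >= 5
```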
83 Eg. Test procedures - Major sources of variation are due to instrument, person and method ( procedure )
85 Guidelines for determining Stability
1. Obtain a sample and establish its reference value(s) relative to a traceable standard. If one is not available, select a production part that falls in the mid-range of the production measurements and designate it as the master sample for stability analysis. It may be desirable to have master samples for the low end, the high end and the mid-range of expected measurements
2. On a periodic basis ( daily, weekly ) measure the master sample 3 to 5 times. The readings should be taken at diff. times, to represent when the measurement system is actually being used and to account for warm-up, ambient or other factors that may change during the day
3. Plot the data on an X bar & R or X bar & s control chart in time order
86 Analysis of results - If the measurement process is stable, the data can be used to determine the Bias of the measurement system. Also, the standard deviation of the measurements can be used as an approximation of the measurement system's repeatability
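That analysis step reduces to two statistics; the readings below are hypothetical daily measurements of a master sample with reference value 10.00, and the control-chart check is assumed to have already shown stability:

```python
import statistics

reference = 10.00
# Hypothetical periodic readings of the master sample (stable process)
readings = [10.02, 9.98, 10.01, 9.99, 10.03, 10.00, 9.97, 10.02,
            10.01, 9.99, 10.00, 10.02, 9.98, 10.01, 10.00]

bias = statistics.mean(readings) - reference   # systematic offset
repeatability = statistics.stdev(readings)     # approximation only
```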
87 Guidelines for determining Bias
The independent sample method for determining whether the bias is acceptable uses the test of hypothesis : H0 : bias = 0 and H1 : bias ≠ 0
The bias or linearity error of a measurement system is acceptable if it is not statistically significantly diff. from zero when compared to the repeatability
1. Obtain a sample and establish its reference value(s) relative to a traceable standard. If one is not available, select a production part that falls in the mid-range of the production measurements and designate it as the master sample for bias analysis. Measure the part n ≥ 10 times in the gauge or tool room and compute the average of the n readings. Use this av. as the reference value. It may be desirable to have master samples for the low end, the high end and the mid-range of expected measurements
2. Have a single appraiser measure the sample n ≥ 10 times in the normal manner
87 Analysis of results
3. Determine the bias of each reading : Bias = xi - reference value
4. Plot the bias data as a histogram relative to the reference value
5. Compute the av. bias of the n readings
6. Compute the repeatability standard deviation
88 7. Determine if the repeatability is acceptable by calculating
% EV = 100 ( EV / TV ) = 100 ( Repeatability std. deviation / TV )
where TV is based on the expected process variation
If % EV is large, then the measurement system may be unacceptable
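Steps 3 to 7 above can be sketched as follows; the ten readings, the reference value and the TV are all illustrative numbers, not data from the manual:

```python
import statistics

def bias_study(readings, reference, tv):
    """Independent-sample bias study: per-reading bias, average bias,
    repeatability standard deviation (EV) and % EV against the expected
    total variation TV."""
    biases = [x - reference for x in readings]      # step 3
    avg_bias = statistics.mean(biases)              # step 5
    ev = statistics.stdev(readings)                 # step 6
    pct_ev = 100.0 * ev / tv                        # step 7
    return avg_bias, pct_ev

# Hypothetical: one appraiser measures the master sample n = 10 times
readings = [6.02, 5.98, 6.01, 6.03, 5.99, 6.00, 6.02, 5.97, 6.01, 6.00]
avg_bias, pct_ev = bias_study(readings, reference=6.00, tv=0.5)
```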
95 If the bias is statistically non-zero, possible causes are,
1. Error in the master or reference value ; check the mastering procedure
2. Worn instrument
3. Instrument made to the wrong Dimn.
4. Instrument measuring the wrong characteristic
5. Instrument not calibrated properly ; review the calibration procedure
6. Instrument used improperly by the appraiser ; review the measurement instructions
7. Instrument correction algorithm incorrect
98 For the measurement system linearity to be acceptable, the Bias = 0 line must lie entirely within the confidence bands of the fitted line
101 If the measurement system has a linearity problem, it needs to be recalibrated to achieve zero bias
101 Guidelines for determining R & R
A variable gauge study can be performed using the Range method, the Average & Range method ( including the control chart method ) and the ANOVA method
The ANOVA method is preferred because it measures the Optr.-to-part interaction gauge error, whereas the Range and Average & Range methods do not include this variation
All methods ignore within-part variation ( Roundness, Diametric taper, Flatness etc. )
102 Reproducibility is usually interpreted as appraiser variation
If all the parts are handled, fixtured and measured by the same equpmt. ( in-process measurement system ) then reproducibility is zero, i.e. only a repeatability study is needed. If, however, multiple fixtures are used, then the reproducibility is the between-fixture variation
102 Range method
Provides a quick approximation of measurement variability
Provides only the overall picture of the measurement system
Typically used as a quick check to verify that the GRR has not changed
This approach has the potential to detect an unacceptable measurement system 80% of the time with a sample size of 5 and 90% of the time with a sample size of 10
Typically uses 2 appraisers and 5 parts for the study
Both appraisers measure each part once
The range for each part is the absolute diff. between the measurements obtained by the two appraisers
The sum of the ranges is found and the av. range ( R bar ) is calculated
Total measurement variability ( GRR ) = R bar / d2* ( refer d2* at Appendix C with m = 2 and g = no. of parts )
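The range method above fits in a few lines; the readings here are illustrative, and the d2* constant is taken as roughly 1.19 ( the Appendix C value for m = 2, g = 5 ), with an assumed process standard deviation for the % GRR step:

```python
# Range-method sketch: 2 appraisers, 5 parts, one reading each
a = [0.85, 0.75, 1.00, 0.45, 0.50]   # appraiser A (illustrative)
b = [0.80, 0.70, 0.95, 0.55, 0.60]   # appraiser B (illustrative)

ranges = [abs(x - y) for x, y in zip(a, b)]   # per-part range
r_bar = sum(ranges) / len(ranges)             # average range
d2_star = 1.19                                # m = 2, g = 5 (Appendix C, approx.)
grr = r_bar / d2_star                         # total measurement variability
process_sigma = 0.0777                        # assumed process std. deviation
pct_grr = 100.0 * grr / process_sigma         # % GRR to process variation
```

With these numbers the % GRR comes out above 30%, i.e. the gauge would be judged unacceptable and a fuller study would be warranted.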
103 Average & Range method
Provides an estimate of both repeatability & reproducibility
Allows the measurement system's variation to be decomposed into two separate components, repeatability & reproducibility
Variation due to the interaction between the appraiser and the part / gauge is not accounted for in this analysis
104 Average & Range method - Conducting the study
1. Obtain a sample of n ≥ 10 parts that represent the actual or expected range of process variation
2. Refer to the appraisers as A, B & C and number the parts 1 to 10 so that the numbers are not visible to the appraisers
3. Calibrate the gauge if this is part of the normal measurement system procedure. Let appraiser A measure 10 parts in a random order and enter the results
4. Let appraisers B & C measure the same 10 parts without seeing each other's readings, then enter the results
5. Repeat the cycle using a diff. random order of measurement. Enter the data in the appropriate column
6. Steps 4 & 5 may be changed when large part size or simultaneous unavailability of parts makes it necessary ( refer manual for details )
7. An alternative method may be used when appraisers are on diff. shifts ( refer manual )
106 Average & Range method - Average chart
The averages of each part are plotted with the part no. as an index
This can assist in determining consistency between appraisers
The area within the control limits represents the measurement sensitivity
Since the parts used in the study represent the process variation, approx. one half or more of the averages should fall outside the control limits. If the data show this pattern, then the measurement system should be adequate to detect part-to-part variation and can provide useful info for analyzing and controlling the process
If less than half fall outside the control limits, then either the measurement system lacks adequate effective resolution or the sample does not represent the expected process variation
108 Average & Range method - Range chart
Used to determine whether the process is in control
Spl. causes need to be identified and removed before a measurement study can be relevant
Ranges of the multiple readings on each part are plotted on a Std. range chart, including the average range and control limits
If all ranges are in control, all appraisers are doing the same job
If one appraiser is out of control, the method used differs from the others
If all appraisers have some out-of-control ranges, the measurement system is sensitive to appraiser technique and needs improvement to obtain useful data
Stability is determined by a point or points beyond the control limit, or by within-appraiser or within-part patterns
108 The range chart can assist in determining,
1. Statistical control with respect to repeatability
2. Consistency of the measurement process between appraisers for each part
112 Error chart
Data from an MSA can be analyzed by running error charts of the individual deviations from the accepted reference values. The individual deviation or error for each part is calculated by,
Error = Observed value - Ref. value, or
Error = Observed value - Av. measurement of the part
114 X - Y plot of averages by size determines,
1. Linearity ( if the reference value is used )
2. Consistency in linearity between appraisers
115 Comparison X - Y plots
If there were perfect agreement between appraisers, the plotted points would describe a straight line through the origin at 45° to the axis
116 & 117 Numerical calculations - Follow the manual
120 Analysis of results - Numerical
If a negative value is calculated under the square root sign, the appraiser variation ( AV ) defaults to zero
120 There are generally 4 diff. approaches to determine the process variation which is used to analyse the acceptability of the measurement variation ( refer manual for details - page 121 )
122 The sum of the % consumed by each factor will not equal 100% ( each factor under % total variation - ref. page 119 )
123 Analysis of variance ( ANOVA ) method - Not read from page 123 to 129
Attribute measurement systems study
131 Attribute systems like visual standards may result in 5 to 7 classifications, Eg. Very good / Good / Fair / Poor / Very poor
131 Methods of hypothesis test analysis and signal detection theory do not quantify measurement system variability ; they should be used only with the consent of the Cust.
136 To determine the level of agreement between appraisers, use a Kappa study, which measures the agreement between the evaluations of two raters when both are rating the same object
137 A general rule of thumb is that values of Kappa greater than 0.75 indicate good to excellent agreement ( with a max. Kappa = 1 ) ; values less than 0.4 indicate poor agreement
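The two-rater agreement measure above is Cohen's kappa; this is a minimal sketch with hypothetical good / bad ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same objects:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of objects rated identically
    po = sum(1 for x, y in zip(rater_a, rater_b) if x == y) / n
    # Chance agreement from each rater's category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Hypothetical ratings of the same 10 parts by two appraisers
a = ["good", "good", "bad", "good", "bad", "good", "bad", "bad", "good", "good"]
b = ["good", "good", "bad", "bad", "bad", "good", "bad", "good", "good", "good"]
kappa = cohens_kappa(a, b)
# By the rule of thumb above, a kappa between 0.4 and 0.75 is only
# marginal agreement, so this pair of appraisers would need review.
```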
141 As the process capability improves, the required random sample for the attribute study should become larger
Not read from page 143 to 150
Destructive measurement systems
153 When the part ( characteristic ) being measured is destroyed by the act of measurement, the process is known as a destructive measurement system. Eg. Destructive weld testing, Plating testing, Salt spray / Humidity booth testing, Impact testing, Mass spectroscopy
Not read from page 155 to 166
Recognizing the effect of excessive within-part variation
167 Unaccounted within-part variation affects the estimate of repeatability and reproducibility
Eg., of within-part variation ; Roundness, Concentricity, Taper, Flatness, Profile, Cylindricity
Not read from page 169 to 212
