
Statistical quality control (SQC) is the term used to describe the set of statistical tools used by quality professionals to manage the quality of goods & services.


History
In 1924, Walter A. Shewhart of the Bell Telephone Laboratories laid the foundation for statistical quality control.
Since then, the area of SQC has been enriched by the work of numerous statisticians, quality philosophers, and researchers.
Shewhart developed the control chart in 1924 & the concept of a state of statistical control.
Shewhart kept improving & working on control charts & in 1931 he published a book on statistical quality control, Economic Control of Quality of Manufactured Product.
W. Edwards Deming invited Shewhart to speak at the Graduate School of the US Department of Agriculture & served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture.
Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII.
Deming traveled to Japan during the Allied occupation & met with the Union of Japanese Scientists & Engineers (JUSE) in an effort to introduce SQC methods to Japanese industry.
SQC CATEGORIES
1) Descriptive statistics
2) Acceptance sampling
3) Statistical process control

QUALITY IMPROVEMENT EFFORTS HAVE THEIR FOUNDATION IN STATISTICS.
Statistical quality control involves the collection, tabulation, analysis, interpretation, and presentation of numerical data.

SOURCES OF VARIATION
Variation exists in all processes.
Variation can be categorized as either:
Common (random) causes of variation
Causes that we cannot identify; unavoidable, e.g. slight differences in process variables like diameter, weight, service time, temperature
Assignable causes of variation
Causes that can be identified and eliminated: poor employee training, a worn tool, a machine needing repair
TYPES OF DATA
ATTRIBUTE DATA:
When the quality characteristic being investigated is noted by either its presence or absence and then classified as defective or non-defective.
Attribute data are discrete and tell whether the characteristic conforms to specifications.
Most quality characteristics in the service industry are attributes.
Example: Conforming or non-conforming
Pass or fail
Good or bad
VARIABLE DATA:
The characteristics are actually measured and can take on a value along a continuous scale.
Generally expressed with statistical measures such as averages and standard deviations.
Sophisticated instruments (e.g. calipers) are used.
Continuous data are concerned with the degree of conformance to specifications.
Example: Length, Weight
Descriptive Statistics
Descriptive Statistics are used to describe quality characteristics & relationships.
Gives numerical and graphic procedures to summarize a collection of data in a clear and
understandable way
Measures of Central Tendency
Describes the center position of the data
Mean, median, mode
Measures of Dispersion
Describes the spread of the data
Range, variance, standard deviation

MEAN
The mean is the arithmetic average of the scores
The calculation of the mean considers both the number of scores and their value
Arithmetic mean: $\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i$, where $x_i$ is one observation, $\sum$ denotes the sum of all scores, and $N$ is the number of observations.
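A minimal Python sketch of this calculation, using hypothetical observation values:

```python
# Arithmetic mean: sum all observations and divide by their count.
observations = [4, 6, 9, 3, 7]  # hypothetical data

mean = sum(observations) / len(observations)
print(mean)  # 5.8
```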
MEDIAN
Is the number that is in the middle of a set of numbers.
To find the median, the data points must first be sorted into either ascending or descending
numerical order.
The position of the median value can then be calculated using the formula: median location $= \frac{N + 1}{2}$.
MODE
Is the number in a set of numbers that occurs the most often.
Arrange the set of numbers in order from least to greatest.
The number that occurs the most often is the mode of that set of numbers.
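A short Python sketch for the median and mode on hypothetical data; the sort-then-pick logic mirrors the steps above:

```python
from collections import Counter

data = [4, 6, 9, 3, 7, 6]  # hypothetical data

# Median: sort the values and take the middle one
# (average of the two middle values when the count is even).
ordered = sorted(data)
n = len(ordered)
if n % 2 == 1:
    median = ordered[n // 2]
else:
    median = (ordered[n // 2 - 1] + ordered[n // 2]) / 2

# Mode: the value that occurs most often.
mode = Counter(data).most_common(1)[0][0]

print(median, mode)  # 6.0 6
```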
Mode vs. Median vs. Mean
When there is only one mode and the distribution is fairly symmetrical, the three measures (as well as others to be discussed) will have similar values.
However, when the underlying distribution is not symmetrical, the three measures of central
tendency can be quite different.
MEASURES OF DISPERSION
The measures of dispersion include the range, variance, and standard deviation. These
numerical values describe the amount of spread, or variability, that is found among the data:
Closely grouped data have relatively small values, and more widely spread-out data have larger
values.
The closest possible grouping occurs when the data have no dispersion (all data are the same
value); in this situation, the measure of dispersion will be zero.
Deviation from the mean: a deviation from the mean, $x - \bar{x}$, is the difference between the value of $x$ and the mean $\bar{x}$. Each individual value of $x$ deviates from the mean by an amount equal to $x - \bar{x}$. This deviation is zero when $x$ is equal to the mean $\bar{x}$; the deviation is positive when $x$ is larger than $\bar{x}$ and negative when $x$ is smaller than $\bar{x}$.


RANGE
The range is the difference between the largest and smallest observations in a set of data.
Example: In {4, 6, 9, 3, 7} the lowest value is 3, and the highest is 9.
So the range is 9-3 = 6.
VARIANCE
Variance is the mean of the squared deviation scores
The larger the variance is, the more the scores deviate, on average, away from the mean
The smaller the variance is, the less the scores deviate, on average, from the mean
Calculate the deviation from the mean for every observation.
Square each deviation
Add them up and divide by the number of observations
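The three steps above, written out as a small Python sketch (population variance, hypothetical data):

```python
data = [4, 6, 9, 3, 7]  # hypothetical data
mean = sum(data) / len(data)

# Steps 1-2: deviation from the mean for every observation, squared.
squared_deviations = [(x - mean) ** 2 for x in data]

# Step 3: add them up and divide by the number of observations.
variance = sum(squared_deviations) / len(data)
print(variance)  # 4.56
```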


STANDARD DEVIATION
The standard deviation is the most useful and the most popular measure of dispersion. Just as the arithmetic mean is the most used of all the averages, the standard deviation is the best of all measures of dispersion.
Standard deviation is the positive square root of the mean-square deviations of the observations
from their arithmetic mean
Population: $\sigma = \sqrt{\frac{\sum_{i=1}^{N}(x_i - \mu)^2}{N}}$

Sample: $s = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n - 1}}$
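A hedged Python sketch contrasting the population and sample forms above (the data values are hypothetical, and the whole list is treated as the population for the first formula):

```python
import math

data = [4, 6, 9, 3, 7]  # hypothetical data
mean = sum(data) / len(data)
sum_sq_dev = sum((x - mean) ** 2 for x in data)

# Population standard deviation: divide by N.
sigma = math.sqrt(sum_sq_dev / len(data))

# Sample standard deviation: divide by n - 1.
s = math.sqrt(sum_sq_dev / (len(data) - 1))

print(round(sigma, 3), round(s, 3))  # 2.135 2.387
```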
ACCEPTANCE SAMPLING
Acceptance sampling is a method used to accept or reject product based on a random sample of
the product.
Not a Quality Control Tool
The purpose of acceptance sampling is to sentence lots (accept or reject) rather than to
estimate the quality of a lot.
Acceptance sampling plans do not improve quality. The nature of sampling is such that
acceptance sampling will accept some lots and reject others even though they are of the same
quality.
The most effective use of acceptance sampling is as an auditing tool to ensure
incoming supply meets specifications
output from a process meets specifications
PRODUCER'S RISK
Producer's risk refers to the probability of rejecting a good lot.
The producer's risk (α) is the risk that the sampling plan will fail to verify an acceptable lot's quality and, thus, reject it (a type I error). Most often the producer's risk is set at 0.05, or 5 percent.
In order to calculate this probability there must be a numerical definition as to what constitutes a good lot.
AQL (Acceptable Quality Level)
The numerical definition of a good lot.
The ANSI/ASQC standard describes AQL as the maximum percentage or proportion of
nonconforming items or number of nonconformities in a batch that can be considered
satisfactory as a process average
The quality level desired by the consumer.
The producer of the item strives to achieve the AQL, which typically is written into a
contract or purchase order.
For example, a contract might call for a quality level not to exceed one defective unit in
10,000, or an AQL of 0.0001
CONSUMER'S RISK
Consumer's risk refers to the probability of accepting a bad lot, i.e. a lot at the worst level of quality that the consumer can tolerate.
The probability of accepting a lot with LTPD quality is the consumer's risk (β), or the type II error of the plan.
A common value for the consumer's risk is 0.10, or 10 percent.
LTPD (Lot Tolerance Percent Defective)
The numerical definition of a bad lot
The ANSI/ASQC standard describes LTPD as the percentage or proportion of
nonconforming items or non-conformities in a batch for which the customer wishes the
probability of acceptance to be a specified low value.
DESIGNING SAMPLING PLANS
Operationally, three values need to be determined before a sampling plan can be implemented
(for single sampling plans):
N = the number of units in the lot
n = the number of units in the sample
c = the maximum number of nonconforming units in the sample for which the lot will be
accepted.
All sampling plans are devised to provide a specified producer's and consumer's risk.
However, it is in the consumer's best interest to keep the AVERAGE NUMBER OF ITEMS INSPECTED (ANI) to a minimum because that keeps the cost of inspection low.
Sampling plans differ with respect to ANI.
Three often-used attribute sampling plans are the single-sampling plan, the double-
sampling plan, and the sequential-sampling plan.
SAMPLING PLANS
Single sampling plans:
Most popular and easiest to use
Two numbers n and c
If there are more than c defectives in a sample of size n the lot is rejected; otherwise it is
accepted
The single-sampling plan is easy to use but usually results in a larger ANI than the other
plans.
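A small Python sketch of how a single-sampling plan might be evaluated using the binomial model discussed later; the plan (n = 50, c = 2) and the assumed lot fraction defective p are hypothetical:

```python
from math import comb

def prob_accept_single(n, c, p):
    """Probability of acceptance: P(d <= c) for d ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan: sample 50 items, accept the lot if 2 or fewer are defective.
print(round(prob_accept_single(n=50, c=2, p=0.02), 3))  # ~0.92
```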
Double sampling plans:
A sample of size n1 is selected.
If the number of defectives in it (say d1) is less than or equal to c1, i.e. d1 ≤ c1, then the lot is accepted.
Else, another sample of size n2 is drawn.
If the cumulative number of defectives in both samples is more than c2, i.e. (d1 + d2) > c2, the lot is rejected; otherwise it is accepted.
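A sketch of this double-sampling decision logic in Python; the plan parameters are hypothetical, the second sample is drawn lazily, and the early-reject check when d1 already exceeds c2 is a common convention rather than something stated above:

```python
def double_sampling_decision(d1, c1, c2, draw_second_sample):
    """Decide on a lot under a double-sampling plan.

    d1: defectives found in the first sample (size n1)
    draw_second_sample: callable returning the defectives d2 found in the
        second sample (size n2); only invoked when a second sample is needed.
    """
    if d1 <= c1:
        return "accept"               # first sample is good enough
    if d1 > c2:
        return "reject"               # already too many defectives
    d2 = draw_second_sample()         # otherwise take the second sample
    return "accept" if d1 + d2 <= c2 else "reject"

# Hypothetical plan with c1 = 1, c2 = 3; the lambda stands in for inspecting n2 items.
print(double_sampling_decision(d1=2, c1=1, c2=3, draw_second_sample=lambda: 1))  # accept
```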
SEQUENTIAL SAMPLING
A further refinement of the double-sampling plan is the sequential-sampling plan, in which the
consumer randomly selects items from the lot and inspects them one by one.
Items are sampled one at a time and the cumulative number of defectives is recorded at each
stage of the process.
Based on the value of the cumulative number of defectives there are three possible decisions at
each stage:
Reject the lot
Accept the lot
Continue sampling
The analyst plots the total number of defectives against the cumulative sample size, and if the number of defectives is less than a certain acceptance number, the consumer accepts the lot.
If the number is greater than another number, the rejection number, the consumer rejects the lot.
If the number is somewhere between the two, another item is inspected.
In general, the sequential-sampling plan may reduce the ANI to 50 percent of that required by a
comparable single-sampling plan and, consequently, save substantial inspection costs.
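A sketch of that decision loop in Python; the acceptance and rejection numbers are modelled here as hypothetical functions of the cumulative sample size (in a real plan they come from the plan's design), and the lot data are made up:

```python
def sequential_sampling(items, accept_number, reject_number):
    """Inspect items one at a time, tracking the cumulative number of defectives.

    items: iterable of booleans, True meaning the item is defective.
    accept_number, reject_number: callables mapping the cumulative sample size
        to the current acceptance / rejection numbers.
    """
    defectives = 0
    for n, is_defective in enumerate(items, start=1):
        defectives += is_defective
        if defectives <= accept_number(n):
            return "accept", n
        if defectives >= reject_number(n):
            return "reject", n
        # otherwise: continue sampling
    return "undecided", n

# Hypothetical linear boundaries and lot, for illustration only.
accept = lambda n: -1.2 + 0.05 * n
reject = lambda n: 1.5 + 0.05 * n
lot = [False] * 10 + [True] + [False] * 40
print(sequential_sampling(lot, accept, reject))
```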
OPERATING CHARACTERISTIC CURVE
The operating characteristic (OC) curve is typically used to represent the four parameters of the sampling plan (producer's risk, consumer's risk, AQL, and LTPD), with the percent defective in the lot (p) on the x axis and the probability of acceptance on the y axis.
Note: if the sample is less than 20 units, the binomial distribution is used to build the OC curve; otherwise the Poisson distribution is used.
IN CASE OF PRODUCER'S RISK
EXAMPLE: If the process runs normally at 1% defective and the probability of acceptance at that level is 95%, the producer's risk is 1 - 0.95 = 0.05, or 5%.
IN CASE OF CONSUMER'S RISK
EXAMPLE: Suppose the consumer wants the product rejected at 6% defective; if the probability of acceptance at that level is 10%, the consumer's risk is 10% for the defined percent defective.


AQL: Defined so there is a high probability of acceptance
RQL: Defined so there is a low probability of acceptance
The Operating characteristic curve is a picture of a sampling plan. Each sampling plan has a
unique OC curve.
The sample size and acceptance number define the OC curve and determine its shape.
The OC curve shows the probability of acceptance for various values of incoming quality.
There are three probability distributions that may be used to find the probability of acceptance.
The hypergeometric distribution
The binomial distribution
The Poisson distribution
Although the hypergeometric may be used when the lot sizes are small, the binomial and
Poisson are by far the most popular distributions to use when constructing sampling plans.
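A Python sketch of how points on an OC curve might be computed with the binomial model, and how the producer's and consumer's risks could be read off it; the plan (n = 50, c = 2) and the AQL/LTPD values are hypothetical:

```python
from math import comb

def prob_accept(n, c, p):
    """Binomial probability of acceptance: P(d <= c) with d ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

n, c = 50, 2              # hypothetical single-sampling plan
aql, ltpd = 0.01, 0.08    # hypothetical quality levels

# A few points on the OC curve: incoming quality p vs. probability of acceptance.
for p in [0.005, 0.01, 0.02, 0.04, 0.08]:
    print(f"p = {p:.3f}  Pa = {prob_accept(n, c, p):.3f}")

producers_risk = 1 - prob_accept(n, c, aql)   # alpha: rejecting a lot at AQL quality
consumers_risk = prob_accept(n, c, ltpd)      # beta: accepting a lot at LTPD quality
print(producers_risk, consumers_risk)
```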
Hypergeometric Distribution
The hypergeometric distribution is used to calculate the probability of acceptance of a sampling plan when the lot is relatively small.
It can be defined as the true basic probability distribution of attribute data, but the calculations can become quite cumbersome for large lot sizes.
The hypergeometric takes into consideration that each sample taken affects the probability associated with the next sample.
This is called sampling without replacement.
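A sketch of the same acceptance probability computed with the hypergeometric distribution for a small lot; the lot size, the number of defectives in the lot, and the plan are hypothetical:

```python
from math import comb

def prob_accept_hypergeometric(N, D, n, c):
    """P(d <= c) when n items are drawn without replacement from a lot of
    N items containing D defectives."""
    return sum(
        comb(D, d) * comb(N - D, n - d) / comb(N, n)
        for d in range(min(c, D, n) + 1)
    )

# Hypothetical small lot: 50 items, 3 of them defective; sample 10, accept if d <= 1.
print(round(prob_accept_hypergeometric(N=50, D=3, n=10, c=1), 3))  # ~0.902
```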
Binomial Distribution
The binomial distribution is used when the lot is very large.
For large lots, the non-replacement of the sampled product does not affect the probabilities.
The binomial assumes that the probabilities associated with all samples are equal.
This is sometimes referred to as sampling with replacement although the parts are not
physically replaced.
The binomial is used extensively in the construction of sampling plans.
The sampling plans in the Dodge-Romig Sampling Tables were derived from the binomial
distribution.
POISSON DISTRIBUTION
The Poisson distribution is used for sampling plans involving the number of defects or defects
per unit rather than the number of defective parts.
It is also used to approximate the binomial probabilities involving the number of defective parts
when the sample (n) is large and p is very small.
When n is large and p is small, the Poisson distribution formula may be used to approximate the
binomial.
Using the Poisson to calculate probabilities associated with various sampling plans is relatively
simple because the Poisson tables can be used.
Specific sampling charts can also be used.
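A short sketch comparing the exact binomial acceptance probability with its Poisson approximation (mean np); the values of n, c, and p are hypothetical:

```python
from math import comb, exp, factorial

n, c, p = 200, 3, 0.01   # hypothetical: large n, small p

binomial = sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

mu = n * p               # Poisson mean = np
poisson = sum(exp(-mu) * mu**d / factorial(d) for d in range(c + 1))

print(round(binomial, 4), round(poisson, 4))  # the two values should be close
```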
Average Outgoing Quality
To check whether the performance of the plan is what we want, we can calculate the plan's average outgoing quality (AOQ), which is the expected proportion of defects that the plan will allow to pass.
We assume that all defective items in the lot will be replaced with good items if the lot is
rejected and that any defective items in the sample will be replaced if the lot is accepted. This
approach is called rectified inspection.
The equation for AOQ is
AOQ = p(Pa)(N - n) / N
where
n = sample size
N = lot size
Pa = probability of accepting the lot
p = true proportion defective of the lot
AVERAGE OUTGOING QUALITY LIMIT (AOQL)
The analyst can calculate AOQ to estimate the performance of the plan over a range of possible
proportion defectives in order to judge whether the plan will provide an acceptable degree of
protection.
The maximum value of the average outgoing quality over all possible
values of the proportion defective is called the average outgoing quality limit (AOQL).
If the AOQL seems too high, the parameters of the plan must be modified until an acceptable
AOQL is achieved.
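A sketch of the AOQ calculation above and a simple grid search for the AOQL; the lot size and the single-sampling plan are hypothetical:

```python
from math import comb

def prob_accept(n, c, p):
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def aoq(p, n, c, N):
    """AOQ = p * Pa * (N - n) / N under rectified inspection."""
    return p * prob_accept(n, c, p) * (N - n) / N

N, n, c = 2000, 50, 2    # hypothetical lot size and single-sampling plan

# AOQL: the maximum AOQ over a grid of possible incoming proportion defectives.
grid = [i / 1000 for i in range(1, 201)]        # p from 0.001 to 0.200
aoql = max(aoq(p, n, c, N) for p in grid)
print(round(aoql, 4))
```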
Statistical Process Control
A methodology for monitoring a process to identify special causes of variation and signal the
need to take corrective action when appropriate
SPC relies on control charts
Statistical process control may be used when a large number of similar items, such as Mars bars, jars of jam, or car doors, are being produced.
Each process is subject to variability.
It is not possible to put exactly the same amount of jam in every jar or to make every car door of
exactly the same width.
The variability present when a process is running well is called the short term or inherent
variability. It is usually measured by the standard deviation.
Most processes will have a target value; e.g. too much jam in a jar will be uneconomical for the manufacturer, but too little will lead to customer complaints. A car door which is too wide or too narrow may not close smoothly.
Hence the purpose of SPC is to provide a signal when the process mean has moved away from
the target.
A second purpose is to give a signal when item to item variability has increased.
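As a closing illustration, a minimal Python sketch of the kind of calculation an SPC control chart relies on: an x-bar chart with three-sigma limits around a target. The target, inherent standard deviation, subgroup size, and sample means are all hypothetical numbers.

```python
# Hypothetical process: target fill weight and known short-term (inherent)
# standard deviation, monitored with subgroup means of size 5.
target = 454.0          # grams of jam per jar (hypothetical target)
sigma = 2.0             # hypothetical inherent standard deviation
subgroup_size = 5

# Three-sigma control limits for the subgroup mean.
limit = 3 * sigma / subgroup_size ** 0.5
ucl, lcl = target + limit, target - limit

subgroup_means = [453.8, 454.5, 455.1, 456.9, 453.2]   # hypothetical data
for i, mean in enumerate(subgroup_means, start=1):
    status = "signal: investigate" if not lcl <= mean <= ucl else "in control"
    print(f"subgroup {i}: mean = {mean}  ({status})")
```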
